From patchwork Mon Jun 17 15:55:11 2019
X-Patchwork-Submitter: Jerin Jacob Kollanukkaran
X-Patchwork-Id: 54852
X-Patchwork-Delegate: thomas@monjalon.net
From:
To: , Thomas Monjalon, Jerin Jacob, Nithin Dabilpuram, "Vamsi Attunuru"
CC: Pavan Nikhilesh
Date: Mon, 17 Jun 2019 21:25:11 +0530
Message-ID: <20190617155537.36144-2-jerinj@marvell.com>
X-Mailer: git-send-email 2.21.0
In-Reply-To: <20190617155537.36144-1-jerinj@marvell.com>
References: <20190601014905.45531-1-jerinj@marvell.com>
 <20190617155537.36144-1-jerinj@marvell.com>
Subject: [dpdk-dev] [PATCH v3 01/27] common/octeontx2: add build infrastructure and HW definition

From: Jerin Jacob 

Add the make- and meson-based build infrastructure along with the HW
definition header files. This patch adds a skeleton otx2_mbox.c file to
make sure all header files are intact; subsequent patches add content to
otx2_mbox.c.

This patch also updates the CONFIG_RTE_MAX_VFIO_GROUPS value to 128, as
the system can have up to 128 PFs/VFs. For the octeontx2 meson build
target, CONFIG_RTE_MAX_VFIO_GROUPS is already defined as 128, so no
additional change is required.
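For context (not part of this patch): the per-LF register macros added
below encode the queue/LF index in the upper bits of the offset, so a
driver forms an MMIO address simply by OR-ing the index into the base
offset. A minimal sketch of the intended use, assuming an already mapped
LF BAR region; the helper name, the lf_base pointer and the raw volatile
read are illustrative only, the series adds its own read/write helpers in
later patches:

    #include <stdint.h>
    #include "hw/otx2_nix.h"

    /* Read the queue-interrupt status register of interrupt index 'qint'
     * of a NIX LF mapped at 'lf_base'. NIX_LF_QINTX_INT(a) expands to
     * (0xc10ull | (uint64_t)(a) << 12), i.e. each index selects the next
     * 4 KB register group.
     */
    static inline uint64_t
    nix_lf_qint_int_read(const volatile uint8_t *lf_base, unsigned int qint)
    {
            return *(const volatile uint64_t *)
                    (lf_base + NIX_LF_QINTX_INT(qint));
    }

The NPA and NPC per-LF registers defined further down follow the same
indexed-offset pattern.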
Signed-off-by: Jerin Jacob Signed-off-by: Pavan Nikhilesh --- config/defconfig_arm64-octeontx2-linuxapp-gcc | 3 + drivers/common/Makefile | 2 + drivers/common/meson.build | 2 +- drivers/common/octeontx2/Makefile | 32 + drivers/common/octeontx2/hw/otx2_nix.h | 1379 +++++++++++++++++ drivers/common/octeontx2/hw/otx2_npa.h | 305 ++++ drivers/common/octeontx2/hw/otx2_npc.h | 472 ++++++ drivers/common/octeontx2/hw/otx2_rvu.h | 212 +++ drivers/common/octeontx2/hw/otx2_sso.h | 209 +++ drivers/common/octeontx2/hw/otx2_ssow.h | 56 + drivers/common/octeontx2/hw/otx2_tim.h | 34 + drivers/common/octeontx2/meson.build | 23 + drivers/common/octeontx2/otx2_common.h | 34 + drivers/common/octeontx2/otx2_mbox.c | 5 + drivers/common/octeontx2/otx2_mbox.h | 10 + .../rte_common_octeontx2_version.map | 4 + mk/rte.app.mk | 2 + 17 files changed, 2783 insertions(+), 1 deletion(-) create mode 100644 drivers/common/octeontx2/Makefile create mode 100644 drivers/common/octeontx2/hw/otx2_nix.h create mode 100644 drivers/common/octeontx2/hw/otx2_npa.h create mode 100644 drivers/common/octeontx2/hw/otx2_npc.h create mode 100644 drivers/common/octeontx2/hw/otx2_rvu.h create mode 100644 drivers/common/octeontx2/hw/otx2_sso.h create mode 100644 drivers/common/octeontx2/hw/otx2_ssow.h create mode 100644 drivers/common/octeontx2/hw/otx2_tim.h create mode 100644 drivers/common/octeontx2/meson.build create mode 100644 drivers/common/octeontx2/otx2_common.h create mode 100644 drivers/common/octeontx2/otx2_mbox.c create mode 100644 drivers/common/octeontx2/otx2_mbox.h create mode 100644 drivers/common/octeontx2/rte_common_octeontx2_version.map diff --git a/config/defconfig_arm64-octeontx2-linuxapp-gcc b/config/defconfig_arm64-octeontx2-linuxapp-gcc index 9eae84538..f20da2442 100644 --- a/config/defconfig_arm64-octeontx2-linuxapp-gcc +++ b/config/defconfig_arm64-octeontx2-linuxapp-gcc @@ -16,3 +16,6 @@ CONFIG_RTE_LIBRTE_VHOST_NUMA=n # Recommend to use VFIO as co-processors needs SMMU/IOMMU CONFIG_RTE_EAL_IGB_UIO=n + +# Max supported NIX LFs +CONFIG_RTE_MAX_VFIO_GROUPS=128 diff --git a/drivers/common/Makefile b/drivers/common/Makefile index 87b8a59a4..e7abe210e 100644 --- a/drivers/common/Makefile +++ b/drivers/common/Makefile @@ -23,4 +23,6 @@ ifeq ($(CONFIG_RTE_LIBRTE_COMMON_DPAAX),y) DIRS-y += dpaax endif +DIRS-y += octeontx2 + include $(RTE_SDK)/mk/rte.subdir.mk diff --git a/drivers/common/meson.build b/drivers/common/meson.build index a50934108..7b5e566f3 100644 --- a/drivers/common/meson.build +++ b/drivers/common/meson.build @@ -2,6 +2,6 @@ # Copyright(c) 2018 Cavium, Inc std_deps = ['eal'] -drivers = ['cpt', 'dpaax', 'mvep', 'octeontx', 'qat'] +drivers = ['cpt', 'dpaax', 'mvep', 'octeontx', 'octeontx2', 'qat'] config_flag_fmt = 'RTE_LIBRTE_@0@_COMMON' driver_name_fmt = 'rte_common_@0@' diff --git a/drivers/common/octeontx2/Makefile b/drivers/common/octeontx2/Makefile new file mode 100644 index 000000000..e5737532a --- /dev/null +++ b/drivers/common/octeontx2/Makefile @@ -0,0 +1,32 @@ +# SPDX-License-Identifier: BSD-3-Clause +# Copyright(C) 2019 Marvell International Ltd. 
+# + +include $(RTE_SDK)/mk/rte.vars.mk + +# +# library name +# +LIB = librte_common_octeontx2.a + +CFLAGS += $(WERROR_FLAGS) +CFLAGS += -I$(RTE_SDK)/drivers/common/octeontx2 + +ifneq ($(CONFIG_RTE_ARCH_64),y) +CFLAGS += -Wno-int-to-pointer-cast +CFLAGS += -Wno-pointer-to-int-cast +endif + +EXPORT_MAP := rte_common_octeontx2_version.map + +LIBABIVER := 1 + +# +# all source are stored in SRCS-y +# +SRCS-y += otx2_mbox.c + +LDLIBS += -lrte_eal +LDLIBS += -lrte_ethdev + +include $(RTE_SDK)/mk/rte.lib.mk diff --git a/drivers/common/octeontx2/hw/otx2_nix.h b/drivers/common/octeontx2/hw/otx2_nix.h new file mode 100644 index 000000000..d5ad98834 --- /dev/null +++ b/drivers/common/octeontx2/hw/otx2_nix.h @@ -0,0 +1,1379 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2019 Marvell International Ltd. + */ + +#ifndef __OTX2_NIX_HW_H__ +#define __OTX2_NIX_HW_H__ + +/* Register offsets */ + +#define NIX_AF_CFG (0x0ull) +#define NIX_AF_STATUS (0x10ull) +#define NIX_AF_NDC_CFG (0x18ull) +#define NIX_AF_CONST (0x20ull) +#define NIX_AF_CONST1 (0x28ull) +#define NIX_AF_CONST2 (0x30ull) +#define NIX_AF_CONST3 (0x38ull) +#define NIX_AF_SQ_CONST (0x40ull) +#define NIX_AF_CQ_CONST (0x48ull) +#define NIX_AF_RQ_CONST (0x50ull) +#define NIX_AF_PSE_CONST (0x60ull) +#define NIX_AF_TL1_CONST (0x70ull) +#define NIX_AF_TL2_CONST (0x78ull) +#define NIX_AF_TL3_CONST (0x80ull) +#define NIX_AF_TL4_CONST (0x88ull) +#define NIX_AF_MDQ_CONST (0x90ull) +#define NIX_AF_MC_MIRROR_CONST (0x98ull) +#define NIX_AF_LSO_CFG (0xa8ull) +#define NIX_AF_BLK_RST (0xb0ull) +#define NIX_AF_TX_TSTMP_CFG (0xc0ull) +#define NIX_AF_RX_CFG (0xd0ull) +#define NIX_AF_AVG_DELAY (0xe0ull) +#define NIX_AF_CINT_DELAY (0xf0ull) +#define NIX_AF_RX_MCAST_BASE (0x100ull) +#define NIX_AF_RX_MCAST_CFG (0x110ull) +#define NIX_AF_RX_MCAST_BUF_BASE (0x120ull) +#define NIX_AF_RX_MCAST_BUF_CFG (0x130ull) +#define NIX_AF_RX_MIRROR_BUF_BASE (0x140ull) +#define NIX_AF_RX_MIRROR_BUF_CFG (0x148ull) +#define NIX_AF_LF_RST (0x150ull) +#define NIX_AF_GEN_INT (0x160ull) +#define NIX_AF_GEN_INT_W1S (0x168ull) +#define NIX_AF_GEN_INT_ENA_W1S (0x170ull) +#define NIX_AF_GEN_INT_ENA_W1C (0x178ull) +#define NIX_AF_ERR_INT (0x180ull) +#define NIX_AF_ERR_INT_W1S (0x188ull) +#define NIX_AF_ERR_INT_ENA_W1S (0x190ull) +#define NIX_AF_ERR_INT_ENA_W1C (0x198ull) +#define NIX_AF_RAS (0x1a0ull) +#define NIX_AF_RAS_W1S (0x1a8ull) +#define NIX_AF_RAS_ENA_W1S (0x1b0ull) +#define NIX_AF_RAS_ENA_W1C (0x1b8ull) +#define NIX_AF_RVU_INT (0x1c0ull) +#define NIX_AF_RVU_INT_W1S (0x1c8ull) +#define NIX_AF_RVU_INT_ENA_W1S (0x1d0ull) +#define NIX_AF_RVU_INT_ENA_W1C (0x1d8ull) +#define NIX_AF_TCP_TIMER (0x1e0ull) +#define NIX_AF_RX_DEF_OL2 (0x200ull) +#define NIX_AF_RX_DEF_OIP4 (0x210ull) +#define NIX_AF_RX_DEF_IIP4 (0x220ull) +#define NIX_AF_RX_DEF_OIP6 (0x230ull) +#define NIX_AF_RX_DEF_IIP6 (0x240ull) +#define NIX_AF_RX_DEF_OTCP (0x250ull) +#define NIX_AF_RX_DEF_ITCP (0x260ull) +#define NIX_AF_RX_DEF_OUDP (0x270ull) +#define NIX_AF_RX_DEF_IUDP (0x280ull) +#define NIX_AF_RX_DEF_OSCTP (0x290ull) +#define NIX_AF_RX_DEF_ISCTP (0x2a0ull) +#define NIX_AF_RX_DEF_IPSECX(a) (0x2b0ull | (uint64_t)(a) << 3) +#define NIX_AF_RX_IPSEC_GEN_CFG (0x300ull) +#define NIX_AF_RX_CPTX_INST_QSEL(a) (0x320ull | (uint64_t)(a) << 3) +#define NIX_AF_RX_CPTX_CREDIT(a) (0x360ull | (uint64_t)(a) << 3) +#define NIX_AF_NDC_RX_SYNC (0x3e0ull) +#define NIX_AF_NDC_TX_SYNC (0x3f0ull) +#define NIX_AF_AQ_CFG (0x400ull) +#define NIX_AF_AQ_BASE (0x410ull) +#define NIX_AF_AQ_STATUS (0x420ull) +#define NIX_AF_AQ_DOOR 
(0x430ull) +#define NIX_AF_AQ_DONE_WAIT (0x440ull) +#define NIX_AF_AQ_DONE (0x450ull) +#define NIX_AF_AQ_DONE_ACK (0x460ull) +#define NIX_AF_AQ_DONE_TIMER (0x470ull) +#define NIX_AF_AQ_DONE_ENA_W1S (0x490ull) +#define NIX_AF_AQ_DONE_ENA_W1C (0x498ull) +#define NIX_AF_RX_LINKX_CFG(a) (0x540ull | (uint64_t)(a) << 16) +#define NIX_AF_RX_SW_SYNC (0x550ull) +#define NIX_AF_RX_LINKX_WRR_CFG(a) (0x560ull | (uint64_t)(a) << 16) +#define NIX_AF_EXPR_TX_FIFO_STATUS (0x640ull) +#define NIX_AF_NORM_TX_FIFO_STATUS (0x648ull) +#define NIX_AF_SDP_TX_FIFO_STATUS (0x650ull) +#define NIX_AF_TX_NPC_CAPTURE_CONFIG (0x660ull) +#define NIX_AF_TX_NPC_CAPTURE_INFO (0x668ull) +#define NIX_AF_TX_NPC_CAPTURE_RESPX(a) (0x680ull | (uint64_t)(a) << 3) +#define NIX_AF_SEB_ACTIVE_CYCLES_PCX(a) (0x6c0ull | (uint64_t)(a) << 3) +#define NIX_AF_SMQX_CFG(a) (0x700ull | (uint64_t)(a) << 16) +#define NIX_AF_SMQX_HEAD(a) (0x710ull | (uint64_t)(a) << 16) +#define NIX_AF_SMQX_TAIL(a) (0x720ull | (uint64_t)(a) << 16) +#define NIX_AF_SMQX_STATUS(a) (0x730ull | (uint64_t)(a) << 16) +#define NIX_AF_SMQX_NXT_HEAD(a) (0x740ull | (uint64_t)(a) << 16) +#define NIX_AF_SQM_ACTIVE_CYCLES_PC (0x770ull) +#define NIX_AF_PSE_CHANNEL_LEVEL (0x800ull) +#define NIX_AF_PSE_SHAPER_CFG (0x810ull) +#define NIX_AF_PSE_ACTIVE_CYCLES_PC (0x8c0ull) +#define NIX_AF_MARK_FORMATX_CTL(a) (0x900ull | (uint64_t)(a) << 18) +#define NIX_AF_TX_LINKX_NORM_CREDIT(a) (0xa00ull | (uint64_t)(a) << 16) +#define NIX_AF_TX_LINKX_EXPR_CREDIT(a) (0xa10ull | (uint64_t)(a) << 16) +#define NIX_AF_TX_LINKX_SW_XOFF(a) (0xa20ull | (uint64_t)(a) << 16) +#define NIX_AF_TX_LINKX_HW_XOFF(a) (0xa30ull | (uint64_t)(a) << 16) +#define NIX_AF_SDP_LINK_CREDIT (0xa40ull) +#define NIX_AF_SDP_SW_XOFFX(a) (0xa60ull | (uint64_t)(a) << 3) +#define NIX_AF_SDP_HW_XOFFX(a) (0xac0ull | (uint64_t)(a) << 3) +#define NIX_AF_TL4X_BP_STATUS(a) (0xb00ull | (uint64_t)(a) << 16) +#define NIX_AF_TL4X_SDP_LINK_CFG(a) (0xb10ull | (uint64_t)(a) << 16) +#define NIX_AF_TL1X_SCHEDULE(a) (0xc00ull | (uint64_t)(a) << 16) +#define NIX_AF_TL1X_SHAPE(a) (0xc10ull | (uint64_t)(a) << 16) +#define NIX_AF_TL1X_CIR(a) (0xc20ull | (uint64_t)(a) << 16) +#define NIX_AF_TL1X_SHAPE_STATE(a) (0xc50ull | (uint64_t)(a) << 16) +#define NIX_AF_TL1X_SW_XOFF(a) (0xc70ull | (uint64_t)(a) << 16) +#define NIX_AF_TL1X_TOPOLOGY(a) (0xc80ull | (uint64_t)(a) << 16) +#define NIX_AF_TL1X_MD_DEBUG0(a) (0xcc0ull | (uint64_t)(a) << 16) +#define NIX_AF_TL1X_MD_DEBUG1(a) (0xcc8ull | (uint64_t)(a) << 16) +#define NIX_AF_TL1X_MD_DEBUG2(a) (0xcd0ull | (uint64_t)(a) << 16) +#define NIX_AF_TL1X_MD_DEBUG3(a) (0xcd8ull | (uint64_t)(a) << 16) +#define NIX_AF_TL1X_DROPPED_PACKETS(a) (0xd20ull | (uint64_t)(a) << 16) +#define NIX_AF_TL1X_DROPPED_BYTES(a) (0xd30ull | (uint64_t)(a) << 16) +#define NIX_AF_TL1X_RED_PACKETS(a) (0xd40ull | (uint64_t)(a) << 16) +#define NIX_AF_TL1X_RED_BYTES(a) (0xd50ull | (uint64_t)(a) << 16) +#define NIX_AF_TL1X_YELLOW_PACKETS(a) (0xd60ull | (uint64_t)(a) << 16) +#define NIX_AF_TL1X_YELLOW_BYTES(a) (0xd70ull | (uint64_t)(a) << 16) +#define NIX_AF_TL1X_GREEN_PACKETS(a) (0xd80ull | (uint64_t)(a) << 16) +#define NIX_AF_TL1X_GREEN_BYTES(a) (0xd90ull | (uint64_t)(a) << 16) +#define NIX_AF_TL2X_SCHEDULE(a) (0xe00ull | (uint64_t)(a) << 16) +#define NIX_AF_TL2X_SHAPE(a) (0xe10ull | (uint64_t)(a) << 16) +#define NIX_AF_TL2X_CIR(a) (0xe20ull | (uint64_t)(a) << 16) +#define NIX_AF_TL2X_PIR(a) (0xe30ull | (uint64_t)(a) << 16) +#define NIX_AF_TL2X_SCHED_STATE(a) (0xe40ull | (uint64_t)(a) << 16) +#define NIX_AF_TL2X_SHAPE_STATE(a) (0xe50ull | 
(uint64_t)(a) << 16) +#define NIX_AF_TL2X_SW_XOFF(a) (0xe70ull | (uint64_t)(a) << 16) +#define NIX_AF_TL2X_TOPOLOGY(a) (0xe80ull | (uint64_t)(a) << 16) +#define NIX_AF_TL2X_PARENT(a) (0xe88ull | (uint64_t)(a) << 16) +#define NIX_AF_TL2X_MD_DEBUG0(a) (0xec0ull | (uint64_t)(a) << 16) +#define NIX_AF_TL2X_MD_DEBUG1(a) (0xec8ull | (uint64_t)(a) << 16) +#define NIX_AF_TL2X_MD_DEBUG2(a) (0xed0ull | (uint64_t)(a) << 16) +#define NIX_AF_TL2X_MD_DEBUG3(a) (0xed8ull | (uint64_t)(a) << 16) +#define NIX_AF_TL3X_SCHEDULE(a) \ + (0x1000ull | (uint64_t)(a) << 16) +#define NIX_AF_TL3X_SHAPE(a) \ + (0x1010ull | (uint64_t)(a) << 16) +#define NIX_AF_TL3X_CIR(a) \ + (0x1020ull | (uint64_t)(a) << 16) +#define NIX_AF_TL3X_PIR(a) \ + (0x1030ull | (uint64_t)(a) << 16) +#define NIX_AF_TL3X_SCHED_STATE(a) \ + (0x1040ull | (uint64_t)(a) << 16) +#define NIX_AF_TL3X_SHAPE_STATE(a) \ + (0x1050ull | (uint64_t)(a) << 16) +#define NIX_AF_TL3X_SW_XOFF(a) \ + (0x1070ull | (uint64_t)(a) << 16) +#define NIX_AF_TL3X_TOPOLOGY(a) \ + (0x1080ull | (uint64_t)(a) << 16) +#define NIX_AF_TL3X_PARENT(a) \ + (0x1088ull | (uint64_t)(a) << 16) +#define NIX_AF_TL3X_MD_DEBUG0(a) \ + (0x10c0ull | (uint64_t)(a) << 16) +#define NIX_AF_TL3X_MD_DEBUG1(a) \ + (0x10c8ull | (uint64_t)(a) << 16) +#define NIX_AF_TL3X_MD_DEBUG2(a) \ + (0x10d0ull | (uint64_t)(a) << 16) +#define NIX_AF_TL3X_MD_DEBUG3(a) \ + (0x10d8ull | (uint64_t)(a) << 16) +#define NIX_AF_TL4X_SCHEDULE(a) \ + (0x1200ull | (uint64_t)(a) << 16) +#define NIX_AF_TL4X_SHAPE(a) \ + (0x1210ull | (uint64_t)(a) << 16) +#define NIX_AF_TL4X_CIR(a) \ + (0x1220ull | (uint64_t)(a) << 16) +#define NIX_AF_TL4X_PIR(a) \ + (0x1230ull | (uint64_t)(a) << 16) +#define NIX_AF_TL4X_SCHED_STATE(a) \ + (0x1240ull | (uint64_t)(a) << 16) +#define NIX_AF_TL4X_SHAPE_STATE(a) \ + (0x1250ull | (uint64_t)(a) << 16) +#define NIX_AF_TL4X_SW_XOFF(a) \ + (0x1270ull | (uint64_t)(a) << 16) +#define NIX_AF_TL4X_TOPOLOGY(a) \ + (0x1280ull | (uint64_t)(a) << 16) +#define NIX_AF_TL4X_PARENT(a) \ + (0x1288ull | (uint64_t)(a) << 16) +#define NIX_AF_TL4X_MD_DEBUG0(a) \ + (0x12c0ull | (uint64_t)(a) << 16) +#define NIX_AF_TL4X_MD_DEBUG1(a) \ + (0x12c8ull | (uint64_t)(a) << 16) +#define NIX_AF_TL4X_MD_DEBUG2(a) \ + (0x12d0ull | (uint64_t)(a) << 16) +#define NIX_AF_TL4X_MD_DEBUG3(a) \ + (0x12d8ull | (uint64_t)(a) << 16) +#define NIX_AF_MDQX_SCHEDULE(a) \ + (0x1400ull | (uint64_t)(a) << 16) +#define NIX_AF_MDQX_SHAPE(a) \ + (0x1410ull | (uint64_t)(a) << 16) +#define NIX_AF_MDQX_CIR(a) \ + (0x1420ull | (uint64_t)(a) << 16) +#define NIX_AF_MDQX_PIR(a) \ + (0x1430ull | (uint64_t)(a) << 16) +#define NIX_AF_MDQX_SCHED_STATE(a) \ + (0x1440ull | (uint64_t)(a) << 16) +#define NIX_AF_MDQX_SHAPE_STATE(a) \ + (0x1450ull | (uint64_t)(a) << 16) +#define NIX_AF_MDQX_SW_XOFF(a) \ + (0x1470ull | (uint64_t)(a) << 16) +#define NIX_AF_MDQX_PARENT(a) \ + (0x1480ull | (uint64_t)(a) << 16) +#define NIX_AF_MDQX_MD_DEBUG(a) \ + (0x14c0ull | (uint64_t)(a) << 16) +#define NIX_AF_TL3_TL2X_CFG(a) \ + (0x1600ull | (uint64_t)(a) << 16) +#define NIX_AF_TL3_TL2X_BP_STATUS(a) \ + (0x1610ull | (uint64_t)(a) << 16) +#define NIX_AF_TL3_TL2X_LINKX_CFG(a, b) \ + (0x1700ull | (uint64_t)(a) << 16 | (uint64_t)(b) << 3) +#define NIX_AF_RX_FLOW_KEY_ALGX_FIELDX(a, b) \ + (0x1800ull | (uint64_t)(a) << 18 | (uint64_t)(b) << 3) +#define NIX_AF_TX_MCASTX(a) \ + (0x1900ull | (uint64_t)(a) << 15) +#define NIX_AF_TX_VTAG_DEFX_CTL(a) \ + (0x1a00ull | (uint64_t)(a) << 16) +#define NIX_AF_TX_VTAG_DEFX_DATA(a) \ + (0x1a10ull | (uint64_t)(a) << 16) +#define NIX_AF_RX_BPIDX_STATUS(a) \ + 
(0x1a20ull | (uint64_t)(a) << 17) +#define NIX_AF_RX_CHANX_CFG(a) \ + (0x1a30ull | (uint64_t)(a) << 15) +#define NIX_AF_CINT_TIMERX(a) \ + (0x1a40ull | (uint64_t)(a) << 18) +#define NIX_AF_LSO_FORMATX_FIELDX(a, b) \ + (0x1b00ull | (uint64_t)(a) << 16 | (uint64_t)(b) << 3) +#define NIX_AF_LFX_CFG(a) \ + (0x4000ull | (uint64_t)(a) << 17) +#define NIX_AF_LFX_SQS_CFG(a) \ + (0x4020ull | (uint64_t)(a) << 17) +#define NIX_AF_LFX_TX_CFG2(a) \ + (0x4028ull | (uint64_t)(a) << 17) +#define NIX_AF_LFX_SQS_BASE(a) \ + (0x4030ull | (uint64_t)(a) << 17) +#define NIX_AF_LFX_RQS_CFG(a) \ + (0x4040ull | (uint64_t)(a) << 17) +#define NIX_AF_LFX_RQS_BASE(a) \ + (0x4050ull | (uint64_t)(a) << 17) +#define NIX_AF_LFX_CQS_CFG(a) \ + (0x4060ull | (uint64_t)(a) << 17) +#define NIX_AF_LFX_CQS_BASE(a) \ + (0x4070ull | (uint64_t)(a) << 17) +#define NIX_AF_LFX_TX_CFG(a) \ + (0x4080ull | (uint64_t)(a) << 17) +#define NIX_AF_LFX_TX_PARSE_CFG(a) \ + (0x4090ull | (uint64_t)(a) << 17) +#define NIX_AF_LFX_RX_CFG(a) \ + (0x40a0ull | (uint64_t)(a) << 17) +#define NIX_AF_LFX_RSS_CFG(a) \ + (0x40c0ull | (uint64_t)(a) << 17) +#define NIX_AF_LFX_RSS_BASE(a) \ + (0x40d0ull | (uint64_t)(a) << 17) +#define NIX_AF_LFX_QINTS_CFG(a) \ + (0x4100ull | (uint64_t)(a) << 17) +#define NIX_AF_LFX_QINTS_BASE(a) \ + (0x4110ull | (uint64_t)(a) << 17) +#define NIX_AF_LFX_CINTS_CFG(a) \ + (0x4120ull | (uint64_t)(a) << 17) +#define NIX_AF_LFX_CINTS_BASE(a) \ + (0x4130ull | (uint64_t)(a) << 17) +#define NIX_AF_LFX_RX_IPSEC_CFG0(a) \ + (0x4140ull | (uint64_t)(a) << 17) +#define NIX_AF_LFX_RX_IPSEC_CFG1(a) \ + (0x4148ull | (uint64_t)(a) << 17) +#define NIX_AF_LFX_RX_IPSEC_DYNO_CFG(a) \ + (0x4150ull | (uint64_t)(a) << 17) +#define NIX_AF_LFX_RX_IPSEC_DYNO_BASE(a) \ + (0x4158ull | (uint64_t)(a) << 17) +#define NIX_AF_LFX_RX_IPSEC_SA_BASE(a) \ + (0x4170ull | (uint64_t)(a) << 17) +#define NIX_AF_LFX_TX_STATUS(a) \ + (0x4180ull | (uint64_t)(a) << 17) +#define NIX_AF_LFX_RX_VTAG_TYPEX(a, b) \ + (0x4200ull | (uint64_t)(a) << 17 | (uint64_t)(b) << 3) +#define NIX_AF_LFX_LOCKX(a, b) \ + (0x4300ull | (uint64_t)(a) << 17 | (uint64_t)(b) << 3) +#define NIX_AF_LFX_TX_STATX(a, b) \ + (0x4400ull | (uint64_t)(a) << 17 | (uint64_t)(b) << 3) +#define NIX_AF_LFX_RX_STATX(a, b) \ + (0x4500ull | (uint64_t)(a) << 17 | (uint64_t)(b) << 3) +#define NIX_AF_LFX_RSS_GRPX(a, b) \ + (0x4600ull | (uint64_t)(a) << 17 | (uint64_t)(b) << 3) +#define NIX_AF_RX_NPC_MC_RCV (0x4700ull) +#define NIX_AF_RX_NPC_MC_DROP (0x4710ull) +#define NIX_AF_RX_NPC_MIRROR_RCV (0x4720ull) +#define NIX_AF_RX_NPC_MIRROR_DROP (0x4730ull) +#define NIX_AF_RX_ACTIVE_CYCLES_PCX(a) \ + (0x4800ull | (uint64_t)(a) << 16) +#define NIX_PRIV_AF_INT_CFG (0x8000000ull) +#define NIX_PRIV_LFX_CFG(a) \ + (0x8000010ull | (uint64_t)(a) << 8) +#define NIX_PRIV_LFX_INT_CFG(a) \ + (0x8000020ull | (uint64_t)(a) << 8) +#define NIX_AF_RVU_LF_CFG_DEBUG (0x8000030ull) + +#define NIX_LF_RX_SECRETX(a) (0x0ull | (uint64_t)(a) << 3) +#define NIX_LF_CFG (0x100ull) +#define NIX_LF_GINT (0x200ull) +#define NIX_LF_GINT_W1S (0x208ull) +#define NIX_LF_GINT_ENA_W1C (0x210ull) +#define NIX_LF_GINT_ENA_W1S (0x218ull) +#define NIX_LF_ERR_INT (0x220ull) +#define NIX_LF_ERR_INT_W1S (0x228ull) +#define NIX_LF_ERR_INT_ENA_W1C (0x230ull) +#define NIX_LF_ERR_INT_ENA_W1S (0x238ull) +#define NIX_LF_RAS (0x240ull) +#define NIX_LF_RAS_W1S (0x248ull) +#define NIX_LF_RAS_ENA_W1C (0x250ull) +#define NIX_LF_RAS_ENA_W1S (0x258ull) +#define NIX_LF_SQ_OP_ERR_DBG (0x260ull) +#define NIX_LF_MNQ_ERR_DBG (0x270ull) +#define NIX_LF_SEND_ERR_DBG (0x280ull) +#define 
NIX_LF_TX_STATX(a) (0x300ull | (uint64_t)(a) << 3) +#define NIX_LF_RX_STATX(a) (0x400ull | (uint64_t)(a) << 3) +#define NIX_LF_OP_SENDX(a) (0x800ull | (uint64_t)(a) << 3) +#define NIX_LF_RQ_OP_INT (0x900ull) +#define NIX_LF_RQ_OP_OCTS (0x910ull) +#define NIX_LF_RQ_OP_PKTS (0x920ull) +#define NIX_LF_RQ_OP_DROP_OCTS (0x930ull) +#define NIX_LF_RQ_OP_DROP_PKTS (0x940ull) +#define NIX_LF_RQ_OP_RE_PKTS (0x950ull) +#define NIX_LF_OP_IPSEC_DYNO_CNT (0x980ull) +#define NIX_LF_SQ_OP_INT (0xa00ull) +#define NIX_LF_SQ_OP_OCTS (0xa10ull) +#define NIX_LF_SQ_OP_PKTS (0xa20ull) +#define NIX_LF_SQ_OP_STATUS (0xa30ull) +#define NIX_LF_SQ_OP_DROP_OCTS (0xa40ull) +#define NIX_LF_SQ_OP_DROP_PKTS (0xa50ull) +#define NIX_LF_CQ_OP_INT (0xb00ull) +#define NIX_LF_CQ_OP_DOOR (0xb30ull) +#define NIX_LF_CQ_OP_STATUS (0xb40ull) +#define NIX_LF_QINTX_CNT(a) (0xc00ull | (uint64_t)(a) << 12) +#define NIX_LF_QINTX_INT(a) (0xc10ull | (uint64_t)(a) << 12) +#define NIX_LF_QINTX_ENA_W1S(a) (0xc20ull | (uint64_t)(a) << 12) +#define NIX_LF_QINTX_ENA_W1C(a) (0xc30ull | (uint64_t)(a) << 12) +#define NIX_LF_CINTX_CNT(a) (0xd00ull | (uint64_t)(a) << 12) +#define NIX_LF_CINTX_WAIT(a) (0xd10ull | (uint64_t)(a) << 12) +#define NIX_LF_CINTX_INT(a) (0xd20ull | (uint64_t)(a) << 12) +#define NIX_LF_CINTX_INT_W1S(a) (0xd30ull | (uint64_t)(a) << 12) +#define NIX_LF_CINTX_ENA_W1S(a) (0xd40ull | (uint64_t)(a) << 12) +#define NIX_LF_CINTX_ENA_W1C(a) (0xd50ull | (uint64_t)(a) << 12) + + +/* Enum offsets */ + +#define NIX_TX_VTAGOP_NOP (0x0ull) +#define NIX_TX_VTAGOP_INSERT (0x1ull) +#define NIX_TX_VTAGOP_REPLACE (0x2ull) + +#define NIX_TX_ACTIONOP_DROP (0x0ull) +#define NIX_TX_ACTIONOP_UCAST_DEFAULT (0x1ull) +#define NIX_TX_ACTIONOP_UCAST_CHAN (0x2ull) +#define NIX_TX_ACTIONOP_MCAST (0x3ull) +#define NIX_TX_ACTIONOP_DROP_VIOL (0x5ull) + +#define NIX_INTF_RX (0x0ull) +#define NIX_INTF_TX (0x1ull) + +#define NIX_TXLAYER_OL3 (0x0ull) +#define NIX_TXLAYER_OL4 (0x1ull) +#define NIX_TXLAYER_IL3 (0x2ull) +#define NIX_TXLAYER_IL4 (0x3ull) + +#define NIX_SUBDC_NOP (0x0ull) +#define NIX_SUBDC_EXT (0x1ull) +#define NIX_SUBDC_CRC (0x2ull) +#define NIX_SUBDC_IMM (0x3ull) +#define NIX_SUBDC_SG (0x4ull) +#define NIX_SUBDC_MEM (0x5ull) +#define NIX_SUBDC_JUMP (0x6ull) +#define NIX_SUBDC_WORK (0x7ull) +#define NIX_SUBDC_SOD (0xfull) + +#define NIX_STYPE_STF (0x0ull) +#define NIX_STYPE_STT (0x1ull) +#define NIX_STYPE_STP (0x2ull) + +#define NIX_STAT_LF_TX_TX_UCAST (0x0ull) +#define NIX_STAT_LF_TX_TX_BCAST (0x1ull) +#define NIX_STAT_LF_TX_TX_MCAST (0x2ull) +#define NIX_STAT_LF_TX_TX_DROP (0x3ull) +#define NIX_STAT_LF_TX_TX_OCTS (0x4ull) + +#define NIX_STAT_LF_RX_RX_OCTS (0x0ull) +#define NIX_STAT_LF_RX_RX_UCAST (0x1ull) +#define NIX_STAT_LF_RX_RX_BCAST (0x2ull) +#define NIX_STAT_LF_RX_RX_MCAST (0x3ull) +#define NIX_STAT_LF_RX_RX_DROP (0x4ull) +#define NIX_STAT_LF_RX_RX_DROP_OCTS (0x5ull) +#define NIX_STAT_LF_RX_RX_FCS (0x6ull) +#define NIX_STAT_LF_RX_RX_ERR (0x7ull) +#define NIX_STAT_LF_RX_RX_DRP_BCAST (0x8ull) +#define NIX_STAT_LF_RX_RX_DRP_MCAST (0x9ull) +#define NIX_STAT_LF_RX_RX_DRP_L3BCAST (0xaull) +#define NIX_STAT_LF_RX_RX_DRP_L3MCAST (0xbull) + +#define NIX_SQOPERR_SQ_OOR (0x0ull) +#define NIX_SQOPERR_SQ_CTX_FAULT (0x1ull) +#define NIX_SQOPERR_SQ_CTX_POISON (0x2ull) +#define NIX_SQOPERR_SQ_DISABLED (0x3ull) +#define NIX_SQOPERR_MAX_SQE_SIZE_ERR (0x4ull) +#define NIX_SQOPERR_SQE_OFLOW (0x5ull) +#define NIX_SQOPERR_SQB_NULL (0x6ull) +#define NIX_SQOPERR_SQB_FAULT (0x7ull) + +#define NIX_XQESZ_W64 (0x0ull) +#define NIX_XQESZ_W16 (0x1ull) + +#define 
NIX_VTAGSIZE_T4 (0x0ull) +#define NIX_VTAGSIZE_T8 (0x1ull) + +#define NIX_RX_ACTIONOP_DROP (0x0ull) +#define NIX_RX_ACTIONOP_UCAST (0x1ull) +#define NIX_RX_ACTIONOP_UCAST_IPSEC (0x2ull) +#define NIX_RX_ACTIONOP_MCAST (0x3ull) +#define NIX_RX_ACTIONOP_RSS (0x4ull) +#define NIX_RX_ACTIONOP_PF_FUNC_DROP (0x5ull) +#define NIX_RX_ACTIONOP_MIRROR (0x6ull) + +#define NIX_RX_VTAGACTION_VTAG0_RELPTR (0x0ull) +#define NIX_RX_VTAGACTION_VTAG1_RELPTR (0x4ull) +#define NIX_RX_VTAGACTION_VTAG_VALID (0x1ull) +#define NIX_TX_VTAGACTION_VTAG0_RELPTR \ + (sizeof(struct nix_inst_hdr_s) + 2 * 6) +#define NIX_TX_VTAGACTION_VTAG1_RELPTR \ + (sizeof(struct nix_inst_hdr_s) + 2 * 6 + 4) +#define NIX_RQINT_DROP (0x0ull) +#define NIX_RQINT_RED (0x1ull) +#define NIX_RQINT_R2 (0x2ull) +#define NIX_RQINT_R3 (0x3ull) +#define NIX_RQINT_R4 (0x4ull) +#define NIX_RQINT_R5 (0x5ull) +#define NIX_RQINT_R6 (0x6ull) +#define NIX_RQINT_R7 (0x7ull) + +#define NIX_MAXSQESZ_W16 (0x0ull) +#define NIX_MAXSQESZ_W8 (0x1ull) + +#define NIX_LSOALG_NOP (0x0ull) +#define NIX_LSOALG_ADD_SEGNUM (0x1ull) +#define NIX_LSOALG_ADD_PAYLEN (0x2ull) +#define NIX_LSOALG_ADD_OFFSET (0x3ull) +#define NIX_LSOALG_TCP_FLAGS (0x4ull) + +#define NIX_MNQERR_SQ_CTX_FAULT (0x0ull) +#define NIX_MNQERR_SQ_CTX_POISON (0x1ull) +#define NIX_MNQERR_SQB_FAULT (0x2ull) +#define NIX_MNQERR_SQB_POISON (0x3ull) +#define NIX_MNQERR_TOTAL_ERR (0x4ull) +#define NIX_MNQERR_LSO_ERR (0x5ull) +#define NIX_MNQERR_CQ_QUERY_ERR (0x6ull) +#define NIX_MNQERR_MAX_SQE_SIZE_ERR (0x7ull) +#define NIX_MNQERR_MAXLEN_ERR (0x8ull) +#define NIX_MNQERR_SQE_SIZEM1_ZERO (0x9ull) + +#define NIX_MDTYPE_RSVD (0x0ull) +#define NIX_MDTYPE_FLUSH (0x1ull) +#define NIX_MDTYPE_PMD (0x2ull) + +#define NIX_NDC_TX_PORT_LMT (0x0ull) +#define NIX_NDC_TX_PORT_ENQ (0x1ull) +#define NIX_NDC_TX_PORT_MNQ (0x2ull) +#define NIX_NDC_TX_PORT_DEQ (0x3ull) +#define NIX_NDC_TX_PORT_DMA (0x4ull) +#define NIX_NDC_TX_PORT_XQE (0x5ull) + +#define NIX_NDC_RX_PORT_AQ (0x0ull) +#define NIX_NDC_RX_PORT_CQ (0x1ull) +#define NIX_NDC_RX_PORT_CINT (0x2ull) +#define NIX_NDC_RX_PORT_MC (0x3ull) +#define NIX_NDC_RX_PORT_PKT (0x4ull) +#define NIX_NDC_RX_PORT_RQ (0x5ull) + +#define NIX_RE_OPCODE_RE_NONE (0x0ull) +#define NIX_RE_OPCODE_RE_PARTIAL (0x1ull) +#define NIX_RE_OPCODE_RE_JABBER (0x2ull) +#define NIX_RE_OPCODE_RE_FCS (0x7ull) +#define NIX_RE_OPCODE_RE_FCS_RCV (0x8ull) +#define NIX_RE_OPCODE_RE_TERMINATE (0x9ull) +#define NIX_RE_OPCODE_RE_RX_CTL (0xbull) +#define NIX_RE_OPCODE_RE_SKIP (0xcull) +#define NIX_RE_OPCODE_RE_DMAPKT (0xfull) +#define NIX_RE_OPCODE_UNDERSIZE (0x10ull) +#define NIX_RE_OPCODE_OVERSIZE (0x11ull) +#define NIX_RE_OPCODE_OL2_LENMISM (0x12ull) + +#define NIX_REDALG_STD (0x0ull) +#define NIX_REDALG_SEND (0x1ull) +#define NIX_REDALG_STALL (0x2ull) +#define NIX_REDALG_DISCARD (0x3ull) + +#define NIX_RX_MCOP_RQ (0x0ull) +#define NIX_RX_MCOP_RSS (0x1ull) + +#define NIX_RX_PERRCODE_NPC_RESULT_ERR (0x2ull) +#define NIX_RX_PERRCODE_MCAST_FAULT (0x4ull) +#define NIX_RX_PERRCODE_MIRROR_FAULT (0x5ull) +#define NIX_RX_PERRCODE_MCAST_POISON (0x6ull) +#define NIX_RX_PERRCODE_MIRROR_POISON (0x7ull) +#define NIX_RX_PERRCODE_DATA_FAULT (0x8ull) +#define NIX_RX_PERRCODE_MEMOUT (0x9ull) +#define NIX_RX_PERRCODE_BUFS_OFLOW (0xaull) +#define NIX_RX_PERRCODE_OL3_LEN (0x10ull) +#define NIX_RX_PERRCODE_OL4_LEN (0x11ull) +#define NIX_RX_PERRCODE_OL4_CHK (0x12ull) +#define NIX_RX_PERRCODE_OL4_PORT (0x13ull) +#define NIX_RX_PERRCODE_IL3_LEN (0x20ull) +#define NIX_RX_PERRCODE_IL4_LEN (0x21ull) +#define NIX_RX_PERRCODE_IL4_CHK (0x22ull) 
+#define NIX_RX_PERRCODE_IL4_PORT (0x23ull) + +#define NIX_SENDCRCALG_CRC32 (0x0ull) +#define NIX_SENDCRCALG_CRC32C (0x1ull) +#define NIX_SENDCRCALG_ONES16 (0x2ull) + +#define NIX_SENDL3TYPE_NONE (0x0ull) +#define NIX_SENDL3TYPE_IP4 (0x2ull) +#define NIX_SENDL3TYPE_IP4_CKSUM (0x3ull) +#define NIX_SENDL3TYPE_IP6 (0x4ull) + +#define NIX_SENDL4TYPE_NONE (0x0ull) +#define NIX_SENDL4TYPE_TCP_CKSUM (0x1ull) +#define NIX_SENDL4TYPE_SCTP_CKSUM (0x2ull) +#define NIX_SENDL4TYPE_UDP_CKSUM (0x3ull) + +#define NIX_SENDLDTYPE_LDD (0x0ull) +#define NIX_SENDLDTYPE_LDT (0x1ull) +#define NIX_SENDLDTYPE_LDWB (0x2ull) + +#define NIX_SENDMEMALG_SET (0x0ull) +#define NIX_SENDMEMALG_SETTSTMP (0x1ull) +#define NIX_SENDMEMALG_SETRSLT (0x2ull) +#define NIX_SENDMEMALG_ADD (0x8ull) +#define NIX_SENDMEMALG_SUB (0x9ull) +#define NIX_SENDMEMALG_ADDLEN (0xaull) +#define NIX_SENDMEMALG_SUBLEN (0xbull) +#define NIX_SENDMEMALG_ADDMBUF (0xcull) +#define NIX_SENDMEMALG_SUBMBUF (0xdull) + +#define NIX_SENDMEMDSZ_B64 (0x0ull) +#define NIX_SENDMEMDSZ_B32 (0x1ull) +#define NIX_SENDMEMDSZ_B16 (0x2ull) +#define NIX_SENDMEMDSZ_B8 (0x3ull) + +#define NIX_SEND_STATUS_GOOD (0x0ull) +#define NIX_SEND_STATUS_SQ_CTX_FAULT (0x1ull) +#define NIX_SEND_STATUS_SQ_CTX_POISON (0x2ull) +#define NIX_SEND_STATUS_SQB_FAULT (0x3ull) +#define NIX_SEND_STATUS_SQB_POISON (0x4ull) +#define NIX_SEND_STATUS_SEND_HDR_ERR (0x5ull) +#define NIX_SEND_STATUS_SEND_EXT_ERR (0x6ull) +#define NIX_SEND_STATUS_JUMP_FAULT (0x7ull) +#define NIX_SEND_STATUS_JUMP_POISON (0x8ull) +#define NIX_SEND_STATUS_SEND_CRC_ERR (0x10ull) +#define NIX_SEND_STATUS_SEND_IMM_ERR (0x11ull) +#define NIX_SEND_STATUS_SEND_SG_ERR (0x12ull) +#define NIX_SEND_STATUS_SEND_MEM_ERR (0x13ull) +#define NIX_SEND_STATUS_INVALID_SUBDC (0x14ull) +#define NIX_SEND_STATUS_SUBDC_ORDER_ERR (0x15ull) +#define NIX_SEND_STATUS_DATA_FAULT (0x16ull) +#define NIX_SEND_STATUS_DATA_POISON (0x17ull) +#define NIX_SEND_STATUS_NPC_DROP_ACTION (0x20ull) +#define NIX_SEND_STATUS_LOCK_VIOL (0x21ull) +#define NIX_SEND_STATUS_NPC_UCAST_CHAN_ERR (0x22ull) +#define NIX_SEND_STATUS_NPC_MCAST_CHAN_ERR (0x23ull) +#define NIX_SEND_STATUS_NPC_MCAST_ABORT (0x24ull) +#define NIX_SEND_STATUS_NPC_VTAG_PTR_ERR (0x25ull) +#define NIX_SEND_STATUS_NPC_VTAG_SIZE_ERR (0x26ull) +#define NIX_SEND_STATUS_SEND_MEM_FAULT (0x27ull) + +#define NIX_SQINT_LMT_ERR (0x0ull) +#define NIX_SQINT_MNQ_ERR (0x1ull) +#define NIX_SQINT_SEND_ERR (0x2ull) +#define NIX_SQINT_SQB_ALLOC_FAIL (0x3ull) + +#define NIX_XQE_TYPE_INVALID (0x0ull) +#define NIX_XQE_TYPE_RX (0x1ull) +#define NIX_XQE_TYPE_RX_IPSECS (0x2ull) +#define NIX_XQE_TYPE_RX_IPSECH (0x3ull) +#define NIX_XQE_TYPE_RX_IPSECD (0x4ull) +#define NIX_XQE_TYPE_SEND (0x8ull) + +#define NIX_AQ_COMP_NOTDONE (0x0ull) +#define NIX_AQ_COMP_GOOD (0x1ull) +#define NIX_AQ_COMP_SWERR (0x2ull) +#define NIX_AQ_COMP_CTX_POISON (0x3ull) +#define NIX_AQ_COMP_CTX_FAULT (0x4ull) +#define NIX_AQ_COMP_LOCKERR (0x5ull) +#define NIX_AQ_COMP_SQB_ALLOC_FAIL (0x6ull) + +#define NIX_AF_INT_VEC_RVU (0x0ull) +#define NIX_AF_INT_VEC_GEN (0x1ull) +#define NIX_AF_INT_VEC_AQ_DONE (0x2ull) +#define NIX_AF_INT_VEC_AF_ERR (0x3ull) +#define NIX_AF_INT_VEC_POISON (0x4ull) + +#define NIX_AQINT_GEN_RX_MCAST_DROP (0x0ull) +#define NIX_AQINT_GEN_RX_MIRROR_DROP (0x1ull) +#define NIX_AQINT_GEN_TL1_DRAIN (0x3ull) +#define NIX_AQINT_GEN_SMQ_FLUSH_DONE (0x4ull) + +#define NIX_AQ_INSTOP_NOP (0x0ull) +#define NIX_AQ_INSTOP_INIT (0x1ull) +#define NIX_AQ_INSTOP_WRITE (0x2ull) +#define NIX_AQ_INSTOP_READ (0x3ull) +#define NIX_AQ_INSTOP_LOCK (0x4ull) 
+#define NIX_AQ_INSTOP_UNLOCK (0x5ull) + +#define NIX_AQ_CTYPE_RQ (0x0ull) +#define NIX_AQ_CTYPE_SQ (0x1ull) +#define NIX_AQ_CTYPE_CQ (0x2ull) +#define NIX_AQ_CTYPE_MCE (0x3ull) +#define NIX_AQ_CTYPE_RSS (0x4ull) +#define NIX_AQ_CTYPE_DYNO (0x5ull) + +#define NIX_COLORRESULT_GREEN (0x0ull) +#define NIX_COLORRESULT_YELLOW (0x1ull) +#define NIX_COLORRESULT_RED_SEND (0x2ull) +#define NIX_COLORRESULT_RED_DROP (0x3ull) + +#define NIX_CHAN_LBKX_CHX(a, b) \ + (0x000ull | ((uint64_t)(a) << 8) | (uint64_t)(b)) +#define NIX_CHAN_R4 (0x400ull) +#define NIX_CHAN_R5 (0x500ull) +#define NIX_CHAN_R6 (0x600ull) +#define NIX_CHAN_SDP_CH_END (0x7ffull) +#define NIX_CHAN_SDP_CH_START (0x700ull) +#define NIX_CHAN_CGXX_LMACX_CHX(a, b, c) \ + (0x800ull | ((uint64_t)(a) << 8) | ((uint64_t)(b) << 4) | \ + (uint64_t)(c)) + +#define NIX_INTF_SDP (0x4ull) +#define NIX_INTF_CGX0 (0x0ull) +#define NIX_INTF_CGX1 (0x1ull) +#define NIX_INTF_CGX2 (0x2ull) +#define NIX_INTF_LBK0 (0x3ull) + +#define NIX_CQERRINT_DOOR_ERR (0x0ull) +#define NIX_CQERRINT_WR_FULL (0x1ull) +#define NIX_CQERRINT_CQE_FAULT (0x2ull) + +#define NIX_LF_INT_VEC_GINT (0x80ull) +#define NIX_LF_INT_VEC_ERR_INT (0x81ull) +#define NIX_LF_INT_VEC_POISON (0x82ull) +#define NIX_LF_INT_VEC_QINT_END (0x3full) +#define NIX_LF_INT_VEC_QINT_START (0x0ull) +#define NIX_LF_INT_VEC_CINT_END (0x7full) +#define NIX_LF_INT_VEC_CINT_START (0x40ull) + +/* Enums definitions */ + +/* Structures definitions */ + +/* NIX admin queue instruction structure */ +struct nix_aq_inst_s { + uint64_t op : 4; + uint64_t ctype : 4; + uint64_t lf : 7; + uint64_t rsvd_23_15 : 9; + uint64_t cindex : 20; + uint64_t rsvd_62_44 : 19; + uint64_t doneint : 1; + uint64_t res_addr : 64; /* W1 */ +}; + +/* NIX admin queue result structure */ +struct nix_aq_res_s { + uint64_t op : 4; + uint64_t ctype : 4; + uint64_t compcode : 8; + uint64_t doneint : 1; + uint64_t rsvd_63_17 : 47; + uint64_t rsvd_127_64 : 64; /* W1 */ +}; + +/* NIX completion interrupt context hardware structure */ +struct nix_cint_hw_s { + uint64_t ecount : 32; + uint64_t qcount : 16; + uint64_t intr : 1; + uint64_t ena : 1; + uint64_t timer_idx : 8; + uint64_t rsvd_63_58 : 6; + uint64_t ecount_wait : 32; + uint64_t qcount_wait : 16; + uint64_t time_wait : 8; + uint64_t rsvd_127_120 : 8; +}; + +/* NIX completion queue entry header structure */ +struct nix_cqe_hdr_s { + uint64_t tag : 32; + uint64_t q : 20; + uint64_t rsvd_57_52 : 6; + uint64_t node : 2; + uint64_t cqe_type : 4; +}; + +/* NIX completion queue context structure */ +struct nix_cq_ctx_s { + uint64_t base : 64;/* W0 */ + uint64_t rsvd_67_64 : 4; + uint64_t bp_ena : 1; + uint64_t rsvd_71_69 : 3; + uint64_t bpid : 9; + uint64_t rsvd_83_81 : 3; + uint64_t qint_idx : 7; + uint64_t cq_err : 1; + uint64_t cint_idx : 7; + uint64_t avg_con : 9; + uint64_t wrptr : 20; + uint64_t tail : 20; + uint64_t head : 20; + uint64_t avg_level : 8; + uint64_t update_time : 16; + uint64_t bp : 8; + uint64_t drop : 8; + uint64_t drop_ena : 1; + uint64_t ena : 1; + uint64_t rsvd_211_210 : 2; + uint64_t substream : 20; + uint64_t caching : 1; + uint64_t rsvd_235_233 : 3; + uint64_t qsize : 4; + uint64_t cq_err_int : 8; + uint64_t cq_err_int_ena : 8; +}; + +/* NIX instruction header structure */ +struct nix_inst_hdr_s { + uint64_t pf_func : 16; + uint64_t sq : 20; + uint64_t rsvd_63_36 : 28; +}; + +/* NIX i/o virtual address structure */ +struct nix_iova_s { + uint64_t addr : 64; /* W0 */ +}; + +/* NIX IPsec dynamic ordering counter structure */ +struct nix_ipsec_dyno_s { + uint32_t count : 32; 
/* W0 */ +}; + +/* NIX memory value structure */ +struct nix_mem_result_s { + uint64_t v : 1; + uint64_t color : 2; + uint64_t rsvd_63_3 : 61; +}; + +/* NIX statistics operation write data structure */ +struct nix_op_q_wdata_s { + uint64_t rsvd_31_0 : 32; + uint64_t q : 20; + uint64_t rsvd_63_52 : 12; +}; + +/* NIX queue interrupt context hardware structure */ +struct nix_qint_hw_s { + uint32_t count : 22; + uint32_t rsvd_30_22 : 9; + uint32_t ena : 1; +}; + +/* NIX receive queue context structure */ +struct nix_rq_ctx_hw_s { + uint64_t ena : 1; + uint64_t sso_ena : 1; + uint64_t ipsech_ena : 1; + uint64_t ena_wqwd : 1; + uint64_t cq : 20; + uint64_t substream : 20; + uint64_t wqe_aura : 20; + uint64_t spb_aura : 20; + uint64_t lpb_aura : 20; + uint64_t sso_grp : 10; + uint64_t sso_tt : 2; + uint64_t pb_caching : 2; + uint64_t wqe_caching : 1; + uint64_t xqe_drop_ena : 1; + uint64_t spb_drop_ena : 1; + uint64_t lpb_drop_ena : 1; + uint64_t wqe_skip : 2; + uint64_t rsvd_127_124 : 4; + uint64_t rsvd_139_128 : 12; + uint64_t spb_sizem1 : 6; + uint64_t rsvd_150_146 : 5; + uint64_t spb_ena : 1; + uint64_t lpb_sizem1 : 12; + uint64_t first_skip : 7; + uint64_t rsvd_171 : 1; + uint64_t later_skip : 6; + uint64_t xqe_imm_size : 6; + uint64_t rsvd_189_184 : 6; + uint64_t xqe_imm_copy : 1; + uint64_t xqe_hdr_split : 1; + uint64_t xqe_drop : 8; + uint64_t xqe_pass : 8; + uint64_t wqe_pool_drop : 8; + uint64_t wqe_pool_pass : 8; + uint64_t spb_aura_drop : 8; + uint64_t spb_aura_pass : 8; + uint64_t spb_pool_drop : 8; + uint64_t spb_pool_pass : 8; + uint64_t lpb_aura_drop : 8; + uint64_t lpb_aura_pass : 8; + uint64_t lpb_pool_drop : 8; + uint64_t lpb_pool_pass : 8; + uint64_t rsvd_319_288 : 32; + uint64_t ltag : 24; + uint64_t good_utag : 8; + uint64_t bad_utag : 8; + uint64_t flow_tagw : 6; + uint64_t rsvd_383_366 : 18; + uint64_t octs : 48; + uint64_t rsvd_447_432 : 16; + uint64_t pkts : 48; + uint64_t rsvd_511_496 : 16; + uint64_t drop_octs : 48; + uint64_t rsvd_575_560 : 16; + uint64_t drop_pkts : 48; + uint64_t rsvd_639_624 : 16; + uint64_t re_pkts : 48; + uint64_t rsvd_702_688 : 15; + uint64_t ena_copy : 1; + uint64_t rsvd_739_704 : 36; + uint64_t rq_int : 8; + uint64_t rq_int_ena : 8; + uint64_t qint_idx : 7; + uint64_t rsvd_767_763 : 5; + uint64_t rsvd_831_768 : 64;/* W12 */ + uint64_t rsvd_895_832 : 64;/* W13 */ + uint64_t rsvd_959_896 : 64;/* W14 */ + uint64_t rsvd_1023_960 : 64;/* W15 */ +}; + +/* NIX receive queue context structure */ +struct nix_rq_ctx_s { + uint64_t ena : 1; + uint64_t sso_ena : 1; + uint64_t ipsech_ena : 1; + uint64_t ena_wqwd : 1; + uint64_t cq : 20; + uint64_t substream : 20; + uint64_t wqe_aura : 20; + uint64_t spb_aura : 20; + uint64_t lpb_aura : 20; + uint64_t sso_grp : 10; + uint64_t sso_tt : 2; + uint64_t pb_caching : 2; + uint64_t wqe_caching : 1; + uint64_t xqe_drop_ena : 1; + uint64_t spb_drop_ena : 1; + uint64_t lpb_drop_ena : 1; + uint64_t rsvd_127_122 : 6; + uint64_t rsvd_139_128 : 12; + uint64_t spb_sizem1 : 6; + uint64_t wqe_skip : 2; + uint64_t rsvd_150_148 : 3; + uint64_t spb_ena : 1; + uint64_t lpb_sizem1 : 12; + uint64_t first_skip : 7; + uint64_t rsvd_171 : 1; + uint64_t later_skip : 6; + uint64_t xqe_imm_size : 6; + uint64_t rsvd_189_184 : 6; + uint64_t xqe_imm_copy : 1; + uint64_t xqe_hdr_split : 1; + uint64_t xqe_drop : 8; + uint64_t xqe_pass : 8; + uint64_t wqe_pool_drop : 8; + uint64_t wqe_pool_pass : 8; + uint64_t spb_aura_drop : 8; + uint64_t spb_aura_pass : 8; + uint64_t spb_pool_drop : 8; + uint64_t spb_pool_pass : 8; + uint64_t lpb_aura_drop 
: 8; + uint64_t lpb_aura_pass : 8; + uint64_t lpb_pool_drop : 8; + uint64_t lpb_pool_pass : 8; + uint64_t rsvd_291_288 : 4; + uint64_t rq_int : 8; + uint64_t rq_int_ena : 8; + uint64_t qint_idx : 7; + uint64_t rsvd_319_315 : 5; + uint64_t ltag : 24; + uint64_t good_utag : 8; + uint64_t bad_utag : 8; + uint64_t flow_tagw : 6; + uint64_t rsvd_383_366 : 18; + uint64_t octs : 48; + uint64_t rsvd_447_432 : 16; + uint64_t pkts : 48; + uint64_t rsvd_511_496 : 16; + uint64_t drop_octs : 48; + uint64_t rsvd_575_560 : 16; + uint64_t drop_pkts : 48; + uint64_t rsvd_639_624 : 16; + uint64_t re_pkts : 48; + uint64_t rsvd_703_688 : 16; + uint64_t rsvd_767_704 : 64;/* W11 */ + uint64_t rsvd_831_768 : 64;/* W12 */ + uint64_t rsvd_895_832 : 64;/* W13 */ + uint64_t rsvd_959_896 : 64;/* W14 */ + uint64_t rsvd_1023_960 : 64;/* W15 */ +}; + +/* NIX receive side scaling entry structure */ +struct nix_rsse_s { + uint32_t rq : 20; + uint32_t rsvd_31_20 : 12; +}; + +/* NIX receive action structure */ +struct nix_rx_action_s { + uint64_t op : 4; + uint64_t pf_func : 16; + uint64_t index : 20; + uint64_t match_id : 16; + uint64_t flow_key_alg : 5; + uint64_t rsvd_63_61 : 3; +}; + +/* NIX receive immediate sub descriptor structure */ +struct nix_rx_imm_s { + uint64_t size : 16; + uint64_t apad : 3; + uint64_t rsvd_59_19 : 41; + uint64_t subdc : 4; +}; + +/* NIX receive multicast/mirror entry structure */ +struct nix_rx_mce_s { + uint64_t op : 2; + uint64_t rsvd_2 : 1; + uint64_t eol : 1; + uint64_t index : 20; + uint64_t rsvd_31_24 : 8; + uint64_t pf_func : 16; + uint64_t next : 16; +}; + +/* NIX receive parse structure */ +struct nix_rx_parse_s { + uint64_t chan : 12; + uint64_t desc_sizem1 : 5; + uint64_t imm_copy : 1; + uint64_t express : 1; + uint64_t wqwd : 1; + uint64_t errlev : 4; + uint64_t errcode : 8; + uint64_t latype : 4; + uint64_t lbtype : 4; + uint64_t lctype : 4; + uint64_t ldtype : 4; + uint64_t letype : 4; + uint64_t lftype : 4; + uint64_t lgtype : 4; + uint64_t lhtype : 4; + uint64_t pkt_lenm1 : 16; + uint64_t l2m : 1; + uint64_t l2b : 1; + uint64_t l3m : 1; + uint64_t l3b : 1; + uint64_t vtag0_valid : 1; + uint64_t vtag0_gone : 1; + uint64_t vtag1_valid : 1; + uint64_t vtag1_gone : 1; + uint64_t pkind : 6; + uint64_t rsvd_95_94 : 2; + uint64_t vtag0_tci : 16; + uint64_t vtag1_tci : 16; + uint64_t laflags : 8; + uint64_t lbflags : 8; + uint64_t lcflags : 8; + uint64_t ldflags : 8; + uint64_t leflags : 8; + uint64_t lfflags : 8; + uint64_t lgflags : 8; + uint64_t lhflags : 8; + uint64_t eoh_ptr : 8; + uint64_t wqe_aura : 20; + uint64_t pb_aura : 20; + uint64_t match_id : 16; + uint64_t laptr : 8; + uint64_t lbptr : 8; + uint64_t lcptr : 8; + uint64_t ldptr : 8; + uint64_t leptr : 8; + uint64_t lfptr : 8; + uint64_t lgptr : 8; + uint64_t lhptr : 8; + uint64_t vtag0_ptr : 8; + uint64_t vtag1_ptr : 8; + uint64_t flow_key_alg : 5; + uint64_t rsvd_383_341 : 43; + uint64_t rsvd_447_384 : 64; /* W6 */ +}; + +/* NIX receive scatter/gather sub descriptor structure */ +struct nix_rx_sg_s { + uint64_t seg1_size : 16; + uint64_t seg2_size : 16; + uint64_t seg3_size : 16; + uint64_t segs : 2; + uint64_t rsvd_59_50 : 10; + uint64_t subdc : 4; +}; + +/* NIX receive vtag action structure */ +struct nix_rx_vtag_action_s { + uint64_t vtag0_relptr : 8; + uint64_t vtag0_lid : 3; + uint64_t rsvd_11 : 1; + uint64_t vtag0_type : 3; + uint64_t vtag0_valid : 1; + uint64_t rsvd_31_16 : 16; + uint64_t vtag1_relptr : 8; + uint64_t vtag1_lid : 3; + uint64_t rsvd_43 : 1; + uint64_t vtag1_type : 3; + uint64_t vtag1_valid : 1; + 
uint64_t rsvd_63_48 : 16; +}; + +/* NIX send completion structure */ +struct nix_send_comp_s { + uint64_t status : 8; + uint64_t sqe_id : 16; + uint64_t rsvd_63_24 : 40; +}; + +/* NIX send CRC sub descriptor structure */ +struct nix_send_crc_s { + uint64_t size : 16; + uint64_t start : 16; + uint64_t insert : 16; + uint64_t rsvd_57_48 : 10; + uint64_t alg : 2; + uint64_t subdc : 4; + uint64_t iv : 32; + uint64_t rsvd_127_96 : 32; +}; + +/* NIX send extended header sub descriptor structure */ +RTE_STD_C11 +union nix_send_ext_w0_u { + uint64_t u; + struct { + uint64_t lso_mps : 14; + uint64_t lso : 1; + uint64_t tstmp : 1; + uint64_t lso_sb : 8; + uint64_t lso_format : 5; + uint64_t rsvd_31_29 : 3; + uint64_t shp_chg : 9; + uint64_t shp_dis : 1; + uint64_t shp_ra : 2; + uint64_t markptr : 8; + uint64_t markform : 7; + uint64_t mark_en : 1; + uint64_t subdc : 4; + }; +}; + +RTE_STD_C11 +union nix_send_ext_w1_u { + uint64_t u; + struct { + uint64_t vlan0_ins_ptr : 8; + uint64_t vlan0_ins_tci : 16; + uint64_t vlan1_ins_ptr : 8; + uint64_t vlan1_ins_tci : 16; + uint64_t vlan0_ins_ena : 1; + uint64_t vlan1_ins_ena : 1; + uint64_t rsvd_127_114 : 14; + }; +}; + +struct nix_send_ext_s { + union nix_send_ext_w0_u w0; + union nix_send_ext_w1_u w1; +}; + +/* NIX send header sub descriptor structure */ +RTE_STD_C11 +union nix_send_hdr_w0_u { + uint64_t u; + struct { + uint64_t total : 18; + uint64_t rsvd_18 : 1; + uint64_t df : 1; + uint64_t aura : 20; + uint64_t sizem1 : 3; + uint64_t pnc : 1; + uint64_t sq : 20; + }; +}; + +RTE_STD_C11 +union nix_send_hdr_w1_u { + uint64_t u; + struct { + uint64_t ol3ptr : 8; + uint64_t ol4ptr : 8; + uint64_t il3ptr : 8; + uint64_t il4ptr : 8; + uint64_t ol3type : 4; + uint64_t ol4type : 4; + uint64_t il3type : 4; + uint64_t il4type : 4; + uint64_t sqe_id : 16; + }; +}; + +struct nix_send_hdr_s { + union nix_send_hdr_w0_u w0; + union nix_send_hdr_w1_u w1; +}; + +/* NIX send immediate sub descriptor structure */ +struct nix_send_imm_s { + uint64_t size : 16; + uint64_t apad : 3; + uint64_t rsvd_59_19 : 41; + uint64_t subdc : 4; +}; + +/* NIX send jump sub descriptor structure */ +struct nix_send_jump_s { + uint64_t sizem1 : 7; + uint64_t rsvd_13_7 : 7; + uint64_t ld_type : 2; + uint64_t aura : 20; + uint64_t rsvd_58_36 : 23; + uint64_t f : 1; + uint64_t subdc : 4; + uint64_t addr : 64; /* W1 */ +}; + +/* NIX send memory sub descriptor structure */ +struct nix_send_mem_s { + uint64_t offset : 16; + uint64_t rsvd_52_16 : 37; + uint64_t wmem : 1; + uint64_t dsz : 2; + uint64_t alg : 4; + uint64_t subdc : 4; + uint64_t addr : 64; /* W1 */ +}; + +/* NIX send scatter/gather sub descriptor structure */ +RTE_STD_C11 +union nix_send_sg_s { + uint64_t u; + struct { + uint64_t seg1_size : 16; + uint64_t seg2_size : 16; + uint64_t seg3_size : 16; + uint64_t segs : 2; + uint64_t rsvd_54_50 : 5; + uint64_t i1 : 1; + uint64_t i2 : 1; + uint64_t i3 : 1; + uint64_t ld_type : 2; + uint64_t subdc : 4; + }; +}; + +/* NIX send work sub descriptor structure */ +struct nix_send_work_s { + uint64_t tag : 32; + uint64_t tt : 2; + uint64_t grp : 10; + uint64_t rsvd_59_44 : 16; + uint64_t subdc : 4; + uint64_t addr : 64; /* W1 */ +}; + +/* NIX sq context hardware structure */ +struct nix_sq_ctx_hw_s { + uint64_t ena : 1; + uint64_t substream : 20; + uint64_t max_sqe_size : 2; + uint64_t sqe_way_mask : 16; + uint64_t sqb_aura : 20; + uint64_t gbl_rsvd1 : 5; + uint64_t cq_id : 20; + uint64_t cq_ena : 1; + uint64_t qint_idx : 6; + uint64_t gbl_rsvd2 : 1; + uint64_t sq_int : 8; + uint64_t sq_int_ena 
: 8; + uint64_t xoff : 1; + uint64_t sqe_stype : 2; + uint64_t gbl_rsvd : 17; + uint64_t head_sqb : 64;/* W2 */ + uint64_t head_offset : 6; + uint64_t sqb_dequeue_count : 16; + uint64_t default_chan : 12; + uint64_t sdp_mcast : 1; + uint64_t sso_ena : 1; + uint64_t dse_rsvd1 : 28; + uint64_t sqb_enqueue_count : 16; + uint64_t tail_offset : 6; + uint64_t lmt_dis : 1; + uint64_t smq_rr_quantum : 24; + uint64_t dnq_rsvd1 : 17; + uint64_t tail_sqb : 64;/* W5 */ + uint64_t next_sqb : 64;/* W6 */ + uint64_t mnq_dis : 1; + uint64_t smq : 9; + uint64_t smq_pend : 1; + uint64_t smq_next_sq : 20; + uint64_t smq_next_sq_vld : 1; + uint64_t scm1_rsvd2 : 32; + uint64_t smenq_sqb : 64;/* W8 */ + uint64_t smenq_offset : 6; + uint64_t cq_limit : 8; + uint64_t smq_rr_count : 25; + uint64_t scm_lso_rem : 18; + uint64_t scm_dq_rsvd0 : 7; + uint64_t smq_lso_segnum : 8; + uint64_t vfi_lso_total : 18; + uint64_t vfi_lso_sizem1 : 3; + uint64_t vfi_lso_sb : 8; + uint64_t vfi_lso_mps : 14; + uint64_t vfi_lso_vlan0_ins_ena : 1; + uint64_t vfi_lso_vlan1_ins_ena : 1; + uint64_t vfi_lso_vld : 1; + uint64_t smenq_next_sqb_vld : 1; + uint64_t scm_dq_rsvd1 : 9; + uint64_t smenq_next_sqb : 64;/* W11 */ + uint64_t seb_rsvd1 : 64;/* W12 */ + uint64_t drop_pkts : 48; + uint64_t drop_octs_lsw : 16; + uint64_t drop_octs_msw : 32; + uint64_t pkts_lsw : 32; + uint64_t pkts_msw : 16; + uint64_t octs : 48; +}; + +/* NIX send queue context structure */ +struct nix_sq_ctx_s { + uint64_t ena : 1; + uint64_t qint_idx : 6; + uint64_t substream : 20; + uint64_t sdp_mcast : 1; + uint64_t cq : 20; + uint64_t sqe_way_mask : 16; + uint64_t smq : 9; + uint64_t cq_ena : 1; + uint64_t xoff : 1; + uint64_t sso_ena : 1; + uint64_t smq_rr_quantum : 24; + uint64_t default_chan : 12; + uint64_t sqb_count : 16; + uint64_t smq_rr_count : 25; + uint64_t sqb_aura : 20; + uint64_t sq_int : 8; + uint64_t sq_int_ena : 8; + uint64_t sqe_stype : 2; + uint64_t rsvd_191 : 1; + uint64_t max_sqe_size : 2; + uint64_t cq_limit : 8; + uint64_t lmt_dis : 1; + uint64_t mnq_dis : 1; + uint64_t smq_next_sq : 20; + uint64_t smq_lso_segnum : 8; + uint64_t tail_offset : 6; + uint64_t smenq_offset : 6; + uint64_t head_offset : 6; + uint64_t smenq_next_sqb_vld : 1; + uint64_t smq_pend : 1; + uint64_t smq_next_sq_vld : 1; + uint64_t rsvd_255_253 : 3; + uint64_t next_sqb : 64;/* W4 */ + uint64_t tail_sqb : 64;/* W5 */ + uint64_t smenq_sqb : 64;/* W6 */ + uint64_t smenq_next_sqb : 64;/* W7 */ + uint64_t head_sqb : 64;/* W8 */ + uint64_t rsvd_583_576 : 8; + uint64_t vfi_lso_total : 18; + uint64_t vfi_lso_sizem1 : 3; + uint64_t vfi_lso_sb : 8; + uint64_t vfi_lso_mps : 14; + uint64_t vfi_lso_vlan0_ins_ena : 1; + uint64_t vfi_lso_vlan1_ins_ena : 1; + uint64_t vfi_lso_vld : 1; + uint64_t rsvd_639_630 : 10; + uint64_t scm_lso_rem : 18; + uint64_t rsvd_703_658 : 46; + uint64_t octs : 48; + uint64_t rsvd_767_752 : 16; + uint64_t pkts : 48; + uint64_t rsvd_831_816 : 16; + uint64_t rsvd_895_832 : 64;/* W13 */ + uint64_t drop_octs : 48; + uint64_t rsvd_959_944 : 16; + uint64_t drop_pkts : 48; + uint64_t rsvd_1023_1008 : 16; +}; + +/* NIX transmit action structure */ +struct nix_tx_action_s { + uint64_t op : 4; + uint64_t rsvd_11_4 : 8; + uint64_t index : 20; + uint64_t match_id : 16; + uint64_t rsvd_63_48 : 16; +}; + +/* NIX transmit vtag action structure */ +struct nix_tx_vtag_action_s { + uint64_t vtag0_relptr : 8; + uint64_t vtag0_lid : 3; + uint64_t rsvd_11 : 1; + uint64_t vtag0_op : 2; + uint64_t rsvd_15_14 : 2; + uint64_t vtag0_def : 10; + uint64_t rsvd_31_26 : 6; + uint64_t 
vtag1_relptr : 8; + uint64_t vtag1_lid : 3; + uint64_t rsvd_43 : 1; + uint64_t vtag1_op : 2; + uint64_t rsvd_47_46 : 2; + uint64_t vtag1_def : 10; + uint64_t rsvd_63_58 : 6; +}; + +/* NIX work queue entry header structure */ +struct nix_wqe_hdr_s { + uint64_t tag : 32; + uint64_t tt : 2; + uint64_t grp : 10; + uint64_t node : 2; + uint64_t q : 14; + uint64_t wqe_type : 4; +}; + +/* NIX Rx flow key algorithm field structure */ +struct nix_rx_flowkey_alg { + uint64_t key_offset :6; + uint64_t ln_mask :1; + uint64_t fn_mask :1; + uint64_t hdr_offset :8; + uint64_t bytesm1 :5; + uint64_t lid :3; + uint64_t reserved_24_24 :1; + uint64_t ena :1; + uint64_t sel_chan :1; + uint64_t ltype_mask :4; + uint64_t ltype_match :4; + uint64_t reserved_35_63 :29; +}; + +/* NIX LSO format field structure */ +struct nix_lso_format { + uint64_t offset : 8; + uint64_t layer : 2; + uint64_t rsvd_10_11 : 2; + uint64_t sizem1 : 2; + uint64_t rsvd_14_15 : 2; + uint64_t alg : 3; + uint64_t rsvd_19_63 : 45; +}; + +#endif /* __OTX2_NIX_HW_H__ */ diff --git a/drivers/common/octeontx2/hw/otx2_npa.h b/drivers/common/octeontx2/hw/otx2_npa.h new file mode 100644 index 000000000..2224216c9 --- /dev/null +++ b/drivers/common/octeontx2/hw/otx2_npa.h @@ -0,0 +1,305 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2019 Marvell International Ltd. + */ + +#ifndef __OTX2_NPA_HW_H__ +#define __OTX2_NPA_HW_H__ + +/* Register offsets */ + +#define NPA_AF_BLK_RST (0x0ull) +#define NPA_AF_CONST (0x10ull) +#define NPA_AF_CONST1 (0x18ull) +#define NPA_AF_LF_RST (0x20ull) +#define NPA_AF_GEN_CFG (0x30ull) +#define NPA_AF_NDC_CFG (0x40ull) +#define NPA_AF_NDC_SYNC (0x50ull) +#define NPA_AF_INP_CTL (0xd0ull) +#define NPA_AF_ACTIVE_CYCLES_PC (0xf0ull) +#define NPA_AF_AVG_DELAY (0x100ull) +#define NPA_AF_GEN_INT (0x140ull) +#define NPA_AF_GEN_INT_W1S (0x148ull) +#define NPA_AF_GEN_INT_ENA_W1S (0x150ull) +#define NPA_AF_GEN_INT_ENA_W1C (0x158ull) +#define NPA_AF_RVU_INT (0x160ull) +#define NPA_AF_RVU_INT_W1S (0x168ull) +#define NPA_AF_RVU_INT_ENA_W1S (0x170ull) +#define NPA_AF_RVU_INT_ENA_W1C (0x178ull) +#define NPA_AF_ERR_INT (0x180ull) +#define NPA_AF_ERR_INT_W1S (0x188ull) +#define NPA_AF_ERR_INT_ENA_W1S (0x190ull) +#define NPA_AF_ERR_INT_ENA_W1C (0x198ull) +#define NPA_AF_RAS (0x1a0ull) +#define NPA_AF_RAS_W1S (0x1a8ull) +#define NPA_AF_RAS_ENA_W1S (0x1b0ull) +#define NPA_AF_RAS_ENA_W1C (0x1b8ull) +#define NPA_AF_AQ_CFG (0x600ull) +#define NPA_AF_AQ_BASE (0x610ull) +#define NPA_AF_AQ_STATUS (0x620ull) +#define NPA_AF_AQ_DOOR (0x630ull) +#define NPA_AF_AQ_DONE_WAIT (0x640ull) +#define NPA_AF_AQ_DONE (0x650ull) +#define NPA_AF_AQ_DONE_ACK (0x660ull) +#define NPA_AF_AQ_DONE_TIMER (0x670ull) +#define NPA_AF_AQ_DONE_INT (0x680ull) +#define NPA_AF_AQ_DONE_ENA_W1S (0x690ull) +#define NPA_AF_AQ_DONE_ENA_W1C (0x698ull) +#define NPA_AF_LFX_AURAS_CFG(a) (0x4000ull | (uint64_t)(a) << 18) +#define NPA_AF_LFX_LOC_AURAS_BASE(a) (0x4010ull | (uint64_t)(a) << 18) +#define NPA_AF_LFX_QINTS_CFG(a) (0x4100ull | (uint64_t)(a) << 18) +#define NPA_AF_LFX_QINTS_BASE(a) (0x4110ull | (uint64_t)(a) << 18) +#define NPA_PRIV_AF_INT_CFG (0x10000ull) +#define NPA_PRIV_LFX_CFG(a) (0x10010ull | (uint64_t)(a) << 8) +#define NPA_PRIV_LFX_INT_CFG(a) (0x10020ull | (uint64_t)(a) << 8) +#define NPA_AF_RVU_LF_CFG_DEBUG (0x10030ull) +#define NPA_AF_DTX_FILTER_CTL (0x10040ull) + +#define NPA_LF_AURA_OP_ALLOCX(a) (0x10ull | (uint64_t)(a) << 3) +#define NPA_LF_AURA_OP_FREE0 (0x20ull) +#define NPA_LF_AURA_OP_FREE1 (0x28ull) +#define NPA_LF_AURA_OP_CNT (0x30ull) +#define 
NPA_LF_AURA_OP_LIMIT (0x50ull) +#define NPA_LF_AURA_OP_INT (0x60ull) +#define NPA_LF_AURA_OP_THRESH (0x70ull) +#define NPA_LF_POOL_OP_PC (0x100ull) +#define NPA_LF_POOL_OP_AVAILABLE (0x110ull) +#define NPA_LF_POOL_OP_PTR_START0 (0x120ull) +#define NPA_LF_POOL_OP_PTR_START1 (0x128ull) +#define NPA_LF_POOL_OP_PTR_END0 (0x130ull) +#define NPA_LF_POOL_OP_PTR_END1 (0x138ull) +#define NPA_LF_POOL_OP_INT (0x160ull) +#define NPA_LF_POOL_OP_THRESH (0x170ull) +#define NPA_LF_ERR_INT (0x200ull) +#define NPA_LF_ERR_INT_W1S (0x208ull) +#define NPA_LF_ERR_INT_ENA_W1C (0x210ull) +#define NPA_LF_ERR_INT_ENA_W1S (0x218ull) +#define NPA_LF_RAS (0x220ull) +#define NPA_LF_RAS_W1S (0x228ull) +#define NPA_LF_RAS_ENA_W1C (0x230ull) +#define NPA_LF_RAS_ENA_W1S (0x238ull) +#define NPA_LF_QINTX_CNT(a) (0x300ull | (uint64_t)(a) << 12) +#define NPA_LF_QINTX_INT(a) (0x310ull | (uint64_t)(a) << 12) +#define NPA_LF_QINTX_ENA_W1S(a) (0x320ull | (uint64_t)(a) << 12) +#define NPA_LF_QINTX_ENA_W1C(a) (0x330ull | (uint64_t)(a) << 12) + + +/* Enum offsets */ + +#define NPA_AQ_COMP_NOTDONE (0x0ull) +#define NPA_AQ_COMP_GOOD (0x1ull) +#define NPA_AQ_COMP_SWERR (0x2ull) +#define NPA_AQ_COMP_CTX_POISON (0x3ull) +#define NPA_AQ_COMP_CTX_FAULT (0x4ull) +#define NPA_AQ_COMP_LOCKERR (0x5ull) + +#define NPA_AF_INT_VEC_RVU (0x0ull) +#define NPA_AF_INT_VEC_GEN (0x1ull) +#define NPA_AF_INT_VEC_AQ_DONE (0x2ull) +#define NPA_AF_INT_VEC_AF_ERR (0x3ull) +#define NPA_AF_INT_VEC_POISON (0x4ull) + +#define NPA_AQ_INSTOP_NOP (0x0ull) +#define NPA_AQ_INSTOP_INIT (0x1ull) +#define NPA_AQ_INSTOP_WRITE (0x2ull) +#define NPA_AQ_INSTOP_READ (0x3ull) +#define NPA_AQ_INSTOP_LOCK (0x4ull) +#define NPA_AQ_INSTOP_UNLOCK (0x5ull) + +#define NPA_AQ_CTYPE_AURA (0x0ull) +#define NPA_AQ_CTYPE_POOL (0x1ull) + +#define NPA_BPINTF_NIX0_RX (0x0ull) +#define NPA_BPINTF_NIX1_RX (0x1ull) + +#define NPA_AURA_ERR_INT_AURA_FREE_UNDER (0x0ull) +#define NPA_AURA_ERR_INT_AURA_ADD_OVER (0x1ull) +#define NPA_AURA_ERR_INT_AURA_ADD_UNDER (0x2ull) +#define NPA_AURA_ERR_INT_POOL_DIS (0x3ull) +#define NPA_AURA_ERR_INT_R4 (0x4ull) +#define NPA_AURA_ERR_INT_R5 (0x5ull) +#define NPA_AURA_ERR_INT_R6 (0x6ull) +#define NPA_AURA_ERR_INT_R7 (0x7ull) + +#define NPA_LF_INT_VEC_ERR_INT (0x40ull) +#define NPA_LF_INT_VEC_POISON (0x41ull) +#define NPA_LF_INT_VEC_QINT_END (0x3full) +#define NPA_LF_INT_VEC_QINT_START (0x0ull) + +#define NPA_INPQ_SSO (0x4ull) +#define NPA_INPQ_TIM (0x5ull) +#define NPA_INPQ_DPI (0x6ull) +#define NPA_INPQ_AURA_OP (0xeull) +#define NPA_INPQ_INTERNAL_RSV (0xfull) +#define NPA_INPQ_NIX0_RX (0x0ull) +#define NPA_INPQ_NIX1_RX (0x2ull) +#define NPA_INPQ_NIX0_TX (0x1ull) +#define NPA_INPQ_NIX1_TX (0x3ull) +#define NPA_INPQ_R_END (0xdull) +#define NPA_INPQ_R_START (0x7ull) + +#define NPA_POOL_ERR_INT_OVFLS (0x0ull) +#define NPA_POOL_ERR_INT_RANGE (0x1ull) +#define NPA_POOL_ERR_INT_PERR (0x2ull) +#define NPA_POOL_ERR_INT_R3 (0x3ull) +#define NPA_POOL_ERR_INT_R4 (0x4ull) +#define NPA_POOL_ERR_INT_R5 (0x5ull) +#define NPA_POOL_ERR_INT_R6 (0x6ull) +#define NPA_POOL_ERR_INT_R7 (0x7ull) + +#define NPA_NDC0_PORT_AURA0 (0x0ull) +#define NPA_NDC0_PORT_AURA1 (0x1ull) +#define NPA_NDC0_PORT_POOL0 (0x2ull) +#define NPA_NDC0_PORT_POOL1 (0x3ull) +#define NPA_NDC0_PORT_STACK0 (0x4ull) +#define NPA_NDC0_PORT_STACK1 (0x5ull) + +#define NPA_LF_ERR_INT_AURA_DIS (0x0ull) +#define NPA_LF_ERR_INT_AURA_OOR (0x1ull) +#define NPA_LF_ERR_INT_AURA_FAULT (0xcull) +#define NPA_LF_ERR_INT_POOL_FAULT (0xdull) +#define NPA_LF_ERR_INT_STACK_FAULT (0xeull) +#define NPA_LF_ERR_INT_QINT_FAULT (0xfull) + +/* 
Structures definitions */ + +/* NPA admin queue instruction structure */ +struct npa_aq_inst_s { + uint64_t op : 4; + uint64_t ctype : 4; + uint64_t lf : 9; + uint64_t rsvd_23_17 : 7; + uint64_t cindex : 20; + uint64_t rsvd_62_44 : 19; + uint64_t doneint : 1; + uint64_t res_addr : 64; /* W1 */ +}; + +/* NPA admin queue result structure */ +struct npa_aq_res_s { + uint64_t op : 4; + uint64_t ctype : 4; + uint64_t compcode : 8; + uint64_t doneint : 1; + uint64_t rsvd_63_17 : 47; + uint64_t rsvd_127_64 : 64; /* W1 */ +}; + +/* NPA aura operation write data structure */ +struct npa_aura_op_wdata_s { + uint64_t aura : 20; + uint64_t rsvd_62_20 : 43; + uint64_t drop : 1; +}; + +/* NPA aura context structure */ +struct npa_aura_s { + uint64_t pool_addr : 64;/* W0 */ + uint64_t ena : 1; + uint64_t rsvd_66_65 : 2; + uint64_t pool_caching : 1; + uint64_t pool_way_mask : 16; + uint64_t avg_con : 9; + uint64_t rsvd_93 : 1; + uint64_t pool_drop_ena : 1; + uint64_t aura_drop_ena : 1; + uint64_t bp_ena : 2; + uint64_t rsvd_103_98 : 6; + uint64_t aura_drop : 8; + uint64_t shift : 6; + uint64_t rsvd_119_118 : 2; + uint64_t avg_level : 8; + uint64_t count : 36; + uint64_t rsvd_167_164 : 4; + uint64_t nix0_bpid : 9; + uint64_t rsvd_179_177 : 3; + uint64_t nix1_bpid : 9; + uint64_t rsvd_191_189 : 3; + uint64_t limit : 36; + uint64_t rsvd_231_228 : 4; + uint64_t bp : 8; + uint64_t rsvd_243_240 : 4; + uint64_t fc_ena : 1; + uint64_t fc_up_crossing : 1; + uint64_t fc_stype : 2; + uint64_t fc_hyst_bits : 4; + uint64_t rsvd_255_252 : 4; + uint64_t fc_addr : 64;/* W4 */ + uint64_t pool_drop : 8; + uint64_t update_time : 16; + uint64_t err_int : 8; + uint64_t err_int_ena : 8; + uint64_t thresh_int : 1; + uint64_t thresh_int_ena : 1; + uint64_t thresh_up : 1; + uint64_t rsvd_363 : 1; + uint64_t thresh_qint_idx : 7; + uint64_t rsvd_371 : 1; + uint64_t err_qint_idx : 7; + uint64_t rsvd_383_379 : 5; + uint64_t thresh : 36; + uint64_t rsvd_447_420 : 28; + uint64_t rsvd_511_448 : 64;/* W7 */ +}; + +/* NPA pool context structure */ +struct npa_pool_s { + uint64_t stack_base : 64;/* W0 */ + uint64_t ena : 1; + uint64_t nat_align : 1; + uint64_t rsvd_67_66 : 2; + uint64_t stack_caching : 1; + uint64_t rsvd_71_69 : 3; + uint64_t stack_way_mask : 16; + uint64_t buf_offset : 12; + uint64_t rsvd_103_100 : 4; + uint64_t buf_size : 11; + uint64_t rsvd_127_115 : 13; + uint64_t stack_max_pages : 32; + uint64_t stack_pages : 32; + uint64_t op_pc : 48; + uint64_t rsvd_255_240 : 16; + uint64_t stack_offset : 4; + uint64_t rsvd_263_260 : 4; + uint64_t shift : 6; + uint64_t rsvd_271_270 : 2; + uint64_t avg_level : 8; + uint64_t avg_con : 9; + uint64_t fc_ena : 1; + uint64_t fc_stype : 2; + uint64_t fc_hyst_bits : 4; + uint64_t fc_up_crossing : 1; + uint64_t rsvd_299_297 : 3; + uint64_t update_time : 16; + uint64_t rsvd_319_316 : 4; + uint64_t fc_addr : 64;/* W5 */ + uint64_t ptr_start : 64;/* W6 */ + uint64_t ptr_end : 64;/* W7 */ + uint64_t rsvd_535_512 : 24; + uint64_t err_int : 8; + uint64_t err_int_ena : 8; + uint64_t thresh_int : 1; + uint64_t thresh_int_ena : 1; + uint64_t thresh_up : 1; + uint64_t rsvd_555 : 1; + uint64_t thresh_qint_idx : 7; + uint64_t rsvd_563 : 1; + uint64_t err_qint_idx : 7; + uint64_t rsvd_575_571 : 5; + uint64_t thresh : 36; + uint64_t rsvd_639_612 : 28; + uint64_t rsvd_703_640 : 64;/* W10 */ + uint64_t rsvd_767_704 : 64;/* W11 */ + uint64_t rsvd_831_768 : 64;/* W12 */ + uint64_t rsvd_895_832 : 64;/* W13 */ + uint64_t rsvd_959_896 : 64;/* W14 */ + uint64_t rsvd_1023_960 : 64;/* W15 */ +}; + +/* NPA queue 
interrupt context hardware structure */ +struct npa_qint_hw_s { + uint32_t count : 22; + uint32_t rsvd_30_22 : 9; + uint32_t ena : 1; +}; + +#endif /* __OTX2_NPA_HW_H__ */ diff --git a/drivers/common/octeontx2/hw/otx2_npc.h b/drivers/common/octeontx2/hw/otx2_npc.h new file mode 100644 index 000000000..848d42d34 --- /dev/null +++ b/drivers/common/octeontx2/hw/otx2_npc.h @@ -0,0 +1,472 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2019 Marvell International Ltd. + */ + +#ifndef __OTX2_NPC_HW_H__ +#define __OTX2_NPC_HW_H__ + +/* Register offsets */ + +#define NPC_AF_CFG (0x0ull) +#define NPC_AF_ACTIVE_PC (0x10ull) +#define NPC_AF_CONST (0x20ull) +#define NPC_AF_CONST1 (0x30ull) +#define NPC_AF_BLK_RST (0x40ull) +#define NPC_AF_MCAM_SCRUB_CTL (0xa0ull) +#define NPC_AF_KCAM_SCRUB_CTL (0xb0ull) +#define NPC_AF_KPUX_CFG(a) \ + (0x500ull | (uint64_t)(a) << 3) +#define NPC_AF_PCK_CFG (0x600ull) +#define NPC_AF_PCK_DEF_OL2 (0x610ull) +#define NPC_AF_PCK_DEF_OIP4 (0x620ull) +#define NPC_AF_PCK_DEF_OIP6 (0x630ull) +#define NPC_AF_PCK_DEF_IIP4 (0x640ull) +#define NPC_AF_KEX_LDATAX_FLAGS_CFG(a) \ + (0x800ull | (uint64_t)(a) << 3) +#define NPC_AF_INTFX_KEX_CFG(a) \ + (0x1010ull | (uint64_t)(a) << 8) +#define NPC_AF_PKINDX_ACTION0(a) \ + (0x80000ull | (uint64_t)(a) << 6) +#define NPC_AF_PKINDX_ACTION1(a) \ + (0x80008ull | (uint64_t)(a) << 6) +#define NPC_AF_PKINDX_CPI_DEFX(a, b) \ + (0x80020ull | (uint64_t)(a) << 6 | (uint64_t)(b) << 3) +#define NPC_AF_KPUX_ENTRYX_CAMX(a, b, c) \ + (0x100000ull | (uint64_t)(a) << 14 | (uint64_t)(b) << 6 | \ + (uint64_t)(c) << 3) +#define NPC_AF_KPUX_ENTRYX_ACTION0(a, b) \ + (0x100020ull | (uint64_t)(a) << 14 | (uint64_t)(b) << 6) +#define NPC_AF_KPUX_ENTRYX_ACTION1(a, b) \ + (0x100028ull | (uint64_t)(a) << 14 | (uint64_t)(b) << 6) +#define NPC_AF_KPUX_ENTRY_DISX(a, b) \ + (0x180000ull | (uint64_t)(a) << 6 | (uint64_t)(b) << 3) +#define NPC_AF_CPIX_CFG(a) \ + (0x200000ull | (uint64_t)(a) << 3) +#define NPC_AF_INTFX_LIDX_LTX_LDX_CFG(a, b, c, d) \ + (0x900000ull | (uint64_t)(a) << 16 | (uint64_t)(b) << 12 | \ + (uint64_t)(c) << 5 | (uint64_t)(d) << 3) +#define NPC_AF_INTFX_LDATAX_FLAGSX_CFG(a, b, c) \ + (0x980000ull | (uint64_t)(a) << 16 | (uint64_t)(b) << 12 | \ + (uint64_t)(c) << 3) +#define NPC_AF_MCAMEX_BANKX_CAMX_INTF(a, b, c) \ + (0x1000000ull | (uint64_t)(a) << 10 | (uint64_t)(b) << 6 | \ + (uint64_t)(c) << 3) +#define NPC_AF_MCAMEX_BANKX_CAMX_W0(a, b, c) \ + (0x1000010ull | (uint64_t)(a) << 10 | (uint64_t)(b) << 6 | \ + (uint64_t)(c) << 3) +#define NPC_AF_MCAMEX_BANKX_CAMX_W1(a, b, c) \ + (0x1000020ull | (uint64_t)(a) << 10 | (uint64_t)(b) << 6 | \ + (uint64_t)(c) << 3) +#define NPC_AF_MCAMEX_BANKX_CFG(a, b) \ + (0x1800000ull | (uint64_t)(a) << 8 | (uint64_t)(b) << 4) +#define NPC_AF_MCAMEX_BANKX_STAT_ACT(a, b) \ + (0x1880000ull | (uint64_t)(a) << 8 | (uint64_t)(b) << 4) +#define NPC_AF_MATCH_STATX(a) \ + (0x1880008ull | (uint64_t)(a) << 8) +#define NPC_AF_INTFX_MISS_STAT_ACT(a) \ + (0x1880040ull + (uint64_t)(a) * 0x8) +#define NPC_AF_MCAMEX_BANKX_ACTION(a, b) \ + (0x1900000ull | (uint64_t)(a) << 8 | (uint64_t)(b) << 4) +#define NPC_AF_MCAMEX_BANKX_TAG_ACT(a, b) \ + (0x1900008ull | (uint64_t)(a) << 8 | (uint64_t)(b) << 4) +#define NPC_AF_INTFX_MISS_ACT(a) \ + (0x1a00000ull | (uint64_t)(a) << 4) +#define NPC_AF_INTFX_MISS_TAG_ACT(a) \ + (0x1b00008ull | (uint64_t)(a) << 4) +#define NPC_AF_MCAM_BANKX_HITX(a, b) \ + (0x1c80000ull | (uint64_t)(a) << 8 | (uint64_t)(b) << 4) +#define NPC_AF_LKUP_CTL (0x2000000ull) +#define NPC_AF_LKUP_DATAX(a) \ + 
(0x2000200ull | (uint64_t)(a) << 4) +#define NPC_AF_LKUP_RESULTX(a) \ + (0x2000400ull | (uint64_t)(a) << 4) +#define NPC_AF_INTFX_STAT(a) \ + (0x2000800ull | (uint64_t)(a) << 4) +#define NPC_AF_DBG_CTL (0x3000000ull) +#define NPC_AF_DBG_STATUS (0x3000010ull) +#define NPC_AF_KPUX_DBG(a) \ + (0x3000020ull | (uint64_t)(a) << 8) +#define NPC_AF_IKPU_ERR_CTL (0x3000080ull) +#define NPC_AF_KPUX_ERR_CTL(a) \ + (0x30000a0ull | (uint64_t)(a) << 8) +#define NPC_AF_MCAM_DBG (0x3001000ull) +#define NPC_AF_DBG_DATAX(a) \ + (0x3001400ull | (uint64_t)(a) << 4) +#define NPC_AF_DBG_RESULTX(a) \ + (0x3001800ull | (uint64_t)(a) << 4) + + +/* Enum offsets */ + +#define NPC_INTF_NIX0_RX (0x0ull) +#define NPC_INTF_NIX0_TX (0x1ull) + +#define NPC_LKUPOP_PKT (0x0ull) +#define NPC_LKUPOP_KEY (0x1ull) + +#define NPC_MCAM_KEY_X1 (0x0ull) +#define NPC_MCAM_KEY_X2 (0x1ull) +#define NPC_MCAM_KEY_X4 (0x2ull) + +enum NPC_ERRLEV_E { + NPC_ERRLEV_RE = 0, + NPC_ERRLEV_LA = 1, + NPC_ERRLEV_LB = 2, + NPC_ERRLEV_LC = 3, + NPC_ERRLEV_LD = 4, + NPC_ERRLEV_LE = 5, + NPC_ERRLEV_LF = 6, + NPC_ERRLEV_LG = 7, + NPC_ERRLEV_LH = 8, + NPC_ERRLEV_R9 = 9, + NPC_ERRLEV_R10 = 10, + NPC_ERRLEV_R11 = 11, + NPC_ERRLEV_R12 = 12, + NPC_ERRLEV_R13 = 13, + NPC_ERRLEV_R14 = 14, + NPC_ERRLEV_NIX = 15, + NPC_ERRLEV_ENUM_LAST = 16, +}; + +enum npc_kpu_err_code { + NPC_EC_NOERR = 0, /* has to be zero */ + NPC_EC_UNK, + NPC_EC_IH_LENGTH, + NPC_EC_L2_K1, + NPC_EC_L2_K2, + NPC_EC_L2_K3, + NPC_EC_L2_K3_ETYPE_UNK, + NPC_EC_L2_K4, + NPC_EC_L2_MPLS_2MANY, + NPC_EC_MPLS_UNK, + NPC_EC_NSH_UNK, + NPC_EC_IP_TTL_0, + NPC_EC_IP_FRAG_OFFSET_1, + NPC_EC_IP_VER, + NPC_EC_IP6_HOP_0, + NPC_EC_IP6_VER, + NPC_EC_TCP_FLAGS_FIN_ONLY, + NPC_EC_TCP_FLAGS_ZERO, + NPC_EC_TCP_FLAGS_RST_FIN, + NPC_EC_TCP_FLAGS_URG_SYN, + NPC_EC_TCP_FLAGS_RST_SYN, + NPC_EC_TCP_FLAGS_SYN_FIN, + NPC_EC_VXLAN, + NPC_EC_NVGRE, + NPC_EC_GRE, + NPC_EC_GRE_VER1, + NPC_EC_L4, + NPC_EC_OIP4_CSUM, + NPC_EC_IIP4_CSUM, + NPC_EC_LAST /* has to be the last item */ +}; + +enum NPC_LID_E { + NPC_LID_LA = 0, + NPC_LID_LB, + NPC_LID_LC, + NPC_LID_LD, + NPC_LID_LE, + NPC_LID_LF, + NPC_LID_LG, + NPC_LID_LH, +}; + +#define NPC_LT_NA 0 + +enum npc_kpu_la_ltype { + NPC_LT_LA_8023 = 1, + NPC_LT_LA_ETHER, + NPC_LT_LA_IH_NIX_ETHER, + NPC_LT_LA_IH_8_ETHER, + NPC_LT_LA_IH_4_ETHER, + NPC_LT_LA_IH_2_ETHER, +}; + +enum npc_kpu_lb_ltype { + NPC_LT_LB_ETAG = 1, + NPC_LT_LB_CTAG, + NPC_LT_LB_STAG, + NPC_LT_LB_BTAG, + NPC_LT_LB_QINQ, + NPC_LT_LB_ITAG, +}; + +enum npc_kpu_lc_ltype { + NPC_LT_LC_IP = 1, + NPC_LT_LC_IP_OPT, + NPC_LT_LC_IP6, + NPC_LT_LC_IP6_EXT, + NPC_LT_LC_ARP, + NPC_LT_LC_RARP, + NPC_LT_LC_MPLS, + NPC_LT_LC_NSH, + NPC_LT_LC_PTP, + NPC_LT_LC_FCOE, +}; + +/* Don't modify Ltypes upto SCTP, otherwise it will + * effect flow tag calculation and thus RSS. 
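 * (Concretely: NPC_LT_LD_TCP through NPC_LT_LD_SCTP, values 1-4 below, are
 * the ltype values the flow tag/RSS computation relies on, so they must
 * keep these numeric values.)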
+ */ +enum npc_kpu_ld_ltype { + NPC_LT_LD_TCP = 1, + NPC_LT_LD_UDP, + NPC_LT_LD_ICMP, + NPC_LT_LD_SCTP, + NPC_LT_LD_ICMP6, + NPC_LT_LD_IGMP = 8, + NPC_LT_LD_ESP, + NPC_LT_LD_AH, + NPC_LT_LD_GRE, + NPC_LT_LD_NVGRE, + NPC_LT_LD_NSH, + NPC_LT_LD_TU_MPLS_IN_NSH, + NPC_LT_LD_TU_MPLS_IN_IP, +}; + +enum npc_kpu_le_ltype { + NPC_LT_LE_VXLAN = 1, + NPC_LT_LE_VXLANGPE, + NPC_LT_LE_GENEVE, + NPC_LT_LE_GTPC, + NPC_LT_LE_GTPU, + NPC_LT_LE_NSH, + NPC_LT_LE_TU_MPLS_IN_GRE, + NPC_LT_LE_TU_NSH_IN_GRE, + NPC_LT_LE_TU_MPLS_IN_UDP, +}; + +enum npc_kpu_lf_ltype { + NPC_LT_LF_TU_ETHER = 1, + NPC_LT_LF_TU_PPP, + NPC_LT_LF_TU_MPLS_IN_VXLANGPE, + NPC_LT_LF_TU_NSH_IN_VXLANGPE, + NPC_LT_LF_TU_MPLS_IN_NSH, + NPC_LT_LF_TU_3RD_NSH, +}; + +/* Don't modify Ltypes upto SCTP, otherwise it will + * effect flow tag calculation and thus RSS. + */ +enum npc_kpu_lg_ltype { + NPC_LT_LG_TU_IP = 1, + NPC_LT_LG_TU_IP6, + NPC_LT_LG_TU_ARP, + NPC_LT_LG_TU_ETHER_IN_NSH, +}; + +enum npc_kpu_lh_ltype { + NPC_LT_LH_TU_TCP = 1, + NPC_LT_LH_TU_UDP, + NPC_LT_LH_TU_ICMP, + NPC_LT_LH_TU_SCTP, + NPC_LT_LH_TU_ICMP6, + NPC_LT_LH_TU_IGMP = 8, + NPC_LT_LH_TU_ESP, + NPC_LT_LH_TU_AH, +}; + +/* Structures definitions */ +struct npc_kpu_profile_cam { + uint8_t state; + uint8_t state_mask; + uint16_t dp0; + uint16_t dp0_mask; + uint16_t dp1; + uint16_t dp1_mask; + uint16_t dp2; + uint16_t dp2_mask; +}; + +struct npc_kpu_profile_action { + uint8_t errlev; + uint8_t errcode; + uint8_t dp0_offset; + uint8_t dp1_offset; + uint8_t dp2_offset; + uint8_t bypass_count; + uint8_t parse_done; + uint8_t next_state; + uint8_t ptr_advance; + uint8_t cap_ena; + uint8_t lid; + uint8_t ltype; + uint8_t flags; + uint8_t offset; + uint8_t mask; + uint8_t right; + uint8_t shift; +}; + +struct npc_kpu_profile { + int cam_entries; + int action_entries; + struct npc_kpu_profile_cam *cam; + struct npc_kpu_profile_action *action; +}; + +/* NPC KPU register formats */ +struct npc_kpu_cam { + uint64_t dp0_data : 16; + uint64_t dp1_data : 16; + uint64_t dp2_data : 16; + uint64_t state : 8; + uint64_t rsvd_63_56 : 8; +}; + +struct npc_kpu_action0 { + uint64_t var_len_shift : 3; + uint64_t var_len_right : 1; + uint64_t var_len_mask : 8; + uint64_t var_len_offset : 8; + uint64_t ptr_advance : 8; + uint64_t capture_flags : 8; + uint64_t capture_ltype : 4; + uint64_t capture_lid : 3; + uint64_t rsvd_43 : 1; + uint64_t next_state : 8; + uint64_t parse_done : 1; + uint64_t capture_ena : 1; + uint64_t byp_count : 3; + uint64_t rsvd_63_57 : 7; +}; + +struct npc_kpu_action1 { + uint64_t dp0_offset : 8; + uint64_t dp1_offset : 8; + uint64_t dp2_offset : 8; + uint64_t errcode : 8; + uint64_t errlev : 4; + uint64_t rsvd_63_36 : 28; +}; + +struct npc_kpu_pkind_cpi_def { + uint64_t cpi_base : 10; + uint64_t rsvd_11_10 : 2; + uint64_t add_shift : 3; + uint64_t rsvd_15 : 1; + uint64_t add_mask : 8; + uint64_t add_offset : 8; + uint64_t flags_mask : 8; + uint64_t flags_match : 8; + uint64_t ltype_mask : 4; + uint64_t ltype_match : 4; + uint64_t lid : 3; + uint64_t rsvd_62_59 : 4; + uint64_t ena : 1; +}; + +struct nix_rx_action { + uint64_t op :4; + uint64_t pf_func :16; + uint64_t index :20; + uint64_t match_id :16; + uint64_t flow_key_alg :5; + uint64_t rsvd_63_61 :3; +}; + +struct nix_tx_action { + uint64_t op :4; + uint64_t rsvd_11_4 :8; + uint64_t index :20; + uint64_t match_id :16; + uint64_t rsvd_63_48 :16; +}; + +/* NPC layer parse information structure */ +struct npc_layer_info_s { + uint32_t lptr : 8; + uint32_t flags : 8; + uint32_t ltype : 4; + uint32_t rsvd_31_20 : 12; +}; + +/* NPC 
layer mcam search key extract structure */ +struct npc_layer_kex_s { + uint16_t flags : 8; + uint16_t ltype : 4; + uint16_t rsvd_15_12 : 4; +}; + +/* NPC mcam search key x1 structure */ +struct npc_mcam_key_x1_s { + uint64_t intf : 2; + uint64_t rsvd_63_2 : 62; + uint64_t kw0 : 64; /* W1 */ + uint64_t kw1 : 48; + uint64_t rsvd_191_176 : 16; +}; + +/* NPC mcam search key x2 structure */ +struct npc_mcam_key_x2_s { + uint64_t intf : 2; + uint64_t rsvd_63_2 : 62; + uint64_t kw0 : 64; /* W1 */ + uint64_t kw1 : 64; /* W2 */ + uint64_t kw2 : 64; /* W3 */ + uint64_t kw3 : 32; + uint64_t rsvd_319_288 : 32; +}; + +/* NPC mcam search key x4 structure */ +struct npc_mcam_key_x4_s { + uint64_t intf : 2; + uint64_t rsvd_63_2 : 62; + uint64_t kw0 : 64; /* W1 */ + uint64_t kw1 : 64; /* W2 */ + uint64_t kw2 : 64; /* W3 */ + uint64_t kw3 : 64; /* W4 */ + uint64_t kw4 : 64; /* W5 */ + uint64_t kw5 : 64; /* W6 */ + uint64_t kw6 : 64; /* W7 */ +}; + +/* NPC parse key extract structure */ +struct npc_parse_kex_s { + uint64_t chan : 12; + uint64_t errlev : 4; + uint64_t errcode : 8; + uint64_t l2m : 1; + uint64_t l2b : 1; + uint64_t l3m : 1; + uint64_t l3b : 1; + uint64_t la : 12; + uint64_t lb : 12; + uint64_t lc : 12; + uint64_t ld : 12; + uint64_t le : 12; + uint64_t lf : 12; + uint64_t lg : 12; + uint64_t lh : 12; + uint64_t rsvd_127_124 : 4; +}; + +/* NPC result structure */ +struct npc_result_s { + uint64_t intf : 2; + uint64_t pkind : 6; + uint64_t chan : 12; + uint64_t errlev : 4; + uint64_t errcode : 8; + uint64_t l2m : 1; + uint64_t l2b : 1; + uint64_t l3m : 1; + uint64_t l3b : 1; + uint64_t eoh_ptr : 8; + uint64_t rsvd_63_44 : 20; + uint64_t action : 64; /* W1 */ + uint64_t vtag_action : 64; /* W2 */ + uint64_t la : 20; + uint64_t lb : 20; + uint64_t lc : 20; + uint64_t rsvd_255_252 : 4; + uint64_t ld : 20; + uint64_t le : 20; + uint64_t lf : 20; + uint64_t rsvd_319_316 : 4; + uint64_t lg : 20; + uint64_t lh : 20; + uint64_t rsvd_383_360 : 24; +}; + +#endif /* __OTX2_NPC_HW_H__ */ diff --git a/drivers/common/octeontx2/hw/otx2_rvu.h b/drivers/common/octeontx2/hw/otx2_rvu.h new file mode 100644 index 000000000..f2037ec57 --- /dev/null +++ b/drivers/common/octeontx2/hw/otx2_rvu.h @@ -0,0 +1,212 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2019 Marvell International Ltd. 
+ */ + +#ifndef __OTX2_RVU_HW_H__ +#define __OTX2_RVU_HW_H__ + +/* Register offsets */ + +#define RVU_AF_MSIXTR_BASE (0x10ull) +#define RVU_AF_BLK_RST (0x30ull) +#define RVU_AF_PF_BAR4_ADDR (0x40ull) +#define RVU_AF_RAS (0x100ull) +#define RVU_AF_RAS_W1S (0x108ull) +#define RVU_AF_RAS_ENA_W1S (0x110ull) +#define RVU_AF_RAS_ENA_W1C (0x118ull) +#define RVU_AF_GEN_INT (0x120ull) +#define RVU_AF_GEN_INT_W1S (0x128ull) +#define RVU_AF_GEN_INT_ENA_W1S (0x130ull) +#define RVU_AF_GEN_INT_ENA_W1C (0x138ull) +#define RVU_AF_AFPFX_MBOXX(a, b) \ + (0x2000ull | (uint64_t)(a) << 4 | (uint64_t)(b) << 3) +#define RVU_AF_PFME_STATUS (0x2800ull) +#define RVU_AF_PFTRPEND (0x2810ull) +#define RVU_AF_PFTRPEND_W1S (0x2820ull) +#define RVU_AF_PF_RST (0x2840ull) +#define RVU_AF_HWVF_RST (0x2850ull) +#define RVU_AF_PFAF_MBOX_INT (0x2880ull) +#define RVU_AF_PFAF_MBOX_INT_W1S (0x2888ull) +#define RVU_AF_PFAF_MBOX_INT_ENA_W1S (0x2890ull) +#define RVU_AF_PFAF_MBOX_INT_ENA_W1C (0x2898ull) +#define RVU_AF_PFFLR_INT (0x28a0ull) +#define RVU_AF_PFFLR_INT_W1S (0x28a8ull) +#define RVU_AF_PFFLR_INT_ENA_W1S (0x28b0ull) +#define RVU_AF_PFFLR_INT_ENA_W1C (0x28b8ull) +#define RVU_AF_PFME_INT (0x28c0ull) +#define RVU_AF_PFME_INT_W1S (0x28c8ull) +#define RVU_AF_PFME_INT_ENA_W1S (0x28d0ull) +#define RVU_AF_PFME_INT_ENA_W1C (0x28d8ull) +#define RVU_PRIV_CONST (0x8000000ull) +#define RVU_PRIV_GEN_CFG (0x8000010ull) +#define RVU_PRIV_CLK_CFG (0x8000020ull) +#define RVU_PRIV_ACTIVE_PC (0x8000030ull) +#define RVU_PRIV_PFX_CFG(a) (0x8000100ull | (uint64_t)(a) << 16) +#define RVU_PRIV_PFX_MSIX_CFG(a) (0x8000110ull | (uint64_t)(a) << 16) +#define RVU_PRIV_PFX_ID_CFG(a) (0x8000120ull | (uint64_t)(a) << 16) +#define RVU_PRIV_PFX_INT_CFG(a) (0x8000200ull | (uint64_t)(a) << 16) +#define RVU_PRIV_PFX_NIXX_CFG(a, b) \ + (0x8000300ull | (uint64_t)(a) << 16 | (uint64_t)(b) << 3) +#define RVU_PRIV_PFX_NPA_CFG(a) (0x8000310ull | (uint64_t)(a) << 16) +#define RVU_PRIV_PFX_SSO_CFG(a) (0x8000320ull | (uint64_t)(a) << 16) +#define RVU_PRIV_PFX_SSOW_CFG(a) (0x8000330ull | (uint64_t)(a) << 16) +#define RVU_PRIV_PFX_TIM_CFG(a) (0x8000340ull | (uint64_t)(a) << 16) +#define RVU_PRIV_PFX_CPTX_CFG(a, b) \ + (0x8000350ull | (uint64_t)(a) << 16 | (uint64_t)(b) << 3) +#define RVU_PRIV_BLOCK_TYPEX_REV(a) (0x8000400ull | (uint64_t)(a) << 3) +#define RVU_PRIV_HWVFX_INT_CFG(a) (0x8001280ull | (uint64_t)(a) << 16) +#define RVU_PRIV_HWVFX_NIXX_CFG(a, b) \ + (0x8001300ull | (uint64_t)(a) << 16 | (uint64_t)(b) << 3) +#define RVU_PRIV_HWVFX_NPA_CFG(a) (0x8001310ull | (uint64_t)(a) << 16) +#define RVU_PRIV_HWVFX_SSO_CFG(a) (0x8001320ull | (uint64_t)(a) << 16) +#define RVU_PRIV_HWVFX_SSOW_CFG(a) (0x8001330ull | (uint64_t)(a) << 16) +#define RVU_PRIV_HWVFX_TIM_CFG(a) (0x8001340ull | (uint64_t)(a) << 16) +#define RVU_PRIV_HWVFX_CPTX_CFG(a, b) \ + (0x8001350ull | (uint64_t)(a) << 16 | (uint64_t)(b) << 3) + +#define RVU_PF_VFX_PFVF_MBOXX(a, b) \ + (0x0ull | (uint64_t)(a) << 12 | (uint64_t)(b) << 3) +#define RVU_PF_VF_BAR4_ADDR (0x10ull) +#define RVU_PF_BLOCK_ADDRX_DISC(a) (0x200ull | (uint64_t)(a) << 3) +#define RVU_PF_VFME_STATUSX(a) (0x800ull | (uint64_t)(a) << 3) +#define RVU_PF_VFTRPENDX(a) (0x820ull | (uint64_t)(a) << 3) +#define RVU_PF_VFTRPEND_W1SX(a) (0x840ull | (uint64_t)(a) << 3) +#define RVU_PF_VFPF_MBOX_INTX(a) (0x880ull | (uint64_t)(a) << 3) +#define RVU_PF_VFPF_MBOX_INT_W1SX(a) (0x8a0ull | (uint64_t)(a) << 3) +#define RVU_PF_VFPF_MBOX_INT_ENA_W1SX(a) (0x8c0ull | (uint64_t)(a) << 3) +#define RVU_PF_VFPF_MBOX_INT_ENA_W1CX(a) (0x8e0ull | (uint64_t)(a) << 3) +#define 
RVU_PF_VFFLR_INTX(a) (0x900ull | (uint64_t)(a) << 3) +#define RVU_PF_VFFLR_INT_W1SX(a) (0x920ull | (uint64_t)(a) << 3) +#define RVU_PF_VFFLR_INT_ENA_W1SX(a) (0x940ull | (uint64_t)(a) << 3) +#define RVU_PF_VFFLR_INT_ENA_W1CX(a) (0x960ull | (uint64_t)(a) << 3) +#define RVU_PF_VFME_INTX(a) (0x980ull | (uint64_t)(a) << 3) +#define RVU_PF_VFME_INT_W1SX(a) (0x9a0ull | (uint64_t)(a) << 3) +#define RVU_PF_VFME_INT_ENA_W1SX(a) (0x9c0ull | (uint64_t)(a) << 3) +#define RVU_PF_VFME_INT_ENA_W1CX(a) (0x9e0ull | (uint64_t)(a) << 3) +#define RVU_PF_PFAF_MBOXX(a) (0xc00ull | (uint64_t)(a) << 3) +#define RVU_PF_INT (0xc20ull) +#define RVU_PF_INT_W1S (0xc28ull) +#define RVU_PF_INT_ENA_W1S (0xc30ull) +#define RVU_PF_INT_ENA_W1C (0xc38ull) +#define RVU_PF_MSIX_VECX_ADDR(a) (0x80000ull | (uint64_t)(a) << 4) +#define RVU_PF_MSIX_VECX_CTL(a) (0x80008ull | (uint64_t)(a) << 4) +#define RVU_PF_MSIX_PBAX(a) (0xf0000ull | (uint64_t)(a) << 3) +#define RVU_VF_VFPF_MBOXX(a) (0x0ull | (uint64_t)(a) << 3) +#define RVU_VF_INT (0x20ull) +#define RVU_VF_INT_W1S (0x28ull) +#define RVU_VF_INT_ENA_W1S (0x30ull) +#define RVU_VF_INT_ENA_W1C (0x38ull) +#define RVU_VF_BLOCK_ADDRX_DISC(a) (0x200ull | (uint64_t)(a) << 3) +#define RVU_VF_MSIX_VECX_ADDR(a) (0x80000ull | (uint64_t)(a) << 4) +#define RVU_VF_MSIX_VECX_CTL(a) (0x80008ull | (uint64_t)(a) << 4) +#define RVU_VF_MSIX_PBAX(a) (0xf0000ull | (uint64_t)(a) << 3) + + +/* Enum offsets */ + +#define RVU_BAR_RVU_PF_END_BAR0 (0x84f000000000ull) +#define RVU_BAR_RVU_PF_START_BAR0 (0x840000000000ull) +#define RVU_BAR_RVU_PFX_FUNCX_BAR2(a, b) \ + (0x840200000000ull | ((uint64_t)(a) << 36) | ((uint64_t)(b) << 25)) + +#define RVU_AF_INT_VEC_POISON (0x0ull) +#define RVU_AF_INT_VEC_PFFLR (0x1ull) +#define RVU_AF_INT_VEC_PFME (0x2ull) +#define RVU_AF_INT_VEC_GEN (0x3ull) +#define RVU_AF_INT_VEC_MBOX (0x4ull) + +#define RVU_BLOCK_TYPE_RVUM (0x0ull) +#define RVU_BLOCK_TYPE_LMT (0x2ull) +#define RVU_BLOCK_TYPE_NIX (0x3ull) +#define RVU_BLOCK_TYPE_NPA (0x4ull) +#define RVU_BLOCK_TYPE_NPC (0x5ull) +#define RVU_BLOCK_TYPE_SSO (0x6ull) +#define RVU_BLOCK_TYPE_SSOW (0x7ull) +#define RVU_BLOCK_TYPE_TIM (0x8ull) +#define RVU_BLOCK_TYPE_CPT (0x9ull) +#define RVU_BLOCK_TYPE_NDC (0xaull) +#define RVU_BLOCK_TYPE_DDF (0xbull) +#define RVU_BLOCK_TYPE_ZIP (0xcull) +#define RVU_BLOCK_TYPE_RAD (0xdull) +#define RVU_BLOCK_TYPE_DFA (0xeull) +#define RVU_BLOCK_TYPE_HNA (0xfull) + +#define RVU_BLOCK_ADDR_RVUM (0x0ull) +#define RVU_BLOCK_ADDR_LMT (0x1ull) +#define RVU_BLOCK_ADDR_NPA (0x3ull) +#define RVU_BLOCK_ADDR_NPC (0x6ull) +#define RVU_BLOCK_ADDR_SSO (0x7ull) +#define RVU_BLOCK_ADDR_SSOW (0x8ull) +#define RVU_BLOCK_ADDR_TIM (0x9ull) +#define RVU_BLOCK_ADDR_NIX0 (0x4ull) +#define RVU_BLOCK_ADDR_CPT0 (0xaull) +#define RVU_BLOCK_ADDR_NDC0 (0xcull) +#define RVU_BLOCK_ADDR_NDC1 (0xdull) +#define RVU_BLOCK_ADDR_NDC2 (0xeull) +#define RVU_BLOCK_ADDR_R_END (0x1full) +#define RVU_BLOCK_ADDR_R_START (0x14ull) + +#define RVU_VF_INT_VEC_MBOX (0x0ull) + +#define RVU_PF_INT_VEC_AFPF_MBOX (0x6ull) +#define RVU_PF_INT_VEC_VFFLR0 (0x0ull) +#define RVU_PF_INT_VEC_VFFLR1 (0x1ull) +#define RVU_PF_INT_VEC_VFME0 (0x2ull) +#define RVU_PF_INT_VEC_VFME1 (0x3ull) +#define RVU_PF_INT_VEC_VFPF_MBOX0 (0x4ull) +#define RVU_PF_INT_VEC_VFPF_MBOX1 (0x5ull) + + +#define AF_BAR2_ALIASX_SIZE (0x100000ull) + +#define TIM_AF_BAR2_SEL (0x9000000ull) +#define SSO_AF_BAR2_SEL (0x9000000ull) +#define NIX_AF_BAR2_SEL (0x9000000ull) +#define SSOW_AF_BAR2_SEL (0x9000000ull) +#define NPA_AF_BAR2_SEL (0x9000000ull) +#define CPT_AF_BAR2_SEL (0x9000000ull) 
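/*
 * Editorial illustration, not part of this patch: the macros in this file
 * only compute register byte offsets; drivers are expected to add them to
 * the LF's mapped BAR base and access the result with the relaxed 64-bit
 * helpers (otx2_read64()/otx2_write64()) introduced later in this series.
 * A minimal sketch, assuming 'bar2' is a PF's mapped RVUM BAR2 base and
 * that each RVU_PF_VFPF_MBOX_INTX() register holds 64 per-VF pending bits
 * (as suggested by the two VFPF_MBOX0/1 interrupt vectors above):
 */
static inline int
example_vfpf_mbox_pending(uintptr_t bar2, unsigned int vf)
{
	/* Select register 0 or 1, then test this VF's bit within it */
	uint64_t intr = otx2_read64(bar2 + RVU_PF_VFPF_MBOX_INTX(vf >> 6));

	return (intr >> (vf & 0x3f)) & 0x1;
}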
+#define RVU_AF_BAR2_SEL (0x9000000ull) + +#define AF_BAR2_ALIASX(a, b) \ + (0x9100000ull | (uint64_t)(a) << 12 | (uint64_t)(b)) +#define TIM_AF_BAR2_ALIASX(a, b) AF_BAR2_ALIASX(a, b) +#define SSO_AF_BAR2_ALIASX(a, b) AF_BAR2_ALIASX(a, b) +#define NIX_AF_BAR2_ALIASX(a, b) AF_BAR2_ALIASX(0, b) +#define SSOW_AF_BAR2_ALIASX(a, b) AF_BAR2_ALIASX(a, b) +#define NPA_AF_BAR2_ALIASX(a, b) AF_BAR2_ALIASX(0, b) +#define CPT_AF_BAR2_ALIASX(a, b) AF_BAR2_ALIASX(a, b) +#define RVU_AF_BAR2_ALIASX(a, b) AF_BAR2_ALIASX(a, b) + +/* Structures definitions */ + +/* RVU admin function register address structure */ +struct rvu_af_addr_s { + uint64_t addr : 28; + uint64_t block : 5; + uint64_t rsvd_63_33 : 31; +}; + +/* RVU function-unique address structure */ +struct rvu_func_addr_s { + uint32_t addr : 12; + uint32_t lf_slot : 8; + uint32_t block : 5; + uint32_t rsvd_31_25 : 7; +}; + +/* RVU msi-x vector structure */ +struct rvu_msix_vec_s { + uint64_t addr : 64; /* W0 */ + uint64_t data : 32; + uint64_t mask : 1; + uint64_t pend : 1; + uint64_t rsvd_127_98 : 30; +}; + +/* RVU pf function identification structure */ +struct rvu_pf_func_s { + uint16_t func : 10; + uint16_t pf : 6; +}; + +#endif /* __OTX2_RVU_HW_H__ */ diff --git a/drivers/common/octeontx2/hw/otx2_sso.h b/drivers/common/octeontx2/hw/otx2_sso.h new file mode 100644 index 000000000..98a8130b1 --- /dev/null +++ b/drivers/common/octeontx2/hw/otx2_sso.h @@ -0,0 +1,209 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2019 Marvell International Ltd. + */ + +#ifndef __OTX2_SSO_HW_H__ +#define __OTX2_SSO_HW_H__ + +/* Register offsets */ + +#define SSO_AF_CONST (0x1000ull) +#define SSO_AF_CONST1 (0x1008ull) +#define SSO_AF_WQ_INT_PC (0x1020ull) +#define SSO_AF_NOS_CNT (0x1050ull) +#define SSO_AF_AW_WE (0x1080ull) +#define SSO_AF_WS_CFG (0x1088ull) +#define SSO_AF_GWE_CFG (0x1098ull) +#define SSO_AF_GWE_RANDOM (0x10b0ull) +#define SSO_AF_LF_HWGRP_RST (0x10e0ull) +#define SSO_AF_AW_CFG (0x10f0ull) +#define SSO_AF_BLK_RST (0x10f8ull) +#define SSO_AF_ACTIVE_CYCLES0 (0x1100ull) +#define SSO_AF_ACTIVE_CYCLES1 (0x1108ull) +#define SSO_AF_ACTIVE_CYCLES2 (0x1110ull) +#define SSO_AF_ERR0 (0x1220ull) +#define SSO_AF_ERR0_W1S (0x1228ull) +#define SSO_AF_ERR0_ENA_W1C (0x1230ull) +#define SSO_AF_ERR0_ENA_W1S (0x1238ull) +#define SSO_AF_ERR2 (0x1260ull) +#define SSO_AF_ERR2_W1S (0x1268ull) +#define SSO_AF_ERR2_ENA_W1C (0x1270ull) +#define SSO_AF_ERR2_ENA_W1S (0x1278ull) +#define SSO_AF_UNMAP_INFO (0x12f0ull) +#define SSO_AF_UNMAP_INFO2 (0x1300ull) +#define SSO_AF_UNMAP_INFO3 (0x1310ull) +#define SSO_AF_RAS (0x1420ull) +#define SSO_AF_RAS_W1S (0x1430ull) +#define SSO_AF_RAS_ENA_W1C (0x1460ull) +#define SSO_AF_RAS_ENA_W1S (0x1470ull) +#define SSO_AF_AW_INP_CTL (0x2070ull) +#define SSO_AF_AW_ADD (0x2080ull) +#define SSO_AF_AW_READ_ARB (0x2090ull) +#define SSO_AF_XAQ_REQ_PC (0x20b0ull) +#define SSO_AF_XAQ_LATENCY_PC (0x20b8ull) +#define SSO_AF_TAQ_CNT (0x20c0ull) +#define SSO_AF_TAQ_ADD (0x20e0ull) +#define SSO_AF_POISONX(a) (0x2100ull | (uint64_t)(a) << 3) +#define SSO_AF_POISONX_W1S(a) (0x2200ull | (uint64_t)(a) << 3) +#define SSO_PRIV_AF_INT_CFG (0x3000ull) +#define SSO_AF_RVU_LF_CFG_DEBUG (0x3800ull) +#define SSO_PRIV_LFX_HWGRP_CFG(a) (0x10000ull | (uint64_t)(a) << 3) +#define SSO_PRIV_LFX_HWGRP_INT_CFG(a) (0x20000ull | (uint64_t)(a) << 3) +#define SSO_AF_IU_ACCNTX_CFG(a) (0x50000ull | (uint64_t)(a) << 3) +#define SSO_AF_IU_ACCNTX_RST(a) (0x60000ull | (uint64_t)(a) << 3) +#define SSO_AF_XAQX_HEAD_PTR(a) (0x80000ull | (uint64_t)(a) << 3) +#define 
SSO_AF_XAQX_TAIL_PTR(a) (0x90000ull | (uint64_t)(a) << 3) +#define SSO_AF_XAQX_HEAD_NEXT(a) (0xa0000ull | (uint64_t)(a) << 3) +#define SSO_AF_XAQX_TAIL_NEXT(a) (0xb0000ull | (uint64_t)(a) << 3) +#define SSO_AF_TIAQX_STATUS(a) (0xc0000ull | (uint64_t)(a) << 3) +#define SSO_AF_TOAQX_STATUS(a) (0xd0000ull | (uint64_t)(a) << 3) +#define SSO_AF_XAQX_GMCTL(a) (0xe0000ull | (uint64_t)(a) << 3) +#define SSO_AF_HWGRPX_IAQ_THR(a) (0x200000ull | (uint64_t)(a) << 12) +#define SSO_AF_HWGRPX_TAQ_THR(a) (0x200010ull | (uint64_t)(a) << 12) +#define SSO_AF_HWGRPX_PRI(a) (0x200020ull | (uint64_t)(a) << 12) +#define SSO_AF_HWGRPX_WS_PC(a) (0x200050ull | (uint64_t)(a) << 12) +#define SSO_AF_HWGRPX_EXT_PC(a) (0x200060ull | (uint64_t)(a) << 12) +#define SSO_AF_HWGRPX_WA_PC(a) (0x200070ull | (uint64_t)(a) << 12) +#define SSO_AF_HWGRPX_TS_PC(a) (0x200080ull | (uint64_t)(a) << 12) +#define SSO_AF_HWGRPX_DS_PC(a) (0x200090ull | (uint64_t)(a) << 12) +#define SSO_AF_HWGRPX_DQ_PC(a) (0x2000A0ull | (uint64_t)(a) << 12) +#define SSO_AF_HWGRPX_PAGE_CNT(a) (0x200100ull | (uint64_t)(a) << 12) +#define SSO_AF_HWGRPX_AW_STATUS(a) (0x200110ull | (uint64_t)(a) << 12) +#define SSO_AF_HWGRPX_AW_CFG(a) (0x200120ull | (uint64_t)(a) << 12) +#define SSO_AF_HWGRPX_AW_TAGSPACE(a) (0x200130ull | (uint64_t)(a) << 12) +#define SSO_AF_HWGRPX_XAQ_AURA(a) (0x200140ull | (uint64_t)(a) << 12) +#define SSO_AF_HWGRPX_XAQ_LIMIT(a) (0x200220ull | (uint64_t)(a) << 12) +#define SSO_AF_HWGRPX_IU_ACCNT(a) (0x200230ull | (uint64_t)(a) << 12) +#define SSO_AF_HWSX_ARB(a) (0x400100ull | (uint64_t)(a) << 12) +#define SSO_AF_HWSX_INV(a) (0x400180ull | (uint64_t)(a) << 12) +#define SSO_AF_HWSX_GMCTL(a) (0x400200ull | (uint64_t)(a) << 12) +#define SSO_AF_HWSX_SX_GRPMSKX(a, b, c) \ + (0x400400ull | (uint64_t)(a) << 12 | (uint64_t)(b) << 5 | \ + (uint64_t)(c) << 3) +#define SSO_AF_IPL_FREEX(a) (0x800000ull | (uint64_t)(a) << 3) +#define SSO_AF_IPL_IAQX(a) (0x840000ull | (uint64_t)(a) << 3) +#define SSO_AF_IPL_DESCHEDX(a) (0x860000ull | (uint64_t)(a) << 3) +#define SSO_AF_IPL_CONFX(a) (0x880000ull | (uint64_t)(a) << 3) +#define SSO_AF_NPA_DIGESTX(a) (0x900000ull | (uint64_t)(a) << 3) +#define SSO_AF_NPA_DIGESTX_W1S(a) (0x900100ull | (uint64_t)(a) << 3) +#define SSO_AF_BFP_DIGESTX(a) (0x900200ull | (uint64_t)(a) << 3) +#define SSO_AF_BFP_DIGESTX_W1S(a) (0x900300ull | (uint64_t)(a) << 3) +#define SSO_AF_BFPN_DIGESTX(a) (0x900400ull | (uint64_t)(a) << 3) +#define SSO_AF_BFPN_DIGESTX_W1S(a) (0x900500ull | (uint64_t)(a) << 3) +#define SSO_AF_GRPDIS_DIGESTX(a) (0x900600ull | (uint64_t)(a) << 3) +#define SSO_AF_GRPDIS_DIGESTX_W1S(a) (0x900700ull | (uint64_t)(a) << 3) +#define SSO_AF_AWEMPTY_DIGESTX(a) (0x900800ull | (uint64_t)(a) << 3) +#define SSO_AF_AWEMPTY_DIGESTX_W1S(a) (0x900900ull | (uint64_t)(a) << 3) +#define SSO_AF_WQP0_DIGESTX(a) (0x900a00ull | (uint64_t)(a) << 3) +#define SSO_AF_WQP0_DIGESTX_W1S(a) (0x900b00ull | (uint64_t)(a) << 3) +#define SSO_AF_AW_DROPPED_DIGESTX(a) (0x900c00ull | (uint64_t)(a) << 3) +#define SSO_AF_AW_DROPPED_DIGESTX_W1S(a) (0x900d00ull | (uint64_t)(a) << 3) +#define SSO_AF_QCTLDIS_DIGESTX(a) (0x900e00ull | (uint64_t)(a) << 3) +#define SSO_AF_QCTLDIS_DIGESTX_W1S(a) (0x900f00ull | (uint64_t)(a) << 3) +#define SSO_AF_XAQDIS_DIGESTX(a) (0x901000ull | (uint64_t)(a) << 3) +#define SSO_AF_XAQDIS_DIGESTX_W1S(a) (0x901100ull | (uint64_t)(a) << 3) +#define SSO_AF_FLR_AQ_DIGESTX(a) (0x901200ull | (uint64_t)(a) << 3) +#define SSO_AF_FLR_AQ_DIGESTX_W1S(a) (0x901300ull | (uint64_t)(a) << 3) +#define SSO_AF_WS_GMULTI_DIGESTX(a) (0x902000ull | 
(uint64_t)(a) << 3) +#define SSO_AF_WS_GMULTI_DIGESTX_W1S(a) (0x902100ull | (uint64_t)(a) << 3) +#define SSO_AF_WS_GUNMAP_DIGESTX(a) (0x902200ull | (uint64_t)(a) << 3) +#define SSO_AF_WS_GUNMAP_DIGESTX_W1S(a) (0x902300ull | (uint64_t)(a) << 3) +#define SSO_AF_WS_AWE_DIGESTX(a) (0x902400ull | (uint64_t)(a) << 3) +#define SSO_AF_WS_AWE_DIGESTX_W1S(a) (0x902500ull | (uint64_t)(a) << 3) +#define SSO_AF_WS_GWI_DIGESTX(a) (0x902600ull | (uint64_t)(a) << 3) +#define SSO_AF_WS_GWI_DIGESTX_W1S(a) (0x902700ull | (uint64_t)(a) << 3) +#define SSO_AF_WS_NE_DIGESTX(a) (0x902800ull | (uint64_t)(a) << 3) +#define SSO_AF_WS_NE_DIGESTX_W1S(a) (0x902900ull | (uint64_t)(a) << 3) +#define SSO_AF_IENTX_TAG(a) (0xa00000ull | (uint64_t)(a) << 3) +#define SSO_AF_IENTX_GRP(a) (0xa20000ull | (uint64_t)(a) << 3) +#define SSO_AF_IENTX_PENDTAG(a) (0xa40000ull | (uint64_t)(a) << 3) +#define SSO_AF_IENTX_LINKS(a) (0xa60000ull | (uint64_t)(a) << 3) +#define SSO_AF_IENTX_QLINKS(a) (0xa80000ull | (uint64_t)(a) << 3) +#define SSO_AF_IENTX_WQP(a) (0xaa0000ull | (uint64_t)(a) << 3) +#define SSO_AF_TAQX_LINK(a) (0xc00000ull | (uint64_t)(a) << 3) +#define SSO_AF_TAQX_WAEX_TAG(a, b) \ + (0xe00000ull | (uint64_t)(a) << 8 | (uint64_t)(b) << 4) +#define SSO_AF_TAQX_WAEX_WQP(a, b) \ + (0xe00008ull | (uint64_t)(a) << 8 | (uint64_t)(b) << 4) + +#define SSO_LF_GGRP_OP_ADD_WORK0 (0x0ull) +#define SSO_LF_GGRP_OP_ADD_WORK1 (0x8ull) +#define SSO_LF_GGRP_QCTL (0x20ull) +#define SSO_LF_GGRP_EXE_DIS (0x80ull) +#define SSO_LF_GGRP_INT (0x100ull) +#define SSO_LF_GGRP_INT_W1S (0x108ull) +#define SSO_LF_GGRP_INT_ENA_W1S (0x110ull) +#define SSO_LF_GGRP_INT_ENA_W1C (0x118ull) +#define SSO_LF_GGRP_INT_THR (0x140ull) +#define SSO_LF_GGRP_INT_CNT (0x180ull) +#define SSO_LF_GGRP_XAQ_CNT (0x1b0ull) +#define SSO_LF_GGRP_AQ_CNT (0x1c0ull) +#define SSO_LF_GGRP_AQ_THR (0x1e0ull) +#define SSO_LF_GGRP_MISC_CNT (0x200ull) + +#define SSO_AF_IAQ_FREE_CNT_MASK 0x3FFFull +#define SSO_AF_IAQ_RSVD_FREE_MASK 0x3FFFull +#define SSO_AF_IAQ_RSVD_FREE_SHIFT 16 +#define SSO_AF_IAQ_FREE_CNT_MAX SSO_AF_IAQ_FREE_CNT_MASK +#define SSO_AF_AW_ADD_RSVD_FREE_MASK 0x3FFFull +#define SSO_AF_AW_ADD_RSVD_FREE_SHIFT 16 +#define SSO_HWGRP_IAQ_MAX_THR_MASK 0x3FFFull +#define SSO_HWGRP_IAQ_RSVD_THR_MASK 0x3FFFull +#define SSO_HWGRP_IAQ_MAX_THR_SHIFT 32 +#define SSO_HWGRP_IAQ_RSVD_THR 0x2 + +#define SSO_AF_TAQ_FREE_CNT_MASK 0x7FFull +#define SSO_AF_TAQ_RSVD_FREE_MASK 0x7FFull +#define SSO_AF_TAQ_RSVD_FREE_SHIFT 16 +#define SSO_AF_TAQ_FREE_CNT_MAX SSO_AF_TAQ_FREE_CNT_MASK +#define SSO_AF_TAQ_ADD_RSVD_FREE_MASK 0x1FFFull +#define SSO_AF_TAQ_ADD_RSVD_FREE_SHIFT 16 +#define SSO_HWGRP_TAQ_MAX_THR_MASK 0x7FFull +#define SSO_HWGRP_TAQ_RSVD_THR_MASK 0x7FFull +#define SSO_HWGRP_TAQ_MAX_THR_SHIFT 32 +#define SSO_HWGRP_TAQ_RSVD_THR 0x3 + +#define SSO_HWGRP_PRI_AFF_MASK 0xFull +#define SSO_HWGRP_PRI_AFF_SHIFT 8 +#define SSO_HWGRP_PRI_WGT_MASK 0x3Full +#define SSO_HWGRP_PRI_WGT_SHIFT 16 +#define SSO_HWGRP_PRI_WGT_LEFT_MASK 0x3Full +#define SSO_HWGRP_PRI_WGT_LEFT_SHIFT 24 + +#define SSO_HWGRP_AW_CFG_RWEN BIT_ULL(0) +#define SSO_HWGRP_AW_CFG_LDWB BIT_ULL(1) +#define SSO_HWGRP_AW_CFG_LDT BIT_ULL(2) +#define SSO_HWGRP_AW_CFG_STT BIT_ULL(3) +#define SSO_HWGRP_AW_CFG_XAQ_BYP_DIS BIT_ULL(4) + +#define SSO_HWGRP_AW_STS_TPTR_VLD BIT_ULL(8) +#define SSO_HWGRP_AW_STS_NPA_FETCH BIT_ULL(9) +#define SSO_HWGRP_AW_STS_XAQ_BUFSC_MASK 0x7ull +#define SSO_HWGRP_AW_STS_INIT_STS 0x18ull + +/* Enum offsets */ + +#define SSO_LF_INT_VEC_GRP (0x0ull) + +#define SSO_AF_INT_VEC_ERR0 (0x0ull) +#define SSO_AF_INT_VEC_ERR2 (0x1ull) 
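/*
 * Editorial illustration, not part of this patch: the *_MASK/*_SHIFT macros
 * above describe how the per-HWGRP admission queue thresholds are packed
 * into the SSO_AF_HWGRPX_IAQ_THR() register.  Field placement is inferred
 * from those macros only, so treat this as a sketch rather than a
 * definitive layout:
 */
static inline uint64_t
example_hwgrp_iaq_thr(uint64_t max_thr, uint64_t rsvd_thr)
{
	/* MAX_THR goes at bit 32 and above, RSVD_THR in the low bits */
	return ((max_thr & SSO_HWGRP_IAQ_MAX_THR_MASK) <<
		SSO_HWGRP_IAQ_MAX_THR_SHIFT) |
	       (rsvd_thr & SSO_HWGRP_IAQ_RSVD_THR_MASK);
}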
+#define SSO_AF_INT_VEC_RAS (0x2ull) + +#define SSO_WA_IOBN (0x0ull) +#define SSO_WA_NIXRX (0x1ull) +#define SSO_WA_CPT (0x2ull) +#define SSO_WA_ADDWQ (0x3ull) +#define SSO_WA_DPI (0x4ull) +#define SSO_WA_NIXTX (0x5ull) +#define SSO_WA_TIM (0x6ull) +#define SSO_WA_ZIP (0x7ull) + +#define SSO_TT_ORDERED (0x0ull) +#define SSO_TT_ATOMIC (0x1ull) +#define SSO_TT_UNTAGGED (0x2ull) +#define SSO_TT_EMPTY (0x3ull) + + +/* Structures definitions */ + +#endif /* __OTX2_SSO_HW_H__ */ diff --git a/drivers/common/octeontx2/hw/otx2_ssow.h b/drivers/common/octeontx2/hw/otx2_ssow.h new file mode 100644 index 000000000..8a4457803 --- /dev/null +++ b/drivers/common/octeontx2/hw/otx2_ssow.h @@ -0,0 +1,56 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2019 Marvell International Ltd. + */ + +#ifndef __OTX2_SSOW_HW_H__ +#define __OTX2_SSOW_HW_H__ + +/* Register offsets */ + +#define SSOW_AF_RVU_LF_HWS_CFG_DEBUG (0x10ull) +#define SSOW_AF_LF_HWS_RST (0x30ull) +#define SSOW_PRIV_LFX_HWS_CFG(a) (0x1000ull | (uint64_t)(a) << 3) +#define SSOW_PRIV_LFX_HWS_INT_CFG(a) (0x2000ull | (uint64_t)(a) << 3) +#define SSOW_AF_SCRATCH_WS (0x100000ull) +#define SSOW_AF_SCRATCH_GW (0x200000ull) +#define SSOW_AF_SCRATCH_AW (0x300000ull) + +#define SSOW_LF_GWS_LINKS (0x10ull) +#define SSOW_LF_GWS_PENDWQP (0x40ull) +#define SSOW_LF_GWS_PENDSTATE (0x50ull) +#define SSOW_LF_GWS_NW_TIM (0x70ull) +#define SSOW_LF_GWS_GRPMSK_CHG (0x80ull) +#define SSOW_LF_GWS_INT (0x100ull) +#define SSOW_LF_GWS_INT_W1S (0x108ull) +#define SSOW_LF_GWS_INT_ENA_W1S (0x110ull) +#define SSOW_LF_GWS_INT_ENA_W1C (0x118ull) +#define SSOW_LF_GWS_TAG (0x200ull) +#define SSOW_LF_GWS_WQP (0x210ull) +#define SSOW_LF_GWS_SWTP (0x220ull) +#define SSOW_LF_GWS_PENDTAG (0x230ull) +#define SSOW_LF_GWS_OP_ALLOC_WE (0x400ull) +#define SSOW_LF_GWS_OP_GET_WORK (0x600ull) +#define SSOW_LF_GWS_OP_SWTAG_FLUSH (0x800ull) +#define SSOW_LF_GWS_OP_SWTAG_UNTAG (0x810ull) +#define SSOW_LF_GWS_OP_SWTP_CLR (0x820ull) +#define SSOW_LF_GWS_OP_UPD_WQP_GRP0 (0x830ull) +#define SSOW_LF_GWS_OP_UPD_WQP_GRP1 (0x838ull) +#define SSOW_LF_GWS_OP_DESCHED (0x880ull) +#define SSOW_LF_GWS_OP_DESCHED_NOSCH (0x8c0ull) +#define SSOW_LF_GWS_OP_SWTAG_DESCHED (0x980ull) +#define SSOW_LF_GWS_OP_SWTAG_NOSCHED (0x9c0ull) +#define SSOW_LF_GWS_OP_CLR_NSCHED0 (0xa00ull) +#define SSOW_LF_GWS_OP_CLR_NSCHED1 (0xa08ull) +#define SSOW_LF_GWS_OP_SWTP_SET (0xc00ull) +#define SSOW_LF_GWS_OP_SWTAG_NORM (0xc10ull) +#define SSOW_LF_GWS_OP_SWTAG_FULL0 (0xc20ull) +#define SSOW_LF_GWS_OP_SWTAG_FULL1 (0xc28ull) +#define SSOW_LF_GWS_OP_GWC_INVAL (0xe00ull) + + +/* Enum offsets */ + +#define SSOW_LF_INT_VEC_IOP (0x0ull) + + +#endif /* __OTX2_SSOW_HW_H__ */ diff --git a/drivers/common/octeontx2/hw/otx2_tim.h b/drivers/common/octeontx2/hw/otx2_tim.h new file mode 100644 index 000000000..41442ad0a --- /dev/null +++ b/drivers/common/octeontx2/hw/otx2_tim.h @@ -0,0 +1,34 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2019 Marvell International Ltd. 
+ */ + +#ifndef __OTX2_TIM_HW_H__ +#define __OTX2_TIM_HW_H__ + +/* TIM */ +#define TIM_AF_CONST (0x90) +#define TIM_PRIV_LFX_CFG(a) (0x20000 | (a) << 3) +#define TIM_PRIV_LFX_INT_CFG(a) (0x24000 | (a) << 3) +#define TIM_AF_RVU_LF_CFG_DEBUG (0x30000) +#define TIM_AF_BLK_RST (0x10) +#define TIM_AF_LF_RST (0x20) +#define TIM_AF_BLK_RST (0x10) +#define TIM_AF_RINGX_GMCTL(a) (0x2000 | (a) << 3) +#define TIM_AF_RINGX_CTL0(a) (0x4000 | (a) << 3) +#define TIM_AF_RINGX_CTL1(a) (0x6000 | (a) << 3) +#define TIM_AF_RINGX_CTL2(a) (0x8000 | (a) << 3) +#define TIM_AF_FLAGS_REG (0x80) +#define TIM_AF_FLAGS_REG_ENA_TIM BIT_ULL(0) +#define TIM_AF_RINGX_CTL1_ENA BIT_ULL(47) +#define TIM_AF_RINGX_CTL1_RCF_BUSY BIT_ULL(50) +#define TIM_AF_RINGX_CLT1_CLK_10NS (0) +#define TIM_AF_RINGX_CLT1_CLK_GPIO (1) +#define TIM_AF_RINGX_CLT1_CLK_GTI (2) +#define TIM_AF_RINGX_CLT1_CLK_PTP (3) + +/* ENUMS */ + +#define TIM_LF_INT_VEC_NRSPERR_INT (0x0ull) +#define TIM_LF_INT_VEC_RAS_INT (0x1ull) + +#endif /* __OTX2_TIM_HW_H__ */ diff --git a/drivers/common/octeontx2/meson.build b/drivers/common/octeontx2/meson.build new file mode 100644 index 000000000..34f8aaea7 --- /dev/null +++ b/drivers/common/octeontx2/meson.build @@ -0,0 +1,23 @@ +# SPDX-License-Identifier: BSD-3-Clause +# Copyright(C) 2019 Marvell International Ltd. +# + +sources= files( + 'otx2_mbox.c', + ) + +extra_flags = [] +# This integrated controller runs only on a arm64 machine, remove 32bit warnings +if not dpdk_conf.get('RTE_ARCH_64') + extra_flags += ['-Wno-int-to-pointer-cast', '-Wno-pointer-to-int-cast'] +endif + +foreach flag: extra_flags + if cc.has_argument(flag) + cflags += flag + endif +endforeach + +deps = ['eal', 'ethdev'] +includes += include_directories('../../common/octeontx2', + '../../bus/pci') diff --git a/drivers/common/octeontx2/otx2_common.h b/drivers/common/octeontx2/otx2_common.h new file mode 100644 index 000000000..b4e008b14 --- /dev/null +++ b/drivers/common/octeontx2/otx2_common.h @@ -0,0 +1,34 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2019 Marvell International Ltd. + */ + +#ifndef _OTX2_COMMON_H_ +#define _OTX2_COMMON_H_ + +#include + +#include "hw/otx2_rvu.h" +#include "hw/otx2_nix.h" +#include "hw/otx2_npc.h" +#include "hw/otx2_npa.h" +#include "hw/otx2_sso.h" +#include "hw/otx2_ssow.h" +#include "hw/otx2_tim.h" + +/* Alignment */ +#define OTX2_ALIGN 128 + +/* Bits manipulation */ +#ifndef BIT_ULL +#define BIT_ULL(nr) (1ULL << (nr)) +#endif +#ifndef BIT +#define BIT(nr) (1UL << (nr)) +#endif + +/* Compiler attributes */ +#ifndef __hot +#define __hot __attribute__((hot)) +#endif + +#endif /* _OTX2_COMMON_H_ */ diff --git a/drivers/common/octeontx2/otx2_mbox.c b/drivers/common/octeontx2/otx2_mbox.c new file mode 100644 index 000000000..c9cdbdbbc --- /dev/null +++ b/drivers/common/octeontx2/otx2_mbox.c @@ -0,0 +1,5 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2019 Marvell International Ltd. + */ + +#include "otx2_mbox.h" diff --git a/drivers/common/octeontx2/otx2_mbox.h b/drivers/common/octeontx2/otx2_mbox.h new file mode 100644 index 000000000..6d7b77ed9 --- /dev/null +++ b/drivers/common/octeontx2/otx2_mbox.h @@ -0,0 +1,10 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2019 Marvell International Ltd. 
+ */ + +#ifndef __OTX2_MBOX_H__ +#define __OTX2_MBOX_H__ + +#include + +#endif /* __OTX2_MBOX_H__ */ diff --git a/drivers/common/octeontx2/rte_common_octeontx2_version.map b/drivers/common/octeontx2/rte_common_octeontx2_version.map new file mode 100644 index 000000000..9a61188cd --- /dev/null +++ b/drivers/common/octeontx2/rte_common_octeontx2_version.map @@ -0,0 +1,4 @@ +DPDK_19.08 { + + local: *; +}; diff --git a/mk/rte.app.mk b/mk/rte.app.mk index d0df0b023..1640e138a 100644 --- a/mk/rte.app.mk +++ b/mk/rte.app.mk @@ -122,6 +122,8 @@ ifeq ($(CONFIG_RTE_LIBRTE_FSLMC_BUS),y) _LDLIBS-$(CONFIG_RTE_LIBRTE_COMMON_DPAAX) += -lrte_common_dpaax endif +_LDLIBS-y += -lrte_common_octeontx2 + _LDLIBS-$(CONFIG_RTE_LIBRTE_PCI_BUS) += -lrte_bus_pci _LDLIBS-$(CONFIG_RTE_LIBRTE_VDEV_BUS) += -lrte_bus_vdev _LDLIBS-$(CONFIG_RTE_LIBRTE_DPAA_BUS) += -lrte_bus_dpaa From patchwork Mon Jun 17 15:55:12 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jerin Jacob Kollanukkaran X-Patchwork-Id: 54851 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id D8E6C1BEEC; Mon, 17 Jun 2019 17:56:00 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by dpdk.org (Postfix) with ESMTP id CA7CF1BE9A for ; Mon, 17 Jun 2019 17:55:58 +0200 (CEST) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.16.0.27/8.16.0.27) with SMTP id x5HFppxE000998 for ; Mon, 17 Jun 2019 08:55:58 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0818; bh=JYDhz2UKFEH8oF6chFr7y9Sa0uejul/AhTH8sKDDYG4=; b=hnMeJLRO+wr+04fQ6ym6X10KCCYhMvOm/BuF+fo4RkQCAVphJnoo2QYgvMpXnJfV6KmS l6+TFI03O35adc3l+VJt0CFq+KPY3E5IAfQbNJ1wbl4QAbrRNy5qLVgU1Dn3BCQHKHGh hqgsS0sc7r8AtPrBEuO4qw+nyfwiCqjKGZhsEawwy8TMvGYoH+hCxkL38WGSQ2yb8adr gb0OlzymupWBPDBtPkioz+W+Vi9sDDV/cFw10AfrKV7qYtHgIy7ybOukUGgA/i3BBsYA 1oEWMgqJmDzWBO+M0AOacqeH0g/M1b3FW2vwCz3GjtEHesZwI/9sBxHmZO6/QsztaE2U 1Q== Received: from sc-exch01.marvell.com ([199.233.58.181]) by mx0b-0016f401.pphosted.com with ESMTP id 2t506hyawv-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT) for ; Mon, 17 Jun 2019 08:55:57 -0700 Received: from SC-EXCH03.marvell.com (10.93.176.83) by SC-EXCH01.marvell.com (10.93.176.81) with Microsoft SMTP Server (TLS) id 15.0.1367.3; Mon, 17 Jun 2019 08:55:56 -0700 Received: from maili.marvell.com (10.93.176.43) by SC-EXCH03.marvell.com (10.93.176.83) with Microsoft SMTP Server id 15.0.1367.3 via Frontend Transport; Mon, 17 Jun 2019 08:55:56 -0700 Received: from jerin-lab.marvell.com (jerin-lab.marvell.com [10.28.34.14]) by maili.marvell.com (Postfix) with ESMTP id B13CA3F703F; Mon, 17 Jun 2019 08:55:54 -0700 (PDT) From: To: , Jerin Jacob , Nithin Dabilpuram , Vamsi Attunuru CC: Pavan Nikhilesh Date: Mon, 17 Jun 2019 21:25:12 +0530 Message-ID: <20190617155537.36144-3-jerinj@marvell.com> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20190617155537.36144-1-jerinj@marvell.com> References: <20190601014905.45531-1-jerinj@marvell.com> <20190617155537.36144-1-jerinj@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:, , definitions=2019-06-17_07:, , signatures=0 Subject: 
[dpdk-dev] [PATCH v3 02/27] common/octeontx2: add IO handling APIs X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Jerin Jacob Various octeontx2 drivers use IO handling API, added octeontx2 specific IO handling routines in the common code. Since some of those implementations are based on arm64 instructions added the stub to compile the code on non arm64 ISA. The non arm64 ISA stub is possible due to the fact that it is an integrated controller i.e runs only on Marvell HW. Signed-off-by: Jerin Jacob Signed-off-by: Nithin Dabilpuram Signed-off-by: Pavan Nikhilesh --- drivers/common/octeontx2/otx2_common.h | 12 +++ drivers/common/octeontx2/otx2_io_arm64.h | 95 ++++++++++++++++++++++ drivers/common/octeontx2/otx2_io_generic.h | 63 ++++++++++++++ 3 files changed, 170 insertions(+) create mode 100644 drivers/common/octeontx2/otx2_io_arm64.h create mode 100644 drivers/common/octeontx2/otx2_io_generic.h diff --git a/drivers/common/octeontx2/otx2_common.h b/drivers/common/octeontx2/otx2_common.h index b4e008b14..b0c19266b 100644 --- a/drivers/common/octeontx2/otx2_common.h +++ b/drivers/common/octeontx2/otx2_common.h @@ -6,6 +6,8 @@ #define _OTX2_COMMON_H_ #include +#include +#include #include "hw/otx2_rvu.h" #include "hw/otx2_nix.h" @@ -31,4 +33,14 @@ #define __hot __attribute__((hot)) #endif +/* IO Access */ +#define otx2_read64(addr) rte_read64_relaxed((void *)(addr)) +#define otx2_write64(val, addr) rte_write64_relaxed((val), (void *)(addr)) + +#if defined(RTE_ARCH_ARM64) +#include "otx2_io_arm64.h" +#else +#include "otx2_io_generic.h" +#endif + #endif /* _OTX2_COMMON_H_ */ diff --git a/drivers/common/octeontx2/otx2_io_arm64.h b/drivers/common/octeontx2/otx2_io_arm64.h new file mode 100644 index 000000000..468243c04 --- /dev/null +++ b/drivers/common/octeontx2/otx2_io_arm64.h @@ -0,0 +1,95 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2019 Marvell International Ltd. 
+ */ + +#ifndef _OTX2_IO_ARM64_H_ +#define _OTX2_IO_ARM64_H_ + +#define otx2_load_pair(val0, val1, addr) ({ \ + asm volatile( \ + "ldp %x[x0], %x[x1], [%x[p1]]" \ + :[x0]"=r"(val0), [x1]"=r"(val1) \ + :[p1]"r"(addr) \ + ); }) + +#define otx2_store_pair(val0, val1, addr) ({ \ + asm volatile( \ + "stp %x[x0], %x[x1], [%x[p1]]" \ + ::[x0]"r"(val0), [x1]"r"(val1), [p1]"r"(addr) \ + ); }) + +#define otx2_prefetch_store_keep(ptr) ({\ + asm volatile("prfm pstl1keep, [%x0]\n" : : "r" (ptr)); }) + +static __rte_always_inline uint64_t +otx2_atomic64_add_nosync(int64_t incr, int64_t *ptr) +{ + uint64_t result; + + /* Atomic add with no ordering */ + asm volatile ( + ".cpu generic+lse\n" + "ldadd %x[i], %x[r], [%[b]]" + : [r] "=r" (result), "+m" (*ptr) + : [i] "r" (incr), [b] "r" (ptr) + : "memory"); + return result; +} + +static __rte_always_inline uint64_t +otx2_atomic64_add_sync(int64_t incr, int64_t *ptr) +{ + uint64_t result; + + /* Atomic add with ordering */ + asm volatile ( + ".cpu generic+lse\n" + "ldadda %x[i], %x[r], [%[b]]" + : [r] "=r" (result), "+m" (*ptr) + : [i] "r" (incr), [b] "r" (ptr) + : "memory"); + return result; +} + +static __rte_always_inline uint64_t +otx2_lmt_submit(rte_iova_t io_address) +{ + uint64_t result; + + asm volatile ( + ".cpu generic+lse\n" + "ldeor xzr,%x[rf],[%[rs]]" : + [rf] "=r"(result): [rs] "r"(io_address)); + return result; +} + +static __rte_always_inline void +otx2_lmt_mov(void *out, const void *in, const uint32_t lmtext) +{ + volatile const __uint128_t *src128 = (const __uint128_t *)in; + volatile __uint128_t *dst128 = (__uint128_t *)out; + dst128[0] = src128[0]; + dst128[1] = src128[1]; + /* lmtext receives following value: + * 1: NIX_SUBDC_EXT needed i.e. tx vlan case + * 2: NIX_SUBDC_EXT + NIX_SUBDC_MEM i.e. tstamp case + */ + if (lmtext) { + dst128[2] = src128[2]; + if (lmtext > 1) + dst128[3] = src128[3]; + } +} + +static __rte_always_inline void +otx2_lmt_mov_seg(void *out, const void *in, const uint16_t segdw) +{ + volatile const __uint128_t *src128 = (const __uint128_t *)in; + volatile __uint128_t *dst128 = (__uint128_t *)out; + uint8_t i; + + for (i = 0; i < segdw; i++) + dst128[i] = src128[i]; +} + +#endif /* _OTX2_IO_ARM64_H_ */ diff --git a/drivers/common/octeontx2/otx2_io_generic.h b/drivers/common/octeontx2/otx2_io_generic.h new file mode 100644 index 000000000..b1d754008 --- /dev/null +++ b/drivers/common/octeontx2/otx2_io_generic.h @@ -0,0 +1,63 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2019 Marvell International Ltd. 
+ */ + +#ifndef _OTX2_IO_GENERIC_H_ +#define _OTX2_IO_GENERIC_H_ + +#define otx2_load_pair(val0, val1, addr) \ +do { \ + val0 = rte_read64_relaxed((void *)(addr)); \ + val1 = rte_read64_relaxed((uint8_t *)(addr) + 8); \ +} while (0) + +#define otx2_store_pair(val0, val1, addr) \ +do { \ + rte_write64_relaxed(val0, (void *)(addr)); \ + rte_write64_relaxed(val1, (((uint8_t *)(addr)) + 8)); \ +} while (0) + +#define otx2_prefetch_store_keep(ptr) do {} while (0) + +static inline uint64_t +otx2_atomic64_add_nosync(int64_t incr, int64_t *ptr) +{ + RTE_SET_USED(ptr); + RTE_SET_USED(incr); + + return 0; +} + +static inline uint64_t +otx2_atomic64_add_sync(int64_t incr, int64_t *ptr) +{ + RTE_SET_USED(ptr); + RTE_SET_USED(incr); + + return 0; +} + +static inline int64_t +otx2_lmt_submit(uint64_t io_address) +{ + RTE_SET_USED(io_address); + + return 0; +} + +static __rte_always_inline void +otx2_lmt_mov(void *out, const void *in, const uint32_t lmtext) +{ + RTE_SET_USED(out); + RTE_SET_USED(in); + RTE_SET_USED(lmtext); +} + +static __rte_always_inline void +otx2_lmt_mov_seg(void *out, const void *in, const uint16_t segdw) +{ + RTE_SET_USED(out); + RTE_SET_USED(in); + RTE_SET_USED(segdw); +} +#endif /* _OTX2_IO_GENERIC_H_ */ From patchwork Mon Jun 17 15:55:13 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jerin Jacob Kollanukkaran X-Patchwork-Id: 54854 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 4CDE71BF47; Mon, 17 Jun 2019 17:56:13 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by dpdk.org (Postfix) with ESMTP id B8F261BF35 for ; Mon, 17 Jun 2019 17:56:08 +0200 (CEST) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.16.0.27/8.16.0.27) with SMTP id x5HFppJB001017 for ; Mon, 17 Jun 2019 08:56:07 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0818; bh=Z+db16Y8f0m1f7je7MgfQ0wz+ukZtJiaMad/ulWfZMg=; b=lv9feGozGaZop/xaiJX4gZH4d3lG+lGVlaAQCG973wbbapH+n/swL9CwkCU+dRk219bs O9fFgvIvefVMEsJD3JfkRKKTlD7zQrjYhZrAVnfJH1sFYavRXpX55fJ9nVD0urcQqFiM 1QY4qL63evUnBe3Nfjd3qhRjzsWEyJe8iapAZaFYzNUqI/p6tm1IibedlrsOutLq1h/k xrXD3PMJ8CaPdIde7TmyNKqYJGacS98VLey8BmQr4wf2tmEOPPvBIt9ZZnlaut0ecwEs QVG9tY9pYWQmHWER0hZyCjf+SWXkz+5jDQMoI8Sq1xqdNXI5a3JqeONTjr4PJwItJWIX vw== Received: from sc-exch02.marvell.com ([199.233.58.182]) by mx0b-0016f401.pphosted.com with ESMTP id 2t506hyax8-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT) for ; Mon, 17 Jun 2019 08:56:07 -0700 Received: from SC-EXCH01.marvell.com (10.93.176.81) by SC-EXCH02.marvell.com (10.93.176.82) with Microsoft SMTP Server (TLS) id 15.0.1367.3; Mon, 17 Jun 2019 08:56:00 -0700 Received: from maili.marvell.com (10.93.176.43) by SC-EXCH01.marvell.com (10.93.176.81) with Microsoft SMTP Server id 15.0.1367.3 via Frontend Transport; Mon, 17 Jun 2019 08:56:00 -0700 Received: from jerin-lab.marvell.com (jerin-lab.marvell.com [10.28.34.14]) by maili.marvell.com (Postfix) with ESMTP id 94B973F703F; Mon, 17 Jun 2019 08:55:57 -0700 (PDT) From: To: , Jerin Jacob , Nithin Dabilpuram , Vamsi Attunuru CC: Pavan Nikhilesh , Kiran Kumar K , Vivek Sharma , "Harman Kalra" , 
Sunil Kumar Kori , "Krzysztof Kanas" , Zyta Szpak Date: Mon, 17 Jun 2019 21:25:13 +0530 Message-ID: <20190617155537.36144-4-jerinj@marvell.com> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20190617155537.36144-1-jerinj@marvell.com> References: <20190601014905.45531-1-jerinj@marvell.com> <20190617155537.36144-1-jerinj@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:, , definitions=2019-06-17_07:, , signatures=0 Subject: [dpdk-dev] [PATCH v3 03/27] common/octeontx2: add mbox request and response definition X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Jerin Jacob The admin function driver sits in Linux kernel as mailbox server. The DPDK AF mailbox client, send the message to mailbox server to complete the administrative task such as get mac address. This patch adds mailbox request and response definition of existing mailbox defined between AF driver and DPDK driver. Signed-off-by: Jerin Jacob Signed-off-by: Nithin Dabilpuram Signed-off-by: Pavan Nikhilesh Signed-off-by: Kiran Kumar K Signed-off-by: Vamsi Attunuru Signed-off-by: Vivek Sharma Signed-off-by: Harman Kalra Signed-off-by: Sunil Kumar Kori Signed-off-by: Krzysztof Kanas Signed-off-by: Zyta Szpak --- drivers/common/octeontx2/otx2_mbox.h | 1326 ++++++++++++++++++++++++++ 1 file changed, 1326 insertions(+) diff --git a/drivers/common/octeontx2/otx2_mbox.h b/drivers/common/octeontx2/otx2_mbox.h index 6d7b77ed9..e2d79c070 100644 --- a/drivers/common/octeontx2/otx2_mbox.h +++ b/drivers/common/octeontx2/otx2_mbox.h @@ -5,6 +5,1332 @@ #ifndef __OTX2_MBOX_H__ #define __OTX2_MBOX_H__ +#include +#include + +#include +#include + #include +#define SZ_64K (64 * 1024) +#define SZ_1K (1 * 1024) +#define MBOX_SIZE SZ_64K + +/* AF/PF: PF initiated, PF/VF VF initiated */ +#define MBOX_DOWN_RX_START 0 +#define MBOX_DOWN_RX_SIZE (46 * SZ_1K) +#define MBOX_DOWN_TX_START (MBOX_DOWN_RX_START + MBOX_DOWN_RX_SIZE) +#define MBOX_DOWN_TX_SIZE (16 * SZ_1K) +/* AF/PF: AF initiated, PF/VF PF initiated */ +#define MBOX_UP_RX_START (MBOX_DOWN_TX_START + MBOX_DOWN_TX_SIZE) +#define MBOX_UP_RX_SIZE SZ_1K +#define MBOX_UP_TX_START (MBOX_UP_RX_START + MBOX_UP_RX_SIZE) +#define MBOX_UP_TX_SIZE SZ_1K + +#if MBOX_UP_TX_SIZE + MBOX_UP_TX_START != MBOX_SIZE +# error "Incorrect mailbox area sizes" +#endif + +#define INTR_MASK(pfvfs) ((pfvfs < 64) ? (BIT_ULL(pfvfs) - 1) : (~0ull)) + +#define MBOX_RSP_TIMEOUT 3000 /* Time to wait for mbox response in ms */ + +#define MBOX_MSG_ALIGN 16 /* Align mbox msg start to 16bytes */ + +/* Mailbox directions */ +#define MBOX_DIR_AFPF 0 /* AF replies to PF */ +#define MBOX_DIR_PFAF 1 /* PF sends messages to AF */ +#define MBOX_DIR_PFVF 2 /* PF replies to VF */ +#define MBOX_DIR_VFPF 3 /* VF sends messages to PF */ +#define MBOX_DIR_AFPF_UP 4 /* AF sends messages to PF */ +#define MBOX_DIR_PFAF_UP 5 /* PF replies to AF */ +#define MBOX_DIR_PFVF_UP 6 /* PF sends messages to VF */ +#define MBOX_DIR_VFPF_UP 7 /* VF replies to PF */ + +/* Device memory does not support unaligned access, instruct compiler to + * not optimize the memory access when working with mailbox memory. 
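 * Every mailbox structure field that is shared with the AF over this
 * device memory is therefore tagged with the __otx2_io (volatile)
 * qualifier defined just below.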
+ */ +#define __otx2_io volatile + +struct otx2_mbox_dev { + void *mbase; /* This dev's mbox region */ + rte_spinlock_t mbox_lock; + uint16_t msg_size; /* Total msg size to be sent */ + uint16_t rsp_size; /* Total rsp size to be sure the reply is ok */ + uint16_t num_msgs; /* No of msgs sent or waiting for response */ + uint16_t msgs_acked; /* No of msgs for which response is received */ +}; + +struct otx2_mbox { + uintptr_t hwbase; /* Mbox region advertised by HW */ + uintptr_t reg_base;/* CSR base for this dev */ + uint64_t trigger; /* Trigger mbox notification */ + uint16_t tr_shift; /* Mbox trigger shift */ + uint64_t rx_start; /* Offset of Rx region in mbox memory */ + uint64_t tx_start; /* Offset of Tx region in mbox memory */ + uint16_t rx_size; /* Size of Rx region */ + uint16_t tx_size; /* Size of Tx region */ + uint16_t ndevs; /* The number of peers */ + struct otx2_mbox_dev *dev; +}; + +/* Header which precedes all mbox messages */ +struct mbox_hdr { + uint64_t __otx2_io msg_size; /* Total msgs size embedded */ + uint16_t __otx2_io num_msgs; /* No of msgs embedded */ +}; + +/* Header which precedes every msg and is also part of it */ +struct mbox_msghdr { + uint16_t __otx2_io pcifunc; /* Who's sending this msg */ + uint16_t __otx2_io id; /* Mbox message ID */ +#define OTX2_MBOX_REQ_SIG (0xdead) +#define OTX2_MBOX_RSP_SIG (0xbeef) + /* Signature, for validating corrupted msgs */ + uint16_t __otx2_io sig; +#define OTX2_MBOX_VERSION (0x0001) + /* Version of msg's structure for this ID */ + uint16_t __otx2_io ver; + /* Offset of next msg within mailbox region */ + uint16_t __otx2_io next_msgoff; + int __otx2_io rc; /* Msg processed response code */ +}; + +/* Mailbox message types */ +#define MBOX_MSG_MASK 0xFFFF +#define MBOX_MSG_INVALID 0xFFFE +#define MBOX_MSG_MAX 0xFFFF + +#define MBOX_MESSAGES \ +/* Generic mbox IDs (range 0x000 - 0x1FF) */ \ +M(READY, 0x001, ready, msg_req, ready_msg_rsp) \ +M(ATTACH_RESOURCES, 0x002, attach_resources, rsrc_attach_req, msg_rsp)\ +M(DETACH_RESOURCES, 0x003, detach_resources, rsrc_detach_req, msg_rsp)\ +M(FREE_RSRC_CNT, 0x004, free_rsrc_cnt, msg_req, free_rsrcs_rsp) \ +M(MSIX_OFFSET, 0x005, msix_offset, msg_req, msix_offset_rsp) \ +M(VF_FLR, 0x006, vf_flr, msg_req, msg_rsp) \ +M(PTP_OP, 0x007, ptp_op, ptp_req, ptp_rsp) \ +M(GET_HW_CAP, 0x008, get_hw_cap, msg_req, get_hw_cap_rsp) \ +M(NDC_SYNC_OP, 0x009, ndc_sync_op, ndc_sync_op, msg_rsp) \ +/* CGX mbox IDs (range 0x200 - 0x3FF) */ \ +M(CGX_START_RXTX, 0x200, cgx_start_rxtx, msg_req, msg_rsp) \ +M(CGX_STOP_RXTX, 0x201, cgx_stop_rxtx, msg_req, msg_rsp) \ +M(CGX_STATS, 0x202, cgx_stats, msg_req, cgx_stats_rsp) \ +M(CGX_MAC_ADDR_SET, 0x203, cgx_mac_addr_set, cgx_mac_addr_set_or_get,\ + cgx_mac_addr_set_or_get) \ +M(CGX_MAC_ADDR_GET, 0x204, cgx_mac_addr_get, cgx_mac_addr_set_or_get,\ + cgx_mac_addr_set_or_get) \ +M(CGX_PROMISC_ENABLE, 0x205, cgx_promisc_enable, msg_req, msg_rsp) \ +M(CGX_PROMISC_DISABLE, 0x206, cgx_promisc_disable, msg_req, msg_rsp) \ +M(CGX_START_LINKEVENTS, 0x207, cgx_start_linkevents, msg_req, msg_rsp) \ +M(CGX_STOP_LINKEVENTS, 0x208, cgx_stop_linkevents, msg_req, msg_rsp) \ +M(CGX_GET_LINKINFO, 0x209, cgx_get_linkinfo, msg_req, cgx_link_info_msg)\ +M(CGX_INTLBK_ENABLE, 0x20A, cgx_intlbk_enable, msg_req, msg_rsp) \ +M(CGX_INTLBK_DISABLE, 0x20B, cgx_intlbk_disable, msg_req, msg_rsp) \ +M(CGX_PTP_RX_ENABLE, 0x20C, cgx_ptp_rx_enable, msg_req, msg_rsp) \ +M(CGX_PTP_RX_DISABLE, 0x20D, cgx_ptp_rx_disable, msg_req, msg_rsp) \ +M(CGX_CFG_PAUSE_FRM, 0x20E, cgx_cfg_pause_frm, 
cgx_pause_frm_cfg, \ + cgx_pause_frm_cfg) \ +M(CGX_FW_DATA_GET, 0x20F, cgx_get_aux_link_info, msg_req, cgx_fw_data) \ +M(CGX_MAC_ADDR_ADD, 0x211, cgx_mac_addr_add, cgx_mac_addr_add_req, \ + cgx_mac_addr_add_rsp) \ +M(CGX_MAC_ADDR_DEL, 0x212, cgx_mac_addr_del, cgx_mac_addr_del_req, \ + msg_rsp) \ +M(CGX_MAC_MAX_ENTRIES_GET, 0x213, cgx_mac_max_entries_get, msg_req, \ + cgx_max_dmac_entries_get_rsp) \ +M(CGX_SET_LINK_STATE, 0x214, cgx_set_link_state, \ + cgx_set_link_state_msg, msg_rsp) \ +/* NPA mbox IDs (range 0x400 - 0x5FF) */ \ +M(NPA_LF_ALLOC, 0x400, npa_lf_alloc, npa_lf_alloc_req, \ + npa_lf_alloc_rsp) \ +M(NPA_LF_FREE, 0x401, npa_lf_free, msg_req, msg_rsp) \ +M(NPA_AQ_ENQ, 0x402, npa_aq_enq, npa_aq_enq_req, npa_aq_enq_rsp)\ +M(NPA_HWCTX_DISABLE, 0x403, npa_hwctx_disable, hwctx_disable_req, msg_rsp)\ +/* SSO/SSOW mbox IDs (range 0x600 - 0x7FF) */ \ +M(SSO_LF_ALLOC, 0x600, sso_lf_alloc, sso_lf_alloc_req, \ + sso_lf_alloc_rsp) \ +M(SSO_LF_FREE, 0x601, sso_lf_free, sso_lf_free_req, msg_rsp) \ +M(SSOW_LF_ALLOC, 0x602, ssow_lf_alloc, ssow_lf_alloc_req, msg_rsp)\ +M(SSOW_LF_FREE, 0x603, ssow_lf_free, ssow_lf_free_req, msg_rsp) \ +M(SSO_HW_SETCONFIG, 0x604, sso_hw_setconfig, sso_hw_setconfig, \ + msg_rsp) \ +M(SSO_GRP_SET_PRIORITY, 0x605, sso_grp_set_priority, sso_grp_priority, \ + msg_rsp) \ +M(SSO_GRP_GET_PRIORITY, 0x606, sso_grp_get_priority, sso_info_req, \ + sso_grp_priority) \ +M(SSO_WS_CACHE_INV, 0x607, sso_ws_cache_inv, msg_req, msg_rsp) \ +M(SSO_GRP_QOS_CONFIG, 0x608, sso_grp_qos_config, sso_grp_qos_cfg, \ + msg_rsp) \ +M(SSO_GRP_GET_STATS, 0x609, sso_grp_get_stats, sso_info_req, \ + sso_grp_stats) \ +M(SSO_HWS_GET_STATS, 0x610, sso_hws_get_stats, sso_info_req, \ + sso_hws_stats) \ +/* TIM mbox IDs (range 0x800 - 0x9FF) */ \ +M(TIM_LF_ALLOC, 0x800, tim_lf_alloc, tim_lf_alloc_req, \ + tim_lf_alloc_rsp) \ +M(TIM_LF_FREE, 0x801, tim_lf_free, tim_ring_req, msg_rsp) \ +M(TIM_CONFIG_RING, 0x802, tim_config_ring, tim_config_req, msg_rsp)\ +M(TIM_ENABLE_RING, 0x803, tim_enable_ring, tim_ring_req, \ + tim_enable_rsp) \ +M(TIM_DISABLE_RING, 0x804, tim_disable_ring, tim_ring_req, msg_rsp) \ +/* CPT mbox IDs (range 0xA00 - 0xBFF) */ \ +M(CPT_RD_WR_REGISTER, 0xA02, cpt_rd_wr_register, cpt_rd_wr_reg_msg, \ + cpt_rd_wr_reg_msg) \ +M(CPT_INLINE_IPSEC_CFG, 0xA04, cpt_inline_ipsec_cfg, \ + cpt_inline_ipsec_cfg_msg, msg_rsp) \ +/* NPC mbox IDs (range 0x6000 - 0x7FFF) */ \ +M(NPC_MCAM_ALLOC_ENTRY, 0x6000, npc_mcam_alloc_entry, \ + npc_mcam_alloc_entry_req, \ + npc_mcam_alloc_entry_rsp) \ +M(NPC_MCAM_FREE_ENTRY, 0x6001, npc_mcam_free_entry, \ + npc_mcam_free_entry_req, msg_rsp) \ +M(NPC_MCAM_WRITE_ENTRY, 0x6002, npc_mcam_write_entry, \ + npc_mcam_write_entry_req, msg_rsp) \ +M(NPC_MCAM_ENA_ENTRY, 0x6003, npc_mcam_ena_entry, \ + npc_mcam_ena_dis_entry_req, msg_rsp) \ +M(NPC_MCAM_DIS_ENTRY, 0x6004, npc_mcam_dis_entry, \ + npc_mcam_ena_dis_entry_req, msg_rsp) \ +M(NPC_MCAM_SHIFT_ENTRY, 0x6005, npc_mcam_shift_entry, \ + npc_mcam_shift_entry_req, \ + npc_mcam_shift_entry_rsp) \ +M(NPC_MCAM_ALLOC_COUNTER, 0x6006, npc_mcam_alloc_counter, \ + npc_mcam_alloc_counter_req, \ + npc_mcam_alloc_counter_rsp) \ +M(NPC_MCAM_FREE_COUNTER, 0x6007, npc_mcam_free_counter, \ + npc_mcam_oper_counter_req, \ + msg_rsp) \ +M(NPC_MCAM_UNMAP_COUNTER, 0x6008, npc_mcam_unmap_counter, \ + npc_mcam_unmap_counter_req, \ + msg_rsp) \ +M(NPC_MCAM_CLEAR_COUNTER, 0x6009, npc_mcam_clear_counter, \ + npc_mcam_oper_counter_req, \ + msg_rsp) \ +M(NPC_MCAM_COUNTER_STATS, 0x600a, npc_mcam_counter_stats, \ + npc_mcam_oper_counter_req, \ + 
npc_mcam_oper_counter_rsp) \ +M(NPC_MCAM_ALLOC_AND_WRITE_ENTRY, 0x600b, npc_mcam_alloc_and_write_entry,\ + npc_mcam_alloc_and_write_entry_req, \ + npc_mcam_alloc_and_write_entry_rsp) \ +M(NPC_GET_KEX_CFG, 0x600c, npc_get_kex_cfg, msg_req, \ + npc_get_kex_cfg_rsp) \ +/* NIX mbox IDs (range 0x8000 - 0xFFFF) */ \ +M(NIX_LF_ALLOC, 0x8000, nix_lf_alloc, nix_lf_alloc_req, \ + nix_lf_alloc_rsp) \ +M(NIX_LF_FREE, 0x8001, nix_lf_free, nix_lf_free_req, msg_rsp) \ +M(NIX_AQ_ENQ, 0x8002, nix_aq_enq, nix_aq_enq_req, \ + nix_aq_enq_rsp) \ +M(NIX_HWCTX_DISABLE, 0x8003, nix_hwctx_disable, hwctx_disable_req, \ + msg_rsp) \ +M(NIX_TXSCH_ALLOC, 0x8004, nix_txsch_alloc, nix_txsch_alloc_req, \ + nix_txsch_alloc_rsp) \ +M(NIX_TXSCH_FREE, 0x8005, nix_txsch_free, nix_txsch_free_req, \ + msg_rsp) \ +M(NIX_TXSCHQ_CFG, 0x8006, nix_txschq_cfg, nix_txschq_config, \ + msg_rsp) \ +M(NIX_STATS_RST, 0x8007, nix_stats_rst, msg_req, msg_rsp) \ +M(NIX_VTAG_CFG, 0x8008, nix_vtag_cfg, nix_vtag_config, msg_rsp) \ +M(NIX_RSS_FLOWKEY_CFG, 0x8009, nix_rss_flowkey_cfg, \ + nix_rss_flowkey_cfg, \ + nix_rss_flowkey_cfg_rsp) \ +M(NIX_SET_MAC_ADDR, 0x800a, nix_set_mac_addr, nix_set_mac_addr, \ + msg_rsp) \ +M(NIX_SET_RX_MODE, 0x800b, nix_set_rx_mode, nix_rx_mode, msg_rsp) \ +M(NIX_SET_HW_FRS, 0x800c, nix_set_hw_frs, nix_frs_cfg, msg_rsp) \ +M(NIX_LF_START_RX, 0x800d, nix_lf_start_rx, msg_req, msg_rsp) \ +M(NIX_LF_STOP_RX, 0x800e, nix_lf_stop_rx, msg_req, msg_rsp) \ +M(NIX_MARK_FORMAT_CFG, 0x800f, nix_mark_format_cfg, \ + nix_mark_format_cfg, \ + nix_mark_format_cfg_rsp) \ +M(NIX_LSO_FORMAT_CFG, 0x8011, nix_lso_format_cfg, nix_lso_format_cfg, \ + nix_lso_format_cfg_rsp) \ +M(NIX_LF_PTP_TX_ENABLE, 0x8013, nix_lf_ptp_tx_enable, msg_req, \ + msg_rsp) \ +M(NIX_LF_PTP_TX_DISABLE, 0x8014, nix_lf_ptp_tx_disable, msg_req, \ + msg_rsp) \ +M(NIX_SET_VLAN_TPID, 0x8015, nix_set_vlan_tpid, nix_set_vlan_tpid, \ + msg_rsp) \ +M(NIX_BP_ENABLE, 0x8016, nix_bp_enable, nix_bp_cfg_req, \ + nix_bp_cfg_rsp) \ +M(NIX_BP_DISABLE, 0x8017, nix_bp_disable, nix_bp_cfg_req, msg_rsp)\ +M(NIX_GET_MAC_ADDR, 0x8018, nix_get_mac_addr, msg_req, \ + nix_get_mac_addr_rsp) \ +M(NIX_INLINE_IPSEC_CFG, 0x8019, nix_inline_ipsec_cfg, \ + nix_inline_ipsec_cfg, msg_rsp) \ +M(NIX_INLINE_IPSEC_LF_CFG, \ + 0x801a, nix_inline_ipsec_lf_cfg, \ + nix_inline_ipsec_lf_cfg, msg_rsp) + +/* Messages initiated by AF (range 0xC00 - 0xDFF) */ +#define MBOX_UP_CGX_MESSAGES \ +M(CGX_LINK_EVENT, 0xC00, cgx_link_event, cgx_link_info_msg, \ + msg_rsp) \ +M(CGX_PTP_RX_INFO, 0xC01, cgx_ptp_rx_info, cgx_ptp_rx_info_msg, \ + msg_rsp) + +enum { +#define M(_name, _id, _1, _2, _3) MBOX_MSG_ ## _name = _id, +MBOX_MESSAGES +MBOX_UP_CGX_MESSAGES +#undef M +}; + +/* Mailbox message formats */ + +#define RVU_DEFAULT_PF_FUNC 0xFFFF + +/* Generic request msg used for those mbox messages which + * don't send any data in the request. + */ +struct msg_req { + struct mbox_msghdr hdr; +}; + +/* Generic response msg used a ack or response for those mbox + * messages which doesn't have a specific rsp msg format. + */ +struct msg_rsp { + struct mbox_msghdr hdr; +}; + +/* RVU mailbox error codes + * Range 256 - 300. + */ +enum rvu_af_status { + RVU_INVALID_VF_ID = -256, +}; + +struct ready_msg_rsp { + struct mbox_msghdr hdr; + uint16_t __otx2_io sclk_feq; /* SCLK frequency */ +}; + +/* Structure for requesting resource provisioning. + * 'modify' flag to be used when either requesting more + * or detach partial of a certain resource type. + * Rest of the fields specify how many of what type to + * be attached. 
+ */ +struct rsrc_attach_req { + struct mbox_msghdr hdr; + uint8_t __otx2_io modify:1; + uint8_t __otx2_io npalf:1; + uint8_t __otx2_io nixlf:1; + uint16_t __otx2_io sso; + uint16_t __otx2_io ssow; + uint16_t __otx2_io timlfs; + uint16_t __otx2_io cptlfs; +}; + +/* Structure for relinquishing resources. + * 'partial' flag to be used when relinquishing all resources + * but only of a certain type. If not set, all resources of all + * types provisioned to the RVU function will be detached. + */ +struct rsrc_detach_req { + struct mbox_msghdr hdr; + uint8_t __otx2_io partial:1; + uint8_t __otx2_io npalf:1; + uint8_t __otx2_io nixlf:1; + uint8_t __otx2_io sso:1; + uint8_t __otx2_io ssow:1; + uint8_t __otx2_io timlfs:1; + uint8_t __otx2_io cptlfs:1; +}; + +/* NIX Transmit schedulers */ +#define NIX_TXSCH_LVL_SMQ 0x0 +#define NIX_TXSCH_LVL_MDQ 0x0 +#define NIX_TXSCH_LVL_TL4 0x1 +#define NIX_TXSCH_LVL_TL3 0x2 +#define NIX_TXSCH_LVL_TL2 0x3 +#define NIX_TXSCH_LVL_TL1 0x4 +#define NIX_TXSCH_LVL_CNT 0x5 + +/* + * Number of resources available to the caller. + * In reply to MBOX_MSG_FREE_RSRC_CNT. + */ +struct free_rsrcs_rsp { + struct mbox_msghdr hdr; + uint16_t __otx2_io schq[NIX_TXSCH_LVL_CNT]; + uint16_t __otx2_io sso; + uint16_t __otx2_io tim; + uint16_t __otx2_io ssow; + uint16_t __otx2_io cpt; + uint8_t __otx2_io npa; + uint8_t __otx2_io nix; +}; + +#define MSIX_VECTOR_INVALID 0xFFFF +#define MAX_RVU_BLKLF_CNT 256 + +struct msix_offset_rsp { + struct mbox_msghdr hdr; + uint16_t __otx2_io npa_msixoff; + uint16_t __otx2_io nix_msixoff; + uint8_t __otx2_io sso; + uint8_t __otx2_io ssow; + uint8_t __otx2_io timlfs; + uint8_t __otx2_io cptlfs; + uint16_t __otx2_io sso_msixoff[MAX_RVU_BLKLF_CNT]; + uint16_t __otx2_io ssow_msixoff[MAX_RVU_BLKLF_CNT]; + uint16_t __otx2_io timlf_msixoff[MAX_RVU_BLKLF_CNT]; + uint16_t __otx2_io cptlf_msixoff[MAX_RVU_BLKLF_CNT]; +}; + +/* CGX mbox message formats */ +struct cgx_stats_rsp { + struct mbox_msghdr hdr; +#define CGX_RX_STATS_COUNT 13 +#define CGX_TX_STATS_COUNT 18 + uint64_t __otx2_io rx_stats[CGX_RX_STATS_COUNT]; + uint64_t __otx2_io tx_stats[CGX_TX_STATS_COUNT]; +}; + +/* Structure for requesting the operation for + * setting/getting mac address in the CGX interface + */ +struct cgx_mac_addr_set_or_get { + struct mbox_msghdr hdr; + uint8_t __otx2_io mac_addr[RTE_ETHER_ADDR_LEN]; +}; + +/* Structure for requesting the operation to + * add DMAC filter entry into CGX interface + */ +struct cgx_mac_addr_add_req { + struct mbox_msghdr hdr; + uint8_t __otx2_io mac_addr[RTE_ETHER_ADDR_LEN]; +}; + +/* Structure for response against the operation to + * add DMAC filter entry into CGX interface + */ +struct cgx_mac_addr_add_rsp { + struct mbox_msghdr hdr; + uint8_t __otx2_io index; +}; + +/* Structure for requesting the operation to + * delete DMAC filter entry from CGX interface + */ +struct cgx_mac_addr_del_req { + struct mbox_msghdr hdr; + uint8_t __otx2_io index; +}; + +/* Structure for response against the operation to + * get maximum supported DMAC filter entries + */ +struct cgx_max_dmac_entries_get_rsp { + struct mbox_msghdr hdr; + uint8_t __otx2_io max_dmac_filters; +}; + +struct cgx_link_user_info { + uint64_t __otx2_io link_up:1; + uint64_t __otx2_io full_duplex:1; + uint64_t __otx2_io lmac_type_id:4; + uint64_t __otx2_io speed:20; /* speed in Mbps */ +#define LMACTYPE_STR_LEN 16 + char lmac_type[LMACTYPE_STR_LEN]; +}; + +struct cgx_link_info_msg { + struct mbox_msghdr hdr; + struct cgx_link_user_info link_info; +}; + +struct cgx_pause_frm_cfg { + struct 
mbox_msghdr hdr; + uint8_t __otx2_io set; + /* set = 1 if the request is to config pause frames */ + /* set = 0 if the request is to fetch pause frames config */ + uint8_t __otx2_io rx_pause; + uint8_t __otx2_io tx_pause; +}; + +struct cgx_ptp_rx_info_msg { + struct mbox_msghdr hdr; + uint8_t __otx2_io ptp_en; + uint8_t __otx2_io ptp_offset; +}; + +struct sfp_eeprom_s { +#define SFP_EEPROM_SIZE 256 + uint16_t __otx2_io sff_id; + uint8_t __otx2_io buf[SFP_EEPROM_SIZE]; +}; + +struct cgx_lmac_fwdata_s { + uint16_t __otx2_io rw_valid; + uint64_t __otx2_io supported_fec; + uint64_t __otx2_io supported_an; + uint64_t __otx2_io supported_link_modes; + /* Only applicable if AN is supported */ + uint64_t __otx2_io advertised_fec; + uint64_t __otx2_io advertised_link_modes; + /* Only applicable if SFP/QSFP slot is present */ + struct sfp_eeprom_s sfp_eeprom; +}; + +struct cgx_fw_data { + struct mbox_msghdr hdr; + struct cgx_lmac_fwdata_s fwdata; +}; + +struct cgx_set_link_state_msg { + struct mbox_msghdr hdr; + uint8_t __otx2_io enable; +}; + +/* NPA mbox message formats */ + +/* NPA mailbox error codes + * Range 301 - 400. + */ +enum npa_af_status { + NPA_AF_ERR_PARAM = -301, + NPA_AF_ERR_AQ_FULL = -302, + NPA_AF_ERR_AQ_ENQUEUE = -303, + NPA_AF_ERR_AF_LF_INVALID = -304, + NPA_AF_ERR_AF_LF_ALLOC = -305, + NIX_AF_ERR_X2P_CALIBRATE = -398, + NIX_AF_ERR_RAN_OUT_BPID = -399, +}; + +#define NPA_AURA_SZ_0 0 +#define NPA_AURA_SZ_128 1 +#define NPA_AURA_SZ_256 2 +#define NPA_AURA_SZ_512 3 +#define NPA_AURA_SZ_1K 4 +#define NPA_AURA_SZ_2K 5 +#define NPA_AURA_SZ_4K 6 +#define NPA_AURA_SZ_8K 7 +#define NPA_AURA_SZ_16K 8 +#define NPA_AURA_SZ_32K 9 +#define NPA_AURA_SZ_64K 10 +#define NPA_AURA_SZ_128K 11 +#define NPA_AURA_SZ_256K 12 +#define NPA_AURA_SZ_512K 13 +#define NPA_AURA_SZ_1M 14 +#define NPA_AURA_SZ_MAX 15 + +/* For NPA LF context alloc and init */ +struct npa_lf_alloc_req { + struct mbox_msghdr hdr; + int __otx2_io node; + int __otx2_io aura_sz; /* No of auras. See NPA_AURA_SZ_* */ + uint32_t __otx2_io nr_pools; /* No of pools */ + uint64_t __otx2_io way_mask; +}; + +struct npa_lf_alloc_rsp { + struct mbox_msghdr hdr; + uint32_t __otx2_io stack_pg_ptrs; /* No of ptrs per stack page */ + uint32_t __otx2_io stack_pg_bytes; /* Size of stack page */ + uint16_t __otx2_io qints; /* NPA_AF_CONST::QINTS */ +}; + +/* NPA AQ enqueue msg */ +struct npa_aq_enq_req { + struct mbox_msghdr hdr; + uint32_t __otx2_io aura_id; + uint8_t __otx2_io ctype; + uint8_t __otx2_io op; + union { + /* Valid when op == WRITE/INIT and ctype == AURA. + * LF fills the pool_id in aura.pool_addr. AF will translate + * the pool_id to pool context pointer. + */ + struct npa_aura_s aura; + /* Valid when op == WRITE/INIT and ctype == POOL */ + struct npa_pool_s pool; + }; + /* Mask data when op == WRITE (1=write, 0=don't write) */ + union { + /* Valid when op == WRITE and ctype == AURA */ + struct npa_aura_s aura_mask; + /* Valid when op == WRITE and ctype == POOL */ + struct npa_pool_s pool_mask; + }; +}; + +struct npa_aq_enq_rsp { + struct mbox_msghdr hdr; + union { + /* Valid when op == READ and ctype == AURA */ + struct npa_aura_s aura; + /* Valid when op == READ and ctype == POOL */ + struct npa_pool_s pool; + }; +}; + +/* Disable all contexts of type 'ctype' */ +struct hwctx_disable_req { + struct mbox_msghdr hdr; + uint8_t __otx2_io ctype; +}; + +/* NIX mbox message formats */ +/* NIX mailbox error codes + * Range 401 - 500. 
+ */ +enum nix_af_status { + NIX_AF_ERR_PARAM = -401, + NIX_AF_ERR_AQ_FULL = -402, + NIX_AF_ERR_AQ_ENQUEUE = -403, + NIX_AF_ERR_AF_LF_INVALID = -404, + NIX_AF_ERR_AF_LF_ALLOC = -405, + NIX_AF_ERR_TLX_ALLOC_FAIL = -406, + NIX_AF_ERR_TLX_INVALID = -407, + NIX_AF_ERR_RSS_SIZE_INVALID = -408, + NIX_AF_ERR_RSS_GRPS_INVALID = -409, + NIX_AF_ERR_FRS_INVALID = -410, + NIX_AF_ERR_RX_LINK_INVALID = -411, + NIX_AF_INVAL_TXSCHQ_CFG = -412, + NIX_AF_SMQ_FLUSH_FAILED = -413, + NIX_AF_MACADDR_SET_FAILED = -414, + NIX_AF_RX_MODE_SET_FAILED = -415, + NIX_AF_ERR_RSS_NOSPC_ALGO = -416, + NIX_AF_ERR_RSS_NOSPC_FIELD = -417, + NIX_AF_ERR_MARK_ALLOC_FAIL = -418, + NIX_AF_ERR_LSOFMT_CFG_FAIL = -419, +}; + +/* For NIX LF context alloc and init */ +struct nix_lf_alloc_req { + struct mbox_msghdr hdr; + int __otx2_io node; + uint32_t __otx2_io rq_cnt; /* No of receive queues */ + uint32_t __otx2_io sq_cnt; /* No of send queues */ + uint32_t __otx2_io cq_cnt; /* No of completion queues */ + uint8_t __otx2_io xqe_sz; + uint16_t __otx2_io rss_sz; + uint8_t __otx2_io rss_grps; + uint16_t __otx2_io npa_func; + /* RVU_DEFAULT_PF_FUNC == default pf_func associated with lf */ + uint16_t __otx2_io sso_func; + uint64_t __otx2_io rx_cfg; /* See NIX_AF_LF(0..127)_RX_CFG */ + uint64_t __otx2_io way_mask; +}; + +struct nix_lf_alloc_rsp { + struct mbox_msghdr hdr; + uint16_t __otx2_io sqb_size; + uint16_t __otx2_io rx_chan_base; + uint16_t __otx2_io tx_chan_base; + uint8_t __otx2_io rx_chan_cnt; /* Total number of RX channels */ + uint8_t __otx2_io tx_chan_cnt; /* Total number of TX channels */ + uint8_t __otx2_io lso_tsov4_idx; + uint8_t __otx2_io lso_tsov6_idx; + uint8_t __otx2_io mac_addr[RTE_ETHER_ADDR_LEN]; + uint8_t __otx2_io lf_rx_stats; /* NIX_AF_CONST1::LF_RX_STATS */ + uint8_t __otx2_io lf_tx_stats; /* NIX_AF_CONST1::LF_TX_STATS */ + uint16_t __otx2_io cints; /* NIX_AF_CONST2::CINTS */ + uint16_t __otx2_io qints; /* NIX_AF_CONST2::QINTS */ + uint8_t __otx2_io ptp; /* boolean; true iff PTP block is supported */ +}; + +struct nix_lf_free_req { + struct mbox_msghdr hdr; +#define NIX_LF_DISABLE_FLOWS 0x1 + uint64_t __otx2_io flags; +}; + +/* NIX AQ enqueue msg */ +struct nix_aq_enq_req { + struct mbox_msghdr hdr; + uint32_t __otx2_io qidx; + uint8_t __otx2_io ctype; + uint8_t __otx2_io op; + union { + /* Valid when op == WRITE/INIT and ctype == NIX_AQ_CTYPE_RQ */ + struct nix_rq_ctx_s rq; + /* Valid when op == WRITE/INIT and ctype == NIX_AQ_CTYPE_SQ */ + struct nix_sq_ctx_s sq; + /* Valid when op == WRITE/INIT and ctype == NIX_AQ_CTYPE_CQ */ + struct nix_cq_ctx_s cq; + /* Valid when op == WRITE/INIT and ctype == NIX_AQ_CTYPE_RSS */ + struct nix_rsse_s rss; + /* Valid when op == WRITE/INIT and ctype == NIX_AQ_CTYPE_MCE */ + struct nix_rx_mce_s mce; + }; + /* Mask data when op == WRITE (1=write, 0=don't write) */ + union { + /* Valid when op == WRITE and ctype == NIX_AQ_CTYPE_RQ */ + struct nix_rq_ctx_s rq_mask; + /* Valid when op == WRITE and ctype == NIX_AQ_CTYPE_SQ */ + struct nix_sq_ctx_s sq_mask; + /* Valid when op == WRITE and ctype == NIX_AQ_CTYPE_CQ */ + struct nix_cq_ctx_s cq_mask; + /* Valid when op == WRITE and ctype == NIX_AQ_CTYPE_RSS */ + struct nix_rsse_s rss_mask; + /* Valid when op == WRITE and ctype == NIX_AQ_CTYPE_MCE */ + struct nix_rx_mce_s mce_mask; + }; +}; + +struct nix_aq_enq_rsp { + struct mbox_msghdr hdr; + union { + struct nix_rq_ctx_s rq; + struct nix_sq_ctx_s sq; + struct nix_cq_ctx_s cq; + struct nix_rsse_s rss; + struct nix_rx_mce_s mce; + }; +}; + +/* Tx scheduler/shaper mailbox messages */ + 
+#define MAX_TXSCHQ_PER_FUNC 128 + +struct nix_txsch_alloc_req { + struct mbox_msghdr hdr; + /* Scheduler queue count request at each level */ + uint16_t __otx2_io schq_contig[NIX_TXSCH_LVL_CNT]; /* Contig. queues */ + uint16_t __otx2_io schq[NIX_TXSCH_LVL_CNT]; /* Non-Contig. queues */ +}; + +struct nix_txsch_alloc_rsp { + struct mbox_msghdr hdr; + /* Scheduler queue count allocated at each level */ + uint16_t __otx2_io schq_contig[NIX_TXSCH_LVL_CNT]; /* Contig. queues */ + uint16_t __otx2_io schq[NIX_TXSCH_LVL_CNT]; /* Non-Contig. queues */ + /* Scheduler queue list allocated at each level */ + uint16_t __otx2_io + schq_contig_list[NIX_TXSCH_LVL_CNT][MAX_TXSCHQ_PER_FUNC]; + uint16_t __otx2_io schq_list[NIX_TXSCH_LVL_CNT][MAX_TXSCHQ_PER_FUNC]; + /* Traffic aggregation scheduler level */ + uint8_t __otx2_io aggr_level; + /* Aggregation lvl's RR_PRIO config */ + uint8_t __otx2_io aggr_lvl_rr_prio; + /* LINKX_CFG CSRs mapped to TL3 or TL2's index ? */ + uint8_t __otx2_io link_cfg_lvl; +}; + +struct nix_txsch_free_req { + struct mbox_msghdr hdr; +#define TXSCHQ_FREE_ALL BIT_ULL(0) + uint16_t __otx2_io flags; + /* Scheduler queue level to be freed */ + uint16_t __otx2_io schq_lvl; + /* List of scheduler queues to be freed */ + uint16_t __otx2_io schq; +}; + +struct nix_txschq_config { + struct mbox_msghdr hdr; + uint8_t __otx2_io lvl; /* SMQ/MDQ/TL4/TL3/TL2/TL1 */ +#define TXSCHQ_IDX_SHIFT 16 +#define TXSCHQ_IDX_MASK (BIT_ULL(10) - 1) +#define TXSCHQ_IDX(reg, shift) (((reg) >> (shift)) & TXSCHQ_IDX_MASK) + uint8_t __otx2_io num_regs; +#define MAX_REGS_PER_MBOX_MSG 20 + uint64_t __otx2_io reg[MAX_REGS_PER_MBOX_MSG]; + uint64_t __otx2_io regval[MAX_REGS_PER_MBOX_MSG]; +}; + +struct nix_vtag_config { + struct mbox_msghdr hdr; + /* '0' for 4 octet VTAG, '1' for 8 octet VTAG */ + uint8_t __otx2_io vtag_size; + /* cfg_type is '0' for tx vlan cfg + * cfg_type is '1' for rx vlan cfg + */ + uint8_t __otx2_io cfg_type; + union { + /* Valid when cfg_type is '0' */ + struct { + uint64_t __otx2_io vtag0; + uint64_t __otx2_io vtag1; + + /* cfg_vtag0 & cfg_vtag1 fields are valid + * when free_vtag0 & free_vtag1 are '0's. + */ + /* cfg_vtag0 = 1 to configure vtag0 */ + uint8_t __otx2_io cfg_vtag0 :1; + /* cfg_vtag1 = 1 to configure vtag1 */ + uint8_t __otx2_io cfg_vtag1 :1; + + /* vtag0_idx & vtag1_idx are only valid when + * both cfg_vtag0 & cfg_vtag1 are '0's, + * these fields are used along with free_vtag0 + * & free_vtag1 to free the nix lf's tx_vlan + * configuration. + * + * Denotes the indices of tx_vtag def registers + * that needs to be cleared and freed. + */ + int __otx2_io vtag0_idx; + int __otx2_io vtag1_idx; + + /* Free_vtag0 & free_vtag1 fields are valid + * when cfg_vtag0 & cfg_vtag1 are '0's. + */ + /* Free_vtag0 = 1 clears vtag0 configuration + * vtag0_idx denotes the index to be cleared. + */ + uint8_t __otx2_io free_vtag0 :1; + /* Free_vtag1 = 1 clears vtag1 configuration + * vtag1_idx denotes the index to be cleared. + */ + uint8_t __otx2_io free_vtag1 :1; + } tx; + + /* Valid when cfg_type is '1' */ + struct { + /* Rx vtag type index, valid values are in 0..7 range */ + uint8_t __otx2_io vtag_type; + /* Rx vtag strip */ + uint8_t __otx2_io strip_vtag :1; + /* Rx vtag capture */ + uint8_t __otx2_io capture_vtag :1; + } rx; + }; +}; + +struct nix_vtag_config_rsp { + struct mbox_msghdr hdr; + /* Indices of tx_vtag def registers used to configure + * tx vtag0 & vtag1 headers, these indices are valid + * when nix_vtag_config mbox requested for vtag0 and/ + * or vtag1 configuration. 
+ */ + int __otx2_io vtag0_idx; + int __otx2_io vtag1_idx; +}; + +struct nix_rss_flowkey_cfg { + struct mbox_msghdr hdr; + int __otx2_io mcam_index; /* MCAM entry index to modify */ + uint32_t __otx2_io flowkey_cfg; /* Flowkey types selected */ +#define FLOW_KEY_TYPE_PORT BIT(0) +#define FLOW_KEY_TYPE_IPV4 BIT(1) +#define FLOW_KEY_TYPE_IPV6 BIT(2) +#define FLOW_KEY_TYPE_TCP BIT(3) +#define FLOW_KEY_TYPE_UDP BIT(4) +#define FLOW_KEY_TYPE_SCTP BIT(5) +#define FLOW_KEY_TYPE_NVGRE BIT(6) +#define FLOW_KEY_TYPE_VXLAN BIT(7) +#define FLOW_KEY_TYPE_GENEVE BIT(8) +#define FLOW_KEY_TYPE_ETH_DMAC BIT(9) +#define FLOW_KEY_TYPE_IPV6_EXT BIT(10) +#define FLOW_KEY_TYPE_GTPU BIT(11) +#define FLOW_KEY_TYPE_UDP_VXLAN BIT(12) +#define FLOW_KEY_TYPE_UDP_GENEVE BIT(13) +#define FLOW_KEY_TYPE_UDP_GTPU BIT(14) +#define FLOW_KEY_TYPE_INNR_IPV4 BIT(15) +#define FLOW_KEY_TYPE_INNR_IPV6 BIT(16) +#define FLOW_KEY_TYPE_INNR_TCP BIT(17) +#define FLOW_KEY_TYPE_INNR_UDP BIT(18) +#define FLOW_KEY_TYPE_INNR_SCTP BIT(19) +#define FLOW_KEY_TYPE_INNR_ETH_DMAC BIT(20) + uint8_t group; /* RSS context or group */ +}; + +struct nix_rss_flowkey_cfg_rsp { + struct mbox_msghdr hdr; + uint8_t __otx2_io alg_idx; /* Selected algo index */ +}; + +struct nix_set_mac_addr { + struct mbox_msghdr hdr; + uint8_t __otx2_io mac_addr[RTE_ETHER_ADDR_LEN]; +}; + +struct nix_get_mac_addr_rsp { + struct mbox_msghdr hdr; + uint8_t __otx2_io mac_addr[RTE_ETHER_ADDR_LEN]; +}; + +struct nix_mark_format_cfg { + struct mbox_msghdr hdr; + uint8_t __otx2_io offset; + uint8_t __otx2_io y_mask; + uint8_t __otx2_io y_val; + uint8_t __otx2_io r_mask; + uint8_t __otx2_io r_val; +}; + +struct nix_mark_format_cfg_rsp { + struct mbox_msghdr hdr; + uint8_t __otx2_io mark_format_idx; +}; + +struct nix_lso_format_cfg { + struct mbox_msghdr hdr; + uint64_t __otx2_io field_mask; +#define NIX_LSO_FIELD_MAX (8) + uint64_t __otx2_io fields[NIX_LSO_FIELD_MAX]; +}; + +struct nix_lso_format_cfg_rsp { + struct mbox_msghdr hdr; + uint8_t __otx2_io lso_format_idx; +}; + +struct nix_rx_mode { + struct mbox_msghdr hdr; +#define NIX_RX_MODE_UCAST BIT(0) +#define NIX_RX_MODE_PROMISC BIT(1) +#define NIX_RX_MODE_ALLMULTI BIT(2) + uint16_t __otx2_io mode; +}; + +struct nix_frs_cfg { + struct mbox_msghdr hdr; + uint8_t __otx2_io update_smq; /* Update SMQ's min/max lens */ + uint8_t __otx2_io update_minlen; /* Set minlen also */ + uint8_t __otx2_io sdp_link; /* Set SDP RX link */ + uint16_t __otx2_io maxlen; + uint16_t __otx2_io minlen; +}; + +struct nix_set_vlan_tpid { + struct mbox_msghdr hdr; +#define NIX_VLAN_TYPE_INNER 0 +#define NIX_VLAN_TYPE_OUTER 1 + uint8_t __otx2_io vlan_type; + uint16_t __otx2_io tpid; +}; + +struct nix_bp_cfg_req { + struct mbox_msghdr hdr; + uint16_t __otx2_io chan_base; /* Starting channel number */ + uint8_t __otx2_io chan_cnt; /* Number of channels */ + uint8_t __otx2_io bpid_per_chan; + /* bpid_per_chan = 0 assigns single bp id for range of channels */ + /* bpid_per_chan = 1 assigns separate bp id for each channel */ +}; + +/* Global NIX inline IPSec configuration */ +struct nix_inline_ipsec_cfg { + struct mbox_msghdr hdr; + uint32_t __otx2_io cpt_credit; + struct { + uint8_t __otx2_io egrp; + uint8_t __otx2_io opcode; + } gen_cfg; + struct { + uint16_t __otx2_io cpt_pf_func; + uint8_t __otx2_io cpt_slot; + } inst_qsel; + uint8_t __otx2_io enable; +}; + +/* Per NIX LF inline IPSec configuration */ +struct nix_inline_ipsec_lf_cfg { + struct mbox_msghdr hdr; + uint64_t __otx2_io sa_base_addr; + struct { + uint32_t __otx2_io tag_const; + uint16_t 
__otx2_io lenm1_max; + uint8_t __otx2_io sa_pow2_size; + uint8_t __otx2_io tt; + } ipsec_cfg0; + struct { + uint32_t __otx2_io sa_idx_max; + uint8_t __otx2_io sa_idx_w; + } ipsec_cfg1; + uint8_t __otx2_io enable; +}; + +/* PF can be mapped to either CGX or LBK interface, + * so maximum 64 channels are possible. + */ +#define NIX_MAX_CHAN 64 +struct nix_bp_cfg_rsp { + struct mbox_msghdr hdr; + /* Channel and bpid mapping */ + uint16_t __otx2_io chan_bpid[NIX_MAX_CHAN]; + /* Number of channel for which bpids are assigned */ + uint8_t __otx2_io chan_cnt; +}; + +/* SSO mailbox error codes + * Range 501 - 600. + */ +enum sso_af_status { + SSO_AF_ERR_PARAM = -501, + SSO_AF_ERR_LF_INVALID = -502, + SSO_AF_ERR_AF_LF_ALLOC = -503, + SSO_AF_ERR_GRP_EBUSY = -504, + SSO_AF_ERR_AF_LF_INVALID = -599, +}; + +struct sso_lf_alloc_req { + struct mbox_msghdr hdr; + int __otx2_io node; + uint16_t __otx2_io hwgrps; +}; + +struct sso_lf_alloc_rsp { + struct mbox_msghdr hdr; + uint32_t __otx2_io xaq_buf_size; + uint32_t __otx2_io xaq_wq_entries; + uint32_t __otx2_io in_unit_entries; + uint16_t __otx2_io hwgrps; +}; + +struct sso_lf_free_req { + struct mbox_msghdr hdr; + int __otx2_io node; + uint16_t __otx2_io hwgrps; +}; + +/* SSOW mailbox error codes + * Range 601 - 700. + */ +enum ssow_af_status { + SSOW_AF_ERR_PARAM = -601, + SSOW_AF_ERR_LF_INVALID = -602, + SSOW_AF_ERR_AF_LF_ALLOC = -603, +}; + +struct ssow_lf_alloc_req { + struct mbox_msghdr hdr; + int __otx2_io node; + uint16_t __otx2_io hws; +}; + +struct ssow_lf_free_req { + struct mbox_msghdr hdr; + int __otx2_io node; + uint16_t __otx2_io hws; +}; + +struct sso_hw_setconfig { + struct mbox_msghdr hdr; + uint32_t __otx2_io npa_aura_id; + uint16_t __otx2_io npa_pf_func; + uint16_t __otx2_io hwgrps; +}; + +struct sso_info_req { + struct mbox_msghdr hdr; + union { + uint16_t __otx2_io grp; + uint16_t __otx2_io hws; + }; +}; + +struct sso_grp_priority { + struct mbox_msghdr hdr; + uint16_t __otx2_io grp; + uint8_t __otx2_io priority; + uint8_t __otx2_io affinity; + uint8_t __otx2_io weight; +}; + +struct sso_grp_qos_cfg { + struct mbox_msghdr hdr; + uint16_t __otx2_io grp; + uint32_t __otx2_io xaq_limit; + uint16_t __otx2_io taq_thr; + uint16_t __otx2_io iaq_thr; +}; + +struct sso_grp_stats { + struct mbox_msghdr hdr; + uint16_t __otx2_io grp; + uint64_t __otx2_io ws_pc; + uint64_t __otx2_io ext_pc; + uint64_t __otx2_io wa_pc; + uint64_t __otx2_io ts_pc; + uint64_t __otx2_io ds_pc; + uint64_t __otx2_io dq_pc; + uint64_t __otx2_io aw_status; + uint64_t __otx2_io page_cnt; +}; + +struct sso_hws_stats { + struct mbox_msghdr hdr; + uint16_t __otx2_io hws; + uint64_t __otx2_io arbitration; +}; + +/* CPT mbox message formats */ + +struct cpt_rd_wr_reg_msg { + struct mbox_msghdr hdr; + uint64_t __otx2_io reg_offset; + uint64_t __otx2_io *ret_val; + uint64_t __otx2_io val; + uint8_t __otx2_io is_write; +}; + +#define CPT_INLINE_INBOUND 0 +#define CPT_INLINE_OUTBOUND 1 + +struct cpt_inline_ipsec_cfg_msg { + struct mbox_msghdr hdr; + uint8_t __otx2_io enable; + uint8_t __otx2_io slot; + uint8_t __otx2_io dir; + uint16_t __otx2_io sso_pf_func; /* Inbound path SSO_PF_FUNC */ + uint16_t __otx2_io nix_pf_func; /* Outbound path NIX_PF_FUNC */ +}; + +/* NPC mbox message structs */ + +#define NPC_MCAM_ENTRY_INVALID 0xFFFF +#define NPC_MCAM_INVALID_MAP 0xFFFF + +/* NPC mailbox error codes + * Range 701 - 800. 
+ */ +enum npc_af_status { + NPC_MCAM_INVALID_REQ = -701, + NPC_MCAM_ALLOC_DENIED = -702, + NPC_MCAM_ALLOC_FAILED = -703, + NPC_MCAM_PERM_DENIED = -704, +}; + +struct npc_mcam_alloc_entry_req { + struct mbox_msghdr hdr; +#define NPC_MAX_NONCONTIG_ENTRIES 256 + uint8_t __otx2_io contig; /* Contiguous entries ? */ +#define NPC_MCAM_ANY_PRIO 0 +#define NPC_MCAM_LOWER_PRIO 1 +#define NPC_MCAM_HIGHER_PRIO 2 + uint8_t __otx2_io priority; /* Lower or higher w.r.t ref_entry */ + uint16_t __otx2_io ref_entry; + uint16_t __otx2_io count; /* Number of entries requested */ +}; + +struct npc_mcam_alloc_entry_rsp { + struct mbox_msghdr hdr; + /* Entry alloc'ed or start index if contiguous. + * Invalid in case of non-contiguous. + */ + uint16_t __otx2_io entry; + uint16_t __otx2_io count; /* Number of entries allocated */ + uint16_t __otx2_io free_count; /* Number of entries available */ + uint16_t __otx2_io entry_list[NPC_MAX_NONCONTIG_ENTRIES]; +}; + +struct npc_mcam_free_entry_req { + struct mbox_msghdr hdr; + uint16_t __otx2_io entry; /* Entry index to be freed */ + uint8_t __otx2_io all; /* Free all entries alloc'ed to this PFVF */ +}; + +struct mcam_entry { +#define NPC_MAX_KWS_IN_KEY 7 /* Number of keywords in max key width */ + uint64_t __otx2_io kw[NPC_MAX_KWS_IN_KEY]; + uint64_t __otx2_io kw_mask[NPC_MAX_KWS_IN_KEY]; + uint64_t __otx2_io action; + uint64_t __otx2_io vtag_action; +}; + +struct npc_mcam_write_entry_req { + struct mbox_msghdr hdr; + struct mcam_entry entry_data; + uint16_t __otx2_io entry; /* MCAM entry to write this match key */ + uint16_t __otx2_io cntr; /* Counter for this MCAM entry */ + uint8_t __otx2_io intf; /* Rx or Tx interface */ + uint8_t __otx2_io enable_entry;/* Enable this MCAM entry ? */ + uint8_t __otx2_io set_cntr; /* Set counter for this entry ? */ +}; + +/* Enable/Disable a given entry */ +struct npc_mcam_ena_dis_entry_req { + struct mbox_msghdr hdr; + uint16_t __otx2_io entry; +}; + +struct npc_mcam_shift_entry_req { + struct mbox_msghdr hdr; +#define NPC_MCAM_MAX_SHIFTS 64 + uint16_t __otx2_io curr_entry[NPC_MCAM_MAX_SHIFTS]; + uint16_t __otx2_io new_entry[NPC_MCAM_MAX_SHIFTS]; + uint16_t __otx2_io shift_count; /* Number of entries to shift */ +}; + +struct npc_mcam_shift_entry_rsp { + struct mbox_msghdr hdr; + /* Index in 'curr_entry', not entry itself */ + uint16_t __otx2_io failed_entry_idx; +}; + +struct npc_mcam_alloc_counter_req { + struct mbox_msghdr hdr; + uint8_t __otx2_io contig; /* Contiguous counters ? */ +#define NPC_MAX_NONCONTIG_COUNTERS 64 + uint16_t __otx2_io count; /* Number of counters requested */ +}; + +struct npc_mcam_alloc_counter_rsp { + struct mbox_msghdr hdr; + /* Counter alloc'ed or start idx if contiguous. + * Invalid incase of non-contiguous. + */ + uint16_t __otx2_io cntr; + uint16_t __otx2_io count; /* Number of counters allocated */ + uint16_t __otx2_io cntr_list[NPC_MAX_NONCONTIG_COUNTERS]; +}; + +struct npc_mcam_oper_counter_req { + struct mbox_msghdr hdr; + uint16_t __otx2_io cntr; /* Free a counter or clear/fetch it's stats */ +}; + +struct npc_mcam_oper_counter_rsp { + struct mbox_msghdr hdr; + /* valid only while fetching counter's stats */ + uint64_t __otx2_io stat; +}; + +struct npc_mcam_unmap_counter_req { + struct mbox_msghdr hdr; + uint16_t __otx2_io cntr; + uint16_t __otx2_io entry; /* Entry and counter to be unmapped */ + uint8_t __otx2_io all; /* Unmap all entries using this counter ? 
*/ +}; + +struct npc_mcam_alloc_and_write_entry_req { + struct mbox_msghdr hdr; + struct mcam_entry entry_data; + uint16_t __otx2_io ref_entry; + uint8_t __otx2_io priority; /* Lower or higher w.r.t ref_entry */ + uint8_t __otx2_io intf; /* Rx or Tx interface */ + uint8_t __otx2_io enable_entry;/* Enable this MCAM entry ? */ + uint8_t __otx2_io alloc_cntr; /* Allocate counter and map ? */ +}; + +struct npc_mcam_alloc_and_write_entry_rsp { + struct mbox_msghdr hdr; + uint16_t __otx2_io entry; + uint16_t __otx2_io cntr; +}; + +struct npc_get_kex_cfg_rsp { + struct mbox_msghdr hdr; + uint64_t __otx2_io rx_keyx_cfg; /* NPC_AF_INTF(0)_KEX_CFG */ + uint64_t __otx2_io tx_keyx_cfg; /* NPC_AF_INTF(1)_KEX_CFG */ +#define NPC_MAX_INTF 2 +#define NPC_MAX_LID 8 +#define NPC_MAX_LT 16 +#define NPC_MAX_LD 2 +#define NPC_MAX_LFL 16 + /* NPC_AF_KEX_LDATA(0..1)_FLAGS_CFG */ + uint64_t __otx2_io kex_ld_flags[NPC_MAX_LD]; + /* NPC_AF_INTF(0..1)_LID(0..7)_LT(0..15)_LD(0..1)_CFG */ + uint64_t __otx2_io + intf_lid_lt_ld[NPC_MAX_INTF][NPC_MAX_LID][NPC_MAX_LT][NPC_MAX_LD]; + /* NPC_AF_INTF(0..1)_LDATA(0..1)_FLAGS(0..15)_CFG */ + uint64_t __otx2_io + intf_ld_flags[NPC_MAX_INTF][NPC_MAX_LD][NPC_MAX_LFL]; +#define MKEX_NAME_LEN 128 + uint8_t __otx2_io mkex_pfl_name[MKEX_NAME_LEN]; +}; + +/* TIM mailbox error codes + * Range 801 - 900. + */ +enum tim_af_status { + TIM_AF_NO_RINGS_LEFT = -801, + TIM_AF_INVALID_NPA_PF_FUNC = -802, + TIM_AF_INVALID_SSO_PF_FUNC = -803, + TIM_AF_RING_STILL_RUNNING = -804, + TIM_AF_LF_INVALID = -805, + TIM_AF_CSIZE_NOT_ALIGNED = -806, + TIM_AF_CSIZE_TOO_SMALL = -807, + TIM_AF_CSIZE_TOO_BIG = -808, + TIM_AF_INTERVAL_TOO_SMALL = -809, + TIM_AF_INVALID_BIG_ENDIAN_VALUE = -810, + TIM_AF_INVALID_CLOCK_SOURCE = -811, + TIM_AF_GPIO_CLK_SRC_NOT_ENABLED = -812, + TIM_AF_INVALID_BSIZE = -813, + TIM_AF_INVALID_ENABLE_PERIODIC = -814, + TIM_AF_INVALID_ENABLE_DONTFREE = -815, + TIM_AF_ENA_DONTFRE_NSET_PERIODIC = -816, + TIM_AF_RING_ALREADY_DISABLED = -817, +}; + +enum tim_clk_srcs { + TIM_CLK_SRCS_TENNS = 0, + TIM_CLK_SRCS_GPIO = 1, + TIM_CLK_SRCS_GTI = 2, + TIM_CLK_SRCS_PTP = 3, + TIM_CLK_SRSC_INVALID, +}; + +enum tim_gpio_edge { + TIM_GPIO_NO_EDGE = 0, + TIM_GPIO_LTOH_TRANS = 1, + TIM_GPIO_HTOL_TRANS = 2, + TIM_GPIO_BOTH_TRANS = 3, + TIM_GPIO_INVALID, +}; + +enum ptp_op { + PTP_OP_ADJFINE = 0, /* adjfine(req.scaled_ppm); */ + PTP_OP_GET_CLOCK = 1, /* rsp.clk = get_clock() */ +}; + +struct ptp_req { + struct mbox_msghdr hdr; + uint8_t __otx2_io op; + int64_t __otx2_io scaled_ppm; +}; + +struct ptp_rsp { + struct mbox_msghdr hdr; + uint64_t __otx2_io clk; +}; + +struct get_hw_cap_rsp { + struct mbox_msghdr hdr; + /* Schq mapping fixed or flexible */ + uint8_t __otx2_io nix_fixed_txschq_mapping; + uint8_t __otx2_io nix_express_traffic; /* Are express links supported */ + uint8_t __otx2_io nix_shaping; /* Is shaping and coloring supported */ +}; + +struct ndc_sync_op { + struct mbox_msghdr hdr; + uint8_t __otx2_io nix_lf_tx_sync; + uint8_t __otx2_io nix_lf_rx_sync; + uint8_t __otx2_io npa_lf_sync; +}; + +struct tim_lf_alloc_req { + struct mbox_msghdr hdr; + uint16_t __otx2_io ring; + uint16_t __otx2_io npa_pf_func; + uint16_t __otx2_io sso_pf_func; +}; + +struct tim_ring_req { + struct mbox_msghdr hdr; + uint16_t __otx2_io ring; +}; + +struct tim_config_req { + struct mbox_msghdr hdr; + uint16_t __otx2_io ring; + uint8_t __otx2_io bigendian; + uint8_t __otx2_io clocksource; + uint8_t __otx2_io enableperiodic; + uint8_t __otx2_io enabledontfreebuffer; + uint32_t __otx2_io bucketsize; + uint32_t __otx2_io 
chunksize; + uint32_t __otx2_io interval; +}; + +struct tim_lf_alloc_rsp { + struct mbox_msghdr hdr; + uint64_t __otx2_io tenns_clk; +}; + +struct tim_enable_rsp { + struct mbox_msghdr hdr; + uint64_t __otx2_io timestarted; + uint32_t __otx2_io currentbucket; +}; + #endif /* __OTX2_MBOX_H__ */ From patchwork Mon Jun 17 15:55:14 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jerin Jacob Kollanukkaran X-Patchwork-Id: 54853 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 614311BF3C; Mon, 17 Jun 2019 17:56:11 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by dpdk.org (Postfix) with ESMTP id 337901BF35 for ; Mon, 17 Jun 2019 17:56:08 +0200 (CEST) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.16.0.27/8.16.0.27) with SMTP id x5HFpuh2001280 for ; Mon, 17 Jun 2019 08:56:07 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0818; bh=YVxDibzGt/k+5aJ1xuWv15KG7JP08auvzsJkYXUqly4=; b=n8OMfrBqFQg9Gy7jktayfajQT1/0zyLNPPhyy5+6RyXmSYg2BeuzQPXCU3ZzMF1Vlh31 d4chLzHFpJ5MWVONix1A5dSld4IZHz/+RSwC5oLsl4hh4z2LVQKyCjGH5UPwHzufRn8P b5fip0QGYU9uOJYA84iQQlmT13YwirYB0KGK0yI5e4cDim8Skf2rOMjem0ptPWKvCndC mfy5twczwIDxn8x0JetocWEGqnWc4rbMV5cW+CcL3srP11vaLz9/KRH78y8TX1kMZh62 GtOpaNA+z0D2rEXXXLovOPIP1ePmgfY2JobkfbdFxnpe/vFHJVgxkap2d1OXjSOTb6V5 yQ== Received: from sc-exch01.marvell.com ([199.233.58.181]) by mx0b-0016f401.pphosted.com with ESMTP id 2t506hyaxb-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT) for ; Mon, 17 Jun 2019 08:56:07 -0700 Received: from SC-EXCH03.marvell.com (10.93.176.83) by SC-EXCH01.marvell.com (10.93.176.81) with Microsoft SMTP Server (TLS) id 15.0.1367.3; Mon, 17 Jun 2019 08:56:04 -0700 Received: from maili.marvell.com (10.93.176.43) by SC-EXCH03.marvell.com (10.93.176.83) with Microsoft SMTP Server id 15.0.1367.3 via Frontend Transport; Mon, 17 Jun 2019 08:56:04 -0700 Received: from jerin-lab.marvell.com (jerin-lab.marvell.com [10.28.34.14]) by maili.marvell.com (Postfix) with ESMTP id CC9753F703F; Mon, 17 Jun 2019 08:56:02 -0700 (PDT) From: To: , Jerin Jacob , Nithin Dabilpuram , Vamsi Attunuru Date: Mon, 17 Jun 2019 21:25:14 +0530 Message-ID: <20190617155537.36144-5-jerinj@marvell.com> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20190617155537.36144-1-jerinj@marvell.com> References: <20190601014905.45531-1-jerinj@marvell.com> <20190617155537.36144-1-jerinj@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:, , definitions=2019-06-17_07:, , signatures=0 Subject: [dpdk-dev] [PATCH v3 04/27] common/octeontx2: add mailbox base support infra X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Jerin Jacob This patch adds mailbox init and fini support. Each RVU device has a dedicated 64KB mailbox region shared with its peer for communication. RVU AF has a separate mailbox region shared with each of RVU PFs and an RVU PF has a separate region shared with each of it's VF. 
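As a usage sketch (illustrative only), a PF driver could bring its AF mailbox up and down with the APIs added here. The BAR addresses and the single-peer count in the example are assumptions, not values defined by this patch; only otx2_mbox_init()/otx2_mbox_fini() and the MBOX_DIR_PFAF direction come from this series.

/* Minimal sketch: hand the 64KB mailbox region and CSR base to the library
 * and set up per-peer state. 'mbox_bar' and 'csr_bar' are placeholder
 * addresses for illustration.
 */
static int
example_pfaf_mbox_setup(struct otx2_mbox *mbox, uintptr_t mbox_bar,
			uintptr_t csr_bar)
{
	int rc;

	/* A PF talks to a single peer (the AF), hence ndevs = 1 */
	rc = otx2_mbox_init(mbox, mbox_bar, csr_bar, MBOX_DIR_PFAF, 1);
	if (rc)
		return rc;	/* -ENODEV or -ENOMEM from this patch */

	/* ... allocate and exchange messages here ... */

	otx2_mbox_fini(mbox);	/* frees per-device state, clears bases */
	return 0;
}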
Signed-off-by: Jerin Jacob Signed-off-by: Vamsi Attunuru Signed-off-by: Nithin Dabilpuram --- drivers/common/octeontx2/otx2_mbox.c | 133 +++++++++++++++++++++++++++ drivers/common/octeontx2/otx2_mbox.h | 5 + 2 files changed, 138 insertions(+) diff --git a/drivers/common/octeontx2/otx2_mbox.c b/drivers/common/octeontx2/otx2_mbox.c index c9cdbdbbc..cb03f6503 100644 --- a/drivers/common/octeontx2/otx2_mbox.c +++ b/drivers/common/octeontx2/otx2_mbox.c @@ -2,4 +2,137 @@ * Copyright(C) 2019 Marvell International Ltd. */ +#include +#include +#include +#include + +#include +#include + #include "otx2_mbox.h" + +#define RVU_AF_AFPF_MBOX0 (0x02000) +#define RVU_AF_AFPF_MBOX1 (0x02008) + +#define RVU_PF_PFAF_MBOX0 (0xC00) +#define RVU_PF_PFAF_MBOX1 (0xC08) + +#define RVU_PF_VFX_PFVF_MBOX0 (0x0000) +#define RVU_PF_VFX_PFVF_MBOX1 (0x0008) + +#define RVU_VF_VFPF_MBOX0 (0x0000) +#define RVU_VF_VFPF_MBOX1 (0x0008) + +void +otx2_mbox_fini(struct otx2_mbox *mbox) +{ + mbox->reg_base = 0; + mbox->hwbase = 0; + free(mbox->dev); + mbox->dev = NULL; +} + +void +otx2_mbox_reset(struct otx2_mbox *mbox, int devid) +{ + struct otx2_mbox_dev *mdev = &mbox->dev[devid]; + struct mbox_hdr *tx_hdr = + (struct mbox_hdr *)((uintptr_t)mdev->mbase + mbox->tx_start); + struct mbox_hdr *rx_hdr = + (struct mbox_hdr *)((uintptr_t)mdev->mbase + mbox->rx_start); + + rte_spinlock_lock(&mdev->mbox_lock); + mdev->msg_size = 0; + mdev->rsp_size = 0; + tx_hdr->msg_size = 0; + tx_hdr->num_msgs = 0; + rx_hdr->msg_size = 0; + rx_hdr->num_msgs = 0; + rte_spinlock_unlock(&mdev->mbox_lock); +} + +int +otx2_mbox_init(struct otx2_mbox *mbox, uintptr_t hwbase, + uintptr_t reg_base, int direction, int ndevs) +{ + struct otx2_mbox_dev *mdev; + int devid; + + mbox->reg_base = reg_base; + mbox->hwbase = hwbase; + + switch (direction) { + case MBOX_DIR_AFPF: + case MBOX_DIR_PFVF: + mbox->tx_start = MBOX_DOWN_TX_START; + mbox->rx_start = MBOX_DOWN_RX_START; + mbox->tx_size = MBOX_DOWN_TX_SIZE; + mbox->rx_size = MBOX_DOWN_RX_SIZE; + break; + case MBOX_DIR_PFAF: + case MBOX_DIR_VFPF: + mbox->tx_start = MBOX_DOWN_RX_START; + mbox->rx_start = MBOX_DOWN_TX_START; + mbox->tx_size = MBOX_DOWN_RX_SIZE; + mbox->rx_size = MBOX_DOWN_TX_SIZE; + break; + case MBOX_DIR_AFPF_UP: + case MBOX_DIR_PFVF_UP: + mbox->tx_start = MBOX_UP_TX_START; + mbox->rx_start = MBOX_UP_RX_START; + mbox->tx_size = MBOX_UP_TX_SIZE; + mbox->rx_size = MBOX_UP_RX_SIZE; + break; + case MBOX_DIR_PFAF_UP: + case MBOX_DIR_VFPF_UP: + mbox->tx_start = MBOX_UP_RX_START; + mbox->rx_start = MBOX_UP_TX_START; + mbox->tx_size = MBOX_UP_RX_SIZE; + mbox->rx_size = MBOX_UP_TX_SIZE; + break; + default: + return -ENODEV; + } + + switch (direction) { + case MBOX_DIR_AFPF: + case MBOX_DIR_AFPF_UP: + mbox->trigger = RVU_AF_AFPF_MBOX0; + mbox->tr_shift = 4; + break; + case MBOX_DIR_PFAF: + case MBOX_DIR_PFAF_UP: + mbox->trigger = RVU_PF_PFAF_MBOX1; + mbox->tr_shift = 0; + break; + case MBOX_DIR_PFVF: + case MBOX_DIR_PFVF_UP: + mbox->trigger = RVU_PF_VFX_PFVF_MBOX0; + mbox->tr_shift = 12; + break; + case MBOX_DIR_VFPF: + case MBOX_DIR_VFPF_UP: + mbox->trigger = RVU_VF_VFPF_MBOX1; + mbox->tr_shift = 0; + break; + default: + return -ENODEV; + } + + mbox->dev = malloc(ndevs * sizeof(struct otx2_mbox_dev)); + if (!mbox->dev) { + otx2_mbox_fini(mbox); + return -ENOMEM; + } + mbox->ndevs = ndevs; + for (devid = 0; devid < ndevs; devid++) { + mdev = &mbox->dev[devid]; + mdev->mbase = (void *)(mbox->hwbase + (devid * MBOX_SIZE)); + rte_spinlock_init(&mdev->mbox_lock); + /* Init header to reset value */ + 
otx2_mbox_reset(mbox, devid); + } + + return 0; +} diff --git a/drivers/common/octeontx2/otx2_mbox.h b/drivers/common/octeontx2/otx2_mbox.h index e2d79c070..ac7de788f 100644 --- a/drivers/common/octeontx2/otx2_mbox.h +++ b/drivers/common/octeontx2/otx2_mbox.h @@ -1333,4 +1333,9 @@ struct tim_enable_rsp { uint32_t __otx2_io currentbucket; }; +void otx2_mbox_reset(struct otx2_mbox *mbox, int devid); +int otx2_mbox_init(struct otx2_mbox *mbox, uintptr_t hwbase, + uintptr_t reg_base, int direction, int ndevs); +void otx2_mbox_fini(struct otx2_mbox *mbox); + #endif /* __OTX2_MBOX_H__ */ From patchwork Mon Jun 17 15:55:15 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jerin Jacob Kollanukkaran X-Patchwork-Id: 54856 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 8B6CD1BF60; Mon, 17 Jun 2019 17:56:22 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by dpdk.org (Postfix) with ESMTP id D8D991BF37 for ; Mon, 17 Jun 2019 17:56:14 +0200 (CEST) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.16.0.27/8.16.0.27) with SMTP id x5HFppxK000998 for ; Mon, 17 Jun 2019 08:56:14 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0818; bh=vQ+Gi/hsSTh9p7NkkPeM45EWx0sat8qLWvifkFdnsJ0=; b=lRu/saQg5I4GFRhqrqbPBTcau6U5LUvXiKml+2yrJV9010xNMkPUZCTzpkrF4csFOG9w 3ZkiM8dHhMZkOSwH05LmiJlzN0ZLw5shUrGvIK3Ns/Ezsh5wvlM2Xvp7jyCwyZXfEGLh hxDKE416GHSpXe24FRSZ31if5wm4GRuKdUnSs98+/8QkoiV90ejIy224t3nprvk9JMf5 FcOOgwWhLgNAN0YcYC+gSQ9/g8FFL7NEorLnjBWwNVyLiHmLQoOLikDaMj9vbuJhzzE7 F8JN/kxdMgW425F7qwsqxoDsEr1Uyouwc4o71t58TnOuCt9jCR1Z6xCisXXOgTAHTACW Bw== Received: from sc-exch04.marvell.com ([199.233.58.184]) by mx0b-0016f401.pphosted.com with ESMTP id 2t506hyax9-6 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT) for ; Mon, 17 Jun 2019 08:56:14 -0700 Received: from SC-EXCH01.marvell.com (10.93.176.81) by SC-EXCH04.marvell.com (10.93.176.84) with Microsoft SMTP Server (TLS) id 15.0.1367.3; Mon, 17 Jun 2019 08:56:06 -0700 Received: from maili.marvell.com (10.93.176.43) by SC-EXCH01.marvell.com (10.93.176.81) with Microsoft SMTP Server id 15.0.1367.3 via Frontend Transport; Mon, 17 Jun 2019 08:56:06 -0700 Received: from jerin-lab.marvell.com (jerin-lab.marvell.com [10.28.34.14]) by maili.marvell.com (Postfix) with ESMTP id 5B8AB3F703F; Mon, 17 Jun 2019 08:56:05 -0700 (PDT) From: To: , Jerin Jacob , Nithin Dabilpuram , Vamsi Attunuru Date: Mon, 17 Jun 2019 21:25:15 +0530 Message-ID: <20190617155537.36144-6-jerinj@marvell.com> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20190617155537.36144-1-jerinj@marvell.com> References: <20190601014905.45531-1-jerinj@marvell.com> <20190617155537.36144-1-jerinj@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:, , definitions=2019-06-17_07:, , signatures=0 Subject: [dpdk-dev] [PATCH v3 05/27] common/octeontx2: add runtime log infra X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: 
Jerin Jacob Various consumers of this common code need runtime logging infrastructure. This patch adds the same. Signed-off-by: Jerin Jacob --- drivers/common/octeontx2/Makefile | 1 + drivers/common/octeontx2/meson.build | 1 + drivers/common/octeontx2/otx2_common.c | 85 +++++++++++++++++++ drivers/common/octeontx2/otx2_common.h | 36 ++++++++ .../rte_common_octeontx2_version.map | 11 +++ 5 files changed, 134 insertions(+) create mode 100644 drivers/common/octeontx2/otx2_common.c diff --git a/drivers/common/octeontx2/Makefile b/drivers/common/octeontx2/Makefile index e5737532a..3fd67f0ab 100644 --- a/drivers/common/octeontx2/Makefile +++ b/drivers/common/octeontx2/Makefile @@ -25,6 +25,7 @@ LIBABIVER := 1 # all source are stored in SRCS-y # SRCS-y += otx2_mbox.c +SRCS-y += otx2_common.c LDLIBS += -lrte_eal LDLIBS += -lrte_ethdev diff --git a/drivers/common/octeontx2/meson.build b/drivers/common/octeontx2/meson.build index 34f8aaea7..4771b1942 100644 --- a/drivers/common/octeontx2/meson.build +++ b/drivers/common/octeontx2/meson.build @@ -4,6 +4,7 @@ sources= files( 'otx2_mbox.c', + 'otx2_common.c', ) extra_flags = [] diff --git a/drivers/common/octeontx2/otx2_common.c b/drivers/common/octeontx2/otx2_common.c new file mode 100644 index 000000000..a4b91b4f1 --- /dev/null +++ b/drivers/common/octeontx2/otx2_common.c @@ -0,0 +1,85 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2019 Marvell International Ltd. + */ + +#include + +#include "otx2_common.h" + +/** + * @internal + */ +int otx2_logtype_base; +/** + * @internal + */ +int otx2_logtype_mbox; +/** + * @internal + */ +int otx2_logtype_npa; +/** + * @internal + */ +int otx2_logtype_nix; +/** + * @internal + */ +int otx2_logtype_npc; +/** + * @internal + */ +int otx2_logtype_tm; +/** + * @internal + */ +int otx2_logtype_sso; +/** + * @internal + */ +int otx2_logtype_tim; +/** + * @internal + */ +int otx2_logtype_dpi; + +RTE_INIT(otx2_log_init); +static void +otx2_log_init(void) +{ + otx2_logtype_base = rte_log_register("pmd.octeontx2.base"); + if (otx2_logtype_base >= 0) + rte_log_set_level(otx2_logtype_base, RTE_LOG_NOTICE); + + otx2_logtype_mbox = rte_log_register("pmd.octeontx2.mbox"); + if (otx2_logtype_mbox >= 0) + rte_log_set_level(otx2_logtype_mbox, RTE_LOG_NOTICE); + + otx2_logtype_npa = rte_log_register("pmd.mempool.octeontx2"); + if (otx2_logtype_npa >= 0) + rte_log_set_level(otx2_logtype_npa, RTE_LOG_NOTICE); + + otx2_logtype_nix = rte_log_register("pmd.net.octeontx2"); + if (otx2_logtype_nix >= 0) + rte_log_set_level(otx2_logtype_nix, RTE_LOG_NOTICE); + + otx2_logtype_npc = rte_log_register("pmd.net.octeontx2.flow"); + if (otx2_logtype_npc >= 0) + rte_log_set_level(otx2_logtype_npc, RTE_LOG_NOTICE); + + otx2_logtype_tm = rte_log_register("pmd.net.octeontx2.tm"); + if (otx2_logtype_tm >= 0) + rte_log_set_level(otx2_logtype_tm, RTE_LOG_NOTICE); + + otx2_logtype_sso = rte_log_register("pmd.event.octeontx2"); + if (otx2_logtype_sso >= 0) + rte_log_set_level(otx2_logtype_sso, RTE_LOG_NOTICE); + + otx2_logtype_tim = rte_log_register("pmd.event.octeontx2.timer"); + if (otx2_logtype_tim >= 0) + rte_log_set_level(otx2_logtype_tim, RTE_LOG_NOTICE); + + otx2_logtype_dpi = rte_log_register("pmd.raw.octeontx2.dpi"); + if (otx2_logtype_dpi >= 0) + rte_log_set_level(otx2_logtype_dpi, RTE_LOG_NOTICE); +} diff --git a/drivers/common/octeontx2/otx2_common.h b/drivers/common/octeontx2/otx2_common.h index b0c19266b..58fcf5a41 100644 --- a/drivers/common/octeontx2/otx2_common.h +++ b/drivers/common/octeontx2/otx2_common.h @@ -33,6 
+33,42 @@ #define __hot __attribute__((hot)) #endif +/* Log */ +extern int otx2_logtype_base; +extern int otx2_logtype_mbox; +extern int otx2_logtype_npa; +extern int otx2_logtype_nix; +extern int otx2_logtype_sso; +extern int otx2_logtype_npc; +extern int otx2_logtype_tm; +extern int otx2_logtype_tim; +extern int otx2_logtype_dpi; + +#define OTX2_CLNRM "\x1b[0m" +#define OTX2_CLRED "\x1b[31m" + +#define otx2_err(fmt, args...) \ + RTE_LOG(ERR, PMD, ""OTX2_CLRED"%s():%u " fmt OTX2_CLNRM"\n", \ + __func__, __LINE__, ## args) + +#define otx2_info(fmt, args...) \ + RTE_LOG(INFO, PMD, fmt"\n", ## args) + +#define otx2_dbg(subsystem, fmt, args...) \ + rte_log(RTE_LOG_DEBUG, otx2_logtype_ ## subsystem, \ + "[%s] %s():%u " fmt "\n", \ + #subsystem, __func__, __LINE__, ##args) + +#define otx2_base_dbg(fmt, ...) otx2_dbg(base, fmt, ##__VA_ARGS__) +#define otx2_mbox_dbg(fmt, ...) otx2_dbg(mbox, fmt, ##__VA_ARGS__) +#define otx2_npa_dbg(fmt, ...) otx2_dbg(npa, fmt, ##__VA_ARGS__) +#define otx2_nix_dbg(fmt, ...) otx2_dbg(nix, fmt, ##__VA_ARGS__) +#define otx2_sso_dbg(fmt, ...) otx2_dbg(sso, fmt, ##__VA_ARGS__) +#define otx2_npc_dbg(fmt, ...) otx2_dbg(npc, fmt, ##__VA_ARGS__) +#define otx2_tm_dbg(fmt, ...) otx2_dbg(tm, fmt, ##__VA_ARGS__) +#define otx2_tim_dbg(fmt, ...) otx2_dbg(tim, fmt, ##__VA_ARGS__) +#define otx2_dpi_dbg(fmt, ...) otx2_dbg(dpi, fmt, ##__VA_ARGS__) + /* IO Access */ #define otx2_read64(addr) rte_read64_relaxed((void *)(addr)) #define otx2_write64(val, addr) rte_write64_relaxed((val), (void *)(addr)) diff --git a/drivers/common/octeontx2/rte_common_octeontx2_version.map b/drivers/common/octeontx2/rte_common_octeontx2_version.map index 9a61188cd..02f03e177 100644 --- a/drivers/common/octeontx2/rte_common_octeontx2_version.map +++ b/drivers/common/octeontx2/rte_common_octeontx2_version.map @@ -1,4 +1,15 @@ DPDK_19.08 { + global: + + otx2_logtype_base; + otx2_logtype_dpi; + otx2_logtype_mbox; + otx2_logtype_npa; + otx2_logtype_npc; + otx2_logtype_nix; + otx2_logtype_sso; + otx2_logtype_tm; + otx2_logtype_tim; local: *; }; From patchwork Mon Jun 17 15:55:16 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jerin Jacob Kollanukkaran X-Patchwork-Id: 54855 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id C7F411BF4B; Mon, 17 Jun 2019 17:56:19 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by dpdk.org (Postfix) with ESMTP id 14BD01BF43 for ; Mon, 17 Jun 2019 17:56:12 +0200 (CEST) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.16.0.27/8.16.0.27) with SMTP id x5HFpplb000980 for ; Mon, 17 Jun 2019 08:56:12 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0818; bh=BlkQgFy08Ea+fsj6VcHeugPiQuJYvZa4N/VtSUiINDM=; b=p+vrRwhZg7j5vspN9c7qq95K1xufvBj5U873oV1cWlNlBOrbYaB8z1xulvBquRxWeLhm 5BwrHGJKAZXQOuoWXyqrFzdhTcamZoBttXPTDHwXiKWS4uL5BuydwFl700cgp/mlkurZ Dbv6D047pZVrXdNftIsrAQX/Ksf/GdKGCxiThH34Sl5i89scDRhCfXooKyKtmkBOyLCG h8AcB0ke4vtir4RFw6mhotsXFrdyX3HYcC0Uu81K6JbB4KEn5+s6tmE1/4APW8Pqi0rr rKmjznYK9oabdznnpmIEpqXSILLTZFc3ZuAQ7yF58CIJCDzv9Z6lCVUoqjjA1dHV13om DQ== Received: from sc-exch03.marvell.com ([199.233.58.183]) by 
mx0b-0016f401.pphosted.com with ESMTP id 2t506hyaxq-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT) for ; Mon, 17 Jun 2019 08:56:11 -0700 Received: from SC-EXCH01.marvell.com (10.93.176.81) by SC-EXCH03.marvell.com (10.93.176.83) with Microsoft SMTP Server (TLS) id 15.0.1367.3; Mon, 17 Jun 2019 08:56:09 -0700 Received: from maili.marvell.com (10.93.176.43) by SC-EXCH01.marvell.com (10.93.176.81) with Microsoft SMTP Server id 15.0.1367.3 via Frontend Transport; Mon, 17 Jun 2019 08:56:09 -0700 Received: from jerin-lab.marvell.com (jerin-lab.marvell.com [10.28.34.14]) by maili.marvell.com (Postfix) with ESMTP id E29473F7041; Mon, 17 Jun 2019 08:56:07 -0700 (PDT) From: To: , Jerin Jacob , Nithin Dabilpuram , Vamsi Attunuru Date: Mon, 17 Jun 2019 21:25:16 +0530 Message-ID: <20190617155537.36144-7-jerinj@marvell.com> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20190617155537.36144-1-jerinj@marvell.com> References: <20190601014905.45531-1-jerinj@marvell.com> <20190617155537.36144-1-jerinj@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:, , definitions=2019-06-17_07:, , signatures=0 Subject: [dpdk-dev] [PATCH v3 06/27] common/octeontx2: add mailbox send and receive support X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Jerin Jacob Each RVU device has a dedicated 64KB mailbox region shared with its peer for communication. RVU AF has a separate mailbox region shared with each of the RVU PFs, and an RVU PF has a separate region shared with each of its VFs. This patch uses this 64KB memory and implements mailbox send and receive support. This set of APIs is used by this driver (RVU AF) and other RVU PF/VF drivers, e.g. ethdev, cryptodev, etc.
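For context, the typical request/response flow these APIs enable looks like the sketch below. It mirrors what otx2_send_ready_msg() in this patch does; the device id 0 (the AF peer) comes from the patch, while the wrapper function name and its error codes are chosen here for illustration.

/* Sketch: allocate a READY request, ring the peer, wait for the response.
 * Assumes the mailbox was already set up with otx2_mbox_init().
 */
static int
example_query_pcifunc(struct otx2_mbox *mbox, uint16_t *pcifunc)
{
	struct ready_msg_rsp *rsp;
	int rc;

	/* Reserve a request slot in the Tx region of peer 0 (the AF) */
	if (otx2_mbox_alloc_msg_ready(mbox) == NULL)
		return -ENOSPC;

	otx2_mbox_msg_send(mbox, 0);	/* publish num_msgs and trigger the peer */

	rc = otx2_mbox_get_rsp(mbox, 0, (void **)&rsp);
	if (rc)
		return rc;	/* response rc, or -EIO on timeout */

	*pcifunc = rsp->hdr.pcifunc;	/* PF/VF function id of this device */
	return 0;
}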
Signed-off-by: Jerin Jacob Signed-off-by: Vamsi Attunuru Signed-off-by: Nithin Dabilpuram --- drivers/common/octeontx2/otx2_mbox.c | 278 ++++++++++++++++++ drivers/common/octeontx2/otx2_mbox.h | 142 +++++++++ .../rte_common_octeontx2_version.map | 7 + 3 files changed, 427 insertions(+) diff --git a/drivers/common/octeontx2/otx2_mbox.c b/drivers/common/octeontx2/otx2_mbox.c index cb03f6503..86559fa98 100644 --- a/drivers/common/octeontx2/otx2_mbox.c +++ b/drivers/common/octeontx2/otx2_mbox.c @@ -24,6 +24,12 @@ #define RVU_VF_VFPF_MBOX0 (0x0000) #define RVU_VF_VFPF_MBOX1 (0x0008) +static inline uint16_t +msgs_offset(void) +{ + return RTE_ALIGN(sizeof(struct mbox_hdr), MBOX_MSG_ALIGN); +} + void otx2_mbox_fini(struct otx2_mbox *mbox) { @@ -136,3 +142,275 @@ otx2_mbox_init(struct otx2_mbox *mbox, uintptr_t hwbase, return 0; } + +/** + * @internal + * Allocate a message response + */ +struct mbox_msghdr * +otx2_mbox_alloc_msg_rsp(struct otx2_mbox *mbox, int devid, int size, + int size_rsp) +{ + struct otx2_mbox_dev *mdev = &mbox->dev[devid]; + struct mbox_msghdr *msghdr = NULL; + + rte_spinlock_lock(&mdev->mbox_lock); + size = RTE_ALIGN(size, MBOX_MSG_ALIGN); + size_rsp = RTE_ALIGN(size_rsp, MBOX_MSG_ALIGN); + /* Check if there is space in mailbox */ + if ((mdev->msg_size + size) > mbox->tx_size - msgs_offset()) + goto exit; + if ((mdev->rsp_size + size_rsp) > mbox->rx_size - msgs_offset()) + goto exit; + if (mdev->msg_size == 0) + mdev->num_msgs = 0; + mdev->num_msgs++; + + msghdr = (struct mbox_msghdr *)(((uintptr_t)mdev->mbase + + mbox->tx_start + msgs_offset() + mdev->msg_size)); + + /* Clear the whole msg region */ + otx2_mbox_memset(msghdr, 0, sizeof(*msghdr) + size); + /* Init message header with reset values */ + msghdr->ver = OTX2_MBOX_VERSION; + mdev->msg_size += size; + mdev->rsp_size += size_rsp; + msghdr->next_msgoff = mdev->msg_size + msgs_offset(); +exit: + rte_spinlock_unlock(&mdev->mbox_lock); + + return msghdr; +} + +/** + * @internal + * Send a mailbox message + */ +void +otx2_mbox_msg_send(struct otx2_mbox *mbox, int devid) +{ + struct otx2_mbox_dev *mdev = &mbox->dev[devid]; + struct mbox_hdr *tx_hdr = + (struct mbox_hdr *)((uintptr_t)mdev->mbase + mbox->tx_start); + struct mbox_hdr *rx_hdr = + (struct mbox_hdr *)((uintptr_t)mdev->mbase + mbox->rx_start); + + /* Reset header for next messages */ + tx_hdr->msg_size = mdev->msg_size; + mdev->msg_size = 0; + mdev->rsp_size = 0; + mdev->msgs_acked = 0; + + /* num_msgs != 0 signals to the peer that the buffer has a number of + * messages. 
So this should be written after copying txmem + */ + tx_hdr->num_msgs = mdev->num_msgs; + rx_hdr->num_msgs = 0; + + /* Sync mbox data into memory */ + rte_wmb(); + + /* The interrupt should be fired after num_msgs is written + * to the shared memory + */ + rte_write64(1, (volatile void *)(mbox->reg_base + + (mbox->trigger | (devid << mbox->tr_shift)))); +} + +/** + * @internal + * Wait and get mailbox response + */ +int +otx2_mbox_get_rsp(struct otx2_mbox *mbox, int devid, void **msg) +{ + struct otx2_mbox_dev *mdev = &mbox->dev[devid]; + struct mbox_msghdr *msghdr; + uint64_t offset; + int rc; + + rc = otx2_mbox_wait_for_rsp(mbox, devid); + if (rc != 1) + return -EIO; + + rte_rmb(); + + offset = mbox->rx_start + + RTE_ALIGN(sizeof(struct mbox_hdr), MBOX_MSG_ALIGN); + msghdr = (struct mbox_msghdr *)((uintptr_t)mdev->mbase + offset); + if (msg != NULL) + *msg = msghdr; + + return msghdr->rc; +} + +/** + * @internal + * Wait and get mailbox response with timeout + */ +int +otx2_mbox_get_rsp_tmo(struct otx2_mbox *mbox, int devid, void **msg, + uint32_t tmo) +{ + struct otx2_mbox_dev *mdev = &mbox->dev[devid]; + struct mbox_msghdr *msghdr; + uint64_t offset; + int rc; + + rc = otx2_mbox_wait_for_rsp_tmo(mbox, devid, tmo); + if (rc != 1) + return -EIO; + + rte_rmb(); + + offset = mbox->rx_start + + RTE_ALIGN(sizeof(struct mbox_hdr), MBOX_MSG_ALIGN); + msghdr = (struct mbox_msghdr *)((uintptr_t)mdev->mbase + offset); + if (msg != NULL) + *msg = msghdr; + + return msghdr->rc; +} + +static int +mbox_wait(struct otx2_mbox *mbox, int devid, uint32_t rst_timo) +{ + volatile struct otx2_mbox_dev *mdev = &mbox->dev[devid]; + uint32_t timeout = 0, sleep = 1; + + while (mdev->num_msgs > mdev->msgs_acked) { + rte_delay_ms(sleep); + timeout += sleep; + if (timeout >= rst_timo) { + struct mbox_hdr *tx_hdr = + (struct mbox_hdr *)((uintptr_t)mdev->mbase + + mbox->tx_start); + struct mbox_hdr *rx_hdr = + (struct mbox_hdr *)((uintptr_t)mdev->mbase + + mbox->rx_start); + + otx2_err("MBOX[devid: %d] message wait timeout %d, " + "num_msgs: %d, msgs_acked: %d " + "(tx/rx num_msgs: %d/%d), msg_size: %d, " + "rsp_size: %d", + devid, timeout, mdev->num_msgs, + mdev->msgs_acked, tx_hdr->num_msgs, + rx_hdr->num_msgs, mdev->msg_size, + mdev->rsp_size); + + return -EIO; + } + rte_rmb(); + } + return 0; +} + +int +otx2_mbox_wait_for_rsp_tmo(struct otx2_mbox *mbox, int devid, uint32_t tmo) +{ + struct otx2_mbox_dev *mdev = &mbox->dev[devid]; + int rc = 0; + + /* Sync with mbox region */ + rte_rmb(); + + if (mbox->trigger == RVU_PF_VFX_PFVF_MBOX1 || + mbox->trigger == RVU_PF_VFX_PFVF_MBOX0) { + /* In case of VF, Wait a bit more to account round trip delay */ + tmo = tmo * 2; + } + + /* Wait message */ + rc = mbox_wait(mbox, devid, tmo); + if (rc) + return rc; + + return mdev->msgs_acked; +} + +/** + * @internal + * Wait for the mailbox response + */ +int +otx2_mbox_wait_for_rsp(struct otx2_mbox *mbox, int devid) +{ + return otx2_mbox_wait_for_rsp_tmo(mbox, devid, MBOX_RSP_TIMEOUT); +} + +int +otx2_mbox_get_availmem(struct otx2_mbox *mbox, int devid) +{ + struct otx2_mbox_dev *mdev = &mbox->dev[devid]; + int avail; + + rte_spinlock_lock(&mdev->mbox_lock); + avail = mbox->tx_size - mdev->msg_size - msgs_offset(); + rte_spinlock_unlock(&mdev->mbox_lock); + + return avail; +} + +int +otx2_send_ready_msg(struct otx2_mbox *mbox, uint16_t *pcifunc) +{ + struct ready_msg_rsp *rsp; + int rc; + + otx2_mbox_alloc_msg_ready(mbox); + + otx2_mbox_msg_send(mbox, 0); + rc = otx2_mbox_get_rsp(mbox, 0, (void *)&rsp); + if (rc) + return rc; + 
+ if (pcifunc) + *pcifunc = rsp->hdr.pcifunc; + + return 0; +} + +int +otx2_reply_invalid_msg(struct otx2_mbox *mbox, int devid, uint16_t pcifunc, + uint16_t id) +{ + struct msg_rsp *rsp; + + rsp = (struct msg_rsp *)otx2_mbox_alloc_msg(mbox, devid, sizeof(*rsp)); + if (!rsp) + return -ENOMEM; + rsp->hdr.id = id; + rsp->hdr.sig = OTX2_MBOX_RSP_SIG; + rsp->hdr.rc = MBOX_MSG_INVALID; + rsp->hdr.pcifunc = pcifunc; + + return 0; +} + +/** + * @internal + * Convert mail box ID to name + */ +const char *otx2_mbox_id2name(uint16_t id) +{ + switch (id) { +#define M(_name, _id, _1, _2, _3) case _id: return # _name; + MBOX_MESSAGES + MBOX_UP_CGX_MESSAGES +#undef M + default : + return "INVALID ID"; + } +} + +int otx2_mbox_id2size(uint16_t id) +{ + switch (id) { +#define M(_1, _id, _2, _req_type, _3) case _id: return sizeof(struct _req_type); + MBOX_MESSAGES + MBOX_UP_CGX_MESSAGES +#undef M + default : + return 0; + } +} diff --git a/drivers/common/octeontx2/otx2_mbox.h b/drivers/common/octeontx2/otx2_mbox.h index ac7de788f..f420a99fc 100644 --- a/drivers/common/octeontx2/otx2_mbox.h +++ b/drivers/common/octeontx2/otx2_mbox.h @@ -1333,9 +1333,151 @@ struct tim_enable_rsp { uint32_t __otx2_io currentbucket; }; +const char *otx2_mbox_id2name(uint16_t id); +int otx2_mbox_id2size(uint16_t id); void otx2_mbox_reset(struct otx2_mbox *mbox, int devid); int otx2_mbox_init(struct otx2_mbox *mbox, uintptr_t hwbase, uintptr_t reg_base, int direction, int ndevs); void otx2_mbox_fini(struct otx2_mbox *mbox); +void otx2_mbox_msg_send(struct otx2_mbox *mbox, int devid); +int otx2_mbox_wait_for_rsp(struct otx2_mbox *mbox, int devid); +int otx2_mbox_wait_for_rsp_tmo(struct otx2_mbox *mbox, int devid, uint32_t tmo); +int otx2_mbox_get_rsp(struct otx2_mbox *mbox, int devid, void **msg); +int otx2_mbox_get_rsp_tmo(struct otx2_mbox *mbox, int devid, void **msg, + uint32_t tmo); +int otx2_mbox_get_availmem(struct otx2_mbox *mbox, int devid); +struct mbox_msghdr *otx2_mbox_alloc_msg_rsp(struct otx2_mbox *mbox, int devid, + int size, int size_rsp); + +static inline struct mbox_msghdr * +otx2_mbox_alloc_msg(struct otx2_mbox *mbox, int devid, int size) +{ + return otx2_mbox_alloc_msg_rsp(mbox, devid, size, 0); +} + +static inline void +otx2_mbox_req_init(uint16_t mbox_id, void *msghdr) +{ + struct mbox_msghdr *hdr = msghdr; + + hdr->sig = OTX2_MBOX_REQ_SIG; + hdr->ver = OTX2_MBOX_VERSION; + hdr->id = mbox_id; + hdr->pcifunc = 0; +} + +static inline void +otx2_mbox_rsp_init(uint16_t mbox_id, void *msghdr) +{ + struct mbox_msghdr *hdr = msghdr; + + hdr->sig = OTX2_MBOX_RSP_SIG; + hdr->rc = -ETIMEDOUT; + hdr->id = mbox_id; +} + +static inline bool +otx2_mbox_nonempty(struct otx2_mbox *mbox, int devid) +{ + struct otx2_mbox_dev *mdev = &mbox->dev[devid]; + bool ret; + + rte_spinlock_lock(&mdev->mbox_lock); + ret = mdev->num_msgs != 0; + rte_spinlock_unlock(&mdev->mbox_lock); + + return ret; +} + +static inline int +otx2_mbox_process(struct otx2_mbox *mbox) +{ + otx2_mbox_msg_send(mbox, 0); + return otx2_mbox_get_rsp(mbox, 0, NULL); +} + +static inline int +otx2_mbox_process_msg(struct otx2_mbox *mbox, void **msg) +{ + otx2_mbox_msg_send(mbox, 0); + return otx2_mbox_get_rsp(mbox, 0, msg); +} + +static inline int +otx2_mbox_process_tmo(struct otx2_mbox *mbox, uint32_t tmo) +{ + otx2_mbox_msg_send(mbox, 0); + return otx2_mbox_get_rsp_tmo(mbox, 0, NULL, tmo); +} + +static inline int +otx2_mbox_process_msg_tmo(struct otx2_mbox *mbox, void **msg, uint32_t tmo) +{ + otx2_mbox_msg_send(mbox, 0); + return otx2_mbox_get_rsp_tmo(mbox, 0, 
msg, tmo); +} + +int otx2_send_ready_msg(struct otx2_mbox *mbox, uint16_t *pf_func /* out */); +int otx2_reply_invalid_msg(struct otx2_mbox *mbox, int devid, uint16_t pf_func, + uint16_t id); + +#define M(_name, _id, _fn_name, _req_type, _rsp_type) \ +static inline struct _req_type \ +*otx2_mbox_alloc_msg_ ## _fn_name(struct otx2_mbox *mbox) \ +{ \ + struct _req_type *req; \ + \ + req = (struct _req_type *)otx2_mbox_alloc_msg_rsp( \ + mbox, 0, sizeof(struct _req_type), \ + sizeof(struct _rsp_type)); \ + if (!req) \ + return NULL; \ + \ + req->hdr.sig = OTX2_MBOX_REQ_SIG; \ + req->hdr.id = _id; \ + otx2_mbox_dbg("id=0x%x (%s)", \ + req->hdr.id, otx2_mbox_id2name(req->hdr.id)); \ + return req; \ +} + +MBOX_MESSAGES +#undef M + +/* This is required for copy operations from device memory which do not work on + * addresses which are unaligned to 16B. This is because of specific + * optimizations to libc memcpy. + */ +static inline volatile void * +otx2_mbox_memcpy(volatile void *d, const volatile void *s, size_t l) +{ + const volatile uint8_t *sb; + volatile uint8_t *db; + size_t i; + + if (!d || !s) + return NULL; + db = (volatile uint8_t *)d; + sb = (const volatile uint8_t *)s; + for (i = 0; i < l; i++) + db[i] = sb[i]; + return d; +} + +/* This is required for memory operations from device memory which do not + * work on addresses which are unaligned to 16B. This is because of specific + * optimizations to libc memset. + */ +static inline void +otx2_mbox_memset(volatile void *d, uint8_t val, size_t l) +{ + volatile uint8_t *db; + size_t i = 0; + + if (!d || !l) + return; + db = (volatile uint8_t *)d; + for (i = 0; i < l; i++) + db[i] = val; +} #endif /* __OTX2_MBOX_H__ */ diff --git a/drivers/common/octeontx2/rte_common_octeontx2_version.map b/drivers/common/octeontx2/rte_common_octeontx2_version.map index 02f03e177..e10a2d3b2 100644 --- a/drivers/common/octeontx2/rte_common_octeontx2_version.map +++ b/drivers/common/octeontx2/rte_common_octeontx2_version.map @@ -11,5 +11,12 @@ DPDK_19.08 { otx2_logtype_tm; otx2_logtype_tim; + otx2_mbox_alloc_msg_rsp; + otx2_mbox_get_rsp; + otx2_mbox_get_rsp_tmo; + otx2_mbox_id2name; + otx2_mbox_msg_send; + otx2_mbox_wait_for_rsp; + local: *; }; From patchwork Mon Jun 17 15:55:17 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jerin Jacob Kollanukkaran X-Patchwork-Id: 54859 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 37ED01BF79; Mon, 17 Jun 2019 17:56:37 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by dpdk.org (Postfix) with ESMTP id 6D2D11BF74 for ; Mon, 17 Jun 2019 17:56:34 +0200 (CEST) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.16.0.27/8.16.0.27) with SMTP id x5HFprCP001049 for ; Mon, 17 Jun 2019 08:56:33 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0818; bh=ufh0TTUDXB1gvlFs9g5yGZUhAVyEysZfI93l/v8Jbe8=; b=IyF/slOc9tbc/vp4lHdbafLRv2e8hy4HQvqB4WkLY2yYNTYpCMTlVYEPM3RqXE5u52nc J2hkfODEFlMKHYGAmoXA8TTHSGvsll90gAYjrp+EuX+Gx1wjsoKxFgDDnKMp0vihkrgL 05fFsXA47eoEgQlTR3VsKX1yhkZbkeKeh45utFqiZAXC3ITwlkZ68p1vINNhMVoJHUP8 
TvACQ3fGaW4lPjxB7ESJ4j43AmA0KgJ+czE1iNenFtfXA6KTegkR0tIZ0qiajrtgG401 31kCeaR4MmfnBEh5+knOFuOVJdFDwWWD0XuPLpC6AUQ+lEEBg0Cs4thcAKKXoZbxOmzn 2Q== Received: from sc-exch01.marvell.com ([199.233.58.181]) by mx0b-0016f401.pphosted.com with ESMTP id 2t506hyaxv-2 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT) for ; Mon, 17 Jun 2019 08:56:33 -0700 Received: from SC-EXCH03.marvell.com (10.93.176.83) by SC-EXCH01.marvell.com (10.93.176.81) with Microsoft SMTP Server (TLS) id 15.0.1367.3; Mon, 17 Jun 2019 08:56:12 -0700 Received: from maili.marvell.com (10.93.176.43) by SC-EXCH03.marvell.com (10.93.176.83) with Microsoft SMTP Server id 15.0.1367.3 via Frontend Transport; Mon, 17 Jun 2019 08:56:12 -0700 Received: from jerin-lab.marvell.com (jerin-lab.marvell.com [10.28.34.14]) by maili.marvell.com (Postfix) with ESMTP id 9751D3F7044; Mon, 17 Jun 2019 08:56:10 -0700 (PDT) From: To: , Jerin Jacob , Nithin Dabilpuram , Vamsi Attunuru Date: Mon, 17 Jun 2019 21:25:17 +0530 Message-ID: <20190617155537.36144-8-jerinj@marvell.com> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20190617155537.36144-1-jerinj@marvell.com> References: <20190601014905.45531-1-jerinj@marvell.com> <20190617155537.36144-1-jerinj@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:, , definitions=2019-06-17_07:, , signatures=0 Subject: [dpdk-dev] [PATCH v3 07/27] common/octeontx2: introduce common device class X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Jerin Jacob Introduce the otx2_dev class to hold octeontx2 PCIe device specific information and operations. All PCIe drivers (ethdev, mempool, cryptodev and eventdev) in octeontx2 inherit this base object to get the common functionality of the PCIe device, such as mailbox creation and interrupt registration. Signed-off-by: Jerin Jacob Signed-off-by: Nithin Dabilpuram --- drivers/common/octeontx2/Makefile | 2 + drivers/common/octeontx2/meson.build | 4 +- drivers/common/octeontx2/otx2_common.h | 14 ++ drivers/common/octeontx2/otx2_dev.c | 197 ++++++++++++++++++ drivers/common/octeontx2/otx2_dev.h | 84 ++++++++ drivers/common/octeontx2/otx2_irq.h | 19 ++ .../rte_common_octeontx2_version.map | 3 + 7 files changed, 321 insertions(+), 2 deletions(-) create mode 100644 drivers/common/octeontx2/otx2_dev.c create mode 100644 drivers/common/octeontx2/otx2_dev.h create mode 100644 drivers/common/octeontx2/otx2_irq.h diff --git a/drivers/common/octeontx2/Makefile b/drivers/common/octeontx2/Makefile index 3fd67f0ab..a6f94553d 100644 --- a/drivers/common/octeontx2/Makefile +++ b/drivers/common/octeontx2/Makefile @@ -11,6 +11,7 @@ LIB = librte_common_octeontx2.a CFLAGS += $(WERROR_FLAGS) CFLAGS += -I$(RTE_SDK)/drivers/common/octeontx2 +CFLAGS += -I$(RTE_SDK)/drivers/bus/pci ifneq ($(CONFIG_RTE_ARCH_64),y) CFLAGS += -Wno-int-to-pointer-cast @@ -24,6 +25,7 @@ LIBABIVER := 1 # # all source are stored in SRCS-y # +SRCS-y += otx2_dev.c SRCS-y += otx2_mbox.c SRCS-y += otx2_common.c diff --git a/drivers/common/octeontx2/meson.build b/drivers/common/octeontx2/meson.build index 4771b1942..feaf75d92 100644 --- a/drivers/common/octeontx2/meson.build +++ b/drivers/common/octeontx2/meson.build @@ -2,7 +2,7 @@ # Copyright(C) 2019 Marvell International Ltd.
# -sources= files( +sources= files('otx2_dev.c', 'otx2_mbox.c', 'otx2_common.c', ) @@ -19,6 +19,6 @@ foreach flag: extra_flags endif endforeach -deps = ['eal', 'ethdev'] +deps = ['eal', 'pci', 'ethdev'] includes += include_directories('../../common/octeontx2', '../../bus/pci') diff --git a/drivers/common/octeontx2/otx2_common.h b/drivers/common/octeontx2/otx2_common.h index 58fcf5a41..b9e7a7f8d 100644 --- a/drivers/common/octeontx2/otx2_common.h +++ b/drivers/common/octeontx2/otx2_common.h @@ -69,6 +69,20 @@ extern int otx2_logtype_dpi; #define otx2_tim_dbg(fmt, ...) otx2_dbg(tim, fmt, ##__VA_ARGS__) #define otx2_dpi_dbg(fmt, ...) otx2_dbg(dpi, fmt, ##__VA_ARGS__) +/* PCI IDs */ +#define PCI_VENDOR_ID_CAVIUM 0x177D +#define PCI_DEVID_OCTEONTX2_RVU_PF 0xA063 +#define PCI_DEVID_OCTEONTX2_RVU_VF 0xA064 +#define PCI_DEVID_OCTEONTX2_RVU_AF 0xA065 +#define PCI_DEVID_OCTEONTX2_RVU_SSO_TIM_PF 0xA0F9 +#define PCI_DEVID_OCTEONTX2_RVU_SSO_TIM_VF 0xA0FA +#define PCI_DEVID_OCTEONTX2_RVU_NPA_PF 0xA0FB +#define PCI_DEVID_OCTEONTX2_RVU_NPA_VF 0xA0FC +#define PCI_DEVID_OCTEONTX2_RVU_CPT_PF 0xA0FD +#define PCI_DEVID_OCTEONTX2_RVU_CPT_VF 0xA0FE +#define PCI_DEVID_OCTEONTX2_RVU_AF_VF 0xA0f8 +#define PCI_DEVID_OCTEONTX2_DPI_VF 0xA081 + /* IO Access */ #define otx2_read64(addr) rte_read64_relaxed((void *)(addr)) #define otx2_write64(val, addr) rte_write64_relaxed((val), (void *)(addr)) diff --git a/drivers/common/octeontx2/otx2_dev.c b/drivers/common/octeontx2/otx2_dev.c new file mode 100644 index 000000000..486b1b7c8 --- /dev/null +++ b/drivers/common/octeontx2/otx2_dev.c @@ -0,0 +1,197 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2019 Marvell International Ltd. + */ + +#include +#include +#include +#include + +#include +#include +#include + +#include "otx2_dev.h" +#include "otx2_mbox.h" + +/* PF/VF message handling timer */ +#define VF_PF_MBOX_TIMER_MS (20 * 1000) + +static void * +mbox_mem_map(off_t off, size_t size) +{ + void *va = MAP_FAILED; + int mem_fd; + + if (size <= 0) + goto error; + + mem_fd = open("/dev/mem", O_RDWR); + if (mem_fd < 0) + goto error; + + va = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, mem_fd, off); + close(mem_fd); + + if (va == MAP_FAILED) + otx2_err("Failed to mmap sz=0x%zx, fd=%d, off=%jd", + size, mem_fd, (intmax_t)off); +error: + return va; +} + +static void +mbox_mem_unmap(void *va, size_t size) +{ + if (va) + munmap(va, size); +} + +static void +otx2_update_pass_hwcap(struct rte_pci_device *pci_dev, struct otx2_dev *dev) +{ + RTE_SET_USED(pci_dev); + + /* Update this logic when we have A1 */ + dev->hwcap |= OTX2_HWCAP_F_A0; +} + +static void +otx2_update_vf_hwcap(struct rte_pci_device *pci_dev, struct otx2_dev *dev) +{ + dev->hwcap = 0; + + switch (pci_dev->id.device_id) { + case PCI_DEVID_OCTEONTX2_RVU_PF: + break; + case PCI_DEVID_OCTEONTX2_RVU_SSO_TIM_VF: + case PCI_DEVID_OCTEONTX2_RVU_NPA_VF: + case PCI_DEVID_OCTEONTX2_RVU_CPT_VF: + case PCI_DEVID_OCTEONTX2_RVU_AF_VF: + case PCI_DEVID_OCTEONTX2_RVU_VF: + dev->hwcap |= OTX2_HWCAP_F_VF; + break; + } +} + +/** + * @internal + * Initialize the otx2 device + */ +int +otx2_dev_init(struct rte_pci_device *pci_dev, void *otx2_dev) +{ + int up_direction = MBOX_DIR_PFAF_UP; + int rc, direction = MBOX_DIR_PFAF; + struct otx2_dev *dev = otx2_dev; + uintptr_t bar2, bar4; + uint64_t bar4_addr; + void *hwbase; + + bar2 = (uintptr_t)pci_dev->mem_resource[2].addr; + bar4 = (uintptr_t)pci_dev->mem_resource[4].addr; + + if (bar2 == 0 || bar4 == 0) { + otx2_err("Failed to get pci bars"); + rc = -ENODEV; + goto 
error; + } + + dev->node = pci_dev->device.numa_node; + dev->maxvf = pci_dev->max_vfs; + dev->bar2 = bar2; + dev->bar4 = bar4; + + otx2_update_vf_hwcap(pci_dev, dev); + otx2_update_pass_hwcap(pci_dev, dev); + + if (otx2_dev_is_vf(dev)) { + direction = MBOX_DIR_VFPF; + up_direction = MBOX_DIR_VFPF_UP; + } + + /* Initialize the local mbox */ + rc = otx2_mbox_init(&dev->mbox_local, bar4, bar2, direction, 1); + if (rc) + goto error; + dev->mbox = &dev->mbox_local; + + rc = otx2_mbox_init(&dev->mbox_up, bar4, bar2, up_direction, 1); + if (rc) + goto error; + + /* Check the readiness of PF/VF */ + rc = otx2_send_ready_msg(dev->mbox, &dev->pf_func); + if (rc) + goto mbox_fini; + + dev->pf = otx2_get_pf(dev->pf_func); + dev->vf = otx2_get_vf(dev->pf_func); + memset(&dev->active_vfs, 0, sizeof(dev->active_vfs)); + + /* Found VF devices in a PF device */ + if (pci_dev->max_vfs > 0) { + + /* Remap mbox area for all vf's */ + bar4_addr = otx2_read64(bar2 + RVU_PF_VF_BAR4_ADDR); + if (bar4_addr == 0) { + rc = -ENODEV; + goto mbox_fini; + } + + hwbase = mbox_mem_map(bar4_addr, MBOX_SIZE * pci_dev->max_vfs); + if (hwbase == MAP_FAILED) { + rc = -ENOMEM; + goto mbox_fini; + } + /* Init mbox object */ + rc = otx2_mbox_init(&dev->mbox_vfpf, (uintptr_t)hwbase, + bar2, MBOX_DIR_PFVF, pci_dev->max_vfs); + if (rc) + goto iounmap; + + /* PF -> VF UP messages */ + rc = otx2_mbox_init(&dev->mbox_vfpf_up, (uintptr_t)hwbase, + bar2, MBOX_DIR_PFVF_UP, pci_dev->max_vfs); + if (rc) + goto mbox_fini; + } + + dev->mbox_active = 1; + return rc; + +iounmap: + mbox_mem_unmap(hwbase, MBOX_SIZE * pci_dev->max_vfs); +mbox_fini: + otx2_mbox_fini(dev->mbox); + otx2_mbox_fini(&dev->mbox_up); +error: + return rc; +} + +/** + * @internal + * Finalize the otx2 device + */ +void +otx2_dev_fini(struct rte_pci_device *pci_dev, void *otx2_dev) +{ + struct otx2_dev *dev = otx2_dev; + struct otx2_mbox *mbox; + + /* Release PF - VF */ + mbox = &dev->mbox_vfpf; + if (mbox->hwbase && mbox->dev) + mbox_mem_unmap((void *)mbox->hwbase, + MBOX_SIZE * pci_dev->max_vfs); + otx2_mbox_fini(mbox); + mbox = &dev->mbox_vfpf_up; + otx2_mbox_fini(mbox); + + /* Release PF - AF */ + mbox = dev->mbox; + otx2_mbox_fini(mbox); + mbox = &dev->mbox_up; + otx2_mbox_fini(mbox); + dev->mbox_active = 0; +} diff --git a/drivers/common/octeontx2/otx2_dev.h b/drivers/common/octeontx2/otx2_dev.h new file mode 100644 index 000000000..a89570b62 --- /dev/null +++ b/drivers/common/octeontx2/otx2_dev.h @@ -0,0 +1,84 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2019 Marvell International Ltd. + */ + +#ifndef _OTX2_DEV_H +#define _OTX2_DEV_H + +#include + +#include "otx2_common.h" +#include "otx2_irq.h" +#include "otx2_mbox.h" + +/* Common HWCAP flags. 
Use from LSB bits */ +#define OTX2_HWCAP_F_VF BIT_ULL(0) /* VF device */ +#define otx2_dev_is_vf(dev) (dev->hwcap & OTX2_HWCAP_F_VF) +#define otx2_dev_is_pf(dev) (!(dev->hwcap & OTX2_HWCAP_F_VF)) +#define otx2_dev_is_lbk(dev) ((dev->hwcap & OTX2_HWCAP_F_VF) && \ + (dev->tx_chan_base < 0x700)) + +#define OTX2_HWCAP_F_A0 BIT_ULL(1) /* A0 device */ +#define otx2_dev_is_A0(dev) (dev->hwcap & OTX2_HWCAP_F_A0) + +struct otx2_dev; + +#define OTX2_DEV \ + int node __rte_cache_aligned; \ + uint16_t pf; \ + int16_t vf; \ + uint16_t pf_func; \ + uint8_t mbox_active; \ + bool drv_inited; \ + uint64_t active_vfs[MAX_VFPF_DWORD_BITS]; \ + uintptr_t bar2; \ + uintptr_t bar4; \ + struct otx2_mbox mbox_local; \ + struct otx2_mbox mbox_up; \ + struct otx2_mbox mbox_vfpf; \ + struct otx2_mbox mbox_vfpf_up; \ + otx2_intr_t intr; \ + int timer_set; /* ~0 : no alarm handling */ \ + uint64_t hwcap; \ + struct otx2_mbox *mbox; \ + uint16_t maxvf; \ + const struct otx2_dev_ops *ops + +struct otx2_dev { + OTX2_DEV; +}; + +int otx2_dev_init(struct rte_pci_device *pci_dev, void *otx2_dev); +void otx2_dev_fini(struct rte_pci_device *pci_dev, void *otx2_dev); +int otx2_dev_active_vfs(void *otx2_dev); + +#define RVU_PFVF_PF_SHIFT 10 +#define RVU_PFVF_PF_MASK 0x3F +#define RVU_PFVF_FUNC_SHIFT 0 +#define RVU_PFVF_FUNC_MASK 0x3FF + +static inline int +otx2_get_vf(uint16_t pf_func) +{ + return (((pf_func >> RVU_PFVF_FUNC_SHIFT) & RVU_PFVF_FUNC_MASK) - 1); +} + +static inline int +otx2_get_pf(uint16_t pf_func) +{ + return (pf_func >> RVU_PFVF_PF_SHIFT) & RVU_PFVF_PF_MASK; +} + +static inline int +otx2_pfvf_func(int pf, int vf) +{ + return (pf << RVU_PFVF_PF_SHIFT) | ((vf << RVU_PFVF_FUNC_SHIFT) + 1); +} + +static inline int +otx2_is_afvf(uint16_t pf_func) +{ + return !(pf_func & ~RVU_PFVF_FUNC_MASK); +} + +#endif /* _OTX2_DEV_H */ diff --git a/drivers/common/octeontx2/otx2_irq.h b/drivers/common/octeontx2/otx2_irq.h new file mode 100644 index 000000000..df44ddfba --- /dev/null +++ b/drivers/common/octeontx2/otx2_irq.h @@ -0,0 +1,19 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2019 Marvell International Ltd. 
+ */ + +#ifndef _OTX2_IRQ_H_ +#define _OTX2_IRQ_H_ + +#include +#include + +#include "otx2_common.h" + +typedef struct { +/* 128 devices translate to two 64 bits dwords */ +#define MAX_VFPF_DWORD_BITS 2 + uint64_t bits[MAX_VFPF_DWORD_BITS]; +} otx2_intr_t; + +#endif /* _OTX2_IRQ_H_ */ diff --git a/drivers/common/octeontx2/rte_common_octeontx2_version.map b/drivers/common/octeontx2/rte_common_octeontx2_version.map index e10a2d3b2..4d9879899 100644 --- a/drivers/common/octeontx2/rte_common_octeontx2_version.map +++ b/drivers/common/octeontx2/rte_common_octeontx2_version.map @@ -1,6 +1,9 @@ DPDK_19.08 { global: + otx2_dev_fini; + otx2_dev_init; + otx2_logtype_base; otx2_logtype_dpi; otx2_logtype_mbox; From patchwork Mon Jun 17 15:55:18 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jerin Jacob Kollanukkaran X-Patchwork-Id: 54857 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 3C7171BF67; Mon, 17 Jun 2019 17:56:25 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by dpdk.org (Postfix) with ESMTP id 28C9F1BF5A for ; Mon, 17 Jun 2019 17:56:17 +0200 (CEST) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.16.0.27/8.16.0.27) with SMTP id x5HFppld000980 for ; Mon, 17 Jun 2019 08:56:16 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0818; bh=GpUD0FWva3/HV9jIOfl/R3efZkT3iYozdFos/qZ8CsU=; b=yoHyIkrA1a/3LE5wZfUhqrC1sedT15eDFTxxb7CuAulaP5dl3W+My4jpsOX/lpKWxE0H 7sSxuG56zk7JPq6KFoAiqHkILxjXD4AOjc/HJgCtMEtYR5opas7p6S0gagr6KjnDtPNf F1d5940uNXVWHMVR4jplALa3SJn38wfZZL5Zcg2FVWIcBqk9FXcEjkivEck2/zOhnYTD wbKKQYLlP5L2eKe0e8Z70ceT3fR8UfAPsW08MtUx6hDY5NRVHzJYpOlztLcAp97iQDgY vssoFypEn8vQZdmym7ZWd+T8G7roj30Lwe4RezB3hrquOcdshxatxyKGyiAn06vZuMQ6 CA== Received: from sc-exch02.marvell.com ([199.233.58.182]) by mx0b-0016f401.pphosted.com with ESMTP id 2t506hyaxt-2 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT) for ; Mon, 17 Jun 2019 08:56:16 -0700 Received: from SC-EXCH01.marvell.com (10.93.176.81) by SC-EXCH02.marvell.com (10.93.176.82) with Microsoft SMTP Server (TLS) id 15.0.1367.3; Mon, 17 Jun 2019 08:56:14 -0700 Received: from maili.marvell.com (10.93.176.43) by SC-EXCH01.marvell.com (10.93.176.81) with Microsoft SMTP Server id 15.0.1367.3 via Frontend Transport; Mon, 17 Jun 2019 08:56:14 -0700 Received: from jerin-lab.marvell.com (jerin-lab.marvell.com [10.28.34.14]) by maili.marvell.com (Postfix) with ESMTP id 327E63F7043; Mon, 17 Jun 2019 08:56:12 -0700 (PDT) From: To: , Jerin Jacob , Nithin Dabilpuram , Vamsi Attunuru CC: Krzysztof Kanas Date: Mon, 17 Jun 2019 21:25:18 +0530 Message-ID: <20190617155537.36144-9-jerinj@marvell.com> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20190617155537.36144-1-jerinj@marvell.com> References: <20190601014905.45531-1-jerinj@marvell.com> <20190617155537.36144-1-jerinj@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:, , definitions=2019-06-17_07:, , signatures=0 Subject: [dpdk-dev] [PATCH v3 08/27] common/octeontx2: introduce irq handling functions X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: 
DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Jerin Jacob All PCIe drivers(ethdev, mempool, cryptodev and eventdev) in octeontx2, needs to handle interrupt for mailbox and error handling. Create a helper function over rte interrupt API to register, unregister, disable interrupts. Signed-off-by: Jerin Jacob Signed-off-by: Vamsi Attunuru Signed-off-by: Krzysztof Kanas --- drivers/common/octeontx2/Makefile | 1 + drivers/common/octeontx2/meson.build | 1 + drivers/common/octeontx2/otx2_irq.c | 254 ++++++++++++++++++ drivers/common/octeontx2/otx2_irq.h | 6 + .../rte_common_octeontx2_version.map | 4 + 5 files changed, 266 insertions(+) create mode 100644 drivers/common/octeontx2/otx2_irq.c diff --git a/drivers/common/octeontx2/Makefile b/drivers/common/octeontx2/Makefile index a6f94553d..78243e555 100644 --- a/drivers/common/octeontx2/Makefile +++ b/drivers/common/octeontx2/Makefile @@ -26,6 +26,7 @@ LIBABIVER := 1 # all source are stored in SRCS-y # SRCS-y += otx2_dev.c +SRCS-y += otx2_irq.c SRCS-y += otx2_mbox.c SRCS-y += otx2_common.c diff --git a/drivers/common/octeontx2/meson.build b/drivers/common/octeontx2/meson.build index feaf75d92..44ac90085 100644 --- a/drivers/common/octeontx2/meson.build +++ b/drivers/common/octeontx2/meson.build @@ -3,6 +3,7 @@ # sources= files('otx2_dev.c', + 'otx2_irq.c', 'otx2_mbox.c', 'otx2_common.c', ) diff --git a/drivers/common/octeontx2/otx2_irq.c b/drivers/common/octeontx2/otx2_irq.c new file mode 100644 index 000000000..fa3206af5 --- /dev/null +++ b/drivers/common/octeontx2/otx2_irq.c @@ -0,0 +1,254 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2019 Marvell International Ltd. + */ + +#include +#include +#include +#include + +#include "otx2_common.h" +#include "otx2_irq.h" + +#ifdef RTE_EAL_VFIO + +#include +#include +#include +#include +#include + +#define MAX_INTR_VEC_ID RTE_MAX_RXTX_INTR_VEC_ID +#define MSIX_IRQ_SET_BUF_LEN (sizeof(struct vfio_irq_set) + \ + sizeof(int) * (MAX_INTR_VEC_ID)) + +static int +irq_get_info(struct rte_intr_handle *intr_handle) +{ + struct vfio_irq_info irq = { .argsz = sizeof(irq) }; + int rc; + + irq.index = VFIO_PCI_MSIX_IRQ_INDEX; + + rc = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_GET_IRQ_INFO, &irq); + if (rc < 0) { + otx2_err("Failed to get IRQ info rc=%d errno=%d", rc, errno); + return rc; + } + + otx2_base_dbg("Flags=0x%x index=0x%x count=0x%x max_intr_vec_id=0x%x", + irq.flags, irq.index, irq.count, MAX_INTR_VEC_ID); + + if (irq.count > MAX_INTR_VEC_ID) { + otx2_err("HW max=%d > MAX_INTR_VEC_ID: %d", + intr_handle->max_intr, MAX_INTR_VEC_ID); + intr_handle->max_intr = MAX_INTR_VEC_ID; + } else { + intr_handle->max_intr = irq.count; + } + + return 0; +} + +static int +irq_config(struct rte_intr_handle *intr_handle, unsigned int vec) +{ + char irq_set_buf[MSIX_IRQ_SET_BUF_LEN]; + struct vfio_irq_set *irq_set; + int32_t *fd_ptr; + int len, rc; + + if (vec > intr_handle->max_intr) { + otx2_err("vector=%d greater than max_intr=%d", vec, + intr_handle->max_intr); + return -EINVAL; + } + + len = sizeof(struct vfio_irq_set) + sizeof(int32_t); + + irq_set = (struct vfio_irq_set *)irq_set_buf; + irq_set->argsz = len; + + irq_set->start = vec; + irq_set->count = 1; + irq_set->flags = VFIO_IRQ_SET_DATA_EVENTFD | + VFIO_IRQ_SET_ACTION_TRIGGER; + irq_set->index = VFIO_PCI_MSIX_IRQ_INDEX; + + /* Use vec fd to set interrupt vectors */ + fd_ptr = (int32_t *)&irq_set->data[0]; + fd_ptr[0] = 
intr_handle->efds[vec]; + + rc = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set); + if (rc) + otx2_err("Failed to set_irqs vector=0x%x rc=%d", vec, rc); + + return rc; +} + +static int +irq_init(struct rte_intr_handle *intr_handle) +{ + char irq_set_buf[MSIX_IRQ_SET_BUF_LEN]; + struct vfio_irq_set *irq_set; + int32_t *fd_ptr; + int len, rc; + uint32_t i; + + if (intr_handle->max_intr > MAX_INTR_VEC_ID) { + otx2_err("Max_intr=%d greater than MAX_INTR_VEC_ID=%d", + intr_handle->max_intr, MAX_INTR_VEC_ID); + return -ERANGE; + } + + len = sizeof(struct vfio_irq_set) + + sizeof(int32_t) * intr_handle->max_intr; + + irq_set = (struct vfio_irq_set *)irq_set_buf; + irq_set->argsz = len; + irq_set->start = 0; + irq_set->count = intr_handle->max_intr; + irq_set->flags = VFIO_IRQ_SET_DATA_EVENTFD | + VFIO_IRQ_SET_ACTION_TRIGGER; + irq_set->index = VFIO_PCI_MSIX_IRQ_INDEX; + + fd_ptr = (int32_t *)&irq_set->data[0]; + for (i = 0; i < irq_set->count; i++) + fd_ptr[i] = -1; + + rc = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set); + if (rc) + otx2_err("Failed to set irqs vector rc=%d", rc); + + return rc; +} + +/** + * @internal + * Disable IRQ + */ +int +otx2_disable_irqs(struct rte_intr_handle *intr_handle) +{ + /* Clear max_intr to indicate re-init next time */ + intr_handle->max_intr = 0; + return rte_intr_disable(intr_handle); +} + +/** + * @internal + * Register IRQ + */ +int +otx2_register_irq(struct rte_intr_handle *intr_handle, + rte_intr_callback_fn cb, void *data, unsigned int vec) +{ + struct rte_intr_handle tmp_handle; + int rc; + + /* If no max_intr read from VFIO */ + if (intr_handle->max_intr == 0) { + irq_get_info(intr_handle); + irq_init(intr_handle); + } + + if (vec > intr_handle->max_intr) { + otx2_err("Vector=%d greater than max_intr=%d", vec, + intr_handle->max_intr); + return -EINVAL; + } + + tmp_handle = *intr_handle; + /* Create new eventfd for interrupt vector */ + tmp_handle.fd = eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC); + if (tmp_handle.fd == -1) + return -ENODEV; + + /* Register vector interrupt callback */ + rc = rte_intr_callback_register(&tmp_handle, cb, data); + if (rc) { + otx2_err("Failed to register vector:0x%x irq callback.", vec); + return rc; + } + + intr_handle->efds[vec] = tmp_handle.fd; + intr_handle->nb_efd = (vec > intr_handle->nb_efd) ? 
+ vec : intr_handle->nb_efd; + if ((intr_handle->nb_efd + 1) > intr_handle->max_intr) + intr_handle->max_intr = intr_handle->nb_efd + 1; + + otx2_base_dbg("Enable vector:0x%x for vfio (efds: %d, max:%d)", + vec, intr_handle->nb_efd, intr_handle->max_intr); + + /* Enable MSIX vectors to VFIO */ + return irq_config(intr_handle, vec); +} + +/** + * @internal + * Unregister IRQ + */ +void +otx2_unregister_irq(struct rte_intr_handle *intr_handle, + rte_intr_callback_fn cb, void *data, unsigned int vec) +{ + struct rte_intr_handle tmp_handle; + + if (vec > intr_handle->max_intr) { + otx2_err("Error unregistering MSI-X interrupts vec:%d > %d", + vec, intr_handle->max_intr); + return; + } + + tmp_handle = *intr_handle; + tmp_handle.fd = intr_handle->efds[vec]; + if (tmp_handle.fd == -1) + return; + + /* Un-register callback func from eal lib */ + rte_intr_callback_unregister(&tmp_handle, cb, data); + + otx2_base_dbg("Disable vector:0x%x for vfio (efds: %d, max:%d)", + vec, intr_handle->nb_efd, intr_handle->max_intr); + + if (intr_handle->efds[vec] != -1) + close(intr_handle->efds[vec]); + /* Disable MSIX vectors from VFIO */ + intr_handle->efds[vec] = -1; + irq_config(intr_handle, vec); +} + +#else + +/** + * @internal + * Register IRQ + */ +int otx2_register_irq(__rte_unused struct rte_intr_handle *intr_handle, + __rte_unused rte_intr_callback_fn cb, + __rte_unused void *data, __rte_unused unsigned int vec) +{ + return -ENOTSUP; +} + + +/** + * @internal + * Unregister IRQ + */ +void otx2_unregister_irq(__rte_unused struct rte_intr_handle *intr_handle, + __rte_unused rte_intr_callback_fn cb, + __rte_unused void *data, __rte_unused unsigned int vec) +{ +} + +/** + * @internal + * Disable IRQ + */ +int otx2_disable_irqs(__rte_unused struct rte_intr_handle *intr_handle) +{ + return -ENOTSUP; +} + +#endif /* RTE_EAL_VFIO */ diff --git a/drivers/common/octeontx2/otx2_irq.h b/drivers/common/octeontx2/otx2_irq.h index df44ddfba..9d326276e 100644 --- a/drivers/common/octeontx2/otx2_irq.h +++ b/drivers/common/octeontx2/otx2_irq.h @@ -16,4 +16,10 @@ typedef struct { uint64_t bits[MAX_VFPF_DWORD_BITS]; } otx2_intr_t; +int otx2_register_irq(struct rte_intr_handle *intr_handle, + rte_intr_callback_fn cb, void *data, unsigned int vec); +void otx2_unregister_irq(struct rte_intr_handle *intr_handle, + rte_intr_callback_fn cb, void *data, unsigned int vec); +int otx2_disable_irqs(struct rte_intr_handle *intr_handle); + #endif /* _OTX2_IRQ_H_ */ diff --git a/drivers/common/octeontx2/rte_common_octeontx2_version.map b/drivers/common/octeontx2/rte_common_octeontx2_version.map index 4d9879899..007649a48 100644 --- a/drivers/common/octeontx2/rte_common_octeontx2_version.map +++ b/drivers/common/octeontx2/rte_common_octeontx2_version.map @@ -21,5 +21,9 @@ DPDK_19.08 { otx2_mbox_msg_send; otx2_mbox_wait_for_rsp; + otx2_disable_irqs; + otx2_unregister_irq; + otx2_register_irq; + local: *; }; From patchwork Mon Jun 17 15:55:19 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jerin Jacob Kollanukkaran X-Patchwork-Id: 54858 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id C76A91BF3B; Mon, 17 Jun 2019 17:56:31 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by dpdk.org (Postfix) with ESMTP id 0ACD81BF35 for ; Mon, 17 Jun 2019 17:56:20 +0200 
(CEST) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.16.0.27/8.16.0.27) with SMTP id x5HFppIh000981 for ; Mon, 17 Jun 2019 08:56:20 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0818; bh=VTosMFj7LCM7MPxqV+BCA1WWdneyW3bhpDwNUysepUg=; b=ifbWkXOaJ2xbQxsH/Mc0yX8YNDGFOnfOvd83YKKlS5v1G3MSvWFh0RWNUCSSv7TeKHl1 mxgD2p3kmsVVuubOZFWAIRKgQqk4Fb9tDnrQfAAUQf+9P8TrkNHO4sxaxsEJ/0BbB+1v Ua7NJOG0bKh0yxF4SxFEKlzqN35hDRmUq5r45Fu6274SdnqGfJLnU9ImWbtMt2qY6FxN oP2c4vCta8MIuRvYcQNIsOC0FLlhQrMxJKiAzgieQHWAZKKqvfgYF59OszZDArF0X+R8 oyd0qEuqhXm4p5kyhwY7fRrZ/0kgYECzXluUiHCQEten1c12DKdruMExY5LGYQPoimcg VQ== Received: from sc-exch02.marvell.com ([199.233.58.182]) by mx0b-0016f401.pphosted.com with ESMTP id 2t506hyayb-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT) for ; Mon, 17 Jun 2019 08:56:20 -0700 Received: from SC-EXCH03.marvell.com (10.93.176.83) by SC-EXCH02.marvell.com (10.93.176.82) with Microsoft SMTP Server (TLS) id 15.0.1367.3; Mon, 17 Jun 2019 08:56:17 -0700 Received: from maili.marvell.com (10.93.176.43) by SC-EXCH03.marvell.com (10.93.176.83) with Microsoft SMTP Server id 15.0.1367.3 via Frontend Transport; Mon, 17 Jun 2019 08:56:17 -0700 Received: from jerin-lab.marvell.com (jerin-lab.marvell.com [10.28.34.14]) by maili.marvell.com (Postfix) with ESMTP id 206AC3F7045; Mon, 17 Jun 2019 08:56:15 -0700 (PDT) From: To: , Jerin Jacob , Nithin Dabilpuram , Vamsi Attunuru CC: Pavan Nikhilesh Date: Mon, 17 Jun 2019 21:25:19 +0530 Message-ID: <20190617155537.36144-10-jerinj@marvell.com> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20190617155537.36144-1-jerinj@marvell.com> References: <20190601014905.45531-1-jerinj@marvell.com> <20190617155537.36144-1-jerinj@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:, , definitions=2019-06-17_07:, , signatures=0 Subject: [dpdk-dev] [PATCH v3 09/27] common/octeontx2: handle intra device operations X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Jerin Jacob The mempool device (NPA) may be provisioned as a standalone device or as part of an ethdev/eventdev device. To address the mempool either standalone or integrated with an ethdev/eventdev device, an intra device structure is introduced. When the _first_ ethdev/eventdev PCIe device or standalone mempool (NPA) device gets probed by the EAL PCI subsystem, the NPA object (struct otx2_npa_lf) is stored in the otx2_dev base class. Once that is done, the other consumer drivers such as the ethdev or eventdev driver use the otx2_npa_* API to operate on the shared NPA object. A similar scheme is followed for the SSO object, which also needs to be shared between PCIe devices.
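For illustration, a minimal sketch (not part of the diff below) of how a consumer driver could attach to the shared NPA object through the intra device config; the wrapper name npa_lf_example_get() is hypothetical, while otx2_npa_lf_obj_get() and otx2_npa_lf_obj_ref() are the helpers added by this patch:

/* Hypothetical consumer path (e.g. an ethdev/eventdev probe) that reuses
 * the NPA LF registered by whichever octeontx2 PCIe device was probed first.
 */
static struct otx2_npa_lf *
npa_lf_example_get(void)
{
	struct otx2_npa_lf *lf = otx2_npa_lf_obj_get();

	/* NULL until some device has populated the intra device config */
	if (lf == NULL)
		return NULL;

	/* Take a reference; returns -EINVAL if the refcount is already 0 */
	if (otx2_npa_lf_obj_ref() != 0)
		return NULL;

	return lf;
}
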
Signed-off-by: Jerin Jacob Signed-off-by: Nithin Dabilpuram Signed-off-by: Pavan Nikhilesh --- drivers/common/octeontx2/otx2_common.c | 163 ++++++++++++++++++ drivers/common/octeontx2/otx2_common.h | 32 +++- drivers/common/octeontx2/otx2_dev.c | 6 + drivers/common/octeontx2/otx2_dev.h | 1 + .../rte_common_octeontx2_version.map | 9 + 5 files changed, 210 insertions(+), 1 deletion(-) diff --git a/drivers/common/octeontx2/otx2_common.c b/drivers/common/octeontx2/otx2_common.c index a4b91b4f1..7e4536639 100644 --- a/drivers/common/octeontx2/otx2_common.c +++ b/drivers/common/octeontx2/otx2_common.c @@ -2,9 +2,172 @@ * Copyright(C) 2019 Marvell International Ltd. */ +#include +#include #include #include "otx2_common.h" +#include "otx2_dev.h" +#include "otx2_mbox.h" + +/** + * @internal + * Set default NPA configuration. + */ +void +otx2_npa_set_defaults(struct otx2_idev_cfg *idev) +{ + idev->npa_pf_func = 0; + rte_atomic16_set(&idev->npa_refcnt, 0); +} + +/** + * @internal + * Get intra device config structure. + */ +struct otx2_idev_cfg * +otx2_intra_dev_get_cfg(void) +{ + const char name[] = "octeontx2_intra_device_conf"; + const struct rte_memzone *mz; + struct otx2_idev_cfg *idev; + + mz = rte_memzone_lookup(name); + if (mz != NULL) + return mz->addr; + + /* Request for the first time */ + mz = rte_memzone_reserve_aligned(name, sizeof(struct otx2_idev_cfg), + SOCKET_ID_ANY, 0, OTX2_ALIGN); + if (mz != NULL) { + idev = mz->addr; + idev->sso_pf_func = 0; + idev->npa_lf = NULL; + otx2_npa_set_defaults(idev); + return idev; + } + return NULL; +} + +/** + * @internal + * Get SSO PF_FUNC. + */ +uint16_t +otx2_sso_pf_func_get(void) +{ + struct otx2_idev_cfg *idev; + uint16_t sso_pf_func; + + sso_pf_func = 0; + idev = otx2_intra_dev_get_cfg(); + + if (idev != NULL) + sso_pf_func = idev->sso_pf_func; + + return sso_pf_func; +} + +/** + * @internal + * Set SSO PF_FUNC. + */ +void +otx2_sso_pf_func_set(uint16_t sso_pf_func) +{ + struct otx2_idev_cfg *idev; + + idev = otx2_intra_dev_get_cfg(); + + if (idev != NULL) { + idev->sso_pf_func = sso_pf_func; + rte_smp_wmb(); + } +} + +/** + * @internal + * Get NPA PF_FUNC. + */ +uint16_t +otx2_npa_pf_func_get(void) +{ + struct otx2_idev_cfg *idev; + uint16_t npa_pf_func; + + npa_pf_func = 0; + idev = otx2_intra_dev_get_cfg(); + + if (idev != NULL) + npa_pf_func = idev->npa_pf_func; + + return npa_pf_func; +} + +/** + * @internal + * Get NPA lf object. + */ +struct otx2_npa_lf * +otx2_npa_lf_obj_get(void) +{ + struct otx2_idev_cfg *idev; + + idev = otx2_intra_dev_get_cfg(); + + if (idev != NULL && rte_atomic16_read(&idev->npa_refcnt)) + return idev->npa_lf; + + return NULL; +} + +/** + * @internal + * Is NPA lf active for the given device?. + */ +int +otx2_npa_lf_active(void *otx2_dev) +{ + struct otx2_dev *dev = otx2_dev; + struct otx2_idev_cfg *idev; + + /* Check if npalf is actively used on this dev */ + idev = otx2_intra_dev_get_cfg(); + if (!idev || !idev->npa_lf || idev->npa_lf->mbox != dev->mbox) + return 0; + + return rte_atomic16_read(&idev->npa_refcnt); +} + +/* + * @internal + * Gets reference only to existing NPA LF object. 
+ */ +int otx2_npa_lf_obj_ref(void) +{ + struct otx2_idev_cfg *idev; + uint16_t cnt; + int rc; + + idev = otx2_intra_dev_get_cfg(); + + /* Check if ref not possible */ + if (idev == NULL) + return -EINVAL; + + + /* Get ref only if > 0 */ + cnt = rte_atomic16_read(&idev->npa_refcnt); + while (cnt != 0) { + rc = rte_atomic16_cmpset(&idev->npa_refcnt_u16, cnt, cnt + 1); + if (rc) + break; + + cnt = rte_atomic16_read(&idev->npa_refcnt); + } + + return cnt ? 0 : -EINVAL; +} /** * @internal diff --git a/drivers/common/octeontx2/otx2_common.h b/drivers/common/octeontx2/otx2_common.h index b9e7a7f8d..cbc5c65a7 100644 --- a/drivers/common/octeontx2/otx2_common.h +++ b/drivers/common/octeontx2/otx2_common.h @@ -5,9 +5,12 @@ #ifndef _OTX2_COMMON_H_ #define _OTX2_COMMON_H_ +#include #include -#include +#include #include +#include +#include #include "hw/otx2_rvu.h" #include "hw/otx2_nix.h" @@ -33,6 +36,33 @@ #define __hot __attribute__((hot)) #endif +/* Intra device related functions */ +struct otx2_npa_lf { + struct otx2_mbox *mbox; + struct rte_pci_device *pci_dev; + struct rte_intr_handle *intr_handle; +}; + +struct otx2_idev_cfg { + uint16_t sso_pf_func; + uint16_t npa_pf_func; + struct otx2_npa_lf *npa_lf; + RTE_STD_C11 + union { + rte_atomic16_t npa_refcnt; + uint16_t npa_refcnt_u16; + }; +}; + +struct otx2_idev_cfg *otx2_intra_dev_get_cfg(void); +void otx2_sso_pf_func_set(uint16_t sso_pf_func); +uint16_t otx2_sso_pf_func_get(void); +uint16_t otx2_npa_pf_func_get(void); +struct otx2_npa_lf *otx2_npa_lf_obj_get(void); +void otx2_npa_set_defaults(struct otx2_idev_cfg *idev); +int otx2_npa_lf_active(void *dev); +int otx2_npa_lf_obj_ref(void); + /* Log */ extern int otx2_logtype_base; extern int otx2_logtype_mbox; diff --git a/drivers/common/octeontx2/otx2_dev.c b/drivers/common/octeontx2/otx2_dev.c index 486b1b7c8..c3b3f9be5 100644 --- a/drivers/common/octeontx2/otx2_dev.c +++ b/drivers/common/octeontx2/otx2_dev.c @@ -177,8 +177,14 @@ void otx2_dev_fini(struct rte_pci_device *pci_dev, void *otx2_dev) { struct otx2_dev *dev = otx2_dev; + struct otx2_idev_cfg *idev; struct otx2_mbox *mbox; + /* Clear references to this pci dev */ + idev = otx2_intra_dev_get_cfg(); + if (idev->npa_lf && idev->npa_lf->pci_dev == pci_dev) + idev->npa_lf = NULL; + /* Release PF - VF */ mbox = &dev->mbox_vfpf; if (mbox->hwbase && mbox->dev) diff --git a/drivers/common/octeontx2/otx2_dev.h b/drivers/common/octeontx2/otx2_dev.h index a89570b62..70104dfa2 100644 --- a/drivers/common/octeontx2/otx2_dev.h +++ b/drivers/common/octeontx2/otx2_dev.h @@ -40,6 +40,7 @@ struct otx2_dev; otx2_intr_t intr; \ int timer_set; /* ~0 : no alarm handling */ \ uint64_t hwcap; \ + struct otx2_npa_lf npalf; \ struct otx2_mbox *mbox; \ uint16_t maxvf; \ const struct otx2_dev_ops *ops diff --git a/drivers/common/octeontx2/rte_common_octeontx2_version.map b/drivers/common/octeontx2/rte_common_octeontx2_version.map index 007649a48..efcf0cb55 100644 --- a/drivers/common/octeontx2/rte_common_octeontx2_version.map +++ b/drivers/common/octeontx2/rte_common_octeontx2_version.map @@ -21,6 +21,15 @@ DPDK_19.08 { otx2_mbox_msg_send; otx2_mbox_wait_for_rsp; + otx2_intra_dev_get_cfg; + otx2_npa_lf_active; + otx2_npa_lf_obj_get; + otx2_npa_lf_obj_ref; + otx2_npa_pf_func_get; + otx2_npa_set_defaults; + otx2_sso_pf_func_get; + otx2_sso_pf_func_set; + otx2_disable_irqs; otx2_unregister_irq; otx2_register_irq; From patchwork Mon Jun 17 15:55:20 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: 
Jerin Jacob Kollanukkaran X-Patchwork-Id: 54860 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 8AD4C1BF80; Mon, 17 Jun 2019 17:56:40 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by dpdk.org (Postfix) with ESMTP id 8F4EF1BEA0 for ; Mon, 17 Jun 2019 17:56:35 +0200 (CEST) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.16.0.27/8.16.0.27) with SMTP id x5HFpsXg001115 for ; Mon, 17 Jun 2019 08:56:35 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0818; bh=jJHw1KQ35bQ1ZK0RMpBd6iwgzbFXMjKGLrteQWOtCvI=; b=Pro0ZaofgT8NpXMlecdxy7J5zXfc1C6LcIIr+7vzM2J5HMZafOZEE+EZtsJR+9FOlf1F adm2YNQ+Gnw+tFX3n5myrQHVq4isvk7/f7pQtSimAAduFTGBlhF2A1hqjGf3Okg2LaIV qS5+SleeV5ItqER+OpR1qtJtvdeXjy1219VSCcwxP3jdx4VMr4ATbegR0WyRJoUVfBCF kgrleRTkkIelWdqck2tJQnfq2G8DTt0QeA8/Xcobj6XKr2LOSwsssPoiIY6TgME9iN9S NBL//LoNARa2IqI+0KaG7TIOEzgzz0VGqOinq+bKxl+eASeV4lfvIO4FwKLcs7fv21y7 1A== Received: from sc-exch03.marvell.com ([199.233.58.183]) by mx0b-0016f401.pphosted.com with ESMTP id 2t506hyayd-3 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT) for ; Mon, 17 Jun 2019 08:56:34 -0700 Received: from SC-EXCH01.marvell.com (10.93.176.81) by SC-EXCH03.marvell.com (10.93.176.83) with Microsoft SMTP Server (TLS) id 15.0.1367.3; Mon, 17 Jun 2019 08:56:20 -0700 Received: from maili.marvell.com (10.93.176.43) by SC-EXCH01.marvell.com (10.93.176.81) with Microsoft SMTP Server id 15.0.1367.3 via Frontend Transport; Mon, 17 Jun 2019 08:56:20 -0700 Received: from jerin-lab.marvell.com (jerin-lab.marvell.com [10.28.34.14]) by maili.marvell.com (Postfix) with ESMTP id D4A1F3F7040; Mon, 17 Jun 2019 08:56:18 -0700 (PDT) From: To: , Jerin Jacob , Nithin Dabilpuram , Vamsi Attunuru Date: Mon, 17 Jun 2019 21:25:20 +0530 Message-ID: <20190617155537.36144-11-jerinj@marvell.com> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20190617155537.36144-1-jerinj@marvell.com> References: <20190601014905.45531-1-jerinj@marvell.com> <20190617155537.36144-1-jerinj@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:, , definitions=2019-06-17_07:, , signatures=0 Subject: [dpdk-dev] [PATCH v3 10/27] common/octeontx2: add AF to PF mailbox IRQ and msg handlers X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Nithin Dabilpuram This patch adds support for AF to PF mailbox interrupt and message handling. The PF writes the message to the mapped mailbox region and then writes to the mailbox doorbell register. Upon receiving the mailbox request, the AF (in the Linux kernel) processes the messages, updates the counter memory and writes to the AF mbox doorbell register. That triggers a VFIO interrupt to userspace, and otx2_process_msgs() handles it.
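For illustration, a synchronous PF -> AF request rides on this interrupt path roughly as sketched below (not part of the diff); otx2_mbox_alloc_msg_ready(), otx2_mbox_msg_send() and otx2_mbox_get_rsp() are the mbox helpers introduced earlier in this series:

/* Sketch of one blocking PF -> AF request/response cycle. */
struct ready_msg_rsp *rsp;
int rc;

otx2_mbox_alloc_msg_ready(mbox);  /* stage the READY request in the tx region */
otx2_mbox_msg_send(mbox, 0);      /* publish num_msgs and ring the AF doorbell */

/* The AF writes its reply into the rx region and raises RVU_PF_INT;
 * otx2_af_pf_mbox_irq() -> otx2_process_msgs() then updates msgs_acked so
 * the blocking wait below can complete.
 */
rc = otx2_mbox_get_rsp(mbox, 0, (void *)&rsp);
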
Signed-off-by: Nithin Dabilpuram Signed-off-by: Vamsi Attunuru --- drivers/common/octeontx2/otx2_dev.c | 120 +++++++++++++++++++++++++++- 1 file changed, 119 insertions(+), 1 deletion(-) diff --git a/drivers/common/octeontx2/otx2_dev.c b/drivers/common/octeontx2/otx2_dev.c index c3b3f9be5..090cfc8f1 100644 --- a/drivers/common/octeontx2/otx2_dev.c +++ b/drivers/common/octeontx2/otx2_dev.c @@ -14,6 +14,9 @@ #include "otx2_dev.h" #include "otx2_mbox.h" +#define RVU_MAX_VF 64 /* RVU_PF_VFPF_MBOX_INT(0..1) */ +#define RVU_MAX_INT_RETRY 3 + /* PF/VF message handling timer */ #define VF_PF_MBOX_TIMER_MS (20 * 1000) @@ -47,6 +50,108 @@ mbox_mem_unmap(void *va, size_t size) munmap(va, size); } +static void +otx2_process_msgs(struct otx2_dev *dev, struct otx2_mbox *mbox) +{ + struct otx2_mbox_dev *mdev = &mbox->dev[0]; + struct mbox_hdr *req_hdr; + struct mbox_msghdr *msg; + int msgs_acked = 0; + int offset; + uint16_t i; + + req_hdr = (struct mbox_hdr *)((uintptr_t)mdev->mbase + mbox->rx_start); + if (req_hdr->num_msgs == 0) + return; + + offset = mbox->rx_start + RTE_ALIGN(sizeof(*req_hdr), MBOX_MSG_ALIGN); + for (i = 0; i < req_hdr->num_msgs; i++) { + msg = (struct mbox_msghdr *)((uintptr_t)mdev->mbase + offset); + + msgs_acked++; + otx2_base_dbg("Message 0x%x (%s) pf:%d/vf:%d", + msg->id, otx2_mbox_id2name(msg->id), + otx2_get_pf(msg->pcifunc), + otx2_get_vf(msg->pcifunc)); + + switch (msg->id) { + /* Add message id's that are handled here */ + case MBOX_MSG_READY: + /* Get our identity */ + dev->pf_func = msg->pcifunc; + break; + + default: + if (msg->rc) + otx2_err("Message (%s) response has err=%d", + otx2_mbox_id2name(msg->id), msg->rc); + break; + } + offset = mbox->rx_start + msg->next_msgoff; + } + + otx2_mbox_reset(mbox, 0); + /* Update acked if someone is waiting a message */ + mdev->msgs_acked = msgs_acked; + rte_wmb(); +} + +static void +otx2_af_pf_mbox_irq(void *param) +{ + struct otx2_dev *dev = param; + uint64_t intr; + + intr = otx2_read64(dev->bar2 + RVU_PF_INT); + if (intr == 0) + return; + + otx2_write64(intr, dev->bar2 + RVU_PF_INT); + + otx2_base_dbg("Irq 0x%" PRIx64 "(pf:%d,vf:%d)", intr, dev->pf, dev->vf); + if (intr) + /* First process all configuration messages */ + otx2_process_msgs(dev, dev->mbox); +} + +static int +mbox_register_irq(struct rte_pci_device *pci_dev, struct otx2_dev *dev) +{ + struct rte_intr_handle *intr_handle = &pci_dev->intr_handle; + int rc; + + otx2_write64(~0ull, dev->bar2 + RVU_PF_INT_ENA_W1C); + + dev->timer_set = 0; + + /* MBOX interrupt AF <-> PF */ + rc = otx2_register_irq(intr_handle, otx2_af_pf_mbox_irq, + dev, RVU_PF_INT_VEC_AFPF_MBOX); + if (rc) { + otx2_err("Fail to register AF<->PF mbox irq"); + return rc; + } + + otx2_write64(~0ull, dev->bar2 + RVU_PF_INT); + otx2_write64(~0ull, dev->bar2 + RVU_PF_INT_ENA_W1S); + + return rc; +} + +static void +mbox_unregister_irq(struct rte_pci_device *pci_dev, struct otx2_dev *dev) +{ + struct rte_intr_handle *intr_handle = &pci_dev->intr_handle; + + otx2_write64(~0ull, dev->bar2 + RVU_PF_INT_ENA_W1C); + + dev->timer_set = 0; + + /* MBOX interrupt AF <-> PF */ + otx2_unregister_irq(intr_handle, otx2_af_pf_mbox_irq, dev, + RVU_PF_INT_VEC_AFPF_MBOX); +} + static void otx2_update_pass_hwcap(struct rte_pci_device *pci_dev, struct otx2_dev *dev) { @@ -120,10 +225,15 @@ otx2_dev_init(struct rte_pci_device *pci_dev, void *otx2_dev) if (rc) goto error; + /* Register mbox interrupts */ + rc = mbox_register_irq(pci_dev, dev); + if (rc) + goto mbox_fini; + /* Check the readiness of PF/VF */ rc = 
otx2_send_ready_msg(dev->mbox, &dev->pf_func); if (rc) - goto mbox_fini; + goto mbox_unregister; dev->pf = otx2_get_pf(dev->pf_func); dev->vf = otx2_get_vf(dev->pf_func); @@ -162,6 +272,8 @@ otx2_dev_init(struct rte_pci_device *pci_dev, void *otx2_dev) iounmap: mbox_mem_unmap(hwbase, MBOX_SIZE * pci_dev->max_vfs); +mbox_unregister: + mbox_unregister_irq(pci_dev, dev); mbox_fini: otx2_mbox_fini(dev->mbox); otx2_mbox_fini(&dev->mbox_up); @@ -176,6 +288,7 @@ otx2_dev_init(struct rte_pci_device *pci_dev, void *otx2_dev) void otx2_dev_fini(struct rte_pci_device *pci_dev, void *otx2_dev) { + struct rte_intr_handle *intr_handle = &pci_dev->intr_handle; struct otx2_dev *dev = otx2_dev; struct otx2_idev_cfg *idev; struct otx2_mbox *mbox; @@ -185,6 +298,8 @@ otx2_dev_fini(struct rte_pci_device *pci_dev, void *otx2_dev) if (idev->npa_lf && idev->npa_lf->pci_dev == pci_dev) idev->npa_lf = NULL; + mbox_unregister_irq(pci_dev, dev); + /* Release PF - VF */ mbox = &dev->mbox_vfpf; if (mbox->hwbase && mbox->dev) @@ -200,4 +315,7 @@ otx2_dev_fini(struct rte_pci_device *pci_dev, void *otx2_dev) mbox = &dev->mbox_up; otx2_mbox_fini(mbox); dev->mbox_active = 0; + + /* Disable MSIX vectors */ + otx2_disable_irqs(intr_handle); } From patchwork Mon Jun 17 15:55:21 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jerin Jacob Kollanukkaran X-Patchwork-Id: 54863 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 9FA151BF98; Mon, 17 Jun 2019 17:56:52 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by dpdk.org (Postfix) with ESMTP id 3AE3E1BF7C for ; Mon, 17 Jun 2019 17:56:38 +0200 (CEST) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.16.0.27/8.16.0.27) with SMTP id x5HFprCQ001049 for ; Mon, 17 Jun 2019 08:56:37 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0818; bh=lFXDX6gLcU0ffss68Xr7ONdXVbJbMIf+TLZSKMEdF80=; b=KUuxAJLnCNj0mORt/SPVbNLujHPoNJlp+zRpfFdrK1/NpAC5WWLYeLfDz7B7yG8GLdo/ VHnQxXguLifmopcmLt+w+KJQqSRA4jP9RFyImnwJcqhvlSCtvUJtx9dLMliSlbhKj3ER l3mY98zUtTmpcKYhFbanP15ugL3/suKxjzgyRXZWZgL7kFRV9DkAF9DbPIJZdg9HNdIE 8hOSdcp3gPBd9+qpvqiF3EEOd6MUxkAx+iUJopyyeppIoMLZ4JtRDV72CPVbK1k5cmQV rbRL5tuT4BmLMf6BH8uce/hDamTeNGq2Sb8v+r4GaamXFcug8F0Li3gr6gMn/pm1PXER AA== Received: from sc-exch01.marvell.com ([199.233.58.181]) by mx0b-0016f401.pphosted.com with ESMTP id 2t506hyaxv-5 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT) for ; Mon, 17 Jun 2019 08:56:37 -0700 Received: from SC-EXCH03.marvell.com (10.93.176.83) by SC-EXCH01.marvell.com (10.93.176.81) with Microsoft SMTP Server (TLS) id 15.0.1367.3; Mon, 17 Jun 2019 08:56:23 -0700 Received: from maili.marvell.com (10.93.176.43) by SC-EXCH03.marvell.com (10.93.176.83) with Microsoft SMTP Server id 15.0.1367.3 via Frontend Transport; Mon, 17 Jun 2019 08:56:23 -0700 Received: from jerin-lab.marvell.com (jerin-lab.marvell.com [10.28.34.14]) by maili.marvell.com (Postfix) with ESMTP id 4DD7F3F7048; Mon, 17 Jun 2019 08:56:21 -0700 (PDT) From: To: , Jerin Jacob , Nithin Dabilpuram , Vamsi Attunuru CC: Krzysztof Kanas Date: Mon, 17 Jun 2019 21:25:21 +0530 Message-ID: 
<20190617155537.36144-12-jerinj@marvell.com> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20190617155537.36144-1-jerinj@marvell.com> References: <20190601014905.45531-1-jerinj@marvell.com> <20190617155537.36144-1-jerinj@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:, , definitions=2019-06-17_07:, , signatures=0 Subject: [dpdk-dev] [PATCH v3 11/27] common/octeontx2: add PF to VF mailbox IRQ and msg handlers X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Nithin Dabilpuram The PF has the additional responsibility of acting as a server for VF messages: it forwards them to the AF and, once the AF has processed them, forwards the responses back to the VFs. otx2_vf_pf_mbox_irq() will process the VF mailbox requests and af_pf_wait_msg() will wait until a response comes back from the AF. Signed-off-by: Nithin Dabilpuram Signed-off-by: Krzysztof Kanas --- drivers/common/octeontx2/otx2_dev.c | 240 +++++++++++++++++++++++++++- 1 file changed, 239 insertions(+), 1 deletion(-) diff --git a/drivers/common/octeontx2/otx2_dev.c b/drivers/common/octeontx2/otx2_dev.c index 090cfc8f1..efb28a9d2 100644 --- a/drivers/common/octeontx2/otx2_dev.c +++ b/drivers/common/octeontx2/otx2_dev.c @@ -7,6 +7,7 @@ #include #include +#include #include #include #include @@ -50,6 +51,200 @@ mbox_mem_unmap(void *va, size_t size) munmap(va, size); } +static int +af_pf_wait_msg(struct otx2_dev *dev, uint16_t vf, int num_msg) +{ + uint32_t timeout = 0, sleep = 1; struct otx2_mbox *mbox = dev->mbox; + struct otx2_mbox_dev *mdev = &mbox->dev[0]; + volatile uint64_t int_status; + struct mbox_hdr *req_hdr; + struct mbox_msghdr *msg; + struct mbox_msghdr *rsp; + uint64_t offset; + size_t size; + int i; + + /* We need to disable PF interrupts.
We are in timer interrupt */ + otx2_write64(~0ull, dev->bar2 + RVU_PF_INT_ENA_W1C); + + /* Send message */ + otx2_mbox_msg_send(mbox, 0); + + do { + rte_delay_ms(sleep); + timeout++; + if (timeout >= MBOX_RSP_TIMEOUT) { + otx2_err("Routed messages %d timeout: %dms", + num_msg, MBOX_RSP_TIMEOUT); + break; + } + int_status = otx2_read64(dev->bar2 + RVU_PF_INT); + } while ((int_status & 0x1) != 0x1); + + /* Clear */ + otx2_write64(~0ull, dev->bar2 + RVU_PF_INT); + + /* Enable interrupts */ + otx2_write64(~0ull, dev->bar2 + RVU_PF_INT_ENA_W1S); + + rte_spinlock_lock(&mdev->mbox_lock); + + req_hdr = (struct mbox_hdr *)((uintptr_t)mdev->mbase + mbox->rx_start); + if (req_hdr->num_msgs != num_msg) + otx2_err("Routed messages: %d received: %d", num_msg, + req_hdr->num_msgs); + + /* Get messages from mbox */ + offset = mbox->rx_start + + RTE_ALIGN(sizeof(struct mbox_hdr), MBOX_MSG_ALIGN); + for (i = 0; i < req_hdr->num_msgs; i++) { + msg = (struct mbox_msghdr *)((uintptr_t)mdev->mbase + offset); + size = mbox->rx_start + msg->next_msgoff - offset; + + /* Reserve PF/VF mbox message */ + size = RTE_ALIGN(size, MBOX_MSG_ALIGN); + rsp = otx2_mbox_alloc_msg(&dev->mbox_vfpf, vf, size); + otx2_mbox_rsp_init(msg->id, rsp); + + /* Copy message from AF<->PF mbox to PF<->VF mbox */ + otx2_mbox_memcpy((uint8_t *)rsp + sizeof(struct mbox_msghdr), + (uint8_t *)msg + sizeof(struct mbox_msghdr), + size - sizeof(struct mbox_msghdr)); + + /* Set status and sender pf_func data */ + rsp->rc = msg->rc; + rsp->pcifunc = msg->pcifunc; + + offset = mbox->rx_start + msg->next_msgoff; + } + rte_spinlock_unlock(&mdev->mbox_lock); + + return req_hdr->num_msgs; +} + +static int +vf_pf_process_msgs(struct otx2_dev *dev, uint16_t vf) +{ + int offset, routed = 0; struct otx2_mbox *mbox = &dev->mbox_vfpf; + struct otx2_mbox_dev *mdev = &mbox->dev[vf]; + struct mbox_hdr *req_hdr; + struct mbox_msghdr *msg; + size_t size; + uint16_t i; + + req_hdr = (struct mbox_hdr *)((uintptr_t)mdev->mbase + mbox->rx_start); + if (!req_hdr->num_msgs) + return 0; + + offset = mbox->rx_start + RTE_ALIGN(sizeof(*req_hdr), MBOX_MSG_ALIGN); + + for (i = 0; i < req_hdr->num_msgs; i++) { + + msg = (struct mbox_msghdr *)((uintptr_t)mdev->mbase + offset); + size = mbox->rx_start + msg->next_msgoff - offset; + + /* RVU_PF_FUNC_S */ + msg->pcifunc = otx2_pfvf_func(dev->pf, vf); + + if (msg->id == MBOX_MSG_READY) { + struct ready_msg_rsp *rsp; + uint16_t max_bits = sizeof(dev->active_vfs[0]) * 8; + + /* Handle READY message in PF */ + dev->active_vfs[vf / max_bits] |= + BIT_ULL(vf % max_bits); + rsp = (struct ready_msg_rsp *) + otx2_mbox_alloc_msg(mbox, vf, sizeof(*rsp)); + otx2_mbox_rsp_init(msg->id, rsp); + + /* PF/VF function ID */ + rsp->hdr.pcifunc = msg->pcifunc; + rsp->hdr.rc = 0; + } else { + struct mbox_msghdr *af_req; + /* Reserve AF/PF mbox message */ + size = RTE_ALIGN(size, MBOX_MSG_ALIGN); + af_req = otx2_mbox_alloc_msg(dev->mbox, 0, size); + otx2_mbox_req_init(msg->id, af_req); + + /* Copy message from VF<->PF mbox to PF<->AF mbox */ + otx2_mbox_memcpy((uint8_t *)af_req + + sizeof(struct mbox_msghdr), + (uint8_t *)msg + sizeof(struct mbox_msghdr), + size - sizeof(struct mbox_msghdr)); + af_req->pcifunc = msg->pcifunc; + routed++; + } + offset = mbox->rx_start + msg->next_msgoff; + } + + if (routed > 0) { + otx2_base_dbg("pf:%d routed %d messages from vf:%d to AF", + dev->pf, routed, vf); + af_pf_wait_msg(dev, vf, routed); + otx2_mbox_reset(dev->mbox, 0); + } + + /* Send mbox responses to VF */ + if (mdev->num_msgs) { + otx2_base_dbg("pf:%d 
reply %d messages to vf:%d", + dev->pf, mdev->num_msgs, vf); + otx2_mbox_msg_send(mbox, vf); + } + + return i; +} + +static void +otx2_vf_pf_mbox_handle_msg(void *param) +{ + uint16_t vf, max_vf, max_bits; + struct otx2_dev *dev = param; + + max_bits = sizeof(dev->intr.bits[0]) * sizeof(uint64_t); + max_vf = max_bits * MAX_VFPF_DWORD_BITS; + + for (vf = 0; vf < max_vf; vf++) { + if (dev->intr.bits[vf/max_bits] & BIT_ULL(vf%max_bits)) { + otx2_base_dbg("Process vf:%d request (pf:%d, vf:%d)", + vf, dev->pf, dev->vf); + vf_pf_process_msgs(dev, vf); + dev->intr.bits[vf/max_bits] &= ~(BIT_ULL(vf%max_bits)); + } + } + dev->timer_set = 0; +} + +static void +otx2_vf_pf_mbox_irq(void *param) +{ + struct otx2_dev *dev = param; + bool alarm_set = false; + uint64_t intr; + int vfpf; + + for (vfpf = 0; vfpf < MAX_VFPF_DWORD_BITS; ++vfpf) { + intr = otx2_read64(dev->bar2 + RVU_PF_VFPF_MBOX_INTX(vfpf)); + if (!intr) + continue; + + otx2_base_dbg("vfpf: %d intr: 0x%" PRIx64 " (pf:%d, vf:%d)", + vfpf, intr, dev->pf, dev->vf); + + /* Save and clear intr bits */ + dev->intr.bits[vfpf] |= intr; + otx2_write64(intr, dev->bar2 + RVU_PF_VFPF_MBOX_INTX(vfpf)); + alarm_set = true; + } + + if (!dev->timer_set && alarm_set) { + dev->timer_set = 1; + /* Start timer to handle messages */ + rte_eal_alarm_set(VF_PF_MBOX_TIMER_MS, + otx2_vf_pf_mbox_handle_msg, dev); + } +} + static void otx2_process_msgs(struct otx2_dev *dev, struct otx2_mbox *mbox) { @@ -118,12 +313,33 @@ static int mbox_register_irq(struct rte_pci_device *pci_dev, struct otx2_dev *dev) { struct rte_intr_handle *intr_handle = &pci_dev->intr_handle; - int rc; + int i, rc; + + /* HW clear irq */ + for (i = 0; i < MAX_VFPF_DWORD_BITS; ++i) + otx2_write64(~0ull, dev->bar2 + + RVU_PF_VFPF_MBOX_INT_ENA_W1CX(i)); otx2_write64(~0ull, dev->bar2 + RVU_PF_INT_ENA_W1C); dev->timer_set = 0; + /* MBOX interrupt for VF(0...63) <-> PF */ + rc = otx2_register_irq(intr_handle, otx2_vf_pf_mbox_irq, dev, + RVU_PF_INT_VEC_VFPF_MBOX0); + + if (rc) { + otx2_err("Fail to register PF(VF0-63) mbox irq"); + return rc; + } + /* MBOX interrupt for VF(64...128) <-> PF */ + rc = otx2_register_irq(intr_handle, otx2_vf_pf_mbox_irq, dev, + RVU_PF_INT_VEC_VFPF_MBOX1); + + if (rc) { + otx2_err("Fail to register PF(VF64-128) mbox irq"); + return rc; + } /* MBOX interrupt AF <-> PF */ rc = otx2_register_irq(intr_handle, otx2_af_pf_mbox_irq, dev, RVU_PF_INT_VEC_AFPF_MBOX); @@ -132,6 +348,11 @@ mbox_register_irq(struct rte_pci_device *pci_dev, struct otx2_dev *dev) return rc; } + /* HW enable intr */ + for (i = 0; i < MAX_VFPF_DWORD_BITS; ++i) + otx2_write64(~0ull, dev->bar2 + + RVU_PF_VFPF_MBOX_INT_ENA_W1SX(i)); + otx2_write64(~0ull, dev->bar2 + RVU_PF_INT); otx2_write64(~0ull, dev->bar2 + RVU_PF_INT_ENA_W1S); @@ -142,11 +363,28 @@ static void mbox_unregister_irq(struct rte_pci_device *pci_dev, struct otx2_dev *dev) { struct rte_intr_handle *intr_handle = &pci_dev->intr_handle; + int i; + + /* HW clear irq */ + for (i = 0; i < MAX_VFPF_DWORD_BITS; ++i) + otx2_write64(~0ull, dev->bar2 + + RVU_PF_VFPF_MBOX_INT_ENA_W1CX(i)); otx2_write64(~0ull, dev->bar2 + RVU_PF_INT_ENA_W1C); dev->timer_set = 0; + rte_eal_alarm_cancel(otx2_vf_pf_mbox_handle_msg, dev); + + /* Unregister the interrupt handler for each vectors */ + /* MBOX interrupt for VF(0...63) <-> PF */ + otx2_unregister_irq(intr_handle, otx2_vf_pf_mbox_irq, dev, + RVU_PF_INT_VEC_VFPF_MBOX0); + + /* MBOX interrupt for VF(64...128) <-> PF */ + otx2_unregister_irq(intr_handle, otx2_vf_pf_mbox_irq, dev, + RVU_PF_INT_VEC_VFPF_MBOX1); + /* MBOX 
interrupt AF <-> PF */ otx2_unregister_irq(intr_handle, otx2_af_pf_mbox_irq, dev, RVU_PF_INT_VEC_AFPF_MBOX); From patchwork Mon Jun 17 15:55:22 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jerin Jacob Kollanukkaran X-Patchwork-Id: 54864 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 083431BF9E; Mon, 17 Jun 2019 17:56:55 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by dpdk.org (Postfix) with ESMTP id C3BF11BF7E for ; Mon, 17 Jun 2019 17:56:38 +0200 (CEST) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.16.0.27/8.16.0.27) with SMTP id x5HFprCR001049 for ; Mon, 17 Jun 2019 08:56:38 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0818; bh=n1xcanczaEm3ZsFWV80fck8NwkLNoZDh4TsLVHPcXrU=; b=WK2VFf+hU4aNF4LDuRg075JL9/o9IpTWbWDWCqRxf/gHnUo51KvIQa3mMeXMbrvTVc3s Rzq4RjUxIVWPHMFEG0xi0hvtmndRMIJkj9Yfb3mSapoRvfjhru8baSDkOW3Qjp+5lMNM AN8rTURYpIygYuIhK2q0Yg+KogC/MaR5F1F2S1YYq+IBChOl9KLkmJfDKVkTFvRU0vwq 3pmcGzknKbPu3gFI9fwK3bVuDJGbgffX1+5WjWB7u3zGJOAg5qM80FGOcFVq6Z8xS27G krO3KO3jVO5ZFo9xLsluaXxCWOHPlX0/XsYt9b/t2+KuPc1PLGToxJFt7C/h841m1/oc CQ== Received: from sc-exch01.marvell.com ([199.233.58.181]) by mx0b-0016f401.pphosted.com with ESMTP id 2t506hyaxv-6 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT) for ; Mon, 17 Jun 2019 08:56:38 -0700 Received: from SC-EXCH01.marvell.com (10.93.176.81) by SC-EXCH01.marvell.com (10.93.176.81) with Microsoft SMTP Server (TLS) id 15.0.1367.3; Mon, 17 Jun 2019 08:56:25 -0700 Received: from maili.marvell.com (10.93.176.43) by SC-EXCH01.marvell.com (10.93.176.81) with Microsoft SMTP Server id 15.0.1367.3 via Frontend Transport; Mon, 17 Jun 2019 08:56:25 -0700 Received: from jerin-lab.marvell.com (jerin-lab.marvell.com [10.28.34.14]) by maili.marvell.com (Postfix) with ESMTP id 229DA3F704A; Mon, 17 Jun 2019 08:56:23 -0700 (PDT) From: To: , Jerin Jacob , Nithin Dabilpuram , Vamsi Attunuru Date: Mon, 17 Jun 2019 21:25:22 +0530 Message-ID: <20190617155537.36144-13-jerinj@marvell.com> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20190617155537.36144-1-jerinj@marvell.com> References: <20190601014905.45531-1-jerinj@marvell.com> <20190617155537.36144-1-jerinj@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:, , definitions=2019-06-17_07:, , signatures=0 Subject: [dpdk-dev] [PATCH v3 12/27] common/octeontx2: add VF mailbox IRQ and msg handler X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Jerin Jacob This patch adds support for PF <-> VF mailbox interrupt mailbox message interrupt handling. 
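In outline, the VF side mirrors the AF <-> PF path already in place: the interrupt handler acks RVU_VF_INT and drains the mailbox, and registration is dispatched on otx2_dev_is_vf(). A condensed sketch of the handler and registration added in the diff below (debug logging and the PF dispatch path omitted):

static void
otx2_pf_vf_mbox_irq(void *param)
{
	struct otx2_dev *dev = param;
	uint64_t intr;

	intr = otx2_read64(dev->bar2 + RVU_VF_INT);
	if (intr == 0)
		return;

	/* Ack the interrupt and process the PF -> VF configuration replies */
	otx2_write64(intr, dev->bar2 + RVU_VF_INT);
	otx2_process_msgs(dev, dev->mbox);
}

static int
mbox_register_vf_irq(struct rte_pci_device *pci_dev, struct otx2_dev *dev)
{
	int rc;

	/* Mask the mailbox interrupt while wiring up the handler */
	otx2_write64(~0ull, dev->bar2 + RVU_VF_INT_ENA_W1C);

	rc = otx2_register_irq(&pci_dev->intr_handle, otx2_pf_vf_mbox_irq,
			       dev, RVU_VF_INT_VEC_MBOX);
	if (rc)
		return rc;

	/* Clear any stale interrupt and unmask */
	otx2_write64(~0ull, dev->bar2 + RVU_VF_INT);
	otx2_write64(~0ull, dev->bar2 + RVU_VF_INT_ENA_W1S);
	return 0;
}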
Signed-off-by: Jerin Jacob --- drivers/common/octeontx2/otx2_dev.c | 78 ++++++++++++++++++++++++++++- 1 file changed, 76 insertions(+), 2 deletions(-) diff --git a/drivers/common/octeontx2/otx2_dev.c b/drivers/common/octeontx2/otx2_dev.c index efb28a9d2..c5f7d5078 100644 --- a/drivers/common/octeontx2/otx2_dev.c +++ b/drivers/common/octeontx2/otx2_dev.c @@ -291,6 +291,24 @@ otx2_process_msgs(struct otx2_dev *dev, struct otx2_mbox *mbox) rte_wmb(); } +static void +otx2_pf_vf_mbox_irq(void *param) +{ + struct otx2_dev *dev = param; + uint64_t intr; + + intr = otx2_read64(dev->bar2 + RVU_VF_INT); + if (intr == 0) + return; + + otx2_write64(intr, dev->bar2 + RVU_VF_INT); + otx2_base_dbg("Irq 0x%" PRIx64 "(pf:%d,vf:%d)", intr, dev->pf, dev->vf); + if (intr) + /* First process all configuration messages */ + otx2_process_msgs(dev, dev->mbox); + +} + static void otx2_af_pf_mbox_irq(void *param) { @@ -310,7 +328,7 @@ otx2_af_pf_mbox_irq(void *param) } static int -mbox_register_irq(struct rte_pci_device *pci_dev, struct otx2_dev *dev) +mbox_register_pf_irq(struct rte_pci_device *pci_dev, struct otx2_dev *dev) { struct rte_intr_handle *intr_handle = &pci_dev->intr_handle; int i, rc; @@ -359,8 +377,41 @@ mbox_register_irq(struct rte_pci_device *pci_dev, struct otx2_dev *dev) return rc; } +static int +mbox_register_vf_irq(struct rte_pci_device *pci_dev, struct otx2_dev *dev) +{ + struct rte_intr_handle *intr_handle = &pci_dev->intr_handle; + int rc; + + /* Clear irq */ + otx2_write64(~0ull, dev->bar2 + RVU_VF_INT_ENA_W1C); + + /* MBOX interrupt PF <-> VF */ + rc = otx2_register_irq(intr_handle, otx2_pf_vf_mbox_irq, + dev, RVU_VF_INT_VEC_MBOX); + if (rc) { + otx2_err("Fail to register PF<->VF mbox irq"); + return rc; + } + + /* HW enable intr */ + otx2_write64(~0ull, dev->bar2 + RVU_VF_INT); + otx2_write64(~0ull, dev->bar2 + RVU_VF_INT_ENA_W1S); + + return rc; +} + +static int +mbox_register_irq(struct rte_pci_device *pci_dev, struct otx2_dev *dev) +{ + if (otx2_dev_is_vf(dev)) + return mbox_register_vf_irq(pci_dev, dev); + else + return mbox_register_pf_irq(pci_dev, dev); +} + static void -mbox_unregister_irq(struct rte_pci_device *pci_dev, struct otx2_dev *dev) +mbox_unregister_pf_irq(struct rte_pci_device *pci_dev, struct otx2_dev *dev) { struct rte_intr_handle *intr_handle = &pci_dev->intr_handle; int i; @@ -388,6 +439,29 @@ mbox_unregister_irq(struct rte_pci_device *pci_dev, struct otx2_dev *dev) /* MBOX interrupt AF <-> PF */ otx2_unregister_irq(intr_handle, otx2_af_pf_mbox_irq, dev, RVU_PF_INT_VEC_AFPF_MBOX); + +} + +static void +mbox_unregister_vf_irq(struct rte_pci_device *pci_dev, struct otx2_dev *dev) +{ + struct rte_intr_handle *intr_handle = &pci_dev->intr_handle; + + /* Clear irq */ + otx2_write64(~0ull, dev->bar2 + RVU_VF_INT_ENA_W1C); + + /* Unregister the interrupt handler */ + otx2_unregister_irq(intr_handle, otx2_pf_vf_mbox_irq, dev, + RVU_VF_INT_VEC_MBOX); +} + +static void +mbox_unregister_irq(struct rte_pci_device *pci_dev, struct otx2_dev *dev) +{ + if (otx2_dev_is_vf(dev)) + return mbox_unregister_vf_irq(pci_dev, dev); + else + return mbox_unregister_pf_irq(pci_dev, dev); } static void From patchwork Mon Jun 17 15:55:23 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jerin Jacob Kollanukkaran X-Patchwork-Id: 54861 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org 
(Postfix) with ESMTP id E01CF1BF8D; Mon, 17 Jun 2019 17:56:47 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by dpdk.org (Postfix) with ESMTP id 6BA411BF7C for ; Mon, 17 Jun 2019 17:56:37 +0200 (CEST) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.16.0.27/8.16.0.27) with SMTP id x5HFppxO000998 for ; Mon, 17 Jun 2019 08:56:36 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0818; bh=mFL4zD0M+GDbqJq4oi/ABA9bjtTURT4t3x1KQTsTW0g=; b=gsWmgyJ++8dm0pKfoUeUvrFMYyzwbcDxXadceDsyl2L7Ybe2kV59QMQg0djn3EFY9CP0 NB5n9nFXwqGlBr2i208mGbREz36SEvC1zIK981jjnmGcao64jUy0aJMbM5zcRfPmbRWX qNHq+Usz2tz12GYt2lKTwd0+MAPG88JoFFROu0x8gqyzvWGTBc6QpmOhkwNWjFgSuIXL VYyJ0X3P3x+Ebn0rnVre0i3tmnxFvstt855RiiElkYQNfNfvtB6pyo7fZ4AA9hM6ajiG qlXGCL8RvTh3RQ18jrvFIN4QnJoUPP+QZmGeGHdI0NebzSgqFTt39zBDWYF/HAKmgBEr cw== Received: from sc-exch04.marvell.com ([199.233.58.184]) by mx0b-0016f401.pphosted.com with ESMTP id 2t506hyax9-14 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT) for ; Mon, 17 Jun 2019 08:56:36 -0700 Received: from SC-EXCH03.marvell.com (10.93.176.83) by SC-EXCH04.marvell.com (10.93.176.84) with Microsoft SMTP Server (TLS) id 15.0.1367.3; Mon, 17 Jun 2019 08:56:28 -0700 Received: from maili.marvell.com (10.93.176.43) by SC-EXCH03.marvell.com (10.93.176.83) with Microsoft SMTP Server id 15.0.1367.3 via Frontend Transport; Mon, 17 Jun 2019 08:56:28 -0700 Received: from jerin-lab.marvell.com (jerin-lab.marvell.com [10.28.34.14]) by maili.marvell.com (Postfix) with ESMTP id D88483F7040; Mon, 17 Jun 2019 08:56:26 -0700 (PDT) From: To: , Jerin Jacob , Nithin Dabilpuram , Vamsi Attunuru CC: Harman Kalra Date: Mon, 17 Jun 2019 21:25:23 +0530 Message-ID: <20190617155537.36144-14-jerinj@marvell.com> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20190617155537.36144-1-jerinj@marvell.com> References: <20190601014905.45531-1-jerinj@marvell.com> <20190617155537.36144-1-jerinj@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:, , definitions=2019-06-17_07:, , signatures=0 Subject: [dpdk-dev] [PATCH v3 13/27] common/octeontx2: add uplink message support X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Nithin Dabilpuram The events like PHY link status change by AF or PHY configuration change by PF would call for the uplink message. The AF initiated uplink would land it in PF and PF would further forward to VF(if it is intended for that specific VF) The PF initiated uplink would be distributed to all active VFs. This patch adds support for the same. 
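For example, the CGX link-status event is handled as sketched below (condensed from the handler added in this patch): the driver callback always fires locally, and on the PF an AF-originated event is additionally replicated to every active VF through pf_vf_mbox_send_up_msg() over the PF <-> VF UP mailbox.

static int
otx2_mbox_up_handler_cgx_link_event(struct otx2_dev *dev,
				    struct cgx_link_info_msg *msg,
				    struct msg_rsp *rsp)
{
	/* Deliver the event to the driver owning this PF/VF */
	if (dev->ops && dev->ops->link_status_update)
		dev->ops->link_status_update(dev, &msg->link_info);

	/*
	 * The sender's PF field is 0 when the event comes from the AF (PF0);
	 * in that case we are the PF and forward the event to all active VFs.
	 */
	if (otx2_get_pf(msg->hdr.pcifunc) == 0)
		pf_vf_mbox_send_up_msg(dev, msg);

	rsp->hdr.rc = 0;
	return 0;
}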
Signed-off-by: Nithin Dabilpuram Signed-off-by: Harman Kalra --- drivers/common/octeontx2/otx2_dev.c | 243 +++++++++++++++++++++++++++- drivers/common/octeontx2/otx2_dev.h | 11 ++ 2 files changed, 252 insertions(+), 2 deletions(-) diff --git a/drivers/common/octeontx2/otx2_dev.c b/drivers/common/octeontx2/otx2_dev.c index c5f7d5078..09943855d 100644 --- a/drivers/common/octeontx2/otx2_dev.c +++ b/drivers/common/octeontx2/otx2_dev.c @@ -195,6 +195,57 @@ vf_pf_process_msgs(struct otx2_dev *dev, uint16_t vf) return i; } +static int +vf_pf_process_up_msgs(struct otx2_dev *dev, uint16_t vf) +{ + struct otx2_mbox *mbox = &dev->mbox_vfpf_up; + struct otx2_mbox_dev *mdev = &mbox->dev[vf]; + struct mbox_hdr *req_hdr; + struct mbox_msghdr *msg; + int msgs_acked = 0; + int offset; + uint16_t i; + + req_hdr = (struct mbox_hdr *)((uintptr_t)mdev->mbase + mbox->rx_start); + if (req_hdr->num_msgs == 0) + return 0; + + offset = mbox->rx_start + RTE_ALIGN(sizeof(*req_hdr), MBOX_MSG_ALIGN); + + for (i = 0; i < req_hdr->num_msgs; i++) { + msg = (struct mbox_msghdr *)((uintptr_t)mdev->mbase + offset); + + msgs_acked++; + /* RVU_PF_FUNC_S */ + msg->pcifunc = otx2_pfvf_func(dev->pf, vf); + + switch (msg->id) { + case MBOX_MSG_CGX_LINK_EVENT: + otx2_base_dbg("PF: Msg 0x%x (%s) fn:0x%x (pf:%d,vf:%d)", + msg->id, otx2_mbox_id2name(msg->id), + msg->pcifunc, otx2_get_pf(msg->pcifunc), + otx2_get_vf(msg->pcifunc)); + break; + case MBOX_MSG_CGX_PTP_RX_INFO: + otx2_base_dbg("PF: Msg 0x%x (%s) fn:0x%x (pf:%d,vf:%d)", + msg->id, otx2_mbox_id2name(msg->id), + msg->pcifunc, otx2_get_pf(msg->pcifunc), + otx2_get_vf(msg->pcifunc)); + break; + default: + otx2_err("Not handled UP msg 0x%x (%s) func:0x%x", + msg->id, otx2_mbox_id2name(msg->id), + msg->pcifunc); + } + offset = mbox->rx_start + msg->next_msgoff; + } + otx2_mbox_reset(mbox, vf); + mdev->msgs_acked = msgs_acked; + rte_wmb(); + + return i; +} + static void otx2_vf_pf_mbox_handle_msg(void *param) { @@ -209,6 +260,8 @@ otx2_vf_pf_mbox_handle_msg(void *param) otx2_base_dbg("Process vf:%d request (pf:%d, vf:%d)", vf, dev->pf, dev->vf); vf_pf_process_msgs(dev, vf); + /* UP messages */ + vf_pf_process_up_msgs(dev, vf); dev->intr.bits[vf/max_bits] &= ~(BIT_ULL(vf%max_bits)); } } @@ -291,6 +344,185 @@ otx2_process_msgs(struct otx2_dev *dev, struct otx2_mbox *mbox) rte_wmb(); } +/* Copies the message received from AF and sends it to VF */ +static void +pf_vf_mbox_send_up_msg(struct otx2_dev *dev, void *rec_msg) +{ + uint16_t max_bits = sizeof(dev->active_vfs[0]) * sizeof(uint64_t); + struct otx2_mbox *vf_mbox = &dev->mbox_vfpf_up; + struct msg_req *msg = rec_msg; + struct mbox_msghdr *vf_msg; + uint16_t vf; + size_t size; + + size = RTE_ALIGN(otx2_mbox_id2size(msg->hdr.id), MBOX_MSG_ALIGN); + /* Send UP message to all VF's */ + for (vf = 0; vf < vf_mbox->ndevs; vf++) { + /* VF active */ + if (!(dev->active_vfs[vf / max_bits] & (BIT_ULL(vf)))) + continue; + + otx2_base_dbg("(%s) size: %zx to VF: %d", + otx2_mbox_id2name(msg->hdr.id), size, vf); + + /* Reserve PF/VF mbox message */ + vf_msg = otx2_mbox_alloc_msg(vf_mbox, vf, size); + if (!vf_msg) { + otx2_err("Failed to alloc VF%d UP message", vf); + continue; + } + otx2_mbox_req_init(msg->hdr.id, vf_msg); + + /* + * Copy message from AF<->PF UP mbox + * to PF<->VF UP mbox + */ + otx2_mbox_memcpy((uint8_t *)vf_msg + + sizeof(struct mbox_msghdr), (uint8_t *)msg + + sizeof(struct mbox_msghdr), size - + sizeof(struct mbox_msghdr)); + + vf_msg->rc = msg->hdr.rc; + /* Set PF to be a sender */ + vf_msg->pcifunc = dev->pf_func; + + 
/* Send to VF */ + otx2_mbox_msg_send(vf_mbox, vf); + } +} + +static int +otx2_mbox_up_handler_cgx_link_event(struct otx2_dev *dev, + struct cgx_link_info_msg *msg, + struct msg_rsp *rsp) +{ + struct cgx_link_user_info *linfo = &msg->link_info; + + otx2_base_dbg("pf:%d/vf:%d NIC Link %s --> 0x%x (%s) from: pf:%d/vf:%d", + otx2_get_pf(dev->pf_func), otx2_get_vf(dev->pf_func), + linfo->link_up ? "UP" : "DOWN", msg->hdr.id, + otx2_mbox_id2name(msg->hdr.id), + otx2_get_pf(msg->hdr.pcifunc), + otx2_get_vf(msg->hdr.pcifunc)); + + /* PF gets link notification from AF */ + if (otx2_get_pf(msg->hdr.pcifunc) == 0) { + if (dev->ops && dev->ops->link_status_update) + dev->ops->link_status_update(dev, linfo); + + /* Forward the same message as received from AF to VF */ + pf_vf_mbox_send_up_msg(dev, msg); + } else { + /* VF gets link up notification */ + if (dev->ops && dev->ops->link_status_update) + dev->ops->link_status_update(dev, linfo); + } + + rsp->hdr.rc = 0; + return 0; +} + +static int +otx2_mbox_up_handler_cgx_ptp_rx_info(struct otx2_dev *dev, + struct cgx_ptp_rx_info_msg *msg, + struct msg_rsp *rsp) +{ + otx2_nix_dbg("pf:%d/vf:%d PTP mode %s --> 0x%x (%s) from: pf:%d/vf:%d", + otx2_get_pf(dev->pf_func), + otx2_get_vf(dev->pf_func), + msg->ptp_en ? "ENABLED" : "DISABLED", + msg->hdr.id, otx2_mbox_id2name(msg->hdr.id), + otx2_get_pf(msg->hdr.pcifunc), + otx2_get_vf(msg->hdr.pcifunc)); + + /* PF gets PTP notification from AF */ + if (otx2_get_pf(msg->hdr.pcifunc) == 0) { + if (dev->ops && dev->ops->ptp_info_update) + dev->ops->ptp_info_update(dev, msg->ptp_en); + + /* Forward the same message as received from AF to VF */ + pf_vf_mbox_send_up_msg(dev, msg); + } else { + /* VF gets PTP notification */ + if (dev->ops && dev->ops->ptp_info_update) + dev->ops->ptp_info_update(dev, msg->ptp_en); + } + + rsp->hdr.rc = 0; + return 0; +} + +static int +mbox_process_msgs_up(struct otx2_dev *dev, struct mbox_msghdr *req) +{ + /* Check if valid, if not reply with a invalid msg */ + if (req->sig != OTX2_MBOX_REQ_SIG) + return -EIO; + + switch (req->id) { +#define M(_name, _id, _fn_name, _req_type, _rsp_type) \ + case _id: { \ + struct _rsp_type *rsp; \ + int err; \ + \ + rsp = (struct _rsp_type *)otx2_mbox_alloc_msg( \ + &dev->mbox_up, 0, \ + sizeof(struct _rsp_type)); \ + if (!rsp) \ + return -ENOMEM; \ + \ + rsp->hdr.id = _id; \ + rsp->hdr.sig = OTX2_MBOX_RSP_SIG; \ + rsp->hdr.pcifunc = dev->pf_func; \ + rsp->hdr.rc = 0; \ + \ + err = otx2_mbox_up_handler_ ## _fn_name( \ + dev, (struct _req_type *)req, rsp); \ + return err; \ + } +MBOX_UP_CGX_MESSAGES +#undef M + + default : + otx2_reply_invalid_msg(&dev->mbox_up, 0, 0, req->id); + } + + return -ENODEV; +} + +static void +otx2_process_msgs_up(struct otx2_dev *dev, struct otx2_mbox *mbox) +{ + struct otx2_mbox_dev *mdev = &mbox->dev[0]; + struct mbox_hdr *req_hdr; + struct mbox_msghdr *msg; + int i, err, offset; + + req_hdr = (struct mbox_hdr *)((uintptr_t)mdev->mbase + mbox->rx_start); + if (req_hdr->num_msgs == 0) + return; + + offset = mbox->rx_start + RTE_ALIGN(sizeof(*req_hdr), MBOX_MSG_ALIGN); + for (i = 0; i < req_hdr->num_msgs; i++) { + msg = (struct mbox_msghdr *)((uintptr_t)mdev->mbase + offset); + + otx2_base_dbg("Message 0x%x (%s) pf:%d/vf:%d", + msg->id, otx2_mbox_id2name(msg->id), + otx2_get_pf(msg->pcifunc), + otx2_get_vf(msg->pcifunc)); + err = mbox_process_msgs_up(dev, msg); + if (err) + otx2_err("Error %d handling 0x%x (%s)", + err, msg->id, otx2_mbox_id2name(msg->id)); + offset = mbox->rx_start + msg->next_msgoff; + } + /* Send mbox 
responses */ + if (mdev->num_msgs) { + otx2_base_dbg("Reply num_msgs:%d", mdev->num_msgs); + otx2_mbox_msg_send(mbox, 0); + } +} + static void otx2_pf_vf_mbox_irq(void *param) { @@ -303,10 +535,13 @@ otx2_pf_vf_mbox_irq(void *param) otx2_write64(intr, dev->bar2 + RVU_VF_INT); otx2_base_dbg("Irq 0x%" PRIx64 "(pf:%d,vf:%d)", intr, dev->pf, dev->vf); - if (intr) + if (intr) { /* First process all configuration messages */ otx2_process_msgs(dev, dev->mbox); + /* Process Uplink messages */ + otx2_process_msgs_up(dev, &dev->mbox_up); + } } static void @@ -322,9 +557,13 @@ otx2_af_pf_mbox_irq(void *param) otx2_write64(intr, dev->bar2 + RVU_PF_INT); otx2_base_dbg("Irq 0x%" PRIx64 "(pf:%d,vf:%d)", intr, dev->pf, dev->vf); - if (intr) + if (intr) { /* First process all configuration messages */ otx2_process_msgs(dev, dev->mbox); + + /* Process Uplink messages */ + otx2_process_msgs_up(dev, &dev->mbox_up); + } } static int diff --git a/drivers/common/octeontx2/otx2_dev.h b/drivers/common/octeontx2/otx2_dev.h index 70104dfa2..8fa5f32d2 100644 --- a/drivers/common/octeontx2/otx2_dev.h +++ b/drivers/common/octeontx2/otx2_dev.h @@ -23,6 +23,17 @@ struct otx2_dev; +/* Link status callback */ +typedef void (*otx2_link_status_t)(struct otx2_dev *dev, + struct cgx_link_user_info *link); +/* PTP info callback */ +typedef int (*otx2_ptp_info_t)(struct otx2_dev *dev, bool ptp_en); + +struct otx2_dev_ops { + otx2_link_status_t link_status_update; + otx2_ptp_info_t ptp_info_update; +}; + #define OTX2_DEV \ int node __rte_cache_aligned; \ uint16_t pf; \ From patchwork Mon Jun 17 15:55:24 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jerin Jacob Kollanukkaran X-Patchwork-Id: 54862 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 2B18E1BF92; Mon, 17 Jun 2019 17:56:50 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by dpdk.org (Postfix) with ESMTP id E76F81BF7E for ; Mon, 17 Jun 2019 17:56:37 +0200 (CEST) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.16.0.27/8.16.0.27) with SMTP id x5HFppxP000998 for ; Mon, 17 Jun 2019 08:56:37 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0818; bh=n+h2SslRYMl3qCsU4PL1rp+RsLFXMRd+GRNW10MFTUg=; b=iQz4k7v1JNQC+cIMBof0L0meEJyaf1w4Sbc7Z5Us3LBThD9VTKnT3/C6oyNrQhiXxbkJ Nb0bt2fXs89AaZrtGkdyTnzswVnY2C9qGJqk9PbHq9X09YI2xA86NCTm41HXyY40cu7w gn5v03xlmnsWStuWRvtiTkdv6rLYmewKyWz9Imx7sF6Y7o/gDOKYDyARhSsoHh/QDaIM pE7UxeunphmygHgI2fdIt1WOQmmbNeMjvxq0Il+VIgm9RQT5YDWNYrJ/4lJQM9zH5Qrp OzvrtSuYJAQ1qwObwoH8+lxtv00dh2KCXGULZuYOctG5dsII9saRTPSO+8dqSaRG1b8C 7g== Received: from sc-exch04.marvell.com ([199.233.58.184]) by mx0b-0016f401.pphosted.com with ESMTP id 2t506hyax9-15 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT) for ; Mon, 17 Jun 2019 08:56:37 -0700 Received: from SC-EXCH01.marvell.com (10.93.176.81) by SC-EXCH04.marvell.com (10.93.176.84) with Microsoft SMTP Server (TLS) id 15.0.1367.3; Mon, 17 Jun 2019 08:56:31 -0700 Received: from maili.marvell.com (10.93.176.43) by SC-EXCH01.marvell.com (10.93.176.81) with Microsoft SMTP Server id 15.0.1367.3 via Frontend Transport; Mon, 17 Jun 
2019 08:56:31 -0700 Received: from jerin-lab.marvell.com (jerin-lab.marvell.com [10.28.34.14]) by maili.marvell.com (Postfix) with ESMTP id A9A773F704A; Mon, 17 Jun 2019 08:56:29 -0700 (PDT) From: To: , Jerin Jacob , Nithin Dabilpuram , Vamsi Attunuru CC: Harman Kalra Date: Mon, 17 Jun 2019 21:25:24 +0530 Message-ID: <20190617155537.36144-15-jerinj@marvell.com> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20190617155537.36144-1-jerinj@marvell.com> References: <20190601014905.45531-1-jerinj@marvell.com> <20190617155537.36144-1-jerinj@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:, , definitions=2019-06-17_07:, , signatures=0 Subject: [dpdk-dev] [PATCH v3 14/27] common/octeontx2: add FLR IRQ handler X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Nithin Dabilpuram Upon receiving FLR request from VF, It is PF responsibly forward to AF and enable FLR for VFs. This patch adds support for VF FLR support in PF. This patch also add otx2_dev_active_vfs() API to find the number of active VF for given PF. Signed-off-by: Nithin Dabilpuram Signed-off-by: Harman Kalra --- drivers/common/octeontx2/otx2_dev.c | 180 ++++++++++++++++++ .../rte_common_octeontx2_version.map | 1 + 2 files changed, 181 insertions(+) diff --git a/drivers/common/octeontx2/otx2_dev.c b/drivers/common/octeontx2/otx2_dev.c index 09943855d..53a0c6efb 100644 --- a/drivers/common/octeontx2/otx2_dev.c +++ b/drivers/common/octeontx2/otx2_dev.c @@ -51,6 +51,52 @@ mbox_mem_unmap(void *va, size_t size) munmap(va, size); } +static int +pf_af_sync_msg(struct otx2_dev *dev, struct mbox_msghdr **rsp) +{ + uint32_t timeout = 0, sleep = 1; struct otx2_mbox *mbox = dev->mbox; + struct otx2_mbox_dev *mdev = &mbox->dev[0]; + volatile uint64_t int_status; + struct mbox_msghdr *msghdr; + uint64_t off; + int rc = 0; + + /* We need to disable PF interrupts. 
We are in timer interrupt */ + otx2_write64(~0ull, dev->bar2 + RVU_PF_INT_ENA_W1C); + + /* Send message */ + otx2_mbox_msg_send(mbox, 0); + + do { + rte_delay_ms(sleep); + timeout += sleep; + if (timeout >= MBOX_RSP_TIMEOUT) { + otx2_err("Message timeout: %dms", MBOX_RSP_TIMEOUT); + rc = -EIO; + break; + } + int_status = otx2_read64(dev->bar2 + RVU_PF_INT); + } while ((int_status & 0x1) != 0x1); + + /* Clear */ + otx2_write64(int_status, dev->bar2 + RVU_PF_INT); + + /* Enable interrupts */ + otx2_write64(~0ull, dev->bar2 + RVU_PF_INT_ENA_W1S); + + if (rc == 0) { + /* Get message */ + off = mbox->rx_start + + RTE_ALIGN(sizeof(struct mbox_hdr), MBOX_MSG_ALIGN); + msghdr = (struct mbox_msghdr *)((uintptr_t)mdev->mbase + off); + if (rsp) + *rsp = msghdr; + rc = msghdr->rc; + } + + return rc; +} + static int af_pf_wait_msg(struct otx2_dev *dev, uint16_t vf, int num_msg) { @@ -703,6 +749,132 @@ mbox_unregister_irq(struct rte_pci_device *pci_dev, struct otx2_dev *dev) return mbox_unregister_pf_irq(pci_dev, dev); } +static int +vf_flr_send_msg(struct otx2_dev *dev, uint16_t vf) +{ + struct otx2_mbox *mbox = dev->mbox; + struct msg_req *req; + int rc; + + req = otx2_mbox_alloc_msg_vf_flr(mbox); + /* Overwrite pcifunc to indicate VF */ + req->hdr.pcifunc = otx2_pfvf_func(dev->pf, vf); + + /* Sync message in interrupt context */ + rc = pf_af_sync_msg(dev, NULL); + if (rc) + otx2_err("Failed to send VF FLR mbox msg, rc=%d", rc); + + return rc; +} + +static void +otx2_pf_vf_flr_irq(void *param) +{ + struct otx2_dev *dev = (struct otx2_dev *)param; + uint16_t max_vf = 64, vf; + uintptr_t bar2; + uint64_t intr; + int i; + + max_vf = (dev->maxvf > 0) ? dev->maxvf : 64; + bar2 = dev->bar2; + + otx2_base_dbg("FLR VF interrupt: max_vf: %d", max_vf); + + for (i = 0; i < MAX_VFPF_DWORD_BITS; ++i) { + intr = otx2_read64(bar2 + RVU_PF_VFFLR_INTX(i)); + if (!intr) + continue; + + for (vf = 0; vf < max_vf; vf++) { + if (!(intr & (1ULL << vf))) + continue; + + vf = 64 * i + vf; + otx2_base_dbg("FLR: i :%d intr: 0x%" PRIx64 ", vf-%d", + i, intr, vf); + /* Clear interrupt */ + otx2_write64(BIT_ULL(vf), bar2 + RVU_PF_VFFLR_INTX(i)); + /* Disable the interrupt */ + otx2_write64(BIT_ULL(vf), + bar2 + RVU_PF_VFFLR_INT_ENA_W1CX(i)); + /* Inform AF about VF reset */ + vf_flr_send_msg(dev, vf); + + /* Signal FLR finish */ + otx2_write64(BIT_ULL(vf), bar2 + RVU_PF_VFTRPENDX(i)); + /* Enable interrupt */ + otx2_write64(~0ull, + bar2 + RVU_PF_VFFLR_INT_ENA_W1SX(i)); + } + } +} + +static int +vf_flr_unregister_irqs(struct rte_pci_device *pci_dev, struct otx2_dev *dev) +{ + struct rte_intr_handle *intr_handle = &pci_dev->intr_handle; + int i; + + otx2_base_dbg("Unregister VF FLR interrupts for %s", pci_dev->name); + + /* HW clear irq */ + for (i = 0; i < MAX_VFPF_DWORD_BITS; i++) + otx2_write64(~0ull, dev->bar2 + RVU_PF_VFFLR_INT_ENA_W1CX(i)); + + otx2_unregister_irq(intr_handle, otx2_pf_vf_flr_irq, dev, + RVU_PF_INT_VEC_VFFLR0); + + otx2_unregister_irq(intr_handle, otx2_pf_vf_flr_irq, dev, + RVU_PF_INT_VEC_VFFLR1); + + return 0; +} + +static int +vf_flr_register_irqs(struct rte_pci_device *pci_dev, struct otx2_dev *dev) +{ + struct rte_intr_handle *handle = &pci_dev->intr_handle; + int i, rc; + + otx2_base_dbg("Register VF FLR interrupts for %s", pci_dev->name); + + rc = otx2_register_irq(handle, otx2_pf_vf_flr_irq, dev, + RVU_PF_INT_VEC_VFFLR0); + if (rc) + otx2_err("Failed to init RVU_PF_INT_VEC_VFFLR0 rc=%d", rc); + + rc = otx2_register_irq(handle, otx2_pf_vf_flr_irq, dev, + RVU_PF_INT_VEC_VFFLR1); + if (rc) + 
otx2_err("Failed to init RVU_PF_INT_VEC_VFFLR1 rc=%d", rc); + + /* Enable HW interrupt */ + for (i = 0; i < MAX_VFPF_DWORD_BITS; ++i) { + otx2_write64(~0ull, dev->bar2 + RVU_PF_VFFLR_INTX(i)); + otx2_write64(~0ull, dev->bar2 + RVU_PF_VFTRPENDX(i)); + otx2_write64(~0ull, dev->bar2 + RVU_PF_VFFLR_INT_ENA_W1SX(i)); + } + return 0; +} + +/** + * @internal + * Get number of active VFs for the given PF device. + */ +int +otx2_dev_active_vfs(void *otx2_dev) +{ + struct otx2_dev *dev = otx2_dev; + int i, count = 0; + + for (i = 0; i < MAX_VFPF_DWORD_BITS; i++) + count += __builtin_popcount(dev->active_vfs[i]); + + return count; +} + static void otx2_update_pass_hwcap(struct rte_pci_device *pci_dev, struct otx2_dev *dev) { @@ -818,6 +990,12 @@ otx2_dev_init(struct rte_pci_device *pci_dev, void *otx2_dev) goto mbox_fini; } + /* Register VF-FLR irq handlers */ + if (otx2_dev_is_pf(dev)) { + rc = vf_flr_register_irqs(pci_dev, dev); + if (rc) + goto iounmap; + } dev->mbox_active = 1; return rc; @@ -851,6 +1029,8 @@ otx2_dev_fini(struct rte_pci_device *pci_dev, void *otx2_dev) mbox_unregister_irq(pci_dev, dev); + if (otx2_dev_is_pf(dev)) + vf_flr_unregister_irqs(pci_dev, dev); /* Release PF - VF */ mbox = &dev->mbox_vfpf; if (mbox->hwbase && mbox->dev) diff --git a/drivers/common/octeontx2/rte_common_octeontx2_version.map b/drivers/common/octeontx2/rte_common_octeontx2_version.map index efcf0cb55..2f4826311 100644 --- a/drivers/common/octeontx2/rte_common_octeontx2_version.map +++ b/drivers/common/octeontx2/rte_common_octeontx2_version.map @@ -1,6 +1,7 @@ DPDK_19.08 { global: + otx2_dev_active_vfs; otx2_dev_fini; otx2_dev_init; From patchwork Mon Jun 17 15:55:25 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Jerin Jacob Kollanukkaran X-Patchwork-Id: 54867 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 76E111BF38; Mon, 17 Jun 2019 17:57:06 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by dpdk.org (Postfix) with ESMTP id 8D0561BF86 for ; Mon, 17 Jun 2019 17:56:42 +0200 (CEST) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.16.0.27/8.16.0.27) with SMTP id x5HFprCT001049; Mon, 17 Jun 2019 08:56:41 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-type : content-transfer-encoding; s=pfpt0818; bh=A6OSruuho1QZ/jyjFSIBWYCOZMBdtVVL2g3TEPjgN3Q=; b=BF0ITzlZPCJb9hPzRFoRYfC+1+gAxcuYqmDwAAOiQJsBUPnKChKRd9wlniJqKFB8VoDB WEWuzwKIw6qzTixp/7lbndrKTb7GE/ZpGhG4LOawrTewgsn8Is9INos7aRMLSVMB9ajA ju7dBVMPrQwgZMWuaNPYrsgpkG7Sktezeb4klfBMDm+mGdHY6V2A1I+mMAeNZz5kLyg/ fHWxoybXzo3EBCyYRJ1isOYr5UMHVp+cB9PGfjGAIBgBvHyw7UWUpnmCmEb0lyaLW5YB kdc8ht21w22bRrDJNxWN+etbQAMfW5964pbdJXJbecNPRb2sSQK5aGpvGO1X1el4jOxD LQ== Received: from sc-exch01.marvell.com ([199.233.58.181]) by mx0b-0016f401.pphosted.com with ESMTP id 2t506hyaxv-7 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Mon, 17 Jun 2019 08:56:38 -0700 Received: from SC-EXCH01.marvell.com (10.93.176.81) by SC-EXCH01.marvell.com (10.93.176.81) with Microsoft SMTP Server (TLS) id 15.0.1367.3; Mon, 17 Jun 2019 08:56:35 -0700 Received: from maili.marvell.com (10.93.176.43) by SC-EXCH01.marvell.com (10.93.176.81) 
with Microsoft SMTP Server id 15.0.1367.3 via Frontend Transport; Mon, 17 Jun 2019 08:56:35 -0700 Received: from jerin-lab.marvell.com (jerin-lab.marvell.com [10.28.34.14]) by maili.marvell.com (Postfix) with ESMTP id A731F3F703F; Mon, 17 Jun 2019 08:56:32 -0700 (PDT) From: To: , Jerin Jacob , Nithin Dabilpuram , Vamsi Attunuru , "John McNamara" , Marko Kovacevic CC: Pavan Nikhilesh , Shally Verma , Vivek Sharma Date: Mon, 17 Jun 2019 21:25:25 +0530 Message-ID: <20190617155537.36144-16-jerinj@marvell.com> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20190617155537.36144-1-jerinj@marvell.com> References: <20190601014905.45531-1-jerinj@marvell.com> <20190617155537.36144-1-jerinj@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:, , definitions=2019-06-17_07:, , signatures=0 Subject: [dpdk-dev] [PATCH v3 15/27] doc: add Marvell OCTEON TX2 platform guide X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Jerin Jacob Platform specific guide for Marvell OCTEON TX2 SoC is added. Cc: John McNamara Signed-off-by: Jerin Jacob Signed-off-by: Pavan Nikhilesh Signed-off-by: Shally Verma Signed-off-by: Vivek Sharma --- .../octeontx2_packet_flow_hw_accelerators.svg | 2804 +++++++++++++++++ .../img/octeontx2_resource_virtualization.svg | 2418 ++++++++++++++ doc/guides/platform/index.rst | 1 + doc/guides/platform/octeontx2.rst | 494 +++ 4 files changed, 5717 insertions(+) create mode 100644 doc/guides/platform/img/octeontx2_packet_flow_hw_accelerators.svg create mode 100644 doc/guides/platform/img/octeontx2_resource_virtualization.svg create mode 100644 doc/guides/platform/octeontx2.rst diff --git a/doc/guides/platform/img/octeontx2_packet_flow_hw_accelerators.svg b/doc/guides/platform/img/octeontx2_packet_flow_hw_accelerators.svg new file mode 100644 index 000000000..ecd575947 --- /dev/null +++ b/doc/guides/platform/img/octeontx2_packet_flow_hw_accelerators.svg @@ -0,0 +1,2804 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + image/svg+xml + + + + + + + + + + + + DDDpk + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Tx Rx + HW loop back device + + + + + + + + + + + + + + + + + Ethdev Ports (NIX) + Ingress Classification(NPC) + Egress Classification(NPC) + Rx Queues + Tx Queues + EgressTraffic Manager(NIX) + Scheduler SSO + Supports both poll mode and/or event modeby configuring scheduler + ARMv8Cores + Hardware Libraries + Software Libraries + Mempool(NPA) + Timer(TIM) + Crypto(CPT) + Compress(ZIP) + SharedMemory + SW Ring + HASHLPMACL + Mbuf + De(Frag) + + diff --git a/doc/guides/platform/img/octeontx2_resource_virtualization.svg 
b/doc/guides/platform/img/octeontx2_resource_virtualization.svg new file mode 100644 index 000000000..bf976b52a --- /dev/null +++ b/doc/guides/platform/img/octeontx2_resource_virtualization.svg @@ -0,0 +1,2418 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + image/svg+xml + + + + + + +   + + + + + + + + + + NIX AF + NPA AF + SSO AF + NPC AF + CPT AF + RVU AF + Linux AF driver(octeontx2_af)PF0 + + + CGX-0 + + + + CGX-1 + + + + + CGX-2 + + CGX-FW Interface + + + + + + + + + AF-PF MBOX + Linux Netdev PFdriver(octeontx2_pf)PFx + + NIX LF + + NPA LF + + + PF-VF MBOX + CGX-x LMAC-y + + + + + + + + Linux Netdev VFdriver(octeontx2_vf)PFx-VF0 + + NIX LF + + NPA LF + DPDK Ethdev VFdriverPFx-VF1 + + NIX LF + + NPA LF + + + DPDK Ethdev PFdriverPFy + + NIX LF + + NPA LF + PF-VF MBOX + + DPDK Eventdev PFdriverPFz + + TIM LF + + SSO LF + Linux Crypto PFdriverPFm + + NIX LF + + NPA LF + DPDK Ethdev VFdriverPFy-VF0 + + CPT LF + DPDK Crypto VFdriverPFm-VF0 + PF-VF MBOX + + DDDpk DPDK-APP1 with one ethdev over Linux PF + + DPDK-APP2 with Two ethdevs(PF,VF) ,eventdev, timer adapter and cryptodev + + + + + CGX-x LMAC-y + + diff --git a/doc/guides/platform/index.rst b/doc/guides/platform/index.rst index a17de2efb..f454ef877 100644 --- a/doc/guides/platform/index.rst +++ b/doc/guides/platform/index.rst @@ -14,3 +14,4 @@ The following are platform specific guides and setup information. dpaa dpaa2 octeontx + octeontx2 diff --git a/doc/guides/platform/octeontx2.rst b/doc/guides/platform/octeontx2.rst new file mode 100644 index 000000000..3a5e03050 --- /dev/null +++ b/doc/guides/platform/octeontx2.rst @@ -0,0 +1,494 @@ +.. SPDX-License-Identifier: BSD-3-Clause + Copyright(c) 2019 Marvell International Ltd. + +Marvell OCTEON TX2 Platform Guide +================================= + +This document gives an overview of **Marvell OCTEON TX2** RVU H/W block, +packet flow and procedure to build DPDK on OCTEON TX2 platform. + +More information about OCTEON TX2 SoC can be found at `Marvell Official Website +`_. + +Supported OCTEON TX2 SoCs +------------------------- + +- CN96xx +- CN93xx + +OCTEON TX2 Resource Virtualization Unit architecture +---------------------------------------------------- + +The :numref:`figure_octeontx2_resource_virtualization` diagram depicts the +RVU architecture and a resource provisioning example. + +.. _figure_octeontx2_resource_virtualization: + +.. figure:: img/octeontx2_resource_virtualization.* + + OCTEON TX2 Resource virtualization architecture and provisioning example + + +Resource Virtualization Unit (RVU) on Marvell's OCTEON TX2 SoC maps HW +resources belonging to the network, crypto and other functional blocks onto +PCI-compatible physical and virtual functions. + +Each functional block has multiple local functions (LFs) for +provisioning to different PCIe devices. RVU supports multiple PCIe SRIOV +physical functions (PFs) and virtual functions (VFs). 
+ +The :numref:`table_octeontx2_rvu_dpdk_mapping` shows the various local +functions (LFs) provided by the RVU and its functional mapping to +DPDK subsystem. + +.. _table_octeontx2_rvu_dpdk_mapping: + +.. table:: RVU managed functional blocks and its mapping to DPDK subsystem + + +---+-----+--------------------------------------------------------------+ + | # | LF | DPDK subsystem mapping | + +===+=====+==============================================================+ + | 1 | NIX | rte_ethdev, rte_tm, rte_event_eth_[rt]x_adapter, rte_security| + +---+-----+--------------------------------------------------------------+ + | 2 | NPA | rte_mempool | + +---+-----+--------------------------------------------------------------+ + | 3 | NPC | rte_flow | + +---+-----+--------------------------------------------------------------+ + | 4 | CPT | rte_cryptodev, rte_event_crypto_adapter | + +---+-----+--------------------------------------------------------------+ + | 5 | SSO | rte_eventdev | + +---+-----+--------------------------------------------------------------+ + | 6 | TIM | rte_event_timer_adapter | + +---+-----+--------------------------------------------------------------+ + +PF0 is called the administrative / admin function (AF) and has exclusive +privileges to provision RVU functional block's LFs to each of the PF/VF. + +PF/VFs communicates with AF via a shared memory region (mailbox).Upon receiving +requests from PF/VF, AF does resource provisioning and other HW configuration. + +AF is always attached to host, but PF/VFs may be used by host kernel itself, +or attached to VMs or to userspace applications like DPDK, etc. So, AF has to +handle provisioning/configuration requests sent by any device from any domain. + +The AF driver does not receive or process any data. +It is only a configuration driver used in control path. + +The :numref:`figure_octeontx2_resource_virtualization` diagram also shows a +resource provisioning example where, + +1. PFx and PFx-VF0 bound to Linux netdev driver. +2. PFx-VF1 ethdev driver bound to the first DPDK application. +3. PFy ethdev driver, PFy-VF0 ethdev driver, PFz eventdev driver, PFm-VF0 cryptodev driver bound to the second DPDK application. + +OCTEON TX2 packet flow +---------------------- + +The :numref:`figure_octeontx2_packet_flow_hw_accelerators` diagram depicts +the packet flow on OCTEON TX2 SoC in conjunction with use of various HW accelerators. + +.. _figure_octeontx2_packet_flow_hw_accelerators: + +.. figure:: img/octeontx2_packet_flow_hw_accelerators.* + + OCTEON TX2 packet flow in conjunction with use of HW accelerators + +HW Offload Drivers +------------------ + +This section lists dataplane H/W block(s) available in OCTEON TX2 SoC. + + +Procedure to Setup Platform +--------------------------- + +There are three main prerequisites for setting up DPDK on OCTEON TX2 +compatible board: + +1. **OCTEON TX2 Linux kernel driver** + + The dependent kernel drivers can be obtained from the + `kernel.org `_. + + Alternatively, the Marvell SDK also provides the required kernel drivers. + + Linux kernel should be configured with the following features enabled: + +.. 
code-block:: console + + # 64K pages enabled for better performance + CONFIG_ARM64_64K_PAGES=y + CONFIG_ARM64_VA_BITS_48=y + # huge pages support enabled + CONFIG_HUGETLBFS=y + CONFIG_HUGETLB_PAGE=y + # VFIO enabled with TYPE1 IOMMU at minimum + CONFIG_VFIO_IOMMU_TYPE1=y + CONFIG_VFIO_VIRQFD=y + CONFIG_VFIO=y + CONFIG_VFIO_NOIOMMU=y + CONFIG_VFIO_PCI=y + CONFIG_VFIO_PCI_MMAP=y + # SMMUv3 driver + CONFIG_ARM_SMMU_V3=y + # ARMv8.1 LSE atomics + CONFIG_ARM64_LSE_ATOMICS=y + # OCTEONTX2 drivers + CONFIG_OCTEONTX2_MBOX=y + CONFIG_OCTEONTX2_AF=y + # Enable if netdev PF driver required + CONFIG_OCTEONTX2_PF=y + # Enable if netdev VF driver required + CONFIG_OCTEONTX2_VF=y + CONFIG_CRYPTO_DEV_OCTEONTX2_CPT=y + +2. **ARM64 Linux Tool Chain** + + For example, the *aarch64* Linaro Toolchain, which can be obtained from + `here `_. + + Alternatively, the Marvell SDK also provides GNU GCC toolchain, which is + optimized for OCTEON TX2 CPU. + +3. **Rootfile system** + + Any *aarch64* supporting filesystem may be used. For example, + Ubuntu 15.10 (Wily) or 16.04 LTS (Xenial) userland which can be obtained + from ``_. + + Alternatively, the Marvell SDK provides the buildroot based root filesystem. + The SDK includes all the above prerequisites necessary to bring up the OCTEON TX2 board. + +- Follow the DPDK :doc:`../linux_gsg/index` to setup the basic DPDK environment. + + +Debugging Options +----------------- + +.. _table_octeontx2_common_debug_options: + +.. table:: OCTEON TX2 common debug options + + +---+------------+-------------------------------------------------------+ + | # | Component | EAL log command | + +===+============+=======================================================+ + | 1 | Common | --log-level='pmd\.octeontx2\.base,8' | + +---+------------+-------------------------------------------------------+ + | 2 | Mailbox | --log-level='pmd\.octeontx2\.mbox,8' | + +---+------------+-------------------------------------------------------+ + +Debugfs support +~~~~~~~~~~~~~~~ + +The **OCTEON TX2 Linux kernel driver** provides support to dump RVU blocks +context or stats using debugfs. + +Enable ``debugfs`` by: + +1. Compile kernel with debugfs enabled, i.e ``CONFIG_DEBUGFS=y``. +2. Boot OCTEON TX2 with debugfs supported kernel. +3. Verify ``debugfs`` mounted by default "mount | grep -i debugfs" or mount it manually by using. + +.. code-block:: console + + # mount -t debugfs none /sys/kernel/debug + +Currently ``debugfs`` supports the following RVU blocks NIX, NPA, NPC, NDC, +SSO & CGX. + +The file structure under ``/sys/kernel/debug`` is as follows + +.. code-block:: console + + octeontx2/ + ├── cgx + │ ├── cgx0 + │ │ └── lmac0 + │ │ └── stats + │ ├── cgx1 + │ │ ├── lmac0 + │ │ │ └── stats + │ │ └── lmac1 + │ │ └── stats + │ └── cgx2 + │ └── lmac0 + │ └── stats + ├── cpt + │ ├── cpt_engines_info + │ ├── cpt_engines_sts + │ ├── cpt_err_info + │ ├── cpt_lfs_info + │ └── cpt_pc + ├──── nix + │ ├── cq_ctx + │ ├── ndc_rx_cache + │ ├── ndc_rx_hits_miss + │ ├── ndc_tx_cache + │ ├── ndc_tx_hits_miss + │ ├── qsize + │ ├── rq_ctx + │ ├── sq_ctx + │ └── tx_stall_hwissue + ├── npa + │ ├── aura_ctx + │ ├── ndc_cache + │ ├── ndc_hits_miss + │ ├── pool_ctx + │ └── qsize + ├── npc + │ ├── mcam_info + │ └── rx_miss_act_stats + ├── rsrc_alloc + └── sso + ├── hws + │ └── sso_hws_info + └── hwgrp + ├── sso_hwgrp_aq_thresh + ├── sso_hwgrp_iaq_walk + ├── sso_hwgrp_pc + ├── sso_hwgrp_free_list_walk + ├── sso_hwgrp_ient_walk + └── sso_hwgrp_taq_walk + +RVU block LF allocation: + +.. 
code-block:: console + + cat /sys/kernel/debug/octeontx2/rsrc_alloc + + pcifunc NPA NIX SSO GROUP SSOWS TIM CPT + PF1 0 0 + PF4 1 + PF13 0, 1 0, 1 0 + +CGX example usage: + +.. code-block:: console + + cat /sys/kernel/debug/octeontx2/cgx/cgx2/lmac0/stats + + =======Link Status====== + Link is UP 40000 Mbps + =======RX_STATS====== + Received packets: 0 + Octets of received packets: 0 + Received PAUSE packets: 0 + Received PAUSE and control packets: 0 + Filtered DMAC0 (NIX-bound) packets: 0 + Filtered DMAC0 (NIX-bound) octets: 0 + Packets dropped due to RX FIFO full: 0 + Octets dropped due to RX FIFO full: 0 + Error packets: 0 + Filtered DMAC1 (NCSI-bound) packets: 0 + Filtered DMAC1 (NCSI-bound) octets: 0 + NCSI-bound packets dropped: 0 + NCSI-bound octets dropped: 0 + =======TX_STATS====== + Packets dropped due to excessive collisions: 0 + Packets dropped due to excessive deferral: 0 + Multiple collisions before successful transmission: 0 + Single collisions before successful transmission: 0 + Total octets sent on the interface: 0 + Total frames sent on the interface: 0 + Packets sent with an octet count < 64: 0 + Packets sent with an octet count == 64: 0 + Packets sent with an octet count of 65127: 0 + Packets sent with an octet count of 128-255: 0 + Packets sent with an octet count of 256-511: 0 + Packets sent with an octet count of 512-1023: 0 + Packets sent with an octet count of 1024-1518: 0 + Packets sent with an octet count of > 1518: 0 + Packets sent to a broadcast DMAC: 0 + Packets sent to the multicast DMAC: 0 + Transmit underflow and were truncated: 0 + Control/PAUSE packets sent: 0 + +CPT example usage: + +.. code-block:: console + + cat /sys/kernel/debug/octeontx2/cpt/cpt_pc + + CPT instruction requests 0 + CPT instruction latency 0 + CPT NCB read requests 0 + CPT NCB read latency 0 + CPT read requests caused by UC fills 0 + CPT active cycles pc 1395642 + CPT clock count pc 5579867595493 + +NIX example usage: + +.. code-block:: console + + Usage: echo [cq number/all] > /sys/kernel/debug/octeontx2/nix/cq_ctx + cat /sys/kernel/debug/octeontx2/nix/cq_ctx + echo 0 0 > /sys/kernel/debug/octeontx2/nix/cq_ctx + cat /sys/kernel/debug/octeontx2/nix/cq_ctx + + =====cq_ctx for nixlf:0 and qidx:0 is===== + W0: base 158ef1a00 + + W1: wrptr 0 + W1: avg_con 0 + W1: cint_idx 0 + W1: cq_err 0 + W1: qint_idx 0 + W1: bpid 0 + W1: bp_ena 0 + + W2: update_time 31043 + W2:avg_level 255 + W2: head 0 + W2:tail 0 + + W3: cq_err_int_ena 5 + W3:cq_err_int 0 + W3: qsize 4 + W3:caching 1 + W3: substream 0x000 + W3: ena 1 + W3: drop_ena 1 + W3: drop 64 + W3: bp 0 + +NPA example usage: + +.. code-block:: console + + Usage: echo [pool number/all] > /sys/kernel/debug/octeontx2/npa/pool_ctx + cat /sys/kernel/debug/octeontx2/npa/pool_ctx + echo 0 0 > /sys/kernel/debug/octeontx2/npa/pool_ctx + cat /sys/kernel/debug/octeontx2/npa/pool_ctx + + ======POOL : 0======= + W0: Stack base 1375bff00 + W1: ena 1 + W1: nat_align 1 + W1: stack_caching 1 + W1: stack_way_mask 0 + W1: buf_offset 1 + W1: buf_size 19 + W2: stack_max_pages 24315 + W2: stack_pages 24314 + W3: op_pc 267456 + W4: stack_offset 2 + W4: shift 5 + W4: avg_level 255 + W4: avg_con 0 + W4: fc_ena 0 + W4: fc_stype 0 + W4: fc_hyst_bits 0 + W4: fc_up_crossing 0 + W4: update_time 62993 + W5: fc_addr 0 + W6: ptr_start 1593adf00 + W7: ptr_end 180000000 + W8: err_int 0 + W8: err_int_ena 7 + W8: thresh_int 0 + W8: thresh_int_ena 0 + W8: thresh_up 0 + W8: thresh_qint_idx 0 + W8: err_qint_idx 0 + +NPC example usage: + +.. 
code-block:: console + + cat /sys/kernel/debug/octeontx2/npc/mcam_info + + NPC MCAM info: + RX keywidth : 224bits + TX keywidth : 224bits + + MCAM entries : 2048 + Reserved : 158 + Available : 1890 + + MCAM counters : 512 + Reserved : 1 + Available : 511 + +SSO example usage: + +.. code-block:: console + + Usage: echo [/all] > /sys/kernel/debug/octeontx2/sso/hws/sso_hws_info + echo 0 > /sys/kernel/debug/octeontx2/sso/hws/sso_hws_info + + ================================================== + SSOW HWS[0] Arbitration State 0x0 + SSOW HWS[0] Guest Machine Control 0x0 + SSOW HWS[0] SET[0] Group Mask[0] 0xffffffffffffffff + SSOW HWS[0] SET[0] Group Mask[1] 0xffffffffffffffff + SSOW HWS[0] SET[0] Group Mask[2] 0xffffffffffffffff + SSOW HWS[0] SET[0] Group Mask[3] 0xffffffffffffffff + SSOW HWS[0] SET[1] Group Mask[0] 0xffffffffffffffff + SSOW HWS[0] SET[1] Group Mask[1] 0xffffffffffffffff + SSOW HWS[0] SET[1] Group Mask[2] 0xffffffffffffffff + SSOW HWS[0] SET[1] Group Mask[3] 0xffffffffffffffff + ================================================== + +Compile DPDK +------------ + +DPDK may be compiled either natively on OCTEON TX2 platform or cross-compiled on +an x86 based platform. + +Native Compilation +~~~~~~~~~~~~~~~~~~ + +make build +^^^^^^^^^^ + +.. code-block:: console + + make config T=arm64-octeontx2-linux-gcc + make -j + +The example applications can be compiled using the following: + +.. code-block:: console + + cd + export RTE_SDK=$PWD + export RTE_TARGET=build + cd examples/ + make -j + +meson build +^^^^^^^^^^^ + +.. code-block:: console + + meson build + ninja -C build + +Cross Compilation +~~~~~~~~~~~~~~~~~ + +Refer to :doc:`../linux_gsg/cross_build_dpdk_for_arm64` for generic arm64 details. + +make build +^^^^^^^^^^ + +.. code-block:: console + + make config T=arm64-octeontx2-linux-gcc + make -j CROSS=aarch64-marvell-linux-gnu- CONFIG_RTE_KNI_KMOD=n + +meson build +^^^^^^^^^^^ + +.. code-block:: console + + meson build --cross-file config/arm/arm64_octeontx2_linux_gcc + ninja -C build + +.. note:: + + By default, meson cross compilation uses ``aarch64-linux-gnu-gcc`` toolchain, + if Marvell toolchain is available then it can be used by overriding the + c, cpp, ar, strip ``binaries`` attributes to respective Marvell + toolchain binaries in ``config/arm/arm64_octeontx2_linux_gcc`` file. 
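For illustration, such an override in the cross file would look roughly as follows; the binary names here are only an example and assume the ``aarch64-marvell-linux-gnu-`` prefix used by the make cross build above, so adjust them to the toolchain actually installed:

    [binaries]
    c = 'aarch64-marvell-linux-gnu-gcc'
    cpp = 'aarch64-marvell-linux-gnu-g++'
    ar = 'aarch64-marvell-linux-gnu-ar'
    strip = 'aarch64-marvell-linux-gnu-strip'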
From patchwork Mon Jun 17 15:55:26 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jerin Jacob Kollanukkaran X-Patchwork-Id: 54865 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 086361BFA7; Mon, 17 Jun 2019 17:56:57 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by dpdk.org (Postfix) with ESMTP id B0FB61BF87 for ; Mon, 17 Jun 2019 17:56:42 +0200 (CEST) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.16.0.27/8.16.0.27) with SMTP id x5HFppJJ001017; Mon, 17 Jun 2019 08:56:41 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0818; bh=6eif+LSadzQRiC7skh0A7MIvEyoXi28fzG+4BWLsRh8=; b=P11jTIca/PFs1m5oXaCKR+QjYdHVg2eliIK1nUKPolZfDkuMJGLoVpmag+fTSnlEfmSs qrCtQMOBWIRmtRuT1aiA/IhNfjLcQtJOM7rHsibC8fjAtTdFUoBRptSqVAMU5ZIChUxF fsGljz4vefZDiLGO/UcfQEj6IzZhqsCbMoKF/xOyvD/FYeArXgq5d3ETZeMhgn8nNp9o Zv2fsE9o5zKW4rY80mmHCKPRGn7DLl8u+yhHbdCxrwnfzE19zWjm6bcwGi3fhSrj/jsu zMRcCkrtNY93U3WD/LgHyqP1YFszg0ybzsgraCFShUd0QR/p/3c25V2qcXPf3MgPbKS3 xg== Received: from sc-exch02.marvell.com ([199.233.58.182]) by mx0b-0016f401.pphosted.com with ESMTP id 2t506hyb0s-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Mon, 17 Jun 2019 08:56:41 -0700 Received: from SC-EXCH01.marvell.com (10.93.176.81) by SC-EXCH02.marvell.com (10.93.176.82) with Microsoft SMTP Server (TLS) id 15.0.1367.3; Mon, 17 Jun 2019 08:56:39 -0700 Received: from maili.marvell.com (10.93.176.43) by SC-EXCH01.marvell.com (10.93.176.81) with Microsoft SMTP Server id 15.0.1367.3 via Frontend Transport; Mon, 17 Jun 2019 08:56:39 -0700 Received: from jerin-lab.marvell.com (jerin-lab.marvell.com [10.28.34.14]) by maili.marvell.com (Postfix) with ESMTP id 134543F7043; Mon, 17 Jun 2019 08:56:36 -0700 (PDT) From: To: , Thomas Monjalon , Olivier Matz , Andrew Rybchenko , "Jerin Jacob" , Nithin Dabilpuram , Vamsi Attunuru , Anatoly Burakov CC: Pavan Nikhilesh Date: Mon, 17 Jun 2019 21:25:26 +0530 Message-ID: <20190617155537.36144-17-jerinj@marvell.com> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20190617155537.36144-1-jerinj@marvell.com> References: <20190601014905.45531-1-jerinj@marvell.com> <20190617155537.36144-1-jerinj@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:, , definitions=2019-06-17_07:, , signatures=0 Subject: [dpdk-dev] [PATCH v3 16/27] mempool/octeontx2: add build infra and device probe X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Jerin Jacob Add the make and meson based build infrastructure along with the mempool(NPA) device probe. 
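At this stage the PMD is only a probe skeleton: it claims the RVU NPA PF/VF PCI IDs and registers with the PCI bus, and the actual NPA LF setup arrives in subsequent patches. A condensed sketch of the otx2_mempool.c added in the diff below:

static int
npa_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_dev)
{
	RTE_SET_USED(pci_drv);
	RTE_SET_USED(pci_dev);

	/* Resource setup is a primary-process job; nothing to do yet */
	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
		return 0;
	return 0;
}

static int
npa_remove(struct rte_pci_device *pci_dev)
{
	RTE_SET_USED(pci_dev);
	return 0;
}

static const struct rte_pci_id pci_npa_map[] = {
	{ RTE_PCI_DEVICE(PCI_VENDOR_ID_CAVIUM, PCI_DEVID_OCTEONTX2_RVU_NPA_PF) },
	{ RTE_PCI_DEVICE(PCI_VENDOR_ID_CAVIUM, PCI_DEVID_OCTEONTX2_RVU_NPA_VF) },
	{ .vendor_id = 0, },
};

static struct rte_pci_driver pci_npa = {
	.id_table = pci_npa_map,
	.drv_flags = RTE_PCI_DRV_NEED_MAPPING | RTE_PCI_DRV_IOVA_AS_VA,
	.probe = npa_probe,
	.remove = npa_remove,
};

RTE_PMD_REGISTER_PCI(mempool_octeontx2, pci_npa);
RTE_PMD_REGISTER_PCI_TABLE(mempool_octeontx2, pci_npa_map);
RTE_PMD_REGISTER_KMOD_DEP(mempool_octeontx2, "vfio-pci");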
Signed-off-by: Jerin Jacob Signed-off-by: Pavan Nikhilesh --- config/common_base | 5 ++ drivers/common/Makefile | 3 + drivers/mempool/Makefile | 1 + drivers/mempool/meson.build | 2 +- drivers/mempool/octeontx2/Makefile | 36 ++++++++++++ drivers/mempool/octeontx2/meson.build | 20 +++++++ drivers/mempool/octeontx2/otx2_mempool.c | 57 +++++++++++++++++++ .../rte_mempool_octeontx2_version.map | 4 ++ mk/rte.app.mk | 4 ++ 9 files changed, 131 insertions(+), 1 deletion(-) create mode 100644 drivers/mempool/octeontx2/Makefile create mode 100644 drivers/mempool/octeontx2/meson.build create mode 100644 drivers/mempool/octeontx2/otx2_mempool.c create mode 100644 drivers/mempool/octeontx2/rte_mempool_octeontx2_version.map diff --git a/config/common_base b/config/common_base index e406e7836..05ef27dbf 100644 --- a/config/common_base +++ b/config/common_base @@ -776,6 +776,11 @@ CONFIG_RTE_DRIVER_MEMPOOL_STACK=y # CONFIG_RTE_LIBRTE_OCTEONTX_MEMPOOL=y +# +# Compile PMD for octeontx2 npa mempool device +# +CONFIG_RTE_LIBRTE_OCTEONTX2_MEMPOOL=y + # # Compile librte_mbuf # diff --git a/drivers/common/Makefile b/drivers/common/Makefile index e7abe210e..05d75568f 100644 --- a/drivers/common/Makefile +++ b/drivers/common/Makefile @@ -23,6 +23,9 @@ ifeq ($(CONFIG_RTE_LIBRTE_COMMON_DPAAX),y) DIRS-y += dpaax endif +OCTEONTX2-y := $(CONFIG_RTE_LIBRTE_OCTEONTX2_MEMPOOL) +ifeq ($(findstring y,$(OCTEONTX2-y)),y) DIRS-y += octeontx2 +endif include $(RTE_SDK)/mk/rte.subdir.mk diff --git a/drivers/mempool/Makefile b/drivers/mempool/Makefile index 28c2e8360..29ef73bf4 100644 --- a/drivers/mempool/Makefile +++ b/drivers/mempool/Makefile @@ -13,5 +13,6 @@ endif DIRS-$(CONFIG_RTE_DRIVER_MEMPOOL_RING) += ring DIRS-$(CONFIG_RTE_DRIVER_MEMPOOL_STACK) += stack DIRS-$(CONFIG_RTE_LIBRTE_OCTEONTX_MEMPOOL) += octeontx +DIRS-$(CONFIG_RTE_LIBRTE_OCTEONTX2_MEMPOOL) += octeontx2 include $(RTE_SDK)/mk/rte.subdir.mk diff --git a/drivers/mempool/meson.build b/drivers/mempool/meson.build index 4527d9806..7520e489f 100644 --- a/drivers/mempool/meson.build +++ b/drivers/mempool/meson.build @@ -1,7 +1,7 @@ # SPDX-License-Identifier: BSD-3-Clause # Copyright(c) 2017 Intel Corporation -drivers = ['bucket', 'dpaa', 'dpaa2', 'octeontx', 'ring', 'stack'] +drivers = ['bucket', 'dpaa', 'dpaa2', 'octeontx', 'octeontx2', 'ring', 'stack'] std_deps = ['mempool'] config_flag_fmt = 'RTE_LIBRTE_@0@_MEMPOOL' driver_name_fmt = 'rte_mempool_@0@' diff --git a/drivers/mempool/octeontx2/Makefile b/drivers/mempool/octeontx2/Makefile new file mode 100644 index 000000000..6fbb6e291 --- /dev/null +++ b/drivers/mempool/octeontx2/Makefile @@ -0,0 +1,36 @@ +# SPDX-License-Identifier: BSD-3-Clause +# Copyright(C) 2019 Marvell International Ltd. 
+# + +include $(RTE_SDK)/mk/rte.vars.mk + +# +# library name +# +LIB = librte_mempool_octeontx2.a + +CFLAGS += $(WERROR_FLAGS) +CFLAGS += -I$(RTE_SDK)/drivers/common/octeontx2 +CFLAGS += -I$(RTE_SDK)/drivers/mempool/octeontx2 +CFLAGS += -I$(RTE_SDK)/drivers/bus/pci +CFLAGS += -O3 + +ifneq ($(CONFIG_RTE_ARCH_64),y) +CFLAGS += -Wno-int-to-pointer-cast +CFLAGS += -Wno-pointer-to-int-cast +endif + +EXPORT_MAP := rte_mempool_octeontx2_version.map + +LIBABIVER := 1 + +# +# all source are stored in SRCS-y +# +SRCS-$(CONFIG_RTE_LIBRTE_OCTEONTX2_MEMPOOL) += \ + otx2_mempool.c + +LDLIBS += -lrte_eal -lrte_mempool -lrte_mbuf +LDLIBS += -lrte_common_octeontx2 -lrte_kvargs -lrte_bus_pci + +include $(RTE_SDK)/mk/rte.lib.mk diff --git a/drivers/mempool/octeontx2/meson.build b/drivers/mempool/octeontx2/meson.build new file mode 100644 index 000000000..ec3c59eef --- /dev/null +++ b/drivers/mempool/octeontx2/meson.build @@ -0,0 +1,20 @@ +# SPDX-License-Identifier: BSD-3-Clause +# Copyright(C) 2019 Marvell International Ltd. +# + +sources = files('otx2_mempool.c', + ) + +extra_flags = [] +# This integrated controller runs only on a arm64 machine, remove 32bit warnings +if not dpdk_conf.get('RTE_ARCH_64') + extra_flags += ['-Wno-int-to-pointer-cast', '-Wno-pointer-to-int-cast'] +endif + +foreach flag: extra_flags + if cc.has_argument(flag) + cflags += flag + endif +endforeach + +deps += ['eal', 'mbuf', 'kvargs', 'bus_pci', 'common_octeontx2', 'mempool'] diff --git a/drivers/mempool/octeontx2/otx2_mempool.c b/drivers/mempool/octeontx2/otx2_mempool.c new file mode 100644 index 000000000..fd8e147f5 --- /dev/null +++ b/drivers/mempool/octeontx2/otx2_mempool.c @@ -0,0 +1,57 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2019 Marvell International Ltd. + */ + +#include +#include +#include +#include + +#include "otx2_common.h" + +static int +npa_remove(struct rte_pci_device *pci_dev) +{ + if (rte_eal_process_type() != RTE_PROC_PRIMARY) + return 0; + + RTE_SET_USED(pci_dev); + return 0; +} + +static int +npa_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_dev) +{ + RTE_SET_USED(pci_drv); + + if (rte_eal_process_type() != RTE_PROC_PRIMARY) + return 0; + + RTE_SET_USED(pci_dev); + return 0; +} + +static const struct rte_pci_id pci_npa_map[] = { + { + RTE_PCI_DEVICE(PCI_VENDOR_ID_CAVIUM, + PCI_DEVID_OCTEONTX2_RVU_NPA_PF) + }, + { + RTE_PCI_DEVICE(PCI_VENDOR_ID_CAVIUM, + PCI_DEVID_OCTEONTX2_RVU_NPA_VF) + }, + { + .vendor_id = 0, + }, +}; + +static struct rte_pci_driver pci_npa = { + .id_table = pci_npa_map, + .drv_flags = RTE_PCI_DRV_NEED_MAPPING | RTE_PCI_DRV_IOVA_AS_VA, + .probe = npa_probe, + .remove = npa_remove, +}; + +RTE_PMD_REGISTER_PCI(mempool_octeontx2, pci_npa); +RTE_PMD_REGISTER_PCI_TABLE(mempool_octeontx2, pci_npa_map); +RTE_PMD_REGISTER_KMOD_DEP(mempool_octeontx2, "vfio-pci"); diff --git a/drivers/mempool/octeontx2/rte_mempool_octeontx2_version.map b/drivers/mempool/octeontx2/rte_mempool_octeontx2_version.map new file mode 100644 index 000000000..9a61188cd --- /dev/null +++ b/drivers/mempool/octeontx2/rte_mempool_octeontx2_version.map @@ -0,0 +1,4 @@ +DPDK_19.08 { + + local: *; +}; diff --git a/mk/rte.app.mk b/mk/rte.app.mk index 1640e138a..6eb5e1b4f 100644 --- a/mk/rte.app.mk +++ b/mk/rte.app.mk @@ -88,6 +88,7 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_TIMER) += -lrte_timer _LDLIBS-$(CONFIG_RTE_LIBRTE_MEMPOOL) += -lrte_mempool _LDLIBS-$(CONFIG_RTE_LIBRTE_STACK) += -lrte_stack _LDLIBS-$(CONFIG_RTE_DRIVER_MEMPOOL_RING) += -lrte_mempool_ring +_LDLIBS-$(CONFIG_RTE_LIBRTE_OCTEONTX2_MEMPOOL) 
+= -lrte_mempool_octeontx2 _LDLIBS-$(CONFIG_RTE_LIBRTE_RING) += -lrte_ring _LDLIBS-$(CONFIG_RTE_LIBRTE_PCI) += -lrte_pci _LDLIBS-$(CONFIG_RTE_LIBRTE_EAL) += -lrte_eal @@ -122,7 +123,10 @@ ifeq ($(CONFIG_RTE_LIBRTE_FSLMC_BUS),y) _LDLIBS-$(CONFIG_RTE_LIBRTE_COMMON_DPAAX) += -lrte_common_dpaax endif +OCTEONTX2-y := $(CONFIG_RTE_LIBRTE_OCTEONTX2_MEMPOOL) +ifeq ($(findstring y,$(OCTEONTX2-y)),y) _LDLIBS-y += -lrte_common_octeontx2 +endif _LDLIBS-$(CONFIG_RTE_LIBRTE_PCI_BUS) += -lrte_bus_pci _LDLIBS-$(CONFIG_RTE_LIBRTE_VDEV_BUS) += -lrte_bus_vdev From patchwork Mon Jun 17 15:55:27 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jerin Jacob Kollanukkaran X-Patchwork-Id: 54866 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 5ABE71BFB2; Mon, 17 Jun 2019 17:57:09 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0a-0016f401.pphosted.com [67.231.148.174]) by dpdk.org (Postfix) with ESMTP id 8849F1BEEC for ; Mon, 17 Jun 2019 17:56:45 +0200 (CEST) Received: from pps.filterd (m0045849.ppops.net [127.0.0.1]) by mx0a-0016f401.pphosted.com (8.16.0.27/8.16.0.27) with SMTP id x5HForA2000541 for ; Mon, 17 Jun 2019 08:56:44 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0818; bh=iBEiBmRISVXNHPIaWIAZkTkAtZbJ2RJUE1hQY8UshjU=; b=ar3fl4s+zt9ZzcM4cwYwldRrZdXcSyv6lHhCrJraSXNhsD2SneWGax7c44J/vUiW5M7T kQ3lLFOeDUNzP/Q0lDP5fCz/LkeNZSRYZrdNiDxbQSSmRUhiZaQU++piobBYI5y1rkAx FaM18prb4a3YQOiLWmz64YvE4boI5iVYNPUnBwMyXQEF+svbmnup1YcHcQFpdTmHMPRN PLjQTwZUUGg4JqxjFKyEXoh1Y50uM7LYB8QAt91NgnK+3tKVQlUJBkuwE6BL/oH2TBdK 8vBQ/B1MOurDV6vsUhe2JTAvkqmb1qesYcjM2WTuvCUXYFSUY5cX66Bm3Dp9s+fLy4ez mA== Received: from sc-exch03.marvell.com ([199.233.58.183]) by mx0a-0016f401.pphosted.com with ESMTP id 2t68rp9bg2-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT) for ; Mon, 17 Jun 2019 08:56:44 -0700 Received: from SC-EXCH01.marvell.com (10.93.176.81) by SC-EXCH03.marvell.com (10.93.176.83) with Microsoft SMTP Server (TLS) id 15.0.1367.3; Mon, 17 Jun 2019 08:56:42 -0700 Received: from maili.marvell.com (10.93.176.43) by SC-EXCH01.marvell.com (10.93.176.81) with Microsoft SMTP Server id 15.0.1367.3 via Frontend Transport; Mon, 17 Jun 2019 08:56:42 -0700 Received: from jerin-lab.marvell.com (jerin-lab.marvell.com [10.28.34.14]) by maili.marvell.com (Postfix) with ESMTP id DA3913F7041; Mon, 17 Jun 2019 08:56:40 -0700 (PDT) From: To: , Jerin Jacob , Nithin Dabilpuram , Vamsi Attunuru Date: Mon, 17 Jun 2019 21:25:27 +0530 Message-ID: <20190617155537.36144-18-jerinj@marvell.com> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20190617155537.36144-1-jerinj@marvell.com> References: <20190601014905.45531-1-jerinj@marvell.com> <20190617155537.36144-1-jerinj@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:, , definitions=2019-06-17_07:, , signatures=0 Subject: [dpdk-dev] [PATCH v3 17/27] drivers: add init and fini on octeontx2 NPA object X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Jerin Jacob NPA object needs 
to initialize memory for queue interrupts context, pool resource management, etc. This patch adds support for initializing and finalizing the NPA object. This patch also updates the otx2_npa_lf definition to meet the init/fini requirements. Signed-off-by: Jerin Jacob Signed-off-by: Vamsi Attunuru --- drivers/common/octeontx2/Makefile | 1 + drivers/common/octeontx2/meson.build | 2 +- drivers/common/octeontx2/otx2_common.h | 7 +- drivers/common/octeontx2/otx2_dev.h | 1 + drivers/mempool/octeontx2/otx2_mempool.c | 344 +++++++++++++++++- drivers/mempool/octeontx2/otx2_mempool.h | 55 +++ .../rte_mempool_octeontx2_version.map | 4 + 7 files changed, 403 insertions(+), 11 deletions(-) create mode 100644 drivers/mempool/octeontx2/otx2_mempool.h diff --git a/drivers/common/octeontx2/Makefile b/drivers/common/octeontx2/Makefile index 78243e555..fabc32537 100644 --- a/drivers/common/octeontx2/Makefile +++ b/drivers/common/octeontx2/Makefile @@ -11,6 +11,7 @@ LIB = librte_common_octeontx2.a CFLAGS += $(WERROR_FLAGS) CFLAGS += -I$(RTE_SDK)/drivers/common/octeontx2 +CFLAGS += -I$(RTE_SDK)/drivers/mempool/octeontx2 CFLAGS += -I$(RTE_SDK)/drivers/bus/pci ifneq ($(CONFIG_RTE_ARCH_64),y) diff --git a/drivers/common/octeontx2/meson.build b/drivers/common/octeontx2/meson.build index 44ac90085..b79145788 100644 --- a/drivers/common/octeontx2/meson.build +++ b/drivers/common/octeontx2/meson.build @@ -22,4 +22,4 @@ endforeach deps = ['eal', 'pci', 'ethdev'] includes += include_directories('../../common/octeontx2', - '../../bus/pci') + '../../mempool/octeontx2', '../../bus/pci') diff --git a/drivers/common/octeontx2/otx2_common.h b/drivers/common/octeontx2/otx2_common.h index cbc5c65a7..cdb25d9ed 100644 --- a/drivers/common/octeontx2/otx2_common.h +++ b/drivers/common/octeontx2/otx2_common.h @@ -37,12 +37,7 @@ #endif /* Intra device related functions */ -struct otx2_npa_lf { - struct otx2_mbox *mbox; - struct rte_pci_device *pci_dev; - struct rte_intr_handle *intr_handle; -}; - +struct otx2_npa_lf; struct otx2_idev_cfg { uint16_t sso_pf_func; uint16_t npa_pf_func; diff --git a/drivers/common/octeontx2/otx2_dev.h b/drivers/common/octeontx2/otx2_dev.h index 8fa5f32d2..be862ad1b 100644 --- a/drivers/common/octeontx2/otx2_dev.h +++ b/drivers/common/octeontx2/otx2_dev.h @@ -10,6 +10,7 @@ #include "otx2_common.h" #include "otx2_irq.h" #include "otx2_mbox.h" +#include "otx2_mempool.h" /* Common HWCAP flags. Use from LSB bits */ #define OTX2_HWCAP_F_VF BIT_ULL(0) /* VF device */ diff --git a/drivers/mempool/octeontx2/otx2_mempool.c b/drivers/mempool/octeontx2/otx2_mempool.c index fd8e147f5..fa74b7532 100644 --- a/drivers/mempool/octeontx2/otx2_mempool.c +++ b/drivers/mempool/octeontx2/otx2_mempool.c @@ -2,12 +2,350 @@ * Copyright(C) 2019 Marvell International Ltd. 
*/ +#include #include #include #include +#include +#include +#include #include #include "otx2_common.h" +#include "otx2_dev.h" +#include "otx2_mempool.h" + +#define OTX2_NPA_DEV_NAME RTE_STR(otx2_npa_dev_) +#define OTX2_NPA_DEV_NAME_LEN (sizeof(OTX2_NPA_DEV_NAME) + PCI_PRI_STR_SIZE) + +static inline int +npa_lf_alloc(struct otx2_npa_lf *lf) +{ + struct otx2_mbox *mbox = lf->mbox; + struct npa_lf_alloc_req *req; + struct npa_lf_alloc_rsp *rsp; + int rc; + + req = otx2_mbox_alloc_msg_npa_lf_alloc(mbox); + req->aura_sz = lf->aura_sz; + req->nr_pools = lf->nr_pools; + + rc = otx2_mbox_process_msg(mbox, (void *)&rsp); + if (rc) + return NPA_LF_ERR_ALLOC; + + lf->stack_pg_ptrs = rsp->stack_pg_ptrs; + lf->stack_pg_bytes = rsp->stack_pg_bytes; + lf->qints = rsp->qints; + + return 0; +} + +static int +npa_lf_free(struct otx2_mbox *mbox) +{ + otx2_mbox_alloc_msg_npa_lf_free(mbox); + + return otx2_mbox_process(mbox); +} + +static int +npa_lf_init(struct otx2_npa_lf *lf, uintptr_t base, uint8_t aura_sz, + uint32_t nr_pools, struct otx2_mbox *mbox) +{ + uint32_t i, bmp_sz; + int rc; + + /* Sanity checks */ + if (!lf || !base || !mbox || !nr_pools) + return NPA_LF_ERR_PARAM; + + if (base & AURA_ID_MASK) + return NPA_LF_ERR_BASE_INVALID; + + if (aura_sz == NPA_AURA_SZ_0 || aura_sz >= NPA_AURA_SZ_MAX) + return NPA_LF_ERR_PARAM; + + memset(lf, 0x0, sizeof(*lf)); + lf->base = base; + lf->aura_sz = aura_sz; + lf->nr_pools = nr_pools; + lf->mbox = mbox; + + rc = npa_lf_alloc(lf); + if (rc) + goto exit; + + bmp_sz = rte_bitmap_get_memory_footprint(nr_pools); + + /* Allocate memory for bitmap */ + lf->npa_bmp_mem = rte_zmalloc("npa_bmp_mem", bmp_sz, + RTE_CACHE_LINE_SIZE); + if (lf->npa_bmp_mem == NULL) { + rc = -ENOMEM; + goto lf_free; + } + + /* Initialize pool resource bitmap array */ + lf->npa_bmp = rte_bitmap_init(nr_pools, lf->npa_bmp_mem, bmp_sz); + if (lf->npa_bmp == NULL) { + rc = -EINVAL; + goto bmap_mem_free; + } + + /* Mark all pools available */ + for (i = 0; i < nr_pools; i++) + rte_bitmap_set(lf->npa_bmp, i); + + /* Allocate memory for qint context */ + lf->npa_qint_mem = rte_zmalloc("npa_qint_mem", + sizeof(struct otx2_npa_qint) * nr_pools, 0); + if (lf->npa_qint_mem == NULL) { + rc = -ENOMEM; + goto bmap_free; + } + + return 0; + +bmap_free: + rte_bitmap_free(lf->npa_bmp); +bmap_mem_free: + rte_free(lf->npa_bmp_mem); +lf_free: + npa_lf_free(lf->mbox); +exit: + return rc; +} + +static int +npa_lf_fini(struct otx2_npa_lf *lf) +{ + if (!lf) + return NPA_LF_ERR_PARAM; + + rte_free(lf->npa_qint_mem); + rte_bitmap_free(lf->npa_bmp); + rte_free(lf->npa_bmp_mem); + + return npa_lf_free(lf->mbox); + +} + +static inline uint32_t +otx2_aura_size_to_u32(uint8_t val) +{ + if (val == NPA_AURA_SZ_0) + return 128; + if (val >= NPA_AURA_SZ_MAX) + return BIT_ULL(20); + + return 1 << (val + 6); +} + +static inline int +npa_lf_attach(struct otx2_mbox *mbox) +{ + struct rsrc_attach_req *req; + + req = otx2_mbox_alloc_msg_attach_resources(mbox); + req->npalf = true; + + return otx2_mbox_process(mbox); +} + +static inline int +npa_lf_detach(struct otx2_mbox *mbox) +{ + struct rsrc_detach_req *req; + + req = otx2_mbox_alloc_msg_detach_resources(mbox); + req->npalf = true; + + return otx2_mbox_process(mbox); +} + +static inline int +npa_lf_get_msix_offset(struct otx2_mbox *mbox, uint16_t *npa_msixoff) +{ + struct msix_offset_rsp *msix_rsp; + int rc; + + /* Get NPA and NIX MSIX vector offsets */ + otx2_mbox_alloc_msg_msix_offset(mbox); + + rc = otx2_mbox_process_msg(mbox, (void *)&msix_rsp); + + *npa_msixoff = 
msix_rsp->npa_msixoff; + + return rc; +} + +/** + * @internal + * Finalize NPA LF. + */ +int +otx2_npa_lf_fini(void) +{ + struct otx2_idev_cfg *idev; + int rc = 0; + + idev = otx2_intra_dev_get_cfg(); + if (idev == NULL) + return -ENOMEM; + + if (rte_atomic16_add_return(&idev->npa_refcnt, -1) == 0) { + rc |= npa_lf_fini(idev->npa_lf); + rc |= npa_lf_detach(idev->npa_lf->mbox); + otx2_npa_set_defaults(idev); + } + + return rc; +} + +/** + * @internal + * Initialize NPA LF. + */ +int +otx2_npa_lf_init(struct rte_pci_device *pci_dev, void *otx2_dev) +{ + struct otx2_dev *dev = otx2_dev; + struct otx2_idev_cfg *idev; + struct otx2_npa_lf *lf; + uint16_t npa_msixoff; + uint32_t nr_pools; + uint8_t aura_sz; + int rc; + + idev = otx2_intra_dev_get_cfg(); + if (idev == NULL) + return -ENOMEM; + + /* Is NPA LF initialized by any another driver? */ + if (rte_atomic16_add_return(&idev->npa_refcnt, 1) == 1) { + + rc = npa_lf_attach(dev->mbox); + if (rc) + goto fail; + + rc = npa_lf_get_msix_offset(dev->mbox, &npa_msixoff); + if (rc) + goto npa_detach; + + aura_sz = NPA_AURA_SZ_128; + nr_pools = otx2_aura_size_to_u32(aura_sz); + + lf = &dev->npalf; + rc = npa_lf_init(lf, dev->bar2 + (RVU_BLOCK_ADDR_NPA << 20), + aura_sz, nr_pools, dev->mbox); + + if (rc) + goto npa_detach; + + lf->pf_func = dev->pf_func; + lf->npa_msixoff = npa_msixoff; + lf->intr_handle = &pci_dev->intr_handle; + lf->pci_dev = pci_dev; + + idev->npa_pf_func = dev->pf_func; + idev->npa_lf = lf; + rte_smp_wmb(); + + rte_mbuf_set_platform_mempool_ops("octeontx2_npa"); + otx2_npa_dbg("npa_lf=%p pools=%d sz=%d pf_func=0x%x msix=0x%x", + lf, nr_pools, aura_sz, lf->pf_func, npa_msixoff); + } + + return 0; + +npa_detach: + npa_lf_detach(dev->mbox); +fail: + rte_atomic16_dec(&idev->npa_refcnt); + return rc; +} + +static inline char* +otx2_npa_dev_to_name(struct rte_pci_device *pci_dev, char *name) +{ + snprintf(name, OTX2_NPA_DEV_NAME_LEN, + OTX2_NPA_DEV_NAME PCI_PRI_FMT, + pci_dev->addr.domain, pci_dev->addr.bus, + pci_dev->addr.devid, pci_dev->addr.function); + + return name; +} + +static int +otx2_npa_init(struct rte_pci_device *pci_dev) +{ + char name[OTX2_NPA_DEV_NAME_LEN]; + const struct rte_memzone *mz; + struct otx2_dev *dev; + int rc = -ENOMEM; + + mz = rte_memzone_reserve_aligned(otx2_npa_dev_to_name(pci_dev, name), + sizeof(*dev), SOCKET_ID_ANY, + 0, OTX2_ALIGN); + if (mz == NULL) + goto error; + + dev = mz->addr; + + /* Initialize the base otx2_dev object */ + rc = otx2_dev_init(pci_dev, dev); + if (rc) + goto malloc_fail; + + /* Grab the NPA LF if required */ + rc = otx2_npa_lf_init(pci_dev, dev); + if (rc) + goto dev_uninit; + + dev->drv_inited = true; + return 0; + +dev_uninit: + otx2_npa_lf_fini(); + otx2_dev_fini(pci_dev, dev); +malloc_fail: + rte_memzone_free(mz); +error: + otx2_err("Failed to initialize npa device rc=%d", rc); + return rc; +} + +static int +otx2_npa_fini(struct rte_pci_device *pci_dev) +{ + char name[OTX2_NPA_DEV_NAME_LEN]; + const struct rte_memzone *mz; + struct otx2_dev *dev; + + mz = rte_memzone_lookup(otx2_npa_dev_to_name(pci_dev, name)); + if (mz == NULL) + return -EINVAL; + + dev = mz->addr; + if (!dev->drv_inited) + goto dev_fini; + + dev->drv_inited = false; + otx2_npa_lf_fini(); + +dev_fini: + if (otx2_npa_lf_active(dev)) { + otx2_info("%s: common resource in use by other devices", + pci_dev->name); + return -EAGAIN; + } + + otx2_dev_fini(pci_dev, dev); + rte_memzone_free(mz); + + return 0; +} static int npa_remove(struct rte_pci_device *pci_dev) @@ -15,8 +353,7 @@ npa_remove(struct rte_pci_device 
*pci_dev) if (rte_eal_process_type() != RTE_PROC_PRIMARY) return 0; - RTE_SET_USED(pci_dev); - return 0; + return otx2_npa_fini(pci_dev); } static int @@ -27,8 +364,7 @@ npa_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_dev) if (rte_eal_process_type() != RTE_PROC_PRIMARY) return 0; - RTE_SET_USED(pci_dev); - return 0; + return otx2_npa_init(pci_dev); } static const struct rte_pci_id pci_npa_map[] = { diff --git a/drivers/mempool/octeontx2/otx2_mempool.h b/drivers/mempool/octeontx2/otx2_mempool.h new file mode 100644 index 000000000..e1c255c60 --- /dev/null +++ b/drivers/mempool/octeontx2/otx2_mempool.h @@ -0,0 +1,55 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2019 Marvell International Ltd. + */ + +#ifndef __OTX2_MEMPOOL_H__ +#define __OTX2_MEMPOOL_H__ + +#include +#include +#include +#include + +#include "otx2_common.h" +#include "otx2_mbox.h" + +enum npa_lf_status { + NPA_LF_ERR_PARAM = -512, + NPA_LF_ERR_ALLOC = -513, + NPA_LF_ERR_INVALID_BLOCK_SZ = -514, + NPA_LF_ERR_AURA_ID_ALLOC = -515, + NPA_LF_ERR_AURA_POOL_INIT = -516, + NPA_LF_ERR_AURA_POOL_FINI = -517, + NPA_LF_ERR_BASE_INVALID = -518, +}; + +struct otx2_npa_lf; +struct otx2_npa_qint { + struct otx2_npa_lf *lf; + uint8_t qintx; +}; + +struct otx2_npa_lf { + uint16_t qints; + uintptr_t base; + uint8_t aura_sz; + uint16_t pf_func; + uint32_t nr_pools; + void *npa_bmp_mem; + void *npa_qint_mem; + uint16_t npa_msixoff; + struct otx2_mbox *mbox; + uint32_t stack_pg_ptrs; + uint32_t stack_pg_bytes; + struct rte_bitmap *npa_bmp; + struct rte_pci_device *pci_dev; + struct rte_intr_handle *intr_handle; +}; + +#define AURA_ID_MASK (BIT_ULL(16) - 1) + +/* NPA LF */ +int otx2_npa_lf_init(struct rte_pci_device *pci_dev, void *otx2_dev); +int otx2_npa_lf_fini(void); + +#endif /* __OTX2_MEMPOOL_H__ */ diff --git a/drivers/mempool/octeontx2/rte_mempool_octeontx2_version.map b/drivers/mempool/octeontx2/rte_mempool_octeontx2_version.map index 9a61188cd..d703368c3 100644 --- a/drivers/mempool/octeontx2/rte_mempool_octeontx2_version.map +++ b/drivers/mempool/octeontx2/rte_mempool_octeontx2_version.map @@ -1,4 +1,8 @@ DPDK_19.08 { + global: + + otx2_npa_lf_init; + otx2_npa_lf_fini; local: *; }; From patchwork Mon Jun 17 15:55:28 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jerin Jacob Kollanukkaran X-Patchwork-Id: 54868 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id BA4C01BFB8; Mon, 17 Jun 2019 17:57:13 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by dpdk.org (Postfix) with ESMTP id 9394E1BF87 for ; Mon, 17 Jun 2019 17:56:47 +0200 (CEST) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.16.0.27/8.16.0.27) with SMTP id x5HFprCV001049 for ; Mon, 17 Jun 2019 08:56:47 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0818; bh=n//xOsoCq7QWzfqAT50Hxj+CLCwJmhcdLcYHfysEmmI=; b=UjPE7QnfAivlskHDQPdjzFTMCfeBmX78unpc8pEqFyy64SBhJskkfRVXgid09CTahkQv hGoXB3CKJt9xZIzromlHUxiVkUq21kGKCrNQ035DGeu9C+Zg+EIVrtshdTVJyvX5q+mb HHO4dQRY5kV22DWC1q+iult8lGSO7n+6c36l9pAVq0A7wMnlHJixShRAVL7w1+5VGl7f 
c0ztlTjOYeJ1F/rhYeP036r6qf8Vj/kBA9waid7xIk+zQgfKZabJaDjhcCcx4Pk2WckU Ydz7hPwRCP9mdtGaTbVwlTlDgGuhhGTLBIyXGRv4d+CVDiyCvVWvGhgyPie1huRMX4By /Q== Received: from sc-exch03.marvell.com ([199.233.58.183]) by mx0b-0016f401.pphosted.com with ESMTP id 2t506hyb1a-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT) for ; Mon, 17 Jun 2019 08:56:46 -0700 Received: from SC-EXCH03.marvell.com (10.93.176.83) by SC-EXCH03.marvell.com (10.93.176.83) with Microsoft SMTP Server (TLS) id 15.0.1367.3; Mon, 17 Jun 2019 08:56:45 -0700 Received: from maili.marvell.com (10.93.176.43) by SC-EXCH03.marvell.com (10.93.176.83) with Microsoft SMTP Server id 15.0.1367.3 via Frontend Transport; Mon, 17 Jun 2019 08:56:45 -0700 Received: from jerin-lab.marvell.com (jerin-lab.marvell.com [10.28.34.14]) by maili.marvell.com (Postfix) with ESMTP id 904E53F7040; Mon, 17 Jun 2019 08:56:43 -0700 (PDT) From: To: , Jerin Jacob , Nithin Dabilpuram , Vamsi Attunuru CC: Kiran Kumar K Date: Mon, 17 Jun 2019 21:25:28 +0530 Message-ID: <20190617155537.36144-19-jerinj@marvell.com> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20190617155537.36144-1-jerinj@marvell.com> References: <20190601014905.45531-1-jerinj@marvell.com> <20190617155537.36144-1-jerinj@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:, , definitions=2019-06-17_07:, , signatures=0 Subject: [dpdk-dev] [PATCH v3 18/27] mempool/octeontx2: add NPA HW operations X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Jerin Jacob Implement the low-level NPA HW operations such as alloc, free memory, etc. Signed-off-by: Jerin Jacob Signed-off-by: Kiran Kumar K --- drivers/mempool/octeontx2/otx2_mempool.h | 146 +++++++++++++++++++++++ 1 file changed, 146 insertions(+) diff --git a/drivers/mempool/octeontx2/otx2_mempool.h b/drivers/mempool/octeontx2/otx2_mempool.h index e1c255c60..871b45870 100644 --- a/drivers/mempool/octeontx2/otx2_mempool.h +++ b/drivers/mempool/octeontx2/otx2_mempool.h @@ -48,6 +48,152 @@ struct otx2_npa_lf { #define AURA_ID_MASK (BIT_ULL(16) - 1) +/* + * Generate 64bit handle to have optimized alloc and free aura operation. + * 0 - AURA_ID_MASK for storing the aura_id. + * AURA_ID_MASK+1 - (2^64 - 1) for storing the lf base address. + * This scheme is valid when OS can give AURA_ID_MASK + * aligned address for lf base address. 
+ */ +static inline uint64_t +npa_lf_aura_handle_gen(uint32_t aura_id, uintptr_t addr) +{ + uint64_t val; + + val = aura_id & AURA_ID_MASK; + return (uint64_t)addr | val; +} + +static inline uint64_t +npa_lf_aura_handle_to_aura(uint64_t aura_handle) +{ + return aura_handle & AURA_ID_MASK; +} + +static inline uintptr_t +npa_lf_aura_handle_to_base(uint64_t aura_handle) +{ + return (uintptr_t)(aura_handle & ~AURA_ID_MASK); +} + +static inline uint64_t +npa_lf_aura_op_alloc(uint64_t aura_handle, const int drop) +{ + uint64_t wdata = npa_lf_aura_handle_to_aura(aura_handle); + + if (drop) + wdata |= BIT_ULL(63); /* DROP */ + + return otx2_atomic64_add_nosync(wdata, + (int64_t *)(npa_lf_aura_handle_to_base(aura_handle) + + NPA_LF_AURA_OP_ALLOCX(0))); +} + +static inline void +npa_lf_aura_op_free(uint64_t aura_handle, const int fabs, uint64_t iova) +{ + uint64_t reg = npa_lf_aura_handle_to_aura(aura_handle); + + if (fabs) + reg |= BIT_ULL(63); /* FABS */ + + otx2_store_pair(iova, reg, + npa_lf_aura_handle_to_base(aura_handle) + NPA_LF_AURA_OP_FREE0); +} + +static inline uint64_t +npa_lf_aura_op_cnt_get(uint64_t aura_handle) +{ + uint64_t wdata; + uint64_t reg; + + wdata = npa_lf_aura_handle_to_aura(aura_handle) << 44; + + reg = otx2_atomic64_add_nosync(wdata, + (int64_t *)(npa_lf_aura_handle_to_base(aura_handle) + + NPA_LF_AURA_OP_CNT)); + + if (reg & BIT_ULL(42) /* OP_ERR */) + return 0; + else + return reg & 0xFFFFFFFFF; +} + +static inline void +npa_lf_aura_op_cnt_set(uint64_t aura_handle, const int sign, uint64_t count) +{ + uint64_t reg = count & (BIT_ULL(36) - 1); + + if (sign) + reg |= BIT_ULL(43); /* CNT_ADD */ + + reg |= (npa_lf_aura_handle_to_aura(aura_handle) << 44); + + otx2_write64(reg, + npa_lf_aura_handle_to_base(aura_handle) + NPA_LF_AURA_OP_CNT); +} + +static inline uint64_t +npa_lf_aura_op_limit_get(uint64_t aura_handle) +{ + uint64_t wdata; + uint64_t reg; + + wdata = npa_lf_aura_handle_to_aura(aura_handle) << 44; + + reg = otx2_atomic64_add_nosync(wdata, + (int64_t *)(npa_lf_aura_handle_to_base(aura_handle) + + NPA_LF_AURA_OP_LIMIT)); + + if (reg & BIT_ULL(42) /* OP_ERR */) + return 0; + else + return reg & 0xFFFFFFFFF; +} + +static inline void +npa_lf_aura_op_limit_set(uint64_t aura_handle, uint64_t limit) +{ + uint64_t reg = limit & (BIT_ULL(36) - 1); + + reg |= (npa_lf_aura_handle_to_aura(aura_handle) << 44); + + otx2_write64(reg, + npa_lf_aura_handle_to_base(aura_handle) + NPA_LF_AURA_OP_LIMIT); +} + +static inline uint64_t +npa_lf_aura_op_available(uint64_t aura_handle) +{ + uint64_t wdata; + uint64_t reg; + + wdata = npa_lf_aura_handle_to_aura(aura_handle) << 44; + + reg = otx2_atomic64_add_nosync(wdata, + (int64_t *)(npa_lf_aura_handle_to_base( + aura_handle) + NPA_LF_POOL_OP_AVAILABLE)); + + if (reg & BIT_ULL(42) /* OP_ERR */) + return 0; + else + return reg & 0xFFFFFFFFF; +} + +static inline void +npa_lf_aura_op_range_set(uint64_t aura_handle, uint64_t start_iova, + uint64_t end_iova) +{ + uint64_t reg = npa_lf_aura_handle_to_aura(aura_handle); + + otx2_store_pair(start_iova, reg, + npa_lf_aura_handle_to_base(aura_handle) + + NPA_LF_POOL_OP_PTR_START0); + otx2_store_pair(end_iova, reg, + npa_lf_aura_handle_to_base(aura_handle) + + NPA_LF_POOL_OP_PTR_END0); +} + /* NPA LF */ int otx2_npa_lf_init(struct rte_pci_device *pci_dev, void *otx2_dev); int otx2_npa_lf_fini(void); From patchwork Mon Jun 17 15:55:29 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jerin Jacob Kollanukkaran X-Patchwork-Id: 54869 
X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 4D9331BFBE; Mon, 17 Jun 2019 17:57:16 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0a-0016f401.pphosted.com [67.231.148.174]) by dpdk.org (Postfix) with ESMTP id 605DB1BF78 for ; Mon, 17 Jun 2019 17:56:50 +0200 (CEST) Received: from pps.filterd (m0045849.ppops.net [127.0.0.1]) by mx0a-0016f401.pphosted.com (8.16.0.27/8.16.0.27) with SMTP id x5HFp7W8000626 for ; Mon, 17 Jun 2019 08:56:49 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0818; bh=7IQkoq1UwXw3J83NrWFZ/QWyvC0Kl5bkEKEfhbF5Ass=; b=AloAMiX213wksHL9T1co5PIMv1Qr1JijSp25sADfIVe2KjvGiyCzQVB/OngbeyLlDBz0 uCQnLyP0Tv79PGMTpnpczMD/KlR1Kb6pfHggc4aI2avGtplB9X6Fm41DRLgj+Q6j7iYN ZPLq0xNB6BGYb9qLO/NETdryyTjmMVyUgZnh/RSRNYz3dIQSh+TyNE5kzJlRA37l4Bib 5o5VNUlyEM5Q4GdWSDLtHbWnzjY9KAPORJ9NAAjbPflbrBFfOR/3H0pkUTcYeInbFCUJ 3nbILZInnzsvoB61KiXrdejtnA6yDSjEzzAzrgtPFg3BFFX90aN+fDBRJi78C54vQmH/ 2g== Received: from sc-exch01.marvell.com ([199.233.58.181]) by mx0a-0016f401.pphosted.com with ESMTP id 2t68rp9bgg-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT) for ; Mon, 17 Jun 2019 08:56:49 -0700 Received: from SC-EXCH01.marvell.com (10.93.176.81) by SC-EXCH01.marvell.com (10.93.176.81) with Microsoft SMTP Server (TLS) id 15.0.1367.3; Mon, 17 Jun 2019 08:56:48 -0700 Received: from maili.marvell.com (10.93.176.43) by SC-EXCH01.marvell.com (10.93.176.81) with Microsoft SMTP Server id 15.0.1367.3 via Frontend Transport; Mon, 17 Jun 2019 08:56:48 -0700 Received: from jerin-lab.marvell.com (jerin-lab.marvell.com [10.28.34.14]) by maili.marvell.com (Postfix) with ESMTP id 655C63F703F; Mon, 17 Jun 2019 08:56:46 -0700 (PDT) From: To: , Jerin Jacob , Nithin Dabilpuram , Vamsi Attunuru CC: Harman Kalra Date: Mon, 17 Jun 2019 21:25:29 +0530 Message-ID: <20190617155537.36144-20-jerinj@marvell.com> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20190617155537.36144-1-jerinj@marvell.com> References: <20190601014905.45531-1-jerinj@marvell.com> <20190617155537.36144-1-jerinj@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:, , definitions=2019-06-17_07:, , signatures=0 Subject: [dpdk-dev] [PATCH v3 19/27] mempool/octeontx2: add NPA IRQ handler X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Jerin Jacob Register and implement NPA IRQ handler for RAS and all type of error interrupts to get the fatal errors from HW. 
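One register convention worth spelling out before the diff: the *_ENA_W1S / *_ENA_W1C pairs used below are write-1-to-set / write-1-to-clear interrupt-enable registers, so writing ~0 to the W1S offset unmasks every error source and writing ~0 to the W1C offset masks them all again. A small sketch of that idiom, using a hypothetical helper name that is not part of the patch:

/* Hypothetical helper -- not in the patch -- showing the W1S/W1C idiom. */
static inline void
npa_lf_err_irq_enable(struct otx2_npa_lf *lf, int enable)
{
	if (enable) /* unmask all LF error interrupts */
		otx2_write64(~0ull, lf->base + NPA_LF_ERR_INT_ENA_W1S);
	else        /* mask all LF error interrupts */
		otx2_write64(~0ull, lf->base + NPA_LF_ERR_INT_ENA_W1C);
}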
Signed-off-by: Jerin Jacob Signed-off-by: Harman Kalra --- drivers/mempool/octeontx2/Makefile | 3 +- drivers/mempool/octeontx2/meson.build | 1 + drivers/mempool/octeontx2/otx2_mempool.c | 6 + drivers/mempool/octeontx2/otx2_mempool.h | 4 + drivers/mempool/octeontx2/otx2_mempool_irq.c | 302 +++++++++++++++++++ 5 files changed, 315 insertions(+), 1 deletion(-) create mode 100644 drivers/mempool/octeontx2/otx2_mempool_irq.c diff --git a/drivers/mempool/octeontx2/Makefile b/drivers/mempool/octeontx2/Makefile index 6fbb6e291..86950b270 100644 --- a/drivers/mempool/octeontx2/Makefile +++ b/drivers/mempool/octeontx2/Makefile @@ -28,7 +28,8 @@ LIBABIVER := 1 # all source are stored in SRCS-y # SRCS-$(CONFIG_RTE_LIBRTE_OCTEONTX2_MEMPOOL) += \ - otx2_mempool.c + otx2_mempool.c \ + otx2_mempool_irq.c LDLIBS += -lrte_eal -lrte_mempool -lrte_mbuf LDLIBS += -lrte_common_octeontx2 -lrte_kvargs -lrte_bus_pci diff --git a/drivers/mempool/octeontx2/meson.build b/drivers/mempool/octeontx2/meson.build index ec3c59eef..3f93b509d 100644 --- a/drivers/mempool/octeontx2/meson.build +++ b/drivers/mempool/octeontx2/meson.build @@ -3,6 +3,7 @@ # sources = files('otx2_mempool.c', + 'otx2_mempool_irq.c', ) extra_flags = [] diff --git a/drivers/mempool/octeontx2/otx2_mempool.c b/drivers/mempool/octeontx2/otx2_mempool.c index fa74b7532..1bcb86cf4 100644 --- a/drivers/mempool/octeontx2/otx2_mempool.c +++ b/drivers/mempool/octeontx2/otx2_mempool.c @@ -195,6 +195,7 @@ otx2_npa_lf_fini(void) return -ENOMEM; if (rte_atomic16_add_return(&idev->npa_refcnt, -1) == 0) { + otx2_npa_unregister_irqs(idev->npa_lf); rc |= npa_lf_fini(idev->npa_lf); rc |= npa_lf_detach(idev->npa_lf->mbox); otx2_npa_set_defaults(idev); @@ -251,6 +252,9 @@ otx2_npa_lf_init(struct rte_pci_device *pci_dev, void *otx2_dev) idev->npa_pf_func = dev->pf_func; idev->npa_lf = lf; rte_smp_wmb(); + rc = otx2_npa_register_irqs(lf); + if (rc) + goto npa_fini; rte_mbuf_set_platform_mempool_ops("octeontx2_npa"); otx2_npa_dbg("npa_lf=%p pools=%d sz=%d pf_func=0x%x msix=0x%x", @@ -259,6 +263,8 @@ otx2_npa_lf_init(struct rte_pci_device *pci_dev, void *otx2_dev) return 0; +npa_fini: + npa_lf_fini(idev->npa_lf); npa_detach: npa_lf_detach(dev->mbox); fail: diff --git a/drivers/mempool/octeontx2/otx2_mempool.h b/drivers/mempool/octeontx2/otx2_mempool.h index 871b45870..41542cf89 100644 --- a/drivers/mempool/octeontx2/otx2_mempool.h +++ b/drivers/mempool/octeontx2/otx2_mempool.h @@ -198,4 +198,8 @@ npa_lf_aura_op_range_set(uint64_t aura_handle, uint64_t start_iova, int otx2_npa_lf_init(struct rte_pci_device *pci_dev, void *otx2_dev); int otx2_npa_lf_fini(void); +/* IRQ */ +int otx2_npa_register_irqs(struct otx2_npa_lf *lf); +void otx2_npa_unregister_irqs(struct otx2_npa_lf *lf); + #endif /* __OTX2_MEMPOOL_H__ */ diff --git a/drivers/mempool/octeontx2/otx2_mempool_irq.c b/drivers/mempool/octeontx2/otx2_mempool_irq.c new file mode 100644 index 000000000..c026e1eea --- /dev/null +++ b/drivers/mempool/octeontx2/otx2_mempool_irq.c @@ -0,0 +1,302 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2019 Marvell International Ltd. 
+ */ + +#include + +#include +#include + +#include "otx2_common.h" +#include "otx2_irq.h" +#include "otx2_mempool.h" + +static void +npa_lf_err_irq(void *param) +{ + struct otx2_npa_lf *lf = (struct otx2_npa_lf *)param; + uint64_t intr; + + intr = otx2_read64(lf->base + NPA_LF_ERR_INT); + if (intr == 0) + return; + + otx2_err("Err_intr=0x%" PRIx64 "", intr); + + /* Clear interrupt */ + otx2_write64(intr, lf->base + NPA_LF_ERR_INT); +} + +static int +npa_lf_register_err_irq(struct otx2_npa_lf *lf) +{ + struct rte_intr_handle *handle = lf->intr_handle; + int rc, vec; + + vec = lf->npa_msixoff + NPA_LF_INT_VEC_ERR_INT; + + /* Clear err interrupt */ + otx2_write64(~0ull, lf->base + NPA_LF_ERR_INT_ENA_W1C); + /* Register err interrupt vector */ + rc = otx2_register_irq(handle, npa_lf_err_irq, lf, vec); + + /* Enable hw interrupt */ + otx2_write64(~0ull, lf->base + NPA_LF_ERR_INT_ENA_W1S); + + return rc; +} + +static void +npa_lf_unregister_err_irq(struct otx2_npa_lf *lf) +{ + struct rte_intr_handle *handle = lf->intr_handle; + int vec; + + vec = lf->npa_msixoff + NPA_LF_INT_VEC_ERR_INT; + + /* Clear err interrupt */ + otx2_write64(~0ull, lf->base + NPA_LF_ERR_INT_ENA_W1C); + otx2_unregister_irq(handle, npa_lf_err_irq, lf, vec); +} + +static void +npa_lf_ras_irq(void *param) +{ + struct otx2_npa_lf *lf = (struct otx2_npa_lf *)param; + uint64_t intr; + + intr = otx2_read64(lf->base + NPA_LF_RAS); + if (intr == 0) + return; + + otx2_err("Ras_intr=0x%" PRIx64 "", intr); + + /* Clear interrupt */ + otx2_write64(intr, lf->base + NPA_LF_RAS); +} + +static int +npa_lf_register_ras_irq(struct otx2_npa_lf *lf) +{ + struct rte_intr_handle *handle = lf->intr_handle; + int rc, vec; + + vec = lf->npa_msixoff + NPA_LF_INT_VEC_POISON; + + /* Clear err interrupt */ + otx2_write64(~0ull, lf->base + NPA_LF_RAS_ENA_W1C); + /* Set used interrupt vectors */ + rc = otx2_register_irq(handle, npa_lf_ras_irq, lf, vec); + /* Enable hw interrupt */ + otx2_write64(~0ull, lf->base + NPA_LF_RAS_ENA_W1S); + + return rc; +} + +static void +npa_lf_unregister_ras_irq(struct otx2_npa_lf *lf) +{ + int vec; + struct rte_intr_handle *handle = lf->intr_handle; + + vec = lf->npa_msixoff + NPA_LF_INT_VEC_POISON; + + /* Clear err interrupt */ + otx2_write64(~0ull, lf->base + NPA_LF_RAS_ENA_W1C); + otx2_unregister_irq(handle, npa_lf_ras_irq, lf, vec); +} + +static inline uint8_t +npa_lf_q_irq_get_and_clear(struct otx2_npa_lf *lf, uint32_t q, + uint32_t off, uint64_t mask) +{ + uint64_t reg, wdata; + uint8_t qint; + + wdata = (uint64_t)q << 44; + reg = otx2_atomic64_add_nosync(wdata, (int64_t *)(lf->base + off)); + + if (reg & BIT_ULL(42) /* OP_ERR */) { + otx2_err("Failed execute irq get off=0x%x", off); + return 0; + } + + qint = reg & 0xff; + wdata &= mask; + otx2_write64(wdata, lf->base + off); + + return qint; +} + +static inline uint8_t +npa_lf_pool_irq_get_and_clear(struct otx2_npa_lf *lf, uint32_t p) +{ + return npa_lf_q_irq_get_and_clear(lf, p, NPA_LF_POOL_OP_INT, ~0xff00); +} + +static inline uint8_t +npa_lf_aura_irq_get_and_clear(struct otx2_npa_lf *lf, uint32_t a) +{ + return npa_lf_q_irq_get_and_clear(lf, a, NPA_LF_AURA_OP_INT, ~0xff00); +} + +static void +npa_lf_q_irq(void *param) +{ + struct otx2_npa_qint *qint = (struct otx2_npa_qint *)param; + struct otx2_npa_lf *lf = qint->lf; + uint8_t irq, qintx = qint->qintx; + uint32_t q, pool, aura; + uint64_t intr; + + intr = otx2_read64(lf->base + NPA_LF_QINTX_INT(qintx)); + if (intr == 0) + return; + + otx2_err("queue_intr=0x%" PRIx64 " qintx=%d", intr, qintx); + + /* Handle pool 
queue interrupts */ + for (q = 0; q < lf->nr_pools; q++) { + /* Skip disabled POOL */ + if (rte_bitmap_get(lf->npa_bmp, q)) + continue; + + pool = q % lf->qints; + irq = npa_lf_pool_irq_get_and_clear(lf, pool); + + if (irq & BIT_ULL(NPA_POOL_ERR_INT_OVFLS)) + otx2_err("Pool=%d NPA_POOL_ERR_INT_OVFLS", pool); + + if (irq & BIT_ULL(NPA_POOL_ERR_INT_RANGE)) + otx2_err("Pool=%d NPA_POOL_ERR_INT_RANGE", pool); + + if (irq & BIT_ULL(NPA_POOL_ERR_INT_PERR)) + otx2_err("Pool=%d NPA_POOL_ERR_INT_PERR", pool); + } + + /* Handle aura queue interrupts */ + for (q = 0; q < lf->nr_pools; q++) { + + /* Skip disabled AURA */ + if (rte_bitmap_get(lf->npa_bmp, q)) + continue; + + aura = q % lf->qints; + irq = npa_lf_aura_irq_get_and_clear(lf, aura); + + if (irq & BIT_ULL(NPA_AURA_ERR_INT_AURA_ADD_OVER)) + otx2_err("Aura=%d NPA_AURA_ERR_INT_ADD_OVER", aura); + + if (irq & BIT_ULL(NPA_AURA_ERR_INT_AURA_ADD_UNDER)) + otx2_err("Aura=%d NPA_AURA_ERR_INT_ADD_UNDER", aura); + + if (irq & BIT_ULL(NPA_AURA_ERR_INT_AURA_FREE_UNDER)) + otx2_err("Aura=%d NPA_AURA_ERR_INT_FREE_UNDER", aura); + + if (irq & BIT_ULL(NPA_AURA_ERR_INT_POOL_DIS)) + otx2_err("Aura=%d NPA_AURA_ERR_POOL_DIS", aura); + } + + /* Clear interrupt */ + otx2_write64(intr, lf->base + NPA_LF_QINTX_INT(qintx)); +} + +static int +npa_lf_register_queue_irqs(struct otx2_npa_lf *lf) +{ + struct rte_intr_handle *handle = lf->intr_handle; + int vec, q, qs, rc = 0; + + /* Figure out max qintx required */ + qs = RTE_MIN(lf->qints, lf->nr_pools); + + for (q = 0; q < qs; q++) { + vec = lf->npa_msixoff + NPA_LF_INT_VEC_QINT_START + q; + + /* Clear QINT CNT */ + otx2_write64(0, lf->base + NPA_LF_QINTX_CNT(q)); + + /* Clear interrupt */ + otx2_write64(~0ull, lf->base + NPA_LF_QINTX_ENA_W1C(q)); + + struct otx2_npa_qint *qintmem = lf->npa_qint_mem; + qintmem += q; + + qintmem->lf = lf; + qintmem->qintx = q; + + /* Sync qints_mem update */ + rte_smp_wmb(); + + /* Register queue irq vector */ + rc = otx2_register_irq(handle, npa_lf_q_irq, qintmem, vec); + if (rc) + break; + + otx2_write64(0, lf->base + NPA_LF_QINTX_CNT(q)); + otx2_write64(0, lf->base + NPA_LF_QINTX_INT(q)); + /* Enable QINT interrupt */ + otx2_write64(~0ull, lf->base + NPA_LF_QINTX_ENA_W1S(q)); + } + + return rc; +} + +static void +npa_lf_unregister_queue_irqs(struct otx2_npa_lf *lf) +{ + struct rte_intr_handle *handle = lf->intr_handle; + int vec, q, qs; + + /* Figure out max qintx required */ + qs = RTE_MIN(lf->qints, lf->nr_pools); + + for (q = 0; q < qs; q++) { + vec = lf->npa_msixoff + NPA_LF_INT_VEC_QINT_START + q; + + /* Clear QINT CNT */ + otx2_write64(0, lf->base + NPA_LF_QINTX_CNT(q)); + otx2_write64(0, lf->base + NPA_LF_QINTX_INT(q)); + + /* Clear interrupt */ + otx2_write64(~0ull, lf->base + NPA_LF_QINTX_ENA_W1C(q)); + + struct otx2_npa_qint *qintmem = lf->npa_qint_mem; + qintmem += q; + + /* Unregister queue irq vector */ + otx2_unregister_irq(handle, npa_lf_q_irq, qintmem, vec); + + qintmem->lf = NULL; + qintmem->qintx = 0; + } +} + +int +otx2_npa_register_irqs(struct otx2_npa_lf *lf) +{ + int rc; + + if (lf->npa_msixoff == MSIX_VECTOR_INVALID) { + otx2_err("Invalid NPALF MSIX vector offset vector: 0x%x", + lf->npa_msixoff); + return -EINVAL; + } + + /* Register lf err interrupt */ + rc = npa_lf_register_err_irq(lf); + /* Register RAS interrupt */ + rc |= npa_lf_register_ras_irq(lf); + /* Register queue interrupts */ + rc |= npa_lf_register_queue_irqs(lf); + + return rc; +} + +void +otx2_npa_unregister_irqs(struct otx2_npa_lf *lf) +{ + npa_lf_unregister_err_irq(lf); + 
npa_lf_unregister_ras_irq(lf); + npa_lf_unregister_queue_irqs(lf); +} From patchwork Mon Jun 17 15:55:30 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jerin Jacob Kollanukkaran X-Patchwork-Id: 54870 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id AC1371BFC7; Mon, 17 Jun 2019 17:57:18 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0a-0016f401.pphosted.com [67.231.148.174]) by dpdk.org (Postfix) with ESMTP id EC6411BF33 for ; Mon, 17 Jun 2019 17:56:52 +0200 (CEST) Received: from pps.filterd (m0045849.ppops.net [127.0.0.1]) by mx0a-0016f401.pphosted.com (8.16.0.27/8.16.0.27) with SMTP id x5HFoplI000532 for ; Mon, 17 Jun 2019 08:56:52 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0818; bh=TVSNJfSRqWtKoqJ7SwR8wXoYfEDTGTRHWtRfKKi69MI=; b=HPze9k2vm5UBYellZLFHzHvEzX4YT0D/yTKCy3oYAxLQy3rau0F1c9r3i1jeZqtJZ8Au qHZmioruIsGvIB9ZfWgBqKGC21uua3ZXBtf/dkM6tqSkO9Ze4d/tT/lNbhcOHkTxIoYN RVLHspvYAtIODiZD9w9hwkC8O/Do/iNUKoRfYlbx+Ygg0h+WnWIJEKftdc9ozk/GoF63 W5i+VOe5Gdve19HF1PoHMM26LZTfeYFFT1dM7FaInsi/S0YqRVuyJl7TKl9buQG1PvPB NxfDj80Uwvj8hf6K5CJ0V0j3o9TeiDUHhWZXNrOTaD1zo6p4UirG2xRo05pvzyikiiTT cw== Received: from sc-exch03.marvell.com ([199.233.58.183]) by mx0a-0016f401.pphosted.com with ESMTP id 2t68rp9bgw-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT) for ; Mon, 17 Jun 2019 08:56:52 -0700 Received: from SC-EXCH01.marvell.com (10.93.176.81) by SC-EXCH03.marvell.com (10.93.176.83) with Microsoft SMTP Server (TLS) id 15.0.1367.3; Mon, 17 Jun 2019 08:56:50 -0700 Received: from maili.marvell.com (10.93.176.43) by SC-EXCH01.marvell.com (10.93.176.81) with Microsoft SMTP Server id 15.0.1367.3 via Frontend Transport; Mon, 17 Jun 2019 08:56:50 -0700 Received: from jerin-lab.marvell.com (jerin-lab.marvell.com [10.28.34.14]) by maili.marvell.com (Postfix) with ESMTP id 2AC293F703F; Mon, 17 Jun 2019 08:56:48 -0700 (PDT) From: To: , Jerin Jacob , Nithin Dabilpuram , Vamsi Attunuru CC: Vivek Sharma Date: Mon, 17 Jun 2019 21:25:30 +0530 Message-ID: <20190617155537.36144-21-jerinj@marvell.com> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20190617155537.36144-1-jerinj@marvell.com> References: <20190601014905.45531-1-jerinj@marvell.com> <20190617155537.36144-1-jerinj@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:, , definitions=2019-06-17_07:, , signatures=0 Subject: [dpdk-dev] [PATCH v3 20/27] mempool/octeontx2: add context dump support X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Jerin Jacob Add a helper function to dump aura and pool context for NPA debugging. 
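Usage sketch (not part of the patch): otx2_mempool_ctx_dump() walks the in-use aura/pool bitmap and prints each HW context, so it can be hooked into any debug path once the NPA LF is up. The wrapper name below is hypothetical; otx2_intra_dev_get_cfg() and idev->npa_lf come from the common code added earlier in the series.

/* Hypothetical debug hook -- not in the patch. */
static void
npa_debug_dump_all(void)
{
	struct otx2_idev_cfg *idev = otx2_intra_dev_get_cfg();

	if (idev != NULL && idev->npa_lf != NULL)
		otx2_mempool_ctx_dump(idev->npa_lf);
}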
Signed-off-by: Jerin Jacob Signed-off-by: Vivek Sharma --- drivers/mempool/octeontx2/Makefile | 3 +- drivers/mempool/octeontx2/meson.build | 1 + drivers/mempool/octeontx2/otx2_mempool.h | 3 + .../mempool/octeontx2/otx2_mempool_debug.c | 135 ++++++++++++++++++ drivers/mempool/octeontx2/otx2_mempool_irq.c | 1 + 5 files changed, 142 insertions(+), 1 deletion(-) create mode 100644 drivers/mempool/octeontx2/otx2_mempool_debug.c diff --git a/drivers/mempool/octeontx2/Makefile b/drivers/mempool/octeontx2/Makefile index 86950b270..b86d469f4 100644 --- a/drivers/mempool/octeontx2/Makefile +++ b/drivers/mempool/octeontx2/Makefile @@ -29,7 +29,8 @@ LIBABIVER := 1 # SRCS-$(CONFIG_RTE_LIBRTE_OCTEONTX2_MEMPOOL) += \ otx2_mempool.c \ - otx2_mempool_irq.c + otx2_mempool_irq.c \ + otx2_mempool_debug.c LDLIBS += -lrte_eal -lrte_mempool -lrte_mbuf LDLIBS += -lrte_common_octeontx2 -lrte_kvargs -lrte_bus_pci diff --git a/drivers/mempool/octeontx2/meson.build b/drivers/mempool/octeontx2/meson.build index 3f93b509d..ab306b729 100644 --- a/drivers/mempool/octeontx2/meson.build +++ b/drivers/mempool/octeontx2/meson.build @@ -4,6 +4,7 @@ sources = files('otx2_mempool.c', 'otx2_mempool_irq.c', + 'otx2_mempool_debug.c' ) extra_flags = [] diff --git a/drivers/mempool/octeontx2/otx2_mempool.h b/drivers/mempool/octeontx2/otx2_mempool.h index 41542cf89..efaa308b3 100644 --- a/drivers/mempool/octeontx2/otx2_mempool.h +++ b/drivers/mempool/octeontx2/otx2_mempool.h @@ -202,4 +202,7 @@ int otx2_npa_lf_fini(void); int otx2_npa_register_irqs(struct otx2_npa_lf *lf); void otx2_npa_unregister_irqs(struct otx2_npa_lf *lf); +/* Debug */ +int otx2_mempool_ctx_dump(struct otx2_npa_lf *lf); + #endif /* __OTX2_MEMPOOL_H__ */ diff --git a/drivers/mempool/octeontx2/otx2_mempool_debug.c b/drivers/mempool/octeontx2/otx2_mempool_debug.c new file mode 100644 index 000000000..eef61ef07 --- /dev/null +++ b/drivers/mempool/octeontx2/otx2_mempool_debug.c @@ -0,0 +1,135 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2019 Marvell International Ltd. + */ + +#include "otx2_mempool.h" + +#define npa_dump(fmt, ...) 
fprintf(stderr, fmt "\n", ##__VA_ARGS__) + +static inline void +npa_lf_pool_dump(struct npa_pool_s *pool) +{ + npa_dump("W0: Stack base\t\t0x%"PRIx64"", pool->stack_base); + npa_dump("W1: ena \t\t%d\nW1: nat_align \t\t%d\nW1: stack_caching \t%d", + pool->ena, pool->nat_align, pool->stack_caching); + npa_dump("W1: stack_way_mask\t%d\nW1: buf_offset\t\t%d", + pool->stack_way_mask, pool->buf_offset); + npa_dump("W1: buf_size \t\t%d", pool->buf_size); + + npa_dump("W2: stack_max_pages \t%d\nW2: stack_pages\t\t%d", + pool->stack_max_pages, pool->stack_pages); + + npa_dump("W3: op_pc \t\t0x%"PRIx64"", (uint64_t)pool->op_pc); + + npa_dump("W4: stack_offset\t%d\nW4: shift\t\t%d\nW4: avg_level\t\t%d", + pool->stack_offset, pool->shift, pool->avg_level); + npa_dump("W4: avg_con \t\t%d\nW4: fc_ena\t\t%d\nW4: fc_stype\t\t%d", + pool->avg_con, pool->fc_ena, pool->fc_stype); + npa_dump("W4: fc_hyst_bits\t%d\nW4: fc_up_crossing\t%d", + pool->fc_hyst_bits, pool->fc_up_crossing); + npa_dump("W4: update_time\t\t%d\n", pool->update_time); + + npa_dump("W5: fc_addr\t\t0x%"PRIx64"\n", pool->fc_addr); + + npa_dump("W6: ptr_start\t\t0x%"PRIx64"\n", pool->ptr_start); + + npa_dump("W7: ptr_end\t\t0x%"PRIx64"\n", pool->ptr_end); + npa_dump("W8: err_int\t\t%d\nW8: err_int_ena\t\t%d", + pool->err_int, pool->err_int_ena); + npa_dump("W8: thresh_int\t\t%d", pool->thresh_int); + + npa_dump("W8: thresh_int_ena\t%d\nW8: thresh_up\t\t%d", + pool->thresh_int_ena, pool->thresh_up); + npa_dump("W8: thresh_qint_idx\t%d\nW8: err_qint_idx\t%d", + pool->thresh_qint_idx, pool->err_qint_idx); +} + +static inline void +npa_lf_aura_dump(struct npa_aura_s *aura) +{ + npa_dump("W0: Pool addr\t\t0x%"PRIx64"\n", aura->pool_addr); + + npa_dump("W1: ena\t\t\t%d\nW1: pool caching\t%d\nW1: pool way mask\t%d", + aura->ena, aura->pool_caching, aura->pool_way_mask); + npa_dump("W1: avg con\t\t%d\nW1: pool drop ena\t%d", + aura->avg_con, aura->pool_drop_ena); + npa_dump("W1: aura drop ena\t%d", aura->aura_drop_ena); + npa_dump("W1: bp_ena\t\t%d\nW1: aura drop\t\t%d\nW1: aura shift\t\t%d", + aura->bp_ena, aura->aura_drop, aura->shift); + npa_dump("W1: avg_level\t\t%d\n", aura->avg_level); + + npa_dump("W2: count\t\t%"PRIx64"\nW2: nix0_bpid\t\t%d", + (uint64_t)aura->count, aura->nix0_bpid); + npa_dump("W2: nix1_bpid\t\t%d", aura->nix1_bpid); + + npa_dump("W3: limit\t\t%"PRIx64"\nW3: bp\t\t\t%d\nW3: fc_ena\t\t%d\n", + (uint64_t)aura->limit, aura->bp, aura->fc_ena); + npa_dump("W3: fc_up_crossing\t%d\nW3: fc_stype\t\t%d", + aura->fc_up_crossing, aura->fc_stype); + + npa_dump("W3: fc_hyst_bits\t%d", aura->fc_hyst_bits); + + npa_dump("W4: fc_addr\t\t0x%"PRIx64"\n", aura->fc_addr); + + npa_dump("W5: pool_drop\t\t%d\nW5: update_time\t\t%d", + aura->pool_drop, aura->update_time); + npa_dump("W5: err_int\t\t%d", aura->err_int); + npa_dump("W5: err_int_ena\t\t%d\nW5: thresh_int\t\t%d", + aura->err_int_ena, aura->thresh_int); + npa_dump("W5: thresh_int_ena\t%d", aura->thresh_int_ena); + + npa_dump("W5: thresh_up\t\t%d\nW5: thresh_qint_idx\t%d", + aura->thresh_up, aura->thresh_qint_idx); + npa_dump("W5: err_qint_idx\t%d", aura->err_qint_idx); + + npa_dump("W6: thresh\t\t%"PRIx64"\n", (uint64_t)aura->thresh); +} + +int +otx2_mempool_ctx_dump(struct otx2_npa_lf *lf) +{ + struct npa_aq_enq_req *aq; + struct npa_aq_enq_rsp *rsp; + uint32_t q; + int rc; + + for (q = 0; q < lf->nr_pools; q++) { + /* Skip disabled POOL */ + if (rte_bitmap_get(lf->npa_bmp, q)) + continue; + + aq = otx2_mbox_alloc_msg_npa_aq_enq(lf->mbox); + aq->aura_id = q; + aq->ctype = 
NPA_AQ_CTYPE_POOL; + aq->op = NPA_AQ_INSTOP_READ; + + rc = otx2_mbox_process_msg(lf->mbox, (void *)&rsp); + if (rc) { + otx2_err("Failed to get pool(%d) context", q); + return rc; + } + npa_dump("============== pool=%d ===============\n", q); + npa_lf_pool_dump(&rsp->pool); + } + + for (q = 0; q < lf->nr_pools; q++) { + /* Skip disabled AURA */ + if (rte_bitmap_get(lf->npa_bmp, q)) + continue; + + aq = otx2_mbox_alloc_msg_npa_aq_enq(lf->mbox); + aq->aura_id = q; + aq->ctype = NPA_AQ_CTYPE_AURA; + aq->op = NPA_AQ_INSTOP_READ; + + rc = otx2_mbox_process_msg(lf->mbox, (void *)&rsp); + if (rc) { + otx2_err("Failed to get aura(%d) context", q); + return rc; + } + npa_dump("============== aura=%d ===============\n", q); + npa_lf_aura_dump(&rsp->aura); + } + + return rc; +} diff --git a/drivers/mempool/octeontx2/otx2_mempool_irq.c b/drivers/mempool/octeontx2/otx2_mempool_irq.c index c026e1eea..ce4104453 100644 --- a/drivers/mempool/octeontx2/otx2_mempool_irq.c +++ b/drivers/mempool/octeontx2/otx2_mempool_irq.c @@ -199,6 +199,7 @@ npa_lf_q_irq(void *param) /* Clear interrupt */ otx2_write64(intr, lf->base + NPA_LF_QINTX_INT(qintx)); + otx2_mempool_ctx_dump(lf); } static int From patchwork Mon Jun 17 15:55:31 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jerin Jacob Kollanukkaran X-Patchwork-Id: 54871 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 2D21B1BFCF; Mon, 17 Jun 2019 17:57:21 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by dpdk.org (Postfix) with ESMTP id AEFD81BFAB for ; Mon, 17 Jun 2019 17:56:57 +0200 (CEST) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.16.0.27/8.16.0.27) with SMTP id x5HFpsXj001115; Mon, 17 Jun 2019 08:56:57 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0818; bh=6ABF1zIvOGHbh2y/GLyfWGxwDoYPkutJ+zY8cwag3Ds=; b=p9IDuPDMesLPoLCmsYazoT9pzwB1IUEhHQtYqFRzto3sN5ApJXfjW5E6YLeQ+03wIYbL OJPUuLpuQXhlCVZ6rzn1gWt0gYHClOMU6qTfyhqk/Jmuk8B0qAviG0DtMzs+4fxWWaOy bfi4/rMuleHtksG5KjNSOZ3Hu1Jo+xLbuSeTDv7sReH9nhfYByvNJysUZj0St4g2WCpR XESA9eJHaCPdOuR/kEDukTUdUq6vwO3awJ0ErfWNHQZaHCAhIb8GfndBEwfZ4xPrQF0e tJzGl+TKNU7EGt8C1B3vlwKTvjLXJkdhUGpGuOWdJmNIvClR7V46OuxHWnH6UOez8Lrj dQ== Received: from sc-exch01.marvell.com ([199.233.58.181]) by mx0b-0016f401.pphosted.com with ESMTP id 2t506hyb2c-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Mon, 17 Jun 2019 08:56:55 -0700 Received: from SC-EXCH03.marvell.com (10.93.176.83) by SC-EXCH01.marvell.com (10.93.176.81) with Microsoft SMTP Server (TLS) id 15.0.1367.3; Mon, 17 Jun 2019 08:56:53 -0700 Received: from maili.marvell.com (10.93.176.43) by SC-EXCH03.marvell.com (10.93.176.83) with Microsoft SMTP Server id 15.0.1367.3 via Frontend Transport; Mon, 17 Jun 2019 08:56:53 -0700 Received: from jerin-lab.marvell.com (jerin-lab.marvell.com [10.28.34.14]) by maili.marvell.com (Postfix) with ESMTP id DE3D63F703F; Mon, 17 Jun 2019 08:56:51 -0700 (PDT) From: To: , Jerin Jacob , Nithin Dabilpuram , Vamsi Attunuru CC: Olivier Matz Date: Mon, 17 Jun 2019 21:25:31 +0530 Message-ID: <20190617155537.36144-22-jerinj@marvell.com> X-Mailer: 
git-send-email 2.21.0 In-Reply-To: <20190617155537.36144-1-jerinj@marvell.com> References: <20190601014905.45531-1-jerinj@marvell.com> <20190617155537.36144-1-jerinj@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:, , definitions=2019-06-17_07:, , signatures=0 Subject: [dpdk-dev] [PATCH v3 21/27] mempool/octeontx2: add mempool alloc op X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Jerin Jacob The DPDK mempool allocation reserves a single HW AURA and POOL in 1:1 map mode. Upon reservation, SW programs the slow path operations such as allocate stack memory for DMA and bunch HW configurations to respective HW blocks. Cc: Olivier Matz Signed-off-by: Jerin Jacob --- drivers/mempool/octeontx2/Makefile | 1 + drivers/mempool/octeontx2/meson.build | 3 +- drivers/mempool/octeontx2/otx2_mempool_ops.c | 246 +++++++++++++++++++ 3 files changed, 249 insertions(+), 1 deletion(-) create mode 100644 drivers/mempool/octeontx2/otx2_mempool_ops.c diff --git a/drivers/mempool/octeontx2/Makefile b/drivers/mempool/octeontx2/Makefile index b86d469f4..b3568443e 100644 --- a/drivers/mempool/octeontx2/Makefile +++ b/drivers/mempool/octeontx2/Makefile @@ -28,6 +28,7 @@ LIBABIVER := 1 # all source are stored in SRCS-y # SRCS-$(CONFIG_RTE_LIBRTE_OCTEONTX2_MEMPOOL) += \ + otx2_mempool_ops.c \ otx2_mempool.c \ otx2_mempool_irq.c \ otx2_mempool_debug.c diff --git a/drivers/mempool/octeontx2/meson.build b/drivers/mempool/octeontx2/meson.build index ab306b729..9fde40f0e 100644 --- a/drivers/mempool/octeontx2/meson.build +++ b/drivers/mempool/octeontx2/meson.build @@ -2,7 +2,8 @@ # Copyright(C) 2019 Marvell International Ltd. # -sources = files('otx2_mempool.c', +sources = files('otx2_mempool_ops.c', + 'otx2_mempool.c', 'otx2_mempool_irq.c', 'otx2_mempool_debug.c' ) diff --git a/drivers/mempool/octeontx2/otx2_mempool_ops.c b/drivers/mempool/octeontx2/otx2_mempool_ops.c new file mode 100644 index 000000000..0e7b7a77c --- /dev/null +++ b/drivers/mempool/octeontx2/otx2_mempool_ops.c @@ -0,0 +1,246 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2019 Marvell International Ltd. 
+ */ + +#include +#include + +#include "otx2_mempool.h" + +static int +npa_lf_aura_pool_init(struct otx2_mbox *mbox, uint32_t aura_id, + struct npa_aura_s *aura, struct npa_pool_s *pool) +{ + struct npa_aq_enq_req *aura_init_req, *pool_init_req; + struct npa_aq_enq_rsp *aura_init_rsp, *pool_init_rsp; + struct otx2_mbox_dev *mdev = &mbox->dev[0]; + int rc, off; + + aura_init_req = otx2_mbox_alloc_msg_npa_aq_enq(mbox); + + aura_init_req->aura_id = aura_id; + aura_init_req->ctype = NPA_AQ_CTYPE_AURA; + aura_init_req->op = NPA_AQ_INSTOP_INIT; + memcpy(&aura_init_req->aura, aura, sizeof(*aura)); + + pool_init_req = otx2_mbox_alloc_msg_npa_aq_enq(mbox); + + pool_init_req->aura_id = aura_id; + pool_init_req->ctype = NPA_AQ_CTYPE_POOL; + pool_init_req->op = NPA_AQ_INSTOP_INIT; + memcpy(&pool_init_req->pool, pool, sizeof(*pool)); + + otx2_mbox_msg_send(mbox, 0); + rc = otx2_mbox_wait_for_rsp(mbox, 0); + if (rc < 0) + return rc; + + off = mbox->rx_start + + RTE_ALIGN(sizeof(struct mbox_hdr), MBOX_MSG_ALIGN); + aura_init_rsp = (struct npa_aq_enq_rsp *)((uintptr_t)mdev->mbase + off); + off = mbox->rx_start + aura_init_rsp->hdr.next_msgoff; + pool_init_rsp = (struct npa_aq_enq_rsp *)((uintptr_t)mdev->mbase + off); + + if (rc == 2 && aura_init_rsp->hdr.rc == 0 && pool_init_rsp->hdr.rc == 0) + return 0; + else + return NPA_LF_ERR_AURA_POOL_INIT; +} + +static inline char* +npa_lf_stack_memzone_name(struct otx2_npa_lf *lf, int pool_id, char *name) +{ + snprintf(name, RTE_MEMZONE_NAMESIZE, "otx2_npa_stack_%x_%d", + lf->pf_func, pool_id); + + return name; +} + +static inline const struct rte_memzone * +npa_lf_stack_dma_alloc(struct otx2_npa_lf *lf, char *name, + int pool_id, size_t size) +{ + return rte_memzone_reserve_aligned( + npa_lf_stack_memzone_name(lf, pool_id, name), size, 0, + RTE_MEMZONE_IOVA_CONTIG, OTX2_ALIGN); +} + +static inline int +bitmap_ctzll(uint64_t slab) +{ + if (slab == 0) + return 0; + + return __builtin_ctzll(slab); +} + +static int +npa_lf_aura_pool_pair_alloc(struct otx2_npa_lf *lf, const uint32_t block_size, + const uint32_t block_count, struct npa_aura_s *aura, + struct npa_pool_s *pool, uint64_t *aura_handle) +{ + int rc, aura_id, pool_id, stack_size, alloc_size; + char name[RTE_MEMZONE_NAMESIZE]; + const struct rte_memzone *mz; + uint64_t slab; + uint32_t pos; + + /* Sanity check */ + if (!lf || !block_size || !block_count || + !pool || !aura || !aura_handle) + return NPA_LF_ERR_PARAM; + + /* Block size should be cache line aligned and in range of 128B-128KB */ + if (block_size % OTX2_ALIGN || block_size < 128 || + block_size > 128 * 1024) + return NPA_LF_ERR_INVALID_BLOCK_SZ; + + pos = slab = 0; + /* Scan from the beginning */ + __rte_bitmap_scan_init(lf->npa_bmp); + /* Scan bitmap to get the free pool */ + rc = rte_bitmap_scan(lf->npa_bmp, &pos, &slab); + /* Empty bitmap */ + if (rc == 0) { + otx2_err("Mempools exhausted, 'max_pools' devargs to increase"); + return -ERANGE; + } + + /* Get aura_id from resource bitmap */ + aura_id = pos + bitmap_ctzll(slab); + /* Mark pool as reserved */ + rte_bitmap_clear(lf->npa_bmp, aura_id); + + /* Configuration based on each aura has separate pool(aura-pool pair) */ + pool_id = aura_id; + rc = (aura_id < 0 || pool_id >= (int)lf->nr_pools || aura_id >= + (int)BIT_ULL(6 + lf->aura_sz)) ? 
NPA_LF_ERR_AURA_ID_ALLOC : 0; + if (rc) + goto exit; + + /* Allocate stack memory */ + stack_size = (block_count + lf->stack_pg_ptrs - 1) / lf->stack_pg_ptrs; + alloc_size = stack_size * lf->stack_pg_bytes; + + mz = npa_lf_stack_dma_alloc(lf, name, pool_id, alloc_size); + if (mz == NULL) { + rc = -ENOMEM; + goto aura_res_put; + } + + /* Update aura fields */ + aura->pool_addr = pool_id;/* AF will translate to associated poolctx */ + aura->ena = 1; + aura->shift = __builtin_clz(block_count) - 8; + aura->limit = block_count; + aura->pool_caching = 1; + aura->err_int_ena = BIT(NPA_AURA_ERR_INT_AURA_ADD_OVER); + aura->err_int_ena |= BIT(NPA_AURA_ERR_INT_AURA_ADD_UNDER); + aura->err_int_ena |= BIT(NPA_AURA_ERR_INT_AURA_FREE_UNDER); + aura->err_int_ena |= BIT(NPA_AURA_ERR_INT_POOL_DIS); + /* Many to one reduction */ + aura->err_qint_idx = aura_id % lf->qints; + + /* Update pool fields */ + pool->stack_base = mz->iova; + pool->ena = 1; + pool->buf_size = block_size / OTX2_ALIGN; + pool->stack_max_pages = stack_size; + pool->shift = __builtin_clz(block_count) - 8; + pool->ptr_start = 0; + pool->ptr_end = ~0; + pool->stack_caching = 1; + pool->err_int_ena = BIT(NPA_POOL_ERR_INT_OVFLS); + pool->err_int_ena |= BIT(NPA_POOL_ERR_INT_RANGE); + pool->err_int_ena |= BIT(NPA_POOL_ERR_INT_PERR); + + /* Many to one reduction */ + pool->err_qint_idx = pool_id % lf->qints; + + /* Issue AURA_INIT and POOL_INIT op */ + rc = npa_lf_aura_pool_init(lf->mbox, aura_id, aura, pool); + if (rc) + goto stack_mem_free; + + *aura_handle = npa_lf_aura_handle_gen(aura_id, lf->base); + + /* Update aura count */ + npa_lf_aura_op_cnt_set(*aura_handle, 0, block_count); + /* Read it back to make sure aura count is updated */ + npa_lf_aura_op_cnt_get(*aura_handle); + + return 0; + +stack_mem_free: + rte_memzone_free(mz); +aura_res_put: + rte_bitmap_set(lf->npa_bmp, aura_id); +exit: + return rc; +} + +static int +otx2_npa_alloc(struct rte_mempool *mp) +{ + uint32_t block_size, block_count; + struct otx2_npa_lf *lf; + struct npa_aura_s aura; + struct npa_pool_s pool; + uint64_t aura_handle; + int rc; + + lf = otx2_npa_lf_obj_get(); + if (lf == NULL) { + rc = -EINVAL; + goto error; + } + + block_size = mp->elt_size + mp->header_size + mp->trailer_size; + block_count = mp->size; + + if (block_size % OTX2_ALIGN != 0) { + otx2_err("Block size should be multiple of 128B"); + rc = -ERANGE; + goto error; + } + + memset(&aura, 0, sizeof(struct npa_aura_s)); + memset(&pool, 0, sizeof(struct npa_pool_s)); + pool.nat_align = 1; + pool.buf_offset = 1; + + if ((uint32_t)pool.buf_offset * OTX2_ALIGN != mp->header_size) { + otx2_err("Unsupported mp->header_size=%d", mp->header_size); + rc = -EINVAL; + goto error; + } + + /* Use driver specific mp->pool_config to override aura config */ + if (mp->pool_config != NULL) + memcpy(&aura, mp->pool_config, sizeof(struct npa_aura_s)); + + rc = npa_lf_aura_pool_pair_alloc(lf, block_size, block_count, + &aura, &pool, &aura_handle); + if (rc) { + otx2_err("Failed to alloc pool or aura rc=%d", rc); + goto error; + } + + /* Store aura_handle for future queue operations */ + mp->pool_id = aura_handle; + otx2_npa_dbg("lf=%p block_sz=%d block_count=%d aura_handle=0x%"PRIx64, + lf, block_size, block_count, aura_handle); + + /* Just hold the reference of the object */ + otx2_npa_lf_obj_ref(); + return 0; +error: + return rc; +} + +static struct rte_mempool_ops otx2_npa_ops = { + .name = "octeontx2_npa", + .alloc = otx2_npa_alloc, +}; + +MEMPOOL_REGISTER_OPS(otx2_npa_ops); From patchwork Mon Jun 17 15:55:32 2019 
Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jerin Jacob Kollanukkaran X-Patchwork-Id: 54872 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 7C9331BFD8; Mon, 17 Jun 2019 17:57:23 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by dpdk.org (Postfix) with ESMTP id 4143D1BFAC for ; Mon, 17 Jun 2019 17:56:58 +0200 (CEST) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.16.0.27/8.16.0.27) with SMTP id x5HFpsXk001115; Mon, 17 Jun 2019 08:56:57 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0818; bh=F1fHqOAgJrFtFLqnxmni9YArWHbWN6yxNAC3IcV2uj4=; b=Dathf9P7yRLnG0B5lVNQ9LawuAg1CtYf4U0QYhJLrytzHwmwnolmDiZZo1SnEjTpPJLy p8JxoLtI2IH/bsbyvPjNrjRFJO1j8yPEq4EYCCeMjZtV4knYhjn9ALiTB+Z8hIWTG/Bi IdkU3s/SkDmrkH8Rl1KVD5rmoyZ47tsOt2Sbx29tnv5SlyKvmGSSSWxTraCQE9Mr94zU TrKSmV0mSylu2aQcTZTwiLBjk5JbDImKNt37I8GdK/IexK5/Uuz/qjFdoKJG1ztTVcFR HWyJ784COPtMVQQ9TWa63D//dR0pwRMvwsZsymZpcEe3F0uGA3jdwQlaET4YJUj40J+0 mA== Received: from sc-exch01.marvell.com ([199.233.58.181]) by mx0b-0016f401.pphosted.com with ESMTP id 2t506hyb2c-2 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Mon, 17 Jun 2019 08:56:57 -0700 Received: from SC-EXCH01.marvell.com (10.93.176.81) by SC-EXCH01.marvell.com (10.93.176.81) with Microsoft SMTP Server (TLS) id 15.0.1367.3; Mon, 17 Jun 2019 08:56:56 -0700 Received: from maili.marvell.com (10.93.176.43) by SC-EXCH01.marvell.com (10.93.176.81) with Microsoft SMTP Server id 15.0.1367.3 via Frontend Transport; Mon, 17 Jun 2019 08:56:56 -0700 Received: from jerin-lab.marvell.com (jerin-lab.marvell.com [10.28.34.14]) by maili.marvell.com (Postfix) with ESMTP id BE4FF3F703F; Mon, 17 Jun 2019 08:56:54 -0700 (PDT) From: To: , Jerin Jacob , Nithin Dabilpuram , Vamsi Attunuru CC: Olivier Matz Date: Mon, 17 Jun 2019 21:25:32 +0530 Message-ID: <20190617155537.36144-23-jerinj@marvell.com> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20190617155537.36144-1-jerinj@marvell.com> References: <20190601014905.45531-1-jerinj@marvell.com> <20190617155537.36144-1-jerinj@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:, , definitions=2019-06-17_07:, , signatures=0 Subject: [dpdk-dev] [PATCH v3 22/27] mempool/octeontx2: add mempool free op X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Jerin Jacob The DPDK mempool free operation frees HW AURA and POOL reserved in alloc operation. In addition to that it free all the memory resources allocated in mempool alloc operations. 
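Illustrative application-side sketch (not part of this patch; the helper name, pool name and sizes are made up) of how the alloc op from the previous patch and the free op added here are reached through the generic mempool API:

    #include <rte_mempool.h>

    static struct rte_mempool *
    create_npa_backed_pool(void)
    {
            struct rte_mempool *mp;

            /* No HW resources are reserved yet at this point. */
            mp = rte_mempool_create_empty("sketch_pool", 8192, 2048, 256, 0,
                                          SOCKET_ID_ANY, 0);
            if (mp == NULL)
                    return NULL;

            /* Select the OCTEON TX2 NPA ops; the first populate call then
             * triggers the driver alloc op, i.e. the AURA/POOL reservation
             * and stack memzone setup described in the previous patch.
             */
            if (rte_mempool_set_ops_byname(mp, "octeontx2_npa", NULL) != 0 ||
                rte_mempool_populate_default(mp) < 0) {
                    rte_mempool_free(mp);
                    return NULL;
            }
            return mp;
    }

    /* A later rte_mempool_free(mp) invokes the free op added in this patch,
     * disabling the aura/pool pair and releasing the stack memzone.
     */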
Cc: Olivier Matz Signed-off-by: Jerin Jacob --- drivers/mempool/octeontx2/otx2_mempool_ops.c | 104 +++++++++++++++++++ 1 file changed, 104 insertions(+) diff --git a/drivers/mempool/octeontx2/otx2_mempool_ops.c b/drivers/mempool/octeontx2/otx2_mempool_ops.c index 0e7b7a77c..94570319a 100644 --- a/drivers/mempool/octeontx2/otx2_mempool_ops.c +++ b/drivers/mempool/octeontx2/otx2_mempool_ops.c @@ -47,6 +47,62 @@ npa_lf_aura_pool_init(struct otx2_mbox *mbox, uint32_t aura_id, return NPA_LF_ERR_AURA_POOL_INIT; } +static int +npa_lf_aura_pool_fini(struct otx2_mbox *mbox, + uint32_t aura_id, + uint64_t aura_handle) +{ + struct npa_aq_enq_req *aura_req, *pool_req; + struct npa_aq_enq_rsp *aura_rsp, *pool_rsp; + struct otx2_mbox_dev *mdev = &mbox->dev[0]; + struct ndc_sync_op *ndc_req; + int rc, off; + + /* Procedure for disabling an aura/pool */ + rte_delay_us(10); + npa_lf_aura_op_alloc(aura_handle, 0); + + pool_req = otx2_mbox_alloc_msg_npa_aq_enq(mbox); + pool_req->aura_id = aura_id; + pool_req->ctype = NPA_AQ_CTYPE_POOL; + pool_req->op = NPA_AQ_INSTOP_WRITE; + pool_req->pool.ena = 0; + pool_req->pool_mask.ena = ~pool_req->pool_mask.ena; + + aura_req = otx2_mbox_alloc_msg_npa_aq_enq(mbox); + aura_req->aura_id = aura_id; + aura_req->ctype = NPA_AQ_CTYPE_AURA; + aura_req->op = NPA_AQ_INSTOP_WRITE; + aura_req->aura.ena = 0; + aura_req->aura_mask.ena = ~aura_req->aura_mask.ena; + + otx2_mbox_msg_send(mbox, 0); + rc = otx2_mbox_wait_for_rsp(mbox, 0); + if (rc < 0) + return rc; + + off = mbox->rx_start + + RTE_ALIGN(sizeof(struct mbox_hdr), MBOX_MSG_ALIGN); + pool_rsp = (struct npa_aq_enq_rsp *)((uintptr_t)mdev->mbase + off); + + off = mbox->rx_start + pool_rsp->hdr.next_msgoff; + aura_rsp = (struct npa_aq_enq_rsp *)((uintptr_t)mdev->mbase + off); + + if (rc != 2 || aura_rsp->hdr.rc != 0 || pool_rsp->hdr.rc != 0) + return NPA_LF_ERR_AURA_POOL_FINI; + + /* Sync NDC-NPA for LF */ + ndc_req = otx2_mbox_alloc_msg_ndc_sync_op(mbox); + ndc_req->npa_lf_sync = 1; + + rc = otx2_mbox_process(mbox); + if (rc) { + otx2_err("Error on NDC-NPA LF sync, rc %d", rc); + return NPA_LF_ERR_AURA_POOL_FINI; + } + return 0; +} + static inline char* npa_lf_stack_memzone_name(struct otx2_npa_lf *lf, int pool_id, char *name) { @@ -65,6 +121,18 @@ npa_lf_stack_dma_alloc(struct otx2_npa_lf *lf, char *name, RTE_MEMZONE_IOVA_CONTIG, OTX2_ALIGN); } +static inline int +npa_lf_stack_dma_free(struct otx2_npa_lf *lf, char *name, int pool_id) +{ + const struct rte_memzone *mz; + + mz = rte_memzone_lookup(npa_lf_stack_memzone_name(lf, pool_id, name)); + if (mz == NULL) + return -EINVAL; + + return rte_memzone_free(mz); +} + static inline int bitmap_ctzll(uint64_t slab) { @@ -179,6 +247,24 @@ npa_lf_aura_pool_pair_alloc(struct otx2_npa_lf *lf, const uint32_t block_size, return rc; } +static int +npa_lf_aura_pool_pair_free(struct otx2_npa_lf *lf, uint64_t aura_handle) +{ + char name[RTE_MEMZONE_NAMESIZE]; + int aura_id, pool_id, rc; + + if (!lf || !aura_handle) + return NPA_LF_ERR_PARAM; + + aura_id = pool_id = npa_lf_aura_handle_to_aura(aura_handle); + rc = npa_lf_aura_pool_fini(lf->mbox, aura_id, aura_handle); + rc |= npa_lf_stack_dma_free(lf, name, pool_id); + + rte_bitmap_set(lf->npa_bmp, aura_id); + + return rc; +} + static int otx2_npa_alloc(struct rte_mempool *mp) { @@ -238,9 +324,27 @@ otx2_npa_alloc(struct rte_mempool *mp) return rc; } +static void +otx2_npa_free(struct rte_mempool *mp) +{ + struct otx2_npa_lf *lf = otx2_npa_lf_obj_get(); + int rc = 0; + + otx2_npa_dbg("lf=%p aura_handle=0x%"PRIx64, lf, mp->pool_id); + if (lf != 
NULL) + rc = npa_lf_aura_pool_pair_free(lf, mp->pool_id); + + if (rc) + otx2_err("Failed to free pool or aura rc=%d", rc); + + /* Release the reference of npalf */ + otx2_npa_lf_fini(); +} + static struct rte_mempool_ops otx2_npa_ops = { .name = "octeontx2_npa", .alloc = otx2_npa_alloc, + .free = otx2_npa_free, }; MEMPOOL_REGISTER_OPS(otx2_npa_ops); From patchwork Mon Jun 17 15:55:33 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jerin Jacob Kollanukkaran X-Patchwork-Id: 54875 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 37B261BFE4; Mon, 17 Jun 2019 17:57:26 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by dpdk.org (Postfix) with ESMTP id 9DA0A1BF8C for ; Mon, 17 Jun 2019 17:57:01 +0200 (CEST) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.16.0.27/8.16.0.27) with SMTP id x5HFprCZ001049 for ; Mon, 17 Jun 2019 08:57:01 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0818; bh=+ieFtCc0LN3kuuJEFLJaSUFwyQMC0ljtxO5x37SUOsQ=; b=hj0MkQ157sGtN6J6n/BGADtHAQf1FGXxY4mBv98qZ9v3VLPZXC9tqeswNi3ABldC0Arr rkrdeT5nuH3QsxWnuqY/+t6Sa3jNBYbE+rNTKYaA3iZi8Pjhrh6O+zzwlj0sx/ViYKrh /GNOnO6OxCYQN428psVTf7iWK0gBagFtBPvh/NjFy86lh3hwmt7YD1uZneGBmPlTs67B diR7V1deFvyBrD7dB6FYSQYpdE6sXPphT9D6kJQLSnIyR4UvJYlAgdeD/raOlnFrGsKE S9qoOqF/DrSm9KEB/KDzoA/pKitsU6P7DrKopa9T0a/NCUW8KUfB45Wn5ofW5pOtNynr PA== Received: from sc-exch03.marvell.com ([199.233.58.183]) by mx0b-0016f401.pphosted.com with ESMTP id 2t506hyb3j-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT) for ; Mon, 17 Jun 2019 08:57:00 -0700 Received: from SC-EXCH01.marvell.com (10.93.176.81) by SC-EXCH03.marvell.com (10.93.176.83) with Microsoft SMTP Server (TLS) id 15.0.1367.3; Mon, 17 Jun 2019 08:56:59 -0700 Received: from maili.marvell.com (10.93.176.43) by SC-EXCH01.marvell.com (10.93.176.81) with Microsoft SMTP Server id 15.0.1367.3 via Frontend Transport; Mon, 17 Jun 2019 08:56:59 -0700 Received: from jerin-lab.marvell.com (jerin-lab.marvell.com [10.28.34.14]) by maili.marvell.com (Postfix) with ESMTP id 952543F703F; Mon, 17 Jun 2019 08:56:57 -0700 (PDT) From: To: , Jerin Jacob , Nithin Dabilpuram , Vamsi Attunuru CC: Pavan Nikhilesh Date: Mon, 17 Jun 2019 21:25:33 +0530 Message-ID: <20190617155537.36144-24-jerinj@marvell.com> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20190617155537.36144-1-jerinj@marvell.com> References: <20190601014905.45531-1-jerinj@marvell.com> <20190617155537.36144-1-jerinj@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:, , definitions=2019-06-17_07:, , signatures=0 Subject: [dpdk-dev] [PATCH v3 23/27] mempool/octeontx2: add remaining slow path ops X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Jerin Jacob Add remaining get_count(), calc_mem_size() and populate() slow path mempool operations. 
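The sizing rule behind calc_mem_size() and populate() can be sketched standalone (illustrative only; helper names are made up, and the real op delegates to rte_mempool_op_calc_mem_size_default() and additionally forces the area to be IOVA-contiguous):

    #include <stdint.h>
    #include <stddef.h>

    /* Bytes to reserve for obj_num objects of total_elt_sz each: one extra
     * element absorbs the worst-case alignment loss.
     */
    static size_t
    npa_like_mem_size(uint32_t obj_num, size_t total_elt_sz)
    {
            return (size_t)(obj_num + 1) * total_elt_sz;
    }

    /* Offset of the first object inside the reserved area: distance from
     * vaddr up to the next multiple of total_elt_sz, as in the populate op.
     */
    static size_t
    npa_like_first_off(uintptr_t vaddr, size_t total_elt_sz)
    {
            return total_elt_sz - (vaddr % total_elt_sz);
    }

If vaddr is already a multiple of total_elt_sz the offset is a full element, which is why space for one extra object is budgeted.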
Signed-off-by: Jerin Jacob Signed-off-by: Pavan Nikhilesh --- drivers/mempool/octeontx2/otx2_mempool_ops.c | 62 ++++++++++++++++++++ 1 file changed, 62 insertions(+) diff --git a/drivers/mempool/octeontx2/otx2_mempool_ops.c b/drivers/mempool/octeontx2/otx2_mempool_ops.c index 94570319a..966b7d7f1 100644 --- a/drivers/mempool/octeontx2/otx2_mempool_ops.c +++ b/drivers/mempool/octeontx2/otx2_mempool_ops.c @@ -7,6 +7,12 @@ #include "otx2_mempool.h" +static unsigned int +otx2_npa_get_count(const struct rte_mempool *mp) +{ + return (unsigned int)npa_lf_aura_op_available(mp->pool_id); +} + static int npa_lf_aura_pool_init(struct otx2_mbox *mbox, uint32_t aura_id, struct npa_aura_s *aura, struct npa_pool_s *pool) @@ -341,10 +347,66 @@ otx2_npa_free(struct rte_mempool *mp) otx2_npa_lf_fini(); } +static ssize_t +otx2_npa_calc_mem_size(const struct rte_mempool *mp, uint32_t obj_num, + uint32_t pg_shift, size_t *min_chunk_size, size_t *align) +{ + ssize_t mem_size; + + /* + * Simply need space for one more object to be able to + * fulfill alignment requirements. + */ + mem_size = rte_mempool_op_calc_mem_size_default(mp, obj_num + 1, + pg_shift, + min_chunk_size, align); + if (mem_size >= 0) { + /* + * Memory area which contains objects must be physically + * contiguous. + */ + *min_chunk_size = mem_size; + } + + return mem_size; +} + +static int +otx2_npa_populate(struct rte_mempool *mp, unsigned int max_objs, void *vaddr, + rte_iova_t iova, size_t len, + rte_mempool_populate_obj_cb_t *obj_cb, void *obj_cb_arg) +{ + size_t total_elt_sz; + size_t off; + + if (iova == RTE_BAD_IOVA) + return -EINVAL; + + total_elt_sz = mp->header_size + mp->elt_size + mp->trailer_size; + + /* Align object start address to a multiple of total_elt_sz */ + off = total_elt_sz - ((uintptr_t)vaddr % total_elt_sz); + + if (len < off) + return -EINVAL; + + vaddr = (char *)vaddr + off; + iova += off; + len -= off; + + npa_lf_aura_op_range_set(mp->pool_id, iova, iova + len); + + return rte_mempool_op_populate_default(mp, max_objs, vaddr, iova, len, + obj_cb, obj_cb_arg); +} + static struct rte_mempool_ops otx2_npa_ops = { .name = "octeontx2_npa", .alloc = otx2_npa_alloc, .free = otx2_npa_free, + .get_count = otx2_npa_get_count, + .calc_mem_size = otx2_npa_calc_mem_size, + .populate = otx2_npa_populate, }; MEMPOOL_REGISTER_OPS(otx2_npa_ops); From patchwork Mon Jun 17 15:55:34 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jerin Jacob Kollanukkaran X-Patchwork-Id: 54876 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id B581D1BFE7; Mon, 17 Jun 2019 17:57:27 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by dpdk.org (Postfix) with ESMTP id E91761BF2D for ; Mon, 17 Jun 2019 17:57:03 +0200 (CEST) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.16.0.27/8.16.0.27) with SMTP id x5HFppxV000998; Mon, 17 Jun 2019 08:57:03 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0818; bh=jiWT5QHScUCKPMSg/Psb/s+4xwac8obAKtGqZlCFPls=; b=wrp7ndQo/kTSnacBcb3hGOlvPi+eXELma2RaWniK4MyeCNhn2GpOVY9gihmXv3+j2f2q 
LVuJp0MQEw31S+mP/eHBVS587RdBblRuyFivS008xkpP/KXNtK2LA/hguYc5IoDD3Pdb bR0Hz9xKVbHp4jKU3K4mHxGad+pHo53AhuMRhndxoLCyzAyJO7pxJB+GB4Ya8y4ZtUUR 5LpcbYD7D0+2RqB8T6p3lHlZaK2EKX2vR5s0i6++QBRIKyma8djMlRyYkT8r8i/XnN8B nEjFvfJODpYkEOiLu15oHyIsWSySG+BTmm187VliuuEYhh+koBi+OKuGH1LPz9E90d2C 3Q== Received: from sc-exch02.marvell.com ([199.233.58.182]) by mx0b-0016f401.pphosted.com with ESMTP id 2t506hyb3p-2 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Mon, 17 Jun 2019 08:57:03 -0700 Received: from SC-EXCH01.marvell.com (10.93.176.81) by SC-EXCH02.marvell.com (10.93.176.82) with Microsoft SMTP Server (TLS) id 15.0.1367.3; Mon, 17 Jun 2019 08:57:02 -0700 Received: from maili.marvell.com (10.93.176.43) by SC-EXCH01.marvell.com (10.93.176.81) with Microsoft SMTP Server id 15.0.1367.3 via Frontend Transport; Mon, 17 Jun 2019 08:57:02 -0700 Received: from jerin-lab.marvell.com (jerin-lab.marvell.com [10.28.34.14]) by maili.marvell.com (Postfix) with ESMTP id 51C683F703F; Mon, 17 Jun 2019 08:57:00 -0700 (PDT) From: To: , Jerin Jacob , Nithin Dabilpuram , Vamsi Attunuru CC: Olivier Matz , Pavan Nikhilesh Date: Mon, 17 Jun 2019 21:25:34 +0530 Message-ID: <20190617155537.36144-25-jerinj@marvell.com> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20190617155537.36144-1-jerinj@marvell.com> References: <20190601014905.45531-1-jerinj@marvell.com> <20190617155537.36144-1-jerinj@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:, , definitions=2019-06-17_07:, , signatures=0 Subject: [dpdk-dev] [PATCH v3 24/27] mempool/octeontx2: add fast path mempool ops X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Jerin Jacob Add enqueue and dequeue mempool fastpath operations. 
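For context, a sketch (not part of the patch; BURST and the helper name are arbitrary) of how these ops are driven from application code, assuming "mp" is an octeontx2_npa-backed pool created as in the earlier patches; note the per-lcore mempool cache may satisfy requests before the driver ops are reached:

    #include <rte_mempool.h>

    #define BURST 32

    static int
    bounce_burst(struct rte_mempool *mp)
    {
            void *objs[BURST];

            /* Lands in the dequeue op: one NPA ALLOC per object. */
            if (rte_mempool_get_bulk(mp, objs, BURST) != 0)
                    return -1;

            /* ... use the buffers ... */

            /* Lands in the enqueue op: one store-pair to FREE0 per object. */
            rte_mempool_put_bulk(mp, objs, BURST);
            return 0;
    }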
Cc: Olivier Matz Signed-off-by: Jerin Jacob Signed-off-by: Pavan Nikhilesh --- drivers/mempool/octeontx2/otx2_mempool_ops.c | 57 ++++++++++++++++++++ 1 file changed, 57 insertions(+) diff --git a/drivers/mempool/octeontx2/otx2_mempool_ops.c b/drivers/mempool/octeontx2/otx2_mempool_ops.c index 966b7d7f1..c59bd73c0 100644 --- a/drivers/mempool/octeontx2/otx2_mempool_ops.c +++ b/drivers/mempool/octeontx2/otx2_mempool_ops.c @@ -7,6 +7,61 @@ #include "otx2_mempool.h" +static int __hot +otx2_npa_enq(struct rte_mempool *mp, void * const *obj_table, unsigned int n) +{ + unsigned int index; const uint64_t aura_handle = mp->pool_id; + const uint64_t reg = npa_lf_aura_handle_to_aura(aura_handle); + const uint64_t addr = npa_lf_aura_handle_to_base(aura_handle) + + NPA_LF_AURA_OP_FREE0; + + for (index = 0; index < n; index++) + otx2_store_pair((uint64_t)obj_table[index], reg, addr); + + return 0; +} + +static __rte_noinline int +npa_lf_aura_op_alloc_one(const int64_t wdata, int64_t * const addr, + void **obj_table, uint8_t i) +{ + uint8_t retry = 4; + + do { + obj_table[i] = (void *)otx2_atomic64_add_nosync(wdata, addr); + if (obj_table[i] != NULL) + return 0; + + } while (retry--); + + return -ENOENT; +} + +static inline int __hot +otx2_npa_deq(struct rte_mempool *mp, void **obj_table, unsigned int n) +{ + const int64_t wdata = npa_lf_aura_handle_to_aura(mp->pool_id); + unsigned int index; + uint64_t obj; + + int64_t * const addr = (int64_t * const) + (npa_lf_aura_handle_to_base(mp->pool_id) + + NPA_LF_AURA_OP_ALLOCX(0)); + for (index = 0; index < n; index++, obj_table++) { + obj = npa_lf_aura_op_alloc_one(wdata, addr, obj_table, 0); + if (obj == 0) { + for (; index > 0; index--) { + obj_table--; + otx2_npa_enq(mp, obj_table, 1); + } + return -ENOENT; + } + *obj_table = (void *)obj; + } + + return 0; +} + static unsigned int otx2_npa_get_count(const struct rte_mempool *mp) { @@ -404,9 +459,11 @@ static struct rte_mempool_ops otx2_npa_ops = { .name = "octeontx2_npa", .alloc = otx2_npa_alloc, .free = otx2_npa_free, + .enqueue = otx2_npa_enq, .get_count = otx2_npa_get_count, .calc_mem_size = otx2_npa_calc_mem_size, .populate = otx2_npa_populate, + .dequeue = otx2_npa_deq, }; MEMPOOL_REGISTER_OPS(otx2_npa_ops); From patchwork Mon Jun 17 15:55:35 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jerin Jacob Kollanukkaran X-Patchwork-Id: 54874 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 524A01C00E; Mon, 17 Jun 2019 17:57:43 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by dpdk.org (Postfix) with ESMTP id DAF661BFCA for ; Mon, 17 Jun 2019 17:57:18 +0200 (CEST) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.16.0.27/8.16.0.27) with SMTP id x5HFppsB001031; Mon, 17 Jun 2019 08:57:18 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0818; bh=xTsuVHVMiZ8nUreHYyjSv/V/yfcDS7VpsKZIxG5/VV4=; b=ZyuwQB5RE+AH6AzCEh770TDLDvpPFeMSjXXzklsnbo3/Rg44ulV4GLSztxfQJ8rSnsO5 16q+7i5lhstZGPdDwRy9gjbna+IdhTUs4mc9W/mUdluoUVvH9s2q7toMhbBz6HZ7bd+h BEN5fUnDqv7lHSxAAKzr0P9d9NNZupEV3eGEmgYDrsoZ3b5YTlnUNZRpkrNEqkw+ijVl 
R2Nm1WTqOUvGHnykPam3Fk/fkrAjG+Mzhfgut3bCp5Isb8Yk/BbZ7nJaqVNBgXQQ3LYt LRRxinYJA9hcqAB01y+FCQRIWs+FvullKqoTNpL+oRPxCP3n3TgW0ZpeDWyA951re/Rz Ww== Received: from sc-exch02.marvell.com ([199.233.58.182]) by mx0b-0016f401.pphosted.com with ESMTP id 2t506hyb3u-3 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Mon, 17 Jun 2019 08:57:18 -0700 Received: from SC-EXCH03.marvell.com (10.93.176.83) by SC-EXCH02.marvell.com (10.93.176.82) with Microsoft SMTP Server (TLS) id 15.0.1367.3; Mon, 17 Jun 2019 08:57:05 -0700 Received: from maili.marvell.com (10.93.176.43) by SC-EXCH03.marvell.com (10.93.176.83) with Microsoft SMTP Server id 15.0.1367.3 via Frontend Transport; Mon, 17 Jun 2019 08:57:05 -0700 Received: from jerin-lab.marvell.com (jerin-lab.marvell.com [10.28.34.14]) by maili.marvell.com (Postfix) with ESMTP id 40E5F3F7040; Mon, 17 Jun 2019 08:57:03 -0700 (PDT) From: To: , Jerin Jacob , Nithin Dabilpuram , Vamsi Attunuru CC: Pavan Nikhilesh , Olivier Matz , Aaron Conole Date: Mon, 17 Jun 2019 21:25:35 +0530 Message-ID: <20190617155537.36144-26-jerinj@marvell.com> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20190617155537.36144-1-jerinj@marvell.com> References: <20190601014905.45531-1-jerinj@marvell.com> <20190617155537.36144-1-jerinj@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:, , definitions=2019-06-17_07:, , signatures=0 Subject: [dpdk-dev] [PATCH v3 25/27] mempool/octeontx2: add optimized dequeue operation for arm64 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Pavan Nikhilesh This patch adds an optimized arm64 instruction based routine to leverage CPU pipeline characteristics of octeontx2. The theme is to fill the pipeline with CASP operations as much HW can do so that HW can do alloc() HW ops in full throttle. 
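The burst-splitting rule the new routine relies on can be sketched in isolation (illustrative only; the helper name is made up): a request is carved into chunks of 32, 16, 8, 4, 2 or 1 objects so that each chunk maps onto one fully unrolled CASP sequence.

    #include <stdio.h>
    #include <rte_common.h>

    static void
    show_chunks(unsigned int n)
    {
            while (n) {
                    /* Same rule as the arm64 dequeue path below. */
                    unsigned int parts = n > 31 ? 32 : rte_align32prevpow2(n);

                    printf("chunk of %u\n", parts);
                    n -= parts;
            }
    }

    /* For example, show_chunks(45) prints chunks of 32, 8, 4 and 1. */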
Cc: Olivier Matz Cc: Aaron Conole Signed-off-by: Pavan Nikhilesh Signed-off-by: Jerin Jacob Signed-off-by: Vamsi Attunuru --- drivers/mempool/octeontx2/otx2_mempool_ops.c | 291 +++++++++++++++++++ 1 file changed, 291 insertions(+) diff --git a/drivers/mempool/octeontx2/otx2_mempool_ops.c b/drivers/mempool/octeontx2/otx2_mempool_ops.c index c59bd73c0..e6737abda 100644 --- a/drivers/mempool/octeontx2/otx2_mempool_ops.c +++ b/drivers/mempool/octeontx2/otx2_mempool_ops.c @@ -37,6 +37,293 @@ npa_lf_aura_op_alloc_one(const int64_t wdata, int64_t * const addr, return -ENOENT; } +#if defined(RTE_ARCH_ARM64) +static __rte_noinline int +npa_lf_aura_op_search_alloc(const int64_t wdata, int64_t * const addr, + void **obj_table, unsigned int n) +{ + uint8_t i; + + for (i = 0; i < n; i++) { + if (obj_table[i] != NULL) + continue; + if (npa_lf_aura_op_alloc_one(wdata, addr, obj_table, i)) + return -ENOENT; + } + + return 0; +} + +static __attribute__((optimize("-O3"))) __rte_noinline int __hot +npa_lf_aura_op_alloc_bulk(const int64_t wdata, int64_t * const addr, + unsigned int n, void **obj_table) +{ + const __uint128_t wdata128 = ((__uint128_t)wdata << 64) | wdata; + uint64x2_t failed = vdupq_n_u64(~0); + + switch (n) { + case 32: + { + __uint128_t t0, t1, t2, t3, t4, t5, t6, t7, t8, t9; + __uint128_t t10, t11; + + asm volatile ( + ".cpu generic+lse\n" + "casp %[t0], %H[t0], %[wdata], %H[wdata], [%[loc]]\n" + "casp %[t1], %H[t1], %[wdata], %H[wdata], [%[loc]]\n" + "casp %[t2], %H[t2], %[wdata], %H[wdata], [%[loc]]\n" + "casp %[t3], %H[t3], %[wdata], %H[wdata], [%[loc]]\n" + "casp %[t4], %H[t4], %[wdata], %H[wdata], [%[loc]]\n" + "casp %[t5], %H[t5], %[wdata], %H[wdata], [%[loc]]\n" + "casp %[t6], %H[t6], %[wdata], %H[wdata], [%[loc]]\n" + "casp %[t7], %H[t7], %[wdata], %H[wdata], [%[loc]]\n" + "casp %[t8], %H[t8], %[wdata], %H[wdata], [%[loc]]\n" + "casp %[t9], %H[t9], %[wdata], %H[wdata], [%[loc]]\n" + "casp %[t10], %H[t10], %[wdata], %H[wdata], [%[loc]]\n" + "casp %[t11], %H[t11], %[wdata], %H[wdata], [%[loc]]\n" + "fmov d16, %[t0]\n" + "fmov v16.D[1], %H[t0]\n" + "casp %[t0], %H[t0], %[wdata], %H[wdata], [%[loc]]\n" + "fmov d17, %[t1]\n" + "fmov v17.D[1], %H[t1]\n" + "casp %[t1], %H[t1], %[wdata], %H[wdata], [%[loc]]\n" + "fmov d18, %[t2]\n" + "fmov v18.D[1], %H[t2]\n" + "casp %[t2], %H[t2], %[wdata], %H[wdata], [%[loc]]\n" + "fmov d19, %[t3]\n" + "fmov v19.D[1], %H[t3]\n" + "casp %[t3], %H[t3], %[wdata], %H[wdata], [%[loc]]\n" + "and %[failed].16B, %[failed].16B, v16.16B\n" + "and %[failed].16B, %[failed].16B, v17.16B\n" + "and %[failed].16B, %[failed].16B, v18.16B\n" + "and %[failed].16B, %[failed].16B, v19.16B\n" + "fmov d20, %[t4]\n" + "fmov v20.D[1], %H[t4]\n" + "fmov d21, %[t5]\n" + "fmov v21.D[1], %H[t5]\n" + "fmov d22, %[t6]\n" + "fmov v22.D[1], %H[t6]\n" + "fmov d23, %[t7]\n" + "fmov v23.D[1], %H[t7]\n" + "and %[failed].16B, %[failed].16B, v20.16B\n" + "and %[failed].16B, %[failed].16B, v21.16B\n" + "and %[failed].16B, %[failed].16B, v22.16B\n" + "and %[failed].16B, %[failed].16B, v23.16B\n" + "st1 { v16.2d, v17.2d, v18.2d, v19.2d}, [%[dst]], 64\n" + "st1 { v20.2d, v21.2d, v22.2d, v23.2d}, [%[dst]], 64\n" + "fmov d16, %[t8]\n" + "fmov v16.D[1], %H[t8]\n" + "fmov d17, %[t9]\n" + "fmov v17.D[1], %H[t9]\n" + "fmov d18, %[t10]\n" + "fmov v18.D[1], %H[t10]\n" + "fmov d19, %[t11]\n" + "fmov v19.D[1], %H[t11]\n" + "and %[failed].16B, %[failed].16B, v16.16B\n" + "and %[failed].16B, %[failed].16B, v17.16B\n" + "and %[failed].16B, %[failed].16B, v18.16B\n" + "and %[failed].16B, %[failed].16B, 
v19.16B\n" + "fmov d20, %[t0]\n" + "fmov v20.D[1], %H[t0]\n" + "fmov d21, %[t1]\n" + "fmov v21.D[1], %H[t1]\n" + "fmov d22, %[t2]\n" + "fmov v22.D[1], %H[t2]\n" + "fmov d23, %[t3]\n" + "fmov v23.D[1], %H[t3]\n" + "and %[failed].16B, %[failed].16B, v20.16B\n" + "and %[failed].16B, %[failed].16B, v21.16B\n" + "and %[failed].16B, %[failed].16B, v22.16B\n" + "and %[failed].16B, %[failed].16B, v23.16B\n" + "st1 { v16.2d, v17.2d, v18.2d, v19.2d}, [%[dst]], 64\n" + "st1 { v20.2d, v21.2d, v22.2d, v23.2d}, [%[dst]], 64\n" + : "+Q" (*addr), [failed] "=&w" (failed), + [t0] "=&r" (t0), [t1] "=&r" (t1), [t2] "=&r" (t2), + [t3] "=&r" (t3), [t4] "=&r" (t4), [t5] "=&r" (t5), + [t6] "=&r" (t6), [t7] "=&r" (t7), [t8] "=&r" (t8), + [t9] "=&r" (t9), [t10] "=&r" (t10), [t11] "=&r" (t11) + : [wdata] "r" (wdata128), [dst] "r" (obj_table), + [loc] "r" (addr) + : "memory", "v16", "v17", "v18", + "v19", "v20", "v21", "v22", "v23" + ); + break; + } + case 16: + { + __uint128_t t0, t1, t2, t3, t4, t5, t6, t7; + + asm volatile ( + ".cpu generic+lse\n" + "casp %[t0], %H[t0], %[wdata], %H[wdata], [%[loc]]\n" + "casp %[t1], %H[t1], %[wdata], %H[wdata], [%[loc]]\n" + "casp %[t2], %H[t2], %[wdata], %H[wdata], [%[loc]]\n" + "casp %[t3], %H[t3], %[wdata], %H[wdata], [%[loc]]\n" + "casp %[t4], %H[t4], %[wdata], %H[wdata], [%[loc]]\n" + "casp %[t5], %H[t5], %[wdata], %H[wdata], [%[loc]]\n" + "casp %[t6], %H[t6], %[wdata], %H[wdata], [%[loc]]\n" + "casp %[t7], %H[t7], %[wdata], %H[wdata], [%[loc]]\n" + "fmov d16, %[t0]\n" + "fmov v16.D[1], %H[t0]\n" + "fmov d17, %[t1]\n" + "fmov v17.D[1], %H[t1]\n" + "fmov d18, %[t2]\n" + "fmov v18.D[1], %H[t2]\n" + "fmov d19, %[t3]\n" + "fmov v19.D[1], %H[t3]\n" + "and %[failed].16B, %[failed].16B, v16.16B\n" + "and %[failed].16B, %[failed].16B, v17.16B\n" + "and %[failed].16B, %[failed].16B, v18.16B\n" + "and %[failed].16B, %[failed].16B, v19.16B\n" + "fmov d20, %[t4]\n" + "fmov v20.D[1], %H[t4]\n" + "fmov d21, %[t5]\n" + "fmov v21.D[1], %H[t5]\n" + "fmov d22, %[t6]\n" + "fmov v22.D[1], %H[t6]\n" + "fmov d23, %[t7]\n" + "fmov v23.D[1], %H[t7]\n" + "and %[failed].16B, %[failed].16B, v20.16B\n" + "and %[failed].16B, %[failed].16B, v21.16B\n" + "and %[failed].16B, %[failed].16B, v22.16B\n" + "and %[failed].16B, %[failed].16B, v23.16B\n" + "st1 { v16.2d, v17.2d, v18.2d, v19.2d}, [%[dst]], 64\n" + "st1 { v20.2d, v21.2d, v22.2d, v23.2d}, [%[dst]], 64\n" + : "+Q" (*addr), [failed] "=&w" (failed), + [t0] "=&r" (t0), [t1] "=&r" (t1), [t2] "=&r" (t2), + [t3] "=&r" (t3), [t4] "=&r" (t4), [t5] "=&r" (t5), + [t6] "=&r" (t6), [t7] "=&r" (t7) + : [wdata] "r" (wdata128), [dst] "r" (obj_table), + [loc] "r" (addr) + : "memory", "v16", "v17", "v18", "v19", + "v20", "v21", "v22", "v23" + ); + break; + } + case 8: + { + __uint128_t t0, t1, t2, t3; + + asm volatile ( + ".cpu generic+lse\n" + "casp %[t0], %H[t0], %[wdata], %H[wdata], [%[loc]]\n" + "casp %[t1], %H[t1], %[wdata], %H[wdata], [%[loc]]\n" + "casp %[t2], %H[t2], %[wdata], %H[wdata], [%[loc]]\n" + "casp %[t3], %H[t3], %[wdata], %H[wdata], [%[loc]]\n" + "fmov d16, %[t0]\n" + "fmov v16.D[1], %H[t0]\n" + "fmov d17, %[t1]\n" + "fmov v17.D[1], %H[t1]\n" + "fmov d18, %[t2]\n" + "fmov v18.D[1], %H[t2]\n" + "fmov d19, %[t3]\n" + "fmov v19.D[1], %H[t3]\n" + "and %[failed].16B, %[failed].16B, v16.16B\n" + "and %[failed].16B, %[failed].16B, v17.16B\n" + "and %[failed].16B, %[failed].16B, v18.16B\n" + "and %[failed].16B, %[failed].16B, v19.16B\n" + "st1 { v16.2d, v17.2d, v18.2d, v19.2d}, [%[dst]], 64\n" + : "+Q" (*addr), [failed] "=&w" (failed), + [t0] "=&r" (t0), 
[t1] "=&r" (t1), [t2] "=&r" (t2), + [t3] "=&r" (t3) + : [wdata] "r" (wdata128), [dst] "r" (obj_table), + [loc] "r" (addr) + : "memory", "v16", "v17", "v18", "v19" + ); + break; + } + case 4: + { + __uint128_t t0, t1; + + asm volatile ( + ".cpu generic+lse\n" + "casp %[t0], %H[t0], %[wdata], %H[wdata], [%[loc]]\n" + "casp %[t1], %H[t1], %[wdata], %H[wdata], [%[loc]]\n" + "fmov d16, %[t0]\n" + "fmov v16.D[1], %H[t0]\n" + "fmov d17, %[t1]\n" + "fmov v17.D[1], %H[t1]\n" + "and %[failed].16B, %[failed].16B, v16.16B\n" + "and %[failed].16B, %[failed].16B, v17.16B\n" + "st1 { v16.2d, v17.2d}, [%[dst]], 32\n" + : "+Q" (*addr), [failed] "=&w" (failed), + [t0] "=&r" (t0), [t1] "=&r" (t1) + : [wdata] "r" (wdata128), [dst] "r" (obj_table), + [loc] "r" (addr) + : "memory", "v16", "v17" + ); + break; + } + case 2: + { + __uint128_t t0; + + asm volatile ( + ".cpu generic+lse\n" + "casp %[t0], %H[t0], %[wdata], %H[wdata], [%[loc]]\n" + "fmov d16, %[t0]\n" + "fmov v16.D[1], %H[t0]\n" + "and %[failed].16B, %[failed].16B, v16.16B\n" + "st1 { v16.2d}, [%[dst]], 16\n" + : "+Q" (*addr), [failed] "=&w" (failed), + [t0] "=&r" (t0) + : [wdata] "r" (wdata128), [dst] "r" (obj_table), + [loc] "r" (addr) + : "memory", "v16" + ); + break; + } + case 1: + return npa_lf_aura_op_alloc_one(wdata, addr, obj_table, 0); + } + + if (unlikely(!(vgetq_lane_u64(failed, 0) & vgetq_lane_u64(failed, 1)))) + return npa_lf_aura_op_search_alloc(wdata, addr, (void **) + ((char *)obj_table - (sizeof(uint64_t) * n)), n); + + return 0; +} + +static __rte_noinline void +otx2_npa_clear_alloc(struct rte_mempool *mp, void **obj_table, unsigned int n) +{ + unsigned int i; + + for (i = 0; i < n; i++) { + if (obj_table[i] != NULL) { + otx2_npa_enq(mp, &obj_table[i], 1); + obj_table[i] = NULL; + } + } +} + +static inline int __hot +otx2_npa_deq_arm64(struct rte_mempool *mp, void **obj_table, unsigned int n) +{ + const int64_t wdata = npa_lf_aura_handle_to_aura(mp->pool_id); + void **obj_table_bak = obj_table; + const unsigned int nfree = n; + unsigned int parts; + + int64_t * const addr = (int64_t * const) + (npa_lf_aura_handle_to_base(mp->pool_id) + + NPA_LF_AURA_OP_ALLOCX(0)); + while (n) { + parts = n > 31 ? 
32 : rte_align32prevpow2(n); + n -= parts; + if (unlikely(npa_lf_aura_op_alloc_bulk(wdata, addr, + parts, obj_table))) { + otx2_npa_clear_alloc(mp, obj_table_bak, nfree - n); + return -ENOENT; + } + obj_table += parts; + } + + return 0; +} +#endif + static inline int __hot otx2_npa_deq(struct rte_mempool *mp, void **obj_table, unsigned int n) { @@ -463,7 +750,11 @@ static struct rte_mempool_ops otx2_npa_ops = { .get_count = otx2_npa_get_count, .calc_mem_size = otx2_npa_calc_mem_size, .populate = otx2_npa_populate, +#if defined(RTE_ARCH_ARM64) + .dequeue = otx2_npa_deq_arm64, +#else .dequeue = otx2_npa_deq, +#endif }; MEMPOOL_REGISTER_OPS(otx2_npa_ops); From patchwork Mon Jun 17 15:55:36 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jerin Jacob Kollanukkaran X-Patchwork-Id: 54877 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 0E32F1BFFA; Mon, 17 Jun 2019 17:57:41 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by dpdk.org (Postfix) with ESMTP id 37D801BFB6 for ; Mon, 17 Jun 2019 17:57:10 +0200 (CEST) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.16.0.27/8.16.0.27) with SMTP id x5HFppIu000981 for ; Mon, 17 Jun 2019 08:57:09 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0818; bh=4xEJY/5BkQjftI32K5M9cpH9eYN3GfgpBQsO/PJpimM=; b=IzJJ/7eGegBnA+NCYRKoqgt5h6G34ElQE1zaVBJncXGcLbcsFVZXqVcH0jsU1u7Fo7QZ RM8GG0z1ux3jDiZmrsHfvfGGajWVhUba7HJWFwKKW6/EVM0yeDUL4ZfRpQzkDqX6W3w4 0TreseteveqD3pZ24DoYG98CeQocClCJYvc9P7iugTXe7wYb0lHu6ZtAXOMmtAfM45aG 6qtRMZ5raM5YUtBcKyCdfoasNcpCeHAUqvBY2BpPDbBZqMKfYo1fmDu8cjDBnWg1E8zL 7XiAIAniNXD04aqrD8KcVdlj9nWhANj4JP4yPooY1vYjpHcllxJISH7OlpYYMDIbRP5u ug== Received: from sc-exch01.marvell.com ([199.233.58.181]) by mx0b-0016f401.pphosted.com with ESMTP id 2t506hyb44-2 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT) for ; Mon, 17 Jun 2019 08:57:09 -0700 Received: from SC-EXCH03.marvell.com (10.93.176.83) by SC-EXCH01.marvell.com (10.93.176.81) with Microsoft SMTP Server (TLS) id 15.0.1367.3; Mon, 17 Jun 2019 08:57:08 -0700 Received: from maili.marvell.com (10.93.176.43) by SC-EXCH03.marvell.com (10.93.176.83) with Microsoft SMTP Server id 15.0.1367.3 via Frontend Transport; Mon, 17 Jun 2019 08:57:08 -0700 Received: from jerin-lab.marvell.com (jerin-lab.marvell.com [10.28.34.14]) by maili.marvell.com (Postfix) with ESMTP id 635223F703F; Mon, 17 Jun 2019 08:57:06 -0700 (PDT) From: To: , Jerin Jacob , Nithin Dabilpuram , Vamsi Attunuru CC: Harman Kalra Date: Mon, 17 Jun 2019 21:25:36 +0530 Message-ID: <20190617155537.36144-27-jerinj@marvell.com> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20190617155537.36144-1-jerinj@marvell.com> References: <20190601014905.45531-1-jerinj@marvell.com> <20190617155537.36144-1-jerinj@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:, , definitions=2019-06-17_07:, , signatures=0 Subject: [dpdk-dev] [PATCH v3 26/27] mempool/octeontx2: add devargs for max pool selection X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: 
, List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Jerin Jacob The maximum number of mempools per application needs to be configured on HW during mempool driver initialization. HW can support up to 1M mempools, Since each mempool costs set of HW resources, the max_pools devargs parameter is being introduced to configure the number of mempools required for the application. For example: -w 0002:02:00.0,max_pools=512 With the above configuration, the driver will set up only 512 mempools for the given application to save HW resources. Signed-off-by: Jerin Jacob Signed-off-by: Harman Kalra --- drivers/mempool/octeontx2/otx2_mempool.c | 41 +++++++++++++++++++++++- 1 file changed, 40 insertions(+), 1 deletion(-) diff --git a/drivers/mempool/octeontx2/otx2_mempool.c b/drivers/mempool/octeontx2/otx2_mempool.c index 1bcb86cf4..ff7fcac85 100644 --- a/drivers/mempool/octeontx2/otx2_mempool.c +++ b/drivers/mempool/octeontx2/otx2_mempool.c @@ -7,6 +7,7 @@ #include #include #include +#include #include #include #include @@ -142,6 +143,42 @@ otx2_aura_size_to_u32(uint8_t val) return 1 << (val + 6); } +static int +parse_max_pools(const char *key, const char *value, void *extra_args) +{ + RTE_SET_USED(key); + uint32_t val; + + val = atoi(value); + if (val < otx2_aura_size_to_u32(NPA_AURA_SZ_128)) + val = 128; + if (val > otx2_aura_size_to_u32(NPA_AURA_SZ_1M)) + val = BIT_ULL(20); + + *(uint8_t *)extra_args = rte_log2_u32(val) - 6; + return 0; +} + +#define OTX2_MAX_POOLS "max_pools" + +static uint8_t +otx2_parse_aura_size(struct rte_devargs *devargs) +{ + uint8_t aura_sz = NPA_AURA_SZ_128; + struct rte_kvargs *kvlist; + + if (devargs == NULL) + goto exit; + kvlist = rte_kvargs_parse(devargs->args, NULL); + if (kvlist == NULL) + goto exit; + + rte_kvargs_process(kvlist, OTX2_MAX_POOLS, &parse_max_pools, &aura_sz); + rte_kvargs_free(kvlist); +exit: + return aura_sz; +} + static inline int npa_lf_attach(struct otx2_mbox *mbox) { @@ -234,7 +271,7 @@ otx2_npa_lf_init(struct rte_pci_device *pci_dev, void *otx2_dev) if (rc) goto npa_detach; - aura_sz = NPA_AURA_SZ_128; + aura_sz = otx2_parse_aura_size(pci_dev->device.devargs); nr_pools = otx2_aura_size_to_u32(aura_sz); lf = &dev->npalf; @@ -397,3 +434,5 @@ static struct rte_pci_driver pci_npa = { RTE_PMD_REGISTER_PCI(mempool_octeontx2, pci_npa); RTE_PMD_REGISTER_PCI_TABLE(mempool_octeontx2, pci_npa_map); RTE_PMD_REGISTER_KMOD_DEP(mempool_octeontx2, "vfio-pci"); +RTE_PMD_REGISTER_PARAM_STRING(mempool_octeontx2, + OTX2_MAX_POOLS "=<128-1048576>"); From patchwork Mon Jun 17 15:55:37 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jerin Jacob Kollanukkaran X-Patchwork-Id: 54873 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 31A5D1BFBD; Mon, 17 Jun 2019 17:57:39 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by dpdk.org (Postfix) with ESMTP id 06E1C1BFA6 for ; Mon, 17 Jun 2019 17:57:14 +0200 (CEST) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.16.0.27/8.16.0.27) with SMTP id x5HFpuhH001280; Mon, 17 Jun 2019 08:57:14 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : 
content-transfer-encoding : content-type; s=pfpt0818; bh=tUWITAyRLaL+6Ua2izqdvapXTtH0OAXXvmTUiqOVkUQ=; b=Yp6Wv61xKny7wID6iLGhmJ0TzbgkP3h2VcnEO52UCZteVuY0NBsrab1UUp+1IfQBLuKK pNaXWdasPaSZyivr9r8QcaD164C1nvXZ7lZdbnrK+vc6tSK0Sx/nu4B/lPqwKM/v083w IxJrn0+CIeJ6CueeXFmzJquG1yRQPpkRmh1lUZnI7FgOQH5jvFYdPBpjmNTiJ8WptxCs jLiJHOJGovrOmBGBfDUic5d772VXw6+fIEcfHSemSyOqd/fGYT757hHiEJX69S7w2a4v YTqPL2vuCVO8DFg2G85jAppe1dHVfS2Y6nKZhGrLmrJlVW7MsNfimogaD199mqVdgrI0 nw== Received: from sc-exch03.marvell.com ([199.233.58.183]) by mx0b-0016f401.pphosted.com with ESMTP id 2t506hyb4c-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Mon, 17 Jun 2019 08:57:14 -0700 Received: from SC-EXCH01.marvell.com (10.93.176.81) by SC-EXCH03.marvell.com (10.93.176.83) with Microsoft SMTP Server (TLS) id 15.0.1367.3; Mon, 17 Jun 2019 08:57:12 -0700 Received: from maili.marvell.com (10.93.176.43) by SC-EXCH01.marvell.com (10.93.176.81) with Microsoft SMTP Server id 15.0.1367.3 via Frontend Transport; Mon, 17 Jun 2019 08:57:12 -0700 Received: from jerin-lab.marvell.com (jerin-lab.marvell.com [10.28.34.14]) by maili.marvell.com (Postfix) with ESMTP id B299F3F703F; Mon, 17 Jun 2019 08:57:09 -0700 (PDT) From: To: , Thomas Monjalon , John McNamara , Marko Kovacevic , "Jerin Jacob" , Nithin Dabilpuram , Vamsi Attunuru CC: Vivek Sharma Date: Mon, 17 Jun 2019 21:25:37 +0530 Message-ID: <20190617155537.36144-28-jerinj@marvell.com> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20190617155537.36144-1-jerinj@marvell.com> References: <20190601014905.45531-1-jerinj@marvell.com> <20190617155537.36144-1-jerinj@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:, , definitions=2019-06-17_07:, , signatures=0 Subject: [dpdk-dev] [PATCH v3 27/27] doc: add Marvell OCTEON TX2 mempool documentation X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Jerin Jacob Add Marvell OCTEON TX2 mempool documentation. This patch also updates the MAINTAINERS file and updates shared library versions in release_19_08.rst. Cc: John McNamara Cc: Thomas Monjalon Signed-off-by: Jerin Jacob Signed-off-by: Vivek Sharma Signed-off-by: Vamsi Attunuru Signed-off-by: Nithin Dabilpuram --- MAINTAINERS | 10 +++ doc/guides/mempool/index.rst | 1 + doc/guides/mempool/octeontx2.rst | 90 ++++++++++++++++++++++++++ doc/guides/platform/octeontx2.rst | 2 + doc/guides/rel_notes/release_19_08.rst | 2 + 5 files changed, 105 insertions(+) create mode 100644 doc/guides/mempool/octeontx2.rst diff --git a/MAINTAINERS b/MAINTAINERS index 0212fe6d0..4ea759197 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -444,6 +444,16 @@ M: Artem V. Andreev M: Andrew Rybchenko F: drivers/mempool/bucket/ +Marvell OCTEON TX2 +M: Jerin Jacob +M: Nithin Dabilpuram +M: Vamsi Attunuru +F: drivers/common/octeontx2/ +F: drivers/mempool/octeontx2/ +F: doc/guides/platform/img/octeontx2_* +F: doc/guides/platform/octeontx2.rst +F: doc/guides/mempool/octeontx2.rst + Bus Drivers ----------- diff --git a/doc/guides/mempool/index.rst b/doc/guides/mempool/index.rst index 2ccf91633..756610264 100644 --- a/doc/guides/mempool/index.rst +++ b/doc/guides/mempool/index.rst @@ -12,3 +12,4 @@ application through the mempool API. 
:numbered: octeontx + octeontx2 diff --git a/doc/guides/mempool/octeontx2.rst b/doc/guides/mempool/octeontx2.rst new file mode 100644 index 000000000..2c9a0953b --- /dev/null +++ b/doc/guides/mempool/octeontx2.rst @@ -0,0 +1,90 @@ +.. SPDX-License-Identifier: BSD-3-Clause + Copyright(c) 2019 Marvell International Ltd. + +OCTEON TX2 NPA Mempool Driver +============================= + +The OCTEON TX2 NPA PMD (**librte_mempool_octeontx2**) provides mempool +driver support for the integrated mempool device found in **Marvell OCTEON TX2** SoC family. + +More information about OCTEON TX2 SoC can be found at `Marvell Official Website +`_. + +Features +-------- + +OCTEON TX2 NPA PMD supports: + +- Up to 128 NPA LFs +- 1M Pools per LF +- HW mempool manager +- Ethdev Rx buffer allocation in HW to save CPU cycles in the Rx path. +- Ethdev Tx buffer recycling in HW to save CPU cycles in the Tx path. + +Prerequisites and Compilation procedure +--------------------------------------- + + See :doc:`../platform/octeontx2` for setup information. + +Pre-Installation Configuration +------------------------------ + +Compile time Config Options +~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +The following option can be modified in the ``config`` file. + +- ``CONFIG_RTE_LIBRTE_OCTEONTX2_MEMPOOL`` (default ``y``) + + Toggle compilation of the ``librte_mempool_octeontx2`` driver. + +Runtime Config Options +~~~~~~~~~~~~~~~~~~~~~~ + +- ``Maximum number of mempools per application`` (default ``128``) + + The maximum number of mempools per application needs to be configured on + HW during mempool driver initialization. HW can support up to 1M mempools, + Since each mempool costs set of HW resources, the ``max_pools`` ``devargs`` + parameter is being introduced to configure the number of mempools required + for the application. + For example:: + + -w 0002:02:00.0,max_pools=512 + + With the above configuration, the driver will set up only 512 mempools for + the given application to save HW resources. + +.. note:: + + Since this configuration is per application, the end user needs to + provide ``max_pools`` parameter to the first PCIe device probed by the given + application. + +Debugging Options +~~~~~~~~~~~~~~~~~ + +.. _table_octeontx2_mempool_debug_options: + +.. table:: OCTEON TX2 mempool debug options + + +---+------------+-------------------------------------------------------+ + | # | Component | EAL log command | + +===+============+=======================================================+ + | 1 | NPA | --log-level='pmd\.mempool.octeontx2,8' | + +---+------------+-------------------------------------------------------+ + +Standalone mempool device +~~~~~~~~~~~~~~~~~~~~~~~~~ + + The ``usertools/dpdk-devbind.py`` script shall enumerate all the mempool devices + available in the system. In order to avoid, the end user to bind the mempool + device prior to use ethdev and/or eventdev device, the respective driver + configures an NPA LF and attach to the first probed ethdev or eventdev device. 
+ If the end user needs to run the mempool as a standalone device + (without ethdev or eventdev), the mempool device has to be bound explicitly + using ``usertools/dpdk-devbind.py`` + + Example command to run ``mempool_autotest`` test with standalone OCTEONTX2 NPA device:: + + echo "mempool_autotest" | build/app/test -c 0xf0 --mbuf-pool-ops-name="octeontx2_npa" diff --git a/doc/guides/platform/octeontx2.rst b/doc/guides/platform/octeontx2.rst index 3a5e03050..c9ea45647 100644 --- a/doc/guides/platform/octeontx2.rst +++ b/doc/guides/platform/octeontx2.rst @@ -98,6 +98,8 @@ HW Offload Drivers This section lists dataplane H/W block(s) available in OCTEON TX2 SoC. +#. **Mempool Driver** + See :doc:`../mempool/octeontx2` for NPA mempool driver information. Procedure to Setup Platform --------------------------- diff --git a/doc/guides/rel_notes/release_19_08.rst b/doc/guides/rel_notes/release_19_08.rst index 8c3932d06..118d8fbe6 100644 --- a/doc/guides/rel_notes/release_19_08.rst +++ b/doc/guides/rel_notes/release_19_08.rst @@ -171,6 +171,7 @@ The libraries prepended with a plus sign were incremented in this version. librte_cfgfile.so.2 librte_cmdline.so.2 librte_compressdev.so.1 + + librte_common_octeontx2.so.1 librte_cryptodev.so.7 librte_distributor.so.1 librte_eal.so.10 @@ -191,6 +192,7 @@ The libraries prepended with a plus sign were incremented in this version. librte_mbuf.so.5 librte_member.so.1 librte_mempool.so.5 + + librte_mempool_octeontx2.so.1 librte_meter.so.3 librte_metrics.so.1 librte_net.so.1