From patchwork Thu Feb 28 07:13:10 2019
X-Patchwork-Submitter: "Xu, Rosen"
X-Patchwork-Id: 50614
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Rosen Xu
To: dev@dpdk.org
Cc: ferruh.yigit@intel.com, tianfei.zhang@intel.com, dan.wei@intel.com, rosen.xu@intel.com, andy.pei@intel.com, qiming.yang@intel.com, haiyue.wang@intel.com, santos.chen@intel.com, zhang.zhang@intel.com
Date: Thu, 28 Feb 2019 15:13:10 +0800
Message-Id: <1551338000-120348-2-git-send-email-rosen.xu@intel.com>
In-Reply-To: <1551338000-120348-1-git-send-email-rosen.xu@intel.com>
Subject: [dpdk-dev] [PATCH v1 01/11] drivers/bus/ifpga: add AFU shared data

An AFU can be implemented in many different acceleration devices; these
devices need shared data in which to store private information while they
are handled by users.

Signed-off-by: Rosen Xu
Signed-off-by: Andy Pei
---
 drivers/bus/ifpga/rte_bus_ifpga.h | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/drivers/bus/ifpga/rte_bus_ifpga.h b/drivers/bus/ifpga/rte_bus_ifpga.h
index 0bf43ba..820eeaa 100644
--- a/drivers/bus/ifpga/rte_bus_ifpga.h
+++ b/drivers/bus/ifpga/rte_bus_ifpga.h
@@ -17,6 +17,7 @@
 #include
 #include
+#include

 /** Name of Intel FPGA Bus */
 #define IFPGA_BUS_NAME ifpga
@@ -60,6 +61,11 @@ struct rte_afu_pr_conf {
 #define AFU_PRI_STR_SIZE (PCI_PRI_STR_SIZE + 8)

+struct rte_afu_shared {
+	rte_spinlock_t lock;
+	void *data;
+};
+
 /**
  * A structure describing a AFU device.
  */
@@ -71,6 +77,7 @@ struct rte_afu_device {
 	uint32_t num_region;   /**< number of regions found */
 	struct rte_mem_resource mem_resource[PCI_MAX_RESOURCE];
 						/**< AFU Memory Resource */
+	struct rte_afu_shared shared;
 	struct rte_intr_handle intr_handle;     /**< Interrupt handle */
 	struct rte_afu_driver *driver;          /**< Associated driver */
 	char path[IFPGA_BUS_BITSTREAM_PATH_MAX_LEN];

From patchwork Thu Feb 28 07:13:11 2019
X-Patchwork-Submitter: "Xu, Rosen"
X-Patchwork-Id: 50615
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Rosen Xu
To: dev@dpdk.org
Cc: ferruh.yigit@intel.com, tianfei.zhang@intel.com, dan.wei@intel.com, rosen.xu@intel.com, andy.pei@intel.com, qiming.yang@intel.com, haiyue.wang@intel.com, santos.chen@intel.com, zhang.zhang@intel.com
Date: Thu, 28 Feb 2019 15:13:11 +0800
Message-Id: <1551338000-120348-3-git-send-email-rosen.xu@intel.com>
In-Reply-To: <1551338000-120348-1-git-send-email-rosen.xu@intel.com>
Subject: [dpdk-dev] [PATCH v1 02/11] drivers/bus/ifpga: add function for AFU search by name

In many scenarios an AFU needs to be found by name; this patch adds a
function for that.

Signed-off-by: Rosen Xu
Signed-off-by: Andy Pei
---
 drivers/bus/ifpga/ifpga_bus.c     | 13 +++++++++++++
 drivers/bus/ifpga/rte_bus_ifpga.h |  3 +++
 2 files changed, 16 insertions(+)

diff --git a/drivers/bus/ifpga/ifpga_bus.c b/drivers/bus/ifpga/ifpga_bus.c
index 55d3abf..dfd6b1f 100644
--- a/drivers/bus/ifpga/ifpga_bus.c
+++ b/drivers/bus/ifpga/ifpga_bus.c
@@ -73,6 +73,19 @@ void rte_ifpga_driver_unregister(struct rte_afu_driver *driver)
 	return NULL;
 }

+struct rte_afu_device *
+rte_ifpga_find_afu_by_name(const char *name)
+{
+	struct rte_afu_device *afu_dev = NULL;
+
+	TAILQ_FOREACH(afu_dev, &ifpga_afu_dev_list, next) {
+		if (afu_dev &&
+		    !strcmp(afu_dev->device.name, name))
+			return afu_dev;
+	}
+	return NULL;
+}
+
 static const char * const valid_args[] = {
 #define IFPGA_ARG_NAME "ifpga"
 	IFPGA_ARG_NAME,
diff --git a/drivers/bus/ifpga/rte_bus_ifpga.h b/drivers/bus/ifpga/rte_bus_ifpga.h
index 820eeaa..5762a33 100644
--- a/drivers/bus/ifpga/rte_bus_ifpga.h
+++ b/drivers/bus/ifpga/rte_bus_ifpga.h
@@ -119,6 +119,9 @@ struct rte_afu_driver {
 	return NULL;
 }

+struct rte_afu_device *
+rte_ifpga_find_afu_by_name(const char *name);
+
 /**
  * Register a ifpga afu device driver.
  *

From patchwork Thu Feb 28 07:13:12 2019
X-Patchwork-Submitter: "Xu, Rosen"
X-Patchwork-Id: 50616
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Rosen Xu
To: dev@dpdk.org
Cc: ferruh.yigit@intel.com, tianfei.zhang@intel.com, dan.wei@intel.com, rosen.xu@intel.com, andy.pei@intel.com, qiming.yang@intel.com, haiyue.wang@intel.com, santos.chen@intel.com, zhang.zhang@intel.com
Date: Thu, 28 Feb 2019 15:13:12 +0800
Message-Id: <1551338000-120348-4-git-send-email-rosen.xu@intel.com>
In-Reply-To: <1551338000-120348-1-git-send-email-rosen.xu@intel.com>
Subject: [dpdk-dev] [PATCH v1 03/11] drivers/raw/ifpga_rawdev: add OPAE share code for IPN3KE

Add OPAE shared code for the Intel FPGA Acceleration NIC IPN3KE.

Signed-off-by: Tianfei Zhang
---
 drivers/raw/ifpga_rawdev/base/Makefile             |   7 +
 drivers/raw/ifpga_rawdev/base/ifpga_api.c          |  69 ++-
 drivers/raw/ifpga_rawdev/base/ifpga_api.h          |   1 +
 drivers/raw/ifpga_rawdev/base/ifpga_defines.h      |  86 +++-
 drivers/raw/ifpga_rawdev/base/ifpga_enumerate.c    | 342 +++++--------
 drivers/raw/ifpga_rawdev/base/ifpga_feature_dev.c  | 170 +++++--
 drivers/raw/ifpga_rawdev/base/ifpga_feature_dev.h  |  62 ++-
 drivers/raw/ifpga_rawdev/base/ifpga_fme.c          | 373 ++++++++++++++
 drivers/raw/ifpga_rawdev/base/ifpga_fme_pr.c       |   2 +-
 drivers/raw/ifpga_rawdev/base/ifpga_hw.h           |  21 +-
 drivers/raw/ifpga_rawdev/base/ifpga_port.c         |  21 +
 drivers/raw/ifpga_rawdev/base/opae_at24_eeprom.c   |  89 ++++
 drivers/raw/ifpga_rawdev/base/opae_at24_eeprom.h   |  14 +
 drivers/raw/ifpga_rawdev/base/opae_hw_api.c        | 189 ++++++-
 drivers/raw/ifpga_rawdev/base/opae_hw_api.h        |  46 +-
 drivers/raw/ifpga_rawdev/base/opae_i2c.c           | 490 +++++++++++++++++++
 drivers/raw/ifpga_rawdev/base/opae_i2c.h           | 127 +++++
 drivers/raw/ifpga_rawdev/base/opae_intel_max10.c   | 106 ++++
 drivers/raw/ifpga_rawdev/base/opae_intel_max10.h   |  36 ++
 drivers/raw/ifpga_rawdev/base/opae_mdio.c          | 542 +++++++++++++++++++++
 drivers/raw/ifpga_rawdev/base/opae_mdio.h          |  90 ++++
 drivers/raw/ifpga_rawdev/base/opae_osdep.h         |  11 +-
 drivers/raw/ifpga_rawdev/base/opae_phy_group.c     |  88 ++++
 drivers/raw/ifpga_rawdev/base/opae_phy_group.h     |  53 ++
 drivers/raw/ifpga_rawdev/base/opae_spi.c           | 260 ++++++++++
 drivers/raw/ifpga_rawdev/base/opae_spi.h           | 120 +++++
 .../raw/ifpga_rawdev/base/opae_spi_transaction.c   | 438 +++++++++++++++++
 .../ifpga_rawdev/base/osdep_raw/osdep_generic.h    |   1 +
 .../ifpga_rawdev/base/osdep_rte/osdep_generic.h    |  10 +
 29 files changed, 3549 insertions(+), 315 deletions(-)
 create mode 100644 drivers/raw/ifpga_rawdev/base/opae_at24_eeprom.c
 create mode 100644 drivers/raw/ifpga_rawdev/base/opae_at24_eeprom.h
 create
mode 100644 drivers/raw/ifpga_rawdev/base/opae_i2c.c create mode 100644 drivers/raw/ifpga_rawdev/base/opae_i2c.h create mode 100644 drivers/raw/ifpga_rawdev/base/opae_intel_max10.c create mode 100644 drivers/raw/ifpga_rawdev/base/opae_intel_max10.h create mode 100644 drivers/raw/ifpga_rawdev/base/opae_mdio.c create mode 100644 drivers/raw/ifpga_rawdev/base/opae_mdio.h create mode 100644 drivers/raw/ifpga_rawdev/base/opae_phy_group.c create mode 100644 drivers/raw/ifpga_rawdev/base/opae_phy_group.h create mode 100644 drivers/raw/ifpga_rawdev/base/opae_spi.c create mode 100644 drivers/raw/ifpga_rawdev/base/opae_spi.h create mode 100644 drivers/raw/ifpga_rawdev/base/opae_spi_transaction.c diff --git a/drivers/raw/ifpga_rawdev/base/Makefile b/drivers/raw/ifpga_rawdev/base/Makefile index d79da72..c77e751 100644 --- a/drivers/raw/ifpga_rawdev/base/Makefile +++ b/drivers/raw/ifpga_rawdev/base/Makefile @@ -22,5 +22,12 @@ SRCS-y += opae_hw_api.c SRCS-y += opae_ifpga_hw_api.c SRCS-y += opae_debug.c SRCS-y += ifpga_fme_pr.c +SRCS-y += opae_spi.c +SRCS-y += opae_spi_transaction.c +SRCS-y += opae_mdio.c +SRCS-y += opae_i2c.c +SRCS-y += opae_at24_eeprom.c +SRCS-y += opae_phy_group.c +SRCS-y += opae_intel_max10.c SRCS-y += $(wildcard $(SRCDIR)/base/$(OSDEP)/*.c) diff --git a/drivers/raw/ifpga_rawdev/base/ifpga_api.c b/drivers/raw/ifpga_rawdev/base/ifpga_api.c index 540e171..0fd8de1 100644 --- a/drivers/raw/ifpga_rawdev/base/ifpga_api.c +++ b/drivers/raw/ifpga_rawdev/base/ifpga_api.c @@ -170,7 +170,6 @@ struct opae_accelerator_ops ifpga_acc_ops = { }; /* Bridge APIs */ - static int ifpga_br_reset(struct opae_bridge *br) { struct ifpga_port_hw *port = br->data; @@ -196,13 +195,79 @@ struct opae_manager_ops ifpga_mgr_ops = { .flash = ifpga_mgr_flash, }; +static int ifpga_mgr_read_mac_rom(struct opae_manager *mgr, int offset, + void *buf, int size) +{ + struct ifpga_fme_hw *fme = mgr->data; + + return fme_mgr_read_mac_rom(fme, offset, buf, size); +} + +static int 
ifpga_mgr_write_mac_rom(struct opae_manager *mgr, int offset, + void *buf, int size) +{ + struct ifpga_fme_hw *fme = mgr->data; + + return fme_mgr_write_mac_rom(fme, offset, buf, size); +} + +static int ifpga_mgr_read_phy_reg(struct opae_manager *mgr, int phy_group, + u8 entry, u16 reg, u32 *value) +{ + struct ifpga_fme_hw *fme = mgr->data; + + return fme_mgr_read_phy_reg(fme, phy_group, entry, reg, value); +} + +static int ifpga_mgr_write_phy_reg(struct opae_manager *mgr, int phy_group, + u8 entry, u16 reg, u32 value) +{ + struct ifpga_fme_hw *fme = mgr->data; + + return fme_mgr_write_phy_reg(fme, phy_group, entry, reg, value); +} + +static int ifpga_mgr_get_retimer_info(struct opae_manager *mgr, + struct opae_retimer_info *info) +{ + struct ifpga_fme_hw *fme = mgr->data; + + return fme_mgr_get_retimer_info(fme, info); +} + +static int ifpga_mgr_set_retimer_speed(struct opae_manager *mgr, int speed) +{ + struct ifpga_fme_hw *fme = mgr->data; + + return fme_mgr_set_retimer_speed(fme, speed); +} + +static int ifpga_mgr_get_retimer_status(struct opae_manager *mgr, int port, + struct opae_retimer_status *status) +{ + struct ifpga_fme_hw *fme = mgr->data; + + return fme_mgr_get_retimer_status(fme, port, status); +} + +/* Network APIs in FME */ +struct opae_manager_networking_ops ifpga_mgr_network_ops = { + .read_mac_rom = ifpga_mgr_read_mac_rom, + .write_mac_rom = ifpga_mgr_write_mac_rom, + .read_phy_reg = ifpga_mgr_read_phy_reg, + .write_phy_reg = ifpga_mgr_write_phy_reg, + .get_retimer_info = ifpga_mgr_get_retimer_info, + .set_retimer_speed = ifpga_mgr_set_retimer_speed, + .get_retimer_status = ifpga_mgr_get_retimer_status, +}; + /* Adapter APIs */ static int ifpga_adapter_enumerate(struct opae_adapter *adapter) { struct ifpga_hw *hw = malloc(sizeof(*hw)); if (hw) { - memset(hw, 0, sizeof(*hw)); + opae_memset(hw, 0, sizeof(*hw)); hw->pci_data = adapter->data; hw->adapter = adapter; if (ifpga_bus_enumerate(hw)) diff --git a/drivers/raw/ifpga_rawdev/base/ifpga_api.h 
b/drivers/raw/ifpga_rawdev/base/ifpga_api.h index dae7ca1..4a24769 100644 --- a/drivers/raw/ifpga_rawdev/base/ifpga_api.h +++ b/drivers/raw/ifpga_rawdev/base/ifpga_api.h @@ -12,6 +12,7 @@ extern struct opae_manager_ops ifpga_mgr_ops; extern struct opae_bridge_ops ifpga_br_ops; extern struct opae_accelerator_ops ifpga_acc_ops; +extern struct opae_manager_networking_ops ifpga_mgr_network_ops; /* common APIs */ int ifpga_get_prop(struct ifpga_hw *hw, u32 fiu_id, u32 port_id, diff --git a/drivers/raw/ifpga_rawdev/base/ifpga_defines.h b/drivers/raw/ifpga_rawdev/base/ifpga_defines.h index aa02527..cbd97fe 100644 --- a/drivers/raw/ifpga_rawdev/base/ifpga_defines.h +++ b/drivers/raw/ifpga_rawdev/base/ifpga_defines.h @@ -15,9 +15,13 @@ #define FME_FEATURE_GLOBAL_IPERF "fme_iperf" #define FME_FEATURE_GLOBAL_ERR "fme_error" #define FME_FEATURE_PR_MGMT "fme_pr" +#define FME_FEATURE_EMIF_MGMT "fme_emif" #define FME_FEATURE_HSSI_ETH "fme_hssi" #define FME_FEATURE_GLOBAL_DPERF "fme_dperf" #define FME_FEATURE_QSPI_FLASH "fme_qspi_flash" +#define FME_FEATURE_MAX10_SPI "fme_max10_spi" +#define FME_FEATURE_I2C_MASTER "fme_i2c_master" +#define FME_FEATURE_PHY_GROUP "fme_phy_group" #define PORT_FEATURE_HEADER "port_hdr" #define PORT_FEATURE_UAFU "port_uafu" @@ -42,6 +46,9 @@ #define FME_HSSI_ETH_REVISION 0 #define FME_GLOBAL_DPERF_REVISION 0 #define FME_QSPI_REVISION 0 +#define FME_MAX10_SPI 0 +#define FME_I2C_MASTER 0 +#define FME_PHY_GROUP 0 #define PORT_HEADER_REVISION 0 /* UAFU's header info depends on the downloaded GBS */ @@ -59,7 +66,8 @@ #define FEATURE_FIU_ID_FME 0x0 #define FEATURE_FIU_ID_PORT 0x1 -#define FEATURE_ID_HEADER 0x0 +/* Reserved 0xfe for Header, 0xff for AFU*/ +#define FEATURE_ID_FIU_HEADER 0xfe #define FEATURE_ID_AFU 0xff enum fpga_id_type { @@ -68,31 +76,26 @@ enum fpga_id_type { FPGA_ID_MAX, }; -enum fme_feature_id { - FME_FEATURE_ID_HEADER = 0x0, - - FME_FEATURE_ID_THERMAL_MGMT = 0x1, - FME_FEATURE_ID_POWER_MGMT = 0x2, - FME_FEATURE_ID_GLOBAL_IPERF = 0x3, - 
FME_FEATURE_ID_GLOBAL_ERR = 0x4, - FME_FEATURE_ID_PR_MGMT = 0x5, - FME_FEATURE_ID_HSSI_ETH = 0x6, - FME_FEATURE_ID_GLOBAL_DPERF = 0x7, - FME_FEATURE_ID_QSPI_FLASH = 0x8, - - /* one for fme header. */ - FME_FEATURE_ID_MAX = 0x9, -}; - -enum port_feature_id { - PORT_FEATURE_ID_HEADER = 0x0, - PORT_FEATURE_ID_ERROR = 0x1, - PORT_FEATURE_ID_UMSG = 0x2, - PORT_FEATURE_ID_UINT = 0x3, - PORT_FEATURE_ID_STP = 0x4, - PORT_FEATURE_ID_UAFU = 0x5, - PORT_FEATURE_ID_MAX = 0x6, -}; +#define FME_FEATURE_ID_HEADER FEATURE_ID_FIU_HEADER +#define FME_FEATURE_ID_THERMAL_MGMT 0x1 +#define FME_FEATURE_ID_POWER_MGMT 0x2 +#define FME_FEATURE_ID_GLOBAL_IPERF 0x3 +#define FME_FEATURE_ID_GLOBAL_ERR 0x4 +#define FME_FEATURE_ID_PR_MGMT 0x5 +#define FME_FEATURE_ID_HSSI_ETH 0x6 +#define FME_FEATURE_ID_GLOBAL_DPERF 0x7 +#define FME_FEATURE_ID_QSPI_FLASH 0x8 +#define FME_FEATURE_ID_EMIF_MGMT 0x9 +#define FME_FEATURE_ID_MAX10_SPI 0xe +#define FME_FEATURE_ID_I2C_MASTER 0xf +#define FME_FEATURE_ID_PHY_GROUP 0x10 + +#define PORT_FEATURE_ID_HEADER FEATURE_ID_FIU_HEADER +#define PORT_FEATURE_ID_ERROR 0x10 +#define PORT_FEATURE_ID_UMSG 0x12 +#define PORT_FEATURE_ID_UINT 0x13 +#define PORT_FEATURE_ID_STP 0x14 +#define PORT_FEATURE_ID_UAFU FEATURE_ID_AFU /* * All headers and structures must be byte-packed to match the spec. 
@@ -1303,6 +1306,37 @@ struct feature_fme_hssi { struct feature_fme_hssi_eth_stat hssi_status; }; +/*FME SPI Master for VC*/ +struct feature_fme_spi { + struct feature_header header; + u64 reg[4]; +}; + +/* FME I2C Master */ +struct feature_fme_i2c { + struct feature_header header; + u64 reg[4]; +}; + +struct feature_fme_phy_group_info { + union { + u64 info; + struct { + u8 group_index:8; + u8 number:8; + u8 speed:8; + u8 direction:1; + u64 rsvd:39; + }; + }; +}; + +/* FME PHY Group */ +struct feature_fme_phy_group { + struct feature_header header; + struct feature_fme_phy_group_info phy_info; +}; + #define PORT_ERR_MASK 0xfff0703ff001f struct feature_port_err_key { union { diff --git a/drivers/raw/ifpga_rawdev/base/ifpga_enumerate.c b/drivers/raw/ifpga_rawdev/base/ifpga_enumerate.c index 848e518..0a206cb 100644 --- a/drivers/raw/ifpga_rawdev/base/ifpga_enumerate.c +++ b/drivers/raw/ifpga_rawdev/base/ifpga_enumerate.c @@ -28,121 +28,24 @@ struct build_feature_devs_info { struct ifpga_hw *hw; }; -struct feature_info { - const char *name; - u32 resource_size; - int feature_index; - int revision_id; - unsigned int vec_start; - unsigned int vec_cnt; - - struct feature_ops *ops; -}; +static int feature_revision(void __iomem *start) +{ + struct feature_header header; -/* indexed by fme feature IDs which are defined in 'enum fme_feature_id'. 
*/ -static struct feature_info fme_features[] = { - { - .name = FME_FEATURE_HEADER, - .resource_size = sizeof(struct feature_fme_header), - .feature_index = FME_FEATURE_ID_HEADER, - .revision_id = FME_HEADER_REVISION, - .ops = &fme_hdr_ops, - }, - { - .name = FME_FEATURE_THERMAL_MGMT, - .resource_size = sizeof(struct feature_fme_thermal), - .feature_index = FME_FEATURE_ID_THERMAL_MGMT, - .revision_id = FME_THERMAL_MGMT_REVISION, - .ops = &fme_thermal_mgmt_ops, - }, - { - .name = FME_FEATURE_POWER_MGMT, - .resource_size = sizeof(struct feature_fme_power), - .feature_index = FME_FEATURE_ID_POWER_MGMT, - .revision_id = FME_POWER_MGMT_REVISION, - .ops = &fme_power_mgmt_ops, - }, - { - .name = FME_FEATURE_GLOBAL_IPERF, - .resource_size = sizeof(struct feature_fme_iperf), - .feature_index = FME_FEATURE_ID_GLOBAL_IPERF, - .revision_id = FME_GLOBAL_IPERF_REVISION, - .ops = &fme_global_iperf_ops, - }, - { - .name = FME_FEATURE_GLOBAL_ERR, - .resource_size = sizeof(struct feature_fme_err), - .feature_index = FME_FEATURE_ID_GLOBAL_ERR, - .revision_id = FME_GLOBAL_ERR_REVISION, - .ops = &fme_global_err_ops, - }, - { - .name = FME_FEATURE_PR_MGMT, - .resource_size = sizeof(struct feature_fme_pr), - .feature_index = FME_FEATURE_ID_PR_MGMT, - .revision_id = FME_PR_MGMT_REVISION, - .ops = &fme_pr_mgmt_ops, - }, - { - .name = FME_FEATURE_HSSI_ETH, - .resource_size = sizeof(struct feature_fme_hssi), - .feature_index = FME_FEATURE_ID_HSSI_ETH, - .revision_id = FME_HSSI_ETH_REVISION - }, - { - .name = FME_FEATURE_GLOBAL_DPERF, - .resource_size = sizeof(struct feature_fme_dperf), - .feature_index = FME_FEATURE_ID_GLOBAL_DPERF, - .revision_id = FME_GLOBAL_DPERF_REVISION, - .ops = &fme_global_dperf_ops, - } -}; + header.csr = readq(start); -static struct feature_info port_features[] = { - { - .name = PORT_FEATURE_HEADER, - .resource_size = sizeof(struct feature_port_header), - .feature_index = PORT_FEATURE_ID_HEADER, - .revision_id = PORT_HEADER_REVISION, - .ops = 
&ifpga_rawdev_port_hdr_ops, - }, - { - .name = PORT_FEATURE_ERR, - .resource_size = sizeof(struct feature_port_error), - .feature_index = PORT_FEATURE_ID_ERROR, - .revision_id = PORT_ERR_REVISION, - .ops = &ifpga_rawdev_port_error_ops, - }, - { - .name = PORT_FEATURE_UMSG, - .resource_size = sizeof(struct feature_port_umsg), - .feature_index = PORT_FEATURE_ID_UMSG, - .revision_id = PORT_UMSG_REVISION, - }, - { - .name = PORT_FEATURE_UINT, - .resource_size = sizeof(struct feature_port_uint), - .feature_index = PORT_FEATURE_ID_UINT, - .revision_id = PORT_UINT_REVISION, - .ops = &ifpga_rawdev_port_uint_ops, - }, - { - .name = PORT_FEATURE_STP, - .resource_size = PORT_FEATURE_STP_REGION_SIZE, - .feature_index = PORT_FEATURE_ID_STP, - .revision_id = PORT_STP_REVISION, - .ops = &ifpga_rawdev_port_stp_ops, - }, - { - .name = PORT_FEATURE_UAFU, - /* UAFU feature size should be read from PORT_CAP.MMIOSIZE. - * Will set uafu feature size while parse port device. - */ - .resource_size = 0, - .feature_index = PORT_FEATURE_ID_UAFU, - .revision_id = PORT_UAFU_REVISION - }, -}; + return header.revision; +} + +static u32 feature_size(void __iomem *start) +{ + struct feature_header header; + + header.csr = readq(start); + + /*the size of private feature is 4KB aligned*/ + return header.next_header_offset ? 
header.next_header_offset:4096; +} static u64 feature_id(void __iomem *start) { @@ -152,7 +55,7 @@ static u64 feature_id(void __iomem *start) switch (header.type) { case FEATURE_TYPE_FIU: - return FEATURE_ID_HEADER; + return FEATURE_ID_FIU_HEADER; case FEATURE_TYPE_PRIVATE: return header.id; case FEATURE_TYPE_AFU: @@ -165,37 +68,35 @@ static u64 feature_id(void __iomem *start) static int build_info_add_sub_feature(struct build_feature_devs_info *binfo, - struct feature_info *finfo, void __iomem *start) + void __iomem *start, u64 fid, unsigned int size, + unsigned int vec_start, + unsigned int vec_cnt) { struct ifpga_hw *hw = binfo->hw; struct feature *feature = NULL; - int feature_idx = finfo->feature_index; - unsigned int vec_start = finfo->vec_start; - unsigned int vec_cnt = finfo->vec_cnt; struct feature_irq_ctx *ctx = NULL; int port_id, ret = 0; unsigned int i; - if (binfo->current_type == FME_ID) { - feature = &hw->fme.sub_feature[feature_idx]; - feature->parent = &hw->fme; - } else if (binfo->current_type == PORT_ID) { - port_id = binfo->current_port_id; - feature = &hw->port[port_id].sub_feature[feature_idx]; - feature->parent = &hw->port[port_id]; - } else { - return -EFAULT; - } + fid = fid?fid:feature_id(start); + size = size?size:feature_size(start); + + feature = opae_malloc(sizeof(struct feature)); + if (!feature) + return -ENOMEM; feature->state = IFPGA_FEATURE_ATTACHED; feature->addr = start; - feature->id = feature_id(start); - feature->size = finfo->resource_size; - feature->name = finfo->name; - feature->revision = finfo->revision_id; - feature->ops = finfo->ops; + feature->id = fid; + feature->size = size; + feature->revision = feature_revision(start); feature->phys_addr = binfo->phys_addr + ((u8 *)start - (u8 *)binfo->ioaddr); + feature->vec_start = vec_start; + feature->vec_cnt = vec_cnt; + + dev_debug(binfo, "%s: id=0x%lx, phys_addr=0x%lx, size=%d\n", + __func__, feature->id, feature->phys_addr, size); if (vec_cnt) { if (vec_start + vec_cnt <= 
vec_start) @@ -215,22 +116,32 @@ static u64 feature_id(void __iomem *start) feature->ctx_num = vec_cnt; feature->vfio_dev_fd = binfo->pci_data->vfio_dev_fd; + if (binfo->current_type == FME_ID) { + feature->parent = &hw->fme; + feature->type = FEATURE_FME_TYPE; + feature->name = get_fme_feature_name(fid); + TAILQ_INSERT_TAIL(&hw->fme.feature_list, feature, next); + } else if (binfo->current_type == PORT_ID) { + port_id = binfo->current_port_id; + feature->parent = &hw->port[port_id]; + feature->type = FEATURE_PORT_TYPE; + feature->name = get_port_feature_name(fid); + TAILQ_INSERT_TAIL(&hw->port[port_id].feature_list, + feature, next); + } else { + return -EFAULT; + } return ret; } static int create_feature_instance(struct build_feature_devs_info *binfo, - void __iomem *start, struct feature_info *finfo) + void __iomem *start, u64 fid, + unsigned int size, unsigned int vec_start, + unsigned int vec_cnt) { - struct feature_header *hdr = start; - - if (finfo->revision_id != SKIP_REVISION_CHECK && - hdr->revision > finfo->revision_id) { - dev_err(binfo, "feature %s revision :default:%x, now at:%x, mis-match.\n", - finfo->name, finfo->revision_id, hdr->revision); - } - - return build_info_add_sub_feature(binfo, finfo, start); + return build_info_add_sub_feature(binfo, start, fid, size, vec_start, + vec_cnt); } /* @@ -249,31 +160,31 @@ static bool feature_is_UAFU(struct build_feature_devs_info *binfo) static int parse_feature_port_uafu(struct build_feature_devs_info *binfo, struct feature_header *hdr) { - enum port_feature_id id = PORT_FEATURE_ID_UAFU; + u64 id = PORT_FEATURE_ID_UAFU; struct ifpga_afu_info *info; void *start = (void *)hdr; + struct feature_port_header *port_hdr = binfo->ioaddr; + struct feature_port_capability capability; int ret; + int size; - if (port_features[id].resource_size) { - ret = create_feature_instance(binfo, hdr, &port_features[id]); - } else { - dev_err(binfo, "the uafu feature header is mis-configured.\n"); - ret = -EINVAL; - } + 
capability.csr = readq(&port_hdr->capability); + + size = capability.mmio_size << 10; + ret = create_feature_instance(binfo, hdr, id, size, 0, 0); if (ret) return ret; /* FIXME: need to figure out a better name */ - info = malloc(sizeof(*info)); + info = opae_malloc(sizeof(*info)); if (!info) return -ENOMEM; info->region[0].addr = start; info->region[0].phys_addr = binfo->phys_addr + (uint8_t *)start - (uint8_t *)binfo->ioaddr; - info->region[0].len = port_features[id].resource_size; - port_features[id].resource_size = 0; + info->region[0].len = size; info->num_regions = 1; binfo->acc_info = info; @@ -320,6 +231,8 @@ static int build_info_commit_dev(struct build_feature_devs_info *binfo) struct opae_manager *mgr; struct opae_bridge *br; struct opae_accelerator *acc; + struct ifpga_port_hw *port; + struct feature *feature; if (!binfo->fiu) return 0; @@ -337,7 +250,11 @@ static int build_info_commit_dev(struct build_feature_devs_info *binfo) br->id = binfo->current_port_id; /* update irq info */ - info->num_irqs = port_features[PORT_FEATURE_ID_UINT].vec_cnt; + port = &hw->port[binfo->current_port_id]; + feature = get_feature_by_id(&port->feature_list, + PORT_FEATURE_ID_UINT); + if (feature) + info->num_irqs = feature->vec_cnt; acc = opae_accelerator_alloc(hw->adapter->name, &ifpga_acc_ops, info); @@ -353,7 +270,7 @@ static int build_info_commit_dev(struct build_feature_devs_info *binfo) } else if (binfo->current_type == FME_ID) { mgr = opae_manager_alloc(hw->adapter->name, &ifpga_mgr_ops, - binfo->fiu); + &ifpga_mgr_network_ops, binfo->fiu); if (!mgr) return -ENOMEM; @@ -402,10 +319,10 @@ static int parse_feature_fme(struct build_feature_devs_info *binfo, /* Update FME states */ fme->state = IFPGA_FME_IMPLEMENTED; fme->parent = hw; + TAILQ_INIT(&fme->feature_list); spinlock_init(&fme->lock); - return create_feature_instance(binfo, start, - &fme_features[FME_FEATURE_ID_HEADER]); + return create_feature_instance(binfo, start, 0, 0, 0, 0); } static int 
parse_feature_port(struct build_feature_devs_info *binfo, @@ -433,29 +350,19 @@ static int parse_feature_port(struct build_feature_devs_info *binfo, port->parent = hw; port->state = IFPGA_PORT_ATTACHED; spinlock_init(&port->lock); + TAILQ_INIT(&port->feature_list); - return create_feature_instance(binfo, start, - &port_features[PORT_FEATURE_ID_HEADER]); + return create_feature_instance(binfo, start, 0, 0, 0, 0); } static void enable_port_uafu(struct build_feature_devs_info *binfo, void __iomem *start) { - enum port_feature_id id = PORT_FEATURE_ID_UAFU; - struct feature_port_header *port_hdr; - struct feature_port_capability capability; struct ifpga_port_hw *port = &binfo->hw->port[binfo->current_port_id]; - port_hdr = (struct feature_port_header *)start; - capability.csr = readq(&port_hdr->capability); - port_features[id].resource_size = (capability.mmio_size << 10); - - /* - * From spec, to Enable UAFU, we should reset related port, - * or the whole mmio space in this UAFU will be invalid - */ - if (port_features[id].resource_size) - fpga_port_reset(port); + UNUSED(start); + + fpga_port_reset(port); } static int parse_feature_fiu(struct build_feature_devs_info *binfo, @@ -505,44 +412,45 @@ static int parse_feature_fiu(struct build_feature_devs_info *binfo, } static void parse_feature_irqs(struct build_feature_devs_info *binfo, - void __iomem *start, struct feature_info *finfo) + void __iomem *start, unsigned int *vec_start, + unsigned int *vec_cnt) { - finfo->vec_start = 0; - finfo->vec_cnt = 0; - UNUSED(binfo); + u64 id; + + id = feature_id(start); - if (!strcmp(finfo->name, PORT_FEATURE_UINT)) { + if (id == PORT_FEATURE_ID_UINT) { struct feature_port_uint *port_uint = start; struct feature_port_uint_cap uint_cap; uint_cap.csr = readq(&port_uint->capability); if (uint_cap.intr_num) { - finfo->vec_start = uint_cap.first_vec_num; - finfo->vec_cnt = uint_cap.intr_num; + *vec_start = uint_cap.first_vec_num; + *vec_cnt = uint_cap.intr_num; } else { dev_debug(binfo, 
"UAFU doesn't support interrupt\n"); } - } else if (!strcmp(finfo->name, PORT_FEATURE_ERR)) { + } else if (id == PORT_FEATURE_ID_ERROR) { struct feature_port_error *port_err = start; struct feature_port_err_capability port_err_cap; port_err_cap.csr = readq(&port_err->error_capability); if (port_err_cap.support_intr) { - finfo->vec_start = port_err_cap.intr_vector_num; - finfo->vec_cnt = 1; + *vec_start = port_err_cap.intr_vector_num; + *vec_cnt = 1; } else { dev_debug(&binfo, "Port error doesn't support interrupt\n"); } - } else if (!strcmp(finfo->name, FME_FEATURE_GLOBAL_ERR)) { + } else if (id == FME_FEATURE_ID_GLOBAL_ERR) { struct feature_fme_err *fme_err = start; struct feature_fme_error_capability fme_err_cap; fme_err_cap.csr = readq(&fme_err->fme_err_capability); if (fme_err_cap.support_intr) { - finfo->vec_start = fme_err_cap.intr_vector_num; - finfo->vec_cnt = 1; + *vec_start = fme_err_cap.intr_vector_num; + *vec_cnt = 1; } else { dev_debug(&binfo, "FME error doesn't support interrupt\n"); } @@ -552,43 +460,23 @@ static void parse_feature_irqs(struct build_feature_devs_info *binfo, static int parse_feature_fme_private(struct build_feature_devs_info *binfo, struct feature_header *hdr) { - struct feature_header header; - - header.csr = readq(hdr); - - if (header.id >= ARRAY_SIZE(fme_features)) { - dev_err(binfo, "FME feature id %x is not supported yet.\n", - header.id); - return 0; - } + unsigned int vec_start = 0; + unsigned int vec_cnt = 0; - parse_feature_irqs(binfo, hdr, &fme_features[header.id]); + parse_feature_irqs(binfo, hdr, &vec_start, &vec_cnt); - return create_feature_instance(binfo, hdr, &fme_features[header.id]); + return create_feature_instance(binfo, hdr, 0, 0, vec_start, vec_cnt); } static int parse_feature_port_private(struct build_feature_devs_info *binfo, struct feature_header *hdr) { - struct feature_header header; - enum port_feature_id id; + unsigned int vec_start = 0; + unsigned int vec_cnt = 0; - header.csr = readq(hdr); - /* - * the 
region of port feature id is [0x10, 0x13], + 1 to reserve 0 - * which is dedicated for port-hdr. - */ - id = (header.id & 0x000f) + 1; - - if (id >= ARRAY_SIZE(port_features)) { - dev_err(binfo, "Port feature id %x is not supported yet.\n", - header.id); - return 0; - } - - parse_feature_irqs(binfo, hdr, &port_features[id]); + parse_feature_irqs(binfo, hdr, &vec_start, &vec_cnt); - return create_feature_instance(binfo, hdr, &port_features[id]); + return create_feature_instance(binfo, hdr, 0, 0, vec_start, vec_cnt); } static int parse_feature_private(struct build_feature_devs_info *binfo, @@ -651,12 +539,19 @@ static int parse_feature(struct build_feature_devs_info *binfo, } hdr = (struct feature_header *)start; + header.csr = readq(hdr); + + /*debug*/ + dev_debug(binfo, "%s: address=0x%llx, val=0x%lx, header.id=0x%x, header.next_offset=0x%x, header.eol=0x%x, header.type=0x%x\n", + __func__, (unsigned long long)(hdr), header.csr, + header.id, header.next_header_offset, + header.end_of_list, header.type); + ret = parse_feature(binfo, hdr); if (ret) return ret; - header.csr = readq(hdr); - if (!header.next_header_offset) + if (header.end_of_list || !header.next_header_offset) break; } @@ -746,13 +641,12 @@ static void ifpga_print_device_feature_list(struct ifpga_hw *hw) struct ifpga_fme_hw *fme = &hw->fme; struct ifpga_port_hw *port; struct feature *feature; - int i, j; + int i; dev_info(hw, "found fme_device, is in PF: %s\n", is_ifpga_hw_pf(hw) ? 
"yes" : "no"); - for (i = 0; i < FME_FEATURE_ID_MAX; i++) { - feature = &fme->sub_feature[i]; + ifpga_for_each_fme_feature(fme, feature) { if (feature->state != IFPGA_FEATURE_ATTACHED) continue; @@ -760,6 +654,7 @@ static void ifpga_print_device_feature_list(struct ifpga_hw *hw) feature->name, feature->addr, feature->addr + feature->size - 1, (unsigned long)feature->phys_addr); + } for (i = 0; i < MAX_FPGA_PORT_NUM; i++) { @@ -770,8 +665,7 @@ static void ifpga_print_device_feature_list(struct ifpga_hw *hw) dev_info(hw, "port device: %d\n", port->port_id); - for (j = 0; j < PORT_FEATURE_ID_MAX; j++) { - feature = &port->sub_feature[j]; + ifpga_for_each_port_feature(port, feature) { if (feature->state != IFPGA_FEATURE_ATTACHED) continue; @@ -782,6 +676,7 @@ static void ifpga_print_device_feature_list(struct ifpga_hw *hw) feature->size - 1, (unsigned long)feature->phys_addr); } + } } @@ -812,10 +707,13 @@ int ifpga_bus_enumerate(struct ifpga_hw *hw) int ifpga_bus_init(struct ifpga_hw *hw) { int i; + struct ifpga_port_hw *port; fme_hw_init(&hw->fme); - for (i = 0; i < MAX_FPGA_PORT_NUM; i++) - port_hw_init(&hw->port[i]); + for (i = 0; i < MAX_FPGA_PORT_NUM; i++) { + port = &hw->port[i]; + port_hw_init(port); + } return 0; } diff --git a/drivers/raw/ifpga_rawdev/base/ifpga_feature_dev.c b/drivers/raw/ifpga_rawdev/base/ifpga_feature_dev.c index be7ac9e..88465d6 100644 --- a/drivers/raw/ifpga_rawdev/base/ifpga_feature_dev.c +++ b/drivers/raw/ifpga_rawdev/base/ifpga_feature_dev.c @@ -70,6 +70,9 @@ int fpga_get_afu_uuid(struct ifpga_port_hw *port, struct uuid *uuid) struct feature_port_header *port_hdr; u64 guidl, guidh; + if (!uuid) + return -EINVAL; + port_hdr = get_port_feature_ioaddr_by_index(port, PORT_FEATURE_ID_UAFU); spinlock_lock(&port->lock); @@ -77,8 +80,8 @@ int fpga_get_afu_uuid(struct ifpga_port_hw *port, struct uuid *uuid) guidh = readq(&port_hdr->afu_header.guid.b[8]); spinlock_unlock(&port->lock); - memcpy(uuid->b, &guidl, sizeof(u64)); - memcpy(uuid->b + 
8, &guidh, sizeof(u64)); + opae_memcpy(uuid->b, &guidl, sizeof(u64)); + opae_memcpy(uuid->b + 8, &guidh, sizeof(u64)); return 0; } @@ -177,77 +180,152 @@ int port_clear_error(struct ifpga_port_hw *port) return port_err_clear(port, error.csr); } -void fme_hw_uinit(struct ifpga_fme_hw *fme) +static struct feature_driver fme_feature_drvs[] = { + {FEATURE_DRV(FME_FEATURE_ID_HEADER, FME_FEATURE_HEADER, + &fme_hdr_ops),}, + {FEATURE_DRV(FME_FEATURE_ID_THERMAL_MGMT, FME_FEATURE_THERMAL_MGMT, + &fme_thermal_mgmt_ops),}, + {FEATURE_DRV(FME_FEATURE_ID_POWER_MGMT, FME_FEATURE_POWER_MGMT, + &fme_power_mgmt_ops),}, + {FEATURE_DRV(FME_FEATURE_ID_GLOBAL_ERR, FME_FEATURE_GLOBAL_ERR, + &fme_global_err_ops),}, + {FEATURE_DRV(FME_FEATURE_ID_PR_MGMT, FME_FEATURE_PR_MGMT, + &fme_pr_mgmt_ops),}, + {FEATURE_DRV(FME_FEATURE_ID_GLOBAL_DPERF, FME_FEATURE_GLOBAL_DPERF, + &fme_global_dperf_ops),}, + {FEATURE_DRV(FME_FEATURE_ID_HSSI_ETH, FME_FEATURE_HSSI_ETH, + &fme_hssi_eth_ops),}, + {FEATURE_DRV(FME_FEATURE_ID_EMIF_MGMT, FME_FEATURE_EMIF_MGMT, + &fme_emif_ops),}, + {FEATURE_DRV(FME_FEATURE_ID_MAX10_SPI, FME_FEATURE_MAX10_SPI, + &fme_spi_master_ops),}, + {FEATURE_DRV(FME_FEATURE_ID_I2C_MASTER, FME_FEATURE_I2C_MASTER, + &fme_i2c_master_ops),}, + {FEATURE_DRV(FME_FEATURE_ID_PHY_GROUP, FME_FEATURE_PHY_GROUP, + &fme_phy_group_ops),}, + {0, NULL, NULL}, /* end of array */ +}; + +static struct feature_driver port_feature_drvs[] = { + {FEATURE_DRV(PORT_FEATURE_ID_HEADER, PORT_FEATURE_HEADER, + &ifpga_rawdev_port_hdr_ops)}, + {FEATURE_DRV(PORT_FEATURE_ID_ERROR, PORT_FEATURE_ERR, + &ifpga_rawdev_port_error_ops)}, + {FEATURE_DRV(PORT_FEATURE_ID_UINT, PORT_FEATURE_UINT, + &ifpga_rawdev_port_uint_ops)}, + {FEATURE_DRV(PORT_FEATURE_ID_STP, PORT_FEATURE_STP, + &ifpga_rawdev_port_stp_ops)}, + {FEATURE_DRV(PORT_FEATURE_ID_UAFU, PORT_FEATURE_UAFU, + &ifpga_rawdev_port_afu_ops)}, + {0, NULL, NULL}, /* end of array */ +}; + +const char *get_fme_feature_name(unsigned int id) { - struct feature *feature; - int 
i; + struct feature_driver *drv = fme_feature_drvs; - if (fme->state != IFPGA_FME_IMPLEMENTED) - return; + while (drv->name) { + if (drv->id == id) + return drv->name; - for (i = 0; i < FME_FEATURE_ID_MAX; i++) { - feature = &fme->sub_feature[i]; - if (feature->state == IFPGA_FEATURE_ATTACHED && - feature->ops && feature->ops->uinit) - feature->ops->uinit(feature); + drv++; } + + return NULL; } -int fme_hw_init(struct ifpga_fme_hw *fme) +const char *get_port_feature_name(unsigned int id) +{ + struct feature_driver *drv = port_feature_drvs; + + while (drv->name) { + if (drv->id == id) + return drv->name; + + drv++; + } + + return NULL; +} + +static void feature_uinit(struct ifpga_feature_list *list) { struct feature *feature; - int i, ret; - if (fme->state != IFPGA_FME_IMPLEMENTED) - return -EINVAL; + TAILQ_FOREACH(feature, list, next) { + if (feature->state != IFPGA_FEATURE_ATTACHED) + continue; + if (feature->ops && feature->ops->uinit) + feature->ops->uinit(feature); + } +} - for (i = 0; i < FME_FEATURE_ID_MAX; i++) { - feature = &fme->sub_feature[i]; - if (feature->state == IFPGA_FEATURE_ATTACHED && - feature->ops && feature->ops->init) { - ret = feature->ops->init(feature); - if (ret) { - fme_hw_uinit(fme); - return ret; +static int feature_init(struct feature_driver *drv, + struct ifpga_feature_list *list) +{ + struct feature *feature; + int ret; + + while (drv->ops) { + TAILQ_FOREACH(feature, list, next) { + if (feature->state != IFPGA_FEATURE_ATTACHED) + continue; + if (feature->id == drv->id) { + feature->ops = drv->ops; + feature->name = drv->name; + if (feature->ops->init) { + ret = feature->ops->init(feature); + if (ret) + goto error; + } } } + drv++; } return 0; +error: + feature_uinit(list); + return ret; } -void port_hw_uinit(struct ifpga_port_hw *port) +int fme_hw_init(struct ifpga_fme_hw *fme) { - struct feature *feature; - int i; + int ret; - for (i = 0; i < PORT_FEATURE_ID_MAX; i++) { - feature = &port->sub_feature[i]; - if (feature->state == 
IFPGA_FEATURE_ATTACHED && - feature->ops && feature->ops->uinit) - feature->ops->uinit(feature); - } + if (fme->state != IFPGA_FME_IMPLEMENTED) + return -ENODEV; + + ret = feature_init(fme_feature_drvs, &fme->feature_list); + if (ret) + return ret; + + return 0; +} + +void fme_hw_uinit(struct ifpga_fme_hw *fme) +{ + feature_uinit(&fme->feature_list); +} + +void port_hw_uinit(struct ifpga_port_hw *port) +{ + feature_uinit(&port->feature_list); } int port_hw_init(struct ifpga_port_hw *port) { - struct feature *feature; - int i, ret; + int ret; if (port->state == IFPGA_PORT_UNUSED) return 0; - for (i = 0; i < PORT_FEATURE_ID_MAX; i++) { - feature = &port->sub_feature[i]; - if (feature->ops && feature->ops->init) { - ret = feature->ops->init(feature); - if (ret) { - port_hw_uinit(port); - return ret; - } - } - } + ret = feature_init(port_feature_drvs, &port->feature_list); + if (ret) + goto error; return 0; +error: + port_hw_uinit(port); + return ret; } - diff --git a/drivers/raw/ifpga_rawdev/base/ifpga_feature_dev.h b/drivers/raw/ifpga_rawdev/base/ifpga_feature_dev.h index 4391f2f..067e9aa 100644 --- a/drivers/raw/ifpga_rawdev/base/ifpga_feature_dev.h +++ b/drivers/raw/ifpga_rawdev/base/ifpga_feature_dev.h @@ -7,6 +7,18 @@ #include "ifpga_hw.h" +struct feature_driver { + u64 id; + const char *name; + struct feature_ops *ops; +}; + +/** + * FEATURE_DRV - macro used to describe a specific feature driver + */ +#define FEATURE_DRV(n, s, p) \ + .id = (n), .name = (s), .ops = (p) + static inline struct ifpga_port_hw * get_port(struct ifpga_hw *hw, u32 port_id) { @@ -17,12 +29,10 @@ } #define ifpga_for_each_fme_feature(hw, feature) \ - for ((feature) = (hw)->sub_feature; \ - (feature) < (hw)->sub_feature + (FME_FEATURE_ID_MAX); (feature)++) + TAILQ_FOREACH(feature, &hw->feature_list, next) -#define ifpga_for_each_port_feature(hw, feature) \ - for ((feature) = (hw)->sub_feature; \ - (feature) < (hw)->sub_feature + (PORT_FEATURE_ID_MAX); (feature)++) +#define 
ifpga_for_each_port_feature(port, feature) \ + TAILQ_FOREACH(feature, &port->feature_list, next) static inline struct feature * get_fme_feature_by_id(struct ifpga_fme_hw *fme, u64 id) @@ -50,16 +60,32 @@ return NULL; } +static inline struct feature * +get_feature_by_id(struct ifpga_feature_list *list, u64 id) +{ + struct feature *feature; + + TAILQ_FOREACH(feature, list, next) + if (feature->id == id) + return feature; + + return NULL; +} + static inline void * get_fme_feature_ioaddr_by_index(struct ifpga_fme_hw *fme, int index) { - return fme->sub_feature[index].addr; + struct feature *feature = get_feature_by_id(&fme->feature_list, index); + + return feature ? feature->addr : NULL; } static inline void * get_port_feature_ioaddr_by_index(struct ifpga_port_hw *port, int index) { - return port->sub_feature[index].addr; + struct feature *feature = get_feature_by_id(&port->feature_list, index); + + return feature ? feature->addr : NULL; } static inline bool @@ -143,6 +169,11 @@ int do_pr(struct ifpga_hw *hw, u32 port_id, void *buffer, u32 size, extern struct feature_ops fme_pr_mgmt_ops; extern struct feature_ops fme_global_iperf_ops; extern struct feature_ops fme_global_dperf_ops; +extern struct feature_ops fme_hssi_eth_ops; +extern struct feature_ops fme_emif_ops; +extern struct feature_ops fme_spi_master_ops; +extern struct feature_ops fme_i2c_master_ops; +extern struct feature_ops fme_phy_group_ops; int port_get_prop(struct ifpga_port_hw *port, struct feature_prop *prop); int port_set_prop(struct ifpga_port_hw *port, struct feature_prop *prop); @@ -155,14 +186,31 @@ struct fpga_uafu_irq_set { }; int port_set_irq(struct ifpga_port_hw *port, u32 feature_id, void *irq_set); +const char *get_fme_feature_name(unsigned int id); +const char *get_port_feature_name(unsigned int id); extern struct feature_ops ifpga_rawdev_port_hdr_ops; extern struct feature_ops ifpga_rawdev_port_error_ops; extern struct feature_ops ifpga_rawdev_port_stp_ops; extern struct feature_ops 
ifpga_rawdev_port_uint_ops; +extern struct feature_ops ifpga_rawdev_port_afu_ops; /* help functions for feature ops */ int fpga_msix_set_block(struct feature *feature, unsigned int start, unsigned int count, s32 *fds); +/* FME network function ops*/ +int fme_mgr_read_mac_rom(struct ifpga_fme_hw *fme, int offset, + void *buf, int size); +int fme_mgr_write_mac_rom(struct ifpga_fme_hw *fme, int offset, + void *buf, int size); +int fme_mgr_read_phy_reg(struct ifpga_fme_hw *fme, int phy_group, + u8 entry, u16 reg, u32 *value); +int fme_mgr_write_phy_reg(struct ifpga_fme_hw *fme, int phy_group, + u8 entry, u16 reg, u32 value); +int fme_mgr_get_retimer_info(struct ifpga_fme_hw *fme, + struct opae_retimer_info *info); +int fme_mgr_set_retimer_speed(struct ifpga_fme_hw *fme, int speed); +int fme_mgr_get_retimer_status(struct ifpga_fme_hw *fme, int port, + struct opae_retimer_status *status); #endif /* _IFPGA_FEATURE_DEV_H_ */ diff --git a/drivers/raw/ifpga_rawdev/base/ifpga_fme.c b/drivers/raw/ifpga_rawdev/base/ifpga_fme.c index 4be60c0..df87b5f 100644 --- a/drivers/raw/ifpga_rawdev/base/ifpga_fme.c +++ b/drivers/raw/ifpga_rawdev/base/ifpga_fme.c @@ -3,6 +3,12 @@ */ #include "ifpga_feature_dev.h" +#include "opae_i2c.h" +#include "opae_spi.h" +#include "opae_at24_eeprom.h" +#include "opae_phy_group.h" +#include "opae_intel_max10.h" +#include "opae_mdio.h" #define PWR_THRESHOLD_MAX 0x7F @@ -732,3 +738,370 @@ struct feature_ops fme_power_mgmt_ops = { .get_prop = fme_power_mgmt_get_prop, .set_prop = fme_power_mgmt_set_prop, }; + +static int spi_self_checking(void) +{ + u32 val; + int ret; + + ret = max10_reg_read(0x30043c, &val); + if (ret) + return -EIO; + + if (val != 0x87654321) { + dev_err(NULL, "Read MAX10 test register fail: 0x%x\n", val); + return -EIO; + } + + dev_info(NULL, "Read MAX10 test register success, SPI self-test done\n"); + + return 0; +} + +static int fme_spi_init(struct feature *feature) +{ + struct feature_fme_spi *spi; + struct ifpga_fme_hw *fme = (struct 
ifpga_fme_hw *)feature->parent; + struct altera_spi_device *spi_master; + struct intel_max10_device *max10; + int ret = 0; + + spi = (struct feature_fme_spi *)feature->addr; + + dev_info(fme, "FME SPI Master (Max10) Init.\n"); + dev_debug(fme, "FME SPI base addr %llx.\n", + (unsigned long long)spi); + dev_debug(fme, "spi param=0x%lx\n", opae_readq(feature->addr + 0x8)); + + spi_master = altera_spi_init(feature->addr); + if (!spi_master) + return -ENODEV; + + max10 = intel_max10_device_probe(spi_master, 0); + if (!max10) { + ret = -ENODEV; + dev_err(fme, "max10 init fail\n"); + goto spi_fail; + } + + fme->max10_dev = max10; + + /* SPI self test */ + if (spi_self_checking()) + return -EIO; + + return ret; + +spi_fail: + altera_spi_release(spi_master); + return ret; +} + +static void fme_spi_uinit(struct feature *feature) +{ + struct ifpga_fme_hw *fme = (struct ifpga_fme_hw *)feature->parent; + + if (fme->max10_dev) + intel_max10_device_remove(fme->max10_dev); +} + +struct feature_ops fme_spi_master_ops = { + .init = fme_spi_init, + .uinit = fme_spi_uinit, + +}; + +static int i2c_mac_rom_test(struct altera_i2c_dev *dev) +{ + char buf[20]; + int ret; + char read_buf[20] = {0,}; + const char *string = "1a2b3c4d5e"; + unsigned int i; + + opae_memcpy(buf, string, strlen(string)); + + printf("data writing into mac rom:\n"); + for (i = 0; i < strlen(string); i++) + printf("%x ", *((char *)buf+i)); + printf("\n"); + + ret = at24_eeprom_write(dev, AT24512_SLAVE_ADDR, 0, + (u8 *)buf, strlen(string)); + if (ret < 0) + printf("write i2c error:%d\n", ret); + + ret = at24_eeprom_read(dev, AT24512_SLAVE_ADDR, 0, + (u8 *)read_buf, strlen(string)); + if (ret < 0) + printf("read i2c error:%d\n", ret); + + printf("read from mac rom\n"); + for (i = 0; i < strlen(string); i++) + printf("%x ", *((char *)read_buf+i)); + printf("\n"); + + if (!memcmp(buf, read_buf, strlen(string))) { + printf("%s test success!\n", __func__); + return -EFAULT; + } + + printf("%s test fail\n", __func__); + + 
return 0; +} + +static int fme_i2c_init(struct feature *feature) +{ + struct feature_fme_i2c *i2c; + struct ifpga_fme_hw *fme = (struct ifpga_fme_hw *)feature->parent; + + i2c = (struct feature_fme_i2c *)feature->addr; + + dev_info(NULL, "FME I2C Master Init.\n"); + + fme->i2c_master = altera_i2c_probe(i2c); + if (!fme->i2c_master) + return -ENODEV; + + if (i2c_mac_rom_test(fme->i2c_master)) + return -ENODEV; + + return 0; +} + +static void fme_i2c_uninit(struct feature *feature) +{ + struct ifpga_fme_hw *fme = (struct ifpga_fme_hw *)feature->parent; + + altera_i2c_remove(fme->i2c_master); +} + +struct feature_ops fme_i2c_master_ops = { + .init = fme_i2c_init, + .uinit = fme_i2c_uninit, +}; + +static int fme_phy_group_init(struct feature *feature) +{ + struct ifpga_fme_hw *fme = (struct ifpga_fme_hw *)feature->parent; + struct phy_group_device *dev; + + dev = (struct phy_group_device *)phy_group_probe(feature->addr); + if (!dev) + return -ENODEV; + + fme->phy_dev[dev->group_index] = dev; + + dev_info(NULL, "FME PHY Group %d Init.\n", dev->group_index); + dev_info(NULL, "FME PHY Group register base address %llx.\n", + (unsigned long long)dev->base); + + return 0; +} + +static void fme_phy_group_uinit(struct feature *feature) +{ + UNUSED(feature); +} + +struct feature_ops fme_phy_group_ops = { + .init = fme_phy_group_init, + .uinit = fme_phy_group_uinit, +}; + +static int fme_hssi_eth_init(struct feature *feature) +{ + UNUSED(feature); + return 0; +} + +static void fme_hssi_eth_uinit(struct feature *feature) +{ + UNUSED(feature); +} + +struct feature_ops fme_hssi_eth_ops = { + .init = fme_hssi_eth_init, + .uinit = fme_hssi_eth_uinit, +}; + +static int fme_emif_init(struct feature *feature) +{ + UNUSED(feature); + return 0; +} + +static void fme_emif_uinit(struct feature *feature) +{ + UNUSED(feature); +} + +struct feature_ops fme_emif_ops = { + .init = fme_emif_init, + .uinit = fme_emif_uinit, +}; + +static int fme_check_retimter_ports(struct ifpga_fme_hw *fme, int 
port) +{ + struct intel_max10_device *dev; + int ports; + + dev = (struct intel_max10_device *)fme->max10_dev; + if (!dev) + return -ENODEV; + + ports = dev->num_retimer * dev->num_port; + + if (port > ports || port < 0) + return -EINVAL; + + return 0; +} + +int fme_mgr_read_mac_rom(struct ifpga_fme_hw *fme, int offset, + void *buf, int size) +{ + struct altera_i2c_dev *dev; + + dev = fme->i2c_master; + if (!dev) + return -ENODEV; + + if (fme_check_retimter_ports(fme, offset/size)) + return -EINVAL; + + return at24_eeprom_read(dev, AT24512_SLAVE_ADDR, offset, buf, size); +} + +int fme_mgr_write_mac_rom(struct ifpga_fme_hw *fme, int offset, + void *buf, int size) +{ + struct altera_i2c_dev *dev; + + dev = fme->i2c_master; + if (!dev) + return -ENODEV; + + if (fme_check_retimter_ports(fme, offset/size)) + return -EINVAL; + + return at24_eeprom_write(dev, AT24512_SLAVE_ADDR, offset, buf, size); +} + +int fme_mgr_read_phy_reg(struct ifpga_fme_hw *fme, int phy_group, + u8 entry, u16 reg, u32 *value) +{ + struct phy_group_device *dev; + + if (phy_group > (MAX_PHY_GROUP_DEVICES - 1)) + return -EINVAL; + + dev = (struct phy_group_device *)fme->phy_dev[phy_group]; + if (!dev) + return -ENODEV; + + if (entry > dev->entries) + return -EINVAL; + + + return phy_group_read_reg(dev, entry, reg, value); +} + +int fme_mgr_write_phy_reg(struct ifpga_fme_hw *fme, int phy_group, + u8 entry, u16 reg, u32 value) +{ + struct phy_group_device *dev; + + if (phy_group > (MAX_PHY_GROUP_DEVICES - 1)) + return -EINVAL; + + dev = (struct phy_group_device *)fme->phy_dev[phy_group]; + if (!dev) + return -ENODEV; + + return phy_group_write_reg(dev, entry, reg, value); +} + +int fme_mgr_get_retimer_info(struct ifpga_fme_hw *fme, + struct opae_retimer_info *info) +{ + struct intel_max10_device *dev; + + dev = (struct intel_max10_device *)fme->max10_dev; + if (!dev) + return -ENODEV; + + info->num_retimer = dev->num_retimer; + info->num_port = dev->num_port; + + return 0; +} + +int 
fme_mgr_set_retimer_speed(struct ifpga_fme_hw *fme, int speed) +{ + struct intel_max10_device *dev; + int i, j, num; + int ret = 0; + + dev = (struct intel_max10_device *)fme->max10_dev; + if (!dev) + return -ENODEV; + + num = dev->num_retimer < INTEL_MAX10_MAX_MDIO_DEVS ? + dev->num_retimer : INTEL_MAX10_MAX_MDIO_DEVS; + + for (i = 0; i < num; i++) + for (j = 0; j < dev->num_port; j++) { + ret = pkvl_set_speed_mode(dev->mdio[i], j, speed); + if (ret) { + printf("pkvl_%d set port_%d speed %d fail\n", + i, j, speed); + break; + } + } + + return ret; +} + +int fme_mgr_get_retimer_status(struct ifpga_fme_hw *fme, int port, + struct opae_retimer_status *status) +{ + struct intel_max10_device *dev; + struct altera_mdio_dev *mdio; + int ports; + int ret; + + dev = (struct intel_max10_device *)fme->max10_dev; + if (!dev) + return -ENODEV; + + ports = dev->num_retimer * dev->num_port; + + if (port > ports || port < 0) + return -EINVAL; + + mdio = dev->mdio[port/dev->num_port]; + port = port % dev->num_port; + + ret = pkvl_get_port_speed_status(mdio, port, &status->speed); + if (ret) + goto error; + + ret = pkvl_get_port_line_link_status(mdio, port, &status->line_link); + if (ret) + goto error; + + ret = pkvl_get_port_host_link_status(mdio, port, &status->host_link); + if (ret) + goto error; + + dev_info(NULL, "get retimer status: pkvl:%d, port:%d, speed:%d, line:%d, host:%d\n", + mdio->index, port, status->speed, + status->line_link, status->host_link); + + return 0; + +error: + return ret; +} diff --git a/drivers/raw/ifpga_rawdev/base/ifpga_fme_pr.c b/drivers/raw/ifpga_rawdev/base/ifpga_fme_pr.c index ec0beeb..8890f4b 100644 --- a/drivers/raw/ifpga_rawdev/base/ifpga_fme_pr.c +++ b/drivers/raw/ifpga_rawdev/base/ifpga_fme_pr.c @@ -257,7 +257,7 @@ static int fme_pr(struct ifpga_hw *hw, u32 port_id, void *buffer, u32 size, return -EINVAL; } - memset(&info, 0, sizeof(struct fpga_pr_info)); + opae_memset(&info, 0, sizeof(struct fpga_pr_info)); info.flags = 
FPGA_MGR_PARTIAL_RECONFIG; info.port_id = port_id; diff --git a/drivers/raw/ifpga_rawdev/base/ifpga_hw.h b/drivers/raw/ifpga_rawdev/base/ifpga_hw.h index a20520c..6ac54c6 100644 --- a/drivers/raw/ifpga_rawdev/base/ifpga_hw.h +++ b/drivers/raw/ifpga_rawdev/base/ifpga_hw.h @@ -7,19 +7,30 @@ #include "ifpga_defines.h" #include "opae_ifpga_hw_api.h" +#include "opae_phy_group.h" + +/** List of private features */ +TAILQ_HEAD(ifpga_feature_list, feature); enum ifpga_feature_state { IFPGA_FEATURE_UNUSED = 0, IFPGA_FEATURE_ATTACHED, }; +enum feature_type { + FEATURE_FME_TYPE = 0, + FEATURE_PORT_TYPE, +}; + struct feature_irq_ctx { int eventfd; int idx; }; struct feature { + TAILQ_ENTRY(feature)next; enum ifpga_feature_state state; + enum feature_type type; const char *name; u64 id; u8 *addr; @@ -34,6 +45,8 @@ struct feature { void *parent; /* to parent hw data structure */ struct feature_ops *ops;/* callback to this private feature */ + unsigned int vec_start; + unsigned int vec_cnt; }; struct feature_ops { @@ -52,7 +65,7 @@ enum ifpga_fme_state { struct ifpga_fme_hw { enum ifpga_fme_state state; - struct feature sub_feature[FME_FEATURE_ID_MAX]; + struct ifpga_feature_list feature_list; spinlock_t lock; /* protect hardware access */ void *parent; /* pointer to ifpga_hw */ @@ -67,6 +80,10 @@ struct ifpga_fme_hw { u32 cache_size; u32 capability; + + void *i2c_master; /* I2C Master device */ + void *max10_dev; /* MAX10 device */ + void *phy_dev[MAX_PHY_GROUP_DEVICES]; }; enum ifpga_port_state { @@ -78,7 +95,7 @@ enum ifpga_port_state { struct ifpga_port_hw { enum ifpga_port_state state; - struct feature sub_feature[PORT_FEATURE_ID_MAX]; + struct ifpga_feature_list feature_list; spinlock_t lock; /* protect access to hw */ void *parent; /* pointer to ifpga_hw */ diff --git a/drivers/raw/ifpga_rawdev/base/ifpga_port.c b/drivers/raw/ifpga_rawdev/base/ifpga_port.c index 8b5668d..4628783 100644 --- a/drivers/raw/ifpga_rawdev/base/ifpga_port.c +++ 
b/drivers/raw/ifpga_rawdev/base/ifpga_port.c @@ -386,3 +386,24 @@ struct feature_ops ifpga_rawdev_port_uint_ops = { .init = port_uint_init, .uinit = port_uint_uinit, }; + +static int port_afu_init(struct feature *feature) +{ + UNUSED(feature); + + dev_info(NULL, "PORT AFU Init.\n"); + + return 0; +} + +static void port_afu_uinit(struct feature *feature) +{ + UNUSED(feature); + + dev_info(NULL, "PORT AFU UInit.\n"); +} + +struct feature_ops ifpga_rawdev_port_afu_ops = { + .init = port_afu_init, + .uinit = port_afu_uinit, +}; diff --git a/drivers/raw/ifpga_rawdev/base/opae_at24_eeprom.c b/drivers/raw/ifpga_rawdev/base/opae_at24_eeprom.c new file mode 100644 index 0000000..cc4901a --- /dev/null +++ b/drivers/raw/ifpga_rawdev/base/opae_at24_eeprom.c @@ -0,0 +1,89 @@ + +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2010-2018 Intel Corporation + */ + +#include "opae_osdep.h" +#include "opae_i2c.h" +#include "opae_at24_eeprom.h" + +#define AT24_READ_RETRY 10 + +static int at24_eeprom_read_and_try(struct altera_i2c_dev *dev, + unsigned int slave_addr, + u32 offset, u8 *buf, u32 len) +{ + int i; + int ret = 0; + + for (i = 0; i < AT24_READ_RETRY; i++) { + ret = i2c_read16(dev, slave_addr, offset, + buf, len); + if (ret == 0) + break; + + opae_udelay(100); + } + + return ret; +} + +int at24_eeprom_read(struct altera_i2c_dev *dev, unsigned int slave_addr, + u32 offset, u8 *buf, int count) +{ + int len; + int status; + int read_count = 0; + + if (!count) + return count; + + if (count > AT24C512_IO_LIMIT) + len = AT24C512_IO_LIMIT; + else + len = count; + + while (count) { + status = at24_eeprom_read_and_try(dev, slave_addr, offset, + buf, len); + if (status) + break; + + buf += len; + offset += len; + count -= len; + read_count += len; + } + + return read_count; +} + +int at24_eeprom_write(struct altera_i2c_dev *dev, unsigned int slave_addr, + u32 offset, u8 *buf, int count) +{ + int len; + int status; + int write_count = 0; + + if (!count) + return count; + + if 
(count > AT24C512_PAGE_SIZE) + len = AT24C512_PAGE_SIZE; + else + len = count; + + while (count) { + status = i2c_write16(dev, slave_addr, offset, buf, len); + if (status) + break; + + buf += len; + offset += len; + count -= len; + write_count += len; + } + + return write_count; +} + diff --git a/drivers/raw/ifpga_rawdev/base/opae_at24_eeprom.h b/drivers/raw/ifpga_rawdev/base/opae_at24_eeprom.h new file mode 100644 index 0000000..4aa0ee2 --- /dev/null +++ b/drivers/raw/ifpga_rawdev/base/opae_at24_eeprom.h @@ -0,0 +1,14 @@ + +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2010-2018 Intel Corporation + */ + +#define AT24C512_PAGE_SIZE 128 +#define AT24C512_IO_LIMIT 128 + +#define AT24512_SLAVE_ADDR 0x51 + +int at24_eeprom_read(struct altera_i2c_dev *dev, unsigned int slave_addr, + u32 offset, u8 *buf, int count); +int at24_eeprom_write(struct altera_i2c_dev *dev, unsigned int slave_addr, + u32 offset, u8 *buf, int count); diff --git a/drivers/raw/ifpga_rawdev/base/opae_hw_api.c b/drivers/raw/ifpga_rawdev/base/opae_hw_api.c index 1541b67..c04bff2 100644 --- a/drivers/raw/ifpga_rawdev/base/opae_hw_api.c +++ b/drivers/raw/ifpga_rawdev/base/opae_hw_api.c @@ -210,12 +210,14 @@ int opae_acc_get_uuid(struct opae_accelerator *acc, * opae_manager_alloc - alloc opae_manager data structure * @name: manager name. * @ops: ops of this manager. + * @network_ops: ops of network management. * @data: private data of this manager. * * Return: opae_manager on success, otherwise NULL. 
*/ struct opae_manager * -opae_manager_alloc(const char *name, struct opae_manager_ops *ops, void *data) +opae_manager_alloc(const char *name, struct opae_manager_ops *ops, + struct opae_manager_networking_ops *network_ops, void *data) { struct opae_manager *mgr = opae_zmalloc(sizeof(*mgr)); @@ -224,6 +226,7 @@ struct opae_manager * mgr->name = name; mgr->ops = ops; + mgr->network_ops = network_ops; mgr->data = data; opae_log("%s %p\n", __func__, mgr); @@ -304,7 +307,7 @@ static struct opae_adapter_ops *match_ops(struct opae_adapter *adapter) /** * opae_adapter_init - init opae_adapter data structure - * @adapter: pointer of opae_adapter data structure + * @adapter: pointer to opae_adapter data structure * @name: adapter name. * @data: private data of this adapter. * @@ -325,6 +328,26 @@ int opae_adapter_init(struct opae_adapter *adapter, } /** + * opae_adapter_alloc - alloc opae_adapter data structure + * @name: adapter name. + * @data: private data of this adapter. + * + * Return: opae_adapter on success, otherwise NULL. + */ +struct opae_adapter *opae_adapter_alloc(const char *name, void *data) +{ + struct opae_adapter *adapter = opae_zmalloc(sizeof(*adapter)); + + if (!adapter) + return NULL; + + if (opae_adapter_init(adapter, name, data)) + return NULL; + + return adapter; +} + +/** * opae_adapter_enumerate - enumerate this adapter * @adapter: adapter to enumerate. 
* @@ -341,7 +364,7 @@ int opae_adapter_enumerate(struct opae_adapter *adapter) ret = adapter->ops->enumerate(adapter); if (!ret) - opae_adapter_dump(adapter, 1); + opae_adapter_dump(adapter, 0); return ret; } @@ -379,3 +402,163 @@ struct opae_accelerator * return NULL; } + +/** + * opae_manager_read_mac_rom - read the content of the MAC ROM + * @mgr: opae_manager for MAC ROM + * @port: the port number of retimer + * @addr: buffer of the MAC address + * + * Return: return the bytes of read successfully + */ +int opae_manager_read_mac_rom(struct opae_manager *mgr, int port, + struct opae_ether_addr *addr) +{ + if (!mgr || !mgr->network_ops) + return -EINVAL; + + if (mgr->network_ops->read_mac_rom) + return mgr->network_ops->read_mac_rom(mgr, + port * sizeof(struct opae_ether_addr), + addr, sizeof(struct opae_ether_addr)); + + return -ENOENT; +} + +/** + * opae_manager_write_mac_rom - write data into MAC ROM + * @mgr: opae_manager for MAC ROM + * @port: the port number of the retimer + * @addr: data of the MAC address + * + * Return: return written bytes + */ +int opae_manager_write_mac_rom(struct opae_manager *mgr, int port, + struct opae_ether_addr *addr) +{ + if (!mgr || !mgr->network_ops) + return -EINVAL; + + if (mgr->network_ops && mgr->network_ops->write_mac_rom) + return mgr->network_ops->write_mac_rom(mgr, + port * sizeof(struct opae_ether_addr), + addr, sizeof(struct opae_ether_addr)); + + return -ENOENT; +} + +/** + * opae_manager_read_phy_reg - read phy register + * @mgr: opae_manager for PHY + * @phy_group: PHY group index + * @entry: PHY entries + * @reg: register address in PHY group + * @val: register value + * + * Return: 0 on success, otherwise error code + */ +int opae_manager_read_phy_reg(struct opae_manager *mgr, int phy_group, + u8 entry, u32 reg, u32 *val) +{ + if (!mgr || !mgr->network_ops) + return -EINVAL; + + if (mgr->network_ops->read_phy_reg) + return mgr->network_ops->read_phy_reg(mgr, phy_group, + entry, reg, val); + + return -ENOENT; +} 
+ +/** + * opae_manager_write_phy_reg - write PHY group register + * @mgr: opae_manager for PHY Group + * @phy_group: PHY Group index + * @entry: PHY Group entries + * @reg: register address of PHY Group + * @val: data will write to register + * + * Return: 0 on success, otherwise error code + */ +int opae_manager_write_phy_reg(struct opae_manager *mgr, int phy_group, + u8 entry, u32 reg, u32 val) +{ + if (!mgr || !mgr->network_ops) + return -EINVAL; + + if (mgr->network_ops->write_phy_reg) + return mgr->network_ops->write_phy_reg(mgr, phy_group, + entry, reg, val); + + return -ENOENT; +} + +/** + * opae_manager_get_retimer_info - get retimer info like PKVL chip + * @mgr: opae_manager for retimer + * @info: info return to caller + * + * Return: 0 on success, otherwise error code + */ +int opae_manager_get_retimer_info(struct opae_manager *mgr, + struct opae_retimer_info *info) +{ + if (!mgr || !mgr->network_ops) + return -EINVAL; + + //if (mgr->network_ops->get_retimer_info) + // return mgr->network_ops->get_retimer_info(mgr, info); + + //return -ENOENT; + info->num_retimer = 4; + info->num_port = 8; + info->support_speed = MXD_10GB; + + return 0; +} + +/** + * opae_manager_set_retimer_speed - configure the speed of retimer, + * like 10GB or 25GB + * @mgr: opae_manager of MDIO + * @speed: the speed of retimer + * + * Return: 0 on success, otherwise error code + */ +int opae_manager_set_retimer_speed(struct opae_manager *mgr, int speed) +{ + if (!mgr || !mgr->network_ops) + return -EINVAL; + + //if (mgr->network_ops->set_retimer_speed) + // return mgr->network_ops->set_retimer_speed(mgr, speed); + UNUSED(speed); + + return 0; +} + +/** + * opae_manager_get_retimer_status - get retimer status + * @mgr: opae_manager of retimer + * @status: status of retimer + * + * Return: 0 on success, otherwise error code + */ +int opae_manager_get_retimer_status(struct opae_manager *mgr, + int port, struct opae_retimer_status *status) +{ + if (!mgr || !mgr->network_ops) + return 
-EINVAL; + + //if (mgr->network_ops->get_retimer_status) + // return mgr->network_ops->get_retimer_status(mgr, + // port, status); + + //return -ENOENT; + UNUSED(port); + status->speed = MXD_10GB; + status->line_link = 1; + status->host_link = 1; + + return 0; +} diff --git a/drivers/raw/ifpga_rawdev/base/opae_hw_api.h b/drivers/raw/ifpga_rawdev/base/opae_hw_api.h index 332e0f3..0bfe5cc 100644 --- a/drivers/raw/ifpga_rawdev/base/opae_hw_api.h +++ b/drivers/raw/ifpga_rawdev/base/opae_hw_api.h @@ -11,6 +11,7 @@ #include #include "opae_osdep.h" +#include "opae_mdio.h" #ifndef PCI_MAX_RESOURCE #define PCI_MAX_RESOURCE 6 @@ -25,6 +26,7 @@ enum opae_adapter_type { /* OPAE Manager Data Structure */ struct opae_manager_ops; +struct opae_manager_networking_ops; /* * opae_manager has pointer to its parent adapter, as it could be able to manage @@ -35,6 +37,7 @@ struct opae_manager { const char *name; struct opae_adapter *adapter; struct opae_manager_ops *ops; + struct opae_manager_networking_ops *network_ops; void *data; }; @@ -44,9 +47,27 @@ struct opae_manager_ops { u32 size, u64 *status); }; +/* networking management ops in FME */ +struct opae_manager_networking_ops { + int (*read_mac_rom)(struct opae_manager *mgr, int offset, void *buf, + int size); + int (*write_mac_rom)(struct opae_manager *mgr, int offset, void *buf, + int size); + int (*read_phy_reg)(struct opae_manager *mgr, int phy_group, + u8 entry, u16 reg, u32 *value); + int (*write_phy_reg)(struct opae_manager *mgr, int phy_group, + u8 entry, u16 reg, u32 value); + int (*get_retimer_info)(struct opae_manager *mgr, + struct opae_retimer_info *info); + int (*set_retimer_speed)(struct opae_manager *mgr, int speed); + int (*get_retimer_status)(struct opae_manager *mgr, int port, + struct opae_retimer_status *status); +}; + /* OPAE Manager APIs */ struct opae_manager * -opae_manager_alloc(const char *name, struct opae_manager_ops *ops, void *data); +opae_manager_alloc(const char *name, struct opae_manager_ops *ops, 
+ struct opae_manager_networking_ops *network_ops, void *data); #define opae_manager_free(mgr) opae_free(mgr) int opae_manager_flash(struct opae_manager *mgr, int acc_id, void *buf, u32 size, u64 *status); @@ -227,6 +248,7 @@ struct opae_adapter { int opae_adapter_init(struct opae_adapter *adapter, const char *name, void *data); +struct opae_adapter *opae_adapter_alloc(const char *name, void *data); #define opae_adapter_free(adapter) opae_free(adapter) int opae_adapter_enumerate(struct opae_adapter *adapter); @@ -251,4 +273,26 @@ static inline void opae_adapter_remove_acc(struct opae_adapter *adapter, { TAILQ_REMOVE(&adapter->acc_list, acc, node); } + +/* OPAE vBNG network datastruct */ +#define OPAE_ETHER_ADDR_LEN 6 + +struct opae_ether_addr { + unsigned char addr_bytes[OPAE_ETHER_ADDR_LEN]; +} __attribute__((__packed__)); + +/* OPAE vBNG network API*/ +int opae_manager_read_mac_rom(struct opae_manager *mgr, int port, + struct opae_ether_addr *addr); +int opae_manager_write_mac_rom(struct opae_manager *mgr, int port, + struct opae_ether_addr *addr); +int opae_manager_read_phy_reg(struct opae_manager *mgr, int phy_group, + u8 entry, u32 reg, u32 *value); +int opae_manager_write_phy_reg(struct opae_manager *mgr, int phy_group, + u8 entry, u32 reg, u32 value); +int opae_manager_get_retimer_info(struct opae_manager *mgr, + struct opae_retimer_info *info); +int opae_manager_set_retimer_speed(struct opae_manager *mgr, int speed); +int opae_manager_get_retimer_status(struct opae_manager *mgr, int port, + struct opae_retimer_status *status); #endif /* _OPAE_HW_API_H_*/ diff --git a/drivers/raw/ifpga_rawdev/base/opae_i2c.c b/drivers/raw/ifpga_rawdev/base/opae_i2c.c new file mode 100644 index 0000000..415afab --- /dev/null +++ b/drivers/raw/ifpga_rawdev/base/opae_i2c.c @@ -0,0 +1,490 @@ + +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2010-2018 Intel Corporation + */ + +#include "opae_osdep.h" +#include "opae_i2c.h" + +static int i2c_transfer(struct 
altera_i2c_dev *dev, + struct i2c_msg *msg, int num) +{ + int ret, try; + + for (ret = 0, try = 0; try < I2C_XFER_RETRY; try++) { + ret = dev->xfer(dev, msg, num); + if (ret != -EAGAIN) + break; + } + + return ret; +} + +/** + * i2c read function + */ +int i2c_read(struct altera_i2c_dev *dev, int flags, unsigned int slave_addr, + u32 offset, u8 *buf, u32 count) +{ + u8 msgbuf[2]; + int i = 0; + + if (flags & I2C_FLAG_ADDR16) + msgbuf[i++] = offset >> 8; + + msgbuf[i++] = offset; + + struct i2c_msg msg[2] = { + { + .addr = slave_addr, + .flags = 0, + .len = i, + .buf = msgbuf, + }, + { + .addr = slave_addr, + .flags = I2C_M_RD, + .len = count, + .buf = buf, + }, + }; + + if (!dev->xfer) + return -ENODEV; + + return i2c_transfer(dev, msg, 2); +} + +int i2c_write(struct altera_i2c_dev *dev, int flags, unsigned int slave_addr, + u32 offset, u8 *buffer, int len) +{ + struct i2c_msg msg; + u8 *buf; + int ret; + int i = 0; + + if (!dev->xfer) + return -ENODEV; + + buf = opae_malloc(I2C_MAX_OFFSET_LEN + len); + if (!buf) + return -ENOMEM; + + msg.addr = slave_addr; + msg.flags = 0; + msg.buf = buf; + + if (flags & I2C_FLAG_ADDR16) + msg.buf[i++] = offset >> 8; + + msg.buf[i++] = offset; + opae_memcpy(&msg.buf[i], buffer, len); + msg.len = i + len; + + ret = i2c_transfer(dev, &msg, 1); + + opae_free(buf); + return ret; +} + +int i2c_read8(struct altera_i2c_dev *dev, unsigned int slave_addr, u32 offset, + u8 *buf, u32 count) +{ + return i2c_read(dev, 0, slave_addr, offset, buf, count); +} + +int i2c_read16(struct altera_i2c_dev *dev, unsigned int slave_addr, u32 offset, + u8 *buf, u32 count) +{ + return i2c_read(dev, I2C_FLAG_ADDR16, slave_addr, offset, + buf, count); +} + +int i2c_write8(struct altera_i2c_dev *dev, unsigned int slave_addr, u32 offset, + u8 *buf, u32 count) +{ + return i2c_write(dev, 0, slave_addr, offset, buf, count); +} + +int i2c_write16(struct altera_i2c_dev *dev, unsigned int slave_addr, u32 offset, + u8 *buf, u32 count) +{ + return i2c_write(dev, 
I2C_FLAG_ADDR16, slave_addr, offset, + buf, count); +} + +static void i2c_indirect_write(struct altera_i2c_dev *dev, u32 reg, + u32 value) +{ + u64 ctrl; + + ctrl = I2C_CTRL_W | (reg >> 2); + + opae_writeq(value & I2C_WRITE_DATA_MASK, dev->base + I2C_WRITE); + opae_writeq(ctrl, dev->base + I2C_CTRL); +} + +static u32 i2c_indirect_read(struct altera_i2c_dev *dev, u32 reg) +{ + u64 tmp; + u64 ctrl; + u32 value; + + ctrl = I2C_CTRL_R | (reg >> 2); + opae_writeq(ctrl, dev->base + I2C_CTRL); + + /* FIXME: Read one more time to avoid HW timing issue. */ + tmp = opae_readq(dev->base + I2C_READ); + tmp = opae_readq(dev->base + I2C_READ); + + value = tmp & I2C_READ_DATA_MASK; + + return value; +} + +static void altera_i2c_transfer(struct altera_i2c_dev *dev, u32 data) +{ + /*send STOP on last byte*/ + if (dev->msg_len == 1) + data |= ALTERA_I2C_TFR_CMD_STO; + if (dev->msg_len > 0) + i2c_indirect_write(dev, ALTERA_I2C_TFR_CMD, data); +} + +static void altera_i2c_disable(struct altera_i2c_dev *dev) +{ + u32 val = i2c_indirect_read(dev, ALTERA_I2C_CTRL); + + i2c_indirect_write(dev, ALTERA_I2C_CTRL, val&~ALTERA_I2C_CTRL_EN); +} + +static void altera_i2c_enable(struct altera_i2c_dev *dev) +{ + u32 val = i2c_indirect_read(dev, ALTERA_I2C_CTRL); + + i2c_indirect_write(dev, ALTERA_I2C_CTRL, val | ALTERA_I2C_CTRL_EN); +} + +static void altera_i2c_reset(struct altera_i2c_dev *dev) +{ + altera_i2c_disable(dev); + altera_i2c_enable(dev); +} + +static int altera_i2c_wait_core_idle(struct altera_i2c_dev *dev) +{ + int retry = 0; + + while (i2c_indirect_read(dev, ALTERA_I2C_STATUS) + & ALTERA_I2C_STAT_CORE) { + if (retry++ > ALTERA_I2C_TIMEOUT_US) { + dev_err(dev, "timeout: Core Status not IDLE...\n"); + return -EBUSY; + } + udelay(1); + } + + return 0; +} + +static void altera_i2c_enable_interrupt(struct altera_i2c_dev *dev, + u32 mask, bool enable) +{ + u32 status; + + status = i2c_indirect_read(dev, ALTERA_I2C_ISER); + if (enable) + dev->isr_mask = status | mask; + else + dev->isr_mask 
= status&~mask; + + i2c_indirect_write(dev, ALTERA_I2C_ISER, dev->isr_mask); +} + +static void altera_i2c_interrupt_clear(struct altera_i2c_dev *dev, u32 mask) +{ + u32 int_en; + + int_en = i2c_indirect_read(dev, ALTERA_I2C_ISR); + + i2c_indirect_write(dev, ALTERA_I2C_ISR, int_en | mask); +} + +static void altera_i2c_read_rx_fifo(struct altera_i2c_dev *dev) +{ + size_t rx_avail; + size_t bytes; + + rx_avail = i2c_indirect_read(dev, ALTERA_I2C_RX_FIFO_LVL); + bytes = min(rx_avail, dev->msg_len); + + while (bytes-- > 0) { + *dev->buf++ = i2c_indirect_read(dev, ALTERA_I2C_RX_DATA); + dev->msg_len--; + altera_i2c_transfer(dev, 0); + } +} + +static void altera_i2c_stop(struct altera_i2c_dev *dev) +{ + i2c_indirect_write(dev, ALTERA_I2C_TFR_CMD, ALTERA_I2C_TFR_CMD_STO); +} + +static int altera_i2c_fill_tx_fifo(struct altera_i2c_dev *dev) +{ + size_t tx_avail; + int bytes; + int ret; + + tx_avail = dev->fifo_size - + i2c_indirect_read(dev, ALTERA_I2C_TC_FIFO_LVL); + bytes = min(tx_avail, dev->msg_len); + ret = dev->msg_len - bytes; + + while (bytes-- > 0) { + altera_i2c_transfer(dev, *dev->buf++); + dev->msg_len--; + } + + return ret; +} + +static u8 i2c_8bit_addr_from_msg(const struct i2c_msg *msg) +{ + return (msg->addr << 1) | (msg->flags & I2C_M_RD ? 
1 : 0); +} + +static int altera_i2c_wait_complete(struct altera_i2c_dev *dev, + u32 *status) +{ + int retry = 0; + + while (!((*status = i2c_indirect_read(dev, ALTERA_I2C_ISR)) + & dev->isr_mask)) { + if (retry++ > ALTERA_I2C_TIMEOUT_US) + return -EBUSY; + + udelay(1000); + } + + return 0; +} + +static bool altera_handle_i2c_status(struct altera_i2c_dev *dev, u32 status) +{ + bool read, finish = false; + int ret; + + read = (dev->msg->flags & I2C_M_RD) != 0; + + if (status & ALTERA_I2C_ISR_ARB) { + altera_i2c_interrupt_clear(dev, ALTERA_I2C_ISR_ARB); + dev->msg_err = -EAGAIN; + finish = true; + } else if (status & ALTERA_I2C_ISR_NACK) { + dev_debug(dev, "could not get ACK\n"); + dev->msg_err = -ENXIO; + altera_i2c_interrupt_clear(dev, ALTERA_I2C_ISR_NACK); + altera_i2c_stop(dev); + finish = true; + } else if (read && (status & ALTERA_I2C_ISR_RXOF)) { + /* RX FIFO Overflow */ + altera_i2c_read_rx_fifo(dev); + altera_i2c_interrupt_clear(dev, ALTERA_I2C_ISER_RXOF_EN); + altera_i2c_stop(dev); + dev_err(dev, "error: RX FIFO overflow\n"); + finish = true; + } else if (read && (status & ALTERA_I2C_ISR_RXRDY)) { + altera_i2c_read_rx_fifo(dev); + altera_i2c_interrupt_clear(dev, ALTERA_I2C_ISR_RXRDY); + if (!dev->msg_len) + finish = true; + } else if (!read && (status & ALTERA_I2C_ISR_TXRDY)) { + altera_i2c_interrupt_clear(dev, ALTERA_I2C_ISR_TXRDY); + if (dev->msg_len > 0) + altera_i2c_fill_tx_fifo(dev); + else + finish = true; + } else { + dev_err(dev, "unexpected status:0x%x\n", status); + altera_i2c_interrupt_clear(dev, ALTERA_I2C_ALL_IRQ); + } + + if (finish) { + ret = altera_i2c_wait_core_idle(dev); + if (ret) + dev_err(dev, "message timeout\n"); + + altera_i2c_enable_interrupt(dev, ALTERA_I2C_ALL_IRQ, false); + altera_i2c_interrupt_clear(dev, ALTERA_I2C_ALL_IRQ); + dev_debug(dev, "message done\n"); + } + + return finish; +} + +static bool altera_i2c_poll_status(struct altera_i2c_dev *dev) +{ + u32 status; + bool finish = false; + int i = 0; + + do { + if 
(altera_i2c_wait_complete(dev, &status)) { + dev_err(dev, "altera i2c wait complete timeout, status=0x%x\n", + status); + return false; + } + + finish = altera_handle_i2c_status(dev, status); + + if (i++ > I2C_XFER_RETRY) + break; + + } while (!finish); + + return finish; +} + +static int altera_i2c_xfer_msg(struct altera_i2c_dev *dev, + struct i2c_msg *msg) +{ + u32 int_mask = ALTERA_I2C_ISR_RXOF | + ALTERA_I2C_ISR_ARB | ALTERA_I2C_ISR_NACK; + u8 addr = i2c_8bit_addr_from_msg(msg); + bool finish; + + dev->msg = msg; + dev->msg_len = msg->len; + dev->buf = msg->buf; + dev->msg_err = 0; + altera_i2c_enable(dev); + + /*make sure RX FIFO is empty*/ + do { + i2c_indirect_read(dev, ALTERA_I2C_RX_DATA); + } while (i2c_indirect_read(dev, ALTERA_I2C_RX_FIFO_LVL)); + + i2c_indirect_write(dev, ALTERA_I2C_TFR_CMD_RW_D, + ALTERA_I2C_TFR_CMD_STA | addr); + + /*enable irq*/ + if (msg->flags & I2C_M_RD) { + int_mask |= ALTERA_I2C_ISR_RXOF | ALTERA_I2C_ISR_RXRDY; + /* in polling mode, we should set this ISR register? 
*/ + altera_i2c_enable_interrupt(dev, int_mask, true); + altera_i2c_transfer(dev, 0); + } else { + int_mask |= ALTERA_I2C_ISR_TXRDY; + altera_i2c_enable_interrupt(dev, int_mask, true); + altera_i2c_fill_tx_fifo(dev); + } + + finish = altera_i2c_poll_status(dev); + if (!finish) { + dev->msg_err = -ETIMEDOUT; + dev_err(dev, "%s: i2c transfer error\n", __func__); + } + + altera_i2c_enable_interrupt(dev, int_mask, false); + + if (i2c_indirect_read(dev, ALTERA_I2C_STATUS) & ALTERA_I2C_STAT_CORE) + dev_info(dev, "core not idle...\n"); + + altera_i2c_disable(dev); + + return dev->msg_err; +} + +static int altera_i2c_xfer(struct altera_i2c_dev *dev, + struct i2c_msg *msg, int num) +{ + int ret = 0; + int i; + + for (i = 0; i < num; i++, msg++) { + ret = altera_i2c_xfer_msg(dev, msg); + if (ret) + break; + } + + return ret; +} + +static void altera_i2c_hardware_init(struct altera_i2c_dev *dev) +{ + u32 divisor = dev->i2c_clk / dev->bus_clk_rate; + u32 clk_mhz = dev->i2c_clk / 1000000; + u32 tmp = (ALTERA_I2C_THRESHOLD << ALTERA_I2C_CTRL_RXT_SHFT) | + (ALTERA_I2C_THRESHOLD << ALTERA_I2C_CTRL_TCT_SHFT); + u32 t_high, t_low; + + if (dev->bus_clk_rate <= 100000) { + tmp &= ~ALTERA_I2C_CTRL_BSPEED; + /*standard mode SCL 50/50*/ + t_high = divisor*1/2; + t_low = divisor*1/2; + } else { + tmp |= ALTERA_I2C_CTRL_BSPEED; + /*Fast mode SCL 33/66*/ + t_high = divisor*1/3; + t_low = divisor*2/3; + } + + i2c_indirect_write(dev, ALTERA_I2C_CTRL, tmp); + + dev_info(dev, "%s: rate=%uHz per_clk=%uMHz -> ratio=1:%u\n", + __func__, dev->bus_clk_rate, clk_mhz, divisor); + + /*reset the i2c*/ + altera_i2c_reset(dev); + + /*Set SCL high Time*/ + i2c_indirect_write(dev, ALTERA_I2C_SCL_HIGH, t_high); + /*Set SCL low time*/ + i2c_indirect_write(dev, ALTERA_I2C_SCL_LOW, t_low); + /*Set SDA Hold time, 300ns*/ + i2c_indirect_write(dev, ALTERA_I2C_SDA_HOLD, (300*clk_mhz)/1000); + + altera_i2c_enable_interrupt(dev, ALTERA_I2C_ALL_IRQ, false); +} + +struct altera_i2c_dev *altera_i2c_probe(void *base) +{ 
+ struct altera_i2c_dev *dev; + + dev = opae_malloc(sizeof(*dev)); + if (!dev) + return NULL; + + dev->base = (u8 *)base; + dev->i2c_param.info = opae_readq(dev->base + I2C_PARAM); + + if (dev->i2c_param.devid != 0xEE011) { + dev_err(dev, "found an invalid i2c master\n"); + return NULL; + } + + dev->fifo_size = dev->i2c_param.fifo_depth; + + if (dev->i2c_param.max_req == ALTERA_I2C_100KHZ) + dev->bus_clk_rate = 100000; + else if (dev->i2c_param.max_req == ALTERA_I2C_400KHZ) + /* i2c bus clk 400KHz*/ + dev->bus_clk_rate = 400000; + + /* i2c input clock for vista creek is 100MHz */ + dev->i2c_clk = dev->i2c_param.ref_clk * 1000000; + dev->xfer = altera_i2c_xfer; + + altera_i2c_hardware_init(dev); + + return dev; +} + +int altera_i2c_remove(struct altera_i2c_dev *dev) +{ + altera_i2c_disable(dev); + + return 0; +} diff --git a/drivers/raw/ifpga_rawdev/base/opae_i2c.h b/drivers/raw/ifpga_rawdev/base/opae_i2c.h new file mode 100644 index 0000000..36f5927 --- /dev/null +++ b/drivers/raw/ifpga_rawdev/base/opae_i2c.h @@ -0,0 +1,127 @@ + +#ifndef _OPAE_I2C_H +#define _OPAE_I2C_H + +#include "opae_osdep.h" + +#define ALTERA_I2C_TFR_CMD 0x00 /* Transfer Command register */ +#define ALTERA_I2C_TFR_CMD_STA BIT(9) /* send START before byte */ +#define ALTERA_I2C_TFR_CMD_STO BIT(8) /* send STOP after byte */ +#define ALTERA_I2C_TFR_CMD_RW_D BIT(0) /* Direction of transfer */ +#define ALTERA_I2C_RX_DATA 0x04 /* RX data FIFO register */ +#define ALTERA_I2C_CTRL 0x8 /* Control register */ +#define ALTERA_I2C_CTRL_RXT_SHFT 4 /* RX FIFO Threshold */ +#define ALTERA_I2C_CTRL_TCT_SHFT 2 /* TFER CMD FIFO Threshold */ +#define ALTERA_I2C_CTRL_BSPEED BIT(1) /* Bus Speed */ +#define ALTERA_I2C_CTRL_EN BIT(0) /* Enable Core */ +#define ALTERA_I2C_ISER 0xc /* Interrupt Status Enable register */ +#define ALTERA_I2C_ISER_RXOF_EN BIT(4) /* Enable RX OVERFLOW IRQ */ +#define ALTERA_I2C_ISER_ARB_EN BIT(3) /* Enable ARB LOST IRQ */ +#define ALTERA_I2C_ISER_NACK_EN BIT(2) /* Enable NACK DET IRQ */ 
+#define ALTERA_I2C_ISER_RXRDY_EN BIT(1) /* Enable RX Ready IRQ */ +#define ALTERA_I2C_ISER_TXRDY_EN BIT(0) /* Enable TX Ready IRQ */ +#define ALTERA_I2C_ISR 0x10 /* Interrupt Status register */ +#define ALTERA_I2C_ISR_RXOF BIT(4) /* RX OVERFLOW */ +#define ALTERA_I2C_ISR_ARB BIT(3) /* ARB LOST */ +#define ALTERA_I2C_ISR_NACK BIT(2) /* NACK DET */ +#define ALTERA_I2C_ISR_RXRDY BIT(1) /* RX Ready */ +#define ALTERA_I2C_ISR_TXRDY BIT(0) /* TX Ready */ +#define ALTERA_I2C_STATUS 0x14 /* Status register */ +#define ALTERA_I2C_STAT_CORE BIT(0) /* Core Status */ +#define ALTERA_I2C_TC_FIFO_LVL 0x18 /* Transfer FIFO LVL register */ +#define ALTERA_I2C_RX_FIFO_LVL 0x1c /* Receive FIFO LVL register */ +#define ALTERA_I2C_SCL_LOW 0x20 /* SCL low count register */ +#define ALTERA_I2C_SCL_HIGH 0x24 /* SCL high count register */ +#define ALTERA_I2C_SDA_HOLD 0x28 /* SDA hold count register */ + +#define ALTERA_I2C_ALL_IRQ (ALTERA_I2C_ISR_RXOF | ALTERA_I2C_ISR_ARB | \ + ALTERA_I2C_ISR_NACK | ALTERA_I2C_ISR_RXRDY | \ + ALTERA_I2C_ISR_TXRDY) + +#define ALTERA_I2C_THRESHOLD 0 +#define ALTERA_I2C_DFLT_FIFO_SZ 8 +#define ALTERA_I2C_TIMEOUT_US 250000 /* 250ms */ + +#define I2C_PARAM 0x8 +#define I2C_CTRL 0x10 +#define I2C_CTRL_R BIT_ULL(9) +#define I2C_CTRL_W BIT_ULL(8) +#define I2C_CTRL_ADDR_MASK GENMASK_ULL(3, 0) +#define I2C_READ 0x18 +#define I2C_READ_DATA_VALID BIT_ULL(32) +#define I2C_READ_DATA_MASK GENMASK_ULL(31, 0) +#define I2C_WRITE 0x20 +#define I2C_WRITE_DATA_MASK GENMASK_ULL(31, 0) + +#define ALTERA_I2C_100KHZ 0 +#define ALTERA_I2C_400KHZ 1 + +/* i2c slave using 16bit address */ +#define I2C_FLAG_ADDR16 1 + +#define I2C_XFER_RETRY 10 + +struct i2c_core_param { + union { + u64 info; + struct { + u16 fifo_depth:9; + u8 interface:1; + /*reference clock of I2C core in MHz*/ + u32 ref_clk:10; + /*Max I2C interface freq*/ + u8 max_req:4; + u64 devid:32; + /* number of MAC address*/ + u8 nu_macs:8; + }; + }; +}; + +struct altera_i2c_dev { + u8 *base; + struct i2c_core_param 
i2c_param; + u32 fifo_size; + u32 bus_clk_rate; /* i2c bus clock */ + u32 i2c_clk; /* i2c input clock */ + struct i2c_msg *msg; + size_t msg_len; + int msg_err; + u32 isr_mask; + u8 *buf; + int (*xfer)(struct altera_i2c_dev *dev, struct i2c_msg *msg, int num); +}; + +/** + * struct i2c_msg: an I2C message + */ +struct i2c_msg { + unsigned int addr; + unsigned int flags; + unsigned int len; + u8 *buf; +}; + +#define I2C_MAX_OFFSET_LEN 4 + +enum i2c_msg_flags { + I2C_M_TEN = 0x0010, /*ten-bit chip address*/ + I2C_M_RD = 0x0001, /*read data*/ + I2C_M_STOP = 0x8000, /*send stop after this message*/ +}; + +struct altera_i2c_dev *altera_i2c_probe(void *base); +int altera_i2c_remove(struct altera_i2c_dev *dev); +int i2c_read(struct altera_i2c_dev *dev, int flags, unsigned int slave_addr, + u32 offset, u8 *buf, u32 count); +int i2c_write(struct altera_i2c_dev *dev, int flags, unsigned int slave_addr, + u32 offset, u8 *buffer, int len); +int i2c_read8(struct altera_i2c_dev *dev, unsigned int slave_addr, u32 offset, + u8 *buf, u32 count); +int i2c_read16(struct altera_i2c_dev *dev, unsigned int slave_addr, u32 offset, + u8 *buf, u32 count); +int i2c_write8(struct altera_i2c_dev *dev, unsigned int slave_addr, u32 offset, + u8 *buf, u32 count); +int i2c_write16(struct altera_i2c_dev *dev, unsigned int slave_addr, u32 offset, + u8 *buf, u32 count); +#endif diff --git a/drivers/raw/ifpga_rawdev/base/opae_intel_max10.c b/drivers/raw/ifpga_rawdev/base/opae_intel_max10.c new file mode 100644 index 0000000..37292f4 --- /dev/null +++ b/drivers/raw/ifpga_rawdev/base/opae_intel_max10.c @@ -0,0 +1,106 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2010-2018 Intel Corporation + */ + +#include "opae_intel_max10.h" + +static struct intel_max10_device *g_max10; + +int max10_reg_read(unsigned int reg, unsigned int *val) +{ + if (!g_max10) + return -ENODEV; + + return spi_transaction_read(g_max10->spi_tran_dev, + reg, 4, (unsigned char *)val); +} + +int max10_reg_write(unsigned 
int reg, unsigned int val) +{ + if (!g_max10) + return -ENODEV; + + return spi_transaction_write(g_max10->spi_tran_dev, + reg, 4, (unsigned char *)&val); +} + +struct resource mdio_resource[INTEL_MAX10_MAX_MDIO_DEVS] = { + { + .start = 0x200100, + .end = 0x2001ff, + }, + { + .start = 0x200200, + .end = 0x2002ff, + }, +}; + +struct intel_max10_device * +intel_max10_device_probe(struct altera_spi_device *spi, + int chipselect) +{ + struct intel_max10_device *dev; + int i; + + dev = opae_malloc(sizeof(*dev)); + if (!dev) + return NULL; + + dev->spi_master = spi; + + dev->spi_tran_dev = spi_transaction_init(spi, chipselect); + if (!dev->spi_tran_dev) { + dev_err(dev, "%s spi tran init fail\n", __func__); + goto free_dev; + } + + g_max10 = dev; + + for (i = 0; i < INTEL_MAX10_MAX_MDIO_DEVS; i++) { + dev->mdio[i] = altera_mdio_probe(i, mdio_resource[i].start, + mdio_resource[i].end, dev->spi_tran_dev); + if (!dev->mdio[i]) { + dev_err(dev, "%s mdio init fail\n", __func__); + goto mdio_fail; + } + } + + /* FIXME: should read this info from MAX10 device table */ + dev->num_retimer = INTEL_MAX10_MAX_MDIO_DEVS; + dev->num_port = PKVL_NUMBER_PORTS; + + return dev; + +mdio_fail: + for (i = 0; i < INTEL_MAX10_MAX_MDIO_DEVS; i++) + if (dev->mdio[i]) + opae_free(dev->mdio[i]); + + spi_transaction_remove(dev->spi_tran_dev); +free_dev: + g_max10 = NULL; + opae_free(dev); + + return NULL; +} + +int intel_max10_device_remove(struct intel_max10_device *dev) +{ + int i; + + if (!dev) + return 0; + + if (dev->spi_tran_dev) + spi_transaction_remove(dev->spi_tran_dev); + + for (i = 0; i < INTEL_MAX10_MAX_MDIO_DEVS; i++) + if (dev->mdio[i]) + altera_mdio_release(dev->mdio[i]); + + g_max10 = NULL; + + opae_free(dev); + + return 0; +} diff --git a/drivers/raw/ifpga_rawdev/base/opae_intel_max10.h b/drivers/raw/ifpga_rawdev/base/opae_intel_max10.h new file mode 100644 index 0000000..b212825 --- /dev/null +++ b/drivers/raw/ifpga_rawdev/base/opae_intel_max10.h @@ -0,0 +1,36 @@ +/* 
SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2018 Intel Corporation + */ + +#ifndef _OPAE_INTEL_MAX10_H_ +#define _OPAE_INTEL_MAX10_H_ + +#include "opae_osdep.h" +#include "opae_spi.h" +#include "opae_mdio.h" + +#define INTEL_MAX10_MAX_MDIO_DEVS 2 +#define PKVL_NUMBER_PORTS 4 + +struct intel_max10_device { + struct altera_spi_device *spi_master; + struct spi_transaction_dev *spi_tran_dev; + struct altera_mdio_dev *mdio[INTEL_MAX10_MAX_MDIO_DEVS]; + int num_retimer; /* number of retimer */ + int num_port; /* number of ports in retimer */ +}; + +struct resource { + u32 start; + u32 end; + u32 flags; +}; + +int max10_reg_read(unsigned int reg, unsigned int *val); +int max10_reg_write(unsigned int reg, unsigned int val); +struct intel_max10_device * +intel_max10_device_probe(struct altera_spi_device *spi, + int chipselect); +int intel_max10_device_remove(struct intel_max10_device *dev); + +#endif diff --git a/drivers/raw/ifpga_rawdev/base/opae_mdio.c b/drivers/raw/ifpga_rawdev/base/opae_mdio.c new file mode 100644 index 0000000..6eb093b --- /dev/null +++ b/drivers/raw/ifpga_rawdev/base/opae_mdio.c @@ -0,0 +1,542 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2018 Intel Corporation + */ + +#include "opae_osdep.h" +#include "opae_spi.h" +#include "opae_mdio.h" +#include "opae_intel_max10.h" + +#define PHY_MAX_ADDR 32 +#define MAX_NUM_IDS 8 +#define MDIO_PHYSID1 2 +#define MDIO_PHYSID2 3 +#define MDIO_DEVS2 6 +#define MDIO_DEVS1 5 + +static int max10_mdio_reg_read(struct altera_mdio_dev *dev, + unsigned int reg, unsigned int *val) +{ + struct spi_transaction_dev *spi_tran_dev = + dev->sub_dev; + + if (!spi_tran_dev) + return -ENODEV; + + return spi_transaction_read(spi_tran_dev, + reg, 4, (unsigned char *)val); +} + +int altera_mdio_read(struct altera_mdio_dev *dev, u32 dev_addr, + u32 port_addr, u32 reg, u32 *value) +{ + int ret; + struct altera_mdio_addr mdio_addr = {.csr = 0}; + + if (!dev) + return -ENODEV; + + mdio_addr.devad = dev_addr; + 
mdio_addr.prtad = port_addr; + mdio_addr.regad = reg; + + dev_debug(dev, "%s reg=0x%x, dev:%x, port:%x, reg:0x%x\n", __func__, + mdio_addr.csr, dev_addr, port_addr, reg); + + ret = max10_reg_write(dev->start + ALTERA_MDIO_ADDRESS_OFST, + mdio_addr.csr); + if (ret) + return -EIO; + + return max10_mdio_reg_read(dev, dev->start + ALTERA_MDIO_DATA_OFST, + value); +} + +int altera_mdio_write(struct altera_mdio_dev *dev, u32 dev_addr, + u32 port_addr, u32 reg, u32 value) +{ + int ret; + struct altera_mdio_addr mdio_addr = {.csr = 0}; + + if (!dev) + return -ENODEV; + + mdio_addr.devad = dev_addr; + mdio_addr.prtad = port_addr; + mdio_addr.regad = reg; + + ret = max10_reg_write(dev->start + ALTERA_MDIO_ADDRESS_OFST, + mdio_addr.csr); + if (ret) + return -EIO; + + return max10_reg_write(dev->start + ALTERA_MDIO_DATA_OFST, + value); +} + +int pkvl_reg_read(struct altera_mdio_dev *dev, u32 dev_addr, + u32 reg, u32 *val) +{ + int port_id = dev->port_id; + + if (port_id < 0) + return -ENODEV; + + return altera_mdio_read(dev, dev_addr, port_id, reg, val); +} + +int pkvl_reg_write(struct altera_mdio_dev *dev, u32 dev_addr, + u32 reg, u32 val) +{ + int port_id = dev->port_id; + + if (port_id < 0) + return -ENODEV; + + return altera_mdio_write(dev, dev_addr, port_id, reg, val); +} + +static int pkvl_reg_set_mask(struct altera_mdio_dev *dev, u32 dev_addr, + u32 reg, u32 mask, u32 val) +{ + int ret; + u32 v; + + ret = pkvl_reg_read(dev, dev_addr, reg, &v); + if (ret) + return -EIO; + + v = (v&~mask) | (val & mask); + + return pkvl_reg_write(dev, dev_addr, reg, v); +} + +static int get_phy_package_id(struct altera_mdio_dev *dev, + int addr, int dev_addr, int *id) +{ + int ret; + u32 val = 0; + + ret = altera_mdio_read(dev, dev_addr, addr, MDIO_DEVS2, &val); + if (ret) + return -EIO; + + *id = (val & 0xffff) << 16; + + ret = altera_mdio_read(dev, dev_addr, addr, MDIO_DEVS1, &val); + if (ret) + return -EIO; + + *id |= (val & 0xffff); + + return 0; +} + +static int 
get_phy_device_id(struct altera_mdio_dev *dev, + int addr, int dev_addr, int *id) +{ + int ret; + u32 val = 0; + + ret = altera_mdio_read(dev, dev_addr, addr, MDIO_PHYSID1, &val); + if (ret) + return -EIO; + + *id = (val & 0xffff) << 16; + + ret = altera_mdio_read(dev, dev_addr, addr, MDIO_PHYSID2, &val); + if (ret) + return -EIO; + + *id |= (val & 0xffff); + + return 0; +} + +static int get_phy_c45_ids(struct altera_mdio_dev *dev, + int addr, int *phy_id, int *device_id) +{ + int i; + int ret; + int id; + + for (i = 1; i < MAX_NUM_IDS; i++) { + ret = get_phy_package_id(dev, addr, i, phy_id); + if (ret) + return -EIO; + + if ((*phy_id & 0x1fffffff) != 0x1fffffff) + break; + } + + ret = get_phy_device_id(dev, addr, 1, &id); + if (ret) + return -EIO; + + *device_id = id; + + return 0; +} + +static int mdio_phy_scan(struct altera_mdio_dev *dev, int *port_id, + int *phy_id, int *device_id) +{ + int i; + int ret; + + for (i = 0; i < PHY_MAX_ADDR; i++) { + ret = get_phy_c45_ids(dev, i, phy_id, device_id); + if (ret) + return -EIO; + + if ((*phy_id & 0x1fffffff) != 0x1fffffff) { + *port_id = i; + break; + } + } + + return 0; +} + +#define PKVL_READ pkvl_reg_read +#define PKVL_WRITE pkvl_reg_write +#define PKVL_SET_MASK pkvl_reg_set_mask + +static int pkvl_check_smbus_cmd(struct altera_mdio_dev *dev) +{ + int retry = 0; + u32 val; + + for (retry = 0; retry < 10; retry++) { + PKVL_READ(dev, 31, 0xf443, &val); + if ((val & 0x3) == 0) + break; + opae_udelay(1); + } + + if (val & 0x3) { + dev_err(dev, "pkvl execute indirect smbus cmd fail\n"); + return -EBUSY; + } + + return 0; +} + +static int pkvl_execute_smbus_cmd(struct altera_mdio_dev *dev) +{ + int ret; + + ret = pkvl_check_smbus_cmd(dev); + if (ret) + return ret; + + PKVL_WRITE(dev, 31, 0xf443, 0x1); + + ret = pkvl_check_smbus_cmd(dev); + if (ret) + return ret; + + return 0; +} + +static int pkvl_indirect_smbus_set(struct altera_mdio_dev *dev, + u32 addr, u32 reg, u32 hv, u32 lv, u32 *v) +{ + int ret; + + 
PKVL_WRITE(dev, 31, 0xf441, 0x21); + PKVL_WRITE(dev, 31, 0xf442, + ((addr & 0xff) << 8) | (reg & 0xff)); + PKVL_WRITE(dev, 31, 0xf445, hv); + PKVL_WRITE(dev, 31, 0xf444, lv); + PKVL_WRITE(dev, 31, 0xf440, 0); + + ret = pkvl_execute_smbus_cmd(dev); + if (ret) + return ret; + + PKVL_READ(dev, 31, 0xf446, v); + PKVL_WRITE(dev, 31, 0xf443, 0); + + return 0; +} + +static int pkvl_serdes_intr_set(struct altera_mdio_dev *dev, + u32 reg, u32 hv, u32 lv) +{ + u32 addr; + u32 v; + int ret; + + addr = (reg & 0xff00) >> 8; + + ret = pkvl_indirect_smbus_set(dev, addr, 0x3, hv, lv, &v); + if (ret) + return ret; + + if ((v & 0x7) != 1) { + dev_err(dev, "%s(0x%x, 0x%x, 0x%x) fail\n", + __func__, reg, hv, lv); + return -EBUSY; + } + + return 0; +} + +#define PKVL_SERDES_SET pkvl_serdes_intr_set + +static int pkvl_set_line_side_mode(struct altera_mdio_dev *dev, + int port, int mode) +{ + u32 val = 0; + + /* check PKVL exist */ + PKVL_READ(dev, 1, 0, &val); + if (val == 0 || val == 0xffff) { + dev_err(dev, "reading reg 0x0 from PKVL fail\n"); + return -ENODEV; + } + + PKVL_WRITE(dev, 31, 0xf003, 0); + PKVL_WRITE(dev, 3, 0x2000 + 0x200*port, + 0x2040); + PKVL_SET_MASK(dev, 7, 0x200*port, 1<<12, 0); + PKVL_SET_MASK(dev, 7, 0x11+0x200*port, + 0xf3a0, 0); + PKVL_SET_MASK(dev, 7, 0x8014+0x200*port, + 0x330, 0); + PKVL_WRITE(dev, 7, 0x12+0x200*port, 0); + PKVL_WRITE(dev, 7, 0x8015+0x200*port, 0); + PKVL_SET_MASK(dev, 3, 0xf0ba, 0x8000 | (0x800<index, port, val); + return 0; +} + +static int pkvl_set_host_side_mode(struct altera_mdio_dev *dev, + int port, int mode) +{ + u32 val = 0; + + PKVL_WRITE(dev, 4, 0x2000 + 0x200 * port, 0x2040); + PKVL_SET_MASK(dev, 7, 0x1000 + 0x200 * port, + 1<<12, 0); + PKVL_SET_MASK(dev, 7, 0x1011 + 0x200 * port, + 0xf3a0, 0); + PKVL_SET_MASK(dev, 7, 0x9014 + 0x200 * port, + 0x330, 0); + PKVL_WRITE(dev, 7, 0x1012 + 0x200 * port, 0); + PKVL_WRITE(dev, 7, 0x9015 + 0x200 * port, 0); + PKVL_SET_MASK(dev, 4, 0xf0ba, 0x8000 | (0x800 << port), + 0x8000); + 
PKVL_SET_MASK(dev, 4, 0xf0a6, 0x8000 | (0x800 << port), + 0x8000); + PKVL_WRITE(dev, 4, 0xf378, 0); + PKVL_WRITE(dev, 4, 0xf258 + 0x80 * port, 0); + PKVL_WRITE(dev, 4, 0xf259 + 0x80 * port, 0); + PKVL_WRITE(dev, 4, 0xf25a + 0x80 * port, 0); + PKVL_WRITE(dev, 4, 0xf25b + 0x80 * port, 0); + PKVL_SET_MASK(dev, 4, 0xf26f + 0x80 * port, + 3<<14, 0); + PKVL_SET_MASK(dev, 4, 0xf060, 1<<2, 0); + PKVL_WRITE(dev, 4, 0xf053, 0); + PKVL_WRITE(dev, 4, 0xf056, 0); + PKVL_WRITE(dev, 4, 0xf059, 0); + PKVL_WRITE(dev, 7, 0x9200, 0); + PKVL_WRITE(dev, 7, 0x9400, 0); + PKVL_WRITE(dev, 7, 0x9600, 0); + PKVL_WRITE(dev, 4, 0xf0e7, 0); + + if (mode == MXD_10GB) { + PKVL_SET_MASK(dev, 4, 0xf25c + 0x80 * port, + 0x2, 0x2); + PKVL_WRITE(dev, 4, 0xf220 + 0x80 * port, 0x1918); + PKVL_WRITE(dev, 4, 0xf221 + 0x80 * port, 0x1819); + PKVL_WRITE(dev, 4, 0xf230 + 0x80 * port, 0x7); + PKVL_WRITE(dev, 4, 0xf231 + 0x80 * port, 0xaff); + PKVL_WRITE(dev, 4, 0xf232 + 0x80 * port, 0); + PKVL_WRITE(dev, 4, 0xf250 + 0x80 * port, 0x1111); + PKVL_WRITE(dev, 4, 0xf251 + 0x80 * port, 0x1111); + PKVL_SET_MASK(dev, 4, 0xf258 + 0x80 * port, + 0x7, 0x7); + } + + PKVL_SET_MASK(dev, 4, 0xf25c + 0x80 * port, 0x2, 0x2); + PKVL_WRITE(dev, 4, 0xf22b + 0x80 * port, 0x1918); + PKVL_WRITE(dev, 4, 0xf246 + 0x80 * port, 0x4033); + PKVL_WRITE(dev, 4, 0xf247 + 0x80 * port, 0x4820); + PKVL_WRITE(dev, 4, 0xf255 + 0x80 * port, 0x1100); + PKVL_SET_MASK(dev, 4, 0xf259 + 0x80 * port, 0xc0, 0xc0); + + PKVL_SERDES_SET(dev, 0x103 + 0x100 * port, 0x3d, 0x9004); + PKVL_SERDES_SET(dev, 0x103 + 0x100 * port, 0x3d, 0xa002); + PKVL_SERDES_SET(dev, 0x103 + 0x100 * port, 0x3d, 0xb012); + + PKVL_WRITE(dev, 4, 0xf000 + port, 0x8020 | mode); + PKVL_READ(dev, 4, 0xf000 + port, &val); + + dev_info(dev, "PKVL:%d port:%d host side mode:0x%x\n", + dev->index, port, val); + + return 0; +} + +int pkvl_set_speed_mode(struct altera_mdio_dev *dev, int port, int mode) +{ + int ret; + + ret = pkvl_set_line_side_mode(dev, port, mode); + if (ret) + return ret; + 
+ return pkvl_set_host_side_mode(dev, port, mode); +} + +int pkvl_get_port_speed_status(struct altera_mdio_dev *dev, + int port, unsigned int *speed) +{ + int ret; + + ret = pkvl_reg_read(dev, 4, 0xf000 + port, speed); + if (ret) + return ret; + + *speed = *speed & 0x7; + + return 0; +} + +int pkvl_get_port_line_link_status(struct altera_mdio_dev *dev, + int port, unsigned int *link) +{ + int ret; + + ret = pkvl_reg_read(dev, 3, 0xa002 + 0x200 * port, link); + if (ret) + return ret; + + *link = (*link & (1<<2)) ? 1:0; + + return 0; +} + +int pkvl_get_port_host_link_status(struct altera_mdio_dev *dev, + int port, unsigned int *link) +{ + int ret; + + ret = pkvl_reg_read(dev, 4, 0xa002 + 0x200 * port, link); + if (ret) + return ret; + + *link = (*link & (1<<2)) ? 1:0; + + return 0; +} + +static struct altera_mdio_dev *altera_spi_mdio_init(int index, u32 start, + u32 end, void *sub_dev) +{ + struct altera_mdio_dev *dev; + int ret; + int port_id = 0; + int phy_id = 0; + int device_id = 0; + + dev = opae_malloc(sizeof(*dev)); + if (!dev) + return NULL; + + dev->sub_dev = sub_dev; + dev->start = start; + dev->end = end; + dev->port_id = -1; + dev->index = index; + + ret = mdio_phy_scan(dev, &port_id, &phy_id, &device_id); + if (ret) { + dev_err(dev, "Cannot find PHY device on MDIO bus\n"); + opae_free(dev); + return NULL; + } + + dev->port_id = port_id; + dev->phy_device_id = device_id; + + dev_info(dev, "Found MDIO Phy Device %d, port_id=%d, phy_id=0x%x, device_id=0x%x\n", + index, port_id, phy_id, device_id); + + return dev; +} + +struct altera_mdio_dev *altera_mdio_probe(int index, u32 start, u32 end, + void *sub_dev) +{ + return altera_spi_mdio_init(index, start, end, sub_dev); +} + +void altera_mdio_release(struct altera_mdio_dev *dev) +{ + if (dev) + opae_free(dev); +} diff --git a/drivers/raw/ifpga_rawdev/base/opae_mdio.h b/drivers/raw/ifpga_rawdev/base/opae_mdio.h new file mode 100644 index 0000000..8c868d6 --- /dev/null +++ 
b/drivers/raw/ifpga_rawdev/base/opae_mdio.h @@ -0,0 +1,90 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2018 Intel Corporation + */ + +#ifndef _OPAE_MDIO_H_ +#define _OPAE_MDIO_H_ + +#include "opae_osdep.h" + +/* retimer speed */ +enum retimer_speed { + MXD_1GB = 0, + MXD_2_5GB, + MXD_5GB, + MXD_10GB, + MXD_25GB, + MXD_40GB, + MXD_100GB, + MXD_SPEED_UNKNOWN, +}; + +/* retimer info */ +struct opae_retimer_info { + int num_retimer; + int num_port; + enum retimer_speed support_speed; +}; + +/* retimer status */ +struct opae_retimer_status { + enum retimer_speed speed; + unsigned int line_link; + unsigned int host_link; +}; + +/** + * An MDIO read needs about a 62us delay; the SPI keeps + * reading before it gets valid data, so we let the + * SPI master read more than 100 bytes + */ +#define MDIO_READ_DELAY 100 + +/* register offset definition */ +#define ALTERA_MDIO_DATA_OFST 0x80 +#define ALTERA_MDIO_ADDRESS_OFST 0x84 + +struct altera_mdio_dev; + +struct altera_mdio_dev { + void *sub_dev; /* sub dev link to spi transaction device */ + u32 start; /* start address */ + u32 end; /* end of address */ + int index; + int port_id; + int phy_device_id; +}; + +struct altera_mdio_addr { + union { + unsigned int csr; + struct { + u8 devad:5; + u8 rsvd1:3; + u8 prtad:5; + u8 rsvd2:3; + u16 regad:16; + }; + }; +}; + +/* function declaration */ +struct altera_mdio_dev *altera_mdio_probe(int index, u32 start, + u32 end, void *sub_dev); +void altera_mdio_release(struct altera_mdio_dev *dev); +int altera_mdio_read(struct altera_mdio_dev *dev, u32 dev_addr, + u32 port_id, u32 reg, u32 *value); +int altera_mdio_write(struct altera_mdio_dev *dev, u32 dev_addr, + u32 port_id, u32 reg, u32 value); +int pkvl_reg_read(struct altera_mdio_dev *dev, u32 dev_addr, + u32 reg, u32 *value); +int pkvl_reg_write(struct altera_mdio_dev *dev, u32 dev_addr, + u32 reg, u32 value); +int pkvl_set_speed_mode(struct altera_mdio_dev *dev, int port, int mode); +int pkvl_get_port_speed_status(struct altera_mdio_dev *dev, + 
int port, unsigned int *speed); +int pkvl_get_port_line_link_status(struct altera_mdio_dev *dev, + int port, unsigned int *link); +int pkvl_get_port_host_link_status(struct altera_mdio_dev *dev, + int port, unsigned int *link); +#endif /* _OPAE_MDIO_H_ */ diff --git a/drivers/raw/ifpga_rawdev/base/opae_osdep.h b/drivers/raw/ifpga_rawdev/base/opae_osdep.h index 90f54f7..6fb8f08 100644 --- a/drivers/raw/ifpga_rawdev/base/opae_osdep.h +++ b/drivers/raw/ifpga_rawdev/base/opae_osdep.h @@ -5,6 +5,8 @@ #ifndef _OPAE_OSDEP_H #define _OPAE_OSDEP_H +//#define OPAE_DEBUG + #include #include @@ -35,6 +37,7 @@ struct uuid { #ifndef BIT #define BIT(a) (1UL << (a)) #endif /* BIT */ +#define U64_C(x) x ## ULL #ifndef BIT_ULL #define BIT_ULL(a) (1ULL << (a)) #endif /* BIT_ULL */ @@ -52,12 +55,7 @@ struct uuid { #define dev_err(x, args...) dev_printf(ERR, args) #define dev_info(x, args...) dev_printf(INFO, args) #define dev_warn(x, args...) dev_printf(WARNING, args) - -#ifdef OPAE_DEBUG #define dev_debug(x, args...) dev_printf(DEBUG, args) -#else -#define dev_debug(x, args...) do { } while (0) -#endif #define pr_err(y, args...) dev_err(0, y, ##args) #define pr_warn(y, args...) 
dev_warn(0, y, ##args) @@ -75,5 +73,8 @@ struct uuid { #define udelay(x) opae_udelay(x) #define msleep(x) opae_udelay(1000 * (x)) #define usleep_range(min, max) msleep(DIV_ROUND_UP(min, 1000)) +#define time_after(a, b) ((long)((b) - (a)) < 0) +#define time_before(a, b) time_after(b, a) +#define opae_memset(a, b, c) memset((a), (b), (c)) #endif diff --git a/drivers/raw/ifpga_rawdev/base/opae_phy_group.c b/drivers/raw/ifpga_rawdev/base/opae_phy_group.c new file mode 100644 index 0000000..9234129 --- /dev/null +++ b/drivers/raw/ifpga_rawdev/base/opae_phy_group.c @@ -0,0 +1,88 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2010-2018 Intel Corporation + */ + +#include "opae_osdep.h" +#include "opae_phy_group.h" + +static int phy_indirect_wait(struct phy_group_device *dev) +{ + int retry = 0; + u64 val; + + while (!((val = opae_readq(dev->base + PHY_GROUP_STAT)) & + STAT_DATA_VALID)) { + if (retry++ > 1000) + return -EBUSY; + + udelay(1); + } + + return 0; +} + +static void phy_indirect_write(struct phy_group_device *dev, u8 entry, + u16 addr, u32 value) +{ + u64 ctrl; + + ctrl = CMD_WR << CTRL_COMMAND_SHIFT | + (entry & CTRL_PHY_NUM_MASK) << CTRL_PHY_NUM_SHIFT | + (addr & CTRL_PHY_ADDR_MASK) << CTRL_PHY_ADDR_SHIFT | + (value & CTRL_WRITE_DATA_MASK); + + opae_writeq(ctrl, dev->base + PHY_GROUP_CTRL); +} + +static int phy_indirect_read(struct phy_group_device *dev, u8 entry, u16 addr, + u32 *value) +{ + u64 tmp; + u64 ctrl = 0; + + ctrl = CMD_RD << CTRL_COMMAND_SHIFT | + (entry & CTRL_PHY_NUM_MASK) << CTRL_PHY_NUM_SHIFT | + (addr & CTRL_PHY_ADDR_MASK) << CTRL_PHY_ADDR_SHIFT; + opae_writeq(ctrl, dev->base + PHY_GROUP_CTRL); + + if (phy_indirect_wait(dev)) + return -ETIMEDOUT; + + tmp = opae_readq(dev->base + PHY_GROUP_STAT); + *value = tmp & STAT_READ_DATA_MASK; + + return 0; +} + +int phy_group_read_reg(struct phy_group_device *dev, u8 entry, + u16 addr, u32 *value) +{ + return phy_indirect_read(dev, entry, addr, value); +} + +int phy_group_write_reg(struct 
phy_group_device *dev, u8 entry, + u16 addr, u32 value) +{ + phy_indirect_write(dev, entry, addr, value); + + return 0; +} + +struct phy_group_device *phy_group_probe(void *base) +{ + struct phy_group_device *dev; + + dev = opae_malloc(sizeof(*dev)); + if (!dev) + return NULL; + + dev->base = (u8 *)base; + + dev->info.info = opae_readq(dev->base + PHY_GROUP_INFO); + dev->group_index = dev->info.group_number; + dev->entries = dev->info.num_phys; + dev->speed = dev->info.speed; + dev->entry_size = PHY_GROUP_ENTRY_SIZE; + + return dev; +} diff --git a/drivers/raw/ifpga_rawdev/base/opae_phy_group.h b/drivers/raw/ifpga_rawdev/base/opae_phy_group.h new file mode 100644 index 0000000..d4def6d --- /dev/null +++ b/drivers/raw/ifpga_rawdev/base/opae_phy_group.h @@ -0,0 +1,53 @@ +#ifndef _OPAE_PHY_MAC_H +#define _OPAE_PHY_MAC_H + +#include "opae_osdep.h" + +#define MAX_PHY_GROUP_DEVICES 8 +#define PHY_GROUP_ENTRY_SIZE 0x1000 + +#define PHY_GROUP_INFO 0x8 +#define PHY_GROUP_CTRL 0x10 +#define CTRL_COMMAND_SHIFT 62 +#define CMD_RD 0x1UL +#define CMD_WR 0x2UL +#define CTRL_PHY_NUM_SHIFT 43 +#define CTRL_PHY_NUM_MASK GENMASK_ULL(45, 43) +#define CTRL_RESET BIT_ULL(42) +#define CTRL_PHY_ADDR_SHIFT 32 +#define CTRL_PHY_ADDR_MASK GENMASK_ULL(41, 32) +#define CTRL_WRITE_DATA_MASK GENMASK_ULL(31, 0) +#define PHY_GROUP_STAT 0x18 +#define STAT_DATA_VALID BIT_ULL(32) +#define STAT_READ_DATA_MASK GENMASK_ULL(31, 0) + +struct phy_group_info { + union { + u64 info; + struct { + u8 group_number:8; + u8 num_phys:8; + u8 speed:8; + u8 direction:1; + u64 resvd:39; + }; + }; +}; + +struct phy_group_device { + u8 *base; + struct phy_group_info info; + u32 group_index; + u32 entries; + u32 speed; + u32 entry_size; + u32 flags; +}; + +struct phy_group_device *phy_group_probe(void *base); +int phy_group_write_reg(struct phy_group_device *dev, + u8 entry, u16 addr, u32 value); +int phy_group_read_reg(struct phy_group_device *dev, + u8 entry, u16 addr, u32 *value); + +#endif diff --git 
a/drivers/raw/ifpga_rawdev/base/opae_spi.c b/drivers/raw/ifpga_rawdev/base/opae_spi.c new file mode 100644 index 0000000..d21aece --- /dev/null +++ b/drivers/raw/ifpga_rawdev/base/opae_spi.c @@ -0,0 +1,260 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2010-2018 Intel Corporation + */ + +#include "opae_osdep.h" +#include "opae_spi.h" + +static void spi_indirect_write(struct altera_spi_device *dev, u32 reg, + u32 value) +{ + u64 ctrl; + + opae_writeq(value & WRITE_DATA_MASK, dev->regs + SPI_WRITE); + + ctrl = CTRL_W | (reg >> 2); + opae_writeq(ctrl, dev->regs + SPI_CTRL); +} + +static u32 spi_indirect_read(struct altera_spi_device *dev, u32 reg) +{ + u64 tmp; + u64 ctrl; + u32 value; + + ctrl = CTRL_R | (reg >> 2); + opae_writeq(ctrl, dev->regs + SPI_CTRL); + + /** + * FIXME: Read one more time to avoid HW timing issue. This is + * a short term workaround solution, and must be removed once + * hardware fixing is done. + */ + tmp = opae_readq(dev->regs + SPI_READ); + tmp = opae_readq(dev->regs + SPI_READ); + + value = (u32)tmp; + + return value; +} + +void spi_cs_activate(struct altera_spi_device *dev, unsigned int chip_select) +{ + spi_indirect_write(dev, ALTERA_SPI_SLAVE_SEL, 1 << chip_select); + spi_indirect_write(dev, ALTERA_SPI_CONTROL, ALTERA_SPI_CONTROL_SSO_MSK); +} + +void spi_cs_deactivate(struct altera_spi_device *dev) +{ + spi_indirect_write(dev, ALTERA_SPI_CONTROL, 0); +} + +static void spi_flush_rx(struct altera_spi_device *dev) +{ + if (spi_indirect_read(dev, ALTERA_SPI_STATUS) & + ALTERA_SPI_STATUS_RRDY_MSK) + spi_indirect_read(dev, ALTERA_SPI_RXDATA); +} + +int spi_read(struct altera_spi_device *dev, unsigned int bytes, void *buffer) +{ + char data; + char *rxbuf = buffer; + + if (bytes <= 0 || !rxbuf) + return -EINVAL; + + /* empty read buffer */ + spi_flush_rx(dev); + + while (bytes--) { + while (!(spi_indirect_read(dev, ALTERA_SPI_STATUS) & + ALTERA_SPI_STATUS_RRDY_MSK)) + ; + data = spi_indirect_read(dev, ALTERA_SPI_RXDATA); + if 
(buffer) + *rxbuf++ = data; + } + + return 0; +} + +int spi_write(struct altera_spi_device *dev, unsigned int bytes, void *buffer) +{ + unsigned char data; + char *txbuf = buffer; + + if (bytes <= 0 || !txbuf) + return -EINVAL; + + while (bytes--) { + while (!(spi_indirect_read(dev, ALTERA_SPI_STATUS) & + ALTERA_SPI_STATUS_TRDY_MSK)) + ; + data = *txbuf++; + spi_indirect_write(dev, ALTERA_SPI_TXDATA, data); + } + + return 0; +} + +static unsigned int spi_write_bytes(struct altera_spi_device *dev, int count) +{ + unsigned int val = 0; + u16 *p16; + u32 *p32; + + if (dev->txbuf) { + switch (dev->data_width) { + case 1: + val = dev->txbuf[count]; + break; + case 2: + p16 = (u16 *)(dev->txbuf + 2*count); + val = *p16; + if (dev->endian == SPI_BIG_ENDIAN) + val = cpu_to_be16(val); + break; + case 4: + p32 = (u32 *)(dev->txbuf + 4*count); + val = *p32; + if (dev->endian == SPI_BIG_ENDIAN) + val = cpu_to_be32(val); + break; + } + } + + return val; +} + +static void spi_fill_readbuffer(struct altera_spi_device *dev, + unsigned int value, int count) +{ + u16 *p16; + u32 *p32; + + if (dev->rxbuf) { + switch (dev->data_width) { + case 1: + dev->rxbuf[count] = value; + break; + case 2: + p16 = (u16 *)(dev->rxbuf + 2*count); + if (dev->endian == SPI_BIG_ENDIAN) + *p16 = cpu_to_be16((u16)value); + else + *p16 = (u16)value; + break; + case 4: + p32 = (u32 *)(dev->rxbuf + 4*count); + if (dev->endian == SPI_BIG_ENDIAN) + *p32 = cpu_to_be32(value); + else + *p32 = value; + break; + } + } +} + +static int spi_txrx(struct altera_spi_device *dev) +{ + unsigned int count = 0; + unsigned int rxd; + unsigned int tx_data; + unsigned int status; + int retry = 0; + + while (count < dev->len) { + tx_data = spi_write_bytes(dev, count); + spi_indirect_write(dev, ALTERA_SPI_TXDATA, tx_data); + + while (1) { + status = spi_indirect_read(dev, ALTERA_SPI_STATUS); + if (status & ALTERA_SPI_STATUS_RRDY_MSK) + break; + if (retry++ > SPI_MAX_RETRY) { + dev_err(dev, "%s, read timeout\n", __func__); + return 
-EBUSY; + } + } + + rxd = spi_indirect_read(dev, ALTERA_SPI_RXDATA); + spi_fill_readbuffer(dev, rxd, count); + + count++; + } + + return 0; +} + +int spi_command(struct altera_spi_device *dev, unsigned int chip_select, + unsigned int wlen, void *wdata, + unsigned int rlen, void *rdata) +{ + if (((wlen > 0) && !wdata) || ((rlen > 0) && !rdata)) { + dev_err(dev, "error on spi command checking\n"); + return -EINVAL; + } + + wlen = wlen / dev->data_width; + rlen = rlen / dev->data_width; + + /* flush rx buffer */ + spi_flush_rx(dev); + + // TODO: GET MUTEX LOCK + spi_cs_activate(dev, chip_select); + if (wlen) { + dev->txbuf = wdata; + dev->rxbuf = rdata; + dev->len = wlen; + spi_txrx(dev); + } + if (rlen) { + dev->rxbuf = rdata; + dev->txbuf = NULL; + dev->len = rlen; + spi_txrx(dev); + } + spi_cs_deactivate(dev); + // TODO: RELEASE MUTEX LOCK + return 0; +} + +struct altera_spi_device *altera_spi_init(void *base) +{ + struct altera_spi_device *spi_dev = + opae_malloc(sizeof(struct altera_spi_device)); + + if (!spi_dev) + return NULL; + + spi_dev->regs = base; + + spi_dev->spi_param.info = opae_readq(spi_dev->regs + SPI_CORE_PARAM); + + spi_dev->data_width = spi_dev->spi_param.data_width / 8; + spi_dev->endian = spi_dev->spi_param.endian; + spi_dev->num_chipselect = spi_dev->spi_param.num_chipselect; + dev_info(spi_dev, "spi param: type=%d, data width:%d, endian:%d, clock_polarity=%d, clock=%dMHz, chips=%d, cpha=%d\n", + spi_dev->spi_param.type, + spi_dev->data_width, spi_dev->endian, + spi_dev->spi_param.clock_polarity, + spi_dev->spi_param.clock, + spi_dev->num_chipselect, + spi_dev->spi_param.clock_phase); + + /* clear */ + spi_indirect_write(spi_dev, ALTERA_SPI_CONTROL, 0); + spi_indirect_write(spi_dev, ALTERA_SPI_STATUS, 0); + /* flush rxdata */ + spi_flush_rx(spi_dev); + + return spi_dev; +} + +void altera_spi_release(struct altera_spi_device *dev) +{ + if (dev) + opae_free(dev); +} diff --git a/drivers/raw/ifpga_rawdev/base/opae_spi.h 
b/drivers/raw/ifpga_rawdev/base/opae_spi.h new file mode 100644 index 0000000..d93ff09 --- /dev/null +++ b/drivers/raw/ifpga_rawdev/base/opae_spi.h @@ -0,0 +1,120 @@ +#ifndef _OPAE_SPI_H +#define _OPAE_SPI_H + +#include "opae_osdep.h" + +#define ALTERA_SPI_RXDATA 0 +#define ALTERA_SPI_TXDATA 4 +#define ALTERA_SPI_STATUS 8 +#define ALTERA_SPI_CONTROL 12 +#define ALTERA_SPI_SLAVE_SEL 20 + +#define ALTERA_SPI_STATUS_ROE_MSK 0x8 +#define ALTERA_SPI_STATUS_TOE_MSK 0x10 +#define ALTERA_SPI_STATUS_TMT_MSK 0x20 +#define ALTERA_SPI_STATUS_TRDY_MSK 0x40 +#define ALTERA_SPI_STATUS_RRDY_MSK 0x80 +#define ALTERA_SPI_STATUS_E_MSK 0x100 + +#define ALTERA_SPI_CONTROL_IROE_MSK 0x8 +#define ALTERA_SPI_CONTROL_ITOE_MSK 0x10 +#define ALTERA_SPI_CONTROL_ITRDY_MSK 0x40 +#define ALTERA_SPI_CONTROL_IRRDY_MSK 0x80 +#define ALTERA_SPI_CONTROL_IE_MSK 0x100 +#define ALTERA_SPI_CONTROL_SSO_MSK 0x400 + +#define SPI_CORE_PARAM 0x8 +#define SPI_CTRL 0x10 +#define CTRL_R BIT_ULL(9) +#define CTRL_W BIT_ULL(8) +#define CTRL_ADDR_MASK GENMASK_ULL(2, 0) +#define SPI_READ 0x18 +#define READ_DATA_VALID BIT_ULL(32) +#define READ_DATA_MASK GENMASK_ULL(31, 0) +#define SPI_WRITE 0x20 +#define WRITE_DATA_MASK GENMASK_ULL(31, 0) + +#define SPI_MAX_RETRY 100000 + +struct spi_core_param { + union { + u64 info; + struct { + u8 type:1; + u8 endian:1; + u8 data_width:6; + u8 num_chipselect:6; + u8 clock_polarity:1; + u8 clock_phase:1; + u8 stages:2; + u8 resvd:4; + u16 clock:10; + u16 peripheral_id:16; + u8 controller_type:1; + u16 resvd1:15; + }; + }; +}; + +struct altera_spi_device { + u8 *regs; + struct spi_core_param spi_param; + int data_width; /* how many bytes for data width */ + int endian; + #define SPI_BIG_ENDIAN 0 + #define SPI_LITTLE_ENDIAN 1 + int num_chipselect; + unsigned char *rxbuf; + unsigned char *txbuf; + unsigned int len; +}; + +#define HEADER_LEN 8 +#define RESPONSE_LEN 4 +#define SPI_TRANSACTION_MAX_LEN 1024 +#define TRAN_SEND_MAX_LEN (SPI_TRANSACTION_MAX_LEN + HEADER_LEN) +#define 
TRAN_RESP_MAX_LEN SPI_TRANSACTION_MAX_LEN +#define PACKET_SEND_MAX_LEN (2*TRAN_SEND_MAX_LEN + 4) +#define PACKET_RESP_MAX_LEN (2*TRAN_RESP_MAX_LEN + 4) +#define BYTES_SEND_MAX_LEN (2*PACKET_SEND_MAX_LEN) +#define BYTES_RESP_MAX_LEN (2*PACKET_RESP_MAX_LEN) + +struct spi_tran_buffer { + unsigned char tran_send[TRAN_SEND_MAX_LEN]; + unsigned char tran_resp[TRAN_RESP_MAX_LEN]; + unsigned char packet_send[PACKET_SEND_MAX_LEN]; + unsigned char packet_resp[PACKET_RESP_MAX_LEN]; + unsigned char bytes_send[BYTES_SEND_MAX_LEN]; + unsigned char bytes_resp[2*BYTES_RESP_MAX_LEN]; +}; + +struct spi_transaction_dev { + struct altera_spi_device *dev; + int chipselect; + struct spi_tran_buffer *buffer; +}; + +struct spi_tran_header { + u8 trans_type; + u8 reserve; + u16 size; + u32 addr; +}; + +int spi_write(struct altera_spi_device *dev, unsigned int bytes, void *buffer); +int spi_read(struct altera_spi_device *dev, unsigned int bytes, + void *buffer); +int spi_command(struct altera_spi_device *dev, unsigned int chip_select, + unsigned int wlen, void *wdata, unsigned int rlen, void *rdata); +void spi_cs_deactivate(struct altera_spi_device *dev); +void spi_cs_activate(struct altera_spi_device *dev, unsigned int chip_select); +struct altera_spi_device *altera_spi_init(void *base); +void altera_spi_release(struct altera_spi_device *dev); +int spi_transaction_read(struct spi_transaction_dev *dev, unsigned int addr, + unsigned int size, unsigned char *data); +int spi_transaction_write(struct spi_transaction_dev *dev, unsigned int addr, + unsigned int size, unsigned char *data); +struct spi_transaction_dev *spi_transaction_init(struct altera_spi_device *dev, + int chipselect); +void spi_transaction_remove(struct spi_transaction_dev *dev); +#endif diff --git a/drivers/raw/ifpga_rawdev/base/opae_spi_transaction.c b/drivers/raw/ifpga_rawdev/base/opae_spi_transaction.c new file mode 100644 index 0000000..d781b62 --- /dev/null +++ b/drivers/raw/ifpga_rawdev/base/opae_spi_transaction.c @@ 
-0,0 +1,438 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2010-2018 Intel Corporation + */ + +#include "opae_spi.h" +#include "ifpga_compat.h" + +/* transaction opcodes */ +#define SPI_TRAN_SEQ_WRITE 0x04 /* SPI transaction sequential write */ +#define SPI_TRAN_SEQ_READ 0x14 /* SPI transaction sequential read */ +#define SPI_TRAN_NON_SEQ_WRITE 0x00 /* SPI transaction non-sequential write */ +#define SPI_TRAN_NON_SEQ_READ 0x10 /* SPI transaction non-sequential read */ + +/* special packet characters */ +#define SPI_PACKET_SOP 0x7a +#define SPI_PACKET_EOP 0x7b +#define SPI_PACKET_CHANNEL 0x7c +#define SPI_PACKET_ESC 0x7d + +/* special byte characters */ +#define SPI_BYTE_IDLE 0x4a +#define SPI_BYTE_ESC 0x4d + +#define SPI_REG_BYTES 4 + +#define INIT_SPI_TRAN_HEADER(trans_type, size, addr) \ +({ \ + header.trans_type = trans_type; \ + header.reserve = 0; \ + header.size = cpu_to_be16(size); \ + header.addr = cpu_to_be32(addr); \ +}) + +#ifdef OPAE_DEBUG +static void print_buffer(const char *string, void *buffer, int len) +{ + int i; + unsigned char *p = buffer; + + printf("%s print buffer, len=%d\n", string, len); + + for (i = 0; i < len; i++) + printf("%x ", *(p+i)); + printf("\n"); +} +#else +static void print_buffer(const char *string, void *buffer, int len) +{ + UNUSED(string); + UNUSED(buffer); + UNUSED(len); +} +#endif + +static unsigned char xor_20(unsigned char val) +{ + return val^0x20; +} + +static void reorder_phy_data(u8 bits_per_word, + void *buf, unsigned int len) +{ + unsigned int count = len / (bits_per_word/8); + u32 *p; + + if (bits_per_word == 32) { + p = (u32 *)buf; + while (count--) { + *p = cpu_to_be32(*p); + p++; + } + } +} + +enum { + SPI_FOUND_SOP, + SPI_FOUND_EOP, + SPI_NOT_FOUND, +}; + +static int resp_find_sop_eop(unsigned char *resp, unsigned int len, + int flags) +{ + int ret = SPI_NOT_FOUND; + + unsigned char *b = resp; + + /* find SOP */ + if (flags != SPI_FOUND_SOP) { + while (b < resp + len && *b != SPI_PACKET_SOP) + b++; 
+ + if (*b != SPI_PACKET_SOP) + goto done; + + ret = SPI_FOUND_SOP; + } + + /* find EOP */ + while (b < resp + len && *b != SPI_PACKET_EOP) + b++; + + if (*b != SPI_PACKET_EOP) + goto done; + + ret = SPI_FOUND_EOP; + +done: + return ret; +} + +static int byte_to_core_convert(struct spi_transaction_dev *dev, + unsigned int send_len, unsigned char *send_data, + unsigned int resp_len, unsigned char *resp_data, + unsigned int *valid_resp_len) +{ + unsigned int i; + int ret = 0; + unsigned char *send_packet = dev->buffer->bytes_send; + unsigned char *resp_packet = dev->buffer->bytes_resp; + unsigned char *p; + unsigned char current_byte; + unsigned char *tx_buffer; + unsigned int tx_len = 0; + unsigned char *rx_buffer; + unsigned int rx_len = 0; + int retry = 0; + int spi_flags; + unsigned int resp_max_len = 2 * resp_len; + + print_buffer("before bytes:", send_data, send_len); + + p = send_packet; + + for (i = 0; i < send_len; i++) { + current_byte = send_data[i]; + switch (current_byte) { + case SPI_BYTE_IDLE: + *p++ = SPI_BYTE_ESC; + *p++ = xor_20(current_byte); + break; + case SPI_BYTE_ESC: + *p++ = SPI_BYTE_ESC; + *p++ = xor_20(current_byte); + break; + default: + *p++ = current_byte; + break; + } + } + + print_buffer("before spi:", send_packet, p-send_packet); + + reorder_phy_data(32, send_packet, p - send_packet); + + print_buffer("after order to spi:", send_packet, p-send_packet); + + /* call spi */ + tx_buffer = send_packet; + tx_len = p - send_packet; + rx_buffer = resp_packet; + rx_len = resp_max_len; + spi_flags = SPI_NOT_FOUND; + +read_again: + ret = spi_command(dev->dev, dev->chipselect, tx_len, tx_buffer, + rx_len, rx_buffer); + if (ret) + return -EBUSY; + + print_buffer("read from spi:", rx_buffer, rx_len); + + /* look for SOP first */ + ret = resp_find_sop_eop(rx_buffer, rx_len - 1, spi_flags); + if (ret != SPI_FOUND_EOP) { + tx_buffer = NULL; + tx_len = 0; + if (retry++ > 10) { + dev_err(NULL, "cannot find valid data from SPI\n"); + return -EBUSY; + 
} + + if (ret == SPI_FOUND_SOP) { + rx_buffer += rx_len; + resp_max_len += rx_len; + } + + spi_flags = ret; + goto read_again; + } + + print_buffer("found valid data:", resp_packet, resp_max_len); + + /* analyze response packet */ + i = 0; + p = resp_data; + while (i < resp_max_len) { + current_byte = resp_packet[i]; + switch (current_byte) { + case SPI_BYTE_IDLE: + i++; + break; + case SPI_BYTE_ESC: + i++; + current_byte = resp_packet[i]; + *p++ = xor_20(current_byte); + i++; + break; + default: + *p++ = current_byte; + i++; + break; + } + } + + /* receiving "4a" means the SPI is idle, not valid data */ + *valid_resp_len = p - resp_data; + if (*valid_resp_len == 0) { + dev_err(NULL, "error: response packet without valid data\n"); + return -EINVAL; + } + + return 0; +} + +static int packet_to_byte_conver(struct spi_transaction_dev *dev, + unsigned int send_len, unsigned char *send_buf, + unsigned int resp_len, unsigned char *resp_buf, + unsigned int *valid) +{ + int ret = 0; + unsigned int i; + unsigned char current_byte; + unsigned int resp_max_len; + unsigned char *send_packet = dev->buffer->packet_send; + unsigned char *resp_packet = dev->buffer->packet_resp; + unsigned char *p; + unsigned int valid_resp_len = 0; + + print_buffer("before packet:", send_buf, send_len); + + resp_max_len = 2 * resp_len + 4; + + p = send_packet; + + /* SOP header */ + *p++ = SPI_PACKET_SOP; + + *p++ = SPI_PACKET_CHANNEL; + *p++ = 0; + + /* append the data into a packet */ + for (i = 0; i < send_len; i++) { + current_byte = send_buf[i]; + + /* EOP for last byte */ + if (i == send_len - 1) + *p++ = SPI_PACKET_EOP; + + switch (current_byte) { + case SPI_PACKET_SOP: + case SPI_PACKET_EOP: + case SPI_PACKET_CHANNEL: + case SPI_PACKET_ESC: + *p++ = SPI_PACKET_ESC; + *p++ = xor_20(current_byte); + break; + default: + *p++ = current_byte; + } + } + + ret = byte_to_core_convert(dev, p - send_packet, + send_packet, resp_max_len, resp_packet, + &valid_resp_len); + if (ret) + return -EBUSY; + + 
print_buffer("after byte conver:", resp_packet, valid_resp_len); + + /* analyze the response packet */ + p = resp_buf; + + /* look for SOP */ + for (i = 0; i < valid_resp_len; i++) { + if (resp_packet[i] == SPI_PACKET_SOP) + break; + } + + if (i == valid_resp_len) { + dev_err(NULL, "error analyzing response packet 0x%x\n", + resp_packet[i]); + return -EINVAL; + } + + i++; + + /* continue parsing data after SOP */ + while (i < valid_resp_len) { + current_byte = resp_packet[i]; + + switch (current_byte) { + case SPI_PACKET_ESC: + case SPI_PACKET_CHANNEL: + case SPI_PACKET_SOP: + i++; + current_byte = resp_packet[i]; + *p++ = xor_20(current_byte); + i++; + break; + case SPI_PACKET_EOP: + i++; + current_byte = resp_packet[i]; + if (current_byte == SPI_PACKET_ESC || + current_byte == SPI_PACKET_CHANNEL || + current_byte == SPI_PACKET_SOP) { + i++; + current_byte = resp_packet[i]; + *p++ = xor_20(current_byte); + } else + *p++ = current_byte; + i = valid_resp_len; + break; + default: + *p++ = current_byte; + i++; + } + + } + + *valid = p - resp_buf; + + print_buffer("after packet:", resp_buf, *valid); + + return ret; +} + +static int do_transaction(struct spi_transaction_dev *dev, unsigned int addr, + unsigned int size, unsigned char *data, + unsigned int trans_type) +{ + + struct spi_tran_header header; + unsigned char *transaction = dev->buffer->tran_send; + unsigned char *response = dev->buffer->tran_resp; + unsigned char *p; + int ret = 0; + unsigned int i; + unsigned int valid_len = 0; + + /* make transaction header */ + INIT_SPI_TRAN_HEADER(trans_type, size, addr); + + /* fill the header */ + p = transaction; + opae_memcpy(p, &header, sizeof(struct spi_tran_header)); + p = p + sizeof(struct spi_tran_header); + + switch (trans_type) { + case SPI_TRAN_SEQ_WRITE: + case SPI_TRAN_NON_SEQ_WRITE: + for (i = 0; i < size; i++) + *p++ = *data++; + + ret = packet_to_byte_conver(dev, size + HEADER_LEN, + transaction, RESPONSE_LEN, response, + &valid_len); + if (ret) + 
return -EBUSY; + + /* check the result */ + if (size != ((unsigned int)(response[2] & 0xff) << 8 | + (unsigned int)(response[3] & 0xff))) + ret = -EBUSY; + + break; + case SPI_TRAN_SEQ_READ: + case SPI_TRAN_NON_SEQ_READ: + ret = packet_to_byte_conver(dev, HEADER_LEN, + transaction, size, response, + &valid_len); + if (ret || valid_len != size) + return -EBUSY; + + for (i = 0; i < size; i++) + *data++ = *response++; + + ret = 0; + break; + } + + return ret; +} + +int spi_transaction_read(struct spi_transaction_dev *dev, unsigned int addr, + unsigned int size, unsigned char *data) +{ + return do_transaction(dev, addr, size, data, + (size > SPI_REG_BYTES) ? + SPI_TRAN_SEQ_READ : SPI_TRAN_NON_SEQ_READ); +} + +int spi_transaction_write(struct spi_transaction_dev *dev, unsigned int addr, + unsigned int size, unsigned char *data) +{ + return do_transaction(dev, addr, size, data, + (size > SPI_REG_BYTES) ? + SPI_TRAN_SEQ_WRITE : SPI_TRAN_NON_SEQ_WRITE); +} + +struct spi_transaction_dev *spi_transaction_init(struct altera_spi_device *dev, + int chipselect) +{ + struct spi_transaction_dev *spi_tran_dev; + + spi_tran_dev = opae_malloc(sizeof(struct spi_transaction_dev)); + if (!spi_tran_dev) + return NULL; + + spi_tran_dev->dev = dev; + spi_tran_dev->chipselect = chipselect; + + spi_tran_dev->buffer = opae_malloc(sizeof(struct spi_tran_buffer)); + if (!spi_tran_dev->buffer) { + opae_free(spi_tran_dev); + return NULL; + } + + return spi_tran_dev; +} + +void spi_transaction_remove(struct spi_transaction_dev *dev) +{ + if (dev && dev->buffer) + opae_free(dev->buffer); + if (dev) + opae_free(dev); +} diff --git a/drivers/raw/ifpga_rawdev/base/osdep_raw/osdep_generic.h b/drivers/raw/ifpga_rawdev/base/osdep_raw/osdep_generic.h index 895a1d8..6769109 100644 --- a/drivers/raw/ifpga_rawdev/base/osdep_raw/osdep_generic.h +++ b/drivers/raw/ifpga_rawdev/base/osdep_raw/osdep_generic.h @@ -71,5 +71,6 @@ static inline void opae_writeq(uint64_t value, volatile void *addr) } #define 
opae_free(addr) free(addr) +#define opae_memcpy(a, b, c) memcpy((a), (b), (c)) #endif diff --git a/drivers/raw/ifpga_rawdev/base/osdep_rte/osdep_generic.h b/drivers/raw/ifpga_rawdev/base/osdep_rte/osdep_generic.h index 76902e2..cd5b7c9 100644 --- a/drivers/raw/ifpga_rawdev/base/osdep_rte/osdep_generic.h +++ b/drivers/raw/ifpga_rawdev/base/osdep_rte/osdep_generic.h @@ -11,6 +11,8 @@ #include #include #include +#include +#include #define dev_printf(level, fmt, args...) \ RTE_LOG(level, PMD, "osdep_rte: " fmt, ## args) @@ -42,4 +44,12 @@ #define spinlock_lock(x) rte_spinlock_lock(x) #define spinlock_unlock(x) rte_spinlock_unlock(x) +#define cpu_to_be16(o) rte_cpu_to_be_16(o) +#define cpu_to_be32(o) rte_cpu_to_be_32(o) +#define cpu_to_be64(o) rte_cpu_to_be_64(o) +#define cpu_to_le16(o) rte_cpu_to_le_16(o) +#define cpu_to_le32(o) rte_cpu_to_le_32(o) +#define cpu_to_le64(o) rte_cpu_to_le_64(o) + +#define opae_memcpy(a, b, c) rte_memcpy((a), (b), (c)) #endif From patchwork Thu Feb 28 07:13:13 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Xu, Rosen" X-Patchwork-Id: 50617 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id C05485F62; Thu, 28 Feb 2019 08:15:19 +0100 (CET) Received: from mga05.intel.com (mga05.intel.com [192.55.52.43]) by dpdk.org (Postfix) with ESMTP id 1AB365F28 for ; Thu, 28 Feb 2019 08:15:17 +0100 (CET) X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga001.jf.intel.com ([10.7.209.18]) by fmsmga105.fm.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 27 Feb 2019 23:15:17 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.58,422,1544515200"; d="scan'208";a="142299830" Received: from dpdkx8602.sh.intel.com ([10.67.110.200]) by orsmga001.jf.intel.com with ESMTP; 27 Feb 2019 
23:15:15 -0800 From: Rosen Xu To: dev@dpdk.org Cc: ferruh.yigit@intel.com, tianfei.zhang@intel.com, dan.wei@intel.com, rosen.xu@intel.com, andy.pei@intel.com, qiming.yang@intel.com, haiyue.wang@intel.com, santos.chen@intel.com, zhang.zhang@intel.com Date: Thu, 28 Feb 2019 15:13:13 +0800 Message-Id: <1551338000-120348-5-git-send-email-rosen.xu@intel.com> X-Mailer: git-send-email 1.8.3.1 In-Reply-To: <1551338000-120348-1-git-send-email-rosen.xu@intel.com> References: <1551338000-120348-1-git-send-email-rosen.xu@intel.com> Subject: [dpdk-dev] [PATCH v1 04/11] drivers/raw/ifpga_rawdev: add IPN3KE support for IFPGA Rawdev X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Add Intel FPGA Acceleration NIC IPN3KE support for IFPGA Rawdev. Signed-off-by: Rosen Xu Signed-off-by: Tianfei Zhang Signed-off-by: Andy Pei --- drivers/raw/ifpga_rawdev/ifpga_rawdev.c | 146 +++++++++++++++++++++++++++- drivers/raw/ifpga_rawdev/ifpga_rawdev_api.h | 71 ++++++++++++++ 2 files changed, 214 insertions(+), 3 deletions(-) create mode 100644 drivers/raw/ifpga_rawdev/ifpga_rawdev_api.h diff --git a/drivers/raw/ifpga_rawdev/ifpga_rawdev.c b/drivers/raw/ifpga_rawdev/ifpga_rawdev.c index da772d0..0635009 100644 --- a/drivers/raw/ifpga_rawdev/ifpga_rawdev.c +++ b/drivers/raw/ifpga_rawdev/ifpga_rawdev.c @@ -34,6 +34,7 @@ #include "ifpga_common.h" #include "ifpga_logs.h" #include "ifpga_rawdev.h" +#include "ifpga_rawdev_api.h" int ifpga_rawdev_logtype; @@ -42,10 +43,12 @@ #define PCIE_DEVICE_ID_PF_INT_5_X 0xBCBD #define PCIE_DEVICE_ID_PF_INT_6_X 0xBCC0 #define PCIE_DEVICE_ID_PF_DSC_1_X 0x09C4 +#define PCIE_DEVICE_ID_PAC_N3000 0x0B30 /* VF Device */ #define PCIE_DEVICE_ID_VF_INT_5_X 0xBCBF #define PCIE_DEVICE_ID_VF_INT_6_X 0xBCC1 #define PCIE_DEVICE_ID_VF_DSC_1_X 0x09C5 +#define PCIE_DEVICE_ID_VF_PAC_N3000 0x0B31 #define 
RTE_MAX_RAW_DEVICE 10 static const struct rte_pci_id pci_ifpga_map[] = { @@ -55,6 +58,8 @@ { RTE_PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCIE_DEVICE_ID_VF_INT_6_X) }, { RTE_PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCIE_DEVICE_ID_PF_DSC_1_X) }, { RTE_PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCIE_DEVICE_ID_VF_DSC_1_X) }, + { RTE_PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCIE_DEVICE_ID_PAC_N3000),}, + { RTE_PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCIE_DEVICE_ID_VF_PAC_N3000),}, { .vendor_id = 0, /* sentinel */ }, }; @@ -327,6 +332,141 @@ return 0; } +static int +ifgpa_rawdev_get_attr(struct rte_rawdev *dev, + const char *attr_name, + uint64_t *attr_value) +{ + struct ifpga_rawdev_mac_info *mac_info; + struct ifpga_rawdevg_retimer_info *retimer_info; + struct opae_retimer_info or_info; + struct opae_adapter *adapter; + struct opae_manager *mgr; + struct ifpga_rawdevg_link_info *linfo; + struct opae_retimer_status rstatus; + + IFPGA_RAWDEV_PMD_FUNC_TRACE(); + + if (!dev || !attr_name || !attr_value) { + IFPGA_BUS_ERR("Invalid arguments for getting attributes"); + return -1; + } + + adapter = ifpga_rawdev_get_priv(dev); + if (!adapter) + return -1; + + mgr = opae_adapter_get_mgr(adapter); + if (!mgr) + return -1; + + if (!strcmp(attr_name, "retimer_info")) { + retimer_info = (struct ifpga_rawdevg_retimer_info *)attr_value; + if (opae_manager_get_retimer_info(mgr, &or_info)) + return -1; + + retimer_info->retimer_num = or_info.num_retimer; + retimer_info->port_num = or_info.num_port; + switch (or_info.support_speed) { + case MXD_10GB: + retimer_info->mac_type = + IFPGA_RAWDEV_RETIMER_MAC_TYPE_10GE_XFI; + break; + case MXD_25GB: + retimer_info->mac_type = + IFPGA_RAWDEV_RETIMER_MAC_TYPE_25GE_25GAUI; + break; + case MXD_40GB: + retimer_info->mac_type = + IFPGA_RAWDEVG_RETIMER_MAC_TYPE_40GE_XLAUI; + break; + case MXD_100GB: + retimer_info->mac_type = + IFPGA_RAWDEV_RETIMER_MAC_TYPE_100GE_CAUI; + break; + default: + retimer_info->mac_type = + IFPGA_RAWDEV_RETIMER_MAC_TYPE_UNKNOWN; + break; + } + return 0; + } else if 
(!strcmp(attr_name, "default_mac")) { + /* not implemented by MAX yet */ + mac_info = (struct ifpga_rawdev_mac_info *)attr_value; + mac_info->addr.addr_bytes[0] = 0; + mac_info->addr.addr_bytes[1] = 0; + mac_info->addr.addr_bytes[2] = 0; + mac_info->addr.addr_bytes[3] = 0; + mac_info->addr.addr_bytes[4] = 0; + mac_info->addr.addr_bytes[5] = 0xA + mac_info->port_id; + + return 0; + } else if (!strcmp(attr_name, "retimer_linkstatus")) { + linfo = (struct ifpga_rawdevg_link_info *)attr_value; + linfo->link_up = 0; + linfo->link_speed = IFPGA_RAWDEV_LINK_SPEED_UNKNOWN; + + if (opae_manager_get_retimer_status(mgr, linfo->port, &rstatus)) + return -1; + + linfo->link_up = rstatus.line_link; + switch (rstatus.speed) { + case MXD_10GB: + linfo->link_speed = + IFPGA_RAWDEV_LINK_SPEED_10GB; + break; + case MXD_25GB: + linfo->link_speed = + IFPGA_RAWDEV_LINK_SPEED_25GB; + break; + case MXD_40GB: + linfo->link_speed = + IFPGA_RAWDEV_LINK_SPEED_40GB; + break; + default: + linfo->link_speed = + IFPGA_RAWDEV_LINK_SPEED_UNKNOWN; + break; + } + + return 0; + } else + return -1; + + /* Attribute not found */ + return -1; +} + +static int ifgpa_rawdev_set_attr(struct rte_rawdev *dev, + const char *attr_name, + const uint64_t attr_value) +{ + struct opae_adapter *adapter; + struct opae_manager *mgr; + /*struct ifpga_rawdevg_link_info *linfo;*/ + /*struct opae_retimer_status rstatus;*/ + + IFPGA_RAWDEV_PMD_FUNC_TRACE(); + + if (!dev || !attr_name) { + IFPGA_BUS_ERR("Invalid arguments for setting attributes"); + return -1; + } + + adapter = ifpga_rawdev_get_priv(dev); + if (!adapter) + return -1; + + mgr = opae_adapter_get_mgr(adapter); + if (!mgr) + return -1; + + if (!strcmp(attr_name, "retimer_linkstatus")) + printf("ifgpa_rawdev_set_attr_func %lx\n", attr_value); + + return -1; +} + static const struct rte_rawdev_ops ifpga_rawdev_ops = { .dev_info_get = ifpga_rawdev_info_get, .dev_configure = ifpga_rawdev_configure, @@ -339,8 +479,8 @@ .queue_setup = NULL, .queue_release = NULL, - 
.attr_get = NULL, - .attr_set = NULL, + .attr_get = ifgpa_rawdev_get_attr, + .attr_set = ifgpa_rawdev_set_attr, .enqueue_bufs = NULL, .dequeue_bufs = NULL, @@ -419,7 +559,7 @@ rawdev->dev_ops = &ifpga_rawdev_ops; rawdev->device = &pci_dev->device; - rawdev->driver_name = pci_dev->device.driver->name; + rawdev->driver_name = pci_dev->driver->driver.name; /* must enumerate the adapter before use it */ ret = opae_adapter_enumerate(adapter); diff --git a/drivers/raw/ifpga_rawdev/ifpga_rawdev_api.h b/drivers/raw/ifpga_rawdev/ifpga_rawdev_api.h new file mode 100644 index 0000000..31e0000 --- /dev/null +++ b/drivers/raw/ifpga_rawdev/ifpga_rawdev_api.h @@ -0,0 +1,71 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2010-2018 Intel Corporation + */ + +#ifndef _IFPGA_RAWDEV_API_H_ +#define _IFPGA_RAWDEV_API_H_ + +#include + +#ifndef ETH_ALEN +#define ETH_ALEN 6 +#endif + +struct ifpga_rawdev_mac_info { + uint16_t port_id; + struct ether_addr addr; +}; + +enum ifpga_rawdev_retimer_media_type { + IFPGA_RAWDEV_RETIMER_MEDIA_TYPE_UNKNOWN = 0, + IFPGA_RAWDEV_RETIMER_MEDIA_TYPE_100GBASE_LR4, + IFPGA_RAWDEV_RETIMER_MEDIA_TYPE_100GBASE_SR4, + IFPGA_RAWDEV_RETIMER_MEDIA_TYPE_100GBASE_CR4, + IFPGA_RAWDEV_RETIMER_MEDIA_TYPE_40GBASE_LR4, + IFPGA_RAWDEV_RETIMER_MEDIA_TYPE_400GBASE_SR4, + IFPGA_RAWDEV_RETIMER_MEDIA_TYPE_40GBASE_CR4, + IFPGA_RAWDEV_RETIMER_MEDIA_TYPE_25GBASE_SR, + IFPGA_RAWDEV_RETIMER_MEDIA_TYPE_25GBASE_CR, + IFPGA_RAWDEV_RETIMER_MEDIA_TYPE_10GBASE_LR, + IFPGA_RAWDEV_RETIMER_MEDIA_TYPE_10GBASE_SR, + IFPGA_RAWDEV_RETIMER_MEDIA_TYPE_10GBASE_DAC, + IFPGA_RAWDEV_RETIMER_MEDIA_TYPE_DEFAULT +}; + +enum ifpga_rawdev_retimer_mac_type { + IFPGA_RAWDEV_RETIMER_MAC_TYPE_UNKNOWN = 0, + IFPGA_RAWDEV_RETIMER_MAC_TYPE_100GE_CAUI, + IFPGA_RAWDEVG_RETIMER_MAC_TYPE_40GE_XLAUI, + IFPGA_RAWDEV_RETIMER_MAC_TYPE_25GE_25GAUI, + IFPGA_RAWDEV_RETIMER_MAC_TYPE_10GE_XFI, + IFPGA_RAWDEV_RETIMER_MAC_TYPE_DEFAULT +}; + +#define IFPGA_RAWDEV_LINK_SPEED_10GB_SHIFT 0x0 +#define 
IFPGA_RAWDEV_LINK_SPEED_40GB_SHIFT 0x1 +#define IFPGA_RAWDEV_LINK_SPEED_25GB_SHIFT 0x2 + +enum ifpga_rawdev_link_speed { + IFPGA_RAWDEV_LINK_SPEED_UNKNOWN = 0, + IFPGA_RAWDEV_LINK_SPEED_10GB = + (1 << IFPGA_RAWDEV_LINK_SPEED_10GB_SHIFT), + IFPGA_RAWDEV_LINK_SPEED_40GB = + (1 << IFPGA_RAWDEV_LINK_SPEED_40GB_SHIFT), + IFPGA_RAWDEV_LINK_SPEED_25GB = + (1 << IFPGA_RAWDEV_LINK_SPEED_25GB_SHIFT), +}; + +struct ifpga_rawdevg_retimer_info { + int retimer_num; + int port_num; + enum ifpga_rawdev_retimer_media_type media_type; + enum ifpga_rawdev_retimer_mac_type mac_type; +}; + +struct ifpga_rawdevg_link_info { + int port; + int link_up; + enum ifpga_rawdev_link_speed link_speed; +}; + +#endif /* _IFPGA_RAWDEV_H_ */ From patchwork Thu Feb 28 07:13:14 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: "Xu, Rosen" X-Patchwork-Id: 50619 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 3D25C5F33; Thu, 28 Feb 2019 08:15:28 +0100 (CET) Received: from mga07.intel.com (mga07.intel.com [134.134.136.100]) by dpdk.org (Postfix) with ESMTP id DAFDF5F1B for ; Thu, 28 Feb 2019 08:15:24 +0100 (CET) X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga001.jf.intel.com ([10.7.209.18]) by orsmga105.jf.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 27 Feb 2019 23:15:23 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.58,422,1544515200"; d="scan'208";a="142299857" Received: from dpdkx8602.sh.intel.com ([10.67.110.200]) by orsmga001.jf.intel.com with ESMTP; 27 Feb 2019 23:15:19 -0800 From: Rosen Xu To: dev@dpdk.org Cc: ferruh.yigit@intel.com, tianfei.zhang@intel.com, dan.wei@intel.com, rosen.xu@intel.com, andy.pei@intel.com, qiming.yang@intel.com, haiyue.wang@intel.com, santos.chen@intel.com, zhang.zhang@intel.com 
Date: Thu, 28 Feb 2019 15:13:14 +0800 Message-Id: <1551338000-120348-6-git-send-email-rosen.xu@intel.com> X-Mailer: git-send-email 1.8.3.1 In-Reply-To: <1551338000-120348-1-git-send-email-rosen.xu@intel.com> References: <1551338000-120348-1-git-send-email-rosen.xu@intel.com> MIME-Version: 1.0 Subject: [dpdk-dev] [PATCH v1 05/11] drivers/net/ipn3ke: add IPN3KE PMD driver X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Add Intel FPGA Acceleration NIC IPN3KE PMD driver. Signed-off-by: Rosen Xu Signed-off-by: Andy Pei Signed-off-by: Dan Wei --- drivers/net/Makefile | 1 + drivers/net/ipn3ke/Makefile | 33 + drivers/net/ipn3ke/ipn3ke_ethdev.c | 814 +++++++++ drivers/net/ipn3ke/ipn3ke_ethdev.h | 742 +++++++++ drivers/net/ipn3ke/ipn3ke_flow.c | 1407 ++++++++++++++++ drivers/net/ipn3ke/ipn3ke_flow.h | 104 ++ drivers/net/ipn3ke/ipn3ke_logs.h | 30 + drivers/net/ipn3ke/ipn3ke_representor.c | 890 ++++++++++ drivers/net/ipn3ke/ipn3ke_tm.c | 2217 +++++++++++++++++++++++++ drivers/net/ipn3ke/ipn3ke_tm.h | 135 ++ drivers/net/ipn3ke/meson.build | 9 + drivers/net/ipn3ke/rte_pmd_ipn3ke_version.map | 4 + 12 files changed, 6386 insertions(+) create mode 100644 drivers/net/ipn3ke/Makefile create mode 100644 drivers/net/ipn3ke/ipn3ke_ethdev.c create mode 100644 drivers/net/ipn3ke/ipn3ke_ethdev.h create mode 100644 drivers/net/ipn3ke/ipn3ke_flow.c create mode 100644 drivers/net/ipn3ke/ipn3ke_flow.h create mode 100644 drivers/net/ipn3ke/ipn3ke_logs.h create mode 100644 drivers/net/ipn3ke/ipn3ke_representor.c create mode 100644 drivers/net/ipn3ke/ipn3ke_tm.c create mode 100644 drivers/net/ipn3ke/ipn3ke_tm.h create mode 100644 drivers/net/ipn3ke/meson.build create mode 100644 drivers/net/ipn3ke/rte_pmd_ipn3ke_version.map diff --git a/drivers/net/Makefile b/drivers/net/Makefile index 670d7f7..f66263c 100644 --- 
a/drivers/net/Makefile +++ b/drivers/net/Makefile @@ -32,6 +32,7 @@ DIRS-$(CONFIG_RTE_LIBRTE_FM10K_PMD) += fm10k DIRS-$(CONFIG_RTE_LIBRTE_I40E_PMD) += i40e DIRS-$(CONFIG_RTE_LIBRTE_ICE_PMD) += ice DIRS-$(CONFIG_RTE_LIBRTE_IXGBE_PMD) += ixgbe +DIRS-$(CONFIG_RTE_LIBRTE_IPN3KE_PMD) += ipn3ke DIRS-$(CONFIG_RTE_LIBRTE_LIO_PMD) += liquidio DIRS-$(CONFIG_RTE_LIBRTE_MLX4_PMD) += mlx4 DIRS-$(CONFIG_RTE_LIBRTE_MLX5_PMD) += mlx5 diff --git a/drivers/net/ipn3ke/Makefile b/drivers/net/ipn3ke/Makefile new file mode 100644 index 0000000..03f2145 --- /dev/null +++ b/drivers/net/ipn3ke/Makefile @@ -0,0 +1,33 @@ +# SPDX-License-Identifier: BSD-3-Clause +# Copyright(c) 2019 Intel Corporation + +include $(RTE_SDK)/mk/rte.vars.mk + +# +# library name +# +LIB = librte_pmd_ipn3ke.a + +CFLAGS += -DALLOW_EXPERIMENTAL_API +CFLAGS += -O3 +#CFLAGS += $(WERROR_FLAGS) +CFLAGS += -I$(RTE_SDK)/drivers/bus/ifpga +CFLAGS += -I$(RTE_SDK)/drivers/raw/ifpga_rawdev +LDLIBS += -lrte_eal -lrte_mbuf -lrte_mempool -lrte_ring +LDLIBS += -lrte_ethdev -lrte_net -lrte_kvargs +LDLIBS += -lrte_bus_pci +LDLIBS += -lrte_bus_vdev + +EXPORT_MAP := rte_pmd_ipn3ke_version.map + +LIBABIVER := 1 + +# +# all source are stored in SRCS-y +# +SRCS-y += ipn3ke_ethdev.c +SRCS-y += ipn3ke_representor.c +SRCS-y += ipn3ke_tm.c +SRCS-y += ipn3ke_flow.c + +include $(RTE_SDK)/mk/rte.lib.mk diff --git a/drivers/net/ipn3ke/ipn3ke_ethdev.c b/drivers/net/ipn3ke/ipn3ke_ethdev.c new file mode 100644 index 0000000..e691f68 --- /dev/null +++ b/drivers/net/ipn3ke/ipn3ke_ethdev.c @@ -0,0 +1,814 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2019 Intel Corporation + */ + +#include + +#include +#include +#include +#include + +#include +#include +#include + +#include +#include +#include +#include +#include +#include + +#include "ifpga_rawdev_api.h" +#include "ipn3ke_tm.h" +#include "ipn3ke_flow.h" +#include "ipn3ke_logs.h" +#include "ipn3ke_ethdev.h" + +int ipn3ke_afu_logtype; + +static const struct rte_afu_uuid 
afu_uuid_ipn3ke_map[] = { + { MAP_UUID_10G_LOW, MAP_UUID_10G_HIGH }, + { IPN3KE_UUID_10G_LOW, IPN3KE_UUID_10G_HIGH }, + { IPN3KE_UUID_25G_LOW, IPN3KE_UUID_25G_HIGH }, + { 0, 0 /* sentinel */ }, +}; + +struct ipn3ke_hw_cap hw_cap; + +static int ipn3ke_indirect_read(struct ipn3ke_hw *hw, + uint32_t *rd_data, + uint32_t addr, + uint32_t mac_num, + uint32_t dev_sel, + uint32_t eth_wrapper_sel) +{ + uint32_t base_addr; + uint32_t i, try_cnt; + uint32_t delay = 0xFFFFF; + uint64_t indirect_value = 0; + volatile void *indirect_addrs = 0; + uint64_t target_addr = 0; + uint64_t read_data = 0; + + if (mac_num >= hw->port_num) + return -1; + + if (eth_wrapper_sel == 0) + base_addr = IPN3KE_MAC_CTRL_BASE_0; + else if (eth_wrapper_sel == 1) + base_addr = IPN3KE_MAC_CTRL_BASE_1; + else + return -1; + + addr &= 0x3FF; + mac_num &= 0x7; + mac_num <<= 10; + target_addr = addr | mac_num; + + indirect_value = RCMD | target_addr << 32; + indirect_addrs = (volatile void *)(hw->hw_addr + + (uint32_t)(base_addr | 0x10)); + + while (delay != 0) + delay--; + + rte_write64((rte_cpu_to_le_64(indirect_value)), indirect_addrs); + + i = 0; + try_cnt = 10; + indirect_addrs = (volatile void *)(hw->hw_addr + + (uint32_t)(base_addr | 0x18)); + do { + read_data = rte_read64(indirect_addrs); + if ((read_data >> 32) == 1) + break; + i++; + } while (i <= try_cnt); + if (i > try_cnt) + return -1; + + (*rd_data) = rte_le_to_cpu_32(read_data); + return 0; +} + +static int ipn3ke_indirect_write(struct ipn3ke_hw *hw, + uint32_t wr_data, + uint32_t addr, + uint32_t mac_num, + uint32_t dev_sel, + uint32_t eth_wrapper_sel) +{ + uint32_t base_addr; + volatile void *indirect_addrs = 0; + uint64_t indirect_value = 0; + uint64_t target_addr = 0; + + if (mac_num >= hw->port_num) + return -1; + + if (eth_wrapper_sel == 0) + base_addr = IPN3KE_MAC_CTRL_BASE_0; + else if (eth_wrapper_sel == 1) + base_addr = IPN3KE_MAC_CTRL_BASE_1; + else + return -1; + + addr &= 0x3FF; + mac_num &= 0x7; + mac_num <<= 10; + target_addr 
= addr | mac_num; + + indirect_value = WCMD | target_addr << 32 | wr_data; + indirect_addrs = (volatile void *)(hw->hw_addr + + (uint32_t)(base_addr | 0x10)); + + rte_write64((rte_cpu_to_le_64(indirect_value)), indirect_addrs); + return 0; +} + +static int ipn3ke_indirect_mac_read(struct ipn3ke_hw *hw, + uint32_t *rd_data, + uint32_t addr, + uint32_t mac_num, + uint32_t eth_wrapper_sel) +{ + return ipn3ke_indirect_read(hw, + rd_data, + addr, + mac_num, + 0, + eth_wrapper_sel); +} + +static int ipn3ke_indirect_mac_write(struct ipn3ke_hw *hw, + uint32_t wr_data, + uint32_t addr, + uint32_t mac_num, + uint32_t eth_wrapper_sel) +{ + return ipn3ke_indirect_write(hw, + wr_data, + addr, + mac_num, + 0, + eth_wrapper_sel); +} + +#define MAP_PHY_MAC_BASE 0x409000 +#define MAP_FVL_MAC_BASE 0x809000 +static int map_indirect_mac_read(struct ipn3ke_hw *hw, + uint32_t *rd_data, + uint32_t addr, + uint32_t mac_num, + uint32_t eth_wrapper_sel) +{ + uint32_t base_addr; + + base_addr = MAP_PHY_MAC_BASE + + 0x400000 * eth_wrapper_sel + + 0x1000 * mac_num + + (addr << 4); + + (*rd_data) = IPN3KE_READ_REG(hw, base_addr); + + return 0; +} +static int map_indirect_mac_write(struct ipn3ke_hw *hw, + uint32_t wr_data, + uint32_t addr, + uint32_t mac_num, + uint32_t eth_wrapper_sel) +{ + uint32_t base_addr; + + base_addr = MAP_PHY_MAC_BASE + + 0x400000 * eth_wrapper_sel + + 0x1000 * mac_num + + (addr << 4); + + IPN3KE_WRITE_REG(hw, base_addr, wr_data); + + return 0; +} + +static int +ipn3ke_hw_cap_init(struct ipn3ke_hw *hw) +{ + hw_cap.version_number = IPN3KE_MASK_READ_REG(hw, + (IPN3KE_HW_BASE + 0), 0, 0xFFFF); + hw_cap.capability_registers_block_offset = IPN3KE_MASK_READ_REG(hw, + (IPN3KE_HW_BASE + 0x8), 0, 0xFFFFFFFF); + hw_cap.status_registers_block_offset = IPN3KE_MASK_READ_REG(hw, + (IPN3KE_HW_BASE + 0x10), 0, 0xFFFFFFFF); + hw_cap.control_registers_block_offset = IPN3KE_MASK_READ_REG(hw, + (IPN3KE_HW_BASE + 0x18), 0, 0xFFFFFFFF); + hw_cap.classify_offset = IPN3KE_MASK_READ_REG(hw, + 
(IPN3KE_HW_BASE + 0x20), 0, 0xFFFFFFFF); + hw_cap.classy_size = IPN3KE_MASK_READ_REG(hw, + (IPN3KE_HW_BASE + 0x24), 0, 0xFFFF); + hw_cap.policer_offset = IPN3KE_MASK_READ_REG(hw, + (IPN3KE_HW_BASE + 0x28), 0, 0xFFFFFFFF); + hw_cap.policer_entry_size = IPN3KE_MASK_READ_REG(hw, + (IPN3KE_HW_BASE + 0x2C), 0, 0xFFFF); + hw_cap.rss_key_array_offset = IPN3KE_MASK_READ_REG(hw, + (IPN3KE_HW_BASE + 0x30), 0, 0xFFFFFFFF); + hw_cap.rss_key_entry_size = IPN3KE_MASK_READ_REG(hw, + (IPN3KE_HW_BASE + 0x34), 0, 0xFFFF); + hw_cap.rss_indirection_table_array_offset = IPN3KE_MASK_READ_REG(hw, + (IPN3KE_HW_BASE + 0x38), 0, 0xFFFFFFFF); + hw_cap.rss_indirection_table_entry_size = IPN3KE_MASK_READ_REG(hw, + (IPN3KE_HW_BASE + 0x3C), 0, 0xFFFF); + hw_cap.dmac_map_offset = IPN3KE_MASK_READ_REG(hw, + (IPN3KE_HW_BASE + 0x40), 0, 0xFFFFFFFF); + hw_cap.dmac_map_size = IPN3KE_MASK_READ_REG(hw, + (IPN3KE_HW_BASE + 0x44), 0, 0xFFFF); + hw_cap.qm_offset = IPN3KE_MASK_READ_REG(hw, + (IPN3KE_HW_BASE + 0x48), 0, 0xFFFFFFFF); + hw_cap.qm_size = IPN3KE_MASK_READ_REG(hw, + (IPN3KE_HW_BASE + 0x4C), 0, 0xFFFF); + hw_cap.ccb_offset = IPN3KE_MASK_READ_REG(hw, + (IPN3KE_HW_BASE + 0x50), 0, 0xFFFFFFFF); + hw_cap.ccb_entry_size = IPN3KE_MASK_READ_REG(hw, + (IPN3KE_HW_BASE + 0x54), 0, 0xFFFF); + hw_cap.qos_offset = IPN3KE_MASK_READ_REG(hw, + (IPN3KE_HW_BASE + 0x58), 0, 0xFFFFFFFF); + hw_cap.qos_size = IPN3KE_MASK_READ_REG(hw, + (IPN3KE_HW_BASE + 0x5C), 0, 0xFFFF); + + hw_cap.num_rx_flow = IPN3KE_MASK_READ_REG(hw, + IPN3KE_CAPABILITY_REGISTERS_BLOCK_OFFSET, + 0, 0xFFFF); + hw_cap.num_rss_blocks = IPN3KE_MASK_READ_REG(hw, + IPN3KE_CAPABILITY_REGISTERS_BLOCK_OFFSET, + 4, 0xFFFF); + hw_cap.num_dmac_map = IPN3KE_MASK_READ_REG(hw, + IPN3KE_CAPABILITY_REGISTERS_BLOCK_OFFSET, + 8, 0xFFFF); + hw_cap.num_tx_flow = IPN3KE_MASK_READ_REG(hw, + IPN3KE_CAPABILITY_REGISTERS_BLOCK_OFFSET, + 0xC, 0xFFFF); + hw_cap.num_smac_map = IPN3KE_MASK_READ_REG(hw, + IPN3KE_CAPABILITY_REGISTERS_BLOCK_OFFSET, + 0x10, 0xFFFF); + + 
hw_cap.link_speed_mbps = IPN3KE_MASK_READ_REG(hw, + IPN3KE_STATUS_REGISTERS_BLOCK_OFFSET, + 0, 0xFFFFF); + + return 0; +} + +static int +ipn3ke_hw_init_base(struct rte_afu_device *afu_dev, + struct ipn3ke_hw *hw) +{ + struct rte_rawdev *rawdev; + int ret; + int i; + uint32_t val; + + rawdev = afu_dev->rawdev; + + hw->afu_id.uuid.uuid_low = afu_dev->id.uuid.uuid_low; + hw->afu_id.uuid.uuid_high = afu_dev->id.uuid.uuid_high; + hw->afu_id.port = afu_dev->id.port; + hw->hw_addr = (uint8_t *)(afu_dev->mem_resource[0].addr); + if ((afu_dev->id.uuid.uuid_low == MAP_UUID_10G_LOW) && + (afu_dev->id.uuid.uuid_high == MAP_UUID_10G_HIGH)) { + hw->f_mac_read = map_indirect_mac_read; + hw->f_mac_write = map_indirect_mac_write; + } else { + hw->f_mac_read = ipn3ke_indirect_mac_read; + hw->f_mac_write = ipn3ke_indirect_mac_write; + } + hw->rawdev = rawdev; + rawdev->dev_ops->attr_get(rawdev, + "retimer_info", + (uint64_t *)&hw->retimer); + hw->port_num = hw->retimer.port_num; + + /* Enable inter connect channel */ + for (i = 0; i < hw->port_num; i++) { + /* Enable the TX path */ + val = 0; + val &= IPN3KE_MAC_TX_PACKET_CONTROL_MASK; + (*hw->f_mac_write)(hw, + val, + IPN3KE_MAC_TX_PACKET_CONTROL, + i, + 1); + + /* Disables source address override */ + val = 0; + val &= IPN3KE_MAC_TX_SRC_ADDR_OVERRIDE_MASK; + (*hw->f_mac_write)(hw, + val, + IPN3KE_MAC_TX_SRC_ADDR_OVERRIDE, + i, + 1); + + /* Enable the RX path */ + val = 0; + val &= IPN3KE_MAC_RX_TRANSFER_CONTROL_MASK; + (*hw->f_mac_write)(hw, + val, + IPN3KE_MAC_RX_TRANSFER_CONTROL, + i, + 1); + + /* Clear all TX statistics counters */ + val = 1; + val &= IPN3KE_MAC_TX_STATS_CLR_CLEAR_MASK; + (*hw->f_mac_write)(hw, + val, + IPN3KE_MAC_TX_STATS_CLR, + i, + 1); + + /* Clear all RX statistics counters */ + val = 1; + val &= IPN3KE_MAC_RX_STATS_CLR_CLEAR_MASK; + (*hw->f_mac_write)(hw, + val, + IPN3KE_MAC_RX_STATS_CLR, + i, + 1); + } + + ret = rte_eth_switch_domain_alloc(&hw->switch_domain_id); + if (ret) + IPN3KE_AFU_PMD_WARN("failed to 
allocate switch domain for device %d", + ret); + + hw->tm_hw_enable = 0; + hw->flow_hw_enable = 0; + + hw->acc_tm = 0; + hw->acc_flow = 0; + + return 0; +} + +static int +ipn3ke_hw_init_vbng(struct rte_afu_device *afu_dev, + struct ipn3ke_hw *hw) +{ + struct rte_rawdev *rawdev; + int ret; + int i; + uint32_t val; + + rawdev = afu_dev->rawdev; + + hw->afu_id.uuid.uuid_low = afu_dev->id.uuid.uuid_low; + hw->afu_id.uuid.uuid_high = afu_dev->id.uuid.uuid_high; + hw->afu_id.port = afu_dev->id.port; + hw->hw_addr = (uint8_t *)(afu_dev->mem_resource[0].addr); + if ((afu_dev->id.uuid.uuid_low == MAP_UUID_10G_LOW) && + (afu_dev->id.uuid.uuid_high == MAP_UUID_10G_HIGH)) { + hw->f_mac_read = map_indirect_mac_read; + hw->f_mac_write = map_indirect_mac_write; + } else { + hw->f_mac_read = ipn3ke_indirect_mac_read; + hw->f_mac_write = ipn3ke_indirect_mac_write; + } + hw->rawdev = rawdev; + rawdev->dev_ops->attr_get(rawdev, + "retimer_info", + (uint64_t *)&hw->retimer); + hw->port_num = hw->retimer.port_num; + + ipn3ke_hw_cap_init(hw); + printf("UPL_version is 0x%x\n", IPN3KE_READ_REG(hw, 0)); + + /* Reset FPGA IP */ + IPN3KE_WRITE_REG(hw, IPN3KE_CTRL_RESET, 1); + IPN3KE_WRITE_REG(hw, IPN3KE_CTRL_RESET, 0); + + /* Enable inter connect channel */ + for (i = 0; i < hw->port_num; i++) { + /* Enable the TX path */ + val = 0; + val &= IPN3KE_MAC_TX_PACKET_CONTROL_MASK; + (*hw->f_mac_write)(hw, + val, + IPN3KE_MAC_TX_PACKET_CONTROL, + i, + 1); + + /* Disables source address override */ + val = 0; + val &= IPN3KE_MAC_TX_SRC_ADDR_OVERRIDE_MASK; + (*hw->f_mac_write)(hw, + val, + IPN3KE_MAC_TX_SRC_ADDR_OVERRIDE, + i, + 1); + + /* Enable the RX path */ + val = 0; + val &= IPN3KE_MAC_RX_TRANSFER_CONTROL_MASK; + (*hw->f_mac_write)(hw, + val, + IPN3KE_MAC_RX_TRANSFER_CONTROL, + i, + 1); + + /* Clear all TX statistics counters */ + val = 1; + val &= IPN3KE_MAC_TX_STATS_CLR_CLEAR_MASK; + (*hw->f_mac_write)(hw, + val, + IPN3KE_MAC_TX_STATS_CLR, + i, + 1); + + /* Clear all RX statistics counters 
*/ + val = 1; + val &= IPN3KE_MAC_RX_STATS_CLR_CLEAR_MASK; + (*hw->f_mac_write)(hw, + val, + IPN3KE_MAC_RX_STATS_CLR, + i, + 1); + } + + ret = rte_eth_switch_domain_alloc(&hw->switch_domain_id); + if (ret) + IPN3KE_AFU_PMD_WARN("failed to allocate switch domain for device %d", + ret); + + ret = ipn3ke_hw_tm_init(hw); + if (ret) + return ret; + hw->tm_hw_enable = 1; + + ret = ipn3ke_flow_init(hw); + if (ret) + return ret; + hw->flow_hw_enable = 1; + + hw->acc_tm = 0; + hw->acc_flow = 0; + + return 0; +} + +static void +ipn3ke_hw_uninit(struct ipn3ke_hw *hw) +{ + int i; + uint32_t val; + + for (i = 0; i < hw->port_num; i++) { + /* Disable the TX path */ + val = 1; + val &= IPN3KE_MAC_TX_PACKET_CONTROL_MASK; + (*hw->f_mac_write)(hw, + val, + IPN3KE_MAC_TX_PACKET_CONTROL, + 0, + 1); + + /* Disable the RX path */ + val = 1; + val &= IPN3KE_MAC_RX_TRANSFER_CONTROL_MASK; + (*hw->f_mac_write)(hw, + val, + IPN3KE_MAC_RX_TRANSFER_CONTROL, + 0, + 1); + + /* Clear all TX statistics counters */ + val = 1; + val &= IPN3KE_MAC_TX_STATS_CLR_CLEAR_MASK; + (*hw->f_mac_write)(hw, + val, + IPN3KE_MAC_TX_STATS_CLR, + 0, + 1); + + /* Clear all RX statistics counters */ + val = 1; + val &= IPN3KE_MAC_RX_STATS_CLR_CLEAR_MASK; + (*hw->f_mac_write)(hw, + val, + IPN3KE_MAC_RX_STATS_CLR, + 0, + 1); + } +} + +RTE_INIT(ipn3ke_afu_init_log); +static void +ipn3ke_afu_init_log(void) +{ + ipn3ke_afu_logtype = rte_log_register("driver.afu.ipn3ke"); + if (ipn3ke_afu_logtype >= 0) + rte_log_set_level(ipn3ke_afu_logtype, RTE_LOG_NOTICE); +} + +static int ipn3ke_vswitch_probe(struct rte_afu_device *afu_dev) +{ + char name[RTE_ETH_NAME_MAX_LEN]; + struct ipn3ke_hw *hw; + int i, retval; + + /* check if the AFU device has been probed already */ + /* allocate shared mcp_vswitch structure */ + if (!afu_dev->shared.data) { + snprintf(name, sizeof(name), "net_%s_hw", + afu_dev->device.name); + hw = rte_zmalloc_socket(name, + sizeof(struct ipn3ke_hw), + RTE_CACHE_LINE_SIZE, + afu_dev->device.numa_node); + if 
(!hw) { + IPN3KE_AFU_PMD_LOG(ERR, + "failed to allocate hardware data"); + retval = -ENOMEM; + return -ENOMEM; + } + afu_dev->shared.data = hw; + + rte_spinlock_init(&afu_dev->shared.lock); + } else + hw = (struct ipn3ke_hw *)afu_dev->shared.data; + +#if IPN3KE_HW_BASE_ENABLE + retval = ipn3ke_hw_init_base(afu_dev, hw); + if (retval) + return retval; +#endif + + retval = ipn3ke_hw_init_vbng(afu_dev, hw); + if (retval) + return retval; + + /* probe representor ports */ + for (i = 0; i < hw->port_num; i++) { + struct ipn3ke_rpst rpst = { + .port_id = i, + .switch_domain_id = hw->switch_domain_id, + .hw = hw + }; + + /* representor port net_bdf_port */ + snprintf(name, sizeof(name), "net_%s_representor_%d", + afu_dev->device.name, i); + + retval = rte_eth_dev_create(&afu_dev->device, name, + sizeof(struct ipn3ke_rpst), NULL, NULL, + ipn3ke_rpst_init, &rpst); + + if (retval) + IPN3KE_AFU_PMD_LOG(ERR, "failed to create ipn3ke " + "representor %s.", name); + } + + return 0; +} + +static int ipn3ke_vswitch_remove(struct rte_afu_device *afu_dev) +{ + char name[RTE_ETH_NAME_MAX_LEN]; + struct ipn3ke_hw *hw; + struct rte_eth_dev *ethdev; + int i, ret; + + hw = (struct ipn3ke_hw *)afu_dev->shared.data; + + /* remove representor ports */ + for (i = 0; i < hw->port_num; i++) { + /* representor port net_bdf_port */ + snprintf(name, sizeof(name), "net_%s_representor_%d", + afu_dev->device.name, i); + + ethdev = rte_eth_dev_allocated(afu_dev->device.name); + if (!ethdev) + return -ENODEV; + + rte_eth_dev_destroy(ethdev, ipn3ke_rpst_uninit); + } + + ret = rte_eth_switch_domain_free(hw->switch_domain_id); + if (ret) + IPN3KE_AFU_PMD_LOG(WARNING, + "failed to free switch domain: %d", + ret); + + /* flow uninit */ + + ipn3ke_hw_uninit(hw); + + return 0; +} + +static struct rte_afu_driver afu_ipn3ke_driver = { + .id_table = afu_uuid_ipn3ke_map, + .probe = ipn3ke_vswitch_probe, + .remove = ipn3ke_vswitch_remove, +}; + +RTE_PMD_REGISTER_AFU(net_ipn3ke_afu, afu_ipn3ke_driver); 
+RTE_PMD_REGISTER_AFU_ALIAS(net_ipn3ke_afu, afu_dev); +RTE_PMD_REGISTER_PARAM_STRING(net_ipn3ke_afu, + "bdf= " + "port= " + "uuid_high= " + "uuid_low= " + "path= " + "pr_enable=" + "debug="); + +static const char * const valid_args[] = { +#define IPN3KE_AFU_NAME "afu" + IPN3KE_AFU_NAME, +#define IPN3KE_FPGA_ACCELERATION_LIST "fpga_acc" + IPN3KE_FPGA_ACCELERATION_LIST, +#define IPN3KE_I40E_PF_LIST "i40e_pf" + IPN3KE_I40E_PF_LIST, + NULL +}; +static int +ipn3ke_cfg_parse_acc_list(const char *afu_name, +const char *acc_list_name) +{ + struct rte_afu_device *afu_dev; + struct ipn3ke_hw *hw; + const char *p_source; + char *p_start; + char name[RTE_ETH_NAME_MAX_LEN]; + + afu_dev = rte_ifpga_find_afu_by_name(afu_name); + if (!afu_dev) + return -1; + hw = (struct ipn3ke_hw *)afu_dev->shared.data; + if (!hw) + return -1; + + p_source = acc_list_name; + while (*p_source) { + while ((*p_source == '{') || (*p_source == '|')) + p_source++; + p_start = name; + while ((*p_source != '|') && (*p_source != '}')) + *p_start++ = *p_source++; + *p_start = 0; + if (!strcmp(name, "tm") && hw->tm_hw_enable) + hw->acc_tm = 1; + + if (!strcmp(name, "flow") && hw->flow_hw_enable) + hw->acc_flow = 1; + + if (*p_source == '}') + return 0; + } + + return 0; +} + +static int +ipn3ke_cfg_parse_i40e_pf_ethdev(const char *afu_name, +const char *pf_name) +{ + struct rte_eth_dev *i40e_eth, *rpst_eth; + struct rte_afu_device *afu_dev; + struct ipn3ke_rpst *rpst; + struct ipn3ke_hw *hw; + const char *p_source; + char *p_start; + char name[RTE_ETH_NAME_MAX_LEN]; + uint16_t port_id; + int i; + int ret = -1; + + afu_dev = rte_ifpga_find_afu_by_name(afu_name); + if (!afu_dev) + return -1; + hw = (struct ipn3ke_hw *)afu_dev->shared.data; + if (!hw) + return -1; + + p_source = pf_name; + for (i = 0; i < hw->port_num; i++) { + snprintf(name, sizeof(name), "net_%s_representor_%d", + afu_name, i); + ret = rte_eth_dev_get_port_by_name(name, &port_id); + rpst_eth = &rte_eth_devices[port_id]; + rpst = 
IPN3KE_DEV_PRIVATE_TO_RPST(rpst_eth); + + while ((*p_source == '{') || (*p_source == '|')) + p_source++; + p_start = name; + while ((*p_source != '|') && (*p_source != '}')) + *p_start++ = *p_source++; + *p_start = 0; + + ret = rte_eth_dev_get_port_by_name(name, &port_id); + i40e_eth = &rte_eth_devices[port_id]; + + rpst->i40e_pf_eth = i40e_eth; + rpst->i40e_pf_eth_port_id = port_id; + + if ((*p_source == '}') || !(*p_source)) + break; + } + + return 0; +} +static int +ipn3ke_cfg_probe(struct rte_vdev_device *dev) +{ + struct rte_devargs *devargs; + struct rte_kvargs *kvlist = NULL; + char *afu_name = NULL; + char *acc_name = NULL; + char *pf_name = NULL; + int ret = -1; + + devargs = dev->device.devargs; + + kvlist = rte_kvargs_parse(devargs->args, valid_args); + if (!kvlist) { + IPN3KE_AFU_PMD_LOG(ERR, "error when parsing param"); + goto end; + } + + if (rte_kvargs_count(kvlist, IPN3KE_AFU_NAME) == 1) { + if (rte_kvargs_process(kvlist, IPN3KE_AFU_NAME, + &rte_ifpga_get_string_arg, + &afu_name) < 0) { + IPN3KE_AFU_PMD_ERR("error to parse %s", + IPN3KE_AFU_NAME); + goto end; + } + } else { + IPN3KE_AFU_PMD_ERR("arg %s is mandatory for ipn3ke", + IPN3KE_AFU_NAME); + goto end; + } + + if (rte_kvargs_count(kvlist, IPN3KE_FPGA_ACCELERATION_LIST) == 1) { + if (rte_kvargs_process(kvlist, IPN3KE_FPGA_ACCELERATION_LIST, + &rte_ifpga_get_string_arg, + &acc_name) < 0) { + IPN3KE_AFU_PMD_ERR("error to parse %s", + IPN3KE_FPGA_ACCELERATION_LIST); + goto end; + } + ret = ipn3ke_cfg_parse_acc_list(afu_name, acc_name); + if (ret) + goto end; + } else { + IPN3KE_AFU_PMD_INFO("arg %s is optional for ipn3ke, using i40e acc", + IPN3KE_FPGA_ACCELERATION_LIST); + } + + if (rte_kvargs_count(kvlist, IPN3KE_I40E_PF_LIST) == 1) { + if (rte_kvargs_process(kvlist, IPN3KE_I40E_PF_LIST, + &rte_ifpga_get_string_arg, + &pf_name) < 0) { + IPN3KE_AFU_PMD_ERR("error to parse %s", + IPN3KE_I40E_PF_LIST); + goto end; + } + ret = ipn3ke_cfg_parse_i40e_pf_ethdev(afu_name, pf_name); + if (ret) + goto 
end; + } else { + IPN3KE_AFU_PMD_ERR("arg %s is mandatory for ipn3ke", + IPN3KE_I40E_PF_LIST); + goto end; + } + +end: + if (kvlist) + rte_kvargs_free(kvlist); + + return ret; +} + +static int +ipn3ke_cfg_remove(struct rte_vdev_device *vdev) +{ + IPN3KE_AFU_PMD_INFO("Remove ipn3ke_cfg %p", + vdev); + + return 0; +} + +static struct rte_vdev_driver ipn3ke_cfg_driver = { + .probe = ipn3ke_cfg_probe, + .remove = ipn3ke_cfg_remove, +}; + +RTE_PMD_REGISTER_VDEV(ipn3ke_cfg, ipn3ke_cfg_driver); +RTE_PMD_REGISTER_ALIAS(ipn3ke_cfg, ipn3ke_cfg); +RTE_PMD_REGISTER_PARAM_STRING(ipn3ke_cfg, + "afu= " + "fpga_acc=" + "i40e="); + diff --git a/drivers/net/ipn3ke/ipn3ke_ethdev.h b/drivers/net/ipn3ke/ipn3ke_ethdev.h new file mode 100644 index 0000000..d5cc6f4 --- /dev/null +++ b/drivers/net/ipn3ke/ipn3ke_ethdev.h @@ -0,0 +1,742 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2019 Intel Corporation + */ + +#ifndef _IPN3KE_ETHDEV_H_ +#define _IPN3KE_ETHDEV_H_ + +#include + +/** Set a bit in the uint32 variable */ +#define IPN3KE_BIT_SET(var, pos) \ + ((var) |= ((uint32_t)1 << ((pos)))) + +/** Reset the bit in the variable */ +#define IPN3KE_BIT_RESET(var, pos) \ + ((var) &= ~((uint32_t)1 << ((pos)))) + +/** Check the bit is set in the variable */ +#define IPN3KE_BIT_ISSET(var, pos) \ + (((var) & ((uint32_t)1 << ((pos)))) ? 
1 : 0) + +struct ipn3ke_hw; + +#define IPN3KE_HW_BASE 0x4000000 + +#define IPN3KE_CAPABILITY_REGISTERS_BLOCK_OFFSET \ + (IPN3KE_HW_BASE + hw_cap.capability_registers_block_offset) + +#define IPN3KE_STATUS_REGISTERS_BLOCK_OFFSET \ + (IPN3KE_HW_BASE + hw_cap.status_registers_block_offset) + +#define IPN3KE_CTRL_RESET \ + (IPN3KE_HW_BASE + hw_cap.control_registers_block_offset) + +#define IPN3KE_CTRL_MTU \ + (IPN3KE_HW_BASE + hw_cap.control_registers_block_offset + 4) + +#define IPN3KE_CLASSIFY_OFFSET \ + (IPN3KE_HW_BASE + hw_cap.classify_offset) + +#define IPN3KE_POLICER_OFFSET \ + (IPN3KE_HW_BASE + hw_cap.policer_offset) + +#define IPN3KE_RSS_KEY_ARRAY_OFFSET \ + (IPN3KE_HW_BASE + hw_cap.rss_key_array_offset) + +#define IPN3KE_RSS_INDIRECTION_TABLE_ARRAY_OFFSET \ + (IPN3KE_HW_BASE + hw_cap.rss_indirection_table_array_offset) + +#define IPN3KE_DMAC_MAP_OFFSET \ + (IPN3KE_HW_BASE + hw_cap.dmac_map_offset) + +#define IPN3KE_QM_OFFSET \ + (IPN3KE_HW_BASE + hw_cap.qm_offset) + +#define IPN3KE_CCB_OFFSET \ + (IPN3KE_HW_BASE + hw_cap.ccb_offset) + +#define IPN3KE_QOS_OFFSET \ + (IPN3KE_HW_BASE + hw_cap.qos_offset) + +struct ipn3ke_hw_cap { + uint32_t version_number; + uint32_t capability_registers_block_offset; + uint32_t status_registers_block_offset; + uint32_t control_registers_block_offset; + uint32_t classify_offset; + uint32_t classy_size; + uint32_t policer_offset; + uint32_t policer_entry_size; + uint32_t rss_key_array_offset; + uint32_t rss_key_entry_size; + uint32_t rss_indirection_table_array_offset; + uint32_t rss_indirection_table_entry_size; + uint32_t dmac_map_offset; + uint32_t dmac_map_size; + uint32_t qm_offset; + uint32_t qm_size; + uint32_t ccb_offset; + uint32_t ccb_entry_size; + uint32_t qos_offset; + uint32_t qos_size; + + uint32_t num_rx_flow; /* Default: 64K */ + uint32_t num_rss_blocks; /* Default: 512 */ + uint32_t num_dmac_map; /* Default: 1K */ + uint32_t num_tx_flow; /* Default: 64K */ + uint32_t num_smac_map; /* Default: 1K */ + + uint32_t 
link_speed_mbps; +}; + +/** + * Structure to store private data for each representor instance + */ +struct ipn3ke_rpst { + TAILQ_ENTRY(ipn3ke_rpst) next; /**< Next in device list. */ + uint16_t switch_domain_id; + /**< Switch ID */ + uint16_t port_id; + /**< Port ID */ + struct ipn3ke_hw *hw; + struct rte_eth_dev *i40e_pf_eth; + uint16_t i40e_pf_eth_port_id; + struct rte_eth_link ori_linfo; + struct ipn3ke_tm_internals tm; + /**< Private data store of associated physical function */ + struct ether_addr mac_addr; +}; + +/* UUID IDs */ +#define MAP_UUID_10G_LOW 0xffffffffffffffff +#define MAP_UUID_10G_HIGH 0xffffffffffffffff +#define IPN3KE_UUID_10G_LOW 0xc000c9660d824272 +#define IPN3KE_UUID_10G_HIGH 0x9aeffe5f84570612 +#define IPN3KE_UUID_25G_LOW 0xb7d9bac566bfbc80 +#define IPN3KE_UUID_25G_HIGH 0xb07bac1aeef54d67 + +#define IPN3KE_AFU_BUF_SIZE_MIN 1024 +#define IPN3KE_AFU_FRAME_SIZE_MAX 9728 + +#define IPN3KE_RAWDEV_ATTR_LEN_MAX (64) + +typedef int (*ipn3ke_indirect_mac_read_t)(struct ipn3ke_hw *hw, + uint32_t *rd_data, + uint32_t addr, + uint32_t mac_num, + uint32_t eth_wrapper_sel); + +typedef int (*ipn3ke_indirect_mac_write_t)(struct ipn3ke_hw *hw, + uint32_t wr_data, + uint32_t addr, + uint32_t mac_num, + uint32_t eth_wrapper_sel); + +struct ipn3ke_hw { + struct rte_eth_dev *eth_dev; + + /* afu info */ + struct rte_afu_id afu_id; + struct rte_rawdev *rawdev; + + struct ifpga_rawdevg_retimer_info retimer; + + uint16_t switch_domain_id; + uint16_t port_num; + + uint32_t tm_hw_enable; + uint32_t flow_hw_enable; + + uint32_t acc_tm; + uint32_t acc_flow; + + struct ipn3ke_flow_list flow_list; + uint32_t flow_max_entries; + uint32_t flow_num_entries; + + struct ipn3ke_tm_node *nodes; + struct ipn3ke_tm_node *port_nodes; + struct ipn3ke_tm_node *vt_nodes; + struct ipn3ke_tm_node *cos_nodes; + + struct ipn3ke_tm_tdrop_profile *tdrop_profile; + uint32_t tdrop_profile_num; + + uint32_t ccb_status; + uint32_t ccb_seg_free; + uint32_t ccb_seg_num; + uint32_t ccb_seg_k; + 
/**< MAC Register read */ + ipn3ke_indirect_mac_read_t f_mac_read; + /**< MAC Register write */ + ipn3ke_indirect_mac_write_t f_mac_write; + + uint8_t *hw_addr; +}; + +/** + * @internal + * Helper macro for drivers that need to convert to struct rte_afu_device. + */ +#define RTE_DEV_TO_AFU(ptr) \ + container_of(ptr, struct rte_afu_device, device) + +#define RTE_DEV_TO_AFU_CONST(ptr) \ + container_of(ptr, const struct rte_afu_device, device) + +#define RTE_ETH_DEV_TO_AFU(eth_dev) \ + RTE_DEV_TO_AFU((eth_dev)->device) + +/** + * PCIe MMIO Access + */ + +#define IPN3KE_PCI_REG(reg) rte_read32(reg) +#define IPN3KE_PCI_REG_ADDR(a, reg) \ + ((volatile uint32_t *)((char *)(a)->hw_addr + (reg))) +static inline uint32_t ipn3ke_read_addr(volatile void *addr) +{ + return rte_le_to_cpu_32(IPN3KE_PCI_REG(addr)); +} + +#define WCMD 0x8000000000000000 +#define RCMD 0x4000000000000000 +#define UPL_BASE 0x10000 +static inline uint32_t _ipn3ke_indrct_read(struct ipn3ke_hw *hw, + uint32_t addr) +{ + uint64_t word_offset = 0; + uint64_t read_data = 0; + uint64_t indirect_value = 0; + volatile void *indirect_addrs = 0; + + word_offset = (addr & 0x1FFFFFF) >> 2; + indirect_value = RCMD | word_offset << 32; + indirect_addrs = (volatile void *)(hw->hw_addr + + (uint32_t)(UPL_BASE | 0x10)); + + usleep(10); + + rte_write64((rte_cpu_to_le_64(indirect_value)), indirect_addrs); + + indirect_addrs = (volatile void *)(hw->hw_addr + + (uint32_t)(UPL_BASE | 0x18)); + while ((read_data >> 32) != 1) + read_data = rte_read64(indirect_addrs); + + return rte_le_to_cpu_32(read_data); +} + +static inline void _ipn3ke_indrct_write(struct ipn3ke_hw *hw, + uint32_t addr, uint32_t value) +{ + uint64_t word_offset = 0; + uint64_t indirect_value = 0; + volatile void *indirect_addrs = 0; + + word_offset = (addr & 0x1FFFFFF) >> 2; + indirect_value = WCMD | word_offset << 32 | value; + indirect_addrs = (volatile void *)(hw->hw_addr + + (uint32_t)(UPL_BASE | 0x10)); + + 
rte_write64((rte_cpu_to_le_64(indirect_value)), indirect_addrs); + usleep(10); +} + +#define IPN3KE_PCI_REG_WRITE(reg, value) \ + rte_write32((rte_cpu_to_le_32(value)), reg) + +#define IPN3KE_PCI_REG_WRITE_RELAXED(reg, value) \ + rte_write32_relaxed((rte_cpu_to_le_32(value)), reg) + +#define IPN3KE_READ_REG(hw, reg) \ + _ipn3ke_indrct_read((hw), (reg)) + +#define IPN3KE_WRITE_REG(hw, reg, value) \ + _ipn3ke_indrct_write((hw), (reg), (value)) + +#define IPN3KE_MASK_READ_REG(hw, reg, x, mask) \ + ((mask) & IPN3KE_READ_REG((hw), ((reg) + (0x4 * (x))))) + +#define IPN3KE_MASK_WRITE_REG(hw, reg, x, value, mask) \ + IPN3KE_WRITE_REG((hw), ((reg) + (0x4 * (x))), ((mask) & (value))) + +#define IPN3KE_DEV_PRIVATE_TO_HW(dev) \ + (((struct ipn3ke_rpst *)(dev)->data->dev_private)->hw) + +#define IPN3KE_DEV_PRIVATE_TO_RPST(dev) \ + ((struct ipn3ke_rpst *)(dev)->data->dev_private) + +#define IPN3KE_DEV_PRIVATE_TO_TM(dev) \ + (&(((struct ipn3ke_rpst *)(dev)->data->dev_private)->tm)) + +/* Byte address of IPN3KE internal module */ +#define IPN3KE_TM_VERSION (IPN3KE_QM_OFFSET + 0x0000) +#define IPN3KE_TM_SCRATCH (IPN3KE_QM_OFFSET + 0x0004) +#define IPN3KE_TM_STATUS (IPN3KE_QM_OFFSET + 0x0008) +#define IPN3KE_TM_MISC_STATUS (IPN3KE_QM_OFFSET + 0x0010) +#define IPN3KE_TM_MISC_WARNING_0 (IPN3KE_QM_OFFSET + 0x0040) +#define IPN3KE_TM_MISC_MON_0 (IPN3KE_QM_OFFSET + 0x0048) +#define IPN3KE_TM_MISC_FATAL_0 (IPN3KE_QM_OFFSET + 0x0050) +#define IPN3KE_TM_BW_MON_CTRL_1 (IPN3KE_QM_OFFSET + 0x0080) +#define IPN3KE_TM_BW_MON_CTRL_2 (IPN3KE_QM_OFFSET + 0x0084) +#define IPN3KE_TM_BW_MON_RATE (IPN3KE_QM_OFFSET + 0x0088) +#define IPN3KE_TM_STATS_CTRL (IPN3KE_QM_OFFSET + 0x0100) +#define IPN3KE_TM_STATS_DATA_0 (IPN3KE_QM_OFFSET + 0x0110) +#define IPN3KE_TM_STATS_DATA_1 (IPN3KE_QM_OFFSET + 0x0114) +#define IPN3KE_QM_UID_CONFIG_CTRL (IPN3KE_QM_OFFSET + 0x0200) +#define IPN3KE_QM_UID_CONFIG_DATA (IPN3KE_QM_OFFSET + 0x0204) + +#define IPN3KE_BM_VERSION (IPN3KE_QM_OFFSET + 0x4000) +#define 
IPN3KE_BM_STATUS (IPN3KE_QM_OFFSET + 0x4008) +#define IPN3KE_BM_STORE_CTRL (IPN3KE_QM_OFFSET + 0x4010) +#define IPN3KE_BM_STORE_STATUS (IPN3KE_QM_OFFSET + 0x4018) +#define IPN3KE_BM_STORE_MON (IPN3KE_QM_OFFSET + 0x4028) +#define IPN3KE_BM_WARNING_0 (IPN3KE_QM_OFFSET + 0x4040) +#define IPN3KE_BM_MON_0 (IPN3KE_QM_OFFSET + 0x4048) +#define IPN3KE_BM_FATAL_0 (IPN3KE_QM_OFFSET + 0x4050) +#define IPN3KE_BM_DRAM_ACCESS_CTRL (IPN3KE_QM_OFFSET + 0x4100) +#define IPN3KE_BM_DRAM_ACCESS_DATA_0 (IPN3KE_QM_OFFSET + 0x4120) +#define IPN3KE_BM_DRAM_ACCESS_DATA_1 (IPN3KE_QM_OFFSET + 0x4124) +#define IPN3KE_BM_DRAM_ACCESS_DATA_2 (IPN3KE_QM_OFFSET + 0x4128) +#define IPN3KE_BM_DRAM_ACCESS_DATA_3 (IPN3KE_QM_OFFSET + 0x412C) +#define IPN3KE_BM_DRAM_ACCESS_DATA_4 (IPN3KE_QM_OFFSET + 0x4130) +#define IPN3KE_BM_DRAM_ACCESS_DATA_5 (IPN3KE_QM_OFFSET + 0x4134) +#define IPN3KE_BM_DRAM_ACCESS_DATA_6 (IPN3KE_QM_OFFSET + 0x4138) + +#define IPN3KE_QM_VERSION (IPN3KE_QM_OFFSET + 0x8000) +#define IPN3KE_QM_STATUS (IPN3KE_QM_OFFSET + 0x8008) +#define IPN3KE_QM_LL_TABLE_MON (IPN3KE_QM_OFFSET + 0x8018) +#define IPN3KE_QM_WARNING_0 (IPN3KE_QM_OFFSET + 0x8040) +#define IPN3KE_QM_MON_0 (IPN3KE_QM_OFFSET + 0x8048) +#define IPN3KE_QM_FATAL_0 (IPN3KE_QM_OFFSET + 0x8050) +#define IPN3KE_QM_FATAL_1 (IPN3KE_QM_OFFSET + 0x8054) +#define IPN3KE_LL_TABLE_ACCESS_CTRL (IPN3KE_QM_OFFSET + 0x8100) +#define IPN3KE_LL_TABLE_ACCESS_DATA_0 (IPN3KE_QM_OFFSET + 0x8110) +#define IPN3KE_LL_TABLE_ACCESS_DATA_1 (IPN3KE_QM_OFFSET + 0x8114) + +#define IPN3KE_CCB_ERROR (IPN3KE_CCB_OFFSET + 0x0008) +#define IPN3KE_CCB_NSEGFREE (IPN3KE_CCB_OFFSET + 0x200000) +#define IPN3KE_CCB_NSEGFREE_MASK 0x3FFFFF +#define IPN3KE_CCB_PSEGMAX_COEF (IPN3KE_CCB_OFFSET + 0x200008) +#define IPN3KE_CCB_PSEGMAX_COEF_MASK 0xFFFFF +#define IPN3KE_CCB_NSEG_P (IPN3KE_CCB_OFFSET + 0x200080) +#define IPN3KE_CCB_NSEG_MASK 0x3FFFFF +#define IPN3KE_CCB_QPROFILE_Q (IPN3KE_CCB_OFFSET + 0x240000) +#define IPN3KE_CCB_QPROFILE_MASK 0x7FF +#define IPN3KE_CCB_PROFILE_P 
(IPN3KE_CCB_OFFSET + 0x280000) +#define IPN3KE_CCB_PROFILE_MASK 0x1FFFFFF +#define IPN3KE_CCB_PROFILE_MS (IPN3KE_CCB_OFFSET + 0xC) +#define IPN3KE_CCB_PROFILE_MS_MASK 0x1FFFFFF +#define IPN3KE_CCB_LR_LB_DBG_CTRL (IPN3KE_CCB_OFFSET + 0x2C0000) +#define IPN3KE_CCB_LR_LB_DBG_DONE (IPN3KE_CCB_OFFSET + 0x2C0004) +#define IPN3KE_CCB_LR_LB_DBG_RDATA (IPN3KE_CCB_OFFSET + 0x2C000C) + +#define IPN3KE_QOS_MAP_L1_X (IPN3KE_QOS_OFFSET + 0x000000) +#define IPN3KE_QOS_MAP_L1_MASK 0x1FFF +#define IPN3KE_QOS_MAP_L2_X (IPN3KE_QOS_OFFSET + 0x040000) +#define IPN3KE_QOS_MAP_L2_MASK 0x7 +#define IPN3KE_QOS_TYPE_MASK 0x3 +#define IPN3KE_QOS_TYPE_L1_X (IPN3KE_QOS_OFFSET + 0x200000) +#define IPN3KE_QOS_TYPE_L2_X (IPN3KE_QOS_OFFSET + 0x240000) +#define IPN3KE_QOS_TYPE_L3_X (IPN3KE_QOS_OFFSET + 0x280000) +#define IPN3KE_QOS_SCH_WT_MASK 0xFF +#define IPN3KE_QOS_SCH_WT_L1_X (IPN3KE_QOS_OFFSET + 0x400000) +#define IPN3KE_QOS_SCH_WT_L2_X (IPN3KE_QOS_OFFSET + 0x440000) +#define IPN3KE_QOS_SCH_WT_L3_X (IPN3KE_QOS_OFFSET + 0x480000) +#define IPN3KE_QOS_SHAP_WT_MASK 0x3FFF +#define IPN3KE_QOS_SHAP_WT_L1_X (IPN3KE_QOS_OFFSET + 0x600000) +#define IPN3KE_QOS_SHAP_WT_L2_X (IPN3KE_QOS_OFFSET + 0x640000) +#define IPN3KE_QOS_SHAP_WT_L3_X (IPN3KE_QOS_OFFSET + 0x680000) + +#define IPN3KE_CLF_BASE_DST_MAC_ADDR_HI (IPN3KE_CLASSIFY_OFFSET + 0x0000) +#define IPN3KE_CLF_BASE_DST_MAC_ADDR_LOW (IPN3KE_CLASSIFY_OFFSET + 0x0004) +#define IPN3KE_CLF_QINQ_STAG (IPN3KE_CLASSIFY_OFFSET + 0x0008) +#define IPN3KE_CLF_LKUP_ENABLE (IPN3KE_CLASSIFY_OFFSET + 0x000C) +#define IPN3KE_CLF_DFT_FLOW_ID (IPN3KE_CLASSIFY_OFFSET + 0x0040) +#define IPN3KE_CLF_RX_PARSE_CFG (IPN3KE_CLASSIFY_OFFSET + 0x0080) +#define IPN3KE_CLF_RX_STATS_CFG (IPN3KE_CLASSIFY_OFFSET + 0x00C0) +#define IPN3KE_CLF_RX_STATS_RPT (IPN3KE_CLASSIFY_OFFSET + 0x00C4) +#define IPN3KE_CLF_RX_TEST (IPN3KE_CLASSIFY_OFFSET + 0x0400) + +#define IPN3KE_CLF_EM_VERSION (IPN3KE_CLASSIFY_OFFSET + 0x40000 + 0x0000) +#define IPN3KE_CLF_EM_NUM (IPN3KE_CLASSIFY_OFFSET + 0x40000 + 
0x0008) +#define IPN3KE_CLF_EM_KEY_WDTH (IPN3KE_CLASSIFY_OFFSET + 0x40000 + 0x000C) +#define IPN3KE_CLF_EM_RES_WDTH (IPN3KE_CLASSIFY_OFFSET + 0x40000 + 0x0010) +#define IPN3KE_CLF_EM_ALARMS (IPN3KE_CLASSIFY_OFFSET + 0x40000 + 0x0014) +#define IPN3KE_CLF_EM_DRC_RLAT (IPN3KE_CLASSIFY_OFFSET + 0x40000 + 0x0018) + +#define IPN3KE_CLF_MHL_VERSION (IPN3KE_CLASSIFY_OFFSET + 0x50000 + 0x0000) +#define IPN3KE_CLF_MHL_GEN_CTRL (IPN3KE_CLASSIFY_OFFSET + 0x50000 + 0x0018) +#define IPN3KE_CLF_MHL_MGMT_CTRL (IPN3KE_CLASSIFY_OFFSET + 0x50000 + 0x0020) +#define IPN3KE_CLF_MHL_MGMT_CTRL_BIT_BUSY 31 +#define IPN3KE_CLF_MHL_MGMT_CTRL_FLUSH 0x0 +#define IPN3KE_CLF_MHL_MGMT_CTRL_INSERT 0x1 +#define IPN3KE_CLF_MHL_MGMT_CTRL_DELETE 0x2 +#define IPN3KE_CLF_MHL_MGMT_CTRL_SEARCH 0x3 +#define IPN3KE_CLF_MHL_FATAL_0 (IPN3KE_CLASSIFY_OFFSET + 0x50000 + 0x0050) +#define IPN3KE_CLF_MHL_MON_0 (IPN3KE_CLASSIFY_OFFSET + 0x50000 + 0x0060) +#define IPN3KE_CLF_MHL_TOTAL_ENTRIES (IPN3KE_CLASSIFY_OFFSET + \ + 0x50000 + 0x0080) +#define IPN3KE_CLF_MHL_ONEHIT_BUCKETS (IPN3KE_CLASSIFY_OFFSET + \ + 0x50000 + 0x0084) +#define IPN3KE_CLF_MHL_KEY_MASK 0xFFFFFFFF +#define IPN3KE_CLF_MHL_KEY_0 (IPN3KE_CLASSIFY_OFFSET + 0x50000 + 0x1000) +#define IPN3KE_CLF_MHL_KEY_1 (IPN3KE_CLASSIFY_OFFSET + 0x50000 + 0x1004) +#define IPN3KE_CLF_MHL_KEY_2 (IPN3KE_CLASSIFY_OFFSET + 0x50000 + 0x1008) +#define IPN3KE_CLF_MHL_KEY_3 (IPN3KE_CLASSIFY_OFFSET + 0x50000 + 0x100C) +#define IPN3KE_CLF_MHL_RES_MASK 0xFFFFFFFF +#define IPN3KE_CLF_MHL_RES (IPN3KE_CLASSIFY_OFFSET + 0x50000 + 0x2000) + +#define IPN3KE_ASSERT(x) do {\ + if (!(x)) \ + rte_panic("IPN3KE: %s\n", #x); \ +} while (0) + +extern struct ipn3ke_hw_cap hw_cap; + +int +ipn3ke_rpst_dev_set_link_up(struct rte_eth_dev *dev); +int +ipn3ke_rpst_dev_set_link_down(struct rte_eth_dev *dev); +int +ipn3ke_rpst_link_update(struct rte_eth_dev *ethdev, + int wait_to_complete); +void +ipn3ke_rpst_promiscuous_enable(struct rte_eth_dev *ethdev); +void +ipn3ke_rpst_promiscuous_disable(struct 
rte_eth_dev *ethdev); +void +ipn3ke_rpst_allmulticast_enable(struct rte_eth_dev *ethdev); +void +ipn3ke_rpst_allmulticast_disable(struct rte_eth_dev *ethdev); +int +ipn3ke_rpst_mac_addr_set(struct rte_eth_dev *ethdev, + struct ether_addr *mac_addr); +int +ipn3ke_rpst_mtu_set(struct rte_eth_dev *ethdev, uint16_t mtu); + +int +ipn3ke_rpst_init(struct rte_eth_dev *ethdev, void *init_params); +int +ipn3ke_rpst_uninit(struct rte_eth_dev *ethdev); + + +/* IPN3KE_MASK is a macro used on 32-bit registers */ +#define IPN3KE_MASK(mask, shift) ((mask) << (shift)) + +#define IPN3KE_MAC_CTRL_BASE_0 0x00000000 +#define IPN3KE_MAC_CTRL_BASE_1 0x00008000 + +#define IPN3KE_MAC_STATS_MASK 0xFFFFFFFFF + +/* All the addresses are in units of 4 bytes */ +#define IPN3KE_MAC_PRIMARY_MAC_ADDR0 0x0010 +#define IPN3KE_MAC_PRIMARY_MAC_ADDR1 0x0011 + +#define IPN3KE_MAC_MAC_RESET_CONTROL 0x001F +#define IPN3KE_MAC_MAC_RESET_CONTROL_TX_SHIFT 0 +#define IPN3KE_MAC_MAC_RESET_CONTROL_TX_MASK \ + IPN3KE_MASK(0x1, IPN3KE_MAC_MAC_RESET_CONTROL_TX_SHIFT) + +#define IPN3KE_MAC_MAC_RESET_CONTROL_RX_SHIFT 8 +#define IPN3KE_MAC_MAC_RESET_CONTROL_RX_MASK \ + IPN3KE_MASK(0x1, IPN3KE_MAC_MAC_RESET_CONTROL_RX_SHIFT) + +#define IPN3KE_MAC_TX_PACKET_CONTROL 0x0020 +#define IPN3KE_MAC_TX_PACKET_CONTROL_SHIFT 0 +#define IPN3KE_MAC_TX_PACKET_CONTROL_MASK \ + IPN3KE_MASK(0x1, IPN3KE_MAC_TX_PACKET_CONTROL_SHIFT) + +#define IPN3KE_MAC_TX_SRC_ADDR_OVERRIDE 0x002A +#define IPN3KE_MAC_TX_SRC_ADDR_OVERRIDE_SHIFT 0 +#define IPN3KE_MAC_TX_SRC_ADDR_OVERRIDE_MASK \ + IPN3KE_MASK(0x1, IPN3KE_MAC_TX_SRC_ADDR_OVERRIDE_SHIFT) + +#define IPN3KE_MAC_TX_FRAME_MAXLENGTH 0x002C +#define IPN3KE_MAC_TX_FRAME_MAXLENGTH_SHIFT 0 +#define IPN3KE_MAC_TX_FRAME_MAXLENGTH_MASK \ + IPN3KE_MASK(0xFFFF, IPN3KE_MAC_TX_FRAME_MAXLENGTH_SHIFT) + +#define IPN3KE_MAC_TX_PAUSEFRAME_CONTROL 0x0040 +#define IPN3KE_MAC_TX_PAUSEFRAME_CONTROL_SHIFT 0 +#define IPN3KE_MAC_TX_PAUSEFRAME_CONTROL_MASK \ + IPN3KE_MASK(0x3, IPN3KE_MAC_TX_PAUSEFRAME_CONTROL_SHIFT) + +#define 
IPN3KE_MAC_TX_PAUSEFRAME_QUANTA 0x0042 +#define IPN3KE_MAC_TX_PAUSEFRAME_QUANTA_SHIFT 0 +#define IPN3KE_MAC_TX_PAUSEFRAME_QUANTA_MASK \ + IPN3KE_MASK(0xFFFF, IPN3KE_MAC_TX_PAUSEFRAME_QUANTA_SHIFT) + +#define IPN3KE_MAC_TX_PAUSEFRAME_HOLDOFF_QUANTA 0x0043 +#define IPN3KE_MAC_TX_PAUSEFRAME_HOLDOFF_QUANTA_SHIFT 0 +#define IPN3KE_MAC_TX_PAUSEFRAME_HOLDOFF_QUANTA_MASK \ + IPN3KE_MASK(0xFFFF, IPN3KE_MAC_TX_PAUSEFRAME_HOLDOFF_QUANTA_SHIFT) + +#define IPN3KE_MAC_TX_PAUSEFRAME_ENABLE 0x0044 +#define IPN3KE_MAC_TX_PAUSEFRAME_ENABLE_CFG_SHIFT 0 +#define IPN3KE_MAC_TX_PAUSEFRAME_ENABLE_CFG_MASK \ + IPN3KE_MASK(0x1, IPN3KE_MAC_TX_PAUSEFRAME_ENABLE_CFG_SHIFT) + +#define IPN3KE_MAC_TX_PAUSEFRAME_ENABLE_TYPE_SHIFT 1 +#define IPN3KE_MAC_TX_PAUSEFRAME_ENABLE_TYPE_MASK \ + IPN3KE_MASK(0x3, IPN3KE_MAC_TX_PAUSEFRAME_ENABLE_TYPE_SHIFT) + +#define IPN3KE_MAC_RX_TRANSFER_CONTROL 0x00A0 +#define IPN3KE_MAC_RX_TRANSFER_CONTROL_SHIFT 0x0 +#define IPN3KE_MAC_RX_TRANSFER_CONTROL_MASK \ + IPN3KE_MASK(0x1, IPN3KE_MAC_RX_TRANSFER_CONTROL_SHIFT) + +#define IPN3KE_MAC_RX_FRAME_CONTROL 0x00AC +#define IPN3KE_MAC_RX_FRAME_CONTROL_EN_ALLUCAST_SHIFT 0x0 +#define IPN3KE_MAC_RX_FRAME_CONTROL_EN_ALLUCAST_MASK \ + IPN3KE_MASK(0x1, IPN3KE_MAC_RX_FRAME_CONTROL_EN_ALLUCAST_SHIFT) + +#define IPN3KE_MAC_RX_FRAME_CONTROL_EN_ALLMCAST_SHIFT 0x1 +#define IPN3KE_MAC_RX_FRAME_CONTROL_EN_ALLMCAST_MASK \ + IPN3KE_MASK(0x1, IPN3KE_MAC_RX_FRAME_CONTROL_EN_ALLMCAST_SHIFT) + +#define IPN3KE_MAC_FRAME_SIZE_MAX 9728 +#define IPN3KE_MAC_RX_FRAME_MAXLENGTH 0x00AE +#define IPN3KE_MAC_RX_FRAME_MAXLENGTH_SHIFT 0 +#define IPN3KE_MAC_RX_FRAME_MAXLENGTH_MASK \ + IPN3KE_MASK(0xFFFF, IPN3KE_MAC_RX_FRAME_MAXLENGTH_SHIFT) + +#define IPN3KE_MAC_TX_STATS_CLR 0x0140 +#define IPN3KE_MAC_TX_STATS_CLR_CLEAR_SHIFT 0 +#define IPN3KE_MAC_TX_STATS_CLR_CLEAR_MASK \ + IPN3KE_MASK(0x1, IPN3KE_MAC_TX_STATS_CLR_CLEAR_SHIFT) + +#define IPN3KE_MAC_RX_STATS_CLR 0x01C0 +#define IPN3KE_MAC_RX_STATS_CLR_CLEAR_SHIFT 0 +#define 
IPN3KE_MAC_RX_STATS_CLR_CLEAR_MASK \ + IPN3KE_MASK(0x1, IPN3KE_MAC_RX_STATS_CLR_CLEAR_SHIFT) + +/*tx_stats_framesOK*/ +#define IPN3KE_MAC_TX_STATS_FRAMESOK_HI 0x0142 +#define IPN3KE_MAC_TX_STATS_FRAMESOK_LOW 0x0143 + +/*rx_stats_framesOK*/ +#define IPN3KE_MAC_RX_STATS_FRAMESOK_HI 0x01C2 +#define IPN3KE_MAC_RX_STATS_FRAMESOK_LOW 0x01C3 + +/*tx_stats_framesErr*/ +#define IPN3KE_MAC_TX_STATS_FRAMESERR_HI 0x0144 +#define IPN3KE_MAC_TX_STATS_FRAMESERR_LOW 0x0145 + +/*rx_stats_framesErr*/ +#define IPN3KE_MAC_RX_STATS_FRAMESERR_HI 0x01C4 +#define IPN3KE_MAC_RX_STATS_FRAMESERR_LOW 0x01C5 + +/*rx_stats_framesCRCErr*/ +#define IPN3KE_MAC_RX_STATS_FRAMESCRCERR_HI 0x01C6 +#define IPN3KE_MAC_RX_STATS_FRAMESCRCERR_LOW 0x01C7 + +/*tx_stats_octetsOK 64b*/ +#define IPN3KE_MAC_TX_STATS_OCTETSOK_HI 0x0148 +#define IPN3KE_MAC_TX_STATS_OCTETSOK_LOW 0x0149 + +/*rx_stats_octetsOK 64b*/ +#define IPN3KE_MAC_RX_STATS_OCTETSOK_HI 0x01C8 +#define IPN3KE_MAC_RX_STATS_OCTETSOK_LOW 0x01C9 + +/*tx_stats_pauseMACCtrl_Frames*/ +#define IPN3KE_MAC_TX_STATS_PAUSEMACCTRL_FRAMES_HI 0x014A +#define IPN3KE_MAC_TX_STATS_PAUSEMACCTRL_FRAMES_LOW 0x014B + +/*rx_stats_pauseMACCtrl_Frames*/ +#define IPN3KE_MAC_RX_STATS_PAUSEMACCTRL_FRAMES_HI 0x01CA +#define IPN3KE_MAC_RX_STATS_PAUSEMACCTRL_FRAMES_LOW 0x01CB + +/*tx_stats_ifErrors*/ +#define IPN3KE_MAC_TX_STATS_IFERRORS_HI 0x014C +#define IPN3KE_MAC_TX_STATS_IFERRORS_LOW 0x014D + +/*rx_stats_ifErrors*/ +#define IPN3KE_MAC_RX_STATS_IFERRORS_HI 0x01CC +#define IPN3KE_MAC_RX_STATS_IFERRORS_LOW 0x01CD + +/*tx_stats_unicast_FramesOK*/ +#define IPN3KE_MAC_TX_STATS_UNICAST_FRAMESOK_HI 0x014E +#define IPN3KE_MAC_TX_STATS_UNICAST_FRAMESOK_LOW 0x014F + +/*rx_stats_unicast_FramesOK*/ +#define IPN3KE_MAC_RX_STATS_UNICAST_FRAMESOK_HI 0x01CE +#define IPN3KE_MAC_RX_STATS_UNICAST_FRAMESOK_LOW 0x01CF + +/*tx_stats_unicast_FramesErr*/ +#define IPN3KE_MAC_TX_STATS_UNICAST_FRAMESERR_HI 0x0150 +#define IPN3KE_MAC_TX_STATS_UNICAST_FRAMESERR_LOW 0x0151 + 
+/*rx_stats_unicast_FramesErr*/ +#define IPN3KE_MAC_RX_STATS_UNICAST_FRAMESERR_HI 0x01D0 +#define IPN3KE_MAC_RX_STATS_UNICAST_FRAMESERR_LOW 0x01D1 + +/*tx_stats_multicast_FramesOK*/ +#define IPN3KE_MAC_TX_STATS_MULTICAST_FRAMESOK_HI 0x0152 +#define IPN3KE_MAC_TX_STATS_MULTICAST_FRAMESOK_LOW 0x0153 + +/*rx_stats_multicast_FramesOK*/ +#define IPN3KE_MAC_RX_STATS_MULTICAST_FRAMESOK_HI 0x01D2 +#define IPN3KE_MAC_RX_STATS_MULTICAST_FRAMESOK_LOW 0x01D3 + +/*tx_stats_multicast_FramesErr*/ +#define IPN3KE_MAC_TX_STATS_MULTICAST_FRAMESERR_HI 0x0154 +#define IPN3KE_MAC_TX_STATS_MULTICAST_FRAMESERR_LOW 0x0155 + +/*rx_stats_multicast_FramesErr*/ +#define IPN3KE_MAC_RX_STATS_MULTICAST_FRAMESERR_HI 0x01D4 +#define IPN3KE_MAC_RX_STATS_MULTICAST_FRAMESERR_LOW 0x01D5 + +/*tx_stats_broadcast_FramesOK*/ +#define IPN3KE_MAC_TX_STATS_BROADCAST_FRAMESOK_HI 0x0156 +#define IPN3KE_MAC_TX_STATS_BROADCAST_FRAMESOK_LOW 0x0157 + +/*rx_stats_broadcast_FramesOK*/ +#define IPN3KE_MAC_RX_STATS_BROADCAST_FRAMESOK_HI 0x01D6 +#define IPN3KE_MAC_RX_STATS_BROADCAST_FRAMESOK_LOW 0x01D7 + +/*tx_stats_broadcast_FramesErr*/ +#define IPN3KE_MAC_TX_STATS_BROADCAST_FRAMESERR_HI 0x0158 +#define IPN3KE_MAC_TX_STATS_BROADCAST_FRAMESERR_LOW 0x0159 + +/*rx_stats_broadcast_FramesErr*/ +#define IPN3KE_MAC_RX_STATS_BROADCAST_FRAMESERR_HI 0x01D8 +#define IPN3KE_MAC_RX_STATS_BROADCAST_FRAMESERR_LOW 0x01D9 + +/*tx_stats_etherStatsOctets 64b*/ +#define IPN3KE_MAC_TX_STATS_ETHERSTATSOCTETS_HI 0x015A +#define IPN3KE_MAC_TX_STATS_ETHERSTATSOCTETS_LOW 0x015B + +/*rx_stats_etherStatsOctets 64b*/ +#define IPN3KE_MAC_RX_STATS_ETHERSTATSOCTETS_HI 0x01DA +#define IPN3KE_MAC_RX_STATS_ETHERSTATSOCTETS_LOW 0x01DB + +/*tx_stats_etherStatsPkts*/ +#define IPN3KE_MAC_TX_STATS_ETHERSTATSPKTS_HI 0x015C +#define IPN3KE_MAC_TX_STATS_ETHERSTATSPKTS_LOW 0x015D + +/*rx_stats_etherStatsPkts*/ +#define IPN3KE_MAC_RX_STATS_ETHERSTATSPKTS_HI 0x01DC +#define IPN3KE_MAC_RX_STATS_ETHERSTATSPKTS_LOW 0x01DD + +/*tx_stats_etherStatsUndersizePkts*/ 
+#define IPN3KE_MAC_TX_STATS_ETHERSTATSUNDERSIZEPKTS_HI 0x015E +#define IPN3KE_MAC_TX_STATS_ETHERSTATSUNDERSIZEPKTS_LOW 0x015F + +/*rx_stats_etherStatsUndersizePkts*/ +#define IPN3KE_MAC_RX_STATS_ETHERSTATSUNDERSIZEPKTS_HI 0x01DE +#define IPN3KE_MAC_RX_STATS_ETHERSTATSUNDERSIZEPKTS_LOW 0x01DF + +/*tx_stats_etherStatsOversizePkts*/ +#define IPN3KE_MAC_TX_STATS_ETHERSTATSOVERSIZEPKTS_HI 0x0160 +#define IPN3KE_MAC_TX_STATS_ETHERSTATSOVERSIZEPKTS_LOW 0x0161 + +/*rx_stats_etherStatsOversizePkts*/ +#define IPN3KE_MAC_RX_STATS_ETHERSTATSOVERSIZEPKTS_HI 0x01E0 +#define IPN3KE_MAC_RX_STATS_ETHERSTATSOVERSIZEPKTS_LOW 0x01E1 + +/*tx_stats_etherStatsPkts64Octets*/ +#define IPN3KE_MAC_TX_STATS_ETHERSTATSPKTS64OCTETS_HI 0x0162 +#define IPN3KE_MAC_TX_STATS_ETHERSTATSPKTS64OCTETS_LOW 0x0163 + +/*rx_stats_etherStatsPkts64Octets*/ +#define IPN3KE_MAC_RX_STATS_ETHERSTATSPKTS64OCTETS_HI 0x01E2 +#define IPN3KE_MAC_RX_STATS_ETHERSTATSPKTS64OCTETS_LOW 0x01E3 + +/*tx_stats_etherStatsPkts65to127Octets*/ +#define IPN3KE_MAC_TX_STATS_ETHERSTATSPKTS65TO127OCTETS_HI 0x0164 +#define IPN3KE_MAC_TX_STATS_ETHERSTATSPKTS65TO127OCTETS_LOW 0x0165 + +/*rx_stats_etherStatsPkts65to127Octets*/ +#define IPN3KE_MAC_RX_STATS_ETHERSTATSPKTS65TO127OCTETS_HI 0x01E4 +#define IPN3KE_MAC_RX_STATS_ETHERSTATSPKTS65TO127OCTETS_LOW 0x01E5 + +/*tx_stats_etherStatsPkts128to255Octets*/ +#define IPN3KE_MAC_TX_STATS_ETHERSTATSPKTS128TO255OCTETS_HI 0x0166 +#define IPN3KE_MAC_TX_STATS_ETHERSTATSPKTS128TO255OCTETS_LOW 0x0167 + +/*rx_stats_etherStatsPkts128to255Octets*/ +#define IPN3KE_MAC_RX_STATS_ETHERSTATSPKTS128TO255OCTETS_HI 0x01E6 +#define IPN3KE_MAC_RX_STATS_ETHERSTATSPKTS128TO255OCTETS_LOW 0x01E7 + +/*tx_stats_etherStatsPkts256to511Octet*/ +#define IPN3KE_MAC_TX_STATS_ETHERSTATSPKTS256TO511OCTET_HI 0x0168 +#define IPN3KE_MAC_TX_STATS_ETHERSTATSPKTS256TO511OCTET_LOW 0x0169 + +/*rx_stats_etherStatsPkts256to511Octets*/ +#define IPN3KE_MAC_RX_STATS_ETHERSTATSPKTS256TO511OCTETS_HI 0x01E8 +#define 
IPN3KE_MAC_RX_STATS_ETHERSTATSPKTS256TO511OCTETS_LOW 0x01E9 + +/*tx_stats_etherStatsPkts512to1023Octets*/ +#define IPN3KE_MAC_TX_STATS_ETHERSTATSPKTS512TO1023OCTETS_HI 0x016A +#define IPN3KE_MAC_TX_STATS_ETHERSTATSPKTS512TO1023OCTETS_LOW 0x016B + +/*rx_stats_etherStatsPkts512to1023Octets*/ +#define IPN3KE_MAC_RX_STATS_ETHERSTATSPKTS512TO1023OCTETS_HI 0x01EA +#define IPN3KE_MAC_RX_STATS_ETHERSTATSPKTS512TO1023OCTETS_LOW 0x01EB + +/*tx_stats_etherStatPkts1024to1518Octets*/ +#define IPN3KE_MAC_TX_STATS_ETHERSTATPKTS1024TO1518OCTETS_HI 0x016C +#define IPN3KE_MAC_TX_STATS_ETHERSTATPKTS1024TO1518OCTETS_LOW 0x016D + +/*rx_stats_etherStatPkts1024to1518Octets*/ +#define IPN3KE_MAC_RX_STATS_ETHERSTATPKTS1024TO1518OCTETS_HI 0x01EC +#define IPN3KE_MAC_RX_STATS_ETHERSTATPKTS1024TO1518OCTETS_LOW 0x01ED + +/*tx_stats_etherStatsPkts1519toXOctets*/ +#define IPN3KE_MAC_TX_STATS_ETHERSTATSPKTS1519TOXOCTETS_HI 0x016E +#define IPN3KE_MAC_TX_STATS_ETHERSTATSPKTS1519TOXOCTETS_LOW 0x016F + +/*rx_stats_etherStatsPkts1519toXOctets*/ +#define IPN3KE_MAC_RX_STATS_ETHERSTATSPKTS1519TOXOCTETS_HI 0x01EE +#define IPN3KE_MAC_RX_STATS_ETHERSTATSPKTS1519TOXOCTETS_LOW 0x01EF + +/*rx_stats_etherStatsFragments*/ +#define IPN3KE_MAC_RX_STATS_ETHERSTATSFRAGMENTS_HI 0x01F0 +#define IPN3KE_MAC_RX_STATS_ETHERSTATSFRAGMENTS_LOW 0x01F1 + +/*rx_stats_etherStatsJabbers*/ +#define IPN3KE_MAC_RX_STATS_ETHERSTATSJABBERS_HI 0x01F2 +#define IPN3KE_MAC_RX_STATS_ETHERSTATSJABBERS_LOW 0x01F3 + +/*rx_stats_etherStatsCRCErr*/ +#define IPN3KE_MAC_RX_STATS_ETHERSTATSCRCERR_HI 0x01F4 +#define IPN3KE_MAC_RX_STATS_ETHERSTATSCRCERR_LOW 0x01F5 + +/*tx_stats_unicastMACCtrlFrames*/ +#define IPN3KE_MAC_TX_STATS_UNICASTMACCTRLFRAMES_HI 0x0176 +#define IPN3KE_MAC_TX_STATS_UNICASTMACCTRLFRAMES_LOW 0x0177 + +/*rx_stats_unicastMACCtrlFrames*/ +#define IPN3KE_MAC_RX_STATS_UNICASTMACCTRLFRAMES_HI 0x01F6 +#define IPN3KE_MAC_RX_STATS_UNICASTMACCTRLFRAMES_LOW 0x01F7 + +/*tx_stats_multicastMACCtrlFrames*/ +#define 
IPN3KE_MAC_TX_STATS_MULTICASTMACCTRLFRAMES_HI 0x0178 +#define IPN3KE_MAC_TX_STATS_MULTICASTMACCTRLFRAMES_LOW 0x0179 + +/*rx_stats_multicastMACCtrlFrames*/ +#define IPN3KE_MAC_RX_STATS_MULTICASTMACCTRLFRAMES_HI 0x01F8 +#define IPN3KE_MAC_RX_STATS_MULTICASTMACCTRLFRAMES_LOW 0x01F9 + +/*tx_stats_broadcastMACCtrlFrames*/ +#define IPN3KE_MAC_TX_STATS_BROADCASTMACCTRLFRAMES_HI 0x017A +#define IPN3KE_MAC_TX_STATS_BROADCASTMACCTRLFRAMES_LOW 0x017B + +/*rx_stats_broadcastMACCtrlFrames*/ +#define IPN3KE_MAC_RX_STATS_BROADCASTMACCTRLFRAMES_HI 0x01FA +#define IPN3KE_MAC_RX_STATS_BROADCASTMACCTRLFRAMES_LOW 0x01FB + +/*tx_stats_PFCMACCtrlFrames*/ +#define IPN3KE_MAC_TX_STATS_PFCMACCTRLFRAMES_HI 0x017C +#define IPN3KE_MAC_TX_STATS_PFCMACCTRLFRAMES_LOW 0x017D + +/*rx_stats_PFCMACCtrlFrames*/ +#define IPN3KE_MAC_RX_STATS_PFCMACCTRLFRAMES_HI 0x01FC +#define IPN3KE_MAC_RX_STATS_PFCMACCTRLFRAMES_LOW 0x01FD + + +#endif /* _IPN3KE_ETHDEV_H_ */ diff --git a/drivers/net/ipn3ke/ipn3ke_flow.c b/drivers/net/ipn3ke/ipn3ke_flow.c new file mode 100644 index 0000000..ad5cd14 --- /dev/null +++ b/drivers/net/ipn3ke/ipn3ke_flow.c @@ -0,0 +1,1407 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2019 Intel Corporation + */ + +#include +#include +#include +#include +#include +#include +#include + +#include +#include +#include +#include +#include +#include +#include +#include + +#include "ifpga_rawdev_api.h" +#include "ipn3ke_tm.h" +#include "ipn3ke_flow.h" +#include "ipn3ke_logs.h" +#include "ipn3ke_ethdev.h" + +#define DEBUG_IPN3KE_FLOW + +/** Static initializer for items. */ +#define FLOW_PATTERNS(...) 
\ + ((const enum rte_flow_item_type []) { \ + __VA_ARGS__, RTE_FLOW_ITEM_TYPE_END, \ + }) + +enum IPN3KE_HASH_KEY_TYPE { + IPN3KE_HASH_KEY_VXLAN, + IPN3KE_HASH_KEY_MAC, + IPN3KE_HASH_KEY_QINQ, + IPN3KE_HASH_KEY_MPLS, + IPN3KE_HASH_KEY_IP_TCP, + IPN3KE_HASH_KEY_IP_UDP, + IPN3KE_HASH_KEY_IP_NVGRE, + IPN3KE_HASH_KEY_VXLAN_IP_UDP, +}; + +struct ipn3ke_flow_parse { + uint32_t mark:1; /**< Set if the flow is marked. */ + uint32_t drop:1; /**< ACL drop. */ + uint32_t key_type:IPN3KE_FLOW_KEY_ID_BITS; + uint32_t mark_id:IPN3KE_FLOW_RESULT_UID_BITS; /**< Mark identifier. */ + uint8_t key_len; /**< Length in bit. */ + uint8_t key[BITS_TO_BYTES(IPN3KE_FLOW_KEY_DATA_BITS)]; + /**< key1, key2 */ +}; + +typedef int (*pattern_filter_t)(const struct rte_flow_item patterns[], + struct rte_flow_error *error, + struct ipn3ke_flow_parse *parser); + + +struct ipn3ke_flow_pattern { + const enum rte_flow_item_type *const items; + + pattern_filter_t filter; +}; + +/* + * @ RTL definition: + * typedef struct packed { + * logic [47:0] vxlan_inner_mac; + * logic [23:0] vxlan_vni; + * } Hash_Key_Vxlan_t; + * + * @ flow items: + * RTE_FLOW_ITEM_TYPE_VXLAN + * RTE_FLOW_ITEM_TYPE_ETH + */ +static int +ipn3ke_pattern_vxlan(const struct rte_flow_item patterns[], + struct rte_flow_error *error, + struct ipn3ke_flow_parse *parser) +{ + const struct rte_flow_item_vxlan *vxlan = NULL; + const struct rte_flow_item_eth *eth = NULL; + const struct rte_flow_item *item; + + for (item = patterns; item->type != RTE_FLOW_ITEM_TYPE_END; item++) { + if (/*!item->spec || item->mask || */item->last) { + rte_flow_error_set(error, + EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, + "Only support item with 'spec'"); + return -rte_errno; + } + + switch (item->type) { + case RTE_FLOW_ITEM_TYPE_ETH: + eth = item->spec; + + rte_memcpy(&parser->key[0], + eth->src.addr_bytes, + ETHER_ADDR_LEN); + break; + + case RTE_FLOW_ITEM_TYPE_VXLAN: + vxlan = item->spec; + + rte_memcpy(&parser->key[6], vxlan->vni, 3); + break; + + 
default: + rte_flow_error_set(error, + EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, + "Not support item type"); + return -rte_errno; + } + } + + if (vxlan != NULL && eth != NULL) { + parser->key_len = 48 + 24; + return 0; + } + + rte_flow_error_set(error, + EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + patterns, + "Missed some patterns"); + return -rte_errno; +} + +/* + * @ RTL definition: + * typedef struct packed { + * logic [47:0] eth_smac; + * } Hash_Key_Mac_t; + * + * @ flow items: + * RTE_FLOW_ITEM_TYPE_ETH + */ +static int +ipn3ke_pattern_mac(const struct rte_flow_item patterns[], + struct rte_flow_error *error, + struct ipn3ke_flow_parse *parser) +{ + const struct rte_flow_item_eth *eth = NULL; + const struct rte_flow_item *item; + + for (item = patterns; item->type != RTE_FLOW_ITEM_TYPE_END; item++) { + if (!item->spec || item->mask || item->last) { + rte_flow_error_set(error, + EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, + "Only support item with 'spec'"); + return -rte_errno; + } + + switch (item->type) { + case RTE_FLOW_ITEM_TYPE_ETH: + eth = item->spec; + + rte_memcpy(parser->key, + eth->src.addr_bytes, + ETHER_ADDR_LEN); + break; + + default: + rte_flow_error_set(error, + EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, + "Not support item type"); + return -rte_errno; + } + } + + if (eth != NULL) { + parser->key_len = 48; + return 0; + } + + rte_flow_error_set(error, + EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + patterns, + "Missed some patterns"); + return -rte_errno; +} + +/* + * @ RTL definition: + * typedef struct packed { + * logic [11:0] outer_vlan_id; + * logic [11:0] inner_vlan_id; + * } Hash_Key_QinQ_t; + * + * @ flow items: + * RTE_FLOW_ITEM_TYPE_VLAN + * RTE_FLOW_ITEM_TYPE_VLAN + */ +static int +ipn3ke_pattern_qinq(const struct rte_flow_item patterns[], + struct rte_flow_error *error, + struct ipn3ke_flow_parse *parser) +{ + const struct rte_flow_item_vlan *outer_vlan = NULL; + const struct rte_flow_item_vlan *inner_vlan = NULL; + const struct rte_flow_item 
*item; + uint16_t tci; + + for (item = patterns; item->type != RTE_FLOW_ITEM_TYPE_END; item++) { + if (!item->spec || item->mask || item->last) { + rte_flow_error_set(error, + EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, + "Only support item with 'spec'"); + return -rte_errno; + } + + switch (item->type) { + case RTE_FLOW_ITEM_TYPE_VLAN: + if (!outer_vlan) { + outer_vlan = item->spec; + + tci = rte_be_to_cpu_16(outer_vlan->tci); + parser->key[0] = (tci & 0xff0) >> 4; + parser->key[1] |= (tci & 0x00f) << 4; + } else { + inner_vlan = item->spec; + + tci = rte_be_to_cpu_16(inner_vlan->tci); + parser->key[1] |= (tci & 0xf00) >> 8; + parser->key[2] = (tci & 0x0ff); + } + break; + + default: + rte_flow_error_set(error, + EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, + "Not support item type"); + return -rte_errno; + } + } + + if (outer_vlan != NULL && inner_vlan != NULL) { + parser->key_len = 12 + 12; + return 0; + } + + rte_flow_error_set(error, + EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + patterns, + "Missed some patterns"); + return -rte_errno; +} + +/* + * @ RTL definition: + * typedef struct packed { + * logic [19:0] mpls_label1; + * logic [19:0] mpls_label2; + * } Hash_Key_Mpls_t; + * + * @ flow items: + * RTE_FLOW_ITEM_TYPE_MPLS + * RTE_FLOW_ITEM_TYPE_MPLS + */ +static int +ipn3ke_pattern_mpls(const struct rte_flow_item patterns[], + struct rte_flow_error *error, + struct ipn3ke_flow_parse *parser) +{ + const struct rte_flow_item_mpls *mpls1 = NULL; + const struct rte_flow_item_mpls *mpls2 = NULL; + const struct rte_flow_item *item; + + for (item = patterns; item->type != RTE_FLOW_ITEM_TYPE_END; item++) { + if (!item->spec || item->mask || item->last) { + rte_flow_error_set(error, + EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, + "Only support item with 'spec'"); + return -rte_errno; + } + + switch (item->type) { + case RTE_FLOW_ITEM_TYPE_MPLS: + if (!mpls1) { + mpls1 = item->spec; + + parser->key[0] = mpls1->label_tc_s[0]; + parser->key[1] = mpls1->label_tc_s[1]; + 
parser->key[2] = mpls1->label_tc_s[2] & 0xf0; + } else { + mpls2 = item->spec; + + parser->key[2] |= + ((mpls2->label_tc_s[0] & 0xf0) >> 4); + parser->key[3] = + ((mpls2->label_tc_s[0] & 0xf) << 4) | + ((mpls2->label_tc_s[1] & 0xf0) >> 4); + parser->key[4] = + ((mpls2->label_tc_s[1] & 0xf) << 4) | + ((mpls2->label_tc_s[2] & 0xf0) >> 4); + } + break; + + default: + rte_flow_error_set(error, + EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, + "Not support item type"); + return -rte_errno; + } + } + + if (mpls1 != NULL && mpls2 != NULL) { + parser->key_len = 20 + 20; + return 0; + } + + rte_flow_error_set(error, + EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + patterns, + "Missed some patterns"); + return -rte_errno; +} + +/* + * @ RTL definition: + * typedef struct packed { + * logic [31:0] ip_sa; + * logic [15:0] tcp_sport; + * } Hash_Key_Ip_Tcp_t; + * + * @ flow items: + * RTE_FLOW_ITEM_TYPE_IPV4 + * RTE_FLOW_ITEM_TYPE_TCP + */ +static int +ipn3ke_pattern_ip_tcp(const struct rte_flow_item patterns[], + struct rte_flow_error *error, + struct ipn3ke_flow_parse *parser) +{ + const struct rte_flow_item_ipv4 *ipv4 = NULL; + const struct rte_flow_item_tcp *tcp = NULL; + const struct rte_flow_item *item; + + for (item = patterns; item->type != RTE_FLOW_ITEM_TYPE_END; item++) { + if (!item->spec || item->mask || item->last) { + rte_flow_error_set(error, + EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, + "Only support item with 'spec'"); + return -rte_errno; + } + + switch (item->type) { + case RTE_FLOW_ITEM_TYPE_IPV4: + ipv4 = item->spec; + + rte_memcpy(&parser->key[0], &ipv4->hdr.src_addr, 4); + break; + + case RTE_FLOW_ITEM_TYPE_TCP: + tcp = item->spec; + + rte_memcpy(&parser->key[4], &tcp->hdr.src_port, 2); + break; + + default: + rte_flow_error_set(error, + EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, + "Not support item type"); + return -rte_errno; + } + } + + if (ipv4 != NULL && tcp != NULL) { + parser->key_len = 32 + 16; + return 0; + } + + rte_flow_error_set(error, + EINVAL, + 
RTE_FLOW_ERROR_TYPE_ITEM, + patterns, + "Missed some patterns"); + return -rte_errno; +} + +/* + * @ RTL definition: + * typedef struct packed { + * logic [31:0] ip_sa; + * logic [15:0] udp_sport; + * } Hash_Key_Ip_Udp_t; + * + * @ flow items: + * RTE_FLOW_ITEM_TYPE_IPV4 + * RTE_FLOW_ITEM_TYPE_UDP + */ +static int +ipn3ke_pattern_ip_udp(const struct rte_flow_item patterns[], + struct rte_flow_error *error, + struct ipn3ke_flow_parse *parser) +{ + const struct rte_flow_item_ipv4 *ipv4 = NULL; + const struct rte_flow_item_udp *udp = NULL; + const struct rte_flow_item *item; + + for (item = patterns; item->type != RTE_FLOW_ITEM_TYPE_END; item++) { + if (!item->spec || item->mask || item->last) { + rte_flow_error_set(error, + EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, + "Only support item with 'spec'"); + return -rte_errno; + } + + switch (item->type) { + case RTE_FLOW_ITEM_TYPE_IPV4: + ipv4 = item->spec; + + rte_memcpy(&parser->key[0], &ipv4->hdr.src_addr, 4); + break; + + case RTE_FLOW_ITEM_TYPE_UDP: + udp = item->spec; + + rte_memcpy(&parser->key[4], &udp->hdr.src_port, 2); + break; + + default: + rte_flow_error_set(error, + EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, + "Not support item type"); + return -rte_errno; + } + } + + if (ipv4 != NULL && udp != NULL) { + parser->key_len = 32 + 16; + return 0; + } + + rte_flow_error_set(error, + EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + patterns, + "Missed some patterns"); + return -rte_errno; +} + +/* + * @ RTL definition: + * typedef struct packed { + * logic [31:0] ip_sa; + * logic [15:0] udp_sport; + * logic [23:0] vsid; + * } Hash_Key_Ip_Nvgre_t; + * + * @ flow items: + * RTE_FLOW_ITEM_TYPE_IPV4 + * RTE_FLOW_ITEM_TYPE_UDP + * RTE_FLOW_ITEM_TYPE_NVGRE + */ +static int +ipn3ke_pattern_ip_nvgre(const struct rte_flow_item patterns[], + struct rte_flow_error *error, + struct ipn3ke_flow_parse *parser) +{ + const struct rte_flow_item_nvgre *nvgre = NULL; + const struct rte_flow_item_ipv4 *ipv4 = NULL; + const struct 
rte_flow_item_udp *udp = NULL; + const struct rte_flow_item *item; + + for (item = patterns; item->type != RTE_FLOW_ITEM_TYPE_END; item++) { + if (!item->spec || item->mask || item->last) { + rte_flow_error_set(error, + EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, + "Only support item with 'spec'"); + return -rte_errno; + } + + switch (item->type) { + case RTE_FLOW_ITEM_TYPE_IPV4: + ipv4 = item->spec; + + rte_memcpy(&parser->key[0], &ipv4->hdr.src_addr, 4); + break; + + case RTE_FLOW_ITEM_TYPE_UDP: + udp = item->spec; + + rte_memcpy(&parser->key[4], &udp->hdr.src_port, 2); + break; + + case RTE_FLOW_ITEM_TYPE_NVGRE: + nvgre = item->spec; + + rte_memcpy(&parser->key[6], nvgre->tni, 3); + break; + + default: + rte_flow_error_set(error, + EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, + "Not support item type"); + return -rte_errno; + } + } + + if (ipv4 != NULL && udp != NULL && nvgre != NULL) { + parser->key_len = 32 + 16 + 24; + return 0; + } + + rte_flow_error_set(error, + EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + patterns, + "Missed some patterns"); + return -rte_errno; +} + +/* + * @ RTL definition: + * typedef struct packed{ + * logic [23:0] vxlan_vni; + * logic [31:0] ip_sa; + * logic [15:0] udp_sport; + * } Hash_Key_Vxlan_Ip_Udp_t; + * + * @ flow items: + * RTE_FLOW_ITEM_TYPE_VXLAN + * RTE_FLOW_ITEM_TYPE_IPV4 + * RTE_FLOW_ITEM_TYPE_UDP + */ +static int +ipn3ke_pattern_vxlan_ip_udp(const struct rte_flow_item patterns[], + struct rte_flow_error *error, + struct ipn3ke_flow_parse *parser) +{ + const struct rte_flow_item_vxlan *vxlan = NULL; + const struct rte_flow_item_ipv4 *ipv4 = NULL; + const struct rte_flow_item_udp *udp = NULL; + const struct rte_flow_item *item; + + for (item = patterns; item->type != RTE_FLOW_ITEM_TYPE_END; item++) { + if (!item->spec || item->mask || item->last) { + rte_flow_error_set(error, + EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, + "Only support item with 'spec'"); + return -rte_errno; + } + + switch (item->type) { + case 
RTE_FLOW_ITEM_TYPE_VXLAN: + vxlan = item->spec; + + rte_memcpy(&parser->key[0], vxlan->vni, 3); + break; + + case RTE_FLOW_ITEM_TYPE_IPV4: + ipv4 = item->spec; + + rte_memcpy(&parser->key[3], &ipv4->hdr.src_addr, 4); + break; + + case RTE_FLOW_ITEM_TYPE_UDP: + udp = item->spec; + + rte_memcpy(&parser->key[7], &udp->hdr.src_port, 2); + break; + + default: + rte_flow_error_set(error, + EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, + "Not support item type"); + return -rte_errno; + } + } + + if (vxlan != NULL && ipv4 != NULL && udp != NULL) { + parser->key_len = 24 + 32 + 16; + return 0; + } + + rte_flow_error_set(error, + EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + patterns, + "Missed some patterns"); + return -rte_errno; +} + +static const struct ipn3ke_flow_pattern ipn3ke_supported_patterns[] = { + [IPN3KE_HASH_KEY_VXLAN] = { + .items = FLOW_PATTERNS(RTE_FLOW_ITEM_TYPE_VXLAN, + RTE_FLOW_ITEM_TYPE_ETH), + .filter = ipn3ke_pattern_vxlan, + }, + + [IPN3KE_HASH_KEY_MAC] = { + .items = FLOW_PATTERNS(RTE_FLOW_ITEM_TYPE_ETH), + .filter = ipn3ke_pattern_mac, + }, + + [IPN3KE_HASH_KEY_QINQ] = { + .items = FLOW_PATTERNS(RTE_FLOW_ITEM_TYPE_VLAN, + RTE_FLOW_ITEM_TYPE_VLAN), + .filter = ipn3ke_pattern_qinq, + }, + + [IPN3KE_HASH_KEY_MPLS] = { + .items = FLOW_PATTERNS(RTE_FLOW_ITEM_TYPE_MPLS, + RTE_FLOW_ITEM_TYPE_MPLS), + .filter = ipn3ke_pattern_mpls, + }, + + [IPN3KE_HASH_KEY_IP_TCP] = { + .items = FLOW_PATTERNS(RTE_FLOW_ITEM_TYPE_IPV4, + RTE_FLOW_ITEM_TYPE_TCP), + .filter = ipn3ke_pattern_ip_tcp, + }, + + [IPN3KE_HASH_KEY_IP_UDP] = { + .items = FLOW_PATTERNS(RTE_FLOW_ITEM_TYPE_IPV4, + RTE_FLOW_ITEM_TYPE_UDP), + .filter = ipn3ke_pattern_ip_udp, + }, + + [IPN3KE_HASH_KEY_IP_NVGRE] = { + .items = FLOW_PATTERNS(RTE_FLOW_ITEM_TYPE_IPV4, + RTE_FLOW_ITEM_TYPE_UDP, + RTE_FLOW_ITEM_TYPE_NVGRE), + .filter = ipn3ke_pattern_ip_nvgre, + }, + + [IPN3KE_HASH_KEY_VXLAN_IP_UDP] = { + .items = FLOW_PATTERNS(RTE_FLOW_ITEM_TYPE_VXLAN, + RTE_FLOW_ITEM_TYPE_IPV4, + RTE_FLOW_ITEM_TYPE_UDP), + .filter = 
ipn3ke_pattern_vxlan_ip_udp, + }, +}; + +static int +ipn3ke_flow_convert_attributes(const struct rte_flow_attr *attr, + struct rte_flow_error *error) +{ + if (!attr) { + rte_flow_error_set(error, + EINVAL, + RTE_FLOW_ERROR_TYPE_ATTR, + NULL, + "NULL attribute."); + return -rte_errno; + } + + if (attr->group) { + rte_flow_error_set(error, + ENOTSUP, + RTE_FLOW_ERROR_TYPE_ATTR_GROUP, + NULL, + "groups are not supported"); + return -rte_errno; + } + + if (attr->egress) { + rte_flow_error_set(error, + ENOTSUP, + RTE_FLOW_ERROR_TYPE_ATTR_EGRESS, + NULL, + "egress is not supported"); + return -rte_errno; + } + + if (attr->transfer) { + rte_flow_error_set(error, + ENOTSUP, + RTE_FLOW_ERROR_TYPE_ATTR_TRANSFER, + NULL, + "transfer is not supported"); + return -rte_errno; + } + + if (!attr->ingress) { + rte_flow_error_set(error, + ENOTSUP, + RTE_FLOW_ERROR_TYPE_ATTR_INGRESS, + NULL, + "only ingress is supported"); + return -rte_errno; + } + + return 0; +} + +static int +ipn3ke_flow_convert_actions(const struct rte_flow_action actions[], + struct rte_flow_error *error, + struct ipn3ke_flow_parse *parser) +{ + const struct rte_flow_action_mark *mark = NULL; + + if (!actions) { + rte_flow_error_set(error, + EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION_NUM, + NULL, + "NULL action."); + return -rte_errno; + } + + for (; actions->type != RTE_FLOW_ACTION_TYPE_END; ++actions) { + switch (actions->type) { + case RTE_FLOW_ACTION_TYPE_VOID: + break; + + case RTE_FLOW_ACTION_TYPE_MARK: + if (mark) { + rte_flow_error_set(error, + ENOTSUP, + RTE_FLOW_ERROR_TYPE_ACTION, + actions, + "duplicated mark"); + return -rte_errno; + } + + mark = actions->conf; + if (!mark) { + rte_flow_error_set(error, + EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, + actions, + "mark must be defined"); + return -rte_errno; + } else if (mark->id > IPN3KE_FLOW_RESULT_UID_MAX) { + rte_flow_error_set(error, + ENOTSUP, + RTE_FLOW_ERROR_TYPE_ACTION, + actions, + "mark id is out of range"); + return -rte_errno; + } + + parser->mark = 
1; + parser->mark_id = mark->id; + break; + + case RTE_FLOW_ACTION_TYPE_DROP: + parser->drop = 1; + break; + + default: + rte_flow_error_set(error, + ENOTSUP, + RTE_FLOW_ERROR_TYPE_ACTION, + actions, + "invalid action"); + return -rte_errno; + } + } + + if (!parser->drop && !parser->mark) { + rte_flow_error_set(error, + EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, + actions, + "no valid actions"); + return -rte_errno; + } + + return 0; +} + +static bool +ipn3ke_match_pattern(const enum rte_flow_item_type *patterns, + const struct rte_flow_item *input) +{ + const struct rte_flow_item *item = input; + + while ((*patterns == item->type) && + (*patterns != RTE_FLOW_ITEM_TYPE_END)) { + patterns++; + item++; + } + + return (*patterns == RTE_FLOW_ITEM_TYPE_END && + item->type == RTE_FLOW_ITEM_TYPE_END); +} + +static pattern_filter_t +ipn3ke_find_filter_func(const struct rte_flow_item *input, + uint32_t *idx) +{ + pattern_filter_t filter = NULL; + uint32_t i; + + for (i = 0; i < RTE_DIM(ipn3ke_supported_patterns); i++) { + if (ipn3ke_match_pattern(ipn3ke_supported_patterns[i].items, + input)) { + filter = ipn3ke_supported_patterns[i].filter; + *idx = i; + break; + } + } + + return filter; +} + +static int +ipn3ke_flow_convert_items(const struct rte_flow_item items[], + struct rte_flow_error *error, + struct ipn3ke_flow_parse *parser) +{ + pattern_filter_t filter = NULL; + uint32_t idx; + + if (!items) { + rte_flow_error_set(error, + EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM_NUM, + NULL, + "NULL pattern."); + return -rte_errno; + } + + filter = ipn3ke_find_filter_func(items, &idx); + + if (!filter) { + rte_flow_error_set(error, + EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + items, + "Unsupported pattern"); + return -rte_errno; + } + + parser->key_type = idx; + + return filter(items, error, parser); +} + +/* Put the least @nbits of @data into @offset of @dst bits stream, and + * the @offset starts from MSB to LSB in each byte. 
+ * + * MSB LSB + * +------+------+------+------+ + * | | | | | + * +------+------+------+------+ + * ^ ^ + * |<- data: nbits ->| + * | + * offset + */ +static void +copy_data_bits(uint8_t *dst, uint64_t data, + uint32_t offset, uint8_t nbits) +{ + uint8_t set, *p = &dst[offset / BITS_PER_BYTE]; + uint8_t bits_to_set = BITS_PER_BYTE - (offset % BITS_PER_BYTE); + uint8_t mask_to_set = 0xff >> (offset % BITS_PER_BYTE); + uint32_t size = offset + nbits; + + if (nbits > (sizeof(data) * BITS_PER_BYTE)) { + IPN3KE_AFU_PMD_ERR("nbits is out of range"); + return; + } + + while (nbits - bits_to_set >= 0) { + set = data >> (nbits - bits_to_set); + + *p &= ~mask_to_set; + *p |= (set & mask_to_set); + + nbits -= bits_to_set; + bits_to_set = BITS_PER_BYTE; + mask_to_set = 0xff; + p++; + } + + if (nbits) { + uint8_t shift = BITS_PER_BYTE - (size % BITS_PER_BYTE); + + set = data << shift; + mask_to_set = 0xff << shift; + + *p &= ~mask_to_set; + *p |= (set & mask_to_set); + } +} + +static void +ipn3ke_flow_key_generation(struct ipn3ke_flow_parse *parser, + struct rte_flow *flow) +{ + uint32_t i, shift_bytes, len_in_bytes, offset; + uint64_t key; + uint8_t *dst; + + dst = flow->rule.key; + + copy_data_bits(dst, + parser->key_type, + IPN3KE_FLOW_KEY_ID_OFFSET, + IPN3KE_FLOW_KEY_ID_BITS); + + /* The MSb of key is filled to 0 when it is less than + * IPN3KE_FLOW_KEY_DATA_BITS bit. And the parsed key data is + * save as MSB byte first in the array, it needs to move + * the bits before formatting them. 
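The MSB-first packing that copy_data_bits() performs can be illustrated with a simplified, self-contained sketch. pack_bits_msb is a hypothetical name used only for this example; it sets one bit per iteration instead of working byte-at-a-time as the driver does, but it produces the same bit layout:

```c
#include <stdint.h>

#define BITS_PER_BYTE 8

/* Write the least-significant @nbits of @data into the bit stream
 * @dst, starting @offset bits from the MSB of dst[0]. Hypothetical
 * helper for illustration; not the driver's byte-wise version. */
static void pack_bits_msb(uint8_t *dst, uint64_t data,
			  uint32_t offset, uint8_t nbits)
{
	int i;

	for (i = nbits - 1; i >= 0; i--, offset++) {
		uint8_t mask = 1u << (BITS_PER_BYTE - 1 -
				      (offset % BITS_PER_BYTE));

		if ((data >> i) & 1)
			dst[offset / BITS_PER_BYTE] |= mask;
		else
			dst[offset / BITS_PER_BYTE] &= ~mask;
	}
}
```

For instance, packing a 3-bit key id of 5 at bit offset 17 (the key-id offset used by this driver) lands in bits 6..4 of dst[2], i.e. dst[2] becomes 0x50.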
+ */ + key = 0; + shift_bytes = 0; + len_in_bytes = BITS_TO_BYTES(parser->key_len); + offset = (IPN3KE_FLOW_KEY_DATA_OFFSET + + IPN3KE_FLOW_KEY_DATA_BITS - + parser->key_len); + + for (i = 0; i < len_in_bytes; i++) { + key = (key << 8) | parser->key[i]; + + if (++shift_bytes == sizeof(key)) { + shift_bytes = 0; + + copy_data_bits(dst, key, offset, + sizeof(key) * BITS_PER_BYTE); + offset += sizeof(key) * BITS_PER_BYTE; + key = 0; + } + } + + if (shift_bytes != 0) { + uint32_t rem_bits; + + rem_bits = parser->key_len % (sizeof(key) * BITS_PER_BYTE); + key >>= (shift_bytes * 8 - rem_bits); + copy_data_bits(dst, key, offset, rem_bits); + } +} + +static void +ipn3ke_flow_result_generation(struct ipn3ke_flow_parse *parser, + struct rte_flow *flow) +{ + uint8_t *dst; + + if (parser->drop) + return; + + dst = flow->rule.result; + + copy_data_bits(dst, + 1, + IPN3KE_FLOW_RESULT_ACL_OFFSET, + IPN3KE_FLOW_RESULT_ACL_BITS); + + copy_data_bits(dst, + parser->mark_id, + IPN3KE_FLOW_RESULT_UID_OFFSET, + IPN3KE_FLOW_RESULT_UID_BITS); +} + +#define __SWAP16(_x) \ + ((((_x) & 0xff) << 8) | \ + (((_x) >> 8) & 0xff)) + +#define __SWAP32(_x) \ + ((__SWAP16((_x) & 0xffff) << 16) | \ + __SWAP16(((_x) >> 16) & 0xffff)) + +#define MHL_COMMAND_TIME_COUNT 0xFFFF +#define MHL_COMMAND_TIME_INTERVAL_US 10 + +static int +ipn3ke_flow_hw_update(struct ipn3ke_hw *hw, + struct rte_flow *flow, uint32_t is_add) +{ + uint32_t *pdata = NULL; + uint32_t data; + uint32_t time_out = MHL_COMMAND_TIME_COUNT; + +#ifdef DEBUG_IPN3KE_FLOW + uint32_t i; + + printf("IPN3KE flow dump\n"); + + pdata = (uint32_t *)flow->rule.key; + printf(" - key :"); + + for (i = 0; i < RTE_DIM(flow->rule.key); i++) + printf(" %02x", flow->rule.key[i]); + + for (i = 0; i < 4; i++) + printf(" %02x", __SWAP32(pdata[3 - i])); + printf("\n"); + + pdata = (uint32_t *)flow->rule.result; + printf(" - result:"); + + for (i = 0; i < RTE_DIM(flow->rule.result); i++) + printf(" %02x", flow->rule.result[i]); + + for (i = 0; i < 1; i++) + 
printf(" %02x", pdata[i]); + printf("\n"); +#endif + + pdata = (uint32_t *)flow->rule.key; + + IPN3KE_MASK_WRITE_REG(hw, + IPN3KE_CLF_MHL_KEY_0, + 0, + __SWAP32(pdata[3]), + IPN3KE_CLF_MHL_KEY_MASK); + + IPN3KE_MASK_WRITE_REG(hw, + IPN3KE_CLF_MHL_KEY_1, + 0, + __SWAP32(pdata[2]), + IPN3KE_CLF_MHL_KEY_MASK); + + IPN3KE_MASK_WRITE_REG(hw, + IPN3KE_CLF_MHL_KEY_2, + 0, + __SWAP32(pdata[1]), + IPN3KE_CLF_MHL_KEY_MASK); + + IPN3KE_MASK_WRITE_REG(hw, + IPN3KE_CLF_MHL_KEY_3, + 0, + __SWAP32(pdata[0]), + IPN3KE_CLF_MHL_KEY_MASK); + + pdata = (uint32_t *)flow->rule.result; + IPN3KE_MASK_WRITE_REG(hw, + IPN3KE_CLF_MHL_RES, + 0, + __SWAP32(pdata[0]), + IPN3KE_CLF_MHL_RES_MASK); + + /* insert/delete the key and result */ + data = 0; + data = IPN3KE_MASK_READ_REG(hw, + IPN3KE_CLF_MHL_MGMT_CTRL, + 0, + 0x80000000); + time_out = MHL_COMMAND_TIME_COUNT; + while (IPN3KE_BIT_ISSET(data, IPN3KE_CLF_MHL_MGMT_CTRL_BIT_BUSY) && + (time_out > 0)) { + data = IPN3KE_MASK_READ_REG(hw, + IPN3KE_CLF_MHL_MGMT_CTRL, + 0, + 0x80000000); + time_out--; + rte_delay_us(MHL_COMMAND_TIME_INTERVAL_US); + } + if (!time_out) + return -1; + if (is_add) + IPN3KE_MASK_WRITE_REG(hw, + IPN3KE_CLF_MHL_MGMT_CTRL, + 0, + IPN3KE_CLF_MHL_MGMT_CTRL_INSERT, + 0x3); + else + IPN3KE_MASK_WRITE_REG(hw, + IPN3KE_CLF_MHL_MGMT_CTRL, + 0, + IPN3KE_CLF_MHL_MGMT_CTRL_DELETE, + 0x3); + + return 0; +} + +static int +ipn3ke_flow_hw_flush(struct ipn3ke_hw *hw) +{ + uint32_t data; + uint32_t time_out = MHL_COMMAND_TIME_COUNT; + + /* flush the MHL lookup table */ + data = 0; + data = IPN3KE_MASK_READ_REG(hw, + IPN3KE_CLF_MHL_MGMT_CTRL, + 0, + 0x80000000); + time_out = MHL_COMMAND_TIME_COUNT; + while (IPN3KE_BIT_ISSET(data, IPN3KE_CLF_MHL_MGMT_CTRL_BIT_BUSY) && + (time_out > 0)) { + data = IPN3KE_MASK_READ_REG(hw, + IPN3KE_CLF_MHL_MGMT_CTRL, + 0, + 0x80000000); + time_out--; + rte_delay_us(MHL_COMMAND_TIME_INTERVAL_US); + } + if (!time_out) + return -1; + IPN3KE_MASK_WRITE_REG(hw, + IPN3KE_CLF_MHL_MGMT_CTRL, + 0, + 
IPN3KE_CLF_MHL_MGMT_CTRL_FLUSH, + 0x3); + + return 0; +} + +static void +ipn3ke_flow_convert_finalise(struct ipn3ke_hw *hw, + struct ipn3ke_flow_parse *parser, + struct rte_flow *flow) +{ + ipn3ke_flow_key_generation(parser, flow); + ipn3ke_flow_result_generation(parser, flow); + ipn3ke_flow_hw_update(hw, flow, 1); +} + +static int +ipn3ke_flow_convert(const struct rte_flow_attr *attr, + const struct rte_flow_item items[], + const struct rte_flow_action actions[], + struct rte_flow_error *error, + struct ipn3ke_flow_parse *parser) +{ + int ret; + + ret = ipn3ke_flow_convert_attributes(attr, error); + if (ret) + return ret; + + ret = ipn3ke_flow_convert_actions(actions, error, parser); + if (ret) + return ret; + + ret = ipn3ke_flow_convert_items(items, error, parser); + if (ret) + return ret; + + return 0; +} + +static int +ipn3ke_flow_validate(__rte_unused struct rte_eth_dev *dev, + const struct rte_flow_attr *attr, + const struct rte_flow_item pattern[], + const struct rte_flow_action actions[], + struct rte_flow_error *error) +{ + struct ipn3ke_flow_parse parser = { }; + return ipn3ke_flow_convert(attr, pattern, actions, error, &parser); +} + +static struct rte_flow * +ipn3ke_flow_create(struct rte_eth_dev *dev, + const struct rte_flow_attr *attr, + const struct rte_flow_item pattern[], + const struct rte_flow_action actions[], + struct rte_flow_error *error) +{ + struct ipn3ke_hw *hw = IPN3KE_DEV_PRIVATE_TO_HW(dev); + struct ipn3ke_flow_parse parser = { }; + struct rte_flow *flow; + int ret; + + if (hw->flow_num_entries == hw->flow_max_entries) { + rte_flow_error_set(error, + ENOBUFS, + RTE_FLOW_ERROR_TYPE_HANDLE, + NULL, + "The flow table is full."); + return NULL; + } + + ret = ipn3ke_flow_convert(attr, pattern, actions, error, &parser); + if (ret < 0) { + rte_flow_error_set(error, + -ret, + RTE_FLOW_ERROR_TYPE_HANDLE, + NULL, + "Failed to create flow."); + return NULL; + } + + flow = rte_zmalloc("ipn3ke_flow", sizeof(struct rte_flow), 0); + if (!flow) { + 
rte_flow_error_set(error, + ENOMEM, + RTE_FLOW_ERROR_TYPE_HANDLE, + NULL, + "Failed to allocate memory"); + return flow; + } + + ipn3ke_flow_convert_finalise(hw, &parser, flow); + + TAILQ_INSERT_TAIL(&hw->flow_list, flow, next); + + return flow; +} + +static int +ipn3ke_flow_destroy(struct rte_eth_dev *dev, + struct rte_flow *flow, + struct rte_flow_error *error) +{ + struct ipn3ke_hw *hw = IPN3KE_DEV_PRIVATE_TO_HW(dev); + int ret = 0; + + ret = ipn3ke_flow_hw_update(hw, flow, 0); + if (!ret) { + TAILQ_REMOVE(&hw->flow_list, flow, next); + rte_free(flow); + } else { + rte_flow_error_set(error, + -ret, + RTE_FLOW_ERROR_TYPE_HANDLE, + NULL, + "Failed to destroy flow."); + } + + return ret; +} + +static int +ipn3ke_flow_flush(struct rte_eth_dev *dev, + struct rte_flow_error *error) +{ + struct ipn3ke_hw *hw = IPN3KE_DEV_PRIVATE_TO_HW(dev); + struct rte_flow *flow, *temp; + + TAILQ_FOREACH_SAFE(flow, &hw->flow_list, next, temp) { + TAILQ_REMOVE(&hw->flow_list, flow, next); + rte_free(flow); + } + + return ipn3ke_flow_hw_flush(hw); +} + +int ipn3ke_flow_init(void *dev) +{ + struct ipn3ke_hw *hw = (struct ipn3ke_hw *)dev; + uint32_t data; + + /* disable rx classifier bypass */ + IPN3KE_MASK_WRITE_REG(hw, + IPN3KE_CLF_RX_TEST, + 0, 0, 0x1); +#ifdef DEBUG_IPN3KE_FLOW + data = 0; + data = IPN3KE_MASK_READ_REG(hw, + IPN3KE_CLF_RX_TEST, + 0, + 0x1); + printf("IPN3KE_CLF_RX_TEST: %x\n", data); +#endif + + /* configure base mac address */ + IPN3KE_MASK_WRITE_REG(hw, + IPN3KE_CLF_BASE_DST_MAC_ADDR_HI, + 0, + 0x2457, + 0xFFFF); +#ifdef DEBUG_IPN3KE_FLOW + data = 0; + data = IPN3KE_MASK_READ_REG(hw, + IPN3KE_CLF_BASE_DST_MAC_ADDR_HI, + 0, + 0xFFFF); + printf("IPN3KE_CLF_BASE_DST_MAC_ADDR_HI: %x\n", data); +#endif + IPN3KE_MASK_WRITE_REG(hw, + IPN3KE_CLF_BASE_DST_MAC_ADDR_LOW, + 0, + 0x9bdf1000, + 0xFFFFFFFF); +#ifdef DEBUG_IPN3KE_FLOW + data = 0; + data = IPN3KE_MASK_READ_REG(hw, + IPN3KE_CLF_BASE_DST_MAC_ADDR_LOW, + 0, + 0xFFFFFFFF); + printf("IPN3KE_CLF_BASE_DST_MAC_ADDR_LOW: 
%x\n", data); +#endif + + /* configure hash lookup rules enable */ + IPN3KE_MASK_WRITE_REG(hw, + IPN3KE_CLF_LKUP_ENABLE, + 0, + 0xFD, + 0xFF); +#ifdef DEBUG_IPN3KE_FLOW + data = 0; + data = IPN3KE_MASK_READ_REG(hw, + IPN3KE_CLF_LKUP_ENABLE, + 0, + 0xFF); + printf("IPN3KE_CLF_LKUP_ENABLE: %x\n", data); +#endif + + /* configure rx parse config, settings associatied with VxLAN */ + IPN3KE_MASK_WRITE_REG(hw, + IPN3KE_CLF_RX_PARSE_CFG, + 0, + 0x212b5, + 0x3FFFF); +#ifdef DEBUG_IPN3KE_FLOW + data = 0; + data = IPN3KE_MASK_READ_REG(hw, + IPN3KE_CLF_RX_PARSE_CFG, + 0, + 0x3FFFF); + printf("IPN3KE_CLF_RX_PARSE_CFG: %x\n", data); +#endif + + /* configure QinQ S-Tag */ + IPN3KE_MASK_WRITE_REG(hw, + IPN3KE_CLF_QINQ_STAG, + 0, + 0x88a8, + 0xFFFF); +#ifdef DEBUG_IPN3KE_FLOW + data = 0; + data = IPN3KE_MASK_READ_REG(hw, + IPN3KE_CLF_QINQ_STAG, + 0, + 0xFFFF); + printf("IPN3KE_CLF_QINQ_STAG: %x\n", data); +#endif + + /* configure gen ctrl */ + IPN3KE_MASK_WRITE_REG(hw, + IPN3KE_CLF_MHL_GEN_CTRL, + 0, + 0x3, + 0x3); +#ifdef DEBUG_IPN3KE_FLOW + data = 0; + data = IPN3KE_MASK_READ_REG(hw, + IPN3KE_CLF_MHL_GEN_CTRL, + 0, + 0x1F); + printf("IPN3KE_CLF_MHL_GEN_CTRL: %x\n", data); +#endif + + /* clear monitoring register */ + IPN3KE_MASK_WRITE_REG(hw, + IPN3KE_CLF_MHL_MON_0, + 0, + 0xFFFFFFFF, + 0xFFFFFFFF); +#ifdef DEBUG_IPN3KE_FLOW + data = 0; + data = IPN3KE_MASK_READ_REG(hw, + IPN3KE_CLF_MHL_MON_0, + 0, + 0xFFFFFFFF); + printf("IPN3KE_CLF_MHL_MON_0: %x\n", data); +#endif + + ipn3ke_flow_hw_flush(hw); + + TAILQ_INIT(&hw->flow_list); + hw->flow_max_entries = IPN3KE_MASK_READ_REG(hw, + IPN3KE_CLF_EM_NUM, + 0, + 0xFFFFFFFF); + hw->flow_num_entries = 0; + + return 0; +} + +const struct rte_flow_ops ipn3ke_flow_ops = { + .validate = ipn3ke_flow_validate, + .create = ipn3ke_flow_create, + .destroy = ipn3ke_flow_destroy, + .flush = ipn3ke_flow_flush, +}; + diff --git a/drivers/net/ipn3ke/ipn3ke_flow.h b/drivers/net/ipn3ke/ipn3ke_flow.h new file mode 100644 index 0000000..d5356a8 --- 
/dev/null +++ b/drivers/net/ipn3ke/ipn3ke_flow.h @@ -0,0 +1,104 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2019 Intel Corporation + */ + +#ifndef _IPN3KE_FLOW_H_ +#define _IPN3KE_FLOW_H_ + +#include +#include +#include +#include +#include +#include +#include + +#include +#include +#include +#include +#include +#include +#include +#include +#include + +/** + * Expand the length to DWORD alignment with 'Unused' field. + * + * FLOW KEY: + * | Unused |Ruler id (id) | Key1 Key2 ā€¦ (data) | + * |--------+---------------+--------------------| + * | 17bits | 3 bits | Total 108 bits | + * MSB ---> LSB + * + * Note: And the MSb of key data is filled to 0 when it is less + * than 108 bit. + */ +#define IPN3KE_FLOW_KEY_UNUSED_BITS 17 +#define IPN3KE_FLOW_KEY_ID_BITS 3 +#define IPN3KE_FLOW_KEY_DATA_BITS 108 + +#define IPN3KE_FLOW_KEY_TOTAL_BITS \ + (IPN3KE_FLOW_KEY_UNUSED_BITS + \ + IPN3KE_FLOW_KEY_ID_BITS + \ + IPN3KE_FLOW_KEY_DATA_BITS) + +#define IPN3KE_FLOW_KEY_ID_OFFSET \ + (IPN3KE_FLOW_KEY_UNUSED_BITS) + +#define IPN3KE_FLOW_KEY_DATA_OFFSET \ + (IPN3KE_FLOW_KEY_ID_OFFSET + IPN3KE_FLOW_KEY_ID_BITS) + +/** + * Expand the length to DWORD alignment with 'Unused' field. 
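The header's claim that the 'Unused' field pads each layout out to DWORD alignment can be checked with plain arithmetic. The constants below are copied from the macros in ipn3ke_flow.h above; the static asserts are illustrative only and not part of the patch:

```c
/* Key layout constants, copied from ipn3ke_flow.h. */
#define KEY_UNUSED_BITS	17
#define KEY_ID_BITS	3
#define KEY_DATA_BITS	108
#define KEY_TOTAL_BITS	(KEY_UNUSED_BITS + KEY_ID_BITS + KEY_DATA_BITS)
#define KEY_ID_OFFSET	KEY_UNUSED_BITS
#define KEY_DATA_OFFSET	(KEY_ID_OFFSET + KEY_ID_BITS)

/* Result layout constants. */
#define RES_UNUSED_BITS	15
#define RES_ACL_BITS	1
#define RES_UID_BITS	16
#define RES_TOTAL_BITS	(RES_UNUSED_BITS + RES_ACL_BITS + RES_UID_BITS)

/* The key occupies exactly four DWORDs, the result exactly one. */
_Static_assert(KEY_TOTAL_BITS == 128, "key is four DWORDs");
_Static_assert(KEY_TOTAL_BITS % 32 == 0, "key is DWORD aligned");
_Static_assert(RES_TOTAL_BITS == 32, "result is one DWORD");
```

This is why ipn3ke_flow_hw_update() can stream the key out as four 32-bit register writes (MHL_KEY_0..3) and the result as a single write.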
+ * + * FLOW RESULT: + * | Unused | enable (acl) | uid | + * |---------+--------------+--------------| + * | 15 bits | 1 bit | 16 bits | + * MSB ---> LSB + */ + +#define IPN3KE_FLOW_RESULT_UNUSED_BITS 15 +#define IPN3KE_FLOW_RESULT_ACL_BITS 1 +#define IPN3KE_FLOW_RESULT_UID_BITS 16 + +#define IPN3KE_FLOW_RESULT_TOTAL_BITS \ + (IPN3KE_FLOW_RESULT_UNUSED_BITS + \ + IPN3KE_FLOW_RESULT_ACL_BITS + \ + IPN3KE_FLOW_RESULT_UID_BITS) + +#define IPN3KE_FLOW_RESULT_ACL_OFFSET \ + (IPN3KE_FLOW_RESULT_UNUSED_BITS) + +#define IPN3KE_FLOW_RESULT_UID_OFFSET \ + (IPN3KE_FLOW_RESULT_ACL_OFFSET + IPN3KE_FLOW_RESULT_ACL_BITS) + +#define IPN3KE_FLOW_RESULT_UID_MAX \ + ((1UL << IPN3KE_FLOW_RESULT_UID_BITS) - 1) + +#ifndef BITS_PER_BYTE +#define BITS_PER_BYTE 8 +#endif +#define BITS_TO_BYTES(bits) \ + (((bits) + BITS_PER_BYTE - 1) / BITS_PER_BYTE) + +struct ipn3ke_flow_rule { + uint8_t key[BITS_TO_BYTES(IPN3KE_FLOW_KEY_TOTAL_BITS)]; + uint8_t result[BITS_TO_BYTES(IPN3KE_FLOW_RESULT_TOTAL_BITS)]; +}; + +struct rte_flow { + TAILQ_ENTRY(rte_flow) next; /**< Pointer to the next flow structure. */ + + struct ipn3ke_flow_rule rule; +}; + +TAILQ_HEAD(ipn3ke_flow_list, rte_flow); + +extern const struct rte_flow_ops ipn3ke_flow_ops; + +int ipn3ke_flow_init(void *dev); + +#endif /* _IPN3KE_FLOW_H_ */ diff --git a/drivers/net/ipn3ke/ipn3ke_logs.h b/drivers/net/ipn3ke/ipn3ke_logs.h new file mode 100644 index 0000000..dedaece --- /dev/null +++ b/drivers/net/ipn3ke/ipn3ke_logs.h @@ -0,0 +1,30 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2019 Intel Corporation + */ + +#ifndef _IPN3KE_LOGS_H_ +#define _IPN3KE_LOGS_H_ + +#include + +extern int ipn3ke_afu_logtype; + +#define IPN3KE_AFU_PMD_LOG(level, fmt, args...) \ + rte_log(RTE_LOG_ ## level, ipn3ke_afu_logtype, "ipn3ke_afu: " fmt, \ + ##args) + +#define IPN3KE_AFU_PMD_FUNC_TRACE() IPN3KE_AFU_PMD_LOG(DEBUG, ">>") + +#define IPN3KE_AFU_PMD_DEBUG(fmt, args...) 
\ + IPN3KE_AFU_PMD_LOG(DEBUG, fmt, ## args) + +#define IPN3KE_AFU_PMD_INFO(fmt, args...) \ + IPN3KE_AFU_PMD_LOG(INFO, fmt, ## args) + +#define IPN3KE_AFU_PMD_ERR(fmt, args...) \ + IPN3KE_AFU_PMD_LOG(ERR, fmt, ## args) + +#define IPN3KE_AFU_PMD_WARN(fmt, args...) \ + IPN3KE_AFU_PMD_LOG(WARNING, fmt, ## args) + +#endif /* _IPN3KE_LOGS_H_ */ diff --git a/drivers/net/ipn3ke/ipn3ke_representor.c b/drivers/net/ipn3ke/ipn3ke_representor.c new file mode 100644 index 0000000..a3d99eb --- /dev/null +++ b/drivers/net/ipn3ke/ipn3ke_representor.c @@ -0,0 +1,890 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2019 Intel Corporation + */ + +#include + +#include +#include +#include +#include + +#include +#include +#include +#include + +#include +#include +#include +#include +#include + +#include "ifpga_rawdev_api.h" +#include "ipn3ke_tm.h" +#include "ipn3ke_flow.h" +#include "ipn3ke_logs.h" +#include "ipn3ke_ethdev.h" + +static int ipn3ke_rpst_scan_num; +static pthread_t ipn3ke_rpst_scan_thread; + +/** Double linked list of representor port. */ +TAILQ_HEAD(ipn3ke_rpst_list, ipn3ke_rpst); + +static struct ipn3ke_rpst_list ipn3ke_rpst_list = + TAILQ_HEAD_INITIALIZER(ipn3ke_rpst_list); + +static rte_spinlock_t ipn3ke_link_notify_list_lk = RTE_SPINLOCK_INITIALIZER; + +static void +ipn3ke_rpst_dev_infos_get(struct rte_eth_dev *ethdev, + struct rte_eth_dev_info *dev_info) +{ + struct ipn3ke_rpst *rpst = IPN3KE_DEV_PRIVATE_TO_RPST(ethdev); + struct ipn3ke_hw *hw = IPN3KE_DEV_PRIVATE_TO_HW(ethdev); + + dev_info->speed_capa = + (hw->retimer.mac_type == + IFPGA_RAWDEV_RETIMER_MAC_TYPE_10GE_XFI) ? + ETH_LINK_SPEED_10G : + ((hw->retimer.mac_type == + IFPGA_RAWDEV_RETIMER_MAC_TYPE_25GE_25GAUI) ? 
+ ETH_LINK_SPEED_25G : + ETH_LINK_SPEED_AUTONEG); + + dev_info->max_rx_queues = 1; + dev_info->max_tx_queues = 1; + dev_info->min_rx_bufsize = IPN3KE_AFU_BUF_SIZE_MIN; + dev_info->max_rx_pktlen = IPN3KE_AFU_FRAME_SIZE_MAX; + dev_info->max_mac_addrs = hw->port_num; + dev_info->max_vfs = 0; + dev_info->default_txconf = (struct rte_eth_txconf) { + .offloads = 0, + }; + dev_info->rx_queue_offload_capa = 0; + dev_info->rx_offload_capa = + DEV_RX_OFFLOAD_VLAN_STRIP | + DEV_RX_OFFLOAD_QINQ_STRIP | + DEV_RX_OFFLOAD_IPV4_CKSUM | + DEV_RX_OFFLOAD_UDP_CKSUM | + DEV_RX_OFFLOAD_TCP_CKSUM | + DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM | + DEV_RX_OFFLOAD_VLAN_EXTEND | + DEV_RX_OFFLOAD_VLAN_FILTER | + DEV_RX_OFFLOAD_JUMBO_FRAME; + + dev_info->tx_queue_offload_capa = DEV_TX_OFFLOAD_MBUF_FAST_FREE; + dev_info->tx_offload_capa = + DEV_TX_OFFLOAD_VLAN_INSERT | + DEV_TX_OFFLOAD_QINQ_INSERT | + DEV_TX_OFFLOAD_IPV4_CKSUM | + DEV_TX_OFFLOAD_UDP_CKSUM | + DEV_TX_OFFLOAD_TCP_CKSUM | + DEV_TX_OFFLOAD_SCTP_CKSUM | + DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM | + DEV_TX_OFFLOAD_TCP_TSO | + DEV_TX_OFFLOAD_VXLAN_TNL_TSO | + DEV_TX_OFFLOAD_GRE_TNL_TSO | + DEV_TX_OFFLOAD_IPIP_TNL_TSO | + DEV_TX_OFFLOAD_GENEVE_TNL_TSO | + DEV_TX_OFFLOAD_MULTI_SEGS | + dev_info->tx_queue_offload_capa; + + dev_info->dev_capa = + RTE_ETH_DEV_CAPA_RUNTIME_RX_QUEUE_SETUP | + RTE_ETH_DEV_CAPA_RUNTIME_TX_QUEUE_SETUP; + + dev_info->switch_info.name = ethdev->device->name; + dev_info->switch_info.domain_id = rpst->switch_domain_id; + dev_info->switch_info.port_id = rpst->port_id; +} + +static int +ipn3ke_rpst_dev_configure(__rte_unused struct rte_eth_dev *dev) +{ + return 0; +} + +static int +ipn3ke_rpst_dev_start(struct rte_eth_dev *dev) +{ + struct ipn3ke_hw *hw = IPN3KE_DEV_PRIVATE_TO_HW(dev); + struct ipn3ke_rpst *rpst = IPN3KE_DEV_PRIVATE_TO_RPST(dev); + struct rte_rawdev *rawdev; + struct ifpga_rawdev_mac_info mac_info; + uint32_t val; + char attr_name[IPN3KE_RAWDEV_ATTR_LEN_MAX]; + + rawdev = hw->rawdev; + + memset(attr_name, 0, 
sizeof(attr_name)); + snprintf(attr_name, IPN3KE_RAWDEV_ATTR_LEN_MAX, "%s", + "default_mac"); + mac_info.port_id = rpst->port_id; + rawdev->dev_ops->attr_get(rawdev, attr_name, (uint64_t *)&mac_info); + ether_addr_copy(&mac_info.addr, &rpst->mac_addr); + + ether_addr_copy(&mac_info.addr, &dev->data->mac_addrs[0]); + + /* Set mac address */ + rte_memcpy(((char *)(&val)), + (char *)&mac_info.addr.addr_bytes[0], + sizeof(uint32_t)); + (*hw->f_mac_write)(hw, + val, + IPN3KE_MAC_PRIMARY_MAC_ADDR0, + rpst->port_id, + 0); + rte_memcpy(((char *)(&val)), + (char *)&mac_info.addr.addr_bytes[4], + sizeof(uint16_t)); + (*hw->f_mac_write)(hw, + val, + IPN3KE_MAC_PRIMARY_MAC_ADDR1, + rpst->port_id, + 0); + + /* Enable the TX path */ + val = 0; + val &= IPN3KE_MAC_TX_PACKET_CONTROL_MASK; + (*hw->f_mac_write)(hw, + val, + IPN3KE_MAC_TX_PACKET_CONTROL, + rpst->port_id, + 0); + + /* Disables source address override */ + val = 0; + val &= IPN3KE_MAC_TX_SRC_ADDR_OVERRIDE_MASK; + (*hw->f_mac_write)(hw, + val, + IPN3KE_MAC_TX_SRC_ADDR_OVERRIDE, + rpst->port_id, + 0); + + /* Enable the RX path */ + val = 0; + val &= IPN3KE_MAC_RX_TRANSFER_CONTROL_MASK; + (*hw->f_mac_write)(hw, + val, + IPN3KE_MAC_RX_TRANSFER_CONTROL, + rpst->port_id, + 0); + + /* Clear all TX statistics counters */ + val = 1; + val &= IPN3KE_MAC_TX_STATS_CLR_CLEAR_MASK; + (*hw->f_mac_write)(hw, + val, + IPN3KE_MAC_TX_STATS_CLR, + rpst->port_id, + 0); + + /* Clear all RX statistics counters */ + val = 1; + val &= IPN3KE_MAC_RX_STATS_CLR_CLEAR_MASK; + (*hw->f_mac_write)(hw, + val, + IPN3KE_MAC_RX_STATS_CLR, + rpst->port_id, + 0); + + return 0; +} + +static void +ipn3ke_rpst_dev_stop(__rte_unused struct rte_eth_dev *dev) +{ + struct ipn3ke_hw *hw = IPN3KE_DEV_PRIVATE_TO_HW(dev); + struct ipn3ke_rpst *rpst = IPN3KE_DEV_PRIVATE_TO_RPST(dev); + uint32_t val; + + /* Disable the TX path */ + val = 1; + val &= IPN3KE_MAC_TX_PACKET_CONTROL_MASK; + (*hw->f_mac_write)(hw, + val, + IPN3KE_MAC_TX_PACKET_CONTROL, + rpst->port_id, + 0); 
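The primary-MAC programming in ipn3ke_rpst_dev_start() above splits the 6-byte address across two registers: the first four octets go into IPN3KE_MAC_PRIMARY_MAC_ADDR0 and the remaining two into ADDR1. A minimal sketch of that split follows; split_mac()/join_mac() are hypothetical helpers, and the byte order mirrors the raw memcpy in the patch (host-endian load of the octets):

```c
#include <stdint.h>
#include <string.h>

/* Pack the first 4 MAC octets into the ADDR0 register word and the
 * last 2 into ADDR1, via raw memcpy exactly as the patch does. */
static void split_mac(const uint8_t mac[6], uint32_t *addr0, uint32_t *addr1)
{
	*addr0 = 0;
	*addr1 = 0;
	memcpy(addr0, &mac[0], sizeof(uint32_t));
	memcpy(addr1, &mac[4], sizeof(uint16_t));
}

/* Inverse operation: recover the octets from the register words. */
static void join_mac(uint32_t addr0, uint32_t addr1, uint8_t mac[6])
{
	memcpy(&mac[0], &addr0, sizeof(uint32_t));
	memcpy(&mac[4], &addr1, sizeof(uint16_t));
}
```

Because both directions use memcpy, the round trip is endian-independent even though the intermediate register values are not.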
+ + /* Disable the RX path */ + val = 1; + val &= IPN3KE_MAC_RX_TRANSFER_CONTROL_MASK; + (*hw->f_mac_write)(hw, + val, + IPN3KE_MAC_RX_TRANSFER_CONTROL, + rpst->port_id, + 0); +} + +static void +ipn3ke_rpst_dev_close(struct rte_eth_dev *dev) +{ + struct ipn3ke_hw *hw = IPN3KE_DEV_PRIVATE_TO_HW(dev); + struct ipn3ke_rpst *rpst = IPN3KE_DEV_PRIVATE_TO_RPST(dev); + uint32_t val; + + /* Disable the TX path */ + val = 1; + val &= IPN3KE_MAC_TX_PACKET_CONTROL_MASK; + (*hw->f_mac_write)(hw, + val, + IPN3KE_MAC_TX_PACKET_CONTROL, + rpst->port_id, + 0); + + /* Disable the RX path */ + val = 1; + val &= IPN3KE_MAC_RX_TRANSFER_CONTROL_MASK; + (*hw->f_mac_write)(hw, + val, + IPN3KE_MAC_RX_TRANSFER_CONTROL, + rpst->port_id, + 0); +} + +/* + * Reset PF device only to re-initialize resources in PMD layer + */ +static int +ipn3ke_rpst_dev_reset(struct rte_eth_dev *dev) +{ + struct ipn3ke_hw *hw = IPN3KE_DEV_PRIVATE_TO_HW(dev); + struct ipn3ke_rpst *rpst = IPN3KE_DEV_PRIVATE_TO_RPST(dev); + uint32_t val; + + /* Disable the TX path */ + val = 1; + val &= IPN3KE_MAC_TX_PACKET_CONTROL_MASK; + (*hw->f_mac_write)(hw, + val, + IPN3KE_MAC_TX_PACKET_CONTROL, + rpst->port_id, + 0); + + /* Disable the RX path */ + val = 1; + val &= IPN3KE_MAC_RX_TRANSFER_CONTROL_MASK; + (*hw->f_mac_write)(hw, + val, + IPN3KE_MAC_RX_TRANSFER_CONTROL, + rpst->port_id, + 0); + + return 0; +} + +int +ipn3ke_rpst_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id) +{ + return 0; +} +int +ipn3ke_rpst_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id) +{ + return 0; +} +int +ipn3ke_rpst_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id) +{ + return 0; +} +int +ipn3ke_rpst_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id) +{ + return 0; +} +int +ipn3ke_rpst_rx_queue_setup(struct rte_eth_dev *dev, + uint16_t queue_idx, + uint16_t nb_desc, + unsigned int socket_id, + const struct rte_eth_rxconf *rx_conf, + struct rte_mempool *mp) +{ + return 0; +} +void 
+ipn3ke_rpst_rx_queue_release(void *rxq) +{ +} +int +ipn3ke_rpst_tx_queue_setup(struct rte_eth_dev *dev, + uint16_t queue_idx, + uint16_t nb_desc, + unsigned int socket_id, + const struct rte_eth_txconf *tx_conf) +{ + return 0; +} +void +ipn3ke_rpst_tx_queue_release(void *txq) +{ +} + +static int +ipn3ke_rpst_stats_get(struct rte_eth_dev *ethdev, + struct rte_eth_stats *stats) +{ + return 0; +} +static int +ipn3ke_rpst_xstats_get(struct rte_eth_dev *dev, + struct rte_eth_xstat *xstats, + unsigned int n) +{ + return 0; +} +static int ipn3ke_rpst_xstats_get_names(__rte_unused struct rte_eth_dev *dev, + struct rte_eth_xstat_name *xstats_names, + __rte_unused unsigned int limit) +{ + return 0; +} + +static void +ipn3ke_rpst_stats_reset(struct rte_eth_dev *ethdev) +{ +} + +static int +ipn3ke_retimer_conf_link(struct rte_rawdev *rawdev, + uint16_t port, + uint8_t force_speed, + bool is_up) +{ + struct ifpga_rawdevg_link_info linfo; + + linfo.port = port; + linfo.link_up = is_up; + linfo.link_speed = force_speed; + + return rawdev->dev_ops->attr_set(rawdev, + "retimer_linkstatus", + (uint64_t)&linfo); +} + +static void +ipn3ke_update_link(struct rte_rawdev *rawdev, + uint16_t port, + struct rte_eth_link *link) +{ + struct ifpga_rawdevg_link_info linfo; + + rawdev->dev_ops->attr_get(rawdev, + "retimer_linkstatus", + (uint64_t *)&linfo); + /* Parse the link status */ + link->link_status = linfo.link_up; + switch (linfo.link_speed) { + case IFPGA_RAWDEV_LINK_SPEED_10GB: + link->link_speed = ETH_SPEED_NUM_10G; + break; + case IFPGA_RAWDEV_LINK_SPEED_25GB: + link->link_speed = ETH_SPEED_NUM_25G; + break; + default: + IPN3KE_AFU_PMD_LOG(ERR, "Unknown link speed info %u", + linfo.link_speed); + break; + } +} + +/* + * Set device link up. 
+ */ +int +ipn3ke_rpst_dev_set_link_up(struct rte_eth_dev *dev) +{ + uint8_t link_speed = IFPGA_RAWDEV_LINK_SPEED_UNKNOWN; + struct ipn3ke_hw *hw = IPN3KE_DEV_PRIVATE_TO_HW(dev); + struct ipn3ke_rpst *rpst = IPN3KE_DEV_PRIVATE_TO_RPST(dev); + struct rte_rawdev *rawdev; + struct rte_eth_conf *conf = &dev->data->dev_conf; + + rawdev = hw->rawdev; + if (conf->link_speeds == ETH_LINK_SPEED_AUTONEG) { + conf->link_speeds = ETH_LINK_SPEED_25G | + ETH_LINK_SPEED_10G; + } + + if (conf->link_speeds & ETH_LINK_SPEED_10G) + link_speed = IFPGA_RAWDEV_LINK_SPEED_10GB; + else if (conf->link_speeds & ETH_LINK_SPEED_25G) + link_speed = IFPGA_RAWDEV_LINK_SPEED_25GB; + else + link_speed = IFPGA_RAWDEV_LINK_SPEED_UNKNOWN; + + if (rpst->i40e_pf_eth) + rte_eth_dev_set_link_up(rpst->i40e_pf_eth_port_id); + + return ipn3ke_retimer_conf_link(rawdev, + rpst->port_id, + link_speed, + true); +} + +/* + * Set device link down. + */ +int +ipn3ke_rpst_dev_set_link_down(struct rte_eth_dev *dev) +{ + uint8_t link_speed = IFPGA_RAWDEV_LINK_SPEED_UNKNOWN; + struct ipn3ke_hw *hw = IPN3KE_DEV_PRIVATE_TO_HW(dev); + struct ipn3ke_rpst *rpst = IPN3KE_DEV_PRIVATE_TO_RPST(dev); + + if (rpst->i40e_pf_eth) + rte_eth_dev_set_link_down(rpst->i40e_pf_eth_port_id); + + return ipn3ke_retimer_conf_link(hw->rawdev, + rpst->port_id, + link_speed, + false); +} +int +ipn3ke_rpst_link_update(struct rte_eth_dev *ethdev, + int wait_to_complete) +{ + struct ipn3ke_hw *hw = IPN3KE_DEV_PRIVATE_TO_HW(ethdev); + struct ipn3ke_rpst *rpst = IPN3KE_DEV_PRIVATE_TO_RPST(ethdev); + struct rte_rawdev *rawdev; + struct rte_eth_link link; + int ret; + + memset(&link, 0, sizeof(link)); + + link.link_duplex = ETH_LINK_FULL_DUPLEX; + link.link_autoneg = !(ethdev->data->dev_conf.link_speeds & + ETH_LINK_SPEED_FIXED); + + rawdev = hw->rawdev; + ipn3ke_update_link(rawdev, rpst->port_id, &link); + + if (rpst->i40e_pf_eth && + !rpst->ori_linfo.link_status && + link.link_status) + rte_eth_dev_set_link_up(rpst->i40e_pf_eth_port_id); + else if
((rpst->i40e_pf_eth != NULL) && + rpst->ori_linfo.link_status && + !link.link_status) + rte_eth_dev_set_link_down(rpst->i40e_pf_eth_port_id); + + rpst->ori_linfo.link_status = link.link_status; + rpst->ori_linfo.link_speed = link.link_speed; + + ret = rte_eth_linkstatus_set(ethdev, &link); + if (rpst->i40e_pf_eth) + ret = rte_eth_linkstatus_set(rpst->i40e_pf_eth, &link); + + return ret; +} + +static int +ipn3ke_rpst_link_update1(struct ipn3ke_rpst *rpst) +{ + struct ipn3ke_hw *hw; + struct rte_rawdev *rawdev; + struct rte_eth_link link; + int ret = 0; + + if (!rpst) + return -1; + + hw = rpst->hw; + + memset(&link, 0, sizeof(link)); + + rawdev = hw->rawdev; + ipn3ke_update_link(rawdev, rpst->port_id, &link); + + if (rpst->i40e_pf_eth && + !rpst->ori_linfo.link_status && + link.link_status) + rte_eth_dev_set_link_up(rpst->i40e_pf_eth_port_id); + else if ((rpst->i40e_pf_eth != NULL) && + rpst->ori_linfo.link_status && + !link.link_status) + rte_eth_dev_set_link_down(rpst->i40e_pf_eth_port_id); + + rpst->ori_linfo.link_status = link.link_status; + rpst->ori_linfo.link_speed = link.link_speed; + + if (rpst->i40e_pf_eth) + ret = rte_eth_linkstatus_set(rpst->i40e_pf_eth, &link); + + return ret; +} + +static void * +ipn3ke_rpst_scan_handle_request(void *param) +{ + struct ipn3ke_rpst *rpst; + int num = 0; +#define MS 1000 +#define SCAN_NUM 32 + + for (;;) { + num = 0; + TAILQ_FOREACH(rpst, &ipn3ke_rpst_list, next) { + if (rpst->i40e_pf_eth) + ipn3ke_rpst_link_update1(rpst); + + if (++num > SCAN_NUM) + usleep(1 * MS); + } + usleep(50 * MS); + + if (num == 0xffffff) + return NULL; + } + + return NULL; +} +static int +ipn3ke_rpst_scan_check(void) +{ + int ret; + + if (ipn3ke_rpst_scan_num == 1) { + ret = pthread_create(&ipn3ke_rpst_scan_thread, + NULL, + ipn3ke_rpst_scan_handle_request, NULL); + if (ret) { + IPN3KE_AFU_PMD_ERR("Fail to create ipn3ke rpst scan thread"); + return -1; + } + } else if (ipn3ke_rpst_scan_num == 0) { + ret = pthread_cancel(ipn3ke_rpst_scan_thread); 
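The link_update paths above only propagate the new state to the paired i40e PF port on a status transition: up when the cached `ori_linfo.link_status` was down and the fresh status is up, down on the opposite edge. A standalone sketch of that edge check, with illustrative names (the driver keeps the cached status in `rpst->ori_linfo`):

```c
#include <assert.h>

/* Classify a link-status transition the way ipn3ke_rpst_link_update
 * does: only the down->up and up->down edges trigger a call into the
 * paired PF port; a steady state triggers nothing.  The *_ACTION
 * values are illustrative, not from the driver.
 */
enum link_action { LINK_NONE = 0, LINK_RAISE = 1, LINK_DROP = 2 };

static enum link_action
link_edge(int cached_status, int new_status)
{
	if (!cached_status && new_status)
		return LINK_RAISE;	/* down -> up: set PF link up */
	if (cached_status && !new_status)
		return LINK_DROP;	/* up -> down: set PF link down */
	return LINK_NONE;		/* no transition: nothing to do */
}
```

After classifying the edge, the driver refreshes the cached status, which is why the same comparison works on the next poll of the scan thread.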
+		if (ret)
+			IPN3KE_AFU_PMD_ERR("Can't cancel the thread");
+
+		ret = pthread_join(ipn3ke_rpst_scan_thread, NULL);
+		if (ret)
+			IPN3KE_AFU_PMD_ERR("Can't join the thread");
+
+		return ret;
+	}
+
+	return 0;
+}
+
+void
+ipn3ke_rpst_promiscuous_enable(struct rte_eth_dev *ethdev)
+{
+	struct ipn3ke_hw *hw = IPN3KE_DEV_PRIVATE_TO_HW(ethdev);
+	struct ipn3ke_rpst *rpst = IPN3KE_DEV_PRIVATE_TO_RPST(ethdev);
+	uint32_t rddata, val;
+
+	/* Enable all unicast */
+	(*hw->f_mac_read)(hw,
+			&rddata,
+			IPN3KE_MAC_RX_FRAME_CONTROL,
+			rpst->port_id,
+			0);
+	val = 1;
+	val &= IPN3KE_MAC_RX_FRAME_CONTROL_EN_ALLUCAST_MASK;
+	val |= rddata;
+	(*hw->f_mac_write)(hw,
+			val,
+			IPN3KE_MAC_RX_FRAME_CONTROL,
+			rpst->port_id,
+			0);
+}
+
+void
+ipn3ke_rpst_promiscuous_disable(struct rte_eth_dev *ethdev)
+{
+	struct ipn3ke_hw *hw = IPN3KE_DEV_PRIVATE_TO_HW(ethdev);
+	struct ipn3ke_rpst *rpst = IPN3KE_DEV_PRIVATE_TO_RPST(ethdev);
+	uint32_t rddata;
+
+	/* Disable all unicast */
+	(*hw->f_mac_read)(hw,
+			&rddata,
+			IPN3KE_MAC_RX_FRAME_CONTROL,
+			rpst->port_id,
+			0);
+	rddata &= ~IPN3KE_MAC_RX_FRAME_CONTROL_EN_ALLUCAST_MASK;
+	(*hw->f_mac_write)(hw,
+			rddata,
+			IPN3KE_MAC_RX_FRAME_CONTROL,
+			rpst->port_id,
+			0);
+}
+
+void
+ipn3ke_rpst_allmulticast_enable(struct rte_eth_dev *ethdev)
+{
+	struct ipn3ke_hw *hw = IPN3KE_DEV_PRIVATE_TO_HW(ethdev);
+	struct ipn3ke_rpst *rpst = IPN3KE_DEV_PRIVATE_TO_RPST(ethdev);
+	uint32_t rddata, val;
+
+	/* Enable all multicast */
+	(*hw->f_mac_read)(hw,
+			&rddata,
+			IPN3KE_MAC_RX_FRAME_CONTROL,
+			rpst->port_id,
+			0);
+	val = 1;
+	val <<= IPN3KE_MAC_RX_FRAME_CONTROL_EN_ALLMCAST_SHIFT;
+	val &= IPN3KE_MAC_RX_FRAME_CONTROL_EN_ALLMCAST_MASK;
+	val |= rddata;
+	(*hw->f_mac_write)(hw,
+			val,
+			IPN3KE_MAC_RX_FRAME_CONTROL,
+			rpst->port_id,
+			0);
+}
+
+void
+ipn3ke_rpst_allmulticast_disable(struct rte_eth_dev *ethdev)
+{
+	struct ipn3ke_hw *hw = IPN3KE_DEV_PRIVATE_TO_HW(ethdev);
+	struct ipn3ke_rpst *rpst = IPN3KE_DEV_PRIVATE_TO_RPST(ethdev);
+	uint32_t rddata;
+
+	/* Disable all multicast */
+	(*hw->f_mac_read)(hw,
+			&rddata,
+			IPN3KE_MAC_RX_FRAME_CONTROL,
+			rpst->port_id,
+			0);
+	rddata &= ~IPN3KE_MAC_RX_FRAME_CONTROL_EN_ALLMCAST_MASK;
+	(*hw->f_mac_write)(hw,
+			rddata,
+			IPN3KE_MAC_RX_FRAME_CONTROL,
+			rpst->port_id,
+			0);
+}
+
+int
+ipn3ke_rpst_mac_addr_set(struct rte_eth_dev *ethdev,
+	struct ether_addr *mac_addr)
+{
+	struct ipn3ke_hw *hw = IPN3KE_DEV_PRIVATE_TO_HW(ethdev);
+	struct ipn3ke_rpst *rpst = IPN3KE_DEV_PRIVATE_TO_RPST(ethdev);
+	uint32_t val;
+
+	if (!is_valid_assigned_ether_addr(mac_addr)) {
+		IPN3KE_AFU_PMD_LOG(ERR, "Tried to set invalid MAC address.");
+		return -EINVAL;
+	}
+
+	ether_addr_copy(&mac_addr[0], &rpst->mac_addr);
+
+	/* Set the low 32 bits of the mac address */
+	rte_memcpy(((char *)(&val)), &mac_addr->addr_bytes[0],
+		sizeof(uint32_t));
+	(*hw->f_mac_write)(hw,
+			val,
+			IPN3KE_MAC_PRIMARY_MAC_ADDR0,
+			rpst->port_id,
+			0);
+	/* Set the high 16 bits of the mac address */
+	rte_memcpy(((char *)(&val)), &mac_addr->addr_bytes[4],
+		sizeof(uint16_t));
+	(*hw->f_mac_write)(hw,
+			val,
+			IPN3KE_MAC_PRIMARY_MAC_ADDR1,
+			rpst->port_id,
+			0);
+
+	return 0;
+}
+
+int
+ipn3ke_rpst_mtu_set(struct rte_eth_dev *ethdev, uint16_t mtu)
+{
+	struct ipn3ke_hw *hw = IPN3KE_DEV_PRIVATE_TO_HW(ethdev);
+	struct ipn3ke_rpst *rpst = IPN3KE_DEV_PRIVATE_TO_RPST(ethdev);
+	uint32_t frame_size = mtu;
+
+	/* check if mtu is within the allowed range */
+	if ((frame_size < ETHER_MIN_MTU) ||
+		(frame_size > IPN3KE_MAC_FRAME_SIZE_MAX))
+		return -EINVAL;
+
+	frame_size &= IPN3KE_MAC_RX_FRAME_MAXLENGTH_MASK;
+	(*hw->f_mac_write)(hw,
+			frame_size,
+			IPN3KE_MAC_RX_FRAME_MAXLENGTH,
+			rpst->port_id,
+			0);
+
+	if (rpst->i40e_pf_eth)
+		rpst->i40e_pf_eth->dev_ops->mtu_set(rpst->i40e_pf_eth, mtu);
+
+	return 0;
+}
+
+static int
+ipn3ke_afu_filter_ctrl(struct rte_eth_dev *ethdev,
+	enum rte_filter_type filter_type,
+	enum rte_filter_op filter_op,
+	void *arg)
+{
+	struct ipn3ke_hw *hw = IPN3KE_DEV_PRIVATE_TO_HW(ethdev);
+	struct ipn3ke_rpst
*rpst = IPN3KE_DEV_PRIVATE_TO_RPST(ethdev); + int ret = 0; + + if (ethdev == NULL) + return -EINVAL; + + if (hw->acc_flow) + switch (filter_type) { + case RTE_ETH_FILTER_GENERIC: + if (filter_op != RTE_ETH_FILTER_GET) + return -EINVAL; + *(const void **)arg = &ipn3ke_flow_ops; + break; + default: + IPN3KE_AFU_PMD_WARN("Filter type (%d) not supported", + filter_type); + ret = -EINVAL; + break; + } + else if (rpst->i40e_pf_eth) + (*rpst->i40e_pf_eth->dev_ops->filter_ctrl)(ethdev, + filter_type, + filter_op, + arg); + else + return -EINVAL; + + return ret; +} + +struct eth_dev_ops ipn3ke_rpst_dev_ops = { + .dev_infos_get = ipn3ke_rpst_dev_infos_get, + + .dev_configure = ipn3ke_rpst_dev_configure, + .dev_start = ipn3ke_rpst_dev_start, + .dev_stop = ipn3ke_rpst_dev_stop, + .dev_close = ipn3ke_rpst_dev_close, + .dev_reset = ipn3ke_rpst_dev_reset, + + .stats_get = ipn3ke_rpst_stats_get, + .xstats_get = ipn3ke_rpst_xstats_get, + .xstats_get_names = ipn3ke_rpst_xstats_get_names, + .stats_reset = ipn3ke_rpst_stats_reset, + .xstats_reset = ipn3ke_rpst_stats_reset, + + .filter_ctrl = ipn3ke_afu_filter_ctrl, + + .rx_queue_start = ipn3ke_rpst_rx_queue_start, + .rx_queue_stop = ipn3ke_rpst_rx_queue_stop, + .tx_queue_start = ipn3ke_rpst_tx_queue_start, + .tx_queue_stop = ipn3ke_rpst_tx_queue_stop, + .rx_queue_setup = ipn3ke_rpst_rx_queue_setup, + .rx_queue_release = ipn3ke_rpst_rx_queue_release, + .tx_queue_setup = ipn3ke_rpst_tx_queue_setup, + .tx_queue_release = ipn3ke_rpst_tx_queue_release, + + .dev_set_link_up = ipn3ke_rpst_dev_set_link_up, + .dev_set_link_down = ipn3ke_rpst_dev_set_link_down, + .link_update = ipn3ke_rpst_link_update, + + .promiscuous_enable = ipn3ke_rpst_promiscuous_enable, + .promiscuous_disable = ipn3ke_rpst_promiscuous_disable, + .allmulticast_enable = ipn3ke_rpst_allmulticast_enable, + .allmulticast_disable = ipn3ke_rpst_allmulticast_disable, + .mac_addr_set = ipn3ke_rpst_mac_addr_set, + /*.get_reg = ipn3ke_get_regs,*/ + .mtu_set = ipn3ke_rpst_mtu_set, + 
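The promiscuous and all-multicast handlers above follow a read-modify-write pattern on IPN3KE_MAC_RX_FRAME_CONTROL: read the register, set or clear one masked field, write the result back. Enable must OR the field in without disturbing neighbours, and disable must clear it with the inverted mask. A minimal sketch of the two halves on a register image; the mask value below is made up for the example, the driver's real masks live in its headers:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative field mask; not the driver's actual constant. */
#define EN_ALLMCAST_MASK 0x00000002u

/* Set the field selected by 'mask' in a register image,
 * preserving all other bits (the enable path).
 */
static uint32_t
rmw_set(uint32_t reg, uint32_t mask)
{
	return reg | mask;
}

/* Clear the field selected by 'mask' in a register image,
 * preserving all other bits (the disable path).
 */
static uint32_t
rmw_clear(uint32_t reg, uint32_t mask)
{
	return reg & ~mask;
}
```

Note that `reg | (0 & mask)` would leave the register unchanged, which is why the disable path has to use the inverted mask rather than OR-ing in a zero value.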
+
+	/**
+	 * .rxq_info_get = ipn3ke_rxq_info_get,
+	 * .txq_info_get = ipn3ke_txq_info_get,
+	 * .fw_version_get = ,
+	 * .get_module_info = ipn3ke_get_module_info,
+	 */
+
+	.tm_ops_get = ipn3ke_tm_ops_get,
+};
+
+static uint16_t ipn3ke_rpst_recv_pkts(void *rx_q,
+	struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
+{
+	return 0;
+}
+
+static uint16_t
+ipn3ke_rpst_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
+	uint16_t nb_pkts)
+{
+	return 0;
+}
+
+int
+ipn3ke_rpst_init(struct rte_eth_dev *ethdev, void *init_params)
+{
+	struct ipn3ke_rpst *rpst = IPN3KE_DEV_PRIVATE_TO_RPST(ethdev);
+	struct ipn3ke_rpst *representor_param =
+			(struct ipn3ke_rpst *)init_params;
+
+	if (representor_param->port_id >= representor_param->hw->port_num)
+		return -ENODEV;
+
+	rpst->switch_domain_id = representor_param->switch_domain_id;
+	rpst->port_id = representor_param->port_id;
+	rpst->hw = representor_param->hw;
+	rpst->i40e_pf_eth = NULL;
+	rpst->i40e_pf_eth_port_id = 0xFFFF;
+
+	if (rpst->hw->tm_hw_enable)
+		ipn3ke_tm_init(rpst);
+
+	/** representor shares the same driver as its PF device */
+	/**
+	 * ethdev->device->driver = rpst->hw->eth_dev->device->driver;
+	 */
+
+	/* Set representor device ops */
+	ethdev->dev_ops = &ipn3ke_rpst_dev_ops;
+
+	/* No data-path, but need stub Rx/Tx functions to avoid crash
+	 * when testing with the likes of testpmd.
+	 */
+	ethdev->rx_pkt_burst = ipn3ke_rpst_recv_pkts;
+	ethdev->tx_pkt_burst = ipn3ke_rpst_xmit_pkts;
+
+	ethdev->data->nb_rx_queues = 1;
+	ethdev->data->nb_tx_queues = 1;
+
+	ethdev->data->mac_addrs = rte_zmalloc("ipn3ke_afu_representor",
+						ETHER_ADDR_LEN,
+						0);
+	if (!ethdev->data->mac_addrs) {
+		IPN3KE_AFU_PMD_ERR(
+			"Failed to allocate memory for storing mac address");
+		return -ENOMEM;
+	}
+
+	ethdev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR;
+
+	rte_spinlock_lock(&ipn3ke_link_notify_list_lk);
+	TAILQ_INSERT_TAIL(&ipn3ke_rpst_list, rpst, next);
+	ipn3ke_rpst_scan_num++;
+	ipn3ke_rpst_scan_check();
+	rte_spinlock_unlock(&ipn3ke_link_notify_list_lk);
+
+	return 0;
+}
+
+int
+ipn3ke_rpst_uninit(struct rte_eth_dev *ethdev)
+{
+	struct ipn3ke_rpst *rpst = IPN3KE_DEV_PRIVATE_TO_RPST(ethdev);
+
+	rte_spinlock_lock(&ipn3ke_link_notify_list_lk);
+	TAILQ_REMOVE(&ipn3ke_rpst_list, rpst, next);
+	ipn3ke_rpst_scan_num--;
+	ipn3ke_rpst_scan_check();
+	rte_spinlock_unlock(&ipn3ke_link_notify_list_lk);
+
+	return 0;
+}
diff --git a/drivers/net/ipn3ke/ipn3ke_tm.c b/drivers/net/ipn3ke/ipn3ke_tm.c
new file mode 100644
index 0000000..efd7154
--- /dev/null
+++ b/drivers/net/ipn3ke/ipn3ke_tm.c
@@ -0,0 +1,2217 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019 Intel Corporation
+ */
+
+#include
+#include
+#include
+
+#include
+#include
+#include
+#include
+#include
+
+#include
+#include
+#include
+
+#include
+#include
+#include
+#include
+#include
+
+#include "ifpga_rawdev_api.h"
+#include "ipn3ke_tm.h"
+#include "ipn3ke_flow.h"
+#include "ipn3ke_logs.h"
+#include "ipn3ke_ethdev.h"
+
+#define BYTES_IN_MBPS (1000 * 1000 / 8)
+#define SUBPORT_TC_PERIOD 10
+#define PIPE_TC_PERIOD 40
+
+struct ipn3ke_tm_shaper_params_range_type {
+	uint32_t m1;
+	uint32_t m2;
+	uint32_t exp;
+	uint32_t exp2;
+	uint32_t low;
+	uint32_t high;
+};
+struct ipn3ke_tm_shaper_params_range_type ipn3ke_tm_shaper_params_rang[] = {
+	{ 0, 1, 0, 1, 0, 4},
+	{ 2, 3, 0, 1, 8, 12},
+	{ 4, 7, 0, 1,
16, 28}, + { 8, 15, 0, 1, 32, 60}, + { 16, 31, 0, 1, 64, 124}, + { 32, 63, 0, 1, 128, 252}, + { 64, 127, 0, 1, 256, 508}, + {128, 255, 0, 1, 512, 1020}, + {256, 511, 0, 1, 1024, 2044}, + {512, 1023, 0, 1, 2048, 4092}, + {512, 1023, 1, 2, 4096, 8184}, + {512, 1023, 2, 4, 8192, 16368}, + {512, 1023, 3, 8, 16384, 32736}, + {512, 1023, 4, 16, 32768, 65472}, + {512, 1023, 5, 32, 65536, 130944}, + {512, 1023, 6, 64, 131072, 261888}, + {512, 1023, 7, 128, 262144, 523776}, + {512, 1023, 8, 256, 524288, 1047552}, + {512, 1023, 9, 512, 1048576, 2095104}, + {512, 1023, 10, 1024, 2097152, 4190208}, + {512, 1023, 11, 2048, 4194304, 8380416}, + {512, 1023, 12, 4096, 8388608, 16760832}, + {512, 1023, 13, 8192, 16777216, 33521664}, + {512, 1023, 14, 16384, 33554432, 67043328}, + {512, 1023, 15, 32768, 67108864, 134086656}, +}; + +#define IPN3KE_TM_SHAPER_RANGE_NUM (sizeof(ipn3ke_tm_shaper_params_rang) / \ + sizeof(struct ipn3ke_tm_shaper_params_range_type)) + +#define IPN3KE_TM_SHAPER_COMMITTED_RATE_MAX \ + (ipn3ke_tm_shaper_params_rang[IPN3KE_TM_SHAPER_RANGE_NUM - 1].high) + +#define IPN3KE_TM_SHAPER_PEAK_RATE_MAX \ + (ipn3ke_tm_shaper_params_rang[IPN3KE_TM_SHAPER_RANGE_NUM - 1].high) + +int +ipn3ke_hw_tm_init(struct ipn3ke_hw *hw) +{ +#define SCRATCH_DATA 0xABCDEF + struct ipn3ke_tm_node *nodes; + struct ipn3ke_tm_tdrop_profile *tdrop_profile; + int node_num; + int i; + + if (hw == NULL) + return -EINVAL; +#if IPN3KE_TM_SCRATCH_RW + uint32_t scratch_data; + IPN3KE_MASK_WRITE_REG(hw, + IPN3KE_TM_SCRATCH, + 0, + SCRATCH_DATA, + 0xFFFFFFFF); + scratch_data = IPN3KE_MASK_READ_REG(hw, + IPN3KE_TM_SCRATCH, + 0, + 0xFFFFFFFF); + if (scratch_data != SCRATCH_DATA) + return -EINVAL; +#endif + /* alloc memory for all hierarchy nodes */ + node_num = hw->port_num + + IPN3KE_TM_VT_NODE_NUM + + IPN3KE_TM_COS_NODE_NUM; + + nodes = rte_zmalloc("ipn3ke_tm_nodes", + sizeof(struct ipn3ke_tm_node) * node_num, + 0); + if (!nodes) + return -ENOMEM; + + /* alloc memory for Tail Drop Profile */ + 
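The range table above defines a mantissa/exponent encoding for shaper rates: a rate in [low, high] is encoded as m = (rate / 4) / exp2 with exponent exp, so the decoded rate is approximately m * exp2 * 4. A sketch of the lookup, reproducing only three of the driver's table rows for illustration (the full table and the translation live in ipn3ke_tm.c):

```c
#include <assert.h>
#include <stdint.h>

/* A few rows of the shaper range table: {m1, m2, exp, exp2, low, high}.
 * The mantissa of an encoded rate must land in [m1, m2].
 */
struct shaper_range {
	uint32_t m1, m2, exp, exp2, low, high;
};

static const struct shaper_range rang[] = {
	{   0,    1, 0, 1,    0,    4 },
	{   2,    3, 0, 1,    8,   12 },
	{ 512, 1023, 1, 2, 4096, 8184 },
};

/* Encode 'rate' into mantissa *m and exponent *e the way
 * the driver's translation does; returns 0 on success, -1 if
 * the rate falls outside every configured range.
 */
static int
shaper_encode(uint64_t rate, uint32_t *m, uint32_t *e)
{
	unsigned int i;

	for (i = 0; i < sizeof(rang) / sizeof(rang[0]); i++) {
		if (rate >= rang[i].low && rate <= rang[i].high) {
			*m = (uint32_t)((rate / 4) / rang[i].exp2);
			*e = rang[i].exp;
			return 0;
		}
	}
	return -1;
}
```

Checking a boundary against the table: rate 8184 in the last row gives m = (8184 / 4) / 2 = 1023, which is exactly that row's m2, so the [low, high] bounds and the [m1, m2] mantissa range are consistent.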
tdrop_profile = rte_zmalloc("ipn3ke_tm_tdrop_profile", + sizeof(struct ipn3ke_tm_tdrop_profile) * + IPN3KE_TM_TDROP_PROFILE_NUM, + 0); + if (!tdrop_profile) { + rte_free(nodes); + return -ENOMEM; + } + + hw->nodes = nodes; + hw->port_nodes = nodes; + hw->vt_nodes = hw->port_nodes + hw->port_num; + hw->cos_nodes = hw->vt_nodes + IPN3KE_TM_VT_NODE_NUM; + hw->tdrop_profile = tdrop_profile; + hw->tdrop_profile_num = IPN3KE_TM_TDROP_PROFILE_NUM; + + for (i = 0, nodes = hw->port_nodes; + i < hw->port_num; + i++, nodes++) { + nodes->node_index = i; + nodes->level = IPN3KE_TM_NODE_LEVEL_PORT; + nodes->tm_id = RTE_TM_NODE_ID_NULL; + nodes->node_state = IPN3KE_TM_NODE_STATE_IDLE; + nodes->parent_node_id = RTE_TM_NODE_ID_NULL; + nodes->priority = IPN3KE_TM_NODE_PRIORITY_NORMAL0; + nodes->weight = 0; + nodes->parent_node = NULL; + nodes->shaper_profile.valid = 0; + nodes->tdrop_profile = NULL; + nodes->n_children = 0; + TAILQ_INIT(&nodes->children_node_list); + } + + for (i = 0, nodes = hw->vt_nodes; + i < IPN3KE_TM_VT_NODE_NUM; + i++, nodes++) { + nodes->node_index = i; + nodes->level = IPN3KE_TM_NODE_LEVEL_VT; + nodes->tm_id = RTE_TM_NODE_ID_NULL; + nodes->node_state = IPN3KE_TM_NODE_STATE_IDLE; + nodes->parent_node_id = RTE_TM_NODE_ID_NULL; + nodes->priority = IPN3KE_TM_NODE_PRIORITY_NORMAL0; + nodes->weight = 0; + nodes->parent_node = NULL; + nodes->shaper_profile.valid = 0; + nodes->tdrop_profile = NULL; + nodes->n_children = 0; + TAILQ_INIT(&nodes->children_node_list); + } + + for (i = 0, nodes = hw->cos_nodes; + i < IPN3KE_TM_COS_NODE_NUM; + i++, nodes++) { + nodes->node_index = i; + nodes->level = IPN3KE_TM_NODE_LEVEL_COS; + nodes->tm_id = RTE_TM_NODE_ID_NULL; + nodes->node_state = IPN3KE_TM_NODE_STATE_IDLE; + nodes->parent_node_id = RTE_TM_NODE_ID_NULL; + nodes->priority = IPN3KE_TM_NODE_PRIORITY_NORMAL0; + nodes->weight = 0; + nodes->parent_node = NULL; + nodes->shaper_profile.valid = 0; + nodes->tdrop_profile = NULL; + nodes->n_children = 0; + 
TAILQ_INIT(&nodes->children_node_list); + } + + for (i = 0, tdrop_profile = hw->tdrop_profile; + i < IPN3KE_TM_TDROP_PROFILE_NUM; + i++, tdrop_profile++) { + tdrop_profile->tdrop_profile_id = i; + tdrop_profile->n_users = 0; + tdrop_profile->valid = 0; + } + + return 0; +} +void +ipn3ke_tm_init(struct ipn3ke_rpst *rpst) +{ + struct ipn3ke_tm_internals *tm; + struct ipn3ke_tm_node *port_node; + + tm = &rpst->tm; + + port_node = &rpst->hw->port_nodes[rpst->port_id]; + tm->h.port_node = port_node; + + tm->h.n_shaper_profiles = 0; + tm->h.n_tdrop_profiles = 0; + tm->h.n_vt_nodes = 0; + tm->h.n_cos_nodes = 0; + + tm->h.port_commit_node = NULL; + TAILQ_INIT(&tm->h.vt_commit_node_list); + TAILQ_INIT(&tm->h.cos_commit_node_list); + + tm->hierarchy_frozen = 0; + tm->tm_started = 1; + tm->tm_id = rpst->port_id; +} + +static struct ipn3ke_tm_shaper_profile * +ipn3ke_hw_tm_shaper_profile_search(struct ipn3ke_hw *hw, + uint32_t shaper_profile_id, + struct rte_tm_error *error) +{ + struct ipn3ke_tm_shaper_profile *sp = NULL; + uint32_t level_of_node_id; + uint32_t node_index; + + /* Shaper profile ID must not be NONE. 
*/ + if (shaper_profile_id == RTE_TM_SHAPER_PROFILE_ID_NONE) { + rte_tm_error_set(error, + EINVAL, + RTE_TM_ERROR_TYPE_SHAPER_PROFILE_ID, + NULL, + rte_strerror(EINVAL)); + + return NULL; + } + + level_of_node_id = shaper_profile_id / IPN3KE_TM_NODE_LEVEL_MOD; + node_index = shaper_profile_id % IPN3KE_TM_NODE_LEVEL_MOD; + + switch (level_of_node_id) { + case IPN3KE_TM_NODE_LEVEL_PORT: + if (node_index >= hw->port_num) + rte_tm_error_set(error, + EEXIST, + RTE_TM_ERROR_TYPE_SHAPER_PROFILE_ID, + NULL, + rte_strerror(EEXIST)); + else + sp = &hw->port_nodes[node_index].shaper_profile; + + break; + + case IPN3KE_TM_NODE_LEVEL_VT: + if (node_index >= IPN3KE_TM_VT_NODE_NUM) + rte_tm_error_set(error, + EEXIST, + RTE_TM_ERROR_TYPE_SHAPER_PROFILE_ID, + NULL, + rte_strerror(EEXIST)); + else + sp = &hw->vt_nodes[node_index].shaper_profile; + + break; + + case IPN3KE_TM_NODE_LEVEL_COS: + if (node_index >= IPN3KE_TM_COS_NODE_NUM) + rte_tm_error_set(error, + EEXIST, + RTE_TM_ERROR_TYPE_SHAPER_PROFILE_ID, + NULL, + rte_strerror(EEXIST)); + else + sp = &hw->cos_nodes[node_index].shaper_profile; + + break; + default: + rte_tm_error_set(error, + EEXIST, + RTE_TM_ERROR_TYPE_SHAPER_PROFILE_ID, + NULL, + rte_strerror(EEXIST)); + } + + return sp; +} + +static struct ipn3ke_tm_tdrop_profile * +ipn3ke_hw_tm_tdrop_profile_search(struct ipn3ke_hw *hw, + uint32_t tdrop_profile_id) +{ + struct ipn3ke_tm_tdrop_profile *tdrop_profile; + + if (tdrop_profile_id >= hw->tdrop_profile_num) + return NULL; + + tdrop_profile = &hw->tdrop_profile[tdrop_profile_id]; + if (tdrop_profile->valid) + return tdrop_profile; + + return NULL; +} + +static struct ipn3ke_tm_node * +ipn3ke_hw_tm_node_search(struct ipn3ke_hw *hw, uint32_t tm_id, + uint32_t node_id, uint32_t state_mask) +{ + uint32_t level_of_node_id; + uint32_t node_index; + struct ipn3ke_tm_node *n; + + level_of_node_id = node_id / IPN3KE_TM_NODE_LEVEL_MOD; + node_index = node_id % IPN3KE_TM_NODE_LEVEL_MOD; + + switch (level_of_node_id) { + case 
IPN3KE_TM_NODE_LEVEL_PORT: + if (node_index >= hw->port_num) + return NULL; + n = &hw->port_nodes[node_index]; + + break; + case IPN3KE_TM_NODE_LEVEL_VT: + if (node_index >= IPN3KE_TM_VT_NODE_NUM) + return NULL; + n = &hw->vt_nodes[node_index]; + + break; + case IPN3KE_TM_NODE_LEVEL_COS: + if (node_index >= IPN3KE_TM_COS_NODE_NUM) + return NULL; + n = &hw->cos_nodes[node_index]; + + break; + default: + return NULL; + } + + /* Check tm node status */ + if (n->node_state == IPN3KE_TM_NODE_STATE_IDLE) { + if ((n->tm_id != RTE_TM_NODE_ID_NULL) || + (n->parent_node_id != RTE_TM_NODE_ID_NULL) || + (n->parent_node != NULL) || + (n->n_children > 0)) { + IPN3KE_ASSERT(0); + IPN3KE_AFU_PMD_LOG(WARNING, + "tm node check error %d", 1); + } + } else if (n->node_state < IPN3KE_TM_NODE_STATE_MAX) { + if ((n->tm_id == RTE_TM_NODE_ID_NULL) || + ((level_of_node_id != IPN3KE_TM_NODE_LEVEL_PORT) && + (n->parent_node_id == RTE_TM_NODE_ID_NULL)) || + ((level_of_node_id != IPN3KE_TM_NODE_LEVEL_PORT) && + (n->parent_node == NULL))) { + IPN3KE_ASSERT(0); + IPN3KE_AFU_PMD_LOG(WARNING, + "tm node check error %d", 1); + } + } else { + IPN3KE_ASSERT(0); + IPN3KE_AFU_PMD_LOG(WARNING, + "tm node check error %d", 1); + } + + if (IPN3KE_BIT_ISSET(state_mask, n->node_state)) { + if (n->node_state == IPN3KE_TM_NODE_STATE_IDLE) + return n; + else if (n->tm_id == tm_id) + return n; + else + return NULL; + } else + return NULL; +} + +/* Traffic manager node type get */ +static int +ipn3ke_pmd_tm_node_type_get(struct rte_eth_dev *dev, + uint32_t node_id, + int *is_leaf, + struct rte_tm_error *error) +{ + struct ipn3ke_hw *hw = IPN3KE_DEV_PRIVATE_TO_HW(dev); + struct ipn3ke_tm_internals *tm = IPN3KE_DEV_PRIVATE_TO_TM(dev); + uint32_t tm_id; + struct ipn3ke_tm_node *node; + uint32_t state_mask; + + if (is_leaf == NULL) + return -rte_tm_error_set(error, + EINVAL, + RTE_TM_ERROR_TYPE_UNSPECIFIED, + NULL, + rte_strerror(EINVAL)); + + tm_id = tm->tm_id; + + state_mask = 0; + IPN3KE_BIT_SET(state_mask, 
IPN3KE_TM_NODE_STATE_COMMITTED); + node = ipn3ke_hw_tm_node_search(hw, tm_id, node_id, state_mask); + if (node_id == RTE_TM_NODE_ID_NULL || + (node == NULL)) + return -rte_tm_error_set(error, + EINVAL, + RTE_TM_ERROR_TYPE_NODE_ID, + NULL, + rte_strerror(EINVAL)); + + *is_leaf = (node->level == IPN3KE_TM_NODE_LEVEL_COS) ? 1 : 0; + + return 0; +} + +#define WRED_SUPPORTED 0 + +#define STATS_MASK_DEFAULT \ + (RTE_TM_STATS_N_PKTS | \ + RTE_TM_STATS_N_BYTES | \ + RTE_TM_STATS_N_PKTS_GREEN_DROPPED | \ + RTE_TM_STATS_N_BYTES_GREEN_DROPPED) + +#define STATS_MASK_QUEUE \ + (STATS_MASK_DEFAULT | RTE_TM_STATS_N_PKTS_QUEUED) + +/* Traffic manager capabilities get */ +static int +ipn3ke_tm_capabilities_get(struct rte_eth_dev *dev, + struct rte_tm_capabilities *cap, + struct rte_tm_error *error) +{ + if (cap == NULL) + return -rte_tm_error_set(error, + EINVAL, + RTE_TM_ERROR_TYPE_CAPABILITIES, + NULL, + rte_strerror(EINVAL)); + + /* set all the parameters to 0 first. */ + memset(cap, 0, sizeof(*cap)); + + cap->n_nodes_max = 1 + IPN3KE_TM_COS_NODE_NUM + IPN3KE_TM_VT_NODE_NUM; + cap->n_levels_max = IPN3KE_TM_NODE_LEVEL_MAX; + + cap->non_leaf_nodes_identical = 0; + cap->leaf_nodes_identical = 1; + + cap->shaper_n_max = 1 + IPN3KE_TM_VT_NODE_NUM; + cap->shaper_private_n_max = 1 + IPN3KE_TM_VT_NODE_NUM; + cap->shaper_private_dual_rate_n_max = 0; + cap->shaper_private_rate_min = 1; + cap->shaper_private_rate_max = 1 + IPN3KE_TM_VT_NODE_NUM; + + cap->shaper_shared_n_max = 0; + cap->shaper_shared_n_nodes_per_shaper_max = 0; + cap->shaper_shared_n_shapers_per_node_max = 0; + cap->shaper_shared_dual_rate_n_max = 0; + cap->shaper_shared_rate_min = 0; + cap->shaper_shared_rate_max = 0; + + cap->shaper_pkt_length_adjust_min = RTE_TM_ETH_FRAMING_OVERHEAD_FCS; + cap->shaper_pkt_length_adjust_max = RTE_TM_ETH_FRAMING_OVERHEAD_FCS; + + cap->sched_n_children_max = IPN3KE_TM_COS_NODE_NUM; + cap->sched_sp_n_priorities_max = 3; + cap->sched_wfq_n_children_per_group_max = UINT32_MAX; + 
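ipn3ke_hw_tm_node_search above decodes a node_id into its hierarchy level and per-level index with level = node_id / IPN3KE_TM_NODE_LEVEL_MOD and index = node_id % IPN3KE_TM_NODE_LEVEL_MOD; the shaper-profile search uses the same packing. A sketch of the encode/decode pair, using a placeholder MOD value since the real constant is defined in ipn3ke_tm.h and is not visible in this patch hunk:

```c
#include <assert.h>
#include <stdint.h>

/* Placeholder for IPN3KE_TM_NODE_LEVEL_MOD; any value larger than
 * the biggest per-level node count works the same way.
 */
#define TM_NODE_LEVEL_MOD 100000u

enum tm_level { LEVEL_PORT = 0, LEVEL_VT = 1, LEVEL_COS = 2 };

/* Pack a level and a per-level index into one node_id. */
static uint32_t
tm_node_id(uint32_t level, uint32_t index)
{
	return level * TM_NODE_LEVEL_MOD + index;
}

/* Recover the level and index from a packed node_id. */
static void
tm_node_split(uint32_t node_id, uint32_t *level, uint32_t *index)
{
	*level = node_id / TM_NODE_LEVEL_MOD;
	*index = node_id % TM_NODE_LEVEL_MOD;
}
```

The split is only valid while every per-level index stays below MOD, which is why the search functions bounds-check node_index against port_num, IPN3KE_TM_VT_NODE_NUM, or IPN3KE_TM_COS_NODE_NUM before indexing the node arrays.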
cap->sched_wfq_n_groups_max = 1; + cap->sched_wfq_weight_max = UINT32_MAX; + + cap->cman_wred_packet_mode_supported = 0; + cap->cman_wred_byte_mode_supported = 0; + cap->cman_head_drop_supported = 0; + cap->cman_wred_context_n_max = 0; + cap->cman_wred_context_private_n_max = 0; + cap->cman_wred_context_shared_n_max = 0; + cap->cman_wred_context_shared_n_nodes_per_context_max = 0; + cap->cman_wred_context_shared_n_contexts_per_node_max = 0; + + /** + * cap->mark_vlan_dei_supported = {0, 0, 0}; + * cap->mark_ip_ecn_tcp_supported = {0, 0, 0}; + * cap->mark_ip_ecn_sctp_supported = {0, 0, 0}; + * cap->mark_ip_dscp_supported = {0, 0, 0}; + */ + + cap->dynamic_update_mask = 0; + + cap->stats_mask = 0; + + return 0; +} + +/* Traffic manager level capabilities get */ +static int +ipn3ke_tm_level_capabilities_get(struct rte_eth_dev *dev, + uint32_t level_id, + struct rte_tm_level_capabilities *cap, + struct rte_tm_error *error) +{ + struct ipn3ke_hw *hw = IPN3KE_DEV_PRIVATE_TO_HW(dev); + + if (cap == NULL) + return -rte_tm_error_set(error, + EINVAL, + RTE_TM_ERROR_TYPE_CAPABILITIES, + NULL, + rte_strerror(EINVAL)); + + if (level_id >= IPN3KE_TM_NODE_LEVEL_MAX) + return -rte_tm_error_set(error, + EINVAL, + RTE_TM_ERROR_TYPE_LEVEL_ID, + NULL, + rte_strerror(EINVAL)); + + /* set all the parameters to 0 first. 
*/ + memset(cap, 0, sizeof(*cap)); + + switch (level_id) { + case IPN3KE_TM_NODE_LEVEL_PORT: + cap->n_nodes_max = hw->port_num; + cap->n_nodes_nonleaf_max = IPN3KE_TM_VT_NODE_NUM; + cap->n_nodes_leaf_max = 0; + cap->non_leaf_nodes_identical = 0; + cap->leaf_nodes_identical = 0; + + cap->nonleaf.shaper_private_supported = 0; + cap->nonleaf.shaper_private_dual_rate_supported = 0; + cap->nonleaf.shaper_private_rate_min = 1; + cap->nonleaf.shaper_private_rate_max = UINT32_MAX; + cap->nonleaf.shaper_shared_n_max = 0; + + cap->nonleaf.sched_n_children_max = IPN3KE_TM_VT_NODE_NUM; + cap->nonleaf.sched_sp_n_priorities_max = 1; + cap->nonleaf.sched_wfq_n_children_per_group_max = 0; + cap->nonleaf.sched_wfq_n_groups_max = 0; + cap->nonleaf.sched_wfq_weight_max = 0; + + cap->nonleaf.stats_mask = STATS_MASK_DEFAULT; + break; + + case IPN3KE_TM_NODE_LEVEL_VT: + cap->n_nodes_max = IPN3KE_TM_VT_NODE_NUM; + cap->n_nodes_nonleaf_max = IPN3KE_TM_COS_NODE_NUM; + cap->n_nodes_leaf_max = 0; + cap->non_leaf_nodes_identical = 0; + cap->leaf_nodes_identical = 0; + + cap->nonleaf.shaper_private_supported = 0; + cap->nonleaf.shaper_private_dual_rate_supported = 0; + cap->nonleaf.shaper_private_rate_min = 1; + cap->nonleaf.shaper_private_rate_max = UINT32_MAX; + cap->nonleaf.shaper_shared_n_max = 0; + + cap->nonleaf.sched_n_children_max = IPN3KE_TM_COS_NODE_NUM; + cap->nonleaf.sched_sp_n_priorities_max = 1; + cap->nonleaf.sched_wfq_n_children_per_group_max = 0; + cap->nonleaf.sched_wfq_n_groups_max = 0; + cap->nonleaf.sched_wfq_weight_max = 0; + + cap->nonleaf.stats_mask = STATS_MASK_DEFAULT; + break; + + case IPN3KE_TM_NODE_LEVEL_COS: + cap->n_nodes_max = IPN3KE_TM_COS_NODE_NUM; + cap->n_nodes_nonleaf_max = 0; + cap->n_nodes_leaf_max = IPN3KE_TM_COS_NODE_NUM; + cap->non_leaf_nodes_identical = 0; + cap->leaf_nodes_identical = 0; + + cap->leaf.shaper_private_supported = 0; + cap->leaf.shaper_private_dual_rate_supported = 0; + cap->leaf.shaper_private_rate_min = 0; + 
cap->leaf.shaper_private_rate_max = 0; + cap->leaf.shaper_shared_n_max = 0; + + cap->leaf.cman_head_drop_supported = 0; + cap->leaf.cman_wred_packet_mode_supported = WRED_SUPPORTED; + cap->leaf.cman_wred_byte_mode_supported = 0; + cap->leaf.cman_wred_context_private_supported = WRED_SUPPORTED; + cap->leaf.cman_wred_context_shared_n_max = 0; + + cap->leaf.stats_mask = STATS_MASK_QUEUE; + break; + + default: + return -rte_tm_error_set(error, + EINVAL, + RTE_TM_ERROR_TYPE_LEVEL_ID, + NULL, + rte_strerror(EINVAL)); + break; + } + + return 0; +} + +/* Traffic manager node capabilities get */ +static int +ipn3ke_tm_node_capabilities_get(struct rte_eth_dev *dev, + uint32_t node_id, + struct rte_tm_node_capabilities *cap, + struct rte_tm_error *error) +{ + struct ipn3ke_rpst *representor = IPN3KE_DEV_PRIVATE_TO_RPST(dev); + struct ipn3ke_hw *hw = IPN3KE_DEV_PRIVATE_TO_HW(dev); + struct ipn3ke_tm_internals *tm = IPN3KE_DEV_PRIVATE_TO_TM(dev); + uint32_t tm_id; + struct ipn3ke_tm_node *tm_node; + uint32_t state_mask; + + if (cap == NULL) + return -rte_tm_error_set(error, + EINVAL, + RTE_TM_ERROR_TYPE_CAPABILITIES, + NULL, + rte_strerror(EINVAL)); + + tm_id = tm->tm_id; + + state_mask = 0; + IPN3KE_BIT_SET(state_mask, IPN3KE_TM_NODE_STATE_COMMITTED); + tm_node = ipn3ke_hw_tm_node_search(hw, tm_id, node_id, state_mask); + if (tm_node == NULL) + return -rte_tm_error_set(error, + EINVAL, + RTE_TM_ERROR_TYPE_NODE_ID, + NULL, + rte_strerror(EINVAL)); + + if (tm_node->tm_id != representor->port_id) + return -rte_tm_error_set(error, + EINVAL, + RTE_TM_ERROR_TYPE_NODE_ID, + NULL, + rte_strerror(EINVAL)); + + /* set all the parameters to 0 first. 
*/ + memset(cap, 0, sizeof(*cap)); + + switch (tm_node->level) { + case IPN3KE_TM_NODE_LEVEL_PORT: + cap->shaper_private_supported = 1; + cap->shaper_private_dual_rate_supported = 0; + cap->shaper_private_rate_min = 1; + cap->shaper_private_rate_max = UINT32_MAX; + cap->shaper_shared_n_max = 0; + + cap->nonleaf.sched_n_children_max = IPN3KE_TM_VT_NODE_NUM; + cap->nonleaf.sched_sp_n_priorities_max = 1; + cap->nonleaf.sched_wfq_n_children_per_group_max = + IPN3KE_TM_VT_NODE_NUM; + cap->nonleaf.sched_wfq_n_groups_max = 1; + cap->nonleaf.sched_wfq_weight_max = 1; + + cap->stats_mask = STATS_MASK_DEFAULT; + break; + + case IPN3KE_TM_NODE_LEVEL_VT: + cap->shaper_private_supported = 1; + cap->shaper_private_dual_rate_supported = 0; + cap->shaper_private_rate_min = 1; + cap->shaper_private_rate_max = UINT32_MAX; + cap->shaper_shared_n_max = 0; + + cap->nonleaf.sched_n_children_max = IPN3KE_TM_COS_NODE_NUM; + cap->nonleaf.sched_sp_n_priorities_max = 1; + cap->nonleaf.sched_wfq_n_children_per_group_max = + IPN3KE_TM_COS_NODE_NUM; + cap->nonleaf.sched_wfq_n_groups_max = 1; + cap->nonleaf.sched_wfq_weight_max = 1; + + cap->stats_mask = STATS_MASK_DEFAULT; + break; + + case IPN3KE_TM_NODE_LEVEL_COS: + cap->shaper_private_supported = 0; + cap->shaper_private_dual_rate_supported = 0; + cap->shaper_private_rate_min = 0; + cap->shaper_private_rate_max = 0; + cap->shaper_shared_n_max = 0; + + cap->leaf.cman_head_drop_supported = 0; + cap->leaf.cman_wred_packet_mode_supported = WRED_SUPPORTED; + cap->leaf.cman_wred_byte_mode_supported = 0; + cap->leaf.cman_wred_context_private_supported = WRED_SUPPORTED; + cap->leaf.cman_wred_context_shared_n_max = 0; + + cap->stats_mask = STATS_MASK_QUEUE; + break; + default: + break; + } + + return 0; +} + +static int +ipn3ke_tm_shaper_parame_trans(struct rte_tm_shaper_params *profile, + struct ipn3ke_tm_shaper_profile *local_profile, + const struct ipn3ke_tm_shaper_params_range_type *ref_data) +{ + int i; + const struct 
ipn3ke_tm_shaper_params_range_type *r;
+	uint64_t rate;
+
+	rate = profile->peak.rate;
+	for (i = 0, r = ref_data; i < IPN3KE_TM_SHAPER_RANGE_NUM; i++, r++) {
+		if ((rate >= r->low) &&
+			(rate <= r->high)) {
+			local_profile->m = (rate / 4) / r->exp2;
+			local_profile->e = r->exp;
+			local_profile->rate = rate;
+
+			return 0;
+		}
+	}
+
+	return -1;
+}
+static int
+ipn3ke_tm_shaper_profile_add(struct rte_eth_dev *dev,
+	uint32_t shaper_profile_id,
+	struct rte_tm_shaper_params *profile,
+	struct rte_tm_error *error)
+{
+	struct ipn3ke_hw *hw = IPN3KE_DEV_PRIVATE_TO_HW(dev);
+	struct ipn3ke_tm_internals *tm = IPN3KE_DEV_PRIVATE_TO_TM(dev);
+	struct ipn3ke_tm_shaper_profile *sp;
+
+	/* Shaper profile must not exist. */
+	sp = ipn3ke_hw_tm_shaper_profile_search(hw, shaper_profile_id, error);
+	if (!sp || (sp && sp->valid))
+		return -rte_tm_error_set(error,
+			EEXIST,
+			RTE_TM_ERROR_TYPE_SHAPER_PROFILE_ID,
+			NULL,
+			rte_strerror(EEXIST));
+
+	/* Profile must not be NULL. */
+	if (profile == NULL)
+		return -rte_tm_error_set(error,
+			EINVAL,
+			RTE_TM_ERROR_TYPE_SHAPER_PROFILE,
+			NULL,
+			rte_strerror(EINVAL));
+
+	/* Peak rate: non-zero, 32-bit */
+	if (profile->peak.rate == 0 ||
+		profile->peak.rate >= IPN3KE_TM_SHAPER_PEAK_RATE_MAX)
+		return -rte_tm_error_set(error,
+			EINVAL,
+			RTE_TM_ERROR_TYPE_SHAPER_PROFILE_PEAK_RATE,
+			NULL,
+			rte_strerror(EINVAL));
+
+	/* Peak size: must be zero (burst size is not configurable) */
+	if (profile->peak.size != 0)
+		return -rte_tm_error_set(error,
+			EINVAL,
+			RTE_TM_ERROR_TYPE_SHAPER_PROFILE_PEAK_SIZE,
+			NULL,
+			rte_strerror(EINVAL));
+
+	/* Dual-rate profiles are not supported. */
+	if (profile->committed.rate >= IPN3KE_TM_SHAPER_COMMITTED_RATE_MAX)
+		return -rte_tm_error_set(error,
+			EINVAL,
+			RTE_TM_ERROR_TYPE_SHAPER_PROFILE_COMMITTED_RATE,
+			NULL,
+			rte_strerror(EINVAL));
+
+	/* Packet length adjust: fixed at 24 bytes, not configurable */
+	if (profile->pkt_length_adjust != 0)
+		return -rte_tm_error_set(error,
+			EINVAL,
+			RTE_TM_ERROR_TYPE_SHAPER_PROFILE_PKT_ADJUST_LEN,
+			NULL,
+			rte_strerror(EINVAL));
+
+	if (ipn3ke_tm_shaper_parame_trans(profile,
+		sp,
+		ipn3ke_tm_shaper_params_rang)) {
+		return -rte_tm_error_set(error,
+			EINVAL,
+			RTE_TM_ERROR_TYPE_SHAPER_PROFILE_PEAK_RATE,
+			NULL,
+			rte_strerror(EINVAL));
+	} else {
+		sp->valid = 1;
+		rte_memcpy(&sp->params, profile, sizeof(sp->params));
+	}
+
+	tm->h.n_shaper_profiles++;
+
+	return 0;
+}
+
+/* Traffic manager shaper profile delete */
+static int
+ipn3ke_tm_shaper_profile_delete(struct rte_eth_dev *dev,
+	uint32_t shaper_profile_id,
+	struct rte_tm_error *error)
+{
+	struct ipn3ke_hw *hw = IPN3KE_DEV_PRIVATE_TO_HW(dev);
+	struct ipn3ke_tm_internals *tm = IPN3KE_DEV_PRIVATE_TO_TM(dev);
+	struct ipn3ke_tm_shaper_profile *sp;
+
+	/* Check existing */
+	sp = ipn3ke_hw_tm_shaper_profile_search(hw, shaper_profile_id, error);
+	if (!sp || (sp && !sp->valid))
+		return -rte_tm_error_set(error,
+			EINVAL,
+			RTE_TM_ERROR_TYPE_SHAPER_PROFILE_ID,
+			NULL,
+			rte_strerror(EINVAL));
+
+	sp->valid = 0;
+	tm->h.n_shaper_profiles--;
+
+	return 0;
+}
+
+static int
+ipn3ke_tm_tdrop_profile_check(struct rte_eth_dev *dev,
+	uint32_t tdrop_profile_id,
+	struct rte_tm_wred_params *profile,
+	struct rte_tm_error *error)
+{
+	struct ipn3ke_hw *hw = IPN3KE_DEV_PRIVATE_TO_HW(dev);
+	struct ipn3ke_tm_tdrop_profile *tp;
+	enum rte_tm_color color;
+
+	/* TDROP profile ID must not be NONE. */
+	if (tdrop_profile_id == RTE_TM_WRED_PROFILE_ID_NONE)
+		return -rte_tm_error_set(error,
+			EINVAL,
+			RTE_TM_ERROR_TYPE_WRED_PROFILE_ID,
+			NULL,
+			rte_strerror(EINVAL));
+
+	/* TDROP profile must not exist. */
+	tp = ipn3ke_hw_tm_tdrop_profile_search(hw, tdrop_profile_id);
+
+	/* Profile must not be NULL. */
+	if (profile == NULL)
+		return -rte_tm_error_set(error,
+			EINVAL,
+			RTE_TM_ERROR_TYPE_WRED_PROFILE,
+			NULL,
+			rte_strerror(EINVAL));
+
+	/* TDROP profile must be in byte mode; packet mode is not supported */
+	if (profile->packet_mode != 0)
+		return -rte_tm_error_set(error,
+			ENOTSUP,
+			RTE_TM_ERROR_TYPE_WRED_PROFILE,
+			NULL,
+			rte_strerror(ENOTSUP));
+
+	/* min_th must fit in 50 bits; max_th must be zero */
+	for (color = RTE_TM_GREEN; color <= RTE_TM_GREEN; color++) {
+		uint64_t min_th = profile->red_params[color].min_th;
+		uint64_t max_th = profile->red_params[color].max_th;
+		uint32_t th1, th2;
+
+		th1 = (uint32_t)(min_th & IPN3KE_TDROP_TH1_MASK);
+		th2 = (uint32_t)((min_th >> IPN3KE_TDROP_TH1_SHIFT) &
+				IPN3KE_TDROP_TH2_MASK);
+		if (((min_th >> IPN3KE_TDROP_TH1_SHIFT) >>
+				IPN3KE_TDROP_TH1_SHIFT) ||
+			max_th != 0)
+			return -rte_tm_error_set(error,
+				EINVAL,
+				RTE_TM_ERROR_TYPE_WRED_PROFILE,
+				NULL,
+				rte_strerror(EINVAL));
+	}
+
+	return 0;
+}
+
+static int
+ipn3ke_hw_tm_tdrop_wr(struct ipn3ke_hw *hw,
+	struct ipn3ke_tm_tdrop_profile *tp)
+{
+	if (tp->valid) {
+		IPN3KE_MASK_WRITE_REG(hw,
+			IPN3KE_CCB_PROFILE_MS,
+			0,
+			tp->th2,
+			IPN3KE_CCB_PROFILE_MS_MASK);
+
+		IPN3KE_MASK_WRITE_REG(hw,
+			IPN3KE_CCB_PROFILE_P,
+			tp->tdrop_profile_id,
+			tp->th1,
+			IPN3KE_CCB_PROFILE_MASK);
+	} else {
+		IPN3KE_MASK_WRITE_REG(hw,
+			IPN3KE_CCB_PROFILE_MS,
+			0,
+			0,
+			IPN3KE_CCB_PROFILE_MS_MASK);
+
+		IPN3KE_MASK_WRITE_REG(hw,
+			IPN3KE_CCB_PROFILE_P,
+			tp->tdrop_profile_id,
+			0,
+			IPN3KE_CCB_PROFILE_MASK);
+	}
+
+	return 0;
+}
+
+/* Traffic manager TDROP profile add */
+static int
+ipn3ke_tm_tdrop_profile_add(struct rte_eth_dev *dev,
+	uint32_t tdrop_profile_id,
+	struct rte_tm_wred_params *profile,
+	struct rte_tm_error *error)
+{
+	struct ipn3ke_hw *hw = IPN3KE_DEV_PRIVATE_TO_HW(dev);
+	struct ipn3ke_tm_internals *tm = IPN3KE_DEV_PRIVATE_TO_TM(dev);
+	struct ipn3ke_tm_tdrop_profile *tp;
+	int status;
+	uint64_t min_th;
uint32_t th1, th2; + + /* Check input params */ + status = ipn3ke_tm_tdrop_profile_check(dev, + tdrop_profile_id, + profile, + error); + if (status) + return status; + + /* Memory allocation */ + tp = &hw->tdrop_profile[tdrop_profile_id]; + + /* Fill in */ + tp->valid = 1; + min_th = profile->red_params[RTE_TM_GREEN].min_th; + th1 = (uint32_t)(min_th & IPN3KE_TDROP_TH1_MASK); + th2 = (uint32_t)((min_th >> IPN3KE_TDROP_TH1_SHIFT) & + IPN3KE_TDROP_TH2_MASK); + tp->th1 = th1; + tp->th2 = th2; + rte_memcpy(&tp->params, profile, sizeof(tp->params)); + + /* Add to list */ + tm->h.n_tdrop_profiles++; + + /* Write FPGA */ + ipn3ke_hw_tm_tdrop_wr(hw, tp); + + return 0; +} + +/* Traffic manager TDROP profile delete */ +static int +ipn3ke_tm_tdrop_profile_delete(struct rte_eth_dev *dev, + uint32_t tdrop_profile_id, + struct rte_tm_error *error) +{ + struct ipn3ke_hw *hw = IPN3KE_DEV_PRIVATE_TO_HW(dev); + struct ipn3ke_tm_internals *tm = IPN3KE_DEV_PRIVATE_TO_TM(dev); + struct ipn3ke_tm_tdrop_profile *tp; + + /* Check existing */ + tp = ipn3ke_hw_tm_tdrop_profile_search(hw, tdrop_profile_id); + if (tp == NULL) + return -rte_tm_error_set(error, + EINVAL, + RTE_TM_ERROR_TYPE_WRED_PROFILE_ID, + NULL, + rte_strerror(EINVAL)); + + /* Check unused */ + if (tp->n_users) + return -rte_tm_error_set(error, + EBUSY, + RTE_TM_ERROR_TYPE_WRED_PROFILE_ID, + NULL, + rte_strerror(EBUSY)); + + /* Set free */ + tp->valid = 0; + tm->h.n_tdrop_profiles--; + + /* Write FPGA */ + ipn3ke_hw_tm_tdrop_wr(hw, tp); + + return 0; +} + +static int +ipn3ke_tm_node_add_check_parameter(uint32_t tm_id, + uint32_t node_id, + uint32_t parent_node_id, + uint32_t priority, + uint32_t weight, + uint32_t level_id, + struct rte_tm_node_params *params, + struct rte_tm_error *error) +{ + uint32_t level_of_node_id; + uint32_t node_index; + uint32_t parent_level_id; + + if (node_id == RTE_TM_NODE_ID_NULL) + return -rte_tm_error_set(error, + EINVAL, + RTE_TM_ERROR_TYPE_NODE_ID, + NULL, + rte_strerror(EINVAL)); + + /* 
priority: must be 0, 1, 2, 3 */ + if (priority > IPN3KE_TM_NODE_PRIORITY_HIGHEST) + return -rte_tm_error_set(error, + EINVAL, + RTE_TM_ERROR_TYPE_NODE_PRIORITY, + NULL, + rte_strerror(EINVAL)); + + /* weight: must be 1 .. 255 */ + if (weight > IPN3KE_TM_NODE_WEIGHT_MAX) + return -rte_tm_error_set(error, + EINVAL, + RTE_TM_ERROR_TYPE_NODE_WEIGHT, + NULL, + rte_strerror(EINVAL)); + + /* check node id and parent id*/ + level_of_node_id = node_id / IPN3KE_TM_NODE_LEVEL_MOD; + if (level_of_node_id != level_id) + return -rte_tm_error_set(error, + EINVAL, + RTE_TM_ERROR_TYPE_NODE_ID, + NULL, + rte_strerror(EINVAL)); + node_index = node_id % IPN3KE_TM_NODE_LEVEL_MOD; + parent_level_id = parent_node_id / IPN3KE_TM_NODE_LEVEL_MOD; + switch (level_id) { + case IPN3KE_TM_NODE_LEVEL_PORT: + if (node_index != tm_id) + return -rte_tm_error_set(error, + EINVAL, + RTE_TM_ERROR_TYPE_NODE_ID, + NULL, + rte_strerror(EINVAL)); + if (parent_node_id != RTE_TM_NODE_ID_NULL) + return -rte_tm_error_set(error, + EINVAL, + RTE_TM_ERROR_TYPE_NODE_PARENT_NODE_ID, + NULL, + rte_strerror(EINVAL)); + break; + + case IPN3KE_TM_NODE_LEVEL_VT: + if (node_index >= IPN3KE_TM_VT_NODE_NUM) + return -rte_tm_error_set(error, + EINVAL, + RTE_TM_ERROR_TYPE_NODE_ID, + NULL, + rte_strerror(EINVAL)); + if (parent_level_id != IPN3KE_TM_NODE_LEVEL_PORT) + return -rte_tm_error_set(error, + EINVAL, + RTE_TM_ERROR_TYPE_NODE_PARENT_NODE_ID, + NULL, + rte_strerror(EINVAL)); + break; + + case IPN3KE_TM_NODE_LEVEL_COS: + if (node_index >= IPN3KE_TM_COS_NODE_NUM) + return -rte_tm_error_set(error, + EINVAL, + RTE_TM_ERROR_TYPE_NODE_ID, + NULL, + rte_strerror(EINVAL)); + if (parent_level_id != IPN3KE_TM_NODE_LEVEL_VT) + return -rte_tm_error_set(error, + EINVAL, + RTE_TM_ERROR_TYPE_NODE_PARENT_NODE_ID, + NULL, + rte_strerror(EINVAL)); + break; + default: + return -rte_tm_error_set(error, + EINVAL, + RTE_TM_ERROR_TYPE_LEVEL_ID, + NULL, + rte_strerror(EINVAL)); + } + + /* params: must not be NULL */ + if (params == NULL) + 
return -rte_tm_error_set(error, + EINVAL, + RTE_TM_ERROR_TYPE_NODE_PARAMS, + NULL, + rte_strerror(EINVAL)); + /* No shared shapers */ + if (params->n_shared_shapers != 0) + return -rte_tm_error_set(error, + EINVAL, + RTE_TM_ERROR_TYPE_NODE_PARAMS_N_SHARED_SHAPERS, + NULL, + rte_strerror(EINVAL)); + return 0; +} +static int +ipn3ke_tm_node_add_check_mount(uint32_t tm_id, + uint32_t node_id, + uint32_t parent_node_id, + uint32_t level_id, + struct rte_tm_error *error) +{ + /*struct ipn3ke_tm_internals *tm = IPN3KE_DEV_PRIVATE_TO_TM(dev);*/ + uint32_t node_index; + uint32_t parent_index; + uint32_t parent_index1; + + node_index = node_id % IPN3KE_TM_NODE_LEVEL_MOD; + parent_index = parent_node_id % IPN3KE_TM_NODE_LEVEL_MOD; + parent_index1 = node_index / IPN3KE_TM_NODE_MOUNT_MAX; + switch (level_id) { + case IPN3KE_TM_NODE_LEVEL_PORT: + break; + + case IPN3KE_TM_NODE_LEVEL_VT: + if (parent_index != tm_id) + return -rte_tm_error_set(error, + EINVAL, + RTE_TM_ERROR_TYPE_NODE_PARENT_NODE_ID, + NULL, + rte_strerror(EINVAL)); + break; + + case IPN3KE_TM_NODE_LEVEL_COS: + if (parent_index != parent_index1) + return -rte_tm_error_set(error, + EINVAL, + RTE_TM_ERROR_TYPE_NODE_PARENT_NODE_ID, + NULL, + rte_strerror(EINVAL)); + break; + default: + return -rte_tm_error_set(error, + EINVAL, + RTE_TM_ERROR_TYPE_LEVEL_ID, + NULL, + rte_strerror(EINVAL)); + } + + return 0; +} + +/* Traffic manager node add */ +static int +ipn3ke_tm_node_add(struct rte_eth_dev *dev, + uint32_t node_id, + uint32_t parent_node_id, + uint32_t priority, + uint32_t weight, + uint32_t level_id, + struct rte_tm_node_params *params, + struct rte_tm_error *error) +{ + struct ipn3ke_hw *hw = IPN3KE_DEV_PRIVATE_TO_HW(dev); + struct ipn3ke_tm_internals *tm = IPN3KE_DEV_PRIVATE_TO_TM(dev); + uint32_t tm_id; + struct ipn3ke_tm_node *n, *parent_node; + uint32_t node_state, state_mask; + uint32_t node_index; + uint32_t parent_index; + int status; + + /* Checks */ + if (tm->hierarchy_frozen) + return 
-rte_tm_error_set(error, + EBUSY, + RTE_TM_ERROR_TYPE_UNSPECIFIED, + NULL, + rte_strerror(EBUSY)); + + tm_id = tm->tm_id; + + status = ipn3ke_tm_node_add_check_parameter(tm_id, + node_id, + parent_node_id, + priority, + weight, + level_id, + params, + error); + if (status) + return status; + + status = ipn3ke_tm_node_add_check_mount(tm_id, + node_id, + parent_node_id, + level_id, + error); + if (status) + return status; + + /* Shaper profile ID must not be NONE. */ + if ((params->shaper_profile_id != RTE_TM_SHAPER_PROFILE_ID_NONE) && + (params->shaper_profile_id != node_id)) + return -rte_tm_error_set(error, + EINVAL, + RTE_TM_ERROR_TYPE_SHAPER_PROFILE_ID, + NULL, + rte_strerror(EINVAL)); + + /* Memory allocation */ + state_mask = 0; + IPN3KE_BIT_SET(state_mask, IPN3KE_TM_NODE_STATE_IDLE); + IPN3KE_BIT_SET(state_mask, IPN3KE_TM_NODE_STATE_CONFIGURED_DEL); + n = ipn3ke_hw_tm_node_search(hw, tm_id, node_id, state_mask); + if (!n) + return -rte_tm_error_set(error, + EINVAL, + RTE_TM_ERROR_TYPE_UNSPECIFIED, + NULL, + rte_strerror(EINVAL)); + node_state = n->node_state; + + /* Check parent node */ + state_mask = 0; + IPN3KE_BIT_SET(state_mask, IPN3KE_TM_NODE_STATE_CONFIGURED_ADD); + IPN3KE_BIT_SET(state_mask, IPN3KE_TM_NODE_STATE_COMMITTED); + if (parent_node_id != RTE_TM_NODE_ID_NULL) { + parent_node = ipn3ke_hw_tm_node_search(hw, + tm_id, + parent_node_id, + state_mask); + if (!parent_node) + return -rte_tm_error_set(error, + EINVAL, + RTE_TM_ERROR_TYPE_NODE_PARENT_NODE_ID, + NULL, + rte_strerror(EINVAL)); + } else + parent_node = NULL; + + node_index = node_id % IPN3KE_TM_NODE_LEVEL_MOD; + parent_index = parent_node_id % IPN3KE_TM_NODE_LEVEL_MOD; + switch (level_id) { + case IPN3KE_TM_NODE_LEVEL_PORT: + n->node_state = IPN3KE_TM_NODE_STATE_CONFIGURED_ADD; + n->tm_id = tm_id; + tm->h.port_commit_node = n; + break; + + case IPN3KE_TM_NODE_LEVEL_VT: + if (node_state == IPN3KE_TM_NODE_STATE_IDLE) { + TAILQ_INSERT_TAIL(&tm->h.vt_commit_node_list, n, node); + if 
(parent_node) + parent_node->n_children++; + tm->h.n_vt_nodes++; + } else if (node_state == IPN3KE_TM_NODE_STATE_CONFIGURED_DEL) { + if (parent_node) + parent_node->n_children++; + tm->h.n_vt_nodes++; + } + n->node_state = IPN3KE_TM_NODE_STATE_CONFIGURED_ADD; + n->parent_node_id = parent_node_id; + n->tm_id = tm_id; + n->parent_node = parent_node; + + break; + + case IPN3KE_TM_NODE_LEVEL_COS: + if (node_state == IPN3KE_TM_NODE_STATE_IDLE) { + TAILQ_INSERT_TAIL(&tm->h.cos_commit_node_list, + n, node); + if (parent_node) + parent_node->n_children++; + tm->h.n_cos_nodes++; + } else if (node_state == IPN3KE_TM_NODE_STATE_CONFIGURED_DEL) { + if (parent_node) + parent_node->n_children++; + tm->h.n_cos_nodes++; + } + n->node_state = IPN3KE_TM_NODE_STATE_CONFIGURED_ADD; + n->parent_node_id = parent_node_id; + n->tm_id = tm_id; + n->parent_node = parent_node; + + break; + default: + return -rte_tm_error_set(error, + EINVAL, + RTE_TM_ERROR_TYPE_LEVEL_ID, + NULL, + rte_strerror(EINVAL)); + } + + /* Fill in */ + n->priority = priority; + n->weight = weight; + + if (n->level == IPN3KE_TM_NODE_LEVEL_COS && + params->leaf.cman == RTE_TM_CMAN_TAIL_DROP) + n->tdrop_profile = ipn3ke_hw_tm_tdrop_profile_search(hw, + params->leaf.wred.wred_profile_id); + + rte_memcpy(&n->params, params, sizeof(n->params)); + + return 0; +} + +static int +ipn3ke_tm_node_del_check_parameter(uint32_t tm_id, + uint32_t node_id, + struct rte_tm_error *error) +{ + uint32_t level_of_node_id; + uint32_t node_index; + + if (node_id == RTE_TM_NODE_ID_NULL) + return -rte_tm_error_set(error, + EINVAL, + RTE_TM_ERROR_TYPE_NODE_ID, + NULL, + rte_strerror(EINVAL)); + + /* check node id and parent id*/ + level_of_node_id = node_id / IPN3KE_TM_NODE_LEVEL_MOD; + node_index = node_id % IPN3KE_TM_NODE_LEVEL_MOD; + switch (level_of_node_id) { + case IPN3KE_TM_NODE_LEVEL_PORT: + if (node_index != tm_id) + return -rte_tm_error_set(error, + EINVAL, + RTE_TM_ERROR_TYPE_NODE_ID, + NULL, + rte_strerror(EINVAL)); + break; + + 
case IPN3KE_TM_NODE_LEVEL_VT: + if (node_index >= IPN3KE_TM_VT_NODE_NUM) + return -rte_tm_error_set(error, + EINVAL, + RTE_TM_ERROR_TYPE_NODE_ID, + NULL, + rte_strerror(EINVAL)); + break; + + case IPN3KE_TM_NODE_LEVEL_COS: + if (node_index >= IPN3KE_TM_COS_NODE_NUM) + return -rte_tm_error_set(error, + EINVAL, + RTE_TM_ERROR_TYPE_NODE_ID, + NULL, + rte_strerror(EINVAL)); + break; + default: + return -rte_tm_error_set(error, + EINVAL, + RTE_TM_ERROR_TYPE_LEVEL_ID, + NULL, + rte_strerror(EINVAL)); + } + + return 0; +} + +/* Traffic manager node delete */ +static int +ipn3ke_pmd_tm_node_delete(struct rte_eth_dev *dev, + uint32_t node_id, + struct rte_tm_error *error) +{ + struct ipn3ke_hw *hw = IPN3KE_DEV_PRIVATE_TO_HW(dev); + struct ipn3ke_tm_internals *tm = IPN3KE_DEV_PRIVATE_TO_TM(dev); + struct ipn3ke_tm_node *n, *parent_node; + uint32_t tm_id; + int status; + uint32_t level_of_node_id; + uint32_t node_index; + uint32_t node_state; + uint32_t state_mask; + + /* Check hierarchy changes are currently allowed */ + if (tm->hierarchy_frozen) + return -rte_tm_error_set(error, + EBUSY, + RTE_TM_ERROR_TYPE_UNSPECIFIED, + NULL, + rte_strerror(EBUSY)); + + tm_id = tm->tm_id; + + status = ipn3ke_tm_node_del_check_parameter(tm_id, + node_id, + error); + if (status) + return status; + + /* Check existing */ + state_mask = 0; + IPN3KE_BIT_SET(state_mask, IPN3KE_TM_NODE_STATE_CONFIGURED_ADD); + IPN3KE_BIT_SET(state_mask, IPN3KE_TM_NODE_STATE_COMMITTED); + n = ipn3ke_hw_tm_node_search(hw, tm_id, node_id, state_mask); + if (n == NULL) + return -rte_tm_error_set(error, + EINVAL, + RTE_TM_ERROR_TYPE_NODE_ID, + NULL, + rte_strerror(EINVAL)); + + if (n->n_children > 0) + return -rte_tm_error_set(error, + EINVAL, + RTE_TM_ERROR_TYPE_NODE_ID, + NULL, + rte_strerror(EINVAL)); + + node_state = n->node_state; + + level_of_node_id = node_id / IPN3KE_TM_NODE_LEVEL_MOD; + node_index = node_id % IPN3KE_TM_NODE_LEVEL_MOD; + + /* Check parent node */ + if (n->parent_node_id != 
RTE_TM_NODE_ID_NULL) { + state_mask = 0; + IPN3KE_BIT_SET(state_mask, IPN3KE_TM_NODE_STATE_CONFIGURED_ADD); + IPN3KE_BIT_SET(state_mask, IPN3KE_TM_NODE_STATE_COMMITTED); + parent_node = ipn3ke_hw_tm_node_search(hw, + tm_id, + n->parent_node_id, + state_mask); + if (!parent_node) + return -rte_tm_error_set(error, + EINVAL, + RTE_TM_ERROR_TYPE_NODE_PARENT_NODE_ID, + NULL, + rte_strerror(EINVAL)); + if (n->parent_node != parent_node) + return -rte_tm_error_set(error, + EINVAL, + RTE_TM_ERROR_TYPE_NODE_ID, + NULL, + rte_strerror(EINVAL)); + } else + parent_node = NULL; + + switch (level_of_node_id) { + case IPN3KE_TM_NODE_LEVEL_PORT: + if (tm->h.port_node != n) + return -rte_tm_error_set(error, + EINVAL, + RTE_TM_ERROR_TYPE_NODE_ID, + NULL, + rte_strerror(EINVAL)); + n->node_state = IPN3KE_TM_NODE_STATE_CONFIGURED_DEL; + tm->h.port_commit_node = n; + + break; + + case IPN3KE_TM_NODE_LEVEL_VT: + if (node_state == IPN3KE_TM_NODE_STATE_COMMITTED) { + if (parent_node) + TAILQ_REMOVE(&parent_node->children_node_list, + n, node); + TAILQ_INSERT_TAIL(&tm->h.vt_commit_node_list, n, node); + if (parent_node) + parent_node->n_children--; + tm->h.n_vt_nodes--; + } else if (node_state == IPN3KE_TM_NODE_STATE_CONFIGURED_ADD) { + if (parent_node) + parent_node->n_children--; + tm->h.n_vt_nodes--; + } + n->node_state = IPN3KE_TM_NODE_STATE_CONFIGURED_DEL; + + break; + + case IPN3KE_TM_NODE_LEVEL_COS: + if (node_state == IPN3KE_TM_NODE_STATE_COMMITTED) { + if (parent_node) + TAILQ_REMOVE(&parent_node->children_node_list, + n, node); + TAILQ_INSERT_TAIL(&tm->h.cos_commit_node_list, + n, node); + if (parent_node) + parent_node->n_children--; + tm->h.n_cos_nodes--; + } else if (node_state == IPN3KE_TM_NODE_STATE_CONFIGURED_ADD) { + if (parent_node) + parent_node->n_children--; + tm->h.n_cos_nodes--; + } + n->node_state = IPN3KE_TM_NODE_STATE_CONFIGURED_DEL; + + break; + default: + return -rte_tm_error_set(error, + EINVAL, + RTE_TM_ERROR_TYPE_LEVEL_ID, + NULL, + rte_strerror(EINVAL)); + } 
+ + return 0; +} +static int +ipn3ke_tm_hierarchy_commit_check(struct rte_eth_dev *dev, + struct rte_tm_error *error) +{ + struct ipn3ke_tm_internals *tm = IPN3KE_DEV_PRIVATE_TO_TM(dev); + uint32_t tm_id; + struct ipn3ke_tm_node_list *nl; + struct ipn3ke_tm_node *n, *parent_node; + enum ipn3ke_tm_node_state node_state; + + tm_id = tm->tm_id; + + nl = &tm->h.cos_commit_node_list; + TAILQ_FOREACH(n, nl, node) { + node_state = n->node_state; + parent_node = n->parent_node; + if (n->node_state == IPN3KE_TM_NODE_STATE_CONFIGURED_ADD) { + if ((n->parent_node_id == RTE_TM_NODE_ID_NULL) || + (n->level != IPN3KE_TM_NODE_LEVEL_COS) || + (n->tm_id != tm_id) || + (parent_node == NULL) || + (parent_node && + (parent_node->node_state == + IPN3KE_TM_NODE_STATE_CONFIGURED_DEL)) || + (parent_node && + (parent_node->node_state == + IPN3KE_TM_NODE_STATE_IDLE)) || + (n->shaper_profile.valid == 0)) + return -rte_tm_error_set(error, + EINVAL, + RTE_TM_ERROR_TYPE_UNSPECIFIED, + NULL, + rte_strerror(EINVAL)); + } else if (n->node_state == IPN3KE_TM_NODE_STATE_CONFIGURED_DEL) + if ((n->level != IPN3KE_TM_NODE_LEVEL_COS) || + (n->n_children != 0)) + return -rte_tm_error_set(error, + EINVAL, + RTE_TM_ERROR_TYPE_UNSPECIFIED, + NULL, + rte_strerror(EINVAL)); + else + return -rte_tm_error_set(error, + EINVAL, + RTE_TM_ERROR_TYPE_UNSPECIFIED, + NULL, + rte_strerror(EINVAL)); + } + + nl = &tm->h.vt_commit_node_list; + TAILQ_FOREACH(n, nl, node) { + node_state = n->node_state; + parent_node = n->parent_node; + if (n->node_state == IPN3KE_TM_NODE_STATE_CONFIGURED_ADD) { + if ((n->parent_node_id == RTE_TM_NODE_ID_NULL) || + (n->level != IPN3KE_TM_NODE_LEVEL_VT) || + (n->tm_id != tm_id) || + (parent_node == NULL) || + (parent_node && + (parent_node->node_state == + IPN3KE_TM_NODE_STATE_CONFIGURED_DEL)) || + (parent_node && + (parent_node->node_state == + IPN3KE_TM_NODE_STATE_IDLE)) || + (n->shaper_profile.valid == 0)) + return -rte_tm_error_set(error, + EINVAL, + RTE_TM_ERROR_TYPE_UNSPECIFIED, + 
NULL, + rte_strerror(EINVAL)); + } else if (n->node_state == IPN3KE_TM_NODE_STATE_CONFIGURED_DEL) + if ((n->level != IPN3KE_TM_NODE_LEVEL_VT) || + (n->n_children != 0)) + return -rte_tm_error_set(error, + EINVAL, + RTE_TM_ERROR_TYPE_UNSPECIFIED, + NULL, + rte_strerror(EINVAL)); + else + return -rte_tm_error_set(error, + EINVAL, + RTE_TM_ERROR_TYPE_UNSPECIFIED, + NULL, + rte_strerror(EINVAL)); + } + + n = tm->h.port_commit_node; + if (n && + ((n->parent_node_id != RTE_TM_NODE_ID_NULL) || + (n->level != IPN3KE_TM_NODE_LEVEL_PORT) || + (n->tm_id != tm_id) || + (n->parent_node != NULL) || + (n->shaper_profile.valid == 0))) + return -rte_tm_error_set(error, + EINVAL, + RTE_TM_ERROR_TYPE_UNSPECIFIED, + NULL, + rte_strerror(EINVAL)); + + return 0; +} +static int +ipn3ke_hw_tm_node_wr(struct ipn3ke_hw *hw, + struct ipn3ke_tm_node *n) +{ + uint32_t level; + uint32_t node_index; + + level = n->level; + node_index = n->node_index; + + switch (level) { + case IPN3KE_TM_NODE_LEVEL_PORT: + /** + * Configure Type + */ + IPN3KE_MASK_WRITE_REG(hw, + IPN3KE_QOS_TYPE_L3_X, + n->node_index, + n->priority, + IPN3KE_QOS_TYPE_MASK); + + /** + * Configure Sch_wt + */ + IPN3KE_MASK_WRITE_REG(hw, + IPN3KE_QOS_SCH_WT_L3_X, + n->node_index, + n->weight, + IPN3KE_QOS_SCH_WT_MASK); + + /** + * Configure Shap_wt + */ + if (n->shaper_profile.valid) + IPN3KE_MASK_WRITE_REG(hw, + IPN3KE_QOS_SHAP_WT_L3_X, + n->node_index, + ((n->shaper_profile.e << 10) | + n->shaper_profile.m), + IPN3KE_QOS_SHAP_WT_MASK); + + break; + case IPN3KE_TM_NODE_LEVEL_VT: + /** + * Configure Type + */ + IPN3KE_MASK_WRITE_REG(hw, + IPN3KE_QOS_TYPE_L2_X, + n->node_index, + n->priority, + IPN3KE_QOS_TYPE_MASK); + + /** + * Configure Sch_wt + */ + IPN3KE_MASK_WRITE_REG(hw, + IPN3KE_QOS_SCH_WT_L2_X, + n->node_index, + n->weight, + IPN3KE_QOS_SCH_WT_MASK); + + /** + * Configure Shap_wt + */ + if (n->shaper_profile.valid) + IPN3KE_MASK_WRITE_REG(hw, + IPN3KE_QOS_SHAP_WT_L2_X, + n->node_index, + ((n->shaper_profile.e << 10) | + 
n->shaper_profile.m), + IPN3KE_QOS_SHAP_WT_MASK); + + /** + * Configure Map + */ + IPN3KE_MASK_WRITE_REG(hw, + IPN3KE_QOS_MAP_L2_X, + n->node_index, + n->parent_node->node_index, + IPN3KE_QOS_MAP_L2_MASK); + + break; + case IPN3KE_TM_NODE_LEVEL_COS: + /** + * Configure Tail Drop mapping + */ + if (n->tdrop_profile && n->tdrop_profile->valid) { + IPN3KE_MASK_WRITE_REG(hw, + IPN3KE_CCB_QPROFILE_Q, + n->node_index, + n->tdrop_profile->tdrop_profile_id, + IPN3KE_CCB_QPROFILE_MASK); + } + + /** + * Configure Type + */ + IPN3KE_MASK_WRITE_REG(hw, + IPN3KE_QOS_TYPE_L1_X, + n->node_index, + n->priority, + IPN3KE_QOS_TYPE_MASK); + + /** + * Configure Sch_wt + */ + IPN3KE_MASK_WRITE_REG(hw, + IPN3KE_QOS_SCH_WT_L1_X, + n->node_index, + n->weight, + IPN3KE_QOS_SCH_WT_MASK); + + /** + * Configure Shap_wt + */ + if (n->shaper_profile.valid) + IPN3KE_MASK_WRITE_REG(hw, + IPN3KE_QOS_SHAP_WT_L1_X, + n->node_index, + ((n->shaper_profile.e << 10) | + n->shaper_profile.m), + IPN3KE_QOS_SHAP_WT_MASK); + + /** + * Configure COS queue to port + */ + while (IPN3KE_MASK_READ_REG(hw, + IPN3KE_QM_UID_CONFIG_CTRL, + 0, + 0x80000000)) + ; + + IPN3KE_MASK_WRITE_REG(hw, + IPN3KE_QM_UID_CONFIG_DATA, + 0, + (1 << 8 | n->parent_node->parent_node->node_index), + 0x1FF); + + IPN3KE_MASK_WRITE_REG(hw, + IPN3KE_QM_UID_CONFIG_CTRL, + 0, + n->node_index, + 0xFFFFF); + + while (IPN3KE_MASK_READ_REG(hw, + IPN3KE_QM_UID_CONFIG_CTRL, + 0, + 0x80000000)) + ; + + /** + * Configure Map + */ + IPN3KE_MASK_WRITE_REG(hw, + IPN3KE_QOS_MAP_L1_X, + n->node_index, + n->parent_node->node_index, + IPN3KE_QOS_MAP_L1_MASK); + + break; + default: + return -1; + } + + return 0; +} + +static int +ipn3ke_tm_hierarchy_hw_commit(struct rte_eth_dev *dev, + struct rte_tm_error *error) +{ + struct ipn3ke_hw *hw = IPN3KE_DEV_PRIVATE_TO_HW(dev); + struct ipn3ke_tm_internals *tm = IPN3KE_DEV_PRIVATE_TO_TM(dev); + struct ipn3ke_tm_node_list *nl; + struct ipn3ke_tm_node *n, *nn, *parent_node; + + n = tm->h.port_commit_node; + if (n) { 
+ if (n->node_state == IPN3KE_TM_NODE_STATE_CONFIGURED_ADD) { + tm->h.port_commit_node = NULL; + + n->node_state = IPN3KE_TM_NODE_STATE_COMMITTED; + } else if (n->node_state == + IPN3KE_TM_NODE_STATE_CONFIGURED_DEL) { + tm->h.port_commit_node = NULL; + + n->node_state = IPN3KE_TM_NODE_STATE_IDLE; + n->priority = IPN3KE_TM_NODE_PRIORITY_NORMAL0; + n->weight = 0; + n->tm_id = RTE_TM_NODE_ID_NULL; + } else + return -rte_tm_error_set(error, + EINVAL, + RTE_TM_ERROR_TYPE_UNSPECIFIED, + NULL, + rte_strerror(EINVAL)); + ipn3ke_hw_tm_node_wr(hw, n); + } + + nl = &tm->h.vt_commit_node_list; + for (n = TAILQ_FIRST(nl); n; nn) { + nn = TAILQ_NEXT(n, node); + if (n->node_state == IPN3KE_TM_NODE_STATE_CONFIGURED_ADD) { + n->node_state = IPN3KE_TM_NODE_STATE_COMMITTED; + parent_node = n->parent_node; + TAILQ_REMOVE(nl, n, node); + TAILQ_INSERT_TAIL(&parent_node->children_node_list, + n, node); + } else if (n->node_state == + IPN3KE_TM_NODE_STATE_CONFIGURED_DEL) { + parent_node = n->parent_node; + TAILQ_REMOVE(nl, n, node); + + n->node_state = IPN3KE_TM_NODE_STATE_IDLE; + n->parent_node_id = RTE_TM_NODE_ID_NULL; + n->priority = IPN3KE_TM_NODE_PRIORITY_NORMAL0; + n->weight = 0; + n->tm_id = RTE_TM_NODE_ID_NULL; + n->parent_node = NULL; + } else + return -rte_tm_error_set(error, + EINVAL, + RTE_TM_ERROR_TYPE_UNSPECIFIED, + NULL, + rte_strerror(EINVAL)); + ipn3ke_hw_tm_node_wr(hw, n); + n = nn; + } + + nl = &tm->h.cos_commit_node_list; + for (n = TAILQ_FIRST(nl); n; nn) { + nn = TAILQ_NEXT(n, node); + if (n->node_state == IPN3KE_TM_NODE_STATE_CONFIGURED_ADD) { + n->node_state = IPN3KE_TM_NODE_STATE_COMMITTED; + parent_node = n->parent_node; + TAILQ_REMOVE(nl, n, node); + TAILQ_INSERT_TAIL(&parent_node->children_node_list, + n, node); + } else if (n->node_state == + IPN3KE_TM_NODE_STATE_CONFIGURED_DEL) { + n->node_state = IPN3KE_TM_NODE_STATE_IDLE; + parent_node = n->parent_node; + TAILQ_REMOVE(nl, n, node); + + n->node_state = IPN3KE_TM_NODE_STATE_IDLE; + n->parent_node_id = 
RTE_TM_NODE_ID_NULL; + n->priority = IPN3KE_TM_NODE_PRIORITY_NORMAL0; + n->weight = 0; + n->tm_id = RTE_TM_NODE_ID_NULL; + n->parent_node = NULL; + + if (n->tdrop_profile) + n->tdrop_profile->n_users--; + } else + return -rte_tm_error_set(error, + EINVAL, + RTE_TM_ERROR_TYPE_UNSPECIFIED, + NULL, + rte_strerror(EINVAL)); + ipn3ke_hw_tm_node_wr(hw, n); + n = nn; + } + + return 0; +} + +static int +ipn3ke_tm_hierarchy_commit_clear(struct rte_eth_dev *dev) +{ + struct ipn3ke_tm_internals *tm = IPN3KE_DEV_PRIVATE_TO_TM(dev); + struct ipn3ke_tm_node_list *nl; + struct ipn3ke_tm_node *n; + struct ipn3ke_tm_node *nn; + + n = tm->h.port_commit_node; + if (n) { + n->node_state = IPN3KE_TM_NODE_STATE_IDLE; + n->priority = IPN3KE_TM_NODE_PRIORITY_NORMAL0; + n->weight = 0; + n->tm_id = RTE_TM_NODE_ID_NULL; + n->n_children = 0; + + tm->h.port_commit_node = NULL; + } + + nl = &tm->h.vt_commit_node_list; + for (n = TAILQ_FIRST(nl); n; nn) { + nn = TAILQ_NEXT(n, node); + + n->node_state = IPN3KE_TM_NODE_STATE_IDLE; + n->parent_node_id = RTE_TM_NODE_ID_NULL; + n->priority = IPN3KE_TM_NODE_PRIORITY_NORMAL0; + n->weight = 0; + n->tm_id = RTE_TM_NODE_ID_NULL; + n->parent_node = NULL; + n->n_children = 0; + tm->h.n_vt_nodes--; + + TAILQ_REMOVE(nl, n, node); + n = nn; + } + + nl = &tm->h.cos_commit_node_list; + for (n = TAILQ_FIRST(nl); n; nn) { + nn = TAILQ_NEXT(n, node); + + n->node_state = IPN3KE_TM_NODE_STATE_IDLE; + n->parent_node_id = RTE_TM_NODE_ID_NULL; + n->priority = IPN3KE_TM_NODE_PRIORITY_NORMAL0; + n->weight = 0; + n->tm_id = RTE_TM_NODE_ID_NULL; + n->parent_node = NULL; + tm->h.n_cos_nodes--; + + TAILQ_REMOVE(nl, n, node); + n = nn; + } + + return 0; +} + +static void +ipn3ke_tm_show(struct rte_eth_dev *dev) +{ + struct ipn3ke_tm_internals *tm = IPN3KE_DEV_PRIVATE_TO_TM(dev); + uint32_t tm_id; + struct ipn3ke_tm_node_list *vt_nl, *cos_nl; + struct ipn3ke_tm_node *port_n, *vt_n, *cos_n; + char *str_state[IPN3KE_TM_NODE_STATE_MAX] = {"Idle", + "CfgAdd", + "CfgDel", + 
"Committed"}; + + tm_id = tm->tm_id; + + printf("*************HQoS Tree(%d)*************\n", tm_id); + + port_n = tm->h.port_node; + printf("Port: (%d|%s)\n", port_n->node_index, + str_state[port_n->node_state]); + + vt_nl = &tm->h.port_node->children_node_list; + TAILQ_FOREACH(vt_n, vt_nl, node) { + cos_nl = &vt_n->children_node_list; + printf(" VT%d: ", vt_n->node_index); + TAILQ_FOREACH(cos_n, cos_nl, node) { + if (cos_n->parent_node_id != + (vt_n->node_index + IPN3KE_TM_NODE_LEVEL_MOD)) + IPN3KE_ASSERT(0); + printf("(%d|%s), ", cos_n->node_index, + str_state[cos_n->node_state]); + } + printf("\n"); + } +} +static void +ipn3ke_tm_show_commmit(struct rte_eth_dev *dev) +{ + struct ipn3ke_tm_internals *tm = IPN3KE_DEV_PRIVATE_TO_TM(dev); + uint32_t tm_id; + struct ipn3ke_tm_node_list *nl; + struct ipn3ke_tm_node *n; + char *str_state[IPN3KE_TM_NODE_STATE_MAX] = {"Idle", + "CfgAdd", + "CfgDel", + "Committed"}; + + tm_id = tm->tm_id; + + printf("*************Commit Tree(%d)*************\n", tm_id); + n = tm->h.port_commit_node; + printf("Port: "); + if (n) + printf("(%d|%s)", n->node_index, str_state[n->node_state]); + printf("\n"); + + nl = &tm->h.vt_commit_node_list; + printf("VT : "); + TAILQ_FOREACH(n, nl, node) { + printf("(%d|%s), ", n->node_index, str_state[n->node_state]); + } + printf("\n"); + + nl = &tm->h.cos_commit_node_list; + printf("COS : "); + TAILQ_FOREACH(n, nl, node) { + printf("(%d|%s), ", n->node_index, str_state[n->node_state]); + } + printf("\n"); +} + +/* Traffic manager hierarchy commit */ +static int +ipn3ke_tm_hierarchy_commit(struct rte_eth_dev *dev, + int clear_on_fail, + struct rte_tm_error *error) +{ + struct ipn3ke_tm_internals *tm = IPN3KE_DEV_PRIVATE_TO_TM(dev); + int status; + + /* Checks */ + if (tm->hierarchy_frozen) + return -rte_tm_error_set(error, + EBUSY, + RTE_TM_ERROR_TYPE_UNSPECIFIED, + NULL, + rte_strerror(EBUSY)); + + ipn3ke_tm_show_commmit(dev); + + status = ipn3ke_tm_hierarchy_commit_check(dev, error); + if (status) { + 
if (clear_on_fail) + ipn3ke_tm_hierarchy_commit_clear(dev); + return status; + } + + ipn3ke_tm_hierarchy_hw_commit(dev, error); + ipn3ke_tm_show(dev); + + return 0; +} + +const struct rte_tm_ops ipn3ke_tm_ops = { + .node_type_get = ipn3ke_pmd_tm_node_type_get, + .capabilities_get = ipn3ke_tm_capabilities_get, + .level_capabilities_get = ipn3ke_tm_level_capabilities_get, + .node_capabilities_get = ipn3ke_tm_node_capabilities_get, + + .wred_profile_add = ipn3ke_tm_tdrop_profile_add, + .wred_profile_delete = ipn3ke_tm_tdrop_profile_delete, + .shared_wred_context_add_update = NULL, + .shared_wred_context_delete = NULL, + + .shaper_profile_add = ipn3ke_tm_shaper_profile_add, + .shaper_profile_delete = ipn3ke_tm_shaper_profile_delete, + .shared_shaper_add_update = NULL, + .shared_shaper_delete = NULL, + + .node_add = ipn3ke_tm_node_add, + .node_delete = ipn3ke_pmd_tm_node_delete, + .node_suspend = NULL, + .node_resume = NULL, + .hierarchy_commit = ipn3ke_tm_hierarchy_commit, + + .node_parent_update = NULL, + .node_shaper_update = NULL, + .node_shared_shaper_update = NULL, + .node_stats_update = NULL, + .node_wfq_weight_mode_update = NULL, + .node_cman_update = NULL, + .node_wred_context_update = NULL, + .node_shared_wred_context_update = NULL, + + .node_stats_read = NULL, +}; + +int +ipn3ke_tm_ops_get(struct rte_eth_dev *ethdev, + void *arg) +{ + struct ipn3ke_hw *hw = IPN3KE_DEV_PRIVATE_TO_HW(ethdev); + struct ipn3ke_rpst *rpst = IPN3KE_DEV_PRIVATE_TO_RPST(ethdev); + struct rte_tm_error error; + + if (!arg) + return -EINVAL; + + if (hw->acc_tm) + *(const void **)arg = &ipn3ke_tm_ops; + else if (rpst->i40e_pf_eth) + rte_tm_ops_get(rpst->i40e_pf_eth_port_id, &error); + else + return -EINVAL; + + return 0; +} + +int +ipn3ke_hw_tm_hqos_node_dump(struct rte_eth_dev *dev, + uint32_t level, + uint32_t node_index, + int *parent, + uint32_t *type, + uint32_t *sch_wt, + uint32_t *shaper_m, + uint32_t *shaper_e, + uint32_t *ccb_profile_id, + uint32_t *th1, + uint32_t *th2) +{ + 
struct ipn3ke_hw *hw = IPN3KE_DEV_PRIVATE_TO_HW(dev); + uint32_t shaper; + uint32_t type_base, sch_wt_base, shap_wt_base; + + switch (level) { + case IPN3KE_TM_NODE_LEVEL_PORT: + type_base = IPN3KE_QOS_TYPE_L3_X; + sch_wt_base = IPN3KE_QOS_SCH_WT_L3_X; + shap_wt_base = IPN3KE_QOS_SHAP_WT_L3_X; + + break; + case IPN3KE_TM_NODE_LEVEL_VT: + type_base = IPN3KE_QOS_TYPE_L2_X; + sch_wt_base = IPN3KE_QOS_SCH_WT_L2_X; + shap_wt_base = IPN3KE_QOS_SHAP_WT_L2_X; + + break; + case IPN3KE_TM_NODE_LEVEL_COS: + type_base = IPN3KE_QOS_TYPE_L1_X; + sch_wt_base = IPN3KE_QOS_SCH_WT_L1_X; + shap_wt_base = IPN3KE_QOS_SHAP_WT_L1_X; + + break; + default: + return -1; + } + + /** + * Read Type + */ + (*type) = IPN3KE_MASK_READ_REG(hw, + type_base, + node_index, + IPN3KE_QOS_TYPE_MASK); + + /** + * Read Sch_wt + */ + (*sch_wt) = IPN3KE_MASK_READ_REG(hw, + sch_wt_base, + node_index, + IPN3KE_QOS_SCH_WT_MASK); + + /** + * Read Shap_wt + */ + shaper = IPN3KE_MASK_READ_REG(hw, + shap_wt_base, + node_index, + IPN3KE_QOS_SHAP_WT_MASK); + (*shaper_m) = shaper & 0x3FF; + (*shaper_e) = (shaper >> 10) & 0xF; + + /** + * Read Parent and CCB Profile ID + */ + switch (level) { + case IPN3KE_TM_NODE_LEVEL_PORT: + (*parent) = -1; + + break; + case IPN3KE_TM_NODE_LEVEL_VT: + (*parent) = IPN3KE_MASK_READ_REG(hw, + IPN3KE_QOS_MAP_L2_X, + node_index, + IPN3KE_QOS_MAP_L2_MASK); + + break; + case IPN3KE_TM_NODE_LEVEL_COS: + (*parent) = IPN3KE_MASK_READ_REG(hw, + IPN3KE_QOS_MAP_L1_X, + node_index, + IPN3KE_QOS_MAP_L1_MASK); + + (*ccb_profile_id) = IPN3KE_MASK_READ_REG(hw, + IPN3KE_CCB_QPROFILE_Q, + node_index, + IPN3KE_CCB_QPROFILE_MASK); + (*th1) = IPN3KE_MASK_READ_REG(hw, + IPN3KE_CCB_PROFILE_P, + (*ccb_profile_id), + IPN3KE_CCB_PROFILE_MASK); + + (*th2) = IPN3KE_MASK_READ_REG(hw, + IPN3KE_CCB_PROFILE_MS, + 0, + IPN3KE_CCB_PROFILE_MS_MASK); + + break; + default: + return -1; + } + + return 0; +} + +int +ipn3ke_hw_tm_ccb_node_dump(struct rte_eth_dev *dev, + uint32_t tdrop_profile_id, + uint32_t *th1, + 
uint32_t *th2)
+{
+	struct ipn3ke_hw *hw = IPN3KE_DEV_PRIVATE_TO_HW(dev);
+
+	(*th1) = IPN3KE_MASK_READ_REG(hw,
+		IPN3KE_CCB_PROFILE_P,
+		tdrop_profile_id,
+		IPN3KE_CCB_PROFILE_MASK);
+
+	(*th2) = IPN3KE_MASK_READ_REG(hw,
+		IPN3KE_CCB_PROFILE_MS,
+		0,
+		IPN3KE_CCB_PROFILE_MS_MASK);
+
+	return 0;
+}
diff --git a/drivers/net/ipn3ke/ipn3ke_tm.h b/drivers/net/ipn3ke/ipn3ke_tm.h
new file mode 100644
index 0000000..43847d8
--- /dev/null
+++ b/drivers/net/ipn3ke/ipn3ke_tm.h
@@ -0,0 +1,135 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019 Intel Corporation
+ */
+
+struct ipn3ke_rpst;
+
+#ifndef _IPN3KE_TM_H_
+#define _IPN3KE_TM_H_
+
+#include
+
+/* TM Levels */
+enum ipn3ke_tm_node_level {
+	IPN3KE_TM_NODE_LEVEL_PORT = 0,
+	IPN3KE_TM_NODE_LEVEL_VT,
+	IPN3KE_TM_NODE_LEVEL_COS,
+	IPN3KE_TM_NODE_LEVEL_MAX,
+};
+
+/* TM Shaper Profile */
+struct ipn3ke_tm_shaper_profile {
+	uint32_t valid;
+	uint32_t m;
+	uint32_t e;
+	uint64_t rate;
+	struct rte_tm_shaper_params params;
+};
+
+TAILQ_HEAD(ipn3ke_tm_shaper_profile_list, ipn3ke_tm_shaper_profile);
+
+
+#define IPN3KE_TDROP_TH1_MASK	0x1ffffff
+#define IPN3KE_TDROP_TH1_SHIFT	(25)
+#define IPN3KE_TDROP_TH2_MASK	0x1ffffff
+
+/* TM TDROP Profile */
+struct ipn3ke_tm_tdrop_profile {
+	uint32_t tdrop_profile_id;
+	uint32_t th1;
+	uint32_t th2;
+	uint32_t n_users;
+	uint32_t valid;
+	struct rte_tm_wred_params params;
+};
+
+/* TM node state */
+enum ipn3ke_tm_node_state {
+	IPN3KE_TM_NODE_STATE_IDLE = 0,
+	IPN3KE_TM_NODE_STATE_CONFIGURED_ADD,
+	IPN3KE_TM_NODE_STATE_CONFIGURED_DEL,
+	IPN3KE_TM_NODE_STATE_COMMITTED,
+	IPN3KE_TM_NODE_STATE_MAX,
+};
+
+TAILQ_HEAD(ipn3ke_tm_node_list, ipn3ke_tm_node);
+
+/* IPN3KE TM Node */
+struct ipn3ke_tm_node {
+	TAILQ_ENTRY(ipn3ke_tm_node) node;
+	uint32_t node_index;
+	uint32_t level;
+	uint32_t tm_id;
+	enum ipn3ke_tm_node_state node_state;
+	uint32_t parent_node_id;
+	uint32_t priority;
+	uint32_t weight;
+	struct ipn3ke_tm_node *parent_node;
+	struct ipn3ke_tm_shaper_profile shaper_profile;
+	struct ipn3ke_tm_tdrop_profile *tdrop_profile;
+	struct rte_tm_node_params params;
+	struct rte_tm_node_stats stats;
+	uint32_t n_children;
+	struct ipn3ke_tm_node_list children_node_list;
+};
+
+/* IPN3KE TM Hierarchy Specification */
+struct ipn3ke_tm_hierarchy {
+	struct ipn3ke_tm_node *port_node;
+	/*struct ipn3ke_tm_node_list vt_node_list;*/
+	/*struct ipn3ke_tm_node_list cos_node_list;*/
+
+	uint32_t n_shaper_profiles;
+	/*uint32_t n_shared_shapers;*/
+	uint32_t n_tdrop_profiles;
+	uint32_t n_vt_nodes;
+	uint32_t n_cos_nodes;
+
+	struct ipn3ke_tm_node *port_commit_node;
+	struct ipn3ke_tm_node_list vt_commit_node_list;
+	struct ipn3ke_tm_node_list cos_commit_node_list;
+
+	/*uint32_t n_tm_nodes[IPN3KE_TM_NODE_LEVEL_MAX];*/
+};
+
+struct ipn3ke_tm_internals {
+	/** Hierarchy specification
+	 *
+	 *  -Hierarchy is unfrozen at init and when port is stopped.
+	 *  -Hierarchy is frozen on successful hierarchy commit.
+	 *  -Run-time hierarchy changes are not allowed, therefore it makes
+	 *   sense to keep the hierarchy frozen after the port is started.
+	 */
+	struct ipn3ke_tm_hierarchy h;
+	int hierarchy_frozen;
+	int tm_started;
+	uint32_t tm_id;
+};
+
+#define IPN3KE_TM_COS_NODE_NUM		(64*1024)
+#define IPN3KE_TM_VT_NODE_NUM		(IPN3KE_TM_COS_NODE_NUM/8)
+#define IPN3KE_TM_10G_PORT_NODE_NUM	(8)
+#define IPN3KE_TM_25G_PORT_NODE_NUM	(4)
+
+#define IPN3KE_TM_NODE_LEVEL_MOD	(100000)
+#define IPN3KE_TM_NODE_MOUNT_MAX	(8)
+
+#define IPN3KE_TM_TDROP_PROFILE_NUM	(2*1024)
+
+/* TM node priority */
+enum ipn3ke_tm_node_priority {
+	IPN3KE_TM_NODE_PRIORITY_NORMAL0 = 0,
+	IPN3KE_TM_NODE_PRIORITY_LOW,
+	IPN3KE_TM_NODE_PRIORITY_NORMAL1,
+	IPN3KE_TM_NODE_PRIORITY_HIGHEST,
+};
+
+#define IPN3KE_TM_NODE_WEIGHT_MAX	UINT8_MAX
+
+void
+ipn3ke_tm_init(struct ipn3ke_rpst *rpst);
+int
+ipn3ke_tm_ops_get(struct rte_eth_dev *ethdev,
+	void *arg);
+
+#endif /* _IPN3KE_TM_H_ */
diff --git a/drivers/net/ipn3ke/meson.build b/drivers/net/ipn3ke/meson.build
new file mode 100644
index 0000000..e02ac25
--- /dev/null
+++ b/drivers/net/ipn3ke/meson.build
@@ -0,0 +1,9 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2019 Intel Corporation
+
+allow_experimental_apis = true
+sources += files('ipn3ke_ethdev.c',
+	'ipn3ke_representor.c',
+	'ipn3ke_tm.c',
+	'ipn3ke_flow.c')
+deps += ['kvargs', 'bus_pci', 'bus_ifpga']
diff --git a/drivers/net/ipn3ke/rte_pmd_ipn3ke_version.map b/drivers/net/ipn3ke/rte_pmd_ipn3ke_version.map
new file mode 100644
index 0000000..ef35398
--- /dev/null
+++ b/drivers/net/ipn3ke/rte_pmd_ipn3ke_version.map
@@ -0,0 +1,4 @@
+DPDK_2.0 {
+
+	local: *;
+};

From patchwork Thu Feb 28 07:13:15 2019
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: "Xu, Rosen"
X-Patchwork-Id: 50618
X-Patchwork-Delegate: ferruh.yigit@amd.com
Return-Path:
X-Original-To: patchwork@dpdk.org
Delivered-To: patchwork@dpdk.org
Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id DD9457CE2; Thu, 28 Feb 2019 08:15:29 +0100 (CET)
Received: from
mga07.intel.com (mga07.intel.com [134.134.136.100]) by dpdk.org (Postfix) with ESMTP id 7B78F5F1B for ; Thu, 28 Feb 2019 08:15:27 +0100 (CET) X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga001.jf.intel.com ([10.7.209.18]) by orsmga105.jf.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 27 Feb 2019 23:15:27 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.58,422,1544515200"; d="scan'208";a="142299878" Received: from dpdkx8602.sh.intel.com ([10.67.110.200]) by orsmga001.jf.intel.com with ESMTP; 27 Feb 2019 23:15:25 -0800 From: Rosen Xu To: dev@dpdk.org Cc: ferruh.yigit@intel.com, tianfei.zhang@intel.com, dan.wei@intel.com, rosen.xu@intel.com, andy.pei@intel.com, qiming.yang@intel.com, haiyue.wang@intel.com, santos.chen@intel.com, zhang.zhang@intel.com Date: Thu, 28 Feb 2019 15:13:15 +0800 Message-Id: <1551338000-120348-7-git-send-email-rosen.xu@intel.com> X-Mailer: git-send-email 1.8.3.1 In-Reply-To: <1551338000-120348-1-git-send-email-rosen.xu@intel.com> References: <1551338000-120348-1-git-send-email-rosen.xu@intel.com> Subject: [dpdk-dev] [PATCH v1 06/11] config: add build enablement for IPN3KE X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Add build enablement for Intel FPGA Acceleration NIC IPN3KE. 
Signed-off-by: Rosen Xu --- config/common_base | 6 ++++++ 1 file changed, 6 insertions(+) diff --git a/config/common_base b/config/common_base index 7c6da51..4fac8ba 100644 --- a/config/common_base +++ b/config/common_base @@ -316,6 +316,12 @@ CONFIG_RTE_LIBRTE_AVF_DEBUG_RX=n CONFIG_RTE_LIBRTE_AVF_16BYTE_RX_DESC=n # +# Compile burst-oriented IPN3KE PMD driver +# +CONFIG_RTE_LIBRTE_IPN3KE_PMD=n +CONFIG_RTE_LIBRTE_IPN3KE_DEBUG=n + +# # Compile burst-oriented Mellanox ConnectX-3 (MLX4) PMD # CONFIG_RTE_LIBRTE_MLX4_PMD=n From patchwork Thu Feb 28 07:13:16 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Xu, Rosen" X-Patchwork-Id: 50620 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id BAF181B11F; Thu, 28 Feb 2019 08:15:33 +0100 (CET) Received: from mga07.intel.com (mga07.intel.com [134.134.136.100]) by dpdk.org (Postfix) with ESMTP id 4DF641B108 for ; Thu, 28 Feb 2019 08:15:31 +0100 (CET) X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga001.jf.intel.com ([10.7.209.18]) by orsmga105.jf.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 27 Feb 2019 23:15:31 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.58,422,1544515200"; d="scan'208";a="142299887" Received: from dpdkx8602.sh.intel.com ([10.67.110.200]) by orsmga001.jf.intel.com with ESMTP; 27 Feb 2019 23:15:28 -0800 From: Rosen Xu To: dev@dpdk.org Cc: ferruh.yigit@intel.com, tianfei.zhang@intel.com, dan.wei@intel.com, rosen.xu@intel.com, andy.pei@intel.com, qiming.yang@intel.com, haiyue.wang@intel.com, santos.chen@intel.com, zhang.zhang@intel.com Date: Thu, 28 Feb 2019 15:13:16 +0800 Message-Id: <1551338000-120348-8-git-send-email-rosen.xu@intel.com> X-Mailer: git-send-email 1.8.3.1 In-Reply-To: 
<1551338000-120348-1-git-send-email-rosen.xu@intel.com> References: <1551338000-120348-1-git-send-email-rosen.xu@intel.com> Subject: [dpdk-dev] [PATCH v1 07/11] mk: add link enablement for IPN3KE X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Add link enablement for Intel FPGA Acceleration NIC IPN3KE. Signed-off-by: Rosen Xu --- mk/rte.app.mk | 1 + 1 file changed, 1 insertion(+) diff --git a/mk/rte.app.mk b/mk/rte.app.mk index 8a4f0f4..8b427d1 100644 --- a/mk/rte.app.mk +++ b/mk/rte.app.mk @@ -295,6 +295,7 @@ endif # CONFIG_RTE_LIBRTE_FSLMC_BUS _LDLIBS-$(CONFIG_RTE_LIBRTE_IFPGA_BUS) += -lrte_bus_ifpga ifeq ($(CONFIG_RTE_LIBRTE_IFPGA_BUS),y) _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_IFPGA_RAWDEV) += -lrte_pmd_ifpga_rawdev +_LDLIBS-$(CONFIG_RTE_LIBRTE_IPN3KE_PMD) += -lrte_pmd_ipn3ke endif # CONFIG_RTE_LIBRTE_IFPGA_BUS endif # CONFIG_RTE_LIBRTE_RAWDEV From patchwork Thu Feb 28 07:13:17 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Xu, Rosen" X-Patchwork-Id: 50621 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 7B31B1B13A; Thu, 28 Feb 2019 08:15:36 +0100 (CET) Received: from mga07.intel.com (mga07.intel.com [134.134.136.100]) by dpdk.org (Postfix) with ESMTP id 0E1C21B123 for ; Thu, 28 Feb 2019 08:15:34 +0100 (CET) X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga001.jf.intel.com ([10.7.209.18]) by orsmga105.jf.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 27 Feb 2019 23:15:34 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.58,422,1544515200"; d="scan'208";a="142299901" Received: from dpdkx8602.sh.intel.com 
([10.67.110.200]) by orsmga001.jf.intel.com with ESMTP; 27 Feb 2019 23:15:32 -0800 From: Rosen Xu To: dev@dpdk.org Cc: ferruh.yigit@intel.com, tianfei.zhang@intel.com, dan.wei@intel.com, rosen.xu@intel.com, andy.pei@intel.com, qiming.yang@intel.com, haiyue.wang@intel.com, santos.chen@intel.com, zhang.zhang@intel.com Date: Thu, 28 Feb 2019 15:13:17 +0800 Message-Id: <1551338000-120348-9-git-send-email-rosen.xu@intel.com> X-Mailer: git-send-email 1.8.3.1 In-Reply-To: <1551338000-120348-1-git-send-email-rosen.xu@intel.com> References: <1551338000-120348-1-git-send-email-rosen.xu@intel.com> Subject: [dpdk-dev] [PATCH v1 08/11] app/test-pmd: add IPN3KE support for testpmd X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Add IPN3KE support for testpmd Signed-off-by: Rosen Xu Signed-off-by: Andy Pei --- app/test-pmd/Makefile | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/app/test-pmd/Makefile b/app/test-pmd/Makefile index d5258ea..a6b6f6f 100644 --- a/app/test-pmd/Makefile +++ b/app/test-pmd/Makefile @@ -62,6 +62,10 @@ ifeq ($(CONFIG_RTE_LIBRTE_I40E_PMD),y) LDLIBS += -lrte_pmd_i40e endif +ifeq ($(CONFIG_RTE_LIBRTE_IPN3KE_PMD),y) +LDLIBS += -lrte_pmd_ipn3ke +endif + ifeq ($(CONFIG_RTE_LIBRTE_BNXT_PMD),y) LDLIBS += -lrte_pmd_bnxt endif From patchwork Thu Feb 28 07:13:18 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Xu, Rosen" X-Patchwork-Id: 50622 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id EDADF1B14C; Thu, 28 Feb 2019 08:15:39 +0100 (CET) Received:
from mga07.intel.com (mga07.intel.com [134.134.136.100]) by dpdk.org (Postfix) with ESMTP id 6F40E1B146 for ; Thu, 28 Feb 2019 08:15:38 +0100 (CET) X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga001.jf.intel.com ([10.7.209.18]) by orsmga105.jf.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 27 Feb 2019 23:15:38 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.58,422,1544515200"; d="scan'208";a="142299915" Received: from dpdkx8602.sh.intel.com ([10.67.110.200]) by orsmga001.jf.intel.com with ESMTP; 27 Feb 2019 23:15:36 -0800 From: Rosen Xu To: dev@dpdk.org Cc: ferruh.yigit@intel.com, tianfei.zhang@intel.com, dan.wei@intel.com, rosen.xu@intel.com, andy.pei@intel.com, qiming.yang@intel.com, haiyue.wang@intel.com, santos.chen@intel.com, zhang.zhang@intel.com Date: Thu, 28 Feb 2019 15:13:18 +0800 Message-Id: <1551338000-120348-10-git-send-email-rosen.xu@intel.com> X-Mailer: git-send-email 1.8.3.1 In-Reply-To: <1551338000-120348-1-git-send-email-rosen.xu@intel.com> References: <1551338000-120348-1-git-send-email-rosen.xu@intel.com> Subject: [dpdk-dev] [PATCH v1 09/11] usertools: add IPN3KE device bind X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Add Intel FPGA Acceleration NIC IPN3KE device bind. 
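The devbind change in this patch registers a new ``ifpga_class`` filter dictionary so that class-``12`` (processing accelerator) devices are treated as network devices. As a reader's aid, here is a simplified sketch of how such class/vendor/device filter dictionaries can match a parsed PCI device; the ``device_matches`` helper is hypothetical and not the script's actual code (the real script parses lspci output and matches the class code as a prefix):

```python
# The filter dictionary added by this patch: None means "don't care",
# and 'Device' may hold a comma-separated list of hex device IDs.
ifpga_class = {'Class': '12', 'Vendor': '8086', 'Device': 'bcc0,09c4,0b30',
               'SVendor': None, 'SDevice': None}

def device_matches(dev, dev_filter):
    """Illustrative: True if `dev` (parsed PCI fields) satisfies `dev_filter`."""
    for key, wanted in dev_filter.items():
        if wanted is None:
            continue  # unconstrained field
        value = dev.get(key, '')
        # Filter values may be comma-separated lists of IDs.
        if value.lower() not in [v.strip() for v in wanted.lower().split(',')]:
            return False
    return True

# An FPGA PAC N3000-style device (class 0x12, Intel, device 09c4) matches:
device_matches({'Class': '12', 'Vendor': '8086', 'Device': '09c4'}, ifpga_class)
# A plain X710 NIC (class 0x02) does not:
device_matches({'Class': '02', 'Vendor': '8086', 'Device': '1572'}, ifpga_class)
```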
Signed-off-by: Rosen Xu --- usertools/dpdk-devbind.py | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/usertools/dpdk-devbind.py b/usertools/dpdk-devbind.py index a9cd66a..067a809 100755 --- a/usertools/dpdk-devbind.py +++ b/usertools/dpdk-devbind.py @@ -12,6 +12,8 @@ # The PCI base class for all devices network_class = {'Class': '02', 'Vendor': None, 'Device': None, 'SVendor': None, 'SDevice': None} +ifpga_class = {'Class': '12', 'Vendor': '8086', 'Device': 'bcc0,09c4,0b30', + 'SVendor': None, 'SDevice': None} encryption_class = {'Class': '10', 'Vendor': None, 'Device': None, 'SVendor': None, 'SDevice': None} intel_processor_class = {'Class': '0b', 'Vendor': '8086', 'Device': None, @@ -29,7 +31,7 @@ avp_vnic = {'Class': '05', 'Vendor': '1af4', 'Device': '1110', 'SVendor': None, 'SDevice': None} -network_devices = [network_class, cavium_pkx, avp_vnic] +network_devices = [network_class, cavium_pkx, avp_vnic, ifpga_class] crypto_devices = [encryption_class, intel_processor_class] eventdev_devices = [cavium_sso, cavium_tim] mempool_devices = [cavium_fpa] From patchwork Thu Feb 28 07:13:19 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: "Xu, Rosen" X-Patchwork-Id: 50623 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 096081B1EB; Thu, 28 Feb 2019 08:15:43 +0100 (CET) Received: from mga07.intel.com (mga07.intel.com [134.134.136.100]) by dpdk.org (Postfix) with ESMTP id B7C2C5F14 for ; Thu, 28 Feb 2019 08:15:41 +0100 (CET) X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga001.jf.intel.com ([10.7.209.18]) by orsmga105.jf.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 27 Feb 2019 23:15:41 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.58,422,1544515200"; 
d="scan'208";a="142299928" Received: from dpdkx8602.sh.intel.com ([10.67.110.200]) by orsmga001.jf.intel.com with ESMTP; 27 Feb 2019 23:15:39 -0800 From: Rosen Xu To: dev@dpdk.org Cc: ferruh.yigit@intel.com, tianfei.zhang@intel.com, dan.wei@intel.com, rosen.xu@intel.com, andy.pei@intel.com, qiming.yang@intel.com, haiyue.wang@intel.com, santos.chen@intel.com, zhang.zhang@intel.com Date: Thu, 28 Feb 2019 15:13:19 +0800 Message-Id: <1551338000-120348-11-git-send-email-rosen.xu@intel.com> X-Mailer: git-send-email 1.8.3.1 In-Reply-To: <1551338000-120348-1-git-send-email-rosen.xu@intel.com> References: <1551338000-120348-1-git-send-email-rosen.xu@intel.com> MIME-Version: 1.0 Subject: [dpdk-dev] [PATCH v1 10/11] doc: add IPN3KE document X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Add Intel FPGA Acceleration NIC IPN3KE document. Signed-off-by: Rosen Xu Signed-off-by: Dan Wei --- doc/guides/nics/features/ipn3ke.ini | 57 ++++++++++++++++++++++ doc/guides/nics/index.rst | 1 + doc/guides/nics/ipn3ke.rst | 97 +++++++++++++++++++++++++++++++++++++ 3 files changed, 155 insertions(+) create mode 100644 doc/guides/nics/features/ipn3ke.ini create mode 100644 doc/guides/nics/ipn3ke.rst diff --git a/doc/guides/nics/features/ipn3ke.ini b/doc/guides/nics/features/ipn3ke.ini new file mode 100644 index 0000000..06cfaf5 --- /dev/null +++ b/doc/guides/nics/features/ipn3ke.ini @@ -0,0 +1,57 @@ +; +; Supported features of the 'ipn3ke' network poll mode driver. +; +; Refer to default.ini for the full list of available PMD features. 
+; +[Features] +Speed capabilities = Y +Link status = Y +Link status event = Y +Rx interrupt = Y +Queue start/stop = Y +Runtime Rx queue setup = Y +Runtime Tx queue setup = Y +Jumbo frame = Y +Scattered Rx = Y +TSO = Y +Promiscuous mode = Y +Allmulticast mode = Y +Unicast MAC filter = Y +Multicast MAC filter = Y +RSS hash = Y +RSS key update = Y +RSS reta update = Y +VMDq = Y +SR-IOV = Y +DCB = Y +VLAN filter = Y +Ethertype filter = Y +Tunnel filter = Y +Hash filter = Y +Flow director = Y +Flow control = Y +Flow API = Y +Traffic mirroring = Y +CRC offload = Y +VLAN offload = Y +QinQ offload = Y +L3 checksum offload = Y +L4 checksum offload = Y +Inner L3 checksum = Y +Inner L4 checksum = Y +Packet type parsing = Y +Timesync = Y +Rx descriptor status = Y +Tx descriptor status = Y +Basic stats = Y +Extended stats = Y +FW version = Y +Module EEPROM dump = Y +Multiprocess aware = Y +BSD nic_uio = Y +Linux UIO = Y +Linux VFIO = Y +x86-32 = Y +x86-64 = Y +ARMv8 = Y +Power8 = Y diff --git a/doc/guides/nics/index.rst b/doc/guides/nics/index.rst index 5c80e3b..6671481 100644 --- a/doc/guides/nics/index.rst +++ b/doc/guides/nics/index.rst @@ -28,6 +28,7 @@ Network Interface Controller Drivers fm10k i40e ice + ipn3ke ifc igb ixgbe diff --git a/doc/guides/nics/ipn3ke.rst b/doc/guides/nics/ipn3ke.rst new file mode 100644 index 0000000..7386075 --- /dev/null +++ b/doc/guides/nics/ipn3ke.rst @@ -0,0 +1,97 @@ +.. SPDX-License-Identifier: BSD-3-Clause + Copyright(c) 2019 Intel Corporation. + +IPN3KE Poll Mode Driver +======================= + +The ipn3ke PMD (librte_pmd_ipn3ke) provides poll mode driver support +for Intel® FPGA PAC (Programmable Acceleration Card) N3000 based on +the Intel Ethernet Controller X710/XXV710 and Intel Arria 10 FPGA. + +On this card, the FPGA is an acceleration bridge between the network interface +and the Intel Ethernet Controller.
Although both the FPGA and the Ethernet +Controller are connected to the CPU through a PCIe Gen3 x16 switch, all the +packet RX/TX is handled by the Intel Ethernet Controller. So from the application's +point of view, the data path is still the legacy Intel Ethernet Controller +X710/XXV710 PMD. Besides this, users can enable more acceleration +features via FPGA IP. + +Prerequisites +------------- + +- Identify your adapter using `Intel Support + `_ and get the latest NVM/FW images. + +- Follow the DPDK :ref:`Getting Started Guide for Linux ` to set up the basic DPDK environment. + +- To get better performance on Intel platforms, please follow the "How to get best performance with NICs on Intel platforms" + section of the :ref:`Getting Started Guide for Linux `. + + +Pre-Installation Configuration +------------------------------ + +Config File Options +~~~~~~~~~~~~~~~~~~~ + +The following options can be modified in the ``config`` file. +Please note that enabling debugging options may affect system performance. + +- ``CONFIG_RTE_LIBRTE_IPN3KE_PMD`` (default ``n``) + + Toggle compilation of the ``librte_pmd_ipn3ke`` driver. + +- ``CONFIG_RTE_LIBRTE_IPN3KE_DEBUG_*`` (default ``n``) + + Toggle display of generic debugging messages. + +Runtime Config Options +~~~~~~~~~~~~~~~~~~~~~~ + +- ``Maximum Number of Queue Pairs`` + + The maximum number of queue pairs is decided by the HW. If not configured, the application + uses the number from the HW. Users can check the number by calling the API + ``rte_eth_dev_info_get``. + If users want to limit the number of queues, they can set a smaller number + using an EAL parameter like ``max_queue_pair_num=n``. + + +Driver compilation and testing +------------------------------ + +Refer to the document :ref:`compiling and testing a PMD for a NIC ` +for details. + +Sample Application Notes +------------------------ + +Packet TX/RX with FPGA Pass-through image +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +The FPGA pass-through bitstream is the original FPGA image.
+ +To start ``testpmd`` and add the I40e PF to the FPGA network port: + +.. code-block:: console + + ./app/testpmd -l 0-15 -n 4 --vdev 'ifpga_rawdev_cfg0,ifpga=b3:00.0,port=0' --vdev 'ipn3ke_cfg0,afu=0|b3:00.0,i40e_pf={0000:b1:00.0|0000:b1:00.1|0000:b1:00.2|0000:b1:00.3|0000:b5:00.0|0000:b5:00.1|0000:b5:00.2|0000:b5:00.3}' -- -i --no-numa --port-topology=loop + +HQoS and flow acceleration +~~~~~~~~~~~~~~~~~~~~~~~~~~ + +The HQoS and flow acceleration bitstream is used to offload HQoS and the flow classifier. + +To start ``testpmd``, add the I40e PF to the FPGA network port, and enable FPGA HQoS and flow acceleration: + +.. code-block:: console + + ./app/testpmd -l 0-15 -n 4 --vdev 'ifpga_rawdev_cfg0,ifpga=b3:00.0,port=0' --vdev 'ipn3ke_cfg0,afu=0|b3:00.0,fpga_acc={tm|flow},i40e_pf={0000:b1:00.0|0000:b1:00.1|0000:b1:00.2|0000:b1:00.3|0000:b5:00.0|0000:b5:00.1|0000:b5:00.2|0000:b5:00.3}' -- -i --no-numa --forward-mode=macswap + +Limitations or Known issues +--------------------------- + +19.05 limitation +~~~~~~~~~~~~~~~~ + +IPN3KE code released in 19.05 is for evaluation only.
From patchwork Thu Feb 28 07:13:20 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Xu, Rosen" X-Patchwork-Id: 50624 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id C312F1B1F4; Thu, 28 Feb 2019 08:15:46 +0100 (CET) Received: from mga07.intel.com (mga07.intel.com [134.134.136.100]) by dpdk.org (Postfix) with ESMTP id C53881B1F0 for ; Thu, 28 Feb 2019 08:15:44 +0100 (CET) X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga001.jf.intel.com ([10.7.209.18]) by orsmga105.jf.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 27 Feb 2019 23:15:44 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.58,422,1544515200"; d="scan'208";a="142300018" Received: from dpdkx8602.sh.intel.com ([10.67.110.200]) by orsmga001.jf.intel.com with ESMTP; 27 Feb 2019 23:15:42 -0800 From: Rosen Xu To: dev@dpdk.org Cc: ferruh.yigit@intel.com, tianfei.zhang@intel.com, dan.wei@intel.com, rosen.xu@intel.com, andy.pei@intel.com, qiming.yang@intel.com, haiyue.wang@intel.com, santos.chen@intel.com, zhang.zhang@intel.com Date: Thu, 28 Feb 2019 15:13:20 +0800 Message-Id: <1551338000-120348-12-git-send-email-rosen.xu@intel.com> X-Mailer: git-send-email 1.8.3.1 In-Reply-To: <1551338000-120348-1-git-send-email-rosen.xu@intel.com> References: <1551338000-120348-1-git-send-email-rosen.xu@intel.com> Subject: [dpdk-dev] [PATCH v1 11/11] MAINTAINERS: add MAINTAINERS for IPN3KE X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Add MAINTAINERS for Intel FPGA Acceleration NIC IPN3KE. 
Signed-off-by: Rosen Xu --- MAINTAINERS | 7 +++++++ 1 file changed, 7 insertions(+) diff --git a/MAINTAINERS b/MAINTAINERS index 835d8a2..ec49f00 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -599,6 +599,13 @@ F: drivers/net/ice/ F: doc/guides/nics/ice.rst F: doc/guides/nics/features/ice.ini +Intel ipn3ke +M: Rosen Xu +T: git://dpdk.org/next/dpdk-next-net-intel +F: drivers/net/ipn3ke/ +F: doc/guides/nics/ipn3ke.rst +F: doc/guides/nics/features/ipn3ke.ini + Marvell mvpp2 M: Tomasz Duszynski M: Dmitri Epshtein