From patchwork Thu Sep 2 17:59:51 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Apeksha Gupta X-Patchwork-Id: 97837 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id B0D7BA0C4C; Thu, 2 Sep 2021 20:01:01 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 990594003E; Thu, 2 Sep 2021 20:01:01 +0200 (CEST) Received: from inva020.nxp.com (inva020.nxp.com [92.121.34.13]) by mails.dpdk.org (Postfix) with ESMTP id D5D1B4003C for ; Thu, 2 Sep 2021 20:00:59 +0200 (CEST) Received: from inva020.nxp.com (localhost [127.0.0.1]) by inva020.eu-rdc02.nxp.com (Postfix) with ESMTP id A84C21A1F28; Thu, 2 Sep 2021 20:00:59 +0200 (CEST) Received: from aprdc01srsp001v.ap-rdc01.nxp.com (aprdc01srsp001v.ap-rdc01.nxp.com [165.114.16.16]) by inva020.eu-rdc02.nxp.com (Postfix) with ESMTP id 43A251A1F22; Thu, 2 Sep 2021 20:00:59 +0200 (CEST) Received: from lsv03186.swis.in-blr01.nxp.com (lsv03186.swis.in-blr01.nxp.com [92.120.146.182]) by aprdc01srsp001v.ap-rdc01.nxp.com (Postfix) with ESMTP id 4685A183AC8B; Fri, 3 Sep 2021 02:00:58 +0800 (+08) From: Apeksha Gupta To: andrew.rybchenko@oktetlabs.ru, ferruh.yigit@intel.com Cc: dev@dpdk.org, hemant.agrawal@nxp.com, sachin.saxena@nxp.com, Apeksha Gupta Date: Thu, 2 Sep 2021 23:29:51 +0530 Message-Id: <20210902175955.9202-2-apeksha.gupta@nxp.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20210902175955.9202-1-apeksha.gupta@nxp.com> References: <20210902175955.9202-1-apeksha.gupta@nxp.com> X-Virus-Scanned: ClamAV using ClamSMTP Subject: [dpdk-dev] [PATCH v2 1/5] net/enetfec: introduce NXP ENETFEC driver X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" ENETFEC (Fast Ethernet Controller) is a network poll mode driver for NXP SoC i.MX 8M Mini. This patch adds skeleton for enetfec driver with probe function. Signed-off-by: Sachin Saxena Signed-off-by: Apeksha Gupta --- doc/guides/nics/enetfec.rst | 121 ++++++++++++++++++++ doc/guides/nics/features/enetfec.ini | 8 ++ doc/guides/nics/index.rst | 1 + drivers/net/enetfec/enet_ethdev.c | 95 ++++++++++++++++ drivers/net/enetfec/enet_ethdev.h | 160 +++++++++++++++++++++++++++ drivers/net/enetfec/enet_pmd_logs.h | 31 ++++++ drivers/net/enetfec/meson.build | 15 +++ drivers/net/enetfec/version.map | 3 + drivers/net/meson.build | 1 + 9 files changed, 435 insertions(+) create mode 100644 doc/guides/nics/enetfec.rst create mode 100644 doc/guides/nics/features/enetfec.ini create mode 100644 drivers/net/enetfec/enet_ethdev.c create mode 100644 drivers/net/enetfec/enet_ethdev.h create mode 100644 drivers/net/enetfec/enet_pmd_logs.h create mode 100644 drivers/net/enetfec/meson.build create mode 100644 drivers/net/enetfec/version.map diff --git a/doc/guides/nics/enetfec.rst b/doc/guides/nics/enetfec.rst new file mode 100644 index 0000000000..f151bb26c4 --- /dev/null +++ b/doc/guides/nics/enetfec.rst @@ -0,0 +1,121 @@ +.. 
SPDX-License-Identifier: BSD-3-Clause
+   Copyright 2021 NXP
+
+ENETFEC Poll Mode Driver
+========================
+
+The ENETFEC NIC PMD (**librte_net_enetfec**) provides poll mode driver
+support for the inbuilt NIC found in the **NXP i.MX 8M Mini** SoC.
+
+More information can be found at the NXP Official Website.
+
+ENETFEC
+-------
+
+This section provides an overview of the NXP ENETFEC and how it is
+integrated into DPDK.
+
+Contents summary:
+
+- ENETFEC overview
+- ENETFEC features
+- Supported ENETFEC SoCs
+- Prerequisites
+- Driver compilation and testing
+- Limitations
+
+ENETFEC Overview
+~~~~~~~~~~~~~~~~
+
+The i.MX 8M Mini Media Applications Processor is built to achieve both
+high performance and low power consumption. ENETFEC is a hardware
+programmable packet forwarding engine that provides a high-performance
+Ethernet interface. The diagram below shows a system-level overview of
+ENETFEC:
+
+   ====================================================+===============
+   US   +-----------------------------------------+    |  Kernel Space
+        |                                         |    |
+        |             ENETFEC Driver              |    |
+        +-----------------------------------------+    |
+                      ^   |                            |
+       ENETFEC   RXQ  |   |  TXQ                       |
+         PMD          |   |                            |
+                      |   v                            |   +----------+
+               +-------------+                         |   | fec-uio  |
+               | net_enetfec |                         |   +----------+
+               +-------------+                         |
+                      ^   |                            |
+                 TXQ  |   |  RXQ                       |
+                      |   |                            |
+                      |   v                            |
+   ====================================================+===============
+               +----------------------------------------+
+               |                                        |   HW
+               |            i.MX 8M MINI EVK            |
+               |               +-----+                  |
+               |               | MAC |                  |
+               +---------------+-----+------------------+
+                               | PHY |
+                               +-----+
+
+The ENETFEC Ethernet driver is a traditional DPDK PMD running in user
+space. The MAC and PHY are hardware blocks. 'fec-uio' is the UIO kernel
+driver; the ENETFEC PMD uses the UIO interface to interact with the
+kernel for PHY initialisation and for mapping the register and buffer
+descriptor (BD) memory allocated in the kernel into DPDK, which gives
+access to the non-cacheable BD memory. net_enetfec is the logical
+Ethernet interface created by the ENETFEC driver.
+
+- The ENETFEC driver registers the device as a virtual device.
+- The RTE framework scans for virtual devices and invokes the probe
+  function of the ENETFEC driver.
+- The probe function sets the basic device registers and also sets up
+  the BD rings.
+- On packet Rx, the respective BD ring status bit is set, which is then
+  used for packet processing.
+- Tx is done first, followed by Rx, via the logical interfaces.
+
+ENETFEC Features
+~~~~~~~~~~~~~~~~
+
+- ARMv8
+
+Supported ENETFEC SoCs
+~~~~~~~~~~~~~~~~~~~~~~
+
+- i.MX 8M Mini
+
+Prerequisites
+~~~~~~~~~~~~~
+
+There are four main prerequisites for executing the ENETFEC PMD on an
+i.MX 8M Mini compatible board:
+
+1. **ARM 64 Tool Chain**
+
+   For example, the `*aarch64* Linaro Toolchain `_.
+
+2. **Linux Kernel**
+
+   It can be obtained from `NXP's Github hosting `_.
+
+3. **Root file system**
+
+   Any *aarch64*-supporting file system can be used. For example,
+   Ubuntu 18.04 LTS (Bionic) or 20.04 LTS (Focal) userland, which can
+   be obtained from `here `_.
+
+4. The Ethernet device will be registered as a virtual device, so the
+   ENETFEC driver depends on the **rte_bus_vdev** library, and it is
+   mandatory to pass `--vdev` with value `net_enetfec` to run a DPDK
+   application.
+
+Driver compilation and testing
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Follow the instructions available in the document
+:ref:`compiling and testing a PMD for a NIC `
+to launch **testpmd**.
+
+Limitations
+~~~~~~~~~~~
+
+- Multiple queues are not supported.
+- Link status is always reported as down.
+- Only a single Ethernet interface is supported.
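
As an illustration of the `--vdev` requirement above (assuming the
fec-uio kernel module is loaded and hugepages are configured; the binary
name and options below are examples, not mandated by this patch),
testpmd might be launched as:

.. code-block:: console

   ./dpdk-testpmd --vdev=net_enetfec -- -i
   testpmd> start
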
diff --git a/doc/guides/nics/features/enetfec.ini b/doc/guides/nics/features/enetfec.ini new file mode 100644 index 0000000000..5700697981 --- /dev/null +++ b/doc/guides/nics/features/enetfec.ini @@ -0,0 +1,8 @@ +; +; Supported features of the 'enetfec' network poll mode driver. +; +; Refer to default.ini for the full list of available PMD features. +; +[Features] +ARMv8 = Y +Usage doc = Y diff --git a/doc/guides/nics/index.rst b/doc/guides/nics/index.rst index 784d5d39f6..777fdab4a0 100644 --- a/doc/guides/nics/index.rst +++ b/doc/guides/nics/index.rst @@ -26,6 +26,7 @@ Network Interface Controller Drivers e1000em ena enetc + enetfec enic fm10k hinic diff --git a/drivers/net/enetfec/enet_ethdev.c b/drivers/net/enetfec/enet_ethdev.c new file mode 100644 index 0000000000..88774788cf --- /dev/null +++ b/drivers/net/enetfec/enet_ethdev.c @@ -0,0 +1,95 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright 2020-2021 NXP + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include "enet_ethdev.h" +#include "enet_pmd_logs.h" + +#define ENETFEC_NAME_PMD net_enetfec +#define ENETFEC_VDEV_GEM_ID_ARG "intf" +#define ENETFEC_CDEV_INVALID_FD -1 + +int enetfec_logtype_pmd; + +static int +enetfec_eth_init(struct rte_eth_dev *dev) +{ + rte_eth_dev_probing_finish(dev); + return 0; +} + +static int +pmd_enetfec_probe(struct rte_vdev_device *vdev) +{ + struct rte_eth_dev *dev = NULL; + struct enetfec_private *fep; + const char *name; + int rc; + + name = rte_vdev_device_name(vdev); + if (name == NULL) + return -EINVAL; + ENETFEC_PMD_LOG(INFO, "Initializing pmd_fec for %s", name); + + dev = rte_eth_vdev_allocate(vdev, sizeof(*fep)); + if (dev == NULL) + return -ENOMEM; + + /* setup board info structure */ + fep = dev->data->dev_private; + fep->dev = dev; + rc = enetfec_eth_init(dev); + if (rc) + goto failed_init; + + return 0; + +failed_init: + ENETFEC_PMD_ERR("Failed to init"); + return rc; +} + +static int +pmd_enetfec_remove(struct rte_vdev_device *vdev) +{ + struct rte_eth_dev *eth_dev = NULL; + int ret; + + /* find the ethdev entry */ + eth_dev = rte_eth_dev_allocated(rte_vdev_device_name(vdev)); + if (eth_dev == NULL) + return -ENODEV; + + ret = rte_eth_dev_release_port(eth_dev); + if (ret != 0) + return -EINVAL; + + ENETFEC_PMD_INFO("Closing sw device"); + return 0; +} + +static struct rte_vdev_driver pmd_enetfec_drv = { + .probe = pmd_enetfec_probe, + .remove = pmd_enetfec_remove, +}; + +RTE_PMD_REGISTER_VDEV(ENETFEC_NAME_PMD, pmd_enetfec_drv); +RTE_PMD_REGISTER_PARAM_STRING(ENETFEC_NAME_PMD, ENETFEC_VDEV_GEM_ID_ARG "="); + +RTE_INIT(enetfec_pmd_init_log) +{ + int ret; + ret = rte_log_register_type_and_pick_level(ENETFEC_LOGTYPE_PREFIX "driver", + RTE_LOG_NOTICE); + enetfec_logtype_pmd = (ret < 0) ? RTE_LOGTYPE_PMD : ret; +} diff --git a/drivers/net/enetfec/enet_ethdev.h b/drivers/net/enetfec/enet_ethdev.h new file mode 100644 index 0000000000..8c61176fb5 --- /dev/null +++ b/drivers/net/enetfec/enet_ethdev.h @@ -0,0 +1,160 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright 2020-2021 NXP + */ + +#ifndef __ENETFEC_ETHDEV_H__ +#define __ENETFEC_ETHDEV_H__ + +#include +#include + +/* Common log type name prefix */ +#define ENETFEC_LOGTYPE_PREFIX "pmd.net.enetfec." + +/* + * ENETFEC with AVB IP can support maximum 3 rx and tx queues. 
+ */ +#define ENETFEC_MAX_Q 3 + +#define BD_LEN 49152 +#define ENETFEC_TX_FR_SIZE 2048 +#define MAX_TX_BD_RING_SIZE 512 /* It should be power of 2 */ +#define MAX_RX_BD_RING_SIZE 512 + +/* full duplex or half duplex */ +#define HALF_DUPLEX 0x00 +#define FULL_DUPLEX 0x01 +#define UNKNOWN_DUPLEX 0xff + +#define PKT_MAX_BUF_SIZE 1984 +#define OPT_FRAME_SIZE (PKT_MAX_BUF_SIZE << 16) +#define ETH_ALEN RTE_ETHER_ADDR_LEN +#define ETH_HLEN RTE_ETHER_HDR_LEN +#define VLAN_HLEN 4 + +struct bufdesc { + uint16_t bd_datlen; /* buffer data length */ + uint16_t bd_sc; /* buffer control & status */ + uint32_t bd_bufaddr; /* buffer address */ +}; + +struct bufdesc_ex { + struct bufdesc desc; + uint32_t bd_esc; + uint32_t bd_prot; + uint32_t bd_bdu; + uint32_t ts; + uint16_t res0[4]; +}; + +struct bufdesc_prop { + int que_id; + /* Addresses of Tx and Rx buffers */ + struct bufdesc *base; + struct bufdesc *last; + struct bufdesc *cur; + void __iomem *active_reg_desc; + uint64_t descr_baseaddr_p; + unsigned short ring_size; + unsigned char d_size; + unsigned char d_size_log2; +}; + +struct enetfec_priv_tx_q { + struct bufdesc_prop bd; + struct rte_mbuf *tx_mbuf[MAX_TX_BD_RING_SIZE]; + struct bufdesc *dirty_tx; + struct rte_mempool *pool; + struct enetfec_private *fep; +}; + +struct enetfec_priv_rx_q { + struct bufdesc_prop bd; + struct rte_mbuf *rx_mbuf[MAX_RX_BD_RING_SIZE]; + struct rte_mempool *pool; + struct enetfec_private *fep; +}; + +/* Buffer descriptors of FEC are used to track the ring buffers. Buffer + * descriptor base is x_bd_base. Currently available buffer are x_cur + * and x_cur. where x is rx or tx. Current buffer is tracked by dirty_tx + * that is sent by the controller. + * The tx_cur and dirty_tx are same in completely full and empty + * conditions. Actual condition is determined by empty & ready bits. + */ +struct enetfec_private { + struct rte_eth_dev *dev; + struct rte_eth_stats stats; + struct rte_mempool *pool; + uint16_t max_rx_queues; + uint16_t max_tx_queues; + unsigned int total_tx_ring_size; + unsigned int total_rx_ring_size; + bool bufdesc_ex; + unsigned int tx_align; + unsigned int rx_align; + int full_duplex; + unsigned int phy_speed; + u_int32_t quirks; + int flag_csum; + int flag_pause; + int flag_wol; + bool rgmii_txc_delay; + bool rgmii_rxc_delay; + int link; + void *hw_baseaddr_v; + uint64_t hw_baseaddr_p; + void *bd_addr_v; + uint64_t bd_addr_p; + uint64_t bd_addr_p_r[ENETFEC_MAX_Q]; + uint64_t bd_addr_p_t[ENETFEC_MAX_Q]; + void *dma_baseaddr_r[ENETFEC_MAX_Q]; + void *dma_baseaddr_t[ENETFEC_MAX_Q]; + uint64_t cbus_size; + unsigned int reg_size; + unsigned int bd_size; + int hw_ts_rx_en; + int hw_ts_tx_en; + struct enetfec_priv_rx_q *rx_queues[ENETFEC_MAX_Q]; + struct enetfec_priv_tx_q *tx_queues[ENETFEC_MAX_Q]; +}; + +#define writel(v, p) ({*(volatile unsigned int *)(p) = (v); }) +#define readl(p) rte_read32(p) + +static inline struct +bufdesc *enet_get_nextdesc(struct bufdesc *bdp, struct bufdesc_prop *bd) +{ + return (bdp >= bd->last) ? bd->base + : (struct bufdesc *)(((void *)bdp) + bd->d_size); +} + +static inline struct +bufdesc *enet_get_prevdesc(struct bufdesc *bdp, struct bufdesc_prop *bd) +{ + return (bdp <= bd->base) ? 
bd->last
+		: (struct bufdesc *)(((void *)bdp) - bd->d_size);
+}
+
+static inline int
+enet_get_bd_index(struct bufdesc *bdp, struct bufdesc_prop *bd)
+{
+	return ((const char *)bdp - (const char *)bd->base) >> bd->d_size_log2;
+}
+
+static inline int
+fls64(unsigned long word)
+{
+	return (64 - __builtin_clzl(word)) - 1;
+}
+
+uint16_t enetfec_recv_pkts(void *rxq1, struct rte_mbuf **rx_pkts,
+		uint16_t nb_pkts);
+uint16_t enetfec_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
+		uint16_t nb_pkts);
+struct bufdesc *enet_get_nextdesc(struct bufdesc *bdp,
+		struct bufdesc_prop *bd);
+int enet_new_rxbdp(struct enetfec_private *fep, struct bufdesc *bdp,
+		struct rte_mbuf *mbuf);
+
+#endif /*__ENETFEC_ETHDEV_H__*/
diff --git a/drivers/net/enetfec/enet_pmd_logs.h b/drivers/net/enetfec/enet_pmd_logs.h
new file mode 100644
index 0000000000..e7b3964a0e
--- /dev/null
+++ b/drivers/net/enetfec/enet_pmd_logs.h
@@ -0,0 +1,31 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2020-2021 NXP
+ */
+
+#ifndef _ENETFEC_LOGS_H_
+#define _ENETFEC_LOGS_H_
+
+extern int enetfec_logtype_pmd;
+
+/* PMD related logs */
+#define ENETFEC_PMD_LOG(level, fmt, args...) \
+	rte_log(RTE_LOG_ ## level, enetfec_logtype_pmd, "\nfec_net: %s()" \
+		fmt "\n", __func__, ##args)
+
+#define PMD_INIT_FUNC_TRACE() ENETFEC_PMD_LOG(DEBUG, " >>")
+
+#define ENETFEC_PMD_DEBUG(fmt, args...) \
+	ENETFEC_PMD_LOG(DEBUG, fmt, ## args)
+#define ENETFEC_PMD_ERR(fmt, args...) \
+	ENETFEC_PMD_LOG(ERR, fmt, ## args)
+#define ENETFEC_PMD_INFO(fmt, args...) \
+	ENETFEC_PMD_LOG(INFO, fmt, ## args)
+
+#define ENETFEC_PMD_WARN(fmt, args...) \
+	ENETFEC_PMD_LOG(WARNING, fmt, ## args)
+
+/* DP Logs, toggled out at compile time if level lower than current level */
+#define ENETFEC_DP_LOG(level, fmt, args...)
\ + RTE_LOG_DP(level, PMD, fmt, ## args) + +#endif /* _ENETFEC_LOGS_H_ */ diff --git a/drivers/net/enetfec/meson.build b/drivers/net/enetfec/meson.build new file mode 100644 index 0000000000..252bf83309 --- /dev/null +++ b/drivers/net/enetfec/meson.build @@ -0,0 +1,15 @@ +# SPDX-License-Identifier: BSD-3-Clause +# Copyright 2021 NXP + +if not is_linux + build = false + reason = 'only supported on linux' +endif + +deps += ['common_dpaax'] + +sources = files('enet_ethdev.c') + +if cc.has_argument('-Wno-pointer-arith') + cflags += '-Wno-pointer-arith' +endif diff --git a/drivers/net/enetfec/version.map b/drivers/net/enetfec/version.map new file mode 100644 index 0000000000..170c04fe53 --- /dev/null +++ b/drivers/net/enetfec/version.map @@ -0,0 +1,3 @@ +DPDK_20.0 { + local: *; +}; diff --git a/drivers/net/meson.build b/drivers/net/meson.build index bcf488f203..92f433d5e8 100644 --- a/drivers/net/meson.build +++ b/drivers/net/meson.build @@ -19,6 +19,7 @@ drivers = [ 'e1000', 'ena', 'enetc', + 'enetfec', 'enic', 'failsafe', 'fm10k', From patchwork Thu Sep 2 17:59:52 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Apeksha Gupta X-Patchwork-Id: 97838 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 82B94A0C4C; Thu, 2 Sep 2021 20:01:07 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id BAAA840698; Thu, 2 Sep 2021 20:01:03 +0200 (CEST) Received: from inva020.nxp.com (inva020.nxp.com [92.121.34.13]) by mails.dpdk.org (Postfix) with ESMTP id 3995C4068A for ; Thu, 2 Sep 2021 20:01:02 +0200 (CEST) Received: from inva020.nxp.com (localhost [127.0.0.1]) by inva020.eu-rdc02.nxp.com (Postfix) with ESMTP id 0A6E41A1F24; Thu, 2 Sep 2021 20:01:02 +0200 (CEST) Received: from aprdc01srsp001v.ap-rdc01.nxp.com (aprdc01srsp001v.ap-rdc01.nxp.com [165.114.16.16]) by inva020.eu-rdc02.nxp.com (Postfix) with ESMTP id 98FCF1A1F22; Thu, 2 Sep 2021 20:01:01 +0200 (CEST) Received: from lsv03186.swis.in-blr01.nxp.com (lsv03186.swis.in-blr01.nxp.com [92.120.146.182]) by aprdc01srsp001v.ap-rdc01.nxp.com (Postfix) with ESMTP id A6386183AC8B; Fri, 3 Sep 2021 02:01:00 +0800 (+08) From: Apeksha Gupta To: andrew.rybchenko@oktetlabs.ru, ferruh.yigit@intel.com Cc: dev@dpdk.org, hemant.agrawal@nxp.com, sachin.saxena@nxp.com, Apeksha Gupta Date: Thu, 2 Sep 2021 23:29:52 +0530 Message-Id: <20210902175955.9202-3-apeksha.gupta@nxp.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20210902175955.9202-1-apeksha.gupta@nxp.com> References: <20210902175955.9202-1-apeksha.gupta@nxp.com> X-Virus-Scanned: ClamAV using ClamSMTP Subject: [dpdk-dev] [PATCH v2 2/5] net/enetfec: add UIO support X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Implemented the fec-uio driver in kernel. enetfec PMD uses UIO interface to interact with "fec-uio" driver implemented in kernel for PHY initialisation and for mapping the allocated memory of register & BD from kernel to DPDK which gives access to non-cacheable memory for BD. 
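
The mapping scheme described above follows the standard Linux UIO
convention: map N of /dev/uioX is selected by an mmap() offset of
N * page_size, and each map's size is exported in sysfs. A minimal
standalone sketch of that convention (device index 0 and map index 1,
the BD area, are illustrative assumptions, not values fixed by this
patch):

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	char buf[64];

	/* size of map1 as exported by the UIO kernel driver */
	FILE *f = fopen("/sys/class/uio/uio0/maps/map1/size", "r");
	if (f == NULL)
		return 1;
	if (fgets(buf, sizeof(buf), f) == NULL) {
		fclose(f);
		return 1;
	}
	fclose(f);
	size_t size = strtoul(buf, NULL, 16);

	int fd = open("/dev/uio0", O_RDWR);
	if (fd < 0)
		return 1;
	/* offset = map_id * page_size selects which UIO region is mapped */
	void *bd = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED,
			fd, 1 * sysconf(_SC_PAGESIZE));
	if (bd == MAP_FAILED)
		return 1;
	printf("BD region mapped at %p, size 0x%zx\n", bd, size);
	munmap(bd, size);
	close(fd);
	return 0;
}
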
Signed-off-by: Sachin Saxena Signed-off-by: Apeksha Gupta --- drivers/net/enetfec/enet_ethdev.c | 232 ++++++++++++++++++++++++++++++ drivers/net/enetfec/enet_ethdev.h | 2 + drivers/net/enetfec/enet_regs.h | 108 ++++++++++++++ drivers/net/enetfec/enet_uio.c | 200 ++++++++++++++++++++++++++ drivers/net/enetfec/enet_uio.h | 54 +++++++ drivers/net/enetfec/meson.build | 3 +- 6 files changed, 598 insertions(+), 1 deletion(-) create mode 100644 drivers/net/enetfec/enet_regs.h create mode 100644 drivers/net/enetfec/enet_uio.c create mode 100644 drivers/net/enetfec/enet_uio.h diff --git a/drivers/net/enetfec/enet_ethdev.c b/drivers/net/enetfec/enet_ethdev.c index 88774788cf..673361e3f8 100644 --- a/drivers/net/enetfec/enet_ethdev.c +++ b/drivers/net/enetfec/enet_ethdev.c @@ -12,19 +12,223 @@ #include #include #include +#include #include "enet_ethdev.h" #include "enet_pmd_logs.h" +#include "enet_regs.h" +#include "enet_uio.h" #define ENETFEC_NAME_PMD net_enetfec #define ENETFEC_VDEV_GEM_ID_ARG "intf" #define ENETFEC_CDEV_INVALID_FD -1 +#define BIT(nr) (1u << (nr)) + +/* FEC receive acceleration */ +#define ENETFEC_RACC_IPDIS BIT(1) +#define ENETFEC_RACC_PRODIS BIT(2) +#define ENETFEC_RACC_SHIFT16 BIT(7) +#define ENETFEC_RACC_OPTIONS (ENETFEC_RACC_IPDIS | \ + ENETFEC_RACC_PRODIS) + +#define ENETFEC_PAUSE_FLAG_AUTONEG 0x1 +#define ENETFEC_PAUSE_FLAG_ENABLE 0x2 + +/* Pause frame feild and FIFO threshold */ +#define ENETFEC_FCE BIT(5) +#define ENETFEC_RSEM_V 0x84 +#define ENETFEC_RSFL_V 16 +#define ENETFEC_RAEM_V 0x8 +#define ENETFEC_RAFL_V 0x8 +#define ENETFEC_OPD_V 0xFFF0 + +#define NUM_OF_QUEUES 6 int enetfec_logtype_pmd; +uint32_t e_cntl; + +/* + * This function is called to start or restart the ENETFEC during a link + * change, transmit timeout, or to reconfigure the ENETFEC. The network + * packet processing for this device must be stopped before this call. + */ +static void +enetfec_restart(struct rte_eth_dev *dev) +{ + struct enetfec_private *fep = dev->data->dev_private; + uint32_t temp_mac[2]; + uint32_t rcntl = OPT_FRAME_SIZE | 0x04; + uint32_t ecntl = ENETFEC_ETHEREN; + + /* default mac address */ + struct rte_ether_addr addr = { + .addr_bytes = {0x1, 0x2, 0x3, 0x4, 0x5, 0x6} }; + uint32_t val; + + /* + * enet-mac reset will reset mac address registers too, + * so need to reconfigure it. + */ + memcpy(&temp_mac, addr.addr_bytes, ETH_ALEN); + rte_write32(rte_cpu_to_be_32(temp_mac[0]), + fep->hw_baseaddr_v + ENETFEC_PALR); + rte_write32(rte_cpu_to_be_32(temp_mac[1]), + fep->hw_baseaddr_v + ENETFEC_PAUR); + + /* Clear any outstanding interrupt. */ + writel(0xffffffff, fep->hw_baseaddr_v + ENETFEC_EIR); + + /* Enable MII mode */ + if (fep->full_duplex == FULL_DUPLEX) { + /* FD enable */ + rte_write32(rte_cpu_to_le_32(0x04), + fep->hw_baseaddr_v + ENETFEC_TCR); + } else { + /* No Rcv on Xmit */ + rcntl |= 0x02; + rte_write32(0, fep->hw_baseaddr_v + ENETFEC_TCR); + } + + if (fep->quirks & QUIRK_RACC) { + val = rte_read32(fep->hw_baseaddr_v + ENETFEC_RACC); + /* align IP header */ + val |= ENETFEC_RACC_SHIFT16; + val &= ~ENETFEC_RACC_OPTIONS; + rte_write32(rte_cpu_to_le_32(val), + fep->hw_baseaddr_v + ENETFEC_RACC); + rte_write32(rte_cpu_to_le_32(PKT_MAX_BUF_SIZE), + fep->hw_baseaddr_v + ENETFEC_FRAME_TRL); + } + + /* + * The phy interface and speed need to get configured + * differently on enet-mac. 
+ */ + if (fep->quirks & QUIRK_HAS_ENETFEC_MAC) { + /* Enable flow control and length check */ + rcntl |= 0x40000000 | 0x00000020; + + /* RGMII, RMII or MII */ + rcntl |= BIT(6); + ecntl |= BIT(5); + } + + /* enable pause frame*/ + if ((fep->flag_pause & ENETFEC_PAUSE_FLAG_ENABLE) || + ((fep->flag_pause & ENETFEC_PAUSE_FLAG_AUTONEG) + /*&& ndev->phydev && ndev->phydev->pause*/)) { + rcntl |= ENETFEC_FCE; + + /* set FIFO threshold parameter to reduce overrun */ + rte_write32(rte_cpu_to_le_32(ENETFEC_RSEM_V), + fep->hw_baseaddr_v + ENETFEC_R_FIFO_SEM); + rte_write32(rte_cpu_to_le_32(ENETFEC_RSFL_V), + fep->hw_baseaddr_v + ENETFEC_R_FIFO_SFL); + rte_write32(rte_cpu_to_le_32(ENETFEC_RAEM_V), + fep->hw_baseaddr_v + ENETFEC_R_FIFO_AEM); + rte_write32(rte_cpu_to_le_32(ENETFEC_RAFL_V), + fep->hw_baseaddr_v + ENETFEC_R_FIFO_AFL); + + /* OPD */ + rte_write32(rte_cpu_to_le_32(ENETFEC_OPD_V), + fep->hw_baseaddr_v + ENETFEC_OPD); + } else { + rcntl &= ~ENETFEC_FCE; + } + + rte_write32(rte_cpu_to_le_32(rcntl), fep->hw_baseaddr_v + ENETFEC_RCR); + + rte_write32(0, fep->hw_baseaddr_v + ENETFEC_IAUR); + rte_write32(0, fep->hw_baseaddr_v + ENETFEC_IALR); + + if (fep->quirks & QUIRK_HAS_ENETFEC_MAC) { + /* enable ENETFEC endian swap */ + ecntl |= (1 << 8); + /* enable ENETFEC store and forward mode */ + rte_write32(rte_cpu_to_le_32(1 << 8), + fep->hw_baseaddr_v + ENETFEC_TFWR); + } + if (fep->bufdesc_ex) + ecntl |= (1 << 4); + if (fep->quirks & QUIRK_SUPPORT_DELAYED_CLKS && + fep->rgmii_txc_delay) + ecntl |= ENETFEC_TXC_DLY; + if (fep->quirks & QUIRK_SUPPORT_DELAYED_CLKS && + fep->rgmii_rxc_delay) + ecntl |= ENETFEC_RXC_DLY; + + /* Enable the MIB statistic event counters */ + rte_write32(0, fep->hw_baseaddr_v + ENETFEC_MIBC); + + ecntl |= 0x70000000; + e_cntl = ecntl; + /* And last, enable the transmit and receive processing */ + rte_write32(rte_cpu_to_le_32(ecntl), fep->hw_baseaddr_v + ENETFEC_ECR); + rte_delay_us(10); +} + +static int +enetfec_eth_configure(__rte_unused struct rte_eth_dev *dev) +{ + if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC) + ENETFEC_PMD_ERR("PMD does not support KEEP_CRC offload"); + + return 0; +} + +static int +enetfec_eth_start(struct rte_eth_dev *dev) +{ + enetfec_restart(dev); + + return 0; +} + +/* ENETFEC enable function. + * @param[in] base ENETFEC base address + */ +void +enetfec_enable(void *base) +{ + rte_write32(rte_read32(base + ENETFEC_ECR) | e_cntl, + base + ENETFEC_ECR); +} + +/* ENETFEC disable function. 
+ * @param[in] base ENETFEC base address + */ +void +enetfec_disable(void *base) +{ + rte_write32(rte_read32(base + ENETFEC_ECR) & ~e_cntl, + base + ENETFEC_ECR); +} + +static int +enetfec_eth_stop(__rte_unused struct rte_eth_dev *dev) +{ + struct enetfec_private *fep = dev->data->dev_private; + + dev->data->dev_started = 0; + enetfec_disable(fep->hw_baseaddr_v); + + return 0; +} + +static const struct eth_dev_ops enetfec_ops = { + .dev_configure = enetfec_eth_configure, + .dev_start = enetfec_eth_start, + .dev_stop = enetfec_eth_stop +}; static int enetfec_eth_init(struct rte_eth_dev *dev) { + struct enetfec_private *fep = dev->data->dev_private; + + fep->full_duplex = FULL_DUPLEX; + dev->dev_ops = &enetfec_ops; rte_eth_dev_probing_finish(dev); + return 0; } @@ -35,6 +239,8 @@ pmd_enetfec_probe(struct rte_vdev_device *vdev) struct enetfec_private *fep; const char *name; int rc; + int i; + unsigned int bdsize; name = rte_vdev_device_name(vdev); if (name == NULL) @@ -48,6 +254,32 @@ pmd_enetfec_probe(struct rte_vdev_device *vdev) /* setup board info structure */ fep = dev->data->dev_private; fep->dev = dev; + + fep->max_rx_queues = ENETFEC_MAX_Q; + fep->max_tx_queues = ENETFEC_MAX_Q; + fep->quirks = QUIRK_HAS_ENETFEC_MAC | QUIRK_GBIT | QUIRK_BUFDESC_EX + | QUIRK_RACC; + + rc = config_enetfec_uio(fep); + if (rc != 0) + return -ENOMEM; + + /* Get the BD size for distributing among six queues */ + bdsize = (fep->bd_size) / NUM_OF_QUEUES; + + for (i = 0; i < fep->max_tx_queues; i++) { + fep->dma_baseaddr_t[i] = fep->bd_addr_v; + fep->bd_addr_p_t[i] = fep->bd_addr_p; + fep->bd_addr_v = fep->bd_addr_v + bdsize; + fep->bd_addr_p = fep->bd_addr_p + bdsize; + } + for (i = 0; i < fep->max_rx_queues; i++) { + fep->dma_baseaddr_r[i] = fep->bd_addr_v; + fep->bd_addr_p_r[i] = fep->bd_addr_p; + fep->bd_addr_v = fep->bd_addr_v + bdsize; + fep->bd_addr_p = fep->bd_addr_p + bdsize; + } + rc = enetfec_eth_init(dev); if (rc) goto failed_init; diff --git a/drivers/net/enetfec/enet_ethdev.h b/drivers/net/enetfec/enet_ethdev.h index 8c61176fb5..c94baaf811 100644 --- a/drivers/net/enetfec/enet_ethdev.h +++ b/drivers/net/enetfec/enet_ethdev.h @@ -156,5 +156,7 @@ struct bufdesc *enet_get_nextdesc(struct bufdesc *bdp, struct bufdesc_prop *bd); int enet_new_rxbdp(struct enetfec_private *fep, struct bufdesc *bdp, struct rte_mbuf *mbuf); +void enetfec_enable(void *base); +void enetfec_disable(void *base); #endif /*__ENETFEC_ETHDEV_H__*/ diff --git a/drivers/net/enetfec/enet_regs.h b/drivers/net/enetfec/enet_regs.h new file mode 100644 index 0000000000..5665e19dd3 --- /dev/null +++ b/drivers/net/enetfec/enet_regs.h @@ -0,0 +1,108 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright 2020 NXP + */ + +#ifndef __ENETFEC_REGS_H +#define __ENETFEC_REGS_H + +/* Ethernet receive use control and status of buffer descriptor + */ +#define RX_BD_TR ((ushort)0x0001) /* Truncated */ +#define RX_BD_OV ((ushort)0x0002) /* Over-run */ +#define RX_BD_CR ((ushort)0x0004) /* CRC or Frame error */ +#define RX_BD_SH ((ushort)0x0008) /* Reserved */ +#define RX_BD_NO ((ushort)0x0010) /* Rcvd non-octet aligned frame */ +#define RX_BD_LG ((ushort)0x0020) /* Rcvd frame length voilation */ +#define RX_BD_FIRST ((ushort)0x0400) /* Reserved */ +#define RX_BD_LAST ((ushort)0x0800) /* last buffer in the frame */ +#define RX_BD_INT 0x00800000 +#define RX_BD_ICE 0x00000020 +#define RX_BD_PCR 0x00000010 + +/* + * 0 The next BD in consecutive location + * 1 The next BD in ENETFECn_RDSR. 
+ */ +#define RX_BD_WRAP ((ushort)0x2000) +#define RX_BD_EMPTY ((ushort)0x8000) /* BD is empty */ +#define RX_BD_STATS ((ushort)0x013f) /* All buffer descriptor status bits */ + +/* Ethernet transmit use control and status of buffer descriptor */ +#define TX_BD_TC ((ushort)0x0400) /* Transmit CRC */ +#define TX_BD_LAST ((ushort)0x0800) /* Last in frame */ +#define TX_BD_READY ((ushort)0x8000) /* Data is ready */ +#define TX_BD_STATS ((ushort)0x0fff) /* All buffer descriptor status bits */ +#define TX_BD_WRAP ((ushort)0x2000) + +/* Ethernet transmit use control and status of enhanced buffer descriptor */ +#define TX_BD_IINS 0x08000000 +#define TX_BD_PINS 0x10000000 + +#define ENETFEC_RD_START(X) (((X) == 1) ? ENETFEC_RD_START_1 : \ + (((X) == 2) ? \ + ENETFEC_RD_START_2 : ENETFEC_RD_START_0)) +#define ENETFEC_TD_START(X) (((X) == 1) ? ENETFEC_TD_START_1 : \ + (((X) == 2) ? \ + ENETFEC_TD_START_2 : ENETFEC_TD_START_0)) +#define ENETFEC_MRB_SIZE(X) (((X) == 1) ? ENETFEC_MRB_SIZE_1 : \ + (((X) == 2) ? \ + ENETFEC_MRB_SIZE_2 : ENETFEC_MRB_SIZE_0)) + +#define ENETFEC_ETHEREN ((uint)0x00000002) +#define ENETFEC_TXC_DLY ((uint)0x00010000) +#define ENETFEC_RXC_DLY ((uint)0x00020000) + +/* ENETFEC MAC is in controller */ +#define QUIRK_HAS_ENETFEC_MAC (1 << 0) +/* GBIT supported in controller */ +#define QUIRK_GBIT (1 << 3) +/* Controller has extended descriptor buffer */ +#define QUIRK_BUFDESC_EX (1 << 4) +/* RACC register supported by controller */ +#define QUIRK_RACC (1 << 12) +/* i.MX8 ENETFEC IP version added the feature to generate the delayed TXC or + * RXC. For its implementation, ENETFEC uses synchronized clocks (250MHz) for + * generating delay of 2ns. + */ +#define QUIRK_SUPPORT_DELAYED_CLKS (1 << 18) + +#define ENETFEC_EIR 0x004 /* Interrupt event register */ +#define ENETFEC_EIMR 0x008 /* Interrupt mask register */ +#define ENETFEC_RDAR_0 0x010 /* Receive descriptor active register ring0 */ +#define ENETFEC_TDAR_0 0x014 /* Transmit descriptor active register ring0 */ +#define ENETFEC_ECR 0x024 /* Ethernet control register */ +#define ENETFEC_MSCR 0x044 /* MII speed control register */ +#define ENETFEC_MIBC 0x064 /* MIB control and status register */ +#define ENETFEC_RCR 0x084 /* Receive control register */ +#define ENETFEC_TCR 0x0c4 /* Transmit Control register */ +#define ENETFEC_PALR 0x0e4 /* MAC address low 32 bits */ +#define ENETFEC_PAUR 0x0e8 /* MAC address high 16 bits */ +#define ENETFEC_OPD 0x0ec /* Opcode/Pause duration register */ +#define ENETFEC_IAUR 0x118 /* hash table 32 bits high */ +#define ENETFEC_IALR 0x11c /* hash table 32 bits low */ +#define ENETFEC_GAUR 0x120 /* grp hash table 32 bits high */ +#define ENETFEC_GALR 0x124 /* grp hash table 32 bits low */ +#define ENETFEC_TFWR 0x144 /* transmit FIFO water_mark */ +#define ENETFEC_RACC 0x1c4 /* Receive Accelerator function configuration*/ +#define ENETFEC_DMA1CFG 0x1d8 /* DMA class based configuration ring1 */ +#define ENETFEC_DMA2CFG 0x1dc /* DMA class based Configuration ring2 */ +#define ENETFEC_RDAR_1 0x1e0 /* Rx descriptor active register ring1 */ +#define ENETFEC_TDAR_1 0x1e4 /* Tx descriptor active register ring1 */ +#define ENETFEC_RDAR_2 0x1e8 /* Rx descriptor active register ring2 */ +#define ENETFEC_TDAR_2 0x1ec /* Tx descriptor active register ring2 */ +#define ENETFEC_RD_START_1 0x160 /* Receive descriptor ring1 start reg */ +#define ENETFEC_TD_START_1 0x164 /* Transmit descriptor ring1 start reg */ +#define ENETFEC_MRB_SIZE_1 0x168 /* Max receive buffer size reg ring1 */ +#define ENETFEC_RD_START_2 0x16c 
/* Receive descriptor ring2 start reg */ +#define ENETFEC_TD_START_2 0x170 /* Transmit descriptor ring2 start reg */ +#define ENETFEC_MRB_SIZE_2 0x174 /* Max receive buffer size reg ring2 */ +#define ENETFEC_RD_START_0 0x180 /* Receive descriptor ring0 start reg */ +#define ENETFEC_TD_START_0 0x184 /* Transmit descriptor ring0 start reg */ +#define ENETFEC_MRB_SIZE_0 0x188 /* Max receive buffer size reg ring0*/ +#define ENETFEC_R_FIFO_SFL 0x190 /* Rx FIFO full threshold */ +#define ENETFEC_R_FIFO_SEM 0x194 /* Rx FIFO empty threshold */ +#define ENETFEC_R_FIFO_AEM 0x198 /* Rx FIFO almost empty threshold */ +#define ENETFEC_R_FIFO_AFL 0x19c /* Rx FIFO almost full threshold */ +#define ENETFEC_FRAME_TRL 0x1b0 /* Frame truncation length */ + +#endif /*__ENETFEC_REGS_H */ diff --git a/drivers/net/enetfec/enet_uio.c b/drivers/net/enetfec/enet_uio.c new file mode 100644 index 0000000000..10567839a6 --- /dev/null +++ b/drivers/net/enetfec/enet_uio.c @@ -0,0 +1,200 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright 2021 NXP + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include +#include +#include "enet_pmd_logs.h" +#include "enet_uio.h" + +static struct uio_job enetfec_uio_job; +int count; + +/* + * @brief Reads first line from a file. + * Composes file name as: root/subdir/filename + * + * @param [in] root Root path + * @param [in] subdir Subdirectory name + * @param [in] filename File name + * @param [out] line The first line read from file. + * + * @retval 0 for success + * @retval other value for error + */ +static int +file_read_first_line(const char root[], const char subdir[], + const char filename[], char *line) +{ + char absolute_file_name[FEC_UIO_MAX_ATTR_FILE_NAME]; + int fd = 0, ret = 0; + + /*compose the file name: root/subdir/filename */ + memset(absolute_file_name, 0, sizeof(absolute_file_name)); + snprintf(absolute_file_name, FEC_UIO_MAX_ATTR_FILE_NAME, + "%s/%s/%s", root, subdir, filename); + + fd = open(absolute_file_name, O_RDONLY); + if (fd <= 0) + ENETFEC_PMD_ERR("Error opening file %s", absolute_file_name); + + /* read UIO device name from first line in file */ + ret = read(fd, line, FEC_UIO_MAX_DEVICE_FILE_NAME_LENGTH); + if (ret <= 0) { + ENETFEC_PMD_ERR("Error reading file %s", absolute_file_name); + return ret; + } + close(fd); + + /* NULL-ify string */ + line[ret] = '\0'; + + return 0; +} + +/* + * @brief Maps rx-tx bd range assigned for a bd ring. + * + * @param [in] uio_device_fd UIO device file descriptor + * @param [in] uio_device_id UIO device id + * @param [in] uio_map_id UIO allows maximum 5 different mapping for + each device. Maps start with id 0. + * @param [out] map_size Map size. 
+ * @param [out] map_addr Map physical address + * + * @retval NULL if failed to map registers + * @retval Virtual address for mapped register address range + */ +static void * +uio_map_mem(int uio_device_fd, int uio_device_id, + int uio_map_id, int *map_size, uint64_t *map_addr) +{ + void *mapped_address = NULL; + unsigned int uio_map_size = 0; + unsigned int uio_map_p_addr = 0; + char uio_sys_root[FEC_UIO_MAX_ATTR_FILE_NAME]; + char uio_sys_map_subdir[FEC_UIO_MAX_ATTR_FILE_NAME]; + char uio_map_size_str[FEC_UIO_MAX_DEVICE_FILE_NAME_LENGTH + 1]; + char uio_map_p_addr_str[32]; + int ret = 0; + + /* compose the file name: root/subdir/filename */ + memset(uio_sys_root, 0, sizeof(uio_sys_root)); + memset(uio_sys_map_subdir, 0, sizeof(uio_sys_map_subdir)); + memset(uio_map_size_str, 0, sizeof(uio_map_size_str)); + memset(uio_map_p_addr_str, 0, sizeof(uio_map_p_addr_str)); + + /* Compose string: /sys/class/uio/uioX */ + snprintf(uio_sys_root, sizeof(uio_sys_root), "%s/%s%d", + FEC_UIO_DEVICE_SYS_ATTR_PATH, "uio", uio_device_id); + /* Compose string: maps/mapY */ + snprintf(uio_sys_map_subdir, sizeof(uio_sys_map_subdir), "%s%d", + FEC_UIO_DEVICE_SYS_MAP_ATTR, uio_map_id); + + /* Read first (and only) line from file + * /sys/class/uio/uioX/maps/mapY/size + */ + ret = file_read_first_line(uio_sys_root, uio_sys_map_subdir, + "size", uio_map_size_str); + if (ret < 0) { + ENETFEC_PMD_ERR("file_read_first_line() failed"); + return NULL; + } + ret = file_read_first_line(uio_sys_root, uio_sys_map_subdir, + "addr", uio_map_p_addr_str); + if (ret < 0) { + ENETFEC_PMD_ERR("file_read_first_line() failed"); + return NULL; + } + /* Read mapping size and physical address expressed in hexa(base 16) */ + uio_map_size = strtol(uio_map_size_str, NULL, 16); + uio_map_p_addr = strtol(uio_map_p_addr_str, NULL, 16); + + if (uio_map_id == 0) { + /* Map the register address in user space when map_id is 0 */ + mapped_address = mmap(0 /*dynamically choose virtual address */, + uio_map_size, PROT_READ | PROT_WRITE, + MAP_SHARED, uio_device_fd, 0); + } else { + /* Map the BD memory in user space */ + mapped_address = mmap(NULL, uio_map_size, + PROT_READ | PROT_WRITE, + MAP_SHARED, uio_device_fd, (1 * MAP_PAGE_SIZE)); + } + + if (mapped_address == MAP_FAILED) { + ENETFEC_PMD_ERR("Failed to map! 
errno = %d uio job fd = %d," + "uio device id = %d, uio map id = %d", errno, + uio_device_fd, uio_device_id, uio_map_id); + return NULL; + } + + /* Save the map size to use it later on for munmap-ing */ + *map_size = uio_map_size; + *map_addr = uio_map_p_addr; + ENETFEC_PMD_INFO("UIO dev[%d] mapped region [id =%d] size 0x%x at %p", + uio_device_id, uio_map_id, uio_map_size, mapped_address); + + return mapped_address; +} + +int +config_enetfec_uio(struct enetfec_private *fep) +{ + char uio_device_file_name[32]; + struct uio_job *uio_job = NULL; + + /* Mapping is done only one time */ + if (count > 0) { + printf("Mapped!\n"); + return 0; + } + + uio_job = &enetfec_uio_job; + + /* Find UIO device created by ENETFEC-UIO kernel driver */ + memset(uio_device_file_name, 0, sizeof(uio_device_file_name)); + snprintf(uio_device_file_name, sizeof(uio_device_file_name), "%s%d", + FEC_UIO_DEVICE_FILE_NAME, uio_job->uio_minor_number); + + /* Open device file */ + uio_job->uio_fd = open(uio_device_file_name, O_RDWR); + if (uio_job->uio_fd < 0) { + printf("US_UIO: Open Failed\n"); + exit(1); + } + + ENETFEC_PMD_INFO("US_UIO: Open device(%s) file with uio_fd = %d", + uio_device_file_name, uio_job->uio_fd); + + fep->hw_baseaddr_v = uio_map_mem(uio_job->uio_fd, + uio_job->uio_minor_number, FEC_UIO_REG_MAP_ID, + &uio_job->map_size, &uio_job->map_addr); + if (fep->hw_baseaddr_v == NULL) + return -ENOMEM; + fep->hw_baseaddr_p = uio_job->map_addr; + fep->reg_size = uio_job->map_size; + + fep->bd_addr_v = uio_map_mem(uio_job->uio_fd, + uio_job->uio_minor_number, FEC_UIO_BD_MAP_ID, + &uio_job->map_size, &uio_job->map_addr); + if (fep->hw_baseaddr_v == NULL) + return -ENOMEM; + fep->bd_addr_p = uio_job->map_addr; + fep->bd_size = uio_job->map_size; + + count++; + + return 0; +} diff --git a/drivers/net/enetfec/enet_uio.h b/drivers/net/enetfec/enet_uio.h new file mode 100644 index 0000000000..b220cae9dd --- /dev/null +++ b/drivers/net/enetfec/enet_uio.h @@ -0,0 +1,54 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright 2021 NXP + */ + +#include "enet_ethdev.h" + +/* Prefix path to sysfs directory where UIO device attributes are exported. + * Path for UIO device X is /sys/class/uio/uioX + */ +#define FEC_UIO_DEVICE_SYS_ATTR_PATH "/sys/class/uio" + +/* Subfolder in sysfs where mapping attributes are exported + * for each UIO device. Path for mapping Y for device X is: + * /sys/class/uio/uioX/maps/mapY + */ +#define FEC_UIO_DEVICE_SYS_MAP_ATTR "maps/map" + +/* Name of UIO device file prefix. Each UIO device will have a device file + * /dev/uioX, where X is the minor device number. + */ +#define FEC_UIO_DEVICE_FILE_NAME "/dev/uio" + +/* Maximum length for the name of an UIO device file. + * Device file name format is: /dev/uioX. + */ +#define FEC_UIO_MAX_DEVICE_FILE_NAME_LENGTH 30 + +/* Maximum length for the name of an attribute file for an UIO device. + * Attribute files are exported in sysfs and have the name formatted as: + * /sys/class/uio/uioX/ + */ +#define FEC_UIO_MAX_ATTR_FILE_NAME 100 + +/* The id for the mapping used to export ENETFEC registers and BD memory to + * user space through UIO device. 
+ */ +#define FEC_UIO_REG_MAP_ID 0 +#define FEC_UIO_BD_MAP_ID 1 + +#define MAP_PAGE_SIZE 4096 + +struct uio_job { + uint32_t fec_id; + int uio_fd; + void *bd_start_addr; + void *register_base_addr; + int map_size; + uint64_t map_addr; + int uio_minor_number; +}; + +int config_enetfec_uio(struct enetfec_private *fep); +void enetfec_uio_init(void); +void enetfec_cleanup(void); diff --git a/drivers/net/enetfec/meson.build b/drivers/net/enetfec/meson.build index 252bf83309..05183bd441 100644 --- a/drivers/net/enetfec/meson.build +++ b/drivers/net/enetfec/meson.build @@ -8,7 +8,8 @@ endif deps += ['common_dpaax'] -sources = files('enet_ethdev.c') +sources = files('enet_ethdev.c', + 'enet_uio.c') if cc.has_argument('-Wno-pointer-arith') cflags += '-Wno-pointer-arith' From patchwork Thu Sep 2 17:59:53 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Apeksha Gupta X-Patchwork-Id: 97839 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id B1C44A0C4C; Thu, 2 Sep 2021 20:01:16 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 2899A410DC; Thu, 2 Sep 2021 20:01:05 +0200 (CEST) Received: from inva021.nxp.com (inva021.nxp.com [92.121.34.21]) by mails.dpdk.org (Postfix) with ESMTP id ECF1640DDE for ; Thu, 2 Sep 2021 20:01:03 +0200 (CEST) Received: from inva021.nxp.com (localhost [127.0.0.1]) by inva021.eu-rdc02.nxp.com (Postfix) with ESMTP id C7C2E201F4D; Thu, 2 Sep 2021 20:01:03 +0200 (CEST) Received: from aprdc01srsp001v.ap-rdc01.nxp.com (aprdc01srsp001v.ap-rdc01.nxp.com [165.114.16.16]) by inva021.eu-rdc02.nxp.com (Postfix) with ESMTP id 64088201F51; Thu, 2 Sep 2021 20:01:03 +0200 (CEST) Received: from lsv03186.swis.in-blr01.nxp.com (lsv03186.swis.in-blr01.nxp.com [92.120.146.182]) by aprdc01srsp001v.ap-rdc01.nxp.com (Postfix) with ESMTP id 70777183AC89; Fri, 3 Sep 2021 02:01:02 +0800 (+08) From: Apeksha Gupta To: andrew.rybchenko@oktetlabs.ru, ferruh.yigit@intel.com Cc: dev@dpdk.org, hemant.agrawal@nxp.com, sachin.saxena@nxp.com, Apeksha Gupta Date: Thu, 2 Sep 2021 23:29:53 +0530 Message-Id: <20210902175955.9202-4-apeksha.gupta@nxp.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20210902175955.9202-1-apeksha.gupta@nxp.com> References: <20210902175955.9202-1-apeksha.gupta@nxp.com> X-Virus-Scanned: ClamAV using ClamSMTP Subject: [dpdk-dev] [PATCH v2 3/5] net/enetfec: support queue configuration X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" This patch adds Rx/Tx queue configuration setup operations. On packet reception the respective BD Ring status bit is set which is then used for packet processing. 
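
The BD ring convention that the queue setup below implements can be
summarized in a short, host-memory-only sketch (hardware register writes
and mbuf handling omitted; the struct and sizes are simplified
illustrations mirroring the RX_BD_EMPTY/RX_BD_WRAP bits from
enet_regs.h):

#include <stdint.h>

#define RING_SIZE 8
#define BD_EMPTY  0x8000	/* hardware owns the descriptor */
#define BD_WRAP   0x2000	/* last descriptor: wrap to ring base */

struct bd {
	uint16_t datlen;	/* buffer data length */
	uint16_t sc;		/* control/status */
	uint32_t bufaddr;	/* IOVA of the attached buffer */
};

static void rx_ring_init(struct bd *ring, const uint32_t *buf_iova)
{
	for (int i = 0; i < RING_SIZE; i++) {
		ring[i].bufaddr = buf_iova[i];
		ring[i].datlen = 0;
		ring[i].sc = BD_EMPTY;		/* hand ownership to hardware */
	}
	ring[RING_SIZE - 1].sc |= BD_WRAP;	/* make the ring circular */
}

On the Rx path, software polls the EMPTY bit: while it is set, the
descriptor still belongs to the controller; once it is cleared, the
frame can be processed, the status bits cleared, and EMPTY set again.
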
Signed-off-by: Sachin Saxena Signed-off-by: Apeksha Gupta --- drivers/net/enetfec/enet_ethdev.c | 230 +++++++++++++++++++++++++++++- 1 file changed, 229 insertions(+), 1 deletion(-) diff --git a/drivers/net/enetfec/enet_ethdev.c b/drivers/net/enetfec/enet_ethdev.c index 673361e3f8..b8bc4a5f8b 100644 --- a/drivers/net/enetfec/enet_ethdev.c +++ b/drivers/net/enetfec/enet_ethdev.c @@ -46,6 +46,19 @@ int enetfec_logtype_pmd; uint32_t e_cntl; +/* Supported Rx offloads */ +static uint64_t dev_rx_offloads_sup = + DEV_RX_OFFLOAD_IPV4_CKSUM | + DEV_RX_OFFLOAD_UDP_CKSUM | + DEV_RX_OFFLOAD_TCP_CKSUM | + DEV_RX_OFFLOAD_VLAN_STRIP | + DEV_RX_OFFLOAD_CHECKSUM; + +static uint64_t dev_tx_offloads_sup = + DEV_TX_OFFLOAD_IPV4_CKSUM | + DEV_TX_OFFLOAD_UDP_CKSUM | + DEV_TX_OFFLOAD_TCP_CKSUM; + /* * This function is called to start or restart the ENETFEC during a link * change, transmit timeout, or to reconfigure the ENETFEC. The network @@ -214,10 +227,225 @@ enetfec_eth_stop(__rte_unused struct rte_eth_dev *dev) return 0; } +static int +enetfec_eth_info(__rte_unused struct rte_eth_dev *dev, + struct rte_eth_dev_info *dev_info) +{ + dev_info->max_rx_queues = ENETFEC_MAX_Q; + dev_info->max_tx_queues = ENETFEC_MAX_Q; + dev_info->rx_offload_capa = dev_rx_offloads_sup; + dev_info->tx_offload_capa = dev_tx_offloads_sup; + return 0; +} + +static const unsigned short offset_des_active_rxq[] = { + ENETFEC_RDAR_0, ENETFEC_RDAR_1, ENETFEC_RDAR_2 +}; + +static const unsigned short offset_des_active_txq[] = { + ENETFEC_TDAR_0, ENETFEC_TDAR_1, ENETFEC_TDAR_2 +}; + +static int +enetfec_tx_queue_setup(struct rte_eth_dev *dev, + uint16_t queue_idx, + uint16_t nb_desc, + unsigned int socket_id __rte_unused, + const struct rte_eth_txconf *tx_conf) +{ + struct enetfec_private *fep = dev->data->dev_private; + unsigned int i; + struct bufdesc *bdp, *bd_base; + struct enetfec_priv_tx_q *txq; + unsigned int size; + unsigned int dsize = fep->bufdesc_ex ? sizeof(struct bufdesc_ex) : + sizeof(struct bufdesc); + unsigned int dsize_log2 = fls64(dsize); + + /* Tx deferred start is not supported */ + if (tx_conf->tx_deferred_start) { + ENETFEC_PMD_ERR("%p:Tx deferred start not supported", + (void *)dev); + return -EINVAL; + } + + /* allocate transmit queue */ + txq = rte_zmalloc(NULL, sizeof(*txq), RTE_CACHE_LINE_SIZE); + if (txq == NULL) { + ENETFEC_PMD_ERR("transmit queue allocation failed"); + return -ENOMEM; + } + + if (nb_desc > MAX_TX_BD_RING_SIZE) { + nb_desc = MAX_TX_BD_RING_SIZE; + ENETFEC_PMD_WARN("modified the nb_desc to MAX_TX_BD_RING_SIZE\n"); + } + txq->bd.ring_size = nb_desc; + fep->total_tx_ring_size += txq->bd.ring_size; + fep->tx_queues[queue_idx] = txq; + + rte_write32(rte_cpu_to_le_32(fep->bd_addr_p_t[queue_idx]), + fep->hw_baseaddr_v + ENETFEC_TD_START(queue_idx)); + + /* Set transmit descriptor base. */ + txq = fep->tx_queues[queue_idx]; + txq->fep = fep; + size = dsize * txq->bd.ring_size; + bd_base = (struct bufdesc *)fep->dma_baseaddr_t[queue_idx]; + txq->bd.que_id = queue_idx; + txq->bd.base = bd_base; + txq->bd.cur = bd_base; + txq->bd.d_size = dsize; + txq->bd.d_size_log2 = dsize_log2; + txq->bd.active_reg_desc = + fep->hw_baseaddr_v + offset_des_active_txq[queue_idx]; + bd_base = (struct bufdesc *)(((void *)bd_base) + size); + txq->bd.last = (struct bufdesc *)(((void *)bd_base) - dsize); + bdp = txq->bd.base; + bdp = txq->bd.cur; + + for (i = 0; i < txq->bd.ring_size; i++) { + /* Initialize the BD for every fragment in the page. 
*/ + rte_write16(rte_cpu_to_le_16(0), &bdp->bd_sc); + if (txq->tx_mbuf[i] != NULL) { + rte_pktmbuf_free(txq->tx_mbuf[i]); + txq->tx_mbuf[i] = NULL; + } + rte_write32(0, &bdp->bd_bufaddr); + bdp = enet_get_nextdesc(bdp, &txq->bd); + } + + /* Set the last buffer to wrap */ + bdp = enet_get_prevdesc(bdp, &txq->bd); + rte_write16((rte_cpu_to_le_16(TX_BD_WRAP) | + rte_read16(&bdp->bd_sc)), &bdp->bd_sc); + txq->dirty_tx = bdp; + dev->data->tx_queues[queue_idx] = fep->tx_queues[queue_idx]; + return 0; +} + +static int +enetfec_rx_queue_setup(struct rte_eth_dev *dev, + uint16_t queue_idx, + uint16_t nb_rx_desc, + unsigned int socket_id __rte_unused, + const struct rte_eth_rxconf *rx_conf, + struct rte_mempool *mb_pool) +{ + struct enetfec_private *fep = dev->data->dev_private; + unsigned int i; + struct bufdesc *bd_base; + struct bufdesc *bdp; + struct enetfec_priv_rx_q *rxq; + unsigned int size; + unsigned int dsize = fep->bufdesc_ex ? sizeof(struct bufdesc_ex) : + sizeof(struct bufdesc); + unsigned int dsize_log2 = fls64(dsize); + + /* Rx deferred start is not supported */ + if (rx_conf->rx_deferred_start) { + ENETFEC_PMD_ERR("%p:Rx deferred start not supported", + (void *)dev); + return -EINVAL; + } + + /* allocate receive queue */ + rxq = rte_zmalloc(NULL, sizeof(*rxq), RTE_CACHE_LINE_SIZE); + if (rxq == NULL) { + ENETFEC_PMD_ERR("receive queue allocation failed"); + return -ENOMEM; + } + + if (nb_rx_desc > MAX_RX_BD_RING_SIZE) { + nb_rx_desc = MAX_RX_BD_RING_SIZE; + ENETFEC_PMD_WARN("modified the nb_desc to MAX_RX_BD_RING_SIZE\n"); + } + + rxq->bd.ring_size = nb_rx_desc; + fep->total_rx_ring_size += rxq->bd.ring_size; + fep->rx_queues[queue_idx] = rxq; + + rte_write32(rte_cpu_to_le_32(fep->bd_addr_p_r[queue_idx]), + fep->hw_baseaddr_v + ENETFEC_RD_START(queue_idx)); + rte_write32(rte_cpu_to_le_32(PKT_MAX_BUF_SIZE), + fep->hw_baseaddr_v + ENETFEC_MRB_SIZE(queue_idx)); + + /* Set receive descriptor base. */ + rxq = fep->rx_queues[queue_idx]; + rxq->pool = mb_pool; + size = dsize * rxq->bd.ring_size; + bd_base = (struct bufdesc *)fep->dma_baseaddr_r[queue_idx]; + rxq->bd.que_id = queue_idx; + rxq->bd.base = bd_base; + rxq->bd.cur = bd_base; + rxq->bd.d_size = dsize; + rxq->bd.d_size_log2 = dsize_log2; + rxq->bd.active_reg_desc = + fep->hw_baseaddr_v + offset_des_active_rxq[queue_idx]; + bd_base = (struct bufdesc *)(((void *)bd_base) + size); + rxq->bd.last = (struct bufdesc *)(((void *)bd_base) - dsize); + + rxq->fep = fep; + bdp = rxq->bd.base; + rxq->bd.cur = bdp; + + for (i = 0; i < nb_rx_desc; i++) { + /* Initialize Rx buffers from pktmbuf pool */ + struct rte_mbuf *mbuf = rte_pktmbuf_alloc(mb_pool); + if (mbuf == NULL) { + ENETFEC_PMD_ERR("mbuf failed\n"); + goto err_alloc; + } + + /* Get the virtual address & physical address */ + rte_write32(rte_cpu_to_le_32(rte_pktmbuf_iova(mbuf)), + &bdp->bd_bufaddr); + + rxq->rx_mbuf[i] = mbuf; + rte_write16(rte_cpu_to_le_16(RX_BD_EMPTY), &bdp->bd_sc); + + bdp = enet_get_nextdesc(bdp, &rxq->bd); + } + + /* Initialize the receive buffer descriptors. */ + bdp = rxq->bd.cur; + for (i = 0; i < rxq->bd.ring_size; i++) { + /* Initialize the BD for every fragment in the page. 
*/ + if (rte_read32(&bdp->bd_bufaddr) > 0) + rte_write16(rte_cpu_to_le_16(RX_BD_EMPTY), + &bdp->bd_sc); + else + rte_write16(rte_cpu_to_le_16(0), &bdp->bd_sc); + + bdp = enet_get_nextdesc(bdp, &rxq->bd); + } + + /* Set the last buffer to wrap */ + bdp = enet_get_prevdesc(bdp, &rxq->bd); + rte_write16((rte_cpu_to_le_16(RX_BD_WRAP) | + rte_read16(&bdp->bd_sc)), &bdp->bd_sc); + dev->data->rx_queues[queue_idx] = fep->rx_queues[queue_idx]; + rte_write32(0, fep->rx_queues[queue_idx]->bd.active_reg_desc); + return 0; + +err_alloc: + for (i = 0; i < nb_rx_desc; i++) { + if (rxq->rx_mbuf[i] != NULL) { + rte_pktmbuf_free(rxq->rx_mbuf[i]); + rxq->rx_mbuf[i] = NULL; + } + } + rte_free(rxq); + return errno; +} + static const struct eth_dev_ops enetfec_ops = { .dev_configure = enetfec_eth_configure, .dev_start = enetfec_eth_start, - .dev_stop = enetfec_eth_stop + .dev_stop = enetfec_eth_stop, + .dev_infos_get = enetfec_eth_info, + .rx_queue_setup = enetfec_rx_queue_setup, + .tx_queue_setup = enetfec_tx_queue_setup }; static int From patchwork Thu Sep 2 17:59:54 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Apeksha Gupta X-Patchwork-Id: 97840 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 2553DA0C4C; Thu, 2 Sep 2021 20:01:23 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 5271C40141; Thu, 2 Sep 2021 20:01:10 +0200 (CEST) Received: from inva021.nxp.com (inva021.nxp.com [92.121.34.21]) by mails.dpdk.org (Postfix) with ESMTP id 683B540141 for ; Thu, 2 Sep 2021 20:01:09 +0200 (CEST) Received: from inva021.nxp.com (localhost [127.0.0.1]) by inva021.eu-rdc02.nxp.com (Postfix) with ESMTP id 42096201F5E; Thu, 2 Sep 2021 20:01:09 +0200 (CEST) Received: from aprdc01srsp001v.ap-rdc01.nxp.com (aprdc01srsp001v.ap-rdc01.nxp.com [165.114.16.16]) by inva021.eu-rdc02.nxp.com (Postfix) with ESMTP id CFF75201F60; Thu, 2 Sep 2021 20:01:08 +0200 (CEST) Received: from lsv03186.swis.in-blr01.nxp.com (lsv03186.swis.in-blr01.nxp.com [92.120.146.182]) by aprdc01srsp001v.ap-rdc01.nxp.com (Postfix) with ESMTP id DC45D183AC89; Fri, 3 Sep 2021 02:01:07 +0800 (+08) From: Apeksha Gupta To: andrew.rybchenko@oktetlabs.ru, ferruh.yigit@intel.com Cc: dev@dpdk.org, hemant.agrawal@nxp.com, sachin.saxena@nxp.com, Apeksha Gupta Date: Thu, 2 Sep 2021 23:29:54 +0530 Message-Id: <20210902175955.9202-5-apeksha.gupta@nxp.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20210902175955.9202-1-apeksha.gupta@nxp.com> References: <20210902175955.9202-1-apeksha.gupta@nxp.com> X-Virus-Scanned: ClamAV using ClamSMTP Subject: [dpdk-dev] [PATCH v2 4/5] net/enetfec: add enqueue and dequeue support X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" This patch adds burst enqueue and dequeue operations to the enetfec PMD. Loopback mode is also added, compile time flag 'ENETFEC_LOOPBACK' is used to enable this feature. By default loopback mode is disabled. Basic features added like promiscuous enable, basic stats. 
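
From the application side, the enqueue/dequeue hooks added here are
reached through the standard ethdev burst API. A minimal forwarding
loop illustrating this (port/queue 0 and the burst size are assumptions
for the example, not requirements of the PMD):

#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define BURST 32

static void fwd_loop(uint16_t port)
{
	struct rte_mbuf *pkts[BURST];

	for (;;) {
		/* dequeue: dispatched to enetfec_recv_pkts() */
		uint16_t nb = rte_eth_rx_burst(port, 0, pkts, BURST);
		if (nb == 0)
			continue;
		/* enqueue: dispatched to enetfec_xmit_pkts() */
		uint16_t sent = rte_eth_tx_burst(port, 0, pkts, nb);
		while (sent < nb)	/* drop what could not be queued */
			rte_pktmbuf_free(pkts[sent++]);
	}
}
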
Signed-off-by: Sachin Saxena Signed-off-by: Apeksha Gupta --- doc/guides/nics/enetfec.rst | 2 + doc/guides/nics/features/enetfec.ini | 2 + drivers/net/enetfec/enet_ethdev.c | 189 +++++++++++- drivers/net/enetfec/enet_rxtx.c | 445 +++++++++++++++++++++++++++ drivers/net/enetfec/meson.build | 3 +- 5 files changed, 639 insertions(+), 2 deletions(-) create mode 100644 drivers/net/enetfec/enet_rxtx.c diff --git a/doc/guides/nics/enetfec.rst b/doc/guides/nics/enetfec.rst index f151bb26c4..140279caa1 100644 --- a/doc/guides/nics/enetfec.rst +++ b/doc/guides/nics/enetfec.rst @@ -75,6 +75,8 @@ ENETFEC driver. ENETFEC Features ~~~~~~~~~~~~~~~~~ +- Basic stats +- Promiscuous - ARMv8 Supported ENETFEC SoCs diff --git a/doc/guides/nics/features/enetfec.ini b/doc/guides/nics/features/enetfec.ini index 5700697981..0a151ba193 100644 --- a/doc/guides/nics/features/enetfec.ini +++ b/doc/guides/nics/features/enetfec.ini @@ -4,5 +4,7 @@ ; Refer to default.ini for the full list of available PMD features. ; [Features] +Basic stats = Y +Promiscuous mode = Y ARMv8 = Y Usage doc = Y diff --git a/drivers/net/enetfec/enet_ethdev.c b/drivers/net/enetfec/enet_ethdev.c index b8bc4a5f8b..6f512d2f96 100644 --- a/drivers/net/enetfec/enet_ethdev.c +++ b/drivers/net/enetfec/enet_ethdev.c @@ -41,6 +41,8 @@ #define ENETFEC_RAFL_V 0x8 #define ENETFEC_OPD_V 0xFFF0 +/* Extended buffer descriptor */ +#define ENETFEC_EXTENDED_BD 0 #define NUM_OF_QUEUES 6 int enetfec_logtype_pmd; @@ -179,6 +181,40 @@ enetfec_restart(struct rte_eth_dev *dev) rte_delay_us(10); } +static void +enet_free_buffers(struct rte_eth_dev *dev) +{ + struct enetfec_private *fep = dev->data->dev_private; + unsigned int i, q; + struct rte_mbuf *mbuf; + struct bufdesc *bdp; + struct enetfec_priv_rx_q *rxq; + struct enetfec_priv_tx_q *txq; + + for (q = 0; q < dev->data->nb_rx_queues; q++) { + rxq = fep->rx_queues[q]; + bdp = rxq->bd.base; + for (i = 0; i < rxq->bd.ring_size; i++) { + mbuf = rxq->rx_mbuf[i]; + rxq->rx_mbuf[i] = NULL; + if (mbuf) + rte_pktmbuf_free(mbuf); + bdp = enet_get_nextdesc(bdp, &rxq->bd); + } + } + + for (q = 0; q < dev->data->nb_tx_queues; q++) { + txq = fep->tx_queues[q]; + bdp = txq->bd.base; + for (i = 0; i < txq->bd.ring_size; i++) { + mbuf = txq->tx_mbuf[i]; + txq->tx_mbuf[i] = NULL; + if (mbuf) + rte_pktmbuf_free(mbuf); + } + } +} + static int enetfec_eth_configure(__rte_unused struct rte_eth_dev *dev) { @@ -192,6 +228,8 @@ static int enetfec_eth_start(struct rte_eth_dev *dev) { enetfec_restart(dev); + dev->rx_pkt_burst = &enetfec_recv_pkts; + dev->tx_pkt_burst = &enetfec_xmit_pkts; return 0; } @@ -227,6 +265,100 @@ enetfec_eth_stop(__rte_unused struct rte_eth_dev *dev) return 0; } +static int +enetfec_eth_close(__rte_unused struct rte_eth_dev *dev) +{ + enet_free_buffers(dev); + return 0; +} + +static int +enetfec_eth_link_update(struct rte_eth_dev *dev, + int wait_to_complete __rte_unused) +{ + if (dev == NULL) { + ENETFEC_PMD_ERR("Invalid device in link_update.\n"); + return 0; + } + + ENETFEC_PMD_INFO("Port (%d) link is %s\n", dev->data->port_id, + "down"); + return 0; +} + +static int +enetfec_promiscuous_enable(__rte_unused struct rte_eth_dev *dev) +{ + struct enetfec_private *fep = dev->data->dev_private; + uint32_t tmp; + + tmp = rte_read32(fep->hw_baseaddr_v + ENETFEC_RCR); + tmp |= 0x8; + tmp &= ~0x2; + rte_write32(rte_cpu_to_le_32(tmp), fep->hw_baseaddr_v + ENETFEC_RCR); + + return 0; +} + +static int +enetfec_multicast_enable(struct rte_eth_dev *dev) +{ + struct enetfec_private *fep = dev->data->dev_private; + + 
rte_write32(rte_cpu_to_le_32(0xffffffff), + fep->hw_baseaddr_v + ENETFEC_GAUR); + rte_write32(rte_cpu_to_le_32(0xffffffff), + fep->hw_baseaddr_v + ENETFEC_GALR); + dev->data->all_multicast = 1; + + rte_write32(rte_cpu_to_le_32(0x04400002), + fep->hw_baseaddr_v + ENETFEC_GAUR); + rte_write32(rte_cpu_to_le_32(0x10800049), + fep->hw_baseaddr_v + ENETFEC_GALR); + + return 0; +} + +/* Set a MAC change in hardware. */ +static int +enetfec_set_mac_address(struct rte_eth_dev *dev, + struct rte_ether_addr *addr) +{ + struct enetfec_private *fep = dev->data->dev_private; + + writel(addr->addr_bytes[3] | (addr->addr_bytes[2] << 8) | + (addr->addr_bytes[1] << 16) | (addr->addr_bytes[0] << 24), + fep->hw_baseaddr_v + ENETFEC_PALR); + writel((addr->addr_bytes[5] << 16) | (addr->addr_bytes[4] << 24), + fep->hw_baseaddr_v + ENETFEC_PAUR); + + rte_ether_addr_copy(addr, &dev->data->mac_addrs[0]); + + return 0; +} + +static int +enetfec_stats_get(struct rte_eth_dev *dev, + struct rte_eth_stats *stats) +{ + struct enetfec_private *fep = dev->data->dev_private; + struct rte_eth_stats *eth_stats = &fep->stats; + + if (stats == NULL) + return -1; + + memset(stats, 0, sizeof(struct rte_eth_stats)); + + stats->ipackets = eth_stats->ipackets; + stats->ibytes = eth_stats->ibytes; + stats->ierrors = eth_stats->ierrors; + stats->opackets = eth_stats->opackets; + stats->obytes = eth_stats->obytes; + stats->oerrors = eth_stats->oerrors; + + return 0; +} + static int enetfec_eth_info(__rte_unused struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info) @@ -238,6 +370,18 @@ enetfec_eth_info(__rte_unused struct rte_eth_dev *dev, return 0; } +static void +enet_free_queue(struct rte_eth_dev *dev) +{ + struct enetfec_private *fep = dev->data->dev_private; + unsigned int i; + + for (i = 0; i < dev->data->nb_rx_queues; i++) + rte_free(fep->rx_queues[i]); + for (i = 0; i < dev->data->nb_tx_queues; i++) + rte_free(fep->rx_queues[i]); +} + static const unsigned short offset_des_active_rxq[] = { ENETFEC_RDAR_0, ENETFEC_RDAR_1, ENETFEC_RDAR_2 }; @@ -443,7 +587,13 @@ static const struct eth_dev_ops enetfec_ops = { .dev_configure = enetfec_eth_configure, .dev_start = enetfec_eth_start, .dev_stop = enetfec_eth_stop, - .dev_infos_get = enetfec_eth_info, + .dev_close = enetfec_eth_close, + .link_update = enetfec_eth_link_update, + .promiscuous_enable = enetfec_promiscuous_enable, + .allmulticast_enable = enetfec_multicast_enable, + .mac_addr_set = enetfec_set_mac_address, + .stats_get = enetfec_stats_get, + .dev_infos_get = enetfec_eth_info, .rx_queue_setup = enetfec_rx_queue_setup, .tx_queue_setup = enetfec_tx_queue_setup }; @@ -469,6 +619,7 @@ pmd_enetfec_probe(struct rte_vdev_device *vdev) int rc; int i; unsigned int bdsize; + struct rte_ether_addr macaddr; name = rte_vdev_device_name(vdev); if (name == NULL) @@ -508,6 +659,27 @@ pmd_enetfec_probe(struct rte_vdev_device *vdev) fep->bd_addr_p = fep->bd_addr_p + bdsize; } + /* Copy the station address into the dev structure, */ + dev->data->mac_addrs = rte_zmalloc("mac_addr", ETHER_ADDR_LEN, 0); + if (dev->data->mac_addrs == NULL) { + ENETFEC_PMD_ERR("Failed to allocate mem %d to store MAC addresses", + ETHER_ADDR_LEN); + rc = -ENOMEM; + goto err; + } + + /* + * Set default mac address + */ + macaddr.addr_bytes[0] = 1; + macaddr.addr_bytes[1] = 1; + macaddr.addr_bytes[2] = 1; + macaddr.addr_bytes[3] = 1; + macaddr.addr_bytes[4] = 1; + macaddr.addr_bytes[5] = 1; + enetfec_set_mac_address(dev, &macaddr); + + fep->bufdesc_ex = ENETFEC_EXTENDED_BD; rc = enetfec_eth_init(dev); if (rc) 
 	if (rc)
 		goto failed_init;
 
@@ -516,6 +688,8 @@ pmd_enetfec_probe(struct rte_vdev_device *vdev)
 
 failed_init:
 	ENETFEC_PMD_ERR("Failed to init");
+err:
+	rte_eth_dev_release_port(dev);
 	return rc;
 }
 
@@ -523,6 +697,8 @@ static int
 pmd_enetfec_remove(struct rte_vdev_device *vdev)
 {
 	struct rte_eth_dev *eth_dev = NULL;
+	struct enetfec_private *fep;
+	struct enetfec_priv_rx_q *rxq;
 	int ret;
 
 	/* find the ethdev entry */
@@ -530,11 +706,22 @@ pmd_enetfec_remove(struct rte_vdev_device *vdev)
 	if (eth_dev == NULL)
 		return -ENODEV;
 
+	fep = eth_dev->data->dev_private;
+	/* Free descriptor base of first RX queue as it was configured
+	 * first in enetfec_eth_init().
+	 */
+	rxq = fep->rx_queues[0];
+	rte_free(rxq->bd.base);
+	enet_free_queue(eth_dev);
+	enetfec_eth_stop(eth_dev);
+
+	/* Unmap the register window before rte_eth_dev_release_port()
+	 * frees the private data it points into.
+	 */
+	munmap(fep->hw_baseaddr_v, fep->cbus_size);
+
 	ret = rte_eth_dev_release_port(eth_dev);
 	if (ret != 0)
 		return -EINVAL;
 
 	ENETFEC_PMD_INFO("Closing sw device");
+
 	return 0;
 }
diff --git a/drivers/net/enetfec/enet_rxtx.c b/drivers/net/enetfec/enet_rxtx.c
new file mode 100644
index 0000000000..d168c22431
--- /dev/null
+++ b/drivers/net/enetfec/enet_rxtx.c
@@ -0,0 +1,445 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2021 NXP
+ */
+
+#include <signal.h>
+#include <rte_mbuf.h>
+#include <rte_io.h>
+#include "enet_regs.h"
+#include "enet_ethdev.h"
+#include "enet_pmd_logs.h"
+
+#define ENETFEC_LOOPBACK	0
+#define ENETFEC_DUMP		0
+
+static volatile bool lb_quit;
+
+#if ENETFEC_DUMP
+static void
+enet_dump(struct enetfec_priv_tx_q *txq)
+{
+	struct bufdesc *bdp;
+	int index = 0;
+
+	ENETFEC_PMD_DEBUG("TX ring dump\n");
+	ENETFEC_PMD_DEBUG("Nr SC addr len MBUF\n");
+
+	bdp = txq->bd.base;
+	do {
+		ENETFEC_PMD_DEBUG("%3u %c%c 0x%04x 0x%08x %4u %p\n",
+			index,
+			bdp == txq->bd.cur ? 'S' : ' ',
+			bdp == txq->dirty_tx ? 'H' : ' ',
+			rte_le_to_cpu_16(rte_read16(&bdp->bd_sc)),
+			rte_le_to_cpu_32(rte_read32(&bdp->bd_bufaddr)),
+			rte_le_to_cpu_16(rte_read16(&bdp->bd_datlen)),
+			txq->tx_mbuf[index]);
+		bdp = enet_get_nextdesc(bdp, &txq->bd);
+		index++;
+	} while (bdp != txq->bd.base);
+}
+
+static void
+enet_dump_rx(struct enetfec_priv_rx_q *rxq)
+{
+	struct bufdesc *bdp;
+	int index = 0;
+
+	ENETFEC_PMD_DEBUG("RX ring dump\n");
+	ENETFEC_PMD_DEBUG("Nr SC addr len MBUF\n");
+
+	bdp = rxq->bd.base;
+	do {
+		ENETFEC_PMD_DEBUG("%3u %c 0x%04x 0x%08x %4u %p\n",
+			index,
+			bdp == rxq->bd.cur ? 'S' : ' ',
+			rte_le_to_cpu_16(rte_read16(&bdp->bd_sc)),
+			rte_le_to_cpu_32(rte_read32(&bdp->bd_bufaddr)),
+			rte_le_to_cpu_16(rte_read16(&bdp->bd_datlen)),
+			rxq->rx_mbuf[index]);
+		rte_pktmbuf_dump(stdout, rxq->rx_mbuf[index],
+				rxq->rx_mbuf[index]->pkt_len);
+		bdp = enet_get_nextdesc(bdp, &rxq->bd);
+		index++;
+	} while (bdp != rxq->bd.base);
+}
+#endif
+
+#if ENETFEC_LOOPBACK
+static void fec_signal_handler(int signum)
+{
+	if (signum == SIGINT || signum == SIGTSTP || signum == SIGTERM) {
+		printf("\n\n %s: Signal %d received, preparing to exit...\n",
+				__func__, signum);
+		lb_quit = true;
+	}
+}
+
+static void
+enetfec_lb_rxtx(void *rxq1)
+{
+	struct rte_mempool *pool;
+	struct bufdesc *rx_bdp = NULL, *tx_bdp = NULL;
+	struct rte_mbuf *mbuf = NULL, *new_mbuf = NULL;
+	unsigned short status;
+	unsigned short pkt_len = 0;
+	int index_r = 0, index_t = 0;
+	u8 *data;
+	struct enetfec_priv_rx_q *rxq = (struct enetfec_priv_rx_q *)rxq1;
+	struct rte_eth_stats *stats = &rxq->fep->stats;
+	unsigned int i;
+	struct enetfec_private *fep;
+	struct enetfec_priv_tx_q *txq;
+
+	fep = rxq->fep->dev->data->dev_private;
+	txq = fep->tx_queues[0];
+
+	pool = rxq->pool;
+	rx_bdp = rxq->bd.cur;
+	tx_bdp = txq->bd.cur;
+
+	signal(SIGTSTP, fec_signal_handler);
+	while (!lb_quit) {
+chk_again:
+		status = rte_le_to_cpu_16(rte_read16(&rx_bdp->bd_sc));
+		if (status & RX_BD_EMPTY) {
+			if (!lb_quit)
+				goto chk_again;
+			rxq->bd.cur = rx_bdp;
+			txq->bd.cur = tx_bdp;
+			return;
+		}
+
+		/* Check for errors. */
+		status ^= RX_BD_LAST;
+		if (status & (RX_BD_LG | RX_BD_SH | RX_BD_NO |
+			RX_BD_CR | RX_BD_OV | RX_BD_LAST |
+			RX_BD_TR)) {
+			stats->ierrors++;
+			if (status & RX_BD_OV) {
+				/* FIFO overrun */
+				ENETFEC_PMD_ERR("rx_fifo_error\n");
+				goto rx_processing_done;
+			}
+			if (status & (RX_BD_LG | RX_BD_SH
+						| RX_BD_LAST)) {
+				/* Frame too long or too short. */
+				ENETFEC_PMD_ERR("rx_length_error\n");
+				if (status & RX_BD_LAST)
+					ENETFEC_PMD_ERR("rcv is not +last\n");
+			}
+			/* CRC Error */
+			if (status & RX_BD_CR)
+				ENETFEC_PMD_ERR("rx_crc_errors\n");
+
+			/* Report late collisions as a frame error. */
+			if (status & (RX_BD_NO | RX_BD_TR))
+				ENETFEC_PMD_ERR("rx_frame_error\n");
+			mbuf = NULL;
+			goto rx_processing_done;
+		}
+
+		new_mbuf = rte_pktmbuf_alloc(pool);
+		if (unlikely(!new_mbuf)) {
+			stats->ierrors++;
+			break;
+		}
+		/* Process the incoming frame. */
+		pkt_len = rte_le_to_cpu_16(rte_read16(&rx_bdp->bd_datlen));
+
+		/* shows data with respect to the data_off field. */
+		index_r = enet_get_bd_index(rx_bdp, &rxq->bd);
+		mbuf = rxq->rx_mbuf[index_r];
+
+		/* adjust pkt_len */
+		rte_pktmbuf_append((struct rte_mbuf *)mbuf, pkt_len - 4);
+		if (rxq->fep->quirks & QUIRK_RACC)
+			rte_pktmbuf_adj(mbuf, 2);
+
+		/* Replace Buffer in BD */
+		rxq->rx_mbuf[index_r] = new_mbuf;
+		rte_write32(rte_cpu_to_le_32(rte_pktmbuf_iova(new_mbuf)),
+				&rx_bdp->bd_bufaddr);
+
+rx_processing_done:
+		/* when rx_processing_done clear the status flags
+		 * for this buffer
+		 */
+		status &= ~RX_BD_STATS;
+
+		/* Mark the buffer empty */
+		status |= RX_BD_EMPTY;
+
+		/* Make sure the updates to rest of the descriptor are
+		 * performed before transferring ownership.
+		 */
+		rte_wmb();
+		rte_write16(rte_cpu_to_le_16(status), &rx_bdp->bd_sc);
+
+		/* Update BD pointer to next entry */
+		rx_bdp = enet_get_nextdesc(rx_bdp, &rxq->bd);
+
+		/* Doing this here will keep the FEC running while we process
+		 * incoming frames.
+		 */
+		rte_write32(0, rxq->bd.active_reg_desc);
+
+		/* TX begins: First clean the ring then process packet */
+		index_t = enet_get_bd_index(tx_bdp, &txq->bd);
+		status = rte_le_to_cpu_16(rte_read16(&tx_bdp->bd_sc));
+		if (status & TX_BD_READY) {
+			stats->oerrors++;
+			break;
+		}
+		if (txq->tx_mbuf[index_t]) {
+			rte_pktmbuf_free(txq->tx_mbuf[index_t]);
+			txq->tx_mbuf[index_t] = NULL;
+		}
+
+		if (mbuf == NULL)
+			continue;
+
+		/* Fill in a Tx ring entry */
+		status &= ~TX_BD_STATS;
+
+		/* Set buffer length and buffer pointer */
+		pkt_len = rte_pktmbuf_pkt_len(mbuf);
+		status |= (TX_BD_LAST);
+		data = rte_pktmbuf_mtod(mbuf, void *);
+
+		for (i = 0; i <= pkt_len; i += RTE_CACHE_LINE_SIZE)
+			dcbf(data + i);
+		rte_write32(rte_cpu_to_le_32(rte_pktmbuf_iova(mbuf)),
+			&tx_bdp->bd_bufaddr);
+		rte_write16(rte_cpu_to_le_16(pkt_len), &tx_bdp->bd_datlen);
+
+		/* Make sure the updates to rest of the descriptor are performed
+		 * before transferring ownership.
+		 */
+		status |= (TX_BD_READY | TX_BD_TC);
+		rte_wmb();
+		rte_write16(rte_cpu_to_le_16(status), &tx_bdp->bd_sc);
+
+		/* Trigger transmission start */
+		rte_write32(0, txq->bd.active_reg_desc);
+
+		/* Save mbuf pointer to clean later */
+		txq->tx_mbuf[index_t] = mbuf;
+
+		/* If this was the last BD in the ring, start at the
+		 * beginning again.
+		 */
+		tx_bdp = enet_get_nextdesc(tx_bdp, &txq->bd);
+	}
+}
+#endif
+
+/* This function does enetfec_rx_queue processing. Dequeue packet from Rx queue
+ * When update through the ring, just set the empty indicator.
+ */
+uint16_t
+enetfec_recv_pkts(void *rxq1, struct rte_mbuf **rx_pkts,
+		uint16_t nb_pkts)
+{
+	struct rte_mempool *pool;
+	struct bufdesc *bdp;
+	struct rte_mbuf *mbuf, *new_mbuf = NULL;
+	unsigned short status;
+	unsigned short pkt_len;
+	int pkt_received = 0, index = 0;
+	void *data;
+	struct enetfec_priv_rx_q *rxq = (struct enetfec_priv_rx_q *)rxq1;
+	struct rte_eth_stats *stats = &rxq->fep->stats;
+
+	pool = rxq->pool;
+	bdp = rxq->bd.cur;
+#if ENETFEC_LOOPBACK
+	enetfec_lb_rxtx(rxq1);
+#endif
+	/* Process the incoming packet */
+	status = rte_le_to_cpu_16(rte_read16(&bdp->bd_sc));
+	while ((status & RX_BD_EMPTY) == 0) {
+		if (pkt_received >= nb_pkts)
+			break;
+
+		new_mbuf = rte_pktmbuf_alloc(pool);
+		if (unlikely(new_mbuf == NULL)) {
+			stats->ierrors++;
+			break;
+		}
+		/* Check for errors. */
+		status ^= RX_BD_LAST;
+		if (status & (RX_BD_LG | RX_BD_SH | RX_BD_NO |
+			RX_BD_CR | RX_BD_OV | RX_BD_LAST |
+			RX_BD_TR)) {
+			stats->ierrors++;
+			if (status & RX_BD_OV) {
+				/* FIFO overrun */
+				/* enet_dump_rx(rxq); */
+				ENETFEC_PMD_ERR("rx_fifo_error\n");
+				goto rx_processing_done;
+			}
+			if (status & (RX_BD_LG | RX_BD_SH
+						| RX_BD_LAST)) {
+				/* Frame too long or too short. */
+				ENETFEC_PMD_ERR("rx_length_error\n");
+				if (status & RX_BD_LAST)
+					ENETFEC_PMD_ERR("rcv is not +last\n");
+			}
+			if (status & RX_BD_CR) {	/* CRC Error */
+				ENETFEC_PMD_ERR("rx_crc_errors\n");
+			}
+			/* Report late collisions as a frame error. */
+			if (status & (RX_BD_NO | RX_BD_TR))
+				ENETFEC_PMD_ERR("rx_frame_error\n");
+			goto rx_processing_done;
+		}
+
+		/* Process the incoming frame. */
+		stats->ipackets++;
+		pkt_len = rte_le_to_cpu_16(rte_read16(&bdp->bd_datlen));
+		stats->ibytes += pkt_len;
+
+		/* shows data with respect to the data_off field.
+		 */
+		index = enet_get_bd_index(bdp, &rxq->bd);
+		mbuf = rxq->rx_mbuf[index];
+
+		data = rte_pktmbuf_mtod(mbuf, uint8_t *);
+		rte_prefetch0(data);
+		rte_pktmbuf_append((struct rte_mbuf *)mbuf,
+				pkt_len - 4);
+
+		if (rxq->fep->quirks & QUIRK_RACC)
+			data = rte_pktmbuf_adj(mbuf, 2);
+
+		rx_pkts[pkt_received] = mbuf;
+		pkt_received++;
+		rxq->rx_mbuf[index] = new_mbuf;
+		rte_write32(rte_cpu_to_le_32(rte_pktmbuf_iova(new_mbuf)),
+			&bdp->bd_bufaddr);
+rx_processing_done:
+		/* when rx_processing_done clear the status flags
+		 * for this buffer
+		 */
+		status &= ~RX_BD_STATS;
+
+		/* Mark the buffer empty */
+		status |= RX_BD_EMPTY;
+
+		if (rxq->fep->bufdesc_ex) {
+			struct bufdesc_ex *ebdp = (struct bufdesc_ex *)bdp;
+			rte_write32(rte_cpu_to_le_32(RX_BD_INT),
+				&ebdp->bd_esc);
+			rte_write32(0, &ebdp->bd_prot);
+			rte_write32(0, &ebdp->bd_bdu);
+		}
+
+		/* Make sure the updates to rest of the descriptor are
+		 * performed before transferring ownership.
+		 */
+		rte_wmb();
+		rte_write16(rte_cpu_to_le_16(status), &bdp->bd_sc);
+
+		/* Update BD pointer to next entry */
+		bdp = enet_get_nextdesc(bdp, &rxq->bd);
+
+		/* Doing this here will keep the FEC running while we process
+		 * incoming frames.
+		 */
+		rte_write32(0, rxq->bd.active_reg_desc);
+		status = rte_le_to_cpu_16(rte_read16(&bdp->bd_sc));
+	}
+	rxq->bd.cur = bdp;
+	return pkt_received;
+}
+
+uint16_t
+enetfec_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
+{
+	struct enetfec_priv_tx_q *txq =
+			(struct enetfec_priv_tx_q *)tx_queue;
+	struct rte_eth_stats *stats = &txq->fep->stats;
+	struct bufdesc *bdp, *last_bdp;
+	struct rte_mbuf *mbuf;
+	unsigned short status;
+	unsigned short buflen;
+	unsigned int index, estatus = 0;
+	unsigned int i, pkt_transmitted = 0;
+	u8 *data;
+	int tx_st = 1;
+
+	while (tx_st) {
+		if (pkt_transmitted >= nb_pkts) {
+			tx_st = 0;
+			break;
+		}
+		bdp = txq->bd.cur;
+		/* First clean the ring */
+		index = enet_get_bd_index(bdp, &txq->bd);
+		status = rte_le_to_cpu_16(rte_read16(&bdp->bd_sc));
+
+		if (status & TX_BD_READY) {
+			stats->oerrors++;
+			break;
+		}
+		if (txq->tx_mbuf[index]) {
+			rte_pktmbuf_free(txq->tx_mbuf[index]);
+			txq->tx_mbuf[index] = NULL;
+		}
+
+		mbuf = *(tx_pkts);
+		tx_pkts++;
+
+		/* Fill in a Tx ring entry */
+		last_bdp = bdp;
+		status &= ~TX_BD_STATS;
+
+		/* Scatter-gather frames are not supported; stop before
+		 * touching the stats so the dropped packet is not counted.
+		 */
+		if (mbuf->nb_segs > 1) {
+			ENETFEC_PMD_DEBUG("SG not supported");
+			return pkt_transmitted;
+		}
+
+		/* Set buffer length and buffer pointer */
+		buflen = rte_pktmbuf_pkt_len(mbuf);
+		stats->opackets++;
+		stats->obytes += buflen;
+
+		status |= (TX_BD_LAST);
+		data = rte_pktmbuf_mtod(mbuf, void *);
+		for (i = 0; i <= buflen; i += RTE_CACHE_LINE_SIZE)
+			dcbf(data + i);
+
+		rte_write32(rte_cpu_to_le_32(rte_pktmbuf_iova(mbuf)),
+			&bdp->bd_bufaddr);
+		rte_write16(rte_cpu_to_le_16(buflen), &bdp->bd_datlen);
+
+		if (txq->fep->bufdesc_ex) {
+			struct bufdesc_ex *ebdp = (struct bufdesc_ex *)bdp;
+			rte_write32(0, &ebdp->bd_bdu);
+			rte_write32(rte_cpu_to_le_32(estatus),
+				&ebdp->bd_esc);
+		}
+
+		index = enet_get_bd_index(last_bdp, &txq->bd);
+		/* Save mbuf pointer */
+		txq->tx_mbuf[index] = mbuf;
+
+		/* Make sure the updates to rest of the descriptor are performed
+		 * before transferring ownership.
+		 */
+		status |= (TX_BD_READY | TX_BD_TC);
+		rte_wmb();
+		rte_write16(rte_cpu_to_le_16(status), &bdp->bd_sc);
+
+		/* Trigger transmission start */
+		rte_write32(0, txq->bd.active_reg_desc);
+		pkt_transmitted++;
+
+		/* If this was the last BD in the ring, start at the
+		 * beginning again.
+		 */
+		bdp = enet_get_nextdesc(last_bdp, &txq->bd);
+
+		/* Make sure the update to bdp and tx_mbuf are performed
+		 * before txq->bd.cur.
+		 */
+		txq->bd.cur = bdp;
+	}
+	return pkt_transmitted;
+}
diff --git a/drivers/net/enetfec/meson.build b/drivers/net/enetfec/meson.build
index 05183bd441..4477f7f549 100644
--- a/drivers/net/enetfec/meson.build
+++ b/drivers/net/enetfec/meson.build
@@ -9,7 +9,8 @@ endif
 deps += ['common_dpaax']
 
 sources = files('enet_ethdev.c',
-	'enet_uio.c')
+	'enet_uio.c',
+	'enet_rxtx.c')
 
 if cc.has_argument('-Wno-pointer-arith')
 	cflags += '-Wno-pointer-arith'

From patchwork Thu Sep  2 17:59:55 2021
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Apeksha Gupta
X-Patchwork-Id: 97841
X-Patchwork-Delegate: ferruh.yigit@amd.com
Return-Path:
X-Original-To: patchwork@inbox.dpdk.org
Delivered-To: patchwork@inbox.dpdk.org
Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124])
	by inbox.dpdk.org (Postfix) with ESMTP id E01CCA0C4C;
	Thu,  2 Sep 2021 20:01:31 +0200 (CEST)
Received: from [217.70.189.124] (localhost [127.0.0.1])
	by mails.dpdk.org (Postfix) with ESMTP id B5E4D41104;
	Thu,  2 Sep 2021 20:01:12 +0200 (CEST)
Received: from inva020.nxp.com (inva020.nxp.com [92.121.34.13])
	by mails.dpdk.org (Postfix) with ESMTP id DA22040E09
	for ; Thu,  2 Sep 2021 20:01:10 +0200 (CEST)
Received: from inva020.nxp.com (localhost [127.0.0.1])
	by inva020.eu-rdc02.nxp.com (Postfix) with ESMTP id B78811A1F22;
	Thu,  2 Sep 2021 20:01:10 +0200 (CEST)
Received: from aprdc01srsp001v.ap-rdc01.nxp.com
	(aprdc01srsp001v.ap-rdc01.nxp.com [165.114.16.16])
	by inva020.eu-rdc02.nxp.com (Postfix) with ESMTP id 5480B1A1F37;
	Thu,  2 Sep 2021 20:01:10 +0200 (CEST)
Received: from lsv03186.swis.in-blr01.nxp.com (lsv03186.swis.in-blr01.nxp.com
	[92.120.146.182])
	by aprdc01srsp001v.ap-rdc01.nxp.com (Postfix) with ESMTP id 5DF45183AD27;
	Fri,  3 Sep 2021 02:01:09 +0800 (+08)
From: Apeksha Gupta
To: andrew.rybchenko@oktetlabs.ru, ferruh.yigit@intel.com
Cc: dev@dpdk.org, hemant.agrawal@nxp.com, sachin.saxena@nxp.com,
	Apeksha Gupta
Date: Thu,  2 Sep 2021 23:29:55 +0530
Message-Id: <20210902175955.9202-6-apeksha.gupta@nxp.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20210902175955.9202-1-apeksha.gupta@nxp.com>
References: <20210902175955.9202-1-apeksha.gupta@nxp.com>
X-Virus-Scanned: ClamAV using ClamSMTP
Subject: [dpdk-dev] [PATCH v2 5/5] net/enetfec: add features
X-BeenThere: dev@dpdk.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: DPDK patches and discussions
List-Unsubscribe: ,
List-Archive:
List-Post:
List-Help:
List-Subscribe: ,
Errors-To: dev-bounces@dpdk.org
Sender: "dev"

This patch adds checksum and VLAN offload support to the enetfec network
poll mode driver.
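For reference, the sketch below shows how an application could request
these offloads through the standard ethdev configuration API of this
DPDK release. It is not part of the patch; the port id, queue counts and
helper name are illustrative only.

	#include <rte_ethdev.h>

	/* Enable Rx VLAN stripping and L3/L4 checksum validation on one
	 * port. The PMD exposes a single queue pair (multi queue is not
	 * supported), so one Rx and one Tx queue are requested.
	 */
	static int
	configure_enetfec_offloads(uint16_t port_id)
	{
		struct rte_eth_conf conf = { 0 };

		conf.rxmode.offloads = DEV_RX_OFFLOAD_VLAN_STRIP |
				       DEV_RX_OFFLOAD_CHECKSUM;

		return rte_eth_dev_configure(port_id, 1, 1, &conf);
	}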
Signed-off-by: Sachin Saxena
Signed-off-by: Apeksha Gupta
---
 doc/guides/nics/enetfec.rst          |  2 ++
 doc/guides/nics/features/enetfec.ini |  3 ++
 drivers/net/enetfec/enet_ethdev.c    | 17 +++++++++-
 drivers/net/enetfec/enet_regs.h      | 10 ++++++
 drivers/net/enetfec/enet_rxtx.c      | 51 +++++++++++++++++++++++++++-
 5 files changed, 81 insertions(+), 2 deletions(-)

diff --git a/doc/guides/nics/enetfec.rst b/doc/guides/nics/enetfec.rst
index 140279caa1..b48a195e67 100644
--- a/doc/guides/nics/enetfec.rst
+++ b/doc/guides/nics/enetfec.rst
@@ -77,6 +77,8 @@ ENETFEC Features
 
 - Basic stats
 - Promiscuous
+- VLAN offload
+- L3/L4 checksum offload
 - ARMv8
 
 Supported ENETFEC SoCs
diff --git a/doc/guides/nics/features/enetfec.ini b/doc/guides/nics/features/enetfec.ini
index 0a151ba193..525df93ec7 100644
--- a/doc/guides/nics/features/enetfec.ini
+++ b/doc/guides/nics/features/enetfec.ini
@@ -6,5 +6,8 @@
 [Features]
 Basic stats = Y
 Promiscuous mode = Y
+VLAN offload = Y
+L3 checksum offload = Y
+L4 checksum offload = Y
 ARMv8 = Y
 Usage doc = Y
diff --git a/drivers/net/enetfec/enet_ethdev.c b/drivers/net/enetfec/enet_ethdev.c
index 6f512d2f96..c31c8a82ba 100644
--- a/drivers/net/enetfec/enet_ethdev.c
+++ b/drivers/net/enetfec/enet_ethdev.c
@@ -107,7 +107,11 @@ enetfec_restart(struct rte_eth_dev *dev)
 		val = rte_read32(fep->hw_baseaddr_v + ENETFEC_RACC);
 		/* align IP header */
 		val |= ENETFEC_RACC_SHIFT16;
-		val &= ~ENETFEC_RACC_OPTIONS;
+		if (fep->flag_csum & RX_FLAG_CSUM_EN)
+			/* set RX checksum */
+			val |= ENETFEC_RACC_OPTIONS;
+		else
+			val &= ~ENETFEC_RACC_OPTIONS;
 		rte_write32(rte_cpu_to_le_32(val),
 			fep->hw_baseaddr_v + ENETFEC_RACC);
 		rte_write32(rte_cpu_to_le_32(PKT_MAX_BUF_SIZE),
@@ -602,9 +606,20 @@ static int
 enetfec_eth_init(struct rte_eth_dev *dev)
 {
 	struct enetfec_private *fep = dev->data->dev_private;
+	struct rte_eth_conf *eth_conf = &fep->dev->data->dev_conf;
+	uint64_t rx_offloads = eth_conf->rxmode.offloads;
 
 	fep->full_duplex = FULL_DUPLEX;
 	dev->dev_ops = &enetfec_ops;
+
+	if (fep->quirks & QUIRK_VLAN)
+		/* enable hw VLAN support */
+		rx_offloads |= DEV_RX_OFFLOAD_VLAN;
+
+	if (fep->quirks & QUIRK_CSUM) {
+		/* enable hw accelerator */
+		rx_offloads |= DEV_RX_OFFLOAD_CHECKSUM;
+		fep->flag_csum |= RX_FLAG_CSUM_EN;
+	}
+	/* Write the updated flags back so the Rx path sees them. */
+	eth_conf->rxmode.offloads = rx_offloads;
 
 	rte_eth_dev_probing_finish(dev);
 	return 0;
diff --git a/drivers/net/enetfec/enet_regs.h b/drivers/net/enetfec/enet_regs.h
index 5665e19dd3..63cba96733 100644
--- a/drivers/net/enetfec/enet_regs.h
+++ b/drivers/net/enetfec/enet_regs.h
@@ -27,6 +27,12 @@
 #define RX_BD_EMPTY	((ushort)0x8000)	/* BD is empty */
 #define RX_BD_STATS	((ushort)0x013f)	/* All buffer descriptor status bits */
 
+/* Ethernet receive use control and status of enhanced buffer descriptor */
+#define BD_ENETFEC_RX_VLAN	0x00000004
+
+#define RX_FLAG_CSUM_EN		(RX_BD_ICE | RX_BD_PCR)
+#define RX_FLAG_CSUM_ERR	(RX_BD_ICE | RX_BD_PCR)
+
 /* Ethernet transmit use control and status of buffer descriptor */
 #define TX_BD_TC	((ushort)0x0400)	/* Transmit CRC */
 #define TX_BD_LAST	((ushort)0x0800)	/* Last in frame */
@@ -58,6 +64,10 @@
 #define QUIRK_GBIT	(1 << 3)
 /* Controller has extended descriptor buffer */
 #define QUIRK_BUFDESC_EX	(1 << 4)
+/* Controller supports hardware checksum */
+#define QUIRK_CSUM	(1 << 5)
+/* Controller supports hardware VLAN */
+#define QUIRK_VLAN	(1 << 6)
 /* RACC register supported by controller */
 #define QUIRK_RACC	(1 << 12)
 /* i.MX8 ENETFEC IP version added the feature to generate the delayed TXC or
diff --git a/drivers/net/enetfec/enet_rxtx.c b/drivers/net/enetfec/enet_rxtx.c
index d168c22431..574d6c80a4 100644
--- a/drivers/net/enetfec/enet_rxtx.c
+++ b/drivers/net/enetfec/enet_rxtx.c
@@ -245,9 +245,14 @@ enetfec_recv_pkts(void *rxq1, struct rte_mbuf **rx_pkts,
 	unsigned short status;
 	unsigned short pkt_len;
 	int pkt_received = 0, index = 0;
-	void *data;
+	void *data, *mbuf_data;
+	uint16_t vlan_tag;
+	struct bufdesc_ex *ebdp = NULL;
+	bool vlan_packet_rcvd = false;
 	struct enetfec_priv_rx_q *rxq = (struct enetfec_priv_rx_q *)rxq1;
 	struct rte_eth_stats *stats = &rxq->fep->stats;
+	struct rte_eth_conf *eth_conf = &rxq->fep->dev->data->dev_conf;
+	uint64_t rx_offloads = eth_conf->rxmode.offloads;
 
 	pool = rxq->pool;
 	bdp = rxq->bd.cur;
 #if ENETFEC_LOOPBACK
@@ -302,6 +307,7 @@ enetfec_recv_pkts(void *rxq1, struct rte_mbuf **rx_pkts,
 		mbuf = rxq->rx_mbuf[index];
 
 		data = rte_pktmbuf_mtod(mbuf, uint8_t *);
+		mbuf_data = data;
 		rte_prefetch0(data);
 		rte_pktmbuf_append((struct rte_mbuf *)mbuf,
 				pkt_len - 4);
@@ -311,6 +317,45 @@ enetfec_recv_pkts(void *rxq1, struct rte_mbuf **rx_pkts,
 
 		rx_pkts[pkt_received] = mbuf;
 		pkt_received++;
+
+		/* Extract the enhanced buffer descriptor */
+		ebdp = NULL;
+		if (rxq->fep->bufdesc_ex)
+			ebdp = (struct bufdesc_ex *)bdp;
+
+		/* If this is a VLAN packet remove the VLAN Tag */
+		vlan_packet_rcvd = false;
+		if ((rx_offloads & DEV_RX_OFFLOAD_VLAN) &&
+				rxq->fep->bufdesc_ex &&
+				(rte_read32(&ebdp->bd_esc) &
+				rte_cpu_to_le_32(BD_ENETFEC_RX_VLAN))) {
+			/* Push and remove the vlan tag */
+			struct rte_vlan_hdr *vlan_header =
+				(struct rte_vlan_hdr *)(data + ETH_HLEN);
+
+			vlan_tag = rte_be_to_cpu_16(vlan_header->vlan_tci);
+
+			vlan_packet_rcvd = true;
+			memmove(mbuf_data + VLAN_HLEN, data, ETH_ALEN * 2);
+			rte_pktmbuf_adj(mbuf, VLAN_HLEN);
+		}
+
+		if (rxq->fep->bufdesc_ex &&
+				(rxq->fep->flag_csum & RX_FLAG_CSUM_EN)) {
+			if ((rte_read32(&ebdp->bd_esc) &
+					rte_cpu_to_le_32(RX_FLAG_CSUM_ERR)) == 0) {
+				/* No checksum error reported by hardware,
+				 * so the checksum needs no software check.
+				 */
+				mbuf->ol_flags = PKT_RX_IP_CKSUM_GOOD;
+			} else {
+				mbuf->ol_flags = PKT_RX_IP_CKSUM_BAD;
+			}
+		}
+
+		/* Handle received VLAN packets */
+		if (vlan_packet_rcvd) {
+			mbuf->vlan_tci = vlan_tag;
+			mbuf->ol_flags |= PKT_RX_VLAN_STRIPPED | PKT_RX_VLAN;
+		}
+
 		rxq->rx_mbuf[index] = new_mbuf;
 		rte_write32(rte_cpu_to_le_32(rte_pktmbuf_iova(new_mbuf)),
 			&bdp->bd_bufaddr);
@@ -411,6 +456,10 @@ enetfec_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 
 		if (txq->fep->bufdesc_ex) {
 			struct bufdesc_ex *ebdp = (struct bufdesc_ex *)bdp;
+
+			if (mbuf->ol_flags & PKT_RX_IP_CKSUM_GOOD)
+				estatus |= TX_BD_PINS | TX_BD_IINS;
+
 			rte_write32(0, &ebdp->bd_bdu);
 			rte_write32(rte_cpu_to_le_32(estatus),
 				&ebdp->bd_esc);
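The transmit-side test above keys on the Rx flag PKT_RX_IP_CKSUM_GOOD. A
variant driven by the transmit offload flags, which is how ethdev
applications normally request Tx checksum insertion, could look like the
sketch below; it is not part of the patch, and only the flag names from
this DPDK release are assumed:

	/* Request hardware IP/protocol checksum insertion when the
	 * application marked the packet for Tx checksum offload.
	 */
	if (mbuf->ol_flags & (PKT_TX_IP_CKSUM | PKT_TX_TCP_CKSUM |
			      PKT_TX_UDP_CKSUM))
		estatus |= TX_BD_PINS | TX_BD_IINS;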