From patchwork Fri Sep 4 08:29:15 2020
X-Patchwork-Submitter: Hemant Agrawal
X-Patchwork-Id: 76535
From: Hemant Agrawal
To: dev@dpdk.org
Cc: ferruh.yigit@intel.com
Date: Fri, 4 Sep 2020 13:59:15 +0530
Message-Id: <20200904082921.17400-1-hemant.agrawal@nxp.com>
In-Reply-To: <20200901123650.29908-1-hemant.agrawal@nxp.com>
References: <20200901123650.29908-1-hemant.agrawal@nxp.com>
Subject: [dpdk-dev] [PATCH v7 1/7] net/dpaa: add VSP support in FMLIB
List-Id: DPDK patches and discussions

From: Jun Yang

This patch adds support for VSP (Virtual Storage Profile) in the fmlib
routines. A VSP allows a network interface to be divided into physical
and virtual instance(s). The concept is very similar to SR-IOV.

Signed-off-by: Jun Yang
Signed-off-by: Hemant Agrawal
---
v7: fix spelling mistakes and add FMD link
v6: add documentation
v5: align to dpdk coding style

 doc/guides/nics/dpaa.rst            |   8 ++
 drivers/net/dpaa/fmlib/fm_vsp.c     | 148 ++++++++++++++++++++++
 drivers/net/dpaa/fmlib/fm_vsp_ext.h | 131 ++++++++++++++++++++++++
 drivers/net/dpaa/meson.build        |   1 +
 4 files changed, 288 insertions(+)
 create mode 100644 drivers/net/dpaa/fmlib/fm_vsp.c
 create mode 100644 drivers/net/dpaa/fmlib/fm_vsp_ext.h

diff --git a/doc/guides/nics/dpaa.rst b/doc/guides/nics/dpaa.rst
index f40d6a4d1..f2b2f71e4 100644
--- a/doc/guides/nics/dpaa.rst
+++ b/doc/guides/nics/dpaa.rst
@@ -339,6 +339,14 @@ FMLIB
 `Kernel FMD Driver `_.
+VSP (Virtual Storage Profile)
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ Storage profiles are a means to provide virtualized interfaces. A range of
+ storage profiles can be associated with Ethernet ports.
+ They are selected during classification and specify how the frame should be
+ written to memory and which buffer pool to select for packet storage in
+ queues. The start and end margins of the buffer can also be configured.
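To show how the fm_vsp_* calls introduced by this patch fit together, the following is a minimal, hypothetical sketch; the handle names, the profile window size and the chosen relative profile ID are illustrative assumptions, not values taken from the driver, and error handling is reduced to the bare minimum:

/* Assumes fm_vsp_ext.h (added below) and <string.h> are included, and that
 * h_fm and h_fm_port are already-opened FM and FM-port handles.
 */
static t_handle example_vsp_bringup(t_handle h_fm, t_handle h_fm_port)
{
	t_fm_port_vspalloc_params alloc_params;
	t_fm_vsp_params vsp_params;
	t_handle h_vsp;

	/* Reserve a window of storage profiles on the Rx port;
	 * num_of_profiles must be a power of 2.
	 */
	memset(&alloc_params, 0, sizeof(alloc_params));
	alloc_params.num_of_profiles = 8;	/* assumption */
	alloc_params.dflt_relative_id = 0;
	if (fm_port_vsp_alloc(h_fm_port, &alloc_params) != E_OK)
		return NULL;

	/* Configure one profile inside that window, then initialize it.
	 * In real use ext_buf_pools would also be filled with the BMan
	 * pool(s) backing this profile.
	 */
	memset(&vsp_params, 0, sizeof(vsp_params));
	vsp_params.h_fm = h_fm;
	vsp_params.relative_profile_id = 1;	/* assumption */
	vsp_params.port_params.port_type = e_FM_PORT_TYPE_RX;
	vsp_params.port_params.port_id = 0;	/* assumption */
	h_vsp = fm_vsp_config(&vsp_params);
	if (!h_vsp)
		return NULL;

	if (fm_vsp_init(h_vsp) != E_OK) {
		fm_vsp_free(h_vsp);
		return NULL;
	}
	return h_vsp;
}

The buffer prefix layout can additionally be adjusted between fm_vsp_config() and fm_vsp_init() via fm_vsp_config_buffer_prefix_content(), also added in this patch.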
+ Limitations ----------- diff --git a/drivers/net/dpaa/fmlib/fm_vsp.c b/drivers/net/dpaa/fmlib/fm_vsp.c new file mode 100644 index 000000000..78efd93f2 --- /dev/null +++ b/drivers/net/dpaa/fmlib/fm_vsp.c @@ -0,0 +1,148 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright 2019-2020 NXP + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include +#include "fm_ext.h" +#include "fm_pcd_ext.h" +#include "fm_port_ext.h" +#include "fm_vsp_ext.h" +#include + +uint32_t +fm_port_vsp_alloc(t_handle h_fm_port, + t_fm_port_vspalloc_params *p_params) +{ + t_device *p_dev = (t_device *)h_fm_port; + ioc_fm_port_vsp_alloc_params_t params; + + _fml_dbg("Calling...\n"); + memset(¶ms, 0, sizeof(ioc_fm_port_vsp_alloc_params_t)); + memcpy(¶ms.params, p_params, sizeof(t_fm_port_vspalloc_params)); + + if (ioctl(p_dev->fd, FM_PORT_IOC_VSP_ALLOC, ¶ms)) + RETURN_ERROR(MINOR, E_INVALID_OPERATION, NO_MSG); + + _fml_dbg("Called.\n"); + + return E_OK; +} + +t_handle +fm_vsp_config(t_fm_vsp_params *p_fm_vsp_params) +{ + t_device *p_dev = NULL; + t_device *p_vsp_dev = NULL; + ioc_fm_vsp_params_t param; + + p_dev = p_fm_vsp_params->h_fm; + + _fml_dbg("Performing VSP Configuration...\n"); + + memset(¶m, 0, sizeof(ioc_fm_vsp_params_t)); + memcpy(¶m, p_fm_vsp_params, sizeof(t_fm_vsp_params)); + param.vsp_params.h_fm = UINT_TO_PTR(p_dev->id); + param.id = NULL; + + if (ioctl(p_dev->fd, FM_IOC_VSP_CONFIG, ¶m)) { + DPAA_PMD_ERR("%s ioctl error\n", __func__); + return NULL; + } + + p_vsp_dev = (t_device *)malloc(sizeof(t_device)); + if (!p_vsp_dev) { + DPAA_PMD_ERR("FM VSP Params!\n"); + return NULL; + } + memset(p_vsp_dev, 0, sizeof(t_device)); + p_vsp_dev->h_user_priv = (t_handle)p_dev; + p_dev->owners++; + p_vsp_dev->id = PTR_TO_UINT(param.id); + + _fml_dbg("VSP Configuration completed\n"); + + return (t_handle)p_vsp_dev; +} + +uint32_t +fm_vsp_init(t_handle h_fm_vsp) +{ + t_device *p_dev = NULL; + t_device *p_vsp_dev = (t_device *)h_fm_vsp; + ioc_fm_obj_t id; + + _fml_dbg("Calling...\n"); + + p_dev = (t_device *)p_vsp_dev->h_user_priv; + id.obj = UINT_TO_PTR(p_vsp_dev->id); + + if (ioctl(p_dev->fd, FM_IOC_VSP_INIT, &id)) { + DPAA_PMD_ERR("%s ioctl error\n", __func__); + RETURN_ERROR(MINOR, E_INVALID_OPERATION, NO_MSG); + } + + _fml_dbg("Called.\n"); + + return E_OK; +} + +uint32_t +fm_vsp_free(t_handle h_fm_vsp) +{ + t_device *p_dev = NULL; + t_device *p_vsp_dev = (t_device *)h_fm_vsp; + ioc_fm_obj_t id; + + _fml_dbg("Calling...\n"); + + p_dev = (t_device *)p_vsp_dev->h_user_priv; + id.obj = UINT_TO_PTR(p_vsp_dev->id); + + if (ioctl(p_dev->fd, FM_IOC_VSP_FREE, &id)) { + DPAA_PMD_ERR("%s ioctl error\n", __func__); + RETURN_ERROR(MINOR, E_INVALID_OPERATION, NO_MSG); + } + + p_dev->owners--; + free(p_vsp_dev); + + _fml_dbg("Called.\n"); + + return E_OK; +} + +uint32_t +fm_vsp_config_buffer_prefix_content(t_handle h_fm_vsp, + t_fm_buffer_prefix_content *p_fm_buffer_prefix_content) +{ + t_device *p_dev = NULL; + t_device *p_vsp_dev = (t_device *)h_fm_vsp; + ioc_fm_buffer_prefix_content_params_t params; + + _fml_dbg("Calling...\n"); + + p_dev = (t_device *)p_vsp_dev->h_user_priv; + params.p_fm_vsp = UINT_TO_PTR(p_vsp_dev->id); + memcpy(¶ms.fm_buffer_prefix_content, + p_fm_buffer_prefix_content, sizeof(*p_fm_buffer_prefix_content)); + + if (ioctl(p_dev->fd, FM_IOC_VSP_CONFIG_BUFFER_PREFIX_CONTENT, + ¶ms)) { + DPAA_PMD_ERR("%s ioctl error\n", __func__); + RETURN_ERROR(MINOR, E_INVALID_OPERATION, NO_MSG); + } + + _fml_dbg("Called.\n"); + + return E_OK; +} diff --git 
a/drivers/net/dpaa/fmlib/fm_vsp_ext.h b/drivers/net/dpaa/fmlib/fm_vsp_ext.h new file mode 100644 index 000000000..b51c46162 --- /dev/null +++ b/drivers/net/dpaa/fmlib/fm_vsp_ext.h @@ -0,0 +1,131 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright 2008-2012 Freescale Semiconductor, Inc + * Copyright 2019-2020 NXP + */ + +/* + * @File fm_vsp_ext.h + * + * @Description FM Virtual Storage-Profile + */ +#ifndef __FM_VSP_EXT_H +#define __FM_VSP_EXT_H +#include "ncsw_ext.h" +#include "fm_ext.h" +#include "net_ext.h" + +typedef struct t_fm_vsp_params { + t_handle h_fm; + /**< A handle to the FM object this VSP related to */ + t_fm_ext_pools ext_buf_pools; + /**< Which external buffer pools are used (up to + * FM_PORT_MAX_NUM_OF_EXT_POOLS), and their sizes. + * Parameter associated with Rx / OP port + */ + uint16_t liodn_offset; /**< VSP's LIODN offset */ + struct { + e_fm_port_type port_type; /**< Port type */ + uint8_t port_id; /**< Port Id - relative to type */ + } port_params; + uint8_t relative_profile_id; + /**< VSP Id - relative to VSP's range defined in + * relevant FM object + */ +} t_fm_vsp_params; + +typedef struct ioc_fm_vsp_params_t { + struct t_fm_vsp_params vsp_params; + void *id; /**< return value */ +} ioc_fm_vsp_params_t; + +typedef struct t_fm_port_vspalloc_params { + uint8_t num_of_profiles; + /**< Number of Virtual Storage Profiles; must be a power of 2 */ + uint8_t dflt_relative_id; + /**< The default Virtual-Storage-Profile-id dedicated to Rx/OP port. The + * same default Virtual-Storage-Profile-id will be for coupled Tx port + * if relevant function called for Rx port + */ +} t_fm_port_vspalloc_params; + +typedef struct ioc_fm_port_vsp_alloc_params_t { + struct t_fm_port_vspalloc_params params; + void *p_fm_tx_port; + /**< Handle to coupled Tx Port; not relevant for OP port. */ +} ioc_fm_port_vsp_alloc_params_t; + +typedef struct ioc_fm_buffer_prefix_content_t { + uint16_t priv_data_size; + /**< Number of bytes to be left at the beginning of the external + * buffer; Note that the private-area will start from the base + * of the buffer address. + */ + bool pass_prs_result; + /**< TRUE to pass the parse result to/from the FM; User + * may use fm_port_get_buffer_prs_result() in order to + * get the parser-result from a buffer. + */ + bool pass_time_stamp; + /**< TRUE to pass the timeStamp to/from the FM User may + * use fm_port_get_buffer_time_stamp() in order to get + * the parser-result from a buffer. + */ + bool pass_hash_result; + /**< TRUE to pass the KG hash result to/from the FM User + * may use fm_port_get_buffer_hash_result() in order to + * get the parser-result from a buffer. + */ + bool pass_all_other_pcd_info; + /**< Add all other Internal-Context information: AD, + * hash-result, key, etc. + */ + uint16_t data_align; + /**< 0 to use driver's default alignment [64], + * other value for selecting a data alignment (must be a + * power of 2); if write optimization is used, must be + * >= 16. + */ + uint8_t manip_extra_space; + /**< Maximum extra size needed + * (insertion-size minus removal-size); + * Note that this field impacts the size of the + * buffer-prefix (i.e. 
it pushes the data offset); + * This field is irrelevant if DPAA_VERSION==10 + */ +} ioc_fm_buffer_prefix_content_t; + +typedef struct ioc_fm_buffer_prefix_content_params_t { + void *p_fm_vsp; + ioc_fm_buffer_prefix_content_t fm_buffer_prefix_content; +} ioc_fm_buffer_prefix_content_params_t; + +uint32_t fm_port_vsp_alloc(t_handle h_fm_port, + t_fm_port_vspalloc_params *p_params); + +t_handle fm_vsp_config(t_fm_vsp_params *p_fm_vsp_params); + +uint32_t fm_vsp_init(t_handle h_fm_vsp); + +uint32_t fm_vsp_free(t_handle h_fm_vsp); + +uint32_t fm_vsp_config_buffer_prefix_content(t_handle h_fm_vsp, + t_fm_buffer_prefix_content *p_fm_buffer_prefix_content); + +#define FM_PORT_IOC_VSP_ALLOC \ + _IOW(FM_IOC_TYPE_BASE, FM_PORT_IOC_NUM(38), \ + ioc_fm_port_vsp_alloc_params_t) + +#define FM_IOC_VSP_CONFIG \ + _IOWR(FM_IOC_TYPE_BASE, FM_IOC_NUM(8), ioc_fm_vsp_params_t) + +#define FM_IOC_VSP_INIT \ + _IOW(FM_IOC_TYPE_BASE, FM_IOC_NUM(9), ioc_fm_obj_t) + +#define FM_IOC_VSP_FREE \ + _IOW(FM_IOC_TYPE_BASE, FM_IOC_NUM(10), ioc_fm_obj_t) + +#define FM_IOC_VSP_CONFIG_BUFFER_PREFIX_CONTENT \ + _IOW(FM_IOC_TYPE_BASE, FM_IOC_NUM(12), \ + ioc_fm_buffer_prefix_content_params_t) + +#endif /* __FM_VSP_EXT_H */ diff --git a/drivers/net/dpaa/meson.build b/drivers/net/dpaa/meson.build index b2cd555fd..aca1dccc3 100644 --- a/drivers/net/dpaa/meson.build +++ b/drivers/net/dpaa/meson.build @@ -9,6 +9,7 @@ deps += ['mempool_dpaa'] sources = files('dpaa_ethdev.c', 'fmlib/fm_lib.c', + 'fmlib/fm_vsp.c', 'dpaa_rxtx.c') if cc.has_argument('-Wno-pointer-arith') From patchwork Fri Sep 4 08:29:16 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Hemant Agrawal X-Patchwork-Id: 76536 Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id 9A7CDA04B1; Fri, 4 Sep 2020 10:35:45 +0200 (CEST) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 17CC91C0CC; Fri, 4 Sep 2020 10:35:38 +0200 (CEST) Received: from inva020.nxp.com (inva020.nxp.com [92.121.34.13]) by dpdk.org (Postfix) with ESMTP id AFB4A1C0AF for ; Fri, 4 Sep 2020 10:35:35 +0200 (CEST) Received: from inva020.nxp.com (localhost [127.0.0.1]) by inva020.eu-rdc02.nxp.com (Postfix) with ESMTP id 7C8F91A02FE; Fri, 4 Sep 2020 10:35:35 +0200 (CEST) Received: from invc005.ap-rdc01.nxp.com (invc005.ap-rdc01.nxp.com [165.114.16.14]) by inva020.eu-rdc02.nxp.com (Postfix) with ESMTP id 689F91A0509; Fri, 4 Sep 2020 10:35:33 +0200 (CEST) Received: from bf-netperf1.ap.freescale.net (bf-netperf1.ap.freescale.net [10.232.133.63]) by invc005.ap-rdc01.nxp.com (Postfix) with ESMTP id 45AC5402D5; Fri, 4 Sep 2020 10:35:31 +0200 (CEST) From: Hemant Agrawal To: dev@dpdk.org Cc: ferruh.yigit@intel.com Date: Fri, 4 Sep 2020 13:59:16 +0530 Message-Id: <20200904082921.17400-2-hemant.agrawal@nxp.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20200904082921.17400-1-hemant.agrawal@nxp.com> References: <20200901123650.29908-1-hemant.agrawal@nxp.com> <20200904082921.17400-1-hemant.agrawal@nxp.com> X-Virus-Scanned: ClamAV using ClamSMTP Subject: [dpdk-dev] [PATCH v7 2/7] net/dpaa: add support for fmcless mode X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Sachin Saxena This patch uses fmlib 
to configure the FMAN HW for flow and distribution configuration, thus avoiding the need for static FMC tool execution optionally. Signed-off-by: Sachin Saxena Signed-off-by: Hemant Agrawal --- drivers/bus/dpaa/include/fsl_qman.h | 1 + drivers/bus/dpaa/rte_bus_dpaa_version.map | 1 + drivers/net/dpaa/dpaa_ethdev.c | 111 ++- drivers/net/dpaa/dpaa_ethdev.h | 4 + drivers/net/dpaa/dpaa_flow.c | 901 ++++++++++++++++++++++ drivers/net/dpaa/dpaa_flow.h | 14 + drivers/net/dpaa/meson.build | 1 + 7 files changed, 1011 insertions(+), 22 deletions(-) create mode 100644 drivers/net/dpaa/dpaa_flow.c create mode 100644 drivers/net/dpaa/dpaa_flow.h diff --git a/drivers/bus/dpaa/include/fsl_qman.h b/drivers/bus/dpaa/include/fsl_qman.h index 8ba37411a..dd7ca783a 100644 --- a/drivers/bus/dpaa/include/fsl_qman.h +++ b/drivers/bus/dpaa/include/fsl_qman.h @@ -1896,6 +1896,7 @@ int qman_enqueue_orp(struct qman_fq *fq, const struct qm_fd *fd, u32 flags, * FQs than requested (though alignment will be as requested). If @partial is * zero, the return value will either be 'count' or negative. */ +__rte_internal int qman_alloc_fqid_range(u32 *result, u32 count, u32 align, int partial); static inline int qman_alloc_fqid(u32 *result) { diff --git a/drivers/bus/dpaa/rte_bus_dpaa_version.map b/drivers/bus/dpaa/rte_bus_dpaa_version.map index 77840a564..f47922c6a 100644 --- a/drivers/bus/dpaa/rte_bus_dpaa_version.map +++ b/drivers/bus/dpaa/rte_bus_dpaa_version.map @@ -49,6 +49,7 @@ INTERNAL { netcfg_release; per_lcore_dpaa_io; qman_alloc_cgrid_range; + qman_alloc_fqid_range; qman_alloc_pool_range; qman_clear_irq; qman_create_cgr; diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c index c15e2b546..c5b9ac1a5 100644 --- a/drivers/net/dpaa/dpaa_ethdev.c +++ b/drivers/net/dpaa/dpaa_ethdev.c @@ -39,6 +39,7 @@ #include #include +#include #include #include @@ -76,6 +77,7 @@ static uint64_t dev_tx_offloads_nodis = /* Keep track of whether QMAN and BMAN have been globally initialized */ static int is_global_init; +static int fmc_q = 1; /* Indicates the use of static fmc for distribution */ static int default_q; /* use default queue - FMC is not executed*/ /* At present we only allow up to 4 push mode queues as default - as each of * this queue need dedicated portal and we are short of portals. 
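As an application-side illustration of the FMC-less mode this patch introduces, a minimal sketch follows; the port id and queue count are placeholders, and the plumbing of rss_hf into dpaa_fm_config()'s req_dist_set is assumed from the ETH_RSS_* handling added in dpaa_flow.c below:

#include <rte_ethdev.h>

/* Request hash distribution on IP addresses and L4 ports through the normal
 * ethdev RSS configuration; in FMC-less mode the PMD builds the FMAN KeyGen
 * schemes for these fields itself, so no static FMC tool run is required.
 */
static int example_fmcless_rss(uint16_t port_id, uint16_t nb_rx_queues)
{
	struct rte_eth_conf conf = {
		.rxmode = { .mq_mode = ETH_MQ_RX_RSS },
		.rx_adv_conf = {
			.rss_conf = {
				.rss_hf = ETH_RSS_IP | ETH_RSS_UDP | ETH_RSS_TCP,
			},
		},
	};

	return rte_eth_dev_configure(port_id, nb_rx_queues, 1, &conf);
}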
@@ -1418,16 +1420,15 @@ static int dpaa_rx_queue_init(struct qman_fq *fq, struct qman_cgr *cgr_rx, } }; - if (fqid) { + if (fmc_q || default_q) { ret = qman_reserve_fqid(fqid); if (ret) { - DPAA_PMD_ERR("reserve rx fqid 0x%x failed with ret: %d", + DPAA_PMD_ERR("reserve rx fqid 0x%x failed, ret: %d", fqid, ret); return -EINVAL; } - } else { - flags |= QMAN_FQ_FLAG_DYNAMIC_FQID; } + DPAA_PMD_DEBUG("creating rx fq %p, fqid 0x%x", fq, fqid); ret = qman_create_fq(fqid, flags, fq); if (ret) { @@ -1602,7 +1603,7 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev) struct fman_if_bpool *bp, *tmp_bp; uint32_t cgrid[DPAA_MAX_NUM_PCD_QUEUES]; uint32_t cgrid_tx[MAX_DPAA_CORES]; - char eth_buf[RTE_ETHER_ADDR_FMT_SIZE]; + uint32_t dev_rx_fqids[DPAA_MAX_NUM_PCD_QUEUES]; PMD_INIT_FUNC_TRACE(); @@ -1619,30 +1620,36 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev) dpaa_intf->ifid = dev_id; dpaa_intf->cfg = cfg; + memset((char *)dev_rx_fqids, 0, + sizeof(uint32_t) * DPAA_MAX_NUM_PCD_QUEUES); + /* Initialize Rx FQ's */ if (default_q) { num_rx_fqs = DPAA_DEFAULT_NUM_PCD_QUEUES; + } else if (fmc_q) { + num_rx_fqs = 1; } else { - if (getenv("DPAA_NUM_RX_QUEUES")) - num_rx_fqs = atoi(getenv("DPAA_NUM_RX_QUEUES")); - else - num_rx_fqs = DPAA_DEFAULT_NUM_PCD_QUEUES; + /* FMCLESS mode, load balance to multiple cores.*/ + num_rx_fqs = rte_lcore_count(); } - /* Each device can not have more than DPAA_MAX_NUM_PCD_QUEUES RX * queues. */ - if (num_rx_fqs <= 0 || num_rx_fqs > DPAA_MAX_NUM_PCD_QUEUES) { + if (num_rx_fqs < 0 || num_rx_fqs > DPAA_MAX_NUM_PCD_QUEUES) { DPAA_PMD_ERR("Invalid number of RX queues\n"); return -EINVAL; } - dpaa_intf->rx_queues = rte_zmalloc(NULL, - sizeof(struct qman_fq) * num_rx_fqs, MAX_CACHELINE); - if (!dpaa_intf->rx_queues) { - DPAA_PMD_ERR("Failed to alloc mem for RX queues\n"); - return -ENOMEM; + if (num_rx_fqs > 0) { + dpaa_intf->rx_queues = rte_zmalloc(NULL, + sizeof(struct qman_fq) * num_rx_fqs, MAX_CACHELINE); + if (!dpaa_intf->rx_queues) { + DPAA_PMD_ERR("Failed to alloc mem for RX queues\n"); + return -ENOMEM; + } + } else { + dpaa_intf->rx_queues = NULL; } memset(cgrid, 0, sizeof(cgrid)); @@ -1661,7 +1668,7 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev) } /* If congestion control is enabled globally*/ - if (td_threshold) { + if (num_rx_fqs > 0 && td_threshold) { dpaa_intf->cgr_rx = rte_zmalloc(NULL, sizeof(struct qman_cgr) * num_rx_fqs, MAX_CACHELINE); if (!dpaa_intf->cgr_rx) { @@ -1680,12 +1687,20 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev) dpaa_intf->cgr_rx = NULL; } + if (!fmc_q && !default_q) { + ret = qman_alloc_fqid_range(dev_rx_fqids, num_rx_fqs, + num_rx_fqs, 0); + if (ret < 0) { + DPAA_PMD_ERR("Failed to alloc rx fqid's\n"); + goto free_rx; + } + } + for (loop = 0; loop < num_rx_fqs; loop++) { if (default_q) fqid = cfg->rx_def; else - fqid = DPAA_PCD_FQID_START + fman_intf->mac_idx * - DPAA_PCD_FQID_MULTIPLIER + loop; + fqid = dev_rx_fqids[loop]; if (dpaa_intf->cgr_rx) dpaa_intf->cgr_rx[loop].cgrid = cgrid[loop]; @@ -1782,9 +1797,16 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev) /* copy the primary mac address */ rte_ether_addr_copy(&fman_intf->mac_addr, ð_dev->data->mac_addrs[0]); - rte_ether_format_addr(eth_buf, sizeof(eth_buf), &fman_intf->mac_addr); - DPAA_PMD_INFO("net: dpaa: %s: %s", dpaa_device->name, eth_buf); + RTE_LOG(INFO, PMD, "net: dpaa: %s: %02x:%02x:%02x:%02x:%02x:%02x\n", + dpaa_device->name, + fman_intf->mac_addr.addr_bytes[0], + fman_intf->mac_addr.addr_bytes[1], + fman_intf->mac_addr.addr_bytes[2], + fman_intf->mac_addr.addr_bytes[3], + 
fman_intf->mac_addr.addr_bytes[4], + fman_intf->mac_addr.addr_bytes[5]); + /* Disable RX mode */ fman_if_discard_rx_errors(fman_intf); @@ -1831,6 +1853,12 @@ dpaa_dev_uninit(struct rte_eth_dev *dev) return -1; } + /* DPAA FM deconfig */ + if (!(default_q || fmc_q)) { + if (dpaa_fm_deconfig(dpaa_intf, dev->process_private)) + DPAA_PMD_WARN("DPAA FM deconfig failed\n"); + } + dpaa_eth_dev_close(dev); /* release configuration memory */ @@ -1874,7 +1902,7 @@ dpaa_dev_uninit(struct rte_eth_dev *dev) } static int -rte_dpaa_probe(struct rte_dpaa_driver *dpaa_drv __rte_unused, +rte_dpaa_probe(struct rte_dpaa_driver *dpaa_drv, struct rte_dpaa_device *dpaa_dev) { int diag; @@ -1920,6 +1948,13 @@ rte_dpaa_probe(struct rte_dpaa_driver *dpaa_drv __rte_unused, default_q = 1; } + if (!(default_q || fmc_q)) { + if (dpaa_fm_init()) { + DPAA_PMD_ERR("FM init failed\n"); + return -1; + } + } + /* disabling the default push mode for LS1043 */ if (dpaa_svr_family == SVR_LS1043A_FAMILY) dpaa_push_mode_max_queue = 0; @@ -1993,6 +2028,38 @@ rte_dpaa_remove(struct rte_dpaa_device *dpaa_dev) return 0; } +static void __attribute__((destructor(102))) dpaa_finish(void) +{ + /* For secondary, primary will do all the cleanup */ + if (rte_eal_process_type() != RTE_PROC_PRIMARY) + return; + + if (!(default_q || fmc_q)) { + unsigned int i; + + for (i = 0; i < RTE_MAX_ETHPORTS; i++) { + if (rte_eth_devices[i].dev_ops == &dpaa_devops) { + struct rte_eth_dev *dev = &rte_eth_devices[i]; + struct dpaa_if *dpaa_intf = + dev->data->dev_private; + struct fman_if *fif = + dev->process_private; + if (dpaa_intf->port_handle) + if (dpaa_fm_deconfig(dpaa_intf, fif)) + DPAA_PMD_WARN("DPAA FM " + "deconfig failed\n"); + } + } + if (is_global_init) + if (dpaa_fm_term()) + DPAA_PMD_WARN("DPAA FM term failed\n"); + + is_global_init = 0; + + DPAA_PMD_INFO("DPAA fman cleaned up"); + } +} + static struct rte_dpaa_driver rte_dpaa_pmd = { .drv_flags = RTE_DPAA_DRV_INTR_LSC, .drv_type = FSL_DPAA_ETH, diff --git a/drivers/net/dpaa/dpaa_ethdev.h b/drivers/net/dpaa/dpaa_ethdev.h index 4c40ff86a..b10c4a20b 100644 --- a/drivers/net/dpaa/dpaa_ethdev.h +++ b/drivers/net/dpaa/dpaa_ethdev.h @@ -118,6 +118,10 @@ struct dpaa_if { uint32_t ifid; struct dpaa_bp_info *bp_info; struct rte_eth_fc_conf *fc_conf; + void *port_handle; + void *netenv_handle; + void *scheme_handle[2]; + uint32_t scheme_count; }; struct dpaa_if_stats { diff --git a/drivers/net/dpaa/dpaa_flow.c b/drivers/net/dpaa/dpaa_flow.c new file mode 100644 index 000000000..a12141efe --- /dev/null +++ b/drivers/net/dpaa/dpaa_flow.c @@ -0,0 +1,901 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright 2017-2019 NXP + */ + +/* System headers */ +#include +#include +#include +#include + +#include +#include +#include +#include + +#define DPAA_MAX_NUM_ETH_DEV 8 + +static inline +ioc_fm_pcd_extract_entry_t * +SCH_EXT_ARR(ioc_fm_pcd_kg_scheme_params_t *scheme_params, int hdr_idx) +{ +return &scheme_params->param.key_ext_and_hash.extract_array[hdr_idx]; +} + +#define SCH_EXT_HDR(scheme_params, hdr_idx) \ + SCH_EXT_ARR(scheme_params, hdr_idx)->extract_params.extract_by_hdr + +#define SCH_EXT_FULL_FLD(scheme_params, hdr_idx) \ + SCH_EXT_HDR(scheme_params, hdr_idx).extract_by_hdr_type.full_field + +/* FM global info */ +struct dpaa_fm_info { + t_handle fman_handle; + t_handle pcd_handle; +}; + +/*FM model to read and write from file */ +struct dpaa_fm_model { + uint32_t dev_count; + uint8_t device_order[DPAA_MAX_NUM_ETH_DEV]; + t_fm_port_params fm_port_params[DPAA_MAX_NUM_ETH_DEV]; + t_handle 
netenv_devid[DPAA_MAX_NUM_ETH_DEV]; + t_handle scheme_devid[DPAA_MAX_NUM_ETH_DEV][2]; +}; + +static struct dpaa_fm_info fm_info; +static struct dpaa_fm_model fm_model; +static const char *fm_log = "/tmp/fmdpdk.bin"; + +static void fm_prev_cleanup(void) +{ + uint32_t fman_id = 0, i = 0, devid; + struct dpaa_if dpaa_intf = {0}; + t_fm_pcd_params fm_pcd_params = {0}; + PMD_INIT_FUNC_TRACE(); + + fm_info.fman_handle = fm_open(fman_id); + if (!fm_info.fman_handle) { + printf("\n%s- unable to open FMAN", __func__); + return; + } + + fm_pcd_params.h_fm = fm_info.fman_handle; + fm_pcd_params.prs_support = true; + fm_pcd_params.kg_support = true; + /* FM PCD Open */ + fm_info.pcd_handle = fm_pcd_open(&fm_pcd_params); + if (!fm_info.pcd_handle) { + printf("\n%s- unable to open PCD", __func__); + return; + } + + while (i < fm_model.dev_count) { + devid = fm_model.device_order[i]; + /* FM Port Open */ + fm_model.fm_port_params[devid].h_fm = fm_info.fman_handle; + dpaa_intf.port_handle = + fm_port_open(&fm_model.fm_port_params[devid]); + dpaa_intf.scheme_handle[0] = create_device(fm_info.pcd_handle, + fm_model.scheme_devid[devid][0]); + dpaa_intf.scheme_count = 1; + if (fm_model.scheme_devid[devid][1]) { + dpaa_intf.scheme_handle[1] = + create_device(fm_info.pcd_handle, + fm_model.scheme_devid[devid][1]); + if (dpaa_intf.scheme_handle[1]) + dpaa_intf.scheme_count++; + } + + dpaa_intf.netenv_handle = create_device(fm_info.pcd_handle, + fm_model.netenv_devid[devid]); + i++; + if (!dpaa_intf.netenv_handle || + !dpaa_intf.scheme_handle[0] || + !dpaa_intf.port_handle) + continue; + + if (dpaa_fm_deconfig(&dpaa_intf, NULL)) + printf("\nDPAA FM deconfig failed\n"); + } + + if (dpaa_fm_term()) + printf("\nDPAA FM term failed\n"); + + memset(&fm_model, 0, sizeof(struct dpaa_fm_model)); +} + +void dpaa_write_fm_config_to_file(void) +{ + size_t bytes_write; + FILE *fp = fopen(fm_log, "wb"); + PMD_INIT_FUNC_TRACE(); + + if (!fp) { + DPAA_PMD_ERR("File open failed"); + return; + } + bytes_write = fwrite(&fm_model, sizeof(struct dpaa_fm_model), 1, fp); + if (!bytes_write) { + DPAA_PMD_WARN("No bytes write"); + fclose(fp); + return; + } + fclose(fp); +} + +static void dpaa_read_fm_config_from_file(void) +{ + size_t bytes_read; + FILE *fp = fopen(fm_log, "rb"); + PMD_INIT_FUNC_TRACE(); + + if (!fp) + return; + DPAA_PMD_INFO("Previous DPDK-FM config instance present, cleaning up."); + + bytes_read = fread(&fm_model, sizeof(struct dpaa_fm_model), 1, fp); + if (!bytes_read) { + DPAA_PMD_WARN("No bytes read"); + fclose(fp); + return; + } + fclose(fp); + + /*FM cleanup from previous configured app */ + fm_prev_cleanup(); +} + +static inline int +set_hash_params_eth(ioc_fm_pcd_kg_scheme_params_t *scheme_params, int hdr_idx) +{ + int k; + + for (k = 0; k < 2; k++) { + SCH_EXT_ARR(scheme_params, hdr_idx)->type = + e_IOC_FM_PCD_EXTRACT_BY_HDR; + SCH_EXT_HDR(scheme_params, hdr_idx).hdr = + HEADER_TYPE_ETH; + SCH_EXT_HDR(scheme_params, hdr_idx).hdr_index = + e_IOC_FM_PCD_HDR_INDEX_NONE; + SCH_EXT_HDR(scheme_params, hdr_idx).type = + e_IOC_FM_PCD_EXTRACT_FULL_FIELD; + if (k == 0) + SCH_EXT_FULL_FLD(scheme_params, hdr_idx).eth = + IOC_NET_HF_ETH_SA; + else + SCH_EXT_FULL_FLD(scheme_params, hdr_idx).eth = + IOC_NET_HF_ETH_DA; + hdr_idx++; + } + return hdr_idx; +} + +static inline int +set_hash_params_ipv4(ioc_fm_pcd_kg_scheme_params_t *scheme_params, int hdr_idx) +{ + int k; + + for (k = 0; k < 2; k++) { + SCH_EXT_ARR(scheme_params, hdr_idx)->type = + e_IOC_FM_PCD_EXTRACT_BY_HDR; + SCH_EXT_HDR(scheme_params, hdr_idx).hdr = + 
HEADER_TYPE_IPV4; + SCH_EXT_HDR(scheme_params, hdr_idx).hdr_index = + e_IOC_FM_PCD_HDR_INDEX_NONE; + SCH_EXT_HDR(scheme_params, hdr_idx).type = + e_IOC_FM_PCD_EXTRACT_FULL_FIELD; + if (k == 0) + SCH_EXT_FULL_FLD(scheme_params, hdr_idx).ipv4 = + ioc_net_hf_ipv_4_src_ip; + else + SCH_EXT_FULL_FLD(scheme_params, hdr_idx).ipv4 = + ioc_net_hf_ipv_4_dst_ip; + hdr_idx++; + } + return hdr_idx; +} + +static inline int +set_hash_params_ipv6(ioc_fm_pcd_kg_scheme_params_t *scheme_params, int hdr_idx) +{ + int k; + + for (k = 0; k < 2; k++) { + SCH_EXT_ARR(scheme_params, hdr_idx)->type = + e_IOC_FM_PCD_EXTRACT_BY_HDR; + SCH_EXT_HDR(scheme_params, hdr_idx).hdr = + HEADER_TYPE_IPV6; + SCH_EXT_HDR(scheme_params, hdr_idx).hdr_index = + e_IOC_FM_PCD_HDR_INDEX_NONE; + SCH_EXT_HDR(scheme_params, hdr_idx).type = + e_IOC_FM_PCD_EXTRACT_FULL_FIELD; + if (k == 0) + SCH_EXT_FULL_FLD(scheme_params, hdr_idx).ipv6 = + ioc_net_hf_ipv_6_src_ip; + else + SCH_EXT_FULL_FLD(scheme_params, hdr_idx).ipv6 = + ioc_net_hf_ipv_6_dst_ip; + hdr_idx++; + } + return hdr_idx; +} + +static inline int +set_hash_params_udp(ioc_fm_pcd_kg_scheme_params_t *scheme_params, int hdr_idx) +{ + int k; + + for (k = 0; k < 2; k++) { + SCH_EXT_ARR(scheme_params, hdr_idx)->type = + e_IOC_FM_PCD_EXTRACT_BY_HDR; + SCH_EXT_HDR(scheme_params, hdr_idx).hdr = + HEADER_TYPE_UDP; + SCH_EXT_HDR(scheme_params, hdr_idx).hdr_index = + e_IOC_FM_PCD_HDR_INDEX_NONE; + SCH_EXT_HDR(scheme_params, hdr_idx).type = + e_IOC_FM_PCD_EXTRACT_FULL_FIELD; + if (k == 0) + SCH_EXT_FULL_FLD(scheme_params, hdr_idx).udp = + IOC_NET_HF_UDP_PORT_SRC; + else + SCH_EXT_FULL_FLD(scheme_params, hdr_idx).udp = + IOC_NET_HF_UDP_PORT_DST; + hdr_idx++; + } + return hdr_idx; +} + +static inline int +set_hash_params_tcp(ioc_fm_pcd_kg_scheme_params_t *scheme_params, int hdr_idx) +{ + int k; + + for (k = 0; k < 2; k++) { + SCH_EXT_ARR(scheme_params, hdr_idx)->type = + e_IOC_FM_PCD_EXTRACT_BY_HDR; + SCH_EXT_HDR(scheme_params, hdr_idx).hdr = + HEADER_TYPE_TCP; + SCH_EXT_HDR(scheme_params, hdr_idx).hdr_index = + e_IOC_FM_PCD_HDR_INDEX_NONE; + SCH_EXT_HDR(scheme_params, hdr_idx).type = + e_IOC_FM_PCD_EXTRACT_FULL_FIELD; + if (k == 0) + SCH_EXT_FULL_FLD(scheme_params, hdr_idx).tcp = + IOC_NET_HF_TCP_PORT_SRC; + else + SCH_EXT_FULL_FLD(scheme_params, hdr_idx).tcp = + IOC_NET_HF_TCP_PORT_DST; + hdr_idx++; + } + return hdr_idx; +} + +static inline int +set_hash_params_sctp(ioc_fm_pcd_kg_scheme_params_t *scheme_params, int hdr_idx) +{ + int k; + + for (k = 0; k < 2; k++) { + SCH_EXT_ARR(scheme_params, hdr_idx)->type = + e_IOC_FM_PCD_EXTRACT_BY_HDR; + SCH_EXT_HDR(scheme_params, hdr_idx).hdr = + HEADER_TYPE_SCTP; + SCH_EXT_HDR(scheme_params, hdr_idx).hdr_index = + e_IOC_FM_PCD_HDR_INDEX_NONE; + SCH_EXT_HDR(scheme_params, hdr_idx).type = + e_IOC_FM_PCD_EXTRACT_FULL_FIELD; + if (k == 0) + SCH_EXT_FULL_FLD(scheme_params, hdr_idx).sctp = + IOC_NET_HF_SCTP_PORT_SRC; + else + SCH_EXT_FULL_FLD(scheme_params, hdr_idx).sctp = + IOC_NET_HF_SCTP_PORT_DST; + hdr_idx++; + } + return hdr_idx; +} + +/* Set scheme params for hash distribution */ +static int set_scheme_params(ioc_fm_pcd_kg_scheme_params_t *scheme_params, + ioc_fm_pcd_net_env_params_t *dist_units, + struct dpaa_if *dpaa_intf, + struct fman_if *fif __rte_unused) +{ + int dist_idx, hdr_idx = 0; + PMD_INIT_FUNC_TRACE(); + + scheme_params->param.use_hash = 1; + scheme_params->param.modify = false; + scheme_params->param.always_direct = false; + scheme_params->param.scheme_counter.update = 1; + scheme_params->param.scheme_counter.value = 0; + 
scheme_params->param.next_engine = e_IOC_FM_PCD_DONE; + scheme_params->param.base_fqid = dpaa_intf->rx_queues[0].fqid; + scheme_params->param.net_env_params.net_env_id = + dpaa_intf->netenv_handle; + scheme_params->param.net_env_params.num_of_distinction_units = + dist_units->param.num_of_distinction_units; + + scheme_params->param.key_ext_and_hash.hash_dist_num_of_fqids = + dpaa_intf->nb_rx_queues; + scheme_params->param.key_ext_and_hash.num_of_used_extracts = + 2 * dist_units->param.num_of_distinction_units; + + for (dist_idx = 0; dist_idx < + dist_units->param.num_of_distinction_units; + dist_idx++) { + switch (dist_units->param.units[dist_idx].hdrs[0].hdr) { + case HEADER_TYPE_ETH: + hdr_idx = set_hash_params_eth(scheme_params, hdr_idx); + break; + + case HEADER_TYPE_IPV4: + hdr_idx = set_hash_params_ipv4(scheme_params, hdr_idx); + break; + + case HEADER_TYPE_IPV6: + hdr_idx = set_hash_params_ipv6(scheme_params, hdr_idx); + break; + + case HEADER_TYPE_UDP: + hdr_idx = set_hash_params_udp(scheme_params, hdr_idx); + break; + + case HEADER_TYPE_TCP: + hdr_idx = set_hash_params_tcp(scheme_params, hdr_idx); + break; + + case HEADER_TYPE_SCTP: + hdr_idx = set_hash_params_sctp(scheme_params, hdr_idx); + break; + + default: + DPAA_PMD_ERR("Invalid Distinction Unit"); + return -1; + } + } + + return 0; +} + +static void set_dist_units(ioc_fm_pcd_net_env_params_t *dist_units, + uint64_t req_dist_set) +{ + uint32_t loop = 0, dist_idx = 0, dist_field = 0; + int l2_configured = 0, ipv4_configured = 0, ipv6_configured = 0; + int udp_configured = 0, tcp_configured = 0, sctp_configured = 0; + PMD_INIT_FUNC_TRACE(); + + if (!req_dist_set) + dist_units->param.units[dist_idx++].hdrs[0].hdr = + HEADER_TYPE_ETH; + + while (req_dist_set) { + if (req_dist_set % 2 != 0) { + dist_field = 1U << loop; + switch (dist_field) { + case ETH_RSS_L2_PAYLOAD: + + if (l2_configured) + break; + l2_configured = 1; + + dist_units->param.units[dist_idx++].hdrs[0].hdr + = HEADER_TYPE_ETH; + break; + + case ETH_RSS_IPV4: + case ETH_RSS_FRAG_IPV4: + case ETH_RSS_NONFRAG_IPV4_OTHER: + + if (ipv4_configured) + break; + ipv4_configured = 1; + dist_units->param.units[dist_idx++].hdrs[0].hdr + = HEADER_TYPE_IPV4; + break; + + case ETH_RSS_IPV6: + case ETH_RSS_FRAG_IPV6: + case ETH_RSS_NONFRAG_IPV6_OTHER: + case ETH_RSS_IPV6_EX: + + if (ipv6_configured) + break; + ipv6_configured = 1; + dist_units->param.units[dist_idx++].hdrs[0].hdr + = HEADER_TYPE_IPV6; + break; + + case ETH_RSS_NONFRAG_IPV4_TCP: + case ETH_RSS_NONFRAG_IPV6_TCP: + case ETH_RSS_IPV6_TCP_EX: + + if (tcp_configured) + break; + tcp_configured = 1; + dist_units->param.units[dist_idx++].hdrs[0].hdr + = HEADER_TYPE_TCP; + break; + + case ETH_RSS_NONFRAG_IPV4_UDP: + case ETH_RSS_NONFRAG_IPV6_UDP: + case ETH_RSS_IPV6_UDP_EX: + + if (udp_configured) + break; + udp_configured = 1; + dist_units->param.units[dist_idx++].hdrs[0].hdr + = HEADER_TYPE_UDP; + break; + + case ETH_RSS_NONFRAG_IPV4_SCTP: + case ETH_RSS_NONFRAG_IPV6_SCTP: + + if (sctp_configured) + break; + sctp_configured = 1; + + dist_units->param.units[dist_idx++].hdrs[0].hdr + = HEADER_TYPE_SCTP; + break; + + default: + DPAA_PMD_ERR("Bad flow distribution option"); + } + } + req_dist_set = req_dist_set >> 1; + loop++; + } + + /* Dist units is set to dist_idx */ + dist_units->param.num_of_distinction_units = dist_idx; +} + +/* Apply PCD configuration on interface */ +static inline int set_port_pcd(struct dpaa_if *dpaa_intf) +{ + int ret = 0; + unsigned int idx; + ioc_fm_port_pcd_params_t pcd_param; + 
ioc_fm_port_pcd_prs_params_t prs_param; + ioc_fm_port_pcd_kg_params_t kg_param; + + PMD_INIT_FUNC_TRACE(); + + /* PCD support for hash distribution */ + uint8_t pcd_support = e_FM_PORT_PCD_SUPPORT_PRS_AND_KG; + + memset(&pcd_param, 0, sizeof(pcd_param)); + memset(&prs_param, 0, sizeof(prs_param)); + memset(&kg_param, 0, sizeof(kg_param)); + + /* Set parse params */ + prs_param.first_prs_hdr = HEADER_TYPE_ETH; + + /* Set kg params */ + for (idx = 0; idx < dpaa_intf->scheme_count; idx++) + kg_param.scheme_ids[idx] = dpaa_intf->scheme_handle[idx]; + kg_param.num_schemes = dpaa_intf->scheme_count; + + /* Set pcd params */ + pcd_param.net_env_id = dpaa_intf->netenv_handle; + pcd_param.pcd_support = pcd_support; + pcd_param.p_kg_params = &kg_param; + pcd_param.p_prs_params = &prs_param; + + /* FM PORT Disable */ + ret = fm_port_disable(dpaa_intf->port_handle); + if (ret != E_OK) { + DPAA_PMD_ERR("fm_port_disable: Failed"); + return ret; + } + + /* FM PORT SetPCD */ + ret = fm_port_set_pcd(dpaa_intf->port_handle, &pcd_param); + if (ret != E_OK) { + DPAA_PMD_ERR("fm_port_set_pcd: Failed"); + return ret; + } + + /* FM PORT Enable */ + ret = fm_port_enable(dpaa_intf->port_handle); + if (ret != E_OK) { + DPAA_PMD_ERR("fm_port_enable: Failed"); + goto fm_port_delete_pcd; + } + + return 0; + +fm_port_delete_pcd: + /* FM PORT DeletePCD */ + ret = fm_port_delete_pcd(dpaa_intf->port_handle); + if (ret != E_OK) { + DPAA_PMD_ERR("fm_port_delete_pcd: Failed\n"); + return ret; + } + return -1; +} + +/* Unset PCD NerEnv and scheme */ +static inline void unset_pcd_netenv_scheme(struct dpaa_if *dpaa_intf) +{ + int ret; + PMD_INIT_FUNC_TRACE(); + + /* reduce scheme count */ + if (dpaa_intf->scheme_count) + dpaa_intf->scheme_count--; + + DPAA_PMD_DEBUG("KG SCHEME DEL %d handle =%p", + dpaa_intf->scheme_count, + dpaa_intf->scheme_handle[dpaa_intf->scheme_count]); + + ret = fm_pcd_kg_scheme_delete(dpaa_intf->scheme_handle + [dpaa_intf->scheme_count]); + if (ret != E_OK) + DPAA_PMD_ERR("fm_pcd_kg_scheme_delete: Failed"); + + dpaa_intf->scheme_handle[dpaa_intf->scheme_count] = NULL; +} + +/* Set PCD NetEnv and Scheme and default scheme */ +static inline int set_default_scheme(struct dpaa_if *dpaa_intf) +{ + ioc_fm_pcd_kg_scheme_params_t scheme_params; + int idx = dpaa_intf->scheme_count; + PMD_INIT_FUNC_TRACE(); + + /* Set PCD NetEnvCharacteristics */ + memset(&scheme_params, 0, sizeof(scheme_params)); + + /* Adding 10 to default schemes as the number of interface would be + * lesser than 10 and the relative scheme ids should be unique for + * every scheme. 
+ */ + scheme_params.param.scm_id.relative_scheme_id = + 10 + dpaa_intf->ifid; + scheme_params.param.use_hash = 0; + scheme_params.param.next_engine = e_IOC_FM_PCD_DONE; + scheme_params.param.net_env_params.num_of_distinction_units = 0; + scheme_params.param.net_env_params.net_env_id = + dpaa_intf->netenv_handle; + scheme_params.param.base_fqid = dpaa_intf->rx_queues[0].fqid; + scheme_params.param.key_ext_and_hash.hash_dist_num_of_fqids = 1; + scheme_params.param.key_ext_and_hash.num_of_used_extracts = 0; + scheme_params.param.modify = false; + scheme_params.param.always_direct = false; + scheme_params.param.scheme_counter.update = 1; + scheme_params.param.scheme_counter.value = 0; + + /* FM PCD KgSchemeSet */ + dpaa_intf->scheme_handle[idx] = + fm_pcd_kg_scheme_set(fm_info.pcd_handle, &scheme_params); + DPAA_PMD_DEBUG("KG SCHEME SET %d handle =%p", + idx, dpaa_intf->scheme_handle[idx]); + if (!dpaa_intf->scheme_handle[idx]) { + DPAA_PMD_ERR("fm_pcd_kg_scheme_set: Failed"); + return -1; + } + + fm_model.scheme_devid[dpaa_intf->ifid][idx] = + get_device_id(dpaa_intf->scheme_handle[idx]); + dpaa_intf->scheme_count++; + return 0; +} + + +/* Set PCD NetEnv and Scheme and default scheme */ +static inline int set_pcd_netenv_scheme(struct dpaa_if *dpaa_intf, + uint64_t req_dist_set, + struct fman_if *fif) +{ + int ret = -1; + ioc_fm_pcd_net_env_params_t dist_units; + ioc_fm_pcd_kg_scheme_params_t scheme_params; + int idx = dpaa_intf->scheme_count; + PMD_INIT_FUNC_TRACE(); + + /* Set PCD NetEnvCharacteristics */ + memset(&dist_units, 0, sizeof(dist_units)); + memset(&scheme_params, 0, sizeof(scheme_params)); + + /* Set dist unit header type */ + set_dist_units(&dist_units, req_dist_set); + + scheme_params.param.scm_id.relative_scheme_id = dpaa_intf->ifid; + + /* Set PCD Scheme params */ + ret = set_scheme_params(&scheme_params, &dist_units, dpaa_intf, fif); + if (ret) { + DPAA_PMD_ERR("Set scheme params: Failed"); + return -1; + } + + /* FM PCD KgSchemeSet */ + dpaa_intf->scheme_handle[idx] = + fm_pcd_kg_scheme_set(fm_info.pcd_handle, &scheme_params); + DPAA_PMD_DEBUG("KG SCHEME SET %d handle =%p", + idx, dpaa_intf->scheme_handle[idx]); + if (!dpaa_intf->scheme_handle[idx]) { + DPAA_PMD_ERR("fm_pcd_kg_scheme_set: Failed"); + return -1; + } + + fm_model.scheme_devid[dpaa_intf->ifid][idx] = + get_device_id(dpaa_intf->scheme_handle[idx]); + dpaa_intf->scheme_count++; + return 0; +} + + +static inline int get_port_type(struct fman_if *fif) +{ + if (fif->mac_type == fman_mac_1g) + return e_FM_PORT_TYPE_RX; + else if (fif->mac_type == fman_mac_2_5g) + return e_FM_PORT_TYPE_RX_2_5G; + else if (fif->mac_type == fman_mac_10g) + return e_FM_PORT_TYPE_RX_10G; + + DPAA_PMD_ERR("MAC type unsupported"); + return -1; +} + +static inline int set_fm_port_handle(struct dpaa_if *dpaa_intf, + uint64_t req_dist_set, + struct fman_if *fif) +{ + t_fm_port_params fm_port_params; + ioc_fm_pcd_net_env_params_t dist_units; + PMD_INIT_FUNC_TRACE(); + + /* FMAN mac indexes mappings (0 is unused, + * first 8 are for 1G, next for 10G ports + */ + uint8_t mac_idx[] = {-1, 0, 1, 2, 3, 4, 5, 6, 7, 0, 1}; + + /* Memset FM port params */ + memset(&fm_port_params, 0, sizeof(fm_port_params)); + + /* Set FM port params */ + fm_port_params.h_fm = fm_info.fman_handle; + fm_port_params.port_type = get_port_type(fif); + fm_port_params.port_id = mac_idx[fif->mac_idx]; + + /* FM PORT Open */ + dpaa_intf->port_handle = fm_port_open(&fm_port_params); + if (!dpaa_intf->port_handle) { + DPAA_PMD_ERR("fm_port_open: Failed\n"); + return -1; + } + 
+ fm_model.fm_port_params[dpaa_intf->ifid] = fm_port_params; + + /* Set PCD NetEnvCharacteristics */ + memset(&dist_units, 0, sizeof(dist_units)); + + /* Set dist unit header type */ + set_dist_units(&dist_units, req_dist_set); + + /* FM PCD NetEnvCharacteristicsSet */ + dpaa_intf->netenv_handle = + fm_pcd_net_env_characteristics_set(fm_info.pcd_handle, + &dist_units); + if (!dpaa_intf->netenv_handle) { + DPAA_PMD_ERR("fm_pcd_net_env_characteristics_set: Failed"); + return -1; + } + + fm_model.netenv_devid[dpaa_intf->ifid] = + get_device_id(dpaa_intf->netenv_handle); + + return 0; +} + +/* De-Configure DPAA FM */ +int dpaa_fm_deconfig(struct dpaa_if *dpaa_intf, + struct fman_if *fif __rte_unused) +{ + int ret; + unsigned int idx; + + PMD_INIT_FUNC_TRACE(); + + /* FM PORT Disable */ + ret = fm_port_disable(dpaa_intf->port_handle); + if (ret != E_OK) { + DPAA_PMD_ERR("fm_port_disable: Failed"); + return ret; + } + + /* FM PORT DeletePCD */ + ret = fm_port_delete_pcd(dpaa_intf->port_handle); + if (ret != E_OK) { + DPAA_PMD_ERR("fm_port_delete_pcd: Failed"); + return ret; + } + + for (idx = 0; idx < dpaa_intf->scheme_count; idx++) { + DPAA_PMD_DEBUG("KG SCHEME DEL %d, handle =%p", + idx, dpaa_intf->scheme_handle[idx]); + /* FM PCD KgSchemeDelete */ + ret = fm_pcd_kg_scheme_delete(dpaa_intf->scheme_handle[idx]); + if (ret != E_OK) { + DPAA_PMD_ERR("fm_pcd_kg_scheme_delete: Failed"); + return ret; + } + dpaa_intf->scheme_handle[idx] = NULL; + } + /* FM PCD NetEnvCharacteristicsDelete */ + ret = fm_pcd_net_env_characteristics_delete(dpaa_intf->netenv_handle); + if (ret != E_OK) { + DPAA_PMD_ERR("fm_pcd_net_env_characteristics_delete: Failed"); + return ret; + } + dpaa_intf->netenv_handle = NULL; + + /* FM PORT Close */ + fm_port_close(dpaa_intf->port_handle); + dpaa_intf->port_handle = NULL; + + /* Set scheme count to 0 */ + dpaa_intf->scheme_count = 0; + + return 0; +} + +int dpaa_fm_config(struct rte_eth_dev *dev, uint64_t req_dist_set) +{ + struct dpaa_if *dpaa_intf = dev->data->dev_private; + struct fman_if *fif = dev->process_private; + int ret; + unsigned int i = 0; + PMD_INIT_FUNC_TRACE(); + + if (dpaa_intf->port_handle) { + if (dpaa_fm_deconfig(dpaa_intf, fif)) + DPAA_PMD_ERR("DPAA FM deconfig failed"); + } + + if (!dev->data->nb_rx_queues) + return 0; + + if (dev->data->nb_rx_queues & (dev->data->nb_rx_queues - 1)) { + DPAA_PMD_ERR("No of queues should be power of 2"); + return -1; + } + + dpaa_intf->nb_rx_queues = dev->data->nb_rx_queues; + + /* Open FM Port and set it in port info */ + ret = set_fm_port_handle(dpaa_intf, req_dist_set, fif); + if (ret) { + DPAA_PMD_ERR("Set FM Port handle: Failed"); + return -1; + } + + /* Set PCD netenv and scheme */ + if (req_dist_set) { + ret = set_pcd_netenv_scheme(dpaa_intf, req_dist_set, fif); + if (ret) { + DPAA_PMD_ERR("Set PCD NetEnv and Scheme dist: Failed"); + goto unset_fm_port_handle; + } + } + /* Set default netenv and scheme */ + ret = set_default_scheme(dpaa_intf); + if (ret) { + DPAA_PMD_ERR("Set PCD NetEnv and Scheme: Failed"); + goto unset_pcd_netenv_scheme1; + } + + /* Set Port PCD */ + ret = set_port_pcd(dpaa_intf); + if (ret) { + DPAA_PMD_ERR("Set Port PCD: Failed"); + goto unset_pcd_netenv_scheme; + } + + for (; i < fm_model.dev_count; i++) + if (fm_model.device_order[i] == dpaa_intf->ifid) + return 0; + + fm_model.device_order[fm_model.dev_count] = dpaa_intf->ifid; + fm_model.dev_count++; + + return 0; + +unset_pcd_netenv_scheme: + unset_pcd_netenv_scheme(dpaa_intf); + +unset_pcd_netenv_scheme1: + 
unset_pcd_netenv_scheme(dpaa_intf); + +unset_fm_port_handle: + /* FM PORT Close */ + fm_port_close(dpaa_intf->port_handle); + dpaa_intf->port_handle = NULL; + return -1; +} + +int dpaa_fm_init(void) +{ + t_handle fman_handle; + t_handle pcd_handle; + t_fm_pcd_params fm_pcd_params = {0}; + /* Hard-coded : fman id 0 since one fman is present in LS104x */ + int fman_id = 0, ret; + PMD_INIT_FUNC_TRACE(); + + dpaa_read_fm_config_from_file(); + + /* FM Open */ + fman_handle = fm_open(fman_id); + if (!fman_handle) { + DPAA_PMD_ERR("fm_open: Failed"); + return -1; + } + + /* FM PCD Open */ + fm_pcd_params.h_fm = fman_handle; + fm_pcd_params.prs_support = true; + fm_pcd_params.kg_support = true; + pcd_handle = fm_pcd_open(&fm_pcd_params); + if (!pcd_handle) { + fm_close(fman_handle); + DPAA_PMD_ERR("fm_pcd_open: Failed"); + return -1; + } + + /* FM PCD Enable */ + ret = fm_pcd_enable(pcd_handle); + if (ret) { + fm_close(fman_handle); + fm_pcd_close(pcd_handle); + DPAA_PMD_ERR("fm_pcd_enable: Failed"); + return -1; + } + + /* Set fman and pcd handle in fm info */ + fm_info.fman_handle = fman_handle; + fm_info.pcd_handle = pcd_handle; + + return 0; +} + + +/* De-initialization of FM */ +int dpaa_fm_term(void) +{ + int ret; + + PMD_INIT_FUNC_TRACE(); + + if (fm_info.pcd_handle && fm_info.fman_handle) { + /* FM PCD Disable */ + ret = fm_pcd_disable(fm_info.pcd_handle); + if (ret) { + DPAA_PMD_ERR("fm_pcd_disable: Failed"); + return -1; + } + + /* FM PCD Close */ + fm_pcd_close(fm_info.pcd_handle); + fm_info.pcd_handle = NULL; + } + + if (fm_info.fman_handle) { + /* FM Close */ + fm_close(fm_info.fman_handle); + fm_info.fman_handle = NULL; + } + + if (access(fm_log, F_OK) != -1) { + ret = remove(fm_log); + if (ret) + DPAA_PMD_ERR("File remove: Failed"); + } + return 0; +} diff --git a/drivers/net/dpaa/dpaa_flow.h b/drivers/net/dpaa/dpaa_flow.h new file mode 100644 index 000000000..d16bfec21 --- /dev/null +++ b/drivers/net/dpaa/dpaa_flow.h @@ -0,0 +1,14 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright 2017,2019 NXP + */ + +#ifndef __DPAA_FLOW_H__ +#define __DPAA_FLOW_H__ + +int dpaa_fm_init(void); +int dpaa_fm_term(void); +int dpaa_fm_config(struct rte_eth_dev *dev, uint64_t req_dist_set); +int dpaa_fm_deconfig(struct dpaa_if *dpaa_intf, struct fman_if *fif); +void dpaa_write_fm_config_to_file(void); + +#endif diff --git a/drivers/net/dpaa/meson.build b/drivers/net/dpaa/meson.build index aca1dccc3..51f1f32a1 100644 --- a/drivers/net/dpaa/meson.build +++ b/drivers/net/dpaa/meson.build @@ -10,6 +10,7 @@ deps += ['mempool_dpaa'] sources = files('dpaa_ethdev.c', 'fmlib/fm_lib.c', 'fmlib/fm_vsp.c', + 'dpaa_flow.c', 'dpaa_rxtx.c') if cc.has_argument('-Wno-pointer-arith') From patchwork Fri Sep 4 08:29:17 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Hemant Agrawal X-Patchwork-Id: 76537 Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id C3E95A04B1; Fri, 4 Sep 2020 10:35:53 +0200 (CEST) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 922401C0D0; Fri, 4 Sep 2020 10:35:39 +0200 (CEST) Received: from inva021.nxp.com (inva021.nxp.com [92.121.34.21]) by dpdk.org (Postfix) with ESMTP id E9F711C0AF for ; Fri, 4 Sep 2020 10:35:35 +0200 (CEST) Received: from inva021.nxp.com (localhost [127.0.0.1]) by inva021.eu-rdc02.nxp.com (Postfix) with ESMTP id C8E132003AA; 
Fri, 4 Sep 2020 10:35:35 +0200 (CEST) Received: from invc005.ap-rdc01.nxp.com (invc005.ap-rdc01.nxp.com [165.114.16.14]) by inva021.eu-rdc02.nxp.com (Postfix) with ESMTP id F15F7201652; Fri, 4 Sep 2020 10:35:33 +0200 (CEST) Received: from bf-netperf1.ap.freescale.net (bf-netperf1.ap.freescale.net [10.232.133.63]) by invc005.ap-rdc01.nxp.com (Postfix) with ESMTP id E4133402D9; Fri, 4 Sep 2020 10:35:31 +0200 (CEST) From: Hemant Agrawal To: dev@dpdk.org Cc: ferruh.yigit@intel.com Date: Fri, 4 Sep 2020 13:59:17 +0530 Message-Id: <20200904082921.17400-3-hemant.agrawal@nxp.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20200904082921.17400-1-hemant.agrawal@nxp.com> References: <20200901123650.29908-1-hemant.agrawal@nxp.com> <20200904082921.17400-1-hemant.agrawal@nxp.com> X-Virus-Scanned: ClamAV using ClamSMTP Subject: [dpdk-dev] [PATCH v7 3/7] bus/dpaa: add shared MAC support X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Radu Bulie A shared MAC interface is an interface which can be used by both kernel and userspace based on classification configuration It is defined in dts with the compatible string "fsl,dpa-ethernet-shared" which bpool will be seeded by the dpdk partition and configured as a netdev by the dpaa Linux eth driver. User space buffers from the bpool will be kmapped by the kernel. Signed-off-by: Radu Bulie Signed-off-by: Jun Yang Signed-off-by: Nipun Gupta Acked-by: Hemant Agrawal --- drivers/bus/dpaa/base/fman/fman.c | 27 ++++++++++++++++++++++----- drivers/bus/dpaa/include/fman.h | 2 ++ drivers/net/dpaa/dpaa_ethdev.c | 31 +++++++++++++++++-------------- drivers/net/dpaa/dpaa_flow.c | 18 ++++++++++++++---- 4 files changed, 55 insertions(+), 23 deletions(-) diff --git a/drivers/bus/dpaa/base/fman/fman.c b/drivers/bus/dpaa/base/fman/fman.c index 33be9e5d7..3ae29bf06 100644 --- a/drivers/bus/dpaa/base/fman/fman.c +++ b/drivers/bus/dpaa/base/fman/fman.c @@ -167,13 +167,21 @@ fman_if_init(const struct device_node *dpa_node) const char *mname, *fname; const char *dname = dpa_node->full_name; size_t lenp; - int _errno; + int _errno, is_shared = 0; const char *char_prop; uint32_t na; if (of_device_is_available(dpa_node) == false) return 0; + if (!of_device_is_compatible(dpa_node, "fsl,dpa-ethernet-init") && + !of_device_is_compatible(dpa_node, "fsl,dpa-ethernet-shared")) { + return 0; + } + + if (of_device_is_compatible(dpa_node, "fsl,dpa-ethernet-shared")) + is_shared = 1; + rprop = "fsl,qman-frame-queues-rx"; mprop = "fsl,fman-mac"; @@ -387,7 +395,7 @@ fman_if_init(const struct device_node *dpa_node) goto err; } - assert(lenp == (4 * sizeof(phandle))); + assert(lenp >= (4 * sizeof(phandle))); na = of_n_addr_cells(mac_node); /* Get rid of endianness (issues). Convert to host byte order */ @@ -408,7 +416,7 @@ fman_if_init(const struct device_node *dpa_node) goto err; } - assert(lenp == (4 * sizeof(phandle))); + assert(lenp >= (4 * sizeof(phandle))); /*TODO: Fix for other cases also */ na = of_n_addr_cells(mac_node); /* Get rid of endianness (issues). 
Convert to host byte order */ @@ -508,6 +516,9 @@ fman_if_init(const struct device_node *dpa_node) pools_phandle++; } + if (is_shared) + __if->__if.is_shared_mac = 1; + /* Parsing of the network interface is complete, add it to the list */ DPAA_BUS_LOG(DEBUG, "Found %s, Tx Channel = %x, FMAN = %x," "Port ID = %x", @@ -524,7 +535,7 @@ fman_if_init(const struct device_node *dpa_node) int fman_init(void) { - const struct device_node *dpa_node; + const struct device_node *dpa_node, *parent_node; int _errno; /* If multiple dependencies try to initialise the Fman driver, don't @@ -539,7 +550,13 @@ fman_init(void) return fman_ccsr_map_fd; } - for_each_compatible_node(dpa_node, NULL, "fsl,dpa-ethernet-init") { + parent_node = of_find_compatible_node(NULL, NULL, "fsl,dpaa"); + if (!parent_node) { + DPAA_BUS_LOG(ERR, "Unable to find fsl,dpaa node"); + return -ENODEV; + } + + for_each_child_node(parent_node, dpa_node) { _errno = fman_if_init(dpa_node); if (_errno) { FMAN_ERR(_errno, "if_init(%s)\n", dpa_node->full_name); diff --git a/drivers/bus/dpaa/include/fman.h b/drivers/bus/dpaa/include/fman.h index 7a0a7d405..cb7f18ca2 100644 --- a/drivers/bus/dpaa/include/fman.h +++ b/drivers/bus/dpaa/include/fman.h @@ -320,6 +320,8 @@ struct fman_if { struct rte_ether_addr mac_addr; /* The Qman channel to schedule Tx FQs to */ u16 tx_channel_id; + + uint8_t is_shared_mac; /* The hard-coded FQIDs for this interface. Note: this doesn't cover * the PCD nor the "Rx default" FQIDs, which are configured via FMC * and its XML-based configuration. diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c index c5b9ac1a5..c2d480397 100644 --- a/drivers/net/dpaa/dpaa_ethdev.c +++ b/drivers/net/dpaa/dpaa_ethdev.c @@ -351,7 +351,8 @@ static void dpaa_eth_dev_stop(struct rte_eth_dev *dev) PMD_INIT_FUNC_TRACE(); - fman_if_disable_rx(fif); + if (!fif->is_shared_mac) + fman_if_disable_rx(fif); dev->tx_pkt_burst = dpaa_eth_tx_drop_all; } @@ -1807,19 +1808,21 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev) fman_intf->mac_addr.addr_bytes[4], fman_intf->mac_addr.addr_bytes[5]); - - /* Disable RX mode */ - fman_if_discard_rx_errors(fman_intf); - fman_if_disable_rx(fman_intf); - /* Disable promiscuous mode */ - fman_if_promiscuous_disable(fman_intf); - /* Disable multicast */ - fman_if_reset_mcast_filter_table(fman_intf); - /* Reset interface statistics */ - fman_if_stats_reset(fman_intf); - /* Disable SG by default */ - fman_if_set_sg(fman_intf, 0); - fman_if_set_maxfrm(fman_intf, RTE_ETHER_MAX_LEN + VLAN_TAG_SIZE); + if (!fman_intf->is_shared_mac) { + /* Disable RX mode */ + fman_if_discard_rx_errors(fman_intf); + fman_if_disable_rx(fman_intf); + /* Disable promiscuous mode */ + fman_if_promiscuous_disable(fman_intf); + /* Disable multicast */ + fman_if_reset_mcast_filter_table(fman_intf); + /* Reset interface statistics */ + fman_if_stats_reset(fman_intf); + /* Disable SG by default */ + fman_if_set_sg(fman_intf, 0); + fman_if_set_maxfrm(fman_intf, + RTE_ETHER_MAX_LEN + VLAN_TAG_SIZE); + } return 0; diff --git a/drivers/net/dpaa/dpaa_flow.c b/drivers/net/dpaa/dpaa_flow.c index a12141efe..d24cd856c 100644 --- a/drivers/net/dpaa/dpaa_flow.c +++ b/drivers/net/dpaa/dpaa_flow.c @@ -736,6 +736,14 @@ int dpaa_fm_deconfig(struct dpaa_if *dpaa_intf, } dpaa_intf->netenv_handle = NULL; + if (fif && fif->is_shared_mac) { + ret = fm_port_enable(dpaa_intf->port_handle); + if (ret != E_OK) { + DPAA_PMD_ERR("shared mac re-enable failed"); + return ret; + } + } + /* FM PORT Close */ fm_port_close(dpaa_intf->port_handle); 
dpaa_intf->port_handle = NULL; @@ -785,10 +793,12 @@ int dpaa_fm_config(struct rte_eth_dev *dev, uint64_t req_dist_set) } } /* Set default netenv and scheme */ - ret = set_default_scheme(dpaa_intf); - if (ret) { - DPAA_PMD_ERR("Set PCD NetEnv and Scheme: Failed"); - goto unset_pcd_netenv_scheme1; + if (!fif->is_shared_mac) { + ret = set_default_scheme(dpaa_intf); + if (ret) { + DPAA_PMD_ERR("Set PCD NetEnv and Scheme: Failed"); + goto unset_pcd_netenv_scheme1; + } } /* Set Port PCD */ From patchwork Fri Sep 4 08:29:18 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Hemant Agrawal X-Patchwork-Id: 76538 Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id F0FDCA04B1; Fri, 4 Sep 2020 10:36:06 +0200 (CEST) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 5DD1A1C10E; Fri, 4 Sep 2020 10:35:41 +0200 (CEST) Received: from inva021.nxp.com (inva021.nxp.com [92.121.34.21]) by dpdk.org (Postfix) with ESMTP id BF7B81C0BF for ; Fri, 4 Sep 2020 10:35:36 +0200 (CEST) Received: from inva021.nxp.com (localhost [127.0.0.1]) by inva021.eu-rdc02.nxp.com (Postfix) with ESMTP id 923CF201689; Fri, 4 Sep 2020 10:35:36 +0200 (CEST) Received: from invc005.ap-rdc01.nxp.com (invc005.ap-rdc01.nxp.com [165.114.16.14]) by inva021.eu-rdc02.nxp.com (Postfix) with ESMTP id F165620192B; Fri, 4 Sep 2020 10:35:34 +0200 (CEST) Received: from bf-netperf1.ap.freescale.net (bf-netperf1.ap.freescale.net [10.232.133.63]) by invc005.ap-rdc01.nxp.com (Postfix) with ESMTP id 78281402EB; Fri, 4 Sep 2020 10:35:32 +0200 (CEST) From: Hemant Agrawal To: dev@dpdk.org Cc: ferruh.yigit@intel.com Date: Fri, 4 Sep 2020 13:59:18 +0530 Message-Id: <20200904082921.17400-4-hemant.agrawal@nxp.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20200904082921.17400-1-hemant.agrawal@nxp.com> References: <20200901123650.29908-1-hemant.agrawal@nxp.com> <20200904082921.17400-1-hemant.agrawal@nxp.com> X-Virus-Scanned: ClamAV using ClamSMTP Subject: [dpdk-dev] [PATCH v7 4/7] bus/dpaa: add Virtual Storage Profile port init X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" This patch add support to initialize the VSP ports in the FMAN library. 
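The profile window is read from the FMAN port's "extended args" node in the device tree. A hypothetical fragment of the shape the parser added below expects (the node name and all cell values are illustrative only):

fman_port_rx_ext: fman-port-rx-extended-args {
	compatible = "fsl,fman-port-1g-rx-extended-args";
	cell-index = <0>;
	/* <number-of-profiles> <base-profile-id> */
	vsp-window = <8 0>;
};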
Signed-off-by: Hemant Agrawal --- drivers/bus/dpaa/base/fman/fman.c | 57 +++++++++++++++++++++++++++++++ drivers/bus/dpaa/include/fman.h | 3 ++ 2 files changed, 60 insertions(+) diff --git a/drivers/bus/dpaa/base/fman/fman.c b/drivers/bus/dpaa/base/fman/fman.c index 3ae29bf06..39102bc1f 100644 --- a/drivers/bus/dpaa/base/fman/fman.c +++ b/drivers/bus/dpaa/base/fman/fman.c @@ -145,6 +145,61 @@ fman_get_mac_index(uint64_t regs_addr_host, uint8_t *mac_idx) return ret; } +static void fman_if_vsp_init(struct __fman_if *__if) +{ + const phandle *prop; + int cell_index; + const struct device_node *dev; + size_t lenp; + const uint8_t mac_idx[] = {-1, 0, 1, 2, 3, 4, 5, 6, 7, 0, 1}; + + if (__if->__if.mac_type == fman_mac_1g) { + for_each_compatible_node(dev, NULL, + "fsl,fman-port-1g-rx-extended-args") { + prop = of_get_property(dev, "cell-index", &lenp); + if (prop) { + cell_index = of_read_number( + &prop[0], + lenp / sizeof(phandle)); + if (cell_index == mac_idx[__if->__if.mac_idx]) { + prop = of_get_property( + dev, + "vsp-window", &lenp); + if (prop) { + __if->__if.num_profiles = + of_read_number( + &prop[0], 1); + __if->__if.base_profile_id = + of_read_number( + &prop[1], 1); + } + } + } + } + } else if (__if->__if.mac_type == fman_mac_10g) { + for_each_compatible_node(dev, NULL, + "fsl,fman-port-10g-rx-extended-args") { + prop = of_get_property(dev, "cell-index", &lenp); + if (prop) { + cell_index = of_read_number( + &prop[0], lenp / sizeof(phandle)); + if (cell_index == mac_idx[__if->__if.mac_idx]) { + prop = of_get_property( + dev, "vsp-window", &lenp); + if (prop) { + __if->__if.num_profiles = + of_read_number( + &prop[0], 1); + __if->__if.base_profile_id = + of_read_number( + &prop[1], 1); + } + } + } + } + } +} + static int fman_if_init(const struct device_node *dpa_node) { @@ -519,6 +574,8 @@ fman_if_init(const struct device_node *dpa_node) if (is_shared) __if->__if.is_shared_mac = 1; + fman_if_vsp_init(__if); + /* Parsing of the network interface is complete, add it to the list */ DPAA_BUS_LOG(DEBUG, "Found %s, Tx Channel = %x, FMAN = %x," "Port ID = %x", diff --git a/drivers/bus/dpaa/include/fman.h b/drivers/bus/dpaa/include/fman.h index cb7f18ca2..dcf408372 100644 --- a/drivers/bus/dpaa/include/fman.h +++ b/drivers/bus/dpaa/include/fman.h @@ -321,6 +321,9 @@ struct fman_if { /* The Qman channel to schedule Tx FQs to */ u16 tx_channel_id; + uint8_t base_profile_id; + uint8_t num_profiles; + uint8_t is_shared_mac; /* The hard-coded FQIDs for this interface. 
Note: this doesn't cover * the PCD nor the "Rx default" FQIDs, which are configured via FMC
From patchwork Fri Sep 4 08:29:19 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Hemant Agrawal X-Patchwork-Id: 76539 Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id 7D8AFA04B1; Fri, 4 Sep 2020 10:36:14 +0200 (CEST) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 8F5F91C116; Fri, 4 Sep 2020 10:35:42 +0200 (CEST) Received: from inva020.nxp.com (inva020.nxp.com [92.121.34.13]) by dpdk.org (Postfix) with ESMTP id 841461C0C0 for ; Fri, 4 Sep 2020 10:35:37 +0200 (CEST) Received: from inva020.nxp.com (localhost [127.0.0.1]) by inva020.eu-rdc02.nxp.com (Postfix) with ESMTP id 61CFB1A12DC; Fri, 4 Sep 2020 10:35:37 +0200 (CEST) Received: from invc005.ap-rdc01.nxp.com (invc005.ap-rdc01.nxp.com [165.114.16.14]) by inva020.eu-rdc02.nxp.com (Postfix) with ESMTP id 8652C1A0533; Fri, 4 Sep 2020 10:35:35 +0200 (CEST) Received: from bf-netperf1.ap.freescale.net (bf-netperf1.ap.freescale.net [10.232.133.63]) by invc005.ap-rdc01.nxp.com (Postfix) with ESMTP id 0C0154030E; Fri, 4 Sep 2020 10:35:32 +0200 (CEST) From: Hemant Agrawal To: dev@dpdk.org Cc: ferruh.yigit@intel.com Date: Fri, 4 Sep 2020 13:59:19 +0530 Message-Id: <20200904082921.17400-5-hemant.agrawal@nxp.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20200904082921.17400-1-hemant.agrawal@nxp.com> References: <20200901123650.29908-1-hemant.agrawal@nxp.com> <20200904082921.17400-1-hemant.agrawal@nxp.com> X-Virus-Scanned: ClamAV using ClamSMTP Subject: [dpdk-dev] [PATCH v7 5/7] net/dpaa: add support for Virtual Storage Profile X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev"
From: Jun Yang This patch adds support for the Virtual Storage Profile (VSP) feature. With VSP support, the HW buffer pool id (bpid) is not allocated when the memory pool is created; the bpid is instead identified by the dpaa flow create API. The memory pool of an RX queue is attached to the specific BMan pool according to the VSP ID when the RX queue is set up. For fmlib based hash queues, the VSP base ID is assigned to each queue.
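Condensed from the Rx queue setup changes in the diff below (error paths and the pre-existing single-pool checks are trimmed, and the local variables dpaa_intf, fif, rxq and mp are the ones already in scope in dpaa_eth_rx_queue_setup()), the per-queue binding works roughly like this:

	if (fif->num_profiles) {
		int8_t vsp_id = rxq->vsp_id;

		if (vsp_id >= 0) {
			/* Queue carries an explicit profile: tie the
			 * mempool's bpid to that VSP now.
			 */
			ret = dpaa_port_vsp_update(dpaa_intf, fmc_q, vsp_id,
					DPAA_MEMPOOL_TO_POOL_INFO(mp)->bpid,
					fif);
		} else {
			/* No explicit VSP: the queue falls back to the base
			 * profile, which is only legal on a non-shared MAC.
			 */
			dpaa_intf->vsp_bpid[fif->base_profile_id] =
					DPAA_MEMPOOL_TO_POOL_INFO(mp)->bpid;
		}
	} else {
		/* Ports without storage profiles keep using slot 0 */
		dpaa_intf->vsp_bpid[0] = DPAA_MEMPOOL_TO_POOL_INFO(mp)->bpid;
	}

In other words, a queue either names a VSP, in which case dpaa_port_vsp_update() binds the mempool's bpid to that profile, or it uses the base profile, and the driver records the bpid so that every queue on the same profile must use the same pool.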
Signed-off-by: Jun Yang Acked-by: Hemant Agrawal --- drivers/bus/dpaa/include/fsl_qman.h | 1 + drivers/net/dpaa/dpaa_ethdev.c | 133 +++++++++++++++++----- drivers/net/dpaa/dpaa_ethdev.h | 7 ++ drivers/net/dpaa/dpaa_flow.c | 164 +++++++++++++++++++++++++++- drivers/net/dpaa/dpaa_flow.h | 5 + 5 files changed, 282 insertions(+), 28 deletions(-) diff --git a/drivers/bus/dpaa/include/fsl_qman.h b/drivers/bus/dpaa/include/fsl_qman.h index dd7ca783a..10212f0fd 100644 --- a/drivers/bus/dpaa/include/fsl_qman.h +++ b/drivers/bus/dpaa/include/fsl_qman.h @@ -1229,6 +1229,7 @@ struct qman_fq { int q_fd; u16 ch_id; + int8_t vsp_id; u8 cgr_groupid; u8 is_static:4; u8 qp_initialized:4; diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c index c2d480397..8e7eb9824 100644 --- a/drivers/net/dpaa/dpaa_ethdev.c +++ b/drivers/net/dpaa/dpaa_ethdev.c @@ -722,6 +722,55 @@ static int dpaa_eth_multicast_disable(struct rte_eth_dev *dev) return 0; } +static void dpaa_fman_if_pool_setup(struct rte_eth_dev *dev) +{ + struct dpaa_if *dpaa_intf = dev->data->dev_private; + struct fman_if_ic_params icp; + uint32_t fd_offset; + uint32_t bp_size; + + memset(&icp, 0, sizeof(icp)); + /* set ICEOF for to the default value , which is 0*/ + icp.iciof = DEFAULT_ICIOF; + icp.iceof = DEFAULT_RX_ICEOF; + icp.icsz = DEFAULT_ICSZ; + fman_if_set_ic_params(dev->process_private, &icp); + + fd_offset = RTE_PKTMBUF_HEADROOM + DPAA_HW_BUF_RESERVE; + fman_if_set_fdoff(dev->process_private, fd_offset); + + /* Buffer pool size should be equal to Dataroom Size*/ + bp_size = rte_pktmbuf_data_room_size(dpaa_intf->bp_info->mp); + + fman_if_set_bp(dev->process_private, + dpaa_intf->bp_info->mp->size, + dpaa_intf->bp_info->bpid, bp_size); +} + +static inline int dpaa_eth_rx_queue_bp_check(struct rte_eth_dev *dev, + int8_t vsp_id, uint32_t bpid) +{ + struct dpaa_if *dpaa_intf = dev->data->dev_private; + struct fman_if *fif = dev->process_private; + + if (fif->num_profiles) { + if (vsp_id < 0) + vsp_id = fif->base_profile_id; + } else { + if (vsp_id < 0) + vsp_id = 0; + } + + if (dpaa_intf->vsp_bpid[vsp_id] && + bpid != dpaa_intf->vsp_bpid[vsp_id]) { + DPAA_PMD_ERR("Various MPs are assigned to RXQs with same VSP"); + + return -1; + } + + return 0; +} + static int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx, uint16_t nb_desc, @@ -757,6 +806,20 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx, DPAA_PMD_INFO("Rx queue setup for queue index: %d fq_id (0x%x)", queue_idx, rxq->fqid); + if (!fif->num_profiles) { + if (dpaa_intf->bp_info && dpaa_intf->bp_info->bp && + dpaa_intf->bp_info->mp != mp) { + DPAA_PMD_WARN("Multiple pools on same interface not" + " supported"); + return -EINVAL; + } + } else { + if (dpaa_eth_rx_queue_bp_check(dev, rxq->vsp_id, + DPAA_MEMPOOL_TO_POOL_INFO(mp)->bpid)) { + return -EINVAL; + } + } + /* Max packet can fit in single buffer */ if (dev->data->dev_conf.rxmode.max_rx_pkt_len <= buffsz) { ; @@ -779,36 +842,40 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx, buffsz - RTE_PKTMBUF_HEADROOM); } - if (!dpaa_intf->bp_info || dpaa_intf->bp_info->mp != mp) { - struct fman_if_ic_params icp; - uint32_t fd_offset; - uint32_t bp_size; + dpaa_intf->bp_info = DPAA_MEMPOOL_TO_POOL_INFO(mp); - if (!mp->pool_data) { - DPAA_PMD_ERR("Not an offloaded buffer pool!"); - return -1; + /* For shared interface, it's done in kernel, skip.*/ + if (!fif->is_shared_mac) + dpaa_fman_if_pool_setup(dev); + + if (fif->num_profiles) { + int8_t vsp_id = 
rxq->vsp_id; + + if (vsp_id >= 0) { + ret = dpaa_port_vsp_update(dpaa_intf, fmc_q, vsp_id, + DPAA_MEMPOOL_TO_POOL_INFO(mp)->bpid, + fif); + if (ret) { + DPAA_PMD_ERR("dpaa_port_vsp_update failed"); + return ret; + } + } else { + DPAA_PMD_INFO("Base profile is associated to" + " RXQ fqid:%d\r\n", rxq->fqid); + if (fif->is_shared_mac) { + DPAA_PMD_ERR("Fatal: Base profile is associated" + " to shared interface on DPDK."); + return -EINVAL; + } + dpaa_intf->vsp_bpid[fif->base_profile_id] = + DPAA_MEMPOOL_TO_POOL_INFO(mp)->bpid; } - dpaa_intf->bp_info = DPAA_MEMPOOL_TO_POOL_INFO(mp); - - memset(&icp, 0, sizeof(icp)); - /* set ICEOF for to the default value , which is 0*/ - icp.iciof = DEFAULT_ICIOF; - icp.iceof = DEFAULT_RX_ICEOF; - icp.icsz = DEFAULT_ICSZ; - fman_if_set_ic_params(fif, &icp); - - fd_offset = RTE_PKTMBUF_HEADROOM + DPAA_HW_BUF_RESERVE; - fman_if_set_fdoff(fif, fd_offset); - - /* Buffer pool size should be equal to Dataroom Size*/ - bp_size = rte_pktmbuf_data_room_size(mp); - fman_if_set_bp(fif, mp->size, - dpaa_intf->bp_info->bpid, bp_size); - dpaa_intf->valid = 1; - DPAA_PMD_DEBUG("if:%s fd_offset = %d offset = %d", - dpaa_intf->name, fd_offset, - fman_if_get_fdoff(fif)); + } else { + dpaa_intf->vsp_bpid[0] = + DPAA_MEMPOOL_TO_POOL_INFO(mp)->bpid; } + + dpaa_intf->valid = 1; DPAA_PMD_DEBUG("if:%s sg_on = %d, max_frm =%d", dpaa_intf->name, fman_if_get_sg_enable(fif), dev->data->dev_conf.rxmode.max_rx_pkt_len); @@ -1605,6 +1672,8 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev) uint32_t cgrid[DPAA_MAX_NUM_PCD_QUEUES]; uint32_t cgrid_tx[MAX_DPAA_CORES]; uint32_t dev_rx_fqids[DPAA_MAX_NUM_PCD_QUEUES]; + int8_t dev_vspids[DPAA_MAX_NUM_PCD_QUEUES]; + int8_t vsp_id = -1; PMD_INIT_FUNC_TRACE(); @@ -1624,6 +1693,8 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev) memset((char *)dev_rx_fqids, 0, sizeof(uint32_t) * DPAA_MAX_NUM_PCD_QUEUES); + memset(dev_vspids, -1, DPAA_MAX_NUM_PCD_QUEUES); + /* Initialize Rx FQ's */ if (default_q) { num_rx_fqs = DPAA_DEFAULT_NUM_PCD_QUEUES; @@ -1703,6 +1774,8 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev) else fqid = dev_rx_fqids[loop]; + vsp_id = dev_vspids[loop]; + if (dpaa_intf->cgr_rx) dpaa_intf->cgr_rx[loop].cgrid = cgrid[loop]; @@ -1711,6 +1784,7 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev) fqid); if (ret) goto free_rx; + dpaa_intf->rx_queues[loop].vsp_id = vsp_id; dpaa_intf->rx_queues[loop].dpaa_intf = dpaa_intf; } dpaa_intf->nb_rx_queues = num_rx_fqs; @@ -2051,6 +2125,11 @@ static void __attribute__((destructor(102))) dpaa_finish(void) if (dpaa_fm_deconfig(dpaa_intf, fif)) DPAA_PMD_WARN("DPAA FM " "deconfig failed\n"); + if (fif->num_profiles) { + if (dpaa_port_vsp_cleanup(dpaa_intf, + fif)) + DPAA_PMD_WARN("DPAA FM vsp cleanup failed\n"); + } } } if (is_global_init) diff --git a/drivers/net/dpaa/dpaa_ethdev.h b/drivers/net/dpaa/dpaa_ethdev.h index b10c4a20b..dd182c4d5 100644 --- a/drivers/net/dpaa/dpaa_ethdev.h +++ b/drivers/net/dpaa/dpaa_ethdev.h @@ -103,6 +103,10 @@ #define DPAA_FD_CMD_CFQ 0x00ffffff /**< Confirmation Frame Queue */ +#define DPAA_VSP_PROFILE_MAX_NUM 8 + +#define DPAA_DEFAULT_RXQ_VSP_ID 1 + /* Each network interface is represented by one of these */ struct dpaa_if { int valid; @@ -122,6 +126,9 @@ struct dpaa_if { void *netenv_handle; void *scheme_handle[2]; uint32_t scheme_count; + + void *vsp_handle[DPAA_VSP_PROFILE_MAX_NUM]; + uint32_t vsp_bpid[DPAA_VSP_PROFILE_MAX_NUM]; }; struct dpaa_if_stats { diff --git a/drivers/net/dpaa/dpaa_flow.c b/drivers/net/dpaa/dpaa_flow.c index d24cd856c..a0087df67 100644 --- 
a/drivers/net/dpaa/dpaa_flow.c +++ b/drivers/net/dpaa/dpaa_flow.c @@ -12,6 +12,7 @@ #include #include #include +#include #define DPAA_MAX_NUM_ETH_DEV 8 @@ -47,6 +48,17 @@ static struct dpaa_fm_info fm_info; static struct dpaa_fm_model fm_model; static const char *fm_log = "/tmp/fmdpdk.bin"; +static inline uint8_t fm_default_vsp_id(struct fman_if *fif) +{ + /* Avoid being same as base profile which could be used + * for kernel interface of shared mac. + */ + if (fif->base_profile_id) + return 0; + else + return DPAA_DEFAULT_RXQ_VSP_ID; +} + static void fm_prev_cleanup(void) { uint32_t fman_id = 0, i = 0, devid; @@ -300,11 +312,18 @@ set_hash_params_sctp(ioc_fm_pcd_kg_scheme_params_t *scheme_params, int hdr_idx) static int set_scheme_params(ioc_fm_pcd_kg_scheme_params_t *scheme_params, ioc_fm_pcd_net_env_params_t *dist_units, struct dpaa_if *dpaa_intf, - struct fman_if *fif __rte_unused) + struct fman_if *fif) { int dist_idx, hdr_idx = 0; PMD_INIT_FUNC_TRACE(); + if (fif->num_profiles) { + scheme_params->param.override_storage_profile = true; + scheme_params->param.storage_profile.direct = true; + scheme_params->param.storage_profile.profile_select + .direct_relative_profile_id = fm_default_vsp_id(fif); + } + scheme_params->param.use_hash = 1; scheme_params->param.modify = false; scheme_params->param.always_direct = false; @@ -784,6 +803,14 @@ int dpaa_fm_config(struct rte_eth_dev *dev, uint64_t req_dist_set) return -1; } + if (fif->num_profiles) { + for (i = 0; i < dpaa_intf->nb_rx_queues; i++) + dpaa_intf->rx_queues[i].vsp_id = + fm_default_vsp_id(fif); + + i = 0; + } + /* Set PCD netenv and scheme */ if (req_dist_set) { ret = set_pcd_netenv_scheme(dpaa_intf, req_dist_set, fif); @@ -909,3 +936,138 @@ int dpaa_fm_term(void) } return 0; } + +static int dpaa_port_vsp_configure(struct dpaa_if *dpaa_intf, + uint8_t vsp_id, t_handle fman_handle, + struct fman_if *fif) +{ + t_fm_vsp_params vsp_params; + t_fm_buffer_prefix_content buf_prefix_cont; + uint8_t mac_idx[] = {-1, 0, 1, 2, 3, 4, 5, 6, 7, 0, 1}; + uint8_t idx = mac_idx[fif->mac_idx]; + int ret; + + if (vsp_id == fif->base_profile_id && fif->is_shared_mac) { + /* For shared interface, VSP of base + * profile is default pool located in kernel. 
+ */ + dpaa_intf->vsp_bpid[vsp_id] = 0; + return 0; + } + + if (vsp_id >= DPAA_VSP_PROFILE_MAX_NUM) { + DPAA_PMD_ERR("VSP ID %d exceeds MAX number %d", + vsp_id, DPAA_VSP_PROFILE_MAX_NUM); + return -1; + } + + memset(&vsp_params, 0, sizeof(vsp_params)); + vsp_params.h_fm = fman_handle; + vsp_params.relative_profile_id = vsp_id; + vsp_params.port_params.port_id = idx; + if (fif->mac_type == fman_mac_1g) { + vsp_params.port_params.port_type = e_FM_PORT_TYPE_RX; + } else if (fif->mac_type == fman_mac_2_5g) { + vsp_params.port_params.port_type = e_FM_PORT_TYPE_RX_2_5G; + } else if (fif->mac_type == fman_mac_10g) { + vsp_params.port_params.port_type = e_FM_PORT_TYPE_RX_10G; + } else { + DPAA_PMD_ERR("Mac type %d error", fif->mac_type); + return -1; + } + vsp_params.ext_buf_pools.num_of_pools_used = 1; + vsp_params.ext_buf_pools.ext_buf_pool[0].id = + dpaa_intf->vsp_bpid[vsp_id]; + vsp_params.ext_buf_pools.ext_buf_pool[0].size = + RTE_MBUF_DEFAULT_BUF_SIZE; + + dpaa_intf->vsp_handle[vsp_id] = fm_vsp_config(&vsp_params); + if (!dpaa_intf->vsp_handle[vsp_id]) { + DPAA_PMD_ERR("fm_vsp_config error for profile %d", vsp_id); + return -EINVAL; + } + + /* configure the application buffer (structure, size and + * content) + */ + + memset(&buf_prefix_cont, 0, sizeof(buf_prefix_cont)); + + buf_prefix_cont.priv_data_size = 16; + buf_prefix_cont.data_align = 64; + buf_prefix_cont.pass_prs_result = true; + buf_prefix_cont.pass_time_stamp = true; + buf_prefix_cont.pass_hash_result = false; + buf_prefix_cont.pass_all_other_pcdinfo = false; + ret = fm_vsp_config_buffer_prefix_content(dpaa_intf->vsp_handle[vsp_id], + &buf_prefix_cont); + if (ret != E_OK) { + DPAA_PMD_ERR("fm_vsp_config_buffer_prefix_content error for profile %d err: %d", + vsp_id, ret); + return ret; + } + + /* initialize the FM VSP module */ + ret = fm_vsp_init(dpaa_intf->vsp_handle[vsp_id]); + if (ret != E_OK) { + DPAA_PMD_ERR("fm_vsp_init error for profile %d err:%d", + vsp_id, ret); + return ret; + } + + return 0; +} + +int dpaa_port_vsp_update(struct dpaa_if *dpaa_intf, + bool fmc_mode, uint8_t vsp_id, uint32_t bpid, + struct fman_if *fif) +{ + int ret = 0; + t_handle fman_handle; + + if (!fif->num_profiles) + return 0; + + if (vsp_id >= fif->num_profiles) + return 0; + + if (dpaa_intf->vsp_bpid[vsp_id] == bpid) + return 0; + + if (dpaa_intf->vsp_handle[vsp_id]) { + ret = fm_vsp_free(dpaa_intf->vsp_handle[vsp_id]); + if (ret != E_OK) { + DPAA_PMD_ERR("Error fm_vsp_free: err %d vsp_handle[%d]", + ret, vsp_id); + return ret; + } + dpaa_intf->vsp_handle[vsp_id] = 0; + } + + if (fmc_mode) + fman_handle = fm_open(0); + else + fman_handle = fm_info.fman_handle; + + dpaa_intf->vsp_bpid[vsp_id] = bpid; + + return dpaa_port_vsp_configure(dpaa_intf, vsp_id, fman_handle, fif); +} + +int dpaa_port_vsp_cleanup(struct dpaa_if *dpaa_intf, struct fman_if *fif) +{ + int idx, ret; + + for (idx = 0; idx < (uint8_t)fif->num_profiles; idx++) { + if (dpaa_intf->vsp_handle[idx]) { + ret = fm_vsp_free(dpaa_intf->vsp_handle[idx]); + if (ret != E_OK) { + DPAA_PMD_ERR("Error fm_vsp_free: err %d" + " vsp_handle[%d]", ret, idx); + return ret; + } + } + } + + return E_OK; +} diff --git a/drivers/net/dpaa/dpaa_flow.h b/drivers/net/dpaa/dpaa_flow.h index d16bfec21..f5e131acf 100644 --- a/drivers/net/dpaa/dpaa_flow.h +++ b/drivers/net/dpaa/dpaa_flow.h @@ -10,5 +10,10 @@ int dpaa_fm_term(void); int dpaa_fm_config(struct rte_eth_dev *dev, uint64_t req_dist_set); int dpaa_fm_deconfig(struct dpaa_if *dpaa_intf, struct fman_if *fif); void dpaa_write_fm_config_to_file(void); 
+int dpaa_port_vsp_update(struct dpaa_if *dpaa_intf, + bool fmc_mode, uint8_t vsp_id, uint32_t bpid, struct fman_if *fif); +int dpaa_port_vsp_cleanup(struct dpaa_if *dpaa_intf, struct fman_if *fif); +int dpaa_port_fmc_init(struct fman_if *fif, + uint32_t *fqids, int8_t *vspids, int max_nb_rxq); #endif
From patchwork Fri Sep 4 08:29:20 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Hemant Agrawal X-Patchwork-Id: 76540 Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id B0A8FA04B1; Fri, 4 Sep 2020 10:36:25 +0200 (CEST) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id AFC151C11E; Fri, 4 Sep 2020 10:35:43 +0200 (CEST) Received: from inva020.nxp.com (inva020.nxp.com [92.121.34.13]) by dpdk.org (Postfix) with ESMTP id D02BF1C0C0 for ; Fri, 4 Sep 2020 10:35:37 +0200 (CEST) Received: from inva020.nxp.com (localhost [127.0.0.1]) by inva020.eu-rdc02.nxp.com (Postfix) with ESMTP id B09561A0490; Fri, 4 Sep 2020 10:35:37 +0200 (CEST) Received: from invc005.ap-rdc01.nxp.com (invc005.ap-rdc01.nxp.com [165.114.16.14]) by inva020.eu-rdc02.nxp.com (Postfix) with ESMTP id A03A31A041E; Fri, 4 Sep 2020 10:35:35 +0200 (CEST) Received: from bf-netperf1.ap.freescale.net (bf-netperf1.ap.freescale.net [10.232.133.63]) by invc005.ap-rdc01.nxp.com (Postfix) with ESMTP id 93A454031D; Fri, 4 Sep 2020 10:35:33 +0200 (CEST) From: Hemant Agrawal To: dev@dpdk.org Cc: ferruh.yigit@intel.com Date: Fri, 4 Sep 2020 13:59:20 +0530 Message-Id: <20200904082921.17400-6-hemant.agrawal@nxp.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20200904082921.17400-1-hemant.agrawal@nxp.com> References: <20200901123650.29908-1-hemant.agrawal@nxp.com> <20200904082921.17400-1-hemant.agrawal@nxp.com> X-Virus-Scanned: ClamAV using ClamSMTP Subject: [dpdk-dev] [PATCH v7 6/7] net/dpaa: add fmc parser support for VSP X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev"
From: Jun Yang The FMC tool generates and saves the setup in a file. This patch parses the /tmp/fmc.bin generated by fmc to set up the RXQs for each port in fmc mode; a minimal sketch of how that file is read is shown below.
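As a rough illustration (assumptions: FMC_FILE, FMC_OUTPUT_FORMAT_VER and struct fmc_model_t are the definitions added in dpaa_fmc.c below; the helper name fmc_model_load is hypothetical and the snippet is a trimmed-down version of what dpaa_port_fmc_init() actually does, not a drop-in replacement):

static struct fmc_model_t *fmc_model_load(void)
{
	struct fmc_model_t *model;
	FILE *fp = fopen(FMC_FILE, "rb");

	if (!fp)
		return NULL;
	model = rte_malloc(NULL, sizeof(struct fmc_model_t), 64);
	if (!model) {
		fclose(fp);
		return NULL;
	}
	/* The whole fmc output is one flat binary dump of the model */
	if (fread(model, sizeof(struct fmc_model_t), 1, fp) != 1 ||
	    model->format_version != FMC_OUTPUT_FORMAT_VER) {
		/* Truncated file or produced by an incompatible fmc version */
		fclose(fp);
		rte_free(model);
		return NULL;
	}
	fclose(fp);
	return model;
}

After the model is loaded, dpaa_port_fmc_init() walks its apply_order[] entries, matches the port entries against the MAC index, and collects the scheme and ccnode information that applies to the port.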
The parser gets the fqids and vspids from fmc.bin Signed-off-by: Jun Yang Acked-by: Hemant Agrawal --- drivers/net/dpaa/dpaa_ethdev.c | 26 +- drivers/net/dpaa/dpaa_ethdev.h | 10 +- drivers/net/dpaa/dpaa_fmc.c | 475 +++++++++++++++++++++++++++++++++ drivers/net/dpaa/meson.build | 3 +- 4 files changed, 507 insertions(+), 7 deletions(-) create mode 100644 drivers/net/dpaa/dpaa_fmc.c diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c index 8e7eb9824..0ce2f5ae3 100644 --- a/drivers/net/dpaa/dpaa_ethdev.c +++ b/drivers/net/dpaa/dpaa_ethdev.c @@ -259,6 +259,16 @@ dpaa_eth_dev_configure(struct rte_eth_dev *dev) dev->data->scattered_rx = 1; } + if (!(default_q || fmc_q)) { + if (dpaa_fm_config(dev, + eth_conf->rx_adv_conf.rss_conf.rss_hf)) { + dpaa_write_fm_config_to_file(); + DPAA_PMD_ERR("FM port configuration: Failed\n"); + return -1; + } + dpaa_write_fm_config_to_file(); + } + /* if the interrupts were configured on this devices*/ if (intr_handle && intr_handle->fd) { if (dev->data->dev_conf.intr_conf.lsc != 0) @@ -334,6 +344,9 @@ static int dpaa_eth_dev_start(struct rte_eth_dev *dev) PMD_INIT_FUNC_TRACE(); + if (!(default_q || fmc_q)) + dpaa_write_fm_config_to_file(); + /* Change tx callback to the real one */ if (dpaa_intf->cgr_tx) dev->tx_pkt_burst = dpaa_eth_queue_tx_slow; @@ -1699,7 +1712,18 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev) if (default_q) { num_rx_fqs = DPAA_DEFAULT_NUM_PCD_QUEUES; } else if (fmc_q) { - num_rx_fqs = 1; + num_rx_fqs = dpaa_port_fmc_init(fman_intf, dev_rx_fqids, + dev_vspids, + DPAA_MAX_NUM_PCD_QUEUES); + if (num_rx_fqs < 0) { + DPAA_PMD_ERR("%s FMC initializes failed!", + dpaa_intf->name); + goto free_rx; + } + if (!num_rx_fqs) { + DPAA_PMD_WARN("%s is not configured by FMC.", + dpaa_intf->name); + } } else { /* FMCLESS mode, load balance to multiple cores.*/ num_rx_fqs = rte_lcore_count(); diff --git a/drivers/net/dpaa/dpaa_ethdev.h b/drivers/net/dpaa/dpaa_ethdev.h index dd182c4d5..1b8e120e8 100644 --- a/drivers/net/dpaa/dpaa_ethdev.h +++ b/drivers/net/dpaa/dpaa_ethdev.h @@ -59,10 +59,10 @@ #endif /* PCD frame queues */ -#define DPAA_PCD_FQID_START 0x400 -#define DPAA_PCD_FQID_MULTIPLIER 0x100 #define DPAA_DEFAULT_NUM_PCD_QUEUES 1 -#define DPAA_MAX_NUM_PCD_QUEUES 4 +#define DPAA_VSP_PROFILE_MAX_NUM 8 +#define DPAA_MAX_NUM_PCD_QUEUES DPAA_VSP_PROFILE_MAX_NUM +/*Same as VSP profile number*/ #define DPAA_IF_TX_PRIORITY 3 #define DPAA_IF_RX_PRIORITY 0 @@ -103,10 +103,10 @@ #define DPAA_FD_CMD_CFQ 0x00ffffff /**< Confirmation Frame Queue */ -#define DPAA_VSP_PROFILE_MAX_NUM 8 - #define DPAA_DEFAULT_RXQ_VSP_ID 1 +#define FMC_FILE "/tmp/fmc.bin" + /* Each network interface is represented by one of these */ struct dpaa_if { int valid; diff --git a/drivers/net/dpaa/dpaa_fmc.c b/drivers/net/dpaa/dpaa_fmc.c new file mode 100644 index 000000000..0ef362274 --- /dev/null +++ b/drivers/net/dpaa/dpaa_fmc.c @@ -0,0 +1,475 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright 2017-2020 NXP + */ + +/* System headers */ +#include +#include +#include +#include + +#include +#include +#include +#include +#include + +#define FMC_OUTPUT_FORMAT_VER 0x106 + +#define FMC_NAME_LEN 64 +#define FMC_FMAN_NUM 2 +#define FMC_PORTS_PER_FMAN 16 +#define FMC_SCHEMES_NUM 32 +#define FMC_SCHEME_PROTOCOLS_NUM 16 +#define FMC_CC_NODES_NUM 512 +#define FMC_REPLICATORS_NUM 16 +#define FMC_PLC_NUM 64 +#define MAX_SP_CODE_SIZE 0x7C0 +#define FMC_MANIP_MAX 64 +#define FMC_HMANIP_MAX 512 +#define FMC_INSERT_MAX 56 +#define FM_PCD_MAX_REPS 64 + +typedef struct fmc_port_t { + 
e_fm_port_type type; + unsigned int number; + struct fm_pcd_net_env_params_t distinction_units; + struct ioc_fm_port_pcd_params_t pcd_param; + struct ioc_fm_port_pcd_prs_params_t prs_param; + struct ioc_fm_port_pcd_kg_params_t kg_param; + struct ioc_fm_port_pcd_cc_params_t cc_param; + char name[FMC_NAME_LEN]; + char cctree_name[FMC_NAME_LEN]; + t_handle handle; + t_handle env_id_handle; + t_handle env_id_dev_id; + t_handle cctree_handle; + t_handle cctree_dev_id; + + unsigned int schemes_count; + unsigned int schemes[FMC_SCHEMES_NUM]; + unsigned int ccnodes_count; + unsigned int ccnodes[FMC_CC_NODES_NUM]; + unsigned int htnodes_count; + unsigned int htnodes[FMC_CC_NODES_NUM]; + + unsigned int replicators_count; + unsigned int replicators[FMC_REPLICATORS_NUM]; + ioc_fm_port_vsp_alloc_params_t vsp_param; + + unsigned int ccroot_count; + unsigned int ccroot[FMC_CC_NODES_NUM]; + enum ioc_fm_pcd_engine ccroot_type[FMC_CC_NODES_NUM]; + unsigned int ccroot_manip[FMC_CC_NODES_NUM]; + + unsigned int reasm_index; +} fmc_port; + +typedef struct fmc_fman_t { + unsigned int number; + unsigned int port_count; + unsigned int ports[FMC_PORTS_PER_FMAN]; + char name[FMC_NAME_LEN]; + t_handle handle; + char pcd_name[FMC_NAME_LEN]; + t_handle pcd_handle; + unsigned int kg_payload_offset; + + unsigned int offload_support; + + unsigned int reasm_count; + struct fm_pcd_manip_params_t reasm[FMC_MANIP_MAX]; + char reasm_name[FMC_MANIP_MAX][FMC_NAME_LEN]; + t_handle reasm_handle[FMC_MANIP_MAX]; + t_handle reasm_dev_id[FMC_MANIP_MAX]; + + unsigned int frag_count; + struct fm_pcd_manip_params_t frag[FMC_MANIP_MAX]; + char frag_name[FMC_MANIP_MAX][FMC_NAME_LEN]; + t_handle frag_handle[FMC_MANIP_MAX]; + t_handle frag_dev_id[FMC_MANIP_MAX]; + + unsigned int hdr_count; + struct fm_pcd_manip_params_t hdr[FMC_HMANIP_MAX]; + uint8_t insert_data[FMC_HMANIP_MAX][FMC_INSERT_MAX]; + char hdr_name[FMC_HMANIP_MAX][FMC_NAME_LEN]; + t_handle hdr_handle[FMC_HMANIP_MAX]; + t_handle hdr_dev_id[FMC_HMANIP_MAX]; + unsigned int hdr_has_next[FMC_HMANIP_MAX]; + unsigned int hdr_next[FMC_HMANIP_MAX]; +} fmc_fman; + +typedef enum fmc_apply_order_e { + fmcengine_start, + fmcengine_end, + fmcport_start, + fmcport_end, + fmcscheme, + fmcccnode, + fmchtnode, + fmccctree, + fmcpolicer, + fmcreplicator, + fmcmanipulation +} fmc_apply_order_e; + +typedef struct fmc_apply_order_t { + fmc_apply_order_e type; + unsigned int index; +} fmc_apply_order; + +struct fmc_model_t { + unsigned int format_version; + unsigned int sp_enable; + t_fm_pcd_prs_sw_params sp; + uint8_t spcode[MAX_SP_CODE_SIZE]; + + unsigned int fman_count; + fmc_fman fman[FMC_FMAN_NUM]; + + unsigned int port_count; + fmc_port port[FMC_FMAN_NUM * FMC_PORTS_PER_FMAN]; + + unsigned int scheme_count; + char scheme_name[FMC_SCHEMES_NUM][FMC_NAME_LEN]; + t_handle scheme_handle[FMC_SCHEMES_NUM]; + t_handle scheme_dev_id[FMC_SCHEMES_NUM]; + struct fm_pcd_kg_scheme_params_t scheme[FMC_SCHEMES_NUM]; + + unsigned int ccnode_count; + char ccnode_name[FMC_CC_NODES_NUM][FMC_NAME_LEN]; + t_handle ccnode_handle[FMC_CC_NODES_NUM]; + t_handle ccnode_dev_id[FMC_CC_NODES_NUM]; + struct fm_pcd_cc_node_params_t ccnode[FMC_CC_NODES_NUM]; + uint8_t cckeydata[FMC_CC_NODES_NUM][FM_PCD_MAX_NUM_OF_KEYS] + [FM_PCD_MAX_SIZE_OF_KEY]; + unsigned char ccmask[FMC_CC_NODES_NUM][FM_PCD_MAX_NUM_OF_KEYS] + [FM_PCD_MAX_SIZE_OF_KEY]; + unsigned int + ccentry_action_index[FMC_CC_NODES_NUM][FM_PCD_MAX_NUM_OF_KEYS]; + enum ioc_fm_pcd_engine + ccentry_action_type[FMC_CC_NODES_NUM][FM_PCD_MAX_NUM_OF_KEYS]; + unsigned char 
ccentry_frag[FMC_CC_NODES_NUM][FM_PCD_MAX_NUM_OF_KEYS]; + unsigned int ccentry_manip[FMC_CC_NODES_NUM][FM_PCD_MAX_NUM_OF_KEYS]; + unsigned int ccmiss_action_index[FMC_CC_NODES_NUM]; + enum ioc_fm_pcd_engine ccmiss_action_type[FMC_CC_NODES_NUM]; + unsigned char ccmiss_frag[FMC_CC_NODES_NUM]; + unsigned int ccmiss_manip[FMC_CC_NODES_NUM]; + + unsigned int htnode_count; + char htnode_name[FMC_CC_NODES_NUM][FMC_NAME_LEN]; + t_handle htnode_handle[FMC_CC_NODES_NUM]; + t_handle htnode_dev_id[FMC_CC_NODES_NUM]; + struct fm_pcd_hash_table_params_t htnode[FMC_CC_NODES_NUM]; + + unsigned int htentry_count[FMC_CC_NODES_NUM]; + struct ioc_fm_pcd_cc_key_params_t + htentry[FMC_CC_NODES_NUM][FM_PCD_MAX_NUM_OF_KEYS]; + uint8_t htkeydata[FMC_CC_NODES_NUM][FM_PCD_MAX_NUM_OF_KEYS] + [FM_PCD_MAX_SIZE_OF_KEY]; + unsigned int + htentry_action_index[FMC_CC_NODES_NUM][FM_PCD_MAX_NUM_OF_KEYS]; + enum ioc_fm_pcd_engine + htentry_action_type[FMC_CC_NODES_NUM][FM_PCD_MAX_NUM_OF_KEYS]; + unsigned char htentry_frag[FMC_CC_NODES_NUM][FM_PCD_MAX_NUM_OF_KEYS]; + unsigned int htentry_manip[FMC_CC_NODES_NUM][FM_PCD_MAX_NUM_OF_KEYS]; + + unsigned int htmiss_action_index[FMC_CC_NODES_NUM]; + enum ioc_fm_pcd_engine htmiss_action_type[FMC_CC_NODES_NUM]; + unsigned char htmiss_frag[FMC_CC_NODES_NUM]; + unsigned int htmiss_manip[FMC_CC_NODES_NUM]; + + unsigned int replicator_count; + char replicator_name[FMC_REPLICATORS_NUM][FMC_NAME_LEN]; + t_handle replicator_handle[FMC_REPLICATORS_NUM]; + t_handle replicator_dev_id[FMC_REPLICATORS_NUM]; + struct fm_pcd_frm_replic_group_params_t replicator[FMC_REPLICATORS_NUM]; + unsigned int + repentry_action_index[FMC_REPLICATORS_NUM][FM_PCD_MAX_REPS]; + unsigned char repentry_frag[FMC_REPLICATORS_NUM][FM_PCD_MAX_REPS]; + unsigned int repentry_manip[FMC_REPLICATORS_NUM][FM_PCD_MAX_REPS]; + + unsigned int policer_count; + char policer_name[FMC_PLC_NUM][FMC_NAME_LEN]; + struct fm_pcd_plcr_profile_params_t policer[FMC_PLC_NUM]; + t_handle policer_handle[FMC_PLC_NUM]; + t_handle policer_dev_id[FMC_PLC_NUM]; + unsigned int policer_action_index[FMC_PLC_NUM][3]; + + unsigned int apply_order_count; + fmc_apply_order apply_order[FMC_FMAN_NUM * + FMC_PORTS_PER_FMAN * + (FMC_SCHEMES_NUM + FMC_CC_NODES_NUM)]; +}; + +struct fmc_model_t *g_fmc_model; + +static int dpaa_port_fmc_port_parse(struct fman_if *fif, + const struct fmc_model_t *fmc_model, + int apply_idx) +{ + int current_port = fmc_model->apply_order[apply_idx].index; + const fmc_port *pport = &fmc_model->port[current_port]; + const uint8_t mac_idx[] = {-1, 0, 1, 2, 3, 4, 5, 6, 7, 0, 1}; + const uint8_t mac_type[] = {1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2}; + + if (mac_idx[fif->mac_idx] != pport->number || + mac_type[fif->mac_idx] != pport->type) + return -1; + + return current_port; +} + +static int dpaa_port_fmc_scheme_parse(struct fman_if *fif, + const struct fmc_model_t *fmc, + int apply_idx, + uint16_t *rxq_idx, int max_nb_rxq, + uint32_t *fqids, int8_t *vspids) +{ + int idx = fmc->apply_order[apply_idx].index; + uint32_t i; + + if (!fmc->scheme[idx].override_storage_profile && + fif->is_shared_mac) { + DPAA_PMD_WARN("No VSP assigned to scheme %d for sharemac %d!", + idx, fif->mac_idx); + DPAA_PMD_WARN("Risk to receive pkts from skb pool to CRASH!"); + } + + if (e_IOC_FM_PCD_DONE == + fmc->scheme[idx].next_engine) { + for (i = 0; i < fmc->scheme[idx] + .key_ext_and_hash.hash_dist_num_of_fqids; i++) { + uint32_t fqid = fmc->scheme[idx].base_fqid + i; + int k, found = 0; + + if (fqid == fif->fqid_rx_def) { + if (fif->is_shared_mac && + 
fmc->scheme[idx].override_storage_profile && + fmc->scheme[idx].storage_profile.direct && + fmc->scheme[idx].storage_profile + .profile_select.direct_relative_profile_id != + fif->base_profile_id) { + DPAA_PMD_ERR("Def RXQ must be associated with def VSP on sharemac!"); + + return -1; + } + continue; + } + + if (fif->is_shared_mac && + !fmc->scheme[idx].override_storage_profile) { + DPAA_PMD_ERR("RXQ to DPDK must be associated with VSP on sharemac!"); + return -1; + } + + if (fif->is_shared_mac && + fmc->scheme[idx].override_storage_profile && + fmc->scheme[idx].storage_profile.direct && + fmc->scheme[idx].storage_profile + .profile_select.direct_relative_profile_id == + fif->base_profile_id) { + DPAA_PMD_ERR("RXQ can't be associated with default VSP on sharemac!"); + + return -1; + } + + if ((*rxq_idx) >= max_nb_rxq) { + DPAA_PMD_DEBUG("Too many queues in FMC policy" + "%d overflow %d", + (*rxq_idx), max_nb_rxq); + + continue; + } + + for (k = 0; k < (*rxq_idx); k++) { + if (fqids[k] == fqid) { + found = 1; + break; + } + } + + if (found) + continue; + fqids[(*rxq_idx)] = fqid; + if (fmc->scheme[idx].override_storage_profile) { + if (fmc->scheme[idx].storage_profile.direct) { + vspids[(*rxq_idx)] = + fmc->scheme[idx].storage_profile + .profile_select + .direct_relative_profile_id; + } else { + vspids[(*rxq_idx)] = -1; + } + } else { + vspids[(*rxq_idx)] = -1; + } + (*rxq_idx)++; + } + } + + return 0; +} + +static int dpaa_port_fmc_ccnode_parse(struct fman_if *fif, + const struct fmc_model_t *fmc_model, + int apply_idx, + uint16_t *rxq_idx, int max_nb_rxq, + uint32_t *fqids, int8_t *vspids) +{ + uint16_t j, k, found = 0; + const struct ioc_keys_params_t *keys_params; + uint32_t fqid, cc_idx = fmc_model->apply_order[apply_idx].index; + + keys_params = &fmc_model->ccnode[cc_idx].keys_params; + + if ((*rxq_idx) >= max_nb_rxq) { + DPAA_PMD_WARN("Too many queues in FMC policy %d overflow %d", + (*rxq_idx), max_nb_rxq); + + return 0; + } + + for (j = 0; j < keys_params->num_of_keys; ++j) { + found = 0; + fqid = keys_params->key_params[j].cc_next_engine_params + .params.enqueue_params.new_fqid; + + if (keys_params->key_params[j].cc_next_engine_params + .next_engine != e_IOC_FM_PCD_DONE) { + DPAA_PMD_WARN("FMC CC next engine not support"); + continue; + } + if (keys_params->key_params[j].cc_next_engine_params + .params.enqueue_params.action != + e_IOC_FM_PCD_ENQ_FRAME) + continue; + for (k = 0; k < (*rxq_idx); k++) { + if (fqids[k] == fqid) { + found = 1; + break; + } + } + if (found) + continue; + + if ((*rxq_idx) >= max_nb_rxq) { + DPAA_PMD_WARN("Too many queues in FMC policy %d overflow %d", + (*rxq_idx), max_nb_rxq); + + return 0; + } + + fqids[(*rxq_idx)] = fqid; + vspids[(*rxq_idx)] = + keys_params->key_params[j].cc_next_engine_params + .params.enqueue_params + .new_relative_storage_profile_id; + + if (vspids[(*rxq_idx)] == fif->base_profile_id && + fif->is_shared_mac) { + DPAA_PMD_ERR("VSP %d can NOT be used on DPDK.", + vspids[(*rxq_idx)]); + DPAA_PMD_ERR("It is associated to skb pool of shared interface."); + return -1; + } + (*rxq_idx)++; + } + + return 0; +} + +int dpaa_port_fmc_init(struct fman_if *fif, + uint32_t *fqids, int8_t *vspids, int max_nb_rxq) +{ + int current_port = -1, ret; + uint16_t rxq_idx = 0; + const struct fmc_model_t *fmc_model; + uint32_t i; + + if (!g_fmc_model) { + size_t bytes_read; + FILE *fp = fopen(FMC_FILE, "rb"); + + if (!fp) { + DPAA_PMD_ERR("%s not exists", FMC_FILE); + return -1; + } + + g_fmc_model = rte_malloc(NULL, sizeof(struct fmc_model_t), 64); + if 
(!g_fmc_model) { + DPAA_PMD_ERR("FMC memory alloc failed"); + fclose(fp); + return -1; + } + + bytes_read = fread(g_fmc_model, + sizeof(struct fmc_model_t), 1, fp); + if (!bytes_read) { + DPAA_PMD_ERR("No bytes read"); + fclose(fp); + rte_free(g_fmc_model); + g_fmc_model = NULL; + return -1; + } + fclose(fp); + } + + fmc_model = g_fmc_model; + + if (fmc_model->format_version != FMC_OUTPUT_FORMAT_VER) + return -1; + + for (i = 0; i < fmc_model->apply_order_count; i++) { + switch (fmc_model->apply_order[i].type) { + case fmcengine_start: + break; + case fmcengine_end: + break; + case fmcport_start: + current_port = dpaa_port_fmc_port_parse(fif, + fmc_model, i); + break; + case fmcport_end: + break; + case fmcscheme: + if (current_port < 0) + break; + + ret = dpaa_port_fmc_scheme_parse(fif, fmc_model, + i, &rxq_idx, + max_nb_rxq, + fqids, vspids); + if (ret) + return ret; + + break; + case fmcccnode: + if (current_port < 0) + break; + + ret = dpaa_port_fmc_ccnode_parse(fif, fmc_model, + i, &rxq_idx, + max_nb_rxq, fqids, + vspids); + if (ret) + return ret; + + break; + case fmchtnode: + break; + case fmcreplicator: + break; + case fmccctree: + break; + case fmcpolicer: + break; + case fmcmanipulation: + break; + default: + break; + } + } + + return rxq_idx; +} diff --git a/drivers/net/dpaa/meson.build b/drivers/net/dpaa/meson.build index 51f1f32a1..c00dba6f6 100644 --- a/drivers/net/dpaa/meson.build +++ b/drivers/net/dpaa/meson.build @@ -11,7 +11,8 @@ sources = files('dpaa_ethdev.c', 'fmlib/fm_lib.c', 'fmlib/fm_vsp.c', 'dpaa_flow.c', - 'dpaa_rxtx.c') + 'dpaa_rxtx.c', + 'dpaa_fmc.c') if cc.has_argument('-Wno-pointer-arith') cflags += '-Wno-pointer-arith' From patchwork Fri Sep 4 08:29:21 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Hemant Agrawal X-Patchwork-Id: 76541 Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id 78ADBA04B1; Fri, 4 Sep 2020 10:36:35 +0200 (CEST) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id C83661C121; Fri, 4 Sep 2020 10:35:44 +0200 (CEST) Received: from inva021.nxp.com (inva021.nxp.com [92.121.34.21]) by dpdk.org (Postfix) with ESMTP id E3BAA1C0C6 for ; Fri, 4 Sep 2020 10:35:37 +0200 (CEST) Received: from inva021.nxp.com (localhost [127.0.0.1]) by inva021.eu-rdc02.nxp.com (Postfix) with ESMTP id BFA3720199B; Fri, 4 Sep 2020 10:35:37 +0200 (CEST) Received: from invc005.ap-rdc01.nxp.com (invc005.ap-rdc01.nxp.com [165.114.16.14]) by inva021.eu-rdc02.nxp.com (Postfix) with ESMTP id 3410A201997; Fri, 4 Sep 2020 10:35:36 +0200 (CEST) Received: from bf-netperf1.ap.freescale.net (bf-netperf1.ap.freescale.net [10.232.133.63]) by invc005.ap-rdc01.nxp.com (Postfix) with ESMTP id 278F74024E; Fri, 4 Sep 2020 10:35:34 +0200 (CEST) From: Hemant Agrawal To: dev@dpdk.org Cc: ferruh.yigit@intel.com Date: Fri, 4 Sep 2020 13:59:21 +0530 Message-Id: <20200904082921.17400-7-hemant.agrawal@nxp.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20200904082921.17400-1-hemant.agrawal@nxp.com> References: <20200901123650.29908-1-hemant.agrawal@nxp.com> <20200904082921.17400-1-hemant.agrawal@nxp.com> X-Virus-Scanned: ClamAV using ClamSMTP Subject: [dpdk-dev] [PATCH v7 7/7] net/dpaa: add RSS update func with FMCless X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , 
List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Sachin Saxena With fmlib (FMCLESS) mode now RSS can be modified on runtime. This patch add support for RSS update functions Signed-off-by: Hemant Agrawal Signed-off-by: Sachin Saxena --- drivers/net/dpaa/dpaa_ethdev.c | 37 ++++++++++++++++++++++++++++++++++ 1 file changed, 37 insertions(+) diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c index 0ce2f5ae3..b0f2023e6 100644 --- a/drivers/net/dpaa/dpaa_ethdev.c +++ b/drivers/net/dpaa/dpaa_ethdev.c @@ -1303,6 +1303,41 @@ dpaa_dev_set_mac_addr(struct rte_eth_dev *dev, return ret; } +static int +dpaa_dev_rss_hash_update(struct rte_eth_dev *dev, + struct rte_eth_rss_conf *rss_conf) +{ + struct rte_eth_dev_data *data = dev->data; + struct rte_eth_conf *eth_conf = &data->dev_conf; + + PMD_INIT_FUNC_TRACE(); + + if (!(default_q || fmc_q)) { + if (dpaa_fm_config(dev, rss_conf->rss_hf)) { + DPAA_PMD_ERR("FM port configuration: Failed\n"); + return -1; + } + eth_conf->rx_adv_conf.rss_conf.rss_hf = rss_conf->rss_hf; + } else { + DPAA_PMD_ERR("Function not supported\n"); + return -ENOTSUP; + } + return 0; +} + +static int +dpaa_dev_rss_hash_conf_get(struct rte_eth_dev *dev, + struct rte_eth_rss_conf *rss_conf) +{ + struct rte_eth_dev_data *data = dev->data; + struct rte_eth_conf *eth_conf = &data->dev_conf; + + /* dpaa does not support rss_key, so length should be 0*/ + rss_conf->rss_key_len = 0; + rss_conf->rss_hf = eth_conf->rx_adv_conf.rss_conf.rss_hf; + return 0; +} + static int dpaa_dev_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id) { @@ -1418,6 +1453,8 @@ static struct eth_dev_ops dpaa_devops = { .rx_queue_intr_enable = dpaa_dev_queue_intr_enable, .rx_queue_intr_disable = dpaa_dev_queue_intr_disable, + .rss_hash_update = dpaa_dev_rss_hash_update, + .rss_hash_conf_get = dpaa_dev_rss_hash_conf_get, }; static bool