From patchwork Thu Feb 11 14:16:01 2021
X-Patchwork-Submitter: Hemant Agrawal
X-Patchwork-Id: 87856
From: Hemant Agrawal
To: dev@dpdk.org, ferruh.yigit@intel.com
Date: Thu, 11 Feb 2021 19:46:01 +0530
Message-Id: <20210211141620.12482-2-hemant.agrawal@nxp.com>
In-Reply-To: <20210211141620.12482-1-hemant.agrawal@nxp.com>
References: <20210120142723.14090-1-hemant.agrawal@nxp.com>
 <20210211141620.12482-1-hemant.agrawal@nxp.com>
List-Id: DPDK patches and discussions
Subject: [dpdk-dev] [PATCH v2 01/20] bus/fslmc: fix to use ci value for qbman 5.0

From: Youri Querry

Random portal hangs were observed on devices with QBMAN 5.0. This fixes
a few random packet hang issues in event mode. Two things fixed it:
1. Generally pi == ci, so there is no need for the extra checks.
2. The proper initialization with ci in the init code helped.

Fixes: 1b49352f41be ("bus/fslmc: rename portal pi index to consumer index")
Cc: stable@dpdk.org

Signed-off-by: Youri Querry
Acked-by: Hemant Agrawal
---
 drivers/bus/fslmc/qbman/qbman_portal.c | 14 +++-----------
 1 file changed, 3 insertions(+), 11 deletions(-)

diff --git a/drivers/bus/fslmc/qbman/qbman_portal.c b/drivers/bus/fslmc/qbman/qbman_portal.c
index 77c9d508c4..aedcad9258 100644
--- a/drivers/bus/fslmc/qbman/qbman_portal.c
+++ b/drivers/bus/fslmc/qbman/qbman_portal.c
@@ -339,17 +339,9 @@ struct qbman_swp *qbman_swp_init(const struct qbman_swp_desc *d)
 	eqcr_pi = qbman_cinh_read(&p->sys, QBMAN_CINH_SWP_EQCR_PI);
 	p->eqcr.pi = eqcr_pi & p->eqcr.pi_ci_mask;
 	p->eqcr.pi_vb = eqcr_pi & QB_VALID_BIT;
-	if ((p->desc.qman_version & QMAN_REV_MASK) >= QMAN_REV_5000
-			&& (d->cena_access_mode == qman_cena_fastest_access))
-		p->eqcr.ci = qbman_cinh_read(&p->sys, QBMAN_CINH_SWP_EQCR_PI)
-				& p->eqcr.pi_ci_mask;
-	else
-		p->eqcr.ci = qbman_cinh_read(&p->sys, QBMAN_CINH_SWP_EQCR_CI)
-				& p->eqcr.pi_ci_mask;
-	p->eqcr.available = p->eqcr.pi_ring_size -
-			qm_cyc_diff(p->eqcr.pi_ring_size,
-			p->eqcr.ci & (p->eqcr.pi_ci_mask<<1),
-			p->eqcr.pi & (p->eqcr.pi_ci_mask<<1));
+	p->eqcr.ci = qbman_cinh_read(&p->sys, QBMAN_CINH_SWP_EQCR_CI)
+			& p->eqcr.pi_ci_mask;
+	p->eqcr.available = p->eqcr.pi_ring_size;

 	portal_idx_map[p->desc.idx] = p;
 	return p;

From patchwork Thu Feb 11 14:16:02 2021
X-Patchwork-Submitter: Hemant Agrawal
X-Patchwork-Id: 87857
From: Hemant Agrawal
To: dev@dpdk.org, ferruh.yigit@intel.com
Date: Thu, 11 Feb 2021 19:46:02 +0530
Message-Id: <20210211141620.12482-3-hemant.agrawal@nxp.com>
In-Reply-To: <20210211141620.12482-1-hemant.agrawal@nxp.com>
References: <20210120142723.14090-1-hemant.agrawal@nxp.com>
 <20210211141620.12482-1-hemant.agrawal@nxp.com>
Subject: [dpdk-dev] [PATCH v2 02/20] bus/dpaa: fix statistics reading

From: Nipun Gupta

Reading word-unaligned values after reading word-aligned values leads to
memory corruption, such that the older value is read.
This patch makes changes such that a word-aligned access is made before
making an unaligned access.

Fixes: 6d6b4f49a155 ("bus/dpaa: add FMAN hardware operations")
Cc: stable@dpdk.org

Signed-off-by: Nipun Gupta
---
 drivers/bus/dpaa/base/fman/fman_hw.c | 33 ++++++++++++++--------------
 1 file changed, 16 insertions(+), 17 deletions(-)

diff --git a/drivers/bus/dpaa/base/fman/fman_hw.c b/drivers/bus/dpaa/base/fman/fman_hw.c
index 4ab49f7853..af9bac76c2 100644
--- a/drivers/bus/dpaa/base/fman/fman_hw.c
+++ b/drivers/bus/dpaa/base/fman/fman_hw.c
@@ -1,6 +1,6 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  *
- * Copyright 2017 NXP
+ * Copyright 2017,2020 NXP
  *
  */
@@ -219,20 +219,20 @@ fman_if_stats_get(struct fman_if *p, struct rte_eth_stats *stats)
 	struct memac_regs *regs = m->ccsr_map;

 	/* read recved packet count */
-	stats->ipackets = ((u64)in_be32(&regs->rfrm_u)) << 32 |
-			in_be32(&regs->rfrm_l);
-	stats->ibytes = ((u64)in_be32(&regs->roct_u)) << 32 |
-			in_be32(&regs->roct_l);
-	stats->ierrors = ((u64)in_be32(&regs->rerr_u)) << 32 |
-			in_be32(&regs->rerr_l);
+	stats->ipackets = (u64)in_be32(&regs->rfrm_l) |
+			((u64)in_be32(&regs->rfrm_u)) << 32;
+	stats->ibytes = (u64)in_be32(&regs->roct_l) |
+			((u64)in_be32(&regs->roct_u)) << 32;
+	stats->ierrors = (u64)in_be32(&regs->rerr_l) |
+			((u64)in_be32(&regs->rerr_u)) << 32;

 	/* read xmited packet count */
-	stats->opackets = ((u64)in_be32(&regs->tfrm_u)) << 32 |
-			in_be32(&regs->tfrm_l);
-	stats->obytes = ((u64)in_be32(&regs->toct_u)) << 32 |
-			in_be32(&regs->toct_l);
-	stats->oerrors = ((u64)in_be32(&regs->terr_u)) << 32 |
-			in_be32(&regs->terr_l);
+	stats->opackets = (u64)in_be32(&regs->tfrm_l) |
+			((u64)in_be32(&regs->tfrm_u)) << 32;
+	stats->obytes = (u64)in_be32(&regs->toct_l) |
+			((u64)in_be32(&regs->toct_u)) << 32;
+	stats->oerrors = (u64)in_be32(&regs->terr_l) |
+			((u64)in_be32(&regs->terr_u)) << 32;
 }

 void
@@ -244,10 +244,9 @@ fman_if_stats_get_all(struct fman_if *p, uint64_t *value, int n)
 	uint64_t base_offset = offsetof(struct memac_regs, reoct_l);

 	for (i = 0; i < n; i++)
-		value[i] = ((u64)in_be32((char *)regs
-				+ base_offset + 8 * i + 4)) << 32 |
-				((u64)in_be32((char *)regs
-				+ base_offset + 8 * i));
+		value[i] = (((u64)in_be32((char *)regs + base_offset + 8 * i) |
+				(u64)in_be32((char *)regs + base_offset +
+				8 * i + 4)) << 32);
 }

 void

From patchwork Thu Feb 11 14:16:03 2021
X-Patchwork-Submitter: Hemant Agrawal
X-Patchwork-Id: 87858
From: Hemant Agrawal
To: dev@dpdk.org, ferruh.yigit@intel.com
Date: Thu, 11 Feb 2021 19:46:03 +0530
Message-Id: <20210211141620.12482-4-hemant.agrawal@nxp.com>
In-Reply-To: <20210211141620.12482-1-hemant.agrawal@nxp.com>
References: <20210120142723.14090-1-hemant.agrawal@nxp.com>
 <20210211141620.12482-1-hemant.agrawal@nxp.com>
Subject: [dpdk-dev] [PATCH v2 03/20] net/dpaa2: fix link get API implementation

From: Rohit Raj

According to the DPDK documentation, the rte_eth_link_get API can wait
up to 9 seconds for auto-negotiation to finish and then return the link
status.

The current implementation of rte_eth_link_get in the DPAA2 driver did
not wait for auto-negotiation to finish and returned link status DOWN.
This can cause issues with DPDK applications which rely on
rte_eth_link_get for link status and do not support the link status
interrupt. A similar issue was seen in the TRex application.

This patch fixes the bug by waiting for up to 9 seconds for
auto-negotiation to finish.

Fixes: c56c86ff87c1 ("net/dpaa2: update link status")
Cc: stable@dpdk.org

Signed-off-by: Rohit Raj
Acked-by: Hemant Agrawal
---
 drivers/net/dpaa2/dpaa2_ethdev.c | 21 ++++++++++++++++-----
 1 file changed, 16 insertions(+), 5 deletions(-)

diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index 38774e255b..a81c73438e 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -31,6 +31,8 @@
 #define DRIVER_LOOPBACK_MODE "drv_loopback"
 #define DRIVER_NO_PREFETCH_MODE "drv_no_prefetch"
+#define CHECK_INTERVAL		100  /* 100ms */
+#define MAX_REPEAT_TIME		90   /* 9s (90 * 100ms) in total */

 /* Supported Rx offloads */
 static uint64_t dev_rx_offloads_sup =
@@ -1805,23 +1807,32 @@ dpaa2_dev_stats_reset(struct rte_eth_dev *dev)
 /* return 0 means link status changed, -1 means not changed */
 static int
 dpaa2_dev_link_update(struct rte_eth_dev *dev,
-		      int wait_to_complete __rte_unused)
+		      int wait_to_complete)
 {
 	int ret;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
 	struct fsl_mc_io *dpni = (struct fsl_mc_io *)dev->process_private;
 	struct rte_eth_link link;
 	struct dpni_link_state state = {0};
+	uint8_t count;

 	if (dpni == NULL) {
 		DPAA2_PMD_ERR("dpni is NULL");
 		return 0;
 	}

-	ret = dpni_get_link_state(dpni, CMD_PRI_LOW, priv->token, &state);
-	if (ret < 0) {
-		DPAA2_PMD_DEBUG("error: dpni_get_link_state %d", ret);
-		return -1;
+	for (count = 0; count <= MAX_REPEAT_TIME; count++) {
+		ret = dpni_get_link_state(dpni, CMD_PRI_LOW, priv->token,
+					  &state);
+		if (ret < 0) {
+			DPAA2_PMD_DEBUG("error: dpni_get_link_state %d", ret);
+			return -1;
+		}
+		if (state.up == ETH_LINK_DOWN &&
+		    wait_to_complete)
+			rte_delay_ms(CHECK_INTERVAL);
+		else
+			break;
 	}

 	memset(&link, 0, sizeof(struct rte_eth_link));

From patchwork Thu Feb 11 14:16:04 2021
X-Patchwork-Submitter: Hemant Agrawal
X-Patchwork-Id: 87859
From: Hemant Agrawal
To: dev@dpdk.org, ferruh.yigit@intel.com
Date: Thu, 11 Feb 2021 19:46:04 +0530
Message-Id: <20210211141620.12482-5-hemant.agrawal@nxp.com>
In-Reply-To: <20210211141620.12482-1-hemant.agrawal@nxp.com>
References: <20210120142723.14090-1-hemant.agrawal@nxp.com>
 <20210211141620.12482-1-hemant.agrawal@nxp.com>
Subject: [dpdk-dev] [PATCH v2 04/20] net/dpaa: fix link get API implementation

From: Rohit Raj

According to the DPDK documentation, the rte_eth_link_get API can wait
up to 9 seconds for auto-negotiation to finish and then return the link
status.

The current implementation of rte_eth_link_get in the DPAA driver did
not wait for auto-negotiation to finish and returned link status DOWN.
This can cause issues with DPDK applications which rely on
rte_eth_link_get for link status and do not support the link status
interrupt.

This patch fixes the bug by waiting for up to 9 seconds for
auto-negotiation to finish.
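The bounded-poll scheme described in patches 03 and 04 can be sketched outside the driver. In the sketch below, `read_link_status` and `delay_ms` are illustrative stand-ins for `dpaa_get_link_status()`/`dpni_get_link_state()` and `rte_delay_ms()`; they are not DPDK APIs:

```c
#include <stdint.h>

#define CHECK_INTERVAL  100 /* ms per poll */
#define MAX_REPEAT_TIME 90  /* 90 polls * 100ms = 9s, as in the patch */

static int polls_until_up = 3;             /* pretend link comes up on the 4th read */
static int read_link_status(void) { return polls_until_up-- <= 0; }
static void delay_ms(int ms) { (void)ms; } /* no-op stand-in for rte_delay_ms */

/* Poll the link until it reports up or ~9s elapse; mirrors the patch's
 * loop shape. When wait_to_complete is 0, only a single read is done,
 * preserving the old non-waiting behaviour. */
static int link_update(int wait_to_complete)
{
	int status = 0;
	for (int count = 0; count <= MAX_REPEAT_TIME; count++) {
		status = read_link_status();
		if (status == 0 && wait_to_complete)
			delay_ms(CHECK_INTERVAL);
		else
			break;
	}
	return status;
}
```

With `wait_to_complete` set, the loop returns as soon as the link reports up; in the worst case it performs 91 reads spaced 100ms apart, matching the documented 9-second bound.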
Fixes: 2aa10990a8dd ("bus/dpaa: enable link state interrupt")
Cc: stable@dpdk.org

Signed-off-by: Rohit Raj
Acked-by: Hemant Agrawal
---
 drivers/net/dpaa/dpaa_ethdev.c | 20 +++++++++++++++-----
 1 file changed, 15 insertions(+), 5 deletions(-)

diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index d643514de6..c59873dd8a 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -49,6 +49,9 @@
 #include
 #include

+#define CHECK_INTERVAL		100  /* 100ms */
+#define MAX_REPEAT_TIME		90   /* 9s (90 * 100ms) in total */
+
 /* Supported Rx offloads */
 static uint64_t dev_rx_offloads_sup =
 		DEV_RX_OFFLOAD_JUMBO_FRAME |
@@ -669,23 +672,30 @@ dpaa_dev_tx_burst_mode_get(struct rte_eth_dev *dev,
 }

 static int dpaa_eth_link_update(struct rte_eth_dev *dev,
-				int wait_to_complete __rte_unused)
+				int wait_to_complete)
 {
 	struct dpaa_if *dpaa_intf = dev->data->dev_private;
 	struct rte_eth_link *link = &dev->data->dev_link;
 	struct fman_if *fif = dev->process_private;
 	struct __fman_if *__fif = container_of(fif, struct __fman_if, __if);
 	int ret, ioctl_version;
+	uint8_t count;

 	PMD_INIT_FUNC_TRACE();

 	ioctl_version = dpaa_get_ioctl_version_number();
-
 	if (dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC) {
-		ret = dpaa_get_link_status(__fif->node_name, link);
-		if (ret)
-			return ret;
+		for (count = 0; count <= MAX_REPEAT_TIME; count++) {
+			ret = dpaa_get_link_status(__fif->node_name, link);
+			if (ret)
+				return ret;
+			if (link->link_status == ETH_LINK_DOWN &&
+			    wait_to_complete)
+				rte_delay_ms(CHECK_INTERVAL);
+			else
+				break;
+		}
 	} else {
 		link->link_status = dpaa_intf->valid;
 	}

From patchwork Thu Feb 11 14:16:05 2021
X-Patchwork-Submitter: Hemant Agrawal
X-Patchwork-Id: 87861
From: Hemant Agrawal
To: dev@dpdk.org, ferruh.yigit@intel.com
Date: Thu, 11 Feb 2021 19:46:05 +0530
Message-Id: <20210211141620.12482-6-hemant.agrawal@nxp.com>
In-Reply-To: <20210211141620.12482-1-hemant.agrawal@nxp.com>
References: <20210120142723.14090-1-hemant.agrawal@nxp.com>
 <20210211141620.12482-1-hemant.agrawal@nxp.com>
Subject: [dpdk-dev] [PATCH v2 05/20] net/dpaa2: allocate SGT table from first segment

This patch enables use of the first segment's headroom, if space is
available, to build the Scatter Gather Table required by the hardware.
This avoids allocating one extra buffer for SG frame creation.
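The headroom test this patch adds boils down to simple arithmetic; the sketch below assumes a 16-byte SG entry for illustration, whereas the real code uses `sizeof(struct qbman_sge)` plus a 0x64-byte annotation offset when IEEE1588 timestamping is compiled in:

```c
#include <stdint.h>

#define SGE_SIZE 16 /* assumed size of one HW scatter-gather entry */

/* Returns 1 when an SGT with nb_segs entries (plus any annotation
 * offset reserved at the buffer start) fits in the headroom before
 * the packet data, which is the condition the patch checks before
 * reusing the first segment instead of allocating a new buffer. */
static int sgt_fits_in_headroom(uint16_t data_off, uint16_t nb_segs,
				uint16_t annotation_offset)
{
	return data_off > (uint32_t)nb_segs * SGE_SIZE + annotation_offset;
}
```

When the check fails (small headroom, or many segments), the code falls back to the old path of allocating a separate buffer for the table.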
Signed-off-by: Sachin Saxena
Signed-off-by: Hemant Agrawal
---
 drivers/bus/fslmc/portal/dpaa2_hw_pvt.h |  5 ++
 drivers/net/dpaa2/dpaa2_rxtx.c          | 65 +++++++++++++++++++------
 2 files changed, 54 insertions(+), 16 deletions(-)

diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h b/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
index ac24f01451..0f15750b6c 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
@@ -330,6 +330,11 @@ enum qbman_fd_format {
 } while (0)
 #define DPAA2_FD_GET_FORMAT(fd)	(((fd)->simple.bpid_offset >> 28) & 0x3)

+#define DPAA2_SG_SET_FORMAT(sg, format)	do {			\
+	(sg)->fin_bpid_offset &= 0xCFFFFFFF;			\
+	(sg)->fin_bpid_offset |= (uint32_t)format << 28;	\
+} while (0)
+
 #define DPAA2_SG_SET_FINAL(sg, fin)	do {			\
 	(sg)->fin_bpid_offset &= 0x7FFFFFFF;			\
 	(sg)->fin_bpid_offset |= (uint32_t)fin << 31;		\

diff --git a/drivers/net/dpaa2/dpaa2_rxtx.c b/drivers/net/dpaa2/dpaa2_rxtx.c
index d6c8b31a8c..c11ed0ee61 100644
--- a/drivers/net/dpaa2/dpaa2_rxtx.c
+++ b/drivers/net/dpaa2/dpaa2_rxtx.c
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  *
  * Copyright (c) 2016 Freescale Semiconductor, Inc. All rights reserved.
- * Copyright 2016-2020 NXP
+ * Copyright 2016-2021 NXP
  *
  */
@@ -377,25 +377,47 @@ eth_fd_to_mbuf(const struct qbman_fd *fd,
 static int __rte_noinline __rte_hot
 eth_mbuf_to_sg_fd(struct rte_mbuf *mbuf,
-		  struct qbman_fd *fd, uint16_t bpid)
+		  struct qbman_fd *fd,
+		  struct rte_mempool *mp, uint16_t bpid)
 {
 	struct rte_mbuf *cur_seg = mbuf, *prev_seg, *mi, *temp;
 	struct qbman_sge *sgt, *sge = NULL;
-	int i;
+	int i, offset = 0;

-	temp = rte_pktmbuf_alloc(mbuf->pool);
-	if (temp == NULL) {
-		DPAA2_PMD_DP_DEBUG("No memory to allocate S/G table\n");
-		return -ENOMEM;
+#ifdef RTE_LIBRTE_IEEE1588
+	/* annotation area for timestamp in first buffer */
+	offset = 0x64;
+#endif
+	if (RTE_MBUF_DIRECT(mbuf) &&
+	    (mbuf->data_off > (mbuf->nb_segs * sizeof(struct qbman_sge) +
+	    offset))) {
+		temp = mbuf;
+		if (rte_mbuf_refcnt_read(temp) > 1) {
+			/* If refcnt > 1, invalid bpid is set to ensure
+			 * buffer is not freed by HW
+			 */
+			fd->simple.bpid_offset = 0;
+			DPAA2_SET_FD_IVP(fd);
+			rte_mbuf_refcnt_update(temp, -1);
+		} else {
+			DPAA2_SET_ONLY_FD_BPID(fd, bpid);
+		}
+		DPAA2_SET_FD_OFFSET(fd, offset);
+	} else {
+		temp = rte_pktmbuf_alloc(mp);
+		if (temp == NULL) {
+			DPAA2_PMD_DP_DEBUG("No memory to allocate S/G table\n");
+			return -ENOMEM;
+		}
+		DPAA2_SET_ONLY_FD_BPID(fd, bpid);
+		DPAA2_SET_FD_OFFSET(fd, temp->data_off);
 	}
-
 	DPAA2_SET_FD_ADDR(fd, DPAA2_MBUF_VADDR_TO_IOVA(temp));
 	DPAA2_SET_FD_LEN(fd, mbuf->pkt_len);
-	DPAA2_SET_ONLY_FD_BPID(fd, bpid);
-	DPAA2_SET_FD_OFFSET(fd, temp->data_off);
 	DPAA2_FD_SET_FORMAT(fd, qbman_fd_sg);
 	DPAA2_RESET_FD_FRC(fd);
 	DPAA2_RESET_FD_CTRL(fd);
+	DPAA2_RESET_FD_FLC(fd);

 	/*Set Scatter gather table and Scatter gather entries*/
 	sgt = (struct qbman_sge *)(
 		(size_t)DPAA2_IOVA_TO_VADDR(DPAA2_GET_FD_ADDR(fd))
@@ -409,15 +431,24 @@ eth_mbuf_to_sg_fd(struct rte_mbuf *mbuf,
 		DPAA2_SET_FLE_OFFSET(sge, cur_seg->data_off);
 		sge->length = cur_seg->data_len;
 		if (RTE_MBUF_DIRECT(cur_seg)) {
-			if (rte_mbuf_refcnt_read(cur_seg) > 1) {
+			/* if we are using inline SGT in same buffers
+			 * set the FLE FMT as Frame Data Section
+			 */
+			if (temp == cur_seg) {
+				DPAA2_SG_SET_FORMAT(sge, qbman_fd_list);
+				DPAA2_SET_FLE_IVP(sge);
+			} else {
+				if (rte_mbuf_refcnt_read(cur_seg) > 1) {
 				/* If refcnt > 1, invalid bpid is set to ensure
 				 * buffer is not freed by HW
 				 */
-				DPAA2_SET_FLE_IVP(sge);
-				rte_mbuf_refcnt_update(cur_seg, -1);
-			} else
-				DPAA2_SET_FLE_BPID(sge,
+					DPAA2_SET_FLE_IVP(sge);
+					rte_mbuf_refcnt_update(cur_seg, -1);
+				} else {
+					DPAA2_SET_FLE_BPID(sge,
 						mempool_to_bpid(cur_seg->pool));
+				}
+			}
 			cur_seg = cur_seg->next;
 		} else {
 			/* Get owner MBUF from indirect buffer */
@@ -1152,7 +1183,8 @@ dpaa2_dev_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 			bpid = mempool_to_bpid(mp);
 			if (unlikely((*bufs)->nb_segs > 1)) {
 				if (eth_mbuf_to_sg_fd(*bufs,
-						&fd_arr[loop], bpid))
+						&fd_arr[loop],
+						mp, bpid))
 					goto send_n_return;
 			} else {
 				eth_mbuf_to_fd(*bufs,
@@ -1409,6 +1441,7 @@ dpaa2_dev_tx_ordered(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 			if (unlikely((*bufs)->nb_segs > 1)) {
 				if (eth_mbuf_to_sg_fd(*bufs,
 						      &fd_arr[loop],
+						      mp,
 						      bpid))
 					goto send_n_return;
 			} else {

From patchwork Thu Feb 11 14:16:06 2021
X-Patchwork-Submitter: Hemant Agrawal
X-Patchwork-Id: 87860
From: Hemant Agrawal
To: dev@dpdk.org, ferruh.yigit@intel.com
Date: Thu, 11 Feb 2021 19:46:06 +0530
Message-Id: <20210211141620.12482-7-hemant.agrawal@nxp.com>
In-Reply-To: <20210211141620.12482-1-hemant.agrawal@nxp.com>
References: <20210120142723.14090-1-hemant.agrawal@nxp.com>
 <20210211141620.12482-1-hemant.agrawal@nxp.com>
Subject: [dpdk-dev] [PATCH v2 06/20] net/dpaa2: support external buffers in Tx

From: Nipun Gupta

This patch supports Tx of externally allocated buffers.
Signed-off-by: Nipun Gupta Signed-off-by: Sachin Saxena Acked-by: Hemant Agrawal --- drivers/net/dpaa2/dpaa2_rxtx.c | 42 ++++++++++++++++++++++++++++++++++ 1 file changed, 42 insertions(+) diff --git a/drivers/net/dpaa2/dpaa2_rxtx.c b/drivers/net/dpaa2/dpaa2_rxtx.c index c11ed0ee61..7deba3aed3 100644 --- a/drivers/net/dpaa2/dpaa2_rxtx.c +++ b/drivers/net/dpaa2/dpaa2_rxtx.c @@ -450,6 +450,9 @@ eth_mbuf_to_sg_fd(struct rte_mbuf *mbuf, } } cur_seg = cur_seg->next; + } else if (RTE_MBUF_HAS_EXTBUF(cur_seg)) { + DPAA2_SET_FLE_IVP(sge); + cur_seg = cur_seg->next; } else { /* Get owner MBUF from indirect buffer */ mi = rte_mbuf_from_indirect(cur_seg); @@ -494,6 +497,8 @@ eth_mbuf_to_fd(struct rte_mbuf *mbuf, DPAA2_SET_FD_IVP(fd); rte_mbuf_refcnt_update(mbuf, -1); } + } else if (RTE_MBUF_HAS_EXTBUF(mbuf)) { + DPAA2_SET_FD_IVP(fd); } else { struct rte_mbuf *mi; @@ -1065,6 +1070,7 @@ dpaa2_dev_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) struct rte_eth_dev_data *eth_data = dpaa2_q->eth_data; struct dpaa2_dev_priv *priv = eth_data->dev_private; uint32_t flags[MAX_TX_RING_SLOTS] = {0}; + struct rte_mbuf **orig_bufs = bufs; if (unlikely(!DPAA2_PER_LCORE_DPIO)) { ret = dpaa2_affine_qbman_swp(); @@ -1148,6 +1154,24 @@ dpaa2_dev_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) mi = rte_mbuf_from_indirect(*bufs); mp = mi->pool; } + + if (unlikely(RTE_MBUF_HAS_EXTBUF(*bufs))) { + if (unlikely((*bufs)->nb_segs > 1)) { + if (eth_mbuf_to_sg_fd(*bufs, + &fd_arr[loop], + mp, 0)) + goto send_n_return; + } else { + eth_mbuf_to_fd(*bufs, + &fd_arr[loop], 0); + } + bufs++; +#ifdef RTE_LIBRTE_IEEE1588 + enable_tx_tstamp(&fd_arr[loop]); +#endif + continue; + } + /* Not a hw_pkt pool allocated frame */ if (unlikely(!mp || !priv->bp_list)) { DPAA2_PMD_ERR("Err: No buffer pool attached"); @@ -1220,6 +1244,15 @@ dpaa2_dev_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) nb_pkts -= loop; } dpaa2_q->tx_pkts += num_tx; + + loop = 0; + while (loop < num_tx) { + if 
(unlikely(RTE_MBUF_HAS_EXTBUF(*orig_bufs))) + rte_pktmbuf_free(*orig_bufs); + orig_bufs++; + loop++; + } + return num_tx; send_n_return: @@ -1246,6 +1279,15 @@ dpaa2_dev_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) } skip_tx: dpaa2_q->tx_pkts += num_tx; + + loop = 0; + while (loop < num_tx) { + if (unlikely(RTE_MBUF_HAS_EXTBUF(*orig_bufs))) + rte_pktmbuf_free(*orig_bufs); + orig_bufs++; + loop++; + } + return num_tx; } From patchwork Thu Feb 11 14:16:07 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Hemant Agrawal X-Patchwork-Id: 87862 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 37AEBA054A; Thu, 11 Feb 2021 15:29:00 +0100 (CET) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 630EF1CC574; Thu, 11 Feb 2021 15:28:09 +0100 (CET) Received: from inva020.nxp.com (inva020.nxp.com [92.121.34.13]) by mails.dpdk.org (Postfix) with ESMTP id 318671CC500 for ; Thu, 11 Feb 2021 15:28:02 +0100 (CET) Received: from inva020.nxp.com (localhost [127.0.0.1]) by inva020.eu-rdc02.nxp.com (Postfix) with ESMTP id B4DD01A0613; Thu, 11 Feb 2021 15:28:01 +0100 (CET) Received: from invc005.ap-rdc01.nxp.com (invc005.ap-rdc01.nxp.com [165.114.16.14]) by inva020.eu-rdc02.nxp.com (Postfix) with ESMTP id 4AD131A05F1; Thu, 11 Feb 2021 15:28:00 +0100 (CET) Received: from bf-netperf1.ap.freescale.net (bf-netperf1.ap.freescale.net [10.232.133.63]) by invc005.ap-rdc01.nxp.com (Postfix) with ESMTP id 9C3DB402AC; Thu, 11 Feb 2021 15:27:58 +0100 (CET) From: Hemant Agrawal To: dev@dpdk.org, ferruh.yigit@intel.com Date: Thu, 11 Feb 2021 19:46:07 +0530 Message-Id: <20210211141620.12482-8-hemant.agrawal@nxp.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: 
<20210211141620.12482-1-hemant.agrawal@nxp.com> References: <20210120142723.14090-1-hemant.agrawal@nxp.com> <20210211141620.12482-1-hemant.agrawal@nxp.com> X-Virus-Scanned: ClamAV using ClamSMTP Subject: [dpdk-dev] [PATCH v2 07/20] net/dpaa: support external buffers in Tx X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" This patch support tx of external buffers Signed-off-by: Hemant Agrawal --- drivers/net/dpaa/dpaa_rxtx.c | 39 +++++++++++++++++++++++++++--------- drivers/net/dpaa/dpaa_rxtx.h | 8 +------- 2 files changed, 31 insertions(+), 16 deletions(-) diff --git a/drivers/net/dpaa/dpaa_rxtx.c b/drivers/net/dpaa/dpaa_rxtx.c index e38fba23c0..423de40e95 100644 --- a/drivers/net/dpaa/dpaa_rxtx.c +++ b/drivers/net/dpaa/dpaa_rxtx.c @@ -1,7 +1,7 @@ /* SPDX-License-Identifier: BSD-3-Clause * * Copyright 2016 Freescale Semiconductor, Inc. All rights reserved. - * Copyright 2017,2019 NXP + * Copyright 2017,2019-2021 NXP * */ @@ -334,7 +334,7 @@ dpaa_unsegmented_checksum(struct rte_mbuf *mbuf, struct qm_fd *fd_arr) } } -struct rte_mbuf * +static struct rte_mbuf * dpaa_eth_sg_to_mbuf(const struct qm_fd *fd, uint32_t ifid) { struct dpaa_bp_info *bp_info = DPAA_BPID_TO_POOL_INFO(fd->bpid); @@ -791,13 +791,12 @@ uint16_t dpaa_eth_queue_rx(void *q, return num_rx; } -int +static int dpaa_eth_mbuf_to_sg_fd(struct rte_mbuf *mbuf, struct qm_fd *fd, - uint32_t bpid) + struct dpaa_bp_info *bp_info) { struct rte_mbuf *cur_seg = mbuf, *prev_seg = NULL; - struct dpaa_bp_info *bp_info = DPAA_BPID_TO_POOL_INFO(bpid); struct rte_mbuf *temp, *mi; struct qm_sg_entry *sg_temp, *sgt; int i = 0; @@ -840,7 +839,7 @@ dpaa_eth_mbuf_to_sg_fd(struct rte_mbuf *mbuf, fd->format = QM_FD_SG; fd->addr = temp->buf_iova; fd->offset = temp->data_off; - fd->bpid = bpid; + fd->bpid = bp_info ? 
bp_info->bpid : 0xff; fd->length20 = mbuf->pkt_len; while (i < DPAA_SGT_MAX_ENTRIES) { @@ -862,6 +861,9 @@ dpaa_eth_mbuf_to_sg_fd(struct rte_mbuf *mbuf, DPAA_MEMPOOL_TO_BPID(cur_seg->pool); } cur_seg = cur_seg->next; + } else if (RTE_MBUF_HAS_EXTBUF(cur_seg)) { + sg_temp->bpid = 0xff; + cur_seg = cur_seg->next; } else { /* Get owner MBUF from indirect buffer */ mi = rte_mbuf_from_indirect(cur_seg); @@ -911,6 +913,9 @@ tx_on_dpaa_pool_unsegmented(struct rte_mbuf *mbuf, */ DPAA_MBUF_TO_CONTIG_FD(mbuf, fd_arr, bp_info->bpid); } + } else if (RTE_MBUF_HAS_EXTBUF(mbuf)) { + DPAA_MBUF_TO_CONTIG_FD(mbuf, fd_arr, + bp_info ? bp_info->bpid : 0xff); } else { /* This is data-containing core mbuf: 'mi' */ mi = rte_mbuf_from_indirect(mbuf); @@ -929,7 +934,8 @@ tx_on_dpaa_pool_unsegmented(struct rte_mbuf *mbuf, * been released by BMAN. */ rte_mbuf_refcnt_update(mi, 1); - DPAA_MBUF_TO_CONTIG_FD(mbuf, fd_arr, bp_info->bpid); + DPAA_MBUF_TO_CONTIG_FD(mbuf, fd_arr, + bp_info ? bp_info->bpid : 0xff); } rte_pktmbuf_free(mbuf); } @@ -951,7 +957,7 @@ tx_on_dpaa_pool(struct rte_mbuf *mbuf, tx_on_dpaa_pool_unsegmented(mbuf, bp_info, fd_arr); } else if (mbuf->nb_segs > 1 && mbuf->nb_segs <= DPAA_SGT_MAX_ENTRIES) { - if (dpaa_eth_mbuf_to_sg_fd(mbuf, fd_arr, bp_info->bpid)) { + if (dpaa_eth_mbuf_to_sg_fd(mbuf, fd_arr, bp_info)) { DPAA_PMD_DEBUG("Unable to create Scatter Gather FD"); return 1; } @@ -1055,6 +1061,7 @@ dpaa_eth_queue_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs) uint16_t state; int ret, realloc_mbuf = 0; uint32_t seqn, index, flags[DPAA_TX_BURST_SIZE] = {0}; + struct rte_mbuf **orig_bufs = bufs; if (unlikely(!DPAA_PER_LCORE_PORTAL)) { ret = rte_dpaa_portal_init((void *)0); @@ -1112,6 +1119,11 @@ dpaa_eth_queue_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs) mp = mi->pool; } + if (unlikely(RTE_MBUF_HAS_EXTBUF(mbuf))) { + bp_info = NULL; + goto indirect_buf; + } + bp_info = DPAA_MEMPOOL_TO_POOL_INFO(mp); if (unlikely(mp->ops_index != bp_info->dpaa_ops_index || 
realloc_mbuf == 1)) { @@ -1130,7 +1142,7 @@ dpaa_eth_queue_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs) mbuf = temp_mbuf; realloc_mbuf = 0; } - +indirect_buf: state = tx_on_dpaa_pool(mbuf, bp_info, &fd_arr[loop]); if (unlikely(state)) { @@ -1157,6 +1169,15 @@ dpaa_eth_queue_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs) DPAA_DP_LOG(DEBUG, "Transmitted %d buffers on queue: %p", sent, q); + + loop = 0; + while (loop < sent) { + if (unlikely(RTE_MBUF_HAS_EXTBUF(*orig_bufs))) + rte_pktmbuf_free(*orig_bufs); + orig_bufs++; + loop++; + } + return sent; } diff --git a/drivers/net/dpaa/dpaa_rxtx.h b/drivers/net/dpaa/dpaa_rxtx.h index d9d7e04f5c..99e09196e9 100644 --- a/drivers/net/dpaa/dpaa_rxtx.h +++ b/drivers/net/dpaa/dpaa_rxtx.h @@ -1,7 +1,7 @@ /* SPDX-License-Identifier: BSD-3-Clause * * Copyright 2016 Freescale Semiconductor, Inc. All rights reserved. - * Copyright 2017,2020 NXP + * Copyright 2017,2020-2021 NXP * */ @@ -279,12 +279,6 @@ uint16_t dpaa_eth_tx_drop_all(void *q __rte_unused, struct rte_mbuf **bufs __rte_unused, uint16_t nb_bufs __rte_unused); -struct rte_mbuf *dpaa_eth_sg_to_mbuf(const struct qm_fd *fd, uint32_t ifid); - -int dpaa_eth_mbuf_to_sg_fd(struct rte_mbuf *mbuf, - struct qm_fd *fd, - uint32_t bpid); - uint16_t dpaa_free_mbuf(const struct qm_fd *fd); void dpaa_rx_cb(struct qman_fq **fq, struct qm_dqrr_entry **dqrr, void **bufs, int num_bufs); From patchwork Thu Feb 11 14:16:08 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Hemant Agrawal X-Patchwork-Id: 87864 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 66944A054A; Thu, 11 Feb 2021 15:29:16 +0100 (CET) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id ECE8D1CC57F; Thu, 
11 Feb 2021 15:28:11 +0100 (CET) Received: from inva021.nxp.com (inva021.nxp.com [92.121.34.21]) by mails.dpdk.org (Postfix) with ESMTP id EF14C1CC505 for ; Thu, 11 Feb 2021 15:28:02 +0100 (CET) Received: from inva021.nxp.com (localhost [127.0.0.1]) by inva021.eu-rdc02.nxp.com (Postfix) with ESMTP id C57B2200610; Thu, 11 Feb 2021 15:28:02 +0100 (CET) Received: from invc005.ap-rdc01.nxp.com (invc005.ap-rdc01.nxp.com [165.114.16.14]) by inva021.eu-rdc02.nxp.com (Postfix) with ESMTP id F32E12008D7; Thu, 11 Feb 2021 15:28:00 +0100 (CET) Received: from bf-netperf1.ap.freescale.net (bf-netperf1.ap.freescale.net [10.232.133.63]) by invc005.ap-rdc01.nxp.com (Postfix) with ESMTP id 292E3402AD; Thu, 11 Feb 2021 15:27:59 +0100 (CET) From: Hemant Agrawal To: dev@dpdk.org, ferruh.yigit@intel.com Date: Thu, 11 Feb 2021 19:46:08 +0530 Message-Id: <20210211141620.12482-9-hemant.agrawal@nxp.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20210211141620.12482-1-hemant.agrawal@nxp.com> References: <20210120142723.14090-1-hemant.agrawal@nxp.com> <20210211141620.12482-1-hemant.agrawal@nxp.com> X-Virus-Scanned: ClamAV using ClamSMTP Subject: [dpdk-dev] [PATCH v2 08/20] net/dpaa2: add traffic management driver X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Gagandeep Singh Add basic support for scheduling and shaping on the dpaa2 platform. The HW supports 2 levels of scheduling and shaping; however, the current patch only supports a single level.
Signed-off-by: Gagandeep Singh Acked-by: Hemant Agrawal --- doc/guides/nics/dpaa2.rst | 120 +++++- drivers/net/dpaa2/dpaa2_ethdev.c | 14 +- drivers/net/dpaa2/dpaa2_ethdev.h | 5 + drivers/net/dpaa2/dpaa2_tm.c | 626 ++++++++++++++++++++++++++++ drivers/net/dpaa2/dpaa2_tm.h | 32 ++ drivers/net/dpaa2/mc/dpni.c | 313 +++++++++++++- drivers/net/dpaa2/mc/fsl_dpni.h | 210 +++++++++- drivers/net/dpaa2/mc/fsl_dpni_cmd.h | 59 ++- drivers/net/dpaa2/meson.build | 3 +- 9 files changed, 1376 insertions(+), 6 deletions(-) create mode 100644 drivers/net/dpaa2/dpaa2_tm.c create mode 100644 drivers/net/dpaa2/dpaa2_tm.h diff --git a/doc/guides/nics/dpaa2.rst b/doc/guides/nics/dpaa2.rst index 233d926e0a..893e87e714 100644 --- a/doc/guides/nics/dpaa2.rst +++ b/doc/guides/nics/dpaa2.rst @@ -1,5 +1,5 @@ .. SPDX-License-Identifier: BSD-3-Clause - Copyright 2016,2020 NXP + Copyright 2016,2020-2021 NXP DPAA2 Poll Mode Driver @@ -406,6 +406,8 @@ Features of the DPAA2 PMD are: - Jumbo frames - Link flow control - Scattered and gather for TX and RX +- :ref:`Traffic Management API ` + Supported DPAA2 SoCs -------------------- @@ -548,3 +550,119 @@ Other Limitations - RSS hash key cannot be modified. - RSS RETA cannot be configured. + +.. _dptmapi: + +Traffic Management API +---------------------- + +The DPAA2 PMD supports the generic DPDK Traffic Management API, which allows +configuring the following features: + +1. Hierarchical scheduling +2. Traffic shaping + +Internally TM is represented by a hierarchy (tree) of nodes. +A node which has a parent is called a leaf, whereas a node without a +parent is called a non-leaf (root). + +Nodes hold the following types of settings: + +- for egress scheduler configuration: weight +- for egress rate limiter: private shaper + +The hierarchy is always constructed from the top, i.e. first a root node is +added, then some number of leaf nodes. The number of leaf nodes cannot exceed +the number of configured Tx queues. + +After the hierarchy is complete, it can be committed.
+ +For an additional description, please refer to the DPDK :doc:`Traffic Management API <../prog_guide/traffic_management>`. + +Supported Features +~~~~~~~~~~~~~~~~~~ + +The following capabilities are supported: + +- Level0 (root node) and Level1 are supported. +- 1 private shaper at root node (port level) is supported. +- 8 TX queues per port are supported (1 channel per port) +- Both SP and WFQ scheduling mechanisms are supported on all 8 queues. +- Congestion notification is supported. This means that if there is congestion on + the network, the DPDK driver will not enqueue any packets (no taildrop or WRED) + + Users can also check node and level capabilities using testpmd commands. + +Usage example +~~~~~~~~~~~~~ + +For a detailed usage description, please refer to the "Traffic Management" section in the DPDK :doc:`Testpmd Runtime Functions <../testpmd_app_ug/testpmd_funcs>`. + +1. Run testpmd as follows: + + .. code-block:: console + + ./dpdk-testpmd -c 0xf -n 1 -- -i --portmask 0x3 --nb-cores=1 --txq=4 --rxq=4 + +2. Stop all ports: + + .. code-block:: console + + testpmd> port stop all + +3. Add shaper profile: + + One port level shaper and strict priority on all 4 queues of port 0: + + .. code-block:: console + + add port tm node shaper profile 0 1 104857600 64 100 0 0 + add port tm nonleaf node 0 8 -1 0 1 0 1 1 1 0 + add port tm leaf node 0 0 8 0 1 1 -1 0 0 0 0 + add port tm leaf node 0 1 8 1 1 1 -1 0 0 0 0 + add port tm leaf node 0 2 8 2 1 1 -1 0 0 0 0 + add port tm leaf node 0 3 8 3 1 1 -1 0 0 0 0 + port tm hierarchy commit 0 no + + or + + One port level shaper and WFQ on all 4 queues of port 0: + + .. code-block:: console + + add port tm node shaper profile 0 1 104857600 64 100 0 0 + add port tm nonleaf node 0 8 -1 0 1 0 1 1 1 0 + add port tm leaf node 0 0 8 0 200 1 -1 0 0 0 0 + add port tm leaf node 0 1 8 0 300 1 -1 0 0 0 0 + add port tm leaf node 0 2 8 0 400 1 -1 0 0 0 0 + add port tm leaf node 0 3 8 0 500 1 -1 0 0 0 0 + port tm hierarchy commit 0 no + +4.
Create flows as per the source IP addresses: + + .. code-block:: console + + flow create 1 group 0 priority 1 ingress pattern ipv4 src is \ + 10.10.10.1 / end actions queue index 0 / end + flow create 1 group 0 priority 2 ingress pattern ipv4 src is \ + 10.10.10.2 / end actions queue index 1 / end + flow create 1 group 0 priority 3 ingress pattern ipv4 src is \ + 10.10.10.3 / end actions queue index 2 / end + flow create 1 group 0 priority 4 ingress pattern ipv4 src is \ + 10.10.10.4 / end actions queue index 3 / end + +5. Start all ports: + + .. code-block:: console + + testpmd> port start all + + + +6. Enable forwarding: + + .. code-block:: console + + testpmd> start + +7. Inject the traffic on port1 as per the configured flows; you will see shaped and scheduled forwarded traffic on port0. diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c index a81c73438e..490eb4b3f4 100644 --- a/drivers/net/dpaa2/dpaa2_ethdev.c +++ b/drivers/net/dpaa2/dpaa2_ethdev.c @@ -1,7 +1,7 @@ /* * SPDX-License-Identifier: BSD-3-Clause * * Copyright (c) 2016 Freescale Semiconductor, Inc. All rights reserved.
- * Copyright 2016-2020 NXP + * Copyright 2016-2021 NXP * */ @@ -638,6 +638,8 @@ dpaa2_eth_dev_configure(struct rte_eth_dev *dev) if (rx_offloads & DEV_RX_OFFLOAD_VLAN_FILTER) dpaa2_vlan_offload_set(dev, ETH_VLAN_FILTER_MASK); + dpaa2_tm_init(dev); + return 0; } @@ -1264,6 +1266,7 @@ dpaa2_dev_close(struct rte_eth_dev *dev) return -1; } + dpaa2_tm_deinit(dev); dpaa2_flow_clean(dev); /* Clean the device first */ ret = dpni_reset(dpni, CMD_PRI_LOW, priv->token); @@ -2345,6 +2348,14 @@ dpaa2_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id, qinfo->conf.tx_deferred_start = 0; } +static int +dpaa2_tm_ops_get(struct rte_eth_dev *dev __rte_unused, void *ops) +{ + *(const void **)ops = &dpaa2_tm_ops; + + return 0; +} + static struct eth_dev_ops dpaa2_ethdev_ops = { .dev_configure = dpaa2_eth_dev_configure, .dev_start = dpaa2_dev_start, @@ -2387,6 +2398,7 @@ static struct eth_dev_ops dpaa2_ethdev_ops = { .filter_ctrl = dpaa2_dev_flow_ctrl, .rxq_info_get = dpaa2_rxq_info_get, .txq_info_get = dpaa2_txq_info_get, + .tm_ops_get = dpaa2_tm_ops_get, #if defined(RTE_LIBRTE_IEEE1588) .timesync_enable = dpaa2_timesync_enable, .timesync_disable = dpaa2_timesync_disable, diff --git a/drivers/net/dpaa2/dpaa2_ethdev.h b/drivers/net/dpaa2/dpaa2_ethdev.h index bb49fa9a38..9837eb62c8 100644 --- a/drivers/net/dpaa2/dpaa2_ethdev.h +++ b/drivers/net/dpaa2/dpaa2_ethdev.h @@ -12,6 +12,7 @@ #include #include +#include "dpaa2_tm.h" #include #include @@ -112,6 +113,8 @@ extern int dpaa2_timestamp_dynfield_offset; extern const struct rte_flow_ops dpaa2_flow_ops; extern enum rte_filter_type dpaa2_filter_type; +extern const struct rte_tm_ops dpaa2_tm_ops; + #define IP_ADDRESS_OFFSET_INVALID (-1) struct dpaa2_key_info { @@ -179,6 +182,8 @@ struct dpaa2_dev_priv { struct rte_eth_dev *eth_dev; /**< Pointer back to holding ethdev */ LIST_HEAD(, rte_flow) flows; /**< Configured flow rule handles. 
*/ + LIST_HEAD(nodes, dpaa2_tm_node) nodes; + LIST_HEAD(shaper_profiles, dpaa2_tm_shaper_profile) shaper_profiles; }; int dpaa2_distset_to_dpkg_profile_cfg(uint64_t req_dist_set, diff --git a/drivers/net/dpaa2/dpaa2_tm.c b/drivers/net/dpaa2/dpaa2_tm.c new file mode 100644 index 0000000000..841da733d5 --- /dev/null +++ b/drivers/net/dpaa2/dpaa2_tm.c @@ -0,0 +1,626 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright 2020 NXP + */ + +#include +#include +#include + +#include "dpaa2_ethdev.h" + +#define DPAA2_BURST_MAX (64 * 1024) + +#define DPAA2_SHAPER_MIN_RATE 0 +#define DPAA2_SHAPER_MAX_RATE 107374182400ull +#define DPAA2_WEIGHT_MAX 24701 + +int +dpaa2_tm_init(struct rte_eth_dev *dev) +{ + struct dpaa2_dev_priv *priv = dev->data->dev_private; + + LIST_INIT(&priv->shaper_profiles); + LIST_INIT(&priv->nodes); + + return 0; +} + +void dpaa2_tm_deinit(struct rte_eth_dev *dev) +{ + struct dpaa2_dev_priv *priv = dev->data->dev_private; + struct dpaa2_tm_shaper_profile *profile = + LIST_FIRST(&priv->shaper_profiles); + struct dpaa2_tm_node *node = LIST_FIRST(&priv->nodes); + + while (profile) { + struct dpaa2_tm_shaper_profile *next = LIST_NEXT(profile, next); + + LIST_REMOVE(profile, next); + rte_free(profile); + profile = next; + } + + while (node) { + struct dpaa2_tm_node *next = LIST_NEXT(node, next); + + LIST_REMOVE(node, next); + rte_free(node); + node = next; + } +} + +static struct dpaa2_tm_node * +dpaa2_node_from_id(struct dpaa2_dev_priv *priv, uint32_t node_id) +{ + struct dpaa2_tm_node *node; + + LIST_FOREACH(node, &priv->nodes, next) + if (node->id == node_id) + return node; + + return NULL; +} + +static int +dpaa2_capabilities_get(struct rte_eth_dev *dev, + struct rte_tm_capabilities *cap, + struct rte_tm_error *error) +{ + if (!cap) + return -rte_tm_error_set(error, EINVAL, + RTE_TM_ERROR_TYPE_UNSPECIFIED, + NULL, "Capabilities are NULL\n"); + + memset(cap, 0, sizeof(*cap)); + + /* root node(port) + txqs number, assuming each TX + * Queue is mapped to 
each TC + */ + cap->n_nodes_max = 1 + dev->data->nb_tx_queues; + cap->n_levels_max = 2; /* port level + txqs level */ + cap->non_leaf_nodes_identical = 1; + cap->leaf_nodes_identical = 1; + + cap->shaper_n_max = 1; + cap->shaper_private_n_max = 1; + cap->shaper_private_dual_rate_n_max = 1; + cap->shaper_private_rate_min = DPAA2_SHAPER_MIN_RATE; + cap->shaper_private_rate_max = DPAA2_SHAPER_MAX_RATE; + + cap->sched_n_children_max = dev->data->nb_tx_queues; + cap->sched_sp_n_priorities_max = dev->data->nb_tx_queues; + cap->sched_wfq_n_children_per_group_max = dev->data->nb_tx_queues; + cap->sched_wfq_n_groups_max = 2; + cap->sched_wfq_weight_max = DPAA2_WEIGHT_MAX; + + cap->dynamic_update_mask = RTE_TM_UPDATE_NODE_STATS; + cap->stats_mask = RTE_TM_STATS_N_PKTS | RTE_TM_STATS_N_BYTES; + + return 0; +} + +static int +dpaa2_level_capabilities_get(struct rte_eth_dev *dev, + uint32_t level_id, + struct rte_tm_level_capabilities *cap, + struct rte_tm_error *error) +{ + if (!cap) + return -rte_tm_error_set(error, EINVAL, + RTE_TM_ERROR_TYPE_UNSPECIFIED, + NULL, NULL); + + memset(cap, 0, sizeof(*cap)); + + if (level_id > 1) + return -rte_tm_error_set(error, EINVAL, + RTE_TM_ERROR_TYPE_LEVEL_ID, + NULL, "Wrong level id\n"); + + if (level_id == 0) { /* Root node */ + cap->n_nodes_max = 1; + cap->n_nodes_nonleaf_max = 1; + cap->non_leaf_nodes_identical = 1; + + cap->nonleaf.shaper_private_supported = 1; + cap->nonleaf.shaper_private_dual_rate_supported = 1; + cap->nonleaf.shaper_private_rate_min = DPAA2_SHAPER_MIN_RATE; + cap->nonleaf.shaper_private_rate_max = DPAA2_SHAPER_MAX_RATE; + + cap->nonleaf.sched_n_children_max = dev->data->nb_tx_queues; + cap->nonleaf.sched_sp_n_priorities_max = 1; + cap->nonleaf.sched_wfq_n_children_per_group_max = + dev->data->nb_tx_queues; + cap->nonleaf.sched_wfq_n_groups_max = 2; + cap->nonleaf.sched_wfq_weight_max = DPAA2_WEIGHT_MAX; + cap->nonleaf.stats_mask = RTE_TM_STATS_N_PKTS | + RTE_TM_STATS_N_BYTES; + } else { /* leaf nodes */ + 
cap->n_nodes_max = dev->data->nb_tx_queues; + cap->n_nodes_leaf_max = dev->data->nb_tx_queues; + cap->leaf_nodes_identical = 1; + + cap->leaf.stats_mask = RTE_TM_STATS_N_PKTS; + } + + return 0; +} + +static int +dpaa2_node_capabilities_get(struct rte_eth_dev *dev, uint32_t node_id, + struct rte_tm_node_capabilities *cap, + struct rte_tm_error *error) +{ + struct dpaa2_tm_node *node; + struct dpaa2_dev_priv *priv = dev->data->dev_private; + + if (!cap) + return -rte_tm_error_set(error, EINVAL, + RTE_TM_ERROR_TYPE_UNSPECIFIED, + NULL, NULL); + + memset(cap, 0, sizeof(*cap)); + + node = dpaa2_node_from_id(priv, node_id); + if (!node) + return -rte_tm_error_set(error, ENODEV, + RTE_TM_ERROR_TYPE_NODE_ID, + NULL, "Node id does not exist\n"); + + if (node->type == 0) { + cap->shaper_private_supported = 1; + + cap->nonleaf.sched_n_children_max = dev->data->nb_tx_queues; + cap->nonleaf.sched_sp_n_priorities_max = 1; + cap->nonleaf.sched_wfq_n_children_per_group_max = + dev->data->nb_tx_queues; + cap->nonleaf.sched_wfq_n_groups_max = 2; + cap->nonleaf.sched_wfq_weight_max = DPAA2_WEIGHT_MAX; + cap->stats_mask = RTE_TM_STATS_N_PKTS | RTE_TM_STATS_N_BYTES; + } else { + cap->stats_mask = RTE_TM_STATS_N_PKTS; + } + + return 0; +} + +static int +dpaa2_node_type_get(struct rte_eth_dev *dev, uint32_t node_id, int *is_leaf, + struct rte_tm_error *error) +{ + struct dpaa2_dev_priv *priv = dev->data->dev_private; + struct dpaa2_tm_node *node; + + if (!is_leaf) + return -rte_tm_error_set(error, EINVAL, + RTE_TM_ERROR_TYPE_UNSPECIFIED, + NULL, NULL); + + node = dpaa2_node_from_id(priv, node_id); + if (!node) + return -rte_tm_error_set(error, ENODEV, + RTE_TM_ERROR_TYPE_NODE_ID, + NULL, "Node id does not exist\n"); + + *is_leaf = node->type == 1/*NODE_QUEUE*/ ? 
1 : 0; + + return 0; +} + +static struct dpaa2_tm_shaper_profile * +dpaa2_shaper_profile_from_id(struct dpaa2_dev_priv *priv, + uint32_t shaper_profile_id) +{ + struct dpaa2_tm_shaper_profile *profile; + + LIST_FOREACH(profile, &priv->shaper_profiles, next) + if (profile->id == shaper_profile_id) + return profile; + + return NULL; +} + +static int +dpaa2_shaper_profile_add(struct rte_eth_dev *dev, uint32_t shaper_profile_id, + struct rte_tm_shaper_params *params, + struct rte_tm_error *error) +{ + struct dpaa2_dev_priv *priv = dev->data->dev_private; + struct dpaa2_tm_shaper_profile *profile; + + if (!params) + return -rte_tm_error_set(error, EINVAL, + RTE_TM_ERROR_TYPE_UNSPECIFIED, + NULL, NULL); + if (params->committed.rate > DPAA2_SHAPER_MAX_RATE) + return -rte_tm_error_set(error, EINVAL, + RTE_TM_ERROR_TYPE_SHAPER_PROFILE_COMMITTED_RATE, + NULL, "Committed rate is out of range\n"); + + if (params->committed.size > DPAA2_BURST_MAX) + return -rte_tm_error_set(error, EINVAL, + RTE_TM_ERROR_TYPE_SHAPER_PROFILE_COMMITTED_SIZE, + NULL, "Committed size is out of range\n"); + + if (params->peak.rate > DPAA2_SHAPER_MAX_RATE) + return -rte_tm_error_set(error, EINVAL, + RTE_TM_ERROR_TYPE_SHAPER_PROFILE_PEAK_RATE, + NULL, "Peak rate is out of range\n"); + + if (params->peak.size > DPAA2_BURST_MAX) + return -rte_tm_error_set(error, EINVAL, + RTE_TM_ERROR_TYPE_SHAPER_PROFILE_PEAK_SIZE, + NULL, "Peak size is out of range\n"); + + if (shaper_profile_id == RTE_TM_SHAPER_PROFILE_ID_NONE) + return -rte_tm_error_set(error, EINVAL, + RTE_TM_ERROR_TYPE_SHAPER_PROFILE_ID, + NULL, "Wrong shaper profile id\n"); + + profile = dpaa2_shaper_profile_from_id(priv, shaper_profile_id); + if (profile) + return -rte_tm_error_set(error, EEXIST, + RTE_TM_ERROR_TYPE_SHAPER_PROFILE_ID, + NULL, "Profile id already exists\n"); + + profile = rte_zmalloc_socket(NULL, sizeof(*profile), 0, + rte_socket_id()); + if (!profile) + return -rte_tm_error_set(error, ENOMEM, + RTE_TM_ERROR_TYPE_UNSPECIFIED, + NULL, NULL); +
+ profile->id = shaper_profile_id; + rte_memcpy(&profile->params, params, sizeof(profile->params)); + + LIST_INSERT_HEAD(&priv->shaper_profiles, profile, next); + + return 0; +} + +static int +dpaa2_shaper_profile_delete(struct rte_eth_dev *dev, uint32_t shaper_profile_id, + struct rte_tm_error *error) +{ + struct dpaa2_dev_priv *priv = dev->data->dev_private; + struct dpaa2_tm_shaper_profile *profile; + + profile = dpaa2_shaper_profile_from_id(priv, shaper_profile_id); + if (!profile) + return -rte_tm_error_set(error, ENODEV, + RTE_TM_ERROR_TYPE_SHAPER_PROFILE_ID, + NULL, "Profile id does not exist\n"); + + if (profile->refcnt) + return -rte_tm_error_set(error, EPERM, + RTE_TM_ERROR_TYPE_SHAPER_PROFILE_ID, + NULL, "Profile is used\n"); + + LIST_REMOVE(profile, next); + rte_free(profile); + + return 0; +} + +static int +dpaa2_node_check_params(struct rte_eth_dev *dev, uint32_t node_id, + __rte_unused uint32_t priority, uint32_t weight, + uint32_t level_id, + struct rte_tm_node_params *params, + struct rte_tm_error *error) +{ + if (node_id == RTE_TM_NODE_ID_NULL) + return -rte_tm_error_set(error, EINVAL, RTE_TM_NODE_ID_NULL, + NULL, "Node id is invalid\n"); + + if (weight > DPAA2_WEIGHT_MAX) + return -rte_tm_error_set(error, EINVAL, + RTE_TM_ERROR_TYPE_NODE_WEIGHT, + NULL, "Weight is out of range\n"); + + if (level_id != 0 && level_id != 1) + return -rte_tm_error_set(error, EINVAL, + RTE_TM_ERROR_TYPE_LEVEL_ID, + NULL, "Wrong level id\n"); + + if (!params) + return -rte_tm_error_set(error, EINVAL, + RTE_TM_ERROR_TYPE_UNSPECIFIED, + NULL, NULL); + + if (params->shared_shaper_id) + return -rte_tm_error_set(error, EINVAL, + RTE_TM_ERROR_TYPE_NODE_PARAMS_SHARED_SHAPER_ID, + NULL, "Shared shaper is not supported\n"); + + if (params->n_shared_shapers) + return -rte_tm_error_set(error, EINVAL, + RTE_TM_ERROR_TYPE_NODE_PARAMS_N_SHARED_SHAPERS, + NULL, "Shared shaper is not supported\n"); + + /* verify port (root node) settings */ + if (node_id >= dev->data->nb_tx_queues) { 
+ if (params->nonleaf.wfq_weight_mode) + return -rte_tm_error_set(error, EINVAL, + RTE_TM_ERROR_TYPE_NODE_PARAMS_WFQ_WEIGHT_MODE, + NULL, "WFQ weight mode is not supported\n"); + + if (params->stats_mask & ~(RTE_TM_STATS_N_PKTS | + RTE_TM_STATS_N_BYTES)) + return -rte_tm_error_set(error, EINVAL, + RTE_TM_ERROR_TYPE_NODE_PARAMS_STATS, + NULL, + "Requested port stats are not supported\n"); + + return 0; + } + if (params->shaper_profile_id != RTE_TM_SHAPER_PROFILE_ID_NONE) + return -rte_tm_error_set(error, EINVAL, + RTE_TM_ERROR_TYPE_NODE_PARAMS_SHAPER_PROFILE_ID, + NULL, "Private shaper not supported on leaf\n"); + + if (params->stats_mask & ~RTE_TM_STATS_N_PKTS) + return -rte_tm_error_set(error, EINVAL, + RTE_TM_ERROR_TYPE_NODE_PARAMS_STATS, + NULL, + "Requested stats are not supported\n"); + + /* check leaf node */ + if (level_id == 1) { + if (params->leaf.cman != RTE_TM_CMAN_TAIL_DROP) + return -rte_tm_error_set(error, ENODEV, + RTE_TM_ERROR_TYPE_NODE_PARAMS_CMAN, + NULL, "Only taildrop is supported\n"); + } + + return 0; +} + +static int +dpaa2_node_add(struct rte_eth_dev *dev, uint32_t node_id, + uint32_t parent_node_id, uint32_t priority, uint32_t weight, + uint32_t level_id, struct rte_tm_node_params *params, + struct rte_tm_error *error) +{ + struct dpaa2_dev_priv *priv = dev->data->dev_private; + struct dpaa2_tm_shaper_profile *profile = NULL; + struct dpaa2_tm_node *node, *parent = NULL; + int ret; + + if (0/* If device is started*/) + return -rte_tm_error_set(error, EPERM, + RTE_TM_ERROR_TYPE_UNSPECIFIED, + NULL, "Port is already started\n"); + + ret = dpaa2_node_check_params(dev, node_id, priority, weight, level_id, + params, error); + if (ret) + return ret; + + if (params->shaper_profile_id != RTE_TM_SHAPER_PROFILE_ID_NONE) { + profile = dpaa2_shaper_profile_from_id(priv, + params->shaper_profile_id); + if (!profile) + return -rte_tm_error_set(error, ENODEV, + RTE_TM_ERROR_TYPE_SHAPER_PROFILE_ID, + NULL, "Shaper id does not exist\n"); + } + if 
(parent_node_id == RTE_TM_NODE_ID_NULL) { + LIST_FOREACH(node, &priv->nodes, next) { + if (node->type != 0 /*root node*/) + continue; + + return -rte_tm_error_set(error, EINVAL, + RTE_TM_ERROR_TYPE_UNSPECIFIED, + NULL, "Root node exists\n"); + } + } else { + parent = dpaa2_node_from_id(priv, parent_node_id); + if (!parent) + return -rte_tm_error_set(error, EINVAL, + RTE_TM_ERROR_TYPE_NODE_PARENT_NODE_ID, + NULL, "Parent node id does not exist\n"); + } + + node = dpaa2_node_from_id(priv, node_id); + if (node) + return -rte_tm_error_set(error, ENODEV, + RTE_TM_ERROR_TYPE_NODE_ID, + NULL, "Node id already exists\n"); + + node = rte_zmalloc_socket(NULL, sizeof(*node), 0, rte_socket_id()); + if (!node) + return -rte_tm_error_set(error, ENOMEM, + RTE_TM_ERROR_TYPE_UNSPECIFIED, + NULL, NULL); + + node->id = node_id; + node->type = parent_node_id == RTE_TM_NODE_ID_NULL ? 0/*NODE_PORT*/ : + 1/*NODE_QUEUE*/; + + if (parent) { + node->parent = parent; + parent->refcnt++; + } + + if (profile) { + node->profile = profile; + profile->refcnt++; + } + + node->weight = weight; + node->priority = priority; + node->stats_mask = params->stats_mask; + + LIST_INSERT_HEAD(&priv->nodes, node, next); + + return 0; +} + +static int +dpaa2_node_delete(struct rte_eth_dev *dev, uint32_t node_id, + struct rte_tm_error *error) +{ + struct dpaa2_dev_priv *priv = dev->data->dev_private; + struct dpaa2_tm_node *node; + + if (0) { + return -rte_tm_error_set(error, EPERM, + RTE_TM_ERROR_TYPE_UNSPECIFIED, + NULL, "Port is already started\n"); + } + + node = dpaa2_node_from_id(priv, node_id); + if (!node) + return -rte_tm_error_set(error, ENODEV, + RTE_TM_ERROR_TYPE_NODE_ID, + NULL, "Node id does not exist\n"); + + if (node->refcnt) + return -rte_tm_error_set(error, EPERM, + RTE_TM_ERROR_TYPE_NODE_ID, + NULL, "Node id is used\n"); + + if (node->parent) + node->parent->refcnt--; + + if (node->profile) + node->profile->refcnt--; + + LIST_REMOVE(node, next); + rte_free(node); + + return 0; +} + +static int
+dpaa2_hierarchy_commit(struct rte_eth_dev *dev, int clear_on_fail, + struct rte_tm_error *error) +{ + struct dpaa2_dev_priv *priv = dev->data->dev_private; + struct dpaa2_tm_node *node, *temp_node; + struct fsl_mc_io *dpni = (struct fsl_mc_io *)dev->process_private; + int ret; + int wfq_grp = 0, is_wfq_grp = 0, conf[DPNI_MAX_TC]; + struct dpni_tx_priorities_cfg prio_cfg; + + memset(&prio_cfg, 0, sizeof(prio_cfg)); + memset(conf, 0, sizeof(conf)); + + LIST_FOREACH(node, &priv->nodes, next) { + if (node->type == 0/*root node*/) { + if (!node->profile) + continue; + + struct dpni_tx_shaping_cfg tx_cr_shaper, tx_er_shaper; + + tx_cr_shaper.max_burst_size = + node->profile->params.committed.size; + tx_cr_shaper.rate_limit = + node->profile->params.committed.rate / (1024 * 1024); + tx_er_shaper.max_burst_size = + node->profile->params.peak.size; + tx_er_shaper.rate_limit = + node->profile->params.peak.rate / (1024 * 1024); + ret = dpni_set_tx_shaping(dpni, 0, priv->token, + &tx_cr_shaper, &tx_er_shaper, 0); + if (ret) { + ret = -rte_tm_error_set(error, EINVAL, + RTE_TM_ERROR_TYPE_SHAPER_PROFILE, NULL, + "Error in setting Shaping\n"); + goto out; + } + + continue; + } else { /* level 1, all leaf nodes */ + if (node->id >= dev->data->nb_tx_queues) { + ret = -rte_tm_error_set(error, EINVAL, + RTE_TM_ERROR_TYPE_NODE_ID, NULL, + "Not enough txqs configured\n"); + goto out; + } + + if (conf[node->id]) + continue; + + LIST_FOREACH(temp_node, &priv->nodes, next) { + if (temp_node->id == node->id || + temp_node->type == 0) + continue; + if (conf[temp_node->id]) + continue; + if (node->priority == temp_node->priority) { + if (wfq_grp == 0) { + prio_cfg.tc_sched[temp_node->id].mode = + DPNI_TX_SCHED_WEIGHTED_A; + /* DPDK supports a lowest weight of 1, the DPAA2 platform a lowest of 100 */ + prio_cfg.tc_sched[temp_node->id].delta_bandwidth = + temp_node->weight + 99; + } else if (wfq_grp == 1) { + prio_cfg.tc_sched[temp_node->id].mode = + DPNI_TX_SCHED_WEIGHTED_B; +
prio_cfg.tc_sched[temp_node->id].delta_bandwidth = + temp_node->weight + 99; + } else { + /*TODO: add one more check for number of nodes in a group */ + ret = -rte_tm_error_set(error, EINVAL, + RTE_TM_ERROR_TYPE_UNSPECIFIED, NULL, + "Only 2 WFQ Groups are supported\n"); + goto out; + } + conf[temp_node->id] = 1; + is_wfq_grp = 1; + } + } + if (is_wfq_grp) { + if (wfq_grp == 0) { + prio_cfg.tc_sched[node->id].mode = + DPNI_TX_SCHED_WEIGHTED_A; + prio_cfg.tc_sched[node->id].delta_bandwidth = + node->weight + 99; + prio_cfg.prio_group_A = node->priority; + } else if (wfq_grp == 1) { + prio_cfg.tc_sched[node->id].mode = + DPNI_TX_SCHED_WEIGHTED_B; + prio_cfg.tc_sched[node->id].delta_bandwidth = + node->weight + 99; + prio_cfg.prio_group_B = node->priority; + } + wfq_grp++; + is_wfq_grp = 0; + } + conf[node->id] = 1; + } + if (wfq_grp) + prio_cfg.separate_groups = 1; + } + ret = dpni_set_tx_priorities(dpni, 0, priv->token, &prio_cfg); + if (ret) { + ret = -rte_tm_error_set(error, EINVAL, + RTE_TM_ERROR_TYPE_UNSPECIFIED, NULL, + "Scheduling Failed\n"); + goto out; + } + + return 0; + +out: + if (clear_on_fail) { + dpaa2_tm_deinit(dev); + dpaa2_tm_init(dev); + } + + return ret; +} + +const struct rte_tm_ops dpaa2_tm_ops = { + .node_type_get = dpaa2_node_type_get, + .capabilities_get = dpaa2_capabilities_get, + .level_capabilities_get = dpaa2_level_capabilities_get, + .node_capabilities_get = dpaa2_node_capabilities_get, + .shaper_profile_add = dpaa2_shaper_profile_add, + .shaper_profile_delete = dpaa2_shaper_profile_delete, + .node_add = dpaa2_node_add, + .node_delete = dpaa2_node_delete, + .hierarchy_commit = dpaa2_hierarchy_commit, +}; diff --git a/drivers/net/dpaa2/dpaa2_tm.h b/drivers/net/dpaa2/dpaa2_tm.h new file mode 100644 index 0000000000..6632fab687 --- /dev/null +++ b/drivers/net/dpaa2/dpaa2_tm.h @@ -0,0 +1,32 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright 2020 NXP + */ + +#ifndef _DPAA2_TM_H_ +#define _DPAA2_TM_H_ + +#include + +struct 
dpaa2_tm_shaper_profile { + LIST_ENTRY(dpaa2_tm_shaper_profile) next; + uint32_t id; + int refcnt; + struct rte_tm_shaper_params params; +}; + +struct dpaa2_tm_node { + LIST_ENTRY(dpaa2_tm_node) next; + uint32_t id; + uint32_t type; + int refcnt; + struct dpaa2_tm_node *parent; + struct dpaa2_tm_shaper_profile *profile; + uint32_t weight; + uint32_t priority; + uint64_t stats_mask; +}; + +int dpaa2_tm_init(struct rte_eth_dev *dev); +void dpaa2_tm_deinit(struct rte_eth_dev *dev); + +#endif /* _DPAA2_TM_H_ */ diff --git a/drivers/net/dpaa2/mc/dpni.c b/drivers/net/dpaa2/mc/dpni.c index 683d7bcc17..b254931386 100644 --- a/drivers/net/dpaa2/mc/dpni.c +++ b/drivers/net/dpaa2/mc/dpni.c @@ -1,7 +1,7 @@ /* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0) * * Copyright 2013-2016 Freescale Semiconductor Inc. - * Copyright 2016-2019 NXP + * Copyright 2016-2020 NXP * */ #include @@ -949,6 +949,46 @@ int dpni_get_link_state(struct fsl_mc_io *mc_io, return 0; } +/** + * dpni_set_tx_shaping() - Set the transmit shaping + * @mc_io: Pointer to MC portal's I/O object + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_' + * @token: Token of DPNI object + * @tx_cr_shaper: TX committed rate shaping configuration + * @tx_er_shaper: TX excess rate shaping configuration + * @coupled: Committed and excess rate shapers are coupled + * + * Return: '0' on Success; Error code otherwise. 
+ */ +int dpni_set_tx_shaping(struct fsl_mc_io *mc_io, + uint32_t cmd_flags, + uint16_t token, + const struct dpni_tx_shaping_cfg *tx_cr_shaper, + const struct dpni_tx_shaping_cfg *tx_er_shaper, + int coupled) +{ + struct dpni_cmd_set_tx_shaping *cmd_params; + struct mc_command cmd = { 0 }; + + /* prepare command */ + cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_TX_SHAPING, + cmd_flags, + token); + cmd_params = (struct dpni_cmd_set_tx_shaping *)cmd.params; + cmd_params->tx_cr_max_burst_size = + cpu_to_le16(tx_cr_shaper->max_burst_size); + cmd_params->tx_er_max_burst_size = + cpu_to_le16(tx_er_shaper->max_burst_size); + cmd_params->tx_cr_rate_limit = + cpu_to_le32(tx_cr_shaper->rate_limit); + cmd_params->tx_er_rate_limit = + cpu_to_le32(tx_er_shaper->rate_limit); + dpni_set_field(cmd_params->coupled, COUPLED, coupled); + + /* send command to mc*/ + return mc_send_command(mc_io, &cmd); +} + /** * dpni_set_max_frame_length() - Set the maximum received frame length. * @mc_io: Pointer to MC portal's I/O object @@ -1476,6 +1516,55 @@ int dpni_clear_vlan_filters(struct fsl_mc_io *mc_io, return mc_send_command(mc_io, &cmd); } +/** + * dpni_set_tx_priorities() - Set transmission TC priority configuration + * @mc_io: Pointer to MC portal's I/O object + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_' + * @token: Token of DPNI object + * @cfg: Transmission selection configuration + * + * warning: Allowed only when DPNI is disabled + * + * Return: '0' on Success; Error code otherwise. 
+ */ +int dpni_set_tx_priorities(struct fsl_mc_io *mc_io, + uint32_t cmd_flags, + uint16_t token, + const struct dpni_tx_priorities_cfg *cfg) +{ + struct dpni_cmd_set_tx_priorities *cmd_params; + struct mc_command cmd = { 0 }; + int i; + + /* prepare command */ + cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_TX_PRIORITIES, + cmd_flags, + token); + cmd_params = (struct dpni_cmd_set_tx_priorities *)cmd.params; + dpni_set_field(cmd_params->flags, + SEPARATE_GRP, + cfg->separate_groups); + cmd_params->prio_group_A = cfg->prio_group_A; + cmd_params->prio_group_B = cfg->prio_group_B; + + for (i = 0; i + 1 < DPNI_MAX_TC; i = i + 2) { + dpni_set_field(cmd_params->modes[i / 2], + MODE_1, + cfg->tc_sched[i].mode); + dpni_set_field(cmd_params->modes[i / 2], + MODE_2, + cfg->tc_sched[i + 1].mode); + } + + for (i = 0; i < DPNI_MAX_TC; i++) { + cmd_params->delta_bandwidth[i] = + cpu_to_le16(cfg->tc_sched[i].delta_bandwidth); + } + + /* send command to mc*/ + return mc_send_command(mc_io, &cmd); +} + /** * dpni_set_rx_tc_dist() - Set Rx traffic class distribution configuration * @mc_io: Pointer to MC portal's I/O object @@ -1808,6 +1897,228 @@ int dpni_clear_fs_entries(struct fsl_mc_io *mc_io, return mc_send_command(mc_io, &cmd); } +/** + * dpni_set_rx_tc_policing() - Set Rx traffic class policing configuration + * @mc_io: Pointer to MC portal's I/O object + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_' + * @token: Token of DPNI object + * @tc_id: Traffic class selection (0-7) + * @cfg: Traffic class policing configuration + * + * Return: '0' on Success; error code otherwise. 
+ */ +int dpni_set_rx_tc_policing(struct fsl_mc_io *mc_io, + uint32_t cmd_flags, + uint16_t token, + uint8_t tc_id, + const struct dpni_rx_tc_policing_cfg *cfg) +{ + struct dpni_cmd_set_rx_tc_policing *cmd_params; + struct mc_command cmd = { 0 }; + + /* prepare command */ + cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_RX_TC_POLICING, + cmd_flags, + token); + cmd_params = (struct dpni_cmd_set_rx_tc_policing *)cmd.params; + dpni_set_field(cmd_params->mode_color, COLOR, cfg->default_color); + dpni_set_field(cmd_params->mode_color, MODE, cfg->mode); + dpni_set_field(cmd_params->units, UNITS, cfg->units); + cmd_params->options = cpu_to_le32(cfg->options); + cmd_params->cir = cpu_to_le32(cfg->cir); + cmd_params->cbs = cpu_to_le32(cfg->cbs); + cmd_params->eir = cpu_to_le32(cfg->eir); + cmd_params->ebs = cpu_to_le32(cfg->ebs); + cmd_params->tc_id = tc_id; + + /* send command to mc*/ + return mc_send_command(mc_io, &cmd); +} + +/** + * dpni_get_rx_tc_policing() - Get Rx traffic class policing configuration + * @mc_io: Pointer to MC portal's I/O object + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_' + * @token: Token of DPNI object + * @tc_id: Traffic class selection (0-7) + * @cfg: Traffic class policing configuration + * + * Return: '0' on Success; error code otherwise. 
+ */ +int dpni_get_rx_tc_policing(struct fsl_mc_io *mc_io, + uint32_t cmd_flags, + uint16_t token, + uint8_t tc_id, + struct dpni_rx_tc_policing_cfg *cfg) +{ + struct dpni_rsp_get_rx_tc_policing *rsp_params; + struct dpni_cmd_get_rx_tc_policing *cmd_params; + struct mc_command cmd = { 0 }; + int err; + + /* prepare command */ + cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_RX_TC_POLICING, + cmd_flags, + token); + cmd_params = (struct dpni_cmd_get_rx_tc_policing *)cmd.params; + cmd_params->tc_id = tc_id; + + + /* send command to mc*/ + err = mc_send_command(mc_io, &cmd); + if (err) + return err; + + rsp_params = (struct dpni_rsp_get_rx_tc_policing *)cmd.params; + cfg->options = le32_to_cpu(rsp_params->options); + cfg->cir = le32_to_cpu(rsp_params->cir); + cfg->cbs = le32_to_cpu(rsp_params->cbs); + cfg->eir = le32_to_cpu(rsp_params->eir); + cfg->ebs = le32_to_cpu(rsp_params->ebs); + cfg->units = dpni_get_field(rsp_params->units, UNITS); + cfg->mode = dpni_get_field(rsp_params->mode_color, MODE); + cfg->default_color = dpni_get_field(rsp_params->mode_color, COLOR); + + return 0; +} + +/** + * dpni_prepare_early_drop() - prepare an early drop. 
+ * @cfg: Early-drop configuration + * @early_drop_buf: Zeroed 256 bytes of memory before mapping it to DMA + * + * This function has to be called before dpni_set_rx_tc_early_drop or + * dpni_set_tx_tc_early_drop + * + */ +void dpni_prepare_early_drop(const struct dpni_early_drop_cfg *cfg, + uint8_t *early_drop_buf) +{ + struct dpni_early_drop *ext_params; + + ext_params = (struct dpni_early_drop *)early_drop_buf; + + dpni_set_field(ext_params->flags, DROP_ENABLE, cfg->enable); + dpni_set_field(ext_params->flags, DROP_UNITS, cfg->units); + ext_params->green_drop_probability = cfg->green.drop_probability; + ext_params->green_max_threshold = cpu_to_le64(cfg->green.max_threshold); + ext_params->green_min_threshold = cpu_to_le64(cfg->green.min_threshold); + ext_params->yellow_drop_probability = cfg->yellow.drop_probability; + ext_params->yellow_max_threshold = + cpu_to_le64(cfg->yellow.max_threshold); + ext_params->yellow_min_threshold = + cpu_to_le64(cfg->yellow.min_threshold); + ext_params->red_drop_probability = cfg->red.drop_probability; + ext_params->red_max_threshold = cpu_to_le64(cfg->red.max_threshold); + ext_params->red_min_threshold = cpu_to_le64(cfg->red.min_threshold); +} + +/** + * dpni_extract_early_drop() - extract the early drop configuration. 
+ * @cfg: Early-drop configuration + * @early_drop_buf: Zeroed 256 bytes of memory before mapping it to DMA + * + * This function has to be called after dpni_get_rx_tc_early_drop or + * dpni_get_tx_tc_early_drop + * + */ +void dpni_extract_early_drop(struct dpni_early_drop_cfg *cfg, + const uint8_t *early_drop_buf) +{ + const struct dpni_early_drop *ext_params; + + ext_params = (const struct dpni_early_drop *)early_drop_buf; + + cfg->enable = dpni_get_field(ext_params->flags, DROP_ENABLE); + cfg->units = dpni_get_field(ext_params->flags, DROP_UNITS); + cfg->green.drop_probability = ext_params->green_drop_probability; + cfg->green.max_threshold = le64_to_cpu(ext_params->green_max_threshold); + cfg->green.min_threshold = le64_to_cpu(ext_params->green_min_threshold); + cfg->yellow.drop_probability = ext_params->yellow_drop_probability; + cfg->yellow.max_threshold = + le64_to_cpu(ext_params->yellow_max_threshold); + cfg->yellow.min_threshold = + le64_to_cpu(ext_params->yellow_min_threshold); + cfg->red.drop_probability = ext_params->red_drop_probability; + cfg->red.max_threshold = le64_to_cpu(ext_params->red_max_threshold); + cfg->red.min_threshold = le64_to_cpu(ext_params->red_min_threshold); +} + +/** + * dpni_set_early_drop() - Set traffic class early-drop configuration + * @mc_io: Pointer to MC portal's I/O object + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_' + * @token: Token of DPNI object + * @qtype: Type of queue - only Rx and Tx types are supported + * @tc_id: Traffic class selection (0-7) + * @early_drop_iova: I/O virtual address of 256 bytes DMA-able memory filled + * with the early-drop configuration by calling dpni_prepare_early_drop() + * + * warning: Before calling this function, call dpni_prepare_early_drop() to + * prepare the early_drop_iova parameter + * + * Return: '0' on Success; error code otherwise. 
+ */ +int dpni_set_early_drop(struct fsl_mc_io *mc_io, + uint32_t cmd_flags, + uint16_t token, + enum dpni_queue_type qtype, + uint8_t tc_id, + uint64_t early_drop_iova) +{ + struct dpni_cmd_early_drop *cmd_params; + struct mc_command cmd = { 0 }; + + /* prepare command */ + cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_EARLY_DROP, + cmd_flags, + token); + cmd_params = (struct dpni_cmd_early_drop *)cmd.params; + cmd_params->qtype = qtype; + cmd_params->tc = tc_id; + cmd_params->early_drop_iova = cpu_to_le64(early_drop_iova); + + /* send command to mc*/ + return mc_send_command(mc_io, &cmd); +} + +/** + * dpni_get_early_drop() - Get Rx traffic class early-drop configuration + * @mc_io: Pointer to MC portal's I/O object + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_' + * @token: Token of DPNI object + * @qtype: Type of queue - only Rx and Tx types are supported + * @tc_id: Traffic class selection (0-7) + * @early_drop_iova: I/O virtual address of 256 bytes DMA-able memory + * + * warning: After calling this function, call dpni_extract_early_drop() to + * get the early drop configuration + * + * Return: '0' on Success; error code otherwise. 
+ */ +int dpni_get_early_drop(struct fsl_mc_io *mc_io, + uint32_t cmd_flags, + uint16_t token, + enum dpni_queue_type qtype, + uint8_t tc_id, + uint64_t early_drop_iova) +{ + struct dpni_cmd_early_drop *cmd_params; + struct mc_command cmd = { 0 }; + + /* prepare command */ + cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_EARLY_DROP, + cmd_flags, + token); + cmd_params = (struct dpni_cmd_early_drop *)cmd.params; + cmd_params->qtype = qtype; + cmd_params->tc = tc_id; + cmd_params->early_drop_iova = cpu_to_le64(early_drop_iova); + + /* send command to mc*/ + return mc_send_command(mc_io, &cmd); +} + /** * dpni_set_congestion_notification() - Set traffic class congestion * notification configuration diff --git a/drivers/net/dpaa2/mc/fsl_dpni.h b/drivers/net/dpaa2/mc/fsl_dpni.h index 598911ddd1..df42746c9a 100644 --- a/drivers/net/dpaa2/mc/fsl_dpni.h +++ b/drivers/net/dpaa2/mc/fsl_dpni.h @@ -1,7 +1,7 @@ /* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0) * * Copyright 2013-2016 Freescale Semiconductor Inc. 
- * Copyright 2016-2019 NXP + * Copyright 2016-2020 NXP * */ #ifndef __FSL_DPNI_H @@ -731,6 +731,23 @@ int dpni_get_link_state(struct fsl_mc_io *mc_io, uint16_t token, struct dpni_link_state *state); +/** + * struct dpni_tx_shaping_cfg - Structure representing DPNI Tx shaping configuration + * @rate_limit: Rate in Mbps + * @max_burst_size: Burst size in bytes (up to 64KB) + */ +struct dpni_tx_shaping_cfg { + uint32_t rate_limit; + uint16_t max_burst_size; +}; + +int dpni_set_tx_shaping(struct fsl_mc_io *mc_io, + uint32_t cmd_flags, + uint16_t token, + const struct dpni_tx_shaping_cfg *tx_cr_shaper, + const struct dpni_tx_shaping_cfg *tx_er_shaper, + int coupled); + int dpni_set_max_frame_length(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token, @@ -832,6 +849,49 @@ int dpni_clear_vlan_filters(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token); +/** + * enum dpni_tx_schedule_mode - DPNI Tx scheduling mode + * @DPNI_TX_SCHED_STRICT_PRIORITY: strict priority + * @DPNI_TX_SCHED_WEIGHTED_A: weight-based scheduling in group A + * @DPNI_TX_SCHED_WEIGHTED_B: weight-based scheduling in group B + */ +enum dpni_tx_schedule_mode { + DPNI_TX_SCHED_STRICT_PRIORITY = 0, + DPNI_TX_SCHED_WEIGHTED_A, + DPNI_TX_SCHED_WEIGHTED_B, +}; + +/** + * struct dpni_tx_schedule_cfg - Structure representing Tx scheduling conf + * @mode: Scheduling mode + * @delta_bandwidth: Bandwidth represented in weights from 100 to 10000; + * not applicable for 'strict-priority' mode; + */ +struct dpni_tx_schedule_cfg { + enum dpni_tx_schedule_mode mode; + uint16_t delta_bandwidth; +}; + +/** + * struct dpni_tx_priorities_cfg - Structure representing transmission + * priorities for DPNI TCs + * @tc_sched: An array of per-traffic-class scheduling configurations + * @prio_group_A: Priority of group A + * @prio_group_B: Priority of group B + * @separate_groups: Treat A and B groups as separate + */ +struct dpni_tx_priorities_cfg { + struct dpni_tx_schedule_cfg tc_sched[DPNI_MAX_TC]; + uint32_t prio_group_A; + uint32_t 
prio_group_B; + uint8_t separate_groups; +}; + +int dpni_set_tx_priorities(struct fsl_mc_io *mc_io, + uint32_t cmd_flags, + uint16_t token, + const struct dpni_tx_priorities_cfg *cfg); + /** * enum dpni_dist_mode - DPNI distribution mode * @DPNI_DIST_MODE_NONE: No distribution @@ -904,6 +964,90 @@ int dpni_set_rx_tc_dist(struct fsl_mc_io *mc_io, uint8_t tc_id, const struct dpni_rx_tc_dist_cfg *cfg); +/** + * Set to select color aware mode (otherwise - color blind) + */ +#define DPNI_POLICER_OPT_COLOR_AWARE 0x00000001 +/** + * Set to discard frame with RED color + */ +#define DPNI_POLICER_OPT_DISCARD_RED 0x00000002 + +/** + * enum dpni_policer_mode - selecting the policer mode + * @DPNI_POLICER_MODE_NONE: Policer is disabled + * @DPNI_POLICER_MODE_PASS_THROUGH: Policer pass through + * @DPNI_POLICER_MODE_RFC_2698: Policer algorithm RFC 2698 + * @DPNI_POLICER_MODE_RFC_4115: Policer algorithm RFC 4115 + */ +enum dpni_policer_mode { + DPNI_POLICER_MODE_NONE = 0, + DPNI_POLICER_MODE_PASS_THROUGH, + DPNI_POLICER_MODE_RFC_2698, + DPNI_POLICER_MODE_RFC_4115 +}; + +/** + * enum dpni_policer_unit - DPNI policer units + * @DPNI_POLICER_UNIT_BYTES: bytes units + * @DPNI_POLICER_UNIT_FRAMES: frames units + */ +enum dpni_policer_unit { + DPNI_POLICER_UNIT_BYTES = 0, + DPNI_POLICER_UNIT_FRAMES +}; + +/** + * enum dpni_policer_color - selecting the policer color + * @DPNI_POLICER_COLOR_GREEN: Green color + * @DPNI_POLICER_COLOR_YELLOW: Yellow color + * @DPNI_POLICER_COLOR_RED: Red color + */ +enum dpni_policer_color { + DPNI_POLICER_COLOR_GREEN = 0, + DPNI_POLICER_COLOR_YELLOW, + DPNI_POLICER_COLOR_RED +}; + +/** + * struct dpni_rx_tc_policing_cfg - Policer configuration + * @options: Mask of available options; use 'DPNI_POLICER_OPT_' values + * @mode: policer mode + * @default_color: For pass-through mode the policer re-colors with this + * color any incoming packets. For Color aware non-pass-through mode: + * policer re-colors with this color all packets with FD[DROPP]>2. 
+ * @units: Bytes or Packets + * @cir: Committed information rate (CIR) in Kbps or packets/second + * @cbs: Committed burst size (CBS) in bytes or packets + * @eir: Peak information rate (PIR, rfc2698) in Kbps or packets/second + * Excess information rate (EIR, rfc4115) in Kbps or packets/second + * @ebs: Peak burst size (PBS, rfc2698) in bytes or packets + * Excess burst size (EBS, rfc4115) in bytes or packets + */ +struct dpni_rx_tc_policing_cfg { + uint32_t options; + enum dpni_policer_mode mode; + enum dpni_policer_unit units; + enum dpni_policer_color default_color; + uint32_t cir; + uint32_t cbs; + uint32_t eir; + uint32_t ebs; +}; + + +int dpni_set_rx_tc_policing(struct fsl_mc_io *mc_io, + uint32_t cmd_flags, + uint16_t token, + uint8_t tc_id, + const struct dpni_rx_tc_policing_cfg *cfg); + +int dpni_get_rx_tc_policing(struct fsl_mc_io *mc_io, + uint32_t cmd_flags, + uint16_t token, + uint8_t tc_id, + struct dpni_rx_tc_policing_cfg *cfg); + /** * enum dpni_congestion_unit - DPNI congestion units * @DPNI_CONGESTION_UNIT_BYTES: bytes units @@ -914,6 +1058,70 @@ enum dpni_congestion_unit { DPNI_CONGESTION_UNIT_FRAMES }; +/** + * enum dpni_early_drop_mode - DPNI early drop mode + * @DPNI_EARLY_DROP_MODE_NONE: early drop is disabled + * @DPNI_EARLY_DROP_MODE_TAIL: early drop in taildrop mode + * @DPNI_EARLY_DROP_MODE_WRED: early drop in WRED mode + */ +enum dpni_early_drop_mode { + DPNI_EARLY_DROP_MODE_NONE = 0, + DPNI_EARLY_DROP_MODE_TAIL, + DPNI_EARLY_DROP_MODE_WRED +}; + +/** + * struct dpni_wred_cfg - WRED configuration + * @max_threshold: maximum threshold that packets may be discarded. Above this + * threshold all packets are discarded; must be less than 2^39; + * approximated to be expressed as (x+256)*2^(y-1) due to HW + * implementation. + * @min_threshold: minimum threshold that packets may be discarded at + * @drop_probability: probability that a packet will be discarded (1-100, + * associated with the max_threshold). 
+ */ +struct dpni_wred_cfg { + uint64_t max_threshold; + uint64_t min_threshold; + uint8_t drop_probability; +}; + +/** + * struct dpni_early_drop_cfg - early-drop configuration + * @enable: drop enable + * @units: units type + * @green: WRED - 'green' configuration + * @yellow: WRED - 'yellow' configuration + * @red: WRED - 'red' configuration + */ +struct dpni_early_drop_cfg { + uint8_t enable; + enum dpni_congestion_unit units; + struct dpni_wred_cfg green; + struct dpni_wred_cfg yellow; + struct dpni_wred_cfg red; +}; + +void dpni_prepare_early_drop(const struct dpni_early_drop_cfg *cfg, + uint8_t *early_drop_buf); + +void dpni_extract_early_drop(struct dpni_early_drop_cfg *cfg, + const uint8_t *early_drop_buf); + +int dpni_set_early_drop(struct fsl_mc_io *mc_io, + uint32_t cmd_flags, + uint16_t token, + enum dpni_queue_type qtype, + uint8_t tc_id, + uint64_t early_drop_iova); + +int dpni_get_early_drop(struct fsl_mc_io *mc_io, + uint32_t cmd_flags, + uint16_t token, + enum dpni_queue_type qtype, + uint8_t tc_id, + uint64_t early_drop_iova); + /** * enum dpni_dest - DPNI destination types * @DPNI_DEST_NONE: Unassigned destination; The queue is set in parked mode and diff --git a/drivers/net/dpaa2/mc/fsl_dpni_cmd.h b/drivers/net/dpaa2/mc/fsl_dpni_cmd.h index 9e7376200d..c40090b8fe 100644 --- a/drivers/net/dpaa2/mc/fsl_dpni_cmd.h +++ b/drivers/net/dpaa2/mc/fsl_dpni_cmd.h @@ -1,7 +1,7 @@ /* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0) * * Copyright 2013-2016 Freescale Semiconductor Inc. 
- * Copyright 2016-2019 NXP + * Copyright 2016-2020 NXP * */ #ifndef _FSL_DPNI_CMD_H @@ -69,6 +69,8 @@ #define DPNI_CMDID_SET_RX_TC_DIST DPNI_CMD_V3(0x235) +#define DPNI_CMDID_SET_RX_TC_POLICING DPNI_CMD(0x23E) + #define DPNI_CMDID_SET_QOS_TBL DPNI_CMD_V2(0x240) #define DPNI_CMDID_ADD_QOS_ENT DPNI_CMD_V2(0x241) #define DPNI_CMDID_REMOVE_QOS_ENT DPNI_CMD(0x242) @@ -77,6 +79,9 @@ #define DPNI_CMDID_REMOVE_FS_ENT DPNI_CMD(0x245) #define DPNI_CMDID_CLR_FS_ENT DPNI_CMD(0x246) +#define DPNI_CMDID_SET_TX_PRIORITIES DPNI_CMD_V2(0x250) +#define DPNI_CMDID_GET_RX_TC_POLICING DPNI_CMD(0x251) + #define DPNI_CMDID_GET_STATISTICS DPNI_CMD_V3(0x25D) #define DPNI_CMDID_RESET_STATISTICS DPNI_CMD(0x25E) #define DPNI_CMDID_GET_QUEUE DPNI_CMD_V2(0x25F) @@ -354,6 +359,19 @@ struct dpni_rsp_get_link_state { uint64_t advertising; }; +#define DPNI_COUPLED_SHIFT 0 +#define DPNI_COUPLED_SIZE 1 + +struct dpni_cmd_set_tx_shaping { + uint16_t tx_cr_max_burst_size; + uint16_t tx_er_max_burst_size; + uint32_t pad; + uint32_t tx_cr_rate_limit; + uint32_t tx_er_rate_limit; + /* from LSB: coupled:1 */ + uint8_t coupled; +}; + struct dpni_cmd_set_max_frame_length { uint16_t max_frame_length; }; @@ -592,6 +610,45 @@ struct dpni_cmd_clear_fs_entries { uint8_t tc_id; }; +#define DPNI_MODE_SHIFT 0 +#define DPNI_MODE_SIZE 4 +#define DPNI_COLOR_SHIFT 4 +#define DPNI_COLOR_SIZE 4 +#define DPNI_UNITS_SHIFT 0 +#define DPNI_UNITS_SIZE 4 + +struct dpni_cmd_set_rx_tc_policing { + /* from LSB: mode:4 color:4 */ + uint8_t mode_color; + /* from LSB: units: 4 */ + uint8_t units; + uint8_t tc_id; + uint8_t pad; + uint32_t options; + uint32_t cir; + uint32_t cbs; + uint32_t eir; + uint32_t ebs; +}; + +struct dpni_cmd_get_rx_tc_policing { + uint16_t pad; + uint8_t tc_id; +}; + +struct dpni_rsp_get_rx_tc_policing { + /* from LSB: mode:4 color:4 */ + uint8_t mode_color; + /* from LSB: units: 4 */ + uint8_t units; + uint16_t pad; + uint32_t options; + uint32_t cir; + uint32_t cbs; + uint32_t eir; + uint32_t ebs; +}; + 
#define DPNI_DROP_ENABLE_SHIFT 0 #define DPNI_DROP_ENABLE_SIZE 1 #define DPNI_DROP_UNITS_SHIFT 2 diff --git a/drivers/net/dpaa2/meson.build b/drivers/net/dpaa2/meson.build index 844dd25159..f5f411d592 100644 --- a/drivers/net/dpaa2/meson.build +++ b/drivers/net/dpaa2/meson.build @@ -1,5 +1,5 @@ # SPDX-License-Identifier: BSD-3-Clause -# Copyright 2018 NXP +# Copyright 2018-2021 NXP if not is_linux build = false @@ -8,6 +8,7 @@ endif deps += ['mempool_dpaa2'] sources = files('base/dpaa2_hw_dpni.c', + 'dpaa2_tm.c', 'dpaa2_mux.c', 'dpaa2_ethdev.c', 'dpaa2_flow.c',

From patchwork Thu Feb 11 14:16:09 2021
X-Patchwork-Id: 87863
From: Hemant Agrawal
To: dev@dpdk.org, ferruh.yigit@intel.com
Date: Thu, 11 Feb 2021 19:46:09 +0530
Message-Id: <20210211141620.12482-10-hemant.agrawal@nxp.com>
Subject: [dpdk-dev] [PATCH v2 09/20] net/dpaa2: add support to configure dpdmux max Rx frame len

This patch introduces a new PMD API that helps applications configure the maximum Rx frame length for a given dpdmux interface.

Signed-off-by: Hemant Agrawal --- drivers/net/dpaa2/dpaa2_mux.c | 28 +++++++++++++++++++++++++++- drivers/net/dpaa2/rte_pmd_dpaa2.h | 18 +++++++++++++++++- drivers/net/dpaa2/version.map | 1 + 3 files changed, 45 insertions(+), 2 deletions(-) diff --git a/drivers/net/dpaa2/dpaa2_mux.c b/drivers/net/dpaa2/dpaa2_mux.c index f8366e839e..b397d333d6 100644 --- a/drivers/net/dpaa2/dpaa2_mux.c +++ b/drivers/net/dpaa2/dpaa2_mux.c @@ -1,5 +1,5 @@ /* SPDX-License-Identifier: BSD-3-Clause - * Copyright 2018-2020 NXP + * Copyright 2018-2021 NXP */ #include @@ -205,6 +205,32 @@ rte_pmd_dpaa2_mux_flow_create(uint32_t dpdmux_id, return NULL; } +int +rte_pmd_dpaa2_mux_rx_frame_len(uint32_t dpdmux_id, uint16_t max_rx_frame_len) +{ + struct dpaa2_dpdmux_dev *dpdmux_dev; + int ret; + + /* Find the DPDMUX from dpdmux_id in our list */ + dpdmux_dev = get_dpdmux_from_id(dpdmux_id); + if (!dpdmux_dev) { + DPAA2_PMD_ERR("Invalid dpdmux_id: %d", dpdmux_id); + return -1; + } + + ret = dpdmux_set_max_frame_length(&dpdmux_dev->dpdmux, + CMD_PRI_LOW, dpdmux_dev->token, max_rx_frame_len); + if (ret) { + DPAA2_PMD_ERR("DPDMUX:Unable to set mtu.
check config %d", ret); + return ret; + } + + DPAA2_PMD_INFO("dpdmux mtu set as %u", + DPAA2_MAX_RX_PKT_LEN - RTE_ETHER_CRC_LEN); + + return ret; +} + static int dpaa2_create_dpdmux_device(int vdev_fd __rte_unused, struct vfio_device_info *obj_info __rte_unused, diff --git a/drivers/net/dpaa2/rte_pmd_dpaa2.h b/drivers/net/dpaa2/rte_pmd_dpaa2.h index ec8df75af9..7204a8f951 100644 --- a/drivers/net/dpaa2/rte_pmd_dpaa2.h +++ b/drivers/net/dpaa2/rte_pmd_dpaa2.h @@ -1,5 +1,5 @@ /* SPDX-License-Identifier: BSD-3-Clause - * Copyright 2018-2020 NXP + * Copyright 2018-2021 NXP */ #ifndef _RTE_PMD_DPAA2_H @@ -40,6 +40,22 @@ rte_pmd_dpaa2_mux_flow_create(uint32_t dpdmux_id, struct rte_flow_item *pattern[], struct rte_flow_action *actions[]); +/** + * @warning + * @b EXPERIMENTAL: this API may change, or be removed, without prior notice + * + * Configure the maximum Rx frame length for a demultiplexer interface + * + * @param dpdmux_id + * ID of the DPDMUX MC object. + * @param max_rx_frame_len + * maximum receive frame length (will be checked to be the minimum of all dpnis) + * + */ +__rte_experimental +int +rte_pmd_dpaa2_mux_rx_frame_len(uint32_t dpdmux_id, uint16_t max_rx_frame_len); + /** * @warning * @b EXPERIMENTAL: this API may change, or be removed, without prior notice diff --git a/drivers/net/dpaa2/version.map b/drivers/net/dpaa2/version.map index 72d1b2b1c8..3c087c2423 100644 --- a/drivers/net/dpaa2/version.map +++ b/drivers/net/dpaa2/version.map @@ -2,6 +2,7 @@ EXPERIMENTAL { global: rte_pmd_dpaa2_mux_flow_create; + rte_pmd_dpaa2_mux_rx_frame_len; rte_pmd_dpaa2_set_custom_hash; };

From patchwork Thu Feb 11 14:16:10 2021
X-Patchwork-Id: 87865
From: Hemant Agrawal
To: dev@dpdk.org, ferruh.yigit@intel.com
Date: Thu, 11 Feb 2021 19:46:10 +0530
Message-Id: <20210211141620.12482-11-hemant.agrawal@nxp.com>
Subject: [dpdk-dev] [PATCH v2 10/20] net/dpaa2: add support for raw pattern in dpdmux

From: Akhil Goyal

Add support for the flow raw pattern. dpdmux_set_custom_key() must be called only once for a particular DPDMUX, because every call erases all previously added rules; hence it is called only on the first flow creation.
Signed-off-by: Akhil Goyal --- drivers/net/dpaa2/dpaa2_mux.c | 39 ++++++++++++++++++++++----- drivers/net/dpaa2/mc/dpdmux.c | 3 ++- drivers/net/dpaa2/mc/fsl_dpdmux.h | 12 +++++++-- drivers/net/dpaa2/mc/fsl_dpdmux_cmd.h | 4 +-- 4 files changed, 46 insertions(+), 12 deletions(-) diff --git a/drivers/net/dpaa2/dpaa2_mux.c b/drivers/net/dpaa2/dpaa2_mux.c index b397d333d6..b669a16fc1 100644 --- a/drivers/net/dpaa2/dpaa2_mux.c +++ b/drivers/net/dpaa2/dpaa2_mux.c @@ -66,6 +66,7 @@ rte_pmd_dpaa2_mux_flow_create(uint32_t dpdmux_id, void *key_iova, *mask_iova, *key_cfg_iova = NULL; uint8_t key_size = 0; int ret; + static int i; /* Find the DPDMUX from dpdmux_id in our list */ dpdmux_dev = get_dpdmux_from_id(dpdmux_id); @@ -154,6 +155,23 @@ rte_pmd_dpaa2_mux_flow_create(uint32_t dpdmux_id, } break; + case RTE_FLOW_ITEM_TYPE_RAW: + { + const struct rte_flow_item_raw *spec; + + spec = (const struct rte_flow_item_raw *)pattern[0]->spec; + kg_cfg.extracts[0].extract.from_data.offset = spec->offset; + kg_cfg.extracts[0].extract.from_data.size = spec->length; + kg_cfg.extracts[0].type = DPKG_EXTRACT_FROM_DATA; + kg_cfg.num_extracts = 1; + memcpy((void *)key_iova, (const void *)spec->pattern, + spec->length); + memcpy(mask_iova, pattern[0]->mask, spec->length); + + key_size = spec->length; + } + break; + default: DPAA2_PMD_ERR("Not supported pattern type: %d", pattern[0]->type); @@ -166,20 +184,27 @@ rte_pmd_dpaa2_mux_flow_create(uint32_t dpdmux_id, goto creation_error; } - ret = dpdmux_set_custom_key(&dpdmux_dev->dpdmux, CMD_PRI_LOW, - dpdmux_dev->token, - (uint64_t)(DPAA2_VADDR_TO_IOVA(key_cfg_iova))); - if (ret) { - DPAA2_PMD_ERR("dpdmux_set_custom_key failed: err(%d)", ret); - goto creation_error; + /* Multiple rules are supported right now only with the same DPKG + * extracts (kg_cfg.extracts), i.e. the same offset and length values + * for the raw item. Different values of kg_cfg may not work.
+ */ + if (i == 0) { + ret = dpdmux_set_custom_key(&dpdmux_dev->dpdmux, CMD_PRI_LOW, + dpdmux_dev->token, + (uint64_t)(DPAA2_VADDR_TO_IOVA(key_cfg_iova))); + if (ret) { + DPAA2_PMD_ERR("dpdmux_set_custom_key failed: err(%d)", + ret); + goto creation_error; + } } - /* As now our key extract parameters are set, let us configure * the rule. */ flow->rule.key_iova = (uint64_t)(DPAA2_VADDR_TO_IOVA(key_iova)); flow->rule.mask_iova = (uint64_t)(DPAA2_VADDR_TO_IOVA(mask_iova)); flow->rule.key_size = key_size; + flow->rule.entry_index = i++; vf_conf = (const struct rte_flow_action_vf *)(actions[0]->conf); if (vf_conf->id == 0 || vf_conf->id > dpdmux_dev->num_ifs) { diff --git a/drivers/net/dpaa2/mc/dpdmux.c b/drivers/net/dpaa2/mc/dpdmux.c index 63f1ec7d30..67d37ed4cd 100644 --- a/drivers/net/dpaa2/mc/dpdmux.c +++ b/drivers/net/dpaa2/mc/dpdmux.c @@ -1,7 +1,7 @@ /* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0) * * Copyright 2013-2016 Freescale Semiconductor Inc. - * Copyright 2018-2019 NXP + * Copyright 2018-2021 NXP * */ #include @@ -852,6 +852,7 @@ int dpdmux_add_custom_cls_entry(struct fsl_mc_io *mc_io, cmd_params = (struct dpdmux_cmd_add_custom_cls_entry *)cmd.params; cmd_params->key_size = rule->key_size; + cmd_params->entry_index = rule->entry_index; cmd_params->dest_if = cpu_to_le16(action->dest_if); cmd_params->key_iova = cpu_to_le64(rule->key_iova); cmd_params->mask_iova = cpu_to_le64(rule->mask_iova); diff --git a/drivers/net/dpaa2/mc/fsl_dpdmux.h b/drivers/net/dpaa2/mc/fsl_dpdmux.h index accd1ef5c1..b809aade5d 100644 --- a/drivers/net/dpaa2/mc/fsl_dpdmux.h +++ b/drivers/net/dpaa2/mc/fsl_dpdmux.h @@ -1,7 +1,7 @@ /* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0) * * Copyright 2013-2016 Freescale Semiconductor Inc. - * Copyright 2018-2019 NXP + * Copyright 2018-2021 NXP * */ #ifndef __FSL_DPDMUX_H @@ -367,15 +367,23 @@ int dpdmux_set_custom_key(struct fsl_mc_io *mc_io, * struct dpdmux_rule_cfg - Custom classification rule. 
* * @key_iova: DMA address of buffer storing the look-up value - * @mask_iova: DMA address of the mask used for TCAM classification + * @mask_iova: DMA address of the mask used for TCAM classification. This + * parameter is used only if dpdmux was created using option + * DPDMUX_OPT_CLS_MASK_SUPPORT. * @key_size: size, in bytes, of the look-up value. This must match the size * of the look-up key defined using dpdmux_set_custom_key, otherwise the * entry will never be hit + * @entry_index: rule index into the table. This parameter is used only when + * the dpdmux object was created using option DPDMUX_OPT_CLS_MASK_SUPPORT. In + * this case the rule is masking and the current frame may be a hit for + * multiple rules. This parameter determines the order in which the rules + * will be checked (smaller entry_index first). */ struct dpdmux_rule_cfg { uint64_t key_iova; uint64_t mask_iova; uint8_t key_size; + uint16_t entry_index; }; /** diff --git a/drivers/net/dpaa2/mc/fsl_dpdmux_cmd.h b/drivers/net/dpaa2/mc/fsl_dpdmux_cmd.h index a60b2ebe3c..b6b8c38c41 100644 --- a/drivers/net/dpaa2/mc/fsl_dpdmux_cmd.h +++ b/drivers/net/dpaa2/mc/fsl_dpdmux_cmd.h @@ -1,7 +1,7 @@ /* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0) * * Copyright 2013-2016 Freescale Semiconductor Inc. 
- * Copyright 2018-2019 NXP + * Copyright 2018-2021 NXP * */ #ifndef _FSL_DPDMUX_CMD_H @@ -204,7 +204,7 @@ struct dpdmux_set_custom_key { struct dpdmux_cmd_add_custom_cls_entry { uint8_t pad[3]; uint8_t key_size; - uint16_t pad1; + uint16_t entry_index; uint16_t dest_if; uint64_t key_iova; uint64_t mask_iova;

From patchwork Thu Feb 11 14:16:11 2021
X-Patchwork-Id: 87866
From: Hemant Agrawal
To: dev@dpdk.org, ferruh.yigit@intel.com
Date: Thu, 11 Feb 2021 19:46:11 +0530
Message-Id: <20210211141620.12482-12-hemant.agrawal@nxp.com>
References: <20210120142723.14090-1-hemant.agrawal@nxp.com> 
	<20210211141620.12482-1-hemant.agrawal@nxp.com>
Subject: [dpdk-dev] [PATCH v2 11/20] net/dpaa2: dpdmux skip reset

From: Apeksha Gupta

This is required because the interface is shared with Linux and we do not
want the dpdmux to be reset; rather, the default interface to the kernel
shall continue to work. The dpdmux_set_resetable command is used to skip
the DPDMUX reset. By default the DPDMUX_RESET command resets the DPDMUX
completely; the dpdmux_set_resetable command is ignored by old MC firmware.

Signed-off-by: Apeksha Gupta
---
 drivers/net/dpaa2/dpaa2_mux.c         | 26 +++++++++
 drivers/net/dpaa2/mc/dpdmux.c         | 84 +++++++++++++++++++++++++++
 drivers/net/dpaa2/mc/fsl_dpdmux.h     | 32 ++++++++++
 drivers/net/dpaa2/mc/fsl_dpdmux_cmd.h | 22 ++++++-
 4 files changed, 161 insertions(+), 3 deletions(-)

diff --git a/drivers/net/dpaa2/dpaa2_mux.c b/drivers/net/dpaa2/dpaa2_mux.c
index b669a16fc1..1ff00ca8f7 100644
--- a/drivers/net/dpaa2/dpaa2_mux.c
+++ b/drivers/net/dpaa2/dpaa2_mux.c
@@ -264,6 +264,8 @@ dpaa2_create_dpdmux_device(int vdev_fd __rte_unused,
	struct dpaa2_dpdmux_dev *dpdmux_dev;
	struct dpdmux_attr attr;
	int ret;
+	uint16_t maj_ver;
+	uint16_t min_ver;

	PMD_INIT_FUNC_TRACE();

@@ -298,6 +300,30 @@ dpaa2_create_dpdmux_device(int vdev_fd __rte_unused,
		goto init_err;
	}

+	ret = dpdmux_get_api_version(&dpdmux_dev->dpdmux, CMD_PRI_LOW,
+				     &maj_ver, &min_ver);
+	if (ret) {
+		DPAA2_PMD_ERR("setting version failed in %s",
+			      __func__);
+		goto init_err;
+	}
+
+	/* The new dpdmux_set/get_resetable() API are available starting with
+	 * DPDMUX_VER_MAJOR==6 and DPDMUX_VER_MINOR==6
+	 */
+	if (maj_ver >= 6 && min_ver >= 6) {
+		ret = dpdmux_set_resetable(&dpdmux_dev->dpdmux, CMD_PRI_LOW,
+					   dpdmux_dev->token,
+					   DPDMUX_SKIP_DEFAULT_INTERFACE |
+
DPDMUX_SKIP_UNICAST_RULES | + DPDMUX_SKIP_MULTICAST_RULES); + if (ret) { + DPAA2_PMD_ERR("setting default interface failed in %s", + __func__); + goto init_err; + } + } + dpdmux_dev->dpdmux_id = dpdmux_id; dpdmux_dev->num_ifs = attr.num_ifs; diff --git a/drivers/net/dpaa2/mc/dpdmux.c b/drivers/net/dpaa2/mc/dpdmux.c index 67d37ed4cd..57c811c70f 100644 --- a/drivers/net/dpaa2/mc/dpdmux.c +++ b/drivers/net/dpaa2/mc/dpdmux.c @@ -123,10 +123,12 @@ int dpdmux_create(struct fsl_mc_io *mc_io, cmd_params->method = cfg->method; cmd_params->manip = cfg->manip; cmd_params->num_ifs = cpu_to_le16(cfg->num_ifs); + cmd_params->default_if = cpu_to_le16(cfg->default_if); cmd_params->adv_max_dmat_entries = cpu_to_le16(cfg->adv.max_dmat_entries); cmd_params->adv_max_mc_groups = cpu_to_le16(cfg->adv.max_mc_groups); cmd_params->adv_max_vlan_ids = cpu_to_le16(cfg->adv.max_vlan_ids); + cmd_params->mem_size = cpu_to_le16(cfg->adv.mem_size); cmd_params->options = cpu_to_le64(cfg->adv.options); /* send command to mc*/ @@ -278,6 +280,87 @@ int dpdmux_reset(struct fsl_mc_io *mc_io, return mc_send_command(mc_io, &cmd); } +/** + * dpdmux_set_resetable() - Set overall resetable DPDMUX parameters. + * @mc_io: Pointer to MC portal's I/O object + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_' + * @token: Token of DPDMUX object + * @skip_reset_flags: By default all are 0. + * By setting 1 will deactivate the reset. + * The flags are: + * DPDMUX_SKIP_DEFAULT_INTERFACE 0x01 + * DPDMUX_SKIP_UNICAST_RULES 0x02 + * DPDMUX_SKIP_MULTICAST_RULES 0x04 + * + * For example, by default, through DPDMUX_RESET the default + * interface will be restored with the one from create. + * By setting DPDMUX_SKIP_DEFAULT_INTERFACE flag, + * through DPDMUX_RESET the default interface will not be modified. + * + * Return: '0' on Success; Error code otherwise. 
+ */ +int dpdmux_set_resetable(struct fsl_mc_io *mc_io, + uint32_t cmd_flags, + uint16_t token, + uint8_t skip_reset_flags) +{ + struct mc_command cmd = { 0 }; + struct dpdmux_cmd_set_skip_reset_flags *cmd_params; + + /* prepare command */ + cmd.header = mc_encode_cmd_header(DPDMUX_CMDID_SET_RESETABLE, + cmd_flags, + token); + cmd_params = (struct dpdmux_cmd_set_skip_reset_flags *)cmd.params; + dpdmux_set_field(cmd_params->skip_reset_flags, + SKIP_RESET_FLAGS, + skip_reset_flags); + + /* send command to mc*/ + return mc_send_command(mc_io, &cmd); +} + +/** + * dpdmux_get_resetable() - Get overall resetable parameters. + * @mc_io: Pointer to MC portal's I/O object + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_' + * @token: Token of DPDMUX object + * @skip_reset_flags: Get the reset flags. + * + * The flags are: + * DPDMUX_SKIP_DEFAULT_INTERFACE 0x01 + * DPDMUX_SKIP_UNICAST_RULES 0x02 + * DPDMUX_SKIP_MULTICAST_RULES 0x04 + * + * Return: '0' on Success; Error code otherwise. + */ +int dpdmux_get_resetable(struct fsl_mc_io *mc_io, + uint32_t cmd_flags, + uint16_t token, + uint8_t *skip_reset_flags) +{ + struct mc_command cmd = { 0 }; + struct dpdmux_rsp_get_skip_reset_flags *rsp_params; + int err; + + /* prepare command */ + cmd.header = mc_encode_cmd_header(DPDMUX_CMDID_GET_RESETABLE, + cmd_flags, + token); + + /* send command to mc*/ + err = mc_send_command(mc_io, &cmd); + if (err) + return err; + + /* retrieve response parameters */ + rsp_params = (struct dpdmux_rsp_get_skip_reset_flags *)cmd.params; + *skip_reset_flags = dpdmux_get_field(rsp_params->skip_reset_flags, + SKIP_RESET_FLAGS); + + return 0; +} + /** * dpdmux_get_attributes() - Retrieve DPDMUX attributes * @mc_io: Pointer to MC portal's I/O object @@ -314,6 +397,7 @@ int dpdmux_get_attributes(struct fsl_mc_io *mc_io, attr->manip = rsp_params->manip; attr->num_ifs = le16_to_cpu(rsp_params->num_ifs); attr->mem_size = le16_to_cpu(rsp_params->mem_size); + attr->default_if = 
le16_to_cpu(rsp_params->default_if);

	return 0;
}

diff --git a/drivers/net/dpaa2/mc/fsl_dpdmux.h b/drivers/net/dpaa2/mc/fsl_dpdmux.h
index b809aade5d..f035f0f24e 100644
--- a/drivers/net/dpaa2/mc/fsl_dpdmux.h
+++ b/drivers/net/dpaa2/mc/fsl_dpdmux.h
@@ -79,6 +79,8 @@ enum dpdmux_method {
 * @method: Defines the operation method for the DPDMUX address table
 * @manip: Required manipulation operation
 * @num_ifs: Number of interfaces (excluding the uplink interface)
+ * @default_if: Default interface number (different from uplink,
+	maximum value num_ifs)
 * @adv: Advanced parameters; default is all zeros;
 *	use this structure to change default settings
 * @adv.options: DPDMUX options - combination of 'DPDMUX_OPT_' flags.
@@ -89,16 +91,20 @@ enum dpdmux_method {
 * @adv.max_vlan_ids: Maximum vlan ids allowed in the system -
 *	relevant only in case of working in mac+vlan method.
 *	0 - indicates default 16 vlan ids.
+ * @adv.mem_size: Size of the memory used for internal buffers expressed as
+ *	number of 256-byte buffers.
*/ struct dpdmux_cfg { enum dpdmux_method method; enum dpdmux_manip manip; uint16_t num_ifs; + uint16_t default_if; struct { uint64_t options; uint16_t max_dmat_entries; uint16_t max_mc_groups; uint16_t max_vlan_ids; + uint16_t mem_size; } adv; }; @@ -130,6 +136,29 @@ int dpdmux_reset(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token); +/** + *Setting 1 DPDMUX_RESET will not reset default interface + */ +#define DPDMUX_SKIP_DEFAULT_INTERFACE 0x01 +/** + *Setting 1 DPDMUX_RESET will not reset unicast rules + */ +#define DPDMUX_SKIP_UNICAST_RULES 0x02 +/** + *Setting 1 DPDMUX_RESET will not reset multicast rules + */ +#define DPDMUX_SKIP_MULTICAST_RULES 0x04 + +int dpdmux_set_resetable(struct fsl_mc_io *mc_io, + uint32_t cmd_flags, + uint16_t token, + uint8_t skip_reset_flags); + +int dpdmux_get_resetable(struct fsl_mc_io *mc_io, + uint32_t cmd_flags, + uint16_t token, + uint8_t *skip_reset_flags); + /** * struct dpdmux_attr - Structure representing DPDMUX attributes * @id: DPDMUX object ID @@ -138,6 +167,8 @@ int dpdmux_reset(struct fsl_mc_io *mc_io, * @manip: DPDMUX manipulation type * @num_ifs: Number of interfaces (excluding the uplink interface) * @mem_size: DPDMUX frame storage memory size + * @default_if: Default interface number (different from uplink, + maximum value num_ifs) */ struct dpdmux_attr { int id; @@ -146,6 +177,7 @@ struct dpdmux_attr { enum dpdmux_manip manip; uint16_t num_ifs; uint16_t mem_size; + uint16_t default_if; }; int dpdmux_get_attributes(struct fsl_mc_io *mc_io, diff --git a/drivers/net/dpaa2/mc/fsl_dpdmux_cmd.h b/drivers/net/dpaa2/mc/fsl_dpdmux_cmd.h index b6b8c38c41..2444e9a2e5 100644 --- a/drivers/net/dpaa2/mc/fsl_dpdmux_cmd.h +++ b/drivers/net/dpaa2/mc/fsl_dpdmux_cmd.h @@ -55,6 +55,9 @@ #define DPDMUX_CMDID_IF_SET_DEFAULT DPDMUX_CMD(0x0b8) #define DPDMUX_CMDID_IF_GET_DEFAULT DPDMUX_CMD(0x0b9) +#define DPDMUX_CMDID_SET_RESETABLE DPDMUX_CMD(0x0ba) +#define DPDMUX_CMDID_GET_RESETABLE DPDMUX_CMD(0x0bb) + #define 
DPDMUX_MASK(field) \
	GENMASK(DPDMUX_##field##_SHIFT + DPDMUX_##field##_SIZE - 1, \
		DPDMUX_##field##_SHIFT)
@@ -72,12 +75,13 @@ struct dpdmux_cmd_create {
	uint8_t method;
	uint8_t manip;
	uint16_t num_ifs;
-	uint32_t pad;
+	uint16_t default_if;
+	uint16_t pad;

	uint16_t adv_max_dmat_entries;
	uint16_t adv_max_mc_groups;
	uint16_t adv_max_vlan_ids;
-	uint16_t pad1;
+	uint16_t mem_size;
	uint64_t options;
};

@@ -100,7 +104,7 @@ struct dpdmux_rsp_get_attr {
	uint8_t manip;
	uint16_t num_ifs;
	uint16_t mem_size;
-	uint16_t pad;
+	uint16_t default_if;

	uint64_t pad1;

@@ -217,5 +221,17 @@ struct dpdmux_cmd_remove_custom_cls_entry {
	uint64_t key_iova;
	uint64_t mask_iova;
};
+
+#define DPDMUX_SKIP_RESET_FLAGS_SHIFT	0
+#define DPDMUX_SKIP_RESET_FLAGS_SIZE	3
+
+struct dpdmux_cmd_set_skip_reset_flags {
+	uint8_t skip_reset_flags;
+};
+
+struct dpdmux_rsp_get_skip_reset_flags {
+	uint8_t skip_reset_flags;
+};
+
#pragma pack(pop)
#endif /* _FSL_DPDMUX_CMD_H */

From patchwork Thu Feb 11 14:16:12 2021
X-Patchwork-Submitter: Hemant Agrawal
X-Patchwork-Id: 87867
From: Hemant Agrawal
To: dev@dpdk.org, ferruh.yigit@intel.com
Date: Thu, 11 Feb 2021 19:46:12 +0530
Message-Id: <20210211141620.12482-13-hemant.agrawal@nxp.com>
In-Reply-To: <20210211141620.12482-1-hemant.agrawal@nxp.com>
References: <20210120142723.14090-1-hemant.agrawal@nxp.com>
	<20210211141620.12482-1-hemant.agrawal@nxp.com>
Subject: [dpdk-dev] [PATCH v2 12/20] net/dpaa2: support dpdmux to not drop parse err pkts

DPDMUX should not drop parse error packets.
They shall be left to the decision of the connected DPNI interfaces.

Signed-off-by: Hemant Agrawal
---
 drivers/net/dpaa2/dpaa2_mux.c         |  18 ++++
 drivers/net/dpaa2/mc/dpdmux.c         |  37 +++++++++
 drivers/net/dpaa2/mc/fsl_dpdmux.h     | 113 +++++++++++++++++++++++++-
 drivers/net/dpaa2/mc/fsl_dpdmux_cmd.h |  25 ++++--
 4 files changed, 187 insertions(+), 6 deletions(-)

diff --git a/drivers/net/dpaa2/dpaa2_mux.c b/drivers/net/dpaa2/dpaa2_mux.c
index 1ff00ca8f7..811f417491 100644
--- a/drivers/net/dpaa2/dpaa2_mux.c
+++ b/drivers/net/dpaa2/dpaa2_mux.c
@@ -324,6 +324,24 @@ dpaa2_create_dpdmux_device(int vdev_fd __rte_unused,
		}
	}

+	if (maj_ver >= 6 && min_ver >= 9) {
+		struct dpdmux_error_cfg mux_err_cfg;
+
+		memset(&mux_err_cfg, 0, sizeof(mux_err_cfg));
+		mux_err_cfg.error_action = DPDMUX_ERROR_ACTION_CONTINUE;
+		mux_err_cfg.errors = DPDMUX_ERROR_DISC;
+
+		ret = dpdmux_if_set_errors_behavior(&dpdmux_dev->dpdmux,
+						    CMD_PRI_LOW,
+						    dpdmux_dev->token, dpdmux_id,
+						    &mux_err_cfg);
+		if (ret) {
+			DPAA2_PMD_ERR("dpdmux_if_set_errors_behavior %s err %d",
+				      __func__, ret);
+			goto init_err;
+		}
+	}
+
	dpdmux_dev->dpdmux_id = dpdmux_id;
	dpdmux_dev->num_ifs = attr.num_ifs;

diff --git a/drivers/net/dpaa2/mc/dpdmux.c b/drivers/net/dpaa2/mc/dpdmux.c
index 57c811c70f..93912ef9d3 100644
--- a/drivers/net/dpaa2/mc/dpdmux.c
+++ b/drivers/net/dpaa2/mc/dpdmux.c
@@ -1012,3 +1012,40 @@
	return 0;
}
+
+/**
+ * dpdmux_if_set_errors_behavior() - Set errors behavior
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPDMUX object
+ * @if_id:	Interface Identifier
+ * @cfg:	Errors configuration
+ *
+ * Provides a set of frame errors that will be rejected or accepted by the
+ * dpdmux interface. Frames with these errors will no longer be dropped by
+ * the dpdmux interface. When a frame has a parsing error, the distribution to
If the frame must be distributed using the + * information from a header that was not parsed due errors the frame may + * be discarded or end up on a default interface because needed data was not + * parsed properly. + * This function may be called numerous times with different error masks + * + * Return: '0' on Success; Error code otherwise. + */ +int dpdmux_if_set_errors_behavior(struct fsl_mc_io *mc_io, uint32_t cmd_flags, + uint16_t token, uint16_t if_id, struct dpdmux_error_cfg *cfg) +{ + struct mc_command cmd = { 0 }; + struct dpdmux_cmd_set_errors_behavior *cmd_params; + + /* prepare command */ + cmd.header = mc_encode_cmd_header(DPDMUX_CMDID_SET_ERRORS_BEHAVIOR, + cmd_flags, + token); + cmd_params = (struct dpdmux_cmd_set_errors_behavior *)cmd.params; + cmd_params->errors = cpu_to_le32(cfg->errors); + dpdmux_set_field(cmd_params->flags, ERROR_ACTION, cfg->error_action); + cmd_params->if_id = cpu_to_le16(if_id); + + /* send command to mc*/ + return mc_send_command(mc_io, &cmd); +} diff --git a/drivers/net/dpaa2/mc/fsl_dpdmux.h b/drivers/net/dpaa2/mc/fsl_dpdmux.h index f035f0f24e..7968161cea 100644 --- a/drivers/net/dpaa2/mc/fsl_dpdmux.h +++ b/drivers/net/dpaa2/mc/fsl_dpdmux.h @@ -39,6 +39,12 @@ int dpdmux_close(struct fsl_mc_io *mc_io, */ #define DPDMUX_OPT_CLS_MASK_SUPPORT 0x0000000000000020ULL +/** + * Automatic max frame length - maximum frame length for dpdmux interface will + * be changed automatically by connected dpni objects. 
+ */ +#define DPDMUX_OPT_AUTO_MAX_FRAME_LEN 0x0000000000000040ULL + #define DPDMUX_IRQ_INDEX_IF 0x0000 #define DPDMUX_IRQ_INDEX 0x0001 @@ -203,6 +209,7 @@ int dpdmux_set_max_frame_length(struct fsl_mc_io *mc_io, * @DPDMUX_CNT_EGR_FRAME: Counts egress frames * @DPDMUX_CNT_EGR_BYTE: Counts egress bytes * @DPDMUX_CNT_EGR_FRAME_DISCARD: Counts discarded egress frames + * @DPDMUX_CNT_ING_NO_BUFFER_DISCARD: Counts ingress no buffer discard frames */ enum dpdmux_counter_type { DPDMUX_CNT_ING_FRAME = 0x0, @@ -215,7 +222,8 @@ enum dpdmux_counter_type { DPDMUX_CNT_ING_BCAST_BYTES = 0x7, DPDMUX_CNT_EGR_FRAME = 0x8, DPDMUX_CNT_EGR_BYTE = 0x9, - DPDMUX_CNT_EGR_FRAME_DISCARD = 0xa + DPDMUX_CNT_EGR_FRAME_DISCARD = 0xa, + DPDMUX_CNT_ING_NO_BUFFER_DISCARD = 0xb, }; /** @@ -447,4 +455,107 @@ int dpdmux_get_api_version(struct fsl_mc_io *mc_io, uint16_t *major_ver, uint16_t *minor_ver); +/** + * Discard bit. This bit must be used together with other bits in + * DPDMUX_ERROR_ACTION_CONTINUE to disable discarding of frames containing + * errors + */ +#define DPDMUX_ERROR_DISC 0x80000000 +/** + * MACSEC is enabled + */ +#define DPDMUX_ERROR_MS 0x40000000 +/** + * PTP event frame + */ +#define DPDMUX_ERROR_PTP 0x08000000 +/** + * This is a multicast frame + */ +#define DPDMUX_ERROR_MC 0x04000000 +/** + * This is a broadcast frame + */ +#define DPDMUX_ERROR_BC 0x02000000 +/** + * Invalid Key composition or key size error + */ +#define DPDMUX_ERROR_KSE 0x00040000 +/** + * Extract out of frame header + */ +#define DPDMUX_ERROR_EOFHE 0x00020000 +/** + * Maximum number of chained lookups is reached + */ +#define DPDMUX_ERROR_MNLE 0x00010000 +/** + * Invalid table ID + */ +#define DPDMUX_ERROR_TIDE 0x00008000 +/** + * Policer initialization entry error + */ +#define DPDMUX_ERROR_PIEE 0x00004000 +/** + * Frame length error + */ +#define DPDMUX_ERROR_FLE 0x00002000 +/** + * Frame physical error + */ +#define DPDMUX_ERROR_FPE 0x00001000 +/** + * Cycle limit is exceeded and frame parsing is forced 
to terminate early + */ +#define DPDMUX_ERROR_PTE 0x00000080 +/** + * Invalid softparse instruction is encountered + */ +#define DPDMUX_ERROR_ISP 0x00000040 +/** + * Parsing header error + */ +#define DPDMUX_ERROR_PHE 0x00000020 +/* + * Block limit is exceeded. Maximum data that can be read and parsed is 256 + * bytes. + * Parser will set this bit if it needs more that this limit to parse. + */ +#define DPDMUX_ERROR_BLE 0x00000010 +/** + * L3 checksum validation + */ +#define DPDMUX__ERROR_L3CV 0x00000008 +/** + * L3 checksum error + */ +#define DPDMUX__ERROR_L3CE 0x00000004 +/** + * L4 checksum validation + */ +#define DPDMUX__ERROR_L4CV 0x00000002 +/** + * L4 checksum error + */ +#define DPDMUX__ERROR_L4CE 0x00000001 + +enum dpdmux_error_action { + DPDMUX_ERROR_ACTION_DISCARD = 0, + DPDMUX_ERROR_ACTION_CONTINUE = 1 +}; + +/** + * Configure how dpdmux interface behaves on errors + * @errors - or'ed combination of DPDMUX_ERROR_* + * @action - set to DPDMUX_ERROR_ACTION_DISCARD or DPDMUX_ERROR_ACTION_CONTINUE + */ +struct dpdmux_error_cfg { + uint32_t errors; + enum dpdmux_error_action error_action; +}; + +int dpdmux_if_set_errors_behavior(struct fsl_mc_io *mc_io, uint32_t cmd_flags, + uint16_t token, uint16_t if_id, struct dpdmux_error_cfg *cfg); + #endif /* __FSL_DPDMUX_H */ diff --git a/drivers/net/dpaa2/mc/fsl_dpdmux_cmd.h b/drivers/net/dpaa2/mc/fsl_dpdmux_cmd.h index 2444e9a2e5..2ab4d75dfb 100644 --- a/drivers/net/dpaa2/mc/fsl_dpdmux_cmd.h +++ b/drivers/net/dpaa2/mc/fsl_dpdmux_cmd.h @@ -9,30 +9,35 @@ /* DPDMUX Version */ #define DPDMUX_VER_MAJOR 6 -#define DPDMUX_VER_MINOR 3 +#define DPDMUX_VER_MINOR 9 #define DPDMUX_CMD_BASE_VERSION 1 #define DPDMUX_CMD_VERSION_2 2 +#define DPDMUX_CMD_VERSION_3 3 +#define DPDMUX_CMD_VERSION_4 4 #define DPDMUX_CMD_ID_OFFSET 4 #define DPDMUX_CMD(id) (((id) << DPDMUX_CMD_ID_OFFSET) |\ DPDMUX_CMD_BASE_VERSION) #define DPDMUX_CMD_V2(id) (((id) << DPDMUX_CMD_ID_OFFSET) | \ DPDMUX_CMD_VERSION_2) +#define DPDMUX_CMD_V3(id) (((id) << 
DPDMUX_CMD_ID_OFFSET) |\
+				DPDMUX_CMD_VERSION_3)
+#define DPDMUX_CMD_V4(id)	(((id) << DPDMUX_CMD_ID_OFFSET) |\
+				DPDMUX_CMD_VERSION_4)

/* Command IDs */
#define DPDMUX_CMDID_CLOSE		DPDMUX_CMD(0x800)
#define DPDMUX_CMDID_OPEN		DPDMUX_CMD(0x806)
-#define DPDMUX_CMDID_CREATE		DPDMUX_CMD(0x906)
+#define DPDMUX_CMDID_CREATE		DPDMUX_CMD_V4(0x906)
#define DPDMUX_CMDID_DESTROY		DPDMUX_CMD(0x986)
#define DPDMUX_CMDID_GET_API_VERSION	DPDMUX_CMD(0xa06)

#define DPDMUX_CMDID_ENABLE		DPDMUX_CMD(0x002)
#define DPDMUX_CMDID_DISABLE		DPDMUX_CMD(0x003)
-#define DPDMUX_CMDID_GET_ATTR		DPDMUX_CMD(0x004)
+#define DPDMUX_CMDID_GET_ATTR		DPDMUX_CMD_V2(0x004)
#define DPDMUX_CMDID_RESET		DPDMUX_CMD(0x005)
#define DPDMUX_CMDID_IS_ENABLED		DPDMUX_CMD(0x006)
-
#define DPDMUX_CMDID_SET_MAX_FRAME_LENGTH	DPDMUX_CMD(0x0a1)

#define DPDMUX_CMDID_UL_RESET_COUNTERS	DPDMUX_CMD(0x0a3)
@@ -49,7 +54,7 @@
#define DPDMUX_CMDID_IF_GET_LINK_STATE	DPDMUX_CMD_V2(0x0b4)

#define DPDMUX_CMDID_SET_CUSTOM_KEY	DPDMUX_CMD(0x0b5)
-#define DPDMUX_CMDID_ADD_CUSTOM_CLS_ENTRY	DPDMUX_CMD(0x0b6)
+#define DPDMUX_CMDID_ADD_CUSTOM_CLS_ENTRY	DPDMUX_CMD_V2(0x0b6)
#define DPDMUX_CMDID_REMOVE_CUSTOM_CLS_ENTRY	DPDMUX_CMD(0x0b7)

#define DPDMUX_CMDID_IF_SET_DEFAULT	DPDMUX_CMD(0x0b8)
@@ -57,6 +62,7 @@
#define DPDMUX_CMDID_SET_RESETABLE	DPDMUX_CMD(0x0ba)
#define DPDMUX_CMDID_GET_RESETABLE	DPDMUX_CMD(0x0bb)
+#define DPDMUX_CMDID_SET_ERRORS_BEHAVIOR	DPDMUX_CMD(0x0bf)

#define DPDMUX_MASK(field) \
	GENMASK(DPDMUX_##field##_SHIFT + DPDMUX_##field##_SIZE - 1, \
@@ -233,5 +239,14 @@ struct dpdmux_rsp_get_skip_reset_flags {
	uint8_t skip_reset_flags;
};

+#define DPDMUX_ERROR_ACTION_SHIFT	0
+#define DPDMUX_ERROR_ACTION_SIZE	4
+
+struct dpdmux_cmd_set_errors_behavior {
+	uint32_t errors;
+	uint16_t flags;
+	uint16_t if_id;
+};
+
#pragma pack(pop)
#endif /* _FSL_DPDMUX_CMD_H */

From patchwork Thu Feb 11 14:16:13 2021
X-Patchwork-Submitter: Hemant Agrawal
X-Patchwork-Id: 87868
From: Hemant Agrawal
To: dev@dpdk.org, ferruh.yigit@intel.com
Date: Thu, 11 Feb 2021 19:46:13 +0530
Message-Id: <20210211141620.12482-14-hemant.agrawal@nxp.com>
In-Reply-To: <20210211141620.12482-1-hemant.agrawal@nxp.com>
References: <20210120142723.14090-1-hemant.agrawal@nxp.com>
	<20210211141620.12482-1-hemant.agrawal@nxp.com>
Subject: [dpdk-dev] [PATCH v2 13/20] net/dpaa2: add device args for enable Tx confirmation

Add support for the dev arg ``fslmc:dpni.1,drv_tx_conf=1``.
It is optional for dpaa2 to use TX confirmation.
DPAA2 can free the transmitted packets. However some use-case requires the TX confirmation to be explicit. Signed-off-by: Hemant Agrawal --- doc/guides/nics/dpaa2.rst | 4 ++++ drivers/net/dpaa2/dpaa2_ethdev.c | 35 +++++++++++++++++--------------- drivers/net/dpaa2/dpaa2_ethdev.h | 5 +++-- 3 files changed, 26 insertions(+), 18 deletions(-) diff --git a/doc/guides/nics/dpaa2.rst b/doc/guides/nics/dpaa2.rst index 893e87e714..4eec8fcdd9 100644 --- a/doc/guides/nics/dpaa2.rst +++ b/doc/guides/nics/dpaa2.rst @@ -484,6 +484,10 @@ for details. of the packet pull command which is issued in the previous cycle. e.g. ``fslmc:dpni.1,drv_no_prefetch=1`` +* Use dev arg option ``drv_tx_conf=1`` to enable TX confirmation mode. + In this mode tx conf queues need to be polled to free the buffers. + e.g. ``fslmc:dpni.1,drv_tx_conf=1`` + Enabling logs ------------- diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c index 490eb4b3f4..4b3eb7f5c9 100644 --- a/drivers/net/dpaa2/dpaa2_ethdev.c +++ b/drivers/net/dpaa2/dpaa2_ethdev.c @@ -31,6 +31,7 @@ #define DRIVER_LOOPBACK_MODE "drv_loopback" #define DRIVER_NO_PREFETCH_MODE "drv_no_prefetch" +#define DRIVER_TX_CONF "drv_tx_conf" #define CHECK_INTERVAL 100 /* 100ms */ #define MAX_REPEAT_TIME 90 /* 9s (90 * 100ms) in total */ @@ -363,7 +364,7 @@ dpaa2_alloc_rx_tx_queues(struct rte_eth_dev *dev) PMD_INIT_FUNC_TRACE(); num_rxqueue_per_tc = (priv->nb_rx_queues / priv->num_rx_tc); - if (priv->tx_conf_en) + if (priv->flags & DPAA2_TX_CONF_ENABLE) tot_queues = priv->nb_rx_queues + 2 * priv->nb_tx_queues; else tot_queues = priv->nb_rx_queues + priv->nb_tx_queues; @@ -401,7 +402,7 @@ dpaa2_alloc_rx_tx_queues(struct rte_eth_dev *dev) goto fail_tx; } - if (priv->tx_conf_en) { + if (priv->flags & DPAA2_TX_CONF_ENABLE) { /*Setup tx confirmation queues*/ for (i = 0; i < priv->nb_tx_queues; i++) { mc_q->eth_data = dev->data; @@ -483,7 +484,7 @@ dpaa2_free_rx_tx_queues(struct rte_eth_dev *dev) dpaa2_q = (struct dpaa2_queue 
*)priv->tx_vq[i]; rte_free(dpaa2_q->cscn); } - if (priv->tx_conf_en) { + if (priv->flags & DPAA2_TX_CONF_ENABLE) { /* cleanup tx conf queue storage */ for (i = 0; i < priv->nb_tx_queues; i++) { dpaa2_q = (struct dpaa2_queue *) @@ -857,7 +858,7 @@ dpaa2_dev_tx_queue_setup(struct rte_eth_dev *dev, if (tx_queue_id == 0) { /*Set tx-conf and error configuration*/ - if (priv->tx_conf_en) + if (priv->flags & DPAA2_TX_CONF_ENABLE) ret = dpni_set_tx_confirmation_mode(dpni, CMD_PRI_LOW, priv->token, DPNI_CONF_AFFINE); @@ -918,7 +919,7 @@ dpaa2_dev_tx_queue_setup(struct rte_eth_dev *dev, dpaa2_q->cb_eqresp_free = dpaa2_dev_free_eqresp_buf; dev->data->tx_queues[tx_queue_id] = dpaa2_q; - if (priv->tx_conf_en) { + if (priv->flags & DPAA2_TX_CONF_ENABLE) { dpaa2_q->tx_conf_queue = dpaa2_tx_conf_q; options = options | DPNI_QUEUE_OPT_USER_CTX; tx_conf_cfg.user_context = (size_t)(dpaa2_q); @@ -2614,10 +2615,14 @@ dpaa2_dev_init(struct rte_eth_dev *eth_dev) priv->max_vlan_filters = attr.vlan_filter_entries; priv->flags = 0; #if defined(RTE_LIBRTE_IEEE1588) - priv->tx_conf_en = 1; -#else - priv->tx_conf_en = 0; + printf("DPDK IEEE1588 is enabled\n"); + priv->flags |= DPAA2_TX_CONF_ENABLE; #endif + /* Used with ``fslmc:dpni.1,drv_tx_conf=1`` */ + if (dpaa2_get_devargs(dev->devargs, DRIVER_TX_CONF)) { + priv->flags |= DPAA2_TX_CONF_ENABLE; + DPAA2_PMD_INFO("TX_CONF Enabled"); + } /* Allocate memory for hardware structure for queues */ ret = dpaa2_alloc_rx_tx_queues(eth_dev); @@ -2650,7 +2655,7 @@ dpaa2_dev_init(struct rte_eth_dev *eth_dev) /* ... tx buffer layout ... */ memset(&layout, 0, sizeof(struct dpni_buffer_layout)); - if (priv->tx_conf_en) { + if (priv->flags & DPAA2_TX_CONF_ENABLE) { layout.options = DPNI_BUF_LAYOUT_OPT_FRAME_STATUS | DPNI_BUF_LAYOUT_OPT_TIMESTAMP; layout.pass_timestamp = true; @@ -2667,13 +2672,11 @@ dpaa2_dev_init(struct rte_eth_dev *eth_dev) /* ... tx-conf and error buffer layout ... 
 */
	memset(&layout, 0, sizeof(struct dpni_buffer_layout));
-	if (priv->tx_conf_en) {
-		layout.options = DPNI_BUF_LAYOUT_OPT_FRAME_STATUS |
-				 DPNI_BUF_LAYOUT_OPT_TIMESTAMP;
+	if (priv->flags & DPAA2_TX_CONF_ENABLE) {
+		layout.options = DPNI_BUF_LAYOUT_OPT_TIMESTAMP;
		layout.pass_timestamp = true;
-	} else {
-		layout.options = DPNI_BUF_LAYOUT_OPT_FRAME_STATUS;
	}
+	layout.options |= DPNI_BUF_LAYOUT_OPT_FRAME_STATUS;
	layout.pass_frame_status = 1;
	ret = dpni_set_buffer_layout(dpni_dev, CMD_PRI_LOW, priv->token,
				     DPNI_QUEUE_TX_CONFIRM, &layout);
@@ -2807,7 +2810,6 @@ rte_dpaa2_probe(struct rte_dpaa2_driver *dpaa2_drv,
		eth_dev->data->dev_private = (void *)dev_priv;
		/* Store a pointer to eth_dev in dev_private */
		dev_priv->eth_dev = eth_dev;
-		dev_priv->tx_conf_en = 0;
	} else {
		eth_dev = rte_eth_dev_attach_secondary(dpaa2_dev->device.name);
		if (!eth_dev) {
@@ -2860,5 +2862,6 @@ static struct rte_dpaa2_driver rte_dpaa2_pmd = {
RTE_PMD_REGISTER_DPAA2(net_dpaa2, rte_dpaa2_pmd);
RTE_PMD_REGISTER_PARAM_STRING(net_dpaa2, DRIVER_LOOPBACK_MODE "= "
-		DRIVER_NO_PREFETCH_MODE "=");
+		DRIVER_NO_PREFETCH_MODE "="
+		DRIVER_TX_CONF "=");
RTE_LOG_REGISTER(dpaa2_logtype_pmd, pmd.net.dpaa2, NOTICE);

diff --git a/drivers/net/dpaa2/dpaa2_ethdev.h b/drivers/net/dpaa2/dpaa2_ethdev.h
index 9837eb62c8..becdb50055 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.h
+++ b/drivers/net/dpaa2/dpaa2_ethdev.h
@@ -60,6 +60,8 @@
 /* Disable RX tail drop, default is enable */
 #define DPAA2_RX_TAILDROP_OFF	0x04
+/* Tx confirmation enabled */
+#define DPAA2_TX_CONF_ENABLE	0x08

 #define DPAA2_RSS_OFFLOAD_ALL ( \
	ETH_RSS_L2_PAYLOAD | \
@@ -152,14 +154,13 @@ struct dpaa2_dev_priv {
	void *tx_vq[MAX_TX_QUEUES];
	struct dpaa2_bp_list *bp_list;

X-Patchwork-Id: 87869
From: Hemant Agrawal
To: dev@dpdk.org, ferruh.yigit@intel.com
Date: Thu, 11 Feb 2021 19:46:14 +0530
Message-Id: <20210211141620.12482-15-hemant.agrawal@nxp.com>
In-Reply-To: <20210211141620.12482-1-hemant.agrawal@nxp.com>
References: <20210120142723.14090-1-hemant.agrawal@nxp.com>
	<20210211141620.12482-1-hemant.agrawal@nxp.com>
Subject: [dpdk-dev] [PATCH v2 14/20] net/dpaa2: optionally enable error queues

From: Nipun Gupta

In case error packets are received by the Ethernet interface, this patch
enables the reception of packets on the error queue, printing the error
and the error packet.
To enable it, use the dev arg: fslmc:dpni.1,drv_error_queue=1

Signed-off-by: Nipun Gupta
Acked-by: Hemant Agrawal
---
 doc/guides/nics/dpaa2.rst               |  7 ++
 drivers/bus/fslmc/portal/dpaa2_hw_pvt.h |  1 +
 drivers/net/dpaa2/dpaa2_ethdev.c        | 67 +++++++++++++++--
 drivers/net/dpaa2/dpaa2_ethdev.h        |  5 +-
 drivers/net/dpaa2/dpaa2_rxtx.c          | 97 ++++++++++++++++++++++++-
 5 files changed, 169 insertions(+), 8 deletions(-)

diff --git a/doc/guides/nics/dpaa2.rst b/doc/guides/nics/dpaa2.rst
index 4eec8fcdd9..1e698bfbd8 100644
--- a/doc/guides/nics/dpaa2.rst
+++ b/doc/guides/nics/dpaa2.rst
@@ -488,6 +488,13 @@ for details.
   In this mode tx conf queues need to be polled to free the buffers.
   e.g. ``fslmc:dpni.1,drv_tx_conf=1``

+* Use dev arg option ``drv_error_queue=1`` to enable packets in the error queue.
+  DPAA2 hardware drops the error packet in hardware. This option enables the
+  hardware to not drop the error packet and lets the driver dump the error
+  packets, so that the user can check what is wrong with those packets.
+  e.g.
``fslmc:dpni.1,drv_error_queue=1`` + + Enabling logs ------------- diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h b/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h index 0f15750b6c..037c841ef5 100644 --- a/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h +++ b/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h @@ -314,6 +314,7 @@ enum qbman_fd_format { #define DPAA2_GET_FD_FLC(fd) \ (((uint64_t)((fd)->simple.flc_hi) << 32) + (fd)->simple.flc_lo) #define DPAA2_GET_FD_ERR(fd) ((fd)->simple.ctrl & 0x000000FF) +#define DPAA2_GET_FD_FA_ERR(fd) ((fd)->simple.ctrl & 0x00000040) #define DPAA2_GET_FLE_OFFSET(fle) (((fle)->fin_bpid_offset & 0x0FFF0000) >> 16) #define DPAA2_SET_FLE_SG_EXT(fle) ((fle)->fin_bpid_offset |= (uint64_t)1 << 29) #define DPAA2_IS_SET_FLE_SG_EXT(fle) \ diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c index 4b3eb7f5c9..412f970800 100644 --- a/drivers/net/dpaa2/dpaa2_ethdev.c +++ b/drivers/net/dpaa2/dpaa2_ethdev.c @@ -32,6 +32,7 @@ #define DRIVER_LOOPBACK_MODE "drv_loopback" #define DRIVER_NO_PREFETCH_MODE "drv_no_prefetch" #define DRIVER_TX_CONF "drv_tx_conf" +#define DRIVER_ERROR_QUEUE "drv_err_queue" #define CHECK_INTERVAL 100 /* 100ms */ #define MAX_REPEAT_TIME 90 /* 9s (90 * 100ms) in total */ @@ -71,6 +72,9 @@ bool dpaa2_enable_ts[RTE_MAX_ETHPORTS]; uint64_t dpaa2_timestamp_rx_dynflag; int dpaa2_timestamp_dynfield_offset = -1; +/* Enable error queue */ +bool dpaa2_enable_err_queue; + struct rte_dpaa2_xstats_name_off { char name[RTE_ETH_XSTATS_NAME_SIZE]; uint8_t page_id; /* dpni statistics page id */ @@ -391,6 +395,25 @@ dpaa2_alloc_rx_tx_queues(struct rte_eth_dev *dev) goto fail; } + if (dpaa2_enable_err_queue) { + priv->rx_err_vq = rte_zmalloc("dpni_rx_err", + sizeof(struct dpaa2_queue), 0); + + dpaa2_q = (struct dpaa2_queue *)priv->rx_err_vq; + dpaa2_q->q_storage = rte_malloc("err_dq_storage", + sizeof(struct queue_storage_info_t) * + RTE_MAX_LCORE, + RTE_CACHE_LINE_SIZE); + if (!dpaa2_q->q_storage) + goto fail; + + 
memset(dpaa2_q->q_storage, 0, + sizeof(struct queue_storage_info_t)); + for (i = 0; i < RTE_MAX_LCORE; i++) + if (dpaa2_alloc_dq_storage(&dpaa2_q->q_storage[i])) + goto fail; + } + for (i = 0; i < priv->nb_tx_queues; i++) { mc_q->eth_data = dev->data; mc_q->flow_id = 0xffff; @@ -458,6 +481,14 @@ dpaa2_alloc_rx_tx_queues(struct rte_eth_dev *dev) rte_free(dpaa2_q->q_storage); priv->rx_vq[i--] = NULL; } + + if (dpaa2_enable_err_queue) { + dpaa2_q = (struct dpaa2_queue *)priv->rx_err_vq; + if (dpaa2_q->q_storage) + dpaa2_free_dq_storage(dpaa2_q->q_storage); + rte_free(dpaa2_q->q_storage); + } + rte_free(mc_q); return -1; } @@ -1163,11 +1194,31 @@ dpaa2_dev_start(struct rte_eth_dev *dev) dpaa2_q->fqid = qid.fqid; } - /*checksum errors, send them to normal path and set it in annotation */ - err_cfg.errors = DPNI_ERROR_L3CE | DPNI_ERROR_L4CE; - err_cfg.errors |= DPNI_ERROR_PHE; + if (dpaa2_enable_err_queue) { + ret = dpni_get_queue(dpni, CMD_PRI_LOW, priv->token, + DPNI_QUEUE_RX_ERR, 0, 0, &cfg, &qid); + if (ret) { + DPAA2_PMD_ERR("Error getting rx err flow information: err=%d", + ret); + return ret; + } + dpaa2_q = (struct dpaa2_queue *)priv->rx_err_vq; + dpaa2_q->fqid = qid.fqid; + dpaa2_q->eth_data = dev->data; - err_cfg.error_action = DPNI_ERROR_ACTION_CONTINUE; + err_cfg.errors = DPNI_ERROR_DISC; + err_cfg.error_action = DPNI_ERROR_ACTION_SEND_TO_ERROR_QUEUE; + } else { + /* checksum errors, send them to normal path + * and set it in annotation + */ + err_cfg.errors = DPNI_ERROR_L3CE | DPNI_ERROR_L4CE; + + /* if packet with parse error are not to be dropped */ + err_cfg.errors |= DPNI_ERROR_PHE; + + err_cfg.error_action = DPNI_ERROR_ACTION_CONTINUE; + } err_cfg.set_frame_annotation = true; ret = dpni_set_errors_behavior(dpni, CMD_PRI_LOW, @@ -2624,6 +2675,11 @@ dpaa2_dev_init(struct rte_eth_dev *eth_dev) DPAA2_PMD_INFO("TX_CONF Enabled"); } + if (dpaa2_get_devargs(dev->devargs, DRIVER_ERROR_QUEUE)) { + dpaa2_enable_err_queue = 1; + DPAA2_PMD_INFO("Enable error 
queue"); + } + /* Allocate memory for hardware structure for queues */ ret = dpaa2_alloc_rx_tx_queues(eth_dev); if (ret) { @@ -2863,5 +2919,6 @@ RTE_PMD_REGISTER_DPAA2(net_dpaa2, rte_dpaa2_pmd); RTE_PMD_REGISTER_PARAM_STRING(net_dpaa2, DRIVER_LOOPBACK_MODE "= " DRIVER_NO_PREFETCH_MODE "=" - DRIVER_TX_CONF "="); + DRIVER_TX_CONF "=" + DRIVER_ERROR_QUEUE "="); RTE_LOG_REGISTER(dpaa2_logtype_pmd, pmd.net.dpaa2, NOTICE); diff --git a/drivers/net/dpaa2/dpaa2_ethdev.h b/drivers/net/dpaa2/dpaa2_ethdev.h index becdb50055..28341178e4 100644 --- a/drivers/net/dpaa2/dpaa2_ethdev.h +++ b/drivers/net/dpaa2/dpaa2_ethdev.h @@ -1,7 +1,7 @@ /* SPDX-License-Identifier: BSD-3-Clause * * Copyright (c) 2015-2016 Freescale Semiconductor, Inc. All rights reserved. - * Copyright 2016-2020 NXP + * Copyright 2016-2021 NXP * */ @@ -117,6 +117,8 @@ extern enum rte_filter_type dpaa2_filter_type; extern const struct rte_tm_ops dpaa2_tm_ops; +extern bool dpaa2_enable_err_queue; + #define IP_ADDRESS_OFFSET_INVALID (-1) struct dpaa2_key_info { @@ -154,6 +156,7 @@ struct dpaa2_dev_priv { void *tx_vq[MAX_TX_QUEUES]; struct dpaa2_bp_list *bp_list; /** #include #include +#include #include #include @@ -550,6 +551,93 @@ eth_copy_mbuf_to_fd(struct rte_mbuf *mbuf, return 0; } +static void +dump_err_pkts(struct dpaa2_queue *dpaa2_q) +{ + /* Function receive frames for a given device and VQ */ + struct qbman_result *dq_storage; + uint32_t fqid = dpaa2_q->fqid; + int ret, num_rx = 0, num_pulled; + uint8_t pending, status; + struct qbman_swp *swp; + const struct qbman_fd *fd; + struct qbman_pull_desc pulldesc; + struct rte_eth_dev_data *eth_data = dpaa2_q->eth_data; + uint32_t lcore_id = rte_lcore_id(); + void *v_addr, *hw_annot_addr; + struct dpaa2_fas *fas; + + if (unlikely(!DPAA2_PER_LCORE_DPIO)) { + ret = dpaa2_affine_qbman_swp(); + if (ret) { + DPAA2_PMD_ERR("Failed to allocate IO portal, tid: %d\n", + rte_gettid()); + return; + } + } + swp = DPAA2_PER_LCORE_PORTAL; + + dq_storage = 
dpaa2_q->q_storage[lcore_id].dq_storage[0]; + qbman_pull_desc_clear(&pulldesc); + qbman_pull_desc_set_fq(&pulldesc, fqid); + qbman_pull_desc_set_storage(&pulldesc, dq_storage, + (size_t)(DPAA2_VADDR_TO_IOVA(dq_storage)), 1); + qbman_pull_desc_set_numframes(&pulldesc, dpaa2_dqrr_size); + + while (1) { + if (qbman_swp_pull(swp, &pulldesc)) { + DPAA2_PMD_DP_DEBUG("VDQ command is not issued.QBMAN is busy\n"); + /* Portal was busy, try again */ + continue; + } + break; + } + + /* Check if the previous issued command is completed. */ + while (!qbman_check_command_complete(dq_storage)) + ; + + num_pulled = 0; + pending = 1; + do { + /* Loop until the dq_storage is updated with + * new token by QBMAN + */ + while (!qbman_check_new_result(dq_storage)) + ; + + /* Check whether Last Pull command is Expired and + * setting Condition for Loop termination + */ + if (qbman_result_DQ_is_pull_complete(dq_storage)) { + pending = 0; + /* Check for valid frame. */ + status = qbman_result_DQ_flags(dq_storage); + if (unlikely((status & + QBMAN_DQ_STAT_VALIDFRAME) == 0)) + continue; + } + fd = qbman_result_DQ_fd(dq_storage); + v_addr = DPAA2_IOVA_TO_VADDR(DPAA2_GET_FD_ADDR(fd)); + hw_annot_addr = (void *)((size_t)v_addr + DPAA2_FD_PTA_SIZE); + fas = hw_annot_addr; + + DPAA2_PMD_ERR("\n\n[%d] error packet on port[%d]:" + " fd_off: %d, fd_err: %x, fas_status: %x", + rte_lcore_id(), eth_data->port_id, + DPAA2_GET_FD_OFFSET(fd), DPAA2_GET_FD_ERR(fd), + fas->status); + rte_hexdump(stderr, "Error packet", v_addr, + DPAA2_GET_FD_OFFSET(fd) + DPAA2_GET_FD_LEN(fd)); + + dq_storage++; + num_rx++; + num_pulled++; + } while (pending); + + dpaa2_q->err_pkts += num_rx; +} + /* This function assumes that caller will be keep the same value for nb_pkts * across calls per queue, if that is not the case, better use non-prefetch * version of rx call. 
@@ -570,9 +658,10 @@ dpaa2_dev_prefetch_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) struct qbman_pull_desc pulldesc; struct queue_storage_info_t *q_storage = dpaa2_q->q_storage; struct rte_eth_dev_data *eth_data = dpaa2_q->eth_data; -#if defined(RTE_LIBRTE_IEEE1588) struct dpaa2_dev_priv *priv = eth_data->dev_private; -#endif + + if (unlikely(dpaa2_enable_err_queue)) + dump_err_pkts(priv->rx_err_vq); if (unlikely(!DPAA2_PER_LCORE_ETHRX_DPIO)) { ret = dpaa2_affine_qbman_ethrx_swp(); @@ -807,6 +896,10 @@ dpaa2_dev_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) const struct qbman_fd *fd; struct qbman_pull_desc pulldesc; struct rte_eth_dev_data *eth_data = dpaa2_q->eth_data; + struct dpaa2_dev_priv *priv = eth_data->dev_private; + + if (unlikely(dpaa2_enable_err_queue)) + dump_err_pkts(priv->rx_err_vq); if (unlikely(!DPAA2_PER_LCORE_DPIO)) { ret = dpaa2_affine_qbman_swp(); From patchwork Thu Feb 11 14:16:15 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Hemant Agrawal X-Patchwork-Id: 87870 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 5AAF0A054A; Thu, 11 Feb 2021 15:30:09 +0100 (CET) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 448891CC5A7; Thu, 11 Feb 2021 15:28:20 +0100 (CET) Received: from inva021.nxp.com (inva021.nxp.com [92.121.34.21]) by mails.dpdk.org (Postfix) with ESMTP id 644071CC566 for ; Thu, 11 Feb 2021 15:28:06 +0100 (CET) Received: from inva021.nxp.com (localhost [127.0.0.1]) by inva021.eu-rdc02.nxp.com (Postfix) with ESMTP id E24A52008DC; Thu, 11 Feb 2021 15:28:05 +0100 (CET) Received: from invc005.ap-rdc01.nxp.com (invc005.ap-rdc01.nxp.com [165.114.16.14]) by inva021.eu-rdc02.nxp.com (Postfix) with ESMTP id 
A34C3200807; Thu, 11 Feb 2021 15:28:04 +0100 (CET) From: Hemant Agrawal To: dev@dpdk.org, ferruh.yigit@intel.com Date: Thu, 11 Feb 2021 19:46:15 +0530 Message-Id: <20210211141620.12482-16-hemant.agrawal@nxp.com> In-Reply-To: <20210211141620.12482-1-hemant.agrawal@nxp.com> References: <20210120142723.14090-1-hemant.agrawal@nxp.com> <20210211141620.12482-1-hemant.agrawal@nxp.com> Subject: [dpdk-dev] [PATCH v2 15/20] mempool/dpaa2: support stats for secondary process Access to a DPAA2 DPBP object requires a valid MCP object pointer. In the secondary process, we need to use the local MCP pointer instead of the primary process one.
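The idea behind the fix in this patch can be sketched in isolation as follows. This is a minimal model, not the driver code: `struct fsl_mc_io`, `dpaa2_get_mcp_ptr()` and `MC_PORTAL_INDEX` are real names from the FSLMC bus driver, but the per-process portal table and the helper below are illustrative stand-ins.

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative stand-in: in the driver, the MC command struct carries a
 * pointer to the MC portal registers mapped in the current process.
 */
struct fsl_mc_io { void *regs; };

#define MC_PORTAL_INDEX 0
static void *local_mcp[1];   /* per-process portal table (hypothetical) */

/* Hypothetical minimal version of dpaa2_get_mcp_ptr(): return the portal
 * mapped in *this* process, not the address cached by the primary.
 */
static void *dpaa2_get_mcp_ptr(int idx)
{
	return local_mcp[idx];
}

/* Build an mc_io from the local portal instead of reusing the dpbp
 * handle cached by the primary process, whose regs pointer is only
 * valid in the primary's address space.
 */
static struct fsl_mc_io make_local_mc_io(void)
{
	struct fsl_mc_io mc_io;

	mc_io.regs = dpaa2_get_mcp_ptr(MC_PORTAL_INDEX);
	return mc_io;
}
```

The actual patch applies exactly this substitution in `rte_hw_mbuf_get_count()`, passing the locally built `mc_io` to `dpbp_get_num_free_bufs()`.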
Signed-off-by: Nipun Gupta Signed-off-by: Hemant Agrawal --- drivers/mempool/dpaa2/dpaa2_hw_mempool.c | 8 +++++++- 1 file changed, 7 insertions(+), 1 deletion(-) diff --git a/drivers/mempool/dpaa2/dpaa2_hw_mempool.c b/drivers/mempool/dpaa2/dpaa2_hw_mempool.c index ca49a8d42a..bc146e4ce1 100644 --- a/drivers/mempool/dpaa2/dpaa2_hw_mempool.c +++ b/drivers/mempool/dpaa2/dpaa2_hw_mempool.c @@ -393,6 +393,7 @@ rte_hw_mbuf_get_count(const struct rte_mempool *mp) unsigned int num_of_bufs = 0; struct dpaa2_bp_info *bp_info; struct dpaa2_dpbp_dev *dpbp_node; + struct fsl_mc_io mc_io; if (!mp || !mp->pool_data) { DPAA2_MEMPOOL_ERR("Invalid mempool provided"); @@ -402,7 +403,12 @@ rte_hw_mbuf_get_count(const struct rte_mempool *mp) bp_info = (struct dpaa2_bp_info *)mp->pool_data; dpbp_node = bp_info->bp_list->buf_pool.dpbp_node; - ret = dpbp_get_num_free_bufs(&dpbp_node->dpbp, CMD_PRI_LOW, + /* In case as secondary process access stats, MCP portal in priv-hw may + * have primary process address. Need the secondary process based MCP + * portal address for this object. 
+ */ + mc_io.regs = dpaa2_get_mcp_ptr(MC_PORTAL_INDEX); + ret = dpbp_get_num_free_bufs(&mc_io, CMD_PRI_LOW, dpbp_node->token, &num_of_bufs); if (ret) { DPAA2_MEMPOOL_ERR("Unable to obtain free buf count (err=%d)", From patchwork Thu Feb 11 14:16:16 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Hemant Agrawal X-Patchwork-Id: 87871 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 4F8C8A054A; Thu, 11 Feb 2021 15:30:17 +0100 (CET) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 875A71CC5AC; Thu, 11 Feb 2021 15:28:21 +0100 (CET) Received: from inva020.nxp.com (inva020.nxp.com [92.121.34.13]) by mails.dpdk.org (Postfix) with ESMTP id 996B71CC566 for ; Thu, 11 Feb 2021 15:28:06 +0100 (CET) Received: from inva020.nxp.com (localhost [127.0.0.1]) by inva020.eu-rdc02.nxp.com (Postfix) with ESMTP id 774BC1A0604; Thu, 11 Feb 2021 15:28:06 +0100 (CET) Received: from invc005.ap-rdc01.nxp.com (invc005.ap-rdc01.nxp.com [165.114.16.14]) by inva020.eu-rdc02.nxp.com (Postfix) with ESMTP id 37EB01A05F5; Thu, 11 Feb 2021 15:28:05 +0100 (CET) Received: from bf-netperf1.ap.freescale.net (bf-netperf1.ap.freescale.net [10.232.133.63]) by invc005.ap-rdc01.nxp.com (Postfix) with ESMTP id 8262C402AD; Thu, 11 Feb 2021 15:28:03 +0100 (CET) From: Hemant Agrawal To: dev@dpdk.org, ferruh.yigit@intel.com Date: Thu, 11 Feb 2021 19:46:16 +0530 Message-Id: <20210211141620.12482-17-hemant.agrawal@nxp.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20210211141620.12482-1-hemant.agrawal@nxp.com> References: <20210120142723.14090-1-hemant.agrawal@nxp.com> <20210211141620.12482-1-hemant.agrawal@nxp.com> X-Virus-Scanned: ClamAV using ClamSMTP Subject: [dpdk-dev] [PATCH v2 16/20] net/dpaa: do not 
release the cgr ranges From: Nipun Gupta CGRs are automatically freed by the kernel. Since we do not clean up the queues, releasing the CGRs here makes the kernel report them as in use, so let the kernel free them instead. Signed-off-by: Nipun Gupta --- drivers/net/dpaa/dpaa_ethdev.c | 6 ------ 1 file changed, 6 deletions(-) diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c index c59873dd8a..0996edf9a9 100644 --- a/drivers/net/dpaa/dpaa_ethdev.c +++ b/drivers/net/dpaa/dpaa_ethdev.c @@ -486,9 +486,6 @@ static int dpaa_eth_dev_close(struct rte_eth_dev *dev) if (dpaa_intf->cgr_rx) { for (loop = 0; loop < dpaa_intf->nb_rx_queues; loop++) qman_delete_cgr(&dpaa_intf->cgr_rx[loop]); - - qman_release_cgrid_range(dpaa_intf->cgr_rx[loop].cgrid, - dpaa_intf->nb_rx_queues); } rte_free(dpaa_intf->cgr_rx); @@ -497,9 +494,6 @@ static int dpaa_eth_dev_close(struct rte_eth_dev *dev) if (dpaa_intf->cgr_tx) { for (loop = 0; loop < MAX_DPAA_CORES; loop++) qman_delete_cgr(&dpaa_intf->cgr_tx[loop]); - - qman_release_cgrid_range(dpaa_intf->cgr_tx[loop].cgrid, - MAX_DPAA_CORES); rte_free(dpaa_intf->cgr_tx); dpaa_intf->cgr_tx = NULL; } From patchwork Thu Feb 11 14:16:17 2021 X-Patchwork-Submitter: Hemant Agrawal X-Patchwork-Id: 87872 X-Patchwork-Delegate: ferruh.yigit@amd.com Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id D17931CC5B3; Thu, 11 Feb 2021 15:28:22 +0100 
(CET) From: Hemant Agrawal To: dev@dpdk.org, ferruh.yigit@intel.com Date: Thu, 11 Feb 2021 19:46:17 +0530 Message-Id: <20210211141620.12482-18-hemant.agrawal@nxp.com> In-Reply-To: <20210211141620.12482-1-hemant.agrawal@nxp.com> References: <20210120142723.14090-1-hemant.agrawal@nxp.com> <20210211141620.12482-1-hemant.agrawal@nxp.com> Subject: [dpdk-dev] [PATCH v2 17/20] net/dpaa: prevent multiple mp config on a device From: Nipun Gupta The current driver supports only a single buffer pool on a given PMD instance. Return an error if an application tries to configure more. 
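The guard this patch adds to `dpaa_eth_rx_queue_setup()` can be sketched on its own as below. The structures here are minimal hypothetical stand-ins for the driver's types; only the shape of the check (compare the cached `bp_info->mp` against the mempool passed in) mirrors the actual diff.

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>

/* Minimal stand-ins for the driver structures (illustrative only). */
struct rte_mempool { int id; };
struct dpaa_bp_info { void *bp; struct rte_mempool *mp; };
struct dpaa_if { struct dpaa_bp_info *bp_info; };

/* Reject a second, different mempool on the same interface: the DPAA
 * PMD supports a single buffer pool per port, so once bp_info->bp is
 * set, every queue must use the same mempool.
 */
static int check_single_pool(struct dpaa_if *dpaa_intf,
			     struct rte_mempool *mp)
{
	if (dpaa_intf->bp_info && dpaa_intf->bp_info->bp &&
	    dpaa_intf->bp_info->mp != mp)
		return -EINVAL;
	return 0;
}
```

Note that `bp_info->bp` is cleared in `dpaa_mbuf_free_pool()` by the companion mempool change, which is what makes this pointer a reliable "pool still configured" flag.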
Signed-off-by: Nipun Gupta --- drivers/mempool/dpaa/dpaa_mempool.c | 1 + drivers/net/dpaa/dpaa_ethdev.c | 6 ++++++ 2 files changed, 7 insertions(+) diff --git a/drivers/mempool/dpaa/dpaa_mempool.c b/drivers/mempool/dpaa/dpaa_mempool.c index e6b06f0575..1ee7ffb647 100644 --- a/drivers/mempool/dpaa/dpaa_mempool.c +++ b/drivers/mempool/dpaa/dpaa_mempool.c @@ -134,6 +134,7 @@ dpaa_mbuf_free_pool(struct rte_mempool *mp) DPAA_MEMPOOL_INFO("BMAN pool freed for bpid =%d", bp_info->bpid); rte_free(mp->pool_data); + bp_info->bp = NULL; mp->pool_data = NULL; } } diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c index 0996edf9a9..a3a3e7cb24 100644 --- a/drivers/net/dpaa/dpaa_ethdev.c +++ b/drivers/net/dpaa/dpaa_ethdev.c @@ -969,6 +969,12 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx, } } + if (dpaa_intf->bp_info && dpaa_intf->bp_info->bp && + dpaa_intf->bp_info->mp != mp) { + DPAA_PMD_WARN("Multiple pools on same interface not supported"); + return -EINVAL; + } + /* Max packet can fit in single buffer */ if (dev->data->dev_conf.rxmode.max_rx_pkt_len <= buffsz) { ; From patchwork Thu Feb 11 14:16:18 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Hemant Agrawal X-Patchwork-Id: 87873 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 5CE95A054A; Thu, 11 Feb 2021 15:30:35 +0100 (CET) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 09FE01CC5BF; Thu, 11 Feb 2021 15:28:25 +0100 (CET) Received: from inva020.nxp.com (inva020.nxp.com [92.121.34.13]) by mails.dpdk.org (Postfix) with ESMTP id 10E201CC570 for ; Thu, 11 Feb 2021 15:28:08 +0100 (CET) Received: from inva020.nxp.com (localhost [127.0.0.1]) by 
inva020.eu-rdc02.nxp.com (Postfix) with ESMTP id 8ECCA1A05F5; Thu, 11 Feb 2021 15:28:07 +0100 (CET) From: Hemant Agrawal To: dev@dpdk.org, ferruh.yigit@intel.com Date: Thu, 11 Feb 2021 19:46:18 +0530 Message-Id: <20210211141620.12482-19-hemant.agrawal@nxp.com> In-Reply-To: <20210211141620.12482-1-hemant.agrawal@nxp.com> References: <20210120142723.14090-1-hemant.agrawal@nxp.com> <20210211141620.12482-1-hemant.agrawal@nxp.com> Subject: [dpdk-dev] [PATCH v2 18/20] bus/dpaa: secondary process init support The secondary process also needs access to the QMan and BMan CCSR maps. 
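The change in this patch is purely control flow: the `RTE_PROC_PRIMARY` condition around `qman_global_init()`/`bman_global_init()` is dropped so both run in every process. A toy model of the post-patch flow, with stub init functions standing in for the real ones (the `_stub` names and the mapped flags are illustrative, not driver code):

```c
#include <assert.h>

enum proc_type { PROC_PRIMARY, PROC_SECONDARY };

/* Illustrative state: whether each CCSR region has been mapped. */
static int qman_mapped, bman_mapped;

static int qman_global_init_stub(void) { qman_mapped = 1; return 0; }
static int bman_global_init_stub(void) { bman_mapped = 1; return 0; }

/* After the patch, bus probe maps the CCSR regions regardless of the
 * process type; before it, a secondary process skipped both calls and
 * was left with the QMan/BMan registers unmapped.
 */
static int bus_probe(enum proc_type t)
{
	(void)t;	/* process type is no longer consulted here */
	if (qman_global_init_stub())
		return -1;
	if (bman_global_init_stub())
		return -1;
	return 0;
}
```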
Signed-off-by: Hemant Agrawal --- drivers/bus/dpaa/dpaa_bus.c | 26 ++++++++++++-------------- 1 file changed, 12 insertions(+), 14 deletions(-) diff --git a/drivers/bus/dpaa/dpaa_bus.c b/drivers/bus/dpaa/dpaa_bus.c index 662cbfaae5..37cf55d60b 100644 --- a/drivers/bus/dpaa/dpaa_bus.c +++ b/drivers/bus/dpaa/dpaa_bus.c @@ -582,20 +582,18 @@ rte_dpaa_bus_probe(void) /* Device list creation is only done once */ if (!process_once) { rte_dpaa_bus_dev_build(); - if (rte_eal_process_type() == RTE_PROC_PRIMARY) { - /* One time load of Qman/Bman drivers */ - ret = qman_global_init(); - if (ret) { - DPAA_BUS_ERR("QMAN initialization failed: %d", - ret); - return ret; - } - ret = bman_global_init(); - if (ret) { - DPAA_BUS_ERR("BMAN initialization failed: %d", - ret); - return ret; - } + /* One time load of Qman/Bman drivers */ + ret = qman_global_init(); + if (ret) { + DPAA_BUS_ERR("QMAN initialization failed: %d", + ret); + return ret; + } + ret = bman_global_init(); + if (ret) { + DPAA_BUS_ERR("BMAN initialization failed: %d", + ret); + return ret; } } process_once = 1; From patchwork Thu Feb 11 14:16:19 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Hemant Agrawal X-Patchwork-Id: 87874 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id EC980A054A; Thu, 11 Feb 2021 15:30:42 +0100 (CET) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 53EEE1CC5C3; Thu, 11 Feb 2021 15:28:26 +0100 (CET) Received: from inva021.nxp.com (inva021.nxp.com [92.121.34.21]) by mails.dpdk.org (Postfix) with ESMTP id 65F1E1CC570 for ; Thu, 11 Feb 2021 15:28:08 +0100 (CET) Received: from inva021.nxp.com (localhost [127.0.0.1]) by inva021.eu-rdc02.nxp.com (Postfix) with ESMTP id 47BFF200610; Thu, 11 
Feb 2021 15:28:08 +0100 (CET) From: Hemant Agrawal To: dev@dpdk.org, ferruh.yigit@intel.com Date: Thu, 11 Feb 2021 19:46:19 +0530 Message-Id: <20210211141620.12482-20-hemant.agrawal@nxp.com> In-Reply-To: <20210211141620.12482-1-hemant.agrawal@nxp.com> References: <20210120142723.14090-1-hemant.agrawal@nxp.com> <20210211141620.12482-1-hemant.agrawal@nxp.com> Subject: [dpdk-dev] [PATCH v2 19/20] bus/dpaa: support shared ethernet MAC interface From: Nipun Gupta DPAA can share an interface with the kernel based on classification criteria. This patch enables the default kernel driver to be used as a shared MAC interface with a DPDK interface, provided that VSP is enabled on that interface. 
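The device-tree walk in `fman_if_init()` boils down to: find the extended-args node whose `cell-index` matches the rx port's, and treat the port as shared only if that node carries a `vsp-window` property. A simplified model of that matching logic (the array-of-structs representation is a hypothetical stand-in for the `of_*` node traversal):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Illustrative model of an "fsl,fman-port-*-rx-extended-args" node. */
struct ext_args_node {
	unsigned cell_idx;	/* matched against the rx port cell-index */
	bool has_vsp_window;	/* true if the node has a vsp-window prop */
};

/* An interface is shared with the kernel only when the extended-args
 * node matching the rx port's cell-index defines a VSP window.
 */
static bool port_is_shared(unsigned port_cell_idx,
			   const struct ext_args_node *nodes, size_t n)
{
	for (size_t i = 0; i < n; i++)
		if (nodes[i].cell_idx == port_cell_idx)
			return nodes[i].has_vsp_window;
	return false;
}
```

In the real code the same decision additionally normalizes the port cell-index (subtracting 0x8 for 1G and 0x10 for 10G rx ports) before the comparison.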
Signed-off-by: Nipun Gupta --- drivers/bus/dpaa/base/fman/fman.c | 149 +++++++++++++++++++++--------- 1 file changed, 105 insertions(+), 44 deletions(-) diff --git a/drivers/bus/dpaa/base/fman/fman.c b/drivers/bus/dpaa/base/fman/fman.c index 39102bc1f3..692071b4b0 100644 --- a/drivers/bus/dpaa/base/fman/fman.c +++ b/drivers/bus/dpaa/base/fman/fman.c @@ -211,12 +211,16 @@ fman_if_init(const struct device_node *dpa_node) const phandle *mac_phandle, *ports_phandle, *pools_phandle; const phandle *tx_channel_id = NULL, *mac_addr, *cell_idx; const phandle *rx_phandle, *tx_phandle; + const phandle *port_cell_idx, *ext_args_cell_idx; + const struct device_node *parent_node_ext_args; uint64_t tx_phandle_host[4] = {0}; uint64_t rx_phandle_host[4] = {0}; uint64_t regs_addr_host = 0; uint64_t cell_idx_host = 0; + uint64_t port_cell_idx_val = 0; + uint64_t ext_args_cell_idx_val = 0; - const struct device_node *mac_node = NULL, *tx_node; + const struct device_node *mac_node = NULL, *tx_node, *ext_args_node; const struct device_node *pool_node, *fman_node, *rx_node; const uint32_t *regs_addr = NULL; const char *mname, *fname; @@ -230,16 +234,112 @@ fman_if_init(const struct device_node *dpa_node) return 0; if (!of_device_is_compatible(dpa_node, "fsl,dpa-ethernet-init") && - !of_device_is_compatible(dpa_node, "fsl,dpa-ethernet-shared")) { + !of_device_is_compatible(dpa_node, "fsl,dpa-ethernet")) { return 0; } - if (of_device_is_compatible(dpa_node, "fsl,dpa-ethernet-shared")) - is_shared = 1; - rprop = "fsl,qman-frame-queues-rx"; mprop = "fsl,fman-mac"; + /* Obtain the MAC node used by this interface except macless */ + mac_phandle = of_get_property(dpa_node, mprop, &lenp); + if (!mac_phandle) { + FMAN_ERR(-EINVAL, "%s: no %s\n", dname, mprop); + return -EINVAL; + } + assert(lenp == sizeof(phandle)); + mac_node = of_find_node_by_phandle(*mac_phandle); + if (!mac_node) { + FMAN_ERR(-ENXIO, "%s: bad 'fsl,fman-mac\n", dname); + return -ENXIO; + } + mname = mac_node->full_name; + + /* 
Extract the Rx and Tx ports */ + ports_phandle = of_get_property(mac_node, "fsl,port-handles", + &lenp); + if (!ports_phandle) + ports_phandle = of_get_property(mac_node, "fsl,fman-ports", + &lenp); + if (!ports_phandle) { + FMAN_ERR(-EINVAL, "%s: no fsl,port-handles\n", + mname); + return -EINVAL; + } + assert(lenp == (2 * sizeof(phandle))); + rx_node = of_find_node_by_phandle(ports_phandle[0]); + if (!rx_node) { + FMAN_ERR(-ENXIO, "%s: bad fsl,port-handle[0]\n", mname); + return -ENXIO; + } + tx_node = of_find_node_by_phandle(ports_phandle[1]); + if (!tx_node) { + FMAN_ERR(-ENXIO, "%s: bad fsl,port-handle[1]\n", mname); + return -ENXIO; + } + + /* Check if the port is shared interface */ + if (of_device_is_compatible(dpa_node, "fsl,dpa-ethernet")) { + port_cell_idx = of_get_property(rx_node, "cell-index", &lenp); + if (!port_cell_idx) { + FMAN_ERR(-ENXIO, + "%s: no cell-index for port\n", mname); + return -ENXIO; + } + assert(lenp == sizeof(*port_cell_idx)); + port_cell_idx_val = + of_read_number(port_cell_idx, lenp / sizeof(phandle)); + + if (of_device_is_compatible(rx_node, "fsl,fman-port-1g-rx")) + port_cell_idx_val -= 0x8; + else if (of_device_is_compatible( + rx_node, "fsl,fman-port-10g-rx")) + port_cell_idx_val -= 0x10; + + parent_node_ext_args = of_find_compatible_node(NULL, + NULL, "fsl,fman-extended-args"); + if (!parent_node_ext_args) + return 0; + + for_each_child_node(parent_node_ext_args, ext_args_node) { + ext_args_cell_idx = of_get_property(ext_args_node, + "cell-index", &lenp); + if (!ext_args_cell_idx) { + FMAN_ERR(-ENXIO, + "%s: no cell-index for ext args\n", + mname); + return -ENXIO; + } + assert(lenp == sizeof(*ext_args_cell_idx)); + ext_args_cell_idx_val = + of_read_number(ext_args_cell_idx, lenp / + sizeof(phandle)); + + if (port_cell_idx_val == ext_args_cell_idx_val) { + if (of_device_is_compatible(ext_args_node, + "fsl,fman-port-1g-rx-extended-args") && + of_device_is_compatible(rx_node, + "fsl,fman-port-1g-rx")) { + if 
(of_get_property(ext_args_node, + "vsp-window", &lenp)) + is_shared = 1; + break; + } + if (of_device_is_compatible(ext_args_node, + "fsl,fman-port-10g-rx-extended-args") && + of_device_is_compatible(rx_node, + "fsl,fman-port-10g-rx")) { + if (of_get_property(ext_args_node, + "vsp-window", &lenp)) + is_shared = 1; + break; + } + } + } + if (!is_shared) + return 0; + } + /* Allocate an object for this network interface */ __if = rte_malloc(NULL, sizeof(*__if), RTE_CACHE_LINE_SIZE); if (!__if) { @@ -253,20 +353,6 @@ fman_if_init(const struct device_node *dpa_node) strlcpy(__if->node_path, dpa_node->full_name, PATH_MAX - 1); __if->node_path[PATH_MAX - 1] = '\0'; - /* Obtain the MAC node used by this interface except macless */ - mac_phandle = of_get_property(dpa_node, mprop, &lenp); - if (!mac_phandle) { - FMAN_ERR(-EINVAL, "%s: no %s\n", dname, mprop); - goto err; - } - assert(lenp == sizeof(phandle)); - mac_node = of_find_node_by_phandle(*mac_phandle); - if (!mac_node) { - FMAN_ERR(-ENXIO, "%s: bad 'fsl,fman-mac\n", dname); - goto err; - } - mname = mac_node->full_name; - /* Map the CCSR regs for the MAC node */ regs_addr = of_get_address(mac_node, 0, &__if->regs_size, NULL); if (!regs_addr) { @@ -290,7 +376,6 @@ fman_if_init(const struct device_node *dpa_node) /* Get rid of endianness (issues). 
Convert to host byte order */ regs_addr_host = of_read_number(regs_addr, na); - /* Get the index of the Fman this i/f belongs to */ fman_node = of_get_parent(mac_node); na = of_n_addr_cells(mac_node); @@ -384,25 +469,6 @@ fman_if_init(const struct device_node *dpa_node) } memcpy(&__if->__if.mac_addr, mac_addr, ETHER_ADDR_LEN); - /* Extract the Tx port (it's the second of the two port handles) - * and get its channel ID - */ - ports_phandle = of_get_property(mac_node, "fsl,port-handles", - &lenp); - if (!ports_phandle) - ports_phandle = of_get_property(mac_node, "fsl,fman-ports", - &lenp); - if (!ports_phandle) { - FMAN_ERR(-EINVAL, "%s: no fsl,port-handles\n", - mname); - goto err; - } - assert(lenp == (2 * sizeof(phandle))); - tx_node = of_find_node_by_phandle(ports_phandle[1]); - if (!tx_node) { - FMAN_ERR(-ENXIO, "%s: bad fsl,port-handle[1]\n", mname); - goto err; - } /* Extract the channel ID (from tx-port-handle) */ tx_channel_id = of_get_property(tx_node, "fsl,qman-channel-id", &lenp); @@ -412,11 +478,6 @@ fman_if_init(const struct device_node *dpa_node) goto err; } - rx_node = of_find_node_by_phandle(ports_phandle[0]); - if (!rx_node) { - FMAN_ERR(-ENXIO, "%s: bad fsl,port-handle[0]\n", mname); - goto err; - } regs_addr = of_get_address(rx_node, 0, &__if->regs_size, NULL); if (!regs_addr) { FMAN_ERR(-EINVAL, "of_get_address(%s)\n", mname); From patchwork Thu Feb 11 14:16:20 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Hemant Agrawal X-Patchwork-Id: 87875 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 446ADA054A; Thu, 11 Feb 2021 15:30:51 +0100 (CET) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 9730D1CC5C9; Thu, 11 Feb 2021 15:28:27 +0100 
(CET) From: Hemant Agrawal To: dev@dpdk.org, ferruh.yigit@intel.com Date: Thu, 11 Feb 2021 19:46:20 +0530 Message-Id: <20210211141620.12482-21-hemant.agrawal@nxp.com> In-Reply-To: <20210211141620.12482-1-hemant.agrawal@nxp.com> References: <20210120142723.14090-1-hemant.agrawal@nxp.com> <20210211141620.12482-1-hemant.agrawal@nxp.com> Subject: [dpdk-dev] [PATCH v2 20/20] bus/dpaa: enhance checks for bus and device detection 1. It is not an error if no network device is available; one can use only the crypto devices. 2. 
Improve logging for failure in detecting the bus Signed-off-by: Hemant Agrawal --- drivers/bus/dpaa/dpaa_bus.c | 8 +++++--- 1 file changed, 5 insertions(+), 3 deletions(-) diff --git a/drivers/bus/dpaa/dpaa_bus.c b/drivers/bus/dpaa/dpaa_bus.c index 37cf55d60b..173041c026 100644 --- a/drivers/bus/dpaa/dpaa_bus.c +++ b/drivers/bus/dpaa/dpaa_bus.c @@ -521,16 +521,18 @@ rte_dpaa_bus_dev_build(void) /* Get the interface configurations from device-tree */ dpaa_netcfg = netcfg_acquire(); if (!dpaa_netcfg) { - DPAA_BUS_LOG(ERR, "netcfg_acquire failed"); + DPAA_BUS_LOG(ERR, + "netcfg failed: /dev/fsl_usdpaa device not available"); + DPAA_BUS_WARN( + "Check if you are using USDPAA based device tree"); return -EINVAL; } RTE_LOG(NOTICE, EAL, "DPAA Bus Detected\n"); if (!dpaa_netcfg->num_ethports) { - DPAA_BUS_LOG(INFO, "no network interfaces available"); + DPAA_BUS_LOG(INFO, "NO DPDK mapped net interfaces available"); /* This is not an error */ - return 0; } #ifdef RTE_LIBRTE_DPAA_DEBUG_DRIVER