From patchwork Sun Jun  2 15:23:57 2019
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Jerin Jacob Kollanukkaran
X-Patchwork-Id: 54077
X-Patchwork-Delegate: ferruh.yigit@amd.com
To: Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K
Cc: Krzysztof Kanas
Date: Sun, 2 Jun 2019 20:53:57 +0530
Message-ID: <20190602152434.23996-22-jerinj@marvell.com>
X-Mailer: git-send-email 2.21.0
In-Reply-To: <20190602152434.23996-1-jerinj@marvell.com>
References: <20190602152434.23996-1-jerinj@marvell.com>
Subject: [dpdk-dev] [PATCH v1 21/58] net/octeontx2: introduce traffic manager

From: Nithin Dabilpuram

Introduce traffic manager infra and default hierarchy creation.

Upon ethdev configure, a default hierarchy is created with one-to-one
mapped tm nodes. This topology will be overridden when the user
explicitly creates and commits a new hierarchy through the rte_tm
interface.
Signed-off-by: Nithin Dabilpuram
Signed-off-by: Krzysztof Kanas
---
 drivers/net/octeontx2/Makefile      |   1 +
 drivers/net/octeontx2/meson.build   |   1 +
 drivers/net/octeontx2/otx2_ethdev.c |  16 ++
 drivers/net/octeontx2/otx2_ethdev.h |  14 ++
 drivers/net/octeontx2/otx2_tm.c     | 252 ++++++++++++++++++++++++++++
 drivers/net/octeontx2/otx2_tm.h     |  67 ++++++++
 6 files changed, 351 insertions(+)
 create mode 100644 drivers/net/octeontx2/otx2_tm.c
 create mode 100644 drivers/net/octeontx2/otx2_tm.h

diff --git a/drivers/net/octeontx2/Makefile b/drivers/net/octeontx2/Makefile
index 67352ec81..cf2ba0e0e 100644
--- a/drivers/net/octeontx2/Makefile
+++ b/drivers/net/octeontx2/Makefile
@@ -30,6 +30,7 @@ LIBABIVER := 1
 # all source are stored in SRCS-y
 #
 SRCS-$(CONFIG_RTE_LIBRTE_OCTEONTX2_PMD) += \
+	otx2_tm.c	\
 	otx2_rss.c	\
 	otx2_mac.c	\
 	otx2_link.c	\
diff --git a/drivers/net/octeontx2/meson.build b/drivers/net/octeontx2/meson.build
index b7e56e2ca..14e8e78f8 100644
--- a/drivers/net/octeontx2/meson.build
+++ b/drivers/net/octeontx2/meson.build
@@ -3,6 +3,7 @@
 #
 sources = files(
+		'otx2_tm.c',
 		'otx2_rss.c',
 		'otx2_mac.c',
 		'otx2_link.c',
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index 04a953441..2808058a8 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -1033,6 +1033,7 @@ otx2_nix_configure(struct rte_eth_dev *eth_dev)
 		rc = nix_store_queue_cfg_and_then_release(eth_dev);
 		if (rc)
 			goto fail;
+		otx2_nix_tm_fini(eth_dev);
 		nix_lf_free(dev);
 	}
 
@@ -1066,6 +1067,13 @@
 		goto free_nix_lf;
 	}
 
+	/* Init the default TM scheduler hierarchy */
+	rc = otx2_nix_tm_init_default(eth_dev);
+	if (rc) {
+		otx2_err("Failed to init traffic manager rc=%d", rc);
+		goto free_nix_lf;
+	}
+
 	/* Register queue IRQs */
 	rc = oxt2_nix_register_queue_irqs(eth_dev);
 	if (rc) {
@@ -1368,6 +1376,9 @@ otx2_eth_dev_init(struct rte_eth_dev *eth_dev)
 	/* Also sync same MAC address to CGX table */
 	otx2_cgx_mac_addr_set(eth_dev, &eth_dev->data->mac_addrs[0]);
 
+	/* Initialize the tm data structures */
+	otx2_nix_tm_conf_init(eth_dev);
+
 	dev->tx_offload_capa = nix_get_tx_offload_capa(dev);
 	dev->rx_offload_capa = nix_get_rx_offload_capa(dev);
 
@@ -1423,6 +1434,11 @@
 	}
 	eth_dev->data->nb_rx_queues = 0;
 
+	/* Free tm resources */
+	rc = otx2_nix_tm_fini(eth_dev);
+	if (rc)
+		otx2_err("Failed to cleanup tm, rc=%d", rc);
+
 	/* Unregister queue irqs */
 	oxt2_nix_unregister_queue_irqs(eth_dev);
 
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index 7b8c7e1e5..b2b7d4186 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -19,6 +19,7 @@
 #include "otx2_irq.h"
 #include "otx2_mempool.h"
 #include "otx2_rx.h"
+#include "otx2_tm.h"
 #include "otx2_tx.h"
 
 #define OTX2_ETH_DEV_PMD_VERSION	"1.0"
@@ -181,6 +182,19 @@ struct otx2_eth_dev {
 	uint64_t rx_offload_capa;
 	uint64_t tx_offload_capa;
 	struct otx2_qint qints_mem[RTE_MAX_QUEUES_PER_PORT];
+	uint16_t txschq[NIX_TXSCH_LVL_CNT];
+	uint16_t txschq_contig[NIX_TXSCH_LVL_CNT];
+	uint16_t txschq_index[NIX_TXSCH_LVL_CNT];
+	uint16_t txschq_contig_index[NIX_TXSCH_LVL_CNT];
+	/* Dis-contiguous queues */
+	uint16_t txschq_list[NIX_TXSCH_LVL_CNT][MAX_TXSCHQ_PER_FUNC];
+	/* Contiguous queues */
+	uint16_t txschq_contig_list[NIX_TXSCH_LVL_CNT][MAX_TXSCHQ_PER_FUNC];
+	uint16_t otx2_tm_root_lvl;
+	uint16_t tm_flags;
+	uint16_t tm_leaf_cnt;
+	struct otx2_nix_tm_node_list node_list;
+	struct otx2_nix_tm_shaper_profile_list shaper_profile_list;
 	struct otx2_rss_info rss_info;
 	uint32_t txmap[RTE_ETHDEV_QUEUE_STAT_CNTRS];
 	uint32_t rxmap[RTE_ETHDEV_QUEUE_STAT_CNTRS];
diff --git a/drivers/net/octeontx2/otx2_tm.c b/drivers/net/octeontx2/otx2_tm.c
new file mode 100644
index 000000000..bc0474242
--- /dev/null
+++ b/drivers/net/octeontx2/otx2_tm.c
@@ -0,0 +1,252 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#include <rte_malloc.h>
+
+#include "otx2_ethdev.h"
+#include "otx2_tm.h"
+
+/* Use last LVL_CNT nodes as default nodes */
+#define NIX_DEFAULT_NODE_ID_START (RTE_TM_NODE_ID_NULL - NIX_TXSCH_LVL_CNT)
+
+enum otx2_tm_node_level {
+	OTX2_TM_LVL_ROOT = 0,
+	OTX2_TM_LVL_SCH1,
+	OTX2_TM_LVL_SCH2,
+	OTX2_TM_LVL_SCH3,
+	OTX2_TM_LVL_SCH4,
+	OTX2_TM_LVL_QUEUE,
+	OTX2_TM_LVL_MAX,
+};
+
+static bool
+nix_tm_have_tl1_access(struct otx2_eth_dev *dev)
+{
+	bool is_lbk = otx2_dev_is_lbk(dev);
+	return otx2_dev_is_pf(dev) && !otx2_dev_is_A0(dev) &&
+		!is_lbk && !dev->maxvf;
+}
+
+static struct otx2_nix_tm_shaper_profile *
+nix_tm_shaper_profile_search(struct otx2_eth_dev *dev, uint32_t shaper_id)
+{
+	struct otx2_nix_tm_shaper_profile *tm_shaper_profile;
+
+	TAILQ_FOREACH(tm_shaper_profile, &dev->shaper_profile_list, shaper) {
+		if (tm_shaper_profile->shaper_profile_id == shaper_id)
+			return tm_shaper_profile;
+	}
+	return NULL;
+}
+
+static struct otx2_nix_tm_node *
+nix_tm_node_search(struct otx2_eth_dev *dev,
+		   uint32_t node_id, bool user)
+{
+	struct otx2_nix_tm_node *tm_node;
+
+	TAILQ_FOREACH(tm_node, &dev->node_list, node) {
+		if (tm_node->id == node_id &&
+		    (user == !!(tm_node->flags & NIX_TM_NODE_USER)))
+			return tm_node;
+	}
+	return NULL;
+}
+
+static int
+nix_tm_node_add_to_list(struct otx2_eth_dev *dev, uint32_t node_id,
+			uint32_t parent_node_id, uint32_t priority,
+			uint32_t weight, uint16_t hw_lvl_id,
+			uint16_t level_id, bool user,
+			struct rte_tm_node_params *params)
+{
+	struct otx2_nix_tm_shaper_profile *shaper_profile;
+	struct otx2_nix_tm_node *tm_node, *parent_node;
+	uint32_t shaper_profile_id;
+
+	shaper_profile_id = params->shaper_profile_id;
+	shaper_profile = nix_tm_shaper_profile_search(dev, shaper_profile_id);
+
+	parent_node = nix_tm_node_search(dev, parent_node_id, user);
+
+	tm_node = rte_zmalloc("otx2_nix_tm_node",
+			      sizeof(struct otx2_nix_tm_node), 0);
+	if (!tm_node)
+		return -ENOMEM;
+
+	tm_node->level_id = level_id;
+	tm_node->hw_lvl_id = hw_lvl_id;
+
+	tm_node->id = node_id;
+	tm_node->priority = priority;
+	tm_node->weight = weight;
+	tm_node->rr_prio = 0xf;
+	tm_node->max_prio = UINT32_MAX;
+	tm_node->hw_id = UINT32_MAX;
+	tm_node->flags = 0;
+	if (user)
+		tm_node->flags = NIX_TM_NODE_USER;
+	rte_memcpy(&tm_node->params, params, sizeof(struct rte_tm_node_params));
+
+	if (shaper_profile)
+		shaper_profile->reference_count++;
+	tm_node->parent = parent_node;
+	tm_node->parent_hw_id = UINT32_MAX;
+
+	TAILQ_INSERT_TAIL(&dev->node_list, tm_node, node);
+
+	return 0;
+}
+
+static int
+nix_tm_clear_shaper_profiles(struct otx2_eth_dev *dev)
+{
+	struct otx2_nix_tm_shaper_profile *shaper_profile;
+
+	while ((shaper_profile = TAILQ_FIRST(&dev->shaper_profile_list))) {
+		if (shaper_profile->reference_count)
+			otx2_tm_dbg("Shaper profile %u has non zero references",
+				    shaper_profile->shaper_profile_id);
+		TAILQ_REMOVE(&dev->shaper_profile_list, shaper_profile, shaper);
+		rte_free(shaper_profile);
+	}
+
+	return 0;
+}
+
+static int
+nix_tm_prepare_default_tree(struct rte_eth_dev *eth_dev)
+{
+	struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+	uint32_t def = eth_dev->data->nb_tx_queues;
+	struct rte_tm_node_params params;
+	uint32_t leaf_parent, i;
+	int rc = 0;
+
+	/* Default params */
+	memset(&params, 0, sizeof(params));
+	params.shaper_profile_id = RTE_TM_SHAPER_PROFILE_ID_NONE;
+
+	if (nix_tm_have_tl1_access(dev)) {
+		dev->otx2_tm_root_lvl = NIX_TXSCH_LVL_TL1;
+		rc = nix_tm_node_add_to_list(dev, def, RTE_TM_NODE_ID_NULL, 0,
+					     DEFAULT_RR_WEIGHT,
+					     NIX_TXSCH_LVL_TL1,
+					     OTX2_TM_LVL_ROOT, false, &params);
+		if (rc)
+			goto exit;
+		rc = nix_tm_node_add_to_list(dev, def + 1, def, 0,
+					     DEFAULT_RR_WEIGHT,
+					     NIX_TXSCH_LVL_TL2,
+					     OTX2_TM_LVL_SCH1, false, &params);
+		if (rc)
+			goto exit;
+
+		rc = nix_tm_node_add_to_list(dev, def + 2, def + 1, 0,
+					     DEFAULT_RR_WEIGHT,
+					     NIX_TXSCH_LVL_TL3,
+					     OTX2_TM_LVL_SCH2, false, &params);
+		if (rc)
+			goto exit;
+
+		rc = nix_tm_node_add_to_list(dev, def + 3, def + 2, 0,
+					     DEFAULT_RR_WEIGHT,
+					     NIX_TXSCH_LVL_TL4,
+					     OTX2_TM_LVL_SCH3, false, &params);
+		if (rc)
+			goto exit;
+
+		rc = nix_tm_node_add_to_list(dev, def + 4, def + 3, 0,
+					     DEFAULT_RR_WEIGHT,
+					     NIX_TXSCH_LVL_SMQ,
+					     OTX2_TM_LVL_SCH4, false, &params);
+		if (rc)
+			goto exit;
+
+		leaf_parent = def + 4;
+	} else {
+		dev->otx2_tm_root_lvl = NIX_TXSCH_LVL_TL2;
+		rc = nix_tm_node_add_to_list(dev, def, RTE_TM_NODE_ID_NULL, 0,
+					     DEFAULT_RR_WEIGHT,
+					     NIX_TXSCH_LVL_TL2,
+					     OTX2_TM_LVL_ROOT, false, &params);
+		if (rc)
+			goto exit;
+
+		rc = nix_tm_node_add_to_list(dev, def + 1, def, 0,
+					     DEFAULT_RR_WEIGHT,
+					     NIX_TXSCH_LVL_TL3,
+					     OTX2_TM_LVL_SCH1, false, &params);
+		if (rc)
+			goto exit;
+
+		rc = nix_tm_node_add_to_list(dev, def + 2, def + 1, 0,
+					     DEFAULT_RR_WEIGHT,
+					     NIX_TXSCH_LVL_TL4,
+					     OTX2_TM_LVL_SCH2, false, &params);
+		if (rc)
+			goto exit;
+
+		rc = nix_tm_node_add_to_list(dev, def + 3, def + 2, 0,
+					     DEFAULT_RR_WEIGHT,
+					     NIX_TXSCH_LVL_SMQ,
+					     OTX2_TM_LVL_SCH3, false, &params);
+		if (rc)
+			goto exit;
+
+		leaf_parent = def + 3;
+	}
+
+	/* Add leaf nodes */
+	for (i = 0; i < eth_dev->data->nb_tx_queues; i++) {
+		rc = nix_tm_node_add_to_list(dev, i, leaf_parent, 0,
+					     DEFAULT_RR_WEIGHT,
+					     NIX_TXSCH_LVL_CNT,
+					     OTX2_TM_LVL_QUEUE, false, &params);
+		if (rc)
+			break;
+	}
+
+exit:
+	return rc;
+}
+
+void otx2_nix_tm_conf_init(struct rte_eth_dev *eth_dev)
+{
+	struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+
+	TAILQ_INIT(&dev->node_list);
+	TAILQ_INIT(&dev->shaper_profile_list);
+}
+
+int otx2_nix_tm_init_default(struct rte_eth_dev *eth_dev)
+{
+	struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+	uint16_t sq_cnt = eth_dev->data->nb_tx_queues;
+	int rc;
+
+	/* Clear shaper profiles */
+	nix_tm_clear_shaper_profiles(dev);
+	dev->tm_flags = NIX_TM_DEFAULT_TREE;
+
+	rc = nix_tm_prepare_default_tree(eth_dev);
+	if (rc != 0)
+		return rc;
+
+	dev->tm_leaf_cnt = sq_cnt;
+
+	return 0;
+}
+
+int
+otx2_nix_tm_fini(struct rte_eth_dev *eth_dev)
+{
+	struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+
+	/* Clear shaper profiles */
+	nix_tm_clear_shaper_profiles(dev);
+
+	dev->tm_flags = 0;
+	return 0;
+}
diff --git a/drivers/net/octeontx2/otx2_tm.h b/drivers/net/octeontx2/otx2_tm.h
new file mode 100644
index 000000000..94023fa99
--- /dev/null
+++ b/drivers/net/octeontx2/otx2_tm.h
@@ -0,0 +1,67 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#ifndef __OTX2_TM_H__
+#define __OTX2_TM_H__
+
+#include <stdbool.h>
+
+#include <rte_tm_driver.h>
+
+#define NIX_TM_DEFAULT_TREE	BIT_ULL(0)
+
+struct otx2_eth_dev;
+
+void otx2_nix_tm_conf_init(struct rte_eth_dev *eth_dev);
+int otx2_nix_tm_init_default(struct rte_eth_dev *eth_dev);
+int otx2_nix_tm_fini(struct rte_eth_dev *eth_dev);
+
+struct otx2_nix_tm_node {
+	TAILQ_ENTRY(otx2_nix_tm_node) node;
+	uint32_t id;
+	uint32_t hw_id;
+	uint32_t priority;
+	uint32_t weight;
+	uint16_t level_id;
+	uint16_t hw_lvl_id;
+	uint32_t rr_prio;
+	uint32_t rr_num;
+	uint32_t max_prio;
+	uint32_t parent_hw_id;
+	uint32_t flags;
+#define NIX_TM_NODE_HWRES	BIT_ULL(0)
+#define NIX_TM_NODE_ENABLED	BIT_ULL(1)
+#define NIX_TM_NODE_USER	BIT_ULL(2)
+	struct otx2_nix_tm_node *parent;
+	struct rte_tm_node_params params;
+};
+
+struct otx2_nix_tm_shaper_profile {
+	TAILQ_ENTRY(otx2_nix_tm_shaper_profile) shaper;
+	uint32_t shaper_profile_id;
+	uint32_t reference_count;
+	struct rte_tm_shaper_params profile;
+};
+
+struct shaper_params {
+	uint64_t burst_exponent;
+	uint64_t burst_mantissa;
+	uint64_t div_exp;
+	uint64_t exponent;
+	uint64_t mantissa;
+	uint64_t burst;
+	uint64_t rate;
+};
+
+TAILQ_HEAD(otx2_nix_tm_node_list, otx2_nix_tm_node);
+TAILQ_HEAD(otx2_nix_tm_shaper_profile_list, otx2_nix_tm_shaper_profile);
+
+#define MAX_SCHED_WEIGHT ((uint8_t)~0)
+#define NIX_TM_RR_QUANTUM_MAX ((1 << 24) - 1)
+
+/* DEFAULT_RR_WEIGHT * NIX_TM_RR_QUANTUM_MAX / MAX_SCHED_WEIGHT */
+/* = NIX_MAX_HW_MTU */
+#define DEFAULT_RR_WEIGHT 71
+
+#endif /* __OTX2_TM_H__ */