From patchwork Thu Apr 1 12:38:02 2021
X-Patchwork-Submitter: Nithin Dabilpuram
X-Patchwork-Id: 90410
X-Patchwork-Delegate: jerinj@marvell.com
From: Nithin Dabilpuram
Date: Thu, 1 Apr 2021 18:08:02 +0530
Message-ID: <20210401123817.14348-38-ndabilpuram@marvell.com>
X-Mailer: git-send-email 2.8.4
In-Reply-To: <20210401123817.14348-1-ndabilpuram@marvell.com>
References: <20210305133918.8005-1-ndabilpuram@marvell.com>
 <20210401123817.14348-1-ndabilpuram@marvell.com>
Subject: [dpdk-dev] [PATCH v3 37/52] common/cnxk: add nix tm support for internal hierarchy
List-Id: DPDK patches and discussions

Add support to create the internal TM default hierarchy and the rate limit
hierarchy, along with an API to rate limit an SQ to a given rate. This will
be used by the cnxk ethdev driver's Tx queue rate limit op.
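For context, a minimal usage sketch (not part of the patch) of how a caller
such as the cnxk ethdev driver might use the new API. The wrapper name and
queue id below are hypothetical, the rate unit of bits per second is inferred
from the ~4us burst computation in roc_nix_tm_rlimit_sq(), and the
roc_nix_tm_hierarchy_enable() step is only indicated in a comment since its
full argument list is outside this patch:

  /* Hypothetical sketch; assumes the ROC headers (roc_nix.h) are included
   * and roc_nix points to an already initialized NIX device.
   */
  static int
  example_sq_rate_limit(struct roc_nix *roc_nix, uint16_t qid, uint64_t rate_bps)
  {
          int rc;

          /* Build SW state for the default and rate limit trees. */
          rc = roc_nix_tm_init(roc_nix);
          if (rc)
                  return rc;

          /* The ROC_NIX_TM_RLIMIT tree must be enabled on the HW via
           * roc_nix_tm_hierarchy_enable() before rate limiting; elided here.
           */

          /* Rate limit the SQ backing Tx queue 'qid'; rate == 0 SW_XOFFs it. */
          rc = roc_nix_tm_rlimit_sq(roc_nix, qid, rate_bps);
          if (rc)
                  return rc;

          /* On port close, release all TM HW and SW resources. */
          roc_nix_tm_fini(roc_nix);
          return 0;
  }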
Signed-off-by: Nithin Dabilpuram
---
 drivers/common/cnxk/roc_nix.h        |   7 ++
 drivers/common/cnxk/roc_nix_priv.h   |   2 +
 drivers/common/cnxk/roc_nix_tm.c     | 156 +++++++++++++++++++++++++++++++++++
 drivers/common/cnxk/roc_nix_tm_ops.c | 141 +++++++++++++++++++++++++++++++
 drivers/common/cnxk/version.map      |   3 +
 5 files changed, 309 insertions(+)

diff --git a/drivers/common/cnxk/roc_nix.h b/drivers/common/cnxk/roc_nix.h
index 7bf3435..8992ad3 100644
--- a/drivers/common/cnxk/roc_nix.h
+++ b/drivers/common/cnxk/roc_nix.h
@@ -330,6 +330,8 @@ enum roc_tm_node_level {
 /*
  * TM runtime hierarchy init API.
  */
+int __roc_api roc_nix_tm_init(struct roc_nix *roc_nix);
+void __roc_api roc_nix_tm_fini(struct roc_nix *roc_nix);
 int __roc_api roc_nix_tm_sq_aura_fc(struct roc_nix_sq *sq, bool enable);
 int __roc_api roc_nix_tm_sq_flush_spin(struct roc_nix_sq *sq);
 
@@ -392,6 +394,11 @@ struct roc_nix_tm_shaper_profile *__roc_api roc_nix_tm_shaper_profile_next(
         struct roc_nix *roc_nix, struct roc_nix_tm_shaper_profile *__prev);
 
 /*
+ * TM ratelimit tree API.
+ */
+int __roc_api roc_nix_tm_rlimit_sq(struct roc_nix *roc_nix, uint16_t qid,
+                                   uint64_t rate);
+/*
  * TM hierarchy enable/disable API.
  */
 int __roc_api roc_nix_tm_hierarchy_disable(struct roc_nix *roc_nix);
diff --git a/drivers/common/cnxk/roc_nix_priv.h b/drivers/common/cnxk/roc_nix_priv.h
index a40621c..4e1485f 100644
--- a/drivers/common/cnxk/roc_nix_priv.h
+++ b/drivers/common/cnxk/roc_nix_priv.h
@@ -326,6 +326,7 @@ int nix_tm_leaf_data_get(struct nix *nix, uint16_t sq, uint32_t *rr_quantum,
 int nix_tm_sq_flush_pre(struct roc_nix_sq *sq);
 int nix_tm_sq_flush_post(struct roc_nix_sq *sq);
 int nix_tm_smq_xoff(struct nix *nix, struct nix_tm_node *node, bool enable);
+int nix_tm_prepare_default_tree(struct roc_nix *roc_nix);
 int nix_tm_node_add(struct roc_nix *roc_nix, struct nix_tm_node *node);
 int nix_tm_node_delete(struct roc_nix *roc_nix, uint32_t node_id,
                        enum roc_nix_tm_tree tree, bool free);
@@ -344,6 +345,7 @@ int nix_tm_txsch_reg_config(struct nix *nix, enum roc_nix_tm_tree tree);
 int nix_tm_update_parent_info(struct nix *nix, enum roc_nix_tm_tree tree);
 int nix_tm_sq_sched_conf(struct nix *nix, struct nix_tm_node *node,
                          bool rr_quantum_only);
+int nix_tm_prepare_rate_limited_tree(struct roc_nix *roc_nix);
 
 /*
  * TM priv utils.
diff --git a/drivers/common/cnxk/roc_nix_tm.c b/drivers/common/cnxk/roc_nix_tm.c
index 762c85a..9b328c9 100644
--- a/drivers/common/cnxk/roc_nix_tm.c
+++ b/drivers/common/cnxk/roc_nix_tm.c
@@ -1089,6 +1089,162 @@ nix_tm_alloc_txschq(struct nix *nix, enum roc_nix_tm_tree tree)
 }
 
 int
+nix_tm_prepare_default_tree(struct roc_nix *roc_nix)
+{
+        struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+        uint32_t nonleaf_id = nix->nb_tx_queues;
+        struct nix_tm_node *node = NULL;
+        uint8_t leaf_lvl, lvl, lvl_end;
+        uint32_t parent, i;
+        int rc = 0;
+
+        /* Add ROOT, SCH1, SCH2, SCH3, [SCH4] nodes */
+        parent = ROC_NIX_TM_NODE_ID_INVALID;
+        /* With TL1 access we have an extra level */
+        lvl_end = (nix_tm_have_tl1_access(nix) ? ROC_TM_LVL_SCH4 :
+                   ROC_TM_LVL_SCH3);
+
+        for (lvl = ROC_TM_LVL_ROOT; lvl <= lvl_end; lvl++) {
+                rc = -ENOMEM;
+                node = nix_tm_node_alloc();
+                if (!node)
+                        goto error;
+
+                node->id = nonleaf_id;
+                node->parent_id = parent;
+                node->priority = 0;
+                node->weight = NIX_TM_DFLT_RR_WT;
+                node->shaper_profile_id = ROC_NIX_TM_SHAPER_PROFILE_NONE;
+                node->lvl = lvl;
+                node->tree = ROC_NIX_TM_DEFAULT;
+
+                rc = nix_tm_node_add(roc_nix, node);
+                if (rc)
+                        goto error;
+                parent = nonleaf_id;
+                nonleaf_id++;
+        }
+
+        parent = nonleaf_id - 1;
+        leaf_lvl = (nix_tm_have_tl1_access(nix) ? ROC_TM_LVL_QUEUE :
+                    ROC_TM_LVL_SCH4);
+
+        /* Add leaf nodes */
+        for (i = 0; i < nix->nb_tx_queues; i++) {
+                rc = -ENOMEM;
+                node = nix_tm_node_alloc();
+                if (!node)
+                        goto error;
+
+                node->id = i;
+                node->parent_id = parent;
+                node->priority = 0;
+                node->weight = NIX_TM_DFLT_RR_WT;
+                node->shaper_profile_id = ROC_NIX_TM_SHAPER_PROFILE_NONE;
+                node->lvl = leaf_lvl;
+                node->tree = ROC_NIX_TM_DEFAULT;
+
+                rc = nix_tm_node_add(roc_nix, node);
+                if (rc)
+                        goto error;
+        }
+
+        return 0;
+error:
+        nix_tm_node_free(node);
+        return rc;
+}
+
+int
+nix_tm_prepare_rate_limited_tree(struct roc_nix *roc_nix)
+{
+        struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+        uint32_t nonleaf_id = nix->nb_tx_queues;
+        struct nix_tm_node *node = NULL;
+        uint8_t leaf_lvl, lvl, lvl_end;
+        uint32_t parent, i;
+        int rc = 0;
+
+        /* Add ROOT, SCH1, SCH2 nodes */
+        parent = ROC_NIX_TM_NODE_ID_INVALID;
+        lvl_end = (nix_tm_have_tl1_access(nix) ? ROC_TM_LVL_SCH3 :
+                   ROC_TM_LVL_SCH2);
+
+        for (lvl = ROC_TM_LVL_ROOT; lvl <= lvl_end; lvl++) {
+                rc = -ENOMEM;
+                node = nix_tm_node_alloc();
+                if (!node)
+                        goto error;
+
+                node->id = nonleaf_id;
+                node->parent_id = parent;
+                node->priority = 0;
+                node->weight = NIX_TM_DFLT_RR_WT;
+                node->shaper_profile_id = ROC_NIX_TM_SHAPER_PROFILE_NONE;
+                node->lvl = lvl;
+                node->tree = ROC_NIX_TM_RLIMIT;
+
+                rc = nix_tm_node_add(roc_nix, node);
+                if (rc)
+                        goto error;
+                parent = nonleaf_id;
+                nonleaf_id++;
+        }
+
+        /* SMQ is mapped to SCH4 when we have TL1 access and SCH3 otherwise */
+        lvl = (nix_tm_have_tl1_access(nix) ? ROC_TM_LVL_SCH4 : ROC_TM_LVL_SCH3);
+
+        /* Add per queue SMQ nodes i.e SCH4 / SCH3 */
+        for (i = 0; i < nix->nb_tx_queues; i++) {
+                rc = -ENOMEM;
+                node = nix_tm_node_alloc();
+                if (!node)
+                        goto error;
+
+                node->id = nonleaf_id + i;
+                node->parent_id = parent;
+                node->priority = 0;
+                node->weight = NIX_TM_DFLT_RR_WT;
+                node->shaper_profile_id = ROC_NIX_TM_SHAPER_PROFILE_NONE;
+                node->lvl = lvl;
+                node->tree = ROC_NIX_TM_RLIMIT;
+
+                rc = nix_tm_node_add(roc_nix, node);
+                if (rc)
+                        goto error;
+        }
+
+        parent = nonleaf_id;
+        leaf_lvl = (nix_tm_have_tl1_access(nix) ? ROC_TM_LVL_QUEUE :
+                    ROC_TM_LVL_SCH4);
+
+        /* Add leaf nodes */
+        for (i = 0; i < nix->nb_tx_queues; i++) {
+                rc = -ENOMEM;
+                node = nix_tm_node_alloc();
+                if (!node)
+                        goto error;
+
+                node->id = i;
+                node->parent_id = parent;
+                node->priority = 0;
+                node->weight = NIX_TM_DFLT_RR_WT;
+                node->shaper_profile_id = ROC_NIX_TM_SHAPER_PROFILE_NONE;
+                node->lvl = leaf_lvl;
+                node->tree = ROC_NIX_TM_RLIMIT;
+
+                rc = nix_tm_node_add(roc_nix, node);
+                if (rc)
+                        goto error;
+        }
+
+        return 0;
+error:
+        nix_tm_node_free(node);
+        return rc;
+}
+
+int
 nix_tm_free_resources(struct roc_nix *roc_nix, uint32_t tree_mask, bool hw_only)
 {
         struct nix *nix = roc_nix_to_nix_priv(roc_nix);
diff --git a/drivers/common/cnxk/roc_nix_tm_ops.c b/drivers/common/cnxk/roc_nix_tm_ops.c
index 6bb0766..d13cc8a 100644
--- a/drivers/common/cnxk/roc_nix_tm_ops.c
+++ b/drivers/common/cnxk/roc_nix_tm_ops.c
@@ -543,3 +543,144 @@ roc_nix_tm_hierarchy_enable(struct roc_nix *roc_nix, enum roc_nix_tm_tree tree,
         nix->tm_flags |= NIX_TM_HIERARCHY_ENA;
         return 0;
 }
+
+int
+roc_nix_tm_init(struct roc_nix *roc_nix)
+{
+        struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+        uint32_t tree_mask;
+        int rc;
+
+        if (nix->tm_flags & NIX_TM_HIERARCHY_ENA) {
+                plt_err("Cannot init while existing hierarchy is enabled");
+                return -EBUSY;
+        }
+
+        /* Free up all user resources already held */
+        tree_mask = NIX_TM_TREE_MASK_ALL;
+        rc = nix_tm_free_resources(roc_nix, tree_mask, false);
+        if (rc) {
+                plt_err("Failed to freeup all nodes and resources, rc=%d", rc);
+                return rc;
+        }
+
+        /* Prepare default tree */
+        rc = nix_tm_prepare_default_tree(roc_nix);
+        if (rc) {
+                plt_err("failed to prepare default tm tree, rc=%d", rc);
+                return rc;
+        }
+
+        /* Prepare rlimit tree */
+        rc = nix_tm_prepare_rate_limited_tree(roc_nix);
+        if (rc) {
+                plt_err("failed to prepare rlimit tm tree, rc=%d", rc);
+                return rc;
+        }
+
+        return rc;
+}
+
+int
+roc_nix_tm_rlimit_sq(struct roc_nix *roc_nix, uint16_t qid, uint64_t rate)
+{
+        struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+        struct nix_tm_shaper_profile profile;
+        struct mbox *mbox = (&nix->dev)->mbox;
+        struct nix_tm_node *node, *parent;
+
+        volatile uint64_t *reg, *regval;
+        struct nix_txschq_config *req;
+        uint16_t flags;
+        uint8_t k = 0;
+        int rc;
+
+        if (nix->tm_tree != ROC_NIX_TM_RLIMIT ||
+            !(nix->tm_flags & NIX_TM_HIERARCHY_ENA))
+                return NIX_ERR_TM_INVALID_TREE;
+
+        node = nix_tm_node_search(nix, qid, ROC_NIX_TM_RLIMIT);
+
+        /* check if we found a valid leaf node */
+        if (!node || !nix_tm_is_leaf(nix, node->lvl) || !node->parent ||
+            node->parent->hw_id == NIX_TM_HW_ID_INVALID)
+                return NIX_ERR_TM_INVALID_NODE;
+
+        parent = node->parent;
+        flags = parent->flags;
+
+        req = mbox_alloc_msg_nix_txschq_cfg(mbox);
+        req->lvl = NIX_TXSCH_LVL_MDQ;
+        reg = req->reg;
+        regval = req->regval;
+
+        if (rate == 0) {
+                k += nix_tm_sw_xoff_prep(parent, true, &reg[k], &regval[k]);
+                flags &= ~NIX_TM_NODE_ENABLED;
+                goto exit;
+        }
+
+        if (!(flags & NIX_TM_NODE_ENABLED)) {
+                k += nix_tm_sw_xoff_prep(parent, false, &reg[k], &regval[k]);
+                flags |= NIX_TM_NODE_ENABLED;
+        }
+
+        /* Use only PIR for rate limit */
+        memset(&profile, 0, sizeof(profile));
+        profile.peak.rate = rate;
+        /* Minimum burst of ~4us Bytes of Tx */
+        profile.peak.size = PLT_MAX((uint64_t)roc_nix_max_pkt_len(roc_nix),
+                                    (4ul * rate) / ((uint64_t)1E6 * 8));
+        if (!nix->tm_rate_min || nix->tm_rate_min > rate)
+                nix->tm_rate_min = rate;
+
+        k += nix_tm_shaper_reg_prep(parent, &profile, &reg[k], &regval[k]);
+exit:
+        req->num_regs = k;
+        rc = mbox_process(mbox);
+        if (rc)
+                return rc;
+
+        parent->flags = flags;
+        return 0;
+}
+
+void
+roc_nix_tm_fini(struct roc_nix *roc_nix)
+{
+        struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+        struct mbox *mbox = (&nix->dev)->mbox;
+        struct nix_txsch_free_req *req;
+        uint32_t tree_mask;
+        uint8_t hw_lvl;
+        int rc;
+
+        /* Xmit is assumed to be disabled */
+        /* Free up resources already held */
+        tree_mask = NIX_TM_TREE_MASK_ALL;
+        rc = nix_tm_free_resources(roc_nix, tree_mask, false);
+        if (rc)
+                plt_err("Failed to freeup existing nodes or rsrcs, rc=%d", rc);
+
+        /* Free all other hw resources */
+        req = mbox_alloc_msg_nix_txsch_free(mbox);
+        if (req == NULL)
+                return;
+
+        req->flags = TXSCHQ_FREE_ALL;
+        rc = mbox_process(mbox);
+        if (rc)
+                plt_err("Failed to freeup all res, rc=%d", rc);
+
+        for (hw_lvl = 0; hw_lvl < NIX_TXSCH_LVL_CNT; hw_lvl++) {
+                plt_bitmap_reset(nix->schq_bmp[hw_lvl]);
+                plt_bitmap_reset(nix->schq_contig_bmp[hw_lvl]);
+                nix->contig_rsvd[hw_lvl] = 0;
+                nix->discontig_rsvd[hw_lvl] = 0;
+        }
+
+        /* Clear shaper profiles */
+        nix_tm_clear_shaper_profiles(nix);
+        nix->tm_tree = 0;
+        nix->tm_flags &= ~NIX_TM_HIERARCHY_ENA;
+}
diff --git a/drivers/common/cnxk/version.map b/drivers/common/cnxk/version.map
index 9c860ff..854c3c1 100644
--- a/drivers/common/cnxk/version.map
+++ b/drivers/common/cnxk/version.map
@@ -104,9 +104,11 @@ INTERNAL {
         roc_nix_xstats_names_get;
         roc_nix_switch_hdr_set;
         roc_nix_eeprom_info_get;
+        roc_nix_tm_fini;
         roc_nix_tm_free_resources;
         roc_nix_tm_hierarchy_disable;
         roc_nix_tm_hierarchy_enable;
+        roc_nix_tm_init;
         roc_nix_tm_node_add;
         roc_nix_tm_node_delete;
         roc_nix_tm_node_get;
@@ -114,6 +116,7 @@ INTERNAL {
         roc_nix_tm_node_name_get;
         roc_nix_tm_node_next;
         roc_nix_tm_node_pkt_mode_update;
+        roc_nix_tm_rlimit_sq;
         roc_nix_tm_shaper_profile_add;
         roc_nix_tm_shaper_profile_delete;
         roc_nix_tm_shaper_profile_get;