From patchwork Thu Mar 12 11:19:07 2020
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Nithin Dabilpuram
X-Patchwork-Id: 66586
X-Patchwork-Delegate: jerinj@marvell.com
From: Nithin Dabilpuram
To: Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K
CC: Krzysztof Kanas
Date: Thu, 12 Mar 2020 16:49:07 +0530
Message-ID: <20200312111907.31555-12-ndabilpuram@marvell.com>
X-Mailer: git-send-email 2.8.4
In-Reply-To: <20200312111907.31555-1-ndabilpuram@marvell.com>
References: <20200312111907.31555-1-ndabilpuram@marvell.com>
Subject: [dpdk-dev] [PATCH 11/11] net/octeontx2: add tm capability callbacks

From: Krzysztof Kanas

Add Traffic Management capability callbacks to provide global, level and
node capabilities. This patch also adds documentation on Traffic
Management support.
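Not part of the patch, for review context only: a minimal sketch of how an
application would reach the new callbacks through the generic rte_tm API.
It assumes port_id is a started OCTEON TX2 ethdev port; error handling is
abbreviated.

    #include <stdio.h>
    #include <rte_tm.h>

    /* Sketch only: fetch the global capabilities that
     * nix_tm_capabilities_get() below fills in. rte_tm_capabilities_get()
     * reaches it via the .tm_ops_get ethdev op this patch registers.
     */
    static int
    show_tm_caps(uint16_t port_id)
    {
    	struct rte_tm_capabilities cap;
    	struct rte_tm_error err;
    	int rc;

    	rc = rte_tm_capabilities_get(port_id, &cap, &err);
    	if (rc)
    		return rc;

    	printf("max nodes: %u, max levels: %u\n",
    	       cap.n_nodes_max, cap.n_levels_max);
    	return 0;
    }
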
Signed-off-by: Nithin Dabilpuram
Signed-off-by: Krzysztof Kanas
---
 doc/guides/nics/features/octeontx2.ini |   1 +
 doc/guides/nics/octeontx2.rst          |  15 +++
 drivers/net/octeontx2/otx2_ethdev.c    |   1 +
 drivers/net/octeontx2/otx2_tm.c        | 232 +++++++++++++++++++++++++++++++++
 drivers/net/octeontx2/otx2_tm.h        |   1 +
 5 files changed, 250 insertions(+)

diff --git a/doc/guides/nics/features/octeontx2.ini b/doc/guides/nics/features/octeontx2.ini
index 473fe56..fb13517 100644
--- a/doc/guides/nics/features/octeontx2.ini
+++ b/doc/guides/nics/features/octeontx2.ini
@@ -31,6 +31,7 @@ Inline protocol = Y
 VLAN filter = Y
 Flow control = Y
 Flow API = Y
+Rate limitation = Y
 Jumbo frame = Y
 Scattered Rx = Y
 VLAN offload = Y
diff --git a/doc/guides/nics/octeontx2.rst b/doc/guides/nics/octeontx2.rst
index 60187ec..6b885d6 100644
--- a/doc/guides/nics/octeontx2.rst
+++ b/doc/guides/nics/octeontx2.rst
@@ -39,6 +39,7 @@ Features of the OCTEON TX2 Ethdev PMD are:
 - HW offloaded `ethdev Rx queue` to `eventdev event queue` packet injection
 - Support Rx interrupt
 - Inline IPsec processing support
+- :ref:`Traffic Management API `

 Prerequisites
 -------------
@@ -213,6 +214,20 @@ Runtime Config Options
    parameters to all the PCIe devices if application requires to configure on
    all the ethdev ports.

+Traffic Management API
+----------------------
+
+The OCTEON TX2 PMD supports the generic DPDK Traffic Management API, which
+allows configuring the following features:
+
+1. Hierarchical scheduling
+2. Single rate - two color, two rate - three color shaping
+
+Both DWRR and Static Priority (SP) hierarchical scheduling are supported.
+Every parent can have at most 10 SP children and an unlimited number of
+DWRR children. Both PF and VF support the Traffic Management API, with the
+PF supporting 6 levels and the VF supporting 5 levels of topology.
+
 Limitations
 -----------
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index 78b7f3a..599a14c 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -2026,6 +2026,7 @@ static const struct eth_dev_ops otx2_eth_dev_ops = {
 	.link_update = otx2_nix_link_update,
 	.tx_queue_setup = otx2_nix_tx_queue_setup,
 	.tx_queue_release = otx2_nix_tx_queue_release,
+	.tm_ops_get = otx2_nix_tm_ops_get,
 	.rx_queue_setup = otx2_nix_rx_queue_setup,
 	.rx_queue_release = otx2_nix_rx_queue_release,
 	.dev_start = otx2_nix_dev_start,
diff --git a/drivers/net/octeontx2/otx2_tm.c b/drivers/net/octeontx2/otx2_tm.c
index bafb9aa..1ccb441 100644
--- a/drivers/net/octeontx2/otx2_tm.c
+++ b/drivers/net/octeontx2/otx2_tm.c
@@ -1825,7 +1825,217 @@ nix_tm_node_type_get(struct rte_eth_dev *eth_dev, uint32_t node_id,
 		*is_leaf = true;
 	else
 		*is_leaf = false;
+	return 0;
+}

+static int
+nix_tm_capabilities_get(struct rte_eth_dev *eth_dev,
+			struct rte_tm_capabilities *cap,
+			struct rte_tm_error *error)
+{
+	struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+	struct otx2_mbox *mbox = dev->mbox;
+	int rc, max_nr_nodes = 0, i;
+	struct free_rsrcs_rsp *rsp;
+
+	memset(cap, 0, sizeof(*cap));
+
+	otx2_mbox_alloc_msg_free_rsrc_cnt(mbox);
+	rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
+	if (rc) {
+		error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+		error->message = "unexpected fatal error";
+		return rc;
+	}
+
+	for (i = 0; i < NIX_TXSCH_LVL_TL1; i++)
+		max_nr_nodes += rsp->schq[i];
+
+	cap->n_nodes_max = max_nr_nodes + dev->tm_leaf_cnt;
+	/* TL1 level is reserved for PF */
+	cap->n_levels_max = nix_tm_have_tl1_access(dev) ?
+				OTX2_TM_LVL_MAX : OTX2_TM_LVL_MAX - 1;
+	cap->non_leaf_nodes_identical = 1;
+	cap->leaf_nodes_identical = 1;
+
+	/* Shaper Capabilities */
+	cap->shaper_private_n_max = max_nr_nodes;
+	cap->shaper_n_max = max_nr_nodes;
+	cap->shaper_private_dual_rate_n_max = max_nr_nodes;
+	cap->shaper_private_rate_min = MIN_SHAPER_RATE / 8;
+	cap->shaper_private_rate_max = MAX_SHAPER_RATE / 8;
+	cap->shaper_pkt_length_adjust_min = 0;
+	cap->shaper_pkt_length_adjust_max = 0;
+
+	/* Scheduler Capabilities */
+	cap->sched_n_children_max = rsp->schq[NIX_TXSCH_LVL_MDQ];
+	cap->sched_sp_n_priorities_max = TXSCH_TLX_SP_PRIO_MAX;
+	cap->sched_wfq_n_children_per_group_max = cap->sched_n_children_max;
+	cap->sched_wfq_n_groups_max = 1;
+	cap->sched_wfq_weight_max = MAX_SCHED_WEIGHT;
+
+	cap->dynamic_update_mask =
+		RTE_TM_UPDATE_NODE_PARENT_KEEP_LEVEL |
+		RTE_TM_UPDATE_NODE_SUSPEND_RESUME;
+	cap->stats_mask =
+		RTE_TM_STATS_N_PKTS |
+		RTE_TM_STATS_N_BYTES |
+		RTE_TM_STATS_N_PKTS_RED_DROPPED |
+		RTE_TM_STATS_N_BYTES_RED_DROPPED;
+
+	for (i = 0; i < RTE_COLORS; i++) {
+		cap->mark_vlan_dei_supported[i] = false;
+		cap->mark_ip_ecn_tcp_supported[i] = false;
+		cap->mark_ip_dscp_supported[i] = false;
+	}
+
+	return 0;
+}
+
+static int
+nix_tm_level_capabilities_get(struct rte_eth_dev *eth_dev, uint32_t lvl,
+			      struct rte_tm_level_capabilities *cap,
+			      struct rte_tm_error *error)
+{
+	struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+	struct otx2_mbox *mbox = dev->mbox;
+	struct free_rsrcs_rsp *rsp;
+	uint16_t hw_lvl;
+	int rc;
+
+	memset(cap, 0, sizeof(*cap));
+
+	otx2_mbox_alloc_msg_free_rsrc_cnt(mbox);
+	rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
+	if (rc) {
+		error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+		error->message = "unexpected fatal error";
+		return rc;
+	}
+
+	hw_lvl = nix_tm_lvl2nix(dev, lvl);
+
+	if (nix_tm_is_leaf(dev, lvl)) {
+		/* Leaf */
+		cap->n_nodes_max = dev->tm_leaf_cnt;
+		cap->n_nodes_leaf_max = dev->tm_leaf_cnt;
+		cap->leaf_nodes_identical = 1;
+		cap->leaf.stats_mask =
+			RTE_TM_STATS_N_PKTS |
+			RTE_TM_STATS_N_BYTES;
+
+	} else if (lvl == OTX2_TM_LVL_ROOT) {
+		/* Root node, aka TL2(vf)/TL1(pf) */
+		cap->n_nodes_max = 1;
+		cap->n_nodes_nonleaf_max = 1;
+		cap->non_leaf_nodes_identical = 1;
+
+		cap->nonleaf.shaper_private_supported = true;
+		cap->nonleaf.shaper_private_dual_rate_supported =
+			!nix_tm_have_tl1_access(dev);
+		cap->nonleaf.shaper_private_rate_min = MIN_SHAPER_RATE / 8;
+		cap->nonleaf.shaper_private_rate_max = MAX_SHAPER_RATE / 8;
+
+		cap->nonleaf.sched_n_children_max = rsp->schq[hw_lvl - 1];
+		cap->nonleaf.sched_sp_n_priorities_max =
+			nix_max_prio(dev, hw_lvl) + 1;
+		cap->nonleaf.sched_wfq_n_groups_max = 1;
+		cap->nonleaf.sched_wfq_weight_max = MAX_SCHED_WEIGHT;
+
+		if (nix_tm_have_tl1_access(dev))
+			cap->nonleaf.stats_mask =
+				RTE_TM_STATS_N_PKTS_RED_DROPPED |
+				RTE_TM_STATS_N_BYTES_RED_DROPPED;
+	} else if ((lvl < OTX2_TM_LVL_MAX) &&
+		   (hw_lvl < NIX_TXSCH_LVL_CNT)) {
+		/* TL2, TL3, TL4, MDQ */
+		cap->n_nodes_max = rsp->schq[hw_lvl];
+		cap->n_nodes_nonleaf_max = cap->n_nodes_max;
+		cap->non_leaf_nodes_identical = 1;
+
+		cap->nonleaf.shaper_private_supported = true;
+		cap->nonleaf.shaper_private_dual_rate_supported = true;
+		cap->nonleaf.shaper_private_rate_min = MIN_SHAPER_RATE / 8;
+		cap->nonleaf.shaper_private_rate_max = MAX_SHAPER_RATE / 8;
+
+		/* MDQ doesn't support Strict Priority */
+		if (hw_lvl == NIX_TXSCH_LVL_MDQ)
+			cap->nonleaf.sched_n_children_max = dev->tm_leaf_cnt;
+		else
+			cap->nonleaf.sched_n_children_max =
+				rsp->schq[hw_lvl - 1];
+		cap->nonleaf.sched_sp_n_priorities_max =
+			nix_max_prio(dev, hw_lvl) + 1;
+		cap->nonleaf.sched_wfq_n_groups_max = 1;
+		cap->nonleaf.sched_wfq_weight_max = MAX_SCHED_WEIGHT;
+	} else {
+		/* unsupported level */
+		error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+		error->message = "unsupported level";
+		return -EINVAL;
+	}
+	return 0;
+}
+
+static int
+nix_tm_node_capabilities_get(struct rte_eth_dev *eth_dev, uint32_t node_id,
+			     struct rte_tm_node_capabilities *cap,
+			     struct rte_tm_error *error)
+{
+	struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+	struct otx2_mbox *mbox = dev->mbox;
+	struct otx2_nix_tm_node *tm_node;
+	struct free_rsrcs_rsp *rsp;
+	int rc, hw_lvl, lvl;
+
+	memset(cap, 0, sizeof(*cap));
+
+	tm_node = nix_tm_node_search(dev, node_id, true);
+	if (!tm_node) {
+		error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+		error->message = "no such node";
+		return -EINVAL;
+	}
+
+	hw_lvl = tm_node->hw_lvl;
+	lvl = tm_node->lvl;
+
+	/* Leaf node */
+	if (nix_tm_is_leaf(dev, lvl)) {
+		cap->stats_mask = RTE_TM_STATS_N_PKTS |
+			RTE_TM_STATS_N_BYTES;
+		return 0;
+	}
+
+	otx2_mbox_alloc_msg_free_rsrc_cnt(mbox);
+	rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
+	if (rc) {
+		error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+		error->message = "unexpected fatal error";
+		return rc;
+	}
+
+	/* Non Leaf Shaper */
+	cap->shaper_private_supported = true;
+	cap->shaper_private_dual_rate_supported =
+		(hw_lvl != NIX_TXSCH_LVL_TL1);
+	cap->shaper_private_rate_min = MIN_SHAPER_RATE / 8;
+	cap->shaper_private_rate_max = MAX_SHAPER_RATE / 8;
+
+	/* Non Leaf Scheduler */
+	if (hw_lvl == NIX_TXSCH_LVL_MDQ)
+		cap->nonleaf.sched_n_children_max = dev->tm_leaf_cnt;
+	else
+		cap->nonleaf.sched_n_children_max = rsp->schq[hw_lvl - 1];
+
+	cap->nonleaf.sched_sp_n_priorities_max = nix_max_prio(dev, hw_lvl) + 1;
+	cap->nonleaf.sched_wfq_n_children_per_group_max =
+		cap->nonleaf.sched_n_children_max;
+	cap->nonleaf.sched_wfq_n_groups_max = 1;
+	cap->nonleaf.sched_wfq_weight_max = MAX_SCHED_WEIGHT;
+
+	if (hw_lvl == NIX_TXSCH_LVL_TL1)
+		cap->stats_mask = RTE_TM_STATS_N_PKTS_RED_DROPPED |
+			RTE_TM_STATS_N_BYTES_RED_DROPPED;
 	return 0;
 }

@@ -2505,6 +2715,10 @@ nix_tm_node_stats_read(struct rte_eth_dev *eth_dev, uint32_t node_id,

 const struct rte_tm_ops otx2_tm_ops = {
 	.node_type_get = nix_tm_node_type_get,
+	.capabilities_get = nix_tm_capabilities_get,
+	.level_capabilities_get = nix_tm_level_capabilities_get,
+	.node_capabilities_get = nix_tm_node_capabilities_get,
+
 	.shaper_profile_add = nix_tm_shaper_profile_add,
 	.shaper_profile_delete = nix_tm_shaper_profile_delete,

@@ -2901,6 +3115,24 @@ otx2_nix_tm_set_queue_rate_limit(struct rte_eth_dev *eth_dev,
 }

 int
+otx2_nix_tm_ops_get(struct rte_eth_dev *eth_dev, void *arg)
+{
+	struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+
+	if (!arg)
+		return -EINVAL;
+
+	/* Check for supported revisions */
+	if (otx2_dev_is_95xx_Ax(dev) ||
+	    otx2_dev_is_96xx_Ax(dev))
+		return -EINVAL;
+
+	*(const void **)arg = &otx2_tm_ops;
+
+	return 0;
+}
+
+int
 otx2_nix_tm_fini(struct rte_eth_dev *eth_dev)
 {
 	struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
diff --git a/drivers/net/octeontx2/otx2_tm.h b/drivers/net/octeontx2/otx2_tm.h
index 7b1672e..9675182 100644
--- a/drivers/net/octeontx2/otx2_tm.h
+++ b/drivers/net/octeontx2/otx2_tm.h
@@ -19,6 +19,7 @@ struct otx2_eth_dev;
 void otx2_nix_tm_conf_init(struct rte_eth_dev *eth_dev);
 int otx2_nix_tm_init_default(struct rte_eth_dev *eth_dev);
 int otx2_nix_tm_fini(struct rte_eth_dev *eth_dev);
+int otx2_nix_tm_ops_get(struct rte_eth_dev *eth_dev, void *ops);
 int otx2_nix_tm_get_leaf_data(struct otx2_eth_dev *dev, uint16_t sq,
 			      uint32_t *rr_quantum, uint16_t *smq);
 int otx2_nix_tm_set_queue_rate_limit(struct rte_eth_dev *eth_dev,
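
Also not part of the patch: a similarly hedged sketch of exercising the
per-level query added above. It assumes port_id is a started OCTEON TX2
port and n_levels was taken from cap.n_levels_max returned by
rte_tm_capabilities_get().

    #include <stdio.h>
    #include <rte_tm.h>

    /* Sketch only: print, for each level the PMD reports, how many nodes
     * that level can hold; served by nix_tm_level_capabilities_get().
     */
    static int
    show_level_caps(uint16_t port_id, uint32_t n_levels)
    {
    	struct rte_tm_level_capabilities lcap;
    	struct rte_tm_error err;
    	uint32_t lvl;
    	int rc;

    	for (lvl = 0; lvl < n_levels; lvl++) {
    		rc = rte_tm_level_capabilities_get(port_id, lvl,
    						   &lcap, &err);
    		if (rc)
    			return rc;
    		printf("level %u: max nodes %u (leaf max %u)\n",
    		       lvl, lcap.n_nodes_max, lcap.n_nodes_leaf_max);
    	}
    	return 0;
    }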