From patchwork Wed Apr 22 17:21:04 2020
X-Patchwork-Submitter: Nithin Dabilpuram
X-Patchwork-Id: 69116
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Nithin Dabilpuram
To: Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K
Cc: dev@dpdk.org, kkanas@marvell.com
Date: Wed, 22 Apr 2020 22:51:04 +0530
Message-Id: <20200422172104.23099-4-nithind1988@gmail.com>
In-Reply-To: <20200422172104.23099-1-nithind1988@gmail.com>
References: <20200330160019.29674-1-ndabilpuram@marvell.com>
 <20200422172104.23099-1-nithind1988@gmail.com>
Subject: [dpdk-dev] [PATCH v4 4/4] net/octeontx2: support tm length adjust and pkt mode

From: Nithin Dabilpuram

This patch adds support for the packet length adjust TM feature on
private shapers. It also adds support for the packet mode feature,
which applies both to private shapers and to DWRR scheduling among a
node's SP children.
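For illustration, a minimal sketch of how an application could exercise
both features through the generic rte_tm API. All ids, rates, bursts and
the helper name below are hypothetical, not part of this patch:

#include <string.h>
#include <rte_tm.h>

/* Sketch: create a packet-mode shaper profile and attach it to a
 * non-leaf node whose children are DWRR-scheduled in packet mode. */
static int
setup_pkt_mode(uint16_t port_id, uint32_t profile_id,
	       uint32_t node_id, uint32_t parent_id)
{
	struct rte_tm_shaper_profile_params sp;
	struct rte_tm_node_params np;
	struct rte_tm_error err;
	int wfq_pkt_mode = 0;	/* 0 => DWRR weights counted in packets */
	int rc;

	memset(&sp, 0, sizeof(sp));
	sp.packet_mode = 1;	/* shaper rate/burst in packets, not bytes */
	sp.peak.rate = 100000;	/* 100K packets per second */
	sp.peak.size = 64;	/* burst of 64 packets */
	/* In byte mode (packet_mode == 0), sp.pkt_length_adjust within
	 * the advertised min/max would be honoured instead. */

	rc = rte_tm_shaper_profile_add(port_id, profile_id, &sp, &err);
	if (rc)
		return rc;

	memset(&np, 0, sizeof(np));
	np.shaper_profile_id = profile_id;
	np.nonleaf.n_sp_priorities = 1;
	np.nonleaf.wfq_weight_mode = &wfq_pkt_mode;

	return rte_tm_node_add(port_id, node_id, parent_id, 0 /* prio */,
			       1 /* weight */, RTE_TM_NODE_LEVEL_ID_ANY,
			       &np, &err);
}

The sketch keeps the shaper profile and the WFQ weight mode consistent
on purpose: the driver rejects a node whose DWRR packet/byte mode
disagrees with its profile's packet_mode ("shaper wfq packet mode
mismatch").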
Signed-off-by: Nithin Dabilpuram
---
v3..v4:
- No change.

v2..v3:
- No change.

v1..v2:
- Newly included patch.

 drivers/net/octeontx2/otx2_tm.c | 140 +++++++++++++++++++++++++++++++++-------
 drivers/net/octeontx2/otx2_tm.h |   5 ++
 2 files changed, 122 insertions(+), 23 deletions(-)

diff --git a/drivers/net/octeontx2/otx2_tm.c b/drivers/net/octeontx2/otx2_tm.c
index f94618d..fa7d21b 100644
--- a/drivers/net/octeontx2/otx2_tm.c
+++ b/drivers/net/octeontx2/otx2_tm.c
@@ -336,18 +336,25 @@ prepare_tm_shaper_reg(struct otx2_nix_tm_node *tm_node,
 {
 	struct shaper_params cir, pir;
 	uint32_t schq = tm_node->hw_id;
+	uint64_t adjust = 0;
 	uint8_t k = 0;
 
 	memset(&cir, 0, sizeof(cir));
 	memset(&pir, 0, sizeof(pir));
 	shaper_config_to_nix(profile, &cir, &pir);
 
-	otx2_tm_dbg("Shaper config node %s(%u) lvl %u id %u, "
-		    "pir %" PRIu64 "(%" PRIu64 "B),"
-		    " cir %" PRIu64 "(%" PRIu64 "B) (%p)",
-		    nix_hwlvl2str(tm_node->hw_lvl), schq, tm_node->lvl,
-		    tm_node->id, pir.rate, pir.burst,
-		    cir.rate, cir.burst, tm_node);
+	/* Packet length adjust */
+	if (tm_node->pkt_mode)
+		adjust = 1;
+	else if (profile)
+		adjust = profile->params.pkt_length_adjust & 0x1FF;
+
+	otx2_tm_dbg("Shaper config node %s(%u) lvl %u id %u, pir %" PRIu64
+		    "(%" PRIu64 "B), cir %" PRIu64 "(%" PRIu64 "B)"
+		    "adjust 0x%" PRIx64 "(pktmode %u) (%p)",
+		    nix_hwlvl2str(tm_node->hw_lvl), schq, tm_node->lvl,
+		    tm_node->id, pir.rate, pir.burst, cir.rate, cir.burst,
+		    adjust, tm_node->pkt_mode, tm_node);
 
 	switch (tm_node->hw_lvl) {
 	case NIX_TXSCH_LVL_SMQ:
@@ -364,7 +371,9 @@ prepare_tm_shaper_reg(struct otx2_nix_tm_node *tm_node,
 
 		/* Configure RED ALG */
 		reg[k] = NIX_AF_MDQX_SHAPE(schq);
-		regval[k] = ((uint64_t)tm_node->red_algo << 9);
+		regval[k] = (adjust |
+			     (uint64_t)tm_node->red_algo << 9 |
+			     (uint64_t)tm_node->pkt_mode << 24);
 		k++;
 		break;
 	case NIX_TXSCH_LVL_TL4:
@@ -381,7 +390,9 @@ prepare_tm_shaper_reg(struct otx2_nix_tm_node *tm_node,
 
 		/* Configure RED algo */
 		reg[k] = NIX_AF_TL4X_SHAPE(schq);
-		regval[k] = ((uint64_t)tm_node->red_algo << 9);
+		regval[k] = (adjust |
+			     (uint64_t)tm_node->red_algo << 9 |
+			     (uint64_t)tm_node->pkt_mode << 24);
 		k++;
 		break;
 	case NIX_TXSCH_LVL_TL3:
@@ -398,7 +409,9 @@ prepare_tm_shaper_reg(struct otx2_nix_tm_node *tm_node,
 
 		/* Configure RED algo */
 		reg[k] = NIX_AF_TL3X_SHAPE(schq);
-		regval[k] = ((uint64_t)tm_node->red_algo << 9);
+		regval[k] = (adjust |
+			     (uint64_t)tm_node->red_algo << 9 |
+			     (uint64_t)tm_node->pkt_mode << 24);
 		k++;
 		break;
 
@@ -416,7 +429,9 @@ prepare_tm_shaper_reg(struct otx2_nix_tm_node *tm_node,
 
 		/* Configure RED algo */
 		reg[k] = NIX_AF_TL2X_SHAPE(schq);
-		regval[k] = ((uint64_t)tm_node->red_algo << 9);
+		regval[k] = (adjust |
+			     (uint64_t)tm_node->red_algo << 9 |
+			     (uint64_t)tm_node->pkt_mode << 24);
 		k++;
 		break;
 
@@ -426,6 +441,12 @@ prepare_tm_shaper_reg(struct otx2_nix_tm_node *tm_node,
 		regval[k] = (cir.rate && cir.burst) ?
 				(shaper2regval(&cir) | 1) : 0;
 		k++;
+
+		/* Configure length disable and adjust */
+		reg[k] = NIX_AF_TL1X_SHAPE(schq);
+		regval[k] = (adjust |
+			     (uint64_t)tm_node->pkt_mode << 24);
+		k++;
 		break;
 	}
 
@@ -773,6 +794,15 @@ nix_tm_node_add_to_list(struct otx2_eth_dev *dev, uint32_t node_id,
 	tm_node->flags = 0;
 	if (user)
 		tm_node->flags = NIX_TM_NODE_USER;
+
+	/* Packet mode */
+	if (!nix_tm_is_leaf(dev, lvl) &&
+	    ((profile && profile->params.packet_mode) ||
+	     (params->nonleaf.wfq_weight_mode &&
+	      params->nonleaf.n_sp_priorities &&
+	      !params->nonleaf.wfq_weight_mode[0])))
+		tm_node->pkt_mode = 1;
+
 	rte_memcpy(&tm_node->params, params,
 		   sizeof(struct rte_tm_node_params));
 	if (profile)
@@ -1873,8 +1903,10 @@ otx2_nix_tm_capa_get(struct rte_eth_dev *eth_dev,
 	cap->shaper_private_dual_rate_n_max = max_nr_nodes;
 	cap->shaper_private_rate_min = MIN_SHAPER_RATE / 8;
 	cap->shaper_private_rate_max = MAX_SHAPER_RATE / 8;
-	cap->shaper_pkt_length_adjust_min = 0;
-	cap->shaper_pkt_length_adjust_max = 0;
+	cap->shaper_private_packet_mode_supported = 1;
+	cap->shaper_private_byte_mode_supported = 1;
+	cap->shaper_pkt_length_adjust_min = NIX_LENGTH_ADJUST_MIN;
+	cap->shaper_pkt_length_adjust_max = NIX_LENGTH_ADJUST_MAX;
 
 	/* Schedule Capabilities */
 	cap->sched_n_children_max = rsp->schq[NIX_TXSCH_LVL_MDQ];
@@ -1882,6 +1914,8 @@ otx2_nix_tm_capa_get(struct rte_eth_dev *eth_dev,
 	cap->sched_wfq_n_children_per_group_max = cap->sched_n_children_max;
 	cap->sched_wfq_n_groups_max = 1;
 	cap->sched_wfq_weight_max = MAX_SCHED_WEIGHT;
+	cap->sched_wfq_packet_mode_supported = 1;
+	cap->sched_wfq_byte_mode_supported = 1;
 
 	cap->dynamic_update_mask =
 		RTE_TM_UPDATE_NODE_PARENT_KEEP_LEVEL |
@@ -1944,12 +1978,16 @@ otx2_nix_tm_level_capa_get(struct rte_eth_dev *eth_dev, uint32_t lvl,
 			nix_tm_have_tl1_access(dev) ? false : true;
 		cap->nonleaf.shaper_private_rate_min = MIN_SHAPER_RATE / 8;
 		cap->nonleaf.shaper_private_rate_max = MAX_SHAPER_RATE / 8;
+		cap->nonleaf.shaper_private_packet_mode_supported = 1;
+		cap->nonleaf.shaper_private_byte_mode_supported = 1;
 
 		cap->nonleaf.sched_n_children_max = rsp->schq[hw_lvl - 1];
 		cap->nonleaf.sched_sp_n_priorities_max =
 			nix_max_prio(dev, hw_lvl) + 1;
 		cap->nonleaf.sched_wfq_n_groups_max = 1;
 		cap->nonleaf.sched_wfq_weight_max = MAX_SCHED_WEIGHT;
+		cap->nonleaf.sched_wfq_packet_mode_supported = 1;
+		cap->nonleaf.sched_wfq_byte_mode_supported = 1;
 
 		if (nix_tm_have_tl1_access(dev))
 			cap->nonleaf.stats_mask =
@@ -1966,6 +2004,8 @@ otx2_nix_tm_level_capa_get(struct rte_eth_dev *eth_dev, uint32_t lvl,
 		cap->nonleaf.shaper_private_dual_rate_supported = true;
 		cap->nonleaf.shaper_private_rate_min = MIN_SHAPER_RATE / 8;
 		cap->nonleaf.shaper_private_rate_max = MAX_SHAPER_RATE / 8;
+		cap->nonleaf.shaper_private_packet_mode_supported = 1;
+		cap->nonleaf.shaper_private_byte_mode_supported = 1;
 
 		/* MDQ doesn't support Strict Priority */
 		if (hw_lvl == NIX_TXSCH_LVL_MDQ)
@@ -1977,6 +2017,8 @@ otx2_nix_tm_level_capa_get(struct rte_eth_dev *eth_dev, uint32_t lvl,
 			nix_max_prio(dev, hw_lvl) + 1;
 		cap->nonleaf.sched_wfq_n_groups_max = 1;
 		cap->nonleaf.sched_wfq_weight_max = MAX_SCHED_WEIGHT;
+		cap->nonleaf.sched_wfq_packet_mode_supported = 1;
+		cap->nonleaf.sched_wfq_byte_mode_supported = 1;
 	} else {
 		/* unsupported level */
 		error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
@@ -2029,6 +2071,8 @@ otx2_nix_tm_node_capa_get(struct rte_eth_dev *eth_dev, uint32_t node_id,
 		(hw_lvl == NIX_TXSCH_LVL_TL1) ? false : true;
 	cap->shaper_private_rate_min = MIN_SHAPER_RATE / 8;
 	cap->shaper_private_rate_max = MAX_SHAPER_RATE / 8;
+	cap->shaper_private_packet_mode_supported = 1;
+	cap->shaper_private_byte_mode_supported = 1;
 
 	/* Non Leaf Scheduler */
 	if (hw_lvl == NIX_TXSCH_LVL_MDQ)
@@ -2041,6 +2085,8 @@ otx2_nix_tm_node_capa_get(struct rte_eth_dev *eth_dev, uint32_t node_id,
 		cap->nonleaf.sched_n_children_max;
 	cap->nonleaf.sched_wfq_n_groups_max = 1;
 	cap->nonleaf.sched_wfq_weight_max = MAX_SCHED_WEIGHT;
+	cap->nonleaf.sched_wfq_packet_mode_supported = 1;
+	cap->nonleaf.sched_wfq_byte_mode_supported = 1;
 
 	if (hw_lvl == NIX_TXSCH_LVL_TL1)
 		cap->stats_mask = RTE_TM_STATS_N_PKTS_RED_DROPPED |
@@ -2096,6 +2142,13 @@ otx2_nix_tm_shaper_profile_add(struct rte_eth_dev *eth_dev,
 		}
 	}
 
+	if (params->pkt_length_adjust < NIX_LENGTH_ADJUST_MIN ||
+	    params->pkt_length_adjust > NIX_LENGTH_ADJUST_MAX) {
+		error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE_PKT_ADJUST_LEN;
+		error->message = "length adjust invalid";
+		return -EINVAL;
+	}
+
 	profile = rte_zmalloc("otx2_nix_tm_shaper_profile",
 			      sizeof(struct otx2_nix_tm_shaper_profile), 0);
 	if (!profile)
@@ -2108,13 +2161,14 @@ otx2_nix_tm_shaper_profile_add(struct rte_eth_dev *eth_dev,
 
 	otx2_tm_dbg("Added TM shaper profile %u, "
 		    " pir %" PRIu64 " , pbs %" PRIu64 ", cir %" PRIu64
-		    ", cbs %" PRIu64 " , adj %u",
+		    ", cbs %" PRIu64 " , adj %u, pkt mode %d",
 		    profile_id,
 		    params->peak.rate * 8,
 		    params->peak.size,
 		    params->committed.rate * 8,
 		    params->committed.size,
-		    params->pkt_length_adjust);
+		    params->pkt_length_adjust,
+		    params->packet_mode);
 
 	/* Translate rate as bits per second */
 	profile->params.peak.rate = profile->params.peak.rate * 8;
@@ -2170,9 +2224,11 @@ otx2_nix_tm_node_add(struct rte_eth_dev *eth_dev, uint32_t node_id,
 		     struct rte_tm_error *error)
 {
 	struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+	struct otx2_nix_tm_shaper_profile *profile = NULL;
 	struct otx2_nix_tm_node *parent_node;
-	int rc, clear_on_fail = 0;
-	uint32_t exp_next_lvl;
+	int rc, pkt_mode, clear_on_fail = 0;
+	uint32_t exp_next_lvl, i;
+	uint32_t profile_id;
 	uint16_t hw_lvl;
 
 	/* we don't support dynamic updates */
@@ -2234,13 +2290,45 @@ otx2_nix_tm_node_add(struct rte_eth_dev *eth_dev, uint32_t node_id,
 		return -EINVAL;
 	}
 
-	/* Check if shaper profile exists for non leaf node */
-	if (!nix_tm_is_leaf(dev, lvl) &&
-	    params->shaper_profile_id != RTE_TM_SHAPER_PROFILE_ID_NONE &&
-	    !nix_tm_shaper_profile_search(dev, params->shaper_profile_id)) {
-		error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE_ID;
-		error->message = "invalid shaper profile";
-		return -EINVAL;
+	if (!nix_tm_is_leaf(dev, lvl)) {
+		/* Check if shaper profile exists for non leaf node */
+		profile_id = params->shaper_profile_id;
+		profile = nix_tm_shaper_profile_search(dev, profile_id);
+		if (profile_id != RTE_TM_SHAPER_PROFILE_ID_NONE && !profile) {
+			error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE_ID;
+			error->message = "invalid shaper profile";
+			return -EINVAL;
+		}
+
+		/* Minimum static priority count is 1 */
+		if (!params->nonleaf.n_sp_priorities ||
+		    params->nonleaf.n_sp_priorities > TXSCH_TLX_SP_PRIO_MAX) {
+			error->type =
+				RTE_TM_ERROR_TYPE_NODE_PARAMS_N_SP_PRIORITIES;
+			error->message = "invalid sp priorities";
+			return -EINVAL;
+		}
+
+		pkt_mode = 0;
+		/* Validate weight mode */
+		for (i = 0; i < params->nonleaf.n_sp_priorities &&
+		     params->nonleaf.wfq_weight_mode; i++) {
+			pkt_mode = !params->nonleaf.wfq_weight_mode[i];
+			if (pkt_mode == !params->nonleaf.wfq_weight_mode[0])
+				continue;
+
+			error->type =
+				RTE_TM_ERROR_TYPE_NODE_PARAMS_WFQ_WEIGHT_MODE;
+			error->message = "unsupported weight mode";
+			return -EINVAL;
+		}
+
+		if (profile && params->nonleaf.n_sp_priorities &&
+		    pkt_mode != profile->params.packet_mode) {
+			error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE;
+			error->message = "shaper wfq packet mode mismatch";
+			return -EINVAL;
+		}
 	}
 
 	/* Check if there is second DWRR already in siblings or holes in prio */
@@ -2482,6 +2570,12 @@ otx2_nix_tm_node_shaper_update(struct rte_eth_dev *eth_dev,
 		}
 	}
 
+	if (profile && profile->params.packet_mode != tm_node->pkt_mode) {
+		error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE_ID;
+		error->message = "shaper profile pkt mode mismatch";
+		return -EINVAL;
+	}
+
 	tm_node->params.shaper_profile_id = profile_id;
 
 	/* Nothing to do if not yet committed */
diff --git a/drivers/net/octeontx2/otx2_tm.h b/drivers/net/octeontx2/otx2_tm.h
index 9675182..cdca987 100644
--- a/drivers/net/octeontx2/otx2_tm.h
+++ b/drivers/net/octeontx2/otx2_tm.h
@@ -48,6 +48,7 @@ struct otx2_nix_tm_node {
 #define NIX_TM_NODE_USER	BIT_ULL(2)
 	/* Shaper algorithm for RED state @NIX_REDALG_E */
 	uint32_t red_algo:2;
+	uint32_t pkt_mode:1;
 
 	struct otx2_nix_tm_node *parent;
 	struct rte_tm_node_params params;
@@ -114,6 +115,10 @@ TAILQ_HEAD(otx2_nix_tm_shaper_profile_list, otx2_nix_tm_shaper_profile);
 #define MAX_SHAPER_RATE \
 	SHAPER_RATE(MAX_RATE_EXPONENT, MAX_RATE_MANTISSA, 0)
 
+/* Min is limited so that NIX_AF_SMQX_CFG[MINLEN]+ADJUST is not -ve */
+#define NIX_LENGTH_ADJUST_MIN ((int)-NIX_MIN_HW_FRS + 1)
+#define NIX_LENGTH_ADJUST_MAX 255
+
 /** TM Shaper - low level operations */
 
 /** NIX burst limits */
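
As a usage note on the new bounds: NIX_LENGTH_ADJUST_MIN keeps
NIX_AF_SMQX_CFG[MINLEN] + ADJUST non-negative (e.g. -59 when
NIX_MIN_HW_FRS is 60), and an application can discover the advertised
range at runtime instead of hard-coding it. A minimal sketch, with
port_id hypothetical:

#include <stdio.h>
#include <rte_tm.h>

/* Print the shaper packet length adjust range a port supports. */
static void
print_adjust_range(uint16_t port_id)
{
	struct rte_tm_capabilities cap;
	struct rte_tm_error err;

	/* With this patch the otx2 PMD reports NIX_LENGTH_ADJUST_MIN/MAX
	 * in these fields instead of 0/0. */
	if (rte_tm_capabilities_get(port_id, &cap, &err) == 0)
		printf("pkt_length_adjust range: [%d, %d]\n",
		       (int)cap.shaper_pkt_length_adjust_min,
		       (int)cap.shaper_pkt_length_adjust_max);
}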