From patchwork Mon Feb 14 10:10:00 2022
X-Patchwork-Submitter: Sunil Kumar Kori
X-Patchwork-Id: 107445
X-Patchwork-Delegate: jerinj@marvell.com
To: Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao, Ray Kinsella
Subject: [PATCH v9 1/2] common/cnxk: support priority flow ctrl config API
Date: Mon, 14 Feb 2022 15:40:00 +0530
Message-ID: <20220214101001.498992-1-skori@marvell.com>
In-Reply-To: <20220214090247.493995-2-skori@marvell.com>
References: <20220214090247.493995-2-skori@marvell.com>

From: Sunil Kumar Kori

CNXK platforms support priority flow control (802.1Qbb) to pause the respective traffic class on a link. This patch adds RoC interfaces to configure priority flow control on the MAC block, i.e. CGX on cn9k and RPM on cn10k.
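[Editor's note: to show the intended call flow of the new RoC API, here is a minimal sketch; it is not part of the diff below. The caller, the chosen traffic class, and the error handling are assumptions; only the struct fields and function signatures come from this patch.]

/* Hypothetical caller: enable PFC on traffic class 3 in both
 * directions via the RoC API added by this patch. Assumes an
 * initialized NIX and the roc_nix.h declarations below.
 */
static int
example_pfc_enable_tc3(struct roc_nix *roc_nix)
{
	struct roc_nix_pfc_cfg pfc_cfg;
	int rc;

	memset(&pfc_cfg, 0, sizeof(pfc_cfg));
	pfc_cfg.mode = ROC_NIX_FC_FULL; /* pause on both Rx and Tx */
	pfc_cfg.tc = 3;			/* for SET, an index in [0, 15] */

	rc = roc_nix_pfc_mode_set(roc_nix, &pfc_cfg);
	if (rc)
		return rc;

	/* For GET, tc is returned as a bitmap of enabled classes. */
	memset(&pfc_cfg, 0, sizeof(pfc_cfg));
	rc = roc_nix_pfc_mode_get(roc_nix, &pfc_cfg);
	if (rc)
		return rc;

	return (pfc_cfg.tc & (1U << 3)) ? 0 : -EIO;
}
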
Signed-off-by: Sunil Kumar Kori --- v1..v2: - fix RoC API naming convention. v2..v3: - fix pause quanta configuration for cn10k. - remove unnecessary code v3..v4: - fix PFC configuration with other type of TM tree i.e. default, user and rate limit tree. v4..v5: - rebase on top of tree - fix review comments - fix initialization error for LBK devices v5..v6: - fix review comments v6..v7: - no change v7..v8: - rebase on top of 22.03-rc1 v8..v9: - no change drivers/common/cnxk/roc_mbox.h | 19 ++- drivers/common/cnxk/roc_nix.h | 21 ++++ drivers/common/cnxk/roc_nix_fc.c | 95 +++++++++++++-- drivers/common/cnxk/roc_nix_priv.h | 6 +- drivers/common/cnxk/roc_nix_tm.c | 171 ++++++++++++++++++++++++++- drivers/common/cnxk/roc_nix_tm_ops.c | 14 ++- drivers/common/cnxk/version.map | 4 + 7 files changed, 310 insertions(+), 20 deletions(-) diff --git a/drivers/common/cnxk/roc_mbox.h b/drivers/common/cnxk/roc_mbox.h index 8967858914..b608f58357 100644 --- a/drivers/common/cnxk/roc_mbox.h +++ b/drivers/common/cnxk/roc_mbox.h @@ -95,6 +95,8 @@ struct mbox_msghdr { msg_rsp) \ M(CGX_STATS_RST, 0x21A, cgx_stats_rst, msg_req, msg_rsp) \ M(RPM_STATS, 0x21C, rpm_stats, msg_req, rpm_stats_rsp) \ + M(CGX_PRIO_FLOW_CTRL_CFG, 0x21F, cgx_prio_flow_ctrl_cfg, cgx_pfc_cfg, \ + cgx_pfc_rsp) \ /* NPA mbox IDs (range 0x400 - 0x5FF) */ \ M(NPA_LF_ALLOC, 0x400, npa_lf_alloc, npa_lf_alloc_req, \ npa_lf_alloc_rsp) \ @@ -551,6 +553,19 @@ struct cgx_pause_frm_cfg { uint8_t __io tx_pause; }; +struct cgx_pfc_cfg { + struct mbox_msghdr hdr; + uint8_t __io rx_pause; + uint8_t __io tx_pause; + uint16_t __io pfc_en; /* bitmap indicating enabled traffic classes */ +}; + +struct cgx_pfc_rsp { + struct mbox_msghdr hdr; + uint8_t __io rx_pause; + uint8_t __io tx_pause; +}; + struct sfp_eeprom_s { #define SFP_EEPROM_SIZE 256 uint16_t __io sff_id; @@ -1125,7 +1140,9 @@ struct nix_bp_cfg_req { /* PF can be mapped to either CGX or LBK interface, * so maximum 64 channels are possible. */ -#define NIX_MAX_CHAN 64 +#define NIX_MAX_CHAN 64 +#define NIX_CGX_MAX_CHAN 16 +#define NIX_LBK_MAX_CHAN 1 struct nix_bp_cfg_rsp { struct mbox_msghdr hdr; /* Channel and bpid mapping */ diff --git a/drivers/common/cnxk/roc_nix.h b/drivers/common/cnxk/roc_nix.h index 755212c8f9..680a34cdcd 100644 --- a/drivers/common/cnxk/roc_nix.h +++ b/drivers/common/cnxk/roc_nix.h @@ -165,16 +165,27 @@ struct roc_nix_fc_cfg { struct { uint32_t rq; + uint16_t tc; uint16_t cq_drop; bool enable; } cq_cfg; struct { + uint32_t sq; + uint16_t tc; bool enable; } tm_cfg; }; }; +struct roc_nix_pfc_cfg { + enum roc_nix_fc_mode mode; + /* For SET, tc must be [0, 15]. 
+ * For GET, TC will represent bitmap + */ + uint16_t tc; +}; + struct roc_nix_eeprom_info { #define ROC_NIX_EEPROM_SIZE 256 uint16_t sff_id; @@ -478,6 +489,7 @@ void __roc_api roc_nix_unregister_cq_irqs(struct roc_nix *roc_nix); enum roc_nix_tm_tree { ROC_NIX_TM_DEFAULT = 0, ROC_NIX_TM_RLIMIT, + ROC_NIX_TM_PFC, ROC_NIX_TM_USER, ROC_NIX_TM_TREE_MAX, }; @@ -624,6 +636,7 @@ roc_nix_tm_shaper_default_red_algo(struct roc_nix_tm_node *node, int __roc_api roc_nix_tm_lvl_cnt_get(struct roc_nix *roc_nix); int __roc_api roc_nix_tm_lvl_have_link_access(struct roc_nix *roc_nix, int lvl); int __roc_api roc_nix_tm_prepare_rate_limited_tree(struct roc_nix *roc_nix); +int __roc_api roc_nix_tm_pfc_prepare_tree(struct roc_nix *roc_nix); bool __roc_api roc_nix_tm_is_user_hierarchy_enabled(struct roc_nix *nix); int __roc_api roc_nix_tm_tree_type_get(struct roc_nix *nix); @@ -739,6 +752,14 @@ int __roc_api roc_nix_fc_config_get(struct roc_nix *roc_nix, int __roc_api roc_nix_fc_mode_set(struct roc_nix *roc_nix, enum roc_nix_fc_mode mode); +int __roc_api roc_nix_pfc_mode_set(struct roc_nix *roc_nix, + struct roc_nix_pfc_cfg *pfc_cfg); + +int __roc_api roc_nix_pfc_mode_get(struct roc_nix *roc_nix, + struct roc_nix_pfc_cfg *pfc_cfg); + +uint16_t __roc_api roc_nix_chan_count_get(struct roc_nix *roc_nix); + enum roc_nix_fc_mode __roc_api roc_nix_fc_mode_get(struct roc_nix *roc_nix); void __roc_api rox_nix_fc_npa_bp_cfg(struct roc_nix *roc_nix, uint64_t pool_id, diff --git a/drivers/common/cnxk/roc_nix_fc.c b/drivers/common/cnxk/roc_nix_fc.c index d31137188e..8e31443b8f 100644 --- a/drivers/common/cnxk/roc_nix_fc.c +++ b/drivers/common/cnxk/roc_nix_fc.c @@ -36,7 +36,7 @@ nix_fc_rxchan_bpid_set(struct roc_nix *roc_nix, bool enable) struct mbox *mbox = get_mbox(roc_nix); struct nix_bp_cfg_req *req; struct nix_bp_cfg_rsp *rsp; - int rc = -ENOSPC; + int rc = -ENOSPC, i; if (roc_nix_is_sdp(roc_nix)) return 0; @@ -45,22 +45,28 @@ nix_fc_rxchan_bpid_set(struct roc_nix *roc_nix, bool enable) req = mbox_alloc_msg_nix_bp_enable(mbox); if (req == NULL) return rc; + req->chan_base = 0; - req->chan_cnt = 1; - req->bpid_per_chan = 0; + if (roc_nix_is_lbk(roc_nix)) + req->chan_cnt = NIX_LBK_MAX_CHAN; + else + req->chan_cnt = NIX_CGX_MAX_CHAN; + + req->bpid_per_chan = true; rc = mbox_process_msg(mbox, (void *)&rsp); if (rc || (req->chan_cnt != rsp->chan_cnt)) goto exit; - nix->bpid[0] = rsp->chan_bpid[0]; nix->chan_cnt = rsp->chan_cnt; + for (i = 0; i < rsp->chan_cnt; i++) + nix->bpid[i] = rsp->chan_bpid[i] & 0x1FF; } else { req = mbox_alloc_msg_nix_bp_disable(mbox); if (req == NULL) return rc; req->chan_base = 0; - req->chan_cnt = 1; + req->chan_cnt = nix->chan_cnt; rc = mbox_process(mbox); if (rc) @@ -161,7 +167,7 @@ nix_fc_cq_config_set(struct roc_nix *roc_nix, struct roc_nix_fc_cfg *fc_cfg) aq->op = NIX_AQ_INSTOP_WRITE; if (fc_cfg->cq_cfg.enable) { - aq->cq.bpid = nix->bpid[0]; + aq->cq.bpid = nix->bpid[fc_cfg->cq_cfg.tc]; aq->cq_mask.bpid = ~(aq->cq_mask.bpid); aq->cq.bp = fc_cfg->cq_cfg.cq_drop; aq->cq_mask.bp = ~(aq->cq_mask.bp); @@ -181,7 +187,7 @@ nix_fc_cq_config_set(struct roc_nix *roc_nix, struct roc_nix_fc_cfg *fc_cfg) aq->op = NIX_AQ_INSTOP_WRITE; if (fc_cfg->cq_cfg.enable) { - aq->cq.bpid = nix->bpid[0]; + aq->cq.bpid = nix->bpid[fc_cfg->cq_cfg.tc]; aq->cq_mask.bpid = ~(aq->cq_mask.bpid); aq->cq.bp = fc_cfg->cq_cfg.cq_drop; aq->cq_mask.bp = ~(aq->cq_mask.bp); @@ -222,7 +228,9 @@ roc_nix_fc_config_set(struct roc_nix *roc_nix, struct roc_nix_fc_cfg *fc_cfg) return nix_fc_rxchan_bpid_set(roc_nix, 
fc_cfg->rxchan_cfg.enable); else if (fc_cfg->type == ROC_NIX_FC_TM_CFG) - return nix_tm_bp_config_set(roc_nix, fc_cfg->tm_cfg.enable); + return nix_tm_bp_config_set(roc_nix, fc_cfg->tm_cfg.sq, + fc_cfg->tm_cfg.tc, + fc_cfg->tm_cfg.enable); return -EINVAL; } @@ -403,3 +411,74 @@ rox_nix_fc_npa_bp_cfg(struct roc_nix *roc_nix, uint64_t pool_id, uint8_t ena, mbox_process(mbox); } + +int +roc_nix_pfc_mode_set(struct roc_nix *roc_nix, struct roc_nix_pfc_cfg *pfc_cfg) +{ + struct nix *nix = roc_nix_to_nix_priv(roc_nix); + struct mbox *mbox = get_mbox(roc_nix); + uint8_t tx_pause, rx_pause; + struct cgx_pfc_cfg *req; + struct cgx_pfc_rsp *rsp; + int rc = -ENOSPC; + + if (roc_nix_is_lbk(roc_nix)) + return NIX_ERR_OP_NOTSUP; + + rx_pause = (pfc_cfg->mode == ROC_NIX_FC_FULL) || + (pfc_cfg->mode == ROC_NIX_FC_RX); + tx_pause = (pfc_cfg->mode == ROC_NIX_FC_FULL) || + (pfc_cfg->mode == ROC_NIX_FC_TX); + + req = mbox_alloc_msg_cgx_prio_flow_ctrl_cfg(mbox); + if (req == NULL) + goto exit; + + req->pfc_en = pfc_cfg->tc; + req->rx_pause = rx_pause; + req->tx_pause = tx_pause; + + rc = mbox_process_msg(mbox, (void *)&rsp); + if (rc) + goto exit; + + nix->rx_pause = rsp->rx_pause; + nix->tx_pause = rsp->tx_pause; + if (rsp->tx_pause) + nix->cev |= BIT(pfc_cfg->tc); + else + nix->cev &= ~BIT(pfc_cfg->tc); + +exit: + return rc; +} + +int +roc_nix_pfc_mode_get(struct roc_nix *roc_nix, struct roc_nix_pfc_cfg *pfc_cfg) +{ + struct nix *nix = roc_nix_to_nix_priv(roc_nix); + + if (roc_nix_is_lbk(roc_nix)) + return NIX_ERR_OP_NOTSUP; + + pfc_cfg->tc = nix->cev; + + if (nix->rx_pause && nix->tx_pause) + pfc_cfg->mode = ROC_NIX_FC_FULL; + else if (nix->rx_pause) + pfc_cfg->mode = ROC_NIX_FC_RX; + else if (nix->tx_pause) + pfc_cfg->mode = ROC_NIX_FC_TX; + else + pfc_cfg->mode = ROC_NIX_FC_NONE; + + return 0; +} + +uint16_t +roc_nix_chan_count_get(struct roc_nix *roc_nix) +{ + struct nix *nix = roc_nix_to_nix_priv(roc_nix); + + return nix->chan_cnt; +} diff --git a/drivers/common/cnxk/roc_nix_priv.h b/drivers/common/cnxk/roc_nix_priv.h index deb2a6ba11..f3889424c4 100644 --- a/drivers/common/cnxk/roc_nix_priv.h +++ b/drivers/common/cnxk/roc_nix_priv.h @@ -33,6 +33,7 @@ struct nix_qint { /* Traffic Manager */ #define NIX_TM_MAX_HW_TXSCHQ 512 #define NIX_TM_HW_ID_INVALID UINT32_MAX +#define NIX_TM_CHAN_INVALID UINT16_MAX /* TM flags */ #define NIX_TM_HIERARCHY_ENA BIT_ULL(0) @@ -56,6 +57,7 @@ struct nix_tm_node { uint32_t priority; uint32_t weight; uint16_t lvl; + uint16_t rel_chan; uint32_t parent_id; uint32_t shaper_profile_id; void (*free_fn)(void *node); @@ -139,6 +141,7 @@ struct nix { uint16_t msixoff; uint8_t rx_pause; uint8_t tx_pause; + uint16_t cev; uint64_t rx_cfg; struct dev dev; uint16_t cints; @@ -376,7 +379,8 @@ int nix_rq_cfg(struct dev *dev, struct roc_nix_rq *rq, uint16_t qints, bool cfg, bool ena); int nix_rq_ena_dis(struct dev *dev, struct roc_nix_rq *rq, bool enable); int nix_tm_bp_config_get(struct roc_nix *roc_nix, bool *is_enabled); -int nix_tm_bp_config_set(struct roc_nix *roc_nix, bool enable); +int nix_tm_bp_config_set(struct roc_nix *roc_nix, uint16_t sq, uint16_t tc, + bool enable); void nix_rq_vwqe_flush(struct roc_nix_rq *rq, uint16_t vwqe_interval); /* diff --git a/drivers/common/cnxk/roc_nix_tm.c b/drivers/common/cnxk/roc_nix_tm.c index a0448bec61..670cf66db4 100644 --- a/drivers/common/cnxk/roc_nix_tm.c +++ b/drivers/common/cnxk/roc_nix_tm.c @@ -121,7 +121,7 @@ nix_tm_txsch_reg_config(struct nix *nix, enum roc_nix_tm_tree tree) if (is_pf_or_lbk && !skip_bp && node->hw_lvl == 
nix->tm_link_cfg_lvl) { node->bp_capa = 1; - skip_bp = true; + skip_bp = false; } rc = nix_tm_node_reg_conf(nix, node); @@ -317,21 +317,38 @@ nix_tm_clear_path_xoff(struct nix *nix, struct nix_tm_node *node) } int -nix_tm_bp_config_set(struct roc_nix *roc_nix, bool enable) +nix_tm_bp_config_set(struct roc_nix *roc_nix, uint16_t sq, uint16_t tc, + bool enable) { struct nix *nix = roc_nix_to_nix_priv(roc_nix); enum roc_nix_tm_tree tree = nix->tm_tree; struct mbox *mbox = (&nix->dev)->mbox; struct nix_txschq_config *req = NULL; struct nix_tm_node_list *list; + struct nix_tm_node *sq_node; + struct nix_tm_node *parent; struct nix_tm_node *node; uint8_t k = 0; uint16_t link; int rc = 0; + sq_node = nix_tm_node_search(nix, sq, nix->tm_tree); + parent = sq_node->parent; + while (parent) { + if (parent->lvl == ROC_TM_LVL_SCH2) + break; + + parent = parent->parent; + } + list = nix_tm_node_list(nix, tree); link = nix->tx_link; + if (parent->rel_chan != NIX_TM_CHAN_INVALID && parent->rel_chan != tc) { + rc = -EINVAL; + goto err; + } + TAILQ_FOREACH(node, list, node) { if (node->hw_lvl != nix->tm_link_cfg_lvl) continue; @@ -339,6 +356,9 @@ nix_tm_bp_config_set(struct roc_nix *roc_nix, bool enable) if (!(node->flags & NIX_TM_NODE_HWRES) || !node->bp_capa) continue; + if (node->hw_id != parent->hw_id) + continue; + if (!req) { req = mbox_alloc_msg_nix_txschq_cfg(mbox); req->lvl = nix->tm_link_cfg_lvl; @@ -346,8 +366,9 @@ nix_tm_bp_config_set(struct roc_nix *roc_nix, bool enable) } req->reg[k] = NIX_AF_TL3_TL2X_LINKX_CFG(node->hw_id, link); - req->regval[k] = enable ? BIT_ULL(13) : 0; - req->regval_mask[k] = ~BIT_ULL(13); + req->regval[k] = enable ? tc : 0; + req->regval[k] |= enable ? BIT_ULL(13) : 0; + req->regval_mask[k] = ~(BIT_ULL(13) | GENMASK_ULL(7, 0)); k++; if (k >= MAX_REGS_PER_MBOX_MSG) { @@ -366,6 +387,7 @@ nix_tm_bp_config_set(struct roc_nix *roc_nix, bool enable) goto err; } + parent->rel_chan = enable ? 
tc : NIX_TM_CHAN_INVALID; return 0; err: plt_err("Failed to %s bp on link %u, rc=%d(%s)", @@ -602,7 +624,7 @@ nix_tm_sq_flush_pre(struct roc_nix_sq *sq) } /* Disable backpressure */ - rc = nix_tm_bp_config_set(roc_nix, false); + rc = nix_tm_bp_config_set(roc_nix, sq->qid, 0, false); if (rc) { plt_err("Failed to disable backpressure for flush, rc=%d", rc); return rc; @@ -731,7 +753,7 @@ nix_tm_sq_flush_post(struct roc_nix_sq *sq) return 0; /* Restore backpressure */ - rc = nix_tm_bp_config_set(roc_nix, true); + rc = nix_tm_bp_config_set(roc_nix, sq->qid, 0, true); if (rc) { plt_err("Failed to restore backpressure, rc=%d", rc); return rc; @@ -1299,6 +1321,7 @@ nix_tm_prepare_default_tree(struct roc_nix *roc_nix) node->shaper_profile_id = ROC_NIX_TM_SHAPER_PROFILE_NONE; node->lvl = lvl; node->tree = ROC_NIX_TM_DEFAULT; + node->rel_chan = NIX_TM_CHAN_INVALID; rc = nix_tm_node_add(roc_nix, node); if (rc) @@ -1325,6 +1348,7 @@ nix_tm_prepare_default_tree(struct roc_nix *roc_nix) node->shaper_profile_id = ROC_NIX_TM_SHAPER_PROFILE_NONE; node->lvl = leaf_lvl; node->tree = ROC_NIX_TM_DEFAULT; + node->rel_chan = NIX_TM_CHAN_INVALID; rc = nix_tm_node_add(roc_nix, node); if (rc) @@ -1365,6 +1389,7 @@ roc_nix_tm_prepare_rate_limited_tree(struct roc_nix *roc_nix) node->shaper_profile_id = ROC_NIX_TM_SHAPER_PROFILE_NONE; node->lvl = lvl; node->tree = ROC_NIX_TM_RLIMIT; + node->rel_chan = NIX_TM_CHAN_INVALID; rc = nix_tm_node_add(roc_nix, node); if (rc) @@ -1390,6 +1415,7 @@ roc_nix_tm_prepare_rate_limited_tree(struct roc_nix *roc_nix) node->shaper_profile_id = ROC_NIX_TM_SHAPER_PROFILE_NONE; node->lvl = lvl; node->tree = ROC_NIX_TM_RLIMIT; + node->rel_chan = NIX_TM_CHAN_INVALID; rc = nix_tm_node_add(roc_nix, node); if (rc) @@ -1414,6 +1440,139 @@ roc_nix_tm_prepare_rate_limited_tree(struct roc_nix *roc_nix) node->shaper_profile_id = ROC_NIX_TM_SHAPER_PROFILE_NONE; node->lvl = leaf_lvl; node->tree = ROC_NIX_TM_RLIMIT; + node->rel_chan = NIX_TM_CHAN_INVALID; + + rc = nix_tm_node_add(roc_nix, node); + if (rc) + goto error; + } + + return 0; +error: + nix_tm_node_free(node); + return rc; +} + +int +roc_nix_tm_pfc_prepare_tree(struct roc_nix *roc_nix) +{ + struct nix *nix = roc_nix_to_nix_priv(roc_nix); + uint32_t nonleaf_id = nix->nb_tx_queues; + struct nix_tm_node *node = NULL; + uint8_t leaf_lvl, lvl, lvl_end; + uint32_t tl2_node_id; + uint32_t parent, i; + int rc = -ENOMEM; + + parent = ROC_NIX_TM_NODE_ID_INVALID; + lvl_end = ROC_TM_LVL_SCH3; + leaf_lvl = ROC_TM_LVL_QUEUE; + + /* TL1 node */ + node = nix_tm_node_alloc(); + if (!node) + goto error; + + node->id = nonleaf_id; + node->parent_id = parent; + node->priority = 0; + node->weight = NIX_TM_DFLT_RR_WT; + node->shaper_profile_id = ROC_NIX_TM_SHAPER_PROFILE_NONE; + node->lvl = ROC_TM_LVL_ROOT; + node->tree = ROC_NIX_TM_PFC; + node->rel_chan = NIX_TM_CHAN_INVALID; + + rc = nix_tm_node_add(roc_nix, node); + if (rc) + goto error; + + parent = nonleaf_id; + nonleaf_id++; + + /* TL2 node */ + rc = -ENOMEM; + node = nix_tm_node_alloc(); + if (!node) + goto error; + + node->id = nonleaf_id; + node->parent_id = parent; + node->priority = 0; + node->weight = NIX_TM_DFLT_RR_WT; + node->shaper_profile_id = ROC_NIX_TM_SHAPER_PROFILE_NONE; + node->lvl = ROC_TM_LVL_SCH1; + node->tree = ROC_NIX_TM_PFC; + node->rel_chan = NIX_TM_CHAN_INVALID; + + rc = nix_tm_node_add(roc_nix, node); + if (rc) + goto error; + + tl2_node_id = nonleaf_id; + nonleaf_id++; + + for (i = 0; i < nix->nb_tx_queues; i++) { + parent = tl2_node_id; + for (lvl = ROC_TM_LVL_SCH2; lvl <= lvl_end; 
lvl++) { + rc = -ENOMEM; + node = nix_tm_node_alloc(); + if (!node) + goto error; + + node->id = nonleaf_id; + node->parent_id = parent; + node->priority = 0; + node->weight = NIX_TM_DFLT_RR_WT; + node->shaper_profile_id = + ROC_NIX_TM_SHAPER_PROFILE_NONE; + node->lvl = lvl; + node->tree = ROC_NIX_TM_PFC; + node->rel_chan = NIX_TM_CHAN_INVALID; + + rc = nix_tm_node_add(roc_nix, node); + if (rc) + goto error; + + parent = nonleaf_id; + nonleaf_id++; + } + + lvl = ROC_TM_LVL_SCH4; + + rc = -ENOMEM; + node = nix_tm_node_alloc(); + if (!node) + goto error; + + node->id = nonleaf_id; + node->parent_id = parent; + node->priority = 0; + node->weight = NIX_TM_DFLT_RR_WT; + node->shaper_profile_id = ROC_NIX_TM_SHAPER_PROFILE_NONE; + node->lvl = lvl; + node->tree = ROC_NIX_TM_PFC; + node->rel_chan = NIX_TM_CHAN_INVALID; + + rc = nix_tm_node_add(roc_nix, node); + if (rc) + goto error; + + parent = nonleaf_id; + nonleaf_id++; + + rc = -ENOMEM; + node = nix_tm_node_alloc(); + if (!node) + goto error; + + node->id = i; + node->parent_id = parent; + node->priority = 0; + node->weight = NIX_TM_DFLT_RR_WT; + node->shaper_profile_id = ROC_NIX_TM_SHAPER_PROFILE_NONE; + node->lvl = leaf_lvl; + node->tree = ROC_NIX_TM_PFC; + node->rel_chan = NIX_TM_CHAN_INVALID; rc = nix_tm_node_add(roc_nix, node); if (rc) diff --git a/drivers/common/cnxk/roc_nix_tm_ops.c b/drivers/common/cnxk/roc_nix_tm_ops.c index 3d81247a12..d3d39eeb99 100644 --- a/drivers/common/cnxk/roc_nix_tm_ops.c +++ b/drivers/common/cnxk/roc_nix_tm_ops.c @@ -464,10 +464,16 @@ roc_nix_tm_hierarchy_disable(struct roc_nix *roc_nix) /* Disable backpressure, it will be enabled back if needed on * hierarchy enable */ - rc = nix_tm_bp_config_set(roc_nix, false); - if (rc) { - plt_err("Failed to disable backpressure for flush, rc=%d", rc); - goto cleanup; + for (i = 0; i < sq_cnt; i++) { + sq = nix->sqs[i]; + if (!sq) + continue; + + rc = nix_tm_bp_config_set(roc_nix, sq->qid, 0, false); + if (rc) { + plt_err("Failed to disable backpressure, rc=%d", rc); + goto cleanup; + } } /* Flush all tx queues */ diff --git a/drivers/common/cnxk/version.map b/drivers/common/cnxk/version.map index ad1b5e8476..37ec100451 100644 --- a/drivers/common/cnxk/version.map +++ b/drivers/common/cnxk/version.map @@ -107,6 +107,7 @@ INTERNAL { roc_nix_bpf_stats_reset; roc_nix_bpf_stats_to_idx; roc_nix_bpf_timeunit_get; + roc_nix_chan_count_get; roc_nix_cq_dump; roc_nix_cq_fini; roc_nix_cq_head_tail_get; @@ -198,6 +199,8 @@ INTERNAL { roc_nix_npc_promisc_ena_dis; roc_nix_npc_rx_ena_dis; roc_nix_npc_mcast_config; + roc_nix_pfc_mode_get; + roc_nix_pfc_mode_set; roc_nix_ptp_clock_read; roc_nix_ptp_info_cb_register; roc_nix_ptp_info_cb_unregister; @@ -263,6 +266,7 @@ INTERNAL { roc_nix_tm_node_stats_get; roc_nix_tm_node_suspend_resume; roc_nix_tm_prealloc_res; + roc_nix_tm_pfc_prepare_tree; roc_nix_tm_prepare_rate_limited_tree; roc_nix_tm_rlimit_sq; roc_nix_tm_root_has_sp;
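
[Editor's note: patch 2/2 below consumes these RoC hooks per queue. As a bridge, here is a minimal sketch, mirroring what net/cnxk does in nix_priority_flow_ctrl_configure(); the function and parameter names are illustrative, and an initialized RoC NIX with its CQs and SQs already created is assumed.]

/* Illustrative sketch, not part of the patch: bind TC 3 to one CQ
 * (Rx backpressure) and one SQ (Tx backpressure) using the per-TC
 * fields added to struct roc_nix_fc_cfg by patch 1/2.
 */
static int
example_bind_tc3(struct roc_nix *roc_nix, struct roc_nix_cq *cq,
		 struct roc_nix_sq *sq)
{
	struct roc_nix_fc_cfg fc_cfg;
	int rc;

	/* Rx side: point the CQ at the BPID of TC 3's channel. */
	memset(&fc_cfg, 0, sizeof(fc_cfg));
	fc_cfg.type = ROC_NIX_FC_CQ_CFG;
	fc_cfg.cq_cfg.rq = cq->qid;
	fc_cfg.cq_cfg.tc = 3;
	fc_cfg.cq_cfg.cq_drop = cq->drop_thresh;
	fc_cfg.cq_cfg.enable = true;
	rc = roc_nix_fc_config_set(roc_nix, &fc_cfg);
	if (rc)
		return rc;

	/* Tx side: program TC 3 into the SQ's TL3/TL2 link config. */
	memset(&fc_cfg, 0, sizeof(fc_cfg));
	fc_cfg.type = ROC_NIX_FC_TM_CFG;
	fc_cfg.tm_cfg.sq = sq->qid;
	fc_cfg.tm_cfg.tc = 3;
	fc_cfg.tm_cfg.enable = true;
	return roc_nix_fc_config_set(roc_nix, &fc_cfg);
}
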
From patchwork Mon Feb 14 10:10:01 2022
X-Patchwork-Submitter: Sunil Kumar Kori
X-Patchwork-Id: 107444
X-Patchwork-Delegate: jerinj@marvell.com
To: Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao
Subject: [PATCH v9 2/2] net/cnxk: support priority flow control
Date: Mon, 14 Feb 2022 15:40:01 +0530
Message-ID: <20220214101001.498992-2-skori@marvell.com>
In-Reply-To: <20220214101001.498992-1-skori@marvell.com>
References: <20220214090247.493995-2-skori@marvell.com> <20220214101001.498992-1-skori@marvell.com>

From: Sunil Kumar Kori

This patch implements priority flow control support for CNXK platforms.

Signed-off-by: Sunil Kumar Kori --- v1..v2: - fix application restart issue. v2..v3: - fix pause quanta configuration for cn10k. - fix review comments. v3..v4: - fix PFC configuration with other type of TM tree i.e. default, user and rate limit tree. v4..v5: - rebase on top of tree.
v5..v6: - fix review comments v6..v7: - use correct FC mode flags v7..v8: - rebase on top of 22.03-rc1 v8..v9: - update documentation and release notes doc/guides/nics/cnxk.rst | 1 + doc/guides/rel_notes/release_22_03.rst | 4 + drivers/net/cnxk/cnxk_ethdev.c | 30 ++++ drivers/net/cnxk/cnxk_ethdev.h | 20 +++ drivers/net/cnxk/cnxk_ethdev_ops.c | 187 +++++++++++++++++++++++-- 5 files changed, 233 insertions(+), 9 deletions(-) diff --git a/doc/guides/nics/cnxk.rst b/doc/guides/nics/cnxk.rst index 27a94204cb..c9467f5d2a 100644 --- a/doc/guides/nics/cnxk.rst +++ b/doc/guides/nics/cnxk.rst @@ -36,6 +36,7 @@ Features of the CNXK Ethdev PMD are: - Support Rx interrupt - Inline IPsec processing support - Ingress meter support +- Queue based priority flow control support Prerequisites ------------- diff --git a/doc/guides/rel_notes/release_22_03.rst b/doc/guides/rel_notes/release_22_03.rst index ff3095d742..479448fafc 100644 --- a/doc/guides/rel_notes/release_22_03.rst +++ b/doc/guides/rel_notes/release_22_03.rst @@ -136,6 +136,10 @@ New Features * Added AES-CMAC support in CN9K & CN10K. * Added ESN and anti-replay support in lookaside protocol (IPsec) for CN10K. +* **Updated Marvell cnxk ethdev PMD.** + + * Added queue based priority flow control support for CN9K & CN10K. + * **Added support for CPM2.0b devices to Intel QuickAssist Technology PMD.** * CPM2.0b (4942) devices are now enabled for QAT crypto PMD. diff --git a/drivers/net/cnxk/cnxk_ethdev.c b/drivers/net/cnxk/cnxk_ethdev.c index 27751a6956..37ae0939d7 100644 --- a/drivers/net/cnxk/cnxk_ethdev.c +++ b/drivers/net/cnxk/cnxk_ethdev.c @@ -1251,6 +1251,8 @@ cnxk_nix_configure(struct rte_eth_dev *eth_dev) goto cq_fini; } + /* Initialize TC to SQ mapping as invalid */ + memset(dev->pfc_tc_sq_map, 0xFF, sizeof(dev->pfc_tc_sq_map)); /* * Restore queue config when reconfigure followed by * reconfigure and no queue configure invoked from application case. @@ -1539,6 +1541,10 @@ struct eth_dev_ops cnxk_eth_dev_ops = { .tx_burst_mode_get = cnxk_nix_tx_burst_mode_get, .flow_ctrl_get = cnxk_nix_flow_ctrl_get, .flow_ctrl_set = cnxk_nix_flow_ctrl_set, + .priority_flow_ctrl_queue_config = + cnxk_nix_priority_flow_ctrl_queue_config, + .priority_flow_ctrl_queue_info_get = + cnxk_nix_priority_flow_ctrl_queue_info_get, .dev_set_link_up = cnxk_nix_set_link_up, .dev_set_link_down = cnxk_nix_set_link_down, .get_module_info = cnxk_nix_get_module_info, @@ -1718,6 +1724,8 @@ cnxk_eth_dev_uninit(struct rte_eth_dev *eth_dev, bool reset) { struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev); const struct eth_dev_ops *dev_ops = eth_dev->dev_ops; + struct rte_eth_pfc_queue_conf pfc_conf = {0}; + struct rte_eth_fc_conf fc_conf = {0}; struct roc_nix *nix = &dev->nix; int rc, i; @@ -1733,6 +1741,28 @@ cnxk_eth_dev_uninit(struct rte_eth_dev *eth_dev, bool reset) roc_nix_npc_rx_ena_dis(nix, false); + /* Restore 802.3 Flow control configuration */ + fc_conf.mode = RTE_ETH_FC_NONE; + rc = cnxk_nix_flow_ctrl_set(eth_dev, &fc_conf); + + pfc_conf.mode = RTE_ETH_FC_NONE; + for (i = 0; i < CNXK_NIX_PFC_CHAN_COUNT; i++) { + if (dev->pfc_tc_sq_map[i] != 0xFFFF) { + pfc_conf.rx_pause.tx_qid = dev->pfc_tc_sq_map[i]; + pfc_conf.rx_pause.tc = i; + pfc_conf.tx_pause.rx_qid = i; + pfc_conf.tx_pause.tc = i; + rc = cnxk_nix_priority_flow_ctrl_queue_config(eth_dev, + &pfc_conf); + if (rc) + plt_err("Failed to reset PFC. 
error code(%d)", + rc); + } + } + + fc_conf.mode = RTE_ETH_FC_FULL; + rc = cnxk_nix_flow_ctrl_set(eth_dev, &fc_conf); + /* Disable and free rte_meter entries */ nix_meter_fini(dev); diff --git a/drivers/net/cnxk/cnxk_ethdev.h b/drivers/net/cnxk/cnxk_ethdev.h index fadc8aaf45..d0dfa7cb70 100644 --- a/drivers/net/cnxk/cnxk_ethdev.h +++ b/drivers/net/cnxk/cnxk_ethdev.h @@ -137,12 +137,24 @@ /* SPI will be in 20 bits of tag */ #define CNXK_ETHDEV_SPI_TAG_MASK 0xFFFFFUL +#define CNXK_NIX_PFC_CHAN_COUNT 16 + struct cnxk_fc_cfg { enum rte_eth_fc_mode mode; uint8_t rx_pause; uint8_t tx_pause; }; +struct cnxk_pfc_cfg { + struct cnxk_fc_cfg fc_cfg; + uint16_t class_en; + uint16_t pause_time; + uint8_t rx_tc; + uint8_t rx_qid; + uint8_t tx_tc; + uint8_t tx_qid; +}; + struct cnxk_eth_qconf { union { struct rte_eth_txconf tx; @@ -372,6 +384,8 @@ struct cnxk_eth_dev { struct cnxk_eth_qconf *rx_qconf; /* Flow control configuration */ + uint16_t pfc_tc_sq_map[CNXK_NIX_PFC_CHAN_COUNT]; + struct cnxk_pfc_cfg pfc_cfg; struct cnxk_fc_cfg fc_cfg; /* PTP Counters */ @@ -473,6 +487,10 @@ int cnxk_nix_flow_ctrl_set(struct rte_eth_dev *eth_dev, struct rte_eth_fc_conf *fc_conf); int cnxk_nix_flow_ctrl_get(struct rte_eth_dev *eth_dev, struct rte_eth_fc_conf *fc_conf); +int cnxk_nix_priority_flow_ctrl_queue_config(struct rte_eth_dev *eth_dev, + struct rte_eth_pfc_queue_conf *pfc_conf); +int cnxk_nix_priority_flow_ctrl_queue_info_get(struct rte_eth_dev *eth_dev, + struct rte_eth_pfc_queue_info *pfc_info); int cnxk_nix_set_link_up(struct rte_eth_dev *eth_dev); int cnxk_nix_set_link_down(struct rte_eth_dev *eth_dev); int cnxk_nix_get_module_info(struct rte_eth_dev *eth_dev, @@ -617,6 +635,8 @@ int nix_mtr_color_action_validate(struct rte_eth_dev *eth_dev, uint32_t id, uint32_t *prev_id, uint32_t *next_id, struct cnxk_mtr_policy_node *policy, int *tree_level); +int nix_priority_flow_ctrl_configure(struct rte_eth_dev *eth_dev, + struct cnxk_pfc_cfg *conf); /* Inlines */ static __rte_always_inline uint64_t diff --git a/drivers/net/cnxk/cnxk_ethdev_ops.c b/drivers/net/cnxk/cnxk_ethdev_ops.c index f20f201db2..f4669ee7cf 100644 --- a/drivers/net/cnxk/cnxk_ethdev_ops.c +++ b/drivers/net/cnxk/cnxk_ethdev_ops.c @@ -69,6 +69,7 @@ cnxk_nix_info_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *devinfo) devinfo->dev_capa = RTE_ETH_DEV_CAPA_RUNTIME_RX_QUEUE_SETUP | RTE_ETH_DEV_CAPA_RUNTIME_TX_QUEUE_SETUP; devinfo->dev_capa &= ~RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP; + return 0; } @@ -230,6 +231,8 @@ nix_fc_cq_config_set(struct cnxk_eth_dev *dev, uint16_t qid, bool enable) cq = &dev->cqs[qid]; fc_cfg.type = ROC_NIX_FC_CQ_CFG; fc_cfg.cq_cfg.enable = enable; + /* Map all CQs to last channel */ + fc_cfg.cq_cfg.tc = roc_nix_chan_count_get(nix) - 1; fc_cfg.cq_cfg.rq = qid; fc_cfg.cq_cfg.cq_drop = cq->drop_thresh; @@ -248,6 +251,8 @@ cnxk_nix_flow_ctrl_set(struct rte_eth_dev *eth_dev, struct rte_eth_dev_data *data = eth_dev->data; struct cnxk_fc_cfg *fc = &dev->fc_cfg; struct roc_nix *nix = &dev->nix; + struct cnxk_eth_rxq_sp *rxq; + struct cnxk_eth_txq_sp *txq; uint8_t rx_pause, tx_pause; int rc, i; @@ -282,7 +287,12 @@ cnxk_nix_flow_ctrl_set(struct rte_eth_dev *eth_dev, } for (i = 0; i < data->nb_rx_queues; i++) { - rc = nix_fc_cq_config_set(dev, i, tx_pause); + struct roc_nix_fc_cfg fc_cfg; + + memset(&fc_cfg, 0, sizeof(struct roc_nix_fc_cfg)); + rxq = ((struct cnxk_eth_rxq_sp *)data->rx_queues[i]) - + 1; + rc = nix_fc_cq_config_set(dev, rxq->qid, !!tx_pause); if (rc) return rc; } @@ -290,14 +300,19 @@ cnxk_nix_flow_ctrl_set(struct 
rte_eth_dev *eth_dev, /* Check if RX pause frame is enabled or not */ if (fc->rx_pause ^ rx_pause) { - struct roc_nix_fc_cfg fc_cfg; - - memset(&fc_cfg, 0, sizeof(struct roc_nix_fc_cfg)); - fc_cfg.type = ROC_NIX_FC_TM_CFG; - fc_cfg.tm_cfg.enable = !!rx_pause; - rc = roc_nix_fc_config_set(nix, &fc_cfg); - if (rc) - return rc; + for (i = 0; i < data->nb_tx_queues; i++) { + struct roc_nix_fc_cfg fc_cfg; + + memset(&fc_cfg, 0, sizeof(struct roc_nix_fc_cfg)); + txq = ((struct cnxk_eth_txq_sp *)data->tx_queues[i]) - + 1; + fc_cfg.type = ROC_NIX_FC_TM_CFG; + fc_cfg.tm_cfg.sq = txq->qid; + fc_cfg.tm_cfg.enable = !!rx_pause; + rc = roc_nix_fc_config_set(nix, &fc_cfg); + if (rc) + return rc; + } } rc = roc_nix_fc_mode_set(nix, mode_map[fc_conf->mode]); @@ -311,6 +326,40 @@ cnxk_nix_flow_ctrl_set(struct rte_eth_dev *eth_dev, return rc; } +int +cnxk_nix_priority_flow_ctrl_queue_info_get(struct rte_eth_dev *eth_dev, + struct rte_eth_pfc_queue_info *pfc_info) +{ + struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev); + + pfc_info->tc_max = roc_nix_chan_count_get(&dev->nix); + pfc_info->mode_capa = RTE_ETH_FC_FULL; + return 0; +} + +int +cnxk_nix_priority_flow_ctrl_queue_config(struct rte_eth_dev *eth_dev, + struct rte_eth_pfc_queue_conf *pfc_conf) +{ + struct cnxk_pfc_cfg conf = {0}; + int rc; + + conf.fc_cfg.mode = pfc_conf->mode; + + conf.pause_time = pfc_conf->tx_pause.pause_time; + conf.rx_tc = pfc_conf->tx_pause.tc; + conf.rx_qid = pfc_conf->tx_pause.rx_qid; + + conf.tx_tc = pfc_conf->rx_pause.tc; + conf.tx_qid = pfc_conf->rx_pause.tx_qid; + + rc = nix_priority_flow_ctrl_configure(eth_dev, &conf); + if (rc) + return rc; + + return rc; +} + int cnxk_nix_flow_ops_get(struct rte_eth_dev *eth_dev, const struct rte_flow_ops **ops) @@ -972,3 +1021,123 @@ cnxk_nix_mc_addr_list_configure(struct rte_eth_dev *eth_dev, return 0; } + +int +nix_priority_flow_ctrl_configure(struct rte_eth_dev *eth_dev, + struct cnxk_pfc_cfg *conf) +{ + enum roc_nix_fc_mode mode_map[] = {ROC_NIX_FC_NONE, ROC_NIX_FC_RX, + ROC_NIX_FC_TX, ROC_NIX_FC_FULL}; + struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev); + struct rte_eth_dev_data *data = eth_dev->data; + struct cnxk_pfc_cfg *pfc = &dev->pfc_cfg; + struct roc_nix *nix = &dev->nix; + struct roc_nix_pfc_cfg pfc_cfg; + struct roc_nix_fc_cfg fc_cfg; + struct cnxk_eth_rxq_sp *rxq; + struct cnxk_eth_txq_sp *txq; + uint8_t rx_pause, tx_pause; + enum rte_eth_fc_mode mode; + struct roc_nix_cq *cq; + struct roc_nix_sq *sq; + int rc; + + if (roc_nix_is_vf_or_sdp(nix)) { + plt_err("Prio flow ctrl config is not allowed on VF and SDP"); + return -ENOTSUP; + } + + if (roc_model_is_cn96_ax() && data->dev_started) { + /* On Ax, CQ should be in disabled state + * while setting flow control configuration. 
+ */ + plt_info("Stop the port=%d for setting flow control", + data->port_id); + return 0; + } + + if (dev->pfc_tc_sq_map[conf->tx_tc] != 0xFFFF && + dev->pfc_tc_sq_map[conf->tx_tc] != conf->tx_qid) { + plt_err("Same TC can not be configured on multiple SQs"); + return -ENOTSUP; + } + + mode = conf->fc_cfg.mode; + rx_pause = (mode == RTE_ETH_FC_FULL) || (mode == RTE_ETH_FC_RX_PAUSE); + tx_pause = (mode == RTE_ETH_FC_FULL) || (mode == RTE_ETH_FC_TX_PAUSE); + + /* Configure CQs */ + memset(&fc_cfg, 0, sizeof(struct roc_nix_fc_cfg)); + rxq = ((struct cnxk_eth_rxq_sp *)data->rx_queues[conf->rx_qid]) - 1; + cq = &dev->cqs[rxq->qid]; + fc_cfg.type = ROC_NIX_FC_CQ_CFG; + fc_cfg.cq_cfg.tc = conf->rx_tc; + fc_cfg.cq_cfg.enable = !!tx_pause; + fc_cfg.cq_cfg.rq = cq->qid; + fc_cfg.cq_cfg.cq_drop = cq->drop_thresh; + rc = roc_nix_fc_config_set(nix, &fc_cfg); + if (rc) + goto exit; + + /* Check if RX pause frame is enabled or not */ + if (pfc->fc_cfg.rx_pause ^ rx_pause) { + if (conf->tx_qid >= eth_dev->data->nb_tx_queues) + goto exit; + + if ((roc_nix_tm_tree_type_get(nix) == ROC_NIX_TM_DEFAULT) && + eth_dev->data->nb_tx_queues > 1) { + /* + * Disabled xmit will be enabled when + * new topology is available. + */ + rc = roc_nix_tm_hierarchy_disable(nix); + if (rc) + goto exit; + + rc = roc_nix_tm_pfc_prepare_tree(nix); + if (rc) + goto exit; + + rc = roc_nix_tm_hierarchy_enable(nix, ROC_NIX_TM_PFC, + true); + if (rc) + goto exit; + } + } + + txq = ((struct cnxk_eth_txq_sp *)data->tx_queues[conf->tx_qid]) - 1; + sq = &dev->sqs[txq->qid]; + memset(&fc_cfg, 0, sizeof(struct roc_nix_fc_cfg)); + fc_cfg.type = ROC_NIX_FC_TM_CFG; + fc_cfg.tm_cfg.sq = sq->qid; + fc_cfg.tm_cfg.tc = conf->tx_tc; + fc_cfg.tm_cfg.enable = !!rx_pause; + rc = roc_nix_fc_config_set(nix, &fc_cfg); + if (rc) + return rc; + + dev->pfc_tc_sq_map[conf->tx_tc] = sq->qid; + + /* Configure MAC block */ + if (tx_pause) + pfc->class_en |= BIT(conf->rx_tc); + else + pfc->class_en &= ~BIT(conf->rx_tc); + + if (pfc->class_en) + mode = RTE_ETH_FC_FULL; + + memset(&pfc_cfg, 0, sizeof(struct roc_nix_pfc_cfg)); + pfc_cfg.mode = mode_map[mode]; + pfc_cfg.tc = pfc->class_en; + rc = roc_nix_pfc_mode_set(nix, &pfc_cfg); + if (rc) + return rc; + + pfc->fc_cfg.rx_pause = rx_pause; + pfc->fc_cfg.tx_pause = tx_pause; + pfc->fc_cfg.mode = mode; + +exit: + return rc; +}
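
[Editor's note: for completeness, a hedged sketch of how an application would reach the new priority_flow_ctrl_queue_config dev_op, assuming the rte_eth_dev_priority_flow_ctrl_queue_configure() ethdev wrapper introduced in DPDK 22.03; the port, queue, and TC numbers are arbitrary. Only the rte_eth_pfc_queue_conf fields exercised by this patch are used.]

#include <string.h>
#include <rte_ethdev.h>

/* Illustrative only: map TC 3 to Rx queue 0 (pause generation) and
 * Tx queue 0 (pause reaction) on the given port.
 */
static int
example_app_pfc(uint16_t port_id)
{
	struct rte_eth_pfc_queue_conf pfc_conf;

	memset(&pfc_conf, 0, sizeof(pfc_conf));
	pfc_conf.mode = RTE_ETH_FC_FULL;
	/* rx_pause: when a PFC frame for TC 3 is received, the mapped
	 * Tx queue is backpressured.
	 */
	pfc_conf.rx_pause.tc = 3;
	pfc_conf.rx_pause.tx_qid = 0;
	/* tx_pause: PFC frames for TC 3 are generated when the mapped
	 * Rx queue congests.
	 */
	pfc_conf.tx_pause.tc = 3;
	pfc_conf.tx_pause.rx_qid = 0;
	pfc_conf.tx_pause.pause_time = 0x100;

	return rte_eth_dev_priority_flow_ctrl_queue_configure(port_id,
							      &pfc_conf);
}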