From patchwork Mon Feb 7 07:29:19 2022
X-Patchwork-Submitter: Nithin Dabilpuram <ndabilpuram@marvell.com>
X-Patchwork-Id: 106940
X-Patchwork-Delegate: jerinj@marvell.com
From: Nithin Dabilpuram <ndabilpuram@marvell.com>
To: dev@dpdk.org, Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori,
	Satha Rao, Ray Kinsella
Subject: [PATCH 07/20] common/cnxk: support to enable aura tail drop for RQ
Date: Mon, 7 Feb 2022 12:59:19 +0530
Message-ID: <20220207072932.22409-7-ndabilpuram@marvell.com>
In-Reply-To: <20220207072932.22409-1-ndabilpuram@marvell.com>
References: <20220207072932.22409-1-ndabilpuram@marvell.com>
List-Id: DPDK patches and discussions <dev.dpdk.org>

Add support to enable aura tail drop via RQ, specifically for the
inline device RQ's packet pool. This is preferable to RQ RED drop,
since it can be applied to all RQs that do not have security enabled
but share the same packet pool.
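
As an illustration only (not part of this patch): a ROC consumer could
opt into the new thresholds via the spb_drop_pc/lpb_drop_pc input
fields this patch adds to struct roc_nix_inl_dev. Minimal sketch,
assuming a probed pci_dev and the usual inline device init flow; the
25/30 values are invented for the example (the patch defaults both
to 40%):

	struct roc_nix_inl_dev inl_dev;
	int rc;

	memset(&inl_dev, 0, sizeof(inl_dev));
	inl_dev.pci_dev = pci_dev; /* assumed: PCI device probed elsewhere */
	/* Aura tail drop kicks in once this % of the aura is in use */
	inl_dev.lpb_drop_pc = 25; /* hypothetical LPB threshold */
	inl_dev.spb_drop_pc = 30; /* hypothetical SPB threshold */

	rc = roc_nix_inl_dev_init(&inl_dev);
	if (rc)
		plt_err("Inline dev init failed, rc=%d", rc);
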
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
---
 drivers/common/cnxk/roc_nix.h          |  4 ++++
 drivers/common/cnxk/roc_nix_inl.c      | 39 ++++++++++++++++++++++++++++++----
 drivers/common/cnxk/roc_nix_inl.h      |  2 ++
 drivers/common/cnxk/roc_nix_inl_dev.c  |  9 ++++++++
 drivers/common/cnxk/roc_nix_inl_priv.h |  2 ++
 drivers/common/cnxk/roc_nix_queue.c    |  6 +++++-
 drivers/common/cnxk/roc_npa.c          | 33 ++++++++++++++++++++++++++--
 drivers/common/cnxk/roc_npa.h          |  3 +++
 drivers/common/cnxk/version.map        |  1 +
 9 files changed, 92 insertions(+), 7 deletions(-)

diff --git a/drivers/common/cnxk/roc_nix.h b/drivers/common/cnxk/roc_nix.h
index 250e1c0..0122b98 100644
--- a/drivers/common/cnxk/roc_nix.h
+++ b/drivers/common/cnxk/roc_nix.h
@@ -286,6 +286,10 @@ struct roc_nix_rq {
 	uint8_t spb_red_drop;
 	/* Average SPB aura level pass threshold for RED */
 	uint8_t spb_red_pass;
+	/* LPB aura drop enable */
+	bool lpb_drop_ena;
+	/* SPB aura drop enable */
+	bool spb_drop_ena;
 	/* End of Input parameters */
 	struct roc_nix *roc_nix;
 	bool inl_dev_ref;
diff --git a/drivers/common/cnxk/roc_nix_inl.c b/drivers/common/cnxk/roc_nix_inl.c
index f57f1a4..ac17e95 100644
--- a/drivers/common/cnxk/roc_nix_inl.c
+++ b/drivers/common/cnxk/roc_nix_inl.c
@@ -528,23 +528,50 @@ roc_nix_inl_dev_rq_get(struct roc_nix_rq *rq)
 	inl_rq->first_skip = rq->first_skip;
 	inl_rq->later_skip = rq->later_skip;
 	inl_rq->lpb_size = rq->lpb_size;
+	inl_rq->lpb_drop_ena = true;
+	inl_rq->spb_ena = rq->spb_ena;
+	inl_rq->spb_aura_handle = rq->spb_aura_handle;
+	inl_rq->spb_size = rq->spb_size;
+	inl_rq->spb_drop_ena = !!rq->spb_ena;
 
 	if (!roc_model_is_cn9k()) {
 		uint64_t aura_limit =
 			roc_npa_aura_op_limit_get(inl_rq->aura_handle);
 		uint64_t aura_shift = plt_log2_u32(aura_limit);
+		uint64_t aura_drop, drop_pc;
 
 		if (aura_shift < 8)
 			aura_shift = 0;
 		else
 			aura_shift = aura_shift - 8;
 
-		/* Set first pass RQ to drop when half of the buffers are in
+		/* Set first pass RQ to drop after part of buffers are in
 		 * use to avoid metabuf alloc failure. This is needed as long
-		 * as we cannot use different
+		 * as we cannot use different aura.
 		 */
-		inl_rq->red_pass = (aura_limit / 2) >> aura_shift;
-		inl_rq->red_drop = ((aura_limit / 2) - 1) >> aura_shift;
+		drop_pc = inl_dev->lpb_drop_pc;
+		aura_drop = ((aura_limit * drop_pc) / 100) >> aura_shift;
+		roc_npa_aura_drop_set(inl_rq->aura_handle, aura_drop, true);
+	}
+
+	if (inl_rq->spb_ena) {
+		uint64_t aura_limit =
+			roc_npa_aura_op_limit_get(inl_rq->spb_aura_handle);
+		uint64_t aura_shift = plt_log2_u32(aura_limit);
+		uint64_t aura_drop, drop_pc;
+
+		if (aura_shift < 8)
+			aura_shift = 0;
+		else
+			aura_shift = aura_shift - 8;
+
+		/* Set first pass RQ to drop after part of buffers are in
+		 * use to avoid metabuf alloc failure. This is needed as long
+		 * as we cannot use different aura.
+		 */
+		drop_pc = inl_dev->spb_drop_pc;
+		aura_drop = ((aura_limit * drop_pc) / 100) >> aura_shift;
+		roc_npa_aura_drop_set(inl_rq->spb_aura_handle, aura_drop, true);
 	}
 
 	/* Enable IPSec */
@@ -613,6 +640,10 @@ roc_nix_inl_dev_rq_put(struct roc_nix_rq *rq)
 	if (rc)
 		plt_err("Failed to disable inline device rq, rc=%d", rc);
 
+	roc_npa_aura_drop_set(inl_rq->aura_handle, 0, false);
+	if (inl_rq->spb_ena)
+		roc_npa_aura_drop_set(inl_rq->spb_aura_handle, 0, false);
+
 	/* Flush NIX LF for CN10K */
 	nix_rq_vwqe_flush(rq, inl_dev->vwqe_interval);
 
diff --git a/drivers/common/cnxk/roc_nix_inl.h b/drivers/common/cnxk/roc_nix_inl.h
index 224aaba..728225b 100644
--- a/drivers/common/cnxk/roc_nix_inl.h
+++ b/drivers/common/cnxk/roc_nix_inl.h
@@ -112,6 +112,8 @@ struct roc_nix_inl_dev {
 	uint16_t chan_mask;
 	bool attach_cptlf;
 	bool wqe_skip;
+	uint8_t spb_drop_pc;
+	uint8_t lpb_drop_pc;
 	/* End of input parameters */
 
 #define ROC_NIX_INL_MEM_SZ (1280)
diff --git a/drivers/common/cnxk/roc_nix_inl_dev.c b/drivers/common/cnxk/roc_nix_inl_dev.c
index 9dc0a62..4c1d85a 100644
--- a/drivers/common/cnxk/roc_nix_inl_dev.c
+++ b/drivers/common/cnxk/roc_nix_inl_dev.c
@@ -5,6 +5,8 @@
 #include "roc_api.h"
 #include "roc_priv.h"
 
+#define NIX_AURA_DROP_PC_DFLT 40
+
 /* Default Rx Config for Inline NIX LF */
 #define NIX_INL_LF_RX_CFG                                                     \
 	(ROC_NIX_LF_RX_CFG_DROP_RE | ROC_NIX_LF_RX_CFG_L2_LEN_ERR |           \
@@ -662,6 +664,13 @@ roc_nix_inl_dev_init(struct roc_nix_inl_dev *roc_inl_dev)
 	inl_dev->chan_mask = roc_inl_dev->chan_mask;
 	inl_dev->attach_cptlf = roc_inl_dev->attach_cptlf;
 	inl_dev->wqe_skip = roc_inl_dev->wqe_skip;
+	inl_dev->spb_drop_pc = NIX_AURA_DROP_PC_DFLT;
+	inl_dev->lpb_drop_pc = NIX_AURA_DROP_PC_DFLT;
+
+	if (roc_inl_dev->spb_drop_pc)
+		inl_dev->spb_drop_pc = roc_inl_dev->spb_drop_pc;
+	if (roc_inl_dev->lpb_drop_pc)
+		inl_dev->lpb_drop_pc = roc_inl_dev->lpb_drop_pc;
 
 	/* Initialize base device */
 	rc = dev_init(&inl_dev->dev, pci_dev);
diff --git a/drivers/common/cnxk/roc_nix_inl_priv.h b/drivers/common/cnxk/roc_nix_inl_priv.h
index dcf752e..b6d8602 100644
--- a/drivers/common/cnxk/roc_nix_inl_priv.h
+++ b/drivers/common/cnxk/roc_nix_inl_priv.h
@@ -43,6 +43,8 @@ struct nix_inl_dev {
 	struct roc_nix_rq rq;
 	uint16_t rq_refs;
 	bool is_nix1;
+	uint8_t spb_drop_pc;
+	uint8_t lpb_drop_pc;
 
 	/* NIX/CPT data */
 	void *inb_sa_base;
diff --git a/drivers/common/cnxk/roc_nix_queue.c b/drivers/common/cnxk/roc_nix_queue.c
index a283d96..7d27185 100644
--- a/drivers/common/cnxk/roc_nix_queue.c
+++ b/drivers/common/cnxk/roc_nix_queue.c
@@ -299,7 +299,9 @@ nix_rq_cfg(struct dev *dev, struct roc_nix_rq *rq, uint16_t qints, bool cfg,
 	aq->rq.rq_int_ena = 0;
 	/* Many to one reduction */
 	aq->rq.qint_idx = rq->qid % qints;
-	aq->rq.xqe_drop_ena = 1;
+	aq->rq.xqe_drop_ena = 0;
+	aq->rq.lpb_drop_ena = rq->lpb_drop_ena;
+	aq->rq.spb_drop_ena = rq->spb_drop_ena;
 
 	/* If RED enabled, then fill enable for all cases */
 	if (rq->red_pass && (rq->red_pass >= rq->red_drop)) {
@@ -366,6 +368,8 @@ nix_rq_cfg(struct dev *dev, struct roc_nix_rq *rq, uint16_t qints, bool cfg,
 		aq->rq_mask.rq_int_ena = ~aq->rq_mask.rq_int_ena;
 		aq->rq_mask.qint_idx = ~aq->rq_mask.qint_idx;
 		aq->rq_mask.xqe_drop_ena = ~aq->rq_mask.xqe_drop_ena;
+		aq->rq_mask.lpb_drop_ena = ~aq->rq_mask.lpb_drop_ena;
+		aq->rq_mask.spb_drop_ena = ~aq->rq_mask.spb_drop_ena;
 
 		if (rq->red_pass && (rq->red_pass >= rq->red_drop)) {
 			aq->rq_mask.spb_pool_pass = ~aq->rq_mask.spb_pool_pass;
diff --git a/drivers/common/cnxk/roc_npa.c b/drivers/common/cnxk/roc_npa.c
index 75fc224..1e60f44 100644
--- a/drivers/common/cnxk/roc_npa.c
+++ b/drivers/common/cnxk/roc_npa.c
@@ -193,6 +193,35 @@ roc_npa_pool_op_pc_reset(uint64_t aura_handle)
 	}
 	return 0;
 }
+
+int
+roc_npa_aura_drop_set(uint64_t aura_handle, uint64_t limit, bool ena)
+{
+	struct npa_aq_enq_req *aura_req;
+	struct npa_lf *lf;
+	int rc;
+
+	lf = idev_npa_obj_get();
+	if (lf == NULL)
+		return NPA_ERR_DEVICE_NOT_BOUNDED;
+
+	aura_req = mbox_alloc_msg_npa_aq_enq(lf->mbox);
+	if (aura_req == NULL)
+		return -ENOMEM;
+	aura_req->aura_id = roc_npa_aura_handle_to_aura(aura_handle);
+	aura_req->ctype = NPA_AQ_CTYPE_AURA;
+	aura_req->op = NPA_AQ_INSTOP_WRITE;
+
+	aura_req->aura.aura_drop_ena = ena;
+	aura_req->aura.aura_drop = limit;
+	aura_req->aura_mask.aura_drop_ena =
+		~(aura_req->aura_mask.aura_drop_ena);
+	aura_req->aura_mask.aura_drop = ~(aura_req->aura_mask.aura_drop);
+	rc = mbox_process(lf->mbox);
+
+	return rc;
+}
+
 static inline char *
 npa_stack_memzone_name(struct npa_lf *lf, int pool_id, char *name)
 {
@@ -299,7 +328,7 @@ npa_aura_pool_pair_alloc(struct npa_lf *lf, const uint32_t block_size,
 	aura->err_int_ena |= BIT(NPA_AURA_ERR_INT_AURA_ADD_UNDER);
 	aura->err_int_ena |= BIT(NPA_AURA_ERR_INT_AURA_FREE_UNDER);
 	aura->err_int_ena |= BIT(NPA_AURA_ERR_INT_POOL_DIS);
-	aura->avg_con = ROC_NPA_AVG_CONT;
+	aura->avg_con = 0;
 	/* Many to one reduction */
 	aura->err_qint_idx = aura_id % lf->qints;
 
@@ -316,7 +345,7 @@ npa_aura_pool_pair_alloc(struct npa_lf *lf, const uint32_t block_size,
 	pool->err_int_ena = BIT(NPA_POOL_ERR_INT_OVFLS);
 	pool->err_int_ena |= BIT(NPA_POOL_ERR_INT_RANGE);
 	pool->err_int_ena |= BIT(NPA_POOL_ERR_INT_PERR);
-	pool->avg_con = ROC_NPA_AVG_CONT;
+	pool->avg_con = 0;
 	/* Many to one reduction */
 	pool->err_qint_idx = pool_id % lf->qints;
 
diff --git a/drivers/common/cnxk/roc_npa.h b/drivers/common/cnxk/roc_npa.h
index 9f5fe5a..0339876bf 100644
--- a/drivers/common/cnxk/roc_npa.h
+++ b/drivers/common/cnxk/roc_npa.h
@@ -731,4 +731,7 @@ int __roc_api roc_npa_dump(void);
 /* Reset operation performance counter. */
 int __roc_api roc_npa_pool_op_pc_reset(uint64_t aura_handle);
 
+int __roc_api roc_npa_aura_drop_set(uint64_t aura_handle, uint64_t limit,
+				    bool ena);
+
 #endif /* _ROC_NPA_H_ */
diff --git a/drivers/common/cnxk/version.map b/drivers/common/cnxk/version.map
index a5ea244..7a8aff1 100644
--- a/drivers/common/cnxk/version.map
+++ b/drivers/common/cnxk/version.map
@@ -285,6 +285,7 @@ INTERNAL {
	roc_nix_vlan_mcam_entry_write;
	roc_nix_vlan_strip_vtag_ena_dis;
	roc_nix_vlan_tpid_set;
+	roc_npa_aura_drop_set;
	roc_npa_aura_limit_modify;
	roc_npa_aura_op_range_set;
	roc_npa_ctx_dump;
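
Usage note (illustrative, not part of the patch): roc_npa_aura_drop_set()
programs AURA_DROP/AURA_DROP_ENA through the NPA admin queue, with the
limit expressed in the same shifted units the RQ code above computes.
A minimal sketch of arming a 40% threshold and clearing it on teardown,
assuming aura_handle comes from the usual NPA pool setup:

	uint64_t aura_limit = roc_npa_aura_op_limit_get(aura_handle);
	uint64_t aura_shift = plt_log2_u32(aura_limit);
	uint64_t aura_drop;
	int rc;

	/* Mirror the scaling used in roc_nix_inl_dev_rq_get() */
	aura_shift = (aura_shift < 8) ? 0 : aura_shift - 8;
	aura_drop = ((aura_limit * 40) / 100) >> aura_shift;

	rc = roc_npa_aura_drop_set(aura_handle, aura_drop, true);
	if (rc)
		return rc;

	/* ... on teardown, disable the threshold as rq_put() does ... */
	rc = roc_npa_aura_drop_set(aura_handle, 0, false);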