From patchwork Thu Jun 16 07:07:41 2022
X-Patchwork-Submitter: Nithin Dabilpuram <ndabilpuram@marvell.com>
X-Patchwork-Id: 112839
X-Patchwork-Delegate: jerinj@marvell.com
From: Nithin Dabilpuram <ndabilpuram@marvell.com>
To: <dev@dpdk.org>, Nithin Dabilpuram <ndabilpuram@marvell.com>,
 Kiran Kumar K <kirankumark@marvell.com>,
 Sunil Kumar Kori <skori@marvell.com>,
 Satha Rao <skoteshwar@marvell.com>
CC: <jerinj@marvell.com>
Subject: [PATCH 10/12] net/cnxk: resize CQ for Rx security for errata
Date: Thu, 16 Jun 2022 12:37:41 +0530
Message-ID: <20220616070743.30658-10-ndabilpuram@marvell.com>
X-Mailer: git-send-email 2.8.4
In-Reply-To: <20220616070743.30658-1-ndabilpuram@marvell.com>
References: <20220616070743.30658-1-ndabilpuram@marvell.com>

Resize CQ for Rx security offload in case of HW errata.
ci: skip_checkpatch skip_klocwork

Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
---
 drivers/net/cnxk/cnxk_ethdev.c | 43 +++++++++++++++++++++++++++++++++++++++++-
 drivers/net/cnxk/cnxk_ethdev.h |  2 +-
 2 files changed, 43 insertions(+), 2 deletions(-)

diff --git a/drivers/net/cnxk/cnxk_ethdev.c b/drivers/net/cnxk/cnxk_ethdev.c
index 4ea1617..2418290 100644
--- a/drivers/net/cnxk/cnxk_ethdev.c
+++ b/drivers/net/cnxk/cnxk_ethdev.c
@@ -5,6 +5,8 @@
 
 #include <rte_eventdev.h>
 
+#define CNXK_NIX_CQ_INL_CLAMP_MAX (64UL * 1024UL)
+
 static inline uint64_t
 nix_get_rx_offload_capa(struct cnxk_eth_dev *dev)
 {
@@ -40,6 +42,39 @@ nix_get_speed_capa(struct cnxk_eth_dev *dev)
 	return speed_capa;
 }
 
+static uint32_t
+nix_inl_cq_sz_clamp_up(struct roc_nix *nix, struct rte_mempool *mp,
+		       uint32_t nb_desc)
+{
+	struct roc_nix_rq *inl_rq;
+	uint64_t limit;
+
+	if (!roc_errata_cpt_hang_on_x2p_bp())
+		return nb_desc;
+
+	/* CQ should be able to hold all buffers in first pass RQ's aura
+	 * and this RQ's aura.
+	 */
+	inl_rq = roc_nix_inl_dev_rq(nix);
+	if (!inl_rq) {
+		/* This itself is going to be inline RQ's aura */
+		limit = roc_npa_aura_op_limit_get(mp->pool_id);
+	} else {
+		limit = roc_npa_aura_op_limit_get(inl_rq->aura_handle);
+		/* Also add this RQ's aura if it is different */
+		if (inl_rq->aura_handle != mp->pool_id)
+			limit += roc_npa_aura_op_limit_get(mp->pool_id);
+	}
+	nb_desc = PLT_MAX(limit + 1, nb_desc);
+	if (nb_desc > CNXK_NIX_CQ_INL_CLAMP_MAX) {
+		plt_warn("Could not setup CQ size to accommodate"
+			 " all buffers in related auras (%" PRIu64 ")",
+			 limit);
+		nb_desc = CNXK_NIX_CQ_INL_CLAMP_MAX;
+	}
+	return nb_desc;
+}
+
 int
 cnxk_nix_inb_mode_set(struct cnxk_eth_dev *dev, bool use_inl_dev)
 {
@@ -504,7 +539,7 @@ cnxk_nix_tx_queue_release(struct rte_eth_dev *eth_dev, uint16_t qid)
 
 int
 cnxk_nix_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t qid,
-			uint16_t nb_desc, uint16_t fp_rx_q_sz,
+			uint32_t nb_desc, uint16_t fp_rx_q_sz,
 			const struct rte_eth_rxconf *rx_conf,
 			struct rte_mempool *mp)
 {
@@ -552,6 +587,12 @@ cnxk_nix_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t qid,
 	    dev->tx_offloads & RTE_ETH_TX_OFFLOAD_SECURITY)
 		roc_nix_inl_dev_xaq_realloc(mp->pool_id);
 
+	/* Increase CQ size to Aura size to avoid CQ overflow and
+	 * then CPT buffer leak.
+	 */
+	if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY)
+		nb_desc = nix_inl_cq_sz_clamp_up(nix, mp, nb_desc);
+
 	/* Setup ROC CQ */
 	cq = &dev->cqs[qid];
 	cq->qid = qid;
diff --git a/drivers/net/cnxk/cnxk_ethdev.h b/drivers/net/cnxk/cnxk_ethdev.h
index a4e96f0..4cb7c9e 100644
--- a/drivers/net/cnxk/cnxk_ethdev.h
+++ b/drivers/net/cnxk/cnxk_ethdev.h
@@ -530,7 +530,7 @@ int cnxk_nix_tx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t qid,
 			    uint16_t nb_desc, uint16_t fp_tx_q_sz,
 			    const struct rte_eth_txconf *tx_conf);
 int cnxk_nix_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t qid,
-			    uint16_t nb_desc, uint16_t fp_rx_q_sz,
+			    uint32_t nb_desc, uint16_t fp_rx_q_sz,
			    const struct rte_eth_rxconf *rx_conf,
 			    struct rte_mempool *mp);
 int cnxk_nix_tx_queue_start(struct rte_eth_dev *eth_dev, uint16_t qid);
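
For anyone wanting to sanity-check the sizing math the patch applies, here is a
minimal standalone sketch of the clamp-up rule. The aura limits are hypothetical
inputs passed in directly in place of roc_npa_aura_op_limit_get(), and the sketch
assumes the inline dev RQ's aura and this RQ's aura are distinct pools (the patch
skips the second term when they share a pool); it is an illustration, not driver
code.

/*
 * Standalone sketch of the CQ clamp-up rule in this patch. Aura
 * limits are hypothetical stand-ins for roc_npa_aura_op_limit_get().
 */
#include <stdint.h>
#include <stdio.h>

/* Same cap as CNXK_NIX_CQ_INL_CLAMP_MAX in the patch */
#define CQ_INL_CLAMP_MAX (64UL * 1024UL)

static uint32_t
cq_sz_clamp_up(uint64_t first_pass_aura_limit, uint64_t rq_aura_limit,
	       uint32_t nb_desc)
{
	/* CQ must cover every buffer both auras can hand to hardware,
	 * plus one extra entry, mirroring the patch's limit + 1.
	 */
	uint64_t want = first_pass_aura_limit + rq_aura_limit + 1;

	if (want < nb_desc)
		want = nb_desc;
	/* The driver emits plt_warn() before applying this cap */
	if (want > CQ_INL_CLAMP_MAX)
		want = CQ_INL_CLAMP_MAX;
	return (uint32_t)want;
}

int
main(void)
{
	/* 4096 descriptors requested, but the two auras together hold
	 * 8192 buffers: the CQ grows to 8193 entries.
	 */
	printf("%u\n", cq_sz_clamp_up(4096, 4096, 4096));

	/* Combined aura limits past 64K entries hit the clamp max */
	printf("%u\n", cq_sz_clamp_up(60000, 20000, 4096));
	return 0;
}

Grown this way, the CQ can absorb every buffer the related auras could have in
flight at once, which is what prevents the CQ overflow (and the resulting CPT
buffer leak) the commit message describes.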