From patchwork Tue Jun 13 09:25:46 2023
X-Patchwork-Submitter: Pavan Nikhilesh Bhagavatula <pbhagavatula@marvell.com>
X-Patchwork-Id: 128559
X-Patchwork-Delegate: jerinj@marvell.com
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
To: Pavan Nikhilesh, Shijith Thotton, Nithin Dabilpuram, Kiran Kumar K,
 Sunil Kumar Kori, Satha Rao
Cc: dev@dpdk.org
Subject: [PATCH v2 1/3] event/cnxk: align TX queue buffer adjustment
Date: Tue, 13 Jun 2023 14:55:46 +0530
Message-ID: <20230613092548.1315-1-pbhagavatula@marvell.com>
In-Reply-To: <20230516143752.4941-1-pbhagavatula@marvell.com>
References: <20230516143752.4941-1-pbhagavatula@marvell.com>
List-Id: DPDK patches and discussions

From: Pavan Nikhilesh <pbhagavatula@marvell.com>

Remove the SQB threshold recalculation from the Tx queue buffer
adjustment path; the adjustment is already done during Tx queue setup.

Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
v2 Changes:
- Rebase on ToT.
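
(Reviewer note, not part of the patch: a minimal standalone C sketch of the
corrected depth math. The names are illustrative, and the rationale is an
assumption implied by the new "- avail" term: one SQE per SQB appears to be
reserved for bookkeeping, e.g. chaining to the next SQB, so each SQB carries
only (sqes_per_sqb - 1) packet SQEs.)

#include <stdint.h>

/* Hypothetical helper mirroring cn10k_sso_sq_depth()/cn9k_sso_sq_depth().
 * nb_sqb_bufs_adj is the adjusted SQB count for the queue; fc_used is the
 * in-flight SQB count as reported via *fc_mem.
 */
static inline int32_t
sq_depth_pkts(int32_t nb_sqb_bufs_adj, int32_t fc_used,
	      uint16_t sqes_per_sqb_log2)
{
	int32_t avail = nb_sqb_bufs_adj - fc_used; /* free SQBs */

	/* (avail << log2) - avail == avail * (sqes_per_sqb - 1); the old
	 * code returned avail << log2, over-counting one SQE per free SQB.
	 */
	return (avail << sqes_per_sqb_log2) - avail;
}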
 drivers/event/cnxk/cn10k_eventdev.c  |  9 +--------
 drivers/event/cnxk/cn10k_tx_worker.h |  6 +++---
 drivers/event/cnxk/cn9k_eventdev.c   |  9 +--------
 drivers/event/cnxk/cn9k_worker.h     | 12 +++++++++---
 drivers/net/cnxk/cn10k_tx.h          | 12 ++++++------
 drivers/net/cnxk/cn9k_tx.h           |  5 +++--
 6 files changed, 23 insertions(+), 30 deletions(-)

--
2.25.1

diff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c
index 670fc9e926..8ee9ab3c5c 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -843,16 +843,9 @@ cn10k_sso_txq_fc_update(const struct rte_eth_dev *eth_dev, int32_t tx_queue_id)
 		sq = &cnxk_eth_dev->sqs[tx_queue_id];
 		txq = eth_dev->data->tx_queues[tx_queue_id];
 		sqes_per_sqb = 1U << txq->sqes_per_sqb_log2;
-		sq->nb_sqb_bufs_adj =
-			sq->nb_sqb_bufs -
-			RTE_ALIGN_MUL_CEIL(sq->nb_sqb_bufs, sqes_per_sqb) /
-				sqes_per_sqb;
 		if (cnxk_eth_dev->tx_offloads & RTE_ETH_TX_OFFLOAD_SECURITY)
-			sq->nb_sqb_bufs_adj -= (cnxk_eth_dev->outb.nb_desc /
-						(sqes_per_sqb - 1));
+			sq->nb_sqb_bufs_adj -= (cnxk_eth_dev->outb.nb_desc / sqes_per_sqb);
 		txq->nb_sqb_bufs_adj = sq->nb_sqb_bufs_adj;
-		txq->nb_sqb_bufs_adj =
-			((100 - ROC_NIX_SQB_THRESH) * txq->nb_sqb_bufs_adj) / 100;
 	}
 }
 
diff --git a/drivers/event/cnxk/cn10k_tx_worker.h b/drivers/event/cnxk/cn10k_tx_worker.h
index 31cbccf7d6..b6c9bb1d26 100644
--- a/drivers/event/cnxk/cn10k_tx_worker.h
+++ b/drivers/event/cnxk/cn10k_tx_worker.h
@@ -32,9 +32,9 @@ cn10k_sso_txq_fc_wait(const struct cn10k_eth_txq *txq)
 static __rte_always_inline int32_t
 cn10k_sso_sq_depth(const struct cn10k_eth_txq *txq)
 {
-	return (txq->nb_sqb_bufs_adj -
-		__atomic_load_n((int16_t *)txq->fc_mem, __ATOMIC_RELAXED))
-	       << txq->sqes_per_sqb_log2;
+	int32_t avail = (int32_t)txq->nb_sqb_bufs_adj -
+			(int32_t)__atomic_load_n(txq->fc_mem, __ATOMIC_RELAXED);
+	return (avail << txq->sqes_per_sqb_log2) - avail;
 }
 
 static __rte_always_inline uint16_t
diff --git a/drivers/event/cnxk/cn9k_eventdev.c b/drivers/event/cnxk/cn9k_eventdev.c
index 7ed9aa1331..dde58b60e4 100644
--- a/drivers/event/cnxk/cn9k_eventdev.c
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -877,16 +877,9 @@ cn9k_sso_txq_fc_update(const struct rte_eth_dev *eth_dev, int32_t tx_queue_id)
 		sq = &cnxk_eth_dev->sqs[tx_queue_id];
 		txq = eth_dev->data->tx_queues[tx_queue_id];
 		sqes_per_sqb = 1U << txq->sqes_per_sqb_log2;
-		sq->nb_sqb_bufs_adj =
-			sq->nb_sqb_bufs -
-			RTE_ALIGN_MUL_CEIL(sq->nb_sqb_bufs, sqes_per_sqb) /
-				sqes_per_sqb;
 		if (cnxk_eth_dev->tx_offloads & RTE_ETH_TX_OFFLOAD_SECURITY)
-			sq->nb_sqb_bufs_adj -= (cnxk_eth_dev->outb.nb_desc /
-						(sqes_per_sqb - 1));
+			sq->nb_sqb_bufs_adj -= (cnxk_eth_dev->outb.nb_desc / sqes_per_sqb);
 		txq->nb_sqb_bufs_adj = sq->nb_sqb_bufs_adj;
-		txq->nb_sqb_bufs_adj =
-			((100 - ROC_NIX_SQB_THRESH) * txq->nb_sqb_bufs_adj) / 100;
 	}
 }
 
diff --git a/drivers/event/cnxk/cn9k_worker.h b/drivers/event/cnxk/cn9k_worker.h
index ec2c1c68dd..ed3b97d7e1 100644
--- a/drivers/event/cnxk/cn9k_worker.h
+++ b/drivers/event/cnxk/cn9k_worker.h
@@ -713,6 +713,14 @@ cn9k_sso_hws_xmit_sec_one(const struct cn9k_eth_txq *txq, uint64_t base,
 }
 #endif
 
+static __rte_always_inline int32_t
+cn9k_sso_sq_depth(const struct cn9k_eth_txq *txq)
+{
+	int32_t avail = (int32_t)txq->nb_sqb_bufs_adj -
+			(int32_t)__atomic_load_n(txq->fc_mem, __ATOMIC_RELAXED);
+	return (avail << txq->sqes_per_sqb_log2) - avail;
+}
+
 static __rte_always_inline uint16_t
 cn9k_sso_hws_event_tx(uint64_t base, struct rte_event *ev, uint64_t *cmd,
 		      uint64_t *txq_data, const uint32_t flags)
@@ -736,9 +744,7 @@ cn9k_sso_hws_event_tx(uint64_t base, struct rte_event *ev, uint64_t *cmd,
 	if (flags & NIX_TX_OFFLOAD_MBUF_NOFF_F && txq->tx_compl.ena)
 		handle_tx_completion_pkts(txq, 1);
 
-	if (((txq->nb_sqb_bufs_adj -
-	      __atomic_load_n((int16_t *)txq->fc_mem, __ATOMIC_RELAXED))
-	     << txq->sqes_per_sqb_log2) <= 0)
+	if (cn9k_sso_sq_depth(txq) <= 0)
 		return 0;
 	cn9k_nix_tx_skeleton(txq, cmd, flags, 0);
 	cn9k_nix_xmit_prepare(txq, m, cmd, flags, txq->lso_tun_fmt, txq->mark_flag,
diff --git a/drivers/net/cnxk/cn10k_tx.h b/drivers/net/cnxk/cn10k_tx.h
index 4f23a8dfc3..a365cbe0ee 100644
--- a/drivers/net/cnxk/cn10k_tx.h
+++ b/drivers/net/cnxk/cn10k_tx.h
@@ -35,12 +35,13 @@
 
 #define NIX_XMIT_FC_OR_RETURN(txq, pkts)                                       \
 	do {                                                                   \
+		int64_t avail;                                                 \
 		/* Cached value is low, Update the fc_cache_pkts */            \
 		if (unlikely((txq)->fc_cache_pkts < (pkts))) {                 \
+			avail = txq->nb_sqb_bufs_adj - *txq->fc_mem;           \
 			/* Multiply with sqe_per_sqb to express in pkts */     \
 			(txq)->fc_cache_pkts =                                 \
-				((txq)->nb_sqb_bufs_adj - *(txq)->fc_mem)      \
-				<< (txq)->sqes_per_sqb_log2;                   \
+				(avail << (txq)->sqes_per_sqb_log2) - avail;   \
 			/* Check it again for the room */                      \
 			if (unlikely((txq)->fc_cache_pkts < (pkts)))           \
 				return 0;                                      \
@@ -113,10 +114,9 @@ cn10k_nix_vwqe_wait_fc(struct cn10k_eth_txq *txq, int64_t req)
 	if (cached < 0) {
 		/* Check if we have space else retry. */
 		do {
-			refill =
-				(txq->nb_sqb_bufs_adj -
-				 __atomic_load_n(txq->fc_mem, __ATOMIC_RELAXED))
-				<< txq->sqes_per_sqb_log2;
+			refill = txq->nb_sqb_bufs_adj -
+				 __atomic_load_n(txq->fc_mem, __ATOMIC_RELAXED);
+			refill = (refill << txq->sqes_per_sqb_log2) - refill;
 		} while (refill <= 0);
 		__atomic_compare_exchange(&txq->fc_cache_pkts, &cached, &refill,
 					  0, __ATOMIC_RELEASE,
diff --git a/drivers/net/cnxk/cn9k_tx.h b/drivers/net/cnxk/cn9k_tx.h
index 8f1e05a461..fba4bb4215 100644
--- a/drivers/net/cnxk/cn9k_tx.h
+++ b/drivers/net/cnxk/cn9k_tx.h
@@ -32,12 +32,13 @@
 
 #define NIX_XMIT_FC_OR_RETURN(txq, pkts)                                       \
 	do {                                                                   \
+		int64_t avail;                                                 \
 		/* Cached value is low, Update the fc_cache_pkts */            \
 		if (unlikely((txq)->fc_cache_pkts < (pkts))) {                 \
+			avail = txq->nb_sqb_bufs_adj - *txq->fc_mem;           \
 			/* Multiply with sqe_per_sqb to express in pkts */     \
 			(txq)->fc_cache_pkts =                                 \
-				((txq)->nb_sqb_bufs_adj - *(txq)->fc_mem)      \
-				<< (txq)->sqes_per_sqb_log2;                   \
+				(avail << (txq)->sqes_per_sqb_log2) - avail;   \
 			/* Check it again for the room */                      \
 			if (unlikely((txq)->fc_cache_pkts < (pkts)))           \
 				return 0;                                      \
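
(Reviewer note, not part of the patch: a quick self-check of the identity the
depth calculations above rely on. This is a hypothetical standalone harness,
nothing here is driver code.)

#include <assert.h>
#include <stdint.h>

int main(void)
{
	/* (avail << log2) - avail == avail * (sqes_per_sqb - 1).
	 * For example, 10 free SQBs with 32 SQEs per SQB yield 310 usable
	 * packet slots, where the pre-patch math reported 320.
	 */
	for (int32_t avail = 0; avail <= 1024; avail++)
		for (uint16_t log2 = 1; log2 <= 10; log2++)
			assert(((avail << log2) - avail) ==
			       avail * ((1 << log2) - 1));
	return 0;
}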