From patchwork Fri Apr 12 11:57:21 2024
X-Patchwork-Submitter: Akhil Goyal
X-Patchwork-Id: 139245
X-Patchwork-Delegate: gakhil@marvell.com
From: Akhil Goyal
Subject: [PATCH v2 2/3] crypto/cnxk: support queue pair depth API
Date: Fri, 12 Apr 2024 17:27:21 +0530
Message-ID: <20240412115722.3709194-3-gakhil@marvell.com>
In-Reply-To: <20240412115722.3709194-1-gakhil@marvell.com>
References: <20240411082232.3495883-1-gakhil@marvell.com>
 <20240412115722.3709194-1-gakhil@marvell.com>

Added support to get the used queue pair depth for a specific queue
on the cn9k and cn10k platforms.
Signed-off-by: Akhil Goyal <gakhil@marvell.com>
---
 drivers/crypto/cnxk/cn10k_cryptodev.c    |  1 +
 drivers/crypto/cnxk/cn9k_cryptodev.c     |  2 ++
 drivers/crypto/cnxk/cnxk_cryptodev_ops.c | 16 ++++++++++++++++
 drivers/crypto/cnxk/cnxk_cryptodev_ops.h |  2 ++
 4 files changed, 21 insertions(+)

diff --git a/drivers/crypto/cnxk/cn10k_cryptodev.c b/drivers/crypto/cnxk/cn10k_cryptodev.c
index 5ed918e18e..70bef13cda 100644
--- a/drivers/crypto/cnxk/cn10k_cryptodev.c
+++ b/drivers/crypto/cnxk/cn10k_cryptodev.c
@@ -99,6 +99,7 @@ cn10k_cpt_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
 
 	dev->driver_id = cn10k_cryptodev_driver_id;
 	dev->feature_flags = cnxk_cpt_default_ff_get();
+	dev->qp_depth_used = cnxk_cpt_qp_depth_used;
 
 	cn10k_cpt_set_enqdeq_fns(dev, vf);
 	cn10k_sec_ops_override();
diff --git a/drivers/crypto/cnxk/cn9k_cryptodev.c b/drivers/crypto/cnxk/cn9k_cryptodev.c
index 47b0874185..818458bd6f 100644
--- a/drivers/crypto/cnxk/cn9k_cryptodev.c
+++ b/drivers/crypto/cnxk/cn9k_cryptodev.c
@@ -15,6 +15,7 @@
 #include "cn9k_ipsec.h"
 #include "cnxk_cryptodev.h"
 #include "cnxk_cryptodev_capabilities.h"
+#include "cnxk_cryptodev_ops.h"
 #include "cnxk_cryptodev_sec.h"
 #include "roc_api.h"
 
@@ -96,6 +97,7 @@ cn9k_cpt_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
 	dev->dev_ops = &cn9k_cpt_ops;
 
 	dev->driver_id = cn9k_cryptodev_driver_id;
 	dev->feature_flags = cnxk_cpt_default_ff_get();
+	dev->qp_depth_used = cnxk_cpt_qp_depth_used;
 
 	cnxk_cpt_caps_populate(vf);
diff --git a/drivers/crypto/cnxk/cnxk_cryptodev_ops.c b/drivers/crypto/cnxk/cnxk_cryptodev_ops.c
index 1dd1dbac9a..d7f5780637 100644
--- a/drivers/crypto/cnxk/cnxk_cryptodev_ops.c
+++ b/drivers/crypto/cnxk/cnxk_cryptodev_ops.c
@@ -496,6 +496,22 @@ cnxk_cpt_queue_pair_setup(struct rte_cryptodev *dev, uint16_t qp_id,
 	return ret;
 }
 
+uint32_t
+cnxk_cpt_qp_depth_used(void *qptr)
+{
+	struct cnxk_cpt_qp *qp = qptr;
+	struct pending_queue *pend_q;
+	union cpt_fc_write_s fc;
+
+	pend_q = &qp->pend_q;
+
+	fc.u64[0] = rte_atomic_load_explicit((RTE_ATOMIC(uint64_t)*)(qp->lmtline.fc_addr),
+					     rte_memory_order_relaxed);
+
+	return RTE_MAX(pending_queue_infl_cnt(pend_q->head, pend_q->tail, pend_q->pq_mask),
+		       fc.s.qsize);
+}
+
 unsigned int
 cnxk_cpt_sym_session_get_size(struct rte_cryptodev *dev __rte_unused)
 {
diff --git a/drivers/crypto/cnxk/cnxk_cryptodev_ops.h b/drivers/crypto/cnxk/cnxk_cryptodev_ops.h
index e7bba25cb8..708fad910d 100644
--- a/drivers/crypto/cnxk/cnxk_cryptodev_ops.h
+++ b/drivers/crypto/cnxk/cnxk_cryptodev_ops.h
@@ -142,6 +142,8 @@ int cnxk_ae_session_cfg(struct rte_cryptodev *dev,
 void cnxk_cpt_dump_on_err(struct cnxk_cpt_qp *qp);
 int cnxk_cpt_queue_pair_event_error_query(struct rte_cryptodev *dev, uint16_t qp_id);
 
+uint32_t cnxk_cpt_qp_depth_used(void *qptr);
+
 static __rte_always_inline void
 pending_queue_advance(uint64_t *index, const uint64_t mask)
 {
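
For reviewers trying the feature out, a minimal usage sketch from the
application side follows. It assumes the generic wrapper introduced in
patch 1/3 of this series, rte_cryptodev_qp_depth_used(), which dispatches
to the dev->qp_depth_used callback wired up here; the wrapper's exact
return convention, the QP_NB_DESC macro and the enqueue_if_room() helper
are illustrative assumptions, not part of this patch.

/*
 * Hedged sketch, not part of this patch: gate enqueue on the used depth
 * of a crypto queue pair. Assumes rte_cryptodev_qp_depth_used() from
 * patch 1/3 of this series; QP_NB_DESC and enqueue_if_room() are
 * illustrative names only.
 */
#include <rte_crypto.h>
#include <rte_cryptodev.h>

#define QP_NB_DESC 2048 /* assumed queue pair size used at setup time */

static uint16_t
enqueue_if_room(uint8_t dev_id, uint16_t qp_id,
		struct rte_crypto_op **ops, uint16_t nb_ops)
{
	/* Descriptors currently held by the PMD/hardware for this qp. */
	uint32_t used = rte_cryptodev_qp_depth_used(dev_id, qp_id);

	/* Skip the burst if it would not fit in the remaining space. */
	if (used + nb_ops > QP_NB_DESC)
		return 0;

	return rte_cryptodev_enqueue_burst(dev_id, qp_id, ops, nb_ops);
}

On cn9k/cn10k, the value reported by cnxk_cpt_qp_depth_used() is the
larger of the software pending-queue inflight count and the hardware
flow-control queue size read from fc_addr, so it reflects descriptors
still queued in software as well as those pending in CPT hardware.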