From patchwork Thu Dec 16 17:49:26 2021
X-Patchwork-Submitter: Anoob Joseph
X-Patchwork-Id: 105185
From: Anoob Joseph
To: Akhil Goyal, Jerin Jacob
CC: Anoob Joseph, Archana Muniganti, Tejasree Kondoj,
Subject: [PATCH v2 20/29] crypto/cnxk: use atomics to access CPT res
Date: Thu, 16 Dec 2021 23:19:26 +0530
Message-ID: <1639676975-1316-21-git-send-email-anoobj@marvell.com>
In-Reply-To: <1639676975-1316-1-git-send-email-anoobj@marvell.com>
References: <1638859858-734-1-git-send-email-anoobj@marvell.com>
 <1639676975-1316-1-git-send-email-anoobj@marvell.com>
List-Id: DPDK patches and discussions

The result memory is updated by the hardware. Use atomics to read it.

Signed-off-by: Anoob Joseph
---
 drivers/common/cnxk/hw/cpt.h              |  2 ++
 drivers/crypto/cnxk/cn10k_cryptodev_ops.c | 24 ++++++++++++++++--------
 drivers/crypto/cnxk/cn9k_cryptodev_ops.c  | 28 +++++++++++++++++++---------
 3 files changed, 37 insertions(+), 17 deletions(-)

diff --git a/drivers/common/cnxk/hw/cpt.h b/drivers/common/cnxk/hw/cpt.h
index ccc7af4..412dd76 100644
--- a/drivers/common/cnxk/hw/cpt.h
+++ b/drivers/common/cnxk/hw/cpt.h
@@ -215,6 +215,8 @@ union cpt_res_s {
 		uint64_t reserved_64_127;
 	} cn9k;
+
+	uint64_t u64[2];
 };
 
 /* [CN10K, .) */
 
diff --git a/drivers/crypto/cnxk/cn10k_cryptodev_ops.c b/drivers/crypto/cnxk/cn10k_cryptodev_ops.c
index 638268e..f8240e1 100644
--- a/drivers/crypto/cnxk/cn10k_cryptodev_ops.c
+++ b/drivers/crypto/cnxk/cn10k_cryptodev_ops.c
@@ -111,6 +111,10 @@ cn10k_cpt_fill_inst(struct cnxk_cpt_qp *qp, struct rte_crypto_op *ops[],
 	uint64_t w7;
 	int ret;
 
+	const union cpt_res_s res = {
+		.cn10k.compcode = CPT_COMP_NOT_DONE,
+	};
+
 	op = ops[0];
 
 	inst[0].w0.u64 = 0;
@@ -174,7 +178,7 @@ cn10k_cpt_fill_inst(struct cnxk_cpt_qp *qp, struct rte_crypto_op *ops[],
 	}
 
 	inst[0].res_addr = (uint64_t)&infl_req->res;
-	infl_req->res.cn10k.compcode = CPT_COMP_NOT_DONE;
+	__atomic_store_n(&infl_req->res.u64[0], res.u64[0], __ATOMIC_RELAXED);
 	infl_req->cop = op;
 
 	inst[0].w7.u64 = w7;
@@ -395,9 +399,9 @@ cn10k_cpt_sec_ucc_process(struct rte_crypto_op *cop,
 
 static inline void
 cn10k_cpt_dequeue_post_process(struct cnxk_cpt_qp *qp, struct rte_crypto_op *cop,
-			       struct cpt_inflight_req *infl_req)
+			       struct cpt_inflight_req *infl_req,
+			       struct cpt_cn10k_res_s *res)
 {
-	struct cpt_cn10k_res_s *res = (struct cpt_cn10k_res_s *)&infl_req->res;
 	const uint8_t uc_compcode = res->uc_compcode;
 	const uint8_t compcode = res->compcode;
 	unsigned int sz;
@@ -495,12 +499,15 @@ cn10k_cpt_crypto_adapter_dequeue(uintptr_t get_work1)
 	struct cpt_inflight_req *infl_req;
 	struct rte_crypto_op *cop;
 	struct cnxk_cpt_qp *qp;
+	union cpt_res_s res;
 
 	infl_req = (struct cpt_inflight_req *)(get_work1);
 	cop = infl_req->cop;
 	qp = infl_req->qp;
 
-	cn10k_cpt_dequeue_post_process(qp, infl_req->cop, infl_req);
+	res.u64[0] = __atomic_load_n(&infl_req->res.u64[0], __ATOMIC_RELAXED);
+
+	cn10k_cpt_dequeue_post_process(qp, infl_req->cop, infl_req, &res.cn10k);
 
 	if (unlikely(infl_req->op_flags & CPT_OP_FLAGS_METABUF))
 		rte_mempool_put(qp->meta_info.pool, infl_req->mdata);
@@ -515,9 +522,9 @@ cn10k_cpt_dequeue_burst(void *qptr, struct rte_crypto_op **ops, uint16_t nb_ops)
 	struct cpt_inflight_req *infl_req;
 	struct cnxk_cpt_qp *qp = qptr;
 	struct pending_queue *pend_q;
-	struct cpt_cn10k_res_s *res;
 	uint64_t infl_cnt, pq_tail;
 	struct rte_crypto_op *cop;
+	union cpt_res_s res;
 	int i;
 
 	pend_q = &qp->pend_q;
@@ -534,9 +541,10 @@ cn10k_cpt_dequeue_burst(void *qptr, struct rte_crypto_op **ops, uint16_t nb_ops)
 	for (i = 0; i < nb_ops; i++) {
 		infl_req = &pend_q->req_queue[pq_tail];
 
-		res = (struct cpt_cn10k_res_s *)&infl_req->res;
+		res.u64[0] = __atomic_load_n(&infl_req->res.u64[0],
+					     __ATOMIC_RELAXED);
 
-		if (unlikely(res->compcode == CPT_COMP_NOT_DONE)) {
+		if (unlikely(res.cn10k.compcode == CPT_COMP_NOT_DONE)) {
 			if (unlikely(rte_get_timer_cycles() >
 				     pend_q->time_out)) {
 				plt_err("Request timed out");
@@ -553,7 +561,7 @@ cn10k_cpt_dequeue_burst(void *qptr, struct rte_crypto_op **ops, uint16_t nb_ops)
 
 		ops[i] = cop;
 
-		cn10k_cpt_dequeue_post_process(qp, cop, infl_req);
+		cn10k_cpt_dequeue_post_process(qp, cop, infl_req, &res.cn10k);
 
 		if (unlikely(infl_req->op_flags & CPT_OP_FLAGS_METABUF))
 			rte_mempool_put(qp->meta_info.pool, infl_req->mdata);
diff --git a/drivers/crypto/cnxk/cn9k_cryptodev_ops.c b/drivers/crypto/cnxk/cn9k_cryptodev_ops.c
index 449208d..cf80d47 100644
--- a/drivers/crypto/cnxk/cn9k_cryptodev_ops.c
+++ b/drivers/crypto/cnxk/cn9k_cryptodev_ops.c
@@ -221,6 +221,10 @@ cn9k_cpt_enqueue_burst(void *qptr, struct rte_crypto_op **ops, uint16_t nb_ops)
 	uint64_t head;
 	int ret;
 
+	const union cpt_res_s res = {
+		.cn10k.compcode = CPT_COMP_NOT_DONE,
+	};
+
 	pend_q = &qp->pend_q;
 
 	const uint64_t lmt_base = qp->lf.lmt_base;
@@ -274,10 +278,12 @@ cn9k_cpt_enqueue_burst(void *qptr, struct rte_crypto_op **ops, uint16_t nb_ops)
 		infl_req_1->op_flags = 0;
 		infl_req_2->op_flags = 0;
 
-		infl_req_1->res.cn9k.compcode = CPT_COMP_NOT_DONE;
+		__atomic_store_n(&infl_req_1->res.u64[0], res.u64[0],
+				 __ATOMIC_RELAXED);
 		inst[0].res_addr = (uint64_t)&infl_req_1->res;
 
-		infl_req_2->res.cn9k.compcode = CPT_COMP_NOT_DONE;
+		__atomic_store_n(&infl_req_2->res.u64[0], res.u64[0],
+				 __ATOMIC_RELAXED);
 		inst[1].res_addr = (uint64_t)&infl_req_2->res;
 
 		ret = cn9k_cpt_inst_prep(qp, op_1, infl_req_1, &inst[0]);
@@ -410,9 +416,9 @@ cn9k_cpt_sec_post_process(struct rte_crypto_op *cop,
 
 static inline void
 cn9k_cpt_dequeue_post_process(struct cnxk_cpt_qp *qp, struct rte_crypto_op *cop,
-			      struct cpt_inflight_req *infl_req)
+			      struct cpt_inflight_req *infl_req,
+			      struct cpt_cn9k_res_s *res)
 {
-	struct cpt_cn9k_res_s *res = (struct cpt_cn9k_res_s *)&infl_req->res;
 	unsigned int sz;
 
 	if (likely(res->compcode == CPT_COMP_GOOD)) {
@@ -492,12 +498,15 @@ cn9k_cpt_crypto_adapter_dequeue(uintptr_t get_work1)
 	struct cpt_inflight_req *infl_req;
 	struct rte_crypto_op *cop;
 	struct cnxk_cpt_qp *qp;
+	union cpt_res_s res;
 
 	infl_req = (struct cpt_inflight_req *)(get_work1);
 	cop = infl_req->cop;
 	qp = infl_req->qp;
 
-	cn9k_cpt_dequeue_post_process(qp, infl_req->cop, infl_req);
+	res.u64[0] = __atomic_load_n(&infl_req->res.u64[0], __ATOMIC_RELAXED);
+
+	cn9k_cpt_dequeue_post_process(qp, infl_req->cop, infl_req, &res.cn9k);
 
 	if (unlikely(infl_req->op_flags & CPT_OP_FLAGS_METABUF))
 		rte_mempool_put(qp->meta_info.pool, infl_req->mdata);
@@ -512,9 +521,9 @@ cn9k_cpt_dequeue_burst(void *qptr, struct rte_crypto_op **ops, uint16_t nb_ops)
 	struct cpt_inflight_req *infl_req;
 	struct cnxk_cpt_qp *qp = qptr;
 	struct pending_queue *pend_q;
-	struct cpt_cn9k_res_s *res;
 	uint64_t infl_cnt, pq_tail;
 	struct rte_crypto_op *cop;
+	union cpt_res_s res;
 	int i;
 
 	pend_q = &qp->pend_q;
@@ -531,9 +540,10 @@ cn9k_cpt_dequeue_burst(void *qptr, struct rte_crypto_op **ops, uint16_t nb_ops)
 	for (i = 0; i < nb_ops; i++) {
 		infl_req = &pend_q->req_queue[pq_tail];
 
-		res = (struct cpt_cn9k_res_s *)&infl_req->res;
+		res.u64[0] = __atomic_load_n(&infl_req->res.u64[0],
+					     __ATOMIC_RELAXED);
 
-		if (unlikely(res->compcode == CPT_COMP_NOT_DONE)) {
+		if (unlikely(res.cn9k.compcode == CPT_COMP_NOT_DONE)) {
 			if (unlikely(rte_get_timer_cycles() >
 				     pend_q->time_out)) {
 				plt_err("Request timed out");
@@ -550,7 +560,7 @@ cn9k_cpt_dequeue_burst(void *qptr, struct rte_crypto_op **ops, uint16_t nb_ops)
 
 		ops[i] = cop;
 
-		cn9k_cpt_dequeue_post_process(qp, cop, infl_req);
+		cn9k_cpt_dequeue_post_process(qp, cop, infl_req, &res.cn9k);
 
 		if (unlikely(infl_req->op_flags & CPT_OP_FLAGS_METABUF))
 			rte_mempool_put(qp->meta_info.pool, infl_req->mdata);