From patchwork Tue Nov 3 08:37:17 2020
X-Patchwork-Submitter: Archana Muniganti
X-Patchwork-Id: 83508
X-Patchwork-Delegate: gakhil@marvell.com
From: Archana Muniganti
Date: Tue, 3 Nov 2020 14:07:17 +0530
Message-ID: <20201103083717.10935-5-marchana@marvell.com>
X-Mailer: git-send-email 2.22.0
In-Reply-To: <20201103083717.10935-1-marchana@marvell.com>
References: <20201103083717.10935-1-marchana@marvell.com>
Subject: [dpdk-dev] [PATCH 4/4] common/cpt: remove redundant structure

Replaced the structure 'rid', which has only a single field, with that field itself.
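For illustration, below is a minimal standalone sketch of the pending queue after this change. The struct layout and field names (pending_count, req_queue, enq_tail, deq_head) follow the patch; DEMO_QLEN, the simplified MOD_INC(), the demo_* helpers and main() are invented for the example and are not part of the driver.

  #include <stdint.h>
  #include <stdio.h>

  #define DEMO_QLEN 8 /* illustrative depth, not the driver's DEFAULT_CMD_QLEN */

  /* Simplified stand-in for the driver's MOD_INC() wrap-around increment. */
  #define MOD_INC(i, l) ((i) = (((i) + 1) == (l)) ? 0 : (i) + 1)

  /* Pending queue after the patch: each slot holds the request id
   * (a uintptr_t) directly instead of a one-field 'struct rid' wrapper. */
  struct pending_queue {
          uint64_t pending_count;  /* number of outstanding requests */
          uintptr_t *req_queue;    /* array of pending request ids */
          uint16_t enq_tail;       /* next slot to enqueue into */
          uint16_t deq_head;       /* next slot to dequeue from */
  };

  static void
  demo_enqueue(struct pending_queue *q, void *req)
  {
          /* Previously: q->rid_queue[q->enq_tail].rid = (uintptr_t)req; */
          q->req_queue[q->enq_tail] = (uintptr_t)req;
          MOD_INC(q->enq_tail, DEMO_QLEN);
          q->pending_count++;
  }

  static void *
  demo_dequeue(struct pending_queue *q)
  {
          /* Previously the pointer was read back through rid_e->rid. */
          void *req = (void *)q->req_queue[q->deq_head];

          MOD_INC(q->deq_head, DEMO_QLEN);
          q->pending_count--;
          return req;
  }

  int
  main(void)
  {
          uintptr_t slots[DEMO_QLEN];
          struct pending_queue q = { 0, slots, 0, 0 };
          int dummy_req = 42;

          demo_enqueue(&q, &dummy_req);
          printf("dequeued request carries %d\n", *(int *)demo_dequeue(&q));
          return 0;
  }

Dropping the wrapper also removes the RTE_ALIGN(sizeof(struct rid), 8) sizing and lets the dequeue path prefetch the request-id array directly.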
Signed-off-by: Archana Muniganti
---
 drivers/common/cpt/cpt_common.h                   |  7 +------
 drivers/crypto/octeontx/otx_cryptodev_hw_access.c | 10 +++++-----
 drivers/crypto/octeontx/otx_cryptodev_ops.c       | 13 +++++++------
 drivers/crypto/octeontx2/otx2_cryptodev_ops.c     | 13 ++++++-------
 4 files changed, 19 insertions(+), 24 deletions(-)

diff --git a/drivers/common/cpt/cpt_common.h b/drivers/common/cpt/cpt_common.h
index f61495e458..7fea0ca879 100644
--- a/drivers/common/cpt/cpt_common.h
+++ b/drivers/common/cpt/cpt_common.h
@@ -27,11 +27,6 @@ struct cpt_qp_meta_info {
 	int lb_mlen;
 };
 
-struct rid {
-	/** Request id of a crypto operation */
-	uintptr_t rid;
-};
-
 /*
  * Pending queue structure
  *
@@ -40,7 +35,7 @@ struct pending_queue {
 	/** Pending requests count */
 	uint64_t pending_count;
 	/** Array of pending requests */
-	struct rid *rid_queue;
+	uintptr_t *req_queue;
 	/** Tail of queue to be used for enqueue */
 	uint16_t enq_tail;
 	/** Head of queue to be used for dequeue */
diff --git a/drivers/crypto/octeontx/otx_cryptodev_hw_access.c b/drivers/crypto/octeontx/otx_cryptodev_hw_access.c
index ce546c2ffe..c6b1a5197d 100644
--- a/drivers/crypto/octeontx/otx_cryptodev_hw_access.c
+++ b/drivers/crypto/octeontx/otx_cryptodev_hw_access.c
@@ -535,7 +535,7 @@ otx_cpt_get_resource(const struct rte_cryptodev *dev, uint8_t group,
 	len = chunks * RTE_ALIGN(sizeof(struct command_chunk), 8);
 
 	/* For pending queue */
-	len += qlen * RTE_ALIGN(sizeof(struct rid), 8);
+	len += qlen * sizeof(uintptr_t);
 
 	/* So that instruction queues start as pg size aligned */
 	len = RTE_ALIGN(len, pg_sz);
@@ -570,14 +570,14 @@ otx_cpt_get_resource(const struct rte_cryptodev *dev, uint8_t group,
 	}
 
 	/* Pending queue setup */
-	cptvf->pqueue.rid_queue = (struct rid *)mem;
+	cptvf->pqueue.req_queue = (uintptr_t *)mem;
 	cptvf->pqueue.enq_tail = 0;
 	cptvf->pqueue.deq_head = 0;
 	cptvf->pqueue.pending_count = 0;
 
-	mem += qlen * RTE_ALIGN(sizeof(struct rid), 8);
-	len -= qlen * RTE_ALIGN(sizeof(struct rid), 8);
-	dma_addr += qlen * RTE_ALIGN(sizeof(struct rid), 8);
+	mem += qlen * sizeof(uintptr_t);
+	len -= qlen * sizeof(uintptr_t);
+	dma_addr += qlen * sizeof(uintptr_t);
 
 	/* Alignment wastage */
 	used_len = alloc_len - len;
diff --git a/drivers/crypto/octeontx/otx_cryptodev_ops.c b/drivers/crypto/octeontx/otx_cryptodev_ops.c
index 0a0c50a363..9f731f8cc9 100644
--- a/drivers/crypto/octeontx/otx_cryptodev_ops.c
+++ b/drivers/crypto/octeontx/otx_cryptodev_ops.c
@@ -430,7 +430,7 @@ otx_cpt_request_enqueue(struct cpt_instance *instance,
 	/* Default mode of software queue */
 	mark_cpt_inst(instance);
 
-	pqueue->rid_queue[pqueue->enq_tail].rid = (uintptr_t)user_req;
+	pqueue->req_queue[pqueue->enq_tail] = (uintptr_t)user_req;
 
 	/* We will use soft queue length here to limit requests */
 	MOD_INC(pqueue->enq_tail, DEFAULT_CMD_QLEN);
@@ -823,7 +823,6 @@ otx_cpt_pkt_dequeue(void *qptr, struct rte_crypto_op **ops, uint16_t nb_ops,
 	struct cpt_instance *instance = (struct cpt_instance *)qptr;
 	struct cpt_request_info *user_req;
 	struct cpt_vf *cptvf = (struct cpt_vf *)instance;
-	struct rid *rid_e;
 	uint8_t cc[nb_ops];
 	int i, count, pcount;
 	uint8_t ret;
@@ -837,11 +836,13 @@ otx_cpt_pkt_dequeue(void *qptr, struct rte_crypto_op **ops, uint16_t nb_ops,
 	count = (nb_ops > pcount) ?
 				pcount : nb_ops;
 
 	for (i = 0; i < count; i++) {
-		rid_e = &pqueue->rid_queue[pqueue->deq_head];
-		user_req = (struct cpt_request_info *)(rid_e->rid);
+		user_req = (struct cpt_request_info *)
+				pqueue->req_queue[pqueue->deq_head];
 
-		if (likely((i+1) < count))
-			rte_prefetch_non_temporal((void *)rid_e[1].rid);
+		if (likely((i+1) < count)) {
+			rte_prefetch_non_temporal(
+				(void *)pqueue->req_queue[i+1]);
+		}
 
 		ret = check_nb_command_id(user_req, instance);
 
diff --git a/drivers/crypto/octeontx2/otx2_cryptodev_ops.c b/drivers/crypto/octeontx2/otx2_cryptodev_ops.c
index fe76fe38c2..c337398242 100644
--- a/drivers/crypto/octeontx2/otx2_cryptodev_ops.c
+++ b/drivers/crypto/octeontx2/otx2_cryptodev_ops.c
@@ -192,7 +192,7 @@ otx2_cpt_qp_create(const struct rte_cryptodev *dev, uint16_t qp_id,
 	size_div40 = (iq_len + 40 - 1) / 40 + 1;
 
 	/* For pending queue */
-	len = iq_len * RTE_ALIGN(sizeof(struct rid), 8);
+	len = iq_len * sizeof(uintptr_t);
 
 	/* Space for instruction group memory */
 	len += size_div40 * 16;
@@ -229,12 +229,12 @@ otx2_cpt_qp_create(const struct rte_cryptodev *dev, uint16_t qp_id,
 	}
 
 	/* Initialize pending queue */
-	qp->pend_q.rid_queue = (struct rid *)va;
+	qp->pend_q.req_queue = (uintptr_t *)va;
 	qp->pend_q.enq_tail = 0;
 	qp->pend_q.deq_head = 0;
 	qp->pend_q.pending_count = 0;
 
-	used_len = iq_len * RTE_ALIGN(sizeof(struct rid), 8);
+	used_len = iq_len * sizeof(uintptr_t);
 	used_len += size_div40 * 16;
 	used_len = RTE_ALIGN(used_len, pg_sz);
 	iova += used_len;
@@ -520,7 +520,7 @@ otx2_cpt_enqueue_req(const struct otx2_cpt_qp *qp,
 		lmt_status = otx2_lmt_submit(qp->lf_nq_reg);
 	} while (lmt_status == 0);
 
-	pend_q->rid_queue[pend_q->enq_tail].rid = (uintptr_t)req;
+	pend_q->req_queue[pend_q->enq_tail] = (uintptr_t)req;
 
 	/* We will use soft queue length here to limit requests */
 	MOD_INC(pend_q->enq_tail, OTX2_CPT_DEFAULT_CMD_QLEN);
@@ -977,7 +977,6 @@ otx2_cpt_dequeue_burst(void *qptr, struct rte_crypto_op **ops, uint16_t nb_ops)
 	struct cpt_request_info *req;
 	struct rte_crypto_op *cop;
 	uint8_t cc[nb_ops];
-	struct rid *rid;
 	uintptr_t *rsp;
 	void *metabuf;
 
@@ -989,8 +988,8 @@ otx2_cpt_dequeue_burst(void *qptr, struct rte_crypto_op **ops, uint16_t nb_ops)
 		nb_ops = nb_pending;
 
 	for (i = 0; i < nb_ops; i++) {
-		rid = &pend_q->rid_queue[pend_q->deq_head];
-		req = (struct cpt_request_info *)(rid->rid);
+		req = (struct cpt_request_info *)
+			pend_q->req_queue[pend_q->deq_head];
 
 		cc[i] = otx2_cpt_compcode_get(req);