From patchwork Wed Oct 19 14:15:01 2022
X-Patchwork-Submitter: Tejasree Kondoj
X-Patchwork-Id: 118574
X-Patchwork-Delegate: gakhil@marvell.com
From: Tejasree Kondoj
To: Akhil Goyal
CC: Anoob Joseph, Vidya Sagar Velumuri
Subject: [PATCH 01/13] crypto/cnxk: fix length of AES-CMAC algo
Date: Wed, 19 Oct 2022 19:45:01 +0530
Message-ID: <20221019141513.1969052-2-ktejasree@marvell.com>
In-Reply-To: <20221019141513.1969052-1-ktejasree@marvell.com>
References: <20221019141513.1969052-1-ktejasree@marvell.com>

AES-CMAC uses PDCP opcode. Length should be passed in bits.
Fixes: 759b5e653580 ("crypto/cnxk: support AES-CMAC")

Signed-off-by: Tejasree Kondoj
---
 drivers/crypto/cnxk/cnxk_se.h | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/drivers/crypto/cnxk/cnxk_se.h b/drivers/crypto/cnxk/cnxk_se.h
index 54a78d0a5a..c92e2cca2f 100644
--- a/drivers/crypto/cnxk/cnxk_se.h
+++ b/drivers/crypto/cnxk/cnxk_se.h
@@ -1323,6 +1323,9 @@ cpt_pdcp_alg_prep(uint32_t req_flags, uint64_t d_offs, uint64_t d_lens,
 		auth_offset += iv_len;
 
 		inputlen = auth_offset + auth_data_len;
+
+		/* length should be in bits */
+		auth_data_len *= 8;
 	}
 
 	outputlen = mac_len;

From patchwork Wed Oct 19 14:15:02 2022
X-Patchwork-Submitter: Tejasree Kondoj
X-Patchwork-Id: 118573
X-Patchwork-Delegate: gakhil@marvell.com
From: Tejasree Kondoj
To: Akhil Goyal
CC: Anoob Joseph, Vidya Sagar Velumuri
Subject: [PATCH 02/13] common/cnxk: set inplace bit of lookaside IPsec
Date: Wed, 19 Oct 2022 19:45:02 +0530
Message-ID: <20221019141513.1969052-3-ktejasree@marvell.com>
In-Reply-To: <20221019141513.1969052-1-ktejasree@marvell.com>
References: <20221019141513.1969052-1-ktejasree@marvell.com>

Set inplace bit of lookaside IPsec and remove rptr population in datapath.
Signed-off-by: Tejasree Kondoj
---
 drivers/common/cnxk/roc_ie_on.h          | 1 +
 drivers/common/cnxk/roc_ie_ot.h          | 2 ++
 drivers/crypto/cnxk/cn10k_ipsec.c        | 4 ++--
 drivers/crypto/cnxk/cn10k_ipsec_la_ops.h | 2 --
 drivers/crypto/cnxk/cn9k_ipsec.c         | 4 ++--
 drivers/crypto/cnxk/cn9k_ipsec_la_ops.h  | 3 +--
 6 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/drivers/common/cnxk/roc_ie_on.h b/drivers/common/cnxk/roc_ie_on.h
index 961d5fc95e..5d02684e34 100644
--- a/drivers/common/cnxk/roc_ie_on.h
+++ b/drivers/common/cnxk/roc_ie_on.h
@@ -30,6 +30,7 @@ enum roc_ie_on_ucc_ipsec {
 #define ROC_IE_ON_INB_RPTR_HDR 16
 #define ROC_IE_ON_MAX_IV_LEN   16
 #define ROC_IE_ON_PER_PKT_IV   BIT(43)
+#define ROC_IE_ON_INPLACE_BIT  BIT(6)
 
 enum {
 	ROC_IE_ON_SA_ENC_NULL = 0,
diff --git a/drivers/common/cnxk/roc_ie_ot.h b/drivers/common/cnxk/roc_ie_ot.h
index 56a1e9f1d6..722fbc1ddc 100644
--- a/drivers/common/cnxk/roc_ie_ot.h
+++ b/drivers/common/cnxk/roc_ie_ot.h
@@ -18,6 +18,8 @@
 #define ROC_IE_OT_CPT_TS_PKIND	  54
 #define ROC_IE_OT_SA_CTX_HDR_SIZE 1
 
+#define ROC_IE_OT_INPLACE_BIT BIT(6)
+
 enum roc_ie_ot_ucc_ipsec {
 	ROC_IE_OT_UCC_SUCCESS = 0x00,
 	ROC_IE_OT_UCC_ERR_SA_INVAL = 0xb0,
diff --git a/drivers/crypto/cnxk/cn10k_ipsec.c b/drivers/crypto/cnxk/cn10k_ipsec.c
index ef013c8bae..1740a73c36 100644
--- a/drivers/crypto/cnxk/cn10k_ipsec.c
+++ b/drivers/crypto/cnxk/cn10k_ipsec.c
@@ -99,7 +99,7 @@ cn10k_ipsec_outb_sa_create(struct roc_cpt *roc_cpt, struct roc_cpt_lf *lf,
 
 	/* pre-populate CPT INST word 4 */
 	inst_w4.u64 = 0;
-	inst_w4.s.opcode_major = ROC_IE_OT_MAJOR_OP_PROCESS_OUTBOUND_IPSEC;
+	inst_w4.s.opcode_major = ROC_IE_OT_MAJOR_OP_PROCESS_OUTBOUND_IPSEC | ROC_IE_OT_INPLACE_BIT;
 
 	param1.u16 = 0;
@@ -193,7 +193,7 @@ cn10k_ipsec_inb_sa_create(struct roc_cpt *roc_cpt, struct roc_cpt_lf *lf,
 
 	/* pre-populate CPT INST word 4 */
 	inst_w4.u64 = 0;
-	inst_w4.s.opcode_major = ROC_IE_OT_MAJOR_OP_PROCESS_INBOUND_IPSEC;
+	inst_w4.s.opcode_major = ROC_IE_OT_MAJOR_OP_PROCESS_INBOUND_IPSEC | ROC_IE_OT_INPLACE_BIT;
 
 	param1.u16 = 0;
diff --git a/drivers/crypto/cnxk/cn10k_ipsec_la_ops.h b/drivers/crypto/cnxk/cn10k_ipsec_la_ops.h
index a75e88cb28..084198b5bb 100644
--- a/drivers/crypto/cnxk/cn10k_ipsec_la_ops.h
+++ b/drivers/crypto/cnxk/cn10k_ipsec_la_ops.h
@@ -83,7 +83,6 @@ process_outb_sa(struct roc_cpt_lf *lf, struct rte_crypto_op *cop, struct cn10k_s
 	inst->w4.u64 = inst_w4_u64 | rte_pktmbuf_pkt_len(m_src);
 	dptr = rte_pktmbuf_mtod(m_src, uint64_t);
 	inst->dptr = dptr;
-	inst->rptr = dptr;
 
 	return 0;
 }
@@ -99,7 +98,6 @@ process_inb_sa(struct rte_crypto_op *cop, struct cn10k_sec_session *sess, struct
 	inst->w4.u64 = sess->inst.w4 | rte_pktmbuf_pkt_len(m_src);
 	dptr = rte_pktmbuf_mtod(m_src, uint64_t);
 	inst->dptr = dptr;
-	inst->rptr = dptr;
 
 	return 0;
 }
diff --git a/drivers/crypto/cnxk/cn9k_ipsec.c b/drivers/crypto/cnxk/cn9k_ipsec.c
index 5f3a74107b..55a13570ad 100644
--- a/drivers/crypto/cnxk/cn9k_ipsec.c
+++ b/drivers/crypto/cnxk/cn9k_ipsec.c
@@ -87,7 +87,7 @@ cn9k_ipsec_outb_sa_create(struct cnxk_cpt_qp *qp,
 		return ret;
 
 	w4.u64 = 0;
-	w4.s.opcode_major = ROC_IE_ON_MAJOR_OP_PROCESS_OUTBOUND_IPSEC;
+	w4.s.opcode_major = ROC_IE_ON_MAJOR_OP_PROCESS_OUTBOUND_IPSEC | ROC_IE_ON_INPLACE_BIT;
 	w4.s.opcode_minor = ctx_len >> 3;
 
 	param1.u16 = 0;
@@ -174,7 +174,7 @@ cn9k_ipsec_inb_sa_create(struct cnxk_cpt_qp *qp,
 		return ret;
 
 	w4.u64 = 0;
-	w4.s.opcode_major = ROC_IE_ON_MAJOR_OP_PROCESS_INBOUND_IPSEC;
+	w4.s.opcode_major = ROC_IE_ON_MAJOR_OP_PROCESS_INBOUND_IPSEC | ROC_IE_ON_INPLACE_BIT;
 	w4.s.opcode_minor = ctx_len >> 3;
 
 	param2.u16 = 0;
diff --git a/drivers/crypto/cnxk/cn9k_ipsec_la_ops.h b/drivers/crypto/cnxk/cn9k_ipsec_la_ops.h
index 8b4e636c70..52618e8840 100644
--- a/drivers/crypto/cnxk/cn9k_ipsec_la_ops.h
+++ b/drivers/crypto/cnxk/cn9k_ipsec_la_ops.h
@@ -82,7 +82,6 @@ process_outb_sa(struct rte_crypto_op *cop, struct cn9k_sec_session *sess, struct
 	/* Prepare CPT instruction */
 	inst->w4.u64 = sess->inst.w4 | dlen;
 	inst->dptr = PLT_U64_CAST(hdr);
-	inst->rptr = PLT_U64_CAST(hdr);
 	inst->w7.u64 = sess->inst.w7;
 
 	return 0;
@@ -96,7 +95,7 @@ process_inb_sa(struct rte_crypto_op *cop, struct cn9k_sec_session *sess, struct
 
 	/* Prepare CPT instruction */
 	inst->w4.u64 = sess->inst.w4 | rte_pktmbuf_pkt_len(m_src);
-	inst->dptr = inst->rptr = rte_pktmbuf_mtod(m_src, uint64_t);
+	inst->dptr = rte_pktmbuf_mtod(m_src, uint64_t);
 	inst->w7.u64 = sess->inst.w7;
 }
 
 #endif /* __CN9K_IPSEC_LA_OPS_H__ */

From patchwork Wed Oct 19 14:15:03 2022
X-Patchwork-Submitter: Tejasree Kondoj
X-Patchwork-Id: 118575
X-Patchwork-Delegate: gakhil@marvell.com
From: Tejasree Kondoj
To: Akhil Goyal
CC: Anoob Joseph, Vidya Sagar Velumuri
Subject: [PATCH 03/13] crypto/cnxk: change capabilities as per firmware
Date: Wed, 19 Oct 2022 19:45:03 +0530
Message-ID: <20221019141513.1969052-4-ktejasree@marvell.com>
In-Reply-To: <20221019141513.1969052-1-ktejasree@marvell.com>
References: <20221019141513.1969052-1-ktejasree@marvell.com>

Changing CPT engine capabilities structure as per microcode.
Signed-off-by: Tejasree Kondoj --- drivers/common/cnxk/roc_mbox.h | 5 ++++- drivers/crypto/cnxk/cnxk_cryptodev_ops.c | 4 +--- 2 files changed, 5 insertions(+), 4 deletions(-) diff --git a/drivers/common/cnxk/roc_mbox.h b/drivers/common/cnxk/roc_mbox.h index 66a1be387d..9b57b934b1 100644 --- a/drivers/common/cnxk/roc_mbox.h +++ b/drivers/common/cnxk/roc_mbox.h @@ -1517,7 +1517,10 @@ union cpt_eng_caps { uint64_t __io kasumi : 1; uint64_t __io des : 1; uint64_t __io crc : 1; - uint64_t __io reserved_14_63 : 50; + uint64_t __io mmul : 1; + uint64_t __io reserved_15_33 : 19; + uint64_t __io pdcp_chain : 1; + uint64_t __io reserved_35_63 : 29; }; }; diff --git a/drivers/crypto/cnxk/cnxk_cryptodev_ops.c b/drivers/crypto/cnxk/cnxk_cryptodev_ops.c index a5233a942a..e0ceaa32d5 100644 --- a/drivers/crypto/cnxk/cnxk_cryptodev_ops.c +++ b/drivers/crypto/cnxk/cnxk_cryptodev_ops.c @@ -449,9 +449,7 @@ cnxk_sess_fill(struct roc_cpt *roc_cpt, struct rte_crypto_sym_xform *xform, bool pdcp_chain_supported = false; bool ciph_then_auth = false; - if (roc_cpt->cpt_revision == ROC_CPT_REVISION_ID_96XX_B0 || - roc_cpt->cpt_revision == ROC_CPT_REVISION_ID_96XX_C0 || - roc_cpt->cpt_revision == ROC_CPT_REVISION_ID_98XX) + if (roc_cpt->hw_caps[CPT_ENG_TYPE_SE].pdcp_chain) pdcp_chain_supported = true; if (xform == NULL) From patchwork Wed Oct 19 14:15:04 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tejasree Kondoj X-Patchwork-Id: 118576 X-Patchwork-Delegate: gakhil@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 83F49A06C8; Wed, 19 Oct 2022 16:15:39 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id A38A742BCF; Wed, 19 Oct 2022 16:15:30 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com 
(mx0b-0016f401.pphosted.com [67.231.156.173]) by mails.dpdk.org (Postfix) with ESMTP id 3C7E6410D1 for ; Wed, 19 Oct 2022 16:15:29 +0200 (CEST) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.17.1.5/8.17.1.5) with ESMTP id 29JC0JSg011929 for ; Wed, 19 Oct 2022 07:15:28 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0220; bh=37Om0gOYW4EfeRLpyQz8ZFI6Qj3Br8VItyl8eguG03Y=; b=KKf0SCV/srTDk4NRTFGdEzKOt8ibOPB/8X1uPm8mHFGjol+3hi+Qhm7b7J8m/22+ejY+ UcEvGUv5Kcs+UQxsuzr5UugMDtS9418vOvj0rz3LiKmPnz0cwWwpzY+uF1gPsLfAXqtM tzKu1F7qESQXvmYpUvrRCaKIe73EG8o40QHwTz8uOMO3jKcICuJxFfixBEjJHEPfrOEh BZ5OWGxF082odcfFpGcJcf3R+FzOllmS3zu2F/2H1mo4S3f+Zv2GXhBY9aoRRH8a9FJl UMgLSHy0K+X7bZ7Pj7UoCcrv4BW3GjwpO03IxdWqRKbTH6/th4IQ465PZUkZ9AV7QBi1 aQ== Received: from dc5-exch02.marvell.com ([199.233.59.182]) by mx0b-0016f401.pphosted.com (PPS) with ESMTPS id 3k7vcph6ba-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT) for ; Wed, 19 Oct 2022 07:15:28 -0700 Received: from DC5-EXCH01.marvell.com (10.69.176.38) by DC5-EXCH02.marvell.com (10.69.176.39) with Microsoft SMTP Server (TLS) id 15.0.1497.18; Wed, 19 Oct 2022 07:15:24 -0700 Received: from maili.marvell.com (10.69.176.80) by DC5-EXCH01.marvell.com (10.69.176.38) with Microsoft SMTP Server id 15.0.1497.2 via Frontend Transport; Wed, 19 Oct 2022 07:15:24 -0700 Received: from hyd1554.marvell.com (unknown [10.29.57.11]) by maili.marvell.com (Postfix) with ESMTP id 6582D3F705E; Wed, 19 Oct 2022 07:15:22 -0700 (PDT) From: Tejasree Kondoj To: Akhil Goyal CC: Anoob Joseph , Vidya Sagar Velumuri , Pavan Nikhilesh , "Shijith Thotton" , Subject: [PATCH 04/13] common/cnxk: support 103XX CPT Date: Wed, 19 Oct 2022 19:45:04 +0530 Message-ID: <20221019141513.1969052-5-ktejasree@marvell.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: 
<20221019141513.1969052-1-ktejasree@marvell.com> References: <20221019141513.1969052-1-ktejasree@marvell.com> MIME-Version: 1.0 X-Proofpoint-ORIG-GUID: th9oBVsPUI0iMaJvlFHh9D8mDRMsmFgh X-Proofpoint-GUID: th9oBVsPUI0iMaJvlFHh9D8mDRMsmFgh X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.205,Aquarius:18.0.895,Hydra:6.0.545,FMLib:17.11.122.1 definitions=2022-10-19_08,2022-10-19_03,2022-06-22_01 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Adding support for 103XX CPT. Signed-off-by: Tejasree Kondoj --- drivers/common/cnxk/hw/cpt.h | 26 +- drivers/common/cnxk/roc_se.h | 11 + drivers/crypto/cnxk/cn10k_cryptodev.c | 2 +- drivers/crypto/cnxk/cn10k_cryptodev_ops.c | 67 +- drivers/crypto/cnxk/cn10k_cryptodev_ops.h | 9 +- drivers/crypto/cnxk/cn9k_cryptodev_ops.c | 4 +- drivers/crypto/cnxk/cnxk_se.h | 1602 +++++++++++---------- drivers/crypto/cnxk/version.map | 3 +- drivers/event/cnxk/cn10k_eventdev.c | 13 +- 9 files changed, 926 insertions(+), 811 deletions(-) diff --git a/drivers/common/cnxk/hw/cpt.h b/drivers/common/cnxk/hw/cpt.h index 3c87a0d1e4..ff5aa46f64 100644 --- a/drivers/common/cnxk/hw/cpt.h +++ b/drivers/common/cnxk/hw/cpt.h @@ -157,6 +157,22 @@ union cpt_inst_w4 { } s; }; +union cpt_inst_w5 { + uint64_t u64; + struct { + uint64_t dptr : 60; + uint64_t gather_sz : 4; + } s; +}; + +union cpt_inst_w6 { + uint64_t u64; + struct { + uint64_t rptr : 60; + uint64_t scatter_sz : 4; + } s; +}; + union cpt_inst_w7 { uint64_t u64; struct { @@ -200,9 +216,15 @@ struct cpt_inst_s { union cpt_inst_w4 w4; - uint64_t dptr; + union { + union cpt_inst_w5 w5; + uint64_t dptr; + }; - uint64_t rptr; + union { + union cpt_inst_w6 w6; + uint64_t rptr; + }; union cpt_inst_w7 w7; }; diff --git a/drivers/common/cnxk/roc_se.h b/drivers/common/cnxk/roc_se.h index e70a197d4f..c357c19c0b 100644 --- 
a/drivers/common/cnxk/roc_se.h +++ b/drivers/common/cnxk/roc_se.h @@ -183,6 +183,17 @@ struct roc_se_sglist_comp { uint64_t ptr[4]; }; +struct roc_se_sg2list_comp { + union { + uint64_t len; + struct { + uint16_t len[3]; + uint16_t valid_segs; + } s; + } u; + uint64_t ptr[3]; +}; + struct roc_se_enc_context { uint64_t iv_source : 1; uint64_t aes_key : 2; diff --git a/drivers/crypto/cnxk/cn10k_cryptodev.c b/drivers/crypto/cnxk/cn10k_cryptodev.c index db11ac7444..52de9b9657 100644 --- a/drivers/crypto/cnxk/cn10k_cryptodev.c +++ b/drivers/crypto/cnxk/cn10k_cryptodev.c @@ -99,7 +99,7 @@ cn10k_cpt_pci_probe(struct rte_pci_driver *pci_drv __rte_unused, dev->driver_id = cn10k_cryptodev_driver_id; dev->feature_flags = cnxk_cpt_default_ff_get(); - cn10k_cpt_set_enqdeq_fns(dev); + cn10k_cpt_set_enqdeq_fns(dev, vf); cn10k_sec_ops_override(); rte_cryptodev_pmd_probing_finish(dev); diff --git a/drivers/crypto/cnxk/cn10k_cryptodev_ops.c b/drivers/crypto/cnxk/cn10k_cryptodev_ops.c index 2942617615..7dad370047 100644 --- a/drivers/crypto/cnxk/cn10k_cryptodev_ops.c +++ b/drivers/crypto/cnxk/cn10k_cryptodev_ops.c @@ -29,6 +29,7 @@ struct ops_burst { struct cn10k_sso_hws *ws; struct cnxk_cpt_qp *qp; uint16_t nb_ops; + bool is_sg_ver2; }; /* Holds information required to send vector of operations */ @@ -93,8 +94,8 @@ cpt_sec_inst_fill(struct cnxk_cpt_qp *qp, struct rte_crypto_op *op, } static inline int -cn10k_cpt_fill_inst(struct cnxk_cpt_qp *qp, struct rte_crypto_op *ops[], - struct cpt_inst_s inst[], struct cpt_inflight_req *infl_req) +cn10k_cpt_fill_inst(struct cnxk_cpt_qp *qp, struct rte_crypto_op *ops[], struct cpt_inst_s inst[], + struct cpt_inflight_req *infl_req, const bool is_sg_ver2) { struct cn10k_sec_session *sec_sess; struct rte_crypto_asym_op *asym_op; @@ -126,8 +127,7 @@ cn10k_cpt_fill_inst(struct cnxk_cpt_qp *qp, struct rte_crypto_op *ops[], w7 = sec_sess->inst.w7; } else if (op->sess_type == RTE_CRYPTO_OP_WITH_SESSION) { sess = 
CRYPTODEV_GET_SYM_SESS_PRIV(sym_op->session); - ret = cpt_sym_inst_fill(qp, op, sess, infl_req, - &inst[0]); + ret = cpt_sym_inst_fill(qp, op, sess, infl_req, &inst[0], is_sg_ver2); if (unlikely(ret)) return 0; w7 = sess->cpt_inst_w7; @@ -138,8 +138,7 @@ cn10k_cpt_fill_inst(struct cnxk_cpt_qp *qp, struct rte_crypto_op *ops[], return 0; } - ret = cpt_sym_inst_fill(qp, op, sess, infl_req, - &inst[0]); + ret = cpt_sym_inst_fill(qp, op, sess, infl_req, &inst[0], is_sg_ver2); if (unlikely(ret)) { sym_session_clear(op->sym->session); rte_mempool_put(qp->sess_mp, op->sym->session); @@ -177,7 +176,8 @@ cn10k_cpt_fill_inst(struct cnxk_cpt_qp *qp, struct rte_crypto_op *ops[], } static uint16_t -cn10k_cpt_enqueue_burst(void *qptr, struct rte_crypto_op **ops, uint16_t nb_ops) +cn10k_cpt_enqueue_burst(void *qptr, struct rte_crypto_op **ops, uint16_t nb_ops, + const bool is_sg_ver2) { uint64_t lmt_base, lmt_arg, io_addr; struct cpt_inflight_req *infl_req; @@ -222,7 +222,7 @@ cn10k_cpt_enqueue_burst(void *qptr, struct rte_crypto_op **ops, uint16_t nb_ops) infl_req = &pend_q->req_queue[head]; infl_req->op_flags = 0; - ret = cn10k_cpt_fill_inst(qp, ops + i, &inst[2 * i], infl_req); + ret = cn10k_cpt_fill_inst(qp, ops + i, &inst[2 * i], infl_req, is_sg_ver2); if (unlikely(ret != 1)) { plt_dp_err("Could not process op: %p", ops + i); if (i == 0) @@ -266,12 +266,22 @@ cn10k_cpt_enqueue_burst(void *qptr, struct rte_crypto_op **ops, uint16_t nb_ops) return count + i; } +static uint16_t +cn10k_cpt_sg_ver1_enqueue_burst(void *qptr, struct rte_crypto_op **ops, uint16_t nb_ops) +{ + return cn10k_cpt_enqueue_burst(qptr, ops, nb_ops, false); +} + +static uint16_t +cn10k_cpt_sg_ver2_enqueue_burst(void *qptr, struct rte_crypto_op **ops, uint16_t nb_ops) +{ + return cn10k_cpt_enqueue_burst(qptr, ops, nb_ops, true); +} + static int -cn10k_cpt_crypto_adapter_ev_mdata_set(struct rte_cryptodev *dev __rte_unused, - void *sess, +cn10k_cpt_crypto_adapter_ev_mdata_set(struct rte_cryptodev *dev 
__rte_unused, void *sess, enum rte_crypto_op_type op_type, - enum rte_crypto_op_sess_type sess_type, - void *mdata) + enum rte_crypto_op_sess_type sess_type, void *mdata) { union rte_event_crypto_metadata *ec_mdata = mdata; struct rte_event *rsp_info; @@ -324,8 +334,7 @@ cn10k_cpt_crypto_adapter_ev_mdata_set(struct rte_cryptodev *dev __rte_unused, } static inline int -cn10k_ca_meta_info_extract(struct rte_crypto_op *op, - struct cnxk_cpt_qp **qp, uint64_t *w2) +cn10k_ca_meta_info_extract(struct rte_crypto_op *op, struct cnxk_cpt_qp **qp, uint64_t *w2) { if (op->type == RTE_CRYPTO_OP_TYPE_SYMMETRIC) { if (op->sess_type == RTE_CRYPTO_OP_SECURITY_SESSION) { @@ -514,7 +523,7 @@ ca_lmtst_vec_submit(struct ops_burst *burst, struct vec_request vec_tbl[], uint1 infl_req = infl_reqs[i]; infl_req->op_flags = 0; - ret = cn10k_cpt_fill_inst(qp, &burst->op[i], inst, infl_req); + ret = cn10k_cpt_fill_inst(qp, &burst->op[i], inst, infl_req, burst->is_sg_ver2); if (unlikely(ret != 1)) { plt_cpt_dbg("Could not process op: %p", burst->op[i]); if (i != 0) @@ -633,7 +642,7 @@ ca_lmtst_burst_submit(struct ops_burst *burst) infl_req = infl_reqs[i]; infl_req->op_flags = 0; - ret = cn10k_cpt_fill_inst(qp, &burst->op[i], inst, infl_req); + ret = cn10k_cpt_fill_inst(qp, &burst->op[i], inst, infl_req, burst->is_sg_ver2); if (unlikely(ret != 1)) { plt_dp_dbg("Could not process op: %p", burst->op[i]); if (i != 0) @@ -686,8 +695,9 @@ ca_lmtst_burst_submit(struct ops_burst *burst) return i; } -uint16_t __rte_hot -cn10k_cpt_crypto_adapter_enqueue(void *ws, struct rte_event ev[], uint16_t nb_events) +static inline uint16_t __rte_hot +cn10k_cpt_crypto_adapter_enqueue(void *ws, struct rte_event ev[], uint16_t nb_events, + const bool is_sg_ver2) { uint16_t submitted, count = 0, vec_tbl_len = 0; struct vec_request vec_tbl[nb_events]; @@ -701,6 +711,7 @@ cn10k_cpt_crypto_adapter_enqueue(void *ws, struct rte_event ev[], uint16_t nb_ev burst.ws = ws; burst.qp = NULL; burst.nb_ops = 0; + burst.is_sg_ver2 
= is_sg_ver2; for (i = 0; i < nb_events; i++) { op = ev[i].event_ptr; @@ -762,6 +773,18 @@ cn10k_cpt_crypto_adapter_enqueue(void *ws, struct rte_event ev[], uint16_t nb_ev return count; } +uint16_t __rte_hot +cn10k_cpt_sg_ver1_crypto_adapter_enqueue(void *ws, struct rte_event ev[], uint16_t nb_events) +{ + return cn10k_cpt_crypto_adapter_enqueue(ws, ev, nb_events, false); +} + +uint16_t __rte_hot +cn10k_cpt_sg_ver2_crypto_adapter_enqueue(void *ws, struct rte_event ev[], uint16_t nb_events) +{ + return cn10k_cpt_crypto_adapter_enqueue(ws, ev, nb_events, true); +} + static inline void cn10k_cpt_sec_post_process(struct rte_crypto_op *cop, struct cpt_cn10k_res_s *res) { @@ -1012,9 +1035,13 @@ cn10k_cpt_dequeue_burst(void *qptr, struct rte_crypto_op **ops, uint16_t nb_ops) } void -cn10k_cpt_set_enqdeq_fns(struct rte_cryptodev *dev) +cn10k_cpt_set_enqdeq_fns(struct rte_cryptodev *dev, struct cnxk_cpt_vf *vf) { - dev->enqueue_burst = cn10k_cpt_enqueue_burst; + if (vf->cpt.cpt_revision > ROC_CPT_REVISION_ID_106XX) + dev->enqueue_burst = cn10k_cpt_sg_ver2_enqueue_burst; + else + dev->enqueue_burst = cn10k_cpt_sg_ver1_enqueue_burst; + dev->dequeue_burst = cn10k_cpt_dequeue_burst; rte_mb(); diff --git a/drivers/crypto/cnxk/cn10k_cryptodev_ops.h b/drivers/crypto/cnxk/cn10k_cryptodev_ops.h index 8104310c30..3d7c6d195a 100644 --- a/drivers/crypto/cnxk/cn10k_cryptodev_ops.h +++ b/drivers/crypto/cnxk/cn10k_cryptodev_ops.h @@ -9,12 +9,17 @@ #include #include +#include "cnxk_cryptodev.h" + extern struct rte_cryptodev_ops cn10k_cpt_ops; -void cn10k_cpt_set_enqdeq_fns(struct rte_cryptodev *dev); +void cn10k_cpt_set_enqdeq_fns(struct rte_cryptodev *dev, struct cnxk_cpt_vf *vf); __rte_internal -uint16_t __rte_hot cn10k_cpt_crypto_adapter_enqueue(void *ws, struct rte_event ev[], +uint16_t __rte_hot cn10k_cpt_sg_ver1_crypto_adapter_enqueue(void *ws, struct rte_event ev[], + uint16_t nb_events); +__rte_internal +uint16_t __rte_hot cn10k_cpt_sg_ver2_crypto_adapter_enqueue(void *ws, struct 
rte_event ev[], uint16_t nb_events); __rte_internal uintptr_t cn10k_cpt_crypto_adapter_dequeue(uintptr_t get_work1); diff --git a/drivers/crypto/cnxk/cn9k_cryptodev_ops.c b/drivers/crypto/cnxk/cn9k_cryptodev_ops.c index 289601330e..2a5c00eadd 100644 --- a/drivers/crypto/cnxk/cn9k_cryptodev_ops.c +++ b/drivers/crypto/cnxk/cn9k_cryptodev_ops.c @@ -91,7 +91,7 @@ cn9k_cpt_inst_prep(struct cnxk_cpt_qp *qp, struct rte_crypto_op *op, if (op->sess_type == RTE_CRYPTO_OP_WITH_SESSION) { sym_op = op->sym; sess = CRYPTODEV_GET_SYM_SESS_PRIV(sym_op->session); - ret = cpt_sym_inst_fill(qp, op, sess, infl_req, inst); + ret = cpt_sym_inst_fill(qp, op, sess, infl_req, inst, false); inst->w7.u64 = sess->cpt_inst_w7; } else if (op->sess_type == RTE_CRYPTO_OP_SECURITY_SESSION) ret = cn9k_cpt_sec_inst_fill(op, infl_req, inst); @@ -102,7 +102,7 @@ cn9k_cpt_inst_prep(struct cnxk_cpt_qp *qp, struct rte_crypto_op *op, return -1; } - ret = cpt_sym_inst_fill(qp, op, sess, infl_req, inst); + ret = cpt_sym_inst_fill(qp, op, sess, infl_req, inst, false); if (unlikely(ret)) { sym_session_clear(op->sym->session); rte_mempool_put(qp->sess_mp, op->sym->session); diff --git a/drivers/crypto/cnxk/cnxk_se.h b/drivers/crypto/cnxk/cnxk_se.h index c92e2cca2f..9ce75c07e0 100644 --- a/drivers/crypto/cnxk/cnxk_se.h +++ b/drivers/crypto/cnxk/cnxk_se.h @@ -9,12 +9,10 @@ #include "cnxk_cryptodev.h" #include "cnxk_cryptodev_ops.h" -#define SRC_IOV_SIZE \ - (sizeof(struct roc_se_iov_ptr) + \ - (sizeof(struct roc_se_buf_ptr) * ROC_SE_MAX_SG_CNT)) -#define DST_IOV_SIZE \ - (sizeof(struct roc_se_iov_ptr) + \ - (sizeof(struct roc_se_buf_ptr) * ROC_SE_MAX_SG_CNT)) +#define SRC_IOV_SIZE \ + (sizeof(struct roc_se_iov_ptr) + (sizeof(struct roc_se_buf_ptr) * ROC_SE_MAX_SG_CNT)) +#define DST_IOV_SIZE \ + (sizeof(struct roc_se_iov_ptr) + (sizeof(struct roc_se_buf_ptr) * ROC_SE_MAX_SG_CNT)) enum cpt_dp_thread_type { CPT_DP_THREAD_TYPE_FC_CHAIN = 0x1, @@ -319,9 +317,506 @@ fill_sg_comp_from_iov(struct roc_se_sglist_comp 
*list, uint32_t i, return (uint32_t)i; } +static __rte_always_inline uint32_t +fill_sg2_comp(struct roc_se_sg2list_comp *list, uint32_t i, phys_addr_t dma_addr, uint32_t size) +{ + struct roc_se_sg2list_comp *to = &list[i / 3]; + + to->u.s.len[i % 3] = (size); + to->ptr[i % 3] = (dma_addr); + to->u.s.valid_segs = (i % 3) + 1; + i++; + return i; +} + +static __rte_always_inline uint32_t +fill_sg2_comp_from_buf(struct roc_se_sg2list_comp *list, uint32_t i, struct roc_se_buf_ptr *from) +{ + struct roc_se_sg2list_comp *to = &list[i / 3]; + + to->u.s.len[i % 3] = (from->size); + to->ptr[i % 3] = ((uint64_t)from->vaddr); + to->u.s.valid_segs = (i % 3) + 1; + i++; + return i; +} + +static __rte_always_inline uint32_t +fill_sg2_comp_from_buf_min(struct roc_se_sg2list_comp *list, uint32_t i, + struct roc_se_buf_ptr *from, uint32_t *psize) +{ + struct roc_se_sg2list_comp *to = &list[i / 3]; + uint32_t size = *psize; + uint32_t e_len; + + e_len = (size > from->size) ? from->size : size; + to->u.s.len[i % 3] = (e_len); + to->ptr[i % 3] = ((uint64_t)from->vaddr); + to->u.s.valid_segs = (i % 3) + 1; + *psize -= e_len; + i++; + return i; +} + +static __rte_always_inline uint32_t +fill_sg2_comp_from_iov(struct roc_se_sg2list_comp *list, uint32_t i, struct roc_se_iov_ptr *from, + uint32_t from_offset, uint32_t *psize, struct roc_se_buf_ptr *extra_buf, + uint32_t extra_offset) +{ + int32_t j; + uint32_t extra_len = extra_buf ? extra_buf->size : 0; + uint32_t size = *psize; + + for (j = 0; (j < from->buf_cnt) && size; j++) { + struct roc_se_sg2list_comp *to = &list[i / 3]; + uint32_t buf_sz = from->bufs[j].size; + void *vaddr = from->bufs[j].vaddr; + uint64_t e_vaddr; + uint32_t e_len; + + if (unlikely(from_offset)) { + if (from_offset >= buf_sz) { + from_offset -= buf_sz; + continue; + } + e_vaddr = (uint64_t)vaddr + from_offset; + e_len = (size > (buf_sz - from_offset)) ? 
(buf_sz - from_offset) : size; + from_offset = 0; + } else { + e_vaddr = (uint64_t)vaddr; + e_len = (size > buf_sz) ? buf_sz : size; + } + + to->u.s.len[i % 3] = (e_len); + to->ptr[i % 3] = (e_vaddr); + to->u.s.valid_segs = (i % 3) + 1; + + if (extra_len && (e_len >= extra_offset)) { + /* Break the data at given offset */ + uint32_t next_len = e_len - extra_offset; + uint64_t next_vaddr = e_vaddr + extra_offset; + + if (!extra_offset) { + i--; + } else { + e_len = extra_offset; + size -= e_len; + to->u.s.len[i % 3] = (e_len); + } + + extra_len = RTE_MIN(extra_len, size); + /* Insert extra data ptr */ + if (extra_len) { + i++; + to = &list[i / 3]; + to->u.s.len[i % 3] = (extra_len); + to->ptr[i % 3] = ((uint64_t)extra_buf->vaddr); + to->u.s.valid_segs = (i % 3) + 1; + size -= extra_len; + } + + next_len = RTE_MIN(next_len, size); + /* insert the rest of the data */ + if (next_len) { + i++; + to = &list[i / 3]; + to->u.s.len[i % 3] = (next_len); + to->ptr[i % 3] = (next_vaddr); + to->u.s.valid_segs = (i % 3) + 1; + size -= next_len; + } + extra_len = 0; + + } else { + size -= e_len; + } + if (extra_offset) + extra_offset -= size; + i++; + } + + *psize = size; + return (uint32_t)i; +} + +static __rte_always_inline int +sg_inst_prep(struct roc_se_fc_params *params, struct cpt_inst_s *inst, uint64_t offset_ctrl, + uint8_t *iv_s, int iv_len, uint8_t pack_iv, uint8_t pdcp_alg_type, int32_t inputlen, + int32_t outputlen, uint32_t passthrough_len, uint32_t req_flags, int pdcp_flag, + int decrypt) +{ + void *m_vaddr = params->meta_buf.vaddr; + struct roc_se_sglist_comp *gather_comp; + struct roc_se_sglist_comp *scatter_comp; + struct roc_se_buf_ptr *aad_buf = NULL; + uint32_t mac_len = 0, aad_len = 0; + struct roc_se_ctx *se_ctx; + uint32_t i, g_size_bytes; + uint64_t *offset_vaddr; + uint32_t s_size_bytes; + uint8_t *in_buffer; + int zsk_flags; + uint32_t size; + uint8_t *iv_d; + + se_ctx = params->ctx; + zsk_flags = se_ctx->zsk_flags; + mac_len = se_ctx->mac_len; + + if 
(unlikely(req_flags & ROC_SE_VALID_AAD_BUF)) { + /* We don't support both AAD and auth data separately */ + aad_len = params->aad_buf.size; + aad_buf = &params->aad_buf; + } + + /* save space for iv */ + offset_vaddr = m_vaddr; + + m_vaddr = (uint8_t *)m_vaddr + ROC_SE_OFF_CTRL_LEN + RTE_ALIGN_CEIL(iv_len, 8); + + inst->w4.s.opcode_major |= (uint64_t)ROC_SE_DMA_MODE; + + /* iv offset is 0 */ + *offset_vaddr = offset_ctrl; + + iv_d = ((uint8_t *)offset_vaddr + ROC_SE_OFF_CTRL_LEN); + + if (pdcp_flag) { + if (likely(iv_len)) + pdcp_iv_copy(iv_d, iv_s, pdcp_alg_type, pack_iv); + } else { + if (likely(iv_len)) + memcpy(iv_d, iv_s, iv_len); + } + + /* DPTR has SG list */ + + /* TODO Add error check if space will be sufficient */ + gather_comp = (struct roc_se_sglist_comp *)((uint8_t *)m_vaddr + 8); + + /* + * Input Gather List + */ + i = 0; + + /* Offset control word followed by iv */ + + i = fill_sg_comp(gather_comp, i, (uint64_t)offset_vaddr, ROC_SE_OFF_CTRL_LEN + iv_len); + + /* Add input data */ + if (decrypt && (req_flags & ROC_SE_VALID_MAC_BUF)) { + size = inputlen - iv_len - mac_len; + if (likely(size)) { + uint32_t aad_offset = aad_len ? passthrough_len : 0; + /* input data only */ + if (unlikely(req_flags & ROC_SE_SINGLE_BUF_INPLACE)) { + i = fill_sg_comp_from_buf_min(gather_comp, i, params->bufs, &size); + } else { + i = fill_sg_comp_from_iov(gather_comp, i, params->src_iov, 0, &size, + aad_buf, aad_offset); + } + if (unlikely(size)) { + plt_dp_err("Insufficient buffer" + " space, size %d needed", + size); + return -1; + } + } + + if (mac_len) + i = fill_sg_comp_from_buf(gather_comp, i, &params->mac_buf); + } else { + /* input data */ + size = inputlen - iv_len; + if (size) { + uint32_t aad_offset = aad_len ?
passthrough_len : 0; + if (unlikely(req_flags & ROC_SE_SINGLE_BUF_INPLACE)) { + i = fill_sg_comp_from_buf_min(gather_comp, i, params->bufs, &size); + } else { + i = fill_sg_comp_from_iov(gather_comp, i, params->src_iov, 0, &size, + aad_buf, aad_offset); + } + if (unlikely(size)) { + plt_dp_err("Insufficient buffer space," + " size %d needed", + size); + return -1; + } + } + } + + in_buffer = m_vaddr; + + ((uint16_t *)in_buffer)[0] = 0; + ((uint16_t *)in_buffer)[1] = 0; + ((uint16_t *)in_buffer)[2] = rte_cpu_to_be_16(i); + + g_size_bytes = ((i + 3) / 4) * sizeof(struct roc_se_sglist_comp); + /* + * Output Scatter List + */ + + i = 0; + scatter_comp = (struct roc_se_sglist_comp *)((uint8_t *)gather_comp + g_size_bytes); + + if (zsk_flags == 0x1) { + /* IV in SLIST only for EEA3 & UEA2 or for F8 */ + iv_len = 0; + } + + if (iv_len) { + i = fill_sg_comp(scatter_comp, i, (uint64_t)offset_vaddr + ROC_SE_OFF_CTRL_LEN, + iv_len); + } + + /* Add output data */ + if ((!decrypt) && (req_flags & ROC_SE_VALID_MAC_BUF)) { + size = outputlen - iv_len - mac_len; + if (size) { + + uint32_t aad_offset = aad_len ? passthrough_len : 0; + + if (unlikely(req_flags & ROC_SE_SINGLE_BUF_INPLACE)) { + i = fill_sg_comp_from_buf_min(scatter_comp, i, params->bufs, &size); + } else { + i = fill_sg_comp_from_iov(scatter_comp, i, params->dst_iov, 0, + &size, aad_buf, aad_offset); + } + if (unlikely(size)) { + plt_dp_err("Insufficient buffer space," + " size %d needed", + size); + return -1; + } + } + + /* mac data */ + if (mac_len) + i = fill_sg_comp_from_buf(scatter_comp, i, &params->mac_buf); + } else { + /* Output including mac */ + size = outputlen - iv_len; + if (size) { + uint32_t aad_offset = aad_len ?
passthrough_len : 0; + + if (unlikely(req_flags & ROC_SE_SINGLE_BUF_INPLACE)) { + i = fill_sg_comp_from_buf_min(scatter_comp, i, params->bufs, &size); + } else { + i = fill_sg_comp_from_iov(scatter_comp, i, params->dst_iov, 0, + &size, aad_buf, aad_offset); + } + + if (unlikely(size)) { + plt_dp_err("Insufficient buffer space," + " size %d needed", + size); + return -1; + } + } + } + ((uint16_t *)in_buffer)[3] = rte_cpu_to_be_16(i); + s_size_bytes = ((i + 3) / 4) * sizeof(struct roc_se_sglist_comp); + + size = g_size_bytes + s_size_bytes + ROC_SE_SG_LIST_HDR_SIZE; + + /* This is DPTR len in case of SG mode */ + inst->w4.s.dlen = size; + + inst->dptr = (uint64_t)in_buffer; + return 0; +} + +static __rte_always_inline int +sg2_inst_prep(struct roc_se_fc_params *params, struct cpt_inst_s *inst, uint64_t offset_ctrl, + uint8_t *iv_s, int iv_len, uint8_t pack_iv, uint8_t pdcp_alg_type, int32_t inputlen, + int32_t outputlen, uint32_t passthrough_len, uint32_t req_flags, int pdcp_flag, + int decrypt) +{ + void *m_vaddr = params->meta_buf.vaddr; + uint32_t i, g_size_bytes; + struct roc_se_sg2list_comp *gather_comp; + struct roc_se_sg2list_comp *scatter_comp; + struct roc_se_buf_ptr *aad_buf = NULL; + struct roc_se_ctx *se_ctx; + uint64_t *offset_vaddr; + uint32_t mac_len = 0, aad_len = 0; + int zsk_flags; + uint32_t size; + union cpt_inst_w5 cpt_inst_w5; + union cpt_inst_w6 cpt_inst_w6; + uint8_t *iv_d; + + se_ctx = params->ctx; + zsk_flags = se_ctx->zsk_flags; + mac_len = se_ctx->mac_len; + + if (unlikely(req_flags & ROC_SE_VALID_AAD_BUF)) { + /* We don't support both AAD and auth data separately */ + aad_len = params->aad_buf.size; + aad_buf = &params->aad_buf; + } + + /* save space for iv */ + offset_vaddr = m_vaddr; + + m_vaddr = (uint8_t *)m_vaddr + ROC_SE_OFF_CTRL_LEN + RTE_ALIGN_CEIL(iv_len, 8); + + inst->w4.s.opcode_major |= (uint64_t)ROC_SE_DMA_MODE; + + /* iv offset is 0 */ + *offset_vaddr = offset_ctrl; + + iv_d = ((uint8_t *)offset_vaddr + ROC_SE_OFF_CTRL_LEN); +
if (pdcp_flag) { + if (likely(iv_len)) + pdcp_iv_copy(iv_d, iv_s, pdcp_alg_type, pack_iv); + } else { + if (likely(iv_len)) + memcpy(iv_d, iv_s, iv_len); + } + + /* DPTR has SG list */ + + /* TODO Add error check if space will be sufficient */ + gather_comp = (struct roc_se_sg2list_comp *)((uint8_t *)m_vaddr); + + /* + * Input Gather List + */ + i = 0; + + /* Offset control word followed by iv */ + + i = fill_sg2_comp(gather_comp, i, (uint64_t)offset_vaddr, ROC_SE_OFF_CTRL_LEN + iv_len); + + /* Add input data */ + if (decrypt && (req_flags & ROC_SE_VALID_MAC_BUF)) { + size = inputlen - iv_len - mac_len; + if (size) { + /* input data only */ + if (unlikely(req_flags & ROC_SE_SINGLE_BUF_INPLACE)) { + i = fill_sg2_comp_from_buf_min(gather_comp, i, params->bufs, &size); + } else { + uint32_t aad_offset = aad_len ? passthrough_len : 0; + + i = fill_sg2_comp_from_iov(gather_comp, i, params->src_iov, 0, + &size, aad_buf, aad_offset); + } + if (unlikely(size)) { + plt_dp_err("Insufficient buffer" + " space, size %d needed", + size); + return -1; + } + } + + /* mac data */ + if (mac_len) + i = fill_sg2_comp_from_buf(gather_comp, i, &params->mac_buf); + } else { + /* input data */ + size = inputlen - iv_len; + if (size) { + uint32_t aad_offset = aad_len ?
passthrough_len : 0; + if (unlikely(req_flags & ROC_SE_SINGLE_BUF_INPLACE)) { + i = fill_sg2_comp_from_buf_min(gather_comp, i, params->bufs, &size); + } else { + i = fill_sg2_comp_from_iov(gather_comp, i, params->src_iov, 0, + &size, aad_buf, aad_offset); + } + if (unlikely(size)) { + plt_dp_err("Insufficient buffer space," + " size %d needed", + size); + return -1; + } + } + } + + cpt_inst_w5.s.gather_sz = ((i + 2) / 3); + + g_size_bytes = ((i + 2) / 3) * sizeof(struct roc_se_sg2list_comp); + /* + * Output Scatter List + */ + + i = 0; + scatter_comp = (struct roc_se_sg2list_comp *)((uint8_t *)gather_comp + g_size_bytes); + + if (zsk_flags == 0x1) { + /* IV in SLIST only for EEA3 & UEA2 or for F8 */ + iv_len = 0; + } + + if (iv_len) { + i = fill_sg2_comp(scatter_comp, i, (uint64_t)offset_vaddr + ROC_SE_OFF_CTRL_LEN, + iv_len); + } + + /* Add output data */ + if ((!decrypt) && (req_flags & ROC_SE_VALID_MAC_BUF)) { + size = outputlen - iv_len - mac_len; + if (size) { + + uint32_t aad_offset = aad_len ? passthrough_len : 0; + + if (unlikely(req_flags & ROC_SE_SINGLE_BUF_INPLACE)) { + i = fill_sg2_comp_from_buf_min(scatter_comp, i, params->bufs, + &size); + } else { + i = fill_sg2_comp_from_iov(scatter_comp, i, params->dst_iov, 0, + &size, aad_buf, aad_offset); + } + if (unlikely(size)) { + plt_dp_err("Insufficient buffer space," + " size %d needed", + size); + return -1; + } + } + + /* mac data */ + if (mac_len) + i = fill_sg2_comp_from_buf(scatter_comp, i, &params->mac_buf); + } else { + /* Output including mac */ + size = outputlen - iv_len; + if (size) { + uint32_t aad_offset = aad_len ?
passthrough_len : 0; + + if (unlikely(req_flags & ROC_SE_SINGLE_BUF_INPLACE)) { + i = fill_sg2_comp_from_buf_min(scatter_comp, i, params->bufs, + &size); + } else { + i = fill_sg2_comp_from_iov(scatter_comp, i, params->dst_iov, 0, + &size, aad_buf, aad_offset); + } + + if (unlikely(size)) { + plt_dp_err("Insufficient buffer space," + " size %d needed", + size); + return -1; + } + } + } + + cpt_inst_w6.s.scatter_sz = ((i + 2) / 3); + + /* This is DPTR len in case of SG mode */ + inst->w4.s.dlen = inputlen + ROC_SE_OFF_CTRL_LEN; + + cpt_inst_w5.s.dptr = (uint64_t)gather_comp; + cpt_inst_w6.s.rptr = (uint64_t)scatter_comp; + + inst->w5.u64 = cpt_inst_w5.u64; + inst->w6.u64 = cpt_inst_w6.u64; + return 0; +} + static __rte_always_inline int -cpt_digest_gen_prep(uint32_t flags, uint64_t d_lens, - struct roc_se_fc_params *params, struct cpt_inst_s *inst) +cpt_digest_gen_sg_ver1_prep(uint32_t flags, uint64_t d_lens, struct roc_se_fc_params *params, + struct cpt_inst_s *inst) { void *m_vaddr = params->meta_buf.vaddr; uint32_t size, i; @@ -449,23 +944,145 @@ cpt_digest_gen_prep(uint32_t flags, uint64_t d_lens, return 0; } +static __rte_always_inline int +cpt_digest_gen_sg_ver2_prep(uint32_t flags, uint64_t d_lens, struct roc_se_fc_params *params, + struct cpt_inst_s *inst) +{ + void *m_vaddr = params->meta_buf.vaddr; + uint32_t size, i; + uint16_t data_len, mac_len, key_len; + roc_se_auth_type hash_type; + struct roc_se_ctx *ctx; + struct roc_se_sg2list_comp *gather_comp; + struct roc_se_sg2list_comp *scatter_comp; + union cpt_inst_w5 cpt_inst_w5; + union cpt_inst_w6 cpt_inst_w6; + uint32_t g_size_bytes; + union cpt_inst_w4 cpt_inst_w4; + + ctx = params->ctx; + + hash_type = ctx->hash_type; + mac_len = ctx->mac_len; + key_len = ctx->auth_key_len; + data_len = ROC_SE_AUTH_DLEN(d_lens); + + /*GP op header */ + cpt_inst_w4.s.opcode_minor = 0; + cpt_inst_w4.s.param2 = ((uint16_t)hash_type << 8); + if (ctx->hmac) { + cpt_inst_w4.s.opcode_major = ROC_SE_MAJOR_OP_HMAC; + 
cpt_inst_w4.s.param1 = key_len; + cpt_inst_w4.s.dlen = data_len + RTE_ALIGN_CEIL(key_len, 8); + } else { + cpt_inst_w4.s.opcode_major = ROC_SE_MAJOR_OP_HASH; + cpt_inst_w4.s.param1 = 0; + cpt_inst_w4.s.dlen = data_len; + } + + /* Null auth only case enters the if */ + if (unlikely(!hash_type && !ctx->enc_cipher)) { + cpt_inst_w4.s.opcode_major = ROC_SE_MAJOR_OP_MISC; + /* Minor op is passthrough */ + cpt_inst_w4.s.opcode_minor = 0x03; + /* Send out completion code only */ + cpt_inst_w4.s.param2 = 0x1; + } + + /* DPTR has SG list */ + + /* TODO Add error check if space will be sufficient */ + gather_comp = (struct roc_se_sg2list_comp *)((uint8_t *)m_vaddr + 0); + + /* + * Input gather list + */ + + i = 0; + + if (ctx->hmac) { + uint64_t k_vaddr = (uint64_t)ctx->auth_key; + /* Key */ + i = fill_sg2_comp(gather_comp, i, k_vaddr, RTE_ALIGN_CEIL(key_len, 8)); + } + + /* input data */ + size = data_len; + if (size) { + i = fill_sg2_comp_from_iov(gather_comp, i, params->src_iov, 0, &size, NULL, 0); + if (unlikely(size)) { + plt_dp_err("Insufficient dst IOV size, short by %dB", size); + return -1; + } + } else { + /* + * Looks like we need to support zero data + * gather ptr in case of hash & hmac + */ + i++; + } + cpt_inst_w5.s.gather_sz = ((i + 2) / 3); + + g_size_bytes = ((i + 2) / 3) * sizeof(struct roc_se_sg2list_comp); + + /* + * Output Scatter list + */ + + i = 0; + scatter_comp = (struct roc_se_sg2list_comp *)((uint8_t *)gather_comp + g_size_bytes); + + if (flags & ROC_SE_VALID_MAC_BUF) { + if (unlikely(params->mac_buf.size < mac_len)) { + plt_dp_err("Insufficient MAC size"); + return -1; + } + + size = mac_len; + i = fill_sg2_comp_from_buf_min(scatter_comp, i, &params->mac_buf, &size); + } else { + size = mac_len; + i = fill_sg2_comp_from_iov(scatter_comp, i, params->src_iov, data_len, &size, NULL, + 0); + if (unlikely(size)) { + plt_dp_err("Insufficient dst IOV size, short by %dB", size); + return -1; + } + } + + cpt_inst_w6.s.scatter_sz = ((i + 2) / 3); + +
cpt_inst_w5.s.dptr = (uint64_t)gather_comp; + cpt_inst_w6.s.rptr = (uint64_t)scatter_comp; + + inst->w5.u64 = cpt_inst_w5.u64; + inst->w6.u64 = cpt_inst_w6.u64; + + inst->w4.u64 = cpt_inst_w4.u64; + + return 0; +} + static __rte_always_inline int cpt_enc_hmac_prep(uint32_t flags, uint64_t d_offs, uint64_t d_lens, - struct roc_se_fc_params *fc_params, struct cpt_inst_s *inst) + struct roc_se_fc_params *fc_params, struct cpt_inst_s *inst, + const bool is_sg_ver2) { uint32_t iv_offset = 0; int32_t inputlen, outputlen, enc_dlen, auth_dlen; struct roc_se_ctx *se_ctx; uint32_t cipher_type, hash_type; - uint32_t mac_len, size; + uint32_t mac_len; uint8_t iv_len = 16; - struct roc_se_buf_ptr *aad_buf = NULL; uint32_t encr_offset, auth_offset; + uint64_t offset_ctrl; uint32_t encr_data_len, auth_data_len, aad_len = 0; uint32_t passthrough_len = 0; union cpt_inst_w4 cpt_inst_w4; void *offset_vaddr; uint8_t op_minor; + uint8_t *src = NULL; + int ret; encr_offset = ROC_SE_ENCR_OFFSET(d_offs); auth_offset = ROC_SE_AUTH_OFFSET(d_offs); @@ -476,7 +1093,6 @@ cpt_enc_hmac_prep(uint32_t flags, uint64_t d_offs, uint64_t d_lens, auth_data_len = 0; auth_offset = 0; aad_len = fc_params->aad_buf.size; - aad_buf = &fc_params->aad_buf; } se_ctx = fc_params->ctx; @@ -550,6 +1166,17 @@ cpt_enc_hmac_prep(uint32_t flags, uint64_t d_offs, uint64_t d_lens, cpt_inst_w4.s.param1 = encr_data_len; cpt_inst_w4.s.param2 = auth_data_len; + if (unlikely((encr_offset >> 16) || (iv_offset >> 8) || (auth_offset >> 8))) { + plt_dp_err("Offset not supported"); + plt_dp_err("enc_offset: %d", encr_offset); + plt_dp_err("iv_offset : %d", iv_offset); + plt_dp_err("auth_offset: %d", auth_offset); + return -1; + } + + offset_ctrl = rte_cpu_to_be_64(((uint64_t)encr_offset << 16) | ((uint64_t)iv_offset << 8) | + ((uint64_t)auth_offset)); + /* * In cn9k, cn10k since we have a limitation of * IV & Offset control word not part of instruction @@ -562,8 +1189,11 @@ cpt_enc_hmac_prep(uint32_t flags, uint64_t d_offs, 
uint64_t d_lens, /* Use Direct mode */ - offset_vaddr = - (uint8_t *)dm_vaddr - ROC_SE_OFF_CTRL_LEN - iv_len; + offset_vaddr = (uint8_t *)dm_vaddr - ROC_SE_OFF_CTRL_LEN - iv_len; + + *(uint64_t *)offset_vaddr = + rte_cpu_to_be_64(((uint64_t)encr_offset << 16) | + ((uint64_t)iv_offset << 8) | ((uint64_t)auth_offset)); /* DPTR */ inst->dptr = (uint64_t)offset_vaddr; @@ -571,199 +1201,58 @@ cpt_enc_hmac_prep(uint32_t flags, uint64_t d_offs, uint64_t d_lens, /* RPTR should just exclude offset control word */ inst->rptr = (uint64_t)dm_vaddr - iv_len; - cpt_inst_w4.s.dlen = inputlen + ROC_SE_OFF_CTRL_LEN; - - if (likely(iv_len)) { - uint64_t *dest = (uint64_t *)((uint8_t *)offset_vaddr + - ROC_SE_OFF_CTRL_LEN); - uint64_t *src = fc_params->iv_buf; - dest[0] = src[0]; - dest[1] = src[1]; - } - - } else { - void *m_vaddr = fc_params->meta_buf.vaddr; - uint32_t i, g_size_bytes, s_size_bytes; - struct roc_se_sglist_comp *gather_comp; - struct roc_se_sglist_comp *scatter_comp; - uint8_t *in_buffer; - - /* This falls under strict SG mode */ - offset_vaddr = m_vaddr; - size = ROC_SE_OFF_CTRL_LEN + iv_len; - - m_vaddr = (uint8_t *)m_vaddr + size; - - cpt_inst_w4.s.opcode_major |= (uint64_t)ROC_SE_DMA_MODE; - - if (likely(iv_len)) { - uint64_t *dest = (uint64_t *)((uint8_t *)offset_vaddr + - ROC_SE_OFF_CTRL_LEN); - uint64_t *src = fc_params->iv_buf; - dest[0] = src[0]; - dest[1] = src[1]; - } - - /* DPTR has SG list */ - in_buffer = m_vaddr; - - ((uint16_t *)in_buffer)[0] = 0; - ((uint16_t *)in_buffer)[1] = 0; - - /* TODO Add error check if space will be sufficient */ - gather_comp = - (struct roc_se_sglist_comp *)((uint8_t *)m_vaddr + 8); - - /* - * Input Gather List - */ - - i = 0; - - /* Offset control word that includes iv */ - i = fill_sg_comp(gather_comp, i, (uint64_t)offset_vaddr, - ROC_SE_OFF_CTRL_LEN + iv_len); - - /* Add input data */ - size = inputlen - iv_len; - if (likely(size)) { - uint32_t aad_offset = aad_len ? 
passthrough_len : 0; - - if (unlikely(flags & ROC_SE_SINGLE_BUF_INPLACE)) { - i = fill_sg_comp_from_buf_min( - gather_comp, i, fc_params->bufs, &size); - } else { - i = fill_sg_comp_from_iov( - gather_comp, i, fc_params->src_iov, 0, - &size, aad_buf, aad_offset); - } - - if (unlikely(size)) { - plt_dp_err("Insufficient buffer space," - " size %d needed", - size); - return -1; - } - } - ((uint16_t *)in_buffer)[2] = rte_cpu_to_be_16(i); - g_size_bytes = - ((i + 3) / 4) * sizeof(struct roc_se_sglist_comp); - - /* - * Output Scatter list - */ - i = 0; - scatter_comp = - (struct roc_se_sglist_comp *)((uint8_t *)gather_comp + - g_size_bytes); - - /* Add IV */ - if (likely(iv_len)) { - i = fill_sg_comp(scatter_comp, i, - (uint64_t)offset_vaddr + - ROC_SE_OFF_CTRL_LEN, - iv_len); - } - - /* output data or output data + digest*/ - if (unlikely(flags & ROC_SE_VALID_MAC_BUF)) { - size = outputlen - iv_len - mac_len; - if (size) { - uint32_t aad_offset = - aad_len ? passthrough_len : 0; - - if (unlikely(flags & - ROC_SE_SINGLE_BUF_INPLACE)) { - i = fill_sg_comp_from_buf_min( - scatter_comp, i, - fc_params->bufs, &size); - } else { - i = fill_sg_comp_from_iov( - scatter_comp, i, - fc_params->dst_iov, 0, &size, - aad_buf, aad_offset); - } - if (unlikely(size)) { - plt_dp_err("Insufficient buffer" - " space, size %d needed", - size); - return -1; - } - } + cpt_inst_w4.s.dlen = inputlen + ROC_SE_OFF_CTRL_LEN; - /* Digest buffer */ - i = fill_sg_comp_from_buf(scatter_comp, i, &fc_params->mac_buf); - } else { - /* Output including mac */ - size = outputlen - iv_len; - if (likely(size)) { - uint32_t aad_offset = - aad_len ? 
passthrough_len : 0; - - if (unlikely(flags & - ROC_SE_SINGLE_BUF_INPLACE)) { - i = fill_sg_comp_from_buf_min( - scatter_comp, i, - fc_params->bufs, &size); - } else { - i = fill_sg_comp_from_iov( - scatter_comp, i, - fc_params->dst_iov, 0, &size, - aad_buf, aad_offset); - } - if (unlikely(size)) { - plt_dp_err("Insufficient buffer" - " space, size %d needed", - size); - return -1; - } - } + if (likely(iv_len)) { + uint64_t *dest = + (uint64_t *)((uint8_t *)offset_vaddr + ROC_SE_OFF_CTRL_LEN); + uint64_t *src = fc_params->iv_buf; + dest[0] = src[0]; + dest[1] = src[1]; } - ((uint16_t *)in_buffer)[3] = rte_cpu_to_be_16(i); - s_size_bytes = - ((i + 3) / 4) * sizeof(struct roc_se_sglist_comp); - size = g_size_bytes + s_size_bytes + ROC_SE_SG_LIST_HDR_SIZE; + inst->w4.u64 = cpt_inst_w4.u64; + } else { + if (likely(iv_len)) + src = fc_params->iv_buf; - /* This is DPTR len in case of SG mode */ - cpt_inst_w4.s.dlen = size; + inst->w4.u64 = cpt_inst_w4.u64; - inst->dptr = (uint64_t)in_buffer; - } + if (is_sg_ver2) + ret = sg2_inst_prep(fc_params, inst, offset_ctrl, src, iv_len, 0, 0, + inputlen, outputlen, passthrough_len, flags, 0, 0); + else + ret = sg_inst_prep(fc_params, inst, offset_ctrl, src, iv_len, 0, 0, + inputlen, outputlen, passthrough_len, flags, 0, 0); - if (unlikely((encr_offset >> 16) || (iv_offset >> 8) || - (auth_offset >> 8))) { - plt_dp_err("Offset not supported"); - plt_dp_err("enc_offset: %d", encr_offset); - plt_dp_err("iv_offset : %d", iv_offset); - plt_dp_err("auth_offset: %d", auth_offset); - return -1; + if (unlikely(ret)) { + plt_dp_err("sg prep failed"); + return -1; + } } - *(uint64_t *)offset_vaddr = rte_cpu_to_be_64( - ((uint64_t)encr_offset << 16) | ((uint64_t)iv_offset << 8) | - ((uint64_t)auth_offset)); - - inst->w4.u64 = cpt_inst_w4.u64; return 0; } static __rte_always_inline int cpt_dec_hmac_prep(uint32_t flags, uint64_t d_offs, uint64_t d_lens, - struct roc_se_fc_params *fc_params, struct cpt_inst_s *inst) + struct roc_se_fc_params 
*fc_params, struct cpt_inst_s *inst, + const bool is_sg_ver2) { - uint32_t iv_offset = 0, size; + uint32_t iv_offset = 0; int32_t inputlen, outputlen, enc_dlen, auth_dlen; struct roc_se_ctx *se_ctx; int32_t hash_type, mac_len; uint8_t iv_len = 16; - struct roc_se_buf_ptr *aad_buf = NULL; uint32_t encr_offset, auth_offset; uint32_t encr_data_len, auth_data_len, aad_len = 0; uint32_t passthrough_len = 0; union cpt_inst_w4 cpt_inst_w4; void *offset_vaddr; uint8_t op_minor; + uint64_t offset_ctrl; + uint8_t *src = NULL; + int ret; encr_offset = ROC_SE_ENCR_OFFSET(d_offs); auth_offset = ROC_SE_AUTH_OFFSET(d_offs); @@ -775,7 +1264,6 @@ cpt_dec_hmac_prep(uint32_t flags, uint64_t d_offs, uint64_t d_lens, auth_data_len = 0; auth_offset = 0; aad_len = fc_params->aad_buf.size; - aad_buf = &fc_params->aad_buf; } se_ctx = fc_params->ctx; @@ -837,20 +1325,34 @@ cpt_dec_hmac_prep(uint32_t flags, uint64_t d_offs, uint64_t d_lens, cpt_inst_w4.s.param1 = encr_data_len; cpt_inst_w4.s.param2 = auth_data_len; + if (unlikely((encr_offset >> 16) || (iv_offset >> 8) || (auth_offset >> 8))) { + plt_dp_err("Offset not supported"); + plt_dp_err("enc_offset: %d", encr_offset); + plt_dp_err("iv_offset : %d", iv_offset); + plt_dp_err("auth_offset: %d", auth_offset); + return -1; + } + + offset_ctrl = rte_cpu_to_be_64(((uint64_t)encr_offset << 16) | ((uint64_t)iv_offset << 8) | + ((uint64_t)auth_offset)); + /* * In cn9k, cn10k since we have a limitation of * IV & Offset control word not part of instruction * and need to be part of Data Buffer, we check if * head room is there and then only do the Direct mode processing */ - if (likely((flags & ROC_SE_SINGLE_BUF_INPLACE) && - (flags & ROC_SE_SINGLE_BUF_HEADROOM))) { + if (likely((flags & ROC_SE_SINGLE_BUF_INPLACE) && (flags & ROC_SE_SINGLE_BUF_HEADROOM))) { void *dm_vaddr = fc_params->bufs[0].vaddr; /* Use Direct mode */ - offset_vaddr = - (uint8_t *)dm_vaddr - ROC_SE_OFF_CTRL_LEN - iv_len; + offset_vaddr = (uint8_t *)dm_vaddr - 
ROC_SE_OFF_CTRL_LEN - iv_len; + + *(uint64_t *)offset_vaddr = + rte_cpu_to_be_64(((uint64_t)encr_offset << 16) | + ((uint64_t)iv_offset << 8) | ((uint64_t)auth_offset)); + inst->dptr = (uint64_t)offset_vaddr; /* RPTR should just exclude offset control word */ @@ -859,197 +1361,33 @@ cpt_dec_hmac_prep(uint32_t flags, uint64_t d_offs, uint64_t d_lens, cpt_inst_w4.s.dlen = inputlen + ROC_SE_OFF_CTRL_LEN; if (likely(iv_len)) { - uint64_t *dest = (uint64_t *)((uint8_t *)offset_vaddr + - ROC_SE_OFF_CTRL_LEN); + uint64_t *dest = + (uint64_t *)((uint8_t *)offset_vaddr + ROC_SE_OFF_CTRL_LEN); uint64_t *src = fc_params->iv_buf; dest[0] = src[0]; dest[1] = src[1]; } + inst->w4.u64 = cpt_inst_w4.u64; } else { - void *m_vaddr = fc_params->meta_buf.vaddr; - uint32_t g_size_bytes, s_size_bytes; - struct roc_se_sglist_comp *gather_comp; - struct roc_se_sglist_comp *scatter_comp; - uint8_t *in_buffer; - uint8_t i = 0; - - /* This falls under strict SG mode */ - offset_vaddr = m_vaddr; - size = ROC_SE_OFF_CTRL_LEN + iv_len; - - m_vaddr = (uint8_t *)m_vaddr + size; - - cpt_inst_w4.s.opcode_major |= (uint64_t)ROC_SE_DMA_MODE; - if (likely(iv_len)) { - uint64_t *dest = (uint64_t *)((uint8_t *)offset_vaddr + - ROC_SE_OFF_CTRL_LEN); - uint64_t *src = fc_params->iv_buf; - dest[0] = src[0]; - dest[1] = src[1]; - } - - /* DPTR has SG list */ - in_buffer = m_vaddr; - - ((uint16_t *)in_buffer)[0] = 0; - ((uint16_t *)in_buffer)[1] = 0; - - /* TODO Add error check if space will be sufficient */ - gather_comp = - (struct roc_se_sglist_comp *)((uint8_t *)m_vaddr + 8); - - /* - * Input Gather List - */ - i = 0; - - /* Offset control word that includes iv */ - i = fill_sg_comp(gather_comp, i, (uint64_t)offset_vaddr, - ROC_SE_OFF_CTRL_LEN + iv_len); - - /* Add input data */ - if (flags & ROC_SE_VALID_MAC_BUF) { - size = inputlen - iv_len - mac_len; - if (size) { - /* input data only */ - if (unlikely(flags & - ROC_SE_SINGLE_BUF_INPLACE)) { - i = fill_sg_comp_from_buf_min( - gather_comp, i, 
fc_params->bufs, - &size); - } else { - uint32_t aad_offset = - aad_len ? passthrough_len : 0; - - i = fill_sg_comp_from_iov( - gather_comp, i, - fc_params->src_iov, 0, &size, - aad_buf, aad_offset); - } - if (unlikely(size)) { - plt_dp_err("Insufficient buffer" - " space, size %d needed", - size); - return -1; - } - } - - /* mac data */ - if (mac_len) { - i = fill_sg_comp_from_buf(gather_comp, i, - &fc_params->mac_buf); - } - } else { - /* input data + mac */ - size = inputlen - iv_len; - if (size) { - if (unlikely(flags & - ROC_SE_SINGLE_BUF_INPLACE)) { - i = fill_sg_comp_from_buf_min( - gather_comp, i, fc_params->bufs, - &size); - } else { - uint32_t aad_offset = - aad_len ? passthrough_len : 0; - - if (unlikely(!fc_params->src_iov)) { - plt_dp_err("Bad input args"); - return -1; - } - - i = fill_sg_comp_from_iov( - gather_comp, i, - fc_params->src_iov, 0, &size, - aad_buf, aad_offset); - } - - if (unlikely(size)) { - plt_dp_err("Insufficient buffer" - " space, size %d needed", - size); - return -1; - } - } - } - ((uint16_t *)in_buffer)[2] = rte_cpu_to_be_16(i); - g_size_bytes = - ((i + 3) / 4) * sizeof(struct roc_se_sglist_comp); - - /* - * Output Scatter List - */ - - i = 0; - scatter_comp = - (struct roc_se_sglist_comp *)((uint8_t *)gather_comp + - g_size_bytes); - - /* Add iv */ - if (iv_len) { - i = fill_sg_comp(scatter_comp, i, - (uint64_t)offset_vaddr + - ROC_SE_OFF_CTRL_LEN, - iv_len); + src = fc_params->iv_buf; } - /* Add output data */ - size = outputlen - iv_len; - if (size) { - if (unlikely(flags & ROC_SE_SINGLE_BUF_INPLACE)) { - /* handle single buffer here */ - i = fill_sg_comp_from_buf_min(scatter_comp, i, - fc_params->bufs, - &size); - } else { - uint32_t aad_offset = - aad_len ? 
passthrough_len : 0; - - if (unlikely(!fc_params->dst_iov)) { - plt_dp_err("Bad input args"); - return -1; - } - - i = fill_sg_comp_from_iov( - scatter_comp, i, fc_params->dst_iov, 0, - &size, aad_buf, aad_offset); - } + inst->w4.u64 = cpt_inst_w4.u64; - if (unlikely(size)) { - plt_dp_err("Insufficient buffer space," - " size %d needed", - size); - return -1; - } + if (is_sg_ver2) + ret = sg2_inst_prep(fc_params, inst, offset_ctrl, src, iv_len, 0, 0, + inputlen, outputlen, passthrough_len, flags, 0, 1); + else + ret = sg_inst_prep(fc_params, inst, offset_ctrl, src, iv_len, 0, 0, + inputlen, outputlen, passthrough_len, flags, 0, 1); + if (unlikely(ret)) { + plt_dp_err("sg prep failed"); + return -1; } - - ((uint16_t *)in_buffer)[3] = rte_cpu_to_be_16(i); - s_size_bytes = - ((i + 3) / 4) * sizeof(struct roc_se_sglist_comp); - - size = g_size_bytes + s_size_bytes + ROC_SE_SG_LIST_HDR_SIZE; - - /* This is DPTR len in case of SG mode */ - cpt_inst_w4.s.dlen = size; - - inst->dptr = (uint64_t)in_buffer; - } - - if (unlikely((encr_offset >> 16) || (iv_offset >> 8) || - (auth_offset >> 8))) { - plt_dp_err("Offset not supported"); - plt_dp_err("enc_offset: %d", encr_offset); - plt_dp_err("iv_offset : %d", iv_offset); - plt_dp_err("auth_offset: %d", auth_offset); - return -1; } - *(uint64_t *)offset_vaddr = rte_cpu_to_be_64( - ((uint64_t)encr_offset << 16) | ((uint64_t)iv_offset << 8) | - ((uint64_t)auth_offset)); - - inst->w4.u64 = cpt_inst_w4.u64; return 0; } @@ -1266,9 +1604,8 @@ cpt_pdcp_chain_alg_prep(uint32_t req_flags, uint64_t d_offs, uint64_t d_lens, static __rte_always_inline int cpt_pdcp_alg_prep(uint32_t req_flags, uint64_t d_offs, uint64_t d_lens, - struct roc_se_fc_params *params, struct cpt_inst_s *inst) + struct roc_se_fc_params *params, struct cpt_inst_s *inst, const bool is_sg_ver2) { - uint32_t size; int32_t inputlen, outputlen; struct roc_se_ctx *se_ctx; uint32_t mac_len = 0; @@ -1281,6 +1618,7 @@ cpt_pdcp_alg_prep(uint32_t req_flags, uint64_t d_offs, 
uint64_t d_lens, uint8_t *iv_s; uint8_t pack_iv = 0; union cpt_inst_w4 cpt_inst_w4; + int ret; se_ctx = params->ctx; flags = se_ctx->zsk_flags; @@ -1343,223 +1681,99 @@ cpt_pdcp_alg_prep(uint32_t req_flags, uint64_t d_offs, uint64_t d_lens, roc_se_zuc_bytes_swap(iv_s, iv_len); iv_len -= 2; pack_iv = 1; - } - - /* - * Microcode expects offsets in bytes - * TODO: Rounding off - */ - encr_data_len = ROC_SE_ENCR_DLEN(d_lens); - - encr_offset = ROC_SE_ENCR_OFFSET(d_offs); - encr_offset = encr_offset / 8; - /* consider iv len */ - encr_offset += iv_len; - - inputlen = encr_offset + (RTE_ALIGN(encr_data_len, 8) / 8); - outputlen = inputlen; - - /* iv offset is 0 */ - offset_ctrl = rte_cpu_to_be_64((uint64_t)encr_offset << 16); - - auth_data_len = 0; - auth_offset = 0; - } - - if (unlikely((encr_offset >> 16) || (auth_offset >> 8))) { - plt_dp_err("Offset not supported"); - plt_dp_err("enc_offset: %d", encr_offset); - plt_dp_err("auth_offset: %d", auth_offset); - return -1; - } - - /* - * GP op header, lengths are expected in bits. 
- */ - cpt_inst_w4.s.param1 = encr_data_len; - cpt_inst_w4.s.param2 = auth_data_len; - - /* - * In cn9k, cn10k since we have a limitation of - * IV & Offset control word not part of instruction - * and need to be part of Data Buffer, we check if - * head room is there and then only do the Direct mode processing - */ - if (likely((req_flags & ROC_SE_SINGLE_BUF_INPLACE) && - (req_flags & ROC_SE_SINGLE_BUF_HEADROOM))) { - void *dm_vaddr = params->bufs[0].vaddr; - - /* Use Direct mode */ - - offset_vaddr = (uint64_t *)((uint8_t *)dm_vaddr - - ROC_SE_OFF_CTRL_LEN - iv_len); - - /* DPTR */ - inst->dptr = (uint64_t)offset_vaddr; - /* RPTR should just exclude offset control word */ - inst->rptr = (uint64_t)dm_vaddr - iv_len; - - cpt_inst_w4.s.dlen = inputlen + ROC_SE_OFF_CTRL_LEN; - - uint8_t *iv_d = ((uint8_t *)offset_vaddr + ROC_SE_OFF_CTRL_LEN); - pdcp_iv_copy(iv_d, iv_s, pdcp_alg_type, pack_iv); - - *offset_vaddr = offset_ctrl; - } else { - void *m_vaddr = params->meta_buf.vaddr; - uint32_t i, g_size_bytes, s_size_bytes; - struct roc_se_sglist_comp *gather_comp; - struct roc_se_sglist_comp *scatter_comp; - uint8_t *in_buffer; - uint8_t *iv_d; - - /* save space for iv */ - offset_vaddr = m_vaddr; - - m_vaddr = (uint8_t *)m_vaddr + ROC_SE_OFF_CTRL_LEN + - RTE_ALIGN_CEIL(iv_len, 8); - - cpt_inst_w4.s.opcode_major |= (uint64_t)ROC_SE_DMA_MODE; - - /* DPTR has SG list */ - in_buffer = m_vaddr; - - ((uint16_t *)in_buffer)[0] = 0; - ((uint16_t *)in_buffer)[1] = 0; - - /* TODO Add error check if space will be sufficient */ - gather_comp = - (struct roc_se_sglist_comp *)((uint8_t *)m_vaddr + 8); - - /* - * Input Gather List - */ - i = 0; - - /* Offset control word followed by iv */ - - i = fill_sg_comp(gather_comp, i, (uint64_t)offset_vaddr, - ROC_SE_OFF_CTRL_LEN + iv_len); - - /* iv offset is 0 */ - *offset_vaddr = offset_ctrl; - - iv_d = ((uint8_t *)offset_vaddr + ROC_SE_OFF_CTRL_LEN); - pdcp_iv_copy(iv_d, iv_s, pdcp_alg_type, pack_iv); - - /* input data */ - size = inputlen 
- iv_len; - if (size) { - i = fill_sg_comp_from_iov(gather_comp, i, - params->src_iov, 0, &size, - NULL, 0); - if (unlikely(size)) { - plt_dp_err("Insufficient buffer space," - " size %d needed", - size); - return -1; - } - } - ((uint16_t *)in_buffer)[2] = rte_cpu_to_be_16(i); - g_size_bytes = - ((i + 3) / 4) * sizeof(struct roc_se_sglist_comp); + } /* - * Output Scatter List + * Microcode expects offsets in bytes + * TODO: Rounding off */ + encr_data_len = ROC_SE_ENCR_DLEN(d_lens); - i = 0; - scatter_comp = - (struct roc_se_sglist_comp *)((uint8_t *)gather_comp + - g_size_bytes); + encr_offset = ROC_SE_ENCR_OFFSET(d_offs); + encr_offset = encr_offset / 8; + /* consider iv len */ + encr_offset += iv_len; - if (flags == 0x1) { - /* IV in SLIST only for EEA3 & UEA2 */ - iv_len = 0; - } + inputlen = encr_offset + (RTE_ALIGN(encr_data_len, 8) / 8); + outputlen = inputlen; - if (iv_len) { - i = fill_sg_comp(scatter_comp, i, - (uint64_t)offset_vaddr + - ROC_SE_OFF_CTRL_LEN, - iv_len); - } + /* iv offset is 0 */ + offset_ctrl = rte_cpu_to_be_64((uint64_t)encr_offset << 16); - /* Add output data */ - if (req_flags & ROC_SE_VALID_MAC_BUF) { - size = outputlen - iv_len - mac_len; - if (size) { - i = fill_sg_comp_from_iov(scatter_comp, i, - params->dst_iov, 0, - &size, NULL, 0); - - if (unlikely(size)) { - plt_dp_err("Insufficient buffer space," - " size %d needed", - size); - return -1; - } - } + auth_data_len = 0; + auth_offset = 0; + } - /* mac data */ - if (mac_len) { - i = fill_sg_comp_from_buf(scatter_comp, i, - &params->mac_buf); - } - } else { - /* Output including mac */ - size = outputlen - iv_len; - if (size) { - i = fill_sg_comp_from_iov(scatter_comp, i, - params->dst_iov, 0, - &size, NULL, 0); - - if (unlikely(size)) { - plt_dp_err("Insufficient buffer space," - " size %d needed", - size); - return -1; - } - } - } - ((uint16_t *)in_buffer)[3] = rte_cpu_to_be_16(i); - s_size_bytes = - ((i + 3) / 4) * sizeof(struct roc_se_sglist_comp); + if (unlikely((encr_offset >>
16) || (auth_offset >> 8))) { + plt_dp_err("Offset not supported"); + plt_dp_err("enc_offset: %d", encr_offset); + plt_dp_err("auth_offset: %d", auth_offset); + return -1; + } - size = g_size_bytes + s_size_bytes + ROC_SE_SG_LIST_HDR_SIZE; + /* + * GP op header, lengths are expected in bits. + */ + cpt_inst_w4.s.param1 = encr_data_len; + cpt_inst_w4.s.param2 = auth_data_len; - /* This is DPTR len in case of SG mode */ - cpt_inst_w4.s.dlen = size; + /* + * In cn9k, cn10k since we have a limitation of + * IV & Offset control word not part of instruction + * and need to be part of Data Buffer, we check if + * head room is there and then only do the Direct mode processing + */ + if (likely((req_flags & ROC_SE_SINGLE_BUF_INPLACE) && + (req_flags & ROC_SE_SINGLE_BUF_HEADROOM))) { + void *dm_vaddr = params->bufs[0].vaddr; - inst->dptr = (uint64_t)in_buffer; - } + /* Use Direct mode */ - inst->w4.u64 = cpt_inst_w4.u64; + offset_vaddr = (uint64_t *)((uint8_t *)dm_vaddr - ROC_SE_OFF_CTRL_LEN - iv_len); + + /* DPTR */ + inst->dptr = (uint64_t)offset_vaddr; + /* RPTR should just exclude offset control word */ + inst->rptr = (uint64_t)dm_vaddr - iv_len; + + cpt_inst_w4.s.dlen = inputlen + ROC_SE_OFF_CTRL_LEN; + + uint8_t *iv_d = ((uint8_t *)offset_vaddr + ROC_SE_OFF_CTRL_LEN); + pdcp_iv_copy(iv_d, iv_s, pdcp_alg_type, pack_iv); + + *offset_vaddr = offset_ctrl; + inst->w4.u64 = cpt_inst_w4.u64; + } else { + inst->w4.u64 = cpt_inst_w4.u64; + if (is_sg_ver2) + ret = sg2_inst_prep(params, inst, offset_ctrl, iv_s, iv_len, pack_iv, + pdcp_alg_type, inputlen, outputlen, 0, req_flags, 1, 0); + else + ret = sg_inst_prep(params, inst, offset_ctrl, iv_s, iv_len, pack_iv, + pdcp_alg_type, inputlen, outputlen, 0, req_flags, 1, 0); + if (unlikely(ret)) { + plt_dp_err("sg prep failed"); + return -1; + } + } return 0; } static __rte_always_inline int cpt_kasumi_enc_prep(uint32_t req_flags, uint64_t d_offs, uint64_t d_lens, - struct roc_se_fc_params *params, struct cpt_inst_s *inst) + struct 
roc_se_fc_params *params, struct cpt_inst_s *inst, const bool is_sg_ver2) { - void *m_vaddr = params->meta_buf.vaddr; - uint32_t size; int32_t inputlen = 0, outputlen = 0; struct roc_se_ctx *se_ctx; uint32_t mac_len = 0; - uint8_t i = 0; uint32_t encr_offset, auth_offset; uint32_t encr_data_len, auth_data_len; int flags; - uint8_t *iv_s, *iv_d, iv_len = 8; + uint8_t *iv_s, iv_len = 8; uint8_t dir = 0; - uint64_t *offset_vaddr; + uint64_t offset_ctrl; union cpt_inst_w4 cpt_inst_w4; - uint8_t *in_buffer; - uint32_t g_size_bytes, s_size_bytes; - struct roc_se_sglist_comp *gather_comp; - struct roc_se_sglist_comp *scatter_comp; encr_offset = ROC_SE_ENCR_OFFSET(d_offs) / 8; auth_offset = ROC_SE_AUTH_OFFSET(d_offs) / 8; @@ -1595,32 +1809,11 @@ cpt_kasumi_enc_prep(uint32_t req_flags, uint64_t d_offs, uint64_t d_lens, auth_offset += iv_len; } - /* save space for offset ctrl and iv */ - offset_vaddr = m_vaddr; - - m_vaddr = (uint8_t *)m_vaddr + ROC_SE_OFF_CTRL_LEN + iv_len; - - /* DPTR has SG list */ - in_buffer = m_vaddr; - - ((uint16_t *)in_buffer)[0] = 0; - ((uint16_t *)in_buffer)[1] = 0; - - /* TODO Add error check if space will be sufficient */ - gather_comp = (struct roc_se_sglist_comp *)((uint8_t *)m_vaddr + 8); - - /* - * Input Gather List - */ - i = 0; - - /* Offset control word followed by iv */ - if (flags == 0x0) { inputlen = encr_offset + (RTE_ALIGN(encr_data_len, 8) / 8); outputlen = inputlen; /* iv offset is 0 */ - *offset_vaddr = rte_cpu_to_be_64((uint64_t)encr_offset << 16); + offset_ctrl = rte_cpu_to_be_64((uint64_t)encr_offset << 16); if (unlikely((encr_offset >> 16))) { plt_dp_err("Offset not supported"); plt_dp_err("enc_offset: %d", encr_offset); @@ -1630,7 +1823,7 @@ cpt_kasumi_enc_prep(uint32_t req_flags, uint64_t d_offs, uint64_t d_lens, inputlen = auth_offset + (RTE_ALIGN(auth_data_len, 8) / 8); outputlen = mac_len; /* iv offset is 0 */ - *offset_vaddr = rte_cpu_to_be_64((uint64_t)auth_offset); + offset_ctrl = 
rte_cpu_to_be_64((uint64_t)auth_offset); if (unlikely((auth_offset >> 8))) { plt_dp_err("Offset not supported"); plt_dp_err("auth_offset: %d", auth_offset); @@ -1638,119 +1831,30 @@ cpt_kasumi_enc_prep(uint32_t req_flags, uint64_t d_offs, uint64_t d_lens, } } - i = fill_sg_comp(gather_comp, i, (uint64_t)offset_vaddr, - ROC_SE_OFF_CTRL_LEN + iv_len); - - /* IV */ - iv_d = (uint8_t *)offset_vaddr + ROC_SE_OFF_CTRL_LEN; - memcpy(iv_d, iv_s, iv_len); - - /* input data */ - size = inputlen - iv_len; - if (size) { - i = fill_sg_comp_from_iov(gather_comp, i, params->src_iov, 0, - &size, NULL, 0); - - if (unlikely(size)) { - plt_dp_err("Insufficient buffer space," - " size %d needed", - size); - return -1; - } - } - ((uint16_t *)in_buffer)[2] = rte_cpu_to_be_16(i); - g_size_bytes = ((i + 3) / 4) * sizeof(struct roc_se_sglist_comp); - - /* - * Output Scatter List - */ - - i = 0; - scatter_comp = (struct roc_se_sglist_comp *)((uint8_t *)gather_comp + - g_size_bytes); - - if (flags == 0x1) { - /* IV in SLIST only for F8 */ - iv_len = 0; - } - - /* IV */ - if (iv_len) { - i = fill_sg_comp(scatter_comp, i, - (uint64_t)offset_vaddr + ROC_SE_OFF_CTRL_LEN, - iv_len); - } - - /* Add output data */ - if (req_flags & ROC_SE_VALID_MAC_BUF) { - size = outputlen - iv_len - mac_len; - if (size) { - i = fill_sg_comp_from_iov(scatter_comp, i, - params->dst_iov, 0, &size, - NULL, 0); - - if (unlikely(size)) { - plt_dp_err("Insufficient buffer space," - " size %d needed", - size); - return -1; - } - } - - /* mac data */ - if (mac_len) { - i = fill_sg_comp_from_buf(scatter_comp, i, - &params->mac_buf); - } - } else { - /* Output including mac */ - size = outputlen - iv_len; - if (size) { - i = fill_sg_comp_from_iov(scatter_comp, i, - params->dst_iov, 0, &size, - NULL, 0); - - if (unlikely(size)) { - plt_dp_err("Insufficient buffer space," - " size %d needed", - size); - return -1; - } - } - } - ((uint16_t *)in_buffer)[3] = rte_cpu_to_be_16(i); - s_size_bytes = ((i + 3) / 4) * sizeof(struct
roc_se_sglist_comp); - - size = g_size_bytes + s_size_bytes + ROC_SE_SG_LIST_HDR_SIZE; - - /* This is DPTR len in case of SG mode */ - cpt_inst_w4.s.dlen = size; - - inst->dptr = (uint64_t)in_buffer; inst->w4.u64 = cpt_inst_w4.u64; + if (is_sg_ver2) + sg2_inst_prep(params, inst, offset_ctrl, iv_s, iv_len, 0, 0, inputlen, outputlen, 0, + req_flags, 0, 0); + else + sg_inst_prep(params, inst, offset_ctrl, iv_s, iv_len, 0, 0, inputlen, outputlen, 0, + req_flags, 0, 0); return 0; } static __rte_always_inline int -cpt_kasumi_dec_prep(uint64_t d_offs, uint64_t d_lens, - struct roc_se_fc_params *params, struct cpt_inst_s *inst) +cpt_kasumi_dec_prep(uint64_t d_offs, uint64_t d_lens, struct roc_se_fc_params *params, + struct cpt_inst_s *inst, const bool is_sg_ver2) { - void *m_vaddr = params->meta_buf.vaddr; - uint32_t size; int32_t inputlen = 0, outputlen; struct roc_se_ctx *se_ctx; - uint8_t i = 0, iv_len = 8; + uint8_t iv_len = 8; uint32_t encr_offset; uint32_t encr_data_len; int flags; uint8_t dir = 0; - uint64_t *offset_vaddr; union cpt_inst_w4 cpt_inst_w4; - uint8_t *in_buffer; - uint32_t g_size_bytes, s_size_bytes; - struct roc_se_sglist_comp *gather_comp; - struct roc_se_sglist_comp *scatter_comp; + uint64_t offset_ctrl; encr_offset = ROC_SE_ENCR_OFFSET(d_offs) / 8; encr_data_len = ROC_SE_ENCR_DLEN(d_lens); @@ -1776,96 +1880,28 @@ cpt_kasumi_dec_prep(uint64_t d_offs, uint64_t d_lens, inputlen = encr_offset + (RTE_ALIGN(encr_data_len, 8) / 8); outputlen = inputlen; - /* save space for offset ctrl & iv */ - offset_vaddr = m_vaddr; - - m_vaddr = (uint8_t *)m_vaddr + ROC_SE_OFF_CTRL_LEN + iv_len; - - /* DPTR has SG list */ - in_buffer = m_vaddr; - - ((uint16_t *)in_buffer)[0] = 0; - ((uint16_t *)in_buffer)[1] = 0; - - /* TODO Add error check if space will be sufficient */ - gather_comp = (struct roc_se_sglist_comp *)((uint8_t *)m_vaddr + 8); - - /* - * Input Gather List - */ - i = 0; - - /* Offset control word followed by iv */ - *offset_vaddr = 
rte_cpu_to_be_64((uint64_t)encr_offset << 16); + offset_ctrl = rte_cpu_to_be_64((uint64_t)encr_offset << 16); if (unlikely((encr_offset >> 16))) { plt_dp_err("Offset not supported"); plt_dp_err("enc_offset: %d", encr_offset); return -1; } - i = fill_sg_comp(gather_comp, i, (uint64_t)offset_vaddr, - ROC_SE_OFF_CTRL_LEN + iv_len); - - /* IV */ - memcpy((uint8_t *)offset_vaddr + ROC_SE_OFF_CTRL_LEN, params->iv_buf, - iv_len); - - /* Add input data */ - size = inputlen - iv_len; - if (size) { - i = fill_sg_comp_from_iov(gather_comp, i, params->src_iov, 0, - &size, NULL, 0); - if (unlikely(size)) { - plt_dp_err("Insufficient buffer space," - " size %d needed", - size); - return -1; - } - } - ((uint16_t *)in_buffer)[2] = rte_cpu_to_be_16(i); - g_size_bytes = ((i + 3) / 4) * sizeof(struct roc_se_sglist_comp); - - /* - * Output Scatter List - */ - - i = 0; - scatter_comp = (struct roc_se_sglist_comp *)((uint8_t *)gather_comp + - g_size_bytes); - - /* IV */ - i = fill_sg_comp(scatter_comp, i, - (uint64_t)offset_vaddr + ROC_SE_OFF_CTRL_LEN, iv_len); - - /* Add output data */ - size = outputlen - iv_len; - if (size) { - i = fill_sg_comp_from_iov(scatter_comp, i, params->dst_iov, 0, - &size, NULL, 0); - if (unlikely(size)) { - plt_dp_err("Insufficient buffer space," - " size %d needed", - size); - return -1; - } - } - ((uint16_t *)in_buffer)[3] = rte_cpu_to_be_16(i); - s_size_bytes = ((i + 3) / 4) * sizeof(struct roc_se_sglist_comp); - - size = g_size_bytes + s_size_bytes + ROC_SE_SG_LIST_HDR_SIZE; - - /* This is DPTR len in case of SG mode */ - cpt_inst_w4.s.dlen = size; - - inst->dptr = (uint64_t)in_buffer; inst->w4.u64 = cpt_inst_w4.u64; + if (is_sg_ver2) + sg2_inst_prep(params, inst, offset_ctrl, params->iv_buf, iv_len, 0, 0, inputlen, + outputlen, 0, 0, 0, 1); + else + sg_inst_prep(params, inst, offset_ctrl, params->iv_buf, iv_len, 0, 0, inputlen, + outputlen, 0, 0, 0, 1); return 0; } static __rte_always_inline int cpt_fc_enc_hmac_prep(uint32_t flags, uint64_t d_offs, 
uint64_t d_lens, - struct roc_se_fc_params *fc_params, struct cpt_inst_s *inst) + struct roc_se_fc_params *fc_params, struct cpt_inst_s *inst, + const bool is_sg_ver2) { struct roc_se_ctx *ctx = fc_params->ctx; uint8_t fc_type; @@ -1874,13 +1910,16 @@ cpt_fc_enc_hmac_prep(uint32_t flags, uint64_t d_offs, uint64_t d_lens, fc_type = ctx->fc_type; if (likely(fc_type == ROC_SE_FC_GEN)) { - ret = cpt_enc_hmac_prep(flags, d_offs, d_lens, fc_params, inst); + ret = cpt_enc_hmac_prep(flags, d_offs, d_lens, fc_params, inst, is_sg_ver2); } else if (fc_type == ROC_SE_PDCP) { - ret = cpt_pdcp_alg_prep(flags, d_offs, d_lens, fc_params, inst); + ret = cpt_pdcp_alg_prep(flags, d_offs, d_lens, fc_params, inst, is_sg_ver2); } else if (fc_type == ROC_SE_KASUMI) { - ret = cpt_kasumi_enc_prep(flags, d_offs, d_lens, fc_params, inst); + ret = cpt_kasumi_enc_prep(flags, d_offs, d_lens, fc_params, inst, is_sg_ver2); } else if (fc_type == ROC_SE_HASH_HMAC) { - ret = cpt_digest_gen_prep(flags, d_lens, fc_params, inst); + if (is_sg_ver2) + ret = cpt_digest_gen_sg_ver2_prep(flags, d_lens, fc_params, inst); + else + ret = cpt_digest_gen_sg_ver1_prep(flags, d_lens, fc_params, inst); } return ret; @@ -2391,11 +2430,11 @@ prepare_iov_from_pkt_inplace(struct rte_mbuf *pkt, iovec->buf_cnt = index; return; } - static __rte_always_inline int fill_fc_params(struct rte_crypto_op *cop, struct cnxk_se_sess *sess, struct cpt_qp_meta_info *m_info, struct cpt_inflight_req *infl_req, - struct cpt_inst_s *inst, const bool is_kasumi, const bool is_aead) + struct cpt_inst_s *inst, const bool is_kasumi, const bool is_aead, + const bool is_sg_ver2) { struct rte_crypto_sym_op *sym_op = cop->sym; void *mdata = NULL; @@ -2606,14 +2645,17 @@ fill_fc_params(struct rte_crypto_op *cop, struct cnxk_se_sess *sess, if (is_kasumi) { if (cpt_op & ROC_SE_OP_ENCODE) - ret = cpt_kasumi_enc_prep(flags, d_offs, d_lens, &fc_params, inst); + ret = cpt_kasumi_enc_prep(flags, d_offs, d_lens, &fc_params, inst, + is_sg_ver2); else - ret 
= cpt_kasumi_dec_prep(d_offs, d_lens, &fc_params, inst); + ret = cpt_kasumi_dec_prep(d_offs, d_lens, &fc_params, inst, is_sg_ver2); } else { if (cpt_op & ROC_SE_OP_ENCODE) - ret = cpt_enc_hmac_prep(flags, d_offs, d_lens, &fc_params, inst); + ret = cpt_enc_hmac_prep(flags, d_offs, d_lens, &fc_params, inst, + is_sg_ver2); else - ret = cpt_dec_hmac_prep(flags, d_offs, d_lens, &fc_params, inst); + ret = cpt_dec_hmac_prep(flags, d_offs, d_lens, &fc_params, inst, + is_sg_ver2); } if (unlikely(ret)) { @@ -2633,7 +2675,7 @@ fill_fc_params(struct rte_crypto_op *cop, struct cnxk_se_sess *sess, static __rte_always_inline int fill_pdcp_params(struct rte_crypto_op *cop, struct cnxk_se_sess *sess, struct cpt_qp_meta_info *m_info, struct cpt_inflight_req *infl_req, - struct cpt_inst_s *inst) + struct cpt_inst_s *inst, const bool is_sg_ver2) { struct rte_crypto_sym_op *sym_op = cop->sym; struct roc_se_fc_params fc_params; @@ -2712,6 +2754,7 @@ fill_pdcp_params(struct rte_crypto_op *cop, struct cnxk_se_sess *sess, } } + fc_params.meta_buf.vaddr = NULL; if (unlikely(!((flags & ROC_SE_SINGLE_BUF_INPLACE) && (flags & ROC_SE_SINGLE_BUF_HEADROOM)))) { mdata = alloc_op_meta(&fc_params.meta_buf, m_info->mlen, m_info->pool, infl_req); @@ -2721,7 +2764,7 @@ fill_pdcp_params(struct rte_crypto_op *cop, struct cnxk_se_sess *sess, } } - ret = cpt_pdcp_alg_prep(flags, d_offs, d_lens, &fc_params, inst); + ret = cpt_pdcp_alg_prep(flags, d_offs, d_lens, &fc_params, inst, is_sg_ver2); if (unlikely(ret)) { plt_dp_err("Could not prepare instruction"); goto free_mdata_and_exit; @@ -2935,8 +2978,8 @@ find_kasumif9_direction_and_length(uint8_t *src, uint32_t counter_num_bytes, */ static __rte_always_inline int fill_digest_params(struct rte_crypto_op *cop, struct cnxk_se_sess *sess, - struct cpt_qp_meta_info *m_info, - struct cpt_inflight_req *infl_req, struct cpt_inst_s *inst) + struct cpt_qp_meta_info *m_info, struct cpt_inflight_req *infl_req, + struct cpt_inst_s *inst, const bool is_sg_ver2) { 
uint32_t space = 0; struct rte_crypto_sym_op *sym_op = cop->sym; @@ -3066,7 +3109,7 @@ fill_digest_params(struct rte_crypto_op *cop, struct cnxk_se_sess *sess, goto free_mdata_and_exit; } - ret = cpt_fc_enc_hmac_prep(flags, d_offs, d_lens, &params, inst); + ret = cpt_fc_enc_hmac_prep(flags, d_offs, d_lens, &params, inst, is_sg_ver2); if (ret) goto free_mdata_and_exit; @@ -3081,28 +3124,31 @@ fill_digest_params(struct rte_crypto_op *cop, struct cnxk_se_sess *sess, static __rte_always_inline int __rte_hot cpt_sym_inst_fill(struct cnxk_cpt_qp *qp, struct rte_crypto_op *op, struct cnxk_se_sess *sess, - struct cpt_inflight_req *infl_req, struct cpt_inst_s *inst) + struct cpt_inflight_req *infl_req, struct cpt_inst_s *inst, const bool is_sg_ver2) { int ret; switch (sess->dp_thr_type) { case CPT_DP_THREAD_TYPE_PDCP: - ret = fill_pdcp_params(op, sess, &qp->meta_info, infl_req, inst); + ret = fill_pdcp_params(op, sess, &qp->meta_info, infl_req, inst, is_sg_ver2); break; case CPT_DP_THREAD_TYPE_FC_CHAIN: - ret = fill_fc_params(op, sess, &qp->meta_info, infl_req, inst, false, false); + ret = fill_fc_params(op, sess, &qp->meta_info, infl_req, inst, false, false, + is_sg_ver2); break; case CPT_DP_THREAD_TYPE_FC_AEAD: - ret = fill_fc_params(op, sess, &qp->meta_info, infl_req, inst, false, true); + ret = fill_fc_params(op, sess, &qp->meta_info, infl_req, inst, false, true, + is_sg_ver2); break; case CPT_DP_THREAD_TYPE_PDCP_CHAIN: ret = fill_pdcp_chain_params(op, sess, &qp->meta_info, infl_req, inst); break; case CPT_DP_THREAD_TYPE_KASUMI: - ret = fill_fc_params(op, sess, &qp->meta_info, infl_req, inst, true, false); + ret = fill_fc_params(op, sess, &qp->meta_info, infl_req, inst, true, false, + is_sg_ver2); break; case CPT_DP_THREAD_AUTH_ONLY: - ret = fill_digest_params(op, sess, &qp->meta_info, infl_req, inst); + ret = fill_digest_params(op, sess, &qp->meta_info, infl_req, inst, is_sg_ver2); break; default: ret = -EINVAL; diff --git a/drivers/crypto/cnxk/version.map
b/drivers/crypto/cnxk/version.map index 4735e70550..d13209feec 100644 --- a/drivers/crypto/cnxk/version.map +++ b/drivers/crypto/cnxk/version.map @@ -3,7 +3,8 @@ INTERNAL { cn9k_cpt_crypto_adapter_enqueue; cn9k_cpt_crypto_adapter_dequeue; - cn10k_cpt_crypto_adapter_enqueue; + cn10k_cpt_sg_ver1_crypto_adapter_enqueue; + cn10k_cpt_sg_ver2_crypto_adapter_enqueue; cn10k_cpt_crypto_adapter_dequeue; cn10k_cpt_crypto_adapter_vector_dequeue; diff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c index 742e43a5c6..30c922b5fc 100644 --- a/drivers/event/cnxk/cn10k_eventdev.c +++ b/drivers/event/cnxk/cn10k_eventdev.c @@ -292,6 +292,7 @@ static void cn10k_sso_fp_fns_set(struct rte_eventdev *event_dev) { struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev); + struct roc_cpt *cpt = roc_idev_cpt_get(); const event_dequeue_t sso_hws_deq[NIX_RX_OFFLOAD_MAX] = { #define R(name, flags)[flags] = cn10k_sso_hws_deq_##name, NIX_RX_FASTPATH_MODES @@ -594,14 +595,16 @@ cn10k_sso_fp_fns_set(struct rte_eventdev *event_dev) } } } - event_dev->ca_enqueue = cn10k_cpt_crypto_adapter_enqueue; + + if ((cpt != NULL) && (cpt->cpt_revision > ROC_CPT_REVISION_ID_106XX)) + event_dev->ca_enqueue = cn10k_cpt_sg_ver2_crypto_adapter_enqueue; + else + event_dev->ca_enqueue = cn10k_cpt_sg_ver1_crypto_adapter_enqueue; if (dev->tx_offloads & NIX_TX_MULTI_SEG_F) - CN10K_SET_EVDEV_ENQ_OP(dev, event_dev->txa_enqueue, - sso_hws_tx_adptr_enq_seg); + CN10K_SET_EVDEV_ENQ_OP(dev, event_dev->txa_enqueue, sso_hws_tx_adptr_enq_seg); else - CN10K_SET_EVDEV_ENQ_OP(dev, event_dev->txa_enqueue, - sso_hws_tx_adptr_enq); + CN10K_SET_EVDEV_ENQ_OP(dev, event_dev->txa_enqueue, sso_hws_tx_adptr_enq); event_dev->txa_enqueue_same_dest = event_dev->txa_enqueue; }

From patchwork Wed Oct 19 14:15:05 2022
X-Patchwork-Submitter: Tejasree Kondoj
X-Patchwork-Id: 118577
X-Patchwork-Delegate: gakhil@marvell.com
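The diffs above repeatedly thread a constant `is_sg_ver2` flag from `cpt_sym_inst_fill()` down into the prep helpers so that one code path serves both scatter-gather descriptor formats. A minimal C sketch of that dispatch pattern — the function bodies here are illustrative placeholders, not the driver's real `sg_inst_prep()`/`sg2_inst_prep()` logic:

```c
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical stand-ins for the SG-version-1 and SG-version-2
 * descriptor writers; each just tags the descriptor it "built". */
static int sg_inst_prep(uint64_t *dptr)  { *dptr = 1; return 0; }
static int sg2_inst_prep(uint64_t *dptr) { *dptr = 2; return 0; }

/* The pattern the series introduces: the flag is passed as `const bool`
 * at every call site, so when the helpers are inlined the branch is
 * resolved at compile time and the fast path carries no runtime check. */
static inline int inst_prep(uint64_t *dptr, const bool is_sg_ver2)
{
	if (is_sg_ver2)
		return sg2_inst_prep(dptr);
	return sg_inst_prep(dptr);
}
```

Because each caller is itself instantiated with a constant flag (as `cpt_sym_inst_fill()` is in the patch), the compiler can fold the branch away rather than test it per packet.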
From: Tejasree Kondoj
To: Akhil Goyal
CC: Vidya Sagar Velumuri , Anoob Joseph ,
Subject: [PATCH 05/13] common/cnxk: support custom UDP port values
Date: Wed, 19 Oct 2022 19:45:05 +0530
Message-ID: <20221019141513.1969052-6-ktejasree@marvell.com>
In-Reply-To: <20221019141513.1969052-1-ktejasree@marvell.com>
References: <20221019141513.1969052-1-ktejasree@marvell.com>
List-Id: DPDK patches and discussions

From: Vidya Sagar Velumuri

Add support for custom port values for UDP encapsulation

Signed-off-by: Vidya Sagar Velumuri
---
 drivers/common/cnxk/cnxk_security.c | 26 ++++++++++++++++++--------
 1 file changed, 18 insertions(+), 8 deletions(-)

diff --git a/drivers/common/cnxk/cnxk_security.c b/drivers/common/cnxk/cnxk_security.c index f8bdeabeac..f220d2577f 100644 --- a/drivers/common/cnxk/cnxk_security.c +++ b/drivers/common/cnxk/cnxk_security.c @@ -1219,6 +1219,7 @@ cnxk_on_ipsec_outb_sa_create(struct rte_security_ipsec_xform *ipsec, struct rte_ipv6_hdr *ip6; struct rte_ipv4_hdr *ip4; const uint8_t *auth_key; + uint16_t sport, dport; int auth_key_len = 0; size_t ctx_len; int ret; @@ -1266,10 +1267,20 @@ cnxk_on_ipsec_outb_sa_create(struct rte_security_ipsec_xform *ipsec, } ip4 = (struct rte_ipv4_hdr *)&template->ip4.ipv4_hdr; + /* If custom port values are provided, Overwrite default port values.
*/ if (ipsec->options.udp_encap) { + sport = 4500; + dport = 4500; + + if (ipsec->udp.sport) + sport = ipsec->udp.sport; + + if (ipsec->udp.dport) + dport = ipsec->udp.dport; + ip4->next_proto_id = IPPROTO_UDP; - template->ip4.udp_src = rte_be_to_cpu_16(4500); - template->ip4.udp_dst = rte_be_to_cpu_16(4500); + template->ip4.udp_src = rte_be_to_cpu_16(sport); + template->ip4.udp_dst = rte_be_to_cpu_16(dport); } else { if (ipsec->proto == RTE_SECURITY_IPSEC_SA_PROTO_AH) ip4->next_proto_id = IPPROTO_AH; @@ -1303,13 +1314,12 @@ cnxk_on_ipsec_outb_sa_create(struct rte_security_ipsec_xform *ipsec, ip6 = (struct rte_ipv6_hdr *)&template->ip6.ipv6_hdr; if (ipsec->options.udp_encap) { ip6->proto = IPPROTO_UDP; - template->ip6.udp_src = rte_be_to_cpu_16(4500); - template->ip6.udp_dst = rte_be_to_cpu_16(4500); + template->ip6.udp_src = rte_be_to_cpu_16(sport); + template->ip6.udp_dst = rte_be_to_cpu_16(dport); } else { - ip6->proto = (ipsec->proto == - RTE_SECURITY_IPSEC_SA_PROTO_ESP) ? - IPPROTO_ESP : - IPPROTO_AH; + ip6->proto = (ipsec->proto == RTE_SECURITY_IPSEC_SA_PROTO_ESP) ? 
+ IPPROTO_ESP : + IPPROTO_AH; } ip6->vtc_flow = rte_cpu_to_be_32(0x60000000 |

From patchwork Wed Oct 19 14:15:06 2022
X-Patchwork-Submitter: Tejasree Kondoj
X-Patchwork-Id: 118578
X-Patchwork-Delegate: gakhil@marvell.com
From: Tejasree Kondoj
To: Akhil Goyal
CC: Vidya Sagar Velumuri , Anoob Joseph ,
Subject: [PATCH 06/13] crypto/cnxk: update rlen calculation for lookaside mode
Date: Wed, 19 Oct 2022 19:45:06 +0530
Message-ID: <20221019141513.1969052-7-ktejasree@marvell.com>
In-Reply-To: <20221019141513.1969052-1-ktejasree@marvell.com>
References: <20221019141513.1969052-1-ktejasree@marvell.com>

From: Vidya Sagar Velumuri

For transport mode, IP header will not be part of encryption.
Update the response len calculation accordingly for transport mode Signed-off-by: Vidya Sagar Velumuri --- drivers/crypto/cnxk/cn9k_ipsec.c | 42 ------------------------- drivers/crypto/cnxk/cn9k_ipsec_la_ops.h | 10 ++++-- 2 files changed, 7 insertions(+), 45 deletions(-) diff --git a/drivers/crypto/cnxk/cn9k_ipsec.c b/drivers/crypto/cnxk/cn9k_ipsec.c index 55a13570ad..9ae7c73b37 100644 --- a/drivers/crypto/cnxk/cn9k_ipsec.c +++ b/drivers/crypto/cnxk/cn9k_ipsec.c @@ -211,50 +211,8 @@ cn9k_ipsec_xform_verify(struct rte_security_ipsec_xform *ipsec, plt_err("Transport mode AES-256-GCM is not supported"); return -ENOTSUP; } - } else { - struct rte_crypto_cipher_xform *cipher; - struct rte_crypto_auth_xform *auth; - - if (crypto->type == RTE_CRYPTO_SYM_XFORM_CIPHER) { - cipher = &crypto->cipher; - auth = &crypto->next->auth; - } else { - cipher = &crypto->next->cipher; - auth = &crypto->auth; - } - - if ((cipher->algo == RTE_CRYPTO_CIPHER_AES_CBC) && - (auth->algo == RTE_CRYPTO_AUTH_SHA256_HMAC)) { - plt_err("Transport mode AES-CBC SHA2 HMAC 256 is not supported"); - return -ENOTSUP; - } - - if ((cipher->algo == RTE_CRYPTO_CIPHER_AES_CBC) && - (auth->algo == RTE_CRYPTO_AUTH_SHA384_HMAC)) { - plt_err("Transport mode AES-CBC SHA2 HMAC 384 is not supported"); - return -ENOTSUP; - } - - if ((cipher->algo == RTE_CRYPTO_CIPHER_AES_CBC) && - (auth->algo == RTE_CRYPTO_AUTH_SHA512_HMAC)) { - plt_err("Transport mode AES-CBC SHA2 HMAC 512 is not supported"); - return -ENOTSUP; - } - - if ((cipher->algo == RTE_CRYPTO_CIPHER_AES_CBC) && - (auth->algo == RTE_CRYPTO_AUTH_AES_XCBC_MAC)) { - plt_err("Transport mode AES-CBC AES-XCBC is not supported"); - return -ENOTSUP; - } - - if ((cipher->algo == RTE_CRYPTO_CIPHER_3DES_CBC) && - (auth->algo == RTE_CRYPTO_AUTH_AES_XCBC_MAC)) { - plt_err("Transport mode 3DES-CBC AES-XCBC is not supported"); - return -ENOTSUP; - } } } - return 0; } diff --git a/drivers/crypto/cnxk/cn9k_ipsec_la_ops.h b/drivers/crypto/cnxk/cn9k_ipsec_la_ops.h index 
52618e8840..724fc525ad 100644 --- a/drivers/crypto/cnxk/cn9k_ipsec_la_ops.h +++ b/drivers/crypto/cnxk/cn9k_ipsec_la_ops.h @@ -16,11 +16,15 @@ static __rte_always_inline int32_t ipsec_po_out_rlen_get(struct cn9k_sec_session *sess, uint32_t plen) { uint32_t enc_payload_len; + int adj_len = 0; - enc_payload_len = RTE_ALIGN_CEIL(plen + sess->rlens.roundup_len, - sess->rlens.roundup_byte); + if (sess->sa.out_sa.common_sa.ctl.ipsec_mode == ROC_IE_SA_MODE_TRANSPORT) + adj_len = ROC_CPT_TUNNEL_IPV4_HDR_LEN; - return sess->custom_hdr_len + sess->rlens.partial_len + enc_payload_len; + enc_payload_len = + RTE_ALIGN_CEIL(plen + sess->rlens.roundup_len - adj_len, sess->rlens.roundup_byte); + + return sess->custom_hdr_len + sess->rlens.partial_len + enc_payload_len + adj_len; } static __rte_always_inline int From patchwork Wed Oct 19 14:15:07 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tejasree Kondoj X-Patchwork-Id: 118579 X-Patchwork-Delegate: gakhil@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 44BD0A06C8; Wed, 19 Oct 2022 16:15:59 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id CD8EC42BEE; Wed, 19 Oct 2022 16:15:36 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0a-0016f401.pphosted.com [67.231.148.174]) by mails.dpdk.org (Postfix) with ESMTP id 5D96242BE2 for ; Wed, 19 Oct 2022 16:15:33 +0200 (CEST) Received: from pps.filterd (m0045849.ppops.net [127.0.0.1]) by mx0a-0016f401.pphosted.com (8.17.1.5/8.17.1.5) with ESMTP id 29J8Ao51010370 for ; Wed, 19 Oct 2022 07:15:32 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : 
From patchwork Wed Oct 19 14:15:07 2022
X-Patchwork-Id: 118579
From: Tejasree Kondoj
To: Akhil Goyal
CC: Vidya Sagar Velumuri, Anoob Joseph
Subject: [PATCH 07/13] crypto/cnxk: add support for DES and MD5
Date: Wed, 19 Oct 2022 19:45:07 +0530
Message-ID: <20221019141513.1969052-8-ktejasree@marvell.com>
In-Reply-To: <20221019141513.1969052-1-ktejasree@marvell.com>
References: <20221019141513.1969052-1-ktejasree@marvell.com>
List-Id: DPDK patches and discussions
From: Vidya Sagar Velumuri

Add support for cipher DES and auth MD5 for IPsec offload.

Signed-off-by: Vidya Sagar Velumuri
---
 drivers/crypto/cnxk/cnxk_cryptodev.h          |  2 +-
 .../crypto/cnxk/cnxk_cryptodev_capabilities.c | 42 ++++++++++++++++++-
 drivers/crypto/cnxk/cnxk_ipsec.h              |  9 ++++
 3 files changed, 51 insertions(+), 2 deletions(-)

diff --git a/drivers/crypto/cnxk/cnxk_cryptodev.h b/drivers/crypto/cnxk/cnxk_cryptodev.h
index 588760cfb0..48bd6e144c 100644
--- a/drivers/crypto/cnxk/cnxk_cryptodev.h
+++ b/drivers/crypto/cnxk/cnxk_cryptodev.h
@@ -11,7 +11,7 @@
 #include "roc_cpt.h"
 
 #define CNXK_CPT_MAX_CAPS	 37
-#define CNXK_SEC_CRYPTO_MAX_CAPS 14
+#define CNXK_SEC_CRYPTO_MAX_CAPS 16
 #define CNXK_SEC_MAX_CAPS	 9
 #define CNXK_AE_EC_ID_MAX	 8
 /**

diff --git a/drivers/crypto/cnxk/cnxk_cryptodev_capabilities.c b/drivers/crypto/cnxk/cnxk_cryptodev_capabilities.c
index 6a15154607..6c28f8942e 100644
--- a/drivers/crypto/cnxk/cnxk_cryptodev_capabilities.c
+++ b/drivers/crypto/cnxk/cnxk_cryptodev_capabilities.c
@@ -946,6 +946,26 @@ static const struct rte_cryptodev_capabilities sec_caps_aes[] = {
 };
 
 static const struct rte_cryptodev_capabilities sec_caps_des[] = {
+	{	/* DES */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
+			{.cipher = {
+				.algo = RTE_CRYPTO_CIPHER_DES_CBC,
+				.block_size = 8,
+				.key_size = {
+					.min = 8,
+					.max = 8,
+					.increment = 0
+				},
+				.iv_size = {
+					.min = 8,
+					.max = 8,
+					.increment = 0
+				}
+			}, },
+		}, }
+	},
 	{	/* 3DES CBC */
 		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
 		{.sym = {
@@ -965,7 +985,7 @@ static const struct rte_cryptodev_capabilities sec_caps_des[] = {
 			} },
 		} },
 	}
-	}
+	},
 };
 
 static const struct rte_cryptodev_capabilities sec_caps_sha1_sha2[] = {
@@ -1049,6 +1069,26 @@ static const struct rte_cryptodev_capabilities sec_caps_sha1_sha2[] = {
 			}, }
 		}, }
 	},
+	{	/* MD5 HMAC */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+			{.auth = {
+				.algo = RTE_CRYPTO_AUTH_MD5_HMAC,
+				.block_size = 64,
+				.key_size = {
+					.min = 16,
+					.max = 16,
+					.increment = 0
+				},
+				.digest_size = {
+					.min = 12,
+					.max = 12,
+					.increment = 0
+				},
+			}, }
+		}, }
+	},
 };
 
 static const struct rte_cryptodev_capabilities sec_caps_null[] = {

diff --git a/drivers/crypto/cnxk/cnxk_ipsec.h b/drivers/crypto/cnxk/cnxk_ipsec.h
index 00873ca6ac..0c471b2cfe 100644
--- a/drivers/crypto/cnxk/cnxk_ipsec.h
+++ b/drivers/crypto/cnxk/cnxk_ipsec.h
@@ -23,6 +23,10 @@ ipsec_xform_cipher_verify(struct rte_crypto_sym_xform *crypto_xform)
 	if (crypto_xform->cipher.algo == RTE_CRYPTO_CIPHER_NULL)
 		return 0;
 
+	if (crypto_xform->cipher.algo == RTE_CRYPTO_CIPHER_DES_CBC &&
+	    crypto_xform->cipher.key.length == 8)
+		return 0;
+
 	if (crypto_xform->cipher.algo == RTE_CRYPTO_CIPHER_AES_CBC ||
 	    crypto_xform->cipher.algo == RTE_CRYPTO_CIPHER_AES_CTR) {
 		switch (crypto_xform->cipher.key.length) {
@@ -51,6 +55,11 @@ ipsec_xform_auth_verify(struct rte_crypto_sym_xform *crypto_xform)
 	if (crypto_xform->auth.algo == RTE_CRYPTO_AUTH_NULL)
 		return 0;
 
+	if (crypto_xform->auth.algo == RTE_CRYPTO_AUTH_MD5_HMAC) {
+		if (keylen == 16)
+			return 0;
+	}
+
 	if (crypto_xform->auth.algo == RTE_CRYPTO_AUTH_SHA1_HMAC) {
 		if (keylen >= 20 && keylen <= 64)
 			return 0;
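The key-length rules this patch adds to cnxk_ipsec.h can be sketched as standalone predicates. These are simplified stand-ins for the driver's ipsec_xform_cipher_verify()/ipsec_xform_auth_verify() checks, not the DPDK API itself; they return 0 for a supported key length and -1 otherwise (where the driver returns -ENOTSUP):

```c
#include <assert.h>
#include <stdint.h>

/* DES-CBC (new in this patch): exactly one 8-byte (64-bit) key. */
static int des_cbc_keylen_ok(uint32_t keylen)
{
	return keylen == 8 ? 0 : -1;
}

/* MD5-HMAC (new in this patch): the capability table advertises a
 * fixed 16-byte key with a 12-byte (truncated) digest. */
static int md5_hmac_keylen_ok(uint32_t keylen)
{
	return keylen == 16 ? 0 : -1;
}

/* SHA1-HMAC (pre-existing rule, shown for contrast): a key anywhere
 * from 20 to 64 bytes is accepted. */
static int sha1_hmac_keylen_ok(uint32_t keylen)
{
	return (keylen >= 20 && keylen <= 64) ? 0 : -1;
}
```

Note the difference in shape: the DES and MD5 entries accept a single exact length (increment 0 in the capability table), while SHA1-HMAC accepts a range.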