From patchwork Tue Jan 2 04:54:08 2024
X-Patchwork-Submitter: Anoob Joseph
X-Patchwork-Id: 135662
X-Patchwork-Delegate: gakhil@marvell.com
From: Anoob Joseph <anoobj@marvell.com>
To: Akhil Goyal <gakhil@marvell.com>
CC: Vidya Sagar Velumuri, Jerin Jacob, Tejasree Kondoj, dev@dpdk.org
Subject: [PATCH v2 15/24] crypto/cnxk: add TLS record session ops
Date: Tue, 2 Jan 2024 10:24:08 +0530
Message-ID: <20240102045417.115-16-anoobj@marvell.com>
In-Reply-To: <20240102045417.115-1-anoobj@marvell.com>
References: <20231221123545.510-1-anoobj@marvell.com>
 <20240102045417.115-1-anoobj@marvell.com>

From: Vidya Sagar Velumuri

Add TLS record session ops for creating and destroying security
sessions. Add support for both read and write sessions.

Signed-off-by: Anoob Joseph <anoobj@marvell.com>
Signed-off-by: Vidya Sagar Velumuri

---
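[Editor's note, illustrative and not part of the patch: applications reach
these session ops through the rte_security lookaside-protocol API. A minimal
sketch of creating a TLS 1.2 AES-128-GCM write session; key_data, salt,
IV_OFFSET, dev_id and sess_mp are application-defined placeholders:

    void *sec_ctx = rte_cryptodev_get_sec_ctx(dev_id);

    struct rte_crypto_sym_xform aead_xfrm = {
        .type = RTE_CRYPTO_SYM_XFORM_AEAD,
        .aead = {
            .op = RTE_CRYPTO_AEAD_OP_ENCRYPT,
            .algo = RTE_CRYPTO_AEAD_AES_GCM,
            .key = { .data = key_data, .length = 16 },
            .iv = { .offset = IV_OFFSET, .length = 8 },
            .digest_length = 16,
        },
    };

    struct rte_security_session_conf conf = {
        .action_type = RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL,
        .protocol = RTE_SECURITY_PROTOCOL_TLS_RECORD,
        .tls_record = {
            .ver = RTE_SECURITY_VERSION_TLS_1_2,
            .type = RTE_SECURITY_TLS_SESS_TYPE_WRITE,
            .tls_1_2 = { .seq_no = 1, .imp_nonce = salt },
        },
        .crypto_xform = &aead_xfrm,
    };

    void *sess = rte_security_session_create(sec_ctx, &conf, sess_mp);
]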
 drivers/crypto/cnxk/cn10k_cryptodev_sec.h |   8 +
 drivers/crypto/cnxk/cn10k_tls.c           | 758 ++++++++++++++++++++++
 drivers/crypto/cnxk/cn10k_tls.h           |  35 +
 drivers/crypto/cnxk/meson.build           |   1 +
 4 files changed, 802 insertions(+)
 create mode 100644 drivers/crypto/cnxk/cn10k_tls.c
 create mode 100644 drivers/crypto/cnxk/cn10k_tls.h

diff --git a/drivers/crypto/cnxk/cn10k_cryptodev_sec.h b/drivers/crypto/cnxk/cn10k_cryptodev_sec.h
index 02fd35eab7..33fd3aa398 100644
--- a/drivers/crypto/cnxk/cn10k_cryptodev_sec.h
+++ b/drivers/crypto/cnxk/cn10k_cryptodev_sec.h
@@ -11,6 +11,7 @@
 #include "roc_cpt.h"
 
 #include "cn10k_ipsec.h"
+#include "cn10k_tls.h"
 
 struct cn10k_sec_session {
         struct rte_security_session rte_sess;
@@ -28,6 +29,12 @@
                         uint8_t ip_csum;
                         bool is_outbound;
                 } ipsec;
+                struct {
+                        uint8_t enable_padding : 1;
+                        uint8_t hdr_len : 4;
+                        uint8_t rvsd : 3;
+                        bool is_write;
+                } tls;
         };
         /** Queue pair */
         struct cnxk_cpt_qp *qp;
@@ -39,6 +46,7 @@
          */
         union {
                 struct cn10k_ipsec_sa sa;
+                struct cn10k_tls_record tls_rec;
         };
 } __rte_aligned(ROC_ALIGN);
 
diff --git a/drivers/crypto/cnxk/cn10k_tls.c b/drivers/crypto/cnxk/cn10k_tls.c
new file mode 100644
index 0000000000..7dd61aa159
--- /dev/null
+++ b/drivers/crypto/cnxk/cn10k_tls.c
@@ -0,0 +1,758 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2023 Marvell.
+ */
+
+#include <rte_crypto_sym.h>
+#include <rte_cryptodev.h>
+#include <rte_security.h>
+
+#include <cryptodev_pmd.h>
+
+#include "roc_cpt.h"
+#include "roc_se.h"
+
+#include "cn10k_cryptodev_sec.h"
+#include "cn10k_tls.h"
+#include "cnxk_cryptodev.h"
+#include "cnxk_cryptodev_ops.h"
+#include "cnxk_security.h"
+
+static int
+tls_xform_cipher_verify(struct rte_crypto_sym_xform *crypto_xform)
+{
+        enum rte_crypto_cipher_algorithm c_algo = crypto_xform->cipher.algo;
+        uint16_t keylen = crypto_xform->cipher.key.length;
+
+        if (((c_algo == RTE_CRYPTO_CIPHER_NULL) && (keylen == 0)) ||
+            ((c_algo == RTE_CRYPTO_CIPHER_3DES_CBC) && (keylen == 24)) ||
+            ((c_algo == RTE_CRYPTO_CIPHER_AES_CBC) && ((keylen == 16) || (keylen == 32))))
+                return 0;
+
+        return -EINVAL;
+}
+
+static int
+tls_xform_auth_verify(struct rte_crypto_sym_xform *crypto_xform)
+{
+        enum rte_crypto_auth_algorithm a_algo = crypto_xform->auth.algo;
+        uint16_t keylen = crypto_xform->auth.key.length;
+
+        if (((a_algo == RTE_CRYPTO_AUTH_MD5_HMAC) && (keylen == 16)) ||
+            ((a_algo == RTE_CRYPTO_AUTH_SHA1_HMAC) && (keylen == 20)) ||
+            ((a_algo == RTE_CRYPTO_AUTH_SHA256_HMAC) && (keylen == 32)))
+                return 0;
+
+        return -EINVAL;
+}
+
+static int
+tls_xform_aead_verify(struct rte_security_tls_record_xform *tls_xform,
+                      struct rte_crypto_sym_xform *crypto_xform)
+{
+        uint16_t keylen = crypto_xform->aead.key.length;
+
+        if (tls_xform->type == RTE_SECURITY_TLS_SESS_TYPE_WRITE &&
+            crypto_xform->aead.op != RTE_CRYPTO_AEAD_OP_ENCRYPT)
+                return -EINVAL;
+
+        if (tls_xform->type == RTE_SECURITY_TLS_SESS_TYPE_READ &&
+            crypto_xform->aead.op != RTE_CRYPTO_AEAD_OP_DECRYPT)
+                return -EINVAL;
+
+        if (crypto_xform->aead.algo == RTE_CRYPTO_AEAD_AES_GCM) {
+                if ((keylen == 16) || (keylen == 32))
+                        return 0;
+        }
+
+        return -EINVAL;
+}
+
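[Editor's note, illustrative and not part of the patch: the three helpers
above pin down the supported suites: NULL/3DES-CBC/AES-CBC(128 or 256)
ciphers, MD5/SHA1/SHA256 HMAC auth, and AES-GCM(128 or 256) AEAD, each with
exact key lengths. A cipher+auth chain that would pass verification for a
write session, with cbc_key, hmac_key and IV_OFFSET as application-defined
placeholders:

    struct rte_crypto_sym_xform cipher_xform = {
        .type = RTE_CRYPTO_SYM_XFORM_CIPHER,
        .cipher = {
            .op = RTE_CRYPTO_CIPHER_OP_ENCRYPT,
            .algo = RTE_CRYPTO_CIPHER_AES_CBC,
            .key = { .data = cbc_key, .length = 16 },
            .iv = { .offset = IV_OFFSET, .length = 16 },
        },
    };

    struct rte_crypto_sym_xform auth_xform = {
        .type = RTE_CRYPTO_SYM_XFORM_AUTH,
        .next = &cipher_xform, /* auth comes first for write sessions */
        .auth = {
            .op = RTE_CRYPTO_AUTH_OP_GENERATE,
            .algo = RTE_CRYPTO_AUTH_SHA1_HMAC,
            .key = { .data = hmac_key, .length = 20 },
            .digest_length = 20,
        },
    };
]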
+static int
+cnxk_tls_xform_verify(struct rte_security_tls_record_xform *tls_xform,
+                      struct rte_crypto_sym_xform *crypto_xform)
+{
+        struct rte_crypto_sym_xform *auth_xform, *cipher_xform = NULL;
+        int ret = 0;
+
+        if ((tls_xform->ver != RTE_SECURITY_VERSION_TLS_1_2) &&
+            (tls_xform->ver != RTE_SECURITY_VERSION_DTLS_1_2))
+                return -EINVAL;
+
+        if ((tls_xform->type != RTE_SECURITY_TLS_SESS_TYPE_READ) &&
+            (tls_xform->type != RTE_SECURITY_TLS_SESS_TYPE_WRITE))
+                return -EINVAL;
+
+        if (crypto_xform->type == RTE_CRYPTO_SYM_XFORM_AEAD)
+                return tls_xform_aead_verify(tls_xform, crypto_xform);
+
+        if (tls_xform->type == RTE_SECURITY_TLS_SESS_TYPE_WRITE) {
+                /* Egress */
+
+                /* First should be for auth in Egress */
+                if (crypto_xform->type != RTE_CRYPTO_SYM_XFORM_AUTH)
+                        return -EINVAL;
+
+                /* Next, if present, should be for cipher in Egress */
+                if ((crypto_xform->next != NULL) &&
+                    (crypto_xform->next->type != RTE_CRYPTO_SYM_XFORM_CIPHER))
+                        return -EINVAL;
+
+                auth_xform = crypto_xform;
+                cipher_xform = crypto_xform->next;
+        } else {
+                /* Ingress */
+
+                /* First can be for auth only when next is NULL in Ingress. */
+                if ((crypto_xform->type == RTE_CRYPTO_SYM_XFORM_AUTH) &&
+                    (crypto_xform->next != NULL))
+                        return -EINVAL;
+                else if ((crypto_xform->type != RTE_CRYPTO_SYM_XFORM_CIPHER) ||
+                         (crypto_xform->next->type != RTE_CRYPTO_SYM_XFORM_AUTH))
+                        return -EINVAL;
+
+                cipher_xform = crypto_xform;
+                auth_xform = crypto_xform->next;
+        }
+
+        if (cipher_xform) {
+                if ((tls_xform->type == RTE_SECURITY_TLS_SESS_TYPE_WRITE) &&
+                    !(cipher_xform->cipher.op == RTE_CRYPTO_CIPHER_OP_ENCRYPT &&
+                      auth_xform->auth.op == RTE_CRYPTO_AUTH_OP_GENERATE))
+                        return -EINVAL;
+
+                if ((tls_xform->type == RTE_SECURITY_TLS_SESS_TYPE_READ) &&
+                    !(cipher_xform->cipher.op == RTE_CRYPTO_CIPHER_OP_DECRYPT &&
+                      auth_xform->auth.op == RTE_CRYPTO_AUTH_OP_VERIFY))
+                        return -EINVAL;
+        } else {
+                if ((tls_xform->type == RTE_SECURITY_TLS_SESS_TYPE_WRITE) &&
+                    (auth_xform->auth.op != RTE_CRYPTO_AUTH_OP_GENERATE))
+                        return -EINVAL;
+
+                if ((tls_xform->type == RTE_SECURITY_TLS_SESS_TYPE_READ) &&
+                    (auth_xform->auth.op != RTE_CRYPTO_AUTH_OP_VERIFY))
+                        return -EINVAL;
+        }
+
+        if (cipher_xform)
+                ret = tls_xform_cipher_verify(cipher_xform);
+
+        if (!ret)
+                return tls_xform_auth_verify(auth_xform);
+
+        return ret;
+}
+
+static int
+tls_write_rlens_get(struct rte_security_tls_record_xform *tls_xfrm,
+                    struct rte_crypto_sym_xform *crypto_xfrm)
+{
+        enum rte_crypto_cipher_algorithm c_algo = RTE_CRYPTO_CIPHER_NULL;
+        enum rte_crypto_auth_algorithm a_algo = RTE_CRYPTO_AUTH_NULL;
+        uint8_t roundup_byte, tls_hdr_len;
+        uint8_t mac_len, iv_len;
+
+        switch (tls_xfrm->ver) {
+        case RTE_SECURITY_VERSION_TLS_1_2:
+        case RTE_SECURITY_VERSION_TLS_1_3:
+                tls_hdr_len = 5;
+                break;
+        case RTE_SECURITY_VERSION_DTLS_1_2:
+                tls_hdr_len = 13;
+                break;
+        default:
+                tls_hdr_len = 0;
+                break;
+        }
+
+        /* Get Cipher and Auth algo */
+        if (crypto_xfrm->type == RTE_CRYPTO_SYM_XFORM_AEAD)
+                return tls_hdr_len + ROC_CPT_AES_GCM_IV_LEN + ROC_CPT_AES_GCM_MAC_LEN;
+
+        if (crypto_xfrm->type == RTE_CRYPTO_SYM_XFORM_CIPHER) {
+                c_algo = crypto_xfrm->cipher.algo;
+                if (crypto_xfrm->next)
+                        a_algo = crypto_xfrm->next->auth.algo;
+        } else {
+                a_algo = crypto_xfrm->auth.algo;
+                if (crypto_xfrm->next)
+                        c_algo = crypto_xfrm->next->cipher.algo;
+        }
+
+        switch (c_algo) {
+        case RTE_CRYPTO_CIPHER_NULL:
+                roundup_byte = 4;
+                iv_len = 0;
+                break;
+        case RTE_CRYPTO_CIPHER_3DES_CBC:
+                roundup_byte = ROC_CPT_DES_BLOCK_LENGTH;
+                iv_len = ROC_CPT_DES_IV_LEN;
+                break;
+        case RTE_CRYPTO_CIPHER_AES_CBC:
+                roundup_byte = ROC_CPT_AES_BLOCK_LENGTH;
+                iv_len = ROC_CPT_AES_CBC_IV_LEN;
+                break;
+        default:
+                roundup_byte = 0;
+                iv_len = 0;
+                break;
+        }
+
+        switch (a_algo) {
+        case RTE_CRYPTO_AUTH_NULL:
+                mac_len = 0;
+                break;
+        case RTE_CRYPTO_AUTH_MD5_HMAC:
+                mac_len = 16;
+                break;
+        case RTE_CRYPTO_AUTH_SHA1_HMAC:
+                mac_len = 20;
+                break;
+        case RTE_CRYPTO_AUTH_SHA256_HMAC:
+                mac_len = 32;
+                break;
+        default:
+                mac_len = 0;
+                break;
+        }
+
+        return tls_hdr_len + iv_len + mac_len + roundup_byte;
+}
+
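[Editor's note, illustrative and not part of the patch: a worked example of
the worst-case record expansion computed above. For a DTLS 1.2 write session
with AES-CBC-128 and SHA1-HMAC, assuming the usual ROC constants
(ROC_CPT_AES_BLOCK_LENGTH = 16, ROC_CPT_AES_CBC_IV_LEN = 16):

    tls_hdr_len + iv_len + mac_len + roundup_byte
        = 13 + 16 + 20 + 16
        = 65 bytes

The write-session path below stores this in max_extended_len so the datapath
can check that the output buffer has room for the grown record.]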
+static void
+tls_write_sa_init(struct roc_ie_ot_tls_write_sa *sa)
+{
+        size_t offset;
+
+        memset(sa, 0, sizeof(struct roc_ie_ot_tls_write_sa));
+
+        offset = offsetof(struct roc_ie_ot_tls_write_sa, w26_rsvd7);
+        sa->w0.s.hw_ctx_off = offset / ROC_CTX_UNIT_8B;
+        sa->w0.s.ctx_push_size = sa->w0.s.hw_ctx_off;
+        sa->w0.s.ctx_size = ROC_IE_OT_TLS_CTX_ILEN;
+        sa->w0.s.ctx_hdr_size = ROC_IE_OT_TLS_CTX_HDR_SIZE;
+        sa->w0.s.aop_valid = 1;
+}
+
+static void
+tls_read_sa_init(struct roc_ie_ot_tls_read_sa *sa)
+{
+        size_t offset;
+
+        memset(sa, 0, sizeof(struct roc_ie_ot_tls_read_sa));
+
+        offset = offsetof(struct roc_ie_ot_tls_read_sa, ctx);
+        sa->w0.s.hw_ctx_off = offset / ROC_CTX_UNIT_8B;
+        sa->w0.s.ctx_push_size = sa->w0.s.hw_ctx_off;
+        sa->w0.s.ctx_size = ROC_IE_OT_TLS_CTX_ILEN;
+        sa->w0.s.ctx_hdr_size = ROC_IE_OT_TLS_CTX_HDR_SIZE;
+        sa->w0.s.aop_valid = 1;
+}
+
+static size_t
+tls_read_ctx_size(struct roc_ie_ot_tls_read_sa *sa)
+{
+        size_t size;
+
+        /* Variable based on Anti-replay Window */
+        size = offsetof(struct roc_ie_ot_tls_read_sa, ctx) +
+               offsetof(struct roc_ie_ot_tls_read_ctx_update_reg, ar_winbits);
+
+        if (sa->w0.s.ar_win)
+                size += (1 << (sa->w0.s.ar_win - 1)) * sizeof(uint64_t);
+
+        return size;
+}
+
+static int
+tls_read_sa_fill(struct roc_ie_ot_tls_read_sa *read_sa,
+                 struct rte_security_tls_record_xform *tls_xfrm,
+                 struct rte_crypto_sym_xform *crypto_xfrm)
+{
+        struct rte_crypto_sym_xform *auth_xfrm, *cipher_xfrm;
+        const uint8_t *key = NULL;
+        uint64_t *tmp, *tmp_key;
+        uint32_t replay_win_sz;
+        uint8_t *cipher_key;
+        int i, length = 0;
+        size_t offset;
+
+        /* Initialize the SA */
+        memset(read_sa, 0, sizeof(struct roc_ie_ot_tls_read_sa));
+
+        cipher_key = read_sa->cipher_key;
+
+        /* Set encryption algorithm */
+        if ((crypto_xfrm->type == RTE_CRYPTO_SYM_XFORM_AEAD) &&
+            (crypto_xfrm->aead.algo == RTE_CRYPTO_AEAD_AES_GCM)) {
+                read_sa->w2.s.cipher_select = ROC_IE_OT_TLS_CIPHER_AES_GCM;
+                read_sa->w2.s.mac_select = ROC_IE_OT_TLS_MAC_SHA2_256;
+
+                length = crypto_xfrm->aead.key.length;
+                if (length == 16)
+                        read_sa->w2.s.aes_key_len = ROC_IE_OT_TLS_AES_KEY_LEN_128;
+                else
+                        read_sa->w2.s.aes_key_len = ROC_IE_OT_TLS_AES_KEY_LEN_256;
+
+                key = crypto_xfrm->aead.key.data;
+                memcpy(cipher_key, key, length);
+
+                if (tls_xfrm->ver == RTE_SECURITY_VERSION_TLS_1_2)
+                        memcpy(((uint8_t *)cipher_key + 32), &tls_xfrm->tls_1_2.imp_nonce, 4);
+                else if (tls_xfrm->ver == RTE_SECURITY_VERSION_DTLS_1_2)
+                        memcpy(((uint8_t *)cipher_key + 32), &tls_xfrm->dtls_1_2.imp_nonce, 4);
+
+                goto key_swap;
+        }
+
+        if (crypto_xfrm->type == RTE_CRYPTO_SYM_XFORM_AUTH) {
+                auth_xfrm = crypto_xfrm;
+                cipher_xfrm = crypto_xfrm->next;
+        } else {
+                cipher_xfrm = crypto_xfrm;
+                auth_xfrm = crypto_xfrm->next;
+        }
+
+        if (cipher_xfrm != NULL) {
+                if (cipher_xfrm->cipher.algo == RTE_CRYPTO_CIPHER_3DES_CBC) {
+                        read_sa->w2.s.cipher_select = ROC_IE_OT_TLS_CIPHER_3DES;
+                        length = cipher_xfrm->cipher.key.length;
+                } else if (cipher_xfrm->cipher.algo == RTE_CRYPTO_CIPHER_AES_CBC) {
+                        read_sa->w2.s.cipher_select = ROC_IE_OT_TLS_CIPHER_AES_CBC;
+                        length = cipher_xfrm->cipher.key.length;
+                        if (length == 16)
+                                read_sa->w2.s.aes_key_len = ROC_IE_OT_TLS_AES_KEY_LEN_128;
+                        else if (length == 32)
+                                read_sa->w2.s.aes_key_len = ROC_IE_OT_TLS_AES_KEY_LEN_256;
+                        else
+                                return -EINVAL;
+                } else {
+                        return -EINVAL;
+                }
+
+                key = cipher_xfrm->cipher.key.data;
+                memcpy(cipher_key, key, length);
+        }
+
+        if (auth_xfrm->auth.algo == RTE_CRYPTO_AUTH_MD5_HMAC)
+                read_sa->w2.s.mac_select = ROC_IE_OT_TLS_MAC_MD5;
+        else if (auth_xfrm->auth.algo == RTE_CRYPTO_AUTH_SHA1_HMAC)
+                read_sa->w2.s.mac_select = ROC_IE_OT_TLS_MAC_SHA1;
+        else if (auth_xfrm->auth.algo == RTE_CRYPTO_AUTH_SHA256_HMAC)
+                read_sa->w2.s.mac_select = ROC_IE_OT_TLS_MAC_SHA2_256;
+        else
+                return -EINVAL;
+
+        cnxk_sec_opad_ipad_gen(auth_xfrm, read_sa->opad_ipad, true);
+        tmp = (uint64_t *)read_sa->opad_ipad;
+        for (i = 0; i < (int)(ROC_CTX_MAX_OPAD_IPAD_LEN / sizeof(uint64_t)); i++)
+                tmp[i] = rte_be_to_cpu_64(tmp[i]);
+
+key_swap:
+        tmp_key = (uint64_t *)cipher_key;
+        for (i = 0; i < (int)(ROC_IE_OT_TLS_CTX_MAX_KEY_IV_LEN / sizeof(uint64_t)); i++)
+                tmp_key[i] = rte_be_to_cpu_64(tmp_key[i]);
+
+        if (tls_xfrm->ver == RTE_SECURITY_VERSION_DTLS_1_2) {
+                /* Only power-of-two window sizes are supported */
+                replay_win_sz = tls_xfrm->dtls_1_2.ar_win_sz;
+                if (replay_win_sz) {
+                        if (!rte_is_power_of_2(replay_win_sz) ||
+                            replay_win_sz > ROC_IE_OT_TLS_AR_WIN_SIZE_MAX)
+                                return -ENOTSUP;
+
+                        read_sa->w0.s.ar_win = rte_log2_u32(replay_win_sz) - 5;
+                }
+        }
+
+        read_sa->w0.s.ctx_hdr_size = ROC_IE_OT_TLS_CTX_HDR_SIZE;
+        read_sa->w0.s.aop_valid = 1;
+
+        offset = offsetof(struct roc_ie_ot_tls_read_sa, ctx);
+
+        /* Word offset for HW managed CTX field */
+        read_sa->w0.s.hw_ctx_off = offset / 8;
+        read_sa->w0.s.ctx_push_size = read_sa->w0.s.hw_ctx_off;
+
+        /* Entire context size in 128B units */
+        read_sa->w0.s.ctx_size = (PLT_ALIGN_CEIL(tls_read_ctx_size(read_sa), ROC_CTX_UNIT_128B) /
+                                  ROC_CTX_UNIT_128B) -
+                                 1;
+
+        if (tls_xfrm->ver == RTE_SECURITY_VERSION_TLS_1_2) {
+                read_sa->w2.s.version_select = ROC_IE_OT_TLS_VERSION_TLS_12;
+                read_sa->ctx.ar_valid_mask = tls_xfrm->tls_1_2.seq_no - 1;
+        } else if (tls_xfrm->ver == RTE_SECURITY_VERSION_DTLS_1_2) {
+                read_sa->w2.s.version_select = ROC_IE_OT_TLS_VERSION_DTLS_12;
+        }
+
+        rte_wmb();
+
+        return 0;
+}
+
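[Editor's note, illustrative and not part of the patch: the anti-replay field
stores log2(window size) - 5. For example, dtls_1_2.ar_win_sz = 1024 gives
ar_win = rte_log2_u32(1024) - 5 = 5, and tls_read_ctx_size() above then
reserves (1 << (5 - 1)) * sizeof(uint64_t) = 128 bytes of ar_winbits, i.e.
exactly 1024 window bits.]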
+static int
+tls_write_sa_fill(struct roc_ie_ot_tls_write_sa *write_sa,
+                  struct rte_security_tls_record_xform *tls_xfrm,
+                  struct rte_crypto_sym_xform *crypto_xfrm)
+{
+        struct rte_crypto_sym_xform *auth_xfrm, *cipher_xfrm;
+        const uint8_t *key = NULL;
+        uint8_t *cipher_key;
+        uint64_t *tmp_key;
+        int i, length = 0;
+        size_t offset;
+
+        cipher_key = write_sa->cipher_key;
+
+        /* Set encryption algorithm */
+        if ((crypto_xfrm->type == RTE_CRYPTO_SYM_XFORM_AEAD) &&
+            (crypto_xfrm->aead.algo == RTE_CRYPTO_AEAD_AES_GCM)) {
+                write_sa->w2.s.cipher_select = ROC_IE_OT_TLS_CIPHER_AES_GCM;
+                write_sa->w2.s.mac_select = ROC_IE_OT_TLS_MAC_SHA2_256;
+
+                length = crypto_xfrm->aead.key.length;
+                if (length == 16)
+                        write_sa->w2.s.aes_key_len = ROC_IE_OT_TLS_AES_KEY_LEN_128;
+                else
+                        write_sa->w2.s.aes_key_len = ROC_IE_OT_TLS_AES_KEY_LEN_256;
+
+                key = crypto_xfrm->aead.key.data;
+                memcpy(cipher_key, key, length);
+
+                if (tls_xfrm->ver == RTE_SECURITY_VERSION_TLS_1_2)
+                        memcpy(((uint8_t *)cipher_key + 32), &tls_xfrm->tls_1_2.imp_nonce, 4);
+                else if (tls_xfrm->ver == RTE_SECURITY_VERSION_DTLS_1_2)
+                        memcpy(((uint8_t *)cipher_key + 32), &tls_xfrm->dtls_1_2.imp_nonce, 4);
+
+                goto key_swap;
+        }
+
+        if (crypto_xfrm->type == RTE_CRYPTO_SYM_XFORM_AUTH) {
+                auth_xfrm = crypto_xfrm;
+                cipher_xfrm = crypto_xfrm->next;
+        } else {
+                cipher_xfrm = crypto_xfrm;
+                auth_xfrm = crypto_xfrm->next;
+        }
+
+        if (cipher_xfrm != NULL) {
+                if (cipher_xfrm->cipher.algo == RTE_CRYPTO_CIPHER_3DES_CBC) {
+                        write_sa->w2.s.cipher_select = ROC_IE_OT_TLS_CIPHER_3DES;
+                        length = cipher_xfrm->cipher.key.length;
+                } else if (cipher_xfrm->cipher.algo == RTE_CRYPTO_CIPHER_AES_CBC) {
+                        write_sa->w2.s.cipher_select = ROC_IE_OT_TLS_CIPHER_AES_CBC;
+                        length = cipher_xfrm->cipher.key.length;
+                        if (length == 16)
+                                write_sa->w2.s.aes_key_len = ROC_IE_OT_TLS_AES_KEY_LEN_128;
+                        else if (length == 32)
+                                write_sa->w2.s.aes_key_len = ROC_IE_OT_TLS_AES_KEY_LEN_256;
+                        else
+                                return -EINVAL;
+                } else {
+                        return -EINVAL;
+                }
+
+                key = cipher_xfrm->cipher.key.data;
+                if (key != NULL && length != 0) {
+                        /* Copy encryption key */
+                        memcpy(cipher_key, key, length);
+                }
+        }
+
+        if (auth_xfrm != NULL) {
+                if (auth_xfrm->auth.algo == RTE_CRYPTO_AUTH_MD5_HMAC)
+                        write_sa->w2.s.mac_select = ROC_IE_OT_TLS_MAC_MD5;
+                else if (auth_xfrm->auth.algo == RTE_CRYPTO_AUTH_SHA1_HMAC)
+                        write_sa->w2.s.mac_select = ROC_IE_OT_TLS_MAC_SHA1;
+                else if (auth_xfrm->auth.algo == RTE_CRYPTO_AUTH_SHA256_HMAC)
+                        write_sa->w2.s.mac_select = ROC_IE_OT_TLS_MAC_SHA2_256;
+                else
+                        return -EINVAL;
+
+                cnxk_sec_opad_ipad_gen(auth_xfrm, write_sa->opad_ipad, true);
+        }
+
+        tmp_key = (uint64_t *)write_sa->opad_ipad;
+        for (i = 0; i < (int)(ROC_CTX_MAX_OPAD_IPAD_LEN / sizeof(uint64_t)); i++)
+                tmp_key[i] = rte_be_to_cpu_64(tmp_key[i]);
+
+key_swap:
+        tmp_key = (uint64_t *)cipher_key;
+        for (i = 0; i < (int)(ROC_IE_OT_TLS_CTX_MAX_KEY_IV_LEN / sizeof(uint64_t)); i++)
+                tmp_key[i] = rte_be_to_cpu_64(tmp_key[i]);
+
+        write_sa->w0.s.ctx_hdr_size = ROC_IE_OT_TLS_CTX_HDR_SIZE;
+        offset = offsetof(struct roc_ie_ot_tls_write_sa, w26_rsvd7);
+
+        /* Word offset for HW managed CTX field */
+        write_sa->w0.s.hw_ctx_off = offset / 8;
+        write_sa->w0.s.ctx_push_size = write_sa->w0.s.hw_ctx_off;
+
+        /* Entire context size in 128B units */
+        write_sa->w0.s.ctx_size =
+                (PLT_ALIGN_CEIL(sizeof(struct roc_ie_ot_tls_write_sa), ROC_CTX_UNIT_128B) /
+                 ROC_CTX_UNIT_128B) -
+                1;
+        write_sa->w0.s.aop_valid = 1;
+
+        if (tls_xfrm->ver == RTE_SECURITY_VERSION_TLS_1_2) {
+                write_sa->w2.s.version_select = ROC_IE_OT_TLS_VERSION_TLS_12;
+                write_sa->seq_num = tls_xfrm->tls_1_2.seq_no - 1;
+        } else if (tls_xfrm->ver == RTE_SECURITY_VERSION_DTLS_1_2) {
+                write_sa->w2.s.version_select = ROC_IE_OT_TLS_VERSION_DTLS_12;
+                write_sa->seq_num = ((uint64_t)tls_xfrm->dtls_1_2.epoch << 48) |
+                                    (tls_xfrm->dtls_1_2.seq_no & 0x0000ffffffffffff);
+                write_sa->seq_num -= 1;
+        }
+
+        write_sa->w2.s.iv_at_cptr = ROC_IE_OT_TLS_IV_SRC_DEFAULT;
+
+#ifdef LA_IPSEC_DEBUG
+        if (tls_xfrm->options.iv_gen_disable == 1)
+                write_sa->w2.s.iv_at_cptr = ROC_IE_OT_TLS_IV_SRC_FROM_SA;
+#else
+        if (tls_xfrm->options.iv_gen_disable) {
+                plt_err("Application provided IV is not supported");
+                return -ENOTSUP;
+        }
+#endif
+
+        rte_wmb();
+
+        return 0;
+}
+
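[Editor's note, illustrative and not part of the patch: for DTLS 1.2 the
64-bit SA sequence number packs the 16-bit epoch above the 48-bit record
sequence, minus one because hardware increments before use. For example, with
epoch = 1 and seq_no = 0x10:

    seq_num = ((uint64_t)1 << 48) | 0x10; /* 0x0001000000000010 */
    seq_num -= 1;                         /* 0x000100000000000f */
]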
+static int
+cn10k_tls_read_sa_create(struct roc_cpt *roc_cpt, struct roc_cpt_lf *lf,
+                         struct rte_security_tls_record_xform *tls_xfrm,
+                         struct rte_crypto_sym_xform *crypto_xfrm,
+                         struct cn10k_sec_session *sec_sess)
+{
+        struct roc_ie_ot_tls_read_sa *sa_dptr;
+        struct cn10k_tls_record *tls;
+        union cpt_inst_w4 inst_w4;
+        void *read_sa;
+        int ret = 0;
+
+        tls = &sec_sess->tls_rec;
+        read_sa = &tls->read_sa;
+
+        /* Allocate memory to be used as dptr for CPT ucode WRITE_SA op */
+        sa_dptr = plt_zmalloc(sizeof(struct roc_ie_ot_tls_read_sa), 8);
+        if (sa_dptr == NULL) {
+                plt_err("Could not allocate memory for SA DPTR");
+                return -ENOMEM;
+        }
+
+        /* Translate security parameters to SA */
+        ret = tls_read_sa_fill(sa_dptr, tls_xfrm, crypto_xfrm);
+        if (ret) {
+                plt_err("Could not fill read session parameters");
+                goto sa_dptr_free;
+        }
+        if (crypto_xfrm->type == RTE_CRYPTO_SYM_XFORM_AEAD) {
+                sec_sess->iv_offset = crypto_xfrm->aead.iv.offset;
+                sec_sess->iv_length = crypto_xfrm->aead.iv.length;
+        } else if (crypto_xfrm->type == RTE_CRYPTO_SYM_XFORM_CIPHER) {
+                sec_sess->iv_offset = crypto_xfrm->cipher.iv.offset;
+                sec_sess->iv_length = crypto_xfrm->cipher.iv.length;
+        } else {
+                sec_sess->iv_offset = crypto_xfrm->auth.iv.offset;
+                sec_sess->iv_length = crypto_xfrm->auth.iv.length;
+        }
+
+        if (sa_dptr->w2.s.version_select == ROC_IE_OT_TLS_VERSION_DTLS_12)
+                sec_sess->tls.hdr_len = 13;
+        else if (sa_dptr->w2.s.version_select == ROC_IE_OT_TLS_VERSION_TLS_12)
+                sec_sess->tls.hdr_len = 5;
+
+        sec_sess->proto = RTE_SECURITY_PROTOCOL_TLS_RECORD;
+
+        /* Enable mib counters */
+        sa_dptr->w0.s.count_mib_bytes = 1;
+        sa_dptr->w0.s.count_mib_pkts = 1;
+
+        /* pre-populate CPT INST word 4 */
+        inst_w4.u64 = 0;
+        inst_w4.s.opcode_major = ROC_IE_OT_TLS_MAJOR_OP_RECORD_DEC | ROC_IE_OT_INPLACE_BIT;
+
+        sec_sess->inst.w4 = inst_w4.u64;
+        sec_sess->inst.w7 = cpt_inst_w7_get(roc_cpt, read_sa);
+
+        memset(read_sa, 0, sizeof(struct roc_ie_ot_tls_read_sa));
+
+        /* Copy word0 from sa_dptr to populate ctx_push_sz ctx_size fields */
+        memcpy(read_sa, sa_dptr, 8);
+
+        rte_atomic_thread_fence(rte_memory_order_seq_cst);
+
+        /* Write session using microcode opcode */
+        ret = roc_cpt_ctx_write(lf, sa_dptr, read_sa, sizeof(struct roc_ie_ot_tls_read_sa));
+        if (ret) {
+                plt_err("Could not write read session to hardware");
+                goto sa_dptr_free;
+        }
+
+        /* Trigger CTX flush so that data is written back to DRAM */
+        roc_cpt_lf_ctx_flush(lf, read_sa, true);
+
+        rte_atomic_thread_fence(rte_memory_order_seq_cst);
+
+sa_dptr_free:
+        plt_free(sa_dptr);
+
+        return ret;
+}
+
+static int
+cn10k_tls_write_sa_create(struct roc_cpt *roc_cpt, struct roc_cpt_lf *lf,
+                          struct rte_security_tls_record_xform *tls_xfrm,
+                          struct rte_crypto_sym_xform *crypto_xfrm,
+                          struct cn10k_sec_session *sec_sess)
+{
+        struct roc_ie_ot_tls_write_sa *sa_dptr;
+        struct cn10k_tls_record *tls;
+        union cpt_inst_w4 inst_w4;
+        void *write_sa;
+        int ret = 0;
+
+        tls = &sec_sess->tls_rec;
+        write_sa = &tls->write_sa;
+
+        /* Allocate memory to be used as dptr for CPT ucode WRITE_SA op */
+        sa_dptr = plt_zmalloc(sizeof(struct roc_ie_ot_tls_write_sa), 8);
+        if (sa_dptr == NULL) {
+                plt_err("Could not allocate memory for SA DPTR");
+                return -ENOMEM;
+        }
+
+        /* Translate security parameters to SA */
+        ret = tls_write_sa_fill(sa_dptr, tls_xfrm, crypto_xfrm);
+        if (ret) {
+                plt_err("Could not fill write session parameters");
+                goto sa_dptr_free;
+        }
+
+        if (crypto_xfrm->type == RTE_CRYPTO_SYM_XFORM_AEAD) {
+                sec_sess->iv_offset = crypto_xfrm->aead.iv.offset;
+                sec_sess->iv_length = crypto_xfrm->aead.iv.length;
+        } else if (crypto_xfrm->type == RTE_CRYPTO_SYM_XFORM_CIPHER) {
+                sec_sess->iv_offset = crypto_xfrm->cipher.iv.offset;
+                sec_sess->iv_length = crypto_xfrm->cipher.iv.length;
+        } else {
+                sec_sess->iv_offset = crypto_xfrm->next->cipher.iv.offset;
+                sec_sess->iv_length = crypto_xfrm->next->cipher.iv.length;
+        }
+
+        sec_sess->tls.is_write = true;
+        sec_sess->tls.enable_padding = tls_xfrm->options.extra_padding_enable;
+        sec_sess->max_extended_len = tls_write_rlens_get(tls_xfrm, crypto_xfrm);
+        sec_sess->proto = RTE_SECURITY_PROTOCOL_TLS_RECORD;
+
+        /* pre-populate CPT INST word 4 */
+        inst_w4.u64 = 0;
+        inst_w4.s.opcode_major = ROC_IE_OT_TLS_MAJOR_OP_RECORD_ENC | ROC_IE_OT_INPLACE_BIT;
+
+        sec_sess->inst.w4 = inst_w4.u64;
+        sec_sess->inst.w7 = cpt_inst_w7_get(roc_cpt, write_sa);
+
+        memset(write_sa, 0, sizeof(struct roc_ie_ot_tls_write_sa));
+
+        /* Copy word0 from sa_dptr to populate ctx_push_sz ctx_size fields */
+        memcpy(write_sa, sa_dptr, 8);
+
+        rte_atomic_thread_fence(rte_memory_order_seq_cst);
+
+        /* Write session using microcode opcode */
+        ret = roc_cpt_ctx_write(lf, sa_dptr, write_sa, sizeof(struct roc_ie_ot_tls_write_sa));
+        if (ret) {
+                plt_err("Could not write tls write session to hardware");
+                goto sa_dptr_free;
+        }
+
+        /* Trigger CTX flush so that data is written back to DRAM */
+        roc_cpt_lf_ctx_flush(lf, write_sa, false);
+
+        rte_atomic_thread_fence(rte_memory_order_seq_cst);
+
+sa_dptr_free:
+        plt_free(sa_dptr);
+
+        return ret;
+}
+
+int
+cn10k_tls_record_session_create(struct cnxk_cpt_vf *vf, struct cnxk_cpt_qp *qp,
+                                struct rte_security_tls_record_xform *tls_xfrm,
+                                struct rte_crypto_sym_xform *crypto_xfrm,
+                                struct rte_security_session *sess)
+{
+        struct roc_cpt *roc_cpt;
+        int ret;
+
+        ret = cnxk_tls_xform_verify(tls_xfrm, crypto_xfrm);
+        if (ret)
+                return ret;
+
+        roc_cpt = &vf->cpt;
+
+        if (tls_xfrm->type == RTE_SECURITY_TLS_SESS_TYPE_READ)
+                return cn10k_tls_read_sa_create(roc_cpt, &qp->lf, tls_xfrm, crypto_xfrm,
+                                                (struct cn10k_sec_session *)sess);
+        else
+                return cn10k_tls_write_sa_create(roc_cpt, &qp->lf, tls_xfrm, crypto_xfrm,
+                                                 (struct cn10k_sec_session *)sess);
+}
+
+int
+cn10k_sec_tls_session_destroy(struct cnxk_cpt_qp *qp, struct cn10k_sec_session *sess)
+{
+        struct cn10k_tls_record *tls;
+        struct roc_cpt_lf *lf;
+        void *sa_dptr = NULL;
+        int ret;
+
+        lf = &qp->lf;
+
+        tls = &sess->tls_rec;
+
+        /* Trigger CTX flush to write dirty data back to DRAM */
+        roc_cpt_lf_ctx_flush(lf, &tls->read_sa, false);
+
+        ret = -1;
+
+        if (sess->tls.is_write) {
+                sa_dptr = plt_zmalloc(sizeof(struct roc_ie_ot_tls_write_sa), 8);
+                if (sa_dptr != NULL) {
+                        tls_write_sa_init(sa_dptr);
+
+                        ret = roc_cpt_ctx_write(lf, sa_dptr, &tls->write_sa,
+                                                sizeof(struct roc_ie_ot_tls_write_sa));
+                }
+        } else {
+                sa_dptr = plt_zmalloc(sizeof(struct roc_ie_ot_tls_read_sa), 8);
+                if (sa_dptr != NULL) {
+                        tls_read_sa_init(sa_dptr);
+
+                        ret = roc_cpt_ctx_write(lf, sa_dptr, &tls->read_sa,
+                                                sizeof(struct roc_ie_ot_tls_read_sa));
+                }
+        }
+
+        plt_free(sa_dptr);
+
+        if (ret) {
+                /* MC write_ctx failed. Attempt reload of CTX */
+
+                /* Wait for 1 ms so that flush is complete */
+                rte_delay_ms(1);
+
+                rte_atomic_thread_fence(rte_memory_order_seq_cst);
+
+                /* Trigger CTX reload to fetch new data from DRAM */
+                roc_cpt_lf_ctx_reload(lf, &tls->read_sa);
+        }
+
+        return 0;
+}
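[Editor's note, illustrative and not part of the patch: teardown is reached
through the standard rte_security call, assuming sec_ctx and sess were
obtained as in the creation sketch above:

    rte_security_session_destroy(sec_ctx, sess);

which is expected to land in cn10k_sec_tls_session_destroy() once the
security ops are wired up, scrubbing the hardware SA and reloading a clean
context.]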
diff --git a/drivers/crypto/cnxk/cn10k_tls.h b/drivers/crypto/cnxk/cn10k_tls.h
new file mode 100644
index 0000000000..c477d51169
--- /dev/null
+++ b/drivers/crypto/cnxk/cn10k_tls.h
@@ -0,0 +1,35 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2023 Marvell.
+ */
+
+#ifndef __CN10K_TLS_H__
+#define __CN10K_TLS_H__
+
+#include <rte_crypto_sym.h>
+#include <rte_security.h>
+
+#include "roc_ie_ot_tls.h"
+
+#include "cnxk_cryptodev.h"
+#include "cnxk_cryptodev_ops.h"
+
+/* Forward declaration */
+struct cn10k_sec_session;
+
+struct cn10k_tls_record {
+        union {
+                /** Read SA */
+                struct roc_ie_ot_tls_read_sa read_sa;
+                /** Write SA */
+                struct roc_ie_ot_tls_write_sa write_sa;
+        };
+} __rte_aligned(ROC_ALIGN);
+
+int cn10k_tls_record_session_create(struct cnxk_cpt_vf *vf, struct cnxk_cpt_qp *qp,
+                                    struct rte_security_tls_record_xform *tls_xfrm,
+                                    struct rte_crypto_sym_xform *crypto_xfrm,
+                                    struct rte_security_session *sess);
+
+int cn10k_sec_tls_session_destroy(struct cnxk_cpt_qp *qp, struct cn10k_sec_session *sess);
+
+#endif /* __CN10K_TLS_H__ */
diff --git a/drivers/crypto/cnxk/meson.build b/drivers/crypto/cnxk/meson.build
index d6fafd43d9..ee0c65e32a 100644
--- a/drivers/crypto/cnxk/meson.build
+++ b/drivers/crypto/cnxk/meson.build
@@ -16,6 +16,7 @@ sources = files(
         'cn10k_cryptodev_ops.c',
         'cn10k_cryptodev_sec.c',
         'cn10k_ipsec.c',
+        'cn10k_tls.c',
         'cnxk_cryptodev.c',
         'cnxk_cryptodev_capabilities.c',
         'cnxk_cryptodev_devargs.c',