From patchwork Fri Mar 15 05:42:11 2024
X-Patchwork-Submitter: Vidya Sagar Velumuri
X-Patchwork-Id: 138418
X-Patchwork-Delegate: gakhil@marvell.com
From: Vidya Sagar Velumuri
To: Akhil Goyal
CC: Jerin Jacob, Aakash Sasidharan, Anoob Joseph
Subject: [PATCH v3 6/8] crypto/cnxk: add support for padding verification in TLS
Date: Fri, 15 Mar 2024 11:12:11 +0530
Message-ID: <20240315054213.540-7-vvelumuri@marvell.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20240315054213.540-1-vvelumuri@marvell.com>
References: <20240314131839.3362494-1-vvelumuri@marvell.com>
 <20240315054213.540-1-vvelumuri@marvell.com>
List-Id: DPDK patches and discussions

For TLS-1.2:
- Verify that each padding byte carries the pad length as its value.
- Report an error in case of discrepancies.
- Trim the padding and MAC from TLS-1.2 records.

For TLS-1.3:
- Find the content type as the last non-zero byte in the record.
- Return it as the inner content type.
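The TLS-1.2 rule above (every padding byte, including the final length byte, must carry the pad-length value) can be sketched as a standalone flat-buffer routine. This is an illustrative sketch, not the driver's API: the function name, signature, and return convention are invented for the example.

```c
#include <stdint.h>
#include <stddef.h>

/* Illustrative sketch (not the driver's API): verify TLS-1.2 CBC padding on a
 * contiguous record tail laid out as: payload | MAC | padding.  The last byte
 * holds the pad length; every padding byte (that byte included) must carry the
 * same value.  On success the payload length, i.e. the record minus MAC and
 * padding, is returned via plaintext_len. */
static int
tls12_pad_verify(const uint8_t *rec, size_t rec_len, size_t mac_len,
                 size_t *plaintext_len)
{
    uint8_t pad_val, pad_res = 0;
    size_t pad_len, trim_len, i;

    if (rec_len == 0)
        return -1;

    pad_val = rec[rec_len - 1];
    pad_len = (size_t)pad_val + 1; /* pad bytes include the length byte */
    trim_len = pad_len + mac_len;
    if (trim_len > rec_len)
        return -1;

    /* XOR-accumulate over all padding bytes instead of exiting on the first
     * mismatch, mirroring the scan style used by the patch below. */
    for (i = rec_len - pad_len; i < rec_len; i++)
        pad_res |= rec[i] ^ pad_val;

    if (pad_res)
        return -1; /* some padding byte does not match pad_val */

    *plaintext_len = rec_len - trim_len;
    return 0;
}
```

Where the sketch returns -1, the driver instead marks the operation with RTE_CRYPTO_OP_STATUS_ERROR and records the microcode completion code in aux_flags.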
Signed-off-by: Vidya Sagar Velumuri
---
 drivers/common/cnxk/roc_se.h              |   1 +
 drivers/crypto/cnxk/cn10k_cryptodev_ops.c | 151 +++++++++++++++++++++-
 drivers/crypto/cnxk/cn10k_cryptodev_sec.h |  17 ++-
 drivers/crypto/cnxk/cn10k_tls.c           |  65 +++++++---
 drivers/crypto/cnxk/cn10k_tls_ops.h       |  19 ++-
 5 files changed, 215 insertions(+), 38 deletions(-)

diff --git a/drivers/common/cnxk/roc_se.h b/drivers/common/cnxk/roc_se.h
index ddcf6bdb44..50741a0b81 100644
--- a/drivers/common/cnxk/roc_se.h
+++ b/drivers/common/cnxk/roc_se.h
@@ -169,6 +169,7 @@ typedef enum {
 	ROC_SE_ERR_SSL_CIPHER_UNSUPPORTED = 0x84,
 	ROC_SE_ERR_SSL_MAC_UNSUPPORTED = 0x85,
 	ROC_SE_ERR_SSL_VERSION_UNSUPPORTED = 0x86,
+	ROC_SE_ERR_SSL_POST_PROCESS = 0x88,
 	ROC_SE_ERR_SSL_MAC_MISMATCH = 0x89,
 	ROC_SE_ERR_SSL_PKT_REPLAY_SEQ_OUT_OF_WINDOW = 0xC1,
 	ROC_SE_ERR_SSL_PKT_REPLAY_SEQ = 0xC9,
diff --git a/drivers/crypto/cnxk/cn10k_cryptodev_ops.c b/drivers/crypto/cnxk/cn10k_cryptodev_ops.c
index 8991150c05..720b756001 100644
--- a/drivers/crypto/cnxk/cn10k_cryptodev_ops.c
+++ b/drivers/crypto/cnxk/cn10k_cryptodev_ops.c
@@ -207,7 +207,7 @@ cpt_sec_tls_inst_fill(struct cnxk_cpt_qp *qp, struct rte_crypto_op *op,
 		      struct cn10k_sec_session *sess, struct cpt_inst_s *inst,
 		      struct cpt_inflight_req *infl_req, const bool is_sg_ver2)
 {
-	if (sess->tls.is_write)
+	if (sess->tls_opt.is_write)
 		return process_tls_write(&qp->lf, op, sess, &qp->meta_info, infl_req, inst,
 					 is_sg_ver2);
 	else
@@ -989,20 +989,161 @@ cn10k_cpt_ipsec_post_process(struct rte_crypto_op *cop, struct cpt_cn10k_res_s *
 }
 
 static inline void
-cn10k_cpt_tls_post_process(struct rte_crypto_op *cop, struct cpt_cn10k_res_s *res)
+cn10k_cpt_tls12_trim_mac(struct rte_crypto_op *cop, struct cpt_cn10k_res_s *res, uint8_t mac_len)
 {
+	struct rte_mbuf *mac_prev_seg = NULL, *mac_seg = NULL, *seg;
+	uint32_t pad_len, trim_len, mac_offset, pad_offset;
 	struct rte_mbuf *mbuf = cop->sym->m_src;
-	const uint16_t m_len = res->rlen;
+	uint16_t m_len = res->rlen;
+	uint32_t i, nb_segs = 1;
+	uint8_t pad_res = 0;
+	uint8_t pad_val;
+
+	pad_val = ((res->spi >> 16) & 0xff);
+	pad_len = pad_val + 1;
+	trim_len = pad_len + mac_len;
+	mac_offset = m_len - trim_len;
+	pad_offset = mac_offset + mac_len;
+
+	/* Handle Direct Mode */
+	if (mbuf->next == NULL) {
+		uint8_t *ptr = rte_pktmbuf_mtod_offset(mbuf, uint8_t *, pad_offset);
+
+		for (i = 0; i < pad_len; i++)
+			pad_res |= ptr[i] ^ pad_val;
+
+		if (pad_res) {
+			cop->status = RTE_CRYPTO_OP_STATUS_ERROR;
+			cop->aux_flags = res->uc_compcode;
+		}
+		mbuf->pkt_len = m_len - trim_len;
+		mbuf->data_len = m_len - trim_len;
+
+		return;
+	}
+
+	/* Handle SG mode */
+	seg = mbuf;
+	while (mac_offset >= seg->data_len) {
+		mac_offset -= seg->data_len;
+		mac_prev_seg = seg;
+		seg = seg->next;
+		nb_segs++;
+	}
+	mac_seg = seg;
+
+	pad_offset = mac_offset + mac_len;
+	while (pad_offset >= seg->data_len) {
+		pad_offset -= seg->data_len;
+		seg = seg->next;
+	}
+
+	while (pad_len != 0) {
+		uint8_t *ptr = rte_pktmbuf_mtod_offset(seg, uint8_t *, pad_offset);
+		uint8_t len = RTE_MIN(seg->data_len - pad_offset, pad_len);
+
+		for (i = 0; i < len; i++)
+			pad_res |= ptr[i] ^ pad_val;
+
+		pad_offset = 0;
+		pad_len -= len;
+		seg = seg->next;
+	}
+
+	if (pad_res) {
+		cop->status = RTE_CRYPTO_OP_STATUS_ERROR;
+		cop->aux_flags = res->uc_compcode;
+	}
+
+	mbuf->pkt_len = m_len - trim_len;
+	if (mac_offset) {
+		rte_pktmbuf_free(mac_seg->next);
+		mac_seg->next = NULL;
+		mac_seg->data_len = mac_offset;
+		mbuf->nb_segs = nb_segs;
+	} else {
+		rte_pktmbuf_free(mac_seg);
+		mac_prev_seg->next = NULL;
+		mbuf->nb_segs = nb_segs - 1;
+	}
+}
+
+/* TLS-1.3:
+ * Read from last until a non-zero value is encountered.
+ * Return the non zero value as the content type.
+ * Remove the MAC and content type and padding bytes.
+ */
+static inline void
+cn10k_cpt_tls13_trim_mac(struct rte_crypto_op *cop, struct cpt_cn10k_res_s *res)
+{
+	struct rte_mbuf *mbuf = cop->sym->m_src;
+	struct rte_mbuf *seg = mbuf;
+	uint16_t m_len = res->rlen;
+	uint8_t *ptr, type = 0x0;
+	int len, i, nb_segs = 1;
+
+	while (m_len && !type) {
+		len = m_len;
+		seg = mbuf;
+
+		/* get the last seg */
+		while (len > seg->data_len) {
+			len -= seg->data_len;
+			seg = seg->next;
+			nb_segs++;
+		}
+
+		/* walkthrough from last until a non zero value is found */
+		ptr = rte_pktmbuf_mtod(seg, uint8_t *);
+		i = len;
+		while (i && (ptr[--i] == 0))
+			;
+
+		type = ptr[i];
+		m_len -= len;
+	}
+
+	if (type) {
+		cop->param1.tls_record.content_type = type;
+		mbuf->pkt_len = m_len + i;
+		mbuf->nb_segs = nb_segs;
+		seg->data_len = i;
+		rte_pktmbuf_free(seg->next);
+		seg->next = NULL;
+	} else {
+		cop->status = RTE_CRYPTO_OP_STATUS_ERROR;
+	}
+}
+
+static inline void
+cn10k_cpt_tls_post_process(struct rte_crypto_op *cop, struct cpt_cn10k_res_s *res,
+			   struct cn10k_sec_session *sess)
+{
+	struct cn10k_tls_opt tls_opt = sess->tls_opt;
+	struct rte_mbuf *mbuf = cop->sym->m_src;
+	uint16_t m_len = res->rlen;
 
 	if (!res->uc_compcode) {
 		if (mbuf->next == NULL)
 			mbuf->data_len = m_len;
 		mbuf->pkt_len = m_len;
-	} else {
+		cop->param1.tls_record.content_type = (res->spi >> 24) & 0xff;
+		return;
+	}
+
+	/* Any error other than post process */
+	if (res->uc_compcode != ROC_SE_ERR_SSL_POST_PROCESS) {
 		cop->status = RTE_CRYPTO_OP_STATUS_ERROR;
 		cop->aux_flags = res->uc_compcode;
 		plt_err("crypto op failed with UC compcode: 0x%x", res->uc_compcode);
+		return;
 	}
+
+	/* Extra padding scenario: Verify padding. Remove padding and MAC */
+	if (tls_opt.tls_ver != RTE_SECURITY_VERSION_TLS_1_3)
+		cn10k_cpt_tls12_trim_mac(cop, res, (uint8_t)tls_opt.mac_len);
+	else
+		cn10k_cpt_tls13_trim_mac(cop, res);
 }
 
 static inline void
@@ -1015,7 +1156,7 @@ cn10k_cpt_sec_post_process(struct rte_crypto_op *cop, struct cpt_cn10k_res_s *re
 	if (sess->proto == RTE_SECURITY_PROTOCOL_IPSEC)
 		cn10k_cpt_ipsec_post_process(cop, res);
 	else if (sess->proto == RTE_SECURITY_PROTOCOL_TLS_RECORD)
-		cn10k_cpt_tls_post_process(cop, res);
+		cn10k_cpt_tls_post_process(cop, res, sess);
 }
 
 static inline void
diff --git a/drivers/crypto/cnxk/cn10k_cryptodev_sec.h b/drivers/crypto/cnxk/cn10k_cryptodev_sec.h
index 230c0f7c1c..1637a9a25c 100644
--- a/drivers/crypto/cnxk/cn10k_cryptodev_sec.h
+++ b/drivers/crypto/cnxk/cn10k_cryptodev_sec.h
@@ -16,6 +16,15 @@
 
 #define SEC_SESS_SIZE sizeof(struct rte_security_session)
 
+struct cn10k_tls_opt {
+	uint16_t pad_shift : 3;
+	uint16_t enable_padding : 1;
+	uint16_t tail_fetch_len : 2;
+	uint16_t tls_ver : 2;
+	uint16_t is_write : 1;
+	uint16_t mac_len : 7;
+};
+
 struct cn10k_sec_session {
 	uint8_t rte_sess[SEC_SESS_SIZE];
 
@@ -29,16 +38,12 @@ struct cn10k_sec_session {
 	uint8_t proto;
 	uint8_t iv_length;
 	union {
+		uint16_t u16;
+		struct cn10k_tls_opt tls_opt;
 		struct {
 			uint8_t ip_csum;
 			uint8_t is_outbound : 1;
 		} ipsec;
-		struct {
-			uint8_t enable_padding : 1;
-			uint8_t tail_fetch_len : 2;
-			uint8_t is_write : 1;
-			uint8_t rvsd : 4;
-		} tls;
 	};
 	/** Queue pair */
 	struct cnxk_cpt_qp *qp;
diff --git a/drivers/crypto/cnxk/cn10k_tls.c b/drivers/crypto/cnxk/cn10k_tls.c
index ae3ed3176c..3505a71a6c 100644
--- a/drivers/crypto/cnxk/cn10k_tls.c
+++ b/drivers/crypto/cnxk/cn10k_tls.c
@@ -119,8 +119,14 @@ cnxk_tls_xform_verify(struct rte_security_tls_record_xform *tls_xform,
 	    (tls_xform->type != RTE_SECURITY_TLS_SESS_TYPE_WRITE))
 		return -EINVAL;
 
-	if (crypto_xform->type == RTE_CRYPTO_SYM_XFORM_AEAD)
+	if (crypto_xform->type == RTE_CRYPTO_SYM_XFORM_AEAD) {
+		/* optional padding is not allowed in TLS-1.2 for AEAD */
+		if ((tls_xform->ver == RTE_SECURITY_VERSION_TLS_1_2) &&
+		    (tls_xform->options.extra_padding_enable == 1))
+			return -EINVAL;
+
 		return tls_xform_aead_verify(tls_xform, crypto_xform);
+	}
 
 	/* TLS-1.3 only support AEAD.
 	 * Control should not reach here for TLS-1.3
@@ -321,7 +327,7 @@ tls_read_ctx_size(struct roc_ie_ot_tls_read_sa *sa, enum rte_security_tls_versio
 
 static int
 tls_read_sa_fill(struct roc_ie_ot_tls_read_sa *read_sa, struct rte_security_tls_record_xform *tls_xfrm,
-		 struct rte_crypto_sym_xform *crypto_xfrm)
+		 struct rte_crypto_sym_xform *crypto_xfrm, struct cn10k_tls_opt *tls_opt)
 {
 	enum rte_security_tls_version tls_ver = tls_xfrm->ver;
 	struct rte_crypto_sym_xform *auth_xfrm, *cipher_xfrm;
@@ -405,16 +411,26 @@ tls_read_sa_fill(struct roc_ie_ot_tls_read_sa *read_sa,
 		memcpy(cipher_key, key, length);
 	}
 
-	if (auth_xfrm->auth.algo == RTE_CRYPTO_AUTH_MD5_HMAC)
+	switch (auth_xfrm->auth.algo) {
+	case RTE_CRYPTO_AUTH_MD5_HMAC:
 		read_sa->w2.s.mac_select = ROC_IE_OT_TLS_MAC_MD5;
-	else if (auth_xfrm->auth.algo == RTE_CRYPTO_AUTH_SHA1_HMAC)
+		tls_opt->mac_len = 0;
+		break;
+	case RTE_CRYPTO_AUTH_SHA1_HMAC:
 		read_sa->w2.s.mac_select = ROC_IE_OT_TLS_MAC_SHA1;
-	else if (auth_xfrm->auth.algo == RTE_CRYPTO_AUTH_SHA256_HMAC)
+		tls_opt->mac_len = 20;
+		break;
+	case RTE_CRYPTO_AUTH_SHA256_HMAC:
 		read_sa->w2.s.mac_select = ROC_IE_OT_TLS_MAC_SHA2_256;
-	else if (auth_xfrm->auth.algo == RTE_CRYPTO_AUTH_SHA384_HMAC)
+		tls_opt->mac_len = 32;
+		break;
+	case RTE_CRYPTO_AUTH_SHA384_HMAC:
 		read_sa->w2.s.mac_select = ROC_IE_OT_TLS_MAC_SHA2_384;
-	else
+		tls_opt->mac_len = 48;
+		break;
+	default:
 		return -EINVAL;
+	}
 
 	roc_se_hmac_opad_ipad_gen(read_sa->w2.s.mac_select, auth_xfrm->auth.key.data,
 				  auth_xfrm->auth.key.length, read_sa->tls_12.opad_ipad,
@@ -622,6 +638,7 @@ cn10k_tls_read_sa_create(struct roc_cpt *roc_cpt, struct roc_cpt_lf *lf,
 			 struct cn10k_sec_session *sec_sess)
 {
 	struct roc_ie_ot_tls_read_sa *sa_dptr;
+	uint8_t tls_ver = tls_xfrm->ver;
 	struct cn10k_tls_record *tls;
 	union cpt_inst_w4 inst_w4;
 	void *read_sa;
@@ -638,7 +655,7 @@ cn10k_tls_read_sa_create(struct roc_cpt *roc_cpt, struct roc_cpt_lf *lf,
 	}
 
 	/* Translate security parameters to SA */
-	ret = tls_read_sa_fill(sa_dptr, tls_xfrm, crypto_xfrm);
+	ret = tls_read_sa_fill(sa_dptr, tls_xfrm, crypto_xfrm, &sec_sess->tls_opt);
 	if (ret) {
 		plt_err("Could not fill read session parameters");
 		goto sa_dptr_free;
@@ -658,19 +675,20 @@ cn10k_tls_read_sa_create(struct roc_cpt *roc_cpt, struct roc_cpt_lf *lf,
 
 	/* pre-populate CPT INST word 4 */
 	inst_w4.u64 = 0;
-	if ((sa_dptr->w2.s.version_select == ROC_IE_OT_TLS_VERSION_TLS_12) ||
-	    (sa_dptr->w2.s.version_select == ROC_IE_OT_TLS_VERSION_DTLS_12)) {
+	if ((tls_ver == RTE_SECURITY_VERSION_TLS_1_2) ||
+	    (tls_ver == RTE_SECURITY_VERSION_DTLS_1_2)) {
 		inst_w4.s.opcode_major = ROC_IE_OT_TLS_MAJOR_OP_RECORD_DEC | ROC_IE_OT_INPLACE_BIT;
 
-		sec_sess->tls.tail_fetch_len = 0;
+		sec_sess->tls_opt.tail_fetch_len = 0;
 		if (sa_dptr->w2.s.cipher_select == ROC_IE_OT_TLS_CIPHER_3DES)
-			sec_sess->tls.tail_fetch_len = 1;
+			sec_sess->tls_opt.tail_fetch_len = 1;
 		else if (sa_dptr->w2.s.cipher_select == ROC_IE_OT_TLS_CIPHER_AES_CBC)
-			sec_sess->tls.tail_fetch_len = 2;
-	} else if (sa_dptr->w2.s.version_select == ROC_IE_OT_TLS_VERSION_TLS_13) {
+			sec_sess->tls_opt.tail_fetch_len = 2;
+	} else if (tls_xfrm->ver == RTE_SECURITY_VERSION_TLS_1_3) {
 		inst_w4.s.opcode_major = ROC_IE_OT_TLS13_MAJOR_OP_RECORD_DEC | ROC_IE_OT_INPLACE_BIT;
 	}
 
+	sec_sess->tls_opt.tls_ver = tls_ver;
 	sec_sess->inst.w4 = inst_w4.u64;
 	sec_sess->inst.w7 = cpt_inst_w7_get(roc_cpt, read_sa);
@@ -706,6 +724,7 @@ cn10k_tls_write_sa_create(struct roc_cpt *roc_cpt, struct roc_cpt_lf *lf,
 			  struct cn10k_sec_session *sec_sess)
 {
 	struct roc_ie_ot_tls_write_sa *sa_dptr;
+	uint8_t tls_ver = tls_xfrm->ver;
 	struct cn10k_tls_record *tls;
 	union cpt_inst_w4 inst_w4;
 	void *write_sa;
@@ -739,17 +758,23 @@ cn10k_tls_write_sa_create(struct roc_cpt *roc_cpt, struct roc_cpt_lf *lf,
 		sec_sess->iv_length = crypto_xfrm->next->cipher.iv.length;
 	}
 
-	sec_sess->tls.is_write = 1;
-	sec_sess->tls.enable_padding = tls_xfrm->options.extra_padding_enable;
+	sec_sess->tls_opt.is_write = 1;
+	sec_sess->tls_opt.pad_shift = 0;
+	sec_sess->tls_opt.tls_ver = tls_ver;
+	sec_sess->tls_opt.enable_padding = tls_xfrm->options.extra_padding_enable;
 	sec_sess->max_extended_len = tls_write_rlens_get(tls_xfrm, crypto_xfrm);
 	sec_sess->proto = RTE_SECURITY_PROTOCOL_TLS_RECORD;
 
 	/* pre-populate CPT INST word 4 */
 	inst_w4.u64 = 0;
-	if ((sa_dptr->w2.s.version_select == ROC_IE_OT_TLS_VERSION_TLS_12) ||
-	    (sa_dptr->w2.s.version_select == ROC_IE_OT_TLS_VERSION_DTLS_12)) {
+	if ((tls_ver == RTE_SECURITY_VERSION_TLS_1_2) ||
+	    (tls_ver == RTE_SECURITY_VERSION_DTLS_1_2)) {
 		inst_w4.s.opcode_major = ROC_IE_OT_TLS_MAJOR_OP_RECORD_ENC | ROC_IE_OT_INPLACE_BIT;
-	} else if (sa_dptr->w2.s.version_select == ROC_IE_OT_TLS_VERSION_TLS_13) {
+		if (sa_dptr->w2.s.cipher_select == ROC_IE_OT_TLS_CIPHER_3DES)
+			sec_sess->tls_opt.pad_shift = 3;
+		else
+			sec_sess->tls_opt.pad_shift = 4;
+	} else if (tls_ver == RTE_SECURITY_VERSION_TLS_1_3) {
 		inst_w4.s.opcode_major = ROC_IE_OT_TLS13_MAJOR_OP_RECORD_ENC | ROC_IE_OT_INPLACE_BIT;
 	}
@@ -838,7 +863,7 @@ cn10k_sec_tls_session_destroy(struct cnxk_cpt_qp *qp, struct cn10k_sec_session *
 
 	ret = -1;
 
-	if (sess->tls.is_write) {
+	if (sess->tls_opt.is_write) {
 		sa_dptr = plt_zmalloc(sizeof(struct roc_ie_ot_tls_write_sa), 8);
 		if (sa_dptr != NULL) {
 			tls_write_sa_init(sa_dptr);
diff --git a/drivers/crypto/cnxk/cn10k_tls_ops.h b/drivers/crypto/cnxk/cn10k_tls_ops.h
index 6fd74927ee..64f94a4e8b 100644
--- a/drivers/crypto/cnxk/cn10k_tls_ops.h
+++ b/drivers/crypto/cnxk/cn10k_tls_ops.h
@@ -21,16 +21,21 @@ process_tls_write(struct roc_cpt_lf *lf, struct rte_crypto_op *cop, struct cn10k
 		  struct cpt_qp_meta_info *m_info, struct cpt_inflight_req *infl_req,
 		  struct cpt_inst_s *inst, const bool is_sg_ver2)
 {
+	struct cn10k_tls_opt tls_opt = sess->tls_opt;
 	struct rte_crypto_sym_op *sym_op = cop->sym;
 #ifdef LA_IPSEC_DEBUG
 	struct roc_ie_ot_tls_write_sa *write_sa;
 #endif
 	struct rte_mbuf *m_src = sym_op->m_src;
+	uint32_t pad_len, pad_bytes;
 	struct rte_mbuf *last_seg;
 	union cpt_inst_w4 w4;
 	void *m_data = NULL;
 	uint8_t *in_buffer;
 
+	pad_bytes = (cop->aux_flags * 8) > 0xff ? 0xff : (cop->aux_flags * 8);
+	pad_len = (pad_bytes >> tls_opt.pad_shift) * tls_opt.enable_padding;
+
 #ifdef LA_IPSEC_DEBUG
 	write_sa = &sess->tls_rec.write_sa;
 	if (write_sa->w2.s.iv_at_cptr == ROC_IE_OT_TLS_IV_SRC_FROM_SA) {
@@ -94,7 +99,7 @@ process_tls_write(struct roc_cpt_lf *lf, struct rte_crypto_op *cop, struct cn10k
 		w4.s.dlen = m_src->data_len;
 		w4.s.param2 = cop->param1.tls_record.content_type;
-		w4.s.opcode_minor = sess->tls.enable_padding * cop->aux_flags * 8;
+		w4.s.opcode_minor = pad_len;
 
 		inst->w4.u64 = w4.u64;
 	} else if (is_sg_ver2 == false) {
@@ -148,10 +153,10 @@ process_tls_write(struct roc_cpt_lf *lf, struct rte_crypto_op *cop, struct cn10k
 		w4.s.param1 = rte_pktmbuf_pkt_len(m_src);
 		w4.s.param2 = cop->param1.tls_record.content_type;
 		w4.s.opcode_major |= (uint64_t)ROC_DMA_MODE_SG;
-		w4.s.opcode_minor = sess->tls.enable_padding * cop->aux_flags * 8;
+		w4.s.opcode_minor = pad_len;
 
 		/* Output Scatter List */
-		last_seg->data_len += sess->max_extended_len;
+		last_seg->data_len += sess->max_extended_len + pad_bytes;
 		inst->w4.u64 = w4.u64;
 	} else {
 		struct roc_sg2list_comp *scatter_comp, *gather_comp;
@@ -198,11 +203,11 @@ process_tls_write(struct roc_cpt_lf *lf, struct rte_crypto_op *cop, struct cn10k
 		w4.u64 = sess->inst.w4;
 		w4.s.dlen = rte_pktmbuf_pkt_len(m_src);
 		w4.s.opcode_major &= (~(ROC_IE_OT_INPLACE_BIT));
-		w4.s.opcode_minor = sess->tls.enable_padding * cop->aux_flags * 8;
+		w4.s.opcode_minor = pad_len;
 		w4.s.param1 = w4.s.dlen;
 		w4.s.param2 = cop->param1.tls_record.content_type;
 
 		/* Output Scatter List */
-		last_seg->data_len += sess->max_extended_len;
+		last_seg->data_len += sess->max_extended_len + pad_bytes;
 		inst->w4.u64 = w4.u64;
 	}
 
@@ -234,7 +239,7 @@ process_tls_read(struct rte_crypto_op *cop, struct cn10k_sec_session *sess,
 		inst->w4.u64 = w4.u64;
 	} else if (is_sg_ver2 == false) {
 		struct roc_sglist_comp *scatter_comp, *gather_comp;
-		int tail_len = sess->tls.tail_fetch_len * 16;
+		int tail_len = sess->tls_opt.tail_fetch_len * 16;
 		int pkt_len = rte_pktmbuf_pkt_len(m_src);
 		uint32_t g_size_bytes, s_size_bytes;
 		uint16_t *sg_hdr;
@@ -289,7 +294,7 @@ process_tls_read(struct rte_crypto_op *cop, struct cn10k_sec_session *sess,
 		inst->w4.u64 = w4.u64;
 	} else {
 		struct roc_sg2list_comp *scatter_comp, *gather_comp;
-		int tail_len = sess->tls.tail_fetch_len * 16;
+		int tail_len = sess->tls_opt.tail_fetch_len * 16;
 		int pkt_len = rte_pktmbuf_pkt_len(m_src);
 		union cpt_inst_w5 cpt_inst_w5;
 		union cpt_inst_w6 cpt_inst_w6;
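For reference, the TLS-1.3 trailer handling described in the commit message (scan backwards past any zero padding; the first non-zero byte is the inner content type) can be sketched over a contiguous buffer. The flat-buffer interface and names here are illustrative, not the driver's mbuf-based implementation.

```c
#include <stdint.h>
#include <stddef.h>

/* Illustrative sketch (not the driver's API): locate the TLS-1.3 inner
 * content type in a decrypted record laid out as: payload | type | padding,
 * where the padding is a run of zero bytes.  Returns 0 and sets content_type
 * and plaintext_len on success, or -1 if the record is empty or all zeros. */
static int
tls13_find_content_type(const uint8_t *rec, size_t rec_len,
                        uint8_t *content_type, size_t *plaintext_len)
{
    size_t i = rec_len;

    if (rec_len == 0)
        return -1;

    /* walk backwards until a non-zero value is found */
    while (i && rec[--i] == 0)
        ;

    if (rec[i] == 0)
        return -1; /* all-zero record: no content type present */

    *content_type = rec[i];
    *plaintext_len = i; /* everything before the type byte is payload */
    return 0;
}
```

The driver performs the same backward scan per mbuf segment, then trims the type byte and padding (and frees any now-empty trailing segments) before returning the record to the application.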