From patchwork Fri Apr 14 17:44:51 2023
X-Patchwork-Submitter: Anoob Joseph
X-Patchwork-Id: 126101
X-Patchwork-Delegate: gakhil@marvell.com
From: Anoob Joseph
To: Thomas Monjalon, Akhil Goyal, Jerin Jacob, Konstantin Ananyev, Bernard Iremonger
CC: Volodymyr Fialko, Hemant Agrawal, Mattias Rönnblom, Kiran Kumar K, Olivier Matz
Subject: [PATCH v2 01/22] net: add PDCP header
Date: Fri, 14 Apr 2023 23:14:51 +0530
Message-ID: <20230414174512.642-2-anoobj@marvell.com>
In-Reply-To: <20230414174512.642-1-anoobj@marvell.com>
References: <20221222092522.1628-1-anoobj@marvell.com> <20230414174512.642-1-anoobj@marvell.com>

From: Volodymyr Fialko

Add PDCP protocol header to be used for supporting PDCP protocol processing.
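For illustration only (not part of this patch), a minimal sketch of how an application could read back the 12-bit sequence number from a user plane data PDU, assuming the header structures added below; the helper name is hypothetical:

    #include <stdint.h>
    #include <rte_pdcp_hdr.h>

    /* Hypothetical helper: recover the 12-bit PDCP SN from a UP data PDU header. */
    static inline uint16_t
    pdcp_up_sn_12_get(const struct rte_pdcp_up_data_pdu_sn_12_hdr *hdr)
    {
            return ((uint16_t)hdr->sn_11_8 << 8) | hdr->sn_7_0;
    }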
Signed-off-by: Anoob Joseph Signed-off-by: Kiran Kumar K Signed-off-by: Volodymyr Fialko Acked-by: Akhil Goyal --- doc/api/doxy-api-index.md | 3 +- lib/net/meson.build | 1 + lib/net/rte_pdcp_hdr.h | 140 ++++++++++++++++++++++++++++++++++++++ 3 files changed, 143 insertions(+), 1 deletion(-) create mode 100644 lib/net/rte_pdcp_hdr.h diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md index c709fd48ad..debbe4134f 100644 --- a/doc/api/doxy-api-index.md +++ b/doc/api/doxy-api-index.md @@ -127,7 +127,8 @@ The public API headers are grouped by topics: [Geneve](@ref rte_geneve.h), [eCPRI](@ref rte_ecpri.h), [L2TPv2](@ref rte_l2tpv2.h), - [PPP](@ref rte_ppp.h) + [PPP](@ref rte_ppp.h), + [PDCP hdr](@ref rte_pdcp_hdr.h) - **QoS**: [metering](@ref rte_meter.h), diff --git a/lib/net/meson.build b/lib/net/meson.build index 379d161ee0..bd56f91c22 100644 --- a/lib/net/meson.build +++ b/lib/net/meson.build @@ -22,6 +22,7 @@ headers = files( 'rte_geneve.h', 'rte_l2tpv2.h', 'rte_ppp.h', + 'rte_pdcp_hdr.h', ) sources = files( diff --git a/lib/net/rte_pdcp_hdr.h b/lib/net/rte_pdcp_hdr.h new file mode 100644 index 0000000000..87ecd379c4 --- /dev/null +++ b/lib/net/rte_pdcp_hdr.h @@ -0,0 +1,140 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2023 Marvell. + */ + +#ifndef RTE_PDCP_HDR_H +#define RTE_PDCP_HDR_H + +/** + * @file + * + * PDCP-related defines + * + * Based on - ETSI TS 138 323 V17.1.0 (2022-08) + * https://www.etsi.org/deliver/etsi_ts/138300_138399/138323/17.01.00_60/ts_138323v170100p.pdf + */ + +#include + +#ifdef __cplusplus +extern "C" { +#endif + +/** + * 4.3.1 + * + * Indicate the maximum supported size of a PDCP Control PDU. + */ +#define RTE_PDCP_CTRL_PDU_SIZE_MAX 9000u + +/** + * Indicate type of control information included in the corresponding PDCP + * Control PDU. + */ +enum rte_pdcp_ctrl_pdu_type { + RTE_PDCP_CTRL_PDU_TYPE_STATUS_REPORT = 0, + RTE_PDCP_CTRL_PDU_TYPE_ROHC_FEEDBACK = 1, + RTE_PDCP_CTRL_PDU_TYPE_EHC_FEEDBACK = 2, + RTE_PDCP_CRTL_PDU_TYPE_UDC_FEEDBACK = 3, +}; + +/** + * 6.3.7 D/C + * + * This field indicates whether the corresponding PDCP PDU is a + * PDCP Data PDU or a PDCP Control PDU. 
+ */ +enum rte_pdcp_pdu_type { + RTE_PDCP_PDU_TYPE_CTRL = 0, + RTE_PDCP_PDU_TYPE_DATA = 1, +}; + +/** + * 6.2.2.1 Data PDU for SRBs + */ +__extension__ +struct rte_pdcp_cp_data_pdu_sn_12_hdr { +#if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN + uint8_t sn_11_8 : 4; /**< Sequence number bits 8-11 */ + uint8_t r : 4; /**< Reserved */ +#elif RTE_BYTE_ORDER == RTE_BIG_ENDIAN + uint8_t r : 4; /**< Reserved */ + uint8_t sn_11_8 : 4; /**< Sequence number bits 8-11 */ +#endif + uint8_t sn_7_0; /**< Sequence number bits 0-7 */ +}; + +/** + * 6.2.2.2 Data PDU for DRBs and MRBs with 12 bits PDCP SN + */ +__extension__ +struct rte_pdcp_up_data_pdu_sn_12_hdr { +#if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN + uint8_t sn_11_8 : 4; /**< Sequence number bits 8-11 */ + uint8_t r : 3; /**< Reserved */ + uint8_t d_c : 1; /**< D/C bit */ +#elif RTE_BYTE_ORDER == RTE_BIG_ENDIAN + uint8_t d_c : 1; /**< D/C bit */ + uint8_t r : 3; /**< Reserved */ + uint8_t sn_11_8 : 4; /**< Sequence number bits 8-11 */ +#endif + uint8_t sn_7_0; /**< Sequence number bits 0-7 */ +}; + +/** + * 6.2.2.3 Data PDU for DRBs and MRBs with 18 bits PDCP SN + */ +__extension__ +struct rte_pdcp_up_data_pdu_sn_18_hdr { +#if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN + uint8_t sn_17_16 : 2; /**< Sequence number bits 16-17 */ + uint8_t r : 5; /**< Reserved */ + uint8_t d_c : 1; /**< D/C bit */ +#elif RTE_BYTE_ORDER == RTE_BIG_ENDIAN + uint8_t d_c : 1; /**< D/C bit */ + uint8_t r : 5; /**< Reserved */ + uint8_t sn_17_16 : 2; /**< Sequence number bits 16-17 */ +#endif + uint8_t sn_15_8; /**< Sequence number bits 8-15 */ + uint8_t sn_7_0; /**< Sequence number bits 0-7 */ +}; + +/** + * 6.2.3.1 Control PDU for PDCP status report + */ +__extension__ +struct rte_pdcp_up_ctrl_pdu_hdr { +#if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN + uint8_t r : 4; /**< Reserved */ + uint8_t pdu_type : 3; /**< Control PDU type */ + uint8_t d_c : 1; /**< D/C bit */ +#elif RTE_BYTE_ORDER == RTE_BIG_ENDIAN + uint8_t d_c : 1; /**< D/C bit */ + uint8_t pdu_type : 3; /**< Control PDU type */ + uint8_t r : 4; /**< Reserved */ +#endif + /** + * 6.3.9 FMC + * + * First Missing COUNT. This field indicates the COUNT value of the + * first missing PDCP SDU within the reordering window, i.e. RX_DELIV. + */ + rte_be32_t fmc; + /** + * 6.3.10 Bitmap + * + * Length: Variable. The length of the bitmap field can be 0. + * + * This field indicates which SDUs are missing and which SDUs are + * correctly received in the receiving PDCP entity. The bit position of + * Nth bit in the Bitmap is N, i.e., the bit position of the first bit + * in the Bitmap is 1. 
+ */ + uint8_t bitmap[]; } __rte_packed; + +#ifdef __cplusplus +} +#endif + +#endif /* RTE_PDCP_HDR_H */
From patchwork Fri Apr 14 17:44:52 2023
X-Patchwork-Submitter: Anoob Joseph
X-Patchwork-Id: 126102
X-Patchwork-Delegate: gakhil@marvell.com
From: Anoob Joseph
To: Thomas Monjalon, Akhil Goyal, Jerin Jacob, Konstantin Ananyev, Bernard Iremonger
CC: Hemant Agrawal, Mattias Rönnblom, Kiran Kumar K, Volodymyr Fialko, Olivier Matz
Subject: [PATCH v2 02/22] lib: add pdcp protocol
Date: Fri, 14 Apr 2023 23:14:52 +0530
Message-ID: <20230414174512.642-3-anoobj@marvell.com>
In-Reply-To: <20230414174512.642-1-anoobj@marvell.com>
References: <20221222092522.1628-1-anoobj@marvell.com> <20230414174512.642-1-anoobj@marvell.com>

Add Packet Data Convergence Protocol
(PDCP) processing library. The library is similar to lib_ipsec which provides IPsec processing capabilities in DPDK. PDCP would involve roughly the following options, 1. Transfer of user plane data 2. Transfer of control plane data 3. Header compression 4. Uplink data compression 5. Ciphering and integrity protection PDCP library provides following control path APIs that is used to configure various PDCP entities, 1. rte_pdcp_entity_establish() 2. rte_pdcp_entity_suspend() 3. rte_pdcp_entity_release() Signed-off-by: Anoob Joseph Signed-off-by: Kiran Kumar K Signed-off-by: Volodymyr Fialko --- doc/api/doxy-api-index.md | 3 +- doc/api/doxy-api.conf.in | 1 + lib/meson.build | 1 + lib/pdcp/meson.build | 17 +++++ lib/pdcp/pdcp_crypto.c | 21 +++++ lib/pdcp/pdcp_crypto.h | 15 ++++ lib/pdcp/pdcp_entity.h | 95 +++++++++++++++++++++++ lib/pdcp/pdcp_process.c | 138 +++++++++++++++++++++++++++++++++ lib/pdcp/pdcp_process.h | 13 ++++ lib/pdcp/rte_pdcp.c | 138 +++++++++++++++++++++++++++++++++ lib/pdcp/rte_pdcp.h | 157 ++++++++++++++++++++++++++++++++++++++ lib/pdcp/version.map | 10 +++ 12 files changed, 608 insertions(+), 1 deletion(-) create mode 100644 lib/pdcp/meson.build create mode 100644 lib/pdcp/pdcp_crypto.c create mode 100644 lib/pdcp/pdcp_crypto.h create mode 100644 lib/pdcp/pdcp_entity.h create mode 100644 lib/pdcp/pdcp_process.c create mode 100644 lib/pdcp/pdcp_process.h create mode 100644 lib/pdcp/rte_pdcp.c create mode 100644 lib/pdcp/rte_pdcp.h create mode 100644 lib/pdcp/version.map diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md index debbe4134f..cd7a6cae44 100644 --- a/doc/api/doxy-api-index.md +++ b/doc/api/doxy-api-index.md @@ -128,7 +128,8 @@ The public API headers are grouped by topics: [eCPRI](@ref rte_ecpri.h), [L2TPv2](@ref rte_l2tpv2.h), [PPP](@ref rte_ppp.h), - [PDCP hdr](@ref rte_pdcp_hdr.h) + [PDCP hdr](@ref rte_pdcp_hdr.h), + [PDCP](@ref rte_pdcp.h) - **QoS**: [metering](@ref rte_meter.h), diff --git a/doc/api/doxy-api.conf.in b/doc/api/doxy-api.conf.in index d230a19e1f..58789308a9 100644 --- a/doc/api/doxy-api.conf.in +++ b/doc/api/doxy-api.conf.in @@ -62,6 +62,7 @@ INPUT = @TOPDIR@/doc/api/doxy-api-index.md \ @TOPDIR@/lib/net \ @TOPDIR@/lib/pcapng \ @TOPDIR@/lib/pci \ + @TOPDIR@/lib/pdcp \ @TOPDIR@/lib/pdump \ @TOPDIR@/lib/pipeline \ @TOPDIR@/lib/port \ diff --git a/lib/meson.build b/lib/meson.build index 0812ce6026..d217c04ea9 100644 --- a/lib/meson.build +++ b/lib/meson.build @@ -64,6 +64,7 @@ libraries = [ 'flow_classify', # flow_classify lib depends on pkt framework table lib 'graph', 'node', + 'pdcp', # pdcp lib depends on crypto and security ] optional_libs = [ diff --git a/lib/pdcp/meson.build b/lib/pdcp/meson.build new file mode 100644 index 0000000000..ccaf426240 --- /dev/null +++ b/lib/pdcp/meson.build @@ -0,0 +1,17 @@ +# SPDX-License-Identifier: BSD-3-Clause +# Copyright(C) 2023 Marvell. + +if is_windows + build = false + reason = 'not supported on Windows' + subdir_done() +endif + +sources = files( + 'pdcp_crypto.c', + 'pdcp_process.c', + 'rte_pdcp.c', + ) +headers = files('rte_pdcp.h') + +deps += ['mbuf', 'net', 'cryptodev', 'security'] diff --git a/lib/pdcp/pdcp_crypto.c b/lib/pdcp/pdcp_crypto.c new file mode 100644 index 0000000000..755e27ec9e --- /dev/null +++ b/lib/pdcp/pdcp_crypto.c @@ -0,0 +1,21 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2023 Marvell. 
+ */ + +#include + +#include "pdcp_crypto.h" + +int +pdcp_crypto_sess_create(struct rte_pdcp_entity *entity, const struct rte_pdcp_entity_conf *conf) +{ + RTE_SET_USED(entity); + RTE_SET_USED(conf); + return 0; +} + +void +pdcp_crypto_sess_destroy(struct rte_pdcp_entity *entity) +{ + RTE_SET_USED(entity); +} diff --git a/lib/pdcp/pdcp_crypto.h b/lib/pdcp/pdcp_crypto.h new file mode 100644 index 0000000000..6563331d37 --- /dev/null +++ b/lib/pdcp/pdcp_crypto.h @@ -0,0 +1,15 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2023 Marvell. + */ + +#ifndef PDCP_CRYPTO_H +#define PDCP_CRYPTO_H + +#include + +int pdcp_crypto_sess_create(struct rte_pdcp_entity *entity, + const struct rte_pdcp_entity_conf *conf); + +void pdcp_crypto_sess_destroy(struct rte_pdcp_entity *entity); + +#endif /* PDCP_CRYPTO_H */ diff --git a/lib/pdcp/pdcp_entity.h b/lib/pdcp/pdcp_entity.h new file mode 100644 index 0000000000..ca1d56b516 --- /dev/null +++ b/lib/pdcp/pdcp_entity.h @@ -0,0 +1,95 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2023 Marvell. + */ + +#ifndef PDCP_ENTITY_H +#define PDCP_ENTITY_H + +#include +#include +#include +#include +#include + +struct entity_priv; + +/* IV generation function based on the entity configuration */ +typedef void (*iv_gen_t)(struct rte_crypto_op *cop, const struct entity_priv *en_priv, + uint32_t count); + +struct entity_state { + uint32_t rx_next; + uint32_t tx_next; + uint32_t rx_deliv; + uint32_t rx_reord; +}; + +/* + * Layout of PDCP entity: [rte_pdcp_entity] [entity_priv] [entity_dl/ul] + */ + +struct entity_priv { + /** Crypto sym session. */ + struct rte_cryptodev_sym_session *crypto_sess; + /** Entity specific IV generation function. */ + iv_gen_t iv_gen; + /** Entity state variables. */ + struct entity_state state; + /** Flags. */ + struct { + /** PDCP PDU has 4 byte MAC-I. */ + uint64_t is_authenticated : 1; + /** Cipher offset & length in bits. */ + uint64_t is_ciph_in_bits : 1; + /** Auth offset & length in bits. */ + uint64_t is_auth_in_bits : 1; + /** Is UL/transmitting PDCP entity. */ + uint64_t is_ul_entity : 1; + /** Is NULL auth. */ + uint64_t is_null_auth : 1; + } flags; + /** Crypto op pool. */ + struct rte_mempool *cop_pool; + /** PDCP header size. */ + uint8_t hdr_sz; + /** PDCP AAD size. For AES-CMAC, additional message is prepended for the operation. */ + uint8_t aad_sz; + /** Device ID of the device to be used for offload. */ + uint8_t dev_id; +}; + +struct entity_priv_dl_part { + /* NOTE: when in-order-delivery is supported, post PDCP packets would need to cached. */ + uint8_t dummy; +}; + +struct entity_priv_ul_part { + /* + * NOTE: when re-establish is supported, plain PDCP packets & COUNT values need to be + * cached. 
+ */ + uint8_t dummy; +}; + +static inline struct entity_priv * +entity_priv_get(const struct rte_pdcp_entity *entity) { + return RTE_PTR_ADD(entity, sizeof(struct rte_pdcp_entity)); +} + +static inline struct entity_priv_dl_part * +entity_dl_part_get(const struct rte_pdcp_entity *entity) { + return RTE_PTR_ADD(entity, sizeof(struct rte_pdcp_entity) + sizeof(struct entity_priv)); +} + +static inline struct entity_priv_ul_part * +entity_ul_part_get(const struct rte_pdcp_entity *entity) { + return RTE_PTR_ADD(entity, sizeof(struct rte_pdcp_entity) + sizeof(struct entity_priv)); +} + +static inline int +pdcp_hdr_size_get(enum rte_security_pdcp_sn_size sn_size) +{ + return RTE_ALIGN_MUL_CEIL(sn_size, 8) / 8; +} + +#endif /* PDCP_ENTITY_H */ diff --git a/lib/pdcp/pdcp_process.c b/lib/pdcp/pdcp_process.c new file mode 100644 index 0000000000..d4b158536d --- /dev/null +++ b/lib/pdcp/pdcp_process.c @@ -0,0 +1,138 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2023 Marvell. + */ + +#include +#include +#include +#include +#include +#include + +#include "pdcp_crypto.h" +#include "pdcp_entity.h" +#include "pdcp_process.h" + +static int +pdcp_crypto_xfrm_get(const struct rte_pdcp_entity_conf *conf, struct rte_crypto_sym_xform **c_xfrm, + struct rte_crypto_sym_xform **a_xfrm) +{ + *c_xfrm = NULL; + *a_xfrm = NULL; + + if (conf->crypto_xfrm == NULL) + return -EINVAL; + + if (conf->crypto_xfrm->type == RTE_CRYPTO_SYM_XFORM_CIPHER) { + *c_xfrm = conf->crypto_xfrm; + *a_xfrm = conf->crypto_xfrm->next; + } else if (conf->crypto_xfrm->type == RTE_CRYPTO_SYM_XFORM_AUTH) { + *a_xfrm = conf->crypto_xfrm; + *c_xfrm = conf->crypto_xfrm->next; + } else { + return -EINVAL; + } + + return 0; +} + +static int +pdcp_entity_priv_populate(struct entity_priv *en_priv, const struct rte_pdcp_entity_conf *conf) +{ + struct rte_crypto_sym_xform *c_xfrm, *a_xfrm; + int ret; + + /** + * flags.is_authenticated + * + * MAC-I would be added in case of control plane packets and when authentication + * transform is not NULL. + */ + + if (conf->pdcp_xfrm.domain == RTE_SECURITY_PDCP_MODE_CONTROL) + en_priv->flags.is_authenticated = 1; + + ret = pdcp_crypto_xfrm_get(conf, &c_xfrm, &a_xfrm); + if (ret) + return ret; + + if (a_xfrm != NULL) + en_priv->flags.is_authenticated = 1; + + /** + * flags.is_ciph_in_bits + * + * For ZUC & SNOW3G cipher algos, offset & length need to be provided in bits. + */ + + if ((c_xfrm->cipher.algo == RTE_CRYPTO_CIPHER_SNOW3G_UEA2) || + (c_xfrm->cipher.algo == RTE_CRYPTO_CIPHER_ZUC_EEA3)) + en_priv->flags.is_ciph_in_bits = 1; + + /** + * flags.is_auth_in_bits + * + * For ZUC & SNOW3G authentication algos, offset & length need to be provided in bits. + */ + + if (a_xfrm != NULL) { + if ((a_xfrm->auth.algo == RTE_CRYPTO_AUTH_SNOW3G_UIA2) || + (a_xfrm->auth.algo == RTE_CRYPTO_AUTH_ZUC_EIA3)) + en_priv->flags.is_auth_in_bits = 1; + } + + /** + * flags.is_ul_entity + * + * Indicate whether the entity is UL/transmitting PDCP entity. + */ + if (conf->pdcp_xfrm.pkt_dir == RTE_SECURITY_PDCP_UPLINK) + en_priv->flags.is_ul_entity = 1; + + /** + * flags.is_null_auth + * + * For NULL auth, 4B zeros need to be added by lib PDCP. Indicate that + * algo is NULL auth to perform the same. + */ + if (a_xfrm != NULL && a_xfrm->auth.algo == RTE_CRYPTO_AUTH_NULL) + en_priv->flags.is_null_auth = 1; + + /** + * hdr_sz + * + * PDCP header size of the entity + */ + en_priv->hdr_sz = pdcp_hdr_size_get(conf->pdcp_xfrm.sn_size); + + /** + * aad_sz + * + * For AES-CMAC, additional message is prepended for processing. 
Need to be trimmed after + * crypto processing is done. + */ + if (a_xfrm != NULL && a_xfrm->auth.algo == RTE_CRYPTO_AUTH_AES_CMAC) + en_priv->aad_sz = 8; + else + en_priv->aad_sz = 0; + + return 0; +} + +int +pdcp_process_func_set(struct rte_pdcp_entity *entity, const struct rte_pdcp_entity_conf *conf) +{ + struct entity_priv *en_priv; + int ret; + + if (entity == NULL || conf == NULL) + return -EINVAL; + + en_priv = entity_priv_get(entity); + + ret = pdcp_entity_priv_populate(en_priv, conf); + if (ret) + return ret; + + return 0; +} diff --git a/lib/pdcp/pdcp_process.h b/lib/pdcp/pdcp_process.h new file mode 100644 index 0000000000..fd53fff0aa --- /dev/null +++ b/lib/pdcp/pdcp_process.h @@ -0,0 +1,13 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2023 Marvell. + */ + +#ifndef PDCP_PROCESS_H +#define PDCP_PROCESS_H + +#include + +int +pdcp_process_func_set(struct rte_pdcp_entity *entity, const struct rte_pdcp_entity_conf *conf); + +#endif /* PDCP_PROCESS_H */ diff --git a/lib/pdcp/rte_pdcp.c b/lib/pdcp/rte_pdcp.c new file mode 100644 index 0000000000..8914548dbd --- /dev/null +++ b/lib/pdcp/rte_pdcp.c @@ -0,0 +1,138 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2023 Marvell. + */ + +#include +#include +#include + +#include "pdcp_crypto.h" +#include "pdcp_entity.h" +#include "pdcp_process.h" + +static int +pdcp_entity_size_get(const struct rte_pdcp_entity_conf *conf) +{ + int size; + + size = sizeof(struct rte_pdcp_entity) + sizeof(struct entity_priv); + + if (conf->pdcp_xfrm.pkt_dir == RTE_SECURITY_PDCP_DOWNLINK) + size += sizeof(struct entity_priv_dl_part); + else if (conf->pdcp_xfrm.pkt_dir == RTE_SECURITY_PDCP_UPLINK) + size += sizeof(struct entity_priv_ul_part); + else + return -EINVAL; + + return RTE_ALIGN_CEIL(size, RTE_CACHE_LINE_SIZE); +} + +struct rte_pdcp_entity * +rte_pdcp_entity_establish(const struct rte_pdcp_entity_conf *conf) +{ + struct rte_pdcp_entity *entity = NULL; + struct entity_priv *en_priv; + int ret, entity_size; + + if (conf == NULL || conf->cop_pool == NULL) { + rte_errno = -EINVAL; + return NULL; + } + + if (conf->pdcp_xfrm.en_ordering || conf->pdcp_xfrm.remove_duplicates || conf->is_slrb || + conf->en_sec_offload) { + rte_errno = -ENOTSUP; + return NULL; + } + + /* + * 6.3.2 PDCP SN + * Length: 12 or 18 bits as indicated in table 6.3.2-1. 
The length of the PDCP SN is + * configured by upper layers (pdcp-SN-SizeUL, pdcp-SN-SizeDL, or sl-PDCP-SN-Size in + * TS 38.331 [3]) + */ + if ((conf->pdcp_xfrm.sn_size != RTE_SECURITY_PDCP_SN_SIZE_12) && + (conf->pdcp_xfrm.sn_size != RTE_SECURITY_PDCP_SN_SIZE_18)) { + rte_errno = -ENOTSUP; + return NULL; + } + + if (conf->pdcp_xfrm.hfn || conf->pdcp_xfrm.hfn_threshold) { + rte_errno = -EINVAL; + return NULL; + } + + entity_size = pdcp_entity_size_get(conf); + if (entity_size < 0) { + rte_errno = -EINVAL; + return NULL; + } + + entity = rte_zmalloc_socket("pdcp_entity", entity_size, RTE_CACHE_LINE_SIZE, SOCKET_ID_ANY); + if (entity == NULL) { + rte_errno = -ENOMEM; + return NULL; + } + + en_priv = entity_priv_get(entity); + + en_priv->state.rx_deliv = conf->count; + en_priv->state.tx_next = conf->count; + en_priv->cop_pool = conf->cop_pool; + + /* Setup crypto session */ + ret = pdcp_crypto_sess_create(entity, conf); + if (ret) + goto entity_free; + + ret = pdcp_process_func_set(entity, conf); + if (ret) + goto crypto_sess_destroy; + + return entity; + +crypto_sess_destroy: + pdcp_crypto_sess_destroy(entity); +entity_free: + rte_free(entity); + rte_errno = ret; + return NULL; +} + +int +rte_pdcp_entity_release(struct rte_pdcp_entity *pdcp_entity, struct rte_mbuf *out_mb[]) +{ + if (pdcp_entity == NULL) + return -EINVAL; + + /* Teardown crypto sessions */ + pdcp_crypto_sess_destroy(pdcp_entity); + + rte_free(pdcp_entity); + + RTE_SET_USED(out_mb); + return 0; +} + +int +rte_pdcp_entity_suspend(struct rte_pdcp_entity *pdcp_entity, + struct rte_mbuf *out_mb[]) +{ + struct entity_priv *en_priv; + + if (pdcp_entity == NULL) + return -EINVAL; + + en_priv = entity_priv_get(pdcp_entity); + + if (en_priv->flags.is_ul_entity) { + en_priv->state.tx_next = 0; + } else { + en_priv->state.rx_next = 0; + en_priv->state.rx_deliv = 0; + } + + RTE_SET_USED(out_mb); + + return 0; +} diff --git a/lib/pdcp/rte_pdcp.h b/lib/pdcp/rte_pdcp.h new file mode 100644 index 0000000000..33c355b05a --- /dev/null +++ b/lib/pdcp/rte_pdcp.h @@ -0,0 +1,157 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2023 Marvell. + */ + +#ifndef RTE_PDCP_H +#define RTE_PDCP_H + +/** + * @file rte_pdcp.h + * + * RTE PDCP support. + * + * librte_pdcp provides a framework for PDCP protocol processing. + */ + +#include +#include +#include +#include + +#ifdef __cplusplus +extern "C" { +#endif + +/** + * PDCP entity. + */ +struct rte_pdcp_entity { + /** + * PDCP entities may hold packets for purposes of in-order delivery (in + * case of receiving PDCP entity) and re-transmission (in case of + * transmitting PDCP entity). + * + * For receiving PDCP entity, it may hold packets when in-order + * delivery is enabled. The packets would be cached until either a + * packet that completes the sequence arrives or when t-Reordering timer + * expires. + * + * When post-processing of PDCP packet which completes a sequence is + * done, the API may return more packets than enqueued. Application is + * expected to provide *rte_pdcp_pkt_post_process()* with *out_mb* + * which can hold maximum number of packets which may be returned. + */ + uint32_t max_pkt_cache; + /** User area for saving application data. */ + uint64_t user_area[2]; +} __rte_cache_aligned; + +/** + * PDCP entity configuration to be used for establishing an entity. + */ +/* Structure rte_pdcp_entity_conf 8< */ +struct rte_pdcp_entity_conf { + /** PDCP transform for the entity. */ + struct rte_security_pdcp_xform pdcp_xfrm; + /** Crypto transform applicable for the entity. 
*/ + struct rte_crypto_sym_xform *crypto_xfrm; + /** Mempool for crypto symmetric session. */ + struct rte_mempool *sess_mpool; + /** Crypto op pool.*/ + struct rte_mempool *cop_pool; + /** + * 32 bit count value (HFN + SN) to be used for the first packet. + * pdcp_xfrm.hfn would be ignored as the HFN would be derived from this value. + */ + uint32_t count; + /** Indicate whether the PDCP entity belongs to Side Link Radio Bearer. */ + bool is_slrb; + /** Enable security offload on the device specified. */ + bool en_sec_offload; + /** Device on which security/crypto session need to be created. */ + uint8_t dev_id; + /** Reverse direction during IV generation. Can be used to simulate UE crypto processing.*/ + bool reverse_iv_direction; +}; +/* >8 End of structure rte_pdcp_entity_conf. */ + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice + * + * 5.1.1 PDCP entity establishment + * + * Establish PDCP entity based on provided input configuration. + * + * @param conf + * Parameters to be used for initializing PDCP entity object. + * @return + * - Valid handle if success + * - NULL in case of failure. rte_errno will be set to error code + */ +__rte_experimental +struct rte_pdcp_entity * +rte_pdcp_entity_establish(const struct rte_pdcp_entity_conf *conf); + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice + * + * 5.1.3 PDCP entity release + * + * Release PDCP entity. + * + * For UL/transmitting PDCP entity, all stored PDCP SDUs would be dropped. + * For DL/receiving PDCP entity, the stored PDCP SDUs would be returned in + * *out_mb* buffer. The buffer should be large enough to hold all cached + * packets in the entity. + * + * @param pdcp_entity + * Pointer to the PDCP entity to be released. + * @param[out] out_mb + * The address of an array that can hold up to *rte_pdcp_entity.max_pkt_cache* + * pointers to *rte_mbuf* structures. + * @return + * - 0: Success and no cached packets to return + * - >0: Success and the number of packets returned in out_mb + * - <0: Error code in case of failures + */ +__rte_experimental +int +rte_pdcp_entity_release(struct rte_pdcp_entity *pdcp_entity, + struct rte_mbuf *out_mb[]); + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice + * + * 5.1.4 PDCP entity suspend + * + * Suspend PDCP entity. + * + * For DL/receiving PDCP entity, the stored PDCP SDUs would be returned in + * *out_mb* buffer. The buffer should be large enough to hold all cached + * packets in the entity. + * + * For UL/transmitting PDCP entity, *out_mb* buffer would be unused. + * + * @param pdcp_entity + * Pointer to the PDCP entity to be suspended. + * @param[out] out_mb + * The address of an array that can hold up to *rte_pdcp_entity.max_pkt_cache* + * pointers to *rte_mbuf* structures. 
+ * @return + * - 0: Success and no cached packets to return + * - >0: Success and the number of packets returned in out_mb + * - <0: Error code in case of failures + */ +__rte_experimental +int +rte_pdcp_entity_suspend(struct rte_pdcp_entity *pdcp_entity, + struct rte_mbuf *out_mb[]); + +#ifdef __cplusplus +} +#endif + +#endif /* RTE_PDCP_H */ diff --git a/lib/pdcp/version.map b/lib/pdcp/version.map new file mode 100644 index 0000000000..923e165f3f --- /dev/null +++ b/lib/pdcp/version.map @@ -0,0 +1,10 @@ +EXPERIMENTAL { + global: + + # added in 23.07 + rte_pdcp_entity_establish; + rte_pdcp_entity_release; + rte_pdcp_entity_suspend; + + local: *; +}; From patchwork Fri Apr 14 17:44:53 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Anoob Joseph X-Patchwork-Id: 126103 X-Patchwork-Delegate: gakhil@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 62B4C42943; Fri, 14 Apr 2023 19:45:46 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 16F8242C4D; Fri, 14 Apr 2023 19:45:41 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by mails.dpdk.org (Postfix) with ESMTP id 7F73142D0D for ; Fri, 14 Apr 2023 19:45:39 +0200 (CEST) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id 33EHJ0L2013323; Fri, 14 Apr 2023 10:45:38 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0220; bh=izqAzVJ519lx71yGGUy/Kt2/TYIXU+WOs+K/49VPOhg=; b=KsEx+QHA1WtJmYmWn5p8pI5eZ/hQvR9tNkNsVCm6oqkDVCslRNnyviNPis5XD4VFR8f8 g1FlD157D4aYFAWLtfMdbsXh77ohFn4da4jWXS78LTH7ZO6iPtstSF9luATs7ge0F81t iIurzkXtSK4duvzxH+OZxLq8DBVubV8Tyb2sWpM3cZHZGieCa58CdQvkI+RcTUMXZb/s ePQ6HzCo9ciZCrYs19U0OtUJzhwCHRcEouSiFJAKJ1IKL623XZ2g8wCkRx5m5U5s0Br3 Sm4GDi5tN/DFrztSNsN14eM9pTqz+13SP/SMtLMoCwXIfXqcVRR8pSLpC4HinFt1bttl VA== Received: from dc5-exch01.marvell.com ([199.233.59.181]) by mx0b-0016f401.pphosted.com (PPS) with ESMTPS id 3py646s6h7-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Fri, 14 Apr 2023 10:45:38 -0700 Received: from DC5-EXCH02.marvell.com (10.69.176.39) by DC5-EXCH01.marvell.com (10.69.176.38) with Microsoft SMTP Server (TLS) id 15.0.1497.48; Fri, 14 Apr 2023 10:45:36 -0700 Received: from maili.marvell.com (10.69.176.80) by DC5-EXCH02.marvell.com (10.69.176.39) with Microsoft SMTP Server id 15.0.1497.48 via Frontend Transport; Fri, 14 Apr 2023 10:45:36 -0700 Received: from BG-LT92004.corp.innovium.com (unknown [10.28.161.183]) by maili.marvell.com (Postfix) with ESMTP id 5F31A3F707F; Fri, 14 Apr 2023 10:45:31 -0700 (PDT) From: Anoob Joseph To: Thomas Monjalon , Akhil Goyal , Jerin Jacob , Konstantin Ananyev , Bernard Iremonger CC: Hemant Agrawal , =?utf-8?q?Mattias_R=C3=B6nnblom?= , "Kiran Kumar K" , Volodymyr Fialko , , Olivier Matz Subject: [PATCH v2 03/22] pdcp: add pre and post-process Date: Fri, 14 Apr 2023 23:14:53 +0530 Message-ID: <20230414174512.642-4-anoobj@marvell.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230414174512.642-1-anoobj@marvell.com> References: <20221222092522.1628-1-anoobj@marvell.com> 
<20230414174512.642-1-anoobj@marvell.com> MIME-Version: 1.0 X-Proofpoint-GUID: 83JLs7ZTZk9490uevw2Zhtd3Wsv8axhy X-Proofpoint-ORIG-GUID: 83JLs7ZTZk9490uevw2Zhtd3Wsv8axhy X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.254,Aquarius:18.0.942,Hydra:6.0.573,FMLib:17.11.170.22 definitions=2023-04-14_10,2023-04-14_01,2023-02-09_01 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org PDCP process is split into 2 parts. One before crypto processing (rte_pdcp_pkt_pre_process()) and one after crypto processing (rte_pdcp_pkt_post_process()). Functionality of pre-process & post-process varies based on the type of entity. Registration of entity specific function pointer allows skipping multiple checks that would come in datapath otherwise. Signed-off-by: Anoob Joseph Signed-off-by: Kiran Kumar K Signed-off-by: Volodymyr Fialko Acked-by: Akhil Goyal --- lib/pdcp/rte_pdcp.h | 97 ++++++++++++++++++++++++++++++++++++++++++++ lib/pdcp/version.map | 3 ++ 2 files changed, 100 insertions(+) diff --git a/lib/pdcp/rte_pdcp.h b/lib/pdcp/rte_pdcp.h index 33c355b05a..75dc569f66 100644 --- a/lib/pdcp/rte_pdcp.h +++ b/lib/pdcp/rte_pdcp.h @@ -22,10 +22,29 @@ extern "C" { #endif +/* Forward declarations */ +struct rte_pdcp_entity; + +/* PDCP pre-process function based on entity configuration */ +typedef uint16_t (*rte_pdcp_pre_p_t)(const struct rte_pdcp_entity *entity, + struct rte_mbuf *mb[], + struct rte_crypto_op *cop[], + uint16_t num, uint16_t *nb_err); + +/* PDCP post-process function based on entity configuration */ +typedef uint16_t (*rte_pdcp_post_p_t)(const struct rte_pdcp_entity *entity, + struct rte_mbuf *in_mb[], + struct rte_mbuf *out_mb[], + uint16_t num, uint16_t *nb_err); + /** * PDCP entity. */ struct rte_pdcp_entity { + /** Entity specific pre-process handle. */ + rte_pdcp_pre_p_t pre_process; + /** Entity specific post-process handle. */ + rte_pdcp_post_p_t post_process; /** * PDCP entities may hold packets for purposes of in-order delivery (in * case of receiving PDCP entity) and re-transmission (in case of @@ -150,6 +169,84 @@ int rte_pdcp_entity_suspend(struct rte_pdcp_entity *pdcp_entity, struct rte_mbuf *out_mb[]); +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice + * + * For input mbufs and given PDCP entity pre-process the mbufs and prepare + * crypto ops that can be enqueued to the cryptodev associated with given + * session. Only error packets would be moved returned in the input buffer, + * *mb*, and it is the responsibility of the application to free the same. + * + * @param entity + * Pointer to the *rte_pdcp_entity* object the packets belong to. + * @param[in, out] mb + * The address of an array of *num* pointers to *rte_mbuf* structures + * which contain the input packets. Any error packets would be returned in the + * same buffer. + * @param[out] cop + * The address of an array that can hold up to *num* pointers to + * *rte_crypto_op* structures. Crypto ops would be allocated by + * ``rte_pdcp_pkt_pre_process`` API. + * @param num + * The maximum number of packets to process. 
+ * @param[out] nb_err + * Pointer to return the number of error packets returned in *mb* + * @return + * Count of crypto_ops prepared + */ +__rte_experimental +static inline uint16_t +rte_pdcp_pkt_pre_process(const struct rte_pdcp_entity *entity, + struct rte_mbuf *mb[], struct rte_crypto_op *cop[], + uint16_t num, uint16_t *nb_err) +{ + return entity->pre_process(entity, mb, cop, num, nb_err); +} + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice + * + * For input mbufs and given PDCP entity, perform PDCP post-processing of the + * mbufs. + * + * Input mbufs are the ones retrieved from crypto_ops dequeued from cryptodev + * and grouped by *rte_pdcp_pkt_crypto_group()*. + * + * The post-processed packets would be returned in the *out_mb* buffer. + * The resultant mbufs would be grouped into success packets and error packets. + * Error packets would be grouped in the end of the array and it is the + * responsibility of the application to handle the same. + * + * When in-order delivery is enabled, PDCP entity may buffer packets and would + * deliver packets only when all prior packets have been post-processed. That + * would result in returning more/less packets than enqueued. + * + * @param entity + * Pointer to the *rte_pdcp_entity* object the packets belong to. + * @param in_mb + * The address of an array of *num* pointers to *rte_mbuf* structures. + * @param[out] out_mb + * The address of an array of *num* pointers to *rte_mbuf* structures + * to output packets after PDCP post-processing. + * @param num + * The maximum number of packets to process. + * @param[out] nb_err + * The number of error packets returned in *out_mb* buffer. + * @return + * Count of packets returned in *out_mb* buffer. + */ +__rte_experimental +static inline uint16_t +rte_pdcp_pkt_post_process(const struct rte_pdcp_entity *entity, + struct rte_mbuf *in_mb[], + struct rte_mbuf *out_mb[], + uint16_t num, uint16_t *nb_err) +{ + return entity->post_process(entity, in_mb, out_mb, num, nb_err); +} + #ifdef __cplusplus } #endif diff --git a/lib/pdcp/version.map b/lib/pdcp/version.map index 923e165f3f..f9ff30600a 100644 --- a/lib/pdcp/version.map +++ b/lib/pdcp/version.map @@ -6,5 +6,8 @@ EXPERIMENTAL { rte_pdcp_entity_release; rte_pdcp_entity_suspend; + rte_pdcp_pkt_post_process; + rte_pdcp_pkt_pre_process; + local: *; }; From patchwork Fri Apr 14 17:44:54 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Anoob Joseph X-Patchwork-Id: 126104 X-Patchwork-Delegate: gakhil@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 0EC1C42943; Fri, 14 Apr 2023 19:45:53 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 95DD442D2F; Fri, 14 Apr 2023 19:45:46 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0a-0016f401.pphosted.com [67.231.148.174]) by mails.dpdk.org (Postfix) with ESMTP id 0EBCE42D2D for ; Fri, 14 Apr 2023 19:45:44 +0200 (CEST) Received: from pps.filterd (m0045849.ppops.net [127.0.0.1]) by mx0a-0016f401.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id 33E908w8011308; Fri, 14 Apr 2023 10:45:43 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; 
s=pfpt0220; bh=YUAoKhBZJd8w5Tuthz7VNwLpFX3dMhXnTf2RJfaErFE=; b=jWP5zFs0bR2izlGigX+Y8bmS6dwarvIzXNwtcRMA+utcziPrRiQ3ILLS5uWVJ/97Fx33 CnH6r2VjFeCVUfwOaGcfX7qFDsaQb+F77rsGZJRZQYzcF8Bhx1G9e69wz/WFxv0rmZHF jSOeBo4e4PcL/gbjxVNnaV/8CF6/by+mqaZQ/cHXDSCgH630CvXkcoM1eeHhRx1+swSU RM0+Lz5eGOXVN5aNJze9U2A8WhAbvIfWN9FN1dBcQT+MAS/dU1hafGvddiwHYg7RtxS5 xM+Laa+4ymWmc5HhjkKSJK9qFWal/By7OZBNnmM0Va/iYtl5BItMujvkFSFm7M48ZVJn FQ== Received: from dc5-exch01.marvell.com ([199.233.59.181]) by mx0a-0016f401.pphosted.com (PPS) with ESMTPS id 3py3tk2ea9-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Fri, 14 Apr 2023 10:45:43 -0700 Received: from DC5-EXCH01.marvell.com (10.69.176.38) by DC5-EXCH01.marvell.com (10.69.176.38) with Microsoft SMTP Server (TLS) id 15.0.1497.48; Fri, 14 Apr 2023 10:45:41 -0700 Received: from maili.marvell.com (10.69.176.80) by DC5-EXCH01.marvell.com (10.69.176.38) with Microsoft SMTP Server id 15.0.1497.48 via Frontend Transport; Fri, 14 Apr 2023 10:45:41 -0700 Received: from BG-LT92004.corp.innovium.com (unknown [10.28.161.183]) by maili.marvell.com (Postfix) with ESMTP id E30AA3F7080; Fri, 14 Apr 2023 10:45:36 -0700 (PDT) From: Anoob Joseph To: Thomas Monjalon , Akhil Goyal , Jerin Jacob , Konstantin Ananyev , Bernard Iremonger CC: Hemant Agrawal , =?utf-8?q?Mattias_R=C3=B6nnblom?= , "Kiran Kumar K" , Volodymyr Fialko , , Olivier Matz Subject: [PATCH v2 04/22] pdcp: add packet group Date: Fri, 14 Apr 2023 23:14:54 +0530 Message-ID: <20230414174512.642-5-anoobj@marvell.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230414174512.642-1-anoobj@marvell.com> References: <20221222092522.1628-1-anoobj@marvell.com> <20230414174512.642-1-anoobj@marvell.com> MIME-Version: 1.0 X-Proofpoint-GUID: pjO6vyFj2IG_AA8ZNpMoWBL57BhU6uqw X-Proofpoint-ORIG-GUID: pjO6vyFj2IG_AA8ZNpMoWBL57BhU6uqw X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.254,Aquarius:18.0.942,Hydra:6.0.573,FMLib:17.11.170.22 definitions=2023-04-14_10,2023-04-14_01,2023-02-09_01 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Crypto processing in PDCP is performed asynchronously by rte_cryptodev_enqueue_burst() and rte_cryptodev_dequeue_burst(). Since cryptodev dequeue can return crypto operations belonging to multiple entities, rte_pdcp_pkt_crypto_group() is added to help grouping crypto operations belonging to same entity. 
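As a rough usage sketch (illustration only, not part of this patch): pre-process packets for one entity, run the prepared crypto ops through a cryptodev, then regroup the completed operations by owning entity before post-processing. The function name pdcp_burst_process, MAX_BURST, dev_id and qp_id below are placeholders chosen for the example.

    #include <rte_cryptodev.h>
    #include <rte_mbuf.h>
    #include <rte_pdcp.h>

    #define MAX_BURST 32 /* placeholder burst size */

    /* Illustration only: one round trip of the PDCP datapath for a burst. */
    static void
    pdcp_burst_process(struct rte_pdcp_entity *entity, uint8_t dev_id, uint16_t qp_id,
                       struct rte_mbuf *mb[], uint16_t nb_pkts)
    {
            struct rte_crypto_op *cop[MAX_BURST];
            struct rte_mbuf *out_mb[MAX_BURST];
            struct rte_pdcp_group grp[MAX_BURST];
            uint16_t nb_cop, nb_grp, nb_err, nb_out, i;

            /* Prepare crypto ops; error packets are returned in mb[] for the app to free. */
            nb_cop = rte_pdcp_pkt_pre_process(entity, mb, cop, nb_pkts, &nb_err);

            nb_cop = rte_cryptodev_enqueue_burst(dev_id, qp_id, cop, nb_cop);
            /* ... later, once the operations have completed ... */
            nb_cop = rte_cryptodev_dequeue_burst(dev_id, qp_id, cop, MAX_BURST);

            /* Regroup completed ops by the PDCP entity they belong to; cop[] is freed here. */
            nb_grp = rte_pdcp_pkt_crypto_group(cop, mb, grp, nb_cop);
            for (i = 0; i < nb_grp; i++) {
                    struct rte_pdcp_entity *e = grp[i].id.ptr;

                    nb_out = rte_pdcp_pkt_post_process(e, grp[i].m, out_mb,
                                                       grp[i].cnt, &nb_err);
                    /* Forward nb_out packets; error packets, if any, are at the end of out_mb[]. */
            }
    }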
Signed-off-by: Anoob Joseph Signed-off-by: Kiran Kumar K Signed-off-by: Volodymyr Fialko --- lib/pdcp/meson.build | 1 + lib/pdcp/rte_pdcp.h | 2 + lib/pdcp/rte_pdcp_group.h | 125 ++++++++++++++++++++++++++++++++++++++ 3 files changed, 128 insertions(+) create mode 100644 lib/pdcp/rte_pdcp_group.h diff --git a/lib/pdcp/meson.build b/lib/pdcp/meson.build index ccaf426240..08679b743a 100644 --- a/lib/pdcp/meson.build +++ b/lib/pdcp/meson.build @@ -13,5 +13,6 @@ sources = files( 'rte_pdcp.c', ) headers = files('rte_pdcp.h') +indirect_headers += files('rte_pdcp_group.h') deps += ['mbuf', 'net', 'cryptodev', 'security'] diff --git a/lib/pdcp/rte_pdcp.h b/lib/pdcp/rte_pdcp.h index 75dc569f66..54f88e3fd3 100644 --- a/lib/pdcp/rte_pdcp.h +++ b/lib/pdcp/rte_pdcp.h @@ -247,6 +247,8 @@ rte_pdcp_pkt_post_process(const struct rte_pdcp_entity *entity, return entity->post_process(entity, in_mb, out_mb, num, nb_err); } +#include + #ifdef __cplusplus } #endif diff --git a/lib/pdcp/rte_pdcp_group.h b/lib/pdcp/rte_pdcp_group.h new file mode 100644 index 0000000000..cb322f55c7 --- /dev/null +++ b/lib/pdcp/rte_pdcp_group.h @@ -0,0 +1,125 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2023 Marvell. + */ + +#ifndef RTE_PDCP_GROUP_H +#define RTE_PDCP_GROUP_H + +/** + * @file rte_pdcp_group.h + * + * RTE PDCP grouping support. + * It is not recommended to include this file directly, include + * instead. + * Provides helper functions to process completed crypto-ops and group related + * packets by sessions they belong to. + */ + +#include +#include +#include + +#ifdef __cplusplus +extern "C" { +#endif + +/** + * Group packets belonging to same PDCP entity. + */ +struct rte_pdcp_group { + union { + uint64_t val; + void *ptr; + } id; /**< Grouped by value */ + struct rte_mbuf **m; /**< Start of the group */ + uint32_t cnt; /**< Number of entries in the group */ + int32_t rc; /**< Status code associated with the group */ +}; + +/** + * Take crypto-op as an input and extract pointer to related PDCP entity. + * @param cop + * The address of an input *rte_crypto_op* structure. + * @return + * The pointer to the related *rte_pdcp_entity* structure. + */ +static inline struct rte_pdcp_entity * +rte_pdcp_en_from_cop(const struct rte_crypto_op *cop) +{ + void *sess = cop->sym[0].session; + + return (struct rte_pdcp_entity *) + rte_cryptodev_sym_session_opaque_data_get(sess); +} + +/** + * Take as input completed crypto ops, extract related mbufs and group them by + * *rte_pdcp_entity* they belong to. Mbuf for which the crypto operation has + * failed would be flagged using *RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED* flag + * in rte_mbuf.ol_flags. The crypto_ops would be freed after the grouping. + * + * Note that application must ensure only crypto-ops prepared by lib_pdcp is + * provided back to @see rte_pdcp_pkt_crypto_group(). + * + * @param cop + * The address of an array of *num* pointers to the input *rte_crypto_op* + * structures. + * @param[out] mb + * The address of an array of *num* pointers to output *rte_mbuf* structures. + * @param[out] grp + * The address of an array of *num* to output *rte_pdcp_group* structures. + * @param num + * The maximum number of crypto-ops to process. + * @return + * Number of filled elements in *grp* array. 
+ * + */ +static inline uint16_t +rte_pdcp_pkt_crypto_group(struct rte_crypto_op *cop[], struct rte_mbuf *mb[], + struct rte_pdcp_group grp[], uint16_t num) +{ + uint32_t i, j = 0, n = 0; + void *ns, *ps = NULL; + struct rte_mbuf *m; + + for (i = 0; i != num; i++) { + m = cop[i]->sym[0].m_src; + ns = cop[i]->sym[0].session; + + m->ol_flags |= RTE_MBUF_F_RX_SEC_OFFLOAD; + if (cop[i]->status != RTE_CRYPTO_OP_STATUS_SUCCESS) + m->ol_flags |= RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED; + + /* Different entity */ + if (ps != ns) { + + /* Finalize open group and start a new one */ + if (ps != NULL) { + grp[n].cnt = mb + j - grp[n].m; + n++; + } + + /* Start new group */ + grp[n].m = mb + j; + ps = ns; + grp[n].id.ptr = rte_pdcp_en_from_cop(cop[i]); + } + + mb[j++] = m; + rte_crypto_op_free(cop[i]); + } + + /* Finalize last group */ + if (ps != NULL) { + grp[n].cnt = mb + j - grp[n].m; + n++; + } + + return n; +} + +#ifdef __cplusplus +} +#endif + +#endif /* RTE_PDCP_GROUP_H */ From patchwork Fri Apr 14 17:44:55 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Anoob Joseph X-Patchwork-Id: 126105 X-Patchwork-Delegate: gakhil@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 09BC242943; Fri, 14 Apr 2023 19:45:59 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id EE0D242D29; Fri, 14 Apr 2023 19:45:51 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by mails.dpdk.org (Postfix) with ESMTP id ADF1D42C54 for ; Fri, 14 Apr 2023 19:45:50 +0200 (CEST) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id 33EGUdJp026903; Fri, 14 Apr 2023 10:45:49 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0220; bh=keM4U33fENjZgw1nEwafWLtRzdW2dcjz9ojEgQl8O2k=; b=OoF2ST8c0c/P/+xhZynl6C1nf7tuKZE3eZFJWU+Y7Enmcsn1lZicGqOiwMbYLPo0IrLx 4SRNbyJeBAfExaoclVXgZkQNx92HE9QxkSwK4NufcWvOUp/QhFFRLPqBjJixmtzhIy3i gmDB7n2qI7LIbd5WNByjEAJpDKDnLMf5jxGkuBScZ2sc2kALjM1Z5r0GzD4C2u1vv/Tx nWrV/nZrY0KTNcR2WvW1w+SKF6oaN1fZj+NA8SzK03pDoYCycvBfEQ06Qy1RG+iBsI5z j5YPFWS+92TPyB949OvqPO+DOV7XyioAhb4StruA3NJSwfv0h16HYXsNQNQv80w8Gwvc RQ== Received: from dc5-exch01.marvell.com ([199.233.59.181]) by mx0b-0016f401.pphosted.com (PPS) with ESMTPS id 3py646s6hw-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Fri, 14 Apr 2023 10:45:49 -0700 Received: from DC5-EXCH01.marvell.com (10.69.176.38) by DC5-EXCH01.marvell.com (10.69.176.38) with Microsoft SMTP Server (TLS) id 15.0.1497.48; Fri, 14 Apr 2023 10:45:47 -0700 Received: from maili.marvell.com (10.69.176.80) by DC5-EXCH01.marvell.com (10.69.176.38) with Microsoft SMTP Server id 15.0.1497.48 via Frontend Transport; Fri, 14 Apr 2023 10:45:47 -0700 Received: from BG-LT92004.corp.innovium.com (unknown [10.28.161.183]) by maili.marvell.com (Postfix) with ESMTP id 6B6683F707F; Fri, 14 Apr 2023 10:45:42 -0700 (PDT) From: Anoob Joseph To: Thomas Monjalon , Akhil Goyal , Jerin Jacob , Konstantin Ananyev , Bernard Iremonger CC: Hemant Agrawal , =?utf-8?q?Mattias_R=C3=B6nnblom?= , "Kiran Kumar K" , Volodymyr Fialko , , Olivier Matz 
Subject: [PATCH v2 05/22] pdcp: add crypto session create and destroy Date: Fri, 14 Apr 2023 23:14:55 +0530 Message-ID: <20230414174512.642-6-anoobj@marvell.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230414174512.642-1-anoobj@marvell.com> References: <20221222092522.1628-1-anoobj@marvell.com> <20230414174512.642-1-anoobj@marvell.com> MIME-Version: 1.0 X-Proofpoint-GUID: 45JYwE7dG71qgi__QDQl_VVVIy0aLnWY X-Proofpoint-ORIG-GUID: 45JYwE7dG71qgi__QDQl_VVVIy0aLnWY X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.254,Aquarius:18.0.942,Hydra:6.0.573,FMLib:17.11.170.22 definitions=2023-04-14_10,2023-04-14_01,2023-02-09_01 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Add routines to create & destroy sessions. PDCP lib would take crypto transforms as input and creates the session on the corresponding device after verifying capabilities. Signed-off-by: Anoob Joseph Signed-off-by: Volodymyr Fialko Acked-by: Akhil Goyal --- lib/pdcp/pdcp_crypto.c | 222 ++++++++++++++++++++++++++++++++++++++++- lib/pdcp/pdcp_crypto.h | 6 ++ 2 files changed, 225 insertions(+), 3 deletions(-) diff --git a/lib/pdcp/pdcp_crypto.c b/lib/pdcp/pdcp_crypto.c index 755e27ec9e..0e6e7d99d8 100644 --- a/lib/pdcp/pdcp_crypto.c +++ b/lib/pdcp/pdcp_crypto.c @@ -2,20 +2,236 @@ * Copyright(C) 2023 Marvell. */ +#include +#include +#include +#include #include #include "pdcp_crypto.h" +#include "pdcp_entity.h" + +static int +pdcp_crypto_caps_cipher_verify(uint8_t dev_id, const struct rte_crypto_sym_xform *c_xfrm) +{ + const struct rte_cryptodev_symmetric_capability *cap; + struct rte_cryptodev_sym_capability_idx cap_idx; + int ret; + + cap_idx.type = RTE_CRYPTO_SYM_XFORM_CIPHER; + cap_idx.algo.cipher = c_xfrm->cipher.algo; + + cap = rte_cryptodev_sym_capability_get(dev_id, &cap_idx); + if (cap == NULL) + return -1; + + ret = rte_cryptodev_sym_capability_check_cipher(cap, c_xfrm->cipher.key.length, + c_xfrm->cipher.iv.length); + + return ret; +} + +static int +pdcp_crypto_caps_auth_verify(uint8_t dev_id, const struct rte_crypto_sym_xform *a_xfrm) +{ + const struct rte_cryptodev_symmetric_capability *cap; + struct rte_cryptodev_sym_capability_idx cap_idx; + int ret; + + cap_idx.type = RTE_CRYPTO_SYM_XFORM_AUTH; + cap_idx.algo.auth = a_xfrm->auth.algo; + + cap = rte_cryptodev_sym_capability_get(dev_id, &cap_idx); + if (cap == NULL) + return -1; + + ret = rte_cryptodev_sym_capability_check_auth(cap, a_xfrm->auth.key.length, + a_xfrm->auth.digest_length, + a_xfrm->auth.iv.length); + + return ret; +} + +static int +pdcp_crypto_xfrm_validate(const struct rte_pdcp_entity_conf *conf, + const struct rte_crypto_sym_xform *c_xfrm, + const struct rte_crypto_sym_xform *a_xfrm, + bool is_auth_then_cipher) +{ + uint16_t ciph_iv_len, auth_digest_len, auth_iv_len; + int ret; + + /* + * Uplink means PDCP entity is configured for transmit. Downlink means PDCP entity is + * configured for receive. When integrity protection is enabled, PDCP always performs + * digest-encrypted or auth-gen-encrypt for uplink (and decrypt-auth-verify for downlink). + * So for uplink, crypto chain would be auth-cipher while for downlink it would be + * cipher-auth. + * + * When integrity protection is not required, xform would be cipher only. 
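+ *
+ * For example (illustration, not an exhaustive list): an uplink entity with
+ * integrity protection would chain an AUTH transform
+ * (op = RTE_CRYPTO_AUTH_OP_GENERATE) followed by a CIPHER transform
+ * (op = RTE_CRYPTO_CIPHER_OP_ENCRYPT), while the corresponding downlink
+ * entity would chain CIPHER (op = RTE_CRYPTO_CIPHER_OP_DECRYPT) followed by
+ * AUTH (op = RTE_CRYPTO_AUTH_OP_VERIFY).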
+ */ + + if (c_xfrm == NULL) + return -EINVAL; + + if (conf->pdcp_xfrm.pkt_dir == RTE_SECURITY_PDCP_UPLINK) { + + /* With UPLINK, if auth is enabled, it should be before cipher */ + if (a_xfrm != NULL && !is_auth_then_cipher) + return -EINVAL; + + /* With UPLINK, cipher operation must be encrypt */ + if (c_xfrm->cipher.op != RTE_CRYPTO_CIPHER_OP_ENCRYPT) + return -EINVAL; + + /* With UPLINK, auth operation (if present) must be generate */ + if (a_xfrm != NULL && a_xfrm->auth.op != RTE_CRYPTO_AUTH_OP_GENERATE) + return -EINVAL; + + } else if (conf->pdcp_xfrm.pkt_dir == RTE_SECURITY_PDCP_DOWNLINK) { + + /* With DOWNLINK, if auth is enabled, it should be after cipher */ + if (a_xfrm != NULL && is_auth_then_cipher) + return -EINVAL; + + /* With DOWNLINK, cipher operation must be decrypt */ + if (c_xfrm->cipher.op != RTE_CRYPTO_CIPHER_OP_DECRYPT) + return -EINVAL; + + /* With DOWNLINK, auth operation (if present) must be verify */ + if (a_xfrm != NULL && a_xfrm->auth.op != RTE_CRYPTO_AUTH_OP_VERIFY) + return -EINVAL; + + } else { + return -EINVAL; + } + + if ((c_xfrm->cipher.algo != RTE_CRYPTO_CIPHER_NULL) && + (c_xfrm->cipher.algo != RTE_CRYPTO_CIPHER_AES_CTR) && + (c_xfrm->cipher.algo != RTE_CRYPTO_CIPHER_ZUC_EEA3) && + (c_xfrm->cipher.algo != RTE_CRYPTO_CIPHER_SNOW3G_UEA2)) + return -EINVAL; + + if (c_xfrm->cipher.algo == RTE_CRYPTO_CIPHER_NULL) + ciph_iv_len = 0; + else + ciph_iv_len = PDCP_IV_LEN; + + if (ciph_iv_len != c_xfrm->cipher.iv.length) + return -EINVAL; + + if (a_xfrm != NULL) { + if ((a_xfrm->auth.algo != RTE_CRYPTO_AUTH_NULL) && + (a_xfrm->auth.algo != RTE_CRYPTO_AUTH_AES_CMAC) && + (a_xfrm->auth.algo != RTE_CRYPTO_AUTH_ZUC_EIA3) && + (a_xfrm->auth.algo != RTE_CRYPTO_AUTH_SNOW3G_UIA2)) + return -EINVAL; + + /* For AUTH NULL, lib PDCP would add 4 byte 0s */ + if (a_xfrm->auth.algo == RTE_CRYPTO_AUTH_NULL) + auth_digest_len = 0; + else + auth_digest_len = PDCP_MAC_I_LEN; + + if (auth_digest_len != a_xfrm->auth.digest_length) + return -EINVAL; + + if ((a_xfrm->auth.algo == RTE_CRYPTO_AUTH_ZUC_EIA3) || + (a_xfrm->auth.algo == RTE_CRYPTO_AUTH_SNOW3G_UIA2)) + auth_iv_len = PDCP_IV_LEN; + else + auth_iv_len = 0; + + if (a_xfrm->auth.iv.length != auth_iv_len) + return -EINVAL; + } + + if (!rte_cryptodev_is_valid_dev(conf->dev_id)) + return -EINVAL; + + ret = pdcp_crypto_caps_cipher_verify(conf->dev_id, c_xfrm); + if (ret) + return -ENOTSUP; + + if (a_xfrm != NULL) { + ret = pdcp_crypto_caps_auth_verify(conf->dev_id, a_xfrm); + if (ret) + return -ENOTSUP; + } + + return 0; +} int pdcp_crypto_sess_create(struct rte_pdcp_entity *entity, const struct rte_pdcp_entity_conf *conf) { - RTE_SET_USED(entity); - RTE_SET_USED(conf); + struct rte_crypto_sym_xform *c_xfrm, *a_xfrm; + struct entity_priv *en_priv; + bool is_auth_then_cipher; + int ret; + + if (entity == NULL || conf == NULL || conf->crypto_xfrm == NULL) + return -EINVAL; + + en_priv = entity_priv_get(entity); + + en_priv->dev_id = conf->dev_id; + + if (conf->crypto_xfrm->type == RTE_CRYPTO_SYM_XFORM_CIPHER) { + c_xfrm = conf->crypto_xfrm; + a_xfrm = conf->crypto_xfrm->next; + is_auth_then_cipher = false; + } else if (conf->crypto_xfrm->type == RTE_CRYPTO_SYM_XFORM_AUTH) { + a_xfrm = conf->crypto_xfrm; + c_xfrm = conf->crypto_xfrm->next; + is_auth_then_cipher = true; + } else { + return -EINVAL; + } + + ret = pdcp_crypto_xfrm_validate(conf, c_xfrm, a_xfrm, is_auth_then_cipher); + if (ret) + return ret; + + if (c_xfrm->cipher.algo == RTE_CRYPTO_CIPHER_NULL) + c_xfrm->cipher.iv.offset = 0; + else + c_xfrm->cipher.iv.offset = 
PDCP_IV_OFFSET; + + if (a_xfrm != NULL) { + if (a_xfrm->auth.algo == RTE_CRYPTO_AUTH_NULL) + a_xfrm->auth.iv.offset = 0; + else + if (c_xfrm->cipher.iv.offset) + a_xfrm->auth.iv.offset = PDCP_IV_OFFSET + PDCP_IV_LEN; + else + a_xfrm->auth.iv.offset = PDCP_IV_OFFSET; + } + + if (conf->sess_mpool == NULL) + return -EINVAL; + + en_priv->crypto_sess = rte_cryptodev_sym_session_create(conf->dev_id, conf->crypto_xfrm, + conf->sess_mpool); + if (en_priv->crypto_sess == NULL) { + /* API returns positive values as error codes */ + return -rte_errno; + } + + rte_cryptodev_sym_session_opaque_data_set(en_priv->crypto_sess, (uint64_t)entity); + return 0; } void pdcp_crypto_sess_destroy(struct rte_pdcp_entity *entity) { - RTE_SET_USED(entity); + struct entity_priv *en_priv; + + en_priv = entity_priv_get(entity); + + if (en_priv->crypto_sess != NULL) { + rte_cryptodev_sym_session_free(en_priv->dev_id, en_priv->crypto_sess); + en_priv->crypto_sess = NULL; + } } diff --git a/lib/pdcp/pdcp_crypto.h b/lib/pdcp/pdcp_crypto.h index 6563331d37..e178ddd725 100644 --- a/lib/pdcp/pdcp_crypto.h +++ b/lib/pdcp/pdcp_crypto.h @@ -5,8 +5,14 @@ #ifndef PDCP_CRYPTO_H #define PDCP_CRYPTO_H +#include +#include #include +#define PDCP_IV_OFFSET (sizeof(struct rte_crypto_op) + sizeof(struct rte_crypto_sym_op)) +#define PDCP_IV_LEN 16 +#define PDCP_MAC_I_LEN 4 + int pdcp_crypto_sess_create(struct rte_pdcp_entity *entity, const struct rte_pdcp_entity_conf *conf); From patchwork Fri Apr 14 17:44:56 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Anoob Joseph X-Patchwork-Id: 126106 X-Patchwork-Delegate: gakhil@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 0F2DB42943; Fri, 14 Apr 2023 19:46:07 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 3456341153; Fri, 14 Apr 2023 19:45:58 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by mails.dpdk.org (Postfix) with ESMTP id 9482E41153 for ; Fri, 14 Apr 2023 19:45:56 +0200 (CEST) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id 33EHJ0L4013323; Fri, 14 Apr 2023 10:45:55 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0220; bh=yJH/D3Ngnbpyo9uu9omO5jdcCK/8KWuSOiL3Ff13G8k=; b=ad0RDAoondoKIBb78dmYZUWMO4sv3BDu1wwAmhWnHYIzEKFUwpcN3aNx3qzZkfJXvATP BNQ/KzPVHjYe2ARV2qFS+MKdDpyB+sPcAqKQF9BnGs6dEemA+QCZXZX3VGfTtXNfTvQJ O4D0l1qzrEBfZ2KmyCID6oJDM6ozGPUgvHnHak1YicXM42G8zSlQW/MoQE/bgikTuQna 5j7L11vuwjFqmP7eYhwZ7Lcw1kzudWqWbtv8fx5BXrRJfdoH6wkGpI5lMcIT3PqFPoKx /Tl61OS9MmVH138OmlXMNYkUNhhdhsLNo7+Jt4J/y6Ww3Rz+DJBGrn0oNBstuugeY2v7 ew== Received: from dc5-exch02.marvell.com ([199.233.59.182]) by mx0b-0016f401.pphosted.com (PPS) with ESMTPS id 3py646s6je-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Fri, 14 Apr 2023 10:45:55 -0700 Received: from DC5-EXCH02.marvell.com (10.69.176.39) by DC5-EXCH02.marvell.com (10.69.176.39) with Microsoft SMTP Server (TLS) id 15.0.1497.48; Fri, 14 Apr 2023 10:45:53 -0700 Received: from maili.marvell.com (10.69.176.80) by DC5-EXCH02.marvell.com (10.69.176.39) with Microsoft 
SMTP Server id 15.0.1497.48 via Frontend Transport; Fri, 14 Apr 2023 10:45:53 -0700 Received: from BG-LT92004.corp.innovium.com (unknown [10.28.161.183]) by maili.marvell.com (Postfix) with ESMTP id 3CFCE3F7080; Fri, 14 Apr 2023 10:45:47 -0700 (PDT) From: Anoob Joseph To: Thomas Monjalon , Akhil Goyal , Jerin Jacob , Konstantin Ananyev , Bernard Iremonger CC: Hemant Agrawal , =?utf-8?q?Mattias_R=C3=B6nnblom?= , "Kiran Kumar K" , Volodymyr Fialko , , Olivier Matz Subject: [PATCH v2 06/22] pdcp: add pre and post process for UL Date: Fri, 14 Apr 2023 23:14:56 +0530 Message-ID: <20230414174512.642-7-anoobj@marvell.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230414174512.642-1-anoobj@marvell.com> References: <20221222092522.1628-1-anoobj@marvell.com> <20230414174512.642-1-anoobj@marvell.com> MIME-Version: 1.0 X-Proofpoint-GUID: eal_7lGJfHfhhoFVRuoajNcya2RFVsrZ X-Proofpoint-ORIG-GUID: eal_7lGJfHfhhoFVRuoajNcya2RFVsrZ X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.254,Aquarius:18.0.942,Hydra:6.0.573,FMLib:17.11.170.22 definitions=2023-04-14_10,2023-04-14_01,2023-02-09_01 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Add routines to perform pre & post processing based on the type of entity. To avoid checks in datapath, there are different function pointers registered based on the following, 1. Control plane v/s user plane 2. 12 bit v/s 18 bit SN For control plane only 12 bit SN need to be supported (as per PDCP specification). Signed-off-by: Anoob Joseph Signed-off-by: Kiran Kumar K Signed-off-by: Volodymyr Fialko Acked-by: Akhil Goyal --- lib/pdcp/pdcp_entity.h | 42 +++++ lib/pdcp/pdcp_process.c | 330 ++++++++++++++++++++++++++++++++++++++++ 2 files changed, 372 insertions(+) diff --git a/lib/pdcp/pdcp_entity.h b/lib/pdcp/pdcp_entity.h index ca1d56b516..46cdaead09 100644 --- a/lib/pdcp/pdcp_entity.h +++ b/lib/pdcp/pdcp_entity.h @@ -92,4 +92,46 @@ pdcp_hdr_size_get(enum rte_security_pdcp_sn_size sn_size) return RTE_ALIGN_MUL_CEIL(sn_size, 8) / 8; } +static inline uint32_t +pdcp_window_size_get(enum rte_security_pdcp_sn_size sn_size) +{ + return 1 << (sn_size - 1); +} + +static inline uint32_t +pdcp_sn_mask_get(enum rte_security_pdcp_sn_size sn_size) +{ + return (1 << sn_size) - 1; +} + +static inline uint32_t +pdcp_sn_from_count_get(uint32_t count, enum rte_security_pdcp_sn_size sn_size) +{ + return (count & pdcp_sn_mask_get(sn_size)); +} + +static inline uint32_t +pdcp_hfn_mask_get(enum rte_security_pdcp_sn_size sn_size) +{ + return ~pdcp_sn_mask_get(sn_size); +} + +static inline uint32_t +pdcp_hfn_from_count_get(uint32_t count, enum rte_security_pdcp_sn_size sn_size) +{ + return (count & pdcp_hfn_mask_get(sn_size)) >> sn_size; +} + +static inline uint32_t +pdcp_count_from_hfn_sn_get(uint32_t hfn, uint32_t sn, enum rte_security_pdcp_sn_size sn_size) +{ + return (((hfn << sn_size) & pdcp_hfn_mask_get(sn_size)) | (sn & pdcp_sn_mask_get(sn_size))); +} + +static inline uint32_t +pdcp_hfn_max(enum rte_security_pdcp_sn_size sn_size) +{ + return (1 << (32 - sn_size)) - 1; +} + #endif /* PDCP_ENTITY_H */ diff --git a/lib/pdcp/pdcp_process.c b/lib/pdcp/pdcp_process.c index d4b158536d..7c1fc85fcb 100644 --- a/lib/pdcp/pdcp_process.c +++ b/lib/pdcp/pdcp_process.c @@ -36,6 +36,332 @@ pdcp_crypto_xfrm_get(const struct rte_pdcp_entity_conf *conf, struct rte_crypto_ return 0; } +static inline void +cop_prepare(const struct 
entity_priv *en_priv, struct rte_mbuf *mb, struct rte_crypto_op *cop, + uint8_t data_offset, uint32_t count, const bool is_auth) +{ + const struct rte_crypto_op cop_init = { + .type = RTE_CRYPTO_OP_TYPE_SYMMETRIC, + .status = RTE_CRYPTO_OP_STATUS_NOT_PROCESSED, + .sess_type = RTE_CRYPTO_OP_WITH_SESSION, + }; + struct rte_crypto_sym_op *op; + uint32_t pkt_len; + + const uint8_t ciph_shift = 3 * en_priv->flags.is_ciph_in_bits; + const uint8_t auth_shift = 3 * en_priv->flags.is_auth_in_bits; + + op = cop->sym; + cop->raw = cop_init.raw; + op->m_src = mb; + op->m_dst = mb; + + /* Set IV */ + en_priv->iv_gen(cop, en_priv, count); + + /* Prepare op */ + pkt_len = rte_pktmbuf_pkt_len(mb); + op->cipher.data.offset = data_offset << ciph_shift; + op->cipher.data.length = (pkt_len - data_offset) << ciph_shift; + + if (is_auth) { + op->auth.data.offset = 0; + op->auth.data.length = (pkt_len - PDCP_MAC_I_LEN) << auth_shift; + op->auth.digest.data = rte_pktmbuf_mtod_offset(mb, uint8_t *, + (pkt_len - PDCP_MAC_I_LEN)); + } + + __rte_crypto_sym_op_attach_sym_session(op, en_priv->crypto_sess); +} + +static inline bool +pdcp_pre_process_uplane_sn_12_ul_set_sn(struct entity_priv *en_priv, struct rte_mbuf *mb, + uint32_t *count) +{ + struct rte_pdcp_up_data_pdu_sn_12_hdr *pdu_hdr; + const uint8_t hdr_sz = en_priv->hdr_sz; + uint32_t sn; + + /* Prepend PDU header */ + pdu_hdr = (struct rte_pdcp_up_data_pdu_sn_12_hdr *)rte_pktmbuf_prepend(mb, hdr_sz); + if (unlikely(pdu_hdr == NULL)) + return false; + + /* Update sequence num in the PDU header */ + *count = en_priv->state.tx_next++; + sn = pdcp_sn_from_count_get(*count, RTE_SECURITY_PDCP_SN_SIZE_12); + + pdu_hdr->d_c = RTE_PDCP_PDU_TYPE_DATA; + pdu_hdr->sn_11_8 = ((sn & 0xf00) >> 8); + pdu_hdr->sn_7_0 = (sn & 0xff); + pdu_hdr->r = 0; + return true; +} + +static uint16_t +pdcp_pre_process_uplane_sn_12_ul(const struct rte_pdcp_entity *entity, struct rte_mbuf *in_mb[], + struct rte_crypto_op *cop[], uint16_t num, uint16_t *nb_err_ret) +{ + struct entity_priv *en_priv = entity_priv_get(entity); + uint16_t nb_cop, nb_prep = 0, nb_err = 0; + struct rte_mbuf *mb; + uint32_t count; + uint8_t *mac_i; + int i; + + const uint8_t data_offset = en_priv->hdr_sz + en_priv->aad_sz; + const int is_null_auth = en_priv->flags.is_null_auth; + + nb_cop = rte_crypto_op_bulk_alloc(en_priv->cop_pool, RTE_CRYPTO_OP_TYPE_SYMMETRIC, cop, + num); + + if (en_priv->flags.is_authenticated) { + for (i = 0; i < nb_cop; i++) { + mb = in_mb[i]; + mac_i = (uint8_t *)rte_pktmbuf_append(mb, PDCP_MAC_I_LEN); + if (unlikely(mac_i == NULL)) { + in_mb[nb_err++] = mb; + continue; + } + + /* Clear MAC-I field for NULL auth */ + if (is_null_auth) + memset(mac_i, 0, PDCP_MAC_I_LEN); + + if (unlikely(!pdcp_pre_process_uplane_sn_12_ul_set_sn(en_priv, mb, + &count))) { + in_mb[nb_err++] = mb; + continue; + } + cop_prepare(en_priv, mb, cop[nb_prep++], data_offset, count, true); + } + } else { + for (i = 0; i < nb_cop; i++) { + mb = in_mb[i]; + if (unlikely(!pdcp_pre_process_uplane_sn_12_ul_set_sn(en_priv, mb, + &count))) { + in_mb[nb_err++] = mb; + continue; + } + cop_prepare(en_priv, mb, cop[nb_prep++], data_offset, count, false); + } + } + + if (unlikely(nb_err)) + /* Using mempool API since crypto API is not providing bulk free */ + rte_mempool_put_bulk(en_priv->cop_pool, (void *)&cop[nb_prep], nb_cop - nb_prep); + + *nb_err_ret = num - nb_prep; + + return nb_prep; +} + +static inline bool +pdcp_pre_process_uplane_sn_18_ul_set_sn(struct entity_priv *en_priv, struct rte_mbuf *mb, + uint32_t *count) +{ + 
struct rte_pdcp_up_data_pdu_sn_18_hdr *pdu_hdr; + const uint8_t hdr_sz = en_priv->hdr_sz; + uint32_t sn; + + /* Prepend PDU header */ + pdu_hdr = (struct rte_pdcp_up_data_pdu_sn_18_hdr *)rte_pktmbuf_prepend(mb, hdr_sz); + if (unlikely(pdu_hdr == NULL)) + return false; + + /* Update sequence num in the PDU header */ + *count = en_priv->state.tx_next++; + sn = pdcp_sn_from_count_get(*count, RTE_SECURITY_PDCP_SN_SIZE_18); + + pdu_hdr->d_c = RTE_PDCP_PDU_TYPE_DATA; + pdu_hdr->sn_17_16 = ((sn & 0x30000) >> 16); + pdu_hdr->sn_15_8 = ((sn & 0xff00) >> 8); + pdu_hdr->sn_7_0 = (sn & 0xff); + pdu_hdr->r = 0; + + return true; +} + +static inline uint16_t +pdcp_pre_process_uplane_sn_18_ul(const struct rte_pdcp_entity *entity, struct rte_mbuf *in_mb[], + struct rte_crypto_op *cop[], uint16_t num, uint16_t *nb_err_ret) +{ + struct entity_priv *en_priv = entity_priv_get(entity); + uint16_t nb_cop, nb_prep = 0, nb_err = 0; + struct rte_mbuf *mb; + uint32_t count; + uint8_t *mac_i; + int i; + + const uint8_t data_offset = en_priv->hdr_sz + en_priv->aad_sz; + const int is_null_auth = en_priv->flags.is_null_auth; + + nb_cop = rte_crypto_op_bulk_alloc(en_priv->cop_pool, RTE_CRYPTO_OP_TYPE_SYMMETRIC, cop, + num); + + if (en_priv->flags.is_authenticated) { + for (i = 0; i < nb_cop; i++) { + mb = in_mb[i]; + mac_i = (uint8_t *)rte_pktmbuf_append(mb, PDCP_MAC_I_LEN); + if (unlikely(mac_i == NULL)) { + in_mb[nb_err++] = mb; + continue; + } + + /* Clear MAC-I field for NULL auth */ + if (is_null_auth) + memset(mac_i, 0, PDCP_MAC_I_LEN); + + if (unlikely(!pdcp_pre_process_uplane_sn_18_ul_set_sn(en_priv, mb, + &count))) { + in_mb[nb_err++] = mb; + continue; + } + cop_prepare(en_priv, mb, cop[nb_prep++], data_offset, count, true); + } + } else { + for (i = 0; i < nb_cop; i++) { + mb = in_mb[i]; + if (unlikely(!pdcp_pre_process_uplane_sn_18_ul_set_sn(en_priv, mb, + &count))) { + + in_mb[nb_err++] = mb; + continue; + } + cop_prepare(en_priv, mb, cop[nb_prep++], data_offset, count, false); + } + } + + if (unlikely(nb_err)) + /* Using mempool API since crypto API is not providing bulk free */ + rte_mempool_put_bulk(en_priv->cop_pool, (void *)&cop[nb_prep], nb_cop - nb_prep); + + *nb_err_ret = num - nb_prep; + + return nb_prep; +} + +static uint16_t +pdcp_pre_process_cplane_sn_12_ul(const struct rte_pdcp_entity *entity, struct rte_mbuf *in_mb[], + struct rte_crypto_op *cop[], uint16_t num, uint16_t *nb_err_ret) +{ + struct entity_priv *en_priv = entity_priv_get(entity); + struct rte_pdcp_cp_data_pdu_sn_12_hdr *pdu_hdr; + uint16_t nb_cop, nb_prep = 0, nb_err = 0; + struct rte_mbuf *mb; + uint32_t count, sn; + uint8_t *mac_i; + int i; + + const uint8_t hdr_sz = en_priv->hdr_sz; + const uint8_t data_offset = hdr_sz + en_priv->aad_sz; + const int is_null_auth = en_priv->flags.is_null_auth; + + nb_cop = rte_crypto_op_bulk_alloc(en_priv->cop_pool, RTE_CRYPTO_OP_TYPE_SYMMETRIC, cop, + num); + + for (i = 0; i < nb_cop; i++) { + mb = in_mb[i]; + /* Prepend PDU header */ + pdu_hdr = (struct rte_pdcp_cp_data_pdu_sn_12_hdr *)rte_pktmbuf_prepend(mb, hdr_sz); + if (unlikely(pdu_hdr == NULL)) { + in_mb[nb_err++] = mb; + continue; + } + + mac_i = (uint8_t *)rte_pktmbuf_append(mb, PDCP_MAC_I_LEN); + if (unlikely(mac_i == NULL)) { + in_mb[nb_err++] = mb; + continue; + } + + /* Clear MAC-I field for NULL auth */ + if (is_null_auth) + memset(mac_i, 0, PDCP_MAC_I_LEN); + + /* Update sequence number in the PDU header */ + count = en_priv->state.tx_next++; + sn = pdcp_sn_from_count_get(count, RTE_SECURITY_PDCP_SN_SIZE_12); + + pdu_hdr->sn_11_8 = 
((sn & 0xf00) >> 8); + pdu_hdr->sn_7_0 = (sn & 0xff); + pdu_hdr->r = 0; + + cop_prepare(en_priv, mb, cop[nb_prep++], data_offset, count, true); + } + + if (unlikely(nb_err)) + /* Using mempool API since crypto API is not providing bulk free */ + rte_mempool_put_bulk(en_priv->cop_pool, (void *)&cop[nb_prep], nb_cop - nb_prep); + + *nb_err_ret = num - nb_prep; + + return nb_prep; +} + +static uint16_t +pdcp_post_process_ul(const struct rte_pdcp_entity *entity, + struct rte_mbuf *in_mb[], struct rte_mbuf *out_mb[], + uint16_t num, uint16_t *nb_err_ret) +{ + struct entity_priv *en_priv = entity_priv_get(entity); + const uint32_t hdr_trim_sz = en_priv->aad_sz; + int i, nb_success = 0, nb_err = 0; + struct rte_mbuf *mb, *err_mb[num]; + + for (i = 0; i < num; i++) { + mb = in_mb[i]; + if (unlikely(mb->ol_flags & RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED)) { + err_mb[nb_err++] = mb; + continue; + } + + if (hdr_trim_sz) + rte_pktmbuf_adj(mb, hdr_trim_sz); + + out_mb[nb_success++] = mb; + } + + if (unlikely(nb_err != 0)) + rte_memcpy(&out_mb[nb_success], err_mb, nb_err * sizeof(struct rte_mbuf *)); + + *nb_err_ret = nb_err; + return nb_success; +} + +static int +pdcp_pre_post_func_set(struct rte_pdcp_entity *entity, const struct rte_pdcp_entity_conf *conf) +{ + entity->pre_process = NULL; + entity->post_process = NULL; + + if ((conf->pdcp_xfrm.domain == RTE_SECURITY_PDCP_MODE_CONTROL) && + (conf->pdcp_xfrm.sn_size == RTE_SECURITY_PDCP_SN_SIZE_12) && + (conf->pdcp_xfrm.pkt_dir == RTE_SECURITY_PDCP_UPLINK)) { + entity->pre_process = pdcp_pre_process_cplane_sn_12_ul; + entity->post_process = pdcp_post_process_ul; + } + + if ((conf->pdcp_xfrm.domain == RTE_SECURITY_PDCP_MODE_DATA) && + (conf->pdcp_xfrm.sn_size == RTE_SECURITY_PDCP_SN_SIZE_12) && + (conf->pdcp_xfrm.pkt_dir == RTE_SECURITY_PDCP_UPLINK)) { + entity->pre_process = pdcp_pre_process_uplane_sn_12_ul; + entity->post_process = pdcp_post_process_ul; + } + + if ((conf->pdcp_xfrm.domain == RTE_SECURITY_PDCP_MODE_DATA) && + (conf->pdcp_xfrm.sn_size == RTE_SECURITY_PDCP_SN_SIZE_18) && + (conf->pdcp_xfrm.pkt_dir == RTE_SECURITY_PDCP_UPLINK)) { + entity->pre_process = pdcp_pre_process_uplane_sn_18_ul; + entity->post_process = pdcp_post_process_ul; + } + + if (entity->pre_process == NULL || entity->post_process == NULL) + return -ENOTSUP; + + return 0; +} + static int pdcp_entity_priv_populate(struct entity_priv *en_priv, const struct rte_pdcp_entity_conf *conf) { @@ -134,5 +460,9 @@ pdcp_process_func_set(struct rte_pdcp_entity *entity, const struct rte_pdcp_enti if (ret) return ret; + ret = pdcp_pre_post_func_set(entity, conf); + if (ret) + return ret; + return 0; } From patchwork Fri Apr 14 17:44:57 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Anoob Joseph X-Patchwork-Id: 126107 X-Patchwork-Delegate: gakhil@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id BA89442943; Fri, 14 Apr 2023 19:46:17 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 0B0C242D38; Fri, 14 Apr 2023 19:46:05 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by mails.dpdk.org (Postfix) with ESMTP id CEEB242B8E for ; Fri, 14 Apr 2023 19:46:02 +0200 (CEST) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com 
(8.17.1.19/8.17.1.19) with ESMTP id 33EGOBuT013729; Fri, 14 Apr 2023 10:46:02 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0220; bh=yo/C+R3bC/UNptPw61WCsTCnpnHuBoSKeaChePbJAwI=; b=XS2myp+ixmoEv9xPoy3bhLM8younzW1VLo6e3eeK2EXo3TaFBxahOTR6eBFxhNLOu/NQ kmKuMbD5zf3Jb80UpavNC0syiu+AenCl1Q1h3S9cfdope0sPPtV2NXDDF3H+mTOJZS/T sS7qqLXZaOJTQSoGOLC5aCO4Bu5Ouv1mmpPUDaNlTslyhxXg6/zbHUfwnsuKV4t2gvTz mHFOD2yku0wKCThxD6cT/eupig4j56PCMnvklmoDjP/UE6uC0Wn5TziS0VHaG3dVruRO Cyfk49W4wGWSXCjAAyH1M9Wf9YHZ87aDZP+vc8bBfjNS/7UDUf8d7DTMlFdkw8KUw4wr YQ== Received: from dc5-exch02.marvell.com ([199.233.59.182]) by mx0b-0016f401.pphosted.com (PPS) with ESMTPS id 3py646s6k6-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Fri, 14 Apr 2023 10:46:01 -0700 Received: from DC5-EXCH01.marvell.com (10.69.176.38) by DC5-EXCH02.marvell.com (10.69.176.39) with Microsoft SMTP Server (TLS) id 15.0.1497.48; Fri, 14 Apr 2023 10:45:59 -0700 Received: from maili.marvell.com (10.69.176.80) by DC5-EXCH01.marvell.com (10.69.176.38) with Microsoft SMTP Server id 15.0.1497.48 via Frontend Transport; Fri, 14 Apr 2023 10:45:59 -0700 Received: from BG-LT92004.corp.innovium.com (unknown [10.28.161.183]) by maili.marvell.com (Postfix) with ESMTP id E30173F7081; Fri, 14 Apr 2023 10:45:53 -0700 (PDT) From: Anoob Joseph To: Thomas Monjalon , Akhil Goyal , Jerin Jacob , Konstantin Ananyev , Bernard Iremonger CC: Hemant Agrawal , =?utf-8?q?Mattias_R=C3=B6nnblom?= , "Kiran Kumar K" , Volodymyr Fialko , , Olivier Matz Subject: [PATCH v2 07/22] pdcp: add pre and post process for DL Date: Fri, 14 Apr 2023 23:14:57 +0530 Message-ID: <20230414174512.642-8-anoobj@marvell.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230414174512.642-1-anoobj@marvell.com> References: <20221222092522.1628-1-anoobj@marvell.com> <20230414174512.642-1-anoobj@marvell.com> MIME-Version: 1.0 X-Proofpoint-GUID: 7NdZgt1-1KFH4z0uR7GP94T07AFLQQhY X-Proofpoint-ORIG-GUID: 7NdZgt1-1KFH4z0uR7GP94T07AFLQQhY X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.254,Aquarius:18.0.942,Hydra:6.0.573,FMLib:17.11.170.22 definitions=2023-04-14_10,2023-04-14_01,2023-02-09_01 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Add routines to perform pre & post processing for down link entities. 
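The downlink (receive) path has to reconstruct the full 32-bit COUNT from the short SN carried in the PDU header, using the stored RX_DELIV state and a window of 2^(SN size - 1), before decryption and verification can run. Below is a minimal, self-contained sketch of that window decision; the helper name and the example values are illustrative only (not part of the patch), and the HFN wrap-around checks are left out for brevity.

#include <stdint.h>
#include <stdio.h>

#define SN_SIZE   12u                       /* 12-bit SN for this example */
#define SN_MASK   ((1u << SN_SIZE) - 1)
#define WINDOW_SZ (1u << (SN_SIZE - 1))

/* Estimate the COUNT of a received PDU from its SN and the RX_DELIV state. */
static uint32_t
count_from_sn(uint32_t rx_deliv, uint32_t rcvd_sn)
{
	uint32_t rx_deliv_sn = rx_deliv & SN_MASK;
	uint32_t rx_deliv_hfn = rx_deliv >> SN_SIZE;
	uint32_t rcvd_hfn;

	if ((int32_t)rcvd_sn < (int32_t)(rx_deliv_sn - WINDOW_SZ))
		rcvd_hfn = rx_deliv_hfn + 1;    /* SN wrapped past the window */
	else if (rcvd_sn >= rx_deliv_sn + WINDOW_SZ)
		rcvd_hfn = rx_deliv_hfn - 1;    /* late PDU from the previous HFN */
	else
		rcvd_hfn = rx_deliv_hfn;

	return (rcvd_hfn << SN_SIZE) | rcvd_sn;
}

int main(void)
{
	/* RX_DELIV at HFN 1, SN 10: a PDU carrying SN 4095 is a late packet
	 * from HFN 0, so its COUNT resolves to 4095.
	 */
	printf("count = %u\n", count_from_sn((1u << SN_SIZE) | 10u, 4095u));
	return 0;
}

The library's pdcp_sn_count_get() below implements the same decision and additionally rejects PDUs that would push the HFN past its minimum or maximum value.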
Signed-off-by: Anoob Joseph Signed-off-by: Kiran Kumar K Signed-off-by: Volodymyr Fialko --- lib/pdcp/pdcp_entity.h | 2 + lib/pdcp/pdcp_process.c | 453 ++++++++++++++++++++++++++++++++++++++++ 2 files changed, 455 insertions(+) diff --git a/lib/pdcp/pdcp_entity.h b/lib/pdcp/pdcp_entity.h index 46cdaead09..d2d9bbe149 100644 --- a/lib/pdcp/pdcp_entity.h +++ b/lib/pdcp/pdcp_entity.h @@ -13,6 +13,8 @@ struct entity_priv; +#define PDCP_HFN_MIN 0 + /* IV generation function based on the entity configuration */ typedef void (*iv_gen_t)(struct rte_crypto_op *cop, const struct entity_priv *en_priv, uint32_t count); diff --git a/lib/pdcp/pdcp_process.c b/lib/pdcp/pdcp_process.c index 7c1fc85fcb..79d6ca352a 100644 --- a/lib/pdcp/pdcp_process.c +++ b/lib/pdcp/pdcp_process.c @@ -329,9 +329,423 @@ pdcp_post_process_ul(const struct rte_pdcp_entity *entity, return nb_success; } +static inline int +pdcp_sn_count_get(const uint32_t rx_deliv, int32_t rsn, uint32_t *count, + const enum rte_security_pdcp_sn_size sn_size) +{ + const uint32_t rx_deliv_sn = pdcp_sn_from_count_get(rx_deliv, sn_size); + const uint32_t window_sz = pdcp_window_size_get(sn_size); + uint32_t rhfn; + + rhfn = pdcp_hfn_from_count_get(rx_deliv, sn_size); + + if (rsn < (int32_t)(rx_deliv_sn - window_sz)) { + if (unlikely(rhfn == pdcp_hfn_max(sn_size))) + return -ERANGE; + rhfn += 1; + } else if ((uint32_t)rsn >= (rx_deliv_sn + window_sz)) { + if (unlikely(rhfn == PDCP_HFN_MIN)) + return -ERANGE; + rhfn -= 1; + } + + *count = pdcp_count_from_hfn_sn_get(rhfn, rsn, sn_size); + + return 0; +} + +static inline uint16_t +pdcp_pre_process_uplane_sn_12_dl_flags(const struct rte_pdcp_entity *entity, + struct rte_mbuf *in_mb[], struct rte_crypto_op *cop[], + uint16_t num, uint16_t *nb_err_ret, + const bool is_integ_protected) +{ + struct entity_priv *en_priv = entity_priv_get(entity); + struct rte_pdcp_up_data_pdu_sn_12_hdr *pdu_hdr; + uint16_t nb_cop, nb_prep = 0, nb_err = 0; + struct rte_mbuf *mb; + int32_t rsn = 0; + uint32_t count; + int i; + + const uint8_t data_offset = en_priv->hdr_sz + en_priv->aad_sz; + + nb_cop = rte_crypto_op_bulk_alloc(en_priv->cop_pool, RTE_CRYPTO_OP_TYPE_SYMMETRIC, cop, + num); + + const uint32_t rx_deliv = en_priv->state.rx_deliv; + + for (i = 0; i < nb_cop; i++) { + mb = in_mb[i]; + pdu_hdr = rte_pktmbuf_mtod(mb, struct rte_pdcp_up_data_pdu_sn_12_hdr *); + + /* Check for PDU type */ + if (likely(pdu_hdr->d_c == RTE_PDCP_PDU_TYPE_DATA)) + rsn = ((pdu_hdr->sn_11_8 << 8) | (pdu_hdr->sn_7_0)); + else + rte_panic("TODO: Control PDU not handled"); + + if (unlikely(pdcp_sn_count_get(rx_deliv, rsn, &count, + RTE_SECURITY_PDCP_SN_SIZE_12))) { + in_mb[nb_err++] = mb; + continue; + } + cop_prepare(en_priv, mb, cop[nb_prep++], data_offset, count, is_integ_protected); + } + + if (unlikely(nb_err)) + rte_mempool_put_bulk(en_priv->cop_pool, (void *)&cop[nb_prep], nb_cop - nb_prep); + + *nb_err_ret = num - nb_prep; + + return nb_prep; +} + +static uint16_t +pdcp_pre_process_uplane_sn_12_dl_ip(const struct rte_pdcp_entity *entity, struct rte_mbuf *mb[], + struct rte_crypto_op *cop[], uint16_t num, uint16_t *nb_err) +{ + return pdcp_pre_process_uplane_sn_12_dl_flags(entity, mb, cop, num, nb_err, true); +} + +static uint16_t +pdcp_pre_process_uplane_sn_12_dl(const struct rte_pdcp_entity *entity, struct rte_mbuf *mb[], + struct rte_crypto_op *cop[], uint16_t num, uint16_t *nb_err) +{ + return pdcp_pre_process_uplane_sn_12_dl_flags(entity, mb, cop, num, nb_err, false); +} + +static inline uint16_t 
+pdcp_pre_process_uplane_sn_18_dl_flags(const struct rte_pdcp_entity *entity, + struct rte_mbuf *in_mb[], struct rte_crypto_op *cop[], + uint16_t num, uint16_t *nb_err_ret, + const bool is_integ_protected) +{ + struct entity_priv *en_priv = entity_priv_get(entity); + struct rte_pdcp_up_data_pdu_sn_18_hdr *pdu_hdr; + uint16_t nb_cop, nb_prep = 0, nb_err = 0; + struct rte_mbuf *mb; + int32_t rsn = 0; + uint32_t count; + int i; + + const uint8_t data_offset = en_priv->hdr_sz + en_priv->aad_sz; + nb_cop = rte_crypto_op_bulk_alloc(en_priv->cop_pool, RTE_CRYPTO_OP_TYPE_SYMMETRIC, cop, + num); + + const uint32_t rx_deliv = en_priv->state.rx_deliv; + + for (i = 0; i < nb_cop; i++) { + mb = in_mb[i]; + pdu_hdr = rte_pktmbuf_mtod(mb, struct rte_pdcp_up_data_pdu_sn_18_hdr *); + + /* Check for PDU type */ + if (likely(pdu_hdr->d_c == RTE_PDCP_PDU_TYPE_DATA)) + rsn = ((pdu_hdr->sn_17_16 << 16) | (pdu_hdr->sn_15_8 << 8) | + (pdu_hdr->sn_7_0)); + else + rte_panic("TODO: Control PDU not handled"); + + if (unlikely(pdcp_sn_count_get(rx_deliv, rsn, &count, + RTE_SECURITY_PDCP_SN_SIZE_18))) { + in_mb[nb_err++] = mb; + continue; + } + cop_prepare(en_priv, mb, cop[nb_prep++], data_offset, count, is_integ_protected); + } + + if (unlikely(nb_err)) + /* Using mempool API since crypto API is not providing bulk free */ + rte_mempool_put_bulk(en_priv->cop_pool, (void *)&cop[nb_prep], nb_cop - nb_prep); + + *nb_err_ret = num - nb_prep; + + return nb_prep; +} + +static uint16_t +pdcp_pre_process_uplane_sn_18_dl_ip(const struct rte_pdcp_entity *entity, struct rte_mbuf *mb[], + struct rte_crypto_op *cop[], uint16_t num, uint16_t *nb_err) +{ + return pdcp_pre_process_uplane_sn_18_dl_flags(entity, mb, cop, num, nb_err, true); +} + +static uint16_t +pdcp_pre_process_uplane_sn_18_dl(const struct rte_pdcp_entity *entity, struct rte_mbuf *mb[], + struct rte_crypto_op *cop[], uint16_t num, uint16_t *nb_err) +{ + return pdcp_pre_process_uplane_sn_18_dl_flags(entity, mb, cop, num, nb_err, false); +} + +static uint16_t +pdcp_pre_process_cplane_sn_12_dl(const struct rte_pdcp_entity *entity, struct rte_mbuf *in_mb[], + struct rte_crypto_op *cop[], uint16_t num, uint16_t *nb_err_ret) +{ + struct entity_priv *en_priv = entity_priv_get(entity); + struct rte_pdcp_cp_data_pdu_sn_12_hdr *pdu_hdr; + uint16_t nb_cop, nb_prep = 0, nb_err = 0; + struct rte_mbuf *mb; + uint32_t count; + int32_t rsn; + int i; + + const uint8_t data_offset = en_priv->hdr_sz + en_priv->aad_sz; + + nb_cop = rte_crypto_op_bulk_alloc(en_priv->cop_pool, RTE_CRYPTO_OP_TYPE_SYMMETRIC, cop, + num); + + const uint32_t rx_deliv = en_priv->state.rx_deliv; + + for (i = 0; i < nb_cop; i++) { + mb = in_mb[i]; + pdu_hdr = rte_pktmbuf_mtod(mb, struct rte_pdcp_cp_data_pdu_sn_12_hdr *); + rsn = ((pdu_hdr->sn_11_8 << 8) | (pdu_hdr->sn_7_0)); + if (unlikely(pdcp_sn_count_get(rx_deliv, rsn, &count, + RTE_SECURITY_PDCP_SN_SIZE_12))) { + in_mb[nb_err++] = mb; + continue; + } + cop_prepare(en_priv, mb, cop[nb_prep++], data_offset, count, true); + } + + if (unlikely(nb_err)) + /* Using mempool API since crypto API is not providing bulk free */ + rte_mempool_put_bulk(en_priv->cop_pool, (void *)&cop[nb_prep], nb_cop - nb_prep); + + *nb_err_ret = num - nb_prep; + + return nb_prep; +} + +static inline void +pdcp_packet_strip(struct rte_mbuf *mb, const uint32_t hdr_trim_sz, const bool trim_mac) +{ + char *p = rte_pktmbuf_adj(mb, hdr_trim_sz); + RTE_VERIFY(p != NULL); + + if (trim_mac) { + int ret = rte_pktmbuf_trim(mb, PDCP_MAC_I_LEN); + RTE_VERIFY(ret == 0); + } +} + +static inline bool 
+pdcp_post_process_update_entity_state(const struct rte_pdcp_entity *entity, + const uint32_t count) +{ + struct entity_priv *en_priv = entity_priv_get(entity); + + if (count < en_priv->state.rx_deliv) + return false; + + /* t-Reordering timer is not supported - SDU will be delivered immediately. + * Update RX_DELIV to the COUNT value of the first PDCP SDU which has not + * been delivered to upper layers + */ + en_priv->state.rx_next = count + 1; + + if (count >= en_priv->state.rx_next) + en_priv->state.rx_next = count + 1; + + return true; +} + +static inline uint16_t +pdcp_post_process_uplane_sn_12_dl_flags(const struct rte_pdcp_entity *entity, + struct rte_mbuf *in_mb[], + struct rte_mbuf *out_mb[], + uint16_t num, uint16_t *nb_err_ret, + const bool is_integ_protected) +{ + struct entity_priv *en_priv = entity_priv_get(entity); + struct rte_pdcp_up_data_pdu_sn_12_hdr *pdu_hdr; + int i, nb_success = 0, nb_err = 0, rsn = 0; + const uint32_t aad_sz = en_priv->aad_sz; + struct rte_mbuf *err_mb[num]; + struct rte_mbuf *mb; + uint32_t count; + + const uint32_t hdr_trim_sz = en_priv->hdr_sz + aad_sz; + + for (i = 0; i < num; i++) { + mb = in_mb[i]; + if (unlikely(mb->ol_flags & RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED)) + goto error; + pdu_hdr = rte_pktmbuf_mtod_offset(mb, struct rte_pdcp_up_data_pdu_sn_12_hdr *, + aad_sz); + + /* Check for PDU type */ + if (likely(pdu_hdr->d_c == RTE_PDCP_PDU_TYPE_DATA)) + rsn = ((pdu_hdr->sn_11_8 << 8) | (pdu_hdr->sn_7_0)); + else + rte_panic("Control PDU should not be received"); + + if (unlikely(pdcp_sn_count_get(en_priv->state.rx_deliv, rsn, &count, + RTE_SECURITY_PDCP_SN_SIZE_12))) + goto error; + + if (unlikely(!pdcp_post_process_update_entity_state(entity, count))) + goto error; + + pdcp_packet_strip(mb, hdr_trim_sz, is_integ_protected); + out_mb[nb_success++] = mb; + continue; + +error: + err_mb[nb_err++] = mb; + } + + if (unlikely(nb_err != 0)) + rte_memcpy(&out_mb[nb_success], err_mb, nb_err * sizeof(struct rte_mbuf *)); + + *nb_err_ret = nb_err; + return nb_success; +} + +static uint16_t +pdcp_post_process_uplane_sn_12_dl_ip(const struct rte_pdcp_entity *entity, + struct rte_mbuf *in_mb[], + struct rte_mbuf *out_mb[], + uint16_t num, uint16_t *nb_err) +{ + return pdcp_post_process_uplane_sn_12_dl_flags(entity, in_mb, out_mb, num, nb_err, true); +} + +static uint16_t +pdcp_post_process_uplane_sn_12_dl(const struct rte_pdcp_entity *entity, + struct rte_mbuf *in_mb[], + struct rte_mbuf *out_mb[], + uint16_t num, uint16_t *nb_err) +{ + return pdcp_post_process_uplane_sn_12_dl_flags(entity, in_mb, out_mb, num, nb_err, false); +} + +static inline uint16_t +pdcp_post_process_uplane_sn_18_dl_flags(const struct rte_pdcp_entity *entity, + struct rte_mbuf *in_mb[], + struct rte_mbuf *out_mb[], + uint16_t num, uint16_t *nb_err_ret, + const bool is_integ_protected) +{ + struct entity_priv *en_priv = entity_priv_get(entity); + struct rte_pdcp_up_data_pdu_sn_18_hdr *pdu_hdr; + const uint32_t aad_sz = en_priv->aad_sz; + int i, nb_success = 0, nb_err = 0; + struct rte_mbuf *mb, *err_mb[num]; + int32_t rsn = 0; + uint32_t count; + + const uint32_t hdr_trim_sz = en_priv->hdr_sz + aad_sz; + + for (i = 0; i < num; i++) { + mb = in_mb[i]; + if (unlikely(mb->ol_flags & RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED)) + goto error; + + pdu_hdr = rte_pktmbuf_mtod_offset(mb, struct rte_pdcp_up_data_pdu_sn_18_hdr *, + aad_sz); + + /* Check for PDU type */ + if (likely(pdu_hdr->d_c == RTE_PDCP_PDU_TYPE_DATA)) + rsn = ((pdu_hdr->sn_17_16 << 16) | (pdu_hdr->sn_15_8 << 8) | + (pdu_hdr->sn_7_0)); + 
else + rte_panic("Control PDU should not be received"); + + if (unlikely(pdcp_sn_count_get(en_priv->state.rx_deliv, rsn, &count, + RTE_SECURITY_PDCP_SN_SIZE_18))) + goto error; + + if (unlikely(!pdcp_post_process_update_entity_state(entity, count))) + goto error; + + pdcp_packet_strip(mb, hdr_trim_sz, is_integ_protected); + out_mb[nb_success++] = mb; + continue; + +error: + err_mb[nb_err++] = mb; + } + + if (unlikely(nb_err != 0)) + rte_memcpy(&out_mb[nb_success], err_mb, nb_err * sizeof(struct rte_mbuf *)); + + *nb_err_ret = nb_err; + return nb_success; +} + +static uint16_t +pdcp_post_process_uplane_sn_18_dl_ip(const struct rte_pdcp_entity *entity, + struct rte_mbuf *in_mb[], + struct rte_mbuf *out_mb[], + uint16_t num, uint16_t *nb_err) +{ + return pdcp_post_process_uplane_sn_18_dl_flags(entity, in_mb, out_mb, num, nb_err, true); +} + +static uint16_t +pdcp_post_process_uplane_sn_18_dl(const struct rte_pdcp_entity *entity, + struct rte_mbuf *in_mb[], + struct rte_mbuf *out_mb[], + uint16_t num, uint16_t *nb_err) +{ + return pdcp_post_process_uplane_sn_18_dl_flags(entity, in_mb, out_mb, num, nb_err, false); +} + +static uint16_t +pdcp_post_process_cplane_sn_12_dl(const struct rte_pdcp_entity *entity, + struct rte_mbuf *in_mb[], + struct rte_mbuf *out_mb[], + uint16_t num, uint16_t *nb_err_ret) +{ + struct entity_priv *en_priv = entity_priv_get(entity); + struct rte_pdcp_cp_data_pdu_sn_12_hdr *pdu_hdr; + const uint32_t aad_sz = en_priv->aad_sz; + int i, nb_success = 0, nb_err = 0; + struct rte_mbuf *err_mb[num]; + struct rte_mbuf *mb; + uint32_t count; + int32_t rsn; + + const uint32_t hdr_trim_sz = en_priv->hdr_sz + aad_sz; + + for (i = 0; i < num; i++) { + mb = in_mb[i]; + if (unlikely(mb->ol_flags & RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED)) + goto error; + + pdu_hdr = rte_pktmbuf_mtod_offset(mb, struct rte_pdcp_cp_data_pdu_sn_12_hdr *, + aad_sz); + rsn = ((pdu_hdr->sn_11_8 << 8) | (pdu_hdr->sn_7_0)); + + if (unlikely(pdcp_sn_count_get(en_priv->state.rx_deliv, rsn, &count, + RTE_SECURITY_PDCP_SN_SIZE_12))) + goto error; + + if (unlikely(!pdcp_post_process_update_entity_state(entity, count))) + goto error; + + pdcp_packet_strip(mb, hdr_trim_sz, true); + + out_mb[nb_success++] = mb; + continue; + +error: + err_mb[nb_err++] = mb; + } + + if (unlikely(nb_err != 0)) + rte_memcpy(&out_mb[nb_success], err_mb, nb_err * sizeof(struct rte_mbuf *)); + + *nb_err_ret = nb_err; + return nb_success; +} + static int pdcp_pre_post_func_set(struct rte_pdcp_entity *entity, const struct rte_pdcp_entity_conf *conf) { + struct entity_priv *en_priv = entity_priv_get(entity); + entity->pre_process = NULL; entity->post_process = NULL; @@ -342,6 +756,13 @@ pdcp_pre_post_func_set(struct rte_pdcp_entity *entity, const struct rte_pdcp_ent entity->post_process = pdcp_post_process_ul; } + if ((conf->pdcp_xfrm.domain == RTE_SECURITY_PDCP_MODE_CONTROL) && + (conf->pdcp_xfrm.sn_size == RTE_SECURITY_PDCP_SN_SIZE_12) && + (conf->pdcp_xfrm.pkt_dir == RTE_SECURITY_PDCP_DOWNLINK)) { + entity->pre_process = pdcp_pre_process_cplane_sn_12_dl; + entity->post_process = pdcp_post_process_cplane_sn_12_dl; + } + if ((conf->pdcp_xfrm.domain == RTE_SECURITY_PDCP_MODE_DATA) && (conf->pdcp_xfrm.sn_size == RTE_SECURITY_PDCP_SN_SIZE_12) && (conf->pdcp_xfrm.pkt_dir == RTE_SECURITY_PDCP_UPLINK)) { @@ -356,6 +777,38 @@ pdcp_pre_post_func_set(struct rte_pdcp_entity *entity, const struct rte_pdcp_ent entity->post_process = pdcp_post_process_ul; } + if ((conf->pdcp_xfrm.domain == RTE_SECURITY_PDCP_MODE_DATA) && + (conf->pdcp_xfrm.sn_size == 
RTE_SECURITY_PDCP_SN_SIZE_12) && + (conf->pdcp_xfrm.pkt_dir == RTE_SECURITY_PDCP_DOWNLINK) && + (en_priv->flags.is_authenticated)) { + entity->pre_process = pdcp_pre_process_uplane_sn_12_dl_ip; + entity->post_process = pdcp_post_process_uplane_sn_12_dl_ip; + } + + if ((conf->pdcp_xfrm.domain == RTE_SECURITY_PDCP_MODE_DATA) && + (conf->pdcp_xfrm.sn_size == RTE_SECURITY_PDCP_SN_SIZE_12) && + (conf->pdcp_xfrm.pkt_dir == RTE_SECURITY_PDCP_DOWNLINK) && + (!en_priv->flags.is_authenticated)) { + entity->pre_process = pdcp_pre_process_uplane_sn_12_dl; + entity->post_process = pdcp_post_process_uplane_sn_12_dl; + } + + if ((conf->pdcp_xfrm.domain == RTE_SECURITY_PDCP_MODE_DATA) && + (conf->pdcp_xfrm.sn_size == RTE_SECURITY_PDCP_SN_SIZE_18) && + (conf->pdcp_xfrm.pkt_dir == RTE_SECURITY_PDCP_DOWNLINK) && + (en_priv->flags.is_authenticated)) { + entity->pre_process = pdcp_pre_process_uplane_sn_18_dl_ip; + entity->post_process = pdcp_post_process_uplane_sn_18_dl_ip; + } + + if ((conf->pdcp_xfrm.domain == RTE_SECURITY_PDCP_MODE_DATA) && + (conf->pdcp_xfrm.sn_size == RTE_SECURITY_PDCP_SN_SIZE_18) && + (conf->pdcp_xfrm.pkt_dir == RTE_SECURITY_PDCP_DOWNLINK) && + (!en_priv->flags.is_authenticated)) { + entity->pre_process = pdcp_pre_process_uplane_sn_18_dl; + entity->post_process = pdcp_post_process_uplane_sn_18_dl; + } + if (entity->pre_process == NULL || entity->post_process == NULL) return -ENOTSUP; From patchwork Fri Apr 14 17:44:58 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Anoob Joseph X-Patchwork-Id: 126108 X-Patchwork-Delegate: gakhil@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 898FA42943; Fri, 14 Apr 2023 19:46:25 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id D13C642BC9; Fri, 14 Apr 2023 19:46:09 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0a-0016f401.pphosted.com [67.231.148.174]) by mails.dpdk.org (Postfix) with ESMTP id 6125742D0E for ; Fri, 14 Apr 2023 19:46:08 +0200 (CEST) Received: from pps.filterd (m0045849.ppops.net [127.0.0.1]) by mx0a-0016f401.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id 33E906uw011272; Fri, 14 Apr 2023 10:46:06 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0220; bh=/g5neQgmxsUMX2zwYsmsAd14TKSeA8NBjFr75c5pRkw=; b=bnarT2J9YAdx6DhCza9YVBIkjPCPCG8f9OIYqJuuFLgcXqGR8NWX8MoNK7XApRizG1Cb mpxWiHfRQIr0t6ocsGI5E/3AyltWMGgYq1+MQ4D/R0hR9pQlncI35FCfv218CJnfIoz0 ejzXPaUZt8hqSGnRVUuG74m2n7KR85uIQovGzUE3rRQH/aJB8uko9LeWR4uc/M9uFOfW sqlDo5C3PE5RmASFVXKuWFQSojowJweARc6jw0QDhV71NEmRnKrRP4cqI9Vp2df896dz MUjCJPIkprfM1Bsm9vfs2v93PICMgrU9y/61gLy+kcF8iFqagtNrT9S5sW0JePJc+sMp tA== Received: from dc5-exch01.marvell.com ([199.233.59.181]) by mx0a-0016f401.pphosted.com (PPS) with ESMTPS id 3py3tk2ecs-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Fri, 14 Apr 2023 10:46:06 -0700 Received: from DC5-EXCH01.marvell.com (10.69.176.38) by DC5-EXCH01.marvell.com (10.69.176.38) with Microsoft SMTP Server (TLS) id 15.0.1497.48; Fri, 14 Apr 2023 10:46:04 -0700 Received: from maili.marvell.com (10.69.176.80) by DC5-EXCH01.marvell.com (10.69.176.38) with Microsoft SMTP Server id 15.0.1497.48 via 
Frontend Transport; Fri, 14 Apr 2023 10:46:04 -0700 Received: from BG-LT92004.corp.innovium.com (unknown [10.28.161.183]) by maili.marvell.com (Postfix) with ESMTP id 138D53F7080; Fri, 14 Apr 2023 10:45:59 -0700 (PDT) From: Anoob Joseph To: Thomas Monjalon , Akhil Goyal , Jerin Jacob , Konstantin Ananyev , Bernard Iremonger CC: Hemant Agrawal , =?utf-8?q?Mattias_R=C3=B6nnblom?= , "Kiran Kumar K" , Volodymyr Fialko , , Olivier Matz Subject: [PATCH v2 08/22] pdcp: add IV generation routines Date: Fri, 14 Apr 2023 23:14:58 +0530 Message-ID: <20230414174512.642-9-anoobj@marvell.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230414174512.642-1-anoobj@marvell.com> References: <20221222092522.1628-1-anoobj@marvell.com> <20230414174512.642-1-anoobj@marvell.com> MIME-Version: 1.0 X-Proofpoint-GUID: s942vTQOFWSbnFv-7B-Ah-lv477VS5ZZ X-Proofpoint-ORIG-GUID: s942vTQOFWSbnFv-7B-Ah-lv477VS5ZZ X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.254,Aquarius:18.0.942,Hydra:6.0.573,FMLib:17.11.170.22 definitions=2023-04-14_10,2023-04-14_01,2023-02-09_01 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org For PDCP, IV generated has varying formats depending on the ciphering and authentication algorithm used. Add routines to populate IV accordingly. Signed-off-by: Anoob Joseph Signed-off-by: Volodymyr Fialko --- lib/pdcp/pdcp_entity.h | 87 ++++++++++++ lib/pdcp/pdcp_process.c | 284 ++++++++++++++++++++++++++++++++++++++++ 2 files changed, 371 insertions(+) diff --git a/lib/pdcp/pdcp_entity.h b/lib/pdcp/pdcp_entity.h index d2d9bbe149..3108795977 100644 --- a/lib/pdcp/pdcp_entity.h +++ b/lib/pdcp/pdcp_entity.h @@ -26,6 +26,89 @@ struct entity_state { uint32_t rx_reord; }; +union auth_iv_partial { + /* For AES-CMAC, there is no IV, but message gets prepended */ + struct { +#if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN + uint64_t count : 32; + uint64_t zero_38_39 : 2; + uint64_t direction : 1; + uint64_t bearer : 5; + uint64_t zero_40_63 : 24; +#else + uint64_t count : 32; + uint64_t bearer : 5; + uint64_t direction : 1; + uint64_t zero_38_39 : 2; + uint64_t zero_40_63 : 24; +#endif + } aes_cmac; + struct { +#if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN + uint64_t count : 32; + uint64_t zero_37_39 : 3; + uint64_t bearer : 5; + uint64_t zero_40_63 : 24; + + uint64_t rsvd_65_71 : 7; + uint64_t direction_64 : 1; + uint64_t rsvd_72_111 : 40; + uint64_t rsvd_113_119 : 7; + uint64_t direction_112 : 1; + uint64_t rsvd_120_127 : 8; +#else + uint64_t count : 32; + uint64_t bearer : 5; + uint64_t zero_37_39 : 3; + uint64_t zero_40_63 : 24; + + uint64_t direction_64 : 1; + uint64_t rsvd_65_71 : 7; + uint64_t rsvd_72_111 : 40; + uint64_t direction_112 : 1; + uint64_t rsvd_113_119 : 7; + uint64_t rsvd_120_127 : 8; +#endif + } zs; + uint64_t u64[2]; +}; + +union cipher_iv_partial { + struct { +#if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN + uint64_t count : 32; + uint64_t zero_38_39 : 2; + uint64_t direction : 1; + uint64_t bearer : 5; + uint64_t zero_40_63 : 24; +#else + uint64_t count : 32; + uint64_t bearer : 5; + uint64_t direction : 1; + uint64_t zero_38_39 : 2; + uint64_t zero_40_63 : 24; +#endif + uint64_t zero_64_127; + } aes_ctr; + struct { +#if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN + uint64_t count : 32; + uint64_t zero_38_39 : 2; + uint64_t direction : 1; + uint64_t bearer : 5; + uint64_t zero_40_63 : 24; +#else + uint64_t count : 32; + uint64_t bearer : 5; + 
uint64_t direction : 1; + uint64_t zero_38_39 : 2; + uint64_t zero_40_63 : 24; +#endif + uint64_t rsvd_64_127; + } zs; + uint64_t u64[2]; +}; + /* * Layout of PDCP entity: [rte_pdcp_entity] [entity_priv] [entity_dl/ul] */ @@ -35,6 +118,10 @@ struct entity_priv { struct rte_cryptodev_sym_session *crypto_sess; /** Entity specific IV generation function. */ iv_gen_t iv_gen; + /** Pre-prepared auth IV. */ + union auth_iv_partial auth_iv_part; + /** Pre-prepared cipher IV. */ + union cipher_iv_partial cipher_iv_part; /** Entity state variables. */ struct entity_state state; /** Flags. */ diff --git a/lib/pdcp/pdcp_process.c b/lib/pdcp/pdcp_process.c index 79d6ca352a..9c1a5e0669 100644 --- a/lib/pdcp/pdcp_process.c +++ b/lib/pdcp/pdcp_process.c @@ -13,6 +13,181 @@ #include "pdcp_entity.h" #include "pdcp_process.h" +/* Enum of supported algorithms for ciphering */ +enum pdcp_cipher_algo { + PDCP_CIPHER_ALGO_NULL, + PDCP_CIPHER_ALGO_AES, + PDCP_CIPHER_ALGO_ZUC, + PDCP_CIPHER_ALGO_SNOW3G, + PDCP_CIPHER_ALGO_MAX +}; + +/* Enum of supported algorithms for integrity */ +enum pdcp_auth_algo { + PDCP_AUTH_ALGO_NULL, + PDCP_AUTH_ALGO_AES, + PDCP_AUTH_ALGO_ZUC, + PDCP_AUTH_ALGO_SNOW3G, + PDCP_AUTH_ALGO_MAX +}; + +/* IV generation functions based on type of operation (cipher - auth) */ + +static void +pdcp_iv_gen_null_null(struct rte_crypto_op *cop, const struct entity_priv *en_priv, uint32_t count) +{ + /* No IV required for NULL cipher + NULL auth */ + RTE_SET_USED(cop); + RTE_SET_USED(en_priv); + RTE_SET_USED(count); +} + +static void +pdcp_iv_gen_null_aes_cmac(struct rte_crypto_op *cop, const struct entity_priv *en_priv, + uint32_t count) +{ + struct rte_crypto_sym_op *op = cop->sym; + struct rte_mbuf *mb = op->m_src; + uint8_t *m_ptr; + uint64_t m; + + /* AES-CMAC requires message to be prepended with info on count etc */ + + /* Prepend by 8 bytes to add custom message */ + m_ptr = (uint8_t *)rte_pktmbuf_prepend(mb, 8); + + m = en_priv->auth_iv_part.u64[0] | ((uint64_t)(rte_cpu_to_be_32(count))); + + rte_memcpy(m_ptr, &m, 8); +} + +static void +pdcp_iv_gen_null_zs(struct rte_crypto_op *cop, const struct entity_priv *en_priv, uint32_t count) +{ + uint64_t iv_u64[2]; + uint8_t *iv; + + iv = rte_crypto_op_ctod_offset(cop, uint8_t *, PDCP_IV_OFFSET); + + iv_u64[0] = en_priv->auth_iv_part.u64[0] | ((uint64_t)(rte_cpu_to_be_32(count))); + rte_memcpy(iv, &iv_u64[0], 8); + + iv_u64[1] = iv_u64[0] ^ en_priv->auth_iv_part.u64[1]; + rte_memcpy(iv + 8, &iv_u64[1], 8); +} + +static void +pdcp_iv_gen_aes_ctr_null(struct rte_crypto_op *cop, const struct entity_priv *en_priv, + uint32_t count) +{ + uint64_t iv_u64[2]; + uint8_t *iv; + + iv = rte_crypto_op_ctod_offset(cop, uint8_t *, PDCP_IV_OFFSET); + + iv_u64[0] = en_priv->cipher_iv_part.u64[0] | ((uint64_t)(rte_cpu_to_be_32(count))); + iv_u64[1] = 0; + rte_memcpy(iv, iv_u64, 16); +} + +static void +pdcp_iv_gen_zs_null(struct rte_crypto_op *cop, const struct entity_priv *en_priv, uint32_t count) +{ + uint64_t iv_u64; + uint8_t *iv; + + iv = rte_crypto_op_ctod_offset(cop, uint8_t *, PDCP_IV_OFFSET); + + iv_u64 = en_priv->cipher_iv_part.u64[0] | ((uint64_t)(rte_cpu_to_be_32(count))); + rte_memcpy(iv, &iv_u64, 8); + rte_memcpy(iv + 8, &iv_u64, 8); +} + +static void +pdcp_iv_gen_zs_zs(struct rte_crypto_op *cop, const struct entity_priv *en_priv, uint32_t count) +{ + uint64_t iv_u64[2]; + uint8_t *iv; + + iv = rte_crypto_op_ctod_offset(cop, uint8_t *, PDCP_IV_OFFSET); + + /* Generating cipher IV */ + iv_u64[0] = en_priv->cipher_iv_part.u64[0] | 
((uint64_t)(rte_cpu_to_be_32(count))); + rte_memcpy(iv, &iv_u64[0], 8); + rte_memcpy(iv + 8, &iv_u64[0], 8); + + iv += PDCP_IV_LEN; + + /* Generating auth IV */ + iv_u64[0] = en_priv->auth_iv_part.u64[0] | ((uint64_t)(rte_cpu_to_be_32(count))); + rte_memcpy(iv, &iv_u64[0], 8); + + iv_u64[1] = iv_u64[0] ^ en_priv->auth_iv_part.u64[1]; + rte_memcpy(iv + 8, &iv_u64[1], 8); +} + +static void +pdcp_iv_gen_zs_aes_cmac(struct rte_crypto_op *cop, const struct entity_priv *en_priv, + uint32_t count) +{ + struct rte_crypto_sym_op *op = cop->sym; + struct rte_mbuf *mb = op->m_src; + uint8_t *m_ptr, *iv; + uint64_t iv_u64[2]; + uint64_t m; + + iv = rte_crypto_op_ctod_offset(cop, uint8_t *, PDCP_IV_OFFSET); + iv_u64[0] = en_priv->cipher_iv_part.u64[0] | ((uint64_t)(rte_cpu_to_be_32(count))); + rte_memcpy(iv, &iv_u64[0], 8); + rte_memcpy(iv + 8, &iv_u64[0], 8); + + m_ptr = (uint8_t *)rte_pktmbuf_prepend(mb, 8); + m = en_priv->auth_iv_part.u64[0] | ((uint64_t)(rte_cpu_to_be_32(count))); + rte_memcpy(m_ptr, &m, 8); +} + +static void +pdcp_iv_gen_aes_ctr_aes_cmac(struct rte_crypto_op *cop, const struct entity_priv *en_priv, + uint32_t count) +{ + struct rte_crypto_sym_op *op = cop->sym; + struct rte_mbuf *mb = op->m_src; + uint8_t *m_ptr, *iv; + uint64_t iv_u64[2]; + uint64_t m; + + iv = rte_crypto_op_ctod_offset(cop, uint8_t *, PDCP_IV_OFFSET); + + iv_u64[0] = en_priv->cipher_iv_part.u64[0] | ((uint64_t)(rte_cpu_to_be_32(count))); + iv_u64[1] = 0; + rte_memcpy(iv, iv_u64, PDCP_IV_LEN); + + m_ptr = (uint8_t *)rte_pktmbuf_prepend(mb, 8); + m = en_priv->auth_iv_part.u64[0] | ((uint64_t)(rte_cpu_to_be_32(count))); + rte_memcpy(m_ptr, &m, 8); +} + +static void +pdcp_iv_gen_aes_ctr_zs(struct rte_crypto_op *cop, const struct entity_priv *en_priv, uint32_t count) +{ + uint64_t iv_u64[2]; + uint8_t *iv; + + iv = rte_crypto_op_ctod_offset(cop, uint8_t *, PDCP_IV_OFFSET); + + iv_u64[0] = en_priv->cipher_iv_part.u64[0] | ((uint64_t)(rte_cpu_to_be_32(count))); + iv_u64[1] = 0; + rte_memcpy(iv, iv_u64, PDCP_IV_LEN); + + iv += PDCP_IV_LEN; + + iv_u64[0] = en_priv->auth_iv_part.u64[0] | ((uint64_t)(rte_cpu_to_be_32(count))); + rte_memcpy(iv, &iv_u64[0], 8); + + iv_u64[1] = iv_u64[0] ^ en_priv->auth_iv_part.u64[1]; + rte_memcpy(iv + 8, &iv_u64[1], 8); +} + static int pdcp_crypto_xfrm_get(const struct rte_pdcp_entity_conf *conf, struct rte_crypto_sym_xform **c_xfrm, struct rte_crypto_sym_xform **a_xfrm) @@ -36,6 +211,111 @@ pdcp_crypto_xfrm_get(const struct rte_pdcp_entity_conf *conf, struct rte_crypto_ return 0; } +static int +pdcp_iv_gen_func_set(struct rte_pdcp_entity *entity, const struct rte_pdcp_entity_conf *conf) +{ + struct rte_crypto_sym_xform *c_xfrm, *a_xfrm; + enum rte_security_pdcp_direction direction; + enum pdcp_cipher_algo ciph_algo; + enum pdcp_auth_algo auth_algo; + struct entity_priv *en_priv; + int ret; + + en_priv = entity_priv_get(entity); + + direction = conf->pdcp_xfrm.pkt_dir; + if (conf->reverse_iv_direction) + direction = !direction; + + ret = pdcp_crypto_xfrm_get(conf, &c_xfrm, &a_xfrm); + if (ret) + return ret; + + if (c_xfrm == NULL) + return -EINVAL; + + memset(&en_priv->auth_iv_part, 0, sizeof(en_priv->auth_iv_part)); + memset(&en_priv->cipher_iv_part, 0, sizeof(en_priv->cipher_iv_part)); + + switch (c_xfrm->cipher.algo) { + case RTE_CRYPTO_CIPHER_NULL: + ciph_algo = PDCP_CIPHER_ALGO_NULL; + break; + case RTE_CRYPTO_CIPHER_AES_CTR: + ciph_algo = PDCP_CIPHER_ALGO_AES; + en_priv->cipher_iv_part.aes_ctr.bearer = conf->pdcp_xfrm.bearer; + en_priv->cipher_iv_part.aes_ctr.direction = direction; + 
break; + case RTE_CRYPTO_CIPHER_SNOW3G_UEA2: + ciph_algo = PDCP_CIPHER_ALGO_SNOW3G; + en_priv->cipher_iv_part.zs.bearer = conf->pdcp_xfrm.bearer; + en_priv->cipher_iv_part.zs.direction = direction; + break; + case RTE_CRYPTO_CIPHER_ZUC_EEA3: + ciph_algo = PDCP_CIPHER_ALGO_ZUC; + en_priv->cipher_iv_part.zs.bearer = conf->pdcp_xfrm.bearer; + en_priv->cipher_iv_part.zs.direction = direction; + break; + default: + return -ENOTSUP; + } + + if (a_xfrm != NULL) { + switch (a_xfrm->auth.algo) { + case RTE_CRYPTO_AUTH_NULL: + auth_algo = PDCP_AUTH_ALGO_NULL; + break; + case RTE_CRYPTO_AUTH_AES_CMAC: + auth_algo = PDCP_AUTH_ALGO_AES; + en_priv->auth_iv_part.aes_cmac.bearer = conf->pdcp_xfrm.bearer; + en_priv->auth_iv_part.aes_cmac.direction = direction; + break; + case RTE_CRYPTO_AUTH_SNOW3G_UIA2: + auth_algo = PDCP_AUTH_ALGO_SNOW3G; + en_priv->auth_iv_part.zs.bearer = conf->pdcp_xfrm.bearer; + en_priv->auth_iv_part.zs.direction_64 = direction; + en_priv->auth_iv_part.zs.direction_112 = direction; + break; + case RTE_CRYPTO_AUTH_ZUC_EIA3: + auth_algo = PDCP_AUTH_ALGO_ZUC; + en_priv->auth_iv_part.zs.bearer = conf->pdcp_xfrm.bearer; + en_priv->auth_iv_part.zs.direction_64 = direction; + en_priv->auth_iv_part.zs.direction_112 = direction; + break; + default: + return -ENOTSUP; + } + } else { + auth_algo = PDCP_AUTH_ALGO_NULL; + } + + static const iv_gen_t iv_gen_map[PDCP_CIPHER_ALGO_MAX][PDCP_AUTH_ALGO_MAX] = { + [PDCP_CIPHER_ALGO_NULL][PDCP_AUTH_ALGO_NULL] = pdcp_iv_gen_null_null, + [PDCP_CIPHER_ALGO_NULL][PDCP_AUTH_ALGO_AES] = pdcp_iv_gen_null_aes_cmac, + [PDCP_CIPHER_ALGO_NULL][PDCP_AUTH_ALGO_SNOW3G] = pdcp_iv_gen_null_zs, + [PDCP_CIPHER_ALGO_NULL][PDCP_AUTH_ALGO_ZUC] = pdcp_iv_gen_null_zs, + + [PDCP_CIPHER_ALGO_AES][PDCP_AUTH_ALGO_NULL] = pdcp_iv_gen_aes_ctr_null, + [PDCP_CIPHER_ALGO_AES][PDCP_AUTH_ALGO_AES] = pdcp_iv_gen_aes_ctr_aes_cmac, + [PDCP_CIPHER_ALGO_AES][PDCP_AUTH_ALGO_SNOW3G] = pdcp_iv_gen_aes_ctr_zs, + [PDCP_CIPHER_ALGO_AES][PDCP_AUTH_ALGO_ZUC] = pdcp_iv_gen_aes_ctr_zs, + + [PDCP_CIPHER_ALGO_SNOW3G][PDCP_AUTH_ALGO_NULL] = pdcp_iv_gen_zs_null, + [PDCP_CIPHER_ALGO_SNOW3G][PDCP_AUTH_ALGO_AES] = pdcp_iv_gen_zs_aes_cmac, + [PDCP_CIPHER_ALGO_SNOW3G][PDCP_AUTH_ALGO_SNOW3G] = pdcp_iv_gen_zs_zs, + [PDCP_CIPHER_ALGO_SNOW3G][PDCP_AUTH_ALGO_ZUC] = pdcp_iv_gen_zs_zs, + + [PDCP_CIPHER_ALGO_ZUC][PDCP_AUTH_ALGO_NULL] = pdcp_iv_gen_zs_null, + [PDCP_CIPHER_ALGO_ZUC][PDCP_AUTH_ALGO_AES] = pdcp_iv_gen_zs_aes_cmac, + [PDCP_CIPHER_ALGO_ZUC][PDCP_AUTH_ALGO_SNOW3G] = pdcp_iv_gen_zs_zs, + [PDCP_CIPHER_ALGO_ZUC][PDCP_AUTH_ALGO_ZUC] = pdcp_iv_gen_zs_zs, + }; + + en_priv->iv_gen = iv_gen_map[ciph_algo][auth_algo]; + + return 0; +} + static inline void cop_prepare(const struct entity_priv *en_priv, struct rte_mbuf *mb, struct rte_crypto_op *cop, uint8_t data_offset, uint32_t count, const bool is_auth) @@ -909,6 +1189,10 @@ pdcp_process_func_set(struct rte_pdcp_entity *entity, const struct rte_pdcp_enti en_priv = entity_priv_get(entity); + ret = pdcp_iv_gen_func_set(entity, conf); + if (ret) + return ret; + ret = pdcp_entity_priv_populate(en_priv, conf); if (ret) return ret; From patchwork Fri Apr 14 17:44:59 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Anoob Joseph X-Patchwork-Id: 126109 X-Patchwork-Delegate: gakhil@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 2250A42943; Fri, 14 
Apr 2023 19:46:36 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 8A71642D10; Fri, 14 Apr 2023 19:46:29 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by mails.dpdk.org (Postfix) with ESMTP id 9E22542B8E for ; Fri, 14 Apr 2023 19:46:27 +0200 (CEST) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id 33EDJhLf015785; Fri, 14 Apr 2023 10:46:26 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0220; bh=iuAbPhbPID3Ac6xW21e5erjTMJ30rI5Lwj+mwi2Jzsw=; b=Q1rCMgHv6oAIUc0ohSiAMWLiRZ3pGYWAIPd/E3saDuvxO0KkmzhUc3ST5BwiVFjbYX7l 2Zoz2tOG0CSotDpfO/ik/2I55ptXwyYxJtYh7Ka18AitIfmQNNMbRfs9SjQT8X37i3oR sl1Y6QtJ2cmgSmXVMMxuOFxajiclE6YDIZ8c2Pe414Ju9efJrhAPfNR+cU/2AhJbXj8v S1P6+j+HZcBOWkVBTZapYxg/hFk9BHMrjO/9QQK0IKLRJgsfkhP2ACLTnIYYXsQARUb/ wApr4ChqZSaa676DFN5zdPx4yWdT97QOGuImPz7zIwB5f+P1poGIZFAai7VYT0VVOGxY NA== Received: from dc5-exch02.marvell.com ([199.233.59.182]) by mx0b-0016f401.pphosted.com (PPS) with ESMTPS id 3py646s6nm-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Fri, 14 Apr 2023 10:46:26 -0700 Received: from DC5-EXCH01.marvell.com (10.69.176.38) by DC5-EXCH02.marvell.com (10.69.176.39) with Microsoft SMTP Server (TLS) id 15.0.1497.48; Fri, 14 Apr 2023 10:46:24 -0700 Received: from maili.marvell.com (10.69.176.80) by DC5-EXCH01.marvell.com (10.69.176.38) with Microsoft SMTP Server id 15.0.1497.48 via Frontend Transport; Fri, 14 Apr 2023 10:46:24 -0700 Received: from BG-LT92004.corp.innovium.com (unknown [10.28.161.183]) by maili.marvell.com (Postfix) with ESMTP id 5A44D3F7082; Fri, 14 Apr 2023 10:46:05 -0700 (PDT) From: Anoob Joseph To: Thomas Monjalon , Akhil Goyal , Jerin Jacob , Konstantin Ananyev , Bernard Iremonger CC: Hemant Agrawal , =?utf-8?q?Mattias_R=C3=B6nnblom?= , "Kiran Kumar K" , Volodymyr Fialko , , Olivier Matz Subject: [PATCH v2 09/22] app/test: add lib pdcp tests Date: Fri, 14 Apr 2023 23:14:59 +0530 Message-ID: <20230414174512.642-10-anoobj@marvell.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230414174512.642-1-anoobj@marvell.com> References: <20221222092522.1628-1-anoobj@marvell.com> <20230414174512.642-1-anoobj@marvell.com> MIME-Version: 1.0 X-Proofpoint-GUID: 5XgNM2JABUSgtR1XkTEPIfxXodK3gpyA X-Proofpoint-ORIG-GUID: 5XgNM2JABUSgtR1XkTEPIfxXodK3gpyA X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.254,Aquarius:18.0.942,Hydra:6.0.573,FMLib:17.11.170.22 definitions=2023-04-14_10,2023-04-14_01,2023-02-09_01 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Add tests to verify lib PDCP operations. Tests leverage existing PDCP test vectors. 
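Each entity under test is seeded with the COUNT implied by the chosen vector: the HFN is taken from the vector table and the SN is parsed out of the PDCP header bytes that the vector already carries, giving COUNT = (HFN << SN size) | SN. A small illustrative sketch of that derivation for the 12-bit SN case (the helper name and header bytes below are made up for the example, not taken from the actual vectors):

#include <stdint.h>
#include <stdio.h>

/* Parse a 12-bit SN out of the first two (big-endian) PDCP header bytes. */
static uint32_t
sn_from_hdr_12(const uint8_t *hdr)
{
	return (((uint32_t)hdr[0] << 8) | hdr[1]) & 0xfff;
}

int main(void)
{
	const uint8_t vec_hdr[2] = { 0x80, 0x8b }; /* data PDU, D/C = 1, SN = 0x08b */
	const uint32_t vec_hfn = 1;                /* HFN from the vector table */
	const unsigned int sn_size = 12;

	uint32_t count = (vec_hfn << sn_size) | sn_from_hdr_12(vec_hdr);
	printf("initial COUNT = 0x%x\n", count);   /* prints 0x108b */
	return 0;
}

For the 18-bit SN case the same idea applies with three header bytes and an 18-bit mask, as done by pdcp_sn_from_raw_get() in the test code below.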
Signed-off-by: Anoob Joseph Signed-off-by: Volodymyr Fialko --- app/test/meson.build | 1 + app/test/test_cryptodev.h | 3 + app/test/test_pdcp.c | 729 ++++++++++++++++++++++++++++++++++++++ 3 files changed, 733 insertions(+) create mode 100644 app/test/test_pdcp.c diff --git a/app/test/meson.build b/app/test/meson.build index 52d9088578..0f658aa2ab 100644 --- a/app/test/meson.build +++ b/app/test/meson.build @@ -96,6 +96,7 @@ test_sources = files( 'test_meter.c', 'test_mcslock.c', 'test_mp_secondary.c', + 'test_pdcp.c', 'test_per_lcore.c', 'test_pflock.c', 'test_pmd_perf.c', diff --git a/app/test/test_cryptodev.h b/app/test/test_cryptodev.h index abd795f54a..89057dba22 100644 --- a/app/test/test_cryptodev.h +++ b/app/test/test_cryptodev.h @@ -4,6 +4,9 @@ #ifndef TEST_CRYPTODEV_H_ #define TEST_CRYPTODEV_H_ +#include +#include + #define HEX_DUMP 0 #define FALSE 0 diff --git a/app/test/test_pdcp.c b/app/test/test_pdcp.c new file mode 100644 index 0000000000..cb88817004 --- /dev/null +++ b/app/test/test_pdcp.c @@ -0,0 +1,729 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2023 Marvell. + */ + +#include +#include +#include +#include + +#include "test.h" +#include "test_cryptodev.h" +#include "test_cryptodev_security_pdcp_test_vectors.h" + +#define NB_DESC 1024 +#define CDEV_INVALID_ID UINT8_MAX +#define NB_TESTS RTE_DIM(pdcp_test_params) + +struct pdcp_testsuite_params { + struct rte_mempool *mbuf_pool; + struct rte_mempool *cop_pool; + struct rte_mempool *sess_pool; + bool cdevs_used[RTE_CRYPTO_MAX_DEVS]; +}; + +static struct pdcp_testsuite_params testsuite_params; + +struct pdcp_test_conf { + struct rte_pdcp_entity_conf entity; + struct rte_crypto_sym_xform c_xfrm; + struct rte_crypto_sym_xform a_xfrm; + bool is_integrity_protected; + uint8_t input[RTE_PDCP_CTRL_PDU_SIZE_MAX]; + uint32_t input_len; + uint8_t output[RTE_PDCP_CTRL_PDU_SIZE_MAX]; + uint32_t output_len; +}; + +static inline int +pdcp_hdr_size_get(enum rte_security_pdcp_sn_size sn_size) +{ + return RTE_ALIGN_MUL_CEIL(sn_size, 8) / 8; +} + +static int +cryptodev_init(int dev_id) +{ + struct pdcp_testsuite_params *ts_params = &testsuite_params; + struct rte_cryptodev_qp_conf qp_conf; + struct rte_cryptodev_info dev_info; + struct rte_cryptodev_config config; + int ret, socket_id; + + /* Check if device was already initialized */ + if (ts_params->cdevs_used[dev_id]) + return 0; + + rte_cryptodev_info_get(dev_id, &dev_info); + + if (dev_info.max_nb_queue_pairs < 1) { + RTE_LOG(ERR, USER1, "Cryptodev doesn't have sufficient queue pairs available\n"); + return -ENODEV; + } + + socket_id = rte_socket_id(); + + memset(&config, 0, sizeof(config)); + config.nb_queue_pairs = 1; + config.socket_id = socket_id; + + ret = rte_cryptodev_configure(dev_id, &config); + if (ret < 0) { + RTE_LOG(ERR, USER1, "Could not configure cryptodev - %d\n", dev_id); + return -ENODEV; + } + + memset(&qp_conf, 0, sizeof(qp_conf)); + qp_conf.nb_descriptors = NB_DESC; + + ret = rte_cryptodev_queue_pair_setup(dev_id, 0, &qp_conf, socket_id); + if (ret < 0) { + RTE_LOG(ERR, USER1, "Could not configure queue pair\n"); + return -ENODEV; + } + + ret = rte_cryptodev_start(dev_id); + if (ret < 0) { + RTE_LOG(ERR, USER1, "Could not start cryptodev\n"); + return -ENODEV; + } + + /* Mark device as initialized */ + ts_params->cdevs_used[dev_id] = true; + + return 0; +} + +static void +cryptodev_fini(int dev_id) +{ + rte_cryptodev_stop(dev_id); +} + +static unsigned int +cryptodev_sess_priv_max_req_get(void) +{ + struct rte_cryptodev_info info; + unsigned int 
sess_priv_sz; + int i, nb_dev; + void *sec_ctx; + + nb_dev = rte_cryptodev_count(); + + sess_priv_sz = 0; + + for (i = 0; i < nb_dev; i++) { + rte_cryptodev_info_get(i, &info); + sess_priv_sz = RTE_MAX(sess_priv_sz, rte_cryptodev_sym_get_private_session_size(i)); + if (info.feature_flags & RTE_CRYPTODEV_FF_SECURITY) { + sec_ctx = rte_cryptodev_get_sec_ctx(i); + sess_priv_sz = RTE_MAX(sess_priv_sz, + rte_security_session_get_size(sec_ctx)); + } + } + + return sess_priv_sz; +} + +static int +testsuite_setup(void) +{ + struct pdcp_testsuite_params *ts_params = &testsuite_params; + int nb_cdev, sess_priv_size, nb_sess = 1024; + + RTE_SET_USED(pdcp_test_hfn_threshold); + + nb_cdev = rte_cryptodev_count(); + if (nb_cdev < 1) { + RTE_LOG(ERR, USER1, "No crypto devices found.\n"); + return TEST_SKIPPED; + } + + memset(ts_params, 0, sizeof(*ts_params)); + + ts_params->mbuf_pool = rte_pktmbuf_pool_create("mbuf_pool", NUM_MBUFS, MBUF_CACHE_SIZE, 0, + MBUF_SIZE, SOCKET_ID_ANY); + if (ts_params->mbuf_pool == NULL) { + RTE_LOG(ERR, USER1, "Could not create mbuf pool\n"); + return TEST_FAILED; + } + + ts_params->cop_pool = rte_crypto_op_pool_create("cop_pool", RTE_CRYPTO_OP_TYPE_SYMMETRIC, + NUM_MBUFS, MBUF_CACHE_SIZE, + 2 * MAXIMUM_IV_LENGTH, SOCKET_ID_ANY); + if (ts_params->cop_pool == NULL) { + RTE_LOG(ERR, USER1, "Could not create crypto_op pool\n"); + goto mbuf_pool_free; + } + + /* Get max session priv size required */ + sess_priv_size = cryptodev_sess_priv_max_req_get(); + + ts_params->sess_pool = rte_cryptodev_sym_session_pool_create("sess_pool", nb_sess, + sess_priv_size, + RTE_MEMPOOL_CACHE_MAX_SIZE, + 0, SOCKET_ID_ANY); + if (ts_params->sess_pool == NULL) { + RTE_LOG(ERR, USER1, "Could not create session pool\n"); + goto cop_pool_free; + } + + return 0; + +cop_pool_free: + rte_mempool_free(ts_params->cop_pool); + ts_params->cop_pool = NULL; +mbuf_pool_free: + rte_mempool_free(ts_params->mbuf_pool); + ts_params->mbuf_pool = NULL; + return TEST_FAILED; +} + +static void +testsuite_teardown(void) +{ + struct pdcp_testsuite_params *ts_params = &testsuite_params; + uint8_t dev_id; + + for (dev_id = 0; dev_id < RTE_CRYPTO_MAX_DEVS; dev_id++) { + if (ts_params->cdevs_used[dev_id]) + cryptodev_fini(dev_id); + } + + rte_mempool_free(ts_params->sess_pool); + ts_params->sess_pool = NULL; + + rte_mempool_free(ts_params->cop_pool); + ts_params->cop_pool = NULL; + + rte_mempool_free(ts_params->mbuf_pool); + ts_params->mbuf_pool = NULL; +} + +static int +ut_setup_pdcp(void) +{ + return 0; +} + +static void +ut_teardown_pdcp(void) +{ +} + +static int +crypto_caps_cipher_verify(uint8_t dev_id, const struct rte_crypto_sym_xform *c_xfrm) +{ + const struct rte_cryptodev_symmetric_capability *cap; + struct rte_cryptodev_sym_capability_idx cap_idx; + int ret; + + cap_idx.type = RTE_CRYPTO_SYM_XFORM_CIPHER; + cap_idx.algo.cipher = c_xfrm->cipher.algo; + + cap = rte_cryptodev_sym_capability_get(dev_id, &cap_idx); + if (cap == NULL) + return -1; + + ret = rte_cryptodev_sym_capability_check_cipher(cap, c_xfrm->cipher.key.length, + c_xfrm->cipher.iv.length); + + return ret; +} + +static int +crypto_caps_auth_verify(uint8_t dev_id, const struct rte_crypto_sym_xform *a_xfrm) +{ + const struct rte_cryptodev_symmetric_capability *cap; + struct rte_cryptodev_sym_capability_idx cap_idx; + int ret; + + cap_idx.type = RTE_CRYPTO_SYM_XFORM_AUTH; + cap_idx.algo.auth = a_xfrm->auth.algo; + + cap = rte_cryptodev_sym_capability_get(dev_id, &cap_idx); + if (cap == NULL) + return -1; + + ret = 
rte_cryptodev_sym_capability_check_auth(cap, a_xfrm->auth.key.length, + a_xfrm->auth.digest_length, + a_xfrm->auth.iv.length); + + return ret; +} + +static int +cryptodev_id_get(bool is_integrity_protected, const struct rte_crypto_sym_xform *c_xfrm, + const struct rte_crypto_sym_xform *a_xfrm) +{ + int i, nb_devs; + + nb_devs = rte_cryptodev_count(); + + /* Check capabilities */ + + for (i = 0; i < nb_devs; i++) { + if ((crypto_caps_cipher_verify(i, c_xfrm) == 0) && + (!is_integrity_protected || crypto_caps_auth_verify(i, a_xfrm) == 0)) + break; + } + + if (i == nb_devs) + return -1; + + return i; +} + +static int +pdcp_known_vec_verify(struct rte_mbuf *m, const uint8_t *expected, uint32_t expected_pkt_len) +{ + uint8_t *actual = rte_pktmbuf_mtod(m, uint8_t *); + uint32_t actual_pkt_len = rte_pktmbuf_pkt_len(m); + + debug_hexdump(stdout, "Received:", actual, actual_pkt_len); + debug_hexdump(stdout, "Expected:", expected, expected_pkt_len); + + TEST_ASSERT_EQUAL(actual_pkt_len, expected_pkt_len, + "Mismatch in packet lengths [expected: %d, received: %d]", + expected_pkt_len, actual_pkt_len); + + TEST_ASSERT_BUFFERS_ARE_EQUAL(actual, expected, expected_pkt_len, + "Generated packet not as expected"); + + return 0; +} + +static struct rte_crypto_op * +process_crypto_request(uint8_t dev_id, struct rte_crypto_op *op) +{ + if (rte_cryptodev_enqueue_burst(dev_id, 0, &op, 1) != 1) { + RTE_LOG(ERR, USER1, "Error sending packet to cryptodev\n"); + return NULL; + } + + op = NULL; + + while (rte_cryptodev_dequeue_burst(dev_id, 0, &op, 1) == 0) + rte_pause(); + + return op; +} + +static uint32_t +pdcp_sn_from_raw_get(const void *data, enum rte_security_pdcp_sn_size size) +{ + uint32_t sn = 0; + + if (size == RTE_SECURITY_PDCP_SN_SIZE_12) { + sn = rte_cpu_to_be_16(*(const uint16_t *)data); + sn = sn & 0xfff; + } else if (size == RTE_SECURITY_PDCP_SN_SIZE_18) { + sn = rte_cpu_to_be_32(*(const uint32_t *)data); + sn = (sn & 0x3ffff00) >> 8; + } + + return sn; +} + +static int +create_test_conf_from_index(const int index, struct pdcp_test_conf *conf) +{ + const struct pdcp_testsuite_params *ts_params = &testsuite_params; + struct rte_crypto_sym_xform c_xfrm, a_xfrm; + uint32_t hfn, sn, expected_len, count = 0; + uint8_t *data, *expected; + int pdcp_hdr_sz; + + memset(conf, 0, sizeof(*conf)); + memset(&c_xfrm, 0, sizeof(c_xfrm)); + memset(&a_xfrm, 0, sizeof(a_xfrm)); + + conf->entity.sess_mpool = ts_params->sess_pool; + conf->entity.cop_pool = ts_params->cop_pool; + conf->entity.pdcp_xfrm.bearer = pdcp_test_bearer[index]; + conf->entity.pdcp_xfrm.en_ordering = 0; + conf->entity.pdcp_xfrm.remove_duplicates = 0; + conf->entity.pdcp_xfrm.domain = pdcp_test_params[index].domain; + + if (pdcp_test_packet_direction[index] == PDCP_DIR_UPLINK) + conf->entity.pdcp_xfrm.pkt_dir = RTE_SECURITY_PDCP_UPLINK; + else + conf->entity.pdcp_xfrm.pkt_dir = RTE_SECURITY_PDCP_DOWNLINK; + + conf->entity.pdcp_xfrm.sn_size = pdcp_test_data_sn_size[index]; + + /* Zero initialize unsupported flags */ + conf->entity.pdcp_xfrm.hfn_threshold = 0; + conf->entity.pdcp_xfrm.hfn_ovrd = 0; + conf->entity.pdcp_xfrm.sdap_enabled = 0; + + c_xfrm.type = RTE_CRYPTO_SYM_XFORM_CIPHER; + c_xfrm.cipher.algo = pdcp_test_params[index].cipher_alg; + c_xfrm.cipher.key.length = pdcp_test_params[index].cipher_key_len; + c_xfrm.cipher.key.data = pdcp_test_crypto_key[index]; + + a_xfrm.type = RTE_CRYPTO_SYM_XFORM_AUTH; + + if (pdcp_test_params[index].auth_alg == 0) { + conf->is_integrity_protected = false; + } else { + a_xfrm.auth.algo = 
pdcp_test_params[index].auth_alg; + a_xfrm.auth.key.data = pdcp_test_auth_key[index]; + a_xfrm.auth.key.length = pdcp_test_params[index].auth_key_len; + conf->is_integrity_protected = true; + } + + pdcp_hdr_sz = pdcp_hdr_size_get(pdcp_test_data_sn_size[index]); + + /* + * Uplink means PDCP entity is configured for transmit. Downlink means PDCP entity is + * configured for receive. When integrity protecting is enabled, PDCP always performs + * digest-encrypted or auth-gen-encrypt for uplink (and decrypt-auth-verify for downlink). + * So for uplink, crypto chain would be auth-cipher while for downlink it would be + * cipher-auth. + * + * When integrity protection is not required, xform would be cipher only. + */ + + if (conf->is_integrity_protected) { + if (conf->entity.pdcp_xfrm.pkt_dir == RTE_SECURITY_PDCP_UPLINK) { + conf->entity.crypto_xfrm = &conf->a_xfrm; + + a_xfrm.auth.op = RTE_CRYPTO_AUTH_OP_GENERATE; + a_xfrm.next = &conf->c_xfrm; + + c_xfrm.cipher.op = RTE_CRYPTO_CIPHER_OP_ENCRYPT; + c_xfrm.next = NULL; + } else { + conf->entity.crypto_xfrm = &conf->c_xfrm; + + c_xfrm.cipher.op = RTE_CRYPTO_CIPHER_OP_DECRYPT; + c_xfrm.next = &conf->a_xfrm; + + a_xfrm.auth.op = RTE_CRYPTO_AUTH_OP_VERIFY; + a_xfrm.next = NULL; + } + } else { + conf->entity.crypto_xfrm = &conf->c_xfrm; + c_xfrm.next = NULL; + + if (conf->entity.pdcp_xfrm.pkt_dir == RTE_SECURITY_PDCP_UPLINK) + c_xfrm.cipher.op = RTE_CRYPTO_CIPHER_OP_ENCRYPT; + else + c_xfrm.cipher.op = RTE_CRYPTO_CIPHER_OP_DECRYPT; + } + /* Update xforms to match PDCP requirements */ + + if ((c_xfrm.cipher.algo == RTE_CRYPTO_CIPHER_AES_CTR) || + (c_xfrm.cipher.algo == RTE_CRYPTO_CIPHER_ZUC_EEA3 || + (c_xfrm.cipher.algo == RTE_CRYPTO_CIPHER_SNOW3G_UEA2))) + c_xfrm.cipher.iv.length = 16; + else + c_xfrm.cipher.iv.length = 0; + + if (conf->is_integrity_protected) { + if (a_xfrm.auth.algo == RTE_CRYPTO_AUTH_NULL) + a_xfrm.auth.digest_length = 0; + else + a_xfrm.auth.digest_length = 4; + + if ((a_xfrm.auth.algo == RTE_CRYPTO_AUTH_ZUC_EIA3) || + (a_xfrm.auth.algo == RTE_CRYPTO_AUTH_SNOW3G_UIA2)) + a_xfrm.auth.iv.length = 16; + else + a_xfrm.auth.iv.length = 0; + } + + conf->c_xfrm = c_xfrm; + conf->a_xfrm = a_xfrm; + + conf->entity.dev_id = (uint8_t)cryptodev_id_get(conf->is_integrity_protected, + &conf->c_xfrm, &conf->a_xfrm); + + if (pdcp_test_params[index].domain == RTE_SECURITY_PDCP_MODE_CONTROL || + pdcp_test_params[index].domain == RTE_SECURITY_PDCP_MODE_DATA) { + data = pdcp_test_data_in[index]; + hfn = pdcp_test_hfn[index] << pdcp_test_data_sn_size[index]; + sn = pdcp_sn_from_raw_get(data, pdcp_test_data_sn_size[index]); + count = hfn | sn; + } + conf->entity.count = count; + + if (conf->entity.pdcp_xfrm.pkt_dir == RTE_SECURITY_PDCP_UPLINK) { +#ifdef VEC_DUMP + debug_hexdump(stdout, "Original vector:", pdcp_test_data_in[index], + pdcp_test_data_in_len[index]); +#endif + /* Since the vectors available already have PDCP header, trim the same */ + conf->input_len = pdcp_test_data_in_len[index] - pdcp_hdr_sz; + memcpy(conf->input, pdcp_test_data_in[index] + pdcp_hdr_sz, conf->input_len); + } else { + conf->input_len = pdcp_test_data_in_len[index]; + + if (conf->is_integrity_protected) + conf->input_len += 4; + + memcpy(conf->input, pdcp_test_data_out[index], conf->input_len); +#ifdef VEC_DUMP + debug_hexdump(stdout, "Original vector:", conf->input, conf->input_len); +#endif + } + + if (conf->entity.pdcp_xfrm.pkt_dir == RTE_SECURITY_PDCP_UPLINK) + expected = pdcp_test_data_out[index]; + else + expected = pdcp_test_data_in[index]; + + /* Calculate 
expected packet length */ + expected_len = pdcp_test_data_in_len[index]; + + /* In DL processing, PDCP header would be stripped */ + if (conf->entity.pdcp_xfrm.pkt_dir == RTE_SECURITY_PDCP_DOWNLINK) { + expected += pdcp_hdr_sz; + expected_len -= pdcp_hdr_sz; + } + + /* In UL processing with integrity protection, MAC would be added */ + if (conf->is_integrity_protected && + conf->entity.pdcp_xfrm.pkt_dir == RTE_SECURITY_PDCP_UPLINK) + expected_len += 4; + + memcpy(conf->output, expected, expected_len); + conf->output_len = expected_len; + + return 0; +} + +static struct rte_pdcp_entity* +test_entity_create(const struct pdcp_test_conf *t_conf, int *rc) +{ + struct rte_pdcp_entity *pdcp_entity; + int ret; + + if (t_conf->entity.pdcp_xfrm.sn_size != RTE_SECURITY_PDCP_SN_SIZE_12 && + t_conf->entity.pdcp_xfrm.sn_size != RTE_SECURITY_PDCP_SN_SIZE_18) { + *rc = -ENOTSUP; + return NULL; + } + + if (t_conf->entity.dev_id == CDEV_INVALID_ID) { + RTE_LOG(DEBUG, USER1, "Could not find device with required capabilities\n"); + *rc = -ENOTSUP; + return NULL; + } + + ret = cryptodev_init(t_conf->entity.dev_id); + if (ret) { + *rc = ret; + RTE_LOG(DEBUG, USER1, "Could not initialize cryptodev\n"); + return NULL; + } + + rte_errno = 0; + + pdcp_entity = rte_pdcp_entity_establish(&t_conf->entity); + if (pdcp_entity == NULL) { + *rc = rte_errno; + RTE_LOG(DEBUG, USER1, "Could not establish PDCP entity\n"); + return NULL; + } + + return pdcp_entity; +} + +static uint16_t +test_process_packets(const struct rte_pdcp_entity *pdcp_entity, uint8_t cdev_id, + struct rte_mbuf *in_mb[], uint16_t nb_in, + struct rte_mbuf *out_mb[], uint16_t *nb_err) +{ + struct rte_crypto_op *cop, *cop_out; + struct rte_pdcp_group grp[1]; + uint16_t nb_success, nb_grp; + struct rte_mbuf *mbuf, *mb; + + if (nb_in != 1) + return -ENOTSUP; + + mbuf = in_mb[0]; + + nb_success = rte_pdcp_pkt_pre_process(pdcp_entity, &mbuf, &cop_out, 1, nb_err); + if (nb_success != 1 || *nb_err != 0) { + RTE_LOG(ERR, USER1, "Could not pre process PDCP packet\n"); + return TEST_FAILED; + } + +#ifdef VEC_DUMP + printf("Pre-processed vector:\n"); + rte_pktmbuf_dump(stdout, mbuf, rte_pktmbuf_pkt_len(mbuf)); +#endif + + cop = process_crypto_request(cdev_id, cop_out); + if (cop == NULL) { + RTE_LOG(ERR, USER1, "Could not process crypto request\n"); + return -EIO; + } + + nb_grp = rte_pdcp_pkt_crypto_group(&cop_out, &mb, grp, 1); + if (nb_grp != 1 || grp[0].cnt != 1) { + RTE_LOG(ERR, USER1, "Could not group PDCP crypto results\n"); + return -ENOTRECOVERABLE; + } + + if ((uintptr_t)pdcp_entity != grp[0].id.val) { + RTE_LOG(ERR, USER1, "PDCP entity not matching the one from crypto_op\n"); + return -ENOTRECOVERABLE; + } + +#ifdef VEC_DUMP + printf("Crypto processed vector:\n"); + rte_pktmbuf_dump(stdout, cop->sym->m_dst, rte_pktmbuf_pkt_len(mbuf)); +#endif + + return rte_pdcp_pkt_post_process(grp[0].id.ptr, grp[0].m, out_mb, grp[0].cnt, nb_err); +} + +static struct rte_mbuf* +mbuf_from_data_create(uint8_t *data, uint16_t data_len) +{ + const struct pdcp_testsuite_params *ts_params = &testsuite_params; + struct rte_mbuf *mbuf; + uint8_t *input_text; + + mbuf = rte_pktmbuf_alloc(ts_params->mbuf_pool); + if (mbuf == NULL) { + RTE_LOG(ERR, USER1, "Could not create mbuf\n"); + return NULL; + } + + memset(rte_pktmbuf_mtod(mbuf, uint8_t *), 0, rte_pktmbuf_tailroom(mbuf)); + + input_text = (uint8_t *)rte_pktmbuf_append(mbuf, data_len); + memcpy(input_text, data, data_len); + + return mbuf; +} + +static int +test_attempt_single(struct pdcp_test_conf *t_conf) +{ + struct 
rte_mbuf *mbuf, **out_mb = NULL; + struct rte_pdcp_entity *pdcp_entity; + uint16_t nb_success, nb_err; + int ret = 0, nb_max_out_mb; + + pdcp_entity = test_entity_create(t_conf, &ret); + if (pdcp_entity == NULL) + goto exit; + + /* Allocate buffer for holding mbufs returned */ + + /* Max packets that can be cached in entity + burst size */ + nb_max_out_mb = pdcp_entity->max_pkt_cache + 1; + out_mb = rte_malloc(NULL, nb_max_out_mb * sizeof(uintptr_t), 0); + if (out_mb == NULL) { + RTE_LOG(ERR, USER1, "Could not allocate buffer for holding out_mb buffers\n"); + ret = -ENOMEM; + goto entity_release; + } + + mbuf = mbuf_from_data_create(t_conf->input, t_conf->input_len); + if (mbuf == NULL) { + ret = -ENOMEM; + goto entity_release; + } + +#ifdef VEC_DUMP + printf("Adjusted vector:\n"); + rte_pktmbuf_dump(stdout, mbuf, t_conf->input_len); +#endif + + nb_success = test_process_packets(pdcp_entity, t_conf->entity.dev_id, &mbuf, 1, out_mb, + &nb_err); + if (nb_success != 1 || nb_err != 0) { + RTE_LOG(ERR, USER1, "Could not process PDCP packet\n"); + ret = TEST_FAILED; + goto mbuf_free; + } + + ret = pdcp_known_vec_verify(mbuf, t_conf->output, t_conf->output_len); + if (ret) + goto mbuf_free; + + ret = rte_pdcp_entity_suspend(pdcp_entity, out_mb); + if (ret) { + RTE_LOG(DEBUG, USER1, "Could not suspend PDCP entity\n"); + goto mbuf_free; + } + +mbuf_free: + rte_pktmbuf_free(mbuf); +entity_release: + rte_pdcp_entity_release(pdcp_entity, out_mb); + rte_free(out_mb); +exit: + return ret; +} + +static int +run_test_for_one_known_vec(const void *arg) +{ + struct pdcp_test_conf test_conf; + int i = *(const uint32_t *)arg; + + create_test_conf_from_index(i, &test_conf); + return test_attempt_single(&test_conf); +} + +struct unit_test_suite *test_suites[] = { + NULL, /* Place holder for known_vector_cases */ + NULL /* End of suites list */ +}; + +static struct unit_test_suite pdcp_testsuite = { + .suite_name = "PDCP Unit Test Suite", + .unit_test_cases = {TEST_CASES_END()}, + .setup = testsuite_setup, + .teardown = testsuite_teardown, + .unit_test_suites = test_suites, +}; + +static int +test_pdcp(void) +{ + struct unit_test_suite *known_vector_cases; + int ret, index[NB_TESTS]; + uint32_t i, size; + + size = sizeof(struct unit_test_suite); + size += (NB_TESTS + 1) * sizeof(struct unit_test_case); + + known_vector_cases = rte_zmalloc(NULL, size, 0); + if (known_vector_cases == NULL) + return TEST_FAILED; + + known_vector_cases->suite_name = "Known vector cases"; + + for (i = 0; i < NB_TESTS; i++) { + index[i] = i; + known_vector_cases->unit_test_cases[i].name = pdcp_test_params[i].name; + known_vector_cases->unit_test_cases[i].data = (void *)&index[i]; + known_vector_cases->unit_test_cases[i].enabled = 1; + known_vector_cases->unit_test_cases[i].setup = ut_setup_pdcp; + known_vector_cases->unit_test_cases[i].teardown = ut_teardown_pdcp; + known_vector_cases->unit_test_cases[i].testcase = NULL; + known_vector_cases->unit_test_cases[i].testcase_with_data + = run_test_for_one_known_vec; + } + + known_vector_cases->unit_test_cases[i].testcase = NULL; + known_vector_cases->unit_test_cases[i].testcase_with_data = NULL; + + test_suites[0] = known_vector_cases; + + ret = unit_test_suite_runner(&pdcp_testsuite); + + rte_free(known_vector_cases); + return ret; +} + +REGISTER_TEST_COMMAND(pdcp_autotest, test_pdcp); From patchwork Fri Apr 14 17:45:00 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Anoob Joseph X-Patchwork-Id: 126110 
X-Patchwork-Delegate: gakhil@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 67D2D42943; Fri, 14 Apr 2023 19:46:47 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 59D6841153; Fri, 14 Apr 2023 19:46:47 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by mails.dpdk.org (Postfix) with ESMTP id EDAB3400EF for ; Fri, 14 Apr 2023 19:46:45 +0200 (CEST) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id 33EGUdK7026903; Fri, 14 Apr 2023 10:46:45 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0220; bh=0rTMSpSPqOF8GeP2ilnwNaQt2OP9eY94G78ywdZdmlc=; b=I8VcEFCPuzzj9bnpKi3uDW5Tj73lIJ41eCUrbxhuZEsVOd55/XdSCDfQDrXtKJaQcvXc RusietPcdIP2ElG7ARgvPclfEeh8otGFdazn1BIzNj43328ns/+qnfSRItqo3sl80c3b aSSmBv1iZ3AKxwbdeTp3c4U8IBNInuBkgXTxaC83FMtcR/MIH+fnC6uY8bHII5G4U//E gaCUwXPDU08j0IImlMPjlU1837WCAiBgU1crbrQ21tbMWoPLdxTcU777cBbuYbLiLRLJ muujIKHj56/8XOgFHtcacVp1WTMJhjZqO7CCqfRFa75esvux3iDKST3QyUheOReskdEM 5A== Received: from dc5-exch01.marvell.com ([199.233.59.181]) by mx0b-0016f401.pphosted.com (PPS) with ESMTPS id 3py646s6q1-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Fri, 14 Apr 2023 10:46:44 -0700 Received: from DC5-EXCH02.marvell.com (10.69.176.39) by DC5-EXCH01.marvell.com (10.69.176.38) with Microsoft SMTP Server (TLS) id 15.0.1497.48; Fri, 14 Apr 2023 10:46:42 -0700 Received: from maili.marvell.com (10.69.176.80) by DC5-EXCH02.marvell.com (10.69.176.39) with Microsoft SMTP Server id 15.0.1497.48 via Frontend Transport; Fri, 14 Apr 2023 10:46:42 -0700 Received: from BG-LT92004.corp.innovium.com (unknown [10.28.161.183]) by maili.marvell.com (Postfix) with ESMTP id 779C33F707F; Fri, 14 Apr 2023 10:46:25 -0700 (PDT) From: Anoob Joseph To: Thomas Monjalon , Akhil Goyal , Jerin Jacob , Konstantin Ananyev , Bernard Iremonger CC: Volodymyr Fialko , Hemant Agrawal , =?utf-8?q?Mattias_R=C3=B6nnblom?= , Kiran Kumar K , , Olivier Matz Subject: [PATCH v2 10/22] test/pdcp: pdcp HFN tests in combined mode Date: Fri, 14 Apr 2023 23:15:00 +0530 Message-ID: <20230414174512.642-11-anoobj@marvell.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230414174512.642-1-anoobj@marvell.com> References: <20221222092522.1628-1-anoobj@marvell.com> <20230414174512.642-1-anoobj@marvell.com> MIME-Version: 1.0 X-Proofpoint-GUID: 3wvX_o7tzQak1Bi_PcZkS-2qzTtBLgZ2 X-Proofpoint-ORIG-GUID: 3wvX_o7tzQak1Bi_PcZkS-2qzTtBLgZ2 X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.254,Aquarius:18.0.942,Hydra:6.0.573,FMLib:17.11.170.22 definitions=2023-04-14_10,2023-04-14_01,2023-02-09_01 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org From: Volodymyr Fialko Add tests to verify HFN/SN behaviour. 
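For reference, the SN ranges exercised by these tests follow from the COUNT reconstruction rule of TS 38.323 section 5.2.2.1. A minimal, self-contained sketch of that rule (illustrative only, not part of the patch; the helper name is an assumption):

#include <stdint.h>

#define PDCP_WINDOW_SIZE(sn_size) (1u << ((sn_size) - 1))

/*
 * Reconstruct RCVD_COUNT from the received SN and the current RX_DELIV state.
 * The receiver later discards the packet when RCVD_COUNT < RX_DELIV or when
 * that COUNT was already received; that is what the four SN range cases check.
 */
static uint32_t
rcvd_count_reconstruct(uint32_t rcvd_sn, uint32_t rx_deliv, uint32_t sn_size)
{
	const uint32_t sn_deliv = rx_deliv & ((1u << sn_size) - 1);
	const uint32_t hfn_deliv = rx_deliv >> sn_size;
	uint32_t rcvd_hfn;

	if (rcvd_sn + PDCP_WINDOW_SIZE(sn_size) < sn_deliv)
		rcvd_hfn = hfn_deliv + 1; /* SN wrapped around: next HFN */
	else if (rcvd_sn >= sn_deliv + PDCP_WINDOW_SIZE(sn_size))
		rcvd_hfn = hfn_deliv - 1; /* late packet from the previous HFN */
	else
		rcvd_hfn = hfn_deliv; /* within the window: same HFN */

	return (rcvd_hfn << sn_size) | rcvd_sn;
}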
Signed-off-by: Anoob Joseph Signed-off-by: Volodymyr Fialko --- app/test/test_pdcp.c | 302 ++++++++++++++++++++++++++++++++++++++++++- 1 file changed, 299 insertions(+), 3 deletions(-) diff --git a/app/test/test_pdcp.c b/app/test/test_pdcp.c index cb88817004..fc49947ba2 100644 --- a/app/test/test_pdcp.c +++ b/app/test/test_pdcp.c @@ -15,6 +15,9 @@ #define CDEV_INVALID_ID UINT8_MAX #define NB_TESTS RTE_DIM(pdcp_test_params) +/* According to formula(7.2.a Window_Size) */ +#define PDCP_WINDOW_SIZE(sn_size) (1 << (sn_size - 1)) + struct pdcp_testsuite_params { struct rte_mempool *mbuf_pool; struct rte_mempool *cop_pool; @@ -35,12 +38,69 @@ struct pdcp_test_conf { uint32_t output_len; }; +static int create_test_conf_from_index(const int index, struct pdcp_test_conf *conf); + +typedef int (*test_with_conf_t)(struct pdcp_test_conf *conf); + +static int +run_test_foreach_known_vec(test_with_conf_t test, bool stop_on_first_pass) +{ + struct pdcp_test_conf test_conf; + bool all_tests_skipped = true; + uint32_t i; + int ret; + + for (i = 0; i < NB_TESTS; i++) { + create_test_conf_from_index(i, &test_conf); + ret = test(&test_conf); + + if (ret == TEST_FAILED) { + printf("[%03i] - %s - failed\n", i, pdcp_test_params[i].name); + return TEST_FAILED; + } + + if ((ret == TEST_SKIPPED) || (ret == -ENOTSUP)) + continue; + + if (stop_on_first_pass) + return TEST_SUCCESS; + + all_tests_skipped = false; + } + + if (all_tests_skipped) + return TEST_SKIPPED; + + return TEST_SUCCESS; +} + +static int +run_test_with_all_known_vec(const void *args) +{ + test_with_conf_t test = args; + + return run_test_foreach_known_vec(test, false); +} + static inline int pdcp_hdr_size_get(enum rte_security_pdcp_sn_size sn_size) { return RTE_ALIGN_MUL_CEIL(sn_size, 8) / 8; } +static int +pktmbuf_read_into(const struct rte_mbuf *m, void *buf, size_t buf_len) +{ + if (m->pkt_len > buf_len) + return -ENOMEM; + + const void *read = rte_pktmbuf_read(m, 0, m->pkt_len, buf); + if (read != NULL && read != buf) + memcpy(buf, read, m->pkt_len); + + return 0; +} + static int cryptodev_init(int dev_id) { @@ -325,6 +385,21 @@ pdcp_sn_from_raw_get(const void *data, enum rte_security_pdcp_sn_size size) return sn; } +static void +pdcp_sn_to_raw_set(void *data, uint32_t sn, int size) +{ + if (size == RTE_SECURITY_PDCP_SN_SIZE_12) { + struct rte_pdcp_up_data_pdu_sn_12_hdr *pdu_hdr = data; + pdu_hdr->sn_11_8 = ((sn & 0xf00) >> 8); + pdu_hdr->sn_7_0 = (sn & 0xff); + } else if (size == RTE_SECURITY_PDCP_SN_SIZE_18) { + struct rte_pdcp_up_data_pdu_sn_18_hdr *pdu_hdr = data; + pdu_hdr->sn_17_16 = ((sn & 0x30000) >> 16); + pdu_hdr->sn_15_8 = ((sn & 0xff00) >> 8); + pdu_hdr->sn_7_0 = (sn & 0xff); + } +} + static int create_test_conf_from_index(const int index, struct pdcp_test_conf *conf) { @@ -645,9 +720,17 @@ test_attempt_single(struct pdcp_test_conf *t_conf) goto mbuf_free; } - ret = pdcp_known_vec_verify(mbuf, t_conf->output, t_conf->output_len); - if (ret) - goto mbuf_free; + /* If expected output provided - verify, else - store for future use */ + if (t_conf->output_len) { + ret = pdcp_known_vec_verify(mbuf, t_conf->output, t_conf->output_len); + if (ret) + goto mbuf_free; + } else { + ret = pktmbuf_read_into(mbuf, t_conf->output, RTE_PDCP_CTRL_PDU_SIZE_MAX); + if (ret) + goto mbuf_free; + t_conf->output_len = mbuf->pkt_len; + } ret = rte_pdcp_entity_suspend(pdcp_entity, out_mb); if (ret) { @@ -664,6 +747,193 @@ test_attempt_single(struct pdcp_test_conf *t_conf) return ret; } +static void +uplink_to_downlink_convert(const struct pdcp_test_conf 
*ul_cfg, + struct pdcp_test_conf *dl_cfg) +{ + assert(ul_cfg->entity.pdcp_xfrm.pkt_dir == RTE_SECURITY_PDCP_UPLINK); + + memcpy(dl_cfg, ul_cfg, sizeof(*dl_cfg)); + dl_cfg->entity.pdcp_xfrm.pkt_dir = RTE_SECURITY_PDCP_DOWNLINK; + dl_cfg->entity.reverse_iv_direction = false; + + if (dl_cfg->is_integrity_protected) { + dl_cfg->entity.crypto_xfrm = &dl_cfg->c_xfrm; + + dl_cfg->c_xfrm.cipher.op = RTE_CRYPTO_CIPHER_OP_DECRYPT; + dl_cfg->c_xfrm.next = &dl_cfg->a_xfrm; + + dl_cfg->a_xfrm.auth.op = RTE_CRYPTO_AUTH_OP_VERIFY; + dl_cfg->a_xfrm.next = NULL; + } else { + dl_cfg->entity.crypto_xfrm = &dl_cfg->c_xfrm; + dl_cfg->c_xfrm.next = NULL; + dl_cfg->c_xfrm.cipher.op = RTE_CRYPTO_CIPHER_OP_DECRYPT; + } + + dl_cfg->entity.dev_id = (uint8_t)cryptodev_id_get(dl_cfg->is_integrity_protected, + &dl_cfg->c_xfrm, &dl_cfg->a_xfrm); + + memcpy(dl_cfg->input, ul_cfg->output, ul_cfg->output_len); + dl_cfg->input_len = ul_cfg->output_len; + + memcpy(dl_cfg->output, ul_cfg->input, ul_cfg->input_len); + dl_cfg->output_len = ul_cfg->input_len; +} + +/* + * According to ETSI TS 138 323 V17.1.0, Section 5.2.2.1, + * SN could be divided into following ranges, + * relatively to current value of RX_DELIV state: + * +-------------+-------------+-------------+-------------+ + * | -Outside | -Window | +Window | +Outside | + * | (valid) | (Invalid) | (Valid) | (Invalid) | + * +-------------+-------------^-------------+-------------+ + * | + * v + * SN(RX_DELIV) + */ +enum sn_range_type { + SN_RANGE_MINUS_OUTSIDE, + SN_RANGE_MINUS_WINDOW, + SN_RANGE_PLUS_WINDOW, + SN_RANGE_PLUS_OUTSIDE, +}; + +#define PDCP_SET_COUNT(hfn, sn, size) ((hfn << size) | (sn & ((1 << size) - 1))) + +/* + * Take uplink test case as base, modify RX_DELIV in state and SN in input + */ +static int +test_sn_range_type(enum sn_range_type type, struct pdcp_test_conf *conf) +{ + uint32_t rx_deliv_hfn, rx_deliv_sn, rx_deliv, new_hfn, new_sn; + const int domain = conf->entity.pdcp_xfrm.domain; + struct pdcp_test_conf dl_conf; + int ret, expected_ret; + + if (conf->entity.pdcp_xfrm.pkt_dir == RTE_SECURITY_PDCP_DOWNLINK) + return TEST_SKIPPED; + + if (domain != RTE_SECURITY_PDCP_MODE_CONTROL && domain != RTE_SECURITY_PDCP_MODE_DATA) + return TEST_SKIPPED; + + const uint32_t sn_size = conf->entity.pdcp_xfrm.sn_size; + const uint32_t window_size = PDCP_WINDOW_SIZE(sn_size); + /* Max value of SN that could fit in `sn_size` bits */ + const uint32_t max_sn = (1 << sn_size) - 1; + const uint32_t shift = (max_sn - window_size) / 2; + /* Could be any number up to `shift` value */ + const uint32_t default_sn = RTE_MIN(2u, shift); + + /* Initialize HFN as non zero value, to be able check values before */ + rx_deliv_hfn = 0xa; + + switch (type) { + case SN_RANGE_PLUS_WINDOW: + /* Within window size, HFN stay same */ + new_hfn = rx_deliv_hfn; + rx_deliv_sn = default_sn; + new_sn = rx_deliv_sn + 1; + expected_ret = TEST_SUCCESS; + break; + case SN_RANGE_MINUS_WINDOW: + /* Within window size, HFN stay same */ + new_hfn = rx_deliv_hfn; + rx_deliv_sn = default_sn; + new_sn = rx_deliv_sn - 1; + expected_ret = TEST_FAILED; + break; + case SN_RANGE_PLUS_OUTSIDE: + /* RCVD_SN >= SN(RX_DELIV) + Window_Size */ + new_hfn = rx_deliv_hfn - 1; + rx_deliv_sn = default_sn; + new_sn = rx_deliv_sn + window_size; + expected_ret = TEST_FAILED; + break; + case SN_RANGE_MINUS_OUTSIDE: + /* RCVD_SN < SN(RX_DELIV) - Window_Size */ + new_hfn = rx_deliv_hfn + 1; + rx_deliv_sn = window_size + default_sn; + new_sn = rx_deliv_sn - window_size - 1; + expected_ret = TEST_SUCCESS; + break; + default: 
+ return TEST_FAILED; + } + + rx_deliv = PDCP_SET_COUNT(rx_deliv_hfn, rx_deliv_sn, sn_size); + + /* Configure Uplink to generate expected, encrypted packet */ + pdcp_sn_to_raw_set(conf->input, new_sn, conf->entity.pdcp_xfrm.sn_size); + conf->entity.reverse_iv_direction = true; + conf->entity.count = PDCP_SET_COUNT(new_hfn, new_sn, sn_size); + conf->output_len = 0; + ret = test_attempt_single(conf); + if (ret != TEST_SUCCESS) + return ret; + + /* Flip configuration to downlink */ + uplink_to_downlink_convert(conf, &dl_conf); + + /* Modify the rx_deliv to verify the expected behaviour */ + dl_conf.entity.count = rx_deliv; + ret = test_attempt_single(&dl_conf); + if ((ret == TEST_SKIPPED) || (ret == -ENOTSUP)) + return ret; + + TEST_ASSERT_EQUAL(ret, expected_ret, "Unexpected result"); + + return TEST_SUCCESS; +} + +static int +test_sn_plus_window(struct pdcp_test_conf *t_conf) +{ + return test_sn_range_type(SN_RANGE_PLUS_WINDOW, t_conf); +} + +static int +test_sn_minus_window(struct pdcp_test_conf *t_conf) +{ + return test_sn_range_type(SN_RANGE_MINUS_WINDOW, t_conf); +} + +static int +test_sn_plus_outside(struct pdcp_test_conf *t_conf) +{ + return test_sn_range_type(SN_RANGE_PLUS_OUTSIDE, t_conf); +} + +static int +test_sn_minus_outside(struct pdcp_test_conf *t_conf) +{ + return test_sn_range_type(SN_RANGE_MINUS_OUTSIDE, t_conf); +} + +static int +test_combined(struct pdcp_test_conf *ul_conf) +{ + struct pdcp_test_conf dl_conf; + int ret; + + if (ul_conf->entity.pdcp_xfrm.pkt_dir == RTE_SECURITY_PDCP_DOWNLINK) + return TEST_SKIPPED; + + ul_conf->entity.reverse_iv_direction = true; + ul_conf->output_len = 0; + + ret = test_attempt_single(ul_conf); + if (ret != TEST_SUCCESS) + return ret; + + uplink_to_downlink_convert(ul_conf, &dl_conf); + ret = test_attempt_single(&dl_conf); + + return ret; +} + static int run_test_for_one_known_vec(const void *arg) { @@ -674,8 +944,34 @@ run_test_for_one_known_vec(const void *arg) return test_attempt_single(&test_conf); } +static struct unit_test_suite combined_mode_cases = { + .suite_name = "PDCP combined mode", + .unit_test_cases = { + TEST_CASE_NAMED_WITH_DATA("combined mode", ut_setup_pdcp, ut_teardown_pdcp, + run_test_with_all_known_vec, test_combined), + TEST_CASES_END() /**< NULL terminate unit test array */ + } +}; + +static struct unit_test_suite hfn_sn_test_cases = { + .suite_name = "PDCP HFN/SN", + .unit_test_cases = { + TEST_CASE_NAMED_WITH_DATA("SN plus window", ut_setup_pdcp, ut_teardown_pdcp, + run_test_with_all_known_vec, test_sn_plus_window), + TEST_CASE_NAMED_WITH_DATA("SN minus window", ut_setup_pdcp, ut_teardown_pdcp, + run_test_with_all_known_vec, test_sn_minus_window), + TEST_CASE_NAMED_WITH_DATA("SN plus outside", ut_setup_pdcp, ut_teardown_pdcp, + run_test_with_all_known_vec, test_sn_plus_outside), + TEST_CASE_NAMED_WITH_DATA("SN minus outside", ut_setup_pdcp, ut_teardown_pdcp, + run_test_with_all_known_vec, test_sn_minus_outside), + TEST_CASES_END() /**< NULL terminate unit test array */ + } +}; + struct unit_test_suite *test_suites[] = { NULL, /* Place holder for known_vector_cases */ + &combined_mode_cases, + &hfn_sn_test_cases, NULL /* End of suites list */ }; From patchwork Fri Apr 14 17:45:01 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Anoob Joseph X-Patchwork-Id: 126111 X-Patchwork-Delegate: gakhil@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org 
[217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 7215E42943; Fri, 14 Apr 2023 19:47:11 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 6298141144; Fri, 14 Apr 2023 19:47:11 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by mails.dpdk.org (Postfix) with ESMTP id D2FBF410F6 for ; Fri, 14 Apr 2023 19:47:09 +0200 (CEST) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id 33EGOBul013729; Fri, 14 Apr 2023 10:47:09 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0220; bh=OU4yvsktD8wK43wNBu/mdXbGvCR2c5IOGpX+6vMHGSU=; b=QZk6/VEc++2eD0TpR9lLBaf2sbDhyCn9qzhS/ZUc8AX8meJZYGu5tksuI+UkMkZOWFVH sHoxPfPXrd4p+hXNC6Xw6i+HB6CWxHrlOAmxJbgLHHMz23uLhgfZ9OW2law4jWojkqOu YrbfPKqbov5moThbZnHKqXM4KSk/XHqkug5/2K/gHtswlsiQ24Fb6wsuJDIA/YKMH5og LVqgrJ/BEgARiB0MEejWs+bPUrvnbJg1c0bXj14s3aDOdkNxsKxa/GYKvyLez1ohHKDq ruexEhOj23ugUemiuVfSBE/oCtpdS7atqnxt/TUstICVU/TX76FYbRu3VHsQrWkCWs5h kw== Received: from dc5-exch01.marvell.com ([199.233.59.181]) by mx0b-0016f401.pphosted.com (PPS) with ESMTPS id 3py646s6rf-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Fri, 14 Apr 2023 10:47:08 -0700 Received: from DC5-EXCH01.marvell.com (10.69.176.38) by DC5-EXCH01.marvell.com (10.69.176.38) with Microsoft SMTP Server (TLS) id 15.0.1497.48; Fri, 14 Apr 2023 10:47:06 -0700 Received: from maili.marvell.com (10.69.176.80) by DC5-EXCH01.marvell.com (10.69.176.38) with Microsoft SMTP Server id 15.0.1497.48 via Frontend Transport; Fri, 14 Apr 2023 10:47:06 -0700 Received: from BG-LT92004.corp.innovium.com (unknown [10.28.161.183]) by maili.marvell.com (Postfix) with ESMTP id 199613F7081; Fri, 14 Apr 2023 10:46:43 -0700 (PDT) From: Anoob Joseph To: Thomas Monjalon , Akhil Goyal , Jerin Jacob , Konstantin Ananyev , Bernard Iremonger CC: Hemant Agrawal , =?utf-8?q?Mattias_R=C3=B6nnblom?= , "Kiran Kumar K" , Volodymyr Fialko , , Olivier Matz Subject: [PATCH v2 11/22] doc: add PDCP library guide Date: Fri, 14 Apr 2023 23:15:01 +0530 Message-ID: <20230414174512.642-12-anoobj@marvell.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230414174512.642-1-anoobj@marvell.com> References: <20221222092522.1628-1-anoobj@marvell.com> <20230414174512.642-1-anoobj@marvell.com> MIME-Version: 1.0 X-Proofpoint-GUID: sB-DdTFfHs1avS79skPm5SIDtbCGfjUB X-Proofpoint-ORIG-GUID: sB-DdTFfHs1avS79skPm5SIDtbCGfjUB X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.254,Aquarius:18.0.942,Hydra:6.0.573,FMLib:17.11.170.22 definitions=2023-04-14_10,2023-04-14_01,2023-02-09_01 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Add guide for PDCP library. 
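As a quick orientation before the guide, a hypothetical snippet showing how the entity configuration fields described below fit together for an uplink data-plane entity. This is a sketch, not the library's reference usage; the pools, cipher transform and device id are assumed to be prepared elsewhere.

#include <rte_pdcp.h>

static struct rte_pdcp_entity *
ul_data_entity_create(struct rte_mempool *sess_pool, struct rte_mempool *cop_pool,
		      struct rte_crypto_sym_xform *cipher_xfrm, uint8_t dev_id)
{
	struct rte_pdcp_entity_conf conf = {
		.pdcp_xfrm = {
			.domain = RTE_SECURITY_PDCP_MODE_DATA,
			.pkt_dir = RTE_SECURITY_PDCP_UPLINK,
			.sn_size = RTE_SECURITY_PDCP_SN_SIZE_18,
			.bearer = 0,
		},
		.crypto_xfrm = cipher_xfrm, /* cipher-only chain in this sketch */
		.sess_mpool = sess_pool,
		.cop_pool = cop_pool,
		.count = 0, /* initial COUNT (HFN = 0, SN = 0) */
		.dev_id = dev_id,
	};

	/* NULL is returned on failure and rte_errno is set. */
	return rte_pdcp_entity_establish(&conf);
}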
Signed-off-by: Anoob Joseph Signed-off-by: Kiran Kumar K Signed-off-by: Volodymyr Fialko --- .../img/pdcp_functional_overview.svg | 1 + doc/guides/prog_guide/index.rst | 1 + doc/guides/prog_guide/pdcp_lib.rst | 246 ++++++++++++++++++ 3 files changed, 248 insertions(+) create mode 100644 doc/guides/prog_guide/img/pdcp_functional_overview.svg create mode 100644 doc/guides/prog_guide/pdcp_lib.rst diff --git a/doc/guides/prog_guide/img/pdcp_functional_overview.svg b/doc/guides/prog_guide/img/pdcp_functional_overview.svg new file mode 100644 index 0000000000..287daafc21 --- /dev/null +++ b/doc/guides/prog_guide/img/pdcp_functional_overview.svg @@ -0,0 +1 @@ +[SVG markup flattened during extraction; the figure shows the transmitting PDCP entity chain (sequence numbering, header/uplink data compression, integrity protection, ciphering, add PDCP header, routing/duplication) and the receiving PDCP entity chain (remove PDCP header, deciphering, integrity verification, reordering, duplicate discarding, reception buffer) on either side of the radio interface (Uu/PC5)] \ No newline at end of file diff --git a/doc/guides/prog_guide/index.rst b/doc/guides/prog_guide/index.rst index 87333ee84a..6099ff63cd 100644 --- a/doc/guides/prog_guide/index.rst +++ b/doc/guides/prog_guide/index.rst @@ -77,4 +77,5 @@ Programmer's Guide lto profile_app asan + pdcp_lib glossary diff --git a/doc/guides/prog_guide/pdcp_lib.rst b/doc/guides/prog_guide/pdcp_lib.rst new file mode 100644 index 0000000000..abd874f2cc --- /dev/null +++ b/doc/guides/prog_guide/pdcp_lib.rst @@ -0,0 +1,246 @@ +.. SPDX-License-Identifier: BSD-3-Clause + Copyright(C) 2023 Marvell. + +PDCP Protocol Processing Library +================================ + +DPDK provides a library for PDCP protocol processing. The library utilizes +other DPDK libraries such as cryptodev, reorder, etc., to provide the +application with a transparent and high-performance PDCP protocol processing +library. + +The library abstracts complete PDCP protocol processing conforming to +``ETSI TS 138 323 V17.1.0 (2022-08)``. +https://www.etsi.org/deliver/etsi_ts/138300_138399/138323/17.01.00_60/ts_138323v170100p.pdf + +PDCP involves the following operations: + +1. Transfer of user plane data +2. Transfer of control plane data +3. Header compression +4. Uplink data compression +5. Ciphering and integrity protection + +.. _figure_pdcp_functional_overview: + +.. figure:: img/pdcp_functional_overview.* + + PDCP functional overview + +The PDCP library abstracts the protocol offload features of the cryptodev and +provides a uniform interface and consistent API usage to work with +cryptodev irrespective of the protocol offload features supported. + +PDCP entity API +--------------- + +The PDCP library provides the following control path APIs, which are used to +configure the various PDCP entities: + +1. ``rte_pdcp_entity_establish()`` +2. ``rte_pdcp_entity_suspend()`` +3. ``rte_pdcp_entity_release()`` + +A PDCP entity translates to one ``rte_cryptodev_sym_session`` or +``rte_security_session`` based on the configuration. The sessions are +created/destroyed while the corresponding PDCP entity operations are performed. + +PDCP PDU (Protocol Data Unit) +----------------------------- + +PDCP PDUs can be categorized as: + +1. Control PDU +2. Data PDU + +Control PDUs are used for signalling between entities on either end and can be +one of the following: + +1. PDCP status report +2. ROHC feedback +3. 
EHC feedback + +Control PDUs are not ciphered or authenticated, and so such packets are not +submitted to cryptodev for processing. + +Data PDUs are regular packets submitted by upper layers for transmission to +the other end. Such packets would need to be ciphered and authenticated based on +the entity configuration. + +PDCP packet processing API for data PDU +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +PDCP processing is split into two parts: one before cryptodev processing +(``rte_pdcp_pkt_pre_process()``) and one after cryptodev processing +(``rte_pdcp_pkt_post_process()``). Since cryptodev dequeue can return crypto +operations belonging to multiple entities, ``rte_pdcp_pkt_crypto_group()`` +is added to help group crypto operations belonging to the same PDCP entity. + +Lib PDCP allows the application to use the same API sequence while leveraging +protocol offload features enabled by the ``rte_security`` library. Lib PDCP +internally changes the handles registered for ``pre_process`` and +``post_process`` based on the features enabled in the entity. + +Lib PDCP creates the required sessions on the device provided in the entity to +minimize the application requirements. The crypto_op allocation and free are +also handled internally by lib PDCP, allowing the library to create +crypto ops as required for the input packets. For example, when control PDUs are +received, no cryptodev enqueue-dequeue is expected for them and lib PDCP +is expected to handle them differently. + +Sample API usage +---------------- + +The ``rte_pdcp_entity_conf`` structure is used to pass the configuration +parameters for entity creation. + +.. literalinclude:: ../../../lib/pdcp/rte_pdcp.h + :language: c + :start-after: Structure rte_pdcp_entity_conf 8< + :end-before: >8 End of structure rte_pdcp_entity_conf. + +.. code-block:: c + + struct rte_mbuf **out_mb, *pkts[MAX_BURST_SIZE]; + struct rte_crypto_op *cop[MAX_BURST_SIZE]; + struct rte_pdcp_group grp[MAX_BURST_SIZE]; + struct rte_pdcp_entity *pdcp_entity; + int nb_max_out_mb, ret, nb_grp; + uint16_t nb_ops; + + /* Create PDCP entity */ + pdcp_entity = rte_pdcp_entity_establish(&conf); + + /** + * Allocate buffer for holding mbufs returned during PDCP suspend, + * release & post-process APIs. + */ + + /* Max packets that can be cached in entity + burst size */ + nb_max_out_mb = pdcp_entity->max_pkt_cache + MAX_BURST_SIZE; + out_mb = rte_malloc(NULL, nb_max_out_mb * sizeof(uintptr_t), 0); + if (out_mb == NULL) { + /* Handle error */ + } + + while (1) { + /* Receive packet and form mbuf */ + + /** + * Prepare packets for crypto operation. The following operations + * would be done: + * + * Transmitting entity/UL (only data PDUs): + * - Perform compression + * - Assign sequence number + * - Add PDCP header + * - Create & prepare crypto_op + * - Prepare IV for crypto operation (auth_gen, encrypt) + * - Save original PDCP SDU (during PDCP re-establishment, + * unconfirmed PDCP SDUs need to be crypto processed again and + * transmitted/re-transmitted) + * + * Receiving entity/DL: + * - Any control PDUs received would be processed and + * appropriate actions taken. If it is a data PDU, continue. 
+ * - Determine sequence number (based on HFN & per packet SN) + * - Prepare crypto_op + * - Prepare IV for crypto operation (decrypt, auth_verify) + */ + nb_success = rte_pdcp_pkt_pre_process(pdcp_entity, pkts, cop, + nb_rx, &nb_err); + if (nb_err != 0) { + /* Handle error packets */ + } + + if ((rte_cryptodev_enqueue_burst(dev_id, qp_id, cop, nb_success) + != nb_success) { + /* Retry for enqueue failure packets */ + } + + ... + + nb_ops = rte_cryptodev_dequeue_burst(dev_id, qp_id, cop, + MAX_BURST_SIZE); + if (nb_ops == 0) + continue; + + /** + * Received a burst of completed crypto ops from cryptodev. It + * may belong to various entities. Group similar ones together + * for entity specific post-processing. + */ + + /** + * Groups similar entities together. Frees crypto op and based + * on crypto_op status, set mbuf->ol_flags which would be + * checked in rte_pdcp_pkt_post_process(). + */ + nb_grp = rte_pdcp_pkt_crypto_group(cop, pkts, grp, ret); + + for (i = 0; i != nb_grp; i++) { + + /** + * Post process packets after crypto completion. + * Following operations would be done, + * + * Transmitting entity/UL: + * - Check crypto result + * + * Receiving entity/DL: + * - Check crypto operation status + * - Check for duplication (if yes, drop duplicate) + * - Perform decompression + * - Trim PDCP header + * - Hold packet (SDU) for in-order delivery (return + * completed packets as and when sequence is + * completed) + * - If not in sequence, cache the packet and start + * t-Reordering timer. When timer expires, the + * packets need to delivered to upper layers (not + * treated as error packets). + */ + nb_success = rte_pdcp_pkt_post_process(grp[i].id.ptr, + grp[i].m, out_mb, + grp[i].cnt, + &nb_err); + if (nb_err != 0) { + /* Handle error packets */ + } + + /* Perform additional operations */ + + /** + * Transmitting entity/UL + * - If duplication is enabled, duplicate PDCP PDUs + * - When lower layers confirm reception of a PDCP PDU, + * it should be communicated to PDCP layer so that + * PDCP can drop the corresponding SDU + */ + } + } + + +Supported features +------------------ + +- 12 bit & 18 bit sequence numbers +- Uplink & downlink traffic +- HFN increment +- IV generation as required per algorithm + +Supported ciphering algorithms +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +- NULL +- AES-CTR +- SNOW3G-CIPHER +- ZUC-CIPHER + +Supported integrity protection algorithms +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +- NULL +- AES-CMAC +- SNOW3G-AUTH +- ZUC-AUTH From patchwork Fri Apr 14 17:45:02 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Anoob Joseph X-Patchwork-Id: 126112 X-Patchwork-Delegate: gakhil@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 8F2E442943; Fri, 14 Apr 2023 19:47:17 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 7D8C642D0B; Fri, 14 Apr 2023 19:47:17 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by mails.dpdk.org (Postfix) with ESMTP id 5C789410F6 for ; Fri, 14 Apr 2023 19:47:15 +0200 (CEST) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id 33EFFBRF009805; Fri, 14 Apr 2023 10:47:14 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; 
d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0220; bh=XtUdpJ8QdUXSDvWbUA/QZkknn1mx//ENKnmSrGfV/0c=; b=Ux38A/Rq7qnuiD0yrsHGoryCU0t4ftD956V9zW/3ltjceziSo2QjaWVTBOZ6WLpzWxe7 7a0nbnZYnezxLG5DQGXIrtjIm9khNLHEgW+bvtm05FFtRZyCHE/cAjkgbMlrGIt3aJAE RAcLxuA9MWyStFyeZs/rQf85lu/JoTRgF5Gy8Z0ioyeKlsyYBuEm6DCMvJzUQXWT9SmE Um4tDDHY4OrA2IV4VkGIczbw0oxUg+BrUIYxNRABV/adwtL88AgFmNIwnN/zMcRu2XQj 3VlihCTPeql3WIArjVAD6SRUYMStv8v/rHNbbwdM3odVpd2gAqDUqahyn3NRbP8e5YdZ 9g== Received: from dc5-exch02.marvell.com ([199.233.59.182]) by mx0b-0016f401.pphosted.com (PPS) with ESMTPS id 3py646s6rx-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Fri, 14 Apr 2023 10:47:14 -0700 Received: from DC5-EXCH02.marvell.com (10.69.176.39) by DC5-EXCH02.marvell.com (10.69.176.39) with Microsoft SMTP Server (TLS) id 15.0.1497.48; Fri, 14 Apr 2023 10:47:12 -0700 Received: from maili.marvell.com (10.69.176.80) by DC5-EXCH02.marvell.com (10.69.176.39) with Microsoft SMTP Server id 15.0.1497.48 via Frontend Transport; Fri, 14 Apr 2023 10:47:12 -0700 Received: from BG-LT92004.corp.innovium.com (unknown [10.28.161.183]) by maili.marvell.com (Postfix) with ESMTP id 249683F707F; Fri, 14 Apr 2023 10:47:06 -0700 (PDT) From: Anoob Joseph To: Thomas Monjalon , Akhil Goyal , Jerin Jacob , Konstantin Ananyev , Bernard Iremonger CC: Hemant Agrawal , =?utf-8?q?Mattias_R=C3=B6nnblom?= , "Kiran Kumar K" , Volodymyr Fialko , , Olivier Matz Subject: [PATCH v2 12/22] pdcp: add control PDU handling Date: Fri, 14 Apr 2023 23:15:02 +0530 Message-ID: <20230414174512.642-13-anoobj@marvell.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230414174512.642-1-anoobj@marvell.com> References: <20221222092522.1628-1-anoobj@marvell.com> <20230414174512.642-1-anoobj@marvell.com> MIME-Version: 1.0 X-Proofpoint-GUID: qT1v4RQMC6k56Cc7_fu0iFbyQEem8MH2 X-Proofpoint-ORIG-GUID: qT1v4RQMC6k56Cc7_fu0iFbyQEem8MH2 X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.254,Aquarius:18.0.942,Hydra:6.0.573,FMLib:17.11.170.22 definitions=2023-04-14_10,2023-04-14_01,2023-02-09_01 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Add control PDU handling and implement status report generation. Status report generation works only when RX_DELIV = RX_NEXT. 
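To illustrate how the new public API is meant to be used (a sketch; it assumes a downlink entity established with status_report_required set, and error handling is abbreviated):

#include <rte_errno.h>
#include <rte_mbuf.h>
#include <rte_pdcp.h>

static struct rte_mbuf *
status_report_request(struct rte_pdcp_entity *dl_entity)
{
	struct rte_mbuf *ctrl_pdu;

	ctrl_pdu = rte_pdcp_control_pdu_create(dl_entity,
					       RTE_PDCP_CTRL_PDU_TYPE_STATUS_REPORT);
	if (ctrl_pdu == NULL) {
		/* rte_errno holds the reason; with this patch, generation is
		 * refused while packets are missing (RX_DELIV != RX_NEXT). */
		return NULL;
	}

	/* The mbuf starts with struct rte_pdcp_up_ctrl_pdu_hdr (D/C = control,
	 * PDU type = status report, FMC = RX_DELIV). Control PDUs are neither
	 * ciphered nor integrity protected, so the packet can be handed to the
	 * transmitting side without any cryptodev processing. */
	return ctrl_pdu;
}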
Signed-off-by: Anoob Joseph Signed-off-by: Volodymyr Fialko --- app/test/test_pdcp.c | 1 + doc/guides/prog_guide/pdcp_lib.rst | 10 +++++++ lib/pdcp/meson.build | 2 ++ lib/pdcp/pdcp_cnt.c | 29 ++++++++++++++++++ lib/pdcp/pdcp_cnt.h | 14 +++++++++ lib/pdcp/pdcp_ctrl_pdu.c | 46 +++++++++++++++++++++++++++++ lib/pdcp/pdcp_ctrl_pdu.h | 15 ++++++++++ lib/pdcp/pdcp_entity.h | 15 ++++++++-- lib/pdcp/pdcp_process.c | 13 +++++++++ lib/pdcp/rte_pdcp.c | 47 +++++++++++++++++++++++++++++- lib/pdcp/rte_pdcp.h | 31 ++++++++++++++++++++ lib/pdcp/version.map | 2 ++ 12 files changed, 222 insertions(+), 3 deletions(-) create mode 100644 lib/pdcp/pdcp_cnt.c create mode 100644 lib/pdcp/pdcp_cnt.h create mode 100644 lib/pdcp/pdcp_ctrl_pdu.c create mode 100644 lib/pdcp/pdcp_ctrl_pdu.h diff --git a/app/test/test_pdcp.c b/app/test/test_pdcp.c index fc49947ba2..4ecb4d9572 100644 --- a/app/test/test_pdcp.c +++ b/app/test/test_pdcp.c @@ -415,6 +415,7 @@ create_test_conf_from_index(const int index, struct pdcp_test_conf *conf) conf->entity.sess_mpool = ts_params->sess_pool; conf->entity.cop_pool = ts_params->cop_pool; + conf->entity.ctr_pdu_pool = ts_params->mbuf_pool; conf->entity.pdcp_xfrm.bearer = pdcp_test_bearer[index]; conf->entity.pdcp_xfrm.en_ordering = 0; conf->entity.pdcp_xfrm.remove_duplicates = 0; diff --git a/doc/guides/prog_guide/pdcp_lib.rst b/doc/guides/prog_guide/pdcp_lib.rst index abd874f2cc..f23360dfc3 100644 --- a/doc/guides/prog_guide/pdcp_lib.rst +++ b/doc/guides/prog_guide/pdcp_lib.rst @@ -67,6 +67,15 @@ Data PDUs are regular packets submitted by upper layers for transmission to the other end. Such packets would need to be ciphered and authenticated based on the entity configuration. +PDCP packet processing API for control PDU +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Control PDUs are used in PDCP as a communication channel between transmitting +and receiving entities. When the upper layer requests operations such as +re-establishment, the receiving PDCP entity needs to prepare a status report +and send it to the other end. The API ``rte_pdcp_control_pdu_create`` allows +the application to request the same. + PDCP packet processing API for data PDU ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ @@ -228,6 +237,7 @@ Supported features - Uplink & downlink traffic - HFN increment - IV generation as required per algorithm +- Control PDU generation Supported ciphering algorithms ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ diff --git a/lib/pdcp/meson.build b/lib/pdcp/meson.build index 08679b743a..75d476bf6d 100644 --- a/lib/pdcp/meson.build +++ b/lib/pdcp/meson.build @@ -8,7 +8,9 @@ if is_windows endif sources = files( + 'pdcp_cnt.c', 'pdcp_crypto.c', + 'pdcp_ctrl_pdu.c', 'pdcp_process.c', 'rte_pdcp.c', ) diff --git a/lib/pdcp/pdcp_cnt.c b/lib/pdcp/pdcp_cnt.c new file mode 100644 index 0000000000..c9b952184b --- /dev/null +++ b/lib/pdcp/pdcp_cnt.c @@ -0,0 +1,29 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2023 Marvell. 
+ */ + +#include + +#include "pdcp_cnt.h" +#include "pdcp_entity.h" + +int +pdcp_cnt_ring_create(struct rte_pdcp_entity *en, const struct rte_pdcp_entity_conf *conf) +{ + struct entity_priv_dl_part *en_priv_dl; + uint32_t window_sz; + + if (en == NULL || conf == NULL) + return -EINVAL; + + if (conf->pdcp_xfrm.pkt_dir == RTE_SECURITY_PDCP_UPLINK) + return 0; + + en_priv_dl = entity_dl_part_get(en); + window_sz = pdcp_window_size_get(conf->pdcp_xfrm.sn_size); + + RTE_SET_USED(window_sz); + RTE_SET_USED(en_priv_dl); + + return 0; +} diff --git a/lib/pdcp/pdcp_cnt.h b/lib/pdcp/pdcp_cnt.h new file mode 100644 index 0000000000..bbda478b55 --- /dev/null +++ b/lib/pdcp/pdcp_cnt.h @@ -0,0 +1,14 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2023 Marvell. + */ + +#ifndef PDCP_CNT_H +#define PDCP_CNT_H + +#include + +#include "pdcp_entity.h" + +int pdcp_cnt_ring_create(struct rte_pdcp_entity *en, const struct rte_pdcp_entity_conf *conf); + +#endif /* PDCP_CNT_H */ diff --git a/lib/pdcp/pdcp_ctrl_pdu.c b/lib/pdcp/pdcp_ctrl_pdu.c new file mode 100644 index 0000000000..feb05fd863 --- /dev/null +++ b/lib/pdcp/pdcp_ctrl_pdu.c @@ -0,0 +1,46 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2023 Marvell. + */ + +#include +#include +#include + +#include "pdcp_ctrl_pdu.h" +#include "pdcp_entity.h" + +static __rte_always_inline void +pdcp_hdr_fill(struct rte_pdcp_up_ctrl_pdu_hdr *pdu_hdr, uint32_t rx_deliv) +{ + pdu_hdr->d_c = RTE_PDCP_PDU_TYPE_CTRL; + pdu_hdr->pdu_type = RTE_PDCP_CTRL_PDU_TYPE_STATUS_REPORT; + pdu_hdr->r = 0; + pdu_hdr->fmc = rte_cpu_to_be_32(rx_deliv); +} + +int +pdcp_ctrl_pdu_status_gen(struct entity_priv *en_priv, struct rte_mbuf *m) +{ + struct rte_pdcp_up_ctrl_pdu_hdr *pdu_hdr; + uint32_t rx_deliv; + int pdu_sz; + + if (!en_priv->flags.is_status_report_required) + return -EINVAL; + + pdu_sz = sizeof(struct rte_pdcp_up_ctrl_pdu_hdr); + + rx_deliv = en_priv->state.rx_deliv; + + /* Zero missing PDUs - status report contains only FMC */ + if (rx_deliv >= en_priv->state.rx_next) { + pdu_hdr = (struct rte_pdcp_up_ctrl_pdu_hdr *)rte_pktmbuf_append(m, pdu_sz); + if (pdu_hdr == NULL) + return -ENOMEM; + pdcp_hdr_fill(pdu_hdr, rx_deliv); + + return 0; + } + + return -ENOTSUP; +} diff --git a/lib/pdcp/pdcp_ctrl_pdu.h b/lib/pdcp/pdcp_ctrl_pdu.h new file mode 100644 index 0000000000..a2424fbd10 --- /dev/null +++ b/lib/pdcp/pdcp_ctrl_pdu.h @@ -0,0 +1,15 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2023 Marvell. + */ + +#ifndef PDCP_CTRL_PDU_H +#define PDCP_CTRL_PDU_H + +#include + +#include "pdcp_entity.h" + +int +pdcp_ctrl_pdu_status_gen(struct entity_priv *en_priv, struct rte_mbuf *m); + +#endif /* PDCP_CTRL_PDU_H */ diff --git a/lib/pdcp/pdcp_entity.h b/lib/pdcp/pdcp_entity.h index 3108795977..7b7e7f69dd 100644 --- a/lib/pdcp/pdcp_entity.h +++ b/lib/pdcp/pdcp_entity.h @@ -109,6 +109,13 @@ union cipher_iv_partial { uint64_t u64[2]; }; +struct pdcp_cnt_bitmap { + /** Number of entries that can be stored. */ + uint32_t size; + /** Bitmap of the count values already received.*/ + struct rte_bitmap *bmp; +}; + /* * Layout of PDCP entity: [rte_pdcp_entity] [entity_priv] [entity_dl/ul] */ @@ -136,9 +143,13 @@ struct entity_priv { uint64_t is_ul_entity : 1; /** Is NULL auth. */ uint64_t is_null_auth : 1; + /** Is status report required.*/ + uint64_t is_status_report_required : 1; } flags; /** Crypto op pool. */ struct rte_mempool *cop_pool; + /** Control PDU pool. */ + struct rte_mempool *ctr_pdu_pool; /** PDCP header size. */ uint8_t hdr_sz; /** PDCP AAD size. 
For AES-CMAC, additional message is prepended for the operation. */ @@ -148,8 +159,8 @@ struct entity_priv { }; struct entity_priv_dl_part { - /* NOTE: when in-order-delivery is supported, post PDCP packets would need to cached. */ - uint8_t dummy; + /** PDCP would need to track the count values that are already received.*/ + struct pdcp_cnt_bitmap bitmap; }; struct entity_priv_ul_part { diff --git a/lib/pdcp/pdcp_process.c b/lib/pdcp/pdcp_process.c index 9c1a5e0669..267b3b7723 100644 --- a/lib/pdcp/pdcp_process.c +++ b/lib/pdcp/pdcp_process.c @@ -1157,6 +1157,19 @@ pdcp_entity_priv_populate(struct entity_priv *en_priv, const struct rte_pdcp_ent if (a_xfrm != NULL && a_xfrm->auth.algo == RTE_CRYPTO_AUTH_NULL) en_priv->flags.is_null_auth = 1; + /** + * flags.is_status_report_required + * + * Indicate whether status report is required. + */ + if (conf->status_report_required) { + /** Status report is required only for DL entities. */ + if (conf->pdcp_xfrm.pkt_dir != RTE_SECURITY_PDCP_DOWNLINK) + return -EINVAL; + + en_priv->flags.is_status_report_required = 1; + } + /** * hdr_sz * diff --git a/lib/pdcp/rte_pdcp.c b/lib/pdcp/rte_pdcp.c index 8914548dbd..5cd3f5ca31 100644 --- a/lib/pdcp/rte_pdcp.c +++ b/lib/pdcp/rte_pdcp.c @@ -6,7 +6,9 @@ #include #include +#include "pdcp_cnt.h" #include "pdcp_crypto.h" +#include "pdcp_ctrl_pdu.h" #include "pdcp_entity.h" #include "pdcp_process.h" @@ -34,7 +36,7 @@ rte_pdcp_entity_establish(const struct rte_pdcp_entity_conf *conf) struct entity_priv *en_priv; int ret, entity_size; - if (conf == NULL || conf->cop_pool == NULL) { + if (conf == NULL || conf->cop_pool == NULL || conf->ctr_pdu_pool == NULL) { rte_errno = -EINVAL; return NULL; } @@ -79,6 +81,7 @@ rte_pdcp_entity_establish(const struct rte_pdcp_entity_conf *conf) en_priv->state.rx_deliv = conf->count; en_priv->state.tx_next = conf->count; en_priv->cop_pool = conf->cop_pool; + en_priv->ctr_pdu_pool = conf->ctr_pdu_pool; /* Setup crypto session */ ret = pdcp_crypto_sess_create(entity, conf); @@ -89,6 +92,10 @@ rte_pdcp_entity_establish(const struct rte_pdcp_entity_conf *conf) if (ret) goto crypto_sess_destroy; + ret = pdcp_cnt_ring_create(entity, conf); + if (ret) + goto crypto_sess_destroy; + return entity; crypto_sess_destroy: @@ -136,3 +143,41 @@ rte_pdcp_entity_suspend(struct rte_pdcp_entity *pdcp_entity, return 0; } + +struct rte_mbuf * +rte_pdcp_control_pdu_create(struct rte_pdcp_entity *pdcp_entity, + enum rte_pdcp_ctrl_pdu_type type) +{ + struct entity_priv *en_priv; + struct rte_mbuf *m; + int ret; + + if (pdcp_entity == NULL) { + rte_errno = -EINVAL; + return NULL; + } + + en_priv = entity_priv_get(pdcp_entity); + + m = rte_pktmbuf_alloc(en_priv->ctr_pdu_pool); + if (m == NULL) { + rte_errno = -ENOMEM; + return NULL; + } + + switch (type) { + case RTE_PDCP_CTRL_PDU_TYPE_STATUS_REPORT: + ret = pdcp_ctrl_pdu_status_gen(en_priv, m); + break; + default: + ret = -ENOTSUP; + } + + if (ret) { + rte_pktmbuf_free(m); + rte_errno = ret; + return NULL; + } + + return m; +} diff --git a/lib/pdcp/rte_pdcp.h b/lib/pdcp/rte_pdcp.h index 54f88e3fd3..d2db25d7d9 100644 --- a/lib/pdcp/rte_pdcp.h +++ b/lib/pdcp/rte_pdcp.h @@ -16,6 +16,7 @@ #include #include #include +#include #include #ifdef __cplusplus @@ -78,6 +79,8 @@ struct rte_pdcp_entity_conf { struct rte_mempool *sess_mpool; /** Crypto op pool.*/ struct rte_mempool *cop_pool; + /** Mbuf pool to be used for allocating control PDUs.*/ + struct rte_mempool *ctr_pdu_pool; /** * 32 bit count value (HFN + SN) to be used for the first packet. 
* pdcp_xfrm.hfn would be ignored as the HFN would be derived from this value. @@ -91,6 +94,15 @@ struct rte_pdcp_entity_conf { uint8_t dev_id; /** Reverse direction during IV generation. Can be used to simulate UE crypto processing.*/ bool reverse_iv_direction; + /** + * Status report required (specified in TS 38.331). + * + * If PDCP entity is configured to send a PDCP status report, the upper layer application + * may request a receiving PDCP entity to generate a PDCP status report using + * ``rte_pdcp_ctrl_pdu_create``. In addition, PDCP status reports may be generated during + * operations such as entity re-establishment. + */ + bool status_report_required; }; /* >8 End of structure rte_pdcp_entity_conf. */ @@ -169,6 +181,25 @@ int rte_pdcp_entity_suspend(struct rte_pdcp_entity *pdcp_entity, struct rte_mbuf *out_mb[]); +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice + * + * Create control PDU packet of the `type` specified. + * + * @param pdcp_entity + * Pointer to the PDCP entity for which the control PDU need to be generated. + * @param type + * Type of control PDU to be generated. + * @return + * - Control PDU generated, in case of success. + * - NULL in case of failure. rte_errno will be set to error code. + */ +__rte_experimental +struct rte_mbuf * +rte_pdcp_control_pdu_create(struct rte_pdcp_entity *pdcp_entity, + enum rte_pdcp_ctrl_pdu_type type); + /** * @warning * @b EXPERIMENTAL: this API may change without prior notice diff --git a/lib/pdcp/version.map b/lib/pdcp/version.map index f9ff30600a..d0cf338e1f 100644 --- a/lib/pdcp/version.map +++ b/lib/pdcp/version.map @@ -6,6 +6,8 @@ EXPERIMENTAL { rte_pdcp_entity_release; rte_pdcp_entity_suspend; + rte_pdcp_control_pdu_create; + rte_pdcp_pkt_post_process; rte_pdcp_pkt_pre_process; From patchwork Fri Apr 14 17:45:03 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Anoob Joseph X-Patchwork-Id: 126113 X-Patchwork-Delegate: gakhil@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id AB7AE42943; Fri, 14 Apr 2023 19:47:25 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 34AC842D31; Fri, 14 Apr 2023 19:47:24 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by mails.dpdk.org (Postfix) with ESMTP id 4A8C842B8E for ; Fri, 14 Apr 2023 19:47:23 +0200 (CEST) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id 33EGNfC9012938; Fri, 14 Apr 2023 10:47:22 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0220; bh=Y7m4cOQZaQacu9x5Y00xRSk8NS5AfWdr9NQagwGBQWc=; b=cElSltpVGmopI8J+LbQyOFycLdfAXdwvcgrqruKwKqtcjKdLg+Oqi8cjvvuU66YxUdRX tlsLakcqg+iXe4qDA/7mVTQoYPbXQ3oGoBg+jZYWMoa0FqDS3V3XzaMM9FW/YhhObWSe iq6Tmf7GOIo3t/Kpe1HVY1Ae5u1prfx9fiwi1SC4j1dieu1/FfXkzXkDsqihAOiKmSAC HsptHMsXlu8c98L0JqiMstT2PwkFJKYFDRDPx0iaDwYP35vn8i7Kr4FvkwaSC2J+KqPL Lp1HnU8oRNMZsL5aI2git5LrBTZo8cAMr7cvEqABFIO7/OdH0C00EmCiKiYVeODjJZFf Ag== Received: from dc5-exch02.marvell.com ([199.233.59.182]) by mx0b-0016f401.pphosted.com (PPS) with ESMTPS id 3py646s6t9-1 (version=TLSv1.2 
From patchwork Fri Apr 14 17:45:03 2023 X-Patchwork-Submitter: Anoob Joseph X-Patchwork-Id: 126113 X-Patchwork-Delegate: gakhil@marvell.com From: Anoob Joseph Subject: [PATCH v2 13/22] pdcp: implement t-Reordering and packet buffering Date: Fri, 14 Apr 2023 23:15:03 +0530 Message-ID: <20230414174512.642-14-anoobj@marvell.com> From: Volodymyr Fialko Add in-order delivery of packets in PDCP. Delivery of packets in-order relies on t-Reordering timer. When 'out-of-order delivery' is disabled, PDCP will buffer all received packets that are out of order. The t-Reordering timer determines the time period these packets would be held in the buffer, waiting for any missing packets to arrive. Introduce packet buffering and state variables which indicate status of the timer.
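For context, a minimal sketch in C of how a downlink application might consume the post-process API once buffering is in place (not part of the patch; hand_to_upper_layer() is a hypothetical hook). The out_mb array is sized to max_pkt_cache because a burst that fills a gap may also release previously buffered packets.

#include <rte_mbuf.h>
#include <rte_pdcp.h>

/* Hypothetical application hook delivering an in-order SDU to the upper layer. */
void hand_to_upper_layer(struct rte_mbuf *sdu);

static uint16_t
deliver_in_order(const struct rte_pdcp_entity *entity,
		 struct rte_mbuf *in_mb[], uint16_t num)
{
	/* out_mb must be able to hold up to max_pkt_cache mbufs: when a gap is
	 * filled, buffered packets are drained together with the current burst. */
	struct rte_mbuf *out_mb[entity->max_pkt_cache];
	uint16_t i, nb_err = 0, nb_out;

	nb_out = rte_pdcp_pkt_post_process(entity, in_mb, out_mb, num, &nb_err);

	for (i = 0; i < nb_out; i++)
		hand_to_upper_layer(out_mb[i]);

	/* nb_err reports how many input packets failed post-processing. */
	return nb_out;
}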
Signed-off-by: Anoob Joseph Signed-off-by: Volodymyr Fialko --- lib/pdcp/meson.build | 3 +- lib/pdcp/pdcp_entity.h | 19 +++++++ lib/pdcp/pdcp_process.c | 123 +++++++++++++++++++++++++++++++--------- lib/pdcp/pdcp_reorder.c | 27 +++++++++ lib/pdcp/pdcp_reorder.h | 62 ++++++++++++++++++++ lib/pdcp/rte_pdcp.c | 53 +++++++++++++++-- lib/pdcp/rte_pdcp.h | 6 +- 7 files changed, 257 insertions(+), 36 deletions(-) create mode 100644 lib/pdcp/pdcp_reorder.c create mode 100644 lib/pdcp/pdcp_reorder.h diff --git a/lib/pdcp/meson.build b/lib/pdcp/meson.build index 75d476bf6d..f4f9246bcb 100644 --- a/lib/pdcp/meson.build +++ b/lib/pdcp/meson.build @@ -12,9 +12,10 @@ sources = files( 'pdcp_crypto.c', 'pdcp_ctrl_pdu.c', 'pdcp_process.c', + 'pdcp_reorder.c', 'rte_pdcp.c', ) headers = files('rte_pdcp.h') indirect_headers += files('rte_pdcp_group.h') -deps += ['mbuf', 'net', 'cryptodev', 'security'] +deps += ['mbuf', 'net', 'cryptodev', 'security', 'reorder'] diff --git a/lib/pdcp/pdcp_entity.h b/lib/pdcp/pdcp_entity.h index 7b7e7f69dd..71962d7279 100644 --- a/lib/pdcp/pdcp_entity.h +++ b/lib/pdcp/pdcp_entity.h @@ -11,6 +11,8 @@ #include #include +#include "pdcp_reorder.h" + struct entity_priv; #define PDCP_HFN_MIN 0 @@ -109,6 +111,17 @@ union cipher_iv_partial { uint64_t u64[2]; }; +enum timer_state { + TIMER_STOP, + TIMER_RUNNING, + TIMER_EXPIRED, +}; + +struct pdcp_t_reordering { + /** Represent timer state */ + enum timer_state state; +}; + struct pdcp_cnt_bitmap { /** Number of entries that can be stored. */ uint32_t size; @@ -145,6 +158,8 @@ struct entity_priv { uint64_t is_null_auth : 1; /** Is status report required.*/ uint64_t is_status_report_required : 1; + /** Is out-of-order delivery enabled */ + uint64_t is_out_of_order_delivery : 1; } flags; /** Crypto op pool. */ struct rte_mempool *cop_pool; @@ -161,6 +176,10 @@ struct entity_priv { struct entity_priv_dl_part { /** PDCP would need to track the count values that are already received.*/ struct pdcp_cnt_bitmap bitmap; + /** t-Reordering handles */ + struct pdcp_t_reordering t_reorder; + /** Reorder packet buffer */ + struct pdcp_reorder reorder; }; struct entity_priv_ul_part { diff --git a/lib/pdcp/pdcp_process.c b/lib/pdcp/pdcp_process.c index 267b3b7723..16d22cbe14 100644 --- a/lib/pdcp/pdcp_process.c +++ b/lib/pdcp/pdcp_process.c @@ -809,25 +809,88 @@ pdcp_packet_strip(struct rte_mbuf *mb, const uint32_t hdr_trim_sz, const bool tr } } -static inline bool +static inline int pdcp_post_process_update_entity_state(const struct rte_pdcp_entity *entity, - const uint32_t count) + const uint32_t count, struct rte_mbuf *mb, + struct rte_mbuf *out_mb[], + const bool trim_mac) { struct entity_priv *en_priv = entity_priv_get(entity); + struct pdcp_t_reordering *t_reorder; + struct pdcp_reorder *reorder; + uint16_t processed = 0; - if (count < en_priv->state.rx_deliv) - return false; + struct entity_priv_dl_part *dl = entity_dl_part_get(entity); + const uint32_t hdr_trim_sz = en_priv->hdr_sz + en_priv->aad_sz; - /* t-Reordering timer is not supported - SDU will be delivered immediately. 
- * Update RX_DELIV to the COUNT value of the first PDCP SDU which has not - * been delivered to upper layers - */ - en_priv->state.rx_next = count + 1; + if (count < en_priv->state.rx_deliv) + return -EINVAL; if (count >= en_priv->state.rx_next) en_priv->state.rx_next = count + 1; - return true; + pdcp_packet_strip(mb, hdr_trim_sz, trim_mac); + + if (en_priv->flags.is_out_of_order_delivery) { + out_mb[0] = mb; + en_priv->state.rx_deliv = count + 1; + + return 1; + } + + reorder = &dl->reorder; + t_reorder = &dl->t_reorder; + + if (count == en_priv->state.rx_deliv) { + if (reorder->is_active) { + /* + * This insert used only to increment reorder->min_seqn + * To remove it - min_seqn_set() has to work with non-empty buffer + */ + pdcp_reorder_insert(reorder, mb, count); + + /* Get buffered packets */ + struct rte_mbuf **cached_mbufs = &out_mb[processed]; + uint32_t nb_cached = pdcp_reorder_get_sequential(reorder, + cached_mbufs, entity->max_pkt_cache - processed); + + processed += nb_cached; + } else { + out_mb[processed++] = mb; + } + + /* Processed should never exceed the window size */ + en_priv->state.rx_deliv = count + processed; + + } else { + if (!reorder->is_active) + /* Initialize reordering buffer with RX_DELIV */ + pdcp_reorder_start(reorder, en_priv->state.rx_deliv); + /* Buffer the packet */ + pdcp_reorder_insert(reorder, mb, count); + } + + /* Stop & reset current timer if rx_reord is received */ + if (t_reorder->state == TIMER_RUNNING && + en_priv->state.rx_deliv >= en_priv->state.rx_reord) { + t_reorder->state = TIMER_STOP; + /* Stop reorder buffer, only if it's empty */ + if (en_priv->state.rx_deliv == en_priv->state.rx_next) + pdcp_reorder_stop(reorder); + } + + /* + * If t-Reordering is not running (includes the case when t-Reordering is stopped due to + * actions above). 
+ */ + if (t_reorder->state == TIMER_STOP && en_priv->state.rx_deliv < en_priv->state.rx_next) { + /* Update RX_REORD to RX_NEXT */ + en_priv->state.rx_reord = en_priv->state.rx_next; + /* Start t-Reordering */ + t_reorder->state = TIMER_RUNNING; + } + + return processed; } static inline uint16_t @@ -837,16 +900,14 @@ pdcp_post_process_uplane_sn_12_dl_flags(const struct rte_pdcp_entity *entity, uint16_t num, uint16_t *nb_err_ret, const bool is_integ_protected) { + int i, nb_processed, nb_success = 0, nb_err = 0, rsn = 0; struct entity_priv *en_priv = entity_priv_get(entity); struct rte_pdcp_up_data_pdu_sn_12_hdr *pdu_hdr; - int i, nb_success = 0, nb_err = 0, rsn = 0; const uint32_t aad_sz = en_priv->aad_sz; struct rte_mbuf *err_mb[num]; struct rte_mbuf *mb; uint32_t count; - const uint32_t hdr_trim_sz = en_priv->hdr_sz + aad_sz; - for (i = 0; i < num; i++) { mb = in_mb[i]; if (unlikely(mb->ol_flags & RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED)) @@ -864,11 +925,12 @@ pdcp_post_process_uplane_sn_12_dl_flags(const struct rte_pdcp_entity *entity, RTE_SECURITY_PDCP_SN_SIZE_12))) goto error; - if (unlikely(!pdcp_post_process_update_entity_state(entity, count))) + nb_processed = pdcp_post_process_update_entity_state( + entity, count, mb, &out_mb[nb_success], is_integ_protected); + if (nb_processed < 0) goto error; - pdcp_packet_strip(mb, hdr_trim_sz, is_integ_protected); - out_mb[nb_success++] = mb; + nb_success += nb_processed; continue; error: @@ -908,14 +970,13 @@ pdcp_post_process_uplane_sn_18_dl_flags(const struct rte_pdcp_entity *entity, const bool is_integ_protected) { struct entity_priv *en_priv = entity_priv_get(entity); + int i, nb_processed, nb_success = 0, nb_err = 0; struct rte_pdcp_up_data_pdu_sn_18_hdr *pdu_hdr; const uint32_t aad_sz = en_priv->aad_sz; - int i, nb_success = 0, nb_err = 0; struct rte_mbuf *mb, *err_mb[num]; int32_t rsn = 0; uint32_t count; - const uint32_t hdr_trim_sz = en_priv->hdr_sz + aad_sz; for (i = 0; i < num; i++) { mb = in_mb[i]; @@ -936,11 +997,12 @@ pdcp_post_process_uplane_sn_18_dl_flags(const struct rte_pdcp_entity *entity, RTE_SECURITY_PDCP_SN_SIZE_18))) goto error; - if (unlikely(!pdcp_post_process_update_entity_state(entity, count))) + nb_processed = pdcp_post_process_update_entity_state( + entity, count, mb, &out_mb[nb_success], is_integ_protected); + if (nb_processed < 0) goto error; - pdcp_packet_strip(mb, hdr_trim_sz, is_integ_protected); - out_mb[nb_success++] = mb; + nb_success += nb_processed; continue; error: @@ -979,16 +1041,14 @@ pdcp_post_process_cplane_sn_12_dl(const struct rte_pdcp_entity *entity, uint16_t num, uint16_t *nb_err_ret) { struct entity_priv *en_priv = entity_priv_get(entity); + int i, nb_processed, nb_success = 0, nb_err = 0; struct rte_pdcp_cp_data_pdu_sn_12_hdr *pdu_hdr; const uint32_t aad_sz = en_priv->aad_sz; - int i, nb_success = 0, nb_err = 0; struct rte_mbuf *err_mb[num]; struct rte_mbuf *mb; uint32_t count; int32_t rsn; - const uint32_t hdr_trim_sz = en_priv->hdr_sz + aad_sz; - for (i = 0; i < num; i++) { mb = in_mb[i]; if (unlikely(mb->ol_flags & RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED)) @@ -1002,12 +1062,12 @@ pdcp_post_process_cplane_sn_12_dl(const struct rte_pdcp_entity *entity, RTE_SECURITY_PDCP_SN_SIZE_12))) goto error; - if (unlikely(!pdcp_post_process_update_entity_state(entity, count))) + nb_processed = pdcp_post_process_update_entity_state( + entity, count, mb, &out_mb[nb_success], true); + if (nb_processed < 0) goto error; - pdcp_packet_strip(mb, hdr_trim_sz, true); - - out_mb[nb_success++] = mb; + nb_success += 
nb_processed; continue; error: @@ -1170,6 +1230,13 @@ pdcp_entity_priv_populate(struct entity_priv *en_priv, const struct rte_pdcp_ent en_priv->flags.is_status_report_required = 1; } + /** + * flags.is_out_of_order_delivery + * + * Indicate whether the outoforder delivery is enabled for PDCP entity. + */ + en_priv->flags.is_out_of_order_delivery = conf->out_of_order_delivery; + /** * hdr_sz * diff --git a/lib/pdcp/pdcp_reorder.c b/lib/pdcp/pdcp_reorder.c new file mode 100644 index 0000000000..5399f0dc28 --- /dev/null +++ b/lib/pdcp/pdcp_reorder.c @@ -0,0 +1,27 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2023 Marvell. + */ + +#include +#include + +#include "pdcp_reorder.h" + +int +pdcp_reorder_create(struct pdcp_reorder *reorder, uint32_t window_size) +{ + reorder->buf = rte_reorder_create("reorder_buffer", SOCKET_ID_ANY, window_size); + if (reorder->buf == NULL) + return -rte_errno; + + reorder->window_size = window_size; + reorder->is_active = false; + + return 0; +} + +void +pdcp_reorder_destroy(const struct pdcp_reorder *reorder) +{ + rte_reorder_free(reorder->buf); +} diff --git a/lib/pdcp/pdcp_reorder.h b/lib/pdcp/pdcp_reorder.h new file mode 100644 index 0000000000..6a2f61d6ae --- /dev/null +++ b/lib/pdcp/pdcp_reorder.h @@ -0,0 +1,62 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2023 Marvell. + */ + +#ifndef PDCP_REORDER_H +#define PDCP_REORDER_H + +#include + +struct pdcp_reorder { + struct rte_reorder_buffer *buf; + uint32_t window_size; + bool is_active; +}; + +int pdcp_reorder_create(struct pdcp_reorder *reorder, uint32_t window_size); +void pdcp_reorder_destroy(const struct pdcp_reorder *reorder); + +static inline uint32_t +pdcp_reorder_get_sequential(struct pdcp_reorder *reorder, struct rte_mbuf **mbufs, + uint32_t max_mbufs) +{ + return rte_reorder_drain(reorder->buf, mbufs, max_mbufs); +} + +static inline uint32_t +pdcp_reorder_up_to_get(struct pdcp_reorder *reorder, struct rte_mbuf **mbufs, + uint32_t max_mbufs, uint32_t seqn) +{ + return rte_reorder_drain_up_to_seqn(reorder->buf, mbufs, max_mbufs, seqn); +} + +static inline void +pdcp_reorder_start(struct pdcp_reorder *reorder, uint32_t min_seqn) +{ + int ret; + + reorder->is_active = true; + + ret = rte_reorder_min_seqn_set(reorder->buf, min_seqn); + RTE_VERIFY(ret == 0); +} + +static inline void +pdcp_reorder_stop(struct pdcp_reorder *reorder) +{ + reorder->is_active = false; +} + +static inline void +pdcp_reorder_insert(struct pdcp_reorder *reorder, struct rte_mbuf *mbuf, + rte_reorder_seqn_t pkt_count) +{ + int ret; + + *rte_reorder_seqn(mbuf) = pkt_count; + + ret = rte_reorder_insert(reorder->buf, mbuf); + RTE_VERIFY(ret == 0); +} + +#endif /* PDCP_REORDER_H */ diff --git a/lib/pdcp/rte_pdcp.c b/lib/pdcp/rte_pdcp.c index 5cd3f5ca31..c9d44ca539 100644 --- a/lib/pdcp/rte_pdcp.c +++ b/lib/pdcp/rte_pdcp.c @@ -29,6 +29,17 @@ pdcp_entity_size_get(const struct rte_pdcp_entity_conf *conf) return RTE_ALIGN_CEIL(size, RTE_CACHE_LINE_SIZE); } +static int +pdcp_dl_establish(struct rte_pdcp_entity *entity, const struct rte_pdcp_entity_conf *conf) +{ + const uint32_t window_size = pdcp_window_size_get(conf->pdcp_xfrm.sn_size); + struct entity_priv_dl_part *dl = entity_dl_part_get(entity); + + entity->max_pkt_cache = RTE_MAX(entity->max_pkt_cache, window_size); + + return pdcp_reorder_create(&dl->reorder, window_size); +} + struct rte_pdcp_entity * rte_pdcp_entity_establish(const struct rte_pdcp_entity_conf *conf) { @@ -92,6 +103,12 @@ rte_pdcp_entity_establish(const struct rte_pdcp_entity_conf 
*conf) if (ret) goto crypto_sess_destroy; + if (conf->pdcp_xfrm.pkt_dir == RTE_SECURITY_PDCP_DOWNLINK) { + ret = pdcp_dl_establish(entity, conf); + if (ret) + goto crypto_sess_destroy; + } + ret = pdcp_cnt_ring_create(entity, conf); if (ret) goto crypto_sess_destroy; @@ -106,26 +123,50 @@ rte_pdcp_entity_establish(const struct rte_pdcp_entity_conf *conf) return NULL; } +static int +pdcp_dl_release(struct rte_pdcp_entity *entity, struct rte_mbuf *out_mb[]) +{ + struct entity_priv_dl_part *dl = entity_dl_part_get(entity); + struct entity_priv *en_priv = entity_priv_get(entity); + int nb_out; + + nb_out = pdcp_reorder_up_to_get(&dl->reorder, out_mb, entity->max_pkt_cache, + en_priv->state.rx_next); + + pdcp_reorder_destroy(&dl->reorder); + + return nb_out; +} + int rte_pdcp_entity_release(struct rte_pdcp_entity *pdcp_entity, struct rte_mbuf *out_mb[]) { + struct entity_priv *en_priv; + int nb_out = 0; + if (pdcp_entity == NULL) return -EINVAL; + en_priv = entity_priv_get(pdcp_entity); + + if (!en_priv->flags.is_ul_entity) + nb_out = pdcp_dl_release(pdcp_entity, out_mb); + /* Teardown crypto sessions */ pdcp_crypto_sess_destroy(pdcp_entity); rte_free(pdcp_entity); - RTE_SET_USED(out_mb); - return 0; + return nb_out; } int rte_pdcp_entity_suspend(struct rte_pdcp_entity *pdcp_entity, struct rte_mbuf *out_mb[]) { + struct entity_priv_dl_part *dl; struct entity_priv *en_priv; + int nb_out = 0; if (pdcp_entity == NULL) return -EINVAL; @@ -135,13 +176,15 @@ rte_pdcp_entity_suspend(struct rte_pdcp_entity *pdcp_entity, if (en_priv->flags.is_ul_entity) { en_priv->state.tx_next = 0; } else { + dl = entity_dl_part_get(pdcp_entity); + nb_out = pdcp_reorder_up_to_get(&dl->reorder, out_mb, pdcp_entity->max_pkt_cache, + en_priv->state.rx_next); + pdcp_reorder_stop(&dl->reorder); en_priv->state.rx_next = 0; en_priv->state.rx_deliv = 0; } - RTE_SET_USED(out_mb); - - return 0; + return nb_out; } struct rte_mbuf * diff --git a/lib/pdcp/rte_pdcp.h b/lib/pdcp/rte_pdcp.h index d2db25d7d9..0d2f4fe7c1 100644 --- a/lib/pdcp/rte_pdcp.h +++ b/lib/pdcp/rte_pdcp.h @@ -103,6 +103,8 @@ struct rte_pdcp_entity_conf { * operations such as entity re-establishment. */ bool status_report_required; + /** Enable out of order delivery. */ + bool out_of_order_delivery; }; /* >8 End of structure rte_pdcp_entity_conf. */ @@ -259,8 +261,8 @@ rte_pdcp_pkt_pre_process(const struct rte_pdcp_entity *entity, * @param in_mb * The address of an array of *num* pointers to *rte_mbuf* structures. * @param[out] out_mb - * The address of an array of *num* pointers to *rte_mbuf* structures - * to output packets after PDCP post-processing. + * The address of an array that can hold up to *rte_pdcp_entity.max_pkt_cache* + * pointers to *rte_mbuf* structures to output packets after PDCP post-processing. * @param num * The maximum number of packets to process. 
* @param[out] nb_err From patchwork Fri Apr 14 17:45:04 2023 X-Patchwork-Submitter: Anoob Joseph X-Patchwork-Id: 126114 X-Patchwork-Delegate: gakhil@marvell.com From: Anoob Joseph Subject: [PATCH v2 14/22] test/pdcp: add in-order delivery cases Date: Fri, 14 Apr 2023 23:15:04 +0530 Message-ID: <20230414174512.642-15-anoobj@marvell.com> From: Volodymyr Fialko Add test cases to verify behaviour when in-order delivery is enabled and packets arrive
in out-of-order. PDCP library is expected to buffer the packets and return packets in-order when the missing packet arrives. Signed-off-by: Anoob Joseph Signed-off-by: Volodymyr Fialko --- app/test/test_pdcp.c | 195 +++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 195 insertions(+) diff --git a/app/test/test_pdcp.c b/app/test/test_pdcp.c index 4ecb4d9572..c96ddc4f53 100644 --- a/app/test/test_pdcp.c +++ b/app/test/test_pdcp.c @@ -15,6 +15,15 @@ #define CDEV_INVALID_ID UINT8_MAX #define NB_TESTS RTE_DIM(pdcp_test_params) +/* Assert that condition is true, or goto the mark */ +#define ASSERT_TRUE_OR_GOTO(cond, mark, ...) do {\ + if (!(cond)) { \ + RTE_LOG(ERR, USER1, "Error at: %s:%d\n", __func__, __LINE__); \ + RTE_LOG(ERR, USER1, __VA_ARGS__); \ + goto mark; \ + } \ +} while (0) + /* According to formula(7.2.a Window_Size) */ #define PDCP_WINDOW_SIZE(sn_size) (1 << (sn_size - 1)) @@ -82,6 +91,14 @@ run_test_with_all_known_vec(const void *args) return run_test_foreach_known_vec(test, false); } +static int +run_test_with_all_known_vec_until_first_pass(const void *args) +{ + test_with_conf_t test = args; + + return run_test_foreach_known_vec(test, true); +} + static inline int pdcp_hdr_size_get(enum rte_security_pdcp_sn_size sn_size) { @@ -868,6 +885,7 @@ test_sn_range_type(enum sn_range_type type, struct pdcp_test_conf *conf) /* Configure Uplink to generate expected, encrypted packet */ pdcp_sn_to_raw_set(conf->input, new_sn, conf->entity.pdcp_xfrm.sn_size); + conf->entity.out_of_order_delivery = true; conf->entity.reverse_iv_direction = true; conf->entity.count = PDCP_SET_COUNT(new_hfn, new_sn, sn_size); conf->output_len = 0; @@ -913,6 +931,168 @@ test_sn_minus_outside(struct pdcp_test_conf *t_conf) return test_sn_range_type(SN_RANGE_MINUS_OUTSIDE, t_conf); } +static struct rte_mbuf * +generate_packet_for_dl_with_sn(struct pdcp_test_conf ul_conf, uint32_t sn) +{ + int ret; + + ul_conf.entity.count = sn; + ul_conf.entity.out_of_order_delivery = true; + ul_conf.entity.reverse_iv_direction = true; + ul_conf.output_len = 0; + + ret = test_attempt_single(&ul_conf); + if (ret != TEST_SUCCESS) + return NULL; + + return mbuf_from_data_create(ul_conf.output, ul_conf.output_len); +} + +static bool +array_asc_sorted_check(struct rte_mbuf *m[], uint32_t len, enum rte_security_pdcp_sn_size sn_size) +{ + uint32_t i; + + if (len < 2) + return true; + + for (i = 0; i < (len - 1); i++) { + if (pdcp_sn_from_raw_get(rte_pktmbuf_mtod(m[i], void *), sn_size) > + pdcp_sn_from_raw_get(rte_pktmbuf_mtod(m[i + 1], void *), sn_size)) + return false; + } + + return true; +} + +static int +test_reorder_gap_fill(struct pdcp_test_conf *ul_conf) +{ + struct rte_mbuf *m0 = NULL, *m1 = NULL, *out_mb[2] = {0}; + uint16_t nb_success = 0, nb_err = 0; + struct rte_pdcp_entity *pdcp_entity; + struct pdcp_test_conf dl_conf; + int ret = TEST_FAILED, nb_out; + uint8_t cdev_id; + + const int start_count = 0; + + if (ul_conf->entity.pdcp_xfrm.pkt_dir == RTE_SECURITY_PDCP_DOWNLINK) + return TEST_SKIPPED; + + /* Create configuration for actual testing */ + uplink_to_downlink_convert(ul_conf, &dl_conf); + dl_conf.entity.count = start_count; + + pdcp_entity = test_entity_create(&dl_conf, &ret); + if (pdcp_entity == NULL) + return ret; + + const uint32_t sn_size = dl_conf.entity.pdcp_xfrm.sn_size; + cdev_id = dl_conf.entity.dev_id; + + /* Send packet with SN > RX_DELIV to create a gap */ + m1 = generate_packet_for_dl_with_sn(*ul_conf, start_count + 1); + ASSERT_TRUE_OR_GOTO(m1 != NULL, exit, "Could not allocate buffer for 
packet\n"); + + /* Buffered packets after insert [NULL, m1] */ + nb_success = test_process_packets(pdcp_entity, cdev_id, &m1, 1, out_mb, &nb_err); + ASSERT_TRUE_OR_GOTO(nb_err == 0, exit, "Error occurred during packet process\n"); + ASSERT_TRUE_OR_GOTO(nb_success == 0, exit, "Packet was not buffered as expected\n"); + m1 = NULL; /* Packet was moved to PDCP lib */ + + /* Generate packet to fill the existing gap */ + m0 = generate_packet_for_dl_with_sn(*ul_conf, start_count); + ASSERT_TRUE_OR_GOTO(m0 != NULL, exit, "Could not allocate buffer for packet\n"); + + /* + * Buffered packets after insert [m0, m1] + * Gap filled, all packets should be returned + */ + nb_success = test_process_packets(pdcp_entity, cdev_id, &m0, 1, out_mb, &nb_err); + ASSERT_TRUE_OR_GOTO(nb_err == 0, exit, "Error occurred during packet process\n"); + ASSERT_TRUE_OR_GOTO(nb_success == 2, exit, + "Packet count mismatch (received: %i, expected: 2)\n", nb_success); + m0 = NULL; /* Packet was moved to out_mb */ + + /* Check that packets in correct order */ + ASSERT_TRUE_OR_GOTO(array_asc_sorted_check(out_mb, nb_success, sn_size), exit, + "Error occurred during packet drain\n"); + + ret = TEST_SUCCESS; +exit: + rte_pktmbuf_free(m0); + rte_pktmbuf_free(m1); + rte_pktmbuf_free_bulk(out_mb, nb_success); + nb_out = rte_pdcp_entity_release(pdcp_entity, out_mb); + rte_pktmbuf_free_bulk(out_mb, nb_out); + return ret; +} + +static int +test_reorder_buffer_full_window_size_sn_12(const struct pdcp_test_conf *ul_conf) +{ + struct rte_mbuf *m1 = NULL, **out_mb = NULL; + uint16_t nb_success = 0, nb_err = 0; + struct rte_pdcp_entity *pdcp_entity; + struct pdcp_test_conf dl_conf; + const int rx_deliv = 0; + int ret = TEST_FAILED; + size_t i, nb_out; + uint8_t cdev_id; + + const uint32_t sn_size = ul_conf->entity.pdcp_xfrm.sn_size; + const uint32_t window_size = PDCP_WINDOW_SIZE(sn_size); + + if (ul_conf->entity.pdcp_xfrm.pkt_dir == RTE_SECURITY_PDCP_DOWNLINK || + sn_size != RTE_SECURITY_PDCP_SN_SIZE_12) + return TEST_SKIPPED; + + /* Create configuration for actual testing */ + uplink_to_downlink_convert(ul_conf, &dl_conf); + dl_conf.entity.count = rx_deliv; + + pdcp_entity = test_entity_create(&dl_conf, &ret); + if (pdcp_entity == NULL) + return ret; + + ASSERT_TRUE_OR_GOTO(pdcp_entity->max_pkt_cache >= window_size, exit, + "PDCP max packet cache is too small"); + cdev_id = dl_conf.entity.dev_id; + out_mb = rte_zmalloc(NULL, pdcp_entity->max_pkt_cache * sizeof(uintptr_t), 0); + ASSERT_TRUE_OR_GOTO(out_mb != NULL, exit, + "Could not allocate buffer for holding out_mb buffers\n"); + + /* Send packets with SN > RX_DELIV to create a gap */ + for (i = rx_deliv + 1; i < window_size; i++) { + m1 = generate_packet_for_dl_with_sn(*ul_conf, i); + ASSERT_TRUE_OR_GOTO(m1 != NULL, exit, "Could not allocate buffer for packet\n"); + /* Buffered packets after insert [NULL, m1] */ + nb_success = test_process_packets(pdcp_entity, cdev_id, &m1, 1, out_mb, &nb_err); + ASSERT_TRUE_OR_GOTO(nb_err == 0, exit, "Error occurred during packet buffering\n"); + ASSERT_TRUE_OR_GOTO(nb_success == 0, exit, "Packet was not buffered as expected\n"); + } + + m1 = generate_packet_for_dl_with_sn(*ul_conf, rx_deliv); + ASSERT_TRUE_OR_GOTO(m1 != NULL, exit, "Could not allocate buffer for packet\n"); + /* Insert missing packet */ + nb_success = test_process_packets(pdcp_entity, cdev_id, &m1, 1, out_mb, &nb_err); + ASSERT_TRUE_OR_GOTO(nb_err == 0, exit, "Error occurred during packet buffering\n"); + ASSERT_TRUE_OR_GOTO(nb_success == window_size, exit, + "Packet count mismatch 
(received: %i, expected: %i)\n", + nb_success, window_size); + m1 = NULL; + + ret = TEST_SUCCESS; +exit: + rte_pktmbuf_free(m1); + rte_pktmbuf_free_bulk(out_mb, nb_success); + nb_out = rte_pdcp_entity_release(pdcp_entity, out_mb); + rte_pktmbuf_free_bulk(out_mb, nb_out); + rte_free(out_mb); + return ret; +} + static int test_combined(struct pdcp_test_conf *ul_conf) { @@ -969,10 +1149,25 @@ static struct unit_test_suite hfn_sn_test_cases = { } }; +static struct unit_test_suite reorder_test_cases = { + .suite_name = "PDCP reorder", + .unit_test_cases = { + TEST_CASE_NAMED_WITH_DATA("test_reorder_gap_fill", + ut_setup_pdcp, ut_teardown_pdcp, + run_test_with_all_known_vec, test_reorder_gap_fill), + TEST_CASE_NAMED_WITH_DATA("test_reorder_buffer_full_window_size_sn_12", + ut_setup_pdcp, ut_teardown_pdcp, + run_test_with_all_known_vec_until_first_pass, + test_reorder_buffer_full_window_size_sn_12), + TEST_CASES_END() /**< NULL terminate unit test array */ + } +}; + struct unit_test_suite *test_suites[] = { NULL, /* Place holder for known_vector_cases */ &combined_mode_cases, &hfn_sn_test_cases, + &reorder_test_cases, NULL /* End of suites list */ }; From patchwork Fri Apr 14 17:45:05 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Anoob Joseph X-Patchwork-Id: 126115 X-Patchwork-Delegate: gakhil@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 4220342943; Fri, 14 Apr 2023 19:47:39 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 5047642B8E; Fri, 14 Apr 2023 19:47:36 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0a-0016f401.pphosted.com [67.231.148.174]) by mails.dpdk.org (Postfix) with ESMTP id 4B43C42D0E for ; Fri, 14 Apr 2023 19:47:35 +0200 (CEST) Received: from pps.filterd (m0045849.ppops.net [127.0.0.1]) by mx0a-0016f401.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id 33E906vC011272; Fri, 14 Apr 2023 10:47:33 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0220; bh=b0/x7cWkgsUghz0TosrEX8yQElBhGJ6iNgPQSFf7pzg=; b=gvAeQ55nYdJrdHuwPK1CvUqEoGL+TTPrucGgqeRMrJnB+MjmXcnC/4kZV/VPlrG3TxOC AI2OCxjRR/sw7AS8VZC9tQcXx/fIf/QQzQTdwukVLTh9mB9QcHP1JAY/KCphjTJseEzc 8uFoRinc4Y3HtDlCXzvOomF7ZyqpXDywtCHA5g79zUP9Qm5fZ2AesyPy10Q/9Zy8fu6m uTOchD7ShjGcdpHpT2LDFWfg11O+8Luo3DQBHWIySXeZo2M8h7QcPrJGvB6ISE/B/M8H 2xw2GCxG4HmQOtXpMeJQwlKlOT+R+Y31xE3oJjHaJIOSCBCCjDGwUEA3t8s4zl36tYTE ew== Received: from dc5-exch01.marvell.com ([199.233.59.181]) by mx0a-0016f401.pphosted.com (PPS) with ESMTPS id 3py3tk2ejv-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Fri, 14 Apr 2023 10:47:33 -0700 Received: from DC5-EXCH01.marvell.com (10.69.176.38) by DC5-EXCH01.marvell.com (10.69.176.38) with Microsoft SMTP Server (TLS) id 15.0.1497.48; Fri, 14 Apr 2023 10:47:31 -0700 Received: from maili.marvell.com (10.69.176.80) by DC5-EXCH01.marvell.com (10.69.176.38) with Microsoft SMTP Server id 15.0.1497.48 via Frontend Transport; Fri, 14 Apr 2023 10:47:31 -0700 Received: from BG-LT92004.corp.innovium.com (unknown [10.28.161.183]) by maili.marvell.com (Postfix) with ESMTP id CE99D3F707F; Fri, 14 Apr 2023 10:47:25 -0700 (PDT) From: Anoob Joseph To: Thomas 
Monjalon Subject: [PATCH v2 15/22] pdcp: add timer callback handlers Date: Fri, 14 Apr 2023 23:15:05 +0530 Message-ID: <20230414174512.642-16-anoobj@marvell.com> From: Volodymyr Fialko PDCP has a windowing mechanism which accepts only packets that fall within the reception window. The pivot point for this window is RX_REORD, which is the first missing or next expected packet. If the missing packet is not received within a specified time, the RX_REORD state variable needs to be moved up to slide the reception window. PDCP relies on timers for such operations. The timer needs to be armed when the PDCP library does not receive all packets in order and starts buffering packets that arrived after a missing packet. The timer needs to be cancelled when the missing packet is received. To avoid a dependency on a particular timer implementation, the PDCP library allows the application to register two callbacks, timer_start() and timer_stop(), which will be invoked later by the library.
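For context, a rough sketch in C of how an application could back these callbacks with an rte_timer (an assumption, not part of the patch; the single-tick timeout and the app_t_reorder_expiry() handler are placeholders).

#include <rte_common.h>
#include <rte_lcore.h>
#include <rte_timer.h>
#include <rte_pdcp.h>

/* Hypothetical expiry handler; see the expiry handling added later in this series. */
static void app_t_reorder_expiry(struct rte_timer *tim, void *args);

static void
app_timer_start(void *timer, void *args)
{
	/* A single tick is only a placeholder for the desired t-Reordering duration. */
	rte_timer_reset_sync(timer, 1, SINGLE, rte_lcore_id(), app_t_reorder_expiry, args);
}

static void
app_timer_stop(void *timer, void *args)
{
	RTE_SET_USED(args);
	rte_timer_stop_sync(timer);
}

static void
app_t_reordering_setup(struct rte_pdcp_entity_conf *conf, struct rte_timer *tim, void *app_ctx)
{
	rte_timer_init(tim);
	conf->t_reordering.timer = tim;
	conf->t_reordering.args = app_ctx; /* e.g. a struct holding the entity pointer */
	conf->t_reordering.start = app_timer_start;
	conf->t_reordering.stop = app_timer_stop;
}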
Signed-off-by: Anoob Joseph Signed-off-by: Volodymyr Fialko --- lib/pdcp/pdcp_entity.h | 2 ++ lib/pdcp/pdcp_process.c | 2 ++ lib/pdcp/rte_pdcp.c | 1 + lib/pdcp/rte_pdcp.h | 47 +++++++++++++++++++++++++++++++++++++++++ 4 files changed, 52 insertions(+) diff --git a/lib/pdcp/pdcp_entity.h b/lib/pdcp/pdcp_entity.h index 71962d7279..ca98a1d0f7 100644 --- a/lib/pdcp/pdcp_entity.h +++ b/lib/pdcp/pdcp_entity.h @@ -120,6 +120,8 @@ enum timer_state { struct pdcp_t_reordering { /** Represent timer state */ enum timer_state state; + /** User defined callback handles */ + struct rte_pdcp_t_reordering handle; }; struct pdcp_cnt_bitmap { diff --git a/lib/pdcp/pdcp_process.c b/lib/pdcp/pdcp_process.c index 16d22cbe14..9ba07d5255 100644 --- a/lib/pdcp/pdcp_process.c +++ b/lib/pdcp/pdcp_process.c @@ -874,6 +874,7 @@ pdcp_post_process_update_entity_state(const struct rte_pdcp_entity *entity, if (t_reorder->state == TIMER_RUNNING && en_priv->state.rx_deliv >= en_priv->state.rx_reord) { t_reorder->state = TIMER_STOP; + t_reorder->handle.stop(t_reorder->handle.timer, t_reorder->handle.args); /* Stop reorder buffer, only if it's empty */ if (en_priv->state.rx_deliv == en_priv->state.rx_next) pdcp_reorder_stop(reorder); @@ -888,6 +889,7 @@ pdcp_post_process_update_entity_state(const struct rte_pdcp_entity *entity, en_priv->state.rx_reord = en_priv->state.rx_next; /* Start t-Reordering */ t_reorder->state = TIMER_RUNNING; + t_reorder->handle.start(t_reorder->handle.timer, t_reorder->handle.args); } return processed; diff --git a/lib/pdcp/rte_pdcp.c b/lib/pdcp/rte_pdcp.c index c9d44ca539..755d592578 100644 --- a/lib/pdcp/rte_pdcp.c +++ b/lib/pdcp/rte_pdcp.c @@ -36,6 +36,7 @@ pdcp_dl_establish(struct rte_pdcp_entity *entity, const struct rte_pdcp_entity_c struct entity_priv_dl_part *dl = entity_dl_part_get(entity); entity->max_pkt_cache = RTE_MAX(entity->max_pkt_cache, window_size); + dl->t_reorder.handle = conf->t_reordering; return pdcp_reorder_create(&dl->reorder, window_size); } diff --git a/lib/pdcp/rte_pdcp.h b/lib/pdcp/rte_pdcp.h index 0d2f4fe7c1..55e57c3b9b 100644 --- a/lib/pdcp/rte_pdcp.h +++ b/lib/pdcp/rte_pdcp.h @@ -66,6 +66,51 @@ struct rte_pdcp_entity { uint64_t user_area[2]; } __rte_cache_aligned; +/** + * Callback function type for t-Reordering timer start, set during PDCP entity establish. + * This callback is invoked by PDCP library, during t-Reordering timer start event. + * Only one t-Reordering per receiving PDCP entity would be running at a given time. + * + * @see struct rte_pdcp_timer + * @see rte_pdcp_entity_establish() + * + * @param timer + * Pointer to timer. + * @param args + * Pointer to timer arguments. + */ +typedef void (*rte_pdcp_t_reordering_start_cb_t)(void *timer, void *args); + +/** + * Callback function type for t-Reordering timer stop, set during PDCP entity establish. + * This callback will be invoked by PDCP library, during t-Reordering timer stop event. + * + * @see struct rte_pdcp_timer + * @see rte_pdcp_entity_establish() + * + * @param timer + * Pointer to timer. + * @param args + * Pointer to timer arguments. + */ +typedef void (*rte_pdcp_t_reordering_stop_cb_t)(void *timer, void *args); + +/** + * PDCP t-Reordering timer interface + * + * Configuration provided by user, that PDCP library will invoke according to timer behaviour. 
+ */ +struct rte_pdcp_t_reordering { + /** Timer pointer, stored for later use in callback functions */ + void *timer; + /** Timer arguments, stored for later use in callback functions */ + void *args; + /** Timer start callback handle */ + rte_pdcp_t_reordering_start_cb_t start; + /** Timer stop callback handle */ + rte_pdcp_t_reordering_stop_cb_t stop; +}; + /** * PDCP entity configuration to be used for establishing an entity. */ @@ -105,6 +150,8 @@ struct rte_pdcp_entity_conf { bool status_report_required; /** Enable out of order delivery. */ bool out_of_order_delivery; + /** t-Reordering timer configuration */ + struct rte_pdcp_t_reordering t_reordering; }; /* >8 End of structure rte_pdcp_entity_conf. */ From patchwork Fri Apr 14 17:45:06 2023 X-Patchwork-Submitter: Anoob Joseph X-Patchwork-Id: 126116 X-Patchwork-Delegate: gakhil@marvell.com From: Anoob Joseph Subject: [PATCH v2 16/22] pdcp: add timer expiry handle Date: Fri, 14 Apr 2023 23:15:06 +0530 Message-ID: <20230414174512.642-17-anoobj@marvell.com>
From: Volodymyr Fialko The PDCP protocol requires usage of timers to keep track of how long an out-of-order packet should be buffered while waiting for missing packets. Applications can register a desired timer implementation with the PDCP library. Once the timer expires, the application will be notified, and further handling of the event will be performed in the PDCP library. When the timer expires, the PDCP library will return the cached packets, and PDCP internal state variables (such as RX_REORD and RX_DELIV) will be updated accordingly. Signed-off-by: Anoob Joseph Signed-off-by: Volodymyr Fialko --- doc/guides/prog_guide/pdcp_lib.rst | 29 ++++++++++++++++++ lib/pdcp/pdcp_process.c | 49 ++++++++++++++++++++++++++++++ lib/pdcp/rte_pdcp.h | 31 +++++++++++++++++++ lib/pdcp/version.map | 2 ++ 4 files changed, 111 insertions(+) diff --git a/doc/guides/prog_guide/pdcp_lib.rst b/doc/guides/prog_guide/pdcp_lib.rst index f23360dfc3..32370633e5 100644 --- a/doc/guides/prog_guide/pdcp_lib.rst +++ b/doc/guides/prog_guide/pdcp_lib.rst @@ -229,6 +229,35 @@ parameters for entity creation. } } +Timers +------ + +PDCP utilizes a reception window mechanism to limit the bits of the COUNT value +transmitted in the packet. It utilizes state variables such as RX_REORD, +RX_DELIV to define the window and uses RX_DELIV as the lower pivot point of the +window. + +RX_DELIV is updated only when packets are received in-order. Any missing +packet means RX_DELIV is not updated. A timer, t-Reordering, helps PDCP +to slide the window if the missing packet is not received within a specified time +duration. + +While starting and stopping the timer is done by the PDCP library, the application +can register its own timer implementation. This makes sure the application +can choose between timers such as rte_timer and rte_event based timers. Starting +and stopping of the timer happens during the pre- and post-process APIs. + +When the t-Reordering timer expires, the application receives the expiry event. +To perform the PDCP handling of the expiry event, +``rte_pdcp_t_reordering_expiry_handle`` can be used. Expiry handling +involves sliding the window by updating state variables and passing the expired +packets to the application. + +.. literalinclude:: ../../../lib/pdcp/rte_pdcp.h + :language: c + :start-after: Structure rte_pdcp_t_reordering 8< + :end-before: >8 End of structure rte_pdcp_t_reordering.
+ Supported features ------------------ diff --git a/lib/pdcp/pdcp_process.c b/lib/pdcp/pdcp_process.c index 9ba07d5255..91b87a2a81 100644 --- a/lib/pdcp/pdcp_process.c +++ b/lib/pdcp/pdcp_process.c @@ -1285,3 +1285,52 @@ pdcp_process_func_set(struct rte_pdcp_entity *entity, const struct rte_pdcp_enti return 0; } + +uint16_t +rte_pdcp_t_reordering_expiry_handle(const struct rte_pdcp_entity *entity, struct rte_mbuf *out_mb[]) +{ + struct entity_priv_dl_part *dl = entity_dl_part_get(entity); + struct entity_priv *en_priv = entity_priv_get(entity); + uint16_t capacity = entity->max_pkt_cache; + uint16_t nb_out, nb_seq; + + /* 5.2.2.2 Actions when a t-Reordering expires */ + + /* + * - deliver to upper layers in ascending order of the associated COUNT value after + * performing header decompression, if not decompressed before: + */ + + /* - all stored PDCP SDU(s) with associated COUNT value(s) < RX_REORD; */ + nb_out = pdcp_reorder_up_to_get(&dl->reorder, out_mb, capacity, en_priv->state.rx_reord); + capacity -= nb_out; + out_mb = &out_mb[nb_out]; + + /* + * - all stored PDCP SDU(s) with consecutively associated COUNT value(s) starting from + * RX_REORD; + */ + nb_seq = pdcp_reorder_get_sequential(&dl->reorder, out_mb, capacity); + nb_out += nb_seq; + + /* + * - update RX_DELIV to the COUNT value of the first PDCP SDU which has not been delivered + * to upper layers, with COUNT value >= RX_REORD; + */ + en_priv->state.rx_deliv = en_priv->state.rx_reord + nb_seq; + + /* + * - if RX_DELIV < RX_NEXT: + * - update RX_REORD to RX_NEXT; + * - start t-Reordering. + */ + if (en_priv->state.rx_deliv < en_priv->state.rx_next) { + en_priv->state.rx_reord = en_priv->state.rx_next; + dl->t_reorder.state = TIMER_RUNNING; + dl->t_reorder.handle.start(dl->t_reorder.handle.timer, dl->t_reorder.handle.args); + } else { + dl->t_reorder.state = TIMER_EXPIRED; + } + + return nb_out; +} diff --git a/lib/pdcp/rte_pdcp.h b/lib/pdcp/rte_pdcp.h index 55e57c3b9b..c077acce63 100644 --- a/lib/pdcp/rte_pdcp.h +++ b/lib/pdcp/rte_pdcp.h @@ -100,6 +100,7 @@ typedef void (*rte_pdcp_t_reordering_stop_cb_t)(void *timer, void *args); * * Configuration provided by user, that PDCP library will invoke according to timer behaviour. */ +/* Structure rte_pdcp_t_reordering 8< */ struct rte_pdcp_t_reordering { /** Timer pointer, stored for later use in callback functions */ void *timer; @@ -110,6 +111,7 @@ struct rte_pdcp_t_reordering { /** Timer start callback handle */ rte_pdcp_t_reordering_stop_cb_t stop; }; +/* >8 End of structure rte_pdcp_t_reordering. */ /** * PDCP entity configuration to be used for establishing an entity. @@ -327,6 +329,35 @@ rte_pdcp_pkt_post_process(const struct rte_pdcp_entity *entity, return entity->post_process(entity, in_mb, out_mb, num, nb_err); } +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice + * + * 5.2.2.2 Actions when a t-Reordering expires + * + * When t-Reordering timer expires, PDCP is required to slide the reception + * window by updating state variables such as RX_REORD & RX_DELIV. PDCP would + * need to deliver some of the buffered packets based on the state variables and + * conditions described. + * + * The expiry handle need to be invoked by the application when t-Reordering + * timer expires. In addition to returning buffered packets, it may also restart + * timer based on the state variables. + * + * @param entity + * Pointer to the *rte_pdcp_entity* for which the timer expired. 
+ * @param[out] out_mb + * The address of an array that can hold up to *rte_pdcp_entity.max_pkt_cache* + * pointers to *rte_mbuf* structures. Used to return buffered packets that are + * expired. + * @return + * Number of packets returned in *out_mb* buffer. + */ +__rte_experimental +uint16_t +rte_pdcp_t_reordering_expiry_handle(const struct rte_pdcp_entity *entity, + struct rte_mbuf *out_mb[]); + #include #ifdef __cplusplus diff --git a/lib/pdcp/version.map b/lib/pdcp/version.map index d0cf338e1f..89b0463be6 100644 --- a/lib/pdcp/version.map +++ b/lib/pdcp/version.map @@ -11,5 +11,7 @@ EXPERIMENTAL { rte_pdcp_pkt_post_process; rte_pdcp_pkt_pre_process; + rte_pdcp_t_reordering_expiry_handle; + local: *; }; From patchwork Fri Apr 14 17:45:07 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Anoob Joseph X-Patchwork-Id: 126117 X-Patchwork-Delegate: gakhil@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 7409342943; Fri, 14 Apr 2023 19:47:50 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 1528942C76; Fri, 14 Apr 2023 19:47:49 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0a-0016f401.pphosted.com [67.231.148.174]) by mails.dpdk.org (Postfix) with ESMTP id 3F45942D3D for ; Fri, 14 Apr 2023 19:47:47 +0200 (CEST) Received: from pps.filterd (m0045849.ppops.net [127.0.0.1]) by mx0a-0016f401.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id 33E90C9C011664; Fri, 14 Apr 2023 10:47:45 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0220; bh=Fbh2ugs5ydEKF6xop/SMEKzMDOHfIz/LcbAmJU1fm50=; b=i1wcSKXWYk+PRg3dRfUxpN1jE9e09WXqxCHS3nwKG+yz0YOIdBb9pkpqltNDb5WC1wLH E+ZwRzqIYgq98xaRPYlErFSxCTS82wU0j++zkFq8LRPFikw/HmSilZD307HJ75Gvgmku RH7wJ0IS+pzWuQvMw3PgtoQypJupqbfPxPLwx58bcg1LHzYLIbFM8jwuObxyYuDPI7v+ Cek+CE2S+JMITlXJZDUqCrcRlRiKtYqbSeMGl27q0m9u9KO+M9eBE/ji6AUjKMrSkSnL zE7YhnYktHBeuYnqWXPYlet0fkSbvRz8K0I/Ebg4xyTM8CvfmBcoTy/kipSbbJxQnF2k Nw== Received: from dc5-exch01.marvell.com ([199.233.59.181]) by mx0a-0016f401.pphosted.com (PPS) with ESMTPS id 3py3tk2em7-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Fri, 14 Apr 2023 10:47:45 -0700 Received: from DC5-EXCH01.marvell.com (10.69.176.38) by DC5-EXCH01.marvell.com (10.69.176.38) with Microsoft SMTP Server (TLS) id 15.0.1497.48; Fri, 14 Apr 2023 10:47:43 -0700 Received: from maili.marvell.com (10.69.176.80) by DC5-EXCH01.marvell.com (10.69.176.38) with Microsoft SMTP Server id 15.0.1497.48 via Frontend Transport; Fri, 14 Apr 2023 10:47:43 -0700 Received: from BG-LT92004.corp.innovium.com (unknown [10.28.161.183]) by maili.marvell.com (Postfix) with ESMTP id 0FD543F7080; Fri, 14 Apr 2023 10:47:38 -0700 (PDT) From: Anoob Joseph To: Thomas Monjalon , Akhil Goyal , Jerin Jacob , Konstantin Ananyev , Bernard Iremonger CC: Volodymyr Fialko , Hemant Agrawal , =?utf-8?q?Mattias_R=C3=B6nnblom?= , Kiran Kumar K , , Olivier Matz Subject: [PATCH v2 17/22] test/pdcp: add timer expiry cases Date: Fri, 14 Apr 2023 23:15:07 +0530 Message-ID: <20230414174512.642-18-anoobj@marvell.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230414174512.642-1-anoobj@marvell.com> References: 
<20221222092522.1628-1-anoobj@marvell.com> <20230414174512.642-1-anoobj@marvell.com> MIME-Version: 1.0 X-Proofpoint-GUID: 9XGhETLlQ12ZrQKOPEdTQONvQD_KGUuG X-Proofpoint-ORIG-GUID: 9XGhETLlQ12ZrQKOPEdTQONvQD_KGUuG X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.254,Aquarius:18.0.942,Hydra:6.0.573,FMLib:17.11.170.22 definitions=2023-04-14_10,2023-04-14_01,2023-02-09_01 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org From: Volodymyr Fialko Add test cases for handling the expiry with rte_timer and rte_event_timer. Signed-off-by: Anoob Joseph Signed-off-by: Volodymyr Fialko --- app/test/test_pdcp.c | 325 +++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 325 insertions(+) diff --git a/app/test/test_pdcp.c b/app/test/test_pdcp.c index c96ddc4f53..a0e982777d 100644 --- a/app/test/test_pdcp.c +++ b/app/test/test_pdcp.c @@ -3,15 +3,22 @@ */ #include +#include +#include #include #include #include +#include #include "test.h" #include "test_cryptodev.h" #include "test_cryptodev_security_pdcp_test_vectors.h" +#define NSECPERSEC 1E9 #define NB_DESC 1024 +#define TIMER_ADAPTER_ID 0 +#define TEST_EV_QUEUE_ID 0 +#define TEST_EV_PORT_ID 0 #define CDEV_INVALID_ID UINT8_MAX #define NB_TESTS RTE_DIM(pdcp_test_params) @@ -32,10 +39,19 @@ struct pdcp_testsuite_params { struct rte_mempool *cop_pool; struct rte_mempool *sess_pool; bool cdevs_used[RTE_CRYPTO_MAX_DEVS]; + int evdev; + struct rte_event_timer_adapter *timdev; + bool timer_is_running; + uint64_t min_resolution_ns; }; static struct pdcp_testsuite_params testsuite_params; +struct test_rte_timer_args { + int status; + struct rte_pdcp_entity *pdcp_entity; +}; + struct pdcp_test_conf { struct rte_pdcp_entity_conf entity; struct rte_crypto_sym_xform c_xfrm; @@ -99,6 +115,30 @@ run_test_with_all_known_vec_until_first_pass(const void *args) return run_test_foreach_known_vec(test, true); } +static void +pdcp_timer_start_cb(void *timer, void *args) +{ + bool *is_timer_running = timer; + + RTE_SET_USED(args); + *is_timer_running = true; +} + +static void +pdcp_timer_stop_cb(void *timer, void *args) +{ + bool *is_timer_running = timer; + + RTE_SET_USED(args); + *is_timer_running = false; +} + +static struct rte_pdcp_t_reordering t_reorder_timer = { + .timer = &testsuite_params.timer_is_running, + .start = pdcp_timer_start_cb, + .stop = pdcp_timer_stop_cb, +}; + static inline int pdcp_hdr_size_get(enum rte_security_pdcp_sn_size sn_size) { @@ -437,6 +477,7 @@ create_test_conf_from_index(const int index, struct pdcp_test_conf *conf) conf->entity.pdcp_xfrm.en_ordering = 0; conf->entity.pdcp_xfrm.remove_duplicates = 0; conf->entity.pdcp_xfrm.domain = pdcp_test_params[index].domain; + conf->entity.t_reordering = t_reorder_timer; if (pdcp_test_packet_direction[index] == PDCP_DIR_UPLINK) conf->entity.pdcp_xfrm.pkt_dir = RTE_SECURITY_PDCP_UPLINK; @@ -1018,6 +1059,8 @@ test_reorder_gap_fill(struct pdcp_test_conf *ul_conf) /* Check that packets in correct order */ ASSERT_TRUE_OR_GOTO(array_asc_sorted_check(out_mb, nb_success, sn_size), exit, "Error occurred during packet drain\n"); + ASSERT_TRUE_OR_GOTO(testsuite_params.timer_is_running == false, exit, + "Timer should be stopped after full drain\n"); ret = TEST_SUCCESS; exit: @@ -1093,6 +1136,170 @@ test_reorder_buffer_full_window_size_sn_12(const struct pdcp_test_conf *ul_conf) return ret; } +static void +event_timer_start_cb(void *timer, 
void *args) +{ + struct rte_event_timer *evtims = args; + int ret = 0; + + ret = rte_event_timer_arm_burst(timer, &evtims, 1); + assert(ret == 1); +} + +static int +test_expiry_with_event_timer(const struct pdcp_test_conf *ul_conf) +{ + struct rte_mbuf *m1 = NULL, *out_mb[1] = {0}; + uint16_t n = 0, nb_err = 0, nb_try = 5; + struct rte_pdcp_entity *pdcp_entity; + struct pdcp_test_conf dl_conf; + int ret = TEST_FAILED, nb_out; + struct rte_event event; + + const int start_count = 0; + struct rte_event_timer evtim = { + .ev.op = RTE_EVENT_OP_NEW, + .ev.queue_id = TEST_EV_QUEUE_ID, + .ev.sched_type = RTE_SCHED_TYPE_ATOMIC, + .ev.priority = RTE_EVENT_DEV_PRIORITY_NORMAL, + .ev.event_type = RTE_EVENT_TYPE_TIMER, + .state = RTE_EVENT_TIMER_NOT_ARMED, + .timeout_ticks = 1, + }; + + if (ul_conf->entity.pdcp_xfrm.pkt_dir == RTE_SECURITY_PDCP_DOWNLINK) + return TEST_SKIPPED; + + /* Create configuration for actual testing */ + uplink_to_downlink_convert(ul_conf, &dl_conf); + dl_conf.entity.count = start_count; + dl_conf.entity.t_reordering.args = &evtim; + dl_conf.entity.t_reordering.timer = testsuite_params.timdev; + dl_conf.entity.t_reordering.start = event_timer_start_cb; + + pdcp_entity = test_entity_create(&dl_conf, &ret); + if (pdcp_entity == NULL) + return ret; + + evtim.ev.event_ptr = pdcp_entity; + + /* Send packet with SN > RX_DELIV to create a gap */ + m1 = generate_packet_for_dl_with_sn(*ul_conf, start_count + 1); + ASSERT_TRUE_OR_GOTO(m1 != NULL, exit, "Could not allocate buffer for packet\n"); + + /* Buffered packets after insert [NULL, m1] */ + n = test_process_packets(pdcp_entity, dl_conf.entity.dev_id, &m1, 1, out_mb, &nb_err); + ASSERT_TRUE_OR_GOTO(nb_err == 0, exit, "Error occurred during packet buffering\n"); + ASSERT_TRUE_OR_GOTO(n == 0, exit, "Packet was not buffered as expected\n"); + + m1 = NULL; /* Packet was moved to PDCP lib */ + + n = rte_event_dequeue_burst(testsuite_params.evdev, TEST_EV_PORT_ID, &event, 1, 0); + while (n != 1) { + rte_delay_us(testsuite_params.min_resolution_ns / 1000); + n = rte_event_dequeue_burst(testsuite_params.evdev, TEST_EV_PORT_ID, &event, 1, 0); + ASSERT_TRUE_OR_GOTO(nb_try-- > 0, exit, + "Dequeued unexpected timer expiry event: %i\n", n); + } + + ASSERT_TRUE_OR_GOTO(event.event_type == RTE_EVENT_TYPE_TIMER, exit, "Unexpected event type\n"); + + /* Handle expiry event */ + n = rte_pdcp_t_reordering_expiry_handle(event.event_ptr, out_mb); + ASSERT_TRUE_OR_GOTO(n == 1, exit, "Unexpected number of expired packets :%i\n", n); + + ret = TEST_SUCCESS; +exit: + rte_pktmbuf_free(m1); + rte_pktmbuf_free_bulk(out_mb, n); + nb_out = rte_pdcp_entity_release(pdcp_entity, out_mb); + rte_pktmbuf_free_bulk(out_mb, nb_out); + return ret; +} + +static void +test_rte_timer_expiry_handle(struct rte_timer *tim, void *arg) +{ + struct test_rte_timer_args *tim_data = arg; + struct rte_mbuf *out_mb[1] = {0}; + uint16_t n; + + RTE_SET_USED(tim); + + n = rte_pdcp_t_reordering_expiry_handle(tim_data->pdcp_entity, out_mb); + rte_pktmbuf_free_bulk(out_mb, n); + + tim_data->status = n == 1 ? 
n : -1; +} + +static void +test_rte_timer_start_cb(void *timer, void *args) +{ + rte_timer_reset_sync(timer, 1, SINGLE, rte_lcore_id(), test_rte_timer_expiry_handle, args); +} + +static int +test_expiry_with_rte_timer(const struct pdcp_test_conf *ul_conf) +{ + struct rte_mbuf *m1 = NULL, *out_mb[1] = {0}; + uint16_t n = 0, nb_err = 0, nb_try = 5; + struct test_rte_timer_args timer_args; + struct rte_pdcp_entity *pdcp_entity; + struct pdcp_test_conf dl_conf; + int ret = TEST_FAILED, nb_out; + struct rte_timer timer = {0}; + + const int start_count = 0; + + if (ul_conf->entity.pdcp_xfrm.pkt_dir == RTE_SECURITY_PDCP_DOWNLINK) + return TEST_SKIPPED; + + /* Set up a timer */ + rte_timer_init(&timer); + + /* Create configuration for actual testing */ + uplink_to_downlink_convert(ul_conf, &dl_conf); + dl_conf.entity.count = start_count; + dl_conf.entity.t_reordering.args = &timer_args; + dl_conf.entity.t_reordering.timer = &timer; + dl_conf.entity.t_reordering.start = test_rte_timer_start_cb; + + pdcp_entity = test_entity_create(&dl_conf, &ret); + if (pdcp_entity == NULL) + return ret; + + timer_args.status = 0; + timer_args.pdcp_entity = pdcp_entity; + + /* Send packet with SN > RX_DELIV to create a gap */ + m1 = generate_packet_for_dl_with_sn(*ul_conf, start_count + 1); + ASSERT_TRUE_OR_GOTO(m1 != NULL, exit, "Could not allocate buffer for packet\n"); + + /* Buffered packets after insert [NULL, m1] */ + n = test_process_packets(pdcp_entity, dl_conf.entity.dev_id, &m1, 1, out_mb, &nb_err); + ASSERT_TRUE_OR_GOTO(nb_err == 0, exit, "Error occurred during packet buffering\n"); + ASSERT_TRUE_OR_GOTO(n == 0, exit, "Packet was not buffered as expected\n"); + + m1 = NULL; /* Packet was moved to PDCP lib */ + + /* Verify that expire was handled correctly */ + rte_timer_manage(); + while (timer_args.status != 1) { + rte_delay_us(1); + rte_timer_manage(); + ASSERT_TRUE_OR_GOTO(nb_try-- > 0, exit, "Bad expire handle status %i\n", + timer_args.status); + } + + ret = TEST_SUCCESS; +exit: + rte_pktmbuf_free(m1); + rte_pktmbuf_free_bulk(out_mb, n); + nb_out = rte_pdcp_entity_release(pdcp_entity, out_mb); + rte_pktmbuf_free_bulk(out_mb, nb_out); + return ret; +} + static int test_combined(struct pdcp_test_conf *ul_conf) { @@ -1115,6 +1322,116 @@ test_combined(struct pdcp_test_conf *ul_conf) return ret; } +static inline void +eventdev_conf_default_set(struct rte_event_dev_config *dev_conf, struct rte_event_dev_info *info) +{ + memset(dev_conf, 0, sizeof(struct rte_event_dev_config)); + dev_conf->dequeue_timeout_ns = info->min_dequeue_timeout_ns; + dev_conf->nb_event_ports = 1; + dev_conf->nb_event_queues = 1; + dev_conf->nb_event_queue_flows = info->max_event_queue_flows; + dev_conf->nb_event_port_dequeue_depth = info->max_event_port_dequeue_depth; + dev_conf->nb_event_port_enqueue_depth = info->max_event_port_enqueue_depth; + dev_conf->nb_event_port_enqueue_depth = info->max_event_port_enqueue_depth; + dev_conf->nb_events_limit = info->max_num_events; +} + +static inline int +eventdev_setup(void) +{ + struct rte_event_dev_config dev_conf; + struct rte_event_dev_info info; + int ret, evdev = 0; + + if (!rte_event_dev_count()) + return TEST_SKIPPED; + + ret = rte_event_dev_info_get(evdev, &info); + TEST_ASSERT_SUCCESS(ret, "Failed to get event dev info"); + TEST_ASSERT(info.max_num_events < 0 || info.max_num_events >= 1, + "ERROR max_num_events=%d < max_events=%d", info.max_num_events, 1); + + eventdev_conf_default_set(&dev_conf, &info); + ret = rte_event_dev_configure(evdev, &dev_conf); + 
TEST_ASSERT_SUCCESS(ret, "Failed to configure eventdev"); + + ret = rte_event_queue_setup(evdev, TEST_EV_QUEUE_ID, NULL); + TEST_ASSERT_SUCCESS(ret, "Failed to setup queue=%d", TEST_EV_QUEUE_ID); + + /* Configure event port */ + ret = rte_event_port_setup(evdev, TEST_EV_PORT_ID, NULL); + TEST_ASSERT_SUCCESS(ret, "Failed to setup port=%d", TEST_EV_PORT_ID); + ret = rte_event_port_link(evdev, TEST_EV_PORT_ID, NULL, NULL, 0); + TEST_ASSERT(ret >= 0, "Failed to link all queues port=%d", TEST_EV_PORT_ID); + + ret = rte_event_dev_start(evdev); + TEST_ASSERT_SUCCESS(ret, "Failed to start device"); + + testsuite_params.evdev = evdev; + + return TEST_SUCCESS; +} + +static int +event_timer_setup(void) +{ + struct rte_event_timer_adapter_info info; + struct rte_event_timer_adapter *timdev; + uint32_t caps = 0; + + struct rte_event_timer_adapter_conf config = { + .event_dev_id = testsuite_params.evdev, + .timer_adapter_id = TIMER_ADAPTER_ID, + .timer_tick_ns = NSECPERSEC, + .max_tmo_ns = 10 * NSECPERSEC, + .nb_timers = 10, + .flags = 0, + }; + + TEST_ASSERT_SUCCESS(rte_event_timer_adapter_caps_get(testsuite_params.evdev, &caps), + "Failed to get adapter capabilities"); + + if (!(caps & RTE_EVENT_TIMER_ADAPTER_CAP_INTERNAL_PORT)) + return TEST_SKIPPED; + + timdev = rte_event_timer_adapter_create(&config); + + TEST_ASSERT_NOT_NULL(timdev, "Failed to create event timer ring"); + + testsuite_params.timdev = timdev; + + TEST_ASSERT_EQUAL(rte_event_timer_adapter_start(timdev), 0, + "Failed to start event timer adapter"); + + rte_event_timer_adapter_get_info(timdev, &info); + testsuite_params.min_resolution_ns = info.min_resolution_ns; + + return TEST_SUCCESS; +} + +static int +ut_setup_pdcp_event_timer(void) +{ + int ret; + ret = eventdev_setup(); + if (ret) + return ret; + return event_timer_setup(); +} + +static void +ut_teardown_pdcp_event_timer(void) +{ + struct rte_event_timer_adapter *timdev = testsuite_params.timdev; + int evdev = testsuite_params.evdev; + + rte_event_dev_stop(evdev); + rte_event_dev_close(evdev); + + rte_event_timer_adapter_stop(timdev); + rte_event_timer_adapter_free(timdev); +} + static int run_test_for_one_known_vec(const void *arg) { @@ -1159,6 +1476,14 @@ static struct unit_test_suite reorder_test_cases = { ut_setup_pdcp, ut_teardown_pdcp, run_test_with_all_known_vec_until_first_pass, test_reorder_buffer_full_window_size_sn_12), + TEST_CASE_NAMED_WITH_DATA("test_expire_with_event_timer", + ut_setup_pdcp_event_timer, ut_teardown_pdcp_event_timer, + run_test_with_all_known_vec_until_first_pass, + test_expiry_with_event_timer), + TEST_CASE_NAMED_WITH_DATA("test_expire_with_rte_timer", + ut_setup_pdcp, ut_teardown_pdcp, + run_test_with_all_known_vec_until_first_pass, + test_expiry_with_rte_timer), TEST_CASES_END() /**< NULL terminate unit test array */ } }; From patchwork Fri Apr 14 17:45:08 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Anoob Joseph X-Patchwork-Id: 126118 X-Patchwork-Delegate: gakhil@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id E87B142943; Fri, 14 Apr 2023 19:47:59 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id AC6D942D48; Fri, 14 Apr 2023 19:47:53 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by mails.dpdk.org 
(Postfix) with ESMTP id D099F42D43 for ; Fri, 14 Apr 2023 19:47:52 +0200 (CEST) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id 33EGNfcJ012935; Fri, 14 Apr 2023 10:47:51 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0220; bh=phZCDUZCFSui8XKp0E2yByPqaOlyeLr2kz2UB8HC6U4=; b=EGwISUofFKz3/lKemHz/VOTUaPmANIY31X/iBBXvKXVzH2ognWvrF2zXrg36VOUs0Mfx 14IhwKQLo44oLsK1FbL2BcJmABU1yZSxi695yPkhoeLIEzQ22zJJRA9mDze5gKd+it9s RR+KcORuycHGfWJjXY0sp7FW+xQXK0fbyjJnvH6VWE4hLQZI5qmZCh/wvACN/XogD6EO SL3nkKuV13Ds7jtqwQz5Pl5kBUoeXa0rm4USojnnyU1g1t6kOtsGGXZj0UQftGhUSQlK G1beqsMQRWb4OcvKdOZsceh3Nvz/Ih3RGGmCzgHCQDfftuEEOAavTMPXTb9Wt+EkvnJh kQ== Received: from dc5-exch02.marvell.com ([199.233.59.182]) by mx0b-0016f401.pphosted.com (PPS) with ESMTPS id 3py646s6vh-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Fri, 14 Apr 2023 10:47:51 -0700 Received: from DC5-EXCH01.marvell.com (10.69.176.38) by DC5-EXCH02.marvell.com (10.69.176.39) with Microsoft SMTP Server (TLS) id 15.0.1497.48; Fri, 14 Apr 2023 10:47:49 -0700 Received: from maili.marvell.com (10.69.176.80) by DC5-EXCH01.marvell.com (10.69.176.38) with Microsoft SMTP Server id 15.0.1497.48 via Frontend Transport; Fri, 14 Apr 2023 10:47:49 -0700 Received: from BG-LT92004.corp.innovium.com (unknown [10.28.161.183]) by maili.marvell.com (Postfix) with ESMTP id 72BE43F707F; Fri, 14 Apr 2023 10:47:43 -0700 (PDT) From: Anoob Joseph To: Thomas Monjalon , Akhil Goyal , Jerin Jacob , Konstantin Ananyev , Bernard Iremonger CC: Volodymyr Fialko , Hemant Agrawal , =?utf-8?q?Mattias_R=C3=B6nnblom?= , Kiran Kumar K , , Olivier Matz Subject: [PATCH v2 18/22] test/pdcp: add timer restart case Date: Fri, 14 Apr 2023 23:15:08 +0530 Message-ID: <20230414174512.642-19-anoobj@marvell.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230414174512.642-1-anoobj@marvell.com> References: <20221222092522.1628-1-anoobj@marvell.com> <20230414174512.642-1-anoobj@marvell.com> MIME-Version: 1.0 X-Proofpoint-GUID: diBaiJtOwkIa7O0jMViwKXZdkSyObM6s X-Proofpoint-ORIG-GUID: diBaiJtOwkIa7O0jMViwKXZdkSyObM6s X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.254,Aquarius:18.0.942,Hydra:6.0.573,FMLib:17.11.170.22 definitions=2023-04-14_10,2023-04-14_01,2023-02-09_01 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org From: Volodymyr Fialko Add test to cover the case when t-reordering timer should be restarted on the same packet. 
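The scenario being tested can be summarised with a small standalone model of the t-Reordering state machine. The sketch below is illustrative only: it uses no DPDK API, all names are made up for the example, and it follows the RX_DELIV/RX_NEXT/RX_REORD handling of TS 38.323 in simplified form. It shows why filling the first of two gaps delivers the in-order packets but must restart the timer for the gap that remains.

/*
 * Simplified model of PDCP downlink reordering (illustrative only, not the
 * DPDK implementation). COUNT values are kept small and the window is a
 * plain array so the example stays self-contained.
 */
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

#define WINDOW 8

struct dl_state {
	unsigned int rx_deliv;  /* first COUNT not yet delivered to upper layers */
	unsigned int rx_next;   /* next expected COUNT */
	unsigned int rx_reord;  /* COUNT guarded by the running t-Reordering timer */
	bool timer_running;
	bool buffered[WINDOW];  /* stand-in for the reorder buffer */
};

static void
rx_pdu(struct dl_state *s, unsigned int count)
{
	s->buffered[count] = true;
	if (count >= s->rx_next)
		s->rx_next = count + 1;

	/* Deliver consecutive packets starting from RX_DELIV */
	while (s->rx_deliv < s->rx_next && s->buffered[s->rx_deliv]) {
		printf("deliver COUNT=%u\n", s->rx_deliv);
		s->buffered[s->rx_deliv] = false;
		s->rx_deliv++;
	}

	/* Stop the timer once the guarded gap is gone ... */
	if (s->timer_running && s->rx_deliv >= s->rx_reord)
		s->timer_running = false;
	/* ... and (re)start it if another gap is still outstanding */
	if (!s->timer_running && s->rx_deliv < s->rx_next) {
		s->rx_reord = s->rx_next;
		s->timer_running = true;
		printf("t-Reordering (re)started, RX_REORD=%u\n", s->rx_reord);
	}
}

int
main(void)
{
	struct dl_state s;

	memset(&s, 0, sizeof(s));
	rx_pdu(&s, 1); /* gap at COUNT 0: timer starts, RX_REORD = 2 */
	rx_pdu(&s, 3); /* second gap at COUNT 2: timer keeps running */
	rx_pdu(&s, 0); /* first gap filled: COUNT 0 and 1 delivered,
			* timer restarts for the gap at COUNT 2 */
	return 0;
}

Running the model delivers COUNT 0 and 1 and then restarts the timer with RX_REORD = 4, which mirrors what the new test asserts through testsuite_params.timer_is_running after the partial drain.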
Signed-off-by: Anoob Joseph Signed-off-by: Volodymyr Fialko --- app/test/test_pdcp.c | 67 ++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 67 insertions(+) diff --git a/app/test/test_pdcp.c b/app/test/test_pdcp.c index a0e982777d..de3375bb22 100644 --- a/app/test/test_pdcp.c +++ b/app/test/test_pdcp.c @@ -1072,6 +1072,70 @@ test_reorder_gap_fill(struct pdcp_test_conf *ul_conf) return ret; } +static int +test_reorder_gap_in_reorder_buffer(const struct pdcp_test_conf *ul_conf) +{ + struct rte_mbuf *m = NULL, *out_mb[2] = {0}; + uint16_t nb_success = 0, nb_err = 0; + struct rte_pdcp_entity *pdcp_entity; + int ret = TEST_FAILED, nb_out, i; + struct pdcp_test_conf dl_conf; + uint8_t cdev_id; + + const int start_count = 0; + + if (ul_conf->entity.pdcp_xfrm.pkt_dir == RTE_SECURITY_PDCP_DOWNLINK) + return TEST_SKIPPED; + + /* Create configuration for actual testing */ + uplink_to_downlink_convert(ul_conf, &dl_conf); + dl_conf.entity.count = start_count; + pdcp_entity = test_entity_create(&dl_conf, &ret); + if (pdcp_entity == NULL) + return ret; + + const uint32_t sn_size = dl_conf.entity.pdcp_xfrm.sn_size; + cdev_id = dl_conf.entity.dev_id; + + /* Create two gaps [NULL, m1, NULL, m3]*/ + for (i = 0; i < 2; i++) { + m = generate_packet_for_dl_with_sn(*ul_conf, start_count + 2 * i + 1); + ASSERT_TRUE_OR_GOTO(m != NULL, exit, "Could not allocate buffer for packet\n"); + nb_success = test_process_packets(pdcp_entity, cdev_id, &m, 1, out_mb, &nb_err); + ASSERT_TRUE_OR_GOTO(nb_err == 0, exit, "Error occurred during packet process\n"); + ASSERT_TRUE_OR_GOTO(nb_success == 0, exit, "Packet was not buffered as expected\n"); + m = NULL; /* Packet was moved to PDCP lib */ + } + + /* Generate packet to fill the first gap */ + m = generate_packet_for_dl_with_sn(*ul_conf, start_count); + ASSERT_TRUE_OR_GOTO(m != NULL, exit, "Could not allocate buffer for packet\n"); + + /* + * Buffered packets after insert [m0, m1, NULL, m3] + * Only first gap should be filled, timer should be restarted for second gap + */ + nb_success = test_process_packets(pdcp_entity, cdev_id, &m, 1, out_mb, &nb_err); + ASSERT_TRUE_OR_GOTO(nb_err == 0, exit, "Error occurred during packet process\n"); + ASSERT_TRUE_OR_GOTO(nb_success == 2, exit, + "Packet count mismatch (received: %i, expected: 2)\n", nb_success); + m = NULL; + /* Check that packets in correct order */ + ASSERT_TRUE_OR_GOTO(array_asc_sorted_check(out_mb, nb_success, sn_size), + exit, "Error occurred during packet drain\n"); + ASSERT_TRUE_OR_GOTO(testsuite_params.timer_is_running == true, exit, + "Timer should be restarted after partial drain"); + + + ret = TEST_SUCCESS; +exit: + rte_pktmbuf_free(m); + rte_pktmbuf_free_bulk(out_mb, nb_success); + nb_out = rte_pdcp_entity_release(pdcp_entity, out_mb); + rte_pktmbuf_free_bulk(out_mb, nb_out); + return ret; +} + static int test_reorder_buffer_full_window_size_sn_12(const struct pdcp_test_conf *ul_conf) { @@ -1472,6 +1536,9 @@ static struct unit_test_suite reorder_test_cases = { TEST_CASE_NAMED_WITH_DATA("test_reorder_gap_fill", ut_setup_pdcp, ut_teardown_pdcp, run_test_with_all_known_vec, test_reorder_gap_fill), + TEST_CASE_NAMED_WITH_DATA("test_reorder_gap_in_reorder_buffer", + ut_setup_pdcp, ut_teardown_pdcp, + run_test_with_all_known_vec, test_reorder_gap_in_reorder_buffer), TEST_CASE_NAMED_WITH_DATA("test_reorder_buffer_full_window_size_sn_12", ut_setup_pdcp, ut_teardown_pdcp, run_test_with_all_known_vec_until_first_pass, From patchwork Fri Apr 14 17:45:09 2023 Content-Type: text/plain; charset="utf-8" 
MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Anoob Joseph X-Patchwork-Id: 126119 X-Patchwork-Delegate: gakhil@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 6564142943; Fri, 14 Apr 2023 19:48:06 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 070B542D1D; Fri, 14 Apr 2023 19:48:00 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by mails.dpdk.org (Postfix) with ESMTP id 7175642D49 for ; Fri, 14 Apr 2023 19:47:58 +0200 (CEST) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id 33EGNfCE012938; Fri, 14 Apr 2023 10:47:57 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0220; bh=pXbZms7xgRLAIgPPh6PbRMx83qVMMm64ZKMtL8Z0DS0=; b=N3fybcoQwc+hqUWts/T7gHOI/JkHhGiaA+ijhjn+gD6wfex0tDat3TuIdnFBQ8LHJV8G z3bhp3st6enX/yw20+CPsTyQAseZVqcJNwu8X2IGydCXZxRQi3PGwfh3LCrOU8+grccd EghK7YfMBiZmcZVPWLbJibLmasFCAXmSbHij818XQv+ZIRskrev2XlTr37rtbZCK5dez OotunohUUROSnQiw1aOCtOQ6nyFYbLUewojgFoo0Lf36T8iabz0ivlM3ePAbcX9eNq/f v89fCK/r2pVbwIXdSa1JoViyjqtceObwQdcbzNcogJTG7QBuzFvv6aKr+VnfuS4cTBmC jw== Received: from dc5-exch02.marvell.com ([199.233.59.182]) by mx0b-0016f401.pphosted.com (PPS) with ESMTPS id 3py646s6wb-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Fri, 14 Apr 2023 10:47:57 -0700 Received: from DC5-EXCH01.marvell.com (10.69.176.38) by DC5-EXCH02.marvell.com (10.69.176.39) with Microsoft SMTP Server (TLS) id 15.0.1497.48; Fri, 14 Apr 2023 10:47:54 -0700 Received: from maili.marvell.com (10.69.176.80) by DC5-EXCH01.marvell.com (10.69.176.38) with Microsoft SMTP Server id 15.0.1497.48 via Frontend Transport; Fri, 14 Apr 2023 10:47:54 -0700 Received: from BG-LT92004.corp.innovium.com (unknown [10.28.161.183]) by maili.marvell.com (Postfix) with ESMTP id 20EAF3F7080; Fri, 14 Apr 2023 10:47:49 -0700 (PDT) From: Anoob Joseph To: Thomas Monjalon , Akhil Goyal , Jerin Jacob , Konstantin Ananyev , Bernard Iremonger CC: Volodymyr Fialko , Hemant Agrawal , =?utf-8?q?Mattias_R=C3=B6nnblom?= , Kiran Kumar K , , Olivier Matz Subject: [PATCH v2 19/22] pdcp: add support for status report Date: Fri, 14 Apr 2023 23:15:09 +0530 Message-ID: <20230414174512.642-20-anoobj@marvell.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230414174512.642-1-anoobj@marvell.com> References: <20221222092522.1628-1-anoobj@marvell.com> <20230414174512.642-1-anoobj@marvell.com> MIME-Version: 1.0 X-Proofpoint-GUID: YZGjQuM20JPrWg7w4E2pQJQs5UYv4Mvi X-Proofpoint-ORIG-GUID: YZGjQuM20JPrWg7w4E2pQJQs5UYv4Mvi X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.254,Aquarius:18.0.942,Hydra:6.0.573,FMLib:17.11.170.22 definitions=2023-04-14_10,2023-04-14_01,2023-02-09_01 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org From: Volodymyr Fialko Implement status report generation for PDCP entity. 
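The report format being generated can be sketched independently of the library code. FMC carries the first missing COUNT (RX_DELIV), and the optional bitmap that follows has bit k set when the SDU with COUNT equal to FMC + 1 + k has been received; the bitmap only needs to extend as far as the last out-of-order SDU received, and the total PDU size is capped at RTE_PDCP_CTRL_PDU_SIZE_MAX. The snippet below is an illustrative model only (plain C, hypothetical names, no mbuf handling); its bit numbering mirrors the bitmask_set_bit() helper used by the tests later in this series, while the exact on-the-wire bit order is defined by TS 38.323.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define BITMAP_BYTES 32

/* Host-side model of a status report: FMC plus a bitmap of later COUNTs */
struct status_report_model {
	uint32_t fmc;                 /* first missing COUNT (== RX_DELIV) */
	uint8_t bitmap[BITMAP_BYTES]; /* COUNTs received beyond FMC */
};

static void
report_mark_received(struct status_report_model *r, uint32_t count)
{
	uint32_t bit = count - r->fmc - 1; /* bit 0 <=> COUNT == FMC + 1 */

	r->bitmap[bit / 8] |= (uint8_t)(1u << (bit % 8));
}

static uint32_t
report_bitmap_len(const struct status_report_model *r)
{
	uint32_t i;

	/* Bytes past the last set bit carry no information and are not sent */
	for (i = BITMAP_BYTES; i != 0; i--)
		if (r->bitmap[i - 1])
			return i;
	return 0;
}

int
main(void)
{
	struct status_report_model r;

	memset(&r, 0, sizeof(r));
	r.fmc = 0;                   /* COUNT 0 is the first missing SDU */
	report_mark_received(&r, 2); /* COUNT 2 arrived out of order */
	report_mark_received(&r, 9); /* COUNT 9 arrived out of order */

	printf("fmc=%u len=%u bitmap[0]=0x%02x bitmap[1]=0x%02x\n",
	       (unsigned int)r.fmc, (unsigned int)report_bitmap_len(&r),
	       r.bitmap[0], r.bitmap[1]);
	/* Prints: fmc=0 len=2 bitmap[0]=0x02 bitmap[1]=0x01 */
	return 0;
}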
Signed-off-by: Anoob Joseph Signed-off-by: Volodymyr Fialko --- lib/pdcp/pdcp_cnt.c | 158 ++++++++++++++++++++++++++++++++++++--- lib/pdcp/pdcp_cnt.h | 11 ++- lib/pdcp/pdcp_ctrl_pdu.c | 34 ++++++++- lib/pdcp/pdcp_ctrl_pdu.h | 3 +- lib/pdcp/pdcp_entity.h | 2 + lib/pdcp/pdcp_process.c | 21 +++++- lib/pdcp/rte_pdcp.c | 32 ++++++-- 7 files changed, 233 insertions(+), 28 deletions(-) diff --git a/lib/pdcp/pdcp_cnt.c b/lib/pdcp/pdcp_cnt.c index c9b952184b..af027b00d3 100644 --- a/lib/pdcp/pdcp_cnt.c +++ b/lib/pdcp/pdcp_cnt.c @@ -2,28 +2,164 @@ * Copyright(C) 2023 Marvell. */ +#include #include #include "pdcp_cnt.h" +#include "pdcp_ctrl_pdu.h" #include "pdcp_entity.h" +#define SLAB_BYTE_SIZE (RTE_BITMAP_SLAB_BIT_SIZE / 8) + +uint32_t +pdcp_cnt_bitmap_get_memory_footprint(const struct rte_pdcp_entity_conf *conf) +{ + uint32_t n_bits = pdcp_window_size_get(conf->pdcp_xfrm.sn_size); + + return rte_bitmap_get_memory_footprint(n_bits); +} + int -pdcp_cnt_ring_create(struct rte_pdcp_entity *en, const struct rte_pdcp_entity_conf *conf) +pdcp_cnt_bitmap_create(struct entity_priv_dl_part *dl, void *bitmap_mem, uint32_t window_size) { - struct entity_priv_dl_part *en_priv_dl; - uint32_t window_sz; + uint32_t mem_size = rte_bitmap_get_memory_footprint(window_size); - if (en == NULL || conf == NULL) + dl->bitmap.bmp = rte_bitmap_init(window_size, bitmap_mem, mem_size); + if (dl->bitmap.bmp == NULL) return -EINVAL; - if (conf->pdcp_xfrm.pkt_dir == RTE_SECURITY_PDCP_UPLINK) - return 0; + dl->bitmap.size = window_size; - en_priv_dl = entity_dl_part_get(en); - window_sz = pdcp_window_size_get(conf->pdcp_xfrm.sn_size); + return 0; +} - RTE_SET_USED(window_sz); - RTE_SET_USED(en_priv_dl); +void +pdcp_cnt_bitmap_set(struct pdcp_cnt_bitmap bitmap, uint32_t count) +{ + rte_bitmap_set(bitmap.bmp, count % bitmap.size); +} - return 0; +bool +pdcp_cnt_bitmap_is_set(struct pdcp_cnt_bitmap bitmap, uint32_t count) +{ + return rte_bitmap_get(bitmap.bmp, count % bitmap.size); +} + +void +pdcp_cnt_bitmap_range_clear(struct pdcp_cnt_bitmap bitmap, uint32_t start, uint32_t stop) +{ + uint32_t i; + + for (i = start; i < stop; i++) + rte_bitmap_clear(bitmap.bmp, i % bitmap.size); +} + +uint16_t +pdcp_cnt_get_bitmap_size(uint32_t pending_bytes) +{ + /* + * Round up bitmap size to slab size to operate only on slabs sizes, instead of individual + * bytes + */ + return RTE_ALIGN_MUL_CEIL(pending_bytes, SLAB_BYTE_SIZE); +} + +static __rte_always_inline uint64_t +leftover_get(uint64_t slab, uint32_t shift, uint64_t mask) +{ + return (slab & mask) << shift; +} + +void +pdcp_cnt_report_fill(struct pdcp_cnt_bitmap bitmap, struct entity_state state, + uint8_t *data, uint16_t data_len) +{ + uint64_t slab = 0, next_slab = 0, leftover; + uint32_t zeros, report_len, diff; + uint32_t slab_id, next_slab_id; + uint32_t pos = 0, next_pos = 0; + + const uint32_t start_count = state.rx_deliv + 1; + const uint32_t nb_slabs = bitmap.size / RTE_BITMAP_SLAB_BIT_SIZE; + const uint32_t nb_data_slabs = data_len / SLAB_BYTE_SIZE; + const uint32_t start_slab_id = start_count / RTE_BITMAP_SLAB_BIT_SIZE; + const uint32_t stop_slab_id = (start_slab_id + nb_data_slabs) % nb_slabs; + const uint32_t shift = start_count % RTE_BITMAP_SLAB_BIT_SIZE; + const uint32_t leftover_shift = shift ? RTE_BITMAP_SLAB_BIT_SIZE - shift : 0; + const uint8_t *data_end = RTE_PTR_ADD(data, data_len + SLAB_BYTE_SIZE); + + /* NOTE: Mask required to workaround case - when shift is not needed */ + const uint64_t leftover_mask = shift ? 
~0 : 0; + + /* NOTE: implement scan init at to set custom position */ + __rte_bitmap_scan_init(bitmap.bmp); + while (true) { + assert(rte_bitmap_scan(bitmap.bmp, &pos, &slab) == 1); + slab_id = pos / RTE_BITMAP_SLAB_BIT_SIZE; + if (slab_id >= start_slab_id) + break; + } + + report_len = nb_data_slabs; + + if (slab_id > start_slab_id) { + /* Zero slabs at beginning */ + zeros = (slab_id - start_slab_id - 1) * SLAB_BYTE_SIZE; + memset(data, 0, zeros); + data = RTE_PTR_ADD(data, zeros); + leftover = leftover_get(slab, leftover_shift, leftover_mask); + memcpy(data, &leftover, SLAB_BYTE_SIZE); + data = RTE_PTR_ADD(data, SLAB_BYTE_SIZE); + report_len -= (slab_id - start_slab_id); + } + + while (report_len) { + rte_bitmap_scan(bitmap.bmp, &next_pos, &next_slab); + next_slab_id = next_pos / RTE_BITMAP_SLAB_BIT_SIZE; + diff = (next_slab_id + nb_slabs - slab_id) % nb_slabs; + + /* If next_slab_id == slab_id - overlap */ + diff += !(next_slab_id ^ slab_id) * nb_slabs; + + /* Size check - next slab is outsize of size range */ + if (diff > report_len) { + next_slab = 0; + next_slab_id = stop_slab_id; + diff = report_len; + } + + report_len -= diff; + + /* Calculate gap between slabs, taking wrap around into account */ + zeros = (next_slab_id + nb_slabs - slab_id - 1) % nb_slabs; + if (zeros) { + /* Non continues slabs, align them individually */ + slab >>= shift; + memcpy(data, &slab, SLAB_BYTE_SIZE); + data = RTE_PTR_ADD(data, SLAB_BYTE_SIZE); + + /* Fill zeros between slabs */ + zeros = (zeros - 1) * SLAB_BYTE_SIZE; + memset(data, 0, zeros); + data = RTE_PTR_ADD(data, zeros); + + /* Align beginning of next slab */ + leftover = leftover_get(next_slab, leftover_shift, leftover_mask); + memcpy(data, &leftover, SLAB_BYTE_SIZE); + data = RTE_PTR_ADD(data, SLAB_BYTE_SIZE); + } else { + /* Continues slabs, combine them */ + uint64_t new_slab = (slab >> shift) | + leftover_get(next_slab, leftover_shift, leftover_mask); + memcpy(data, &new_slab, SLAB_BYTE_SIZE); + data = RTE_PTR_ADD(data, SLAB_BYTE_SIZE); + } + + slab = next_slab; + pos = next_pos; + slab_id = next_slab_id; + + }; + + assert(data < data_end); } diff --git a/lib/pdcp/pdcp_cnt.h b/lib/pdcp/pdcp_cnt.h index bbda478b55..5941b7a406 100644 --- a/lib/pdcp/pdcp_cnt.h +++ b/lib/pdcp/pdcp_cnt.h @@ -9,6 +9,15 @@ #include "pdcp_entity.h" -int pdcp_cnt_ring_create(struct rte_pdcp_entity *en, const struct rte_pdcp_entity_conf *conf); +uint32_t pdcp_cnt_bitmap_get_memory_footprint(const struct rte_pdcp_entity_conf *conf); +int pdcp_cnt_bitmap_create(struct entity_priv_dl_part *dl, void *bitmap_mem, uint32_t window_size); + +void pdcp_cnt_bitmap_set(struct pdcp_cnt_bitmap bitmap, uint32_t count); +bool pdcp_cnt_bitmap_is_set(struct pdcp_cnt_bitmap bitmap, uint32_t count); +void pdcp_cnt_bitmap_range_clear(struct pdcp_cnt_bitmap bitmap, uint32_t start, uint32_t stop); + +uint16_t pdcp_cnt_get_bitmap_size(uint32_t pending_bytes); +void pdcp_cnt_report_fill(struct pdcp_cnt_bitmap bitmap, struct entity_state state, + uint8_t *data, uint16_t data_len); #endif /* PDCP_CNT_H */ diff --git a/lib/pdcp/pdcp_ctrl_pdu.c b/lib/pdcp/pdcp_ctrl_pdu.c index feb05fd863..e0ac2d3720 100644 --- a/lib/pdcp/pdcp_ctrl_pdu.c +++ b/lib/pdcp/pdcp_ctrl_pdu.c @@ -8,6 +8,14 @@ #include "pdcp_ctrl_pdu.h" #include "pdcp_entity.h" +#include "pdcp_cnt.h" + +static inline uint16_t +round_up_bits(uint32_t bits) +{ + /* round up to the next multiple of 8 */ + return RTE_ALIGN_MUL_CEIL(bits, 8) / 8; +} static __rte_always_inline void pdcp_hdr_fill(struct rte_pdcp_up_ctrl_pdu_hdr *pdu_hdr, 
uint32_t rx_deliv) @@ -19,11 +27,13 @@ pdcp_hdr_fill(struct rte_pdcp_up_ctrl_pdu_hdr *pdu_hdr, uint32_t rx_deliv) } int -pdcp_ctrl_pdu_status_gen(struct entity_priv *en_priv, struct rte_mbuf *m) +pdcp_ctrl_pdu_status_gen(struct entity_priv *en_priv, struct entity_priv_dl_part *dl, + struct rte_mbuf *m) { struct rte_pdcp_up_ctrl_pdu_hdr *pdu_hdr; - uint32_t rx_deliv; - int pdu_sz; + uint32_t rx_deliv, actual_sz; + uint16_t pdu_sz, bitmap_sz; + uint8_t *data; if (!en_priv->flags.is_status_report_required) return -EINVAL; @@ -42,5 +52,21 @@ pdcp_ctrl_pdu_status_gen(struct entity_priv *en_priv, struct rte_mbuf *m) return 0; } - return -ENOTSUP; + actual_sz = RTE_MIN(round_up_bits(en_priv->state.rx_next - rx_deliv - 1), + RTE_PDCP_CTRL_PDU_SIZE_MAX - pdu_sz); + bitmap_sz = pdcp_cnt_get_bitmap_size(actual_sz); + + data = (uint8_t *)rte_pktmbuf_append(m, pdu_sz + bitmap_sz); + if (data == NULL) + return -ENOMEM; + + m->pkt_len = pdu_sz + actual_sz; + m->data_len = pdu_sz + actual_sz; + + pdcp_hdr_fill((struct rte_pdcp_up_ctrl_pdu_hdr *)data, rx_deliv); + + data = RTE_PTR_ADD(data, pdu_sz); + pdcp_cnt_report_fill(dl->bitmap, en_priv->state, data, bitmap_sz); + + return 0; } diff --git a/lib/pdcp/pdcp_ctrl_pdu.h b/lib/pdcp/pdcp_ctrl_pdu.h index a2424fbd10..2a87928b88 100644 --- a/lib/pdcp/pdcp_ctrl_pdu.h +++ b/lib/pdcp/pdcp_ctrl_pdu.h @@ -10,6 +10,7 @@ #include "pdcp_entity.h" int -pdcp_ctrl_pdu_status_gen(struct entity_priv *en_priv, struct rte_mbuf *m); +pdcp_ctrl_pdu_status_gen(struct entity_priv *en_priv, struct entity_priv_dl_part *dl, + struct rte_mbuf *m); #endif /* PDCP_CTRL_PDU_H */ diff --git a/lib/pdcp/pdcp_entity.h b/lib/pdcp/pdcp_entity.h index ca98a1d0f7..8e1d6254dd 100644 --- a/lib/pdcp/pdcp_entity.h +++ b/lib/pdcp/pdcp_entity.h @@ -182,6 +182,8 @@ struct entity_priv_dl_part { struct pdcp_t_reordering t_reorder; /** Reorder packet buffer */ struct pdcp_reorder reorder; + /** Bitmap memory region */ + uint8_t bitmap_mem[0]; }; struct entity_priv_ul_part { diff --git a/lib/pdcp/pdcp_process.c b/lib/pdcp/pdcp_process.c index 91b87a2a81..cdadc9b6b8 100644 --- a/lib/pdcp/pdcp_process.c +++ b/lib/pdcp/pdcp_process.c @@ -9,6 +9,7 @@ #include #include +#include "pdcp_cnt.h" #include "pdcp_crypto.h" #include "pdcp_entity.h" #include "pdcp_process.h" @@ -809,6 +810,16 @@ pdcp_packet_strip(struct rte_mbuf *mb, const uint32_t hdr_trim_sz, const bool tr } } +static inline void +pdcp_rx_deliv_set(const struct rte_pdcp_entity *entity, uint32_t rx_deliv) +{ + struct entity_priv_dl_part *dl = entity_dl_part_get(entity); + struct entity_priv *en_priv = entity_priv_get(entity); + + pdcp_cnt_bitmap_range_clear(dl->bitmap, en_priv->state.rx_deliv, rx_deliv); + en_priv->state.rx_deliv = rx_deliv; +} + static inline int pdcp_post_process_update_entity_state(const struct rte_pdcp_entity *entity, const uint32_t count, struct rte_mbuf *mb, @@ -829,11 +840,15 @@ pdcp_post_process_update_entity_state(const struct rte_pdcp_entity *entity, if (count >= en_priv->state.rx_next) en_priv->state.rx_next = count + 1; + if (unlikely(pdcp_cnt_bitmap_is_set(dl->bitmap, count))) + return -EEXIST; + + pdcp_cnt_bitmap_set(dl->bitmap, count); pdcp_packet_strip(mb, hdr_trim_sz, trim_mac); if (en_priv->flags.is_out_of_order_delivery) { out_mb[0] = mb; - en_priv->state.rx_deliv = count + 1; + pdcp_rx_deliv_set(entity, count + 1); return 1; } @@ -860,7 +875,7 @@ pdcp_post_process_update_entity_state(const struct rte_pdcp_entity *entity, } /* Processed should never exceed the window size */ - en_priv->state.rx_deliv = count + 
processed; + pdcp_rx_deliv_set(entity, count + processed); } else { if (!reorder->is_active) @@ -1317,7 +1332,7 @@ rte_pdcp_t_reordering_expiry_handle(const struct rte_pdcp_entity *entity, struct * - update RX_DELIV to the COUNT value of the first PDCP SDU which has not been delivered * to upper layers, with COUNT value >= RX_REORD; */ - en_priv->state.rx_deliv = en_priv->state.rx_reord + nb_seq; + pdcp_rx_deliv_set(entity, en_priv->state.rx_reord + nb_seq); /* * - if RX_DELIV < RX_NEXT: diff --git a/lib/pdcp/rte_pdcp.c b/lib/pdcp/rte_pdcp.c index 755d592578..ce846d687e 100644 --- a/lib/pdcp/rte_pdcp.c +++ b/lib/pdcp/rte_pdcp.c @@ -12,6 +12,8 @@ #include "pdcp_entity.h" #include "pdcp_process.h" +static int bitmap_mem_offset; + static int pdcp_entity_size_get(const struct rte_pdcp_entity_conf *conf) { @@ -19,9 +21,12 @@ pdcp_entity_size_get(const struct rte_pdcp_entity_conf *conf) size = sizeof(struct rte_pdcp_entity) + sizeof(struct entity_priv); - if (conf->pdcp_xfrm.pkt_dir == RTE_SECURITY_PDCP_DOWNLINK) + if (conf->pdcp_xfrm.pkt_dir == RTE_SECURITY_PDCP_DOWNLINK) { size += sizeof(struct entity_priv_dl_part); - else if (conf->pdcp_xfrm.pkt_dir == RTE_SECURITY_PDCP_UPLINK) + size = RTE_CACHE_LINE_ROUNDUP(size); + bitmap_mem_offset = size; + size += pdcp_cnt_bitmap_get_memory_footprint(conf); + } else if (conf->pdcp_xfrm.pkt_dir == RTE_SECURITY_PDCP_UPLINK) size += sizeof(struct entity_priv_ul_part); else return -EINVAL; @@ -34,11 +39,24 @@ pdcp_dl_establish(struct rte_pdcp_entity *entity, const struct rte_pdcp_entity_c { const uint32_t window_size = pdcp_window_size_get(conf->pdcp_xfrm.sn_size); struct entity_priv_dl_part *dl = entity_dl_part_get(entity); + void *bitmap_mem; + int ret; entity->max_pkt_cache = RTE_MAX(entity->max_pkt_cache, window_size); dl->t_reorder.handle = conf->t_reordering; - return pdcp_reorder_create(&dl->reorder, window_size); + ret = pdcp_reorder_create(&dl->reorder, window_size); + if (ret) + return ret; + + bitmap_mem = RTE_PTR_ADD(entity, bitmap_mem_offset); + ret = pdcp_cnt_bitmap_create(dl, bitmap_mem, window_size); + if (ret) { + pdcp_reorder_destroy(&dl->reorder); + return ret; + } + + return 0; } struct rte_pdcp_entity * @@ -110,10 +128,6 @@ rte_pdcp_entity_establish(const struct rte_pdcp_entity_conf *conf) goto crypto_sess_destroy; } - ret = pdcp_cnt_ring_create(entity, conf); - if (ret) - goto crypto_sess_destroy; - return entity; crypto_sess_destroy: @@ -192,6 +206,7 @@ struct rte_mbuf * rte_pdcp_control_pdu_create(struct rte_pdcp_entity *pdcp_entity, enum rte_pdcp_ctrl_pdu_type type) { + struct entity_priv_dl_part *dl; struct entity_priv *en_priv; struct rte_mbuf *m; int ret; @@ -202,6 +217,7 @@ rte_pdcp_control_pdu_create(struct rte_pdcp_entity *pdcp_entity, } en_priv = entity_priv_get(pdcp_entity); + dl = entity_dl_part_get(pdcp_entity); m = rte_pktmbuf_alloc(en_priv->ctr_pdu_pool); if (m == NULL) { @@ -211,7 +227,7 @@ rte_pdcp_control_pdu_create(struct rte_pdcp_entity *pdcp_entity, switch (type) { case RTE_PDCP_CTRL_PDU_TYPE_STATUS_REPORT: - ret = pdcp_ctrl_pdu_status_gen(en_priv, m); + ret = pdcp_ctrl_pdu_status_gen(en_priv, dl, m); break; default: ret = -ENOTSUP; From patchwork Fri Apr 14 17:45:10 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Anoob Joseph X-Patchwork-Id: 126120 X-Patchwork-Delegate: gakhil@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) 
by inbox.dpdk.org (Postfix) with ESMTP id B9CB742943; Fri, 14 Apr 2023 19:48:13 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 0EA2642D53; Fri, 14 Apr 2023 19:48:06 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by mails.dpdk.org (Postfix) with ESMTP id 56FFA42D0E for ; Fri, 14 Apr 2023 19:48:04 +0200 (CEST) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id 33EGNfCF012938; Fri, 14 Apr 2023 10:48:03 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0220; bh=1jkkjwIDUYYWhP5vVAaiXgdr6KkOgEm9UlGmCDMVCVw=; b=g/Hr6WBm66Ao/qP22ka/KHuZoGXyvndoOxf420oELi03mnUdhCKXdSpTW5t6ysPuEohQ S3DhlpZL1drDciHzcrr0IHx+A+kf3t3eDlibrzx8EejMEMPeZbpCzbRDDFRMneASoA4m W+x9pgVsWlqQcP85woennaap6MF6ak33SJANuMdPOJCfkCOZlWCua8YNLqMl5DlgYA0g c+ocITW/KoF016qfkB/PZyVaaRIviw0C3LjoTm/MQeV5kF774HND/KVRTI1csFdsQjAt 0dvL53OrFvXdDO3W9sN6YsCUrz8Rycf2G6rlrT1VBbu9IpBxRih8cxJF3FbpXr0tXFWm GQ== Received: from dc5-exch02.marvell.com ([199.233.59.182]) by mx0b-0016f401.pphosted.com (PPS) with ESMTPS id 3py646s6wq-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Fri, 14 Apr 2023 10:48:03 -0700 Received: from DC5-EXCH01.marvell.com (10.69.176.38) by DC5-EXCH02.marvell.com (10.69.176.39) with Microsoft SMTP Server (TLS) id 15.0.1497.48; Fri, 14 Apr 2023 10:48:01 -0700 Received: from maili.marvell.com (10.69.176.80) by DC5-EXCH01.marvell.com (10.69.176.38) with Microsoft SMTP Server id 15.0.1497.48 via Frontend Transport; Fri, 14 Apr 2023 10:48:00 -0700 Received: from BG-LT92004.corp.innovium.com (unknown [10.28.161.183]) by maili.marvell.com (Postfix) with ESMTP id 9FE333F7081; Fri, 14 Apr 2023 10:47:55 -0700 (PDT) From: Anoob Joseph To: Thomas Monjalon , Akhil Goyal , Jerin Jacob , Konstantin Ananyev , Bernard Iremonger CC: Volodymyr Fialko , Hemant Agrawal , =?utf-8?q?Mattias_R=C3=B6nnblom?= , Kiran Kumar K , , Olivier Matz Subject: [PATCH v2 20/22] pdcp: allocate reorder buffer alongside with entity Date: Fri, 14 Apr 2023 23:15:10 +0530 Message-ID: <20230414174512.642-21-anoobj@marvell.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230414174512.642-1-anoobj@marvell.com> References: <20221222092522.1628-1-anoobj@marvell.com> <20230414174512.642-1-anoobj@marvell.com> MIME-Version: 1.0 X-Proofpoint-GUID: 7paThd-LU_WAMYK_Vju3FZVie9GusVuW X-Proofpoint-ORIG-GUID: 7paThd-LU_WAMYK_Vju3FZVie9GusVuW X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.254,Aquarius:18.0.942,Hydra:6.0.573,FMLib:17.11.170.22 definitions=2023-04-14_10,2023-04-14_01,2023-02-09_01 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org From: Volodymyr Fialko Instead of allocating reorder buffer separately on heap, allocate memory for it together with rest of entity, and then only initialize buffer via `rte_reorder_init()`. 
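The allocation pattern introduced by this change can be shown in isolation. The sketch below is a generic, hypothetical example (plain C, no DPDK API, arbitrary sizes): compute per-component offsets up front, round up where a component needs cache-line alignment, allocate the whole entity once, and initialise each component in place at its offset. A single free then releases the entity together with its bitmap and reorder buffer, which avoids separate heap allocations and keeps the data of one entity contiguous.

#include <stddef.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

#define CACHE_LINE 64
#define ALIGN_UP(x, a) (((x) + (a) - 1) & ~((size_t)(a) - 1))

/* Offsets of the trailing components inside one contiguous allocation */
struct layout {
	size_t bitmap_offset;
	size_t reorder_offset;
	size_t total;
};

static void
layout_compute(struct layout *l, size_t base_sz, size_t bitmap_sz, size_t reorder_sz)
{
	size_t sz = ALIGN_UP(base_sz, CACHE_LINE); /* bitmap wants cache alignment */

	l->bitmap_offset = sz;
	sz += bitmap_sz;
	l->reorder_offset = sz;
	sz += reorder_sz;
	l->total = ALIGN_UP(sz, CACHE_LINE);
}

int
main(void)
{
	struct layout l;
	uint8_t *entity;

	/* Example sizes only; the real sizes come from the entity configuration */
	layout_compute(&l, 200, 128, 512);

	entity = aligned_alloc(CACHE_LINE, l.total);
	if (entity == NULL)
		return 1;
	memset(entity, 0, l.total);

	/* Each component is initialised in place at its precomputed offset */
	void *bitmap_mem = entity + l.bitmap_offset;
	void *reorder_mem = entity + l.reorder_offset;
	(void)bitmap_mem;
	(void)reorder_mem;

	free(entity); /* one free releases every component */
	return 0;
}

With this layout a separate teardown of the reorder buffer is no longer needed, which is why the change also drops pdcp_reorder_destroy().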
Signed-off-by: Anoob Joseph Signed-off-by: Volodymyr Fialko --- lib/pdcp/pdcp_cnt.c | 9 +++---- lib/pdcp/pdcp_cnt.h | 3 ++- lib/pdcp/pdcp_entity.h | 2 +- lib/pdcp/pdcp_reorder.c | 11 ++------ lib/pdcp/pdcp_reorder.h | 12 ++++++--- lib/pdcp/rte_pdcp.c | 58 ++++++++++++++++++++++++++--------------- 6 files changed, 55 insertions(+), 40 deletions(-) diff --git a/lib/pdcp/pdcp_cnt.c b/lib/pdcp/pdcp_cnt.c index af027b00d3..e1d0634b4d 100644 --- a/lib/pdcp/pdcp_cnt.c +++ b/lib/pdcp/pdcp_cnt.c @@ -20,15 +20,14 @@ pdcp_cnt_bitmap_get_memory_footprint(const struct rte_pdcp_entity_conf *conf) } int -pdcp_cnt_bitmap_create(struct entity_priv_dl_part *dl, void *bitmap_mem, uint32_t window_size) +pdcp_cnt_bitmap_create(struct entity_priv_dl_part *dl, uint32_t nb_elem, + void *bitmap_mem, uint32_t mem_size) { - uint32_t mem_size = rte_bitmap_get_memory_footprint(window_size); - - dl->bitmap.bmp = rte_bitmap_init(window_size, bitmap_mem, mem_size); + dl->bitmap.bmp = rte_bitmap_init(nb_elem, bitmap_mem, mem_size); if (dl->bitmap.bmp == NULL) return -EINVAL; - dl->bitmap.size = window_size; + dl->bitmap.size = nb_elem; return 0; } diff --git a/lib/pdcp/pdcp_cnt.h b/lib/pdcp/pdcp_cnt.h index 5941b7a406..87b011f9dc 100644 --- a/lib/pdcp/pdcp_cnt.h +++ b/lib/pdcp/pdcp_cnt.h @@ -10,7 +10,8 @@ #include "pdcp_entity.h" uint32_t pdcp_cnt_bitmap_get_memory_footprint(const struct rte_pdcp_entity_conf *conf); -int pdcp_cnt_bitmap_create(struct entity_priv_dl_part *dl, void *bitmap_mem, uint32_t window_size); +int pdcp_cnt_bitmap_create(struct entity_priv_dl_part *dl, uint32_t nb_elem, + void *bitmap_mem, uint32_t mem_size); void pdcp_cnt_bitmap_set(struct pdcp_cnt_bitmap bitmap, uint32_t count); bool pdcp_cnt_bitmap_is_set(struct pdcp_cnt_bitmap bitmap, uint32_t count); diff --git a/lib/pdcp/pdcp_entity.h b/lib/pdcp/pdcp_entity.h index 8e1d6254dd..38fa71acef 100644 --- a/lib/pdcp/pdcp_entity.h +++ b/lib/pdcp/pdcp_entity.h @@ -132,7 +132,7 @@ struct pdcp_cnt_bitmap { }; /* - * Layout of PDCP entity: [rte_pdcp_entity] [entity_priv] [entity_dl/ul] + * Layout of PDCP entity: [rte_pdcp_entity] [entity_priv] [entity_dl/ul] [reorder/bitmap] */ struct entity_priv { diff --git a/lib/pdcp/pdcp_reorder.c b/lib/pdcp/pdcp_reorder.c index 5399f0dc28..bc45f2e19b 100644 --- a/lib/pdcp/pdcp_reorder.c +++ b/lib/pdcp/pdcp_reorder.c @@ -8,20 +8,13 @@ #include "pdcp_reorder.h" int -pdcp_reorder_create(struct pdcp_reorder *reorder, uint32_t window_size) +pdcp_reorder_create(struct pdcp_reorder *reorder, size_t nb_elem, void *mem, size_t mem_size) { - reorder->buf = rte_reorder_create("reorder_buffer", SOCKET_ID_ANY, window_size); + reorder->buf = rte_reorder_init(mem, mem_size, "reorder_buffer", nb_elem); if (reorder->buf == NULL) return -rte_errno; - reorder->window_size = window_size; reorder->is_active = false; return 0; } - -void -pdcp_reorder_destroy(const struct pdcp_reorder *reorder) -{ - rte_reorder_free(reorder->buf); -} diff --git a/lib/pdcp/pdcp_reorder.h b/lib/pdcp/pdcp_reorder.h index 6a2f61d6ae..7e4f079d4b 100644 --- a/lib/pdcp/pdcp_reorder.h +++ b/lib/pdcp/pdcp_reorder.h @@ -9,12 +9,18 @@ struct pdcp_reorder { struct rte_reorder_buffer *buf; - uint32_t window_size; bool is_active; }; -int pdcp_reorder_create(struct pdcp_reorder *reorder, uint32_t window_size); -void pdcp_reorder_destroy(const struct pdcp_reorder *reorder); +int pdcp_reorder_create(struct pdcp_reorder *reorder, size_t nb_elem, void *mem, size_t mem_size); + +/* NOTE: replace with `rte_reorder_memory_footprint_get` after DPDK 23.07 */ +#define 
SIZE_OF_REORDER_BUFFER (4 * RTE_CACHE_LINE_SIZE) +static inline size_t +pdcp_reorder_memory_footprint_get(size_t nb_elem) +{ + return SIZE_OF_REORDER_BUFFER + (2 * nb_elem * sizeof(struct rte_mbuf *)); +} static inline uint32_t pdcp_reorder_get_sequential(struct pdcp_reorder *reorder, struct rte_mbuf **mbufs, diff --git a/lib/pdcp/rte_pdcp.c b/lib/pdcp/rte_pdcp.c index ce846d687e..95d2283cef 100644 --- a/lib/pdcp/rte_pdcp.c +++ b/lib/pdcp/rte_pdcp.c @@ -12,49 +12,65 @@ #include "pdcp_entity.h" #include "pdcp_process.h" -static int bitmap_mem_offset; +struct entity_layout { + size_t bitmap_offset; + size_t bitmap_size; + + size_t reorder_buf_offset; + size_t reorder_buf_size; + + size_t total_size; +}; static int -pdcp_entity_size_get(const struct rte_pdcp_entity_conf *conf) +pdcp_entity_layout_get(const struct rte_pdcp_entity_conf *conf, struct entity_layout *layout) { - int size; + size_t size; + const uint32_t window_size = pdcp_window_size_get(conf->pdcp_xfrm.sn_size); size = sizeof(struct rte_pdcp_entity) + sizeof(struct entity_priv); if (conf->pdcp_xfrm.pkt_dir == RTE_SECURITY_PDCP_DOWNLINK) { size += sizeof(struct entity_priv_dl_part); + /* Bitmap require memory to be cache aligned */ size = RTE_CACHE_LINE_ROUNDUP(size); - bitmap_mem_offset = size; - size += pdcp_cnt_bitmap_get_memory_footprint(conf); + layout->bitmap_offset = size; + layout->bitmap_size = pdcp_cnt_bitmap_get_memory_footprint(conf); + size += layout->bitmap_size; + layout->reorder_buf_offset = size; + layout->reorder_buf_size = pdcp_reorder_memory_footprint_get(window_size); + size += layout->reorder_buf_size; } else if (conf->pdcp_xfrm.pkt_dir == RTE_SECURITY_PDCP_UPLINK) size += sizeof(struct entity_priv_ul_part); else return -EINVAL; - return RTE_ALIGN_CEIL(size, RTE_CACHE_LINE_SIZE); + layout->total_size = size; + + return 0; } static int -pdcp_dl_establish(struct rte_pdcp_entity *entity, const struct rte_pdcp_entity_conf *conf) +pdcp_dl_establish(struct rte_pdcp_entity *entity, const struct rte_pdcp_entity_conf *conf, + const struct entity_layout *layout) { const uint32_t window_size = pdcp_window_size_get(conf->pdcp_xfrm.sn_size); struct entity_priv_dl_part *dl = entity_dl_part_get(entity); - void *bitmap_mem; + void *memory; int ret; entity->max_pkt_cache = RTE_MAX(entity->max_pkt_cache, window_size); dl->t_reorder.handle = conf->t_reordering; - ret = pdcp_reorder_create(&dl->reorder, window_size); + memory = RTE_PTR_ADD(entity, layout->reorder_buf_offset); + ret = pdcp_reorder_create(&dl->reorder, window_size, memory, layout->reorder_buf_size); if (ret) return ret; - bitmap_mem = RTE_PTR_ADD(entity, bitmap_mem_offset); - ret = pdcp_cnt_bitmap_create(dl, bitmap_mem, window_size); - if (ret) { - pdcp_reorder_destroy(&dl->reorder); + memory = RTE_PTR_ADD(entity, layout->bitmap_offset); + ret = pdcp_cnt_bitmap_create(dl, window_size, memory, layout->bitmap_size); + if (ret) return ret; - } return 0; } @@ -62,9 +78,10 @@ pdcp_dl_establish(struct rte_pdcp_entity *entity, const struct rte_pdcp_entity_c struct rte_pdcp_entity * rte_pdcp_entity_establish(const struct rte_pdcp_entity_conf *conf) { + struct entity_layout entity_layout = { 0 }; struct rte_pdcp_entity *entity = NULL; struct entity_priv *en_priv; - int ret, entity_size; + int ret; if (conf == NULL || conf->cop_pool == NULL || conf->ctr_pdu_pool == NULL) { rte_errno = -EINVAL; @@ -94,13 +111,14 @@ rte_pdcp_entity_establish(const struct rte_pdcp_entity_conf *conf) return NULL; } - entity_size = pdcp_entity_size_get(conf); - if (entity_size < 0) { + ret = 
pdcp_entity_layout_get(conf, &entity_layout); + if (ret < 0) { rte_errno = -EINVAL; return NULL; } - entity = rte_zmalloc_socket("pdcp_entity", entity_size, RTE_CACHE_LINE_SIZE, SOCKET_ID_ANY); + entity = rte_zmalloc_socket("pdcp_entity", entity_layout.total_size, RTE_CACHE_LINE_SIZE, + SOCKET_ID_ANY); if (entity == NULL) { rte_errno = -ENOMEM; return NULL; @@ -123,7 +141,7 @@ rte_pdcp_entity_establish(const struct rte_pdcp_entity_conf *conf) goto crypto_sess_destroy; if (conf->pdcp_xfrm.pkt_dir == RTE_SECURITY_PDCP_DOWNLINK) { - ret = pdcp_dl_establish(entity, conf); + ret = pdcp_dl_establish(entity, conf, &entity_layout); if (ret) goto crypto_sess_destroy; } @@ -148,8 +166,6 @@ pdcp_dl_release(struct rte_pdcp_entity *entity, struct rte_mbuf *out_mb[]) nb_out = pdcp_reorder_up_to_get(&dl->reorder, out_mb, entity->max_pkt_cache, en_priv->state.rx_next); - pdcp_reorder_destroy(&dl->reorder); - return nb_out; } From patchwork Fri Apr 14 17:45:11 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Anoob Joseph X-Patchwork-Id: 126121 X-Patchwork-Delegate: gakhil@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 7A92942943; Fri, 14 Apr 2023 19:48:22 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 40A4742D5A; Fri, 14 Apr 2023 19:48:12 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0a-0016f401.pphosted.com [67.231.148.174]) by mails.dpdk.org (Postfix) with ESMTP id B93A042D0E for ; Fri, 14 Apr 2023 19:48:10 +0200 (CEST) Received: from pps.filterd (m0045849.ppops.net [127.0.0.1]) by mx0a-0016f401.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id 33E909e5011326; Fri, 14 Apr 2023 10:48:09 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0220; bh=3X/frPzvgt4vtIwDWitUyZlRWJpPt0C8yj/NROW4zvY=; b=UL8BlJwKNATLat22haYh593CWSqPuNA6m9f4SfwJdtGwW/RsV8jpgIODEPuouUckmIbv ByxPwThz6TE4JBb9HgQordAx330IifDtTxhFhApIkB0kzD0QaI7lJCv6ZgwbqVEFp8Y0 JlEQglgEMCwcQChQ4Ry45B/5P6b603wMGMXM3e2jVxpwXtHkb7xgCXkYaDRLnBOGhJAk YgfocClaE22SYtja4glGLz0DarTeVrTJZv3RiWYzg5z3NMJJlWAryMDntp2VZzpJI0oK J2uVo+UOsPqLwR7ubkp4oUJwZ8BmBwf3h5s/JgaJtFOvCaJsVQtZIKe0e5BGiN4LJYin Tw== Received: from dc5-exch01.marvell.com ([199.233.59.181]) by mx0a-0016f401.pphosted.com (PPS) with ESMTPS id 3py3tk2epx-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Fri, 14 Apr 2023 10:48:08 -0700 Received: from DC5-EXCH02.marvell.com (10.69.176.39) by DC5-EXCH01.marvell.com (10.69.176.38) with Microsoft SMTP Server (TLS) id 15.0.1497.48; Fri, 14 Apr 2023 10:48:07 -0700 Received: from maili.marvell.com (10.69.176.80) by DC5-EXCH02.marvell.com (10.69.176.39) with Microsoft SMTP Server id 15.0.1497.48 via Frontend Transport; Fri, 14 Apr 2023 10:48:07 -0700 Received: from BG-LT92004.corp.innovium.com (unknown [10.28.161.183]) by maili.marvell.com (Postfix) with ESMTP id F06B73F7081; Fri, 14 Apr 2023 10:48:01 -0700 (PDT) From: Anoob Joseph To: Thomas Monjalon , Akhil Goyal , Jerin Jacob , Konstantin Ananyev , Bernard Iremonger CC: Volodymyr Fialko , Hemant Agrawal , =?utf-8?q?Mattias_R=C3=B6nnblom?= , Kiran Kumar K , , Olivier Matz Subject: [PATCH v2 21/22] pdcp: add thread safe processing 
Date: Fri, 14 Apr 2023 23:15:11 +0530 Message-ID: <20230414174512.642-22-anoobj@marvell.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230414174512.642-1-anoobj@marvell.com> References: <20221222092522.1628-1-anoobj@marvell.com> <20230414174512.642-1-anoobj@marvell.com> MIME-Version: 1.0 X-Proofpoint-GUID: 9eE4oJo5Sgu6qyNnEvmT_DJb5i4xIN52 X-Proofpoint-ORIG-GUID: 9eE4oJo5Sgu6qyNnEvmT_DJb5i4xIN52 X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.254,Aquarius:18.0.942,Hydra:6.0.573,FMLib:17.11.170.22 definitions=2023-04-14_10,2023-04-14_01,2023-02-09_01 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org From: Volodymyr Fialko PDCP state has to be guarded for: - Uplink pre_process: - tx_next atomic increment - Downlink pre_process: - rx_deliv - read - Downlink post_process: - rx_deliv, rx_reorder, rx_next - read/write - bitmask/reorder buffer - read/write When application requires thread safe processing, the state variables need to be updated atomically. Add config option to select this option per entity. Signed-off-by: Anoob Joseph Signed-off-by: Volodymyr Fialko --- lib/pdcp/pdcp_entity.h | 46 +++++++++++++++++++++++++++++++++++++++++ lib/pdcp/pdcp_process.c | 34 +++++++++++++++++++++++++++--- lib/pdcp/rte_pdcp.c | 2 ++ lib/pdcp/rte_pdcp.h | 2 ++ 4 files changed, 81 insertions(+), 3 deletions(-) diff --git a/lib/pdcp/pdcp_entity.h b/lib/pdcp/pdcp_entity.h index 38fa71acef..2dd6d2417d 100644 --- a/lib/pdcp/pdcp_entity.h +++ b/lib/pdcp/pdcp_entity.h @@ -10,6 +10,7 @@ #include #include #include +#include #include "pdcp_reorder.h" @@ -162,6 +163,8 @@ struct entity_priv { uint64_t is_status_report_required : 1; /** Is out-of-order delivery enabled */ uint64_t is_out_of_order_delivery : 1; + /** Is thread safety disabled */ + uint64_t is_thread_safety_disabled : 1; } flags; /** Crypto op pool. 
*/ struct rte_mempool *cop_pool; @@ -175,6 +178,8 @@ struct entity_priv { uint8_t dev_id; }; +typedef rte_spinlock_t pdcp_lock_t; + struct entity_priv_dl_part { /** PDCP would need to track the count values that are already received.*/ struct pdcp_cnt_bitmap bitmap; @@ -182,6 +187,8 @@ struct entity_priv_dl_part { struct pdcp_t_reordering t_reorder; /** Reorder packet buffer */ struct pdcp_reorder reorder; + /* Lock to protect concurrent updates */ + pdcp_lock_t lock; /** Bitmap memory region */ uint8_t bitmap_mem[0]; }; @@ -257,4 +264,43 @@ pdcp_hfn_max(enum rte_security_pdcp_sn_size sn_size) return (1 << (32 - sn_size)) - 1; } +static inline uint32_t +pdcp_atomic_inc(const struct entity_priv *en_priv, uint32_t *val) +{ + if (en_priv->flags.is_thread_safety_disabled) + return (*val)++; + else + return __atomic_fetch_add(val, 1, __ATOMIC_RELAXED); +} + +static inline void +pdcp_lock_init(const struct rte_pdcp_entity *entity) +{ + struct entity_priv_dl_part *dl = entity_dl_part_get(entity); + struct entity_priv *en_priv = entity_priv_get(entity); + + if (!en_priv->flags.is_thread_safety_disabled) + rte_spinlock_init(&dl->lock); +} + +static inline void +pdcp_lock_lock(const struct rte_pdcp_entity *entity) +{ + struct entity_priv_dl_part *dl = entity_dl_part_get(entity); + struct entity_priv *en_priv = entity_priv_get(entity); + + if (!en_priv->flags.is_thread_safety_disabled) + rte_spinlock_lock(&dl->lock); +} + +static inline void +pdcp_lock_unlock(const struct rte_pdcp_entity *entity) +{ + struct entity_priv_dl_part *dl = entity_dl_part_get(entity); + struct entity_priv *en_priv = entity_priv_get(entity); + + if (!en_priv->flags.is_thread_safety_disabled) + rte_spinlock_unlock(&dl->lock); +} + #endif /* PDCP_ENTITY_H */ diff --git a/lib/pdcp/pdcp_process.c b/lib/pdcp/pdcp_process.c index cdadc9b6b8..0bafa3447a 100644 --- a/lib/pdcp/pdcp_process.c +++ b/lib/pdcp/pdcp_process.c @@ -369,7 +369,7 @@ pdcp_pre_process_uplane_sn_12_ul_set_sn(struct entity_priv *en_priv, struct rte_ return false; /* Update sequence num in the PDU header */ - *count = en_priv->state.tx_next++; + *count = pdcp_atomic_inc(en_priv, &en_priv->state.tx_next); sn = pdcp_sn_from_count_get(*count, RTE_SECURITY_PDCP_SN_SIZE_12); pdu_hdr->d_c = RTE_PDCP_PDU_TYPE_DATA; @@ -451,7 +451,7 @@ pdcp_pre_process_uplane_sn_18_ul_set_sn(struct entity_priv *en_priv, struct rte_ return false; /* Update sequence num in the PDU header */ - *count = en_priv->state.tx_next++; + *count = pdcp_atomic_inc(en_priv, &en_priv->state.tx_next); sn = pdcp_sn_from_count_get(*count, RTE_SECURITY_PDCP_SN_SIZE_18); pdu_hdr->d_c = RTE_PDCP_PDU_TYPE_DATA; @@ -561,7 +561,7 @@ pdcp_pre_process_cplane_sn_12_ul(const struct rte_pdcp_entity *entity, struct rt memset(mac_i, 0, PDCP_MAC_I_LEN); /* Update sequence number in the PDU header */ - count = en_priv->state.tx_next++; + count = pdcp_atomic_inc(en_priv, &en_priv->state.tx_next); sn = pdcp_sn_from_count_get(count, RTE_SECURITY_PDCP_SN_SIZE_12); pdu_hdr->sn_11_8 = ((sn & 0xf00) >> 8); @@ -654,7 +654,9 @@ pdcp_pre_process_uplane_sn_12_dl_flags(const struct rte_pdcp_entity *entity, nb_cop = rte_crypto_op_bulk_alloc(en_priv->cop_pool, RTE_CRYPTO_OP_TYPE_SYMMETRIC, cop, num); + pdcp_lock_lock(entity); const uint32_t rx_deliv = en_priv->state.rx_deliv; + pdcp_lock_unlock(entity); for (i = 0; i < nb_cop; i++) { mb = in_mb[i]; @@ -714,7 +716,9 @@ pdcp_pre_process_uplane_sn_18_dl_flags(const struct rte_pdcp_entity *entity, nb_cop = rte_crypto_op_bulk_alloc(en_priv->cop_pool, RTE_CRYPTO_OP_TYPE_SYMMETRIC, cop, 
num); + pdcp_lock_lock(entity); const uint32_t rx_deliv = en_priv->state.rx_deliv; + pdcp_lock_unlock(entity); for (i = 0; i < nb_cop; i++) { mb = in_mb[i]; @@ -775,7 +779,9 @@ pdcp_pre_process_cplane_sn_12_dl(const struct rte_pdcp_entity *entity, struct rt nb_cop = rte_crypto_op_bulk_alloc(en_priv->cop_pool, RTE_CRYPTO_OP_TYPE_SYMMETRIC, cop, num); + pdcp_lock_lock(entity); const uint32_t rx_deliv = en_priv->state.rx_deliv; + pdcp_lock_unlock(entity); for (i = 0; i < nb_cop; i++) { mb = in_mb[i]; @@ -925,6 +931,8 @@ pdcp_post_process_uplane_sn_12_dl_flags(const struct rte_pdcp_entity *entity, struct rte_mbuf *mb; uint32_t count; + pdcp_lock_lock(entity); + for (i = 0; i < num; i++) { mb = in_mb[i]; if (unlikely(mb->ol_flags & RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED)) @@ -954,6 +962,8 @@ pdcp_post_process_uplane_sn_12_dl_flags(const struct rte_pdcp_entity *entity, err_mb[nb_err++] = mb; } + pdcp_lock_unlock(entity); + if (unlikely(nb_err != 0)) rte_memcpy(&out_mb[nb_success], err_mb, nb_err * sizeof(struct rte_mbuf *)); @@ -994,6 +1004,7 @@ pdcp_post_process_uplane_sn_18_dl_flags(const struct rte_pdcp_entity *entity, int32_t rsn = 0; uint32_t count; + pdcp_lock_lock(entity); for (i = 0; i < num; i++) { mb = in_mb[i]; @@ -1026,6 +1037,8 @@ pdcp_post_process_uplane_sn_18_dl_flags(const struct rte_pdcp_entity *entity, err_mb[nb_err++] = mb; } + pdcp_lock_unlock(entity); + if (unlikely(nb_err != 0)) rte_memcpy(&out_mb[nb_success], err_mb, nb_err * sizeof(struct rte_mbuf *)); @@ -1066,6 +1079,8 @@ pdcp_post_process_cplane_sn_12_dl(const struct rte_pdcp_entity *entity, uint32_t count; int32_t rsn; + pdcp_lock_lock(entity); + for (i = 0; i < num; i++) { mb = in_mb[i]; if (unlikely(mb->ol_flags & RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED)) @@ -1091,6 +1106,8 @@ pdcp_post_process_cplane_sn_12_dl(const struct rte_pdcp_entity *entity, err_mb[nb_err++] = mb; } + pdcp_lock_unlock(entity); + if (unlikely(nb_err != 0)) rte_memcpy(&out_mb[nb_success], err_mb, nb_err * sizeof(struct rte_mbuf *)); @@ -1254,6 +1271,13 @@ pdcp_entity_priv_populate(struct entity_priv *en_priv, const struct rte_pdcp_ent */ en_priv->flags.is_out_of_order_delivery = conf->out_of_order_delivery; + /** + * flags.disable_thread_safety + * + * Indicate whether the thread safety is disabled for PDCP entity. + */ + en_priv->flags.is_thread_safety_disabled = conf->disable_thread_safety; + /** * hdr_sz * @@ -1316,6 +1340,8 @@ rte_pdcp_t_reordering_expiry_handle(const struct rte_pdcp_entity *entity, struct * performing header decompression, if not decompressed before: */ + pdcp_lock_lock(entity); + /* - all stored PDCP SDU(s) with associated COUNT value(s) < RX_REORD; */ nb_out = pdcp_reorder_up_to_get(&dl->reorder, out_mb, capacity, en_priv->state.rx_reord); capacity -= nb_out; @@ -1347,5 +1373,7 @@ rte_pdcp_t_reordering_expiry_handle(const struct rte_pdcp_entity *entity, struct dl->t_reorder.state = TIMER_EXPIRED; } + pdcp_lock_unlock(entity); + return nb_out; } diff --git a/lib/pdcp/rte_pdcp.c b/lib/pdcp/rte_pdcp.c index 95d2283cef..06b86c274e 100644 --- a/lib/pdcp/rte_pdcp.c +++ b/lib/pdcp/rte_pdcp.c @@ -72,6 +72,8 @@ pdcp_dl_establish(struct rte_pdcp_entity *entity, const struct rte_pdcp_entity_c if (ret) return ret; + pdcp_lock_init(entity); + return 0; } diff --git a/lib/pdcp/rte_pdcp.h b/lib/pdcp/rte_pdcp.h index c077acce63..a8b824a7ee 100644 --- a/lib/pdcp/rte_pdcp.h +++ b/lib/pdcp/rte_pdcp.h @@ -137,6 +137,8 @@ struct rte_pdcp_entity_conf { bool is_slrb; /** Enable security offload on the device specified. 
*/ bool en_sec_offload; + /** Disable usage of synchronization primitives for entity. */ + bool disable_thread_safety; /** Device on which security/crypto session need to be created. */ uint8_t dev_id; /** Reverse direction during IV generation. Can be used to simulate UE crypto processing.*/ From patchwork Fri Apr 14 17:45:12 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Anoob Joseph X-Patchwork-Id: 126122 X-Patchwork-Delegate: gakhil@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 245D542943; Fri, 14 Apr 2023 19:48:29 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id BF5D942D0E; Fri, 14 Apr 2023 19:48:18 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by mails.dpdk.org (Postfix) with ESMTP id 46F9942D61 for ; Fri, 14 Apr 2023 19:48:17 +0200 (CEST) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id 33EFFBRR009805; Fri, 14 Apr 2023 10:48:16 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0220; bh=BHmG8ryfQxfQhk9stSAo/3+eAG82yMiNvenujLAbVp4=; b=C87fLE0xWEaZ5cfcpysdbCpai9uAm5HNFJ6nPyYDuhkWbqx9VnXPBW/hsEqa/0pT5ucf SHZ6ZrvB5Z+zfimB5thsKb0lEGPBEaKavFsAmcw4SgA+Iwkh2zCzhpnv/bBI1GmT3A69 sJD2umeG/rPJiPI0FHQkQloONwsqTneRBscwqyFcbL+UenccOU/sgWngBfGsNDp9Vsq0 jlNu2eXk0oaYOUkAY5wluXl5DVPMwlt3xxF5DCJTCqMzNitowgZHh58BXB/ardkl+XG6 p26IlA/jdqFw393FKT+tK57Bqdr1xkV0We3GuUFWFiv4nAC7gDwtnAydiC8bHZZ1qOsf uQ== Received: from dc5-exch02.marvell.com ([199.233.59.182]) by mx0b-0016f401.pphosted.com (PPS) with ESMTPS id 3py646s6xq-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Fri, 14 Apr 2023 10:48:16 -0700 Received: from DC5-EXCH02.marvell.com (10.69.176.39) by DC5-EXCH02.marvell.com (10.69.176.39) with Microsoft SMTP Server (TLS) id 15.0.1497.48; Fri, 14 Apr 2023 10:48:14 -0700 Received: from maili.marvell.com (10.69.176.80) by DC5-EXCH02.marvell.com (10.69.176.39) with Microsoft SMTP Server id 15.0.1497.48 via Frontend Transport; Fri, 14 Apr 2023 10:48:13 -0700 Received: from BG-LT92004.corp.innovium.com (unknown [10.28.161.183]) by maili.marvell.com (Postfix) with ESMTP id 088093F707F; Fri, 14 Apr 2023 10:48:07 -0700 (PDT) From: Anoob Joseph To: Thomas Monjalon , Akhil Goyal , Jerin Jacob , Konstantin Ananyev , Bernard Iremonger CC: Volodymyr Fialko , Hemant Agrawal , =?utf-8?q?Mattias_R=C3=B6nnblom?= , Kiran Kumar K , , Olivier Matz Subject: [PATCH v2 22/22] test/pdcp: add PDCP status report cases Date: Fri, 14 Apr 2023 23:15:12 +0530 Message-ID: <20230414174512.642-23-anoobj@marvell.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230414174512.642-1-anoobj@marvell.com> References: <20221222092522.1628-1-anoobj@marvell.com> <20230414174512.642-1-anoobj@marvell.com> MIME-Version: 1.0 X-Proofpoint-GUID: T5BydYO3n-Sm4M-MK8lGG4EmL_Z45glf X-Proofpoint-ORIG-GUID: T5BydYO3n-Sm4M-MK8lGG4EmL_Z45glf X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.254,Aquarius:18.0.942,Hydra:6.0.573,FMLib:17.11.170.22 definitions=2023-04-14_10,2023-04-14_01,2023-02-09_01 X-BeenThere: dev@dpdk.org X-Mailman-Version: 
2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org From: Volodymyr Fialko Test PDCP status report generation. Signed-off-by: Anoob Joseph Signed-off-by: Volodymyr Fialko --- app/test/test_pdcp.c | 310 +++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 310 insertions(+) diff --git a/app/test/test_pdcp.c b/app/test/test_pdcp.c index de3375bb22..c9e4b894ac 100644 --- a/app/test/test_pdcp.c +++ b/app/test/test_pdcp.c @@ -2,6 +2,7 @@ * Copyright(C) 2023 Marvell. */ +#include #include #include #include @@ -43,6 +44,9 @@ struct pdcp_testsuite_params { struct rte_event_timer_adapter *timdev; bool timer_is_running; uint64_t min_resolution_ns; + struct rte_pdcp_up_ctrl_pdu_hdr *status_report; + uint32_t status_report_bitmask_capacity; + uint8_t *ctrl_pdu_buf; }; static struct pdcp_testsuite_params testsuite_params; @@ -139,6 +143,18 @@ static struct rte_pdcp_t_reordering t_reorder_timer = { .stop = pdcp_timer_stop_cb, }; +static inline void +bitmask_set_bit(uint8_t *mask, uint32_t bit) +{ + mask[bit / 8] |= (1 << bit % 8); +} + +static inline bool +bitmask_is_bit_set(const uint8_t *mask, uint32_t bit) +{ + return mask[bit / 8] & (1 << (bit % 8)); +} + static inline int pdcp_hdr_size_get(enum rte_security_pdcp_sn_size sn_size) { @@ -285,6 +301,21 @@ testsuite_setup(void) goto cop_pool_free; } + /* Allocate memory for longest possible status report */ + ts_params->status_report_bitmask_capacity = RTE_PDCP_CTRL_PDU_SIZE_MAX - + sizeof(struct rte_pdcp_up_ctrl_pdu_hdr); + ts_params->status_report = rte_zmalloc(NULL, RTE_PDCP_CTRL_PDU_SIZE_MAX, 0); + if (ts_params->status_report == NULL) { + RTE_LOG(ERR, USER1, "Could not allocate status report\n"); + goto cop_pool_free; + } + + ts_params->ctrl_pdu_buf = rte_zmalloc(NULL, RTE_PDCP_CTRL_PDU_SIZE_MAX, 0); + if (ts_params->ctrl_pdu_buf == NULL) { + RTE_LOG(ERR, USER1, "Could not allocate status report data\n"); + goto cop_pool_free; + } + return 0; cop_pool_free: @@ -293,6 +324,8 @@ testsuite_setup(void) mbuf_pool_free: rte_mempool_free(ts_params->mbuf_pool); ts_params->mbuf_pool = NULL; + rte_free(ts_params->status_report); + rte_free(ts_params->ctrl_pdu_buf); return TEST_FAILED; } @@ -315,6 +348,9 @@ testsuite_teardown(void) rte_mempool_free(ts_params->mbuf_pool); ts_params->mbuf_pool = NULL; + + rte_free(ts_params->status_report); + rte_free(ts_params->ctrl_pdu_buf); } static int @@ -1364,6 +1400,244 @@ test_expiry_with_rte_timer(const struct pdcp_test_conf *ul_conf) return ret; } +static struct rte_pdcp_up_ctrl_pdu_hdr * +pdcp_status_report_init(uint32_t fmc) +{ + struct rte_pdcp_up_ctrl_pdu_hdr *hdr = testsuite_params.status_report; + + hdr->d_c = RTE_PDCP_PDU_TYPE_CTRL; + hdr->pdu_type = RTE_PDCP_CTRL_PDU_TYPE_STATUS_REPORT; + hdr->fmc = rte_cpu_to_be_32(fmc); + hdr->r = 0; + memset(hdr->bitmap, 0, testsuite_params.status_report_bitmask_capacity); + + return hdr; +} + +static uint32_t +pdcp_status_report_len(void) +{ + struct rte_pdcp_up_ctrl_pdu_hdr *hdr = testsuite_params.status_report; + uint32_t i; + + for (i = testsuite_params.status_report_bitmask_capacity; i != 0; i--) { + if (hdr->bitmap[i - 1]) + return i; + } + + return 0; +} + +static int +pdcp_status_report_verify(struct rte_mbuf *status_report, + const struct rte_pdcp_up_ctrl_pdu_hdr *expected_hdr, uint32_t expected_len) +{ + uint32_t received_len = rte_pktmbuf_pkt_len(status_report); + uint8_t *received_buf = testsuite_params.ctrl_pdu_buf; + int ret; + + ret = 
+	ret = pktmbuf_read_into(status_report, received_buf, RTE_PDCP_CTRL_PDU_SIZE_MAX);
+	TEST_ASSERT_SUCCESS(ret, "Failed to copy status report pkt into continuous buffer");
+
+	debug_hexdump(stdout, "Received:", received_buf, received_len);
+	debug_hexdump(stdout, "Expected:", expected_hdr, expected_len);
+
+	TEST_ASSERT_EQUAL(expected_len, received_len,
+			  "Mismatch in packet lengths [expected: %d, received: %d]",
+			  expected_len, received_len);
+
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(received_buf, expected_hdr, expected_len,
+				      "Generated packet not as expected");
+
+	return 0;
+}
+
+static int
+test_status_report_gen(const struct pdcp_test_conf *ul_conf,
+		       const struct rte_pdcp_up_ctrl_pdu_hdr *hdr,
+		       uint32_t bitmap_len)
+{
+	struct rte_mbuf *status_report = NULL, **out_mb, *m;
+	uint16_t nb_success = 0, nb_err = 0;
+	struct rte_pdcp_entity *pdcp_entity;
+	struct pdcp_test_conf dl_conf;
+	int ret = TEST_FAILED, nb_out;
+	uint32_t nb_pkts = 0, i;
+	uint8_t cdev_id;
+
+	const uint32_t start_count = rte_be_to_cpu_32(hdr->fmc);
+
+	if (ul_conf->entity.pdcp_xfrm.pkt_dir == RTE_SECURITY_PDCP_DOWNLINK)
+		return TEST_SKIPPED;
+
+	/* Create configuration for actual testing */
+	uplink_to_downlink_convert(ul_conf, &dl_conf);
+	dl_conf.entity.count = start_count;
+	dl_conf.entity.status_report_required = true;
+
+	pdcp_entity = test_entity_create(&dl_conf, &ret);
+	if (pdcp_entity == NULL)
+		return ret;
+
+	cdev_id = dl_conf.entity.dev_id;
+	out_mb = calloc(pdcp_entity->max_pkt_cache, sizeof(uintptr_t));
+
+	for (i = 0; i < bitmap_len * 8; i++) {
+		if (!bitmask_is_bit_set(hdr->bitmap, i))
+			continue;
+
+		m = generate_packet_for_dl_with_sn(*ul_conf, start_count + i + 1);
+		ASSERT_TRUE_OR_GOTO(m != NULL, exit, "Could not allocate buffer for packet\n");
+
+		nb_success = test_process_packets(pdcp_entity, cdev_id, &m, 1, out_mb, &nb_err);
+		ASSERT_TRUE_OR_GOTO(nb_err == 0, exit, "Error occurred during packet buffering\n");
+		ASSERT_TRUE_OR_GOTO(nb_success == 0, exit, "Packet was not buffered as expected\n");
+
+	}
+
+	m = NULL;
+
+	/* Check status report */
+	status_report = rte_pdcp_control_pdu_create(pdcp_entity,
+			RTE_PDCP_CTRL_PDU_TYPE_STATUS_REPORT);
+	ASSERT_TRUE_OR_GOTO(status_report != NULL, exit, "Could not generate status report\n");
+
+	const uint32_t expected_len = sizeof(struct rte_pdcp_up_ctrl_pdu_hdr) + bitmap_len;
+
+	ASSERT_TRUE_OR_GOTO(pdcp_status_report_verify(status_report, hdr, expected_len) == 0, exit,
+			    "Report verification failure\n");
+
+	ret = TEST_SUCCESS;
+exit:
+	rte_free(m);
+	rte_pktmbuf_free(status_report);
+	rte_pktmbuf_free_bulk(out_mb, nb_pkts);
+	nb_out = rte_pdcp_entity_release(pdcp_entity, out_mb);
+	rte_pktmbuf_free_bulk(out_mb, nb_out);
+	free(out_mb);
+	return ret;
+}
+
+static void
+ctrl_pdu_hdr_packet_set(struct rte_pdcp_up_ctrl_pdu_hdr *hdr, uint32_t pkt_count)
+{
+	bitmask_set_bit(hdr->bitmap, pkt_count - rte_be_to_cpu_32(hdr->fmc) - 1);
+}
+
+static int
+test_status_report_fmc_only(const struct pdcp_test_conf *ul_conf)
+{
+	struct rte_pdcp_up_ctrl_pdu_hdr *hdr = pdcp_status_report_init(42);
+
+	return test_status_report_gen(ul_conf, hdr, pdcp_status_report_len());
+}
+
+static int
+test_status_report_one_pkt_first_slab(const struct pdcp_test_conf *ul_conf)
+{
+	struct rte_pdcp_up_ctrl_pdu_hdr *hdr = pdcp_status_report_init(0);
+
+	ctrl_pdu_hdr_packet_set(hdr, RTE_BITMAP_SLAB_BIT_SIZE / 2 + 1);
+
+	return test_status_report_gen(ul_conf, hdr, pdcp_status_report_len());
+}
+
+static int
+test_status_report_one_pkt_second_slab(const struct pdcp_test_conf *ul_conf)
+{
+	struct rte_pdcp_up_ctrl_pdu_hdr *hdr = pdcp_status_report_init(1);
+
+	ctrl_pdu_hdr_packet_set(hdr, RTE_BITMAP_SLAB_BIT_SIZE + 1);
+
+	return test_status_report_gen(ul_conf, hdr, pdcp_status_report_len());
+}
+
+static int
+test_status_report_full_slab(const struct pdcp_test_conf *ul_conf)
+{
+	struct rte_pdcp_up_ctrl_pdu_hdr *hdr = pdcp_status_report_init(1);
+	const uint32_t start_offset = RTE_BITMAP_SLAB_BIT_SIZE + 1;
+	int i;
+
+	for (i = 0; i < RTE_BITMAP_SLAB_BIT_SIZE; i++)
+		ctrl_pdu_hdr_packet_set(hdr, start_offset + i);
+
+	return test_status_report_gen(ul_conf, hdr, pdcp_status_report_len());
+}
+
+static int
+test_status_report_two_sequential_slabs(const struct pdcp_test_conf *ul_conf)
+{
+	struct rte_pdcp_up_ctrl_pdu_hdr *hdr = pdcp_status_report_init(0);
+	const uint32_t start_offset = RTE_BITMAP_SLAB_BIT_SIZE / 2 + 1;
+
+	ctrl_pdu_hdr_packet_set(hdr, start_offset);
+	ctrl_pdu_hdr_packet_set(hdr, start_offset + RTE_BITMAP_SLAB_BIT_SIZE);
+
+	return test_status_report_gen(ul_conf, hdr, pdcp_status_report_len());
+}
+
+static int
+test_status_report_two_non_sequential_slabs(const struct pdcp_test_conf *ul_conf)
+{
+	struct rte_pdcp_up_ctrl_pdu_hdr *hdr = pdcp_status_report_init(0);
+	const uint32_t start_offset = RTE_BITMAP_SLAB_BIT_SIZE / 2 + 1;
+
+	ctrl_pdu_hdr_packet_set(hdr, start_offset);
+	ctrl_pdu_hdr_packet_set(hdr, start_offset + RTE_BITMAP_SLAB_BIT_SIZE);
+	ctrl_pdu_hdr_packet_set(hdr, 3 * RTE_BITMAP_SLAB_BIT_SIZE);
+
+	return test_status_report_gen(ul_conf, hdr, pdcp_status_report_len());
+}
+
+static int
+test_status_report_max_length_sn_12(const struct pdcp_test_conf *ul_conf)
+{
+	struct rte_pdcp_up_ctrl_pdu_hdr *hdr;
+	const uint32_t fmc = 0;
+	uint32_t i;
+
+	if (ul_conf->entity.pdcp_xfrm.pkt_dir == RTE_SECURITY_PDCP_DOWNLINK ||
+	    ul_conf->entity.pdcp_xfrm.sn_size != RTE_SECURITY_PDCP_SN_SIZE_12)
+		return TEST_SKIPPED;
+
+	hdr = pdcp_status_report_init(fmc);
+
+	const uint32_t max_count = RTE_MIN((RTE_PDCP_CTRL_PDU_SIZE_MAX - sizeof(hdr)) * 8,
+					   (uint32_t)PDCP_WINDOW_SIZE(RTE_SECURITY_PDCP_SN_SIZE_12));
+
+	i = fmc + 2; /* set first count to have a gap, to enable packet buffering */
+
+	for (; i < max_count; i++)
+		ctrl_pdu_hdr_packet_set(hdr, i);
+
+	return test_status_report_gen(ul_conf, hdr, pdcp_status_report_len());
+}
+
+static int
+test_status_report_overlap_different_slabs(const struct pdcp_test_conf *ul_conf)
+{
+	struct rte_pdcp_up_ctrl_pdu_hdr *hdr = pdcp_status_report_init(63);
+	const uint32_t sn_size = 12;
+
+	ctrl_pdu_hdr_packet_set(hdr, 64 + 1);
+	ctrl_pdu_hdr_packet_set(hdr, PDCP_WINDOW_SIZE(sn_size) + 1);
+
+	return test_status_report_gen(ul_conf, hdr, pdcp_status_report_len());
+}
+
+static int
+test_status_report_overlap_same_slab(const struct pdcp_test_conf *ul_conf)
+{
+	struct rte_pdcp_up_ctrl_pdu_hdr *hdr = pdcp_status_report_init(2);
+	const uint32_t sn_size = 12;
+
+	ctrl_pdu_hdr_packet_set(hdr, 4);
+	ctrl_pdu_hdr_packet_set(hdr, PDCP_WINDOW_SIZE(sn_size) + 1);
+
+	return test_status_report_gen(ul_conf, hdr, pdcp_status_report_len());
+}
+
 static int
 test_combined(struct pdcp_test_conf *ul_conf)
 {
@@ -1555,11 +1829,47 @@ static struct unit_test_suite reorder_test_cases = {
 	}
 };
 
+static struct unit_test_suite status_report_test_cases = {
+	.suite_name = "PDCP status report",
+	.unit_test_cases = {
+		TEST_CASE_NAMED_WITH_DATA("test_status_report_fmc_only",
+			ut_setup_pdcp, ut_teardown_pdcp,
+			run_test_with_all_known_vec, test_status_report_fmc_only),
+		TEST_CASE_NAMED_WITH_DATA("test_status_report_one_pkt_first_slab",
+			ut_setup_pdcp, ut_teardown_pdcp,
+			run_test_with_all_known_vec, test_status_report_one_pkt_first_slab),
+		TEST_CASE_NAMED_WITH_DATA("test_status_report_one_pkt_second_slab",
+			ut_setup_pdcp, ut_teardown_pdcp,
+			run_test_with_all_known_vec, test_status_report_one_pkt_second_slab),
+		TEST_CASE_NAMED_WITH_DATA("test_status_report_full_slab",
+			ut_setup_pdcp, ut_teardown_pdcp,
+			run_test_with_all_known_vec, test_status_report_full_slab),
+		TEST_CASE_NAMED_WITH_DATA("test_status_report_two_sequential_slabs",
+			ut_setup_pdcp, ut_teardown_pdcp,
+			run_test_with_all_known_vec, test_status_report_two_sequential_slabs),
+		TEST_CASE_NAMED_WITH_DATA("test_status_report_two_non_sequential_slabs",
+			ut_setup_pdcp, ut_teardown_pdcp,
+			run_test_with_all_known_vec, test_status_report_two_non_sequential_slabs),
+		TEST_CASE_NAMED_WITH_DATA("test_status_report_max_length_sn_12",
+			ut_setup_pdcp, ut_teardown_pdcp,
+			run_test_with_all_known_vec_until_first_pass,
+			test_status_report_max_length_sn_12),
+		TEST_CASE_NAMED_WITH_DATA("test_status_report_overlap_different_slabs",
+			ut_setup_pdcp, ut_teardown_pdcp,
+			run_test_with_all_known_vec, test_status_report_overlap_different_slabs),
+		TEST_CASE_NAMED_WITH_DATA("test_status_report_overlap_same_slab",
+			ut_setup_pdcp, ut_teardown_pdcp,
+			run_test_with_all_known_vec, test_status_report_overlap_same_slab),
+		TEST_CASES_END() /**< NULL terminate unit test array */
+	}
+};
+
 struct unit_test_suite *test_suites[] = {
 	NULL, /* Place holder for known_vector_cases */
 	&combined_mode_cases,
 	&hfn_sn_test_cases,
 	&reorder_test_cases,
+	&status_report_test_cases,
 	NULL /* End of suites list */
 };
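
Note for readers following the cases above: every test builds its expected report with ctrl_pdu_hdr_packet_set(), which records a received COUNT relative to the FMC (first missing COUNT) carried in the header, i.e. COUNT maps to bitmap bit (COUNT - FMC - 1), stored LSB-first within each byte by bitmask_set_bit(). The following is a minimal standalone sketch of just that mapping so it can be compiled and inspected in isolation; the function and variable names in it are illustrative and are not part of this patch or of the PDCP library API.

/*
 * Sketch of the COUNT-to-bitmap mapping exercised by the tests above.
 * Assumption: "fmc" is the First Missing COUNT from the status report
 * header, and setting bit (count - fmc - 1) reports "count" as received,
 * mirroring the bitmask_set_bit()/ctrl_pdu_hdr_packet_set() helpers.
 */
#include <stdint.h>
#include <stdio.h>

static void
sketch_bitmap_set(uint8_t *bitmap, uint32_t count, uint32_t fmc)
{
	uint32_t bit = count - fmc - 1;

	/* LSB-first within each byte, as in the test helper. */
	bitmap[bit / 8] |= (uint8_t)(1 << (bit % 8));
}

int
main(void)
{
	uint8_t bitmap[8] = {0};
	const uint32_t fmc = 42;

	/* Report COUNT 44 and COUNT 50 as received; COUNT 43 stays missing. */
	sketch_bitmap_set(bitmap, 44, fmc);
	sketch_bitmap_set(bitmap, 50, fmc);

	printf("bitmap[0] = 0x%02x\n", bitmap[0]); /* bits 1 and 7 set -> 0x82 */
	return 0;
}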