From patchwork Thu Nov 15 23:53:43 2018
X-Patchwork-Submitter: "Ananyev, Konstantin"
X-Patchwork-Id: 48143
X-Patchwork-Delegate: thomas@monjalon.net
From: Konstantin Ananyev
To: dev@dpdk.org
Cc: Konstantin Ananyev
Date: Thu, 15 Nov 2018 23:53:43 +0000
Message-Id: <1542326031-5263-2-git-send-email-konstantin.ananyev@intel.com>
In-Reply-To: <1535129598-27301-1-git-send-email-konstantin.ananyev@intel.com>
References: <1535129598-27301-1-git-send-email-konstantin.ananyev@intel.com>
Subject: [dpdk-dev] [PATCH 1/9] cryptodev: add opaque userdata pointer into crypto sym session

Add 'uint64_t opaque_data' inside struct rte_cryptodev_sym_session.
That allows the upper layer to easily associate some user-defined data
with the session.

Signed-off-by: Konstantin Ananyev
Acked-by: Mohammad Abdul Awal
---
 lib/librte_cryptodev/rte_cryptodev.h | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/lib/librte_cryptodev/rte_cryptodev.h b/lib/librte_cryptodev/rte_cryptodev.h
index 4099823f1..009860e7b 100644
--- a/lib/librte_cryptodev/rte_cryptodev.h
+++ b/lib/librte_cryptodev/rte_cryptodev.h
@@ -954,6 +954,8 @@ rte_cryptodev_enqueue_burst(uint8_t dev_id, uint16_t qp_id,
  * has a fixed algo, key, op-type, digest_len etc.
 */
 struct rte_cryptodev_sym_session {
+	uint64_t opaque_data;
+	/**< Opaque user defined data */
 	__extension__ void *sess_private_data[0];
 	/**< Private symmetric session material */
 };

From patchwork Fri Nov 30 16:45:59 2018
X-Patchwork-Submitter: "Ananyev, Konstantin"
X-Patchwork-Id: 48437
X-Patchwork-Delegate: thomas@monjalon.net
From: Konstantin Ananyev
To: dev@dpdk.org
Cc: Konstantin Ananyev, akhil.goyal@nxp.com, declan.doherty@intel.com
Date: Fri, 30 Nov 2018 16:45:59 +0000
Message-Id: <1543596366-22617-3-git-send-email-konstantin.ananyev@intel.com>
In-Reply-To: <1542326031-5263-2-git-send-email-konstantin.ananyev@intel.com>
References: <1542326031-5263-2-git-send-email-konstantin.ananyev@intel.com>
Subject: [dpdk-dev] [PATCH v2 2/9] security: add opaque userdata pointer into security session

Add 'uint64_t opaque_data' inside struct rte_security_session.
That allows the upper layer to easily associate some user-defined data
with the session.
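For illustration only (not part of this patch): a minimal sketch of how an
upper layer might use the new field to map a session back to its own state.
The app_sa type and the helper names below are hypothetical:

#include <stdint.h>
#include <rte_security.h>

/* hypothetical application-level state associated with one SA */
struct app_sa {
	uint32_t spi;
	uint64_t pkt_count;
};

/* stash a pointer-sized cookie inside the session */
static void
app_session_set_userdata(struct rte_security_session *ses, struct app_sa *sa)
{
	ses->opaque_data = (uintptr_t)sa;
}

/* recover the application state, e.g. on the packet completion path */
static struct app_sa *
app_session_get_userdata(const struct rte_security_session *ses)
{
	return (struct app_sa *)(uintptr_t)ses->opaque_data;
}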
Signed-off-by: Konstantin Ananyev Acked-by: Mohammad Abdul Awal --- lib/librte_security/rte_security.h | 2 ++ 1 file changed, 2 insertions(+) diff --git a/lib/librte_security/rte_security.h b/lib/librte_security/rte_security.h index 1431b4df1..07b315512 100644 --- a/lib/librte_security/rte_security.h +++ b/lib/librte_security/rte_security.h @@ -318,6 +318,8 @@ struct rte_security_session_conf { struct rte_security_session { void *sess_private_data; /**< Private session material */ + uint64_t opaque_data; + /**< Opaque user defined data */ }; /** From patchwork Fri Nov 30 16:46:00 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Ananyev, Konstantin" X-Patchwork-Id: 48438 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id E32C61B58C; Fri, 30 Nov 2018 17:46:33 +0100 (CET) Received: from mga17.intel.com (mga17.intel.com [192.55.52.151]) by dpdk.org (Postfix) with ESMTP id BE6AF1B57F for ; Fri, 30 Nov 2018 17:46:26 +0100 (CET) X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga007.jf.intel.com ([10.7.209.58]) by fmsmga107.fm.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 30 Nov 2018 08:46:26 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.56,299,1539673200"; d="scan'208";a="94677626" Received: from sivswdev08.ir.intel.com (HELO localhost.localdomain) ([10.237.217.47]) by orsmga007.jf.intel.com with ESMTP; 30 Nov 2018 08:46:25 -0800 From: Konstantin Ananyev To: dev@dpdk.org Cc: Konstantin Ananyev , olivier.matz@6wind.com Date: Fri, 30 Nov 2018 16:46:00 +0000 Message-Id: <1543596366-22617-4-git-send-email-konstantin.ananyev@intel.com> X-Mailer: git-send-email 1.7.0.7 In-Reply-To: <1542326031-5263-2-git-send-email-konstantin.ananyev@intel.com> References: <1542326031-5263-2-git-send-email-konstantin.ananyev@intel.com> Subject: [dpdk-dev] [PATCH v2 3/9] net: add ESP trailer structure definition X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Signed-off-by: Konstantin Ananyev Acked-by: Mohammad Abdul Awal --- lib/librte_net/rte_esp.h | 10 +++++++++- 1 file changed, 9 insertions(+), 1 deletion(-) diff --git a/lib/librte_net/rte_esp.h b/lib/librte_net/rte_esp.h index f77ec2eb2..8e1b3d2dd 100644 --- a/lib/librte_net/rte_esp.h +++ b/lib/librte_net/rte_esp.h @@ -11,7 +11,7 @@ * ESP-related defines */ -#include +#include #ifdef __cplusplus extern "C" { @@ -25,6 +25,14 @@ struct esp_hdr { rte_be32_t seq; /**< packet sequence number */ } __attribute__((__packed__)); +/** + * ESP Trailer + */ +struct esp_tail { + uint8_t pad_len; /**< number of pad bytes (0-255) */ + uint8_t next_proto; /**< IPv4 or IPv6 or next layer header */ +} __attribute__((__packed__)); + #ifdef __cplusplus } #endif From patchwork Fri Nov 30 16:46:01 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Ananyev, Konstantin" X-Patchwork-Id: 48439 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 31D2C1B59A; Fri, 30 Nov 2018 17:46:36 +0100 (CET) Received: 
from mga17.intel.com (mga17.intel.com [192.55.52.151]) by dpdk.org (Postfix) with ESMTP id BA87A1B57B for ; Fri, 30 Nov 2018 17:46:28 +0100 (CET) X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga007.jf.intel.com ([10.7.209.58]) by fmsmga107.fm.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 30 Nov 2018 08:46:28 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.56,299,1539673200"; d="scan'208";a="94677632" Received: from sivswdev08.ir.intel.com (HELO localhost.localdomain) ([10.237.217.47]) by orsmga007.jf.intel.com with ESMTP; 30 Nov 2018 08:46:26 -0800 From: Konstantin Ananyev To: dev@dpdk.org Cc: Konstantin Ananyev , Mohammad Abdul Awal Date: Fri, 30 Nov 2018 16:46:01 +0000 Message-Id: <1543596366-22617-5-git-send-email-konstantin.ananyev@intel.com> X-Mailer: git-send-email 1.7.0.7 In-Reply-To: <1542326031-5263-2-git-send-email-konstantin.ananyev@intel.com> References: <1542326031-5263-2-git-send-email-konstantin.ananyev@intel.com> Subject: [dpdk-dev] [PATCH v2 4/9] lib: introduce ipsec library X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Introduce librte_ipsec library. The library is supposed to utilize existing DPDK crypto-dev and security API to provide application with transparent IPsec processing API. That initial commit provides some base API to manage IPsec Security Association (SA) object. Signed-off-by: Mohammad Abdul Awal Signed-off-by: Konstantin Ananyev --- MAINTAINERS | 5 + config/common_base | 5 + lib/Makefile | 2 + lib/librte_ipsec/Makefile | 24 ++ lib/librte_ipsec/ipsec_sqn.h | 48 ++++ lib/librte_ipsec/meson.build | 10 + lib/librte_ipsec/rte_ipsec_sa.h | 139 +++++++++++ lib/librte_ipsec/rte_ipsec_version.map | 10 + lib/librte_ipsec/sa.c | 307 +++++++++++++++++++++++++ lib/librte_ipsec/sa.h | 77 +++++++ lib/meson.build | 2 + mk/rte.app.mk | 2 + 12 files changed, 631 insertions(+) create mode 100644 lib/librte_ipsec/Makefile create mode 100644 lib/librte_ipsec/ipsec_sqn.h create mode 100644 lib/librte_ipsec/meson.build create mode 100644 lib/librte_ipsec/rte_ipsec_sa.h create mode 100644 lib/librte_ipsec/rte_ipsec_version.map create mode 100644 lib/librte_ipsec/sa.c create mode 100644 lib/librte_ipsec/sa.h diff --git a/MAINTAINERS b/MAINTAINERS index 19353ac89..f06aee6b6 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -1071,6 +1071,11 @@ F: doc/guides/prog_guide/pdump_lib.rst F: app/pdump/ F: doc/guides/tools/pdump.rst +IPsec - EXPERIMENTAL +M: Konstantin Ananyev +F: lib/librte_ipsec/ +M: Bernard Iremonger +F: test/test/test_ipsec.c Packet Framework ---------------- diff --git a/config/common_base b/config/common_base index d12ae98bc..32499d772 100644 --- a/config/common_base +++ b/config/common_base @@ -925,6 +925,11 @@ CONFIG_RTE_LIBRTE_BPF=y # allow load BPF from ELF files (requires libelf) CONFIG_RTE_LIBRTE_BPF_ELF=n +# +# Compile librte_ipsec +# +CONFIG_RTE_LIBRTE_IPSEC=y + # # Compile the test application # diff --git a/lib/Makefile b/lib/Makefile index b7370ef97..5dc774604 100644 --- a/lib/Makefile +++ b/lib/Makefile @@ -106,6 +106,8 @@ DEPDIRS-librte_gso := librte_eal librte_mbuf librte_ethdev librte_net DEPDIRS-librte_gso += librte_mempool DIRS-$(CONFIG_RTE_LIBRTE_BPF) += librte_bpf DEPDIRS-librte_bpf := librte_eal librte_mempool librte_mbuf librte_ethdev +DIRS-$(CONFIG_RTE_LIBRTE_IPSEC) += librte_ipsec +DEPDIRS-librte_ipsec := librte_eal 
librte_mbuf librte_cryptodev librte_security DIRS-$(CONFIG_RTE_LIBRTE_TELEMETRY) += librte_telemetry DEPDIRS-librte_telemetry := librte_eal librte_metrics librte_ethdev diff --git a/lib/librte_ipsec/Makefile b/lib/librte_ipsec/Makefile new file mode 100644 index 000000000..7758dcc6d --- /dev/null +++ b/lib/librte_ipsec/Makefile @@ -0,0 +1,24 @@ +# SPDX-License-Identifier: BSD-3-Clause +# Copyright(c) 2018 Intel Corporation + +include $(RTE_SDK)/mk/rte.vars.mk + +# library name +LIB = librte_ipsec.a + +CFLAGS += -O3 +CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR) +CFLAGS += -DALLOW_EXPERIMENTAL_API +LDLIBS += -lrte_eal -lrte_mbuf -lrte_cryptodev -lrte_security + +EXPORT_MAP := rte_ipsec_version.map + +LIBABIVER := 1 + +# all source are stored in SRCS-y +SRCS-$(CONFIG_RTE_LIBRTE_IPSEC) += sa.c + +# install header files +SYMLINK-$(CONFIG_RTE_LIBRTE_IPSEC)-include += rte_ipsec_sa.h + +include $(RTE_SDK)/mk/rte.lib.mk diff --git a/lib/librte_ipsec/ipsec_sqn.h b/lib/librte_ipsec/ipsec_sqn.h new file mode 100644 index 000000000..4471814f9 --- /dev/null +++ b/lib/librte_ipsec/ipsec_sqn.h @@ -0,0 +1,48 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2018 Intel Corporation + */ + +#ifndef _IPSEC_SQN_H_ +#define _IPSEC_SQN_H_ + +#define WINDOW_BUCKET_BITS 6 /* uint64_t */ +#define WINDOW_BUCKET_SIZE (1 << WINDOW_BUCKET_BITS) +#define WINDOW_BIT_LOC_MASK (WINDOW_BUCKET_SIZE - 1) + +/* minimum number of bucket, power of 2*/ +#define WINDOW_BUCKET_MIN 2 +#define WINDOW_BUCKET_MAX (INT16_MAX + 1) + +#define IS_ESN(sa) ((sa)->sqn_mask == UINT64_MAX) + +/* + * for given size, calculate required number of buckets. + */ +static uint32_t +replay_num_bucket(uint32_t wsz) +{ + uint32_t nb; + + nb = rte_align32pow2(RTE_ALIGN_MUL_CEIL(wsz, WINDOW_BUCKET_SIZE) / + WINDOW_BUCKET_SIZE); + nb = RTE_MAX(nb, (uint32_t)WINDOW_BUCKET_MIN); + + return nb; +} + +/** + * Based on number of buckets calculated required size for the + * structure that holds replay window and sequnce number (RSN) information. + */ +static size_t +rsn_size(uint32_t nb_bucket) +{ + size_t sz; + struct replay_sqn *rsn; + + sz = sizeof(*rsn) + nb_bucket * sizeof(rsn->window[0]); + sz = RTE_ALIGN_CEIL(sz, RTE_CACHE_LINE_SIZE); + return sz; +} + +#endif /* _IPSEC_SQN_H_ */ diff --git a/lib/librte_ipsec/meson.build b/lib/librte_ipsec/meson.build new file mode 100644 index 000000000..52c78eaeb --- /dev/null +++ b/lib/librte_ipsec/meson.build @@ -0,0 +1,10 @@ +# SPDX-License-Identifier: BSD-3-Clause +# Copyright(c) 2018 Intel Corporation + +allow_experimental_apis = true + +sources=files('sa.c') + +install_headers = files('rte_ipsec_sa.h') + +deps += ['mbuf', 'net', 'cryptodev', 'security'] diff --git a/lib/librte_ipsec/rte_ipsec_sa.h b/lib/librte_ipsec/rte_ipsec_sa.h new file mode 100644 index 000000000..4e36fd99b --- /dev/null +++ b/lib/librte_ipsec/rte_ipsec_sa.h @@ -0,0 +1,139 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2018 Intel Corporation + */ + +#ifndef _RTE_IPSEC_SA_H_ +#define _RTE_IPSEC_SA_H_ + +/** + * @file rte_ipsec_sa.h + * @b EXPERIMENTAL: this API may change without prior notice + * + * Defines API to manage IPsec Security Association (SA) objects. + */ + +#include +#include +#include + +#ifdef __cplusplus +extern "C" { +#endif + +/** + * An opaque structure to represent Security Association (SA). + */ +struct rte_ipsec_sa; + +/** + * SA initialization parameters. 
+ */ +struct rte_ipsec_sa_prm { + + uint64_t userdata; /**< provided and interpreted by user */ + uint64_t flags; /**< see RTE_IPSEC_SAFLAG_* below */ + /** ipsec configuration */ + struct rte_security_ipsec_xform ipsec_xform; + struct rte_crypto_sym_xform *crypto_xform; + union { + struct { + uint8_t hdr_len; /**< tunnel header len */ + uint8_t hdr_l3_off; /**< offset for IPv4/IPv6 header */ + uint8_t next_proto; /**< next header protocol */ + const void *hdr; /**< tunnel header template */ + } tun; /**< tunnel mode repated parameters */ + struct { + uint8_t proto; /**< next header protocol */ + } trs; /**< transport mode repated parameters */ + }; + + uint32_t replay_win_sz; + /**< window size to enable sequence replay attack handling. + * Replay checking is disabled if the window size is 0. + */ +}; + +/** + * SA type is an 64-bit value that contain the following information: + * - IP version (IPv4/IPv6) + * - IPsec proto (ESP/AH) + * - inbound/outbound + * - mode (TRANSPORT/TUNNEL) + * - for TUNNEL outer IP version (IPv4/IPv6) + * ... + */ + +enum { + RTE_SATP_LOG_IPV, + RTE_SATP_LOG_PROTO, + RTE_SATP_LOG_DIR, + RTE_SATP_LOG_MODE, + RTE_SATP_LOG_NUM +}; + +#define RTE_IPSEC_SATP_IPV_MASK (1ULL << RTE_SATP_LOG_IPV) +#define RTE_IPSEC_SATP_IPV4 (0ULL << RTE_SATP_LOG_IPV) +#define RTE_IPSEC_SATP_IPV6 (1ULL << RTE_SATP_LOG_IPV) + +#define RTE_IPSEC_SATP_PROTO_MASK (1ULL << RTE_SATP_LOG_PROTO) +#define RTE_IPSEC_SATP_PROTO_AH (0ULL << RTE_SATP_LOG_PROTO) +#define RTE_IPSEC_SATP_PROTO_ESP (1ULL << RTE_SATP_LOG_PROTO) + +#define RTE_IPSEC_SATP_DIR_MASK (1ULL << RTE_SATP_LOG_DIR) +#define RTE_IPSEC_SATP_DIR_IB (0ULL << RTE_SATP_LOG_DIR) +#define RTE_IPSEC_SATP_DIR_OB (1ULL << RTE_SATP_LOG_DIR) + +#define RTE_IPSEC_SATP_MODE_MASK (3ULL << RTE_SATP_LOG_MODE) +#define RTE_IPSEC_SATP_MODE_TRANS (0ULL << RTE_SATP_LOG_MODE) +#define RTE_IPSEC_SATP_MODE_TUNLV4 (1ULL << RTE_SATP_LOG_MODE) +#define RTE_IPSEC_SATP_MODE_TUNLV6 (2ULL << RTE_SATP_LOG_MODE) + +/** + * get type of given SA + * @return + * SA type value. + */ +uint64_t __rte_experimental +rte_ipsec_sa_type(const struct rte_ipsec_sa *sa); + +/** + * Calculate requied SA size based on provided input parameters. + * @param prm + * Parameters that wil be used to initialise SA object. + * @return + * - Actual size required for SA with given parameters. + * - -EINVAL if the parameters are invalid. + */ +int __rte_experimental +rte_ipsec_sa_size(const struct rte_ipsec_sa_prm *prm); + +/** + * initialise SA based on provided input parameters. + * @param sa + * SA object to initialise. + * @param prm + * Parameters used to initialise given SA object. + * @param size + * size of the provided buffer for SA. + * @return + * - Actual size of SA object if operation completed successfully. + * - -EINVAL if the parameters are invalid. + * - -ENOSPC if the size of the provided buffer is not big enough. + */ +int __rte_experimental +rte_ipsec_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm, + uint32_t size); + +/** + * cleanup SA + * @param sa + * Pointer to SA object to de-initialize. 
+ */ +void __rte_experimental +rte_ipsec_sa_fini(struct rte_ipsec_sa *sa); + +#ifdef __cplusplus +} +#endif + +#endif /* _RTE_IPSEC_SA_H_ */ diff --git a/lib/librte_ipsec/rte_ipsec_version.map b/lib/librte_ipsec/rte_ipsec_version.map new file mode 100644 index 000000000..1a66726b8 --- /dev/null +++ b/lib/librte_ipsec/rte_ipsec_version.map @@ -0,0 +1,10 @@ +EXPERIMENTAL { + global: + + rte_ipsec_sa_fini; + rte_ipsec_sa_init; + rte_ipsec_sa_size; + rte_ipsec_sa_type; + + local: *; +}; diff --git a/lib/librte_ipsec/sa.c b/lib/librte_ipsec/sa.c new file mode 100644 index 000000000..c814e5384 --- /dev/null +++ b/lib/librte_ipsec/sa.c @@ -0,0 +1,307 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2018 Intel Corporation + */ + +#include +#include +#include +#include + +#include "sa.h" +#include "ipsec_sqn.h" + +/* some helper structures */ +struct crypto_xform { + struct rte_crypto_auth_xform *auth; + struct rte_crypto_cipher_xform *cipher; + struct rte_crypto_aead_xform *aead; +}; + + +static int +check_crypto_xform(struct crypto_xform *xform) +{ + uintptr_t p; + + p = (uintptr_t)xform->auth | (uintptr_t)xform->cipher; + + /* either aead or both auth and cipher should be not NULLs */ + if (xform->aead) { + if (p) + return -EINVAL; + } else if (p == (uintptr_t)xform->auth) { + return -EINVAL; + } + + return 0; +} + +static int +fill_crypto_xform(struct crypto_xform *xform, + const struct rte_ipsec_sa_prm *prm) +{ + struct rte_crypto_sym_xform *xf; + + memset(xform, 0, sizeof(*xform)); + + for (xf = prm->crypto_xform; xf != NULL; xf = xf->next) { + if (xf->type == RTE_CRYPTO_SYM_XFORM_AUTH) { + if (xform->auth != NULL) + return -EINVAL; + xform->auth = &xf->auth; + } else if (xf->type == RTE_CRYPTO_SYM_XFORM_CIPHER) { + if (xform->cipher != NULL) + return -EINVAL; + xform->cipher = &xf->cipher; + } else if (xf->type == RTE_CRYPTO_SYM_XFORM_AEAD) { + if (xform->aead != NULL) + return -EINVAL; + xform->aead = &xf->aead; + } else + return -EINVAL; + } + + return check_crypto_xform(xform); +} + +uint64_t __rte_experimental +rte_ipsec_sa_type(const struct rte_ipsec_sa *sa) +{ + return sa->type; +} + +static int32_t +ipsec_sa_size(uint32_t wsz, uint64_t type, uint32_t *nb_bucket) +{ + uint32_t n, sz; + + n = 0; + if (wsz != 0 && (type & RTE_IPSEC_SATP_DIR_MASK) == + RTE_IPSEC_SATP_DIR_IB) + n = replay_num_bucket(wsz); + + if (n > WINDOW_BUCKET_MAX) + return -EINVAL; + + *nb_bucket = n; + + sz = rsn_size(n); + sz += sizeof(struct rte_ipsec_sa); + return sz; +} + +void __rte_experimental +rte_ipsec_sa_fini(struct rte_ipsec_sa *sa) +{ + memset(sa, 0, sa->size); +} + +static uint64_t +fill_sa_type(const struct rte_ipsec_sa_prm *prm) +{ + uint64_t tp; + + tp = 0; + + if (prm->ipsec_xform.proto == RTE_SECURITY_IPSEC_SA_PROTO_AH) + tp |= RTE_IPSEC_SATP_PROTO_AH; + else if (prm->ipsec_xform.proto == RTE_SECURITY_IPSEC_SA_PROTO_ESP) + tp |= RTE_IPSEC_SATP_PROTO_ESP; + + if (prm->ipsec_xform.direction == RTE_SECURITY_IPSEC_SA_DIR_EGRESS) + tp |= RTE_IPSEC_SATP_DIR_OB; + else + tp |= RTE_IPSEC_SATP_DIR_IB; + + if (prm->ipsec_xform.mode == RTE_SECURITY_IPSEC_SA_MODE_TUNNEL) { + if (prm->ipsec_xform.tunnel.type == + RTE_SECURITY_IPSEC_TUNNEL_IPV4) + tp |= RTE_IPSEC_SATP_MODE_TUNLV4; + else + tp |= RTE_IPSEC_SATP_MODE_TUNLV6; + + if (prm->tun.next_proto == IPPROTO_IPIP) + tp |= RTE_IPSEC_SATP_IPV4; + else if (prm->tun.next_proto == IPPROTO_IPV6) + tp |= RTE_IPSEC_SATP_IPV4; + } else { + tp |= RTE_IPSEC_SATP_MODE_TRANS; + if (prm->trs.proto == IPPROTO_IPIP) + tp |= RTE_IPSEC_SATP_IPV4; + else if 
(prm->trs.proto == IPPROTO_IPV6) + tp |= RTE_IPSEC_SATP_IPV4; + } + + return tp; +} + +static void +esp_inb_init(struct rte_ipsec_sa *sa) +{ + /* these params may differ with new algorithms support */ + sa->ctp.auth.offset = 0; + sa->ctp.auth.length = sa->icv_len - sa->sqh_len; + sa->ctp.cipher.offset = sizeof(struct esp_hdr) + sa->iv_len; + sa->ctp.cipher.length = sa->icv_len + sa->ctp.cipher.offset; +} + +static void +esp_inb_tun_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm) +{ + sa->proto = prm->tun.next_proto; + esp_inb_init(sa); +} + +static void +esp_outb_init(struct rte_ipsec_sa *sa, uint32_t hlen) +{ + sa->sqn.outb = 1; + + /* these params may differ with new algorithms support */ + sa->ctp.auth.offset = hlen; + sa->ctp.auth.length = sizeof(struct esp_hdr) + sa->iv_len + sa->sqh_len; + if (sa->aad_len != 0) { + sa->ctp.cipher.offset = hlen + sizeof(struct esp_hdr) + + sa->iv_len; + sa->ctp.cipher.length = 0; + } else { + sa->ctp.cipher.offset = sa->hdr_len + sizeof(struct esp_hdr); + sa->ctp.cipher.length = sa->iv_len; + } +} + +static void +esp_outb_tun_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm) +{ + sa->proto = prm->tun.next_proto; + sa->hdr_len = prm->tun.hdr_len; + sa->hdr_l3_off = prm->tun.hdr_l3_off; + memcpy(sa->hdr, prm->tun.hdr, sa->hdr_len); + + esp_outb_init(sa, sa->hdr_len); +} + +static int +esp_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm, + const struct crypto_xform *cxf) +{ + static const uint64_t msk = RTE_IPSEC_SATP_DIR_MASK | + RTE_IPSEC_SATP_MODE_MASK; + + if (cxf->aead != NULL) { + /* RFC 4106 */ + if (cxf->aead->algo != RTE_CRYPTO_AEAD_AES_GCM) + return -EINVAL; + sa->icv_len = cxf->aead->digest_length; + sa->iv_ofs = cxf->aead->iv.offset; + sa->iv_len = sizeof(uint64_t); + sa->pad_align = 4; + } else { + sa->icv_len = cxf->auth->digest_length; + sa->iv_ofs = cxf->cipher->iv.offset; + sa->sqh_len = IS_ESN(sa) ? 
sizeof(uint32_t) : 0; + if (cxf->cipher->algo == RTE_CRYPTO_CIPHER_NULL) { + sa->pad_align = 4; + sa->iv_len = 0; + } else if (cxf->cipher->algo == RTE_CRYPTO_CIPHER_AES_CBC) { + sa->pad_align = IPSEC_MAX_IV_SIZE; + sa->iv_len = IPSEC_MAX_IV_SIZE; + } else + return -EINVAL; + } + + sa->udata = prm->userdata; + sa->spi = rte_cpu_to_be_32(prm->ipsec_xform.spi); + sa->salt = prm->ipsec_xform.salt; + + switch (sa->type & msk) { + case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TUNLV4): + case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TUNLV6): + esp_inb_tun_init(sa, prm); + break; + case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TRANS): + esp_inb_init(sa); + break; + case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV4): + case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV6): + esp_outb_tun_init(sa, prm); + break; + case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TRANS): + esp_outb_init(sa, 0); + break; + } + + return 0; +} + +int __rte_experimental +rte_ipsec_sa_size(const struct rte_ipsec_sa_prm *prm) +{ + uint64_t type; + uint32_t nb; + + if (prm == NULL) + return -EINVAL; + + /* determine SA type */ + type = fill_sa_type(prm); + + /* determine required size */ + return ipsec_sa_size(prm->replay_win_sz, type, &nb); +} + +int __rte_experimental +rte_ipsec_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm, + uint32_t size) +{ + int32_t rc, sz; + uint32_t nb; + uint64_t type; + struct crypto_xform cxf; + + if (sa == NULL || prm == NULL) + return -EINVAL; + + /* determine SA type */ + type = fill_sa_type(prm); + + /* determine required size */ + sz = ipsec_sa_size(prm->replay_win_sz, type, &nb); + if (sz < 0) + return sz; + else if (size < (uint32_t)sz) + return -ENOSPC; + + /* only esp is supported right now */ + if (prm->ipsec_xform.proto != RTE_SECURITY_IPSEC_SA_PROTO_ESP) + return -EINVAL; + + if (prm->ipsec_xform.mode == RTE_SECURITY_IPSEC_SA_MODE_TUNNEL && + prm->tun.hdr_len > sizeof(sa->hdr)) + return -EINVAL; + + rc = fill_crypto_xform(&cxf, prm); + if (rc != 0) + return rc; + + sa->type = type; + sa->size = sz; + + /* check for ESN flag */ + sa->sqn_mask = (prm->ipsec_xform.options.esn == 0) ? 
+ UINT32_MAX : UINT64_MAX; + + rc = esp_sa_init(sa, prm, &cxf); + if (rc != 0) + rte_ipsec_sa_fini(sa); + + /* fill replay window related fields */ + if (nb != 0) { + sa->replay.win_sz = prm->replay_win_sz; + sa->replay.nb_bucket = nb; + sa->replay.bucket_index_mask = sa->replay.nb_bucket - 1; + sa->sqn.inb = (struct replay_sqn *)(sa + 1); + } + + return sz; +} diff --git a/lib/librte_ipsec/sa.h b/lib/librte_ipsec/sa.h new file mode 100644 index 000000000..5d113891a --- /dev/null +++ b/lib/librte_ipsec/sa.h @@ -0,0 +1,77 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2018 Intel Corporation + */ + +#ifndef _SA_H_ +#define _SA_H_ + +#define IPSEC_MAX_HDR_SIZE 64 +#define IPSEC_MAX_IV_SIZE 16 +#define IPSEC_MAX_IV_QWORD (IPSEC_MAX_IV_SIZE / sizeof(uint64_t)) + +/* these definitions probably has to be in rte_crypto_sym.h */ +union sym_op_ofslen { + uint64_t raw; + struct { + uint32_t offset; + uint32_t length; + }; +}; + +union sym_op_data { +#ifdef __SIZEOF_INT128__ + __uint128_t raw; +#endif + struct { + uint8_t *va; + rte_iova_t pa; + }; +}; + +struct replay_sqn { + uint64_t sqn; + __extension__ uint64_t window[0]; +}; + +struct rte_ipsec_sa { + uint64_t type; /* type of given SA */ + uint64_t udata; /* user defined */ + uint32_t size; /* size of given sa object */ + uint32_t spi; + /* sqn calculations related */ + uint64_t sqn_mask; + struct { + uint32_t win_sz; + uint16_t nb_bucket; + uint16_t bucket_index_mask; + } replay; + /* template for crypto op fields */ + struct { + union sym_op_ofslen cipher; + union sym_op_ofslen auth; + } ctp; + uint32_t salt; + uint8_t proto; /* next proto */ + uint8_t aad_len; + uint8_t hdr_len; + uint8_t hdr_l3_off; + uint8_t icv_len; + uint8_t sqh_len; + uint8_t iv_ofs; /* offset for algo-specific IV inside crypto op */ + uint8_t iv_len; + uint8_t pad_align; + + /* template for tunnel header */ + uint8_t hdr[IPSEC_MAX_HDR_SIZE]; + + /* + * sqn and replay window + */ + union { + uint64_t outb; + struct replay_sqn *inb; + } sqn; + +} __rte_cache_aligned; + +#endif /* _SA_H_ */ diff --git a/lib/meson.build b/lib/meson.build index bb7f443f9..69684ef14 100644 --- a/lib/meson.build +++ b/lib/meson.build @@ -22,6 +22,8 @@ libraries = [ 'compat', # just a header, used for versioning 'kni', 'latencystats', 'lpm', 'member', 'meter', 'power', 'pdump', 'rawdev', 'reorder', 'sched', 'security', 'vhost', + #ipsec lib depends on crypto and security + 'ipsec', # add pkt framework libs which use other libs from above 'port', 'table', 'pipeline', # flow_classify lib depends on pkt framework table lib diff --git a/mk/rte.app.mk b/mk/rte.app.mk index 5699d979d..f4cd75252 100644 --- a/mk/rte.app.mk +++ b/mk/rte.app.mk @@ -67,6 +67,8 @@ ifeq ($(CONFIG_RTE_LIBRTE_BPF_ELF),y) _LDLIBS-$(CONFIG_RTE_LIBRTE_BPF) += -lelf endif +_LDLIBS-$(CONFIG_RTE_LIBRTE_IPSEC) += -lrte_ipsec + _LDLIBS-y += --whole-archive _LDLIBS-$(CONFIG_RTE_LIBRTE_CFGFILE) += -lrte_cfgfile From patchwork Fri Nov 30 16:46:02 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Ananyev, Konstantin" X-Patchwork-Id: 48440 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 9646C1B583; Fri, 30 Nov 2018 17:46:43 +0100 (CET) Received: from mga17.intel.com (mga17.intel.com [192.55.52.151]) by dpdk.org (Postfix) with ESMTP id 312331B583 for ; Fri, 30 Nov 2018 17:46:30 +0100 (CET) 
X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga007.jf.intel.com ([10.7.209.58]) by fmsmga107.fm.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 30 Nov 2018 08:46:29 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.56,299,1539673200"; d="scan'208";a="94677637" Received: from sivswdev08.ir.intel.com (HELO localhost.localdomain) ([10.237.217.47]) by orsmga007.jf.intel.com with ESMTP; 30 Nov 2018 08:46:28 -0800 From: Konstantin Ananyev To: dev@dpdk.org Cc: Konstantin Ananyev , Mohammad Abdul Awal Date: Fri, 30 Nov 2018 16:46:02 +0000 Message-Id: <1543596366-22617-6-git-send-email-konstantin.ananyev@intel.com> X-Mailer: git-send-email 1.7.0.7 In-Reply-To: <1542326031-5263-2-git-send-email-konstantin.ananyev@intel.com> References: <1542326031-5263-2-git-send-email-konstantin.ananyev@intel.com> Subject: [dpdk-dev] [PATCH v2 5/9] ipsec: add SA data-path API X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Introduce Security Association (SA-level) data-path API Operates at SA level, provides functions to: - initialize/teardown SA object - process inbound/outbound ESP/AH packets associated with the given SA (decrypt/encrypt, authenticate, check integrity, add/remove ESP/AH related headers and data, etc.). Signed-off-by: Mohammad Abdul Awal Signed-off-by: Konstantin Ananyev --- lib/librte_ipsec/Makefile | 2 + lib/librte_ipsec/meson.build | 4 +- lib/librte_ipsec/rte_ipsec.h | 154 +++++++++++++++++++++++++ lib/librte_ipsec/rte_ipsec_version.map | 3 + lib/librte_ipsec/sa.c | 21 +++- lib/librte_ipsec/sa.h | 4 + lib/librte_ipsec/ses.c | 45 ++++++++ 7 files changed, 230 insertions(+), 3 deletions(-) create mode 100644 lib/librte_ipsec/rte_ipsec.h create mode 100644 lib/librte_ipsec/ses.c diff --git a/lib/librte_ipsec/Makefile b/lib/librte_ipsec/Makefile index 7758dcc6d..79f187fae 100644 --- a/lib/librte_ipsec/Makefile +++ b/lib/librte_ipsec/Makefile @@ -17,8 +17,10 @@ LIBABIVER := 1 # all source are stored in SRCS-y SRCS-$(CONFIG_RTE_LIBRTE_IPSEC) += sa.c +SRCS-$(CONFIG_RTE_LIBRTE_IPSEC) += ses.c # install header files +SYMLINK-$(CONFIG_RTE_LIBRTE_IPSEC)-include += rte_ipsec.h SYMLINK-$(CONFIG_RTE_LIBRTE_IPSEC)-include += rte_ipsec_sa.h include $(RTE_SDK)/mk/rte.lib.mk diff --git a/lib/librte_ipsec/meson.build b/lib/librte_ipsec/meson.build index 52c78eaeb..6e8c6fabe 100644 --- a/lib/librte_ipsec/meson.build +++ b/lib/librte_ipsec/meson.build @@ -3,8 +3,8 @@ allow_experimental_apis = true -sources=files('sa.c') +sources=files('sa.c', 'ses.c') -install_headers = files('rte_ipsec_sa.h') +install_headers = files('rte_ipsec.h', 'rte_ipsec_sa.h') deps += ['mbuf', 'net', 'cryptodev', 'security'] diff --git a/lib/librte_ipsec/rte_ipsec.h b/lib/librte_ipsec/rte_ipsec.h new file mode 100644 index 000000000..429d4bf38 --- /dev/null +++ b/lib/librte_ipsec/rte_ipsec.h @@ -0,0 +1,154 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2018 Intel Corporation + */ + +#ifndef _RTE_IPSEC_H_ +#define _RTE_IPSEC_H_ + +/** + * @file rte_ipsec.h + * @b EXPERIMENTAL: this API may change without prior notice + * + * RTE IPsec support. + * librte_ipsec provides a framework for data-path IPsec protocol + * processing (ESP/AH). + * IKEv2 protocol support right now is out of scope of that draft. 
+ * Though it tries to define the related API in such a way that it could be
+ * adopted by an IKEv2 implementation.
+ */
+
+#include
+#include
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+struct rte_ipsec_session;
+
+/**
+ * IPsec session specific functions that will be used to:
+ * - prepare - for input mbufs and given IPsec session prepare crypto ops
+ *   that can be enqueued into the cryptodev associated with given session
+ *   (see *rte_ipsec_pkt_crypto_prepare* below for more details).
+ * - process - finalize processing of packets after crypto-dev finished
+ *   with them or process packets that are subject to inline IPsec offload
+ *   (see rte_ipsec_pkt_process for more details).
+ */
+struct rte_ipsec_sa_pkt_func {
+	uint16_t (*prepare)(const struct rte_ipsec_session *ss,
+				struct rte_mbuf *mb[],
+				struct rte_crypto_op *cop[],
+				uint16_t num);
+	uint16_t (*process)(const struct rte_ipsec_session *ss,
+				struct rte_mbuf *mb[],
+				uint16_t num);
+};
+
+/**
+ * rte_ipsec_session is an aggregate structure that defines a particular
+ * IPsec Security Association (SA) on a given security/crypto device:
+ * - pointer to the SA object
+ * - security session action type
+ * - pointer to security/crypto session, plus other related data
+ * - session/device specific functions to prepare/process IPsec packets.
+ */
+struct rte_ipsec_session {
+
+	/**
+	 * SA that session belongs to.
+	 * Note that multiple sessions can belong to the same SA.
+	 */
+	struct rte_ipsec_sa *sa;
+	/** session action type */
+	enum rte_security_session_action_type type;
+	/** session and related data */
+	union {
+		struct {
+			struct rte_cryptodev_sym_session *ses;
+		} crypto;
+		struct {
+			struct rte_security_session *ses;
+			struct rte_security_ctx *ctx;
+			uint32_t ol_flags;
+		} security;
+	};
+	/** functions to prepare/process IPsec packets */
+	struct rte_ipsec_sa_pkt_func pkt_func;
+} __rte_cache_aligned;
+
+/**
+ * Checks that inside given rte_ipsec_session crypto/security fields
+ * are filled correctly and sets up function pointers based on these values.
+ * @param ss
+ *   Pointer to the *rte_ipsec_session* object
+ * @return
+ *   - Zero if operation completed successfully.
+ *   - -EINVAL if the parameters are invalid.
+ */
+int __rte_experimental
+rte_ipsec_session_prepare(struct rte_ipsec_session *ss);
+
+/**
+ * For input mbufs and given IPsec session prepare crypto ops that can be
+ * enqueued into the cryptodev associated with given session.
+ * Expects that for each input packet:
+ * - l2_len, l3_len are set up correctly
+ * Note that erroneous mbufs are not freed by the function,
+ * but are placed beyond the last valid mbuf in the *mb* array.
+ * It is the user's responsibility to handle them further.
+ * @param ss
+ *   Pointer to the *rte_ipsec_session* object the packets belong to.
+ * @param mb
+ *   The address of an array of *num* pointers to *rte_mbuf* structures
+ *   which contain the input packets.
+ * @param cop
+ *   The address of an array of *num* pointers to the output *rte_crypto_op*
+ *   structures.
+ * @param num
+ *   The maximum number of packets to process.
+ * @return
+ *   Number of successfully processed packets, with error code set in rte_errno.
+ */
+static inline uint16_t __rte_experimental
+rte_ipsec_pkt_crypto_prepare(const struct rte_ipsec_session *ss,
+	struct rte_mbuf *mb[], struct rte_crypto_op *cop[], uint16_t num)
+{
+	return ss->pkt_func.prepare(ss, mb, cop, num);
+}
+
+/**
+ * Finalise processing of packets after crypto-dev finished with them or
+ * process packets that are subject to inline IPsec offload.
+ * Expects that for each input packet:
+ * - l2_len, l3_len are set up correctly
+ * Output mbufs will be:
+ * inbound - decrypted & authenticated, ESP(AH) related headers removed,
+ * *l2_len* and *l3_len* fields are updated.
+ * outbound - appropriate mbuf fields (ol_flags, tx_offloads, etc.)
+ * properly set up, if necessary - IP headers updated, ESP(AH) fields added.
+ * Note that erroneous mbufs are not freed by the function,
+ * but are placed beyond the last valid mbuf in the *mb* array.
+ * It is the user's responsibility to handle them further.
+ * @param ss
+ *   Pointer to the *rte_ipsec_session* object the packets belong to.
+ * @param mb
+ *   The address of an array of *num* pointers to *rte_mbuf* structures
+ *   which contain the input packets.
+ * @param num
+ *   The maximum number of packets to process.
+ * @return
+ *   Number of successfully processed packets, with error code set in rte_errno.
+ */
+static inline uint16_t __rte_experimental
+rte_ipsec_pkt_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
+	uint16_t num)
+{
+	return ss->pkt_func.process(ss, mb, num);
+}
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_IPSEC_H_ */
diff --git a/lib/librte_ipsec/rte_ipsec_version.map b/lib/librte_ipsec/rte_ipsec_version.map
index 1a66726b8..d1c52d7ca 100644
--- a/lib/librte_ipsec/rte_ipsec_version.map
+++ b/lib/librte_ipsec/rte_ipsec_version.map
@@ -1,6 +1,9 @@
 EXPERIMENTAL {
 	global:
 
+	rte_ipsec_pkt_crypto_prepare;
+	rte_ipsec_session_prepare;
+	rte_ipsec_pkt_process;
 	rte_ipsec_sa_fini;
 	rte_ipsec_sa_init;
 	rte_ipsec_sa_size;
diff --git a/lib/librte_ipsec/sa.c b/lib/librte_ipsec/sa.c
index c814e5384..7f9baa602 100644
--- a/lib/librte_ipsec/sa.c
+++ b/lib/librte_ipsec/sa.c
@@ -2,7 +2,7 @@
  * Copyright(c) 2018 Intel Corporation
  */
 
-#include
+#include
 #include
 #include
 #include
@@ -305,3 +305,22 @@ rte_ipsec_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
 
 	return sz;
 }
+
+int
+ipsec_sa_pkt_func_select(const struct rte_ipsec_session *ss,
+	const struct rte_ipsec_sa *sa, struct rte_ipsec_sa_pkt_func *pf)
+{
+	int32_t rc;
+
+	RTE_SET_USED(sa);
+
+	rc = 0;
+	pf[0] = (struct rte_ipsec_sa_pkt_func) { 0 };
+
+	switch (ss->type) {
+	default:
+		rc = -ENOTSUP;
+	}
+
+	return rc;
+}
diff --git a/lib/librte_ipsec/sa.h b/lib/librte_ipsec/sa.h
index 5d113891a..050a6d7ae 100644
--- a/lib/librte_ipsec/sa.h
+++ b/lib/librte_ipsec/sa.h
@@ -74,4 +74,8 @@ struct rte_ipsec_sa {
 
 } __rte_cache_aligned;
 
+int
+ipsec_sa_pkt_func_select(const struct rte_ipsec_session *ss,
+	const struct rte_ipsec_sa *sa, struct rte_ipsec_sa_pkt_func *pf);
+
 #endif /* _SA_H_ */
diff --git a/lib/librte_ipsec/ses.c b/lib/librte_ipsec/ses.c
new file mode 100644
index 000000000..562c1423e
--- /dev/null
+++ b/lib/librte_ipsec/ses.c
@@ -0,0 +1,45 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#include
+#include "sa.h"
+
+static int
+session_check(struct rte_ipsec_session *ss)
+{
+	if (ss == NULL || ss->sa == NULL)
+		return -EINVAL;
+
+	if (ss->type == RTE_SECURITY_ACTION_TYPE_NONE) {
+		if (ss->crypto.ses == NULL)
+			return -EINVAL;
+	} else if (ss->security.ses == NULL || ss->security.ctx == NULL)
+		return -EINVAL;
+
+	return 0;
+}
+
+int __rte_experimental
+rte_ipsec_session_prepare(struct rte_ipsec_session *ss)
+{
+	int32_t rc;
+	struct rte_ipsec_sa_pkt_func fp;
+
+	rc = session_check(ss);
+	if (rc != 0)
+		return rc;
+
+	rc = ipsec_sa_pkt_func_select(ss, ss->sa, &fp);
+	if (rc != 0)
+		return rc;
+
+	ss->pkt_func = fp;
+
+	if (ss->type == RTE_SECURITY_ACTION_TYPE_NONE)
+		ss->crypto.ses->opaque_data = (uintptr_t)ss;
+	else
+		ss->security.ses->opaque_data = (uintptr_t)ss;
+
+	return 0;
+}

From patchwork Fri Nov 30 16:46:03 2018
X-Patchwork-Submitter: "Ananyev, Konstantin"
X-Patchwork-Id: 48442
X-Patchwork-Delegate: thomas@monjalon.net
From: Konstantin Ananyev
To: dev@dpdk.org
Cc: Konstantin Ananyev, Mohammad Abdul Awal
Date: Fri, 30 Nov 2018 16:46:03 +0000
Message-Id: <1543596366-22617-7-git-send-email-konstantin.ananyev@intel.com>
In-Reply-To: <1542326031-5263-2-git-send-email-konstantin.ananyev@intel.com>
References: <1542326031-5263-2-git-send-email-konstantin.ananyev@intel.com>
Subject: [dpdk-dev] [PATCH v2 6/9] ipsec: implement SA data-path API

Provide implementation for rte_ipsec_pkt_crypto_prepare() and
rte_ipsec_pkt_process().
Current implementation:
- supports ESP protocol tunnel mode.
- supports ESP protocol transport mode.
- supports ESN and replay window.
- supports algorithms: AES-CBC, AES-GCM, HMAC-SHA1, NULL.
- covers all currently defined security session types:
  - RTE_SECURITY_ACTION_TYPE_NONE
  - RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO
  - RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL
  - RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL
For the first two types SQN check/update is done by SW (inside the library).
For the last two types it is the HW/PMD responsibility.
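For illustration only (not part of this patch): a condensed sketch of the
intended RTE_SECURITY_ACTION_TYPE_NONE (lookaside crypto) call flow built
from the API above. The dev_id/qp_id variables, burst size and crypto-op
mempool are assumptions; the enqueue and dequeue halves are collapsed into
one synchronous function, and op free/error handling is omitted:

#include <rte_crypto.h>
#include <rte_cryptodev.h>
#include <rte_ipsec.h>
#include <rte_mbuf.h>

/*
 * Hypothetical helper; assumes ss->sa, ss->type and ss->crypto.ses are
 * filled in and rte_ipsec_session_prepare(ss) has already succeeded.
 */
static uint16_t
app_lksd_none_burst(struct rte_ipsec_session *ss, struct rte_mempool *cop_pool,
	uint8_t dev_id, uint16_t qp_id, struct rte_mbuf *mb[], uint16_t num)
{
	uint16_t k, n;
	struct rte_crypto_op *cop[num];

	/* one symmetric crypto op per packet */
	n = rte_crypto_op_bulk_alloc(cop_pool, RTE_CRYPTO_OP_TYPE_SYMMETRIC,
		cop, num);

	/* add ESP header/trailer, fill the ops; bad mbufs are moved past k */
	k = rte_ipsec_pkt_crypto_prepare(ss, mb, cop, n);

	/* hand the prepared ops to the crypto PMD... */
	k = rte_cryptodev_enqueue_burst(dev_id, qp_id, cop, k);

	/*
	 * ...and poll for completions; a real application does this
	 * asynchronously and recovers each mbuf from cop[i]->sym->m_src.
	 */
	k = rte_cryptodev_dequeue_burst(dev_id, qp_id, cop, k);

	/* strip/verify ESP and update l2_len/l3_len on processed packets */
	return rte_ipsec_pkt_process(ss, mb, k);
}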
Signed-off-by: Mohammad Abdul Awal Signed-off-by: Konstantin Ananyev --- lib/librte_ipsec/crypto.h | 123 ++++ lib/librte_ipsec/iph.h | 84 +++ lib/librte_ipsec/ipsec_sqn.h | 186 ++++++ lib/librte_ipsec/pad.h | 45 ++ lib/librte_ipsec/sa.c | 1044 +++++++++++++++++++++++++++++++++- 5 files changed, 1480 insertions(+), 2 deletions(-) create mode 100644 lib/librte_ipsec/crypto.h create mode 100644 lib/librte_ipsec/iph.h create mode 100644 lib/librte_ipsec/pad.h diff --git a/lib/librte_ipsec/crypto.h b/lib/librte_ipsec/crypto.h new file mode 100644 index 000000000..61f5c1433 --- /dev/null +++ b/lib/librte_ipsec/crypto.h @@ -0,0 +1,123 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2018 Intel Corporation + */ + +#ifndef _CRYPTO_H_ +#define _CRYPTO_H_ + +/** + * @file crypto.h + * Contains crypto specific functions/structures/macros used internally + * by ipsec library. + */ + + /* + * AES-GCM devices have some specific requirements for IV and AAD formats. + * Ideally that to be done by the driver itself. + */ + +struct aead_gcm_iv { + uint32_t salt; + uint64_t iv; + uint32_t cnt; +} __attribute__((packed)); + +struct aead_gcm_aad { + uint32_t spi; + /* + * RFC 4106, section 5: + * Two formats of the AAD are defined: + * one for 32-bit sequence numbers, and one for 64-bit ESN. + */ + union { + uint32_t u32[2]; + uint64_t u64; + } sqn; + uint32_t align0; /* align to 16B boundary */ +} __attribute__((packed)); + +struct gcm_esph_iv { + struct esp_hdr esph; + uint64_t iv; +} __attribute__((packed)); + + +static inline void +aead_gcm_iv_fill(struct aead_gcm_iv *gcm, uint64_t iv, uint32_t salt) +{ + gcm->salt = salt; + gcm->iv = iv; + gcm->cnt = rte_cpu_to_be_32(1); +} + +/* + * RFC 4106, 5 AAD Construction + * spi and sqn should already be converted into network byte order. + * Make sure that not used bytes are zeroed. + */ +static inline void +aead_gcm_aad_fill(struct aead_gcm_aad *aad, rte_be32_t spi, rte_be64_t sqn, + int esn) +{ + aad->spi = spi; + if (esn) + aad->sqn.u64 = sqn; + else { + aad->sqn.u32[0] = sqn_low32(sqn); + aad->sqn.u32[1] = 0; + } + aad->align0 = 0; +} + +static inline void +gen_iv(uint64_t iv[IPSEC_MAX_IV_QWORD], rte_be64_t sqn) +{ + iv[0] = sqn; + iv[1] = 0; +} + +/* + * from RFC 4303 3.3.2.1.4: + * If the ESN option is enabled for the SA, the high-order 32 + * bits of the sequence number are appended after the Next Header field + * for purposes of this computation, but are not transmitted. + */ + +/* + * Helper function that moves ICV by 4B below, and inserts SQN.hibits. + * icv parameter points to the new start of ICV. + */ +static inline void +insert_sqh(uint32_t sqh, void *picv, uint32_t icv_len) +{ + uint32_t *icv; + int32_t i; + + RTE_ASSERT(icv_len % sizeof(uint32_t) == 0); + + icv = picv; + icv_len = icv_len / sizeof(uint32_t); + for (i = icv_len; i-- != 0; icv[i] = icv[i - 1]) + ; + + icv[i] = sqh; +} + +/* + * Helper function that moves ICV by 4B up, and removes SQN.hibits. + * icv parameter points to the new start of ICV. 
+ */ +static inline void +remove_sqh(void *picv, uint32_t icv_len) +{ + uint32_t i, *icv; + + RTE_ASSERT(icv_len % sizeof(uint32_t) == 0); + + icv = picv; + icv_len = icv_len / sizeof(uint32_t); + for (i = 0; i != icv_len; i++) + icv[i] = icv[i + 1]; +} + +#endif /* _CRYPTO_H_ */ diff --git a/lib/librte_ipsec/iph.h b/lib/librte_ipsec/iph.h new file mode 100644 index 000000000..3fd93016d --- /dev/null +++ b/lib/librte_ipsec/iph.h @@ -0,0 +1,84 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2018 Intel Corporation + */ + +#ifndef _IPH_H_ +#define _IPH_H_ + +/** + * @file iph.h + * Contains functions/structures/macros to manipulate IPv/IPv6 headers + * used internally by ipsec library. + */ + +/* + * Move preceding (L3) headers down to remove ESP header and IV. + */ +static inline void +remove_esph(char *np, char *op, uint32_t hlen) +{ + uint32_t i; + + for (i = hlen; i-- != 0; np[i] = op[i]) + ; +} + +/* + * Move preceding (L3) headers up to free space for ESP header and IV. + */ +static inline void +insert_esph(char *np, char *op, uint32_t hlen) +{ + uint32_t i; + + for (i = 0; i != hlen; i++) + np[i] = op[i]; +} + +/* update original ip header fields for trasnport case */ +static inline int +update_trs_l3hdr(const struct rte_ipsec_sa *sa, void *p, uint32_t plen, + uint32_t l2len, uint32_t l3len, uint8_t proto) +{ + struct ipv4_hdr *v4h; + struct ipv6_hdr *v6h; + int32_t rc; + + if ((sa->type & RTE_IPSEC_SATP_IPV_MASK) == RTE_IPSEC_SATP_IPV4) { + v4h = p; + rc = v4h->next_proto_id; + v4h->next_proto_id = proto; + v4h->total_length = rte_cpu_to_be_16(plen - l2len); + } else if (l3len == sizeof(*v6h)) { + v6h = p; + rc = v6h->proto; + v6h->proto = proto; + v6h->payload_len = rte_cpu_to_be_16(plen - l2len - + sizeof(*v6h)); + /* need to add support for IPv6 with options */ + } else + rc = -ENOTSUP; + + return rc; +} + +/* update original and new ip header fields for tunnel case */ +static inline void +update_tun_l3hdr(const struct rte_ipsec_sa *sa, void *p, uint32_t plen, + uint32_t l2len, rte_be16_t pid) +{ + struct ipv4_hdr *v4h; + struct ipv6_hdr *v6h; + + if (sa->type & RTE_IPSEC_SATP_MODE_TUNLV4) { + v4h = p; + v4h->packet_id = pid; + v4h->total_length = rte_cpu_to_be_16(plen - l2len); + } else { + v6h = p; + v6h->payload_len = rte_cpu_to_be_16(plen - l2len - + sizeof(*v6h)); + } +} + +#endif /* _IPH_H_ */ diff --git a/lib/librte_ipsec/ipsec_sqn.h b/lib/librte_ipsec/ipsec_sqn.h index 4471814f9..a33ff9cca 100644 --- a/lib/librte_ipsec/ipsec_sqn.h +++ b/lib/librte_ipsec/ipsec_sqn.h @@ -15,6 +15,45 @@ #define IS_ESN(sa) ((sa)->sqn_mask == UINT64_MAX) +/* + * gets SQN.hi32 bits, SQN supposed to be in network byte order. + */ +static inline rte_be32_t +sqn_hi32(rte_be64_t sqn) +{ +#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN + return (sqn >> 32); +#else + return sqn; +#endif +} + +/* + * gets SQN.low32 bits, SQN supposed to be in network byte order. + */ +static inline rte_be32_t +sqn_low32(rte_be64_t sqn) +{ +#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN + return sqn; +#else + return (sqn >> 32); +#endif +} + +/* + * gets SQN.low16 bits, SQN supposed to be in network byte order. + */ +static inline rte_be16_t +sqn_low16(rte_be64_t sqn) +{ +#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN + return sqn; +#else + return (sqn >> 48); +#endif +} + /* * for given size, calculate required number of buckets. */ @@ -30,6 +69,153 @@ replay_num_bucket(uint32_t wsz) return nb; } +/* + * According to RFC4303 A2.1, determine the high-order bit of sequence number. + * use 32bit arithmetic inside, return uint64_t. 
+ */
+static inline uint64_t
+reconstruct_esn(uint64_t t, uint32_t sqn, uint32_t w)
+{
+	uint32_t th, tl, bl;
+
+	tl = t;
+	th = t >> 32;
+	bl = tl - w + 1;
+
+	/* case A: window is within one sequence number subspace */
+	if (tl >= (w - 1))
+		th += (sqn < bl);
+	/* case B: window spans two sequence number subspaces */
+	else if (th != 0)
+		th -= (sqn >= bl);
+
+	/* return constructed sequence with proper high-order bits */
+	return (uint64_t)th << 32 | sqn;
+}
+
+/**
+ * Perform the replay checking.
+ *
+ * struct rte_ipsec_sa contains the window and window related parameters,
+ * such as the window size, bitmask, and the last acknowledged sequence number.
+ *
+ * Based on RFC 6479.
+ * Blocks are 64 bits unsigned integers
+ */
+static inline int32_t
+esn_inb_check_sqn(const struct replay_sqn *rsn, const struct rte_ipsec_sa *sa,
+	uint64_t sqn)
+{
+	uint32_t bit, bucket;
+
+	/* replay not enabled */
+	if (sa->replay.win_sz == 0)
+		return 0;
+
+	/* seq is larger than lastseq */
+	if (sqn > rsn->sqn)
+		return 0;
+
+	/* seq is outside window */
+	if (sqn == 0 || sqn + sa->replay.win_sz < rsn->sqn)
+		return -EINVAL;
+
+	/* seq is inside the window */
+	bit = sqn & WINDOW_BIT_LOC_MASK;
+	bucket = (sqn >> WINDOW_BUCKET_BITS) & sa->replay.bucket_index_mask;
+
+	/* already seen packet */
+	if (rsn->window[bucket] & ((uint64_t)1 << bit))
+		return -EINVAL;
+
+	return 0;
+}
+
+/**
+ * For outbound SA perform the sequence number update.
+ */
+static inline uint64_t
+esn_outb_update_sqn(struct rte_ipsec_sa *sa, uint32_t *num)
+{
+	uint64_t n, s, sqn;
+
+	n = *num;
+	sqn = sa->sqn.outb + n;
+	sa->sqn.outb = sqn;
+
+	/* overflow */
+	if (sqn > sa->sqn_mask) {
+		s = sqn - sa->sqn_mask;
+		*num = (s < n) ? n - s : 0;
+	}
+
+	return sqn - n;
+}
+
+/**
+ * For inbound SA perform the sequence number and replay window update.
+ */
+static inline int32_t
+esn_inb_update_sqn(struct replay_sqn *rsn, const struct rte_ipsec_sa *sa,
+	uint64_t sqn)
+{
+	uint32_t bucket, last_bucket, new_bucket, diff, i;
+	uint64_t bit;
+
+	/* replay not enabled */
+	if (sa->replay.win_sz == 0)
+		return 0;
+
+	/* handle ESN */
+	if (IS_ESN(sa))
+		sqn = reconstruct_esn(rsn->sqn, sqn, sa->replay.win_sz);
+
+	/* seq is outside window */
+	if (sqn == 0 || sqn + sa->replay.win_sz < rsn->sqn)
+		return -EINVAL;
+
+	/* update the bit */
+	bucket = (sqn >> WINDOW_BUCKET_BITS);
+
+	/* check if the seq is within the range */
+	if (sqn > rsn->sqn) {
+		last_bucket = rsn->sqn >> WINDOW_BUCKET_BITS;
+		diff = bucket - last_bucket;
+		/* seq is way after the range of WINDOW_SIZE */
+		if (diff > sa->replay.nb_bucket)
+			diff = sa->replay.nb_bucket;
+
+		for (i = 0; i != diff; i++) {
+			new_bucket = (i + last_bucket + 1) &
+				sa->replay.bucket_index_mask;
+			rsn->window[new_bucket] = 0;
+		}
+		rsn->sqn = sqn;
+	}
+
+	bucket &= sa->replay.bucket_index_mask;
+	bit = (uint64_t)1 << (sqn & WINDOW_BIT_LOC_MASK);
+
+	/* already seen packet */
+	if (rsn->window[bucket] & bit)
+		return -EINVAL;
+
+	rsn->window[bucket] |= bit;
+	return 0;
+}
+
+/**
+ * To achieve the ability to do multiple readers single writer for
+ * SA replay window information and sequence number (RSN) a
+ * basic RCU schema is used:
+ * SA has 2 copies of RSN (one for readers, another for writers).
+ * Each RSN contains a rwlock that has to be grabbed (for read/write)
+ * to avoid races between readers and writer.
+ * The writer is responsible for making a copy of the reader RSN, updating it
+ * and marking the newly updated RSN as the readers one.
+ * That approach is intended to minimize contention and cache sharing + * between writer and readers. + */ + /** * Based on number of buckets calculated required size for the * structure that holds replay window and sequnce number (RSN) information. diff --git a/lib/librte_ipsec/pad.h b/lib/librte_ipsec/pad.h new file mode 100644 index 000000000..2f5ccd00e --- /dev/null +++ b/lib/librte_ipsec/pad.h @@ -0,0 +1,45 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2018 Intel Corporation + */ + +#ifndef _PAD_H_ +#define _PAD_H_ + +#define IPSEC_MAX_PAD_SIZE UINT8_MAX + +static const uint8_t esp_pad_bytes[IPSEC_MAX_PAD_SIZE] = { + 1, 2, 3, 4, 5, 6, 7, 8, + 9, 10, 11, 12, 13, 14, 15, 16, + 17, 18, 19, 20, 21, 22, 23, 24, + 25, 26, 27, 28, 29, 30, 31, 32, + 33, 34, 35, 36, 37, 38, 39, 40, + 41, 42, 43, 44, 45, 46, 47, 48, + 49, 50, 51, 52, 53, 54, 55, 56, + 57, 58, 59, 60, 61, 62, 63, 64, + 65, 66, 67, 68, 69, 70, 71, 72, + 73, 74, 75, 76, 77, 78, 79, 80, + 81, 82, 83, 84, 85, 86, 87, 88, + 89, 90, 91, 92, 93, 94, 95, 96, + 97, 98, 99, 100, 101, 102, 103, 104, + 105, 106, 107, 108, 109, 110, 111, 112, + 113, 114, 115, 116, 117, 118, 119, 120, + 121, 122, 123, 124, 125, 126, 127, 128, + 129, 130, 131, 132, 133, 134, 135, 136, + 137, 138, 139, 140, 141, 142, 143, 144, + 145, 146, 147, 148, 149, 150, 151, 152, + 153, 154, 155, 156, 157, 158, 159, 160, + 161, 162, 163, 164, 165, 166, 167, 168, + 169, 170, 171, 172, 173, 174, 175, 176, + 177, 178, 179, 180, 181, 182, 183, 184, + 185, 186, 187, 188, 189, 190, 191, 192, + 193, 194, 195, 196, 197, 198, 199, 200, + 201, 202, 203, 204, 205, 206, 207, 208, + 209, 210, 211, 212, 213, 214, 215, 216, + 217, 218, 219, 220, 221, 222, 223, 224, + 225, 226, 227, 228, 229, 230, 231, 232, + 233, 234, 235, 236, 237, 238, 239, 240, + 241, 242, 243, 244, 245, 246, 247, 248, + 249, 250, 251, 252, 253, 254, 255, +}; + +#endif /* _PAD_H_ */ diff --git a/lib/librte_ipsec/sa.c b/lib/librte_ipsec/sa.c index 7f9baa602..6643a3293 100644 --- a/lib/librte_ipsec/sa.c +++ b/lib/librte_ipsec/sa.c @@ -6,9 +6,13 @@ #include #include #include +#include #include "sa.h" #include "ipsec_sqn.h" +#include "crypto.h" +#include "iph.h" +#include "pad.h" /* some helper structures */ struct crypto_xform { @@ -192,6 +196,7 @@ esp_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm, /* RFC 4106 */ if (cxf->aead->algo != RTE_CRYPTO_AEAD_AES_GCM) return -EINVAL; + sa->aad_len = sizeof(struct aead_gcm_aad); sa->icv_len = cxf->aead->digest_length; sa->iv_ofs = cxf->aead->iv.offset; sa->iv_len = sizeof(uint64_t); @@ -306,18 +311,1053 @@ rte_ipsec_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm, return sz; } +static inline void +mbuf_bulk_copy(struct rte_mbuf *dst[], struct rte_mbuf * const src[], + uint32_t num) +{ + uint32_t i; + + for (i = 0; i != num; i++) + dst[i] = src[i]; +} + +static inline void +lksd_none_cop_prepare(const struct rte_ipsec_session *ss, + struct rte_mbuf *mb[], struct rte_crypto_op *cop[], uint16_t num) +{ + uint32_t i; + struct rte_crypto_sym_op *sop; + + for (i = 0; i != num; i++) { + sop = cop[i]->sym; + cop[i]->type = RTE_CRYPTO_OP_TYPE_SYMMETRIC; + cop[i]->status = RTE_CRYPTO_OP_STATUS_NOT_PROCESSED; + cop[i]->sess_type = RTE_CRYPTO_OP_WITH_SESSION; + sop->m_src = mb[i]; + __rte_crypto_sym_op_attach_sym_session(sop, ss->crypto.ses); + } +} + +static inline void +esp_outb_cop_prepare(struct rte_crypto_op *cop, + const struct rte_ipsec_sa *sa, const uint64_t ivp[IPSEC_MAX_IV_QWORD], + const union sym_op_data *icv, uint32_t 
hlen, uint32_t plen) +{ + struct rte_crypto_sym_op *sop; + struct aead_gcm_iv *gcm; + + /* fill sym op fields */ + sop = cop->sym; + + /* AEAD (AES_GCM) case */ + if (sa->aad_len != 0) { + sop->aead.data.offset = sa->ctp.cipher.offset + hlen; + sop->aead.data.length = sa->ctp.cipher.length + plen; + sop->aead.digest.data = icv->va; + sop->aead.digest.phys_addr = icv->pa; + sop->aead.aad.data = icv->va + sa->icv_len; + sop->aead.aad.phys_addr = icv->pa + sa->icv_len; + + /* fill AAD IV (located inside crypto op) */ + gcm = rte_crypto_op_ctod_offset(cop, struct aead_gcm_iv *, + sa->iv_ofs); + aead_gcm_iv_fill(gcm, ivp[0], sa->salt); + /* CRYPT+AUTH case */ + } else { + sop->cipher.data.offset = sa->ctp.cipher.offset + hlen; + sop->cipher.data.length = sa->ctp.cipher.length + plen; + sop->auth.data.offset = sa->ctp.auth.offset + hlen; + sop->auth.data.length = sa->ctp.auth.length + plen; + sop->auth.digest.data = icv->va; + sop->auth.digest.phys_addr = icv->pa; + } +} + +static inline int32_t +esp_outb_tun_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc, + const uint64_t ivp[IPSEC_MAX_IV_QWORD], struct rte_mbuf *mb, + union sym_op_data *icv) +{ + uint32_t clen, hlen, l2len, pdlen, pdofs, plen, tlen; + struct rte_mbuf *ml; + struct esp_hdr *esph; + struct esp_tail *espt; + char *ph, *pt; + uint64_t *iv; + + /* calculate extra header space required */ + hlen = sa->hdr_len + sa->iv_len + sizeof(*esph); + + /* size of ipsec protected data */ + l2len = mb->l2_len; + plen = mb->pkt_len - mb->l2_len; + + /* number of bytes to encrypt */ + clen = plen + sizeof(*espt); + clen = RTE_ALIGN_CEIL(clen, sa->pad_align); + + /* pad length + esp tail */ + pdlen = clen - plen; + tlen = pdlen + sa->icv_len; + + /* do append and prepend */ + ml = rte_pktmbuf_lastseg(mb); + if (tlen + sa->sqh_len + sa->aad_len > rte_pktmbuf_tailroom(ml)) + return -ENOSPC; + + /* prepend header */ + ph = rte_pktmbuf_prepend(mb, hlen - l2len); + if (ph == NULL) + return -ENOSPC; + + /* append tail */ + pdofs = ml->data_len; + ml->data_len += tlen; + mb->pkt_len += tlen; + pt = rte_pktmbuf_mtod_offset(ml, typeof(pt), pdofs); + + /* update pkt l2/l3 len */ + mb->l2_len = sa->hdr_l3_off; + mb->l3_len = sa->hdr_len - sa->hdr_l3_off; + + /* copy tunnel pkt header */ + rte_memcpy(ph, sa->hdr, sa->hdr_len); + + /* update original and new ip header fields */ + update_tun_l3hdr(sa, ph + sa->hdr_l3_off, mb->pkt_len, sa->hdr_l3_off, + sqn_low16(sqc)); + + /* update spi, seqn and iv */ + esph = (struct esp_hdr *)(ph + sa->hdr_len); + iv = (uint64_t *)(esph + 1); + rte_memcpy(iv, ivp, sa->iv_len); + + esph->spi = sa->spi; + esph->seq = sqn_low32(sqc); + + /* offset for ICV */ + pdofs += pdlen + sa->sqh_len; + + /* pad length */ + pdlen -= sizeof(*espt); + + /* copy padding data */ + rte_memcpy(pt, esp_pad_bytes, pdlen); + + /* update esp trailer */ + espt = (struct esp_tail *)(pt + pdlen); + espt->pad_len = pdlen; + espt->next_proto = sa->proto; + + icv->va = rte_pktmbuf_mtod_offset(ml, void *, pdofs); + icv->pa = rte_pktmbuf_iova_offset(ml, pdofs); + + return clen; +} + +/* + * for pure cryptodev (lookaside none) depending on SA settings, + * we might have to write some extra data to the packet. 
+ */ +static inline void +outb_pkt_xprepare(const struct rte_ipsec_sa *sa, rte_be64_t sqc, + const union sym_op_data *icv) +{ + uint32_t *psqh; + struct aead_gcm_aad *aad; + + /* insert SQN.hi between ESP trailer and ICV */ + if (sa->sqh_len != 0) { + psqh = (uint32_t *)(icv->va - sa->sqh_len); + psqh[0] = sqn_hi32(sqc); + } + + /* + * fill IV and AAD fields, if any (aad fields are placed after icv), + * right now we support only one AEAD algorithm: AES-GCM . + */ + if (sa->aad_len != 0) { + aad = (struct aead_gcm_aad *)(icv->va + sa->icv_len); + aead_gcm_aad_fill(aad, sa->spi, sqc, IS_ESN(sa)); + } +} + +static uint16_t +outb_tun_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[], + struct rte_crypto_op *cop[], uint16_t num) +{ + int32_t rc; + uint32_t i, k, n; + uint64_t sqn; + rte_be64_t sqc; + struct rte_ipsec_sa *sa; + union sym_op_data icv; + uint64_t iv[IPSEC_MAX_IV_QWORD]; + struct rte_mbuf *dr[num]; + + sa = ss->sa; + + n = num; + sqn = esn_outb_update_sqn(sa, &n); + if (n != num) + rte_errno = EOVERFLOW; + + k = 0; + for (i = 0; i != n; i++) { + + sqc = rte_cpu_to_be_64(sqn + i); + gen_iv(iv, sqc); + + /* try to update the packet itself */ + rc = esp_outb_tun_pkt_prepare(sa, sqc, iv, mb[i], &icv); + + /* success, setup crypto op */ + if (rc >= 0) { + mb[k] = mb[i]; + outb_pkt_xprepare(sa, sqc, &icv); + esp_outb_cop_prepare(cop[k], sa, iv, &icv, 0, rc); + k++; + /* failure, put packet into the death-row */ + } else { + dr[i - k] = mb[i]; + rte_errno = -rc; + } + } + + /* update cops */ + lksd_none_cop_prepare(ss, mb, cop, k); + + /* copy not prepared mbufs beyond good ones */ + if (k != num && k != 0) + mbuf_bulk_copy(mb + k, dr, num - k); + + return k; +} + +static inline int32_t +esp_outb_trs_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc, + const uint64_t ivp[IPSEC_MAX_IV_QWORD], struct rte_mbuf *mb, + uint32_t l2len, uint32_t l3len, union sym_op_data *icv) +{ + uint8_t np; + uint32_t clen, hlen, pdlen, pdofs, plen, tlen, uhlen; + struct rte_mbuf *ml; + struct esp_hdr *esph; + struct esp_tail *espt; + char *ph, *pt; + uint64_t *iv; + + uhlen = l2len + l3len; + plen = mb->pkt_len - uhlen; + + /* calculate extra header space required */ + hlen = sa->iv_len + sizeof(*esph); + + /* number of bytes to encrypt */ + clen = plen + sizeof(*espt); + clen = RTE_ALIGN_CEIL(clen, sa->pad_align); + + /* pad length + esp tail */ + pdlen = clen - plen; + tlen = pdlen + sa->icv_len; + + /* do append and insert */ + ml = rte_pktmbuf_lastseg(mb); + if (tlen + sa->sqh_len + sa->aad_len > rte_pktmbuf_tailroom(ml)) + return -ENOSPC; + + /* prepend space for ESP header */ + ph = rte_pktmbuf_prepend(mb, hlen); + if (ph == NULL) + return -ENOSPC; + + /* append tail */ + pdofs = ml->data_len; + ml->data_len += tlen; + mb->pkt_len += tlen; + pt = rte_pktmbuf_mtod_offset(ml, typeof(pt), pdofs); + + /* shift L2/L3 headers */ + insert_esph(ph, ph + hlen, uhlen); + + /* update ip header fields */ + np = update_trs_l3hdr(sa, ph + l2len, mb->pkt_len, l2len, l3len, + IPPROTO_ESP); + + /* update spi, seqn and iv */ + esph = (struct esp_hdr *)(ph + uhlen); + iv = (uint64_t *)(esph + 1); + rte_memcpy(iv, ivp, sa->iv_len); + + esph->spi = sa->spi; + esph->seq = sqn_low32(sqc); + + /* offset for ICV */ + pdofs += pdlen + sa->sqh_len; + + /* pad length */ + pdlen -= sizeof(*espt); + + /* copy padding data */ + rte_memcpy(pt, esp_pad_bytes, pdlen); + + /* update esp trailer */ + espt = (struct esp_tail *)(pt + pdlen); + espt->pad_len = pdlen; + espt->next_proto = np; + + icv->va = 
rte_pktmbuf_mtod_offset(ml, void *, pdofs); + icv->pa = rte_pktmbuf_iova_offset(ml, pdofs); + + return clen; +} + +static uint16_t +outb_trs_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[], + struct rte_crypto_op *cop[], uint16_t num) +{ + int32_t rc; + uint32_t i, k, n, l2, l3; + uint64_t sqn; + rte_be64_t sqc; + struct rte_ipsec_sa *sa; + union sym_op_data icv; + uint64_t iv[IPSEC_MAX_IV_QWORD]; + struct rte_mbuf *dr[num]; + + sa = ss->sa; + + n = num; + sqn = esn_outb_update_sqn(sa, &n); + if (n != num) + rte_errno = EOVERFLOW; + + k = 0; + for (i = 0; i != n; i++) { + + l2 = mb[i]->l2_len; + l3 = mb[i]->l3_len; + + sqc = rte_cpu_to_be_64(sqn + i); + gen_iv(iv, sqc); + + /* try to update the packet itself */ + rc = esp_outb_trs_pkt_prepare(sa, sqc, iv, mb[i], + l2, l3, &icv); + + /* success, setup crypto op */ + if (rc >= 0) { + mb[k] = mb[i]; + outb_pkt_xprepare(sa, sqc, &icv); + esp_outb_cop_prepare(cop[k], sa, iv, &icv, l2 + l3, rc); + k++; + /* failure, put packet into the death-row */ + } else { + dr[i - k] = mb[i]; + rte_errno = -rc; + } + } + + /* update cops */ + lksd_none_cop_prepare(ss, mb, cop, k); + + /* copy not prepared mbufs beyond good ones */ + if (k != num && k != 0) + mbuf_bulk_copy(mb + k, dr, num - k); + + return k; +} + +static inline int32_t +esp_inb_tun_cop_prepare(struct rte_crypto_op *cop, + const struct rte_ipsec_sa *sa, struct rte_mbuf *mb, + const union sym_op_data *icv, uint32_t pofs, uint32_t plen) +{ + struct rte_crypto_sym_op *sop; + struct aead_gcm_iv *gcm; + uint64_t *ivc, *ivp; + uint32_t clen; + + clen = plen - sa->ctp.cipher.length; + if ((int32_t)clen < 0 || (clen & (sa->pad_align - 1)) != 0) + return -EINVAL; + + /* fill sym op fields */ + sop = cop->sym; + + /* AEAD (AES_GCM) case */ + if (sa->aad_len != 0) { + sop->aead.data.offset = pofs + sa->ctp.cipher.offset; + sop->aead.data.length = clen; + sop->aead.digest.data = icv->va; + sop->aead.digest.phys_addr = icv->pa; + sop->aead.aad.data = icv->va + sa->icv_len; + sop->aead.aad.phys_addr = icv->pa + sa->icv_len; + + /* fill AAD IV (located inside crypto op) */ + gcm = rte_crypto_op_ctod_offset(cop, struct aead_gcm_iv *, + sa->iv_ofs); + ivp = rte_pktmbuf_mtod_offset(mb, uint64_t *, + pofs + sizeof(struct esp_hdr)); + aead_gcm_iv_fill(gcm, ivp[0], sa->salt); + /* CRYPT+AUTH case */ + } else { + sop->cipher.data.offset = pofs + sa->ctp.cipher.offset; + sop->cipher.data.length = clen; + sop->auth.data.offset = pofs + sa->ctp.auth.offset; + sop->auth.data.length = plen - sa->ctp.auth.length; + sop->auth.digest.data = icv->va; + sop->auth.digest.phys_addr = icv->pa; + + /* copy iv from the input packet to the cop */ + ivc = rte_crypto_op_ctod_offset(cop, uint64_t *, sa->iv_ofs); + ivp = rte_pktmbuf_mtod_offset(mb, uint64_t *, + pofs + sizeof(struct esp_hdr)); + rte_memcpy(ivc, ivp, sa->iv_len); + } + return 0; +} + +/* + * for pure cryptodev (lookaside none) depending on SA settings, + * we might have to write some extra data to the packet. + */ +static inline void +inb_pkt_xprepare(const struct rte_ipsec_sa *sa, rte_be64_t sqc, + const union sym_op_data *icv) +{ + struct aead_gcm_aad *aad; + + /* insert SQN.hi between ESP trailer and ICV */ + if (sa->sqh_len != 0) + insert_sqh(sqn_hi32(sqc), icv->va, sa->icv_len); + + /* + * fill AAD fields, if any (aad fields are placed after icv), + * right now we support only one AEAD algorithm: AES-GCM. 
+ */ + if (sa->aad_len != 0) { + aad = (struct aead_gcm_aad *)(icv->va + sa->icv_len); + aead_gcm_aad_fill(aad, sa->spi, sqc, IS_ESN(sa)); + } +} + +static inline int32_t +esp_inb_tun_pkt_prepare(const struct rte_ipsec_sa *sa, + const struct replay_sqn *rsn, struct rte_mbuf *mb, + uint32_t hlen, union sym_op_data *icv) +{ + int32_t rc; + uint64_t sqn; + uint32_t icv_ofs, plen; + struct rte_mbuf *ml; + struct esp_hdr *esph; + + esph = rte_pktmbuf_mtod_offset(mb, struct esp_hdr *, hlen); + + /* + * retrieve and reconstruct SQN, then check it, then + * convert it back into network byte order. + */ + sqn = rte_be_to_cpu_32(esph->seq); + if (IS_ESN(sa)) + sqn = reconstruct_esn(rsn->sqn, sqn, sa->replay.win_sz); + + rc = esn_inb_check_sqn(rsn, sa, sqn); + if (rc != 0) + return rc; + + sqn = rte_cpu_to_be_64(sqn); + + /* start packet manipulation */ + plen = mb->pkt_len; + plen = plen - hlen; + + ml = rte_pktmbuf_lastseg(mb); + icv_ofs = ml->data_len - sa->icv_len + sa->sqh_len; + + /* we have to allocate space for AAD somewhere, + * right now - just use free trailing space at the last segment. + * Would probably be more convenient to reserve space for AAD + * inside rte_crypto_op itself + * (again for IV space is already reserved inside cop). + */ + if (sa->aad_len + sa->sqh_len > rte_pktmbuf_tailroom(ml)) + return -ENOSPC; + + icv->va = rte_pktmbuf_mtod_offset(ml, void *, icv_ofs); + icv->pa = rte_pktmbuf_iova_offset(ml, icv_ofs); + + inb_pkt_xprepare(sa, sqn, icv); + return plen; +} + +static uint16_t +inb_pkt_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[], + struct rte_crypto_op *cop[], uint16_t num) +{ + int32_t rc; + uint32_t i, k, hl; + struct rte_ipsec_sa *sa; + struct replay_sqn *rsn; + union sym_op_data icv; + struct rte_mbuf *dr[num]; + + sa = ss->sa; + rsn = sa->sqn.inb; + + k = 0; + for (i = 0; i != num; i++) { + + hl = mb[i]->l2_len + mb[i]->l3_len; + rc = esp_inb_tun_pkt_prepare(sa, rsn, mb[i], hl, &icv); + if (rc >= 0) + rc = esp_inb_tun_cop_prepare(cop[k], sa, mb[i], &icv, + hl, rc); + + if (rc == 0) + mb[k++] = mb[i]; + else { + dr[i - k] = mb[i]; + rte_errno = -rc; + } + } + + /* update cops */ + lksd_none_cop_prepare(ss, mb, cop, k); + + /* copy not prepared mbufs beyond good ones */ + if (k != num && k != 0) + mbuf_bulk_copy(mb + k, dr, num - k); + + return k; +} + +static inline void +lksd_proto_cop_prepare(const struct rte_ipsec_session *ss, + struct rte_mbuf *mb[], struct rte_crypto_op *cop[], uint16_t num) +{ + uint32_t i; + struct rte_crypto_sym_op *sop; + + for (i = 0; i != num; i++) { + sop = cop[i]->sym; + cop[i]->type = RTE_CRYPTO_OP_TYPE_SYMMETRIC; + cop[i]->status = RTE_CRYPTO_OP_STATUS_NOT_PROCESSED; + cop[i]->sess_type = RTE_CRYPTO_OP_SECURITY_SESSION; + sop->m_src = mb[i]; + __rte_security_attach_session(sop, ss->security.ses); + } +} + +static uint16_t +lksd_proto_prepare(const struct rte_ipsec_session *ss, + struct rte_mbuf *mb[], struct rte_crypto_op *cop[], uint16_t num) +{ + lksd_proto_cop_prepare(ss, mb, cop, num); + return num; +} + +static inline int +esp_inb_tun_single_pkt_process(struct rte_ipsec_sa *sa, struct rte_mbuf *mb, + uint32_t *sqn) +{ + uint32_t hlen, icv_len, tlen; + struct esp_hdr *esph; + struct esp_tail *espt; + struct rte_mbuf *ml; + char *pd; + + if (mb->ol_flags & PKT_RX_SEC_OFFLOAD_FAILED) + return -EBADMSG; + + icv_len = sa->icv_len; + + ml = rte_pktmbuf_lastseg(mb); + espt = rte_pktmbuf_mtod_offset(ml, struct esp_tail *, + ml->data_len - icv_len - sizeof(*espt)); + + /* + * check padding and next proto. 
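+ * (pad bytes are expected to follow the 1, 2, 3, ... pattern of
+ * esp_pad_bytes);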
+ * return an error if something is wrong.
+ */
+	pd = (char *)espt - espt->pad_len;
+	if (espt->next_proto != sa->proto ||
+			memcmp(pd, esp_pad_bytes, espt->pad_len))
+		return -EINVAL;
+
+	/* cut off ICV, ESP tail and padding bytes */
+	tlen = icv_len + sizeof(*espt) + espt->pad_len;
+	ml->data_len -= tlen;
+	mb->pkt_len -= tlen;
+
+	/* cut off L2/L3 headers, ESP header and IV */
+	hlen = mb->l2_len + mb->l3_len;
+	esph = rte_pktmbuf_mtod_offset(mb, struct esp_hdr *, hlen);
+	rte_pktmbuf_adj(mb, hlen + sa->ctp.cipher.offset);
+
+	/* retrieve SQN for later check */
+	*sqn = rte_be_to_cpu_32(esph->seq);
+
+	/* reset mbuf metadata: L2/L3 len, packet type */
+	mb->packet_type = RTE_PTYPE_UNKNOWN;
+	mb->l2_len = 0;
+	mb->l3_len = 0;
+
+	/* clear the PKT_RX_SEC_OFFLOAD flag if set */
+	mb->ol_flags &= ~(mb->ol_flags & PKT_RX_SEC_OFFLOAD);
+	return 0;
+}
+
+static inline int
+esp_inb_trs_single_pkt_process(struct rte_ipsec_sa *sa, struct rte_mbuf *mb,
+	uint32_t *sqn)
+{
+	uint32_t hlen, icv_len, l2len, l3len, tlen;
+	struct esp_hdr *esph;
+	struct esp_tail *espt;
+	struct rte_mbuf *ml;
+	char *np, *op, *pd;
+
+	if (mb->ol_flags & PKT_RX_SEC_OFFLOAD_FAILED)
+		return -EBADMSG;
+
+	icv_len = sa->icv_len;
+
+	ml = rte_pktmbuf_lastseg(mb);
+	espt = rte_pktmbuf_mtod_offset(ml, struct esp_tail *,
+		ml->data_len - icv_len - sizeof(*espt));
+
+	/* check padding, return an error if something is wrong. */
+	pd = (char *)espt - espt->pad_len;
+	if (memcmp(pd, esp_pad_bytes, espt->pad_len))
+		return -EINVAL;
+
+	/* cut off ICV, ESP tail and padding bytes */
+	tlen = icv_len + sizeof(*espt) + espt->pad_len;
+	ml->data_len -= tlen;
+	mb->pkt_len -= tlen;
+
+	/* retrieve SQN for later check */
+	l2len = mb->l2_len;
+	l3len = mb->l3_len;
+	hlen = l2len + l3len;
+	op = rte_pktmbuf_mtod(mb, char *);
+	esph = (struct esp_hdr *)(op + hlen);
+	*sqn = rte_be_to_cpu_32(esph->seq);
+
+	/* cut off ESP header and IV, update L3 header */
+	np = rte_pktmbuf_adj(mb, sa->ctp.cipher.offset);
+	remove_esph(np, op, hlen);
+	update_trs_l3hdr(sa, np + l2len, mb->pkt_len, l2len, l3len,
+			espt->next_proto);
+
+	/* reset mbuf packet type */
+	mb->packet_type &= (RTE_PTYPE_L2_MASK | RTE_PTYPE_L3_MASK);
+
+	/* clear the PKT_RX_SEC_OFFLOAD flag if set */
+	mb->ol_flags &= ~(mb->ol_flags & PKT_RX_SEC_OFFLOAD);
+	return 0;
+}
+
+static inline uint16_t
+esp_inb_rsn_update(struct rte_ipsec_sa *sa, const uint32_t sqn[],
+	struct rte_mbuf *mb[], struct rte_mbuf *dr[], uint16_t num)
+{
+	uint32_t i, k;
+	struct replay_sqn *rsn;
+
+	rsn = sa->sqn.inb;
+
+	k = 0;
+	for (i = 0; i != num; i++) {
+		if (esn_inb_update_sqn(rsn, sa, sqn[i]) == 0)
+			mb[k++] = mb[i];
+		else
+			dr[i - k] = mb[i];
+	}
+
+	return k;
+}
+
+static uint16_t
+inb_tun_pkt_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
+	uint16_t num)
+{
+	uint32_t i, k;
+	struct rte_ipsec_sa *sa;
+	uint32_t sqn[num];
+	struct rte_mbuf *dr[num];
+
+	sa = ss->sa;
+
+	/* process packets, extract seq numbers */
+
+	k = 0;
+	for (i = 0; i != num; i++) {
+		/* good packet */
+		if (esp_inb_tun_single_pkt_process(sa, mb[i], sqn + k) == 0)
+			mb[k++] = mb[i];
+		/* bad packet, will drop from further processing */
+		else
+			dr[i - k] = mb[i];
+	}
+
+	/* update seq # and replay window */
+	k = esp_inb_rsn_update(sa, sqn, mb, dr + i - k, k);
+
+	/* handle unprocessed mbufs */
+	if (k != num) {
+		rte_errno = EBADMSG;
+		if (k != 0)
+			mbuf_bulk_copy(mb + k, dr, num - k);
+	}
+
+	return k;
+}
+
+static uint16_t
+inb_trs_pkt_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
+	uint16_t num)
+{
+	uint32_t i, k;
+	uint32_t sqn[num];
+	struct rte_ipsec_sa *sa;
+	struct rte_mbuf *dr[num];
+
+	sa = ss->sa;
+
+	/* process packets, extract seq numbers */
+
+	k = 0;
+	for (i = 0; i != num; i++) {
+		/* good packet */
+		if (esp_inb_trs_single_pkt_process(sa, mb[i], sqn + k) == 0)
+			mb[k++] = mb[i];
+		/* bad packet, will drop from further processing */
+		else
+			dr[i - k] = mb[i];
+	}
+
+	/* update seq # and replay window */
+	k = esp_inb_rsn_update(sa, sqn, mb, dr + i - k, k);
+
+	/* handle unprocessed mbufs */
+	if (k != num) {
+		rte_errno = EBADMSG;
+		if (k != 0)
+			mbuf_bulk_copy(mb + k, dr, num - k);
+	}
+
+	return k;
+}
+
+/*
+ * process outbound packets for SA with ESN support,
+ * for algorithms that require SQN.hibits to be implicitly included
+ * into digest computation.
+ * In that case we have to move ICV bytes back to their proper place.
+ */
+static uint16_t
+outb_sqh_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
+	uint16_t num)
+{
+	uint32_t i, k, icv_len, *icv;
+	struct rte_mbuf *ml;
+	struct rte_ipsec_sa *sa;
+	struct rte_mbuf *dr[num];
+
+	sa = ss->sa;
+
+	k = 0;
+	icv_len = sa->icv_len;
+
+	for (i = 0; i != num; i++) {
+		if ((mb[i]->ol_flags & PKT_RX_SEC_OFFLOAD_FAILED) == 0) {
+			ml = rte_pktmbuf_lastseg(mb[i]);
+			icv = rte_pktmbuf_mtod_offset(ml, void *,
+				ml->data_len - icv_len);
+			remove_sqh(icv, icv_len);
+			mb[k++] = mb[i];
+		} else
+			dr[i - k] = mb[i];
+	}
+
+	/* handle unprocessed mbufs */
+	if (k != num) {
+		rte_errno = EBADMSG;
+		if (k != 0)
+			mbuf_bulk_copy(mb + k, dr, num - k);
+	}
+
+	return k;
+}
+
+/*
+ * simplest pkt process routine:
+ * all actual processing is already done by HW/PMD,
+ * just check mbuf ol_flags.
+ * used for:
+ * - inbound for RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL
+ * - inbound/outbound for RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL
+ * - outbound for RTE_SECURITY_ACTION_TYPE_NONE when ESN is disabled
+ */
+static uint16_t
+pkt_flag_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
+	uint16_t num)
+{
+	uint32_t i, k;
+	struct rte_mbuf *dr[num];
+
+	RTE_SET_USED(ss);
+
+	k = 0;
+	for (i = 0; i != num; i++) {
+		if ((mb[i]->ol_flags & PKT_RX_SEC_OFFLOAD_FAILED) == 0)
+			mb[k++] = mb[i];
+		else
+			dr[i - k] = mb[i];
+	}
+
+	/* handle unprocessed mbufs */
+	if (k != num) {
+		rte_errno = EBADMSG;
+		if (k != 0)
+			mbuf_bulk_copy(mb + k, dr, num - k);
+	}
+
+	return k;
+}
+
+/*
+ * prepare packets for inline ipsec processing:
+ * set ol_flags and attach metadata.
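+ * (PKT_TX_SEC_OFFLOAD for every packet; PMD specific metadata only
+ * when the session requires it).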
+ */ +static inline void +inline_outb_mbuf_prepare(const struct rte_ipsec_session *ss, + struct rte_mbuf *mb[], uint16_t num) +{ + uint32_t i, ol_flags; + + ol_flags = ss->security.ol_flags & RTE_SECURITY_TX_OLOAD_NEED_MDATA; + for (i = 0; i != num; i++) { + + mb[i]->ol_flags |= PKT_TX_SEC_OFFLOAD; + if (ol_flags != 0) + rte_security_set_pkt_metadata(ss->security.ctx, + ss->security.ses, mb[i], NULL); + } +} + +static uint16_t +inline_outb_tun_pkt_process(const struct rte_ipsec_session *ss, + struct rte_mbuf *mb[], uint16_t num) +{ + int32_t rc; + uint32_t i, k, n; + uint64_t sqn; + rte_be64_t sqc; + struct rte_ipsec_sa *sa; + union sym_op_data icv; + uint64_t iv[IPSEC_MAX_IV_QWORD]; + struct rte_mbuf *dr[num]; + + sa = ss->sa; + + n = num; + sqn = esn_outb_update_sqn(sa, &n); + if (n != num) + rte_errno = EOVERFLOW; + + k = 0; + for (i = 0; i != n; i++) { + + sqc = rte_cpu_to_be_64(sqn + i); + gen_iv(iv, sqc); + + /* try to update the packet itself */ + rc = esp_outb_tun_pkt_prepare(sa, sqc, iv, mb[i], &icv); + + /* success, update mbuf fields */ + if (rc >= 0) + mb[k++] = mb[i]; + /* failure, put packet into the death-row */ + else { + dr[i - k] = mb[i]; + rte_errno = -rc; + } + } + + inline_outb_mbuf_prepare(ss, mb, k); + + /* copy not processed mbufs beyond good ones */ + if (k != num && k != 0) + mbuf_bulk_copy(mb + k, dr, num - k); + + return k; +} + +static uint16_t +inline_outb_trs_pkt_process(const struct rte_ipsec_session *ss, + struct rte_mbuf *mb[], uint16_t num) +{ + int32_t rc; + uint32_t i, k, n, l2, l3; + uint64_t sqn; + rte_be64_t sqc; + struct rte_ipsec_sa *sa; + union sym_op_data icv; + uint64_t iv[IPSEC_MAX_IV_QWORD]; + struct rte_mbuf *dr[num]; + + sa = ss->sa; + + n = num; + sqn = esn_outb_update_sqn(sa, &n); + if (n != num) + rte_errno = EOVERFLOW; + + k = 0; + for (i = 0; i != n; i++) { + + l2 = mb[i]->l2_len; + l3 = mb[i]->l3_len; + + sqc = rte_cpu_to_be_64(sqn + i); + gen_iv(iv, sqc); + + /* try to update the packet itself */ + rc = esp_outb_trs_pkt_prepare(sa, sqc, iv, mb[i], + l2, l3, &icv); + + /* success, update mbuf fields */ + if (rc >= 0) + mb[k++] = mb[i]; + /* failure, put packet into the death-row */ + else { + dr[i - k] = mb[i]; + rte_errno = -rc; + } + } + + inline_outb_mbuf_prepare(ss, mb, k); + + /* copy not processed mbufs beyond good ones */ + if (k != num && k != 0) + mbuf_bulk_copy(mb + k, dr, num - k); + + return k; +} + +/* + * outbound for RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL: + * actual processing is done by HW/PMD, just set flags and metadata. + */ +static uint16_t +outb_inline_proto_process(const struct rte_ipsec_session *ss, + struct rte_mbuf *mb[], uint16_t num) +{ + inline_outb_mbuf_prepare(ss, mb, num); + return num; +} + +static int +lksd_none_pkt_func_select(const struct rte_ipsec_sa *sa, + struct rte_ipsec_sa_pkt_func *pf) +{ + int32_t rc; + + static const uint64_t msk = RTE_IPSEC_SATP_DIR_MASK | + RTE_IPSEC_SATP_MODE_MASK; + + rc = 0; + switch (sa->type & msk) { + case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TUNLV4): + case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TUNLV6): + pf->prepare = inb_pkt_prepare; + pf->process = inb_tun_pkt_process; + break; + case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TRANS): + pf->prepare = inb_pkt_prepare; + pf->process = inb_trs_pkt_process; + break; + case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV4): + case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV6): + pf->prepare = outb_tun_prepare; + pf->process = (sa->sqh_len != 0) ? 
+ outb_sqh_process : pkt_flag_process; + break; + case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TRANS): + pf->prepare = outb_trs_prepare; + pf->process = (sa->sqh_len != 0) ? + outb_sqh_process : pkt_flag_process; + break; + default: + rc = -ENOTSUP; + } + + return rc; +} + +static int +inline_crypto_pkt_func_select(const struct rte_ipsec_sa *sa, + struct rte_ipsec_sa_pkt_func *pf) +{ + int32_t rc; + + static const uint64_t msk = RTE_IPSEC_SATP_DIR_MASK | + RTE_IPSEC_SATP_MODE_MASK; + + rc = 0; + switch (sa->type & msk) { + case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TUNLV4): + case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TUNLV6): + pf->process = inb_tun_pkt_process; + break; + case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TRANS): + pf->process = inb_trs_pkt_process; + break; + case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV4): + case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV6): + pf->process = inline_outb_tun_pkt_process; + break; + case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TRANS): + pf->process = inline_outb_trs_pkt_process; + break; + default: + rc = -ENOTSUP; + } + + return rc; +} + int ipsec_sa_pkt_func_select(const struct rte_ipsec_session *ss, const struct rte_ipsec_sa *sa, struct rte_ipsec_sa_pkt_func *pf) { int32_t rc; - RTE_SET_USED(sa); - rc = 0; pf[0] = (struct rte_ipsec_sa_pkt_func) { 0 }; switch (ss->type) { + case RTE_SECURITY_ACTION_TYPE_NONE: + rc = lksd_none_pkt_func_select(sa, pf); + break; + case RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO: + rc = inline_crypto_pkt_func_select(sa, pf); + break; + case RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL: + if ((sa->type & RTE_IPSEC_SATP_DIR_MASK) == + RTE_IPSEC_SATP_DIR_IB) + pf->process = pkt_flag_process; + else + pf->process = outb_inline_proto_process; + break; + case RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL: + pf->prepare = lksd_proto_prepare; + pf->process = pkt_flag_process; + break; default: rc = -ENOTSUP; } From patchwork Fri Nov 30 16:46:04 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Ananyev, Konstantin" X-Patchwork-Id: 48441 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 1B6C61B5AD; Fri, 30 Nov 2018 17:46:45 +0100 (CET) Received: from mga17.intel.com (mga17.intel.com [192.55.52.151]) by dpdk.org (Postfix) with ESMTP id A043F1B58A for ; Fri, 30 Nov 2018 17:46:33 +0100 (CET) X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga007.jf.intel.com ([10.7.209.58]) by fmsmga107.fm.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 30 Nov 2018 08:46:33 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.56,299,1539673200"; d="scan'208";a="94677645" Received: from sivswdev08.ir.intel.com (HELO localhost.localdomain) ([10.237.217.47]) by orsmga007.jf.intel.com with ESMTP; 30 Nov 2018 08:46:31 -0800 From: Konstantin Ananyev To: dev@dpdk.org Cc: Konstantin Ananyev Date: Fri, 30 Nov 2018 16:46:04 +0000 Message-Id: <1543596366-22617-8-git-send-email-konstantin.ananyev@intel.com> X-Mailer: git-send-email 1.7.0.7 In-Reply-To: <1542326031-5263-2-git-send-email-konstantin.ananyev@intel.com> References: <1542326031-5263-2-git-send-email-konstantin.ananyev@intel.com> Subject: [dpdk-dev] [PATCH v2 7/9] ipsec: rework SA replay window/SQN for MT environment X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list 
List-Id: DPDK patches and discussions 
List-Unsubscribe: ,
List-Archive: 
List-Post: 
List-Help: 
List-Subscribe: ,
Errors-To: dev-bounces@dpdk.org
Sender: "dev" 

With these changes functions:
- rte_ipsec_pkt_crypto_prepare
- rte_ipsec_pkt_process
can be safely used in MT environment, as long as the user can guarantee
that they obey multiple readers/single writer model for SQN+replay_window
operations.
To be more specific:
for outbound SA there are no restrictions.
for inbound SA the caller has to guarantee that at any given moment
only one thread is executing rte_ipsec_pkt_process() for given SA.
Note that it is the caller's responsibility to maintain correct order
of packets to be processed.

Signed-off-by: Konstantin Ananyev 
---
 lib/librte_ipsec/ipsec_sqn.h    | 113 +++++++++++++++++++++++++++++++-
 lib/librte_ipsec/rte_ipsec_sa.h |  27 ++++++++
 lib/librte_ipsec/sa.c           |  23 +++++--
 lib/librte_ipsec/sa.h           |  21 +++++-
 4 files changed, 176 insertions(+), 8 deletions(-)

diff --git a/lib/librte_ipsec/ipsec_sqn.h b/lib/librte_ipsec/ipsec_sqn.h
index a33ff9cca..ee5e35978 100644
--- a/lib/librte_ipsec/ipsec_sqn.h
+++ b/lib/librte_ipsec/ipsec_sqn.h
@@ -15,6 +15,8 @@
 
 #define IS_ESN(sa)	((sa)->sqn_mask == UINT64_MAX)
 
+#define SQN_ATOMIC(sa)	((sa)->type & RTE_IPSEC_SATP_SQN_ATOM)
+
 /*
  * gets SQN.hi32 bits, SQN supposed to be in network byte order.
  */
@@ -140,8 +142,12 @@ esn_outb_update_sqn(struct rte_ipsec_sa *sa, uint32_t *num)
 	uint64_t n, s, sqn;
 
 	n = *num;
-	sqn = sa->sqn.outb + n;
-	sa->sqn.outb = sqn;
+	if (SQN_ATOMIC(sa))
+		sqn = (uint64_t)rte_atomic64_add_return(&sa->sqn.outb.atom, n);
+	else {
+		sqn = sa->sqn.outb.raw + n;
+		sa->sqn.outb.raw = sqn;
+	}
 
 	/* overflow */
 	if (sqn > sa->sqn_mask) {
@@ -231,4 +237,107 @@ rsn_size(uint32_t nb_bucket)
 	return sz;
 }
 
+/**
+ * Copy replay window and SQN.
+ */
+static inline void
+rsn_copy(const struct rte_ipsec_sa *sa, uint32_t dst, uint32_t src)
+{
+	uint32_t i, n;
+	struct replay_sqn *d;
+	const struct replay_sqn *s;
+
+	d = sa->sqn.inb.rsn[dst];
+	s = sa->sqn.inb.rsn[src];
+
+	n = sa->replay.nb_bucket;
+
+	d->sqn = s->sqn;
+	for (i = 0; i != n; i++)
+		d->window[i] = s->window[i];
+}
+
+/**
+ * Get RSN for read-only access.
+ */
+static inline struct replay_sqn *
+rsn_acquire(struct rte_ipsec_sa *sa)
+{
+	uint32_t n;
+	struct replay_sqn *rsn;
+
+	n = sa->sqn.inb.rdidx;
+	rsn = sa->sqn.inb.rsn[n];
+
+	if (!SQN_ATOMIC(sa))
+		return rsn;
+
+	/* check there are no writers */
+	while (rte_rwlock_read_trylock(&rsn->rwl) < 0) {
+		rte_pause();
+		n = sa->sqn.inb.rdidx;
+		rsn = sa->sqn.inb.rsn[n];
+		rte_compiler_barrier();
+	}
+
+	return rsn;
+}
+
+/**
+ * Release read-only access for RSN.
+ */
+static inline void
+rsn_release(struct rte_ipsec_sa *sa, struct replay_sqn *rsn)
+{
+	if (SQN_ATOMIC(sa))
+		rte_rwlock_read_unlock(&rsn->rwl);
+}
+
+/**
+ * Start RSN update.
+ */
+static inline struct replay_sqn *
+rsn_update_start(struct rte_ipsec_sa *sa)
+{
+	uint32_t k, n;
+	struct replay_sqn *rsn;
+
+	n = sa->sqn.inb.wridx;
+
+	/* no active writers */
+	RTE_ASSERT(n == sa->sqn.inb.rdidx);
+
+	if (!SQN_ATOMIC(sa))
+		return sa->sqn.inb.rsn[n];
+
+	k = REPLAY_SQN_NEXT(n);
+	sa->sqn.inb.wridx = k;
+
+	rsn = sa->sqn.inb.rsn[k];
+	rte_rwlock_write_lock(&rsn->rwl);
+	rsn_copy(sa, k, n);
+
+	return rsn;
+}
+
+/**
+ * Finish RSN update.
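+ * Switch readers to the updated copy and release the write lock.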
+ */
+static inline void
+rsn_update_finish(struct rte_ipsec_sa *sa, struct replay_sqn *rsn)
+{
+	uint32_t n;
+
+	if (!SQN_ATOMIC(sa))
+		return;
+
+	n = sa->sqn.inb.wridx;
+	RTE_ASSERT(n != sa->sqn.inb.rdidx);
+	RTE_ASSERT(rsn - sa->sqn.inb.rsn == n);
+
+	rte_rwlock_write_unlock(&rsn->rwl);
+	sa->sqn.inb.rdidx = n;
+}
+
+
 #endif /* _IPSEC_SQN_H_ */
diff --git a/lib/librte_ipsec/rte_ipsec_sa.h b/lib/librte_ipsec/rte_ipsec_sa.h
index 4e36fd99b..35a0afec1 100644
--- a/lib/librte_ipsec/rte_ipsec_sa.h
+++ b/lib/librte_ipsec/rte_ipsec_sa.h
@@ -53,6 +53,27 @@ struct rte_ipsec_sa_prm {
 	 */
 };
 
+/**
+ * Indicates that SA will(/will not) need an 'atomic' access
+ * to sequence number and replay window.
+ * 'atomic' here means:
+ * functions:
+ *  - rte_ipsec_pkt_crypto_prepare
+ *  - rte_ipsec_pkt_process
+ * can be safely used in MT environment, as long as the user can guarantee
+ * that they obey multiple readers/single writer model for SQN+replay_window
+ * operations.
+ * To be more specific:
+ * for outbound SA there are no restrictions.
+ * for inbound SA the caller has to guarantee that at any given moment
+ * only one thread is executing rte_ipsec_pkt_process() for given SA.
+ * Note that it is the caller's responsibility to maintain correct order
+ * of packets to be processed.
+ * In other words - it is the caller's responsibility to serialize process()
+ * invocations.
+ */
+#define	RTE_IPSEC_SAFLAG_SQN_ATOM	(1ULL << 0)
+
 /**
  * SA type is a 64-bit value that contains the following information:
  * - IP version (IPv4/IPv6)
@@ -60,6 +81,7 @@ struct rte_ipsec_sa_prm {
  * - inbound/outbound
  * - mode (TRANSPORT/TUNNEL)
  * - for TUNNEL outer IP version (IPv4/IPv6)
+ * - are SA SQN operations 'atomic'
  * ...
  */
 
@@ -68,6 +90,7 @@ enum {
 	RTE_SATP_LOG_PROTO,
 	RTE_SATP_LOG_DIR,
 	RTE_SATP_LOG_MODE,
+	RTE_SATP_LOG_SQN = RTE_SATP_LOG_MODE + 2,
 	RTE_SATP_LOG_NUM
 };
 
@@ -88,6 +111,10 @@ enum {
 #define RTE_IPSEC_SATP_MODE_TUNLV4	(1ULL << RTE_SATP_LOG_MODE)
 #define RTE_IPSEC_SATP_MODE_TUNLV6	(2ULL << RTE_SATP_LOG_MODE)
 
+#define RTE_IPSEC_SATP_SQN_MASK		(1ULL << RTE_SATP_LOG_SQN)
+#define RTE_IPSEC_SATP_SQN_RAW		(0ULL << RTE_SATP_LOG_SQN)
+#define RTE_IPSEC_SATP_SQN_ATOM		(1ULL << RTE_SATP_LOG_SQN)
+
 /**
  * get type of given SA
  * @return
diff --git a/lib/librte_ipsec/sa.c b/lib/librte_ipsec/sa.c
index 6643a3293..2690d2619 100644
--- a/lib/librte_ipsec/sa.c
+++ b/lib/librte_ipsec/sa.c
@@ -90,6 +90,9 @@ ipsec_sa_size(uint32_t wsz, uint64_t type, uint32_t *nb_bucket)
 	*nb_bucket = n;
 
 	sz = rsn_size(n);
+	if ((type & RTE_IPSEC_SATP_SQN_MASK) == RTE_IPSEC_SATP_SQN_ATOM)
+		sz *= REPLAY_SQN_NUM;
+
 	sz += sizeof(struct rte_ipsec_sa);
 	return sz;
 }
@@ -136,6 +139,12 @@ fill_sa_type(const struct rte_ipsec_sa_prm *prm)
 		tp |= RTE_IPSEC_SATP_IPV4;
 	}
 
+	/* interpret flags */
+	if (prm->flags & RTE_IPSEC_SAFLAG_SQN_ATOM)
+		tp |= RTE_IPSEC_SATP_SQN_ATOM;
+	else
+		tp |= RTE_IPSEC_SATP_SQN_RAW;
+
 	return tp;
 }
 
@@ -159,7 +168,7 @@ esp_inb_tun_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm)
 static void
 esp_outb_init(struct rte_ipsec_sa *sa, uint32_t hlen)
 {
-	sa->sqn.outb = 1;
+	sa->sqn.outb.raw = 1;
 
 	/* these params may differ with new algorithms support */
 	sa->ctp.auth.offset = hlen;
@@ -305,7 +314,10 @@ rte_ipsec_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
 		sa->replay.win_sz = prm->replay_win_sz;
 		sa->replay.nb_bucket = nb;
 		sa->replay.bucket_index_mask = sa->replay.nb_bucket - 1;
-		sa->sqn.inb = (struct replay_sqn *)(sa + 1);
+		sa->sqn.inb.rsn[0] = (struct replay_sqn *)(sa + 1);
+		if ((type & RTE_IPSEC_SATP_SQN_MASK) == RTE_IPSEC_SATP_SQN_ATOM)
+			sa->sqn.inb.rsn[1] = (struct replay_sqn *)
+				((uintptr_t)sa->sqn.inb.rsn[0] + rsn_size(nb));
 	}
 
 	return sz;
@@ -804,7 +816,7 @@ inb_pkt_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
 	struct rte_mbuf *dr[num];
 
 	sa = ss->sa;
-	rsn = sa->sqn.inb;
+	rsn = rsn_acquire(sa);
 
 	k = 0;
 	for (i = 0; i != num; i++) {
@@ -823,6 +835,8 @@ inb_pkt_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
 		}
 	}
 
+	rsn_release(sa, rsn);
+
 	/* update cops */
 	lksd_none_cop_prepare(ss, mb, cop, k);
 
@@ -967,7 +981,7 @@ esp_inb_rsn_update(struct rte_ipsec_sa *sa, const uint32_t sqn[],
 	uint32_t i, k;
 	struct replay_sqn *rsn;
 
-	rsn = sa->sqn.inb;
+	rsn = rsn_update_start(sa);
 
 	k = 0;
 	for (i = 0; i != num; i++) {
@@ -977,6 +991,7 @@ esp_inb_rsn_update(struct rte_ipsec_sa *sa, const uint32_t sqn[],
 			dr[i - k] = mb[i];
 	}
 
+	rsn_update_finish(sa, rsn);
 	return k;
 }
 
diff --git a/lib/librte_ipsec/sa.h b/lib/librte_ipsec/sa.h
index 050a6d7ae..7dc9933f1 100644
--- a/lib/librte_ipsec/sa.h
+++ b/lib/librte_ipsec/sa.h
@@ -5,6 +5,8 @@
 #ifndef _SA_H_
 #define _SA_H_
 
+#include 
+
 #define IPSEC_MAX_HDR_SIZE	64
 #define IPSEC_MAX_IV_SIZE	16
 #define IPSEC_MAX_IV_QWORD	(IPSEC_MAX_IV_SIZE / sizeof(uint64_t))
@@ -28,7 +30,11 @@ union sym_op_data {
 	};
 };
 
+#define REPLAY_SQN_NUM		2
+#define REPLAY_SQN_NEXT(n)	((n) ^ 1)
+
 struct replay_sqn {
+	rte_rwlock_t rwl;
 	uint64_t sqn;
 	__extension__ uint64_t window[0];
 };
@@ -66,10 +72,21 @@ struct rte_ipsec_sa {
 
 	/*
 	 * sqn and replay window
+	 * In case of SA handled by multiple threads *sqn* cacheline
+	 * could be shared by multiple cores.
+	 * To minimise performance impact, we try to locate it in a separate
+	 * place from other frequently accessed data.
 	 */
 	union {
-		uint64_t outb;
-		struct replay_sqn *inb;
+		union {
+			rte_atomic64_t atom;
+			uint64_t raw;
+		} outb;
+		struct {
+			uint32_t rdidx; /* read index */
+			uint32_t wridx; /* write index */
+			struct replay_sqn *rsn[REPLAY_SQN_NUM];
+		} inb;
 	} sqn;
 
 } __rte_cache_aligned;

From patchwork Fri Nov 30 16:46:05 2018
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: "Ananyev, Konstantin" 
X-Patchwork-Id: 48443
X-Patchwork-Delegate: thomas@monjalon.net
Return-Path: 
X-Original-To: patchwork@dpdk.org
Delivered-To: patchwork@dpdk.org
Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 2443A1B587; Fri, 30 Nov 2018 17:46:50 +0100 (CET)
Received: from mga18.intel.com (mga18.intel.com [134.134.136.126]) by dpdk.org (Postfix) with ESMTP id 24D261B598 for ; Fri, 30 Nov 2018 17:46:34 +0100 (CET)
X-Amp-Result: SKIPPED(no attachment in message)
X-Amp-File-Uploaded: False
Received: from orsmga007.jf.intel.com ([10.7.209.58]) by orsmga106.jf.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 30 Nov 2018 08:46:33 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="5.56,299,1539673200"; d="scan'208";a="94677648"
Received: from sivswdev08.ir.intel.com (HELO localhost.localdomain) ([10.237.217.47]) by orsmga007.jf.intel.com with ESMTP; 30 Nov 2018 08:46:32 -0800
From: Konstantin Ananyev 
To: dev@dpdk.org
Cc: Konstantin Ananyev 
Date: Fri, 30 Nov 2018 16:46:05 +0000
Message-Id: <1543596366-22617-9-git-send-email-konstantin.ananyev@intel.com>
X-Mailer: git-send-email 1.7.0.7
In-Reply-To: <1542326031-5263-2-git-send-email-konstantin.ananyev@intel.com>
References: <1542326031-5263-2-git-send-email-konstantin.ananyev@intel.com>
Subject: [dpdk-dev] [PATCH v2 8/9] ipsec: helper functions to group completed crypto-ops
X-BeenThere: dev@dpdk.org
X-Mailman-Version: 2.1.15
Precedence: list
List-Id: DPDK patches and discussions 
List-Unsubscribe: ,
List-Archive: 
List-Post: 
List-Help: 
List-Subscribe: ,
Errors-To: dev-bounces@dpdk.org
Sender: "dev" 

Introduce helper functions to process completed crypto-ops
and group related packets by sessions they belong to.

Signed-off-by: Konstantin Ananyev 
---
 lib/librte_ipsec/Makefile              |   1 +
 lib/librte_ipsec/meson.build           |   2 +-
 lib/librte_ipsec/rte_ipsec.h           |   2 +
 lib/librte_ipsec/rte_ipsec_group.h     | 151 +++++++++++++++++++++++++
 lib/librte_ipsec/rte_ipsec_version.map |   2 +
 5 files changed, 157 insertions(+), 1 deletion(-)
 create mode 100644 lib/librte_ipsec/rte_ipsec_group.h

diff --git a/lib/librte_ipsec/Makefile b/lib/librte_ipsec/Makefile
index 79f187fae..98c52f388 100644
--- a/lib/librte_ipsec/Makefile
+++ b/lib/librte_ipsec/Makefile
@@ -21,6 +21,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_IPSEC) += ses.c
 
 # install header files
 SYMLINK-$(CONFIG_RTE_LIBRTE_IPSEC)-include += rte_ipsec.h
+SYMLINK-$(CONFIG_RTE_LIBRTE_IPSEC)-include += rte_ipsec_group.h
 SYMLINK-$(CONFIG_RTE_LIBRTE_IPSEC)-include += rte_ipsec_sa.h
 
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_ipsec/meson.build b/lib/librte_ipsec/meson.build
index 6e8c6fabe..d2427b809 100644
--- a/lib/librte_ipsec/meson.build
+++ b/lib/librte_ipsec/meson.build
@@ -5,6 +5,6 @@ allow_experimental_apis = true
 
 sources=files('sa.c', 'ses.c')
 
-install_headers = files('rte_ipsec.h', 'rte_ipsec_sa.h')
+install_headers = files('rte_ipsec.h', 'rte_ipsec_group.h', 'rte_ipsec_sa.h')
 
 deps += ['mbuf', 'net', 'cryptodev', 'security']
diff --git a/lib/librte_ipsec/rte_ipsec.h b/lib/librte_ipsec/rte_ipsec.h
index 429d4bf38..0df7ea907 100644
--- a/lib/librte_ipsec/rte_ipsec.h
+++ b/lib/librte_ipsec/rte_ipsec.h
@@ -147,6 +147,8 @@ rte_ipsec_pkt_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
 	return ss->pkt_func.process(ss, mb, num);
 }
 
+#include 
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/librte_ipsec/rte_ipsec_group.h b/lib/librte_ipsec/rte_ipsec_group.h
new file mode 100644
index 000000000..d264d7e78
--- /dev/null
+++ b/lib/librte_ipsec/rte_ipsec_group.h
@@ -0,0 +1,151 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#ifndef _RTE_IPSEC_GROUP_H_
+#define _RTE_IPSEC_GROUP_H_
+
+/**
+ * @file rte_ipsec_group.h
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * RTE IPsec support.
+ * It is not recommended to include this file directly,
+ * include <rte_ipsec.h> instead.
+ * Contains helper functions to process completed crypto-ops
+ * and group related packets by sessions they belong to.
+ */
+
+
+#ifdef __cplusplus
extern "C" {
+#endif
+
+/**
+ * Used to group mbufs by some id.
+ * See below for particular usage.
+ */
+struct rte_ipsec_group {
+	union {
+		uint64_t val;
+		void *ptr;
+	} id; /**< grouped by value */
+	struct rte_mbuf **m;  /**< start of the group */
+	uint32_t cnt;         /**< number of entries in the group */
+	int32_t rc;           /**< status code associated with the group */
+};
+
+/**
+ * Take crypto-op as an input and extract pointer to related ipsec session.
+ * @param cop
+ *   The address of an input *rte_crypto_op* structure.
+ * @return
+ *   The pointer to the related *rte_ipsec_session* structure,
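+ *   or NULL, if the session can not be determined (session-less crypto-op).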
+ */
+static inline __rte_experimental struct rte_ipsec_session *
+rte_ipsec_ses_from_crypto(const struct rte_crypto_op *cop)
+{
+	const struct rte_security_session *ss;
+	const struct rte_cryptodev_sym_session *cs;
+
+	if (cop->sess_type == RTE_CRYPTO_OP_SECURITY_SESSION) {
+		ss = cop->sym[0].sec_session;
+		return (void *)(uintptr_t)ss->opaque_data;
+	} else if (cop->sess_type == RTE_CRYPTO_OP_WITH_SESSION) {
+		cs = cop->sym[0].session;
+		return (void *)(uintptr_t)cs->opaque_data;
+	}
+	return NULL;
+}
+
+/**
+ * Take as input completed crypto ops, extract related mbufs
+ * and group them by the rte_ipsec_session they belong to.
+ * For each mbuf whose crypto-op wasn't completed successfully
+ * PKT_RX_SEC_OFFLOAD_FAILED will be raised in ol_flags.
+ * Note that mbufs with undetermined SA (session-less) are not freed
+ * by the function, but are placed beyond mbufs for the last valid group.
+ * It is the user's responsibility to handle them further.
+ * @param cop
+ *   The address of an array of *num* pointers to the input *rte_crypto_op*
+ *   structures.
+ * @param mb
+ *   The address of an array of *num* pointers to output *rte_mbuf* structures.
+ * @param grp
+ *   The address of an array of *num* output *rte_ipsec_group* structures.
+ * @param num
+ *   The maximum number of crypto-ops to process.
+ * @return
+ *   Number of filled elements in *grp* array.
+ */
+static inline uint16_t __rte_experimental
+rte_ipsec_pkt_crypto_group(const struct rte_crypto_op *cop[],
+	struct rte_mbuf *mb[], struct rte_ipsec_group grp[], uint16_t num)
+{
+	uint32_t i, j, k, n;
+	void *ns, *ps;
+	struct rte_mbuf *m, *dr[num];
+
+	j = 0;
+	k = 0;
+	n = 0;
+	ps = NULL;
+
+	for (i = 0; i != num; i++) {
+
+		m = cop[i]->sym[0].m_src;
+		ns = cop[i]->sym[0].session;
+
+		m->ol_flags |= PKT_RX_SEC_OFFLOAD;
+		if (cop[i]->status != RTE_CRYPTO_OP_STATUS_SUCCESS)
+			m->ol_flags |= PKT_RX_SEC_OFFLOAD_FAILED;
+
+		/* no valid session found */
+		if (ns == NULL) {
+			dr[k++] = m;
+			continue;
+		}
+
+		/* different SA */
+		if (ps != ns) {
+
+			/*
+			 * we already have an open group - finalise it,
+			 * then open a new one.
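+			 * (group boundaries are detected by a change of
+			 * the session pointer).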
+ */ + if (ps != NULL) { + grp[n].id.ptr = + rte_ipsec_ses_from_crypto(cop[i - 1]); + grp[n].cnt = mb + j - grp[n].m; + n++; + } + + /* start new group */ + grp[n].m = mb + j; + ps = ns; + } + + mb[j++] = m; + } + + /* finalise last group */ + if (ps != NULL) { + grp[n].id.ptr = rte_ipsec_ses_from_crypto(cop[i - 1]); + grp[n].cnt = mb + j - grp[n].m; + n++; + } + + /* copy mbufs with unknown session beyond recognised ones */ + if (k != 0 && k != num) { + for (i = 0; i != k; i++) + mb[j + i] = dr[i]; + } + + return n; +} + +#ifdef __cplusplus +} +#endif + +#endif /* _RTE_IPSEC_GROUP_H_ */ diff --git a/lib/librte_ipsec/rte_ipsec_version.map b/lib/librte_ipsec/rte_ipsec_version.map index d1c52d7ca..0f91fb134 100644 --- a/lib/librte_ipsec/rte_ipsec_version.map +++ b/lib/librte_ipsec/rte_ipsec_version.map @@ -1,6 +1,7 @@ EXPERIMENTAL { global: + rte_ipsec_pkt_crypto_group; rte_ipsec_pkt_crypto_prepare; rte_ipsec_session_prepare; rte_ipsec_pkt_process; @@ -8,6 +9,7 @@ EXPERIMENTAL { rte_ipsec_sa_init; rte_ipsec_sa_size; rte_ipsec_sa_type; + rte_ipsec_ses_from_crypto; local: *; }; From patchwork Fri Nov 30 16:46:06 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Ananyev, Konstantin" X-Patchwork-Id: 48444 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 69DD81B5C9; Fri, 30 Nov 2018 17:46:52 +0100 (CET) Received: from mga18.intel.com (mga18.intel.com [134.134.136.126]) by dpdk.org (Postfix) with ESMTP id BE5C61B5A2 for ; Fri, 30 Nov 2018 17:46:36 +0100 (CET) X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga007.jf.intel.com ([10.7.209.58]) by orsmga106.jf.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 30 Nov 2018 08:46:36 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.56,299,1539673200"; d="scan'208";a="94677654" Received: from sivswdev08.ir.intel.com (HELO localhost.localdomain) ([10.237.217.47]) by orsmga007.jf.intel.com with ESMTP; 30 Nov 2018 08:46:34 -0800 From: Konstantin Ananyev To: dev@dpdk.org Cc: Konstantin Ananyev , Mohammad Abdul Awal , Bernard Iremonger Date: Fri, 30 Nov 2018 16:46:06 +0000 Message-Id: <1543596366-22617-10-git-send-email-konstantin.ananyev@intel.com> X-Mailer: git-send-email 1.7.0.7 In-Reply-To: <1542326031-5263-2-git-send-email-konstantin.ananyev@intel.com> References: <1542326031-5263-2-git-send-email-konstantin.ananyev@intel.com> Subject: [dpdk-dev] [PATCH v2 9/9] test/ipsec: introduce functional test X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Create functional test for librte_ipsec. 
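
It exercises the library over the null cipher/auth algorithms with
different replay window sizes, ESN on/off, the RTE_IPSEC_SAFLAG_SQN_ATOM
flag, single and burst (32 packet) processing, and one or two SAs.

The lookaside-none datapath driven by the test follows, roughly, the
sequence below (a minimal sketch modelled on the test's crypto_ipsec()
helper; dev_id, qp, ss, mb, cop and num are placeholders and error
handling is omitted):

	uint16_t i, k, n, ng;
	struct rte_ipsec_group grp[BURST_SIZE];

	/* fill crypto ops for the given session and packets */
	k = rte_ipsec_pkt_crypto_prepare(&ss, mb, cop, num);

	/* run them through the crypto device */
	k = rte_cryptodev_enqueue_burst(dev_id, qp, cop, k);
	n = rte_cryptodev_dequeue_burst(dev_id, qp, cop, k);

	/* sort completed ops back into per-session groups of mbufs */
	ng = rte_ipsec_pkt_crypto_group(
		(const struct rte_crypto_op **)(uintptr_t)cop, mb, grp, n);

	/* finalise IPsec processing for each group */
	for (i = 0; i != ng; i++)
		rte_ipsec_pkt_process(grp[i].id.ptr, grp[i].m, grp[i].cnt);
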
Signed-off-by: Mohammad Abdul Awal Signed-off-by: Bernard Iremonger Signed-off-by: Konstantin Ananyev --- test/test/Makefile | 3 + test/test/meson.build | 3 + test/test/test_ipsec.c | 2209 ++++++++++++++++++++++++++++++++++++++++ 3 files changed, 2215 insertions(+) create mode 100644 test/test/test_ipsec.c diff --git a/test/test/Makefile b/test/test/Makefile index ab4fec34a..e7c8108f2 100644 --- a/test/test/Makefile +++ b/test/test/Makefile @@ -207,6 +207,9 @@ SRCS-$(CONFIG_RTE_LIBRTE_KVARGS) += test_kvargs.c SRCS-$(CONFIG_RTE_LIBRTE_BPF) += test_bpf.c +SRCS-$(CONFIG_RTE_LIBRTE_IPSEC) += test_ipsec.c +LDLIBS += -lrte_ipsec + CFLAGS += -DALLOW_EXPERIMENTAL_API CFLAGS += -O3 diff --git a/test/test/meson.build b/test/test/meson.build index 554e9945f..d4f689417 100644 --- a/test/test/meson.build +++ b/test/test/meson.build @@ -48,6 +48,7 @@ test_sources = files('commands.c', 'test_hash_perf.c', 'test_hash_readwrite_lf.c', 'test_interrupts.c', + 'test_ipsec.c', 'test_kni.c', 'test_kvargs.c', 'test_link_bonding.c', @@ -115,6 +116,7 @@ test_deps = ['acl', 'eventdev', 'flow_classify', 'hash', + 'ipsec', 'lpm', 'member', 'metrics', @@ -179,6 +181,7 @@ test_names = [ 'hash_readwrite_autotest', 'hash_readwrite_lf_autotest', 'interrupt_autotest', + 'ipsec_autotest', 'kni_autotest', 'kvargs_autotest', 'link_bonding_autotest', diff --git a/test/test/test_ipsec.c b/test/test/test_ipsec.c new file mode 100644 index 000000000..95a447174 --- /dev/null +++ b/test/test/test_ipsec.c @@ -0,0 +1,2209 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2018 Intel Corporation + */ + +#include + +#include +#include + +#include +#include +#include +#include +#include +#include +#include +#include + +#include +#include +#include +#include +#include +#include +#include +#include + +#include "test.h" +#include "test_cryptodev.h" + +#define VDEV_ARGS_SIZE 100 +#define MAX_NB_SESSIONS 100 +#define MAX_NB_SAS 2 +#define REPLAY_WIN_0 0 +#define REPLAY_WIN_32 32 +#define REPLAY_WIN_64 64 +#define REPLAY_WIN_128 128 +#define REPLAY_WIN_256 256 +#define DATA_64_BYTES 64 +#define DATA_80_BYTES 80 +#define DATA_100_BYTES 100 +#define ESN_ENABLED 1 +#define ESN_DISABLED 0 +#define INBOUND_SPI 7 +#define OUTBOUND_SPI 17 +#define BURST_SIZE 32 +#define REORDER_PKTS 1 + +struct user_params { + enum rte_crypto_sym_xform_type auth; + enum rte_crypto_sym_xform_type cipher; + enum rte_crypto_sym_xform_type aead; + + char auth_algo[128]; + char cipher_algo[128]; + char aead_algo[128]; +}; + +struct ipsec_testsuite_params { + struct rte_mempool *mbuf_pool; + struct rte_mempool *cop_mpool; + struct rte_mempool *session_mpool; + struct rte_cryptodev_config conf; + struct rte_cryptodev_qp_conf qp_conf; + + uint8_t valid_devs[RTE_CRYPTO_MAX_DEVS]; + uint8_t valid_dev_count; +}; + +struct ipsec_unitest_params { + struct rte_crypto_sym_xform cipher_xform; + struct rte_crypto_sym_xform auth_xform; + struct rte_crypto_sym_xform aead_xform; + struct rte_crypto_sym_xform *crypto_xforms; + + struct rte_security_ipsec_xform ipsec_xform; + + struct rte_ipsec_sa_prm sa_prm; + struct rte_ipsec_session ss[MAX_NB_SAS]; + + struct rte_crypto_op *cop[BURST_SIZE]; + + struct rte_mbuf *obuf[BURST_SIZE], *ibuf[BURST_SIZE], + *testbuf[BURST_SIZE]; + + uint8_t *digest; + uint16_t pkt_index; +}; + +struct ipsec_test_cfg { + uint32_t replay_win_sz; + uint32_t esn; + uint64_t flags; + size_t pkt_sz; + uint16_t num_pkts; + uint32_t reorder_pkts; +}; + +static const struct ipsec_test_cfg test_cfg[] = { + + {REPLAY_WIN_0, ESN_DISABLED, 0, DATA_64_BYTES, 1, 
0}, + {REPLAY_WIN_0, ESN_DISABLED, 0, DATA_80_BYTES, BURST_SIZE, + REORDER_PKTS}, + {REPLAY_WIN_32, ESN_ENABLED, 0, DATA_100_BYTES, 1, 0}, + {REPLAY_WIN_32, ESN_ENABLED, 0, DATA_100_BYTES, BURST_SIZE, + REORDER_PKTS}, + {REPLAY_WIN_64, ESN_ENABLED, 0, DATA_64_BYTES, 1, 0}, + {REPLAY_WIN_128, ESN_ENABLED, RTE_IPSEC_SAFLAG_SQN_ATOM, + DATA_80_BYTES, 1, 0}, + {REPLAY_WIN_256, ESN_DISABLED, 0, DATA_100_BYTES, 1, 0}, +}; + +static const int num_cfg = RTE_DIM(test_cfg); +static struct ipsec_testsuite_params testsuite_params = { NULL }; +static struct ipsec_unitest_params unittest_params; +static struct user_params uparams; + +static uint8_t global_key[128] = { 0 }; + +struct supported_cipher_algo { + const char *keyword; + enum rte_crypto_cipher_algorithm algo; + uint16_t iv_len; + uint16_t block_size; + uint16_t key_len; +}; + +struct supported_auth_algo { + const char *keyword; + enum rte_crypto_auth_algorithm algo; + uint16_t digest_len; + uint16_t key_len; + uint8_t key_not_req; +}; + +const struct supported_cipher_algo cipher_algos[] = { + { + .keyword = "null", + .algo = RTE_CRYPTO_CIPHER_NULL, + .iv_len = 0, + .block_size = 4, + .key_len = 0 + }, +}; + +const struct supported_auth_algo auth_algos[] = { + { + .keyword = "null", + .algo = RTE_CRYPTO_AUTH_NULL, + .digest_len = 0, + .key_len = 0, + .key_not_req = 1 + }, +}; + +static int +dummy_sec_create(void *device, struct rte_security_session_conf *conf, + struct rte_security_session *sess, struct rte_mempool *mp) +{ + RTE_SET_USED(device); + RTE_SET_USED(conf); + RTE_SET_USED(mp); + + sess->sess_private_data = NULL; + return 0; +} + +static int +dummy_sec_destroy(void *device, struct rte_security_session *sess) +{ + RTE_SET_USED(device); + RTE_SET_USED(sess); + return 0; +} + +static const struct rte_security_ops dummy_sec_ops = { + .session_create = dummy_sec_create, + .session_destroy = dummy_sec_destroy, +}; + +static struct rte_security_ctx dummy_sec_ctx = { + .ops = &dummy_sec_ops, +}; + +static const struct supported_cipher_algo * +find_match_cipher_algo(const char *cipher_keyword) +{ + size_t i; + + for (i = 0; i < RTE_DIM(cipher_algos); i++) { + const struct supported_cipher_algo *algo = + &cipher_algos[i]; + + if (strcmp(cipher_keyword, algo->keyword) == 0) + return algo; + } + + return NULL; +} + +static const struct supported_auth_algo * +find_match_auth_algo(const char *auth_keyword) +{ + size_t i; + + for (i = 0; i < RTE_DIM(auth_algos); i++) { + const struct supported_auth_algo *algo = + &auth_algos[i]; + + if (strcmp(auth_keyword, algo->keyword) == 0) + return algo; + } + + return NULL; +} + +static int +testsuite_setup(void) +{ + struct ipsec_testsuite_params *ts_params = &testsuite_params; + struct rte_cryptodev_info info; + uint32_t nb_devs, dev_id; + + memset(ts_params, 0, sizeof(*ts_params)); + + ts_params->mbuf_pool = rte_pktmbuf_pool_create( + "CRYPTO_MBUFPOOL", + NUM_MBUFS, MBUF_CACHE_SIZE, 0, MBUF_SIZE, + rte_socket_id()); + if (ts_params->mbuf_pool == NULL) { + RTE_LOG(ERR, USER1, "Can't create CRYPTO_MBUFPOOL\n"); + return TEST_FAILED; + } + + ts_params->cop_mpool = rte_crypto_op_pool_create( + "MBUF_CRYPTO_SYM_OP_POOL", + RTE_CRYPTO_OP_TYPE_SYMMETRIC, + NUM_MBUFS, MBUF_CACHE_SIZE, + DEFAULT_NUM_XFORMS * + sizeof(struct rte_crypto_sym_xform) + + MAXIMUM_IV_LENGTH, + rte_socket_id()); + if (ts_params->cop_mpool == NULL) { + RTE_LOG(ERR, USER1, "Can't create CRYPTO_OP_POOL\n"); + return TEST_FAILED; + } + + nb_devs = rte_cryptodev_count(); + if (nb_devs < 1) { + RTE_LOG(ERR, USER1, "No crypto devices found?\n"); + 
return TEST_FAILED;
+	}
+
+	ts_params->valid_devs[ts_params->valid_dev_count++] = 0;
+
+	/* Set up all the qps on the first of the valid devices found */
+	dev_id = ts_params->valid_devs[0];
+
+	rte_cryptodev_info_get(dev_id, &info);
+
+	ts_params->conf.nb_queue_pairs = info.max_nb_queue_pairs;
+	ts_params->conf.socket_id = SOCKET_ID_ANY;
+
+	unsigned int session_size =
+		rte_cryptodev_sym_get_private_session_size(dev_id);
+
+	/*
+	 * Create mempool with maximum number of sessions * 2,
+	 * to include the session headers
+	 */
+	if (info.sym.max_nb_sessions != 0 &&
+			info.sym.max_nb_sessions < MAX_NB_SESSIONS) {
+		RTE_LOG(ERR, USER1, "Device does not support "
+				"at least %u sessions\n",
+				MAX_NB_SESSIONS);
+		return TEST_FAILED;
+	}
+
+	ts_params->session_mpool = rte_mempool_create(
+				"test_sess_mp",
+				MAX_NB_SESSIONS * 2,
+				session_size,
+				0, 0, NULL, NULL, NULL,
+				NULL, SOCKET_ID_ANY,
+				0);
+
+	TEST_ASSERT_NOT_NULL(ts_params->session_mpool,
+			"session mempool allocation failed");
+
+	TEST_ASSERT_SUCCESS(rte_cryptodev_configure(dev_id,
+			&ts_params->conf),
+			"Failed to configure cryptodev %u with %u qps",
+			dev_id, ts_params->conf.nb_queue_pairs);
+
+	ts_params->qp_conf.nb_descriptors = DEFAULT_NUM_OPS_INFLIGHT;
+
+	TEST_ASSERT_SUCCESS(rte_cryptodev_queue_pair_setup(
+		dev_id, 0, &ts_params->qp_conf,
+		rte_cryptodev_socket_id(dev_id),
+		ts_params->session_mpool),
+		"Failed to setup queue pair %u on cryptodev %u",
+		0, dev_id);
+
+	return TEST_SUCCESS;
+}
+
+static void
+testsuite_teardown(void)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+
+	if (ts_params->mbuf_pool != NULL) {
+		RTE_LOG(DEBUG, USER1, "CRYPTO_MBUFPOOL count %u\n",
+		rte_mempool_avail_count(ts_params->mbuf_pool));
+		rte_mempool_free(ts_params->mbuf_pool);
+		ts_params->mbuf_pool = NULL;
+	}
+
+	if (ts_params->cop_mpool != NULL) {
+		RTE_LOG(DEBUG, USER1, "CRYPTO_OP_POOL count %u\n",
+		rte_mempool_avail_count(ts_params->cop_mpool));
+		rte_mempool_free(ts_params->cop_mpool);
+		ts_params->cop_mpool = NULL;
+	}
+
+	/* Free session mempools */
+	if (ts_params->session_mpool != NULL) {
+		rte_mempool_free(ts_params->session_mpool);
+		ts_params->session_mpool = NULL;
+	}
+}
+
+static int
+ut_setup(void)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+
+	/* Clear unit test parameters before running test */
+	memset(ut_params, 0, sizeof(*ut_params));
+
+	/* Reconfigure device to default parameters */
+	ts_params->conf.socket_id = SOCKET_ID_ANY;
+
+	/* Start the device */
+	TEST_ASSERT_SUCCESS(rte_cryptodev_start(ts_params->valid_devs[0]),
+			"Failed to start cryptodev %u",
+			ts_params->valid_devs[0]);
+
+	return TEST_SUCCESS;
+}
+
+static void
+ut_teardown(void)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	int i;
+
+	for (i = 0; i < BURST_SIZE; i++) {
+		/* free crypto operation structure */
+		if (ut_params->cop[i])
+			rte_crypto_op_free(ut_params->cop[i]);
+
+		/*
+		 * free mbuf - both obuf and ibuf are usually the same,
+		 * so a check that they point at the same address is necessary,
+		 * to avoid freeing the mbuf twice.
+ */ + if (ut_params->obuf[i]) { + rte_pktmbuf_free(ut_params->obuf[i]); + if (ut_params->ibuf[i] == ut_params->obuf[i]) + ut_params->ibuf[i] = 0; + ut_params->obuf[i] = 0; + } + if (ut_params->ibuf[i]) { + rte_pktmbuf_free(ut_params->ibuf[i]); + ut_params->ibuf[i] = 0; + } + + if (ut_params->testbuf[i]) { + rte_pktmbuf_free(ut_params->testbuf[i]); + ut_params->testbuf[i] = 0; + } + } + + if (ts_params->mbuf_pool != NULL) + RTE_LOG(DEBUG, USER1, "CRYPTO_MBUFPOOL count %u\n", + rte_mempool_avail_count(ts_params->mbuf_pool)); + + /* Stop the device */ + rte_cryptodev_stop(ts_params->valid_devs[0]); +} + +#define IPSEC_MAX_PAD_SIZE UINT8_MAX + +static const uint8_t esp_pad_bytes[IPSEC_MAX_PAD_SIZE] = { + 1, 2, 3, 4, 5, 6, 7, 8, + 9, 10, 11, 12, 13, 14, 15, 16, + 17, 18, 19, 20, 21, 22, 23, 24, + 25, 26, 27, 28, 29, 30, 31, 32, + 33, 34, 35, 36, 37, 38, 39, 40, + 41, 42, 43, 44, 45, 46, 47, 48, + 49, 50, 51, 52, 53, 54, 55, 56, + 57, 58, 59, 60, 61, 62, 63, 64, + 65, 66, 67, 68, 69, 70, 71, 72, + 73, 74, 75, 76, 77, 78, 79, 80, + 81, 82, 83, 84, 85, 86, 87, 88, + 89, 90, 91, 92, 93, 94, 95, 96, + 97, 98, 99, 100, 101, 102, 103, 104, + 105, 106, 107, 108, 109, 110, 111, 112, + 113, 114, 115, 116, 117, 118, 119, 120, + 121, 122, 123, 124, 125, 126, 127, 128, + 129, 130, 131, 132, 133, 134, 135, 136, + 137, 138, 139, 140, 141, 142, 143, 144, + 145, 146, 147, 148, 149, 150, 151, 152, + 153, 154, 155, 156, 157, 158, 159, 160, + 161, 162, 163, 164, 165, 166, 167, 168, + 169, 170, 171, 172, 173, 174, 175, 176, + 177, 178, 179, 180, 181, 182, 183, 184, + 185, 186, 187, 188, 189, 190, 191, 192, + 193, 194, 195, 196, 197, 198, 199, 200, + 201, 202, 203, 204, 205, 206, 207, 208, + 209, 210, 211, 212, 213, 214, 215, 216, + 217, 218, 219, 220, 221, 222, 223, 224, + 225, 226, 227, 228, 229, 230, 231, 232, + 233, 234, 235, 236, 237, 238, 239, 240, + 241, 242, 243, 244, 245, 246, 247, 248, + 249, 250, 251, 252, 253, 254, 255, +}; + +/* ***** data for tests ***** */ + +const char null_plain_data[] = + "Network Security People Have A Strange Sense Of Humor unlike Other " + "People who have a normal sense of humour"; + +const char null_encrypted_data[] = + "Network Security People Have A Strange Sense Of Humor unlike Other " + "People who have a normal sense of humour"; + +struct ipv4_hdr ipv4_outer = { + .version_ihl = IPVERSION << 4 | + sizeof(ipv4_outer) / IPV4_IHL_MULTIPLIER, + .time_to_live = IPDEFTTL, + .next_proto_id = IPPROTO_ESP, + .src_addr = IPv4(192, 168, 1, 100), + .dst_addr = IPv4(192, 168, 2, 100), +}; + +static struct rte_mbuf * +setup_test_string(struct rte_mempool *mpool, + const char *string, size_t len, uint8_t blocksize) +{ + struct rte_mbuf *m = rte_pktmbuf_alloc(mpool); + size_t t_len = len - (blocksize ? 
(len % blocksize) : 0); + + if (m) { + memset(m->buf_addr, 0, m->buf_len); + char *dst = rte_pktmbuf_append(m, t_len); + + if (!dst) { + rte_pktmbuf_free(m); + return NULL; + } + if (string != NULL) + rte_memcpy(dst, string, t_len); + else + memset(dst, 0, t_len); + } + + return m; +} + +static struct rte_mbuf * +setup_test_string_tunneled(struct rte_mempool *mpool, const char *string, + size_t len, uint32_t spi, uint32_t seq) +{ + struct rte_mbuf *m = rte_pktmbuf_alloc(mpool); + uint32_t hdrlen = sizeof(struct ipv4_hdr) + sizeof(struct esp_hdr); + uint32_t taillen = sizeof(struct esp_tail); + uint32_t t_len = len + hdrlen + taillen; + uint32_t padlen; + + struct esp_hdr esph = { + .spi = rte_cpu_to_be_32(spi), + .seq = rte_cpu_to_be_32(seq) + }; + + padlen = RTE_ALIGN(t_len, 4) - t_len; + t_len += padlen; + + struct esp_tail espt = { + .pad_len = padlen, + .next_proto = IPPROTO_IPIP, + }; + + if (m == NULL) + return NULL; + + memset(m->buf_addr, 0, m->buf_len); + char *dst = rte_pktmbuf_append(m, t_len); + + if (!dst) { + rte_pktmbuf_free(m); + return NULL; + } + /* copy outer IP and ESP header */ + ipv4_outer.total_length = rte_cpu_to_be_16(t_len); + ipv4_outer.packet_id = rte_cpu_to_be_16(seq); + rte_memcpy(dst, &ipv4_outer, sizeof(ipv4_outer)); + dst += sizeof(ipv4_outer); + m->l3_len = sizeof(ipv4_outer); + rte_memcpy(dst, &esph, sizeof(esph)); + dst += sizeof(esph); + + if (string != NULL) { + /* copy payload */ + rte_memcpy(dst, string, len); + dst += len; + /* copy pad bytes */ + rte_memcpy(dst, esp_pad_bytes, padlen); + dst += padlen; + /* copy ESP tail header */ + rte_memcpy(dst, &espt, sizeof(espt)); + } else + memset(dst, 0, t_len); + + return m; +} + +static int +check_cryptodev_capablity(const struct ipsec_unitest_params *ut, + uint8_t devid) +{ + struct rte_cryptodev_sym_capability_idx cap_idx; + const struct rte_cryptodev_symmetric_capability *cap; + int rc = -1; + + cap_idx.type = RTE_CRYPTO_SYM_XFORM_AUTH; + cap_idx.algo.auth = ut->auth_xform.auth.algo; + cap = rte_cryptodev_sym_capability_get(devid, &cap_idx); + + if (cap != NULL) { + rc = rte_cryptodev_sym_capability_check_auth(cap, + ut->auth_xform.auth.key.length, + ut->auth_xform.auth.digest_length, 0); + if (rc == 0) { + cap_idx.type = RTE_CRYPTO_SYM_XFORM_CIPHER; + cap_idx.algo.cipher = ut->cipher_xform.cipher.algo; + cap = rte_cryptodev_sym_capability_get(devid, &cap_idx); + if (cap != NULL) + rc = rte_cryptodev_sym_capability_check_cipher( + cap, + ut->cipher_xform.cipher.key.length, + ut->cipher_xform.cipher.iv.length); + } + } + + return rc; +} + +static int +create_dummy_sec_session(struct ipsec_unitest_params *ut, + struct rte_mempool *pool, uint32_t j) +{ + static struct rte_security_session_conf conf; + + ut->ss[j].security.ses = rte_security_session_create(&dummy_sec_ctx, + &conf, pool); + + if (ut->ss[j].security.ses == NULL) + return -ENOMEM; + + ut->ss[j].security.ctx = &dummy_sec_ctx; + ut->ss[j].security.ol_flags = 0; + return 0; +} + +static int +create_crypto_session(struct ipsec_unitest_params *ut, + struct rte_mempool *pool, const uint8_t crypto_dev[], + uint32_t crypto_dev_num, uint32_t j) +{ + int32_t rc; + uint32_t devnum, i; + struct rte_cryptodev_sym_session *s; + uint8_t devid[RTE_CRYPTO_MAX_DEVS]; + + /* check which cryptodevs support SA */ + devnum = 0; + for (i = 0; i < crypto_dev_num; i++) { + if (check_cryptodev_capablity(ut, crypto_dev[i]) == 0) + devid[devnum++] = crypto_dev[i]; + } + + if (devnum == 0) + return -ENODEV; + + s = rte_cryptodev_sym_session_create(pool); + if (s == NULL) 
+ return -ENOMEM; + + /* initialize SA crypto session for all supported devices */ + for (i = 0; i != devnum; i++) { + rc = rte_cryptodev_sym_session_init(devid[i], s, + ut->crypto_xforms, pool); + if (rc != 0) + break; + } + + if (i == devnum) { + ut->ss[j].crypto.ses = s; + return 0; + } + + /* failure, do cleanup */ + while (i-- != 0) + rte_cryptodev_sym_session_clear(devid[i], s); + + rte_cryptodev_sym_session_free(s); + return rc; +} + +static int +create_session(struct ipsec_unitest_params *ut, + struct rte_mempool *pool, const uint8_t crypto_dev[], + uint32_t crypto_dev_num, uint32_t j) +{ + if (ut->ss[j].type == RTE_SECURITY_ACTION_TYPE_NONE) + return create_crypto_session(ut, pool, crypto_dev, + crypto_dev_num, j); + else + return create_dummy_sec_session(ut, pool, j); +} + +static void +fill_crypto_xform(struct ipsec_unitest_params *ut_params, + const struct supported_auth_algo *auth_algo, + const struct supported_cipher_algo *cipher_algo) +{ + ut_params->auth_xform.type = RTE_CRYPTO_SYM_XFORM_AUTH; + ut_params->auth_xform.auth.algo = auth_algo->algo; + ut_params->auth_xform.auth.key.data = global_key; + ut_params->auth_xform.auth.key.length = auth_algo->key_len; + ut_params->auth_xform.auth.digest_length = auth_algo->digest_len; + + ut_params->auth_xform.auth.op = RTE_CRYPTO_AUTH_OP_VERIFY; + ut_params->auth_xform.next = &ut_params->cipher_xform; + + ut_params->cipher_xform.type = RTE_CRYPTO_SYM_XFORM_CIPHER; + ut_params->cipher_xform.cipher.algo = cipher_algo->algo; + ut_params->cipher_xform.cipher.key.data = global_key; + ut_params->cipher_xform.cipher.key.length = cipher_algo->key_len; + ut_params->cipher_xform.cipher.op = RTE_CRYPTO_CIPHER_OP_DECRYPT; + ut_params->cipher_xform.cipher.iv.offset = IV_OFFSET; + ut_params->cipher_xform.cipher.iv.length = cipher_algo->iv_len; + ut_params->cipher_xform.next = NULL; + + ut_params->crypto_xforms = &ut_params->auth_xform; +} + +static int +fill_ipsec_param(uint32_t replay_win_sz, uint64_t flags) +{ + struct ipsec_unitest_params *ut_params = &unittest_params; + struct rte_ipsec_sa_prm *prm = &ut_params->sa_prm; + const struct supported_auth_algo *auth_algo; + const struct supported_cipher_algo *cipher_algo; + + memset(prm, 0, sizeof(*prm)); + + prm->userdata = 1; + prm->flags = flags; + prm->replay_win_sz = replay_win_sz; + + /* setup ipsec xform */ + prm->ipsec_xform = ut_params->ipsec_xform; + prm->ipsec_xform.salt = (uint32_t)rte_rand(); + + /* setup tunnel related fields */ + prm->tun.hdr_len = sizeof(ipv4_outer); + prm->tun.next_proto = IPPROTO_IPIP; + prm->tun.hdr = &ipv4_outer; + + /* setup crypto section */ + if (uparams.aead != 0) { + /* TODO: will need to fill out with other test cases */ + } else { + if (uparams.auth == 0 && uparams.cipher == 0) + return TEST_FAILED; + + auth_algo = find_match_auth_algo(uparams.auth_algo); + cipher_algo = find_match_cipher_algo(uparams.cipher_algo); + + fill_crypto_xform(ut_params, auth_algo, cipher_algo); + } + + prm->crypto_xform = ut_params->crypto_xforms; + return TEST_SUCCESS; +} + +static int +create_sa(enum rte_security_session_action_type action_type, + uint32_t replay_win_sz, uint64_t flags, uint32_t j) +{ + struct ipsec_testsuite_params *ts = &testsuite_params; + struct ipsec_unitest_params *ut = &unittest_params; + size_t sz; + int rc; + + memset(&ut->ss[j], 0, sizeof(ut->ss[j])); + + rc = fill_ipsec_param(replay_win_sz, flags); + if (rc != 0) + return TEST_FAILED; + + /* create rte_ipsec_sa*/ + sz = rte_ipsec_sa_size(&ut->sa_prm); + TEST_ASSERT(sz > 0, "rte_ipsec_sa_size()
failed\n"); + + ut->ss[j].sa = rte_zmalloc(NULL, sz, RTE_CACHE_LINE_SIZE); + TEST_ASSERT_NOT_NULL(ut->ss[j].sa, + "failed to allocate memory for rte_ipsec_sa\n"); + + ut->ss[j].type = action_type; + rc = create_session(ut, ts->session_mpool, ts->valid_devs, + ts->valid_dev_count, j); + if (rc != 0) + return TEST_FAILED; + + rc = rte_ipsec_sa_init(ut->ss[j].sa, &ut->sa_prm, sz); + rc = (rc > 0 && (uint32_t)rc <= sz) ? 0 : -EINVAL; + + return rte_ipsec_session_prepare(&ut->ss[j]); +} + +static int +crypto_ipsec(uint16_t num_pkts) +{ + struct ipsec_testsuite_params *ts_params = &testsuite_params; + struct ipsec_unitest_params *ut_params = &unittest_params; + uint32_t k, ng; + struct rte_ipsec_group grp[1]; + + /* call crypto prepare */ + k = rte_ipsec_pkt_crypto_prepare(&ut_params->ss[0], ut_params->ibuf, + ut_params->cop, num_pkts); + if (k != num_pkts) { + RTE_LOG(ERR, USER1, "rte_ipsec_pkt_crypto_prepare fail\n"); + return TEST_FAILED; + } + k = rte_cryptodev_enqueue_burst(ts_params->valid_devs[0], 0, + ut_params->cop, num_pkts); + if (k != num_pkts) { + RTE_LOG(ERR, USER1, "rte_cryptodev_enqueue_burst fail\n"); + return TEST_FAILED; + } + + k = rte_cryptodev_dequeue_burst(ts_params->valid_devs[0], 0, + ut_params->cop, num_pkts); + if (k != num_pkts) { + RTE_LOG(ERR, USER1, "rte_cryptodev_dequeue_burst fail\n"); + return TEST_FAILED; + } + + ng = rte_ipsec_pkt_crypto_group( + (const struct rte_crypto_op **)(uintptr_t)ut_params->cop, + ut_params->obuf, grp, num_pkts); + if (ng != 1 || + grp[0].m[0] != ut_params->obuf[0] || + grp[0].cnt != num_pkts || + grp[0].id.ptr != &ut_params->ss[0]) { + RTE_LOG(ERR, USER1, "rte_ipsec_pkt_crypto_group fail\n"); + return TEST_FAILED; + } + + /* call crypto process */ + k = rte_ipsec_pkt_process(grp[0].id.ptr, grp[0].m, grp[0].cnt); + if (k != num_pkts) { + RTE_LOG(ERR, USER1, "rte_ipsec_pkt_process fail\n"); + return TEST_FAILED; + } + + return TEST_SUCCESS; +} + +static int +crypto_ipsec_2sa(void) +{ + struct ipsec_testsuite_params *ts_params = &testsuite_params; + struct ipsec_unitest_params *ut_params = &unittest_params; + struct rte_ipsec_group grp[BURST_SIZE]; + + uint32_t k, ng, i, r; + + for (i = 0; i < BURST_SIZE; i++) { + r = i % 2; + /* call crypto prepare */ + k = rte_ipsec_pkt_crypto_prepare(&ut_params->ss[r], + ut_params->ibuf + i, ut_params->cop + i, 1); + if (k != 1) { + RTE_LOG(ERR, USER1, + "rte_ipsec_pkt_crypto_prepare fail\n"); + return TEST_FAILED; + } + k = rte_cryptodev_enqueue_burst(ts_params->valid_devs[0], 0, + ut_params->cop + i, 1); + if (k != 1) { + RTE_LOG(ERR, USER1, + "rte_cryptodev_enqueue_burst fail\n"); + return TEST_FAILED; + } + } + + k = rte_cryptodev_dequeue_burst(ts_params->valid_devs[0], 0, + ut_params->cop, BURST_SIZE); + if (k != BURST_SIZE) { + RTE_LOG(ERR, USER1, "rte_cryptodev_dequeue_burst fail\n"); + return TEST_FAILED; + } + + ng = rte_ipsec_pkt_crypto_group( + (const struct rte_crypto_op **)(uintptr_t)ut_params->cop, + ut_params->obuf, grp, BURST_SIZE); + if (ng != BURST_SIZE) { + RTE_LOG(ERR, USER1, "rte_ipsec_pkt_crypto_group fail ng=%d\n", + ng); + return TEST_FAILED; + } + + /* call crypto process */ + for (i = 0; i < ng; i++) { + k = rte_ipsec_pkt_process(grp[i].id.ptr, grp[i].m, grp[i].cnt); + if (k != grp[i].cnt) { + RTE_LOG(ERR, USER1, "rte_ipsec_pkt_process fail\n"); + return TEST_FAILED; + } + } + return TEST_SUCCESS; +} + +#define PKT_4 4 +#define PKT_12 12 +#define PKT_21 21 + +static uint32_t +crypto_ipsec_4grp(uint32_t pkt_num) +{ + uint32_t sa_ind; + + /* group packets in 4 different size 
groups, two per SA: [0..3] and [12..20] go to SA 0, [4..11] and [21..31] go to SA 1 */ + if (pkt_num < PKT_4) + sa_ind = 0; + else if (pkt_num < PKT_12) + sa_ind = 1; + else if (pkt_num < PKT_21) + sa_ind = 0; + else + sa_ind = 1; + + return sa_ind; +} + +static uint32_t +crypto_ipsec_4grp_check_mbufs(uint32_t grp_ind, struct rte_ipsec_group *grp) +{ + struct ipsec_unitest_params *ut_params = &unittest_params; + uint32_t i, j; + uint32_t rc = 0; + + if (grp_ind == 0) { + for (i = 0, j = 0; i < PKT_4; i++, j++) + if (grp[grp_ind].m[i] != ut_params->obuf[j]) { + rc = TEST_FAILED; + break; + } + } else if (grp_ind == 1) { + for (i = 0, j = PKT_4; i < (PKT_12 - PKT_4); i++, j++) { + if (grp[grp_ind].m[i] != ut_params->obuf[j]) { + rc = TEST_FAILED; + break; + } + } + } else if (grp_ind == 2) { + for (i = 0, j = PKT_12; i < (PKT_21 - PKT_12); i++, j++) + if (grp[grp_ind].m[i] != ut_params->obuf[j]) { + rc = TEST_FAILED; + break; + } + } else if (grp_ind == 3) { + for (i = 0, j = PKT_21; i < (BURST_SIZE - PKT_21); i++, j++) + if (grp[grp_ind].m[i] != ut_params->obuf[j]) { + rc = TEST_FAILED; + break; + } + } else + rc = TEST_FAILED; + + return rc; +} + +static uint32_t +crypto_ipsec_4grp_check_cnt(uint32_t grp_ind, struct rte_ipsec_group *grp) +{ + uint32_t rc = 0; + + if (grp_ind == 0) { + if (grp[grp_ind].cnt != PKT_4) + rc = TEST_FAILED; + } else if (grp_ind == 1) { + if (grp[grp_ind].cnt != PKT_12 - PKT_4) + rc = TEST_FAILED; + } else if (grp_ind == 2) { + if (grp[grp_ind].cnt != PKT_21 - PKT_12) + rc = TEST_FAILED; + } else if (grp_ind == 3) { + if (grp[grp_ind].cnt != BURST_SIZE - PKT_21) + rc = TEST_FAILED; + } else + rc = TEST_FAILED; + + return rc; +} + +static int +crypto_ipsec_2sa_4grp(void) +{ + struct ipsec_testsuite_params *ts_params = &testsuite_params; + struct ipsec_unitest_params *ut_params = &unittest_params; + struct rte_ipsec_group grp[BURST_SIZE]; + uint32_t k, ng, i, j; + uint32_t rc = 0; + + for (i = 0; i < BURST_SIZE; i++) { + j = crypto_ipsec_4grp(i); + + /* call crypto prepare */ + k = rte_ipsec_pkt_crypto_prepare(&ut_params->ss[j], + ut_params->ibuf + i, ut_params->cop + i, 1); + if (k != 1) { + RTE_LOG(ERR, USER1, + "rte_ipsec_pkt_crypto_prepare fail\n"); + return TEST_FAILED; + } + k = rte_cryptodev_enqueue_burst(ts_params->valid_devs[0], 0, + ut_params->cop + i, 1); + if (k != 1) { + RTE_LOG(ERR, USER1, + "rte_cryptodev_enqueue_burst fail\n"); + return TEST_FAILED; + } + } + + k = rte_cryptodev_dequeue_burst(ts_params->valid_devs[0], 0, + ut_params->cop, BURST_SIZE); + if (k != BURST_SIZE) { + RTE_LOG(ERR, USER1, "rte_cryptodev_dequeue_burst fail\n"); + return TEST_FAILED; + } + + ng = rte_ipsec_pkt_crypto_group( + (const struct rte_crypto_op **)(uintptr_t)ut_params->cop, + ut_params->obuf, grp, BURST_SIZE); + if (ng != 4) { + RTE_LOG(ERR, USER1, "rte_ipsec_pkt_crypto_group fail ng=%u\n", + ng); + return TEST_FAILED; + } + + /* call crypto process */ + for (i = 0; i < ng; i++) { + k = rte_ipsec_pkt_process(grp[i].id.ptr, grp[i].m, grp[i].cnt); + if (k != grp[i].cnt) { + RTE_LOG(ERR, USER1, "rte_ipsec_pkt_process fail\n"); + return TEST_FAILED; + } + rc = crypto_ipsec_4grp_check_cnt(i, grp); + if (rc != 0) { + RTE_LOG(ERR, USER1, + "crypto_ipsec_4grp_check_cnt fail\n"); + return TEST_FAILED; + } + rc = crypto_ipsec_4grp_check_mbufs(i, grp); + if (rc != 0) { + RTE_LOG(ERR, USER1, + "crypto_ipsec_4grp_check_mbufs fail\n"); + return TEST_FAILED; + } + } + return TEST_SUCCESS; +} + +static void +test_ipsec_reorder_inb_pkt_burst(uint16_t num_pkts) +{ + struct ipsec_unitest_params *ut_params = &unittest_params; + struct rte_mbuf
*ibuf_tmp[BURST_SIZE]; + uint16_t j; + + /* reorder packets and create gaps in sequence numbers */ + static const uint32_t reorder[BURST_SIZE] = { + 24, 25, 26, 27, 28, 29, 30, 31, + 16, 17, 18, 19, 20, 21, 22, 23, + 8, 9, 10, 11, 12, 13, 14, 15, + 0, 1, 2, 3, 4, 5, 6, 7, + }; + + if (num_pkts != BURST_SIZE) + return; + + for (j = 0; j != BURST_SIZE; j++) + ibuf_tmp[j] = ut_params->ibuf[reorder[j]]; + + memcpy(ut_params->ibuf, ibuf_tmp, sizeof(ut_params->ibuf)); +} + +static int +test_ipsec_crypto_op_alloc(uint16_t num_pkts) +{ + struct ipsec_testsuite_params *ts_params = &testsuite_params; + struct ipsec_unitest_params *ut_params = &unittest_params; + int rc = 0; + uint16_t j; + + for (j = 0; j < num_pkts && rc == 0; j++) { + ut_params->cop[j] = rte_crypto_op_alloc(ts_params->cop_mpool, + RTE_CRYPTO_OP_TYPE_SYMMETRIC); + if (ut_params->cop[j] == NULL) { + RTE_LOG(ERR, USER1, + "Failed to allocate symmetric crypto op\n"); + rc = TEST_FAILED; + } + } + + return rc; +} + +static void +test_ipsec_dump_buffers(struct ipsec_unitest_params *ut_params, int i) +{ + uint16_t j = ut_params->pkt_index; + + printf("\ntest config: num %d\n", i); + printf(" replay_win_sz %u\n", test_cfg[i].replay_win_sz); + printf(" esn %u\n", test_cfg[i].esn); + printf(" flags 0x%" PRIx64 "\n", test_cfg[i].flags); + printf(" pkt_sz %zu\n", test_cfg[i].pkt_sz); + printf(" num_pkts %u\n\n", test_cfg[i].num_pkts); + + if (ut_params->ibuf[j]) { + printf("ibuf[%u] data:\n", j); + rte_pktmbuf_dump(stdout, ut_params->ibuf[j], + ut_params->ibuf[j]->data_len); + } + if (ut_params->obuf[j]) { + printf("obuf[%u] data:\n", j); + rte_pktmbuf_dump(stdout, ut_params->obuf[j], + ut_params->obuf[j]->data_len); + } + if (ut_params->testbuf[j]) { + printf("testbuf[%u] data:\n", j); + rte_pktmbuf_dump(stdout, ut_params->testbuf[j], + ut_params->testbuf[j]->data_len); + } +} + +static void +destroy_sa(uint32_t j) +{ + struct ipsec_unitest_params *ut = &unittest_params; + + rte_ipsec_sa_fini(ut->ss[j].sa); + rte_free(ut->ss[j].sa); + rte_cryptodev_sym_session_free(ut->ss[j].crypto.ses); + memset(&ut->ss[j], 0, sizeof(ut->ss[j])); +} + +static int +crypto_inb_burst_null_null_check(struct ipsec_unitest_params *ut_params, int i, + uint16_t num_pkts) +{ + uint16_t j; + + for (j = 0; j < num_pkts && num_pkts <= BURST_SIZE; j++) { + ut_params->pkt_index = j; + + /* compare the data buffers */ + TEST_ASSERT_BUFFERS_ARE_EQUAL(null_plain_data, + rte_pktmbuf_mtod(ut_params->obuf[j], void *), + test_cfg[i].pkt_sz, + "input and output data does not match\n"); + TEST_ASSERT_EQUAL(ut_params->obuf[j]->data_len, + ut_params->obuf[j]->pkt_len, + "data_len is not equal to pkt_len"); + TEST_ASSERT_EQUAL(ut_params->obuf[j]->data_len, + test_cfg[i].pkt_sz, + "data_len is not equal to input data"); + } + + return 0; +} + +static int +test_ipsec_crypto_inb_burst_null_null(int i) +{ + struct ipsec_testsuite_params *ts_params = &testsuite_params; + struct ipsec_unitest_params *ut_params = &unittest_params; + uint16_t num_pkts = test_cfg[i].num_pkts; + uint16_t j; + int rc; + + uparams.auth = RTE_CRYPTO_SYM_XFORM_AUTH; + uparams.cipher = RTE_CRYPTO_SYM_XFORM_CIPHER; + strcpy(uparams.auth_algo, "null"); + strcpy(uparams.cipher_algo, "null"); + + /* create rte_ipsec_sa */ + rc = create_sa(RTE_SECURITY_ACTION_TYPE_NONE, + test_cfg[i].replay_win_sz, test_cfg[i].flags, 0); + if (rc != 0) { + RTE_LOG(ERR, USER1, "rte_ipsec_sa_init failed, cfg %d\n", + i); + return TEST_FAILED; + } + + /* Generate test mbuf data */ + for (j = 0; j < num_pkts && rc == 0; j++) { + /* packet with
sequence number 0 is invalid */ + ut_params->ibuf[j] = setup_test_string_tunneled( + ts_params->mbuf_pool, null_encrypted_data, + test_cfg[i].pkt_sz, INBOUND_SPI, j + 1); + if (ut_params->ibuf[j] == NULL) + rc = TEST_FAILED; + } + + if (rc == 0) { + if (test_cfg[i].reorder_pkts) + test_ipsec_reorder_inb_pkt_burst(num_pkts); + rc = test_ipsec_crypto_op_alloc(num_pkts); + } + + if (rc == 0) { + /* call ipsec library api */ + rc = crypto_ipsec(num_pkts); + if (rc == 0) + rc = crypto_inb_burst_null_null_check( + ut_params, i, num_pkts); + else { + RTE_LOG(ERR, USER1, "crypto_ipsec failed, cfg %d\n", + i); + rc = TEST_FAILED; + } + } + + if (rc == TEST_FAILED) + test_ipsec_dump_buffers(ut_params, i); + + destroy_sa(0); + return rc; +} + +static int +test_ipsec_crypto_inb_burst_null_null_wrapper(void) +{ + int i; + int rc = 0; + struct ipsec_unitest_params *ut_params = &unittest_params; + + ut_params->ipsec_xform.spi = INBOUND_SPI; + ut_params->ipsec_xform.direction = RTE_SECURITY_IPSEC_SA_DIR_INGRESS; + ut_params->ipsec_xform.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP; + ut_params->ipsec_xform.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL; + ut_params->ipsec_xform.tunnel.type = RTE_SECURITY_IPSEC_TUNNEL_IPV4; + + for (i = 0; i < num_cfg && rc == 0; i++) { + ut_params->ipsec_xform.options.esn = test_cfg[i].esn; + rc = test_ipsec_crypto_inb_burst_null_null(i); + } + + return rc; +} + +static int +crypto_outb_burst_null_null_check(struct ipsec_unitest_params *ut_params, + uint16_t num_pkts) +{ + void *obuf_data; + void *testbuf_data; + uint16_t j; + + for (j = 0; j < num_pkts && num_pkts <= BURST_SIZE; j++) { + ut_params->pkt_index = j; + + testbuf_data = rte_pktmbuf_mtod(ut_params->testbuf[j], void *); + obuf_data = rte_pktmbuf_mtod(ut_params->obuf[j], void *); + /* compare the buffer data */ + TEST_ASSERT_BUFFERS_ARE_EQUAL(testbuf_data, obuf_data, + ut_params->obuf[j]->pkt_len, + "test and output data does not match\n"); + TEST_ASSERT_EQUAL(ut_params->obuf[j]->data_len, + ut_params->testbuf[j]->data_len, + "obuf data_len is not equal to testbuf data_len"); + TEST_ASSERT_EQUAL(ut_params->obuf[j]->pkt_len, + ut_params->testbuf[j]->pkt_len, + "obuf pkt_len is not equal to testbuf pkt_len"); + } + + return 0; +} + +static int +test_ipsec_crypto_outb_burst_null_null(int i) +{ + struct ipsec_testsuite_params *ts_params = &testsuite_params; + struct ipsec_unitest_params *ut_params = &unittest_params; + uint16_t num_pkts = test_cfg[i].num_pkts; + uint16_t j; + int32_t rc; + + uparams.auth = RTE_CRYPTO_SYM_XFORM_AUTH; + uparams.cipher = RTE_CRYPTO_SYM_XFORM_CIPHER; + strcpy(uparams.auth_algo, "null"); + strcpy(uparams.cipher_algo, "null"); + + /* create rte_ipsec_sa*/ + rc = create_sa(RTE_SECURITY_ACTION_TYPE_NONE, + test_cfg[i].replay_win_sz, test_cfg[i].flags, 0); + if (rc != 0) { + RTE_LOG(ERR, USER1, "rte_ipsec_sa_init failed, cfg %d\n", + i); + return TEST_FAILED; + } + + /* Generate input mbuf data */ + for (j = 0; j < num_pkts && rc == 0; j++) { + ut_params->ibuf[j] = setup_test_string(ts_params->mbuf_pool, + null_plain_data, test_cfg[i].pkt_sz, 0); + if (ut_params->ibuf[j] == NULL) + rc = TEST_FAILED; + else { + /* Generate test mbuf data */ + /* packet with sequence number 0 is invalid */ + ut_params->testbuf[j] = setup_test_string_tunneled( + ts_params->mbuf_pool, + null_plain_data, test_cfg[i].pkt_sz, + OUTBOUND_SPI, j + 1); + if (ut_params->testbuf[j] == NULL) + rc = TEST_FAILED; + } + } + + if (rc == 0) + rc = test_ipsec_crypto_op_alloc(num_pkts); + + if (rc == 0) { + /* call ipsec library api */ + 
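/* crypto_ipsec() drives the complete lookaside sequence for the burst: + * rte_ipsec_pkt_crypto_prepare(), cryptodev enqueue/dequeue, + * rte_ipsec_pkt_crypto_group() and rte_ipsec_pkt_process(). + */ +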
rc = crypto_ipsec(num_pkts); + if (rc == 0) + rc = crypto_outb_burst_null_null_check(ut_params, + num_pkts); + else + RTE_LOG(ERR, USER1, "crypto_ipsec failed, cfg %d\n", + i); + } + + if (rc == TEST_FAILED) + test_ipsec_dump_buffers(ut_params, i); + + destroy_sa(0); + return rc; +} + +static int +test_ipsec_crypto_outb_burst_null_null_wrapper(void) +{ + int i; + int rc = 0; + struct ipsec_unitest_params *ut_params = &unittest_params; + + ut_params->ipsec_xform.spi = OUTBOUND_SPI; + ut_params->ipsec_xform.direction = RTE_SECURITY_IPSEC_SA_DIR_EGRESS; + ut_params->ipsec_xform.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP; + ut_params->ipsec_xform.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL; + ut_params->ipsec_xform.tunnel.type = RTE_SECURITY_IPSEC_TUNNEL_IPV4; + + for (i = 0; i < num_cfg && rc == 0; i++) { + ut_params->ipsec_xform.options.esn = test_cfg[i].esn; + rc = test_ipsec_crypto_outb_burst_null_null(i); + } + + return rc; +} + +static int +inline_inb_burst_null_null_check(struct ipsec_unitest_params *ut_params, int i, + uint16_t num_pkts) +{ + void *ibuf_data; + void *obuf_data; + uint16_t j; + + for (j = 0; j < num_pkts && num_pkts <= BURST_SIZE; j++) { + ut_params->pkt_index = j; + + /* compare the buffer data */ + ibuf_data = rte_pktmbuf_mtod(ut_params->ibuf[j], void *); + obuf_data = rte_pktmbuf_mtod(ut_params->obuf[j], void *); + + TEST_ASSERT_BUFFERS_ARE_EQUAL(ibuf_data, obuf_data, + ut_params->ibuf[j]->data_len, + "input and output data does not match\n"); + TEST_ASSERT_EQUAL(ut_params->ibuf[j]->data_len, + ut_params->obuf[j]->data_len, + "ibuf data_len is not equal to obuf data_len"); + TEST_ASSERT_EQUAL(ut_params->ibuf[j]->pkt_len, + ut_params->obuf[j]->pkt_len, + "ibuf pkt_len is not equal to obuf pkt_len"); + TEST_ASSERT_EQUAL(ut_params->ibuf[j]->data_len, + test_cfg[i].pkt_sz, + "data_len is not equal to input data"); + } + return 0; +} + +static int +test_ipsec_inline_inb_burst_null_null(int i) +{ + struct ipsec_testsuite_params *ts_params = &testsuite_params; + struct ipsec_unitest_params *ut_params = &unittest_params; + uint16_t num_pkts = test_cfg[i].num_pkts; + uint16_t j; + int32_t rc; + uint32_t n; + + uparams.auth = RTE_CRYPTO_SYM_XFORM_AUTH; + uparams.cipher = RTE_CRYPTO_SYM_XFORM_CIPHER; + strcpy(uparams.auth_algo, "null"); + strcpy(uparams.cipher_algo, "null"); + + /* create rte_ipsec_sa*/ + rc = create_sa(RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO, + test_cfg[i].replay_win_sz, test_cfg[i].flags, 0); + if (rc != 0) { + RTE_LOG(ERR, USER1, "rte_ipsec_sa_init failed, cfg %d\n", + i); + return TEST_FAILED; + } + + /* Generate inbound mbuf data */ + for (j = 0; j < num_pkts && rc == 0; j++) { + ut_params->ibuf[j] = setup_test_string_tunneled( + ts_params->mbuf_pool, + null_plain_data, test_cfg[i].pkt_sz, + INBOUND_SPI, j + 1); + if (ut_params->ibuf[j] == NULL) + rc = TEST_FAILED; + else { + /* Generate test mbuf data */ + ut_params->obuf[j] = setup_test_string( + ts_params->mbuf_pool, + null_plain_data, test_cfg[i].pkt_sz, 0); + if (ut_params->obuf[j] == NULL) + rc = TEST_FAILED; + } + } + + if (rc == 0) { + n = rte_ipsec_pkt_process(&ut_params->ss[0], ut_params->ibuf, + num_pkts); + if (n == num_pkts) + rc = inline_inb_burst_null_null_check(ut_params, i, + num_pkts); + else { + RTE_LOG(ERR, USER1, + "rte_ipsec_pkt_process failed, cfg %d\n", + i); + rc = TEST_FAILED; + } + } + + if (rc == TEST_FAILED) + test_ipsec_dump_buffers(ut_params, i); + + destroy_sa(0); + return rc; +} + +static int +test_ipsec_inline_inb_burst_null_null_wrapper(void) +{ + int i; + int rc = 0; + struct
ipsec_unitest_params *ut_params = &unittest_params; + + ut_params->ipsec_xform.spi = INBOUND_SPI; + ut_params->ipsec_xform.direction = RTE_SECURITY_IPSEC_SA_DIR_INGRESS; + ut_params->ipsec_xform.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP; + ut_params->ipsec_xform.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL; + ut_params->ipsec_xform.tunnel.type = RTE_SECURITY_IPSEC_TUNNEL_IPV4; + + for (i = 0; i < num_cfg && rc == 0; i++) { + ut_params->ipsec_xform.options.esn = test_cfg[i].esn; + rc = test_ipsec_inline_inb_burst_null_null(i); + } + + return rc; +} + +static int +inline_outb_burst_null_null_check(struct ipsec_unitest_params *ut_params, + uint16_t num_pkts) +{ + void *obuf_data; + void *ibuf_data; + uint16_t j; + + for (j = 0; j < num_pkts && num_pkts <= BURST_SIZE; j++) { + ut_params->pkt_index = j; + + /* compare the buffer data */ + ibuf_data = rte_pktmbuf_mtod(ut_params->ibuf[j], void *); + obuf_data = rte_pktmbuf_mtod(ut_params->obuf[j], void *); + TEST_ASSERT_BUFFERS_ARE_EQUAL(ibuf_data, obuf_data, + ut_params->ibuf[j]->data_len, + "input and output data does not match\n"); + TEST_ASSERT_EQUAL(ut_params->ibuf[j]->data_len, + ut_params->obuf[j]->data_len, + "ibuf data_len is not equal to obuf data_len"); + TEST_ASSERT_EQUAL(ut_params->ibuf[j]->pkt_len, + ut_params->obuf[j]->pkt_len, + "ibuf pkt_len is not equal to obuf pkt_len"); + } + return 0; +} + +static int +test_ipsec_inline_outb_burst_null_null(int i) +{ + struct ipsec_testsuite_params *ts_params = &testsuite_params; + struct ipsec_unitest_params *ut_params = &unittest_params; + uint16_t num_pkts = test_cfg[i].num_pkts; + uint16_t j; + int32_t rc; + uint32_t n; + + uparams.auth = RTE_CRYPTO_SYM_XFORM_AUTH; + uparams.cipher = RTE_CRYPTO_SYM_XFORM_CIPHER; + strcpy(uparams.auth_algo, "null"); + strcpy(uparams.cipher_algo, "null"); + + /* create rte_ipsec_sa */ + rc = create_sa(RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO, + test_cfg[i].replay_win_sz, test_cfg[i].flags, 0); + if (rc != 0) { + RTE_LOG(ERR, USER1, "rte_ipsec_sa_init failed, cfg %d\n", + i); + return TEST_FAILED; + } + + /* Generate test mbuf data */ + for (j = 0; j < num_pkts && rc == 0; j++) { + ut_params->ibuf[j] = setup_test_string(ts_params->mbuf_pool, + null_plain_data, test_cfg[i].pkt_sz, 0); + if (ut_params->ibuf[j] == NULL) + rc = TEST_FAILED; + + if (rc == 0) { + /* Generate test tunneled mbuf data for comparison */ + ut_params->obuf[j] = setup_test_string_tunneled( + ts_params->mbuf_pool, + null_plain_data, test_cfg[i].pkt_sz, + OUTBOUND_SPI, j + 1); + if (ut_params->obuf[j] == NULL) + rc = TEST_FAILED; + } + } + + if (rc == 0) { + n = rte_ipsec_pkt_process(&ut_params->ss[0], ut_params->ibuf, + num_pkts); + if (n == num_pkts) + rc = inline_outb_burst_null_null_check(ut_params, + num_pkts); + else { + RTE_LOG(ERR, USER1, + "rte_ipsec_pkt_process failed, cfg %d\n", + i); + rc = TEST_FAILED; + } + } + + if (rc == TEST_FAILED) + test_ipsec_dump_buffers(ut_params, i); + + destroy_sa(0); + return rc; +} + +static int +test_ipsec_inline_outb_burst_null_null_wrapper(void) +{ + int i; + int rc = 0; + struct ipsec_unitest_params *ut_params = &unittest_params; + + ut_params->ipsec_xform.spi = OUTBOUND_SPI; + ut_params->ipsec_xform.direction = RTE_SECURITY_IPSEC_SA_DIR_EGRESS; + ut_params->ipsec_xform.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP; + ut_params->ipsec_xform.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL; + ut_params->ipsec_xform.tunnel.type = RTE_SECURITY_IPSEC_TUNNEL_IPV4; + + for (i = 0; i < num_cfg && rc == 0; i++) { + ut_params->ipsec_xform.options.esn = test_cfg[i].esn; + rc
= test_ipsec_inline_outb_burst_null_null(i); + } + + return rc; +} + +static int +replay_inb_null_null_check(struct ipsec_unitest_params *ut_params, int i, + int num_pkts) +{ + uint16_t j; + + for (j = 0; j < num_pkts; j++) { + /* compare the buffer data */ + TEST_ASSERT_BUFFERS_ARE_EQUAL(null_plain_data, + rte_pktmbuf_mtod(ut_params->obuf[j], void *), + test_cfg[i].pkt_sz, + "input and output data does not match\n"); + + TEST_ASSERT_EQUAL(ut_params->obuf[j]->data_len, + ut_params->obuf[j]->pkt_len, + "data_len is not equal to pkt_len"); + } + + return 0; +} + +static int +test_ipsec_replay_inb_inside_null_null(int i) +{ + struct ipsec_testsuite_params *ts_params = &testsuite_params; + struct ipsec_unitest_params *ut_params = &unittest_params; + int rc; + + uparams.auth = RTE_CRYPTO_SYM_XFORM_AUTH; + uparams.cipher = RTE_CRYPTO_SYM_XFORM_CIPHER; + strcpy(uparams.auth_algo, "null"); + strcpy(uparams.cipher_algo, "null"); + + /* create rte_ipsec_sa*/ + rc = create_sa(RTE_SECURITY_ACTION_TYPE_NONE, + test_cfg[i].replay_win_sz, test_cfg[i].flags, 0); + if (rc != 0) { + RTE_LOG(ERR, USER1, "rte_ipsec_sa_init failed, cfg %d\n", + i); + return TEST_FAILED; + } + + /* Generate inbound mbuf data */ + ut_params->ibuf[0] = setup_test_string_tunneled(ts_params->mbuf_pool, + null_encrypted_data, test_cfg[i].pkt_sz, INBOUND_SPI, 1); + if (ut_params->ibuf[0] == NULL) + rc = TEST_FAILED; + else + rc = test_ipsec_crypto_op_alloc(1); + + if (rc == 0) { + /* call ipsec library api */ + rc = crypto_ipsec(1); + if (rc == 0) + rc = replay_inb_null_null_check(ut_params, i, 1); + else { + RTE_LOG(ERR, USER1, "crypto_ipsec failed, cfg %d\n", + i); + rc = TEST_FAILED; + } + } + + if ((rc == 0) && (test_cfg[i].replay_win_sz != 0)) { + /* generate packet with seq number inside the replay window */ + if (ut_params->ibuf[0]) { + rte_pktmbuf_free(ut_params->ibuf[0]); + ut_params->ibuf[0] = 0; + } + + ut_params->ibuf[0] = setup_test_string_tunneled( + ts_params->mbuf_pool, null_encrypted_data, + test_cfg[i].pkt_sz, INBOUND_SPI, + test_cfg[i].replay_win_sz); + if (ut_params->ibuf[0] == NULL) + rc = TEST_FAILED; + else + rc = test_ipsec_crypto_op_alloc(1); + + if (rc == 0) { + /* call ipsec library api */ + rc = crypto_ipsec(1); + if (rc == 0) + rc = replay_inb_null_null_check( + ut_params, i, 1); + else { + RTE_LOG(ERR, USER1, "crypto_ipsec failed\n"); + rc = TEST_FAILED; + } + } + } + + if (rc == TEST_FAILED) + test_ipsec_dump_buffers(ut_params, i); + + destroy_sa(0); + + return rc; +} + +static int +test_ipsec_replay_inb_inside_null_null_wrapper(void) +{ + int i; + int rc = 0; + struct ipsec_unitest_params *ut_params = &unittest_params; + + ut_params->ipsec_xform.spi = INBOUND_SPI; + ut_params->ipsec_xform.direction = RTE_SECURITY_IPSEC_SA_DIR_INGRESS; + ut_params->ipsec_xform.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP; + ut_params->ipsec_xform.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL; + ut_params->ipsec_xform.tunnel.type = RTE_SECURITY_IPSEC_TUNNEL_IPV4; + + for (i = 0; i < num_cfg && rc == 0; i++) { + ut_params->ipsec_xform.options.esn = test_cfg[i].esn; + rc = test_ipsec_replay_inb_inside_null_null(i); + } + + return rc; +} + +static int +test_ipsec_replay_inb_outside_null_null(int i) +{ + struct ipsec_testsuite_params *ts_params = &testsuite_params; + struct ipsec_unitest_params *ut_params = &unittest_params; + int rc; + + uparams.auth = RTE_CRYPTO_SYM_XFORM_AUTH; + uparams.cipher = RTE_CRYPTO_SYM_XFORM_CIPHER; + strcpy(uparams.auth_algo, "null"); + strcpy(uparams.cipher_algo, "null"); + + /* create rte_ipsec_sa */ + 
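/* This test first accepts a packet with seq = replay_win_sz + 2, which + * moves the top of the replay window up; the seq 1 packet sent next is + * then below the window and must be rejected unless ESN is enabled. + */ +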
rc = create_sa(RTE_SECURITY_ACTION_TYPE_NONE, + test_cfg[i].replay_win_sz, test_cfg[i].flags, 0); + if (rc != 0) { + RTE_LOG(ERR, USER1, "rte_ipsec_sa_init failed, cfg %d\n", + i); + return TEST_FAILED; + } + + /* Generate test mbuf data */ + ut_params->ibuf[0] = setup_test_string_tunneled(ts_params->mbuf_pool, + null_encrypted_data, test_cfg[i].pkt_sz, INBOUND_SPI, + test_cfg[i].replay_win_sz + 2); + if (ut_params->ibuf[0] == NULL) + rc = TEST_FAILED; + else + rc = test_ipsec_crypto_op_alloc(1); + + if (rc == 0) { + /* call ipsec library api */ + rc = crypto_ipsec(1); + if (rc == 0) + rc = replay_inb_null_null_check(ut_params, i, 1); + else { + RTE_LOG(ERR, USER1, "crypto_ipsec failed, cfg %d\n", + i); + rc = TEST_FAILED; + } + } + + if ((rc == 0) && (test_cfg[i].replay_win_sz != 0)) { + /* generate packet with seq number outside the replay window */ + if (ut_params->ibuf[0]) { + rte_pktmbuf_free(ut_params->ibuf[0]); + ut_params->ibuf[0] = 0; + } + ut_params->ibuf[0] = setup_test_string_tunneled( + ts_params->mbuf_pool, null_encrypted_data, + test_cfg[i].pkt_sz, INBOUND_SPI, 1); + if (ut_params->ibuf[0] == NULL) + rc = TEST_FAILED; + else + rc = test_ipsec_crypto_op_alloc(1); + + if (rc == 0) { + /* call ipsec library api */ + rc = crypto_ipsec(1); + if (rc == 0) { + if (test_cfg[i].esn == 0) { + RTE_LOG(ERR, USER1, + "packet is not outside the replay window, cfg %d pkt0_seq %u pkt1_seq %u\n", + i, + test_cfg[i].replay_win_sz + 2, + 1); + rc = TEST_FAILED; + } + } else { + RTE_LOG(ERR, USER1, + "packet is outside the replay window, cfg %d pkt0_seq %u pkt1_seq %u\n", + i, test_cfg[i].replay_win_sz + 2, 1); + rc = 0; + } + } + } + + if (rc == TEST_FAILED) + test_ipsec_dump_buffers(ut_params, i); + + destroy_sa(0); + + return rc; +} + +static int +test_ipsec_replay_inb_outside_null_null_wrapper(void) +{ + int i; + int rc = 0; + struct ipsec_unitest_params *ut_params = &unittest_params; + + ut_params->ipsec_xform.spi = INBOUND_SPI; + ut_params->ipsec_xform.direction = RTE_SECURITY_IPSEC_SA_DIR_INGRESS; + ut_params->ipsec_xform.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP; + ut_params->ipsec_xform.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL; + ut_params->ipsec_xform.tunnel.type = RTE_SECURITY_IPSEC_TUNNEL_IPV4; + + for (i = 0; i < num_cfg && rc == 0; i++) { + ut_params->ipsec_xform.options.esn = test_cfg[i].esn; + rc = test_ipsec_replay_inb_outside_null_null(i); + } + + return rc; +} + +static int +test_ipsec_replay_inb_repeat_null_null(int i) +{ + struct ipsec_testsuite_params *ts_params = &testsuite_params; + struct ipsec_unitest_params *ut_params = &unittest_params; + int rc; + + uparams.auth = RTE_CRYPTO_SYM_XFORM_AUTH; + uparams.cipher = RTE_CRYPTO_SYM_XFORM_CIPHER; + strcpy(uparams.auth_algo, "null"); + strcpy(uparams.cipher_algo, "null"); + + /* create rte_ipsec_sa */ + rc = create_sa(RTE_SECURITY_ACTION_TYPE_NONE, + test_cfg[i].replay_win_sz, test_cfg[i].flags, 0); + if (rc != 0) { + RTE_LOG(ERR, USER1, "rte_ipsec_sa_init failed, cfg %d\n", i); + return TEST_FAILED; + } + + /* Generate test mbuf data */ + ut_params->ibuf[0] = setup_test_string_tunneled(ts_params->mbuf_pool, + null_encrypted_data, test_cfg[i].pkt_sz, INBOUND_SPI, 1); + if (ut_params->ibuf[0] == NULL) + rc = TEST_FAILED; + else + rc = test_ipsec_crypto_op_alloc(1); + + if (rc == 0) { + /* call ipsec library api */ + rc = crypto_ipsec(1); + if (rc == 0) + rc = replay_inb_null_null_check(ut_params, i, 1); + else { + RTE_LOG(ERR, USER1, "crypto_ipsec failed, cfg %d\n", + i); + rc = TEST_FAILED; + } + } + + if ((rc == 0) && 
(test_cfg[i].replay_win_sz != 0)) { + /* + * generate packet with repeat seq number in the replay + * window + */ + if (ut_params->ibuf[0]) { + rte_pktmbuf_free(ut_params->ibuf[0]); + ut_params->ibuf[0] = 0; + } + + ut_params->ibuf[0] = setup_test_string_tunneled( + ts_params->mbuf_pool, null_encrypted_data, + test_cfg[i].pkt_sz, INBOUND_SPI, 1); + if (ut_params->ibuf[0] == NULL) + rc = TEST_FAILED; + else + rc = test_ipsec_crypto_op_alloc(1); + + if (rc == 0) { + /* call ipsec library api */ + rc = crypto_ipsec(1); + if (rc == 0) { + RTE_LOG(ERR, USER1, + "packet is not repeated in the replay window, cfg %d seq %u\n", + i, 1); + rc = TEST_FAILED; + } else { + RTE_LOG(ERR, USER1, + "packet is repeated in the replay window, cfg %d seq %u\n", + i, 1); + rc = 0; + } + } + } + + if (rc == TEST_FAILED) + test_ipsec_dump_buffers(ut_params, i); + + destroy_sa(0); + + return rc; +} + +static int +test_ipsec_replay_inb_repeat_null_null_wrapper(void) +{ + int i; + int rc = 0; + struct ipsec_unitest_params *ut_params = &unittest_params; + + ut_params->ipsec_xform.spi = INBOUND_SPI; + ut_params->ipsec_xform.direction = RTE_SECURITY_IPSEC_SA_DIR_INGRESS; + ut_params->ipsec_xform.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP; + ut_params->ipsec_xform.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL; + ut_params->ipsec_xform.tunnel.type = RTE_SECURITY_IPSEC_TUNNEL_IPV4; + + for (i = 0; i < num_cfg && rc == 0; i++) { + ut_params->ipsec_xform.options.esn = test_cfg[i].esn; + rc = test_ipsec_replay_inb_repeat_null_null(i); + } + + return rc; +} + +static int +test_ipsec_replay_inb_inside_burst_null_null(int i) +{ + struct ipsec_testsuite_params *ts_params = &testsuite_params; + struct ipsec_unitest_params *ut_params = &unittest_params; + uint16_t num_pkts = test_cfg[i].num_pkts; + int rc; + int j; + + uparams.auth = RTE_CRYPTO_SYM_XFORM_AUTH; + uparams.cipher = RTE_CRYPTO_SYM_XFORM_CIPHER; + strcpy(uparams.auth_algo, "null"); + strcpy(uparams.cipher_algo, "null"); + + /* create rte_ipsec_sa*/ + rc = create_sa(RTE_SECURITY_ACTION_TYPE_NONE, + test_cfg[i].replay_win_sz, test_cfg[i].flags, 0); + if (rc != 0) { + RTE_LOG(ERR, USER1, "rte_ipsec_sa_init failed, cfg %d\n", + i); + return TEST_FAILED; + } + + /* Generate inbound mbuf data */ + ut_params->ibuf[0] = setup_test_string_tunneled(ts_params->mbuf_pool, + null_encrypted_data, test_cfg[i].pkt_sz, INBOUND_SPI, 1); + if (ut_params->ibuf[0] == NULL) + rc = TEST_FAILED; + else + rc = test_ipsec_crypto_op_alloc(1); + + if (rc == 0) { + /* call ipsec library api */ + rc = crypto_ipsec(1); + if (rc == 0) + rc = replay_inb_null_null_check(ut_params, i, 1); + else { + RTE_LOG(ERR, USER1, "crypto_ipsec failed, cfg %d\n", + i); + rc = TEST_FAILED; + } + } + + if ((rc == 0) && (test_cfg[i].replay_win_sz != 0)) { + /* + * generate packet(s) with seq number(s) inside the + * replay window + */ + if (ut_params->ibuf[0]) { + rte_pktmbuf_free(ut_params->ibuf[0]); + ut_params->ibuf[0] = 0; + } + + for (j = 0; j < num_pkts && rc == 0; j++) { + /* packet with sequence number 1 already processed */ + ut_params->ibuf[j] = setup_test_string_tunneled( + ts_params->mbuf_pool, null_encrypted_data, + test_cfg[i].pkt_sz, INBOUND_SPI, j + 2); + if (ut_params->ibuf[j] == NULL) + rc = TEST_FAILED; + } + + if (rc == 0) { + if (test_cfg[i].reorder_pkts) + test_ipsec_reorder_inb_pkt_burst(num_pkts); + rc = test_ipsec_crypto_op_alloc(num_pkts); + } + + if (rc == 0) { + /* call ipsec library api */ + rc = crypto_ipsec(num_pkts); + if (rc == 0) + rc = replay_inb_null_null_check( + ut_params, i, num_pkts); + 
else { + RTE_LOG(ERR, USER1, "crypto_ipsec failed\n"); + rc = TEST_FAILED; + } + } + } + + if (rc == TEST_FAILED) + test_ipsec_dump_buffers(ut_params, i); + + destroy_sa(0); + + return rc; +} + +static int +test_ipsec_replay_inb_inside_burst_null_null_wrapper(void) +{ + int i; + int rc = 0; + struct ipsec_unitest_params *ut_params = &unittest_params; + + ut_params->ipsec_xform.spi = INBOUND_SPI; + ut_params->ipsec_xform.direction = RTE_SECURITY_IPSEC_SA_DIR_INGRESS; + ut_params->ipsec_xform.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP; + ut_params->ipsec_xform.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL; + ut_params->ipsec_xform.tunnel.type = RTE_SECURITY_IPSEC_TUNNEL_IPV4; + + for (i = 0; i < num_cfg && rc == 0; i++) { + ut_params->ipsec_xform.options.esn = test_cfg[i].esn; + rc = test_ipsec_replay_inb_inside_burst_null_null(i); + } + + return rc; +} + + +static int +crypto_inb_burst_2sa_null_null_check(struct ipsec_unitest_params *ut_params, + int i) +{ + uint16_t j; + + for (j = 0; j < BURST_SIZE; j++) { + ut_params->pkt_index = j; + + /* compare the data buffers */ + TEST_ASSERT_BUFFERS_ARE_EQUAL(null_plain_data, + rte_pktmbuf_mtod(ut_params->obuf[j], void *), + test_cfg[i].pkt_sz, + "input and output data does not match\n"); + TEST_ASSERT_EQUAL(ut_params->obuf[j]->data_len, + ut_params->obuf[j]->pkt_len, + "data_len is not equal to pkt_len"); + TEST_ASSERT_EQUAL(ut_params->obuf[j]->data_len, + test_cfg[i].pkt_sz, + "data_len is not equal to input data"); + } + + return 0; +} + +static int +test_ipsec_crypto_inb_burst_2sa_null_null(int i) +{ + struct ipsec_testsuite_params *ts_params = &testsuite_params; + struct ipsec_unitest_params *ut_params = &unittest_params; + uint16_t num_pkts = test_cfg[i].num_pkts; + uint16_t j, r; + int rc = 0; + + if (num_pkts != BURST_SIZE) + return rc; + + uparams.auth = RTE_CRYPTO_SYM_XFORM_AUTH; + uparams.cipher = RTE_CRYPTO_SYM_XFORM_CIPHER; + strcpy(uparams.auth_algo, "null"); + strcpy(uparams.cipher_algo, "null"); + + /* create rte_ipsec_sa */ + rc = create_sa(RTE_SECURITY_ACTION_TYPE_NONE, + test_cfg[i].replay_win_sz, test_cfg[i].flags, 0); + if (rc != 0) { + RTE_LOG(ERR, USER1, "rte_ipsec_sa_init failed, cfg %d\n", + i); + return TEST_FAILED; + } + + /* create second rte_ipsec_sa */ + ut_params->ipsec_xform.spi = INBOUND_SPI + 1; + rc = create_sa(RTE_SECURITY_ACTION_TYPE_NONE, + test_cfg[i].replay_win_sz, test_cfg[i].flags, 1); + if (rc != 0) { + RTE_LOG(ERR, USER1, "rte_ipsec_sa_init failed, cfg %d\n", + i); + destroy_sa(0); + return TEST_FAILED; + } + + /* Generate test mbuf data */ + for (j = 0; j < num_pkts && rc == 0; j++) { + r = j % 2; + /* packet with sequence number 0 is invalid */ + ut_params->ibuf[j] = setup_test_string_tunneled( + ts_params->mbuf_pool, null_encrypted_data, + test_cfg[i].pkt_sz, INBOUND_SPI + r, j + 1); + if (ut_params->ibuf[j] == NULL) + rc = TEST_FAILED; + } + + if (rc == 0) + rc = test_ipsec_crypto_op_alloc(num_pkts); + + if (rc == 0) { + /* call ipsec library api */ + rc = crypto_ipsec_2sa(); + if (rc == 0) + rc = crypto_inb_burst_2sa_null_null_check( + ut_params, i); + else { + RTE_LOG(ERR, USER1, "crypto_ipsec failed, cfg %d\n", + i); + rc = TEST_FAILED; + } + } + + if (rc == TEST_FAILED) + test_ipsec_dump_buffers(ut_params, i); + + destroy_sa(0); + destroy_sa(1); + return rc; +} + +static int +test_ipsec_crypto_inb_burst_2sa_null_null_wrapper(void) +{ + int i; + int rc = 0; + struct ipsec_unitest_params *ut_params = &unittest_params; + + ut_params->ipsec_xform.spi = INBOUND_SPI; + ut_params->ipsec_xform.direction = 
RTE_SECURITY_IPSEC_SA_DIR_INGRESS; + ut_params->ipsec_xform.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP; + ut_params->ipsec_xform.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL; + ut_params->ipsec_xform.tunnel.type = RTE_SECURITY_IPSEC_TUNNEL_IPV4; + + for (i = 0; i < num_cfg && rc == 0; i++) { + ut_params->ipsec_xform.options.esn = test_cfg[i].esn; + rc = test_ipsec_crypto_inb_burst_2sa_null_null(i); + } + + return rc; +} + +static int +test_ipsec_crypto_inb_burst_2sa_4grp_null_null(int i) +{ + struct ipsec_testsuite_params *ts_params = &testsuite_params; + struct ipsec_unitest_params *ut_params = &unittest_params; + uint16_t num_pkts = test_cfg[i].num_pkts; + uint16_t j, k; + int rc = 0; + + if (num_pkts != BURST_SIZE) + return rc; + + uparams.auth = RTE_CRYPTO_SYM_XFORM_AUTH; + uparams.cipher = RTE_CRYPTO_SYM_XFORM_CIPHER; + strcpy(uparams.auth_algo, "null"); + strcpy(uparams.cipher_algo, "null"); + + /* create rte_ipsec_sa */ + rc = create_sa(RTE_SECURITY_ACTION_TYPE_NONE, + test_cfg[i].replay_win_sz, test_cfg[i].flags, 0); + if (rc != 0) { + RTE_LOG(ERR, USER1, "rte_ipsec_sa_init failed, cfg %d\n", + i); + return TEST_FAILED; + } + + /* create second rte_ipsec_sa */ + ut_params->ipsec_xform.spi = INBOUND_SPI + 1; + rc = create_sa(RTE_SECURITY_ACTION_TYPE_NONE, + test_cfg[i].replay_win_sz, test_cfg[i].flags, 1); + if (rc != 0) { + RTE_LOG(ERR, USER1, "rte_ipsec_sa_init failed, cfg %d\n", + i); + destroy_sa(0); + return TEST_FAILED; + } + + /* Generate test mbuf data */ + for (j = 0; j < num_pkts && rc == 0; j++) { + k = crypto_ipsec_4grp(j); + + /* packet with sequence number 0 is invalid */ + ut_params->ibuf[j] = setup_test_string_tunneled( + ts_params->mbuf_pool, null_encrypted_data, + test_cfg[i].pkt_sz, INBOUND_SPI + k, j + 1); + if (ut_params->ibuf[j] == NULL) + rc = TEST_FAILED; + } + + if (rc == 0) + rc = test_ipsec_crypto_op_alloc(num_pkts); + + if (rc == 0) { + /* call ipsec library api */ + rc = crypto_ipsec_2sa_4grp(); + if (rc == 0) + rc = crypto_inb_burst_2sa_null_null_check( + ut_params, i); + else { + RTE_LOG(ERR, USER1, "crypto_ipsec failed, cfg %d\n", + i); + rc = TEST_FAILED; + } + } + + if (rc == TEST_FAILED) + test_ipsec_dump_buffers(ut_params, i); + + destroy_sa(0); + destroy_sa(1); + return rc; +} + +static int +test_ipsec_crypto_inb_burst_2sa_4grp_null_null_wrapper(void) +{ + int i; + int rc = 0; + struct ipsec_unitest_params *ut_params = &unittest_params; + + ut_params->ipsec_xform.spi = INBOUND_SPI; + ut_params->ipsec_xform.direction = RTE_SECURITY_IPSEC_SA_DIR_INGRESS; + ut_params->ipsec_xform.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP; + ut_params->ipsec_xform.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL; + ut_params->ipsec_xform.tunnel.type = RTE_SECURITY_IPSEC_TUNNEL_IPV4; + + for (i = 0; i < num_cfg && rc == 0; i++) { + ut_params->ipsec_xform.options.esn = test_cfg[i].esn; + rc = test_ipsec_crypto_inb_burst_2sa_4grp_null_null(i); + } + + return rc; +} + +static struct unit_test_suite ipsec_testsuite = { + .suite_name = "IPsec NULL Unit Test Suite", + .setup = testsuite_setup, + .teardown = testsuite_teardown, + .unit_test_cases = { + TEST_CASE_ST(ut_setup, ut_teardown, + test_ipsec_crypto_inb_burst_null_null_wrapper), + TEST_CASE_ST(ut_setup, ut_teardown, + test_ipsec_crypto_outb_burst_null_null_wrapper), + TEST_CASE_ST(ut_setup, ut_teardown, + test_ipsec_inline_inb_burst_null_null_wrapper), + TEST_CASE_ST(ut_setup, ut_teardown, + test_ipsec_inline_outb_burst_null_null_wrapper), + TEST_CASE_ST(ut_setup, ut_teardown, + test_ipsec_replay_inb_inside_null_null_wrapper), + 
TEST_CASE_ST(ut_setup, ut_teardown, + test_ipsec_replay_inb_outside_null_null_wrapper), + TEST_CASE_ST(ut_setup, ut_teardown, + test_ipsec_replay_inb_repeat_null_null_wrapper), + TEST_CASE_ST(ut_setup, ut_teardown, + test_ipsec_replay_inb_inside_burst_null_null_wrapper), + TEST_CASE_ST(ut_setup, ut_teardown, + test_ipsec_crypto_inb_burst_2sa_null_null_wrapper), + TEST_CASE_ST(ut_setup, ut_teardown, + test_ipsec_crypto_inb_burst_2sa_4grp_null_null_wrapper), + TEST_CASES_END() /**< NULL terminate unit test array */ + } +}; + +static int +test_ipsec(void) +{ + return unit_test_suite_runner(&ipsec_testsuite); +} + +REGISTER_TEST_COMMAND(ipsec_autotest, test_ipsec);
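For reference, the sequence these tests drive through crypto_ipsec() is the intended lookaside usage pattern of the librte_ipsec API. Below is a minimal sketch of that datapath, not part of the patch: the helper name ipsec_lookaside_burst is illustrative, and it assumes a session 'ss' already set up via rte_ipsec_session_prepare(), a configured cryptodev 'dev_id' with queue pair 0, 'n' packets in 'mb' and 'n' pre-allocated symmetric crypto ops in 'cop', all belonging to a single SA (so exactly one group comes back):

static inline uint16_t
ipsec_lookaside_burst(struct rte_ipsec_session *ss, uint8_t dev_id,
	struct rte_mbuf *mb[], struct rte_crypto_op *cop[], uint16_t n)
{
	struct rte_ipsec_group grp[1];
	uint16_t k, ng;

	/* fill crypto ops with per-packet IPsec transform data */
	k = rte_ipsec_pkt_crypto_prepare(ss, mb, cop, n);

	/* hand the ops to the crypto PMD and collect completions;
	 * a real application would poll dequeue until all ops return
	 */
	k = rte_cryptodev_enqueue_burst(dev_id, 0, cop, k);
	k = rte_cryptodev_dequeue_burst(dev_id, 0, cop, k);

	/* sort completed ops back into per-session packet groups */
	ng = rte_ipsec_pkt_crypto_group((const struct rte_crypto_op **)
		(uintptr_t)cop, mb, grp, k);
	if (ng != 1)
		return 0;

	/* finalize ESP encap/decap on the grouped packets */
	return rte_ipsec_pkt_process(grp[0].id.ptr, grp[0].m, grp[0].cnt);
}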