From patchwork Fri Aug 24 16:53:18 2018
X-Patchwork-Submitter: "Ananyev, Konstantin"
X-Patchwork-Id: 43880
X-Patchwork-Delegate: thomas@monjalon.net
From: Konstantin Ananyev
To: dev@dpdk.org
Cc: Konstantin Ananyev, Mohammad Abdul Awal, Declan Doherty
Date: Fri, 24 Aug 2018 17:53:18 +0100
Message-Id: <1535129598-27301-1-git-send-email-konstantin.ananyev@intel.com>
Subject: [dpdk-dev] [RFC] ipsec: new library for IPsec data-path processing
List-Id: DPDK patches and discussions

This RFC introduces a new library within DPDK: librte_ipsec.
The aim is to provide a DPDK-native, high-performance library for IPsec
data-path processing.
The library is supposed to utilize the existing DPDK crypto-dev and
security APIs to provide the application with a transparent IPsec
processing API.
The library concentrates on data-path protocol processing (ESP and AH);
IKE protocol(s) implementation is out of scope for this library.
However, hook/callback mechanisms will be defined to allow integrating
it with existing IKE implementations.
Due to the quite complex nature of the IPsec protocol suite and the
variety of user requirements and usage scenarios, a few API levels will
be provided:

1) Security Association (SA-level) API
Operates at SA level, provides functions to:
    - initialize/teardown SA object
    - process inbound/outbound ESP/AH packets associated with the given
      SA (decrypt/encrypt, authenticate, check integrity,
      add/remove ESP/AH related headers and data, etc.).

2) Security Association Database (SAD) API
API to create/manage/destroy IPsec SAD.
While the DPDK IPsec library plans to have its own implementation, the
intention is to keep it as independent from the other parts of the
IPsec library as possible.
That is supposed to give users the ability to provide their own
implementation of the SAD compatible with the other parts of the IPsec
library.

3) IPsec Context (CTX) API
This is supposed to be a high-level API, where each IPsec CTX is an
abstraction of an 'independent copy of the IPsec stack'.
A CTX owns a set of SAs, SADs, the crypto-dev queues assigned to it,
etc., and provides:
    - de-multiplexing of the stream of inbound packets to particular
      SAs and further IPsec-related processing.
    - IPsec-related processing for outbound packets.
    - SA add/delete/update functionality.

The current RFC concentrates on the SA-level API only (1); detailed
discussion of 2) and 3) will be the subject of separate RFC(s).
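For illustration, below is a minimal sketch of how an application might
fill the SA initialization parameters for an outbound ESP tunnel SA
using lookaside crypto. The field names follow the rte_ipsec_sa_prm
definition in this patch; the helper name fill_outb_tun_prm(), the key
size, SPI, salt, flow id and device id are illustrative assumptions,
not part of the RFC:

	/* sketch: prepare rte_ipsec_sa_prm for an outbound ESP
	 * IPv4 tunnel SA (all concrete values are assumptions).
	 */
	static void
	fill_outb_tun_prm(struct rte_ipsec_sa_prm *prm,
		struct rte_crypto_sym_xform *aead,
		const struct ipv4_hdr *tun_hdr, uint8_t *key,
		struct rte_mempool *sess_pool)
	{
		memset(prm, 0, sizeof(*prm));
		memset(aead, 0, sizeof(*aead));

		/* AES-GCM transform for ESP (RFC 4106) */
		aead->type = RTE_CRYPTO_SYM_XFORM_AEAD;
		aead->aead.algo = RTE_CRYPTO_AEAD_AES_GCM;
		aead->aead.op = RTE_CRYPTO_AEAD_OP_ENCRYPT;
		aead->aead.key.data = key;	/* 16B key, set up elsewhere */
		aead->aead.key.length = 16;
		aead->aead.iv.length = 12;
		aead->aead.digest_length = 16;
		aead->aead.aad_length = 8;

		/* outbound ESP tunnel, outer IPv4 */
		prm->flowid = 1;
		prm->ipsec_xform.spi = 0x100;
		prm->ipsec_xform.salt = 0xdeadbeef;
		prm->ipsec_xform.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP;
		prm->ipsec_xform.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL;
		prm->ipsec_xform.direction = RTE_SECURITY_IPSEC_SA_DIR_EGRESS;
		prm->ipsec_xform.tunnel.type = RTE_SECURITY_IPSEC_TUNNEL_IPV4;

		/* outer header template prepended to each outbound packet */
		prm->tun.hdr = tun_hdr;
		prm->tun.hdr_len = sizeof(*tun_hdr);
		prm->tun.hdr_l3_off = 0;
		prm->tun.next_proto = IPPROTO_IPIP;	/* inner IPv4 */

		/* lookaside crypto on device 0, no rte_security session */
		prm->sec.type = RTE_SECURITY_ACTION_TYPE_NONE;
		prm->crypto.xform = aead;
		prm->crypto.pool = sess_pool;		/* session mempool */
		prm->crypto.devid[0] = 0;
		prm->crypto.nb_dev = 1;
	}

rte_ipsec_sa_size()/rte_ipsec_sa_init() can then be invoked exactly as
in the usage overview in the next section.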
SA (low) level API
==================

The API described below operates at the SA level.
It provides functionality that allows the user to process inbound and
outbound IPsec packets for a given SA. To be more specific:
- for inbound ESP/AH packets: perform decryption, authentication and
  integrity checking, and remove the ESP/AH related headers
- for outbound packets: perform payload encryption, attach the ICV,
  update/add IP headers, add ESP/AH headers/trailers, and set up the
  related mbuf fields (ol_flags, tx_offloads, etc.)
- initialize/un-initialize a given SA based on user-provided parameters.

Processed inbound/outbound packets can be grouped by a user-provided
flow id (an opaque 64-bit number associated by the user with a given
SA).

The SA-level API is built on top of the crypto-dev/security API and
relies on it to perform the actual ciphering and integrity checking.
Due to the nature of the crypto-dev API (enqueue/dequeue model), an
asynchronous API is used for IPsec packets destined to be processed by
a crypto-device:
rte_ipsec_crypto_prepare()->rte_cryptodev_enqueue_burst()->
rte_cryptodev_dequeue_burst()->rte_ipsec_crypto_process().
For packets destined for inline processing no such extra overhead is
required, and a simple synchronous API, rte_ipsec_inline_process(), is
introduced for that case.

The following functionality:
- match inbound/outbound packets to a particular SA
- manage crypto/security devices
- provide SAD/SPD related functionality
- determine which crypto/security device has to be used for the given
  packet(s)
is out of scope for the SA-level API.

Below is a brief (and simplified) overview of the expected SA-level API
usage.

	/* allocate and initialize SA */
	size_t sz = rte_ipsec_sa_size();
	struct rte_ipsec_sa *sa = rte_malloc(NULL, sz, RTE_CACHE_LINE_SIZE);
	struct rte_ipsec_sa_prm prm;
	/* fill prm */
	rc = rte_ipsec_sa_init(sa, &prm);
	if (rc != 0) { /* handle error */ }
	.....

	/* process inbound/outbound IPsec packets that belong to given SA */

	/* inline IPsec processing was done for these packets */
	if (use_inline_ipsec)
		n = rte_ipsec_inline_process(sa, pkts, nb_pkts);
	/* use crypto-device to process the packets */
	else {
		struct rte_crypto_op *cop[nb_pkts];
		struct rte_ipsec_group grp[nb_pkts];

		....
		/* prepare crypto ops */
		n = rte_ipsec_crypto_prepare(sa, pkts, cop, nb_pkts);
		/* enqueue crypto ops to related crypto-dev */
		n = rte_cryptodev_enqueue_burst(..., cop, n);
		if (n != nb_pkts) { /* handle failed packets */ }
		/* dequeue finished crypto ops from related crypto-dev */
		n = rte_cryptodev_dequeue_burst(..., cop, nb_pkts);
		/* finish IPsec processing for associated packets */
		n = rte_ipsec_crypto_process(cop, pkts, grp, n);
		/* now we have groups of packets grouped by SA flow id */
		....
	}
	...
	/* uninit given SA */
	rte_ipsec_sa_fini(sa);

Planned scope for 18.11:
========================

- SA-level API definition
- ESP tunnel mode support (both IPv4/IPv6)
- Supported algorithms: AES-CBC, AES-GCM, HMAC-SHA1, NULL.
- unit tests (UT)

Note: still WIP, so not all functionality planned for 18.11 is in
place.
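To make the grouping step at the end of the crypto-path snippet above
concrete, here is a minimal sketch of how the output of
rte_ipsec_crypto_process() might be consumed; handle_flow() is a
hypothetical application handler, and the rte_ipsec_group fields
(flowid, m, cnt) follow their use inside sa.c in this patch:

	/* 'n' is the number of groups returned by
	 * rte_ipsec_crypto_process(); the packets in grp[i].m all
	 * belong to the SA with flow id grp[i].flowid.
	 */
	uint16_t i;

	for (i = 0; i != n; i++)
		handle_flow(grp[i].flowid, grp[i].m, grp[i].cnt);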
Post 18.11: =========== - ESP transport mode support (both IPv4/IPv6) - update examples/ipsec-secgw to use librte_ipsec - SAD and high-level API definition and implementation Signed-off-by: Mohammad Abdul Awal Signed-off-by: Declan Doherty Signed-off-by: Konstantin Ananyev --- config/common_base | 5 + lib/Makefile | 2 + lib/librte_ipsec/Makefile | 24 + lib/librte_ipsec/meson.build | 10 + lib/librte_ipsec/pad.h | 45 ++ lib/librte_ipsec/rte_ipsec.h | 245 +++++++++ lib/librte_ipsec/rte_ipsec_version.map | 13 + lib/librte_ipsec/sa.c | 921 +++++++++++++++++++++++++++++++++ lib/librte_net/rte_esp.h | 10 +- lib/meson.build | 2 + mk/rte.app.mk | 2 + 11 files changed, 1278 insertions(+), 1 deletion(-) create mode 100644 lib/librte_ipsec/Makefile create mode 100644 lib/librte_ipsec/meson.build create mode 100644 lib/librte_ipsec/pad.h create mode 100644 lib/librte_ipsec/rte_ipsec.h create mode 100644 lib/librte_ipsec/rte_ipsec_version.map create mode 100644 lib/librte_ipsec/sa.c diff --git a/config/common_base b/config/common_base index 4bcbaf923..c95602c05 100644 --- a/config/common_base +++ b/config/common_base @@ -879,6 +879,11 @@ CONFIG_RTE_LIBRTE_BPF=y CONFIG_RTE_LIBRTE_BPF_ELF=n # +# Compile librte_ipsec +# +CONFIG_RTE_LIBRTE_IPSEC=y + +# # Compile the test application # CONFIG_RTE_APP_TEST=y diff --git a/lib/Makefile b/lib/Makefile index afa604e20..58998dedd 100644 --- a/lib/Makefile +++ b/lib/Makefile @@ -105,6 +105,8 @@ DEPDIRS-librte_gso := librte_eal librte_mbuf librte_ethdev librte_net DEPDIRS-librte_gso += librte_mempool DIRS-$(CONFIG_RTE_LIBRTE_BPF) += librte_bpf DEPDIRS-librte_bpf := librte_eal librte_mempool librte_mbuf librte_ethdev +DIRS-$(CONFIG_RTE_LIBRTE_IPSEC) += librte_ipsec +DEPDIRS-librte_ipsec := librte_eal librte_mbuf librte_cryptodev librte_security ifeq ($(CONFIG_RTE_EXEC_ENV_LINUXAPP),y) DIRS-$(CONFIG_RTE_LIBRTE_KNI) += librte_kni diff --git a/lib/librte_ipsec/Makefile b/lib/librte_ipsec/Makefile new file mode 100644 index 000000000..15441cf41 --- /dev/null +++ b/lib/librte_ipsec/Makefile @@ -0,0 +1,24 @@ +# SPDX-License-Identifier: BSD-3-Clause +# Copyright(c) 2018 Intel Corporation + +include $(RTE_SDK)/mk/rte.vars.mk + +# library name +LIB = librte_ipsec.a + +CFLAGS += -O3 +CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR) +CFLAGS += -DALLOW_EXPERIMENTAL_API +LDLIBS += -lrte_eal -lrte_mbuf -lrte_cryptodev -lrte_security + +EXPORT_MAP := rte_ipsec_version.map + +LIBABIVER := 1 + +# all source are stored in SRCS-y +SRCS-$(CONFIG_RTE_LIBRTE_IPSEC) += sa.c + +# install header files +SYMLINK-$(CONFIG_RTE_LIBRTE_IPSEC)-include += rte_ipsec.h + +include $(RTE_SDK)/mk/rte.lib.mk diff --git a/lib/librte_ipsec/meson.build b/lib/librte_ipsec/meson.build new file mode 100644 index 000000000..79c55a8be --- /dev/null +++ b/lib/librte_ipsec/meson.build @@ -0,0 +1,10 @@ +# SPDX-License-Identifier: BSD-3-Clause +# Copyright(c) 2018 Intel Corporation + +allow_experimental_apis = true + +sources=files('sa.c') + +install_headers = files('rte_ipsec.h') + +deps += ['mbuf', 'net', 'cryptodev', 'security'] diff --git a/lib/librte_ipsec/pad.h b/lib/librte_ipsec/pad.h new file mode 100644 index 000000000..2f5ccd00e --- /dev/null +++ b/lib/librte_ipsec/pad.h @@ -0,0 +1,45 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2018 Intel Corporation + */ + +#ifndef _PAD_H_ +#define _PAD_H_ + +#define IPSEC_MAX_PAD_SIZE UINT8_MAX + +static const uint8_t esp_pad_bytes[IPSEC_MAX_PAD_SIZE] = { + 1, 2, 3, 4, 5, 6, 7, 8, + 9, 10, 11, 12, 13, 14, 15, 16, + 17, 18, 19, 20, 21, 22, 23, 24, + 25, 26, 27, 28, 
29, 30, 31, 32, + 33, 34, 35, 36, 37, 38, 39, 40, + 41, 42, 43, 44, 45, 46, 47, 48, + 49, 50, 51, 52, 53, 54, 55, 56, + 57, 58, 59, 60, 61, 62, 63, 64, + 65, 66, 67, 68, 69, 70, 71, 72, + 73, 74, 75, 76, 77, 78, 79, 80, + 81, 82, 83, 84, 85, 86, 87, 88, + 89, 90, 91, 92, 93, 94, 95, 96, + 97, 98, 99, 100, 101, 102, 103, 104, + 105, 106, 107, 108, 109, 110, 111, 112, + 113, 114, 115, 116, 117, 118, 119, 120, + 121, 122, 123, 124, 125, 126, 127, 128, + 129, 130, 131, 132, 133, 134, 135, 136, + 137, 138, 139, 140, 141, 142, 143, 144, + 145, 146, 147, 148, 149, 150, 151, 152, + 153, 154, 155, 156, 157, 158, 159, 160, + 161, 162, 163, 164, 165, 166, 167, 168, + 169, 170, 171, 172, 173, 174, 175, 176, + 177, 178, 179, 180, 181, 182, 183, 184, + 185, 186, 187, 188, 189, 190, 191, 192, + 193, 194, 195, 196, 197, 198, 199, 200, + 201, 202, 203, 204, 205, 206, 207, 208, + 209, 210, 211, 212, 213, 214, 215, 216, + 217, 218, 219, 220, 221, 222, 223, 224, + 225, 226, 227, 228, 229, 230, 231, 232, + 233, 234, 235, 236, 237, 238, 239, 240, + 241, 242, 243, 244, 245, 246, 247, 248, + 249, 250, 251, 252, 253, 254, 255, +}; + +#endif /* _PAD_H_ */ diff --git a/lib/librte_ipsec/rte_ipsec.h b/lib/librte_ipsec/rte_ipsec.h new file mode 100644 index 000000000..d1154eede --- /dev/null +++ b/lib/librte_ipsec/rte_ipsec.h @@ -0,0 +1,245 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2018 Intel Corporation + */ + +#ifndef _RTE_IPSEC_H_ +#define _RTE_IPSEC_H_ + +/** + * @file rte_ipsec.h + * @b EXPERIMENTAL: this API may change without prior notice + * + * RTE IPsec support. + * librte_ipsec provides a framework for data-path IPsec protocol + * processing (ESP/AH). + * IKEv2 protocol support right now is out of scope of that draft. + * Though it tries to define related API in such way, that it could be adopted + * by IKEv2 implementation. + */ + +#include +#include +#include +#include + +#ifdef __cplusplus +extern "C" { +#endif + +/** + * An opaque structure to represent Security Association (SA). + */ +struct rte_ipsec_sa; + +/** + * SA initialization parameters. + */ +struct rte_ipsec_sa_prm { + + uint64_t flowid; /**< provided and interpreted by user */ + struct rte_security_ipsec_xform ipsec_xform; /**< SA configuration */ + union { + struct { + uint8_t hdr_len; /**< tunnel header len */ + uint8_t hdr_l3_off; /**< offset for IPv4/IPv6 header */ + uint8_t next_proto; /**< next header protocol */ + const void *hdr; /**< tunnel header template */ + } tun; /**< tunnel mode repated parameters */ + struct { + uint8_t proto; /**< next header protocol */ + } trs; /**< transport mode repated parameters */ + }; + + struct { + enum rte_security_session_action_type type; + struct rte_security_ctx *sctx; + struct rte_security_session *sses; + uint32_t ol_flags; + } sec; /**< rte_security related parameters */ + + struct { + struct rte_crypto_sym_xform *xform; + struct rte_mempool *pool; + /** +#include +#include +#include +#include +#include "pad.h" + +#define IPSEC_MAX_HDR_SIZE 64 +#define IPSEC_MAX_IV_SIZE (2 * sizeof(uint64_t)) + +#define IPSEC_MAX_CRYPTO_DEVS (UINT8_MAX + 1) + +/* ??? 
these definitions probably has to be in rte_crypto_sym.h */ +union sym_op_ofslen { + uint64_t raw; + struct { + uint32_t offset; + uint32_t length; + }; +}; + +union sym_op_data { + __uint128_t raw; + struct { + uint8_t *va; + rte_iova_t pa; + }; +}; + +struct rte_ipsec_sa { + uint64_t type; /* type of given SA */ + uint64_t flowid; /* user defined */ + uint32_t spi; + uint32_t salt; + uint64_t sqn; + uint64_t *iv_ptr; + uint8_t aad_len; + uint8_t hdr_len; + uint8_t hdr_l3_off; + uint8_t icv_len; + uint8_t iv_len; + uint8_t pad_align; + uint8_t proto; /* next proto */ + /* template for crypto op fields */ + struct { + union sym_op_ofslen cipher; + union sym_op_ofslen auth; + uint8_t type; + uint8_t status; + uint8_t sess_type; + } ctp; + struct { + uint64_t v8; + uint64_t v[IPSEC_MAX_IV_SIZE / sizeof(uint64_t)]; + } iv; + uint8_t hdr[IPSEC_MAX_HDR_SIZE]; + + struct { + struct rte_security_session *sec; + uint32_t ol_flags; + struct rte_security_ctx *sctx; + + /* + * !!! should be removed if we do crypto sym session properly + * bitmap of crypto devs for which that session was initialised. + */ + rte_ymm_t cdev_bmap; + + /* + * !!! as alternative we need a space in cryptodev_sym_session + * to store ptr to SA (uint64_t udata or so). + */ + struct rte_cryptodev_sym_session crypto; + } session __rte_cache_min_aligned; + +} __rte_cache_aligned; + +#define CS_TO_SA(cs) ((cs) - offsetof(struct rte_ipsec_sa, session.crypto)) + +/* some helper structures */ +struct crypto_xform { + struct rte_crypto_auth_xform *auth; + struct rte_crypto_cipher_xform *cipher; + struct rte_crypto_aead_xform *aead; +}; + +static inline struct rte_ipsec_sa * +cses2sa(uintptr_t p) +{ + p -= offsetof(struct rte_ipsec_sa, session.crypto); + return (struct rte_ipsec_sa *)p; +} + +static int +check_crypto_xform(struct crypto_xform *xform) +{ + uintptr_t p; + + p = (uintptr_t)xform->auth | (uintptr_t)xform->cipher; + + /* either aead or both auth and cipher should be not NULLs */ + if (xform->aead) { + if (p) + return -EINVAL; + } else if (p == (uintptr_t)xform->auth) { + return -EINVAL; + } + + return 0; +} + +static int +fill_crypto_xform(struct crypto_xform *xform, + const struct rte_ipsec_sa_prm *prm) +{ + struct rte_crypto_sym_xform *xf; + + memset(xform, 0, sizeof(*xform)); + + for (xf = prm->crypto.xform; xf != NULL; xf = xf->next) { + if (xf->type == RTE_CRYPTO_SYM_XFORM_AUTH) { + if (xform->auth != NULL) + return -EINVAL; + xform->auth = &xf->auth; + } else if (xf->type == RTE_CRYPTO_SYM_XFORM_CIPHER) { + if (xform->cipher != NULL) + return -EINVAL; + xform->cipher = &xf->cipher; + } else if (xf->type == RTE_CRYPTO_SYM_XFORM_AEAD) { + if (xform->aead != NULL) + return -EINVAL; + xform->aead = &xf->aead; + } else + return -EINVAL; + } + + return check_crypto_xform(xform); +} + +/* + * !!! we might not need session fini - if cryptodev layer would have similar + * functionality. 
+ */
+static void
+crypto_session_fini(struct rte_ipsec_sa *sa)
+{
+	uint64_t v;
+	size_t sz;
+	uint32_t i, j;
+
+	sz = sizeof(sa->session.cdev_bmap.u64[0]) * CHAR_BIT;
+
+	for (i = 0; i != RTE_DIM(sa->session.cdev_bmap.u64); i++) {
+
+		v = sa->session.cdev_bmap.u64[i];
+		for (j = 0; v != 0; v >>= 1, j++) {
+			if ((v & 1) != 0)
+				rte_cryptodev_sym_session_clear(i * sz + j,
+					&sa->session.crypto);
+		}
+		sa->session.cdev_bmap.u64[i] = 0;
+	}
+}
+
+static int
+crypto_session_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm)
+{
+	size_t sz;
+	uint32_t i, k;
+	int32_t rc;
+
+	rc = 0;
+	sz = sizeof(sa->session.cdev_bmap.u64[0]) * CHAR_BIT;
+
+	for (i = 0; i != prm->crypto.nb_dev; i++) {
+
+		k = prm->crypto.devid[i];
+		rc = rte_cryptodev_sym_session_init(k, &sa->session.crypto,
+			prm->crypto.xform, prm->crypto.pool);
+		if (rc != 0)
+			break;
+		sa->session.cdev_bmap.u64[k / sz] |= 1ULL << (k % sz);
+	}
+
+	return rc;
+}
+
+uint64_t __rte_experimental
+rte_ipsec_sa_type(const struct rte_ipsec_sa *sa)
+{
+	return sa->type;
+}
+
+size_t __rte_experimental
+rte_ipsec_sa_size(void)
+{
+	size_t sz;
+
+	sz = sizeof(struct rte_ipsec_sa) +
+		rte_cryptodev_sym_get_header_session_size();
+	sz = RTE_ALIGN_CEIL(sz, RTE_CACHE_LINE_SIZE);
+	return sz;
+}
+
+void __rte_experimental
+rte_ipsec_sa_fini(struct rte_ipsec_sa *sa)
+{
+	size_t sz;
+
+	sz = rte_ipsec_sa_size();
+	crypto_session_fini(sa);
+	memset(sa, 0, sz);
+}
+
+static uint64_t
+fill_sa_type(const struct rte_ipsec_sa_prm *prm)
+{
+	uint64_t tp;
+
+	tp = 0;
+
+	if (prm->ipsec_xform.proto == RTE_SECURITY_IPSEC_SA_PROTO_AH)
+		tp |= RTE_IPSEC_SATP_PROTO_AH;
+	else if (prm->ipsec_xform.proto == RTE_SECURITY_IPSEC_SA_PROTO_ESP)
+		tp |= RTE_IPSEC_SATP_PROTO_ESP;
+
+	if (prm->ipsec_xform.direction == RTE_SECURITY_IPSEC_SA_DIR_EGRESS)
+		tp |= RTE_IPSEC_SATP_DIR_OB;
+	else
+		tp |= RTE_IPSEC_SATP_DIR_IB;
+
+	if (prm->ipsec_xform.mode == RTE_SECURITY_IPSEC_SA_MODE_TUNNEL) {
+		if (prm->ipsec_xform.tunnel.type ==
+				RTE_SECURITY_IPSEC_TUNNEL_IPV4)
+			tp |= RTE_IPSEC_SATP_MODE_TUNLV4;
+		else
+			tp |= RTE_IPSEC_SATP_MODE_TUNLV6;
+
+		if (prm->tun.next_proto == IPPROTO_IPIP)
+			tp |= RTE_IPSEC_SATP_IPV4;
+		else if (prm->tun.next_proto == IPPROTO_IPV6)
+			tp |= RTE_IPSEC_SATP_IPV6;
+	} else {
+		tp |= RTE_IPSEC_SATP_MODE_TRANS;
+		if (prm->trs.proto == IPPROTO_IPIP)
+			tp |= RTE_IPSEC_SATP_IPV4;
+		else if (prm->trs.proto == IPPROTO_IPV6)
+			tp |= RTE_IPSEC_SATP_IPV6;
+	}
+
+	/* !!!
some inline ipsec support right now */ + if (prm->sec.type == RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO) + tp |= RTE_IPSEC_SATP_USE_INLN; + else + tp |= RTE_IPSEC_SATP_USE_LKSD; + + return tp; +} + +static void +esp_inb_tun_init(struct rte_ipsec_sa *sa) +{ + /* these params may differ with new algorithms support */ + sa->ctp.auth.offset = 0; + sa->ctp.auth.length = sa->icv_len; + sa->ctp.cipher.offset = sizeof(struct esp_hdr) + sa->iv_len; + sa->ctp.cipher.length = sa->ctp.auth.length + sa->ctp.cipher.offset; +} + +static void +esp_outb_tun_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm) +{ + sa->hdr_len = prm->tun.hdr_len; + sa->hdr_l3_off = prm->tun.hdr_l3_off; + memcpy(sa->hdr, prm->tun.hdr, sa->hdr_len); + + /* these params may differ with new algorithms support */ + sa->ctp.auth.offset = sa->hdr_len; + sa->ctp.auth.length = sizeof(struct esp_hdr) + sa->iv_len; + sa->ctp.cipher.offset = sa->hdr_len + sizeof(struct esp_hdr); + sa->ctp.cipher.length = sa->iv_len; +} + +static int +esp_tun_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm, + const struct crypto_xform *cxf) +{ + int32_t rc = 0; + + if (cxf->aead != NULL) { + if (cxf->aead->algo != RTE_CRYPTO_AEAD_AES_GCM) + return -EINVAL; + sa->aad_len = cxf->aead->aad_length; + sa->icv_len = cxf->aead->digest_length; + sa->iv_len = cxf->aead->iv.length; + sa->iv_ptr = sa->iv.v; + sa->pad_align = 4; + } else { + sa->aad_len = 0; + sa->icv_len = cxf->auth->digest_length; + if (cxf->cipher->algo == RTE_CRYPTO_CIPHER_NULL) { + sa->pad_align = 4; + sa->iv_len = 0; + sa->iv_ptr = sa->iv.v; + } else if (cxf->cipher->algo == RTE_CRYPTO_CIPHER_AES_CBC) { + sa->pad_align = sizeof(sa->iv.v); + sa->iv_len = sizeof(sa->iv.v); + sa->iv_ptr = sa->iv.v; + memset(sa->iv.v, 0, sizeof(sa->iv.v)); + } else + return -EINVAL; + } + + sa->type = fill_sa_type(prm); + sa->flowid = prm->flowid; + sa->spi = rte_cpu_to_be_32(prm->ipsec_xform.spi); + sa->salt = prm->ipsec_xform.salt; + sa->sqn = 0; + + sa->proto = prm->tun.next_proto; + + if ((sa->type & RTE_IPSEC_SATP_DIR_MASK) == RTE_IPSEC_SATP_DIR_IB) + esp_inb_tun_init(sa); + else + esp_outb_tun_init(sa, prm); + + sa->ctp.type = RTE_CRYPTO_OP_TYPE_SYMMETRIC; + sa->ctp.status = RTE_CRYPTO_OP_STATUS_NOT_PROCESSED; + sa->ctp.sess_type = RTE_CRYPTO_OP_WITH_SESSION; + + /* pass info required for inline outbound */ + sa->session.sctx = prm->sec.sctx; + sa->session.sec = prm->sec.sses; + sa->session.ol_flags = prm->sec.ol_flags; + + if ((sa->type & RTE_IPSEC_SATP_USE_MASK) != RTE_IPSEC_SATP_USE_INLN) + rc = crypto_session_init(sa, prm); + return rc; +} + +int __rte_experimental +rte_ipsec_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm) +{ + int32_t rc; + struct crypto_xform cxf; + + if (sa == NULL || prm == NULL) + return -EINVAL; + + /* only esp inbound and outbound tunnel is supported right now */ + if (prm->ipsec_xform.proto != RTE_SECURITY_IPSEC_SA_PROTO_ESP || + prm->ipsec_xform.mode != + RTE_SECURITY_IPSEC_SA_MODE_TUNNEL) + return -EINVAL; + + /* only inline crypto or none action type are supported */ + if (!(prm->sec.type == RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO || + prm->sec.type == RTE_SECURITY_ACTION_TYPE_NONE)) + return -EINVAL; + + if (prm->tun.hdr_len > sizeof(sa->hdr)) + return -EINVAL; + + rc = fill_crypto_xform(&cxf, prm); + if (rc != 0) + return rc; + + rc = esp_tun_init(sa, prm, &cxf); + if (rc != 0) + rte_ipsec_sa_fini(sa); + + return rc; +} + +static inline void +esp_outb_tun_cop_prepare(struct rte_crypto_op *cop, + const struct rte_ipsec_sa *sa, 
struct rte_mbuf *mb, + const union sym_op_data *icv, uint32_t plen) +{ + struct rte_crypto_sym_op *sop; + + cop->type = sa->ctp.type; + cop->status = sa->ctp.status; + cop->sess_type = sa->ctp.sess_type; + + sop = cop->sym; + + /* fill sym op fields */ + sop->session = (void *)(uintptr_t)&sa->session.crypto; + sop->m_src = mb; + + sop->cipher.data.offset = sa->ctp.cipher.offset; + sop->cipher.data.length = sa->ctp.cipher.length + plen; + sop->auth.data.offset = sa->ctp.auth.offset; + sop->auth.data.length = sa->ctp.auth.length + plen; + sop->auth.digest.data = icv->va; + sop->auth.digest.phys_addr = icv->pa; + + /* !!! fill sym op aead fields */ +} + +static inline int32_t +esp_outb_tun_pkt_prepare(struct rte_ipsec_sa *sa, struct rte_mbuf *mb, + union sym_op_data *icv) +{ + uint32_t clen, hlen, pdlen, pdofs, tlen; + struct rte_mbuf *ml; + struct esp_hdr *esph; + struct esp_tail *espt; + char *ph, *pt; + uint64_t *iv; + + hlen = sa->hdr_len + sa->iv_len + sizeof(*esph); + /* calculate padding and tail space required */ + + /* number of bytes to encrypt */ + clen = mb->pkt_len + sizeof(*espt); + clen = RTE_ALIGN_CEIL(clen, sa->pad_align); + + /* pad length + esp tail */ + pdlen = clen - mb->pkt_len; + tlen = pdlen + sa->icv_len; + + /* do append and prepend */ + ml = rte_pktmbuf_lastseg(mb); + if (tlen + sa->aad_len > rte_pktmbuf_tailroom(ml)) + return -ENOSPC; + + /* prepend header */ + ph = rte_pktmbuf_prepend(mb, hlen); + if (ph == NULL) + return -ENOSPC; + + /* append tail */ + pdofs = ml->data_len; + ml->data_len += tlen; + mb->pkt_len += tlen; + pt = rte_pktmbuf_mtod_offset(ml, typeof(pt), pdofs); + + /* copy tunnel pkt header */ + rte_memcpy(ph, sa->hdr, sa->hdr_len); + + /* update original and new ip header fields */ + if (sa->type & RTE_IPSEC_SATP_MODE_TUNLV4) { + struct ipv4_hdr *l3h; + l3h = (struct ipv4_hdr *)(ph + sa->hdr_l3_off); + l3h->packet_id = rte_cpu_to_be_16(sa->sqn); + l3h->total_length = rte_cpu_to_be_16(mb->pkt_len - + sa->hdr_l3_off); + } else { + struct ipv6_hdr *l3h; + l3h = (struct ipv6_hdr *)(ph + sa->hdr_l3_off); + l3h->payload_len = rte_cpu_to_be_16(mb->pkt_len - + sa->hdr_l3_off - sizeof(*l3h)); + } + + /* update spi, seqn and iv */ + esph = (struct esp_hdr *)(ph + sa->hdr_len); + iv = (uint64_t *)(esph + 1); + + esph->spi = sa->spi; + esph->seq = rte_cpu_to_be_32(sa->sqn); + rte_memcpy(iv, sa->iv_ptr, sa->iv_len); + + /* offset for ICV */ + pdofs += pdlen; + + /* pad length */ + pdlen -= sizeof(*espt); + + /* copy padding data */ + rte_memcpy(pt, esp_pad_bytes, pdlen); + + /* update esp trailer */ + espt = (struct esp_tail *)(pt + pdlen); + espt->pad_len = pdlen; + espt->next_proto = sa->proto; + + /* !!! 
fill aad fields, if any (aad fields are placed after icv */ + + icv->va = rte_pktmbuf_mtod_offset(ml, void *, pdofs); + icv->pa = rte_pktmbuf_iova_offset(ml, pdofs); + + return clen; +} + +static inline uint32_t +esn_outb_check_sqn(struct rte_ipsec_sa *sa, uint32_t num) +{ + RTE_SET_USED(sa); + return num; +} + +static inline int +esn_inb_check_sqn(struct rte_ipsec_sa *sa, uint32_t sqn) +{ + RTE_SET_USED(sa); + RTE_SET_USED(sqn); + return 0; +} + +static inline uint16_t +esp_outb_tun_prepare(struct rte_ipsec_sa *sa, struct rte_mbuf *mb[], + struct rte_crypto_op *cop[], uint16_t num) +{ + int32_t rc; + uint32_t i, n; + union sym_op_data icv; + + n = esn_outb_check_sqn(sa, num); + + for (i = 0; i != n; i++) { + + sa->sqn++; + sa->iv.v8 = rte_cpu_to_be_64(sa->sqn); + + /* update the packet itself */ + rc = esp_outb_tun_pkt_prepare(sa, mb[i], &icv); + if (rc < 0) { + rte_errno = -rc; + break; + } + + /* update crypto op */ + esp_outb_tun_cop_prepare(cop[i], sa, mb[i], &icv, rc); + } + + return i; +} + +static inline int32_t +esp_inb_tun_cop_prepare(struct rte_crypto_op *cop, + const struct rte_ipsec_sa *sa, struct rte_mbuf *mb, + const union sym_op_data *icv, uint32_t pofs, uint32_t plen) +{ + struct rte_crypto_sym_op *sop; + uint64_t *ivc, *ivp; + uint32_t clen; + + clen = plen - sa->ctp.cipher.length; + if ((int32_t)clen < 0 || (clen & (sa->pad_align - 1)) != 0) + return -EINVAL; + + cop->type = sa->ctp.type; + cop->status = sa->ctp.status; + cop->sess_type = sa->ctp.sess_type; + + sop = cop->sym; + + /* fill sym op fields */ + sop->session = (void *)(uintptr_t)&sa->session.crypto; + sop->m_src = mb; + + sop->cipher.data.offset = pofs + sa->ctp.cipher.offset; + sop->cipher.data.length = clen; + sop->auth.data.offset = pofs + sa->ctp.auth.offset; + sop->auth.data.length = plen - sa->ctp.auth.length; + sop->auth.digest.data = icv->va; + sop->auth.digest.phys_addr = icv->pa; + + /* !!! 
fill sym op aead fields */ + + /* copy iv from the input packet to the cop */ + ivc = (uint64_t *)(sop + 1); + ivp = rte_pktmbuf_mtod_offset(mb, uint64_t *, + pofs + sizeof(struct esp_hdr)); + rte_memcpy(ivc, ivp, sa->iv_len); + return 0; +} + +static inline int32_t +esp_inb_tun_pkt_prepare(const struct rte_ipsec_sa *sa, struct rte_mbuf *mb, + uint32_t hlen, union sym_op_data *icv) +{ + struct rte_mbuf *ml; + uint32_t icv_ofs, plen; + + plen = mb->pkt_len; + plen = plen - hlen; + + ml = rte_pktmbuf_lastseg(mb); + icv_ofs = ml->data_len - sa->icv_len; + + icv->va = rte_pktmbuf_mtod_offset(ml, void *, icv_ofs); + icv->pa = rte_pktmbuf_iova_offset(ml, icv_ofs); + + return plen; +} + +static inline uint16_t +esp_inb_tun_prepare(struct rte_ipsec_sa *sa, struct rte_mbuf *mb[], + struct rte_crypto_op *cop[], uint16_t num) +{ + int32_t rc; + uint32_t i, hl; + union sym_op_data icv; + + for (i = 0; i != num; i++) { + + hl = mb[i]->l2_len + mb[i]->l3_len; + rc = esp_inb_tun_pkt_prepare(sa, mb[i], hl, &icv); + + if (rc >= 0) + rc = esp_inb_tun_cop_prepare(cop[i], sa, mb[i], &icv, + hl, rc); + if (rc < 0) { + rte_errno = -rc; + break; + } + } + + return i; +} + +uint16_t __rte_experimental +rte_ipsec_crypto_prepare(struct rte_ipsec_sa *sa, struct rte_mbuf *mb[], + struct rte_crypto_op *cop[], uint16_t num) +{ + static const uint64_t msk = RTE_IPSEC_SATP_DIR_MASK | + RTE_IPSEC_SATP_MODE_MASK; + + switch (sa->type & msk) { + case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TUNLV4): + case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TUNLV6): + return esp_inb_tun_prepare(sa, mb, cop, num); + case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV4): + case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV6): + return esp_outb_tun_prepare(sa, mb, cop, num); + default: + rte_errno = ENOTSUP; + return 0; + } +} + +/* + * !!! create something more generic (and smarter) + * ideally in librte_mbuf + */ +static inline void +free_mbuf_bulk(struct rte_mbuf *mb[], uint32_t num) +{ + uint32_t i; + + for (i = 0; i != num; i++) + rte_pktmbuf_free(mb[i]); +} + +/* exclude NULLs from the final list of packets. */ +static inline uint32_t +compress_pkt_list(struct rte_mbuf *pkt[], uint32_t nb_pkt, uint32_t nb_zero) +{ + uint32_t i, j, k, l; + + for (j = nb_pkt; nb_zero != 0 && j-- != 0; ) { + + /* found a hole. */ + if (pkt[j] == NULL) { + + /* find how big is it. */ + for (i = j; i-- != 0 && pkt[i] == NULL; ) + ; + /* fill the hole. 
*/ + for (k = j + 1, l = i + 1; k != nb_pkt; k++, l++) + pkt[l] = pkt[k]; + + nb_pkt -= j - i; + nb_zero -= j - i; + j = i + 1; + } + } + + return nb_pkt; +} + +static inline int +esp_inb_tun_single_pkt_process(struct rte_ipsec_sa *sa, struct rte_mbuf *mb, + uint32_t icv_len) +{ + uint32_t hlen, tlen; + struct esp_hdr *esph; + struct esp_tail *espt; + struct rte_mbuf *ml; + char *pd; + + ml = rte_pktmbuf_lastseg(mb); + espt = rte_pktmbuf_mtod_offset(ml, struct esp_tail *, + ml->data_len - icv_len - sizeof(*espt)); + + /* cut of ICV, ESP tail and padding bytes */ + tlen = icv_len + sizeof(*espt) + espt->pad_len; + ml->data_len -= tlen; + mb->pkt_len -= tlen; + + /* cut of L2/L3 headers, ESP header and IV */ + hlen = mb->l2_len + mb->l3_len; + esph = rte_pktmbuf_mtod_offset(mb, struct esp_hdr *, hlen); + rte_pktmbuf_adj(mb, hlen + sa->ctp.cipher.offset); + + /* reset mbuf metatdata: L2/L3 len, packet type */ + mb->packet_type = RTE_PTYPE_UNKNOWN; + mb->l2_len = 0; + mb->l3_len = 0; + + /* clear the PKT_RX_SEC_OFFLOAD flag if set */ + mb->ol_flags &= ~(mb->ol_flags & PKT_RX_SEC_OFFLOAD); + + /* + * check spi, sqn, padding and next proto. + * drop packet if something is wrong. + * ??? consider move spi and sqn check to prepare. + */ + + pd = (char *)espt - espt->pad_len; + if (esph->spi != sa->spi || + esn_inb_check_sqn(sa, esph->seq) != 0 || + espt->next_proto != sa->proto || + memcmp(pd, esp_pad_bytes, espt->pad_len)) + return -EINVAL; + + return 0; +} + +static inline uint16_t +esp_inb_tun_pkt_process(struct rte_ipsec_sa *sa, struct rte_mbuf *mb[], + struct rte_mbuf *dr[], uint16_t num) +{ + uint32_t i, k, icv_len; + + icv_len = sa->icv_len; + + k = 0; + for (i = 0; i != num; i++) { + if (esp_inb_tun_single_pkt_process(sa, mb[i], icv_len)) { + dr[k++] = mb[i]; + mb[i] = NULL; + } + } + + if (k != 0) + compress_pkt_list(mb, num, k); + + return num - k; +} + +static inline uint16_t +esp_pkt_process(struct rte_ipsec_sa *sa, struct rte_mbuf *mb[], + struct rte_mbuf *dr[], uint16_t num) +{ + static const uint64_t msk = RTE_IPSEC_SATP_DIR_MASK | + RTE_IPSEC_SATP_MODE_MASK; + + switch (sa->type & msk) { + case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TUNLV4): + case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TUNLV6): + return esp_inb_tun_pkt_process(sa, mb, dr, num); + case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV4): + case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV6): + return num; + default: + return 0; + } +} + +static inline uint16_t +esp_process(const struct rte_crypto_op *cop[], struct rte_mbuf *mb[], + struct rte_ipsec_group grp[], uint16_t num) +{ + uint32_t cnt, i, j, k, n; + uintptr_t ns, ps; + struct rte_ipsec_sa *sa; + struct rte_mbuf *m, *dr[num]; + + j = 0; + k = 0; + n = 0; + ps = 0; + + for (i = 0; i != num; i++) { + + m = cop[i]->sym[0].m_src; + ns = (uintptr_t)cop[i]->sym[0].session; + + if (cop[i]->status != RTE_CRYPTO_OP_STATUS_SUCCESS) { + dr[k++] = m; + continue; + } + + if (ps != ns) { + + if (ps != 0) { + sa = cses2sa(ps); + + /* setup count for current group */ + grp[n].cnt = mb + j - grp[n].m; + + /* do SA type specific processing */ + cnt = esp_pkt_process(sa, grp[n].m, dr + k, + grp[n].cnt); + + /* some packets were dropped */ + cnt = grp[n].cnt - cnt; + if (cnt != 0) { + grp[n].cnt -= cnt; + j -= cnt; + k += cnt; + } + + /* open new group */ + n++; + } + + grp[n].flowid = cses2sa(ns)->flowid; + grp[n].m = mb + j; + ps = ns; + } + + mb[j++] = m; + } + + if (ps != 0) { + sa = cses2sa(ps); + grp[n].cnt = mb + j - grp[n].m; + cnt = 
esp_pkt_process(sa, grp[n].m, dr + k, grp[n].cnt); + cnt = grp[n].cnt - cnt; + if (cnt != 0) { + grp[n].cnt -= cnt; + j -= cnt; + k += cnt; + } + n++; + } + + if (k != 0) + free_mbuf_bulk(dr, k); + + return n; +} + +uint16_t __rte_experimental +rte_ipsec_crypto_process(const struct rte_crypto_op *cop[], + struct rte_mbuf *mb[], struct rte_ipsec_group grp[], uint16_t num) +{ + return esp_process(cop, mb, grp, num); +} + +static inline uint16_t +inline_outb_tun_pkt_process(struct rte_ipsec_sa *sa, struct rte_mbuf *mb[], + uint16_t num) +{ + uint32_t i; + struct rte_mbuf *m; + int rc; + union sym_op_data icv; + + for (i = 0; i != num; i++) { + m = mb[i]; + + sa->sqn++; + sa->iv.v8 = rte_cpu_to_be_64(sa->sqn); + + /* update the packet itself */ + rc = esp_outb_tun_pkt_prepare(sa, m, &icv); + if (rc < 0) { + rte_errno = -rc; + break; + } + + m->ol_flags |= PKT_TX_SEC_OFFLOAD; + + if (sa->session.ol_flags & RTE_SECURITY_TX_OLOAD_NEED_MDATA) + rte_security_set_pkt_metadata(sa->session.sctx, + sa->session.sec, m, NULL); + } + + return i; +} + +static inline uint16_t +inline_inb_tun_pkt_process(struct rte_ipsec_sa *sa, struct rte_mbuf *mb[], + uint16_t num) +{ + uint32_t i, icv_len; + int rc; + + icv_len = sa->icv_len; + + for (i = 0; i != num; i++) { + rc = esp_inb_tun_single_pkt_process(sa, mb[i], icv_len); + if (rc != 0) { + rte_errno = -rc; + break; + } + } + + return i; +} + +uint16_t __rte_experimental +rte_ipsec_inline_process(struct rte_ipsec_sa *sa, struct rte_mbuf *mb[], + uint16_t num) +{ + static const uint64_t msk = RTE_IPSEC_SATP_DIR_MASK | + RTE_IPSEC_SATP_MODE_MASK; + + switch (sa->type & msk) { + case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TUNLV4): + case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TUNLV6): + return inline_inb_tun_pkt_process(sa, mb, num); + case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV4): + case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV6): + return inline_outb_tun_pkt_process(sa, mb, num); + default: + rte_errno = ENOTSUP; + } + + return 0; +} diff --git a/lib/librte_net/rte_esp.h b/lib/librte_net/rte_esp.h index f77ec2eb2..8e1b3d2dd 100644 --- a/lib/librte_net/rte_esp.h +++ b/lib/librte_net/rte_esp.h @@ -11,7 +11,7 @@ * ESP-related defines */ -#include +#include #ifdef __cplusplus extern "C" { @@ -25,6 +25,14 @@ struct esp_hdr { rte_be32_t seq; /**< packet sequence number */ } __attribute__((__packed__)); +/** + * ESP Trailer + */ +struct esp_tail { + uint8_t pad_len; /**< number of pad bytes (0-255) */ + uint8_t next_proto; /**< IPv4 or IPv6 or next layer header */ +} __attribute__((__packed__)); + #ifdef __cplusplus } #endif diff --git a/lib/meson.build b/lib/meson.build index eb91f100b..bb07e67bd 100644 --- a/lib/meson.build +++ b/lib/meson.build @@ -21,6 +21,8 @@ libraries = [ 'compat', # just a header, used for versioning 'kni', 'latencystats', 'lpm', 'member', 'meter', 'power', 'pdump', 'rawdev', 'reorder', 'sched', 'security', 'vhost', + # ipsec lib depends on crypto and security + 'ipsec', # add pkt framework libs which use other libs from above 'port', 'table', 'pipeline', # flow_classify lib depends on pkt framework table lib diff --git a/mk/rte.app.mk b/mk/rte.app.mk index de33883be..7f4344ecd 100644 --- a/mk/rte.app.mk +++ b/mk/rte.app.mk @@ -62,6 +62,8 @@ ifeq ($(CONFIG_RTE_LIBRTE_BPF_ELF),y) _LDLIBS-$(CONFIG_RTE_LIBRTE_BPF) += -lelf endif +_LDLIBS-$(CONFIG_RTE_LIBRTE_IPSEC) += -lrte_ipsec + _LDLIBS-y += --whole-archive _LDLIBS-$(CONFIG_RTE_LIBRTE_CFGFILE) += -lrte_cfgfile From patchwork Tue Oct 9 18:23:33 2018 Content-Type: 
text/plain; charset="utf-8"
X-Patchwork-Submitter: "Ananyev, Konstantin"
X-Patchwork-Id: 46404
X-Patchwork-Delegate: thomas@monjalon.net
From: Konstantin Ananyev
To: dev@dpdk.org
Cc: Konstantin Ananyev
Date: Tue, 9 Oct 2018 19:23:33 +0100
Message-Id: <1539109420-13412-3-git-send-email-konstantin.ananyev@intel.com>
In-Reply-To: <1535129598-27301-1-git-send-email-konstantin.ananyev@intel.com>
References: <1535129598-27301-1-git-send-email-konstantin.ananyev@intel.com>
Subject: [dpdk-dev] [RFC v2 2/9] security: add opaque userdata pointer into
 security session

Add 'uint64_t userdata' inside struct rte_security_session.
That allows the upper layer to easily associate some user-defined data
with the session.
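For illustration only (not part of the patch), a minimal sketch of the
intended usage, e.g. by librte_ipsec; the helper names below are
hypothetical, while struct rte_security_session.userdata is the field
added by this patch:

	static inline void
	session_set_sa(struct rte_security_session *ss,
		struct rte_ipsec_sa *sa)
	{
		/* userdata is opaque to rte_security itself */
		ss->userdata = (uintptr_t)sa;
	}

	static inline struct rte_ipsec_sa *
	session_get_sa(const struct rte_security_session *ss)
	{
		/* recover the SA, e.g. on the inline-crypto RX path */
		return (struct rte_ipsec_sa *)(uintptr_t)ss->userdata;
	}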
Signed-off-by: Konstantin Ananyev --- lib/librte_security/rte_security.h | 2 ++ 1 file changed, 2 insertions(+) diff --git a/lib/librte_security/rte_security.h b/lib/librte_security/rte_security.h index b0d1b97ee..a945dc515 100644 --- a/lib/librte_security/rte_security.h +++ b/lib/librte_security/rte_security.h @@ -257,6 +257,8 @@ struct rte_security_session_conf { struct rte_security_session { void *sess_private_data; /**< Private session material */ + uint64_t userdata; + /**< Opaque user defined data */ }; /** From patchwork Tue Oct 9 18:23:34 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Ananyev, Konstantin" X-Patchwork-Id: 46403 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 67A5D1B519; Tue, 9 Oct 2018 20:23:59 +0200 (CEST) Received: from mga02.intel.com (mga02.intel.com [134.134.136.20]) by dpdk.org (Postfix) with ESMTP id 85CFD1B45E for ; Tue, 9 Oct 2018 20:23:56 +0200 (CEST) X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga008.fm.intel.com ([10.253.24.58]) by orsmga101.jf.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 09 Oct 2018 11:23:54 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.54,361,1534834800"; d="scan'208";a="77469427" Received: from sivswdev02.ir.intel.com (HELO localhost.localdomain) ([10.237.217.46]) by fmsmga008.fm.intel.com with ESMTP; 09 Oct 2018 11:23:53 -0700 From: Konstantin Ananyev To: dev@dpdk.org Cc: Konstantin Ananyev Date: Tue, 9 Oct 2018 19:23:34 +0100 Message-Id: <1539109420-13412-4-git-send-email-konstantin.ananyev@intel.com> X-Mailer: git-send-email 1.7.0.7 In-Reply-To: <1535129598-27301-1-git-send-email-konstantin.ananyev@intel.com> References: <1535129598-27301-1-git-send-email-konstantin.ananyev@intel.com> Subject: [dpdk-dev] [RFC v2 3/9] net: add ESP trailer structure definition X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Signed-off-by: Konstantin Ananyev --- lib/librte_net/rte_esp.h | 10 +++++++++- 1 file changed, 9 insertions(+), 1 deletion(-) diff --git a/lib/librte_net/rte_esp.h b/lib/librte_net/rte_esp.h index f77ec2eb2..8e1b3d2dd 100644 --- a/lib/librte_net/rte_esp.h +++ b/lib/librte_net/rte_esp.h @@ -11,7 +11,7 @@ * ESP-related defines */ -#include +#include #ifdef __cplusplus extern "C" { @@ -25,6 +25,14 @@ struct esp_hdr { rte_be32_t seq; /**< packet sequence number */ } __attribute__((__packed__)); +/** + * ESP Trailer + */ +struct esp_tail { + uint8_t pad_len; /**< number of pad bytes (0-255) */ + uint8_t next_proto; /**< IPv4 or IPv6 or next layer header */ +} __attribute__((__packed__)); + #ifdef __cplusplus } #endif From patchwork Tue Oct 9 18:23:35 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Ananyev, Konstantin" X-Patchwork-Id: 46410 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 9C7E31B603; Tue, 9 Oct 2018 20:24:25 +0200 (CEST) Received: from mga05.intel.com (mga05.intel.com [192.55.52.43]) by dpdk.org (Postfix) with ESMTP id 
712DE1B53E for ; Tue, 9 Oct 2018 20:24:18 +0200 (CEST) X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga008.fm.intel.com ([10.253.24.58]) by fmsmga105.fm.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 09 Oct 2018 11:24:15 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.54,361,1534834800"; d="scan'208";a="77469431" Received: from sivswdev02.ir.intel.com (HELO localhost.localdomain) ([10.237.217.46]) by fmsmga008.fm.intel.com with ESMTP; 09 Oct 2018 11:23:54 -0700 From: Konstantin Ananyev To: dev@dpdk.org Cc: Konstantin Ananyev , Mohammad Abdul Awal Date: Tue, 9 Oct 2018 19:23:35 +0100 Message-Id: <1539109420-13412-5-git-send-email-konstantin.ananyev@intel.com> X-Mailer: git-send-email 1.7.0.7 In-Reply-To: <1535129598-27301-1-git-send-email-konstantin.ananyev@intel.com> References: <1535129598-27301-1-git-send-email-konstantin.ananyev@intel.com> Subject: [dpdk-dev] [RFC v2 4/9] lib: introduce ipsec library X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Introduce librte_ipsec library. The library is supposed to utilize existing DPDK crypto-dev and security API to provide application with transparent IPsec processing API. That initial commit provides some base API to manage IPsec Security Association (SA) object. Signed-off-by: Mohammad Abdul Awal Signed-off-by: Konstantin Ananyev --- config/common_base | 5 + lib/Makefile | 2 + lib/librte_ipsec/Makefile | 24 +++ lib/librte_ipsec/ipsec_sqn.h | 48 ++++++ lib/librte_ipsec/meson.build | 10 ++ lib/librte_ipsec/rte_ipsec_sa.h | 139 ++++++++++++++++ lib/librte_ipsec/rte_ipsec_version.map | 10 ++ lib/librte_ipsec/sa.c | 282 +++++++++++++++++++++++++++++++++ lib/librte_ipsec/sa.h | 75 +++++++++ lib/meson.build | 2 + mk/rte.app.mk | 2 + 11 files changed, 599 insertions(+) create mode 100644 lib/librte_ipsec/Makefile create mode 100644 lib/librte_ipsec/ipsec_sqn.h create mode 100644 lib/librte_ipsec/meson.build create mode 100644 lib/librte_ipsec/rte_ipsec_sa.h create mode 100644 lib/librte_ipsec/rte_ipsec_version.map create mode 100644 lib/librte_ipsec/sa.c create mode 100644 lib/librte_ipsec/sa.h diff --git a/config/common_base b/config/common_base index acc5211bc..e7e66390b 100644 --- a/config/common_base +++ b/config/common_base @@ -885,6 +885,11 @@ CONFIG_RTE_LIBRTE_BPF=y CONFIG_RTE_LIBRTE_BPF_ELF=n # +# Compile librte_ipsec +# +CONFIG_RTE_LIBRTE_IPSEC=y + +# # Compile the test application # CONFIG_RTE_APP_TEST=y diff --git a/lib/Makefile b/lib/Makefile index 8c839425d..8cfc59054 100644 --- a/lib/Makefile +++ b/lib/Makefile @@ -105,6 +105,8 @@ DEPDIRS-librte_gso := librte_eal librte_mbuf librte_ethdev librte_net DEPDIRS-librte_gso += librte_mempool DIRS-$(CONFIG_RTE_LIBRTE_BPF) += librte_bpf DEPDIRS-librte_bpf := librte_eal librte_mempool librte_mbuf librte_ethdev +DIRS-$(CONFIG_RTE_LIBRTE_IPSEC) += librte_ipsec +DEPDIRS-librte_ipsec := librte_eal librte_mbuf librte_cryptodev librte_security ifeq ($(CONFIG_RTE_EXEC_ENV_LINUXAPP),y) DIRS-$(CONFIG_RTE_LIBRTE_KNI) += librte_kni diff --git a/lib/librte_ipsec/Makefile b/lib/librte_ipsec/Makefile new file mode 100644 index 000000000..7758dcc6d --- /dev/null +++ b/lib/librte_ipsec/Makefile @@ -0,0 +1,24 @@ +# SPDX-License-Identifier: BSD-3-Clause +# Copyright(c) 2018 Intel Corporation + +include $(RTE_SDK)/mk/rte.vars.mk + +# library name +LIB = librte_ipsec.a + +CFLAGS += -O3 
+CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR) +CFLAGS += -DALLOW_EXPERIMENTAL_API +LDLIBS += -lrte_eal -lrte_mbuf -lrte_cryptodev -lrte_security + +EXPORT_MAP := rte_ipsec_version.map + +LIBABIVER := 1 + +# all source are stored in SRCS-y +SRCS-$(CONFIG_RTE_LIBRTE_IPSEC) += sa.c + +# install header files +SYMLINK-$(CONFIG_RTE_LIBRTE_IPSEC)-include += rte_ipsec_sa.h + +include $(RTE_SDK)/mk/rte.lib.mk diff --git a/lib/librte_ipsec/ipsec_sqn.h b/lib/librte_ipsec/ipsec_sqn.h new file mode 100644 index 000000000..d0d122824 --- /dev/null +++ b/lib/librte_ipsec/ipsec_sqn.h @@ -0,0 +1,48 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2018 Intel Corporation + */ + +#ifndef _IPSEC_SQN_H_ +#define _IPSEC_SQN_H_ + +#define WINDOW_BUCKET_BITS 6 /* uint64_t */ +#define WINDOW_BUCKET_SIZE (1 << WINDOW_BUCKET_BITS) +#define WINDOW_BIT_LOC_MASK (WINDOW_BUCKET_SIZE - 1) + +/* minimum number of bucket, power of 2*/ +#define WINDOW_BUCKET_MIN 2 +#define WINDOW_BUCKET_MAX (INT16_MAX + 1) + +#define IS_ESN(sa) ((sa)->sqn_mask == UINT64_MAX) + +/** + * for given size, calculate required number of buckets. + */ +static uint32_t +replay_num_bucket(uint32_t wsz) +{ + uint32_t nb; + + nb = rte_align32pow2(RTE_ALIGN_MUL_CEIL(wsz, WINDOW_BUCKET_SIZE) / + WINDOW_BUCKET_SIZE); + nb = RTE_MAX(nb, (uint32_t)WINDOW_BUCKET_MIN); + + return nb; +} + +/** + * Based on number of buckets calculated required size for the + * structure that holds replay window and sequnce number (RSN) information. + */ +static size_t +rsn_size(uint32_t nb_bucket) +{ + size_t sz; + struct replay_sqn *rsn; + + sz = sizeof(*rsn) + nb_bucket * sizeof(rsn->window[0]); + sz = RTE_ALIGN_CEIL(sz, RTE_CACHE_LINE_SIZE); + return sz; +} + +#endif /* _IPSEC_SQN_H_ */ diff --git a/lib/librte_ipsec/meson.build b/lib/librte_ipsec/meson.build new file mode 100644 index 000000000..52c78eaeb --- /dev/null +++ b/lib/librte_ipsec/meson.build @@ -0,0 +1,10 @@ +# SPDX-License-Identifier: BSD-3-Clause +# Copyright(c) 2018 Intel Corporation + +allow_experimental_apis = true + +sources=files('sa.c') + +install_headers = files('rte_ipsec_sa.h') + +deps += ['mbuf', 'net', 'cryptodev', 'security'] diff --git a/lib/librte_ipsec/rte_ipsec_sa.h b/lib/librte_ipsec/rte_ipsec_sa.h new file mode 100644 index 000000000..0efda33de --- /dev/null +++ b/lib/librte_ipsec/rte_ipsec_sa.h @@ -0,0 +1,139 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2018 Intel Corporation + */ + +#ifndef _RTE_IPSEC_SA_H_ +#define _RTE_IPSEC_SA_H_ + +/** + * @file rte_ipsec_sa.h + * @b EXPERIMENTAL: this API may change without prior notice + * + * Defines API to manage IPsec Security Association (SA) objects. + */ + +#include +#include +#include + +#ifdef __cplusplus +extern "C" { +#endif + +/** + * An opaque structure to represent Security Association (SA). + */ +struct rte_ipsec_sa; + +/** + * SA initialization parameters. 
+ */ +struct rte_ipsec_sa_prm { + + uint64_t userdata; /**< provided and interpreted by user */ + uint64_t flags; /**< see RTE_IPSEC_SAFLAG_* below */ + /** ipsec configuration */ + struct rte_security_ipsec_xform ipsec_xform; + struct rte_crypto_sym_xform *crypto_xform; + union { + struct { + uint8_t hdr_len; /**< tunnel header len */ + uint8_t hdr_l3_off; /**< offset for IPv4/IPv6 header */ + uint8_t next_proto; /**< next header protocol */ + const void *hdr; /**< tunnel header template */ + } tun; /**< tunnel mode repated parameters */ + struct { + uint8_t proto; /**< next header protocol */ + } trs; /**< transport mode repated parameters */ + }; + + uint32_t replay_win_sz; + /**< window size to enable sequence replay attack handling. + * Replay checking is disabled if the window size is 0. + */ +}; + +/** + * SA type is an 64-bit value that contain the following information: + * - IP version (IPv4/IPv6) + * - IPsec proto (ESP/AH) + * - inbound/outbound + * - mode (TRANSPORT/TUNNEL) + * - for TUNNEL outer IP version (IPv4/IPv6) + * ... + */ + +enum { + RTE_SATP_LOG_IPV, + RTE_SATP_LOG_PROTO, + RTE_SATP_LOG_DIR, + RTE_SATP_LOG_MODE, + RTE_SATP_LOG_NUM +}; + +#define RTE_IPSEC_SATP_IPV_MASK (1ULL << RTE_SATP_LOG_IPV) +#define RTE_IPSEC_SATP_IPV4 (0ULL << RTE_SATP_LOG_IPV) +#define RTE_IPSEC_SATP_IPV6 (1ULL << RTE_SATP_LOG_IPV) + +#define RTE_IPSEC_SATP_PROTO_MASK (1ULL << RTE_SATP_LOG_PROTO) +#define RTE_IPSEC_SATP_PROTO_AH (0ULL << RTE_SATP_LOG_PROTO) +#define RTE_IPSEC_SATP_PROTO_ESP (1ULL << RTE_SATP_LOG_PROTO) + +#define RTE_IPSEC_SATP_DIR_MASK (1ULL << RTE_SATP_LOG_DIR) +#define RTE_IPSEC_SATP_DIR_IB (0ULL << RTE_SATP_LOG_DIR) +#define RTE_IPSEC_SATP_DIR_OB (1ULL << RTE_SATP_LOG_DIR) + +#define RTE_IPSEC_SATP_MODE_MASK (3ULL << RTE_SATP_LOG_MODE) +#define RTE_IPSEC_SATP_MODE_TRANS (0ULL << RTE_SATP_LOG_MODE) +#define RTE_IPSEC_SATP_MODE_TUNLV4 (1ULL << RTE_SATP_LOG_MODE) +#define RTE_IPSEC_SATP_MODE_TUNLV6 (2ULL << RTE_SATP_LOG_MODE) + +/** + * get type of given SA + * @return + * SA type value. + */ +uint64_t __rte_experimental +rte_ipsec_sa_type(const struct rte_ipsec_sa *sa); + +/** + * Calculate requied SA size based on provided input parameters. + * @param prm + * Parameters that wil be used to initialise SA object. + * @return + * - Actual size required for SA with given parameters. + * - -EINVAL if the parameters are invalid. + */ +int __rte_experimental +rte_ipsec_sa_size(const struct rte_ipsec_sa_prm *prm); + +/** + * initialise SA based on provided input parameters. + * @param sa + * SA object to initialise. + * @param prm + * Parameters used to initialise given SA object. + * @param size + * size of the provided buffer for SA. + * @return + * - Zero if operation completed successfully. + * - -EINVAL if the parameters are invalid. + * - -ENOSPC if the size of the provided buffer is not big enough. + */ +int __rte_experimental +rte_ipsec_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm, + uint32_t size); + +/** + * cleanup SA + * @param sa + * Pointer to SA object to de-initialize. 
+ */ +void __rte_experimental +rte_ipsec_sa_fini(struct rte_ipsec_sa *sa); + +#ifdef __cplusplus +} +#endif + +#endif /* _RTE_IPSEC_SA_H_ */ diff --git a/lib/librte_ipsec/rte_ipsec_version.map b/lib/librte_ipsec/rte_ipsec_version.map new file mode 100644 index 000000000..1a66726b8 --- /dev/null +++ b/lib/librte_ipsec/rte_ipsec_version.map @@ -0,0 +1,10 @@ +EXPERIMENTAL { + global: + + rte_ipsec_sa_fini; + rte_ipsec_sa_init; + rte_ipsec_sa_size; + rte_ipsec_sa_type; + + local: *; +}; diff --git a/lib/librte_ipsec/sa.c b/lib/librte_ipsec/sa.c new file mode 100644 index 000000000..913856a3d --- /dev/null +++ b/lib/librte_ipsec/sa.c @@ -0,0 +1,282 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2018 Intel Corporation + */ + +#include +#include +#include +#include + +#include "sa.h" +#include "ipsec_sqn.h" + +/* some helper structures */ +struct crypto_xform { + struct rte_crypto_auth_xform *auth; + struct rte_crypto_cipher_xform *cipher; + struct rte_crypto_aead_xform *aead; +}; + + +static int +check_crypto_xform(struct crypto_xform *xform) +{ + uintptr_t p; + + p = (uintptr_t)xform->auth | (uintptr_t)xform->cipher; + + /* either aead or both auth and cipher should be not NULLs */ + if (xform->aead) { + if (p) + return -EINVAL; + } else if (p == (uintptr_t)xform->auth) { + return -EINVAL; + } + + return 0; +} + +static int +fill_crypto_xform(struct crypto_xform *xform, + const struct rte_ipsec_sa_prm *prm) +{ + struct rte_crypto_sym_xform *xf; + + memset(xform, 0, sizeof(*xform)); + + for (xf = prm->crypto_xform; xf != NULL; xf = xf->next) { + if (xf->type == RTE_CRYPTO_SYM_XFORM_AUTH) { + if (xform->auth != NULL) + return -EINVAL; + xform->auth = &xf->auth; + } else if (xf->type == RTE_CRYPTO_SYM_XFORM_CIPHER) { + if (xform->cipher != NULL) + return -EINVAL; + xform->cipher = &xf->cipher; + } else if (xf->type == RTE_CRYPTO_SYM_XFORM_AEAD) { + if (xform->aead != NULL) + return -EINVAL; + xform->aead = &xf->aead; + } else + return -EINVAL; + } + + return check_crypto_xform(xform); +} + +uint64_t __rte_experimental +rte_ipsec_sa_type(const struct rte_ipsec_sa *sa) +{ + return sa->type; +} + +static int32_t +ipsec_sa_size(uint32_t wsz, uint64_t type, uint32_t *nb_bucket) +{ + uint32_t n, sz; + + n = 0; + if (wsz != 0 && (type & RTE_IPSEC_SATP_DIR_MASK) == + RTE_IPSEC_SATP_DIR_IB) + n = replay_num_bucket(wsz); + + if (n > WINDOW_BUCKET_MAX) + return -EINVAL; + + *nb_bucket = n; + + sz = rsn_size(n); + sz += sizeof(struct rte_ipsec_sa); + return sz; +} + +void __rte_experimental +rte_ipsec_sa_fini(struct rte_ipsec_sa *sa) +{ + memset(sa, 0, sa->size); +} + +static uint64_t +fill_sa_type(const struct rte_ipsec_sa_prm *prm) +{ + uint64_t tp; + + tp = 0; + + if (prm->ipsec_xform.proto == RTE_SECURITY_IPSEC_SA_PROTO_AH) + tp |= RTE_IPSEC_SATP_PROTO_AH; + else if (prm->ipsec_xform.proto == RTE_SECURITY_IPSEC_SA_PROTO_ESP) + tp |= RTE_IPSEC_SATP_PROTO_ESP; + + if (prm->ipsec_xform.direction == RTE_SECURITY_IPSEC_SA_DIR_EGRESS) + tp |= RTE_IPSEC_SATP_DIR_OB; + else + tp |= RTE_IPSEC_SATP_DIR_IB; + + if (prm->ipsec_xform.mode == RTE_SECURITY_IPSEC_SA_MODE_TUNNEL) { + if (prm->ipsec_xform.tunnel.type == + RTE_SECURITY_IPSEC_TUNNEL_IPV4) + tp |= RTE_IPSEC_SATP_MODE_TUNLV4; + else + tp |= RTE_IPSEC_SATP_MODE_TUNLV6; + + if (prm->tun.next_proto == IPPROTO_IPIP) + tp |= RTE_IPSEC_SATP_IPV4; + else if (prm->tun.next_proto == IPPROTO_IPV6) + tp |= RTE_IPSEC_SATP_IPV4; + } else { + tp |= RTE_IPSEC_SATP_MODE_TRANS; + if (prm->trs.proto == IPPROTO_IPIP) + tp |= RTE_IPSEC_SATP_IPV4; + else if 
(prm->trs.proto == IPPROTO_IPV6) + tp |= RTE_IPSEC_SATP_IPV4; + } + + return tp; +} + +static void +esp_inb_tun_init(struct rte_ipsec_sa *sa) +{ + /* these params may differ with new algorithms support */ + sa->ctp.auth.offset = 0; + sa->ctp.auth.length = sa->icv_len; + sa->ctp.cipher.offset = sizeof(struct esp_hdr) + sa->iv_len; + sa->ctp.cipher.length = sa->ctp.auth.length + sa->ctp.cipher.offset; +} + +static void +esp_outb_tun_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm) +{ + sa->sqn.outb = 1; + sa->hdr_len = prm->tun.hdr_len; + sa->hdr_l3_off = prm->tun.hdr_l3_off; + memcpy(sa->hdr, prm->tun.hdr, sa->hdr_len); + + /* these params may differ with new algorithms support */ + sa->ctp.auth.offset = sa->hdr_len; + sa->ctp.auth.length = sizeof(struct esp_hdr) + sa->iv_len; + if (sa->aad_len != 0) { + sa->ctp.cipher.offset = sa->hdr_len + sizeof(struct esp_hdr) + + sa->iv_len; + sa->ctp.cipher.length = 0; + } else { + sa->ctp.cipher.offset = sa->hdr_len + sizeof(struct esp_hdr); + sa->ctp.cipher.length = sa->iv_len; + } +} + +static int +esp_tun_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm, + const struct crypto_xform *cxf) +{ + if (cxf->aead != NULL) { + /* RFC 4106 */ + if (cxf->aead->algo != RTE_CRYPTO_AEAD_AES_GCM) + return -EINVAL; + sa->icv_len = cxf->aead->digest_length; + sa->iv_ofs = cxf->aead->iv.offset; + sa->iv_len = sizeof(uint64_t); + sa->pad_align = 4; + } else { + sa->icv_len = cxf->auth->digest_length; + sa->iv_ofs = cxf->cipher->iv.offset; + if (cxf->cipher->algo == RTE_CRYPTO_CIPHER_NULL) { + sa->pad_align = 4; + sa->iv_len = 0; + } else if (cxf->cipher->algo == RTE_CRYPTO_CIPHER_AES_CBC) { + sa->pad_align = IPSEC_MAX_IV_SIZE; + sa->iv_len = IPSEC_MAX_IV_SIZE; + } else + return -EINVAL; + } + + sa->aad_len = 0; + sa->udata = prm->userdata; + sa->spi = rte_cpu_to_be_32(prm->ipsec_xform.spi); + sa->salt = prm->ipsec_xform.salt; + + sa->proto = prm->tun.next_proto; + + if ((sa->type & RTE_IPSEC_SATP_DIR_MASK) == RTE_IPSEC_SATP_DIR_IB) + esp_inb_tun_init(sa); + else + esp_outb_tun_init(sa, prm); + + return 0; +} + +int __rte_experimental +rte_ipsec_sa_size(const struct rte_ipsec_sa_prm *prm) +{ + uint64_t type; + uint32_t nb; + + if (prm == NULL) + return -EINVAL; + + /* determine SA type */ + type = fill_sa_type(prm); + + /* determine required size */ + return ipsec_sa_size(prm->replay_win_sz, type, &nb); +} + +int __rte_experimental +rte_ipsec_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm, + uint32_t size) +{ + int32_t rc, sz; + uint32_t nb; + uint64_t type; + struct crypto_xform cxf; + + if (sa == NULL || prm == NULL) + return -EINVAL; + + /* determine SA type */ + type = fill_sa_type(prm); + + /* determine required size */ + sz = ipsec_sa_size(prm->replay_win_sz, type, &nb); + if (sz < 0) + return sz; + else if (size < (uint32_t)sz) + return -ENOSPC; + + /* only esp inbound and outbound tunnel is supported right now */ + if (prm->ipsec_xform.proto != RTE_SECURITY_IPSEC_SA_PROTO_ESP || + prm->ipsec_xform.mode != + RTE_SECURITY_IPSEC_SA_MODE_TUNNEL) + return -EINVAL; + + if (prm->tun.hdr_len > sizeof(sa->hdr)) + return -EINVAL; + + rc = fill_crypto_xform(&cxf, prm); + if (rc != 0) + return rc; + + sa->type = type; + sa->size = sz; + + rc = esp_tun_init(sa, prm, &cxf); + if (rc != 0) + rte_ipsec_sa_fini(sa); + + /* check for ESN flag */ + if (prm->ipsec_xform.options.esn == 0) + sa->sqn_mask = UINT32_MAX; + else + sa->sqn_mask = UINT64_MAX; + + /* fill replay window related fields */ + if (nb != 0) { + 
sa->replay.win_sz = prm->replay_win_sz; + sa->replay.nb_bucket = nb; + sa->replay.bucket_index_mask = sa->replay.nb_bucket - 1; + sa->sqn.inb = (struct replay_sqn *)(sa + 1); + } + + return sz; +} diff --git a/lib/librte_ipsec/sa.h b/lib/librte_ipsec/sa.h new file mode 100644 index 000000000..ef030334c --- /dev/null +++ b/lib/librte_ipsec/sa.h @@ -0,0 +1,75 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2018 Intel Corporation + */ + +#ifndef _SA_H_ +#define _SA_H_ + +#define IPSEC_MAX_HDR_SIZE 64 +#define IPSEC_MAX_IV_SIZE 16 +#define IPSEC_MAX_IV_QWORD (IPSEC_MAX_IV_SIZE / sizeof(uint64_t)) + +/* helper structures to store/update crypto session/op data */ +union sym_op_ofslen { + uint64_t raw; + struct { + uint32_t offset; + uint32_t length; + }; +}; + +union sym_op_data { + __uint128_t raw; + struct { + uint8_t *va; + rte_iova_t pa; + }; +}; + +/* Inbound replay window and last sequence number */ +struct replay_sqn { + uint64_t sqn; + __extension__ uint64_t window[0]; +}; + +struct rte_ipsec_sa { + uint64_t type; /* type of given SA */ + uint64_t udata; /* user defined */ + uint32_t size; /* size of given sa object */ + uint32_t spi; + /* sqn calculations related */ + uint64_t sqn_mask; + struct { + uint32_t win_sz; + uint16_t nb_bucket; + uint16_t bucket_index_mask; + } replay; + /* template for crypto op fields */ + struct { + union sym_op_ofslen cipher; + union sym_op_ofslen auth; + } ctp; + uint32_t salt; + uint8_t aad_len; + uint8_t hdr_len; + uint8_t hdr_l3_off; + uint8_t icv_len; + uint8_t iv_ofs; /* offset for algo-specific IV inside crypto op */ + uint8_t iv_len; + uint8_t pad_align; + uint8_t proto; /* next proto */ + + /* template for tunnel header */ + uint8_t hdr[IPSEC_MAX_HDR_SIZE]; + + /* + * sqn and replay window + */ + union { + uint64_t outb; + struct replay_sqn *inb; + } sqn; + +} __rte_cache_aligned; + +#endif /* _SA_H_ */ diff --git a/lib/meson.build b/lib/meson.build index 3acc67e6e..4b0c13148 100644 --- a/lib/meson.build +++ b/lib/meson.build @@ -21,6 +21,8 @@ libraries = [ 'compat', # just a header, used for versioning 'kni', 'latencystats', 'lpm', 'member', 'meter', 'power', 'pdump', 'rawdev', 'reorder', 'sched', 'security', 'vhost', + #ipsec lib depends on crypto and security + 'ipsec', # add pkt framework libs which use other libs from above 'port', 'table', 'pipeline', # flow_classify lib depends on pkt framework table lib diff --git a/mk/rte.app.mk b/mk/rte.app.mk index 32579e4b7..5756ffe40 100644 --- a/mk/rte.app.mk +++ b/mk/rte.app.mk @@ -62,6 +62,8 @@ ifeq ($(CONFIG_RTE_LIBRTE_BPF_ELF),y) _LDLIBS-$(CONFIG_RTE_LIBRTE_BPF) += -lelf endif +_LDLIBS-$(CONFIG_RTE_LIBRTE_IPSEC) += -lrte_ipsec + _LDLIBS-y += --whole-archive _LDLIBS-$(CONFIG_RTE_LIBRTE_CFGFILE) += -lrte_cfgfile From patchwork Tue Oct 9 18:23:36 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Ananyev, Konstantin" X-Patchwork-Id: 46407 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 2B5881B5C2; Tue, 9 Oct 2018 20:24:20 +0200 (CEST) Received: from mga05.intel.com (mga05.intel.com [192.55.52.43]) by dpdk.org (Postfix) with ESMTP id 0BA4D1B58A for ; Tue, 9 Oct 2018 20:24:16 +0200 (CEST) X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga008.fm.intel.com ([10.253.24.58]) by fmsmga105.fm.intel.com with 
ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 09 Oct 2018 11:24:15 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.54,361,1534834800"; d="scan'208";a="77469435" Received: from sivswdev02.ir.intel.com (HELO localhost.localdomain) ([10.237.217.46]) by fmsmga008.fm.intel.com with ESMTP; 09 Oct 2018 11:23:55 -0700 From: Konstantin Ananyev To: dev@dpdk.org Cc: Konstantin Ananyev , Mohammad Abdul Awal Date: Tue, 9 Oct 2018 19:23:36 +0100 Message-Id: <1539109420-13412-6-git-send-email-konstantin.ananyev@intel.com> X-Mailer: git-send-email 1.7.0.7 In-Reply-To: <1535129598-27301-1-git-send-email-konstantin.ananyev@intel.com> References: <1535129598-27301-1-git-send-email-konstantin.ananyev@intel.com> Subject: [dpdk-dev] [RFC v2 5/9] ipsec: add SA data-path API X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Introduce Security Association (SA-level) data-path API Operates at SA level, provides functions to: - initialize/teardown SA object - process inbound/outbound ESP/AH packets associated with the given SA (decrypt/encrypt, authenticate, check integrity, add/remove ESP/AH related headers and data, etc.). Signed-off-by: Mohammad Abdul Awal Signed-off-by: Konstantin Ananyev --- lib/librte_ipsec/Makefile | 2 + lib/librte_ipsec/meson.build | 4 +- lib/librte_ipsec/rte_ipsec.h | 154 +++++++++++++++++++++++++++++++++ lib/librte_ipsec/rte_ipsec_version.map | 3 + lib/librte_ipsec/sa.c | 98 ++++++++++++++++++++- lib/librte_ipsec/sa.h | 3 + lib/librte_ipsec/ses.c | 45 ++++++++++ 7 files changed, 306 insertions(+), 3 deletions(-) create mode 100644 lib/librte_ipsec/rte_ipsec.h create mode 100644 lib/librte_ipsec/ses.c diff --git a/lib/librte_ipsec/Makefile b/lib/librte_ipsec/Makefile index 7758dcc6d..79f187fae 100644 --- a/lib/librte_ipsec/Makefile +++ b/lib/librte_ipsec/Makefile @@ -17,8 +17,10 @@ LIBABIVER := 1 # all source are stored in SRCS-y SRCS-$(CONFIG_RTE_LIBRTE_IPSEC) += sa.c +SRCS-$(CONFIG_RTE_LIBRTE_IPSEC) += ses.c # install header files +SYMLINK-$(CONFIG_RTE_LIBRTE_IPSEC)-include += rte_ipsec.h SYMLINK-$(CONFIG_RTE_LIBRTE_IPSEC)-include += rte_ipsec_sa.h include $(RTE_SDK)/mk/rte.lib.mk diff --git a/lib/librte_ipsec/meson.build b/lib/librte_ipsec/meson.build index 52c78eaeb..6e8c6fabe 100644 --- a/lib/librte_ipsec/meson.build +++ b/lib/librte_ipsec/meson.build @@ -3,8 +3,8 @@ allow_experimental_apis = true -sources=files('sa.c') +sources=files('sa.c', 'ses.c') -install_headers = files('rte_ipsec_sa.h') +install_headers = files('rte_ipsec.h', 'rte_ipsec_sa.h') deps += ['mbuf', 'net', 'cryptodev', 'security'] diff --git a/lib/librte_ipsec/rte_ipsec.h b/lib/librte_ipsec/rte_ipsec.h new file mode 100644 index 000000000..5c9a1ed0b --- /dev/null +++ b/lib/librte_ipsec/rte_ipsec.h @@ -0,0 +1,154 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2018 Intel Corporation + */ + +#ifndef _RTE_IPSEC_H_ +#define _RTE_IPSEC_H_ + +/** + * @file rte_ipsec.h + * @b EXPERIMENTAL: this API may change without prior notice + * + * RTE IPsec support. + * librte_ipsec provides a framework for data-path IPsec protocol + * processing (ESP/AH). + * IKEv2 protocol support right now is out of scope of that draft. + * Though it tries to define related API in such way, that it could be adopted + * by IKEv2 implementation. 
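+ *
+ * An illustrative sketch of tying a session to an SA (the local names
+ * here are hypothetical and error handling is omitted; see the
+ * definition of rte_ipsec_session below for the exact fields):
+ *
+ *	struct rte_ipsec_session ss = {
+ *		.sa = sa,	/* previously initialized rte_ipsec_sa */
+ *		.type = RTE_SECURITY_ACTION_TYPE_NONE,
+ *		.crypto = { .ses = crypto_ses },	/* cryptodev sym session */
+ *	};
+ *	rte_ipsec_session_prepare(&ss);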
+ */ + +#include +#include + +#ifdef __cplusplus +extern "C" { +#endif + +struct rte_ipsec_session; + +/** + * IPsec session specific functions that will be used to: + * - prepare - for input mbufs and given IPsec session prepare crypto ops + * that can be enqueued into the cryptodev associated with given session + * (see *rte_ipsec_crypto_prepare* below for more details). + * - process - finalize processing of packets after crypto-dev finished + * with them or process packets that are subjects to inline IPsec offload + * (see rte_ipsec_process for more details). + */ +struct rte_ipsec_sa_func { + uint16_t (*prepare)(const struct rte_ipsec_session *ss, + struct rte_mbuf *mb[], + struct rte_crypto_op *cop[], + uint16_t num); + uint16_t (*process)(const struct rte_ipsec_session *ss, + struct rte_mbuf *mb[], + uint16_t num); +}; + +/** + * rte_ipsec_session is an aggregate structure that defines particular + * IPsec Security Association IPsec (SA) on given security/crypto device: + * - pointer to the SA object + * - security session action type + * - pointer to security/crypto session, plus other related data + * - session/device specific functions to prepare/process IPsec packets. + */ +struct rte_ipsec_session { + + /** + * SA that session belongs to. + * Note that multiple sessions can belong to the same SA. + */ + struct rte_ipsec_sa *sa; + /** session action type */ + enum rte_security_session_action_type type; + /** session and related data */ + union { + struct { + struct rte_cryptodev_sym_session *ses; + } crypto; + struct { + struct rte_security_session *ses; + struct rte_security_ctx *ctx; + uint32_t ol_flags; + } security; + }; + /** functions to prepare/process IPsec packets */ + struct rte_ipsec_sa_func func; +}; + +/** + * Checks that inside given rte_ipsec_session crypto/security fields + * are filled correctly and setups function pointers based on these values. + * @param ss + * Pointer to the *rte_ipsec_session* object + * @return + * - Zero if operation completed successfully. + * - -EINVAL if the parameters are invalid. + */ +int __rte_experimental +rte_ipsec_session_prepare(struct rte_ipsec_session *ss); + +/** + * For input mbufs and given IPsec session prepare crypto ops that can be + * enqueued into the cryptodev associated with given session. + * expects that for each input packet: + * - l2_len, l3_len are setup correctly + * Note that erroneous mbufs are not freed by the function, + * but are placed beyond last valid mbuf in the *mb* array. + * It is a user responsibility to handle them further. + * @param ss + * Pointer to the *rte_ipsec_session* object the packets belong to. + * @param mb + * The address of an array of *num* pointers to *rte_mbuf* structures + * which contain the input packets. + * @param cop + * The address of an array of *num* pointers to the output *rte_crypto_op* + * structures. + * @param num + * The maximum number of packets to process. + * @return + * Number of successfully processed packets, with error code set in rte_errno. + */ +static inline uint16_t __rte_experimental +rte_ipsec_crypto_prepare(const struct rte_ipsec_session *ss, + struct rte_mbuf *mb[], struct rte_crypto_op *cop[], uint16_t num) +{ + return ss->func.prepare(ss, mb, cop, num); +} + +/** + * Finalise processing of packets after crypto-dev finished with them or + * process packets that are subjects to inline IPsec offload. 
+ * Expects that for each input packet: + * - l2_len, l3_len are setup correctly + * Output mbufs will be: + * inbound - decrypted & authenticated, ESP(AH) related headers removed, + * *l2_len* and *l3_len* fields are updated. + * outbound - appropriate mbuf fields (ol_flags, tx_offloads, etc.) + * properly setup, if necessary - IP headers updated, ESP(AH) fields added, + * Note that erroneous mbufs are not freed by the function, + * but are placed beyond last valid mbuf in the *mb* array. + * It is a user responsibility to handle them further. + * @param ss + * Pointer to the *rte_ipsec_session* object the packets belong to. + * @param mb + * The address of an array of *num* pointers to *rte_mbuf* structures + * which contain the input packets. + * @param num + * The maximum number of packets to process. + * @return + * Number of successfully processed packets, with error code set in rte_errno. + */ +static inline uint16_t __rte_experimental +rte_ipsec_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[], + uint16_t num) +{ + return ss->func.process(ss, mb, num); +} + +#ifdef __cplusplus +} +#endif + +#endif /* _RTE_IPSEC_H_ */ diff --git a/lib/librte_ipsec/rte_ipsec_version.map b/lib/librte_ipsec/rte_ipsec_version.map index 1a66726b8..47620cef5 100644 --- a/lib/librte_ipsec/rte_ipsec_version.map +++ b/lib/librte_ipsec/rte_ipsec_version.map @@ -1,6 +1,9 @@ EXPERIMENTAL { global: + rte_ipsec_crypto_prepare; + rte_ipsec_session_prepare; + rte_ipsec_process; rte_ipsec_sa_fini; rte_ipsec_sa_init; rte_ipsec_sa_size; diff --git a/lib/librte_ipsec/sa.c b/lib/librte_ipsec/sa.c index 913856a3d..ad2aa29df 100644 --- a/lib/librte_ipsec/sa.c +++ b/lib/librte_ipsec/sa.c @@ -2,7 +2,7 @@ * Copyright(c) 2018 Intel Corporation */ -#include +#include #include #include #include @@ -280,3 +280,99 @@ rte_ipsec_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm, return sz; } + +static uint16_t +lksd_none_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[], + struct rte_crypto_op *cop[], uint16_t num) +{ + RTE_SET_USED(ss); + RTE_SET_USED(mb); + RTE_SET_USED(cop); + RTE_SET_USED(num); + rte_errno = ENOTSUP; + return 0; +} + +static uint16_t +lksd_proto_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[], + struct rte_crypto_op *cop[], uint16_t num) +{ + RTE_SET_USED(ss); + RTE_SET_USED(mb); + RTE_SET_USED(cop); + RTE_SET_USED(num); + rte_errno = ENOTSUP; + return 0; +} + +static uint16_t +lksd_none_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[], + uint16_t num) +{ + RTE_SET_USED(ss); + RTE_SET_USED(mb); + RTE_SET_USED(num); + rte_errno = ENOTSUP; + return 0; +} + +static uint16_t +inline_crypto_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[], + uint16_t num) +{ + RTE_SET_USED(ss); + RTE_SET_USED(mb); + RTE_SET_USED(num); + rte_errno = ENOTSUP; + return 0; +} + +static uint16_t +inline_proto_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[], + uint16_t num) +{ + RTE_SET_USED(ss); + RTE_SET_USED(mb); + RTE_SET_USED(num); + rte_errno = ENOTSUP; + return 0; +} + +static uint16_t +lksd_proto_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[], + uint16_t num) +{ + RTE_SET_USED(ss); + RTE_SET_USED(mb); + RTE_SET_USED(num); + rte_errno = ENOTSUP; + return 0; +} + +const struct rte_ipsec_sa_func * +ipsec_sa_func_select(const struct rte_ipsec_session *ss) +{ + static const struct rte_ipsec_sa_func tfunc[] = { + [RTE_SECURITY_ACTION_TYPE_NONE] = { + .prepare = lksd_none_prepare, + .process = 
lksd_none_process, + }, + [RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO] = { + .prepare = NULL, + .process = inline_crypto_process, + }, + [RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL] = { + .prepare = NULL, + .process = inline_proto_process, + }, + [RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL] = { + .prepare = lksd_proto_prepare, + .process = lksd_proto_process, + }, + }; + + if (ss->type >= RTE_DIM(tfunc)) + return NULL; + + return tfunc + ss->type; +} diff --git a/lib/librte_ipsec/sa.h b/lib/librte_ipsec/sa.h index ef030334c..13a5a68f3 100644 --- a/lib/librte_ipsec/sa.h +++ b/lib/librte_ipsec/sa.h @@ -72,4 +72,7 @@ struct rte_ipsec_sa { } __rte_cache_aligned; +const struct rte_ipsec_sa_func * +ipsec_sa_func_select(const struct rte_ipsec_session *ss); + #endif /* _SA_H_ */ diff --git a/lib/librte_ipsec/ses.c b/lib/librte_ipsec/ses.c new file mode 100644 index 000000000..afefda937 --- /dev/null +++ b/lib/librte_ipsec/ses.c @@ -0,0 +1,45 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2018 Intel Corporation + */ + +#include +#include "sa.h" + +static int +session_check(struct rte_ipsec_session *ss) +{ + if (ss == NULL) + return -EINVAL; + + if (ss->type == RTE_SECURITY_ACTION_TYPE_NONE) { + if (ss->crypto.ses == NULL) + return -EINVAL; + } else if (ss->security.ses == NULL || ss->security.ctx == NULL) + return -EINVAL; + + return 0; +} + +int __rte_experimental +rte_ipsec_session_prepare(struct rte_ipsec_session *ss) +{ + int32_t rc; + const struct rte_ipsec_sa_func *fp; + + rc = session_check(ss); + if (rc != 0) + return rc; + + fp = ipsec_sa_func_select(ss); + if (fp == NULL) + return -ENOTSUP; + + ss->func = fp[0]; + + if (ss->type == RTE_SECURITY_ACTION_TYPE_NONE) + ss->crypto.ses->userdata = (uintptr_t)ss; + else + ss->security.ses->userdata = (uintptr_t)ss; + + return 0; +} From patchwork Tue Oct 9 18:23:37 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Ananyev, Konstantin" X-Patchwork-Id: 46408 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id BCE001B5F2; Tue, 9 Oct 2018 20:24:21 +0200 (CEST) Received: from mga05.intel.com (mga05.intel.com [192.55.52.43]) by dpdk.org (Postfix) with ESMTP id 8D2611B58A for ; Tue, 9 Oct 2018 20:24:17 +0200 (CEST) X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga008.fm.intel.com ([10.253.24.58]) by fmsmga105.fm.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 09 Oct 2018 11:24:15 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.54,361,1534834800"; d="scan'208";a="77469439" Received: from sivswdev02.ir.intel.com (HELO localhost.localdomain) ([10.237.217.46]) by fmsmga008.fm.intel.com with ESMTP; 09 Oct 2018 11:23:56 -0700 From: Konstantin Ananyev To: dev@dpdk.org Cc: Konstantin Ananyev , Mohammad Abdul Awal Date: Tue, 9 Oct 2018 19:23:37 +0100 Message-Id: <1539109420-13412-7-git-send-email-konstantin.ananyev@intel.com> X-Mailer: git-send-email 1.7.0.7 In-Reply-To: <1535129598-27301-1-git-send-email-konstantin.ananyev@intel.com> References: <1535129598-27301-1-git-send-email-konstantin.ananyev@intel.com> Subject: [dpdk-dev] [RFC v2 6/9] ipsec: implement SA data-path API X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: 
dev-bounces@dpdk.org Sender: "dev" Provide implementation for rte_ipsec_crypto_prepare() and rte_ipsec_process(). Current implementation: - supports ESP protocol tunnel mode only. - supports ESN and replay window. - supports algorithms: AES-CBC, AES-GCM, HMAC-SHA1, NULL. - covers all currently defined security session types: - RTE_SECURITY_ACTION_TYPE_NONE - RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO - RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL - RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL For first two types SQN check/update is done by SW (inside the library). For last two type it is HW/PMD responsibility. Signed-off-by: Mohammad Abdul Awal Signed-off-by: Konstantin Ananyev --- lib/librte_ipsec/crypto.h | 74 +++++ lib/librte_ipsec/ipsec_sqn.h | 144 ++++++++- lib/librte_ipsec/pad.h | 45 +++ lib/librte_ipsec/sa.c | 681 ++++++++++++++++++++++++++++++++++++++++--- 4 files changed, 909 insertions(+), 35 deletions(-) create mode 100644 lib/librte_ipsec/crypto.h create mode 100644 lib/librte_ipsec/pad.h diff --git a/lib/librte_ipsec/crypto.h b/lib/librte_ipsec/crypto.h new file mode 100644 index 000000000..6ff995c59 --- /dev/null +++ b/lib/librte_ipsec/crypto.h @@ -0,0 +1,74 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2018 Intel Corporation + */ + +#ifndef _CRYPTO_H_ +#define _CRYPTO_H_ + +/** + * @file crypto.h + * Contains crypto specific functions/structures/macros used internally + * by ipsec library. + */ + + /* + * AES-GCM devices have some specific requirements for IV and AAD formats. + * Ideally that to be done by the driver itself. + */ + +struct aead_gcm_iv { + uint32_t salt; + uint64_t iv; + uint32_t cnt; +} __attribute__((packed)); + +struct aead_gcm_aad { + uint32_t spi; + /* + * RFC 4106, section 5: + * Two formats of the AAD are defined: + * one for 32-bit sequence numbers, and one for 64-bit ESN. + */ + union { + uint32_t u32; + uint64_t u64; + } sqn; + uint32_t align0; /* align to 16B boundary */ +} __attribute__((packed)); + +struct gcm_esph_iv { + struct esp_hdr esph; + uint64_t iv; +} __attribute__((packed)); + + +static inline void +aead_gcm_iv_fill(struct aead_gcm_iv *gcm, uint64_t iv, uint32_t salt) +{ + gcm->salt = salt; + gcm->iv = iv; + gcm->cnt = rte_cpu_to_be_32(1); +} + +/* + * RFC 4106, 5 AAD Construction + */ +static inline void +aead_gcm_aad_fill(struct aead_gcm_aad *aad, const struct gcm_esph_iv *hiv, + int esn) +{ + aad->spi = hiv->esph.spi; + if (esn) + aad->sqn.u64 = hiv->iv; + else + aad->sqn.u32 = hiv->esph.seq; +} + +static inline void +gen_iv(uint64_t iv[IPSEC_MAX_IV_QWORD], uint64_t sqn) +{ + iv[0] = rte_cpu_to_be_64(sqn); + iv[1] = 0; +} + +#endif /* _CRYPTO_H_ */ diff --git a/lib/librte_ipsec/ipsec_sqn.h b/lib/librte_ipsec/ipsec_sqn.h index d0d122824..7477b8d59 100644 --- a/lib/librte_ipsec/ipsec_sqn.h +++ b/lib/librte_ipsec/ipsec_sqn.h @@ -15,7 +15,7 @@ #define IS_ESN(sa) ((sa)->sqn_mask == UINT64_MAX) -/** +/* * for given size, calculate required number of buckets. */ static uint32_t @@ -30,6 +30,148 @@ replay_num_bucket(uint32_t wsz) return nb; } +/* + * According to RFC4303 A2.1, determine the high-order bit of sequence number. + * use 32bit arithmetic inside, return uint64_t. 
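+ *
+ * A worked example with assumed values (w = 64):
+ * - t = 0x1FFFFFFF0 (tl = 0xFFFFFFF0, th = 1), incoming sqn = 5:
+ *   bl = 0xFFFFFFB1; tl >= w - 1 and sqn < bl, so th becomes 2 and the
+ *   reconstructed ESN is 0x200000005 (the low 32 bits just wrapped).
+ * - t = 0x200000010 (tl = 0x10, th = 2), incoming sqn = 0xFFFFFFF0:
+ *   the window spans two subspaces and sqn >= bl (bl = 0xFFFFFFD1),
+ *   so th becomes 1 and the reconstructed ESN is 0x1FFFFFFF0.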
+ */ +static inline uint64_t +reconstruct_esn(uint64_t t, uint32_t sqn, uint32_t w) +{ + uint32_t th, tl, bl; + + tl = t; + th = t >> 32; + bl = tl - w + 1; + + /* case A: window is within one sequence number subspace */ + if (tl >= (w - 1)) + th += (sqn < bl); + /* case B: window spans two sequence number subspaces */ + else if (th != 0) + th -= (sqn >= bl); + + /* return constructed sequence with proper high-order bits */ + return (uint64_t)th << 32 | sqn; +} + +/** + * Perform the replay checking. + * + * struct rte_ipsec_sa contains the window and window related parameters, + * such as the window size, bitmask, and the last acknowledged sequence number. + * + * Based on RFC 6479. + * Blocks are 64 bits unsigned integers + */ +static int32_t +esn_inb_check_sqn(const struct replay_sqn *rsn, const struct rte_ipsec_sa *sa, + uint64_t sqn) +{ + uint32_t bit, bucket; + + /* seq not valid (first or wrapped) */ + if (sqn == 0) + return -EINVAL; + + /* replay not enabled */ + if (sa->replay.win_sz == 0) + return 0; + + /* handle ESN */ + if (IS_ESN(sa)) + sqn = reconstruct_esn(rsn->sqn, sqn, sa->replay.win_sz); + + /* seq is larger than lastseq */ + if (sqn > rsn->sqn) + return 0; + + /* seq is outside window */ + if ((sqn + sa->replay.win_sz) < rsn->sqn) + return -EINVAL; + + /* seq is inside the window */ + bit = sqn & WINDOW_BIT_LOC_MASK; + bucket = (sqn >> WINDOW_BUCKET_BITS) & sa->replay.bucket_index_mask; + + /* already seen packet */ + if (rsn->window[bucket] & ((uint64_t)1 << bit)) + return -EINVAL; + + return 0; +} + +/** + * For outbound SA perform the sequence number update. + */ +static inline uint64_t +esn_outb_update_sqn(struct rte_ipsec_sa *sa, uint32_t *num) +{ + uint64_t n, s, sqn; + + n = *num; + sqn = sa->sqn.outb + n; + sa->sqn.outb = sqn; + + /* overflow */ + if (sqn > sa->sqn_mask) { + s = sqn - sa->sqn_mask; + *num = (s < n) ? n - s : 0; + } + + return sqn - n; +} + +/** + * For inbound SA perform the sequence number and replay window update. + */ +static inline int32_t +esn_inb_update_sqn(struct replay_sqn *rsn, const struct rte_ipsec_sa *sa, + uint64_t sqn) +{ + uint32_t bit, bucket, last_bucket, new_bucket, diff, i; + + /* replay not enabled */ + if (sa->replay.win_sz == 0) + return 0; + + /* handle ESN */ + if (IS_ESN(sa)) + sqn = reconstruct_esn(rsn->sqn, sqn, sa->replay.win_sz); + + /* seq is outside window*/ + if ((sqn + sa->replay.win_sz) < rsn->sqn) + return -EINVAL; + + /* update the bit */ + bucket = (sqn >> WINDOW_BUCKET_BITS); + + /* check if the seq is within the range */ + if (sqn > rsn->sqn) { + last_bucket = rsn->sqn >> WINDOW_BUCKET_BITS; + diff = bucket - last_bucket; + /* seq is way after the range of WINDOW_SIZE */ + if (diff > sa->replay.nb_bucket) + diff = sa->replay.nb_bucket; + + for (i = 0; i != diff; i++) { + new_bucket = (i + last_bucket + 1) & + sa->replay.bucket_index_mask; + rsn->window[new_bucket] = 0; + } + rsn->sqn = sqn; + } + + bucket &= sa->replay.bucket_index_mask; + bit = (uint64_t)1 << (sqn & WINDOW_BIT_LOC_MASK); + + /* already seen packet */ + if (rsn->window[bucket] & bit) + return -EINVAL; + + rsn->window[bucket] |= bit; + return 0; +} + /** * Based on number of buckets calculated required size for the * structure that holds replay window and sequnce number (RSN) information. 
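For reference, the bucket/bit arithmetic used by the replay window above maps a (possibly reconstructed) sequence number onto a single bit of the window; a small illustration with assumed values, using the masks defined earlier in this header (WINDOW_BUCKET_BITS is 6):

	/* hypothetical example: locate sqn within the replay window */
	uint64_t sqn = 0x8a;	/* reconstructed sequence number */
	uint32_t bit = sqn & WINDOW_BIT_LOC_MASK;	/* 0x0a: bit 10 */
	uint32_t bucket = (sqn >> WINDOW_BUCKET_BITS) &
		sa->replay.bucket_index_mask;	/* 0x8a >> 6 = bucket 2 */
	int seen = (rsn->window[bucket] >> bit) & 1;	/* already received? */
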
diff --git a/lib/librte_ipsec/pad.h b/lib/librte_ipsec/pad.h new file mode 100644 index 000000000..2f5ccd00e --- /dev/null +++ b/lib/librte_ipsec/pad.h @@ -0,0 +1,45 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2018 Intel Corporation + */ + +#ifndef _PAD_H_ +#define _PAD_H_ + +#define IPSEC_MAX_PAD_SIZE UINT8_MAX + +static const uint8_t esp_pad_bytes[IPSEC_MAX_PAD_SIZE] = { + 1, 2, 3, 4, 5, 6, 7, 8, + 9, 10, 11, 12, 13, 14, 15, 16, + 17, 18, 19, 20, 21, 22, 23, 24, + 25, 26, 27, 28, 29, 30, 31, 32, + 33, 34, 35, 36, 37, 38, 39, 40, + 41, 42, 43, 44, 45, 46, 47, 48, + 49, 50, 51, 52, 53, 54, 55, 56, + 57, 58, 59, 60, 61, 62, 63, 64, + 65, 66, 67, 68, 69, 70, 71, 72, + 73, 74, 75, 76, 77, 78, 79, 80, + 81, 82, 83, 84, 85, 86, 87, 88, + 89, 90, 91, 92, 93, 94, 95, 96, + 97, 98, 99, 100, 101, 102, 103, 104, + 105, 106, 107, 108, 109, 110, 111, 112, + 113, 114, 115, 116, 117, 118, 119, 120, + 121, 122, 123, 124, 125, 126, 127, 128, + 129, 130, 131, 132, 133, 134, 135, 136, + 137, 138, 139, 140, 141, 142, 143, 144, + 145, 146, 147, 148, 149, 150, 151, 152, + 153, 154, 155, 156, 157, 158, 159, 160, + 161, 162, 163, 164, 165, 166, 167, 168, + 169, 170, 171, 172, 173, 174, 175, 176, + 177, 178, 179, 180, 181, 182, 183, 184, + 185, 186, 187, 188, 189, 190, 191, 192, + 193, 194, 195, 196, 197, 198, 199, 200, + 201, 202, 203, 204, 205, 206, 207, 208, + 209, 210, 211, 212, 213, 214, 215, 216, + 217, 218, 219, 220, 221, 222, 223, 224, + 225, 226, 227, 228, 229, 230, 231, 232, + 233, 234, 235, 236, 237, 238, 239, 240, + 241, 242, 243, 244, 245, 246, 247, 248, + 249, 250, 251, 252, 253, 254, 255, +}; + +#endif /* _PAD_H_ */ diff --git a/lib/librte_ipsec/sa.c b/lib/librte_ipsec/sa.c index ad2aa29df..ae8ce4f24 100644 --- a/lib/librte_ipsec/sa.c +++ b/lib/librte_ipsec/sa.c @@ -6,9 +6,12 @@ #include #include #include +#include #include "sa.h" #include "ipsec_sqn.h" +#include "crypto.h" +#include "pad.h" /* some helper structures */ struct crypto_xform { @@ -174,11 +177,13 @@ esp_tun_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm, /* RFC 4106 */ if (cxf->aead->algo != RTE_CRYPTO_AEAD_AES_GCM) return -EINVAL; + sa->aad_len = sizeof(struct aead_gcm_aad); sa->icv_len = cxf->aead->digest_length; sa->iv_ofs = cxf->aead->iv.offset; sa->iv_len = sizeof(uint64_t); sa->pad_align = 4; } else { + sa->aad_len = 0; sa->icv_len = cxf->auth->digest_length; sa->iv_ofs = cxf->cipher->iv.offset; if (cxf->cipher->algo == RTE_CRYPTO_CIPHER_NULL) { @@ -191,7 +196,6 @@ esp_tun_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm, return -EINVAL; } - sa->aad_len = 0; sa->udata = prm->userdata; sa->spi = rte_cpu_to_be_32(prm->ipsec_xform.spi); sa->salt = prm->ipsec_xform.salt; @@ -281,72 +285,681 @@ rte_ipsec_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm, return sz; } +static inline void +esp_outb_tun_cop_prepare(struct rte_crypto_op *cop, + const struct rte_ipsec_sa *sa, const uint64_t ivp[IPSEC_MAX_IV_QWORD], + const union sym_op_data *icv, uint32_t plen) +{ + struct rte_crypto_sym_op *sop; + struct aead_gcm_iv *gcm; + + /* fill sym op fields */ + sop = cop->sym; + + /* AEAD (AES_GCM) case */ + if (sa->aad_len != 0) { + sop->aead.data.offset = sa->ctp.cipher.offset; + sop->aead.data.length = sa->ctp.cipher.length + plen; + sop->aead.digest.data = icv->va; + sop->aead.digest.phys_addr = icv->pa; + sop->aead.aad.data = icv->va + sa->icv_len; + sop->aead.aad.phys_addr = icv->pa + sa->icv_len; + + /* fill AAD IV (located inside crypto op) */ + gcm = 
rte_crypto_op_ctod_offset(cop, struct aead_gcm_iv *, + sa->iv_ofs); + aead_gcm_iv_fill(gcm, ivp[0], sa->salt); + /* CRYPT+AUTH case */ + } else { + sop->cipher.data.offset = sa->ctp.cipher.offset; + sop->cipher.data.length = sa->ctp.cipher.length + plen; + sop->auth.data.offset = sa->ctp.auth.offset; + sop->auth.data.length = sa->ctp.auth.length + plen; + sop->auth.digest.data = icv->va; + sop->auth.digest.phys_addr = icv->pa; + } +} + +static inline int32_t +esp_outb_tun_pkt_prepare(struct rte_ipsec_sa *sa, uint64_t sqn, + const uint64_t ivp[IPSEC_MAX_IV_QWORD], struct rte_mbuf *mb, + union sym_op_data *icv) +{ + uint32_t clen, hlen, pdlen, pdofs, tlen; + struct rte_mbuf *ml; + struct esp_hdr *esph; + struct esp_tail *espt; + struct aead_gcm_aad *aad; + char *ph, *pt; + uint64_t *iv; + + /* calculate extra header space required */ + hlen = sa->hdr_len + sa->iv_len + sizeof(*esph); + + /* number of bytes to encrypt */ + clen = mb->pkt_len + sizeof(*espt); + clen = RTE_ALIGN_CEIL(clen, sa->pad_align); + + /* pad length + esp tail */ + pdlen = clen - mb->pkt_len; + tlen = pdlen + sa->icv_len; + + /* do append and prepend */ + ml = rte_pktmbuf_lastseg(mb); + if (tlen + sa->aad_len > rte_pktmbuf_tailroom(ml)) + return -ENOSPC; + + /* prepend header */ + ph = rte_pktmbuf_prepend(mb, hlen); + if (ph == NULL) + return -ENOSPC; + + /* append tail */ + pdofs = ml->data_len; + ml->data_len += tlen; + mb->pkt_len += tlen; + pt = rte_pktmbuf_mtod_offset(ml, typeof(pt), pdofs); + + /* update pkt l2/l3 len */ + mb->l2_len = sa->hdr_l3_off; + mb->l3_len = sa->hdr_len - sa->hdr_l3_off; + + /* copy tunnel pkt header */ + rte_memcpy(ph, sa->hdr, sa->hdr_len); + + /* update original and new ip header fields */ + if (sa->type & RTE_IPSEC_SATP_MODE_TUNLV4) { + struct ipv4_hdr *l3h; + l3h = (struct ipv4_hdr *)(ph + sa->hdr_l3_off); + l3h->packet_id = rte_cpu_to_be_16(sqn); + l3h->total_length = rte_cpu_to_be_16(mb->pkt_len - + sa->hdr_l3_off); + } else { + struct ipv6_hdr *l3h; + l3h = (struct ipv6_hdr *)(ph + sa->hdr_l3_off); + l3h->payload_len = rte_cpu_to_be_16(mb->pkt_len - + sa->hdr_l3_off - sizeof(*l3h)); + } + + /* update spi, seqn and iv */ + esph = (struct esp_hdr *)(ph + sa->hdr_len); + iv = (uint64_t *)(esph + 1); + rte_memcpy(iv, ivp, sa->iv_len); + + esph->spi = sa->spi; + esph->seq = rte_cpu_to_be_32(sqn); + + /* offset for ICV */ + pdofs += pdlen; + + /* pad length */ + pdlen -= sizeof(*espt); + + /* copy padding data */ + rte_memcpy(pt, esp_pad_bytes, pdlen); + + /* update esp trailer */ + espt = (struct esp_tail *)(pt + pdlen); + espt->pad_len = pdlen; + espt->next_proto = sa->proto; + + icv->va = rte_pktmbuf_mtod_offset(ml, void *, pdofs); + icv->pa = rte_pktmbuf_iova_offset(ml, pdofs); + + /* + * fill IV and AAD fields, if any (aad fields are placed after icv), + * right now we support only one AEAD algorithm: AES-GCM . 
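+ *
+ * After the prepend/append above, an outbound tunnel packet is laid
+ * out as follows (the AAD lives in the tailroom, past the data_len
+ * boundary):
+ *
+ *	[tun hdr][esp hdr][IV][payload][padding][esp tail][ICV] [AAD]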
+ */ + if (sa->aad_len != 0) { + aad = (struct aead_gcm_aad *)(icv->va + sa->icv_len); + aead_gcm_aad_fill(aad, (const struct gcm_esph_iv *)esph, + IS_ESN(sa)); + } + + return clen; +} + +static inline uint16_t +esp_outb_tun_prepare(struct rte_ipsec_sa *sa, struct rte_mbuf *mb[], + struct rte_crypto_op *cop[], struct rte_mbuf *dr[], uint16_t num) +{ + int32_t rc; + uint32_t i, k, n; + uint64_t sqn; + union sym_op_data icv; + uint64_t iv[IPSEC_MAX_IV_QWORD]; + + n = num; + sqn = esn_outb_update_sqn(sa, &n); + if (n != num) + rte_errno = EOVERFLOW; + + k = 0; + for (i = 0; i != n; i++) { + + gen_iv(iv, sqn + i); + + /* try to update the packet itself */ + rc = esp_outb_tun_pkt_prepare(sa, sqn + i, iv, mb[i], &icv); + + /* success, setup crypto op */ + if (rc >= 0) { + mb[k] = mb[i]; + esp_outb_tun_cop_prepare(cop[k], sa, iv, &icv, rc); + k++; + /* failure, put packet into the death-row */ + } else { + dr[i - k] = mb[i]; + rte_errno = -rc; + } + } + + return k; +} + +static inline int32_t +esp_inb_tun_cop_prepare(struct rte_crypto_op *cop, + const struct rte_ipsec_sa *sa, struct rte_mbuf *mb, + const union sym_op_data *icv, uint32_t pofs, uint32_t plen) +{ + struct rte_crypto_sym_op *sop; + struct aead_gcm_iv *gcm; + uint64_t *ivc, *ivp; + uint32_t clen; + + clen = plen - sa->ctp.cipher.length; + if ((int32_t)clen < 0 || (clen & (sa->pad_align - 1)) != 0) + return -EINVAL; + + /* fill sym op fields */ + sop = cop->sym; + + /* AEAD (AES_GCM) case */ + if (sa->aad_len != 0) { + sop->aead.data.offset = pofs + sa->ctp.cipher.offset; + sop->aead.data.length = clen; + sop->aead.digest.data = icv->va; + sop->aead.digest.phys_addr = icv->pa; + sop->aead.aad.data = icv->va + sa->icv_len; + sop->aead.aad.phys_addr = icv->pa + sa->icv_len; + + /* fill AAD IV (located inside crypto op) */ + gcm = rte_crypto_op_ctod_offset(cop, struct aead_gcm_iv *, + sa->iv_ofs); + ivp = rte_pktmbuf_mtod_offset(mb, uint64_t *, + pofs + sizeof(struct esp_hdr)); + aead_gcm_iv_fill(gcm, ivp[0], sa->salt); + /* CRYPT+AUTH case */ + } else { + sop->cipher.data.offset = pofs + sa->ctp.cipher.offset; + sop->cipher.data.length = clen; + sop->auth.data.offset = pofs + sa->ctp.auth.offset; + sop->auth.data.length = plen - sa->ctp.auth.length; + sop->auth.digest.data = icv->va; + sop->auth.digest.phys_addr = icv->pa; + + /* copy iv from the input packet to the cop */ + ivc = rte_crypto_op_ctod_offset(cop, uint64_t *, sa->iv_ofs); + ivp = rte_pktmbuf_mtod_offset(mb, uint64_t *, + pofs + sizeof(struct esp_hdr)); + rte_memcpy(ivc, ivp, sa->iv_len); + } + return 0; +} + +static inline int32_t +esp_inb_tun_pkt_prepare(const struct rte_ipsec_sa *sa, + const struct replay_sqn *rsn, struct rte_mbuf *mb, + uint32_t hlen, union sym_op_data *icv) +{ + int32_t rc; + uint32_t icv_ofs, plen, sqn; + struct rte_mbuf *ml; + struct esp_hdr *esph; + struct aead_gcm_aad *aad; + + esph = rte_pktmbuf_mtod_offset(mb, struct esp_hdr *, hlen); + sqn = rte_be_to_cpu_32(esph->seq); + rc = esn_inb_check_sqn(rsn, sa, sqn); + if (rc != 0) + return rc; + + plen = mb->pkt_len; + plen = plen - hlen; + + ml = rte_pktmbuf_lastseg(mb); + icv_ofs = ml->data_len - sa->icv_len; + + /* we have to allocate space for AAD somewhere, + * right now - just use free trailing space at the last segment. + * Would probably be more convenient to reserve space for AAD + * inside rte_crypto_op itself + * (again for IV space is already reserved inside cop). 
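+ *
+ * With that choice the last segment of an inbound packet looks like:
+ *
+ *	[... encrypted payload ...][ICV] [AAD scratch area in tailroom]
+ *
+ * where icv_ofs computed above points at the start of the ICV.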
+ */ + if (sa->aad_len > rte_pktmbuf_tailroom(ml)) + return -ENOSPC; + + icv->va = rte_pktmbuf_mtod_offset(ml, void *, icv_ofs); + icv->pa = rte_pktmbuf_iova_offset(ml, icv_ofs); + + /* + * fill AAD fields, if any (aad fields are placed after icv), + * right now we support only one AEAD algorithm: AES-GCM. + */ + if (sa->aad_len != 0) { + aad = (struct aead_gcm_aad *)(icv->va + sa->icv_len); + aead_gcm_aad_fill(aad, (const struct gcm_esph_iv *)esph, + IS_ESN(sa)); + } + + return plen; +} + +static inline uint16_t +esp_inb_tun_prepare(struct rte_ipsec_sa *sa, struct rte_mbuf *mb[], + struct rte_crypto_op *cop[], struct rte_mbuf *dr[], uint16_t num) +{ + int32_t rc; + uint32_t i, k, hl; + struct replay_sqn *rsn; + union sym_op_data icv; + + rsn = sa->sqn.inb; + + k = 0; + for (i = 0; i != num; i++) { + + hl = mb[i]->l2_len + mb[i]->l3_len; + rc = esp_inb_tun_pkt_prepare(sa, rsn, mb[i], hl, &icv); + if (rc >= 0) + rc = esp_inb_tun_cop_prepare(cop[k], sa, mb[i], &icv, + hl, rc); + + if (rc == 0) + mb[k++] = mb[i]; + else { + dr[i - k] = mb[i]; + rte_errno = -rc; + } + } + + return k; +} + +static inline void +mbuf_bulk_copy(struct rte_mbuf *dst[], struct rte_mbuf * const src[], + uint32_t num) +{ + uint32_t i; + + for (i = 0; i != num; i++) + dst[i] = src[i]; +} + +static inline void +lksd_none_cop_prepare(const struct rte_ipsec_session *ss, + struct rte_mbuf *mb[], struct rte_crypto_op *cop[], uint16_t num) +{ + uint32_t i; + struct rte_crypto_sym_op *sop; + + for (i = 0; i != num; i++) { + sop = cop[i]->sym; + cop[i]->type = RTE_CRYPTO_OP_TYPE_SYMMETRIC; + cop[i]->status = RTE_CRYPTO_OP_STATUS_NOT_PROCESSED; + cop[i]->sess_type = RTE_CRYPTO_OP_WITH_SESSION; + sop->m_src = mb[i]; + __rte_crypto_sym_op_attach_sym_session(sop, ss->crypto.ses); + } +} + static uint16_t lksd_none_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[], struct rte_crypto_op *cop[], uint16_t num) { - RTE_SET_USED(ss); - RTE_SET_USED(mb); - RTE_SET_USED(cop); - RTE_SET_USED(num); - rte_errno = ENOTSUP; - return 0; + uint32_t n; + struct rte_ipsec_sa *sa; + struct rte_mbuf *dr[num]; + + static const uint64_t msk = RTE_IPSEC_SATP_DIR_MASK | + RTE_IPSEC_SATP_MODE_MASK; + + sa = ss->sa; + + switch (sa->type & msk) { + case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TUNLV4): + case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TUNLV6): + n = esp_inb_tun_prepare(sa, mb, cop, dr, num); + lksd_none_cop_prepare(ss, mb, cop, n); + break; + case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV4): + case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV6): + n = esp_outb_tun_prepare(sa, mb, cop, dr, num); + lksd_none_cop_prepare(ss, mb, cop, n); + break; + default: + rte_errno = ENOTSUP; + n = 0; + } + + /* copy not prepared mbufs beyond good ones */ + if (n != num && n != 0) + mbuf_bulk_copy(mb + n, dr, num - n); + + return n; +} + +static inline void +lksd_proto_cop_prepare(const struct rte_ipsec_session *ss, + struct rte_mbuf *mb[], struct rte_crypto_op *cop[], uint16_t num) +{ + uint32_t i; + struct rte_crypto_sym_op *sop; + + for (i = 0; i != num; i++) { + sop = cop[i]->sym; + cop[i]->type = RTE_CRYPTO_OP_TYPE_SYMMETRIC; + cop[i]->status = RTE_CRYPTO_OP_STATUS_NOT_PROCESSED; + cop[i]->sess_type = RTE_CRYPTO_OP_SECURITY_SESSION; + sop->m_src = mb[i]; + __rte_security_attach_session(sop, ss->security.ses); + } } static uint16_t -lksd_proto_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[], - struct rte_crypto_op *cop[], uint16_t num) +lksd_proto_prepare(const struct rte_ipsec_session *ss, + 
struct rte_mbuf *mb[], struct rte_crypto_op *cop[], uint16_t num) { - RTE_SET_USED(ss); - RTE_SET_USED(mb); - RTE_SET_USED(cop); - RTE_SET_USED(num); - rte_errno = ENOTSUP; + lksd_proto_cop_prepare(ss, mb, cop, num); + return num; +} + +static inline int +esp_inb_tun_single_pkt_process(struct rte_ipsec_sa *sa, struct rte_mbuf *mb, + uint32_t *sqn) +{ + uint32_t hlen, icv_len, tlen; + struct esp_hdr *esph; + struct esp_tail *espt; + struct rte_mbuf *ml; + char *pd; + + if (mb->ol_flags & PKT_RX_SEC_OFFLOAD_FAILED) + return -EBADMSG; + + icv_len = sa->icv_len; + + ml = rte_pktmbuf_lastseg(mb); + espt = rte_pktmbuf_mtod_offset(ml, struct esp_tail *, + ml->data_len - icv_len - sizeof(*espt)); + + /* cut off ICV, ESP tail and padding bytes */ + tlen = icv_len + sizeof(*espt) + espt->pad_len; + ml->data_len -= tlen; + mb->pkt_len -= tlen; + + /* cut off L2/L3 headers, ESP header and IV */ + hlen = mb->l2_len + mb->l3_len; + esph = rte_pktmbuf_mtod_offset(mb, struct esp_hdr *, hlen); + rte_pktmbuf_adj(mb, hlen + sa->ctp.cipher.offset); + + /* reset mbuf metadata: L2/L3 len, packet type */ + mb->packet_type = RTE_PTYPE_UNKNOWN; + mb->l2_len = 0; + mb->l3_len = 0; + + /* clear the PKT_RX_SEC_OFFLOAD flag if set */ + mb->ol_flags &= ~PKT_RX_SEC_OFFLOAD; + + /* + * check padding and next proto. + * return an error if something is wrong. + */ + + pd = (char *)espt - espt->pad_len; + if (espt->next_proto != sa->proto || + memcmp(pd, esp_pad_bytes, espt->pad_len)) + return -EINVAL; + + *sqn = rte_be_to_cpu_32(esph->seq); return 0; } +static inline uint16_t +esp_inb_rsn_update(struct rte_ipsec_sa *sa, const uint32_t sqn[], + struct rte_mbuf *mb[], struct rte_mbuf *dr[], uint16_t num) +{ + uint32_t i, k; + struct replay_sqn *rsn; + + rsn = sa->sqn.inb; + + k = 0; + for (i = 0; i != num; i++) { + if (esn_inb_update_sqn(rsn, sa, sqn[i]) == 0) + mb[k++] = mb[i]; + else + dr[i - k] = mb[i]; + } + + return k; +} + +static inline uint16_t +esp_inb_tun_pkt_process(struct rte_ipsec_sa *sa, struct rte_mbuf *mb[], + struct rte_mbuf *dr[], uint16_t num) +{ + uint32_t i, k; + uint32_t sqn[num]; + + /* process packets, extract seq numbers */ + + k = 0; + for (i = 0; i != num; i++) { + /* good packet */ + if (esp_inb_tun_single_pkt_process(sa, mb[i], sqn + k) == 0) + mb[k++] = mb[i]; + /* bad packet, drop it from further processing */ + else + dr[i - k] = mb[i]; + } + + /* update seq # and replay window */ + k = esp_inb_rsn_update(sa, sqn, mb, dr + i - k, k); + + if (k != num) + rte_errno = EBADMSG; + return k; +} + +/* + * helper routine, puts packets with PKT_RX_SEC_OFFLOAD_FAILED set, + * into the death-row. 
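+ * On return the first k entries of mb[] hold the good packets and
+ * dr[] holds the failed ones, so a caller that simply drops the
+ * failures could do (illustrative only):
+ *
+ *	k = pkt_flag_process(sa, mb, dr, num);
+ *	for (i = 0; i != num - k; i++)
+ *		rte_pktmbuf_free(dr[i]);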
+ */ +static inline uint16_t +pkt_flag_process(struct rte_ipsec_sa *sa, struct rte_mbuf *mb[], + struct rte_mbuf *dr[], uint16_t num) +{ + uint32_t i, k; + + RTE_SET_USED(sa); + + k = 0; + for (i = 0; i != num; i++) { + if ((mb[i]->ol_flags & PKT_RX_SEC_OFFLOAD_FAILED) == 0) + mb[k++] = mb[i]; + else + dr[i - k] = mb[i]; + } + + if (k != num) + rte_errno = EBADMSG; + return k; +} + +static inline void +inline_outb_mbuf_prepare(const struct rte_ipsec_session *ss, + struct rte_mbuf *mb[], uint16_t num) +{ + uint32_t i, ol_flags; + + ol_flags = ss->security.ol_flags & RTE_SECURITY_TX_OLOAD_NEED_MDATA; + for (i = 0; i != num; i++) { + + mb[i]->ol_flags |= PKT_TX_SEC_OFFLOAD; + if (ol_flags != 0) + rte_security_set_pkt_metadata(ss->security.ctx, + ss->security.ses, mb[i], NULL); + } +} + +static inline uint16_t +inline_outb_tun_pkt_process(struct rte_ipsec_sa *sa, + struct rte_mbuf *mb[], struct rte_mbuf *dr[], uint16_t num) +{ + int32_t rc; + uint32_t i, k, n; + uint64_t sqn; + union sym_op_data icv; + uint64_t iv[IPSEC_MAX_IV_QWORD]; + + n = num; + sqn = esn_outb_update_sqn(sa, &n); + if (n != num) + rte_errno = EOVERFLOW; + + k = 0; + for (i = 0; i != n; i++) { + + gen_iv(iv, sqn + i); + + /* try to update the packet itself */ + rc = esp_outb_tun_pkt_prepare(sa, sqn + i, iv, mb[i], &icv); + + /* success, update mbuf fields */ + if (rc >= 0) + mb[k++] = mb[i]; + /* failure, put packet into the death-row */ + else { + dr[i - k] = mb[i]; + rte_errno = -rc; + } + } + + return k; +} + static uint16_t lksd_none_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[], uint16_t num) { - RTE_SET_USED(ss); - RTE_SET_USED(mb); - RTE_SET_USED(num); - rte_errno = ENOTSUP; - return 0; + uint32_t n; + struct rte_ipsec_sa *sa; + struct rte_mbuf *dr[num]; + + static const uint64_t msk = RTE_IPSEC_SATP_DIR_MASK | + RTE_IPSEC_SATP_MODE_MASK; + + sa = ss->sa; + + switch (sa->type & msk) { + case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TUNLV4): + case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TUNLV6): + n = esp_inb_tun_pkt_process(sa, mb, dr, num); + break; + case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV4): + case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV6): + n = pkt_flag_process(sa, mb, dr, num); + break; + default: + n = 0; + rte_errno = ENOTSUP; + } + + /* copy not prepared mbufs beyond good ones */ + if (n != num && n != 0) + mbuf_bulk_copy(mb + n, dr, num - n); + + return n; } static uint16_t inline_crypto_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[], uint16_t num) { - RTE_SET_USED(ss); - RTE_SET_USED(mb); - RTE_SET_USED(num); - rte_errno = ENOTSUP; - return 0; + uint32_t n; + struct rte_ipsec_sa *sa; + struct rte_mbuf *dr[num]; + + sa = ss->sa; + + static const uint64_t msk = RTE_IPSEC_SATP_DIR_MASK | + RTE_IPSEC_SATP_MODE_MASK; + + switch (sa->type & msk) { + case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TUNLV4): + case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TUNLV6): + n = esp_inb_tun_pkt_process(sa, mb, dr, num); + break; + case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV4): + case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV6): + n = inline_outb_tun_pkt_process(sa, mb, dr, num); + inline_outb_mbuf_prepare(ss, mb, n); + break; + default: + n = 0; + rte_errno = ENOTSUP; + } + + /* copy not processed mbufs beyond good ones */ + if (n != num && n != 0) + mbuf_bulk_copy(mb + n, dr, num - n); + + return n; } static uint16_t inline_proto_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[], uint16_t num) { - 
RTE_SET_USED(ss); - RTE_SET_USED(mb); - RTE_SET_USED(num); - rte_errno = ENOTSUP; - return 0; + uint32_t n; + struct rte_ipsec_sa *sa; + struct rte_mbuf *dr[num]; + + sa = ss->sa; + + /* outbound, just set flags and metadata */ + if ((sa->type & RTE_IPSEC_SATP_DIR_MASK) == RTE_IPSEC_SATP_DIR_OB) { + inline_outb_mbuf_prepare(ss, mb, num); + return num; + } + + /* inbound, check that HW successfully processed packets */ + n = pkt_flag_process(sa, mb, dr, num); + + /* copy the bad ones after the good ones */ + if (n != num && n != 0) + mbuf_bulk_copy(mb + n, dr, num - n); + return n; } static uint16_t lksd_proto_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[], uint16_t num) { - RTE_SET_USED(ss); - RTE_SET_USED(mb); - RTE_SET_USED(num); - rte_errno = ENOTSUP; - return 0; + uint32_t n; + struct rte_ipsec_sa *sa; + struct rte_mbuf *dr[num]; + + sa = ss->sa; + + /* check that HW successfully processed packets */ + n = pkt_flag_process(sa, mb, dr, num); + + /* copy the bad ones after the good ones */ + if (n != num && n != 0) + mbuf_bulk_copy(mb + n, dr, num - n); + return n; } const struct rte_ipsec_sa_func * From patchwork Tue Oct 9 18:23:38 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Ananyev, Konstantin" X-Patchwork-Id: 46406 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id F137F1B54E; Tue, 9 Oct 2018 20:24:17 +0200 (CEST) Received: from mga05.intel.com (mga05.intel.com [192.55.52.43]) by dpdk.org (Postfix) with ESMTP id 1E8251B53E for ; Tue, 9 Oct 2018 20:24:15 +0200 (CEST) X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga008.fm.intel.com ([10.253.24.58]) by fmsmga105.fm.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 09 Oct 2018 11:24:15 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.54,361,1534834800"; d="scan'208";a="77469442" Received: from sivswdev02.ir.intel.com (HELO localhost.localdomain) ([10.237.217.46]) by fmsmga008.fm.intel.com with ESMTP; 09 Oct 2018 11:23:57 -0700 From: Konstantin Ananyev To: dev@dpdk.org Cc: Konstantin Ananyev Date: Tue, 9 Oct 2018 19:23:38 +0100 Message-Id: <1539109420-13412-8-git-send-email-konstantin.ananyev@intel.com> X-Mailer: git-send-email 1.7.0.7 In-Reply-To: <1535129598-27301-1-git-send-email-konstantin.ananyev@intel.com> References: <1535129598-27301-1-git-send-email-konstantin.ananyev@intel.com> Subject: [dpdk-dev] [RFC v2 7/9] ipsec: rework SA replay window/SQN for MT environment X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" With these changes the functions: - rte_ipsec_crypto_prepare - rte_ipsec_process can be safely used in an MT environment, as long as the user can guarantee that they obey the multiple readers/single writer model for SQN+replay_window operations. To be more specific: for outbound SAs there are no restrictions; for inbound SAs the caller has to guarantee that at any given moment only one thread is executing rte_ipsec_process() for a given SA. Note that it is the caller's responsibility to maintain the correct order of packets to be processed. 
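As an illustration, an application that needs such MT-safe SQN/replay window behaviour would request it at SA creation time (sketch only; prm is assumed to be filled the same way as for a regular rte_ipsec_sa_init() call):

	struct rte_ipsec_sa_prm prm;
	/* ... fill SA parameters as usual ... */
	prm.flags |= RTE_IPSEC_SAFLAG_SQN_ATOM;	/* 'atomic' SQN/replay window */
	rc = rte_ipsec_sa_init(sa, &prm, sz);
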
Signed-off-by: Konstantin Ananyev --- lib/librte_ipsec/ipsec_sqn.h | 129 +++++++++++++++++++++++++++++++++- lib/librte_ipsec/rte_ipsec_sa.h | 27 +++++++ lib/librte_ipsec/rwl.h | 68 +++++++++++++++++++ lib/librte_ipsec/sa.c | 22 +++++- lib/librte_ipsec/sa.h | 22 +++++- 5 files changed, 258 insertions(+), 10 deletions(-) create mode 100644 lib/librte_ipsec/rwl.h diff --git a/lib/librte_ipsec/ipsec_sqn.h b/lib/librte_ipsec/ipsec_sqn.h index 7477b8d59..a3c993a52 100644 --- a/lib/librte_ipsec/ipsec_sqn.h +++ b/lib/librte_ipsec/ipsec_sqn.h @@ -5,6 +5,8 @@ #ifndef _IPSEC_SQN_H_ #define _IPSEC_SQN_H_ +#include "rwl.h" + #define WINDOW_BUCKET_BITS 6 /* uint64_t */ #define WINDOW_BUCKET_SIZE (1 << WINDOW_BUCKET_BITS) #define WINDOW_BIT_LOC_MASK (WINDOW_BUCKET_SIZE - 1) @@ -15,6 +17,9 @@ #define IS_ESN(sa) ((sa)->sqn_mask == UINT64_MAX) +#define SQN_ATOMIC(sa) ((sa)->type & RTE_IPSEC_SATP_SQN_ATOM) + + /* * for given size, calculate required number of buckets. */ @@ -109,8 +114,12 @@ esn_outb_update_sqn(struct rte_ipsec_sa *sa, uint32_t *num) uint64_t n, s, sqn; n = *num; - sqn = sa->sqn.outb + n; - sa->sqn.outb = sqn; + if (SQN_ATOMIC(sa)) + sqn = (uint64_t)rte_atomic64_add_return(&sa->sqn.outb.atom, n); + else { + sqn = sa->sqn.outb.raw + n; + sa->sqn.outb.raw = sqn; + } /* overflow */ if (sqn > sa->sqn_mask) { @@ -173,6 +182,19 @@ esn_inb_update_sqn(struct replay_sqn *rsn, const struct rte_ipsec_sa *sa, } /** + * To achieve the ability to do multiple readers/single writer for + * SA replay window information and sequence number (RSN), + * a basic RCU scheme is used: + * the SA has 2 copies of RSN (one for readers, another for writers). + * Each RSN contains a rwlock that has to be grabbed (for read/write) + * to avoid races between readers and writer. + * The writer is responsible for making a copy of the reader RSN, + * updating it and marking the newly updated RSN as the readers' one. + * That approach is intended to minimize contention and cache sharing + * between writer and readers. + */ + +/** * Based on the number of buckets, calculate the required size for the * structure that holds the replay window and sequence number (RSN) information. */ @@ -187,4 +209,107 @@ rsn_size(uint32_t nb_bucket) return sz; } +/** + * Copy replay window and SQN. + */ +static inline void +rsn_copy(const struct rte_ipsec_sa *sa, uint32_t dst, uint32_t src) +{ + uint32_t i, n; + struct replay_sqn *d; + const struct replay_sqn *s; + + d = sa->sqn.inb.rsn[dst]; + s = sa->sqn.inb.rsn[src]; + + n = sa->replay.nb_bucket; + + d->sqn = s->sqn; + for (i = 0; i != n; i++) + d->window[i] = s->window[i]; +} + +/** + * Get RSN for read-only access. + */ +static inline struct replay_sqn * +rsn_acquire(struct rte_ipsec_sa *sa) +{ + uint32_t n; + struct replay_sqn *rsn; + + n = sa->sqn.inb.rdidx; + rsn = sa->sqn.inb.rsn[n]; + + if (!SQN_ATOMIC(sa)) + return rsn; + + /* check there are no writers */ + while (rwl_try_read_lock(&rsn->rwl) < 0) { + rte_pause(); + n = sa->sqn.inb.rdidx; + rsn = sa->sqn.inb.rsn[n]; + rte_compiler_barrier(); + } + + return rsn; +} + +/** + * Release read-only access for RSN. + */ +static inline void +rsn_release(struct rte_ipsec_sa *sa, struct replay_sqn *rsn) +{ + if (SQN_ATOMIC(sa)) + rwl_read_unlock(&rsn->rwl); +} + +/** + * Start RSN update. 
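+ * Expected writer-side sequence (this is what esp_inb_rsn_update()
+ * in sa.c does):
+ *
+ *	rsn = rsn_update_start(sa);
+ *	... update rsn->sqn and rsn->window[] ...
+ *	rsn_update_finish(sa, rsn);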
+ */ +static inline struct replay_sqn * +rsn_update_start(struct rte_ipsec_sa *sa) +{ + uint32_t k, n; + struct replay_sqn *rsn; + + n = sa->sqn.inb.wridx; + + /* no active writers */ + RTE_ASSERT(n == sa->sqn.inb.rdidx); + + if (!SQN_ATOMIC(sa)) + return sa->sqn.inb.rsn[n]; + + k = REPLAY_SQN_NEXT(n); + sa->sqn.inb.wridx = k; + + rsn = sa->sqn.inb.rsn[k]; + rwl_write_lock(&rsn->rwl); + rsn_copy(sa, k, n); + + return rsn; +} + +/** + * Finish RSN update. + */ +static inline void +rsn_update_finish(struct rte_ipsec_sa *sa, struct replay_sqn *rsn) +{ + uint32_t n; + + if (!SQN_ATOMIC(sa)) + return; + + n = sa->sqn.inb.wridx; + RTE_ASSERT(n != sa->sqn.inb.rdidx); + RTE_ASSERT(rsn - sa->sqn.inb.rsn == n); + + rwl_write_unlock(&rsn->rwl); + sa->sqn.inb.rdidx = n; +} + + #endif /* _IPSEC_SQN_H_ */ diff --git a/lib/librte_ipsec/rte_ipsec_sa.h b/lib/librte_ipsec/rte_ipsec_sa.h index 0efda33de..3324cbedb 100644 --- a/lib/librte_ipsec/rte_ipsec_sa.h +++ b/lib/librte_ipsec/rte_ipsec_sa.h @@ -54,12 +54,34 @@ struct rte_ipsec_sa_prm { }; /** + * Indicates that SA will(/will not) need an 'atomic' access + * to sequence number and replay window. + * 'atomic' here means: + * functions: + * - rte_ipsec_crypto_prepare + * - rte_ipsec_process + * can be safely used in MT environment, as long as the user can guarantee + * that they obey multiple readers/single writer model for SQN+replay_window + * operations. + * To be more specific: + * for outbound SA there are no restrictions. + * for inbound SA the caller has to guarantee that at any given moment + * only one thread is executing rte_ipsec_process() for given SA. + * Note that it is caller responsibility to maintain correct order + * of packets to be processed. + * In other words - it is a caller responsibility to serialize process() + * invocations. + */ +#define RTE_IPSEC_SAFLAG_SQN_ATOM (1ULL << 0) + +/** * SA type is an 64-bit value that contain the following information: * - IP version (IPv4/IPv6) * - IPsec proto (ESP/AH) * - inbound/outbound * - mode (TRANSPORT/TUNNEL) * - for TUNNEL outer IP version (IPv4/IPv6) + * - are SA SQN operations 'atomic' * ... */ @@ -68,6 +90,7 @@ enum { RTE_SATP_LOG_PROTO, RTE_SATP_LOG_DIR, RTE_SATP_LOG_MODE, + RTE_SATP_LOG_SQN = RTE_SATP_LOG_MODE + 2, RTE_SATP_LOG_NUM }; @@ -88,6 +111,10 @@ enum { #define RTE_IPSEC_SATP_MODE_TUNLV4 (1ULL << RTE_SATP_LOG_MODE) #define RTE_IPSEC_SATP_MODE_TUNLV6 (2ULL << RTE_SATP_LOG_MODE) +#define RTE_IPSEC_SATP_SQN_MASK (1ULL << RTE_SATP_LOG_SQN) +#define RTE_IPSEC_SATP_SQN_RAW (0ULL << RTE_SATP_LOG_SQN) +#define RTE_IPSEC_SATP_SQN_ATOM (1ULL << RTE_SATP_LOG_SQN) + /** * get type of given SA * @return diff --git a/lib/librte_ipsec/rwl.h b/lib/librte_ipsec/rwl.h new file mode 100644 index 000000000..fc44d1e9f --- /dev/null +++ b/lib/librte_ipsec/rwl.h @@ -0,0 +1,68 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2018 Intel Corporation + */ + +#ifndef _RWL_H_ +#define _RWL_H_ + +/** + * @file rwl.h + * + * Analog of read-write locks, very much in favour of read side. + * Assumes, that there are no more then INT32_MAX concurrent readers. + * Consider to move into librte_eal. + */ + +/** + * release read-lock. + * @param p + * pointer to atomic variable. + */ +static inline void +rwl_read_unlock(rte_atomic32_t *p) +{ + rte_atomic32_sub(p, 1); +} + +/** + * try to grab read-lock. + * @param p + * pointer to atomic variable. 
+ * @return + * positive value on success + */ +static inline int +rwl_try_read_lock(rte_atomic32_t *p) +{ + int32_t rc; + + rc = rte_atomic32_add_return(p, 1); + if (rc < 0) + rwl_read_unlock(p); + return rc; +} + +/** + * grab write-lock. + * @param p + * pointer to atomic variable. + */ +static inline void +rwl_write_lock(rte_atomic32_t *p) +{ + while (rte_atomic32_cmpset((volatile uint32_t *)p, 0, INT32_MIN) == 0) + rte_pause(); +} + +/** + * release write-lock. + * @param p + * pointer to atomic variable. + */ +static inline void +rwl_write_unlock(rte_atomic32_t *p) +{ + rte_atomic32_sub(p, INT32_MIN); +} + +#endif /* _RWL_H_ */ diff --git a/lib/librte_ipsec/sa.c b/lib/librte_ipsec/sa.c index ae8ce4f24..e2852b020 100644 --- a/lib/librte_ipsec/sa.c +++ b/lib/librte_ipsec/sa.c @@ -89,6 +89,9 @@ ipsec_sa_size(uint32_t wsz, uint64_t type, uint32_t *nb_bucket) *nb_bucket = n; sz = rsn_size(n); + if ((type & RTE_IPSEC_SATP_SQN_MASK) == RTE_IPSEC_SATP_SQN_ATOM) + sz *= REPLAY_SQN_NUM; + sz += sizeof(struct rte_ipsec_sa); return sz; } @@ -135,6 +138,12 @@ fill_sa_type(const struct rte_ipsec_sa_prm *prm) tp |= RTE_IPSEC_SATP_IPV4; } + /* interpret flags */ + if (prm->flags & RTE_IPSEC_SAFLAG_SQN_ATOM) + tp |= RTE_IPSEC_SATP_SQN_ATOM; + else + tp |= RTE_IPSEC_SATP_SQN_RAW; + return tp; } @@ -151,7 +160,7 @@ esp_inb_tun_init(struct rte_ipsec_sa *sa) static void esp_outb_tun_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm) { - sa->sqn.outb = 1; + sa->sqn.outb.raw = 1; sa->hdr_len = prm->tun.hdr_len; sa->hdr_l3_off = prm->tun.hdr_l3_off; memcpy(sa->hdr, prm->tun.hdr, sa->hdr_len); @@ -279,7 +288,10 @@ rte_ipsec_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm, sa->replay.win_sz = prm->replay_win_sz; sa->replay.nb_bucket = nb; sa->replay.bucket_index_mask = sa->replay.nb_bucket - 1; - sa->sqn.inb = (struct replay_sqn *)(sa + 1); + sa->sqn.inb.rsn[0] = (struct replay_sqn *)(sa + 1); + if ((type & RTE_IPSEC_SATP_SQN_MASK) == RTE_IPSEC_SATP_SQN_ATOM) + sa->sqn.inb.rsn[1] = (struct replay_sqn *) + ((uintptr_t)sa->sqn.inb.rsn[0] + rsn_size(nb)); } return sz; @@ -564,7 +576,7 @@ esp_inb_tun_prepare(struct rte_ipsec_sa *sa, struct rte_mbuf *mb[], struct replay_sqn *rsn; union sym_op_data icv; - rsn = sa->sqn.inb; + rsn = rsn_acquire(sa); k = 0; for (i = 0; i != num; i++) { @@ -583,6 +595,7 @@ esp_inb_tun_prepare(struct rte_ipsec_sa *sa, struct rte_mbuf *mb[], } } + rsn_release(sa, rsn); return k; } @@ -732,7 +745,7 @@ esp_inb_rsn_update(struct rte_ipsec_sa *sa, const uint32_t sqn[], uint32_t i, k; struct replay_sqn *rsn; - rsn = sa->sqn.inb; + rsn = rsn_update_start(sa); k = 0; for (i = 0; i != num; i++) { @@ -742,6 +755,7 @@ esp_inb_rsn_update(struct rte_ipsec_sa *sa, const uint32_t sqn[], dr[i - k] = mb[i]; } + rsn_update_finish(sa, rsn); return k; } diff --git a/lib/librte_ipsec/sa.h b/lib/librte_ipsec/sa.h index 13a5a68f3..9fe1f8483 100644 --- a/lib/librte_ipsec/sa.h +++ b/lib/librte_ipsec/sa.h @@ -9,7 +9,7 @@ #define IPSEC_MAX_IV_SIZE 16 #define IPSEC_MAX_IV_QWORD (IPSEC_MAX_IV_SIZE / sizeof(uint64_t)) -/* helper structures to store/update crypto session/op data */ +/* these definitions probably has to be in rte_crypto_sym.h */ union sym_op_ofslen { uint64_t raw; struct { @@ -26,8 +26,11 @@ union sym_op_data { }; }; -/* Inbound replay window and last sequence number */ +#define REPLAY_SQN_NUM 2 +#define REPLAY_SQN_NEXT(n) ((n) ^ 1) + struct replay_sqn { + rte_atomic32_t rwl; uint64_t sqn; __extension__ uint64_t window[0]; }; @@ -64,10 +67,21 @@ struct rte_ipsec_sa 
 {
 	/*
 	 * sqn and replay window
+	 * In case of an SA handled by multiple threads, the *sqn* cacheline
+	 * could be shared by multiple cores.
+	 * To minimise the performance impact, we try to locate it in a
+	 * separate place from other frequently accessed data.
 	 */
 	union {
-		uint64_t outb;
-		struct replay_sqn *inb;
+		union {
+			rte_atomic64_t atom;
+			uint64_t raw;
+		} outb;
+		struct {
+			uint32_t rdidx; /* read index */
+			uint32_t wridx; /* write index */
+			struct replay_sqn *rsn[REPLAY_SQN_NUM];
+		} inb;
 	} sqn;
 } __rte_cache_aligned;

From patchwork Tue Oct 9 18:23:39 2018
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: "Ananyev, Konstantin"
X-Patchwork-Id: 46411
X-Patchwork-Delegate: thomas@monjalon.net
Return-Path:
X-Original-To: patchwork@dpdk.org
Delivered-To: patchwork@dpdk.org
Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix)
 with ESMTP id D148C1B60F; Tue, 9 Oct 2018 20:24:26 +0200 (CEST)
Received: from mga05.intel.com (mga05.intel.com [192.55.52.43]) by dpdk.org
 (Postfix) with ESMTP id A46921B56D for ; Tue, 9 Oct 2018 20:24:18 +0200 (CEST)
X-Amp-Result: SKIPPED(no attachment in message)
X-Amp-File-Uploaded: False
Received: from fmsmga008.fm.intel.com ([10.253.24.58]) by fmsmga105.fm.intel.com
 with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 09 Oct 2018 11:24:15 -0700
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="5.54,361,1534834800"; d="scan'208";a="77469446"
Received: from sivswdev02.ir.intel.com (HELO localhost.localdomain)
 ([10.237.217.46]) by fmsmga008.fm.intel.com with ESMTP; 09 Oct 2018 11:23:58 -0700
From: Konstantin Ananyev
To: dev@dpdk.org
Cc: Konstantin Ananyev
Date: Tue, 9 Oct 2018 19:23:39 +0100
Message-Id: <1539109420-13412-9-git-send-email-konstantin.ananyev@intel.com>
X-Mailer: git-send-email 1.7.0.7
In-Reply-To: <1535129598-27301-1-git-send-email-konstantin.ananyev@intel.com>
References: <1535129598-27301-1-git-send-email-konstantin.ananyev@intel.com>
Subject: [dpdk-dev] [RFC v2 8/9] ipsec: helper functions to group completed crypto-ops
X-BeenThere: dev@dpdk.org
X-Mailman-Version: 2.1.15
Precedence: list
List-Id: DPDK patches and discussions
List-Unsubscribe: ,
List-Archive:
List-Post:
List-Help:
List-Subscribe: ,
Errors-To: dev-bounces@dpdk.org
Sender: "dev"

Introduce helper functions to process completed crypto-ops
and group related packets by the sessions they belong to.
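Below is a brief usage sketch (simplified: device/queue selection and
error handling are omitted, and BURST_SZ, dev_id, qid, cops, mbs and grps
are illustrative names, not part of the API):

	uint32_t i, k, n;
	struct rte_crypto_op *cops[BURST_SZ];
	struct rte_mbuf *mbs[BURST_SZ];
	struct rte_ipsec_group grps[BURST_SZ];

	/* ops were filled by rte_ipsec_crypto_prepare() and enqueued earlier;
	 * retrieve the completed ones from the crypto-device
	 */
	n = rte_cryptodev_dequeue_burst(dev_id, qid, cops, BURST_SZ);

	/* extract mbufs and group them by the session they belong to */
	k = rte_ipsec_crypto_group((const struct rte_crypto_op **)cops,
		mbs, grps, n);

	/* finish IPsec processing for each group */
	for (i = 0; i != k; i++)
		rte_ipsec_process(grps[i].id.ptr, grps[i].m, grps[i].cnt);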
Signed-off-by: Konstantin Ananyev
---
 lib/librte_ipsec/Makefile              |   1 +
 lib/librte_ipsec/meson.build           |   2 +-
 lib/librte_ipsec/rte_ipsec.h           |   2 +
 lib/librte_ipsec/rte_ipsec_group.h     | 151 +++++++++++++++++++++++++++++++++
 lib/librte_ipsec/rte_ipsec_version.map |   2 +
 5 files changed, 157 insertions(+), 1 deletion(-)
 create mode 100644 lib/librte_ipsec/rte_ipsec_group.h

diff --git a/lib/librte_ipsec/Makefile b/lib/librte_ipsec/Makefile
index 79f187fae..98c52f388 100644
--- a/lib/librte_ipsec/Makefile
+++ b/lib/librte_ipsec/Makefile
@@ -21,6 +21,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_IPSEC) += ses.c
 # install header files
 SYMLINK-$(CONFIG_RTE_LIBRTE_IPSEC)-include += rte_ipsec.h
+SYMLINK-$(CONFIG_RTE_LIBRTE_IPSEC)-include += rte_ipsec_group.h
 SYMLINK-$(CONFIG_RTE_LIBRTE_IPSEC)-include += rte_ipsec_sa.h
 
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_ipsec/meson.build b/lib/librte_ipsec/meson.build
index 6e8c6fabe..d2427b809 100644
--- a/lib/librte_ipsec/meson.build
+++ b/lib/librte_ipsec/meson.build
@@ -5,6 +5,6 @@ allow_experimental_apis = true
 
 sources=files('sa.c', 'ses.c')
 
-install_headers = files('rte_ipsec.h', 'rte_ipsec_sa.h')
+install_headers = files('rte_ipsec.h', 'rte_ipsec_group.h', 'rte_ipsec_sa.h')
 
 deps += ['mbuf', 'net', 'cryptodev', 'security']
diff --git a/lib/librte_ipsec/rte_ipsec.h b/lib/librte_ipsec/rte_ipsec.h
index 5c9a1ed0b..aa17c78e3 100644
--- a/lib/librte_ipsec/rte_ipsec.h
+++ b/lib/librte_ipsec/rte_ipsec.h
@@ -147,6 +147,8 @@ rte_ipsec_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
 	return ss->func.process(ss, mb, num);
 }
 
+#include <rte_ipsec_group.h>
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/librte_ipsec/rte_ipsec_group.h b/lib/librte_ipsec/rte_ipsec_group.h
new file mode 100644
index 000000000..df6f4fdd1
--- /dev/null
+++ b/lib/librte_ipsec/rte_ipsec_group.h
@@ -0,0 +1,151 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#ifndef _RTE_IPSEC_GROUP_H_
+#define _RTE_IPSEC_GROUP_H_
+
+/**
+ * @file rte_ipsec_group.h
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * RTE IPsec support.
+ * It is not recommended to include this file directly,
+ * include <rte_ipsec.h> instead.
+ * Contains helper functions to process completed crypto-ops
+ * and group related packets by sessions they belong to.
+ */
+
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/**
+ * Used to group mbufs by some id.
+ * See below for particular usage.
+ */
+struct rte_ipsec_group {
+	union {
+		uint64_t val;
+		void *ptr;
+	} id; /**< grouped by value */
+	struct rte_mbuf **m;  /**< start of the group */
+	uint32_t cnt;         /**< number of entries in the group */
+	int32_t rc;           /**< status code associated with the group */
+};
+
+/**
+ * Take a crypto-op as input and extract a pointer to the related
+ * ipsec session.
+ * @param cop
+ *   The address of an input *rte_crypto_op* structure.
+ * @return
+ *   The pointer to the related *rte_ipsec_session* structure.
+ */
+static inline __rte_experimental struct rte_ipsec_session *
+rte_ipsec_ses_from_crypto(const struct rte_crypto_op *cop)
+{
+	const struct rte_security_session *ss;
+	const struct rte_cryptodev_sym_session *cs;
+
+	if (cop->sess_type == RTE_CRYPTO_OP_SECURITY_SESSION) {
+		ss = cop->sym[0].sec_session;
+		return (void *)ss->userdata;
+	} else if (cop->sess_type == RTE_CRYPTO_OP_WITH_SESSION) {
+		cs = cop->sym[0].session;
+		return (void *)cs->userdata;
+	}
+	return NULL;
+}
+
+/**
+ * Take as input completed crypto ops, extract related mbufs
+ * and group them by the rte_ipsec_session they belong to.
+ * For each mbuf whose crypto-op wasn't completed successfully,
+ * PKT_RX_SEC_OFFLOAD_FAILED will be set in its ol_flags.
+ * Note that mbufs with undetermined SA (session-less) are not freed
+ * by the function, but are placed beyond mbufs for the last valid group.
+ * It is the user's responsibility to handle them further.
+ * @param cop
+ *   The address of an array of *num* pointers to the input *rte_crypto_op*
+ *   structures.
+ * @param mb
+ *   The address of an array of *num* pointers to output *rte_mbuf* structures.
+ * @param grp
+ *   The address of an array of *num* output *rte_ipsec_group* structures.
+ * @param num
+ *   The maximum number of crypto-ops to process.
+ * @return
+ *   Number of filled elements in *grp* array.
+ */
+static inline uint16_t __rte_experimental
+rte_ipsec_crypto_group(const struct rte_crypto_op *cop[], struct rte_mbuf *mb[],
+	struct rte_ipsec_group grp[], uint16_t num)
+{
+	uint32_t i, j, k, n;
+	void *ns, *ps;
+	struct rte_mbuf *m, *dr[num];
+
+	j = 0;
+	k = 0;
+	n = 0;
+	ps = NULL;
+
+	for (i = 0; i != num; i++) {
+
+		m = cop[i]->sym[0].m_src;
+		ns = cop[i]->sym[0].session;
+
+		m->ol_flags |= PKT_RX_SEC_OFFLOAD;
+		if (cop[i]->status != RTE_CRYPTO_OP_STATUS_SUCCESS)
+			m->ol_flags |= PKT_RX_SEC_OFFLOAD_FAILED;
+
+		/* no valid session found */
+		if (ns == NULL) {
+			dr[k++] = m;
+			continue;
+		}
+
+		/* different SA */
+		if (ps != ns) {
+
+			/*
+			 * we already have an open group - finalise it,
+			 * then open a new one.
+			 */
+			if (ps != NULL) {
+				grp[n].id.ptr =
+					rte_ipsec_ses_from_crypto(cop[i - 1]);
+				grp[n].cnt = mb + j - grp[n].m;
+				n++;
+			}
+
+			/* start new group */
+			grp[n].m = mb + j;
+			ps = ns;
+		}
+
+		mb[j++] = m;
+	}
+
+	/* finalise last group */
+	if (ps != NULL) {
+		grp[n].id.ptr = rte_ipsec_ses_from_crypto(cop[i - 1]);
+		grp[n].cnt = mb + j - grp[n].m;
+		n++;
+	}
+
+	/* copy mbufs with unknown session beyond recognised ones */
+	if (k != 0 && k != num) {
+		for (i = 0; i != k; i++)
+			mb[j + i] = dr[i];
+	}
+
+	return n;
+}
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_IPSEC_GROUP_H_ */
diff --git a/lib/librte_ipsec/rte_ipsec_version.map b/lib/librte_ipsec/rte_ipsec_version.map
index 47620cef5..b025b636c 100644
--- a/lib/librte_ipsec/rte_ipsec_version.map
+++ b/lib/librte_ipsec/rte_ipsec_version.map
@@ -1,6 +1,7 @@
 EXPERIMENTAL {
 	global:
 
+	rte_ipsec_crypto_group;
 	rte_ipsec_crypto_prepare;
 	rte_ipsec_session_prepare;
 	rte_ipsec_process;
@@ -8,6 +9,7 @@ EXPERIMENTAL {
 	rte_ipsec_sa_init;
 	rte_ipsec_sa_size;
 	rte_ipsec_sa_type;
+	rte_ipsec_ses_from_crypto;
 
 	local: *;
 };

From patchwork Tue Oct 9 18:23:40 2018
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: "Ananyev, Konstantin"
X-Patchwork-Id: 46409
X-Patchwork-Delegate: thomas@monjalon.net
Return-Path:
X-Original-To: patchwork@dpdk.org
Delivered-To: patchwork@dpdk.org
Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix)
 with ESMTP id 2D17C1B5F9; Tue, 9 Oct 2018 20:24:24 +0200 (CEST)
Received: from mga05.intel.com (mga05.intel.com [192.55.52.43]) by dpdk.org
 (Postfix) with ESMTP id A6EF21B59C for ; Tue, 9 Oct 2018 20:24:17 +0200 (CEST)
X-Amp-Result: SKIPPED(no attachment in message)
X-Amp-File-Uploaded: False
Received: from fmsmga008.fm.intel.com ([10.253.24.58]) by fmsmga105.fm.intel.com
 with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 09 Oct 2018 11:24:15 -0700
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="5.54,361,1534834800"; d="scan'208";a="77469454"
Received: from sivswdev02.ir.intel.com (HELO localhost.localdomain)
([10.237.217.46]) by fmsmga008.fm.intel.com with ESMTP; 09 Oct 2018 11:23:59 -0700 From: Konstantin Ananyev To: dev@dpdk.org Cc: Konstantin Ananyev , Mohammad Abdul Awal , Bernard Iremonger Date: Tue, 9 Oct 2018 19:23:40 +0100 Message-Id: <1539109420-13412-10-git-send-email-konstantin.ananyev@intel.com> X-Mailer: git-send-email 1.7.0.7 In-Reply-To: <1535129598-27301-1-git-send-email-konstantin.ananyev@intel.com> References: <1535129598-27301-1-git-send-email-konstantin.ananyev@intel.com> Subject: [dpdk-dev] [RFC v2 9/9] test/ipsec: introduce functional test X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Signed-off-by: Mohammad Abdul Awal Signed-off-by: Bernard Iremonger --- test/test/Makefile | 3 + test/test/meson.build | 3 + test/test/test_ipsec.c | 1329 ++++++++++++++++++++++++++++++++++++++++++++++++ 3 files changed, 1335 insertions(+) create mode 100644 test/test/test_ipsec.c diff --git a/test/test/Makefile b/test/test/Makefile index dcea4410d..2be25808c 100644 --- a/test/test/Makefile +++ b/test/test/Makefile @@ -204,6 +204,9 @@ SRCS-$(CONFIG_RTE_LIBRTE_KVARGS) += test_kvargs.c SRCS-$(CONFIG_RTE_LIBRTE_BPF) += test_bpf.c +SRCS-$(CONFIG_RTE_LIBRTE_IPSEC) += test_ipsec.c +LDLIBS += -lrte_ipsec + CFLAGS += -DALLOW_EXPERIMENTAL_API CFLAGS += -O3 diff --git a/test/test/meson.build b/test/test/meson.build index bacb5b144..803f2e28d 100644 --- a/test/test/meson.build +++ b/test/test/meson.build @@ -47,6 +47,7 @@ test_sources = files('commands.c', 'test_hash_perf.c', 'test_hash_scaling.c', 'test_interrupts.c', + 'test_ipsec.c', 'test_kni.c', 'test_kvargs.c', 'test_link_bonding.c', @@ -113,6 +114,7 @@ test_deps = ['acl', 'eventdev', 'flow_classify', 'hash', + 'ipsec', 'lpm', 'member', 'pipeline', @@ -172,6 +174,7 @@ test_names = [ 'hash_multiwriter_autotest', 'hash_perf_autotest', 'interrupt_autotest', + 'ipsec_autotest', 'kni_autotest', 'kvargs_autotest', 'link_bonding_autotest', diff --git a/test/test/test_ipsec.c b/test/test/test_ipsec.c new file mode 100644 index 000000000..6922cbb7e --- /dev/null +++ b/test/test/test_ipsec.c @@ -0,0 +1,1329 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2018 Intel Corporation + */ + +#include + +#include +#include + +#include +#include +#include +#include +#include +#include +#include +#include + +#include +#include +#include +#include +#include +#include +#include +#include + +#include "test.h" +#include "test_cryptodev.h" + +#define VDEV_ARGS_SIZE 100 +#define MAX_NB_SESSIONS 8 + +struct user_params { + enum rte_crypto_sym_xform_type auth; + enum rte_crypto_sym_xform_type cipher; + enum rte_crypto_sym_xform_type aead; + + char auth_algo[128]; + char cipher_algo[128]; + char aead_algo[128]; +}; + +struct crypto_testsuite_params { + struct rte_mempool *mbuf_pool; + struct rte_mempool *op_mpool; + struct rte_mempool *session_mpool; + struct rte_cryptodev_config conf; + struct rte_cryptodev_qp_conf qp_conf; + + uint8_t valid_devs[RTE_CRYPTO_MAX_DEVS]; + uint8_t valid_dev_count; +}; + +struct crypto_unittest_params { + struct rte_crypto_sym_xform cipher_xform; + struct rte_crypto_sym_xform auth_xform; + struct rte_crypto_sym_xform aead_xform; + struct rte_crypto_sym_xform *crypto_xforms; + + struct rte_ipsec_sa_prm sa_prm; + struct rte_ipsec_session ss; + + struct rte_crypto_op *op; + + struct rte_mbuf *obuf, *ibuf, *testbuf; + + uint8_t *digest; +}; + +static struct 
crypto_testsuite_params testsuite_params = { NULL }; +static struct crypto_unittest_params unittest_params; +static struct user_params uparams; + +static uint8_t global_key[128] = { 0 }; + +struct supported_cipher_algo { + const char *keyword; + enum rte_crypto_cipher_algorithm algo; + uint16_t iv_len; + uint16_t block_size; + uint16_t key_len; +}; + +struct supported_auth_algo { + const char *keyword; + enum rte_crypto_auth_algorithm algo; + uint16_t digest_len; + uint16_t key_len; + uint8_t key_not_req; +}; + +const struct supported_cipher_algo cipher_algos[] = { + { + .keyword = "null", + .algo = RTE_CRYPTO_CIPHER_NULL, + .iv_len = 0, + .block_size = 4, + .key_len = 0 + }, +}; + +const struct supported_auth_algo auth_algos[] = { + { + .keyword = "null", + .algo = RTE_CRYPTO_AUTH_NULL, + .digest_len = 0, + .key_len = 0, + .key_not_req = 1 + }, +}; + +static int +dummy_sec_create(void *device, struct rte_security_session_conf *conf, + struct rte_security_session *sess, struct rte_mempool *mp) +{ + RTE_SET_USED(device); + RTE_SET_USED(conf); + RTE_SET_USED(mp); + + sess->sess_private_data = NULL; + return 0; +} + +static int +dummy_sec_destroy(void *device, struct rte_security_session *sess) +{ + RTE_SET_USED(device); + RTE_SET_USED(sess); + return 0; +} + +static const struct rte_security_ops dummy_sec_ops = { + .session_create = dummy_sec_create, + .session_destroy = dummy_sec_destroy, +}; + +static struct rte_security_ctx dummy_sec_ctx = { + .ops = &dummy_sec_ops, +}; + +static const struct supported_cipher_algo * +find_match_cipher_algo(const char *cipher_keyword) +{ + size_t i; + + for (i = 0; i < RTE_DIM(cipher_algos); i++) { + const struct supported_cipher_algo *algo = + &cipher_algos[i]; + + if (strcmp(cipher_keyword, algo->keyword) == 0) + return algo; + } + + return NULL; +} + +static const struct supported_auth_algo * +find_match_auth_algo(const char *auth_keyword) +{ + size_t i; + + for (i = 0; i < RTE_DIM(auth_algos); i++) { + const struct supported_auth_algo *algo = + &auth_algos[i]; + + if (strcmp(auth_keyword, algo->keyword) == 0) + return algo; + } + + return NULL; +} + +static int +testsuite_setup(void) +{ + struct crypto_testsuite_params *ts_params = &testsuite_params; + struct rte_cryptodev_info info; + uint32_t nb_devs, dev_id; + + memset(ts_params, 0, sizeof(*ts_params)); + + ts_params->mbuf_pool = rte_pktmbuf_pool_create( + "CRYPTO_MBUFPOOL", + NUM_MBUFS, MBUF_CACHE_SIZE, 0, MBUF_SIZE, + rte_socket_id()); + if (ts_params->mbuf_pool == NULL) { + RTE_LOG(ERR, USER1, "Can't create CRYPTO_MBUFPOOL\n"); + return TEST_FAILED; + } + + ts_params->op_mpool = rte_crypto_op_pool_create( + "MBUF_CRYPTO_SYM_OP_POOL", + RTE_CRYPTO_OP_TYPE_SYMMETRIC, + NUM_MBUFS, MBUF_CACHE_SIZE, + DEFAULT_NUM_XFORMS * + sizeof(struct rte_crypto_sym_xform) + + MAXIMUM_IV_LENGTH, + rte_socket_id()); + if (ts_params->op_mpool == NULL) { + RTE_LOG(ERR, USER1, "Can't create CRYPTO_OP_POOL\n"); + return TEST_FAILED; + } + + nb_devs = rte_cryptodev_count(); + if (nb_devs < 1) { + RTE_LOG(ERR, USER1, "No crypto devices found?\n"); + return TEST_FAILED; + } + + ts_params->valid_devs[ts_params->valid_dev_count++] = 0; + + /* Set up all the qps on the first of the valid devices found */ + dev_id = ts_params->valid_devs[0]; + + rte_cryptodev_info_get(dev_id, &info); + + ts_params->conf.nb_queue_pairs = info.max_nb_queue_pairs; + ts_params->conf.socket_id = SOCKET_ID_ANY; + + unsigned int session_size = + rte_cryptodev_sym_get_private_session_size(dev_id); + + /* + * Create mempool with maximum number of 
sessions * 2,
+	 * to include the session headers
+	 */
+	if (info.sym.max_nb_sessions != 0 &&
+			info.sym.max_nb_sessions < MAX_NB_SESSIONS) {
+		RTE_LOG(ERR, USER1, "Device does not support "
+				"at least %u sessions\n",
+				MAX_NB_SESSIONS);
+		return TEST_FAILED;
+	}
+
+	ts_params->session_mpool = rte_mempool_create(
+				"test_sess_mp",
+				MAX_NB_SESSIONS * 2,
+				session_size,
+				0, 0, NULL, NULL, NULL,
+				NULL, SOCKET_ID_ANY,
+				0);
+
+	TEST_ASSERT_NOT_NULL(ts_params->session_mpool,
+			"session mempool allocation failed");
+
+	TEST_ASSERT_SUCCESS(rte_cryptodev_configure(dev_id,
+			&ts_params->conf),
+			"Failed to configure cryptodev %u with %u qps",
+			dev_id, ts_params->conf.nb_queue_pairs);
+
+	ts_params->qp_conf.nb_descriptors = DEFAULT_NUM_OPS_INFLIGHT;
+
+	TEST_ASSERT_SUCCESS(rte_cryptodev_queue_pair_setup(
+		dev_id, 0, &ts_params->qp_conf,
+		rte_cryptodev_socket_id(dev_id),
+		ts_params->session_mpool),
+		"Failed to setup queue pair %u on cryptodev %u",
+		0, dev_id);
+
+	return TEST_SUCCESS;
+}
+
+static void
+testsuite_teardown(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+
+	if (ts_params->mbuf_pool != NULL) {
+		RTE_LOG(DEBUG, USER1, "CRYPTO_MBUFPOOL count %u\n",
+		rte_mempool_avail_count(ts_params->mbuf_pool));
+		rte_mempool_free(ts_params->mbuf_pool);
+		ts_params->mbuf_pool = NULL;
+	}
+
+	if (ts_params->op_mpool != NULL) {
+		RTE_LOG(DEBUG, USER1, "CRYPTO_OP_POOL count %u\n",
+		rte_mempool_avail_count(ts_params->op_mpool));
+		rte_mempool_free(ts_params->op_mpool);
+		ts_params->op_mpool = NULL;
+	}
+
+	/* Free session mempool */
+	if (ts_params->session_mpool != NULL) {
+		rte_mempool_free(ts_params->session_mpool);
+		ts_params->session_mpool = NULL;
+	}
+}
+
+static int
+ut_setup(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+
+	/* Clear unit test parameters before running test */
+	memset(ut_params, 0, sizeof(*ut_params));
+
+	/* Reconfigure device to default parameters */
+	ts_params->conf.socket_id = SOCKET_ID_ANY;
+
+	/* Start the device */
+	TEST_ASSERT_SUCCESS(rte_cryptodev_start(ts_params->valid_devs[0]),
+			"Failed to start cryptodev %u",
+			ts_params->valid_devs[0]);
+
+	return TEST_SUCCESS;
+}
+
+static void
+ut_teardown(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+
+	/* free crypto operation structure */
+	if (ut_params->op)
+		rte_crypto_op_free(ut_params->op);
+
+	/*
+	 * free mbuf - both obuf and ibuf are usually the same,
+	 * so a check whether they point at the same address is necessary
+	 * to avoid freeing the mbuf twice.
+ */ + if (ut_params->obuf) { + rte_pktmbuf_free(ut_params->obuf); + if (ut_params->ibuf == ut_params->obuf) + ut_params->ibuf = 0; + ut_params->obuf = 0; + } + if (ut_params->ibuf) { + rte_pktmbuf_free(ut_params->ibuf); + ut_params->ibuf = 0; + } + + if (ut_params->testbuf) { + rte_pktmbuf_free(ut_params->testbuf); + ut_params->testbuf = 0; + } + + if (ts_params->mbuf_pool != NULL) + RTE_LOG(DEBUG, USER1, "CRYPTO_MBUFPOOL count %u\n", + rte_mempool_avail_count(ts_params->mbuf_pool)); + + /* Stop the device */ + rte_cryptodev_stop(ts_params->valid_devs[0]); +} + +#define IPSEC_MAX_PAD_SIZE UINT8_MAX + +static const uint8_t esp_pad_bytes[IPSEC_MAX_PAD_SIZE] = { + 1, 2, 3, 4, 5, 6, 7, 8, + 9, 10, 11, 12, 13, 14, 15, 16, + 17, 18, 19, 20, 21, 22, 23, 24, + 25, 26, 27, 28, 29, 30, 31, 32, + 33, 34, 35, 36, 37, 38, 39, 40, + 41, 42, 43, 44, 45, 46, 47, 48, + 49, 50, 51, 52, 53, 54, 55, 56, + 57, 58, 59, 60, 61, 62, 63, 64, + 65, 66, 67, 68, 69, 70, 71, 72, + 73, 74, 75, 76, 77, 78, 79, 80, + 81, 82, 83, 84, 85, 86, 87, 88, + 89, 90, 91, 92, 93, 94, 95, 96, + 97, 98, 99, 100, 101, 102, 103, 104, + 105, 106, 107, 108, 109, 110, 111, 112, + 113, 114, 115, 116, 117, 118, 119, 120, + 121, 122, 123, 124, 125, 126, 127, 128, + 129, 130, 131, 132, 133, 134, 135, 136, + 137, 138, 139, 140, 141, 142, 143, 144, + 145, 146, 147, 148, 149, 150, 151, 152, + 153, 154, 155, 156, 157, 158, 159, 160, + 161, 162, 163, 164, 165, 166, 167, 168, + 169, 170, 171, 172, 173, 174, 175, 176, + 177, 178, 179, 180, 181, 182, 183, 184, + 185, 186, 187, 188, 189, 190, 191, 192, + 193, 194, 195, 196, 197, 198, 199, 200, + 201, 202, 203, 204, 205, 206, 207, 208, + 209, 210, 211, 212, 213, 214, 215, 216, + 217, 218, 219, 220, 221, 222, 223, 224, + 225, 226, 227, 228, 229, 230, 231, 232, + 233, 234, 235, 236, 237, 238, 239, 240, + 241, 242, 243, 244, 245, 246, 247, 248, + 249, 250, 251, 252, 253, 254, 255, +}; + +/* ***** data for tests ***** */ + +const char null_plain_data[] = + "Network Security People Have A Strange Sense Of Humor unlike Other " + "People who have a normal sense of humour"; +const char null_encrypted_data[] = + "Network Security People Have A Strange Sense Of Humor unlike Other " + "People who have a normal sense of humour"; + +#define DATA_64_BYTES (64) +#define DATA_80_BYTES (80) +#define DATA_100_BYTES (100) +#define INBOUND_SPI (7) +#define OUTBOUND_SPI (17) + +struct ipv4_hdr ipv4_outer = { + .version_ihl = IPVERSION << 4 | + sizeof(ipv4_outer) / IPV4_IHL_MULTIPLIER, + .time_to_live = IPDEFTTL, + .next_proto_id = IPPROTO_ESP, + .src_addr = IPv4(192, 168, 1, 100), + .dst_addr = IPv4(192, 168, 2, 100), +}; + +static struct rte_mbuf * +setup_test_string(struct rte_mempool *mpool, + const char *string, size_t len, uint8_t blocksize) +{ + struct rte_mbuf *m = rte_pktmbuf_alloc(mpool); + size_t t_len = len - (blocksize ? 
(len % blocksize) : 0);
+
+	if (m) {
+		memset(m->buf_addr, 0, m->buf_len);
+		char *dst = rte_pktmbuf_append(m, t_len);
+
+		if (!dst) {
+			rte_pktmbuf_free(m);
+			return NULL;
+		}
+		if (string != NULL)
+			rte_memcpy(dst, string, t_len);
+		else
+			memset(dst, 0, t_len);
+	}
+
+	return m;
+}
+
+static struct rte_mbuf *
+setup_test_string_tunneled(struct rte_mempool *mpool, const char *string,
+	size_t len, uint32_t spi, uint32_t seq)
+{
+	struct rte_mbuf *m = rte_pktmbuf_alloc(mpool);
+	uint32_t hdrlen = sizeof(struct ipv4_hdr) + sizeof(struct esp_hdr);
+	uint32_t taillen = sizeof(struct esp_tail);
+	uint32_t t_len = len + hdrlen + taillen;
+	uint32_t padlen;
+
+	struct esp_hdr esph  = {
+		.spi = rte_cpu_to_be_32(spi),
+		.seq = rte_cpu_to_be_32(seq)
+	};
+
+	padlen = RTE_ALIGN(t_len, 4) - t_len;
+	t_len += padlen;
+
+	struct esp_tail espt = {
+		.pad_len = padlen,
+		.next_proto = IPPROTO_IPIP,
+	};
+
+	if (m == NULL)
+		return NULL;
+
+	memset(m->buf_addr, 0, m->buf_len);
+	char *dst = rte_pktmbuf_append(m, t_len);
+
+	if (!dst) {
+		rte_pktmbuf_free(m);
+		return NULL;
+	}
+	/* copy outer IP and ESP header */
+	ipv4_outer.total_length = rte_cpu_to_be_16(t_len);
+	ipv4_outer.packet_id = rte_cpu_to_be_16(1);
+	rte_memcpy(dst, &ipv4_outer, sizeof(ipv4_outer));
+	dst += sizeof(ipv4_outer);
+	m->l3_len = sizeof(ipv4_outer);
+	rte_memcpy(dst, &esph, sizeof(esph));
+	dst += sizeof(esph);
+
+	if (string != NULL) {
+		/* copy payload */
+		rte_memcpy(dst, string, len);
+		dst += len;
+		/* copy pad bytes */
+		rte_memcpy(dst, esp_pad_bytes, padlen);
+		dst += padlen;
+		/* copy ESP tail header */
+		rte_memcpy(dst, &espt, sizeof(espt));
+	} else
+		memset(dst, 0, t_len);
+
+	return m;
+}
+
+static int
+check_cryptodev_capability(const struct crypto_unittest_params *ut,
+		uint8_t devid)
+{
+	struct rte_cryptodev_sym_capability_idx cap_idx;
+	const struct rte_cryptodev_symmetric_capability *cap;
+	int rc = -1;
+
+	cap_idx.type = RTE_CRYPTO_SYM_XFORM_AUTH;
+	cap_idx.algo.auth = ut->auth_xform.auth.algo;
+	cap = rte_cryptodev_sym_capability_get(devid, &cap_idx);
+
+	if (cap != NULL) {
+		rc = rte_cryptodev_sym_capability_check_auth(cap,
+				ut->auth_xform.auth.key.length,
+				ut->auth_xform.auth.digest_length, 0);
+		if (rc == 0) {
+			cap_idx.type = RTE_CRYPTO_SYM_XFORM_CIPHER;
+			cap_idx.algo.cipher = ut->cipher_xform.cipher.algo;
+			cap = rte_cryptodev_sym_capability_get(devid, &cap_idx);
+			if (cap != NULL)
+				rc = rte_cryptodev_sym_capability_check_cipher(
+					cap,
+					ut->cipher_xform.cipher.key.length,
+					ut->cipher_xform.cipher.iv.length);
+		}
+	}
+
+	return rc;
+}
+
+static int
+create_dummy_sec_session(struct crypto_unittest_params *ut,
+	struct rte_mempool *pool)
+{
+	static struct rte_security_session_conf conf;
+
+	ut->ss.security.ses = rte_security_session_create(&dummy_sec_ctx,
+					&conf, pool);
+
+	if (ut->ss.security.ses == NULL)
+		return -ENOMEM;
+
+	ut->ss.security.ctx = &dummy_sec_ctx;
+	ut->ss.security.ol_flags = 0;
+	return 0;
+}
+
+static int
+create_crypto_session(struct crypto_unittest_params *ut,
+	struct rte_mempool *pool, const uint8_t crypto_dev[],
+	uint32_t crypto_dev_num)
+{
+	int32_t rc;
+	uint32_t devnum, i;
+	struct rte_cryptodev_sym_session *s;
+	uint8_t devid[RTE_CRYPTO_MAX_DEVS];
+
+	/* check which cryptodevs support SA */
+	devnum = 0;
+	for (i = 0; i < crypto_dev_num; i++) {
+		if (check_cryptodev_capability(ut, crypto_dev[i]) == 0)
+			devid[devnum++] = crypto_dev[i];
+	}
+
+	if (devnum == 0)
+		return -ENODEV;
+
+	s = rte_cryptodev_sym_session_create(pool);
+	if (s == NULL)
+		return -ENOMEM;
+
+	/*
initialize SA crypto session for all supported devices */
+	for (i = 0; i != devnum; i++) {
+		rc = rte_cryptodev_sym_session_init(devid[i], s,
+			ut->crypto_xforms, pool);
+		if (rc != 0)
+			break;
+	}
+
+	if (i == devnum) {
+		ut->ss.crypto.ses = s;
+		return 0;
+	}
+
+	/* failure, do cleanup */
+	while (i-- != 0)
+		rte_cryptodev_sym_session_clear(devid[i], s);
+
+	rte_cryptodev_sym_session_free(s);
+	return rc;
+}
+
+static int
+create_session(struct crypto_unittest_params *ut,
+	struct rte_mempool *pool, const uint8_t crypto_dev[],
+	uint32_t crypto_dev_num)
+{
+	if (ut->ss.type == RTE_SECURITY_ACTION_TYPE_NONE)
+		return create_crypto_session(ut, pool, crypto_dev,
+				crypto_dev_num);
+	else
+		return create_dummy_sec_session(ut, pool);
+}
+
+static void
+fill_crypto_xform(struct crypto_unittest_params *ut_params,
+	const struct supported_auth_algo *auth_algo,
+	const struct supported_cipher_algo *cipher_algo)
+{
+	ut_params->auth_xform.type = RTE_CRYPTO_SYM_XFORM_AUTH;
+	ut_params->auth_xform.auth.algo = auth_algo->algo;
+	ut_params->auth_xform.auth.key.data = global_key;
+	ut_params->auth_xform.auth.key.length = auth_algo->key_len;
+	ut_params->auth_xform.auth.digest_length = auth_algo->digest_len;
+
+	ut_params->auth_xform.auth.op = RTE_CRYPTO_AUTH_OP_VERIFY;
+	ut_params->auth_xform.next = &ut_params->cipher_xform;
+
+	ut_params->cipher_xform.type = RTE_CRYPTO_SYM_XFORM_CIPHER;
+	ut_params->cipher_xform.cipher.algo = cipher_algo->algo;
+	ut_params->cipher_xform.cipher.key.data = global_key;
+	ut_params->cipher_xform.cipher.key.length = cipher_algo->key_len;
+	ut_params->cipher_xform.cipher.op = RTE_CRYPTO_CIPHER_OP_DECRYPT;
+	ut_params->cipher_xform.cipher.iv.offset = IV_OFFSET;
+	ut_params->cipher_xform.cipher.iv.length = cipher_algo->iv_len;
+	ut_params->cipher_xform.next = NULL;
+
+	ut_params->crypto_xforms = &ut_params->auth_xform;
+}
+
+static int
+fill_ipsec_param(uint32_t spi, enum rte_security_ipsec_sa_direction direction,
+	uint32_t replay_win_sz, uint64_t flags)
+{
+	struct crypto_unittest_params *ut_params = &unittest_params;
+	struct rte_ipsec_sa_prm *prm = &ut_params->sa_prm;
+	const struct supported_auth_algo *auth_algo;
+	const struct supported_cipher_algo *cipher_algo;
+
+	memset(prm, 0, sizeof(*prm));
+
+	prm->userdata = 1;
+	prm->flags = flags;
+	prm->replay_win_sz = replay_win_sz;
+
+	/* setup ipsec xform */
+	prm->ipsec_xform.spi = spi;
+	prm->ipsec_xform.salt = (uint32_t)rte_rand();
+	prm->ipsec_xform.direction = direction;
+	prm->ipsec_xform.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP;
+	prm->ipsec_xform.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL;
+
+	/* setup tunnel related fields */
+	prm->ipsec_xform.tunnel.type = RTE_SECURITY_IPSEC_TUNNEL_IPV4;
+	prm->tun.hdr_len = sizeof(ipv4_outer);
+	prm->tun.next_proto = IPPROTO_IPIP;
+	prm->tun.hdr = &ipv4_outer;
+
+	/* setup crypto section */
+	if (uparams.aead != 0) {
+		/* TODO: will need to fill out with other test cases */
+	} else {
+		if (uparams.auth == 0 && uparams.cipher == 0)
+			return TEST_FAILED;
+
+		auth_algo = find_match_auth_algo(uparams.auth_algo);
+		cipher_algo = find_match_cipher_algo(uparams.cipher_algo);
+
+		fill_crypto_xform(ut_params, auth_algo, cipher_algo);
+	}
+
+	prm->crypto_xform = ut_params->crypto_xforms;
+	return TEST_SUCCESS;
+}
+
+static int
+create_sa(uint32_t spi, enum rte_security_ipsec_sa_direction direction,
+	enum rte_security_session_action_type action_type,
+	uint32_t replay_win_sz, uint64_t flags)
+{
+	struct crypto_testsuite_params *ts = &testsuite_params;
+	struct crypto_unittest_params *ut =
&unittest_params;
+	size_t sz;
+	int rc;
+
+	const struct rte_ipsec_sa_prm prm = {
+		.flags = flags,
+		.ipsec_xform.direction = RTE_SECURITY_IPSEC_SA_DIR_INGRESS,
+		.replay_win_sz = replay_win_sz,
+	};
+
+	memset(&ut->ss, 0, sizeof(ut->ss));
+
+	/* create rte_ipsec_sa */
+	sz = rte_ipsec_sa_size(&prm);
+	TEST_ASSERT(sz > 0, "rte_ipsec_sa_size() failed\n");
+	ut->ss.sa = rte_zmalloc(NULL, sz, RTE_CACHE_LINE_SIZE);
+	TEST_ASSERT_NOT_NULL(ut->ss.sa,
+		"failed to allocate memory for rte_ipsec_sa\n");
+
+	rc = fill_ipsec_param(spi, direction, replay_win_sz, flags);
+	if (rc != 0)
+		return TEST_FAILED;
+
+	ut->ss.type = action_type;
+	rc = create_session(ut, ts->session_mpool, ts->valid_devs,
+		ts->valid_dev_count);
+	if (rc != 0)
+		return TEST_FAILED;
+
+	rc = rte_ipsec_sa_init(ut->ss.sa, &ut->sa_prm, sz);
+	rc = (rc > 0 && (uint32_t)rc <= sz) ? 0 : -EINVAL;
+	if (rc != 0)
+		return rc;
+
+	return rte_ipsec_session_prepare(&ut->ss);
+}
+
+static int
+crypto_ipsec(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+	uint32_t k, n;
+	struct rte_ipsec_group grp[1];
+
+	/* call crypto prepare */
+	k = rte_ipsec_crypto_prepare(&ut_params->ss, &ut_params->ibuf,
+		&ut_params->op, 1);
+	if (k != 1) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_crypto_prepare fail\n");
+		return TEST_FAILED;
+	}
+	k = rte_cryptodev_enqueue_burst(ts_params->valid_devs[0], 0,
+		&ut_params->op, k);
+	if (k != 1) {
+		RTE_LOG(ERR, USER1, "rte_cryptodev_enqueue_burst fail\n");
+		return TEST_FAILED;
+	}
+
+	n = rte_cryptodev_dequeue_burst(ts_params->valid_devs[0], 0,
+		&ut_params->op, RTE_DIM(&ut_params->op));
+	if (n != 1) {
+		RTE_LOG(ERR, USER1, "rte_cryptodev_dequeue_burst fail\n");
+		return TEST_FAILED;
+	}
+
+	n = rte_ipsec_crypto_group(
+		(const struct rte_crypto_op **)(uintptr_t)&ut_params->op,
+		&ut_params->obuf, grp, n);
+	if (n != 1 || grp[0].m[0] != ut_params->obuf || grp[0].cnt != 1 ||
+			grp[0].id.ptr != &ut_params->ss) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_crypto_group fail\n");
+		return TEST_FAILED;
+	}
+
+	/* call crypto process */
+	n = rte_ipsec_process(grp[0].id.ptr, grp[0].m, grp[0].cnt);
+
+	if (n != 1) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_process fail\n");
+		return TEST_FAILED;
+	}
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_ipsec_crypto_op_alloc(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+	int rc = 0;
+
+	ut_params->op = rte_crypto_op_alloc(ts_params->op_mpool,
+		RTE_CRYPTO_OP_TYPE_SYMMETRIC);
+	if (ut_params->op != NULL)
+		ut_params->op->sym[0].m_src = ut_params->ibuf;
+	else {
+		RTE_LOG(ERR, USER1,
+			"Failed to allocate symmetric crypto operation struct\n");
+		rc = TEST_FAILED;
+	}
+	return rc;
+}
+
+static void
+test_ipsec_dump_buffers(struct crypto_unittest_params *ut_params)
+{
+	if (ut_params->ibuf) {
+		printf("ibuf data:\n");
+		rte_pktmbuf_dump(stdout, ut_params->ibuf,
+			ut_params->ibuf->data_len);
+	}
+	if (ut_params->obuf) {
+		printf("obuf data:\n");
+		rte_pktmbuf_dump(stdout, ut_params->obuf,
+			ut_params->obuf->data_len);
+	}
+	if (ut_params->testbuf) {
+		printf("testbuf data:\n");
+		rte_pktmbuf_dump(stdout, ut_params->testbuf,
+			ut_params->testbuf->data_len);
+	}
+}
+
+static int
+crypto_inb_null_null_check(struct crypto_unittest_params *ut_params)
+{
+	/* compare the data buffers */
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(null_plain_data,
+		rte_pktmbuf_mtod(ut_params->obuf, void *),
+		DATA_64_BYTES,
+		"input and output data does not match\n");
+
TEST_ASSERT_EQUAL(ut_params->obuf->data_len, ut_params->obuf->pkt_len, + "data_len is not equal to pkt_len"); + TEST_ASSERT_EQUAL(ut_params->obuf->data_len, DATA_64_BYTES, + "data_len is not equal to input data"); + return 0; +} + +static void +destroy_sa(void) +{ + struct crypto_unittest_params *ut = &unittest_params; + + rte_ipsec_sa_fini(ut->ss.sa); + rte_free(ut->ss.sa); + rte_cryptodev_sym_session_free(ut->ss.crypto.ses); + memset(&ut->ss, 0, sizeof(ut->ss)); +} + +static int +test_ipsec_crypto_inb_null_null(void) +{ + struct crypto_testsuite_params *ts_params = &testsuite_params; + struct crypto_unittest_params *ut_params = &unittest_params; + int rc; + + uparams.auth = RTE_CRYPTO_SYM_XFORM_AUTH; + uparams.cipher = RTE_CRYPTO_SYM_XFORM_CIPHER; + strcpy(uparams.auth_algo, "null"); + strcpy(uparams.cipher_algo, "null"); + + /* create rte_ipsec_sa*/ + rc = create_sa(INBOUND_SPI, RTE_SECURITY_IPSEC_SA_DIR_INGRESS, + RTE_SECURITY_ACTION_TYPE_NONE, 0, 0); + if (rc != 0) { + RTE_LOG(ERR, USER1, "rte_ipsec_sa_init failed\n"); + return TEST_FAILED; + } + + /* Generate test mbuf data */ + ut_params->ibuf = setup_test_string_tunneled(ts_params->mbuf_pool, + null_encrypted_data, DATA_64_BYTES, INBOUND_SPI, 1); + + rc = test_ipsec_crypto_op_alloc(); + if (rc == 0) { + /* call ipsec library api */ + rc = crypto_ipsec(); + if (rc == 0) + rc = crypto_inb_null_null_check(ut_params); + else { + RTE_LOG(ERR, USER1, "crypto_ipsec failed\n"); + rc = TEST_FAILED; + } + } + + if (rc == TEST_FAILED) + test_ipsec_dump_buffers(ut_params); + + destroy_sa(); + return rc; +} + +static int +crypto_outb_null_null_check(struct crypto_unittest_params *ut_params) +{ + void *obuf_data; + void *testbuf_data; + + /* compare the buffer data */ + testbuf_data = rte_pktmbuf_mtod(ut_params->testbuf, void *); + obuf_data = rte_pktmbuf_mtod(ut_params->obuf, void *); + + TEST_ASSERT_BUFFERS_ARE_EQUAL(testbuf_data, obuf_data, + ut_params->obuf->pkt_len, + "test and output data does not match\n"); + TEST_ASSERT_EQUAL(ut_params->obuf->data_len, + ut_params->testbuf->data_len, + "obuf data_len is not equal to testbuf data_len"); + TEST_ASSERT_EQUAL(ut_params->obuf->pkt_len, + ut_params->testbuf->pkt_len, + "obuf pkt_len is not equal to testbuf pkt_len"); + + return 0; +} + +static int +test_ipsec_crypto_outb_null_null(void) +{ + struct crypto_testsuite_params *ts_params = &testsuite_params; + struct crypto_unittest_params *ut_params = &unittest_params; + int32_t rc; + + uparams.auth = RTE_CRYPTO_SYM_XFORM_AUTH; + uparams.cipher = RTE_CRYPTO_SYM_XFORM_CIPHER; + strcpy(uparams.auth_algo, "null"); + strcpy(uparams.cipher_algo, "null"); + + /* create rte_ipsec_sa*/ + rc = create_sa(OUTBOUND_SPI, RTE_SECURITY_IPSEC_SA_DIR_EGRESS, + RTE_SECURITY_ACTION_TYPE_NONE, 0, 0); + if (rc != 0) { + RTE_LOG(ERR, USER1, "rte_ipsec_sa_init failed\n"); + return TEST_FAILED; + } + + /* Generate input mbuf data */ + ut_params->ibuf = setup_test_string(ts_params->mbuf_pool, + null_plain_data, DATA_80_BYTES, 0); + + /* Generate test mbuf data */ + ut_params->testbuf = setup_test_string_tunneled(ts_params->mbuf_pool, + null_plain_data, DATA_80_BYTES, OUTBOUND_SPI, 1); + + rc = test_ipsec_crypto_op_alloc(); + if (rc == 0) { + /* call ipsec library api */ + rc = crypto_ipsec(); + if (rc == 0) + rc = crypto_outb_null_null_check(ut_params); + else + RTE_LOG(ERR, USER1, "crypto_ipsec failed\n"); + } + + if (rc == TEST_FAILED) + test_ipsec_dump_buffers(ut_params); + + destroy_sa(); + return rc; +} + +static int +inline_inb_null_null_check(struct 
crypto_unittest_params *ut_params)
+{
+	void *ibuf_data;
+	void *obuf_data;
+
+	/* compare the buffer data */
+	ibuf_data = rte_pktmbuf_mtod(ut_params->ibuf, void *);
+	obuf_data = rte_pktmbuf_mtod(ut_params->obuf, void *);
+
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(ibuf_data, obuf_data,
+		ut_params->ibuf->data_len,
+		"input and output data does not match\n");
+	TEST_ASSERT_EQUAL(ut_params->ibuf->data_len, ut_params->obuf->data_len,
+		"ibuf data_len is not equal to obuf data_len");
+	TEST_ASSERT_EQUAL(ut_params->ibuf->pkt_len, ut_params->obuf->pkt_len,
+		"ibuf pkt_len is not equal to obuf pkt_len");
+	TEST_ASSERT_EQUAL(ut_params->ibuf->data_len, DATA_100_BYTES,
+		"data_len is not equal to input data");
+
+	return 0;
+}
+
+static int
+test_ipsec_inline_inb_null_null(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+
+	int32_t rc;
+	uint32_t n;
+
+	uparams.auth = RTE_CRYPTO_SYM_XFORM_AUTH;
+	uparams.cipher = RTE_CRYPTO_SYM_XFORM_CIPHER;
+	strcpy(uparams.auth_algo, "null");
+	strcpy(uparams.cipher_algo, "null");
+
+	/* create rte_ipsec_sa*/
+	rc = create_sa(INBOUND_SPI, RTE_SECURITY_IPSEC_SA_DIR_INGRESS,
+			RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO, 0, 0);
+	if (rc != 0) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_sa_init failed\n");
+		return TEST_FAILED;
+	}
+
+	/* Generate inbound mbuf data */
+	ut_params->ibuf = setup_test_string_tunneled(ts_params->mbuf_pool,
+		null_plain_data, DATA_100_BYTES, INBOUND_SPI, 1);
+
+	/* Generate test mbuf data */
+	ut_params->obuf = setup_test_string(ts_params->mbuf_pool,
+		null_plain_data, DATA_100_BYTES, 0);
+
+	n = rte_ipsec_process(&ut_params->ss, &ut_params->ibuf, 1);
+	if (n == 1)
+		rc = inline_inb_null_null_check(ut_params);
+	else {
+		RTE_LOG(ERR, USER1, "rte_ipsec_process failed\n");
+		rc = TEST_FAILED;
+	}
+
+	if (rc == TEST_FAILED)
+		test_ipsec_dump_buffers(ut_params);
+
+	destroy_sa();
+	return rc;
+}
+
+static int
+inline_outb_null_null_check(struct crypto_unittest_params *ut_params)
+{
+	void *obuf_data;
+	void *ibuf_data;
+
+	/* compare the buffer data */
+	ibuf_data = rte_pktmbuf_mtod(ut_params->ibuf, void *);
+	obuf_data = rte_pktmbuf_mtod(ut_params->obuf, void *);
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(ibuf_data, obuf_data,
+		ut_params->ibuf->data_len,
+		"input and output data does not match\n");
+	TEST_ASSERT_EQUAL(ut_params->ibuf->data_len,
+		ut_params->obuf->data_len,
+		"ibuf data_len is not equal to obuf data_len");
+	TEST_ASSERT_EQUAL(ut_params->ibuf->pkt_len,
+		ut_params->obuf->pkt_len,
+		"ibuf pkt_len is not equal to obuf pkt_len");
+
+	return 0;
+}
+
+static int
+test_ipsec_inline_outb_null_null(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+	int32_t rc;
+	uint32_t n;
+
+	uparams.auth = RTE_CRYPTO_SYM_XFORM_AUTH;
+	uparams.cipher = RTE_CRYPTO_SYM_XFORM_CIPHER;
+	strcpy(uparams.auth_algo, "null");
+	strcpy(uparams.cipher_algo, "null");
+
+	/* create rte_ipsec_sa*/
+	rc = create_sa(OUTBOUND_SPI, RTE_SECURITY_IPSEC_SA_DIR_EGRESS,
+			RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO, 0, 0);
+	if (rc != 0) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_sa_init failed\n");
+		return TEST_FAILED;
+	}
+
+	/* Generate test mbuf data */
+	ut_params->ibuf = setup_test_string(ts_params->mbuf_pool,
+		null_plain_data, DATA_100_BYTES, 0);
+
+	/* Generate test tunneled mbuf data for comparison */
+	ut_params->obuf = setup_test_string_tunneled(ts_params->mbuf_pool,
+		null_plain_data, DATA_100_BYTES, OUTBOUND_SPI, 1);
+
+	n =
rte_ipsec_process(&ut_params->ss, &ut_params->ibuf, 1); + if (n == 1) + rc = inline_outb_null_null_check(ut_params); + else { + RTE_LOG(ERR, USER1, "rte_ipsec_process failed\n"); + rc = TEST_FAILED; + } + + if (rc == TEST_FAILED) + test_ipsec_dump_buffers(ut_params); + + destroy_sa(); + return rc; +} + +#define REPLAY_WIN_64 64 + +static int +replay_inb_null_null_check(struct crypto_unittest_params *ut_params) +{ + /* compare the buffer data */ + TEST_ASSERT_BUFFERS_ARE_EQUAL(null_plain_data, + rte_pktmbuf_mtod(ut_params->obuf, void *), + DATA_64_BYTES, + "input and output data does not match\n"); + TEST_ASSERT_EQUAL(ut_params->obuf->data_len, ut_params->obuf->pkt_len, + "data_len is not equal to pkt_len"); + + return 0; +} + +static int +test_ipsec_replay_inb_inside_null_null(void) +{ + struct crypto_testsuite_params *ts_params = &testsuite_params; + struct crypto_unittest_params *ut_params = &unittest_params; + int rc; + + uparams.auth = RTE_CRYPTO_SYM_XFORM_AUTH; + uparams.cipher = RTE_CRYPTO_SYM_XFORM_CIPHER; + strcpy(uparams.auth_algo, "null"); + strcpy(uparams.cipher_algo, "null"); + + /* create rte_ipsec_sa*/ + rc = create_sa(INBOUND_SPI, RTE_SECURITY_IPSEC_SA_DIR_INGRESS, + RTE_SECURITY_ACTION_TYPE_NONE, REPLAY_WIN_64, 0); + if (rc != 0) { + RTE_LOG(ERR, USER1, "rte_ipsec_sa_init failed\n"); + return TEST_FAILED; + } + + /* Generate test mbuf data */ + ut_params->ibuf = setup_test_string_tunneled(ts_params->mbuf_pool, + null_encrypted_data, DATA_64_BYTES, INBOUND_SPI, 1); + + rc = test_ipsec_crypto_op_alloc(); + if (rc == 0) { + /* call ipsec library api */ + rc = crypto_ipsec(); + if (rc == 0) + rc = replay_inb_null_null_check(ut_params); + else { + RTE_LOG(ERR, USER1, "crypto_ipsec failed\n"); + rc = TEST_FAILED; + } + } else { + RTE_LOG(ERR, USER1, + "Failed to allocate symmetric crypto operation struct\n"); + rc = TEST_FAILED; + } + + if (rc == 0) { + /* generate packet with seq number inside the replay window */ + if (ut_params->ibuf) { + rte_pktmbuf_free(ut_params->ibuf); + ut_params->ibuf = 0; + } + + ut_params->ibuf = setup_test_string_tunneled( + ts_params->mbuf_pool, null_encrypted_data, + DATA_64_BYTES, INBOUND_SPI, REPLAY_WIN_64); + + rc = test_ipsec_crypto_op_alloc(); + if (rc == 0) { + /* call ipsec library api */ + rc = crypto_ipsec(); + if (rc == 0) + rc = replay_inb_null_null_check(ut_params); + else { + RTE_LOG(ERR, USER1, "crypto_ipsec failed\n"); + rc = TEST_FAILED; + } + } + } + + if (rc == TEST_FAILED) + test_ipsec_dump_buffers(ut_params); + + destroy_sa(); + + return rc; +} + +static int +test_ipsec_replay_inb_outside_null_null(void) +{ + struct crypto_testsuite_params *ts_params = &testsuite_params; + struct crypto_unittest_params *ut_params = &unittest_params; + int rc; + + uparams.auth = RTE_CRYPTO_SYM_XFORM_AUTH; + uparams.cipher = RTE_CRYPTO_SYM_XFORM_CIPHER; + strcpy(uparams.auth_algo, "null"); + strcpy(uparams.cipher_algo, "null"); + + /* create rte_ipsec_sa */ + rc = create_sa(INBOUND_SPI, RTE_SECURITY_IPSEC_SA_DIR_INGRESS, + RTE_SECURITY_ACTION_TYPE_NONE, REPLAY_WIN_64, 0); + if (rc != 0) { + RTE_LOG(ERR, USER1, "rte_ipsec_sa_init failed\n"); + return TEST_FAILED; + } + + /* Generate test mbuf data */ + ut_params->ibuf = setup_test_string_tunneled(ts_params->mbuf_pool, + null_encrypted_data, DATA_64_BYTES, INBOUND_SPI, + REPLAY_WIN_64 + 2); + + rc = test_ipsec_crypto_op_alloc(); + if (rc == 0) { + /* call ipsec library api */ + rc = crypto_ipsec(); + if (rc == 0) + rc = replay_inb_null_null_check(ut_params); + else { + RTE_LOG(ERR, USER1, 
"crypto_ipsec failed\n"); + rc = TEST_FAILED; + } + } + + if (rc == 0) { + /* generate packet with seq number outside the replay window */ + if (ut_params->ibuf) { + rte_pktmbuf_free(ut_params->ibuf); + ut_params->ibuf = 0; + } + ut_params->ibuf = setup_test_string_tunneled( + ts_params->mbuf_pool, null_encrypted_data, + DATA_64_BYTES, INBOUND_SPI, 1); + + rc = test_ipsec_crypto_op_alloc(); + if (rc == 0) { + /* call ipsec library api */ + rc = crypto_ipsec(); + if (rc == 0) { + RTE_LOG(ERR, USER1, + "packet is not outside the replay window\n"); + rc = TEST_FAILED; + } else { + RTE_LOG(ERR, USER1, + "packet is outside the replay window\n"); + rc = 0; + } + } + } + + if (rc == TEST_FAILED) + test_ipsec_dump_buffers(ut_params); + + destroy_sa(); + + return rc; +} + +static int +test_ipsec_replay_inb_repeat_null_null(void) +{ + struct crypto_testsuite_params *ts_params = &testsuite_params; + struct crypto_unittest_params *ut_params = &unittest_params; + int rc; + + uparams.auth = RTE_CRYPTO_SYM_XFORM_AUTH; + uparams.cipher = RTE_CRYPTO_SYM_XFORM_CIPHER; + strcpy(uparams.auth_algo, "null"); + strcpy(uparams.cipher_algo, "null"); + + /* create rte_ipsec_sa */ + rc = create_sa(INBOUND_SPI, RTE_SECURITY_IPSEC_SA_DIR_INGRESS, + RTE_SECURITY_ACTION_TYPE_NONE, REPLAY_WIN_64, 0); + if (rc != 0) { + RTE_LOG(ERR, USER1, "rte_ipsec_sa_init failed\n"); + return TEST_FAILED; + } + + /* Generate test mbuf data */ + ut_params->ibuf = setup_test_string_tunneled(ts_params->mbuf_pool, + null_encrypted_data, DATA_64_BYTES, INBOUND_SPI, 1); + + rc = test_ipsec_crypto_op_alloc(); + if (rc == 0) { + /* call ipsec library api */ + rc = crypto_ipsec(); + if (rc == 0) + rc = replay_inb_null_null_check(ut_params); + else { + RTE_LOG(ERR, USER1, "crypto_ipsec failed\n"); + rc = TEST_FAILED; + } + } + + if (rc == 0) { + /* + * generate packet with repeat seq number in the replay + * window + */ + if (ut_params->ibuf) { + rte_pktmbuf_free(ut_params->ibuf); + ut_params->ibuf = 0; + } + ut_params->ibuf = setup_test_string_tunneled( + ts_params->mbuf_pool, null_encrypted_data, + DATA_64_BYTES, INBOUND_SPI, 1); + + rc = test_ipsec_crypto_op_alloc(); + if (rc == 0) { + /* call ipsec library api */ + rc = crypto_ipsec(); + if (rc == 0) { + RTE_LOG(ERR, USER1, + "packet is not repeated in the replay window\n"); + rc = TEST_FAILED; + } else { + RTE_LOG(ERR, USER1, + "packet is repeated in the replay window\n"); + rc = 0; + } + } + } + + if (rc == TEST_FAILED) + test_ipsec_dump_buffers(ut_params); + + destroy_sa(); + + return rc; +} + +static struct unit_test_suite ipsec_testsuite = { + .suite_name = "IPsec NULL Unit Test Suite", + .setup = testsuite_setup, + .teardown = testsuite_teardown, + .unit_test_cases = { + + TEST_CASE_ST(ut_setup, ut_teardown, + test_ipsec_crypto_inb_null_null), + TEST_CASE_ST(ut_setup, ut_teardown, + test_ipsec_crypto_outb_null_null), + TEST_CASE_ST(ut_setup, ut_teardown, + test_ipsec_inline_inb_null_null), + TEST_CASE_ST(ut_setup, ut_teardown, + test_ipsec_inline_outb_null_null), + TEST_CASE_ST(ut_setup, ut_teardown, + test_ipsec_replay_inb_inside_null_null), + TEST_CASE_ST(ut_setup, ut_teardown, + test_ipsec_replay_inb_outside_null_null), + TEST_CASE_ST(ut_setup, ut_teardown, + test_ipsec_replay_inb_repeat_null_null), + TEST_CASES_END() /**< NULL terminate unit test array */ + } +}; + +static int +test_ipsec(void) +{ + return unit_test_suite_runner(&ipsec_testsuite); +} + +REGISTER_TEST_COMMAND(ipsec_autotest, test_ipsec);