From patchwork Fri Mar 29 10:27:18 2019
From: Konstantin Ananyev <konstantin.ananyev@intel.com>
To: dev@dpdk.org
Cc: akhil.goyal@nxp.com, olivier.matz@6wind.com
Date: Fri, 29 Mar 2019 10:27:18 +0000
Message-Id: <20190329102726.27716-2-konstantin.ananyev@intel.com>
Subject: [dpdk-dev] [PATCH v4 1/9] mbuf: new function to generate raw Tx offload value

Operations that set or update bit-fields often cause compilers to
generate suboptimal code. To help avoid such situations for the
tx_offload field: introduce a new enum for the tx_offload bit-field
lengths and offsets, and a new function to generate a raw tx_offload
value.

Signed-off-by: Konstantin Ananyev
Acked-by: Akhil Goyal
---
 lib/librte_mbuf/rte_mbuf.h | 79 ++++++++++++++++++++++++++++++++++----
 1 file changed, 72 insertions(+), 7 deletions(-)

diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
index d961ccaf6..0b197e8ce 100644
--- a/lib/librte_mbuf/rte_mbuf.h
+++ b/lib/librte_mbuf/rte_mbuf.h
@@ -479,6 +479,31 @@ struct rte_mbuf_sched {
 	uint16_t reserved;   /**< Reserved. */
 }; /**< Hierarchical scheduler */
 
+/**
+ * Enum for the tx_offload bit-field lengths and offsets.
+ * Defines the layout of the rte_mbuf tx_offload field.
+ */
+enum {
+	RTE_MBUF_L2_LEN_BITS = 7,
+	RTE_MBUF_L3_LEN_BITS = 9,
+	RTE_MBUF_L4_LEN_BITS = 8,
+	RTE_MBUF_TSO_SEGSZ_BITS = 16,
+	RTE_MBUF_OUTL3_LEN_BITS = 9,
+	RTE_MBUF_OUTL2_LEN_BITS = 7,
+	RTE_MBUF_L2_LEN_OFS = 0,
+	RTE_MBUF_L3_LEN_OFS = RTE_MBUF_L2_LEN_OFS + RTE_MBUF_L2_LEN_BITS,
+	RTE_MBUF_L4_LEN_OFS = RTE_MBUF_L3_LEN_OFS + RTE_MBUF_L3_LEN_BITS,
+	RTE_MBUF_TSO_SEGSZ_OFS = RTE_MBUF_L4_LEN_OFS + RTE_MBUF_L4_LEN_BITS,
+	RTE_MBUF_OUTL3_LEN_OFS =
+		RTE_MBUF_TSO_SEGSZ_OFS + RTE_MBUF_TSO_SEGSZ_BITS,
+	RTE_MBUF_OUTL2_LEN_OFS =
+		RTE_MBUF_OUTL3_LEN_OFS + RTE_MBUF_OUTL3_LEN_BITS,
+	RTE_MBUF_TXOFLD_UNUSED_OFS =
+		RTE_MBUF_OUTL2_LEN_OFS + RTE_MBUF_OUTL2_LEN_BITS,
+	RTE_MBUF_TXOFLD_UNUSED_BITS =
+		sizeof(uint64_t) * CHAR_BIT - RTE_MBUF_TXOFLD_UNUSED_OFS,
+};
+
 /**
  * The generic rte_mbuf, containing a packet mbuf.
*/ @@ -640,19 +665,24 @@ struct rte_mbuf { uint64_t tx_offload; /**< combined for easy fetch */ __extension__ struct { - uint64_t l2_len:7; + uint64_t l2_len:RTE_MBUF_L2_LEN_BITS; /**< L2 (MAC) Header Length for non-tunneling pkt. * Outer_L4_len + ... + Inner_L2_len for tunneling pkt. */ - uint64_t l3_len:9; /**< L3 (IP) Header Length. */ - uint64_t l4_len:8; /**< L4 (TCP/UDP) Header Length. */ - uint64_t tso_segsz:16; /**< TCP TSO segment size */ + uint64_t l3_len:RTE_MBUF_L3_LEN_BITS; + /**< L3 (IP) Header Length. */ + uint64_t l4_len:RTE_MBUF_L4_LEN_BITS; + /**< L4 (TCP/UDP) Header Length. */ + uint64_t tso_segsz:RTE_MBUF_TSO_SEGSZ_BITS; + /**< TCP TSO segment size */ /* fields for TX offloading of tunnels */ - uint64_t outer_l3_len:9; /**< Outer L3 (IP) Hdr Length. */ - uint64_t outer_l2_len:7; /**< Outer L2 (MAC) Hdr Length. */ + uint64_t outer_l3_len:RTE_MBUF_OUTL3_LEN_BITS; + /**< Outer L3 (IP) Hdr Length. */ + uint64_t outer_l2_len:RTE_MBUF_OUTL2_LEN_BITS; + /**< Outer L2 (MAC) Hdr Length. */ - /* uint64_t unused:8; */ + /* uint64_t unused:RTE_MBUF_TXOFLD_UNUSED_BITS; */ }; }; @@ -2243,6 +2273,41 @@ static inline int rte_pktmbuf_chain(struct rte_mbuf *head, struct rte_mbuf *tail return 0; } +/* + * @warning + * @b EXPERIMENTAL: This API may change without prior notice. + * + * For given input values generate raw tx_offload value. + * @param il2 + * l2_len value. + * @param il3 + * l3_len value. + * @param il4 + * l4_len value. + * @param tso + * tso_segsz value. + * @param ol3 + * outer_l3_len value. + * @param ol2 + * outer_l2_len value. + * @param unused + * unused value. + * @return + * raw tx_offload value. + */ +static __rte_always_inline uint64_t +rte_mbuf_tx_offload(uint64_t il2, uint64_t il3, uint64_t il4, uint64_t tso, + uint64_t ol3, uint64_t ol2, uint64_t unused) +{ + return il2 << RTE_MBUF_L2_LEN_OFS | + il3 << RTE_MBUF_L3_LEN_OFS | + il4 << RTE_MBUF_L4_LEN_OFS | + tso << RTE_MBUF_TSO_SEGSZ_OFS | + ol3 << RTE_MBUF_OUTL3_LEN_OFS | + ol2 << RTE_MBUF_OUTL2_LEN_OFS | + unused << RTE_MBUF_TXOFLD_UNUSED_OFS; +} + /** * Validate general requirements for Tx offload in mbuf. 
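A short usage sketch (an editorial addition, not part of the patch): with
rte_mbuf_tx_offload() a driver or application can compose the whole
tx_offload word in one 64-bit store instead of six separate bit-field
writes. The header lengths below are illustrative values for an
Ethernet/IPv4/TCP packet.

	#include <rte_mbuf.h>

	static void
	set_ipv4_tcp_tx_offload(struct rte_mbuf *m)
	{
		/* l2_len = 14 (Ethernet), l3_len = 20 (IPv4),
		 * l4_len = 20 (TCP), no TSO, no tunnel headers,
		 * unused bits zero.
		 */
		m->tx_offload = rte_mbuf_tx_offload(14, 20, 20, 0, 0, 0, 0);
	}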
From patchwork Fri Mar 29 10:27:19 2019
From: Konstantin Ananyev <konstantin.ananyev@intel.com>
To: dev@dpdk.org
Cc: akhil.goyal@nxp.com, olivier.matz@6wind.com
Date: Fri, 29 Mar 2019 10:27:19 +0000
Message-Id: <20190329102726.27716-3-konstantin.ananyev@intel.com>
Subject: [dpdk-dev] [PATCH v4 2/9] ipsec: add Tx offload template into SA

Operations that set or update bit-fields often cause compilers to
generate suboptimal code. To avoid that negative effect, use a raw
tx_offload value plus a mask to update the l2_len and l3_len fields
within mbufs.
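An editorial sketch of the template idea, assuming the MBUF_MAX_* masks
defined in the diff below; the l2/l3 arguments are hypothetical. The SA
stores a mask that preserves every tx_offload field except l2_len/l3_len,
plus a value carrying their replacements, so the per-packet update becomes
a single AND plus OR:

	#include <rte_common.h>
	#include <rte_mbuf.h>

	#define MBUF_MAX_L2_LEN	RTE_LEN2MASK(RTE_MBUF_L2_LEN_BITS, uint64_t)
	#define MBUF_MAX_L3_LEN	RTE_LEN2MASK(RTE_MBUF_L3_LEN_BITS, uint64_t)

	static void
	apply_sa_l2l3(struct rte_mbuf *mb, uint64_t l2, uint64_t l3)
	{
		/* in the patch msk/val are built once per SA; inlined here */
		uint64_t msk = ~rte_mbuf_tx_offload(MBUF_MAX_L2_LEN,
				MBUF_MAX_L3_LEN, 0, 0, 0, 0, 0);
		uint64_t val = rte_mbuf_tx_offload(l2, l3, 0, 0, 0, 0, 0);

		mb->tx_offload = (mb->tx_offload & msk) | val;
	}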
Signed-off-by: Konstantin Ananyev
Acked-by: Akhil Goyal
---
 lib/librte_ipsec/sa.c | 23 ++++++++++++++++++-----
 lib/librte_ipsec/sa.h |  5 +++++
 2 files changed, 23 insertions(+), 5 deletions(-)

diff --git a/lib/librte_ipsec/sa.c b/lib/librte_ipsec/sa.c
index 2eb6bae07..747c37002 100644
--- a/lib/librte_ipsec/sa.c
+++ b/lib/librte_ipsec/sa.c
@@ -14,6 +14,9 @@
 #include "iph.h"
 #include "pad.h"
 
+#define MBUF_MAX_L2_LEN	RTE_LEN2MASK(RTE_MBUF_L2_LEN_BITS, uint64_t)
+#define MBUF_MAX_L3_LEN	RTE_LEN2MASK(RTE_MBUF_L3_LEN_BITS, uint64_t)
+
 /* some helper structures */
 struct crypto_xform {
 	struct rte_crypto_auth_xform *auth;
@@ -254,6 +257,11 @@ esp_outb_tun_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm)
 	sa->proto = prm->tun.next_proto;
 	sa->hdr_len = prm->tun.hdr_len;
 	sa->hdr_l3_off = prm->tun.hdr_l3_off;
+
+	/* update l2_len and l3_len fields for outbound mbuf */
+	sa->tx_offload.val = rte_mbuf_tx_offload(sa->hdr_l3_off,
+		sa->hdr_len - sa->hdr_l3_off, 0, 0, 0, 0, 0);
+
 	memcpy(sa->hdr, prm->tun.hdr, sa->hdr_len);
 
 	esp_outb_init(sa, sa->hdr_len);
@@ -324,6 +332,11 @@ esp_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
 	sa->spi = rte_cpu_to_be_32(prm->ipsec_xform.spi);
 	sa->salt = prm->ipsec_xform.salt;
 
+	/* preserve all values except l2_len and l3_len */
+	sa->tx_offload.msk =
+		~rte_mbuf_tx_offload(MBUF_MAX_L2_LEN, MBUF_MAX_L3_LEN,
+				0, 0, 0, 0, 0);
+
 	switch (sa->type & msk) {
 	case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TUNLV4):
 	case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TUNLV6):
@@ -549,7 +562,7 @@ esp_outb_tun_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
 
 	/* size of ipsec protected data */
 	l2len = mb->l2_len;
-	plen = mb->pkt_len - mb->l2_len;
+	plen = mb->pkt_len - l2len;
 
 	/* number of bytes to encrypt */
 	clen = plen + sizeof(*espt);
@@ -576,8 +589,8 @@ esp_outb_tun_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
 	pt = rte_pktmbuf_mtod_offset(ml, typeof(pt), pdofs);
 
 	/* update pkt l2/l3 len */
-	mb->l2_len = sa->hdr_l3_off;
-	mb->l3_len = sa->hdr_len - sa->hdr_l3_off;
+	mb->tx_offload = (mb->tx_offload & sa->tx_offload.msk) |
+		sa->tx_offload.val;
 
 	/* copy tunnel pkt header */
 	rte_memcpy(ph, sa->hdr, sa->hdr_len);
@@ -1124,8 +1137,8 @@ esp_inb_tun_single_pkt_process(struct rte_ipsec_sa *sa, struct rte_mbuf *mb,
 
 	/* reset mbuf metadata: L2/L3 len, packet type */
 	mb->packet_type = RTE_PTYPE_UNKNOWN;
-	mb->l2_len = 0;
-	mb->l3_len = 0;
+	mb->tx_offload = (mb->tx_offload & sa->tx_offload.msk) |
+		sa->tx_offload.val;
 
 	/* clear the PKT_RX_SEC_OFFLOAD flag if set */
 	mb->ol_flags &= ~(mb->ol_flags & PKT_RX_SEC_OFFLOAD);
diff --git a/lib/librte_ipsec/sa.h b/lib/librte_ipsec/sa.h
index c3a0d84bc..48c3c103a 100644
--- a/lib/librte_ipsec/sa.h
+++ b/lib/librte_ipsec/sa.h
@@ -87,6 +87,11 @@ struct rte_ipsec_sa {
 		union sym_op_ofslen cipher;
 		union sym_op_ofslen auth;
 	} ctp;
+	/* tx_offload template for tunnel mbuf */
+	struct {
+		uint64_t msk;
+		uint64_t val;
+	} tx_offload;
 	uint32_t salt;
 	uint8_t algo_type;
 	uint8_t proto;    /* next proto */
From patchwork Fri Mar 29 10:27:20 2019
From: Konstantin Ananyev <konstantin.ananyev@intel.com>
To: dev@dpdk.org
Cc: akhil.goyal@nxp.com, olivier.matz@6wind.com
Date: Fri, 29 Mar 2019 10:27:20 +0000
Message-Id: <20190329102726.27716-4-konstantin.ananyev@intel.com>
Subject: [dpdk-dev] [PATCH v4 3/9] ipsec: change the order in filling crypto op

Right now we first fill the crypto_sym_op part of the crypto op, then
fill the remaining crypto op fields in a separate cycle. It makes more
sense to fill the whole crypto op in one go instead.

Signed-off-by: Konstantin Ananyev
Acked-by: Akhil Goyal
---
 lib/librte_ipsec/sa.c | 46 ++++++++++++++++++++-----------------------
 1 file changed, 21 insertions(+), 25 deletions(-)

diff --git a/lib/librte_ipsec/sa.c b/lib/librte_ipsec/sa.c
index 747c37002..97c0f8c61 100644
--- a/lib/librte_ipsec/sa.c
+++ b/lib/librte_ipsec/sa.c
@@ -464,20 +464,17 @@ mbuf_bulk_copy(struct rte_mbuf *dst[], struct rte_mbuf * const src[],
  * setup crypto ops for LOOKASIDE_NONE (pure crypto) type of devices.
*/ static inline void -lksd_none_cop_prepare(const struct rte_ipsec_session *ss, - struct rte_mbuf *mb[], struct rte_crypto_op *cop[], uint16_t num) +lksd_none_cop_prepare(struct rte_crypto_op *cop, + struct rte_cryptodev_sym_session *cs, struct rte_mbuf *mb) { - uint32_t i; struct rte_crypto_sym_op *sop; - for (i = 0; i != num; i++) { - sop = cop[i]->sym; - cop[i]->type = RTE_CRYPTO_OP_TYPE_SYMMETRIC; - cop[i]->status = RTE_CRYPTO_OP_STATUS_NOT_PROCESSED; - cop[i]->sess_type = RTE_CRYPTO_OP_WITH_SESSION; - sop->m_src = mb[i]; - __rte_crypto_sym_op_attach_sym_session(sop, ss->crypto.ses); - } + sop = cop->sym; + cop->type = RTE_CRYPTO_OP_TYPE_SYMMETRIC; + cop->status = RTE_CRYPTO_OP_STATUS_NOT_PROCESSED; + cop->sess_type = RTE_CRYPTO_OP_WITH_SESSION; + sop->m_src = mb; + __rte_crypto_sym_op_attach_sym_session(sop, cs); } /* @@ -667,11 +664,13 @@ outb_tun_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[], uint64_t sqn; rte_be64_t sqc; struct rte_ipsec_sa *sa; + struct rte_cryptodev_sym_session *cs; union sym_op_data icv; uint64_t iv[IPSEC_MAX_IV_QWORD]; struct rte_mbuf *dr[num]; sa = ss->sa; + cs = ss->crypto.ses; n = num; sqn = esn_outb_update_sqn(sa, &n); @@ -689,10 +688,10 @@ outb_tun_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[], /* success, setup crypto op */ if (rc >= 0) { - mb[k] = mb[i]; outb_pkt_xprepare(sa, sqc, &icv); + lksd_none_cop_prepare(cop[k], cs, mb[i]); esp_outb_cop_prepare(cop[k], sa, iv, &icv, 0, rc); - k++; + mb[k++] = mb[i]; /* failure, put packet into the death-row */ } else { dr[i - k] = mb[i]; @@ -700,9 +699,6 @@ outb_tun_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[], } } - /* update cops */ - lksd_none_cop_prepare(ss, mb, cop, k); - /* copy not prepared mbufs beyond good ones */ if (k != n && k != 0) mbuf_bulk_copy(mb + k, dr, n - k); @@ -803,11 +799,13 @@ outb_trs_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[], uint64_t sqn; rte_be64_t sqc; struct rte_ipsec_sa *sa; + struct rte_cryptodev_sym_session *cs; union sym_op_data icv; uint64_t iv[IPSEC_MAX_IV_QWORD]; struct rte_mbuf *dr[num]; sa = ss->sa; + cs = ss->crypto.ses; n = num; sqn = esn_outb_update_sqn(sa, &n); @@ -829,10 +827,10 @@ outb_trs_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[], /* success, setup crypto op */ if (rc >= 0) { - mb[k] = mb[i]; outb_pkt_xprepare(sa, sqc, &icv); + lksd_none_cop_prepare(cop[k], cs, mb[i]); esp_outb_cop_prepare(cop[k], sa, iv, &icv, l2 + l3, rc); - k++; + mb[k++] = mb[i]; /* failure, put packet into the death-row */ } else { dr[i - k] = mb[i]; @@ -840,9 +838,6 @@ outb_trs_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[], } } - /* update cops */ - lksd_none_cop_prepare(ss, mb, cop, k); - /* copy not prepared mbufs beyond good ones */ if (k != n && k != 0) mbuf_bulk_copy(mb + k, dr, n - k); @@ -1021,11 +1016,13 @@ inb_pkt_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[], int32_t rc; uint32_t i, k, hl; struct rte_ipsec_sa *sa; + struct rte_cryptodev_sym_session *cs; struct replay_sqn *rsn; union sym_op_data icv; struct rte_mbuf *dr[num]; sa = ss->sa; + cs = ss->crypto.ses; rsn = rsn_acquire(sa); k = 0; @@ -1033,9 +1030,11 @@ inb_pkt_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[], hl = mb[i]->l2_len + mb[i]->l3_len; rc = esp_inb_tun_pkt_prepare(sa, rsn, mb[i], hl, &icv); - if (rc >= 0) + if (rc >= 0) { + lksd_none_cop_prepare(cop[k], cs, mb[i]); rc = esp_inb_tun_cop_prepare(cop[k], sa, mb[i], &icv, hl, rc); + } if (rc == 0) mb[k++] = 
 			mb[i];
@@ -1047,9 +1046,6 @@ inb_pkt_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
 
 	rsn_release(sa, rsn);
 
-	/* update cops */
-	lksd_none_cop_prepare(ss, mb, cop, k);
-
 	/* copy not prepared mbufs beyond good ones */
 	if (k != num && k != 0)
 		mbuf_bulk_copy(mb + k, dr, num - k);

From patchwork Fri Mar 29 10:27:21 2019
From: Konstantin Ananyev <konstantin.ananyev@intel.com>
To: dev@dpdk.org
Cc: akhil.goyal@nxp.com, olivier.matz@6wind.com
Date: Fri, 29 Mar 2019 10:27:21 +0000
Message-Id: <20190329102726.27716-5-konstantin.ananyev@intel.com>
Subject: [dpdk-dev] [PATCH v4 4/9] ipsec: change the way unprocessed mbufs are accounted

As was pointed out in one of the previous reviews, we can avoid
updating the contents of the mbuf array for successfully processed
packets. Instead, store the indexes of failed packets and move them
beyond the good ones later.

Signed-off-by: Konstantin Ananyev
Acked-by: Akhil Goyal
---
 lib/librte_ipsec/sa.c | 164 +++++++++++++++++++++++-------------------
 1 file changed, 92 insertions(+), 72 deletions(-)

diff --git a/lib/librte_ipsec/sa.c b/lib/librte_ipsec/sa.c
index 97c0f8c61..e728c6a28 100644
--- a/lib/librte_ipsec/sa.c
+++ b/lib/librte_ipsec/sa.c
@@ -450,14 +450,31 @@ rte_ipsec_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
 	return sz;
 }
 
+/*
+ * Move bad (unprocessed) mbufs beyond the good (processed) ones.
+ * bad_idx[] contains the indexes of bad mbufs inside the mb[].
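Editorial aside - a minimal standalone model of the compaction performed
by move_bad_mbufs() (introduced in the hunk below), with plain integers
standing in for mbuf pointers and purely hypothetical values:

	#include <stdint.h>
	#include <stdio.h>

	static void
	move_bad(uint32_t v[], const uint32_t bad_idx[], uint32_t n,
		uint32_t nb_bad)
	{
		uint32_t i, j, k, tmp[nb_bad];

		for (i = 0, j = 0, k = 0; i != n; i++) {
			if (j != nb_bad && i == bad_idx[j])
				tmp[j++] = v[i];  /* stash a bad one */
			else
				v[k++] = v[i];    /* compact good ones in order */
		}
		for (i = 0; i != nb_bad; i++)
			v[k + i] = tmp[i];        /* append bad ones at the tail */
	}

	int
	main(void)
	{
		uint32_t v[] = {10, 11, 12, 13, 14};
		uint32_t bad[] = {1, 3};          /* indexes of "failed" packets */

		move_bad(v, bad, 5, 2);
		for (uint32_t i = 0; i != 5; i++)
			printf("%u ", v[i]);      /* prints: 10 12 14 11 13 */
		printf("\n");
		return 0;
	}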
+ */ static inline void -mbuf_bulk_copy(struct rte_mbuf *dst[], struct rte_mbuf * const src[], - uint32_t num) +move_bad_mbufs(struct rte_mbuf *mb[], const uint32_t bad_idx[], uint32_t nb_mb, + uint32_t nb_bad) { - uint32_t i; + uint32_t i, j, k; + struct rte_mbuf *drb[nb_bad]; - for (i = 0; i != num; i++) - dst[i] = src[i]; + j = 0; + k = 0; + + /* copy bad ones into a temp place */ + for (i = 0; i != nb_mb; i++) { + if (j != nb_bad && i == bad_idx[j]) + drb[j++] = mb[i]; + else + mb[k++] = mb[i]; + } + + /* copy bad ones after the good ones */ + for (i = 0; i != nb_bad; i++) + mb[k + i] = drb[i]; } /* @@ -667,7 +684,7 @@ outb_tun_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[], struct rte_cryptodev_sym_session *cs; union sym_op_data icv; uint64_t iv[IPSEC_MAX_IV_QWORD]; - struct rte_mbuf *dr[num]; + uint32_t dr[num]; sa = ss->sa; cs = ss->crypto.ses; @@ -691,17 +708,17 @@ outb_tun_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[], outb_pkt_xprepare(sa, sqc, &icv); lksd_none_cop_prepare(cop[k], cs, mb[i]); esp_outb_cop_prepare(cop[k], sa, iv, &icv, 0, rc); - mb[k++] = mb[i]; + k++; /* failure, put packet into the death-row */ } else { - dr[i - k] = mb[i]; + dr[i - k] = i; rte_errno = -rc; } } /* copy not prepared mbufs beyond good ones */ if (k != n && k != 0) - mbuf_bulk_copy(mb + k, dr, n - k); + move_bad_mbufs(mb, dr, n, n - k); return k; } @@ -802,7 +819,7 @@ outb_trs_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[], struct rte_cryptodev_sym_session *cs; union sym_op_data icv; uint64_t iv[IPSEC_MAX_IV_QWORD]; - struct rte_mbuf *dr[num]; + uint32_t dr[num]; sa = ss->sa; cs = ss->crypto.ses; @@ -830,17 +847,17 @@ outb_trs_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[], outb_pkt_xprepare(sa, sqc, &icv); lksd_none_cop_prepare(cop[k], cs, mb[i]); esp_outb_cop_prepare(cop[k], sa, iv, &icv, l2 + l3, rc); - mb[k++] = mb[i]; + k++; /* failure, put packet into the death-row */ } else { - dr[i - k] = mb[i]; + dr[i - k] = i; rte_errno = -rc; } } /* copy not prepared mbufs beyond good ones */ if (k != n && k != 0) - mbuf_bulk_copy(mb + k, dr, n - k); + move_bad_mbufs(mb, dr, n, n - k); return k; } @@ -1019,7 +1036,7 @@ inb_pkt_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[], struct rte_cryptodev_sym_session *cs; struct replay_sqn *rsn; union sym_op_data icv; - struct rte_mbuf *dr[num]; + uint32_t dr[num]; sa = ss->sa; cs = ss->crypto.ses; @@ -1036,10 +1053,9 @@ inb_pkt_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[], hl, rc); } - if (rc == 0) - mb[k++] = mb[i]; - else { - dr[i - k] = mb[i]; + k += (rc == 0); + if (rc != 0) { + dr[i - k] = i; rte_errno = -rc; } } @@ -1048,7 +1064,7 @@ inb_pkt_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[], /* copy not prepared mbufs beyond good ones */ if (k != num && k != 0) - mbuf_bulk_copy(mb + k, dr, num - k); + move_bad_mbufs(mb, dr, num, num - k); return k; } @@ -1200,7 +1216,7 @@ esp_inb_trs_single_pkt_process(struct rte_ipsec_sa *sa, struct rte_mbuf *mb, */ static inline uint16_t esp_inb_rsn_update(struct rte_ipsec_sa *sa, const uint32_t sqn[], - struct rte_mbuf *mb[], struct rte_mbuf *dr[], uint16_t num) + uint32_t dr[], uint16_t num) { uint32_t i, k; struct replay_sqn *rsn; @@ -1210,9 +1226,9 @@ esp_inb_rsn_update(struct rte_ipsec_sa *sa, const uint32_t sqn[], k = 0; for (i = 0; i != num; i++) { if (esn_inb_update_sqn(rsn, sa, sqn[i]) == 0) - mb[k++] = mb[i]; + k++; else - dr[i - k] = mb[i]; + dr[i - k] = i; } rsn_update_finish(sa, 
rsn); @@ -1226,10 +1242,10 @@ static uint16_t inb_tun_pkt_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[], uint16_t num) { - uint32_t i, k; + uint32_t i, k, n; struct rte_ipsec_sa *sa; uint32_t sqn[num]; - struct rte_mbuf *dr[num]; + uint32_t dr[num]; sa = ss->sa; @@ -1239,23 +1255,27 @@ inb_tun_pkt_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[], for (i = 0; i != num; i++) { /* good packet */ if (esp_inb_tun_single_pkt_process(sa, mb[i], sqn + k) == 0) - mb[k++] = mb[i]; + k++; /* bad packet, will drop from furhter processing */ else - dr[i - k] = mb[i]; + dr[i - k] = i; } - /* update seq # and replay winow */ - k = esp_inb_rsn_update(sa, sqn, mb, dr + i - k, k); - /* handle unprocessed mbufs */ - if (k != num) { + if (k != num && k != 0) + move_bad_mbufs(mb, dr, num, num - k); + + /* update SQN and replay winow */ + n = esp_inb_rsn_update(sa, sqn, dr, k); + + /* handle mbufs with wrong SQN */ + if (n != k && n != 0) + move_bad_mbufs(mb, dr, k, k - n); + + if (n != num) rte_errno = EBADMSG; - if (k != 0) - mbuf_bulk_copy(mb + k, dr, num - k); - } - return k; + return n; } /* @@ -1265,10 +1285,10 @@ static uint16_t inb_trs_pkt_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[], uint16_t num) { - uint32_t i, k; + uint32_t i, k, n; uint32_t sqn[num]; struct rte_ipsec_sa *sa; - struct rte_mbuf *dr[num]; + uint32_t dr[num]; sa = ss->sa; @@ -1278,23 +1298,27 @@ inb_trs_pkt_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[], for (i = 0; i != num; i++) { /* good packet */ if (esp_inb_trs_single_pkt_process(sa, mb[i], sqn + k) == 0) - mb[k++] = mb[i]; + k++; /* bad packet, will drop from furhter processing */ else - dr[i - k] = mb[i]; + dr[i - k] = i; } - /* update seq # and replay winow */ - k = esp_inb_rsn_update(sa, sqn, mb, dr + i - k, k); - /* handle unprocessed mbufs */ - if (k != num) { + if (k != num && k != 0) + move_bad_mbufs(mb, dr, num, num - k); + + /* update SQN and replay winow */ + n = esp_inb_rsn_update(sa, sqn, dr, k); + + /* handle mbufs with wrong SQN */ + if (n != k && n != 0) + move_bad_mbufs(mb, dr, k, k - n); + + if (n != num) rte_errno = EBADMSG; - if (k != 0) - mbuf_bulk_copy(mb + k, dr, num - k); - } - return k; + return n; } /* @@ -1310,7 +1334,7 @@ outb_sqh_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[], uint32_t i, k, icv_len, *icv; struct rte_mbuf *ml; struct rte_ipsec_sa *sa; - struct rte_mbuf *dr[num]; + uint32_t dr[num]; sa = ss->sa; @@ -1323,16 +1347,16 @@ outb_sqh_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[], icv = rte_pktmbuf_mtod_offset(ml, void *, ml->data_len - icv_len); remove_sqh(icv, icv_len); - mb[k++] = mb[i]; + k++; } else - dr[i - k] = mb[i]; + dr[i - k] = i; } /* handle unprocessed mbufs */ if (k != num) { rte_errno = EBADMSG; if (k != 0) - mbuf_bulk_copy(mb + k, dr, num - k); + move_bad_mbufs(mb, dr, num, num - k); } return k; @@ -1352,23 +1376,23 @@ pkt_flag_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[], uint16_t num) { uint32_t i, k; - struct rte_mbuf *dr[num]; + uint32_t dr[num]; RTE_SET_USED(ss); k = 0; for (i = 0; i != num; i++) { if ((mb[i]->ol_flags & PKT_RX_SEC_OFFLOAD_FAILED) == 0) - mb[k++] = mb[i]; + k++; else - dr[i - k] = mb[i]; + dr[i - k] = i; } /* handle unprocessed mbufs */ if (k != num) { rte_errno = EBADMSG; if (k != 0) - mbuf_bulk_copy(mb + k, dr, num - k); + move_bad_mbufs(mb, dr, num, num - k); } return k; @@ -1409,7 +1433,7 @@ inline_outb_tun_pkt_process(const struct rte_ipsec_session *ss, struct 
 rte_ipsec_sa *sa;
 	union sym_op_data icv;
 	uint64_t iv[IPSEC_MAX_IV_QWORD];
-	struct rte_mbuf *dr[num];
+	uint32_t dr[num];
 
 	sa = ss->sa;
 
@@ -1427,22 +1451,20 @@ inline_outb_tun_pkt_process(const struct rte_ipsec_session *ss,
 		/* try to update the packet itself */
 		rc = esp_outb_tun_pkt_prepare(sa, sqc, iv, mb[i], &icv);
 
-		/* success, update mbuf fields */
-		if (rc >= 0)
-			mb[k++] = mb[i];
+		k += (rc >= 0);
+
 		/* failure, put packet into the death-row */
-		else {
-			dr[i - k] = mb[i];
+		if (rc < 0) {
+			dr[i - k] = i;
 			rte_errno = -rc;
 		}
 	}
 
-	inline_outb_mbuf_prepare(ss, mb, k);
-
 	/* copy not processed mbufs beyond good ones */
 	if (k != n && k != 0)
-		mbuf_bulk_copy(mb + k, dr, n - k);
+		move_bad_mbufs(mb, dr, n, n - k);
 
+	inline_outb_mbuf_prepare(ss, mb, k);
 	return k;
 }
 
@@ -1461,7 +1483,7 @@ inline_outb_trs_pkt_process(const struct rte_ipsec_session *ss,
 	struct rte_ipsec_sa *sa;
 	union sym_op_data icv;
 	uint64_t iv[IPSEC_MAX_IV_QWORD];
-	struct rte_mbuf *dr[num];
+	uint32_t dr[num];
 
 	sa = ss->sa;
 
@@ -1483,22 +1505,20 @@ inline_outb_trs_pkt_process(const struct rte_ipsec_session *ss,
 		rc = esp_outb_trs_pkt_prepare(sa, sqc, iv, mb[i],
 			l2, l3, &icv);
 
-		/* success, update mbuf fields */
-		if (rc >= 0)
-			mb[k++] = mb[i];
+		k += (rc >= 0);
+
 		/* failure, put packet into the death-row */
-		else {
-			dr[i - k] = mb[i];
+		if (rc < 0) {
+			dr[i - k] = i;
 			rte_errno = -rc;
 		}
 	}
 
-	inline_outb_mbuf_prepare(ss, mb, k);
-
 	/* copy not processed mbufs beyond good ones */
 	if (k != n && k != 0)
-		mbuf_bulk_copy(mb + k, dr, n - k);
+		move_bad_mbufs(mb, dr, n, n - k);
 
+	inline_outb_mbuf_prepare(ss, mb, k);
 	return k;
 }

From patchwork Fri Mar 29 10:27:22 2019
From: Konstantin Ananyev <konstantin.ananyev@intel.com>
To: dev@dpdk.org
Cc: akhil.goyal@nxp.com, olivier.matz@6wind.com
Date: Fri, 29 Mar 2019 10:27:22 +0000
Message-Id: <20190329102726.27716-6-konstantin.ananyev@intel.com>
Subject: [dpdk-dev] [PATCH v4 5/9] ipsec: move inbound and outbound code into different files

sa.c became too big, so it is split into three chunks:
- sa.c - control path related functions (init/fini, etc.)
- esp_inb.c - ESP inbound packet processing - esp_outb.c - ESP outbound packet processing Plus few changes in internal function names to follow the same code convention. No functional changes introduced. Signed-off-by: Konstantin Ananyev Acked-by: Akhil Goyal --- lib/librte_ipsec/Makefile | 2 + lib/librte_ipsec/crypto.h | 17 + lib/librte_ipsec/esp_inb.c | 439 ++++++++++++++ lib/librte_ipsec/esp_outb.c | 559 ++++++++++++++++++ lib/librte_ipsec/ipsec_sqn.h | 30 - lib/librte_ipsec/meson.build | 2 +- lib/librte_ipsec/misc.h | 41 ++ lib/librte_ipsec/sa.c | 1067 ++-------------------------------- lib/librte_ipsec/sa.h | 40 ++ 9 files changed, 1141 insertions(+), 1056 deletions(-) create mode 100644 lib/librte_ipsec/esp_inb.c create mode 100644 lib/librte_ipsec/esp_outb.c create mode 100644 lib/librte_ipsec/misc.h diff --git a/lib/librte_ipsec/Makefile b/lib/librte_ipsec/Makefile index 77506d6ad..e80926baa 100644 --- a/lib/librte_ipsec/Makefile +++ b/lib/librte_ipsec/Makefile @@ -16,6 +16,8 @@ EXPORT_MAP := rte_ipsec_version.map LIBABIVER := 1 # all source are stored in SRCS-y +SRCS-$(CONFIG_RTE_LIBRTE_IPSEC) += esp_inb.c +SRCS-$(CONFIG_RTE_LIBRTE_IPSEC) += esp_outb.c SRCS-$(CONFIG_RTE_LIBRTE_IPSEC) += sa.c SRCS-$(CONFIG_RTE_LIBRTE_IPSEC) += ses.c diff --git a/lib/librte_ipsec/crypto.h b/lib/librte_ipsec/crypto.h index 4f551e39c..3e9a8f41c 100644 --- a/lib/librte_ipsec/crypto.h +++ b/lib/librte_ipsec/crypto.h @@ -162,4 +162,21 @@ remove_sqh(void *picv, uint32_t icv_len) icv[i] = icv[i + 1]; } +/* + * setup crypto ops for LOOKASIDE_NONE (pure crypto) type of devices. + */ +static inline void +lksd_none_cop_prepare(struct rte_crypto_op *cop, + struct rte_cryptodev_sym_session *cs, struct rte_mbuf *mb) +{ + struct rte_crypto_sym_op *sop; + + sop = cop->sym; + cop->type = RTE_CRYPTO_OP_TYPE_SYMMETRIC; + cop->status = RTE_CRYPTO_OP_STATUS_NOT_PROCESSED; + cop->sess_type = RTE_CRYPTO_OP_WITH_SESSION; + sop->m_src = mb; + __rte_crypto_sym_op_attach_sym_session(sop, cs); +} + #endif /* _CRYPTO_H_ */ diff --git a/lib/librte_ipsec/esp_inb.c b/lib/librte_ipsec/esp_inb.c new file mode 100644 index 000000000..a775c7b0b --- /dev/null +++ b/lib/librte_ipsec/esp_inb.c @@ -0,0 +1,439 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2018 Intel Corporation + */ + +#include +#include +#include +#include +#include + +#include "sa.h" +#include "ipsec_sqn.h" +#include "crypto.h" +#include "iph.h" +#include "misc.h" +#include "pad.h" + +/* + * setup crypto op and crypto sym op for ESP inbound tunnel packet. 
+ */ +static inline int32_t +inb_cop_prepare(struct rte_crypto_op *cop, + const struct rte_ipsec_sa *sa, struct rte_mbuf *mb, + const union sym_op_data *icv, uint32_t pofs, uint32_t plen) +{ + struct rte_crypto_sym_op *sop; + struct aead_gcm_iv *gcm; + struct aesctr_cnt_blk *ctr; + uint64_t *ivc, *ivp; + uint32_t algo, clen; + + clen = plen - sa->ctp.cipher.length; + if ((int32_t)clen < 0 || (clen & (sa->pad_align - 1)) != 0) + return -EINVAL; + + algo = sa->algo_type; + + /* fill sym op fields */ + sop = cop->sym; + + switch (algo) { + case ALGO_TYPE_AES_GCM: + sop->aead.data.offset = pofs + sa->ctp.cipher.offset; + sop->aead.data.length = clen; + sop->aead.digest.data = icv->va; + sop->aead.digest.phys_addr = icv->pa; + sop->aead.aad.data = icv->va + sa->icv_len; + sop->aead.aad.phys_addr = icv->pa + sa->icv_len; + + /* fill AAD IV (located inside crypto op) */ + gcm = rte_crypto_op_ctod_offset(cop, struct aead_gcm_iv *, + sa->iv_ofs); + ivp = rte_pktmbuf_mtod_offset(mb, uint64_t *, + pofs + sizeof(struct esp_hdr)); + aead_gcm_iv_fill(gcm, ivp[0], sa->salt); + break; + case ALGO_TYPE_AES_CBC: + case ALGO_TYPE_3DES_CBC: + sop->cipher.data.offset = pofs + sa->ctp.cipher.offset; + sop->cipher.data.length = clen; + sop->auth.data.offset = pofs + sa->ctp.auth.offset; + sop->auth.data.length = plen - sa->ctp.auth.length; + sop->auth.digest.data = icv->va; + sop->auth.digest.phys_addr = icv->pa; + + /* copy iv from the input packet to the cop */ + ivc = rte_crypto_op_ctod_offset(cop, uint64_t *, sa->iv_ofs); + ivp = rte_pktmbuf_mtod_offset(mb, uint64_t *, + pofs + sizeof(struct esp_hdr)); + copy_iv(ivc, ivp, sa->iv_len); + break; + case ALGO_TYPE_AES_CTR: + sop->cipher.data.offset = pofs + sa->ctp.cipher.offset; + sop->cipher.data.length = clen; + sop->auth.data.offset = pofs + sa->ctp.auth.offset; + sop->auth.data.length = plen - sa->ctp.auth.length; + sop->auth.digest.data = icv->va; + sop->auth.digest.phys_addr = icv->pa; + + /* copy iv from the input packet to the cop */ + ctr = rte_crypto_op_ctod_offset(cop, struct aesctr_cnt_blk *, + sa->iv_ofs); + ivp = rte_pktmbuf_mtod_offset(mb, uint64_t *, + pofs + sizeof(struct esp_hdr)); + aes_ctr_cnt_blk_fill(ctr, ivp[0], sa->salt); + break; + case ALGO_TYPE_NULL: + sop->cipher.data.offset = pofs + sa->ctp.cipher.offset; + sop->cipher.data.length = clen; + sop->auth.data.offset = pofs + sa->ctp.auth.offset; + sop->auth.data.length = plen - sa->ctp.auth.length; + sop->auth.digest.data = icv->va; + sop->auth.digest.phys_addr = icv->pa; + break; + default: + return -EINVAL; + } + + return 0; +} + +/* + * for pure cryptodev (lookaside none) depending on SA settings, + * we might have to write some extra data to the packet. + */ +static inline void +inb_pkt_xprepare(const struct rte_ipsec_sa *sa, rte_be64_t sqc, + const union sym_op_data *icv) +{ + struct aead_gcm_aad *aad; + + /* insert SQN.hi between ESP trailer and ICV */ + if (sa->sqh_len != 0) + insert_sqh(sqn_hi32(sqc), icv->va, sa->icv_len); + + /* + * fill AAD fields, if any (aad fields are placed after icv), + * right now we support only one AEAD algorithm: AES-GCM. + */ + if (sa->aad_len != 0) { + aad = (struct aead_gcm_aad *)(icv->va + sa->icv_len); + aead_gcm_aad_fill(aad, sa->spi, sqc, IS_ESN(sa)); + } +} + +/* + * setup/update packet data and metadata for ESP inbound tunnel case. 
+ */ +static inline int32_t +inb_pkt_prepare(const struct rte_ipsec_sa *sa, const struct replay_sqn *rsn, + struct rte_mbuf *mb, uint32_t hlen, union sym_op_data *icv) +{ + int32_t rc; + uint64_t sqn; + uint32_t icv_ofs, plen; + struct rte_mbuf *ml; + struct esp_hdr *esph; + + esph = rte_pktmbuf_mtod_offset(mb, struct esp_hdr *, hlen); + + /* + * retrieve and reconstruct SQN, then check it, then + * convert it back into network byte order. + */ + sqn = rte_be_to_cpu_32(esph->seq); + if (IS_ESN(sa)) + sqn = reconstruct_esn(rsn->sqn, sqn, sa->replay.win_sz); + + rc = esn_inb_check_sqn(rsn, sa, sqn); + if (rc != 0) + return rc; + + sqn = rte_cpu_to_be_64(sqn); + + /* start packet manipulation */ + plen = mb->pkt_len; + plen = plen - hlen; + + ml = rte_pktmbuf_lastseg(mb); + icv_ofs = ml->data_len - sa->icv_len + sa->sqh_len; + + /* we have to allocate space for AAD somewhere, + * right now - just use free trailing space at the last segment. + * Would probably be more convenient to reserve space for AAD + * inside rte_crypto_op itself + * (again for IV space is already reserved inside cop). + */ + if (sa->aad_len + sa->sqh_len > rte_pktmbuf_tailroom(ml)) + return -ENOSPC; + + icv->va = rte_pktmbuf_mtod_offset(ml, void *, icv_ofs); + icv->pa = rte_pktmbuf_iova_offset(ml, icv_ofs); + + inb_pkt_xprepare(sa, sqn, icv); + return plen; +} + +/* + * setup/update packets and crypto ops for ESP inbound case. + */ +uint16_t +esp_inb_pkt_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[], + struct rte_crypto_op *cop[], uint16_t num) +{ + int32_t rc; + uint32_t i, k, hl; + struct rte_ipsec_sa *sa; + struct rte_cryptodev_sym_session *cs; + struct replay_sqn *rsn; + union sym_op_data icv; + uint32_t dr[num]; + + sa = ss->sa; + cs = ss->crypto.ses; + rsn = rsn_acquire(sa); + + k = 0; + for (i = 0; i != num; i++) { + + hl = mb[i]->l2_len + mb[i]->l3_len; + rc = inb_pkt_prepare(sa, rsn, mb[i], hl, &icv); + if (rc >= 0) { + lksd_none_cop_prepare(cop[k], cs, mb[i]); + rc = inb_cop_prepare(cop[k], sa, mb[i], &icv, hl, rc); + } + + k += (rc == 0); + if (rc != 0) { + dr[i - k] = i; + rte_errno = -rc; + } + } + + rsn_release(sa, rsn); + + /* copy not prepared mbufs beyond good ones */ + if (k != num && k != 0) + move_bad_mbufs(mb, dr, num, num - k); + + return k; +} + +/* + * process ESP inbound tunnel packet. + */ +static inline int +inb_tun_single_pkt_process(struct rte_ipsec_sa *sa, struct rte_mbuf *mb, + uint32_t *sqn) +{ + uint32_t hlen, icv_len, tlen; + struct esp_hdr *esph; + struct esp_tail *espt; + struct rte_mbuf *ml; + char *pd; + + if (mb->ol_flags & PKT_RX_SEC_OFFLOAD_FAILED) + return -EBADMSG; + + icv_len = sa->icv_len; + + ml = rte_pktmbuf_lastseg(mb); + espt = rte_pktmbuf_mtod_offset(ml, struct esp_tail *, + ml->data_len - icv_len - sizeof(*espt)); + + /* + * check padding and next proto. + * return an error if something is wrong. 
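Editorial aside - the check that follows relies on the RFC 4303 rule that
ESP pad bytes form the monotonic sequence 1, 2, 3, ..., so a single
memcmp() against a precomputed pattern (the library's esp_pad_bytes)
validates them. A minimal model with hypothetical values:

	#include <stdint.h>
	#include <stdio.h>
	#include <string.h>

	int
	main(void)
	{
		const uint8_t pattern[8] = {1, 2, 3, 4, 5, 6, 7, 8};
		/* decrypted payload tail: 4 pad bytes, pad_len = 4, next proto */
		const uint8_t tail[] = {1, 2, 3, 4, 4, 4};
		uint8_t pad_len = tail[4];
		const uint8_t *pd = tail;	/* start of padding */

		if (memcmp(pd, pattern, pad_len) == 0)
			printf("padding OK, next proto = %u\n", tail[5]);
		return 0;
	}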
+ */ + pd = (char *)espt - espt->pad_len; + if (espt->next_proto != sa->proto || + memcmp(pd, esp_pad_bytes, espt->pad_len)) + return -EINVAL; + + /* cut of ICV, ESP tail and padding bytes */ + tlen = icv_len + sizeof(*espt) + espt->pad_len; + ml->data_len -= tlen; + mb->pkt_len -= tlen; + + /* cut of L2/L3 headers, ESP header and IV */ + hlen = mb->l2_len + mb->l3_len; + esph = rte_pktmbuf_mtod_offset(mb, struct esp_hdr *, hlen); + rte_pktmbuf_adj(mb, hlen + sa->ctp.cipher.offset); + + /* retrieve SQN for later check */ + *sqn = rte_be_to_cpu_32(esph->seq); + + /* reset mbuf metatdata: L2/L3 len, packet type */ + mb->packet_type = RTE_PTYPE_UNKNOWN; + mb->tx_offload = (mb->tx_offload & sa->tx_offload.msk) | + sa->tx_offload.val; + + /* clear the PKT_RX_SEC_OFFLOAD flag if set */ + mb->ol_flags &= ~(mb->ol_flags & PKT_RX_SEC_OFFLOAD); + return 0; +} + +/* + * process ESP inbound transport packet. + */ +static inline int +inb_trs_single_pkt_process(struct rte_ipsec_sa *sa, struct rte_mbuf *mb, + uint32_t *sqn) +{ + uint32_t hlen, icv_len, l2len, l3len, tlen; + struct esp_hdr *esph; + struct esp_tail *espt; + struct rte_mbuf *ml; + char *np, *op, *pd; + + if (mb->ol_flags & PKT_RX_SEC_OFFLOAD_FAILED) + return -EBADMSG; + + icv_len = sa->icv_len; + + ml = rte_pktmbuf_lastseg(mb); + espt = rte_pktmbuf_mtod_offset(ml, struct esp_tail *, + ml->data_len - icv_len - sizeof(*espt)); + + /* check padding, return an error if something is wrong. */ + pd = (char *)espt - espt->pad_len; + if (memcmp(pd, esp_pad_bytes, espt->pad_len)) + return -EINVAL; + + /* cut of ICV, ESP tail and padding bytes */ + tlen = icv_len + sizeof(*espt) + espt->pad_len; + ml->data_len -= tlen; + mb->pkt_len -= tlen; + + /* retrieve SQN for later check */ + l2len = mb->l2_len; + l3len = mb->l3_len; + hlen = l2len + l3len; + op = rte_pktmbuf_mtod(mb, char *); + esph = (struct esp_hdr *)(op + hlen); + *sqn = rte_be_to_cpu_32(esph->seq); + + /* cut off ESP header and IV, update L3 header */ + np = rte_pktmbuf_adj(mb, sa->ctp.cipher.offset); + remove_esph(np, op, hlen); + update_trs_l3hdr(sa, np + l2len, mb->pkt_len, l2len, l3len, + espt->next_proto); + + /* reset mbuf packet type */ + mb->packet_type &= (RTE_PTYPE_L2_MASK | RTE_PTYPE_L3_MASK); + + /* clear the PKT_RX_SEC_OFFLOAD flag if set */ + mb->ol_flags &= ~(mb->ol_flags & PKT_RX_SEC_OFFLOAD); + return 0; +} + +/* + * for group of ESP inbound packets perform SQN check and update. + */ +static inline uint16_t +esp_inb_rsn_update(struct rte_ipsec_sa *sa, const uint32_t sqn[], + uint32_t dr[], uint16_t num) +{ + uint32_t i, k; + struct replay_sqn *rsn; + + rsn = rsn_update_start(sa); + + k = 0; + for (i = 0; i != num; i++) { + if (esn_inb_update_sqn(rsn, sa, sqn[i]) == 0) + k++; + else + dr[i - k] = i; + } + + rsn_update_finish(sa, rsn); + return k; +} + +/* + * process group of ESP inbound tunnel packets. 
+ */ +uint16_t +esp_inb_tun_pkt_process(const struct rte_ipsec_session *ss, + struct rte_mbuf *mb[], uint16_t num) +{ + uint32_t i, k, n; + struct rte_ipsec_sa *sa; + uint32_t sqn[num]; + uint32_t dr[num]; + + sa = ss->sa; + + /* process packets, extract seq numbers */ + + k = 0; + for (i = 0; i != num; i++) { + /* good packet */ + if (inb_tun_single_pkt_process(sa, mb[i], sqn + k) == 0) + k++; + /* bad packet, will drop from furhter processing */ + else + dr[i - k] = i; + } + + /* handle unprocessed mbufs */ + if (k != num && k != 0) + move_bad_mbufs(mb, dr, num, num - k); + + /* update SQN and replay winow */ + n = esp_inb_rsn_update(sa, sqn, dr, k); + + /* handle mbufs with wrong SQN */ + if (n != k && n != 0) + move_bad_mbufs(mb, dr, k, k - n); + + if (n != num) + rte_errno = EBADMSG; + + return n; +} + +/* + * process group of ESP inbound transport packets. + */ +uint16_t +esp_inb_trs_pkt_process(const struct rte_ipsec_session *ss, + struct rte_mbuf *mb[], uint16_t num) +{ + uint32_t i, k, n; + uint32_t sqn[num]; + struct rte_ipsec_sa *sa; + uint32_t dr[num]; + + sa = ss->sa; + + /* process packets, extract seq numbers */ + + k = 0; + for (i = 0; i != num; i++) { + /* good packet */ + if (inb_trs_single_pkt_process(sa, mb[i], sqn + k) == 0) + k++; + /* bad packet, will drop from furhter processing */ + else + dr[i - k] = i; + } + + /* handle unprocessed mbufs */ + if (k != num && k != 0) + move_bad_mbufs(mb, dr, num, num - k); + + /* update SQN and replay winow */ + n = esp_inb_rsn_update(sa, sqn, dr, k); + + /* handle mbufs with wrong SQN */ + if (n != k && n != 0) + move_bad_mbufs(mb, dr, k, k - n); + + if (n != num) + rte_errno = EBADMSG; + + return n; +} diff --git a/lib/librte_ipsec/esp_outb.c b/lib/librte_ipsec/esp_outb.c new file mode 100644 index 000000000..09bfb8658 --- /dev/null +++ b/lib/librte_ipsec/esp_outb.c @@ -0,0 +1,559 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2018 Intel Corporation + */ + +#include +#include +#include +#include +#include + +#include "sa.h" +#include "ipsec_sqn.h" +#include "crypto.h" +#include "iph.h" +#include "misc.h" +#include "pad.h" + +/* + * setup crypto op and crypto sym op for ESP outbound packet. 
+ */ +static inline void +outb_cop_prepare(struct rte_crypto_op *cop, + const struct rte_ipsec_sa *sa, const uint64_t ivp[IPSEC_MAX_IV_QWORD], + const union sym_op_data *icv, uint32_t hlen, uint32_t plen) +{ + struct rte_crypto_sym_op *sop; + struct aead_gcm_iv *gcm; + struct aesctr_cnt_blk *ctr; + uint32_t algo; + + algo = sa->algo_type; + + /* fill sym op fields */ + sop = cop->sym; + + switch (algo) { + case ALGO_TYPE_AES_CBC: + /* Cipher-Auth (AES-CBC *) case */ + case ALGO_TYPE_3DES_CBC: + /* Cipher-Auth (3DES-CBC *) case */ + case ALGO_TYPE_NULL: + /* NULL case */ + sop->cipher.data.offset = sa->ctp.cipher.offset + hlen; + sop->cipher.data.length = sa->ctp.cipher.length + plen; + sop->auth.data.offset = sa->ctp.auth.offset + hlen; + sop->auth.data.length = sa->ctp.auth.length + plen; + sop->auth.digest.data = icv->va; + sop->auth.digest.phys_addr = icv->pa; + break; + case ALGO_TYPE_AES_GCM: + /* AEAD (AES_GCM) case */ + sop->aead.data.offset = sa->ctp.cipher.offset + hlen; + sop->aead.data.length = sa->ctp.cipher.length + plen; + sop->aead.digest.data = icv->va; + sop->aead.digest.phys_addr = icv->pa; + sop->aead.aad.data = icv->va + sa->icv_len; + sop->aead.aad.phys_addr = icv->pa + sa->icv_len; + + /* fill AAD IV (located inside crypto op) */ + gcm = rte_crypto_op_ctod_offset(cop, struct aead_gcm_iv *, + sa->iv_ofs); + aead_gcm_iv_fill(gcm, ivp[0], sa->salt); + break; + case ALGO_TYPE_AES_CTR: + /* Cipher-Auth (AES-CTR *) case */ + sop->cipher.data.offset = sa->ctp.cipher.offset + hlen; + sop->cipher.data.length = sa->ctp.cipher.length + plen; + sop->auth.data.offset = sa->ctp.auth.offset + hlen; + sop->auth.data.length = sa->ctp.auth.length + plen; + sop->auth.digest.data = icv->va; + sop->auth.digest.phys_addr = icv->pa; + + ctr = rte_crypto_op_ctod_offset(cop, struct aesctr_cnt_blk *, + sa->iv_ofs); + aes_ctr_cnt_blk_fill(ctr, ivp[0], sa->salt); + break; + } +} + +/* + * setup/update packet data and metadata for ESP outbound tunnel case. 
+ */ +static inline int32_t +outb_tun_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc, + const uint64_t ivp[IPSEC_MAX_IV_QWORD], struct rte_mbuf *mb, + union sym_op_data *icv) +{ + uint32_t clen, hlen, l2len, pdlen, pdofs, plen, tlen; + struct rte_mbuf *ml; + struct esp_hdr *esph; + struct esp_tail *espt; + char *ph, *pt; + uint64_t *iv; + + /* calculate extra header space required */ + hlen = sa->hdr_len + sa->iv_len + sizeof(*esph); + + /* size of ipsec protected data */ + l2len = mb->l2_len; + plen = mb->pkt_len - l2len; + + /* number of bytes to encrypt */ + clen = plen + sizeof(*espt); + clen = RTE_ALIGN_CEIL(clen, sa->pad_align); + + /* pad length + esp tail */ + pdlen = clen - plen; + tlen = pdlen + sa->icv_len; + + /* do append and prepend */ + ml = rte_pktmbuf_lastseg(mb); + if (tlen + sa->sqh_len + sa->aad_len > rte_pktmbuf_tailroom(ml)) + return -ENOSPC; + + /* prepend header */ + ph = rte_pktmbuf_prepend(mb, hlen - l2len); + if (ph == NULL) + return -ENOSPC; + + /* append tail */ + pdofs = ml->data_len; + ml->data_len += tlen; + mb->pkt_len += tlen; + pt = rte_pktmbuf_mtod_offset(ml, typeof(pt), pdofs); + + /* update pkt l2/l3 len */ + mb->tx_offload = (mb->tx_offload & sa->tx_offload.msk) | + sa->tx_offload.val; + + /* copy tunnel pkt header */ + rte_memcpy(ph, sa->hdr, sa->hdr_len); + + /* update original and new ip header fields */ + update_tun_l3hdr(sa, ph + sa->hdr_l3_off, mb->pkt_len, sa->hdr_l3_off, + sqn_low16(sqc)); + + /* update spi, seqn and iv */ + esph = (struct esp_hdr *)(ph + sa->hdr_len); + iv = (uint64_t *)(esph + 1); + copy_iv(iv, ivp, sa->iv_len); + + esph->spi = sa->spi; + esph->seq = sqn_low32(sqc); + + /* offset for ICV */ + pdofs += pdlen + sa->sqh_len; + + /* pad length */ + pdlen -= sizeof(*espt); + + /* copy padding data */ + rte_memcpy(pt, esp_pad_bytes, pdlen); + + /* update esp trailer */ + espt = (struct esp_tail *)(pt + pdlen); + espt->pad_len = pdlen; + espt->next_proto = sa->proto; + + icv->va = rte_pktmbuf_mtod_offset(ml, void *, pdofs); + icv->pa = rte_pktmbuf_iova_offset(ml, pdofs); + + return clen; +} + +/* + * for pure cryptodev (lookaside none) depending on SA settings, + * we might have to write some extra data to the packet. + */ +static inline void +outb_pkt_xprepare(const struct rte_ipsec_sa *sa, rte_be64_t sqc, + const union sym_op_data *icv) +{ + uint32_t *psqh; + struct aead_gcm_aad *aad; + + /* insert SQN.hi between ESP trailer and ICV */ + if (sa->sqh_len != 0) { + psqh = (uint32_t *)(icv->va - sa->sqh_len); + psqh[0] = sqn_hi32(sqc); + } + + /* + * fill IV and AAD fields, if any (aad fields are placed after icv), + * right now we support only one AEAD algorithm: AES-GCM . + */ + if (sa->aad_len != 0) { + aad = (struct aead_gcm_aad *)(icv->va + sa->icv_len); + aead_gcm_aad_fill(aad, sa->spi, sqc, IS_ESN(sa)); + } +} + +/* + * setup/update packets and crypto ops for ESP outbound tunnel case. 
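Editorial aside - the trailer sizing done by outb_tun_pkt_prepare() above,
worked through with hypothetical numbers (100-byte payload, pad_align of
16 as for AES-CBC, 2-byte esp_tail, 16-byte ICV):

	#include <stdint.h>
	#include <stdio.h>

	#define ALIGN_CEIL(v, a)	(((v) + (a) - 1) / (a) * (a))

	int
	main(void)
	{
		uint32_t plen = 100;	/* bytes to protect (pkt_len - l2_len) */
		uint32_t clen = ALIGN_CEIL(plen + 2, 16);  /* encrypt 112 B */
		uint32_t pdlen = clen - plen;  /* 12 B: 10 pad + 2 B esp_tail */
		uint32_t tlen = pdlen + 16;    /* 28 B appended to the last seg */

		printf("clen=%u pdlen=%u tlen=%u\n", clen, pdlen, tlen);
		return 0;
	}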
+ */ +uint16_t +esp_outb_tun_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[], + struct rte_crypto_op *cop[], uint16_t num) +{ + int32_t rc; + uint32_t i, k, n; + uint64_t sqn; + rte_be64_t sqc; + struct rte_ipsec_sa *sa; + struct rte_cryptodev_sym_session *cs; + union sym_op_data icv; + uint64_t iv[IPSEC_MAX_IV_QWORD]; + uint32_t dr[num]; + + sa = ss->sa; + cs = ss->crypto.ses; + + n = num; + sqn = esn_outb_update_sqn(sa, &n); + if (n != num) + rte_errno = EOVERFLOW; + + k = 0; + for (i = 0; i != n; i++) { + + sqc = rte_cpu_to_be_64(sqn + i); + gen_iv(iv, sqc); + + /* try to update the packet itself */ + rc = outb_tun_pkt_prepare(sa, sqc, iv, mb[i], &icv); + + /* success, setup crypto op */ + if (rc >= 0) { + outb_pkt_xprepare(sa, sqc, &icv); + lksd_none_cop_prepare(cop[k], cs, mb[i]); + outb_cop_prepare(cop[k], sa, iv, &icv, 0, rc); + k++; + /* failure, put packet into the death-row */ + } else { + dr[i - k] = i; + rte_errno = -rc; + } + } + + /* copy not prepared mbufs beyond good ones */ + if (k != n && k != 0) + move_bad_mbufs(mb, dr, n, n - k); + + return k; +} + +/* + * setup/update packet data and metadata for ESP outbound transport case. + */ +static inline int32_t +outb_trs_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc, + const uint64_t ivp[IPSEC_MAX_IV_QWORD], struct rte_mbuf *mb, + uint32_t l2len, uint32_t l3len, union sym_op_data *icv) +{ + uint8_t np; + uint32_t clen, hlen, pdlen, pdofs, plen, tlen, uhlen; + struct rte_mbuf *ml; + struct esp_hdr *esph; + struct esp_tail *espt; + char *ph, *pt; + uint64_t *iv; + + uhlen = l2len + l3len; + plen = mb->pkt_len - uhlen; + + /* calculate extra header space required */ + hlen = sa->iv_len + sizeof(*esph); + + /* number of bytes to encrypt */ + clen = plen + sizeof(*espt); + clen = RTE_ALIGN_CEIL(clen, sa->pad_align); + + /* pad length + esp tail */ + pdlen = clen - plen; + tlen = pdlen + sa->icv_len; + + /* do append and insert */ + ml = rte_pktmbuf_lastseg(mb); + if (tlen + sa->sqh_len + sa->aad_len > rte_pktmbuf_tailroom(ml)) + return -ENOSPC; + + /* prepend space for ESP header */ + ph = rte_pktmbuf_prepend(mb, hlen); + if (ph == NULL) + return -ENOSPC; + + /* append tail */ + pdofs = ml->data_len; + ml->data_len += tlen; + mb->pkt_len += tlen; + pt = rte_pktmbuf_mtod_offset(ml, typeof(pt), pdofs); + + /* shift L2/L3 headers */ + insert_esph(ph, ph + hlen, uhlen); + + /* update ip header fields */ + np = update_trs_l3hdr(sa, ph + l2len, mb->pkt_len, l2len, l3len, + IPPROTO_ESP); + + /* update spi, seqn and iv */ + esph = (struct esp_hdr *)(ph + uhlen); + iv = (uint64_t *)(esph + 1); + copy_iv(iv, ivp, sa->iv_len); + + esph->spi = sa->spi; + esph->seq = sqn_low32(sqc); + + /* offset for ICV */ + pdofs += pdlen + sa->sqh_len; + + /* pad length */ + pdlen -= sizeof(*espt); + + /* copy padding data */ + rte_memcpy(pt, esp_pad_bytes, pdlen); + + /* update esp trailer */ + espt = (struct esp_tail *)(pt + pdlen); + espt->pad_len = pdlen; + espt->next_proto = np; + + icv->va = rte_pktmbuf_mtod_offset(ml, void *, pdofs); + icv->pa = rte_pktmbuf_iova_offset(ml, pdofs); + + return clen; +} + +/* + * setup/update packets and crypto ops for ESP outbound transport case. 
+ */ +uint16_t +esp_outb_trs_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[], + struct rte_crypto_op *cop[], uint16_t num) +{ + int32_t rc; + uint32_t i, k, n, l2, l3; + uint64_t sqn; + rte_be64_t sqc; + struct rte_ipsec_sa *sa; + struct rte_cryptodev_sym_session *cs; + union sym_op_data icv; + uint64_t iv[IPSEC_MAX_IV_QWORD]; + uint32_t dr[num]; + + sa = ss->sa; + cs = ss->crypto.ses; + + n = num; + sqn = esn_outb_update_sqn(sa, &n); + if (n != num) + rte_errno = EOVERFLOW; + + k = 0; + for (i = 0; i != n; i++) { + + l2 = mb[i]->l2_len; + l3 = mb[i]->l3_len; + + sqc = rte_cpu_to_be_64(sqn + i); + gen_iv(iv, sqc); + + /* try to update the packet itself */ + rc = outb_trs_pkt_prepare(sa, sqc, iv, mb[i], l2, l3, &icv); + + /* success, setup crypto op */ + if (rc >= 0) { + outb_pkt_xprepare(sa, sqc, &icv); + lksd_none_cop_prepare(cop[k], cs, mb[i]); + outb_cop_prepare(cop[k], sa, iv, &icv, l2 + l3, rc); + k++; + /* failure, put packet into the death-row */ + } else { + dr[i - k] = i; + rte_errno = -rc; + } + } + + /* copy not prepared mbufs beyond good ones */ + if (k != n && k != 0) + move_bad_mbufs(mb, dr, n, n - k); + + return k; +} + +/* + * process outbound packets for SA with ESN support, + * for algorithms that require SQN.hibits to be implictly included + * into digest computation. + * In that case we have to move ICV bytes back to their proper place. + */ +uint16_t +esp_outb_sqh_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[], + uint16_t num) +{ + uint32_t i, k, icv_len, *icv; + struct rte_mbuf *ml; + struct rte_ipsec_sa *sa; + uint32_t dr[num]; + + sa = ss->sa; + + k = 0; + icv_len = sa->icv_len; + + for (i = 0; i != num; i++) { + if ((mb[i]->ol_flags & PKT_RX_SEC_OFFLOAD_FAILED) == 0) { + ml = rte_pktmbuf_lastseg(mb[i]); + icv = rte_pktmbuf_mtod_offset(ml, void *, + ml->data_len - icv_len); + remove_sqh(icv, icv_len); + k++; + } else + dr[i - k] = i; + } + + /* handle unprocessed mbufs */ + if (k != num) { + rte_errno = EBADMSG; + if (k != 0) + move_bad_mbufs(mb, dr, num, num - k); + } + + return k; +} + +/* + * prepare packets for inline ipsec processing: + * set ol_flags and attach metadata. + */ +static inline void +inline_outb_mbuf_prepare(const struct rte_ipsec_session *ss, + struct rte_mbuf *mb[], uint16_t num) +{ + uint32_t i, ol_flags; + + ol_flags = ss->security.ol_flags & RTE_SECURITY_TX_OLOAD_NEED_MDATA; + for (i = 0; i != num; i++) { + + mb[i]->ol_flags |= PKT_TX_SEC_OFFLOAD; + if (ol_flags != 0) + rte_security_set_pkt_metadata(ss->security.ctx, + ss->security.ses, mb[i], NULL); + } +} + +/* + * process group of ESP outbound tunnel packets destined for + * INLINE_CRYPTO type of device. 
+ */ +uint16_t +inline_outb_tun_pkt_process(const struct rte_ipsec_session *ss, + struct rte_mbuf *mb[], uint16_t num) +{ + int32_t rc; + uint32_t i, k, n; + uint64_t sqn; + rte_be64_t sqc; + struct rte_ipsec_sa *sa; + union sym_op_data icv; + uint64_t iv[IPSEC_MAX_IV_QWORD]; + uint32_t dr[num]; + + sa = ss->sa; + + n = num; + sqn = esn_outb_update_sqn(sa, &n); + if (n != num) + rte_errno = EOVERFLOW; + + k = 0; + for (i = 0; i != n; i++) { + + sqc = rte_cpu_to_be_64(sqn + i); + gen_iv(iv, sqc); + + /* try to update the packet itself */ + rc = outb_tun_pkt_prepare(sa, sqc, iv, mb[i], &icv); + + k += (rc >= 0); + + /* failure, put packet into the death-row */ + if (rc < 0) { + dr[i - k] = i; + rte_errno = -rc; + } + } + + /* copy not processed mbufs beyond good ones */ + if (k != n && k != 0) + move_bad_mbufs(mb, dr, n, n - k); + + inline_outb_mbuf_prepare(ss, mb, k); + return k; +} + +/* + * process group of ESP outbound transport packets destined for + * INLINE_CRYPTO type of device. + */ +uint16_t +inline_outb_trs_pkt_process(const struct rte_ipsec_session *ss, + struct rte_mbuf *mb[], uint16_t num) +{ + int32_t rc; + uint32_t i, k, n, l2, l3; + uint64_t sqn; + rte_be64_t sqc; + struct rte_ipsec_sa *sa; + union sym_op_data icv; + uint64_t iv[IPSEC_MAX_IV_QWORD]; + uint32_t dr[num]; + + sa = ss->sa; + + n = num; + sqn = esn_outb_update_sqn(sa, &n); + if (n != num) + rte_errno = EOVERFLOW; + + k = 0; + for (i = 0; i != n; i++) { + + l2 = mb[i]->l2_len; + l3 = mb[i]->l3_len; + + sqc = rte_cpu_to_be_64(sqn + i); + gen_iv(iv, sqc); + + /* try to update the packet itself */ + rc = outb_trs_pkt_prepare(sa, sqc, iv, mb[i], + l2, l3, &icv); + + k += (rc >= 0); + + /* failure, put packet into the death-row */ + if (rc < 0) { + dr[i - k] = i; + rte_errno = -rc; + } + } + + /* copy not processed mbufs beyond good ones */ + if (k != n && k != 0) + move_bad_mbufs(mb, dr, n, n - k); + + inline_outb_mbuf_prepare(ss, mb, k); + return k; +} + +/* + * outbound for RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL: + * actual processing is done by HW/PMD, just set flags and metadata. + */ +uint16_t +inline_proto_outb_pkt_process(const struct rte_ipsec_session *ss, + struct rte_mbuf *mb[], uint16_t num) +{ + inline_outb_mbuf_prepare(ss, mb, num); + return num; +} diff --git a/lib/librte_ipsec/ipsec_sqn.h b/lib/librte_ipsec/ipsec_sqn.h index a3ae7e2de..4ba079d75 100644 --- a/lib/librte_ipsec/ipsec_sqn.h +++ b/lib/librte_ipsec/ipsec_sqn.h @@ -56,21 +56,6 @@ sqn_low16(rte_be64_t sqn) #endif } -/* - * for given size, calculate required number of buckets. - */ -static uint32_t -replay_num_bucket(uint32_t wsz) -{ - uint32_t nb; - - nb = rte_align32pow2(RTE_ALIGN_MUL_CEIL(wsz, WINDOW_BUCKET_SIZE) / - WINDOW_BUCKET_SIZE); - nb = RTE_MAX(nb, (uint32_t)WINDOW_BUCKET_MIN); - - return nb; -} - /* * According to RFC4303 A2.1, determine the high-order bit of sequence number. * use 32bit arithmetic inside, return uint64_t. @@ -222,21 +207,6 @@ esn_inb_update_sqn(struct replay_sqn *rsn, const struct rte_ipsec_sa *sa, * between writer and readers. */ -/** - * Based on number of buckets calculated required size for the - * structure that holds replay window and sequence number (RSN) information. - */ -static size_t -rsn_size(uint32_t nb_bucket) -{ - size_t sz; - struct replay_sqn *rsn; - - sz = sizeof(*rsn) + nb_bucket * sizeof(rsn->window[0]); - sz = RTE_ALIGN_CEIL(sz, RTE_CACHE_LINE_SIZE); - return sz; -} - /** * Copy replay window and SQN. 
*/ diff --git a/lib/librte_ipsec/meson.build b/lib/librte_ipsec/meson.build index d2427b809..18fb2a143 100644 --- a/lib/librte_ipsec/meson.build +++ b/lib/librte_ipsec/meson.build @@ -3,7 +3,7 @@ allow_experimental_apis = true -sources=files('sa.c', 'ses.c') +sources=files('esp_inb.c', 'esp_outb.c', 'sa.c', 'ses.c') install_headers = files('rte_ipsec.h', 'rte_ipsec_group.h', 'rte_ipsec_sa.h') diff --git a/lib/librte_ipsec/misc.h b/lib/librte_ipsec/misc.h new file mode 100644 index 000000000..67a6be2aa --- /dev/null +++ b/lib/librte_ipsec/misc.h @@ -0,0 +1,41 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2018 Intel Corporation + */ + +#ifndef _MISC_H_ +#define _MISC_H_ + +/** + * @file misc.h + * Contains miscellaneous functions/structures/macros used internally + * by ipsec library. + */ + +/* + * Move bad (unprocessed) mbufs beyond the good (processed) ones. + * bad_idx[] contains the indexes of bad mbufs inside the mb[]. + */ +static inline void +move_bad_mbufs(struct rte_mbuf *mb[], const uint32_t bad_idx[], uint32_t nb_mb, + uint32_t nb_bad) +{ + uint32_t i, j, k; + struct rte_mbuf *drb[nb_bad]; + + j = 0; + k = 0; + + /* copy bad ones into a temp place */ + for (i = 0; i != nb_mb; i++) { + if (j != nb_bad && i == bad_idx[j]) + drb[j++] = mb[i]; + else + mb[k++] = mb[i]; + } + + /* copy bad ones after the good ones */ + for (i = 0; i != nb_bad; i++) + mb[k + i] = drb[i]; +} + +#endif /* _MISC_H_ */ diff --git a/lib/librte_ipsec/sa.c b/lib/librte_ipsec/sa.c index e728c6a28..846e317fe 100644 --- a/lib/librte_ipsec/sa.c +++ b/lib/librte_ipsec/sa.c @@ -12,6 +12,7 @@ #include "ipsec_sqn.h" #include "crypto.h" #include "iph.h" +#include "misc.h" #include "pad.h" #define MBUF_MAX_L2_LEN RTE_LEN2MASK(RTE_MBUF_L2_LEN_BITS, uint64_t) @@ -82,6 +83,36 @@ rte_ipsec_sa_type(const struct rte_ipsec_sa *sa) return sa->type; } +/** + * Based on the number of buckets, calculate the required size for the + * structure that holds replay window and sequence number (RSN) information. + */ +static size_t +rsn_size(uint32_t nb_bucket) +{ + size_t sz; + struct replay_sqn *rsn; + + sz = sizeof(*rsn) + nb_bucket * sizeof(rsn->window[0]); + sz = RTE_ALIGN_CEIL(sz, RTE_CACHE_LINE_SIZE); + return sz; +} + +/* + * for given size, calculate required number of buckets. + */ +static uint32_t +replay_num_bucket(uint32_t wsz) +{ + uint32_t nb; + + nb = rte_align32pow2(RTE_ALIGN_MUL_CEIL(wsz, WINDOW_BUCKET_SIZE) / + WINDOW_BUCKET_SIZE); + nb = RTE_MAX(nb, (uint32_t)WINDOW_BUCKET_MIN); + + return nb; +} + static int32_t ipsec_sa_size(uint64_t type, uint32_t *wnd_sz, uint32_t *nb_bucket) { @@ -450,625 +481,6 @@ rte_ipsec_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm, return sz; } -/* - * Move bad (unprocessed) mbufs beyond the good (processed) ones. - * bad_idx[] contains the indexes of bad mbufs inside the mb[]. - */ -static inline void -move_bad_mbufs(struct rte_mbuf *mb[], const uint32_t bad_idx[], uint32_t nb_mb, - uint32_t nb_bad) -{ - uint32_t i, j, k; - struct rte_mbuf *drb[nb_bad]; - - j = 0; - k = 0; - - /* copy bad ones into a temp place */ - for (i = 0; i != nb_mb; i++) { - if (j != nb_bad && i == bad_idx[j]) - drb[j++] = mb[i]; - else - mb[k++] = mb[i]; - } - - /* copy bad ones after the good ones */ - for (i = 0; i != nb_bad; i++) - mb[k + i] = drb[i]; -} - -/* - * setup crypto ops for LOOKASIDE_NONE (pure crypto) type of devices.
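A usage sketch for move_bad_mbufs(): callers classify packets in order, record the index of each failure in dr[], and compact the array once at the end. check_pkt() here is a hypothetical predicate, not part of the library:

static uint16_t
filter_burst(struct rte_mbuf *mb[], uint16_t num)
{
    uint32_t i, k;
    uint32_t dr[num];

    k = 0;
    for (i = 0; i != num; i++) {
        if (check_pkt(mb[i]) == 0)
            k++;           /* good packet keeps its slot */
        else
            dr[i - k] = i; /* remember the bad one's index */
    }

    /* afterwards: first k entries good, last num - k entries bad */
    if (k != num && k != 0)
        move_bad_mbufs(mb, dr, num, num - k);

    return k;
}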
- */ -static inline void -lksd_none_cop_prepare(struct rte_crypto_op *cop, - struct rte_cryptodev_sym_session *cs, struct rte_mbuf *mb) -{ - struct rte_crypto_sym_op *sop; - - sop = cop->sym; - cop->type = RTE_CRYPTO_OP_TYPE_SYMMETRIC; - cop->status = RTE_CRYPTO_OP_STATUS_NOT_PROCESSED; - cop->sess_type = RTE_CRYPTO_OP_WITH_SESSION; - sop->m_src = mb; - __rte_crypto_sym_op_attach_sym_session(sop, cs); -} - -/* - * setup crypto op and crypto sym op for ESP outbound packet. - */ -static inline void -esp_outb_cop_prepare(struct rte_crypto_op *cop, - const struct rte_ipsec_sa *sa, const uint64_t ivp[IPSEC_MAX_IV_QWORD], - const union sym_op_data *icv, uint32_t hlen, uint32_t plen) -{ - struct rte_crypto_sym_op *sop; - struct aead_gcm_iv *gcm; - struct aesctr_cnt_blk *ctr; - uint8_t algo_type = sa->algo_type; - - /* fill sym op fields */ - sop = cop->sym; - - switch (algo_type) { - case ALGO_TYPE_AES_CBC: - /* Cipher-Auth (AES-CBC *) case */ - case ALGO_TYPE_3DES_CBC: - /* Cipher-Auth (3DES-CBC *) case */ - case ALGO_TYPE_NULL: - /* NULL case */ - sop->cipher.data.offset = sa->ctp.cipher.offset + hlen; - sop->cipher.data.length = sa->ctp.cipher.length + plen; - sop->auth.data.offset = sa->ctp.auth.offset + hlen; - sop->auth.data.length = sa->ctp.auth.length + plen; - sop->auth.digest.data = icv->va; - sop->auth.digest.phys_addr = icv->pa; - break; - case ALGO_TYPE_AES_GCM: - /* AEAD (AES_GCM) case */ - sop->aead.data.offset = sa->ctp.cipher.offset + hlen; - sop->aead.data.length = sa->ctp.cipher.length + plen; - sop->aead.digest.data = icv->va; - sop->aead.digest.phys_addr = icv->pa; - sop->aead.aad.data = icv->va + sa->icv_len; - sop->aead.aad.phys_addr = icv->pa + sa->icv_len; - - /* fill AAD IV (located inside crypto op) */ - gcm = rte_crypto_op_ctod_offset(cop, struct aead_gcm_iv *, - sa->iv_ofs); - aead_gcm_iv_fill(gcm, ivp[0], sa->salt); - break; - case ALGO_TYPE_AES_CTR: - /* Cipher-Auth (AES-CTR *) case */ - sop->cipher.data.offset = sa->ctp.cipher.offset + hlen; - sop->cipher.data.length = sa->ctp.cipher.length + plen; - sop->auth.data.offset = sa->ctp.auth.offset + hlen; - sop->auth.data.length = sa->ctp.auth.length + plen; - sop->auth.digest.data = icv->va; - sop->auth.digest.phys_addr = icv->pa; - - ctr = rte_crypto_op_ctod_offset(cop, struct aesctr_cnt_blk *, - sa->iv_ofs); - aes_ctr_cnt_blk_fill(ctr, ivp[0], sa->salt); - break; - default: - break; - } -} - -/* - * setup/update packet data and metadata for ESP outbound tunnel case. 
- */ -static inline int32_t -esp_outb_tun_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc, - const uint64_t ivp[IPSEC_MAX_IV_QWORD], struct rte_mbuf *mb, - union sym_op_data *icv) -{ - uint32_t clen, hlen, l2len, pdlen, pdofs, plen, tlen; - struct rte_mbuf *ml; - struct esp_hdr *esph; - struct esp_tail *espt; - char *ph, *pt; - uint64_t *iv; - - /* calculate extra header space required */ - hlen = sa->hdr_len + sa->iv_len + sizeof(*esph); - - /* size of ipsec protected data */ - l2len = mb->l2_len; - plen = mb->pkt_len - l2len; - - /* number of bytes to encrypt */ - clen = plen + sizeof(*espt); - clen = RTE_ALIGN_CEIL(clen, sa->pad_align); - - /* pad length + esp tail */ - pdlen = clen - plen; - tlen = pdlen + sa->icv_len; - - /* do append and prepend */ - ml = rte_pktmbuf_lastseg(mb); - if (tlen + sa->sqh_len + sa->aad_len > rte_pktmbuf_tailroom(ml)) - return -ENOSPC; - - /* prepend header */ - ph = rte_pktmbuf_prepend(mb, hlen - l2len); - if (ph == NULL) - return -ENOSPC; - - /* append tail */ - pdofs = ml->data_len; - ml->data_len += tlen; - mb->pkt_len += tlen; - pt = rte_pktmbuf_mtod_offset(ml, typeof(pt), pdofs); - - /* update pkt l2/l3 len */ - mb->tx_offload = (mb->tx_offload & sa->tx_offload.msk) | - sa->tx_offload.val; - - /* copy tunnel pkt header */ - rte_memcpy(ph, sa->hdr, sa->hdr_len); - - /* update original and new ip header fields */ - update_tun_l3hdr(sa, ph + sa->hdr_l3_off, mb->pkt_len, sa->hdr_l3_off, - sqn_low16(sqc)); - - /* update spi, seqn and iv */ - esph = (struct esp_hdr *)(ph + sa->hdr_len); - iv = (uint64_t *)(esph + 1); - copy_iv(iv, ivp, sa->iv_len); - - esph->spi = sa->spi; - esph->seq = sqn_low32(sqc); - - /* offset for ICV */ - pdofs += pdlen + sa->sqh_len; - - /* pad length */ - pdlen -= sizeof(*espt); - - /* copy padding data */ - rte_memcpy(pt, esp_pad_bytes, pdlen); - - /* update esp trailer */ - espt = (struct esp_tail *)(pt + pdlen); - espt->pad_len = pdlen; - espt->next_proto = sa->proto; - - icv->va = rte_pktmbuf_mtod_offset(ml, void *, pdofs); - icv->pa = rte_pktmbuf_iova_offset(ml, pdofs); - - return clen; -} - -/* - * for pure cryptodev (lookaside none) depending on SA settings, - * we might have to write some extra data to the packet. - */ -static inline void -outb_pkt_xprepare(const struct rte_ipsec_sa *sa, rte_be64_t sqc, - const union sym_op_data *icv) -{ - uint32_t *psqh; - struct aead_gcm_aad *aad; - uint8_t algo_type = sa->algo_type; - - /* insert SQN.hi between ESP trailer and ICV */ - if (sa->sqh_len != 0) { - psqh = (uint32_t *)(icv->va - sa->sqh_len); - psqh[0] = sqn_hi32(sqc); - } - - /* - * fill IV and AAD fields, if any (aad fields are placed after icv), - * right now we support only one AEAD algorithm: AES-GCM . - */ - if (algo_type == ALGO_TYPE_AES_GCM) { - aad = (struct aead_gcm_aad *)(icv->va + sa->icv_len); - aead_gcm_aad_fill(aad, sa->spi, sqc, IS_ESN(sa)); - } -} - -/* - * setup/update packets and crypto ops for ESP outbound tunnel case. 
- */ -static uint16_t -outb_tun_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[], - struct rte_crypto_op *cop[], uint16_t num) -{ - int32_t rc; - uint32_t i, k, n; - uint64_t sqn; - rte_be64_t sqc; - struct rte_ipsec_sa *sa; - struct rte_cryptodev_sym_session *cs; - union sym_op_data icv; - uint64_t iv[IPSEC_MAX_IV_QWORD]; - uint32_t dr[num]; - - sa = ss->sa; - cs = ss->crypto.ses; - - n = num; - sqn = esn_outb_update_sqn(sa, &n); - if (n != num) - rte_errno = EOVERFLOW; - - k = 0; - for (i = 0; i != n; i++) { - - sqc = rte_cpu_to_be_64(sqn + i); - gen_iv(iv, sqc); - - /* try to update the packet itself */ - rc = esp_outb_tun_pkt_prepare(sa, sqc, iv, mb[i], &icv); - - /* success, setup crypto op */ - if (rc >= 0) { - outb_pkt_xprepare(sa, sqc, &icv); - lksd_none_cop_prepare(cop[k], cs, mb[i]); - esp_outb_cop_prepare(cop[k], sa, iv, &icv, 0, rc); - k++; - /* failure, put packet into the death-row */ - } else { - dr[i - k] = i; - rte_errno = -rc; - } - } - - /* copy not prepared mbufs beyond good ones */ - if (k != n && k != 0) - move_bad_mbufs(mb, dr, n, n - k); - - return k; -} - -/* - * setup/update packet data and metadata for ESP outbound transport case. - */ -static inline int32_t -esp_outb_trs_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc, - const uint64_t ivp[IPSEC_MAX_IV_QWORD], struct rte_mbuf *mb, - uint32_t l2len, uint32_t l3len, union sym_op_data *icv) -{ - uint8_t np; - uint32_t clen, hlen, pdlen, pdofs, plen, tlen, uhlen; - struct rte_mbuf *ml; - struct esp_hdr *esph; - struct esp_tail *espt; - char *ph, *pt; - uint64_t *iv; - - uhlen = l2len + l3len; - plen = mb->pkt_len - uhlen; - - /* calculate extra header space required */ - hlen = sa->iv_len + sizeof(*esph); - - /* number of bytes to encrypt */ - clen = plen + sizeof(*espt); - clen = RTE_ALIGN_CEIL(clen, sa->pad_align); - - /* pad length + esp tail */ - pdlen = clen - plen; - tlen = pdlen + sa->icv_len; - - /* do append and insert */ - ml = rte_pktmbuf_lastseg(mb); - if (tlen + sa->sqh_len + sa->aad_len > rte_pktmbuf_tailroom(ml)) - return -ENOSPC; - - /* prepend space for ESP header */ - ph = rte_pktmbuf_prepend(mb, hlen); - if (ph == NULL) - return -ENOSPC; - - /* append tail */ - pdofs = ml->data_len; - ml->data_len += tlen; - mb->pkt_len += tlen; - pt = rte_pktmbuf_mtod_offset(ml, typeof(pt), pdofs); - - /* shift L2/L3 headers */ - insert_esph(ph, ph + hlen, uhlen); - - /* update ip header fields */ - np = update_trs_l3hdr(sa, ph + l2len, mb->pkt_len, l2len, l3len, - IPPROTO_ESP); - - /* update spi, seqn and iv */ - esph = (struct esp_hdr *)(ph + uhlen); - iv = (uint64_t *)(esph + 1); - copy_iv(iv, ivp, sa->iv_len); - - esph->spi = sa->spi; - esph->seq = sqn_low32(sqc); - - /* offset for ICV */ - pdofs += pdlen + sa->sqh_len; - - /* pad length */ - pdlen -= sizeof(*espt); - - /* copy padding data */ - rte_memcpy(pt, esp_pad_bytes, pdlen); - - /* update esp trailer */ - espt = (struct esp_tail *)(pt + pdlen); - espt->pad_len = pdlen; - espt->next_proto = np; - - icv->va = rte_pktmbuf_mtod_offset(ml, void *, pdofs); - icv->pa = rte_pktmbuf_iova_offset(ml, pdofs); - - return clen; -} - -/* - * setup/update packets and crypto ops for ESP outbound transport case. 
- */ -static uint16_t -outb_trs_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[], - struct rte_crypto_op *cop[], uint16_t num) -{ - int32_t rc; - uint32_t i, k, n, l2, l3; - uint64_t sqn; - rte_be64_t sqc; - struct rte_ipsec_sa *sa; - struct rte_cryptodev_sym_session *cs; - union sym_op_data icv; - uint64_t iv[IPSEC_MAX_IV_QWORD]; - uint32_t dr[num]; - - sa = ss->sa; - cs = ss->crypto.ses; - - n = num; - sqn = esn_outb_update_sqn(sa, &n); - if (n != num) - rte_errno = EOVERFLOW; - - k = 0; - for (i = 0; i != n; i++) { - - l2 = mb[i]->l2_len; - l3 = mb[i]->l3_len; - - sqc = rte_cpu_to_be_64(sqn + i); - gen_iv(iv, sqc); - - /* try to update the packet itself */ - rc = esp_outb_trs_pkt_prepare(sa, sqc, iv, mb[i], - l2, l3, &icv); - - /* success, setup crypto op */ - if (rc >= 0) { - outb_pkt_xprepare(sa, sqc, &icv); - lksd_none_cop_prepare(cop[k], cs, mb[i]); - esp_outb_cop_prepare(cop[k], sa, iv, &icv, l2 + l3, rc); - k++; - /* failure, put packet into the death-row */ - } else { - dr[i - k] = i; - rte_errno = -rc; - } - } - - /* copy not prepared mbufs beyond good ones */ - if (k != n && k != 0) - move_bad_mbufs(mb, dr, n, n - k); - - return k; -} - -/* - * setup crypto op and crypto sym op for ESP inbound tunnel packet. - */ -static inline int32_t -esp_inb_tun_cop_prepare(struct rte_crypto_op *cop, - const struct rte_ipsec_sa *sa, struct rte_mbuf *mb, - const union sym_op_data *icv, uint32_t pofs, uint32_t plen) -{ - struct rte_crypto_sym_op *sop; - struct aead_gcm_iv *gcm; - struct aesctr_cnt_blk *ctr; - uint64_t *ivc, *ivp; - uint32_t clen; - uint8_t algo_type = sa->algo_type; - - clen = plen - sa->ctp.cipher.length; - if ((int32_t)clen < 0 || (clen & (sa->pad_align - 1)) != 0) - return -EINVAL; - - /* fill sym op fields */ - sop = cop->sym; - - switch (algo_type) { - case ALGO_TYPE_AES_GCM: - sop->aead.data.offset = pofs + sa->ctp.cipher.offset; - sop->aead.data.length = clen; - sop->aead.digest.data = icv->va; - sop->aead.digest.phys_addr = icv->pa; - sop->aead.aad.data = icv->va + sa->icv_len; - sop->aead.aad.phys_addr = icv->pa + sa->icv_len; - - /* fill AAD IV (located inside crypto op) */ - gcm = rte_crypto_op_ctod_offset(cop, struct aead_gcm_iv *, - sa->iv_ofs); - ivp = rte_pktmbuf_mtod_offset(mb, uint64_t *, - pofs + sizeof(struct esp_hdr)); - aead_gcm_iv_fill(gcm, ivp[0], sa->salt); - break; - case ALGO_TYPE_AES_CBC: - case ALGO_TYPE_3DES_CBC: - sop->cipher.data.offset = pofs + sa->ctp.cipher.offset; - sop->cipher.data.length = clen; - sop->auth.data.offset = pofs + sa->ctp.auth.offset; - sop->auth.data.length = plen - sa->ctp.auth.length; - sop->auth.digest.data = icv->va; - sop->auth.digest.phys_addr = icv->pa; - - /* copy iv from the input packet to the cop */ - ivc = rte_crypto_op_ctod_offset(cop, uint64_t *, sa->iv_ofs); - ivp = rte_pktmbuf_mtod_offset(mb, uint64_t *, - pofs + sizeof(struct esp_hdr)); - copy_iv(ivc, ivp, sa->iv_len); - break; - case ALGO_TYPE_AES_CTR: - sop->cipher.data.offset = pofs + sa->ctp.cipher.offset; - sop->cipher.data.length = clen; - sop->auth.data.offset = pofs + sa->ctp.auth.offset; - sop->auth.data.length = plen - sa->ctp.auth.length; - sop->auth.digest.data = icv->va; - sop->auth.digest.phys_addr = icv->pa; - - /* copy iv from the input packet to the cop */ - ctr = rte_crypto_op_ctod_offset(cop, struct aesctr_cnt_blk *, - sa->iv_ofs); - ivp = rte_pktmbuf_mtod_offset(mb, uint64_t *, - pofs + sizeof(struct esp_hdr)); - aes_ctr_cnt_blk_fill(ctr, ivp[0], sa->salt); - break; - case ALGO_TYPE_NULL: - sop->cipher.data.offset = pofs + 
sa->ctp.cipher.offset; - sop->cipher.data.length = clen; - sop->auth.data.offset = pofs + sa->ctp.auth.offset; - sop->auth.data.length = plen - sa->ctp.auth.length; - sop->auth.digest.data = icv->va; - sop->auth.digest.phys_addr = icv->pa; - break; - - default: - return -EINVAL; - } - - return 0; -} - -/* - * for pure cryptodev (lookaside none) depending on SA settings, - * we might have to write some extra data to the packet. - */ -static inline void -inb_pkt_xprepare(const struct rte_ipsec_sa *sa, rte_be64_t sqc, - const union sym_op_data *icv) -{ - struct aead_gcm_aad *aad; - - /* insert SQN.hi between ESP trailer and ICV */ - if (sa->sqh_len != 0) - insert_sqh(sqn_hi32(sqc), icv->va, sa->icv_len); - - /* - * fill AAD fields, if any (aad fields are placed after icv), - * right now we support only one AEAD algorithm: AES-GCM. - */ - if (sa->aad_len != 0) { - aad = (struct aead_gcm_aad *)(icv->va + sa->icv_len); - aead_gcm_aad_fill(aad, sa->spi, sqc, IS_ESN(sa)); - } -} - -/* - * setup/update packet data and metadata for ESP inbound tunnel case. - */ -static inline int32_t -esp_inb_tun_pkt_prepare(const struct rte_ipsec_sa *sa, - const struct replay_sqn *rsn, struct rte_mbuf *mb, - uint32_t hlen, union sym_op_data *icv) -{ - int32_t rc; - uint64_t sqn; - uint32_t icv_ofs, plen; - struct rte_mbuf *ml; - struct esp_hdr *esph; - - esph = rte_pktmbuf_mtod_offset(mb, struct esp_hdr *, hlen); - - /* - * retrieve and reconstruct SQN, then check it, then - * convert it back into network byte order. - */ - sqn = rte_be_to_cpu_32(esph->seq); - if (IS_ESN(sa)) - sqn = reconstruct_esn(rsn->sqn, sqn, sa->replay.win_sz); - - rc = esn_inb_check_sqn(rsn, sa, sqn); - if (rc != 0) - return rc; - - sqn = rte_cpu_to_be_64(sqn); - - /* start packet manipulation */ - plen = mb->pkt_len; - plen = plen - hlen; - - ml = rte_pktmbuf_lastseg(mb); - icv_ofs = ml->data_len - sa->icv_len + sa->sqh_len; - - /* we have to allocate space for AAD somewhere, - * right now - just use free trailing space at the last segment. - * Would probably be more convenient to reserve space for AAD - * inside rte_crypto_op itself - * (again for IV space is already reserved inside cop). - */ - if (sa->aad_len + sa->sqh_len > rte_pktmbuf_tailroom(ml)) - return -ENOSPC; - - icv->va = rte_pktmbuf_mtod_offset(ml, void *, icv_ofs); - icv->pa = rte_pktmbuf_iova_offset(ml, icv_ofs); - - inb_pkt_xprepare(sa, sqn, icv); - return plen; -} - -/* - * setup/update packets and crypto ops for ESP inbound case. - */ -static uint16_t -inb_pkt_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[], - struct rte_crypto_op *cop[], uint16_t num) -{ - int32_t rc; - uint32_t i, k, hl; - struct rte_ipsec_sa *sa; - struct rte_cryptodev_sym_session *cs; - struct replay_sqn *rsn; - union sym_op_data icv; - uint32_t dr[num]; - - sa = ss->sa; - cs = ss->crypto.ses; - rsn = rsn_acquire(sa); - - k = 0; - for (i = 0; i != num; i++) { - - hl = mb[i]->l2_len + mb[i]->l3_len; - rc = esp_inb_tun_pkt_prepare(sa, rsn, mb[i], hl, &icv); - if (rc >= 0) { - lksd_none_cop_prepare(cop[k], cs, mb[i]); - rc = esp_inb_tun_cop_prepare(cop[k], sa, mb[i], &icv, - hl, rc); - } - - k += (rc == 0); - if (rc != 0) { - dr[i - k] = i; - rte_errno = -rc; - } - } - - rsn_release(sa, rsn); - - /* copy not prepared mbufs beyond good ones */ - if (k != num && k != 0) - move_bad_mbufs(mb, dr, num, num - k); - - return k; -} - /* * setup crypto ops for LOOKASIDE_PROTO type of devices. 
*/ @@ -1103,265 +515,6 @@ lksd_proto_prepare(const struct rte_ipsec_session *ss, return num; } -/* - * process ESP inbound tunnel packet. - */ -static inline int -esp_inb_tun_single_pkt_process(struct rte_ipsec_sa *sa, struct rte_mbuf *mb, - uint32_t *sqn) -{ - uint32_t hlen, icv_len, tlen; - struct esp_hdr *esph; - struct esp_tail *espt; - struct rte_mbuf *ml; - char *pd; - - if (mb->ol_flags & PKT_RX_SEC_OFFLOAD_FAILED) - return -EBADMSG; - - icv_len = sa->icv_len; - - ml = rte_pktmbuf_lastseg(mb); - espt = rte_pktmbuf_mtod_offset(ml, struct esp_tail *, - ml->data_len - icv_len - sizeof(*espt)); - - /* - * check padding and next proto. - * return an error if something is wrong. - */ - pd = (char *)espt - espt->pad_len; - if (espt->next_proto != sa->proto || - memcmp(pd, esp_pad_bytes, espt->pad_len)) - return -EINVAL; - - /* cut of ICV, ESP tail and padding bytes */ - tlen = icv_len + sizeof(*espt) + espt->pad_len; - ml->data_len -= tlen; - mb->pkt_len -= tlen; - - /* cut of L2/L3 headers, ESP header and IV */ - hlen = mb->l2_len + mb->l3_len; - esph = rte_pktmbuf_mtod_offset(mb, struct esp_hdr *, hlen); - rte_pktmbuf_adj(mb, hlen + sa->ctp.cipher.offset); - - /* retrieve SQN for later check */ - *sqn = rte_be_to_cpu_32(esph->seq); - - /* reset mbuf metatdata: L2/L3 len, packet type */ - mb->packet_type = RTE_PTYPE_UNKNOWN; - mb->tx_offload = (mb->tx_offload & sa->tx_offload.msk) | - sa->tx_offload.val; - - /* clear the PKT_RX_SEC_OFFLOAD flag if set */ - mb->ol_flags &= ~(mb->ol_flags & PKT_RX_SEC_OFFLOAD); - return 0; -} - -/* - * process ESP inbound transport packet. - */ -static inline int -esp_inb_trs_single_pkt_process(struct rte_ipsec_sa *sa, struct rte_mbuf *mb, - uint32_t *sqn) -{ - uint32_t hlen, icv_len, l2len, l3len, tlen; - struct esp_hdr *esph; - struct esp_tail *espt; - struct rte_mbuf *ml; - char *np, *op, *pd; - - if (mb->ol_flags & PKT_RX_SEC_OFFLOAD_FAILED) - return -EBADMSG; - - icv_len = sa->icv_len; - - ml = rte_pktmbuf_lastseg(mb); - espt = rte_pktmbuf_mtod_offset(ml, struct esp_tail *, - ml->data_len - icv_len - sizeof(*espt)); - - /* check padding, return an error if something is wrong. */ - pd = (char *)espt - espt->pad_len; - if (memcmp(pd, esp_pad_bytes, espt->pad_len)) - return -EINVAL; - - /* cut of ICV, ESP tail and padding bytes */ - tlen = icv_len + sizeof(*espt) + espt->pad_len; - ml->data_len -= tlen; - mb->pkt_len -= tlen; - - /* retrieve SQN for later check */ - l2len = mb->l2_len; - l3len = mb->l3_len; - hlen = l2len + l3len; - op = rte_pktmbuf_mtod(mb, char *); - esph = (struct esp_hdr *)(op + hlen); - *sqn = rte_be_to_cpu_32(esph->seq); - - /* cut off ESP header and IV, update L3 header */ - np = rte_pktmbuf_adj(mb, sa->ctp.cipher.offset); - remove_esph(np, op, hlen); - update_trs_l3hdr(sa, np + l2len, mb->pkt_len, l2len, l3len, - espt->next_proto); - - /* reset mbuf packet type */ - mb->packet_type &= (RTE_PTYPE_L2_MASK | RTE_PTYPE_L3_MASK); - - /* clear the PKT_RX_SEC_OFFLOAD flag if set */ - mb->ol_flags &= ~(mb->ol_flags & PKT_RX_SEC_OFFLOAD); - return 0; -} - -/* - * for group of ESP inbound packets perform SQN check and update. 
- */ -static inline uint16_t -esp_inb_rsn_update(struct rte_ipsec_sa *sa, const uint32_t sqn[], - uint32_t dr[], uint16_t num) -{ - uint32_t i, k; - struct replay_sqn *rsn; - - rsn = rsn_update_start(sa); - - k = 0; - for (i = 0; i != num; i++) { - if (esn_inb_update_sqn(rsn, sa, sqn[i]) == 0) - k++; - else - dr[i - k] = i; - } - - rsn_update_finish(sa, rsn); - return k; -} - -/* - * process group of ESP inbound tunnel packets. - */ -static uint16_t -inb_tun_pkt_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[], - uint16_t num) -{ - uint32_t i, k, n; - struct rte_ipsec_sa *sa; - uint32_t sqn[num]; - uint32_t dr[num]; - - sa = ss->sa; - - /* process packets, extract seq numbers */ - - k = 0; - for (i = 0; i != num; i++) { - /* good packet */ - if (esp_inb_tun_single_pkt_process(sa, mb[i], sqn + k) == 0) - k++; - /* bad packet, will drop from furhter processing */ - else - dr[i - k] = i; - } - - /* handle unprocessed mbufs */ - if (k != num && k != 0) - move_bad_mbufs(mb, dr, num, num - k); - - /* update SQN and replay winow */ - n = esp_inb_rsn_update(sa, sqn, dr, k); - - /* handle mbufs with wrong SQN */ - if (n != k && n != 0) - move_bad_mbufs(mb, dr, k, k - n); - - if (n != num) - rte_errno = EBADMSG; - - return n; -} - -/* - * process group of ESP inbound transport packets. - */ -static uint16_t -inb_trs_pkt_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[], - uint16_t num) -{ - uint32_t i, k, n; - uint32_t sqn[num]; - struct rte_ipsec_sa *sa; - uint32_t dr[num]; - - sa = ss->sa; - - /* process packets, extract seq numbers */ - - k = 0; - for (i = 0; i != num; i++) { - /* good packet */ - if (esp_inb_trs_single_pkt_process(sa, mb[i], sqn + k) == 0) - k++; - /* bad packet, will drop from furhter processing */ - else - dr[i - k] = i; - } - - /* handle unprocessed mbufs */ - if (k != num && k != 0) - move_bad_mbufs(mb, dr, num, num - k); - - /* update SQN and replay winow */ - n = esp_inb_rsn_update(sa, sqn, dr, k); - - /* handle mbufs with wrong SQN */ - if (n != k && n != 0) - move_bad_mbufs(mb, dr, k, k - n); - - if (n != num) - rte_errno = EBADMSG; - - return n; -} - -/* - * process outbound packets for SA with ESN support, - * for algorithms that require SQN.hibits to be implictly included - * into digest computation. - * In that case we have to move ICV bytes back to their proper place. - */ -static uint16_t -outb_sqh_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[], - uint16_t num) -{ - uint32_t i, k, icv_len, *icv; - struct rte_mbuf *ml; - struct rte_ipsec_sa *sa; - uint32_t dr[num]; - - sa = ss->sa; - - k = 0; - icv_len = sa->icv_len; - - for (i = 0; i != num; i++) { - if ((mb[i]->ol_flags & PKT_RX_SEC_OFFLOAD_FAILED) == 0) { - ml = rte_pktmbuf_lastseg(mb[i]); - icv = rte_pktmbuf_mtod_offset(ml, void *, - ml->data_len - icv_len); - remove_sqh(icv, icv_len); - k++; - } else - dr[i - k] = i; - } - - /* handle unprocessed mbufs */ - if (k != num) { - rte_errno = EBADMSG; - if (k != 0) - move_bad_mbufs(mb, dr, num, num - k); - } - - return k; -} - /* * simplest pkt process routine: * all actual processing is already done by HW/PMD, @@ -1398,142 +551,6 @@ pkt_flag_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[], return k; } -/* - * prepare packets for inline ipsec processing: - * set ol_flags and attach metadata. 
- */ -static inline void -inline_outb_mbuf_prepare(const struct rte_ipsec_session *ss, - struct rte_mbuf *mb[], uint16_t num) -{ - uint32_t i, ol_flags; - - ol_flags = ss->security.ol_flags & RTE_SECURITY_TX_OLOAD_NEED_MDATA; - for (i = 0; i != num; i++) { - - mb[i]->ol_flags |= PKT_TX_SEC_OFFLOAD; - if (ol_flags != 0) - rte_security_set_pkt_metadata(ss->security.ctx, - ss->security.ses, mb[i], NULL); - } -} - -/* - * process group of ESP outbound tunnel packets destined for - * INLINE_CRYPTO type of device. - */ -static uint16_t -inline_outb_tun_pkt_process(const struct rte_ipsec_session *ss, - struct rte_mbuf *mb[], uint16_t num) -{ - int32_t rc; - uint32_t i, k, n; - uint64_t sqn; - rte_be64_t sqc; - struct rte_ipsec_sa *sa; - union sym_op_data icv; - uint64_t iv[IPSEC_MAX_IV_QWORD]; - uint32_t dr[num]; - - sa = ss->sa; - - n = num; - sqn = esn_outb_update_sqn(sa, &n); - if (n != num) - rte_errno = EOVERFLOW; - - k = 0; - for (i = 0; i != n; i++) { - - sqc = rte_cpu_to_be_64(sqn + i); - gen_iv(iv, sqc); - - /* try to update the packet itself */ - rc = esp_outb_tun_pkt_prepare(sa, sqc, iv, mb[i], &icv); - - k += (rc >= 0); - - /* failure, put packet into the death-row */ - if (rc < 0) { - dr[i - k] = i; - rte_errno = -rc; - } - } - - /* copy not processed mbufs beyond good ones */ - if (k != n && k != 0) - move_bad_mbufs(mb, dr, n, n - k); - - inline_outb_mbuf_prepare(ss, mb, k); - return k; -} - -/* - * process group of ESP outbound transport packets destined for - * INLINE_CRYPTO type of device. - */ -static uint16_t -inline_outb_trs_pkt_process(const struct rte_ipsec_session *ss, - struct rte_mbuf *mb[], uint16_t num) -{ - int32_t rc; - uint32_t i, k, n, l2, l3; - uint64_t sqn; - rte_be64_t sqc; - struct rte_ipsec_sa *sa; - union sym_op_data icv; - uint64_t iv[IPSEC_MAX_IV_QWORD]; - uint32_t dr[num]; - - sa = ss->sa; - - n = num; - sqn = esn_outb_update_sqn(sa, &n); - if (n != num) - rte_errno = EOVERFLOW; - - k = 0; - for (i = 0; i != n; i++) { - - l2 = mb[i]->l2_len; - l3 = mb[i]->l3_len; - - sqc = rte_cpu_to_be_64(sqn + i); - gen_iv(iv, sqc); - - /* try to update the packet itself */ - rc = esp_outb_trs_pkt_prepare(sa, sqc, iv, mb[i], - l2, l3, &icv); - - k += (rc >= 0); - - /* failure, put packet into the death-row */ - if (rc < 0) { - dr[i - k] = i; - rte_errno = -rc; - } - } - - /* copy not processed mbufs beyond good ones */ - if (k != n && k != 0) - move_bad_mbufs(mb, dr, n, n - k); - - inline_outb_mbuf_prepare(ss, mb, k); - return k; -} - -/* - * outbound for RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL: - * actual processing is done by HW/PMD, just set flags and metadata. - */ -static uint16_t -outb_inline_proto_process(const struct rte_ipsec_session *ss, - struct rte_mbuf *mb[], uint16_t num) -{ - inline_outb_mbuf_prepare(ss, mb, num); - return num; -} - /* * Select packet processing function for session on LOOKASIDE_NONE * type of device. 
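The pf->prepare/pf->process pointers filled in below are what the public burst API dispatches through; a lookaside-none caller looks roughly like this (a sketch, assuming dev_id/qp_id were configured elsewhere, with error handling trimmed):

/* fill crypto ops for the burst (ends up in pf->prepare) */
k = rte_ipsec_pkt_crypto_prepare(ss, mb, cop, num);

/* hand the prepared ops to the crypto PMD */
k = rte_cryptodev_enqueue_burst(dev_id, qp_id, cop, k);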
@@ -1551,23 +568,23 @@ lksd_none_pkt_func_select(const struct rte_ipsec_sa *sa, switch (sa->type & msk) { case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TUNLV4): case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TUNLV6): - pf->prepare = inb_pkt_prepare; - pf->process = inb_tun_pkt_process; + pf->prepare = esp_inb_pkt_prepare; + pf->process = esp_inb_tun_pkt_process; break; case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TRANS): - pf->prepare = inb_pkt_prepare; - pf->process = inb_trs_pkt_process; + pf->prepare = esp_inb_pkt_prepare; + pf->process = esp_inb_trs_pkt_process; break; case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV4): case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV6): - pf->prepare = outb_tun_prepare; + pf->prepare = esp_outb_tun_prepare; pf->process = (sa->sqh_len != 0) ? - outb_sqh_process : pkt_flag_process; + esp_outb_sqh_process : pkt_flag_process; break; case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TRANS): - pf->prepare = outb_trs_prepare; + pf->prepare = esp_outb_trs_prepare; pf->process = (sa->sqh_len != 0) ? - outb_sqh_process : pkt_flag_process; + esp_outb_sqh_process : pkt_flag_process; break; default: rc = -ENOTSUP; @@ -1593,10 +610,10 @@ inline_crypto_pkt_func_select(const struct rte_ipsec_sa *sa, switch (sa->type & msk) { case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TUNLV4): case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TUNLV6): - pf->process = inb_tun_pkt_process; + pf->process = esp_inb_tun_pkt_process; break; case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TRANS): - pf->process = inb_trs_pkt_process; + pf->process = esp_inb_trs_pkt_process; break; case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV4): case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV6): @@ -1637,7 +654,7 @@ ipsec_sa_pkt_func_select(const struct rte_ipsec_session *ss, RTE_IPSEC_SATP_DIR_IB) pf->process = pkt_flag_process; else - pf->process = outb_inline_proto_process; + pf->process = inline_proto_outb_pkt_process; break; case RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL: pf->prepare = lksd_proto_prepare; diff --git a/lib/librte_ipsec/sa.h b/lib/librte_ipsec/sa.h index 48c3c103a..ffb5fb4f8 100644 --- a/lib/librte_ipsec/sa.h +++ b/lib/librte_ipsec/sa.h @@ -132,4 +132,44 @@ int ipsec_sa_pkt_func_select(const struct rte_ipsec_session *ss, const struct rte_ipsec_sa *sa, struct rte_ipsec_sa_pkt_func *pf); +/* inbound processing */ + +uint16_t +esp_inb_pkt_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[], + struct rte_crypto_op *cop[], uint16_t num); + +uint16_t +esp_inb_tun_pkt_process(const struct rte_ipsec_session *ss, + struct rte_mbuf *mb[], uint16_t num); + +uint16_t +esp_inb_trs_pkt_process(const struct rte_ipsec_session *ss, + struct rte_mbuf *mb[], uint16_t num); + +/* outbound processing */ + +uint16_t +esp_outb_tun_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[], + struct rte_crypto_op *cop[], uint16_t num); + +uint16_t +esp_outb_trs_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[], + struct rte_crypto_op *cop[], uint16_t num); + +uint16_t +esp_outb_sqh_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[], + uint16_t num); + +uint16_t +inline_outb_tun_pkt_process(const struct rte_ipsec_session *ss, + struct rte_mbuf *mb[], uint16_t num); + +uint16_t +inline_outb_trs_pkt_process(const struct rte_ipsec_session *ss, + struct rte_mbuf *mb[], uint16_t num); + +uint16_t +inline_proto_outb_pkt_process(const struct rte_ipsec_session *ss, + struct rte_mbuf *mb[], uint16_t num); + #endif /* 
_SA_H_ */ From patchwork Fri Mar 29 10:27:23 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Ananyev, Konstantin" X-Patchwork-Id: 51896 X-Patchwork-Delegate: gakhil@marvell.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id C32DF4F98; Fri, 29 Mar 2019 11:28:19 +0100 (CET) Received: from mga02.intel.com (mga02.intel.com [134.134.136.20]) by dpdk.org (Postfix) with ESMTP id CA03A4CA6 for ; Fri, 29 Mar 2019 11:28:07 +0100 (CET) X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga006.jf.intel.com ([10.7.209.51]) by orsmga101.jf.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 29 Mar 2019 03:28:07 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.60,284,1549958400"; d="scan'208";a="131234695" Received: from sivswdev08.ir.intel.com ([10.237.217.47]) by orsmga006.jf.intel.com with ESMTP; 29 Mar 2019 03:28:05 -0700 From: Konstantin Ananyev To: dev@dpdk.org Cc: akhil.goyal@nxp.com, olivier.matz@6wind.com, Konstantin Ananyev Date: Fri, 29 Mar 2019 10:27:23 +0000 Message-Id: <20190329102726.27716-7-konstantin.ananyev@intel.com> X-Mailer: git-send-email 2.18.0 In-Reply-To: <20190329102726.27716-1-konstantin.ananyev@intel.com> References: <20190326154320.29913-1-konstantin.ananyev@intel.com> <20190329102726.27716-1-konstantin.ananyev@intel.com> Subject: [dpdk-dev] [PATCH v4 6/9] ipsec: reorder packet check for esp inbound X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Right now the check for packet length and padding is done inside cop_prepare(). It makes sense to have all the necessary checks in one place at an early stage: inside pkt_prepare(). That allows us to simplify (and later, hopefully, optimize) the cop_prepare() part. Signed-off-by: Konstantin Ananyev Acked-by: Akhil Goyal --- lib/librte_ipsec/esp_inb.c | 41 +++++++++++++++++--------------------- 1 file changed, 18 insertions(+), 23 deletions(-) diff --git a/lib/librte_ipsec/esp_inb.c b/lib/librte_ipsec/esp_inb.c index a775c7b0b..8d1171556 100644 --- a/lib/librte_ipsec/esp_inb.c +++ b/lib/librte_ipsec/esp_inb.c @@ -18,7 +18,7 @@ /* * setup crypto op and crypto sym op for ESP inbound tunnel packet.
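The relocated check is compact enough to show on its own: the byte count handed to the cipher must be non-negative and a whole number of pad-alignment blocks. A sketch, assuming pad_align is a power of two:

static inline int
esp_clen_ok(uint32_t plen, uint32_t cipher_len, uint32_t pad_align)
{
    int32_t clen = plen - cipher_len;

    return clen >= 0 && ((uint32_t)clen & (pad_align - 1)) == 0;
}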
*/ -static inline int32_t +static inline void inb_cop_prepare(struct rte_crypto_op *cop, const struct rte_ipsec_sa *sa, struct rte_mbuf *mb, const union sym_op_data *icv, uint32_t pofs, uint32_t plen) @@ -27,11 +27,7 @@ inb_cop_prepare(struct rte_crypto_op *cop, struct aead_gcm_iv *gcm; struct aesctr_cnt_blk *ctr; uint64_t *ivc, *ivp; - uint32_t algo, clen; - - clen = plen - sa->ctp.cipher.length; - if ((int32_t)clen < 0 || (clen & (sa->pad_align - 1)) != 0) - return -EINVAL; + uint32_t algo; algo = sa->algo_type; @@ -41,7 +37,7 @@ inb_cop_prepare(struct rte_crypto_op *cop, switch (algo) { case ALGO_TYPE_AES_GCM: sop->aead.data.offset = pofs + sa->ctp.cipher.offset; - sop->aead.data.length = clen; + sop->aead.data.length = plen - sa->ctp.cipher.length; sop->aead.digest.data = icv->va; sop->aead.digest.phys_addr = icv->pa; sop->aead.aad.data = icv->va + sa->icv_len; @@ -57,7 +53,7 @@ inb_cop_prepare(struct rte_crypto_op *cop, case ALGO_TYPE_AES_CBC: case ALGO_TYPE_3DES_CBC: sop->cipher.data.offset = pofs + sa->ctp.cipher.offset; - sop->cipher.data.length = clen; + sop->cipher.data.length = plen - sa->ctp.cipher.length; sop->auth.data.offset = pofs + sa->ctp.auth.offset; sop->auth.data.length = plen - sa->ctp.auth.length; sop->auth.digest.data = icv->va; @@ -71,7 +67,7 @@ inb_cop_prepare(struct rte_crypto_op *cop, break; case ALGO_TYPE_AES_CTR: sop->cipher.data.offset = pofs + sa->ctp.cipher.offset; - sop->cipher.data.length = clen; + sop->cipher.data.length = plen - sa->ctp.cipher.length; sop->auth.data.offset = pofs + sa->ctp.auth.offset; sop->auth.data.length = plen - sa->ctp.auth.length; sop->auth.digest.data = icv->va; @@ -86,17 +82,13 @@ inb_cop_prepare(struct rte_crypto_op *cop, break; case ALGO_TYPE_NULL: sop->cipher.data.offset = pofs + sa->ctp.cipher.offset; - sop->cipher.data.length = clen; + sop->cipher.data.length = plen - sa->ctp.cipher.length; sop->auth.data.offset = pofs + sa->ctp.auth.offset; sop->auth.data.length = plen - sa->ctp.auth.length; sop->auth.digest.data = icv->va; sop->auth.digest.phys_addr = icv->pa; break; - default: - return -EINVAL; } - - return 0; } /* @@ -132,7 +124,7 @@ inb_pkt_prepare(const struct rte_ipsec_sa *sa, const struct replay_sqn *rsn, { int32_t rc; uint64_t sqn; - uint32_t icv_ofs, plen; + uint32_t clen, icv_ofs, plen; struct rte_mbuf *ml; struct esp_hdr *esph; @@ -159,6 +151,11 @@ inb_pkt_prepare(const struct rte_ipsec_sa *sa, const struct replay_sqn *rsn, ml = rte_pktmbuf_lastseg(mb); icv_ofs = ml->data_len - sa->icv_len + sa->sqh_len; + /* check that packet has a valid length */ + clen = plen - sa->ctp.cipher.length; + if ((int32_t)clen < 0 || (clen & (sa->pad_align - 1)) != 0) + return -EBADMSG; + /* we have to allocate space for AAD somewhere, * right now - just use free trailing space at the last segment. 
* Would probably be more convenient to reserve space for AAD @@ -201,21 +198,19 @@ esp_inb_pkt_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[], rc = inb_pkt_prepare(sa, rsn, mb[i], hl, &icv); if (rc >= 0) { lksd_none_cop_prepare(cop[k], cs, mb[i]); - rc = inb_cop_prepare(cop[k], sa, mb[i], &icv, hl, rc); - } - - k += (rc == 0); - if (rc != 0) { + inb_cop_prepare(cop[k], sa, mb[i], &icv, hl, rc); + k++; + } else dr[i - k] = i; - rte_errno = -rc; - } } rsn_release(sa, rsn); /* copy not prepared mbufs beyond good ones */ - if (k != num && k != 0) + if (k != num && k != 0) { move_bad_mbufs(mb, dr, num, num - k); + rte_errno = EBADMSG; + } return k; } From patchwork Fri Mar 29 10:27:24 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Ananyev, Konstantin" X-Patchwork-Id: 51897 X-Patchwork-Delegate: gakhil@marvell.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id B31F55323; Fri, 29 Mar 2019 11:28:22 +0100 (CET) Received: from mga02.intel.com (mga02.intel.com [134.134.136.20]) by dpdk.org (Postfix) with ESMTP id D17164CC5 for ; Fri, 29 Mar 2019 11:28:09 +0100 (CET) X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga006.jf.intel.com ([10.7.209.51]) by orsmga101.jf.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 29 Mar 2019 03:28:09 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.60,284,1549958400"; d="scan'208";a="131234698" Received: from sivswdev08.ir.intel.com ([10.237.217.47]) by orsmga006.jf.intel.com with ESMTP; 29 Mar 2019 03:28:07 -0700 From: Konstantin Ananyev To: dev@dpdk.org Cc: akhil.goyal@nxp.com, olivier.matz@6wind.com, Konstantin Ananyev Date: Fri, 29 Mar 2019 10:27:24 +0000 Message-Id: <20190329102726.27716-8-konstantin.ananyev@intel.com> X-Mailer: git-send-email 2.18.0 In-Reply-To: <20190329102726.27716-1-konstantin.ananyev@intel.com> References: <20190326154320.29913-1-konstantin.ananyev@intel.com> <20190329102726.27716-1-konstantin.ananyev@intel.com> Subject: [dpdk-dev] [PATCH v4 7/9] ipsec: reorder packet process for esp inbound X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Change the order of operations for esp inbound post-process: - read mbuf metadata and esp tail for all packets in the burst first, to minimize stalls due to load latency. - move code that is common for both transport and tunnel modes into separate functions to reduce code duplication. - add an extra check for packet consistency Signed-off-by: Konstantin Ananyev --- lib/librte_ipsec/esp_inb.c | 351 ++++++++++++++++++++++------------- lib/librte_ipsec/ipsec_sqn.h | 4 - 2 files changed, 227 insertions(+), 128 deletions(-) diff --git a/lib/librte_ipsec/esp_inb.c b/lib/librte_ipsec/esp_inb.c index 8d1171556..138ed0450 100644 --- a/lib/librte_ipsec/esp_inb.c +++ b/lib/librte_ipsec/esp_inb.c @@ -15,8 +15,11 @@ #include "misc.h" #include "pad.h" +typedef uint16_t (*esp_inb_process_t)(const struct rte_ipsec_sa *sa, + struct rte_mbuf *mb[], uint32_t sqn[], uint32_t dr[], uint16_t num); + /* - * setup crypto op and crypto sym op for ESP inbound tunnel packet. + * setup crypto op and crypto sym op for ESP inbound packet.
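The reordering described above follows a common burst-processing idiom: issue the dependent loads for every packet first, then make the control-flow decisions once that data has (most likely) arrived. In outline, with hypothetical helpers and types:

static uint16_t
two_pass_outline(struct rte_mbuf *mb[], struct pkt_meta meta[],
    int verdict[], uint16_t num)
{
    uint16_t i;

    /* pass 1: start the loads for the whole burst */
    for (i = 0; i != num; i++)
        gather_metadata(mb[i], &meta[i]);

    /* pass 2: meta[] is now likely cache-resident; branch on it */
    for (i = 0; i != num; i++)
        verdict[i] = check_and_strip(mb[i], &meta[i]);

    return num;
}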
*/ static inline void inb_cop_prepare(struct rte_crypto_op *cop, @@ -216,111 +219,239 @@ esp_inb_pkt_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[], } /* - * process ESP inbound tunnel packet. + * Start with processing inbound packet. + * This is common part for both tunnel and transport mode. + * Extract information that will be needed later from mbuf metadata and + * actual packet data: + * - mbuf for packet's last segment + * - length of the L2/L3 headers + * - esp tail structure */ -static inline int -inb_tun_single_pkt_process(struct rte_ipsec_sa *sa, struct rte_mbuf *mb, - uint32_t *sqn) +static inline void +process_step1(struct rte_mbuf *mb, uint32_t tlen, struct rte_mbuf **ml, + struct esp_tail *espt, uint32_t *hlen) { - uint32_t hlen, icv_len, tlen; - struct esp_hdr *esph; - struct esp_tail *espt; - struct rte_mbuf *ml; - char *pd; + const struct esp_tail *pt; - if (mb->ol_flags & PKT_RX_SEC_OFFLOAD_FAILED) - return -EBADMSG; + ml[0] = rte_pktmbuf_lastseg(mb); + hlen[0] = mb->l2_len + mb->l3_len; + pt = rte_pktmbuf_mtod_offset(ml[0], const struct esp_tail *, + ml[0]->data_len - tlen); + espt[0] = pt[0]; +} - icv_len = sa->icv_len; +/* + * packet checks for transport mode: + * - no reported IPsec related failures in ol_flags + * - tail length is valid + * - padding bytes are valid + */ +static inline int32_t +trs_process_check(const struct rte_mbuf *mb, const struct rte_mbuf *ml, + struct esp_tail espt, uint32_t hlen, uint32_t tlen) +{ + const uint8_t *pd; + int32_t ofs; - ml = rte_pktmbuf_lastseg(mb); - espt = rte_pktmbuf_mtod_offset(ml, struct esp_tail *, - ml->data_len - icv_len - sizeof(*espt)); + ofs = ml->data_len - tlen; + pd = rte_pktmbuf_mtod_offset(ml, const uint8_t *, ofs); - /* - * check padding and next proto. - * return an error if something is wrong. 
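Everything these checks inspect lives in the last few bytes of the packet. The trailer layout they assume, per RFC 4303 and mirroring the library's struct esp_tail:

/*
 *  ... payload | padding (pad_len bytes) | esp_tail | ICV (icv_len bytes)
 */
struct esp_tail_sketch {
    uint8_t pad_len;    /* number of padding bytes preceding this struct */
    uint8_t next_proto; /* payload type; tunnel mode expects sa->proto */
};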
- */ - pd = (char *)espt - espt->pad_len; - if (espt->next_proto != sa->proto || - memcmp(pd, esp_pad_bytes, espt->pad_len)) - return -EINVAL; + return ((mb->ol_flags & PKT_RX_SEC_OFFLOAD_FAILED) != 0 || + ofs < 0 || tlen + hlen > mb->pkt_len || + (espt.pad_len != 0 && memcmp(pd, esp_pad_bytes, espt.pad_len))); +} + +/* + * packet checks for tunnel mode: + * - same as for transport mode + * - esp tail next proto contains the value expected for that SA + */ +static inline int32_t +tun_process_check(const struct rte_mbuf *mb, struct rte_mbuf *ml, + struct esp_tail espt, uint32_t hlen, const uint32_t tlen, uint8_t proto) +{ + return (trs_process_check(mb, ml, espt, hlen, tlen) || + espt.next_proto != proto); +} + +/* + * step two for tunnel mode: + * - read SQN value (for future use) + * - cut off ICV, ESP tail and padding bytes + * - cut off ESP header and IV, also if needed - L2/L3 headers + * (controlled by *adj* value) + */ +static inline void * +tun_process_step2(struct rte_mbuf *mb, struct rte_mbuf *ml, uint32_t hlen, + uint32_t adj, uint32_t tlen, uint32_t *sqn) +{ + const struct esp_hdr *ph; + + /* read SQN value */ + ph = rte_pktmbuf_mtod_offset(mb, const struct esp_hdr *, hlen); + sqn[0] = ph->seq; /* cut off ICV, ESP tail and padding bytes */ - tlen = icv_len + sizeof(*espt) + espt->pad_len; ml->data_len -= tlen; mb->pkt_len -= tlen; /* cut off L2/L3 headers, ESP header and IV */ - hlen = mb->l2_len + mb->l3_len; - esph = rte_pktmbuf_mtod_offset(mb, struct esp_hdr *, hlen); - rte_pktmbuf_adj(mb, hlen + sa->ctp.cipher.offset); + return rte_pktmbuf_adj(mb, adj); +} + +/* + * step two for transport mode: + * - read SQN value (for future use) + * - cut off ICV, ESP tail and padding bytes + * - cut off ESP header and IV + * - move L2/L3 header to fill the gap after ESP header removal + */ +static inline void * +trs_process_step2(struct rte_mbuf *mb, struct rte_mbuf *ml, uint32_t hlen, + uint32_t adj, uint32_t tlen, uint32_t *sqn) +{ + char *np, *op; + + /* get start of the packet before modifications */ + op = rte_pktmbuf_mtod(mb, char *); + + /* cut off ESP header and IV */ + np = tun_process_step2(mb, ml, hlen, adj, tlen, sqn); + + /* move header bytes to fill the gap after ESP header removal */ + remove_esph(np, op, hlen); + return np; +} - /* retrieve SQN for later check */ - *sqn = rte_be_to_cpu_32(esph->seq); +/* + * step three for transport mode: + * update mbuf metadata: + * - packet_type + * - ol_flags + */ +static inline void +trs_process_step3(struct rte_mbuf *mb) +{ + /* reset mbuf packet type */ + mb->packet_type &= (RTE_PTYPE_L2_MASK | RTE_PTYPE_L3_MASK); + /* clear the PKT_RX_SEC_OFFLOAD flag if set */ + mb->ol_flags &= ~PKT_RX_SEC_OFFLOAD; +} + +/* + * step three for tunnel mode: + * update mbuf metadata: + * - packet_type + * - ol_flags + * - tx_offload + */ +static inline void +tun_process_step3(struct rte_mbuf *mb, uint64_t txof_msk, uint64_t txof_val) +{ /* reset mbuf metadata: L2/L3 len, packet type */ mb->packet_type = RTE_PTYPE_UNKNOWN; - mb->tx_offload = (mb->tx_offload & sa->tx_offload.msk) | - sa->tx_offload.val; + mb->tx_offload = (mb->tx_offload & txof_msk) | txof_val; /* clear the PKT_RX_SEC_OFFLOAD flag if set */ - mb->ol_flags &= ~(mb->ol_flags & PKT_RX_SEC_OFFLOAD); - return 0; + mb->ol_flags &= ~PKT_RX_SEC_OFFLOAD; } + /* - * process ESP inbound transport packet.
+ * *process* function for tunnel packets */ -static inline int -inb_trs_single_pkt_process(struct rte_ipsec_sa *sa, struct rte_mbuf *mb, - uint32_t *sqn) +static inline uint16_t +tun_process(const struct rte_ipsec_sa *sa, struct rte_mbuf *mb[], + uint32_t sqn[], uint32_t dr[], uint16_t num) { - uint32_t hlen, icv_len, l2len, l3len, tlen; - struct esp_hdr *esph; - struct esp_tail *espt; - struct rte_mbuf *ml; - char *np, *op, *pd; + uint32_t adj, i, k, tl; + uint32_t hl[num]; + struct esp_tail espt[num]; + struct rte_mbuf *ml[num]; - if (mb->ol_flags & PKT_RX_SEC_OFFLOAD_FAILED) - return -EBADMSG; + const uint32_t tlen = sa->icv_len + sizeof(espt[0]); + const uint32_t cofs = sa->ctp.cipher.offset; - icv_len = sa->icv_len; + /* + * to minimize stalls due to load latency, + * read mbuf metadata and esp tail first. + */ + for (i = 0; i != num; i++) + process_step1(mb[i], tlen, &ml[i], &espt[i], &hl[i]); - ml = rte_pktmbuf_lastseg(mb); - espt = rte_pktmbuf_mtod_offset(ml, struct esp_tail *, - ml->data_len - icv_len - sizeof(*espt)); + k = 0; + for (i = 0; i != num; i++) { - /* check padding, return an error if something is wrong. */ - pd = (char *)espt - espt->pad_len; - if (memcmp(pd, esp_pad_bytes, espt->pad_len)) - return -EINVAL; + adj = hl[i] + cofs; + tl = tlen + espt[i].pad_len; - /* cut of ICV, ESP tail and padding bytes */ - tlen = icv_len + sizeof(*espt) + espt->pad_len; - ml->data_len -= tlen; - mb->pkt_len -= tlen; + /* check that packet is valid */ + if (tun_process_check(mb[i], ml[i], espt[i], adj, tl, + sa->proto) == 0) { - /* retrieve SQN for later check */ - l2len = mb->l2_len; - l3len = mb->l3_len; - hlen = l2len + l3len; - op = rte_pktmbuf_mtod(mb, char *); - esph = (struct esp_hdr *)(op + hlen); - *sqn = rte_be_to_cpu_32(esph->seq); + /* modify packet's layout */ + tun_process_step2(mb[i], ml[i], hl[i], adj, + tl, sqn + k); + /* update mbuf's metadata */ + tun_process_step3(mb[i], sa->tx_offload.msk, + sa->tx_offload.val); + k++; + } else + dr[i - k] = i; + } - /* cut off ESP header and IV, update L3 header */ - np = rte_pktmbuf_adj(mb, sa->ctp.cipher.offset); - remove_esph(np, op, hlen); - update_trs_l3hdr(sa, np + l2len, mb->pkt_len, l2len, l3len, - espt->next_proto); + return k; +} - /* reset mbuf packet type */ - mb->packet_type &= (RTE_PTYPE_L2_MASK | RTE_PTYPE_L3_MASK); - /* clear the PKT_RX_SEC_OFFLOAD flag if set */ - mb->ol_flags &= ~(mb->ol_flags & PKT_RX_SEC_OFFLOAD); - return 0; +/* + * *process* function for transport packets + */ +static inline uint16_t +trs_process(const struct rte_ipsec_sa *sa, struct rte_mbuf *mb[], + uint32_t sqn[], uint32_t dr[], uint16_t num) +{ + char *np; + uint32_t i, k, l2, tl; + uint32_t hl[num]; + struct esp_tail espt[num]; + struct rte_mbuf *ml[num]; + + const uint32_t tlen = sa->icv_len + sizeof(espt[0]); + const uint32_t cofs = sa->ctp.cipher.offset; + + /* + * to minimize stalls due to load latency, + * read mbuf metadata and esp tail first.
+ */ + for (i = 0; i != num; i++) + process_step1(mb[i], tlen, &ml[i], &espt[i], &hl[i]); + + k = 0; + for (i = 0; i != num; i++) { + + tl = tlen + espt[i].pad_len; + l2 = mb[i]->l2_len; + + /* check that packet is valid */ + if (trs_process_check(mb[i], ml[i], espt[i], hl[i] + cofs, + tl) == 0) { + + /* modify packet's layout */ + np = trs_process_step2(mb[i], ml[i], hl[i], cofs, tl, + sqn + k); + update_trs_l3hdr(sa, np + l2, mb[i]->pkt_len, + l2, hl[i] - l2, espt[i].next_proto); + + /* update mbuf's metadata */ + trs_process_step3(mb[i]); + k++; + } else + dr[i - k] = i; + } + + return k; } /* @@ -333,11 +464,15 @@ esp_inb_rsn_update(struct rte_ipsec_sa *sa, const uint32_t sqn[], uint32_t i, k; struct replay_sqn *rsn; + /* replay not enabled */ + if (sa->replay.win_sz == 0) + return num; + rsn = rsn_update_start(sa); k = 0; for (i = 0; i != num; i++) { - if (esn_inb_update_sqn(rsn, sa, sqn[i]) == 0) + if (esn_inb_update_sqn(rsn, sa, rte_be_to_cpu_32(sqn[i])) == 0) k++; else dr[i - k] = i; @@ -348,13 +483,13 @@ esp_inb_rsn_update(struct rte_ipsec_sa *sa, const uint32_t sqn[], } /* - * process group of ESP inbound tunnel packets. + * process group of ESP inbound packets. */ -uint16_t -esp_inb_tun_pkt_process(const struct rte_ipsec_session *ss, - struct rte_mbuf *mb[], uint16_t num) +static inline uint16_t +esp_inb_pkt_process(const struct rte_ipsec_session *ss, + struct rte_mbuf *mb[], uint16_t num, esp_inb_process_t process) { - uint32_t i, k, n; + uint32_t k, n; struct rte_ipsec_sa *sa; uint32_t sqn[num]; uint32_t dr[num]; @@ -362,16 +497,7 @@ esp_inb_tun_pkt_process(const struct rte_ipsec_session *ss, sa = ss->sa; /* process packets, extract seq numbers */ - - k = 0; - for (i = 0; i != num; i++) { - /* good packet */ - if (inb_tun_single_pkt_process(sa, mb[i], sqn + k) == 0) - k++; - /* bad packet, will drop from furhter processing */ - else - dr[i - k] = i; - } + k = process(sa, mb, sqn, dr, num); /* handle unprocessed mbufs */ if (k != num && k != 0) @@ -390,6 +516,16 @@ esp_inb_tun_pkt_process(const struct rte_ipsec_session *ss, return n; } +/* + * process group of ESP inbound tunnel packets. + */ +uint16_t +esp_inb_tun_pkt_process(const struct rte_ipsec_session *ss, + struct rte_mbuf *mb[], uint16_t num) +{ + return esp_inb_pkt_process(ss, mb, num, tun_process); +} + /* * process group of ESP inbound transport packets. 
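On return the contract is the one move_bad_mbufs() establishes: the first n mbufs were processed successfully, the remainder failed and rte_errno is set. A typical caller therefore frees the tail (sketch):

n = rte_ipsec_pkt_process(ss, mb, num); /* dispatches to pf->process */
for (i = n; i != num; i++)
    rte_pktmbuf_free(mb[i]);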
*/ @@ -397,38 +533,5 @@ uint16_t esp_inb_trs_pkt_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[], uint16_t num) { - uint32_t i, k, n; - uint32_t sqn[num]; - struct rte_ipsec_sa *sa; - uint32_t dr[num]; - - sa = ss->sa; - - /* process packets, extract seq numbers */ - - k = 0; - for (i = 0; i != num; i++) { - /* good packet */ - if (inb_trs_single_pkt_process(sa, mb[i], sqn + k) == 0) - k++; - /* bad packet, will drop from furhter processing */ - else - dr[i - k] = i; - } - - /* handle unprocessed mbufs */ - if (k != num && k != 0) - move_bad_mbufs(mb, dr, num, num - k); - - /* update SQN and replay winow */ - n = esp_inb_rsn_update(sa, sqn, dr, k); - - /* handle mbufs with wrong SQN */ - if (n != k && n != 0) - move_bad_mbufs(mb, dr, k, k - n); - - if (n != num) - rte_errno = EBADMSG; - - return n; + return esp_inb_pkt_process(ss, mb, num, trs_process); } diff --git a/lib/librte_ipsec/ipsec_sqn.h b/lib/librte_ipsec/ipsec_sqn.h index 4ba079d75..0c2f76a7a 100644 --- a/lib/librte_ipsec/ipsec_sqn.h +++ b/lib/librte_ipsec/ipsec_sqn.h @@ -152,10 +152,6 @@ esn_inb_update_sqn(struct replay_sqn *rsn, const struct rte_ipsec_sa *sa, { uint32_t bit, bucket, last_bucket, new_bucket, diff, i; - /* replay not enabled */ - if (sa->replay.win_sz == 0) - return 0; - /* handle ESN */ if (IS_ESN(sa)) sqn = reconstruct_esn(rsn->sqn, sqn, sa->replay.win_sz); From patchwork Fri Mar 29 10:27:25 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Ananyev, Konstantin" X-Patchwork-Id: 51899 X-Patchwork-Delegate: gakhil@marvell.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 777B15699; Fri, 29 Mar 2019 11:28:36 +0100 (CET) Received: from mga02.intel.com (mga02.intel.com [134.134.136.20]) by dpdk.org (Postfix) with ESMTP id 030834CC0 for ; Fri, 29 Mar 2019 11:28:10 +0100 (CET) X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga006.jf.intel.com ([10.7.209.51]) by orsmga101.jf.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 29 Mar 2019 03:28:10 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.60,284,1549958400"; d="scan'208";a="131234704" Received: from sivswdev08.ir.intel.com ([10.237.217.47]) by orsmga006.jf.intel.com with ESMTP; 29 Mar 2019 03:28:09 -0700 From: Konstantin Ananyev To: dev@dpdk.org Cc: akhil.goyal@nxp.com, olivier.matz@6wind.com, Konstantin Ananyev Date: Fri, 29 Mar 2019 10:27:25 +0000 Message-Id: <20190329102726.27716-9-konstantin.ananyev@intel.com> X-Mailer: git-send-email 2.18.0 In-Reply-To: <20190329102726.27716-1-konstantin.ananyev@intel.com> References: <20190326154320.29913-1-konstantin.ananyev@intel.com> <20190329102726.27716-1-konstantin.ananyev@intel.com> Subject: [dpdk-dev] [PATCH v4 8/9] ipsec: de-duplicate crypto op prepare code-path X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" For sym_crypto_op preparation, move the common code into separate functions.
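Both extracted helpers rely on the same staging convention visible in the diff below: the digest lands at icv->va and, for AEAD, the AAD sits immediately behind it, so a single (va, pa) pair addresses both regions. Their intended call shape inside a cop-prepare routine, sketched:

/*
 *  icv->va                 icv->va + sa->icv_len
 *     |  ICV (icv_len B)   |  AAD (aad_len B, AEAD only)
 */
switch (sa->algo_type) {
case ALGO_TYPE_AES_GCM:
    sop_aead_prepare(sop, sa, &icv, pofs, plen);
    break;
default: /* cipher+auth and NULL cases */
    sop_ciph_auth_prepare(sop, sa, &icv, pofs, plen);
    break;
}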
Signed-off-by: Konstantin Ananyev Acked-by: Akhil Goyal --- lib/librte_ipsec/esp_inb.c | 72 +++++++++++++++++++++---------------- lib/librte_ipsec/esp_outb.c | 57 +++++++++++++++++++---------- 2 files changed, 80 insertions(+), 49 deletions(-) diff --git a/lib/librte_ipsec/esp_inb.c b/lib/librte_ipsec/esp_inb.c index 138ed0450..4e0e12a85 100644 --- a/lib/librte_ipsec/esp_inb.c +++ b/lib/librte_ipsec/esp_inb.c @@ -18,6 +18,40 @@ typedef uint16_t (*esp_inb_process_t)(const struct rte_ipsec_sa *sa, struct rte_mbuf *mb[], uint32_t sqn[], uint32_t dr[], uint16_t num); +/* + * helper function to fill crypto_sym op for cipher+auth algorithms. + * used by inb_cop_prepare(), see below. + */ +static inline void +sop_ciph_auth_prepare(struct rte_crypto_sym_op *sop, + const struct rte_ipsec_sa *sa, const union sym_op_data *icv, + uint32_t pofs, uint32_t plen) +{ + sop->cipher.data.offset = pofs + sa->ctp.cipher.offset; + sop->cipher.data.length = plen - sa->ctp.cipher.length; + sop->auth.data.offset = pofs + sa->ctp.auth.offset; + sop->auth.data.length = plen - sa->ctp.auth.length; + sop->auth.digest.data = icv->va; + sop->auth.digest.phys_addr = icv->pa; +} + +/* + * helper function to fill crypto_sym op for aead algorithms + * used by inb_cop_prepare(), see below. + */ +static inline void +sop_aead_prepare(struct rte_crypto_sym_op *sop, + const struct rte_ipsec_sa *sa, const union sym_op_data *icv, + uint32_t pofs, uint32_t plen) +{ + sop->aead.data.offset = pofs + sa->ctp.cipher.offset; + sop->aead.data.length = plen - sa->ctp.cipher.length; + sop->aead.digest.data = icv->va; + sop->aead.digest.phys_addr = icv->pa; + sop->aead.aad.data = icv->va + sa->icv_len; + sop->aead.aad.phys_addr = icv->pa + sa->icv_len; +} + /* * setup crypto op and crypto sym op for ESP inbound packet. 
*/ @@ -33,63 +67,39 @@ inb_cop_prepare(struct rte_crypto_op *cop, uint32_t algo; algo = sa->algo_type; + ivp = rte_pktmbuf_mtod_offset(mb, uint64_t *, + pofs + sizeof(struct esp_hdr)); /* fill sym op fields */ sop = cop->sym; switch (algo) { case ALGO_TYPE_AES_GCM: - sop->aead.data.offset = pofs + sa->ctp.cipher.offset; - sop->aead.data.length = plen - sa->ctp.cipher.length; - sop->aead.digest.data = icv->va; - sop->aead.digest.phys_addr = icv->pa; - sop->aead.aad.data = icv->va + sa->icv_len; - sop->aead.aad.phys_addr = icv->pa + sa->icv_len; + sop_aead_prepare(sop, sa, icv, pofs, plen); /* fill AAD IV (located inside crypto op) */ gcm = rte_crypto_op_ctod_offset(cop, struct aead_gcm_iv *, sa->iv_ofs); - ivp = rte_pktmbuf_mtod_offset(mb, uint64_t *, - pofs + sizeof(struct esp_hdr)); aead_gcm_iv_fill(gcm, ivp[0], sa->salt); break; case ALGO_TYPE_AES_CBC: case ALGO_TYPE_3DES_CBC: - sop->cipher.data.offset = pofs + sa->ctp.cipher.offset; - sop->cipher.data.length = plen - sa->ctp.cipher.length; - sop->auth.data.offset = pofs + sa->ctp.auth.offset; - sop->auth.data.length = plen - sa->ctp.auth.length; - sop->auth.digest.data = icv->va; - sop->auth.digest.phys_addr = icv->pa; + sop_ciph_auth_prepare(sop, sa, icv, pofs, plen); /* copy iv from the input packet to the cop */ ivc = rte_crypto_op_ctod_offset(cop, uint64_t *, sa->iv_ofs); - ivp = rte_pktmbuf_mtod_offset(mb, uint64_t *, - pofs + sizeof(struct esp_hdr)); copy_iv(ivc, ivp, sa->iv_len); break; case ALGO_TYPE_AES_CTR: - sop->cipher.data.offset = pofs + sa->ctp.cipher.offset; - sop->cipher.data.length = plen - sa->ctp.cipher.length; - sop->auth.data.offset = pofs + sa->ctp.auth.offset; - sop->auth.data.length = plen - sa->ctp.auth.length; - sop->auth.digest.data = icv->va; - sop->auth.digest.phys_addr = icv->pa; + sop_ciph_auth_prepare(sop, sa, icv, pofs, plen); - /* copy iv from the input packet to the cop */ + /* fill CTR block (located inside crypto op) */ ctr = rte_crypto_op_ctod_offset(cop, struct aesctr_cnt_blk *, sa->iv_ofs); - ivp = rte_pktmbuf_mtod_offset(mb, uint64_t *, - pofs + sizeof(struct esp_hdr)); aes_ctr_cnt_blk_fill(ctr, ivp[0], sa->salt); break; case ALGO_TYPE_NULL: - sop->cipher.data.offset = pofs + sa->ctp.cipher.offset; - sop->cipher.data.length = plen - sa->ctp.cipher.length; - sop->auth.data.offset = pofs + sa->ctp.auth.offset; - sop->auth.data.length = plen - sa->ctp.auth.length; - sop->auth.digest.data = icv->va; - sop->auth.digest.phys_addr = icv->pa; + sop_ciph_auth_prepare(sop, sa, icv, pofs, plen); break; } } diff --git a/lib/librte_ipsec/esp_outb.c b/lib/librte_ipsec/esp_outb.c index 09bfb8658..c798bc4c4 100644 --- a/lib/librte_ipsec/esp_outb.c +++ b/lib/librte_ipsec/esp_outb.c @@ -15,6 +15,41 @@ #include "misc.h" #include "pad.h" + +/* + * helper function to fill crypto_sym op for cipher+auth algorithms. + * used by outb_cop_prepare(), see below. + */ +static inline void +sop_ciph_auth_prepare(struct rte_crypto_sym_op *sop, + const struct rte_ipsec_sa *sa, const union sym_op_data *icv, + uint32_t pofs, uint32_t plen) +{ + sop->cipher.data.offset = sa->ctp.cipher.offset + pofs; + sop->cipher.data.length = sa->ctp.cipher.length + plen; + sop->auth.data.offset = sa->ctp.auth.offset + pofs; + sop->auth.data.length = sa->ctp.auth.length + plen; + sop->auth.digest.data = icv->va; + sop->auth.digest.phys_addr = icv->pa; +} + +/* + * helper function to fill crypto_sym op for aead algorithms. + * used by outb_cop_prepare(), see below.
diff --git a/lib/librte_ipsec/esp_outb.c b/lib/librte_ipsec/esp_outb.c
index 09bfb8658..c798bc4c4 100644
--- a/lib/librte_ipsec/esp_outb.c
+++ b/lib/librte_ipsec/esp_outb.c
@@ -15,6 +15,41 @@
 #include "misc.h"
 #include "pad.h"
 
+
+/*
+ * helper function to fill crypto_sym op for cipher+auth algorithms.
+ * used by outb_cop_prepare(), see below.
+ */
+static inline void
+sop_ciph_auth_prepare(struct rte_crypto_sym_op *sop,
+	const struct rte_ipsec_sa *sa, const union sym_op_data *icv,
+	uint32_t pofs, uint32_t plen)
+{
+	sop->cipher.data.offset = sa->ctp.cipher.offset + pofs;
+	sop->cipher.data.length = sa->ctp.cipher.length + plen;
+	sop->auth.data.offset = sa->ctp.auth.offset + pofs;
+	sop->auth.data.length = sa->ctp.auth.length + plen;
+	sop->auth.digest.data = icv->va;
+	sop->auth.digest.phys_addr = icv->pa;
+}
+
+/*
+ * helper function to fill crypto_sym op for aead algorithms.
+ * used by outb_cop_prepare(), see below.
+ */
+static inline void
+sop_aead_prepare(struct rte_crypto_sym_op *sop,
+	const struct rte_ipsec_sa *sa, const union sym_op_data *icv,
+	uint32_t pofs, uint32_t plen)
+{
+	sop->aead.data.offset = sa->ctp.cipher.offset + pofs;
+	sop->aead.data.length = sa->ctp.cipher.length + plen;
+	sop->aead.digest.data = icv->va;
+	sop->aead.digest.phys_addr = icv->pa;
+	sop->aead.aad.data = icv->va + sa->icv_len;
+	sop->aead.aad.phys_addr = icv->pa + sa->icv_len;
+}
+
 /*
  * setup crypto op and crypto sym op for ESP outbound packet.
  */
@@ -40,21 +75,11 @@ outb_cop_prepare(struct rte_crypto_op *cop,
 		/* Cipher-Auth (3DES-CBC *) case */
 	case ALGO_TYPE_NULL:
 		/* NULL case */
-		sop->cipher.data.offset = sa->ctp.cipher.offset + hlen;
-		sop->cipher.data.length = sa->ctp.cipher.length + plen;
-		sop->auth.data.offset = sa->ctp.auth.offset + hlen;
-		sop->auth.data.length = sa->ctp.auth.length + plen;
-		sop->auth.digest.data = icv->va;
-		sop->auth.digest.phys_addr = icv->pa;
+		sop_ciph_auth_prepare(sop, sa, icv, hlen, plen);
 		break;
 	case ALGO_TYPE_AES_GCM:
 		/* AEAD (AES_GCM) case */
-		sop->aead.data.offset = sa->ctp.cipher.offset + hlen;
-		sop->aead.data.length = sa->ctp.cipher.length + plen;
-		sop->aead.digest.data = icv->va;
-		sop->aead.digest.phys_addr = icv->pa;
-		sop->aead.aad.data = icv->va + sa->icv_len;
-		sop->aead.aad.phys_addr = icv->pa + sa->icv_len;
+		sop_aead_prepare(sop, sa, icv, hlen, plen);
 
 		/* fill AAD IV (located inside crypto op) */
 		gcm = rte_crypto_op_ctod_offset(cop, struct aead_gcm_iv *,
@@ -63,13 +88,9 @@ outb_cop_prepare(struct rte_crypto_op *cop,
 		break;
 	case ALGO_TYPE_AES_CTR:
 		/* Cipher-Auth (AES-CTR *) case */
-		sop->cipher.data.offset = sa->ctp.cipher.offset + hlen;
-		sop->cipher.data.length = sa->ctp.cipher.length + plen;
-		sop->auth.data.offset = sa->ctp.auth.offset + hlen;
-		sop->auth.data.length = sa->ctp.auth.length + plen;
-		sop->auth.digest.data = icv->va;
-		sop->auth.digest.phys_addr = icv->pa;
+		sop_ciph_auth_prepare(sop, sa, icv, hlen, plen);
 
+		/* fill CTR block (located inside crypto op) */
 		ctr = rte_crypto_op_ctod_offset(cop, struct aesctr_cnt_blk *,
 			sa->iv_ofs);
 		aes_ctr_cnt_blk_fill(ctr, ivp[0], sa->salt);
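A detail shared by both sop_aead_prepare() variants above: the AAD is not a
separate buffer, it is carved out of the same contiguous per-packet area as
the ICV, immediately behind it (icv->va + sa->icv_len), so one reservation
covers both. A toy sketch of that layout, with a mock buffer and a
hypothetical 16-byte ICV standing in for the real sym_op_data:

#include <stdint.h>
#include <stdio.h>

int
main(void)
{
	uint8_t area[32];       /* mock per-packet ICV+AAD area */
	uint32_t icv_len = 16;  /* hypothetical, e.g. a 16B GCM tag */

	uint8_t *digest = area;         /* plays sop->aead.digest.data */
	uint8_t *aad = area + icv_len;  /* plays sop->aead.aad.data */

	/* digest occupies [0, icv_len), AAD starts directly after it */
	printf("digest at +%td, aad at +%td\n",
		digest - area, aad - area);
	return 0;
}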
From patchwork Fri Mar 29 10:27:26 2019
X-Patchwork-Submitter: "Ananyev, Konstantin"
X-Patchwork-Id: 51898
X-Patchwork-Delegate: gakhil@marvell.com
From: Konstantin Ananyev
To: dev@dpdk.org
Cc: akhil.goyal@nxp.com, olivier.matz@6wind.com, Konstantin Ananyev
Date: Fri, 29 Mar 2019 10:27:26 +0000
Message-Id: <20190329102726.27716-10-konstantin.ananyev@intel.com>
In-Reply-To: <20190329102726.27716-1-konstantin.ananyev@intel.com>
References: <20190326154320.29913-1-konstantin.ananyev@intel.com>
 <20190329102726.27716-1-konstantin.ananyev@intel.com>
Subject: [dpdk-dev] [PATCH v4 9/9] doc: add ipsec lib into shared libraries list

Add librte_ipsec into 'Shared Library Versions' list in the release notes.

Signed-off-by: Konstantin Ananyev
Acked-by: Akhil Goyal
---
 doc/guides/rel_notes/release_19_05.rst | 1 +
 1 file changed, 1 insertion(+)

diff --git a/doc/guides/rel_notes/release_19_05.rst b/doc/guides/rel_notes/release_19_05.rst
index d11bb5a2b..f41d1104e 100644
--- a/doc/guides/rel_notes/release_19_05.rst
+++ b/doc/guides/rel_notes/release_19_05.rst
@@ -199,6 +199,7 @@ The libraries prepended with a plus sign were incremented in this version.
      librte_gso.so.1
      librte_hash.so.2
      librte_ip_frag.so.1
+     librte_ipsec.so.1
      librte_jobstats.so.1
      librte_kni.so.2
      librte_kvargs.so.1
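The new entry means applications can now resolve the library through its
versioned soname. A quick illustrative check, assuming a DPDK installation
where librte_ipsec was built as a shared library and is on the loader path
(link with -ldl):

#include <dlfcn.h>
#include <stdio.h>

int
main(void)
{
	/* soname exactly as listed in the release notes above */
	void *h = dlopen("librte_ipsec.so.1", RTLD_NOW);

	if (h == NULL) {
		fprintf(stderr, "dlopen: %s\n", dlerror());
		return 1;
	}
	printf("librte_ipsec.so.1 resolved\n");
	dlclose(h);
	return 0;
}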