From patchwork Fri Sep 3 11:22:51 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Radu Nicolau X-Patchwork-Id: 97912 X-Patchwork-Delegate: gakhil@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id A5C91A0C54; Fri, 3 Sep 2021 13:29:03 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 7FE6D410E3; Fri, 3 Sep 2021 13:28:59 +0200 (CEST) Received: from mga18.intel.com (mga18.intel.com [134.134.136.126]) by mails.dpdk.org (Postfix) with ESMTP id D525640E3C for ; Fri, 3 Sep 2021 13:28:57 +0200 (CEST) X-IronPort-AV: E=McAfee;i="6200,9189,10095"; a="206516709" X-IronPort-AV: E=Sophos;i="5.85,265,1624345200"; d="scan'208";a="206516709" Received: from fmsmga003.fm.intel.com ([10.253.24.29]) by orsmga106.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 03 Sep 2021 04:28:56 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.85,265,1624345200"; d="scan'208";a="533996163" Received: from silpixa00400884.ir.intel.com ([10.243.22.82]) by FMSMGA003.fm.intel.com with ESMTP; 03 Sep 2021 04:28:55 -0700 From: Radu Nicolau To: Radu Nicolau , Akhil Goyal Cc: dev@dpdk.org, declan.doherty@intel.com Date: Fri, 3 Sep 2021 12:22:51 +0100 Message-Id: <20210903112257.303961-2-radu.nicolau@intel.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20210903112257.303961-1-radu.nicolau@intel.com> References: <20210903112257.303961-1-radu.nicolau@intel.com> MIME-Version: 1.0 Subject: [dpdk-dev] [PATCH 1/7] examples/ipsec-secgw: add ol_flags support X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Add support for ol_flags to the IPsec GW 
sample app. Signed-off-by: Declan Doherty Signed-off-by: Radu Nicolau --- examples/ipsec-secgw/ipsec-secgw.c | 13 +++++-------- 1 file changed, 5 insertions(+), 8 deletions(-) diff --git a/examples/ipsec-secgw/ipsec-secgw.c b/examples/ipsec-secgw/ipsec-secgw.c index f252d34985..6d516e2221 100644 --- a/examples/ipsec-secgw/ipsec-secgw.c +++ b/examples/ipsec-secgw/ipsec-secgw.c @@ -515,7 +515,7 @@ prepare_traffic(struct rte_mbuf **pkts, struct ipsec_traffic *t, static inline void prepare_tx_pkt(struct rte_mbuf *pkt, uint16_t port, - const struct lcore_conf *qconf) + const struct lcore_conf *qconf __rte_unused) { struct ip *ip; struct rte_ether_hdr *ethhdr; @@ -526,20 +526,17 @@ prepare_tx_pkt(struct rte_mbuf *pkt, uint16_t port, rte_pktmbuf_prepend(pkt, RTE_ETHER_HDR_LEN); if (ip->ip_v == IPVERSION) { - pkt->ol_flags |= qconf->outbound.ipv4_offloads; - pkt->l3_len = sizeof(struct ip); pkt->l2_len = RTE_ETHER_HDR_LEN; - ip->ip_sum = 0; - /* calculate IPv4 cksum in SW */ - if ((pkt->ol_flags & PKT_TX_IP_CKSUM) == 0) + if ((pkt->ol_flags & + (PKT_TX_IP_CKSUM | PKT_TX_OUTER_IP_CKSUM)) == 0) ip->ip_sum = rte_ipv4_cksum((struct rte_ipv4_hdr *)ip); + else + ip->ip_sum = 0; ethhdr->ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4); } else { - pkt->ol_flags |= qconf->outbound.ipv6_offloads; - pkt->l3_len = sizeof(struct ip6_hdr); pkt->l2_len = RTE_ETHER_HDR_LEN; ethhdr->ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6); From patchwork Fri Sep 3 11:22:52 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Radu Nicolau X-Patchwork-Id: 97913 X-Patchwork-Delegate: gakhil@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 57FA4A0C54; Fri, 3 Sep 2021 13:29:09 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) 
with ESMTP id 96D3D40E3C; Fri, 3 Sep 2021 13:29:00 +0200 (CEST) From: Radu Nicolau To: Radu Nicolau , Akhil Goyal Cc: dev@dpdk.org, declan.doherty@intel.com Date: Fri, 3 Sep 2021 12:22:52 +0100 Message-Id: <20210903112257.303961-3-radu.nicolau@intel.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20210903112257.303961-1-radu.nicolau@intel.com> References: <20210903112257.303961-1-radu.nicolau@intel.com> Subject: [dpdk-dev] [PATCH 2/7] examples/ipsec-secgw: add support for NAT-T Add support for IPsec NAT-T to the sample application, in both transport and tunnel mode, for both IPv4 and IPv6.
Signed-off-by: Declan Doherty Signed-off-by: Radu Nicolau --- examples/ipsec-secgw/esp.c | 7 +- examples/ipsec-secgw/ipsec-secgw.c | 3 - examples/ipsec-secgw/ipsec.c | 375 ++++++++++++----------------- examples/ipsec-secgw/ipsec.h | 19 +- examples/ipsec-secgw/sa.c | 240 +++++++++++++----- examples/ipsec-secgw/sad.c | 10 +- examples/ipsec-secgw/sad.h | 20 +- 7 files changed, 365 insertions(+), 309 deletions(-) diff --git a/examples/ipsec-secgw/esp.c b/examples/ipsec-secgw/esp.c index bfa7ff7217..3762d61597 100644 --- a/examples/ipsec-secgw/esp.c +++ b/examples/ipsec-secgw/esp.c @@ -265,9 +265,9 @@ esp_outbound(struct rte_mbuf *m, struct ipsec_sa *sa, RTE_ASSERT(IS_TUNNEL(sa->flags) || IS_TRANSPORT(sa->flags)); - if (likely(IS_IP4_TUNNEL(sa->flags))) + if (likely((IS_TUNNEL(sa->flags) && IS_IP4(sa->flags)))) ip_hdr_len = sizeof(struct ip); - else if (IS_IP6_TUNNEL(sa->flags)) + else if ((IS_TUNNEL(sa->flags) && IS_IP6(sa->flags))) ip_hdr_len = sizeof(struct ip6_hdr); else if (!IS_TRANSPORT(sa->flags)) { RTE_LOG(ERR, IPSEC_ESP, "Unsupported SA flags: 0x%x\n", @@ -308,7 +308,8 @@ esp_outbound(struct rte_mbuf *m, struct ipsec_sa *sa, &sa->src, &sa->dst); esp = (struct rte_esp_hdr *)(ip6 + 1); break; - case TRANSPORT: + case IP4_TRANSPORT: + case IP6_TRANSPORT: new_ip = (uint8_t *)rte_pktmbuf_prepend(m, sizeof(struct rte_esp_hdr) + sa->iv_len); memmove(new_ip, ip4, ip_hdr_len); diff --git a/examples/ipsec-secgw/ipsec-secgw.c b/examples/ipsec-secgw/ipsec-secgw.c index 6d516e2221..46fb49d91e 100644 --- a/examples/ipsec-secgw/ipsec-secgw.c +++ b/examples/ipsec-secgw/ipsec-secgw.c @@ -2299,9 +2299,6 @@ port_init(uint16_t portid, uint64_t req_rx_offloads, uint64_t req_tx_offloads) /* Pre-populate pkt offloads based on capabilities */ qconf->outbound.ipv4_offloads = PKT_TX_IPV4; qconf->outbound.ipv6_offloads = PKT_TX_IPV6; - if (local_port_conf.txmode.offloads & DEV_TX_OFFLOAD_IPV4_CKSUM) - qconf->outbound.ipv4_offloads |= PKT_TX_IP_CKSUM; - tx_queueid++; /* init RX queues */ 
diff --git a/examples/ipsec-secgw/ipsec.c b/examples/ipsec-secgw/ipsec.c index 5b032fecfb..aa68e4f827 100644 --- a/examples/ipsec-secgw/ipsec.c +++ b/examples/ipsec-secgw/ipsec.c @@ -24,7 +24,7 @@ set_ipsec_conf(struct ipsec_sa *sa, struct rte_security_ipsec_xform *ipsec) if (ipsec->mode == RTE_SECURITY_IPSEC_SA_MODE_TUNNEL) { struct rte_security_ipsec_tunnel_param *tunnel = &ipsec->tunnel; - if (IS_IP4_TUNNEL(sa->flags)) { + if (IS_TUNNEL(sa->flags) && IS_IP4(sa->flags)) { tunnel->type = RTE_SECURITY_IPSEC_TUNNEL_IPV4; tunnel->ipv4.ttl = IPDEFTTL; @@ -34,7 +34,7 @@ set_ipsec_conf(struct ipsec_sa *sa, struct rte_security_ipsec_xform *ipsec) memcpy((uint8_t *)&tunnel->ipv4.dst_ip, (uint8_t *)&sa->dst.ip.ip4, 4); - } else if (IS_IP6_TUNNEL(sa->flags)) { + } else if (IS_TUNNEL(sa->flags) && IS_IP6(sa->flags)) { tunnel->type = RTE_SECURITY_IPSEC_TUNNEL_IPV6; tunnel->ipv6.hlimit = IPDEFTTL; @@ -163,262 +163,196 @@ create_inline_session(struct socket_ctx *skt_ctx, struct ipsec_sa *sa, { int32_t ret = 0; struct rte_security_ctx *sec_ctx; + const struct rte_security_capability *sec_cap; struct rte_security_session_conf sess_conf = { .action_type = ips->type, .protocol = RTE_SECURITY_PROTOCOL_IPSEC, {.ipsec = { - .spi = sa->spi, + .spi = htonl(sa->spi), .salt = sa->salt, .options = { 0 }, .replay_win_sz = 0, .direction = sa->direction, - .proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP, - .mode = (sa->flags == IP4_TUNNEL || - sa->flags == IP6_TUNNEL) ? 
- RTE_SECURITY_IPSEC_SA_MODE_TUNNEL : - RTE_SECURITY_IPSEC_SA_MODE_TRANSPORT, + .proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP } }, .crypto_xform = sa->xforms, .userdata = NULL, }; - RTE_LOG_DP(DEBUG, IPSEC, "Create session for SA spi %u on port %u\n", - sa->spi, sa->portid); + if (IS_TRANSPORT(sa->flags)) { + sess_conf.ipsec.mode = RTE_SECURITY_IPSEC_SA_MODE_TRANSPORT; + /** + * TODO: address this in rte_security API + * Use tunnel parameters to pass both transport IP addresses + */ + if (IS_IP4(sa->flags)) { + sess_conf.ipsec.tunnel.type = + RTE_SECURITY_IPSEC_TUNNEL_IPV4; - if (ips->type == RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO) { - struct rte_flow_error err; - const struct rte_security_capability *sec_cap; - int ret = 0; - - sec_ctx = (struct rte_security_ctx *) - rte_eth_dev_get_sec_ctx( - sa->portid); - if (sec_ctx == NULL) { - RTE_LOG(ERR, IPSEC, - " rte_eth_dev_get_sec_ctx failed\n"); - return -1; - } + sess_conf.ipsec.tunnel.ipv4.src_ip.s_addr = + sa->src.ip.ip4; + sess_conf.ipsec.tunnel.ipv4.dst_ip.s_addr = + sa->dst.ip.ip4; + } else if (IS_IP6(sa->flags)) { + sess_conf.ipsec.tunnel.type = + RTE_SECURITY_IPSEC_TUNNEL_IPV6; - ips->security.ses = rte_security_session_create(sec_ctx, - &sess_conf, skt_ctx->session_pool, - skt_ctx->session_priv_pool); - if (ips->security.ses == NULL) { - RTE_LOG(ERR, IPSEC, - "SEC Session init failed: err: %d\n", ret); - return -1; + memcpy(sess_conf.ipsec.tunnel.ipv6.src_addr.s6_addr, + sa->src.ip.ip6.ip6_b, 16); + memcpy(sess_conf.ipsec.tunnel.ipv6.dst_addr.s6_addr, + sa->dst.ip.ip6.ip6_b, 16); } + } else if (IS_TUNNEL(sa->flags)) { + sess_conf.ipsec.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL; - sec_cap = rte_security_capabilities_get(sec_ctx); - - /* iterate until ESP tunnel*/ - while (sec_cap->action != RTE_SECURITY_ACTION_TYPE_NONE) { - if (sec_cap->action == ips->type && - sec_cap->protocol == - RTE_SECURITY_PROTOCOL_IPSEC && - sec_cap->ipsec.mode == - RTE_SECURITY_IPSEC_SA_MODE_TUNNEL && - sec_cap->ipsec.direction == 
sa->direction) - break; - sec_cap++; - } + if (IS_IP4(sa->flags)) { + sess_conf.ipsec.tunnel.type = + RTE_SECURITY_IPSEC_TUNNEL_IPV4; - if (sec_cap->action == RTE_SECURITY_ACTION_TYPE_NONE) { - RTE_LOG(ERR, IPSEC, - "No suitable security capability found\n"); + sess_conf.ipsec.tunnel.ipv4.src_ip.s_addr = + sa->src.ip.ip4; + sess_conf.ipsec.tunnel.ipv4.dst_ip.s_addr = + sa->dst.ip.ip4; + } else if (IS_IP6(sa->flags)) { + sess_conf.ipsec.tunnel.type = + RTE_SECURITY_IPSEC_TUNNEL_IPV6; + + memcpy(sess_conf.ipsec.tunnel.ipv6.src_addr.s6_addr, + sa->src.ip.ip6.ip6_b, 16); + memcpy(sess_conf.ipsec.tunnel.ipv6.dst_addr.s6_addr, + sa->dst.ip.ip6.ip6_b, 16); + } else { + RTE_LOG(ERR, IPSEC, "invalid tunnel type\n"); return -1; } + } - ips->security.ol_flags = sec_cap->ol_flags; - ips->security.ctx = sec_ctx; - sa->pattern[0].type = RTE_FLOW_ITEM_TYPE_ETH; + if (IS_NATT_UDP_TUNNEL(sa->flags)) { + sess_conf.ipsec.options.udp_encap = 1; - if (IS_IP6(sa->flags)) { - sa->pattern[1].mask = &rte_flow_item_ipv6_mask; - sa->pattern[1].type = RTE_FLOW_ITEM_TYPE_IPV6; - sa->pattern[1].spec = &sa->ipv6_spec; + sess_conf.ipsec.udp.sport = htons(sa->udp.sport); + sess_conf.ipsec.udp.dport = htons(sa->udp.dport); + } - memcpy(sa->ipv6_spec.hdr.dst_addr, - sa->dst.ip.ip6.ip6_b, 16); - memcpy(sa->ipv6_spec.hdr.src_addr, - sa->src.ip.ip6.ip6_b, 16); - } else if (IS_IP4(sa->flags)) { - sa->pattern[1].mask = &rte_flow_item_ipv4_mask; - sa->pattern[1].type = RTE_FLOW_ITEM_TYPE_IPV4; - sa->pattern[1].spec = &sa->ipv4_spec; - - sa->ipv4_spec.hdr.dst_addr = sa->dst.ip.ip4; - sa->ipv4_spec.hdr.src_addr = sa->src.ip.ip4; - } + struct rte_flow_action_security action_security; + struct rte_flow_error err; + + RTE_LOG_DP(DEBUG, IPSEC, "Create session for SA spi %u on port %u\n", + sa->spi, sa->portid); + + sec_ctx = (struct rte_security_ctx *) + rte_eth_dev_get_sec_ctx(sa->portid); + if (sec_ctx == NULL) { + RTE_LOG(ERR, IPSEC, + " rte_eth_dev_get_sec_ctx failed\n"); + return -1; + } + + 
ips->security.ses = rte_security_session_create(sec_ctx, + &sess_conf, skt_ctx->session_pool, + skt_ctx->session_priv_pool); + if (ips->security.ses == NULL) { + RTE_LOG(ERR, IPSEC, + "SEC Session init failed: err: %d\n", ret); + return -1; + } + + ips->security.ctx = sec_ctx; + + sec_cap = rte_security_capabilities_get(sec_ctx); + + ips->security.ol_flags = sec_cap->ol_flags; + + if (sa->direction == RTE_SECURITY_IPSEC_SA_DIR_EGRESS) + return 0; + + sa->pattern[0].type = RTE_FLOW_ITEM_TYPE_ETH; + sa->pattern[0].spec = NULL; + + if (IS_IP6(sa->flags)) { + memcpy(sa->ipv6_spec.hdr.dst_addr, + sa->dst.ip.ip6.ip6_b, 16); + memcpy(sa->ipv6_spec.hdr.src_addr, + sa->src.ip.ip6.ip6_b, 16); + + sa->pattern[1].mask = &rte_flow_item_ipv6_mask; + sa->pattern[1].type = RTE_FLOW_ITEM_TYPE_IPV6; + sa->pattern[1].spec = &sa->ipv6_spec; + + } else if (IS_IP4(sa->flags)) { + sa->ipv4_spec.hdr.dst_addr = sa->dst.ip.ip4; + sa->ipv4_spec.hdr.src_addr = sa->src.ip.ip4; + + sa->pattern[1].mask = &rte_flow_item_ipv4_mask; + sa->pattern[1].type = RTE_FLOW_ITEM_TYPE_IPV4; + sa->pattern[1].spec = &sa->ipv4_spec; + } + + if (IS_NATT_UDP_TUNNEL(sa->flags)) { + + sa->udp_spec.hdr.dst_port = rte_cpu_to_be_16(sa->udp.dport); + sa->udp_spec.hdr.src_port = rte_cpu_to_be_16(sa->udp.sport); + + sa->pattern[2].mask = &rte_flow_item_udp_mask; + sa->pattern[2].type = RTE_FLOW_ITEM_TYPE_UDP; + sa->pattern[2].spec = &sa->udp_spec; + + sa->esp_spec.hdr.spi = rte_cpu_to_be_32(sa->spi); + + sa->pattern[3].type = RTE_FLOW_ITEM_TYPE_ESP; + sa->pattern[3].spec = &sa->esp_spec; + sa->pattern[3].mask = &rte_flow_item_esp_mask; + + sa->pattern[4].type = RTE_FLOW_ITEM_TYPE_END; + } else { + sa->esp_spec.hdr.spi = rte_cpu_to_be_32(sa->spi); sa->pattern[2].type = RTE_FLOW_ITEM_TYPE_ESP; sa->pattern[2].spec = &sa->esp_spec; sa->pattern[2].mask = &rte_flow_item_esp_mask; - sa->esp_spec.hdr.spi = rte_cpu_to_be_32(sa->spi); sa->pattern[3].type = RTE_FLOW_ITEM_TYPE_END; + } - sa->action[0].type = 
RTE_FLOW_ACTION_TYPE_SECURITY; - sa->action[0].conf = ips->security.ses; - - sa->action[1].type = RTE_FLOW_ACTION_TYPE_END; - - sa->attr.egress = (sa->direction == - RTE_SECURITY_IPSEC_SA_DIR_EGRESS); - sa->attr.ingress = (sa->direction == - RTE_SECURITY_IPSEC_SA_DIR_INGRESS); - if (sa->attr.ingress) { - uint8_t rss_key[40]; - struct rte_eth_rss_conf rss_conf = { - .rss_key = rss_key, - .rss_key_len = 40, - }; - struct rte_eth_dev_info dev_info; - uint16_t queue[RTE_MAX_QUEUES_PER_PORT]; - struct rte_flow_action_rss action_rss; - unsigned int i; - unsigned int j; - - /* Don't create flow if default flow is created */ - if (flow_info_tbl[sa->portid].rx_def_flow) - return 0; - - ret = rte_eth_dev_info_get(sa->portid, &dev_info); - if (ret != 0) { - RTE_LOG(ERR, IPSEC, - "Error during getting device (port %u) info: %s\n", - sa->portid, strerror(-ret)); - return ret; - } - - sa->action[2].type = RTE_FLOW_ACTION_TYPE_END; - /* Try RSS. */ - sa->action[1].type = RTE_FLOW_ACTION_TYPE_RSS; - sa->action[1].conf = &action_rss; - ret = rte_eth_dev_rss_hash_conf_get(sa->portid, - &rss_conf); - if (ret != 0) { - RTE_LOG(ERR, IPSEC, - "rte_eth_dev_rss_hash_conf_get:ret=%d\n", - ret); - return -1; - } - for (i = 0, j = 0; i < dev_info.nb_rx_queues; ++i) - queue[j++] = i; - - action_rss = (struct rte_flow_action_rss){ - .types = rss_conf.rss_hf, - .key_len = rss_conf.rss_key_len, - .queue_num = j, - .key = rss_key, - .queue = queue, - }; - ret = rte_flow_validate(sa->portid, &sa->attr, - sa->pattern, sa->action, - &err); - if (!ret) - goto flow_create; - /* Try Queue. */ - sa->action[1].type = RTE_FLOW_ACTION_TYPE_QUEUE; - sa->action[1].conf = - &(struct rte_flow_action_queue){ - .index = 0, - }; - ret = rte_flow_validate(sa->portid, &sa->attr, - sa->pattern, sa->action, - &err); - /* Try End. 
*/ - sa->action[1].type = RTE_FLOW_ACTION_TYPE_END; - sa->action[1].conf = NULL; - ret = rte_flow_validate(sa->portid, &sa->attr, - sa->pattern, sa->action, - &err); - if (ret) - goto flow_create_failure; - } else if (sa->attr.egress && - (ips->security.ol_flags & - RTE_SECURITY_TX_HW_TRAILER_OFFLOAD)) { - sa->action[1].type = - RTE_FLOW_ACTION_TYPE_PASSTHRU; - sa->action[2].type = - RTE_FLOW_ACTION_TYPE_END; - } -flow_create: - sa->flow = rte_flow_create(sa->portid, - &sa->attr, sa->pattern, sa->action, &err); - if (sa->flow == NULL) { -flow_create_failure: - RTE_LOG(ERR, IPSEC, - "Failed to create ipsec flow msg: %s\n", - err.message); - return -1; - } - } else if (ips->type == RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL) { - const struct rte_security_capability *sec_cap; + action_security.security_session = ips->security.ses; - sec_ctx = (struct rte_security_ctx *) - rte_eth_dev_get_sec_ctx(sa->portid); + sa->action[0].type = RTE_FLOW_ACTION_TYPE_SECURITY; + sa->action[0].conf = &action_security; - if (sec_ctx == NULL) { - RTE_LOG(ERR, IPSEC, - "Ethernet device doesn't have security features registered\n"); - return -1; - } - /* Set IPsec parameters in conf */ - set_ipsec_conf(sa, &(sess_conf.ipsec)); - - /* Save SA as userdata for the security session. When - * the packet is received, this userdata will be - * retrieved using the metadata from the packet. - * - * The PMD is expected to set similar metadata for other - * operations, like rte_eth_event, which are tied to - * security session. In such cases, the userdata could - * be obtained to uniquely identify the security - * parameters denoted. 
- */ + sa->action[1].type = RTE_FLOW_ACTION_TYPE_END; + sa->action[1].conf = NULL; - sess_conf.userdata = (void *) sa; + sa->attr.egress = (sa->direction == + RTE_SECURITY_IPSEC_SA_DIR_EGRESS); - ips->security.ses = rte_security_session_create(sec_ctx, - &sess_conf, skt_ctx->session_pool, - skt_ctx->session_priv_pool); - if (ips->security.ses == NULL) { - RTE_LOG(ERR, IPSEC, - "SEC Session init failed: err: %d\n", ret); - return -1; - } + if (sa->attr.egress) + return 0; - sec_cap = rte_security_capabilities_get(sec_ctx); - if (sec_cap == NULL) { - RTE_LOG(ERR, IPSEC, - "No capabilities registered\n"); - return -1; - } + sa->attr.ingress = (sa->direction == + RTE_SECURITY_IPSEC_SA_DIR_INGRESS); - /* iterate until ESP tunnel*/ - while (sec_cap->action != - RTE_SECURITY_ACTION_TYPE_NONE) { - if (sec_cap->action == ips->type && - sec_cap->protocol == - RTE_SECURITY_PROTOCOL_IPSEC && - sec_cap->ipsec.mode == - sess_conf.ipsec.mode && - sec_cap->ipsec.direction == sa->direction) - break; - sec_cap++; - } - if (sec_cap->action == RTE_SECURITY_ACTION_TYPE_NONE) { - RTE_LOG(ERR, IPSEC, - "No suitable security capability found\n"); - return -1; - } + ret = rte_flow_validate(sa->portid, + &sa->attr, + sa->pattern, + sa->action, + &err); + if (ret) + goto flow_create_failure; - ips->security.ol_flags = sec_cap->ol_flags; - ips->security.ctx = sec_ctx; + sa->flow = rte_flow_create(sa->portid, &sa->attr, sa->pattern, + sa->action, &err); + if (sa->flow == NULL) { +flow_create_failure: + RTE_LOG(ERR, IPSEC, + "Failed to create ipsec flow msg: %s\n", + err.message); + return -1; } + sa->cdev_id_qp = 0; + return 0; } @@ -427,23 +361,28 @@ create_ipsec_esp_flow(struct ipsec_sa *sa) { int ret = 0; struct rte_flow_error err; + if (sa->direction == RTE_SECURITY_IPSEC_SA_DIR_EGRESS) { RTE_LOG(ERR, IPSEC, "No Flow director rule for Egress traffic\n"); return -1; } - if (sa->flags == TRANSPORT) { + + if (IS_TRANSPORT(sa->flags)) { RTE_LOG(ERR, IPSEC, "No Flow director rule for transport 
mode\n"); return -1; } + sa->action[0].type = RTE_FLOW_ACTION_TYPE_QUEUE; sa->pattern[0].type = RTE_FLOW_ITEM_TYPE_ETH; sa->action[0].conf = &(struct rte_flow_action_queue) { .index = sa->fdir_qid, }; + sa->attr.egress = 0; sa->attr.ingress = 1; + if (IS_IP6(sa->flags)) { sa->pattern[1].mask = &rte_flow_item_ipv6_mask; sa->pattern[1].type = RTE_FLOW_ITEM_TYPE_IPV6; diff --git a/examples/ipsec-secgw/ipsec.h b/examples/ipsec-secgw/ipsec.h index ae5058de27..b496b4a936 100644 --- a/examples/ipsec-secgw/ipsec.h +++ b/examples/ipsec-secgw/ipsec.h @@ -122,11 +122,15 @@ struct ipsec_sa { uint16_t flags; #define IP4_TUNNEL (1 << 0) #define IP6_TUNNEL (1 << 1) -#define TRANSPORT (1 << 2) +#define NATT_UDP_TUNNEL (1 << 2) #define IP4_TRANSPORT (1 << 3) #define IP6_TRANSPORT (1 << 4) struct ip_addr src; struct ip_addr dst; + struct { + uint16_t sport; + uint16_t dport; + } udp; uint8_t cipher_key[MAX_KEY_SIZE]; uint16_t cipher_key_len; uint8_t auth_key[MAX_KEY_SIZE]; @@ -142,7 +146,7 @@ struct ipsec_sa { uint8_t fdir_qid; uint8_t fdir_flag; -#define MAX_RTE_FLOW_PATTERN (4) +#define MAX_RTE_FLOW_PATTERN (5) #define MAX_RTE_FLOW_ACTIONS (3) struct rte_flow_item pattern[MAX_RTE_FLOW_PATTERN]; struct rte_flow_action action[MAX_RTE_FLOW_ACTIONS]; @@ -151,6 +155,7 @@ struct ipsec_sa { struct rte_flow_item_ipv4 ipv4_spec; struct rte_flow_item_ipv6 ipv6_spec; }; + struct rte_flow_item_udp udp_spec; struct rte_flow_item_esp esp_spec; struct rte_flow *flow; struct rte_security_session_conf sess_conf; @@ -181,18 +186,16 @@ struct ipsec_mbuf_metadata { uint8_t buf[32]; } __rte_cache_aligned; -#define IS_TRANSPORT(flags) ((flags) & TRANSPORT) +#define IS_TRANSPORT(flags) ((flags) & (IP4_TRANSPORT | IP6_TRANSPORT)) #define IS_TUNNEL(flags) ((flags) & (IP4_TUNNEL | IP6_TUNNEL)) +#define IS_NATT_UDP_TUNNEL(flags) ((flags) & NATT_UDP_TUNNEL) + #define IS_IP4(flags) ((flags) & (IP4_TUNNEL | IP4_TRANSPORT)) #define IS_IP6(flags) ((flags) & (IP6_TUNNEL | IP6_TRANSPORT)) -#define 
IS_IP4_TUNNEL(flags) ((flags) & IP4_TUNNEL) - -#define IS_IP6_TUNNEL(flags) ((flags) & IP6_TUNNEL) - /* * Macro for getting ipsec_sa flags statuses without version of protocol * used for transport (IP4_TRANSPORT and IP6_TRANSPORT flags). @@ -200,7 +203,7 @@ struct ipsec_mbuf_metadata { #define WITHOUT_TRANSPORT_VERSION(flags) \ ((flags) & (IP4_TUNNEL | \ IP6_TUNNEL | \ - TRANSPORT)) + (IP4_TRANSPORT | IP6_TRANSPORT))) struct cdev_qp { uint16_t id; diff --git a/examples/ipsec-secgw/sa.c b/examples/ipsec-secgw/sa.c index 17a28556c9..d5943f8cdc 100644 --- a/examples/ipsec-secgw/sa.c +++ b/examples/ipsec-secgw/sa.c @@ -17,6 +17,7 @@ #include #include #include +#include #include #include #include @@ -339,13 +340,28 @@ parse_sa_tokens(char **tokens, uint32_t n_tokens, if (strcmp(tokens[ti], "ipv4-tunnel") == 0) { sa_cnt->nb_v4++; rule->flags = IP4_TUNNEL; + } else if (strcmp(tokens[ti], "ipv4-udp-tunnel") == 0) { + sa_cnt->nb_v4++; + rule->flags = IP4_TUNNEL | NATT_UDP_TUNNEL; + rule->udp.sport = 0; + rule->udp.dport = 4500; } else if (strcmp(tokens[ti], "ipv6-tunnel") == 0) { sa_cnt->nb_v6++; rule->flags = IP6_TUNNEL; + } else if (strcmp(tokens[ti], "ipv6-udp-tunnel") == 0) { + sa_cnt->nb_v6++; + rule->flags = IP6_TUNNEL | NATT_UDP_TUNNEL; } else if (strcmp(tokens[ti], "transport") == 0) { sa_cnt->nb_v4++; sa_cnt->nb_v6++; - rule->flags = TRANSPORT; + rule->flags = IP4_TRANSPORT | IP6_TRANSPORT; + } else if (strcmp(tokens[ti], "udp-transport") == 0) { + sa_cnt->nb_v4++; + sa_cnt->nb_v6++; + rule->flags = IP4_TRANSPORT | IP6_TRANSPORT | + NATT_UDP_TUNNEL; + rule->udp.sport = 0; + rule->udp.dport = 4500; } else { APP_CHECK(0, status, "unrecognized " "input \"%s\"", tokens[ti]); @@ -548,7 +564,7 @@ parse_sa_tokens(char **tokens, uint32_t n_tokens, if (status->status < 0) return; - if (IS_IP4_TUNNEL(rule->flags)) { + if (IS_IP4(rule->flags) && IS_TUNNEL(rule->flags)) { struct in_addr ip; APP_CHECK(parse_ipv4_addr(tokens[ti], @@ -560,7 +576,8 @@ parse_sa_tokens(char 
**tokens, uint32_t n_tokens, return; rule->src.ip.ip4 = rte_bswap32( (uint32_t)ip.s_addr); - } else if (IS_IP6_TUNNEL(rule->flags)) { + } else if (IS_IP6(rule->flags) && + IS_TUNNEL(rule->flags)) { struct in6_addr ip; APP_CHECK(parse_ipv6_addr(tokens[ti], &ip, @@ -591,7 +608,7 @@ parse_sa_tokens(char **tokens, uint32_t n_tokens, if (status->status < 0) return; - if (IS_IP4_TUNNEL(rule->flags)) { + if (IS_IP4(rule->flags) && IS_TUNNEL(rule->flags)) { struct in_addr ip; APP_CHECK(parse_ipv4_addr(tokens[ti], @@ -603,7 +620,8 @@ parse_sa_tokens(char **tokens, uint32_t n_tokens, return; rule->dst.ip.ip4 = rte_bswap32( (uint32_t)ip.s_addr); - } else if (IS_IP6_TUNNEL(rule->flags)) { + } else if (IS_IP6(rule->flags) && + IS_TUNNEL(rule->flags)) { struct in6_addr ip; APP_CHECK(parse_ipv6_addr(tokens[ti], &ip, @@ -832,19 +850,19 @@ print_one_sa_rule(const struct ipsec_sa *sa, int inbound) const struct rte_ipsec_session *ips; const struct rte_ipsec_session *fallback_ips; - printf("\tspi_%s(%3u):", inbound?"in":"out", sa->spi); + printf("\tspi_%s (%3u):", inbound?"in":"out", sa->spi); for (i = 0; i < RTE_DIM(cipher_algos); i++) { if (cipher_algos[i].algo == sa->cipher_algo && cipher_algos[i].key_len == sa->cipher_key_len) { - printf("%s ", cipher_algos[i].keyword); + printf(" %s", cipher_algos[i].keyword); break; } } for (i = 0; i < RTE_DIM(auth_algos); i++) { if (auth_algos[i].algo == sa->auth_algo) { - printf("%s ", auth_algos[i].keyword); + printf(" %s", auth_algos[i].keyword); break; } } @@ -852,23 +870,29 @@ print_one_sa_rule(const struct ipsec_sa *sa, int inbound) for (i = 0; i < RTE_DIM(aead_algos); i++) { if (aead_algos[i].algo == sa->aead_algo && aead_algos[i].key_len-4 == sa->cipher_key_len) { - printf("%s ", aead_algos[i].keyword); + printf(" %s ", aead_algos[i].keyword); break; } } - printf("mode:"); + printf(", mode:"); + + if (IS_IP4(sa->flags) && IS_TUNNEL(sa->flags)) { - switch (WITHOUT_TRANSPORT_VERSION(sa->flags)) { - case IP4_TUNNEL: - printf("IP4Tunnel "); 
+ if (IS_NATT_UDP_TUNNEL(sa->flags)) + printf("IP4Tunnel NAT-T ("); + else + printf("IP4Tunnel ("); uint32_t_to_char(sa->src.ip.ip4, &a, &b, &c, &d); printf("%hhu.%hhu.%hhu.%hhu ", d, c, b, a); uint32_t_to_char(sa->dst.ip.ip4, &a, &b, &c, &d); printf("%hhu.%hhu.%hhu.%hhu", d, c, b, a); - break; - case IP6_TUNNEL: - printf("IP6Tunnel "); + } else if (IS_IP6(sa->flags) && IS_TUNNEL(sa->flags)) { + + if (IS_NATT_UDP_TUNNEL(sa->flags)) + printf("IP6Tunnel NAT-T ("); + else + printf("IP6Tunnel ("); for (i = 0; i < 16; i++) { if (i % 2 && i != 15) printf("%.2x:", sa->src.ip.ip6.ip6_b[i]); @@ -882,14 +906,15 @@ print_one_sa_rule(const struct ipsec_sa *sa, int inbound) else printf("%.2x", sa->dst.ip.ip6.ip6_b[i]); } - break; - case TRANSPORT: - printf("Transport "); - break; + } else if (IS_TRANSPORT(sa->flags)) { + if (IS_NATT_UDP_TUNNEL(sa->flags)) + printf("Transport NAT-T ("); + else + printf("Transport ("); } ips = &sa->sessions[IPSEC_SESSION_PRIMARY]; - printf(" type:"); + printf("), type: "); switch (ips->type) { case RTE_SECURITY_ACTION_TYPE_NONE: printf("no-offload "); @@ -1053,7 +1078,11 @@ sa_add_address_inline_crypto(struct ipsec_sa *sa) protocol = get_spi_proto(sa->spi, sa->direction, ip_addr, mask); if (protocol < 0) return protocol; - else if (protocol == IPPROTO_IPIP) { + + /* Clear transport bits before selecting IP4/IP6 */ + sa->flags &= ~(IP4_TRANSPORT | IP6_TRANSPORT); + + if (protocol == IPPROTO_IPIP) { sa->flags |= IP4_TRANSPORT; if (mask[0] == IP4_FULL_MASK && mask[1] == IP4_FULL_MASK && @@ -1131,12 +1160,7 @@ sa_add_rules(struct sa_ctx *sa_ctx, const struct ipsec_sa entries[], return -EINVAL; } - switch (WITHOUT_TRANSPORT_VERSION(sa->flags)) { - case IP4_TUNNEL: - sa->src.ip.ip4 = rte_cpu_to_be_32(sa->src.ip.ip4); - sa->dst.ip.ip4 = rte_cpu_to_be_32(sa->dst.ip.ip4); - break; - case TRANSPORT: + if (IS_TRANSPORT(sa->flags)) { if (ips->type == RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO) { inline_status = @@ -1144,11 +1168,25 @@ sa_add_rules(struct sa_ctx 
*sa_ctx, const struct ipsec_sa entries[], if (inline_status < 0) return inline_status; } - break; + } else if (IS_TUNNEL(sa->flags)) { + if (IS_IP4(sa->flags)) { + sa->src.ip.ip4 = + rte_cpu_to_be_32(sa->src.ip.ip4); + sa->dst.ip.ip4 = + rte_cpu_to_be_32(sa->dst.ip.ip4); + } + } - if (sa->aead_algo == RTE_CRYPTO_AEAD_AES_GCM) { - iv_length = 12; + + if (sa->aead_algo == RTE_CRYPTO_AEAD_AES_GCM || + sa->aead_algo == RTE_CRYPTO_AEAD_AES_CCM || + sa->aead_algo == RTE_CRYPTO_AEAD_CHACHA20_POLY1305) { + + if (ips->type == RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO) + iv_length = 8; + else + iv_length = 12; sa_ctx->xf[idx].a.type = RTE_CRYPTO_SYM_XFORM_AEAD; sa_ctx->xf[idx].a.aead.algo = sa->aead_algo; @@ -1285,9 +1323,21 @@ fill_ipsec_app_sa_prm(struct rte_ipsec_sa_prm *prm, prm->ipsec_xform.replay_win_sz = app_prm->window_size; } +struct udp_ipv4_tunnel { + struct rte_ipv4_hdr v4; + struct rte_udp_hdr udp; +} __rte_packed; + +struct udp_ipv6_tunnel { + struct rte_ipv6_hdr v6; + struct rte_udp_hdr udp; +} __rte_packed; + static int fill_ipsec_sa_prm(struct rte_ipsec_sa_prm *prm, const struct ipsec_sa *ss, - const struct rte_ipv4_hdr *v4, struct rte_ipv6_hdr *v6) + const struct rte_ipv4_hdr *v4, struct rte_ipv6_hdr *v6, + const struct udp_ipv4_tunnel *udp_ipv4, + const struct udp_ipv6_tunnel *udp_ipv6) { int32_t rc; @@ -1311,22 +1361,49 @@ fill_ipsec_sa_prm(struct rte_ipsec_sa_prm *prm, const struct ipsec_sa *ss, prm->ipsec_xform.mode = (IS_TRANSPORT(ss->flags)) ? RTE_SECURITY_IPSEC_SA_MODE_TRANSPORT : RTE_SECURITY_IPSEC_SA_MODE_TUNNEL; + prm->ipsec_xform.options.udp_encap = + (IS_NATT_UDP_TUNNEL(ss->flags)) ? 
1 : 0; prm->ipsec_xform.options.ecn = 1; prm->ipsec_xform.options.copy_dscp = 1; - if (IS_IP4_TUNNEL(ss->flags)) { - prm->ipsec_xform.tunnel.type = RTE_SECURITY_IPSEC_TUNNEL_IPV4; - prm->tun.hdr_len = sizeof(*v4); - prm->tun.next_proto = rc; - prm->tun.hdr = v4; - } else if (IS_IP6_TUNNEL(ss->flags)) { - prm->ipsec_xform.tunnel.type = RTE_SECURITY_IPSEC_TUNNEL_IPV6; - prm->tun.hdr_len = sizeof(*v6); - prm->tun.next_proto = rc; - prm->tun.hdr = v6; - } else { + if (IS_TRANSPORT(ss->flags)) { /* transport mode */ prm->trs.proto = rc; + } else if (IS_TUNNEL(ss->flags)) { + prm->tun.hdr_l3_off = 0; + + /* tunnel mode */ + if (IS_IP4(ss->flags)) { + prm->ipsec_xform.tunnel.type = + RTE_SECURITY_IPSEC_TUNNEL_IPV4; + prm->tun.next_proto = rc; + prm->tun.hdr_l3_len = sizeof(*v4); + + if (IS_NATT_UDP_TUNNEL(ss->flags)) { + prm->tun.hdr_len = sizeof(*udp_ipv4); + prm->tun.hdr = udp_ipv4; + + } else { + prm->tun.hdr_len = sizeof(*v4); + prm->tun.hdr = v4; + } + + } else if (IS_IP6(ss->flags)) { + prm->ipsec_xform.tunnel.type = + RTE_SECURITY_IPSEC_TUNNEL_IPV6; + prm->tun.next_proto = rc; + prm->tun.hdr_l3_len = sizeof(*v6); + + if (IS_NATT_UDP_TUNNEL(ss->flags)) { + + prm->tun.hdr_len = sizeof(*udp_ipv6); + prm->tun.hdr = udp_ipv6; + + } else { + prm->tun.hdr_len = sizeof(*v6); + prm->tun.hdr = v6; + } + } } /* setup crypto section */ @@ -1362,25 +1439,66 @@ ipsec_sa_init(struct ipsec_sa *lsa, struct rte_ipsec_sa *sa, uint32_t sa_size) int rc; struct rte_ipsec_sa_prm prm; struct rte_ipsec_session *ips; - struct rte_ipv4_hdr v4 = { - .version_ihl = IPVERSION << 4 | - sizeof(v4) / RTE_IPV4_IHL_MULTIPLIER, - .time_to_live = IPDEFTTL, - .next_proto_id = IPPROTO_ESP, - .src_addr = lsa->src.ip.ip4, - .dst_addr = lsa->dst.ip.ip4, - }; - struct rte_ipv6_hdr v6 = { - .vtc_flow = htonl(IP6_VERSION << 28), - .proto = IPPROTO_ESP, - }; - - if (IS_IP6_TUNNEL(lsa->flags)) { - memcpy(v6.src_addr, lsa->src.ip.ip6.ip6_b, sizeof(v6.src_addr)); - memcpy(v6.dst_addr, lsa->dst.ip.ip6.ip6_b, 
sizeof(v6.dst_addr)); + struct rte_ipv4_hdr v4; + struct rte_ipv6_hdr v6; + struct udp_ipv4_tunnel udp_ipv4; + struct udp_ipv6_tunnel udp_ipv6; + + + if (IS_TUNNEL(lsa->flags) && IS_NATT_UDP_TUNNEL(lsa->flags)) { + if (IS_IP4(lsa->flags)) { + + udp_ipv4.v4.version_ihl = IPVERSION << 4 | sizeof(v4) / + RTE_IPV4_IHL_MULTIPLIER; + udp_ipv4.v4.time_to_live = IPDEFTTL; + udp_ipv4.v4.next_proto_id = IPPROTO_UDP; + udp_ipv4.v4.src_addr = lsa->src.ip.ip4; + udp_ipv4.v4.dst_addr = lsa->dst.ip.ip4; + + udp_ipv4.udp.src_port = + rte_cpu_to_be_16(lsa->udp.sport); + udp_ipv4.udp.dst_port = + rte_cpu_to_be_16(lsa->udp.dport); + + } else if (IS_IP6(lsa->flags)) { + + udp_ipv6.v6.vtc_flow = htonl(IP6_VERSION << 28), + udp_ipv6.v6.proto = IPPROTO_UDP, + memcpy(udp_ipv6.v6.src_addr, lsa->src.ip.ip6.ip6_b, + sizeof(udp_ipv6.v6.src_addr)); + memcpy(udp_ipv6.v6.dst_addr, lsa->dst.ip.ip6.ip6_b, + sizeof(udp_ipv6.v6.dst_addr)); + + udp_ipv6.udp.src_port = + rte_cpu_to_be_16(lsa->udp.sport); + udp_ipv6.udp.dst_port = + rte_cpu_to_be_16(lsa->udp.dport); + } + + } else if (IS_TUNNEL(lsa->flags)) { + + if (IS_IP4(lsa->flags)) { + v4.version_ihl = IPVERSION << 4 | sizeof(v4) / + RTE_IPV4_IHL_MULTIPLIER; + v4.time_to_live = IPDEFTTL; + v4.next_proto_id = IPPROTO_ESP; + v4.src_addr = lsa->src.ip.ip4; + v4.dst_addr = lsa->dst.ip.ip4; + + } else if (IS_IP6(lsa->flags)) { + + v6.vtc_flow = htonl(IP6_VERSION << 28), + v6.proto = IPPROTO_ESP, + memcpy(v6.src_addr, lsa->src.ip.ip6.ip6_b, + sizeof(v6.src_addr)); + memcpy(v6.dst_addr, lsa->dst.ip.ip6.ip6_b, + sizeof(v6.dst_addr)); + + } + } - rc = fill_ipsec_sa_prm(&prm, lsa, &v4, &v6); + rc = fill_ipsec_sa_prm(&prm, lsa, &v4, &v6, &udp_ipv4, &udp_ipv6); if (rc == 0) rc = rte_ipsec_sa_init(sa, &prm, sa_size); if (rc < 0) @@ -1415,7 +1533,7 @@ ipsec_satbl_init(struct sa_ctx *ctx, uint32_t nb_ent, int32_t socket) /* determine SA size */ idx = 0; - fill_ipsec_sa_prm(&prm, ctx->sa + idx, NULL, NULL); + fill_ipsec_sa_prm(&prm, ctx->sa + idx, NULL, NULL, 
NULL, NULL); sz = rte_ipsec_sa_size(&prm); if (sz < 0) { RTE_LOG(ERR, IPSEC, "%s(%p, %u, %d): " diff --git a/examples/ipsec-secgw/sad.c b/examples/ipsec-secgw/sad.c index 5b2c0e6792..191fc79faf 100644 --- a/examples/ipsec-secgw/sad.c +++ b/examples/ipsec-secgw/sad.c @@ -25,8 +25,8 @@ ipsec_sad_add(struct ipsec_sad *sad, struct ipsec_sa *sa) /* spi field is common for ipv4 and ipv6 key types */ key.v4.spi = rte_cpu_to_be_32(sa->spi); lookup_key[0] = &key; - switch (WITHOUT_TRANSPORT_VERSION(sa->flags)) { - case IP4_TUNNEL: + + if (IS_IP4(sa->flags) && IS_TUNNEL(sa->flags)) { rte_ipsec_sad_lookup(sad->sad_v4, lookup_key, &tmp, 1); if (tmp != NULL) return -EEXIST; @@ -35,8 +35,7 @@ ipsec_sad_add(struct ipsec_sad *sad, struct ipsec_sa *sa) RTE_IPSEC_SAD_SPI_ONLY, sa); if (ret != 0) return ret; - break; - case IP6_TUNNEL: + } else if (IS_IP6(sa->flags) && IS_TUNNEL(sa->flags)) { rte_ipsec_sad_lookup(sad->sad_v6, lookup_key, &tmp, 1); if (tmp != NULL) return -EEXIST; @@ -45,8 +44,7 @@ ipsec_sad_add(struct ipsec_sad *sad, struct ipsec_sa *sa) RTE_IPSEC_SAD_SPI_ONLY, sa); if (ret != 0) return ret; - break; - case TRANSPORT: + } else if (IS_TRANSPORT(sa->flags)) { if (sp4_spi_present(sa->spi, 1, NULL, NULL) >= 0) { rte_ipsec_sad_lookup(sad->sad_v4, lookup_key, &tmp, 1); if (tmp != NULL) diff --git a/examples/ipsec-secgw/sad.h b/examples/ipsec-secgw/sad.h index 3224b6252c..6813a47942 100644 --- a/examples/ipsec-secgw/sad.h +++ b/examples/ipsec-secgw/sad.h @@ -29,16 +29,16 @@ static inline int cmp_sa_key(struct ipsec_sa *sa, int is_v4, struct rte_ipv4_hdr *ipv4, struct rte_ipv6_hdr *ipv6) { - int sa_type = WITHOUT_TRANSPORT_VERSION(sa->flags); - if ((sa_type == TRANSPORT) || - /* IPv4 check */ - (is_v4 && (sa_type == IP4_TUNNEL) && - (sa->src.ip.ip4 == ipv4->src_addr) && - (sa->dst.ip.ip4 == ipv4->dst_addr)) || - /* IPv6 check */ - (!is_v4 && (sa_type == IP6_TUNNEL) && - (!memcmp(sa->src.ip.ip6.ip6, ipv6->src_addr, 16)) && - (!memcmp(sa->dst.ip.ip6.ip6, ipv6->dst_addr, 16)))) 
+ + if (IS_TRANSPORT(sa->flags) || + /* IPv4 check */ + (is_v4 && IS_IP4(sa->flags) && + (sa->src.ip.ip4 == ipv4->src_addr) && + (sa->dst.ip.ip4 == ipv4->dst_addr)) || + /* IPv6 check */ + (!is_v4 && IS_IP6(sa->flags) && + (!memcmp(sa->src.ip.ip6.ip6, ipv6->src_addr, 16)) && + (!memcmp(sa->dst.ip.ip6.ip6, ipv6->dst_addr, 16)))) return 1; return 0; From patchwork Fri Sep 3 11:22:53 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Radu Nicolau X-Patchwork-Id: 97914 X-Patchwork-Delegate: gakhil@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 6DBE3A0C54; Fri, 3 Sep 2021 13:29:17 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 31E62410FE; Fri, 3 Sep 2021 13:29:03 +0200 (CEST) Received: from mga18.intel.com (mga18.intel.com [134.134.136.126]) by mails.dpdk.org (Postfix) with ESMTP id D0957410D8 for ; Fri, 3 Sep 2021 13:28:59 +0200 (CEST) X-IronPort-AV: E=McAfee;i="6200,9189,10095"; a="206516716" X-IronPort-AV: E=Sophos;i="5.85,265,1624345200"; d="scan'208";a="206516716" Received: from fmsmga003.fm.intel.com ([10.253.24.29]) by orsmga106.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 03 Sep 2021 04:28:59 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.85,265,1624345200"; d="scan'208";a="533996220" Received: from silpixa00400884.ir.intel.com ([10.243.22.82]) by FMSMGA003.fm.intel.com with ESMTP; 03 Sep 2021 04:28:58 -0700 From: Radu Nicolau To: Radu Nicolau , Akhil Goyal Cc: dev@dpdk.org, declan.doherty@intel.com Date: Fri, 3 Sep 2021 12:22:53 +0100 Message-Id: <20210903112257.303961-4-radu.nicolau@intel.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20210903112257.303961-1-radu.nicolau@intel.com> References: <20210903112257.303961-1-radu.nicolau@intel.com> 
MIME-Version: 1.0 Subject: [dpdk-dev] [PATCH 3/7] examples/ipsec-secgw: add support for TSO X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Add support to allow the user to specify the MSS for TSO offload on a per-SA basis. In the context of IPsec, MSS configuration is only supported for outbound SAs using the inline IPsec crypto offload. Signed-off-by: Declan Doherty Signed-off-by: Radu Nicolau --- examples/ipsec-secgw/ipsec.h | 1 + examples/ipsec-secgw/sa.c | 15 +++++++++++++++ 2 files changed, 16 insertions(+) diff --git a/examples/ipsec-secgw/ipsec.h b/examples/ipsec-secgw/ipsec.h index b496b4a936..7ba29406bf 100644 --- a/examples/ipsec-secgw/ipsec.h +++ b/examples/ipsec-secgw/ipsec.h @@ -143,6 +143,7 @@ struct ipsec_sa { enum rte_security_ipsec_sa_direction direction; uint8_t udp_encap; uint16_t portid; + uint16_t mss; uint8_t fdir_qid; uint8_t fdir_flag; diff --git a/examples/ipsec-secgw/sa.c b/examples/ipsec-secgw/sa.c index d5943f8cdc..bc83e4bccc 100644 --- a/examples/ipsec-secgw/sa.c +++ b/examples/ipsec-secgw/sa.c @@ -695,6 +695,16 @@ parse_sa_tokens(char **tokens, uint32_t n_tokens, continue; } + if (strcmp(tokens[ti], "mss") == 0) { + INCREMENT_TOKEN_INDEX(ti, n_tokens, status); + if (status->status < 0) + return; + rule->mss = atoi(tokens[ti]); + if (status->status < 0) + return; + continue; + } + if (strcmp(tokens[ti], "fallback") == 0) { struct rte_ipsec_session *fb; @@ -1366,6 +1376,11 @@ fill_ipsec_sa_prm(struct rte_ipsec_sa_prm *prm, const struct ipsec_sa *ss, prm->ipsec_xform.options.ecn = 1; prm->ipsec_xform.options.copy_dscp = 1; + if (ss->mss > 0) { + prm->ipsec_xform.options.tso = 1; + prm->ipsec_xform.mss = ss->mss; + } + if (IS_TRANSPORT(ss->flags)) { /* transport mode */ prm->trs.proto = rc; From patchwork Fri Sep 3 11:22:54 2021 Content-Type: text/plain;
charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Radu Nicolau X-Patchwork-Id: 97915 X-Patchwork-Delegate: gakhil@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id C3095A0C54; Fri, 3 Sep 2021 13:29:22 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 64BEC41124; Fri, 3 Sep 2021 13:29:04 +0200 (CEST) Received: from mga18.intel.com (mga18.intel.com [134.134.136.126]) by mails.dpdk.org (Postfix) with ESMTP id 27CE5410FE for ; Fri, 3 Sep 2021 13:29:01 +0200 (CEST) X-IronPort-AV: E=McAfee;i="6200,9189,10095"; a="206516717" X-IronPort-AV: E=Sophos;i="5.85,265,1624345200"; d="scan'208";a="206516717" Received: from fmsmga003.fm.intel.com ([10.253.24.29]) by orsmga106.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 03 Sep 2021 04:29:00 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.85,265,1624345200"; d="scan'208";a="533996231" Received: from silpixa00400884.ir.intel.com ([10.243.22.82]) by FMSMGA003.fm.intel.com with ESMTP; 03 Sep 2021 04:28:59 -0700 From: Radu Nicolau To: Radu Nicolau , Akhil Goyal Cc: dev@dpdk.org, declan.doherty@intel.com Date: Fri, 3 Sep 2021 12:22:54 +0100 Message-Id: <20210903112257.303961-5-radu.nicolau@intel.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20210903112257.303961-1-radu.nicolau@intel.com> References: <20210903112257.303961-1-radu.nicolau@intel.com> MIME-Version: 1.0 Subject: [dpdk-dev] [PATCH 4/7] examples/ipsec-secgw: enable stats by default X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Enable stats screen by default Signed-off-by: Declan Doherty Signed-off-by: Radu Nicolau Acked-by: Fan 
Zhang --- examples/ipsec-secgw/ipsec-secgw.h | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/examples/ipsec-secgw/ipsec-secgw.h b/examples/ipsec-secgw/ipsec-secgw.h index 96e22de45e..ede610bcde 100644 --- a/examples/ipsec-secgw/ipsec-secgw.h +++ b/examples/ipsec-secgw/ipsec-secgw.h @@ -7,7 +7,7 @@ #include #ifndef STATS_INTERVAL -#define STATS_INTERVAL 0 +#define STATS_INTERVAL 10ULL #endif #define NB_SOCKETS 4 From patchwork Fri Sep 3 11:22:55 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Radu Nicolau X-Patchwork-Id: 97916 X-Patchwork-Delegate: gakhil@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id CC1ACA0C54; Fri, 3 Sep 2021 13:29:27 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 923A341135; Fri, 3 Sep 2021 13:29:05 +0200 (CEST) Received: from mga18.intel.com (mga18.intel.com [134.134.136.126]) by mails.dpdk.org (Postfix) with ESMTP id 5C9784111D for ; Fri, 3 Sep 2021 13:29:02 +0200 (CEST) X-IronPort-AV: E=McAfee;i="6200,9189,10095"; a="206516721" X-IronPort-AV: E=Sophos;i="5.85,265,1624345200"; d="scan'208";a="206516721" Received: from fmsmga003.fm.intel.com ([10.253.24.29]) by orsmga106.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 03 Sep 2021 04:29:01 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.85,265,1624345200"; d="scan'208";a="533996253" Received: from silpixa00400884.ir.intel.com ([10.243.22.82]) by FMSMGA003.fm.intel.com with ESMTP; 03 Sep 2021 04:29:00 -0700 From: Radu Nicolau To: Radu Nicolau , Akhil Goyal Cc: dev@dpdk.org, declan.doherty@intel.com Date: Fri, 3 Sep 2021 12:22:55 +0100 Message-Id: <20210903112257.303961-6-radu.nicolau@intel.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: 
<20210903112257.303961-1-radu.nicolau@intel.com> References: <20210903112257.303961-1-radu.nicolau@intel.com> MIME-Version: 1.0 Subject: [dpdk-dev] [PATCH 5/7] examples/ipsec-secgw: add support for telemetry X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Add telemetry support to the IPsec GW sample app Signed-off-by: Declan Doherty Signed-off-by: Radu Nicolau --- examples/ipsec-secgw/ipsec-secgw.c | 370 +++++++++++++++++++++++++++-- examples/ipsec-secgw/ipsec-secgw.h | 31 +++ examples/ipsec-secgw/ipsec.h | 2 + examples/ipsec-secgw/meson.build | 2 +- examples/ipsec-secgw/sa.c | 21 +- 5 files changed, 401 insertions(+), 25 deletions(-) diff --git a/examples/ipsec-secgw/ipsec-secgw.c b/examples/ipsec-secgw/ipsec-secgw.c index 46fb49d91e..e725d84e7c 100644 --- a/examples/ipsec-secgw/ipsec-secgw.c +++ b/examples/ipsec-secgw/ipsec-secgw.c @@ -48,6 +48,7 @@ #include #include #include +#include #include "event_helper.h" #include "flow.h" @@ -668,7 +669,7 @@ send_single_packet(struct rte_mbuf *m, uint16_t port, uint8_t proto) static inline void inbound_sp_sa(struct sp_ctx *sp, struct sa_ctx *sa, struct traffic_type *ip, - uint16_t lim) + uint16_t lim, struct ipsec_spd_stats *stats) { struct rte_mbuf *m; uint32_t i, j, res, sa_idx; @@ -685,25 +686,30 @@ inbound_sp_sa(struct sp_ctx *sp, struct sa_ctx *sa, struct traffic_type *ip, res = ip->res[i]; if (res == BYPASS) { ip->pkts[j++] = m; + stats->bypass++; continue; } if (res == DISCARD) { free_pkts(&m, 1); + stats->discard++; continue; } /* Only check SPI match for processed IPSec packets */ if (i < lim && ((m->ol_flags & PKT_RX_SEC_OFFLOAD) == 0)) { + stats->discard++; free_pkts(&m, 1); continue; } sa_idx = res - 1; if (!inbound_sa_check(sa, m, sa_idx)) { + stats->discard++; free_pkts(&m, 1); continue; } ip->pkts[j++] = m; + stats->protect++; } 
ip->num = j; } @@ -747,6 +753,7 @@ static inline void process_pkts_inbound(struct ipsec_ctx *ipsec_ctx, struct ipsec_traffic *traffic) { + unsigned int lcoreid = rte_lcore_id(); uint16_t nb_pkts_in, n_ip4, n_ip6; n_ip4 = traffic->ip4.num; @@ -762,16 +769,20 @@ process_pkts_inbound(struct ipsec_ctx *ipsec_ctx, ipsec_process(ipsec_ctx, traffic); } - inbound_sp_sa(ipsec_ctx->sp4_ctx, ipsec_ctx->sa_ctx, &traffic->ip4, - n_ip4); + inbound_sp_sa(ipsec_ctx->sp4_ctx, + ipsec_ctx->sa_ctx, &traffic->ip4, n_ip4, + &core_statistics[lcoreid].inbound.spd4); - inbound_sp_sa(ipsec_ctx->sp6_ctx, ipsec_ctx->sa_ctx, &traffic->ip6, - n_ip6); + inbound_sp_sa(ipsec_ctx->sp6_ctx, + ipsec_ctx->sa_ctx, &traffic->ip6, n_ip6, + &core_statistics[lcoreid].inbound.spd6); } static inline void -outbound_sp(struct sp_ctx *sp, struct traffic_type *ip, - struct traffic_type *ipsec) +outbound_spd_lookup(struct sp_ctx *sp, + struct traffic_type *ip, + struct traffic_type *ipsec, + struct ipsec_spd_stats *stats) { struct rte_mbuf *m; uint32_t i, j, sa_idx; @@ -779,20 +790,27 @@ outbound_sp(struct sp_ctx *sp, struct traffic_type *ip, if (ip->num == 0 || sp == NULL) return; - rte_acl_classify((struct rte_acl_ctx *)sp, ip->data, ip->res, - ip->num, DEFAULT_MAX_CATEGORIES); + rte_acl_classify((struct rte_acl_ctx *)sp, + ip->data, ip->res, ip->num, + DEFAULT_MAX_CATEGORIES); - j = 0; - for (i = 0; i < ip->num; i++) { + for (i = 0, j = 0; i < ip->num; i++) { m = ip->pkts[i]; sa_idx = ip->res[i] - 1; - if (ip->res[i] == DISCARD) + + if (unlikely(ip->res[i] == DISCARD)) { free_pkts(&m, 1); - else if (ip->res[i] == BYPASS) + + stats->discard++; + } else if (unlikely(ip->res[i] == BYPASS)) { ip->pkts[j++] = m; - else { + + stats->bypass++; + } else { ipsec->res[ipsec->num] = sa_idx; ipsec->pkts[ipsec->num++] = m; + + stats->protect++; } } ip->num = j; @@ -804,15 +822,20 @@ process_pkts_outbound(struct ipsec_ctx *ipsec_ctx, { struct rte_mbuf *m; uint16_t idx, nb_pkts_out, i; + unsigned int lcoreid = 
rte_lcore_id(); /* Drop any IPsec traffic from protected ports */ free_pkts(traffic->ipsec.pkts, traffic->ipsec.num); traffic->ipsec.num = 0; - outbound_sp(ipsec_ctx->sp4_ctx, &traffic->ip4, &traffic->ipsec); + outbound_spd_lookup(ipsec_ctx->sp4_ctx, + &traffic->ip4, &traffic->ipsec, + &core_statistics[lcoreid].outbound.spd4); - outbound_sp(ipsec_ctx->sp6_ctx, &traffic->ip6, &traffic->ipsec); + outbound_spd_lookup(ipsec_ctx->sp6_ctx, + &traffic->ip6, &traffic->ipsec, + &core_statistics[lcoreid].outbound.spd6); if (app_sa_prm.enable == 0) { @@ -966,6 +989,7 @@ route4_pkts(struct rt_ctx *rt_ctx, struct rte_mbuf *pkts[], uint8_t nb_pkts) int32_t pkt_hop = 0; uint16_t i, offset; uint16_t lpm_pkts = 0; + unsigned int lcoreid = rte_lcore_id(); if (nb_pkts == 0) return; @@ -1001,6 +1025,7 @@ route4_pkts(struct rt_ctx *rt_ctx, struct rte_mbuf *pkts[], uint8_t nb_pkts) } if ((pkt_hop & RTE_LPM_LOOKUP_SUCCESS) == 0) { + core_statistics[lcoreid].lpm4.miss++; free_pkts(&pkts[i], 1); continue; } @@ -1017,6 +1042,7 @@ route6_pkts(struct rt_ctx *rt_ctx, struct rte_mbuf *pkts[], uint8_t nb_pkts) int32_t pkt_hop = 0; uint16_t i, offset; uint16_t lpm_pkts = 0; + unsigned int lcoreid = rte_lcore_id(); if (nb_pkts == 0) return; @@ -1053,6 +1079,7 @@ route6_pkts(struct rt_ctx *rt_ctx, struct rte_mbuf *pkts[], uint8_t nb_pkts) } if (pkt_hop == -1) { + core_statistics[lcoreid].lpm6.miss++; free_pkts(&pkts[i], 1); continue; } @@ -1126,6 +1153,7 @@ drain_inbound_crypto_queues(const struct lcore_conf *qconf, { uint32_t n; struct ipsec_traffic trf; + unsigned int lcoreid = rte_lcore_id(); if (app_sa_prm.enable == 0) { @@ -1143,13 +1171,15 @@ drain_inbound_crypto_queues(const struct lcore_conf *qconf, /* process ipv4 packets */ if (trf.ip4.num != 0) { - inbound_sp_sa(ctx->sp4_ctx, ctx->sa_ctx, &trf.ip4, 0); + inbound_sp_sa(ctx->sp4_ctx, ctx->sa_ctx, &trf.ip4, 0, + &core_statistics[lcoreid].inbound.spd4); route4_pkts(qconf->rt4_ctx, trf.ip4.pkts, trf.ip4.num); } /* process ipv6 packets */ if
(trf.ip6.num != 0) { - inbound_sp_sa(ctx->sp6_ctx, ctx->sa_ctx, &trf.ip6, 0); + inbound_sp_sa(ctx->sp6_ctx, ctx->sa_ctx, &trf.ip6, 0, + &core_statistics[lcoreid].inbound.spd6); route6_pkts(qconf->rt6_ctx, trf.ip6.pkts, trf.ip6.num); } } @@ -2826,6 +2856,308 @@ calculate_nb_mbufs(uint16_t nb_ports, uint16_t nb_crypto_qp, uint32_t nb_rxq, 8192U); } + +static int +handle_telemetry_cmd_ipsec_secgw_stats(const char *cmd __rte_unused, + const char *params, struct rte_tel_data *data) +{ + uint64_t total_pkts_dropped = 0, total_pkts_tx = 0, total_pkts_rx = 0; + unsigned int coreid; + + rte_tel_data_start_dict(data); + + if (params) { + coreid = (uint32_t)atoi(params); + if (rte_lcore_is_enabled(coreid) == 0) + return -EINVAL; + + total_pkts_dropped = core_statistics[coreid].dropped; + total_pkts_tx = core_statistics[coreid].tx; + total_pkts_rx = core_statistics[coreid].rx; + + } else { + for (coreid = 0; coreid < RTE_MAX_LCORE; coreid++) { + + /* skip disabled cores */ + if (rte_lcore_is_enabled(coreid) == 0) + continue; + + total_pkts_dropped += core_statistics[coreid].dropped; + total_pkts_tx += core_statistics[coreid].tx; + total_pkts_rx += core_statistics[coreid].rx; + } + } + + /* add telemetry key/values pairs */ + rte_tel_data_add_dict_u64(data, "packets received", + total_pkts_rx); + + rte_tel_data_add_dict_u64(data, "packets transmitted", + total_pkts_tx); + + rte_tel_data_add_dict_u64(data, "packets dropped", + total_pkts_dropped); + + + return 0; +} + +static void +update_lcore_statistics(struct ipsec_core_statistics *total, uint32_t coreid) +{ + struct ipsec_core_statistics *lcore_stats; + + /* skip disabled cores */ + if (rte_lcore_is_enabled(coreid) == 0) + return; + + lcore_stats = &core_statistics[coreid]; + + total->rx += lcore_stats->rx; + total->dropped += lcore_stats->dropped; + total->tx += lcore_stats->tx; + + /* outbound stats */ + total->outbound.spd6.protect += lcore_stats->outbound.spd6.protect; + total->outbound.spd6.bypass +=
lcore_stats->outbound.spd6.bypass; + total->outbound.spd6.discard += lcore_stats->outbound.spd6.discard; + + total->outbound.spd4.protect += lcore_stats->outbound.spd4.protect; + total->outbound.spd4.bypass += lcore_stats->outbound.spd4.bypass; + total->outbound.spd4.discard += lcore_stats->outbound.spd4.discard; + + total->outbound.sad.miss += lcore_stats->outbound.sad.miss; + + /* inbound stats */ + total->inbound.spd6.protect += lcore_stats->inbound.spd6.protect; + total->inbound.spd6.bypass += lcore_stats->inbound.spd6.bypass; + total->inbound.spd6.discard += lcore_stats->inbound.spd6.discard; + + total->inbound.spd4.protect += lcore_stats->inbound.spd4.protect; + total->inbound.spd4.bypass += lcore_stats->inbound.spd4.bypass; + total->inbound.spd4.discard += lcore_stats->inbound.spd4.discard; + + total->inbound.sad.miss += lcore_stats->inbound.sad.miss; + + + /* routing stats */ + total->lpm4.miss += lcore_stats->lpm4.miss; + total->lpm6.miss += lcore_stats->lpm6.miss; +} + +static void +update_statistics(struct ipsec_core_statistics *total, uint32_t coreid) +{ + memset(total, 0, sizeof(*total)); + + if (coreid != UINT32_MAX) { + update_lcore_statistics(total, coreid); + } else { + for (coreid = 0; coreid < RTE_MAX_LCORE; coreid++) + update_lcore_statistics(total, coreid); + } +} + +static int +handle_telemetry_cmd_ipsec_secgw_stats_outbound(const char *cmd __rte_unused, + const char *params, struct rte_tel_data *data) +{ + struct ipsec_core_statistics total_stats; + + struct rte_tel_data *spd4_data = rte_tel_data_alloc(); + struct rte_tel_data *spd6_data = rte_tel_data_alloc(); + struct rte_tel_data *sad_data = rte_tel_data_alloc(); + + unsigned int coreid = UINT32_MAX; + + /* verify allocated telemetry data structures */ + if (!spd4_data || !spd6_data || !sad_data) + return -ENOMEM; + + /* initialize telemetry data structs as dicts */ + rte_tel_data_start_dict(data); + + rte_tel_data_start_dict(spd4_data); + rte_tel_data_start_dict(spd6_data); + 
rte_tel_data_start_dict(sad_data); + + if (params) { + coreid = (uint32_t)atoi(params); + if (rte_lcore_is_enabled(coreid) == 0) + return -EINVAL; + } + + update_statistics(&total_stats, coreid); + + /* add spd 4 telemetry key/values pairs */ + + rte_tel_data_add_dict_u64(spd4_data, "protect", + total_stats.outbound.spd4.protect); + rte_tel_data_add_dict_u64(spd4_data, "bypass", + total_stats.outbound.spd4.bypass); + rte_tel_data_add_dict_u64(spd4_data, "discard", + total_stats.outbound.spd4.discard); + + rte_tel_data_add_dict_container(data, "spd4", spd4_data, 0); + + /* add spd 6 telemetry key/values pairs */ + + rte_tel_data_add_dict_u64(spd6_data, "protect", + total_stats.outbound.spd6.protect); + rte_tel_data_add_dict_u64(spd6_data, "bypass", + total_stats.outbound.spd6.bypass); + rte_tel_data_add_dict_u64(spd6_data, "discard", + total_stats.outbound.spd6.discard); + + rte_tel_data_add_dict_container(data, "spd6", spd6_data, 0); + + /* add sad telemetry key/values pairs */ + + rte_tel_data_add_dict_u64(sad_data, "miss", + total_stats.outbound.sad.miss); + + rte_tel_data_add_dict_container(data, "sad", sad_data, 0); + + return 0; +} + +static int +handle_telemetry_cmd_ipsec_secgw_stats_inbound(const char *cmd __rte_unused, + const char *params, struct rte_tel_data *data) +{ + struct ipsec_core_statistics total_stats; + + struct rte_tel_data *spd4_data = rte_tel_data_alloc(); + struct rte_tel_data *spd6_data = rte_tel_data_alloc(); + struct rte_tel_data *sad_data = rte_tel_data_alloc(); + + unsigned int coreid = UINT32_MAX; + + /* verify allocated telemetry data structures */ + if (!spd4_data || !spd6_data || !sad_data) + return -ENOMEM; + + /* initialize telemetry data structs as dicts */ + rte_tel_data_start_dict(data); + rte_tel_data_start_dict(spd4_data); + rte_tel_data_start_dict(spd6_data); + rte_tel_data_start_dict(sad_data); + + /* add children dicts to parent dict */ + + if (params) { + coreid = (uint32_t)atoi(params); + if (rte_lcore_is_enabled(coreid) 
== 0) + return -EINVAL; + } + + update_statistics(&total_stats, coreid); + + /* add sad telemetry key/values pairs */ + + rte_tel_data_add_dict_u64(sad_data, "miss", + total_stats.inbound.sad.miss); + + rte_tel_data_add_dict_container(data, "sad", sad_data, 0); + + /* add spd 4 telemetry key/values pairs */ + + rte_tel_data_add_dict_u64(spd4_data, "protect", + total_stats.inbound.spd4.protect); + rte_tel_data_add_dict_u64(spd4_data, "bypass", + total_stats.inbound.spd4.bypass); + rte_tel_data_add_dict_u64(spd4_data, "discard", + total_stats.inbound.spd4.discard); + + rte_tel_data_add_dict_container(data, "spd4", spd4_data, 0); + + /* add spd 6 telemetry key/values pairs */ + + rte_tel_data_add_dict_u64(spd6_data, "protect", + total_stats.inbound.spd6.protect); + rte_tel_data_add_dict_u64(spd6_data, "bypass", + total_stats.inbound.spd6.bypass); + rte_tel_data_add_dict_u64(spd6_data, "discard", + total_stats.inbound.spd6.discard); + + rte_tel_data_add_dict_container(data, "spd6", spd6_data, 0); + + return 0; +} + +static int +handle_telemetry_cmd_ipsec_secgw_stats_routing(const char *cmd __rte_unused, + const char *params, struct rte_tel_data *data) +{ + struct ipsec_core_statistics total_stats; + + struct rte_tel_data *lpm4_data = rte_tel_data_alloc(); + struct rte_tel_data *lpm6_data = rte_tel_data_alloc(); + + unsigned int coreid = UINT32_MAX; + + /* initialize telemetry data structs as dicts */ + rte_tel_data_start_dict(data); + rte_tel_data_start_dict(lpm4_data); + rte_tel_data_start_dict(lpm6_data); + + + if (params) { + coreid = (uint32_t)atoi(params); + if (rte_lcore_is_enabled(coreid) == 0) + return -EINVAL; + } + + update_statistics(&total_stats, coreid); + + /* add lpm 4 telemetry key/values pairs */ + rte_tel_data_add_dict_u64(lpm4_data, "miss", + total_stats.lpm4.miss); + + rte_tel_data_add_dict_container(data, "IPv4 LPM", lpm4_data, 0); + + /* add lpm 6 telemetry key/values pairs */ + rte_tel_data_add_dict_u64(lpm6_data, "miss", +
total_stats.lpm6.miss); + + rte_tel_data_add_dict_container(data, "IPv6 LPM", lpm6_data, 0); + + return 0; +} + +static void +ipsec_secgw_telemetry_init(void) +{ + rte_telemetry_register_cmd("/examples/ipsec-secgw/stats", + handle_telemetry_cmd_ipsec_secgw_stats, + "Returns global stats. " + "Optional Parameters: int "); + + rte_telemetry_register_cmd("/examples/ipsec-secgw/stats/outbound", + handle_telemetry_cmd_ipsec_secgw_stats_outbound, + "Returns outbound global stats. " + "Optional Parameters: int "); + + rte_telemetry_register_cmd("/examples/ipsec-secgw/stats/inbound", + handle_telemetry_cmd_ipsec_secgw_stats_inbound, + "Returns inbound global stats. " + "Optional Parameters: int "); + + rte_telemetry_register_cmd("/examples/ipsec-secgw/stats/routing", + handle_telemetry_cmd_ipsec_secgw_stats_routing, + "Returns routing stats. " + "Optional Parameters: int "); +} + +static void +telemetry_init(void) +{ + rte_ipsec_telemetry_init(); + + ipsec_secgw_telemetry_init(); + +} + int32_t main(int32_t argc, char **argv) { @@ -2863,6 +3195,8 @@ main(int32_t argc, char **argv) if (ret < 0) rte_exit(EXIT_FAILURE, "Invalid parameters\n"); + telemetry_init(); + /* parse configuration file */ if (parse_cfg_file(cfgfile) < 0) { printf("parsing file \"%s\" failed\n", diff --git a/examples/ipsec-secgw/ipsec-secgw.h b/examples/ipsec-secgw/ipsec-secgw.h index ede610bcde..faa2b2c262 100644 --- a/examples/ipsec-secgw/ipsec-secgw.h +++ b/examples/ipsec-secgw/ipsec-secgw.h @@ -83,6 +83,17 @@ struct ethaddr_info { uint64_t src, dst; }; +struct ipsec_spd_stats { + uint64_t protect; + uint64_t bypass; + uint64_t discard; +}; + +struct ipsec_sa_stats { + uint64_t hit; + uint64_t miss; +}; + #if (STATS_INTERVAL > 0) struct ipsec_core_statistics { uint64_t tx; @@ -91,6 +102,26 @@ struct ipsec_core_statistics { uint64_t tx_call; uint64_t dropped; uint64_t burst_rx; + + struct { + struct ipsec_spd_stats spd4; + struct ipsec_spd_stats spd6; + struct
ipsec_sa_stats sad; + } outbound; + + struct { + struct ipsec_spd_stats spd4; + struct ipsec_spd_stats spd6; + struct ipsec_sa_stats sad; + } inbound; + + struct { + uint64_t miss; + } lpm4; + + struct { + uint64_t miss; + } lpm6; } __rte_cache_aligned; struct ipsec_core_statistics core_statistics[RTE_MAX_LCORE]; diff --git a/examples/ipsec-secgw/ipsec.h b/examples/ipsec-secgw/ipsec.h index 7ba29406bf..4f12c57dc3 100644 --- a/examples/ipsec-secgw/ipsec.h +++ b/examples/ipsec-secgw/ipsec.h @@ -125,6 +125,8 @@ struct ipsec_sa { #define NATT_UDP_TUNNEL (1 << 2) #define IP4_TRANSPORT (1 << 3) #define IP6_TRANSPORT (1 << 4) +#define SA_TELEMETRY_ENABLE (1 << 5) + struct ip_addr src; struct ip_addr dst; struct { diff --git a/examples/ipsec-secgw/meson.build b/examples/ipsec-secgw/meson.build index b4b483a782..ccdaef1c4d 100644 --- a/examples/ipsec-secgw/meson.build +++ b/examples/ipsec-secgw/meson.build @@ -6,7 +6,7 @@ # To build this example as a standalone application with an already-installed # DPDK instance, use 'make' -deps += ['security', 'lpm', 'acl', 'hash', 'ip_frag', 'ipsec', 'eventdev'] +deps += ['security', 'lpm', 'acl', 'hash', 'ip_frag', 'ipsec', 'eventdev', 'telemetry'] allow_experimental_apis = true sources = files( 'esp.c', diff --git a/examples/ipsec-secgw/sa.c b/examples/ipsec-secgw/sa.c index bc83e4bccc..37039e70fc 100644 --- a/examples/ipsec-secgw/sa.c +++ b/examples/ipsec-secgw/sa.c @@ -323,6 +323,7 @@ parse_sa_tokens(char **tokens, uint32_t n_tokens, return; if (atoi(tokens[1]) == INVALID_SPI) return; + rule->flags = 0; rule->spi = atoi(tokens[1]); rule->portid = UINT16_MAX; ips = ipsec_get_primary_session(rule); @@ -339,26 +340,26 @@ parse_sa_tokens(char **tokens, uint32_t n_tokens, if (strcmp(tokens[ti], "ipv4-tunnel") == 0) { sa_cnt->nb_v4++; - rule->flags = IP4_TUNNEL; + rule->flags |= IP4_TUNNEL; } else if (strcmp(tokens[ti], "ipv4-udp-tunnel") == 0) { sa_cnt->nb_v4++; - rule->flags = IP4_TUNNEL | NATT_UDP_TUNNEL; + rule->flags |= IP4_TUNNEL | 
NATT_UDP_TUNNEL; rule->udp.sport = 0; rule->udp.dport = 4500; } else if (strcmp(tokens[ti], "ipv6-tunnel") == 0) { sa_cnt->nb_v6++; - rule->flags = IP6_TUNNEL; + rule->flags |= IP6_TUNNEL; } else if (strcmp(tokens[ti], "ipv6-udp-tunnel") == 0) { sa_cnt->nb_v6++; - rule->flags = IP6_TUNNEL | NATT_UDP_TUNNEL; + rule->flags |= IP6_TUNNEL | NATT_UDP_TUNNEL; } else if (strcmp(tokens[ti], "transport") == 0) { sa_cnt->nb_v4++; sa_cnt->nb_v6++; - rule->flags = IP4_TRANSPORT | IP6_TRANSPORT; + rule->flags |= IP4_TRANSPORT | IP6_TRANSPORT; } else if (strcmp(tokens[ti], "udp-transport") == 0) { sa_cnt->nb_v4++; sa_cnt->nb_v6++; - rule->flags = IP4_TRANSPORT | IP6_TRANSPORT | + rule->flags |= IP4_TRANSPORT | IP6_TRANSPORT | NATT_UDP_TUNNEL; rule->udp.sport = 0; rule->udp.dport = 4500; @@ -372,6 +373,11 @@ parse_sa_tokens(char **tokens, uint32_t n_tokens, continue; } + if (strcmp(tokens[ti], "telemetry") == 0) { + rule->flags |= SA_TELEMETRY_ENABLE; + continue; + } + if (strcmp(tokens[ti], "cipher_algo") == 0) { const struct supported_cipher_algo *algo; uint32_t key_len; @@ -1519,6 +1525,9 @@ ipsec_sa_init(struct ipsec_sa *lsa, struct rte_ipsec_sa *sa, uint32_t sa_size) if (rc < 0) return rc; + if (lsa->flags & SA_TELEMETRY_ENABLE) + rte_ipsec_telemetry_sa_add(sa); + /* init primary processing session */ ips = ipsec_get_primary_session(lsa); rc = fill_ipsec_session(ips, sa); From patchwork Fri Sep 3 11:22:56 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Radu Nicolau X-Patchwork-Id: 97917 X-Patchwork-Delegate: gakhil@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 209B4A0C54; Fri, 3 Sep 2021 13:29:33 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id BE7034113C; Fri, 3 Sep 2021 13:29:06 
+0200 (CEST) Received: from mga18.intel.com (mga18.intel.com [134.134.136.126]) by mails.dpdk.org (Postfix) with ESMTP id C982641100 for ; Fri, 3 Sep 2021 13:29:04 +0200 (CEST) X-IronPort-AV: E=McAfee;i="6200,9189,10095"; a="206516725" X-IronPort-AV: E=Sophos;i="5.85,265,1624345200"; d="scan'208";a="206516725" Received: from fmsmga003.fm.intel.com ([10.253.24.29]) by orsmga106.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 03 Sep 2021 04:29:04 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.85,265,1624345200"; d="scan'208";a="533996267" Received: from silpixa00400884.ir.intel.com ([10.243.22.82]) by FMSMGA003.fm.intel.com with ESMTP; 03 Sep 2021 04:29:02 -0700 From: Radu Nicolau To: Radu Nicolau , Akhil Goyal Cc: dev@dpdk.org, declan.doherty@intel.com Date: Fri, 3 Sep 2021 12:22:56 +0100 Message-Id: <20210903112257.303961-7-radu.nicolau@intel.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20210903112257.303961-1-radu.nicolau@intel.com> References: <20210903112257.303961-1-radu.nicolau@intel.com> MIME-Version: 1.0 Subject: [dpdk-dev] [PATCH 6/7] examples/ipsec-secgw: add support for defining initial sequence number value X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Add esn field to SA definition block to allow initial ESN value Signed-off-by: Declan Doherty Signed-off-by: Radu Nicolau --- examples/ipsec-secgw/ipsec.c | 5 +++++ examples/ipsec-secgw/ipsec.h | 2 ++ examples/ipsec-secgw/sa.c | 15 +++++++++++++++ 3 files changed, 22 insertions(+) diff --git a/examples/ipsec-secgw/ipsec.c b/examples/ipsec-secgw/ipsec.c index aa68e4f827..28772da345 100644 --- a/examples/ipsec-secgw/ipsec.c +++ b/examples/ipsec-secgw/ipsec.c @@ -234,6 +234,11 @@ create_inline_session(struct socket_ctx *skt_ctx, struct ipsec_sa *sa, sess_conf.ipsec.udp.dport = htons(sa->udp.dport); } + if (sa->esn 
> 0) {
+		sess_conf.ipsec.options.esn = 1;
+		sess_conf.ipsec.esn.value = sa->esn;
+	}
+
 	struct rte_flow_action_security action_security;
 	struct rte_flow_error err;
diff --git a/examples/ipsec-secgw/ipsec.h b/examples/ipsec-secgw/ipsec.h
index 4f12c57dc3..db7988604a 100644
--- a/examples/ipsec-secgw/ipsec.h
+++ b/examples/ipsec-secgw/ipsec.h
@@ -146,6 +146,8 @@ struct ipsec_sa {
 	uint8_t udp_encap;
 	uint16_t portid;
 	uint16_t mss;
+	uint16_t esn;
+
 	uint8_t fdir_qid;
 	uint8_t fdir_flag;
diff --git a/examples/ipsec-secgw/sa.c b/examples/ipsec-secgw/sa.c
index 37039e70fc..3ee5ed7dcf 100644
--- a/examples/ipsec-secgw/sa.c
+++ b/examples/ipsec-secgw/sa.c
@@ -711,6 +711,16 @@ parse_sa_tokens(char **tokens, uint32_t n_tokens,
 			continue;
 		}
 
+		if (strcmp(tokens[ti], "esn") == 0) {
+			INCREMENT_TOKEN_INDEX(ti, n_tokens, status);
+			if (status->status < 0)
+				return;
+			rule->esn = atoll(tokens[ti]);
+			if (status->status < 0)
+				return;
+			continue;
+		}
+
 		if (strcmp(tokens[ti], "fallback") == 0) {
 			struct rte_ipsec_session *fb;
 
@@ -1387,6 +1397,11 @@ fill_ipsec_sa_prm(struct rte_ipsec_sa_prm *prm, const struct ipsec_sa *ss,
 		prm->ipsec_xform.mss = ss->mss;
 	}
 
+	if (ss->esn > 0) {
+		prm->ipsec_xform.options.esn = 1;
+		prm->ipsec_xform.esn.value = ss->esn;
+	}
+
 	if (IS_TRANSPORT(ss->flags)) {
 		/* transport mode */
 		prm->trs.proto = rc;

From patchwork Fri Sep 3 11:22:57 2021
X-Patchwork-Submitter: Radu Nicolau
X-Patchwork-Id: 97918
X-Patchwork-Delegate: gakhil@marvell.com
From: Radu Nicolau
To: Radu Nicolau, Akhil Goyal
Cc: dev@dpdk.org, declan.doherty@intel.com
Date: Fri, 3 Sep 2021 12:22:57 +0100
Message-Id: <20210903112257.303961-8-radu.nicolau@intel.com>
In-Reply-To: <20210903112257.303961-1-radu.nicolau@intel.com>
References: <20210903112257.303961-1-radu.nicolau@intel.com>
Subject: [dpdk-dev] [PATCH 7/7] examples/ipsec-secgw: add ethdev reset callback

Add event handler for ethdev reset callback

Signed-off-by: Declan Doherty
Signed-off-by: Radu Nicolau
---
 examples/ipsec-secgw/ipsec-secgw.c |  18 +++-
 examples/ipsec-secgw/ipsec.h       |   4 +-
 examples/ipsec-secgw/sa.c          | 130 +++++++++++++++++++++++++++--
 3 files changed, 139 insertions(+), 13 deletions(-)

diff --git a/examples/ipsec-secgw/ipsec-secgw.c b/examples/ipsec-secgw/ipsec-secgw.c
index e725d84e7c..9ba9568978 100644
--- a/examples/ipsec-secgw/ipsec-secgw.c
+++ b/examples/ipsec-secgw/ipsec-secgw.c
@@ -2254,7 +2254,7 @@ port_init(uint16_t portid, uint64_t req_rx_offloads, uint64_t req_tx_offloads)
 			local_port_conf.rxmode.offloads)
 		rte_exit(EXIT_FAILURE,
 			"Error: port %u required RX offloads: 0x%" PRIx64
-			", avaialbe RX offloads: 0x%" PRIx64 "\n",
+			", available RX offloads: 0x%" PRIx64 "\n",
 			portid, local_port_conf.rxmode.offloads,
 			dev_info.rx_offload_capa);
 
@@ -2262,7 +2262,7 @@ port_init(uint16_t portid, uint64_t req_rx_offloads, uint64_t req_tx_offloads)
 			local_port_conf.txmode.offloads)
 		rte_exit(EXIT_FAILURE,
 			"Error: port %u required TX offloads: 0x%" PRIx64
-			", avaialbe TX offloads: 0x%" PRIx64 "\n",
+			", available TX offloads: 0x%" PRIx64 "\n",
 			portid, local_port_conf.txmode.offloads,
 			dev_info.tx_offload_capa);
 
@@ -2543,6 +2543,17 @@ inline_ipsec_event_callback(uint16_t port_id, enum rte_eth_event_type type,
 	return -1;
 }
 
+static int
+ethdev_reset_event_callback(uint16_t port_id,
+		enum rte_eth_event_type type __rte_unused,
+		void *param __rte_unused, void *ret_param __rte_unused)
+{
+	printf("Reset Event on port id %d\n", port_id);
+	printf("Force quit application");
+	force_quit = true;
+	return 0;
+}
+
 static uint16_t
 rx_callback(__rte_unused uint16_t port, __rte_unused uint16_t queue,
 	struct rte_mbuf *pkt[], uint16_t nb_pkts,
@@ -3317,6 +3328,9 @@ main(int32_t argc, char **argv)
 				rte_strerror(-ret), portid);
 		}
 
+		rte_eth_dev_callback_register(portid, RTE_ETH_EVENT_INTR_RESET,
+			ethdev_reset_event_callback, NULL);
+
 		rte_eth_dev_callback_register(portid, RTE_ETH_EVENT_IPSEC,
 			inline_ipsec_event_callback, NULL);
 	}
diff --git a/examples/ipsec-secgw/ipsec.h b/examples/ipsec-secgw/ipsec.h
index db7988604a..e8752e0bde 100644
--- a/examples/ipsec-secgw/ipsec.h
+++ b/examples/ipsec-secgw/ipsec.h
@@ -65,7 +65,7 @@ struct ip_addr {
 	} ip;
 };
 
-#define MAX_KEY_SIZE 36
+#define MAX_KEY_SIZE 132
 
 /*
  * application wide SA parameters
@@ -146,7 +146,7 @@ struct ipsec_sa {
 	uint8_t udp_encap;
 	uint16_t portid;
 	uint16_t mss;
-	uint16_t esn;
+	uint32_t esn;
 	uint8_t fdir_qid;
 	uint8_t fdir_flag;
diff --git a/examples/ipsec-secgw/sa.c b/examples/ipsec-secgw/sa.c
index 3ee5ed7dcf..0be8bdef7a 100644
--- a/examples/ipsec-secgw/sa.c
+++ b/examples/ipsec-secgw/sa.c
@@ -46,6 +46,7 @@ struct supported_cipher_algo {
 struct supported_auth_algo {
 	const char *keyword;
 	enum rte_crypto_auth_algorithm algo;
+	uint16_t iv_len;
 	uint16_t digest_len;
 	uint16_t key_len;
 	uint8_t key_not_req;
@@ -98,6 +99,20 @@ const struct supported_cipher_algo cipher_algos[] = {
 		.block_size = 4,
 		.key_len = 20
 	},
+	{
+		.keyword = "aes-192-ctr",
+		.algo = RTE_CRYPTO_CIPHER_AES_CTR,
+		.iv_len = 16,
+		.block_size = 16,
+		.key_len = 28
+	},
+	{
+		.keyword = "aes-256-ctr",
+		.algo = RTE_CRYPTO_CIPHER_AES_CTR,
+		.iv_len = 16,
+		.block_size = 16,
+		.key_len = 36
+	},
 	{
 		.keyword = "3des-cbc",
 		.algo = RTE_CRYPTO_CIPHER_3DES_CBC,
@@ -126,6 +141,31 @@ const struct supported_auth_algo auth_algos[] = {
 		.algo = RTE_CRYPTO_AUTH_SHA256_HMAC,
 		.digest_len = 16,
 		.key_len = 32
+	},
+	{
+		.keyword = "sha384-hmac",
+		.algo = RTE_CRYPTO_AUTH_SHA384_HMAC,
+		.digest_len = 24,
+		.key_len = 48
+	},
+	{
+		.keyword = "sha512-hmac",
+		.algo = RTE_CRYPTO_AUTH_SHA512_HMAC,
+		.digest_len = 32,
+		.key_len = 64
+	},
+	{
+		.keyword = "aes-gmac",
+		.algo = RTE_CRYPTO_AUTH_AES_GMAC,
+		.iv_len = 8,
+		.digest_len = 16,
+		.key_len = 20
+	},
+	{
+		.keyword = "aes-xcbc-mac-96",
+		.algo = RTE_CRYPTO_AUTH_AES_XCBC_MAC,
+		.digest_len = 12,
+		.key_len = 16
 	}
 };
 
@@ -156,6 +196,42 @@ const struct supported_aead_algo aead_algos[] = {
 		.key_len = 36,
 		.digest_len = 16,
 		.aad_len = 8,
+	},
+	{
+		.keyword = "aes-128-ccm",
+		.algo = RTE_CRYPTO_AEAD_AES_CCM,
+		.iv_len = 8,
+		.block_size = 4,
+		.key_len = 20,
+		.digest_len = 16,
+		.aad_len = 8,
+	},
+	{
+		.keyword = "aes-192-ccm",
+		.algo = RTE_CRYPTO_AEAD_AES_CCM,
+		.iv_len = 8,
+		.block_size = 4,
+		.key_len = 28,
+		.digest_len = 16,
+		.aad_len = 8,
+	},
+	{
+		.keyword = "aes-256-ccm",
+		.algo = RTE_CRYPTO_AEAD_AES_CCM,
+		.iv_len = 8,
+		.block_size = 4,
+		.key_len = 36,
+		.digest_len = 16,
+		.aad_len = 8,
+	},
+	{
+		.keyword = "chacha20-poly1305",
+		.algo = RTE_CRYPTO_AEAD_CHACHA20_POLY1305,
+		.iv_len = 12,
+		.block_size = 64,
+		.key_len = 36,
+		.digest_len = 16,
+		.aad_len = 8,
 	}
 };
 
@@ -352,6 +428,8 @@ parse_sa_tokens(char **tokens, uint32_t n_tokens,
 		} else if (strcmp(tokens[ti], "ipv6-udp-tunnel") == 0) {
 			sa_cnt->nb_v6++;
 			rule->flags |= IP6_TUNNEL | NATT_UDP_TUNNEL;
+			rule->udp.sport = 0;
+			rule->udp.dport = 4500;
 		} else if (strcmp(tokens[ti], "transport") == 0) {
 			sa_cnt->nb_v4++;
 			sa_cnt->nb_v6++;
@@ -499,6 +577,15 @@ parse_sa_tokens(char **tokens, uint32_t n_tokens,
 			if (status->status < 0)
 				return;
 
+			if (algo->algo == RTE_CRYPTO_AUTH_AES_GMAC) {
+				key_len -= 4;
+				rule->auth_key_len = key_len;
+				rule->iv_len = algo->iv_len;
+				memcpy(&rule->salt,
+					&rule->auth_key[key_len], 4);
+			}
+
 			auth_algo_p = 1;
 			continue;
 		}
@@ -1209,10 +1296,15 @@ sa_add_rules(struct sa_ctx *sa_ctx, const struct ipsec_sa entries[],
 			sa->aead_algo == RTE_CRYPTO_AEAD_AES_CCM ||
 			sa->aead_algo == RTE_CRYPTO_AEAD_CHACHA20_POLY1305) {
 
-			if (ips->type == RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO)
+			if (ips->type ==
+				RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO) {
 				iv_length = 8;
-			else
-				iv_length = 12;
+			} else {
+				if (sa->aead_algo == RTE_CRYPTO_AEAD_AES_CCM)
+					iv_length = 11;
+				else
+					iv_length = 12;
+			}
 
 			sa_ctx->xf[idx].a.type = RTE_CRYPTO_SYM_XFORM_AEAD;
 			sa_ctx->xf[idx].a.aead.algo = sa->aead_algo;
@@ -1236,10 +1328,8 @@ sa_add_rules(struct sa_ctx *sa_ctx, const struct ipsec_sa entries[],
 			case RTE_CRYPTO_CIPHER_NULL:
 			case RTE_CRYPTO_CIPHER_3DES_CBC:
 			case RTE_CRYPTO_CIPHER_AES_CBC:
-				iv_length = sa->iv_len;
-				break;
 			case RTE_CRYPTO_CIPHER_AES_CTR:
-				iv_length = 16;
+				iv_length = sa->iv_len;
 				break;
 			default:
 				RTE_LOG(ERR, IPSEC_ESP,
@@ -1248,6 +1338,15 @@ sa_add_rules(struct sa_ctx *sa_ctx, const struct ipsec_sa entries[],
 				return -EINVAL;
 			}
 
+			if (sa->auth_algo == RTE_CRYPTO_AUTH_AES_GMAC) {
+				if (ips->type ==
+					RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO) {
+					iv_length = 8;
+				} else {
+					iv_length = 12;
+				}
+			}
+
 			if (inbound) {
 				sa_ctx->xf[idx].b.type = RTE_CRYPTO_SYM_XFORM_CIPHER;
 				sa_ctx->xf[idx].b.cipher.algo = sa->cipher_algo;
@@ -1269,6 +1368,9 @@ sa_add_rules(struct sa_ctx *sa_ctx, const struct ipsec_sa entries[],
 					sa->digest_len;
 				sa_ctx->xf[idx].a.auth.op =
 					RTE_CRYPTO_AUTH_OP_VERIFY;
+				sa_ctx->xf[idx].a.auth.iv.offset = IV_OFFSET;
+				sa_ctx->xf[idx].a.auth.iv.length = iv_length;
+
 			} else { /* outbound */
 				sa_ctx->xf[idx].a.type = RTE_CRYPTO_SYM_XFORM_CIPHER;
 				sa_ctx->xf[idx].a.cipher.algo = sa->cipher_algo;
@@ -1290,11 +1392,21 @@ sa_add_rules(struct sa_ctx *sa_ctx, const struct ipsec_sa entries[],
 					sa->digest_len;
 				sa_ctx->xf[idx].b.auth.op =
 					RTE_CRYPTO_AUTH_OP_GENERATE;
+				sa_ctx->xf[idx].b.auth.iv.offset = IV_OFFSET;
+				sa_ctx->xf[idx].b.auth.iv.length = iv_length;
+
 			}
 
-			sa_ctx->xf[idx].a.next = &sa_ctx->xf[idx].b;
-			sa_ctx->xf[idx].b.next = NULL;
-			sa->xforms = &sa_ctx->xf[idx].a;
+			if (sa->auth_algo == RTE_CRYPTO_AUTH_AES_GMAC) {
+				sa->xforms = inbound ?
+					&sa_ctx->xf[idx].a : &sa_ctx->xf[idx].b;
+				sa->xforms->next = NULL;
+
+			} else {
+				sa_ctx->xf[idx].a.next = &sa_ctx->xf[idx].b;
+				sa_ctx->xf[idx].b.next = NULL;
+				sa->xforms = &sa_ctx->xf[idx].a;
+			}
 		}
 
 		if (ips->type ==