From patchwork Mon Jun 29 05:10:26 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Zhao1, Wei" X-Patchwork-Id: 72397 X-Patchwork-Delegate: qi.z.zhang@intel.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id 15F30A0350; Mon, 29 Jun 2020 07:35:35 +0200 (CEST) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 91DEA1BE94; Mon, 29 Jun 2020 07:35:33 +0200 (CEST) Received: from mga09.intel.com (mga09.intel.com [134.134.136.24]) by dpdk.org (Postfix) with ESMTP id 2DADA1BDAC; Mon, 29 Jun 2020 07:35:28 +0200 (CEST) IronPort-SDR: nhqxn/EQ6/sKzTUK8YL3YerjIRUwCdTGy8kbyMXMPzGJpRiaf/yLtijpRp0nNVcjMqm9v8luWk H1cXX3DxMqXg== X-IronPort-AV: E=McAfee;i="6000,8403,9666"; a="147458185" X-IronPort-AV: E=Sophos;i="5.75,294,1589266800"; d="scan'208";a="147458185" X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga004.fm.intel.com ([10.253.24.48]) by orsmga102.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 28 Jun 2020 22:35:27 -0700 IronPort-SDR: 4bPiUOp/XXGZx93QxSyMPf2XffjkTuFRrcPincbaJYjgkGuQn/1UdY0GE6amCAUW7c68AAsMfh FMdd7c7hMPFw== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.75,294,1589266800"; d="scan'208";a="302948800" Received: from unknown (HELO localhost.localdomain.bj.intel.com) ([172.16.182.123]) by fmsmga004.fm.intel.com with ESMTP; 28 Jun 2020 22:35:25 -0700 From: Wei Zhao To: dev@dpdk.org Cc: stable@dpdk.org, qi.z.zhang@intel.com, nannan.lu@intel.com, Wei Zhao Date: Mon, 29 Jun 2020 13:10:26 +0800 Message-Id: <20200629051030.3541-2-wei.zhao1@intel.com> X-Mailer: git-send-email 2.19.1 In-Reply-To: <20200629051030.3541-1-wei.zhao1@intel.com> References: <20200628052857.67428-1-wei.zhao1@intel.com> <20200629051030.3541-1-wei.zhao1@intel.com> MIME-Version: 1.0 Subject: [dpdk-dev] [PATCH v5 1/5] net/ice: add support more PPPoE packet type for switch X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" This patch adds more support to the switch parser for PPPoE packets: it enables parsing of the TCP/UDP L4 layer and the IPv4/IPv6 L3 layer of the PPPoE payload, so that the L4 dst/src port and L3 IP address can be used as input set for PPPoE-related switch filter rules. Signed-off-by: Wei Zhao --- doc/guides/rel_notes/release_20_08.rst | 1 + drivers/net/ice/ice_switch_filter.c | 115 +++++++++++++++++++++---- 2 files changed, 101 insertions(+), 15 deletions(-) diff --git a/doc/guides/rel_notes/release_20_08.rst b/doc/guides/rel_notes/release_20_08.rst index 3c40424cc..90b58a027 100644 --- a/doc/guides/rel_notes/release_20_08.rst +++ b/doc/guides/rel_notes/release_20_08.rst @@ -86,6 +86,7 @@ New Features Updated the Intel ice driver with new features and improvements, including: * Added support for DCF datapath configuration. + * Added support for more PPPoE packet type for switch filter. 
Removed Items ------------- diff --git a/drivers/net/ice/ice_switch_filter.c b/drivers/net/ice/ice_switch_filter.c index 5ccd020c5..3c0c36bce 100644 --- a/drivers/net/ice/ice_switch_filter.c +++ b/drivers/net/ice/ice_switch_filter.c @@ -26,6 +26,8 @@ #define MAX_QGRP_NUM_TYPE 7 +#define ICE_PPP_IPV4_PROTO 0x0021 +#define ICE_PPP_IPV6_PROTO 0x0057 #define ICE_SW_INSET_ETHER ( \ ICE_INSET_DMAC | ICE_INSET_SMAC | ICE_INSET_ETHERTYPE) @@ -95,6 +97,18 @@ ICE_INSET_VLAN_OUTER | ICE_INSET_VLAN_INNER | \ ICE_INSET_DMAC | ICE_INSET_ETHERTYPE | ICE_INSET_PPPOE_SESSION | \ ICE_INSET_PPPOE_PROTO) +#define ICE_SW_INSET_MAC_PPPOE_IPV4 ( \ + ICE_SW_INSET_MAC_PPPOE | ICE_SW_INSET_MAC_IPV4) +#define ICE_SW_INSET_MAC_PPPOE_IPV4_TCP ( \ + ICE_SW_INSET_MAC_PPPOE | ICE_SW_INSET_MAC_IPV4_TCP) +#define ICE_SW_INSET_MAC_PPPOE_IPV4_UDP ( \ + ICE_SW_INSET_MAC_PPPOE | ICE_SW_INSET_MAC_IPV4_UDP) +#define ICE_SW_INSET_MAC_PPPOE_IPV6 ( \ + ICE_SW_INSET_MAC_PPPOE | ICE_SW_INSET_MAC_IPV6) +#define ICE_SW_INSET_MAC_PPPOE_IPV6_TCP ( \ + ICE_SW_INSET_MAC_PPPOE | ICE_SW_INSET_MAC_IPV6_TCP) +#define ICE_SW_INSET_MAC_PPPOE_IPV6_UDP ( \ + ICE_SW_INSET_MAC_PPPOE | ICE_SW_INSET_MAC_IPV6_UDP) #define ICE_SW_INSET_MAC_IPV4_ESP ( \ ICE_SW_INSET_MAC_IPV4 | ICE_INSET_ESP_SPI) #define ICE_SW_INSET_MAC_IPV6_ESP ( \ @@ -154,10 +168,6 @@ ice_pattern_match_item ice_switch_pattern_dist_comms[] = { ICE_SW_INSET_DIST_NVGRE_IPV4_UDP, ICE_INSET_NONE}, {pattern_eth_ipv4_nvgre_eth_ipv4_tcp, ICE_SW_INSET_DIST_NVGRE_IPV4_TCP, ICE_INSET_NONE}, - {pattern_eth_pppoed, - ICE_SW_INSET_MAC_PPPOE, ICE_INSET_NONE}, - {pattern_eth_vlan_pppoed, - ICE_SW_INSET_MAC_PPPOE, ICE_INSET_NONE}, {pattern_eth_pppoes, ICE_SW_INSET_MAC_PPPOE, ICE_INSET_NONE}, {pattern_eth_vlan_pppoes, @@ -166,6 +176,30 @@ ice_pattern_match_item ice_switch_pattern_dist_comms[] = { ICE_SW_INSET_MAC_PPPOE_PROTO, ICE_INSET_NONE}, {pattern_eth_vlan_pppoes_proto, ICE_SW_INSET_MAC_PPPOE_PROTO, ICE_INSET_NONE}, + {pattern_eth_pppoes_ipv4, + ICE_SW_INSET_MAC_PPPOE_IPV4, ICE_INSET_NONE}, + {pattern_eth_pppoes_ipv4_tcp, + ICE_SW_INSET_MAC_PPPOE_IPV4_TCP, ICE_INSET_NONE}, + {pattern_eth_pppoes_ipv4_udp, + ICE_SW_INSET_MAC_PPPOE_IPV4_UDP, ICE_INSET_NONE}, + {pattern_eth_pppoes_ipv6, + ICE_SW_INSET_MAC_PPPOE_IPV6, ICE_INSET_NONE}, + {pattern_eth_pppoes_ipv6_tcp, + ICE_SW_INSET_MAC_PPPOE_IPV6_TCP, ICE_INSET_NONE}, + {pattern_eth_pppoes_ipv6_udp, + ICE_SW_INSET_MAC_PPPOE_IPV6_UDP, ICE_INSET_NONE}, + {pattern_eth_vlan_pppoes_ipv4, + ICE_SW_INSET_MAC_PPPOE_IPV4, ICE_INSET_NONE}, + {pattern_eth_vlan_pppoes_ipv4_tcp, + ICE_SW_INSET_MAC_PPPOE_IPV4_TCP, ICE_INSET_NONE}, + {pattern_eth_vlan_pppoes_ipv4_udp, + ICE_SW_INSET_MAC_PPPOE_IPV4_UDP, ICE_INSET_NONE}, + {pattern_eth_vlan_pppoes_ipv6, + ICE_SW_INSET_MAC_PPPOE_IPV6, ICE_INSET_NONE}, + {pattern_eth_vlan_pppoes_ipv6_tcp, + ICE_SW_INSET_MAC_PPPOE_IPV6_TCP, ICE_INSET_NONE}, + {pattern_eth_vlan_pppoes_ipv6_udp, + ICE_SW_INSET_MAC_PPPOE_IPV6_UDP, ICE_INSET_NONE}, {pattern_eth_ipv4_esp, ICE_SW_INSET_MAC_IPV4_ESP, ICE_INSET_NONE}, {pattern_eth_ipv4_udp_esp, @@ -254,10 +288,6 @@ ice_pattern_match_item ice_switch_pattern_perm[] = { ICE_SW_INSET_PERM_TUNNEL_IPV4_UDP, ICE_INSET_NONE}, {pattern_eth_ipv4_nvgre_eth_ipv4_tcp, ICE_SW_INSET_PERM_TUNNEL_IPV4_TCP, ICE_INSET_NONE}, - {pattern_eth_pppoed, - ICE_SW_INSET_MAC_PPPOE, ICE_INSET_NONE}, - {pattern_eth_vlan_pppoed, - ICE_SW_INSET_MAC_PPPOE, ICE_INSET_NONE}, {pattern_eth_pppoes, ICE_SW_INSET_MAC_PPPOE, ICE_INSET_NONE}, {pattern_eth_vlan_pppoes, @@ -266,6 +296,30 @@ ice_pattern_match_item ice_switch_pattern_perm[] = { 
ICE_SW_INSET_MAC_PPPOE_PROTO, ICE_INSET_NONE}, {pattern_eth_vlan_pppoes_proto, ICE_SW_INSET_MAC_PPPOE_PROTO, ICE_INSET_NONE}, + {pattern_eth_pppoes_ipv4, + ICE_SW_INSET_MAC_PPPOE_IPV4, ICE_INSET_NONE}, + {pattern_eth_pppoes_ipv4_tcp, + ICE_SW_INSET_MAC_PPPOE_IPV4_TCP, ICE_INSET_NONE}, + {pattern_eth_pppoes_ipv4_udp, + ICE_SW_INSET_MAC_PPPOE_IPV4_UDP, ICE_INSET_NONE}, + {pattern_eth_pppoes_ipv6, + ICE_SW_INSET_MAC_PPPOE_IPV6, ICE_INSET_NONE}, + {pattern_eth_pppoes_ipv6_tcp, + ICE_SW_INSET_MAC_PPPOE_IPV6_TCP, ICE_INSET_NONE}, + {pattern_eth_pppoes_ipv6_udp, + ICE_SW_INSET_MAC_PPPOE_IPV6_UDP, ICE_INSET_NONE}, + {pattern_eth_vlan_pppoes_ipv4, + ICE_SW_INSET_MAC_PPPOE_IPV4, ICE_INSET_NONE}, + {pattern_eth_vlan_pppoes_ipv4_tcp, + ICE_SW_INSET_MAC_PPPOE_IPV4_TCP, ICE_INSET_NONE}, + {pattern_eth_vlan_pppoes_ipv4_udp, + ICE_SW_INSET_MAC_PPPOE_IPV4_UDP, ICE_INSET_NONE}, + {pattern_eth_vlan_pppoes_ipv6, + ICE_SW_INSET_MAC_PPPOE_IPV6, ICE_INSET_NONE}, + {pattern_eth_vlan_pppoes_ipv6_tcp, + ICE_SW_INSET_MAC_PPPOE_IPV6_TCP, ICE_INSET_NONE}, + {pattern_eth_vlan_pppoes_ipv6_udp, + ICE_SW_INSET_MAC_PPPOE_IPV6_UDP, ICE_INSET_NONE}, {pattern_eth_ipv4_esp, ICE_SW_INSET_MAC_IPV4_ESP, ICE_INSET_NONE}, {pattern_eth_ipv4_udp_esp, @@ -416,13 +470,16 @@ ice_switch_inset_get(const struct rte_flow_item pattern[], const struct rte_flow_item_l2tpv3oip *l2tp_spec, *l2tp_mask; const struct rte_flow_item_pfcp *pfcp_spec, *pfcp_mask; uint64_t input_set = ICE_INSET_NONE; - uint16_t j, t = 0; + bool pppoe_elem_valid = 0; + bool pppoe_patt_valid = 0; + bool pppoe_prot_valid = 0; bool profile_rule = 0; bool tunnel_valid = 0; - bool pppoe_valid = 0; bool ipv6_valiad = 0; bool ipv4_valiad = 0; bool udp_valiad = 0; + bool tcp_valiad = 0; + uint16_t j, t = 0; for (item = pattern; item->type != RTE_FLOW_ITEM_TYPE_END; item++) { @@ -752,6 +809,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[], case RTE_FLOW_ITEM_TYPE_TCP: tcp_spec = item->spec; tcp_mask = item->mask; + tcp_valiad = 1; if (tcp_spec && tcp_mask) { /* Check TCP mask and update input set */ if (tcp_mask->hdr.sent_seq || @@ -969,6 +1027,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[], "Invalid pppoe item"); return 0; } + pppoe_patt_valid = 1; if (pppoe_spec && pppoe_mask) { /* Check pppoe mask and update input set */ if (pppoe_mask->length || @@ -989,7 +1048,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[], input_set |= ICE_INSET_PPPOE_SESSION; } t++; - pppoe_valid = 1; + pppoe_elem_valid = 1; } break; @@ -1010,7 +1069,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[], return 0; } if (pppoe_proto_spec && pppoe_proto_mask) { - if (pppoe_valid) + if (pppoe_elem_valid) t--; list[t].type = ICE_PPPOE; if (pppoe_proto_mask->proto_id) { @@ -1019,9 +1078,21 @@ ice_switch_inset_get(const struct rte_flow_item pattern[], list[t].m_u.pppoe_hdr.ppp_prot_id = pppoe_proto_mask->proto_id; input_set |= ICE_INSET_PPPOE_PROTO; + + pppoe_prot_valid = 1; } + if ((pppoe_proto_mask->proto_id & + pppoe_proto_spec->proto_id) != + CPU_TO_BE16(ICE_PPP_IPV4_PROTO) && + (pppoe_proto_mask->proto_id & + pppoe_proto_spec->proto_id) != + CPU_TO_BE16(ICE_PPP_IPV6_PROTO)) + *tun_type = ICE_SW_TUN_PPPOE_PAY; + else + *tun_type = ICE_SW_TUN_PPPOE; t++; } + break; case RTE_FLOW_ITEM_TYPE_ESP: @@ -1232,6 +1303,23 @@ ice_switch_inset_get(const struct rte_flow_item pattern[], } } + if (pppoe_patt_valid && !pppoe_prot_valid) { + if (ipv6_valiad && udp_valiad) + *tun_type = ICE_SW_TUN_PPPOE_IPV6_UDP; + else if (ipv6_valiad && tcp_valiad) + *tun_type = 
ICE_SW_TUN_PPPOE_IPV6_TCP; + else if (ipv4_valiad && udp_valiad) + *tun_type = ICE_SW_TUN_PPPOE_IPV4_UDP; + else if (ipv4_valiad && tcp_valiad) + *tun_type = ICE_SW_TUN_PPPOE_IPV4_TCP; + else if (ipv6_valiad) + *tun_type = ICE_SW_TUN_PPPOE_IPV6; + else if (ipv4_valiad) + *tun_type = ICE_SW_TUN_PPPOE_IPV4; + else + *tun_type = ICE_SW_TUN_PPPOE; + } + *lkups_num = t; return input_set; @@ -1447,9 +1535,6 @@ ice_switch_parse_pattern_action(struct ice_adapter *ad, tun_type = ICE_SW_TUN_VXLAN; if (item->type == RTE_FLOW_ITEM_TYPE_NVGRE) tun_type = ICE_SW_TUN_NVGRE; - if (item->type == RTE_FLOW_ITEM_TYPE_PPPOED || - item->type == RTE_FLOW_ITEM_TYPE_PPPOES) - tun_type = ICE_SW_TUN_PPPOE; if (item->type == RTE_FLOW_ITEM_TYPE_ETH) { const struct rte_flow_item_eth *eth_mask; if (item->mask) From patchwork Mon Jun 29 05:10:27 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Zhao1, Wei" X-Patchwork-Id: 72398 X-Patchwork-Delegate: qi.z.zhang@intel.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id 4F40FA0350; Mon, 29 Jun 2020 07:35:44 +0200 (CEST) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 082331BE90; Mon, 29 Jun 2020 07:35:41 +0200 (CEST) Received: from mga09.intel.com (mga09.intel.com [134.134.136.24]) by dpdk.org (Postfix) with ESMTP id CFEEE1BE94; Mon, 29 Jun 2020 07:35:29 +0200 (CEST) IronPort-SDR: iEzqh8i9sl2FmSIyP3QC1QHBu4qPByESQJ/BgpSFVKDTkSvC1vkE7WwOsMq+pbiXzp9lhxl6VG LVp4KZCJxrKQ== X-IronPort-AV: E=McAfee;i="6000,8403,9666"; a="147458191" X-IronPort-AV: E=Sophos;i="5.75,294,1589266800"; d="scan'208";a="147458191" X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga004.fm.intel.com ([10.253.24.48]) by orsmga102.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 28 Jun 2020 22:35:29 -0700 IronPort-SDR: p7yLq4wpEUSW2xPv1AKt8BZAX6HBvxFCNnGkqciXDoD7j5SxLHcFAcbS2ymfQ6x2/ZhRQTaPvJ tDyS3Nn7hNcQ== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.75,294,1589266800"; d="scan'208";a="302948809" Received: from unknown (HELO localhost.localdomain.bj.intel.com) ([172.16.182.123]) by fmsmga004.fm.intel.com with ESMTP; 28 Jun 2020 22:35:27 -0700 From: Wei Zhao To: dev@dpdk.org Cc: stable@dpdk.org, qi.z.zhang@intel.com, nannan.lu@intel.com, Wei Zhao Date: Mon, 29 Jun 2020 13:10:27 +0800 Message-Id: <20200629051030.3541-3-wei.zhao1@intel.com> X-Mailer: git-send-email 2.19.1 In-Reply-To: <20200629051030.3541-1-wei.zhao1@intel.com> References: <20200628052857.67428-1-wei.zhao1@intel.com> <20200629051030.3541-1-wei.zhao1@intel.com> MIME-Version: 1.0 Subject: [dpdk-dev] [PATCH v5 2/5] net/ice: fix tunnel type for switch rule X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" This patch adds a check on the protocol type of IPv4 packets: the tunnel type needs to be updated when NVGRE is carried in the payload. 
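For illustration only (this example is not part of the patch and the queue index is arbitrary), a rule of the following shape exercises the updated path, in testpmd flow syntax, since NVGRE rides on IP protocol 47 (0x2F, the ICE_IPV4_PROTO_NVGRE value used below):

  flow create 0 ingress pattern eth / ipv4 proto is 47 / end actions queue index 3 / end

When only the outer IPv4 protocol field selects NVGRE and no NVGRE item follows, the rule is programmed with tunnel type ICE_SW_TUN_AND_NON_TUN instead of being treated as a plain non-tunnel rule.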
Fixes: 6bc7628c5e0b ("net/ice: change default tunnel type") Cc: stable@dpdk.org Signed-off-by: Wei Zhao --- drivers/net/ice/ice_switch_filter.c | 7 ++++++- 1 file changed, 6 insertions(+), 1 deletion(-) diff --git a/drivers/net/ice/ice_switch_filter.c b/drivers/net/ice/ice_switch_filter.c index 3c0c36bce..c607e8d17 100644 --- a/drivers/net/ice/ice_switch_filter.c +++ b/drivers/net/ice/ice_switch_filter.c @@ -28,6 +28,7 @@ #define MAX_QGRP_NUM_TYPE 7 #define ICE_PPP_IPV4_PROTO 0x0021 #define ICE_PPP_IPV6_PROTO 0x0057 +#define ICE_IPV4_PROTO_NVGRE 0x002F #define ICE_SW_INSET_ETHER ( \ ICE_INSET_DMAC | ICE_INSET_SMAC | ICE_INSET_ETHERTYPE) @@ -632,6 +633,10 @@ ice_switch_inset_get(const struct rte_flow_item pattern[], list[t].m_u.ipv4_hdr.protocol = ipv4_mask->hdr.next_proto_id; } + if ((ipv4_spec->hdr.next_proto_id & + ipv4_mask->hdr.next_proto_id) == + ICE_IPV4_PROTO_NVGRE) + *tun_type = ICE_SW_TUN_AND_NON_TUN; if (ipv4_mask->hdr.type_of_service) { list[t].h_u.ipv4_hdr.tos = ipv4_spec->hdr.type_of_service; @@ -1526,7 +1531,7 @@ ice_switch_parse_pattern_action(struct ice_adapter *ad, const struct rte_flow_item *item = pattern; uint16_t item_num = 0; enum ice_sw_tunnel_type tun_type = - ICE_SW_TUN_AND_NON_TUN; + ICE_NON_TUN; struct ice_pattern_match_item *pattern_match_item = NULL; for (; item->type != RTE_FLOW_ITEM_TYPE_END; item++) { From patchwork Mon Jun 29 05:10:28 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Zhao1, Wei" X-Patchwork-Id: 72399 X-Patchwork-Delegate: qi.z.zhang@intel.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id EF035A0350; Mon, 29 Jun 2020 07:35:54 +0200 (CEST) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 084D61BEB1; Mon, 29 Jun 2020 07:35:43 +0200 (CEST) Received: from mga09.intel.com (mga09.intel.com [134.134.136.24]) by dpdk.org (Postfix) with ESMTP id A1CE41BDAC; Mon, 29 Jun 2020 07:35:31 +0200 (CEST) IronPort-SDR: XwqCn6Rfe+7jZH4H7QOgMairIY1AelqcZjgtmm6+y7js806Qg5XNI8mEGyrOwRvj0uozsFcAvx FdKtDdHYsz1g== X-IronPort-AV: E=McAfee;i="6000,8403,9666"; a="147458193" X-IronPort-AV: E=Sophos;i="5.75,294,1589266800"; d="scan'208";a="147458193" X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga004.fm.intel.com ([10.253.24.48]) by orsmga102.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 28 Jun 2020 22:35:31 -0700 IronPort-SDR: xCm58NxyhS6c8JhGL2i6IszvExyobPbjgEDS7klJ4Ld7DBJ96hl4KOtCS7m/LeGVuAOcrexCiZ Y0zSdcsXXRrw== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.75,294,1589266800"; d="scan'208";a="302948817" Received: from unknown (HELO localhost.localdomain.bj.intel.com) ([172.16.182.123]) by fmsmga004.fm.intel.com with ESMTP; 28 Jun 2020 22:35:29 -0700 From: Wei Zhao To: dev@dpdk.org Cc: stable@dpdk.org, qi.z.zhang@intel.com, nannan.lu@intel.com, Wei Zhao Date: Mon, 29 Jun 2020 13:10:28 +0800 Message-Id: <20200629051030.3541-4-wei.zhao1@intel.com> X-Mailer: git-send-email 2.19.1 In-Reply-To: <20200629051030.3541-1-wei.zhao1@intel.com> References: <20200628052857.67428-1-wei.zhao1@intel.com> <20200629051030.3541-1-wei.zhao1@intel.com> MIME-Version: 1.0 Subject: [dpdk-dev] [PATCH v5 3/5] net/ice: support switch flow for specific L4 type X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , 
List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" This patch adds more specific tunnel types for IPv4/IPv6 packets: it enables a TCP/UDP layer over IPv4/IPv6 as the L4 payload, but without the L4 dst/src port number in the input set, for switch filter rules. Fixes: 47d460d63233 ("net/ice: rework switch filter") Cc: stable@dpdk.org Signed-off-by: Wei Zhao --- drivers/net/ice/ice_switch_filter.c | 26 ++++++++++++++++++++------ 1 file changed, 20 insertions(+), 6 deletions(-) diff --git a/drivers/net/ice/ice_switch_filter.c b/drivers/net/ice/ice_switch_filter.c index c607e8d17..7d1cd98f5 100644 --- a/drivers/net/ice/ice_switch_filter.c +++ b/drivers/net/ice/ice_switch_filter.c @@ -474,8 +474,10 @@ ice_switch_inset_get(const struct rte_flow_item pattern[], bool pppoe_elem_valid = 0; bool pppoe_patt_valid = 0; bool pppoe_prot_valid = 0; - bool profile_rule = 0; bool tunnel_valid = 0; + bool profile_rule = 0; + bool nvgre_valid = 0; + bool vxlan_valid = 0; bool ipv6_valiad = 0; bool ipv4_valiad = 0; bool udp_valiad = 0; @@ -923,7 +925,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[], "Invalid VXLAN item"); return 0; } - + vxlan_valid = 1; tunnel_valid = 1; if (vxlan_spec && vxlan_mask) { list[t].type = ICE_VXLAN; @@ -960,6 +962,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[], "Invalid NVGRE item"); return 0; } + nvgre_valid = 1; tunnel_valid = 1; if (nvgre_spec && nvgre_mask) { list[t].type = ICE_NVGRE; @@ -1325,6 +1328,21 @@ ice_switch_inset_get(const struct rte_flow_item pattern[], *tun_type = ICE_SW_TUN_PPPOE; } + if (*tun_type == ICE_NON_TUN) { + if (vxlan_valid) + *tun_type = ICE_SW_TUN_VXLAN; + else if (nvgre_valid) + *tun_type = ICE_SW_TUN_NVGRE; + else if (ipv4_valiad && tcp_valiad) + *tun_type = ICE_SW_IPV4_TCP; + else if (ipv4_valiad && udp_valiad) + *tun_type = ICE_SW_IPV4_UDP; + else if (ipv6_valiad && tcp_valiad) + *tun_type = ICE_SW_IPV6_TCP; + else if (ipv6_valiad && udp_valiad) + *tun_type = ICE_SW_IPV6_UDP; + } + *lkups_num = t; return input_set; @@ -1536,10 +1554,6 @@ ice_switch_parse_pattern_action(struct ice_adapter *ad, for (; item->type != RTE_FLOW_ITEM_TYPE_END; item++) { item_num++; - if (item->type == RTE_FLOW_ITEM_TYPE_VXLAN) - tun_type = ICE_SW_TUN_VXLAN; - if (item->type == RTE_FLOW_ITEM_TYPE_NVGRE) - tun_type = ICE_SW_TUN_NVGRE; if (item->type == RTE_FLOW_ITEM_TYPE_ETH) { const struct rte_flow_item_eth *eth_mask; if (item->mask) From patchwork Mon Jun 29 05:10:29 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Zhao1, Wei" X-Patchwork-Id: 72400 X-Patchwork-Delegate: qi.z.zhang@intel.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id EE427A0350; Mon, 29 Jun 2020 07:36:01 +0200 (CEST) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id AA7331BEBA; Mon, 29 Jun 2020 07:35:44 +0200 (CEST) Received: from mga09.intel.com (mga09.intel.com [134.134.136.24]) by dpdk.org (Postfix) with ESMTP id A95471BE97; Mon, 29 Jun 2020 07:35:33 +0200 (CEST) IronPort-SDR: pHf2W/+iFUg9irZD0Omp7GJECBi4U5hu6CehwNO1HGbb08QFPAKp2yF5ma6tajl8m3Hg7KEhei Lj+zOTfj6ihQ== X-IronPort-AV: E=McAfee;i="6000,8403,9666"; a="147458194" X-IronPort-AV: E=Sophos;i="5.75,294,1589266800"; d="scan'208";a="147458194" X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from 
fmsmga004.fm.intel.com ([10.253.24.48]) by orsmga102.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 28 Jun 2020 22:35:33 -0700 IronPort-SDR: syYhUxMdHPpqv684MOIlDV/yH8p0MOVKRSgQdmEkuHcw8LpC6a8i8B9B9m2ri16xPClTtonAG0 RM6IGSZXn+TA== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.75,294,1589266800"; d="scan'208";a="302948827" Received: from unknown (HELO localhost.localdomain.bj.intel.com) ([172.16.182.123]) by fmsmga004.fm.intel.com with ESMTP; 28 Jun 2020 22:35:31 -0700 From: Wei Zhao To: dev@dpdk.org Cc: stable@dpdk.org, qi.z.zhang@intel.com, nannan.lu@intel.com, Wei Zhao Date: Mon, 29 Jun 2020 13:10:29 +0800 Message-Id: <20200629051030.3541-5-wei.zhao1@intel.com> X-Mailer: git-send-email 2.19.1 In-Reply-To: <20200629051030.3541-1-wei.zhao1@intel.com> References: <20200628052857.67428-1-wei.zhao1@intel.com> <20200629051030.3541-1-wei.zhao1@intel.com> MIME-Version: 1.0 Subject: [dpdk-dev] [PATCH v5 4/5] net/ice: add input set byte number check X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" This patch adds a check on the total number of input set bytes, as the hardware limits the total to 32 bytes. Fixes: 47d460d63233 ("net/ice: rework switch filter") Cc: stable@dpdk.org Signed-off-by: Wei Zhao --- drivers/net/ice/ice_switch_filter.c | 43 +++++++++++++++++++++++++++-- 1 file changed, 40 insertions(+), 3 deletions(-) diff --git a/drivers/net/ice/ice_switch_filter.c b/drivers/net/ice/ice_switch_filter.c index 7d1cd98f5..5054555c2 100644 --- a/drivers/net/ice/ice_switch_filter.c +++ b/drivers/net/ice/ice_switch_filter.c @@ -25,7 +25,8 @@ #include "ice_generic_flow.h" -#define MAX_QGRP_NUM_TYPE 7 +#define MAX_QGRP_NUM_TYPE 7 +#define MAX_INPUT_SET_BYTE 32 #define ICE_PPP_IPV4_PROTO 0x0021 #define ICE_PPP_IPV6_PROTO 0x0057 #define ICE_IPV4_PROTO_NVGRE 0x002F @@ -471,6 +472,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[], const struct rte_flow_item_l2tpv3oip *l2tp_spec, *l2tp_mask; const struct rte_flow_item_pfcp *pfcp_spec, *pfcp_mask; uint64_t input_set = ICE_INSET_NONE; + uint16_t input_set_byte = 0; bool pppoe_elem_valid = 0; @@ -540,6 +542,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[], m->src_addr[j] = eth_mask->src.addr_bytes[j]; i = 1; + input_set_byte++; } if (eth_mask->dst.addr_bytes[j]) { h->dst_addr[j] = eth_spec->dst.addr_bytes[j]; m->dst_addr[j] = eth_mask->dst.addr_bytes[j]; i = 1; + input_set_byte++; } } if (i) @@ -557,6 +561,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[], eth_spec->type; list[t].m_u.ethertype.ethtype_id = eth_mask->type; + input_set_byte += 2; t++; } } @@ -616,24 +621,28 @@ ice_switch_inset_get(const struct rte_flow_item pattern[], ipv4_spec->hdr.src_addr; list[t].m_u.ipv4_hdr.src_addr = ipv4_mask->hdr.src_addr; + input_set_byte += 2; } if (ipv4_mask->hdr.dst_addr) { list[t].h_u.ipv4_hdr.dst_addr = ipv4_spec->hdr.dst_addr; list[t].m_u.ipv4_hdr.dst_addr = ipv4_mask->hdr.dst_addr; + input_set_byte += 2; } if (ipv4_mask->hdr.time_to_live) { list[t].h_u.ipv4_hdr.time_to_live = ipv4_spec->hdr.time_to_live; list[t].m_u.ipv4_hdr.time_to_live = ipv4_mask->hdr.time_to_live; + input_set_byte++; } if (ipv4_mask->hdr.next_proto_id) { list[t].h_u.ipv4_hdr.protocol = ipv4_spec->hdr.next_proto_id; list[t].m_u.ipv4_hdr.protocol = 
ipv4_mask->hdr.next_proto_id; + input_set_byte++; } if ((ipv4_spec->hdr.next_proto_id & ipv4_mask->hdr.next_proto_id) == @@ -644,6 +653,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[], ipv4_spec->hdr.type_of_service; list[t].m_u.ipv4_hdr.tos = ipv4_mask->hdr.type_of_service; + input_set_byte++; } t++; } @@ -721,12 +731,14 @@ ice_switch_inset_get(const struct rte_flow_item pattern[], ipv6_spec->hdr.src_addr[j]; s->src_addr[j] = ipv6_mask->hdr.src_addr[j]; + input_set_byte++; } if (ipv6_mask->hdr.dst_addr[j]) { f->dst_addr[j] = ipv6_spec->hdr.dst_addr[j]; s->dst_addr[j] = ipv6_mask->hdr.dst_addr[j]; + input_set_byte++; } } if (ipv6_mask->hdr.proto) { @@ -734,12 +746,14 @@ ice_switch_inset_get(const struct rte_flow_item pattern[], ipv6_spec->hdr.proto; s->next_hdr = ipv6_mask->hdr.proto; + input_set_byte++; } if (ipv6_mask->hdr.hop_limits) { f->hop_limit = ipv6_spec->hdr.hop_limits; s->hop_limit = ipv6_mask->hdr.hop_limits; + input_set_byte++; } if (ipv6_mask->hdr.vtc_flow & rte_cpu_to_be_32 @@ -757,6 +771,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[], RTE_IPV6_HDR_TC_MASK) >> RTE_IPV6_HDR_TC_SHIFT; s->be_ver_tc_flow = CPU_TO_BE32(vtf.u.val); + input_set_byte += 4; } t++; } @@ -802,14 +817,16 @@ ice_switch_inset_get(const struct rte_flow_item pattern[], udp_spec->hdr.src_port; list[t].m_u.l4_hdr.src_port = udp_mask->hdr.src_port; + input_set_byte += 2; } if (udp_mask->hdr.dst_port) { list[t].h_u.l4_hdr.dst_port = udp_spec->hdr.dst_port; list[t].m_u.l4_hdr.dst_port = udp_mask->hdr.dst_port; + input_set_byte += 2; } - t++; + t++; } break; @@ -854,12 +871,14 @@ ice_switch_inset_get(const struct rte_flow_item pattern[], tcp_spec->hdr.src_port; list[t].m_u.l4_hdr.src_port = tcp_mask->hdr.src_port; + input_set_byte += 2; } if (tcp_mask->hdr.dst_port) { list[t].h_u.l4_hdr.dst_port = tcp_spec->hdr.dst_port; list[t].m_u.l4_hdr.dst_port = tcp_mask->hdr.dst_port; + input_set_byte += 2; } t++; } @@ -899,12 +918,14 @@ ice_switch_inset_get(const struct rte_flow_item pattern[], sctp_spec->hdr.src_port; list[t].m_u.sctp_hdr.src_port = sctp_mask->hdr.src_port; + input_set_byte += 2; } if (sctp_mask->hdr.dst_port) { list[t].h_u.sctp_hdr.dst_port = sctp_spec->hdr.dst_port; list[t].m_u.sctp_hdr.dst_port = sctp_mask->hdr.dst_port; + input_set_byte += 2; } t++; } @@ -942,6 +963,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[], vxlan_mask->vni[0]; input_set |= ICE_INSET_TUN_VXLAN_VNI; + input_set_byte += 2; } t++; } @@ -979,6 +1001,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[], nvgre_mask->tni[0]; input_set |= ICE_INSET_TUN_NVGRE_TNI; + input_set_byte += 2; } t++; } @@ -1007,6 +1030,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[], list[t].m_u.vlan_hdr.vlan = vlan_mask->tci; input_set |= ICE_INSET_VLAN_OUTER; + input_set_byte += 2; } if (vlan_mask->inner_type) { list[t].h_u.vlan_hdr.type = @@ -1014,6 +1038,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[], list[t].m_u.vlan_hdr.type = vlan_mask->inner_type; input_set |= ICE_INSET_ETHERTYPE; + input_set_byte += 2; } t++; } @@ -1054,6 +1079,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[], list[t].m_u.pppoe_hdr.session_id = pppoe_mask->session_id; input_set |= ICE_INSET_PPPOE_SESSION; + input_set_byte += 2; } t++; pppoe_elem_valid = 1; @@ -1086,7 +1112,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[], list[t].m_u.pppoe_hdr.ppp_prot_id = pppoe_proto_mask->proto_id; input_set |= ICE_INSET_PPPOE_PROTO; - + input_set_byte += 2; pppoe_prot_valid = 1; } if 
((pppoe_proto_mask->proto_id & @@ -1143,6 +1169,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[], list[t].m_u.esp_hdr.spi = esp_mask->hdr.spi; input_set |= ICE_INSET_ESP_SPI; + input_set_byte += 4; t++; } @@ -1199,6 +1226,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[], list[t].m_u.ah_hdr.spi = ah_mask->spi; input_set |= ICE_INSET_AH_SPI; + input_set_byte += 4; t++; } @@ -1238,6 +1266,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[], list[t].m_u.l2tpv3_sess_hdr.session_id = l2tp_mask->session_id; input_set |= ICE_INSET_L2TPV3OIP_SESSION_ID; + input_set_byte += 4; t++; } @@ -1343,6 +1372,14 @@ ice_switch_inset_get(const struct rte_flow_item pattern[], *tun_type = ICE_SW_IPV6_UDP; } + if (input_set_byte > MAX_INPUT_SET_BYTE) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, + "too much input set"); + return -ENOTSUP; + } + *lkups_num = t; return input_set; From patchwork Mon Jun 29 05:10:30 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Zhao1, Wei" X-Patchwork-Id: 72401 X-Patchwork-Delegate: qi.z.zhang@intel.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id C14A2A0522; Mon, 29 Jun 2020 07:36:12 +0200 (CEST) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id AE3131BEC7; Mon, 29 Jun 2020 07:35:46 +0200 (CEST) Received: from mga09.intel.com (mga09.intel.com [134.134.136.24]) by dpdk.org (Postfix) with ESMTP id 70C0F2C01; Mon, 29 Jun 2020 07:35:35 +0200 (CEST) IronPort-SDR: I6zWyK6MaF87r3YumeisdqWNywHw/LSdxMVVmP48y2IuZmI8WS1oEEb49fGQLBuPSSmtkibQo9 WKXDOA923KPQ== X-IronPort-AV: E=McAfee;i="6000,8403,9666"; a="147458197" X-IronPort-AV: E=Sophos;i="5.75,294,1589266800"; d="scan'208";a="147458197" X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga004.fm.intel.com ([10.253.24.48]) by orsmga102.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 28 Jun 2020 22:35:35 -0700 IronPort-SDR: i7g8Uen4pFy2p2IMSD4zu1u7iYYOLmC+VxBILojBwveS41/EvE3uvXGy+BvALI4vKVZ6VBRBjQ RRYw9suU7eAA== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.75,294,1589266800"; d="scan'208";a="302948839" Received: from unknown (HELO localhost.localdomain.bj.intel.com) ([172.16.182.123]) by fmsmga004.fm.intel.com with ESMTP; 28 Jun 2020 22:35:33 -0700 From: Wei Zhao To: dev@dpdk.org Cc: stable@dpdk.org, qi.z.zhang@intel.com, nannan.lu@intel.com, Wei Zhao Date: Mon, 29 Jun 2020 13:10:30 +0800 Message-Id: <20200629051030.3541-6-wei.zhao1@intel.com> X-Mailer: git-send-email 2.19.1 In-Reply-To: <20200629051030.3541-1-wei.zhao1@intel.com> References: <20200628052857.67428-1-wei.zhao1@intel.com> <20200629051030.3541-1-wei.zhao1@intel.com> MIME-Version: 1.0 Subject: [dpdk-dev] [PATCH v5 5/5] net/ice: fix typo X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" fix typo of "valid". 
Fixes: 8f5d8e74fb38 ("net/ice: support flow for AH ESP and L2TP") Fixes: 66ff8851792f ("net/ice: support ESP/AH/L2TP") Fixes: 45b53ed3701d ("net/ice: support IPv6 NAT-T") Cc: stable@dpdk.org Signed-off-by: Wei Zhao --- drivers/net/ice/ice_switch_filter.c | 76 ++++++++++++++--------------- 1 file changed, 38 insertions(+), 38 deletions(-) diff --git a/drivers/net/ice/ice_switch_filter.c b/drivers/net/ice/ice_switch_filter.c index 5054555c2..267af5a54 100644 --- a/drivers/net/ice/ice_switch_filter.c +++ b/drivers/net/ice/ice_switch_filter.c @@ -480,10 +480,10 @@ ice_switch_inset_get(const struct rte_flow_item pattern[], bool profile_rule = 0; bool nvgre_valid = 0; bool vxlan_valid = 0; - bool ipv6_valiad = 0; - bool ipv4_valiad = 0; - bool udp_valiad = 0; - bool tcp_valiad = 0; + bool ipv6_valid = 0; + bool ipv4_valid = 0; + bool udp_valid = 0; + bool tcp_valid = 0; uint16_t j, t = 0; for (item = pattern; item->type != @@ -570,7 +570,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[], case RTE_FLOW_ITEM_TYPE_IPV4: ipv4_spec = item->spec; ipv4_mask = item->mask; - ipv4_valiad = 1; + ipv4_valid = 1; if (ipv4_spec && ipv4_mask) { /* Check IPv4 mask and update input set */ if (ipv4_mask->hdr.version_ihl || @@ -662,7 +662,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[], case RTE_FLOW_ITEM_TYPE_IPV6: ipv6_spec = item->spec; ipv6_mask = item->mask; - ipv6_valiad = 1; + ipv6_valid = 1; if (ipv6_spec && ipv6_mask) { if (ipv6_mask->hdr.payload_len) { rte_flow_error_set(error, EINVAL, @@ -780,7 +780,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[], case RTE_FLOW_ITEM_TYPE_UDP: udp_spec = item->spec; udp_mask = item->mask; - udp_valiad = 1; + udp_valid = 1; if (udp_spec && udp_mask) { /* Check UDP mask and update input set*/ if (udp_mask->hdr.dgram_len || @@ -833,7 +833,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[], case RTE_FLOW_ITEM_TYPE_TCP: tcp_spec = item->spec; tcp_mask = item->mask; - tcp_valiad = 1; + tcp_valid = 1; if (tcp_spec && tcp_mask) { /* Check TCP mask and update input set */ if (tcp_mask->hdr.sent_seq || @@ -1151,16 +1151,16 @@ ice_switch_inset_get(const struct rte_flow_item pattern[], if (!esp_spec && !esp_mask && !input_set) { profile_rule = 1; - if (ipv6_valiad && udp_valiad) + if (ipv6_valid && udp_valid) *tun_type = ICE_SW_TUN_PROFID_IPV6_NAT_T; - else if (ipv6_valiad) + else if (ipv6_valid) *tun_type = ICE_SW_TUN_PROFID_IPV6_ESP; - else if (ipv4_valiad) + else if (ipv4_valid) return 0; } else if (esp_spec && esp_mask && esp_mask->hdr.spi){ - if (udp_valiad) + if (udp_valid) list[t].type = ICE_NAT_T; else list[t].type = ICE_ESP; @@ -1174,13 +1174,13 @@ ice_switch_inset_get(const struct rte_flow_item pattern[], } if (!profile_rule) { - if (ipv6_valiad && udp_valiad) + if (ipv6_valid && udp_valid) *tun_type = ICE_SW_TUN_IPV6_NAT_T; - else if (ipv4_valiad && udp_valiad) + else if (ipv4_valid && udp_valid) *tun_type = ICE_SW_TUN_IPV4_NAT_T; - else if (ipv6_valiad) + else if (ipv6_valid) *tun_type = ICE_SW_TUN_IPV6_ESP; - else if (ipv4_valiad) + else if (ipv4_valid) *tun_type = ICE_SW_TUN_IPV4_ESP; } break; @@ -1211,12 +1211,12 @@ ice_switch_inset_get(const struct rte_flow_item pattern[], if (!ah_spec && !ah_mask && !input_set) { profile_rule = 1; - if (ipv6_valiad && udp_valiad) + if (ipv6_valid && udp_valid) *tun_type = ICE_SW_TUN_PROFID_IPV6_NAT_T; - else if (ipv6_valiad) + else if (ipv6_valid) *tun_type = ICE_SW_TUN_PROFID_IPV6_AH; - else if (ipv4_valiad) + else if (ipv4_valid) return 0; } else if (ah_spec && ah_mask && 
ah_mask->spi){ @@ -1231,11 +1231,11 @@ ice_switch_inset_get(const struct rte_flow_item pattern[], } if (!profile_rule) { - if (udp_valiad) + if (udp_valid) return 0; - else if (ipv6_valiad) + else if (ipv6_valid) *tun_type = ICE_SW_TUN_IPV6_AH; - else if (ipv4_valiad) + else if (ipv4_valid) *tun_type = ICE_SW_TUN_IPV4_AH; } break; @@ -1253,10 +1253,10 @@ ice_switch_inset_get(const struct rte_flow_item pattern[], } if (!l2tp_spec && !l2tp_mask && !input_set) { - if (ipv6_valiad) + if (ipv6_valid) *tun_type = ICE_SW_TUN_PROFID_MAC_IPV6_L2TPV3; - else if (ipv4_valiad) + else if (ipv4_valid) return 0; } else if (l2tp_spec && l2tp_mask && l2tp_mask->session_id){ @@ -1271,10 +1271,10 @@ ice_switch_inset_get(const struct rte_flow_item pattern[], } if (!profile_rule) { - if (ipv6_valiad) + if (ipv6_valid) *tun_type = ICE_SW_TUN_IPV6_L2TPV3; - else if (ipv4_valiad) + else if (ipv4_valid) *tun_type = ICE_SW_TUN_IPV4_L2TPV3; } @@ -1308,7 +1308,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[], } if (pfcp_mask->s_field && pfcp_spec->s_field == 0x01 && - ipv6_valiad) + ipv6_valid) *tun_type = ICE_SW_TUN_PROFID_IPV6_PFCP_SESSION; else if (pfcp_mask->s_field && @@ -1317,7 +1317,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[], ICE_SW_TUN_PROFID_IPV4_PFCP_SESSION; else if (pfcp_mask->s_field && !pfcp_spec->s_field && - ipv6_valiad) + ipv6_valid) *tun_type = ICE_SW_TUN_PROFID_IPV6_PFCP_NODE; else if (pfcp_mask->s_field && @@ -1341,17 +1341,17 @@ ice_switch_inset_get(const struct rte_flow_item pattern[], } if (pppoe_patt_valid && !pppoe_prot_valid) { - if (ipv6_valiad && udp_valiad) + if (ipv6_valid && udp_valid) *tun_type = ICE_SW_TUN_PPPOE_IPV6_UDP; - else if (ipv6_valiad && tcp_valiad) + else if (ipv6_valid && tcp_valid) *tun_type = ICE_SW_TUN_PPPOE_IPV6_TCP; - else if (ipv4_valiad && udp_valiad) + else if (ipv4_valid && udp_valid) *tun_type = ICE_SW_TUN_PPPOE_IPV4_UDP; - else if (ipv4_valiad && tcp_valiad) + else if (ipv4_valid && tcp_valid) *tun_type = ICE_SW_TUN_PPPOE_IPV4_TCP; - else if (ipv6_valiad) + else if (ipv6_valid) *tun_type = ICE_SW_TUN_PPPOE_IPV6; - else if (ipv4_valiad) + else if (ipv4_valid) *tun_type = ICE_SW_TUN_PPPOE_IPV4; else *tun_type = ICE_SW_TUN_PPPOE; @@ -1362,13 +1362,13 @@ ice_switch_inset_get(const struct rte_flow_item pattern[], *tun_type = ICE_SW_TUN_VXLAN; else if (nvgre_valid) *tun_type = ICE_SW_TUN_NVGRE; - else if (ipv4_valiad && tcp_valiad) + else if (ipv4_valid && tcp_valid) *tun_type = ICE_SW_IPV4_TCP; - else if (ipv4_valiad && udp_valiad) + else if (ipv4_valid && udp_valid) *tun_type = ICE_SW_IPV4_UDP; - else if (ipv6_valiad && tcp_valiad) + else if (ipv6_valid && tcp_valid) *tun_type = ICE_SW_IPV6_TCP; - else if (ipv6_valiad && udp_valiad) + else if (ipv6_valid && udp_valid) *tun_type = ICE_SW_IPV6_UDP; }
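For reference, the PPPoE support added in patch 1/5 of this series is exercised with rules that use the inner L3/L4 fields as input set. An illustrative testpmd command (the address, port and queue index are example values, not taken from the patches) could look like:

  flow create 0 ingress pattern eth / pppoes / ipv4 src is 192.168.1.1 / udp dst is 25 / end actions queue index 2 / end

When no pppoe_proto_id item is given, the parser derives the tunnel type (here ICE_SW_TUN_PPPOE_IPV4_UDP) from the IPv4/IPv6 and TCP/UDP items that follow the PPPoE session item.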