From patchwork Sun Jun 28 05:01:42 2020
X-Patchwork-Submitter: "Zhao1, Wei"
X-Patchwork-Id: 72360
X-Patchwork-Delegate: qi.z.zhang@intel.com
From: Wei Zhao
To: dev@dpdk.org
Cc: stable@dpdk.org, qi.z.zhang@intel.com, nannan.lu@intel.com, Wei Zhao
Date: Sun, 28 Jun 2020 13:01:42 +0800
Message-Id: <20200628050145.72810-2-wei.zhao1@intel.com>
In-Reply-To: <20200628050145.72810-1-wei.zhao1@intel.com>
References: <20200617061429.6447-1-wei.zhao1@intel.com>
 <20200628050145.72810-1-wei.zhao1@intel.com>
Subject: [dpdk-dev] [PATCH v3 1/4] net/ice: add support for more PPPoE
 packet types for switch

This patch adds more support to the switch parser for PPPoE packets.
It enables TCP/UDP L4-layer and IPv4/IPv6 L3-layer parsing of the
PPPoE payload, so that the L4 dst/src ports and the L3 IP addresses
can be used in the input set of PPPoE-related switch filter rules.

Signed-off-by: Wei Zhao
---
 doc/guides/rel_notes/release_20_08.rst |   2 +
 drivers/net/ice/ice_switch_filter.c    | 115 +++++++++++++++++++++----
 2 files changed, 102 insertions(+), 15 deletions(-)

diff --git a/doc/guides/rel_notes/release_20_08.rst b/doc/guides/rel_notes/release_20_08.rst
index 3c40424cc..79ef218b9 100644
--- a/doc/guides/rel_notes/release_20_08.rst
+++ b/doc/guides/rel_notes/release_20_08.rst
@@ -86,6 +86,8 @@ New Features
   Updated the Intel ice driver with new features and improvements, including:

   * Added support for DCF datapath configuration.
+  * Added support for more PPPoE packet types for switch filter.
+
 Removed Items
 -------------

diff --git a/drivers/net/ice/ice_switch_filter.c b/drivers/net/ice/ice_switch_filter.c
index 5ccd020c5..3c0c36bce 100644
--- a/drivers/net/ice/ice_switch_filter.c
+++ b/drivers/net/ice/ice_switch_filter.c
@@ -26,6 +26,8 @@

 #define MAX_QGRP_NUM_TYPE 7

+#define ICE_PPP_IPV4_PROTO	0x0021
+#define ICE_PPP_IPV6_PROTO	0x0057

 #define ICE_SW_INSET_ETHER ( \
 	ICE_INSET_DMAC | ICE_INSET_SMAC | ICE_INSET_ETHERTYPE)
@@ -95,6 +97,18 @@
 	ICE_INSET_VLAN_OUTER | ICE_INSET_VLAN_INNER | \
 	ICE_INSET_DMAC | ICE_INSET_ETHERTYPE | ICE_INSET_PPPOE_SESSION | \
 	ICE_INSET_PPPOE_PROTO)
+#define ICE_SW_INSET_MAC_PPPOE_IPV4 ( \
+	ICE_SW_INSET_MAC_PPPOE | ICE_SW_INSET_MAC_IPV4)
+#define ICE_SW_INSET_MAC_PPPOE_IPV4_TCP ( \
+	ICE_SW_INSET_MAC_PPPOE | ICE_SW_INSET_MAC_IPV4_TCP)
+#define ICE_SW_INSET_MAC_PPPOE_IPV4_UDP ( \
+	ICE_SW_INSET_MAC_PPPOE | ICE_SW_INSET_MAC_IPV4_UDP)
+#define ICE_SW_INSET_MAC_PPPOE_IPV6 ( \
+	ICE_SW_INSET_MAC_PPPOE | ICE_SW_INSET_MAC_IPV6)
+#define ICE_SW_INSET_MAC_PPPOE_IPV6_TCP ( \
+	ICE_SW_INSET_MAC_PPPOE | ICE_SW_INSET_MAC_IPV6_TCP)
+#define ICE_SW_INSET_MAC_PPPOE_IPV6_UDP ( \
+	ICE_SW_INSET_MAC_PPPOE | ICE_SW_INSET_MAC_IPV6_UDP)
 #define ICE_SW_INSET_MAC_IPV4_ESP ( \
 	ICE_SW_INSET_MAC_IPV4 | ICE_INSET_ESP_SPI)
 #define ICE_SW_INSET_MAC_IPV6_ESP ( \
@@ -154,10 +168,6 @@ ice_pattern_match_item ice_switch_pattern_dist_comms[] = {
 	 ICE_SW_INSET_DIST_NVGRE_IPV4_UDP, ICE_INSET_NONE},
 	{pattern_eth_ipv4_nvgre_eth_ipv4_tcp,
 	 ICE_SW_INSET_DIST_NVGRE_IPV4_TCP, ICE_INSET_NONE},
-	{pattern_eth_pppoed,
-	 ICE_SW_INSET_MAC_PPPOE, ICE_INSET_NONE},
-	{pattern_eth_vlan_pppoed,
-	 ICE_SW_INSET_MAC_PPPOE, ICE_INSET_NONE},
 	{pattern_eth_pppoes,
 	 ICE_SW_INSET_MAC_PPPOE, ICE_INSET_NONE},
 	{pattern_eth_vlan_pppoes,
@@ -166,6 +176,30 @@ ice_pattern_match_item ice_switch_pattern_dist_comms[] = {
 	 ICE_SW_INSET_MAC_PPPOE_PROTO, ICE_INSET_NONE},
 	{pattern_eth_vlan_pppoes_proto,
 	 ICE_SW_INSET_MAC_PPPOE_PROTO, ICE_INSET_NONE},
+	{pattern_eth_pppoes_ipv4,
+	 ICE_SW_INSET_MAC_PPPOE_IPV4, ICE_INSET_NONE},
+	{pattern_eth_pppoes_ipv4_tcp,
+	 ICE_SW_INSET_MAC_PPPOE_IPV4_TCP, ICE_INSET_NONE},
+	{pattern_eth_pppoes_ipv4_udp,
+	 ICE_SW_INSET_MAC_PPPOE_IPV4_UDP, ICE_INSET_NONE},
+	{pattern_eth_pppoes_ipv6,
+	 ICE_SW_INSET_MAC_PPPOE_IPV6, ICE_INSET_NONE},
+	{pattern_eth_pppoes_ipv6_tcp,
+	 ICE_SW_INSET_MAC_PPPOE_IPV6_TCP, ICE_INSET_NONE},
+	{pattern_eth_pppoes_ipv6_udp,
+	 ICE_SW_INSET_MAC_PPPOE_IPV6_UDP, ICE_INSET_NONE},
+	{pattern_eth_vlan_pppoes_ipv4,
+	 ICE_SW_INSET_MAC_PPPOE_IPV4, ICE_INSET_NONE},
+	{pattern_eth_vlan_pppoes_ipv4_tcp,
+	 ICE_SW_INSET_MAC_PPPOE_IPV4_TCP, ICE_INSET_NONE},
+	{pattern_eth_vlan_pppoes_ipv4_udp,
+	 ICE_SW_INSET_MAC_PPPOE_IPV4_UDP, ICE_INSET_NONE},
+	{pattern_eth_vlan_pppoes_ipv6,
+	 ICE_SW_INSET_MAC_PPPOE_IPV6, ICE_INSET_NONE},
+	{pattern_eth_vlan_pppoes_ipv6_tcp,
+	 ICE_SW_INSET_MAC_PPPOE_IPV6_TCP, ICE_INSET_NONE},
+	{pattern_eth_vlan_pppoes_ipv6_udp,
+	 ICE_SW_INSET_MAC_PPPOE_IPV6_UDP, ICE_INSET_NONE},
 	{pattern_eth_ipv4_esp,
 	 ICE_SW_INSET_MAC_IPV4_ESP, ICE_INSET_NONE},
 	{pattern_eth_ipv4_udp_esp,
@@ -254,10 +288,6 @@ ice_pattern_match_item ice_switch_pattern_perm[] = {
 	 ICE_SW_INSET_PERM_TUNNEL_IPV4_UDP, ICE_INSET_NONE},
 	{pattern_eth_ipv4_nvgre_eth_ipv4_tcp,
 	 ICE_SW_INSET_PERM_TUNNEL_IPV4_TCP, ICE_INSET_NONE},
-	{pattern_eth_pppoed,
-	 ICE_SW_INSET_MAC_PPPOE, ICE_INSET_NONE},
-	{pattern_eth_vlan_pppoed,
-	 ICE_SW_INSET_MAC_PPPOE, ICE_INSET_NONE},
 	{pattern_eth_pppoes,
 	 ICE_SW_INSET_MAC_PPPOE, ICE_INSET_NONE},
 	{pattern_eth_vlan_pppoes,
@@ -266,6 +296,30 @@ ice_pattern_match_item ice_switch_pattern_perm[] = {
 	 ICE_SW_INSET_MAC_PPPOE_PROTO, ICE_INSET_NONE},
 	{pattern_eth_vlan_pppoes_proto,
 	 ICE_SW_INSET_MAC_PPPOE_PROTO, ICE_INSET_NONE},
+	{pattern_eth_pppoes_ipv4,
+	 ICE_SW_INSET_MAC_PPPOE_IPV4, ICE_INSET_NONE},
+	{pattern_eth_pppoes_ipv4_tcp,
+	 ICE_SW_INSET_MAC_PPPOE_IPV4_TCP, ICE_INSET_NONE},
+	{pattern_eth_pppoes_ipv4_udp,
+	 ICE_SW_INSET_MAC_PPPOE_IPV4_UDP, ICE_INSET_NONE},
+	{pattern_eth_pppoes_ipv6,
+	 ICE_SW_INSET_MAC_PPPOE_IPV6, ICE_INSET_NONE},
+	{pattern_eth_pppoes_ipv6_tcp,
+	 ICE_SW_INSET_MAC_PPPOE_IPV6_TCP, ICE_INSET_NONE},
+	{pattern_eth_pppoes_ipv6_udp,
+	 ICE_SW_INSET_MAC_PPPOE_IPV6_UDP, ICE_INSET_NONE},
+	{pattern_eth_vlan_pppoes_ipv4,
+	 ICE_SW_INSET_MAC_PPPOE_IPV4, ICE_INSET_NONE},
+	{pattern_eth_vlan_pppoes_ipv4_tcp,
+	 ICE_SW_INSET_MAC_PPPOE_IPV4_TCP, ICE_INSET_NONE},
+	{pattern_eth_vlan_pppoes_ipv4_udp,
+	 ICE_SW_INSET_MAC_PPPOE_IPV4_UDP, ICE_INSET_NONE},
+	{pattern_eth_vlan_pppoes_ipv6,
+	 ICE_SW_INSET_MAC_PPPOE_IPV6, ICE_INSET_NONE},
+	{pattern_eth_vlan_pppoes_ipv6_tcp,
+	 ICE_SW_INSET_MAC_PPPOE_IPV6_TCP, ICE_INSET_NONE},
+	{pattern_eth_vlan_pppoes_ipv6_udp,
+	 ICE_SW_INSET_MAC_PPPOE_IPV6_UDP, ICE_INSET_NONE},
 	{pattern_eth_ipv4_esp,
 	 ICE_SW_INSET_MAC_IPV4_ESP, ICE_INSET_NONE},
 	{pattern_eth_ipv4_udp_esp,
@@ -416,13 +470,16 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 	const struct rte_flow_item_l2tpv3oip *l2tp_spec, *l2tp_mask;
 	const struct rte_flow_item_pfcp *pfcp_spec, *pfcp_mask;
 	uint64_t input_set = ICE_INSET_NONE;
-	uint16_t j, t = 0;
+	bool pppoe_elem_valid = 0;
+	bool pppoe_patt_valid = 0;
+	bool pppoe_prot_valid = 0;
 	bool profile_rule = 0;
 	bool tunnel_valid = 0;
-	bool pppoe_valid = 0;
 	bool ipv6_valiad = 0;
 	bool ipv4_valiad = 0;
 	bool udp_valiad = 0;
+	bool tcp_valiad = 0;
+	uint16_t j, t = 0;

 	for (item = pattern; item->type !=
 			RTE_FLOW_ITEM_TYPE_END; item++) {
@@ -752,6 +809,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 		case RTE_FLOW_ITEM_TYPE_TCP:
 			tcp_spec = item->spec;
 			tcp_mask = item->mask;
+			tcp_valiad = 1;
 			if (tcp_spec && tcp_mask) {
 				/* Check TCP mask and update input set */
 				if (tcp_mask->hdr.sent_seq ||
@@ -969,6 +1027,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 					   "Invalid pppoe item");
 				return 0;
 			}
+			pppoe_patt_valid = 1;
 			if (pppoe_spec && pppoe_mask) {
 				/* Check pppoe mask and update input set */
 				if (pppoe_mask->length ||
@@ -989,7 +1048,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 					input_set |= ICE_INSET_PPPOE_SESSION;
 				}
 				t++;
-				pppoe_valid = 1;
+				pppoe_elem_valid = 1;
 			}
 			break;

@@ -1010,7 +1069,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 				return 0;
 			}
 			if (pppoe_proto_spec && pppoe_proto_mask) {
-				if (pppoe_valid)
+				if (pppoe_elem_valid)
 					t--;
 				list[t].type = ICE_PPPOE;
 				if (pppoe_proto_mask->proto_id) {
@@ -1019,9 +1078,21 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 					list[t].m_u.pppoe_hdr.ppp_prot_id =
 						pppoe_proto_mask->proto_id;
 					input_set |= ICE_INSET_PPPOE_PROTO;
+
+					pppoe_prot_valid = 1;
 				}
+				if ((pppoe_proto_mask->proto_id &
+					pppoe_proto_spec->proto_id) !=
+				    CPU_TO_BE16(ICE_PPP_IPV4_PROTO) &&
+				    (pppoe_proto_mask->proto_id &
+					pppoe_proto_spec->proto_id) !=
+				    CPU_TO_BE16(ICE_PPP_IPV6_PROTO))
+					*tun_type = ICE_SW_TUN_PPPOE_PAY;
+				else
+					*tun_type = ICE_SW_TUN_PPPOE;
 				t++;
 			}
+
 			break;

 		case RTE_FLOW_ITEM_TYPE_ESP:
@@ -1232,6 +1303,23 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 		}
 	}

+	if (pppoe_patt_valid && !pppoe_prot_valid) {
+		if (ipv6_valiad && udp_valiad)
+			*tun_type = ICE_SW_TUN_PPPOE_IPV6_UDP;
+		else if (ipv6_valiad && tcp_valiad)
+			*tun_type = ICE_SW_TUN_PPPOE_IPV6_TCP;
+		else if (ipv4_valiad && udp_valiad)
+			*tun_type = ICE_SW_TUN_PPPOE_IPV4_UDP;
+		else if (ipv4_valiad && tcp_valiad)
+			*tun_type = ICE_SW_TUN_PPPOE_IPV4_TCP;
+		else if (ipv6_valiad)
+			*tun_type = ICE_SW_TUN_PPPOE_IPV6;
+		else if (ipv4_valiad)
+			*tun_type = ICE_SW_TUN_PPPOE_IPV4;
+		else
+			*tun_type = ICE_SW_TUN_PPPOE;
+	}
+
 	*lkups_num = t;

 	return input_set;
@@ -1447,9 +1535,6 @@ ice_switch_parse_pattern_action(struct ice_adapter *ad,
 			tun_type = ICE_SW_TUN_VXLAN;
 		if (item->type == RTE_FLOW_ITEM_TYPE_NVGRE)
 			tun_type = ICE_SW_TUN_NVGRE;
-		if (item->type == RTE_FLOW_ITEM_TYPE_PPPOED ||
-				item->type == RTE_FLOW_ITEM_TYPE_PPPOES)
-			tun_type = ICE_SW_TUN_PPPOE;
 		if (item->type == RTE_FLOW_ITEM_TYPE_ETH) {
 			const struct rte_flow_item_eth *eth_mask;
 			if (item->mask)
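To make the new coverage concrete, a minimal sketch of the kind of rule
these pattern entries enable (not part of the patch; the helper name
create_pppoe_ipv4_udp_rule and every literal value are invented for
illustration):

#include <rte_flow.h>
#include <rte_ip.h>
#include <rte_byteorder.h>

/* Example only: match UDP dst port 23 on the inner IPv4 src address
 * inside PPPoE session 3, and steer the hits to queue 2. */
static struct rte_flow *
create_pppoe_ipv4_udp_rule(uint16_t port_id, struct rte_flow_error *err)
{
	struct rte_flow_attr attr = { .ingress = 1 };
	struct rte_flow_item_pppoe pppoe_spec = { .session_id = RTE_BE16(3) };
	struct rte_flow_item_pppoe pppoe_mask = { .session_id = RTE_BE16(0xffff) };
	struct rte_flow_item_ipv4 ip_spec = {
		.hdr.src_addr = RTE_BE32(RTE_IPV4(192, 168, 1, 1)),
	};
	struct rte_flow_item_ipv4 ip_mask = {
		.hdr.src_addr = RTE_BE32(0xffffffff),
	};
	struct rte_flow_item_udp udp_spec = { .hdr.dst_port = RTE_BE16(23) };
	struct rte_flow_item_udp udp_mask = { .hdr.dst_port = RTE_BE16(0xffff) };
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH },
		{ .type = RTE_FLOW_ITEM_TYPE_PPPOES,
		  .spec = &pppoe_spec, .mask = &pppoe_mask },
		{ .type = RTE_FLOW_ITEM_TYPE_IPV4,
		  .spec = &ip_spec, .mask = &ip_mask },
		{ .type = RTE_FLOW_ITEM_TYPE_UDP,
		  .spec = &udp_spec, .mask = &udp_mask },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct rte_flow_action_queue queue = { .index = 2 };
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};

	return rte_flow_create(port_id, &attr, pattern, actions, err);
}

This corresponds to the new pattern_eth_pppoes_ipv4_udp entry; before
this patch the IPv4 and UDP items after the PPPoE item could not
contribute to the input set.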
From patchwork Sun Jun 28 05:01:43 2020
X-Patchwork-Submitter: "Zhao1, Wei"
X-Patchwork-Id: 72361
X-Patchwork-Delegate: qi.z.zhang@intel.com
From: Wei Zhao
To: dev@dpdk.org
Cc: stable@dpdk.org, qi.z.zhang@intel.com, nannan.lu@intel.com, Wei Zhao
Date: Sun, 28 Jun 2020 13:01:43 +0800
Message-Id: <20200628050145.72810-3-wei.zhao1@intel.com>
In-Reply-To: <20200628050145.72810-1-wei.zhao1@intel.com>
References: <20200617061429.6447-1-wei.zhao1@intel.com>
 <20200628050145.72810-1-wei.zhao1@intel.com>
Subject: [dpdk-dev] [PATCH v3 2/4] net/ice: fix tunnel type for switch rule

This patch adds a check on the protocol type of IPv4 packets: the
tunnel type needs to be updated to ICE_SW_TUN_AND_NON_TUN when NVGRE
is carried in the payload.
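As an illustration (values invented, not taken from the patch), the
case being handled is an outer-IPv4 rule whose masked next_proto_id
selects 0x2F, i.e. GRE, the IP protocol NVGRE rides on; such a rule
must be programmed for both tunnel and non-tunnel lookup:

/* Example pattern fragment: masked next_proto_id == 0x2F (GRE/NVGRE),
 * so the parser picks ICE_SW_TUN_AND_NON_TUN for this rule only. */
struct rte_flow_item_ipv4 ip_spec = { .hdr.next_proto_id = 0x2f };
struct rte_flow_item_ipv4 ip_mask = { .hdr.next_proto_id = 0xff };
struct rte_flow_item pattern[] = {
	{ .type = RTE_FLOW_ITEM_TYPE_ETH },
	{ .type = RTE_FLOW_ITEM_TYPE_IPV4,
	  .spec = &ip_spec, .mask = &ip_mask },
	{ .type = RTE_FLOW_ITEM_TYPE_END },
};

All other rules now start from the ICE_NON_TUN default instead of the
old blanket ICE_SW_TUN_AND_NON_TUN.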
Fixes: 6bc7628c5e0b ("net/ice: change default tunnel type")
Cc: stable@dpdk.org

Signed-off-by: Wei Zhao
---
 drivers/net/ice/ice_switch_filter.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ice/ice_switch_filter.c b/drivers/net/ice/ice_switch_filter.c
index 3c0c36bce..c607e8d17 100644
--- a/drivers/net/ice/ice_switch_filter.c
+++ b/drivers/net/ice/ice_switch_filter.c
@@ -28,6 +28,7 @@
 #define MAX_QGRP_NUM_TYPE 7
 #define ICE_PPP_IPV4_PROTO	0x0021
 #define ICE_PPP_IPV6_PROTO	0x0057
+#define ICE_IPV4_PROTO_NVGRE	0x002F

 #define ICE_SW_INSET_ETHER ( \
 	ICE_INSET_DMAC | ICE_INSET_SMAC | ICE_INSET_ETHERTYPE)
@@ -632,6 +633,10 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 					list[t].m_u.ipv4_hdr.protocol =
 						ipv4_mask->hdr.next_proto_id;
 				}
+				if ((ipv4_spec->hdr.next_proto_id &
+					ipv4_mask->hdr.next_proto_id) ==
+					ICE_IPV4_PROTO_NVGRE)
+					*tun_type = ICE_SW_TUN_AND_NON_TUN;
 				if (ipv4_mask->hdr.type_of_service) {
 					list[t].h_u.ipv4_hdr.tos =
 						ipv4_spec->hdr.type_of_service;
@@ -1526,7 +1531,7 @@ ice_switch_parse_pattern_action(struct ice_adapter *ad,
 	const struct rte_flow_item *item = pattern;
 	uint16_t item_num = 0;
 	enum ice_sw_tunnel_type tun_type =
-		ICE_SW_TUN_AND_NON_TUN;
+		ICE_NON_TUN;
 	struct ice_pattern_match_item *pattern_match_item = NULL;

 	for (; item->type != RTE_FLOW_ITEM_TYPE_END; item++) {

From patchwork Sun Jun 28 05:01:44 2020
X-Patchwork-Submitter: "Zhao1, Wei"
X-Patchwork-Id: 72362
X-Patchwork-Delegate: qi.z.zhang@intel.com
From: Wei Zhao
To: dev@dpdk.org
Cc: stable@dpdk.org, qi.z.zhang@intel.com, nannan.lu@intel.com, Wei Zhao
Date: Sun, 28 Jun 2020 13:01:44 +0800
Message-Id: <20200628050145.72810-4-wei.zhao1@intel.com>
In-Reply-To: <20200628050145.72810-1-wei.zhao1@intel.com>
References: <20200617061429.6447-1-wei.zhao1@intel.com>
 <20200628050145.72810-1-wei.zhao1@intel.com>
Subject: [dpdk-dev] [PATCH v3 3/4] net/ice: support switch flow for specific
 L4 type
This patch adds more specific tunnel types for IPv4/IPv6 packets. It
allows the TCP/UDP layer of IPv4/IPv6 packets to be matched as L4
payload without requiring the L4 dst/src port numbers in the input set
of the switch filter rule.

Fixes: 47d460d63233 ("net/ice: rework switch filter")
Cc: stable@dpdk.org

Signed-off-by: Wei Zhao
---
 drivers/net/ice/ice_switch_filter.c | 27 ++++++++++++++++++++-------
 1 file changed, 20 insertions(+), 7 deletions(-)

diff --git a/drivers/net/ice/ice_switch_filter.c b/drivers/net/ice/ice_switch_filter.c
index c607e8d17..c1ea74c73 100644
--- a/drivers/net/ice/ice_switch_filter.c
+++ b/drivers/net/ice/ice_switch_filter.c
@@ -29,6 +29,8 @@
 #define ICE_PPP_IPV4_PROTO	0x0021
 #define ICE_PPP_IPV6_PROTO	0x0057
 #define ICE_IPV4_PROTO_NVGRE	0x002F
+#define ICE_TUN_VXLAN_VALID	0x0001
+#define ICE_TUN_NVGRE_VALID	0x0002

 #define ICE_SW_INSET_ETHER ( \
 	ICE_INSET_DMAC | ICE_INSET_SMAC | ICE_INSET_ETHERTYPE)
@@ -471,11 +473,11 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 	const struct rte_flow_item_l2tpv3oip *l2tp_spec, *l2tp_mask;
 	const struct rte_flow_item_pfcp *pfcp_spec, *pfcp_mask;
 	uint64_t input_set = ICE_INSET_NONE;
+	uint16_t tunnel_valid = 0;
 	bool pppoe_elem_valid = 0;
 	bool pppoe_patt_valid = 0;
 	bool pppoe_prot_valid = 0;
 	bool profile_rule = 0;
-	bool tunnel_valid = 0;
 	bool ipv6_valiad = 0;
 	bool ipv4_valiad = 0;
 	bool udp_valiad = 0;
@@ -924,7 +926,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 				return 0;
 			}

-			tunnel_valid = 1;
+			tunnel_valid = ICE_TUN_VXLAN_VALID;
 			if (vxlan_spec && vxlan_mask) {
 				list[t].type = ICE_VXLAN;
 				if (vxlan_mask->vni[0] ||
@@ -960,7 +962,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 					   "Invalid NVGRE item");
 				return 0;
 			}
-			tunnel_valid = 1;
+			tunnel_valid = ICE_TUN_NVGRE_VALID;
 			if (nvgre_spec && nvgre_mask) {
 				list[t].type = ICE_NVGRE;
 				if (nvgre_mask->tni[0] ||
@@ -1325,6 +1327,21 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 			*tun_type = ICE_SW_TUN_PPPOE;
 	}

+	if (*tun_type == ICE_NON_TUN) {
+		if (tunnel_valid == ICE_TUN_VXLAN_VALID)
+			*tun_type = ICE_SW_TUN_VXLAN;
+		else if (tunnel_valid == ICE_TUN_NVGRE_VALID)
+			*tun_type = ICE_SW_TUN_NVGRE;
+		else if (ipv4_valiad && tcp_valiad)
+			*tun_type = ICE_SW_IPV4_TCP;
+		else if (ipv4_valiad && udp_valiad)
+			*tun_type = ICE_SW_IPV4_UDP;
+		else if (ipv6_valiad && tcp_valiad)
+			*tun_type = ICE_SW_IPV6_TCP;
+		else if (ipv6_valiad && udp_valiad)
+			*tun_type = ICE_SW_IPV6_UDP;
+	}
+
 	*lkups_num = t;

 	return input_set;
@@ -1536,10 +1553,6 @@ ice_switch_parse_pattern_action(struct ice_adapter *ad,

 	for (; item->type != RTE_FLOW_ITEM_TYPE_END; item++) {
 		item_num++;
-		if (item->type == RTE_FLOW_ITEM_TYPE_VXLAN)
-			tun_type = ICE_SW_TUN_VXLAN;
-		if (item->type == RTE_FLOW_ITEM_TYPE_NVGRE)
-			tun_type = ICE_SW_TUN_NVGRE;
 		if (item->type == RTE_FLOW_ITEM_TYPE_ETH) {
 			const struct rte_flow_item_eth *eth_mask;
 			if (item->mask)
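A sketch of the case this enables (illustrative values, not from the
patch): naming the TCP item without any spec/mask now yields the
specific ICE_SW_IPV4_TCP type rather than a plain IPv4 rule:

/* Example pattern fragment: the bare TCP item only fixes the L4
 * protocol type; no port bytes join the input set, yet the parser
 * now selects ICE_SW_IPV4_TCP for the rule. */
struct rte_flow_item_ipv4 ip_spec = {
	.hdr.dst_addr = RTE_BE32(RTE_IPV4(192, 168, 0, 1)),
};
struct rte_flow_item_ipv4 ip_mask = {
	.hdr.dst_addr = RTE_BE32(0xffffffff),
};
struct rte_flow_item pattern[] = {
	{ .type = RTE_FLOW_ITEM_TYPE_ETH },
	{ .type = RTE_FLOW_ITEM_TYPE_IPV4,
	  .spec = &ip_spec, .mask = &ip_mask },
	{ .type = RTE_FLOW_ITEM_TYPE_TCP },	/* L4 type only, no ports */
	{ .type = RTE_FLOW_ITEM_TYPE_END },
};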
From patchwork Sun Jun 28 05:01:45 2020
X-Patchwork-Submitter: "Zhao1, Wei"
X-Patchwork-Id: 72363
X-Patchwork-Delegate: qi.z.zhang@intel.com
From: Wei Zhao
To: dev@dpdk.org
Cc: stable@dpdk.org, qi.z.zhang@intel.com, nannan.lu@intel.com, Wei Zhao
Date: Sun, 28 Jun 2020 13:01:45 +0800
Message-Id: <20200628050145.72810-5-wei.zhao1@intel.com>
In-Reply-To: <20200628050145.72810-1-wei.zhao1@intel.com>
References: <20200617061429.6447-1-wei.zhao1@intel.com>
 <20200628050145.72810-1-wei.zhao1@intel.com>
Subject: [dpdk-dev] [PATCH v3 4/4] net/ice: add input set byte number check

This patch adds a check on the total number of input set bytes, since
the hardware limits the total input set of a rule to 32 bytes.
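To make the budget concrete, a worked example using the byte
accounting the diff below introduces (the rule and the numbers are
illustrative, not from the patch):

/* Hypothetical rule: fully masked DMAC + SMAC plus fully masked IPv6
 * src + dst, counted one byte per masked byte as in the diff below. */
uint16_t bytes = 0;
bytes += 6 + 6;   /* eth dst + src addresses  */
bytes += 16 + 16; /* ipv6 src + dst addresses */
/* bytes == 44 exceeds MAX_INPUT_SET_BYTE (32), so such a rule is
 * rejected with "too much input set". */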
Fixes: 47d460d63233 ("net/ice: rework switch filter")
Cc: stable@dpdk.org

Signed-off-by: Wei Zhao
---
 drivers/net/ice/ice_switch_filter.c | 43 +++++++++++++++++++++++++++--
 1 file changed, 40 insertions(+), 3 deletions(-)

diff --git a/drivers/net/ice/ice_switch_filter.c b/drivers/net/ice/ice_switch_filter.c
index c1ea74c73..a4d7fcb14 100644
--- a/drivers/net/ice/ice_switch_filter.c
+++ b/drivers/net/ice/ice_switch_filter.c
@@ -25,7 +25,8 @@

 #include "ice_generic_flow.h"

-#define MAX_QGRP_NUM_TYPE 7
+#define MAX_QGRP_NUM_TYPE	7
+#define MAX_INPUT_SET_BYTE	32
 #define ICE_PPP_IPV4_PROTO	0x0021
 #define ICE_PPP_IPV6_PROTO	0x0057
 #define ICE_IPV4_PROTO_NVGRE	0x002F
@@ -473,6 +474,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 	const struct rte_flow_item_l2tpv3oip *l2tp_spec, *l2tp_mask;
 	const struct rte_flow_item_pfcp *pfcp_spec, *pfcp_mask;
 	uint64_t input_set = ICE_INSET_NONE;
+	uint16_t field_vec_byte = 0;
 	uint16_t tunnel_valid = 0;
 	bool pppoe_elem_valid = 0;
 	bool pppoe_patt_valid = 0;
@@ -540,6 +542,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 					m->src_addr[j] =
 					eth_mask->src.addr_bytes[j];
 					i = 1;
+					field_vec_byte++;
 				}
 				if (eth_mask->dst.addr_bytes[j]) {
 					h->dst_addr[j] =
@@ -547,6 +550,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 					m->dst_addr[j] =
 					eth_mask->dst.addr_bytes[j];
 					i = 1;
+					field_vec_byte++;
 				}
 			}
 			if (i)
@@ -557,6 +561,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 					eth_spec->type;
 				list[t].m_u.ethertype.ethtype_id =
 					eth_mask->type;
+				field_vec_byte += 2;
 				t++;
 			}
 		}
@@ -616,24 +621,28 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 					ipv4_spec->hdr.src_addr;
 				list[t].m_u.ipv4_hdr.src_addr =
 					ipv4_mask->hdr.src_addr;
+				field_vec_byte += 2;
 			}
 			if (ipv4_mask->hdr.dst_addr) {
 				list[t].h_u.ipv4_hdr.dst_addr =
 					ipv4_spec->hdr.dst_addr;
 				list[t].m_u.ipv4_hdr.dst_addr =
 					ipv4_mask->hdr.dst_addr;
+				field_vec_byte += 2;
 			}
 			if (ipv4_mask->hdr.time_to_live) {
 				list[t].h_u.ipv4_hdr.time_to_live =
 					ipv4_spec->hdr.time_to_live;
 				list[t].m_u.ipv4_hdr.time_to_live =
 					ipv4_mask->hdr.time_to_live;
+				field_vec_byte++;
 			}
 			if (ipv4_mask->hdr.next_proto_id) {
 				list[t].h_u.ipv4_hdr.protocol =
 					ipv4_spec->hdr.next_proto_id;
 				list[t].m_u.ipv4_hdr.protocol =
 					ipv4_mask->hdr.next_proto_id;
+				field_vec_byte++;
 			}
 			if ((ipv4_spec->hdr.next_proto_id &
 				ipv4_mask->hdr.next_proto_id) ==
@@ -644,6 +653,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 					ipv4_spec->hdr.type_of_service;
 				list[t].m_u.ipv4_hdr.tos =
 					ipv4_mask->hdr.type_of_service;
+				field_vec_byte++;
 			}
 			t++;
 		}
@@ -721,12 +731,14 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 						ipv6_spec->hdr.src_addr[j];
 					s->src_addr[j] =
 						ipv6_mask->hdr.src_addr[j];
+					field_vec_byte++;
 				}
 				if (ipv6_mask->hdr.dst_addr[j]) {
 					f->dst_addr[j] =
 						ipv6_spec->hdr.dst_addr[j];
 					s->dst_addr[j] =
 						ipv6_mask->hdr.dst_addr[j];
+					field_vec_byte++;
 				}
 			}
 			if (ipv6_mask->hdr.proto) {
@@ -734,12 +746,14 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 					ipv6_spec->hdr.proto;
 				s->next_hdr =
 					ipv6_mask->hdr.proto;
+				field_vec_byte++;
 			}
 			if (ipv6_mask->hdr.hop_limits) {
 				f->hop_limit =
 					ipv6_spec->hdr.hop_limits;
 				s->hop_limit =
 					ipv6_mask->hdr.hop_limits;
+				field_vec_byte++;
 			}
 			if (ipv6_mask->hdr.vtc_flow &
 					rte_cpu_to_be_32
@@ -757,6 +771,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 						RTE_IPV6_HDR_TC_MASK) >>
 						RTE_IPV6_HDR_TC_SHIFT;
 				s->be_ver_tc_flow = CPU_TO_BE32(vtf.u.val);
+				field_vec_byte += 4;
 			}
 			t++;
 		}
@@ -802,14 +817,16 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 					udp_spec->hdr.src_port;
 				list[t].m_u.l4_hdr.src_port =
 					udp_mask->hdr.src_port;
+				field_vec_byte += 2;
 			}
 			if (udp_mask->hdr.dst_port) {
 				list[t].h_u.l4_hdr.dst_port =
 					udp_spec->hdr.dst_port;
 				list[t].m_u.l4_hdr.dst_port =
 					udp_mask->hdr.dst_port;
+				field_vec_byte += 2;
 			}
-				t++;
+			t++;
 		}
 		break;
@@ -854,12 +871,14 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 					tcp_spec->hdr.src_port;
 				list[t].m_u.l4_hdr.src_port =
 					tcp_mask->hdr.src_port;
+				field_vec_byte += 2;
 			}
 			if (tcp_mask->hdr.dst_port) {
 				list[t].h_u.l4_hdr.dst_port =
 					tcp_spec->hdr.dst_port;
 				list[t].m_u.l4_hdr.dst_port =
 					tcp_mask->hdr.dst_port;
+				field_vec_byte += 2;
 			}
 			t++;
 		}
@@ -899,12 +918,14 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 					sctp_spec->hdr.src_port;
 				list[t].m_u.sctp_hdr.src_port =
 					sctp_mask->hdr.src_port;
+				field_vec_byte += 2;
 			}
 			if (sctp_mask->hdr.dst_port) {
 				list[t].h_u.sctp_hdr.dst_port =
 					sctp_spec->hdr.dst_port;
 				list[t].m_u.sctp_hdr.dst_port =
 					sctp_mask->hdr.dst_port;
+				field_vec_byte += 2;
 			}
 			t++;
 		}
@@ -942,6 +963,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 						vxlan_mask->vni[0];
 					input_set |=
 						ICE_INSET_TUN_VXLAN_VNI;
+					field_vec_byte += 2;
 				}
 				t++;
 			}
@@ -978,6 +1000,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 						nvgre_mask->tni[0];
 					input_set |=
 						ICE_INSET_TUN_NVGRE_TNI;
+					field_vec_byte += 2;
 				}
 				t++;
 			}
@@ -1006,6 +1029,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 				list[t].m_u.vlan_hdr.vlan =
 					vlan_mask->tci;
 				input_set |= ICE_INSET_VLAN_OUTER;
+				field_vec_byte += 2;
 			}
 			if (vlan_mask->inner_type) {
 				list[t].h_u.vlan_hdr.type =
@@ -1013,6 +1037,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 				list[t].m_u.vlan_hdr.type =
 					vlan_mask->inner_type;
 				input_set |= ICE_INSET_ETHERTYPE;
+				field_vec_byte += 2;
 			}
 			t++;
 		}
@@ -1053,6 +1078,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 					list[t].m_u.pppoe_hdr.session_id =
 						pppoe_mask->session_id;
 					input_set |= ICE_INSET_PPPOE_SESSION;
+					field_vec_byte += 2;
 				}
 				t++;
 				pppoe_elem_valid = 1;
@@ -1085,7 +1111,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 					list[t].m_u.pppoe_hdr.ppp_prot_id =
 						pppoe_proto_mask->proto_id;
 					input_set |= ICE_INSET_PPPOE_PROTO;
-
+					field_vec_byte += 2;
 					pppoe_prot_valid = 1;
 				}
 				if ((pppoe_proto_mask->proto_id &
@@ -1142,6 +1168,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 				list[t].m_u.esp_hdr.spi =
 					esp_mask->hdr.spi;
 				input_set |= ICE_INSET_ESP_SPI;
+				field_vec_byte += 4;
 				t++;
 			}
@@ -1198,6 +1225,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 				list[t].m_u.ah_hdr.spi =
 					ah_mask->spi;
 				input_set |= ICE_INSET_AH_SPI;
+				field_vec_byte += 4;
 				t++;
 			}
@@ -1237,6 +1265,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 				list[t].m_u.l2tpv3_sess_hdr.session_id =
 					l2tp_mask->session_id;
 				input_set |= ICE_INSET_L2TPV3OIP_SESSION_ID;
+				field_vec_byte += 4;
 				t++;
 			}
@@ -1342,6 +1371,14 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 			*tun_type = ICE_SW_IPV6_UDP;
 	}

+	if (field_vec_byte > MAX_INPUT_SET_BYTE) {
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ITEM,
+			item,
+			"too much input set");
+		return 0;
+	}
+
 	*lkups_num = t;

 	return input_set;
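On the application side, the new limit can be probed before rule
creation. A sketch of assumed usage (not part of the patch; port_id,
attr, pattern and actions are taken to be built as in the earlier
examples):

#include <stdio.h>
#include <rte_flow.h>

struct rte_flow_error err = { 0 };

/* rte_flow_validate() runs the same parser, so an oversized input set
 * is reported here with the "too much input set" message. */
if (rte_flow_validate(port_id, &attr, pattern, actions, &err) != 0)
	printf("rule rejected: %s\n", err.message ? err.message : "(none)");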