From patchwork Wed Jun 17 06:14:26 2020
X-Patchwork-Id: 71664
From: Wei Zhao <wei.zhao1@intel.com>
To: dev@dpdk.org
Cc: stable@dpdk.org, qi.z.zhang@intel.com, Wei Zhao <wei.zhao1@intel.com>
Date: Wed, 17 Jun 2020 14:14:26 +0800
Message-Id: <20200617061429.6447-2-wei.zhao1@intel.com>
In-Reply-To: <20200617061429.6447-1-wei.zhao1@intel.com>
References: <20200605074031.16231-1-wei.zhao1@intel.com>
 <20200617061429.6447-1-wei.zhao1@intel.com>
Subject: [dpdk-dev] [PATCH v2 1/4] net/ice: add support for more PPPoE
 packet types for switch

This patch adds more support to the switch parser for PPPoE packets.
It enables parsing of the TCP/UDP L4 layer and the IPv4/IPv6 L3 layer
of the PPPoE payload, so the L4 destination/source ports and the L3 IP
addresses can be used as the input set of PPPoE-related switch filter
rules.

Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
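For illustration, a rule of the kind this patch enables could be built
through the public rte_flow API roughly as below. This is an editorial
sketch, not part of the patch: the port id, queue index, addresses and
port numbers are made-up example values, and ice-specific deployment
details (such as running the switch filter through DCF) are glossed over.

#include <stdint.h>
#include <rte_flow.h>
#include <rte_ip.h>
#include <rte_byteorder.h>

/* Sketch: steer PPPoE session traffic carrying IPv4/UDP to queue 2,
 * using the L3 addresses and L4 ports as the input set (example values).
 */
static int
create_pppoe_ipv4_udp_rule(uint16_t port_id)
{
	struct rte_flow_attr attr = { .ingress = 1 };
	struct rte_flow_item_ipv4 ip_spec = { .hdr = {
		.src_addr = rte_cpu_to_be_32(RTE_IPV4(192, 168, 1, 1)),
		.dst_addr = rte_cpu_to_be_32(RTE_IPV4(192, 168, 1, 2)),
	} };
	struct rte_flow_item_ipv4 ip_mask = { .hdr = {
		.src_addr = UINT32_MAX,
		.dst_addr = UINT32_MAX,
	} };
	struct rte_flow_item_udp udp_spec = { .hdr = {
		.src_port = rte_cpu_to_be_16(4000),
		.dst_port = rte_cpu_to_be_16(4001),
	} };
	struct rte_flow_item_udp udp_mask = { .hdr = {
		.src_port = UINT16_MAX,
		.dst_port = UINT16_MAX,
	} };
	/* eth / pppoes / ipv4 / udp: one of the pattern types added below. */
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH },
		{ .type = RTE_FLOW_ITEM_TYPE_PPPOES },
		{ .type = RTE_FLOW_ITEM_TYPE_IPV4,
		  .spec = &ip_spec, .mask = &ip_mask },
		{ .type = RTE_FLOW_ITEM_TYPE_UDP,
		  .spec = &udp_spec, .mask = &udp_mask },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct rte_flow_action_queue queue = { .index = 2 };
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};
	struct rte_flow_error error;

	/* Validate first, then create; both are standard rte_flow calls. */
	if (rte_flow_validate(port_id, &attr, pattern, actions, &error) != 0)
		return -1;
	return rte_flow_create(port_id, &attr, pattern, actions, &error) ?
	       0 : -1;
}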
---
 doc/guides/rel_notes/release_20_08.rst |   6 ++
 drivers/net/ice/ice_switch_filter.c    | 115 +++++++++++++++++++++----
 2 files changed, 106 insertions(+), 15 deletions(-)

diff --git a/doc/guides/rel_notes/release_20_08.rst b/doc/guides/rel_notes/release_20_08.rst
index 86d240213..d2193b0a6 100644
--- a/doc/guides/rel_notes/release_20_08.rst
+++ b/doc/guides/rel_notes/release_20_08.rst
@@ -55,6 +55,12 @@ New Features
      This section is a comment. Do not overwrite or remove it.
      Also, make sure to start the actual text at the margin.
      =========================================================
 
+* **Updated the Intel ice driver.**
+
+  Updated the Intel ice driver with new features and improvements, including:
+
+  * Added support for more PPPoE packet types in the switch filter.
+
 * **Updated Mellanox mlx5 driver.**
 
diff --git a/drivers/net/ice/ice_switch_filter.c b/drivers/net/ice/ice_switch_filter.c
index 20e8187d3..a5dd1f7ab 100644
--- a/drivers/net/ice/ice_switch_filter.c
+++ b/drivers/net/ice/ice_switch_filter.c
@@ -26,6 +26,8 @@
 
 #define MAX_QGRP_NUM_TYPE 7
 
+#define ICE_PPP_IPV4_PROTO	0x0021
+#define ICE_PPP_IPV6_PROTO	0x0057
 
 #define ICE_SW_INSET_ETHER ( \
 	ICE_INSET_DMAC | ICE_INSET_SMAC | ICE_INSET_ETHERTYPE)
@@ -95,6 +97,18 @@
 	ICE_INSET_VLAN_OUTER | ICE_INSET_VLAN_INNER | \
 	ICE_INSET_DMAC | ICE_INSET_ETHERTYPE | ICE_INSET_PPPOE_SESSION | \
 	ICE_INSET_PPPOE_PROTO)
+#define ICE_SW_INSET_MAC_PPPOE_IPV4 ( \
+	ICE_SW_INSET_MAC_PPPOE | ICE_SW_INSET_MAC_IPV4)
+#define ICE_SW_INSET_MAC_PPPOE_IPV4_TCP ( \
+	ICE_SW_INSET_MAC_PPPOE | ICE_SW_INSET_MAC_IPV4_TCP)
+#define ICE_SW_INSET_MAC_PPPOE_IPV4_UDP ( \
+	ICE_SW_INSET_MAC_PPPOE | ICE_SW_INSET_MAC_IPV4_UDP)
+#define ICE_SW_INSET_MAC_PPPOE_IPV6 ( \
+	ICE_SW_INSET_MAC_PPPOE | ICE_SW_INSET_MAC_IPV6)
+#define ICE_SW_INSET_MAC_PPPOE_IPV6_TCP ( \
+	ICE_SW_INSET_MAC_PPPOE | ICE_SW_INSET_MAC_IPV6_TCP)
+#define ICE_SW_INSET_MAC_PPPOE_IPV6_UDP ( \
+	ICE_SW_INSET_MAC_PPPOE | ICE_SW_INSET_MAC_IPV6_UDP)
 #define ICE_SW_INSET_MAC_IPV4_ESP ( \
 	ICE_SW_INSET_MAC_IPV4 | ICE_INSET_ESP_SPI)
 #define ICE_SW_INSET_MAC_IPV6_ESP ( \
@@ -154,10 +168,6 @@ ice_pattern_match_item ice_switch_pattern_dist_comms[] = {
 			ICE_SW_INSET_DIST_NVGRE_IPV4_UDP, ICE_INSET_NONE},
 	{pattern_eth_ipv4_nvgre_eth_ipv4_tcp,
 			ICE_SW_INSET_DIST_NVGRE_IPV4_TCP, ICE_INSET_NONE},
-	{pattern_eth_pppoed,
-			ICE_SW_INSET_MAC_PPPOE, ICE_INSET_NONE},
-	{pattern_eth_vlan_pppoed,
-			ICE_SW_INSET_MAC_PPPOE, ICE_INSET_NONE},
 	{pattern_eth_pppoes,
 			ICE_SW_INSET_MAC_PPPOE, ICE_INSET_NONE},
 	{pattern_eth_vlan_pppoes,
@@ -166,6 +176,30 @@ ice_pattern_match_item ice_switch_pattern_dist_comms[] = {
 			ICE_SW_INSET_MAC_PPPOE_PROTO, ICE_INSET_NONE},
 	{pattern_eth_vlan_pppoes_proto,
 			ICE_SW_INSET_MAC_PPPOE_PROTO, ICE_INSET_NONE},
+	{pattern_eth_pppoes_ipv4,
+			ICE_SW_INSET_MAC_PPPOE_IPV4, ICE_INSET_NONE},
+	{pattern_eth_pppoes_ipv4_tcp,
+			ICE_SW_INSET_MAC_PPPOE_IPV4_TCP, ICE_INSET_NONE},
+	{pattern_eth_pppoes_ipv4_udp,
+			ICE_SW_INSET_MAC_PPPOE_IPV4_UDP, ICE_INSET_NONE},
+	{pattern_eth_pppoes_ipv6,
+			ICE_SW_INSET_MAC_PPPOE_IPV6, ICE_INSET_NONE},
+	{pattern_eth_pppoes_ipv6_tcp,
+			ICE_SW_INSET_MAC_PPPOE_IPV6_TCP, ICE_INSET_NONE},
+	{pattern_eth_pppoes_ipv6_udp,
+			ICE_SW_INSET_MAC_PPPOE_IPV6_UDP, ICE_INSET_NONE},
+	{pattern_eth_vlan_pppoes_ipv4,
+			ICE_SW_INSET_MAC_PPPOE_IPV4, ICE_INSET_NONE},
+	{pattern_eth_vlan_pppoes_ipv4_tcp,
+			ICE_SW_INSET_MAC_PPPOE_IPV4_TCP, ICE_INSET_NONE},
+	{pattern_eth_vlan_pppoes_ipv4_udp,
+			ICE_SW_INSET_MAC_PPPOE_IPV4_UDP, ICE_INSET_NONE},
+	{pattern_eth_vlan_pppoes_ipv6,
+			ICE_SW_INSET_MAC_PPPOE_IPV6, ICE_INSET_NONE},
+	{pattern_eth_vlan_pppoes_ipv6_tcp,
+			ICE_SW_INSET_MAC_PPPOE_IPV6_TCP, ICE_INSET_NONE},
+	{pattern_eth_vlan_pppoes_ipv6_udp,
+			ICE_SW_INSET_MAC_PPPOE_IPV6_UDP, ICE_INSET_NONE},
 	{pattern_eth_ipv4_esp,
 			ICE_SW_INSET_MAC_IPV4_ESP, ICE_INSET_NONE},
 	{pattern_eth_ipv4_udp_esp,
@@ -254,10 +288,6 @@ ice_pattern_match_item ice_switch_pattern_perm[] = {
 			ICE_SW_INSET_PERM_TUNNEL_IPV4_UDP, ICE_INSET_NONE},
 	{pattern_eth_ipv4_nvgre_eth_ipv4_tcp,
 			ICE_SW_INSET_PERM_TUNNEL_IPV4_TCP, ICE_INSET_NONE},
-	{pattern_eth_pppoed,
-			ICE_SW_INSET_MAC_PPPOE, ICE_INSET_NONE},
-	{pattern_eth_vlan_pppoed,
-			ICE_SW_INSET_MAC_PPPOE, ICE_INSET_NONE},
 	{pattern_eth_pppoes,
 			ICE_SW_INSET_MAC_PPPOE, ICE_INSET_NONE},
 	{pattern_eth_vlan_pppoes,
@@ -266,6 +296,30 @@ ice_pattern_match_item ice_switch_pattern_perm[] = {
 			ICE_SW_INSET_MAC_PPPOE_PROTO, ICE_INSET_NONE},
 	{pattern_eth_vlan_pppoes_proto,
 			ICE_SW_INSET_MAC_PPPOE_PROTO, ICE_INSET_NONE},
+	{pattern_eth_pppoes_ipv4,
+			ICE_SW_INSET_MAC_PPPOE_IPV4, ICE_INSET_NONE},
+	{pattern_eth_pppoes_ipv4_tcp,
+			ICE_SW_INSET_MAC_PPPOE_IPV4_TCP, ICE_INSET_NONE},
+	{pattern_eth_pppoes_ipv4_udp,
+			ICE_SW_INSET_MAC_PPPOE_IPV4_UDP, ICE_INSET_NONE},
+	{pattern_eth_pppoes_ipv6,
+			ICE_SW_INSET_MAC_PPPOE_IPV6, ICE_INSET_NONE},
+	{pattern_eth_pppoes_ipv6_tcp,
+			ICE_SW_INSET_MAC_PPPOE_IPV6_TCP, ICE_INSET_NONE},
+	{pattern_eth_pppoes_ipv6_udp,
+			ICE_SW_INSET_MAC_PPPOE_IPV6_UDP, ICE_INSET_NONE},
+	{pattern_eth_vlan_pppoes_ipv4,
+			ICE_SW_INSET_MAC_PPPOE_IPV4, ICE_INSET_NONE},
+	{pattern_eth_vlan_pppoes_ipv4_tcp,
+			ICE_SW_INSET_MAC_PPPOE_IPV4_TCP, ICE_INSET_NONE},
+	{pattern_eth_vlan_pppoes_ipv4_udp,
+			ICE_SW_INSET_MAC_PPPOE_IPV4_UDP, ICE_INSET_NONE},
+	{pattern_eth_vlan_pppoes_ipv6,
+			ICE_SW_INSET_MAC_PPPOE_IPV6, ICE_INSET_NONE},
+	{pattern_eth_vlan_pppoes_ipv6_tcp,
+			ICE_SW_INSET_MAC_PPPOE_IPV6_TCP, ICE_INSET_NONE},
+	{pattern_eth_vlan_pppoes_ipv6_udp,
+			ICE_SW_INSET_MAC_PPPOE_IPV6_UDP, ICE_INSET_NONE},
 	{pattern_eth_ipv4_esp,
 			ICE_SW_INSET_MAC_IPV4_ESP, ICE_INSET_NONE},
 	{pattern_eth_ipv4_udp_esp,
@@ -416,13 +470,16 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 	const struct rte_flow_item_l2tpv3oip *l2tp_spec, *l2tp_mask;
 	const struct rte_flow_item_pfcp *pfcp_spec, *pfcp_mask;
 	uint64_t input_set = ICE_INSET_NONE;
-	uint16_t j, t = 0;
+	bool pppoe_elem_valid = 0;
+	bool pppoe_patt_valid = 0;
+	bool pppoe_prot_valid = 0;
 	bool profile_rule = 0;
 	bool tunnel_valid = 0;
-	bool pppoe_valid = 0;
 	bool ipv6_valiad = 0;
 	bool ipv4_valiad = 0;
 	bool udp_valiad = 0;
+	bool tcp_valiad = 0;
+	uint16_t j, t = 0;
 
 	for (item = pattern; item->type !=
 			RTE_FLOW_ITEM_TYPE_END; item++) {
@@ -752,6 +809,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 		case RTE_FLOW_ITEM_TYPE_TCP:
 			tcp_spec = item->spec;
 			tcp_mask = item->mask;
+			tcp_valiad = 1;
 			if (tcp_spec && tcp_mask) {
 				/* Check TCP mask and update input set */
 				if (tcp_mask->hdr.sent_seq ||
@@ -969,6 +1027,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 					   "Invalid pppoe item");
 				return 0;
 			}
+			pppoe_patt_valid = 1;
 			if (pppoe_spec && pppoe_mask) {
 				/* Check pppoe mask and update input set */
 				if (pppoe_mask->length ||
@@ -989,7 +1048,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 					input_set |= ICE_INSET_PPPOE_SESSION;
 				}
 				t++;
-				pppoe_valid = 1;
+				pppoe_elem_valid = 1;
 			}
 			break;
 
@@ -1010,7 +1069,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 				return 0;
 			}
 			if (pppoe_proto_spec && pppoe_proto_mask) {
-				if (pppoe_valid)
+				if (pppoe_elem_valid)
 					t--;
 				list[t].type = ICE_PPPOE;
 				if (pppoe_proto_mask->proto_id) {
@@ -1019,9 +1078,21 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 					list[t].m_u.pppoe_hdr.ppp_prot_id =
 						pppoe_proto_mask->proto_id;
 					input_set |= ICE_INSET_PPPOE_PROTO;
+
+					pppoe_prot_valid = 1;
 				}
+				if ((pppoe_proto_mask->proto_id &
+					pppoe_proto_spec->proto_id) !=
+				    CPU_TO_BE16(ICE_PPP_IPV4_PROTO) &&
+					(pppoe_proto_mask->proto_id &
+					pppoe_proto_spec->proto_id) !=
+				    CPU_TO_BE16(ICE_PPP_IPV6_PROTO))
+					*tun_type = ICE_SW_TUN_PPPOE_PAY;
+				else
+					*tun_type = ICE_SW_TUN_PPPOE;
 				t++;
 			}
+
 			break;
 
 		case RTE_FLOW_ITEM_TYPE_ESP:
@@ -1232,6 +1303,23 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 		}
 	}
 
+	if (pppoe_patt_valid && !pppoe_prot_valid) {
+		if (ipv6_valiad && udp_valiad)
+			*tun_type = ICE_SW_TUN_PPPOE_IPV6_UDP;
+		else if (ipv6_valiad && tcp_valiad)
+			*tun_type = ICE_SW_TUN_PPPOE_IPV6_TCP;
+		else if (ipv4_valiad && udp_valiad)
+			*tun_type = ICE_SW_TUN_PPPOE_IPV4_UDP;
+		else if (ipv4_valiad && tcp_valiad)
+			*tun_type = ICE_SW_TUN_PPPOE_IPV4_TCP;
+		else if (ipv6_valiad)
+			*tun_type = ICE_SW_TUN_PPPOE_IPV6;
+		else if (ipv4_valiad)
+			*tun_type = ICE_SW_TUN_PPPOE_IPV4;
+		else
+			*tun_type = ICE_SW_TUN_PPPOE;
+	}
+
 	*lkups_num = t;
 
 	return input_set;
@@ -1447,9 +1535,6 @@ ice_switch_parse_pattern_action(struct ice_adapter *ad,
 			tun_type = ICE_SW_TUN_VXLAN;
 		if (item->type == RTE_FLOW_ITEM_TYPE_NVGRE)
 			tun_type = ICE_SW_TUN_NVGRE;
-		if (item->type == RTE_FLOW_ITEM_TYPE_PPPOED ||
-				item->type == RTE_FLOW_ITEM_TYPE_PPPOES)
-			tun_type = ICE_SW_TUN_PPPOE;
 		if (item->type == RTE_FLOW_ITEM_TYPE_ETH) {
 			const struct rte_flow_item_eth *eth_mask;
 			if (item->mask)

From patchwork Wed Jun 17 06:14:27 2020
X-Patchwork-Id: 71665
From: Wei Zhao <wei.zhao1@intel.com>
To: dev@dpdk.org
Cc: stable@dpdk.org, qi.z.zhang@intel.com, Wei Zhao <wei.zhao1@intel.com>
Date: Wed, 17 Jun 2020 14:14:27 +0800
Message-Id: <20200617061429.6447-3-wei.zhao1@intel.com>
In-Reply-To: <20200617061429.6447-1-wei.zhao1@intel.com>
References: <20200605074031.16231-1-wei.zhao1@intel.com>
 <20200617061429.6447-1-wei.zhao1@intel.com>
Subject: [dpdk-dev] [PATCH v2 2/4] net/ice: add redirect support for VSI
 list rule

This patch enables redirect of switch rules whose action forwards to a
VSI list.

Fixes: 397b4b3c5095 ("net/ice: enable flow redirect on switch")
Cc: stable@dpdk.org

Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
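The heart of the change is the rule-matching condition inside
ice_switch_redirect(): besides ICE_FWD_TO_VSI rules already bound to the
requested VSI, ICE_FWD_TO_VSI_LIST rules with a matching rule id are now
picked up as well, and are rewritten to a plain VSI forward before being
re-added. A simplified restatement of that predicate is sketched below;
it is illustrative only, compiles in the context of the ice PMD sources,
and the helper itself is hypothetical (the struct name for the rule info
is taken from the ice base code, which is an assumption here).

#include <stdbool.h>
#include <stdint.h>

/* Hypothetical restatement of the match logic added to
 * ice_switch_redirect(); field names follow the diff above.
 */
static bool
rule_needs_redirect(const struct ice_adv_rule_info *rinfo,
		    uint32_t rule_id, uint16_t new_vsi_handle)
{
	if (rinfo->fltr_rule_id != rule_id)
		return false;
	/* Original case: the rule already forwards to the VSI being moved. */
	if (rinfo->sw_act.fltr_act == ICE_FWD_TO_VSI &&
	    rinfo->sw_act.vsi_handle == new_vsi_handle)
		return true;
	/* New case: the rule forwards to a VSI list; the caller rewrites it
	 * to ICE_FWD_TO_VSI on the new VSI handle before re-adding it.
	 */
	return rinfo->sw_act.fltr_act == ICE_FWD_TO_VSI_LIST;
}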
---
 drivers/net/ice/ice_switch_filter.c | 16 +++++++++++++---
 1 file changed, 13 insertions(+), 3 deletions(-)

diff --git a/drivers/net/ice/ice_switch_filter.c b/drivers/net/ice/ice_switch_filter.c
index a5dd1f7ab..3c0c36bce 100644
--- a/drivers/net/ice/ice_switch_filter.c
+++ b/drivers/net/ice/ice_switch_filter.c
@@ -1662,6 +1662,9 @@ ice_switch_redirect(struct ice_adapter *ad,
 	uint16_t lkups_cnt;
 	int ret;
 
+	if (rdata->vsi_handle != rd->vsi_handle)
+		return 0;
+
 	sw = hw->switch_info;
 	if (!sw->recp_list[rdata->rid].recp_created)
 		return -EINVAL;
@@ -1673,25 +1676,32 @@ ice_switch_redirect(struct ice_adapter *ad,
 	LIST_FOR_EACH_ENTRY(list_itr, list_head, ice_adv_fltr_mgmt_list_entry,
 			    list_entry) {
 		rinfo = list_itr->rule_info;
-		if (rinfo.fltr_rule_id == rdata->rule_id &&
+		if ((rinfo.fltr_rule_id == rdata->rule_id &&
 		    rinfo.sw_act.fltr_act == ICE_FWD_TO_VSI &&
-		    rinfo.sw_act.vsi_handle == rd->vsi_handle) {
+		    rinfo.sw_act.vsi_handle == rd->vsi_handle) ||
+		    (rinfo.fltr_rule_id == rdata->rule_id &&
+		    rinfo.sw_act.fltr_act == ICE_FWD_TO_VSI_LIST)){
 			lkups_cnt = list_itr->lkups_cnt;
 			lkups_dp = (struct ice_adv_lkup_elem *)
 				ice_memdup(hw, list_itr->lkups,
 					   sizeof(*list_itr->lkups) *
 					   lkups_cnt,
 					   ICE_NONDMA_TO_NONDMA);
+
 			if (!lkups_dp) {
 				PMD_DRV_LOG(ERR, "Failed to allocate memory.");
 				return -EINVAL;
 			}
 
+			if (rinfo.sw_act.fltr_act == ICE_FWD_TO_VSI_LIST) {
+				rinfo.sw_act.vsi_handle = rd->vsi_handle;
+				rinfo.sw_act.fltr_act = ICE_FWD_TO_VSI;
+			}
 			break;
 		}
 	}
 
 	if (!lkups_dp)
-		return 0;
+		return -EINVAL;
 
 	/* Remove the old rule */
 	ret = ice_rem_adv_rule(hw, list_itr->lkups,

From patchwork Wed Jun 17 06:14:28 2020
X-Patchwork-Id: 71666
From: Wei Zhao <wei.zhao1@intel.com>
To: dev@dpdk.org
Cc: stable@dpdk.org, qi.z.zhang@intel.com, Wei Zhao <wei.zhao1@intel.com>
Date: Wed, 17 Jun 2020 14:14:28 +0800
Message-Id: <20200617061429.6447-4-wei.zhao1@intel.com>
In-Reply-To: <20200617061429.6447-1-wei.zhao1@intel.com>
References: <20200605074031.16231-1-wei.zhao1@intel.com>
 <20200617061429.6447-1-wei.zhao1@intel.com>
Subject: [dpdk-dev] [PATCH v2 3/4] net/ice: add check for NVGRE protocol

This patch adds a check on the protocol type of IPv4 packets: the
tunnel type needs to be updated when NVGRE is carried in the payload.

Fixes: 6bc7628c5e0b ("net/ice: change default tunnel type")
Cc: stable@dpdk.org

Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
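The case the new check covers can be reproduced with a pattern that has
no explicit NVGRE item but pins the outer IPv4 protocol field to GRE
(0x2F): with this patch, the ice PMD spots the NVGRE payload through the
protocol field and widens the tunnel type to ICE_SW_TUN_AND_NON_TUN.
An illustrative sketch of such a pattern (all names and values here are
editorial examples, not part of the patch):

#include <rte_flow.h>

/* Sketch: eth / ipv4 with next_proto_id fixed to 0x2F (IPPROTO_GRE). */
static const struct rte_flow_item_ipv4 ipv4_gre_spec = {
	.hdr = { .next_proto_id = 0x2F },
};
static const struct rte_flow_item_ipv4 ipv4_gre_mask = {
	.hdr = { .next_proto_id = 0xFF },
};
static const struct rte_flow_item pattern_ipv4_gre[] = {
	{ .type = RTE_FLOW_ITEM_TYPE_ETH },
	{ .type = RTE_FLOW_ITEM_TYPE_IPV4,
	  .spec = &ipv4_gre_spec, .mask = &ipv4_gre_mask },
	{ .type = RTE_FLOW_ITEM_TYPE_END },
};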
---
 drivers/net/ice/ice_switch_filter.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ice/ice_switch_filter.c b/drivers/net/ice/ice_switch_filter.c
index 3c0c36bce..3b38195d6 100644
--- a/drivers/net/ice/ice_switch_filter.c
+++ b/drivers/net/ice/ice_switch_filter.c
@@ -28,6 +28,7 @@
 #define MAX_QGRP_NUM_TYPE 7
 #define ICE_PPP_IPV4_PROTO	0x0021
 #define ICE_PPP_IPV6_PROTO	0x0057
+#define ICE_IPV4_PROTO_NVGRE	0x2F
 
 #define ICE_SW_INSET_ETHER ( \
 	ICE_INSET_DMAC | ICE_INSET_SMAC | ICE_INSET_ETHERTYPE)
@@ -632,6 +633,10 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 					list[t].m_u.ipv4_hdr.protocol =
 						ipv4_mask->hdr.next_proto_id;
 				}
+				if ((ipv4_spec->hdr.next_proto_id &
+					ipv4_mask->hdr.next_proto_id) ==
+					ICE_IPV4_PROTO_NVGRE)
+					*tun_type = ICE_SW_TUN_AND_NON_TUN;
 				if (ipv4_mask->hdr.type_of_service) {
 					list[t].h_u.ipv4_hdr.tos =
 						ipv4_spec->hdr.type_of_service;
@@ -1526,7 +1531,7 @@ ice_switch_parse_pattern_action(struct ice_adapter *ad,
 	const struct rte_flow_item *item = pattern;
 	uint16_t item_num = 0;
 	enum ice_sw_tunnel_type tun_type =
-		ICE_SW_TUN_AND_NON_TUN;
+		ICE_NON_TUN;
 	struct ice_pattern_match_item *pattern_match_item = NULL;
 
 	for (; item->type != RTE_FLOW_ITEM_TYPE_END; item++) {

From patchwork Wed Jun 17 06:14:29 2020
X-Patchwork-Id: 71667
From: Wei Zhao <wei.zhao1@intel.com>
To: dev@dpdk.org
Cc: stable@dpdk.org, qi.z.zhang@intel.com, Wei Zhao <wei.zhao1@intel.com>
Date: Wed, 17 Jun 2020 14:14:29 +0800
Message-Id: <20200617061429.6447-5-wei.zhao1@intel.com>
In-Reply-To: <20200617061429.6447-1-wei.zhao1@intel.com>
References: <20200605074031.16231-1-wei.zhao1@intel.com>
 <20200617061429.6447-1-wei.zhao1@intel.com>
Subject: [dpdk-dev] [PATCH v2 4/4] net/ice: support switch flow for specific
 L4 type

This patch adds more specific tunnel types for IPv4/IPv6 packets: it
enables matching the TCP/UDP layer of IPv4/IPv6 as L4 payload, without
requiring the L4 destination/source port numbers in the input set of
the switch filter rule.

Fixes: 47d460d63233 ("net/ice: rework switch filter")
Cc: stable@dpdk.org

Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
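Concretely, this allows a pattern whose TCP or UDP item is present but
empty (no spec/mask, hence no port numbers in the input set) to be
classified, e.g. as ICE_SW_IPV4_TCP. An illustrative sketch of such a
pattern follows; the names and the address value are editorial examples,
and a usage sketch is given after the diff below.

#include <stdint.h>
#include <rte_flow.h>
#include <rte_ip.h>
#include <rte_byteorder.h>

/* Sketch: match IPv4/TCP on the destination address only. The TCP item
 * carries no spec/mask, so it selects the L4 type without adding port
 * numbers to the input set.
 */
static const struct rte_flow_item_ipv4 ipv4_dst_spec = {
	.hdr = { .dst_addr = RTE_BE32(RTE_IPV4(10, 0, 0, 1)) },
};
static const struct rte_flow_item_ipv4 ipv4_dst_mask = {
	.hdr = { .dst_addr = UINT32_MAX },
};
static const struct rte_flow_item pattern_ipv4_tcp_no_ports[] = {
	{ .type = RTE_FLOW_ITEM_TYPE_ETH },
	{ .type = RTE_FLOW_ITEM_TYPE_IPV4,
	  .spec = &ipv4_dst_spec, .mask = &ipv4_dst_mask },
	{ .type = RTE_FLOW_ITEM_TYPE_TCP },	/* L4 type only, no ports */
	{ .type = RTE_FLOW_ITEM_TYPE_END },
};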
---
 drivers/net/ice/ice_switch_filter.c | 23 +++++++++++++++++------
 1 file changed, 17 insertions(+), 6 deletions(-)

diff --git a/drivers/net/ice/ice_switch_filter.c b/drivers/net/ice/ice_switch_filter.c
index 3b38195d6..f4fd8ff33 100644
--- a/drivers/net/ice/ice_switch_filter.c
+++ b/drivers/net/ice/ice_switch_filter.c
@@ -471,11 +471,11 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 	const struct rte_flow_item_l2tpv3oip *l2tp_spec, *l2tp_mask;
 	const struct rte_flow_item_pfcp *pfcp_spec, *pfcp_mask;
 	uint64_t input_set = ICE_INSET_NONE;
+	uint16_t tunnel_valid = 0;
 	bool pppoe_elem_valid = 0;
 	bool pppoe_patt_valid = 0;
 	bool pppoe_prot_valid = 0;
 	bool profile_rule = 0;
-	bool tunnel_valid = 0;
 	bool ipv6_valiad = 0;
 	bool ipv4_valiad = 0;
 	bool udp_valiad = 0;
@@ -960,7 +960,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 					   "Invalid NVGRE item");
 				return 0;
 			}
-			tunnel_valid = 1;
+			tunnel_valid = 2;
 			if (nvgre_spec && nvgre_mask) {
 				list[t].type = ICE_NVGRE;
 				if (nvgre_mask->tni[0] ||
@@ -1325,6 +1325,21 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 			*tun_type = ICE_SW_TUN_PPPOE;
 	}
 
+	if (!pppoe_patt_valid) {
+		if (tunnel_valid == 1)
+			*tun_type = ICE_SW_TUN_VXLAN;
+		else if (tunnel_valid == 2)
+			*tun_type = ICE_SW_TUN_NVGRE;
+		else if (ipv4_valiad && tcp_valiad)
+			*tun_type = ICE_SW_IPV4_TCP;
+		else if (ipv4_valiad && udp_valiad)
+			*tun_type = ICE_SW_IPV4_UDP;
+		else if (ipv6_valiad && tcp_valiad)
+			*tun_type = ICE_SW_IPV6_TCP;
+		else if (ipv6_valiad && udp_valiad)
+			*tun_type = ICE_SW_IPV6_UDP;
+	}
+
 	*lkups_num = t;
 
 	return input_set;
@@ -1536,10 +1551,6 @@ ice_switch_parse_pattern_action(struct ice_adapter *ad,
 
 	for (; item->type != RTE_FLOW_ITEM_TYPE_END; item++) {
 		item_num++;
-		if (item->type == RTE_FLOW_ITEM_TYPE_VXLAN)
-			tun_type = ICE_SW_TUN_VXLAN;
-		if (item->type == RTE_FLOW_ITEM_TYPE_NVGRE)
-			tun_type = ICE_SW_TUN_NVGRE;
 		if (item->type == RTE_FLOW_ITEM_TYPE_ETH) {
 			const struct rte_flow_item_eth *eth_mask;
 			if (item->mask)
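To close the loop on the sketch above, a rule built from
pattern_ipv4_tcp_no_ports could be validated and created as follows.
Again illustrative: the queue index is a made-up example value, and the
calls are the standard rte_flow entry points.

/* Usage sketch for the pattern above (example queue value). */
static struct rte_flow *
create_ipv4_tcp_rule(uint16_t port_id)
{
	struct rte_flow_attr attr = { .ingress = 1 };
	struct rte_flow_action_queue queue = { .index = 1 };
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};
	struct rte_flow_error error;

	if (rte_flow_validate(port_id, &attr, pattern_ipv4_tcp_no_ports,
			      actions, &error) != 0)
		return NULL;
	return rte_flow_create(port_id, &attr, pattern_ipv4_tcp_no_ports,
			       actions, &error);
}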