From patchwork Tue Apr 13 08:10:29 2021
X-Patchwork-Submitter: "Guo, Jia" <jia.guo@intel.com>
X-Patchwork-Id: 91210
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Jeff Guo <jia.guo@intel.com>
To: orika@nvidia.com, qi.z.zhang@intel.com, beilei.xing@intel.com,
 xiaoyun.li@intel.com, jingjing.wu@intel.com
Cc: dev@dpdk.org, ting.xu@intel.com, jia.guo@intel.com
Date: Tue, 13 Apr 2021 16:10:29 +0800
Message-Id: <20210413081032.60509-2-jia.guo@intel.com>
In-Reply-To: <20210413081032.60509-1-jia.guo@intel.com>
References: <20210324134844.60410-1-jia.guo@intel.com>
 <20210413081032.60509-1-jia.guo@intel.com>
Subject: [dpdk-dev] [PATCH v4 1/4] app/testpmd: add packet id for IP fragment

Add new pattern items to support flow configuration for IP fragment
packets.
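For example, with these items a rule can match on the IPv4 packet id or
on the id of an IPv6 fragment extension header. The commands below are
an illustrative sketch only (values and actions are arbitrary; whether
an exact id or only an id range is accepted is up to the PMD -- the
iavf driver in patch 4/4 only takes the "any packet id" spec/last/mask
form):

  flow create 0 ingress pattern eth / ipv4 packet_id is 47 / end actions queue index 1 / end
  flow create 0 ingress pattern eth / ipv6 / ipv6_frag_ext packet_id is 47 / end actions drop / end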
Signed-off-by: Ting Xu
Signed-off-by: Jeff Guo
Acked-by: Ori Kam
Reviewed-by: Xiaoyu Min
---
 app/test-pmd/cmdline_flow.c | 21 ++++++++++++++++++++-
 1 file changed, 20 insertions(+), 1 deletion(-)

diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index fb7a3a8bd3..46ae342b85 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -166,6 +166,7 @@ enum index {
 	ITEM_VLAN_HAS_MORE_VLAN,
 	ITEM_IPV4,
 	ITEM_IPV4_TOS,
+	ITEM_IPV4_ID,
 	ITEM_IPV4_FRAGMENT_OFFSET,
 	ITEM_IPV4_TTL,
 	ITEM_IPV4_PROTO,
@@ -236,6 +237,7 @@ enum index {
 	ITEM_IPV6_FRAG_EXT,
 	ITEM_IPV6_FRAG_EXT_NEXT_HDR,
 	ITEM_IPV6_FRAG_EXT_FRAG_DATA,
+	ITEM_IPV6_FRAG_EXT_ID,
 	ITEM_ICMP6,
 	ITEM_ICMP6_TYPE,
 	ITEM_ICMP6_CODE,
@@ -1028,6 +1030,7 @@ static const enum index item_vlan[] = {
 static const enum index item_ipv4[] = {
 	ITEM_IPV4_TOS,
+	ITEM_IPV4_ID,
 	ITEM_IPV4_FRAGMENT_OFFSET,
 	ITEM_IPV4_TTL,
 	ITEM_IPV4_PROTO,
@@ -1164,6 +1167,7 @@ static const enum index item_ipv6_ext[] = {
 static const enum index item_ipv6_frag_ext[] = {
 	ITEM_IPV6_FRAG_EXT_NEXT_HDR,
 	ITEM_IPV6_FRAG_EXT_FRAG_DATA,
+	ITEM_IPV6_FRAG_EXT_ID,
 	ITEM_NEXT,
 	ZERO,
 };
@@ -2466,6 +2470,13 @@ static const struct token token_list[] = {
 		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_ipv4,
 					     hdr.type_of_service)),
 	},
+	[ITEM_IPV4_ID] = {
+		.name = "packet_id",
+		.help = "fragment packet id",
+		.next = NEXT(item_ipv4, NEXT_ENTRY(UNSIGNED), item_param),
+		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_ipv4,
+					     hdr.packet_id)),
+	},
 	[ITEM_IPV4_FRAGMENT_OFFSET] = {
 		.name = "fragment_offset",
 		.help = "fragmentation flags and fragment offset",
@@ -2969,12 +2980,20 @@ static const struct token token_list[] = {
 	},
 	[ITEM_IPV6_FRAG_EXT_FRAG_DATA] = {
 		.name = "frag_data",
-		.help = "Fragment flags and offset",
+		.help = "fragment flags and offset",
 		.next = NEXT(item_ipv6_frag_ext, NEXT_ENTRY(UNSIGNED),
 			     item_param),
 		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_ipv6_frag_ext,
 					     hdr.frag_data)),
 	},
+	[ITEM_IPV6_FRAG_EXT_ID] = {
+		.name = "packet_id",
+		.help = "fragment packet id",
+		.next = NEXT(item_ipv6_frag_ext, NEXT_ENTRY(UNSIGNED),
+			     item_param),
+		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_ipv6_frag_ext,
+					     hdr.id)),
+	},
 	[ITEM_ICMP6] = {
 		.name = "icmp6",
 		.help = "match any ICMPv6 header",

From patchwork Tue Apr 13 08:10:30 2021
X-Patchwork-Submitter: "Guo, Jia" <jia.guo@intel.com>
X-Patchwork-Id: 91211
X-Patchwork-Delegate: qi.z.zhang@intel.com
From: Jeff Guo <jia.guo@intel.com>
To: orika@nvidia.com, qi.z.zhang@intel.com, beilei.xing@intel.com,
 xiaoyun.li@intel.com, jingjing.wu@intel.com
Cc: dev@dpdk.org, ting.xu@intel.com, jia.guo@intel.com
Date: Tue, 13 Apr 2021 16:10:30 +0800
Message-Id: <20210413081032.60509-3-jia.guo@intel.com>
In-Reply-To: <20210413081032.60509-1-jia.guo@intel.com>
References: <20210324134844.60410-1-jia.guo@intel.com>
 <20210413081032.60509-1-jia.guo@intel.com>
Subject: [dpdk-dev] [PATCH v4 2/4] common/iavf: add proto header for IP fragment

Add new virtchnl protocol header types and fields for IP fragment
packets to support RSS hash and FDIR.
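The new PKID fields follow the existing PROTO_HDR_FIELD_START()
numbering scheme, so each header type keeps its own field-id namespace.
As a sketch of what this enables once the PMD support from patches 3/4
is in place, an RSS rule over fragmented IPv6 traffic ends up selecting
VIRTCHNL_PROTO_HDR_IPV6_EH_FRAG with the IPV6_EH_FRAG_PKID field
(illustrative testpmd command):

  flow create 0 ingress pattern eth / ipv6 / ipv6_frag_ext / end actions rss types ipv6-frag end queues end / end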
Signed-off-by: Ting Xu
Signed-off-by: Jeff Guo
---
 drivers/common/iavf/virtchnl.h | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/drivers/common/iavf/virtchnl.h b/drivers/common/iavf/virtchnl.h
index 6b99e170f0..e3eb767d66 100644
--- a/drivers/common/iavf/virtchnl.h
+++ b/drivers/common/iavf/virtchnl.h
@@ -1415,7 +1415,9 @@ enum virtchnl_proto_hdr_type {
 	VIRTCHNL_PROTO_HDR_S_VLAN,
 	VIRTCHNL_PROTO_HDR_C_VLAN,
 	VIRTCHNL_PROTO_HDR_IPV4,
+	VIRTCHNL_PROTO_HDR_IPV4_FRAG,
 	VIRTCHNL_PROTO_HDR_IPV6,
+	VIRTCHNL_PROTO_HDR_IPV6_EH_FRAG,
 	VIRTCHNL_PROTO_HDR_TCP,
 	VIRTCHNL_PROTO_HDR_UDP,
 	VIRTCHNL_PROTO_HDR_SCTP,
@@ -1452,6 +1454,8 @@ enum virtchnl_proto_hdr_field {
 	VIRTCHNL_PROTO_HDR_IPV4_DSCP,
 	VIRTCHNL_PROTO_HDR_IPV4_TTL,
 	VIRTCHNL_PROTO_HDR_IPV4_PROT,
+	VIRTCHNL_PROTO_HDR_IPV4_FRAG_PKID =
+		PROTO_HDR_FIELD_START(VIRTCHNL_PROTO_HDR_IPV4_FRAG),
 	/* IPV6 */
 	VIRTCHNL_PROTO_HDR_IPV6_SRC =
 		PROTO_HDR_FIELD_START(VIRTCHNL_PROTO_HDR_IPV6),
@@ -1472,6 +1476,9 @@ enum virtchnl_proto_hdr_field {
 	VIRTCHNL_PROTO_HDR_IPV6_PREFIX64_DST,
 	VIRTCHNL_PROTO_HDR_IPV6_PREFIX96_SRC,
 	VIRTCHNL_PROTO_HDR_IPV6_PREFIX96_DST,
+	/* IPv6 Extension Header Fragment */
+	VIRTCHNL_PROTO_HDR_IPV6_EH_FRAG_PKID =
+		PROTO_HDR_FIELD_START(VIRTCHNL_PROTO_HDR_IPV6_EH_FRAG),
 	/* TCP */
 	VIRTCHNL_PROTO_HDR_TCP_SRC_PORT =
 		PROTO_HDR_FIELD_START(VIRTCHNL_PROTO_HDR_TCP),
From patchwork Tue Apr 13 08:10:31 2021
X-Patchwork-Submitter: "Guo, Jia" <jia.guo@intel.com>
X-Patchwork-Id: 91212
X-Patchwork-Delegate: qi.z.zhang@intel.com
From: Jeff Guo <jia.guo@intel.com>
To: orika@nvidia.com, qi.z.zhang@intel.com, beilei.xing@intel.com,
 xiaoyun.li@intel.com, jingjing.wu@intel.com
Cc: dev@dpdk.org, ting.xu@intel.com, jia.guo@intel.com
Date: Tue, 13 Apr 2021 16:10:31 +0800
Message-Id: <20210413081032.60509-4-jia.guo@intel.com>
In-Reply-To: <20210413081032.60509-1-jia.guo@intel.com>
References: <20210324134844.60410-1-jia.guo@intel.com>
 <20210413081032.60509-1-jia.guo@intel.com>
Subject: [dpdk-dev] [PATCH v4 3/4] net/iavf: support RSS hash for IP fragment

New patterns and RSS hash flow parsing are added to handle fragmented
IPv4/IPv6 packets.
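Illustrative testpmd rules exercising the new paths (queue lists are
left empty so the default RSS queues are used): for ipv6-frag the
dedicated eth/ipv6/ipv6_frag_ext pattern is used, while for ipv4-frag
the plain eth/ipv4 pattern is kept and iavf_hash_add_fragment_hdr()
below inserts the dummy fragment header at refinement time:

  flow create 0 ingress pattern eth / ipv4 / end actions rss types ipv4-frag end queues end / end
  flow create 0 ingress pattern eth / ipv6 / ipv6_frag_ext / end actions rss types ipv6-frag end queues end / end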
Signed-off-by: Ting Xu
Signed-off-by: Jeff Guo
---
 drivers/net/iavf/iavf_generic_flow.c | 24 ++++++++
 drivers/net/iavf/iavf_generic_flow.h |  3 +
 drivers/net/iavf/iavf_hash.c         | 83 ++++++++++++++++++++++++----
 3 files changed, 100 insertions(+), 10 deletions(-)

diff --git a/drivers/net/iavf/iavf_generic_flow.c b/drivers/net/iavf/iavf_generic_flow.c
index 8635ff83ca..242bb4abc5 100644
--- a/drivers/net/iavf/iavf_generic_flow.c
+++ b/drivers/net/iavf/iavf_generic_flow.c
@@ -219,6 +219,30 @@ enum rte_flow_item_type iavf_pattern_eth_qinq_ipv6[] = {
 	RTE_FLOW_ITEM_TYPE_END,
 };
 
+enum rte_flow_item_type iavf_pattern_eth_ipv6_frag_ext[] = {
+	RTE_FLOW_ITEM_TYPE_ETH,
+	RTE_FLOW_ITEM_TYPE_IPV6,
+	RTE_FLOW_ITEM_TYPE_IPV6_FRAG_EXT,
+	RTE_FLOW_ITEM_TYPE_END,
+};
+
+enum rte_flow_item_type iavf_pattern_eth_vlan_ipv6_frag_ext[] = {
+	RTE_FLOW_ITEM_TYPE_ETH,
+	RTE_FLOW_ITEM_TYPE_VLAN,
+	RTE_FLOW_ITEM_TYPE_IPV6,
+	RTE_FLOW_ITEM_TYPE_IPV6_FRAG_EXT,
+	RTE_FLOW_ITEM_TYPE_END,
+};
+
+enum rte_flow_item_type iavf_pattern_eth_qinq_ipv6_frag_ext[] = {
+	RTE_FLOW_ITEM_TYPE_ETH,
+	RTE_FLOW_ITEM_TYPE_VLAN,
+	RTE_FLOW_ITEM_TYPE_VLAN,
+	RTE_FLOW_ITEM_TYPE_IPV6,
+	RTE_FLOW_ITEM_TYPE_IPV6_FRAG_EXT,
+	RTE_FLOW_ITEM_TYPE_END,
+};
+
 enum rte_flow_item_type iavf_pattern_eth_ipv6_udp[] = {
 	RTE_FLOW_ITEM_TYPE_ETH,
 	RTE_FLOW_ITEM_TYPE_IPV6,
diff --git a/drivers/net/iavf/iavf_generic_flow.h b/drivers/net/iavf/iavf_generic_flow.h
index 005eeb3553..32932557ca 100644
--- a/drivers/net/iavf/iavf_generic_flow.h
+++ b/drivers/net/iavf/iavf_generic_flow.h
@@ -203,6 +203,9 @@ extern enum rte_flow_item_type iavf_pattern_eth_qinq_ipv4_icmp[];
 extern enum rte_flow_item_type iavf_pattern_eth_ipv6[];
 extern enum rte_flow_item_type iavf_pattern_eth_vlan_ipv6[];
 extern enum rte_flow_item_type iavf_pattern_eth_qinq_ipv6[];
+extern enum rte_flow_item_type iavf_pattern_eth_ipv6_frag_ext[];
+extern enum rte_flow_item_type iavf_pattern_eth_vlan_ipv6_frag_ext[];
+extern enum rte_flow_item_type iavf_pattern_eth_qinq_ipv6_frag_ext[];
 extern enum rte_flow_item_type iavf_pattern_eth_ipv6_udp[];
 extern enum rte_flow_item_type iavf_pattern_eth_vlan_ipv6_udp[];
 extern enum rte_flow_item_type iavf_pattern_eth_qinq_ipv6_udp[];
diff --git a/drivers/net/iavf/iavf_hash.c b/drivers/net/iavf/iavf_hash.c
index d8d22f8009..5d3d62839b 100644
--- a/drivers/net/iavf/iavf_hash.c
+++ b/drivers/net/iavf/iavf_hash.c
@@ -112,6 +112,10 @@ iavf_hash_parse_pattern_action(struct iavf_adapter *ad,
 	FIELD_SELECTOR(VIRTCHNL_PROTO_HDR_IPV6_SRC) | \
 	FIELD_SELECTOR(VIRTCHNL_PROTO_HDR_IPV6_DST), {BUFF_NOUSED} }
 
+#define proto_hdr_ipv6_frag { \
+	VIRTCHNL_PROTO_HDR_IPV6_EH_FRAG, \
+	FIELD_SELECTOR(VIRTCHNL_PROTO_HDR_IPV6_EH_FRAG_PKID), {BUFF_NOUSED} }
+
 #define proto_hdr_ipv6_with_prot { \
 	VIRTCHNL_PROTO_HDR_IPV6, \
 	FIELD_SELECTOR(VIRTCHNL_PROTO_HDR_IPV6_SRC) | \
@@ -190,6 +194,12 @@ struct virtchnl_proto_hdrs outer_ipv6_tmplt = {
 	{proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan, proto_hdr_ipv6}
 };
 
+struct virtchnl_proto_hdrs outer_ipv6_frag_tmplt = {
+	TUNNEL_LEVEL_OUTER, 5,
+	{proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan,
+	 proto_hdr_ipv6, proto_hdr_ipv6_frag}
+};
+
 struct virtchnl_proto_hdrs outer_ipv6_udp_tmplt = {
 	TUNNEL_LEVEL_OUTER, 5,
 	{proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan,
@@ -303,7 +313,8 @@ struct virtchnl_proto_hdrs ipv4_ecpri_tmplt = {
 /* rss type super set */
 
 /* IPv4 outer */
-#define IAVF_RSS_TYPE_OUTER_IPV4	(ETH_RSS_ETH | ETH_RSS_IPV4)
+#define IAVF_RSS_TYPE_OUTER_IPV4	(ETH_RSS_ETH | ETH_RSS_IPV4 | \
+					 ETH_RSS_FRAG_IPV4)
 #define IAVF_RSS_TYPE_OUTER_IPV4_UDP	(IAVF_RSS_TYPE_OUTER_IPV4 | \
 					 ETH_RSS_NONFRAG_IPV4_UDP)
 #define IAVF_RSS_TYPE_OUTER_IPV4_TCP	(IAVF_RSS_TYPE_OUTER_IPV4 | \
@@ -312,6 +323,8 @@ struct virtchnl_proto_hdrs ipv4_ecpri_tmplt = {
 					 ETH_RSS_NONFRAG_IPV4_SCTP)
 /* IPv6 outer */
 #define IAVF_RSS_TYPE_OUTER_IPV6	(ETH_RSS_ETH | ETH_RSS_IPV6)
+#define IAVF_RSS_TYPE_OUTER_IPV6_FRAG	(IAVF_RSS_TYPE_OUTER_IPV6 | \
+					 ETH_RSS_FRAG_IPV6)
 #define IAVF_RSS_TYPE_OUTER_IPV6_UDP	(IAVF_RSS_TYPE_OUTER_IPV6 | \
 					 ETH_RSS_NONFRAG_IPV6_UDP)
 #define IAVF_RSS_TYPE_OUTER_IPV6_TCP	(IAVF_RSS_TYPE_OUTER_IPV6 | \
@@ -330,6 +343,8 @@ struct virtchnl_proto_hdrs ipv4_ecpri_tmplt = {
 /* VLAN IPv6 */
 #define IAVF_RSS_TYPE_VLAN_IPV6		(IAVF_RSS_TYPE_OUTER_IPV6 | \
 					 ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+#define IAVF_RSS_TYPE_VLAN_IPV6_FRAG	(IAVF_RSS_TYPE_OUTER_IPV6_FRAG | \
+					 ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
 #define IAVF_RSS_TYPE_VLAN_IPV6_UDP	(IAVF_RSS_TYPE_OUTER_IPV6_UDP | \
 					 ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
 #define IAVF_RSS_TYPE_VLAN_IPV6_TCP	(IAVF_RSS_TYPE_OUTER_IPV6_TCP | \
@@ -415,10 +430,12 @@ static struct iavf_pattern_match_item iavf_hash_pattern_list[] = {
 	{iavf_pattern_eth_ipv4_ecpri,		ETH_RSS_ECPRI,			&ipv4_ecpri_tmplt},
 	/* IPv6 */
 	{iavf_pattern_eth_ipv6,			IAVF_RSS_TYPE_OUTER_IPV6,	&outer_ipv6_tmplt},
+	{iavf_pattern_eth_ipv6_frag_ext,	IAVF_RSS_TYPE_OUTER_IPV6_FRAG,	&outer_ipv6_frag_tmplt},
 	{iavf_pattern_eth_ipv6_udp,		IAVF_RSS_TYPE_OUTER_IPV6_UDP,	&outer_ipv6_udp_tmplt},
 	{iavf_pattern_eth_ipv6_tcp,		IAVF_RSS_TYPE_OUTER_IPV6_TCP,	&outer_ipv6_tcp_tmplt},
 	{iavf_pattern_eth_ipv6_sctp,		IAVF_RSS_TYPE_OUTER_IPV6_SCTP,	&outer_ipv6_sctp_tmplt},
 	{iavf_pattern_eth_vlan_ipv6,		IAVF_RSS_TYPE_VLAN_IPV6,	&outer_ipv6_tmplt},
+	{iavf_pattern_eth_vlan_ipv6_frag_ext,	IAVF_RSS_TYPE_VLAN_IPV6_FRAG,	&outer_ipv6_frag_tmplt},
 	{iavf_pattern_eth_vlan_ipv6_udp,	IAVF_RSS_TYPE_VLAN_IPV6_UDP,	&outer_ipv6_udp_tmplt},
 	{iavf_pattern_eth_vlan_ipv6_tcp,	IAVF_RSS_TYPE_VLAN_IPV6_TCP,	&outer_ipv6_tcp_tmplt},
 	{iavf_pattern_eth_vlan_ipv6_sctp,	IAVF_RSS_TYPE_VLAN_IPV6_SCTP,	&outer_ipv6_sctp_tmplt},
@@ -626,6 +643,29 @@ do { \
 	REFINE_PROTO_FLD(ADD, fld_2);	\
 } while (0)
 
+static void
+iavf_hash_add_fragment_hdr(struct virtchnl_proto_hdrs *hdrs, int layer)
+{
+	struct virtchnl_proto_hdr *hdr1;
+	struct virtchnl_proto_hdr *hdr2;
+	int i;
+
+	if (layer < 0 || layer > hdrs->count)
+		return;
+
+	/* shift headers layer */
+	for (i = hdrs->count; i >= layer; i--) {
+		hdr1 = &hdrs->proto_hdr[i];
+		hdr2 = &hdrs->proto_hdr[i - 1];
+		*hdr1 = *hdr2;
+	}
+
+	/* adding dummy fragment header */
+	hdr1 = &hdrs->proto_hdr[layer];
+	VIRTCHNL_SET_PROTO_HDR_TYPE(hdr1, IPV4_FRAG);
+	hdrs->count = ++layer;
+}
+
 /* refine proto hdrs base on l2, l3, l4 rss type */
 static void
 iavf_refine_proto_hdrs_l234(struct virtchnl_proto_hdrs *proto_hdrs,
@@ -647,17 +687,19 @@ iavf_refine_proto_hdrs_l234(struct virtchnl_proto_hdrs *proto_hdrs,
 			break;
 		case VIRTCHNL_PROTO_HDR_IPV4:
 			if (rss_type &
-			    (ETH_RSS_IPV4 |
+			    (ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 |
 			     ETH_RSS_NONFRAG_IPV4_UDP |
 			     ETH_RSS_NONFRAG_IPV4_TCP |
 			     ETH_RSS_NONFRAG_IPV4_SCTP)) {
-				if (rss_type & ETH_RSS_L3_SRC_ONLY) {
+				if (rss_type & ETH_RSS_FRAG_IPV4) {
+					iavf_hash_add_fragment_hdr(proto_hdrs, i + 1);
+				} else if (rss_type & ETH_RSS_L3_SRC_ONLY) {
 					REFINE_PROTO_FLD(DEL, IPV4_DST);
 				} else if (rss_type & ETH_RSS_L3_DST_ONLY) {
 					REFINE_PROTO_FLD(DEL, IPV4_SRC);
 				} else if (rss_type &
-					   (ETH_RSS_L4_SRC_ONLY |
-					    ETH_RSS_L4_DST_ONLY)) {
+					   (ETH_RSS_L4_SRC_ONLY |
+					    ETH_RSS_L4_DST_ONLY)) {
 					REFINE_PROTO_FLD(DEL, IPV4_DST);
 					REFINE_PROTO_FLD(DEL, IPV4_SRC);
 				}
@@ -665,9 +707,21 @@ iavf_refine_proto_hdrs_l234(struct virtchnl_proto_hdrs *proto_hdrs,
 				hdr->field_selector = 0;
 			}
 			break;
+		case VIRTCHNL_PROTO_HDR_IPV4_FRAG:
+			if (rss_type &
+			    (ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 |
+			     ETH_RSS_NONFRAG_IPV4_UDP |
+			     ETH_RSS_NONFRAG_IPV4_TCP |
+			     ETH_RSS_NONFRAG_IPV4_SCTP)) {
+				if (rss_type & ETH_RSS_FRAG_IPV4)
+					REFINE_PROTO_FLD(ADD, IPV4_FRAG_PKID);
+			} else {
+				hdr->field_selector = 0;
+			}
+			break;
 		case VIRTCHNL_PROTO_HDR_IPV6:
 			if (rss_type &
-			    (ETH_RSS_IPV6 |
+			    (ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6 |
 			     ETH_RSS_NONFRAG_IPV6_UDP |
 			     ETH_RSS_NONFRAG_IPV6_TCP |
 			     ETH_RSS_NONFRAG_IPV6_SCTP)) {
@@ -676,8 +730,8 @@ iavf_refine_proto_hdrs_l234(struct virtchnl_proto_hdrs *proto_hdrs,
 				} else if (rss_type & ETH_RSS_L3_DST_ONLY) {
 					REFINE_PROTO_FLD(DEL, IPV6_SRC);
 				} else if (rss_type &
-					   (ETH_RSS_L4_SRC_ONLY |
-					    ETH_RSS_L4_DST_ONLY)) {
+					   (ETH_RSS_L4_SRC_ONLY |
+					    ETH_RSS_L4_DST_ONLY)) {
 					REFINE_PROTO_FLD(DEL, IPV6_DST);
 					REFINE_PROTO_FLD(DEL, IPV6_SRC);
 				}
@@ -692,6 +746,13 @@ iavf_refine_proto_hdrs_l234(struct virtchnl_proto_hdrs *proto_hdrs,
 				REPALCE_PROTO_FLD(IPV6_DST,
 						  IPV6_PREFIX64_DST);
 			}
+			break;
+		case VIRTCHNL_PROTO_HDR_IPV6_EH_FRAG:
+			if (rss_type & ETH_RSS_FRAG_IPV6)
+				REFINE_PROTO_FLD(ADD, IPV6_EH_FRAG_PKID);
+			else
+				hdr->field_selector = 0;
+
 			break;
 		case VIRTCHNL_PROTO_HDR_UDP:
 			if (rss_type &
@@ -885,8 +946,10 @@ struct rss_attr_type {
 			      ETH_RSS_NONFRAG_IPV6_TCP	| \
 			      ETH_RSS_NONFRAG_IPV6_SCTP)
 
-#define VALID_RSS_IPV4		(ETH_RSS_IPV4 | VALID_RSS_IPV4_L4)
-#define VALID_RSS_IPV6		(ETH_RSS_IPV6 | VALID_RSS_IPV6_L4)
+#define VALID_RSS_IPV4		(ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 | \
+				 VALID_RSS_IPV4_L4)
+#define VALID_RSS_IPV6		(ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6 | \
+				 VALID_RSS_IPV6_L4)
 #define VALID_RSS_L3		(VALID_RSS_IPV4 | VALID_RSS_IPV6)
 #define VALID_RSS_L4		(VALID_RSS_IPV4_L4 | VALID_RSS_IPV6_L4)
From patchwork Tue Apr 13 08:10:32 2021
X-Patchwork-Submitter: "Guo, Jia" <jia.guo@intel.com>
X-Patchwork-Id: 91213
X-Patchwork-Delegate: qi.z.zhang@intel.com
From: Jeff Guo <jia.guo@intel.com>
To: orika@nvidia.com, qi.z.zhang@intel.com, beilei.xing@intel.com,
 xiaoyun.li@intel.com, jingjing.wu@intel.com
Cc: dev@dpdk.org, ting.xu@intel.com, jia.guo@intel.com
Date: Tue, 13 Apr 2021 16:10:32 +0800
Message-Id: <20210413081032.60509-5-jia.guo@intel.com>
In-Reply-To: <20210413081032.60509-1-jia.guo@intel.com>
References: <20210324134844.60410-1-jia.guo@intel.com>
 <20210413081032.60509-1-jia.guo@intel.com>
Subject: [dpdk-dev] [PATCH v4 4/4] net/iavf: support FDIR for IP fragment packet

New FDIR parsing is added to handle fragmented IPv4/IPv6 packets.
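Only the "any packet id" form is accepted by the parser below: the
packet id spec must be 0 while its last and mask values are all-ones,
so the rule matches every fragment regardless of its id. Illustrative
commands (queue indexes arbitrary; the spec/last/mask combinations
mirror the checks in iavf_fdir_parse_pattern()):

  flow create 0 ingress pattern eth / ipv4 packet_id spec 0 packet_id last 0xffff packet_id mask 0xffff fragment_offset spec 0x2000 fragment_offset last 0x3fff fragment_offset mask 0xffff / end actions queue index 1 / end
  flow create 0 ingress pattern eth / ipv6 / ipv6_frag_ext packet_id spec 0 packet_id last 0xffffffff packet_id mask 0xffffffff frag_data spec 0x0001 frag_data last 0xffff frag_data mask 0xffff / end actions queue index 2 / end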
Signed-off-by: Ting Xu
Signed-off-by: Jeff Guo
---
 drivers/net/iavf/iavf_fdir.c         | 386 ++++++++++++++++++---------
 drivers/net/iavf/iavf_generic_flow.h |   5 +
 2 files changed, 267 insertions(+), 124 deletions(-)

diff --git a/drivers/net/iavf/iavf_fdir.c b/drivers/net/iavf/iavf_fdir.c
index 62f032985a..f238a83c84 100644
--- a/drivers/net/iavf/iavf_fdir.c
+++ b/drivers/net/iavf/iavf_fdir.c
@@ -34,7 +34,7 @@
 #define IAVF_FDIR_INSET_ETH_IPV4 (\
 	IAVF_INSET_IPV4_SRC | IAVF_INSET_IPV4_DST | \
 	IAVF_INSET_IPV4_PROTO | IAVF_INSET_IPV4_TOS | \
-	IAVF_INSET_IPV4_TTL)
+	IAVF_INSET_IPV4_TTL | IAVF_INSET_IPV4_ID)
 
 #define IAVF_FDIR_INSET_ETH_IPV4_UDP (\
 	IAVF_INSET_IPV4_SRC | IAVF_INSET_IPV4_DST | \
@@ -56,6 +56,9 @@
 	IAVF_INSET_IPV6_NEXT_HDR | IAVF_INSET_IPV6_TC | \
 	IAVF_INSET_IPV6_HOP_LIMIT)
 
+#define IAVF_FDIR_INSET_ETH_IPV6_FRAG_EXT (\
+	IAVF_INSET_IPV6_ID)
+
 #define IAVF_FDIR_INSET_ETH_IPV6_UDP (\
 	IAVF_INSET_IPV6_SRC | IAVF_INSET_IPV6_DST | \
 	IAVF_INSET_IPV6_TC | IAVF_INSET_IPV6_HOP_LIMIT | \
@@ -143,6 +146,7 @@ static struct iavf_pattern_match_item iavf_fdir_pattern[] = {
 	{iavf_pattern_eth_ipv4_tcp,		IAVF_FDIR_INSET_ETH_IPV4_TCP,		IAVF_INSET_NONE},
 	{iavf_pattern_eth_ipv4_sctp,		IAVF_FDIR_INSET_ETH_IPV4_SCTP,		IAVF_INSET_NONE},
 	{iavf_pattern_eth_ipv6,			IAVF_FDIR_INSET_ETH_IPV6,		IAVF_INSET_NONE},
+	{iavf_pattern_eth_ipv6_frag_ext,	IAVF_FDIR_INSET_ETH_IPV6_FRAG_EXT,	IAVF_INSET_NONE},
 	{iavf_pattern_eth_ipv6_udp,		IAVF_FDIR_INSET_ETH_IPV6_UDP,		IAVF_INSET_NONE},
 	{iavf_pattern_eth_ipv6_tcp,		IAVF_FDIR_INSET_ETH_IPV6_TCP,		IAVF_INSET_NONE},
 	{iavf_pattern_eth_ipv6_sctp,		IAVF_FDIR_INSET_ETH_IPV6_SCTP,		IAVF_INSET_NONE},
@@ -543,6 +547,29 @@ iavf_fdir_refine_input_set(const uint64_t input_set,
 	}
 }
 
+static void
+iavf_fdir_add_fragment_hdr(struct virtchnl_proto_hdrs *hdrs, int layer)
+{
+	struct virtchnl_proto_hdr *hdr1;
+	struct virtchnl_proto_hdr *hdr2;
+	int i;
+
+	if (layer < 0 || layer > hdrs->count)
+		return;
+
+	/* shift headers layer */
+	for (i = hdrs->count; i >= layer; i--) {
+		hdr1 = &hdrs->proto_hdr[i];
+		hdr2 = &hdrs->proto_hdr[i - 1];
+		*hdr1 = *hdr2;
+	}
+
+	/* adding dummy fragment header */
+	hdr1 = &hdrs->proto_hdr[layer];
+	VIRTCHNL_SET_PROTO_HDR_TYPE(hdr1, IPV4_FRAG);
+	hdrs->count = ++layer;
+}
+
 static int
 iavf_fdir_parse_pattern(__rte_unused struct iavf_adapter *ad,
 			const struct rte_flow_item pattern[],
@@ -550,12 +577,15 @@ iavf_fdir_parse_pattern(__rte_unused struct iavf_adapter *ad,
 			struct rte_flow_error *error,
 			struct iavf_fdir_conf *filter)
 {
-	const struct rte_flow_item *item = pattern;
-	enum rte_flow_item_type item_type;
+	struct virtchnl_proto_hdrs *hdrs =
+		&filter->add_fltr.rule_cfg.proto_hdrs;
 	enum rte_flow_item_type l3 = RTE_FLOW_ITEM_TYPE_END;
 	const struct rte_flow_item_eth *eth_spec, *eth_mask;
-	const struct rte_flow_item_ipv4 *ipv4_spec, *ipv4_mask;
+	const struct rte_flow_item_ipv4 *ipv4_spec, *ipv4_last, *ipv4_mask;
 	const struct rte_flow_item_ipv6 *ipv6_spec, *ipv6_mask;
+	const struct rte_flow_item_ipv6_frag_ext *ipv6_frag_spec;
+	const struct rte_flow_item_ipv6_frag_ext *ipv6_frag_last;
+	const struct rte_flow_item_ipv6_frag_ext *ipv6_frag_mask;
 	const struct rte_flow_item_udp *udp_spec, *udp_mask;
 	const struct rte_flow_item_tcp *tcp_spec, *tcp_mask;
 	const struct rte_flow_item_sctp *sctp_spec, *sctp_mask;
@@ -566,15 +596,15 @@ iavf_fdir_parse_pattern(__rte_unused struct iavf_adapter *ad,
 	const struct rte_flow_item_ah *ah_spec, *ah_mask;
 	const struct rte_flow_item_pfcp *pfcp_spec, *pfcp_mask;
 	const struct rte_flow_item_ecpri *ecpri_spec, *ecpri_mask;
+	const struct rte_flow_item *item = pattern;
+	struct virtchnl_proto_hdr *hdr, *hdr1 = NULL;
 	struct rte_ecpri_common_hdr ecpri_common;
 	uint64_t input_set = IAVF_INSET_NONE;
-
+	enum rte_flow_item_type item_type;
 	enum rte_flow_item_type next_type;
+	uint8_t tun_inner = 0;
 	uint16_t ether_type;
-
-	u8 tun_inner = 0;
 	int layer = 0;
-	struct virtchnl_proto_hdr *hdr;
 
 	uint8_t  ipv6_addr_mask[16] = {
 		0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF,
@@ -582,26 +612,28 @@ iavf_fdir_parse_pattern(__rte_unused struct iavf_adapter *ad,
 	};
 
 	for (item = pattern; item->type != RTE_FLOW_ITEM_TYPE_END; item++) {
-		if (item->last) {
+		item_type = item->type;
+
+		if (item->last && !(item_type == RTE_FLOW_ITEM_TYPE_IPV4 ||
+				    item_type ==
+				    RTE_FLOW_ITEM_TYPE_IPV6_FRAG_EXT)) {
 			rte_flow_error_set(error, EINVAL,
-					RTE_FLOW_ERROR_TYPE_ITEM, item,
-					"Not support range");
+					   RTE_FLOW_ERROR_TYPE_ITEM, item,
+					   "Not support range");
 		}
 
-		item_type = item->type;
-
 		switch (item_type) {
 		case RTE_FLOW_ITEM_TYPE_ETH:
 			eth_spec = item->spec;
 			eth_mask = item->mask;
 			next_type = (item + 1)->type;
 
-			hdr = &filter->add_fltr.rule_cfg.proto_hdrs.proto_hdr[layer];
+			hdr1 = &hdrs->proto_hdr[layer];
 
-			VIRTCHNL_SET_PROTO_HDR_TYPE(hdr, ETH);
+			VIRTCHNL_SET_PROTO_HDR_TYPE(hdr1, ETH);
 
 			if (next_type == RTE_FLOW_ITEM_TYPE_END &&
-				(!eth_spec || !eth_mask)) {
+			    (!eth_spec || !eth_mask)) {
 				rte_flow_error_set(error, EINVAL,
 						RTE_FLOW_ERROR_TYPE_ITEM,
 						item, "NULL eth spec/mask.");
@@ -637,69 +669,122 @@ iavf_fdir_parse_pattern(__rte_unused struct iavf_adapter *ad,
 				}
 
 				input_set |= IAVF_INSET_ETHERTYPE;
-				VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, ETH, ETHERTYPE);
+				VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr1, ETH,
+								 ETHERTYPE);
 
-				rte_memcpy(hdr->buffer,
-					eth_spec, sizeof(struct rte_ether_hdr));
+				rte_memcpy(hdr1->buffer, eth_spec,
+					   sizeof(struct rte_ether_hdr));
 			}
 
-			filter->add_fltr.rule_cfg.proto_hdrs.count = ++layer;
+			hdrs->count = ++layer;
 			break;
 
 		case RTE_FLOW_ITEM_TYPE_IPV4:
 			l3 = RTE_FLOW_ITEM_TYPE_IPV4;
 			ipv4_spec = item->spec;
+			ipv4_last = item->last;
 			ipv4_mask = item->mask;
+			next_type = (item + 1)->type;
 
-			hdr = &filter->add_fltr.rule_cfg.proto_hdrs.proto_hdr[layer];
+			hdr = &hdrs->proto_hdr[layer];
 
 			VIRTCHNL_SET_PROTO_HDR_TYPE(hdr, IPV4);
 
-			if (ipv4_spec && ipv4_mask) {
-				if (ipv4_mask->hdr.version_ihl ||
-					ipv4_mask->hdr.total_length ||
-					ipv4_mask->hdr.packet_id ||
-					ipv4_mask->hdr.fragment_offset ||
-					ipv4_mask->hdr.hdr_checksum) {
-					rte_flow_error_set(error, EINVAL,
-						RTE_FLOW_ERROR_TYPE_ITEM,
-						item, "Invalid IPv4 mask.");
-					return -rte_errno;
-				}
+			if (!(ipv4_spec && ipv4_mask)) {
+				hdrs->count = ++layer;
+				break;
+			}
 
-				if (ipv4_mask->hdr.type_of_service ==
-				    UINT8_MAX) {
-					input_set |= IAVF_INSET_IPV4_TOS;
-					VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, IPV4, DSCP);
-				}
-				if (ipv4_mask->hdr.next_proto_id == UINT8_MAX) {
-					input_set |= IAVF_INSET_IPV4_PROTO;
-					VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, IPV4, PROT);
-				}
-				if (ipv4_mask->hdr.time_to_live == UINT8_MAX) {
-					input_set |= IAVF_INSET_IPV4_TTL;
-					VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, IPV4, TTL);
-				}
-				if (ipv4_mask->hdr.src_addr == UINT32_MAX) {
-					input_set |= IAVF_INSET_IPV4_SRC;
-					VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, IPV4, SRC);
-				}
-				if (ipv4_mask->hdr.dst_addr == UINT32_MAX) {
-					input_set |= IAVF_INSET_IPV4_DST;
-					VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, IPV4, DST);
-				}
+			if (ipv4_mask->hdr.version_ihl ||
+			    ipv4_mask->hdr.total_length ||
+			    ipv4_mask->hdr.hdr_checksum) {
+				rte_flow_error_set(error, EINVAL,
+						   RTE_FLOW_ERROR_TYPE_ITEM,
+						   item, "Invalid IPv4 mask.");
+				return -rte_errno;
+			}
 
-			if (tun_inner) {
-				input_set &= ~IAVF_PROT_IPV4_OUTER;
-				input_set |= IAVF_PROT_IPV4_INNER;
-			}
+			if (ipv4_last &&
+			    (ipv4_last->hdr.version_ihl ||
+			     ipv4_last->hdr.type_of_service ||
+			     ipv4_last->hdr.time_to_live ||
+			     ipv4_last->hdr.total_length ||
+			     ipv4_last->hdr.next_proto_id ||
+			     ipv4_last->hdr.hdr_checksum ||
+			     ipv4_last->hdr.src_addr ||
+			     ipv4_last->hdr.dst_addr)) {
+				rte_flow_error_set(error, EINVAL,
+						   RTE_FLOW_ERROR_TYPE_ITEM,
+						   item, "Invalid IPv4 last.");
+				return -rte_errno;
+			}
 
-				rte_memcpy(hdr->buffer,
-					&ipv4_spec->hdr,
-					sizeof(ipv4_spec->hdr));
+			if (ipv4_mask->hdr.type_of_service ==
+			    UINT8_MAX) {
+				input_set |= IAVF_INSET_IPV4_TOS;
+				VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, IPV4,
+								 DSCP);
+			}
+
+			if (ipv4_mask->hdr.next_proto_id == UINT8_MAX) {
+				input_set |= IAVF_INSET_IPV4_PROTO;
+				VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, IPV4,
+								 PROT);
+			}
+
+			if (ipv4_mask->hdr.time_to_live == UINT8_MAX) {
+				input_set |= IAVF_INSET_IPV4_TTL;
+				VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, IPV4,
+								 TTL);
+			}
+
+			if (ipv4_mask->hdr.src_addr == UINT32_MAX) {
+				input_set |= IAVF_INSET_IPV4_SRC;
+				VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, IPV4,
+								 SRC);
+			}
+
+			if (ipv4_mask->hdr.dst_addr == UINT32_MAX) {
+				input_set |= IAVF_INSET_IPV4_DST;
+				VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, IPV4,
+								 DST);
+			}
+
+			if (tun_inner) {
+				input_set &= ~IAVF_PROT_IPV4_OUTER;
+				input_set |= IAVF_PROT_IPV4_INNER;
+			}
+
+			rte_memcpy(hdr->buffer, &ipv4_spec->hdr,
+				   sizeof(ipv4_spec->hdr));
+
+			hdrs->count = ++layer;
+
+			/* only support any packet id for fragment IPv4
+			 * any packet_id:
+			 * spec is 0, last is 0xffff, mask is 0xffff
+			 */
+			if (ipv4_last && ipv4_spec->hdr.packet_id == 0 &&
+			    ipv4_last->hdr.packet_id == UINT16_MAX &&
+			    ipv4_mask->hdr.packet_id == UINT16_MAX &&
+			    ipv4_mask->hdr.fragment_offset == UINT16_MAX) {
+				/* all IPv4 fragment packets have the same
+				 * ethertype; if the spec is for all valid
+				 * packet ids, set ethertype into input set.
+				 */
+				input_set |= IAVF_INSET_ETHERTYPE;
+				VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr1, ETH,
+								 ETHERTYPE);
+
+				/* add dummy header for IPv4 Fragment */
+				iavf_fdir_add_fragment_hdr(hdrs, layer);
+			} else if (ipv4_mask->hdr.packet_id == UINT16_MAX) {
+				rte_flow_error_set(error, EINVAL,
+						   RTE_FLOW_ERROR_TYPE_ITEM,
+						   item, "Invalid IPv4 mask.");
+				return -rte_errno;
 			}
 
-			filter->add_fltr.rule_cfg.proto_hdrs.count = ++layer;
 			break;
 
 		case RTE_FLOW_ITEM_TYPE_IPV6:
@@ -707,63 +792,114 @@ iavf_fdir_parse_pattern(__rte_unused struct iavf_adapter *ad,
 			ipv6_spec = item->spec;
 			ipv6_mask = item->mask;
 
-			hdr = &filter->add_fltr.rule_cfg.proto_hdrs.proto_hdr[layer];
+			hdr = &hdrs->proto_hdr[layer];
 
 			VIRTCHNL_SET_PROTO_HDR_TYPE(hdr, IPV6);
 
-			if (ipv6_spec && ipv6_mask) {
-				if (ipv6_mask->hdr.payload_len) {
-					rte_flow_error_set(error, EINVAL,
-						RTE_FLOW_ERROR_TYPE_ITEM,
-						item, "Invalid IPv6 mask");
-					return -rte_errno;
-				}
+			if (!(ipv6_spec && ipv6_mask)) {
+				hdrs->count = ++layer;
+				break;
+			}
 
-				if ((ipv6_mask->hdr.vtc_flow &
-					rte_cpu_to_be_32(IAVF_IPV6_TC_MASK))
-					== rte_cpu_to_be_32(IAVF_IPV6_TC_MASK)) {
-					input_set |= IAVF_INSET_IPV6_TC;
-					VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, IPV6, TC);
-				}
-				if (ipv6_mask->hdr.proto == UINT8_MAX) {
-					input_set |= IAVF_INSET_IPV6_NEXT_HDR;
-					VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, IPV6, PROT);
-				}
-				if (ipv6_mask->hdr.hop_limits == UINT8_MAX) {
-					input_set |= IAVF_INSET_IPV6_HOP_LIMIT;
-					VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, IPV6, HOP_LIMIT);
-				}
-				if (!memcmp(ipv6_mask->hdr.src_addr,
-					ipv6_addr_mask,
-					RTE_DIM(ipv6_mask->hdr.src_addr))) {
-					input_set |= IAVF_INSET_IPV6_SRC;
-					VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, IPV6, SRC);
-				}
-				if (!memcmp(ipv6_mask->hdr.dst_addr,
-					ipv6_addr_mask,
-					RTE_DIM(ipv6_mask->hdr.dst_addr))) {
-					input_set |= IAVF_INSET_IPV6_DST;
-					VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, IPV6, DST);
-				}
+			if (ipv6_mask->hdr.payload_len) {
+				rte_flow_error_set(error, EINVAL,
+						   RTE_FLOW_ERROR_TYPE_ITEM,
+						   item, "Invalid IPv6 mask");
+				return -rte_errno;
+			}
 
-			if (tun_inner) {
-				input_set &= ~IAVF_PROT_IPV6_OUTER;
-				input_set |= IAVF_PROT_IPV6_INNER;
-			}
+			if ((ipv6_mask->hdr.vtc_flow &
+			     rte_cpu_to_be_32(IAVF_IPV6_TC_MASK))
+			    == rte_cpu_to_be_32(IAVF_IPV6_TC_MASK)) {
+				input_set |= IAVF_INSET_IPV6_TC;
+				VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, IPV6,
+								 TC);
+			}
 
-				rte_memcpy(hdr->buffer,
-					&ipv6_spec->hdr,
-					sizeof(ipv6_spec->hdr));
+			if (ipv6_mask->hdr.proto == UINT8_MAX) {
+				input_set |= IAVF_INSET_IPV6_NEXT_HDR;
+				VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, IPV6,
+								 PROT);
+			}
+
+			if (ipv6_mask->hdr.hop_limits == UINT8_MAX) {
+				input_set |= IAVF_INSET_IPV6_HOP_LIMIT;
+				VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, IPV6,
+								 HOP_LIMIT);
+			}
+
+			if (!memcmp(ipv6_mask->hdr.src_addr, ipv6_addr_mask,
+				    RTE_DIM(ipv6_mask->hdr.src_addr))) {
+				input_set |= IAVF_INSET_IPV6_SRC;
+				VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, IPV6,
+								 SRC);
+			}
+			if (!memcmp(ipv6_mask->hdr.dst_addr, ipv6_addr_mask,
+				    RTE_DIM(ipv6_mask->hdr.dst_addr))) {
+				input_set |= IAVF_INSET_IPV6_DST;
+				VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, IPV6,
+								 DST);
+			}
+
+			if (tun_inner) {
+				input_set &= ~IAVF_PROT_IPV6_OUTER;
+				input_set |= IAVF_PROT_IPV6_INNER;
+			}
+
+			rte_memcpy(hdr->buffer, &ipv6_spec->hdr,
+				   sizeof(ipv6_spec->hdr));
+
+			hdrs->count = ++layer;
+			break;
+
+		case RTE_FLOW_ITEM_TYPE_IPV6_FRAG_EXT:
+			ipv6_frag_spec = item->spec;
+			ipv6_frag_last = item->last;
+			ipv6_frag_mask = item->mask;
+			next_type = (item + 1)->type;
+
+			hdr = &hdrs->proto_hdr[layer];
+
+			VIRTCHNL_SET_PROTO_HDR_TYPE(hdr, IPV6_EH_FRAG);
+
+			if (!(ipv6_frag_spec && ipv6_frag_mask)) {
+				hdrs->count = ++layer;
+				break;
+			}
+
+			/* only support any packet id for fragment IPv6
+			 * any packet_id:
+			 * spec is 0, last is 0xffffffff, mask is 0xffffffff
+			 */
+			if (ipv6_frag_last && ipv6_frag_spec->hdr.id == 0 &&
+			    ipv6_frag_last->hdr.id == UINT32_MAX &&
+			    ipv6_frag_mask->hdr.id == UINT32_MAX &&
+			    ipv6_frag_mask->hdr.frag_data == UINT16_MAX) {
+				/* all IPv6 fragment packets have the same
+				 * ethertype; if the spec is for all valid
+				 * packet ids, set ethertype into input set.
+				 */
+				input_set |= IAVF_INSET_ETHERTYPE;
+				VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr1, ETH,
+								 ETHERTYPE);
+
+				rte_memcpy(hdr->buffer, &ipv6_frag_spec->hdr,
+					   sizeof(ipv6_frag_spec->hdr));
+			} else if (ipv6_frag_mask->hdr.id == UINT32_MAX) {
+				rte_flow_error_set(error, EINVAL,
+						   RTE_FLOW_ERROR_TYPE_ITEM,
+						   item, "Invalid IPv6 mask.");
+				return -rte_errno;
 			}
 
-			filter->add_fltr.rule_cfg.proto_hdrs.count = ++layer;
+			hdrs->count = ++layer;
 			break;
 
 		case RTE_FLOW_ITEM_TYPE_UDP:
 			udp_spec = item->spec;
 			udp_mask = item->mask;
 
-			hdr = &filter->add_fltr.rule_cfg.proto_hdrs.proto_hdr[layer];
+			hdr = &hdrs->proto_hdr[layer];
 
 			VIRTCHNL_SET_PROTO_HDR_TYPE(hdr, UDP);
 
@@ -800,14 +936,14 @@ iavf_fdir_parse_pattern(__rte_unused struct iavf_adapter *ad,
 					   sizeof(udp_spec->hdr));
 			}
 
-			filter->add_fltr.rule_cfg.proto_hdrs.count = ++layer;
+			hdrs->count = ++layer;
 			break;
 
 		case RTE_FLOW_ITEM_TYPE_TCP:
 			tcp_spec = item->spec;
 			tcp_mask = item->mask;
 
-			hdr = &filter->add_fltr.rule_cfg.proto_hdrs.proto_hdr[layer];
+			hdr = &hdrs->proto_hdr[layer];
 
 			VIRTCHNL_SET_PROTO_HDR_TYPE(hdr, TCP);
 
@@ -849,14 +985,14 @@ iavf_fdir_parse_pattern(__rte_unused struct iavf_adapter *ad,
 					   sizeof(tcp_spec->hdr));
 			}
 
-			filter->add_fltr.rule_cfg.proto_hdrs.count = ++layer;
+			hdrs->count = ++layer;
 			break;
 
 		case RTE_FLOW_ITEM_TYPE_SCTP:
 			sctp_spec = item->spec;
 			sctp_mask = item->mask;
 
-			hdr = &filter->add_fltr.rule_cfg.proto_hdrs.proto_hdr[layer];
+			hdr = &hdrs->proto_hdr[layer];
 
 			VIRTCHNL_SET_PROTO_HDR_TYPE(hdr, SCTP);
 
@@ -887,14 +1023,14 @@ iavf_fdir_parse_pattern(__rte_unused struct iavf_adapter *ad,
 					   sizeof(sctp_spec->hdr));
 			}
 
-			filter->add_fltr.rule_cfg.proto_hdrs.count = ++layer;
+			hdrs->count = ++layer;
 			break;
 
 		case RTE_FLOW_ITEM_TYPE_GTPU:
 			gtp_spec = item->spec;
 			gtp_mask = item->mask;
 
-			hdr = &filter->add_fltr.rule_cfg.proto_hdrs.proto_hdr[layer];
+			hdr = &hdrs->proto_hdr[layer];
 
 			VIRTCHNL_SET_PROTO_HDR_TYPE(hdr, GTPU_IP);
 
@@ -919,14 +1055,14 @@ iavf_fdir_parse_pattern(__rte_unused struct iavf_adapter *ad,
 
 			tun_inner = 1;
 
-			filter->add_fltr.rule_cfg.proto_hdrs.count = ++layer;
+			hdrs->count = ++layer;
 			break;
 
 		case RTE_FLOW_ITEM_TYPE_GTP_PSC:
 			gtp_psc_spec = item->spec;
 			gtp_psc_mask = item->mask;
 
-			hdr = &filter->add_fltr.rule_cfg.proto_hdrs.proto_hdr[layer];
+			hdr = &hdrs->proto_hdr[layer];
 
 			if (!gtp_psc_spec)
 				VIRTCHNL_SET_PROTO_HDR_TYPE(hdr, GTPU_EH);
@@ -947,14 +1083,14 @@ iavf_fdir_parse_pattern(__rte_unused struct iavf_adapter *ad,
 					   sizeof(*gtp_psc_spec));
 			}
 
-			filter->add_fltr.rule_cfg.proto_hdrs.count = ++layer;
+			hdrs->count = ++layer;
 			break;
 
 		case RTE_FLOW_ITEM_TYPE_L2TPV3OIP:
 			l2tpv3oip_spec = item->spec;
 			l2tpv3oip_mask = item->mask;
 
-			hdr = &filter->add_fltr.rule_cfg.proto_hdrs.proto_hdr[layer];
+			hdr = &hdrs->proto_hdr[layer];
 
 			VIRTCHNL_SET_PROTO_HDR_TYPE(hdr, L2TPV3);
 
@@ -968,14 +1104,14 @@ iavf_fdir_parse_pattern(__rte_unused struct iavf_adapter *ad,
 					   sizeof(*l2tpv3oip_spec));
 			}
 
-			filter->add_fltr.rule_cfg.proto_hdrs.count = ++layer;
+			hdrs->count = ++layer;
 			break;
 
 		case RTE_FLOW_ITEM_TYPE_ESP:
 			esp_spec = item->spec;
 			esp_mask = item->mask;
 
-			hdr = &filter->add_fltr.rule_cfg.proto_hdrs.proto_hdr[layer];
+			hdr = &hdrs->proto_hdr[layer];
 
 			VIRTCHNL_SET_PROTO_HDR_TYPE(hdr, ESP);
 
@@ -989,14 +1125,14 @@ iavf_fdir_parse_pattern(__rte_unused struct iavf_adapter *ad,
 					   sizeof(esp_spec->hdr));
 			}
 
-			filter->add_fltr.rule_cfg.proto_hdrs.count = ++layer;
+			hdrs->count = ++layer;
 			break;
 
 		case RTE_FLOW_ITEM_TYPE_AH:
 			ah_spec = item->spec;
 			ah_mask = item->mask;
 
-			hdr = &filter->add_fltr.rule_cfg.proto_hdrs.proto_hdr[layer];
+			hdr = &hdrs->proto_hdr[layer];
 
 			VIRTCHNL_SET_PROTO_HDR_TYPE(hdr, AH);
 
@@ -1010,14 +1146,14 @@ iavf_fdir_parse_pattern(__rte_unused struct iavf_adapter *ad,
 					   sizeof(*ah_spec));
 			}
 
-			filter->add_fltr.rule_cfg.proto_hdrs.count = ++layer;
+			hdrs->count = ++layer;
 			break;
 
 		case RTE_FLOW_ITEM_TYPE_PFCP:
 			pfcp_spec = item->spec;
 			pfcp_mask = item->mask;
 
-			hdr = &filter->add_fltr.rule_cfg.proto_hdrs.proto_hdr[layer];
+			hdr = &hdrs->proto_hdr[layer];
 
 			VIRTCHNL_SET_PROTO_HDR_TYPE(hdr, PFCP);
 
@@ -1031,7 +1167,7 @@ iavf_fdir_parse_pattern(__rte_unused struct iavf_adapter *ad,
 					   sizeof(*pfcp_spec));
 			}
 
-			filter->add_fltr.rule_cfg.proto_hdrs.count = ++layer;
+			hdrs->count = ++layer;
 			break;
 
 		case RTE_FLOW_ITEM_TYPE_ECPRI:
@@ -1040,7 +1176,7 @@ iavf_fdir_parse_pattern(__rte_unused struct iavf_adapter *ad,
 			ecpri_common.u32 = rte_be_to_cpu_32(ecpri_spec->hdr.common.u32);
 
-			hdr = &filter->add_fltr.rule_cfg.proto_hdrs.proto_hdr[layer];
+			hdr = &hdrs->proto_hdr[layer];
 
 			VIRTCHNL_SET_PROTO_HDR_TYPE(hdr, ECPRI);
 
@@ -1056,7 +1192,7 @@ iavf_fdir_parse_pattern(__rte_unused struct iavf_adapter *ad,
 					   sizeof(*ecpri_spec));
 			}
 
-			filter->add_fltr.rule_cfg.proto_hdrs.count = ++layer;
+			hdrs->count = ++layer;
 			break;
 
 		case RTE_FLOW_ITEM_TYPE_VOID:
@@ -1077,7 +1213,9 @@ iavf_fdir_parse_pattern(__rte_unused struct iavf_adapter *ad,
 		return -rte_errno;
 	}
 
-	if (!iavf_fdir_refine_input_set(input_set, input_set_mask, filter)) {
+	if (!iavf_fdir_refine_input_set(input_set,
+					input_set_mask | IAVF_INSET_ETHERTYPE,
+					filter)) {
 		rte_flow_error_set(error, EINVAL,
 				   RTE_FLOW_ERROR_TYPE_ITEM_SPEC, pattern,
 				   "Invalid input set");
diff --git a/drivers/net/iavf/iavf_generic_flow.h b/drivers/net/iavf/iavf_generic_flow.h
index 32932557ca..e19da15518 100644
--- a/drivers/net/iavf/iavf_generic_flow.h
+++ b/drivers/net/iavf/iavf_generic_flow.h
@@ -61,6 +61,7 @@
 #define IAVF_PFCP_S_FIELD	    (1ULL << 44)
 #define IAVF_PFCP_SEID		    (1ULL << 43)
 #define IAVF_ECPRI_PC_RTC_ID	    (1ULL << 42)
+#define IAVF_IP_PKID		    (1ULL << 41)
 
 /* input set */
 
@@ -84,6 +85,8 @@
 	(IAVF_PROT_IPV4_OUTER | IAVF_IP_PROTO)
 #define IAVF_INSET_IPV4_TTL \
 	(IAVF_PROT_IPV4_OUTER | IAVF_IP_TTL)
+#define IAVF_INSET_IPV4_ID \
+	(IAVF_PROT_IPV4_OUTER | IAVF_IP_PKID)
 #define IAVF_INSET_IPV6_SRC \
 	(IAVF_PROT_IPV6_OUTER | IAVF_IP_SRC)
 #define IAVF_INSET_IPV6_DST \
@@ -94,6 +97,8 @@
 	(IAVF_PROT_IPV6_OUTER | IAVF_IP_TTL)
 #define IAVF_INSET_IPV6_TC \
 	(IAVF_PROT_IPV6_OUTER | IAVF_IP_TOS)
+#define IAVF_INSET_IPV6_ID \
+	(IAVF_PROT_IPV6_OUTER | IAVF_IP_PKID)
 #define IAVF_INSET_TUN_IPV4_SRC \
 	(IAVF_PROT_IPV4_INNER | IAVF_IP_SRC)