From patchwork Fri May 20 09:16:45 2022
X-Patchwork-Submitter: Junfeng Guo
X-Patchwork-Id: 111529
X-Patchwork-Delegate: qi.z.zhang@intel.com
From: Junfeng Guo
To: qi.z.zhang@intel.com, jingjing.wu@intel.com,
 beilei.xing@intel.com
Cc: dev@dpdk.org, ting.xu@intel.com, junfeng.guo@intel.com
Subject: [PATCH v5 1/4] common/iavf: support raw packet in protocol header
Date: Fri, 20 May 2022 17:16:45 +0800
Message-Id: <20220520091648.3524540-2-junfeng.guo@intel.com>
In-Reply-To: <20220520091648.3524540-1-junfeng.guo@intel.com>
References: <20220421032851.1355350-5-junfeng.guo@intel.com>
 <20220520091648.3524540-1-junfeng.guo@intel.com>
List-Id: DPDK patches and discussions

This patch extends the existing virtchnl_proto_hdrs structure so that a
VF can pass a pair of buffers, packet data and a mask, that describe the
match pattern of a filter rule. The kernel PF driver is then requested
to parse the buffer pair and derive the low-level hardware metadata
(ptype, profile, field vector, etc.) needed to program the expected
FDIR or RSS rules.

Signed-off-by: Qi Zhang
Signed-off-by: Junfeng Guo
---
 drivers/common/iavf/virtchnl.h | 20 ++++++++++++++++----
 1 file changed, 16 insertions(+), 4 deletions(-)

diff --git a/drivers/common/iavf/virtchnl.h b/drivers/common/iavf/virtchnl.h
index 2d49f95f84..f123daec8e 100644
--- a/drivers/common/iavf/virtchnl.h
+++ b/drivers/common/iavf/virtchnl.h
@@ -1503,6 +1503,7 @@ enum virtchnl_vfr_states {
 };
 
 #define VIRTCHNL_MAX_NUM_PROTO_HDRS	32
+#define VIRTCHNL_MAX_SIZE_RAW_PACKET	1024
 #define PROTO_HDR_SHIFT			5
 #define PROTO_HDR_FIELD_START(proto_hdr_type) \
 	(proto_hdr_type << PROTO_HDR_SHIFT)
@@ -1697,14 +1698,25 @@ VIRTCHNL_CHECK_STRUCT_LEN(72, virtchnl_proto_hdr);
 struct virtchnl_proto_hdrs {
 	u8 tunnel_level;
 	/**
-	 * specify where protocol header start from.
+	 * specify where protocol header start from. must be 0 when sending a raw packet request.
 	 * 0 - from the outer layer
 	 * 1 - from the first inner layer
 	 * 2 - from the second inner layer
 	 * ....
-	 **/
-	int count; /* the proto layers must < VIRTCHNL_MAX_NUM_PROTO_HDRS */
-	struct virtchnl_proto_hdr proto_hdr[VIRTCHNL_MAX_NUM_PROTO_HDRS];
+	 */
+	int count;
+	/**
+	 * number of proto layers, must < VIRTCHNL_MAX_NUM_PROTO_HDRS
+	 * must be 0 for a raw packet request.
+	 */
+	union {
+		struct virtchnl_proto_hdr proto_hdr[VIRTCHNL_MAX_NUM_PROTO_HDRS];
+		struct {
+			u16 pkt_len;
+			u8 spec[VIRTCHNL_MAX_SIZE_RAW_PACKET];
+			u8 mask[VIRTCHNL_MAX_SIZE_RAW_PACKET];
+		} raw;
+	};
 };
 
 VIRTCHNL_CHECK_STRUCT_LEN(2312, virtchnl_proto_hdrs);

From patchwork Fri May 20 09:16:46 2022
X-Patchwork-Submitter: Junfeng Guo
X-Patchwork-Id: 111530
X-Patchwork-Delegate: qi.z.zhang@intel.com
From: Junfeng Guo
To: qi.z.zhang@intel.com, jingjing.wu@intel.com, beilei.xing@intel.com
Cc: dev@dpdk.org, ting.xu@intel.com, junfeng.guo@intel.com
Subject: [PATCH v5 2/4] net/iavf: align with proto hdr struct change
Date: Fri, 20 May 2022 17:16:46 +0800
Message-Id: <20220520091648.3524540-3-junfeng.guo@intel.com>

Structure virtchnl_proto_hdrs is extended with a union of the proto_hdr
table and a raw struct. Update the proto_hdrs template initializers to
align with the virtchnl change: the header tables now need an extra
level of braces to initialize the first union member.
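Why every template gains a second brace pair can be shown in isolation: once the header table moves inside an anonymous union, an aggregate initializer must open the union before it can open the array. A minimal sketch with simplified stand-in types (these are not the real virtchnl definitions; sizes are arbitrary):

```c
#include <assert.h>

/* Simplified stand-in for struct virtchnl_proto_hdr (hypothetical). */
struct proto_hdr { int type; };

/* Mirrors the new layout: the header table sits inside an anonymous
 * union next to a raw spec/mask pair. */
struct proto_hdrs {
	int tunnel_level;
	int count;
	union {
		struct proto_hdr proto_hdr[4];	/* first union member */
		struct {
			unsigned short pkt_len;
			unsigned char spec[8];
			unsigned char mask[8];
		} raw;
	};
};

/* Before the union: {0, 2, {h1, h2}}.
 * After the union:  {0, 2, {{h1, h2}}} -- the outer brace opens the
 * union, the inner brace opens proto_hdr[]. */
static const struct proto_hdrs tmplt = {
	0, 2, {{ {1}, {2} }}
};
```

An initializer for a union initializes its first named member, which is why the raw variant needs no template changes of its own.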
Signed-off-by: Junfeng Guo
---
 drivers/net/iavf/iavf_hash.c | 180 ++++++++++++++++++-----------------
 1 file changed, 92 insertions(+), 88 deletions(-)

diff --git a/drivers/net/iavf/iavf_hash.c b/drivers/net/iavf/iavf_hash.c
index f35a07653b..278e75117d 100644
--- a/drivers/net/iavf/iavf_hash.c
+++ b/drivers/net/iavf/iavf_hash.c
@@ -181,252 +181,256 @@ iavf_hash_parse_pattern_action(struct iavf_adapter *ad,
 /* proto_hdrs template */
 struct virtchnl_proto_hdrs outer_ipv4_tmplt = {
 	TUNNEL_LEVEL_OUTER, 4,
-	{proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan, proto_hdr_ipv4}
+	{{proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan, proto_hdr_ipv4}}
 };
 
 struct virtchnl_proto_hdrs outer_ipv4_udp_tmplt = {
 	TUNNEL_LEVEL_OUTER, 5,
-	{proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan,
-	 proto_hdr_ipv4_with_prot,
-	 proto_hdr_udp}
+	{{proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan,
+	  proto_hdr_ipv4_with_prot,
+	  proto_hdr_udp}}
 };
 
 struct virtchnl_proto_hdrs outer_ipv4_tcp_tmplt = {
 	TUNNEL_LEVEL_OUTER, 5,
-	{proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan,
-	 proto_hdr_ipv4_with_prot,
-	 proto_hdr_tcp}
+	{{proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan,
+	  proto_hdr_ipv4_with_prot,
+	  proto_hdr_tcp}}
 };
 
 struct virtchnl_proto_hdrs outer_ipv4_sctp_tmplt = {
 	TUNNEL_LEVEL_OUTER, 5,
-	{proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan, proto_hdr_ipv4,
-	 proto_hdr_sctp}
+	{{proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan, proto_hdr_ipv4,
+	  proto_hdr_sctp}}
 };
 
 struct virtchnl_proto_hdrs outer_ipv6_tmplt = {
 	TUNNEL_LEVEL_OUTER, 4,
-	{proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan, proto_hdr_ipv6}
+	{{proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan, proto_hdr_ipv6}}
 };
 
 struct virtchnl_proto_hdrs outer_ipv6_frag_tmplt = {
 	TUNNEL_LEVEL_OUTER, 5,
-	{proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan,
-	 proto_hdr_ipv6, proto_hdr_ipv6_frag}
+	{{proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan,
+	  proto_hdr_ipv6, proto_hdr_ipv6_frag}}
 };
 
 struct virtchnl_proto_hdrs outer_ipv6_udp_tmplt = {
 	TUNNEL_LEVEL_OUTER, 5,
-	{proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan,
-	 proto_hdr_ipv6_with_prot,
-	 proto_hdr_udp}
+	{{proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan,
+	  proto_hdr_ipv6_with_prot,
+	  proto_hdr_udp}}
 };
 
 struct virtchnl_proto_hdrs outer_ipv6_tcp_tmplt = {
 	TUNNEL_LEVEL_OUTER, 5,
-	{proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan,
-	 proto_hdr_ipv6_with_prot,
-	 proto_hdr_tcp}
+	{{proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan,
+	  proto_hdr_ipv6_with_prot,
+	  proto_hdr_tcp}}
 };
 
 struct virtchnl_proto_hdrs outer_ipv6_sctp_tmplt = {
 	TUNNEL_LEVEL_OUTER, 5,
-	{proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan, proto_hdr_ipv6,
-	 proto_hdr_sctp}
+	{{proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan, proto_hdr_ipv6,
+	  proto_hdr_sctp}}
 };
 
 struct virtchnl_proto_hdrs inner_ipv4_tmplt = {
-	TUNNEL_LEVEL_INNER, 1, {proto_hdr_ipv4}
+	TUNNEL_LEVEL_INNER, 1, {{proto_hdr_ipv4}}
 };
 
 struct virtchnl_proto_hdrs inner_ipv4_udp_tmplt = {
-	TUNNEL_LEVEL_INNER, 2, {proto_hdr_ipv4_with_prot, proto_hdr_udp}
+	TUNNEL_LEVEL_INNER, 2, {{proto_hdr_ipv4_with_prot, proto_hdr_udp}}
 };
 
 struct virtchnl_proto_hdrs inner_ipv4_tcp_tmplt = {
-	TUNNEL_LEVEL_INNER, 2, {proto_hdr_ipv4_with_prot, proto_hdr_tcp}
+	TUNNEL_LEVEL_INNER, 2, {{proto_hdr_ipv4_with_prot, proto_hdr_tcp}}
 };
 
 struct virtchnl_proto_hdrs second_inner_ipv4_tmplt = {
-	2, 1, {proto_hdr_ipv4}
+	2, 1, {{proto_hdr_ipv4}}
 };
 
 struct virtchnl_proto_hdrs second_inner_ipv4_udp_tmplt = {
-	2, 2, {proto_hdr_ipv4_with_prot, proto_hdr_udp}
+	2, 2, {{proto_hdr_ipv4_with_prot, proto_hdr_udp}}
 };
 
 struct virtchnl_proto_hdrs second_inner_ipv4_tcp_tmplt = {
-	2, 2, {proto_hdr_ipv4_with_prot, proto_hdr_tcp}
+	2, 2, {{proto_hdr_ipv4_with_prot, proto_hdr_tcp}}
 };
 
 struct virtchnl_proto_hdrs second_inner_ipv6_tmplt = {
-	2, 1, {proto_hdr_ipv6}
+	2, 1, {{proto_hdr_ipv6}}
 };
 
 struct virtchnl_proto_hdrs second_inner_ipv6_udp_tmplt = {
-	2, 2, {proto_hdr_ipv6_with_prot, proto_hdr_udp}
+	2, 2, {{proto_hdr_ipv6_with_prot, proto_hdr_udp}}
 };
 
 struct virtchnl_proto_hdrs second_inner_ipv6_tcp_tmplt = {
-	2, 2, {proto_hdr_ipv6_with_prot, proto_hdr_tcp}
+	2, 2, {{proto_hdr_ipv6_with_prot, proto_hdr_tcp}}
 };
 
 struct virtchnl_proto_hdrs inner_ipv4_sctp_tmplt = {
-	TUNNEL_LEVEL_INNER, 2, {proto_hdr_ipv4, proto_hdr_sctp}
+	TUNNEL_LEVEL_INNER, 2, {{proto_hdr_ipv4, proto_hdr_sctp}}
 };
 
 struct virtchnl_proto_hdrs inner_ipv6_tmplt = {
-	TUNNEL_LEVEL_INNER, 1, {proto_hdr_ipv6}
+	TUNNEL_LEVEL_INNER, 1, {{proto_hdr_ipv6}}
 };
 
 struct virtchnl_proto_hdrs inner_ipv6_udp_tmplt = {
-	TUNNEL_LEVEL_INNER, 2, {proto_hdr_ipv6_with_prot, proto_hdr_udp}
+	TUNNEL_LEVEL_INNER, 2, {{proto_hdr_ipv6_with_prot, proto_hdr_udp}}
 };
 
 struct virtchnl_proto_hdrs inner_ipv6_tcp_tmplt = {
-	TUNNEL_LEVEL_INNER, 2, {proto_hdr_ipv6_with_prot, proto_hdr_tcp}
+	TUNNEL_LEVEL_INNER, 2, {{proto_hdr_ipv6_with_prot, proto_hdr_tcp}}
 };
 
 struct virtchnl_proto_hdrs inner_ipv6_sctp_tmplt = {
-	TUNNEL_LEVEL_INNER, 2, {proto_hdr_ipv6, proto_hdr_sctp}
+	TUNNEL_LEVEL_INNER, 2, {{proto_hdr_ipv6, proto_hdr_sctp}}
 };
 
 struct virtchnl_proto_hdrs ipv4_esp_tmplt = {
-	TUNNEL_LEVEL_OUTER, 2, {proto_hdr_ipv4, proto_hdr_esp}
+	TUNNEL_LEVEL_OUTER, 2, {{proto_hdr_ipv4, proto_hdr_esp}}
 };
 
 struct virtchnl_proto_hdrs ipv4_udp_esp_tmplt = {
 	TUNNEL_LEVEL_OUTER, 3,
-	{proto_hdr_ipv4, proto_hdr_udp, proto_hdr_esp}
+	{{proto_hdr_ipv4, proto_hdr_udp, proto_hdr_esp}}
 };
 
 struct virtchnl_proto_hdrs ipv4_ah_tmplt = {
-	TUNNEL_LEVEL_OUTER, 2, {proto_hdr_ipv4, proto_hdr_ah}
+	TUNNEL_LEVEL_OUTER, 2, {{proto_hdr_ipv4, proto_hdr_ah}}
 };
 
 struct virtchnl_proto_hdrs ipv6_esp_tmplt = {
-	TUNNEL_LEVEL_OUTER, 2, {proto_hdr_ipv6, proto_hdr_esp}
+	TUNNEL_LEVEL_OUTER, 2, {{proto_hdr_ipv6, proto_hdr_esp}}
 };
 
 struct virtchnl_proto_hdrs ipv6_udp_esp_tmplt = {
 	TUNNEL_LEVEL_OUTER, 3,
-	{proto_hdr_ipv6, proto_hdr_udp, proto_hdr_esp}
+	{{proto_hdr_ipv6, proto_hdr_udp, proto_hdr_esp}}
 };
 
 struct virtchnl_proto_hdrs ipv6_ah_tmplt = {
-	TUNNEL_LEVEL_OUTER, 2, {proto_hdr_ipv6, proto_hdr_ah}
+	TUNNEL_LEVEL_OUTER, 2, {{proto_hdr_ipv6, proto_hdr_ah}}
 };
 
 struct virtchnl_proto_hdrs ipv4_l2tpv3_tmplt = {
-	TUNNEL_LEVEL_OUTER, 2, {proto_hdr_ipv4, proto_hdr_l2tpv3}
+	TUNNEL_LEVEL_OUTER, 2, {{proto_hdr_ipv4, proto_hdr_l2tpv3}}
 };
 
 struct virtchnl_proto_hdrs ipv6_l2tpv3_tmplt = {
-	TUNNEL_LEVEL_OUTER, 2, {proto_hdr_ipv6, proto_hdr_l2tpv3}
+	TUNNEL_LEVEL_OUTER, 2, {{proto_hdr_ipv6, proto_hdr_l2tpv3}}
 };
 
 struct virtchnl_proto_hdrs ipv4_pfcp_tmplt = {
-	TUNNEL_LEVEL_OUTER, 2, {proto_hdr_ipv4, proto_hdr_pfcp}
+	TUNNEL_LEVEL_OUTER, 2, {{proto_hdr_ipv4, proto_hdr_pfcp}}
 };
 
 struct virtchnl_proto_hdrs ipv6_pfcp_tmplt = {
-	TUNNEL_LEVEL_OUTER, 2, {proto_hdr_ipv6, proto_hdr_pfcp}
+	TUNNEL_LEVEL_OUTER, 2, {{proto_hdr_ipv6, proto_hdr_pfcp}}
 };
 
 struct virtchnl_proto_hdrs ipv4_udp_gtpc_tmplt = {
-	TUNNEL_LEVEL_OUTER, 3, {proto_hdr_ipv4, proto_hdr_udp, proto_hdr_gtpc}
+	TUNNEL_LEVEL_OUTER, 3,
+	{{proto_hdr_ipv4, proto_hdr_udp, proto_hdr_gtpc}}
 };
 
 struct virtchnl_proto_hdrs ipv6_udp_gtpc_tmplt = {
-	TUNNEL_LEVEL_OUTER, 3, {proto_hdr_ipv6, proto_hdr_udp, proto_hdr_gtpc}
+	TUNNEL_LEVEL_OUTER, 3,
+	{{proto_hdr_ipv6, proto_hdr_udp, proto_hdr_gtpc}}
 };
 
 struct virtchnl_proto_hdrs eth_ecpri_tmplt = {
-	TUNNEL_LEVEL_OUTER, 2, {proto_hdr_eth, proto_hdr_ecpri}
+	TUNNEL_LEVEL_OUTER, 2, {{proto_hdr_eth, proto_hdr_ecpri}}
 };
 
 struct virtchnl_proto_hdrs ipv4_ecpri_tmplt = {
-	TUNNEL_LEVEL_OUTER, 3, {proto_hdr_ipv4, proto_hdr_udp, proto_hdr_ecpri}
+	TUNNEL_LEVEL_OUTER, 3,
+	{{proto_hdr_ipv4, proto_hdr_udp, proto_hdr_ecpri}}
 };
 
 struct virtchnl_proto_hdrs udp_l2tpv2_ppp_ipv4_tmplt = {
 	TUNNEL_LEVEL_INNER, 3,
-	{proto_hdr_l2tpv2,
-	 proto_hdr_ppp,
-	 proto_hdr_ipv4}
+	{{proto_hdr_l2tpv2,
+	  proto_hdr_ppp,
+	  proto_hdr_ipv4}}
 };
 
 struct virtchnl_proto_hdrs udp_l2tpv2_ppp_ipv6_tmplt = {
 	TUNNEL_LEVEL_INNER, 3,
-	{proto_hdr_l2tpv2,
-	 proto_hdr_ppp,
-	 proto_hdr_ipv6}
+	{{proto_hdr_l2tpv2,
+	  proto_hdr_ppp,
+	  proto_hdr_ipv6}}
 };
 
 struct virtchnl_proto_hdrs udp_l2tpv2_ppp_ipv4_udp_tmplt = {
 	TUNNEL_LEVEL_INNER, 4,
-	{proto_hdr_l2tpv2,
-	 proto_hdr_ppp,
-	 proto_hdr_ipv4_with_prot,
-	 proto_hdr_udp}
+	{{proto_hdr_l2tpv2,
+	  proto_hdr_ppp,
+	  proto_hdr_ipv4_with_prot,
+	  proto_hdr_udp}}
 };
 
 struct virtchnl_proto_hdrs udp_l2tpv2_ppp_ipv4_tcp_tmplt = {
 	TUNNEL_LEVEL_INNER, 4,
-	{proto_hdr_l2tpv2,
-	 proto_hdr_ppp,
-	 proto_hdr_ipv4_with_prot,
-	 proto_hdr_tcp}
+	{{proto_hdr_l2tpv2,
+	  proto_hdr_ppp,
+	  proto_hdr_ipv4_with_prot,
+	  proto_hdr_tcp}}
 };
 
 struct virtchnl_proto_hdrs udp_l2tpv2_ppp_ipv6_udp_tmplt = {
 	TUNNEL_LEVEL_INNER, 4,
-	{proto_hdr_l2tpv2,
-	 proto_hdr_ppp,
-	 proto_hdr_ipv6_with_prot,
-	 proto_hdr_udp}
+	{{proto_hdr_l2tpv2,
+	  proto_hdr_ppp,
+	  proto_hdr_ipv6_with_prot,
+	  proto_hdr_udp}}
 };
 
 struct virtchnl_proto_hdrs udp_l2tpv2_ppp_ipv6_tcp_tmplt = {
 	TUNNEL_LEVEL_INNER, 4,
-	{proto_hdr_l2tpv2,
-	 proto_hdr_ppp,
-	 proto_hdr_ipv6_with_prot,
-	 proto_hdr_tcp}
+	{{proto_hdr_l2tpv2,
+	  proto_hdr_ppp,
+	  proto_hdr_ipv6_with_prot,
+	  proto_hdr_tcp}}
 };
 
 struct virtchnl_proto_hdrs ipv4_l2tpv2_tmplt = {
 	TUNNEL_LEVEL_OUTER, 4,
-	{proto_hdr_eth,
-	 proto_hdr_ipv4,
-	 proto_hdr_udp,
-	 proto_hdr_l2tpv2}
+	{{proto_hdr_eth,
+	  proto_hdr_ipv4,
+	  proto_hdr_udp,
+	  proto_hdr_l2tpv2}}
 };
 
 struct virtchnl_proto_hdrs ipv6_l2tpv2_tmplt = {
 	TUNNEL_LEVEL_OUTER, 4,
-	{proto_hdr_eth,
-	 proto_hdr_ipv6,
-	 proto_hdr_udp,
-	 proto_hdr_l2tpv2}
+	{{proto_hdr_eth,
+	  proto_hdr_ipv6,
+	  proto_hdr_udp,
+	  proto_hdr_l2tpv2}}
 };
 
 struct virtchnl_proto_hdrs ipv4_l2tpv2_ppp_tmplt = {
 	TUNNEL_LEVEL_OUTER, 5,
-	{proto_hdr_eth,
-	 proto_hdr_ipv4,
-	 proto_hdr_udp,
-	 proto_hdr_l2tpv2,
-	 proto_hdr_ppp}
+	{{proto_hdr_eth,
+	  proto_hdr_ipv4,
+	  proto_hdr_udp,
+	  proto_hdr_l2tpv2,
+	  proto_hdr_ppp}}
 };
 
 struct virtchnl_proto_hdrs ipv6_l2tpv2_ppp_tmplt = {
 	TUNNEL_LEVEL_OUTER, 5,
-	{proto_hdr_eth,
-	 proto_hdr_ipv6,
-	 proto_hdr_udp,
-	 proto_hdr_l2tpv2,
-	 proto_hdr_ppp}
+	{{proto_hdr_eth,
+	  proto_hdr_ipv6,
+	  proto_hdr_udp,
+	  proto_hdr_l2tpv2,
+	  proto_hdr_ppp}}
 };
 
 /* rss type super set */

From patchwork Fri May 20 09:16:47 2022
X-Patchwork-Submitter: Junfeng Guo
X-Patchwork-Id: 111531
X-Patchwork-Delegate: qi.z.zhang@intel.com
From: Junfeng Guo
To: qi.z.zhang@intel.com, jingjing.wu@intel.com, beilei.xing@intel.com
Cc: dev@dpdk.org, ting.xu@intel.com, junfeng.guo@intel.com
Subject: [PATCH v5 3/4]
 net/iavf: enable Protocol Agnostic Flow Offloading FDIR
Date: Fri, 20 May 2022 17:16:47 +0800
Message-Id: <20220520091648.3524540-4-junfeng.guo@intel.com>

This patch enables Protocol Agnostic Flow (raw flow) Offloading for Flow
Director (FDIR) in AVF, based on the Parser Library feature and the
existing rte_flow `raw` API. The input spec and mask of the raw pattern
are first parsed via the Parser Library, then passed to the kernel
driver to create the flow rule.

As with PF FDIR, each raw flow requires:
1. A byte string of the raw target packet bits.
2. A byte string containing the mask of the target packet.

Example: FDIR matching IPv4 dst addr 1.2.3.4 and redirecting to queue 3:

flow create 0 ingress pattern raw \
pattern spec \
00000000000000000000000008004500001400004000401000000000000001020304 \
pattern mask \
000000000000000000000000000000000000000000000000000000000000ffffffff \
/ end actions queue index 3 / mark id 3 / end

Note that the mask of some key bits (e.g., 0x0800 to indicate the IPv4
proto) is optional in this case; to avoid redundancy, the 0xFFFF mask
for 0x0800 is simply omitted from the mask byte string above. The '0x'
prefix for the spec and mask (hex) byte strings is also omitted here.
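The spec and mask strings above are ASCII hex; the driver converts each character pair into one packet byte before handing the buffers to the PF. A standalone sketch of that conversion (`hex_nibble` and `hex_to_bytes` are illustrative names, not the driver's functions):

```c
#include <stddef.h>
#include <stdint.h>

/* Map one ASCII hex digit to its 4-bit value (valid hex digits only). */
static uint8_t hex_nibble(uint8_t c)
{
	if (c >= 'a' && c <= 'f')
		return c - 'a' + 10;
	if (c >= 'A' && c <= 'F')
		return c - 'A' + 10;
	return c - '0';
}

/* Convert a hex string of 'len' characters into len/2 bytes, the way
 * the FDIR parser treats both the spec and the mask pattern strings.
 * Returns the resulting packet length in bytes. */
static size_t hex_to_bytes(const uint8_t *str, size_t len, uint8_t *out)
{
	size_t i, j;

	for (i = 0, j = 0; i + 1 < len; i += 2, j++)
		out[j] = (uint8_t)((hex_nibble(str[i]) << 4) |
				   hex_nibble(str[i + 1]));
	return j;
}
```

This is why the parser stores `pkt_len = raw_spec->length / 2`: the rte_flow item carries the string length, while the virtchnl message carries the byte length.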
Signed-off-by: Junfeng Guo
---
 doc/guides/rel_notes/release_22_07.rst |  1 +
 drivers/net/iavf/iavf_fdir.c           | 67 ++++++++++++++++++++++++++
 drivers/net/iavf/iavf_generic_flow.c   |  6 +++
 drivers/net/iavf/iavf_generic_flow.h   |  3 ++
 4 files changed, 77 insertions(+)

diff --git a/doc/guides/rel_notes/release_22_07.rst b/doc/guides/rel_notes/release_22_07.rst
index a0eb6ab61b..829fa6047e 100644
--- a/doc/guides/rel_notes/release_22_07.rst
+++ b/doc/guides/rel_notes/release_22_07.rst
@@ -65,6 +65,7 @@ New Features
   * Added Tx QoS queue rate limitation support.
   * Added quanta size configuration support.
   * Added ``DEV_RX_OFFLOAD_TIMESTAMP`` support.
+  * Added Protocol Agnostic Flow Offloading support in AVF FDIR and RSS.
 
 * **Updated Intel ice driver.**

diff --git a/drivers/net/iavf/iavf_fdir.c b/drivers/net/iavf/iavf_fdir.c
index e9a3566c0d..f236260502 100644
--- a/drivers/net/iavf/iavf_fdir.c
+++ b/drivers/net/iavf/iavf_fdir.c
@@ -194,6 +194,7 @@
 	IAVF_INSET_TUN_TCP_DST_PORT)
 
 static struct iavf_pattern_match_item iavf_fdir_pattern[] = {
+	{iavf_pattern_raw,			IAVF_INSET_NONE,		IAVF_INSET_NONE},
 	{iavf_pattern_ethertype,		IAVF_FDIR_INSET_ETH,		IAVF_INSET_NONE},
 	{iavf_pattern_eth_ipv4,			IAVF_FDIR_INSET_ETH_IPV4,	IAVF_INSET_NONE},
 	{iavf_pattern_eth_ipv4_udp,		IAVF_FDIR_INSET_ETH_IPV4_UDP,	IAVF_INSET_NONE},
@@ -720,6 +721,7 @@ iavf_fdir_parse_pattern(__rte_unused struct iavf_adapter *ad,
 	struct virtchnl_proto_hdrs *hdrs =
 		&filter->add_fltr.rule_cfg.proto_hdrs;
 	enum rte_flow_item_type l3 = RTE_FLOW_ITEM_TYPE_END;
+	const struct rte_flow_item_raw *raw_spec, *raw_mask;
 	const struct rte_flow_item_eth *eth_spec, *eth_mask;
 	const struct rte_flow_item_ipv4 *ipv4_spec, *ipv4_last, *ipv4_mask;
 	const struct rte_flow_item_ipv6 *ipv6_spec, *ipv6_mask;
@@ -746,6 +748,7 @@ iavf_fdir_parse_pattern(__rte_unused struct iavf_adapter *ad,
 	enum rte_flow_item_type next_type;
 	uint8_t tun_inner = 0;
 	uint16_t ether_type, flags_version;
+	uint8_t item_num = 0;
 	int layer = 0;
 
 	uint8_t ipv6_addr_mask[16] = {
@@ -763,8 +766,72 @@ iavf_fdir_parse_pattern(__rte_unused struct iavf_adapter *ad,
 				RTE_FLOW_ERROR_TYPE_ITEM, item,
 				"Not support range");
 		}
+		item_num++;
 
 		switch (item_type) {
+		case RTE_FLOW_ITEM_TYPE_RAW: {
+			raw_spec = item->spec;
+			raw_mask = item->mask;
+
+			if (item_num != 1)
+				return -rte_errno;
+
+			if (raw_spec->length != raw_mask->length)
+				return -rte_errno;
+
+			uint16_t pkt_len = 0;
+			uint16_t tmp_val = 0;
+			uint8_t tmp = 0;
+			int i, j;
+
+			pkt_len = raw_spec->length;
+
+			for (i = 0, j = 0; i < pkt_len; i += 2, j++) {
+				tmp = raw_spec->pattern[i];
+				if (tmp >= 'a' && tmp <= 'f')
+					tmp_val = tmp - 'a' + 10;
+				if (tmp >= 'A' && tmp <= 'F')
+					tmp_val = tmp - 'A' + 10;
+				if (tmp >= '0' && tmp <= '9')
+					tmp_val = tmp - '0';
+
+				tmp_val *= 16;
+				tmp = raw_spec->pattern[i + 1];
+				if (tmp >= 'a' && tmp <= 'f')
+					tmp_val += (tmp - 'a' + 10);
+				if (tmp >= 'A' && tmp <= 'F')
+					tmp_val += (tmp - 'A' + 10);
+				if (tmp >= '0' && tmp <= '9')
+					tmp_val += (tmp - '0');
+
+				hdrs->raw.spec[j] = tmp_val;
+
+				tmp = raw_mask->pattern[i];
+				if (tmp >= 'a' && tmp <= 'f')
+					tmp_val = tmp - 'a' + 10;
+				if (tmp >= 'A' && tmp <= 'F')
+					tmp_val = tmp - 'A' + 10;
+				if (tmp >= '0' && tmp <= '9')
+					tmp_val = tmp - '0';
+
+				tmp_val *= 16;
+				tmp = raw_mask->pattern[i + 1];
+				if (tmp >= 'a' && tmp <= 'f')
+					tmp_val += (tmp - 'a' + 10);
+				if (tmp >= 'A' && tmp <= 'F')
+					tmp_val += (tmp - 'A' + 10);
+				if (tmp >= '0' && tmp <= '9')
+					tmp_val += (tmp - '0');
+
+				hdrs->raw.mask[j] = tmp_val;
+			}
+
+			hdrs->raw.pkt_len = pkt_len / 2;
+			hdrs->tunnel_level = 0;
+			hdrs->count = 0;
+			return 0;
+		}
+
 		case RTE_FLOW_ITEM_TYPE_ETH:
 			eth_spec = item->spec;
 			eth_mask = item->mask;

diff --git a/drivers/net/iavf/iavf_generic_flow.c b/drivers/net/iavf/iavf_generic_flow.c
index ddc1fdd22b..e1a611e319 100644
--- a/drivers/net/iavf/iavf_generic_flow.c
+++ b/drivers/net/iavf/iavf_generic_flow.c
@@ -48,6 +48,12 @@ const struct rte_flow_ops iavf_flow_ops = {
 	.query = iavf_flow_query,
 };
 
+/* raw */
+enum rte_flow_item_type iavf_pattern_raw[] = {
+	RTE_FLOW_ITEM_TYPE_RAW,
+	RTE_FLOW_ITEM_TYPE_END,
+};
+
 /* empty */
 enum rte_flow_item_type iavf_pattern_empty[] = {
 	RTE_FLOW_ITEM_TYPE_END,

diff --git a/drivers/net/iavf/iavf_generic_flow.h b/drivers/net/iavf/iavf_generic_flow.h
index f6af176073..52eb1caf29 100644
--- a/drivers/net/iavf/iavf_generic_flow.h
+++ b/drivers/net/iavf/iavf_generic_flow.h
@@ -180,6 +180,9 @@
 #define IAVF_INSET_L2TPV2 \
 	(IAVF_PROT_L2TPV2 | IAVF_L2TPV2_SESSION_ID)
 
+/* raw pattern */
+extern enum rte_flow_item_type iavf_pattern_raw[];
+
 /* empty pattern */
 extern enum rte_flow_item_type iavf_pattern_empty[];

From patchwork Fri May 20 09:16:48 2022
X-Patchwork-Submitter: Junfeng Guo
X-Patchwork-Id: 111532
X-Patchwork-Delegate: qi.z.zhang@intel.com
yImRQeSklEV7YKFZIe+v01iiCNMkBFeOkmRavDcQFvqYYBD/m4Db9JtHY A==; X-IronPort-AV: E=McAfee;i="6400,9594,10352"; a="297857459" X-IronPort-AV: E=Sophos;i="5.91,238,1647327600"; d="scan'208";a="297857459" Received: from fmsmga001.fm.intel.com ([10.253.24.23]) by fmsmga101.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 20 May 2022 02:16:32 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.91,238,1647327600"; d="scan'208";a="715432679" Received: from dpdk-jf-ntb-v2.sh.intel.com ([10.67.119.111]) by fmsmga001.fm.intel.com with ESMTP; 20 May 2022 02:16:30 -0700 From: Junfeng Guo To: qi.z.zhang@intel.com, jingjing.wu@intel.com, beilei.xing@intel.com Cc: dev@dpdk.org, ting.xu@intel.com, junfeng.guo@intel.com Subject: [PATCH v5 4/4] net/iavf: support Protocol Agnostic Flow Offloading VF RSS Date: Fri, 20 May 2022 17:16:48 +0800 Message-Id: <20220520091648.3524540-5-junfeng.guo@intel.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20220520091648.3524540-1-junfeng.guo@intel.com> References: <20220421032851.1355350-5-junfeng.guo@intel.com> <20220520091648.3524540-1-junfeng.guo@intel.com> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org From: Ting Xu Enable Protocol Agnostic Flow Offloading for RSS hash in VF. It supports raw pattern flow rule creation in VF based on Parser Library feature. VF parses the spec and mask input of raw pattern, and passes it to kernel driver to create the flow rule. Current rte_flow raw API is utilized. 
Command example: RSS hash for ipv4-src-dst:

flow create 0 ingress pattern raw \
pattern spec \
00000000000000000000000008004500001400004000401000000000000000000000 \
pattern mask \
0000000000000000000000000000000000000000000000000000ffffffffffffffff \
/ end actions rss queues end / end

Signed-off-by: Ting Xu
---
 drivers/net/iavf/iavf_hash.c | 96 ++++++++++++++++++++++++++++++++++++
 1 file changed, 96 insertions(+)

diff --git a/drivers/net/iavf/iavf_hash.c b/drivers/net/iavf/iavf_hash.c
index 278e75117d..42df7c4e48 100644
--- a/drivers/net/iavf/iavf_hash.c
+++ b/drivers/net/iavf/iavf_hash.c
@@ -37,6 +37,8 @@
 /* L2TPv2 */
 #define IAVF_PHINT_L2TPV2		BIT_ULL(9)
 #define IAVF_PHINT_L2TPV2_LEN		BIT_ULL(10)
+/* Raw */
+#define IAVF_PHINT_RAW			BIT_ULL(11)
 
 #define IAVF_PHINT_GTPU_MSK	(IAVF_PHINT_GTPU | \
 				 IAVF_PHINT_GTPU_EH | \
@@ -58,6 +60,7 @@ struct iavf_hash_match_type {
 struct iavf_rss_meta {
 	struct virtchnl_proto_hdrs proto_hdrs;
 	enum virtchnl_rss_algorithm rss_algorithm;
+	bool raw_ena;
 };
 
 struct iavf_hash_flow_cfg {
@@ -532,6 +535,7 @@ struct virtchnl_proto_hdrs ipv6_l2tpv2_ppp_tmplt = {
  */
 static struct iavf_pattern_match_item iavf_hash_pattern_list[] = {
 	/* IPv4 */
+	{iavf_pattern_raw,			IAVF_INSET_NONE,			NULL},
 	{iavf_pattern_eth_ipv4,			IAVF_RSS_TYPE_OUTER_IPV4,		&outer_ipv4_tmplt},
 	{iavf_pattern_eth_ipv4_udp,		IAVF_RSS_TYPE_OUTER_IPV4_UDP,		&outer_ipv4_udp_tmplt},
 	{iavf_pattern_eth_ipv4_tcp,		IAVF_RSS_TYPE_OUTER_IPV4_TCP,		&outer_ipv4_tcp_tmplt},
@@ -804,6 +808,9 @@ iavf_hash_parse_pattern(const struct rte_flow_item pattern[], uint64_t *phint,
 		}
 
 		switch (item->type) {
+		case RTE_FLOW_ITEM_TYPE_RAW:
+			*phint |= IAVF_PHINT_RAW;
+			break;
 		case RTE_FLOW_ITEM_TYPE_IPV4:
 			if (!(*phint & IAVF_PHINT_GTPU_MSK) &&
 			    !(*phint & IAVF_PHINT_GRE) &&
@@ -873,6 +880,80 @@ iavf_hash_parse_pattern(const struct rte_flow_item pattern[], uint64_t *phint,
 	return 0;
 }
 
+static int
+iavf_hash_parse_raw_pattern(const struct rte_flow_item *item,
+			    struct iavf_rss_meta *meta)
+{
+	const struct rte_flow_item_raw *raw_spec, *raw_mask;
+	uint8_t *pkt_buf, *msk_buf;
+	uint8_t spec_len, pkt_len;
+	uint8_t tmp_val = 0;
+	uint8_t tmp_c = 0;
+	int i, j;
+
+	raw_spec = item->spec;
+	raw_mask = item->mask;
+
+	spec_len = strlen((char *)(uintptr_t)raw_spec->pattern);
+	if (strlen((char *)(uintptr_t)raw_mask->pattern) !=
+	    spec_len)
+		return -rte_errno;
+
+	pkt_len = spec_len / 2;
+
+	pkt_buf = rte_zmalloc(NULL, pkt_len, 0);
+	if (!pkt_buf)
+		return -ENOMEM;
+
+	msk_buf = rte_zmalloc(NULL, pkt_len, 0);
+	if (!msk_buf)
+		return -ENOMEM;
+
+	/* convert string to int array */
+	for (i = 0, j = 0; i < spec_len; i += 2, j++) {
+		tmp_c = raw_spec->pattern[i];
+		if (tmp_c >= 'a' && tmp_c <= 'f')
+			tmp_val = tmp_c - 'a' + 10;
+		if (tmp_c >= 'A' && tmp_c <= 'F')
+			tmp_val = tmp_c - 'A' + 10;
+		if (tmp_c >= '0' && tmp_c <= '9')
+			tmp_val = tmp_c - '0';
+
+		tmp_c = raw_spec->pattern[i + 1];
+		if (tmp_c >= 'a' && tmp_c <= 'f')
+			pkt_buf[j] = tmp_val * 16 + tmp_c - 'a' + 10;
+		if (tmp_c >= 'A' && tmp_c <= 'F')
+			pkt_buf[j] = tmp_val * 16 + tmp_c - 'A' + 10;
+		if (tmp_c >= '0' && tmp_c <= '9')
+			pkt_buf[j] = tmp_val * 16 + tmp_c - '0';
+
+		tmp_c = raw_mask->pattern[i];
+		if (tmp_c >= 'a' && tmp_c <= 'f')
+			tmp_val = tmp_c - 0x57;
+		if (tmp_c >= 'A' && tmp_c <= 'F')
+			tmp_val = tmp_c - 0x37;
+		if (tmp_c >= '0' && tmp_c <= '9')
+			tmp_val = tmp_c - '0';
+
+		tmp_c = raw_mask->pattern[i + 1];
+		if (tmp_c >= 'a' && tmp_c <= 'f')
+			msk_buf[j] = tmp_val * 16 + tmp_c - 'a' + 10;
+		if (tmp_c >= 'A' && tmp_c <= 'F')
+			msk_buf[j] = tmp_val * 16 + tmp_c - 'A' + 10;
+		if (tmp_c >= '0' && tmp_c <= '9')
+			msk_buf[j] = tmp_val * 16 + tmp_c - '0';
+	}
+
+	rte_memcpy(meta->proto_hdrs.raw.spec, pkt_buf, pkt_len);
+	rte_memcpy(meta->proto_hdrs.raw.mask, msk_buf, pkt_len);
+	meta->proto_hdrs.raw.pkt_len = pkt_len;
+
+	rte_free(pkt_buf);
+	rte_free(msk_buf);
+
+	return 0;
+}
+
 #define REFINE_PROTO_FLD(op, fld) \
 	VIRTCHNL_##op##_PROTO_HDR_FIELD(hdr, VIRTCHNL_PROTO_HDR_##fld)
 #define REPALCE_PROTO_FLD(fld_1, fld_2) \
@@ -1387,6 +1468,10 @@ iavf_hash_parse_action(struct iavf_pattern_match_item *match_item,
 						RTE_FLOW_ERROR_TYPE_ACTION, action,
 						"a non-NULL RSS queue is not supported");
 
+			/* If pattern type is raw, no need to refine rss type */
+			if (pattern_hint == IAVF_PHINT_RAW)
+				break;
+
 			/**
 			 * Check simultaneous use of SRC_ONLY and DST_ONLY
 			 * of the same level.
@@ -1453,6 +1538,17 @@ iavf_hash_parse_pattern_action(__rte_unused struct iavf_adapter *ad,
 	if (ret)
 		goto error;
 
+	if (phint == IAVF_PHINT_RAW) {
+		rss_meta_ptr->raw_ena = true;
+		ret = iavf_hash_parse_raw_pattern(pattern, rss_meta_ptr);
+		if (ret) {
+			rte_flow_error_set(error, EINVAL,
+					   RTE_FLOW_ERROR_TYPE_ITEM, NULL,
+					   "Parse raw pattern failed");
+			goto error;
+		}
+	}
+
 	ret = iavf_hash_parse_action(pattern_match_item, actions, phint,
 				     rss_meta_ptr, error);