From patchwork Mon May 23 02:31:34 2022
X-Patchwork-Submitter: Junfeng Guo
X-Patchwork-Id: 111585
X-Patchwork-Delegate: qi.z.zhang@intel.com
From: Junfeng Guo
To: qi.z.zhang@intel.com, jingjing.wu@intel.com,
 beilei.xing@intel.com
Cc: dev@dpdk.org, ting.xu@intel.com, junfeng.guo@intel.com
Subject: [PATCH v6 1/3] common/iavf: support raw packet in protocol header
Date: Mon, 23 May 2022 10:31:34 +0800
Message-Id: <20220523023138.3777313-2-junfeng.guo@intel.com>
In-Reply-To: <20220523023138.3777313-1-junfeng.guo@intel.com>
References: <20220520091648.3524540-2-junfeng.guo@intel.com> <20220523023138.3777313-1-junfeng.guo@intel.com>

This patch extends the existing virtchnl_proto_hdrs structure to allow a VF to pass a pair of buffers as packet data and mask that describe the match pattern of a filter rule. The kernel PF driver is then requested to parse the pair of buffers and figure out the low-level hardware metadata (ptype, profile, field vector, ...) needed to program the expected FDIR or RSS rules. Also update the proto_hdrs template init to align with the virtchnl changes.

Signed-off-by: Qi Zhang
Signed-off-by: Junfeng Guo
---
 drivers/common/iavf/virtchnl.h | 20 +++- drivers/net/iavf/iavf_hash.c | 180 +++++++++++++++++---------- 2 files changed, 108 insertions(+), 92 deletions(-)

diff --git a/drivers/common/iavf/virtchnl.h b/drivers/common/iavf/virtchnl.h index 2d49f95f84..f123daec8e 100644 --- a/drivers/common/iavf/virtchnl.h +++ b/drivers/common/iavf/virtchnl.h @@ -1503,6 +1503,7 @@ enum virtchnl_vfr_states { }; #define VIRTCHNL_MAX_NUM_PROTO_HDRS 32 +#define VIRTCHNL_MAX_SIZE_RAW_PACKET 1024 #define PROTO_HDR_SHIFT 5 #define PROTO_HDR_FIELD_START(proto_hdr_type) \ (proto_hdr_type << PROTO_HDR_SHIFT) @@ -1697,14 +1698,25 @@ VIRTCHNL_CHECK_STRUCT_LEN(72, virtchnl_proto_hdr); struct virtchnl_proto_hdrs { u8 tunnel_level; /** - * specify where protocol header start from. 
must be 0 when sending a raw packet request. * 0 - from the outer layer * 1 - from the first inner layer * 2 - from the second inner layer * .... - **/ - int count; /* the proto layers must < VIRTCHNL_MAX_NUM_PROTO_HDRS */ - struct virtchnl_proto_hdr proto_hdr[VIRTCHNL_MAX_NUM_PROTO_HDRS]; + */ + int count; + /** + * number of proto layers, must < VIRTCHNL_MAX_NUM_PROTO_HDRS + * must be 0 for a raw packet request. + */ + union { + struct virtchnl_proto_hdr proto_hdr[VIRTCHNL_MAX_NUM_PROTO_HDRS]; + struct { + u16 pkt_len; + u8 spec[VIRTCHNL_MAX_SIZE_RAW_PACKET]; + u8 mask[VIRTCHNL_MAX_SIZE_RAW_PACKET]; + } raw; + }; }; VIRTCHNL_CHECK_STRUCT_LEN(2312, virtchnl_proto_hdrs); diff --git a/drivers/net/iavf/iavf_hash.c b/drivers/net/iavf/iavf_hash.c index f35a07653b..278e75117d 100644 --- a/drivers/net/iavf/iavf_hash.c +++ b/drivers/net/iavf/iavf_hash.c @@ -181,252 +181,256 @@ iavf_hash_parse_pattern_action(struct iavf_adapter *ad, /* proto_hdrs template */ struct virtchnl_proto_hdrs outer_ipv4_tmplt = { TUNNEL_LEVEL_OUTER, 4, - {proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan, proto_hdr_ipv4} + {{proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan, proto_hdr_ipv4}} }; struct virtchnl_proto_hdrs outer_ipv4_udp_tmplt = { TUNNEL_LEVEL_OUTER, 5, - {proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan, - proto_hdr_ipv4_with_prot, - proto_hdr_udp} + {{proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan, + proto_hdr_ipv4_with_prot, + proto_hdr_udp}} }; struct virtchnl_proto_hdrs outer_ipv4_tcp_tmplt = { TUNNEL_LEVEL_OUTER, 5, - {proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan, - proto_hdr_ipv4_with_prot, - proto_hdr_tcp} + {{proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan, + proto_hdr_ipv4_with_prot, + proto_hdr_tcp}} }; struct virtchnl_proto_hdrs outer_ipv4_sctp_tmplt = { TUNNEL_LEVEL_OUTER, 5, - {proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan, proto_hdr_ipv4, - proto_hdr_sctp} + {{proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan, proto_hdr_ipv4, + proto_hdr_sctp}} }; struct 
virtchnl_proto_hdrs outer_ipv6_tmplt = { TUNNEL_LEVEL_OUTER, 4, - {proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan, proto_hdr_ipv6} + {{proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan, proto_hdr_ipv6}} }; struct virtchnl_proto_hdrs outer_ipv6_frag_tmplt = { TUNNEL_LEVEL_OUTER, 5, - {proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan, - proto_hdr_ipv6, proto_hdr_ipv6_frag} + {{proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan, + proto_hdr_ipv6, proto_hdr_ipv6_frag}} }; struct virtchnl_proto_hdrs outer_ipv6_udp_tmplt = { TUNNEL_LEVEL_OUTER, 5, - {proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan, - proto_hdr_ipv6_with_prot, - proto_hdr_udp} + {{proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan, + proto_hdr_ipv6_with_prot, + proto_hdr_udp}} }; struct virtchnl_proto_hdrs outer_ipv6_tcp_tmplt = { TUNNEL_LEVEL_OUTER, 5, - {proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan, - proto_hdr_ipv6_with_prot, - proto_hdr_tcp} + {{proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan, + proto_hdr_ipv6_with_prot, + proto_hdr_tcp}} }; struct virtchnl_proto_hdrs outer_ipv6_sctp_tmplt = { TUNNEL_LEVEL_OUTER, 5, - {proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan, proto_hdr_ipv6, - proto_hdr_sctp} + {{proto_hdr_eth, proto_hdr_svlan, proto_hdr_cvlan, proto_hdr_ipv6, + proto_hdr_sctp}} }; struct virtchnl_proto_hdrs inner_ipv4_tmplt = { - TUNNEL_LEVEL_INNER, 1, {proto_hdr_ipv4} + TUNNEL_LEVEL_INNER, 1, {{proto_hdr_ipv4}} }; struct virtchnl_proto_hdrs inner_ipv4_udp_tmplt = { - TUNNEL_LEVEL_INNER, 2, {proto_hdr_ipv4_with_prot, proto_hdr_udp} + TUNNEL_LEVEL_INNER, 2, {{proto_hdr_ipv4_with_prot, proto_hdr_udp}} }; struct virtchnl_proto_hdrs inner_ipv4_tcp_tmplt = { - TUNNEL_LEVEL_INNER, 2, {proto_hdr_ipv4_with_prot, proto_hdr_tcp} + TUNNEL_LEVEL_INNER, 2, {{proto_hdr_ipv4_with_prot, proto_hdr_tcp}} }; struct virtchnl_proto_hdrs second_inner_ipv4_tmplt = { - 2, 1, {proto_hdr_ipv4} + 2, 1, {{proto_hdr_ipv4}} }; struct virtchnl_proto_hdrs second_inner_ipv4_udp_tmplt = { - 2, 2, {proto_hdr_ipv4_with_prot, 
proto_hdr_udp} + 2, 2, {{proto_hdr_ipv4_with_prot, proto_hdr_udp}} }; struct virtchnl_proto_hdrs second_inner_ipv4_tcp_tmplt = { - 2, 2, {proto_hdr_ipv4_with_prot, proto_hdr_tcp} + 2, 2, {{proto_hdr_ipv4_with_prot, proto_hdr_tcp}} }; struct virtchnl_proto_hdrs second_inner_ipv6_tmplt = { - 2, 1, {proto_hdr_ipv6} + 2, 1, {{proto_hdr_ipv6}} }; struct virtchnl_proto_hdrs second_inner_ipv6_udp_tmplt = { - 2, 2, {proto_hdr_ipv6_with_prot, proto_hdr_udp} + 2, 2, {{proto_hdr_ipv6_with_prot, proto_hdr_udp}} }; struct virtchnl_proto_hdrs second_inner_ipv6_tcp_tmplt = { - 2, 2, {proto_hdr_ipv6_with_prot, proto_hdr_tcp} + 2, 2, {{proto_hdr_ipv6_with_prot, proto_hdr_tcp}} }; struct virtchnl_proto_hdrs inner_ipv4_sctp_tmplt = { - TUNNEL_LEVEL_INNER, 2, {proto_hdr_ipv4, proto_hdr_sctp} + TUNNEL_LEVEL_INNER, 2, {{proto_hdr_ipv4, proto_hdr_sctp}} }; struct virtchnl_proto_hdrs inner_ipv6_tmplt = { - TUNNEL_LEVEL_INNER, 1, {proto_hdr_ipv6} + TUNNEL_LEVEL_INNER, 1, {{proto_hdr_ipv6}} }; struct virtchnl_proto_hdrs inner_ipv6_udp_tmplt = { - TUNNEL_LEVEL_INNER, 2, {proto_hdr_ipv6_with_prot, proto_hdr_udp} + TUNNEL_LEVEL_INNER, 2, {{proto_hdr_ipv6_with_prot, proto_hdr_udp}} }; struct virtchnl_proto_hdrs inner_ipv6_tcp_tmplt = { - TUNNEL_LEVEL_INNER, 2, {proto_hdr_ipv6_with_prot, proto_hdr_tcp} + TUNNEL_LEVEL_INNER, 2, {{proto_hdr_ipv6_with_prot, proto_hdr_tcp}} }; struct virtchnl_proto_hdrs inner_ipv6_sctp_tmplt = { - TUNNEL_LEVEL_INNER, 2, {proto_hdr_ipv6, proto_hdr_sctp} + TUNNEL_LEVEL_INNER, 2, {{proto_hdr_ipv6, proto_hdr_sctp}} }; struct virtchnl_proto_hdrs ipv4_esp_tmplt = { - TUNNEL_LEVEL_OUTER, 2, {proto_hdr_ipv4, proto_hdr_esp} + TUNNEL_LEVEL_OUTER, 2, {{proto_hdr_ipv4, proto_hdr_esp}} }; struct virtchnl_proto_hdrs ipv4_udp_esp_tmplt = { TUNNEL_LEVEL_OUTER, 3, - {proto_hdr_ipv4, proto_hdr_udp, proto_hdr_esp} + {{proto_hdr_ipv4, proto_hdr_udp, proto_hdr_esp}} }; struct virtchnl_proto_hdrs ipv4_ah_tmplt = { - TUNNEL_LEVEL_OUTER, 2, {proto_hdr_ipv4, proto_hdr_ah} + 
TUNNEL_LEVEL_OUTER, 2, {{proto_hdr_ipv4, proto_hdr_ah}} }; struct virtchnl_proto_hdrs ipv6_esp_tmplt = { - TUNNEL_LEVEL_OUTER, 2, {proto_hdr_ipv6, proto_hdr_esp} + TUNNEL_LEVEL_OUTER, 2, {{proto_hdr_ipv6, proto_hdr_esp}} }; struct virtchnl_proto_hdrs ipv6_udp_esp_tmplt = { TUNNEL_LEVEL_OUTER, 3, - {proto_hdr_ipv6, proto_hdr_udp, proto_hdr_esp} + {{proto_hdr_ipv6, proto_hdr_udp, proto_hdr_esp}} }; struct virtchnl_proto_hdrs ipv6_ah_tmplt = { - TUNNEL_LEVEL_OUTER, 2, {proto_hdr_ipv6, proto_hdr_ah} + TUNNEL_LEVEL_OUTER, 2, {{proto_hdr_ipv6, proto_hdr_ah}} }; struct virtchnl_proto_hdrs ipv4_l2tpv3_tmplt = { - TUNNEL_LEVEL_OUTER, 2, {proto_hdr_ipv4, proto_hdr_l2tpv3} + TUNNEL_LEVEL_OUTER, 2, {{proto_hdr_ipv4, proto_hdr_l2tpv3}} }; struct virtchnl_proto_hdrs ipv6_l2tpv3_tmplt = { - TUNNEL_LEVEL_OUTER, 2, {proto_hdr_ipv6, proto_hdr_l2tpv3} + TUNNEL_LEVEL_OUTER, 2, {{proto_hdr_ipv6, proto_hdr_l2tpv3}} }; struct virtchnl_proto_hdrs ipv4_pfcp_tmplt = { - TUNNEL_LEVEL_OUTER, 2, {proto_hdr_ipv4, proto_hdr_pfcp} + TUNNEL_LEVEL_OUTER, 2, {{proto_hdr_ipv4, proto_hdr_pfcp}} }; struct virtchnl_proto_hdrs ipv6_pfcp_tmplt = { - TUNNEL_LEVEL_OUTER, 2, {proto_hdr_ipv6, proto_hdr_pfcp} + TUNNEL_LEVEL_OUTER, 2, {{proto_hdr_ipv6, proto_hdr_pfcp}} }; struct virtchnl_proto_hdrs ipv4_udp_gtpc_tmplt = { - TUNNEL_LEVEL_OUTER, 3, {proto_hdr_ipv4, proto_hdr_udp, proto_hdr_gtpc} + TUNNEL_LEVEL_OUTER, 3, + {{proto_hdr_ipv4, proto_hdr_udp, proto_hdr_gtpc}} }; struct virtchnl_proto_hdrs ipv6_udp_gtpc_tmplt = { - TUNNEL_LEVEL_OUTER, 3, {proto_hdr_ipv6, proto_hdr_udp, proto_hdr_gtpc} + TUNNEL_LEVEL_OUTER, 3, + {{proto_hdr_ipv6, proto_hdr_udp, proto_hdr_gtpc}} }; struct virtchnl_proto_hdrs eth_ecpri_tmplt = { - TUNNEL_LEVEL_OUTER, 2, {proto_hdr_eth, proto_hdr_ecpri} + TUNNEL_LEVEL_OUTER, 2, {{proto_hdr_eth, proto_hdr_ecpri}} }; struct virtchnl_proto_hdrs ipv4_ecpri_tmplt = { - TUNNEL_LEVEL_OUTER, 3, {proto_hdr_ipv4, proto_hdr_udp, proto_hdr_ecpri} + TUNNEL_LEVEL_OUTER, 3, + {{proto_hdr_ipv4, 
proto_hdr_udp, proto_hdr_ecpri}} }; struct virtchnl_proto_hdrs udp_l2tpv2_ppp_ipv4_tmplt = { TUNNEL_LEVEL_INNER, 3, - {proto_hdr_l2tpv2, - proto_hdr_ppp, - proto_hdr_ipv4} + {{proto_hdr_l2tpv2, + proto_hdr_ppp, + proto_hdr_ipv4}} }; struct virtchnl_proto_hdrs udp_l2tpv2_ppp_ipv6_tmplt = { TUNNEL_LEVEL_INNER, 3, - {proto_hdr_l2tpv2, - proto_hdr_ppp, - proto_hdr_ipv6} + {{proto_hdr_l2tpv2, + proto_hdr_ppp, + proto_hdr_ipv6}} }; struct virtchnl_proto_hdrs udp_l2tpv2_ppp_ipv4_udp_tmplt = { TUNNEL_LEVEL_INNER, 4, - {proto_hdr_l2tpv2, - proto_hdr_ppp, - proto_hdr_ipv4_with_prot, - proto_hdr_udp} + {{proto_hdr_l2tpv2, + proto_hdr_ppp, + proto_hdr_ipv4_with_prot, + proto_hdr_udp}} }; struct virtchnl_proto_hdrs udp_l2tpv2_ppp_ipv4_tcp_tmplt = { TUNNEL_LEVEL_INNER, 4, - {proto_hdr_l2tpv2, - proto_hdr_ppp, - proto_hdr_ipv4_with_prot, - proto_hdr_tcp} + {{proto_hdr_l2tpv2, + proto_hdr_ppp, + proto_hdr_ipv4_with_prot, + proto_hdr_tcp}} }; struct virtchnl_proto_hdrs udp_l2tpv2_ppp_ipv6_udp_tmplt = { TUNNEL_LEVEL_INNER, 4, - {proto_hdr_l2tpv2, - proto_hdr_ppp, - proto_hdr_ipv6_with_prot, - proto_hdr_udp} + {{proto_hdr_l2tpv2, + proto_hdr_ppp, + proto_hdr_ipv6_with_prot, + proto_hdr_udp}} }; struct virtchnl_proto_hdrs udp_l2tpv2_ppp_ipv6_tcp_tmplt = { TUNNEL_LEVEL_INNER, 4, - {proto_hdr_l2tpv2, - proto_hdr_ppp, - proto_hdr_ipv6_with_prot, - proto_hdr_tcp} + {{proto_hdr_l2tpv2, + proto_hdr_ppp, + proto_hdr_ipv6_with_prot, + proto_hdr_tcp}} + }; struct virtchnl_proto_hdrs ipv4_l2tpv2_tmplt = { TUNNEL_LEVEL_OUTER, 4, - {proto_hdr_eth, - proto_hdr_ipv4, - proto_hdr_udp, - proto_hdr_l2tpv2} + {{proto_hdr_eth, + proto_hdr_ipv4, + proto_hdr_udp, + proto_hdr_l2tpv2}} }; struct virtchnl_proto_hdrs ipv6_l2tpv2_tmplt = { TUNNEL_LEVEL_OUTER, 4, - {proto_hdr_eth, - proto_hdr_ipv6, - proto_hdr_udp, - proto_hdr_l2tpv2} + {{proto_hdr_eth, + proto_hdr_ipv6, + proto_hdr_udp, + proto_hdr_l2tpv2}} }; struct virtchnl_proto_hdrs ipv4_l2tpv2_ppp_tmplt = { TUNNEL_LEVEL_OUTER, 5, - {proto_hdr_eth, - 
proto_hdr_ipv4, - proto_hdr_udp, - proto_hdr_l2tpv2, - proto_hdr_ppp} + {{proto_hdr_eth, + proto_hdr_ipv4, + proto_hdr_udp, + proto_hdr_l2tpv2, + proto_hdr_ppp}} }; struct virtchnl_proto_hdrs ipv6_l2tpv2_ppp_tmplt = { TUNNEL_LEVEL_OUTER, 5, - {proto_hdr_eth, - proto_hdr_ipv6, - proto_hdr_udp, - proto_hdr_l2tpv2, - proto_hdr_ppp} + {{proto_hdr_eth, + proto_hdr_ipv6, + proto_hdr_udp, + proto_hdr_l2tpv2, + proto_hdr_ppp}} }; /* rss type super set */

From patchwork Mon May 23 02:31:36 2022
X-Patchwork-Submitter: Junfeng Guo
X-Patchwork-Id: 111587
X-Patchwork-Delegate: qi.z.zhang@intel.com
From: Junfeng Guo
To: qi.z.zhang@intel.com, jingjing.wu@intel.com, beilei.xing@intel.com
Cc: dev@dpdk.org, ting.xu@intel.com, junfeng.guo@intel.com
Subject: [PATCH v6 2/3] net/iavf: enable Protocol Agnostic Flow Offloading FDIR
Date: Mon, 23 May 2022 10:31:36 +0800
Message-Id: <20220523023138.3777313-4-junfeng.guo@intel.com>
In-Reply-To: <20220523023138.3777313-1-junfeng.guo@intel.com>
References: <20220520091648.3524540-2-junfeng.guo@intel.com> <20220523023138.3777313-1-junfeng.guo@intel.com>

This patch enables Protocol Agnostic Flow (raw flow) Offloading for Flow Director (FDIR) in AVF, based on the Parser Library feature and the existing rte_flow `raw` API. The input spec and mask of the raw pattern are first parsed via the Parser Library, and then passed to the kernel driver to create the flow rule. Similar to PF FDIR, each raw flow requires: 1. A byte string of raw target packet bits. 2. A byte string containing the mask of the target packet.
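The spec and mask above are supplied as ASCII hex strings that the driver decodes into byte arrays, two characters per byte. A minimal standalone sketch of that nibble-wise conversion (the helper name `hex2bin` is hypothetical; the loop mirrors the decoding logic this patch adds to iavf_fdir_parse_pattern):

```c
#include <stdint.h>
#include <string.h>

/* Convert an ASCII hex string (e.g. "0800ffff") into a byte array,
 * two hex characters per output byte. Returns the number of bytes
 * written, or -1 on odd length, overflow, or a non-hex character. */
static int hex2bin(const char *hex, uint8_t *out, size_t out_sz)
{
	size_t len = strlen(hex);
	size_t i, j;

	if (len % 2 || len / 2 > out_sz)
		return -1;

	for (i = 0, j = 0; i < len; i += 2, j++) {
		uint8_t val = 0;
		int k;

		for (k = 0; k < 2; k++) {
			char c = hex[i + k];

			val <<= 4;
			if (c >= '0' && c <= '9')
				val |= c - '0';
			else if (c >= 'a' && c <= 'f')
				val |= c - 'a' + 10;
			else if (c >= 'A' && c <= 'F')
				val |= c - 'A' + 10;
			else
				return -1;
		}
		out[j] = val;
	}
	return (int)j;
}
```

For instance, decoding "0800" yields the two bytes 0x08 and 0x00, so a spec string of N hex characters produces a packet buffer of N/2 bytes (the pkt_len the driver reports to the PF).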
Here is an example, FDIR matching an IPv4 dst addr of 1.2.3.4 and redirecting to queue 3:

flow create 0 ingress pattern raw \
pattern spec \
00000000000000000000000008004500001400004000401000000000000001020304 \
pattern mask \
000000000000000000000000000000000000000000000000000000000000ffffffff \
/ end actions queue index 3 / mark id 3 / end

Note that the mask of some key bits (e.g., 0x0800 to indicate the ipv4 proto) is optional in our cases. To avoid redundancy, we just omit the mask of 0x0800 (with 0xFFFF) in the mask byte string example. The prefix '0x' for the spec and mask byte (hex) strings is also omitted here.

Signed-off-by: Junfeng Guo
---
 doc/guides/rel_notes/release_22_07.rst | 1 + drivers/net/iavf/iavf_fdir.c | 67 ++++++++++++++++++++++++++ drivers/net/iavf/iavf_generic_flow.c | 6 +++ drivers/net/iavf/iavf_generic_flow.h | 3 ++ 4 files changed, 77 insertions(+)

diff --git a/doc/guides/rel_notes/release_22_07.rst b/doc/guides/rel_notes/release_22_07.rst index a0eb6ab61b..829fa6047e 100644 --- a/doc/guides/rel_notes/release_22_07.rst +++ b/doc/guides/rel_notes/release_22_07.rst @@ -65,6 +65,7 @@ New Features * Added Tx QoS queue rate limitation support. * Added quanta size configuration support. * Added ``DEV_RX_OFFLOAD_TIMESTAMP`` support. + * Added Protocol Agnostic Flow Offloading support in AVF FDIR and RSS. 
* **Updated Intel ice driver.** diff --git a/drivers/net/iavf/iavf_fdir.c b/drivers/net/iavf/iavf_fdir.c index e9a3566c0d..f236260502 100644 --- a/drivers/net/iavf/iavf_fdir.c +++ b/drivers/net/iavf/iavf_fdir.c @@ -194,6 +194,7 @@ IAVF_INSET_TUN_TCP_DST_PORT) static struct iavf_pattern_match_item iavf_fdir_pattern[] = { + {iavf_pattern_raw, IAVF_INSET_NONE, IAVF_INSET_NONE}, {iavf_pattern_ethertype, IAVF_FDIR_INSET_ETH, IAVF_INSET_NONE}, {iavf_pattern_eth_ipv4, IAVF_FDIR_INSET_ETH_IPV4, IAVF_INSET_NONE}, {iavf_pattern_eth_ipv4_udp, IAVF_FDIR_INSET_ETH_IPV4_UDP, IAVF_INSET_NONE}, @@ -720,6 +721,7 @@ iavf_fdir_parse_pattern(__rte_unused struct iavf_adapter *ad, struct virtchnl_proto_hdrs *hdrs = &filter->add_fltr.rule_cfg.proto_hdrs; enum rte_flow_item_type l3 = RTE_FLOW_ITEM_TYPE_END; + const struct rte_flow_item_raw *raw_spec, *raw_mask; const struct rte_flow_item_eth *eth_spec, *eth_mask; const struct rte_flow_item_ipv4 *ipv4_spec, *ipv4_last, *ipv4_mask; const struct rte_flow_item_ipv6 *ipv6_spec, *ipv6_mask; @@ -746,6 +748,7 @@ iavf_fdir_parse_pattern(__rte_unused struct iavf_adapter *ad, enum rte_flow_item_type next_type; uint8_t tun_inner = 0; uint16_t ether_type, flags_version; + uint8_t item_num = 0; int layer = 0; uint8_t ipv6_addr_mask[16] = { @@ -763,8 +766,72 @@ iavf_fdir_parse_pattern(__rte_unused struct iavf_adapter *ad, RTE_FLOW_ERROR_TYPE_ITEM, item, "Not support range"); } + item_num++; switch (item_type) { + case RTE_FLOW_ITEM_TYPE_RAW: { + raw_spec = item->spec; + raw_mask = item->mask; + + if (item_num != 1) + return -rte_errno; + + if (raw_spec->length != raw_mask->length) + return -rte_errno; + + uint16_t pkt_len = 0; + uint16_t tmp_val = 0; + uint8_t tmp = 0; + int i, j; + + pkt_len = raw_spec->length; + + for (i = 0, j = 0; i < pkt_len; i += 2, j++) { + tmp = raw_spec->pattern[i]; + if (tmp >= 'a' && tmp <= 'f') + tmp_val = tmp - 'a' + 10; + if (tmp >= 'A' && tmp <= 'F') + tmp_val = tmp - 'A' + 10; + if (tmp >= '0' && tmp <= '9') + tmp_val = 
tmp - '0'; + + tmp_val *= 16; + tmp = raw_spec->pattern[i + 1]; + if (tmp >= 'a' && tmp <= 'f') + tmp_val += (tmp - 'a' + 10); + if (tmp >= 'A' && tmp <= 'F') + tmp_val += (tmp - 'A' + 10); + if (tmp >= '0' && tmp <= '9') + tmp_val += (tmp - '0'); + + hdrs->raw.spec[j] = tmp_val; + + tmp = raw_mask->pattern[i]; + if (tmp >= 'a' && tmp <= 'f') + tmp_val = tmp - 'a' + 10; + if (tmp >= 'A' && tmp <= 'F') + tmp_val = tmp - 'A' + 10; + if (tmp >= '0' && tmp <= '9') + tmp_val = tmp - '0'; + + tmp_val *= 16; + tmp = raw_mask->pattern[i + 1]; + if (tmp >= 'a' && tmp <= 'f') + tmp_val += (tmp - 'a' + 10); + if (tmp >= 'A' && tmp <= 'F') + tmp_val += (tmp - 'A' + 10); + if (tmp >= '0' && tmp <= '9') + tmp_val += (tmp - '0'); + + hdrs->raw.mask[j] = tmp_val; + } + + hdrs->raw.pkt_len = pkt_len / 2; + hdrs->tunnel_level = 0; + hdrs->count = 0; + return 0; + } + case RTE_FLOW_ITEM_TYPE_ETH: eth_spec = item->spec; eth_mask = item->mask; diff --git a/drivers/net/iavf/iavf_generic_flow.c b/drivers/net/iavf/iavf_generic_flow.c index ddc1fdd22b..e1a611e319 100644 --- a/drivers/net/iavf/iavf_generic_flow.c +++ b/drivers/net/iavf/iavf_generic_flow.c @@ -48,6 +48,12 @@ const struct rte_flow_ops iavf_flow_ops = { .query = iavf_flow_query, }; +/* raw */ +enum rte_flow_item_type iavf_pattern_raw[] = { + RTE_FLOW_ITEM_TYPE_RAW, + RTE_FLOW_ITEM_TYPE_END, +}; + /* empty */ enum rte_flow_item_type iavf_pattern_empty[] = { RTE_FLOW_ITEM_TYPE_END, diff --git a/drivers/net/iavf/iavf_generic_flow.h b/drivers/net/iavf/iavf_generic_flow.h index f6af176073..52eb1caf29 100644 --- a/drivers/net/iavf/iavf_generic_flow.h +++ b/drivers/net/iavf/iavf_generic_flow.h @@ -180,6 +180,9 @@ #define IAVF_INSET_L2TPV2 \ (IAVF_PROT_L2TPV2 | IAVF_L2TPV2_SESSION_ID) +/* raw pattern */ +extern enum rte_flow_item_type iavf_pattern_raw[]; + /* empty pattern */ extern enum rte_flow_item_type iavf_pattern_empty[]; From patchwork Mon May 23 02:31:38 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Junfeng Guo
X-Patchwork-Id: 111589
X-Patchwork-Delegate: qi.z.zhang@intel.com
From: Junfeng Guo
To: qi.z.zhang@intel.com, jingjing.wu@intel.com, beilei.xing@intel.com
Cc: dev@dpdk.org, ting.xu@intel.com, junfeng.guo@intel.com
Subject: [PATCH v6 3/3] 
net/iavf: support Protocol Agnostic Flow Offloading VF RSS
Date: Mon, 23 May 2022 10:31:38 +0800
Message-Id: <20220523023138.3777313-6-junfeng.guo@intel.com>
In-Reply-To: <20220523023138.3777313-1-junfeng.guo@intel.com>
References: <20220520091648.3524540-2-junfeng.guo@intel.com> <20220523023138.3777313-1-junfeng.guo@intel.com>

From: Ting Xu

Enable Protocol Agnostic Flow Offloading for the RSS hash in VF. This supports raw pattern flow rule creation in VF based on the Parser Library feature. The VF parses the spec and mask input of the raw pattern and passes them to the kernel driver to create the flow rule. The existing rte_flow raw API is utilized.

Command example, RSS hash for ipv4-src-dst:

flow create 0 ingress pattern raw pattern spec 00000000000000000000000008004500001400004000401000000000000000000000 pattern mask 0000000000000000000000000000000000000000000000000000ffffffffffffffff / end actions rss queues end / end

Signed-off-by: Ting Xu
---
 drivers/net/iavf/iavf_hash.c | 96 ++++++++++++++++++++++++++++++++++++ 1 file changed, 96 insertions(+)

diff --git a/drivers/net/iavf/iavf_hash.c b/drivers/net/iavf/iavf_hash.c index 278e75117d..42df7c4e48 100644 --- a/drivers/net/iavf/iavf_hash.c +++ b/drivers/net/iavf/iavf_hash.c @@ -37,6 +37,8 @@ /* L2TPv2 */ #define IAVF_PHINT_L2TPV2 BIT_ULL(9) #define IAVF_PHINT_L2TPV2_LEN BIT_ULL(10) +/* Raw */ +#define IAVF_PHINT_RAW BIT_ULL(11) #define IAVF_PHINT_GTPU_MSK (IAVF_PHINT_GTPU | \ IAVF_PHINT_GTPU_EH | \ @@ -58,6 +60,7 @@ struct iavf_hash_match_type { struct iavf_rss_meta { struct virtchnl_proto_hdrs proto_hdrs; enum virtchnl_rss_algorithm rss_algorithm; + bool raw_ena; }; struct iavf_hash_flow_cfg { @@ -532,6 +535,7 @@ struct virtchnl_proto_hdrs ipv6_l2tpv2_ppp_tmplt = { */ static struct 
iavf_pattern_match_item iavf_hash_pattern_list[] = { /* IPv4 */ + {iavf_pattern_raw, IAVF_INSET_NONE, NULL}, {iavf_pattern_eth_ipv4, IAVF_RSS_TYPE_OUTER_IPV4, &outer_ipv4_tmplt}, {iavf_pattern_eth_ipv4_udp, IAVF_RSS_TYPE_OUTER_IPV4_UDP, &outer_ipv4_udp_tmplt}, {iavf_pattern_eth_ipv4_tcp, IAVF_RSS_TYPE_OUTER_IPV4_TCP, &outer_ipv4_tcp_tmplt}, @@ -804,6 +808,9 @@ iavf_hash_parse_pattern(const struct rte_flow_item pattern[], uint64_t *phint, } switch (item->type) { + case RTE_FLOW_ITEM_TYPE_RAW: + *phint |= IAVF_PHINT_RAW; + break; case RTE_FLOW_ITEM_TYPE_IPV4: if (!(*phint & IAVF_PHINT_GTPU_MSK) && !(*phint & IAVF_PHINT_GRE) && @@ -873,6 +880,80 @@ iavf_hash_parse_pattern(const struct rte_flow_item pattern[], uint64_t *phint, return 0; } +static int +iavf_hash_parse_raw_pattern(const struct rte_flow_item *item, + struct iavf_rss_meta *meta) +{ + const struct rte_flow_item_raw *raw_spec, *raw_mask; + uint8_t *pkt_buf, *msk_buf; + uint8_t spec_len, pkt_len; + uint8_t tmp_val = 0; + uint8_t tmp_c = 0; + int i, j; + + raw_spec = item->spec; + raw_mask = item->mask; + + spec_len = strlen((char *)(uintptr_t)raw_spec->pattern); + if (strlen((char *)(uintptr_t)raw_mask->pattern) != + spec_len) + return -rte_errno; + + pkt_len = spec_len / 2; + + pkt_buf = rte_zmalloc(NULL, pkt_len, 0); + if (!pkt_buf) + return -ENOMEM; + + msk_buf = rte_zmalloc(NULL, pkt_len, 0); + if (!msk_buf) + return -ENOMEM; + + /* convert string to int array */ + for (i = 0, j = 0; i < spec_len; i += 2, j++) { + tmp_c = raw_spec->pattern[i]; + if (tmp_c >= 'a' && tmp_c <= 'f') + tmp_val = tmp_c - 'a' + 10; + if (tmp_c >= 'A' && tmp_c <= 'F') + tmp_val = tmp_c - 'A' + 10; + if (tmp_c >= '0' && tmp_c <= '9') + tmp_val = tmp_c - '0'; + + tmp_c = raw_spec->pattern[i + 1]; + if (tmp_c >= 'a' && tmp_c <= 'f') + pkt_buf[j] = tmp_val * 16 + tmp_c - 'a' + 10; + if (tmp_c >= 'A' && tmp_c <= 'F') + pkt_buf[j] = tmp_val * 16 + tmp_c - 'A' + 10; + if (tmp_c >= '0' && tmp_c <= '9') + pkt_buf[j] = tmp_val * 16 + tmp_c 
- '0'; + + tmp_c = raw_mask->pattern[i]; + if (tmp_c >= 'a' && tmp_c <= 'f') + tmp_val = tmp_c - 0x57; + if (tmp_c >= 'A' && tmp_c <= 'F') + tmp_val = tmp_c - 0x37; + if (tmp_c >= '0' && tmp_c <= '9') + tmp_val = tmp_c - '0'; + + tmp_c = raw_mask->pattern[i + 1]; + if (tmp_c >= 'a' && tmp_c <= 'f') + msk_buf[j] = tmp_val * 16 + tmp_c - 'a' + 10; + if (tmp_c >= 'A' && tmp_c <= 'F') + msk_buf[j] = tmp_val * 16 + tmp_c - 'A' + 10; + if (tmp_c >= '0' && tmp_c <= '9') + msk_buf[j] = tmp_val * 16 + tmp_c - '0'; + } + + rte_memcpy(meta->proto_hdrs.raw.spec, pkt_buf, pkt_len); + rte_memcpy(meta->proto_hdrs.raw.mask, msk_buf, pkt_len); + meta->proto_hdrs.raw.pkt_len = pkt_len; + + rte_free(pkt_buf); + rte_free(msk_buf); + + return 0; +} + #define REFINE_PROTO_FLD(op, fld) \ VIRTCHNL_##op##_PROTO_HDR_FIELD(hdr, VIRTCHNL_PROTO_HDR_##fld) #define REPALCE_PROTO_FLD(fld_1, fld_2) \ @@ -1387,6 +1468,10 @@ iavf_hash_parse_action(struct iavf_pattern_match_item *match_item, RTE_FLOW_ERROR_TYPE_ACTION, action, "a non-NULL RSS queue is not supported"); + /* If pattern type is raw, no need to refine rss type */ + if (pattern_hint == IAVF_PHINT_RAW) + break; + /** * Check simultaneous use of SRC_ONLY and DST_ONLY * of the same level. @@ -1453,6 +1538,17 @@ iavf_hash_parse_pattern_action(__rte_unused struct iavf_adapter *ad, if (ret) goto error; + if (phint == IAVF_PHINT_RAW) { + rss_meta_ptr->raw_ena = true; + ret = iavf_hash_parse_raw_pattern(pattern, rss_meta_ptr); + if (ret) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, NULL, + "Parse raw pattern failed"); + goto error; + } + } + ret = iavf_hash_parse_action(pattern_match_item, actions, phint, rss_meta_ptr, error);
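Across all three patches, a raw request boils down to filling the new `raw` member of virtchnl_proto_hdrs with the decoded spec/mask bytes while forcing tunnel_level and count to 0. A reduced stand-in sketch of that step (the struct here is a cut-down mock with only the fields this series adds, and `fill_raw_request` is a hypothetical helper, not a function in the patches):

```c
#include <stdint.h>
#include <string.h>

#define VIRTCHNL_MAX_SIZE_RAW_PACKET 1024

/* Cut-down mock of the extended virtchnl_proto_hdrs: only the members
 * relevant to a raw packet request are kept. */
struct proto_hdrs_raw {
	uint8_t tunnel_level;
	int count;
	struct {
		uint16_t pkt_len;
		uint8_t spec[VIRTCHNL_MAX_SIZE_RAW_PACKET];
		uint8_t mask[VIRTCHNL_MAX_SIZE_RAW_PACKET];
	} raw;
};

/* Build a raw match request from already-decoded byte buffers.
 * Per the virtchnl comment, tunnel_level and count must both be 0
 * when sending a raw packet request. */
static int fill_raw_request(struct proto_hdrs_raw *hdrs,
			    const uint8_t *spec, const uint8_t *mask,
			    uint16_t len)
{
	if (len > VIRTCHNL_MAX_SIZE_RAW_PACKET)
		return -1;
	memset(hdrs, 0, sizeof(*hdrs));
	hdrs->tunnel_level = 0; /* must be 0 for a raw request */
	hdrs->count = 0;        /* must be 0 for a raw request */
	memcpy(hdrs->raw.spec, spec, len);
	memcpy(hdrs->raw.mask, mask, len);
	hdrs->raw.pkt_len = len;
	return 0;
}
```

This mirrors what iavf_fdir_parse_pattern and iavf_hash_parse_raw_pattern do after hex decoding: copy spec/mask into hdrs->raw, set pkt_len to half the hex-string length, and zero tunnel_level/count so the PF treats the buffers as an opaque packet/mask pair.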