From patchwork Tue Sep 28 10:18:19 2021
X-Patchwork-Submitter: Junfeng Guo
X-Patchwork-Id: 99840
X-Patchwork-Delegate: qi.z.zhang@intel.com
From: Junfeng Guo <junfeng.guo@intel.com>
To: qi.z.zhang@intel.com, jingjing.wu@intel.com, beilei.xing@intel.com
Cc: dev@dpdk.org, ferruh.yigit@intel.com, junfeng.guo@intel.com
Date: Tue, 28 Sep 2021 10:18:19 +0000
Message-Id: <20210928101821.147053-2-junfeng.guo@intel.com>
In-Reply-To: <20210928101821.147053-1-junfeng.guo@intel.com>
References: <20210924162223.1543519-2-junfeng.guo@intel.com>
 <20210928101821.147053-1-junfeng.guo@intel.com>
Subject: [dpdk-dev] [PATCH v2 1/3] app/testpmd: update Max RAW pattern size
 to 512

Update the maximum size for the pattern in struct rte_flow_item_raw from
40 to 512 bytes to enable protocol-agnostic flow offloading. The raw
pattern is passed down as a hex string with two characters per packet
byte (see patch 2/3 of this series), so even a rule matching a bare
Ethernet + IPv4 header needs 68 characters and no longer fits in the old
40-byte buffer.
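
To make the sizing concrete, here is a minimal editorial sketch in C (not
part of the patch; the variable names are invented for illustration):

#include <stdio.h>

int main(void)
{
	/* Two hex characters encode one packet byte, so the pattern
	 * buffer must hold 2 * packet_len characters. */
	unsigned int eth_ipv4_bytes = 14 + 20;       /* Ethernet + IPv4 headers */
	unsigned int hex_chars = 2 * eth_ipv4_bytes; /* 68 > old limit of 40 */

	printf("hex characters needed: %u (old max 40, new max 512)\n",
	       hex_chars);
	return 0;
}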
Signed-off-by: Junfeng Guo <junfeng.guo@intel.com>
---
 app/test-pmd/cmdline_flow.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index 6cd99bf37f..d108fde048 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -459,7 +459,7 @@ enum index {
 };
 
 /** Maximum size for pattern in struct rte_flow_item_raw. */
-#define ITEM_RAW_PATTERN_SIZE 40
+#define ITEM_RAW_PATTERN_SIZE 512
 
 /** Maximum size for GENEVE option data pattern in bytes. */
 #define ITEM_GENEVE_OPT_DATA_SIZE 124

From patchwork Tue Sep 28 10:18:20 2021
X-Patchwork-Submitter: Junfeng Guo
X-Patchwork-Id: 99841
X-Patchwork-Delegate: qi.z.zhang@intel.com
From: Junfeng Guo <junfeng.guo@intel.com>
To: qi.z.zhang@intel.com, jingjing.wu@intel.com, beilei.xing@intel.com
Cc: dev@dpdk.org, ferruh.yigit@intel.com, junfeng.guo@intel.com
Date: Tue, 28 Sep 2021 10:18:20 +0000
Message-Id: <20210928101821.147053-3-junfeng.guo@intel.com>
In-Reply-To: <20210928101821.147053-1-junfeng.guo@intel.com>
References: <20210924162223.1543519-2-junfeng.guo@intel.com>
 <20210928101821.147053-1-junfeng.guo@intel.com>
Subject: [dpdk-dev] [PATCH v2 2/3] net/ice: enable protocol agnostic flow
 offloading in FDIR

This patch enables protocol-agnostic flow offloading in Flow Director,
based on the Parser Library and using the existing rte_flow raw API.

Note that a raw flow requires:
1. a byte string of the target packet bits;
2. a byte string of the mask for the target packet.

Here is an example, an FDIR rule matching IPv4 dst addr 1.2.3.4 and
redirecting to queue 3:

flow create 0 ingress pattern raw \
pattern spec \
00000000000000000000000008004500001400004000401000000000000001020304 \
pattern mask \
000000000000000000000000000000000000000000000000000000000000ffffffff \
/ end actions queue index 3 / mark id 3 / end

Note that the mask for some key bits (e.g., 0x0800 to indicate the IPv4
ethertype) is optional in our case. To avoid redundancy, we simply omit
the 0xFFFF mask for the 0x0800 ethertype in the mask byte string above.
The '0x' prefix for the spec and mask byte (hex) strings is also omitted
here.
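
To see how the spec and mask strings line up with packet offsets, here is
a small self-contained editorial sketch in C (not part of the patch; it
merely restates the example above):

#include <stdio.h>
#include <string.h>

int main(void)
{
	const char *spec =
		"00000000000000000000000008004500001400004000401000000000000001020304";
	const char *mask =
		"000000000000000000000000000000000000000000000000000000000000ffffffff";
	/* The IPv4 destination address begins at packet byte 30:
	 * 14 (Ethernet header) + 16 (dst addr offset in the IPv4 header),
	 * i.e. hex-string offset 60 (two characters per byte). */
	size_t dst_hex = 2 * (14 + 16);

	printf("packet length: %zu bytes\n", strlen(spec) / 2); /* 34 */
	printf("dst addr spec: %.8s\n", spec + dst_hex); /* 01020304 = 1.2.3.4 */
	printf("dst addr mask: %.8s\n", mask + dst_hex); /* ffffffff */
	return 0;
}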
Signed-off-by: Junfeng Guo <junfeng.guo@intel.com>
---
 drivers/net/ice/ice_ethdev.h       |   5 +
 drivers/net/ice/ice_fdir_filter.c  | 172 +++++++++++++++++++++++++++++
 drivers/net/ice/ice_generic_flow.c |   7 ++
 drivers/net/ice/ice_generic_flow.h |   3 +
 4 files changed, 187 insertions(+)

diff --git a/drivers/net/ice/ice_ethdev.h b/drivers/net/ice/ice_ethdev.h
index 26f5c560f4..1ab883c7b8 100644
--- a/drivers/net/ice/ice_ethdev.h
+++ b/drivers/net/ice/ice_ethdev.h
@@ -316,6 +316,11 @@ struct ice_fdir_filter_conf {
 	uint64_t input_set_o; /* used for non-tunnel or tunnel outer fields */
 	uint64_t input_set_i; /* only for tunnel inner fields */
 	uint32_t mark_flag;
+
+	struct ice_parser_profile *prof;
+	const u8 *pkt_buf;
+	bool parser_ena;
+	u8 pkt_len;
 };
 
 #define ICE_MAX_FDIR_FILTER_NUM		(1024 * 16)
diff --git a/drivers/net/ice/ice_fdir_filter.c b/drivers/net/ice/ice_fdir_filter.c
index af9669fac6..17f8cee06b 100644
--- a/drivers/net/ice/ice_fdir_filter.c
+++ b/drivers/net/ice/ice_fdir_filter.c
@@ -107,6 +107,7 @@
 	ICE_INSET_NAT_T_ESP_SPI)
 
 static struct ice_pattern_match_item ice_fdir_pattern_list[] = {
+	{pattern_raw,			ICE_INSET_NONE,			ICE_INSET_NONE,		ICE_INSET_NONE},
 	{pattern_ethertype,		ICE_FDIR_INSET_ETH,		ICE_INSET_NONE,		ICE_INSET_NONE},
 	{pattern_eth_ipv4,		ICE_FDIR_INSET_ETH_IPV4,	ICE_INSET_NONE,		ICE_INSET_NONE},
 	{pattern_eth_ipv4_udp,		ICE_FDIR_INSET_ETH_IPV4_UDP,	ICE_INSET_NONE,		ICE_INSET_NONE},
@@ -1190,6 +1191,24 @@ ice_fdir_is_tunnel_profile(enum ice_fdir_tunnel_type tunnel_type)
 		return 0;
 }
 
+static int
+ice_fdir_add_del_raw(struct ice_pf *pf,
+		     struct ice_fdir_filter_conf *filter,
+		     bool add)
+{
+	struct ice_hw *hw = ICE_PF_TO_HW(pf);
+
+	unsigned char *pkt = (unsigned char *)pf->fdir.prg_pkt;
+	rte_memcpy(pkt, filter->pkt_buf, filter->pkt_len);
+
+	struct ice_fltr_desc desc;
+	memset(&desc, 0, sizeof(desc));
+	filter->input.comp_report = ICE_FXD_FLTR_QW0_COMP_REPORT_SW;
+	ice_fdir_get_prgm_desc(hw, &filter->input, &desc, add);
+
+	return ice_fdir_programming(pf, &desc);
+}
+
 static int
 ice_fdir_add_del_filter(struct ice_pf *pf,
 			struct ice_fdir_filter_conf *filter,
@@ -1306,6 +1325,45 @@ ice_fdir_create_filter(struct ice_adapter *ad,
 	bool is_tun;
 	int ret;
 
+	if (filter->parser_ena) {
+		struct ice_hw *hw = ICE_PF_TO_HW(pf);
+
+		u16 ctrl_vsi = pf->fdir.fdir_vsi->idx;
+		u16 main_vsi = pf->main_vsi->idx;
+
+		ret = ice_flow_set_hw_prof(hw, main_vsi, ctrl_vsi,
+					   filter->prof, ICE_BLK_FD);
+		if (ret)
+			return -rte_errno;
+
+		ret = ice_fdir_add_del_raw(pf, filter, true);
+		if (ret)
+			return -rte_errno;
+
+		if (filter->mark_flag == 1)
+			ice_fdir_rx_parsing_enable(ad, 1);
+
+		entry = rte_zmalloc("fdir_entry", sizeof(*entry), 0);
+		if (!entry)
+			return -rte_errno;
+
+		entry->pkt_buf = (u8 *)ice_malloc(hw, filter->pkt_len);
+		if (!entry->pkt_buf)
+			return -ENOMEM;
+
+		u8 *pkt_buf = (u8 *)ice_malloc(hw, filter->pkt_len);
+		if (!pkt_buf)
+			return -ENOMEM;
+
+		rte_memcpy(entry, filter, sizeof(*filter));
+		rte_memcpy(pkt_buf, filter->pkt_buf, filter->pkt_len);
+		entry->pkt_buf = pkt_buf;
+
+		flow->rule = entry;
+
+		return 0;
+	}
+
 	ice_fdir_extract_fltr_key(&key, filter);
 	node = ice_fdir_entry_lookup(fdir_info, &key);
 	if (node) {
@@ -1401,6 +1459,19 @@ ice_fdir_destroy_filter(struct ice_adapter *ad,
 
 	filter = (struct ice_fdir_filter_conf *)flow->rule;
 
+	if (filter->parser_ena) {
+		ret = ice_fdir_add_del_raw(pf, filter, false);
+		if (ret)
+			return -rte_errno;
+
+		filter->pkt_buf = NULL;
+		flow->rule = NULL;
+
+		rte_free(filter);
+
+		return 0;
+	}
+
 	is_tun = ice_fdir_is_tunnel_profile(filter->tunnel_type);
 
 	if (filter->counter) {
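
The create path above stores a deep copy of the training packet with the
flow entry, so the rule owns its own buffer. A minimal self-contained
sketch of that ownership pattern (editorial illustration, not part of the
patch; plain calloc/malloc stand in for rte_zmalloc/ice_malloc, and the
struct is reduced to the two relevant fields):

#include <stdint.h>
#include <stdlib.h>
#include <string.h>

struct filter_conf {
	const uint8_t *pkt_buf; /* training packet bytes */
	uint8_t pkt_len;
};

static struct filter_conf *store_entry(const struct filter_conf *filter)
{
	struct filter_conf *entry = calloc(1, sizeof(*entry));
	if (!entry)
		return NULL;

	uint8_t *pkt_buf = malloc(filter->pkt_len);
	if (!pkt_buf) {
		free(entry);
		return NULL;
	}

	memcpy(entry, filter, sizeof(*filter));             /* shallow copy */
	memcpy(pkt_buf, filter->pkt_buf, filter->pkt_len);  /* deep copy */
	entry->pkt_buf = pkt_buf;
	return entry;
}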
@@ -1679,6 +1750,7 @@ ice_fdir_parse_pattern(__rte_unused struct ice_adapter *ad,
 	enum rte_flow_item_type l3 = RTE_FLOW_ITEM_TYPE_END;
 	enum rte_flow_item_type l4 = RTE_FLOW_ITEM_TYPE_END;
 	enum ice_fdir_tunnel_type tunnel_type = ICE_FDIR_TUNNEL_TYPE_NONE;
+	const struct rte_flow_item_raw *raw_spec, *raw_mask;
 	const struct rte_flow_item_eth *eth_spec, *eth_mask;
 	const struct rte_flow_item_ipv4 *ipv4_spec, *ipv4_last, *ipv4_mask;
 	const struct rte_flow_item_ipv6 *ipv6_spec, *ipv6_mask;
@@ -1706,6 +1778,9 @@ ice_fdir_parse_pattern(__rte_unused struct ice_adapter *ad,
 	struct ice_fdir_extra *p_ext_data;
 	struct ice_fdir_v4 *p_v4 = NULL;
 	struct ice_fdir_v6 *p_v6 = NULL;
+	struct ice_parser_result rslt;
+	struct ice_parser *psr;
+	uint8_t item_num = 0;
 
 	for (item = pattern; item->type != RTE_FLOW_ITEM_TYPE_END; item++) {
 		if (item->type == RTE_FLOW_ITEM_TYPE_VXLAN)
@@ -1717,6 +1792,7 @@ ice_fdir_parse_pattern(__rte_unused struct ice_adapter *ad,
 		    item->type == RTE_FLOW_ITEM_TYPE_GTP_PSC) {
 			is_outer = false;
 		}
+		item_num++;
 	}
 
 	/* This loop parse flow pattern and distinguish Non-tunnel and tunnel
@@ -1737,6 +1813,95 @@ ice_fdir_parse_pattern(__rte_unused struct ice_adapter *ad,
 			&input_set_i : &input_set_o;
 
 		switch (item_type) {
+		case RTE_FLOW_ITEM_TYPE_RAW:
+			raw_spec = item->spec;
+			raw_mask = item->mask;
+
+			if (item_num != 1)
+				break;
+
+			/* convert raw spec & mask from byte string to int */
+			unsigned char *tmp_spec =
+				(uint8_t *)(uintptr_t)raw_spec->pattern;
+			unsigned char *tmp_mask =
+				(uint8_t *)(uintptr_t)raw_mask->pattern;
+			uint16_t udp_port = 0;
+			uint16_t tmp_val = 0;
+			uint8_t pkt_len = 0;
+			uint8_t tmp = 0;
+			int i, j;
+
+			pkt_len = strlen((char *)(uintptr_t)raw_spec->pattern);
+			if (strlen((char *)(uintptr_t)raw_mask->pattern) !=
+			    pkt_len)
+				return -rte_errno;
+
+			for (i = 0, j = 0; i < pkt_len; i += 2, j++) {
+				tmp = tmp_spec[i];
+				if (tmp >= 'a' && tmp <= 'f')
+					tmp_val = tmp - 'a' + 10;
+				if (tmp >= 'A' && tmp <= 'F')
+					tmp_val = tmp - 'A' + 10;
+				if (tmp >= '0' && tmp <= '9')
+					tmp_val = tmp - '0';
+
+				tmp_val *= 16;
+				tmp = tmp_spec[i + 1];
+				if (tmp >= 'a' && tmp <= 'f')
+					tmp_spec[j] = tmp_val + tmp - 'a' + 10;
+				if (tmp >= 'A' && tmp <= 'F')
+					tmp_spec[j] = tmp_val + tmp - 'A' + 10;
+				if (tmp >= '0' && tmp <= '9')
+					tmp_spec[j] = tmp_val + tmp - '0';
+
+				tmp = tmp_mask[i];
+				if (tmp >= 'a' && tmp <= 'f')
+					tmp_val = tmp - 'a' + 10;
+				if (tmp >= 'A' && tmp <= 'F')
+					tmp_val = tmp - 'A' + 10;
+				if (tmp >= '0' && tmp <= '9')
+					tmp_val = tmp - '0';
+
+				tmp_val *= 16;
+				tmp = tmp_mask[i + 1];
+				if (tmp >= 'a' && tmp <= 'f')
+					tmp_mask[j] = tmp_val + tmp - 'a' + 10;
+				if (tmp >= 'A' && tmp <= 'F')
+					tmp_mask[j] = tmp_val + tmp - 'A' + 10;
+				if (tmp >= '0' && tmp <= '9')
+					tmp_mask[j] = tmp_val + tmp - '0';
+			}
+
+			pkt_len /= 2;
+
+			if (ice_parser_create(&ad->hw, &psr))
+				return -rte_errno;
+			if (ice_get_open_tunnel_port(&ad->hw, TNL_VXLAN,
+						     &udp_port))
+				ice_parser_vxlan_tunnel_set(psr, udp_port,
+							    true);
+			if (ice_parser_run(psr, tmp_spec, pkt_len, &rslt))
+				return -rte_errno;
+			ice_parser_destroy(psr);
+
+			if (!tmp_mask)
+				return -rte_errno;
+
+			filter->prof = (struct ice_parser_profile *)
+				ice_malloc(&ad->hw, sizeof(filter->prof));
+			if (!filter->prof)
+				return -ENOMEM;
+			if (ice_parser_profile_init(&rslt, tmp_spec, tmp_mask,
+				pkt_len, ICE_BLK_FD, true, filter->prof))
+				return -rte_errno;
+
+			filter->pkt_buf = tmp_spec;
+			filter->pkt_len = pkt_len;
+
+			filter->parser_ena = true;
+
+			break;
+
 		case RTE_FLOW_ITEM_TYPE_ETH:
 			flow_type = ICE_FLTR_PTYPE_NON_IP_L2;
 			eth_spec = item->spec;
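
For readers tracing the RAW case, the in-place nibble conversion above is
equivalent to the following self-contained helper (editorial sketch, not
part of the patch; it assumes the input has already been validated as hex
digits of even length):

#include <stddef.h>
#include <stdint.h>

static uint8_t hex_nibble(uint8_t c)
{
	if (c >= 'a' && c <= 'f')
		return c - 'a' + 10;
	if (c >= 'A' && c <= 'F')
		return c - 'A' + 10;
	return c - '0'; /* caller guarantees '0'..'9' here */
}

/* Decode 2 * len ASCII hex characters from src into len bytes at dst;
 * each pair of hex digits becomes one packet byte. */
static void hex_decode(uint8_t *dst, const uint8_t *src, size_t len)
{
	size_t i;

	for (i = 0; i < len; i++)
		dst[i] = (uint8_t)(hex_nibble(src[2 * i]) * 16 +
				   hex_nibble(src[2 * i + 1]));
}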
@@ -2202,6 +2367,7 @@ ice_fdir_parse(struct ice_adapter *ad,
 	struct ice_fdir_filter_conf *filter = &pf->fdir.conf;
 	struct ice_pattern_match_item *item = NULL;
 	uint64_t input_set;
+	bool raw = false;
 	int ret;
 
 	memset(filter, 0, sizeof(*filter));
@@ -2217,7 +2383,13 @@ ice_fdir_parse(struct ice_adapter *ad,
 	ret = ice_fdir_parse_pattern(ad, pattern, error, filter);
 	if (ret)
 		goto error;
+
+	if (item->pattern_list[0] == RTE_FLOW_ITEM_TYPE_RAW)
+		raw = true;
+
 	input_set = filter->input_set_o | filter->input_set_i;
+	input_set = raw ? ~input_set : input_set;
+
 	if (!input_set ||
 	    filter->input_set_o & ~(item->input_set_mask_o | ICE_INSET_ETHERTYPE) ||
 	    filter->input_set_i & ~item->input_set_mask_i) {
diff --git a/drivers/net/ice/ice_generic_flow.c b/drivers/net/ice/ice_generic_flow.c
index 13e3734172..b6083ef0a4 100644
--- a/drivers/net/ice/ice_generic_flow.c
+++ b/drivers/net/ice/ice_generic_flow.c
@@ -65,6 +65,12 @@ enum rte_flow_item_type pattern_empty[] = {
 	RTE_FLOW_ITEM_TYPE_END,
 };
 
+/* raw */
+enum rte_flow_item_type pattern_raw[] = {
+	RTE_FLOW_ITEM_TYPE_RAW,
+	RTE_FLOW_ITEM_TYPE_END,
+};
+
 /* L2 */
 enum rte_flow_item_type pattern_ethertype[] = {
 	RTE_FLOW_ITEM_TYPE_ETH,
@@ -2081,6 +2087,7 @@ struct ice_ptype_match {
 };
 
 static struct ice_ptype_match ice_ptype_map[] = {
+	{pattern_raw,			ICE_PTYPE_IPV4_PAY},
 	{pattern_eth_ipv4,		ICE_PTYPE_IPV4_PAY},
 	{pattern_eth_ipv4_udp,		ICE_PTYPE_IPV4_UDP_PAY},
 	{pattern_eth_ipv4_tcp,		ICE_PTYPE_IPV4_TCP_PAY},
diff --git a/drivers/net/ice/ice_generic_flow.h b/drivers/net/ice/ice_generic_flow.h
index 8845a3e156..1b030c0466 100644
--- a/drivers/net/ice/ice_generic_flow.h
+++ b/drivers/net/ice/ice_generic_flow.h
@@ -124,6 +124,9 @@
 /* empty pattern */
 extern enum rte_flow_item_type pattern_empty[];
 
+/* raw pattern */
+extern enum rte_flow_item_type pattern_raw[];
+
 /* L2 */
 extern enum rte_flow_item_type pattern_ethertype[];
 extern enum rte_flow_item_type pattern_ethertype_vlan[];
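
A note on the input_set inversion in the ice_fdir_parse() hunk above: the
raw pattern entry registers ICE_INSET_NONE, so filter->input_set_o and
filter->input_set_i are both zero for a raw rule, and the generic
`!input_set` validity check would reject it; inverting to all-ones lets
raw rules pass, while the two per-direction mask checks stay trivially
satisfied because they test the zero-valued fields directly. A tiny
self-contained illustration (editorial, not part of the patch):

#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

int main(void)
{
	bool raw = true;
	uint64_t input_set = 0;                   /* ICE_INSET_NONE */

	input_set = raw ? ~input_set : input_set; /* all ones for raw rules */
	assert(input_set != 0);                   /* !input_set check passes */
	return 0;
}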
From patchwork Tue Sep 28 10:18:21 2021
X-Patchwork-Submitter: Junfeng Guo
X-Patchwork-Id: 99842
X-Patchwork-Delegate: qi.z.zhang@intel.com
From: Junfeng Guo <junfeng.guo@intel.com>
To: qi.z.zhang@intel.com, jingjing.wu@intel.com, beilei.xing@intel.com
Cc: dev@dpdk.org, ferruh.yigit@intel.com, junfeng.guo@intel.com
Date: Tue, 28 Sep 2021 10:18:21 +0000
Message-Id: <20210928101821.147053-4-junfeng.guo@intel.com>
In-Reply-To: <20210928101821.147053-1-junfeng.guo@intel.com>
References: <20210924162223.1543519-2-junfeng.guo@intel.com>
 <20210928101821.147053-1-junfeng.guo@intel.com>
Subject: [dpdk-dev] [PATCH v2 3/3] doc: enable protocol agnostic flow in FDIR

Protocol-agnostic flow offloading in Flow Director is enabled based on
the Parser Library, using the existing rte_flow raw API.

Signed-off-by: Junfeng Guo <junfeng.guo@intel.com>
---
 doc/guides/rel_notes/release_21_11.rst | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index 8e298337b4..0688233b8b 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -62,6 +62,11 @@ New Features
   * Added bus-level parsing of the devargs syntax.
   * Kept compatibility with the legacy syntax as parsing fallback.
 
+* **Updated Intel ice driver.**
+
+  * Enabled protocol agnostic flow offloading in Flow Director based on the
+    parser library, using existing rte_flow raw API.
+
 * **Add new RSS offload types for IPv4/L4 checksum in RSS flow.**
 
   Add macros ETH_RSS_IPV4_CHKSUM and ETH_RSS_L4_CHKSUM, now IPv4 and