From patchwork Tue Sep 26 11:29:27 2023
X-Patchwork-Submitter: Qi Zhang <qi.z.zhang@intel.com>
X-Patchwork-Id: 131910
X-Patchwork-Delegate: qi.z.zhang@intel.com
From: Qi Zhang <qi.z.zhang@intel.com>
To: qiming.yang@intel.com
Cc: zhichaox.zeng@intel.com, dev@dpdk.org, Qi Zhang <qi.z.zhang@intel.com>
Subject: [PATCH v5 1/5] net/ice: remove pipeline mode
Date: Tue, 26 Sep 2023 07:29:27 -0400
Message-Id: <20230926112931.4191107-2-qi.z.zhang@intel.com>
In-Reply-To: <20230926112931.4191107-1-qi.z.zhang@intel.com>
References: <20230814202616.3346652-1-qi.z.zhang@intel.com>
 <20230926112931.4191107-1-qi.z.zhang@intel.com>

This marks the initial phase of refactoring the ice rte_flow
implementation. Combining switch and fdir rules within the same syntax
has led to an inconvenient user experience, because the switch filter
and the fdir filter naturally represent distinct pipeline stages with
differing hardware capabilities. To address this, each stage will be
assigned to a separate rte_flow group, allowing users to state clearly
which stage a rule targets when they create it. Consequently, the
pipeline mode is no longer needed and is removed.
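With pipeline mode gone, the ``priority`` attribute no longer selects a
classification stage: the driver accepts priority 0 or 1 and rejects any
higher value. A minimal sketch of rule creation from the application side
after this change (illustrative only; the match fields, queue index and
function name are placeholders, not part of this patch):

#include <rte_flow.h>
#include <rte_ip.h>
#include <rte_byteorder.h>

/* Illustrative sketch: steer packets with a given IPv4 destination
 * address to queue 3. After this patch, priority may only be 0 or 1 and
 * no longer selects a classification stage.
 */
static struct rte_flow *
example_create_rule(uint16_t port_id, struct rte_flow_error *error)
{
	struct rte_flow_attr attr = {
		.ingress = 1,
		.priority = 0,	/* 0 or 1; higher values are rejected */
	};
	struct rte_flow_item_ipv4 ip_spec = {
		.hdr.dst_addr = RTE_BE32(RTE_IPV4(1, 2, 3, 4)),
	};
	struct rte_flow_item_ipv4 ip_mask = {
		.hdr.dst_addr = RTE_BE32(0xffffffff),
	};
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH },
		{ .type = RTE_FLOW_ITEM_TYPE_IPV4,
		  .spec = &ip_spec, .mask = &ip_mask },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct rte_flow_action_queue queue = { .index = 3 };
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};

	return rte_flow_create(port_id, &attr, pattern, actions, error);
}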
Signed-off-by: Qi Zhang --- doc/guides/nics/ice.rst | 19 ----- drivers/net/ice/ice_ethdev.c | 8 -- drivers/net/ice/ice_ethdev.h | 2 - drivers/net/ice/ice_fdir_filter.c | 2 +- drivers/net/ice/ice_generic_flow.c | 120 ++++++++-------------------- drivers/net/ice/ice_generic_flow.h | 6 +- drivers/net/ice/ice_switch_filter.c | 118 +-------------------------- 7 files changed, 40 insertions(+), 235 deletions(-) diff --git a/doc/guides/nics/ice.rst b/doc/guides/nics/ice.rst index c351c6bd74..5a47109c3f 100644 --- a/doc/guides/nics/ice.rst +++ b/doc/guides/nics/ice.rst @@ -90,25 +90,6 @@ Runtime Configuration NOTE: In Safe mode, only very limited features are available, features like RSS, checksum, fdir, tunneling ... are all disabled. -- ``Generic Flow Pipeline Mode Support`` (default ``0``) - - In pipeline mode, a flow can be set at one specific stage by setting parameter - ``priority``. Currently, we support two stages: priority = 0 or !0. Flows with - priority 0 located at the first pipeline stage which typically be used as a firewall - to drop the packet on a blocklist(we called it permission stage). At this stage, - flow rules are created for the device's exact match engine: switch. Flows with priority - !0 located at the second stage, typically packets are classified here and be steered to - specific queue or queue group (we called it distribution stage), At this stage, flow - rules are created for device's flow director engine. - For none-pipeline mode, ``priority`` is ignored, a flow rule can be created as a flow director - rule or a switch rule depends on its pattern/action and the resource allocation situation, - all flows are virtually at the same pipeline stage. - By default, generic flow API is enabled in none-pipeline mode, user can choose to - use pipeline mode by setting ``devargs`` parameter ``pipeline-mode-support``, - for example:: - - -a 80:00.0,pipeline-mode-support=1 - - ``Default MAC Disable`` (default ``0``) Disable the default MAC make the device drop all packets by default, diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c index 4bad39c2c1..036b068c22 100644 --- a/drivers/net/ice/ice_ethdev.c +++ b/drivers/net/ice/ice_ethdev.c @@ -27,7 +27,6 @@ /* devargs */ #define ICE_SAFE_MODE_SUPPORT_ARG "safe-mode-support" -#define ICE_PIPELINE_MODE_SUPPORT_ARG "pipeline-mode-support" #define ICE_DEFAULT_MAC_DISABLE "default-mac-disable" #define ICE_PROTO_XTR_ARG "proto_xtr" #define ICE_FIELD_OFFS_ARG "field_offs" @@ -43,7 +42,6 @@ int ice_timestamp_dynfield_offset = -1; static const char * const ice_valid_args[] = { ICE_SAFE_MODE_SUPPORT_ARG, - ICE_PIPELINE_MODE_SUPPORT_ARG, ICE_PROTO_XTR_ARG, ICE_FIELD_OFFS_ARG, ICE_FIELD_NAME_ARG, @@ -2103,11 +2101,6 @@ static int ice_parse_devargs(struct rte_eth_dev *dev) if (ret) goto bail; - ret = rte_kvargs_process(kvlist, ICE_PIPELINE_MODE_SUPPORT_ARG, - &parse_bool, &ad->devargs.pipe_mode_support); - if (ret) - goto bail; - ret = rte_kvargs_process(kvlist, ICE_DEFAULT_MAC_DISABLE, &parse_bool, &ad->devargs.default_mac_disable); if (ret) @@ -6549,7 +6542,6 @@ RTE_PMD_REGISTER_PARAM_STRING(net_ice, ICE_HW_DEBUG_MASK_ARG "=0xXXX" ICE_PROTO_XTR_ARG "=[queue:]" ICE_SAFE_MODE_SUPPORT_ARG "=<0|1>" - ICE_PIPELINE_MODE_SUPPORT_ARG "=<0|1>" ICE_DEFAULT_MAC_DISABLE "=<0|1>" ICE_RX_LOW_LATENCY_ARG "=<0|1>"); diff --git a/drivers/net/ice/ice_ethdev.h b/drivers/net/ice/ice_ethdev.h index 9789cb8525..1f88becd19 100644 --- a/drivers/net/ice/ice_ethdev.h +++ b/drivers/net/ice/ice_ethdev.h @@ -542,7 +542,6 @@ struct ice_pf { struct 
ice_flow_list flow_list; rte_spinlock_t flow_ops_lock; struct ice_parser_list rss_parser_list; - struct ice_parser_list perm_parser_list; struct ice_parser_list dist_parser_list; bool init_link_up; uint64_t old_rx_bytes; @@ -563,7 +562,6 @@ struct ice_devargs { int rx_low_latency; int safe_mode_support; uint8_t proto_xtr_dflt; - int pipe_mode_support; uint8_t default_mac_disable; uint8_t proto_xtr[ICE_MAX_QUEUE_NUM]; uint8_t pin_idx; diff --git a/drivers/net/ice/ice_fdir_filter.c b/drivers/net/ice/ice_fdir_filter.c index e8842bc242..e9ee5a57d6 100644 --- a/drivers/net/ice/ice_fdir_filter.c +++ b/drivers/net/ice/ice_fdir_filter.c @@ -2467,7 +2467,7 @@ ice_fdir_parse(struct ice_adapter *ad, item = ice_search_pattern_match_item(ad, pattern, array, array_len, error); - if (!ad->devargs.pipe_mode_support && priority >= 1) + if (priority >= 1) return -rte_errno; if (!item) diff --git a/drivers/net/ice/ice_generic_flow.c b/drivers/net/ice/ice_generic_flow.c index 91bf1d6fcb..6695457bbd 100644 --- a/drivers/net/ice/ice_generic_flow.c +++ b/drivers/net/ice/ice_generic_flow.c @@ -18,16 +18,6 @@ #include "ice_ethdev.h" #include "ice_generic_flow.h" -/** - * Non-pipeline mode, fdir and switch both used as distributor, - * fdir used first, switch used as fdir's backup. - */ -#define ICE_FLOW_CLASSIFY_STAGE_DISTRIBUTOR_ONLY 0 -/*Pipeline mode, switch used at permission stage*/ -#define ICE_FLOW_CLASSIFY_STAGE_PERMISSION 1 -/*Pipeline mode, fdir used at distributor stage*/ -#define ICE_FLOW_CLASSIFY_STAGE_DISTRIBUTOR 2 - #define ICE_FLOW_ENGINE_DISABLED(mask, type) ((mask) & BIT(type)) static struct ice_engine_list engine_list = @@ -1829,7 +1819,6 @@ ice_flow_init(struct ice_adapter *ad) TAILQ_INIT(&pf->flow_list); TAILQ_INIT(&pf->rss_parser_list); - TAILQ_INIT(&pf->perm_parser_list); TAILQ_INIT(&pf->dist_parser_list); rte_spinlock_init(&pf->flow_ops_lock); @@ -1898,11 +1887,6 @@ ice_flow_uninit(struct ice_adapter *ad) rte_free(p_parser); } - while ((p_parser = TAILQ_FIRST(&pf->perm_parser_list))) { - TAILQ_REMOVE(&pf->perm_parser_list, p_parser, node); - rte_free(p_parser); - } - while ((p_parser = TAILQ_FIRST(&pf->dist_parser_list))) { TAILQ_REMOVE(&pf->dist_parser_list, p_parser, node); rte_free(p_parser); @@ -1925,9 +1909,6 @@ ice_get_parser_list(struct ice_flow_parser *parser, case ICE_FLOW_STAGE_RSS: list = &pf->rss_parser_list; break; - case ICE_FLOW_STAGE_PERMISSION: - list = &pf->perm_parser_list; - break; case ICE_FLOW_STAGE_DISTRIBUTOR: list = &pf->dist_parser_list; break; @@ -1958,38 +1939,34 @@ ice_register_parser(struct ice_flow_parser *parser, if (list == NULL) return -EINVAL; - if (ad->devargs.pipe_mode_support) { - TAILQ_INSERT_TAIL(list, parser_node, node); - } else { - if (parser->engine->type == ICE_FLOW_ENGINE_SWITCH) { - RTE_TAILQ_FOREACH_SAFE(existing_node, list, - node, temp) { - if (existing_node->parser->engine->type == - ICE_FLOW_ENGINE_ACL) { - TAILQ_INSERT_AFTER(list, existing_node, - parser_node, node); - goto DONE; - } + if (parser->engine->type == ICE_FLOW_ENGINE_SWITCH) { + RTE_TAILQ_FOREACH_SAFE(existing_node, list, + node, temp) { + if (existing_node->parser->engine->type == + ICE_FLOW_ENGINE_ACL) { + TAILQ_INSERT_AFTER(list, existing_node, + parser_node, node); + goto DONE; } - TAILQ_INSERT_HEAD(list, parser_node, node); - } else if (parser->engine->type == ICE_FLOW_ENGINE_FDIR) { - RTE_TAILQ_FOREACH_SAFE(existing_node, list, - node, temp) { - if (existing_node->parser->engine->type == - ICE_FLOW_ENGINE_SWITCH) { - TAILQ_INSERT_AFTER(list, existing_node, - parser_node, 
node); - goto DONE; - } + } + TAILQ_INSERT_HEAD(list, parser_node, node); + } else if (parser->engine->type == ICE_FLOW_ENGINE_FDIR) { + RTE_TAILQ_FOREACH_SAFE(existing_node, list, + node, temp) { + if (existing_node->parser->engine->type == + ICE_FLOW_ENGINE_SWITCH) { + TAILQ_INSERT_AFTER(list, existing_node, + parser_node, node); + goto DONE; } - TAILQ_INSERT_HEAD(list, parser_node, node); - } else if (parser->engine->type == ICE_FLOW_ENGINE_HASH) { - TAILQ_INSERT_TAIL(list, parser_node, node); - } else if (parser->engine->type == ICE_FLOW_ENGINE_ACL) { - TAILQ_INSERT_HEAD(list, parser_node, node); - } else { - return -EINVAL; } + TAILQ_INSERT_HEAD(list, parser_node, node); + } else if (parser->engine->type == ICE_FLOW_ENGINE_HASH) { + TAILQ_INSERT_TAIL(list, parser_node, node); + } else if (parser->engine->type == ICE_FLOW_ENGINE_ACL) { + TAILQ_INSERT_HEAD(list, parser_node, node); + } else { + return -EINVAL; } DONE: return 0; @@ -2016,10 +1993,8 @@ ice_unregister_parser(struct ice_flow_parser *parser, } static int -ice_flow_valid_attr(struct ice_adapter *ad, - const struct rte_flow_attr *attr, - int *ice_pipeline_stage, - struct rte_flow_error *error) +ice_flow_valid_attr(const struct rte_flow_attr *attr, + struct rte_flow_error *error) { /* Must be input direction */ if (!attr->ingress) { @@ -2045,23 +2020,11 @@ ice_flow_valid_attr(struct ice_adapter *ad, return -rte_errno; } - /* Check pipeline mode support to set classification stage */ - if (ad->devargs.pipe_mode_support) { - if (attr->priority == 0) - *ice_pipeline_stage = - ICE_FLOW_CLASSIFY_STAGE_PERMISSION; - else - *ice_pipeline_stage = - ICE_FLOW_CLASSIFY_STAGE_DISTRIBUTOR; - } else { - *ice_pipeline_stage = - ICE_FLOW_CLASSIFY_STAGE_DISTRIBUTOR_ONLY; - if (attr->priority > 1) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ATTR_PRIORITY, - attr, "Only support priority 0 and 1."); - return -rte_errno; - } + if (attr->priority > 1) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ATTR_PRIORITY, + attr, "Only support priority 0 and 1."); + return -rte_errno; } /* Not supported */ @@ -2407,7 +2370,6 @@ ice_flow_process_filter(struct rte_eth_dev *dev, struct ice_adapter *ad = ICE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private); struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private); - int ice_pipeline_stage = 0; if (!pattern) { rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM_NUM, @@ -2429,7 +2391,7 @@ ice_flow_process_filter(struct rte_eth_dev *dev, return -rte_errno; } - ret = ice_flow_valid_attr(ad, attr, &ice_pipeline_stage, error); + ret = ice_flow_valid_attr(attr, error); if (ret) return ret; @@ -2438,20 +2400,8 @@ ice_flow_process_filter(struct rte_eth_dev *dev, if (*engine != NULL) return 0; - switch (ice_pipeline_stage) { - case ICE_FLOW_CLASSIFY_STAGE_DISTRIBUTOR_ONLY: - case ICE_FLOW_CLASSIFY_STAGE_DISTRIBUTOR: - *engine = ice_parse_engine(ad, flow, &pf->dist_parser_list, - attr->priority, pattern, actions, error); - break; - case ICE_FLOW_CLASSIFY_STAGE_PERMISSION: - *engine = ice_parse_engine(ad, flow, &pf->perm_parser_list, - attr->priority, pattern, actions, error); - break; - default: - return -EINVAL; - } - + *engine = ice_parse_engine(ad, flow, &pf->dist_parser_list, + attr->priority, pattern, actions, error); if (*engine == NULL) return -EINVAL; diff --git a/drivers/net/ice/ice_generic_flow.h b/drivers/net/ice/ice_generic_flow.h index 11f51a5c15..471f255bd6 100644 --- a/drivers/net/ice/ice_generic_flow.h +++ b/drivers/net/ice/ice_generic_flow.h @@ -418,15 +418,13 @@ 
enum ice_flow_engine_type { }; /** - * classification stages. - * for non-pipeline mode, we have two classification stages: Distributor/RSS - * for pipeline-mode we have three classification stages: + * Classification stages. + * We have two classification stages: Distributor/RSS * Permission/Distributor/RSS */ enum ice_flow_classification_stage { ICE_FLOW_STAGE_NONE = 0, ICE_FLOW_STAGE_RSS, - ICE_FLOW_STAGE_PERMISSION, ICE_FLOW_STAGE_DISTRIBUTOR, ICE_FLOW_STAGE_MAX, }; diff --git a/drivers/net/ice/ice_switch_filter.c b/drivers/net/ice/ice_switch_filter.c index 110d8895fe..88d599068f 100644 --- a/drivers/net/ice/ice_switch_filter.c +++ b/drivers/net/ice/ice_switch_filter.c @@ -202,7 +202,6 @@ struct ice_switch_filter_conf { }; static struct ice_flow_parser ice_switch_dist_parser; -static struct ice_flow_parser ice_switch_perm_parser; static struct ice_pattern_match_item ice_switch_pattern_dist_list[] = { @@ -288,90 +287,6 @@ ice_pattern_match_item ice_switch_pattern_dist_list[] = { {pattern_eth_ipv6_gtpu_eh_ipv6_tcp, ICE_SW_INSET_MAC_GTPU_EH_OUTER, ICE_SW_INSET_GTPU_IPV6_TCP, ICE_INSET_NONE}, }; -static struct -ice_pattern_match_item ice_switch_pattern_perm_list[] = { - {pattern_any, ICE_INSET_NONE, ICE_INSET_NONE, ICE_INSET_NONE}, - {pattern_ethertype, ICE_SW_INSET_ETHER, ICE_INSET_NONE, ICE_INSET_NONE}, - {pattern_ethertype_vlan, ICE_SW_INSET_MAC_VLAN, ICE_INSET_NONE, ICE_INSET_NONE}, - {pattern_ethertype_qinq, ICE_SW_INSET_MAC_QINQ, ICE_INSET_NONE, ICE_INSET_NONE}, - {pattern_eth_arp, ICE_INSET_NONE, ICE_INSET_NONE, ICE_INSET_NONE}, - {pattern_eth_ipv4, ICE_SW_INSET_MAC_IPV4, ICE_INSET_NONE, ICE_INSET_NONE}, - {pattern_eth_ipv4_udp, ICE_SW_INSET_MAC_IPV4_UDP, ICE_INSET_NONE, ICE_INSET_NONE}, - {pattern_eth_ipv4_tcp, ICE_SW_INSET_MAC_IPV4_TCP, ICE_INSET_NONE, ICE_INSET_NONE}, - {pattern_eth_ipv6, ICE_SW_INSET_MAC_IPV6, ICE_INSET_NONE, ICE_INSET_NONE}, - {pattern_eth_ipv6_udp, ICE_SW_INSET_MAC_IPV6_UDP, ICE_INSET_NONE, ICE_INSET_NONE}, - {pattern_eth_ipv6_tcp, ICE_SW_INSET_MAC_IPV6_TCP, ICE_INSET_NONE, ICE_INSET_NONE}, - {pattern_eth_ipv4_udp_vxlan_eth_ipv4, ICE_INSET_NONE, ICE_SW_INSET_PERM_TUNNEL_IPV4, ICE_INSET_NONE}, - {pattern_eth_ipv4_udp_vxlan_eth_ipv4_udp, ICE_INSET_NONE, ICE_SW_INSET_PERM_TUNNEL_IPV4_UDP, ICE_INSET_NONE}, - {pattern_eth_ipv4_udp_vxlan_eth_ipv4_tcp, ICE_INSET_NONE, ICE_SW_INSET_PERM_TUNNEL_IPV4_TCP, ICE_INSET_NONE}, - {pattern_eth_ipv4_nvgre_eth_ipv4, ICE_INSET_NONE, ICE_SW_INSET_PERM_TUNNEL_IPV4, ICE_INSET_NONE}, - {pattern_eth_ipv4_nvgre_eth_ipv4_udp, ICE_INSET_NONE, ICE_SW_INSET_PERM_TUNNEL_IPV4_UDP, ICE_INSET_NONE}, - {pattern_eth_ipv4_nvgre_eth_ipv4_tcp, ICE_INSET_NONE, ICE_SW_INSET_PERM_TUNNEL_IPV4_TCP, ICE_INSET_NONE}, - {pattern_eth_pppoes, ICE_SW_INSET_MAC_PPPOE, ICE_INSET_NONE, ICE_INSET_NONE}, - {pattern_eth_vlan_pppoes, ICE_SW_INSET_MAC_PPPOE, ICE_INSET_NONE, ICE_INSET_NONE}, - {pattern_eth_pppoes_proto, ICE_SW_INSET_MAC_PPPOE_PROTO, ICE_INSET_NONE, ICE_INSET_NONE}, - {pattern_eth_vlan_pppoes_proto, ICE_SW_INSET_MAC_PPPOE_PROTO, ICE_INSET_NONE, ICE_INSET_NONE}, - {pattern_eth_pppoes_ipv4, ICE_SW_INSET_MAC_PPPOE_IPV4, ICE_INSET_NONE, ICE_INSET_NONE}, - {pattern_eth_pppoes_ipv4_tcp, ICE_SW_INSET_MAC_PPPOE_IPV4_TCP, ICE_INSET_NONE, ICE_INSET_NONE}, - {pattern_eth_pppoes_ipv4_udp, ICE_SW_INSET_MAC_PPPOE_IPV4_UDP, ICE_INSET_NONE, ICE_INSET_NONE}, - {pattern_eth_pppoes_ipv6, ICE_SW_INSET_MAC_PPPOE_IPV6, ICE_INSET_NONE, ICE_INSET_NONE}, - {pattern_eth_pppoes_ipv6_tcp, ICE_SW_INSET_MAC_PPPOE_IPV6_TCP, ICE_INSET_NONE, ICE_INSET_NONE}, - 
{pattern_eth_pppoes_ipv6_udp, ICE_SW_INSET_MAC_PPPOE_IPV6_UDP, ICE_INSET_NONE, ICE_INSET_NONE}, - {pattern_eth_vlan_pppoes_ipv4, ICE_SW_INSET_MAC_PPPOE_IPV4, ICE_INSET_NONE, ICE_INSET_NONE}, - {pattern_eth_vlan_pppoes_ipv4_tcp, ICE_SW_INSET_MAC_PPPOE_IPV4_TCP, ICE_INSET_NONE, ICE_INSET_NONE}, - {pattern_eth_vlan_pppoes_ipv4_udp, ICE_SW_INSET_MAC_PPPOE_IPV4_UDP, ICE_INSET_NONE, ICE_INSET_NONE}, - {pattern_eth_vlan_pppoes_ipv6, ICE_SW_INSET_MAC_PPPOE_IPV6, ICE_INSET_NONE, ICE_INSET_NONE}, - {pattern_eth_vlan_pppoes_ipv6_tcp, ICE_SW_INSET_MAC_PPPOE_IPV6_TCP, ICE_INSET_NONE, ICE_INSET_NONE}, - {pattern_eth_vlan_pppoes_ipv6_udp, ICE_SW_INSET_MAC_PPPOE_IPV6_UDP, ICE_INSET_NONE, ICE_INSET_NONE}, - {pattern_eth_ipv4_esp, ICE_SW_INSET_MAC_IPV4_ESP, ICE_INSET_NONE, ICE_INSET_NONE}, - {pattern_eth_ipv4_udp_esp, ICE_SW_INSET_MAC_IPV4_ESP, ICE_INSET_NONE, ICE_INSET_NONE}, - {pattern_eth_ipv6_esp, ICE_SW_INSET_MAC_IPV6_ESP, ICE_INSET_NONE, ICE_INSET_NONE}, - {pattern_eth_ipv6_udp_esp, ICE_SW_INSET_MAC_IPV6_ESP, ICE_INSET_NONE, ICE_INSET_NONE}, - {pattern_eth_ipv4_ah, ICE_SW_INSET_MAC_IPV4_AH, ICE_INSET_NONE, ICE_INSET_NONE}, - {pattern_eth_ipv6_ah, ICE_SW_INSET_MAC_IPV6_AH, ICE_INSET_NONE, ICE_INSET_NONE}, - {pattern_eth_ipv6_udp_ah, ICE_INSET_NONE, ICE_INSET_NONE, ICE_INSET_NONE}, - {pattern_eth_ipv4_l2tp, ICE_SW_INSET_MAC_IPV4_L2TP, ICE_INSET_NONE, ICE_INSET_NONE}, - {pattern_eth_ipv6_l2tp, ICE_SW_INSET_MAC_IPV6_L2TP, ICE_INSET_NONE, ICE_INSET_NONE}, - {pattern_eth_ipv4_pfcp, ICE_INSET_NONE, ICE_INSET_NONE, ICE_INSET_NONE}, - {pattern_eth_ipv6_pfcp, ICE_INSET_NONE, ICE_INSET_NONE, ICE_INSET_NONE}, - {pattern_eth_qinq_ipv4, ICE_SW_INSET_MAC_QINQ_IPV4, ICE_INSET_NONE, ICE_INSET_NONE}, - {pattern_eth_qinq_ipv4_tcp, ICE_SW_INSET_MAC_QINQ_IPV4_TCP, ICE_INSET_NONE, ICE_INSET_NONE}, - {pattern_eth_qinq_ipv4_udp, ICE_SW_INSET_MAC_QINQ_IPV4_UDP, ICE_INSET_NONE, ICE_INSET_NONE}, - {pattern_eth_qinq_ipv6, ICE_SW_INSET_MAC_QINQ_IPV6, ICE_INSET_NONE, ICE_INSET_NONE}, - {pattern_eth_qinq_ipv6_tcp, ICE_SW_INSET_MAC_QINQ_IPV6_TCP, ICE_INSET_NONE, ICE_INSET_NONE}, - {pattern_eth_qinq_ipv6_udp, ICE_SW_INSET_MAC_QINQ_IPV6_UDP, ICE_INSET_NONE, ICE_INSET_NONE}, - {pattern_eth_qinq_pppoes, ICE_SW_INSET_MAC_PPPOE, ICE_INSET_NONE, ICE_INSET_NONE}, - {pattern_eth_qinq_pppoes_proto, ICE_SW_INSET_MAC_PPPOE_PROTO, ICE_INSET_NONE, ICE_INSET_NONE}, - {pattern_eth_qinq_pppoes_ipv4, ICE_SW_INSET_MAC_PPPOE_IPV4, ICE_INSET_NONE, ICE_INSET_NONE}, - {pattern_eth_qinq_pppoes_ipv6, ICE_SW_INSET_MAC_PPPOE_IPV6, ICE_INSET_NONE, ICE_INSET_NONE}, - {pattern_eth_ipv4_gtpu, ICE_SW_INSET_MAC_IPV4_GTPU, ICE_INSET_NONE, ICE_INSET_NONE}, - {pattern_eth_ipv6_gtpu, ICE_SW_INSET_MAC_IPV6_GTPU, ICE_INSET_NONE, ICE_INSET_NONE}, - {pattern_eth_ipv4_gtpu_ipv4, ICE_SW_INSET_MAC_GTPU_OUTER, ICE_SW_INSET_GTPU_IPV4, ICE_INSET_NONE}, - {pattern_eth_ipv4_gtpu_eh_ipv4, ICE_SW_INSET_MAC_GTPU_EH_OUTER, ICE_SW_INSET_GTPU_IPV4, ICE_INSET_NONE}, - {pattern_eth_ipv4_gtpu_ipv4_udp, ICE_SW_INSET_MAC_GTPU_OUTER, ICE_SW_INSET_GTPU_IPV4_UDP, ICE_INSET_NONE}, - {pattern_eth_ipv4_gtpu_eh_ipv4_udp, ICE_SW_INSET_MAC_GTPU_EH_OUTER, ICE_SW_INSET_GTPU_IPV4_UDP, ICE_INSET_NONE}, - {pattern_eth_ipv4_gtpu_ipv4_tcp, ICE_SW_INSET_MAC_GTPU_OUTER, ICE_SW_INSET_GTPU_IPV4_TCP, ICE_INSET_NONE}, - {pattern_eth_ipv4_gtpu_eh_ipv4_tcp, ICE_SW_INSET_MAC_GTPU_EH_OUTER, ICE_SW_INSET_GTPU_IPV4_TCP, ICE_INSET_NONE}, - {pattern_eth_ipv4_gtpu_ipv6, ICE_SW_INSET_MAC_GTPU_OUTER, ICE_SW_INSET_GTPU_IPV6, ICE_INSET_NONE}, - {pattern_eth_ipv4_gtpu_eh_ipv6, ICE_SW_INSET_MAC_GTPU_EH_OUTER, 
ICE_SW_INSET_GTPU_IPV6, ICE_INSET_NONE}, - {pattern_eth_ipv4_gtpu_ipv6_udp, ICE_SW_INSET_MAC_GTPU_OUTER, ICE_SW_INSET_GTPU_IPV6_UDP, ICE_INSET_NONE}, - {pattern_eth_ipv4_gtpu_eh_ipv6_udp, ICE_SW_INSET_MAC_GTPU_EH_OUTER, ICE_SW_INSET_GTPU_IPV6_UDP, ICE_INSET_NONE}, - {pattern_eth_ipv4_gtpu_ipv6_tcp, ICE_SW_INSET_MAC_GTPU_OUTER, ICE_SW_INSET_GTPU_IPV6_TCP, ICE_INSET_NONE}, - {pattern_eth_ipv4_gtpu_eh_ipv6_tcp, ICE_SW_INSET_MAC_GTPU_EH_OUTER, ICE_SW_INSET_GTPU_IPV6_TCP, ICE_INSET_NONE}, - {pattern_eth_ipv6_gtpu_ipv4, ICE_SW_INSET_MAC_GTPU_OUTER, ICE_SW_INSET_GTPU_IPV4, ICE_INSET_NONE}, - {pattern_eth_ipv6_gtpu_eh_ipv4, ICE_SW_INSET_MAC_GTPU_EH_OUTER, ICE_SW_INSET_GTPU_IPV4, ICE_INSET_NONE}, - {pattern_eth_ipv6_gtpu_ipv4_udp, ICE_SW_INSET_MAC_GTPU_OUTER, ICE_SW_INSET_GTPU_IPV4_UDP, ICE_INSET_NONE}, - {pattern_eth_ipv6_gtpu_eh_ipv4_udp, ICE_SW_INSET_MAC_GTPU_EH_OUTER, ICE_SW_INSET_GTPU_IPV4_UDP, ICE_INSET_NONE}, - {pattern_eth_ipv6_gtpu_ipv4_tcp, ICE_SW_INSET_MAC_GTPU_OUTER, ICE_SW_INSET_GTPU_IPV4_TCP, ICE_INSET_NONE}, - {pattern_eth_ipv6_gtpu_eh_ipv4_tcp, ICE_SW_INSET_MAC_GTPU_EH_OUTER, ICE_SW_INSET_GTPU_IPV4_TCP, ICE_INSET_NONE}, - {pattern_eth_ipv6_gtpu_ipv6, ICE_SW_INSET_MAC_GTPU_OUTER, ICE_SW_INSET_GTPU_IPV6, ICE_INSET_NONE}, - {pattern_eth_ipv6_gtpu_eh_ipv6, ICE_SW_INSET_MAC_GTPU_EH_OUTER, ICE_SW_INSET_GTPU_IPV6, ICE_INSET_NONE}, - {pattern_eth_ipv6_gtpu_ipv6_udp, ICE_SW_INSET_MAC_GTPU_OUTER, ICE_SW_INSET_GTPU_IPV6_UDP, ICE_INSET_NONE}, - {pattern_eth_ipv6_gtpu_eh_ipv6_udp, ICE_SW_INSET_MAC_GTPU_EH_OUTER, ICE_SW_INSET_GTPU_IPV6_UDP, ICE_INSET_NONE}, - {pattern_eth_ipv6_gtpu_ipv6_tcp, ICE_SW_INSET_MAC_GTPU_OUTER, ICE_SW_INSET_GTPU_IPV6_TCP, ICE_INSET_NONE}, - {pattern_eth_ipv6_gtpu_eh_ipv6_tcp, ICE_SW_INSET_MAC_GTPU_EH_OUTER, ICE_SW_INSET_GTPU_IPV6_TCP, ICE_INSET_NONE}, -}; - static int ice_switch_create(struct ice_adapter *ad, struct rte_flow *flow, @@ -2139,33 +2054,13 @@ ice_switch_redirect(struct ice_adapter *ad, static int ice_switch_init(struct ice_adapter *ad) { - int ret = 0; - struct ice_flow_parser *dist_parser; - struct ice_flow_parser *perm_parser; - - if (ad->devargs.pipe_mode_support) { - perm_parser = &ice_switch_perm_parser; - ret = ice_register_parser(perm_parser, ad); - } else { - dist_parser = &ice_switch_dist_parser; - ret = ice_register_parser(dist_parser, ad); - } - return ret; + return ice_register_parser(&ice_switch_dist_parser, ad); } static void ice_switch_uninit(struct ice_adapter *ad) { - struct ice_flow_parser *dist_parser; - struct ice_flow_parser *perm_parser; - - if (ad->devargs.pipe_mode_support) { - perm_parser = &ice_switch_perm_parser; - ice_unregister_parser(perm_parser, ad); - } else { - dist_parser = &ice_switch_dist_parser; - ice_unregister_parser(dist_parser, ad); - } + ice_unregister_parser(&ice_switch_dist_parser, ad); } static struct @@ -2189,15 +2084,6 @@ ice_flow_parser ice_switch_dist_parser = { .stage = ICE_FLOW_STAGE_DISTRIBUTOR, }; -static struct -ice_flow_parser ice_switch_perm_parser = { - .engine = &ice_switch_engine, - .array = ice_switch_pattern_perm_list, - .array_len = RTE_DIM(ice_switch_pattern_perm_list), - .parse_pattern_action = ice_switch_parse_pattern_action, - .stage = ICE_FLOW_STAGE_PERMISSION, -}; - RTE_INIT(ice_sw_engine_init) { struct ice_flow_engine *engine = &ice_switch_engine; From patchwork Tue Sep 26 11:29:28 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Qi Zhang X-Patchwork-Id: 131911 X-Patchwork-Delegate: qi.z.zhang@intel.com Return-Path: 
From: Qi Zhang <qi.z.zhang@intel.com>
To: qiming.yang@intel.com
Cc: zhichaox.zeng@intel.com, dev@dpdk.org, Qi Zhang <qi.z.zhang@intel.com>
Subject: [PATCH v5 2/5] net/ice: refine flow engine disabling
Date: Tue, 26 Sep 2023 07:29:28 -0400
Message-Id: <20230926112931.4191107-3-qi.z.zhang@intel.com>
In-Reply-To: <20230926112931.4191107-1-qi.z.zhang@intel.com>
References: <20230814202616.3346652-1-qi.z.zhang@intel.com>
 <20230926112931.4191107-1-qi.z.zhang@intel.com>

Use "disabled_engine_mask" as the only mechanism for disabling a flow
engine.

In PF mode, only the ACL engine is disabled.
In DCF mode, the FDIR and HASH engines are disabled.
In DCF mode with "acl=off", the ACL engine is also disabled.
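The disabling decision itself is just a per-engine bitmask tested with the
``ICE_FLOW_ENGINE_DISABLED`` helper already defined in ice_generic_flow.c.
A rough, self-contained sketch of the idea (the enum values and function
names below are illustrative, not the driver's exact definitions):

#include <stdbool.h>
#include <stdint.h>

#define BIT(n)	(1ULL << (n))
/* Same shape as the helper in ice_generic_flow.c */
#define ICE_FLOW_ENGINE_DISABLED(mask, type)	((mask) & BIT(type))

/* Illustrative engine IDs; see ice_generic_flow.h for the real enum */
enum example_engine { ENG_FDIR = 1, ENG_SWITCH, ENG_HASH, ENG_ACL };

/* DCF mode disables FDIR and HASH up front; ACL is added only when the
 * user passes acl=off (PF mode instead disables only ACL).
 */
static uint64_t
example_dcf_disabled_mask(bool acl_off)
{
	uint64_t mask = BIT(ENG_FDIR) | BIT(ENG_HASH);

	if (acl_off)
		mask |= BIT(ENG_ACL);
	return mask;
}

static bool
example_engine_usable(uint64_t disabled_engine_mask, enum example_engine e)
{
	return !ICE_FLOW_ENGINE_DISABLED(disabled_engine_mask, e);
}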
Signed-off-by: Qi Zhang --- drivers/net/ice/ice_acl_filter.c | 3 --- drivers/net/ice/ice_dcf_parent.c | 3 +++ drivers/net/ice/ice_ethdev.c | 1 + drivers/net/ice/ice_fdir_filter.c | 3 --- drivers/net/ice/ice_hash.c | 3 --- 5 files changed, 4 insertions(+), 9 deletions(-) diff --git a/drivers/net/ice/ice_acl_filter.c b/drivers/net/ice/ice_acl_filter.c index f2ddbd7b9b..51f4feced4 100644 --- a/drivers/net/ice/ice_acl_filter.c +++ b/drivers/net/ice/ice_acl_filter.c @@ -995,9 +995,6 @@ ice_acl_init(struct ice_adapter *ad) struct ice_hw *hw = ICE_PF_TO_HW(pf); struct ice_flow_parser *parser = &ice_acl_parser; - if (!ad->hw.dcf_enabled) - return 0; - ret = ice_acl_prof_alloc(hw); if (ret) { PMD_DRV_LOG(ERR, "Cannot allocate memory for " diff --git a/drivers/net/ice/ice_dcf_parent.c b/drivers/net/ice/ice_dcf_parent.c index 173ed9f81d..6e845f458a 100644 --- a/drivers/net/ice/ice_dcf_parent.c +++ b/drivers/net/ice/ice_dcf_parent.c @@ -474,6 +474,9 @@ ice_dcf_init_parent_adapter(struct rte_eth_dev *eth_dev) if (ice_devargs_check(eth_dev->device->devargs, ICE_DCF_DEVARG_ACL)) parent_adapter->disabled_engine_mask |= BIT(ICE_FLOW_ENGINE_ACL); + parent_adapter->disabled_engine_mask |= BIT(ICE_FLOW_ENGINE_FDIR); + parent_adapter->disabled_engine_mask |= BIT(ICE_FLOW_ENGINE_HASH); + err = ice_flow_init(parent_adapter); if (err) { PMD_INIT_LOG(ERR, "Failed to initialize flow"); diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c index 036b068c22..f744bde8f4 100644 --- a/drivers/net/ice/ice_ethdev.c +++ b/drivers/net/ice/ice_ethdev.c @@ -2442,6 +2442,7 @@ ice_dev_init(struct rte_eth_dev *dev) } if (!ad->is_safe_mode) { + ad->disabled_engine_mask |= BIT(ICE_FLOW_ENGINE_ACL); ret = ice_flow_init(ad); if (ret) { PMD_INIT_LOG(ERR, "Failed to initialize flow"); diff --git a/drivers/net/ice/ice_fdir_filter.c b/drivers/net/ice/ice_fdir_filter.c index e9ee5a57d6..bc43883a92 100644 --- a/drivers/net/ice/ice_fdir_filter.c +++ b/drivers/net/ice/ice_fdir_filter.c @@ -1150,9 +1150,6 @@ ice_fdir_init(struct ice_adapter *ad) struct ice_flow_parser *parser; int ret; - if (ad->hw.dcf_enabled) - return 0; - ret = ice_fdir_setup(pf); if (ret) return ret; diff --git a/drivers/net/ice/ice_hash.c b/drivers/net/ice/ice_hash.c index e36e7da2b5..37bee808c6 100644 --- a/drivers/net/ice/ice_hash.c +++ b/drivers/net/ice/ice_hash.c @@ -591,9 +591,6 @@ ice_hash_init(struct ice_adapter *ad) { struct ice_flow_parser *parser = NULL; - if (ad->hw.dcf_enabled) - return 0; - parser = &ice_hash_parser; return ice_register_parser(parser, ad); From patchwork Tue Sep 26 11:29:29 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Qi Zhang X-Patchwork-Id: 131912 X-Patchwork-Delegate: qi.z.zhang@intel.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 0E9524263C; Tue, 26 Sep 2023 05:09:43 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 80D6D402E0; Tue, 26 Sep 2023 05:09:30 +0200 (CEST) Received: from mgamail.intel.com (mgamail.intel.com [134.134.136.24]) by mails.dpdk.org (Postfix) with ESMTP id 4BA5E402DC for ; Tue, 26 Sep 2023 05:09:27 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1695697767; x=1727233767; h=from:to:cc:subject:date:message-id:in-reply-to: 
From: Qi Zhang <qi.z.zhang@intel.com>
To: qiming.yang@intel.com
Cc: zhichaox.zeng@intel.com, dev@dpdk.org, Qi Zhang <qi.z.zhang@intel.com>
Subject: [PATCH v5 3/5] net/ice: map group to pipeline stage
Date: Tue, 26 Sep 2023 07:29:29 -0400
Message-Id: <20230926112931.4191107-4-qi.z.zhang@intel.com>
In-Reply-To: <20230926112931.4191107-1-qi.z.zhang@intel.com>
References: <20230814202616.3346652-1-qi.z.zhang@intel.com>
 <20230926112931.4191107-1-qi.z.zhang@intel.com>

Map rte_flow_attr->group to a specific hardware stage:

Group 0 -> switch filter
Group 1 -> ACL filter (DCF mode only)
Group 2 -> FDIR filter (PF mode only)

RSS is selected only when an RTE_FLOW_ACTION_RSS action does not target
a queue group; in that case the group ID is ignored.

Since each flow parser is now selected directly from the group, there is
no need to maintain separate parser lists or the related APIs for
registering/unregistering parsers.
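From the application's point of view, the engine is now chosen purely by
the ``group`` attribute. A small illustrative sketch, not part of the
patch, of a blocklist-style rule placed in group 0 and therefore handled
by the switch engine (the matched address is a placeholder):

#include <rte_flow.h>
#include <rte_ip.h>
#include <rte_byteorder.h>

/* Illustrative only: drop traffic from one IPv4 source via the switch
 * engine by placing the rule in group 0 (group 1 = ACL, group 2 = FDIR).
 */
static struct rte_flow *
example_block_source(uint16_t port_id, struct rte_flow_error *error)
{
	struct rte_flow_attr attr = {
		.ingress = 1,
		.group = 0,	/* switch engine */
	};
	struct rte_flow_item_ipv4 ip_spec = {
		.hdr.src_addr = RTE_BE32(RTE_IPV4(10, 0, 0, 1)),
	};
	struct rte_flow_item_ipv4 ip_mask = {
		.hdr.src_addr = RTE_BE32(0xffffffff),
	};
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH },
		{ .type = RTE_FLOW_ITEM_TYPE_IPV4,
		  .spec = &ip_spec, .mask = &ip_mask },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_DROP },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};

	return rte_flow_create(port_id, &attr, pattern, actions, error);
}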
Signed-off-by: Qi Zhang --- drivers/net/ice/ice_acl_filter.c | 13 +- drivers/net/ice/ice_ethdev.h | 2 - drivers/net/ice/ice_fdir_filter.c | 19 +-- drivers/net/ice/ice_generic_flow.c | 242 +++++++++------------------- drivers/net/ice/ice_generic_flow.h | 9 +- drivers/net/ice/ice_hash.c | 16 +- drivers/net/ice/ice_switch_filter.c | 13 +- 7 files changed, 91 insertions(+), 223 deletions(-) diff --git a/drivers/net/ice/ice_acl_filter.c b/drivers/net/ice/ice_acl_filter.c index 51f4feced4..e507bb927a 100644 --- a/drivers/net/ice/ice_acl_filter.c +++ b/drivers/net/ice/ice_acl_filter.c @@ -41,8 +41,6 @@ ICE_ACL_INSET_ETH_IPV4 | \ ICE_INSET_SCTP_SRC_PORT | ICE_INSET_SCTP_DST_PORT) -static struct ice_flow_parser ice_acl_parser; - struct acl_rule { enum ice_fltr_ptype flow_type; uint64_t entry_id[4]; @@ -993,7 +991,6 @@ ice_acl_init(struct ice_adapter *ad) int ret = 0; struct ice_pf *pf = &ad->pf; struct ice_hw *hw = ICE_PF_TO_HW(pf); - struct ice_flow_parser *parser = &ice_acl_parser; ret = ice_acl_prof_alloc(hw); if (ret) { @@ -1010,11 +1007,7 @@ ice_acl_init(struct ice_adapter *ad) if (ret) return ret; - ret = ice_acl_prof_init(pf); - if (ret) - return ret; - - return ice_register_parser(parser, ad); + return ice_acl_prof_init(pf); } static void @@ -1037,10 +1030,8 @@ ice_acl_uninit(struct ice_adapter *ad) { struct ice_pf *pf = &ad->pf; struct ice_hw *hw = ICE_PF_TO_HW(pf); - struct ice_flow_parser *parser = &ice_acl_parser; if (ad->hw.dcf_enabled) { - ice_unregister_parser(parser, ad); ice_deinit_acl(pf); ice_acl_prof_free(hw); } @@ -1056,7 +1047,7 @@ ice_flow_engine ice_acl_engine = { .type = ICE_FLOW_ENGINE_ACL, }; -static struct +struct ice_flow_parser ice_acl_parser = { .engine = &ice_acl_engine, .array = ice_acl_pattern, diff --git a/drivers/net/ice/ice_ethdev.h b/drivers/net/ice/ice_ethdev.h index 1f88becd19..abe6dcdc23 100644 --- a/drivers/net/ice/ice_ethdev.h +++ b/drivers/net/ice/ice_ethdev.h @@ -541,8 +541,6 @@ struct ice_pf { bool adapter_stopped; struct ice_flow_list flow_list; rte_spinlock_t flow_ops_lock; - struct ice_parser_list rss_parser_list; - struct ice_parser_list dist_parser_list; bool init_link_up; uint64_t old_rx_bytes; uint64_t old_tx_bytes; diff --git a/drivers/net/ice/ice_fdir_filter.c b/drivers/net/ice/ice_fdir_filter.c index bc43883a92..6afcdf5376 100644 --- a/drivers/net/ice/ice_fdir_filter.c +++ b/drivers/net/ice/ice_fdir_filter.c @@ -137,8 +137,6 @@ static struct ice_pattern_match_item ice_fdir_pattern_list[] = { {pattern_eth_ipv6_gtpu_eh, ICE_FDIR_INSET_IPV6_GTPU_EH, ICE_FDIR_INSET_IPV6_GTPU_EH, ICE_INSET_NONE}, }; -static struct ice_flow_parser ice_fdir_parser; - static int ice_fdir_is_tunnel_profile(enum ice_fdir_tunnel_type tunnel_type); @@ -1147,31 +1145,18 @@ static int ice_fdir_init(struct ice_adapter *ad) { struct ice_pf *pf = &ad->pf; - struct ice_flow_parser *parser; - int ret; - - ret = ice_fdir_setup(pf); - if (ret) - return ret; - - parser = &ice_fdir_parser; - return ice_register_parser(parser, ad); + return ice_fdir_setup(pf); } static void ice_fdir_uninit(struct ice_adapter *ad) { - struct ice_flow_parser *parser; struct ice_pf *pf = &ad->pf; if (ad->hw.dcf_enabled) return; - parser = &ice_fdir_parser; - - ice_unregister_parser(parser, ad); - ice_fdir_teardown(pf); } @@ -2507,7 +2492,7 @@ ice_fdir_parse(struct ice_adapter *ad, return ret; } -static struct ice_flow_parser ice_fdir_parser = { +struct ice_flow_parser ice_fdir_parser = { .engine = &ice_fdir_engine, .array = ice_fdir_pattern_list, .array_len = RTE_DIM(ice_fdir_pattern_list), diff --git 
a/drivers/net/ice/ice_generic_flow.c b/drivers/net/ice/ice_generic_flow.c index 6695457bbd..50d760004f 100644 --- a/drivers/net/ice/ice_generic_flow.c +++ b/drivers/net/ice/ice_generic_flow.c @@ -1793,15 +1793,13 @@ enum rte_flow_item_type pattern_eth_ipv6_pfcp[] = { RTE_FLOW_ITEM_TYPE_END, }; - - -typedef struct ice_flow_engine * (*parse_engine_t)(struct ice_adapter *ad, - struct rte_flow *flow, - struct ice_parser_list *parser_list, - uint32_t priority, - const struct rte_flow_item pattern[], - const struct rte_flow_action actions[], - struct rte_flow_error *error); +typedef bool (*parse_engine_t)(struct ice_adapter *ad, + struct rte_flow *flow, + struct ice_flow_parser *parser, + uint32_t priority, + const struct rte_flow_item pattern[], + const struct rte_flow_action actions[], + struct rte_flow_error *error); void ice_register_flow_engine(struct ice_flow_engine *engine) @@ -1818,8 +1816,6 @@ ice_flow_init(struct ice_adapter *ad) struct ice_flow_engine *engine; TAILQ_INIT(&pf->flow_list); - TAILQ_INIT(&pf->rss_parser_list); - TAILQ_INIT(&pf->dist_parser_list); rte_spinlock_init(&pf->flow_ops_lock); if (ice_parser_create(&ad->hw, &ad->psr) != ICE_SUCCESS) @@ -1860,7 +1856,6 @@ ice_flow_uninit(struct ice_adapter *ad) struct ice_pf *pf = &ad->pf; struct ice_flow_engine *engine; struct rte_flow *p_flow; - struct ice_flow_parser_node *p_parser; void *temp; RTE_TAILQ_FOREACH_SAFE(engine, &engine_list, node, temp) { @@ -1881,117 +1876,12 @@ ice_flow_uninit(struct ice_adapter *ad) rte_free(p_flow); } - /* Cleanup parser list */ - while ((p_parser = TAILQ_FIRST(&pf->rss_parser_list))) { - TAILQ_REMOVE(&pf->rss_parser_list, p_parser, node); - rte_free(p_parser); - } - - while ((p_parser = TAILQ_FIRST(&pf->dist_parser_list))) { - TAILQ_REMOVE(&pf->dist_parser_list, p_parser, node); - rte_free(p_parser); - } - if (ad->psr != NULL) { ice_parser_destroy(ad->psr); ad->psr = NULL; } } -static struct ice_parser_list * -ice_get_parser_list(struct ice_flow_parser *parser, - struct ice_adapter *ad) -{ - struct ice_parser_list *list; - struct ice_pf *pf = &ad->pf; - - switch (parser->stage) { - case ICE_FLOW_STAGE_RSS: - list = &pf->rss_parser_list; - break; - case ICE_FLOW_STAGE_DISTRIBUTOR: - list = &pf->dist_parser_list; - break; - default: - return NULL; - } - - return list; -} - -int -ice_register_parser(struct ice_flow_parser *parser, - struct ice_adapter *ad) -{ - struct ice_parser_list *list; - struct ice_flow_parser_node *parser_node; - struct ice_flow_parser_node *existing_node; - void *temp; - - parser_node = rte_zmalloc("ice_parser", sizeof(*parser_node), 0); - if (parser_node == NULL) { - PMD_DRV_LOG(ERR, "Failed to allocate memory."); - return -ENOMEM; - } - parser_node->parser = parser; - - list = ice_get_parser_list(parser, ad); - if (list == NULL) - return -EINVAL; - - if (parser->engine->type == ICE_FLOW_ENGINE_SWITCH) { - RTE_TAILQ_FOREACH_SAFE(existing_node, list, - node, temp) { - if (existing_node->parser->engine->type == - ICE_FLOW_ENGINE_ACL) { - TAILQ_INSERT_AFTER(list, existing_node, - parser_node, node); - goto DONE; - } - } - TAILQ_INSERT_HEAD(list, parser_node, node); - } else if (parser->engine->type == ICE_FLOW_ENGINE_FDIR) { - RTE_TAILQ_FOREACH_SAFE(existing_node, list, - node, temp) { - if (existing_node->parser->engine->type == - ICE_FLOW_ENGINE_SWITCH) { - TAILQ_INSERT_AFTER(list, existing_node, - parser_node, node); - goto DONE; - } - } - TAILQ_INSERT_HEAD(list, parser_node, node); - } else if (parser->engine->type == ICE_FLOW_ENGINE_HASH) { - TAILQ_INSERT_TAIL(list, 
parser_node, node); - } else if (parser->engine->type == ICE_FLOW_ENGINE_ACL) { - TAILQ_INSERT_HEAD(list, parser_node, node); - } else { - return -EINVAL; - } -DONE: - return 0; -} - -void -ice_unregister_parser(struct ice_flow_parser *parser, - struct ice_adapter *ad) -{ - struct ice_parser_list *list; - struct ice_flow_parser_node *p_parser; - void *temp; - - list = ice_get_parser_list(parser, ad); - if (list == NULL) - return; - - RTE_TAILQ_FOREACH_SAFE(p_parser, list, node, temp) { - if (p_parser->parser->engine->type == parser->engine->type) { - TAILQ_REMOVE(list, p_parser, node); - rte_free(p_parser); - } - } -} - static int ice_flow_valid_attr(const struct rte_flow_attr *attr, struct rte_flow_error *error) @@ -2027,14 +1917,6 @@ ice_flow_valid_attr(const struct rte_flow_attr *attr, return -rte_errno; } - /* Not supported */ - if (attr->group) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ATTR_GROUP, - attr, "Not support group."); - return -rte_errno; - } - return 0; } @@ -2296,64 +2178,73 @@ ice_search_pattern_match_item(struct ice_adapter *ad, return NULL; } -static struct ice_flow_engine * +static bool ice_parse_engine_create(struct ice_adapter *ad, struct rte_flow *flow, - struct ice_parser_list *parser_list, + struct ice_flow_parser *parser, uint32_t priority, const struct rte_flow_item pattern[], const struct rte_flow_action actions[], struct rte_flow_error *error) { - struct ice_flow_engine *engine = NULL; - struct ice_flow_parser_node *parser_node; void *meta = NULL; - void *temp; - RTE_TAILQ_FOREACH_SAFE(parser_node, parser_list, node, temp) { - int ret; + if (ICE_FLOW_ENGINE_DISABLED(ad->disabled_engine_mask, + parser->engine->type)) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, "engine is not enabled."); + return false; + } - if (parser_node->parser->parse_pattern_action(ad, - parser_node->parser->array, - parser_node->parser->array_len, - pattern, actions, priority, &meta, error) < 0) - continue; + if (parser->parse_pattern_action(ad, + parser->array, + parser->array_len, + pattern, actions, priority, &meta, error) < 0) + return false; - engine = parser_node->parser->engine; - RTE_ASSERT(engine->create != NULL); - ret = engine->create(ad, flow, meta, error); - if (ret == 0) - return engine; - else if (ret == -EEXIST) - return NULL; - } - return NULL; + RTE_ASSERT(parser->engine->create != NULL); + + return parser->engine->create(ad, flow, meta, error) == 0; } -static struct ice_flow_engine * +static bool ice_parse_engine_validate(struct ice_adapter *ad, struct rte_flow *flow __rte_unused, - struct ice_parser_list *parser_list, + struct ice_flow_parser *parser, uint32_t priority, const struct rte_flow_item pattern[], const struct rte_flow_action actions[], struct rte_flow_error *error) { - struct ice_flow_engine *engine = NULL; - struct ice_flow_parser_node *parser_node; - void *temp; - RTE_TAILQ_FOREACH_SAFE(parser_node, parser_list, node, temp) { - if (parser_node->parser->parse_pattern_action(ad, - parser_node->parser->array, - parser_node->parser->array_len, - pattern, actions, priority, NULL, error) < 0) - continue; + if (ICE_FLOW_ENGINE_DISABLED(ad->disabled_engine_mask, + parser->engine->type)) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, "engine is not enabled."); + return false; + } - engine = parser_node->parser->engine; - break; + return parser->parse_pattern_action(ad, + parser->array, + parser->array_len, + pattern, actions, priority, + NULL, error) >= 0; +} + +static struct 
ice_flow_parser *get_flow_parser(uint32_t group) +{ + switch (group) { + case 0: + return &ice_switch_parser; + case 1: + return &ice_acl_parser; + case 2: + return &ice_fdir_parser; + default: + return NULL; } - return engine; } static int @@ -2369,7 +2260,7 @@ ice_flow_process_filter(struct rte_eth_dev *dev, int ret = ICE_ERR_NOT_SUPPORTED; struct ice_adapter *ad = ICE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private); - struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private); + struct ice_flow_parser *parser; if (!pattern) { rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM_NUM, @@ -2395,17 +2286,30 @@ ice_flow_process_filter(struct rte_eth_dev *dev, if (ret) return ret; - *engine = ice_parse_engine(ad, flow, &pf->rss_parser_list, - attr->priority, pattern, actions, error); - if (*engine != NULL) + *engine = NULL; + /* always try hash engine first */ + if (ice_parse_engine(ad, flow, &ice_hash_parser, + attr->priority, pattern, + actions, error)) { + *engine = ice_hash_parser.engine; return 0; + } - *engine = ice_parse_engine(ad, flow, &pf->dist_parser_list, - attr->priority, pattern, actions, error); - if (*engine == NULL) - return -EINVAL; + parser = get_flow_parser(attr->group); + if (parser == NULL) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ATTR, + NULL, "NULL attribute."); + return -rte_errno; + } - return 0; + if (ice_parse_engine(ad, flow, parser, attr->priority, + pattern, actions, error)) { + *engine = parser->engine; + return 0; + } else { + return -rte_errno; + } } static int diff --git a/drivers/net/ice/ice_generic_flow.h b/drivers/net/ice/ice_generic_flow.h index 471f255bd6..391d615b9a 100644 --- a/drivers/net/ice/ice_generic_flow.h +++ b/drivers/net/ice/ice_generic_flow.h @@ -515,10 +515,6 @@ struct ice_flow_parser_node { void ice_register_flow_engine(struct ice_flow_engine *engine); int ice_flow_init(struct ice_adapter *ad); void ice_flow_uninit(struct ice_adapter *ad); -int ice_register_parser(struct ice_flow_parser *parser, - struct ice_adapter *ad); -void ice_unregister_parser(struct ice_flow_parser *parser, - struct ice_adapter *ad); struct ice_pattern_match_item * ice_search_pattern_match_item(struct ice_adapter *ad, const struct rte_flow_item pattern[], @@ -528,4 +524,9 @@ ice_search_pattern_match_item(struct ice_adapter *ad, int ice_flow_redirect(struct ice_adapter *ad, struct ice_flow_redirect *rd); + +extern struct ice_flow_parser ice_switch_parser; +extern struct ice_flow_parser ice_acl_parser; +extern struct ice_flow_parser ice_fdir_parser; +extern struct ice_flow_parser ice_hash_parser; #endif diff --git a/drivers/net/ice/ice_hash.c b/drivers/net/ice/ice_hash.c index 37bee808c6..f923641533 100644 --- a/drivers/net/ice/ice_hash.c +++ b/drivers/net/ice/ice_hash.c @@ -572,7 +572,7 @@ static struct ice_flow_engine ice_hash_engine = { }; /* Register parser for os package. 
*/ -static struct ice_flow_parser ice_hash_parser = { +struct ice_flow_parser ice_hash_parser = { .engine = &ice_hash_engine, .array = ice_hash_pattern_list, .array_len = RTE_DIM(ice_hash_pattern_list), @@ -587,13 +587,9 @@ RTE_INIT(ice_hash_engine_init) } static int -ice_hash_init(struct ice_adapter *ad) +ice_hash_init(struct ice_adapter *ad __rte_unused) { - struct ice_flow_parser *parser = NULL; - - parser = &ice_hash_parser; - - return ice_register_parser(parser, ad); + return 0; } static int @@ -1439,12 +1435,8 @@ ice_hash_destroy(struct ice_adapter *ad, } static void -ice_hash_uninit(struct ice_adapter *ad) +ice_hash_uninit(struct ice_adapter *ad __rte_unused) { - if (ad->hw.dcf_enabled) - return; - - ice_unregister_parser(&ice_hash_parser, ad); } static void diff --git a/drivers/net/ice/ice_switch_filter.c b/drivers/net/ice/ice_switch_filter.c index 88d599068f..8f29326762 100644 --- a/drivers/net/ice/ice_switch_filter.c +++ b/drivers/net/ice/ice_switch_filter.c @@ -201,8 +201,6 @@ struct ice_switch_filter_conf { struct ice_adv_rule_info rule_info; }; -static struct ice_flow_parser ice_switch_dist_parser; - static struct ice_pattern_match_item ice_switch_pattern_dist_list[] = { {pattern_any, ICE_INSET_NONE, ICE_INSET_NONE, ICE_INSET_NONE}, @@ -2052,15 +2050,14 @@ ice_switch_redirect(struct ice_adapter *ad, } static int -ice_switch_init(struct ice_adapter *ad) +ice_switch_init(struct ice_adapter *ad __rte_unused) { - return ice_register_parser(&ice_switch_dist_parser, ad); + return 0; } static void -ice_switch_uninit(struct ice_adapter *ad) +ice_switch_uninit(struct ice_adapter *ad __rte_unused) { - ice_unregister_parser(&ice_switch_dist_parser, ad); } static struct @@ -2075,8 +2072,8 @@ ice_flow_engine ice_switch_engine = { .type = ICE_FLOW_ENGINE_SWITCH, }; -static struct -ice_flow_parser ice_switch_dist_parser = { +struct +ice_flow_parser ice_switch_parser = { .engine = &ice_switch_engine, .array = ice_switch_pattern_dist_list, .array_len = RTE_DIM(ice_switch_pattern_dist_list), From patchwork Tue Sep 26 11:29:30 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Qi Zhang X-Patchwork-Id: 131914 X-Patchwork-Delegate: qi.z.zhang@intel.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 4ECDA4263C; Tue, 26 Sep 2023 05:09:54 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id E515440697; Tue, 26 Sep 2023 05:09:33 +0200 (CEST) Received: from mgamail.intel.com (mgamail.intel.com [134.134.136.24]) by mails.dpdk.org (Postfix) with ESMTP id 8A1E8402E7 for ; Tue, 26 Sep 2023 05:09:31 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1695697771; x=1727233771; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=x2T9vQG7QdB8WiMnx1MivUW/YuJK6iGz9eIAD6zEP8w=; b=FML2XPOPJvcQwbt8b4sPPFbXpxmQ9UNmXQH5DgY3938CisUDg4EKSPB8 D5aCtSq0F3+qbML9iyk4WyZxuKDpNYAiwzZmHYQxQKwnBBg3KZtaMReqg +lijhPJ9O0xXvABVBjr3nLtx7En/u+xN0xxDMsnlETDJLvJPI2CMuogPl XiAe3yCFJKNUnj2zaG0478pjMqpXKVnGD7aFOgsBQZaimU5YtNhH75Pnz CMa9iZfsGzCJyLwnkVX7fjglEF+AnIf+vI5BApCDvafQAfH65owL3v6zP G0l4RfYLBdMECCtae7U61dmgEXGuRmWXIuD+lVDo2dTQp/t6euDkPruqV w==; X-IronPort-AV: E=McAfee;i="6600,9927,10843"; a="384246966" X-IronPort-AV: 
E=Sophos;i="6.03,176,1694761200"; d="scan'208";a="384246966" Received: from fmsmga004.fm.intel.com ([10.253.24.48]) by orsmga102.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 25 Sep 2023 20:09:30 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10843"; a="818878531" X-IronPort-AV: E=Sophos;i="6.03,176,1694761200"; d="scan'208";a="818878531" Received: from dpdk-qzhan15-test02.sh.intel.com ([10.67.115.37]) by fmsmga004.fm.intel.com with ESMTP; 25 Sep 2023 20:09:27 -0700 From: Qi Zhang To: qiming.yang@intel.com Cc: zhichaox.zeng@intel.com, dev@dpdk.org, Qi Zhang Subject: [PATCH v5 4/5] net/ice: refine supported flow pattern name Date: Tue, 26 Sep 2023 07:29:30 -0400 Message-Id: <20230926112931.4191107-5-qi.z.zhang@intel.com> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20230926112931.4191107-1-qi.z.zhang@intel.com> References: <20230814202616.3346652-1-qi.z.zhang@intel.com> <20230926112931.4191107-1-qi.z.zhang@intel.com> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Unified the supported pattern array name as ice__supported_pattern. Signed-off-by: Qi Zhang --- drivers/net/ice/ice_acl_filter.c | 6 +++--- drivers/net/ice/ice_fdir_filter.c | 6 +++--- drivers/net/ice/ice_switch_filter.c | 6 +++--- 3 files changed, 9 insertions(+), 9 deletions(-) diff --git a/drivers/net/ice/ice_acl_filter.c b/drivers/net/ice/ice_acl_filter.c index e507bb927a..63a525b363 100644 --- a/drivers/net/ice/ice_acl_filter.c +++ b/drivers/net/ice/ice_acl_filter.c @@ -47,7 +47,7 @@ struct acl_rule { }; static struct -ice_pattern_match_item ice_acl_pattern[] = { +ice_pattern_match_item ice_acl_supported_pattern[] = { {pattern_eth_ipv4, ICE_ACL_INSET_ETH_IPV4, ICE_INSET_NONE, ICE_INSET_NONE}, {pattern_eth_ipv4_udp, ICE_ACL_INSET_ETH_IPV4_UDP, ICE_INSET_NONE, ICE_INSET_NONE}, {pattern_eth_ipv4_tcp, ICE_ACL_INSET_ETH_IPV4_TCP, ICE_INSET_NONE, ICE_INSET_NONE}, @@ -1050,8 +1050,8 @@ ice_flow_engine ice_acl_engine = { struct ice_flow_parser ice_acl_parser = { .engine = &ice_acl_engine, - .array = ice_acl_pattern, - .array_len = RTE_DIM(ice_acl_pattern), + .array = ice_acl_supported_pattern, + .array_len = RTE_DIM(ice_acl_supported_pattern), .parse_pattern_action = ice_acl_parse, .stage = ICE_FLOW_STAGE_DISTRIBUTOR, }; diff --git a/drivers/net/ice/ice_fdir_filter.c b/drivers/net/ice/ice_fdir_filter.c index 6afcdf5376..0b7920ad44 100644 --- a/drivers/net/ice/ice_fdir_filter.c +++ b/drivers/net/ice/ice_fdir_filter.c @@ -106,7 +106,7 @@ ICE_INSET_IPV6_SRC | ICE_INSET_IPV6_DST | \ ICE_INSET_NAT_T_ESP_SPI) -static struct ice_pattern_match_item ice_fdir_pattern_list[] = { +static struct ice_pattern_match_item ice_fdir_supported_pattern[] = { {pattern_raw, ICE_INSET_NONE, ICE_INSET_NONE, ICE_INSET_NONE}, {pattern_ethertype, ICE_FDIR_INSET_ETH, ICE_INSET_NONE, ICE_INSET_NONE}, {pattern_eth_ipv4, ICE_FDIR_INSET_ETH_IPV4, ICE_INSET_NONE, ICE_INSET_NONE}, @@ -2494,8 +2494,8 @@ ice_fdir_parse(struct ice_adapter *ad, struct ice_flow_parser ice_fdir_parser = { .engine = &ice_fdir_engine, - .array = ice_fdir_pattern_list, - .array_len = RTE_DIM(ice_fdir_pattern_list), + .array = ice_fdir_supported_pattern, + .array_len = RTE_DIM(ice_fdir_supported_pattern), .parse_pattern_action = ice_fdir_parse, .stage = ICE_FLOW_STAGE_DISTRIBUTOR, }; diff --git a/drivers/net/ice/ice_switch_filter.c b/drivers/net/ice/ice_switch_filter.c index 
8f29326762..122b87f625 100644 --- a/drivers/net/ice/ice_switch_filter.c +++ b/drivers/net/ice/ice_switch_filter.c @@ -202,7 +202,7 @@ struct ice_switch_filter_conf { }; static struct -ice_pattern_match_item ice_switch_pattern_dist_list[] = { +ice_pattern_match_item ice_switch_supported_pattern[] = { {pattern_any, ICE_INSET_NONE, ICE_INSET_NONE, ICE_INSET_NONE}, {pattern_ethertype, ICE_SW_INSET_ETHER, ICE_INSET_NONE, ICE_INSET_NONE}, {pattern_ethertype_vlan, ICE_SW_INSET_MAC_VLAN, ICE_INSET_NONE, ICE_INSET_NONE}, @@ -2075,8 +2075,8 @@ ice_flow_engine ice_switch_engine = { struct ice_flow_parser ice_switch_parser = { .engine = &ice_switch_engine, - .array = ice_switch_pattern_dist_list, - .array_len = RTE_DIM(ice_switch_pattern_dist_list), + .array = ice_switch_supported_pattern, + .array_len = RTE_DIM(ice_switch_supported_pattern), .parse_pattern_action = ice_switch_parse_pattern_action, .stage = ICE_FLOW_STAGE_DISTRIBUTOR, }; From patchwork Tue Sep 26 11:29:31 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Qi Zhang X-Patchwork-Id: 131913 X-Patchwork-Delegate: qi.z.zhang@intel.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 3ADD34263C; Tue, 26 Sep 2023 05:09:49 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id B43654067B; Tue, 26 Sep 2023 05:09:32 +0200 (CEST) Received: from mgamail.intel.com (mgamail.intel.com [134.134.136.24]) by mails.dpdk.org (Postfix) with ESMTP id EC44A402E7 for ; Tue, 26 Sep 2023 05:09:30 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1695697771; x=1727233771; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=hRJ87j7JhZJmVzcXpMOOQmLoDs4hIPpXgRWI+H/LEHU=; b=dIlnfYf6tq2FLZ6TTZ/6iUTE6MbbkwsQXfgI/adLCFrK75xIVtO0gaB/ wiJtfjXtAJDa6e3U44ks5K8Lhm2qrMzO8a/mJIoM71F8LGIxi+rlGKqTn weUBGkzUpQsxldLhpXCg5EegZKDZ831+YAKk7m1feAtbB0G2rSkrTlVkK S7j2alo0u/R5veCVNlsKEuP3I5Mbr8EzHb7/g6ueTp+iCy6K5+QtNrDwI k/MV5AdOFpBpPR0Bda0zqNvwKl0Vq09b8aD6TQUVpnq1IUwUr05zJRU1G YRfAGdJpMEp3ndQGmaWjZTwaaftii++EewDg37PN9Y+HvhjcXA6F1oo5n Q==; X-IronPort-AV: E=McAfee;i="6600,9927,10843"; a="384246963" X-IronPort-AV: E=Sophos;i="6.03,176,1694761200"; d="scan'208";a="384246963" Received: from fmsmga004.fm.intel.com ([10.253.24.48]) by orsmga102.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 25 Sep 2023 20:09:30 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10843"; a="818878540" X-IronPort-AV: E=Sophos;i="6.03,176,1694761200"; d="scan'208";a="818878540" Received: from dpdk-qzhan15-test02.sh.intel.com ([10.67.115.37]) by fmsmga004.fm.intel.com with ESMTP; 25 Sep 2023 20:09:28 -0700 From: Qi Zhang To: qiming.yang@intel.com Cc: zhichaox.zeng@intel.com, dev@dpdk.org, Qi Zhang Subject: [PATCH v5 5/5] doc: add generic flow doc for ice PMD Date: Tue, 26 Sep 2023 07:29:31 -0400 Message-Id: <20230926112931.4191107-6-qi.z.zhang@intel.com> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20230926112931.4191107-1-qi.z.zhang@intel.com> References: <20230814202616.3346652-1-qi.z.zhang@intel.com> <20230926112931.4191107-1-qi.z.zhang@intel.com> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: 
Add documentation about how to use rte_flow with the ice PMD.

Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
---
 doc/guides/nics/ice.rst | 45 +++++++++++++++++++++++++++++++++++++++++
 1 file changed, 45 insertions(+)

diff --git a/doc/guides/nics/ice.rst b/doc/guides/nics/ice.rst
index 5a47109c3f..b36a4c260a 100644
--- a/doc/guides/nics/ice.rst
+++ b/doc/guides/nics/ice.rst
@@ -301,6 +301,51 @@ The DCF PMD needs to advertise and acquire DCF capability which allows DCF
 to send AdminQ commands that it would like to execute over to the PF and
 receive responses for the same from PF.
 
+Generic Flow Support
+~~~~~~~~~~~~~~~~~~~~
+
+The ice PMD provides support for the Generic Flow API (RTE_FLOW), enabling
+users to offload various flow classification tasks to the E810 NIC.
+The E810 NIC's packet processing pipeline consists of the following stages:
+
+Switch: Supports exact match and limited wildcard matching with a large flow
+capacity.
+
+ACL: Supports wildcard matching with a smaller flow capacity (DCF mode only).
+
+FDIR: Supports exact match with a large flow capacity (PF mode only).
+
+Hash: Supports RSS (PF mode only).
+
+The ice PMD uses the ice_flow_engine structure to represent each of these
+stages and uses the rte_flow rule's ``group`` attribute to select the
+appropriate engine for Switch, ACL, and FDIR operations:
+
+Group 0 maps to Switch
+Group 1 maps to ACL
+Group 2 maps to FDIR
+
+RSS is selected only when an ``RTE_FLOW_ACTION_RSS`` action does not target
+a queue group; in that case the group attribute is ignored.
+
+For each engine, the list of supported patterns is maintained in a global
+array named ``ice_<engine>_supported_pattern``. The ice PMD rejects any rule
+whose pattern is not included in the supported list.
+
+One notable feature is the ice PMD's ability to use the raw pattern,
+enabling protocol-agnostic flow offloading. Here is an example of creating
+a rule that matches an IPv4 destination address of 1.2.3.4 and redirects it
+to queue 3 using a raw pattern::
+
+  flow create 0 ingress group 2 pattern raw \
+  pattern spec \
+  00000000000000000000000008004500001400004000401000000000000001020304 \
+  pattern mask \
+  000000000000000000000000000000000000000000000000000000000000ffffffff \
+  end actions queue index 3 / mark id 3 / end
+
+Currently, raw pattern support is limited to the FDIR and Hash engines.
+
 Additional Options
 ++++++++++++++++++
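As a closing note on the raw-pattern example in the documentation above:
the spec string is simply a flattened Ethernet plus IPv4 header, and the
mask keeps only its last four bytes. An annotated C view of the same 34
bytes (illustrative; the array name is arbitrary):

#include <stdint.h>

/* The 34-byte raw spec from the doc example, annotated. The mask keeps
 * only the last four bytes, i.e. the IPv4 destination address 1.2.3.4.
 */
static const uint8_t example_raw_spec[] = {
	0x00, 0x00, 0x00, 0x00, 0x00, 0x00,	/* dst MAC (wildcarded)        */
	0x00, 0x00, 0x00, 0x00, 0x00, 0x00,	/* src MAC (wildcarded)        */
	0x08, 0x00,				/* EtherType: IPv4             */
	0x45, 0x00, 0x00, 0x14,			/* version/IHL, TOS, total len */
	0x00, 0x00, 0x40, 0x00,			/* identification, flags/frag  */
	0x40, 0x10, 0x00, 0x00,			/* TTL, protocol, checksum     */
	0x00, 0x00, 0x00, 0x00,			/* src IP (wildcarded)         */
	0x01, 0x02, 0x03, 0x04,			/* dst IP = 1.2.3.4 (matched)  */
};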