From patchwork Tue Jul 6 13:14:47 2021
X-Patchwork-Submitter: Shun Hao
X-Patchwork-Id: 95382
X-Patchwork-Delegate: rasland@nvidia.com
From: Shun Hao
To: , , , "Shahaf Shuler"
CC: , ,
Date: Tue, 6 Jul 2021 16:14:47 +0300
Message-ID: <20210706131450.30917-2-shunh@nvidia.com>
In-Reply-To: <20210706131450.30917-1-shunh@nvidia.com>
References: <20210706131450.30917-1-shunh@nvidia.com>
MIME-Version: 1.0
Subject: [dpdk-dev] [PATCH v1 1/4] net/mlx5: support meter action in meter policy
List-Id: DPDK patches and discussions
Sender: "dev"

This makes the meter policy support the meter action, so multiple meters can be chained as a meter hierarchy. Only a termination meter is allowed as the last meter in a hierarchy, and there are two cases:

1. The last meter has a non-RSS policy: the sub-policy and color rules can be created directly during each meter's policy creation.

2. The last meter has an RSS policy: no sub-policy/rules are created when the meter policy is created. Only when an RTE flow uses the meter hierarchy are all meters of the hierarchy iterated, creating the needed sub-policies and color rules for them.
Signed-off-by: Shun Hao Acked-by: Matan Azrad --- drivers/net/mlx5/mlx5.h | 12 ++ drivers/net/mlx5/mlx5_flow.c | 71 +++++--- drivers/net/mlx5/mlx5_flow.h | 5 + drivers/net/mlx5/mlx5_flow_dv.c | 270 ++++++++++++++++++++++++----- drivers/net/mlx5/mlx5_flow_meter.c | 43 ++++- 5 files changed, 332 insertions(+), 69 deletions(-) diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h index 0f4b239142..0c555f0b1f 100644 --- a/drivers/net/mlx5/mlx5.h +++ b/drivers/net/mlx5/mlx5.h @@ -677,6 +677,12 @@ struct mlx5_meter_policy_action_container { /* Jump/drop action per color. */ uint16_t queue; /* Queue action configuration. */ + struct { + uint32_t next_mtr_id; + /* The next meter id. */ + void *next_sub_policy; + /* Next meter's sub-policy. */ + }; }; }; @@ -694,6 +700,8 @@ struct mlx5_flow_meter_policy { /* Rule applies to transfer domain. */ uint32_t is_queue:1; /* Is queue action in policy table. */ + uint32_t is_hierarchy:1; + /* Is meter action in policy table. */ rte_spinlock_t sl; uint32_t ref_cnt; /* Use count. */ @@ -712,6 +720,7 @@ struct mlx5_flow_meter_policy { #define MLX5_MTR_SUB_POLICY_NUM_SHIFT 3 #define MLX5_MTR_SUB_POLICY_NUM_MASK 0x7 #define MLX5_MTRS_DEFAULT_RULE_PRIORITY 0xFFFF +#define MLX5_MTR_CHAIN_MAX_NUM 8 /* Flow meter default policy parameter structure. * Policy index 0 is reserved by default policy table. 
@@ -1669,6 +1678,9 @@ struct mlx5_flow_meter_policy *mlx5_flow_meter_policy_find (struct rte_eth_dev *dev, uint32_t policy_id, uint32_t *policy_idx); +struct mlx5_flow_meter_policy * +mlx5_flow_meter_hierarchy_get_final_policy(struct rte_eth_dev *dev, + struct mlx5_flow_meter_policy *policy); int mlx5_flow_meter_flush(struct rte_eth_dev *dev, struct rte_mtr_error *error); void mlx5_flow_meter_rxq_flush(struct rte_eth_dev *dev); diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c index c27f6197a0..6c4bfde098 100644 --- a/drivers/net/mlx5/mlx5_flow.c +++ b/drivers/net/mlx5/mlx5_flow.c @@ -3492,10 +3492,18 @@ flow_get_rss_action(struct rte_eth_dev *dev, const struct rte_flow_action_meter *mtr = actions->conf; fm = mlx5_flow_meter_find(priv, mtr->mtr_id, &mtr_idx); - if (fm) { + if (fm && !fm->def_policy) { policy = mlx5_flow_meter_policy_find(dev, fm->policy_id, NULL); - if (policy && policy->is_rss) + MLX5_ASSERT(policy); + if (policy->is_hierarchy) { + policy = + mlx5_flow_meter_hierarchy_get_final_policy(dev, + policy); + if (!policy) + return NULL; + } + if (policy->is_rss) rss = policy->act_cnt[RTE_COLOR_GREEN].rss->conf; } @@ -4564,8 +4572,8 @@ flow_create_split_inner(struct rte_eth_dev *dev, * Pointer to Ethernet device. * @param[in] flow * Parent flow structure pointer. - * @param[in] policy_id; - * Meter Policy id. + * @param wks + * Pointer to thread flow work space. * @param[in] attr * Flow rule attributes. 
* @param[in] items @@ -4579,31 +4587,22 @@ flow_create_split_inner(struct rte_eth_dev *dev, static struct mlx5_flow_meter_sub_policy * get_meter_sub_policy(struct rte_eth_dev *dev, struct rte_flow *flow, - uint32_t policy_id, + struct mlx5_flow_workspace *wks, const struct rte_flow_attr *attr, const struct rte_flow_item items[], struct rte_flow_error *error) { struct mlx5_flow_meter_policy *policy; + struct mlx5_flow_meter_policy *final_policy; struct mlx5_flow_meter_sub_policy *sub_policy = NULL; - policy = mlx5_flow_meter_policy_find(dev, policy_id, NULL); - if (!policy) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, - "Failed to find Meter Policy."); - goto exit; - } - if (policy->is_rss || - (policy->is_queue && - !policy->sub_policys[MLX5_MTR_DOMAIN_INGRESS][0]->rix_hrxq[0])) { - struct mlx5_flow_workspace *wks = - mlx5_flow_get_thread_workspace(); + policy = wks->policy; + final_policy = policy->is_hierarchy ? wks->final_policy : policy; + if (final_policy->is_rss || final_policy->is_queue) { struct mlx5_flow_rss_desc rss_desc_v[MLX5_MTR_RTE_COLORS]; struct mlx5_flow_rss_desc *rss_desc[MLX5_MTR_RTE_COLORS] = {0}; uint32_t i; - MLX5_ASSERT(wks); /** * This is a tmp dev_flow, * no need to register any matcher for it in translate. 
@@ -4613,9 +4612,9 @@ get_meter_sub_policy(struct rte_eth_dev *dev, struct mlx5_flow dev_flow = {0}; struct mlx5_flow_handle dev_handle = { {0} }; - if (policy->is_rss) { + if (final_policy->is_rss) { const void *rss_act = - policy->act_cnt[i].rss->conf; + final_policy->act_cnt[i].rss->conf; struct rte_flow_action rss_actions[2] = { [0] = { .type = RTE_FLOW_ACTION_TYPE_RSS, @@ -4656,7 +4655,7 @@ get_meter_sub_policy(struct rte_eth_dev *dev, rss_desc_v[i].key_len = 0; rss_desc_v[i].hash_fields = 0; rss_desc_v[i].queue = - &policy->act_cnt[i].queue; + &final_policy->act_cnt[i].queue; rss_desc_v[i].queue_num = 1; } rss_desc[i] = &rss_desc_v[i]; @@ -4696,8 +4695,8 @@ get_meter_sub_policy(struct rte_eth_dev *dev, * Pointer to Ethernet device. * @param[in] flow * Parent flow structure pointer. - * @param[in] fm - * Pointer to flow meter structure. + * @param wks + * Pointer to thread flow work space. * @param[in] attr * Flow rule attributes. * @param[in] items @@ -4721,7 +4720,7 @@ get_meter_sub_policy(struct rte_eth_dev *dev, static int flow_meter_split_prep(struct rte_eth_dev *dev, struct rte_flow *flow, - struct mlx5_flow_meter_info *fm, + struct mlx5_flow_workspace *wks, const struct rte_flow_attr *attr, const struct rte_flow_item items[], struct rte_flow_item sfx_items[], @@ -4732,6 +4731,7 @@ flow_meter_split_prep(struct rte_eth_dev *dev, struct rte_flow_error *error) { struct mlx5_priv *priv = dev->data->dev_private; + struct mlx5_flow_meter_info *fm = wks->fm; struct rte_flow_action *tag_action = NULL; struct rte_flow_item *tag_item; struct mlx5_rte_flow_action_set_tag *set_tag; @@ -4856,9 +4856,8 @@ flow_meter_split_prep(struct rte_eth_dev *dev, struct mlx5_flow_tbl_data_entry *tbl_data; if (!fm->def_policy) { - sub_policy = get_meter_sub_policy(dev, flow, - fm->policy_id, attr, - items, error); + sub_policy = get_meter_sub_policy(dev, flow, wks, + attr, items, error); if (!sub_policy) return -rte_errno; } else { @@ -5746,6 +5745,22 @@ 
flow_create_split_meter(struct rte_eth_dev *dev, } MLX5_ASSERT(wks); wks->fm = fm; + if (!fm->def_policy) { + wks->policy = mlx5_flow_meter_policy_find(dev, + fm->policy_id, + NULL); + MLX5_ASSERT(wks->policy); + if (wks->policy->is_hierarchy) { + wks->final_policy = + mlx5_flow_meter_hierarchy_get_final_policy(dev, + wks->policy); + if (!wks->final_policy) + return rte_flow_error_set(error, + EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, NULL, + "Failed to find terminal policy of hierarchy."); + } + } /* * If it isn't default-policy Meter, and * 1. There's no action in flow to change @@ -5776,7 +5791,7 @@ flow_create_split_meter(struct rte_eth_dev *dev, pre_actions = sfx_actions + 1; else pre_actions = sfx_actions + actions_n; - ret = flow_meter_split_prep(dev, flow, fm, &sfx_attr, + ret = flow_meter_split_prep(dev, flow, wks, &sfx_attr, items, sfx_items, actions, sfx_actions, pre_actions, (set_mtr_reg ? &mtr_flow_id : NULL), diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h index 2f2aa962f9..09d6d609db 100644 --- a/drivers/net/mlx5/mlx5_flow.h +++ b/drivers/net/mlx5/mlx5_flow.h @@ -454,6 +454,7 @@ enum mlx5_flow_fate_type { MLX5_FLOW_FATE_DROP, MLX5_FLOW_FATE_DEFAULT_MISS, MLX5_FLOW_FATE_SHARED_RSS, + MLX5_FLOW_FATE_MTR, MLX5_FLOW_FATE_MAX, }; @@ -1102,6 +1103,10 @@ struct mlx5_flow_workspace { uint32_t rssq_num; /* Allocated queue num in rss_desc. */ uint32_t flow_idx; /* Intermediate device flow index. */ struct mlx5_flow_meter_info *fm; /* Pointer to the meter in flow. */ + struct mlx5_flow_meter_policy *policy; + /* The meter policy used by meter in flow. */ + struct mlx5_flow_meter_policy *final_policy; + /* The final policy when meter policy is hierarchy. */ uint32_t skip_matcher_reg:1; /* Indicates if need to skip matcher register in translate. 
*/ }; diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c index 75ef6216ac..d34f5214a8 100644 --- a/drivers/net/mlx5/mlx5_flow_dv.c +++ b/drivers/net/mlx5/mlx5_flow_dv.c @@ -7864,6 +7864,8 @@ flow_dv_prepare(struct rte_eth_dev *dev, MLX5_ASSERT(wks); wks->skip_matcher_reg = 0; + wks->policy = NULL; + wks->final_policy = NULL; /* In case of corrupting the memory. */ if (wks->flow_idx >= MLX5_NUM_MAX_DEV_FLOWS) { rte_flow_error_set(error, ENOSPC, @@ -15028,6 +15030,37 @@ __flow_dv_create_domain_policy_acts(struct rte_eth_dev *dev, action_flags |= MLX5_FLOW_ACTION_JUMP; break; } + case RTE_FLOW_ACTION_TYPE_METER: + { + const struct rte_flow_action_meter *mtr; + struct mlx5_flow_meter_info *next_fm; + struct mlx5_flow_meter_policy *next_policy; + uint32_t next_mtr_idx = 0; + + mtr = act->conf; + next_fm = mlx5_flow_meter_find(priv, + mtr->mtr_id, + &next_mtr_idx); + if (!next_fm) + return -rte_mtr_error_set(error, EINVAL, + RTE_MTR_ERROR_TYPE_MTR_ID, NULL, + "Fail to find next meter."); + if (next_fm->def_policy) + return -rte_mtr_error_set(error, EINVAL, + RTE_MTR_ERROR_TYPE_MTR_ID, NULL, + "Hierarchy only supports termination meter."); + next_policy = mlx5_flow_meter_policy_find(dev, + next_fm->policy_id, NULL); + MLX5_ASSERT(next_policy); + act_cnt->fate_action = MLX5_FLOW_FATE_MTR; + act_cnt->next_mtr_id = next_fm->meter_id; + act_cnt->next_sub_policy = NULL; + mtr_policy->is_hierarchy = 1; + mtr_policy->dev = next_policy->dev; + action_flags |= + MLX5_FLOW_ACTION_METER_WITH_TERMINATED_POLICY; + break; + } default: return -rte_mtr_error_set(error, ENOTSUP, RTE_MTR_ERROR_TYPE_METER_POLICY, @@ -15563,7 +15596,14 @@ __flow_dv_create_policy_acts_rules(struct rte_eth_dev *dev, struct mlx5_flow_dv_tag_resource *tag; struct mlx5_flow_dv_port_id_action_resource *port_action; struct mlx5_hrxq *hrxq; - uint8_t egress, transfer; + struct mlx5_flow_meter_info *next_fm = NULL; + struct mlx5_flow_meter_policy *next_policy; + struct 
mlx5_flow_meter_sub_policy *next_sub_policy; + struct mlx5_flow_tbl_data_entry *tbl_data; + struct rte_flow_error error; + uint8_t egress = (domain == MLX5_MTR_DOMAIN_EGRESS) ? 1 : 0; + uint8_t transfer = (domain == MLX5_MTR_DOMAIN_TRANSFER) ? 1 : 0; + bool mtr_first = egress || (transfer && priv->representor_id != 0xffff); bool match_src_port = false; int i; @@ -15578,13 +15618,39 @@ __flow_dv_create_policy_acts_rules(struct rte_eth_dev *dev, acts[i].actions_n = 1; continue; } + if (mtr_policy->act_cnt[i].fate_action == MLX5_FLOW_FATE_MTR) { + struct rte_flow_attr attr = { + .transfer = transfer + }; + + next_fm = mlx5_flow_meter_find(priv, + mtr_policy->act_cnt[i].next_mtr_id, + NULL); + if (!next_fm) { + DRV_LOG(ERR, + "Failed to get next hierarchy meter."); + goto err_exit; + } + if (mlx5_flow_meter_attach(priv, next_fm, + &attr, &error)) { + DRV_LOG(ERR, "%s", error.message); + next_fm = NULL; + goto err_exit; + } + /* Meter action must be the first for TX. */ + if (mtr_first) { + acts[i].dv_actions[acts[i].actions_n] = + next_fm->meter_action; + acts[i].actions_n++; + } + } if (mtr_policy->act_cnt[i].rix_mark) { tag = mlx5_ipool_get(priv->sh->ipool[MLX5_IPOOL_TAG], mtr_policy->act_cnt[i].rix_mark); if (!tag) { DRV_LOG(ERR, "Failed to find " "mark action for policy."); - return -1; + goto err_exit; } acts[i].dv_actions[acts[i].actions_n] = tag->action; @@ -15604,7 +15670,7 @@ __flow_dv_create_policy_acts_rules(struct rte_eth_dev *dev, if (!port_action) { DRV_LOG(ERR, "Failed to find " "port action for policy."); - return -1; + goto err_exit; } acts[i].dv_actions[acts[i].actions_n] = port_action->action; @@ -15626,12 +15692,42 @@ __flow_dv_create_policy_acts_rules(struct rte_eth_dev *dev, if (!hrxq) { DRV_LOG(ERR, "Failed to find " "queue action for policy."); - return -1; + goto err_exit; } acts[i].dv_actions[acts[i].actions_n] = hrxq->action; acts[i].actions_n++; break; + case MLX5_FLOW_FATE_MTR: + if (!next_fm) { + DRV_LOG(ERR, + "No next hierarchy meter."); 
+ goto err_exit; + } + if (!mtr_first) { + acts[i].dv_actions[acts[i].actions_n] = + next_fm->meter_action; + acts[i].actions_n++; + } + if (mtr_policy->act_cnt[i].next_sub_policy) { + next_sub_policy = + mtr_policy->act_cnt[i].next_sub_policy; + } else { + next_policy = + mlx5_flow_meter_policy_find(dev, + next_fm->policy_id, NULL); + MLX5_ASSERT(next_policy); + next_sub_policy = + next_policy->sub_policys[domain][0]; + } + tbl_data = + container_of(next_sub_policy->tbl_rsc, + struct mlx5_flow_tbl_data_entry, tbl); + acts[i].dv_actions[acts[i].actions_n++] = + tbl_data->jump.action; + if (mtr_policy->act_cnt[i].modify_hdr) + match_src_port = !!transfer; + break; default: /*Queue action do nothing*/ break; @@ -15644,9 +15740,13 @@ __flow_dv_create_policy_acts_rules(struct rte_eth_dev *dev, egress, transfer, match_src_port, acts)) { DRV_LOG(ERR, "Failed to create policy rules per domain."); - return -1; + goto err_exit; } return 0; +err_exit: + if (next_fm) + mlx5_flow_meter_detach(priv, next_fm); + return -1; } /** @@ -15956,22 +16056,12 @@ flow_dv_create_mtr_tbls(struct rte_eth_dev *dev, return -1; } -/** - * Find the policy table for prefix table with RSS. - * - * @param[in] dev - * Pointer to Ethernet device. - * @param[in] mtr_policy - * Pointer to meter policy table. - * @param[in] rss_desc - * Pointer to rss_desc - * @return - * Pointer to table set on success, NULL otherwise and rte_errno is set. 
- */ static struct mlx5_flow_meter_sub_policy * -flow_dv_meter_sub_policy_rss_prepare(struct rte_eth_dev *dev, +__flow_dv_meter_get_rss_sub_policy(struct rte_eth_dev *dev, struct mlx5_flow_meter_policy *mtr_policy, - struct mlx5_flow_rss_desc *rss_desc[MLX5_MTR_RTE_COLORS]) + struct mlx5_flow_rss_desc *rss_desc[MLX5_MTR_RTE_COLORS], + struct mlx5_flow_meter_sub_policy *next_sub_policy, + bool *is_reuse) { struct mlx5_priv *priv = dev->data->dev_private; struct mlx5_flow_meter_sub_policy *sub_policy = NULL; @@ -16013,6 +16103,7 @@ flow_dv_meter_sub_policy_rss_prepare(struct rte_eth_dev *dev, rte_spinlock_unlock(&mtr_policy->sl); for (j = 0; j < MLX5_MTR_RTE_COLORS; j++) mlx5_hrxq_release(dev, hrxq_idx[j]); + *is_reuse = true; return mtr_policy->sub_policys[domain][i]; } } @@ -16038,24 +16129,30 @@ flow_dv_meter_sub_policy_rss_prepare(struct rte_eth_dev *dev, if (!rss_desc[i]) continue; sub_policy->rix_hrxq[i] = hrxq_idx[i]; - /* - * Overwrite the last action from - * RSS action to Queue action. - */ - hrxq = mlx5_ipool_get(priv->sh->ipool[MLX5_IPOOL_HRXQ], - hrxq_idx[i]); - if (!hrxq) { - DRV_LOG(ERR, "Failed to create policy hrxq"); - goto rss_sub_policy_error; - } - act_cnt = &mtr_policy->act_cnt[i]; - if (act_cnt->rix_mark || act_cnt->modify_hdr) { - memset(&dh, 0, sizeof(struct mlx5_flow_handle)); - if (act_cnt->rix_mark) - dh.mark = 1; - dh.fate_action = MLX5_FLOW_FATE_QUEUE; - dh.rix_hrxq = hrxq_idx[i]; - flow_drv_rxq_flags_set(dev, &dh); + if (mtr_policy->is_hierarchy) { + act_cnt = &mtr_policy->act_cnt[i]; + act_cnt->next_sub_policy = next_sub_policy; + mlx5_hrxq_release(dev, hrxq_idx[i]); + } else { + /* + * Overwrite the last action from + * RSS action to Queue action. 
+ */ + hrxq = mlx5_ipool_get(priv->sh->ipool[MLX5_IPOOL_HRXQ], + hrxq_idx[i]); + if (!hrxq) { + DRV_LOG(ERR, "Failed to create policy hrxq"); + goto rss_sub_policy_error; + } + act_cnt = &mtr_policy->act_cnt[i]; + if (act_cnt->rix_mark || act_cnt->modify_hdr) { + memset(&dh, 0, sizeof(struct mlx5_flow_handle)); + if (act_cnt->rix_mark) + dh.mark = 1; + dh.fate_action = MLX5_FLOW_FATE_QUEUE; + dh.rix_hrxq = hrxq_idx[i]; + flow_drv_rxq_flags_set(dev, &dh); + } } } if (__flow_dv_create_policy_acts_rules(dev, mtr_policy, @@ -16079,6 +16176,7 @@ flow_dv_meter_sub_policy_rss_prepare(struct rte_eth_dev *dev, (MLX5_MTR_SUB_POLICY_NUM_SHIFT * domain); } rte_spinlock_unlock(&mtr_policy->sl); + *is_reuse = false; return sub_policy; rss_sub_policy_error: if (sub_policy) { @@ -16093,13 +16191,105 @@ flow_dv_meter_sub_policy_rss_prepare(struct rte_eth_dev *dev, sub_policy->idx); } } - if (sub_policy_idx) - mlx5_ipool_free(priv->sh->ipool[MLX5_IPOOL_MTR_POLICY], - sub_policy_idx); rte_spinlock_unlock(&mtr_policy->sl); return NULL; } +/** + * Find the policy table for prefix table with RSS. + * + * @param[in] dev + * Pointer to Ethernet device. + * @param[in] mtr_policy + * Pointer to meter policy table. + * @param[in] rss_desc + * Pointer to rss_desc + * @return + * Pointer to table set on success, NULL otherwise and rte_errno is set. 
+ */ +static struct mlx5_flow_meter_sub_policy * +flow_dv_meter_sub_policy_rss_prepare(struct rte_eth_dev *dev, + struct mlx5_flow_meter_policy *mtr_policy, + struct mlx5_flow_rss_desc *rss_desc[MLX5_MTR_RTE_COLORS]) +{ + struct mlx5_priv *priv = dev->data->dev_private; + struct mlx5_flow_meter_sub_policy *sub_policy = NULL; + struct mlx5_flow_meter_info *next_fm; + struct mlx5_flow_meter_policy *next_policy; + struct mlx5_flow_meter_sub_policy *next_sub_policy = NULL; + struct mlx5_flow_meter_policy *policies[MLX5_MTR_CHAIN_MAX_NUM]; + struct mlx5_flow_meter_sub_policy *sub_policies[MLX5_MTR_CHAIN_MAX_NUM]; + uint32_t domain = MLX5_MTR_DOMAIN_INGRESS; + bool reuse_sub_policy; + uint32_t i = 0; + uint32_t j = 0; + + while (true) { + /* Iterate hierarchy to get all policies in this hierarchy. */ + policies[i++] = mtr_policy; + if (!mtr_policy->is_hierarchy) + break; + if (i >= MLX5_MTR_CHAIN_MAX_NUM) { + DRV_LOG(ERR, "Exceed max meter number in hierarchy."); + return NULL; + } + next_fm = mlx5_flow_meter_find(priv, + mtr_policy->act_cnt[RTE_COLOR_GREEN].next_mtr_id, NULL); + if (!next_fm) { + DRV_LOG(ERR, "Failed to get next meter in hierarchy."); + return NULL; + } + next_policy = + mlx5_flow_meter_policy_find(dev, next_fm->policy_id, + NULL); + MLX5_ASSERT(next_policy); + mtr_policy = next_policy; + } + while (i) { + /** + * From last policy to the first one in hierarchy, + * create/get the sub policy for each of them. 
+ */ + sub_policy = __flow_dv_meter_get_rss_sub_policy(dev, + policies[--i], + rss_desc, + next_sub_policy, + &reuse_sub_policy); + if (!sub_policy) { + DRV_LOG(ERR, "Failed to get the sub policy."); + goto err_exit; + } + if (!reuse_sub_policy) + sub_policies[j++] = sub_policy; + next_sub_policy = sub_policy; + } + return sub_policy; +err_exit: + while (j) { + uint16_t sub_policy_num; + + sub_policy = sub_policies[--j]; + mtr_policy = sub_policy->main_policy; + __flow_dv_destroy_sub_policy_rules(dev, sub_policy); + if (sub_policy != mtr_policy->sub_policys[domain][0]) { + sub_policy_num = (mtr_policy->sub_policy_num >> + (MLX5_MTR_SUB_POLICY_NUM_SHIFT * domain)) & + MLX5_MTR_SUB_POLICY_NUM_MASK; + mtr_policy->sub_policys[domain][sub_policy_num - 1] = + NULL; + sub_policy_num--; + mtr_policy->sub_policy_num &= + ~(MLX5_MTR_SUB_POLICY_NUM_MASK << + (MLX5_MTR_SUB_POLICY_NUM_SHIFT * i)); + mtr_policy->sub_policy_num |= + (sub_policy_num & MLX5_MTR_SUB_POLICY_NUM_MASK) << + (MLX5_MTR_SUB_POLICY_NUM_SHIFT * i); + mlx5_ipool_free(priv->sh->ipool[MLX5_IPOOL_MTR_POLICY], + sub_policy->idx); + } + } + return NULL; +} /** * Destroy the sub policy table with RX queue. diff --git a/drivers/net/mlx5/mlx5_flow_meter.c b/drivers/net/mlx5/mlx5_flow_meter.c index 6f962a8d52..03f7e120e1 100644 --- a/drivers/net/mlx5/mlx5_flow_meter.c +++ b/drivers/net/mlx5/mlx5_flow_meter.c @@ -529,6 +529,37 @@ mlx5_flow_meter_policy_find(struct rte_eth_dev *dev, return NULL; } +/** + * Get the last meter's policy from one meter's policy in hierarchy. + * + * @param[in] dev + * Pointer to Ethernet device. + * @param[in] policy + * Pointer to flow meter policy. + * + * @return + * Pointer to the final meter's policy, or NULL when fail. 
+ */ +struct mlx5_flow_meter_policy * +mlx5_flow_meter_hierarchy_get_final_policy(struct rte_eth_dev *dev, + struct mlx5_flow_meter_policy *policy) +{ + struct mlx5_priv *priv = dev->data->dev_private; + struct mlx5_flow_meter_info *next_fm; + struct mlx5_flow_meter_policy *next_policy = policy; + + while (next_policy->is_hierarchy) { + next_fm = mlx5_flow_meter_find(priv, + next_policy->act_cnt[RTE_COLOR_GREEN].next_mtr_id, NULL); + if (!next_fm || next_fm->def_policy) + return NULL; + next_policy = mlx5_flow_meter_policy_find(dev, + next_fm->policy_id, NULL); + MLX5_ASSERT(next_policy); + } + return next_policy; +} + /** * Callback to check MTR policy action validate * @@ -650,6 +681,7 @@ mlx5_flow_meter_policy_add(struct rte_eth_dev *dev, uint16_t sub_policy_num; uint8_t domain_bitmap = 0; union mlx5_l3t_data data; + bool skip_rule = false; if (!priv->mtr_en) return -rte_mtr_error_set(error, ENOTSUP, @@ -759,7 +791,16 @@ mlx5_flow_meter_policy_add(struct rte_eth_dev *dev, policy->actions, error); if (ret) goto policy_add_err; - if (!is_rss && !mtr_policy->is_queue) { + if (mtr_policy->is_hierarchy) { + struct mlx5_flow_meter_policy *final_policy; + + final_policy = + mlx5_flow_meter_hierarchy_get_final_policy(dev, mtr_policy); + if (!final_policy) + goto policy_add_err; + skip_rule = (final_policy->is_rss || final_policy->is_queue); + } + if (!is_rss && !mtr_policy->is_queue && !skip_rule) { /* Create policy rules in HW. 
*/ ret = mlx5_flow_create_policy_rules(dev, mtr_policy); if (ret)

From patchwork Tue Jul 6 13:14:48 2021
X-Patchwork-Submitter: Shun Hao
X-Patchwork-Id: 95383
X-Patchwork-Delegate: rasland@nvidia.com
Transport; Tue, 6 Jul 2021 13:15:40 +0000 Received: from nvidia.com (172.20.187.5) by HQMAIL107.nvidia.com (172.20.187.13) with Microsoft SMTP Server (TLS) id 15.0.1497.2; Tue, 6 Jul 2021 13:15:37 +0000 From: Shun Hao To: , , , "Shahaf Shuler" CC: , , Date: Tue, 6 Jul 2021 16:14:48 +0300 Message-ID: <20210706131450.30917-3-shunh@nvidia.com> X-Mailer: git-send-email 2.20.0 In-Reply-To: <20210706131450.30917-1-shunh@nvidia.com> References: <20210706131450.30917-1-shunh@nvidia.com> MIME-Version: 1.0 X-Originating-IP: [172.20.187.5] X-ClientProxiedBy: HQMAIL101.nvidia.com (172.20.187.10) To HQMAIL107.nvidia.com (172.20.187.13) X-EOPAttributedMessage: 0 X-MS-PublicTrafficType: Email X-MS-Office365-Filtering-Correlation-Id: e533dcd2-fcca-4746-5001-08d940802707 X-MS-TrafficTypeDiagnostic: BN6PR12MB1425: X-LD-Processed: 43083d15-7273-40c1-b7db-39efd9ccc17a,ExtAddr X-Microsoft-Antispam-PRVS: X-MS-Oob-TLC-OOBClassifiers: OLM:1227; X-MS-Exchange-SenderADCheck: 1 X-Microsoft-Antispam: BCL:0; X-Microsoft-Antispam-Message-Info: 5vob7CB0br+vtzwFXkysIKm2KYFMYKjlFmGboPkEa+rL4KWDg42z2I81ds/xBX8P9dBoY8FgvUbA+w1XxFVnTQEP76O1ekl8GDs5IRSv1qXos90j+cEaLHKZJy6DQ6nE6debSSQirFlN7FBzGooSLls/zxTccDEMJHl7uoiT44mtL0Ayk7GIcIvonGE2vHJhyph4qR1uPeZDUrjuPFI5sD6iI4ZV9vkrib/HvZfm1v9OZO0O7afRuRt0IToPfBocZMRLy+/3R3wbSh7rQ/fPXi4YUj94/igBVHayH6b5iNNaapygISsfM8xTm8Z3YrQl2JCn/oWEYQ2WMPkkkORINnOun1yMLIh8kTdcPsk6giJIce1f1LgiIKltEIN2eGq316Q2fWLsiVmXVIWV0yH6tQIWCfMOANK2pYOedJAXDN+m2DkExbBeVqV0IilrBguDz35UgE5/uDdkQXzU5hv93PqNpoTArBEdSdal5piPn5C86XWc35p/3FknKxswfFOpl2oTLmRXdT/jvYSYlmM/dPEZ8TMXicmZuJTaheZ8+/WXii3Vxp/MxyBphUwHJs+c0uSDKTqyqXo+w4jHhipG0KtnNRIHadATWSjRqiJ7BaoTeMvqJiBoYq6il/BQjSdbkRs+C3CE602i0tPxhtRrLROC2ipXfqqmSKMIq0D/OXXn3lON0lQABXJRBtI6k5MgdyrbV/wuKQJLf/bQEAjVWw== X-Forefront-Antispam-Report: CIP:216.228.112.34; CTRY:US; LANG:en; SCL:1; SRV:; IPV:NLI; SFV:NSPM; H:mail.nvidia.com; PTR:schybrid03.nvidia.com; CAT:NONE; 
Subject: [dpdk-dev] [PATCH v1 2/4] net/mlx5: support meter hierarchy drop count
List-Id: DPDK patches and discussions

When a meter hierarchy contains multiple meters, each meter may have its own
drop counter, so a packet marked red by one meter should be counted against
that specific meter only. To support this, add a tag action in the color rule
so that a packet passed on to the next meter carries that meter's ID and is
counted by the correct drop counter in the drop table.
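The tag-and-count scheme described above can be modeled outside the driver. The sketch below is a self-contained conceptual model, not the mlx5 data path: the `meter`/`pkt_meta` structs and the `mtr_tag` field are hypothetical stand-ins for the regC register and the per-meter drop counters that the patch manipulates.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical model of a meter hierarchy: each meter owns a drop
 * counter. A "tag action" stores the id of the meter a packet is
 * entering in per-packet metadata, so that the shared drop table can
 * charge a red packet to the meter that actually marked it. */
struct meter {
    uint32_t id;
    uint32_t budget;     /* packets allowed before marking red */
    uint64_t drop_cnt;   /* per-meter drop counter */
    struct meter *next;  /* next meter in the hierarchy, or NULL */
};

struct pkt_meta {
    uint32_t mtr_tag;    /* meter id written by the tag action */
};

static uint64_t *
drop_counter_by_tag(struct meter *meters, size_t n, uint32_t tag)
{
    for (size_t i = 0; i < n; i++)
        if (meters[i].id == tag)
            return &meters[i].drop_cnt;
    return NULL;
}

/* Returns 0 if the packet passes the whole hierarchy, -1 if dropped. */
static int
meter_hierarchy_run(struct meter *m, struct pkt_meta *meta,
                    struct meter *all, size_t n)
{
    while (m != NULL) {
        /* Tag action: record which meter the packet is entering. */
        meta->mtr_tag = m->id;
        if (m->budget == 0) {
            /* Red: the drop table reads the tag, so the drop is
             * charged to this meter, not the first one. */
            uint64_t *cnt = drop_counter_by_tag(all, n, meta->mtr_tag);
            if (cnt != NULL)
                (*cnt)++;
            return -1;
        }
        m->budget--;
        m = m->next;
    }
    return 0;
}
```

Without the tag, a drop in a shared drop table could only be attributed to the hierarchy's entry meter; rewriting the tag before each hop is what lets the second meter's counter absorb the drop.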
Signed-off-by: Shun Hao Acked-by: Matan Azrad --- drivers/net/mlx5/mlx5.h | 20 +- drivers/net/mlx5/mlx5_flow.c | 52 +++++- drivers/net/mlx5/mlx5_flow.h | 7 + drivers/net/mlx5/mlx5_flow_dv.c | 318 ++++++++++++++++++++++++++------ 4 files changed, 339 insertions(+), 58 deletions(-) diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h index 0c555f0b1f..e5c9ec0777 100644 --- a/drivers/net/mlx5/mlx5.h +++ b/drivers/net/mlx5/mlx5.h @@ -631,6 +631,20 @@ enum mlx5_meter_domain { MLX5_MTR_DOMAIN_EGRESS_BIT | \ MLX5_MTR_DOMAIN_TRANSFER_BIT) +/* The color tag rule structure. */ +struct mlx5_sub_policy_color_rule { + void *rule; + /* The color rule. */ + struct mlx5_flow_dv_matcher *matcher; + /* The color matcher. */ + TAILQ_ENTRY(mlx5_sub_policy_color_rule) next_port; + /**< Pointer to the next color rule structure. */ + int32_t src_port; + /* On which src port this rule applied. */ +}; + +TAILQ_HEAD(mlx5_sub_policy_color_rules, mlx5_sub_policy_color_rule); + /* * Meter sub-policy structure. * Each RSS TIR in meter policy need its own sub-policy resource. @@ -648,10 +662,8 @@ struct mlx5_flow_meter_sub_policy { /* Index to TIR resource. */ struct mlx5_flow_tbl_resource *jump_tbl[MLX5_MTR_RTE_COLORS]; /* Meter jump/drop table. */ - struct mlx5_flow_dv_matcher *color_matcher[RTE_COLORS]; - /* Matcher for Color. */ - void *color_rule[RTE_COLORS]; - /* Meter green/yellow/drop rule. */ + struct mlx5_sub_policy_color_rules color_rules[RTE_COLORS]; + /* List for the color rules. */ }; struct mlx5_meter_policy_acts { diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c index 6c4bfde098..3cd91a7e8c 100644 --- a/drivers/net/mlx5/mlx5_flow.c +++ b/drivers/net/mlx5/mlx5_flow.c @@ -3449,6 +3449,41 @@ flow_drv_meter_sub_policy_rss_prepare(struct rte_eth_dev *dev, return fops->meter_sub_policy_rss_prepare(dev, policy, rss_desc); } +/** + * Flow driver color tag rule API. This abstracts calling driver + * specific functions. 
Parent flow (rte_flow) should have driver + * type (drv_type). It will create the color tag rules in hierarchy meter. + * + * @param[in] dev + * Pointer to Ethernet device. + * @param[in, out] flow + * Pointer to flow structure. + * @param[in] fm + * Pointer to flow meter structure. + * @param[in] src_port + * The src port this extra rule should use. + * @param[in] item + * The src port id match item. + * @param[out] error + * Pointer to error structure. + */ +static int +flow_drv_mtr_hierarchy_rule_create(struct rte_eth_dev *dev, + struct rte_flow *flow, + struct mlx5_flow_meter_info *fm, + int32_t src_port, + const struct rte_flow_item *item, + struct rte_flow_error *error) +{ + const struct mlx5_flow_driver_ops *fops; + enum mlx5_flow_drv_type type = flow->drv_type; + + MLX5_ASSERT(type > MLX5_FLOW_TYPE_MIN && type < MLX5_FLOW_TYPE_MAX); + fops = flow_get_drv_ops(type); + return fops->meter_hierarchy_rule_create(dev, fm, + src_port, item, error); +} + /** * Get RSS action from the action list. * @@ -4773,6 +4808,15 @@ flow_meter_split_prep(struct rte_eth_dev *dev, pid_v, "Failed to get port info."); flow_src_port = port_priv->representor_id; + if (!fm->def_policy && wks->policy->is_hierarchy && + flow_src_port != priv->representor_id) { + if (flow_drv_mtr_hierarchy_rule_create(dev, + flow, fm, + flow_src_port, + items, + error)) + return -rte_errno; + } memcpy(sfx_items, items, sizeof(*sfx_items)); sfx_items++; break; @@ -5713,6 +5757,7 @@ flow_create_split_meter(struct rte_eth_dev *dev, bool has_mtr = false; bool has_modify = false; bool set_mtr_reg = true; + bool is_mtr_hierarchy = false; uint32_t meter_id = 0; uint32_t mtr_idx = 0; uint32_t mtr_flow_id = 0; @@ -5759,6 +5804,7 @@ flow_create_split_meter(struct rte_eth_dev *dev, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION, NULL, "Failed to find terminal policy of hierarchy."); + is_mtr_hierarchy = true; } } /* @@ -5766,9 +5812,11 @@ flow_create_split_meter(struct rte_eth_dev *dev, * 1. 
There's no action in flow to change * packet (modify/encap/decap etc.), OR * 2. No drop count needed for this meter. - * no need to use regC to save meter id anymore. + * 3. It's not meter hierarchy. + * Then no need to use regC to save meter id anymore. */ - if (!fm->def_policy && (!has_modify || !fm->drop_cnt)) + if (!fm->def_policy && !is_mtr_hierarchy && + (!has_modify || !fm->drop_cnt)) set_mtr_reg = false; /* Prefix actions: meter, decap, encap, tag, jump, end. */ act_size = sizeof(struct rte_flow_action) * (actions_n + 6) + diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h index 09d6d609db..7d97c5880f 100644 --- a/drivers/net/mlx5/mlx5_flow.h +++ b/drivers/net/mlx5/mlx5_flow.h @@ -1161,6 +1161,12 @@ typedef struct mlx5_flow_meter_sub_policy * (struct rte_eth_dev *dev, struct mlx5_flow_meter_policy *mtr_policy, struct mlx5_flow_rss_desc *rss_desc[MLX5_MTR_RTE_COLORS]); +typedef int (*mlx5_flow_meter_hierarchy_rule_create_t) + (struct rte_eth_dev *dev, + struct mlx5_flow_meter_info *fm, + int32_t src_port, + const struct rte_flow_item *item, + struct rte_flow_error *error); typedef void (*mlx5_flow_destroy_sub_policy_with_rxq_t) (struct rte_eth_dev *dev, struct mlx5_flow_meter_policy *mtr_policy); @@ -1257,6 +1263,7 @@ struct mlx5_flow_driver_ops { mlx5_flow_create_def_policy_t create_def_policy; mlx5_flow_destroy_def_policy_t destroy_def_policy; mlx5_flow_meter_sub_policy_rss_prepare_t meter_sub_policy_rss_prepare; + mlx5_flow_meter_hierarchy_rule_create_t meter_hierarchy_rule_create; mlx5_flow_destroy_sub_policy_with_rxq_t destroy_sub_policy_with_rxq; mlx5_flow_counter_alloc_t counter_alloc; mlx5_flow_counter_free_t counter_free; diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c index d34f5214a8..119de09809 100644 --- a/drivers/net/mlx5/mlx5_flow_dv.c +++ b/drivers/net/mlx5/mlx5_flow_dv.c @@ -23,6 +23,7 @@ #include #include #include +#include #include #include @@ -13090,6 +13091,15 @@ 
flow_dv_translate(struct rte_eth_dev *dev, matcher.mask.size); matcher.priority = mlx5_get_matcher_priority(dev, attr, matcher.priority); + /** + * When creating meter drop flow in drop table, using original + * 5-tuple match, the matcher priority should be lower than + * mtr_id matcher. + */ + if (attr->group == MLX5_FLOW_TABLE_LEVEL_METER && + dev_flow->dv.table_id == MLX5_MTR_TABLE_ID_DROP && + matcher.priority <= MLX5_REG_BITS) + matcher.priority += MLX5_REG_BITS; /* reserved field no needs to be set to 0 here. */ tbl_key.is_fdb = attr->transfer; tbl_key.is_egress = attr->egress; @@ -14579,20 +14589,21 @@ __flow_dv_destroy_sub_policy_rules(struct rte_eth_dev *dev, struct mlx5_flow_meter_sub_policy *sub_policy) { struct mlx5_flow_tbl_data_entry *tbl; + struct mlx5_sub_policy_color_rule *color_rule; + void *tmp; int i; for (i = 0; i < RTE_COLORS; i++) { - if (sub_policy->color_rule[i]) { - claim_zero(mlx5_flow_os_destroy_flow - (sub_policy->color_rule[i])); - sub_policy->color_rule[i] = NULL; - } - if (sub_policy->color_matcher[i]) { - tbl = container_of(sub_policy->color_matcher[i]->tbl, - typeof(*tbl), tbl); + TAILQ_FOREACH_SAFE(color_rule, &sub_policy->color_rules[i], + next_port, tmp) { + claim_zero(mlx5_flow_os_destroy_flow(color_rule->rule)); + tbl = container_of(color_rule->matcher->tbl, + typeof(*tbl), tbl); mlx5_cache_unregister(&tbl->matchers, - &sub_policy->color_matcher[i]->entry); - sub_policy->color_matcher[i] = NULL; + &color_rule->matcher->entry); + TAILQ_REMOVE(&sub_policy->color_rules[i], + color_rule, next_port); + mlx5_free(color_rule); } } for (i = 0; i < MLX5_MTR_RTE_COLORS; i++) { @@ -14741,6 +14752,7 @@ __flow_dv_create_domain_policy_acts(struct rte_eth_dev *dev, sizeof(struct mlx5_modification_cmd) * (MLX5_MAX_MODIFY_NUM + 1)]; } mhdr_dummy; + struct mlx5_flow_dv_modify_hdr_resource *mhdr_res = &mhdr_dummy.res; egress = (domain == MLX5_MTR_DOMAIN_EGRESS) ? 1 : 0; transfer = (domain == MLX5_MTR_DOMAIN_TRANSFER) ? 
1 : 0; @@ -14748,6 +14760,11 @@ __flow_dv_create_domain_policy_acts(struct rte_eth_dev *dev, memset(&dev_flow, 0, sizeof(struct mlx5_flow)); memset(&port_id_action, 0, sizeof(struct mlx5_flow_dv_port_id_action_resource)); + memset(mhdr_res, 0, sizeof(*mhdr_res)); + mhdr_res->ft_type = transfer ? MLX5DV_FLOW_TABLE_TYPE_FDB : + egress ? + MLX5DV_FLOW_TABLE_TYPE_NIC_TX : + MLX5DV_FLOW_TABLE_TYPE_NIC_RX; dev_flow.handle = &dh; dev_flow.dv.port_id_action = &port_id_action; dev_flow.external = true; @@ -14786,10 +14803,6 @@ __flow_dv_create_domain_policy_acts(struct rte_eth_dev *dev, break; } case RTE_FLOW_ACTION_TYPE_SET_TAG: - { - struct mlx5_flow_dv_modify_hdr_resource - *mhdr_res = &mhdr_dummy.res; - if (i >= MLX5_MTR_RTE_COLORS) return -rte_mtr_error_set(error, ENOTSUP, @@ -14797,12 +14810,6 @@ __flow_dv_create_domain_policy_acts(struct rte_eth_dev *dev, NULL, "cannot create policy " "set tag action for this color"); - memset(mhdr_res, 0, sizeof(*mhdr_res)); - mhdr_res->ft_type = transfer ? - MLX5DV_FLOW_TABLE_TYPE_FDB : - egress ? - MLX5DV_FLOW_TABLE_TYPE_NIC_TX : - MLX5DV_FLOW_TABLE_TYPE_NIC_RX; if (flow_dv_convert_action_set_tag (dev, mhdr_res, (const struct rte_flow_action_set_tag *) @@ -14818,20 +14825,8 @@ __flow_dv_create_domain_policy_acts(struct rte_eth_dev *dev, RTE_MTR_ERROR_TYPE_METER_POLICY, NULL, "cannot find policy " "set tag action"); - /* create modify action if needed. 
*/ - dev_flow.dv.group = 1; - if (flow_dv_modify_hdr_resource_register - (dev, mhdr_res, &dev_flow, &flow_err)) - return -rte_mtr_error_set(error, - ENOTSUP, - RTE_MTR_ERROR_TYPE_METER_POLICY, - NULL, "cannot register policy " - "set tag action"); - act_cnt->modify_hdr = - dev_flow.handle->dvh.modify_hdr; action_flags |= MLX5_FLOW_ACTION_SET_TAG; break; - } case RTE_FLOW_ACTION_TYPE_DROP: { struct mlx5_flow_mtr_mng *mtrmng = @@ -15035,6 +15030,8 @@ __flow_dv_create_domain_policy_acts(struct rte_eth_dev *dev, const struct rte_flow_action_meter *mtr; struct mlx5_flow_meter_info *next_fm; struct mlx5_flow_meter_policy *next_policy; + struct rte_flow_action tag_action; + struct mlx5_rte_flow_action_set_tag set_tag; uint32_t next_mtr_idx = 0; mtr = act->conf; @@ -15052,6 +15049,30 @@ __flow_dv_create_domain_policy_acts(struct rte_eth_dev *dev, next_policy = mlx5_flow_meter_policy_find(dev, next_fm->policy_id, NULL); MLX5_ASSERT(next_policy); + if (next_fm->drop_cnt) { + set_tag.id = + (enum modify_reg) + mlx5_flow_get_reg_id(dev, + MLX5_MTR_ID, + 0, + (struct rte_flow_error *)error); + set_tag.offset = (priv->mtr_reg_share ? + MLX5_MTR_COLOR_BITS : 0); + set_tag.length = (priv->mtr_reg_share ? + MLX5_MTR_IDLE_BITS_IN_COLOR_REG : + MLX5_REG_BITS); + set_tag.data = next_mtr_idx; + tag_action.type = + (enum rte_flow_action_type) + MLX5_RTE_FLOW_ACTION_TYPE_TAG; + tag_action.conf = &set_tag; + if (flow_dv_convert_action_set_reg + (mhdr_res, &tag_action, + (struct rte_flow_error *)error)) + return -rte_errno; + action_flags |= + MLX5_FLOW_ACTION_SET_TAG; + } act_cnt->fate_action = MLX5_FLOW_FATE_MTR; act_cnt->next_mtr_id = next_fm->meter_id; act_cnt->next_sub_policy = NULL; @@ -15066,6 +15087,19 @@ __flow_dv_create_domain_policy_acts(struct rte_eth_dev *dev, RTE_MTR_ERROR_TYPE_METER_POLICY, NULL, "action type not supported"); } + if (action_flags & MLX5_FLOW_ACTION_SET_TAG) { + /* create modify action if needed. 
*/ + dev_flow.dv.group = 1; + if (flow_dv_modify_hdr_resource_register + (dev, mhdr_res, &dev_flow, &flow_err)) + return -rte_mtr_error_set(error, + ENOTSUP, + RTE_MTR_ERROR_TYPE_METER_POLICY, + NULL, "cannot register policy " + "set tag action"); + act_cnt->modify_hdr = + dev_flow.handle->dvh.modify_hdr; + } } } return 0; @@ -15418,8 +15452,8 @@ __flow_dv_create_policy_flow(struct rte_eth_dev *dev, uint32_t color_reg_c_idx, enum rte_color color, void *matcher_object, int actions_n, void *actions, - bool match_src_port, void **rule, - const struct rte_flow_attr *attr) + bool match_src_port, const struct rte_flow_item *item, + void **rule, const struct rte_flow_attr *attr) { int ret; struct mlx5_flow_dv_match_params value = { @@ -15434,7 +15468,7 @@ __flow_dv_create_policy_flow(struct rte_eth_dev *dev, if (match_src_port && (priv->representor || priv->master)) { if (flow_dv_translate_item_port_id(dev, matcher.buf, - value.buf, NULL, attr)) { + value.buf, item, attr)) { DRV_LOG(ERR, "Failed to create meter policy flow with port."); return -1; @@ -15460,6 +15494,8 @@ __flow_dv_create_policy_matcher(struct rte_eth_dev *dev, struct mlx5_flow_meter_sub_policy *sub_policy, const struct rte_flow_attr *attr, bool match_src_port, + const struct rte_flow_item *item, + struct mlx5_flow_dv_matcher **policy_matcher, struct rte_flow_error *error) { struct mlx5_cache_entry *entry; @@ -15485,7 +15521,7 @@ __flow_dv_create_policy_matcher(struct rte_eth_dev *dev, if (match_src_port && (priv->representor || priv->master)) { if (flow_dv_translate_item_port_id(dev, matcher.mask.buf, - value.buf, NULL, attr)) { + value.buf, item, attr)) { DRV_LOG(ERR, "Failed to register meter drop matcher with port."); return -1; @@ -15503,7 +15539,7 @@ __flow_dv_create_policy_matcher(struct rte_eth_dev *dev, DRV_LOG(ERR, "Failed to register meter drop matcher."); return -1; } - sub_policy->color_matcher[priority] = + *policy_matcher = container_of(entry, struct mlx5_flow_dv_matcher, entry); return 0; } 
@@ -15531,6 +15567,7 @@ __flow_dv_create_domain_policy_rules(struct rte_eth_dev *dev, uint8_t egress, uint8_t transfer, bool match_src_port, struct mlx5_meter_policy_acts acts[RTE_COLORS]) { + struct mlx5_priv *priv = dev->data->dev_private; struct rte_flow_error flow_err; uint32_t color_reg_c_idx; struct rte_flow_attr attr = { @@ -15543,6 +15580,7 @@ __flow_dv_create_domain_policy_rules(struct rte_eth_dev *dev, }; int i; int ret = mlx5_flow_get_reg_id(dev, MLX5_MTR_COLOR, 0, &flow_err); + struct mlx5_sub_policy_color_rule *color_rule; if (ret < 0) return -1; @@ -15560,29 +15598,56 @@ __flow_dv_create_domain_policy_rules(struct rte_eth_dev *dev, /* Prepare matchers. */ color_reg_c_idx = ret; for (i = 0; i < RTE_COLORS; i++) { + TAILQ_INIT(&sub_policy->color_rules[i]); if (i == RTE_COLOR_YELLOW || !acts[i].actions_n) continue; + color_rule = mlx5_malloc(MLX5_MEM_ZERO, + sizeof(struct mlx5_sub_policy_color_rule), + 0, SOCKET_ID_ANY); + if (!color_rule) { + DRV_LOG(ERR, "No memory to create color rule."); + goto err_exit; + } + color_rule->src_port = priv->representor_id; attr.priority = i; - if (!sub_policy->color_matcher[i]) { - /* Create matchers for Color. */ - if (__flow_dv_create_policy_matcher(dev, - color_reg_c_idx, i, sub_policy, - &attr, match_src_port, &flow_err)) - return -1; + /* Create matchers for Color. */ + if (__flow_dv_create_policy_matcher(dev, + color_reg_c_idx, i, sub_policy, &attr, + (i != RTE_COLOR_RED ? match_src_port : false), + NULL, &color_rule->matcher, &flow_err)) { + DRV_LOG(ERR, "Failed to create color matcher."); + goto err_exit; } /* Create flow, matching color. */ - if (acts[i].actions_n) - if (__flow_dv_create_policy_flow(dev, + if (__flow_dv_create_policy_flow(dev, color_reg_c_idx, (enum rte_color)i, - sub_policy->color_matcher[i]->matcher_object, + color_rule->matcher->matcher_object, acts[i].actions_n, acts[i].dv_actions, - match_src_port, - &sub_policy->color_rule[i], - &attr)) - return -1; + (i != RTE_COLOR_RED ? 
match_src_port : false), + NULL, &color_rule->rule, + &attr)) { + DRV_LOG(ERR, "Failed to create color rule."); + goto err_exit; + } + TAILQ_INSERT_TAIL(&sub_policy->color_rules[i], + color_rule, next_port); } return 0; +err_exit: + if (color_rule) { + if (color_rule->rule) + mlx5_flow_os_destroy_flow(color_rule->rule); + if (color_rule->matcher) { + struct mlx5_flow_tbl_data_entry *tbl = + container_of(color_rule->matcher->tbl, + typeof(*tbl), tbl); + mlx5_cache_unregister(&tbl->matchers, + &color_rule->matcher->entry); + } + mlx5_free(color_rule); + } + return -1; } static int @@ -15734,8 +15799,6 @@ __flow_dv_create_policy_acts_rules(struct rte_eth_dev *dev, } } } - egress = (domain == MLX5_MTR_DOMAIN_EGRESS) ? 1 : 0; - transfer = (domain == MLX5_MTR_DOMAIN_TRANSFER) ? 1 : 0; if (__flow_dv_create_domain_policy_rules(dev, sub_policy, egress, transfer, match_src_port, acts)) { DRV_LOG(ERR, @@ -16291,6 +16354,156 @@ flow_dv_meter_sub_policy_rss_prepare(struct rte_eth_dev *dev, return NULL; } +/** + * Create the sub policy tag rule for all meters in hierarchy. + * + * @param[in] dev + * Pointer to Ethernet device. + * @param[in] fm + * Meter information table. + * @param[in] src_port + * The src port this extra rule should use. + * @param[in] item + * The src port match item. + * @param[out] error + * Perform verbose error reporting if not NULL. + * @return + * 0 on success, a negative errno value otherwise and rte_errno is set. 
+ */ +static int +flow_dv_meter_hierarchy_rule_create(struct rte_eth_dev *dev, + struct mlx5_flow_meter_info *fm, + int32_t src_port, + const struct rte_flow_item *item, + struct rte_flow_error *error) +{ + struct mlx5_priv *priv = dev->data->dev_private; + struct mlx5_flow_meter_policy *mtr_policy; + struct mlx5_flow_meter_sub_policy *sub_policy; + struct mlx5_flow_meter_info *next_fm = NULL; + struct mlx5_flow_meter_policy *next_policy; + struct mlx5_flow_meter_sub_policy *next_sub_policy; + struct mlx5_flow_tbl_data_entry *tbl_data; + struct mlx5_sub_policy_color_rule *color_rule; + struct mlx5_meter_policy_acts acts; + uint32_t color_reg_c_idx; + bool mtr_first = (src_port != 0xffff) ? true : false; + struct rte_flow_attr attr = { + .group = MLX5_FLOW_TABLE_LEVEL_POLICY, + .priority = 0, + .ingress = 0, + .egress = 0, + .transfer = 1, + .reserved = 0, + }; + uint32_t domain = MLX5_MTR_DOMAIN_TRANSFER; + int i; + + mtr_policy = mlx5_flow_meter_policy_find(dev, fm->policy_id, NULL); + MLX5_ASSERT(mtr_policy); + if (!mtr_policy->is_hierarchy) + return 0; + next_fm = mlx5_flow_meter_find(priv, + mtr_policy->act_cnt[RTE_COLOR_GREEN].next_mtr_id, NULL); + if (!next_fm) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, NULL, + "Failed to find next meter in hierarchy."); + } + if (!next_fm->drop_cnt) + goto exit; + color_reg_c_idx = mlx5_flow_get_reg_id(dev, MLX5_MTR_COLOR, 0, error); + sub_policy = mtr_policy->sub_policys[domain][0]; + for (i = 0; i < RTE_COLORS; i++) { + bool rule_exist = false; + struct mlx5_meter_policy_action_container *act_cnt; + + if (i >= RTE_COLOR_YELLOW) + break; + TAILQ_FOREACH(color_rule, + &sub_policy->color_rules[i], next_port) + if (color_rule->src_port == src_port) { + rule_exist = true; + break; + } + if (rule_exist) + continue; + color_rule = mlx5_malloc(MLX5_MEM_ZERO, + sizeof(struct mlx5_sub_policy_color_rule), + 0, SOCKET_ID_ANY); + if (!color_rule) + return rte_flow_error_set(error, ENOMEM, + 
RTE_FLOW_ERROR_TYPE_ACTION, + NULL, "No memory to create tag color rule."); + color_rule->src_port = src_port; + attr.priority = i; + next_policy = mlx5_flow_meter_policy_find(dev, + next_fm->policy_id, NULL); + MLX5_ASSERT(next_policy); + next_sub_policy = next_policy->sub_policys[domain][0]; + tbl_data = container_of(next_sub_policy->tbl_rsc, + struct mlx5_flow_tbl_data_entry, tbl); + act_cnt = &mtr_policy->act_cnt[i]; + if (mtr_first) { + acts.dv_actions[0] = next_fm->meter_action; + acts.dv_actions[1] = act_cnt->modify_hdr->action; + } else { + acts.dv_actions[0] = act_cnt->modify_hdr->action; + acts.dv_actions[1] = next_fm->meter_action; + } + acts.dv_actions[2] = tbl_data->jump.action; + acts.actions_n = 3; + if (mlx5_flow_meter_attach(priv, next_fm, &attr, error)) { + next_fm = NULL; + goto err_exit; + } + if (__flow_dv_create_policy_matcher(dev, color_reg_c_idx, + i, sub_policy, &attr, true, item, + &color_rule->matcher, error)) { + rte_flow_error_set(error, errno, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, + "Failed to create hierarchy meter matcher."); + goto err_exit; + } + if (__flow_dv_create_policy_flow(dev, color_reg_c_idx, + (enum rte_color)i, + color_rule->matcher->matcher_object, + acts.actions_n, acts.dv_actions, + true, item, + &color_rule->rule, &attr)) { + rte_flow_error_set(error, errno, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, + "Failed to create hierarchy meter rule."); + goto err_exit; + } + TAILQ_INSERT_TAIL(&sub_policy->color_rules[i], + color_rule, next_port); + } +exit: + /** + * Recursive call to iterate all meters in hierarchy and + * create needed rules. 
+ */ + return flow_dv_meter_hierarchy_rule_create(dev, next_fm, + src_port, item, error); +err_exit: + if (color_rule) { + if (color_rule->rule) + mlx5_flow_os_destroy_flow(color_rule->rule); + if (color_rule->matcher) { + struct mlx5_flow_tbl_data_entry *tbl = + container_of(color_rule->matcher->tbl, + typeof(*tbl), tbl); + mlx5_cache_unregister(&tbl->matchers, + &color_rule->matcher->entry); + } + mlx5_free(color_rule); + } + if (next_fm) + mlx5_flow_meter_detach(priv, next_fm); + return -rte_errno; +} + /** * Destroy the sub policy table with RX queue. * @@ -16966,6 +17179,7 @@ const struct mlx5_flow_driver_ops mlx5_flow_dv_drv_ops = { .create_def_policy = flow_dv_create_def_policy, .destroy_def_policy = flow_dv_destroy_def_policy, .meter_sub_policy_rss_prepare = flow_dv_meter_sub_policy_rss_prepare, + .meter_hierarchy_rule_create = flow_dv_meter_hierarchy_rule_create, .destroy_sub_policy_with_rxq = flow_dv_destroy_sub_policy_with_rxq, .counter_alloc = flow_dv_counter_allocate, .counter_free = flow_dv_counter_free,

From patchwork Tue Jul 6 13:14:49 2021
X-Patchwork-Submitter: Shun Hao
X-Patchwork-Id: 95384
X-Patchwork-Delegate: rasland@nvidia.com
Subject: [dpdk-dev] [PATCH v1 3/4] net/mlx5: meter hierarchy destroy and cleanup
From: Shun Hao
Date: Tue, 6 Jul 2021 16:14:49 +0300
Message-ID: <20210706131450.30917-4-shunh@nvidia.com>
List-Id: DPDK patches and discussions

When a hierarchy meter is created, its color rules take a reference on the
next meter, so destroying the hierarchy meter must also drop the next
meter's reference count. When flushing all meters of a port, destroy all
hierarchy meters and their policies first, so the last meter in each
hierarchy is dereferenced; then no meter holds a reference and all meters
can be destroyed.

Signed-off-by: Shun Hao
Acked-by: Matan Azrad
---
 drivers/net/mlx5/mlx5_flow_dv.c    |  15 +++-
 drivers/net/mlx5/mlx5_flow_meter.c | 132 +++++++++++++++++++++++++++++
 2 files changed, 145 insertions(+), 2 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c index 119de09809..681e6fb07c 100644 --- a/drivers/net/mlx5/mlx5_flow_dv.c +++ b/drivers/net/mlx5/mlx5_flow_dv.c @@ -14588,12 +14588,20 @@ static void __flow_dv_destroy_sub_policy_rules(struct rte_eth_dev *dev, struct mlx5_flow_meter_sub_policy *sub_policy) { + struct mlx5_priv *priv = dev->data->dev_private; struct mlx5_flow_tbl_data_entry *tbl; + struct mlx5_flow_meter_policy *policy = sub_policy->main_policy; + struct mlx5_flow_meter_info *next_fm; struct mlx5_sub_policy_color_rule *color_rule; void *tmp; - int i; + uint32_t i; for (i = 0; i < RTE_COLORS; i++) { + next_fm = NULL; + if (i == RTE_COLOR_GREEN && policy && + policy->act_cnt[i].fate_action == MLX5_FLOW_FATE_MTR) + next_fm = mlx5_flow_meter_find(priv, + policy->act_cnt[i].next_mtr_id, NULL); TAILQ_FOREACH_SAFE(color_rule, &sub_policy->color_rules[i], next_port, tmp) { claim_zero(mlx5_flow_os_destroy_flow(color_rule->rule)); @@ -14604,11 +14612,14 @@
__flow_dv_destroy_sub_policy_rules(struct rte_eth_dev *dev, TAILQ_REMOVE(&sub_policy->color_rules[i], color_rule, next_port); mlx5_free(color_rule); + if (next_fm) + mlx5_flow_meter_detach(priv, next_fm); } } for (i = 0; i < MLX5_MTR_RTE_COLORS; i++) { if (sub_policy->rix_hrxq[i]) { - mlx5_hrxq_release(dev, sub_policy->rix_hrxq[i]); + if (policy && !policy->is_hierarchy) + mlx5_hrxq_release(dev, sub_policy->rix_hrxq[i]); sub_policy->rix_hrxq[i] = 0; } if (sub_policy->jump_tbl[i]) { diff --git a/drivers/net/mlx5/mlx5_flow_meter.c b/drivers/net/mlx5/mlx5_flow_meter.c index 03f7e120e1..78eb2a60f9 100644 --- a/drivers/net/mlx5/mlx5_flow_meter.c +++ b/drivers/net/mlx5/mlx5_flow_meter.c @@ -1891,6 +1891,136 @@ mlx5_flow_meter_rxq_flush(struct rte_eth_dev *dev) } } +/** + * Iterate a meter hierarchy and flush all meters and policies if possible. + * + * @param[in] dev + * Pointer to Ethernet device. + * @param[in] fm + * Pointer to flow meter. + * @param[in] mtr_idx + * .Meter's index + * @param[out] error + * Pointer to rte meter error structure. + * + * @return + * 0 on success, a negative errno value otherwise and rte_errno is set. 
+ */ +static int +mlx5_flow_meter_flush_hierarchy(struct rte_eth_dev *dev, + struct mlx5_flow_meter_info *fm, + uint32_t mtr_idx, + struct rte_mtr_error *error) +{ + struct mlx5_priv *priv = dev->data->dev_private; + struct mlx5_flow_meter_policy *policy; + uint32_t policy_id; + struct mlx5_flow_meter_info *next_fm; + uint32_t next_mtr_idx; + struct mlx5_flow_meter_policy *next_policy = NULL; + + policy = mlx5_flow_meter_policy_find(dev, fm->policy_id, NULL); + MLX5_ASSERT(policy); + while (!fm->ref_cnt && policy->is_hierarchy) { + policy_id = fm->policy_id; + next_fm = mlx5_flow_meter_find(priv, + policy->act_cnt[RTE_COLOR_GREEN].next_mtr_id, + &next_mtr_idx); + if (next_fm) { + next_policy = mlx5_flow_meter_policy_find(dev, + next_fm->policy_id, + NULL); + MLX5_ASSERT(next_policy); + } + if (mlx5_flow_meter_params_flush(dev, fm, mtr_idx)) + return -rte_mtr_error_set(error, ENOTSUP, + RTE_MTR_ERROR_TYPE_MTR_ID, + NULL, + "Failed to flush meter."); + if (policy->ref_cnt) + break; + if (__mlx5_flow_meter_policy_delete(dev, policy_id, + policy, error, true)) + return -rte_errno; + mlx5_free(policy); + if (!next_fm || !next_policy) + break; + fm = next_fm; + mtr_idx = next_mtr_idx; + policy = next_policy; + } + return 0; +} + +/** + * Flush all the hierarchy meters and their policies. + * + * @param[in] dev + * Pointer to Ethernet device. + * @param[out] error + * Pointer to rte meter error structure. + * + * @return + * 0 on success, a negative errno value otherwise and rte_errno is set. 
+ */ +static int +mlx5_flow_meter_flush_all_hierarchies(struct rte_eth_dev *dev, + struct rte_mtr_error *error) +{ + struct mlx5_priv *priv = dev->data->dev_private; + struct mlx5_flow_meter_info *fm; + struct mlx5_flow_meter_policy *policy; + struct mlx5_flow_meter_sub_policy *sub_policy; + struct mlx5_flow_meter_info *next_fm; + struct mlx5_aso_mtr *aso_mtr; + uint32_t mtr_idx = 0; + uint32_t i, policy_idx; + void *entry; + + if (!priv->mtr_idx_tbl || !priv->policy_idx_tbl) + return 0; + MLX5_L3T_FOREACH(priv->mtr_idx_tbl, i, entry) { + mtr_idx = *(uint32_t *)entry; + if (!mtr_idx) + continue; + aso_mtr = mlx5_aso_meter_by_idx(priv, mtr_idx); + fm = &aso_mtr->fm; + if (fm->ref_cnt || fm->def_policy) + continue; + if (mlx5_flow_meter_flush_hierarchy(dev, fm, mtr_idx, error)) + return -rte_errno; + } + MLX5_L3T_FOREACH(priv->policy_idx_tbl, i, entry) { + policy_idx = *(uint32_t *)entry; + sub_policy = mlx5_ipool_get + (priv->sh->ipool[MLX5_IPOOL_MTR_POLICY], + policy_idx); + if (!sub_policy) + return -rte_mtr_error_set(error, + EINVAL, + RTE_MTR_ERROR_TYPE_METER_POLICY_ID, + NULL, "Meter policy invalid."); + policy = sub_policy->main_policy; + if (!policy || !policy->is_hierarchy || policy->ref_cnt) + continue; + next_fm = mlx5_flow_meter_find(priv, + policy->act_cnt[RTE_COLOR_GREEN].next_mtr_id, + &mtr_idx); + if (__mlx5_flow_meter_policy_delete(dev, i, policy, + error, true)) + return -rte_mtr_error_set(error, + EINVAL, + RTE_MTR_ERROR_TYPE_METER_POLICY_ID, + NULL, "Meter policy invalid."); + mlx5_free(policy); + if (!next_fm || next_fm->ref_cnt || next_fm->def_policy) + continue; + if (mlx5_flow_meter_flush_hierarchy(dev, next_fm, + mtr_idx, error)) + return -rte_errno; + } + return 0; +} /** * Flush meter configuration. 
* @@ -1919,6 +2049,8 @@ mlx5_flow_meter_flush(struct rte_eth_dev *dev, struct rte_mtr_error *error) if (!priv->mtr_en) return 0; if (priv->sh->meter_aso_en) { + if (mlx5_flow_meter_flush_all_hierarchies(dev, error)) + return -rte_errno; if (priv->mtr_idx_tbl) { MLX5_L3T_FOREACH(priv->mtr_idx_tbl, i, entry) { mtr_idx = *(uint32_t *)entry; From patchwork Tue Jul 6 13:14:50 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Shun Hao X-Patchwork-Id: 95385 X-Patchwork-Delegate: rasland@nvidia.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 81A2EA0C47; Tue, 6 Jul 2021 15:15:58 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id BBF5341262; Tue, 6 Jul 2021 15:15:53 +0200 (CEST) Received: from NAM11-BN8-obe.outbound.protection.outlook.com (mail-bn8nam11on2052.outbound.protection.outlook.com [40.107.236.52]) by mails.dpdk.org (Postfix) with ESMTP id 6F79D41377 for ; Tue, 6 Jul 2021 15:15:52 +0200 (CEST) ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none; b=KDBQ1ajVamu7hgRtIqYIlmzvDEOGuNAJrBL5PigJx0a06noIR+iMKuqnL4BX2MdpMZZ12DwDCINtXDQS4m9DHk1WaGS/bwQtu2mfloy75VQI9jSDJDmwKSQ75LA+bz0o+MqvgHdl2VtCTra4lIrEh3fmkwdUEVTG/kIMlpUsIm/95z4W3vvB+L4TUjQsp0Cetqo5djZ3By/ltvjrdjAq9GaaY92eg/D5w1CMMW1iyMJ7kQ1DTM1q2aKpz8nTSeKyLvclFJwiF0lgr165B+bkvOTkJUsqgQ3mmpQdBH6GQD+zPhW1GUHcT/6zHhiN9DrxUEQT0e8nKe6FEOEsr9XGSg== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; s=arcselector9901; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck; bh=HWbMa8OdTNdJ0xQXoVZ/KNy2XJagpFDE8yssdwTd6Og=; 
From: Shun Hao
To: "Shahaf Shuler"
Date: Tue, 6 Jul 2021 16:14:50 +0300
Message-ID: <20210706131450.30917-5-shunh@nvidia.com>
X-Mailer: git-send-email 2.20.0
In-Reply-To: <20210706131450.30917-1-shunh@nvidia.com>
References: <20210706131450.30917-1-shunh@nvidia.com>
Subject: [dpdk-dev] [PATCH v1 4/4] net/mlx5: validate meter action in policy

This adds validation for creating a meter policy that contains a meter
action. Currently, the meter action is only allowed for the green color
in a policy, and at most 8 meters are supported in one meter hierarchy.

Signed-off-by: Shun Hao
Acked-by: Matan Azrad
---
 doc/guides/nics/mlx5.rst               | 15 ++++
 doc/guides/rel_notes/release_21_08.rst |  6 ++
 drivers/net/mlx5/mlx5_flow_dv.c        | 98 ++++++++++++++++++++++++++
 3 files changed, 119 insertions(+)

diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
index a16af32e67..de04931f80 100644
--- a/doc/guides/nics/mlx5.rst
+++ b/doc/guides/nics/mlx5.rst
@@ -107,6 +107,7 @@ Features
 - 21844 flow priorities for ingress or egress flow groups greater than 0 and for any transfer
   flow group.
 - Flow metering, including meter policy API.
+- Flow meter hierarchy.
 - Flow integrity offload API.
 - Connection tracking.
 - Sub-Function representors.
@@ -1927,3 +1928,17 @@ This section demonstrates how to use the shared meter. A meter M can be created
 on port X and to be shared with a port Y on the same switch domain by the next way:

   flow create X ingress transfer pattern eth / port_id id is Y / end actions meter mtr_id M / end
+
+How to use meter hierarchy
+--------------------------
+
+This section demonstrates how to create and use a meter hierarchy.
+A termination meter M can be the policy green action of another termination meter N.
+The two meters are thus chained together as a hierarchy. Using meter N in a flow
+will apply both meters to that flow.
+
+  add port meter policy 0 1 g_actions queue index 0 / end y_actions end r_actions drop / end
+  create port meter 0 M 1 1 yes 0xffff 1 0
+  add port meter policy 0 2 g_actions meter mtr_id M / end y_actions end r_actions drop / end
+  create port meter 0 N 2 2 yes 0xffff 1 0
+  flow create 0 ingress group 1 pattern eth / end actions meter mtr_id N / end
diff --git a/doc/guides/rel_notes/release_21_08.rst b/doc/guides/rel_notes/release_21_08.rst
index 0a05cb02fa..b29d78f4de 100644
--- a/doc/guides/rel_notes/release_21_08.rst
+++ b/doc/guides/rel_notes/release_21_08.rst
@@ -68,6 +68,12 @@ New Features
   usecases. Configuration happens via standard rawdev enq/deq operations. See
   the :doc:`../rawdevs/cnxk_bphy` rawdev guide for more details on this driver.

+* **Updated Mellanox mlx5 driver.**
+
+  Updated the Mellanox mlx5 driver with new features and improvements, including:
+
+  * Added support for meter hierarchy.
+
 Removed Items
 -------------
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index 681e6fb07c..c085deed50 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -16858,6 +16858,78 @@ flow_dv_action_validate(struct rte_eth_dev *dev,
	}
 }

+/**
+ * Validate the meter hierarchy chain for meter policy.
+ *
+ * @param[in] dev
+ *   Pointer to the Ethernet device structure.
+ * @param[in] meter_id
+ *   Meter id.
+ * @param[in] action_flags
+ *   Holds the actions detected until now.
+ * @param[out] is_rss
+ *   Is RSS or not.
+ * @param[out] hierarchy_domain
+ *   The domain bitmap for hierarchy policy.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL. Initialized in case of
+ *   error only.
+ *
+ * @return
+ *   0 on success, otherwise negative errno value with error set.
+ */
+static int
+flow_dv_validate_policy_mtr_hierarchy(struct rte_eth_dev *dev,
+				      uint32_t meter_id,
+				      uint64_t action_flags,
+				      bool *is_rss,
+				      uint8_t *hierarchy_domain,
+				      struct rte_mtr_error *error)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	struct mlx5_flow_meter_info *fm;
+	struct mlx5_flow_meter_policy *policy;
+	uint8_t cnt = 1;
+
+	if (action_flags & (MLX5_FLOW_FATE_ACTIONS |
+			    MLX5_FLOW_FATE_ESWITCH_ACTIONS))
+		return -rte_mtr_error_set(error, EINVAL,
+				RTE_MTR_ERROR_TYPE_POLICER_ACTION_GREEN,
+				NULL,
+				"Multiple fate actions not supported.");
+	while (true) {
+		fm = mlx5_flow_meter_find(priv, meter_id, NULL);
+		if (!fm)
+			return -rte_mtr_error_set(error, EINVAL,
+					RTE_MTR_ERROR_TYPE_MTR_ID, NULL,
+					"Meter not found in meter hierarchy.");
+		if (fm->def_policy)
+			return -rte_mtr_error_set(error, EINVAL,
+					RTE_MTR_ERROR_TYPE_MTR_ID, NULL,
+			"Non termination meter not supported in hierarchy.");
+		policy = mlx5_flow_meter_policy_find(dev, fm->policy_id, NULL);
+		MLX5_ASSERT(policy);
+		if (!policy->is_hierarchy) {
+			if (policy->transfer)
+				*hierarchy_domain |=
+						MLX5_MTR_DOMAIN_TRANSFER_BIT;
+			if (policy->ingress)
+				*hierarchy_domain |=
+						MLX5_MTR_DOMAIN_INGRESS_BIT;
+			if (policy->egress)
+				*hierarchy_domain |= MLX5_MTR_DOMAIN_EGRESS_BIT;
+			*is_rss = policy->is_rss;
+			break;
+		}
+		meter_id = policy->act_cnt[RTE_COLOR_GREEN].next_mtr_id;
+		if (++cnt >= MLX5_MTR_CHAIN_MAX_NUM)
+			return -rte_mtr_error_set(error, EINVAL,
+					RTE_MTR_ERROR_TYPE_METER_POLICY, NULL,
+					"Exceed max hierarchy meter number.");
+	}
+	return 0;
+}
+
 /**
  * Validate meter policy actions.
  * Dispatcher for action type specific validation.
@@ -16893,6 +16965,8 @@ flow_dv_validate_mtr_policy_acts(struct rte_eth_dev *dev,
	struct rte_flow_error flow_err;
	uint8_t domain_color[RTE_COLORS] = {0};
	uint8_t def_domain = MLX5_MTR_ALL_DOMAIN_BIT;
+	uint8_t hierarchy_domain = 0;
+	const struct rte_flow_action_meter *mtr;

	if (!priv->config.dv_esw_en)
		def_domain &= ~MLX5_MTR_DOMAIN_TRANSFER_BIT;
@@ -17070,6 +17144,27 @@ flow_dv_validate_mtr_policy_acts(struct rte_eth_dev *dev,
				++actions_n;
				action_flags |= MLX5_FLOW_ACTION_JUMP;
				break;
+			case RTE_FLOW_ACTION_TYPE_METER:
+				if (i != RTE_COLOR_GREEN)
+					return -rte_mtr_error_set(error,
+						ENOTSUP,
+						RTE_MTR_ERROR_TYPE_METER_POLICY,
+						NULL, flow_err.message ?
+						flow_err.message :
+				"Meter hierarchy only supports GREEN color.");
+				mtr = act->conf;
+				ret = flow_dv_validate_policy_mtr_hierarchy(dev,
+							mtr->mtr_id,
+							action_flags,
+							is_rss,
+							&hierarchy_domain,
+							error);
+				if (ret)
+					return ret;
+				++actions_n;
+				action_flags |=
+				MLX5_FLOW_ACTION_METER_WITH_TERMINATED_POLICY;
+				break;
			default:
				return -rte_mtr_error_set(error, ENOTSUP,
					RTE_MTR_ERROR_TYPE_METER_POLICY,
@@ -17090,6 +17185,9 @@ flow_dv_validate_mtr_policy_acts(struct rte_eth_dev *dev,
			 * so MARK action only in ingress domain.
			 */
			domain_color[i] = MLX5_MTR_DOMAIN_INGRESS_BIT;
+		else if (action_flags &
+			 MLX5_FLOW_ACTION_METER_WITH_TERMINATED_POLICY)
+			domain_color[i] = hierarchy_domain;
		else
			domain_color[i] = def_domain;
		/*