From patchwork Wed Feb 28 17:00:45 2024
X-Patchwork-Submitter: Dariusz Sosnowski
X-Patchwork-Id: 137451
X-Patchwork-Delegate: rasland@nvidia.com
From: Dariusz Sosnowski
To: Viacheslav Ovsiienko, Ori Kam, Suanming Mou, Matan Azrad
CC: Raslan Darawsheh, Bing Zhao
Subject: [PATCH 10/11] net/mlx5: reuse flow fields
Date: Wed, 28 Feb 2024 18:00:45 +0100
Message-ID: <20240228170046.176600-11-dsosnowski@nvidia.com>
In-Reply-To: <20240228170046.176600-1-dsosnowski@nvidia.com>
References: <20240228170046.176600-1-dsosnowski@nvidia.com>
X-Mailer: git-send-email 2.39.2
Each time a flow is allocated in the mlx5 PMD, the whole buffer, covering
both the rte_flow_hw and mlx5dr_rule parts, is zeroed. This introduces
wasted work, because:

- the mlx5dr layer does not assume that mlx5dr_rule must be initialized,
- flow action translation in the mlx5 PMD does not require most of the
  rte_flow_hw fields to be zeroed.

To reduce this wasted work, this patch introduces a flags field in the
flow definition. Each flow field which is not always initialized during
flow creation has a corresponding flag that is set when the field's value
is valid (in other words, when the field was set during flow creation).
This mechanism allows the PMD to:

- remove zeroing from flow allocation,
- access some fields (especially the ones in rte_flow_hw_aux) if and
  only if the corresponding flag is set.

Signed-off-by: Dariusz Sosnowski
---
For illustration, a standalone sketch of this validity-flag pattern is
appended after the diff.

 drivers/net/mlx5/mlx5_flow.h    | 24 ++++++++-
 drivers/net/mlx5/mlx5_flow_hw.c | 93 +++++++++++++++++++++------------
 2 files changed, 83 insertions(+), 34 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index 1c67d8dd35..a01e970d04 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -1267,6 +1267,26 @@ enum {
 	MLX5_FLOW_HW_FLOW_OP_TYPE_RSZ_TBL_MOVE,
 };
 
+enum {
+	MLX5_FLOW_HW_FLOW_FLAG_CNT_ID = RTE_BIT32(0),
+	MLX5_FLOW_HW_FLOW_FLAG_FATE_JUMP = RTE_BIT32(1),
+	MLX5_FLOW_HW_FLOW_FLAG_FATE_HRXQ = RTE_BIT32(2),
+	MLX5_FLOW_HW_FLOW_FLAG_AGE_IDX = RTE_BIT32(3),
+	MLX5_FLOW_HW_FLOW_FLAG_MTR_ID = RTE_BIT32(4),
+	MLX5_FLOW_HW_FLOW_FLAG_MATCHER_SELECTOR = RTE_BIT32(5),
+	MLX5_FLOW_HW_FLOW_FLAG_UPD_FLOW = RTE_BIT32(6),
+};
+
+#define MLX5_FLOW_HW_FLOW_FLAGS_ALL ( \
+		MLX5_FLOW_HW_FLOW_FLAG_CNT_ID | \
+		MLX5_FLOW_HW_FLOW_FLAG_FATE_JUMP | \
+		MLX5_FLOW_HW_FLOW_FLAG_FATE_HRXQ | \
+		MLX5_FLOW_HW_FLOW_FLAG_AGE_IDX | \
+		MLX5_FLOW_HW_FLOW_FLAG_MTR_ID | \
+		MLX5_FLOW_HW_FLOW_FLAG_MATCHER_SELECTOR | \
+		MLX5_FLOW_HW_FLOW_FLAG_UPD_FLOW \
+	)
+
 #ifdef PEDANTIC
 #pragma GCC diagnostic ignored "-Wpedantic"
 #endif
@@ -1283,8 +1303,8 @@ struct rte_flow_hw {
 	uint32_t res_idx;
 	/** HWS flow rule index passed to mlx5dr. */
 	uint32_t rule_idx;
-	/** Fate action type. */
-	uint32_t fate_type;
+	/** Which flow fields (inline or in auxiliary struct) are used. */
+	uint32_t flags;
 	/** Ongoing flow operation type. */
 	uint8_t operation_type;
 	/** Index of pattern template this flow is based on. */
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 3252f76e64..4e4beb4428 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -2832,6 +2832,7 @@ flow_hw_shared_action_construct(struct rte_eth_dev *dev, uint32_t queue,
 					  &rule_act->action,
 					  &rule_act->counter.offset))
 			return -1;
+		flow->flags |= MLX5_FLOW_HW_FLOW_FLAG_CNT_ID;
 		flow->cnt_id = act_idx;
 		break;
 	case MLX5_INDIRECT_ACTION_TYPE_AGE:
@@ -2841,6 +2842,7 @@ flow_hw_shared_action_construct(struct rte_eth_dev *dev, uint32_t queue,
 		 * it in flow destroy.
 		 */
 		mlx5_flow_hw_aux_set_age_idx(flow, aux, act_idx);
+		flow->flags |= MLX5_FLOW_HW_FLOW_FLAG_AGE_IDX;
 		if (action_flags & MLX5_FLOW_ACTION_INDIRECT_COUNT)
 			/*
 			 * The mutual update for idirect AGE & COUNT will be
@@ -2856,6 +2858,7 @@ flow_hw_shared_action_construct(struct rte_eth_dev *dev, uint32_t queue,
 						  &param->queue_id, &age_cnt,
 						  idx) < 0)
 				return -1;
+			flow->flags |= MLX5_FLOW_HW_FLOW_FLAG_CNT_ID;
 			flow->cnt_id = age_cnt;
 			param->nb_cnts++;
 		} else {
@@ -3160,7 +3163,7 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
 			rule_acts[act_data->action_dst].action =
 			(!!attr.group) ? jump->hws_action : jump->root_action;
 			flow->jump = jump;
-			flow->fate_type = MLX5_FLOW_FATE_JUMP;
+			flow->flags |= MLX5_FLOW_HW_FLOW_FLAG_FATE_JUMP;
 			break;
 		case RTE_FLOW_ACTION_TYPE_RSS:
 		case RTE_FLOW_ACTION_TYPE_QUEUE:
@@ -3171,7 +3174,7 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
 				return -1;
 			rule_acts[act_data->action_dst].action = hrxq->action;
 			flow->hrxq = hrxq;
-			flow->fate_type = MLX5_FLOW_FATE_QUEUE;
+			flow->flags |= MLX5_FLOW_HW_FLOW_FLAG_FATE_HRXQ;
 			break;
 		case MLX5_RTE_FLOW_ACTION_TYPE_RSS:
 			item_flags = table->its[it_idx]->item_flags;
@@ -3250,7 +3253,7 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
 				(!!attr.group) ? jump->hws_action :
 						 jump->root_action;
 			flow->jump = jump;
-			flow->fate_type = MLX5_FLOW_FATE_JUMP;
+			flow->flags |= MLX5_FLOW_HW_FLOW_FLAG_FATE_JUMP;
 			if (mlx5_aso_mtr_wait(priv->sh, MLX5_HW_INV_QUEUE, aso_mtr))
 				return -1;
 			break;
@@ -3270,6 +3273,7 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
 			if (age_idx == 0)
 				return -rte_errno;
 			mlx5_flow_hw_aux_set_age_idx(flow, aux, age_idx);
+			flow->flags |= MLX5_FLOW_HW_FLOW_FLAG_AGE_IDX;
 			if (at->action_flags & MLX5_FLOW_ACTION_INDIRECT_COUNT)
 				/*
 				 * When AGE uses indirect counter, no need to
@@ -3292,6 +3296,7 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
 					       );
 			if (ret != 0)
 				return ret;
+			flow->flags |= MLX5_FLOW_HW_FLOW_FLAG_CNT_ID;
 			flow->cnt_id = cnt_id;
 			break;
 		case MLX5_RTE_FLOW_ACTION_TYPE_COUNT:
@@ -3303,6 +3308,7 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
 					       );
 			if (ret != 0)
 				return ret;
+			flow->flags |= MLX5_FLOW_HW_FLOW_FLAG_CNT_ID;
 			flow->cnt_id = act_data->shared_counter.id;
 			break;
 		case RTE_FLOW_ACTION_TYPE_CONNTRACK:
@@ -3335,13 +3341,18 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
 				return ret;
 			aux = mlx5_flow_hw_aux(dev->data->port_id, flow);
 			mlx5_flow_hw_aux_set_mtr_id(flow, aux, mtr_idx);
+			flow->flags |= MLX5_FLOW_HW_FLOW_FLAG_MTR_ID;
 			break;
 		default:
 			break;
 		}
 	}
 	if (at->action_flags & MLX5_FLOW_ACTION_INDIRECT_COUNT) {
+		/* If indirect count is used, then CNT_ID flag should be set. */
+		MLX5_ASSERT(flow->flags & MLX5_FLOW_HW_FLOW_FLAG_CNT_ID);
 		if (at->action_flags & MLX5_FLOW_ACTION_INDIRECT_AGE) {
+			/* If indirect AGE is used, then AGE_IDX flag should be set. */
+			MLX5_ASSERT(flow->flags & MLX5_FLOW_HW_FLOW_FLAG_AGE_IDX);
 			aux = mlx5_flow_hw_aux(dev->data->port_id, flow);
 			age_idx = mlx5_flow_hw_aux_get_age_idx(flow, aux) &
 				  MLX5_HWS_AGE_IDX_MASK;
@@ -3379,8 +3390,10 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
 								flow->res_idx - 1;
 		rule_acts[hw_acts->push_remove_pos].ipv6_ext.header =
 				ap->ipv6_push_data;
 	}
-	if (mlx5_hws_cnt_id_valid(hw_acts->cnt_id))
+	if (mlx5_hws_cnt_id_valid(hw_acts->cnt_id)) {
+		flow->flags |= MLX5_FLOW_HW_FLOW_FLAG_CNT_ID;
 		flow->cnt_id = hw_acts->cnt_id;
+	}
 	return 0;
 }
 
@@ -3493,7 +3506,7 @@ flow_hw_async_flow_create(struct rte_eth_dev *dev,
 				   "Port must be started before enqueueing flow operations");
 		return NULL;
 	}
-	flow = mlx5_ipool_zmalloc(table->flow, &flow_idx);
+	flow = mlx5_ipool_malloc(table->flow, &flow_idx);
 	if (!flow)
 		goto error;
 	rule_acts = flow_hw_get_dr_action_buffer(priv, table, action_template_index, queue);
@@ -3512,6 +3525,7 @@ flow_hw_async_flow_create(struct rte_eth_dev *dev,
 	} else {
 		flow->res_idx = flow_idx;
 	}
+	flow->flags = 0;
 	/*
 	 * Set the flow operation type here in order to know if the flow memory
 	 * should be freed or not when get the result from dequeue.
@@ -3563,6 +3577,7 @@ flow_hw_async_flow_create(struct rte_eth_dev *dev,
 					       (struct mlx5dr_rule *)flow->rule);
 		rte_rwlock_read_unlock(&table->matcher_replace_rwlk);
 		aux->matcher_selector = selector;
+		flow->flags |= MLX5_FLOW_HW_FLOW_FLAG_MATCHER_SELECTOR;
 	}
 	if (likely(!ret)) {
 		flow_hw_q_inc_flow_ops(priv, queue);
@@ -3636,7 +3651,7 @@ flow_hw_async_flow_create_by_index(struct rte_eth_dev *dev,
 				   "Flow rule index exceeds table size");
 		return NULL;
 	}
-	flow = mlx5_ipool_zmalloc(table->flow, &flow_idx);
+	flow = mlx5_ipool_malloc(table->flow, &flow_idx);
 	if (!flow)
 		goto error;
 	rule_acts = flow_hw_get_dr_action_buffer(priv, table, action_template_index, queue);
@@ -3655,6 +3670,7 @@ flow_hw_async_flow_create_by_index(struct rte_eth_dev *dev,
 	} else {
 		flow->res_idx = flow_idx;
 	}
+	flow->flags = 0;
 	/*
 	 * Set the flow operation type here in order to know if the flow memory
 	 * should be freed or not when get the result from dequeue.
@@ -3696,6 +3712,7 @@ flow_hw_async_flow_create_by_index(struct rte_eth_dev *dev,
 					       (struct mlx5dr_rule *)flow->rule);
 		rte_rwlock_read_unlock(&table->matcher_replace_rwlk);
 		aux->matcher_selector = selector;
+		flow->flags |= MLX5_FLOW_HW_FLOW_FLAG_MATCHER_SELECTOR;
 	}
 	if (likely(!ret)) {
 		flow_hw_q_inc_flow_ops(priv, queue);
@@ -3783,6 +3800,7 @@ flow_hw_async_flow_update(struct rte_eth_dev *dev,
 	} else {
 		nf->res_idx = of->res_idx;
 	}
+	nf->flags = 0;
 	/* Indicate the construction function to set the proper fields. */
 	nf->operation_type = MLX5_FLOW_HW_FLOW_OP_TYPE_UPDATE;
 	/*
@@ -3812,6 +3830,7 @@ flow_hw_async_flow_update(struct rte_eth_dev *dev,
 	 */
 	of->operation_type = MLX5_FLOW_HW_FLOW_OP_TYPE_UPDATE;
 	of->user_data = user_data;
+	of->flags |= MLX5_FLOW_HW_FLOW_FLAG_UPD_FLOW;
 	rule_attr.user_data = of;
 	ret = mlx5dr_rule_action_update((struct mlx5dr_rule *)of->rule,
 					action_template_index, rule_acts, &rule_attr);
@@ -3906,13 +3925,14 @@ flow_hw_age_count_release(struct mlx5_priv *priv, uint32_t queue,
 	uint32_t *cnt_queue;
 	uint32_t age_idx = aux->orig.age_idx;
 
+	MLX5_ASSERT(flow->flags & MLX5_FLOW_HW_FLOW_FLAG_CNT_ID);
 	if (mlx5_hws_cnt_is_shared(priv->hws_cpool, flow->cnt_id)) {
-		if (age_idx && !mlx5_hws_age_is_indirect(age_idx)) {
+		if ((flow->flags & MLX5_FLOW_HW_FLOW_FLAG_AGE_IDX) &&
+		    !mlx5_hws_age_is_indirect(age_idx)) {
 			/* Remove this AGE parameter from indirect counter.
 			 */
 			mlx5_hws_cnt_age_set(priv->hws_cpool, flow->cnt_id, 0);
 			/* Release the AGE parameter. */
 			mlx5_hws_age_action_destroy(priv, age_idx, error);
-			mlx5_flow_hw_aux_set_age_idx(flow, aux, 0);
 		}
 		return;
 	}
@@ -3920,8 +3940,7 @@ flow_hw_age_count_release(struct mlx5_priv *priv, uint32_t queue,
 	cnt_queue = mlx5_hws_cnt_is_pool_shared(priv) ? NULL : &queue;
 	/* Put the counter first to reduce the race risk in BG thread. */
 	mlx5_hws_cnt_pool_put(priv->hws_cpool, cnt_queue, &flow->cnt_id);
-	flow->cnt_id = 0;
-	if (age_idx) {
+	if (flow->flags & MLX5_FLOW_HW_FLOW_FLAG_AGE_IDX) {
 		if (mlx5_hws_age_is_indirect(age_idx)) {
 			uint32_t idx = age_idx & MLX5_HWS_AGE_IDX_MASK;
 
@@ -3930,7 +3949,6 @@ flow_hw_age_count_release(struct mlx5_priv *priv, uint32_t queue,
 			/* Release the AGE parameter. */
 			mlx5_hws_age_action_destroy(priv, age_idx, error);
 		}
-		mlx5_flow_hw_aux_set_age_idx(flow, aux, age_idx);
 	}
 }
 
@@ -4060,34 +4078,35 @@ hw_cmpl_flow_update_or_destroy(struct rte_eth_dev *dev,
 	struct mlx5_priv *priv = dev->data->dev_private;
 	struct mlx5_aso_mtr_pool *pool = priv->hws_mpool;
 	struct rte_flow_template_table *table = flow->table;
-	struct rte_flow_hw_aux *aux = mlx5_flow_hw_aux(dev->data->port_id, flow);
 	/* Release the original resource index in case of update. */
 	uint32_t res_idx = flow->res_idx;
 
-	if (flow->fate_type == MLX5_FLOW_FATE_JUMP)
-		flow_hw_jump_release(dev, flow->jump);
-	else if (flow->fate_type == MLX5_FLOW_FATE_QUEUE)
-		mlx5_hrxq_obj_release(dev, flow->hrxq);
-	if (mlx5_hws_cnt_id_valid(flow->cnt_id))
-		flow_hw_age_count_release(priv, queue,
-					  flow, error);
-	if (aux->orig.mtr_id) {
-		mlx5_ipool_free(pool->idx_pool, aux->orig.mtr_id);
-		aux->orig.mtr_id = 0;
-	}
-	if (flow->operation_type != MLX5_FLOW_HW_FLOW_OP_TYPE_UPDATE) {
-		if (table->resource)
-			mlx5_ipool_free(table->resource, res_idx);
-		mlx5_ipool_free(table->flow, flow->idx);
-	} else {
+	if (flow->flags & MLX5_FLOW_HW_FLOW_FLAGS_ALL) {
 		struct rte_flow_hw_aux *aux = mlx5_flow_hw_aux(dev->data->port_id, flow);
-		struct rte_flow_hw *upd_flow = &aux->upd_flow;
 
-		rte_memcpy(flow, upd_flow, offsetof(struct rte_flow_hw, rule));
-		aux->orig = aux->upd;
-		flow->operation_type = MLX5_FLOW_HW_FLOW_OP_TYPE_CREATE;
+		if (flow->flags & MLX5_FLOW_HW_FLOW_FLAG_FATE_JUMP)
+			flow_hw_jump_release(dev, flow->jump);
+		else if (flow->flags & MLX5_FLOW_HW_FLOW_FLAG_FATE_HRXQ)
+			mlx5_hrxq_obj_release(dev, flow->hrxq);
+		if (flow->flags & MLX5_FLOW_HW_FLOW_FLAG_CNT_ID)
+			flow_hw_age_count_release(priv, queue, flow, error);
+		if (flow->flags & MLX5_FLOW_HW_FLOW_FLAG_MTR_ID)
+			mlx5_ipool_free(pool->idx_pool, aux->orig.mtr_id);
+		if (flow->flags & MLX5_FLOW_HW_FLOW_FLAG_UPD_FLOW) {
+			struct rte_flow_hw *upd_flow = &aux->upd_flow;
+
+			rte_memcpy(flow, upd_flow, offsetof(struct rte_flow_hw, rule));
+			aux->orig = aux->upd;
+			flow->operation_type = MLX5_FLOW_HW_FLOW_OP_TYPE_CREATE;
+			if (table->resource)
+				mlx5_ipool_free(table->resource, res_idx);
+		}
+	}
+	if (flow->operation_type == MLX5_FLOW_HW_FLOW_OP_TYPE_DESTROY ||
+	    flow->operation_type == MLX5_FLOW_HW_FLOW_OP_TYPE_RSZ_TBL_DESTROY) {
 		if (table->resource)
 			mlx5_ipool_free(table->resource, res_idx);
+		mlx5_ipool_free(table->flow, flow->idx);
 	}
 }
 
@@ -4102,6 +4121,7 @@ hw_cmpl_resizable_tbl(struct rte_eth_dev *dev,
 	uint32_t selector = aux->matcher_selector;
 	uint32_t other_selector = (selector + 1) & 1;
 
+	MLX5_ASSERT(flow->flags & MLX5_FLOW_HW_FLOW_FLAG_MATCHER_SELECTOR);
 	switch (flow->operation_type) {
 	case MLX5_FLOW_HW_FLOW_OP_TYPE_RSZ_TBL_CREATE:
 		rte_atomic_fetch_add_explicit
@@ -11275,10 +11295,18 @@ flow_hw_query(struct rte_eth_dev *dev, struct rte_flow *flow,
 		case RTE_FLOW_ACTION_TYPE_VOID:
 			break;
 		case RTE_FLOW_ACTION_TYPE_COUNT:
+			if (!(hw_flow->flags & MLX5_FLOW_HW_FLOW_FLAG_CNT_ID))
+				return rte_flow_error_set(error, EINVAL,
+							  RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+							  "counter not defined in the rule");
 			ret = flow_hw_query_counter(dev, hw_flow->cnt_id, data,
 						    error);
 			break;
 		case RTE_FLOW_ACTION_TYPE_AGE:
+			if (!(hw_flow->flags & MLX5_FLOW_HW_FLOW_FLAG_AGE_IDX))
+				return rte_flow_error_set(error, EINVAL,
+							  RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+							  "age data not available");
 			aux = mlx5_flow_hw_aux(dev->data->port_id, hw_flow);
 			ret = flow_hw_query_age(dev,
 						mlx5_flow_hw_aux_get_age_idx(hw_flow, aux),
 						data, error);
@@ -12571,6 +12599,7 @@ flow_hw_update_resized(struct rte_eth_dev *dev, uint32_t queue,
 		.burst = attr->postpone,
 	};
 
+	MLX5_ASSERT(hw_flow->flags & MLX5_FLOW_HW_FLOW_FLAG_MATCHER_SELECTOR);
 	/**
 	 * mlx5dr_matcher_resize_rule_move() accepts original table matcher -
 	 * the one that was used BEFORE table resize.
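
--
For illustration only, not part of the patch: a minimal, self-contained C
sketch of the validity-flag pattern this commit applies. All names below
(flow_rec, FLAG_CNT_ID, flow_alloc, ...) are hypothetical stand-ins, not
the mlx5 PMD definitions.

/*
 * Illustrative sketch only -- NOT the mlx5 PMD code. Shows how per-field
 * validity flags let allocation skip zeroing the whole record.
 */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

enum {
	FLAG_CNT_ID  = UINT32_C(1) << 0, /* cnt_id holds a valid value */
	FLAG_AGE_IDX = UINT32_C(1) << 1, /* age_idx holds a valid value */
};

struct flow_rec {
	uint32_t flags;   /* which optional fields below are valid */
	uint32_t cnt_id;  /* read only when FLAG_CNT_ID is set */
	uint32_t age_idx; /* read only when FLAG_AGE_IDX is set */
};

/* Allocation skips zeroing the whole record; clearing flags is enough. */
static struct flow_rec *
flow_alloc(void)
{
	struct flow_rec *f = malloc(sizeof(*f)); /* no calloc()/memset() */

	if (f != NULL)
		f->flags = 0;
	return f;
}

/* Every optional field is written together with its validity flag. */
static void
flow_set_counter(struct flow_rec *f, uint32_t cnt_id)
{
	f->cnt_id = cnt_id;
	f->flags |= FLAG_CNT_ID;
}

/* On destroy, a field is released only if its flag says it was set. */
static void
flow_release(struct flow_rec *f)
{
	if (f->flags & FLAG_CNT_ID)
		printf("releasing counter %u\n", f->cnt_id);
	if (f->flags & FLAG_AGE_IDX)
		printf("releasing AGE parameter %u\n", f->age_idx);
	free(f);
}

int
main(void)
{
	struct flow_rec *f = flow_alloc();

	if (f == NULL)
		return 1;
	flow_set_counter(f, 42);
	/* age_idx was never set: it holds garbage but is never read. */
	flow_release(f);
	return 0;
}

The point of the pattern is that clearing a single uint32_t replaces
zeroing the whole flow buffer; every optional field is written together
with its flag and read only behind a flag check, which is what the
MLX5_ASSERT() calls in the hunks above enforce in debug builds.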