From patchwork Fri Feb 2 11:56:07 2024
X-Patchwork-Submitter: Gregory Etelson
X-Patchwork-Id: 136315
X-Patchwork-Delegate: rasland@nvidia.com
From: Gregory Etelson
Cc: Yevgeny Kliteynik, Dariusz Sosnowski, Viacheslav Ovsiienko, Ori Kam,
    Suanming Mou, Matan Azrad
Subject: [PATCH 1/5] net/mlx5/hws: add support for resizable matchers
Date: Fri, 2 Feb 2024 13:56:07 +0200
Message-ID: <20240202115611.288892-2-getelson@nvidia.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20240202115611.288892-1-getelson@nvidia.com>
References: <20240202115611.288892-1-getelson@nvidia.com>
From: Yevgeny Kliteynik

Add support for matcher resize with the following new API calls:
- mlx5dr_matcher_resize_set_target
- mlx5dr_matcher_resize_rule_move

The first function links two matchers and allows moving rules from the
src matcher to the dst matcher. Both matchers should have the same
characteristics (e.g. same mt, same at). It is the user's responsibility
to make sure that the dst matcher has enough space for the moved rules.
After this function, the user can move rules from the src matcher into
the dst matcher and is no longer allowed to insert rules into the src
matcher.

The second function moves a single rule from the matcher that is being
resized to the bigger matcher. Moving a rule includes creating a new rule
in the destination matcher and deleting the rule from the source matcher.
This operation creates a single completion.

Signed-off-by: Yevgeny Kliteynik
---
 drivers/net/mlx5/hws/mlx5dr.h         |  39 +++++
 drivers/net/mlx5/hws/mlx5dr_definer.c |   5 +-
 drivers/net/mlx5/hws/mlx5dr_definer.h |   3 +
 drivers/net/mlx5/hws/mlx5dr_matcher.c | 181 +++++++++++++++++++-
 drivers/net/mlx5/hws/mlx5dr_matcher.h |  21 +++
 drivers/net/mlx5/hws/mlx5dr_rule.c    | 229 ++++++++++++++++++++++++--
 drivers/net/mlx5/hws/mlx5dr_rule.h    |  34 +++-
 drivers/net/mlx5/hws/mlx5dr_send.c    |  45 +++++
 drivers/net/mlx5/mlx5_flow.h          |   2 +
 9 files changed, 537 insertions(+), 22 deletions(-)

diff --git a/drivers/net/mlx5/hws/mlx5dr.h b/drivers/net/mlx5/hws/mlx5dr.h
index d88f73ab57..9d8f8e13dc 100644
--- a/drivers/net/mlx5/hws/mlx5dr.h
+++ b/drivers/net/mlx5/hws/mlx5dr.h
@@ -139,6 +139,8 @@ struct mlx5dr_matcher_attr {
 	/* Define the insertion and distribution modes for this matcher */
 	enum mlx5dr_matcher_insert_mode insert_mode;
 	enum mlx5dr_matcher_distribute_mode distribute_mode;
+	/* Define whether the created matcher supports resizing into a bigger matcher */
+	bool resizable;
 	union {
 		struct {
 			uint8_t sz_row_log;
@@ -419,6 +421,43 @@ int mlx5dr_matcher_destroy(struct mlx5dr_matcher *matcher);
 int mlx5dr_matcher_attach_at(struct mlx5dr_matcher *matcher,
 			     struct mlx5dr_action_template *at);
 
+/* Link two matchers and enable moving rules from src matcher to dst matcher.
+ * Both matchers must be in the same table type, must be created with 'resizable'
+ * property, and should have the same characteristics (e.g. same mt, same at).
+ *
+ * It is the user's responsibility to make sure that the dst matcher
+ * was allocated with the appropriate size.
+ *
+ * Once the function is completed, the user is:
+ * - allowed to move rules from src into dst matcher
+ * - no longer allowed to insert rules to the src matcher
+ *
+ * The user is always allowed to insert rules to the dst matcher and
+ * to delete rules from any matcher.
+ * + * @param[in] src_matcher + * source matcher for moving rules from + * @param[in] dst_matcher + * destination matcher for moving rules to + * @return zero on successful move, non zero otherwise. + */ +int mlx5dr_matcher_resize_set_target(struct mlx5dr_matcher *src_matcher, + struct mlx5dr_matcher *dst_matcher); + +/* Enqueue moving rule operation: moving rule from src matcher to a dst matcher + * + * @param[in] src_matcher + * matcher that the rule belongs to + * @param[in] rule + * the rule to move + * @param[in] attr + * rule attributes + * @return zero on success, non zero otherwise. + */ +int mlx5dr_matcher_resize_rule_move(struct mlx5dr_matcher *src_matcher, + struct mlx5dr_rule *rule, + struct mlx5dr_rule_attr *attr); + /* Get the size of the rule handle (mlx5dr_rule) to be used on rule creation. * * @return size in bytes of rule handle struct. diff --git a/drivers/net/mlx5/hws/mlx5dr_definer.c b/drivers/net/mlx5/hws/mlx5dr_definer.c index 0b60479406..6703c233bb 100644 --- a/drivers/net/mlx5/hws/mlx5dr_definer.c +++ b/drivers/net/mlx5/hws/mlx5dr_definer.c @@ -2919,9 +2919,8 @@ int mlx5dr_definer_get_id(struct mlx5dr_definer *definer) return definer->obj->id; } -static int -mlx5dr_definer_compare(struct mlx5dr_definer *definer_a, - struct mlx5dr_definer *definer_b) +int mlx5dr_definer_compare(struct mlx5dr_definer *definer_a, + struct mlx5dr_definer *definer_b) { int i; diff --git a/drivers/net/mlx5/hws/mlx5dr_definer.h b/drivers/net/mlx5/hws/mlx5dr_definer.h index 6f1c99e37a..9c3db53ff3 100644 --- a/drivers/net/mlx5/hws/mlx5dr_definer.h +++ b/drivers/net/mlx5/hws/mlx5dr_definer.h @@ -673,4 +673,7 @@ int mlx5dr_definer_init_cache(struct mlx5dr_definer_cache **cache); void mlx5dr_definer_uninit_cache(struct mlx5dr_definer_cache *cache); +int mlx5dr_definer_compare(struct mlx5dr_definer *definer_a, + struct mlx5dr_definer *definer_b); + #endif diff --git a/drivers/net/mlx5/hws/mlx5dr_matcher.c b/drivers/net/mlx5/hws/mlx5dr_matcher.c index 4ea161eae6..5075342d72 100644 --- a/drivers/net/mlx5/hws/mlx5dr_matcher.c +++ b/drivers/net/mlx5/hws/mlx5dr_matcher.c @@ -704,6 +704,65 @@ static int mlx5dr_matcher_check_and_process_at(struct mlx5dr_matcher *matcher, return 0; } +static int +mlx5dr_matcher_resize_init(struct mlx5dr_matcher *src_matcher) +{ + struct mlx5dr_matcher_resize_data *resize_data; + + resize_data = simple_calloc(1, sizeof(*resize_data)); + if (!resize_data) { + rte_errno = ENOMEM; + return rte_errno; + } + + resize_data->stc = src_matcher->action_ste.stc; + resize_data->action_ste_rtc_0 = src_matcher->action_ste.rtc_0; + resize_data->action_ste_rtc_1 = src_matcher->action_ste.rtc_1; + resize_data->action_ste_pool = src_matcher->action_ste.max_stes ? 
+ src_matcher->action_ste.pool : + NULL; + + /* Place the new resized matcher on the dst matcher's list */ + LIST_INSERT_HEAD(&src_matcher->resize_dst->resize_data, + resize_data, next); + + /* Move all the previous resized matchers to the dst matcher's list */ + while (!LIST_EMPTY(&src_matcher->resize_data)) { + resize_data = LIST_FIRST(&src_matcher->resize_data); + LIST_REMOVE(resize_data, next); + LIST_INSERT_HEAD(&src_matcher->resize_dst->resize_data, + resize_data, next); + } + + return 0; +} + +static void +mlx5dr_matcher_resize_uninit(struct mlx5dr_matcher *matcher) +{ + struct mlx5dr_matcher_resize_data *resize_data; + + if (!mlx5dr_matcher_is_resizable(matcher) || + !matcher->action_ste.max_stes) + return; + + while (!LIST_EMPTY(&matcher->resize_data)) { + resize_data = LIST_FIRST(&matcher->resize_data); + LIST_REMOVE(resize_data, next); + + mlx5dr_action_free_single_stc(matcher->tbl->ctx, + matcher->tbl->type, + &resize_data->stc); + + if (matcher->tbl->type == MLX5DR_TABLE_TYPE_FDB) + mlx5dr_cmd_destroy_obj(resize_data->action_ste_rtc_1); + mlx5dr_cmd_destroy_obj(resize_data->action_ste_rtc_0); + if (resize_data->action_ste_pool) + mlx5dr_pool_destroy(resize_data->action_ste_pool); + simple_free(resize_data); + } +} + static int mlx5dr_matcher_bind_at(struct mlx5dr_matcher *matcher) { bool is_jumbo = mlx5dr_matcher_mt_is_jumbo(matcher->mt); @@ -790,7 +849,9 @@ static void mlx5dr_matcher_unbind_at(struct mlx5dr_matcher *matcher) { struct mlx5dr_table *tbl = matcher->tbl; - if (!matcher->action_ste.max_stes || matcher->flags & MLX5DR_MATCHER_FLAGS_COLLISION) + if (!matcher->action_ste.max_stes || + matcher->flags & MLX5DR_MATCHER_FLAGS_COLLISION || + mlx5dr_matcher_is_in_resize(matcher)) return; mlx5dr_action_free_single_stc(tbl->ctx, tbl->type, &matcher->action_ste.stc); @@ -947,6 +1008,10 @@ mlx5dr_matcher_process_attr(struct mlx5dr_cmd_query_caps *caps, DR_LOG(ERR, "Root matcher does not support at attaching"); goto not_supported; } + if (attr->resizable) { + DR_LOG(ERR, "Root matcher does not support resizeing"); + goto not_supported; + } return 0; } @@ -960,6 +1025,8 @@ mlx5dr_matcher_process_attr(struct mlx5dr_cmd_query_caps *caps, attr->insert_mode == MLX5DR_MATCHER_INSERT_BY_HASH) attr->table.sz_col_log = mlx5dr_matcher_rules_to_tbl_depth(attr->rule.num_log); + matcher->flags |= attr->resizable ? 
MLX5DR_MATCHER_FLAGS_RESIZABLE : 0; + return mlx5dr_matcher_check_attr_sz(caps, attr); not_supported: @@ -1018,6 +1085,7 @@ static int mlx5dr_matcher_create_and_connect(struct mlx5dr_matcher *matcher) static void mlx5dr_matcher_destroy_and_disconnect(struct mlx5dr_matcher *matcher) { + mlx5dr_matcher_resize_uninit(matcher); mlx5dr_matcher_disconnect(matcher); mlx5dr_matcher_create_uninit_shared(matcher); mlx5dr_matcher_destroy_rtc(matcher, DR_MATCHER_RTC_TYPE_MATCH); @@ -1452,3 +1520,114 @@ int mlx5dr_match_template_destroy(struct mlx5dr_match_template *mt) simple_free(mt); return 0; } + +static int mlx5dr_matcher_resize_precheck(struct mlx5dr_matcher *src_matcher, + struct mlx5dr_matcher *dst_matcher) +{ + int i; + + if (mlx5dr_table_is_root(src_matcher->tbl) || + mlx5dr_table_is_root(dst_matcher->tbl)) { + DR_LOG(ERR, "Src/dst matcher belongs to root table - resize unsupported"); + goto out_einval; + } + + if (src_matcher->tbl->type != dst_matcher->tbl->type) { + DR_LOG(ERR, "Table type mismatch for src/dst matchers"); + goto out_einval; + } + + if (mlx5dr_matcher_req_fw_wqe(src_matcher) || + mlx5dr_matcher_req_fw_wqe(dst_matcher)) { + DR_LOG(ERR, "Matchers require FW WQE - resize unsupported"); + goto out_einval; + } + + if (!mlx5dr_matcher_is_resizable(src_matcher) || + !mlx5dr_matcher_is_resizable(dst_matcher)) { + DR_LOG(ERR, "Src/dst matcher is not resizable"); + goto out_einval; + } + + if (mlx5dr_matcher_is_insert_by_idx(src_matcher) != + mlx5dr_matcher_is_insert_by_idx(dst_matcher)) { + DR_LOG(ERR, "Src/dst matchers insert mode mismatch"); + goto out_einval; + } + + if (mlx5dr_matcher_is_in_resize(src_matcher) || + mlx5dr_matcher_is_in_resize(dst_matcher)) { + DR_LOG(ERR, "Src/dst matcher is already in resize"); + goto out_einval; + } + + /* Compare match templates - make sure the definers are equivalent */ + if (src_matcher->num_of_mt != dst_matcher->num_of_mt) { + DR_LOG(ERR, "Src/dst matcher match templates mismatch"); + goto out_einval; + } + + if (src_matcher->action_ste.max_stes != dst_matcher->action_ste.max_stes) { + DR_LOG(ERR, "Src/dst matcher max STEs mismatch"); + goto out_einval; + } + + for (i = 0; i < src_matcher->num_of_mt; i++) { + if (mlx5dr_definer_compare(src_matcher->mt[i].definer, + dst_matcher->mt[i].definer)) { + DR_LOG(ERR, "Src/dst matcher definers mismatch"); + goto out_einval; + } + } + + return 0; + +out_einval: + rte_errno = EINVAL; + return rte_errno; +} + +int mlx5dr_matcher_resize_set_target(struct mlx5dr_matcher *src_matcher, + struct mlx5dr_matcher *dst_matcher) +{ + int ret = 0; + + pthread_spin_lock(&src_matcher->tbl->ctx->ctrl_lock); + + if (mlx5dr_matcher_resize_precheck(src_matcher, dst_matcher)) { + ret = -rte_errno; + goto out; + } + + src_matcher->resize_dst = dst_matcher; + + if (mlx5dr_matcher_resize_init(src_matcher)) { + src_matcher->resize_dst = NULL; + ret = -rte_errno; + } + +out: + pthread_spin_unlock(&src_matcher->tbl->ctx->ctrl_lock); + return ret; +} + +int mlx5dr_matcher_resize_rule_move(struct mlx5dr_matcher *src_matcher, + struct mlx5dr_rule *rule, + struct mlx5dr_rule_attr *attr) +{ + if (unlikely(!mlx5dr_matcher_is_in_resize(src_matcher))) { + DR_LOG(ERR, "Matcher is not resizable or not in resize"); + goto out_einval; + } + + if (unlikely(src_matcher != rule->matcher)) { + DR_LOG(ERR, "Rule doesn't belong to src matcher"); + goto out_einval; + } + + return mlx5dr_rule_move_hws_add(rule, attr); + +out_einval: + rte_errno = EINVAL; + return -rte_errno; +} diff --git a/drivers/net/mlx5/hws/mlx5dr_matcher.h 
b/drivers/net/mlx5/hws/mlx5dr_matcher.h index 363a61fd41..0f2bf96e8b 100644 --- a/drivers/net/mlx5/hws/mlx5dr_matcher.h +++ b/drivers/net/mlx5/hws/mlx5dr_matcher.h @@ -26,6 +26,7 @@ enum mlx5dr_matcher_flags { MLX5DR_MATCHER_FLAGS_RANGE_DEFINER = 1 << 0, MLX5DR_MATCHER_FLAGS_HASH_DEFINER = 1 << 1, MLX5DR_MATCHER_FLAGS_COLLISION = 1 << 2, + MLX5DR_MATCHER_FLAGS_RESIZABLE = 1 << 3, }; struct mlx5dr_match_template { @@ -59,6 +60,14 @@ struct mlx5dr_matcher_action_ste { uint8_t max_stes; }; +struct mlx5dr_matcher_resize_data { + struct mlx5dr_pool_chunk stc; + struct mlx5dr_devx_obj *action_ste_rtc_0; + struct mlx5dr_devx_obj *action_ste_rtc_1; + struct mlx5dr_pool *action_ste_pool; + LIST_ENTRY(mlx5dr_matcher_resize_data) next; +}; + struct mlx5dr_matcher { struct mlx5dr_table *tbl; struct mlx5dr_matcher_attr attr; @@ -71,10 +80,12 @@ struct mlx5dr_matcher { uint8_t flags; struct mlx5dr_devx_obj *end_ft; struct mlx5dr_matcher *col_matcher; + struct mlx5dr_matcher *resize_dst; struct mlx5dr_matcher_match_ste match_ste; struct mlx5dr_matcher_action_ste action_ste; struct mlx5dr_definer *hash_definer; LIST_ENTRY(mlx5dr_matcher) next; + LIST_HEAD(resize_data_head, mlx5dr_matcher_resize_data) resize_data; }; static inline bool @@ -89,6 +100,16 @@ mlx5dr_matcher_mt_is_range(struct mlx5dr_match_template *mt) return (!!mt->range_definer); } +static inline bool mlx5dr_matcher_is_resizable(struct mlx5dr_matcher *matcher) +{ + return !!(matcher->flags & MLX5DR_MATCHER_FLAGS_RESIZABLE); +} + +static inline bool mlx5dr_matcher_is_in_resize(struct mlx5dr_matcher *matcher) +{ + return !!matcher->resize_dst; +} + static inline bool mlx5dr_matcher_req_fw_wqe(struct mlx5dr_matcher *matcher) { /* Currently HWS doesn't support hash different from match or range */ diff --git a/drivers/net/mlx5/hws/mlx5dr_rule.c b/drivers/net/mlx5/hws/mlx5dr_rule.c index fa19303b91..03e62a3f14 100644 --- a/drivers/net/mlx5/hws/mlx5dr_rule.c +++ b/drivers/net/mlx5/hws/mlx5dr_rule.c @@ -111,6 +111,23 @@ static void mlx5dr_rule_init_dep_wqe(struct mlx5dr_send_ring_dep_wqe *dep_wqe, } } +static void mlx5dr_rule_move_get_rtc(struct mlx5dr_rule *rule, + struct mlx5dr_send_ste_attr *ste_attr) +{ + struct mlx5dr_matcher *dst_matcher = rule->matcher->resize_dst; + + if (rule->resize_info->rtc_0) { + ste_attr->rtc_0 = dst_matcher->match_ste.rtc_0->id; + ste_attr->retry_rtc_0 = dst_matcher->col_matcher ? + dst_matcher->col_matcher->match_ste.rtc_0->id : 0; + } + if (rule->resize_info->rtc_1) { + ste_attr->rtc_1 = dst_matcher->match_ste.rtc_1->id; + ste_attr->retry_rtc_1 = dst_matcher->col_matcher ? + dst_matcher->col_matcher->match_ste.rtc_1->id : 0; + } +} + static void mlx5dr_rule_gen_comp(struct mlx5dr_send_engine *queue, struct mlx5dr_rule *rule, bool err, @@ -131,12 +148,41 @@ static void mlx5dr_rule_gen_comp(struct mlx5dr_send_engine *queue, mlx5dr_send_engine_gen_comp(queue, user_data, comp_status); } +static void +mlx5dr_rule_save_resize_info(struct mlx5dr_rule *rule, + struct mlx5dr_send_ste_attr *ste_attr) +{ + rule->resize_info = simple_calloc(1, sizeof(*rule->resize_info)); + if (unlikely(!rule->resize_info)) { + assert(rule->resize_info); + rte_errno = ENOMEM; + } + + memcpy(rule->resize_info->ctrl_seg, ste_attr->wqe_ctrl, + sizeof(rule->resize_info->ctrl_seg)); + memcpy(rule->resize_info->data_seg, ste_attr->wqe_data, + sizeof(rule->resize_info->data_seg)); + + rule->resize_info->action_ste_pool = rule->matcher->action_ste.max_stes ? 
+ rule->matcher->action_ste.pool : + NULL; +} + +static void mlx5dr_rule_clear_resize_info(struct mlx5dr_rule *rule) +{ + if (rule->resize_info) { + simple_free(rule->resize_info); + rule->resize_info = NULL; + } +} + static void mlx5dr_rule_save_delete_info(struct mlx5dr_rule *rule, struct mlx5dr_send_ste_attr *ste_attr) { struct mlx5dr_match_template *mt = rule->matcher->mt; bool is_jumbo = mlx5dr_matcher_mt_is_jumbo(mt); + struct mlx5dr_rule_match_tag *tag; if (unlikely(mlx5dr_matcher_req_fw_wqe(rule->matcher))) { uint8_t *src_tag; @@ -158,17 +204,31 @@ mlx5dr_rule_save_delete_info(struct mlx5dr_rule *rule, return; } + if (unlikely(mlx5dr_matcher_is_resizable(rule->matcher))) { + mlx5dr_rule_save_resize_info(rule, ste_attr); + tag = &rule->resize_info->tag; + } else { + tag = &rule->tag; + } + if (is_jumbo) memcpy(rule->tag.jumbo, ste_attr->wqe_data->jumbo, MLX5DR_JUMBO_TAG_SZ); else - memcpy(rule->tag.match, ste_attr->wqe_data->tag, MLX5DR_MATCH_TAG_SZ); + memcpy(tag->match, ste_attr->wqe_data->tag, MLX5DR_MATCH_TAG_SZ); } static void mlx5dr_rule_clear_delete_info(struct mlx5dr_rule *rule) { - if (unlikely(mlx5dr_matcher_req_fw_wqe(rule->matcher))) + if (unlikely(mlx5dr_matcher_req_fw_wqe(rule->matcher))) { simple_free(rule->tag_ptr); + return; + } + + if (unlikely(mlx5dr_matcher_is_resizable(rule->matcher))) { + mlx5dr_rule_clear_resize_info(rule); + return; + } } static void @@ -185,8 +245,10 @@ mlx5dr_rule_load_delete_info(struct mlx5dr_rule *rule, ste_attr->range_wqe_tag = &rule->tag_ptr[1]; ste_attr->send_attr.range_definer_id = rule->tag_ptr[1].reserved[1]; } - } else { + } else if (likely(!mlx5dr_matcher_is_resizable(rule->matcher))) { ste_attr->wqe_tag = &rule->tag; + } else { + ste_attr->wqe_tag = &rule->resize_info->tag; } } @@ -217,6 +279,7 @@ static int mlx5dr_rule_alloc_action_ste(struct mlx5dr_rule *rule, void mlx5dr_rule_free_action_ste_idx(struct mlx5dr_rule *rule) { struct mlx5dr_matcher *matcher = rule->matcher; + struct mlx5dr_pool *pool; if (rule->action_ste_idx > -1 && !matcher->attr.optimize_using_rule_idx && @@ -226,7 +289,11 @@ void mlx5dr_rule_free_action_ste_idx(struct mlx5dr_rule *rule) /* This release is safe only when the rule match part was deleted */ ste.order = rte_log2_u32(matcher->action_ste.max_stes); ste.offset = rule->action_ste_idx; - mlx5dr_pool_chunk_free(matcher->action_ste.pool, &ste); + + /* Free the original action pool if rule was resized */ + pool = mlx5dr_matcher_is_resizable(matcher) ? 
rule->resize_info->action_ste_pool : + matcher->action_ste.pool; + mlx5dr_pool_chunk_free(pool, &ste); } } @@ -263,6 +330,23 @@ static void mlx5dr_rule_create_init(struct mlx5dr_rule *rule, apply->require_dep = 0; } +static void mlx5dr_rule_move_init(struct mlx5dr_rule *rule, + struct mlx5dr_rule_attr *attr) +{ + /* Save the old RTC IDs to be later used in match STE delete */ + rule->resize_info->rtc_0 = rule->rtc_0; + rule->resize_info->rtc_1 = rule->rtc_1; + rule->resize_info->rule_idx = attr->rule_idx; + + rule->rtc_0 = 0; + rule->rtc_1 = 0; + + rule->pending_wqes = 0; + rule->action_ste_idx = -1; + rule->status = MLX5DR_RULE_STATUS_CREATING; + rule->resize_info->state = MLX5DR_RULE_RESIZE_STATE_WRITING; +} + static int mlx5dr_rule_create_hws_fw_wqe(struct mlx5dr_rule *rule, struct mlx5dr_rule_attr *attr, uint8_t mt_idx, @@ -343,7 +427,9 @@ static int mlx5dr_rule_create_hws_fw_wqe(struct mlx5dr_rule *rule, /* Send WQEs to FW */ mlx5dr_send_stes_fw(queue, &ste_attr); - /* Backup TAG on the rule for deletion */ + /* Backup TAG on the rule for deletion, and save ctrl/data + * segments to be used when resizing the matcher. + */ mlx5dr_rule_save_delete_info(rule, &ste_attr); mlx5dr_send_engine_inc_rule(queue); @@ -466,7 +552,9 @@ static int mlx5dr_rule_create_hws(struct mlx5dr_rule *rule, mlx5dr_send_ste(queue, &ste_attr); } - /* Backup TAG on the rule for deletion, only after insertion */ + /* Backup TAG on the rule for deletion and resize info for + * moving rules to a new matcher, only after insertion. + */ if (!is_update) mlx5dr_rule_save_delete_info(rule, &ste_attr); @@ -493,7 +581,7 @@ static void mlx5dr_rule_destroy_failed_hws(struct mlx5dr_rule *rule, /* Rule failed now we can safely release action STEs */ mlx5dr_rule_free_action_ste_idx(rule); - /* Clear complex tag */ + /* Clear complex tag or info that was saved for matcher resizing */ mlx5dr_rule_clear_delete_info(rule); /* If a rule that was indicated as burst (need to trigger HW) has failed @@ -568,12 +656,12 @@ static int mlx5dr_rule_destroy_hws(struct mlx5dr_rule *rule, mlx5dr_rule_load_delete_info(rule, &ste_attr); - if (unlikely(fw_wqe)) { + if (unlikely(fw_wqe)) mlx5dr_send_stes_fw(queue, &ste_attr); - mlx5dr_rule_clear_delete_info(rule); - } else { + else mlx5dr_send_ste(queue, &ste_attr); - } + + mlx5dr_rule_clear_delete_info(rule); return 0; } @@ -661,9 +749,11 @@ static int mlx5dr_rule_destroy_root(struct mlx5dr_rule *rule, return 0; } -static int mlx5dr_rule_enqueue_precheck(struct mlx5dr_context *ctx, +static int mlx5dr_rule_enqueue_precheck(struct mlx5dr_rule *rule, struct mlx5dr_rule_attr *attr) { + struct mlx5dr_context *ctx = rule->matcher->tbl->ctx; + if (unlikely(!attr->user_data)) { rte_errno = EINVAL; return rte_errno; @@ -678,6 +768,113 @@ static int mlx5dr_rule_enqueue_precheck(struct mlx5dr_context *ctx, return 0; } +static int mlx5dr_rule_enqueue_precheck_create(struct mlx5dr_rule *rule, + struct mlx5dr_rule_attr *attr) +{ + if (unlikely(mlx5dr_matcher_is_in_resize(rule->matcher))) { + /* Matcher in resize - new rules are not allowed */ + rte_errno = EAGAIN; + return rte_errno; + } + + return mlx5dr_rule_enqueue_precheck(rule, attr); +} + +static int mlx5dr_rule_enqueue_precheck_update(struct mlx5dr_rule *rule, + struct mlx5dr_rule_attr *attr) +{ + if (unlikely(mlx5dr_matcher_is_resizable(rule->matcher))) { + /* Update is not supported on resizable matchers */ + rte_errno = ENOTSUP; + return rte_errno; + } + + return mlx5dr_rule_enqueue_precheck_create(rule, attr); +} + +int 
mlx5dr_rule_move_hws_remove(struct mlx5dr_rule *rule, + void *queue_ptr, + void *user_data) +{ + bool is_jumbo = mlx5dr_matcher_mt_is_jumbo(rule->matcher->mt); + struct mlx5dr_wqe_gta_ctrl_seg empty_wqe_ctrl = {0}; + struct mlx5dr_matcher *matcher = rule->matcher; + struct mlx5dr_send_engine *queue = queue_ptr; + struct mlx5dr_send_ste_attr ste_attr = {0}; + + rule->resize_info->state = MLX5DR_RULE_RESIZE_STATE_DELETING; + + ste_attr.send_attr.fence = 0; + ste_attr.send_attr.opmod = MLX5DR_WQE_GTA_OPMOD_STE; + ste_attr.send_attr.opcode = MLX5DR_WQE_OPCODE_TBL_ACCESS; + ste_attr.send_attr.len = MLX5DR_WQE_SZ_GTA_CTRL + MLX5DR_WQE_SZ_GTA_DATA; + ste_attr.send_attr.rule = rule; + ste_attr.send_attr.notify_hw = 1; + ste_attr.send_attr.user_data = user_data; + ste_attr.rtc_0 = rule->resize_info->rtc_0; + ste_attr.rtc_1 = rule->resize_info->rtc_1; + ste_attr.used_id_rtc_0 = &rule->resize_info->rtc_0; + ste_attr.used_id_rtc_1 = &rule->resize_info->rtc_1; + ste_attr.wqe_ctrl = &empty_wqe_ctrl; + ste_attr.wqe_tag_is_jumbo = is_jumbo; + ste_attr.gta_opcode = MLX5DR_WQE_GTA_OP_DEACTIVATE; + + if (unlikely(mlx5dr_matcher_is_insert_by_idx(matcher))) + ste_attr.direct_index = rule->resize_info->rule_idx; + + mlx5dr_rule_load_delete_info(rule, &ste_attr); + mlx5dr_send_ste(queue, &ste_attr); + + return 0; +} + +int mlx5dr_rule_move_hws_add(struct mlx5dr_rule *rule, + struct mlx5dr_rule_attr *attr) +{ + bool is_jumbo = mlx5dr_matcher_mt_is_jumbo(rule->matcher->mt); + struct mlx5dr_context *ctx = rule->matcher->tbl->ctx; + struct mlx5dr_matcher *matcher = rule->matcher; + struct mlx5dr_send_ste_attr ste_attr = {0}; + struct mlx5dr_send_engine *queue; + + if (unlikely(mlx5dr_rule_enqueue_precheck(rule, attr))) + return -rte_errno; + + queue = &ctx->send_queue[attr->queue_id]; + + if (unlikely(mlx5dr_send_engine_err(queue))) { + rte_errno = EIO; + return rte_errno; + } + + mlx5dr_rule_move_init(rule, attr); + + mlx5dr_rule_move_get_rtc(rule, &ste_attr); + + ste_attr.send_attr.opmod = MLX5DR_WQE_GTA_OPMOD_STE; + ste_attr.send_attr.opcode = MLX5DR_WQE_OPCODE_TBL_ACCESS; + ste_attr.send_attr.len = MLX5DR_WQE_SZ_GTA_CTRL + MLX5DR_WQE_SZ_GTA_DATA; + ste_attr.gta_opcode = MLX5DR_WQE_GTA_OP_ACTIVATE; + ste_attr.wqe_tag_is_jumbo = is_jumbo; + + ste_attr.send_attr.rule = rule; + ste_attr.send_attr.fence = 0; + ste_attr.send_attr.notify_hw = !attr->burst; + ste_attr.send_attr.user_data = attr->user_data; + + ste_attr.used_id_rtc_0 = &rule->rtc_0; + ste_attr.used_id_rtc_1 = &rule->rtc_1; + ste_attr.wqe_ctrl = (struct mlx5dr_wqe_gta_ctrl_seg *)rule->resize_info->ctrl_seg; + ste_attr.wqe_data = (struct mlx5dr_wqe_gta_data_seg_ste *)rule->resize_info->data_seg; + ste_attr.direct_index = mlx5dr_matcher_is_insert_by_idx(matcher) ? 
+ attr->rule_idx : 0; + + mlx5dr_send_ste(queue, &ste_attr); + mlx5dr_send_engine_inc_rule(queue); + + return 0; +} + int mlx5dr_rule_create(struct mlx5dr_matcher *matcher, uint8_t mt_idx, const struct rte_flow_item items[], @@ -686,13 +883,11 @@ int mlx5dr_rule_create(struct mlx5dr_matcher *matcher, struct mlx5dr_rule_attr *attr, struct mlx5dr_rule *rule_handle) { - struct mlx5dr_context *ctx; int ret; rule_handle->matcher = matcher; - ctx = matcher->tbl->ctx; - if (mlx5dr_rule_enqueue_precheck(ctx, attr)) + if (unlikely(mlx5dr_rule_enqueue_precheck_create(rule_handle, attr))) return -rte_errno; assert(matcher->num_of_mt >= mt_idx); @@ -720,7 +915,7 @@ int mlx5dr_rule_destroy(struct mlx5dr_rule *rule, { int ret; - if (mlx5dr_rule_enqueue_precheck(rule->matcher->tbl->ctx, attr)) + if (unlikely(mlx5dr_rule_enqueue_precheck(rule, attr))) return -rte_errno; if (unlikely(mlx5dr_table_is_root(rule->matcher->tbl))) @@ -753,7 +948,7 @@ int mlx5dr_rule_action_update(struct mlx5dr_rule *rule_handle, return -rte_errno; } - if (mlx5dr_rule_enqueue_precheck(matcher->tbl->ctx, attr)) + if (unlikely(mlx5dr_rule_enqueue_precheck_update(rule_handle, attr))) return -rte_errno; ret = mlx5dr_rule_create_hws(rule_handle, diff --git a/drivers/net/mlx5/hws/mlx5dr_rule.h b/drivers/net/mlx5/hws/mlx5dr_rule.h index f7d97eead5..14115fe329 100644 --- a/drivers/net/mlx5/hws/mlx5dr_rule.h +++ b/drivers/net/mlx5/hws/mlx5dr_rule.h @@ -10,7 +10,9 @@ enum { MLX5DR_ACTIONS_SZ = 12, MLX5DR_MATCH_TAG_SZ = 32, MLX5DR_JUMBO_TAG_SZ = 44, - MLX5DR_STE_SZ = 64, + MLX5DR_STE_SZ = MLX5DR_STE_CTRL_SZ + + MLX5DR_ACTIONS_SZ + + MLX5DR_MATCH_TAG_SZ, }; enum mlx5dr_rule_status { @@ -23,6 +25,12 @@ enum mlx5dr_rule_status { MLX5DR_RULE_STATUS_FAILED, }; +enum mlx5dr_rule_move_state { + MLX5DR_RULE_RESIZE_STATE_IDLE, + MLX5DR_RULE_RESIZE_STATE_WRITING, + MLX5DR_RULE_RESIZE_STATE_DELETING, +}; + struct mlx5dr_rule_match_tag { union { uint8_t jumbo[MLX5DR_JUMBO_TAG_SZ]; @@ -33,6 +41,17 @@ struct mlx5dr_rule_match_tag { }; }; +struct mlx5dr_rule_resize_info { + uint8_t state; + uint32_t rtc_0; + uint32_t rtc_1; + uint32_t rule_idx; + struct mlx5dr_pool *action_ste_pool; + struct mlx5dr_rule_match_tag tag; + uint8_t ctrl_seg[MLX5DR_WQE_SZ_GTA_CTRL]; /* Ctrl segment of STE: 48 bytes */ + uint8_t data_seg[MLX5DR_STE_SZ]; /* Data segment of STE: 64 bytes */ +}; + struct mlx5dr_rule { struct mlx5dr_matcher *matcher; union { @@ -40,6 +59,7 @@ struct mlx5dr_rule { /* Pointer to tag to store more than one tag */ struct mlx5dr_rule_match_tag *tag_ptr; struct ibv_flow *flow; + struct mlx5dr_rule_resize_info *resize_info; }; uint32_t rtc_0; /* The RTC into which the STE was inserted */ uint32_t rtc_1; /* The RTC into which the STE was inserted */ @@ -50,4 +70,16 @@ struct mlx5dr_rule { void mlx5dr_rule_free_action_ste_idx(struct mlx5dr_rule *rule); +int mlx5dr_rule_move_hws_remove(struct mlx5dr_rule *rule, + void *queue, void *user_data); + +int mlx5dr_rule_move_hws_add(struct mlx5dr_rule *rule, + struct mlx5dr_rule_attr *attr); + +static inline bool mlx5dr_rule_move_in_progress(struct mlx5dr_rule *rule) +{ + return rule->resize_info && + rule->resize_info->state != MLX5DR_RULE_RESIZE_STATE_IDLE; +} + #endif /* MLX5DR_RULE_H_ */ diff --git a/drivers/net/mlx5/hws/mlx5dr_send.c b/drivers/net/mlx5/hws/mlx5dr_send.c index 622d574bfa..936dfc1fe6 100644 --- a/drivers/net/mlx5/hws/mlx5dr_send.c +++ b/drivers/net/mlx5/hws/mlx5dr_send.c @@ -444,6 +444,46 @@ void mlx5dr_send_engine_flush_queue(struct mlx5dr_send_engine *queue) mlx5dr_send_engine_post_ring(sq, 
 					queue->uar, wqe_ctrl);
 }
 
+static void
+mlx5dr_send_engine_update_rule_resize(struct mlx5dr_send_engine *queue,
+				      struct mlx5dr_send_ring_priv *priv,
+				      enum rte_flow_op_status *status)
+{
+	switch (priv->rule->resize_info->state) {
+	case MLX5DR_RULE_RESIZE_STATE_WRITING:
+		if (priv->rule->status == MLX5DR_RULE_STATUS_FAILING) {
+			/* Backup original RTCs */
+			uint32_t orig_rtc_0 = priv->rule->resize_info->rtc_0;
+			uint32_t orig_rtc_1 = priv->rule->resize_info->rtc_1;
+
+			/* Delete partially failed move rule using resize_info */
+			priv->rule->resize_info->rtc_0 = priv->rule->rtc_0;
+			priv->rule->resize_info->rtc_1 = priv->rule->rtc_1;
+
+			/* Move rule to original RTC for future delete */
+			priv->rule->rtc_0 = orig_rtc_0;
+			priv->rule->rtc_1 = orig_rtc_1;
+		}
+		/* Clean leftovers */
+		mlx5dr_rule_move_hws_remove(priv->rule, queue, priv->user_data);
+		break;
+
+	case MLX5DR_RULE_RESIZE_STATE_DELETING:
+		if (priv->rule->status == MLX5DR_RULE_STATUS_FAILING) {
+			*status = RTE_FLOW_OP_ERROR;
+		} else {
+			*status = RTE_FLOW_OP_SUCCESS;
+			priv->rule->matcher = priv->rule->matcher->resize_dst;
+		}
+		priv->rule->resize_info->state = MLX5DR_RULE_RESIZE_STATE_IDLE;
+		priv->rule->status = MLX5DR_RULE_STATUS_CREATED;
+		break;
+
+	default:
+		break;
+	}
+}
+
 static void mlx5dr_send_engine_update_rule(struct mlx5dr_send_engine *queue,
 					   struct mlx5dr_send_ring_priv *priv,
 					   uint16_t wqe_cnt,
@@ -465,6 +505,11 @@ static void mlx5dr_send_engine_update_rule(struct mlx5dr_send_engine *queue,
 		/* Update rule status for the last completion */
 		if (!priv->rule->pending_wqes) {
+			if (unlikely(mlx5dr_rule_move_in_progress(priv->rule))) {
+				mlx5dr_send_engine_update_rule_resize(queue, priv, status);
+				return;
+			}
+
 			if (unlikely(priv->rule->status == MLX5DR_RULE_STATUS_FAILING)) {
 				/* Rule completely failed and doesn't require cleanup */
 				if (!priv->rule->rtc_0 && !priv->rule->rtc_1)
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index b35079b30a..b003e97dc9 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -17,6 +17,8 @@
 #include 
 #include "mlx5.h"
+#include "hws/mlx5dr.h"
+#include "rte_flow.h"
 #include "rte_pmd_mlx5.h"
 #include "hws/mlx5dr.h"
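The commit message and the new comments in mlx5dr.h above describe the intended
call sequence for the resize API. The following editor's sketch, which is not
part of the patch, shows one way an HWS user might grow a matcher with the new
calls. The helper function, its parameters and the attribute values are
hypothetical; the mlx5dr_* entry points and attribute fields are used as
declared in mlx5dr.h (signatures assumed). Completion draining and most error
handling are omitted.

#include <stdbool.h>
#include <rte_errno.h>
#include "mlx5dr.h"

/*
 * Editor's sketch, not part of the patch: grow a resizable matcher by
 * creating a bigger matcher with the same templates, linking it as the
 * resize target and moving every existing rule into it.  'tbl', 'mt',
 * 'at', 'rules', 'n_rules' and 'queue_id' are assumed to exist already.
 */
static int
example_grow_matcher(struct mlx5dr_table *tbl,
		     struct mlx5dr_match_template *mt,
		     struct mlx5dr_action_template *at,
		     struct mlx5dr_matcher **matcher,
		     struct mlx5dr_rule *rules, uint32_t n_rules,
		     uint16_t queue_id)
{
	struct mlx5dr_matcher_attr attr = {
		.resizable = true,	/* src and dst must both be resizable */
		.rule.num_log = 20,	/* example: larger rule capacity */
	};
	struct mlx5dr_rule_attr rule_attr = {
		.queue_id = queue_id,
	};
	struct mlx5dr_matcher *big;
	uint32_t i;

	/* Create the bigger destination matcher (assumed prototype). */
	big = mlx5dr_matcher_create(tbl, &mt, 1, &at, 1, &attr);
	if (!big)
		return -rte_errno;

	/* Link src -> dst; after this no new rules may enter the old matcher. */
	if (mlx5dr_matcher_resize_set_target(*matcher, big))
		return -rte_errno;

	/* Enqueue a move operation per rule; each generates one completion. */
	for (i = 0; i < n_rules; i++) {
		rule_attr.user_data = &rules[i];
		if (mlx5dr_matcher_resize_rule_move(*matcher, &rules[i],
						    &rule_attr))
			return -rte_errno;
	}

	/* Poll the send queue until all moves complete, then the old
	 * matcher can be destroyed and replaced by the new one.
	 */
	mlx5dr_matcher_destroy(*matcher);
	*matcher = big;
	return 0;
}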
From patchwork Fri Feb 2 11:56:08 2024
X-Patchwork-Submitter: Gregory Etelson
X-Patchwork-Id: 136317
X-Patchwork-Delegate: rasland@nvidia.com

From: Gregory Etelson
Cc: Dariusz Sosnowski, Viacheslav Ovsiienko, Ori Kam, Suanming Mou, Matan Azrad
Subject: [PATCH 2/5] net/mlx5: add resize function to ipool
Date: Fri, 2 Feb 2024 13:56:08 +0200
Message-ID: <20240202115611.288892-3-getelson@nvidia.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20240202115611.288892-1-getelson@nvidia.com>
References: <20240202115611.288892-1-getelson@nvidia.com>

From: Maayan Kashani

Before this patch, the ipool size could be fixed by setting max_idx in
mlx5_indexed_pool_config upon ipool creation, or the pool could be
auto-resized up to the maximum limit by setting max_idx to zero upon
creation, in which case the saved value is the maximum index possible.

This patch adds an ipool_resize API that updates the value of max_idx
when it is not set to the maximum, i.e. when the pool is not in
auto-resize mode. It enables the allocation of new trunks when using
malloc/zmalloc, up to the max_idx limit. Note that the number of entries
added in a resize must be divisible by trunk_size.
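As a rough illustration of the intended use, here is an editor's sketch that is
not part of the patch. The entry size, trunk size and capacity values are
invented, and the helper functions are hypothetical; mlx5_ipool_create and the
new mlx5_ipool_resize are the ipool entry points referenced by this patch.

#include "mlx5_utils.h"

/*
 * Editor's sketch, not part of the patch: create a fixed-size ipool
 * (auto-resize disabled by giving a non-zero max_idx) and later grow it
 * with the new mlx5_ipool_resize().
 */
static struct mlx5_indexed_pool *
example_ipool_create(void)
{
	struct mlx5_indexed_pool_config cfg = {
		.size = 128,		/* entry size in bytes (example) */
		.trunk_size = 64,	/* entries allocated per trunk */
		.need_lock = 1,
		.max_idx = 64 * 16,	/* fixed capacity: 16 trunks */
		.type = "example_ipool",
	};

	return mlx5_ipool_create(&cfg);
}

static int
example_ipool_grow(struct mlx5_indexed_pool *pool)
{
	/* Grow by 8 trunks; 64 * 8 is divisible by trunk_size (64). */
	return mlx5_ipool_resize(pool, 64 * 8);
}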
Signed-off-by: Maayan Kashani --- drivers/net/mlx5/mlx5_utils.c | 29 +++++++++++++++++++++++++++++ drivers/net/mlx5/mlx5_utils.h | 16 ++++++++++++++++ 2 files changed, 45 insertions(+) diff --git a/drivers/net/mlx5/mlx5_utils.c b/drivers/net/mlx5/mlx5_utils.c index 4db738785f..e28db2ec43 100644 --- a/drivers/net/mlx5/mlx5_utils.c +++ b/drivers/net/mlx5/mlx5_utils.c @@ -809,6 +809,35 @@ mlx5_ipool_get_next(struct mlx5_indexed_pool *pool, uint32_t *pos) return NULL; } +int +mlx5_ipool_resize(struct mlx5_indexed_pool *pool, uint32_t num_entries) +{ + uint32_t cur_max_idx; + uint32_t max_index = mlx5_trunk_idx_offset_get(pool, TRUNK_MAX_IDX + 1); + + if (num_entries % pool->cfg.trunk_size) { + DRV_LOG(ERR, "num_entries param should be trunk_size(=%u) multiplication\n", + pool->cfg.trunk_size); + return -EINVAL; + } + + mlx5_ipool_lock(pool); + cur_max_idx = pool->cfg.max_idx + num_entries; + /* If the ipool max idx is above maximum or uint overflow occurred. */ + if (cur_max_idx > max_index || cur_max_idx < num_entries) { + DRV_LOG(ERR, "Ipool resize failed\n"); + DRV_LOG(ERR, "Adding %u entries to existing %u entries, will cross max limit(=%u)\n", + num_entries, cur_max_idx, max_index); + mlx5_ipool_unlock(pool); + return -EINVAL; + } + + /* Update maximum entries number. */ + pool->cfg.max_idx = cur_max_idx; + mlx5_ipool_unlock(pool); + return 0; +} + void mlx5_ipool_dump(struct mlx5_indexed_pool *pool) { diff --git a/drivers/net/mlx5/mlx5_utils.h b/drivers/net/mlx5/mlx5_utils.h index 82e8298781..f3c0d76a6d 100644 --- a/drivers/net/mlx5/mlx5_utils.h +++ b/drivers/net/mlx5/mlx5_utils.h @@ -427,6 +427,22 @@ void mlx5_ipool_flush_cache(struct mlx5_indexed_pool *pool); */ void *mlx5_ipool_get_next(struct mlx5_indexed_pool *pool, uint32_t *pos); +/** + * This function resize the ipool. + * + * @param pool + * Pointer to the index memory pool handler. + * @param num_entries + * Number of entries to be added to the pool. + * This number should be divisible by trunk_size. + * + * @return + * - non-zero value on error. + * - 0 on success. + * + */ +int mlx5_ipool_resize(struct mlx5_indexed_pool *pool, uint32_t num_entries); + /** * This function allocates new empty Three-level table. 
 *

From patchwork Fri Feb 2 11:56:09 2024
X-Patchwork-Submitter: Gregory Etelson
X-Patchwork-Id: 136316
X-Patchwork-Delegate: rasland@nvidia.com
From: Gregory Etelson
Cc: Dariusz Sosnowski, Viacheslav Ovsiienko, Ori Kam, Suanming Mou,
    Matan Azrad, Rongwei Liu
Subject: [PATCH 3/5] net/mlx5: fix parameters verification in HWS table create
Date: Fri, 2 Feb 2024 13:56:09 +0200
Message-ID: <20240202115611.288892-4-getelson@nvidia.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20240202115611.288892-1-getelson@nvidia.com>
References: <20240202115611.288892-1-getelson@nvidia.com>
Modified the conditionals in `flow_hw_table_create()` to use bitwise AND
instead of equality checks when assessing the `table_cfg->attr->specialize`
bitmask. This allows for greater flexibility, as the bitmask may encapsulate
multiple flags. The patch maintains the previous behavior with single flag
values while adding support for multiple flags.

Fixes: 592d5367b5e4 ("net/mlx5: enable hint in async flow table")

Signed-off-by: Gregory Etelson
---
 drivers/net/mlx5/mlx5_flow_hw.c | 23 +++++++++++++++++------
 1 file changed, 17 insertions(+), 6 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index da873ae2e2..3125500641 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -4368,12 +4368,23 @@ flow_hw_table_create(struct rte_eth_dev *dev,
 	matcher_attr.rule.num_log = rte_log2_u32(nb_flows);
 	/* Parse hints information. */
 	if (attr->specialize) {
-		if (attr->specialize == RTE_FLOW_TABLE_SPECIALIZE_TRANSFER_WIRE_ORIG)
-			matcher_attr.optimize_flow_src = MLX5DR_MATCHER_FLOW_SRC_WIRE;
-		else if (attr->specialize == RTE_FLOW_TABLE_SPECIALIZE_TRANSFER_VPORT_ORIG)
-			matcher_attr.optimize_flow_src = MLX5DR_MATCHER_FLOW_SRC_VPORT;
-		else
-			DRV_LOG(INFO, "Unsupported hint value %x", attr->specialize);
+		uint32_t val = RTE_FLOW_TABLE_SPECIALIZE_TRANSFER_WIRE_ORIG |
+			       RTE_FLOW_TABLE_SPECIALIZE_TRANSFER_VPORT_ORIG;
+
+		if ((attr->specialize & val) == val) {
+			DRV_LOG(INFO, "Invalid hint value %x",
+				attr->specialize);
+			rte_errno = EINVAL;
+			goto it_error;
+		}
+		if (attr->specialize &
+		    RTE_FLOW_TABLE_SPECIALIZE_TRANSFER_WIRE_ORIG)
+			matcher_attr.optimize_flow_src =
+				MLX5DR_MATCHER_FLOW_SRC_WIRE;
+		else if (attr->specialize &
+			 RTE_FLOW_TABLE_SPECIALIZE_TRANSFER_VPORT_ORIG)
+			matcher_attr.optimize_flow_src =
+				MLX5DR_MATCHER_FLOW_SRC_VPORT;
 	}
 	/* Build the item template.
	for (i = 0; i < nb_item_templates; i++) {
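
A minimal stand-alone illustration of the hint handling above (not taken from
the driver): `specialize` is treated as a bitmask, the combination of both
transfer-origin hints is rejected, and a single set flag selects the matcher
optimization even when unrelated bits are also present. The SPECIALIZE_*
macros, enum flow_src and main() are illustrative stand-ins for the rte_flow
and mlx5dr definitions.

#include <errno.h>
#include <stdint.h>
#include <stdio.h>

/* Illustrative stand-ins for the rte_flow "specialize" hint flags. */
#define SPECIALIZE_TRANSFER_WIRE_ORIG  (UINT32_C(1) << 0)
#define SPECIALIZE_TRANSFER_VPORT_ORIG (UINT32_C(1) << 1)

enum flow_src { FLOW_SRC_ANY = 0, FLOW_SRC_WIRE, FLOW_SRC_VPORT };

/* Validate the hint bitmask and pick the matcher optimization. */
static int
parse_specialize(uint32_t specialize, enum flow_src *src)
{
	const uint32_t both = SPECIALIZE_TRANSFER_WIRE_ORIG |
			      SPECIALIZE_TRANSFER_VPORT_ORIG;

	*src = FLOW_SRC_ANY;
	if (!specialize)
		return 0;		/* no hint given */
	if ((specialize & both) == both)
		return -EINVAL;		/* hints are mutually exclusive */
	if (specialize & SPECIALIZE_TRANSFER_WIRE_ORIG)
		*src = FLOW_SRC_WIRE;	/* unrelated extra bits are tolerated */
	else if (specialize & SPECIALIZE_TRANSFER_VPORT_ORIG)
		*src = FLOW_SRC_VPORT;
	return 0;
}

int
main(void)
{
	enum flow_src src;
	/* Hint plus an unrelated bit: an equality check would miss it. */
	uint32_t hint = SPECIALIZE_TRANSFER_WIRE_ORIG | (UINT32_C(1) << 7);

	if (parse_specialize(hint, &src) == 0)
		printf("optimize_flow_src = %d\n", src);	/* 1 == WIRE */
	return 0;
}

The only change relative to the previous logic is the comparison:
`specialize & flag` still matches when the application ORs in additional
hints, while `(specialize & both) == both` rejects the one combination the
PMD treats as invalid.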
From patchwork Fri Feb 2 11:56:10 2024
X-Patchwork-Submitter: Gregory Etelson
X-Patchwork-Id: 136318
X-Patchwork-Delegate: rasland@nvidia.com
From: Gregory Etelson
CC: Dariusz Sosnowski, Viacheslav Ovsiienko, Ori Kam, Suanming Mou, Matan Azrad
Subject: [PATCH 4/5] net/mlx5: move multi-pattern actions management to table level
Date: Fri, 2 Feb 2024 13:56:10 +0200
Message-ID: <20240202115611.288892-5-getelson@nvidia.com>
In-Reply-To: <20240202115611.288892-1-getelson@nvidia.com>
References: <20240202115611.288892-1-getelson@nvidia.com>
MIME-Version: 1.0
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 02 Feb 2024 11:56:54.7167 (UTC) X-MS-Exchange-CrossTenant-Network-Message-Id: c2378a49-cc34-4476-1be4-08dc23e60d13 X-MS-Exchange-CrossTenant-Id: 43083d15-7273-40c1-b7db-39efd9ccc17a X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=43083d15-7273-40c1-b7db-39efd9ccc17a; Ip=[216.228.117.160]; Helo=[mail.nvidia.com] X-MS-Exchange-CrossTenant-AuthSource: SN1PEPF00026369.namprd02.prod.outlook.com X-MS-Exchange-CrossTenant-AuthAs: Anonymous X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem X-MS-Exchange-Transport-CrossTenantHeadersStamped: SA0PR12MB4383 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org The multi-pattern actions related structures and management code have been moved to the table level. That code refactor is required for the upcoming table resize feature. Signed-off-by: Gregory Etelson --- drivers/net/mlx5/mlx5_flow.h | 73 +++++++++- drivers/net/mlx5/mlx5_flow_hw.c | 229 +++++++++++++++----------------- 2 files changed, 177 insertions(+), 125 deletions(-) diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h index b003e97dc9..497d4b0f0c 100644 --- a/drivers/net/mlx5/mlx5_flow.h +++ b/drivers/net/mlx5/mlx5_flow.h @@ -1390,7 +1390,6 @@ struct mlx5_hw_encap_decap_action { /* Is header_reformat action shared across flows in table. */ uint32_t shared:1; uint32_t multi_pattern:1; - volatile uint32_t *multi_pattern_refcnt; size_t data_size; /* Action metadata size. */ uint8_t data[]; /* Action data. */ }; @@ -1413,7 +1412,6 @@ struct mlx5_hw_modify_header_action { /* Is MODIFY_HEADER action shared across flows in table. */ uint32_t shared:1; uint32_t multi_pattern:1; - volatile uint32_t *multi_pattern_refcnt; /* Amount of modification commands stored in the precompiled buffer. */ uint32_t mhdr_cmds_num; /* Precompiled modification commands. */ @@ -1467,6 +1465,76 @@ struct mlx5_flow_group { #define MLX5_HW_TBL_MAX_ITEM_TEMPLATE 2 #define MLX5_HW_TBL_MAX_ACTION_TEMPLATE 32 +#define MLX5_MULTIPATTERN_ENCAP_NUM 5 +#define MLX5_MAX_TABLE_RESIZE_NUM 64 + +struct mlx5_multi_pattern_segment { + uint32_t capacity; + uint32_t head_index; + struct mlx5dr_action *mhdr_action; + struct mlx5dr_action *reformat_action[MLX5_MULTIPATTERN_ENCAP_NUM]; +}; + +struct mlx5_tbl_multi_pattern_ctx { + struct { + uint32_t elements_num; + struct mlx5dr_action_reformat_header reformat_hdr[MLX5_HW_TBL_MAX_ACTION_TEMPLATE]; + /** + * insert_header structure is larger than reformat_header. + * Enclosing these structures with union will case a gap between + * reformat_hdr array elements. + * mlx5dr_action_create_reformat() expects adjacent array elements. 
+ */ + struct mlx5dr_action_insert_header insert_hdr[MLX5_HW_TBL_MAX_ACTION_TEMPLATE]; + } reformat[MLX5_MULTIPATTERN_ENCAP_NUM]; + + struct { + uint32_t elements_num; + struct mlx5dr_action_mh_pattern pattern[MLX5_HW_TBL_MAX_ACTION_TEMPLATE]; + } mh; + struct mlx5_multi_pattern_segment segments[MLX5_MAX_TABLE_RESIZE_NUM]; +}; + +static __rte_always_inline void +mlx5_multi_pattern_activate(struct mlx5_tbl_multi_pattern_ctx *mpctx) +{ + mpctx->segments[0].head_index = 1; +} + +static __rte_always_inline bool +mlx5_is_multi_pattern_active(const struct mlx5_tbl_multi_pattern_ctx *mpctx) +{ + return mpctx->segments[0].head_index == 1; +} + +static __rte_always_inline struct mlx5_multi_pattern_segment * +mlx5_multi_pattern_segment_get_next(struct mlx5_tbl_multi_pattern_ctx *mpctx) +{ + int i; + + for (i = 0; i < MLX5_MAX_TABLE_RESIZE_NUM; i++) { + if (!mpctx->segments[i].capacity) + return &mpctx->segments[i]; + } + return NULL; +} + +static __rte_always_inline struct mlx5_multi_pattern_segment * +mlx5_multi_pattern_segment_find(struct mlx5_tbl_multi_pattern_ctx *mpctx, + uint32_t flow_resource_ix) +{ + int i; + + for (i = 0; i < MLX5_MAX_TABLE_RESIZE_NUM; i++) { + uint32_t limit = mpctx->segments[i].head_index + + mpctx->segments[i].capacity; + + if (flow_resource_ix < limit) + return &mpctx->segments[i]; + } + return NULL; +} + struct mlx5_flow_template_table_cfg { struct rte_flow_template_table_attr attr; /* Table attributes passed through flow API. */ bool external; /* True if created by flow API, false if table is internal to PMD. */ @@ -1487,6 +1555,7 @@ struct rte_flow_template_table { uint8_t nb_item_templates; /* Item template number. */ uint8_t nb_action_templates; /* Action template number. */ uint32_t refcnt; /* Table reference counter. */ + struct mlx5_tbl_multi_pattern_ctx mpctx; }; #endif diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c index 3125500641..e5c770c6fc 100644 --- a/drivers/net/mlx5/mlx5_flow_hw.c +++ b/drivers/net/mlx5/mlx5_flow_hw.c @@ -74,41 +74,14 @@ struct mlx5_indlst_legacy { #define MLX5_CONST_ENCAP_ITEM(encap_type, ptr) \ (((const struct encap_type *)(ptr))->definition) -struct mlx5_multi_pattern_ctx { - union { - struct mlx5dr_action_reformat_header reformat_hdr; - struct mlx5dr_action_mh_pattern mh_pattern; - }; - union { - /* action template auxiliary structures for object destruction */ - struct mlx5_hw_encap_decap_action *encap; - struct mlx5_hw_modify_header_action *mhdr; - }; - /* multi pattern action */ - struct mlx5dr_rule_action *rule_action; -}; - -#define MLX5_MULTIPATTERN_ENCAP_NUM 4 - -struct mlx5_tbl_multi_pattern_ctx { - struct { - uint32_t elements_num; - struct mlx5_multi_pattern_ctx ctx[MLX5_HW_TBL_MAX_ACTION_TEMPLATE]; - } reformat[MLX5_MULTIPATTERN_ENCAP_NUM]; - - struct { - uint32_t elements_num; - struct mlx5_multi_pattern_ctx ctx[MLX5_HW_TBL_MAX_ACTION_TEMPLATE]; - } mh; -}; - -#define MLX5_EMPTY_MULTI_PATTERN_CTX {{{0,}},} - static int mlx5_tbl_multi_pattern_process(struct rte_eth_dev *dev, struct rte_flow_template_table *tbl, - struct mlx5_tbl_multi_pattern_ctx *mpat, + struct mlx5_multi_pattern_segment *segment, + uint32_t bulk_size, struct rte_flow_error *error); +static void +mlx5_destroy_multi_pattern_segment(struct mlx5_multi_pattern_segment *segment); static __rte_always_inline int mlx5_multi_pattern_reformat_to_index(enum mlx5dr_action_type type) @@ -570,28 +543,14 @@ flow_hw_ct_compile(struct rte_eth_dev *dev, static void flow_hw_template_destroy_reformat_action(struct mlx5_hw_encap_decap_action 
*encap_decap) { - if (encap_decap->multi_pattern) { - uint32_t refcnt = __atomic_sub_fetch(encap_decap->multi_pattern_refcnt, - 1, __ATOMIC_RELAXED); - if (refcnt) - return; - mlx5_free((void *)(uintptr_t)encap_decap->multi_pattern_refcnt); - } - if (encap_decap->action) + if (encap_decap->action && !encap_decap->multi_pattern) mlx5dr_action_destroy(encap_decap->action); } static void flow_hw_template_destroy_mhdr_action(struct mlx5_hw_modify_header_action *mhdr) { - if (mhdr->multi_pattern) { - uint32_t refcnt = __atomic_sub_fetch(mhdr->multi_pattern_refcnt, - 1, __ATOMIC_RELAXED); - if (refcnt) - return; - mlx5_free((void *)(uintptr_t)mhdr->multi_pattern_refcnt); - } - if (mhdr->action) + if (mhdr->action && !mhdr->multi_pattern) mlx5dr_action_destroy(mhdr->action); } @@ -1870,6 +1829,7 @@ mlx5_tbl_translate_reformat(struct mlx5_priv *priv, const struct rte_flow_attr *attr = &table_attr->flow_attr; enum mlx5dr_table_type tbl_type = get_mlx5dr_table_type(attr); struct mlx5dr_action_reformat_header hdr; + struct mlx5dr_action_insert_header ihdr; uint8_t buf[MLX5_ENCAP_MAX_LEN]; bool shared_rfmt = false; int ret; @@ -1911,21 +1871,25 @@ mlx5_tbl_translate_reformat(struct mlx5_priv *priv, acts->encap_decap->shared = true; } else { uint32_t ix; - typeof(mp_ctx->reformat[0]) *reformat_ctx = mp_ctx->reformat + - mp_reformat_ix; + typeof(mp_ctx->reformat[0]) *reformat = mp_ctx->reformat + + mp_reformat_ix; - ix = reformat_ctx->elements_num++; - reformat_ctx->ctx[ix].reformat_hdr = hdr; - reformat_ctx->ctx[ix].rule_action = &acts->rule_acts[at->reformat_off]; - reformat_ctx->ctx[ix].encap = acts->encap_decap; + ix = reformat->elements_num++; + if (refmt_type == MLX5DR_ACTION_TYP_INSERT_HEADER) + reformat->insert_hdr[ix] = ihdr; + else + reformat->reformat_hdr[ix] = hdr; acts->rule_acts[at->reformat_off].reformat.hdr_idx = ix; acts->encap_decap_pos = at->reformat_off; + acts->encap_decap->multi_pattern = 1; acts->encap_decap->data_size = data_size; + acts->encap_decap->action_type = refmt_type; ret = __flow_hw_act_data_encap_append (priv, acts, (at->actions + reformat_src)->type, reformat_src, at->reformat_off, data_size); if (ret) return -rte_errno; + mlx5_multi_pattern_activate(mp_ctx); } return 0; } @@ -1974,12 +1938,11 @@ mlx5_tbl_translate_modify_header(struct rte_eth_dev *dev, } else { typeof(mp_ctx->mh) *mh = &mp_ctx->mh; uint32_t idx = mh->elements_num; - struct mlx5_multi_pattern_ctx *mh_ctx = mh->ctx + mh->elements_num++; - mh_ctx->mh_pattern = pattern; - mh_ctx->mhdr = acts->mhdr; - mh_ctx->rule_action = &acts->rule_acts[mhdr_ix]; + mh->pattern[mh->elements_num++] = pattern; + acts->mhdr->multi_pattern = 1; acts->rule_acts[mhdr_ix].modify_header.pattern_idx = idx; + mlx5_multi_pattern_activate(mp_ctx); } return 0; } @@ -2539,16 +2502,17 @@ flow_hw_actions_translate(struct rte_eth_dev *dev, { int ret; uint32_t i; - struct mlx5_tbl_multi_pattern_ctx mpat = MLX5_EMPTY_MULTI_PATTERN_CTX; for (i = 0; i < tbl->nb_action_templates; i++) { if (__flow_hw_actions_translate(dev, &tbl->cfg, &tbl->ats[i].acts, tbl->ats[i].action_template, - &mpat, error)) + &tbl->mpctx, error)) goto err; } - ret = mlx5_tbl_multi_pattern_process(dev, tbl, &mpat, error); + ret = mlx5_tbl_multi_pattern_process(dev, tbl, &tbl->mpctx.segments[0], + rte_log2_u32(tbl->cfg.attr.nb_flows), + error); if (ret) goto err; return 0; @@ -2922,6 +2886,7 @@ flow_hw_actions_construct(struct rte_eth_dev *dev, int ret; uint32_t age_idx = 0; struct mlx5_aso_mtr *aso_mtr; + struct mlx5_multi_pattern_segment *mp_segment; 
rte_memcpy(rule_acts, hw_acts->rule_acts, sizeof(*rule_acts) * at->dr_actions_num); attr.group = table->grp->group_id; @@ -3052,6 +3017,10 @@ flow_hw_actions_construct(struct rte_eth_dev *dev, MLX5_ASSERT(ipv6_push->size == act_data->ipv6_ext.len); break; case RTE_FLOW_ACTION_TYPE_MODIFY_FIELD: + mp_segment = mlx5_multi_pattern_segment_find(&table->mpctx, job->flow->res_idx); + if (!mp_segment || !mp_segment->mhdr_action) + return -1; + rule_acts[hw_acts->mhdr->pos].action = mp_segment->mhdr_action; if (action->type == RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_VID) ret = flow_hw_set_vlan_vid_construct(dev, job, act_data, @@ -3203,9 +3172,17 @@ flow_hw_actions_construct(struct rte_eth_dev *dev, age_idx); } if (hw_acts->encap_decap && !hw_acts->encap_decap->shared) { - rule_acts[hw_acts->encap_decap_pos].reformat.offset = - job->flow->res_idx - 1; - rule_acts[hw_acts->encap_decap_pos].reformat.data = buf; + int ix = mlx5_multi_pattern_reformat_to_index(hw_acts->encap_decap->action_type); + struct mlx5dr_rule_action *ra = &rule_acts[hw_acts->encap_decap_pos]; + + if (ix < 0) + return -1; + mp_segment = mlx5_multi_pattern_segment_find(&table->mpctx, job->flow->res_idx); + if (!mp_segment || !mp_segment->reformat_action[ix]) + return -1; + ra->action = mp_segment->reformat_action[ix]; + ra->reformat.offset = job->flow->res_idx - 1; + ra->reformat.data = buf; } if (hw_acts->push_remove && !hw_acts->push_remove->shared) { rule_acts[hw_acts->push_remove_pos].ipv6_ext.offset = @@ -4111,86 +4088,65 @@ flow_hw_q_flow_flush(struct rte_eth_dev *dev, static int mlx5_tbl_multi_pattern_process(struct rte_eth_dev *dev, struct rte_flow_template_table *tbl, - struct mlx5_tbl_multi_pattern_ctx *mpat, + struct mlx5_multi_pattern_segment *segment, + uint32_t bulk_size, struct rte_flow_error *error) { + int ret = 0; uint32_t i; struct mlx5_priv *priv = dev->data->dev_private; + struct mlx5_tbl_multi_pattern_ctx *mpctx = &tbl->mpctx; const struct rte_flow_template_table_attr *table_attr = &tbl->cfg.attr; const struct rte_flow_attr *attr = &table_attr->flow_attr; enum mlx5dr_table_type type = get_mlx5dr_table_type(attr); uint32_t flags = mlx5_hw_act_flag[!!attr->group][type]; - struct mlx5dr_action *dr_action; - uint32_t bulk_size = rte_log2_u32(table_attr->nb_flows); + struct mlx5dr_action *dr_action = NULL; for (i = 0; i < MLX5_MULTIPATTERN_ENCAP_NUM; i++) { - uint32_t j; - uint32_t *reformat_refcnt; - typeof(mpat->reformat[0]) *reformat = mpat->reformat + i; - struct mlx5dr_action_reformat_header hdr[MLX5_HW_TBL_MAX_ACTION_TEMPLATE]; + typeof(mpctx->reformat[0]) *reformat = mpctx->reformat + i; enum mlx5dr_action_type reformat_type = mlx5_multi_pattern_reformat_index_to_type(i); if (!reformat->elements_num) continue; - for (j = 0; j < reformat->elements_num; j++) - hdr[j] = reformat->ctx[j].reformat_hdr; - reformat_refcnt = mlx5_malloc(MLX5_MEM_ZERO, sizeof(uint32_t), 0, - rte_socket_id()); - if (!reformat_refcnt) - return rte_flow_error_set(error, ENOMEM, - RTE_FLOW_ERROR_TYPE_UNSPECIFIED, - NULL, "failed to allocate multi-pattern encap counter"); - *reformat_refcnt = reformat->elements_num; - dr_action = mlx5dr_action_create_reformat - (priv->dr_ctx, reformat_type, reformat->elements_num, hdr, - bulk_size, flags); + dr_action = reformat_type == MLX5DR_ACTION_TYP_INSERT_HEADER ? 
+ mlx5dr_action_create_insert_header + (priv->dr_ctx, reformat->elements_num, + reformat->insert_hdr, bulk_size, flags) : + mlx5dr_action_create_reformat + (priv->dr_ctx, reformat_type, reformat->elements_num, + reformat->reformat_hdr, bulk_size, flags); if (!dr_action) { - mlx5_free(reformat_refcnt); - return rte_flow_error_set(error, rte_errno, - RTE_FLOW_ERROR_TYPE_UNSPECIFIED, - NULL, - "failed to create multi-pattern encap action"); - } - for (j = 0; j < reformat->elements_num; j++) { - reformat->ctx[j].rule_action->action = dr_action; - reformat->ctx[j].encap->action = dr_action; - reformat->ctx[j].encap->multi_pattern = 1; - reformat->ctx[j].encap->multi_pattern_refcnt = reformat_refcnt; + ret = rte_flow_error_set(error, rte_errno, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, + "failed to create multi-pattern encap action"); + goto error; } + segment->reformat_action[i] = dr_action; } - if (mpat->mh.elements_num) { - typeof(mpat->mh) *mh = &mpat->mh; - struct mlx5dr_action_mh_pattern pattern[MLX5_HW_TBL_MAX_ACTION_TEMPLATE]; - uint32_t *mh_refcnt = mlx5_malloc(MLX5_MEM_ZERO, sizeof(uint32_t), - 0, rte_socket_id()); - - if (!mh_refcnt) - return rte_flow_error_set(error, ENOMEM, - RTE_FLOW_ERROR_TYPE_UNSPECIFIED, - NULL, "failed to allocate modify header counter"); - *mh_refcnt = mpat->mh.elements_num; - for (i = 0; i < mpat->mh.elements_num; i++) - pattern[i] = mh->ctx[i].mh_pattern; + if (mpctx->mh.elements_num) { + typeof(mpctx->mh) *mh = &mpctx->mh; dr_action = mlx5dr_action_create_modify_header - (priv->dr_ctx, mpat->mh.elements_num, pattern, + (priv->dr_ctx, mpctx->mh.elements_num, mh->pattern, bulk_size, flags); if (!dr_action) { - mlx5_free(mh_refcnt); - return rte_flow_error_set(error, rte_errno, + ret = rte_flow_error_set(error, rte_errno, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, - NULL, - "failed to create multi-pattern header modify action"); - } - for (i = 0; i < mpat->mh.elements_num; i++) { - mh->ctx[i].rule_action->action = dr_action; - mh->ctx[i].mhdr->action = dr_action; - mh->ctx[i].mhdr->multi_pattern = 1; - mh->ctx[i].mhdr->multi_pattern_refcnt = mh_refcnt; + NULL, "failed to create multi-pattern header modify action"); + goto error; } + segment->mhdr_action = dr_action; + } + if (dr_action) { + segment->capacity = RTE_BIT32(bulk_size); + if (segment != &mpctx->segments[MLX5_MAX_TABLE_RESIZE_NUM - 1]) + segment[1].head_index = segment->head_index + segment->capacity; } - return 0; +error: + mlx5_destroy_multi_pattern_segment(segment); + return ret; } static int @@ -4203,7 +4159,6 @@ mlx5_hw_build_template_table(struct rte_eth_dev *dev, { int ret; uint8_t i; - struct mlx5_tbl_multi_pattern_ctx mpat = MLX5_EMPTY_MULTI_PATTERN_CTX; for (i = 0; i < nb_action_templates; i++) { uint32_t refcnt = __atomic_add_fetch(&action_templates[i]->refcnt, 1, @@ -4224,16 +4179,21 @@ mlx5_hw_build_template_table(struct rte_eth_dev *dev, ret = __flow_hw_actions_translate(dev, &tbl->cfg, &tbl->ats[i].acts, action_templates[i], - &mpat, error); + &tbl->mpctx, error); if (ret) { i++; goto at_error; } } tbl->nb_action_templates = nb_action_templates; - ret = mlx5_tbl_multi_pattern_process(dev, tbl, &mpat, error); - if (ret) - goto at_error; + if (mlx5_is_multi_pattern_active(&tbl->mpctx)) { + ret = mlx5_tbl_multi_pattern_process(dev, tbl, + &tbl->mpctx.segments[0], + rte_log2_u32(tbl->cfg.attr.nb_flows), + error); + if (ret) + goto at_error; + } return 0; at_error: @@ -4600,6 +4560,28 @@ flow_hw_template_table_create(struct rte_eth_dev *dev, action_templates, nb_action_templates, error); } +static 
void +mlx5_destroy_multi_pattern_segment(struct mlx5_multi_pattern_segment *segment) +{ + int i; + + if (segment->mhdr_action) + mlx5dr_action_destroy(segment->mhdr_action); + for (i = 0; i < MLX5_MULTIPATTERN_ENCAP_NUM; i++) { + if (segment->reformat_action[i]) + mlx5dr_action_destroy(segment->reformat_action[i]); + } + segment->capacity = 0; +} + +static void +flow_hw_destroy_table_multi_pattern_ctx(struct rte_flow_template_table *table) +{ + int sx; + + for (sx = 0; sx < MLX5_MAX_TABLE_RESIZE_NUM; sx++) + mlx5_destroy_multi_pattern_segment(table->mpctx.segments + sx); +} /** * Destroy flow table. * @@ -4645,6 +4627,7 @@ flow_hw_table_destroy(struct rte_eth_dev *dev, __atomic_fetch_sub(&table->ats[i].action_template->refcnt, 1, __ATOMIC_RELAXED); } + flow_hw_destroy_table_multi_pattern_ctx(table); mlx5dr_matcher_destroy(table->matcher); mlx5_hlist_unregister(priv->sh->groups, &table->grp->entry); mlx5_ipool_destroy(table->resource);
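
The multi-pattern segments introduced by this patch are what later allow one
template table to span several mlx5dr bulk allocations. A stand-alone sketch
of that bookkeeping, using simplified stand-in types (struct segment, struct
table_ctx and the ctx_* helpers are illustrative, not the driver structures):
each segment owns a contiguous range of flow resource indexes whose size is a
power of two, a new segment is appended on every resize, and a lookup walks
the array until it finds the range containing a given index, mirroring
mlx5_multi_pattern_segment_find().

#include <stdint.h>
#include <stdio.h>

/* Mirrors MLX5_MAX_TABLE_RESIZE_NUM from the patch. */
#define MAX_SEGMENTS 64

/* Simplified stand-in for struct mlx5_multi_pattern_segment. */
struct segment {
	uint32_t capacity;	/* slots in this segment, always a power of two */
	uint32_t head_index;	/* first flow resource index it serves */
};

struct table_ctx {
	struct segment seg[MAX_SEGMENTS];
};

/* Flow resource indexes are 1-based; segment 0 starts at index 1. */
static void
ctx_activate(struct table_ctx *ctx)
{
	ctx->seg[0].head_index = 1;
}

/* Append a segment: it begins right after the previous one ends. */
static int
ctx_add_segment(struct table_ctx *ctx, uint32_t capacity)
{
	int i;

	for (i = 0; i < MAX_SEGMENTS; i++) {
		if (!ctx->seg[i].capacity) {
			ctx->seg[i].capacity = capacity;
			if (i + 1 < MAX_SEGMENTS)
				ctx->seg[i + 1].head_index =
					ctx->seg[i].head_index + capacity;
			return i;
		}
	}
	return -1;	/* too many resizes */
}

/* Find the segment whose range contains a given flow resource index. */
static struct segment *
ctx_find(struct table_ctx *ctx, uint32_t res_idx)
{
	int i;

	for (i = 0; i < MAX_SEGMENTS && ctx->seg[i].capacity; i++) {
		uint32_t limit = ctx->seg[i].head_index + ctx->seg[i].capacity;

		if (res_idx < limit)
			return &ctx->seg[i];
	}
	return NULL;
}

int
main(void)
{
	struct table_ctx ctx = { 0 };
	struct segment *s;

	ctx_activate(&ctx);
	ctx_add_segment(&ctx, 16);	/* table created with 16 flows */
	ctx_add_segment(&ctx, 16);	/* resized to 32 flows */
	s = ctx_find(&ctx, 20);		/* flow 20 falls into the second segment */
	if (s != NULL)
		printf("head=%u capacity=%u offset=%u\n",
		       s->head_index, s->capacity, 20 - s->head_index);
	return 0;
}

In the driver, the found segment supplies the modify-header or reformat DR
action a rule must use, and the follow-up resize patch makes the per-rule
offset relative to the owning segment (res_idx - head_index), which is why
the ranges have to stay contiguous.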
From patchwork Fri Feb 2 11:56:11 2024
X-Patchwork-Submitter: Gregory Etelson
X-Patchwork-Id: 136319
X-Patchwork-Delegate: rasland@nvidia.com
From: Gregory Etelson
CC: Dariusz Sosnowski, Viacheslav Ovsiienko, Ori Kam, Suanming Mou, Matan Azrad
Subject: [PATCH 5/5] net/mlx5: add support for flow table resizing
Date: Fri, 2 Feb 2024 13:56:11 +0200
Message-ID: <20240202115611.288892-6-getelson@nvidia.com>
In-Reply-To: <20240202115611.288892-1-getelson@nvidia.com>
References: <20240202115611.288892-1-getelson@nvidia.com>
MIME-Version: 1.0
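
Before the implementation below, a short note on how the three new callbacks
are meant to be driven from the application side. The sketch is not part of
the patch; the rte_flow-level wrapper names (rte_flow_template_table_resize(),
rte_flow_async_update_resized() and rte_flow_template_table_resize_complete())
are assumptions based on the driver ops registered in this series and on the
companion ethdev work, so check the final API before relying on them.

#include <stdint.h>

#include <rte_flow.h>

/*
 * Sketch only: grow a resizable template table and migrate the rules that
 * were created before the resize. The three rte_flow_* resize calls are
 * assumed names (see the note above); the rest is the standard async flow
 * API.
 */
static int
grow_table(uint16_t port, uint32_t queue,
	   struct rte_flow_template_table *tbl,
	   struct rte_flow **rules, uint32_t n_rules, uint32_t new_size)
{
	struct rte_flow_op_attr op = { .postpone = 0 };
	struct rte_flow_error err;
	uint32_t i;
	int ret;

	/* 1. Allocate the larger matcher; new flows start landing in it. */
	ret = rte_flow_template_table_resize(port, tbl, new_size, &err);
	if (ret)
		return ret;
	/* 2. Move every pre-existing rule to the new matcher. */
	for (i = 0; i < n_rules; i++) {
		ret = rte_flow_async_update_resized(port, queue, &op,
						    rules[i], NULL, &err);
		if (ret)
			return ret;
	}
	ret = rte_flow_push(port, queue, &err);
	if (ret)
		return ret;
	/* ... drain the move completions with rte_flow_pull() ... */
	/* 3. Drop the old matcher once no rule references it anymore. */
	return rte_flow_template_table_resize_complete(port, tbl, &err);
}

Until step 3 succeeds, both matchers coexist and another resize is refused;
that is what the matcher_info[2] array, matcher_selector and per-matcher
refcnt introduced by this patch keep track of.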
Support the template table resize API in the PMD. The patch allows
increasing the capacity of an existing template table.

Signed-off-by: Gregory Etelson
---
 drivers/net/mlx5/mlx5.h         |   5 +
 drivers/net/mlx5/mlx5_flow.c    |  51 ++++
 drivers/net/mlx5/mlx5_flow.h    |  84 ++++--
 drivers/net/mlx5/mlx5_flow_hw.c | 512 +++++++++++++++++++++++++++-----
 drivers/net/mlx5/mlx5_host.c    | 211 +++++++++++++
 5 files changed, 758 insertions(+), 105 deletions(-)
 create mode 100644 drivers/net/mlx5/mlx5_host.c

diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h index f2e2e04429..ff0ca7fa42 100644 --- a/drivers/net/mlx5/mlx5.h +++ b/drivers/net/mlx5/mlx5.h @@ -380,6 +380,9 @@ enum mlx5_hw_job_type { MLX5_HW_Q_JOB_TYPE_UPDATE, /* Flow update job type. */ MLX5_HW_Q_JOB_TYPE_QUERY, /* Flow query job type. */ MLX5_HW_Q_JOB_TYPE_UPDATE_QUERY, /* Flow update and query job type. */ + MLX5_HW_Q_JOB_TYPE_RSZTBL_FLOW_CREATE, /* Non-optimized flow create job type. */ + MLX5_HW_Q_JOB_TYPE_RSZTBL_FLOW_DESTROY, /* Non-optimized destroy create job type. */ + MLX5_HW_Q_JOB_TYPE_RSZTBL_FLOW_MOVE, /* Move flow after table resize.
*/ }; enum mlx5_hw_indirect_type { @@ -422,6 +425,8 @@ struct mlx5_hw_q { struct mlx5_hw_q_job **job; /* LIFO header. */ struct rte_ring *indir_cq; /* Indirect action SW completion queue. */ struct rte_ring *indir_iq; /* Indirect action SW in progress queue. */ + struct rte_ring *flow_transfer_pending; + struct rte_ring *flow_transfer_completed; } __rte_cache_aligned; diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c index 85e8c77c81..521119e138 100644 --- a/drivers/net/mlx5/mlx5_flow.c +++ b/drivers/net/mlx5/mlx5_flow.c @@ -1198,6 +1198,20 @@ mlx5_flow_calc_table_hash(struct rte_eth_dev *dev, uint8_t pattern_template_index, uint32_t *hash, struct rte_flow_error *error); +static int +mlx5_template_table_resize(struct rte_eth_dev *dev, + struct rte_flow_template_table *table, + uint32_t nb_rules, struct rte_flow_error *error); +static int +mlx5_flow_async_update_resized(struct rte_eth_dev *dev, uint32_t queue, + const struct rte_flow_op_attr *attr, + struct rte_flow *rule, void *user_data, + struct rte_flow_error *error); +static int +mlx5_table_resize_complete(struct rte_eth_dev *dev, + struct rte_flow_template_table *table, + struct rte_flow_error *error); + static const struct rte_flow_ops mlx5_flow_ops = { .validate = mlx5_flow_validate, .create = mlx5_flow_create, @@ -1253,6 +1267,9 @@ static const struct rte_flow_ops mlx5_flow_ops = { .async_action_list_handle_query_update = mlx5_flow_async_action_list_handle_query_update, .flow_calc_table_hash = mlx5_flow_calc_table_hash, + .flow_template_table_resize = mlx5_template_table_resize, + .flow_update_resized = mlx5_flow_async_update_resized, + .flow_template_table_resize_complete = mlx5_table_resize_complete, }; /* Tunnel information. */ @@ -11115,6 +11132,40 @@ mlx5_flow_calc_table_hash(struct rte_eth_dev *dev, hash, error); } +static int +mlx5_template_table_resize(struct rte_eth_dev *dev, + struct rte_flow_template_table *table, + uint32_t nb_rules, struct rte_flow_error *error) +{ + const struct mlx5_flow_driver_ops *fops; + + MLX5_DRV_FOPS_OR_ERR(dev, fops, table_resize, ENOTSUP); + return fops->table_resize(dev, table, nb_rules, error); +} + +static int +mlx5_table_resize_complete(struct rte_eth_dev *dev, + struct rte_flow_template_table *table, + struct rte_flow_error *error) +{ + const struct mlx5_flow_driver_ops *fops; + + MLX5_DRV_FOPS_OR_ERR(dev, fops, table_resize_complete, ENOTSUP); + return fops->table_resize_complete(dev, table, error); +} + +static int +mlx5_flow_async_update_resized(struct rte_eth_dev *dev, uint32_t queue, + const struct rte_flow_op_attr *op_attr, + struct rte_flow *rule, void *user_data, + struct rte_flow_error *error) +{ + const struct mlx5_flow_driver_ops *fops; + + MLX5_DRV_FOPS_OR_ERR(dev, fops, flow_update_resized, ENOTSUP); + return fops->flow_update_resized(dev, queue, op_attr, rule, user_data, error); +} + /** * Destroy all indirect actions (shared RSS). * diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h index 497d4b0f0c..c7d84af659 100644 --- a/drivers/net/mlx5/mlx5_flow.h +++ b/drivers/net/mlx5/mlx5_flow.h @@ -1210,6 +1210,7 @@ struct rte_flow { uint32_t tunnel:1; uint32_t meter:24; /**< Holds flow meter id. */ uint32_t indirect_type:2; /**< Indirect action type. */ + uint32_t matcher_selector:1; /**< Matcher index in resizable table. */ uint32_t rix_mreg_copy; /**< Index to metadata register copy table resource. */ uint32_t counter; /**< Holds flow counter. 
*/ @@ -1255,6 +1256,7 @@ struct rte_flow_hw { }; struct rte_flow_template_table *table; /* The table flow allcated from. */ uint8_t mt_idx; + uint8_t matcher_selector:1; uint32_t age_idx; cnt_id_t cnt_id; uint32_t mtr_id; @@ -1469,6 +1471,11 @@ struct mlx5_flow_group { #define MLX5_MAX_TABLE_RESIZE_NUM 64 struct mlx5_multi_pattern_segment { + /* + * Modify Header Argument Objects number allocated for action in that + * segment. + * Capacity is always power of 2. + */ uint32_t capacity; uint32_t head_index; struct mlx5dr_action *mhdr_action; @@ -1507,43 +1514,22 @@ mlx5_is_multi_pattern_active(const struct mlx5_tbl_multi_pattern_ctx *mpctx) return mpctx->segments[0].head_index == 1; } -static __rte_always_inline struct mlx5_multi_pattern_segment * -mlx5_multi_pattern_segment_get_next(struct mlx5_tbl_multi_pattern_ctx *mpctx) -{ - int i; - - for (i = 0; i < MLX5_MAX_TABLE_RESIZE_NUM; i++) { - if (!mpctx->segments[i].capacity) - return &mpctx->segments[i]; - } - return NULL; -} - -static __rte_always_inline struct mlx5_multi_pattern_segment * -mlx5_multi_pattern_segment_find(struct mlx5_tbl_multi_pattern_ctx *mpctx, - uint32_t flow_resource_ix) -{ - int i; - - for (i = 0; i < MLX5_MAX_TABLE_RESIZE_NUM; i++) { - uint32_t limit = mpctx->segments[i].head_index + - mpctx->segments[i].capacity; - - if (flow_resource_ix < limit) - return &mpctx->segments[i]; - } - return NULL; -} - struct mlx5_flow_template_table_cfg { struct rte_flow_template_table_attr attr; /* Table attributes passed through flow API. */ bool external; /* True if created by flow API, false if table is internal to PMD. */ }; +struct mlx5_matcher_info { + struct mlx5dr_matcher *matcher; /* Template matcher. */ + uint32_t refcnt; +}; + struct rte_flow_template_table { LIST_ENTRY(rte_flow_template_table) next; struct mlx5_flow_group *grp; /* The group rte_flow_template_table uses. */ - struct mlx5dr_matcher *matcher; /* Template matcher. */ + struct mlx5_matcher_info matcher_info[2]; + uint32_t matcher_selector; + rte_rwlock_t matcher_replace_rwlk; /* RW lock for resizable tables */ /* Item templates bind to the table. */ struct rte_flow_pattern_template *its[MLX5_HW_TBL_MAX_ITEM_TEMPLATE]; /* Action templates bind to the table. */ @@ -1556,8 +1542,34 @@ struct rte_flow_template_table { uint8_t nb_action_templates; /* Action template number. */ uint32_t refcnt; /* Table reference counter. 
*/ struct mlx5_tbl_multi_pattern_ctx mpctx; + struct mlx5dr_matcher_attr matcher_attr; }; +static __rte_always_inline struct mlx5dr_matcher * +mlx5_table_matcher(const struct rte_flow_template_table *table) +{ + return table->matcher_info[table->matcher_selector].matcher; +} + +static __rte_always_inline struct mlx5_multi_pattern_segment * +mlx5_multi_pattern_segment_find(struct rte_flow_template_table *table, + uint32_t flow_resource_ix) +{ + int i; + struct mlx5_tbl_multi_pattern_ctx *mpctx = &table->mpctx; + + if (likely(!rte_flow_table_resizable(0, &table->cfg.attr))) + return &mpctx->segments[0]; + for (i = 0; i < MLX5_MAX_TABLE_RESIZE_NUM; i++) { + uint32_t limit = mpctx->segments[i].head_index + + mpctx->segments[i].capacity; + + if (flow_resource_ix < limit) + return &mpctx->segments[i]; + } + return NULL; +} + #endif /* @@ -2177,6 +2189,17 @@ typedef int const struct rte_flow_item pattern[], uint8_t pattern_template_index, uint32_t *hash, struct rte_flow_error *error); +typedef int (*mlx5_table_resize_t)(struct rte_eth_dev *dev, + struct rte_flow_template_table *table, + uint32_t nb_rules, struct rte_flow_error *error); +typedef int (*mlx5_flow_update_resized_t) + (struct rte_eth_dev *dev, uint32_t queue, + const struct rte_flow_op_attr *attr, + struct rte_flow *rule, void *user_data, + struct rte_flow_error *error); +typedef int (*table_resize_complete_t)(struct rte_eth_dev *dev, + struct rte_flow_template_table *table, + struct rte_flow_error *error); struct mlx5_flow_driver_ops { mlx5_flow_validate_t validate; @@ -2250,6 +2273,9 @@ struct mlx5_flow_driver_ops { mlx5_flow_async_action_list_handle_query_update_t async_action_list_handle_query_update; mlx5_flow_calc_table_hash_t flow_calc_table_hash; + mlx5_table_resize_t table_resize; + mlx5_flow_update_resized_t flow_update_resized; + table_resize_complete_t table_resize_complete; }; /* mlx5_flow.c */ diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c index e5c770c6fc..874ae00028 100644 --- a/drivers/net/mlx5/mlx5_flow_hw.c +++ b/drivers/net/mlx5/mlx5_flow_hw.c @@ -2886,7 +2886,7 @@ flow_hw_actions_construct(struct rte_eth_dev *dev, int ret; uint32_t age_idx = 0; struct mlx5_aso_mtr *aso_mtr; - struct mlx5_multi_pattern_segment *mp_segment; + struct mlx5_multi_pattern_segment *mp_segment = NULL; rte_memcpy(rule_acts, hw_acts->rule_acts, sizeof(*rule_acts) * at->dr_actions_num); attr.group = table->grp->group_id; @@ -2900,17 +2900,20 @@ flow_hw_actions_construct(struct rte_eth_dev *dev, } else { attr.ingress = 1; } - if (hw_acts->mhdr && hw_acts->mhdr->mhdr_cmds_num > 0) { + if (hw_acts->mhdr && hw_acts->mhdr->mhdr_cmds_num > 0 && !hw_acts->mhdr->shared) { uint16_t pos = hw_acts->mhdr->pos; - if (!hw_acts->mhdr->shared) { - rule_acts[pos].modify_header.offset = - job->flow->res_idx - 1; - rule_acts[pos].modify_header.data = - (uint8_t *)job->mhdr_cmd; - rte_memcpy(job->mhdr_cmd, hw_acts->mhdr->mhdr_cmds, - sizeof(*job->mhdr_cmd) * hw_acts->mhdr->mhdr_cmds_num); - } + mp_segment = mlx5_multi_pattern_segment_find(table, job->flow->res_idx); + if (!mp_segment || !mp_segment->mhdr_action) + return -1; + rule_acts[pos].action = mp_segment->mhdr_action; + /* offset is relative to DR action */ + rule_acts[pos].modify_header.offset = + job->flow->res_idx - mp_segment->head_index; + rule_acts[pos].modify_header.data = + (uint8_t *)job->mhdr_cmd; + rte_memcpy(job->mhdr_cmd, hw_acts->mhdr->mhdr_cmds, + sizeof(*job->mhdr_cmd) * hw_acts->mhdr->mhdr_cmds_num); } LIST_FOREACH(act_data, &hw_acts->act_list, next) { 
uint32_t jump_group; @@ -3017,10 +3020,6 @@ flow_hw_actions_construct(struct rte_eth_dev *dev, MLX5_ASSERT(ipv6_push->size == act_data->ipv6_ext.len); break; case RTE_FLOW_ACTION_TYPE_MODIFY_FIELD: - mp_segment = mlx5_multi_pattern_segment_find(&table->mpctx, job->flow->res_idx); - if (!mp_segment || !mp_segment->mhdr_action) - return -1; - rule_acts[hw_acts->mhdr->pos].action = mp_segment->mhdr_action; if (action->type == RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_VID) ret = flow_hw_set_vlan_vid_construct(dev, job, act_data, @@ -3177,11 +3176,13 @@ flow_hw_actions_construct(struct rte_eth_dev *dev, if (ix < 0) return -1; - mp_segment = mlx5_multi_pattern_segment_find(&table->mpctx, job->flow->res_idx); + if (!mp_segment) + mp_segment = mlx5_multi_pattern_segment_find(table, job->flow->res_idx); if (!mp_segment || !mp_segment->reformat_action[ix]) return -1; ra->action = mp_segment->reformat_action[ix]; - ra->reformat.offset = job->flow->res_idx - 1; + /* reformat offset is relative to selected DR action */ + ra->reformat.offset = job->flow->res_idx - mp_segment->head_index; ra->reformat.data = buf; } if (hw_acts->push_remove && !hw_acts->push_remove->shared) { @@ -3353,10 +3354,26 @@ flow_hw_async_flow_create(struct rte_eth_dev *dev, pattern_template_index, job); if (!rule_items) goto error; - ret = mlx5dr_rule_create(table->matcher, - pattern_template_index, rule_items, - action_template_index, rule_acts, - &rule_attr, (struct mlx5dr_rule *)flow->rule); + if (likely(!rte_flow_table_resizable(dev->data->port_id, &table->cfg.attr))) { + ret = mlx5dr_rule_create(table->matcher_info[0].matcher, + pattern_template_index, rule_items, + action_template_index, rule_acts, + &rule_attr, + (struct mlx5dr_rule *)flow->rule); + } else { + uint32_t selector; + + job->type = MLX5_HW_Q_JOB_TYPE_RSZTBL_FLOW_CREATE; + rte_rwlock_read_lock(&table->matcher_replace_rwlk); + selector = table->matcher_selector; + ret = mlx5dr_rule_create(table->matcher_info[selector].matcher, + pattern_template_index, rule_items, + action_template_index, rule_acts, + &rule_attr, + (struct mlx5dr_rule *)flow->rule); + rte_rwlock_read_unlock(&table->matcher_replace_rwlk); + flow->matcher_selector = selector; + } if (likely(!ret)) return (struct rte_flow *)flow; error: @@ -3473,9 +3490,23 @@ flow_hw_async_flow_create_by_index(struct rte_eth_dev *dev, rte_errno = EINVAL; goto error; } - ret = mlx5dr_rule_create(table->matcher, - 0, items, action_template_index, rule_acts, - &rule_attr, (struct mlx5dr_rule *)flow->rule); + if (likely(!rte_flow_table_resizable(dev->data->port_id, &table->cfg.attr))) { + ret = mlx5dr_rule_create(table->matcher_info[0].matcher, + 0, items, action_template_index, + rule_acts, &rule_attr, + (struct mlx5dr_rule *)flow->rule); + } else { + uint32_t selector; + + job->type = MLX5_HW_Q_JOB_TYPE_RSZTBL_FLOW_CREATE; + rte_rwlock_read_lock(&table->matcher_replace_rwlk); + selector = table->matcher_selector; + ret = mlx5dr_rule_create(table->matcher_info[selector].matcher, + 0, items, action_template_index, + rule_acts, &rule_attr, + (struct mlx5dr_rule *)flow->rule); + rte_rwlock_read_unlock(&table->matcher_replace_rwlk); + } if (likely(!ret)) return (struct rte_flow *)flow; error: @@ -3655,7 +3686,8 @@ flow_hw_async_flow_destroy(struct rte_eth_dev *dev, return rte_flow_error_set(error, ENOMEM, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, "fail to destroy rte flow: flow queue full"); - job->type = MLX5_HW_Q_JOB_TYPE_DESTROY; + job->type = !rte_flow_table_resizable(dev->data->port_id, &fh->table->cfg.attr) ? 
+ MLX5_HW_Q_JOB_TYPE_DESTROY : MLX5_HW_Q_JOB_TYPE_RSZTBL_FLOW_DESTROY; job->user_data = user_data; job->flow = fh; rule_attr.user_data = job; @@ -3767,6 +3799,26 @@ flow_hw_pull_legacy_indirect_comp(struct rte_eth_dev *dev, struct mlx5_hw_q_job } } +static __rte_always_inline int +mlx5_hw_pull_flow_transfer_comp(struct rte_eth_dev *dev, + uint32_t queue, struct rte_flow_op_result res[], + uint16_t n_res) +{ + uint32_t size, i; + struct mlx5_hw_q_job *job = NULL; + struct mlx5_priv *priv = dev->data->dev_private; + struct rte_ring *ring = priv->hw_q[queue].flow_transfer_completed; + + size = RTE_MIN(rte_ring_count(ring), n_res); + for (i = 0; i < size; i++) { + res[i].status = RTE_FLOW_OP_SUCCESS; + rte_ring_dequeue(ring, (void **)&job); + res[i].user_data = job->user_data; + flow_hw_job_put(priv, job, queue); + } + return (int)size; +} + static inline int __flow_hw_pull_indir_action_comp(struct rte_eth_dev *dev, uint32_t queue, @@ -3815,6 +3867,76 @@ __flow_hw_pull_indir_action_comp(struct rte_eth_dev *dev, return ret_comp; } +static __rte_always_inline void +hw_cmpl_flow_update_or_destroy(struct rte_eth_dev *dev, + struct mlx5_hw_q_job *job, + uint32_t queue, struct rte_flow_error *error) +{ + struct mlx5_priv *priv = dev->data->dev_private; + struct mlx5_aso_mtr_pool *pool = priv->hws_mpool; + struct rte_flow_hw *flow = job->flow; + struct rte_flow_template_table *table = flow->table; + /* Release the original resource index in case of update. */ + uint32_t res_idx = flow->res_idx; + + if (flow->fate_type == MLX5_FLOW_FATE_JUMP) + flow_hw_jump_release(dev, flow->jump); + else if (flow->fate_type == MLX5_FLOW_FATE_QUEUE) + mlx5_hrxq_obj_release(dev, flow->hrxq); + if (mlx5_hws_cnt_id_valid(flow->cnt_id)) + flow_hw_age_count_release(priv, queue, + flow, error); + if (flow->mtr_id) { + mlx5_ipool_free(pool->idx_pool, flow->mtr_id); + flow->mtr_id = 0; + } + if (job->type != MLX5_HW_Q_JOB_TYPE_UPDATE) { + if (table) { + mlx5_ipool_free(table->resource, res_idx); + mlx5_ipool_free(table->flow, flow->idx); + } + } else { + rte_memcpy(flow, job->upd_flow, + offsetof(struct rte_flow_hw, rule)); + mlx5_ipool_free(table->resource, res_idx); + } +} + +static __rte_always_inline void +hw_cmpl_resizable_tbl(struct rte_eth_dev *dev, + struct mlx5_hw_q_job *job, + uint32_t queue, enum rte_flow_op_status status, + struct rte_flow_error *error) +{ + struct rte_flow_hw *flow = job->flow; + struct rte_flow_template_table *table = flow->table; + uint32_t selector = flow->matcher_selector; + uint32_t other_selector = (selector + 1) & 1; + + switch (job->type) { + case MLX5_HW_Q_JOB_TYPE_RSZTBL_FLOW_CREATE: + __atomic_add_fetch(&table->matcher_info[selector].refcnt, + 1, __ATOMIC_RELAXED); + break; + case MLX5_HW_Q_JOB_TYPE_RSZTBL_FLOW_DESTROY: + __atomic_sub_fetch(&table->matcher_info[selector].refcnt, 1, + __ATOMIC_RELAXED); + hw_cmpl_flow_update_or_destroy(dev, job, queue, error); + break; + case MLX5_HW_Q_JOB_TYPE_RSZTBL_FLOW_MOVE: + if (status == RTE_FLOW_OP_SUCCESS) { + __atomic_sub_fetch(&table->matcher_info[selector].refcnt, 1, + __ATOMIC_RELAXED); + __atomic_add_fetch(&table->matcher_info[other_selector].refcnt, + 1, __ATOMIC_RELAXED); + flow->matcher_selector = other_selector; + } + break; + default: + break; + } +} + /** * Pull the enqueued flows. 
* @@ -3843,9 +3965,7 @@ flow_hw_pull(struct rte_eth_dev *dev, struct rte_flow_error *error) { struct mlx5_priv *priv = dev->data->dev_private; - struct mlx5_aso_mtr_pool *pool = priv->hws_mpool; struct mlx5_hw_q_job *job; - uint32_t res_idx; int ret, i; /* 1. Pull the flow completion. */ @@ -3856,31 +3976,20 @@ flow_hw_pull(struct rte_eth_dev *dev, "fail to query flow queue"); for (i = 0; i < ret; i++) { job = (struct mlx5_hw_q_job *)res[i].user_data; - /* Release the original resource index in case of update. */ - res_idx = job->flow->res_idx; /* Restore user data. */ res[i].user_data = job->user_data; - if (job->type == MLX5_HW_Q_JOB_TYPE_DESTROY || - job->type == MLX5_HW_Q_JOB_TYPE_UPDATE) { - if (job->flow->fate_type == MLX5_FLOW_FATE_JUMP) - flow_hw_jump_release(dev, job->flow->jump); - else if (job->flow->fate_type == MLX5_FLOW_FATE_QUEUE) - mlx5_hrxq_obj_release(dev, job->flow->hrxq); - if (mlx5_hws_cnt_id_valid(job->flow->cnt_id)) - flow_hw_age_count_release(priv, queue, - job->flow, error); - if (job->flow->mtr_id) { - mlx5_ipool_free(pool->idx_pool, job->flow->mtr_id); - job->flow->mtr_id = 0; - } - if (job->type == MLX5_HW_Q_JOB_TYPE_DESTROY) { - mlx5_ipool_free(job->flow->table->resource, res_idx); - mlx5_ipool_free(job->flow->table->flow, job->flow->idx); - } else { - rte_memcpy(job->flow, job->upd_flow, - offsetof(struct rte_flow_hw, rule)); - mlx5_ipool_free(job->flow->table->resource, res_idx); - } + switch (job->type) { + case MLX5_HW_Q_JOB_TYPE_DESTROY: + case MLX5_HW_Q_JOB_TYPE_UPDATE: + hw_cmpl_flow_update_or_destroy(dev, job, queue, error); + break; + case MLX5_HW_Q_JOB_TYPE_RSZTBL_FLOW_CREATE: + case MLX5_HW_Q_JOB_TYPE_RSZTBL_FLOW_MOVE: + case MLX5_HW_Q_JOB_TYPE_RSZTBL_FLOW_DESTROY: + hw_cmpl_resizable_tbl(dev, job, queue, res[i].status, error); + break; + default: + break; } flow_hw_job_put(priv, job, queue); } @@ -3888,24 +3997,36 @@ flow_hw_pull(struct rte_eth_dev *dev, if (ret < n_res) ret += __flow_hw_pull_indir_action_comp(dev, queue, &res[ret], n_res - ret); + if (ret < n_res) + ret += mlx5_hw_pull_flow_transfer_comp(dev, queue, &res[ret], + n_res - ret); + return ret; } +static uint32_t +mlx5_hw_push_queue(struct rte_ring *pending_q, struct rte_ring *cmpl_q) +{ + void *job = NULL; + uint32_t i, size = rte_ring_count(pending_q); + + for (i = 0; i < size; i++) { + rte_ring_dequeue(pending_q, &job); + rte_ring_enqueue(cmpl_q, job); + } + return size; +} + static inline uint32_t __flow_hw_push_action(struct rte_eth_dev *dev, uint32_t queue) { struct mlx5_priv *priv = dev->data->dev_private; - struct rte_ring *iq = priv->hw_q[queue].indir_iq; - struct rte_ring *cq = priv->hw_q[queue].indir_cq; - void *job = NULL; - uint32_t ret, i; + struct mlx5_hw_q *hw_q = &priv->hw_q[queue]; - ret = rte_ring_count(iq); - for (i = 0; i < ret; i++) { - rte_ring_dequeue(iq, &job); - rte_ring_enqueue(cq, job); - } + mlx5_hw_push_queue(hw_q->indir_iq, hw_q->indir_cq); + mlx5_hw_push_queue(hw_q->flow_transfer_pending, + hw_q->flow_transfer_completed); if (!priv->shared_host) { if (priv->hws_ctpool) mlx5_aso_push_wqe(priv->sh, @@ -4314,6 +4435,7 @@ flow_hw_table_create(struct rte_eth_dev *dev, grp = container_of(ge, struct mlx5_flow_group, entry); tbl->grp = grp; /* Prepare matcher information. 
*/ + matcher_attr.resizable = !!rte_flow_table_resizable(dev->data->port_id, &table_cfg->attr); matcher_attr.optimize_flow_src = MLX5DR_MATCHER_FLOW_SRC_ANY; matcher_attr.priority = attr->flow_attr.priority; matcher_attr.optimize_using_rule_idx = true; @@ -4332,7 +4454,7 @@ flow_hw_table_create(struct rte_eth_dev *dev, RTE_FLOW_TABLE_SPECIALIZE_TRANSFER_VPORT_ORIG; if ((attr->specialize & val) == val) { - DRV_LOG(INFO, "Invalid hint value %x", + DRV_LOG(ERR, "Invalid hint value %x", attr->specialize); rte_errno = EINVAL; goto it_error; @@ -4374,10 +4496,11 @@ flow_hw_table_create(struct rte_eth_dev *dev, i = nb_item_templates; goto it_error; } - tbl->matcher = mlx5dr_matcher_create + tbl->matcher_info[0].matcher = mlx5dr_matcher_create (tbl->grp->tbl, mt, nb_item_templates, at, nb_action_templates, &matcher_attr); - if (!tbl->matcher) + if (!tbl->matcher_info[0].matcher) goto at_error; + tbl->matcher_attr = matcher_attr; tbl->type = attr->flow_attr.transfer ? MLX5DR_TABLE_TYPE_FDB : (attr->flow_attr.egress ? MLX5DR_TABLE_TYPE_NIC_TX : MLX5DR_TABLE_TYPE_NIC_RX); @@ -4385,6 +4508,7 @@ flow_hw_table_create(struct rte_eth_dev *dev, LIST_INSERT_HEAD(&priv->flow_hw_tbl, tbl, next); else LIST_INSERT_HEAD(&priv->flow_hw_tbl_ongo, tbl, next); + rte_rwlock_init(&tbl->matcher_replace_rwlk); return tbl; at_error: for (i = 0; i < nb_action_templates; i++) { @@ -4556,6 +4680,11 @@ flow_hw_template_table_create(struct rte_eth_dev *dev, if (flow_hw_translate_group(dev, &cfg, group, &cfg.attr.flow_attr.group, error)) return NULL; + if (!cfg.attr.flow_attr.group && rte_flow_table_resizable(dev->data->port_id, attr)) { + rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, + "table cannot be resized: invalid group"); + return NULL; + } return flow_hw_table_create(dev, &cfg, item_templates, nb_item_templates, action_templates, nb_action_templates, error); } @@ -4628,7 +4757,10 @@ flow_hw_table_destroy(struct rte_eth_dev *dev, 1, __ATOMIC_RELAXED); } flow_hw_destroy_table_multi_pattern_ctx(table); - mlx5dr_matcher_destroy(table->matcher); + if (table->matcher_info[0].matcher) + mlx5dr_matcher_destroy(table->matcher_info[0].matcher); + if (table->matcher_info[1].matcher) + mlx5dr_matcher_destroy(table->matcher_info[1].matcher); mlx5_hlist_unregister(priv->sh->groups, &table->grp->entry); mlx5_ipool_destroy(table->resource); mlx5_ipool_destroy(table->flow); @@ -9178,6 +9310,16 @@ flow_hw_compare_config(const struct mlx5_flow_hw_attr *hw_attr, return true; } +static __rte_always_inline struct rte_ring * +mlx5_hwq_ring_create(uint16_t port_id, uint32_t queue, uint32_t size, const char *str) +{ + char mz_name[RTE_MEMZONE_NAMESIZE]; + + snprintf(mz_name, sizeof(mz_name), "port_%u_%s_%u", port_id, str, queue); + return rte_ring_create(mz_name, size, SOCKET_ID_ANY, + RING_F_SP_ENQ | RING_F_SC_DEQ | RING_F_EXACT_SZ); +} + /** * Configure port HWS resources. 
* @@ -9305,7 +9447,6 @@ flow_hw_configure(struct rte_eth_dev *dev, goto err; } for (i = 0; i < nb_q_updated; i++) { - char mz_name[RTE_MEMZONE_NAMESIZE]; uint8_t *encap = NULL, *push = NULL; struct mlx5_modification_cmd *mhdr_cmd = NULL; struct rte_flow_item *items = NULL; @@ -9339,22 +9480,22 @@ flow_hw_configure(struct rte_eth_dev *dev, job[j].upd_flow = &upd_flow[j]; priv->hw_q[i].job[j] = &job[j]; } - snprintf(mz_name, sizeof(mz_name), "port_%u_indir_act_cq_%u", - dev->data->port_id, i); - priv->hw_q[i].indir_cq = rte_ring_create(mz_name, - _queue_attr[i]->size, SOCKET_ID_ANY, - RING_F_SP_ENQ | RING_F_SC_DEQ | - RING_F_EXACT_SZ); + priv->hw_q[i].indir_cq = mlx5_hwq_ring_create + (dev->data->port_id, i, _queue_attr[i]->size, "indir_act_cq"); if (!priv->hw_q[i].indir_cq) goto err; - snprintf(mz_name, sizeof(mz_name), "port_%u_indir_act_iq_%u", - dev->data->port_id, i); - priv->hw_q[i].indir_iq = rte_ring_create(mz_name, - _queue_attr[i]->size, SOCKET_ID_ANY, - RING_F_SP_ENQ | RING_F_SC_DEQ | - RING_F_EXACT_SZ); + priv->hw_q[i].indir_iq = mlx5_hwq_ring_create + (dev->data->port_id, i, _queue_attr[i]->size, "indir_act_iq"); if (!priv->hw_q[i].indir_iq) goto err; + priv->hw_q[i].flow_transfer_pending = mlx5_hwq_ring_create + (dev->data->port_id, i, _queue_attr[i]->size, "pending_transfer"); + if (!priv->hw_q[i].flow_transfer_pending) + goto err; + priv->hw_q[i].flow_transfer_completed = mlx5_hwq_ring_create + (dev->data->port_id, i, _queue_attr[i]->size, "completed_transfer"); + if (!priv->hw_q[i].flow_transfer_completed) + goto err; } dr_ctx_attr.pd = priv->sh->cdev->pd; dr_ctx_attr.queues = nb_q_updated; @@ -9570,6 +9711,8 @@ flow_hw_configure(struct rte_eth_dev *dev, for (i = 0; i < nb_q_updated; i++) { rte_ring_free(priv->hw_q[i].indir_iq); rte_ring_free(priv->hw_q[i].indir_cq); + rte_ring_free(priv->hw_q[i].flow_transfer_pending); + rte_ring_free(priv->hw_q[i].flow_transfer_completed); } mlx5_free(priv->hw_q); priv->hw_q = NULL; @@ -11494,7 +11637,7 @@ flow_hw_calc_table_hash(struct rte_eth_dev *dev, items = flow_hw_get_rule_items(dev, table, pattern, pattern_template_index, &job); - res = mlx5dr_rule_hash_calculate(table->matcher, items, + res = mlx5dr_rule_hash_calculate(mlx5_table_matcher(table), items, pattern_template_index, MLX5DR_RULE_HASH_CALC_MODE_RAW, hash); @@ -11506,6 +11649,220 @@ flow_hw_calc_table_hash(struct rte_eth_dev *dev, return 0; } +static int +flow_hw_table_resize_multi_pattern_actions(struct rte_eth_dev *dev, + struct rte_flow_template_table *table, + uint32_t nb_flows, + struct rte_flow_error *error) +{ + struct mlx5_multi_pattern_segment *segment = table->mpctx.segments; + uint32_t bulk_size; + int i, ret; + + /** + * Segment always allocates Modify Header Argument Objects number in + * powers of 2. + * On resize, PMD adds minimal required argument objects number. + * For example, if table size was 10, it allocated 16 argument objects. + * Resize to 15 will not add new objects. 
+ */ + for (i = 1; + i < MLX5_MAX_TABLE_RESIZE_NUM && segment->capacity; + i++, segment++); + if (i == MLX5_MAX_TABLE_RESIZE_NUM) + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + table, "too many resizes"); + if (segment->head_index - 1 >= nb_flows) + return 0; + bulk_size = rte_align32pow2(nb_flows - segment->head_index + 1); + ret = mlx5_tbl_multi_pattern_process(dev, table, segment, + rte_log2_u32(bulk_size), + error); + if (ret) + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + table, "too many resizes"); + return i; +} + +static int +flow_hw_table_resize(struct rte_eth_dev *dev, + struct rte_flow_template_table *table, + uint32_t nb_flows, + struct rte_flow_error *error) +{ + struct mlx5dr_action_template *at[MLX5_HW_TBL_MAX_ACTION_TEMPLATE]; + struct mlx5dr_match_template *mt[MLX5_HW_TBL_MAX_ITEM_TEMPLATE]; + struct mlx5dr_matcher_attr matcher_attr = table->matcher_attr; + struct mlx5_multi_pattern_segment *segment = NULL; + struct mlx5dr_matcher *matcher = NULL; + uint32_t i, selector = table->matcher_selector; + uint32_t other_selector = (selector + 1) & 1; + int ret; + + if (!rte_flow_table_resizable(dev->data->port_id, &table->cfg.attr)) + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + table, "no resizable attribute"); + if (table->matcher_info[other_selector].matcher) + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + table, "last table resize was not completed"); + if (nb_flows <= table->cfg.attr.nb_flows) + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + table, "shrinking table is not supported"); + ret = mlx5_ipool_resize(table->flow, nb_flows); + if (ret) + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + table, "cannot resize flows pool"); + ret = mlx5_ipool_resize(table->resource, nb_flows); + if (ret) + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + table, "cannot resize resources pool"); + if (mlx5_is_multi_pattern_active(&table->mpctx)) { + ret = flow_hw_table_resize_multi_pattern_actions(dev, table, nb_flows, error); + if (ret < 0) + return ret; + if (ret > 0) + segment = table->mpctx.segments + ret; + } + for (i = 0; i < table->nb_item_templates; i++) + mt[i] = table->its[i]->mt; + for (i = 0; i < table->nb_action_templates; i++) + at[i] = table->ats[i].action_template->tmpl; + nb_flows = rte_align32pow2(nb_flows); + matcher_attr.rule.num_log = rte_log2_u32(nb_flows); + matcher = mlx5dr_matcher_create(table->grp->tbl, mt, + table->nb_item_templates, at, + table->nb_action_templates, + &matcher_attr); + if (!matcher) { + ret = rte_flow_error_set(error, rte_errno, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + table, "failed to create new matcher"); + goto error; + } + rte_rwlock_write_lock(&table->matcher_replace_rwlk); + ret = mlx5dr_matcher_resize_set_target + (table->matcher_info[selector].matcher, matcher); + if (ret) { + rte_rwlock_write_unlock(&table->matcher_replace_rwlk); + ret = rte_flow_error_set(error, rte_errno, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + table, "failed to initiate matcher swap"); + goto error; + } + table->cfg.attr.nb_flows = nb_flows; + table->matcher_info[other_selector].matcher = matcher; + table->matcher_info[other_selector].refcnt = 0; + table->matcher_selector = other_selector; + rte_rwlock_write_unlock(&table->matcher_replace_rwlk); + return 0; +error: + if (segment) + mlx5_destroy_multi_pattern_segment(segment); + if (matcher) { + ret = 
mlx5dr_matcher_destroy(matcher); + return rte_flow_error_set(error, rte_errno, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + table, "failed to destroy new matcher"); + } + return ret; +} + +static int +flow_hw_table_resize_complete(__rte_unused struct rte_eth_dev *dev, + struct rte_flow_template_table *table, + struct rte_flow_error *error) +{ + int ret; + uint32_t selector = table->matcher_selector; + uint32_t other_selector = (selector + 1) & 1; + struct mlx5_matcher_info *matcher_info = &table->matcher_info[other_selector]; + + if (!rte_flow_table_resizable(dev->data->port_id, &table->cfg.attr)) + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + table, "no resizable attribute"); + if (!matcher_info->matcher || matcher_info->refcnt) + return rte_flow_error_set(error, EBUSY, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + table, "cannot complete table resize"); + ret = mlx5dr_matcher_destroy(matcher_info->matcher); + if (ret) + return rte_flow_error_set(error, rte_errno, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + table, "failed to destroy retired matcher"); + matcher_info->matcher = NULL; + return 0; +} + +static int +flow_hw_update_resized(struct rte_eth_dev *dev, uint32_t queue, + const struct rte_flow_op_attr *attr, + struct rte_flow *flow, void *user_data, + struct rte_flow_error *error) +{ + int ret; + struct mlx5_priv *priv = dev->data->dev_private; + struct mlx5_hw_q_job *job; + struct rte_flow_hw *hw_flow = (struct rte_flow_hw *)flow; + struct rte_flow_template_table *table = hw_flow->table; + uint32_t table_selector = table->matcher_selector; + uint32_t rule_selector = hw_flow->matcher_selector; + uint32_t other_selector; + struct mlx5dr_matcher *other_matcher; + struct mlx5dr_rule_attr rule_attr = { + .queue_id = queue, + .burst = attr->postpone, + }; + + /** + * mlx5dr_matcher_resize_rule_move() accepts original table matcher - + * the one that was used BEFORE table resize. + * Since the function is called AFTER table resize, + * `table->matcher_selector` always points to the new matcher and + * `hw_flow->matcher_selector` points to a matcher used to create the flow. + */ + other_selector = rule_selector == table_selector ? + (rule_selector + 1) & 1 : rule_selector; + other_matcher = table->matcher_info[other_selector].matcher; + if (!other_matcher) + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, + "no active table resize"); + job = flow_hw_job_get(priv, queue); + if (!job) + return rte_flow_error_set(error, ENOMEM, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, + "queue is full"); + job->type = MLX5_HW_Q_JOB_TYPE_RSZTBL_FLOW_MOVE; + job->user_data = user_data; + job->flow = hw_flow; + rule_attr.user_data = job; + if (rule_selector == table_selector) { + struct rte_ring *ring = !attr->postpone ? 
+ priv->hw_q[queue].flow_transfer_completed : + priv->hw_q[queue].flow_transfer_pending; + rte_ring_enqueue(ring, job); + return 0; + } + ret = mlx5dr_matcher_resize_rule_move(other_matcher, + (struct mlx5dr_rule *)hw_flow->rule, + &rule_attr); + if (ret) { + flow_hw_job_put(priv, job, queue); + return rte_flow_error_set(error, rte_errno, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, + "flow transfer failed"); + } + return 0; +} + const struct mlx5_flow_driver_ops mlx5_flow_hw_drv_ops = { .info_get = flow_hw_info_get, .configure = flow_hw_configure, @@ -11517,11 +11874,14 @@ const struct mlx5_flow_driver_ops mlx5_flow_hw_drv_ops = { .actions_template_destroy = flow_hw_actions_template_destroy, .template_table_create = flow_hw_template_table_create, .template_table_destroy = flow_hw_table_destroy, + .table_resize = flow_hw_table_resize, .group_set_miss_actions = flow_hw_group_set_miss_actions, .async_flow_create = flow_hw_async_flow_create, .async_flow_create_by_index = flow_hw_async_flow_create_by_index, .async_flow_update = flow_hw_async_flow_update, .async_flow_destroy = flow_hw_async_flow_destroy, + .flow_update_resized = flow_hw_update_resized, + .table_resize_complete = flow_hw_table_resize_complete, .pull = flow_hw_pull, .push = flow_hw_push, .async_action_create = flow_hw_action_handle_create, diff --git a/drivers/net/mlx5/mlx5_host.c b/drivers/net/mlx5/mlx5_host.c new file mode 100644 index 0000000000..4f3356d6e6 --- /dev/null +++ b/drivers/net/mlx5/mlx5_host.c @@ -0,0 +1,211 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 NVIDIA Corporation & Affiliates + */ + +#include +#include +#include + +#include +#include +#include + +#include "mlx5_flow.h" +#include "mlx5.h" + +#include "hws/host/mlx5dr_host.h" + +struct rte_pmd_mlx5_dr_action_cache { + enum rte_flow_action_type type; + void *release_data; + struct mlx5dr_dev_action *dr_dev_action; + LIST_ENTRY(rte_pmd_mlx5_dr_action_cache) next; +}; + +struct rte_pmd_mlx5_dev_process { + struct mlx5dr_dev_process *dr_dev_process; + struct mlx5dr_dev_context *dr_dev_ctx; + uint16_t port_id; + LIST_HEAD(action_head, rte_pmd_mlx5_dr_action_cache) head; +}; + +struct rte_pmd_mlx5_dev_process * +rte_pmd_mlx5_host_process_open(uint16_t port_id, + struct rte_pmd_mlx5_host_device_info *info) +{ + struct rte_pmd_mlx5_dev_process *dev_process; + struct mlx5dr_dev_context_attr dr_attr = {0}; + struct mlx5dr_dev_process *dr_dev_process; + const struct mlx5_priv *priv; + + dev_process = mlx5_malloc(MLX5_MEM_SYS | MLX5_MEM_ZERO, + sizeof(struct rte_pmd_mlx5_dev_process), + MLX5_MALLOC_ALIGNMENT, + SOCKET_ID_ANY); + if (!dev_process) { + rte_errno = ENOMEM; + return NULL; + } + + if (info->type == RTE_PMD_MLX5_DEVICE_TYPE_DPA) + dr_dev_process = mlx5dr_host_process_open(info->dpa.process, info->dpa.outbox); + else + dr_dev_process = mlx5dr_host_process_open(NULL, NULL); + + if (!dr_dev_process) + goto free_dev_process; + + dev_process->port_id = port_id; + dev_process->dr_dev_process = dr_dev_process; + + priv = rte_eth_devices[port_id].data->dev_private; + dr_attr.queue_size = info->queue_size; + dr_attr.queues = info->queues; + + dev_process->dr_dev_ctx = mlx5dr_host_context_bind(dr_dev_process, + priv->dr_ctx, + &dr_attr); + if (!dev_process->dr_dev_ctx) + goto close_process; + + return (struct rte_pmd_mlx5_dev_process *)dev_process; + +close_process: + mlx5dr_host_process_close(dr_dev_process); +free_dev_process: + mlx5_free(dev_process); + return NULL; +} + +int +rte_pmd_mlx5_host_process_close(struct rte_pmd_mlx5_dev_process 
*dev_process) +{ + struct mlx5dr_dev_process *dr_dev_process = dev_process->dr_dev_process; + + mlx5dr_host_context_unbind(dr_dev_process, dev_process->dr_dev_ctx); + mlx5dr_host_process_close(dr_dev_process); + mlx5_free(dev_process); + return 0; +} + +struct rte_pmd_mlx5_dev_ctx * +rte_pmd_mlx5_host_get_dev_ctx(struct rte_pmd_mlx5_dev_process *dev_process) +{ + return (struct rte_pmd_mlx5_dev_ctx *)dev_process->dr_dev_ctx; +} + +struct rte_pmd_mlx5_dev_table * +rte_pmd_mlx5_host_table_bind(struct rte_pmd_mlx5_dev_process *dev_process, + struct rte_flow_template_table *table) +{ + struct mlx5dr_dev_process *dr_dev_process; + struct mlx5dr_dev_matcher *dr_dev_matcher; + struct mlx5dr_matcher *matcher; + + if (rte_flow_table_resizable(&table->cfg.attr)) { + rte_errno = EINVAL; + return NULL; + } + + dr_dev_process = dev_process->dr_dev_process; + matcher = table->matcher_info[0].matcher; + + dr_dev_matcher = mlx5dr_host_matcher_bind(dr_dev_process, matcher); + + return (struct rte_pmd_mlx5_dev_table *)dr_dev_matcher; +} + +int +rte_pmd_mlx5_host_table_unbind(struct rte_pmd_mlx5_dev_process *dev_process, + struct rte_pmd_mlx5_dev_table *dev_table) +{ + struct mlx5dr_dev_process *dr_dev_process; + struct mlx5dr_dev_matcher *dr_dev_matcher; + + dr_dev_process = dev_process->dr_dev_process; + dr_dev_matcher = (struct mlx5dr_dev_matcher *)dev_table; + + return mlx5dr_host_matcher_unbind(dr_dev_process, dr_dev_matcher); +} + +struct rte_pmd_mlx5_dev_action * +rte_pmd_mlx5_host_action_bind(struct rte_pmd_mlx5_dev_process *dev_process, + struct rte_pmd_mlx5_host_action *action) +{ + struct rte_eth_dev *dev = &rte_eth_devices[dev_process->port_id]; + struct rte_pmd_mlx5_dr_action_cache *action_cache; + struct mlx5dr_dev_process *dr_dev_process; + struct mlx5dr_dev_action *dr_dev_action; + struct mlx5dr_action *dr_action; + void *release_data; + + dr_dev_process = dev_process->dr_dev_process; + + action_cache = mlx5_malloc(MLX5_MEM_SYS | MLX5_MEM_ZERO, + sizeof(*action_cache), + MLX5_MALLOC_ALIGNMENT, + SOCKET_ID_ANY); + if (!action_cache) { + rte_errno = ENOMEM; + return NULL; + } + + dr_action = mlx5_flow_hw_get_dr_action(dev, action, &release_data); + if (!dr_action) { + DRV_LOG(ERR, "Failed to get dr action type %d", action->type); + goto free_rte_host_action; + } + + dr_dev_action = mlx5dr_host_action_bind(dr_dev_process, dr_action); + if (!dr_dev_action) { + DRV_LOG(ERR, "Failed to bind dr_action"); + goto put_dr_action; + } + + action_cache->type = action->type; + action_cache->release_data = release_data; + action_cache->dr_dev_action = dr_dev_action; + LIST_INSERT_HEAD(&dev_process->head, action_cache, next); + + return (struct rte_pmd_mlx5_dev_action *)dr_dev_action; + +put_dr_action: + mlx5_flow_hw_put_dr_action(dev, action->type, release_data); +free_rte_host_action: + mlx5_free(action_cache); + return NULL; +} + +int +rte_pmd_mlx5_host_action_unbind(struct rte_pmd_mlx5_dev_process *dev_process, + struct rte_pmd_mlx5_dev_action *dev_action) +{ + struct rte_eth_dev *dev = &rte_eth_devices[dev_process->port_id]; + struct rte_pmd_mlx5_dr_action_cache *action_cache; + struct mlx5dr_dev_process *dr_dev_process; + struct mlx5dr_dev_action *dr_dev_action; + + dr_dev_process = dev_process->dr_dev_process; + dr_dev_action = (struct mlx5dr_dev_action *)dev_action; + + LIST_FOREACH(action_cache, &dev_process->head, next) { + if (action_cache->dr_dev_action == dr_dev_action) { + LIST_REMOVE(action_cache, next); + mlx5dr_host_action_unbind(dr_dev_process, dr_dev_action); + 
mlx5_flow_hw_put_dr_action(dev,
+						    action_cache->type,
+						    action_cache->release_data);
+			mlx5_free(action_cache);
+			return 0;
+		}
+	}
+
+	DRV_LOG(ERR, "Failed to find dr action to unbind");
+	rte_errno = EINVAL;
+	return rte_errno;
+}
+
+size_t rte_pmd_mlx5_host_get_dev_rule_handle_size(void)
+{
+	return mlx5dr_host_rule_get_dev_rule_handle_size();
+}
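
Note on the multi-pattern segment sizing in flow_hw_table_resize_multi_pattern_actions(): modify-header argument objects are allocated in power-of-two bulks, so a resize only allocates a new bulk for the range not already covered by the previous allocation. A minimal, standalone illustration of that rule (plain C, no DPDK dependencies; next_pow2() stands in for rte_align32pow2(), and the numbers mirror the "10 -> 16 objects" example in the code comment):

#include <stdint.h>
#include <stdio.h>

/* Smallest power of two >= v; stand-in for rte_align32pow2(). */
static uint32_t next_pow2(uint32_t v)
{
	uint32_t p = 1;

	while (p < v)
		p <<= 1;
	return p;
}

/* Size of the additional bulk needed to cover new_flows,
 * or 0 when the existing allocation already covers it. */
static uint32_t extra_bulk(uint32_t covered, uint32_t new_flows)
{
	if (new_flows <= covered)
		return 0;
	return next_pow2(new_flows - covered);
}

int main(void)
{
	/* A table sized for 10 flows got 16 argument objects up front. */
	uint32_t covered = next_pow2(10);

	printf("resize 10 -> 15: +%u objects\n", extra_bulk(covered, 15)); /* +0  */
	printf("resize 10 -> 40: +%u objects\n", extra_bulk(covered, 40)); /* +32 */
	return 0;
}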
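
For reference, the three driver ops registered above (.table_resize, .flow_update_resized, .table_resize_complete) are meant to be driven in sequence from the application side. Below is a hedged sketch of that sequence, assuming the ethdev-level wrappers proposed elsewhere in this series (rte_flow_template_table_resize(), rte_flow_async_update_resized(), rte_flow_template_table_resize_complete()) and a template table created with the resizable attribute; error paths and completion polling are trimmed:

#include <stdint.h>
#include <rte_flow.h>

static int
resize_and_migrate(uint16_t port_id, uint32_t queue,
		   struct rte_flow_template_table *table,
		   struct rte_flow **flows, uint32_t nb_flows,
		   uint32_t new_capacity)
{
	struct rte_flow_op_attr op_attr = { .postpone = 0 };
	struct rte_flow_error error;
	uint32_t i;
	int ret;

	/* 1. Grow the table: the PMD creates the larger matcher next to the old one. */
	ret = rte_flow_template_table_resize(port_id, table, new_capacity, &error);
	if (ret)
		return ret;

	/* 2. Move every flow created before the resize onto the new matcher. */
	for (i = 0; i < nb_flows; i++) {
		ret = rte_flow_async_update_resized(port_id, queue, &op_attr,
						    flows[i], NULL, &error);
		if (ret)
			return ret;
	}
	ret = rte_flow_push(port_id, queue, &error);
	if (ret)
		return ret;
	/* ... rte_flow_pull() until all move completions are returned ... */

	/* 3. Retire the old matcher once no flow references it any more. */
	return rte_flow_template_table_resize_complete(port_id, table, &error);
}

The old matcher is only destroyed in the final step: flow_hw_table_resize_complete() refuses to run while matcher_info[other_selector].refcnt is non-zero, i.e. while any flow created before the resize has not yet been moved.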