From patchwork Wed Feb 28 10:25:23 2024
X-Patchwork-Submitter: Gregory Etelson
X-Patchwork-Id: 137421
X-Patchwork-Delegate: rasland@nvidia.com
From: Gregory Etelson
CC: Dariusz Sosnowski, Viacheslav Ovsiienko, Ori Kam, Suanming Mou, Matan Azrad
Subject: [PATCH v2 1/4] net/mlx5: add resize function to ipool
Date: Wed, 28 Feb 2024 12:25:23 +0200
Message-ID: <20240228102526.434717-2-getelson@nvidia.com>
In-Reply-To: <20240228102526.434717-1-getelson@nvidia.com>
References: <20240202115611.288892-2-getelson@nvidia.com>
 <20240228102526.434717-1-getelson@nvidia.com>
MIME-Version: 1.0
List-Id: DPDK patches and discussions

From: Maayan Kashani

Before this patch, the ipool size could be fixed by setting max_idx in
mlx5_indexed_pool_config upon ipool creation, or the pool could be
auto-resized up to the maximum limit by setting max_idx to zero upon
creation, in which case the saved value is the maximum index possible.

This patch adds an ipool_resize API that updates the value of max_idx
when it is not set to the maximum, i.e. when the pool is not in
auto-resize mode. It enables the allocation of new trunks with
malloc/zmalloc up to the max_idx limit. Note that the number of entries
added by a resize must be divisible by trunk_size.

Signed-off-by: Maayan Kashani
Acked-by: Dariusz Sosnowski
---
 drivers/net/mlx5/mlx5_utils.c | 29 +++++++++++++++++++++++++++++
 drivers/net/mlx5/mlx5_utils.h | 16 ++++++++++++++++
 2 files changed, 45 insertions(+)

diff --git a/drivers/net/mlx5/mlx5_utils.c b/drivers/net/mlx5/mlx5_utils.c
index 4db738785f..e28db2ec43 100644
--- a/drivers/net/mlx5/mlx5_utils.c
+++ b/drivers/net/mlx5/mlx5_utils.c
@@ -809,6 +809,35 @@ mlx5_ipool_get_next(struct mlx5_indexed_pool *pool, uint32_t *pos)
 	return NULL;
 }
 
+int
+mlx5_ipool_resize(struct mlx5_indexed_pool *pool, uint32_t num_entries)
+{
+	uint32_t cur_max_idx;
+	uint32_t max_index = mlx5_trunk_idx_offset_get(pool, TRUNK_MAX_IDX + 1);
+
+	if (num_entries % pool->cfg.trunk_size) {
+		DRV_LOG(ERR, "num_entries param should be trunk_size(=%u) multiplication\n",
+			pool->cfg.trunk_size);
+		return -EINVAL;
+	}
+
+	mlx5_ipool_lock(pool);
+	cur_max_idx = pool->cfg.max_idx + num_entries;
+	/* If the ipool max idx is above maximum or uint overflow occurred. */
+	if (cur_max_idx > max_index || cur_max_idx < num_entries) {
+		DRV_LOG(ERR, "Ipool resize failed\n");
+		DRV_LOG(ERR, "Adding %u entries to existing %u entries, will cross max limit(=%u)\n",
+			num_entries, cur_max_idx, max_index);
+		mlx5_ipool_unlock(pool);
+		return -EINVAL;
+	}
+
+	/* Update maximum entries number. */
+	pool->cfg.max_idx = cur_max_idx;
+	mlx5_ipool_unlock(pool);
+	return 0;
+}
+
 void
 mlx5_ipool_dump(struct mlx5_indexed_pool *pool)
 {
diff --git a/drivers/net/mlx5/mlx5_utils.h b/drivers/net/mlx5/mlx5_utils.h
index 82e8298781..f3c0d76a6d 100644
--- a/drivers/net/mlx5/mlx5_utils.h
+++ b/drivers/net/mlx5/mlx5_utils.h
@@ -427,6 +427,22 @@ void mlx5_ipool_flush_cache(struct mlx5_indexed_pool *pool);
  */
 void *mlx5_ipool_get_next(struct mlx5_indexed_pool *pool, uint32_t *pos);
 
+/**
+ * This function resizes the ipool.
+ *
+ * @param pool
+ *   Pointer to the index memory pool handler.
+ * @param num_entries
+ *   Number of entries to be added to the pool.
+ *   This number should be divisible by trunk_size.
+ *
+ * @return
+ *   - non-zero value on error.
+ *   - 0 on success.
+ *
+ */
+int mlx5_ipool_resize(struct mlx5_indexed_pool *pool, uint32_t num_entries);
+
 /**
  * This function allocates new empty Three-level table.
  *
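The guards in mlx5_ipool_resize() boil down to two checks: the requested entry count must be a multiple of the trunk size, and the new maximum index must neither exceed the trunk-indexing ceiling nor wrap around uint32_t. A standalone sketch of that logic follows; the struct and function names here are illustrative stand-ins, not the actual mlx5 code:

```c
#include <assert.h>
#include <errno.h>
#include <stdint.h>

/* Hypothetical stand-in for the pool state consulted by the resize path. */
struct ipool_cfg {
	uint32_t trunk_size; /* entries per trunk */
	uint32_t max_idx;    /* current maximum index */
};

/* Mirrors the two guards in the patch: divisibility and overflow/ceiling. */
static int
ipool_resize_check(struct ipool_cfg *cfg, uint32_t num_entries,
		   uint32_t hard_max_idx)
{
	uint32_t new_max;

	if (num_entries % cfg->trunk_size)
		return -EINVAL; /* must be a trunk_size multiple */
	new_max = cfg->max_idx + num_entries;
	/* Reject both crossing the ceiling and uint32_t wrap-around. */
	if (new_max > hard_max_idx || new_max < num_entries)
		return -EINVAL;
	cfg->max_idx = new_max;
	return 0;
}
```

The wrap-around test works because an overflowed sum is necessarily smaller than num_entries when the old max_idx fits in 32 bits.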
From patchwork Wed Feb 28 10:25:24 2024
X-Patchwork-Submitter: Gregory Etelson
X-Patchwork-Id: 137422
X-Patchwork-Delegate: rasland@nvidia.com
From: Gregory Etelson
CC: Dariusz Sosnowski, Viacheslav Ovsiienko, Ori Kam, Suanming Mou, Matan Azrad, Rongwei Liu
Subject: [PATCH v2 2/4] net/mlx5: fix parameters verification in HWS table create
Date: Wed, 28 Feb 2024 12:25:24 +0200
Message-ID: <20240228102526.434717-3-getelson@nvidia.com>
In-Reply-To: <20240228102526.434717-1-getelson@nvidia.com>
References:
 <20240202115611.288892-2-getelson@nvidia.com>
 <20240228102526.434717-1-getelson@nvidia.com>
MIME-Version: 1.0
Modified the conditionals in `flow_hw_table_create()` to use bitwise AND
instead of equality checks when assessing the `table_cfg->attr->specialize`
bitmask. This allows for greater flexibility, as the bitmask may
encapsulate multiple flags. The patch maintains the previous behavior
with single-flag values while adding support for multiple flags.

Fixes: 592d5367b5e4 ("net/mlx5: enable hint in async flow table")

Signed-off-by: Gregory Etelson
Acked-by: Dariusz Sosnowski
---
 drivers/net/mlx5/mlx5_flow_hw.c | 23 +++++++++++++++++------
 1 file changed, 17 insertions(+), 6 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 783ad9e72a..5938d8b90c 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -4390,12 +4390,23 @@ flow_hw_table_create(struct rte_eth_dev *dev,
 	matcher_attr.rule.num_log = rte_log2_u32(nb_flows);
 	/* Parse hints information. */
 	if (attr->specialize) {
-		if (attr->specialize == RTE_FLOW_TABLE_SPECIALIZE_TRANSFER_WIRE_ORIG)
-			matcher_attr.optimize_flow_src = MLX5DR_MATCHER_FLOW_SRC_WIRE;
-		else if (attr->specialize == RTE_FLOW_TABLE_SPECIALIZE_TRANSFER_VPORT_ORIG)
-			matcher_attr.optimize_flow_src = MLX5DR_MATCHER_FLOW_SRC_VPORT;
-		else
-			DRV_LOG(INFO, "Unsupported hint value %x", attr->specialize);
+		uint32_t val = RTE_FLOW_TABLE_SPECIALIZE_TRANSFER_WIRE_ORIG |
+			       RTE_FLOW_TABLE_SPECIALIZE_TRANSFER_VPORT_ORIG;
+
+		if ((attr->specialize & val) == val) {
+			DRV_LOG(INFO, "Invalid hint value %x",
+				attr->specialize);
+			rte_errno = EINVAL;
+			goto it_error;
+		}
+		if (attr->specialize &
+		    RTE_FLOW_TABLE_SPECIALIZE_TRANSFER_WIRE_ORIG)
+			matcher_attr.optimize_flow_src =
+				MLX5DR_MATCHER_FLOW_SRC_WIRE;
+		else if (attr->specialize &
+			 RTE_FLOW_TABLE_SPECIALIZE_TRANSFER_VPORT_ORIG)
+			matcher_attr.optimize_flow_src =
+				MLX5DR_MATCHER_FLOW_SRC_VPORT;
 	}
 	/* Build the item template. */
 	for (i = 0; i < nb_item_templates; i++) {
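The point of the fix in flow_hw_table_create(): with `==`, a specialize value carrying an extra bit matches neither branch, whereas `&` honors each flag independently, and the new code additionally rejects the contradictory case of both direction hints set. A minimal illustration with stand-in flag values (the real constants come from the rte_flow API; these macros and the helper are hypothetical):

```c
#include <assert.h>
#include <stdint.h>

/* Stand-in values; the real flags are defined by the rte_flow API. */
#define HINT_WIRE_ORIG  0x1u
#define HINT_VPORT_ORIG 0x2u

enum flow_src { SRC_ANY = 0, SRC_WIRE, SRC_VPORT, SRC_INVALID };

/* Classify a specialize bitmask the way the patched code does:
 * reject both hints together, otherwise test each flag with '&'. */
static enum flow_src
classify_hint(uint32_t specialize)
{
	const uint32_t both = HINT_WIRE_ORIG | HINT_VPORT_ORIG;

	if (!specialize)
		return SRC_ANY;
	if ((specialize & both) == both)
		return SRC_INVALID; /* contradictory direction hints */
	if (specialize & HINT_WIRE_ORIG)
		return SRC_WIRE;
	if (specialize & HINT_VPORT_ORIG)
		return SRC_VPORT;
	return SRC_ANY; /* only unrelated bits set */
}
```

With the old equality checks, a value like `HINT_WIRE_ORIG | 0x8` would have fallen through to the "unsupported hint" branch; the bitwise test still recognizes the wire-origin flag.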
From patchwork Wed Feb 28 10:25:25 2024
X-Patchwork-Submitter: Gregory Etelson
X-Patchwork-Id: 137423
X-Patchwork-Delegate: rasland@nvidia.com
From: Gregory Etelson
CC: Dariusz Sosnowski, Viacheslav Ovsiienko, Ori Kam, Suanming Mou, Matan Azrad
Subject: [PATCH v2 3/4] net/mlx5: move multi-pattern actions management to table level
Date: Wed, 28 Feb 2024 12:25:25 +0200
Message-ID: <20240228102526.434717-4-getelson@nvidia.com>
In-Reply-To: <20240228102526.434717-1-getelson@nvidia.com>
References: <20240202115611.288892-2-getelson@nvidia.com>
 <20240228102526.434717-1-getelson@nvidia.com>
MIME-Version: 1.0
The multi-pattern action structures and their management code have been
moved to the table level. This refactor is required for the upcoming
table resize feature.

Signed-off-by: Gregory Etelson
Acked-by: Dariusz Sosnowski
---
 drivers/net/mlx5/mlx5_flow.h    |  73 ++++++++++-
 drivers/net/mlx5/mlx5_flow_hw.c | 225 +++++++++++++++----------------
 2 files changed, 173 insertions(+), 125 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index a4d0ff7b13..9cc237c542 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -1410,7 +1410,6 @@ struct mlx5_hw_encap_decap_action {
 	/* Is header_reformat action shared across flows in table. */
 	uint32_t shared:1;
 	uint32_t multi_pattern:1;
-	volatile uint32_t *multi_pattern_refcnt;
 	size_t data_size; /* Action metadata size. */
 	uint8_t data[]; /* Action data. */
 };
@@ -1433,7 +1432,6 @@ struct mlx5_hw_modify_header_action {
 	/* Is MODIFY_HEADER action shared across flows in table. */
 	uint32_t shared:1;
 	uint32_t multi_pattern:1;
-	volatile uint32_t *multi_pattern_refcnt;
 	/* Amount of modification commands stored in the precompiled buffer. */
 	uint32_t mhdr_cmds_num;
 	/* Precompiled modification commands. */
@@ -1487,6 +1485,76 @@ struct mlx5_flow_group {
 #define MLX5_HW_TBL_MAX_ITEM_TEMPLATE 2
 #define MLX5_HW_TBL_MAX_ACTION_TEMPLATE 32
 
+#define MLX5_MULTIPATTERN_ENCAP_NUM 5
+#define MLX5_MAX_TABLE_RESIZE_NUM 64
+
+struct mlx5_multi_pattern_segment {
+	uint32_t capacity;
+	uint32_t head_index;
+	struct mlx5dr_action *mhdr_action;
+	struct mlx5dr_action *reformat_action[MLX5_MULTIPATTERN_ENCAP_NUM];
+};
+
+struct mlx5_tbl_multi_pattern_ctx {
+	struct {
+		uint32_t elements_num;
+		struct mlx5dr_action_reformat_header reformat_hdr[MLX5_HW_TBL_MAX_ACTION_TEMPLATE];
+		/**
+		 * insert_header structure is larger than reformat_header.
+		 * Enclosing these structures with a union would cause a gap
+		 * between reformat_hdr array elements.
+		 * mlx5dr_action_create_reformat() expects adjacent array elements.
+		 */
+		struct mlx5dr_action_insert_header insert_hdr[MLX5_HW_TBL_MAX_ACTION_TEMPLATE];
+	} reformat[MLX5_MULTIPATTERN_ENCAP_NUM];
+
+	struct {
+		uint32_t elements_num;
+		struct mlx5dr_action_mh_pattern pattern[MLX5_HW_TBL_MAX_ACTION_TEMPLATE];
+	} mh;
+	struct mlx5_multi_pattern_segment segments[MLX5_MAX_TABLE_RESIZE_NUM];
+};
+
+static __rte_always_inline void
+mlx5_multi_pattern_activate(struct mlx5_tbl_multi_pattern_ctx *mpctx)
+{
+	mpctx->segments[0].head_index = 1;
+}
+
+static __rte_always_inline bool
+mlx5_is_multi_pattern_active(const struct mlx5_tbl_multi_pattern_ctx *mpctx)
+{
+	return mpctx->segments[0].head_index == 1;
+}
+
+static __rte_always_inline struct mlx5_multi_pattern_segment *
+mlx5_multi_pattern_segment_get_next(struct mlx5_tbl_multi_pattern_ctx *mpctx)
+{
+	int i;
+
+	for (i = 0; i < MLX5_MAX_TABLE_RESIZE_NUM; i++) {
+		if (!mpctx->segments[i].capacity)
+			return &mpctx->segments[i];
+	}
+	return NULL;
+}
+
+static __rte_always_inline struct mlx5_multi_pattern_segment *
+mlx5_multi_pattern_segment_find(struct mlx5_tbl_multi_pattern_ctx *mpctx,
+				uint32_t flow_resource_ix)
+{
+	int i;
+
+	for (i = 0; i < MLX5_MAX_TABLE_RESIZE_NUM; i++) {
+		uint32_t limit = mpctx->segments[i].head_index +
+				 mpctx->segments[i].capacity;
+
+		if (flow_resource_ix < limit)
+			return &mpctx->segments[i];
+	}
+	return NULL;
+}
+
 struct mlx5_flow_template_table_cfg {
 	struct rte_flow_template_table_attr attr; /* Table attributes passed through flow API. */
 	bool external; /* True if created by flow API, false if table is internal to PMD. */
@@ -1507,6 +1575,7 @@ struct rte_flow_template_table {
 	uint8_t nb_item_templates; /* Item template number. */
 	uint8_t nb_action_templates; /* Action template number. */
 	uint32_t refcnt; /* Table reference counter. */
+	struct mlx5_tbl_multi_pattern_ctx mpctx;
 };
 
 #endif
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 5938d8b90c..38aed03970 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -78,41 +78,14 @@ struct mlx5_indlst_legacy {
 #define MLX5_CONST_ENCAP_ITEM(encap_type, ptr) \
 	(((const struct encap_type *)(ptr))->definition)
 
-struct mlx5_multi_pattern_ctx {
-	union {
-		struct mlx5dr_action_reformat_header reformat_hdr;
-		struct mlx5dr_action_mh_pattern mh_pattern;
-	};
-	union {
-		/* action template auxiliary structures for object destruction */
-		struct mlx5_hw_encap_decap_action *encap;
-		struct mlx5_hw_modify_header_action *mhdr;
-	};
-	/* multi pattern action */
-	struct mlx5dr_rule_action *rule_action;
-};
-
-#define MLX5_MULTIPATTERN_ENCAP_NUM 4
-
-struct mlx5_tbl_multi_pattern_ctx {
-	struct {
-		uint32_t elements_num;
-		struct mlx5_multi_pattern_ctx ctx[MLX5_HW_TBL_MAX_ACTION_TEMPLATE];
-	} reformat[MLX5_MULTIPATTERN_ENCAP_NUM];
-
-	struct {
-		uint32_t elements_num;
-		struct mlx5_multi_pattern_ctx ctx[MLX5_HW_TBL_MAX_ACTION_TEMPLATE];
-	} mh;
-};
-
-#define MLX5_EMPTY_MULTI_PATTERN_CTX {{{0,}},}
-
 static int
 mlx5_tbl_multi_pattern_process(struct rte_eth_dev *dev,
 			       struct rte_flow_template_table *tbl,
-			       struct mlx5_tbl_multi_pattern_ctx *mpat,
+			       struct mlx5_multi_pattern_segment *segment,
+			       uint32_t bulk_size,
 			       struct rte_flow_error *error);
+static void
+mlx5_destroy_multi_pattern_segment(struct mlx5_multi_pattern_segment *segment);
 
 static __rte_always_inline int
 mlx5_multi_pattern_reformat_to_index(enum mlx5dr_action_type type)
@@ -577,28 +550,14 @@ flow_hw_ct_compile(struct rte_eth_dev *dev,
 static void
 flow_hw_template_destroy_reformat_action(struct mlx5_hw_encap_decap_action *encap_decap)
 {
-	if (encap_decap->multi_pattern) {
-		uint32_t refcnt = __atomic_sub_fetch(encap_decap->multi_pattern_refcnt,
-						     1, __ATOMIC_RELAXED);
-		if (refcnt)
-			return;
-		mlx5_free((void *)(uintptr_t)encap_decap->multi_pattern_refcnt);
-	}
-	if (encap_decap->action)
+	if (encap_decap->action && !encap_decap->multi_pattern)
 		mlx5dr_action_destroy(encap_decap->action);
 }
 
 static void
 flow_hw_template_destroy_mhdr_action(struct mlx5_hw_modify_header_action *mhdr)
 {
-	if (mhdr->multi_pattern) {
-		uint32_t refcnt = __atomic_sub_fetch(mhdr->multi_pattern_refcnt,
-						     1, __ATOMIC_RELAXED);
-		if (refcnt)
-			return;
-		mlx5_free((void *)(uintptr_t)mhdr->multi_pattern_refcnt);
-	}
-	if (mhdr->action)
+	if (mhdr->action && !mhdr->multi_pattern)
 		mlx5dr_action_destroy(mhdr->action);
 }
 
@@ -1924,21 +1883,22 @@ mlx5_tbl_translate_reformat(struct mlx5_priv *priv,
 		acts->encap_decap->shared = true;
 	} else {
 		uint32_t ix;
-		typeof(mp_ctx->reformat[0]) *reformat_ctx = mp_ctx->reformat +
-							    mp_reformat_ix;
+		typeof(mp_ctx->reformat[0]) *reformat = mp_ctx->reformat +
+							mp_reformat_ix;
 
-		ix = reformat_ctx->elements_num++;
-		reformat_ctx->ctx[ix].reformat_hdr = hdr;
-		reformat_ctx->ctx[ix].rule_action = &acts->rule_acts[at->reformat_off];
-		reformat_ctx->ctx[ix].encap = acts->encap_decap;
+		ix = reformat->elements_num++;
+		reformat->reformat_hdr[ix] = hdr;
 		acts->rule_acts[at->reformat_off].reformat.hdr_idx = ix;
 		acts->encap_decap_pos = at->reformat_off;
+		acts->encap_decap->multi_pattern = 1;
 		acts->encap_decap->data_size = data_size;
+		acts->encap_decap->action_type = refmt_type;
 		ret = __flow_hw_act_data_encap_append
 			(priv, acts,
(at->actions + reformat_src)->type, reformat_src, at->reformat_off, data_size); if (ret) return -rte_errno; + mlx5_multi_pattern_activate(mp_ctx); } return 0; } @@ -1987,12 +1947,11 @@ mlx5_tbl_translate_modify_header(struct rte_eth_dev *dev, } else { typeof(mp_ctx->mh) *mh = &mp_ctx->mh; uint32_t idx = mh->elements_num; - struct mlx5_multi_pattern_ctx *mh_ctx = mh->ctx + mh->elements_num++; - mh_ctx->mh_pattern = pattern; - mh_ctx->mhdr = acts->mhdr; - mh_ctx->rule_action = &acts->rule_acts[mhdr_ix]; + mh->pattern[mh->elements_num++] = pattern; + acts->mhdr->multi_pattern = 1; acts->rule_acts[mhdr_ix].modify_header.pattern_idx = idx; + mlx5_multi_pattern_activate(mp_ctx); } return 0; } @@ -2552,16 +2511,17 @@ flow_hw_actions_translate(struct rte_eth_dev *dev, { int ret; uint32_t i; - struct mlx5_tbl_multi_pattern_ctx mpat = MLX5_EMPTY_MULTI_PATTERN_CTX; for (i = 0; i < tbl->nb_action_templates; i++) { if (__flow_hw_actions_translate(dev, &tbl->cfg, &tbl->ats[i].acts, tbl->ats[i].action_template, - &mpat, error)) + &tbl->mpctx, error)) goto err; } - ret = mlx5_tbl_multi_pattern_process(dev, tbl, &mpat, error); + ret = mlx5_tbl_multi_pattern_process(dev, tbl, &tbl->mpctx.segments[0], + rte_log2_u32(tbl->cfg.attr.nb_flows), + error); if (ret) goto err; return 0; @@ -2944,6 +2904,7 @@ flow_hw_actions_construct(struct rte_eth_dev *dev, int ret; uint32_t age_idx = 0; struct mlx5_aso_mtr *aso_mtr; + struct mlx5_multi_pattern_segment *mp_segment; rte_memcpy(rule_acts, hw_acts->rule_acts, sizeof(*rule_acts) * at->dr_actions_num); attr.group = table->grp->group_id; @@ -3074,6 +3035,10 @@ flow_hw_actions_construct(struct rte_eth_dev *dev, MLX5_ASSERT(ipv6_push->size == act_data->ipv6_ext.len); break; case RTE_FLOW_ACTION_TYPE_MODIFY_FIELD: + mp_segment = mlx5_multi_pattern_segment_find(&table->mpctx, job->flow->res_idx); + if (!mp_segment || !mp_segment->mhdr_action) + return -1; + rule_acts[hw_acts->mhdr->pos].action = mp_segment->mhdr_action; if (action->type == 
RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_VID) ret = flow_hw_set_vlan_vid_construct(dev, job, act_data, @@ -3225,9 +3190,17 @@ flow_hw_actions_construct(struct rte_eth_dev *dev, age_idx); } if (hw_acts->encap_decap && !hw_acts->encap_decap->shared) { - rule_acts[hw_acts->encap_decap_pos].reformat.offset = - job->flow->res_idx - 1; - rule_acts[hw_acts->encap_decap_pos].reformat.data = buf; + int ix = mlx5_multi_pattern_reformat_to_index(hw_acts->encap_decap->action_type); + struct mlx5dr_rule_action *ra = &rule_acts[hw_acts->encap_decap_pos]; + + if (ix < 0) + return -1; + mp_segment = mlx5_multi_pattern_segment_find(&table->mpctx, job->flow->res_idx); + if (!mp_segment || !mp_segment->reformat_action[ix]) + return -1; + ra->action = mp_segment->reformat_action[ix]; + ra->reformat.offset = job->flow->res_idx - 1; + ra->reformat.data = buf; } if (hw_acts->push_remove && !hw_acts->push_remove->shared) { rule_acts[hw_acts->push_remove_pos].ipv6_ext.offset = @@ -4133,86 +4106,65 @@ flow_hw_q_flow_flush(struct rte_eth_dev *dev, static int mlx5_tbl_multi_pattern_process(struct rte_eth_dev *dev, struct rte_flow_template_table *tbl, - struct mlx5_tbl_multi_pattern_ctx *mpat, + struct mlx5_multi_pattern_segment *segment, + uint32_t bulk_size, struct rte_flow_error *error) { + int ret = 0; uint32_t i; struct mlx5_priv *priv = dev->data->dev_private; + struct mlx5_tbl_multi_pattern_ctx *mpctx = &tbl->mpctx; const struct rte_flow_template_table_attr *table_attr = &tbl->cfg.attr; const struct rte_flow_attr *attr = &table_attr->flow_attr; enum mlx5dr_table_type type = get_mlx5dr_table_type(attr); uint32_t flags = mlx5_hw_act_flag[!!attr->group][type]; - struct mlx5dr_action *dr_action; - uint32_t bulk_size = rte_log2_u32(table_attr->nb_flows); + struct mlx5dr_action *dr_action = NULL; for (i = 0; i < MLX5_MULTIPATTERN_ENCAP_NUM; i++) { - uint32_t j; - uint32_t *reformat_refcnt; - typeof(mpat->reformat[0]) *reformat = mpat->reformat + i; - struct mlx5dr_action_reformat_header 
hdr[MLX5_HW_TBL_MAX_ACTION_TEMPLATE]; + typeof(mpctx->reformat[0]) *reformat = mpctx->reformat + i; enum mlx5dr_action_type reformat_type = mlx5_multi_pattern_reformat_index_to_type(i); if (!reformat->elements_num) continue; - for (j = 0; j < reformat->elements_num; j++) - hdr[j] = reformat->ctx[j].reformat_hdr; - reformat_refcnt = mlx5_malloc(MLX5_MEM_ZERO, sizeof(uint32_t), 0, - rte_socket_id()); - if (!reformat_refcnt) - return rte_flow_error_set(error, ENOMEM, - RTE_FLOW_ERROR_TYPE_UNSPECIFIED, - NULL, "failed to allocate multi-pattern encap counter"); - *reformat_refcnt = reformat->elements_num; - dr_action = mlx5dr_action_create_reformat - (priv->dr_ctx, reformat_type, reformat->elements_num, hdr, - bulk_size, flags); + dr_action = reformat_type == MLX5DR_ACTION_TYP_INSERT_HEADER ? + mlx5dr_action_create_insert_header + (priv->dr_ctx, reformat->elements_num, + reformat->insert_hdr, bulk_size, flags) : + mlx5dr_action_create_reformat + (priv->dr_ctx, reformat_type, reformat->elements_num, + reformat->reformat_hdr, bulk_size, flags); if (!dr_action) { - mlx5_free(reformat_refcnt); - return rte_flow_error_set(error, rte_errno, - RTE_FLOW_ERROR_TYPE_UNSPECIFIED, - NULL, - "failed to create multi-pattern encap action"); - } - for (j = 0; j < reformat->elements_num; j++) { - reformat->ctx[j].rule_action->action = dr_action; - reformat->ctx[j].encap->action = dr_action; - reformat->ctx[j].encap->multi_pattern = 1; - reformat->ctx[j].encap->multi_pattern_refcnt = reformat_refcnt; + ret = rte_flow_error_set(error, rte_errno, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, + "failed to create multi-pattern encap action"); + goto error; } + segment->reformat_action[i] = dr_action; } - if (mpat->mh.elements_num) { - typeof(mpat->mh) *mh = &mpat->mh; - struct mlx5dr_action_mh_pattern pattern[MLX5_HW_TBL_MAX_ACTION_TEMPLATE]; - uint32_t *mh_refcnt = mlx5_malloc(MLX5_MEM_ZERO, sizeof(uint32_t), - 0, rte_socket_id()); - - if (!mh_refcnt) - return rte_flow_error_set(error, ENOMEM, 
- RTE_FLOW_ERROR_TYPE_UNSPECIFIED, - NULL, "failed to allocate modify header counter"); - *mh_refcnt = mpat->mh.elements_num; - for (i = 0; i < mpat->mh.elements_num; i++) - pattern[i] = mh->ctx[i].mh_pattern; + if (mpctx->mh.elements_num) { + typeof(mpctx->mh) *mh = &mpctx->mh; dr_action = mlx5dr_action_create_modify_header - (priv->dr_ctx, mpat->mh.elements_num, pattern, + (priv->dr_ctx, mpctx->mh.elements_num, mh->pattern, bulk_size, flags); if (!dr_action) { - mlx5_free(mh_refcnt); - return rte_flow_error_set(error, rte_errno, + ret = rte_flow_error_set(error, rte_errno, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, - NULL, - "failed to create multi-pattern header modify action"); - } - for (i = 0; i < mpat->mh.elements_num; i++) { - mh->ctx[i].rule_action->action = dr_action; - mh->ctx[i].mhdr->action = dr_action; - mh->ctx[i].mhdr->multi_pattern = 1; - mh->ctx[i].mhdr->multi_pattern_refcnt = mh_refcnt; + NULL, "failed to create multi-pattern header modify action"); + goto error; } + segment->mhdr_action = dr_action; + } + if (dr_action) { + segment->capacity = RTE_BIT32(bulk_size); + if (segment != &mpctx->segments[MLX5_MAX_TABLE_RESIZE_NUM - 1]) + segment[1].head_index = segment->head_index + segment->capacity; } - return 0; +error: + mlx5_destroy_multi_pattern_segment(segment); + return ret; } static int @@ -4225,7 +4177,6 @@ mlx5_hw_build_template_table(struct rte_eth_dev *dev, { int ret; uint8_t i; - struct mlx5_tbl_multi_pattern_ctx mpat = MLX5_EMPTY_MULTI_PATTERN_CTX; for (i = 0; i < nb_action_templates; i++) { uint32_t refcnt = __atomic_add_fetch(&action_templates[i]->refcnt, 1, @@ -4246,16 +4197,21 @@ mlx5_hw_build_template_table(struct rte_eth_dev *dev, ret = __flow_hw_actions_translate(dev, &tbl->cfg, &tbl->ats[i].acts, action_templates[i], - &mpat, error); + &tbl->mpctx, error); if (ret) { i++; goto at_error; } } tbl->nb_action_templates = nb_action_templates; - ret = mlx5_tbl_multi_pattern_process(dev, tbl, &mpat, error); - if (ret) - goto at_error; + if 
(mlx5_is_multi_pattern_active(&tbl->mpctx)) { + ret = mlx5_tbl_multi_pattern_process(dev, tbl, + &tbl->mpctx.segments[0], + rte_log2_u32(tbl->cfg.attr.nb_flows), + error); + if (ret) + goto at_error; + } return 0; at_error: @@ -4624,6 +4580,28 @@ flow_hw_template_table_create(struct rte_eth_dev *dev, action_templates, nb_action_templates, error); } +static void +mlx5_destroy_multi_pattern_segment(struct mlx5_multi_pattern_segment *segment) +{ + int i; + + if (segment->mhdr_action) + mlx5dr_action_destroy(segment->mhdr_action); + for (i = 0; i < MLX5_MULTIPATTERN_ENCAP_NUM; i++) { + if (segment->reformat_action[i]) + mlx5dr_action_destroy(segment->reformat_action[i]); + } + segment->capacity = 0; +} + +static void +flow_hw_destroy_table_multi_pattern_ctx(struct rte_flow_template_table *table) +{ + int sx; + + for (sx = 0; sx < MLX5_MAX_TABLE_RESIZE_NUM; sx++) + mlx5_destroy_multi_pattern_segment(table->mpctx.segments + sx); +} /** * Destroy flow table. * @@ -4669,6 +4647,7 @@ flow_hw_table_destroy(struct rte_eth_dev *dev, __atomic_fetch_sub(&table->ats[i].action_template->refcnt, 1, __ATOMIC_RELAXED); } + flow_hw_destroy_table_multi_pattern_ctx(table); mlx5dr_matcher_destroy(table->matcher); mlx5_hlist_unregister(priv->sh->groups, &table->grp->entry); mlx5_ipool_destroy(table->resource);

From patchwork Wed Feb 28 10:25:26 2024
X-Patchwork-Submitter: Gregory Etelson
X-Patchwork-Id: 137424
X-Patchwork-Delegate: rasland@nvidia.com
From: Gregory Etelson
CC: Dariusz Sosnowski, Viacheslav Ovsiienko, Ori Kam, Suanming Mou, Matan Azrad
Subject: [PATCH v2 4/4] net/mlx5: add support for flow table resizing
Date: Wed, 28 Feb 2024 12:25:26 +0200
Message-ID: <20240228102526.434717-5-getelson@nvidia.com>
In-Reply-To: <20240228102526.434717-1-getelson@nvidia.com>
References: <20240202115611.288892-2-getelson@nvidia.com> <20240228102526.434717-1-getelson@nvidia.com>
Support the template table resize API in the PMD. The patch allows increasing the capacity of an existing template table.

Signed-off-by: Gregory Etelson
Acked-by: Dariusz Sosnowski
---
 drivers/net/mlx5/mlx5.h         |   5 +
 drivers/net/mlx5/mlx5_flow.c    |  51 ++++
 drivers/net/mlx5/mlx5_flow.h    |  84 ++++--
 drivers/net/mlx5/mlx5_flow_hw.c | 518 +++++++++++++++++++++++++++-----
 4 files changed, 553 insertions(+), 105 deletions(-)

diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h index 99850a58af..bb1853e797 100644 --- a/drivers/net/mlx5/mlx5.h +++ b/drivers/net/mlx5/mlx5.h @@ -380,6 +380,9 @@ enum mlx5_hw_job_type { MLX5_HW_Q_JOB_TYPE_UPDATE, /* Flow update job type. */ MLX5_HW_Q_JOB_TYPE_QUERY, /* Flow query job type. */ MLX5_HW_Q_JOB_TYPE_UPDATE_QUERY, /* Flow update and query job type. */ + MLX5_HW_Q_JOB_TYPE_RSZTBL_FLOW_CREATE, /* Non-optimized flow create job type. */ + MLX5_HW_Q_JOB_TYPE_RSZTBL_FLOW_DESTROY, /* Non-optimized flow destroy job type. */ + MLX5_HW_Q_JOB_TYPE_RSZTBL_FLOW_MOVE, /* Move flow after table resize.
*/ }; enum mlx5_hw_indirect_type { @@ -422,6 +425,8 @@ struct mlx5_hw_q { struct mlx5_hw_q_job **job; /* LIFO header. */ struct rte_ring *indir_cq; /* Indirect action SW completion queue. */ struct rte_ring *indir_iq; /* Indirect action SW in progress queue. */ + struct rte_ring *flow_transfer_pending; + struct rte_ring *flow_transfer_completed; } __rte_cache_aligned; diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c index 3e179110a0..477b13e04d 100644 --- a/drivers/net/mlx5/mlx5_flow.c +++ b/drivers/net/mlx5/mlx5_flow.c @@ -1095,6 +1095,20 @@ mlx5_flow_calc_encap_hash(struct rte_eth_dev *dev, uint8_t *hash, struct rte_flow_error *error); +static int +mlx5_template_table_resize(struct rte_eth_dev *dev, + struct rte_flow_template_table *table, + uint32_t nb_rules, struct rte_flow_error *error); +static int +mlx5_flow_async_update_resized(struct rte_eth_dev *dev, uint32_t queue, + const struct rte_flow_op_attr *attr, + struct rte_flow *rule, void *user_data, + struct rte_flow_error *error); +static int +mlx5_table_resize_complete(struct rte_eth_dev *dev, + struct rte_flow_template_table *table, + struct rte_flow_error *error); + static const struct rte_flow_ops mlx5_flow_ops = { .validate = mlx5_flow_validate, .create = mlx5_flow_create, @@ -1133,6 +1147,9 @@ static const struct rte_flow_ops mlx5_flow_ops = { mlx5_flow_action_list_handle_query_update, .flow_calc_table_hash = mlx5_flow_calc_table_hash, .flow_calc_encap_hash = mlx5_flow_calc_encap_hash, + .flow_template_table_resize = mlx5_template_table_resize, + .flow_update_resized = mlx5_flow_async_update_resized, + .flow_template_table_resize_complete = mlx5_table_resize_complete, }; /* Tunnel information. 
*/ @@ -10548,6 +10565,40 @@ mlx5_flow_calc_encap_hash(struct rte_eth_dev *dev, return fops->flow_calc_encap_hash(dev, pattern, dest_field, hash, error); } +static int +mlx5_template_table_resize(struct rte_eth_dev *dev, + struct rte_flow_template_table *table, + uint32_t nb_rules, struct rte_flow_error *error) +{ + const struct mlx5_flow_driver_ops *fops; + + MLX5_DRV_FOPS_OR_ERR(dev, fops, table_resize, ENOTSUP); + return fops->table_resize(dev, table, nb_rules, error); +} + +static int +mlx5_table_resize_complete(struct rte_eth_dev *dev, + struct rte_flow_template_table *table, + struct rte_flow_error *error) +{ + const struct mlx5_flow_driver_ops *fops; + + MLX5_DRV_FOPS_OR_ERR(dev, fops, table_resize_complete, ENOTSUP); + return fops->table_resize_complete(dev, table, error); +} + +static int +mlx5_flow_async_update_resized(struct rte_eth_dev *dev, uint32_t queue, + const struct rte_flow_op_attr *op_attr, + struct rte_flow *rule, void *user_data, + struct rte_flow_error *error) +{ + const struct mlx5_flow_driver_ops *fops; + + MLX5_DRV_FOPS_OR_ERR(dev, fops, flow_update_resized, ENOTSUP); + return fops->flow_update_resized(dev, queue, op_attr, rule, user_data, error); +} + /** * Destroy all indirect actions (shared RSS). * diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h index 9cc237c542..6c2944c21a 100644 --- a/drivers/net/mlx5/mlx5_flow.h +++ b/drivers/net/mlx5/mlx5_flow.h @@ -1217,6 +1217,7 @@ struct rte_flow { uint32_t tunnel:1; uint32_t meter:24; /**< Holds flow meter id. */ uint32_t indirect_type:2; /**< Indirect action type. */ + uint32_t matcher_selector:1; /**< Matcher index in resizable table. */ uint32_t rix_mreg_copy; /**< Index to metadata register copy table resource. */ uint32_t counter; /**< Holds flow counter. */ @@ -1262,6 +1263,7 @@ struct rte_flow_hw { }; struct rte_flow_template_table *table; /* The table flow allcated from. 
*/ uint8_t mt_idx; + uint8_t matcher_selector:1; uint32_t age_idx; cnt_id_t cnt_id; uint32_t mtr_id; @@ -1489,6 +1491,11 @@ struct mlx5_flow_group { #define MLX5_MAX_TABLE_RESIZE_NUM 64 struct mlx5_multi_pattern_segment { + /* + * Modify Header Argument Objects number allocated for action in that + * segment. + * Capacity is always power of 2. + */ uint32_t capacity; uint32_t head_index; struct mlx5dr_action *mhdr_action; @@ -1527,43 +1534,22 @@ mlx5_is_multi_pattern_active(const struct mlx5_tbl_multi_pattern_ctx *mpctx) return mpctx->segments[0].head_index == 1; } -static __rte_always_inline struct mlx5_multi_pattern_segment * -mlx5_multi_pattern_segment_get_next(struct mlx5_tbl_multi_pattern_ctx *mpctx) -{ - int i; - - for (i = 0; i < MLX5_MAX_TABLE_RESIZE_NUM; i++) { - if (!mpctx->segments[i].capacity) - return &mpctx->segments[i]; - } - return NULL; -} - -static __rte_always_inline struct mlx5_multi_pattern_segment * -mlx5_multi_pattern_segment_find(struct mlx5_tbl_multi_pattern_ctx *mpctx, - uint32_t flow_resource_ix) -{ - int i; - - for (i = 0; i < MLX5_MAX_TABLE_RESIZE_NUM; i++) { - uint32_t limit = mpctx->segments[i].head_index + - mpctx->segments[i].capacity; - - if (flow_resource_ix < limit) - return &mpctx->segments[i]; - } - return NULL; -} - struct mlx5_flow_template_table_cfg { struct rte_flow_template_table_attr attr; /* Table attributes passed through flow API. */ bool external; /* True if created by flow API, false if table is internal to PMD. */ }; +struct mlx5_matcher_info { + struct mlx5dr_matcher *matcher; /* Template matcher. */ + uint32_t refcnt; +}; + struct rte_flow_template_table { LIST_ENTRY(rte_flow_template_table) next; struct mlx5_flow_group *grp; /* The group rte_flow_template_table uses. */ - struct mlx5dr_matcher *matcher; /* Template matcher. */ + struct mlx5_matcher_info matcher_info[2]; + uint32_t matcher_selector; + rte_rwlock_t matcher_replace_rwlk; /* RW lock for resizable tables */ /* Item templates bind to the table. 
*/ struct rte_flow_pattern_template *its[MLX5_HW_TBL_MAX_ITEM_TEMPLATE]; /* Action templates bind to the table. */ @@ -1576,8 +1562,34 @@ struct rte_flow_template_table { uint8_t nb_action_templates; /* Action template number. */ uint32_t refcnt; /* Table reference counter. */ struct mlx5_tbl_multi_pattern_ctx mpctx; + struct mlx5dr_matcher_attr matcher_attr; }; +static __rte_always_inline struct mlx5dr_matcher * +mlx5_table_matcher(const struct rte_flow_template_table *table) +{ + return table->matcher_info[table->matcher_selector].matcher; +} + +static __rte_always_inline struct mlx5_multi_pattern_segment * +mlx5_multi_pattern_segment_find(struct rte_flow_template_table *table, + uint32_t flow_resource_ix) +{ + int i; + struct mlx5_tbl_multi_pattern_ctx *mpctx = &table->mpctx; + + if (likely(!rte_flow_template_table_resizable(0, &table->cfg.attr))) + return &mpctx->segments[0]; + for (i = 0; i < MLX5_MAX_TABLE_RESIZE_NUM; i++) { + uint32_t limit = mpctx->segments[i].head_index + + mpctx->segments[i].capacity; + + if (flow_resource_ix < limit) + return &mpctx->segments[i]; + } + return NULL; +} + #endif /* @@ -2274,6 +2286,17 @@ typedef int enum rte_flow_encap_hash_field dest_field, uint8_t *hash, struct rte_flow_error *error); +typedef int (*mlx5_table_resize_t)(struct rte_eth_dev *dev, + struct rte_flow_template_table *table, + uint32_t nb_rules, struct rte_flow_error *error); +typedef int (*mlx5_flow_update_resized_t) + (struct rte_eth_dev *dev, uint32_t queue, + const struct rte_flow_op_attr *attr, + struct rte_flow *rule, void *user_data, + struct rte_flow_error *error); +typedef int (*table_resize_complete_t)(struct rte_eth_dev *dev, + struct rte_flow_template_table *table, + struct rte_flow_error *error); struct mlx5_flow_driver_ops { mlx5_flow_validate_t validate; @@ -2348,6 +2371,9 @@ struct mlx5_flow_driver_ops { async_action_list_handle_query_update; mlx5_flow_calc_table_hash_t flow_calc_table_hash; mlx5_flow_calc_encap_hash_t flow_calc_encap_hash; + 
mlx5_table_resize_t table_resize; + mlx5_flow_update_resized_t flow_update_resized; + table_resize_complete_t table_resize_complete; }; /* mlx5_flow.c */ diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c index 38aed03970..1bd29999f9 100644 --- a/drivers/net/mlx5/mlx5_flow_hw.c +++ b/drivers/net/mlx5/mlx5_flow_hw.c @@ -2904,7 +2904,7 @@ flow_hw_actions_construct(struct rte_eth_dev *dev, int ret; uint32_t age_idx = 0; struct mlx5_aso_mtr *aso_mtr; - struct mlx5_multi_pattern_segment *mp_segment; + struct mlx5_multi_pattern_segment *mp_segment = NULL; rte_memcpy(rule_acts, hw_acts->rule_acts, sizeof(*rule_acts) * at->dr_actions_num); attr.group = table->grp->group_id; @@ -2918,17 +2918,20 @@ flow_hw_actions_construct(struct rte_eth_dev *dev, } else { attr.ingress = 1; } - if (hw_acts->mhdr && hw_acts->mhdr->mhdr_cmds_num > 0) { + if (hw_acts->mhdr && hw_acts->mhdr->mhdr_cmds_num > 0 && !hw_acts->mhdr->shared) { uint16_t pos = hw_acts->mhdr->pos; - if (!hw_acts->mhdr->shared) { - rule_acts[pos].modify_header.offset = - job->flow->res_idx - 1; - rule_acts[pos].modify_header.data = - (uint8_t *)job->mhdr_cmd; - rte_memcpy(job->mhdr_cmd, hw_acts->mhdr->mhdr_cmds, - sizeof(*job->mhdr_cmd) * hw_acts->mhdr->mhdr_cmds_num); - } + mp_segment = mlx5_multi_pattern_segment_find(table, job->flow->res_idx); + if (!mp_segment || !mp_segment->mhdr_action) + return -1; + rule_acts[pos].action = mp_segment->mhdr_action; + /* offset is relative to DR action */ + rule_acts[pos].modify_header.offset = + job->flow->res_idx - mp_segment->head_index; + rule_acts[pos].modify_header.data = + (uint8_t *)job->mhdr_cmd; + rte_memcpy(job->mhdr_cmd, hw_acts->mhdr->mhdr_cmds, + sizeof(*job->mhdr_cmd) * hw_acts->mhdr->mhdr_cmds_num); } LIST_FOREACH(act_data, &hw_acts->act_list, next) { uint32_t jump_group; @@ -3035,10 +3038,6 @@ flow_hw_actions_construct(struct rte_eth_dev *dev, MLX5_ASSERT(ipv6_push->size == act_data->ipv6_ext.len); break; case 
RTE_FLOW_ACTION_TYPE_MODIFY_FIELD: - mp_segment = mlx5_multi_pattern_segment_find(&table->mpctx, job->flow->res_idx); - if (!mp_segment || !mp_segment->mhdr_action) - return -1; - rule_acts[hw_acts->mhdr->pos].action = mp_segment->mhdr_action; if (action->type == RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_VID) ret = flow_hw_set_vlan_vid_construct(dev, job, act_data, @@ -3195,11 +3194,13 @@ flow_hw_actions_construct(struct rte_eth_dev *dev, if (ix < 0) return -1; - mp_segment = mlx5_multi_pattern_segment_find(&table->mpctx, job->flow->res_idx); + if (!mp_segment) + mp_segment = mlx5_multi_pattern_segment_find(table, job->flow->res_idx); if (!mp_segment || !mp_segment->reformat_action[ix]) return -1; ra->action = mp_segment->reformat_action[ix]; - ra->reformat.offset = job->flow->res_idx - 1; + /* reformat offset is relative to selected DR action */ + ra->reformat.offset = job->flow->res_idx - mp_segment->head_index; ra->reformat.data = buf; } if (hw_acts->push_remove && !hw_acts->push_remove->shared) { @@ -3371,10 +3372,26 @@ flow_hw_async_flow_create(struct rte_eth_dev *dev, pattern_template_index, job); if (!rule_items) goto error; - ret = mlx5dr_rule_create(table->matcher, - pattern_template_index, rule_items, - action_template_index, rule_acts, - &rule_attr, (struct mlx5dr_rule *)flow->rule); + if (likely(!rte_flow_template_table_resizable(dev->data->port_id, &table->cfg.attr))) { + ret = mlx5dr_rule_create(table->matcher_info[0].matcher, + pattern_template_index, rule_items, + action_template_index, rule_acts, + &rule_attr, + (struct mlx5dr_rule *)flow->rule); + } else { + uint32_t selector; + + job->type = MLX5_HW_Q_JOB_TYPE_RSZTBL_FLOW_CREATE; + rte_rwlock_read_lock(&table->matcher_replace_rwlk); + selector = table->matcher_selector; + ret = mlx5dr_rule_create(table->matcher_info[selector].matcher, + pattern_template_index, rule_items, + action_template_index, rule_acts, + &rule_attr, + (struct mlx5dr_rule *)flow->rule); + 
rte_rwlock_read_unlock(&table->matcher_replace_rwlk); + flow->matcher_selector = selector; + } if (likely(!ret)) return (struct rte_flow *)flow; error: @@ -3491,9 +3508,23 @@ flow_hw_async_flow_create_by_index(struct rte_eth_dev *dev, rte_errno = EINVAL; goto error; } - ret = mlx5dr_rule_create(table->matcher, - 0, items, action_template_index, rule_acts, - &rule_attr, (struct mlx5dr_rule *)flow->rule); + if (likely(!rte_flow_template_table_resizable(dev->data->port_id, &table->cfg.attr))) { + ret = mlx5dr_rule_create(table->matcher_info[0].matcher, + 0, items, action_template_index, + rule_acts, &rule_attr, + (struct mlx5dr_rule *)flow->rule); + } else { + uint32_t selector; + + job->type = MLX5_HW_Q_JOB_TYPE_RSZTBL_FLOW_CREATE; + rte_rwlock_read_lock(&table->matcher_replace_rwlk); + selector = table->matcher_selector; + ret = mlx5dr_rule_create(table->matcher_info[selector].matcher, + 0, items, action_template_index, + rule_acts, &rule_attr, + (struct mlx5dr_rule *)flow->rule); + rte_rwlock_read_unlock(&table->matcher_replace_rwlk); + } if (likely(!ret)) return (struct rte_flow *)flow; error: @@ -3673,7 +3704,8 @@ flow_hw_async_flow_destroy(struct rte_eth_dev *dev, return rte_flow_error_set(error, ENOMEM, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, "fail to destroy rte flow: flow queue full"); - job->type = MLX5_HW_Q_JOB_TYPE_DESTROY; + job->type = !rte_flow_template_table_resizable(dev->data->port_id, &fh->table->cfg.attr) ? 
+ MLX5_HW_Q_JOB_TYPE_DESTROY : MLX5_HW_Q_JOB_TYPE_RSZTBL_FLOW_DESTROY; job->user_data = user_data; job->flow = fh; rule_attr.user_data = job; @@ -3785,6 +3817,26 @@ flow_hw_pull_legacy_indirect_comp(struct rte_eth_dev *dev, struct mlx5_hw_q_job } } +static __rte_always_inline int +mlx5_hw_pull_flow_transfer_comp(struct rte_eth_dev *dev, + uint32_t queue, struct rte_flow_op_result res[], + uint16_t n_res) +{ + uint32_t size, i; + struct mlx5_hw_q_job *job = NULL; + struct mlx5_priv *priv = dev->data->dev_private; + struct rte_ring *ring = priv->hw_q[queue].flow_transfer_completed; + + size = RTE_MIN(rte_ring_count(ring), n_res); + for (i = 0; i < size; i++) { + res[i].status = RTE_FLOW_OP_SUCCESS; + rte_ring_dequeue(ring, (void **)&job); + res[i].user_data = job->user_data; + flow_hw_job_put(priv, job, queue); + } + return (int)size; +} + static inline int __flow_hw_pull_indir_action_comp(struct rte_eth_dev *dev, uint32_t queue, @@ -3833,6 +3885,79 @@ __flow_hw_pull_indir_action_comp(struct rte_eth_dev *dev, return ret_comp; } +static __rte_always_inline void +hw_cmpl_flow_update_or_destroy(struct rte_eth_dev *dev, + struct mlx5_hw_q_job *job, + uint32_t queue, struct rte_flow_error *error) +{ + struct mlx5_priv *priv = dev->data->dev_private; + struct mlx5_aso_mtr_pool *pool = priv->hws_mpool; + struct rte_flow_hw *flow = job->flow; + struct rte_flow_template_table *table = flow->table; + /* Release the original resource index in case of update. 
+	 */
+	uint32_t res_idx = flow->res_idx;
+
+	if (flow->fate_type == MLX5_FLOW_FATE_JUMP)
+		flow_hw_jump_release(dev, flow->jump);
+	else if (flow->fate_type == MLX5_FLOW_FATE_QUEUE)
+		mlx5_hrxq_obj_release(dev, flow->hrxq);
+	if (mlx5_hws_cnt_id_valid(flow->cnt_id))
+		flow_hw_age_count_release(priv, queue,
+					  flow, error);
+	if (flow->mtr_id) {
+		mlx5_ipool_free(pool->idx_pool, flow->mtr_id);
+		flow->mtr_id = 0;
+	}
+	if (job->type != MLX5_HW_Q_JOB_TYPE_UPDATE) {
+		if (table) {
+			mlx5_ipool_free(table->resource, res_idx);
+			mlx5_ipool_free(table->flow, flow->idx);
+		}
+	} else {
+		rte_memcpy(flow, job->upd_flow,
+			   offsetof(struct rte_flow_hw, rule));
+		mlx5_ipool_free(table->resource, res_idx);
+	}
+}
+
+static __rte_always_inline void
+hw_cmpl_resizable_tbl(struct rte_eth_dev *dev,
+		      struct mlx5_hw_q_job *job,
+		      uint32_t queue, enum rte_flow_op_status status,
+		      struct rte_flow_error *error)
+{
+	struct rte_flow_hw *flow = job->flow;
+	struct rte_flow_template_table *table = flow->table;
+	uint32_t selector = flow->matcher_selector;
+	uint32_t other_selector = (selector + 1) & 1;
+	uint32_t __rte_unused refcnt;
+
+	switch (job->type) {
+	case MLX5_HW_Q_JOB_TYPE_RSZTBL_FLOW_CREATE:
+		__atomic_add_fetch(&table->matcher_info[selector].refcnt,
+				   1, __ATOMIC_RELAXED);
+		break;
+	case MLX5_HW_Q_JOB_TYPE_RSZTBL_FLOW_DESTROY:
+		refcnt = __atomic_sub_fetch(&table->matcher_info[selector].refcnt, 1,
+					    __ATOMIC_RELAXED);
+		MLX5_ASSERT((int)refcnt >= 0);
+		hw_cmpl_flow_update_or_destroy(dev, job, queue, error);
+		break;
+	case MLX5_HW_Q_JOB_TYPE_RSZTBL_FLOW_MOVE:
+		if (status == RTE_FLOW_OP_SUCCESS) {
+			refcnt = __atomic_sub_fetch(&table->matcher_info[selector].refcnt,
+						    1, __ATOMIC_RELAXED);
+			MLX5_ASSERT((int)refcnt >= 0);
+			__atomic_add_fetch(&table->matcher_info[other_selector].refcnt,
+					   1, __ATOMIC_RELAXED);
+			flow->matcher_selector = other_selector;
+		}
+		break;
+	default:
+		break;
+	}
+}
+
 /**
  * Pull the enqueued flows.
 *
@@ -3861,9 +3986,7 @@ flow_hw_pull(struct rte_eth_dev *dev,
 	     struct rte_flow_error *error)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
-	struct mlx5_aso_mtr_pool *pool = priv->hws_mpool;
 	struct mlx5_hw_q_job *job;
-	uint32_t res_idx;
 	int ret, i;
 
 	/* 1. Pull the flow completion. */
@@ -3874,31 +3997,20 @@ flow_hw_pull(struct rte_eth_dev *dev,
 					  "fail to query flow queue");
 	for (i = 0; i < ret; i++) {
 		job = (struct mlx5_hw_q_job *)res[i].user_data;
-		/* Release the original resource index in case of update. */
-		res_idx = job->flow->res_idx;
 		/* Restore user data. */
 		res[i].user_data = job->user_data;
-		if (job->type == MLX5_HW_Q_JOB_TYPE_DESTROY ||
-		    job->type == MLX5_HW_Q_JOB_TYPE_UPDATE) {
-			if (job->flow->fate_type == MLX5_FLOW_FATE_JUMP)
-				flow_hw_jump_release(dev, job->flow->jump);
-			else if (job->flow->fate_type == MLX5_FLOW_FATE_QUEUE)
-				mlx5_hrxq_obj_release(dev, job->flow->hrxq);
-			if (mlx5_hws_cnt_id_valid(job->flow->cnt_id))
-				flow_hw_age_count_release(priv, queue,
-							  job->flow, error);
-			if (job->flow->mtr_id) {
-				mlx5_ipool_free(pool->idx_pool, job->flow->mtr_id);
-				job->flow->mtr_id = 0;
-			}
-			if (job->type == MLX5_HW_Q_JOB_TYPE_DESTROY) {
-				mlx5_ipool_free(job->flow->table->resource, res_idx);
-				mlx5_ipool_free(job->flow->table->flow, job->flow->idx);
-			} else {
-				rte_memcpy(job->flow, job->upd_flow,
-					   offsetof(struct rte_flow_hw, rule));
-				mlx5_ipool_free(job->flow->table->resource, res_idx);
-			}
+		switch (job->type) {
+		case MLX5_HW_Q_JOB_TYPE_DESTROY:
+		case MLX5_HW_Q_JOB_TYPE_UPDATE:
+			hw_cmpl_flow_update_or_destroy(dev, job, queue, error);
+			break;
+		case MLX5_HW_Q_JOB_TYPE_RSZTBL_FLOW_CREATE:
+		case MLX5_HW_Q_JOB_TYPE_RSZTBL_FLOW_MOVE:
+		case MLX5_HW_Q_JOB_TYPE_RSZTBL_FLOW_DESTROY:
+			hw_cmpl_resizable_tbl(dev, job, queue, res[i].status, error);
+			break;
+		default:
+			break;
 		}
 		flow_hw_job_put(priv, job, queue);
 	}
@@ -3906,24 +4018,36 @@ flow_hw_pull(struct rte_eth_dev *dev,
 	if (ret < n_res)
 		ret += __flow_hw_pull_indir_action_comp(dev, queue,
							&res[ret], n_res - ret);
+	if (ret < n_res)
+		ret += mlx5_hw_pull_flow_transfer_comp(dev, queue, &res[ret],
+						       n_res - ret);
+
 	return ret;
 }
 
+static uint32_t
+mlx5_hw_push_queue(struct rte_ring *pending_q, struct rte_ring *cmpl_q)
+{
+	void *job = NULL;
+	uint32_t i, size = rte_ring_count(pending_q);
+
+	for (i = 0; i < size; i++) {
+		rte_ring_dequeue(pending_q, &job);
+		rte_ring_enqueue(cmpl_q, job);
+	}
+	return size;
+}
+
 static inline uint32_t
 __flow_hw_push_action(struct rte_eth_dev *dev, uint32_t queue)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
-	struct rte_ring *iq = priv->hw_q[queue].indir_iq;
-	struct rte_ring *cq = priv->hw_q[queue].indir_cq;
-	void *job = NULL;
-	uint32_t ret, i;
+	struct mlx5_hw_q *hw_q = &priv->hw_q[queue];
 
-	ret = rte_ring_count(iq);
-	for (i = 0; i < ret; i++) {
-		rte_ring_dequeue(iq, &job);
-		rte_ring_enqueue(cq, job);
-	}
+	mlx5_hw_push_queue(hw_q->indir_iq, hw_q->indir_cq);
+	mlx5_hw_push_queue(hw_q->flow_transfer_pending,
+			   hw_q->flow_transfer_completed);
 	if (!priv->shared_host) {
 		if (priv->hws_ctpool)
 			mlx5_aso_push_wqe(priv->sh,
@@ -4332,6 +4456,7 @@ flow_hw_table_create(struct rte_eth_dev *dev,
 	grp = container_of(ge, struct mlx5_flow_group, entry);
 	tbl->grp = grp;
 	/* Prepare matcher information.
 	 */
+	matcher_attr.resizable = !!rte_flow_template_table_resizable(dev->data->port_id, &table_cfg->attr);
 	matcher_attr.optimize_flow_src = MLX5DR_MATCHER_FLOW_SRC_ANY;
 	matcher_attr.priority = attr->flow_attr.priority;
 	matcher_attr.optimize_using_rule_idx = true;
@@ -4350,7 +4475,7 @@ flow_hw_table_create(struct rte_eth_dev *dev,
 			  RTE_FLOW_TABLE_SPECIALIZE_TRANSFER_VPORT_ORIG;
 		if ((attr->specialize & val) == val) {
-			DRV_LOG(INFO, "Invalid hint value %x",
+			DRV_LOG(ERR, "Invalid hint value %x",
 				attr->specialize);
 			rte_errno = EINVAL;
 			goto it_error;
@@ -4394,10 +4519,11 @@ flow_hw_table_create(struct rte_eth_dev *dev,
 			i = nb_item_templates;
 			goto it_error;
 		}
-	tbl->matcher = mlx5dr_matcher_create
+	tbl->matcher_info[0].matcher = mlx5dr_matcher_create
 		(tbl->grp->tbl, mt, nb_item_templates, at, nb_action_templates, &matcher_attr);
-	if (!tbl->matcher)
+	if (!tbl->matcher_info[0].matcher)
 		goto at_error;
+	tbl->matcher_attr = matcher_attr;
 	tbl->type = attr->flow_attr.transfer ? MLX5DR_TABLE_TYPE_FDB :
 		    (attr->flow_attr.egress ?
 		     MLX5DR_TABLE_TYPE_NIC_TX : MLX5DR_TABLE_TYPE_NIC_RX);
@@ -4405,6 +4531,7 @@ flow_hw_table_create(struct rte_eth_dev *dev,
 		LIST_INSERT_HEAD(&priv->flow_hw_tbl, tbl, next);
 	else
 		LIST_INSERT_HEAD(&priv->flow_hw_tbl_ongo, tbl, next);
+	rte_rwlock_init(&tbl->matcher_replace_rwlk);
 	return tbl;
 at_error:
 	for (i = 0; i < nb_action_templates; i++) {
@@ -4576,6 +4703,11 @@ flow_hw_template_table_create(struct rte_eth_dev *dev,
 	if (flow_hw_translate_group(dev, &cfg, group, &cfg.attr.flow_attr.group, error))
 		return NULL;
+	if (!cfg.attr.flow_attr.group && rte_flow_template_table_resizable(dev->data->port_id, attr)) {
+		rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+				   "table cannot be resized: invalid group");
+		return NULL;
+	}
 	return flow_hw_table_create(dev, &cfg, item_templates, nb_item_templates,
 				    action_templates, nb_action_templates, error);
 }
@@ -4648,7 +4780,10 @@ flow_hw_table_destroy(struct rte_eth_dev *dev,
 				   1, __ATOMIC_RELAXED);
 	}
 	flow_hw_destroy_table_multi_pattern_ctx(table);
-	mlx5dr_matcher_destroy(table->matcher);
+	if (table->matcher_info[0].matcher)
+		mlx5dr_matcher_destroy(table->matcher_info[0].matcher);
+	if (table->matcher_info[1].matcher)
+		mlx5dr_matcher_destroy(table->matcher_info[1].matcher);
 	mlx5_hlist_unregister(priv->sh->groups, &table->grp->entry);
 	mlx5_ipool_destroy(table->resource);
 	mlx5_ipool_destroy(table->flow);
@@ -9642,6 +9777,16 @@ action_template_drop_init(struct rte_eth_dev *dev,
 	return 0;
 }
 
+static __rte_always_inline struct rte_ring *
+mlx5_hwq_ring_create(uint16_t port_id, uint32_t queue, uint32_t size, const char *str)
+{
+	char mz_name[RTE_MEMZONE_NAMESIZE];
+
+	snprintf(mz_name, sizeof(mz_name), "port_%u_%s_%u", port_id, str, queue);
+	return rte_ring_create(mz_name, size, SOCKET_ID_ANY,
+			       RING_F_SP_ENQ | RING_F_SC_DEQ | RING_F_EXACT_SZ);
+}
+
 /**
  * Configure port HWS resources.
 *
@@ -9769,7 +9914,6 @@ flow_hw_configure(struct rte_eth_dev *dev,
 		goto err;
 	}
 	for (i = 0; i < nb_q_updated; i++) {
-		char mz_name[RTE_MEMZONE_NAMESIZE];
 		uint8_t *encap = NULL, *push = NULL;
 		struct mlx5_modification_cmd *mhdr_cmd = NULL;
 		struct rte_flow_item *items = NULL;
@@ -9803,22 +9947,23 @@ flow_hw_configure(struct rte_eth_dev *dev,
 			job[j].upd_flow = &upd_flow[j];
 			priv->hw_q[i].job[j] = &job[j];
 		}
-		snprintf(mz_name, sizeof(mz_name), "port_%u_indir_act_cq_%u",
-			 dev->data->port_id, i);
-		priv->hw_q[i].indir_cq = rte_ring_create(mz_name,
-				_queue_attr[i]->size, SOCKET_ID_ANY,
-				RING_F_SP_ENQ | RING_F_SC_DEQ |
-				RING_F_EXACT_SZ);
+		/* Notice ring name length is limited. */
+		priv->hw_q[i].indir_cq = mlx5_hwq_ring_create
+			(dev->data->port_id, i, _queue_attr[i]->size, "indir_act_cq");
 		if (!priv->hw_q[i].indir_cq)
 			goto err;
-		snprintf(mz_name, sizeof(mz_name), "port_%u_indir_act_iq_%u",
-			 dev->data->port_id, i);
-		priv->hw_q[i].indir_iq = rte_ring_create(mz_name,
-				_queue_attr[i]->size, SOCKET_ID_ANY,
-				RING_F_SP_ENQ | RING_F_SC_DEQ |
-				RING_F_EXACT_SZ);
+		priv->hw_q[i].indir_iq = mlx5_hwq_ring_create
+			(dev->data->port_id, i, _queue_attr[i]->size, "indir_act_iq");
 		if (!priv->hw_q[i].indir_iq)
 			goto err;
+		priv->hw_q[i].flow_transfer_pending = mlx5_hwq_ring_create
+			(dev->data->port_id, i, _queue_attr[i]->size, "tx_pending");
+		if (!priv->hw_q[i].flow_transfer_pending)
+			goto err;
+		priv->hw_q[i].flow_transfer_completed = mlx5_hwq_ring_create
+			(dev->data->port_id, i, _queue_attr[i]->size, "tx_done");
+		if (!priv->hw_q[i].flow_transfer_completed)
+			goto err;
 	}
 	dr_ctx_attr.pd = priv->sh->cdev->pd;
 	dr_ctx_attr.queues = nb_q_updated;
@@ -10039,6 +10184,8 @@ flow_hw_configure(struct rte_eth_dev *dev,
 	for (i = 0; i < nb_q_updated; i++) {
 		rte_ring_free(priv->hw_q[i].indir_iq);
 		rte_ring_free(priv->hw_q[i].indir_cq);
+		rte_ring_free(priv->hw_q[i].flow_transfer_pending);
+		rte_ring_free(priv->hw_q[i].flow_transfer_completed);
 	}
 	mlx5_free(priv->hw_q);
 	priv->hw_q = NULL;
@@
-10139,6 +10286,8 @@ flow_hw_resource_release(struct rte_eth_dev *dev)
 	for (i = 0; i < priv->nb_queue; i++) {
 		rte_ring_free(priv->hw_q[i].indir_iq);
 		rte_ring_free(priv->hw_q[i].indir_cq);
+		rte_ring_free(priv->hw_q[i].flow_transfer_pending);
+		rte_ring_free(priv->hw_q[i].flow_transfer_completed);
 	}
 	mlx5_free(priv->hw_q);
 	priv->hw_q = NULL;
@@ -11969,7 +12118,7 @@ flow_hw_calc_table_hash(struct rte_eth_dev *dev,
 	items = flow_hw_get_rule_items(dev, table, pattern,
 				       pattern_template_index, &job);
-	res = mlx5dr_rule_hash_calculate(table->matcher, items,
+	res = mlx5dr_rule_hash_calculate(mlx5_table_matcher(table), items,
 					 pattern_template_index,
 					 MLX5DR_RULE_HASH_CALC_MODE_RAW,
 					 hash);
@@ -12046,6 +12195,220 @@ flow_hw_calc_encap_hash(struct rte_eth_dev *dev,
 	return 0;
 }
 
+static int
+flow_hw_table_resize_multi_pattern_actions(struct rte_eth_dev *dev,
+					   struct rte_flow_template_table *table,
+					   uint32_t nb_flows,
+					   struct rte_flow_error *error)
+{
+	struct mlx5_multi_pattern_segment *segment = table->mpctx.segments;
+	uint32_t bulk_size;
+	int i, ret;
+
+	/**
+	 * Segment always allocates Modify Header Argument Objects number in
+	 * powers of 2.
+	 * On resize, PMD adds minimal required argument objects number.
+	 * For example, if table size was 10, it allocated 16 argument objects.
+	 * Resize to 15 will not add new objects.
+	 */
+	for (i = 1;
+	     i < MLX5_MAX_TABLE_RESIZE_NUM && segment->capacity;
+	     i++, segment++);
+	if (i == MLX5_MAX_TABLE_RESIZE_NUM)
+		return rte_flow_error_set(error, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+					  table, "too many resizes");
+	if (segment->head_index - 1 >= nb_flows)
+		return 0;
+	bulk_size = rte_align32pow2(nb_flows - segment->head_index + 1);
+	ret = mlx5_tbl_multi_pattern_process(dev, table, segment,
+					     rte_log2_u32(bulk_size),
+					     error);
+	if (ret)
+		return rte_flow_error_set(error, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+					  table, "too many resizes");
+	return i;
+}
+
+static int
+flow_hw_table_resize(struct rte_eth_dev *dev,
+		     struct rte_flow_template_table *table,
+		     uint32_t nb_flows,
+		     struct rte_flow_error *error)
+{
+	struct mlx5dr_action_template *at[MLX5_HW_TBL_MAX_ACTION_TEMPLATE];
+	struct mlx5dr_match_template *mt[MLX5_HW_TBL_MAX_ITEM_TEMPLATE];
+	struct mlx5dr_matcher_attr matcher_attr = table->matcher_attr;
+	struct mlx5_multi_pattern_segment *segment = NULL;
+	struct mlx5dr_matcher *matcher = NULL;
+	uint32_t i, selector = table->matcher_selector;
+	uint32_t other_selector = (selector + 1) & 1;
+	int ret;
+
+	if (!rte_flow_template_table_resizable(dev->data->port_id, &table->cfg.attr))
+		return rte_flow_error_set(error, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+					  table, "no resizable attribute");
+	if (table->matcher_info[other_selector].matcher)
+		return rte_flow_error_set(error, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+					  table, "last table resize was not completed");
+	if (nb_flows <= table->cfg.attr.nb_flows)
+		return rte_flow_error_set(error, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+					  table, "shrinking table is not supported");
+	ret = mlx5_ipool_resize(table->flow, nb_flows);
+	if (ret)
+		return rte_flow_error_set(error, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+					  table, "cannot resize flows pool");
+	ret = mlx5_ipool_resize(table->resource, nb_flows);
+	if (ret)
+		return rte_flow_error_set(error, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+					  table, "cannot resize resources pool");
+	if (mlx5_is_multi_pattern_active(&table->mpctx)) {
+		ret = flow_hw_table_resize_multi_pattern_actions(dev, table, nb_flows, error);
+		if (ret < 0)
+			return ret;
+		if (ret > 0)
+			segment = table->mpctx.segments + ret;
+	}
+	for (i = 0; i < table->nb_item_templates; i++)
+		mt[i] = table->its[i]->mt;
+	for (i = 0; i < table->nb_action_templates; i++)
+		at[i] = table->ats[i].action_template->tmpl;
+	nb_flows = rte_align32pow2(nb_flows);
+	matcher_attr.rule.num_log = rte_log2_u32(nb_flows);
+	matcher = mlx5dr_matcher_create(table->grp->tbl, mt,
+					table->nb_item_templates, at,
+					table->nb_action_templates,
+					&matcher_attr);
+	if (!matcher) {
+		ret = rte_flow_error_set(error, rte_errno,
+					 RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+					 table, "failed to create new matcher");
+		goto error;
+	}
+	rte_rwlock_write_lock(&table->matcher_replace_rwlk);
+	ret = mlx5dr_matcher_resize_set_target
+			(table->matcher_info[selector].matcher, matcher);
+	if (ret) {
+		rte_rwlock_write_unlock(&table->matcher_replace_rwlk);
+		ret = rte_flow_error_set(error, rte_errno,
+					 RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+					 table, "failed to initiate matcher swap");
+		goto error;
+	}
+	table->cfg.attr.nb_flows = nb_flows;
+	table->matcher_info[other_selector].matcher = matcher;
+	table->matcher_info[other_selector].refcnt = 0;
+	table->matcher_selector = other_selector;
+	rte_rwlock_write_unlock(&table->matcher_replace_rwlk);
+	return 0;
+error:
+	if (segment)
+		mlx5_destroy_multi_pattern_segment(segment);
+	if (matcher) {
+		ret = mlx5dr_matcher_destroy(matcher);
+		return rte_flow_error_set(error, rte_errno,
+					  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+					  table, "failed to destroy new matcher");
+	}
+	return ret;
+}
+
+static int
+flow_hw_table_resize_complete(__rte_unused struct rte_eth_dev *dev,
+			      struct rte_flow_template_table *table,
+			      struct rte_flow_error *error)
+{
+	int ret;
+	uint32_t selector = table->matcher_selector;
+	uint32_t other_selector =
+		(selector + 1) & 1;
+	struct mlx5_matcher_info *matcher_info = &table->matcher_info[other_selector];
+
+	if (!rte_flow_template_table_resizable(dev->data->port_id, &table->cfg.attr))
+		return rte_flow_error_set(error, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+					  table, "no resizable attribute");
+	if (!matcher_info->matcher || matcher_info->refcnt)
+		return rte_flow_error_set(error, EBUSY,
+					  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+					  table, "cannot complete table resize");
+	ret = mlx5dr_matcher_destroy(matcher_info->matcher);
+	if (ret)
+		return rte_flow_error_set(error, rte_errno,
+					  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+					  table, "failed to destroy retired matcher");
+	matcher_info->matcher = NULL;
+	return 0;
+}
+
+static int
+flow_hw_update_resized(struct rte_eth_dev *dev, uint32_t queue,
+		       const struct rte_flow_op_attr *attr,
+		       struct rte_flow *flow, void *user_data,
+		       struct rte_flow_error *error)
+{
+	int ret;
+	struct mlx5_priv *priv = dev->data->dev_private;
+	struct mlx5_hw_q_job *job;
+	struct rte_flow_hw *hw_flow = (struct rte_flow_hw *)flow;
+	struct rte_flow_template_table *table = hw_flow->table;
+	uint32_t table_selector = table->matcher_selector;
+	uint32_t rule_selector = hw_flow->matcher_selector;
+	uint32_t other_selector;
+	struct mlx5dr_matcher *other_matcher;
+	struct mlx5dr_rule_attr rule_attr = {
+		.queue_id = queue,
+		.burst = attr->postpone,
+	};
+
+	/**
+	 * mlx5dr_matcher_resize_rule_move() accepts original table matcher -
+	 * the one that was used BEFORE table resize.
+	 * Since the function is called AFTER table resize,
+	 * `table->matcher_selector` always points to the new matcher and
+	 * `hw_flow->matcher_selector` points to a matcher used to create the flow.
+	 */
+	other_selector = rule_selector == table_selector ?
+			 (rule_selector + 1) & 1 : rule_selector;
+	other_matcher = table->matcher_info[other_selector].matcher;
+	if (!other_matcher)
+		return rte_flow_error_set(error, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+					  "no active table resize");
+	job = flow_hw_job_get(priv, queue);
+	if (!job)
+		return rte_flow_error_set(error, ENOMEM,
+					  RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+					  "queue is full");
+	job->type = MLX5_HW_Q_JOB_TYPE_RSZTBL_FLOW_MOVE;
+	job->user_data = user_data;
+	job->flow = hw_flow;
+	rule_attr.user_data = job;
+	if (rule_selector == table_selector) {
+		struct rte_ring *ring = !attr->postpone ?
+					priv->hw_q[queue].flow_transfer_completed :
+					priv->hw_q[queue].flow_transfer_pending;
+		rte_ring_enqueue(ring, job);
+		return 0;
+	}
+	ret = mlx5dr_matcher_resize_rule_move(other_matcher,
+					      (struct mlx5dr_rule *)hw_flow->rule,
+					      &rule_attr);
+	if (ret) {
+		flow_hw_job_put(priv, job, queue);
+		return rte_flow_error_set(error, rte_errno,
+					  RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+					  "flow transfer failed");
+	}
+	return 0;
+}
+
 const struct mlx5_flow_driver_ops mlx5_flow_hw_drv_ops = {
 	.info_get = flow_hw_info_get,
 	.configure = flow_hw_configure,
@@ -12057,11 +12420,14 @@ const struct mlx5_flow_driver_ops mlx5_flow_hw_drv_ops = {
 	.actions_template_destroy = flow_hw_actions_template_destroy,
 	.template_table_create = flow_hw_template_table_create,
 	.template_table_destroy = flow_hw_table_destroy,
+	.table_resize = flow_hw_table_resize,
 	.group_set_miss_actions = flow_hw_group_set_miss_actions,
 	.async_flow_create = flow_hw_async_flow_create,
 	.async_flow_create_by_index = flow_hw_async_flow_create_by_index,
 	.async_flow_update = flow_hw_async_flow_update,
 	.async_flow_destroy = flow_hw_async_flow_destroy,
+	.flow_update_resized = flow_hw_update_resized,
+	.table_resize_complete = flow_hw_table_resize_complete,
 	.pull = flow_hw_pull,
 	.push = flow_hw_push,
 	.async_action_create = flow_hw_action_handle_create,