From patchwork Wed Feb 28 13:33:09 2024
X-Patchwork-Submitter: Gregory Etelson
X-Patchwork-Id: 137427
X-Patchwork-Delegate: rasland@nvidia.com
From: Gregory Etelson
To: dev@dpdk.org
Cc: Dariusz Sosnowski, Viacheslav Ovsiienko, Ori Kam, Suanming Mou, Matan Azrad
Subject: [PATCH v3 1/4] net/mlx5: add resize function to ipool
Date: Wed, 28 Feb 2024 15:33:09 +0200
Message-ID: <20240228133312.474474-2-getelson@nvidia.com>
In-Reply-To: <20240228133312.474474-1-getelson@nvidia.com>
References: <20240202115611.288892-2-getelson@nvidia.com>
 <20240228133312.474474-1-getelson@nvidia.com>

From: Maayan Kashani

Before this patch, the ipool size could be fixed by setting max_idx in
mlx5_indexed_pool_config upon ipool creation, or the pool could be
auto-resized up to the maximum limit by setting max_idx to zero upon
creation, in which case the saved value is the maximum possible index.

This patch adds an ipool_resize API that updates the value of max_idx
when it is not set to the maximum, i.e. when the pool is not in
auto-resize mode. It enables allocation of new trunks using
malloc/zmalloc up to the max_idx limit. Note that the number of entries
added by a resize must be divisible by trunk_size.

Signed-off-by: Maayan Kashani
Acked-by: Dariusz Sosnowski
---
 drivers/net/mlx5/mlx5_utils.c | 29 +++++++++++++++++++++++++++++
 drivers/net/mlx5/mlx5_utils.h | 16 ++++++++++++++++
 2 files changed, 45 insertions(+)

diff --git a/drivers/net/mlx5/mlx5_utils.c b/drivers/net/mlx5/mlx5_utils.c
index 4db738785f..e28db2ec43 100644
--- a/drivers/net/mlx5/mlx5_utils.c
+++ b/drivers/net/mlx5/mlx5_utils.c
@@ -809,6 +809,35 @@ mlx5_ipool_get_next(struct mlx5_indexed_pool *pool, uint32_t *pos)
 	return NULL;
 }
 
+int
+mlx5_ipool_resize(struct mlx5_indexed_pool *pool, uint32_t num_entries)
+{
+	uint32_t cur_max_idx;
+	uint32_t max_index = mlx5_trunk_idx_offset_get(pool, TRUNK_MAX_IDX + 1);
+
+	if (num_entries % pool->cfg.trunk_size) {
+		DRV_LOG(ERR, "num_entries param should be trunk_size(=%u) multiplication\n",
+			pool->cfg.trunk_size);
+		return -EINVAL;
+	}
+
+	mlx5_ipool_lock(pool);
+	cur_max_idx = pool->cfg.max_idx + num_entries;
+	/* If the ipool max idx is above maximum or uint overflow occurred. */
+	if (cur_max_idx > max_index || cur_max_idx < num_entries) {
+		DRV_LOG(ERR, "Ipool resize failed\n");
+		DRV_LOG(ERR, "Adding %u entries to existing %u entries, will cross max limit(=%u)\n",
+			num_entries, cur_max_idx, max_index);
+		mlx5_ipool_unlock(pool);
+		return -EINVAL;
+	}
+
+	/* Update maximum entries number. */
+	pool->cfg.max_idx = cur_max_idx;
+	mlx5_ipool_unlock(pool);
+	return 0;
+}
+
 void
 mlx5_ipool_dump(struct mlx5_indexed_pool *pool)
 {
diff --git a/drivers/net/mlx5/mlx5_utils.h b/drivers/net/mlx5/mlx5_utils.h
index 82e8298781..f3c0d76a6d 100644
--- a/drivers/net/mlx5/mlx5_utils.h
+++ b/drivers/net/mlx5/mlx5_utils.h
@@ -427,6 +427,22 @@ void mlx5_ipool_flush_cache(struct mlx5_indexed_pool *pool);
  */
 void *mlx5_ipool_get_next(struct mlx5_indexed_pool *pool, uint32_t *pos);
 
+/**
+ * This function resizes the ipool.
+ *
+ * @param pool
+ *   Pointer to the index memory pool handler.
+ * @param num_entries
+ *   Number of entries to be added to the pool.
+ *   This number should be divisible by trunk_size.
+ *
+ * @return
+ *   - non-zero value on error.
+ *   - 0 on success.
+ *
+ */
+int mlx5_ipool_resize(struct mlx5_indexed_pool *pool, uint32_t num_entries);
+
 /**
  * This function allocates new empty Three-level table.
  *
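For reference, a minimal caller-side sketch of the new API follows; it is
not part of the patch. The helper name and the growth factor are
illustrative only, and the sketch assumes only what the hunks above show:
mlx5_ipool_resize() raises cfg.max_idx by num_entries and requires
num_entries to be a multiple of cfg.trunk_size.

#include <errno.h>
#include "mlx5_utils.h"

/* Hypothetical helper: grow a fixed-size ipool by four trunks. */
static int
example_grow_ipool(struct mlx5_indexed_pool *pool)
{
	/* The added amount must be a multiple of cfg.trunk_size. */
	uint32_t grow_by = pool->cfg.trunk_size * 4;
	int ret = mlx5_ipool_resize(pool, grow_by);

	if (ret) {
		/* Either grow_by was not trunk-aligned or the new
		 * max_idx would exceed the trunk index space.
		 */
		return ret;
	}
	return 0;
}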

From patchwork Wed Feb 28 13:33:10 2024
X-Patchwork-Submitter: Gregory Etelson
X-Patchwork-Id: 137428
X-Patchwork-Delegate: rasland@nvidia.com
From: Gregory Etelson
To: dev@dpdk.org
Cc: Dariusz Sosnowski, Viacheslav Ovsiienko, Ori Kam, Suanming Mou, Matan Azrad, Rongwei Liu
Subject: [PATCH v3 2/4] net/mlx5: fix parameters verification in HWS table create
Date: Wed, 28 Feb 2024 15:33:10 +0200
Message-ID: <20240228133312.474474-3-getelson@nvidia.com>
In-Reply-To: <20240228133312.474474-1-getelson@nvidia.com>
References: <20240202115611.288892-2-getelson@nvidia.com>
 <20240228133312.474474-1-getelson@nvidia.com>

Modified the conditionals in `flow_hw_table_create()` to use bitwise AND
instead of equality checks when evaluating the
`table_cfg->attr->specialize` bitmask. This allows for greater
flexibility, as the bitmask may encapsulate multiple flags. The patch
maintains the previous behavior for single-flag values while adding
support for multiple flags.

Fixes: 240b77cfcba5 ("net/mlx5: enable hint in async flow table")
Cc: stable@dpdk.org

Signed-off-by: Gregory Etelson
Acked-by: Dariusz Sosnowski
---
 drivers/net/mlx5/mlx5_flow_hw.c | 23 +++++++++++++++++------
 1 file changed, 17 insertions(+), 6 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 783ad9e72a..5938d8b90c 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -4390,12 +4390,23 @@ flow_hw_table_create(struct rte_eth_dev *dev,
 	matcher_attr.rule.num_log = rte_log2_u32(nb_flows);
 	/* Parse hints information. */
 	if (attr->specialize) {
-		if (attr->specialize == RTE_FLOW_TABLE_SPECIALIZE_TRANSFER_WIRE_ORIG)
-			matcher_attr.optimize_flow_src = MLX5DR_MATCHER_FLOW_SRC_WIRE;
-		else if (attr->specialize == RTE_FLOW_TABLE_SPECIALIZE_TRANSFER_VPORT_ORIG)
-			matcher_attr.optimize_flow_src = MLX5DR_MATCHER_FLOW_SRC_VPORT;
-		else
-			DRV_LOG(INFO, "Unsupported hint value %x", attr->specialize);
+		uint32_t val = RTE_FLOW_TABLE_SPECIALIZE_TRANSFER_WIRE_ORIG |
+			       RTE_FLOW_TABLE_SPECIALIZE_TRANSFER_VPORT_ORIG;
+
+		if ((attr->specialize & val) == val) {
+			DRV_LOG(INFO, "Invalid hint value %x",
+				attr->specialize);
+			rte_errno = EINVAL;
+			goto it_error;
+		}
+		if (attr->specialize &
+		    RTE_FLOW_TABLE_SPECIALIZE_TRANSFER_WIRE_ORIG)
+			matcher_attr.optimize_flow_src =
+				MLX5DR_MATCHER_FLOW_SRC_WIRE;
+		else if (attr->specialize &
+			 RTE_FLOW_TABLE_SPECIALIZE_TRANSFER_VPORT_ORIG)
+			matcher_attr.optimize_flow_src =
+				MLX5DR_MATCHER_FLOW_SRC_VPORT;
 	}
 	/* Build the item template. */
 	for (i = 0; i < nb_item_templates; i++) {
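The behavioural difference is easiest to see in isolation. The sketch
below is not part of the patch; it only uses the two specialize flags
already shown in the hunk above and illustrates why the equality test had
to become a bitwise one: a mask carrying extra bits never compared equal
to a single flag, while the contradictory WIRE_ORIG|VPORT_ORIG
combination is now rejected explicitly.

#include <stdbool.h>
#include <stdint.h>
#include <rte_flow.h>

/* Sketch: validity of a specialize hint mask after the fix. */
static bool
specialize_hint_valid(uint32_t specialize)
{
	const uint32_t both = RTE_FLOW_TABLE_SPECIALIZE_TRANSFER_WIRE_ORIG |
			      RTE_FLOW_TABLE_SPECIALIZE_TRANSFER_VPORT_ORIG;

	/*
	 * Old code: "specialize == WIRE_ORIG" matched only the exact
	 * single-flag value.  New code: "specialize & WIRE_ORIG" still
	 * matches when more hint bits are set in the mask, and the only
	 * invalid case is requesting both origins at once.
	 */
	return (specialize & both) != both;
}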

From patchwork Wed Feb 28 13:33:11 2024
X-Patchwork-Submitter: Gregory Etelson
X-Patchwork-Id: 137429
X-Patchwork-Delegate: rasland@nvidia.com
From: Gregory Etelson
To: dev@dpdk.org
Cc: Dariusz Sosnowski, Viacheslav Ovsiienko, Ori Kam, Suanming Mou, Matan Azrad
Subject: [PATCH v3 3/4] net/mlx5: move multi-pattern actions management to table level
Date: Wed, 28 Feb 2024 15:33:11 +0200
Message-ID: <20240228133312.474474-4-getelson@nvidia.com>
In-Reply-To: <20240228133312.474474-1-getelson@nvidia.com>
References: <20240202115611.288892-2-getelson@nvidia.com>
 <20240228133312.474474-1-getelson@nvidia.com>

The structures and management code related to multi-pattern actions have
been moved to the table level. This refactoring is required for the
upcoming table resize feature.

Signed-off-by: Gregory Etelson
Acked-by: Dariusz Sosnowski
---
 drivers/net/mlx5/mlx5_flow.h    |  73 ++++++++++-
 drivers/net/mlx5/mlx5_flow_hw.c | 226 +++++++++++++++-----------------
 2 files changed, 174 insertions(+), 125 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h index a4d0ff7b13..9cc237c542 100644 --- a/drivers/net/mlx5/mlx5_flow.h +++ b/drivers/net/mlx5/mlx5_flow.h @@ -1410,7 +1410,6 @@ struct mlx5_hw_encap_decap_action { /* Is header_reformat action shared across flows in table. */ uint32_t shared:1; uint32_t multi_pattern:1; - volatile uint32_t *multi_pattern_refcnt; size_t data_size; /* Action metadata size. */ uint8_t data[]; /* Action data. */ }; @@ -1433,7 +1432,6 @@ struct mlx5_hw_modify_header_action { /* Is MODIFY_HEADER action shared across flows in table. */ uint32_t shared:1; uint32_t multi_pattern:1; - volatile uint32_t *multi_pattern_refcnt; /* Amount of modification commands stored in the precompiled buffer. */ uint32_t mhdr_cmds_num; /* Precompiled modification commands. */ @@ -1487,6 +1485,76 @@ struct mlx5_flow_group { #define MLX5_HW_TBL_MAX_ITEM_TEMPLATE 2 #define MLX5_HW_TBL_MAX_ACTION_TEMPLATE 32 +#define MLX5_MULTIPATTERN_ENCAP_NUM 5 +#define MLX5_MAX_TABLE_RESIZE_NUM 64 + +struct mlx5_multi_pattern_segment { + uint32_t capacity; + uint32_t head_index; + struct mlx5dr_action *mhdr_action; + struct mlx5dr_action *reformat_action[MLX5_MULTIPATTERN_ENCAP_NUM]; +}; + +struct mlx5_tbl_multi_pattern_ctx { + struct { + uint32_t elements_num; + struct mlx5dr_action_reformat_header reformat_hdr[MLX5_HW_TBL_MAX_ACTION_TEMPLATE]; + /** + * insert_header structure is larger than reformat_header. + * Enclosing these structures with union will cause a gap between + * reformat_hdr array elements. + * mlx5dr_action_create_reformat() expects adjacent array elements.
+ */ + struct mlx5dr_action_insert_header insert_hdr[MLX5_HW_TBL_MAX_ACTION_TEMPLATE]; + } reformat[MLX5_MULTIPATTERN_ENCAP_NUM]; + + struct { + uint32_t elements_num; + struct mlx5dr_action_mh_pattern pattern[MLX5_HW_TBL_MAX_ACTION_TEMPLATE]; + } mh; + struct mlx5_multi_pattern_segment segments[MLX5_MAX_TABLE_RESIZE_NUM]; +}; + +static __rte_always_inline void +mlx5_multi_pattern_activate(struct mlx5_tbl_multi_pattern_ctx *mpctx) +{ + mpctx->segments[0].head_index = 1; +} + +static __rte_always_inline bool +mlx5_is_multi_pattern_active(const struct mlx5_tbl_multi_pattern_ctx *mpctx) +{ + return mpctx->segments[0].head_index == 1; +} + +static __rte_always_inline struct mlx5_multi_pattern_segment * +mlx5_multi_pattern_segment_get_next(struct mlx5_tbl_multi_pattern_ctx *mpctx) +{ + int i; + + for (i = 0; i < MLX5_MAX_TABLE_RESIZE_NUM; i++) { + if (!mpctx->segments[i].capacity) + return &mpctx->segments[i]; + } + return NULL; +} + +static __rte_always_inline struct mlx5_multi_pattern_segment * +mlx5_multi_pattern_segment_find(struct mlx5_tbl_multi_pattern_ctx *mpctx, + uint32_t flow_resource_ix) +{ + int i; + + for (i = 0; i < MLX5_MAX_TABLE_RESIZE_NUM; i++) { + uint32_t limit = mpctx->segments[i].head_index + + mpctx->segments[i].capacity; + + if (flow_resource_ix < limit) + return &mpctx->segments[i]; + } + return NULL; +} + struct mlx5_flow_template_table_cfg { struct rte_flow_template_table_attr attr; /* Table attributes passed through flow API. */ bool external; /* True if created by flow API, false if table is internal to PMD. */ @@ -1507,6 +1575,7 @@ struct rte_flow_template_table { uint8_t nb_item_templates; /* Item template number. */ uint8_t nb_action_templates; /* Action template number. */ uint32_t refcnt; /* Table reference counter. */ + struct mlx5_tbl_multi_pattern_ctx mpctx; }; #endif diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c index 5938d8b90c..05442f0bd3 100644 --- a/drivers/net/mlx5/mlx5_flow_hw.c +++ b/drivers/net/mlx5/mlx5_flow_hw.c @@ -78,41 +78,14 @@ struct mlx5_indlst_legacy { #define MLX5_CONST_ENCAP_ITEM(encap_type, ptr) \ (((const struct encap_type *)(ptr))->definition) -struct mlx5_multi_pattern_ctx { - union { - struct mlx5dr_action_reformat_header reformat_hdr; - struct mlx5dr_action_mh_pattern mh_pattern; - }; - union { - /* action template auxiliary structures for object destruction */ - struct mlx5_hw_encap_decap_action *encap; - struct mlx5_hw_modify_header_action *mhdr; - }; - /* multi pattern action */ - struct mlx5dr_rule_action *rule_action; -}; - -#define MLX5_MULTIPATTERN_ENCAP_NUM 4 - -struct mlx5_tbl_multi_pattern_ctx { - struct { - uint32_t elements_num; - struct mlx5_multi_pattern_ctx ctx[MLX5_HW_TBL_MAX_ACTION_TEMPLATE]; - } reformat[MLX5_MULTIPATTERN_ENCAP_NUM]; - - struct { - uint32_t elements_num; - struct mlx5_multi_pattern_ctx ctx[MLX5_HW_TBL_MAX_ACTION_TEMPLATE]; - } mh; -}; - -#define MLX5_EMPTY_MULTI_PATTERN_CTX {{{0,}},} - static int mlx5_tbl_multi_pattern_process(struct rte_eth_dev *dev, struct rte_flow_template_table *tbl, - struct mlx5_tbl_multi_pattern_ctx *mpat, + struct mlx5_multi_pattern_segment *segment, + uint32_t bulk_size, struct rte_flow_error *error); +static void +mlx5_destroy_multi_pattern_segment(struct mlx5_multi_pattern_segment *segment); static __rte_always_inline int mlx5_multi_pattern_reformat_to_index(enum mlx5dr_action_type type) @@ -577,28 +550,14 @@ flow_hw_ct_compile(struct rte_eth_dev *dev, static void flow_hw_template_destroy_reformat_action(struct mlx5_hw_encap_decap_action 
*encap_decap) { - if (encap_decap->multi_pattern) { - uint32_t refcnt = __atomic_sub_fetch(encap_decap->multi_pattern_refcnt, - 1, __ATOMIC_RELAXED); - if (refcnt) - return; - mlx5_free((void *)(uintptr_t)encap_decap->multi_pattern_refcnt); - } - if (encap_decap->action) + if (encap_decap->action && !encap_decap->multi_pattern) mlx5dr_action_destroy(encap_decap->action); } static void flow_hw_template_destroy_mhdr_action(struct mlx5_hw_modify_header_action *mhdr) { - if (mhdr->multi_pattern) { - uint32_t refcnt = __atomic_sub_fetch(mhdr->multi_pattern_refcnt, - 1, __ATOMIC_RELAXED); - if (refcnt) - return; - mlx5_free((void *)(uintptr_t)mhdr->multi_pattern_refcnt); - } - if (mhdr->action) + if (mhdr->action && !mhdr->multi_pattern) mlx5dr_action_destroy(mhdr->action); } @@ -1924,21 +1883,22 @@ mlx5_tbl_translate_reformat(struct mlx5_priv *priv, acts->encap_decap->shared = true; } else { uint32_t ix; - typeof(mp_ctx->reformat[0]) *reformat_ctx = mp_ctx->reformat + - mp_reformat_ix; + typeof(mp_ctx->reformat[0]) *reformat = mp_ctx->reformat + + mp_reformat_ix; - ix = reformat_ctx->elements_num++; - reformat_ctx->ctx[ix].reformat_hdr = hdr; - reformat_ctx->ctx[ix].rule_action = &acts->rule_acts[at->reformat_off]; - reformat_ctx->ctx[ix].encap = acts->encap_decap; + ix = reformat->elements_num++; + reformat->reformat_hdr[ix] = hdr; acts->rule_acts[at->reformat_off].reformat.hdr_idx = ix; acts->encap_decap_pos = at->reformat_off; + acts->encap_decap->multi_pattern = 1; acts->encap_decap->data_size = data_size; + acts->encap_decap->action_type = refmt_type; ret = __flow_hw_act_data_encap_append (priv, acts, (at->actions + reformat_src)->type, reformat_src, at->reformat_off, data_size); if (ret) return -rte_errno; + mlx5_multi_pattern_activate(mp_ctx); } return 0; } @@ -1987,12 +1947,11 @@ mlx5_tbl_translate_modify_header(struct rte_eth_dev *dev, } else { typeof(mp_ctx->mh) *mh = &mp_ctx->mh; uint32_t idx = mh->elements_num; - struct mlx5_multi_pattern_ctx *mh_ctx = mh->ctx + mh->elements_num++; - mh_ctx->mh_pattern = pattern; - mh_ctx->mhdr = acts->mhdr; - mh_ctx->rule_action = &acts->rule_acts[mhdr_ix]; + mh->pattern[mh->elements_num++] = pattern; + acts->mhdr->multi_pattern = 1; acts->rule_acts[mhdr_ix].modify_header.pattern_idx = idx; + mlx5_multi_pattern_activate(mp_ctx); } return 0; } @@ -2552,16 +2511,17 @@ flow_hw_actions_translate(struct rte_eth_dev *dev, { int ret; uint32_t i; - struct mlx5_tbl_multi_pattern_ctx mpat = MLX5_EMPTY_MULTI_PATTERN_CTX; for (i = 0; i < tbl->nb_action_templates; i++) { if (__flow_hw_actions_translate(dev, &tbl->cfg, &tbl->ats[i].acts, tbl->ats[i].action_template, - &mpat, error)) + &tbl->mpctx, error)) goto err; } - ret = mlx5_tbl_multi_pattern_process(dev, tbl, &mpat, error); + ret = mlx5_tbl_multi_pattern_process(dev, tbl, &tbl->mpctx.segments[0], + rte_log2_u32(tbl->cfg.attr.nb_flows), + error); if (ret) goto err; return 0; @@ -2944,6 +2904,7 @@ flow_hw_actions_construct(struct rte_eth_dev *dev, int ret; uint32_t age_idx = 0; struct mlx5_aso_mtr *aso_mtr; + struct mlx5_multi_pattern_segment *mp_segment; rte_memcpy(rule_acts, hw_acts->rule_acts, sizeof(*rule_acts) * at->dr_actions_num); attr.group = table->grp->group_id; @@ -3074,6 +3035,11 @@ flow_hw_actions_construct(struct rte_eth_dev *dev, MLX5_ASSERT(ipv6_push->size == act_data->ipv6_ext.len); break; case RTE_FLOW_ACTION_TYPE_MODIFY_FIELD: + mp_segment = mlx5_multi_pattern_segment_find + (&table->mpctx, job->flow->res_idx); + if (!mp_segment || !mp_segment->mhdr_action) + return -1; + 
rule_acts[hw_acts->mhdr->pos].action = mp_segment->mhdr_action; if (action->type == RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_VID) ret = flow_hw_set_vlan_vid_construct(dev, job, act_data, @@ -3225,9 +3191,17 @@ flow_hw_actions_construct(struct rte_eth_dev *dev, age_idx); } if (hw_acts->encap_decap && !hw_acts->encap_decap->shared) { - rule_acts[hw_acts->encap_decap_pos].reformat.offset = - job->flow->res_idx - 1; - rule_acts[hw_acts->encap_decap_pos].reformat.data = buf; + int ix = mlx5_multi_pattern_reformat_to_index(hw_acts->encap_decap->action_type); + struct mlx5dr_rule_action *ra = &rule_acts[hw_acts->encap_decap_pos]; + + if (ix < 0) + return -1; + mp_segment = mlx5_multi_pattern_segment_find(&table->mpctx, job->flow->res_idx); + if (!mp_segment || !mp_segment->reformat_action[ix]) + return -1; + ra->action = mp_segment->reformat_action[ix]; + ra->reformat.offset = job->flow->res_idx - 1; + ra->reformat.data = buf; } if (hw_acts->push_remove && !hw_acts->push_remove->shared) { rule_acts[hw_acts->push_remove_pos].ipv6_ext.offset = @@ -4133,86 +4107,65 @@ flow_hw_q_flow_flush(struct rte_eth_dev *dev, static int mlx5_tbl_multi_pattern_process(struct rte_eth_dev *dev, struct rte_flow_template_table *tbl, - struct mlx5_tbl_multi_pattern_ctx *mpat, + struct mlx5_multi_pattern_segment *segment, + uint32_t bulk_size, struct rte_flow_error *error) { + int ret = 0; uint32_t i; struct mlx5_priv *priv = dev->data->dev_private; + struct mlx5_tbl_multi_pattern_ctx *mpctx = &tbl->mpctx; const struct rte_flow_template_table_attr *table_attr = &tbl->cfg.attr; const struct rte_flow_attr *attr = &table_attr->flow_attr; enum mlx5dr_table_type type = get_mlx5dr_table_type(attr); uint32_t flags = mlx5_hw_act_flag[!!attr->group][type]; - struct mlx5dr_action *dr_action; - uint32_t bulk_size = rte_log2_u32(table_attr->nb_flows); + struct mlx5dr_action *dr_action = NULL; for (i = 0; i < MLX5_MULTIPATTERN_ENCAP_NUM; i++) { - uint32_t j; - uint32_t *reformat_refcnt; - typeof(mpat->reformat[0]) *reformat = mpat->reformat + i; - struct mlx5dr_action_reformat_header hdr[MLX5_HW_TBL_MAX_ACTION_TEMPLATE]; + typeof(mpctx->reformat[0]) *reformat = mpctx->reformat + i; enum mlx5dr_action_type reformat_type = mlx5_multi_pattern_reformat_index_to_type(i); if (!reformat->elements_num) continue; - for (j = 0; j < reformat->elements_num; j++) - hdr[j] = reformat->ctx[j].reformat_hdr; - reformat_refcnt = mlx5_malloc(MLX5_MEM_ZERO, sizeof(uint32_t), 0, - rte_socket_id()); - if (!reformat_refcnt) - return rte_flow_error_set(error, ENOMEM, - RTE_FLOW_ERROR_TYPE_UNSPECIFIED, - NULL, "failed to allocate multi-pattern encap counter"); - *reformat_refcnt = reformat->elements_num; - dr_action = mlx5dr_action_create_reformat - (priv->dr_ctx, reformat_type, reformat->elements_num, hdr, - bulk_size, flags); + dr_action = reformat_type == MLX5DR_ACTION_TYP_INSERT_HEADER ? 
+ mlx5dr_action_create_insert_header + (priv->dr_ctx, reformat->elements_num, + reformat->insert_hdr, bulk_size, flags) : + mlx5dr_action_create_reformat + (priv->dr_ctx, reformat_type, reformat->elements_num, + reformat->reformat_hdr, bulk_size, flags); if (!dr_action) { - mlx5_free(reformat_refcnt); - return rte_flow_error_set(error, rte_errno, - RTE_FLOW_ERROR_TYPE_UNSPECIFIED, - NULL, - "failed to create multi-pattern encap action"); - } - for (j = 0; j < reformat->elements_num; j++) { - reformat->ctx[j].rule_action->action = dr_action; - reformat->ctx[j].encap->action = dr_action; - reformat->ctx[j].encap->multi_pattern = 1; - reformat->ctx[j].encap->multi_pattern_refcnt = reformat_refcnt; + ret = rte_flow_error_set(error, rte_errno, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, + "failed to create multi-pattern encap action"); + goto error; } + segment->reformat_action[i] = dr_action; } - if (mpat->mh.elements_num) { - typeof(mpat->mh) *mh = &mpat->mh; - struct mlx5dr_action_mh_pattern pattern[MLX5_HW_TBL_MAX_ACTION_TEMPLATE]; - uint32_t *mh_refcnt = mlx5_malloc(MLX5_MEM_ZERO, sizeof(uint32_t), - 0, rte_socket_id()); - - if (!mh_refcnt) - return rte_flow_error_set(error, ENOMEM, - RTE_FLOW_ERROR_TYPE_UNSPECIFIED, - NULL, "failed to allocate modify header counter"); - *mh_refcnt = mpat->mh.elements_num; - for (i = 0; i < mpat->mh.elements_num; i++) - pattern[i] = mh->ctx[i].mh_pattern; + if (mpctx->mh.elements_num) { + typeof(mpctx->mh) *mh = &mpctx->mh; dr_action = mlx5dr_action_create_modify_header - (priv->dr_ctx, mpat->mh.elements_num, pattern, + (priv->dr_ctx, mpctx->mh.elements_num, mh->pattern, bulk_size, flags); if (!dr_action) { - mlx5_free(mh_refcnt); - return rte_flow_error_set(error, rte_errno, + ret = rte_flow_error_set(error, rte_errno, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, - NULL, - "failed to create multi-pattern header modify action"); - } - for (i = 0; i < mpat->mh.elements_num; i++) { - mh->ctx[i].rule_action->action = dr_action; - mh->ctx[i].mhdr->action = dr_action; - mh->ctx[i].mhdr->multi_pattern = 1; - mh->ctx[i].mhdr->multi_pattern_refcnt = mh_refcnt; + NULL, "failed to create multi-pattern header modify action"); + goto error; } + segment->mhdr_action = dr_action; + } + if (dr_action) { + segment->capacity = RTE_BIT32(bulk_size); + if (segment != &mpctx->segments[MLX5_MAX_TABLE_RESIZE_NUM - 1]) + segment[1].head_index = segment->head_index + segment->capacity; } - return 0; +error: + mlx5_destroy_multi_pattern_segment(segment); + return ret; } static int @@ -4225,7 +4178,6 @@ mlx5_hw_build_template_table(struct rte_eth_dev *dev, { int ret; uint8_t i; - struct mlx5_tbl_multi_pattern_ctx mpat = MLX5_EMPTY_MULTI_PATTERN_CTX; for (i = 0; i < nb_action_templates; i++) { uint32_t refcnt = __atomic_add_fetch(&action_templates[i]->refcnt, 1, @@ -4246,16 +4198,21 @@ mlx5_hw_build_template_table(struct rte_eth_dev *dev, ret = __flow_hw_actions_translate(dev, &tbl->cfg, &tbl->ats[i].acts, action_templates[i], - &mpat, error); + &tbl->mpctx, error); if (ret) { i++; goto at_error; } } tbl->nb_action_templates = nb_action_templates; - ret = mlx5_tbl_multi_pattern_process(dev, tbl, &mpat, error); - if (ret) - goto at_error; + if (mlx5_is_multi_pattern_active(&tbl->mpctx)) { + ret = mlx5_tbl_multi_pattern_process(dev, tbl, + &tbl->mpctx.segments[0], + rte_log2_u32(tbl->cfg.attr.nb_flows), + error); + if (ret) + goto at_error; + } return 0; at_error: @@ -4624,6 +4581,28 @@ flow_hw_template_table_create(struct rte_eth_dev *dev, action_templates, nb_action_templates, error); } +static 
void +mlx5_destroy_multi_pattern_segment(struct mlx5_multi_pattern_segment *segment) +{ + int i; + + if (segment->mhdr_action) + mlx5dr_action_destroy(segment->mhdr_action); + for (i = 0; i < MLX5_MULTIPATTERN_ENCAP_NUM; i++) { + if (segment->reformat_action[i]) + mlx5dr_action_destroy(segment->reformat_action[i]); + } + segment->capacity = 0; +} + +static void +flow_hw_destroy_table_multi_pattern_ctx(struct rte_flow_template_table *table) +{ + int sx; + + for (sx = 0; sx < MLX5_MAX_TABLE_RESIZE_NUM; sx++) + mlx5_destroy_multi_pattern_segment(table->mpctx.segments + sx); +} /** * Destroy flow table. * @@ -4669,6 +4648,7 @@ flow_hw_table_destroy(struct rte_eth_dev *dev, __atomic_fetch_sub(&table->ats[i].action_template->refcnt, 1, __ATOMIC_RELAXED); } + flow_hw_destroy_table_multi_pattern_ctx(table); mlx5dr_matcher_destroy(table->matcher); mlx5_hlist_unregister(priv->sh->groups, &table->grp->entry); mlx5_ipool_destroy(table->resource);
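Before moving on to the resize patch itself, here is a compact sketch of
the segment bookkeeping this refactor introduces. It is illustrative only
(standalone stand-in types, not the driver structures): every
multi-pattern segment owns the bulk-allocated DR actions for a contiguous
range of flow resource indexes, and a flow's index selects its segment
the same way mlx5_multi_pattern_segment_find() does above, by comparing
against head_index + capacity.

#include <stdint.h>
#include <stddef.h>

#define SEGMENTS_NUM 64	/* mirrors MLX5_MAX_TABLE_RESIZE_NUM */

struct segment {
	uint32_t capacity;	/* bulk size of this segment, power of two */
	uint32_t head_index;	/* first flow resource index it serves */
};

/* Pick the segment whose index range covers flow_resource_ix. */
static struct segment *
segment_find(struct segment *segments, uint32_t flow_resource_ix)
{
	for (int i = 0; i < SEGMENTS_NUM; i++) {
		uint32_t limit = segments[i].head_index + segments[i].capacity;

		if (flow_resource_ix < limit)
			return &segments[i];
	}
	return NULL;	/* index beyond the last allocated segment */
}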

From patchwork Wed Feb 28 13:33:12 2024
X-Patchwork-Submitter: Gregory Etelson
X-Patchwork-Id: 137430
X-Patchwork-Delegate: rasland@nvidia.com
From: Gregory Etelson
To: dev@dpdk.org
Cc: Dariusz Sosnowski, Viacheslav Ovsiienko, Ori Kam, Suanming Mou, Matan Azrad
Subject: [PATCH v3 4/4] net/mlx5: add support for flow table resizing
Date: Wed, 28 Feb 2024 15:33:12 +0200
Message-ID: <20240228133312.474474-5-getelson@nvidia.com>
In-Reply-To: <20240228133312.474474-1-getelson@nvidia.com>
References: <20240202115611.288892-2-getelson@nvidia.com>
 <20240228133312.474474-1-getelson@nvidia.com>

Support the template table resize API in the PMD. The patch allows
increasing the capacity of an existing template table.

Signed-off-by: Gregory Etelson
Acked-by: Dariusz Sosnowski
---
 drivers/net/mlx5/mlx5.h         |   5 +
 drivers/net/mlx5/mlx5_flow.c    |  51 +++
 drivers/net/mlx5/mlx5_flow.h    |  84 +++--
 drivers/net/mlx5/mlx5_flow_hw.c | 530 +++++++++++++++++++++++++++-----
 4 files changed, 564 insertions(+), 106 deletions(-)

diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h index 99850a58af..bb1853e797 100644 --- a/drivers/net/mlx5/mlx5.h +++ b/drivers/net/mlx5/mlx5.h @@ -380,6 +380,9 @@ enum mlx5_hw_job_type { MLX5_HW_Q_JOB_TYPE_UPDATE, /* Flow update job type. */ MLX5_HW_Q_JOB_TYPE_QUERY, /* Flow query job type. */ MLX5_HW_Q_JOB_TYPE_UPDATE_QUERY, /* Flow update and query job type. */ + MLX5_HW_Q_JOB_TYPE_RSZTBL_FLOW_CREATE, /* Non-optimized flow create job type. */ + MLX5_HW_Q_JOB_TYPE_RSZTBL_FLOW_DESTROY, /* Non-optimized destroy create job type. */ + MLX5_HW_Q_JOB_TYPE_RSZTBL_FLOW_MOVE, /* Move flow after table resize. */ }; enum mlx5_hw_indirect_type { @@ -422,6 +425,8 @@ struct mlx5_hw_q { struct mlx5_hw_q_job **job; /* LIFO header. */ struct rte_ring *indir_cq; /* Indirect action SW completion queue. */ struct rte_ring *indir_iq; /* Indirect action SW in progress queue.
*/ + struct rte_ring *flow_transfer_pending; + struct rte_ring *flow_transfer_completed; } __rte_cache_aligned; diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c index 3e179110a0..477b13e04d 100644 --- a/drivers/net/mlx5/mlx5_flow.c +++ b/drivers/net/mlx5/mlx5_flow.c @@ -1095,6 +1095,20 @@ mlx5_flow_calc_encap_hash(struct rte_eth_dev *dev, uint8_t *hash, struct rte_flow_error *error); +static int +mlx5_template_table_resize(struct rte_eth_dev *dev, + struct rte_flow_template_table *table, + uint32_t nb_rules, struct rte_flow_error *error); +static int +mlx5_flow_async_update_resized(struct rte_eth_dev *dev, uint32_t queue, + const struct rte_flow_op_attr *attr, + struct rte_flow *rule, void *user_data, + struct rte_flow_error *error); +static int +mlx5_table_resize_complete(struct rte_eth_dev *dev, + struct rte_flow_template_table *table, + struct rte_flow_error *error); + static const struct rte_flow_ops mlx5_flow_ops = { .validate = mlx5_flow_validate, .create = mlx5_flow_create, @@ -1133,6 +1147,9 @@ static const struct rte_flow_ops mlx5_flow_ops = { mlx5_flow_action_list_handle_query_update, .flow_calc_table_hash = mlx5_flow_calc_table_hash, .flow_calc_encap_hash = mlx5_flow_calc_encap_hash, + .flow_template_table_resize = mlx5_template_table_resize, + .flow_update_resized = mlx5_flow_async_update_resized, + .flow_template_table_resize_complete = mlx5_table_resize_complete, }; /* Tunnel information. */ @@ -10548,6 +10565,40 @@ mlx5_flow_calc_encap_hash(struct rte_eth_dev *dev, return fops->flow_calc_encap_hash(dev, pattern, dest_field, hash, error); } +static int +mlx5_template_table_resize(struct rte_eth_dev *dev, + struct rte_flow_template_table *table, + uint32_t nb_rules, struct rte_flow_error *error) +{ + const struct mlx5_flow_driver_ops *fops; + + MLX5_DRV_FOPS_OR_ERR(dev, fops, table_resize, ENOTSUP); + return fops->table_resize(dev, table, nb_rules, error); +} + +static int +mlx5_table_resize_complete(struct rte_eth_dev *dev, + struct rte_flow_template_table *table, + struct rte_flow_error *error) +{ + const struct mlx5_flow_driver_ops *fops; + + MLX5_DRV_FOPS_OR_ERR(dev, fops, table_resize_complete, ENOTSUP); + return fops->table_resize_complete(dev, table, error); +} + +static int +mlx5_flow_async_update_resized(struct rte_eth_dev *dev, uint32_t queue, + const struct rte_flow_op_attr *op_attr, + struct rte_flow *rule, void *user_data, + struct rte_flow_error *error) +{ + const struct mlx5_flow_driver_ops *fops; + + MLX5_DRV_FOPS_OR_ERR(dev, fops, flow_update_resized, ENOTSUP); + return fops->flow_update_resized(dev, queue, op_attr, rule, user_data, error); +} + /** * Destroy all indirect actions (shared RSS). * diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h index 9cc237c542..6c2944c21a 100644 --- a/drivers/net/mlx5/mlx5_flow.h +++ b/drivers/net/mlx5/mlx5_flow.h @@ -1217,6 +1217,7 @@ struct rte_flow { uint32_t tunnel:1; uint32_t meter:24; /**< Holds flow meter id. */ uint32_t indirect_type:2; /**< Indirect action type. */ + uint32_t matcher_selector:1; /**< Matcher index in resizable table. */ uint32_t rix_mreg_copy; /**< Index to metadata register copy table resource. */ uint32_t counter; /**< Holds flow counter. */ @@ -1262,6 +1263,7 @@ struct rte_flow_hw { }; struct rte_flow_template_table *table; /* The table flow allcated from. 
*/ uint8_t mt_idx; + uint8_t matcher_selector:1; uint32_t age_idx; cnt_id_t cnt_id; uint32_t mtr_id; @@ -1489,6 +1491,11 @@ struct mlx5_flow_group { #define MLX5_MAX_TABLE_RESIZE_NUM 64 struct mlx5_multi_pattern_segment { + /* + * Modify Header Argument Objects number allocated for action in that + * segment. + * Capacity is always power of 2. + */ uint32_t capacity; uint32_t head_index; struct mlx5dr_action *mhdr_action; @@ -1527,43 +1534,22 @@ mlx5_is_multi_pattern_active(const struct mlx5_tbl_multi_pattern_ctx *mpctx) return mpctx->segments[0].head_index == 1; } -static __rte_always_inline struct mlx5_multi_pattern_segment * -mlx5_multi_pattern_segment_get_next(struct mlx5_tbl_multi_pattern_ctx *mpctx) -{ - int i; - - for (i = 0; i < MLX5_MAX_TABLE_RESIZE_NUM; i++) { - if (!mpctx->segments[i].capacity) - return &mpctx->segments[i]; - } - return NULL; -} - -static __rte_always_inline struct mlx5_multi_pattern_segment * -mlx5_multi_pattern_segment_find(struct mlx5_tbl_multi_pattern_ctx *mpctx, - uint32_t flow_resource_ix) -{ - int i; - - for (i = 0; i < MLX5_MAX_TABLE_RESIZE_NUM; i++) { - uint32_t limit = mpctx->segments[i].head_index + - mpctx->segments[i].capacity; - - if (flow_resource_ix < limit) - return &mpctx->segments[i]; - } - return NULL; -} - struct mlx5_flow_template_table_cfg { struct rte_flow_template_table_attr attr; /* Table attributes passed through flow API. */ bool external; /* True if created by flow API, false if table is internal to PMD. */ }; +struct mlx5_matcher_info { + struct mlx5dr_matcher *matcher; /* Template matcher. */ + uint32_t refcnt; +}; + struct rte_flow_template_table { LIST_ENTRY(rte_flow_template_table) next; struct mlx5_flow_group *grp; /* The group rte_flow_template_table uses. */ - struct mlx5dr_matcher *matcher; /* Template matcher. */ + struct mlx5_matcher_info matcher_info[2]; + uint32_t matcher_selector; + rte_rwlock_t matcher_replace_rwlk; /* RW lock for resizable tables */ /* Item templates bind to the table. */ struct rte_flow_pattern_template *its[MLX5_HW_TBL_MAX_ITEM_TEMPLATE]; /* Action templates bind to the table. */ @@ -1576,8 +1562,34 @@ struct rte_flow_template_table { uint8_t nb_action_templates; /* Action template number. */ uint32_t refcnt; /* Table reference counter. 
*/ struct mlx5_tbl_multi_pattern_ctx mpctx; + struct mlx5dr_matcher_attr matcher_attr; }; +static __rte_always_inline struct mlx5dr_matcher * +mlx5_table_matcher(const struct rte_flow_template_table *table) +{ + return table->matcher_info[table->matcher_selector].matcher; +} + +static __rte_always_inline struct mlx5_multi_pattern_segment * +mlx5_multi_pattern_segment_find(struct rte_flow_template_table *table, + uint32_t flow_resource_ix) +{ + int i; + struct mlx5_tbl_multi_pattern_ctx *mpctx = &table->mpctx; + + if (likely(!rte_flow_template_table_resizable(0, &table->cfg.attr))) + return &mpctx->segments[0]; + for (i = 0; i < MLX5_MAX_TABLE_RESIZE_NUM; i++) { + uint32_t limit = mpctx->segments[i].head_index + + mpctx->segments[i].capacity; + + if (flow_resource_ix < limit) + return &mpctx->segments[i]; + } + return NULL; +} + #endif /* @@ -2274,6 +2286,17 @@ typedef int enum rte_flow_encap_hash_field dest_field, uint8_t *hash, struct rte_flow_error *error); +typedef int (*mlx5_table_resize_t)(struct rte_eth_dev *dev, + struct rte_flow_template_table *table, + uint32_t nb_rules, struct rte_flow_error *error); +typedef int (*mlx5_flow_update_resized_t) + (struct rte_eth_dev *dev, uint32_t queue, + const struct rte_flow_op_attr *attr, + struct rte_flow *rule, void *user_data, + struct rte_flow_error *error); +typedef int (*table_resize_complete_t)(struct rte_eth_dev *dev, + struct rte_flow_template_table *table, + struct rte_flow_error *error); struct mlx5_flow_driver_ops { mlx5_flow_validate_t validate; @@ -2348,6 +2371,9 @@ struct mlx5_flow_driver_ops { async_action_list_handle_query_update; mlx5_flow_calc_table_hash_t flow_calc_table_hash; mlx5_flow_calc_encap_hash_t flow_calc_encap_hash; + mlx5_table_resize_t table_resize; + mlx5_flow_update_resized_t flow_update_resized; + table_resize_complete_t table_resize_complete; }; /* mlx5_flow.c */ diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c index 05442f0bd3..51b37753d6 100644 --- a/drivers/net/mlx5/mlx5_flow_hw.c +++ b/drivers/net/mlx5/mlx5_flow_hw.c @@ -4,6 +4,7 @@ #include #include +#include #include @@ -2904,7 +2905,7 @@ flow_hw_actions_construct(struct rte_eth_dev *dev, int ret; uint32_t age_idx = 0; struct mlx5_aso_mtr *aso_mtr; - struct mlx5_multi_pattern_segment *mp_segment; + struct mlx5_multi_pattern_segment *mp_segment = NULL; rte_memcpy(rule_acts, hw_acts->rule_acts, sizeof(*rule_acts) * at->dr_actions_num); attr.group = table->grp->group_id; @@ -2918,17 +2919,20 @@ flow_hw_actions_construct(struct rte_eth_dev *dev, } else { attr.ingress = 1; } - if (hw_acts->mhdr && hw_acts->mhdr->mhdr_cmds_num > 0) { + if (hw_acts->mhdr && hw_acts->mhdr->mhdr_cmds_num > 0 && !hw_acts->mhdr->shared) { uint16_t pos = hw_acts->mhdr->pos; - if (!hw_acts->mhdr->shared) { - rule_acts[pos].modify_header.offset = - job->flow->res_idx - 1; - rule_acts[pos].modify_header.data = - (uint8_t *)job->mhdr_cmd; - rte_memcpy(job->mhdr_cmd, hw_acts->mhdr->mhdr_cmds, - sizeof(*job->mhdr_cmd) * hw_acts->mhdr->mhdr_cmds_num); - } + mp_segment = mlx5_multi_pattern_segment_find(table, job->flow->res_idx); + if (!mp_segment || !mp_segment->mhdr_action) + return -1; + rule_acts[pos].action = mp_segment->mhdr_action; + /* offset is relative to DR action */ + rule_acts[pos].modify_header.offset = + job->flow->res_idx - mp_segment->head_index; + rule_acts[pos].modify_header.data = + (uint8_t *)job->mhdr_cmd; + rte_memcpy(job->mhdr_cmd, hw_acts->mhdr->mhdr_cmds, + sizeof(*job->mhdr_cmd) * hw_acts->mhdr->mhdr_cmds_num); } 
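The relocated mlx5_multi_pattern_segment_find() above, and the segment-relative offsets it enables in flow_hw_actions_construct(), are easier to follow with a concrete index layout in mind. The following self-contained sketch is illustrative only (the example_* names are hypothetical and not part of the patch); it shows how a flow resource index is resolved to its owning segment and turned into an offset relative to that segment's head_index:

#include <stdint.h>
#include <stddef.h>

/* Hypothetical stand-in for the mlx5_multi_pattern_segment fields used here. */
struct example_segment {
	uint32_t head_index; /* First resource index served by this segment. */
	uint32_t capacity;   /* Power-of-2 entry count, 0 means unused. */
};

/*
 * Example layout after one resize:
 *   segment[0]: head_index = 1,  capacity = 16  -> resource indexes  1..16
 *   segment[1]: head_index = 17, capacity = 32  -> resource indexes 17..48
 */
static int
example_segment_offset(const struct example_segment *seg, size_t nb_seg,
		       uint32_t resource_ix, uint32_t *rel_offset)
{
	size_t i;

	for (i = 0; i < nb_seg && seg[i].capacity; i++) {
		if (resource_ix < seg[i].head_index + seg[i].capacity) {
			/* DR modify-header/reformat offsets are taken
			 * relative to the owning segment's base index. */
			*rel_offset = resource_ix - seg[i].head_index;
			return 0;
		}
	}
	return -1; /* No segment owns this index. */
}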
LIST_FOREACH(act_data, &hw_acts->act_list, next) { uint32_t jump_group; @@ -3035,11 +3039,6 @@ flow_hw_actions_construct(struct rte_eth_dev *dev, MLX5_ASSERT(ipv6_push->size == act_data->ipv6_ext.len); break; case RTE_FLOW_ACTION_TYPE_MODIFY_FIELD: - mp_segment = mlx5_multi_pattern_segment_find - (&table->mpctx, job->flow->res_idx); - if (!mp_segment || !mp_segment->mhdr_action) - return -1; - rule_acts[hw_acts->mhdr->pos].action = mp_segment->mhdr_action; if (action->type == RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_VID) ret = flow_hw_set_vlan_vid_construct(dev, job, act_data, @@ -3196,11 +3195,13 @@ flow_hw_actions_construct(struct rte_eth_dev *dev, if (ix < 0) return -1; - mp_segment = mlx5_multi_pattern_segment_find(&table->mpctx, job->flow->res_idx); + if (!mp_segment) + mp_segment = mlx5_multi_pattern_segment_find(table, job->flow->res_idx); if (!mp_segment || !mp_segment->reformat_action[ix]) return -1; ra->action = mp_segment->reformat_action[ix]; - ra->reformat.offset = job->flow->res_idx - 1; + /* reformat offset is relative to selected DR action */ + ra->reformat.offset = job->flow->res_idx - mp_segment->head_index; ra->reformat.data = buf; } if (hw_acts->push_remove && !hw_acts->push_remove->shared) { @@ -3372,10 +3373,26 @@ flow_hw_async_flow_create(struct rte_eth_dev *dev, pattern_template_index, job); if (!rule_items) goto error; - ret = mlx5dr_rule_create(table->matcher, - pattern_template_index, rule_items, - action_template_index, rule_acts, - &rule_attr, (struct mlx5dr_rule *)flow->rule); + if (likely(!rte_flow_template_table_resizable(dev->data->port_id, &table->cfg.attr))) { + ret = mlx5dr_rule_create(table->matcher_info[0].matcher, + pattern_template_index, rule_items, + action_template_index, rule_acts, + &rule_attr, + (struct mlx5dr_rule *)flow->rule); + } else { + uint32_t selector; + + job->type = MLX5_HW_Q_JOB_TYPE_RSZTBL_FLOW_CREATE; + rte_rwlock_read_lock(&table->matcher_replace_rwlk); + selector = table->matcher_selector; + ret = mlx5dr_rule_create(table->matcher_info[selector].matcher, + pattern_template_index, rule_items, + action_template_index, rule_acts, + &rule_attr, + (struct mlx5dr_rule *)flow->rule); + rte_rwlock_read_unlock(&table->matcher_replace_rwlk); + flow->matcher_selector = selector; + } if (likely(!ret)) return (struct rte_flow *)flow; error: @@ -3492,9 +3509,23 @@ flow_hw_async_flow_create_by_index(struct rte_eth_dev *dev, rte_errno = EINVAL; goto error; } - ret = mlx5dr_rule_create(table->matcher, - 0, items, action_template_index, rule_acts, - &rule_attr, (struct mlx5dr_rule *)flow->rule); + if (likely(!rte_flow_template_table_resizable(dev->data->port_id, &table->cfg.attr))) { + ret = mlx5dr_rule_create(table->matcher_info[0].matcher, + 0, items, action_template_index, + rule_acts, &rule_attr, + (struct mlx5dr_rule *)flow->rule); + } else { + uint32_t selector; + + job->type = MLX5_HW_Q_JOB_TYPE_RSZTBL_FLOW_CREATE; + rte_rwlock_read_lock(&table->matcher_replace_rwlk); + selector = table->matcher_selector; + ret = mlx5dr_rule_create(table->matcher_info[selector].matcher, + 0, items, action_template_index, + rule_acts, &rule_attr, + (struct mlx5dr_rule *)flow->rule); + rte_rwlock_read_unlock(&table->matcher_replace_rwlk); + } if (likely(!ret)) return (struct rte_flow *)flow; error: @@ -3674,7 +3705,8 @@ flow_hw_async_flow_destroy(struct rte_eth_dev *dev, return rte_flow_error_set(error, ENOMEM, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, "fail to destroy rte flow: flow queue full"); - job->type = MLX5_HW_Q_JOB_TYPE_DESTROY; + job->type = 
!rte_flow_template_table_resizable(dev->data->port_id, &fh->table->cfg.attr) ? + MLX5_HW_Q_JOB_TYPE_DESTROY : MLX5_HW_Q_JOB_TYPE_RSZTBL_FLOW_DESTROY; job->user_data = user_data; job->flow = fh; rule_attr.user_data = job; @@ -3786,6 +3818,26 @@ flow_hw_pull_legacy_indirect_comp(struct rte_eth_dev *dev, struct mlx5_hw_q_job } } +static __rte_always_inline int +mlx5_hw_pull_flow_transfer_comp(struct rte_eth_dev *dev, + uint32_t queue, struct rte_flow_op_result res[], + uint16_t n_res) +{ + uint32_t size, i; + struct mlx5_hw_q_job *job = NULL; + struct mlx5_priv *priv = dev->data->dev_private; + struct rte_ring *ring = priv->hw_q[queue].flow_transfer_completed; + + size = RTE_MIN(rte_ring_count(ring), n_res); + for (i = 0; i < size; i++) { + res[i].status = RTE_FLOW_OP_SUCCESS; + rte_ring_dequeue(ring, (void **)&job); + res[i].user_data = job->user_data; + flow_hw_job_put(priv, job, queue); + } + return (int)size; +} + static inline int __flow_hw_pull_indir_action_comp(struct rte_eth_dev *dev, uint32_t queue, @@ -3834,6 +3886,80 @@ __flow_hw_pull_indir_action_comp(struct rte_eth_dev *dev, return ret_comp; } +static __rte_always_inline void +hw_cmpl_flow_update_or_destroy(struct rte_eth_dev *dev, + struct mlx5_hw_q_job *job, + uint32_t queue, struct rte_flow_error *error) +{ + struct mlx5_priv *priv = dev->data->dev_private; + struct mlx5_aso_mtr_pool *pool = priv->hws_mpool; + struct rte_flow_hw *flow = job->flow; + struct rte_flow_template_table *table = flow->table; + /* Release the original resource index in case of update. */ + uint32_t res_idx = flow->res_idx; + + if (flow->fate_type == MLX5_FLOW_FATE_JUMP) + flow_hw_jump_release(dev, flow->jump); + else if (flow->fate_type == MLX5_FLOW_FATE_QUEUE) + mlx5_hrxq_obj_release(dev, flow->hrxq); + if (mlx5_hws_cnt_id_valid(flow->cnt_id)) + flow_hw_age_count_release(priv, queue, + flow, error); + if (flow->mtr_id) { + mlx5_ipool_free(pool->idx_pool, flow->mtr_id); + flow->mtr_id = 0; + } + if (job->type != MLX5_HW_Q_JOB_TYPE_UPDATE) { + if (table) { + mlx5_ipool_free(table->resource, res_idx); + mlx5_ipool_free(table->flow, flow->idx); + } + } else { + rte_memcpy(flow, job->upd_flow, + offsetof(struct rte_flow_hw, rule)); + mlx5_ipool_free(table->resource, res_idx); + } +} + +static __rte_always_inline void +hw_cmpl_resizable_tbl(struct rte_eth_dev *dev, + struct mlx5_hw_q_job *job, + uint32_t queue, enum rte_flow_op_status status, + struct rte_flow_error *error) +{ + struct rte_flow_hw *flow = job->flow; + struct rte_flow_template_table *table = flow->table; + uint32_t selector = flow->matcher_selector; + uint32_t other_selector = (selector + 1) & 1; + + switch (job->type) { + case MLX5_HW_Q_JOB_TYPE_RSZTBL_FLOW_CREATE: + rte_atomic_fetch_add_explicit + (&table->matcher_info[selector].refcnt, 1, + rte_memory_order_relaxed); + break; + case MLX5_HW_Q_JOB_TYPE_RSZTBL_FLOW_DESTROY: + rte_atomic_fetch_sub_explicit + (&table->matcher_info[selector].refcnt, 1, + rte_memory_order_relaxed); + hw_cmpl_flow_update_or_destroy(dev, job, queue, error); + break; + case MLX5_HW_Q_JOB_TYPE_RSZTBL_FLOW_MOVE: + if (status == RTE_FLOW_OP_SUCCESS) { + rte_atomic_fetch_sub_explicit + (&table->matcher_info[selector].refcnt, 1, + rte_memory_order_relaxed); + rte_atomic_fetch_add_explicit + (&table->matcher_info[other_selector].refcnt, 1, + rte_memory_order_relaxed); + flow->matcher_selector = other_selector; + } + break; + default: + break; + } +} + /** * Pull the enqueued flows. 
* @@ -3862,9 +3988,7 @@ flow_hw_pull(struct rte_eth_dev *dev, struct rte_flow_error *error) { struct mlx5_priv *priv = dev->data->dev_private; - struct mlx5_aso_mtr_pool *pool = priv->hws_mpool; struct mlx5_hw_q_job *job; - uint32_t res_idx; int ret, i; /* 1. Pull the flow completion. */ @@ -3875,31 +3999,20 @@ flow_hw_pull(struct rte_eth_dev *dev, "fail to query flow queue"); for (i = 0; i < ret; i++) { job = (struct mlx5_hw_q_job *)res[i].user_data; - /* Release the original resource index in case of update. */ - res_idx = job->flow->res_idx; /* Restore user data. */ res[i].user_data = job->user_data; - if (job->type == MLX5_HW_Q_JOB_TYPE_DESTROY || - job->type == MLX5_HW_Q_JOB_TYPE_UPDATE) { - if (job->flow->fate_type == MLX5_FLOW_FATE_JUMP) - flow_hw_jump_release(dev, job->flow->jump); - else if (job->flow->fate_type == MLX5_FLOW_FATE_QUEUE) - mlx5_hrxq_obj_release(dev, job->flow->hrxq); - if (mlx5_hws_cnt_id_valid(job->flow->cnt_id)) - flow_hw_age_count_release(priv, queue, - job->flow, error); - if (job->flow->mtr_id) { - mlx5_ipool_free(pool->idx_pool, job->flow->mtr_id); - job->flow->mtr_id = 0; - } - if (job->type == MLX5_HW_Q_JOB_TYPE_DESTROY) { - mlx5_ipool_free(job->flow->table->resource, res_idx); - mlx5_ipool_free(job->flow->table->flow, job->flow->idx); - } else { - rte_memcpy(job->flow, job->upd_flow, - offsetof(struct rte_flow_hw, rule)); - mlx5_ipool_free(job->flow->table->resource, res_idx); - } + switch (job->type) { + case MLX5_HW_Q_JOB_TYPE_DESTROY: + case MLX5_HW_Q_JOB_TYPE_UPDATE: + hw_cmpl_flow_update_or_destroy(dev, job, queue, error); + break; + case MLX5_HW_Q_JOB_TYPE_RSZTBL_FLOW_CREATE: + case MLX5_HW_Q_JOB_TYPE_RSZTBL_FLOW_MOVE: + case MLX5_HW_Q_JOB_TYPE_RSZTBL_FLOW_DESTROY: + hw_cmpl_resizable_tbl(dev, job, queue, res[i].status, error); + break; + default: + break; } flow_hw_job_put(priv, job, queue); } @@ -3907,24 +4020,36 @@ flow_hw_pull(struct rte_eth_dev *dev, if (ret < n_res) ret += __flow_hw_pull_indir_action_comp(dev, queue, &res[ret], n_res - ret); + if (ret < n_res) + ret += mlx5_hw_pull_flow_transfer_comp(dev, queue, &res[ret], + n_res - ret); + return ret; } +static uint32_t +mlx5_hw_push_queue(struct rte_ring *pending_q, struct rte_ring *cmpl_q) +{ + void *job = NULL; + uint32_t i, size = rte_ring_count(pending_q); + + for (i = 0; i < size; i++) { + rte_ring_dequeue(pending_q, &job); + rte_ring_enqueue(cmpl_q, job); + } + return size; +} + static inline uint32_t __flow_hw_push_action(struct rte_eth_dev *dev, uint32_t queue) { struct mlx5_priv *priv = dev->data->dev_private; - struct rte_ring *iq = priv->hw_q[queue].indir_iq; - struct rte_ring *cq = priv->hw_q[queue].indir_cq; - void *job = NULL; - uint32_t ret, i; + struct mlx5_hw_q *hw_q = &priv->hw_q[queue]; - ret = rte_ring_count(iq); - for (i = 0; i < ret; i++) { - rte_ring_dequeue(iq, &job); - rte_ring_enqueue(cq, job); - } + mlx5_hw_push_queue(hw_q->indir_iq, hw_q->indir_cq); + mlx5_hw_push_queue(hw_q->flow_transfer_pending, + hw_q->flow_transfer_completed); if (!priv->shared_host) { if (priv->hws_ctpool) mlx5_aso_push_wqe(priv->sh, @@ -4333,6 +4458,8 @@ flow_hw_table_create(struct rte_eth_dev *dev, grp = container_of(ge, struct mlx5_flow_group, entry); tbl->grp = grp; /* Prepare matcher information. 
*/ + matcher_attr.resizable = !!rte_flow_template_table_resizable + (dev->data->port_id, &table_cfg->attr); matcher_attr.optimize_flow_src = MLX5DR_MATCHER_FLOW_SRC_ANY; matcher_attr.priority = attr->flow_attr.priority; matcher_attr.optimize_using_rule_idx = true; @@ -4351,7 +4478,7 @@ flow_hw_table_create(struct rte_eth_dev *dev, RTE_FLOW_TABLE_SPECIALIZE_TRANSFER_VPORT_ORIG; if ((attr->specialize & val) == val) { - DRV_LOG(INFO, "Invalid hint value %x", + DRV_LOG(ERR, "Invalid hint value %x", attr->specialize); rte_errno = EINVAL; goto it_error; @@ -4395,10 +4522,11 @@ flow_hw_table_create(struct rte_eth_dev *dev, i = nb_item_templates; goto it_error; } - tbl->matcher = mlx5dr_matcher_create + tbl->matcher_info[0].matcher = mlx5dr_matcher_create (tbl->grp->tbl, mt, nb_item_templates, at, nb_action_templates, &matcher_attr); - if (!tbl->matcher) + if (!tbl->matcher_info[0].matcher) goto at_error; + tbl->matcher_attr = matcher_attr; tbl->type = attr->flow_attr.transfer ? MLX5DR_TABLE_TYPE_FDB : (attr->flow_attr.egress ? MLX5DR_TABLE_TYPE_NIC_TX : MLX5DR_TABLE_TYPE_NIC_RX); @@ -4406,6 +4534,7 @@ flow_hw_table_create(struct rte_eth_dev *dev, LIST_INSERT_HEAD(&priv->flow_hw_tbl, tbl, next); else LIST_INSERT_HEAD(&priv->flow_hw_tbl_ongo, tbl, next); + rte_rwlock_init(&tbl->matcher_replace_rwlk); return tbl; at_error: for (i = 0; i < nb_action_templates; i++) { @@ -4577,6 +4706,13 @@ flow_hw_template_table_create(struct rte_eth_dev *dev, if (flow_hw_translate_group(dev, &cfg, group, &cfg.attr.flow_attr.group, error)) return NULL; + if (!cfg.attr.flow_attr.group && + rte_flow_template_table_resizable(dev->data->port_id, attr)) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, + "table cannot be resized: invalid group"); + return NULL; + } return flow_hw_table_create(dev, &cfg, item_templates, nb_item_templates, action_templates, nb_action_templates, error); } @@ -4649,7 +4785,10 @@ flow_hw_table_destroy(struct rte_eth_dev *dev, 1, __ATOMIC_RELAXED); } flow_hw_destroy_table_multi_pattern_ctx(table); - mlx5dr_matcher_destroy(table->matcher); + if (table->matcher_info[0].matcher) + mlx5dr_matcher_destroy(table->matcher_info[0].matcher); + if (table->matcher_info[1].matcher) + mlx5dr_matcher_destroy(table->matcher_info[1].matcher); mlx5_hlist_unregister(priv->sh->groups, &table->grp->entry); mlx5_ipool_destroy(table->resource); mlx5_ipool_destroy(table->flow); @@ -9643,6 +9782,16 @@ action_template_drop_init(struct rte_eth_dev *dev, return 0; } +static __rte_always_inline struct rte_ring * +mlx5_hwq_ring_create(uint16_t port_id, uint32_t queue, uint32_t size, const char *str) +{ + char mz_name[RTE_MEMZONE_NAMESIZE]; + + snprintf(mz_name, sizeof(mz_name), "port_%u_%s_%u", port_id, str, queue); + return rte_ring_create(mz_name, size, SOCKET_ID_ANY, + RING_F_SP_ENQ | RING_F_SC_DEQ | RING_F_EXACT_SZ); +} + /** * Configure port HWS resources. 
* @@ -9770,7 +9919,6 @@ flow_hw_configure(struct rte_eth_dev *dev, goto err; } for (i = 0; i < nb_q_updated; i++) { - char mz_name[RTE_MEMZONE_NAMESIZE]; uint8_t *encap = NULL, *push = NULL; struct mlx5_modification_cmd *mhdr_cmd = NULL; struct rte_flow_item *items = NULL; @@ -9804,22 +9952,23 @@ flow_hw_configure(struct rte_eth_dev *dev, job[j].upd_flow = &upd_flow[j]; priv->hw_q[i].job[j] = &job[j]; } - snprintf(mz_name, sizeof(mz_name), "port_%u_indir_act_cq_%u", - dev->data->port_id, i); - priv->hw_q[i].indir_cq = rte_ring_create(mz_name, - _queue_attr[i]->size, SOCKET_ID_ANY, - RING_F_SP_ENQ | RING_F_SC_DEQ | - RING_F_EXACT_SZ); + /* Notice ring name length is limited. */ + priv->hw_q[i].indir_cq = mlx5_hwq_ring_create + (dev->data->port_id, i, _queue_attr[i]->size, "indir_act_cq"); if (!priv->hw_q[i].indir_cq) goto err; - snprintf(mz_name, sizeof(mz_name), "port_%u_indir_act_iq_%u", - dev->data->port_id, i); - priv->hw_q[i].indir_iq = rte_ring_create(mz_name, - _queue_attr[i]->size, SOCKET_ID_ANY, - RING_F_SP_ENQ | RING_F_SC_DEQ | - RING_F_EXACT_SZ); + priv->hw_q[i].indir_iq = mlx5_hwq_ring_create + (dev->data->port_id, i, _queue_attr[i]->size, "indir_act_iq"); if (!priv->hw_q[i].indir_iq) goto err; + priv->hw_q[i].flow_transfer_pending = mlx5_hwq_ring_create + (dev->data->port_id, i, _queue_attr[i]->size, "tx_pending"); + if (!priv->hw_q[i].flow_transfer_pending) + goto err; + priv->hw_q[i].flow_transfer_completed = mlx5_hwq_ring_create + (dev->data->port_id, i, _queue_attr[i]->size, "tx_done"); + if (!priv->hw_q[i].flow_transfer_completed) + goto err; } dr_ctx_attr.pd = priv->sh->cdev->pd; dr_ctx_attr.queues = nb_q_updated; @@ -10040,6 +10189,8 @@ flow_hw_configure(struct rte_eth_dev *dev, for (i = 0; i < nb_q_updated; i++) { rte_ring_free(priv->hw_q[i].indir_iq); rte_ring_free(priv->hw_q[i].indir_cq); + rte_ring_free(priv->hw_q[i].flow_transfer_pending); + rte_ring_free(priv->hw_q[i].flow_transfer_completed); } mlx5_free(priv->hw_q); priv->hw_q = NULL; @@ -10140,6 +10291,8 @@ flow_hw_resource_release(struct rte_eth_dev *dev) for (i = 0; i < priv->nb_queue; i++) { rte_ring_free(priv->hw_q[i].indir_iq); rte_ring_free(priv->hw_q[i].indir_cq); + rte_ring_free(priv->hw_q[i].flow_transfer_pending); + rte_ring_free(priv->hw_q[i].flow_transfer_completed); } mlx5_free(priv->hw_q); priv->hw_q = NULL; @@ -11970,7 +12123,7 @@ flow_hw_calc_table_hash(struct rte_eth_dev *dev, items = flow_hw_get_rule_items(dev, table, pattern, pattern_template_index, &job); - res = mlx5dr_rule_hash_calculate(table->matcher, items, + res = mlx5dr_rule_hash_calculate(mlx5_table_matcher(table), items, pattern_template_index, MLX5DR_RULE_HASH_CALC_MODE_RAW, hash); @@ -12047,6 +12200,226 @@ flow_hw_calc_encap_hash(struct rte_eth_dev *dev, return 0; } +static int +flow_hw_table_resize_multi_pattern_actions(struct rte_eth_dev *dev, + struct rte_flow_template_table *table, + uint32_t nb_flows, + struct rte_flow_error *error) +{ + struct mlx5_multi_pattern_segment *segment = table->mpctx.segments; + uint32_t bulk_size; + int i, ret; + + /** + * Segment always allocates Modify Header Argument Objects number in + * powers of 2. + * On resize, PMD adds minimal required argument objects number. + * For example, if table size was 10, it allocated 16 argument objects. + * Resize to 15 will not add new objects. 
+ */ + for (i = 1; + i < MLX5_MAX_TABLE_RESIZE_NUM && segment->capacity; + i++, segment++) { + /* keep the devtools/checkpatches.sh happy */ + } + if (i == MLX5_MAX_TABLE_RESIZE_NUM) + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + table, "too many resizes"); + if (segment->head_index - 1 >= nb_flows) + return 0; + bulk_size = rte_align32pow2(nb_flows - segment->head_index + 1); + ret = mlx5_tbl_multi_pattern_process(dev, table, segment, + rte_log2_u32(bulk_size), + error); + if (ret) + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + table, "too many resizes"); + return i; +} + +static int +flow_hw_table_resize(struct rte_eth_dev *dev, + struct rte_flow_template_table *table, + uint32_t nb_flows, + struct rte_flow_error *error) +{ + struct mlx5dr_action_template *at[MLX5_HW_TBL_MAX_ACTION_TEMPLATE]; + struct mlx5dr_match_template *mt[MLX5_HW_TBL_MAX_ITEM_TEMPLATE]; + struct mlx5dr_matcher_attr matcher_attr = table->matcher_attr; + struct mlx5_multi_pattern_segment *segment = NULL; + struct mlx5dr_matcher *matcher = NULL; + uint32_t i, selector = table->matcher_selector; + uint32_t other_selector = (selector + 1) & 1; + int ret; + + if (!rte_flow_template_table_resizable(dev->data->port_id, &table->cfg.attr)) + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + table, "no resizable attribute"); + if (table->matcher_info[other_selector].matcher) + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + table, "last table resize was not completed"); + if (nb_flows <= table->cfg.attr.nb_flows) + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + table, "shrinking table is not supported"); + ret = mlx5_ipool_resize(table->flow, nb_flows); + if (ret) + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + table, "cannot resize flows pool"); + ret = mlx5_ipool_resize(table->resource, nb_flows); + if (ret) + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + table, "cannot resize resources pool"); + if (mlx5_is_multi_pattern_active(&table->mpctx)) { + ret = flow_hw_table_resize_multi_pattern_actions(dev, table, nb_flows, error); + if (ret < 0) + return ret; + if (ret > 0) + segment = table->mpctx.segments + ret; + } + for (i = 0; i < table->nb_item_templates; i++) + mt[i] = table->its[i]->mt; + for (i = 0; i < table->nb_action_templates; i++) + at[i] = table->ats[i].action_template->tmpl; + nb_flows = rte_align32pow2(nb_flows); + matcher_attr.rule.num_log = rte_log2_u32(nb_flows); + matcher = mlx5dr_matcher_create(table->grp->tbl, mt, + table->nb_item_templates, at, + table->nb_action_templates, + &matcher_attr); + if (!matcher) { + ret = rte_flow_error_set(error, rte_errno, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + table, "failed to create new matcher"); + goto error; + } + rte_rwlock_write_lock(&table->matcher_replace_rwlk); + ret = mlx5dr_matcher_resize_set_target + (table->matcher_info[selector].matcher, matcher); + if (ret) { + rte_rwlock_write_unlock(&table->matcher_replace_rwlk); + ret = rte_flow_error_set(error, rte_errno, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + table, "failed to initiate matcher swap"); + goto error; + } + table->cfg.attr.nb_flows = nb_flows; + table->matcher_info[other_selector].matcher = matcher; + table->matcher_selector = other_selector; + rte_atomic_store_explicit(&table->matcher_info[other_selector].refcnt, + 0, rte_memory_order_relaxed); + 
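The power-of-2 bulk allocation described in the comment above can be made concrete with a small arithmetic sketch. The helper below is illustrative only (example_resize_bulk_log is a hypothetical name); it mirrors the computation in flow_hw_table_resize_multi_pattern_actions(): growth happens only when the requested flow count exceeds what the existing segments already cover, and the new bulk is rounded up with rte_align32pow2():

#include <stdint.h>
#include <rte_common.h> /* rte_align32pow2(), rte_log2_u32() */

/*
 * head_index is the first free resource index after the last used segment.
 * E.g. a table created for 10 flows owns 16 argument objects (head_index
 * of the next free segment is 17): resizing to 15 allocates nothing, while
 * resizing to 100 allocates rte_align32pow2(100 - 17 + 1) == 128 objects.
 */
static int
example_resize_bulk_log(uint32_t head_index, uint32_t nb_flows)
{
	uint32_t bulk;

	if (head_index - 1 >= nb_flows)
		return -1; /* Existing segments already cover the request. */
	bulk = rte_align32pow2(nb_flows - head_index + 1);
	return (int)rte_log2_u32(bulk);
}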
rte_rwlock_write_unlock(&table->matcher_replace_rwlk); + return 0; +error: + if (segment) + mlx5_destroy_multi_pattern_segment(segment); + if (matcher) { + ret = mlx5dr_matcher_destroy(matcher); + return rte_flow_error_set(error, rte_errno, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + table, "failed to destroy new matcher"); + } + return ret; +} + +static int +flow_hw_table_resize_complete(__rte_unused struct rte_eth_dev *dev, + struct rte_flow_template_table *table, + struct rte_flow_error *error) +{ + int ret; + uint32_t selector = table->matcher_selector; + uint32_t other_selector = (selector + 1) & 1; + struct mlx5_matcher_info *matcher_info = &table->matcher_info[other_selector]; + uint32_t matcher_refcnt; + + if (!rte_flow_template_table_resizable(dev->data->port_id, &table->cfg.attr)) + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + table, "no resizable attribute"); + matcher_refcnt = rte_atomic_load_explicit(&matcher_info->refcnt, + rte_memory_order_relaxed); + if (!matcher_info->matcher || matcher_refcnt) + return rte_flow_error_set(error, EBUSY, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + table, "cannot complete table resize"); + ret = mlx5dr_matcher_destroy(matcher_info->matcher); + if (ret) + return rte_flow_error_set(error, rte_errno, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + table, "failed to destroy retired matcher"); + matcher_info->matcher = NULL; + return 0; +} + +static int +flow_hw_update_resized(struct rte_eth_dev *dev, uint32_t queue, + const struct rte_flow_op_attr *attr, + struct rte_flow *flow, void *user_data, + struct rte_flow_error *error) +{ + int ret; + struct mlx5_priv *priv = dev->data->dev_private; + struct mlx5_hw_q_job *job; + struct rte_flow_hw *hw_flow = (struct rte_flow_hw *)flow; + struct rte_flow_template_table *table = hw_flow->table; + uint32_t table_selector = table->matcher_selector; + uint32_t rule_selector = hw_flow->matcher_selector; + uint32_t other_selector; + struct mlx5dr_matcher *other_matcher; + struct mlx5dr_rule_attr rule_attr = { + .queue_id = queue, + .burst = attr->postpone, + }; + + /** + * mlx5dr_matcher_resize_rule_move() accepts original table matcher - + * the one that was used BEFORE table resize. + * Since the function is called AFTER table resize, + * `table->matcher_selector` always points to the new matcher and + * `hw_flow->matcher_selector` points to a matcher used to create the flow. + */ + other_selector = rule_selector == table_selector ? + (rule_selector + 1) & 1 : rule_selector; + other_matcher = table->matcher_info[other_selector].matcher; + if (!other_matcher) + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, + "no active table resize"); + job = flow_hw_job_get(priv, queue); + if (!job) + return rte_flow_error_set(error, ENOMEM, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, + "queue is full"); + job->type = MLX5_HW_Q_JOB_TYPE_RSZTBL_FLOW_MOVE; + job->user_data = user_data; + job->flow = hw_flow; + rule_attr.user_data = job; + if (rule_selector == table_selector) { + struct rte_ring *ring = !attr->postpone ? 
+ priv->hw_q[queue].flow_transfer_completed : + priv->hw_q[queue].flow_transfer_pending; + rte_ring_enqueue(ring, job); + return 0; + } + ret = mlx5dr_matcher_resize_rule_move(other_matcher, + (struct mlx5dr_rule *)hw_flow->rule, + &rule_attr); + if (ret) { + flow_hw_job_put(priv, job, queue); + return rte_flow_error_set(error, rte_errno, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, + "flow transfer failed"); + } + return 0; +} + const struct mlx5_flow_driver_ops mlx5_flow_hw_drv_ops = { .info_get = flow_hw_info_get, .configure = flow_hw_configure, @@ -12058,11 +12431,14 @@ const struct mlx5_flow_driver_ops mlx5_flow_hw_drv_ops = { .actions_template_destroy = flow_hw_actions_template_destroy, .template_table_create = flow_hw_template_table_create, .template_table_destroy = flow_hw_table_destroy, + .table_resize = flow_hw_table_resize, .group_set_miss_actions = flow_hw_group_set_miss_actions, .async_flow_create = flow_hw_async_flow_create, .async_flow_create_by_index = flow_hw_async_flow_create_by_index, .async_flow_update = flow_hw_async_flow_update, .async_flow_destroy = flow_hw_async_flow_destroy, + .flow_update_resized = flow_hw_update_resized, + .table_resize_complete = flow_hw_table_resize_complete, .pull = flow_hw_pull, .push = flow_hw_push, .async_action_create = flow_hw_action_handle_create,
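With .table_resize, .flow_update_resized and .table_resize_complete wired into mlx5_flow_hw_drv_ops above, the PMD keeps two matchers per resizable table and records, per rule, which matcher it was created on; the application drives the switch-over through the generic rte_flow API. The sketch below is illustrative only and assumes the template-table resize API this series plugs into (rte_flow_template_table_resize(), rte_flow_async_update_resized(), rte_flow_template_table_resize_complete()); example_resize_table() and its parameters are hypothetical, flows[] is assumed to hold only pre-resize rules, the table is assumed to carry the resizable specialize attribute, and error handling is kept minimal:

#include <rte_common.h>
#include <rte_flow.h>

static int
example_resize_table(uint16_t port_id, uint32_t queue,
		     struct rte_flow_template_table *tbl,
		     struct rte_flow **flows, uint32_t nb_old_flows,
		     uint32_t new_size)
{
	struct rte_flow_op_attr op_attr = { .postpone = 0 };
	struct rte_flow_op_result res[64];
	struct rte_flow_error error;
	uint32_t i, done = 0;
	int ret;

	/* 1. Create the bigger matcher; existing rules keep working. */
	ret = rte_flow_template_table_resize(port_id, tbl, new_size, &error);
	if (ret)
		return ret;
	/* 2. Move every pre-resize rule onto the new matcher. */
	for (i = 0; i < nb_old_flows; i++) {
		ret = rte_flow_async_update_resized(port_id, queue, &op_attr,
						    flows[i], NULL, &error);
		if (ret)
			return ret;
	}
	rte_flow_push(port_id, queue, &error);
	/* 3. Drain one completion per moved rule. */
	while (done < nb_old_flows) {
		ret = rte_flow_pull(port_id, queue, res, RTE_DIM(res), &error);
		if (ret < 0)
			return ret;
		done += (uint32_t)ret;
	}
	/* 4. Release the retired matcher. */
	return rte_flow_template_table_resize_complete(port_id, tbl, &error);
}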