From patchwork Thu Oct 6 15:03:16 2022
X-Patchwork-Submitter: Alex Vesker
X-Patchwork-Id: 117470
X-Patchwork-Delegate: rasland@nvidia.com
From: Alex Vesker
To: Matan Azrad
CC: Erez Shitrit
Subject: [v2 10/19] net/mlx5/hws: Add HWS pool and buddy
Date: Thu, 6 Oct 2022 18:03:16 +0300
Message-ID: <20221006150325.660-11-valex@nvidia.com>
X-Mailer: git-send-email 2.18.1
In-Reply-To: <20221006150325.660-1-valex@nvidia.com>
References: <20220922190345.394-1-valex@nvidia.com>
 <20221006150325.660-1-valex@nvidia.com>
List-Id: DPDK patches and discussions <dev.dpdk.org>

HWS needs to manage different types of device memory in an efficient
and quick way. For this, memory pools are used.

Signed-off-by: Erez Shitrit
Signed-off-by: Alex Vesker
---
 drivers/net/mlx5/hws/mlx5dr_buddy.c | 201 +++++++++
 drivers/net/mlx5/hws/mlx5dr_buddy.h |  22 +
 drivers/net/mlx5/hws/mlx5dr_pool.c  | 672 ++++++++++++++++++++++++++++
 drivers/net/mlx5/hws/mlx5dr_pool.h  | 152 +++++++
 4 files changed, 1047 insertions(+)
 create mode 100644 drivers/net/mlx5/hws/mlx5dr_buddy.c
 create mode 100644 drivers/net/mlx5/hws/mlx5dr_buddy.h
 create mode 100644 drivers/net/mlx5/hws/mlx5dr_pool.c
 create mode 100644 drivers/net/mlx5/hws/mlx5dr_pool.h
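Before the diff itself, a minimal sketch of how a caller might drive this
pool API; the attribute values and the example_* wrapper are illustrative
only, not taken from an existing caller, and error handling is trimmed:

/* Hypothetical lifecycle sketch: mlx5dr_pool_create() selects the
 * internal DB type from attr->flags; chunk alloc/free are then
 * dispatched through the callbacks installed by the matching db_init.
 */
static void example_pool_lifecycle(struct mlx5dr_context *ctx)
{
	struct mlx5dr_pool_attr attr = {
		.pool_type = MLX5DR_POOL_TYPE_STE,
		.table_type = MLX5DR_TABLE_TYPE_FDB,
		.flags = MLX5DR_POOL_FLAGS_FOR_MATCHER_STE_POOL,
		.alloc_log_sz = 14, /* log2 of the growth unit, illustrative */
	};
	struct mlx5dr_pool_chunk chunk = {.order = 5}; /* 2^5 contiguous STEs */
	struct mlx5dr_pool *pool;

	pool = mlx5dr_pool_create(ctx, &attr);
	if (!pool)
		return;

	if (!mlx5dr_pool_chunk_alloc(pool, &chunk)) {
		/* chunk.resource_idx + chunk.offset locate the allocation */
		mlx5dr_pool_chunk_free(pool, &chunk);
	}

	mlx5dr_pool_destroy(pool);
}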
diff --git a/drivers/net/mlx5/hws/mlx5dr_buddy.c b/drivers/net/mlx5/hws/mlx5dr_buddy.c
new file mode 100644
index 0000000000..9dba95f0b1
--- /dev/null
+++ b/drivers/net/mlx5/hws/mlx5dr_buddy.c
@@ -0,0 +1,201 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (c) 2022 NVIDIA Corporation & Affiliates
+ */
+
+#include <rte_bitmap.h>
+#include <rte_malloc.h>
+#include "mlx5dr_internal.h"
+#include "mlx5dr_buddy.h"
+
+static struct rte_bitmap *bitmap_alloc0(int s)
+{
+	struct rte_bitmap *bitmap;
+	uint32_t bmp_size;
+	void *mem;
+
+	bmp_size = rte_bitmap_get_memory_footprint(s);
+	mem = rte_zmalloc("create_bmap", bmp_size, RTE_CACHE_LINE_SIZE);
+	if (!mem) {
+		DR_LOG(ERR, "No mem for bitmap");
+		rte_errno = ENOMEM;
+		return NULL;
+	}
+
+	bitmap = rte_bitmap_init(s, mem, bmp_size);
+	if (!bitmap) {
+		DR_LOG(ERR, "%s Failed to initialize bitmap", __func__);
+		rte_errno = EINVAL;
+		goto err_mem_alloc;
+	}
+
+	return bitmap;
+
+err_mem_alloc:
+	rte_free(mem);
+	return NULL;
+}
+
+static void bitmap_set_bit(struct rte_bitmap *bmp, uint32_t pos)
+{
+	rte_bitmap_set(bmp, pos);
+}
+
+static void bitmap_clear_bit(struct rte_bitmap *bmp, uint32_t pos)
+{
+	rte_bitmap_clear(bmp, pos);
+}
+
+static bool bitmap_test_bit(struct rte_bitmap *bmp, unsigned long n)
+{
+	return !!rte_bitmap_get(bmp, n);
+}
+
+static unsigned long bitmap_ffs(struct rte_bitmap *bmap,
+				unsigned long n, unsigned long m)
+{
+	uint64_t out_slab = 0;
+	uint32_t pos = 0; /* Compilation warn */
+
+	__rte_bitmap_scan_init(bmap);
+	if (!rte_bitmap_scan(bmap, &pos, &out_slab)) {
+		DR_LOG(ERR, "Failed to get slab from bitmap.");
+		return m;
+	}
+	pos = pos + __builtin_ctzll(out_slab);
+
+	if (pos < n) {
+		DR_LOG(ERR, "Unexpected bit (%d < %"PRIx64") from bitmap", pos, n);
+		return m;
+	}
+	return pos;
+}
+
+static unsigned long mlx5dr_buddy_find_first_bit(struct rte_bitmap *addr,
+						 uint32_t size)
+{
+	return bitmap_ffs(addr, 0, size);
+}
+
+static int mlx5dr_buddy_init(struct mlx5dr_buddy_mem *buddy, uint32_t max_order)
+{
+	int i, s;
+
+	buddy->max_order = max_order;
+
+	buddy->bits = simple_calloc(buddy->max_order + 1, sizeof(long *));
+	if (!buddy->bits) {
+		rte_errno = ENOMEM;
+		return -1;
+	}
+
+	buddy->num_free = simple_calloc(buddy->max_order + 1,
+					sizeof(*buddy->num_free));
+	if (!buddy->num_free) {
+		rte_errno = ENOMEM;
+		goto err_out_free_bits;
+	}
+
+	for (i = 0; i <= (int)buddy->max_order; ++i) {
+		s = 1 << (buddy->max_order - i);
+		buddy->bits[i] = bitmap_alloc0(s);
+		if (!buddy->bits[i])
+			goto err_out_free_num_free;
+	}
+
+	bitmap_set_bit(buddy->bits[buddy->max_order], 0);
+
+	buddy->num_free[buddy->max_order] = 1;
+
+	return 0;
+
+err_out_free_num_free:
+	for (i = 0; i <= (int)buddy->max_order; ++i)
+		rte_free(buddy->bits[i]);
+
+	simple_free(buddy->num_free);
+
+err_out_free_bits:
+	simple_free(buddy->bits);
+	return -1;
+}
+
+struct mlx5dr_buddy_mem *mlx5dr_buddy_create(uint32_t max_order)
+{
+	struct mlx5dr_buddy_mem *buddy;
+
+	buddy = simple_calloc(1, sizeof(*buddy));
+	if (!buddy) {
+		rte_errno = ENOMEM;
+		return NULL;
+	}
+
+	if (mlx5dr_buddy_init(buddy, max_order))
+		goto free_buddy;
+
+	return buddy;
+
+free_buddy:
+	simple_free(buddy);
+	return NULL;
+}
+
+void mlx5dr_buddy_cleanup(struct mlx5dr_buddy_mem *buddy)
+{
+	int i;
+
+	for (i = 0; i <= (int)buddy->max_order; ++i) {
+		rte_free(buddy->bits[i]);
+	}
+
+	simple_free(buddy->num_free);
+	simple_free(buddy->bits);
+}
+
+int mlx5dr_buddy_alloc_mem(struct mlx5dr_buddy_mem *buddy, int order)
+{
+	int seg;
+	int o, m;
+
+	for (o = order; o <= (int)buddy->max_order; ++o)
+		if (buddy->num_free[o]) {
+			m = 1 << (buddy->max_order - o);
+			seg = mlx5dr_buddy_find_first_bit(buddy->bits[o], m);
+			if (m <= seg)
+				return -1;
+
+			goto found;
+		}
+
+	return -1;
+
+found:
+	bitmap_clear_bit(buddy->bits[o], seg);
+	--buddy->num_free[o];
+
+	while (o > order) {
+		--o;
+		seg <<= 1;
+		bitmap_set_bit(buddy->bits[o], seg ^ 1);
+		++buddy->num_free[o];
+	}
+
+	seg <<= order;
+
+	return seg;
+}
+
+void mlx5dr_buddy_free_mem(struct mlx5dr_buddy_mem *buddy, uint32_t seg, int order)
+{
+	seg >>= order;
+
+	while (bitmap_test_bit(buddy->bits[order], seg ^ 1)) {
+		bitmap_clear_bit(buddy->bits[order], seg ^ 1);
+		--buddy->num_free[order];
+		seg >>= 1;
+		++order;
+	}
+
+	bitmap_set_bit(buddy->bits[order], seg);
+
+	++buddy->num_free[order];
+}
+
diff --git a/drivers/net/mlx5/hws/mlx5dr_buddy.h b/drivers/net/mlx5/hws/mlx5dr_buddy.h
new file mode 100644
index 0000000000..b9ec446b99
--- /dev/null
+++ b/drivers/net/mlx5/hws/mlx5dr_buddy.h
@@ -0,0 +1,22 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (c) 2022 NVIDIA Corporation & Affiliates
+ */
+
+#ifndef MLX5DR_BUDDY_H_
+#define MLX5DR_BUDDY_H_
+
+struct mlx5dr_buddy_mem {
+	struct rte_bitmap **bits;
+	unsigned int *num_free;
+	uint32_t max_order;
+};
+
+struct mlx5dr_buddy_mem *mlx5dr_buddy_create(uint32_t max_order);
+
+void mlx5dr_buddy_cleanup(struct mlx5dr_buddy_mem *buddy);
+
+int mlx5dr_buddy_alloc_mem(struct mlx5dr_buddy_mem *buddy, int order);
+
+void mlx5dr_buddy_free_mem(struct mlx5dr_buddy_mem *buddy, uint32_t seg, int order);
+
+#endif /* MLX5DR_BUDDY_H_ */
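To make the order/segment arithmetic above concrete, a hedged walk-through
of the buddy API (standalone except for the mlx5dr helpers; the "expect"
values follow from tracing the bitmap logic, error handling trimmed):

/* Worked example: a 2^3-unit space split on demand. Segments returned
 * by mlx5dr_buddy_alloc_mem(order) are multiples of 2^order; freeing
 * re-merges buddy pairs back up toward max_order.
 */
static void example_buddy_trace(void)
{
	struct mlx5dr_buddy_mem *buddy = mlx5dr_buddy_create(3); /* 8 units */
	int a, b, c;

	a = mlx5dr_buddy_alloc_mem(buddy, 2); /* expect 0: units 0..3 */
	b = mlx5dr_buddy_alloc_mem(buddy, 1); /* expect 4: units 4..5 */
	c = mlx5dr_buddy_alloc_mem(buddy, 2); /* expect -1: no order-2 block left */
	(void)c;

	mlx5dr_buddy_free_mem(buddy, a, 2);
	mlx5dr_buddy_free_mem(buddy, b, 1); /* buddies re-merge up to order 3 */

	mlx5dr_buddy_cleanup(buddy);
	simple_free(buddy); /* cleanup() does not free the struct itself */
}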
diff --git a/drivers/net/mlx5/hws/mlx5dr_pool.c b/drivers/net/mlx5/hws/mlx5dr_pool.c
new file mode 100644
index 0000000000..2bfda5b4a5
--- /dev/null
+++ b/drivers/net/mlx5/hws/mlx5dr_pool.c
@@ -0,0 +1,672 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (c) 2022 NVIDIA Corporation & Affiliates
+ */
+
+#include <rte_bitmap.h>
+#include <rte_malloc.h>
+#include "mlx5dr_buddy.h"
+#include "mlx5dr_internal.h"
+
+static void mlx5dr_pool_free_one_resource(struct mlx5dr_pool_resource *resource)
+{
+	mlx5dr_cmd_destroy_obj(resource->devx_obj);
+
+	simple_free(resource);
+}
+
+static void mlx5dr_pool_resource_free(struct mlx5dr_pool *pool,
+				      int resource_idx)
+{
+	mlx5dr_pool_free_one_resource(pool->resource[resource_idx]);
+	pool->resource[resource_idx] = NULL;
+
+	if (pool->tbl_type == MLX5DR_TABLE_TYPE_FDB) {
+		mlx5dr_pool_free_one_resource(pool->mirror_resource[resource_idx]);
+		pool->mirror_resource[resource_idx] = NULL;
+	}
+}
+
+static struct mlx5dr_pool_resource *
+mlx5dr_pool_create_one_resource(struct mlx5dr_pool *pool, uint32_t log_range,
+				uint32_t fw_ft_type)
+{
+	struct mlx5dr_cmd_ste_create_attr ste_attr;
+	struct mlx5dr_cmd_stc_create_attr stc_attr;
+	struct mlx5dr_pool_resource *resource;
+	struct mlx5dr_devx_obj *devx_obj = NULL;
+
+	resource = simple_malloc(sizeof(*resource));
+	if (!resource) {
+		rte_errno = ENOMEM;
+		return NULL;
+	}
+
+	switch (pool->type) {
+	case MLX5DR_POOL_TYPE_STE:
+		ste_attr.log_obj_range = log_range;
+		ste_attr.table_type = fw_ft_type;
+		devx_obj = mlx5dr_cmd_ste_create(pool->ctx->ibv_ctx, &ste_attr);
+		break;
+	case MLX5DR_POOL_TYPE_STC:
+		stc_attr.log_obj_range = log_range;
+		stc_attr.table_type = fw_ft_type;
+		devx_obj = mlx5dr_cmd_stc_create(pool->ctx->ibv_ctx, &stc_attr);
+		break;
+	default:
+		assert(0);
+		break;
+	}
+
+	if (!devx_obj) {
+		DR_LOG(ERR, "Failed to allocate resource objects");
+		goto free_resource;
+	}
+
+	resource->pool = pool;
+	resource->devx_obj = devx_obj;
+	resource->range = 1 << log_range;
+	resource->base_id = devx_obj->id;
+
+	return resource;
+
+free_resource:
+	simple_free(resource);
+	return NULL;
+}
+
+static int
+mlx5dr_pool_resource_alloc(struct mlx5dr_pool *pool, uint32_t log_range, int idx)
+{
+	struct mlx5dr_pool_resource *resource;
+	uint32_t fw_ft_type, opt_log_range;
+
+	fw_ft_type = mlx5dr_table_get_res_fw_ft_type(pool->tbl_type, false);
+	opt_log_range = pool->opt_type == MLX5DR_POOL_OPTIMIZE_ORIG ? 0 : log_range;
+	resource = mlx5dr_pool_create_one_resource(pool, opt_log_range, fw_ft_type);
+	if (!resource) {
+		DR_LOG(ERR, "Failed allocating resource");
+		return rte_errno;
+	}
+	pool->resource[idx] = resource;
+
+	if (pool->tbl_type == MLX5DR_TABLE_TYPE_FDB) {
+		struct mlx5dr_pool_resource *mir_resource;
+
+		fw_ft_type = mlx5dr_table_get_res_fw_ft_type(pool->tbl_type, true);
+		opt_log_range = pool->opt_type == MLX5DR_POOL_OPTIMIZE_MIRROR ? 0 : log_range;
+		mir_resource = mlx5dr_pool_create_one_resource(pool, opt_log_range, fw_ft_type);
+		if (!mir_resource) {
+			DR_LOG(ERR, "Failed allocating mirrored resource");
+			mlx5dr_pool_free_one_resource(resource);
+			pool->resource[idx] = NULL;
+			return rte_errno;
+		}
+		pool->mirror_resource[idx] = mir_resource;
+	}
+
+	return 0;
+}
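A side note on the orig/mirror split above: for FDB pools each index carries
two devx objects, and opt_type collapses whichever side is unused down to a
single object (log_range 0). The hypothetical helper below merely restates
the two ternaries in mlx5dr_pool_resource_alloc() as one function:

/* Illustration only: which log_range each side of an FDB resource gets.
 * The enum values come from mlx5dr_pool.h later in this patch.
 */
static uint32_t example_side_log_range(enum mlx5dr_pool_optimize opt,
				       bool mirror, uint32_t log_range)
{
	if ((mirror && opt == MLX5DR_POOL_OPTIMIZE_MIRROR) ||
	    (!mirror && opt == MLX5DR_POOL_OPTIMIZE_ORIG))
		return 0; /* collapse the unused side to one object */
	return log_range;
}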
+
+static int mlx5dr_pool_bitmap_get_free_slot(struct rte_bitmap *bitmap, uint32_t *iidx)
+{
+	uint64_t slab = 0;
+
+	__rte_bitmap_scan_init(bitmap);
+
+	if (!rte_bitmap_scan(bitmap, iidx, &slab))
+		return ENOMEM;
+
+	*iidx += __builtin_ctzll(slab);
+
+	rte_bitmap_clear(bitmap, *iidx);
+
+	return 0;
+}
+
+static struct rte_bitmap *mlx5dr_pool_create_and_init_bitmap(uint32_t log_range)
+{
+	struct rte_bitmap *cur_bmp;
+	uint32_t bmp_size;
+	void *mem;
+
+	bmp_size = rte_bitmap_get_memory_footprint(1 << log_range);
+	mem = rte_zmalloc("create_stc_bmap", bmp_size, RTE_CACHE_LINE_SIZE);
+	if (!mem) {
+		DR_LOG(ERR, "No mem for bitmap");
+		rte_errno = ENOMEM;
+		return NULL;
+	}
+
+	cur_bmp = rte_bitmap_init_with_all_set(1 << log_range, mem, bmp_size);
+	if (!cur_bmp) {
+		rte_free(mem);
+		DR_LOG(ERR, "Failed to initialize stc bitmap.");
+		rte_errno = ENOMEM;
+		return NULL;
+	}
+
+	return cur_bmp;
+}
+
+static void mlx5dr_pool_buddy_db_put_chunk(struct mlx5dr_pool *pool,
+					   struct mlx5dr_pool_chunk *chunk)
+{
+	struct mlx5dr_buddy_mem *buddy;
+
+	buddy = pool->db.buddy_manager->buddies[chunk->resource_idx];
+	if (!buddy) {
+		assert(false);
+		DR_LOG(ERR, "No such buddy (%d)", chunk->resource_idx);
+		return;
+	}
+
+	mlx5dr_buddy_free_mem(buddy, chunk->offset, chunk->order);
+}
+
+static struct mlx5dr_buddy_mem *
+mlx5dr_pool_buddy_get_next_buddy(struct mlx5dr_pool *pool, int idx,
+				 uint32_t order, bool *is_new_buddy)
+{
+	struct mlx5dr_buddy_mem *buddy;
+	uint32_t new_buddy_size;
+
+	buddy = pool->db.buddy_manager->buddies[idx];
+	if (buddy)
+		return buddy;
+
+	new_buddy_size = RTE_MAX(pool->alloc_log_sz, order);
+	*is_new_buddy = true;
+	buddy = mlx5dr_buddy_create(new_buddy_size);
+	if (!buddy) {
+		DR_LOG(ERR, "Failed to create buddy order: %d index: %d",
+		       new_buddy_size, idx);
+		return NULL;
+	}
+
+	if (mlx5dr_pool_resource_alloc(pool, new_buddy_size, idx) != 0) {
+		DR_LOG(ERR, "Failed to create resource type: %d: size %d index: %d",
+		       pool->type, new_buddy_size, idx);
+		mlx5dr_buddy_cleanup(buddy);
+		return NULL;
+	}
+
+	pool->db.buddy_manager->buddies[idx] = buddy;
+
+	return buddy;
+}
+
+static int mlx5dr_pool_buddy_get_mem_chunk(struct mlx5dr_pool *pool,
+					   int order,
+					   uint32_t *buddy_idx,
+					   int *seg)
+{
+	struct mlx5dr_buddy_mem *buddy;
+	bool new_mem = false;
+	int err = 0;
+	int i;
+
+	*seg = -1;
+
+	/* Find the next free place in the buddy array */
+	while (*seg == -1) {
+		for (i = 0; i < MLX5DR_POOL_RESOURCE_ARR_SZ; i++) {
+			buddy = mlx5dr_pool_buddy_get_next_buddy(pool, i,
+								 order,
+								 &new_mem);
+			if (!buddy) {
+				err = rte_errno;
+				goto out;
+			}
+
+			*seg = mlx5dr_buddy_alloc_mem(buddy, order);
+			if (*seg != -1)
+				goto found;
+
+			if (pool->flags & MLX5DR_POOL_FLAGS_ONE_RESOURCE) {
+				DR_LOG(ERR, "Failed to allocate seg for one-resource pool");
+				err = rte_errno;
+				goto out;
+			}
+
+			if (new_mem) {
+				/* We have a new memory pool, there should be room for us */
+				assert(false);
+				DR_LOG(ERR, "No memory for order: %d with buddy no: %d",
+				       order, i);
+				rte_errno = ENOMEM;
+				err = ENOMEM;
+				goto out;
+			}
+		}
+	}
+
+found:
+	*buddy_idx = i;
+out:
+	return err;
+}
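The scan-and-ctz idiom in mlx5dr_pool_bitmap_get_free_slot() above (also used
by bitmap_ffs() in mlx5dr_buddy.c) is the core free-slot search; isolated
here as a self-contained sketch against the public rte_bitmap API:

#include <errno.h>
#include <rte_bitmap.h>

/* Standalone illustration of the free-slot scan: rte_bitmap_scan()
 * yields a 64-bit slab plus its base position, and __builtin_ctzll()
 * picks the lowest set bit inside that slab.
 */
static int example_find_and_take_slot(struct rte_bitmap *bmp, uint32_t *slot)
{
	uint64_t slab = 0;

	__rte_bitmap_scan_init(bmp); /* restart the scan from position 0 */
	if (!rte_bitmap_scan(bmp, slot, &slab))
		return -ENOMEM; /* no set (i.e. free) bit left */

	*slot += __builtin_ctzll(slab); /* lowest set bit inside the slab */
	rte_bitmap_clear(bmp, *slot);   /* mark the slot as used */
	return 0;
}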
+
+static int mlx5dr_pool_buddy_db_get_chunk(struct mlx5dr_pool *pool,
+					  struct mlx5dr_pool_chunk *chunk)
+{
+	int ret = 0;
+
+	/* Go over the buddies and find the next free slot */
+	ret = mlx5dr_pool_buddy_get_mem_chunk(pool, chunk->order,
+					      &chunk->resource_idx,
+					      &chunk->offset);
+	if (ret)
+		DR_LOG(ERR, "Failed to get free slot for chunk with order: %d",
+		       chunk->order);
+
+	return ret;
+}
+
+static void mlx5dr_pool_buddy_db_uninit(struct mlx5dr_pool *pool)
+{
+	struct mlx5dr_buddy_mem *buddy;
+	int i;
+
+	for (i = 0; i < MLX5DR_POOL_RESOURCE_ARR_SZ; i++) {
+		buddy = pool->db.buddy_manager->buddies[i];
+		if (buddy) {
+			mlx5dr_buddy_cleanup(buddy);
+			simple_free(buddy);
+			pool->db.buddy_manager->buddies[i] = NULL;
+		}
+	}
+
+	simple_free(pool->db.buddy_manager);
+}
+
+static int mlx5dr_pool_buddy_db_init(struct mlx5dr_pool *pool, uint32_t log_range)
+{
+	pool->db.buddy_manager = simple_calloc(1, sizeof(*pool->db.buddy_manager));
+	if (!pool->db.buddy_manager) {
+		DR_LOG(ERR, "No mem for buddy_manager with log_range: %d", log_range);
+		rte_errno = ENOMEM;
+		return rte_errno;
+	}
+
+	if (pool->flags & MLX5DR_POOL_FLAGS_ALLOC_MEM_ON_CREATE) {
+		bool new_buddy;
+
+		if (!mlx5dr_pool_buddy_get_next_buddy(pool, 0, log_range, &new_buddy)) {
+			DR_LOG(ERR, "Failed allocating memory on create log_sz: %d", log_range);
+			simple_free(pool->db.buddy_manager);
+			return rte_errno;
+		}
+	}
+
+	pool->p_db_uninit = &mlx5dr_pool_buddy_db_uninit;
+	pool->p_get_chunk = &mlx5dr_pool_buddy_db_get_chunk;
+	pool->p_put_chunk = &mlx5dr_pool_buddy_db_put_chunk;
+
+	return 0;
+}
+
+static int mlx5dr_pool_create_resource_on_index(struct mlx5dr_pool *pool,
+						uint32_t alloc_size, int idx)
+{
+	if (mlx5dr_pool_resource_alloc(pool, alloc_size, idx) != 0) {
+		DR_LOG(ERR, "Failed to create resource type: %d: size %d index: %d",
+		       pool->type, alloc_size, idx);
+		return rte_errno;
+	}
+
+	return 0;
+}
+
+static struct mlx5dr_pool_elements *
+mlx5dr_pool_element_create_new_elem(struct mlx5dr_pool *pool, uint32_t order, int idx)
+{
+	struct mlx5dr_pool_elements *elem;
+	uint32_t alloc_size;
+
+	alloc_size = pool->alloc_log_sz;
+
+	elem = simple_calloc(1, sizeof(*elem));
+	if (!elem) {
+		DR_LOG(ERR, "Failed to create elem order: %d index: %d",
+		       order, idx);
+		rte_errno = ENOMEM;
+		return NULL;
+	}
+	/* Sharing the same resource also means all elements are of size 1 */
+	if ((pool->flags & MLX5DR_POOL_FLAGS_FIXED_SIZE_OBJECTS) &&
+	    !(pool->flags & MLX5DR_POOL_FLAGS_RESOURCE_PER_CHUNK)) {
+		/* Currently all chunks are of size 1 */
+		elem->bitmap = mlx5dr_pool_create_and_init_bitmap(alloc_size - order);
+		if (!elem->bitmap) {
+			DR_LOG(ERR, "Failed to create bitmap type: %d: size %d index: %d",
+			       pool->type, alloc_size, idx);
+			goto free_elem;
+		}
+	}
+
+	if (mlx5dr_pool_create_resource_on_index(pool, alloc_size, idx)) {
+		DR_LOG(ERR, "Failed to create resource type: %d: size %d index: %d",
+		       pool->type, alloc_size, idx);
+		goto free_db;
+	}
+
+	pool->db.element_manager->elements[idx] = elem;
+
+	return elem;
+
+free_db:
+	rte_free(elem->bitmap);
+free_elem:
+	simple_free(elem);
+	return NULL;
+}
+
+static int mlx5dr_pool_element_find_seg(struct mlx5dr_pool_elements *elem, int *seg)
+{
+	if (mlx5dr_pool_bitmap_get_free_slot(elem->bitmap, (uint32_t *)seg)) {
+		elem->is_full = true;
+		return ENOMEM;
+	}
+	return 0;
+}
+
+static int
+mlx5dr_pool_onesize_element_get_mem_chunk(struct mlx5dr_pool *pool, uint32_t order,
+					  uint32_t *idx, int *seg)
+{
+	struct mlx5dr_pool_elements *elem;
+
+	elem = pool->db.element_manager->elements[0];
+	if (!elem)
+		elem = mlx5dr_pool_element_create_new_elem(pool, order, 0);
+	if (!elem)
+		goto err_no_elem;
+
+	*idx = 0;
+
+	if (mlx5dr_pool_element_find_seg(elem, seg) != 0) {
+		DR_LOG(ERR, "No more resources (last request order: %d)", order);
+		rte_errno = ENOMEM;
+		return ENOMEM;
+	}
+
+	elem->num_of_elements++;
+	return 0;
+
+err_no_elem:
+	DR_LOG(ERR, "Failed to allocate element for order: %d", order);
+	return ENOMEM;
+}
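The one-size path above is what the STC flag combination selects; a hedged
sketch of such a pool (the table type and the example_* wrapper are
placeholders, not taken from the patch):

/* Hypothetical STC pool: FOR_STC_POOL selects the one-size DB (a single
 * devx resource, fixed-size order-0 chunks tracked by one bitmap).
 */
static void example_stc_pool(struct mlx5dr_context *ctx)
{
	struct mlx5dr_pool_attr attr = {
		.pool_type = MLX5DR_POOL_TYPE_STC,
		.table_type = MLX5DR_TABLE_TYPE_NIC_RX, /* placeholder */
		.flags = MLX5DR_POOL_FLAGS_FOR_STC_POOL,
		.alloc_log_sz = MLX5DR_POOL_STC_LOG_SZ, /* one resource of 2^14 STCs */
	};
	struct mlx5dr_pool_chunk chunk = {.order = 0}; /* exactly one STC */
	struct mlx5dr_pool *pool = mlx5dr_pool_create(ctx, &attr);

	if (pool && !mlx5dr_pool_chunk_alloc(pool, &chunk))
		mlx5dr_pool_chunk_free(pool, &chunk); /* sets the bit back */
	if (pool)
		mlx5dr_pool_destroy(pool);
}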
+
+static int
+mlx5dr_pool_general_element_get_mem_chunk(struct mlx5dr_pool *pool, uint32_t order,
+					  uint32_t *idx, int *seg)
+{
+	int ret;
+	int i;
+
+	for (i = 0; i < MLX5DR_POOL_RESOURCE_ARR_SZ; i++) {
+		if (!pool->resource[i]) {
+			ret = mlx5dr_pool_create_resource_on_index(pool, order, i);
+			if (ret)
+				goto err_no_res;
+			*idx = i;
+			*seg = 0; /* One memory slot in that element */
+			return 0;
+		}
+	}
+
+	rte_errno = ENOMEM;
+	DR_LOG(ERR, "No more resources (last request order: %d)", order);
+	return ENOMEM;
+
+err_no_res:
+	DR_LOG(ERR, "Failed to allocate element for order: %d", order);
+	return ENOMEM;
+}
+
+static int mlx5dr_pool_general_element_db_get_chunk(struct mlx5dr_pool *pool,
+						    struct mlx5dr_pool_chunk *chunk)
+{
+	int ret;
+
+	/* Go over all memory elements and find/allocate a free slot */
+	ret = mlx5dr_pool_general_element_get_mem_chunk(pool, chunk->order,
+							&chunk->resource_idx,
+							&chunk->offset);
+	if (ret)
+		DR_LOG(ERR, "Failed to get free slot for chunk with order: %d",
+		       chunk->order);
+
+	return ret;
+}
+
+static void mlx5dr_pool_general_element_db_put_chunk(struct mlx5dr_pool *pool,
+						     struct mlx5dr_pool_chunk *chunk)
+{
+	assert(pool->resource[chunk->resource_idx]);
+
+	if (pool->flags & MLX5DR_POOL_FLAGS_RELEASE_FREE_RESOURCE)
+		mlx5dr_pool_resource_free(pool, chunk->resource_idx);
+}
+
+static void mlx5dr_pool_general_element_db_uninit(struct mlx5dr_pool *pool)
+{
+	(void)pool;
+}
+
+/* This memory management works as follows:
+ * - At start, no memory is allocated at all.
+ * - When a new chunk request arrives:
+ *	a resource is allocated and handed out.
+ * - When that chunk is freed:
+ *	the resource is freed as well.
+ */
+static int mlx5dr_pool_general_element_db_init(struct mlx5dr_pool *pool)
+{
+	pool->db.element_manager = simple_calloc(1, sizeof(*pool->db.element_manager));
+	if (!pool->db.element_manager) {
+		DR_LOG(ERR, "No mem for general element_manager");
+		rte_errno = ENOMEM;
+		return rte_errno;
+	}
+
+	pool->p_db_uninit = &mlx5dr_pool_general_element_db_uninit;
+	pool->p_get_chunk = &mlx5dr_pool_general_element_db_get_chunk;
+	pool->p_put_chunk = &mlx5dr_pool_general_element_db_put_chunk;
+
+	return 0;
+}
+
+static void mlx5dr_onesize_element_db_destroy_element(struct mlx5dr_pool *pool,
+						      struct mlx5dr_pool_elements *elem,
+						      struct mlx5dr_pool_chunk *chunk)
+{
+	assert(pool->resource[chunk->resource_idx]);
+
+	mlx5dr_pool_resource_free(pool, chunk->resource_idx);
+
+	simple_free(elem);
+	pool->db.element_manager->elements[chunk->resource_idx] = NULL;
+}
+
+static void mlx5dr_onesize_element_db_put_chunk(struct mlx5dr_pool *pool,
+						struct mlx5dr_pool_chunk *chunk)
+{
+	struct mlx5dr_pool_elements *elem;
+
+	assert(chunk->resource_idx == 0);
+
+	elem = pool->db.element_manager->elements[chunk->resource_idx];
+	if (!elem) {
+		assert(false);
+		DR_LOG(ERR, "No such element (%d)", chunk->resource_idx);
+		return;
+	}
+
+	rte_bitmap_set(elem->bitmap, chunk->offset);
+	elem->is_full = false;
+	elem->num_of_elements--;
+
+	if (pool->flags & MLX5DR_POOL_FLAGS_RELEASE_FREE_RESOURCE &&
+	    !elem->num_of_elements)
+		mlx5dr_onesize_element_db_destroy_element(pool, elem, chunk);
+}
+
+static int mlx5dr_onesize_element_db_get_chunk(struct mlx5dr_pool *pool,
+					       struct mlx5dr_pool_chunk *chunk)
+{
+	int ret = 0;
+
+	/* Go over all memory elements and find/allocate a free slot */
+	ret = mlx5dr_pool_onesize_element_get_mem_chunk(pool, chunk->order,
+							&chunk->resource_idx,
+							&chunk->offset);
+	if (ret)
+		DR_LOG(ERR, "Failed to get free slot for chunk with order: %d",
+		       chunk->order);
+
+	return ret;
+}
+
+static void mlx5dr_onesize_element_db_uninit(struct mlx5dr_pool *pool)
+{
+	struct mlx5dr_pool_elements *elem;
+	int i;
+
+	for (i = 0; i < MLX5DR_POOL_RESOURCE_ARR_SZ; i++) {
+		elem = pool->db.element_manager->elements[i];
+		if (elem) {
+			if (elem->bitmap)
+				rte_free(elem->bitmap);
+			simple_free(elem);
+			pool->db.element_manager->elements[i] = NULL;
+		}
+	}
+	simple_free(pool->db.element_manager);
+}
+
+/* This memory management works as follows:
+ * - At start, no memory is allocated at all.
+ * - When a new chunk request arrives:
+ *	allocate the first and only memory/resource slot;
+ *	once it is exhausted, return an error.
+ */
+static int mlx5dr_pool_onesize_element_db_init(struct mlx5dr_pool *pool)
+{
+	pool->db.element_manager = simple_calloc(1, sizeof(*pool->db.element_manager));
+	if (!pool->db.element_manager) {
+		DR_LOG(ERR, "No mem for one-size element_manager");
+		rte_errno = ENOMEM;
+		return rte_errno;
+	}
+
+	pool->p_db_uninit = &mlx5dr_onesize_element_db_uninit;
+	pool->p_get_chunk = &mlx5dr_onesize_element_db_get_chunk;
+	pool->p_put_chunk = &mlx5dr_onesize_element_db_put_chunk;
+
+	return 0;
+}
+
+static int mlx5dr_pool_db_init(struct mlx5dr_pool *pool,
+			       enum mlx5dr_db_type db_type)
+{
+	int ret;
+
+	if (db_type == MLX5DR_POOL_DB_TYPE_GENERAL_SIZE)
+		ret = mlx5dr_pool_general_element_db_init(pool);
+	else if (db_type == MLX5DR_POOL_DB_TYPE_ONE_SIZE_RESOURCE)
+		ret = mlx5dr_pool_onesize_element_db_init(pool);
+	else
+		ret = mlx5dr_pool_buddy_db_init(pool, pool->alloc_log_sz);
+
+	if (ret) {
+		DR_LOG(ERR, "Failed to init pool db type: %d (ret: %d)", db_type, ret);
+		return ret;
+	}
+
+	return 0;
+}
+
+static void mlx5dr_pool_db_uninit(struct mlx5dr_pool *pool)
+{
+	pool->p_db_uninit(pool);
+}
+
+int
+mlx5dr_pool_chunk_alloc(struct mlx5dr_pool *pool,
+			struct mlx5dr_pool_chunk *chunk)
+{
+	int ret;
+
+	pthread_spin_lock(&pool->lock);
+	ret = pool->p_get_chunk(pool, chunk);
+	pthread_spin_unlock(&pool->lock);
+
+	return ret;
+}
+
+void mlx5dr_pool_chunk_free(struct mlx5dr_pool *pool,
+			    struct mlx5dr_pool_chunk *chunk)
+{
+	pthread_spin_lock(&pool->lock);
+	pool->p_put_chunk(pool, chunk);
+	pthread_spin_unlock(&pool->lock);
+}
+
+struct mlx5dr_pool *
+mlx5dr_pool_create(struct mlx5dr_context *ctx, struct mlx5dr_pool_attr *pool_attr)
+{
+	enum mlx5dr_db_type res_db_type;
+	struct mlx5dr_pool *pool;
+
+	pool = simple_calloc(1, sizeof(*pool));
+	if (!pool)
+		return NULL;
+
+	pool->ctx = ctx;
+	pool->type = pool_attr->pool_type;
+	pool->alloc_log_sz = pool_attr->alloc_log_sz;
+	pool->flags = pool_attr->flags;
+	pool->tbl_type = pool_attr->table_type;
+	pool->opt_type = pool_attr->opt_type;
+
+	pthread_spin_init(&pool->lock, PTHREAD_PROCESS_PRIVATE);
+
+	/* Support general db */
+	if (pool->flags == (MLX5DR_POOL_FLAGS_RELEASE_FREE_RESOURCE |
+			    MLX5DR_POOL_FLAGS_RESOURCE_PER_CHUNK))
+		res_db_type = MLX5DR_POOL_DB_TYPE_GENERAL_SIZE;
+	else if (pool->flags == (MLX5DR_POOL_FLAGS_ONE_RESOURCE |
+				 MLX5DR_POOL_FLAGS_FIXED_SIZE_OBJECTS))
+		res_db_type = MLX5DR_POOL_DB_TYPE_ONE_SIZE_RESOURCE;
+	else
+		res_db_type = MLX5DR_POOL_DB_TYPE_BUDDY;
+
+	pool->alloc_log_sz = pool_attr->alloc_log_sz;
+
+	if (mlx5dr_pool_db_init(pool, res_db_type))
+		goto free_pool;
+
+	return pool;
+
+free_pool:
+	pthread_spin_destroy(&pool->lock);
+	simple_free(pool);
+	return NULL;
+}
+
+int mlx5dr_pool_destroy(struct mlx5dr_pool *pool)
+{
+	int i;
+
+	for (i = 0; i < MLX5DR_POOL_RESOURCE_ARR_SZ; i++)
+		if (pool->resource[i])
+			mlx5dr_pool_resource_free(pool, i);
+
+	mlx5dr_pool_db_uninit(pool);
+
+	pthread_spin_destroy(&pool->lock);
+	simple_free(pool);
+	return 0;
+}
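mlx5dr_pool_create() above maps flag combinations onto the three DB types;
the combination not yet illustrated falls through to the buddy case. A
hedged sketch using MLX5DR_POOL_FLAGS_FOR_STE_ACTION_POOL (the table type,
sizes, and the example_* wrapper are placeholders):

/* Hypothetical STE action pool: FOR_STE_ACTION_POOL selects
 * MLX5DR_POOL_DB_TYPE_BUDDY, and ALLOC_MEM_ON_CREATE pre-creates the
 * first buddy plus its devx resource at pool creation time.
 */
static void example_ste_action_pool(struct mlx5dr_context *ctx)
{
	struct mlx5dr_pool_attr attr = {
		.pool_type = MLX5DR_POOL_TYPE_STE,
		.table_type = MLX5DR_TABLE_TYPE_NIC_TX, /* placeholder */
		.flags = MLX5DR_POOL_FLAGS_FOR_STE_ACTION_POOL,
		.alloc_log_sz = 12, /* each buddy covers at least 2^12 STEs */
	};
	struct mlx5dr_pool_chunk chunk = {.order = 3}; /* 2^3 STEs */
	struct mlx5dr_pool *pool = mlx5dr_pool_create(ctx, &attr);

	if (pool && !mlx5dr_pool_chunk_alloc(pool, &chunk))
		mlx5dr_pool_chunk_free(pool, &chunk); /* merged back into the buddy */
	if (pool)
		mlx5dr_pool_destroy(pool);
}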
diff --git a/drivers/net/mlx5/hws/mlx5dr_pool.h b/drivers/net/mlx5/hws/mlx5dr_pool.h
new file mode 100644
index 0000000000..cd12c3ab9a
--- /dev/null
+++ b/drivers/net/mlx5/hws/mlx5dr_pool.h
@@ -0,0 +1,152 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (c) 2022 NVIDIA Corporation & Affiliates
+ */
+
+#ifndef MLX5DR_POOL_H_
+#define MLX5DR_POOL_H_
+
+enum mlx5dr_pool_type {
+	MLX5DR_POOL_TYPE_STE,
+	MLX5DR_POOL_TYPE_STC,
+};
+
+#define MLX5DR_POOL_STC_LOG_SZ 14
+
+#define MLX5DR_POOL_RESOURCE_ARR_SZ 100
+
+struct mlx5dr_pool_chunk {
+	uint32_t resource_idx;
+	/* Internal offset, relative to the base index */
+	int offset;
+	int order;
+};
+
+struct mlx5dr_pool_resource {
+	struct mlx5dr_pool *pool;
+	struct mlx5dr_devx_obj *devx_obj;
+	uint32_t base_id;
+	uint32_t range;
+};
+
+enum mlx5dr_pool_flags {
+	/* Only one resource in the pool */
+	MLX5DR_POOL_FLAGS_ONE_RESOURCE = 1 << 0,
+	MLX5DR_POOL_FLAGS_RELEASE_FREE_RESOURCE = 1 << 1,
+	/* Resources are not shared between chunks */
+	MLX5DR_POOL_FLAGS_RESOURCE_PER_CHUNK = 1 << 2,
+	/* All objects are the same size */
+	MLX5DR_POOL_FLAGS_FIXED_SIZE_OBJECTS = 1 << 3,
+	/* Managed by a buddy allocator */
+	MLX5DR_POOL_FLAGS_BUDDY_MANAGED = 1 << 4,
+	/* Allocate pool_type memory on pool creation */
+	MLX5DR_POOL_FLAGS_ALLOC_MEM_ON_CREATE = 1 << 5,
+
+	/* These values should be used by the caller */
+	MLX5DR_POOL_FLAGS_FOR_STC_POOL =
+		MLX5DR_POOL_FLAGS_ONE_RESOURCE |
+		MLX5DR_POOL_FLAGS_FIXED_SIZE_OBJECTS,
+	MLX5DR_POOL_FLAGS_FOR_MATCHER_STE_POOL =
+		MLX5DR_POOL_FLAGS_RELEASE_FREE_RESOURCE |
+		MLX5DR_POOL_FLAGS_RESOURCE_PER_CHUNK,
+	MLX5DR_POOL_FLAGS_FOR_STE_ACTION_POOL =
+		MLX5DR_POOL_FLAGS_ONE_RESOURCE |
+		MLX5DR_POOL_FLAGS_BUDDY_MANAGED |
+		MLX5DR_POOL_FLAGS_ALLOC_MEM_ON_CREATE,
+};
+
+enum mlx5dr_pool_optimize {
+	MLX5DR_POOL_OPTIMIZE_NONE = 0x0,
+	MLX5DR_POOL_OPTIMIZE_ORIG = 0x1,
+	MLX5DR_POOL_OPTIMIZE_MIRROR = 0x2,
+};
+
+struct mlx5dr_pool_attr {
+	enum mlx5dr_pool_type pool_type;
+	enum mlx5dr_table_type table_type;
+	enum mlx5dr_pool_flags flags;
+	enum mlx5dr_pool_optimize opt_type;
+	/* Allocation size once memory is depleted */
+	size_t alloc_log_sz;
+};
+
+enum mlx5dr_db_type {
+	/* Used for allocating chunks of large memory; each element has its
+	 * own resource in the FW.
+	 */
+	MLX5DR_POOL_DB_TYPE_GENERAL_SIZE,
+	/* One resource only; all elements are of the same, single size */
+	MLX5DR_POOL_DB_TYPE_ONE_SIZE_RESOURCE,
+	/* Many resources; memory is allocated with the buddy mechanism */
+	MLX5DR_POOL_DB_TYPE_BUDDY,
+};
+
+struct mlx5dr_buddy_manager {
+	struct mlx5dr_buddy_mem *buddies[MLX5DR_POOL_RESOURCE_ARR_SZ];
+};
+
+struct mlx5dr_pool_elements {
+	uint32_t num_of_elements;
+	struct rte_bitmap *bitmap;
+	bool is_full;
+};
+
+struct mlx5dr_element_manager {
+	struct mlx5dr_pool_elements *elements[MLX5DR_POOL_RESOURCE_ARR_SZ];
+};
+
+struct mlx5dr_pool_db {
+	enum mlx5dr_db_type type;
+	union {
+		struct mlx5dr_element_manager *element_manager;
+		struct mlx5dr_buddy_manager *buddy_manager;
+	};
+};
+
+typedef int (*mlx5dr_pool_db_get_chunk)(struct mlx5dr_pool *pool,
+					struct mlx5dr_pool_chunk *chunk);
+typedef void (*mlx5dr_pool_db_put_chunk)(struct mlx5dr_pool *pool,
+					 struct mlx5dr_pool_chunk *chunk);
+typedef void (*mlx5dr_pool_uninit_db)(struct mlx5dr_pool *pool);
+
+struct mlx5dr_pool {
+	struct mlx5dr_context *ctx;
+	enum mlx5dr_pool_type type;
+	enum mlx5dr_pool_flags flags;
+	pthread_spinlock_t lock;
+	size_t alloc_log_sz;
+	enum mlx5dr_table_type tbl_type;
+	enum mlx5dr_pool_optimize opt_type;
+	struct mlx5dr_pool_resource *resource[MLX5DR_POOL_RESOURCE_ARR_SZ];
+	struct mlx5dr_pool_resource *mirror_resource[MLX5DR_POOL_RESOURCE_ARR_SZ];
+	/* DB */
+	struct mlx5dr_pool_db db;
+	/* Functions */
+	mlx5dr_pool_uninit_db p_db_uninit;
+	mlx5dr_pool_db_get_chunk p_get_chunk;
+	mlx5dr_pool_db_put_chunk p_put_chunk;
+};
+
+struct mlx5dr_pool *
+mlx5dr_pool_create(struct mlx5dr_context *ctx,
+		   struct mlx5dr_pool_attr *pool_attr);
+
+int mlx5dr_pool_destroy(struct mlx5dr_pool *pool);
+
+int mlx5dr_pool_chunk_alloc(struct mlx5dr_pool *pool,
+			    struct mlx5dr_pool_chunk *chunk);
+
+void mlx5dr_pool_chunk_free(struct mlx5dr_pool *pool,
+			    struct mlx5dr_pool_chunk *chunk);
+
+static inline struct mlx5dr_devx_obj *
+mlx5dr_pool_chunk_get_base_devx_obj(struct mlx5dr_pool *pool,
+				    struct mlx5dr_pool_chunk *chunk)
+{
+	return pool->resource[chunk->resource_idx]->devx_obj;
+}
+
+static inline struct mlx5dr_devx_obj *
+mlx5dr_pool_chunk_get_base_devx_obj_mirror(struct mlx5dr_pool *pool,
+					   struct mlx5dr_pool_chunk *chunk)
+{
+	return pool->mirror_resource[chunk->resource_idx]->devx_obj;
+}
+#endif /* MLX5DR_POOL_H_ */
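Finally, a note on how a chunk is presumably consumed, inferred from
resource->base_id == devx_obj->id in mlx5dr_pool.c rather than stated by
the patch itself: the base devx object id plus the chunk offset gives an
absolute object id. A hypothetical helper:

/* Sketch only, not part of the patch: resolve a chunk to a concrete
 * object id. The chunk's offset is relative to the devx object's first
 * id (its base_id).
 */
static inline uint32_t
example_chunk_to_obj_id(struct mlx5dr_pool *pool,
			struct mlx5dr_pool_chunk *chunk)
{
	struct mlx5dr_devx_obj *obj =
		mlx5dr_pool_chunk_get_base_devx_obj(pool, chunk);

	return obj->id + chunk->offset;
}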