From patchwork Wed Feb 21 10:21:29 2024
X-Patchwork-Submitter: Dariusz Sosnowski
X-Patchwork-Id: 136955
X-Patchwork-Delegate: rasland@nvidia.com
From: Dariusz Sosnowski
To: Viacheslav Ovsiienko, Ori Kam, Suanming Mou, Matan Azrad
CC:
Subject: [PATCH v2 1/2] net/mlx5: move meter init functions
Date: Wed, 21 Feb 2024 12:21:29 +0200
Message-ID: <20240221102130.10124-2-dsosnowski@nvidia.com>
In-Reply-To: <20240221102130.10124-1-dsosnowski@nvidia.com>
References: <20240221101327.9820-1-dsosnowski@nvidia.com> <20240221102130.10124-1-dsosnowski@nvidia.com>
List-Id: DPDK patches and discussions
Move mlx5_flow_meter_init() and mlx5_flow_meter_uninit() to the module for meter operations (mlx5_flow_meter.c).
Signed-off-by: Dariusz Sosnowski
Acked-by: Ori Kam
---
 drivers/net/mlx5/mlx5_flow_hw.c    | 203 ----------------------------
 drivers/net/mlx5/mlx5_flow_meter.c | 207 +++++++++++++++++++++++++++++
 2 files changed, 207 insertions(+), 203 deletions(-)
--
2.34.1

diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 3bb3a9a178..4d6b22c4e3 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -12859,209 +12859,6 @@ mlx5_flow_hw_ctrl_flows(struct rte_eth_dev *dev, uint32_t flags)
 	return 0;
 }
 
-void
-mlx5_flow_meter_uninit(struct rte_eth_dev *dev)
-{
-	struct mlx5_priv *priv = dev->data->dev_private;
-
-	if (priv->mtr_policy_arr) {
-		mlx5_free(priv->mtr_policy_arr);
-		priv->mtr_policy_arr = NULL;
-	}
-	if (priv->mtr_profile_arr) {
-		mlx5_free(priv->mtr_profile_arr);
-		priv->mtr_profile_arr = NULL;
-	}
-	if (priv->hws_mpool) {
-		mlx5_aso_mtr_queue_uninit(priv->sh, priv->hws_mpool, NULL);
-		mlx5_ipool_destroy(priv->hws_mpool->idx_pool);
-		mlx5_free(priv->hws_mpool);
-		priv->hws_mpool = NULL;
-	}
-	if (priv->mtr_bulk.aso) {
-		mlx5_free(priv->mtr_bulk.aso);
-		priv->mtr_bulk.aso = NULL;
-		priv->mtr_bulk.size = 0;
-		mlx5_aso_queue_uninit(priv->sh, ASO_OPC_MOD_POLICER);
-	}
-	if (priv->mtr_bulk.action) {
-		mlx5dr_action_destroy(priv->mtr_bulk.action);
-		priv->mtr_bulk.action = NULL;
-	}
-	if (priv->mtr_bulk.devx_obj) {
-		claim_zero(mlx5_devx_cmd_destroy(priv->mtr_bulk.devx_obj));
-		priv->mtr_bulk.devx_obj = NULL;
-	}
-}
-
-int
-mlx5_flow_meter_init(struct rte_eth_dev *dev,
-		     uint32_t nb_meters,
-		     uint32_t nb_meter_profiles,
-		     uint32_t nb_meter_policies,
-		     uint32_t nb_queues)
-{
-	struct mlx5_priv *priv = dev->data->dev_private;
-	struct mlx5_devx_obj *dcs = NULL;
-	uint32_t log_obj_size;
-	int ret = 0;
-	int reg_id;
-	struct mlx5_aso_mtr *aso;
-	uint32_t i;
-	struct rte_flow_error error;
-	uint32_t flags;
-	uint32_t nb_mtrs = rte_align32pow2(nb_meters);
-	struct mlx5_indexed_pool_config cfg = {
-		.size = sizeof(struct mlx5_aso_mtr),
-		.trunk_size = 1 << 12,
-		.per_core_cache = 1 << 13,
-		.need_lock = 1,
-		.release_mem_en = !!priv->sh->config.reclaim_mode,
-		.malloc = mlx5_malloc,
-		.max_idx = nb_meters,
-		.free = mlx5_free,
-		.type = "mlx5_hw_mtr_mark_action",
-	};
-
-	if (!nb_meters) {
-		ret = ENOTSUP;
-		rte_flow_error_set(&error, ENOMEM,
-				   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
-				   NULL, "Meter configuration is invalid.");
-		goto err;
-	}
-	if (!priv->mtr_en || !priv->sh->meter_aso_en) {
-		ret = ENOTSUP;
-		rte_flow_error_set(&error, ENOMEM,
-				   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
-				   NULL, "Meter ASO is not supported.");
-		goto err;
-	}
-	priv->mtr_config.nb_meters = nb_meters;
-	log_obj_size = rte_log2_u32(nb_meters >> 1);
-	dcs = mlx5_devx_cmd_create_flow_meter_aso_obj
-		(priv->sh->cdev->ctx, priv->sh->cdev->pdn,
-			log_obj_size);
-	if (!dcs) {
-		ret = ENOMEM;
-		rte_flow_error_set(&error, ENOMEM,
-				   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
-				   NULL, "Meter ASO object allocation failed.");
-		goto err;
-	}
-	priv->mtr_bulk.devx_obj = dcs;
-	reg_id = mlx5_flow_get_reg_id(dev, MLX5_MTR_COLOR, 0, NULL);
-	if (reg_id < 0) {
-		ret = ENOTSUP;
-		rte_flow_error_set(&error, ENOMEM,
-				   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
-				   NULL, "Meter register is not available.");
-		goto err;
-	}
-	flags = MLX5DR_ACTION_FLAG_HWS_RX | MLX5DR_ACTION_FLAG_HWS_TX;
-	if (priv->sh->config.dv_esw_en && priv->master)
-		flags |= MLX5DR_ACTION_FLAG_HWS_FDB;
-	priv->mtr_bulk.action = mlx5dr_action_create_aso_meter
-		(priv->dr_ctx, (struct mlx5dr_devx_obj *)dcs,
-			reg_id - REG_C_0, flags);
-	if (!priv->mtr_bulk.action) {
-		ret = ENOMEM;
-		rte_flow_error_set(&error, ENOMEM,
-				   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
-				   NULL, "Meter action creation failed.");
-		goto err;
-	}
-	priv->mtr_bulk.aso = mlx5_malloc(MLX5_MEM_ZERO,
-					 sizeof(struct mlx5_aso_mtr) *
-					 nb_meters,
-					 RTE_CACHE_LINE_SIZE,
-					 SOCKET_ID_ANY);
-	if (!priv->mtr_bulk.aso) {
-		ret = ENOMEM;
-		rte_flow_error_set(&error, ENOMEM,
-				   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
-				   NULL, "Meter bulk ASO allocation failed.");
-		goto err;
-	}
-	priv->mtr_bulk.size = nb_meters;
-	aso = priv->mtr_bulk.aso;
-	for (i = 0; i < priv->mtr_bulk.size; i++) {
-		aso->type = ASO_METER_DIRECT;
-		aso->state = ASO_METER_WAIT;
-		aso->offset = i;
-		aso++;
-	}
-	priv->hws_mpool = mlx5_malloc(MLX5_MEM_ZERO,
-				      sizeof(struct mlx5_aso_mtr_pool),
-				      RTE_CACHE_LINE_SIZE, SOCKET_ID_ANY);
-	if (!priv->hws_mpool) {
-		ret = ENOMEM;
-		rte_flow_error_set(&error, ENOMEM,
-				   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
-				   NULL, "Meter ipool allocation failed.");
-		goto err;
-	}
-	priv->hws_mpool->devx_obj = priv->mtr_bulk.devx_obj;
-	priv->hws_mpool->action = priv->mtr_bulk.action;
-	priv->hws_mpool->nb_sq = nb_queues;
-	if (mlx5_aso_mtr_queue_init(priv->sh, priv->hws_mpool,
-				    &priv->sh->mtrmng->pools_mng, nb_queues)) {
-		ret = ENOMEM;
-		rte_flow_error_set(&error, ENOMEM,
-				   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
-				   NULL, "Meter ASO queue allocation failed.");
-		goto err;
-	}
-	/*
-	 * No need for local cache if Meter number is a small number.
-	 * Since flow insertion rate will be very limited in that case.
-	 * Here let's set the number to less than default trunk size 4K.
-	 */
-	if (nb_mtrs <= cfg.trunk_size) {
-		cfg.per_core_cache = 0;
-		cfg.trunk_size = nb_mtrs;
-	} else if (nb_mtrs <= MLX5_HW_IPOOL_SIZE_THRESHOLD) {
-		cfg.per_core_cache = MLX5_HW_IPOOL_CACHE_MIN;
-	}
-	priv->hws_mpool->idx_pool = mlx5_ipool_create(&cfg);
-	if (nb_meter_profiles) {
-		priv->mtr_config.nb_meter_profiles = nb_meter_profiles;
-		priv->mtr_profile_arr =
-			mlx5_malloc(MLX5_MEM_ZERO,
-				    sizeof(struct mlx5_flow_meter_profile) *
-				    nb_meter_profiles,
-				    RTE_CACHE_LINE_SIZE,
-				    SOCKET_ID_ANY);
-		if (!priv->mtr_profile_arr) {
-			ret = ENOMEM;
-			rte_flow_error_set(&error, ENOMEM,
-					   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
-					   NULL, "Meter profile allocation failed.");
-			goto err;
-		}
-	}
-	if (nb_meter_policies) {
-		priv->mtr_config.nb_meter_policies = nb_meter_policies;
-		priv->mtr_policy_arr =
-			mlx5_malloc(MLX5_MEM_ZERO,
-				    sizeof(struct mlx5_flow_meter_policy) *
-				    nb_meter_policies,
-				    RTE_CACHE_LINE_SIZE,
-				    SOCKET_ID_ANY);
-		if (!priv->mtr_policy_arr) {
-			ret = ENOMEM;
-			rte_flow_error_set(&error, ENOMEM,
-					   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
-					   NULL, "Meter policy allocation failed.");
-			goto err;
-		}
-	}
-	return 0;
-err:
-	mlx5_flow_meter_uninit(dev);
-	return ret;
-}
-
 static __rte_always_inline uint32_t
 mlx5_reformat_domain_to_tbl_type(const struct rte_flow_indir_action_conf *domain)
 {

diff --git a/drivers/net/mlx5/mlx5_flow_meter.c b/drivers/net/mlx5/mlx5_flow_meter.c
index 7cbf772ea4..9cb4614436 100644
--- a/drivers/net/mlx5/mlx5_flow_meter.c
+++ b/drivers/net/mlx5/mlx5_flow_meter.c
@@ -15,6 +15,213 @@
 #include "mlx5.h"
 #include "mlx5_flow.h"
 
+#ifdef HAVE_MLX5_HWS_SUPPORT
+
+void
+mlx5_flow_meter_uninit(struct rte_eth_dev *dev)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+
+	if (priv->mtr_policy_arr) {
+		mlx5_free(priv->mtr_policy_arr);
+		priv->mtr_policy_arr = NULL;
+	}
+	if (priv->mtr_profile_arr) {
+		mlx5_free(priv->mtr_profile_arr);
+		priv->mtr_profile_arr = NULL;
+	}
+	if (priv->hws_mpool) {
+		mlx5_aso_mtr_queue_uninit(priv->sh, priv->hws_mpool, NULL);
+		mlx5_ipool_destroy(priv->hws_mpool->idx_pool);
+		mlx5_free(priv->hws_mpool);
+		priv->hws_mpool = NULL;
+	}
+	if (priv->mtr_bulk.aso) {
+		mlx5_free(priv->mtr_bulk.aso);
+		priv->mtr_bulk.aso = NULL;
+		priv->mtr_bulk.size = 0;
+		mlx5_aso_queue_uninit(priv->sh, ASO_OPC_MOD_POLICER);
+	}
+	if (priv->mtr_bulk.action) {
+		mlx5dr_action_destroy(priv->mtr_bulk.action);
+		priv->mtr_bulk.action = NULL;
+	}
+	if (priv->mtr_bulk.devx_obj) {
+		claim_zero(mlx5_devx_cmd_destroy(priv->mtr_bulk.devx_obj));
+		priv->mtr_bulk.devx_obj = NULL;
+	}
+}
+
+int
+mlx5_flow_meter_init(struct rte_eth_dev *dev,
+		     uint32_t nb_meters,
+		     uint32_t nb_meter_profiles,
+		     uint32_t nb_meter_policies,
+		     uint32_t nb_queues)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	struct mlx5_devx_obj *dcs = NULL;
+	uint32_t log_obj_size;
+	int ret = 0;
+	int reg_id;
+	struct mlx5_aso_mtr *aso;
+	uint32_t i;
+	struct rte_flow_error error;
+	uint32_t flags;
+	uint32_t nb_mtrs = rte_align32pow2(nb_meters);
+	struct mlx5_indexed_pool_config cfg = {
+		.size = sizeof(struct mlx5_aso_mtr),
+		.trunk_size = 1 << 12,
+		.per_core_cache = 1 << 13,
+		.need_lock = 1,
+		.release_mem_en = !!priv->sh->config.reclaim_mode,
+		.malloc = mlx5_malloc,
+		.max_idx = nb_meters,
+		.free = mlx5_free,
+		.type = "mlx5_hw_mtr_mark_action",
+	};
+
+	if (!nb_meters) {
+		ret = ENOTSUP;
+		rte_flow_error_set(&error, ENOMEM,
+				   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				   NULL, "Meter configuration is invalid.");
+		goto err;
+	}
+	if (!priv->mtr_en || !priv->sh->meter_aso_en) {
+		ret = ENOTSUP;
+		rte_flow_error_set(&error, ENOMEM,
+				   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				   NULL, "Meter ASO is not supported.");
+		goto err;
+	}
+	priv->mtr_config.nb_meters = nb_meters;
+	log_obj_size = rte_log2_u32(nb_meters >> 1);
+	dcs = mlx5_devx_cmd_create_flow_meter_aso_obj
+		(priv->sh->cdev->ctx, priv->sh->cdev->pdn,
+			log_obj_size);
+	if (!dcs) {
+		ret = ENOMEM;
+		rte_flow_error_set(&error, ENOMEM,
+				   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				   NULL, "Meter ASO object allocation failed.");
+		goto err;
+	}
+	priv->mtr_bulk.devx_obj = dcs;
+	reg_id = mlx5_flow_get_reg_id(dev, MLX5_MTR_COLOR, 0, NULL);
+	if (reg_id < 0) {
+		ret = ENOTSUP;
+		rte_flow_error_set(&error, ENOMEM,
+				   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				   NULL, "Meter register is not available.");
+		goto err;
+	}
+	flags = MLX5DR_ACTION_FLAG_HWS_RX | MLX5DR_ACTION_FLAG_HWS_TX;
+	if (priv->sh->config.dv_esw_en && priv->master)
+		flags |= MLX5DR_ACTION_FLAG_HWS_FDB;
+	priv->mtr_bulk.action = mlx5dr_action_create_aso_meter
+		(priv->dr_ctx, (struct mlx5dr_devx_obj *)dcs,
+			reg_id - REG_C_0, flags);
+	if (!priv->mtr_bulk.action) {
+		ret = ENOMEM;
+		rte_flow_error_set(&error, ENOMEM,
+				   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				   NULL, "Meter action creation failed.");
+		goto err;
+	}
+	priv->mtr_bulk.aso = mlx5_malloc(MLX5_MEM_ZERO,
+					 sizeof(struct mlx5_aso_mtr) *
+					 nb_meters,
+					 RTE_CACHE_LINE_SIZE,
+					 SOCKET_ID_ANY);
+	if (!priv->mtr_bulk.aso) {
+		ret = ENOMEM;
+		rte_flow_error_set(&error, ENOMEM,
+				   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				   NULL, "Meter bulk ASO allocation failed.");
+		goto err;
+	}
+	priv->mtr_bulk.size = nb_meters;
+	aso = priv->mtr_bulk.aso;
+	for (i = 0; i < priv->mtr_bulk.size; i++) {
+		aso->type = ASO_METER_DIRECT;
+		aso->state = ASO_METER_WAIT;
+		aso->offset = i;
+		aso++;
+	}
+	priv->hws_mpool = mlx5_malloc(MLX5_MEM_ZERO,
+				      sizeof(struct mlx5_aso_mtr_pool),
+				      RTE_CACHE_LINE_SIZE, SOCKET_ID_ANY);
+	if (!priv->hws_mpool) {
+		ret = ENOMEM;
+		rte_flow_error_set(&error, ENOMEM,
+				   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				   NULL, "Meter ipool allocation failed.");
+		goto err;
+	}
+	priv->hws_mpool->devx_obj = priv->mtr_bulk.devx_obj;
+	priv->hws_mpool->action = priv->mtr_bulk.action;
+	priv->hws_mpool->nb_sq = nb_queues;
+	if (mlx5_aso_mtr_queue_init(priv->sh, priv->hws_mpool,
+				    &priv->sh->mtrmng->pools_mng, nb_queues)) {
+		ret = ENOMEM;
+		rte_flow_error_set(&error, ENOMEM,
+				   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				   NULL, "Meter ASO queue allocation failed.");
+		goto err;
+	}
+	/*
+	 * No need for local cache if Meter number is a small number.
+	 * Since flow insertion rate will be very limited in that case.
+	 * Here let's set the number to less than default trunk size 4K.
+	 */
+	if (nb_mtrs <= cfg.trunk_size) {
+		cfg.per_core_cache = 0;
+		cfg.trunk_size = nb_mtrs;
+	} else if (nb_mtrs <= MLX5_HW_IPOOL_SIZE_THRESHOLD) {
+		cfg.per_core_cache = MLX5_HW_IPOOL_CACHE_MIN;
+	}
+	priv->hws_mpool->idx_pool = mlx5_ipool_create(&cfg);
+	if (nb_meter_profiles) {
+		priv->mtr_config.nb_meter_profiles = nb_meter_profiles;
+		priv->mtr_profile_arr =
+			mlx5_malloc(MLX5_MEM_ZERO,
+				    sizeof(struct mlx5_flow_meter_profile) *
+				    nb_meter_profiles,
+				    RTE_CACHE_LINE_SIZE,
+				    SOCKET_ID_ANY);
+		if (!priv->mtr_profile_arr) {
+			ret = ENOMEM;
+			rte_flow_error_set(&error, ENOMEM,
+					   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+					   NULL, "Meter profile allocation failed.");
+			goto err;
+		}
+	}
+	if (nb_meter_policies) {
+		priv->mtr_config.nb_meter_policies = nb_meter_policies;
+		priv->mtr_policy_arr =
+			mlx5_malloc(MLX5_MEM_ZERO,
+				    sizeof(struct mlx5_flow_meter_policy) *
+				    nb_meter_policies,
+				    RTE_CACHE_LINE_SIZE,
+				    SOCKET_ID_ANY);
+		if (!priv->mtr_policy_arr) {
+			ret = ENOMEM;
+			rte_flow_error_set(&error, ENOMEM,
+					   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+					   NULL, "Meter policy allocation failed.");
+			goto err;
+		}
+	}
+	return 0;
+err:
+	mlx5_flow_meter_uninit(dev);
+	return ret;
+}
+
+#endif /* HAVE_MLX5_HWS_SUPPORT */
+
 static int
 mlx5_flow_meter_disable(struct rte_eth_dev *dev,
 		uint32_t meter_id, struct rte_mtr_error *error);