From patchwork Tue Feb 27 13:37:13 2024
X-Patchwork-Submitter: Dariusz Sosnowski
X-Patchwork-Id: 137349
X-Patchwork-Delegate: rasland@nvidia.com
From: Dariusz Sosnowski
To: Viacheslav Ovsiienko, Ori Kam, Suanming Mou, Matan Azrad
Subject: [PATCH v4 1/2] net/mlx5: move meter init functions
Date: Tue, 27 Feb 2024 15:37:13 +0200
Message-ID: <20240227133714.12705-2-dsosnowski@nvidia.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20240227133714.12705-1-dsosnowski@nvidia.com>
References: <20240222180059.50597-1-dsosnowski@nvidia.com>
 <20240227133714.12705-1-dsosnowski@nvidia.com>
X-BeenThere: dev@dpdk.org
X-Mailman-Version:
2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Move mlx5_flow_meter_init() and mlx5_flow_meter_uinit() to module for meter operations. Signed-off-by: Dariusz Sosnowski Acked-by: Ori Kam --- drivers/net/mlx5/mlx5_flow_hw.c | 203 ---------------------------- drivers/net/mlx5/mlx5_flow_meter.c | 207 +++++++++++++++++++++++++++++ 2 files changed, 207 insertions(+), 203 deletions(-) diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c index 769ec9ff94..49c164060b 100644 --- a/drivers/net/mlx5/mlx5_flow_hw.c +++ b/drivers/net/mlx5/mlx5_flow_hw.c @@ -13139,209 +13139,6 @@ mlx5_flow_hw_ctrl_flows(struct rte_eth_dev *dev, uint32_t flags) return 0; } -void -mlx5_flow_meter_uninit(struct rte_eth_dev *dev) -{ - struct mlx5_priv *priv = dev->data->dev_private; - - if (priv->mtr_policy_arr) { - mlx5_free(priv->mtr_policy_arr); - priv->mtr_policy_arr = NULL; - } - if (priv->mtr_profile_arr) { - mlx5_free(priv->mtr_profile_arr); - priv->mtr_profile_arr = NULL; - } - if (priv->hws_mpool) { - mlx5_aso_mtr_queue_uninit(priv->sh, priv->hws_mpool, NULL); - mlx5_ipool_destroy(priv->hws_mpool->idx_pool); - mlx5_free(priv->hws_mpool); - priv->hws_mpool = NULL; - } - if (priv->mtr_bulk.aso) { - mlx5_free(priv->mtr_bulk.aso); - priv->mtr_bulk.aso = NULL; - priv->mtr_bulk.size = 0; - mlx5_aso_queue_uninit(priv->sh, ASO_OPC_MOD_POLICER); - } - if (priv->mtr_bulk.action) { - mlx5dr_action_destroy(priv->mtr_bulk.action); - priv->mtr_bulk.action = NULL; - } - if (priv->mtr_bulk.devx_obj) { - claim_zero(mlx5_devx_cmd_destroy(priv->mtr_bulk.devx_obj)); - priv->mtr_bulk.devx_obj = NULL; - } -} - -int -mlx5_flow_meter_init(struct rte_eth_dev *dev, - uint32_t nb_meters, - uint32_t nb_meter_profiles, - uint32_t nb_meter_policies, - uint32_t nb_queues) -{ - struct mlx5_priv *priv = dev->data->dev_private; - struct mlx5_devx_obj *dcs = NULL; - uint32_t log_obj_size; - int ret = 0; - int reg_id; - struct mlx5_aso_mtr *aso; - uint32_t i; - struct rte_flow_error error; - uint32_t flags; - uint32_t nb_mtrs = rte_align32pow2(nb_meters); - struct mlx5_indexed_pool_config cfg = { - .size = sizeof(struct mlx5_aso_mtr), - .trunk_size = 1 << 12, - .per_core_cache = 1 << 13, - .need_lock = 1, - .release_mem_en = !!priv->sh->config.reclaim_mode, - .malloc = mlx5_malloc, - .max_idx = nb_meters, - .free = mlx5_free, - .type = "mlx5_hw_mtr_mark_action", - }; - - if (!nb_meters) { - ret = ENOTSUP; - rte_flow_error_set(&error, ENOMEM, - RTE_FLOW_ERROR_TYPE_UNSPECIFIED, - NULL, "Meter configuration is invalid."); - goto err; - } - if (!priv->mtr_en || !priv->sh->meter_aso_en) { - ret = ENOTSUP; - rte_flow_error_set(&error, ENOMEM, - RTE_FLOW_ERROR_TYPE_UNSPECIFIED, - NULL, "Meter ASO is not supported."); - goto err; - } - priv->mtr_config.nb_meters = nb_meters; - log_obj_size = rte_log2_u32(nb_meters >> 1); - dcs = mlx5_devx_cmd_create_flow_meter_aso_obj - (priv->sh->cdev->ctx, priv->sh->cdev->pdn, - log_obj_size); - if (!dcs) { - ret = ENOMEM; - rte_flow_error_set(&error, ENOMEM, - RTE_FLOW_ERROR_TYPE_UNSPECIFIED, - NULL, "Meter ASO object allocation failed."); - goto err; - } - priv->mtr_bulk.devx_obj = dcs; - reg_id = mlx5_flow_get_reg_id(dev, MLX5_MTR_COLOR, 0, NULL); - if (reg_id < 0) { - ret = ENOTSUP; - rte_flow_error_set(&error, ENOMEM, - RTE_FLOW_ERROR_TYPE_UNSPECIFIED, - NULL, "Meter register is not available."); - goto err; - } - flags = MLX5DR_ACTION_FLAG_HWS_RX | 
MLX5DR_ACTION_FLAG_HWS_TX; - if (priv->sh->config.dv_esw_en && priv->master) - flags |= MLX5DR_ACTION_FLAG_HWS_FDB; - priv->mtr_bulk.action = mlx5dr_action_create_aso_meter - (priv->dr_ctx, (struct mlx5dr_devx_obj *)dcs, - reg_id - REG_C_0, flags); - if (!priv->mtr_bulk.action) { - ret = ENOMEM; - rte_flow_error_set(&error, ENOMEM, - RTE_FLOW_ERROR_TYPE_UNSPECIFIED, - NULL, "Meter action creation failed."); - goto err; - } - priv->mtr_bulk.aso = mlx5_malloc(MLX5_MEM_ZERO, - sizeof(struct mlx5_aso_mtr) * - nb_meters, - RTE_CACHE_LINE_SIZE, - SOCKET_ID_ANY); - if (!priv->mtr_bulk.aso) { - ret = ENOMEM; - rte_flow_error_set(&error, ENOMEM, - RTE_FLOW_ERROR_TYPE_UNSPECIFIED, - NULL, "Meter bulk ASO allocation failed."); - goto err; - } - priv->mtr_bulk.size = nb_meters; - aso = priv->mtr_bulk.aso; - for (i = 0; i < priv->mtr_bulk.size; i++) { - aso->type = ASO_METER_DIRECT; - aso->state = ASO_METER_WAIT; - aso->offset = i; - aso++; - } - priv->hws_mpool = mlx5_malloc(MLX5_MEM_ZERO, - sizeof(struct mlx5_aso_mtr_pool), - RTE_CACHE_LINE_SIZE, SOCKET_ID_ANY); - if (!priv->hws_mpool) { - ret = ENOMEM; - rte_flow_error_set(&error, ENOMEM, - RTE_FLOW_ERROR_TYPE_UNSPECIFIED, - NULL, "Meter ipool allocation failed."); - goto err; - } - priv->hws_mpool->devx_obj = priv->mtr_bulk.devx_obj; - priv->hws_mpool->action = priv->mtr_bulk.action; - priv->hws_mpool->nb_sq = nb_queues; - if (mlx5_aso_mtr_queue_init(priv->sh, priv->hws_mpool, - &priv->sh->mtrmng->pools_mng, nb_queues)) { - ret = ENOMEM; - rte_flow_error_set(&error, ENOMEM, - RTE_FLOW_ERROR_TYPE_UNSPECIFIED, - NULL, "Meter ASO queue allocation failed."); - goto err; - } - /* - * No need for local cache if Meter number is a small number. - * Since flow insertion rate will be very limited in that case. - * Here let's set the number to less than default trunk size 4K. 
- */ - if (nb_mtrs <= cfg.trunk_size) { - cfg.per_core_cache = 0; - cfg.trunk_size = nb_mtrs; - } else if (nb_mtrs <= MLX5_HW_IPOOL_SIZE_THRESHOLD) { - cfg.per_core_cache = MLX5_HW_IPOOL_CACHE_MIN; - } - priv->hws_mpool->idx_pool = mlx5_ipool_create(&cfg); - if (nb_meter_profiles) { - priv->mtr_config.nb_meter_profiles = nb_meter_profiles; - priv->mtr_profile_arr = - mlx5_malloc(MLX5_MEM_ZERO, - sizeof(struct mlx5_flow_meter_profile) * - nb_meter_profiles, - RTE_CACHE_LINE_SIZE, - SOCKET_ID_ANY); - if (!priv->mtr_profile_arr) { - ret = ENOMEM; - rte_flow_error_set(&error, ENOMEM, - RTE_FLOW_ERROR_TYPE_UNSPECIFIED, - NULL, "Meter profile allocation failed."); - goto err; - } - } - if (nb_meter_policies) { - priv->mtr_config.nb_meter_policies = nb_meter_policies; - priv->mtr_policy_arr = - mlx5_malloc(MLX5_MEM_ZERO, - sizeof(struct mlx5_flow_meter_policy) * - nb_meter_policies, - RTE_CACHE_LINE_SIZE, - SOCKET_ID_ANY); - if (!priv->mtr_policy_arr) { - ret = ENOMEM; - rte_flow_error_set(&error, ENOMEM, - RTE_FLOW_ERROR_TYPE_UNSPECIFIED, - NULL, "Meter policy allocation failed."); - goto err; - } - } - return 0; -err: - mlx5_flow_meter_uninit(dev); - return ret; -} - static __rte_always_inline uint32_t mlx5_reformat_domain_to_tbl_type(const struct rte_flow_indir_action_conf *domain) { diff --git a/drivers/net/mlx5/mlx5_flow_meter.c b/drivers/net/mlx5/mlx5_flow_meter.c index 7cbf772ea4..9cb4614436 100644 --- a/drivers/net/mlx5/mlx5_flow_meter.c +++ b/drivers/net/mlx5/mlx5_flow_meter.c @@ -15,6 +15,213 @@ #include "mlx5.h" #include "mlx5_flow.h" +#ifdef HAVE_MLX5_HWS_SUPPORT + +void +mlx5_flow_meter_uninit(struct rte_eth_dev *dev) +{ + struct mlx5_priv *priv = dev->data->dev_private; + + if (priv->mtr_policy_arr) { + mlx5_free(priv->mtr_policy_arr); + priv->mtr_policy_arr = NULL; + } + if (priv->mtr_profile_arr) { + mlx5_free(priv->mtr_profile_arr); + priv->mtr_profile_arr = NULL; + } + if (priv->hws_mpool) { + mlx5_aso_mtr_queue_uninit(priv->sh, priv->hws_mpool, NULL); + mlx5_ipool_destroy(priv->hws_mpool->idx_pool); + mlx5_free(priv->hws_mpool); + priv->hws_mpool = NULL; + } + if (priv->mtr_bulk.aso) { + mlx5_free(priv->mtr_bulk.aso); + priv->mtr_bulk.aso = NULL; + priv->mtr_bulk.size = 0; + mlx5_aso_queue_uninit(priv->sh, ASO_OPC_MOD_POLICER); + } + if (priv->mtr_bulk.action) { + mlx5dr_action_destroy(priv->mtr_bulk.action); + priv->mtr_bulk.action = NULL; + } + if (priv->mtr_bulk.devx_obj) { + claim_zero(mlx5_devx_cmd_destroy(priv->mtr_bulk.devx_obj)); + priv->mtr_bulk.devx_obj = NULL; + } +} + +int +mlx5_flow_meter_init(struct rte_eth_dev *dev, + uint32_t nb_meters, + uint32_t nb_meter_profiles, + uint32_t nb_meter_policies, + uint32_t nb_queues) +{ + struct mlx5_priv *priv = dev->data->dev_private; + struct mlx5_devx_obj *dcs = NULL; + uint32_t log_obj_size; + int ret = 0; + int reg_id; + struct mlx5_aso_mtr *aso; + uint32_t i; + struct rte_flow_error error; + uint32_t flags; + uint32_t nb_mtrs = rte_align32pow2(nb_meters); + struct mlx5_indexed_pool_config cfg = { + .size = sizeof(struct mlx5_aso_mtr), + .trunk_size = 1 << 12, + .per_core_cache = 1 << 13, + .need_lock = 1, + .release_mem_en = !!priv->sh->config.reclaim_mode, + .malloc = mlx5_malloc, + .max_idx = nb_meters, + .free = mlx5_free, + .type = "mlx5_hw_mtr_mark_action", + }; + + if (!nb_meters) { + ret = ENOTSUP; + rte_flow_error_set(&error, ENOMEM, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, "Meter configuration is invalid."); + goto err; + } + if (!priv->mtr_en || !priv->sh->meter_aso_en) { + ret = ENOTSUP; + 
rte_flow_error_set(&error, ENOMEM, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, "Meter ASO is not supported."); + goto err; + } + priv->mtr_config.nb_meters = nb_meters; + log_obj_size = rte_log2_u32(nb_meters >> 1); + dcs = mlx5_devx_cmd_create_flow_meter_aso_obj + (priv->sh->cdev->ctx, priv->sh->cdev->pdn, + log_obj_size); + if (!dcs) { + ret = ENOMEM; + rte_flow_error_set(&error, ENOMEM, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, "Meter ASO object allocation failed."); + goto err; + } + priv->mtr_bulk.devx_obj = dcs; + reg_id = mlx5_flow_get_reg_id(dev, MLX5_MTR_COLOR, 0, NULL); + if (reg_id < 0) { + ret = ENOTSUP; + rte_flow_error_set(&error, ENOMEM, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, "Meter register is not available."); + goto err; + } + flags = MLX5DR_ACTION_FLAG_HWS_RX | MLX5DR_ACTION_FLAG_HWS_TX; + if (priv->sh->config.dv_esw_en && priv->master) + flags |= MLX5DR_ACTION_FLAG_HWS_FDB; + priv->mtr_bulk.action = mlx5dr_action_create_aso_meter + (priv->dr_ctx, (struct mlx5dr_devx_obj *)dcs, + reg_id - REG_C_0, flags); + if (!priv->mtr_bulk.action) { + ret = ENOMEM; + rte_flow_error_set(&error, ENOMEM, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, "Meter action creation failed."); + goto err; + } + priv->mtr_bulk.aso = mlx5_malloc(MLX5_MEM_ZERO, + sizeof(struct mlx5_aso_mtr) * + nb_meters, + RTE_CACHE_LINE_SIZE, + SOCKET_ID_ANY); + if (!priv->mtr_bulk.aso) { + ret = ENOMEM; + rte_flow_error_set(&error, ENOMEM, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, "Meter bulk ASO allocation failed."); + goto err; + } + priv->mtr_bulk.size = nb_meters; + aso = priv->mtr_bulk.aso; + for (i = 0; i < priv->mtr_bulk.size; i++) { + aso->type = ASO_METER_DIRECT; + aso->state = ASO_METER_WAIT; + aso->offset = i; + aso++; + } + priv->hws_mpool = mlx5_malloc(MLX5_MEM_ZERO, + sizeof(struct mlx5_aso_mtr_pool), + RTE_CACHE_LINE_SIZE, SOCKET_ID_ANY); + if (!priv->hws_mpool) { + ret = ENOMEM; + rte_flow_error_set(&error, ENOMEM, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, "Meter ipool allocation failed."); + goto err; + } + priv->hws_mpool->devx_obj = priv->mtr_bulk.devx_obj; + priv->hws_mpool->action = priv->mtr_bulk.action; + priv->hws_mpool->nb_sq = nb_queues; + if (mlx5_aso_mtr_queue_init(priv->sh, priv->hws_mpool, + &priv->sh->mtrmng->pools_mng, nb_queues)) { + ret = ENOMEM; + rte_flow_error_set(&error, ENOMEM, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, "Meter ASO queue allocation failed."); + goto err; + } + /* + * No need for local cache if Meter number is a small number. + * Since flow insertion rate will be very limited in that case. + * Here let's set the number to less than default trunk size 4K. 
+ */ + if (nb_mtrs <= cfg.trunk_size) { + cfg.per_core_cache = 0; + cfg.trunk_size = nb_mtrs; + } else if (nb_mtrs <= MLX5_HW_IPOOL_SIZE_THRESHOLD) { + cfg.per_core_cache = MLX5_HW_IPOOL_CACHE_MIN; + } + priv->hws_mpool->idx_pool = mlx5_ipool_create(&cfg); + if (nb_meter_profiles) { + priv->mtr_config.nb_meter_profiles = nb_meter_profiles; + priv->mtr_profile_arr = + mlx5_malloc(MLX5_MEM_ZERO, + sizeof(struct mlx5_flow_meter_profile) * + nb_meter_profiles, + RTE_CACHE_LINE_SIZE, + SOCKET_ID_ANY); + if (!priv->mtr_profile_arr) { + ret = ENOMEM; + rte_flow_error_set(&error, ENOMEM, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, "Meter profile allocation failed."); + goto err; + } + } + if (nb_meter_policies) { + priv->mtr_config.nb_meter_policies = nb_meter_policies; + priv->mtr_policy_arr = + mlx5_malloc(MLX5_MEM_ZERO, + sizeof(struct mlx5_flow_meter_policy) * + nb_meter_policies, + RTE_CACHE_LINE_SIZE, + SOCKET_ID_ANY); + if (!priv->mtr_policy_arr) { + ret = ENOMEM; + rte_flow_error_set(&error, ENOMEM, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, "Meter policy allocation failed."); + goto err; + } + } + return 0; +err: + mlx5_flow_meter_uninit(dev); + return ret; +} + +#endif /* HAVE_MLX5_HWS_SUPPORT */ + static int mlx5_flow_meter_disable(struct rte_eth_dev *dev, uint32_t meter_id, struct rte_mtr_error *error); From patchwork Tue Feb 27 13:37:14 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Dariusz Sosnowski X-Patchwork-Id: 137350 X-Patchwork-Delegate: rasland@nvidia.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 2CA0443C06; Tue, 27 Feb 2024 14:38:04 +0100 (CET) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 090F840DFD; Tue, 27 Feb 2024 14:38:00 +0100 (CET) Received: from NAM11-CO1-obe.outbound.protection.outlook.com (mail-co1nam11on2087.outbound.protection.outlook.com [40.107.220.87]) by mails.dpdk.org (Postfix) with ESMTP id 4C60B40A76 for ; Tue, 27 Feb 2024 14:37:56 +0100 (CET) ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none; b=CVxVKp9h04f3hkZDmzXInuDCoYN3dIyJPtom8tBbquKIIH35W3w9WrraTIZmasc0ZMiQ1aqocUgrUtBEXClgu0rNtXb55sEp5DbquM9bGqbHQlnSPaVgOEwR/ldDwO1lzlNKeUhdG9oL6zP4UNwb7trPQS8A0JDUK3tviJ22faTGVdzsGqefGrRcaSl5YS4m87PZ2pWaReppAOGrcXc5Bg+3hfU99idanaPyKqDQHdCOxFiH01e43mY5L+RCBYR2izNOnVIc+ruxA/tNb3meYphmhLHjCng4LqYOb4VSY45Cu3zn2fng0+3uA37GnYo2VJwS0kx078MpHAJ09wCPTg== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; s=arcselector9901; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1; bh=MqPsEZ4FVsP3ADl6pNIIUj3t9Sp9Q/J2gi0R7U9MgpQ=; b=lc2eD9L7sT7DCQU30k73p16z1q+JZJFf9LRWVeH4upz7Z7/x3TURVzSCmK/2gsYF7ZErxPhcPtKxTU40UDdsLozhQqORHkBCJE75NTc4dPGiilDTEPQeJi28QtMgjIG8E1ArtP/mTkzvjDuIuvFZLea9pifqKGAZTO8SeMxWkHOCvKKNWn6LVvZrIeANemO0KLb/sKXIfJ8LXOQ8mm6tjnQiT6y8aoIb5ZaVPuBy/3pZyu40DOyrWb/x1vG3jUr4wk8uakd90It/2mjZqNHXPLs8OSv9EDL2MZ2u6WLCdzxxPzbZmYzDWz/a9h2/rlhTnzLqrIjA0ZZh0BzhdtsEKw== ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is 216.228.117.161) smtp.rcpttodomain=dpdk.org smtp.mailfrom=nvidia.com; dmarc=pass (p=reject sp=reject pct=100) action=none header.from=nvidia.com; dkim=none (message not signed); arc=none (0) 
From: Dariusz Sosnowski
To: Viacheslav Ovsiienko, Ori Kam, Suanming Mou, Matan Azrad
Subject: [PATCH v4 2/2] net/mlx5: add cross port meter mark action sharing
Date: Tue, 27 Feb 2024 15:37:14 +0200
Message-ID: <20240227133714.12705-3-dsosnowski@nvidia.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20240227133714.12705-1-dsosnowski@nvidia.com>
References: <20240222180059.50597-1-dsosnowski@nvidia.com>
 <20240227133714.12705-1-dsosnowski@nvidia.com>

This patch adds support for sharing meter mark actions between multiple
ports of the same physical NIC.

The meter object pool, meter mark actions and meter profiles can be
created only on the host port. Guest ports are allowed to use meter
objects created on the host port through indirect actions. Direct use of
meter mark actions (e.g. putting a meter mark action in an actions
template), as well as creation of indirect meters and meter profiles on a
guest port, is not allowed.

Signed-off-by: Dariusz Sosnowski
Acked-by: Ori Kam
---
 doc/guides/rel_notes/release_24_03.rst |  2 +
 drivers/net/mlx5/mlx5_flow_hw.c        | 42 +++++++++-----
 drivers/net/mlx5/mlx5_flow_meter.c     | 77 ++++++++++++++++++++++++++
 3 files changed, 108 insertions(+), 13 deletions(-)

diff --git a/doc/guides/rel_notes/release_24_03.rst b/doc/guides/rel_notes/release_24_03.rst
index ff9c6552e4..76d2e60f59 100644
--- a/doc/guides/rel_notes/release_24_03.rst
+++ b/doc/guides/rel_notes/release_24_03.rst
@@ -133,6 +133,8 @@ New Features * Added support for comparing result between packet fields or value. * Added support for accumulating value of field into another one. * Added support for copy inner fields in HWS flow engine. + * Added support for sharing indirect action objects of type ``RTE_FLOW_ACTION_TYPE_METER_MARK`` + in HWS flow engine.
* **Updated Marvell cnxk crypto driver.** diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c index 49c164060b..2a1281732a 100644 --- a/drivers/net/mlx5/mlx5_flow_hw.c +++ b/drivers/net/mlx5/mlx5_flow_hw.c @@ -1545,7 +1545,8 @@ static rte_be32_t vlan_hdr_to_be32(const struct rte_flow_action *actions) static __rte_always_inline struct mlx5_aso_mtr * flow_hw_meter_mark_alloc(struct rte_eth_dev *dev, uint32_t queue, const struct rte_flow_action *action, - void *user_data, bool push) + void *user_data, bool push, + struct rte_flow_error *error) { struct mlx5_priv *priv = dev->data->dev_private; struct mlx5_aso_mtr_pool *pool = priv->hws_mpool; @@ -1554,6 +1555,11 @@ flow_hw_meter_mark_alloc(struct rte_eth_dev *dev, uint32_t queue, struct mlx5_flow_meter_info *fm; uint32_t mtr_id; + if (priv->shared_host) { + rte_flow_error_set(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, + "Meter mark actions can only be created on the host port"); + return NULL; + } if (meter_mark->profile == NULL) return NULL; aso_mtr = mlx5_ipool_malloc(priv->hws_mpool->idx_pool, &mtr_id); @@ -1592,13 +1598,14 @@ flow_hw_meter_mark_compile(struct rte_eth_dev *dev, const struct rte_flow_action *action, struct mlx5dr_rule_action *acts, uint32_t *index, - uint32_t queue) + uint32_t queue, + struct rte_flow_error *error) { struct mlx5_priv *priv = dev->data->dev_private; struct mlx5_aso_mtr_pool *pool = priv->hws_mpool; struct mlx5_aso_mtr *aso_mtr; - aso_mtr = flow_hw_meter_mark_alloc(dev, queue, action, NULL, true); + aso_mtr = flow_hw_meter_mark_alloc(dev, queue, action, NULL, true, error); if (!aso_mtr) return -1; @@ -2474,7 +2481,8 @@ __flow_hw_actions_translate(struct rte_eth_dev *dev, dr_pos, actions, acts->rule_acts, &acts->mtr_id, - MLX5_HW_INV_QUEUE); + MLX5_HW_INV_QUEUE, + error); if (err) goto err; } else if (__flow_hw_act_data_general_append(priv, acts, @@ -3197,7 +3205,8 @@ flow_hw_actions_construct(struct rte_eth_dev *dev, */ ret = flow_hw_meter_mark_compile(dev, act_data->action_dst, action, - rule_acts, &job->flow->mtr_id, MLX5_HW_INV_QUEUE); + rule_acts, &job->flow->mtr_id, + MLX5_HW_INV_QUEUE, error); if (ret != 0) return ret; break; @@ -3832,9 +3841,11 @@ __flow_hw_pull_indir_action_comp(struct rte_eth_dev *dev, res[i].user_data = user_data; res[i].status = RTE_FLOW_OP_SUCCESS; } - if (ret_comp < n_res && priv->hws_mpool) - ret_comp += mlx5_aso_pull_completion(&priv->hws_mpool->sq[queue], - &res[ret_comp], n_res - ret_comp); + if (!priv->shared_host) { + if (ret_comp < n_res && priv->hws_mpool) + ret_comp += mlx5_aso_pull_completion(&priv->hws_mpool->sq[queue], + &res[ret_comp], n_res - ret_comp); + } if (ret_comp < n_res && priv->hws_ctpool) ret_comp += mlx5_aso_pull_completion(&priv->ct_mng->aso_sqs[queue], &res[ret_comp], n_res - ret_comp); @@ -5450,6 +5461,8 @@ flow_hw_validate_action_count(struct rte_eth_dev *dev, * Pointer to rte_eth_dev structure. * @param[in] action * Pointer to the indirect action. + * @param[in] indirect + * If true, then provided action was passed using an indirect action. * @param[out] error * Pointer to error structure. 
* @@ -5459,6 +5472,7 @@ flow_hw_validate_action_count(struct rte_eth_dev *dev, static int flow_hw_validate_action_meter_mark(struct rte_eth_dev *dev, const struct rte_flow_action *action, + bool indirect, struct rte_flow_error *error) { struct mlx5_priv *priv = dev->data->dev_private; @@ -5469,6 +5483,9 @@ flow_hw_validate_action_meter_mark(struct rte_eth_dev *dev, return rte_flow_error_set(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ACTION, action, "meter_mark action not supported"); + if (!indirect && priv->shared_host) + return rte_flow_error_set(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ACTION, action, + "meter_mark action can only be used on host port"); if (!priv->hws_mpool) return rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION, action, @@ -5512,7 +5529,7 @@ flow_hw_validate_action_indirect(struct rte_eth_dev *dev, type = mask->type; switch (type) { case RTE_FLOW_ACTION_TYPE_METER_MARK: - ret = flow_hw_validate_action_meter_mark(dev, mask, error); + ret = flow_hw_validate_action_meter_mark(dev, mask, true, error); if (ret < 0) return ret; *action_flags |= MLX5_FLOW_ACTION_METER; @@ -5969,8 +5986,7 @@ mlx5_flow_hw_actions_validate(struct rte_eth_dev *dev, action_flags |= MLX5_FLOW_ACTION_METER; break; case RTE_FLOW_ACTION_TYPE_METER_MARK: - ret = flow_hw_validate_action_meter_mark(dev, action, - error); + ret = flow_hw_validate_action_meter_mark(dev, action, false, error); if (ret < 0) return ret; action_flags |= MLX5_FLOW_ACTION_METER; @@ -10418,7 +10434,7 @@ flow_hw_action_handle_validate(struct rte_eth_dev *dev, uint32_t queue, "CT pool not initialized"); return mlx5_validate_action_ct(dev, action->conf, error); case RTE_FLOW_ACTION_TYPE_METER_MARK: - return flow_hw_validate_action_meter_mark(dev, action, error); + return flow_hw_validate_action_meter_mark(dev, action, true, error); case RTE_FLOW_ACTION_TYPE_RSS: return flow_dv_action_validate(dev, conf, action, error); case RTE_FLOW_ACTION_TYPE_QUOTA: @@ -10577,7 +10593,7 @@ flow_hw_action_handle_create(struct rte_eth_dev *dev, uint32_t queue, break; case RTE_FLOW_ACTION_TYPE_METER_MARK: aso = true; - aso_mtr = flow_hw_meter_mark_alloc(dev, queue, action, job, push); + aso_mtr = flow_hw_meter_mark_alloc(dev, queue, action, job, push, error); if (!aso_mtr) break; mtr_id = (MLX5_INDIRECT_ACTION_TYPE_METER_MARK << diff --git a/drivers/net/mlx5/mlx5_flow_meter.c b/drivers/net/mlx5/mlx5_flow_meter.c index 9cb4614436..c0578ce6e9 100644 --- a/drivers/net/mlx5/mlx5_flow_meter.c +++ b/drivers/net/mlx5/mlx5_flow_meter.c @@ -17,11 +17,32 @@ #ifdef HAVE_MLX5_HWS_SUPPORT +static void +mlx5_flow_meter_uninit_guest(struct rte_eth_dev *dev) +{ + struct mlx5_priv *priv = dev->data->dev_private; + + if (priv->hws_mpool) { + if (priv->hws_mpool->action) { + claim_zero(mlx5dr_action_destroy(priv->hws_mpool->action)); + priv->hws_mpool->action = NULL; + } + priv->hws_mpool->devx_obj = NULL; + priv->hws_mpool->idx_pool = NULL; + mlx5_free(priv->hws_mpool); + priv->hws_mpool = NULL; + } +} + void mlx5_flow_meter_uninit(struct rte_eth_dev *dev) { struct mlx5_priv *priv = dev->data->dev_private; + if (priv->shared_host) { + mlx5_flow_meter_uninit_guest(dev); + return; + } if (priv->mtr_policy_arr) { mlx5_free(priv->mtr_policy_arr); priv->mtr_policy_arr = NULL; @@ -52,6 +73,54 @@ mlx5_flow_meter_uninit(struct rte_eth_dev *dev) } } +static int +mlx5_flow_meter_init_guest(struct rte_eth_dev *dev) +{ + struct mlx5_priv *priv = dev->data->dev_private; + struct rte_eth_dev *host_dev = priv->shared_host; + struct mlx5_priv *host_priv = 
host_dev->data->dev_private; + int reg_id = 0; + uint32_t flags; + int ret = 0; + + MLX5_ASSERT(priv->shared_host); + reg_id = mlx5_flow_get_reg_id(dev, MLX5_MTR_COLOR, 0, NULL); + if (reg_id < 0) { + rte_errno = ENOMEM; + ret = -rte_errno; + DRV_LOG(ERR, "Meter register is not available."); + goto err; + } + priv->hws_mpool = mlx5_malloc(MLX5_MEM_ZERO, sizeof(struct mlx5_aso_mtr_pool), + RTE_CACHE_LINE_SIZE, SOCKET_ID_ANY); + if (!priv->hws_mpool) { + rte_errno = ENOMEM; + ret = -rte_errno; + DRV_LOG(ERR, "Meter ipool allocation failed."); + goto err; + } + MLX5_ASSERT(host_priv->hws_mpool->idx_pool); + MLX5_ASSERT(host_priv->hws_mpool->devx_obj); + priv->hws_mpool->idx_pool = host_priv->hws_mpool->idx_pool; + priv->hws_mpool->devx_obj = host_priv->hws_mpool->devx_obj; + flags = MLX5DR_ACTION_FLAG_HWS_RX | MLX5DR_ACTION_FLAG_HWS_TX; + if (priv->sh->config.dv_esw_en && priv->master) + flags |= MLX5DR_ACTION_FLAG_HWS_FDB; + priv->hws_mpool->action = mlx5dr_action_create_aso_meter + (priv->dr_ctx, (struct mlx5dr_devx_obj *)priv->hws_mpool->devx_obj, + reg_id - REG_C_0, flags); + if (!priv->hws_mpool->action) { + rte_errno = ENOMEM; + ret = -rte_errno; + DRV_LOG(ERR, "Meter action creation failed."); + goto err; + } + return 0; +err: + mlx5_flow_meter_uninit(dev); + return ret; +} + int mlx5_flow_meter_init(struct rte_eth_dev *dev, uint32_t nb_meters, @@ -81,6 +150,8 @@ mlx5_flow_meter_init(struct rte_eth_dev *dev, .type = "mlx5_hw_mtr_mark_action", }; + if (priv->shared_host) + return mlx5_flow_meter_init_guest(dev); if (!nb_meters) { ret = ENOTSUP; rte_flow_error_set(&error, ENOMEM, @@ -850,6 +921,9 @@ mlx5_flow_meter_profile_hws_add(struct rte_eth_dev *dev, struct mlx5_flow_meter_profile *fmp; int ret; + if (priv->shared_host) + return -rte_mtr_error_set(error, ENOTSUP, RTE_MTR_ERROR_TYPE_UNSPECIFIED, NULL, + "Meter profiles cannot be created on guest port"); if (!priv->mtr_profile_arr) return mlx5_flow_meter_profile_add(dev, meter_profile_id, profile, error); /* Check input params. */ @@ -887,6 +961,9 @@ mlx5_flow_meter_profile_hws_delete(struct rte_eth_dev *dev, struct mlx5_priv *priv = dev->data->dev_private; struct mlx5_flow_meter_profile *fmp; + if (priv->shared_host) + return -rte_mtr_error_set(error, ENOTSUP, RTE_MTR_ERROR_TYPE_UNSPECIFIED, NULL, + "Meter profiles cannot be destroyed through guest port"); if (!priv->mtr_profile_arr) return mlx5_flow_meter_profile_delete(dev, meter_profile_id, error); /* Meter profile must exist. */
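
For context, the host/guest model described in the 2/2 commit message maps onto the
generic rte_flow shared indirect action mechanism: the guest port is configured with
``host_port_id`` pointing at the host port and the ``RTE_FLOW_PORT_FLAG_SHARE_INDIRECT``
flag, the indirect ``METER_MARK`` handle is created on the host port only, and guest
ports then reference that handle through ``RTE_FLOW_ACTION_TYPE_INDIRECT``. The sketch
below is illustrative only and is not taken from the patch: port IDs 0/1, meter profile
ID 1, the token-bucket parameters, and the pool sizes in ``rte_flow_port_attr`` are
hypothetical, and the template/async rule-insertion plumbing around the guest-side
action is omitted.

	#include <rte_flow.h>
	#include <rte_mtr.h>

	/* Hypothetical port IDs: both ports belong to the same physical NIC and
	 * are assumed to be configured (rte_eth_dev_configure()) but not started yet.
	 */
	#define HOST_PORT  0
	#define GUEST_PORT 1

	static struct rte_flow_action_handle *
	share_meter_mark_across_ports(void)
	{
		struct rte_flow_error flow_err;
		struct rte_mtr_error mtr_err;
		const struct rte_flow_queue_attr qattr = { .size = 64 };
		const struct rte_flow_queue_attr *qattrs[] = { &qattr };
		/* Host port owns the meter object pool, profiles and policies. */
		const struct rte_flow_port_attr host_attr = {
			.nb_meters = 1 << 10,
			.nb_meter_profiles = 8,
			.nb_meter_policies = 8,
		};
		/* Guest port only borrows indirect actions created on the host port. */
		const struct rte_flow_port_attr guest_attr = {
			.host_port_id = HOST_PORT,
			.flags = RTE_FLOW_PORT_FLAG_SHARE_INDIRECT,
		};

		if (rte_flow_configure(HOST_PORT, &host_attr, 1, qattrs, &flow_err) != 0 ||
		    rte_flow_configure(GUEST_PORT, &guest_attr, 1, qattrs, &flow_err) != 0)
			return NULL;
		/* Meter profile must also be created on the host port. */
		struct rte_mtr_meter_profile profile = {
			.alg = RTE_MTR_SRTCM_RFC2697,
			.srtcm_rfc2697 = { .cir = 1000000, .cbs = 2048, .ebs = 2048 },
		};
		if (rte_mtr_meter_profile_add(HOST_PORT, 1, &profile, &mtr_err) != 0)
			return NULL;
		/* The indirect METER_MARK action is created on the host port only. */
		const struct rte_flow_action_meter_mark mm = {
			.profile = rte_mtr_meter_profile_get(HOST_PORT, 1, &mtr_err),
			.state = 1, /* metering enabled */
		};
		const struct rte_flow_action action = {
			.type = RTE_FLOW_ACTION_TYPE_METER_MARK,
			.conf = &mm,
		};
		const struct rte_flow_indir_action_conf conf = { .ingress = 1 };
		struct rte_flow_action_handle *handle =
			rte_flow_action_handle_create(HOST_PORT, &conf, &action, &flow_err);

		/*
		 * A rule installed on GUEST_PORT may now reference the handle through
		 * RTE_FLOW_ACTION_TYPE_INDIRECT, e.g. as one entry of the actions array
		 * passed to the template/async flow API (templates omitted here):
		 *
		 *   struct rte_flow_action guest_actions[] = {
		 *       { .type = RTE_FLOW_ACTION_TYPE_INDIRECT, .conf = handle },
		 *       { .type = RTE_FLOW_ACTION_TYPE_END },
		 *   };
		 */
		return handle;
	}

This matches the split the driver changes implement: the ASO meter objects live in a
single pool owned by the host port, while mlx5_flow_meter_init_guest() on a guest port
only creates an mlx5dr ASO meter action on top of the host port's DevX object and reuses
the host's index pool, so no per-guest meter resources are allocated.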