From patchwork Mon Oct 16 18:42:34 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Gregory Etelson <getelson@nvidia.com>
X-Patchwork-Id: 132651
X-Patchwork-Delegate: rasland@nvidia.com
From: Gregory Etelson <getelson@nvidia.com>
To: dev@dpdk.org
Cc: Matan Azrad, Viacheslav Ovsiienko, Ori Kam, Suanming Mou
Subject: [PATCH v2 15/16] net/mlx5: support indirect list METER_MARK action
Date: Mon, 16 Oct 2023 21:42:34 +0300
Message-ID: <20231016184235.200427-15-getelson@nvidia.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20231016184235.200427-1-getelson@nvidia.com>
References: <20230927191046.405282-1-getelson@nvidia.com> <20231016184235.200427-1-getelson@nvidia.com>

Signed-off-by: Gregory Etelson <getelson@nvidia.com>
---
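Usage sketch (illustration only, not part of the diff): with this patch, a
single METER_MARK action terminated by END creates a "legacy" indirect list
handle, and the new query/update entry points can recolor it. The port,
meter profile and policy setup are assumed to exist already; the helper
names below are hypothetical.

#include <rte_flow.h>
#include <rte_meter.h>

static struct rte_flow_action_list_handle *
meter_mark_list_create(uint16_t port_id,
		       const struct rte_flow_action_meter_mark *mtr,
		       struct rte_flow_error *err)
{
	const struct rte_flow_indir_action_conf conf = { .ingress = 1 };
	const struct rte_flow_action actions[] = {
		/* METER_MARK followed by END resolves to
		 * MLX5_INDIRECT_ACTION_LIST_TYPE_LEGACY in the PMD. */
		{ .type = RTE_FLOW_ACTION_TYPE_METER_MARK, .conf = mtr },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};

	return rte_flow_action_list_handle_create(port_id, &conf, actions, err);
}

static int
meter_mark_list_recolor(uint16_t port_id,
			struct rte_flow_action_list_handle *handle,
			struct rte_flow_error *err)
{
	const struct rte_flow_indirect_update_flow_meter_mark green = {
		.init_color = RTE_COLOR_GREEN,
	};
	const void *update[] = { &green };

	/* Only the update argument is set, so the legacy branch of the
	 * driver op forwards this to the plain indirect-action update. */
	return rte_flow_action_list_handle_query_update(port_id, handle,
							update, NULL,
							RTE_FLOW_QU_QUERY_FIRST,
							err);
}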
 drivers/net/mlx5/mlx5_flow.c    |  69 +++++-
 drivers/net/mlx5/mlx5_flow.h    |  67 ++++-
 drivers/net/mlx5/mlx5_flow_hw.c | 427 +++++++++++++++++++++++++++-----
 3 files changed, 482 insertions(+), 81 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index 693d1320e1..16fce9c64e 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -75,8 +75,11 @@ mlx5_indirect_list_handles_release(struct rte_eth_dev *dev)
 		switch (e->type) {
 #ifdef HAVE_MLX5_HWS_SUPPORT
 		case MLX5_INDIRECT_ACTION_LIST_TYPE_MIRROR:
-			mlx5_hw_mirror_destroy(dev, (struct mlx5_mirror *)e, true);
+			mlx5_hw_mirror_destroy(dev, (struct mlx5_mirror *)e);
 			break;
+		case MLX5_INDIRECT_ACTION_LIST_TYPE_LEGACY:
+			mlx5_destroy_legacy_indirect(dev, e);
+			break;
 #endif
 		default:
 			DRV_LOG(ERR, "invalid indirect list type");
@@ -1169,7 +1172,24 @@ mlx5_flow_async_action_list_handle_destroy
 		 const struct rte_flow_op_attr *op_attr,
 		 struct rte_flow_action_list_handle *action_handle,
 		 void *user_data, struct rte_flow_error *error);
-
+static int
+mlx5_flow_action_list_handle_query_update(struct rte_eth_dev *dev,
+					  const
+					  struct rte_flow_action_list_handle *handle,
+					  const void **update, void **query,
+					  enum rte_flow_query_update_mode mode,
+					  struct rte_flow_error *error);
+static int
+mlx5_flow_async_action_list_handle_query_update(struct rte_eth_dev *dev,
+						uint32_t queue_id,
+						const struct rte_flow_op_attr *attr,
+						const struct
+						rte_flow_action_list_handle *handle,
+						const void **update,
+						void **query,
+						enum rte_flow_query_update_mode mode,
+						void *user_data,
+						struct rte_flow_error *error);
 static const struct rte_flow_ops mlx5_flow_ops = {
 	.validate = mlx5_flow_validate,
 	.create = mlx5_flow_create,
@@ -1219,6 +1239,10 @@ static const struct rte_flow_ops mlx5_flow_ops = {
 		mlx5_flow_async_action_list_handle_create,
 	.async_action_list_handle_destroy =
 		mlx5_flow_async_action_list_handle_destroy,
+	.action_list_handle_query_update =
+		mlx5_flow_action_list_handle_query_update,
+	.async_action_list_handle_query_update =
+		mlx5_flow_async_action_list_handle_query_update,
 };
 /* Tunnel information.
  */
@@ -11003,6 +11027,47 @@ mlx5_flow_async_action_list_handle_destroy
 							  error);
 }
 
+static int
+mlx5_flow_action_list_handle_query_update(struct rte_eth_dev *dev,
+					  const
+					  struct rte_flow_action_list_handle *handle,
+					  const void **update, void **query,
+					  enum rte_flow_query_update_mode mode,
+					  struct rte_flow_error *error)
+{
+	const struct mlx5_flow_driver_ops *fops;
+
+	MLX5_DRV_FOPS_OR_ERR(dev, fops,
+			     action_list_handle_query_update, ENOTSUP);
+	return fops->action_list_handle_query_update(dev, handle, update, query,
+						     mode, error);
+}
+
+static int
+mlx5_flow_async_action_list_handle_query_update(struct rte_eth_dev *dev,
+						uint32_t queue_id,
+						const
+						struct rte_flow_op_attr *op_attr,
+						const struct
+						rte_flow_action_list_handle *handle,
+						const void **update,
+						void **query,
+						enum
+						rte_flow_query_update_mode mode,
+						void *user_data,
+						struct rte_flow_error *error)
+{
+	const struct mlx5_flow_driver_ops *fops;
+
+	MLX5_DRV_FOPS_OR_ERR(dev, fops,
+			     async_action_list_handle_query_update, ENOTSUP);
+	return fops->async_action_list_handle_query_update(dev, queue_id, op_attr,
+							   handle, update,
+							   query, mode,
+							   user_data, error);
+}
+
+
 /**
  * Destroy all indirect actions (shared RSS).
  *
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index 19b26ad333..2c086026a2 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -98,25 +98,41 @@ enum mlx5_indirect_type{
 #define MLX5_ACTION_CTX_CT_GEN_IDX MLX5_INDIRECT_ACT_CT_GEN_IDX
 
 enum mlx5_indirect_list_type {
-	MLX5_INDIRECT_ACTION_LIST_TYPE_MIRROR = 1,
+	MLX5_INDIRECT_ACTION_LIST_TYPE_ERR = 0,
+	MLX5_INDIRECT_ACTION_LIST_TYPE_LEGACY = 1,
+	MLX5_INDIRECT_ACTION_LIST_TYPE_MIRROR = 2,
 };
 
-/*
+/**
  * Base type for indirect list type.
- * Actual indirect list type MUST override that type and put type spec data
- * after the `chain`.
  */
 struct mlx5_indirect_list {
-	/* type field MUST be the first */
+	/* Indirect list type. */
 	enum mlx5_indirect_list_type type;
+	/* Optional storage list entry */
 	LIST_ENTRY(mlx5_indirect_list) entry;
-	/* put type specific data after chain */
 };
 
+static __rte_always_inline void
+mlx5_indirect_list_add_entry(void *head, struct mlx5_indirect_list *elem)
+{
+	LIST_HEAD(, mlx5_indirect_list) *h = head;
+
+	LIST_INSERT_HEAD(h, elem, entry);
+}
+
+static __rte_always_inline void
+mlx5_indirect_list_remove_entry(struct mlx5_indirect_list *elem)
+{
+	if (elem->entry.le_prev)
+		LIST_REMOVE(elem, entry);
+
+}
+
 static __rte_always_inline enum mlx5_indirect_list_type
-mlx5_get_indirect_list_type(const struct mlx5_indirect_list *obj)
+mlx5_get_indirect_list_type(const struct rte_flow_action_list_handle *obj)
 {
-	return obj->type;
+	return ((const struct mlx5_indirect_list *)obj)->type;
 }
 
 /* Matches on selected register. */
@@ -1240,9 +1256,12 @@ struct rte_flow_hw {
 #pragma GCC diagnostic error "-Wpedantic"
 #endif
 
-struct mlx5dr_action;
-typedef struct mlx5dr_action *
-(*indirect_list_callback_t)(const struct rte_flow_action *);
+struct mlx5_action_construct_data;
+typedef int
+(*indirect_list_callback_t)(struct rte_eth_dev *,
+			    const struct mlx5_action_construct_data *,
+			    const struct rte_flow_action *,
+			    struct mlx5dr_rule_action *);
 /* rte flow action translate to DR action struct.
  */
 struct mlx5_action_construct_data {
@@ -1291,6 +1310,7 @@ struct mlx5_action_construct_data {
 		} shared_counter;
 		struct {
 			uint32_t id;
+			uint32_t conf_masked:1;
 		} shared_meter;
 		struct {
 			indirect_list_callback_t cb;
@@ -2017,7 +2037,21 @@ typedef int
 			 const struct rte_flow_op_attr *op_attr,
 			 struct rte_flow_action_list_handle *action_handle,
 			 void *user_data, struct rte_flow_error *error);
-
+typedef int
+(*mlx5_flow_action_list_handle_query_update_t)
+			(struct rte_eth_dev *dev,
+			 const struct rte_flow_action_list_handle *handle,
+			 const void **update, void **query,
+			 enum rte_flow_query_update_mode mode,
+			 struct rte_flow_error *error);
+typedef int
+(*mlx5_flow_async_action_list_handle_query_update_t)
+			(struct rte_eth_dev *dev, uint32_t queue_id,
+			 const struct rte_flow_op_attr *attr,
+			 const struct rte_flow_action_list_handle *handle,
+			 const void **update, void **query,
+			 enum rte_flow_query_update_mode mode,
+			 void *user_data, struct rte_flow_error *error);
 
 struct mlx5_flow_driver_ops {
 	mlx5_flow_validate_t validate;
@@ -2085,6 +2119,10 @@ struct mlx5_flow_driver_ops {
 		async_action_list_handle_create;
 	mlx5_flow_async_action_list_handle_destroy_t
 		async_action_list_handle_destroy;
+	mlx5_flow_action_list_handle_query_update_t
+		action_list_handle_query_update;
+	mlx5_flow_async_action_list_handle_query_update_t
+		async_action_list_handle_query_update;
 };
 
 /* mlx5_flow.c */
@@ -2820,6 +2858,9 @@ mlx5_indirect_list_handles_release(struct rte_eth_dev *dev);
 #ifdef HAVE_MLX5_HWS_SUPPORT
 struct mlx5_mirror;
 void
-mlx5_hw_mirror_destroy(struct rte_eth_dev *dev, struct mlx5_mirror *mirror, bool release);
+mlx5_hw_mirror_destroy(struct rte_eth_dev *dev, struct mlx5_mirror *mirror);
+void
+mlx5_destroy_legacy_indirect(struct rte_eth_dev *dev,
+			     struct mlx5_indirect_list *ptr);
 #endif
 #endif /* RTE_PMD_MLX5_FLOW_H_ */
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index ae017d2815..4d070624c8 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -61,16 +61,23 @@
 #define MLX5_MIRROR_MAX_CLONES_NUM 3
 #define MLX5_MIRROR_MAX_SAMPLE_ACTIONS_LEN 4
 
+#define MLX5_HW_PORT_IS_PROXY(priv) \
+	(!!((priv)->sh->esw_mode && (priv)->master))
+
+
+struct mlx5_indlst_legacy {
+	struct mlx5_indirect_list indirect;
+	struct rte_flow_action_handle *handle;
+	enum rte_flow_action_type legacy_type;
+};
+
 struct mlx5_mirror_clone {
 	enum rte_flow_action_type type;
 	void *action_ctx;
 };
 
 struct mlx5_mirror {
-	/* type field MUST be the first */
-	enum mlx5_indirect_list_type type;
-	LIST_ENTRY(mlx5_indirect_list) entry;
-
+	struct mlx5_indirect_list indirect;
 	uint32_t clones_num;
 	struct mlx5dr_action *mirror_action;
 	struct mlx5_mirror_clone clone[MLX5_MIRROR_MAX_CLONES_NUM];
@@ -1416,46 +1423,211 @@ flow_hw_meter_mark_compile(struct rte_eth_dev *dev,
 	return 0;
 }
 
-static struct mlx5dr_action *
-flow_hw_mirror_action(const struct rte_flow_action *action)
+static int
+flow_hw_translate_indirect_mirror(__rte_unused struct rte_eth_dev *dev,
+				  __rte_unused const struct mlx5_action_construct_data *act_data,
+				  const struct rte_flow_action *action,
+				  struct mlx5dr_rule_action *dr_rule)
+{
+	const struct rte_flow_action_indirect_list *list_conf = action->conf;
+	const struct mlx5_mirror *mirror = (typeof(mirror))list_conf->handle;
+
+	dr_rule->action = mirror->mirror_action;
+	return 0;
+}
+
+/**
+ * HWS mirror implemented as FW island.
+ * The action does not support indirect list flow configuration.
+ * If template handle was masked, use handle mirror action in flow rules.
+ * Otherwise let flow rule specify mirror handle.
+ */
+static int
+hws_table_tmpl_translate_indirect_mirror(struct rte_eth_dev *dev,
+					 const struct rte_flow_action *action,
+					 const struct rte_flow_action *mask,
+					 struct mlx5_hw_actions *acts,
+					 uint16_t action_src, uint16_t action_dst)
+{
+	int ret = 0;
+	const struct rte_flow_action_indirect_list *mask_conf = mask->conf;
+
+	if (mask_conf && mask_conf->handle) {
+		/**
+		 * If mirror handle was masked, assign fixed DR5 mirror action.
+		 */
+		flow_hw_translate_indirect_mirror(dev, NULL, action,
+						  &acts->rule_acts[action_dst]);
+	} else {
+		struct mlx5_priv *priv = dev->data->dev_private;
+		ret = flow_hw_act_data_indirect_list_append
+			(priv, acts, RTE_FLOW_ACTION_TYPE_INDIRECT_LIST,
+			 action_src, action_dst,
+			 flow_hw_translate_indirect_mirror);
+	}
+	return ret;
+}
+
+static int
+flow_dr_set_meter(struct mlx5_priv *priv,
+		  struct mlx5dr_rule_action *dr_rule,
+		  const struct rte_flow_action_indirect_list *action_conf)
+{
+	const struct mlx5_indlst_legacy *legacy_obj =
+		(typeof(legacy_obj))action_conf->handle;
+	struct mlx5_aso_mtr_pool *mtr_pool = priv->hws_mpool;
+	uint32_t act_idx = (uint32_t)(uintptr_t)legacy_obj->handle;
+	uint32_t mtr_id = act_idx & (RTE_BIT32(MLX5_INDIRECT_ACTION_TYPE_OFFSET) - 1);
+	struct mlx5_aso_mtr *aso_mtr = mlx5_ipool_get(mtr_pool->idx_pool, mtr_id);
+
+	if (!aso_mtr)
+		return -EINVAL;
+	dr_rule->action = mtr_pool->action;
+	dr_rule->aso_meter.offset = aso_mtr->offset;
+	return 0;
+}
+
+__rte_always_inline static void
+flow_dr_mtr_flow_color(struct mlx5dr_rule_action *dr_rule, enum rte_color init_color)
+{
+	dr_rule->aso_meter.init_color =
+		(enum mlx5dr_action_aso_meter_color)rte_col_2_mlx5_col(init_color);
+}
+
+static int
+flow_hw_translate_indirect_meter(struct rte_eth_dev *dev,
+				 const struct mlx5_action_construct_data *act_data,
+				 const struct rte_flow_action *action,
+				 struct mlx5dr_rule_action *dr_rule)
+{
+	int ret;
+	struct mlx5_priv *priv = dev->data->dev_private;
+	const struct rte_flow_action_indirect_list *action_conf = action->conf;
+	const struct rte_flow_indirect_update_flow_meter_mark **flow_conf =
+		(typeof(flow_conf))action_conf->conf;
+
+	/*
+	 * Masked indirect handle set dr5 action during template table
+	 * translation.
+	 */
+	if (!dr_rule->action) {
+		ret = flow_dr_set_meter(priv, dr_rule, action_conf);
+		if (ret)
+			return ret;
+	}
+	if (!act_data->shared_meter.conf_masked) {
+		if (flow_conf && flow_conf[0] && flow_conf[0]->init_color < RTE_COLORS)
+			flow_dr_mtr_flow_color(dr_rule, flow_conf[0]->init_color);
+	}
+	return 0;
+}
+
+static int
+hws_table_tmpl_translate_indirect_meter(struct rte_eth_dev *dev,
+					const struct rte_flow_action *action,
+					const struct rte_flow_action *mask,
+					struct mlx5_hw_actions *acts,
+					uint16_t action_src, uint16_t action_dst)
+{
+	int ret;
+	struct mlx5_priv *priv = dev->data->dev_private;
+	const struct rte_flow_action_indirect_list *action_conf = action->conf;
+	const struct rte_flow_action_indirect_list *mask_conf = mask->conf;
+	bool is_handle_masked = mask_conf && mask_conf->handle;
+	bool is_conf_masked = mask_conf && mask_conf->conf && mask_conf->conf[0];
+	struct mlx5dr_rule_action *dr_rule = &acts->rule_acts[action_dst];
+
+	if (is_handle_masked) {
+		ret = flow_dr_set_meter(priv, dr_rule, action->conf);
+		if (ret)
+			return ret;
+	}
+	if (is_conf_masked) {
+		const struct
+		rte_flow_indirect_update_flow_meter_mark **flow_conf =
+			(typeof(flow_conf))action_conf->conf;
+		flow_dr_mtr_flow_color(dr_rule,
+				       flow_conf[0]->init_color);
+	}
+	if (!is_handle_masked || !is_conf_masked) {
+		struct mlx5_action_construct_data *act_data;
+
+		ret = flow_hw_act_data_indirect_list_append
+			(priv, acts, RTE_FLOW_ACTION_TYPE_INDIRECT_LIST,
+			 action_src, action_dst, flow_hw_translate_indirect_meter);
+		if (ret)
+			return ret;
+		act_data = LIST_FIRST(&acts->act_list);
+		act_data->shared_meter.conf_masked = is_conf_masked;
+	}
+	return 0;
+}
+
+static int
+hws_table_tmpl_translate_indirect_legacy(struct rte_eth_dev *dev,
+					 const struct rte_flow_action *action,
+					 const struct rte_flow_action *mask,
+					 struct mlx5_hw_actions *acts,
+					 uint16_t action_src, uint16_t action_dst)
 {
-	struct mlx5_mirror *mirror = (void *)(uintptr_t)action->conf;
+	int ret;
+	const struct rte_flow_action_indirect_list *indlst_conf = action->conf;
+	struct mlx5_indlst_legacy *indlst_obj = (typeof(indlst_obj))indlst_conf->handle;
+	uint32_t act_idx = (uint32_t)(uintptr_t)indlst_obj->handle;
+	uint32_t type = act_idx >> MLX5_INDIRECT_ACTION_TYPE_OFFSET;
 
-	return mirror->mirror_action;
+	switch (type) {
+	case MLX5_INDIRECT_ACTION_TYPE_METER_MARK:
+		ret = hws_table_tmpl_translate_indirect_meter(dev, action, mask,
+							      acts, action_src,
+							      action_dst);
+		break;
+	default:
+		ret = -EINVAL;
+		break;
+	}
+	return ret;
 }
 
+/*
+ * template .. indirect_list handle Ht conf Ct ..
+ * mask .. indirect_list handle Hm conf Cm ..
+ *
+ * PMD requires Ht != 0 to resolve handle type.
+ * If Ht was masked (Hm != 0) DR5 action will be set according to Ht and will
+ * not change. Otherwise, DR5 action will be resolved during flow rule build.
+ * If Ct was masked (Cm != 0), table template processing updates base
+ * indirect action configuration with Ct parameters.
+ */
 static int
 table_template_translate_indirect_list(struct rte_eth_dev *dev,
 				       const struct rte_flow_action *action,
 				       const struct rte_flow_action *mask,
 				       struct mlx5_hw_actions *acts,
-				       uint16_t action_src,
-				       uint16_t action_dst)
+				       uint16_t action_src, uint16_t action_dst)
 {
-	int ret;
-	bool is_masked = action->conf && mask->conf;
-	struct mlx5_priv *priv = dev->data->dev_private;
+	int ret = 0;
 	enum mlx5_indirect_list_type type;
+	const struct rte_flow_action_indirect_list *list_conf = action->conf;
 
-	if (!action->conf)
+	if (!list_conf || !list_conf->handle)
 		return -EINVAL;
-	type = mlx5_get_indirect_list_type(action->conf);
+	type = mlx5_get_indirect_list_type(list_conf->handle);
 	switch (type) {
+	case MLX5_INDIRECT_ACTION_LIST_TYPE_LEGACY:
+		ret = hws_table_tmpl_translate_indirect_legacy(dev, action, mask,
+							       acts, action_src,
+							       action_dst);
+		break;
 	case MLX5_INDIRECT_ACTION_LIST_TYPE_MIRROR:
-		if (is_masked) {
-			acts->rule_acts[action_dst].action = flow_hw_mirror_action(action);
-		} else {
-			ret = flow_hw_act_data_indirect_list_append
-				(priv, acts, RTE_FLOW_ACTION_TYPE_INDIRECT_LIST,
-				 action_src, action_dst, flow_hw_mirror_action);
-			if (ret)
-				return ret;
-		}
+		ret = hws_table_tmpl_translate_indirect_mirror(dev, action, mask,
+							       acts, action_src,
+							       action_dst);
 		break;
 	default:
 		return -EINVAL;
 	}
-	return 0;
+	return ret;
 }
 
 /**
@@ -2366,8 +2538,7 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
 			  (int)action->type == act_data->type);
 		switch ((int)act_data->type) {
 		case RTE_FLOW_ACTION_TYPE_INDIRECT_LIST:
-			rule_acts[act_data->action_dst].action =
-				act_data->indirect_list.cb(action);
+			act_data->indirect_list.cb(dev, act_data, actions, rule_acts);
 			break;
 		case RTE_FLOW_ACTION_TYPE_INDIRECT:
 			if (flow_hw_shared_action_construct
@@ -4664,20 +4835,11 @@ action_template_set_type(struct rte_flow_actions_template *at,
 }
 
 static int
-flow_hw_dr_actions_template_handle_shared(const struct rte_flow_action *mask,
-					  unsigned int action_src,
+flow_hw_dr_actions_template_handle_shared(int type, uint32_t action_src,
 					  enum mlx5dr_action_type *action_types,
 					  uint16_t *curr_off, uint16_t *cnt_off,
 					  struct rte_flow_actions_template *at)
 {
-	uint32_t type;
-
-	if (!mask) {
-		DRV_LOG(WARNING, "Unable to determine indirect action type "
-			"without a mask specified");
-		return -EINVAL;
-	}
-	type = mask->type;
 	switch (type) {
 	case RTE_FLOW_ACTION_TYPE_RSS:
 		action_template_set_type(at, action_types, action_src, curr_off,
@@ -4718,12 +4880,24 @@ static int
 flow_hw_template_actions_list(struct rte_flow_actions_template *at,
 			      unsigned int action_src,
 			      enum mlx5dr_action_type *action_types,
-			      uint16_t *curr_off)
+			      uint16_t *curr_off, uint16_t *cnt_off)
 {
-	enum mlx5_indirect_list_type list_type;
+	int ret;
+	const struct rte_flow_action_indirect_list *indlst_conf = at->actions[action_src].conf;
+	enum mlx5_indirect_list_type list_type = mlx5_get_indirect_list_type(indlst_conf->handle);
+	const union {
+		struct mlx5_indlst_legacy *legacy;
+		struct rte_flow_action_list_handle *handle;
+	} indlst_obj = { .handle = indlst_conf->handle };
 
-	list_type = mlx5_get_indirect_list_type(at->actions[action_src].conf);
 	switch (list_type) {
+	case MLX5_INDIRECT_ACTION_LIST_TYPE_LEGACY:
+		ret = flow_hw_dr_actions_template_handle_shared
+			(indlst_obj.legacy->legacy_type, action_src,
+			 action_types, curr_off, cnt_off, at);
+		if (ret)
+			return ret;
+		break;
 	case MLX5_INDIRECT_ACTION_LIST_TYPE_MIRROR:
 		action_template_set_type(at, action_types, action_src, curr_off,
 					 MLX5DR_ACTION_TYP_DEST_ARRAY);
@@ -4769,17 +4943,14 @@ flow_hw_dr_actions_template_create(struct rte_flow_actions_template *at)
 		break;
 	case RTE_FLOW_ACTION_TYPE_INDIRECT_LIST:
 		ret = flow_hw_template_actions_list(at, i, action_types,
-						    &curr_off);
+						    &curr_off, &cnt_off);
 		if (ret)
 			return NULL;
 		break;
 	case RTE_FLOW_ACTION_TYPE_INDIRECT:
 		ret = flow_hw_dr_actions_template_handle_shared
-			(&at->masks[i],
-			 i,
-			 action_types,
-			 &curr_off,
-			 &cnt_off, at);
+			(at->masks[i].type, i, action_types,
+			 &curr_off, &cnt_off, at);
 		if (ret)
 			return NULL;
 		break;
@@ -5259,9 +5430,8 @@ flow_hw_actions_template_create(struct rte_eth_dev *dev,
 	 * Need to restore the indirect action index from action conf here.
 	 */
 	case RTE_FLOW_ACTION_TYPE_INDIRECT:
-	case RTE_FLOW_ACTION_TYPE_INDIRECT_LIST:
-		at->actions[i].conf = actions->conf;
-		at->masks[i].conf = masks->conf;
+		at->actions[i].conf = ra[i].conf;
+		at->masks[i].conf = rm[i].conf;
 		break;
 	case RTE_FLOW_ACTION_TYPE_MODIFY_FIELD:
 		info = actions->conf;
@@ -9519,18 +9689,16 @@ mlx5_mirror_destroy_clone(struct rte_eth_dev *dev,
 }
 
 void
-mlx5_hw_mirror_destroy(struct rte_eth_dev *dev, struct mlx5_mirror *mirror, bool release)
+mlx5_hw_mirror_destroy(struct rte_eth_dev *dev, struct mlx5_mirror *mirror)
 {
 	uint32_t i;
 
-	if (mirror->entry.le_prev)
-		LIST_REMOVE(mirror, entry);
+	mlx5_indirect_list_remove_entry(&mirror->indirect);
 	for(i = 0; i < mirror->clones_num; i++)
 		mlx5_mirror_destroy_clone(dev, &mirror->clone[i]);
 	if (mirror->mirror_action)
 		mlx5dr_action_destroy(mirror->mirror_action);
-	if (release)
-		mlx5_free(mirror);
+	mlx5_free(mirror);
 }
 
 static inline enum mlx5dr_table_type
@@ -9825,7 +9993,8 @@ mlx5_hw_mirror_handle_create(struct rte_eth_dev *dev,
 					   actions, "Failed to allocate mirror context");
 		return NULL;
 	}
-	mirror->type = MLX5_INDIRECT_ACTION_LIST_TYPE_MIRROR;
+
+	mirror->indirect.type = MLX5_INDIRECT_ACTION_LIST_TYPE_MIRROR;
 	mirror->clones_num = clones_num;
 	for (i = 0; i < clones_num; i++) {
 		const struct rte_flow_action *clone_actions;
@@ -9857,15 +10026,72 @@ mlx5_hw_mirror_handle_create(struct rte_eth_dev *dev,
 			goto error;
 	}
 
-	LIST_INSERT_HEAD(&priv->indirect_list_head,
-			 (struct mlx5_indirect_list *)mirror, entry);
+	mlx5_indirect_list_add_entry(&priv->indirect_list_head, &mirror->indirect);
 	return (struct rte_flow_action_list_handle *)mirror;
 
 error:
-	mlx5_hw_mirror_destroy(dev, mirror, true);
+	mlx5_hw_mirror_destroy(dev, mirror);
 	return NULL;
 }
 
+void
+mlx5_destroy_legacy_indirect(__rte_unused struct rte_eth_dev *dev,
+			     struct mlx5_indirect_list *ptr)
+{
+	struct mlx5_indlst_legacy *obj = (typeof(obj))ptr;
+
+	switch (obj->legacy_type) {
+	case RTE_FLOW_ACTION_TYPE_METER_MARK:
+		break; /* ASO meters were released in mlx5_flow_meter_flush() */
+	default:
+		break;
+	}
+	mlx5_free(obj);
+}
+
+static struct rte_flow_action_list_handle *
+mlx5_create_legacy_indlst(struct rte_eth_dev *dev, uint32_t queue,
+			  const struct rte_flow_op_attr *attr,
+			  const struct rte_flow_indir_action_conf *conf,
+			  const struct rte_flow_action *actions,
+			  void *user_data, struct rte_flow_error *error)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	struct mlx5_indlst_legacy *indlst_obj = mlx5_malloc(MLX5_MEM_ZERO,
+							    sizeof(*indlst_obj),
+							    0, SOCKET_ID_ANY);
+
+	if (!indlst_obj)
+		return NULL;
+	indlst_obj->handle = flow_hw_action_handle_create(dev, queue, attr, conf,
+							  actions, user_data,
+							  error);
+	if (!indlst_obj->handle) {
+		mlx5_free(indlst_obj);
+		return NULL;
+	}
+	indlst_obj->legacy_type = actions[0].type;
+	indlst_obj->indirect.type = MLX5_INDIRECT_ACTION_LIST_TYPE_LEGACY;
+	mlx5_indirect_list_add_entry(&priv->indirect_list_head, &indlst_obj->indirect);
+	return (struct rte_flow_action_list_handle *)indlst_obj;
+}
+
+static __rte_always_inline enum mlx5_indirect_list_type
+flow_hw_inlist_type_get(const struct rte_flow_action *actions)
+{
+	switch (actions[0].type) {
+	case RTE_FLOW_ACTION_TYPE_SAMPLE:
+		return MLX5_INDIRECT_ACTION_LIST_TYPE_MIRROR;
+	case RTE_FLOW_ACTION_TYPE_METER_MARK:
+		return actions[1].type == RTE_FLOW_ACTION_TYPE_END ?
+		       MLX5_INDIRECT_ACTION_LIST_TYPE_LEGACY :
+		       MLX5_INDIRECT_ACTION_LIST_TYPE_ERR;
+	default:
+		break;
+	}
+	return MLX5_INDIRECT_ACTION_LIST_TYPE_ERR;
+}
+
 static struct rte_flow_action_list_handle *
 flow_hw_async_action_list_handle_create(struct rte_eth_dev *dev, uint32_t queue,
 					const struct rte_flow_op_attr *attr,
@@ -9876,6 +10102,7 @@ flow_hw_async_action_list_handle_create(struct rte_eth_dev *dev, uint32_t queue,
 {
 	struct mlx5_hw_q_job *job = NULL;
 	bool push = flow_hw_action_push(attr);
+	enum mlx5_indirect_list_type list_type;
 	struct rte_flow_action_list_handle *handle;
 	struct mlx5_priv *priv = dev->data->dev_private;
 	const struct mlx5_flow_template_table_cfg table_cfg = {
@@ -9894,6 +10121,16 @@ flow_hw_async_action_list_handle_create(struct rte_eth_dev *dev, uint32_t queue,
 					   NULL, "No action list");
 		return NULL;
 	}
+	list_type = flow_hw_inlist_type_get(actions);
+	if (list_type == MLX5_INDIRECT_ACTION_LIST_TYPE_LEGACY) {
+		/*
+		 * Legacy indirect actions already have
+		 * async resources management. No need to do it twice.
+		 */
+		handle = mlx5_create_legacy_indlst(dev, queue, attr, conf,
+						   actions, user_data, error);
+		goto end;
+	}
 	if (attr) {
 		job = flow_hw_action_job_init(priv, queue, NULL, user_data, NULL,
 					      MLX5_HW_Q_JOB_TYPE_CREATE,
@@ -9901,8 +10138,8 @@ flow_hw_async_action_list_handle_create(struct rte_eth_dev *dev, uint32_t queue,
 		if (!job)
 			return NULL;
 	}
-	switch (actions[0].type) {
-	case RTE_FLOW_ACTION_TYPE_SAMPLE:
+	switch (list_type) {
+	case MLX5_INDIRECT_ACTION_LIST_TYPE_MIRROR:
 		handle = mlx5_hw_mirror_handle_create(dev, &table_cfg,
 						      actions, error);
 		break;
@@ -9916,6 +10153,7 @@ flow_hw_async_action_list_handle_create(struct rte_eth_dev *dev, uint32_t queue,
 		flow_hw_action_finalize(dev, queue, job, push, false,
 					handle != NULL);
 	}
+end:
 	return handle;
 }
 
@@ -9944,6 +10182,15 @@ flow_hw_async_action_list_handle_destroy
 	enum mlx5_indirect_list_type type =
 		mlx5_get_indirect_list_type((void *)handle);
 
+	if (type == MLX5_INDIRECT_ACTION_LIST_TYPE_LEGACY) {
+		struct mlx5_indlst_legacy *legacy = (typeof(legacy))handle;
+
+		ret = flow_hw_action_handle_destroy(dev, queue, attr,
+						    legacy->handle,
+						    user_data, error);
+		mlx5_indirect_list_remove_entry(&legacy->indirect);
+		goto end;
+	}
 	if (attr) {
 		job = flow_hw_action_job_init(priv, queue, NULL, user_data, NULL,
 					      MLX5_HW_Q_JOB_TYPE_DESTROY,
@@ -9953,20 +10200,17 @@ flow_hw_async_action_list_handle_destroy
 	}
 	switch(type) {
 	case MLX5_INDIRECT_ACTION_LIST_TYPE_MIRROR:
-		mlx5_hw_mirror_destroy(dev, (struct mlx5_mirror *)handle, false);
+		mlx5_hw_mirror_destroy(dev, (struct mlx5_mirror *)handle);
 		break;
 	default:
-		handle = NULL;
 		ret = rte_flow_error_set(error, EINVAL,
 					 RTE_FLOW_ERROR_TYPE_ACTION,
 					 NULL, "Invalid indirect list handle");
 	}
 	if (job) {
-		job->action = handle;
-		flow_hw_action_finalize(dev, queue, job, push, false,
-					handle != NULL);
+		flow_hw_action_finalize(dev, queue, job, push, false, true);
 	}
-	mlx5_free(handle);
+end:
 	return ret;
 }
 
@@ -9980,6 +10224,53 @@ flow_hw_action_list_handle_destroy(struct rte_eth_dev *dev,
 					       error);
 }
 
+static int
+flow_hw_async_action_list_handle_query_update
+		(struct rte_eth_dev *dev, uint32_t queue_id,
+		 const struct rte_flow_op_attr *attr,
+		 const struct rte_flow_action_list_handle *handle,
+		 const void **update, void **query,
+		 enum rte_flow_query_update_mode mode,
+		 void *user_data, struct rte_flow_error *error)
+{
+	enum mlx5_indirect_list_type type =
+		mlx5_get_indirect_list_type((const void *)handle);
+
+	if (type == MLX5_INDIRECT_ACTION_LIST_TYPE_LEGACY) {
+		struct mlx5_indlst_legacy *legacy = (void *)(uintptr_t)handle;
+
+		if (update && query)
+			return flow_hw_async_action_handle_query_update
+				(dev, queue_id, attr, legacy->handle,
+				 update, query, mode, user_data, error);
+		else if (update && update[0])
+			return flow_hw_action_handle_update(dev, queue_id, attr,
+							    legacy->handle, update[0],
+							    user_data, error);
+		else if (query && query[0])
+			return flow_hw_action_handle_query(dev, queue_id, attr,
+							   legacy->handle, query[0],
+							   user_data, error);
+		else
+			return rte_flow_error_set(error, EINVAL,
+						  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+						  NULL, "invalid legacy handle query_update parameters");
+	}
+	return -ENOTSUP;
+}
+
+static int
+flow_hw_action_list_handle_query_update(struct rte_eth_dev *dev,
+					const struct rte_flow_action_list_handle *handle,
+					const void **update, void **query,
+					enum rte_flow_query_update_mode mode,
+					struct rte_flow_error *error)
+{
+	return flow_hw_async_action_list_handle_query_update
+			(dev, MLX5_HW_INV_QUEUE, NULL, handle,
+			 update, query, mode, NULL, error);
+}
+
 const struct mlx5_flow_driver_ops mlx5_flow_hw_drv_ops = {
 	.info_get = flow_hw_info_get,
 	.configure = flow_hw_configure,
@@ -10010,10 +10301,14 @@ const struct mlx5_flow_driver_ops mlx5_flow_hw_drv_ops = {
 	.action_query_update = flow_hw_action_query_update,
 	.action_list_handle_create = flow_hw_action_list_handle_create,
 	.action_list_handle_destroy = flow_hw_action_list_handle_destroy,
+	.action_list_handle_query_update =
+		flow_hw_action_list_handle_query_update,
 	.async_action_list_handle_create =
 		flow_hw_async_action_list_handle_create,
 	.async_action_list_handle_destroy =
 		flow_hw_async_action_list_handle_destroy,
+	.async_action_list_handle_query_update =
+		flow_hw_async_action_list_handle_query_update,
 	.query = flow_hw_query,
 	.get_aged_flows = flow_hw_get_aged_flows,
 	.get_q_aged_flows = flow_hw_get_q_aged_flows,
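
Usage sketch for the async path above (illustration only, not part of the
diff; assumes the port was set up with flow queues via rte_flow_configure(),
and the helper name is hypothetical):

static int
meter_mark_list_destroy_async(uint16_t port_id, uint32_t queue,
			      struct rte_flow_action_list_handle *handle,
			      struct rte_flow_error *err)
{
	const struct rte_flow_op_attr op_attr = { .postpone = 0 };

	/* For legacy (METER_MARK) lists the mlx5 PMD completes this through
	 * flow_hw_action_handle_destroy() instead of queuing a new job. */
	return rte_flow_async_action_list_handle_destroy(port_id, queue,
							 &op_attr, handle,
							 NULL, err);
}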