From patchwork Wed Oct 18 02:26:39 2023
X-Patchwork-Submitter: Rongwei Liu
X-Patchwork-Id: 132827
X-Patchwork-Delegate: rasland@nvidia.com
From: Rongwei Liu
Subject: [PATCH v1] net/mlx5: add indirect encap decap support
Date: Wed, 18 Oct 2023 05:26:39 +0300
Message-ID: <20231018022639.508987-1-rongweil@nvidia.com>
X-Mailer: git-send-email 2.27.0
Support raw_encap/raw_decap combinations in the indirect action list
and translate them to four types of underlying tunnel operations:

1. Layer 2 encapsulation, such as VXLAN.
2. Layer 2 decapsulation, such as VXLAN.
3. Layer 3 encapsulation, such as GRE.
4. Layer 3 decapsulation, such as GRE.

Each indirect action list has a unique handle ID and stands for a
single tunnel operation. The operation is shared globally and uses a
fixed pattern: no configuration is associated with a handle ID, so
the conf pointer must always be NULL, both in the action template and
in the flow rules.

If the handle ID mask in the action template is NULL, each flow rule
can take its own indirect handle; otherwise, the ID from the action
template is used for all rules. The handle ID used in a flow rule
must be of the same type as the one in the action template. In the
example below, the template mask leaves the handle unmasked, so the
rule supplies its own handle (11) even though the template was
written with handle 10.

Testpmd CLI example:

flow indirect_action 0 create action_id 10 transfer list actions raw_decap index 1 / raw_encap index 2 / end
flow pattern_template 0 create transfer pattern_template_id 1 template eth / ipv4 / udp / end
flow actions_template 0 create transfer actions_template_id 1 template indirect_list handle 10 / jump / end mask indirect_list / jump / end
flow template_table 0 create table_id 1 group 1 priority 0 transfer rules_number 64 pattern_template 1 actions_template 1
flow queue 0 create 0 template_table 1 pattern_template 0 actions_template 0 postpone no pattern eth / ipv4 / udp / end actions indirect_list handle 11 / jump group 10 / end

Signed-off-by: Rongwei Liu
Acked-by: Ori Kam
Acked-by: Suanming Mou

Depends on the preceding series:
https://inbox.dpdk.org/dev/20231017073117.23738-1-getelson@nvidia.com/
---
 drivers/net/mlx5/mlx5_flow.c    |   5 +
 drivers/net/mlx5/mlx5_flow.h    |  15 ++
 drivers/net/mlx5/mlx5_flow_hw.c | 323 ++++++++++++++++++++++++++++++++
 3 files changed, 343 insertions(+)
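For context, the first testpmd command above corresponds to the generic
indirect action list API introduced by the series this patch depends on.
Below is a minimal application-side sketch, not part of the patch:
create_reformat_list_handle() and tunnel_hdr[] are hypothetical, and the
buffer sizes must satisfy the bounds validated in
mlx5_reformat_action_create() further down.

#include <rte_flow.h>

static struct rte_flow_action_list_handle *
create_reformat_list_handle(uint16_t port_id, struct rte_flow_error *err)
{
	/* Outer header stack of the new tunnel, filled by the application. */
	static uint8_t tunnel_hdr[64];
	struct rte_flow_action_raw_decap decap_conf = {
		.size = sizeof(struct rte_ether_hdr), /* strip inner L2 only */
	};
	struct rte_flow_action_raw_encap encap_conf = {
		.data = tunnel_hdr,
		.size = sizeof(tunnel_hdr),
	};
	const struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_RAW_DECAP, .conf = &decap_conf },
		{ .type = RTE_FLOW_ACTION_TYPE_RAW_ENCAP, .conf = &encap_conf },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};
	/* Exactly one domain must be set, as validated by the driver. */
	const struct rte_flow_indir_action_conf conf = { .transfer = 1 };

	return rte_flow_action_list_handle_create(port_id, &conf, actions, err);
}

Because the operation is shared with a fixed pattern, nothing besides the
domain attributes is configured per handle.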
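The returned handle is then referenced from actions templates and flow
rules through struct rte_flow_action_indirect_list, with its conf array
kept NULL per the fixed-pattern semantics above. A hypothetical rule-side
fragment follows; the table, pattern, op_attr and template indexes are
assumed to have been created as in the testpmd commands above.

static struct rte_flow *
enqueue_rule_with_reformat(uint16_t port_id,
			   struct rte_flow_template_table *table,
			   const struct rte_flow_item pattern[],
			   const struct rte_flow_op_attr *op_attr,
			   struct rte_flow_action_list_handle *handle,
			   struct rte_flow_error *err)
{
	struct rte_flow_action_indirect_list indlst = {
		.handle = handle,	/* e.g. from create_reformat_list_handle() */
		.conf = NULL,		/* fixed pattern: nothing mutable per rule */
	};
	const struct rte_flow_action_jump jump = { .group = 10 };
	const struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_INDIRECT_LIST, .conf = &indlst },
		{ .type = RTE_FLOW_ACTION_TYPE_JUMP, .conf = &jump },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};

	/* Queue 0, pattern/actions template index 0, as in the CLI example. */
	return rte_flow_async_create(port_id, 0, op_attr, table,
				     pattern, 0, actions, 0, NULL, err);
}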
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index 16fce9c64e..588ce1b4a3 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -66,6 +66,7 @@ void
 mlx5_indirect_list_handles_release(struct rte_eth_dev *dev)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
+	struct rte_flow_error error;
 
 	while (!LIST_EMPTY(&priv->indirect_list_head)) {
 		struct mlx5_indirect_list *e =
@@ -80,6 +81,10 @@ mlx5_indirect_list_handles_release(struct rte_eth_dev *dev)
 		case MLX5_INDIRECT_ACTION_LIST_TYPE_LEGACY:
 			mlx5_destroy_legacy_indirect(dev, e);
 			break;
+		case MLX5_INDIRECT_ACTION_LIST_TYPE_REFORMAT:
+			mlx5_reformat_action_destroy(dev,
+				(struct rte_flow_action_list_handle *)e, &error);
+			break;
 #endif
 		default:
 			DRV_LOG(ERR, "invalid indirect list type");
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index 53c11651c8..593eadeafc 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -101,6 +101,7 @@ enum mlx5_indirect_list_type {
 	MLX5_INDIRECT_ACTION_LIST_TYPE_ERR = 0,
 	MLX5_INDIRECT_ACTION_LIST_TYPE_LEGACY = 1,
 	MLX5_INDIRECT_ACTION_LIST_TYPE_MIRROR = 2,
+	MLX5_INDIRECT_ACTION_LIST_TYPE_REFORMAT = 3,
 };
 
 /**
@@ -1367,6 +1368,8 @@ struct mlx5_hw_jump_action {
 
 /* Encap decap action struct. */
 struct mlx5_hw_encap_decap_action {
+	struct mlx5_indirect_list indirect;
+	enum mlx5dr_action_type action_type;
 	struct mlx5dr_action *action; /* Action object. */
 	/* Is header_reformat action shared across flows in table. */
 	bool shared;
@@ -2427,6 +2430,15 @@ const struct rte_flow_action *mlx5_flow_find_action
 int mlx5_validate_action_rss(struct rte_eth_dev *dev,
 			     const struct rte_flow_action *action,
 			     struct rte_flow_error *error);
+struct mlx5_hw_encap_decap_action*
+mlx5_reformat_action_create(struct rte_eth_dev *dev,
+			    const struct rte_flow_indir_action_conf *conf,
+			    const struct rte_flow_action *encap_action,
+			    const struct rte_flow_action *decap_action,
+			    struct rte_flow_error *error);
+int mlx5_reformat_action_destroy(struct rte_eth_dev *dev,
+				 struct rte_flow_action_list_handle *handle,
+				 struct rte_flow_error *error);
 int mlx5_flow_validate_action_count(struct rte_eth_dev *dev,
 				    const struct rte_flow_attr *attr,
 				    struct rte_flow_error *error);
@@ -2860,5 +2872,8 @@ mlx5_hw_mirror_destroy(struct rte_eth_dev *dev, struct mlx5_mirror *mirror);
 void
 mlx5_destroy_legacy_indirect(struct rte_eth_dev *dev,
 			     struct mlx5_indirect_list *ptr);
+void
+mlx5_hw_decap_encap_destroy(struct rte_eth_dev *dev,
+			    struct mlx5_indirect_list *reformat);
 #endif
 #endif /* RTE_PMD_MLX5_FLOW_H_ */
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 5114cc1920..a7fdb89bc9 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -1468,6 +1468,49 @@ hws_table_tmpl_translate_indirect_mirror(struct rte_eth_dev *dev,
 	return ret;
 }
 
+static int
+flow_hw_reformat_action(__rte_unused struct rte_eth_dev *dev,
+			__rte_unused const struct mlx5_action_construct_data *data,
+			const struct rte_flow_action *action,
+			struct mlx5dr_rule_action *dr_rule)
+{
+	const struct rte_flow_action_indirect_list *indlst_conf = action->conf;
+
+	dr_rule->action = ((struct mlx5_hw_encap_decap_action *)
+			   (indlst_conf->handle))->action;
+	if (!dr_rule->action)
+		return -EINVAL;
+	return 0;
+}
+
+/**
+ * Template conf must not be masked. If handle is masked, use the one in template,
+ * otherwise update per flow rule.
+ */
+static int
+hws_table_tmpl_translate_indirect_reformat(struct rte_eth_dev *dev,
+					   const struct rte_flow_action *action,
+					   const struct rte_flow_action *mask,
+					   struct mlx5_hw_actions *acts,
+					   uint16_t action_src, uint16_t action_dst)
+{
+	int ret = -1;
+	const struct rte_flow_action_indirect_list *mask_conf = mask->conf;
+	struct mlx5_priv *priv = dev->data->dev_private;
+
+	if (mask_conf && mask_conf->handle && !mask_conf->conf)
+		/**
+		 * If handle was masked, assign fixed DR action.
+		 */
+		ret = flow_hw_reformat_action(dev, NULL, action,
+					      &acts->rule_acts[action_dst]);
+	else if (mask_conf && !mask_conf->handle && !mask_conf->conf)
+		ret = flow_hw_act_data_indirect_list_append
+			(priv, acts, RTE_FLOW_ACTION_TYPE_INDIRECT_LIST,
+			 action_src, action_dst, flow_hw_reformat_action);
+	return ret;
+}
+
 static int
 flow_dr_set_meter(struct mlx5_priv *priv,
 		  struct mlx5dr_rule_action *dr_rule,
@@ -1624,6 +1667,13 @@ table_template_translate_indirect_list(struct rte_eth_dev *dev,
 							   acts, action_src,
 							   action_dst);
 		break;
+	case MLX5_INDIRECT_ACTION_LIST_TYPE_REFORMAT:
+		if (list_conf->conf)
+			return -EINVAL;
+		ret = hws_table_tmpl_translate_indirect_reformat(dev, action, mask,
+								 acts, action_src,
+								 action_dst);
+		break;
 	default:
 		return -EINVAL;
 	}
@@ -4890,6 +4940,7 @@ flow_hw_template_actions_list(struct rte_flow_actions_template *at,
 		struct mlx5_indlst_legacy *legacy;
 		struct rte_flow_action_list_handle *handle;
 	} indlst_obj = { .handle = indlst_conf->handle };
+	enum mlx5dr_action_type type;
 
 	switch (list_type) {
 	case MLX5_INDIRECT_ACTION_LIST_TYPE_LEGACY:
@@ -4903,6 +4954,11 @@ flow_hw_template_actions_list(struct rte_flow_actions_template *at,
 		action_template_set_type(at, action_types, action_src, curr_off,
 					 MLX5DR_ACTION_TYP_DEST_ARRAY);
 		break;
+	case MLX5_INDIRECT_ACTION_LIST_TYPE_REFORMAT:
+		type = ((struct mlx5_hw_encap_decap_action *)
+			(indlst_conf->handle))->action_type;
+		action_template_set_type(at, action_types, action_src, curr_off, type);
+		break;
 	default:
 		DRV_LOG(ERR, "Unsupported indirect list type");
 		return -EINVAL;
@@ -10087,12 +10143,79 @@ flow_hw_inlist_type_get(const struct rte_flow_action *actions)
 		return actions[1].type == RTE_FLOW_ACTION_TYPE_END ?
 		       MLX5_INDIRECT_ACTION_LIST_TYPE_LEGACY :
 		       MLX5_INDIRECT_ACTION_LIST_TYPE_ERR;
+	case RTE_FLOW_ACTION_TYPE_RAW_DECAP:
+	case RTE_FLOW_ACTION_TYPE_RAW_ENCAP:
+		return MLX5_INDIRECT_ACTION_LIST_TYPE_REFORMAT;
 	default:
 		break;
 	}
 	return MLX5_INDIRECT_ACTION_LIST_TYPE_ERR;
 }
 
+static struct rte_flow_action_list_handle*
+mlx5_hw_decap_encap_handle_create(struct rte_eth_dev *dev,
+				  const struct mlx5_flow_template_table_cfg *table_cfg,
+				  const struct rte_flow_action *actions,
+				  struct rte_flow_error *error)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	const struct rte_flow_attr *flow_attr = &table_cfg->attr.flow_attr;
+	const struct rte_flow_action *encap = NULL;
+	const struct rte_flow_action *decap = NULL;
+	struct rte_flow_indir_action_conf indirect_conf = {
+		.ingress = flow_attr->ingress,
+		.egress = flow_attr->egress,
+		.transfer = flow_attr->transfer,
+	};
+	struct mlx5_hw_encap_decap_action *handle;
+	uint64_t action_flags = 0;
+
+	/*
+	 * Allow
+	 * 1. raw_decap / raw_encap / end
+	 * 2. raw_encap / end
+	 * 3. raw_decap / end
+	 */
+	while (actions->type != RTE_FLOW_ACTION_TYPE_END) {
+		if (actions->type == RTE_FLOW_ACTION_TYPE_RAW_DECAP) {
+			if (action_flags) {
+				rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION,
+						   actions, "Invalid indirect action list sequence");
+				return NULL;
+			}
+			action_flags |= MLX5_FLOW_ACTION_DECAP;
+			decap = actions;
+		} else if (actions->type == RTE_FLOW_ACTION_TYPE_RAW_ENCAP) {
+			if (action_flags & MLX5_FLOW_ACTION_ENCAP) {
+				rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION,
+						   actions, "Invalid indirect action list sequence");
+				return NULL;
+			}
+			action_flags |= MLX5_FLOW_ACTION_ENCAP;
+			encap = actions;
+		} else {
+			rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION,
+					   actions, "Invalid indirect action type in list");
+			return NULL;
+		}
+		actions++;
+	}
+	if (!decap && !encap) {
+		rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION,
+				   actions, "Invalid indirect action combinations");
+		return NULL;
+	}
+	handle = mlx5_reformat_action_create(dev, &indirect_conf, encap, decap, error);
+	if (!handle) {
+		rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION,
+				   actions, "Failed to create HWS decap_encap action");
+		return NULL;
+	}
+	handle->indirect.type = MLX5_INDIRECT_ACTION_LIST_TYPE_REFORMAT;
+	LIST_INSERT_HEAD(&priv->indirect_list_head, &handle->indirect, entry);
+	return (struct rte_flow_action_list_handle *)handle;
+}
+
 static struct rte_flow_action_list_handle *
 flow_hw_async_action_list_handle_create(struct rte_eth_dev *dev, uint32_t queue,
 					const struct rte_flow_op_attr *attr,
@@ -10144,6 +10267,10 @@ flow_hw_async_action_list_handle_create(struct rte_eth_dev *dev, uint32_t queue,
 		handle = mlx5_hw_mirror_handle_create(dev, &table_cfg,
 						      actions, error);
 		break;
+	case MLX5_INDIRECT_ACTION_LIST_TYPE_REFORMAT:
+		handle = mlx5_hw_decap_encap_handle_create(dev, &table_cfg,
+							   actions, error);
+		break;
 	default:
 		handle = NULL;
 		rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION,
@@ -10203,6 +10330,11 @@ flow_hw_async_action_list_handle_destroy
 	case MLX5_INDIRECT_ACTION_LIST_TYPE_MIRROR:
 		mlx5_hw_mirror_destroy(dev, (struct mlx5_mirror *)handle);
 		break;
+	case MLX5_INDIRECT_ACTION_LIST_TYPE_REFORMAT:
+		LIST_REMOVE(&((struct mlx5_hw_encap_decap_action *)handle)->indirect,
+			    entry);
+		mlx5_reformat_action_destroy(dev, handle, error);
+		break;
 	default:
 		ret = rte_flow_error_set(error, EINVAL,
 					 RTE_FLOW_ERROR_TYPE_ACTION,
@@ -11427,4 +11559,195 @@ mlx5_flow_meter_init(struct rte_eth_dev *dev,
 	return ret;
 }
 
+static __rte_always_inline uint32_t
+mlx5_reformat_domain_to_tbl_type(const struct rte_flow_indir_action_conf *domain)
+{
+	uint32_t tbl_type;
+
+	if (domain->transfer)
+		tbl_type = MLX5DR_ACTION_FLAG_HWS_FDB;
+	else if (domain->egress)
+		tbl_type = MLX5DR_ACTION_FLAG_HWS_TX;
+	else if (domain->ingress)
+		tbl_type = MLX5DR_ACTION_FLAG_HWS_RX;
+	else
+		tbl_type = UINT32_MAX;
+	return tbl_type;
+}
+
+static struct mlx5_hw_encap_decap_action *
+__mlx5_reformat_create(struct rte_eth_dev *dev,
+		       const struct rte_flow_action_raw_encap *encap_conf,
+		       const struct rte_flow_indir_action_conf *domain,
+		       enum mlx5dr_action_type type)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	struct mlx5_hw_encap_decap_action *handle;
+	struct mlx5dr_action_reformat_header hdr;
+	uint32_t flags;
+
+	flags = mlx5_reformat_domain_to_tbl_type(domain);
+	flags |= (uint32_t)MLX5DR_ACTION_FLAG_SHARED;
+	if (flags == UINT32_MAX) {
+		DRV_LOG(ERR, "Reformat: invalid indirect action configuration");
+		return NULL;
+	}
+	/* Allocate new list entry. */
+	handle = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*handle), 0, SOCKET_ID_ANY);
+	if (!handle) {
+		DRV_LOG(ERR, "Reformat: failed to allocate reformat entry");
+		return NULL;
+	}
+	handle->action_type = type;
+	hdr.sz = encap_conf ? encap_conf->size : 0;
+	hdr.data = encap_conf ? encap_conf->data : NULL;
+	handle->action = mlx5dr_action_create_reformat(priv->dr_ctx,
+						       type, 1, &hdr, 0, flags);
+	if (!handle->action) {
+		DRV_LOG(ERR, "Reformat: failed to create reformat action");
+		mlx5_free(handle);
+		return NULL;
+	}
+	return handle;
+}
+
+/**
+ * Create mlx5 reformat action.
+ *
+ * @param[in] dev
+ *   Pointer to rte_eth_dev structure.
+ * @param[in] conf
+ *   Pointer to the indirect action parameters.
+ * @param[in] encap_action
+ *   Pointer to the raw_encap action configuration.
+ * @param[in] decap_action
+ *   Pointer to the raw_decap action configuration.
+ * @param[out] error
+ *   Pointer to error structure.
+ *
+ * @return
+ *   A valid shared action handle in case of success, NULL otherwise and
+ *   rte_errno is set.
+ */
+struct mlx5_hw_encap_decap_action*
+mlx5_reformat_action_create(struct rte_eth_dev *dev,
+			    const struct rte_flow_indir_action_conf *conf,
+			    const struct rte_flow_action *encap_action,
+			    const struct rte_flow_action *decap_action,
+			    struct rte_flow_error *error)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	struct mlx5_hw_encap_decap_action *handle;
+	const struct rte_flow_action_raw_encap *encap = NULL;
+	const struct rte_flow_action_raw_decap *decap = NULL;
+	enum mlx5dr_action_type type = MLX5DR_ACTION_TYP_LAST;
+
+	MLX5_ASSERT(!encap_action || encap_action->type == RTE_FLOW_ACTION_TYPE_RAW_ENCAP);
+	MLX5_ASSERT(!decap_action || decap_action->type == RTE_FLOW_ACTION_TYPE_RAW_DECAP);
+	if (priv->sh->config.dv_flow_en != 2) {
+		rte_flow_error_set(error, ENOTSUP,
+				   RTE_FLOW_ERROR_TYPE_ACTION, encap_action,
+				   "Reformat: hardware does not support");
+		return NULL;
+	}
+	if (!conf || (conf->transfer + conf->egress + conf->ingress != 1)) {
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ACTION, encap_action,
+				   "Reformat: domain should be specified");
+		return NULL;
+	}
+	if ((encap_action && !encap_action->conf) || (decap_action && !decap_action->conf)) {
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ACTION, encap_action,
+				   "Reformat: missed action configuration");
+		return NULL;
+	}
+	if (encap_action && !decap_action) {
+		encap = (const struct rte_flow_action_raw_encap *)encap_action->conf;
+		if (!encap->size || encap->size > MLX5_ENCAP_MAX_LEN ||
+		    encap->size < MLX5_ENCAPSULATION_DECISION_SIZE) {
+			rte_flow_error_set(error, EINVAL,
+					   RTE_FLOW_ERROR_TYPE_ACTION, encap_action,
+					   "Reformat: Invalid encap length");
+			return NULL;
+		}
+		type = MLX5DR_ACTION_TYP_REFORMAT_L2_TO_TNL_L2;
+	} else if (decap_action && !encap_action) {
+		decap = (const struct rte_flow_action_raw_decap *)decap_action->conf;
+		if (!decap->size || decap->size < MLX5_ENCAPSULATION_DECISION_SIZE) {
+			rte_flow_error_set(error, EINVAL,
+					   RTE_FLOW_ERROR_TYPE_ACTION, encap_action,
+					   "Reformat: Invalid decap length");
+			return NULL;
+		}
+		type = MLX5DR_ACTION_TYP_REFORMAT_TNL_L2_TO_L2;
+	} else if (encap_action && decap_action) {
+		decap = (const struct rte_flow_action_raw_decap *)decap_action->conf;
+		encap = (const struct rte_flow_action_raw_encap *)encap_action->conf;
+		if (decap->size < MLX5_ENCAPSULATION_DECISION_SIZE &&
+		    encap->size >= MLX5_ENCAPSULATION_DECISION_SIZE &&
+		    encap->size <= MLX5_ENCAP_MAX_LEN) {
+			type = MLX5DR_ACTION_TYP_REFORMAT_L2_TO_TNL_L3;
+		} else if (decap->size >= MLX5_ENCAPSULATION_DECISION_SIZE &&
+			   encap->size < MLX5_ENCAPSULATION_DECISION_SIZE) {
+			type = MLX5DR_ACTION_TYP_REFORMAT_TNL_L3_TO_L2;
+		} else {
+			rte_flow_error_set(error, EINVAL,
+					   RTE_FLOW_ERROR_TYPE_ACTION, encap_action,
+					   "Reformat: Invalid decap & encap length");
+			return NULL;
+		}
+	} else if (!encap_action && !decap_action) {
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ACTION, encap_action,
+				   "Reformat: Invalid decap & encap configurations");
+		return NULL;
+	}
+	if (!priv->dr_ctx) {
+		rte_flow_error_set(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ACTION,
+				   encap_action, "Reformat: HWS not supported");
+		return NULL;
+	}
+	handle = __mlx5_reformat_create(dev, encap, conf, type);
+	if (!handle) {
+		rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION, encap_action,
+				   "Reformat: failed to create indirect action");
+		return NULL;
+	}
+	return handle;
+}
+
+/**
+ * Destroy the indirect reformat action.
+ * Release action related resources on the NIC and the memory.
+ * Lock free, (mutex should be acquired by caller).
+ *
+ * @param[in] dev
+ *   Pointer to the Ethernet device structure.
+ * @param[in] handle
+ *   The indirect action list handle to be removed.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL. Initialized in case of
+ *   error only.
+ *
+ * @return
+ *   0 on success, otherwise negative errno value.
+ */
+int
+mlx5_reformat_action_destroy(struct rte_eth_dev *dev,
+			     struct rte_flow_action_list_handle *handle,
+			     struct rte_flow_error *error)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	struct mlx5_hw_encap_decap_action *action;
+
+	action = (struct mlx5_hw_encap_decap_action *)handle;
+	if (!priv->dr_ctx || !action)
+		return rte_flow_error_set(error, ENOTSUP,
+					  RTE_FLOW_ERROR_TYPE_ACTION, handle,
+					  "Reformat: invalid action handle");
+	mlx5dr_action_destroy(action->action);
+	mlx5_free(handle);
+	return 0;
+}
 #endif
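
As a quick reference, the reformat type selection implemented in
mlx5_reformat_action_create() above reduces to the following decision.
The helper below is a condensed, hypothetical restatement for
illustration only (it presumes the driver's internal headers and is not
part of the patch):

/* D = MLX5_ENCAPSULATION_DECISION_SIZE; encap sizes are additionally
 * bounded by MLX5_ENCAP_MAX_LEN in the real code.
 */
static enum mlx5dr_action_type
reformat_type_of(const struct rte_flow_action_raw_decap *decap,
		 const struct rte_flow_action_raw_encap *encap)
{
	const size_t dec = MLX5_ENCAPSULATION_DECISION_SIZE;

	if (encap && !decap)	/* encap->size in [D, MLX5_ENCAP_MAX_LEN] */
		return MLX5DR_ACTION_TYP_REFORMAT_L2_TO_TNL_L2;	/* L2 encap, VXLAN-like */
	if (decap && !encap)	/* decap->size must be >= D */
		return MLX5DR_ACTION_TYP_REFORMAT_TNL_L2_TO_L2;	/* L2 decap, VXLAN-like */
	if (!decap || !encap)
		return MLX5DR_ACTION_TYP_LAST;			/* invalid */
	if (decap->size < dec && encap->size >= dec &&
	    encap->size <= MLX5_ENCAP_MAX_LEN)
		return MLX5DR_ACTION_TYP_REFORMAT_L2_TO_TNL_L3;	/* L3 encap, GRE-like */
	if (decap->size >= dec && encap->size < dec)
		return MLX5DR_ACTION_TYP_REFORMAT_TNL_L3_TO_L2;	/* L3 decap, GRE-like */
	return MLX5DR_ACTION_TYP_LAST;				/* invalid combination */
}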