From patchwork Thu Sep 28 02:33:36 2023
X-Patchwork-Submitter: Rongwei Liu
X-Patchwork-Id: 132068
X-Patchwork-Delegate: rasland@nvidia.com
From: Rongwei Liu
Subject: [PATCH v1 1/5] net/mlx5: sample the srv6 last segment
Date: Thu, 28 Sep 2023 05:33:36 +0300
Message-ID: <20230928023341.1239731-2-rongweil@nvidia.com>
In-Reply-To: <20230928023341.1239731-1-rongweil@nvidia.com>
References: <20230928023341.1239731-1-rongweil@nvidia.com>
X-BeenThere: dev@dpdk.org
List-Id: DPDK patches and discussions

When removing the IPv6 routing extension header from the packets, the
destination address should be updated to the last one in the segment
list. Enlarge the hardware sample scope to cover the last segment.

Signed-off-by: Rongwei Liu
---
 drivers/net/mlx5/mlx5.c | 41 ++++++++++++++++++++++++++++++-----------
 drivers/net/mlx5/mlx5.h |  6 ++++++
 2 files changed, 36 insertions(+), 11 deletions(-)

diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index 902e919425..2c6919a0c1 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -1088,6 +1088,7 @@ mlx5_alloc_srh_flex_parser(struct rte_eth_dev *dev)
 	struct mlx5_devx_graph_node_attr node = {
 		.modify_field_select = 0,
 	};
+	uint32_t i;
 	uint32_t ids[MLX5_GRAPH_NODE_SAMPLE_NUM];
 	struct mlx5_priv *priv = dev->data->dev_private;
 	struct mlx5_common_dev_config *config = &priv->sh->cdev->config;
@@ -1121,10 +1122,18 @@ mlx5_alloc_srh_flex_parser(struct rte_eth_dev *dev)
 	node.next_header_field_size = 0x8;
 	node.in[0].arc_parse_graph_node = MLX5_GRAPH_ARC_NODE_IP;
 	node.in[0].compare_condition_value = IPPROTO_ROUTING;
-	node.sample[0].flow_match_sample_en = 1;
-	/* First come first serve no matter inner or outer. */
-	node.sample[0].flow_match_sample_tunnel_mode = MLX5_GRAPH_SAMPLE_TUNNEL_FIRST;
-	node.sample[0].flow_match_sample_offset_mode = MLX5_GRAPH_SAMPLE_OFFSET_FIXED;
+	/* Final IPv6 address. */
+	for (i = 0; i <= MLX5_SRV6_SAMPLE_NUM - 1 && i < MLX5_GRAPH_NODE_SAMPLE_NUM; i++) {
+		node.sample[i].flow_match_sample_en = 1;
+		node.sample[i].flow_match_sample_offset_mode =
+					MLX5_GRAPH_SAMPLE_OFFSET_FIXED;
+		/* First come first serve no matter inner or outer. */
+		node.sample[i].flow_match_sample_tunnel_mode =
+					MLX5_GRAPH_SAMPLE_TUNNEL_FIRST;
+		node.sample[i].flow_match_sample_field_base_offset =
+					(i + 1) * sizeof(uint32_t); /* in bytes */
+	}
+	node.sample[0].flow_match_sample_field_base_offset = 0;
 	node.out[0].arc_parse_graph_node = MLX5_GRAPH_ARC_NODE_TCP;
 	node.out[0].compare_condition_value = IPPROTO_TCP;
 	node.out[1].arc_parse_graph_node = MLX5_GRAPH_ARC_NODE_UDP;
@@ -1137,8 +1146,8 @@ mlx5_alloc_srh_flex_parser(struct rte_eth_dev *dev)
 		goto error;
 	}
 	priv->sh->srh_flex_parser.flex.devx_fp->devx_obj = fp;
-	priv->sh->srh_flex_parser.flex.mapnum = 1;
-	priv->sh->srh_flex_parser.flex.devx_fp->num_samples = 1;
+	priv->sh->srh_flex_parser.flex.mapnum = MLX5_SRV6_SAMPLE_NUM;
+	priv->sh->srh_flex_parser.flex.devx_fp->num_samples = MLX5_SRV6_SAMPLE_NUM;
 	ret = mlx5_devx_cmd_query_parse_samples(fp, ids, priv->sh->srh_flex_parser.flex.mapnum,
						&priv->sh->srh_flex_parser.flex.devx_fp->anchor_id);
@@ -1146,12 +1155,22 @@ mlx5_alloc_srh_flex_parser(struct rte_eth_dev *dev)
 		DRV_LOG(ERR, "Failed to query sample IDs.");
 		goto error;
 	}
-	ret = mlx5_devx_cmd_match_sample_info_query(ibv_ctx, ids[0],
-				&priv->sh->srh_flex_parser.flex.devx_fp->sample_info[0]);
-	if (ret) {
-		DRV_LOG(ERR, "Failed to query sample id information.");
-		goto error;
+	for (i = 0; i <= MLX5_SRV6_SAMPLE_NUM - 1 && i < MLX5_GRAPH_NODE_SAMPLE_NUM; i++) {
+		ret = mlx5_devx_cmd_match_sample_info_query(ibv_ctx, ids[i],
+					&priv->sh->srh_flex_parser.flex.devx_fp->sample_info[i]);
+		if (ret) {
+			DRV_LOG(ERR, "Failed to query sample id %u information.", ids[i]);
+			goto error;
+		}
+	}
+	for (i = 0; i <= MLX5_SRV6_SAMPLE_NUM - 1 && i < MLX5_GRAPH_NODE_SAMPLE_NUM; i++) {
+		priv->sh->srh_flex_parser.flex.devx_fp->sample_ids[i] = ids[i];
+		priv->sh->srh_flex_parser.flex.map[i].width = sizeof(uint32_t) * CHAR_BIT;
+		priv->sh->srh_flex_parser.flex.map[i].reg_id = i;
+		priv->sh->srh_flex_parser.flex.map[i].shift =
+					(i + 1) * sizeof(uint32_t) * CHAR_BIT;
 	}
+	priv->sh->srh_flex_parser.flex.map[0].shift = 0;
 	return 0;
 error:
 	if (fp)
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 9c93eea269..951feb5ac4 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -1376,6 +1376,7 @@ struct mlx5_flex_pattern_field {
 	uint16_t shift:5;
 	uint16_t reg_id:5;
 };
+
 #define MLX5_INVALID_SAMPLE_REG_ID 0x1F

 /* Port flex item context. */
@@ -1387,6 +1388,11 @@ struct mlx5_flex_item {
 	struct mlx5_flex_pattern_field map[MLX5_FLEX_ITEM_MAPPING_NUM];
 };

+/*
+ * Sample an IPv6 address and the first dword of SRv6 header.
+ * Then it is 16 + 4 = 20 bytes which is 5 dwords.
+ */
+#define MLX5_SRV6_SAMPLE_NUM 5
 /* Mlx5 internal flex parser profile structure. */
 struct mlx5_internal_flex_parser_profile {
 	uint32_t refcnt;

From patchwork Thu Sep 28 02:33:37 2023
X-Patchwork-Submitter: Rongwei Liu
X-Patchwork-Id: 132069
X-Patchwork-Delegate: rasland@nvidia.com
From: Rongwei Liu
Subject: [PATCH v1 2/5] net/mlx5/hws: fix potential wrong rte_errno value
Date: Thu, 28 Sep 2023 05:33:37 +0300
Message-ID: <20230928023341.1239731-3-rongweil@nvidia.com>
In-Reply-To: <20230928023341.1239731-1-rongweil@nvidia.com>
References: <20230928023341.1239731-1-rongweil@nvidia.com>
A valid rte_errno is desired when the DR layer API returns an error,
and it must not overwrite the value set by the lower layer.

Fixes: 0a2657c4ff4d ("net/mlx5/hws: support insert header action")
Cc: hamdani@nvidia.com

Signed-off-by: Rongwei Liu
---
 drivers/net/mlx5/hws/mlx5dr_action.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/net/mlx5/hws/mlx5dr_action.c b/drivers/net/mlx5/hws/mlx5dr_action.c
index 769bb97e18..4e04d77852 100644
--- a/drivers/net/mlx5/hws/mlx5dr_action.c
+++ b/drivers/net/mlx5/hws/mlx5dr_action.c
@@ -2291,6 +2291,7 @@ mlx5dr_action_create_insert_header(struct mlx5dr_context *ctx,
 	if (!num_of_hdrs) {
 		DR_LOG(ERR, "Reformat num_of_hdrs cannot be zero");
+		rte_errno = EINVAL;
 		return NULL;
 	}
@@ -2338,7 +2339,6 @@ mlx5dr_action_create_insert_header(struct mlx5dr_context *ctx,
					  reformat_hdrs, log_bulk_size);
 	if (ret) {
 		DR_LOG(ERR, "Failed to create HWS reformat action");
-		rte_errno = EINVAL;
 		goto free_reformat_hdrs;
 	}

From patchwork Thu Sep 28 02:33:38 2023
X-Patchwork-Submitter: Rongwei Liu
X-Patchwork-Id: 132070
X-Patchwork-Delegate: rasland@nvidia.com
From: Rongwei Liu
Subject: [PATCH v1 3/5] net/mlx5/hws: add IPv6 routing extension push remove actions
Date: Thu, 28 Sep 2023 05:33:38 +0300
Message-ID: <20230928023341.1239731-4-rongweil@nvidia.com>
In-Reply-To: <20230928023341.1239731-1-rongweil@nvidia.com>
References: <20230928023341.1239731-1-rongweil@nvidia.com>
Add two dr_actions to implement IPv6 routing extension push and
remove. The new actions are combinations of existing actions rather
than new types: basically, two modify-header actions plus one
reformat action. The action order is the same as for the encap and
decap actions.
Signed-off-by: Rongwei Liu
---
 drivers/common/mlx5/mlx5_prm.h       |   1 +
 drivers/net/mlx5/hws/mlx5dr.h        |  29 +++
 drivers/net/mlx5/hws/mlx5dr_action.c | 358 ++++++++++++++++++++++++++-
 drivers/net/mlx5/hws/mlx5dr_action.h |   7 +
 drivers/net/mlx5/hws/mlx5dr_debug.c  |   2 +
 drivers/net/mlx5/mlx5_flow.h         |  34 +++
 6 files changed, 428 insertions(+), 3 deletions(-)

diff --git a/drivers/common/mlx5/mlx5_prm.h b/drivers/common/mlx5/mlx5_prm.h
index bdb65c4951..16cebd54ba 100644
--- a/drivers/common/mlx5/mlx5_prm.h
+++ b/drivers/common/mlx5/mlx5_prm.h
@@ -3636,6 +3636,7 @@ enum mlx5_ifc_header_anchors {
 	MLX5_HEADER_ANCHOR_PACKET_START = 0x0,
 	MLX5_HEADER_ANCHOR_FIRST_VLAN_START = 0x2,
 	MLX5_HEADER_ANCHOR_IPV6_IPV4 = 0x07,
+	MLX5_HEADER_ANCHOR_TCP_UDP = 0x09,
 	MLX5_HEADER_ANCHOR_INNER_MAC = 0x13,
 	MLX5_HEADER_ANCHOR_INNER_IPV6_IPV4 = 0x19,
 };
diff --git a/drivers/net/mlx5/hws/mlx5dr.h b/drivers/net/mlx5/hws/mlx5dr.h
index 97d9e99382..2a3d1af5db 100644
--- a/drivers/net/mlx5/hws/mlx5dr.h
+++ b/drivers/net/mlx5/hws/mlx5dr.h
@@ -55,6 +55,8 @@ enum mlx5dr_action_type {
 	MLX5DR_ACTION_TYP_DEST_ROOT,
 	MLX5DR_ACTION_TYP_DEST_ARRAY,
 	MLX5DR_ACTION_TYP_DEST_IPSEC_DECRYPT,
+	MLX5DR_ACTION_TYP_POP_IPV6_ROUTE_EXT,
+	MLX5DR_ACTION_TYP_PUSH_IPV6_ROUTE_EXT,
 	MLX5DR_ACTION_TYP_MAX,
 };
@@ -195,6 +197,11 @@ struct mlx5dr_rule_action {
 			uint8_t *data;
 		} reformat;

+		struct {
+			uint32_t offset;
+			uint8_t *header;
+		} ipv6_ext;
+
 		struct {
 			rte_be32_t vlan_hdr;
 		} push_vlan;
@@ -897,6 +904,28 @@ mlx5dr_tmp_action_create_dest_ipsec_dec(struct mlx5dr_context *ctx,
					uint8_t log_bulk_sz,
					uint32_t flags);

+/* Create action to push or remove IPv6 extension header.
+ *
+ * @param[in] ctx
+ *	The context in which the new action will be created.
+ * @param[in] type
+ *	Type of direct rule action: MLX5DR_ACTION_TYP_PUSH_IPV6_ROUTE_EXT or
+ *	MLX5DR_ACTION_TYP_POP_IPV6_ROUTE_EXT.
+ * @param[in] hdr
+ *	Header for packet reformat.
+ * @param[in] log_bulk_size
+ *	Number of unique values used with this pattern.
+ * @param[in] flags
+ *	Action creation flags. (enum mlx5dr_action_flags)
+ * @return pointer to mlx5dr_action on success NULL otherwise.
+ */
+struct mlx5dr_action *
+mlx5dr_action_create_reformat_ipv6_ext(struct mlx5dr_context *ctx,
+				       enum mlx5dr_action_type type,
+				       struct mlx5dr_action_reformat_header *hdr,
+				       uint32_t log_bulk_size,
+				       uint32_t flags);
+
 /* Destroy direct rule action.
  *
  * @param[in] action
diff --git a/drivers/net/mlx5/hws/mlx5dr_action.c b/drivers/net/mlx5/hws/mlx5dr_action.c
index 4e04d77852..68fbd34620 100644
--- a/drivers/net/mlx5/hws/mlx5dr_action.c
+++ b/drivers/net/mlx5/hws/mlx5dr_action.c
@@ -26,7 +26,8 @@ static const uint32_t action_order_arr[MLX5DR_TABLE_TYPE_MAX][MLX5DR_ACTION_TYP_
 	BIT(MLX5DR_ACTION_TYP_REFORMAT_TRAILER),
 	BIT(MLX5DR_ACTION_TYP_REMOVE_HEADER) |
 	BIT(MLX5DR_ACTION_TYP_REFORMAT_TNL_L2_TO_L2) |
-	BIT(MLX5DR_ACTION_TYP_REFORMAT_TNL_L3_TO_L2),
+	BIT(MLX5DR_ACTION_TYP_REFORMAT_TNL_L3_TO_L2) |
+	BIT(MLX5DR_ACTION_TYP_POP_IPV6_ROUTE_EXT),
 	BIT(MLX5DR_ACTION_TYP_POP_VLAN),
 	BIT(MLX5DR_ACTION_TYP_POP_VLAN),
 	BIT(MLX5DR_ACTION_TYP_CTR),
@@ -39,6 +40,7 @@ static const uint32_t action_order_arr[MLX5DR_TABLE_TYPE_MAX][MLX5DR_ACTION_TYP_
 	BIT(MLX5DR_ACTION_TYP_PUSH_VLAN),
 	BIT(MLX5DR_ACTION_TYP_MODIFY_HDR),
 	BIT(MLX5DR_ACTION_TYP_INSERT_HEADER) |
+	BIT(MLX5DR_ACTION_TYP_PUSH_IPV6_ROUTE_EXT) |
 	BIT(MLX5DR_ACTION_TYP_REFORMAT_L2_TO_TNL_L2) |
 	BIT(MLX5DR_ACTION_TYP_REFORMAT_L2_TO_TNL_L3),
 	BIT(MLX5DR_ACTION_TYP_TBL) |
@@ -63,6 +65,7 @@ static const uint32_t action_order_arr[MLX5DR_TABLE_TYPE_MAX][MLX5DR_ACTION_TYP_
 	BIT(MLX5DR_ACTION_TYP_PUSH_VLAN),
 	BIT(MLX5DR_ACTION_TYP_MODIFY_HDR),
 	BIT(MLX5DR_ACTION_TYP_INSERT_HEADER) |
+	BIT(MLX5DR_ACTION_TYP_PUSH_IPV6_ROUTE_EXT) |
 	BIT(MLX5DR_ACTION_TYP_REFORMAT_L2_TO_TNL_L2) |
 	BIT(MLX5DR_ACTION_TYP_REFORMAT_L2_TO_TNL_L3),
 	BIT(MLX5DR_ACTION_TYP_REFORMAT_TRAILER),
@@ -77,7 +80,8 @@ static const uint32_t action_order_arr[MLX5DR_TABLE_TYPE_MAX][MLX5DR_ACTION_TYP_
 	BIT(MLX5DR_ACTION_TYP_REFORMAT_TRAILER),
 	BIT(MLX5DR_ACTION_TYP_REMOVE_HEADER) |
 	BIT(MLX5DR_ACTION_TYP_REFORMAT_TNL_L2_TO_L2) |
-	BIT(MLX5DR_ACTION_TYP_REFORMAT_TNL_L3_TO_L2),
+	BIT(MLX5DR_ACTION_TYP_REFORMAT_TNL_L3_TO_L2) |
+	BIT(MLX5DR_ACTION_TYP_POP_IPV6_ROUTE_EXT),
 	BIT(MLX5DR_ACTION_TYP_POP_VLAN),
 	BIT(MLX5DR_ACTION_TYP_POP_VLAN),
 	BIT(MLX5DR_ACTION_TYP_MODIFY_HDR),
@@ -91,6 +95,7 @@ static const uint32_t action_order_arr[MLX5DR_TABLE_TYPE_MAX][MLX5DR_ACTION_TYP_
 	BIT(MLX5DR_ACTION_TYP_PUSH_VLAN),
 	BIT(MLX5DR_ACTION_TYP_MODIFY_HDR),
 	BIT(MLX5DR_ACTION_TYP_INSERT_HEADER) |
+	BIT(MLX5DR_ACTION_TYP_PUSH_IPV6_ROUTE_EXT) |
 	BIT(MLX5DR_ACTION_TYP_REFORMAT_L2_TO_TNL_L2) |
 	BIT(MLX5DR_ACTION_TYP_REFORMAT_L2_TO_TNL_L3),
 	BIT(MLX5DR_ACTION_TYP_REFORMAT_TRAILER),
@@ -1746,7 +1751,7 @@ mlx5dr_action_create_reformat(struct mlx5dr_context *ctx,
 	if (!mlx5dr_action_is_hws_flags(flags) ||
	    ((flags & MLX5DR_ACTION_FLAG_SHARED) && (log_bulk_size || num_of_hdrs > 1))) {
-		DR_LOG(ERR, "Reformat flags don't fit HWS (flags: %x0x)", flags);
+		DR_LOG(ERR, "Reformat flags don't fit HWS (flags: 0x%x)", flags);
 		rte_errno = EINVAL;
 		goto free_action;
 	}
@@ -2489,6 +2494,347 @@ mlx5dr_tmp_action_create_dest_ipsec_dec(struct mlx5dr_context *ctx,
 	return NULL;
 }

+static void *
+mlx5dr_action_create_pop_ipv6_route_ext_mhdr1(struct mlx5dr_action *action)
+{
+	struct mlx5dr_action_mh_pattern pattern;
+	__be64 cmd[3] = {0};
+	uint16_t mod_id;
+
+	mod_id = flow_hw_get_ipv6_route_ext_mod_id_from_ctx(action->ctx, 0);
+	if (!mod_id) {
+		rte_errno = EINVAL;
+		return NULL;
+	}
+
+	/*
+	 * Backup ipv6_route_ext.next_hdr to ipv6_route_ext.seg_left.
+	 * Next_hdr will be copied to ipv6.protocol after pop done.
+	 */
+	MLX5_SET(copy_action_in, &cmd[0], action_type, MLX5_MODIFICATION_TYPE_COPY);
+	MLX5_SET(copy_action_in, &cmd[0], length, 8);
+	MLX5_SET(copy_action_in, &cmd[0], src_offset, 24);
+	MLX5_SET(copy_action_in, &cmd[0], src_field, mod_id);
+	MLX5_SET(copy_action_in, &cmd[0], dst_field, mod_id);
+
+	/* Add nop between the continuous same modify field id */
+	MLX5_SET(copy_action_in, &cmd[1], action_type, MLX5_MODIFICATION_TYPE_NOP);
+
+	/* Clear next_hdr for right checksum */
+	MLX5_SET(set_action_in, &cmd[2], action_type, MLX5_MODIFICATION_TYPE_SET);
+	MLX5_SET(set_action_in, &cmd[2], length, 8);
+	MLX5_SET(set_action_in, &cmd[2], offset, 24);
+	MLX5_SET(set_action_in, &cmd[2], field, mod_id);
+
+	pattern.data = cmd;
+	pattern.sz = sizeof(cmd);
+
+	return mlx5dr_action_create_modify_header(action->ctx, 1, &pattern,
+						  0, action->flags);
+}
+
+static void *
+mlx5dr_action_create_pop_ipv6_route_ext_mhdr2(struct mlx5dr_action *action)
+{
+	enum mlx5_modification_field field[MLX5_ST_SZ_DW(definer_hl_ipv6_addr)] = {
+		MLX5_MODI_OUT_DIPV6_127_96,
+		MLX5_MODI_OUT_DIPV6_95_64,
+		MLX5_MODI_OUT_DIPV6_63_32,
+		MLX5_MODI_OUT_DIPV6_31_0
+	};
+	struct mlx5dr_action_mh_pattern pattern;
+	__be64 cmd[5] = {0};
+	uint16_t mod_id;
+	uint32_t i;
+
+	/* Copy ipv6_route_ext[first_segment].dst_addr by flex parser to ipv6.dst_addr */
+	for (i = 0; i < MLX5_ST_SZ_DW(definer_hl_ipv6_addr); i++) {
+		mod_id = flow_hw_get_ipv6_route_ext_mod_id_from_ctx(action->ctx, i + 1);
+		if (!mod_id) {
+			rte_errno = EINVAL;
+			return NULL;
+		}
+
+		MLX5_SET(copy_action_in, &cmd[i], action_type, MLX5_MODIFICATION_TYPE_COPY);
+		MLX5_SET(copy_action_in, &cmd[i], dst_field, field[i]);
+		MLX5_SET(copy_action_in, &cmd[i], src_field, mod_id);
+	}
+
+	mod_id = flow_hw_get_ipv6_route_ext_mod_id_from_ctx(action->ctx, 0);
+	if (!mod_id) {
+		rte_errno = EINVAL;
+		return NULL;
+	}
+
+	/* Restore next_hdr from seg_left for flex parser identifying */
+	MLX5_SET(copy_action_in, &cmd[4], action_type,
MLX5_MODIFICATION_TYPE_COPY); + MLX5_SET(copy_action_in, &cmd[4], length, 8); + MLX5_SET(copy_action_in, &cmd[4], dst_offset, 24); + MLX5_SET(copy_action_in, &cmd[4], src_field, mod_id); + MLX5_SET(copy_action_in, &cmd[4], dst_field, mod_id); + + pattern.data = cmd; + pattern.sz = sizeof(cmd); + + return mlx5dr_action_create_modify_header(action->ctx, 1, &pattern, + 0, action->flags); +} + +static void * +mlx5dr_action_create_pop_ipv6_route_ext_mhdr3(struct mlx5dr_action *action) +{ + uint8_t cmd[MLX5DR_MODIFY_ACTION_SIZE] = {0}; + struct mlx5dr_action_mh_pattern pattern; + uint16_t mod_id; + + mod_id = flow_hw_get_ipv6_route_ext_mod_id_from_ctx(action->ctx, 0); + if (!mod_id) { + rte_errno = EINVAL; + return NULL; + } + + /* Copy ipv6_route_ext.next_hdr to ipv6.protocol */ + MLX5_SET(copy_action_in, cmd, action_type, MLX5_MODIFICATION_TYPE_COPY); + MLX5_SET(copy_action_in, cmd, length, 8); + MLX5_SET(copy_action_in, cmd, src_offset, 24); + MLX5_SET(copy_action_in, cmd, src_field, mod_id); + MLX5_SET(copy_action_in, cmd, dst_field, MLX5_MODI_OUT_IPV6_NEXT_HDR); + + pattern.data = (__be64 *)cmd; + pattern.sz = sizeof(cmd); + + return mlx5dr_action_create_modify_header(action->ctx, 1, &pattern, + 0, action->flags); +} + +static int +mlx5dr_action_create_pop_ipv6_route_ext(struct mlx5dr_action *action) +{ + uint8_t anchor_id = flow_hw_get_ipv6_route_ext_anchor_from_ctx(action->ctx); + struct mlx5dr_action_remove_header_attr hdr_attr; + uint32_t i; + + if (!anchor_id) { + rte_errno = EINVAL; + return rte_errno; + } + + action->ipv6_route_ext.action[0] = + mlx5dr_action_create_pop_ipv6_route_ext_mhdr1(action); + action->ipv6_route_ext.action[1] = + mlx5dr_action_create_pop_ipv6_route_ext_mhdr2(action); + action->ipv6_route_ext.action[2] = + mlx5dr_action_create_pop_ipv6_route_ext_mhdr3(action); + + hdr_attr.by_anchor.decap = 1; + hdr_attr.by_anchor.start_anchor = anchor_id; + hdr_attr.by_anchor.end_anchor = MLX5_HEADER_ANCHOR_TCP_UDP; + hdr_attr.type = 
MLX5DR_ACTION_REMOVE_HEADER_TYPE_BY_HEADER; + action->ipv6_route_ext.action[3] = + mlx5dr_action_create_remove_header(action->ctx, &hdr_attr, action->flags); + + if (!action->ipv6_route_ext.action[0] || !action->ipv6_route_ext.action[1] || + !action->ipv6_route_ext.action[2] || !action->ipv6_route_ext.action[3]) { + DR_LOG(ERR, "Failed to create ipv6_route_ext pop subaction"); + goto err; + } + + return 0; + +err: + for (i = 0; i < MLX5DR_ACTION_IPV6_EXT_MAX_SA; i++) + if (action->ipv6_route_ext.action[i]) + mlx5dr_action_destroy(action->ipv6_route_ext.action[i]); + + return rte_errno; +} + +static void * +mlx5dr_action_create_push_ipv6_route_ext_mhdr1(struct mlx5dr_action *action) +{ + uint8_t cmd[MLX5DR_MODIFY_ACTION_SIZE] = {0}; + struct mlx5dr_action_mh_pattern pattern; + + /* Set ipv6.protocol to IPPROTO_ROUTING */ + MLX5_SET(set_action_in, cmd, action_type, MLX5_MODIFICATION_TYPE_SET); + MLX5_SET(set_action_in, cmd, length, 8); + MLX5_SET(set_action_in, cmd, field, MLX5_MODI_OUT_IPV6_NEXT_HDR); + MLX5_SET(set_action_in, cmd, data, IPPROTO_ROUTING); + + pattern.data = (__be64 *)cmd; + pattern.sz = sizeof(cmd); + + return mlx5dr_action_create_modify_header(action->ctx, 1, &pattern, 0, + action->flags | MLX5DR_ACTION_FLAG_SHARED); +} + +static void * +mlx5dr_action_create_push_ipv6_route_ext_mhdr2(struct mlx5dr_action *action, + uint32_t bulk_size, + uint8_t *data) +{ + enum mlx5_modification_field field[MLX5_ST_SZ_DW(definer_hl_ipv6_addr)] = { + MLX5_MODI_OUT_DIPV6_127_96, + MLX5_MODI_OUT_DIPV6_95_64, + MLX5_MODI_OUT_DIPV6_63_32, + MLX5_MODI_OUT_DIPV6_31_0 + }; + struct mlx5dr_action_mh_pattern pattern; + uint8_t seg_left, next_hdr; + uint32_t *ipv6_dst_addr; + __be64 cmd[5] = {0}; + uint16_t mod_id; + uint32_t i; + + /* Fetch the last IPv6 address in the segment list */ + if (action->flags & MLX5DR_ACTION_FLAG_SHARED) { + seg_left = MLX5_GET(header_ipv6_routing_ext, data, segments_left) - 1; + ipv6_dst_addr = (uint32_t *)data + 
MLX5_ST_SZ_DW(header_ipv6_routing_ext) + + seg_left * MLX5_ST_SZ_DW(definer_hl_ipv6_addr); + } + + /* Copy IPv6 destination address from ipv6_route_ext.last_segment */ + for (i = 0; i < MLX5_ST_SZ_DW(definer_hl_ipv6_addr); i++) { + MLX5_SET(set_action_in, &cmd[i], action_type, MLX5_MODIFICATION_TYPE_SET); + MLX5_SET(set_action_in, &cmd[i], field, field[i]); + if (action->flags & MLX5DR_ACTION_FLAG_SHARED) + MLX5_SET(set_action_in, &cmd[i], data, be32toh(*ipv6_dst_addr++)); + } + + mod_id = flow_hw_get_ipv6_route_ext_mod_id_from_ctx(action->ctx, 0); + if (!mod_id) { + rte_errno = EINVAL; + return NULL; + } + + /* Set ipv6_route_ext.next_hdr since initially pushed as 0 for right checksum */ + MLX5_SET(set_action_in, &cmd[4], action_type, MLX5_MODIFICATION_TYPE_SET); + MLX5_SET(set_action_in, &cmd[4], length, 8); + MLX5_SET(set_action_in, &cmd[4], offset, 24); + MLX5_SET(set_action_in, &cmd[4], field, mod_id); + if (action->flags & MLX5DR_ACTION_FLAG_SHARED) { + next_hdr = MLX5_GET(header_ipv6_routing_ext, data, next_hdr); + MLX5_SET(set_action_in, &cmd[4], data, next_hdr); + } + + pattern.data = cmd; + pattern.sz = sizeof(cmd); + + return mlx5dr_action_create_modify_header(action->ctx, 1, &pattern, + bulk_size, action->flags); +} + +static int +mlx5dr_action_create_push_ipv6_route_ext(struct mlx5dr_action *action, + struct mlx5dr_action_reformat_header *hdr, + uint32_t bulk_size) +{ + struct mlx5dr_action_insert_header insert_hdr = {{0}}; + uint8_t header[MLX5_PUSH_MAX_LEN]; + uint32_t i; + + if (!hdr || !hdr->sz || hdr->sz > MLX5_PUSH_MAX_LEN || + ((action->flags & MLX5DR_ACTION_FLAG_SHARED) && !hdr->data)) { + DR_LOG(ERR, "Invalid ipv6_route_ext header"); + rte_errno = EINVAL; + return rte_errno; + } + + if (action->flags & MLX5DR_ACTION_FLAG_SHARED) { + memcpy(header, hdr->data, hdr->sz); + /* Clear ipv6_route_ext.next_hdr for right checksum */ + MLX5_SET(header_ipv6_routing_ext, header, next_hdr, 0); + } + + insert_hdr.anchor = MLX5_HEADER_ANCHOR_TCP_UDP; + 
insert_hdr.encap = 1; + insert_hdr.hdr.sz = hdr->sz; + insert_hdr.hdr.data = header; + action->ipv6_route_ext.action[0] = + mlx5dr_action_create_insert_header(action->ctx, 1, &insert_hdr, + bulk_size, action->flags); + action->ipv6_route_ext.action[1] = + mlx5dr_action_create_push_ipv6_route_ext_mhdr1(action); + action->ipv6_route_ext.action[2] = + mlx5dr_action_create_push_ipv6_route_ext_mhdr2(action, bulk_size, hdr->data); + + if (!action->ipv6_route_ext.action[0] || + !action->ipv6_route_ext.action[1] || + !action->ipv6_route_ext.action[2]) { + DR_LOG(ERR, "Failed to create ipv6_route_ext push subaction"); + goto err; + } + + return 0; + +err: + for (i = 0; i < MLX5DR_ACTION_IPV6_EXT_MAX_SA; i++) + if (action->ipv6_route_ext.action[i]) + mlx5dr_action_destroy(action->ipv6_route_ext.action[i]); + + return rte_errno; +} + +struct mlx5dr_action * +mlx5dr_action_create_reformat_ipv6_ext(struct mlx5dr_context *ctx, + enum mlx5dr_action_type action_type, + struct mlx5dr_action_reformat_header *hdr, + uint32_t log_bulk_size, + uint32_t flags) +{ + struct mlx5dr_action *action; + int ret; + + if (mlx5dr_context_cap_dynamic_reparse(ctx)) { + DR_LOG(ERR, "IPv6 extension actions is not supported"); + rte_errno = ENOTSUP; + return NULL; + } + + if (!mlx5dr_action_is_hws_flags(flags) || + ((flags & MLX5DR_ACTION_FLAG_SHARED) && log_bulk_size)) { + DR_LOG(ERR, "IPv6 extension flags don't fit HWS (flags: 0x%x)", flags); + rte_errno = EINVAL; + return NULL; + } + + action = mlx5dr_action_create_generic(ctx, flags, action_type); + if (!action) { + rte_errno = ENOMEM; + return NULL; + } + + switch (action_type) { + case MLX5DR_ACTION_TYP_POP_IPV6_ROUTE_EXT: + if (!(flags & MLX5DR_ACTION_FLAG_SHARED)) { + DR_LOG(ERR, "Pop ipv6_route_ext must be shared"); + rte_errno = EINVAL; + goto free_action; + } + + ret = mlx5dr_action_create_pop_ipv6_route_ext(action); + break; + case MLX5DR_ACTION_TYP_PUSH_IPV6_ROUTE_EXT: + ret = mlx5dr_action_create_push_ipv6_route_ext(action, hdr, 
log_bulk_size); + break; + default: + DR_LOG(ERR, "Unsupported action type %d\n", action_type); + rte_errno = ENOTSUP; + goto free_action; + } + + if (ret) { + DR_LOG(ERR, "Failed to create IPv6 extension reformat action"); + goto free_action; + } + + return action; + +free_action: + simple_free(action); + return NULL; +} + static void mlx5dr_action_destroy_hws(struct mlx5dr_action *action) { struct mlx5dr_devx_obj *obj = NULL; @@ -2566,6 +2912,12 @@ static void mlx5dr_action_destroy_hws(struct mlx5dr_action *action) mlx5dr_action_destroy_stcs(action); mlx5dr_tmp_action_dest_ipsec_destroy_ste_arr(action->dest_ipsec.action_ste); break; + case MLX5DR_ACTION_TYP_PUSH_IPV6_ROUTE_EXT: + case MLX5DR_ACTION_TYP_POP_IPV6_ROUTE_EXT: + for (i = 0; i < MLX5DR_ACTION_IPV6_EXT_MAX_SA; i++) + if (action->ipv6_route_ext.action[i]) + mlx5dr_action_destroy(action->ipv6_route_ext.action[i]); + break; } } diff --git a/drivers/net/mlx5/hws/mlx5dr_action.h b/drivers/net/mlx5/hws/mlx5dr_action.h index 3a35a23603..a9d463370c 100644 --- a/drivers/net/mlx5/hws/mlx5dr_action.h +++ b/drivers/net/mlx5/hws/mlx5dr_action.h @@ -8,6 +8,9 @@ /* Max number of STEs needed for a rule (including match) */ #define MLX5DR_ACTION_MAX_STE 10 +/* Max number of internal subactions of ipv6_ext */ +#define MLX5DR_ACTION_IPV6_EXT_MAX_SA 4 + enum mlx5dr_action_stc_idx { MLX5DR_ACTION_STC_IDX_CTRL = 0, MLX5DR_ACTION_STC_IDX_HIT = 1, @@ -143,6 +146,10 @@ struct mlx5dr_action { uint8_t offset; bool encap; } reformat; + struct { + struct mlx5dr_action + *action[MLX5DR_ACTION_IPV6_EXT_MAX_SA]; + } ipv6_route_ext; struct { struct mlx5dr_devx_obj *devx_obj; uint8_t return_reg_id; diff --git a/drivers/net/mlx5/hws/mlx5dr_debug.c b/drivers/net/mlx5/hws/mlx5dr_debug.c index 4df1c41f9d..1e11de3b6d 100644 --- a/drivers/net/mlx5/hws/mlx5dr_debug.c +++ b/drivers/net/mlx5/hws/mlx5dr_debug.c @@ -32,6 +32,8 @@ const char *mlx5dr_debug_action_type_str[] = { [MLX5DR_ACTION_TYP_INSERT_HEADER] = "INSERT_HEADER", 
[MLX5DR_ACTION_TYP_REMOVE_HEADER] = "REMOVE_HEADER", [MLX5DR_ACTION_TYP_DEST_IPSEC_DECRYPT] = "DEST_IPSEC_DECRYPT", + [MLX5DR_ACTION_TYP_POP_IPV6_ROUTE_EXT] = "POP_IPV6_ROUTE_EXT", + [MLX5DR_ACTION_TYP_PUSH_IPV6_ROUTE_EXT] = "PUSH_IPV6_ROUTE_EXT", }; static_assert(ARRAY_SIZE(mlx5dr_debug_action_type_str) == MLX5DR_ACTION_TYP_MAX, diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h index d5c9252b00..ee4b19e5e5 100644 --- a/drivers/net/mlx5/mlx5_flow.h +++ b/drivers/net/mlx5/mlx5_flow.h @@ -647,6 +647,7 @@ struct mlx5_flow_dv_matcher { struct mlx5_flow_dv_match_params mask; /**< Matcher mask. */ }; +#define MLX5_PUSH_MAX_LEN 128 #define MLX5_ENCAP_MAX_LEN 132 /* Encap/decap resource structure. */ @@ -3080,6 +3081,39 @@ flow_hw_get_srh_flex_parser_byte_off_from_ctx(void *dr_ctx) return UINT32_MAX; } +static __rte_always_inline uint8_t +flow_hw_get_ipv6_route_ext_anchor_from_ctx(void *dr_ctx) +{ + uint16_t port; + struct mlx5_priv *priv; + + MLX5_ETH_FOREACH_DEV(port, NULL) { + priv = rte_eth_devices[port].data->dev_private; + if (priv->dr_ctx == dr_ctx) + return priv->sh->srh_flex_parser.flex.devx_fp->anchor_id; + } + return 0; +} + +static __rte_always_inline uint16_t +flow_hw_get_ipv6_route_ext_mod_id_from_ctx(void *dr_ctx, uint8_t idx) +{ + uint16_t port; + struct mlx5_priv *priv; + struct mlx5_flex_parser_devx *fp; + + if (idx >= MLX5_GRAPH_NODE_SAMPLE_NUM || idx >= MLX5_SRV6_SAMPLE_NUM) + return 0; + MLX5_ETH_FOREACH_DEV(port, NULL) { + priv = rte_eth_devices[port].data->dev_private; + if (priv->dr_ctx == dr_ctx) { + fp = priv->sh->srh_flex_parser.flex.devx_fp; + return fp->sample_info[idx].modify_field_id; + } + } + return 0; +} + struct mlx5_list_entry * flow_dv_sft_create_cb(void *tool_ctx __rte_unused, void *cb_ctx); int From patchwork Thu Sep 28 02:33:39 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Rongwei Liu X-Patchwork-Id: 132071 X-Patchwork-Delegate: 
rasland@nvidia.com
From: Rongwei Liu To: , , , , , Subject: [PATCH v1 4/5] net/mlx5/hws: add setter for IPv6 routing push remove Date: Thu, 28 Sep 2023 05:33:39 +0300 Message-ID: <20230928023341.1239731-5-rongweil@nvidia.com> X-Mailer: git-send-email 2.27.0 In-Reply-To: <20230928023341.1239731-1-rongweil@nvidia.com> References: <20230928023341.1239731-1-rongweil@nvidia.com> MIME-Version: 1.0
The rte action will be translated to multiple dr_actions, which need different setters to program them. In order to leverage the existing setter logic, a new callback called fetch_opt is introduced, taking a unique parameter. Each setter may have different reparsing properties: a setter which requires no reparse can't share a slot with one that has reparse enabled, even if there is spare space.
Signed-off-by: Rongwei Liu --- drivers/net/mlx5/hws/mlx5dr_action.c | 174 +++++++++++++++++++++++++++ drivers/net/mlx5/hws/mlx5dr_action.h | 3 +- 2 files changed, 176 insertions(+), 1 deletion(-) diff --git a/drivers/net/mlx5/hws/mlx5dr_action.c b/drivers/net/mlx5/hws/mlx5dr_action.c index 68fbd34620..aa616cc4ab 100644 --- a/drivers/net/mlx5/hws/mlx5dr_action.c +++ b/drivers/net/mlx5/hws/mlx5dr_action.c @@ -3422,6 +3422,121 @@ mlx5dr_action_setter_reformat_trailer(struct mlx5dr_actions_apply_data *apply, apply->wqe_data[MLX5DR_ACTION_OFFSET_DW7] = 0; } +static void +mlx5dr_action_setter_ipv6_route_ext_gen_push_mhdr(uint8_t *data, void *mh_data) +{ + uint8_t *action_ptr = mh_data; + uint32_t *ipv6_dst_addr; + uint8_t seg_left; + uint32_t i; + + /* Fetch the last IPv6 address in the segment list which is the next hop */ + seg_left = MLX5_GET(header_ipv6_routing_ext, data, segments_left) - 1; + ipv6_dst_addr = (uint32_t *)data + MLX5_ST_SZ_DW(header_ipv6_routing_ext) + + seg_left * MLX5_ST_SZ_DW(definer_hl_ipv6_addr); + + /* Load next hop IPv6 address in reverse order to ipv6.dst_address */ + for (i = 0; i < MLX5_ST_SZ_DW(definer_hl_ipv6_addr); i++) { + MLX5_SET(set_action_in, action_ptr, data, be32toh(*ipv6_dst_addr++)); + action_ptr += MLX5DR_MODIFY_ACTION_SIZE; + } + + /* Set ipv6_route_ext.next_hdr per user input */ + MLX5_SET(set_action_in, action_ptr, data, *data); +} + +static void +mlx5dr_action_setter_ipv6_route_ext_mhdr(struct mlx5dr_actions_apply_data *apply, + struct mlx5dr_actions_wqe_setter *setter) +{ + struct mlx5dr_rule_action *rule_action = apply->rule_action; + struct mlx5dr_actions_wqe_setter tmp_setter = {0}; + struct mlx5dr_rule_action tmp_rule_action; + __be64 cmd[MLX5_SRV6_SAMPLE_NUM] = {0}; + struct mlx5dr_action *ipv6_ext_action; + uint8_t *header; + + header = rule_action[setter->idx_double].ipv6_ext.header; + ipv6_ext_action = rule_action[setter->idx_double].action; + tmp_rule_action.action = 
ipv6_ext_action->ipv6_route_ext.action[setter->extra_data]; + + if (tmp_rule_action.action->flags & MLX5DR_ACTION_FLAG_SHARED) { + tmp_rule_action.modify_header.offset = 0; + tmp_rule_action.modify_header.pattern_idx = 0; + tmp_rule_action.modify_header.data = NULL; + } else { + /* + * Copy ipv6_dst from ipv6_route_ext.last_seg. + * Set ipv6_route_ext.next_hdr. + */ + mlx5dr_action_setter_ipv6_route_ext_gen_push_mhdr(header, cmd); + tmp_rule_action.modify_header.data = (uint8_t *)cmd; + tmp_rule_action.modify_header.pattern_idx = 0; + tmp_rule_action.modify_header.offset = + rule_action[setter->idx_double].ipv6_ext.offset; + } + + apply->rule_action = &tmp_rule_action; + + /* Reuse regular */ + mlx5dr_action_setter_modify_header(apply, &tmp_setter); + + /* Swap rule actions from backup */ + apply->rule_action = rule_action; +} + +static void +mlx5dr_action_setter_ipv6_route_ext_insert_ptr(struct mlx5dr_actions_apply_data *apply, + struct mlx5dr_actions_wqe_setter *setter) +{ + struct mlx5dr_rule_action *rule_action = apply->rule_action; + struct mlx5dr_actions_wqe_setter tmp_setter = {0}; + struct mlx5dr_rule_action tmp_rule_action; + struct mlx5dr_action *ipv6_ext_action; + uint8_t header[MLX5_PUSH_MAX_LEN]; + + ipv6_ext_action = rule_action[setter->idx_double].action; + tmp_rule_action.action = ipv6_ext_action->ipv6_route_ext.action[setter->extra_data]; + + if (tmp_rule_action.action->flags & MLX5DR_ACTION_FLAG_SHARED) { + tmp_rule_action.reformat.offset = 0; + tmp_rule_action.reformat.hdr_idx = 0; + tmp_rule_action.reformat.data = NULL; + } else { + memcpy(header, rule_action[setter->idx_double].ipv6_ext.header, + tmp_rule_action.action->reformat.header_size); + /* Clear ipv6_route_ext.next_hdr for right checksum */ + MLX5_SET(header_ipv6_routing_ext, header, next_hdr, 0); + tmp_rule_action.reformat.data = header; + tmp_rule_action.reformat.hdr_idx = 0; + tmp_rule_action.reformat.offset = + rule_action[setter->idx_double].ipv6_ext.offset; + } + + 
apply->rule_action = &tmp_rule_action; + + /* Reuse regular */ + mlx5dr_action_setter_insert_ptr(apply, &tmp_setter); + + /* Swap rule actions from backup */ + apply->rule_action = rule_action; +} + +static void +mlx5dr_action_setter_ipv6_route_ext_pop(struct mlx5dr_actions_apply_data *apply, + struct mlx5dr_actions_wqe_setter *setter) +{ + struct mlx5dr_rule_action *rule_action = &apply->rule_action[setter->idx_single]; + uint8_t idx = MLX5DR_ACTION_IPV6_EXT_MAX_SA - 1; + struct mlx5dr_action *action; + + /* Pop the ipv6_route_ext as set_single logic */ + action = rule_action->action->ipv6_route_ext.action[idx]; + apply->wqe_data[MLX5DR_ACTION_OFFSET_DW5] = 0; + apply->wqe_ctrl->stc_ix[MLX5DR_ACTION_STC_IDX_DW5] = + htobe32(action->stc[apply->tbl_type].offset); +} + int mlx5dr_action_template_process(struct mlx5dr_action_template *at) { struct mlx5dr_actions_wqe_setter *start_setter = at->setters + 1; @@ -3485,6 +3600,65 @@ int mlx5dr_action_template_process(struct mlx5dr_action_template *at) setter->idx_double = i; break; + case MLX5DR_ACTION_TYP_POP_IPV6_ROUTE_EXT: + /* + * Backup ipv6_route_ext.next_hdr to ipv6_route_ext.seg_left. + * Set ipv6_route_ext.next_hdr to 0 for checksum bug. + */ + setter = mlx5dr_action_setter_find_first(last_setter, ASF_DOUBLE | ASF_REMOVE); + setter->flags |= ASF_DOUBLE | ASF_MODIFY; + setter->set_double = &mlx5dr_action_setter_ipv6_route_ext_mhdr; + setter->idx_double = i; + setter->extra_data = 0; + setter++; + + /* + * Restore ipv6_route_ext.next_hdr from ipv6_route_ext.seg_left. + * Load the final destination address from flex parser sample 1->4. 
+ */ + setter->flags |= ASF_DOUBLE | ASF_MODIFY; + setter->set_double = &mlx5dr_action_setter_ipv6_route_ext_mhdr; + setter->idx_double = i; + setter->extra_data = 1; + setter++; + + /* Set the ipv6.protocol per ipv6_route_ext.next_hdr */ + setter->flags |= ASF_DOUBLE | ASF_MODIFY; + setter->set_double = &mlx5dr_action_setter_ipv6_route_ext_mhdr; + setter->idx_double = i; + setter->extra_data = 2; + /* Pop ipv6_route_ext */ + setter->flags |= ASF_SINGLE1 | ASF_REMOVE; + setter->set_single = &mlx5dr_action_setter_ipv6_route_ext_pop; + setter->idx_single = i; + break; + + case MLX5DR_ACTION_TYP_PUSH_IPV6_ROUTE_EXT: + /* Insert ipv6_route_ext with next_hdr as 0 due to checksum bug */ + setter = mlx5dr_action_setter_find_first(last_setter, ASF_DOUBLE | ASF_REMOVE); + setter->flags |= ASF_DOUBLE | ASF_INSERT; + setter->set_double = &mlx5dr_action_setter_ipv6_route_ext_insert_ptr; + setter->idx_double = i; + setter->extra_data = 0; + setter++; + + /* Set ipv6.protocol as IPPROTO_ROUTING: 0x2b */ + setter->flags |= ASF_DOUBLE | ASF_MODIFY; + setter->set_double = &mlx5dr_action_setter_ipv6_route_ext_mhdr; + setter->idx_double = i; + setter->extra_data = 1; + setter++; + + /* + * Load the right ipv6_route_ext.next_hdr per user input buffer. + * Load the next dest_addr from the ipv6_route_ext.seg_list[last]. 
+ */ + setter->flags |= ASF_DOUBLE | ASF_MODIFY; + setter->set_double = &mlx5dr_action_setter_ipv6_route_ext_mhdr; + setter->idx_double = i; + setter->extra_data = 2; + break; + case MLX5DR_ACTION_TYP_MODIFY_HDR: /* Double modify header list */ setter = mlx5dr_action_setter_find_first(last_setter, ASF_DOUBLE | ASF_REMOVE); diff --git a/drivers/net/mlx5/hws/mlx5dr_action.h b/drivers/net/mlx5/hws/mlx5dr_action.h index a9d463370c..005b43fa07 100644 --- a/drivers/net/mlx5/hws/mlx5dr_action.h +++ b/drivers/net/mlx5/hws/mlx5dr_action.h @@ -6,7 +6,7 @@ #define MLX5DR_ACTION_H_ /* Max number of STEs needed for a rule (including match) */ -#define MLX5DR_ACTION_MAX_STE 10 +#define MLX5DR_ACTION_MAX_STE 20 /* Max number of internal subactions of ipv6_ext */ #define MLX5DR_ACTION_IPV6_EXT_MAX_SA 4 @@ -109,6 +109,7 @@ struct mlx5dr_actions_wqe_setter { uint8_t idx_ctr; uint8_t idx_hit; uint8_t flags; + uint8_t extra_data; }; struct mlx5dr_action_template { From patchwork Thu Sep 28 02:33:40 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Rongwei Liu X-Patchwork-Id: 132072 X-Patchwork-Delegate: rasland@nvidia.com
From: Rongwei Liu
CC: Ferruh Yigit
Subject: [PATCH v1 5/5] net/mlx5: implement IPv6 routing push remove
Date: Thu, 28 Sep 2023 05:33:40 +0300
Message-ID: <20230928023341.1239731-6-rongweil@nvidia.com>
In-Reply-To: <20230928023341.1239731-1-rongweil@nvidia.com>
References: <20230928023341.1239731-1-rongweil@nvidia.com>
List-Id: DPDK patches and discussions

Reserve a push data buffer for each job; the maximum length is set to 128 bytes for now. Only type IPPROTO_ROUTING is supported when translating the rte_flow action. Remove actions must be shared globally and support only TCP or UDP as the next layer.
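The constraints above (IPPROTO_ROUTING only, 128-byte cap) describe the buffer an application hands to the IPv6 extension push action. A minimal sketch of building such a routing extension header as that data buffer — the field layout follows RFC 8754; `PUSH_MAX_LEN`, `struct srh`, and `build_srh` are illustrative assumptions, not driver code:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define PUSH_MAX_LEN 128 /* assumed per-job push buffer cap from this patch */

/* Minimal IPv6 Segment Routing Header (RFC 8754): the bytes an
 * application would pass as the push action's data buffer. */
struct srh {
	uint8_t next_hdr;      /* protocol after the extension, e.g. TCP (6) */
	uint8_t hdr_ext_len;   /* length in 8-byte units, excluding first 8 */
	uint8_t routing_type;  /* 4 = segment routing */
	uint8_t segments_left;
	uint8_t last_entry;
	uint8_t flags;
	uint16_t tag;
	uint8_t segments[][16]; /* IPv6 segment addresses */
};

/* Build an SRH with nsegs segments into buf; return total byte length,
 * or 0 if it would not fit the assumed push buffer. */
static size_t build_srh(uint8_t *buf, uint8_t next_hdr,
			uint8_t segs[][16], uint8_t nsegs)
{
	size_t len = 8 + (size_t)nsegs * 16;
	struct srh *h;

	if (!nsegs || len > PUSH_MAX_LEN)
		return 0;
	h = (struct srh *)buf;
	memset(h, 0, len);
	h->next_hdr = next_hdr;
	h->hdr_ext_len = (uint8_t)((len - 8) / 8);
	h->routing_type = 4;
	h->segments_left = nsegs;
	h->last_entry = (uint8_t)(nsegs - 1);
	memcpy(h->segments, segs, (size_t)nsegs * 16);
	return len;
}
```

The commit message's "next layer as TCP or UDP" maps to the `next_hdr` argument here; the driver itself only validates the type and size.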
Signed-off-by: Rongwei Liu
---
 doc/guides/nics/features/default.ini |   2 +
 doc/guides/nics/features/mlx5.ini    |   2 +
 doc/guides/nics/mlx5.rst             |  11 +-
 drivers/net/mlx5/mlx5.h              |   1 +
 drivers/net/mlx5/mlx5_flow.h         |  21 +-
 drivers/net/mlx5/mlx5_flow_hw.c      | 283 ++++++++++++++++++++++++++-
 6 files changed, 310 insertions(+), 10 deletions(-)

diff --git a/doc/guides/nics/features/default.ini b/doc/guides/nics/features/default.ini
index 137825edbc..7f25456e2b 100644
--- a/doc/guides/nics/features/default.ini
+++ b/doc/guides/nics/features/default.ini
@@ -160,6 +160,8 @@ drop                 =
 flag                 =
 inc_tcp_ack          =
 inc_tcp_seq          =
+ipv6_ext_push        =
+ipv6_ext_remove      =
 jump                 =
 mac_swap             =
 mark                 =
diff --git a/doc/guides/nics/features/mlx5.ini b/doc/guides/nics/features/mlx5.ini
index 19af54d4ba..2fe6e719d4 100644
--- a/doc/guides/nics/features/mlx5.ini
+++ b/doc/guides/nics/features/mlx5.ini
@@ -105,6 +105,8 @@ drop                 = Y
 flag                 = Y
 inc_tcp_ack          = Y
 inc_tcp_seq          = Y
+ipv6_ext_push        = Y
+ipv6_ext_remove      = Y
 jump                 = Y
 mark                 = Y
 meter                = Y
diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
index c668f80916..da293e31b1 100644
--- a/doc/guides/nics/mlx5.rst
+++ b/doc/guides/nics/mlx5.rst
@@ -112,6 +112,7 @@ Features
 - Modify flex item field.
 - Matching on random value.
 - Send to kernel.
+- Push or remove IPv6 routing extension.
 
 Limitations
 -----------
@@ -721,7 +722,15 @@ Limitations
   - Supports on non-root table.
   - Supports on isolated mode.
   - In HW steering (``dv_flow_en`` = 2):
-    - not supported on guest port.
+    - not supported on guest port.
+
+- IPv6 routing extension push or remove:
+
+  - Supported only with HW Steering enabled (``dv_flow_en`` = 2).
+  - Supported in non-zero group (No limits on transfer domain if `fdb_def_rule_en` = 1 which is default).
+  - Only supports TCP or UDP as next layer.
+  - IPv6 routing header must be the only present extension.
+  - Not supported on guest port.
Statistics ---------- diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h index 951feb5ac4..67a7bad5d5 100644 --- a/drivers/net/mlx5/mlx5.h +++ b/drivers/net/mlx5/mlx5.h @@ -410,6 +410,7 @@ struct mlx5_hw_q_job { }; void *user_data; /* Job user data. */ uint8_t *encap_data; /* Encap data. */ + uint8_t *push_data; /* IPv6 routing push data. */ struct mlx5_modification_cmd *mhdr_cmd; struct rte_flow_item *items; union { diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h index ee4b19e5e5..7038f34d68 100644 --- a/drivers/net/mlx5/mlx5_flow.h +++ b/drivers/net/mlx5/mlx5_flow.h @@ -414,6 +414,8 @@ enum mlx5_feature_name { #define MLX5_FLOW_ACTION_INDIRECT_IPSEC (1ull << 47) #define MLX5_FLOW_ACTION_SFT (1ull << 48) #define MLX5_FLOW_ACTION_PORT_REPRESENTOR (1ull << 49) +#define MLX5_FLOW_ACTION_IPV6_ROUTING_REMOVE (1ull << 50) +#define MLX5_FLOW_ACTION_IPV6_ROUTING_PUSH (1ull << 51) #define MLX5_FLOW_DROP_INCLUSIVE_ACTIONS \ (MLX5_FLOW_ACTION_COUNT | MLX5_FLOW_ACTION_SAMPLE | MLX5_FLOW_ACTION_AGE) @@ -1340,6 +1342,8 @@ typedef int const struct rte_flow_action *, struct mlx5dr_rule_action *); +#define MLX5_MHDR_MAX_CMD ((MLX5_MAX_MODIFY_NUM) * 2 + 1) + /* rte flow action translate to DR action struct. */ struct mlx5_action_construct_data { LIST_ENTRY(mlx5_action_construct_data) next; @@ -1386,6 +1390,10 @@ struct mlx5_action_construct_data { struct { cnt_id_t id; } shared_counter; + struct { + /* IPv6 extension push data len. */ + uint16_t len; + } ipv6_ext; struct { uint32_t id; uint32_t conf_masked:1; @@ -1443,6 +1451,7 @@ struct rte_flow_actions_template { uint16_t *src_off; /* RTE action displacement from app. template */ uint16_t reformat_off; /* Offset of DR reformat action. */ uint16_t mhdr_off; /* Offset of DR modify header action. */ + uint16_t recom_off; /* Offset of DR IPv6 routing push remove action. */ uint32_t refcnt; /* Reference counter. */ uint8_t flex_item; /* flex item index. 
*/ }; @@ -1468,7 +1477,14 @@ struct mlx5_hw_encap_decap_action { uint8_t data[]; /* Action data. */ }; -#define MLX5_MHDR_MAX_CMD ((MLX5_MAX_MODIFY_NUM) * 2 + 1) +/* Push remove action struct. */ +struct mlx5_hw_push_remove_action { + struct mlx5dr_action *action; /* Action object. */ + /* Is push_remove action shared across flows in table. */ + uint8_t shared; + size_t data_size; /* Action metadata size. */ + uint8_t data[]; /* Action data. */ +}; /* Modify field action struct. */ struct mlx5_hw_modify_header_action { @@ -1499,6 +1515,9 @@ struct mlx5_hw_actions { /* Encap/Decap action. */ struct mlx5_hw_encap_decap_action *encap_decap; uint16_t encap_decap_pos; /* Encap/Decap action position. */ + /* Push/remove action. */ + struct mlx5_hw_push_remove_action *push_remove; + uint16_t push_remove_pos; /* Push/remove action position. */ uint32_t mark:1; /* Indicate the mark action. */ cnt_id_t cnt_id; /* Counter id. */ uint32_t mtr_id; /* Meter id. */ diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c index cc1925a6b2..b4fc879929 100644 --- a/drivers/net/mlx5/mlx5_flow_hw.c +++ b/drivers/net/mlx5/mlx5_flow_hw.c @@ -641,6 +641,12 @@ __flow_hw_action_template_destroy(struct rte_eth_dev *dev, mlx5_free(acts->encap_decap); acts->encap_decap = NULL; } + if (acts->push_remove) { + if (acts->push_remove->action) + mlx5dr_action_destroy(acts->push_remove->action); + mlx5_free(acts->push_remove); + acts->push_remove = NULL; + } if (acts->mhdr) { flow_hw_template_destroy_mhdr_action(acts->mhdr); mlx5_free(acts->mhdr); @@ -778,6 +784,44 @@ __flow_hw_act_data_encap_append(struct mlx5_priv *priv, return 0; } +/** + * Append dynamic push action to the dynamic action list. + * + * @param[in] dev + * Pointer to the port. + * @param[in] acts + * Pointer to the template HW steering DR actions. + * @param[in] type + * Action type. + * @param[in] action_src + * Offset of source rte flow action. + * @param[in] action_dst + * Offset of destination DR action. 
+ * @param[in] len
+ *   Length of the data to be updated.
+ *
+ * @return
+ *   Data pointer on success, NULL otherwise and rte_errno is set.
+ */
+static __rte_always_inline void *
+__flow_hw_act_data_push_append(struct rte_eth_dev *dev,
+			       struct mlx5_hw_actions *acts,
+			       enum rte_flow_action_type type,
+			       uint16_t action_src,
+			       uint16_t action_dst,
+			       uint16_t len)
+{
+	struct mlx5_action_construct_data *act_data;
+	struct mlx5_priv *priv = dev->data->dev_private;
+
+	act_data = __flow_hw_act_data_alloc(priv, type, action_src, action_dst);
+	if (!act_data)
+		return NULL;
+	act_data->ipv6_ext.len = len;
+	LIST_INSERT_HEAD(&acts->act_list, act_data, next);
+	return act_data;
+}
+
 static __rte_always_inline int
 __flow_hw_act_data_hdr_modify_append(struct mlx5_priv *priv,
 				     struct mlx5_hw_actions *acts,
@@ -1951,6 +1995,82 @@ mlx5_tbl_translate_modify_header(struct rte_eth_dev *dev,
 	return 0;
 }
 
+
+static int
+mlx5_create_ipv6_ext_reformat(struct rte_eth_dev *dev,
+			      const struct mlx5_flow_template_table_cfg *cfg,
+			      struct mlx5_hw_actions *acts,
+			      struct rte_flow_actions_template *at,
+			      uint8_t *push_data, uint8_t *push_data_m,
+			      size_t push_size, uint16_t recom_src,
+			      enum mlx5dr_action_type recom_type)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	const struct rte_flow_template_table_attr *table_attr = &cfg->attr;
+	const struct rte_flow_attr *attr = &table_attr->flow_attr;
+	enum mlx5dr_table_type type = get_mlx5dr_table_type(attr);
+	struct mlx5_action_construct_data *act_data;
+	struct mlx5dr_action_reformat_header hdr = {0};
+	uint32_t flag, bulk = 0;
+
+	flag = mlx5_hw_act_flag[!!attr->group][type];
+	acts->push_remove = mlx5_malloc(MLX5_MEM_ZERO,
+					sizeof(*acts->push_remove) + push_size,
+					0, SOCKET_ID_ANY);
+	if (!acts->push_remove)
+		return -ENOMEM;
+
+	switch (recom_type) {
+	case MLX5DR_ACTION_TYP_PUSH_IPV6_ROUTE_EXT:
+		if (!push_data || !push_size)
+			goto err1;
+		if (!push_data_m) {
+			bulk = rte_log2_u32(table_attr->nb_flows);
+		} else {
+			flag |= MLX5DR_ACTION_FLAG_SHARED;
+			acts->push_remove->shared = 1;
+		}
+		acts->push_remove->data_size = push_size;
+		memcpy(acts->push_remove->data, push_data, push_size);
+		hdr.data = push_data;
+		hdr.sz = push_size;
+		break;
+	case MLX5DR_ACTION_TYP_POP_IPV6_ROUTE_EXT:
+		flag |= MLX5DR_ACTION_FLAG_SHARED;
+		acts->push_remove->shared = 1;
+		break;
+	default:
+		break;
+	}
+
+	acts->push_remove->action =
+		mlx5dr_action_create_reformat_ipv6_ext(priv->dr_ctx,
+						       recom_type, &hdr, bulk, flag);
+	if (!acts->push_remove->action)
+		goto err1;
+	acts->rule_acts[at->recom_off].action = acts->push_remove->action;
+	acts->rule_acts[at->recom_off].ipv6_ext.header = acts->push_remove->data;
+	acts->rule_acts[at->recom_off].ipv6_ext.offset = 0;
+	acts->push_remove_pos = at->recom_off;
+	if (!acts->push_remove->shared) {
+		act_data = __flow_hw_act_data_push_append(dev, acts,
+				RTE_FLOW_ACTION_TYPE_IPV6_EXT_PUSH,
+				recom_src, at->recom_off, push_size);
+		if (!act_data)
+			goto err;
+	}
+	return 0;
+err:
+	if (acts->push_remove->action)
+		mlx5dr_action_destroy(acts->push_remove->action);
+err1:
+	if (acts->push_remove) {
+		mlx5_free(acts->push_remove);
+		acts->push_remove = NULL;
+	}
+	return -EINVAL;
+}
+
 /**
  * Translate rte_flow actions to DR action.
* @@ -1984,19 +2104,24 @@ __flow_hw_actions_translate(struct rte_eth_dev *dev, { struct mlx5_priv *priv = dev->data->dev_private; const struct rte_flow_template_table_attr *table_attr = &cfg->attr; + struct mlx5_hca_flex_attr *hca_attr = &priv->sh->cdev->config.hca_attr.flex; const struct rte_flow_attr *attr = &table_attr->flow_attr; struct rte_flow_action *actions = at->actions; struct rte_flow_action *masks = at->masks; enum mlx5dr_action_type refmt_type = MLX5DR_ACTION_TYP_LAST; + enum mlx5dr_action_type recom_type = MLX5DR_ACTION_TYP_LAST; const struct rte_flow_action_raw_encap *raw_encap_data; + const struct rte_flow_action_ipv6_ext_push *ipv6_ext_data; const struct rte_flow_item *enc_item = NULL, *enc_item_m = NULL; - uint16_t reformat_src = 0; + uint16_t reformat_src = 0, recom_src = 0; uint8_t *encap_data = NULL, *encap_data_m = NULL; - size_t data_size = 0; + uint8_t *push_data = NULL, *push_data_m = NULL; + size_t data_size = 0, push_size = 0; struct mlx5_hw_modify_header_action mhdr = { 0 }; bool actions_end = false; uint32_t type; bool reformat_used = false; + bool recom_used = false; unsigned int of_vlan_offset; uint16_t jump_pos; uint32_t ct_idx; @@ -2204,6 +2329,36 @@ __flow_hw_actions_translate(struct rte_eth_dev *dev, reformat_used = true; refmt_type = MLX5DR_ACTION_TYP_REFORMAT_TNL_L2_TO_L2; break; + case RTE_FLOW_ACTION_TYPE_IPV6_EXT_PUSH: + if (!hca_attr->query_match_sample_info || !hca_attr->parse_graph_anchor || + !priv->sh->srh_flex_parser.flex.mapnum) { + DRV_LOG(ERR, "SRv6 anchor is not supported."); + goto err; + } + MLX5_ASSERT(!recom_used && !recom_type); + recom_used = true; + recom_type = MLX5DR_ACTION_TYP_PUSH_IPV6_ROUTE_EXT; + ipv6_ext_data = + (const struct rte_flow_action_ipv6_ext_push *)masks->conf; + if (ipv6_ext_data) + push_data_m = ipv6_ext_data->data; + ipv6_ext_data = + (const struct rte_flow_action_ipv6_ext_push *)actions->conf; + if (ipv6_ext_data) { + push_data = ipv6_ext_data->data; + push_size = ipv6_ext_data->size; + } 
+ recom_src = src_pos; + break; + case RTE_FLOW_ACTION_TYPE_IPV6_EXT_REMOVE: + if (!hca_attr->query_match_sample_info || !hca_attr->parse_graph_anchor || + !priv->sh->srh_flex_parser.flex.mapnum) { + DRV_LOG(ERR, "SRv6 anchor is not supported."); + goto err; + } + recom_used = true; + recom_type = MLX5DR_ACTION_TYP_POP_IPV6_ROUTE_EXT; + break; case RTE_FLOW_ACTION_TYPE_SEND_TO_KERNEL: flow_hw_translate_group(dev, cfg, attr->group, &target_grp, error); @@ -2360,6 +2515,14 @@ __flow_hw_actions_translate(struct rte_eth_dev *dev, if (ret) goto err; } + if (recom_used) { + MLX5_ASSERT(at->recom_off != UINT16_MAX); + ret = mlx5_create_ipv6_ext_reformat(dev, cfg, acts, at, push_data, + push_data_m, push_size, recom_src, + recom_type); + if (ret) + goto err; + } return 0; err: err = rte_errno; @@ -2765,11 +2928,13 @@ flow_hw_actions_construct(struct rte_eth_dev *dev, const struct mlx5_hw_actions *hw_acts = &hw_at->acts; const struct rte_flow_action *action; const struct rte_flow_action_raw_encap *raw_encap_data; + const struct rte_flow_action_ipv6_ext_push *ipv6_push; const struct rte_flow_item *enc_item = NULL; const struct rte_flow_action_ethdev *port_action = NULL; const struct rte_flow_action_meter *meter = NULL; const struct rte_flow_action_age *age = NULL; uint8_t *buf = job->encap_data; + uint8_t *push_buf = job->push_data; struct rte_flow_attr attr = { .ingress = 1, }; @@ -2902,6 +3067,13 @@ flow_hw_actions_construct(struct rte_eth_dev *dev, MLX5_ASSERT(raw_encap_data->size == act_data->encap.len); break; + case RTE_FLOW_ACTION_TYPE_IPV6_EXT_PUSH: + ipv6_push = + (const struct rte_flow_action_ipv6_ext_push *)action->conf; + rte_memcpy((void *)push_buf, ipv6_push->data, + act_data->ipv6_ext.len); + MLX5_ASSERT(ipv6_push->size == act_data->ipv6_ext.len); + break; case RTE_FLOW_ACTION_TYPE_MODIFY_FIELD: if (action->type == RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_VID) ret = flow_hw_set_vlan_vid_construct(dev, job, @@ -3058,6 +3230,11 @@ flow_hw_actions_construct(struct 
rte_eth_dev *dev, job->flow->res_idx - 1; rule_acts[hw_acts->encap_decap_pos].reformat.data = buf; } + if (hw_acts->push_remove && !hw_acts->push_remove->shared) { + rule_acts[hw_acts->push_remove_pos].ipv6_ext.offset = + job->flow->res_idx - 1; + rule_acts[hw_acts->push_remove_pos].ipv6_ext.header = push_buf; + } if (mlx5_hws_cnt_id_valid(hw_acts->cnt_id)) job->flow->cnt_id = hw_acts->cnt_id; return 0; @@ -5271,6 +5448,38 @@ flow_hw_validate_action_indirect(struct rte_eth_dev *dev, return 0; } +/** + * Validate ipv6_ext_push action. + * + * @param[in] dev + * Pointer to rte_eth_dev structure. + * @param[in] action + * Pointer to the indirect action. + * @param[out] error + * Pointer to error structure. + * + * @return + * 0 on success, a negative errno value otherwise and rte_errno is set. + */ +static int +flow_hw_validate_action_ipv6_ext_push(struct rte_eth_dev *dev __rte_unused, + const struct rte_flow_action *action, + struct rte_flow_error *error) +{ + const struct rte_flow_action_ipv6_ext_push *raw_push_data = action->conf; + + if (!raw_push_data || !raw_push_data->size || !raw_push_data->data) + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, action, + "invalid ipv6_ext_push data"); + if (raw_push_data->type != IPPROTO_ROUTING || + raw_push_data->size > MLX5_PUSH_MAX_LEN) + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, action, + "Unsupported ipv6_ext_push type or length"); + return 0; +} + /** * Validate raw_encap action. * @@ -5526,6 +5735,7 @@ mlx5_flow_hw_actions_validate(struct rte_eth_dev *dev, #endif uint16_t i; int ret; + const struct rte_flow_action_ipv6_ext_remove *remove_data; /* FDB actions are only valid to proxy port. 
*/ if (attr->transfer && (!priv->sh->config.dv_esw_en || !priv->master)) @@ -5622,6 +5832,21 @@ mlx5_flow_hw_actions_validate(struct rte_eth_dev *dev, /* TODO: Validation logic */ action_flags |= MLX5_FLOW_ACTION_DECAP; break; + case RTE_FLOW_ACTION_TYPE_IPV6_EXT_PUSH: + ret = flow_hw_validate_action_ipv6_ext_push(dev, action, error); + if (ret < 0) + return ret; + action_flags |= MLX5_FLOW_ACTION_IPV6_ROUTING_PUSH; + break; + case RTE_FLOW_ACTION_TYPE_IPV6_EXT_REMOVE: + remove_data = action->conf; + /* Remove action must be shared. */ + if (remove_data->type != IPPROTO_ROUTING || !mask) { + DRV_LOG(ERR, "Only supports shared IPv6 routing remove"); + return -EINVAL; + } + action_flags |= MLX5_FLOW_ACTION_IPV6_ROUTING_REMOVE; + break; case RTE_FLOW_ACTION_TYPE_METER: /* TODO: Validation logic */ action_flags |= MLX5_FLOW_ACTION_METER; @@ -5752,6 +5977,8 @@ static enum mlx5dr_action_type mlx5_hw_dr_action_types[] = { [RTE_FLOW_ACTION_TYPE_OF_POP_VLAN] = MLX5DR_ACTION_TYP_POP_VLAN, [RTE_FLOW_ACTION_TYPE_OF_PUSH_VLAN] = MLX5DR_ACTION_TYP_PUSH_VLAN, [RTE_FLOW_ACTION_TYPE_SEND_TO_KERNEL] = MLX5DR_ACTION_TYP_DEST_ROOT, + [RTE_FLOW_ACTION_TYPE_IPV6_EXT_PUSH] = MLX5DR_ACTION_TYP_PUSH_IPV6_ROUTE_EXT, + [RTE_FLOW_ACTION_TYPE_IPV6_EXT_REMOVE] = MLX5DR_ACTION_TYP_POP_IPV6_ROUTE_EXT, }; static inline void @@ -5852,6 +6079,8 @@ flow_hw_template_actions_list(struct rte_flow_actions_template *at, /** * Create DR action template based on a provided sequence of flow actions. * + * @param[in] dev + * Pointer to the rte_eth_dev structure. * @param[in] at * Pointer to flow actions template to be updated. * @@ -5860,7 +6089,8 @@ flow_hw_template_actions_list(struct rte_flow_actions_template *at, * NULL otherwise. 
*/ static struct mlx5dr_action_template * -flow_hw_dr_actions_template_create(struct rte_flow_actions_template *at) +flow_hw_dr_actions_template_create(struct rte_eth_dev *dev, + struct rte_flow_actions_template *at) { struct mlx5dr_action_template *dr_template; enum mlx5dr_action_type action_types[MLX5_HW_MAX_ACTS] = { MLX5DR_ACTION_TYP_LAST }; @@ -5869,8 +6099,11 @@ flow_hw_dr_actions_template_create(struct rte_flow_actions_template *at) enum mlx5dr_action_type reformat_act_type = MLX5DR_ACTION_TYP_REFORMAT_TNL_L2_TO_L2; uint16_t reformat_off = UINT16_MAX; uint16_t mhdr_off = UINT16_MAX; + uint16_t recom_off = UINT16_MAX; uint16_t cnt_off = UINT16_MAX; + enum mlx5dr_action_type recom_type = MLX5DR_ACTION_TYP_LAST; int ret; + for (i = 0, curr_off = 0; at->actions[i].type != RTE_FLOW_ACTION_TYPE_END; ++i) { const struct rte_flow_action_raw_encap *raw_encap_data; size_t data_size; @@ -5904,6 +6137,16 @@ flow_hw_dr_actions_template_create(struct rte_flow_actions_template *at) reformat_off = curr_off++; reformat_act_type = mlx5_hw_dr_action_types[at->actions[i].type]; break; + case RTE_FLOW_ACTION_TYPE_IPV6_EXT_PUSH: + MLX5_ASSERT(recom_off == UINT16_MAX); + recom_type = MLX5DR_ACTION_TYP_PUSH_IPV6_ROUTE_EXT; + recom_off = curr_off++; + break; + case RTE_FLOW_ACTION_TYPE_IPV6_EXT_REMOVE: + MLX5_ASSERT(recom_off == UINT16_MAX); + recom_type = MLX5DR_ACTION_TYP_POP_IPV6_ROUTE_EXT; + recom_off = curr_off++; + break; case RTE_FLOW_ACTION_TYPE_RAW_ENCAP: raw_encap_data = at->actions[i].conf; data_size = raw_encap_data->size; @@ -5980,11 +6223,25 @@ flow_hw_dr_actions_template_create(struct rte_flow_actions_template *at) at->reformat_off = reformat_off; action_types[reformat_off] = reformat_act_type; } + if (recom_off != UINT16_MAX) { + at->recom_off = recom_off; + action_types[recom_off] = recom_type; + } dr_template = mlx5dr_action_template_create(action_types); - if (dr_template) + if (dr_template) { at->dr_actions_num = curr_off; - else + } else { DRV_LOG(ERR, "Failed 
to create DR action template: %d", rte_errno); + return NULL; + } + /* Create srh flex parser for remove anchor. */ + if ((recom_type == MLX5DR_ACTION_TYP_POP_IPV6_ROUTE_EXT || + recom_type == MLX5DR_ACTION_TYP_PUSH_IPV6_ROUTE_EXT) && + mlx5_alloc_srh_flex_parser(dev)) { + DRV_LOG(ERR, "Failed to create srv6 flex parser"); + claim_zero(mlx5dr_action_template_destroy(dr_template)); + return NULL; + } return dr_template; err_actions_num: DRV_LOG(ERR, "Number of HW actions (%u) exceeded maximum (%u) allowed in template", @@ -6366,6 +6623,7 @@ flow_hw_actions_template_create(struct rte_eth_dev *dev, at->dr_off[i] = UINT16_MAX; at->reformat_off = UINT16_MAX; at->mhdr_off = UINT16_MAX; + at->recom_off = UINT16_MAX; for (i = 0; ra[i].type != RTE_FLOW_ACTION_TYPE_END; i++) { const struct rte_flow_action_modify_field *info; @@ -6393,7 +6651,7 @@ flow_hw_actions_template_create(struct rte_eth_dev *dev, break; } } - at->tmpl = flow_hw_dr_actions_template_create(at); + at->tmpl = flow_hw_dr_actions_template_create(dev, at); if (!at->tmpl) goto error; at->action_flags = action_flags; @@ -6430,6 +6688,9 @@ flow_hw_actions_template_destroy(struct rte_eth_dev *dev, struct rte_flow_actions_template *template, struct rte_flow_error *error __rte_unused) { + uint64_t flag = MLX5_FLOW_ACTION_IPV6_ROUTING_REMOVE | + MLX5_FLOW_ACTION_IPV6_ROUTING_PUSH; + if (__atomic_load_n(&template->refcnt, __ATOMIC_RELAXED) > 1) { DRV_LOG(WARNING, "Action template %p is still in use.", (void *)template); @@ -6438,6 +6699,8 @@ flow_hw_actions_template_destroy(struct rte_eth_dev *dev, NULL, "action template in using"); } + if (template->action_flags & flag) + mlx5_free_srh_flex_parser(dev); LIST_REMOVE(template, next); flow_hw_flex_item_release(dev, &template->flex_item); if (template->tmpl) @@ -9258,6 +9521,7 @@ flow_hw_configure(struct rte_eth_dev *dev, mem_size += (sizeof(struct mlx5_hw_q_job *) + sizeof(struct mlx5_hw_q_job) + sizeof(uint8_t) * MLX5_ENCAP_MAX_LEN + + sizeof(uint8_t) * 
MLX5_PUSH_MAX_LEN + sizeof(struct mlx5_modification_cmd) * MLX5_MHDR_MAX_CMD + sizeof(struct rte_flow_item) * @@ -9273,7 +9537,7 @@ flow_hw_configure(struct rte_eth_dev *dev, } for (i = 0; i < nb_q_updated; i++) { char mz_name[RTE_MEMZONE_NAMESIZE]; - uint8_t *encap = NULL; + uint8_t *encap = NULL, *push = NULL; struct mlx5_modification_cmd *mhdr_cmd = NULL; struct rte_flow_item *items = NULL; struct rte_flow_hw *upd_flow = NULL; @@ -9293,13 +9557,16 @@ flow_hw_configure(struct rte_eth_dev *dev, &job[_queue_attr[i]->size]; encap = (uint8_t *) &mhdr_cmd[_queue_attr[i]->size * MLX5_MHDR_MAX_CMD]; - items = (struct rte_flow_item *) + push = (uint8_t *) &encap[_queue_attr[i]->size * MLX5_ENCAP_MAX_LEN]; + items = (struct rte_flow_item *) + &push[_queue_attr[i]->size * MLX5_PUSH_MAX_LEN]; upd_flow = (struct rte_flow_hw *) &items[_queue_attr[i]->size * MLX5_HW_MAX_ITEMS]; for (j = 0; j < _queue_attr[i]->size; j++) { job[j].mhdr_cmd = &mhdr_cmd[j * MLX5_MHDR_MAX_CMD]; job[j].encap_data = &encap[j * MLX5_ENCAP_MAX_LEN]; + job[j].push_data = &push[j * MLX5_PUSH_MAX_LEN]; job[j].items = &items[j * MLX5_HW_MAX_ITEMS]; job[j].upd_flow = &upd_flow[j]; priv->hw_q[i].job[j] = &job[j];
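The `flow_hw_configure()` hunk above grows one contiguous per-queue allocation and carves it into per-job regions, inserting the new push region between the encap and item areas. A standalone sketch of that pointer arithmetic, under assumptions: the struct definitions and size macros here are stand-ins for the driver's `mlx5_modification_cmd`, `rte_flow_item`, and `MLX5_*` constants, not DPDK code:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <stdlib.h>

/* Illustrative stand-ins for the driver's constants and types. */
#define ENCAP_MAX_LEN 132
#define PUSH_MAX_LEN  128
#define MHDR_MAX_CMD  (16 * 2 + 1)
#define MAX_ITEMS     16

struct mod_cmd { uint64_t raw[2]; };                     /* ~ mlx5_modification_cmd */
struct flow_item { const void *spec, *mask; int type; }; /* ~ rte_flow_item */

struct job {
	struct mod_cmd *mhdr_cmd;
	uint8_t *encap_data;
	uint8_t *push_data; /* per-job IPv6 routing push region (new in this patch) */
	struct flow_item *items;
};

/* Total bytes needed for the data regions of one queue of qsize jobs. */
static size_t queue_mem_size(size_t qsize)
{
	return qsize * (sizeof(struct mod_cmd) * MHDR_MAX_CMD +
			ENCAP_MAX_LEN + PUSH_MAX_LEN +
			sizeof(struct flow_item) * MAX_ITEMS);
}

/* Carve one contiguous buffer into region arrays and wire each job's
 * pointers, mirroring the arithmetic in flow_hw_configure(). */
static void carve(uint8_t *base, size_t qsize, struct job *jobs)
{
	struct mod_cmd *mhdr = (struct mod_cmd *)base;
	uint8_t *encap = (uint8_t *)&mhdr[qsize * MHDR_MAX_CMD];
	uint8_t *push = &encap[qsize * ENCAP_MAX_LEN];
	struct flow_item *items = (struct flow_item *)&push[qsize * PUSH_MAX_LEN];
	size_t j;

	for (j = 0; j < qsize; j++) {
		jobs[j].mhdr_cmd = &mhdr[j * MHDR_MAX_CMD];
		jobs[j].encap_data = &encap[j * ENCAP_MAX_LEN];
		jobs[j].push_data = &push[j * PUSH_MAX_LEN];
		jobs[j].items = &items[j * MAX_ITEMS];
	}
}
```

The design point the patch relies on: because each region is sized `qsize * REGION_MAX`, job `j`'s slice never overlaps its neighbors, and inserting the push region only shifts the start of the item region.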