From patchwork Tue Oct 31 10:51:26 2023
X-Patchwork-Submitter: Rongwei Liu
X-Patchwork-Id: 133645
X-Patchwork-Delegate: rasland@nvidia.com
From: Rongwei Liu
Subject: [PATCH v3 1/6] net/mlx5: sample the srv6 last segment
Date: Tue, 31 Oct 2023 12:51:26 +0200
Message-ID: <20231031105131.441078-2-rongweil@nvidia.com>
In-Reply-To: <20231031105131.441078-1-rongweil@nvidia.com>
References: <20231031094244.381557-1-rongweil@nvidia.com>
 <20231031105131.441078-1-rongweil@nvidia.com>
List-Id: DPDK patches and discussions

When removing the IPv6 routing extension header from a packet, the
destination address should be updated to the last one in the segment
list. Enlarge the hardware sample scope to cover the last segment.

Signed-off-by: Rongwei Liu
Acked-by: Ori Kam
Acked-by: Suanming Mou
---
 drivers/net/mlx5/mlx5.c | 41 ++++++++++++++++++++++++++++++-----------
 drivers/net/mlx5/mlx5.h |  6 ++++++
 2 files changed, 36 insertions(+), 11 deletions(-)

diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index f929d6547c..92d66e8f23 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -1067,6 +1067,7 @@ mlx5_alloc_srh_flex_parser(struct rte_eth_dev *dev)
 	struct mlx5_devx_graph_node_attr node = {
 		.modify_field_select = 0,
 	};
+	uint32_t i;
 	uint32_t ids[MLX5_GRAPH_NODE_SAMPLE_NUM];
 	struct mlx5_priv *priv = dev->data->dev_private;
 	struct mlx5_common_dev_config *config = &priv->sh->cdev->config;
@@ -1100,10 +1101,18 @@ mlx5_alloc_srh_flex_parser(struct rte_eth_dev *dev)
 	node.next_header_field_size = 0x8;
 	node.in[0].arc_parse_graph_node = MLX5_GRAPH_ARC_NODE_IP;
 	node.in[0].compare_condition_value = IPPROTO_ROUTING;
-	node.sample[0].flow_match_sample_en = 1;
-	/* First come first serve no matter inner or outer. */
-	node.sample[0].flow_match_sample_tunnel_mode = MLX5_GRAPH_SAMPLE_TUNNEL_FIRST;
-	node.sample[0].flow_match_sample_offset_mode = MLX5_GRAPH_SAMPLE_OFFSET_FIXED;
+	/* Final IPv6 address. */
+	for (i = 0; i <= MLX5_SRV6_SAMPLE_NUM - 1 && i < MLX5_GRAPH_NODE_SAMPLE_NUM; i++) {
+		node.sample[i].flow_match_sample_en = 1;
+		node.sample[i].flow_match_sample_offset_mode =
+					MLX5_GRAPH_SAMPLE_OFFSET_FIXED;
+		/* First come first serve no matter inner or outer. */
+		node.sample[i].flow_match_sample_tunnel_mode =
+					MLX5_GRAPH_SAMPLE_TUNNEL_FIRST;
+		node.sample[i].flow_match_sample_field_base_offset =
+					(i + 1) * sizeof(uint32_t); /* in bytes */
+	}
+	node.sample[0].flow_match_sample_field_base_offset = 0;
 	node.out[0].arc_parse_graph_node = MLX5_GRAPH_ARC_NODE_TCP;
 	node.out[0].compare_condition_value = IPPROTO_TCP;
 	node.out[1].arc_parse_graph_node = MLX5_GRAPH_ARC_NODE_UDP;
@@ -1116,8 +1125,8 @@ mlx5_alloc_srh_flex_parser(struct rte_eth_dev *dev)
 		goto error;
 	}
 	priv->sh->srh_flex_parser.flex.devx_fp->devx_obj = fp;
-	priv->sh->srh_flex_parser.flex.mapnum = 1;
-	priv->sh->srh_flex_parser.flex.devx_fp->num_samples = 1;
+	priv->sh->srh_flex_parser.flex.mapnum = MLX5_SRV6_SAMPLE_NUM;
+	priv->sh->srh_flex_parser.flex.devx_fp->num_samples = MLX5_SRV6_SAMPLE_NUM;
 	ret = mlx5_devx_cmd_query_parse_samples(fp, ids, priv->sh->srh_flex_parser.flex.mapnum,
 						&priv->sh->srh_flex_parser.flex.devx_fp->anchor_id);
@@ -1125,12 +1134,22 @@ mlx5_alloc_srh_flex_parser(struct rte_eth_dev *dev)
 		DRV_LOG(ERR, "Failed to query sample IDs.");
 		goto error;
 	}
-	ret = mlx5_devx_cmd_match_sample_info_query(ibv_ctx, ids[0],
-			&priv->sh->srh_flex_parser.flex.devx_fp->sample_info[0]);
-	if (ret) {
-		DRV_LOG(ERR, "Failed to query sample id information.");
-		goto error;
+	for (i = 0; i <= MLX5_SRV6_SAMPLE_NUM - 1 && i < MLX5_GRAPH_NODE_SAMPLE_NUM; i++) {
+		ret = mlx5_devx_cmd_match_sample_info_query(ibv_ctx, ids[i],
+				&priv->sh->srh_flex_parser.flex.devx_fp->sample_info[i]);
+		if (ret) {
+			DRV_LOG(ERR, "Failed to query sample id %u information.", ids[i]);
+			goto error;
+		}
+	}
+	for (i = 0; i <= MLX5_SRV6_SAMPLE_NUM - 1 && i < MLX5_GRAPH_NODE_SAMPLE_NUM; i++) {
+		priv->sh->srh_flex_parser.flex.devx_fp->sample_ids[i] = ids[i];
+		priv->sh->srh_flex_parser.flex.map[i].width = sizeof(uint32_t) * CHAR_BIT;
+		priv->sh->srh_flex_parser.flex.map[i].reg_id = i;
+		priv->sh->srh_flex_parser.flex.map[i].shift =
+				(i + 1) * sizeof(uint32_t) * CHAR_BIT;
 	}
+	priv->sh->srh_flex_parser.flex.map[0].shift = 0;
 	return 0;
 error:
 	if (fp)
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index a20acb6ca8..f13a56ee9e 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -1335,6 +1335,7 @@ struct mlx5_flex_pattern_field {
 	uint16_t shift:5;
 	uint16_t reg_id:5;
 };
+
 #define MLX5_INVALID_SAMPLE_REG_ID 0x1F

 /* Port flex item context. */
@@ -1346,6 +1347,11 @@ struct mlx5_flex_item {
 	struct mlx5_flex_pattern_field map[MLX5_FLEX_ITEM_MAPPING_NUM];
 };

+/*
+ * Sample an IPv6 address and the first dword of SRv6 header.
+ * Then it is 16 + 4 = 20 bytes which is 5 dwords.
+ */
+#define MLX5_SRV6_SAMPLE_NUM 5
 /* Mlx5 internal flex parser profile structure.
  */
 struct mlx5_internal_flex_parser_profile {
 	uint32_t refcnt;

From patchwork Tue Oct 31 10:51:27 2023
X-Patchwork-Submitter: Rongwei Liu
X-Patchwork-Id: 133648
X-Patchwork-Delegate: rasland@nvidia.com
From: Rongwei Liu
CC: Alex Vesker
Subject: [PATCH v3 2/6] net/mlx5/hws: fix potential wrong errno value
Date: Tue, 31 Oct 2023 12:51:27 +0200
Message-ID: <20231031105131.441078-3-rongweil@nvidia.com>
In-Reply-To: <20231031105131.441078-1-rongweil@nvidia.com>
References: <20231031094244.381557-1-rongweil@nvidia.com>
 <20231031105131.441078-1-rongweil@nvidia.com>
A valid rte_errno is desired when a DR layer API returns an error, and
it must not overwrite the value already set by the lower layer.

Fixes: a318b3d54772 ("net/mlx5/hws: support insert header action")
Cc: hamdani@nvidia.com

Signed-off-by: Rongwei Liu
Reviewed-by: Alex Vesker
Acked-by: Ori Kam
---
 drivers/net/mlx5/hws/mlx5dr_action.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/net/mlx5/hws/mlx5dr_action.c b/drivers/net/mlx5/hws/mlx5dr_action.c
index 59be8ae2c5..76ca57d302 100644
--- a/drivers/net/mlx5/hws/mlx5dr_action.c
+++ b/drivers/net/mlx5/hws/mlx5dr_action.c
@@ -2262,6 +2262,7 @@ mlx5dr_action_create_insert_header(struct mlx5dr_context *ctx,

 	if (!num_of_hdrs) {
 		DR_LOG(ERR, "Reformat num_of_hdrs cannot be zero");
+		rte_errno = EINVAL;
 		return NULL;
 	}

@@ -2309,7 +2310,6 @@ mlx5dr_action_create_insert_header(struct mlx5dr_context *ctx,
 					     reformat_hdrs, log_bulk_size);
 	if (ret) {
 		DR_LOG(ERR, "Failed to create HWS reformat action");
-		rte_errno = EINVAL;
 		goto free_reformat_hdrs;
 	}

From patchwork Tue Oct 31 10:51:28 2023
X-Patchwork-Submitter: Rongwei Liu
X-Patchwork-Id: 133646
X-Patchwork-Delegate: rasland@nvidia.com
From: Rongwei Liu
CC: Alex Vesker
Subject: [PATCH v3 3/6] net/mlx5/hws: add IPv6 routing extension push remove actions
Date: Tue, 31 Oct 2023 12:51:28 +0200
Message-ID: <20231031105131.441078-4-rongweil@nvidia.com>
In-Reply-To: <20231031105131.441078-1-rongweil@nvidia.com>
References: <20231031094244.381557-1-rongweil@nvidia.com>
 <20231031105131.441078-1-rongweil@nvidia.com>
Add two dr_actions to implement IPv6 routing extension push and remove.
The new actions are combinations of multiple actions rather than new
action types: basically two modify-header actions plus one reformat
action. The action order is the same as for the encap and decap actions.
Signed-off-by: Rongwei Liu
Reviewed-by: Alex Vesker
Acked-by: Ori Kam
---
 drivers/common/mlx5/mlx5_prm.h       |   1 +
 drivers/net/mlx5/hws/mlx5dr.h        |  29 +++
 drivers/net/mlx5/hws/mlx5dr_action.c | 358 ++++++++++++++++++++++++++-
 drivers/net/mlx5/hws/mlx5dr_action.h |   7 +
 drivers/net/mlx5/hws/mlx5dr_debug.c  |   2 +
 drivers/net/mlx5/mlx5_flow.h         |  44 ++++
 6 files changed, 438 insertions(+), 3 deletions(-)

diff --git a/drivers/common/mlx5/mlx5_prm.h b/drivers/common/mlx5/mlx5_prm.h
index a5ecce98e9..32ec3df7ef 100644
--- a/drivers/common/mlx5/mlx5_prm.h
+++ b/drivers/common/mlx5/mlx5_prm.h
@@ -3586,6 +3586,7 @@ enum mlx5_ifc_header_anchors {
 	MLX5_HEADER_ANCHOR_PACKET_START = 0x0,
 	MLX5_HEADER_ANCHOR_FIRST_VLAN_START = 0x2,
 	MLX5_HEADER_ANCHOR_IPV6_IPV4 = 0x07,
+	MLX5_HEADER_ANCHOR_TCP_UDP = 0x09,
 	MLX5_HEADER_ANCHOR_INNER_MAC = 0x13,
 	MLX5_HEADER_ANCHOR_INNER_IPV6_IPV4 = 0x19,
 };
diff --git a/drivers/net/mlx5/hws/mlx5dr.h b/drivers/net/mlx5/hws/mlx5dr.h
index 2e692f76c3..9e7dd9c429 100644
--- a/drivers/net/mlx5/hws/mlx5dr.h
+++ b/drivers/net/mlx5/hws/mlx5dr.h
@@ -54,6 +54,8 @@ enum mlx5dr_action_type {
 	MLX5DR_ACTION_TYP_REMOVE_HEADER,
 	MLX5DR_ACTION_TYP_DEST_ROOT,
 	MLX5DR_ACTION_TYP_DEST_ARRAY,
+	MLX5DR_ACTION_TYP_POP_IPV6_ROUTE_EXT,
+	MLX5DR_ACTION_TYP_PUSH_IPV6_ROUTE_EXT,
 	MLX5DR_ACTION_TYP_MAX,
 };

@@ -278,6 +280,11 @@ struct mlx5dr_rule_action {
 			uint8_t *data;
 		} reformat;

+		struct {
+			uint32_t offset;
+			uint8_t *header;
+		} ipv6_ext;
+
 		struct {
 			rte_be32_t vlan_hdr;
 		} push_vlan;

@@ -889,6 +896,28 @@ mlx5dr_action_create_remove_header(struct mlx5dr_context *ctx,
 				   struct mlx5dr_action_remove_header_attr *attr,
 				   uint32_t flags);

+/* Create action to push or remove IPv6 extension header.
+ *
+ * @param[in] ctx
+ *	The context in which the new action will be created.
+ * @param[in] type
+ *	Type of direct rule action: MLX5DR_ACTION_TYP_PUSH_IPV6_ROUTE_EXT or
+ *	MLX5DR_ACTION_TYP_POP_IPV6_ROUTE_EXT.
+ * @param[in] hdr
+ *	Header for packet reformat.
+ * @param[in] log_bulk_size
+ *	Number of unique values used with this pattern.
+ * @param[in] flags
+ *	Action creation flags. (enum mlx5dr_action_flags)
+ * @return pointer to mlx5dr_action on success NULL otherwise.
+ */
+struct mlx5dr_action *
+mlx5dr_action_create_reformat_ipv6_ext(struct mlx5dr_context *ctx,
+				       enum mlx5dr_action_type type,
+				       struct mlx5dr_action_reformat_header *hdr,
+				       uint32_t log_bulk_size,
+				       uint32_t flags);
+
 /* Destroy direct rule action.
  *
  * @param[in] action
diff --git a/drivers/net/mlx5/hws/mlx5dr_action.c b/drivers/net/mlx5/hws/mlx5dr_action.c
index 76ca57d302..6ac3c2f782 100644
--- a/drivers/net/mlx5/hws/mlx5dr_action.c
+++ b/drivers/net/mlx5/hws/mlx5dr_action.c
@@ -26,7 +26,8 @@ static const uint32_t action_order_arr[MLX5DR_TABLE_TYPE_MAX][MLX5DR_ACTION_TYP_
 	BIT(MLX5DR_ACTION_TYP_REFORMAT_TRAILER),
 	BIT(MLX5DR_ACTION_TYP_REMOVE_HEADER) |
 	BIT(MLX5DR_ACTION_TYP_REFORMAT_TNL_L2_TO_L2) |
-	BIT(MLX5DR_ACTION_TYP_REFORMAT_TNL_L3_TO_L2),
+	BIT(MLX5DR_ACTION_TYP_REFORMAT_TNL_L3_TO_L2) |
+	BIT(MLX5DR_ACTION_TYP_POP_IPV6_ROUTE_EXT),
 	BIT(MLX5DR_ACTION_TYP_POP_VLAN),
 	BIT(MLX5DR_ACTION_TYP_POP_VLAN),
 	BIT(MLX5DR_ACTION_TYP_CTR),
@@ -39,6 +40,7 @@ static const uint32_t action_order_arr[MLX5DR_TABLE_TYPE_MAX][MLX5DR_ACTION_TYP_
 	BIT(MLX5DR_ACTION_TYP_PUSH_VLAN),
 	BIT(MLX5DR_ACTION_TYP_MODIFY_HDR),
 	BIT(MLX5DR_ACTION_TYP_INSERT_HEADER) |
+	BIT(MLX5DR_ACTION_TYP_PUSH_IPV6_ROUTE_EXT) |
 	BIT(MLX5DR_ACTION_TYP_REFORMAT_L2_TO_TNL_L2) |
 	BIT(MLX5DR_ACTION_TYP_REFORMAT_L2_TO_TNL_L3),
 	BIT(MLX5DR_ACTION_TYP_TBL) |
@@ -61,6 +63,7 @@ static const uint32_t action_order_arr[MLX5DR_TABLE_TYPE_MAX][MLX5DR_ACTION_TYP_
 	BIT(MLX5DR_ACTION_TYP_PUSH_VLAN),
 	BIT(MLX5DR_ACTION_TYP_MODIFY_HDR),
 	BIT(MLX5DR_ACTION_TYP_INSERT_HEADER) |
+	BIT(MLX5DR_ACTION_TYP_PUSH_IPV6_ROUTE_EXT) |
 	BIT(MLX5DR_ACTION_TYP_REFORMAT_L2_TO_TNL_L2) |
 	BIT(MLX5DR_ACTION_TYP_REFORMAT_L2_TO_TNL_L3),
 	BIT(MLX5DR_ACTION_TYP_REFORMAT_TRAILER),
@@ -75,7 +78,8 @@ static const uint32_t action_order_arr[MLX5DR_TABLE_TYPE_MAX][MLX5DR_ACTION_TYP_
 	BIT(MLX5DR_ACTION_TYP_REFORMAT_TRAILER),
 	BIT(MLX5DR_ACTION_TYP_REMOVE_HEADER) |
 	BIT(MLX5DR_ACTION_TYP_REFORMAT_TNL_L2_TO_L2) |
-	BIT(MLX5DR_ACTION_TYP_REFORMAT_TNL_L3_TO_L2),
+	BIT(MLX5DR_ACTION_TYP_REFORMAT_TNL_L3_TO_L2) |
+	BIT(MLX5DR_ACTION_TYP_POP_IPV6_ROUTE_EXT),
 	BIT(MLX5DR_ACTION_TYP_POP_VLAN),
 	BIT(MLX5DR_ACTION_TYP_POP_VLAN),
 	BIT(MLX5DR_ACTION_TYP_CTR),
@@ -88,6 +92,7 @@ static const uint32_t action_order_arr[MLX5DR_TABLE_TYPE_MAX][MLX5DR_ACTION_TYP_
 	BIT(MLX5DR_ACTION_TYP_PUSH_VLAN),
 	BIT(MLX5DR_ACTION_TYP_MODIFY_HDR),
 	BIT(MLX5DR_ACTION_TYP_INSERT_HEADER) |
+	BIT(MLX5DR_ACTION_TYP_PUSH_IPV6_ROUTE_EXT) |
 	BIT(MLX5DR_ACTION_TYP_REFORMAT_L2_TO_TNL_L2) |
 	BIT(MLX5DR_ACTION_TYP_REFORMAT_L2_TO_TNL_L3),
 	BIT(MLX5DR_ACTION_TYP_REFORMAT_TRAILER),
@@ -1710,7 +1715,7 @@ mlx5dr_action_create_reformat(struct mlx5dr_context *ctx,
 	if (!mlx5dr_action_is_hws_flags(flags) ||
 	    ((flags & MLX5DR_ACTION_FLAG_SHARED) && (log_bulk_size || num_of_hdrs > 1))) {
-		DR_LOG(ERR, "Reformat flags don't fit HWS (flags: %x0x)", flags);
+		DR_LOG(ERR, "Reformat flags don't fit HWS (flags: 0x%x)", flags);
 		rte_errno = EINVAL;
 		goto free_action;
 	}
@@ -2382,6 +2387,347 @@ mlx5dr_action_create_remove_header(struct mlx5dr_context *ctx,
 	return NULL;
 }

+static void *
+mlx5dr_action_create_pop_ipv6_route_ext_mhdr1(struct mlx5dr_action *action)
+{
+	struct mlx5dr_action_mh_pattern pattern;
+	__be64 cmd[3] = {0};
+	uint16_t mod_id;
+
+	mod_id = flow_hw_get_ipv6_route_ext_mod_id_from_ctx(action->ctx, 0);
+	if (!mod_id) {
+		rte_errno = EINVAL;
+		return NULL;
+	}
+
+	/*
+	 * Backup ipv6_route_ext.next_hdr to ipv6_route_ext.seg_left.
+	 * Next_hdr will be copied to ipv6.protocol after pop done.
+	 */
+	MLX5_SET(copy_action_in, &cmd[0], action_type, MLX5_MODIFICATION_TYPE_COPY);
+	MLX5_SET(copy_action_in, &cmd[0], length, 8);
+	MLX5_SET(copy_action_in, &cmd[0], src_offset, 24);
+	MLX5_SET(copy_action_in, &cmd[0], src_field, mod_id);
+	MLX5_SET(copy_action_in, &cmd[0], dst_field, mod_id);
+
+	/* Add nop between the continuous same modify field id */
+	MLX5_SET(copy_action_in, &cmd[1], action_type, MLX5_MODIFICATION_TYPE_NOP);
+
+	/* Clear next_hdr for right checksum */
+	MLX5_SET(set_action_in, &cmd[2], action_type, MLX5_MODIFICATION_TYPE_SET);
+	MLX5_SET(set_action_in, &cmd[2], length, 8);
+	MLX5_SET(set_action_in, &cmd[2], offset, 24);
+	MLX5_SET(set_action_in, &cmd[2], field, mod_id);
+
+	pattern.data = cmd;
+	pattern.sz = sizeof(cmd);
+
+	return mlx5dr_action_create_modify_header(action->ctx, 1, &pattern,
+						  0, action->flags);
+}
+
+static void *
+mlx5dr_action_create_pop_ipv6_route_ext_mhdr2(struct mlx5dr_action *action)
+{
+	enum mlx5_modification_field field[MLX5_ST_SZ_DW(definer_hl_ipv6_addr)] = {
+		MLX5_MODI_OUT_DIPV6_127_96,
+		MLX5_MODI_OUT_DIPV6_95_64,
+		MLX5_MODI_OUT_DIPV6_63_32,
+		MLX5_MODI_OUT_DIPV6_31_0
+	};
+	struct mlx5dr_action_mh_pattern pattern;
+	__be64 cmd[5] = {0};
+	uint16_t mod_id;
+	uint32_t i;
+
+	/* Copy ipv6_route_ext[first_segment].dst_addr by flex parser to ipv6.dst_addr */
+	for (i = 0; i < MLX5_ST_SZ_DW(definer_hl_ipv6_addr); i++) {
+		mod_id = flow_hw_get_ipv6_route_ext_mod_id_from_ctx(action->ctx, i + 1);
+		if (!mod_id) {
+			rte_errno = EINVAL;
+			return NULL;
+		}
+
+		MLX5_SET(copy_action_in, &cmd[i], action_type, MLX5_MODIFICATION_TYPE_COPY);
+		MLX5_SET(copy_action_in, &cmd[i], dst_field, field[i]);
+		MLX5_SET(copy_action_in, &cmd[i], src_field, mod_id);
+	}
+
+	mod_id = flow_hw_get_ipv6_route_ext_mod_id_from_ctx(action->ctx, 0);
+	if (!mod_id) {
+		rte_errno = EINVAL;
+		return NULL;
+	}
+
+	/* Restore next_hdr from seg_left for flex parser identifying */
+	MLX5_SET(copy_action_in, &cmd[4], action_type,
MLX5_MODIFICATION_TYPE_COPY); + MLX5_SET(copy_action_in, &cmd[4], length, 8); + MLX5_SET(copy_action_in, &cmd[4], dst_offset, 24); + MLX5_SET(copy_action_in, &cmd[4], src_field, mod_id); + MLX5_SET(copy_action_in, &cmd[4], dst_field, mod_id); + + pattern.data = cmd; + pattern.sz = sizeof(cmd); + + return mlx5dr_action_create_modify_header(action->ctx, 1, &pattern, + 0, action->flags); +} + +static void * +mlx5dr_action_create_pop_ipv6_route_ext_mhdr3(struct mlx5dr_action *action) +{ + uint8_t cmd[MLX5DR_MODIFY_ACTION_SIZE] = {0}; + struct mlx5dr_action_mh_pattern pattern; + uint16_t mod_id; + + mod_id = flow_hw_get_ipv6_route_ext_mod_id_from_ctx(action->ctx, 0); + if (!mod_id) { + rte_errno = EINVAL; + return NULL; + } + + /* Copy ipv6_route_ext.next_hdr to ipv6.protocol */ + MLX5_SET(copy_action_in, cmd, action_type, MLX5_MODIFICATION_TYPE_COPY); + MLX5_SET(copy_action_in, cmd, length, 8); + MLX5_SET(copy_action_in, cmd, src_offset, 24); + MLX5_SET(copy_action_in, cmd, src_field, mod_id); + MLX5_SET(copy_action_in, cmd, dst_field, MLX5_MODI_OUT_IPV6_NEXT_HDR); + + pattern.data = (__be64 *)cmd; + pattern.sz = sizeof(cmd); + + return mlx5dr_action_create_modify_header(action->ctx, 1, &pattern, + 0, action->flags); +} + +static int +mlx5dr_action_create_pop_ipv6_route_ext(struct mlx5dr_action *action) +{ + uint8_t anchor_id = flow_hw_get_ipv6_route_ext_anchor_from_ctx(action->ctx); + struct mlx5dr_action_remove_header_attr hdr_attr; + uint32_t i; + + if (!anchor_id) { + rte_errno = EINVAL; + return rte_errno; + } + + action->ipv6_route_ext.action[0] = + mlx5dr_action_create_pop_ipv6_route_ext_mhdr1(action); + action->ipv6_route_ext.action[1] = + mlx5dr_action_create_pop_ipv6_route_ext_mhdr2(action); + action->ipv6_route_ext.action[2] = + mlx5dr_action_create_pop_ipv6_route_ext_mhdr3(action); + + hdr_attr.by_anchor.decap = 1; + hdr_attr.by_anchor.start_anchor = anchor_id; + hdr_attr.by_anchor.end_anchor = MLX5_HEADER_ANCHOR_TCP_UDP; + hdr_attr.type = 
MLX5DR_ACTION_REMOVE_HEADER_TYPE_BY_HEADER; + action->ipv6_route_ext.action[3] = + mlx5dr_action_create_remove_header(action->ctx, &hdr_attr, action->flags); + + if (!action->ipv6_route_ext.action[0] || !action->ipv6_route_ext.action[1] || + !action->ipv6_route_ext.action[2] || !action->ipv6_route_ext.action[3]) { + DR_LOG(ERR, "Failed to create ipv6_route_ext pop subaction"); + goto err; + } + + return 0; + +err: + for (i = 0; i < MLX5DR_ACTION_IPV6_EXT_MAX_SA; i++) + if (action->ipv6_route_ext.action[i]) + mlx5dr_action_destroy(action->ipv6_route_ext.action[i]); + + return rte_errno; +} + +static void * +mlx5dr_action_create_push_ipv6_route_ext_mhdr1(struct mlx5dr_action *action) +{ + uint8_t cmd[MLX5DR_MODIFY_ACTION_SIZE] = {0}; + struct mlx5dr_action_mh_pattern pattern; + + /* Set ipv6.protocol to IPPROTO_ROUTING */ + MLX5_SET(set_action_in, cmd, action_type, MLX5_MODIFICATION_TYPE_SET); + MLX5_SET(set_action_in, cmd, length, 8); + MLX5_SET(set_action_in, cmd, field, MLX5_MODI_OUT_IPV6_NEXT_HDR); + MLX5_SET(set_action_in, cmd, data, IPPROTO_ROUTING); + + pattern.data = (__be64 *)cmd; + pattern.sz = sizeof(cmd); + + return mlx5dr_action_create_modify_header(action->ctx, 1, &pattern, 0, + action->flags | MLX5DR_ACTION_FLAG_SHARED); +} + +static void * +mlx5dr_action_create_push_ipv6_route_ext_mhdr2(struct mlx5dr_action *action, + uint32_t bulk_size, + uint8_t *data) +{ + enum mlx5_modification_field field[MLX5_ST_SZ_DW(definer_hl_ipv6_addr)] = { + MLX5_MODI_OUT_DIPV6_127_96, + MLX5_MODI_OUT_DIPV6_95_64, + MLX5_MODI_OUT_DIPV6_63_32, + MLX5_MODI_OUT_DIPV6_31_0 + }; + struct mlx5dr_action_mh_pattern pattern; + uint32_t *ipv6_dst_addr = NULL; + uint8_t seg_left, next_hdr; + __be64 cmd[5] = {0}; + uint16_t mod_id; + uint32_t i; + + /* Fetch the last IPv6 address in the segment list */ + if (action->flags & MLX5DR_ACTION_FLAG_SHARED) { + seg_left = MLX5_GET(header_ipv6_routing_ext, data, segments_left) - 1; + ipv6_dst_addr = (uint32_t *)data + 
MLX5_ST_SZ_DW(header_ipv6_routing_ext) + + seg_left * MLX5_ST_SZ_DW(definer_hl_ipv6_addr); + } + + /* Copy IPv6 destination address from ipv6_route_ext.last_segment */ + for (i = 0; i < MLX5_ST_SZ_DW(definer_hl_ipv6_addr); i++) { + MLX5_SET(set_action_in, &cmd[i], action_type, MLX5_MODIFICATION_TYPE_SET); + MLX5_SET(set_action_in, &cmd[i], field, field[i]); + if (action->flags & MLX5DR_ACTION_FLAG_SHARED) + MLX5_SET(set_action_in, &cmd[i], data, be32toh(*ipv6_dst_addr++)); + } + + mod_id = flow_hw_get_ipv6_route_ext_mod_id_from_ctx(action->ctx, 0); + if (!mod_id) { + rte_errno = EINVAL; + return NULL; + } + + /* Set ipv6_route_ext.next_hdr since initially pushed as 0 for right checksum */ + MLX5_SET(set_action_in, &cmd[4], action_type, MLX5_MODIFICATION_TYPE_SET); + MLX5_SET(set_action_in, &cmd[4], length, 8); + MLX5_SET(set_action_in, &cmd[4], offset, 24); + MLX5_SET(set_action_in, &cmd[4], field, mod_id); + if (action->flags & MLX5DR_ACTION_FLAG_SHARED) { + next_hdr = MLX5_GET(header_ipv6_routing_ext, data, next_hdr); + MLX5_SET(set_action_in, &cmd[4], data, next_hdr); + } + + pattern.data = cmd; + pattern.sz = sizeof(cmd); + + return mlx5dr_action_create_modify_header(action->ctx, 1, &pattern, + bulk_size, action->flags); +} + +static int +mlx5dr_action_create_push_ipv6_route_ext(struct mlx5dr_action *action, + struct mlx5dr_action_reformat_header *hdr, + uint32_t bulk_size) +{ + struct mlx5dr_action_insert_header insert_hdr = { {0} }; + uint8_t header[MLX5_PUSH_MAX_LEN]; + uint32_t i; + + if (!hdr || !hdr->sz || hdr->sz > MLX5_PUSH_MAX_LEN || + ((action->flags & MLX5DR_ACTION_FLAG_SHARED) && !hdr->data)) { + DR_LOG(ERR, "Invalid ipv6_route_ext header"); + rte_errno = EINVAL; + return rte_errno; + } + + if (action->flags & MLX5DR_ACTION_FLAG_SHARED) { + memcpy(header, hdr->data, hdr->sz); + /* Clear ipv6_route_ext.next_hdr for right checksum */ + MLX5_SET(header_ipv6_routing_ext, header, next_hdr, 0); + } + + insert_hdr.anchor = MLX5_HEADER_ANCHOR_TCP_UDP; + 
insert_hdr.encap = 1; + insert_hdr.hdr.sz = hdr->sz; + insert_hdr.hdr.data = header; + action->ipv6_route_ext.action[0] = + mlx5dr_action_create_insert_header(action->ctx, 1, &insert_hdr, + bulk_size, action->flags); + action->ipv6_route_ext.action[1] = + mlx5dr_action_create_push_ipv6_route_ext_mhdr1(action); + action->ipv6_route_ext.action[2] = + mlx5dr_action_create_push_ipv6_route_ext_mhdr2(action, bulk_size, hdr->data); + + if (!action->ipv6_route_ext.action[0] || + !action->ipv6_route_ext.action[1] || + !action->ipv6_route_ext.action[2]) { + DR_LOG(ERR, "Failed to create ipv6_route_ext push subaction"); + goto err; + } + + return 0; + +err: + for (i = 0; i < MLX5DR_ACTION_IPV6_EXT_MAX_SA; i++) + if (action->ipv6_route_ext.action[i]) + mlx5dr_action_destroy(action->ipv6_route_ext.action[i]); + + return rte_errno; +} + +struct mlx5dr_action * +mlx5dr_action_create_reformat_ipv6_ext(struct mlx5dr_context *ctx, + enum mlx5dr_action_type action_type, + struct mlx5dr_action_reformat_header *hdr, + uint32_t log_bulk_size, + uint32_t flags) +{ + struct mlx5dr_action *action; + int ret; + + if (mlx5dr_context_cap_dynamic_reparse(ctx)) { + DR_LOG(ERR, "IPv6 extension actions is not supported"); + rte_errno = ENOTSUP; + return NULL; + } + + if (!mlx5dr_action_is_hws_flags(flags) || + ((flags & MLX5DR_ACTION_FLAG_SHARED) && log_bulk_size)) { + DR_LOG(ERR, "IPv6 extension flags don't fit HWS (flags: 0x%x)", flags); + rte_errno = EINVAL; + return NULL; + } + + action = mlx5dr_action_create_generic(ctx, flags, action_type); + if (!action) { + rte_errno = ENOMEM; + return NULL; + } + + switch (action_type) { + case MLX5DR_ACTION_TYP_POP_IPV6_ROUTE_EXT: + if (!(flags & MLX5DR_ACTION_FLAG_SHARED)) { + DR_LOG(ERR, "Pop ipv6_route_ext must be shared"); + rte_errno = EINVAL; + goto free_action; + } + + ret = mlx5dr_action_create_pop_ipv6_route_ext(action); + break; + case MLX5DR_ACTION_TYP_PUSH_IPV6_ROUTE_EXT: + ret = mlx5dr_action_create_push_ipv6_route_ext(action, hdr, 
log_bulk_size); + break; + default: + DR_LOG(ERR, "Unsupported action type %d\n", action_type); + rte_errno = ENOTSUP; + goto free_action; + } + + if (ret) { + DR_LOG(ERR, "Failed to create IPv6 extension reformat action"); + goto free_action; + } + + return action; + +free_action: + simple_free(action); + return NULL; +} + static void mlx5dr_action_destroy_hws(struct mlx5dr_action *action) { struct mlx5dr_devx_obj *obj = NULL; @@ -2455,6 +2801,12 @@ static void mlx5dr_action_destroy_hws(struct mlx5dr_action *action) mlx5dr_action_destroy_stcs(&action[i]); mlx5dr_cmd_destroy_obj(action->reformat.arg_obj); break; + case MLX5DR_ACTION_TYP_PUSH_IPV6_ROUTE_EXT: + case MLX5DR_ACTION_TYP_POP_IPV6_ROUTE_EXT: + for (i = 0; i < MLX5DR_ACTION_IPV6_EXT_MAX_SA; i++) + if (action->ipv6_route_ext.action[i]) + mlx5dr_action_destroy(action->ipv6_route_ext.action[i]); + break; } } diff --git a/drivers/net/mlx5/hws/mlx5dr_action.h b/drivers/net/mlx5/hws/mlx5dr_action.h index e56f5b59c7..d0152dde3b 100644 --- a/drivers/net/mlx5/hws/mlx5dr_action.h +++ b/drivers/net/mlx5/hws/mlx5dr_action.h @@ -8,6 +8,9 @@ /* Max number of STEs needed for a rule (including match) */ #define MLX5DR_ACTION_MAX_STE 10 +/* Max number of internal subactions of ipv6_ext */ +#define MLX5DR_ACTION_IPV6_EXT_MAX_SA 4 + enum mlx5dr_action_stc_idx { MLX5DR_ACTION_STC_IDX_CTRL = 0, MLX5DR_ACTION_STC_IDX_HIT = 1, @@ -143,6 +146,10 @@ struct mlx5dr_action { uint8_t offset; bool encap; } reformat; + struct { + struct mlx5dr_action + *action[MLX5DR_ACTION_IPV6_EXT_MAX_SA]; + } ipv6_route_ext; struct { struct mlx5dr_devx_obj *devx_obj; uint8_t return_reg_id; diff --git a/drivers/net/mlx5/hws/mlx5dr_debug.c b/drivers/net/mlx5/hws/mlx5dr_debug.c index 5111f41648..1e5ef9cf67 100644 --- a/drivers/net/mlx5/hws/mlx5dr_debug.c +++ b/drivers/net/mlx5/hws/mlx5dr_debug.c @@ -31,6 +31,8 @@ const char *mlx5dr_debug_action_type_str[] = { [MLX5DR_ACTION_TYP_CRYPTO_DECRYPT] = "CRYPTO_DECRYPT", [MLX5DR_ACTION_TYP_INSERT_HEADER] = 
"INSERT_HEADER", [MLX5DR_ACTION_TYP_REMOVE_HEADER] = "REMOVE_HEADER", + [MLX5DR_ACTION_TYP_POP_IPV6_ROUTE_EXT] = "POP_IPV6_ROUTE_EXT", + [MLX5DR_ACTION_TYP_PUSH_IPV6_ROUTE_EXT] = "PUSH_IPV6_ROUTE_EXT", }; static_assert(ARRAY_SIZE(mlx5dr_debug_action_type_str) == MLX5DR_ACTION_TYP_MAX, diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h index e637c98b95..43608e15d2 100644 --- a/drivers/net/mlx5/mlx5_flow.h +++ b/drivers/net/mlx5/mlx5_flow.h @@ -595,6 +595,7 @@ struct mlx5_flow_dv_matcher { struct mlx5_flow_dv_match_params mask; /**< Matcher mask. */ }; +#define MLX5_PUSH_MAX_LEN 128 #define MLX5_ENCAP_MAX_LEN 132 /* Encap/decap resource structure. */ @@ -2898,6 +2899,49 @@ flow_hw_get_srh_flex_parser_byte_off_from_ctx(void *dr_ctx __rte_unused) #endif return UINT32_MAX; } + +static __rte_always_inline uint8_t +flow_hw_get_ipv6_route_ext_anchor_from_ctx(void *dr_ctx) +{ +#ifdef HAVE_IBV_FLOW_DV_SUPPORT + uint16_t port; + struct mlx5_priv *priv; + + MLX5_ETH_FOREACH_DEV(port, NULL) { + priv = rte_eth_devices[port].data->dev_private; + if (priv->dr_ctx == dr_ctx) + return priv->sh->srh_flex_parser.flex.devx_fp->anchor_id; + } +#else + RTE_SET_USED(dr_ctx); +#endif + return 0; +} + +static __rte_always_inline uint16_t +flow_hw_get_ipv6_route_ext_mod_id_from_ctx(void *dr_ctx, uint8_t idx) +{ +#ifdef HAVE_IBV_FLOW_DV_SUPPORT + uint16_t port; + struct mlx5_priv *priv; + struct mlx5_flex_parser_devx *fp; + + if (idx >= MLX5_GRAPH_NODE_SAMPLE_NUM || idx >= MLX5_SRV6_SAMPLE_NUM) + return 0; + MLX5_ETH_FOREACH_DEV(port, NULL) { + priv = rte_eth_devices[port].data->dev_private; + if (priv->dr_ctx == dr_ctx) { + fp = priv->sh->srh_flex_parser.flex.devx_fp; + return fp->sample_info[idx].modify_field_id; + } + } +#else + RTE_SET_USED(dr_ctx); + RTE_SET_USED(idx); +#endif + return 0; +} + void mlx5_indirect_list_handles_release(struct rte_eth_dev *dev); void From patchwork Tue Oct 31 10:51:29 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 
X-Patchwork-Submitter: Rongwei Liu X-Patchwork-Id: 133647 X-Patchwork-Delegate: rasland@nvidia.com
From: Rongwei Liu CC: Alex Vesker Subject: [PATCH v3 4/6] net/mlx5/hws: add setter for IPv6 routing push remove Date: Tue, 31 Oct 2023 12:51:29 +0200 Message-ID: <20231031105131.441078-5-rongweil@nvidia.com> In-Reply-To: <20231031105131.441078-1-rongweil@nvidia.com> References: <20231031094244.381557-1-rongweil@nvidia.com> <20231031105131.441078-1-rongweil@nvidia.com>
List-Id: DPDK patches and discussions The rte action will be translated to multiple dr_actions, which need different setters to program them. To leverage the existing setter logic, a new callback named fetch_opt is introduced, taking a unique parameter. Each setter may have different reparsing properties: a setter that requires no reparse cannot share a slot with one that has reparse enabled, even if there is spare space. Signed-off-by: Rongwei Liu Reviewed-by: Alex Vesker Acked-by: Ori Kam --- drivers/net/mlx5/hws/mlx5dr_action.c | 174 +++++++++++++++++++++++++++ drivers/net/mlx5/hws/mlx5dr_action.h | 3 +- 2 files changed, 176 insertions(+), 1 deletion(-) diff --git a/drivers/net/mlx5/hws/mlx5dr_action.c b/drivers/net/mlx5/hws/mlx5dr_action.c index 6ac3c2f782..281b09a582 100644 --- a/drivers/net/mlx5/hws/mlx5dr_action.c +++ b/drivers/net/mlx5/hws/mlx5dr_action.c @@ -3311,6 +3311,121 @@ mlx5dr_action_setter_reformat_trailer(struct mlx5dr_actions_apply_data *apply, apply->wqe_data[MLX5DR_ACTION_OFFSET_DW7] = 0; } +static void +mlx5dr_action_setter_ipv6_route_ext_gen_push_mhdr(uint8_t *data, void *mh_data) +{ + uint8_t *action_ptr = mh_data; + uint32_t *ipv6_dst_addr; + uint8_t seg_left; + uint32_t i; + + /* Fetch the last IPv6 address in the segment list which is the next hop */ + seg_left = MLX5_GET(header_ipv6_routing_ext, data, segments_left) - 1; + ipv6_dst_addr = (uint32_t *)data + MLX5_ST_SZ_DW(header_ipv6_routing_ext) + + seg_left * MLX5_ST_SZ_DW(definer_hl_ipv6_addr); + + /* Load next hop IPv6 address in reverse order to ipv6.dst_address */ + for (i = 0; i < MLX5_ST_SZ_DW(definer_hl_ipv6_addr); i++) { + MLX5_SET(set_action_in, action_ptr,
data, be32toh(*ipv6_dst_addr++)); + action_ptr += MLX5DR_MODIFY_ACTION_SIZE; + } + + /* Set ipv6_route_ext.next_hdr per user input */ + MLX5_SET(set_action_in, action_ptr, data, *data); +} + +static void +mlx5dr_action_setter_ipv6_route_ext_mhdr(struct mlx5dr_actions_apply_data *apply, + struct mlx5dr_actions_wqe_setter *setter) +{ + struct mlx5dr_rule_action *rule_action = apply->rule_action; + struct mlx5dr_actions_wqe_setter tmp_setter = {0}; + struct mlx5dr_rule_action tmp_rule_action; + __be64 cmd[MLX5_SRV6_SAMPLE_NUM] = {0}; + struct mlx5dr_action *ipv6_ext_action; + uint8_t *header; + + header = rule_action[setter->idx_double].ipv6_ext.header; + ipv6_ext_action = rule_action[setter->idx_double].action; + tmp_rule_action.action = ipv6_ext_action->ipv6_route_ext.action[setter->extra_data]; + + if (tmp_rule_action.action->flags & MLX5DR_ACTION_FLAG_SHARED) { + tmp_rule_action.modify_header.offset = 0; + tmp_rule_action.modify_header.pattern_idx = 0; + tmp_rule_action.modify_header.data = NULL; + } else { + /* + * Copy ipv6_dst from ipv6_route_ext.last_seg. + * Set ipv6_route_ext.next_hdr. 
+ */ + mlx5dr_action_setter_ipv6_route_ext_gen_push_mhdr(header, cmd); + tmp_rule_action.modify_header.data = (uint8_t *)cmd; + tmp_rule_action.modify_header.pattern_idx = 0; + tmp_rule_action.modify_header.offset = + rule_action[setter->idx_double].ipv6_ext.offset; + } + + apply->rule_action = &tmp_rule_action; + + /* Reuse regular */ + mlx5dr_action_setter_modify_header(apply, &tmp_setter); + + /* Swap rule actions from backup */ + apply->rule_action = rule_action; +} + +static void +mlx5dr_action_setter_ipv6_route_ext_insert_ptr(struct mlx5dr_actions_apply_data *apply, + struct mlx5dr_actions_wqe_setter *setter) +{ + struct mlx5dr_rule_action *rule_action = apply->rule_action; + struct mlx5dr_actions_wqe_setter tmp_setter = {0}; + struct mlx5dr_rule_action tmp_rule_action; + struct mlx5dr_action *ipv6_ext_action; + uint8_t header[MLX5_PUSH_MAX_LEN]; + + ipv6_ext_action = rule_action[setter->idx_double].action; + tmp_rule_action.action = ipv6_ext_action->ipv6_route_ext.action[setter->extra_data]; + + if (tmp_rule_action.action->flags & MLX5DR_ACTION_FLAG_SHARED) { + tmp_rule_action.reformat.offset = 0; + tmp_rule_action.reformat.hdr_idx = 0; + tmp_rule_action.reformat.data = NULL; + } else { + memcpy(header, rule_action[setter->idx_double].ipv6_ext.header, + tmp_rule_action.action->reformat.header_size); + /* Clear ipv6_route_ext.next_hdr for right checksum */ + MLX5_SET(header_ipv6_routing_ext, header, next_hdr, 0); + tmp_rule_action.reformat.data = header; + tmp_rule_action.reformat.hdr_idx = 0; + tmp_rule_action.reformat.offset = + rule_action[setter->idx_double].ipv6_ext.offset; + } + + apply->rule_action = &tmp_rule_action; + + /* Reuse regular */ + mlx5dr_action_setter_insert_ptr(apply, &tmp_setter); + + /* Swap rule actions from backup */ + apply->rule_action = rule_action; +} + +static void +mlx5dr_action_setter_ipv6_route_ext_pop(struct mlx5dr_actions_apply_data *apply, + struct mlx5dr_actions_wqe_setter *setter) +{ + struct mlx5dr_rule_action 
*rule_action = &apply->rule_action[setter->idx_single]; + uint8_t idx = MLX5DR_ACTION_IPV6_EXT_MAX_SA - 1; + struct mlx5dr_action *action; + + /* Pop the ipv6_route_ext as set_single logic */ + action = rule_action->action->ipv6_route_ext.action[idx]; + apply->wqe_data[MLX5DR_ACTION_OFFSET_DW5] = 0; + apply->wqe_ctrl->stc_ix[MLX5DR_ACTION_STC_IDX_DW5] = + htobe32(action->stc[apply->tbl_type].offset); +} + int mlx5dr_action_template_process(struct mlx5dr_action_template *at) { struct mlx5dr_actions_wqe_setter *start_setter = at->setters + 1; @@ -3374,6 +3489,65 @@ int mlx5dr_action_template_process(struct mlx5dr_action_template *at) setter->idx_double = i; break; + case MLX5DR_ACTION_TYP_POP_IPV6_ROUTE_EXT: + /* + * Backup ipv6_route_ext.next_hdr to ipv6_route_ext.seg_left. + * Set ipv6_route_ext.next_hdr to 0 for checksum bug. + */ + setter = mlx5dr_action_setter_find_first(last_setter, ASF_DOUBLE | ASF_REMOVE); + setter->flags |= ASF_DOUBLE | ASF_MODIFY; + setter->set_double = &mlx5dr_action_setter_ipv6_route_ext_mhdr; + setter->idx_double = i; + setter->extra_data = 0; + setter++; + + /* + * Restore ipv6_route_ext.next_hdr from ipv6_route_ext.seg_left. + * Load the final destination address from flex parser sample 1->4. 
+ */ + setter->flags |= ASF_DOUBLE | ASF_MODIFY; + setter->set_double = &mlx5dr_action_setter_ipv6_route_ext_mhdr; + setter->idx_double = i; + setter->extra_data = 1; + setter++; + + /* Set the ipv6.protocol per ipv6_route_ext.next_hdr */ + setter->flags |= ASF_DOUBLE | ASF_MODIFY; + setter->set_double = &mlx5dr_action_setter_ipv6_route_ext_mhdr; + setter->idx_double = i; + setter->extra_data = 2; + /* Pop ipv6_route_ext */ + setter->flags |= ASF_SINGLE1 | ASF_REMOVE; + setter->set_single = &mlx5dr_action_setter_ipv6_route_ext_pop; + setter->idx_single = i; + break; + + case MLX5DR_ACTION_TYP_PUSH_IPV6_ROUTE_EXT: + /* Insert ipv6_route_ext with next_hdr as 0 due to checksum bug */ + setter = mlx5dr_action_setter_find_first(last_setter, ASF_DOUBLE | ASF_REMOVE); + setter->flags |= ASF_DOUBLE | ASF_INSERT; + setter->set_double = &mlx5dr_action_setter_ipv6_route_ext_insert_ptr; + setter->idx_double = i; + setter->extra_data = 0; + setter++; + + /* Set ipv6.protocol as IPPROTO_ROUTING: 0x2b */ + setter->flags |= ASF_DOUBLE | ASF_MODIFY; + setter->set_double = &mlx5dr_action_setter_ipv6_route_ext_mhdr; + setter->idx_double = i; + setter->extra_data = 1; + setter++; + + /* + * Load the right ipv6_route_ext.next_hdr per user input buffer. + * Load the next dest_addr from the ipv6_route_ext.seg_list[last]. 
+ */ + setter->flags |= ASF_DOUBLE | ASF_MODIFY; + setter->set_double = &mlx5dr_action_setter_ipv6_route_ext_mhdr; + setter->idx_double = i; + setter->extra_data = 2; + break; + case MLX5DR_ACTION_TYP_MODIFY_HDR: /* Double modify header list */ setter = mlx5dr_action_setter_find_first(last_setter, ASF_DOUBLE | ASF_REMOVE); diff --git a/drivers/net/mlx5/hws/mlx5dr_action.h b/drivers/net/mlx5/hws/mlx5dr_action.h index d0152dde3b..ce9091a336 100644 --- a/drivers/net/mlx5/hws/mlx5dr_action.h +++ b/drivers/net/mlx5/hws/mlx5dr_action.h @@ -6,7 +6,7 @@ #define MLX5DR_ACTION_H_ /* Max number of STEs needed for a rule (including match) */ -#define MLX5DR_ACTION_MAX_STE 10 +#define MLX5DR_ACTION_MAX_STE 20 /* Max number of internal subactions of ipv6_ext */ #define MLX5DR_ACTION_IPV6_EXT_MAX_SA 4 @@ -109,6 +109,7 @@ struct mlx5dr_actions_wqe_setter { uint8_t idx_ctr; uint8_t idx_hit; uint8_t flags; + uint8_t extra_data; }; struct mlx5dr_action_template { From patchwork Tue Oct 31 10:51:30 2023 X-Patchwork-Submitter: Rongwei Liu X-Patchwork-Id: 133649 X-Patchwork-Delegate: rasland@nvidia.com
From: Rongwei Liu
Subject: [PATCH v3 5/6] net/mlx5: implement IPv6 routing push remove
Date: Tue, 31 Oct 2023 12:51:30 +0200
Message-ID: <20231031105131.441078-6-rongweil@nvidia.com>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20231031105131.441078-1-rongweil@nvidia.com>
References: <20231031094244.381557-1-rongweil@nvidia.com> <20231031105131.441078-1-rongweil@nvidia.com>
Reserve the push data buffer for each job; its maximum length is set to 128 bytes for now. Only the IPPROTO_ROUTING type is supported when translating the rte_flow action. Remove actions must be shared globally and support only TCP or UDP as the next layer.
Signed-off-by: Rongwei Liu
Acked-by: Ori Kam
Acked-by: Suanming Mou
---
 doc/guides/nics/features/mlx5.ini      |   2 +
 doc/guides/nics/mlx5.rst               |  11 +-
 doc/guides/rel_notes/release_23_11.rst |   2 +
 drivers/net/mlx5/mlx5.h                |   1 +
 drivers/net/mlx5/mlx5_flow.h           |  21 +-
 drivers/net/mlx5/mlx5_flow_hw.c        | 282 ++++++++++++++++++++++++-
 6 files changed, 309 insertions(+), 10 deletions(-)

diff --git a/doc/guides/nics/features/mlx5.ini b/doc/guides/nics/features/mlx5.ini
index 0ed9a6aefc..0739fe9d63 100644
--- a/doc/guides/nics/features/mlx5.ini
+++ b/doc/guides/nics/features/mlx5.ini
@@ -108,6 +108,8 @@ flag = Y
 inc_tcp_ack = Y
 inc_tcp_seq = Y
 indirect_list = Y
+ipv6_ext_push = Y
+ipv6_ext_remove = Y
 jump = Y
 mark = Y
 meter = Y
diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
index be5054e68a..955dedf3db 100644
--- a/doc/guides/nics/mlx5.rst
+++ b/doc/guides/nics/mlx5.rst
@@ -148,7 +148,9 @@ Features
 - Matching on GTP extension header with raw encap/decap action.
 - Matching on Geneve TLV option header with raw encap/decap action.
 - Matching on ESP header SPI field.
+- Matching on flex item with specific pattern.
 - Matching on InfiniBand BTH.
+- Modify flex item field.
 - Modify IPv4/IPv6 ECN field.
 - RSS support in sample action.
 - E-Switch mirroring and jump.
@@ -166,7 +168,7 @@ Features
 - Sub-Function.
 - Matching on represented port.
 - Matching on aggregated affinity.
-
+- Push or remove IPv6 routing extension.

 Limitations
 -----------
@@ -759,6 +761,13 @@ Limitations
   to the representor of the source virtual port (SF/VF), while if it is disabled,
   the traffic will be routed based on the steering rules in the ingress domain.

+- IPv6 routing extension push or remove:
+
+  - Supported only when HW Steering is enabled (``dv_flow_en`` = 2).
+  - Supported only in a non-zero group (no limits on the transfer domain if ``fdb_def_rule_en`` = 1, which is the default).
+  - Supports only TCP or UDP as the next layer.
+  - The IPv6 routing header must be the only extension present.
+  - Not supported on guest port.

 Statistics
 ----------

diff --git a/doc/guides/rel_notes/release_23_11.rst b/doc/guides/rel_notes/release_23_11.rst
index 93999893bd..5ef309ea59 100644
--- a/doc/guides/rel_notes/release_23_11.rst
+++ b/doc/guides/rel_notes/release_23_11.rst
@@ -157,6 +157,8 @@ New Features
   * Added support for ``RTE_FLOW_ACTION_TYPE_INDIRECT_LIST`` flow action.
   * Added support for ``RTE_FLOW_ITEM_TYPE_PTYPE`` flow item.
   * Added support for ``RTE_FLOW_ACTION_TYPE_PORT_REPRESENTOR`` flow action and mirror.
+  * Added support for ``RTE_FLOW_ACTION_TYPE_IPV6_EXT_PUSH`` flow action.
+  * Added support for ``RTE_FLOW_ACTION_TYPE_IPV6_EXT_REMOVE`` flow action.

 * **Updated Solarflare net driver.**

diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index f13a56ee9e..277bbbf407 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -373,6 +373,7 @@ struct mlx5_hw_q_job {
 	};
 	void *user_data; /* Job user data. */
 	uint8_t *encap_data; /* Encap data. */
+	uint8_t *push_data; /* IPv6 routing push data. */
 	struct mlx5_modification_cmd *mhdr_cmd;
 	struct rte_flow_item *items;
 	union {
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index 43608e15d2..c7be1f3553 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -363,6 +363,8 @@ enum mlx5_feature_name {
 #define MLX5_FLOW_ACTION_INDIRECT_AGE (1ull << 44)
 #define MLX5_FLOW_ACTION_QUOTA (1ull << 46)
 #define MLX5_FLOW_ACTION_PORT_REPRESENTOR (1ull << 47)
+#define MLX5_FLOW_ACTION_IPV6_ROUTING_REMOVE (1ull << 48)
+#define MLX5_FLOW_ACTION_IPV6_ROUTING_PUSH (1ull << 49)

 #define MLX5_FLOW_DROP_INCLUSIVE_ACTIONS \
 	(MLX5_FLOW_ACTION_COUNT | MLX5_FLOW_ACTION_SAMPLE | MLX5_FLOW_ACTION_AGE)
@@ -1269,6 +1271,8 @@ typedef int
 			    const struct rte_flow_action *,
 			    struct mlx5dr_rule_action *);

+#define MLX5_MHDR_MAX_CMD ((MLX5_MAX_MODIFY_NUM) * 2 + 1)
+
 /* rte flow action translate to DR action struct.
*/ struct mlx5_action_construct_data { LIST_ENTRY(mlx5_action_construct_data) next; @@ -1315,6 +1319,10 @@ struct mlx5_action_construct_data { struct { cnt_id_t id; } shared_counter; + struct { + /* IPv6 extension push data len. */ + uint16_t len; + } ipv6_ext; struct { uint32_t id; uint32_t conf_masked:1; @@ -1359,6 +1367,7 @@ struct rte_flow_actions_template { uint16_t *src_off; /* RTE action displacement from app. template */ uint16_t reformat_off; /* Offset of DR reformat action. */ uint16_t mhdr_off; /* Offset of DR modify header action. */ + uint16_t recom_off; /* Offset of DR IPv6 routing push remove action. */ uint32_t refcnt; /* Reference counter. */ uint8_t flex_item; /* flex item index. */ }; @@ -1384,7 +1393,14 @@ struct mlx5_hw_encap_decap_action { uint8_t data[]; /* Action data. */ }; -#define MLX5_MHDR_MAX_CMD ((MLX5_MAX_MODIFY_NUM) * 2 + 1) +/* Push remove action struct. */ +struct mlx5_hw_push_remove_action { + struct mlx5dr_action *action; /* Action object. */ + /* Is push_remove action shared across flows in table. */ + uint8_t shared; + size_t data_size; /* Action metadata size. */ + uint8_t data[]; /* Action data. */ +}; /* Modify field action struct. */ struct mlx5_hw_modify_header_action { @@ -1415,6 +1431,9 @@ struct mlx5_hw_actions { /* Encap/Decap action. */ struct mlx5_hw_encap_decap_action *encap_decap; uint16_t encap_decap_pos; /* Encap/Decap action position. */ + /* Push/remove action. */ + struct mlx5_hw_push_remove_action *push_remove; + uint16_t push_remove_pos; /* Push/remove action position. */ uint32_t mark:1; /* Indicate the mark action. */ cnt_id_t cnt_id; /* Counter id. */ uint32_t mtr_id; /* Meter id. 
*/ diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c index 977751394e..592d436099 100644 --- a/drivers/net/mlx5/mlx5_flow_hw.c +++ b/drivers/net/mlx5/mlx5_flow_hw.c @@ -624,6 +624,12 @@ __flow_hw_action_template_destroy(struct rte_eth_dev *dev, mlx5_free(acts->encap_decap); acts->encap_decap = NULL; } + if (acts->push_remove) { + if (acts->push_remove->action) + mlx5dr_action_destroy(acts->push_remove->action); + mlx5_free(acts->push_remove); + acts->push_remove = NULL; + } if (acts->mhdr) { flow_hw_template_destroy_mhdr_action(acts->mhdr); mlx5_free(acts->mhdr); @@ -761,6 +767,44 @@ __flow_hw_act_data_encap_append(struct mlx5_priv *priv, return 0; } +/** + * Append dynamic push action to the dynamic action list. + * + * @param[in] dev + * Pointer to the port. + * @param[in] acts + * Pointer to the template HW steering DR actions. + * @param[in] type + * Action type. + * @param[in] action_src + * Offset of source rte flow action. + * @param[in] action_dst + * Offset of destination DR action. + * @param[in] len + * Length of the data to be updated. + * + * @return + * Data pointer on success, NULL otherwise and rte_errno is set. 
+ */ +static __rte_always_inline void * +__flow_hw_act_data_push_append(struct rte_eth_dev *dev, + struct mlx5_hw_actions *acts, + enum rte_flow_action_type type, + uint16_t action_src, + uint16_t action_dst, + uint16_t len) +{ + struct mlx5_action_construct_data *act_data; + struct mlx5_priv *priv = dev->data->dev_private; + + act_data = __flow_hw_act_data_alloc(priv, type, action_src, action_dst); + if (!act_data) + return NULL; + act_data->ipv6_ext.len = len; + LIST_INSERT_HEAD(&acts->act_list, act_data, next); + return act_data; +} + static __rte_always_inline int __flow_hw_act_data_hdr_modify_append(struct mlx5_priv *priv, struct mlx5_hw_actions *acts, @@ -1924,6 +1968,82 @@ mlx5_tbl_translate_modify_header(struct rte_eth_dev *dev, return 0; } + +static int +mlx5_create_ipv6_ext_reformat(struct rte_eth_dev *dev, + const struct mlx5_flow_template_table_cfg *cfg, + struct mlx5_hw_actions *acts, + struct rte_flow_actions_template *at, + uint8_t *push_data, uint8_t *push_data_m, + size_t push_size, uint16_t recom_src, + enum mlx5dr_action_type recom_type) +{ + struct mlx5_priv *priv = dev->data->dev_private; + const struct rte_flow_template_table_attr *table_attr = &cfg->attr; + const struct rte_flow_attr *attr = &table_attr->flow_attr; + enum mlx5dr_table_type type = get_mlx5dr_table_type(attr); + struct mlx5_action_construct_data *act_data; + struct mlx5dr_action_reformat_header hdr = {0}; + uint32_t flag, bulk = 0; + + flag = mlx5_hw_act_flag[!!attr->group][type]; + acts->push_remove = mlx5_malloc(MLX5_MEM_ZERO, + sizeof(*acts->push_remove) + push_size, + 0, SOCKET_ID_ANY); + if (!acts->push_remove) + return -ENOMEM; + + switch (recom_type) { + case MLX5DR_ACTION_TYP_PUSH_IPV6_ROUTE_EXT: + if (!push_data || !push_size) + goto err1; + if (!push_data_m) { + bulk = rte_log2_u32(table_attr->nb_flows); + } else { + flag |= MLX5DR_ACTION_FLAG_SHARED; + acts->push_remove->shared = 1; + } + acts->push_remove->data_size = push_size; + memcpy(acts->push_remove->data, 
push_data, push_size); + hdr.data = push_data; + hdr.sz = push_size; + break; + case MLX5DR_ACTION_TYP_POP_IPV6_ROUTE_EXT: + flag |= MLX5DR_ACTION_FLAG_SHARED; + acts->push_remove->shared = 1; + break; + default: + break; + } + + acts->push_remove->action = + mlx5dr_action_create_reformat_ipv6_ext(priv->dr_ctx, + recom_type, &hdr, bulk, flag); + if (!acts->push_remove->action) + goto err1; + acts->rule_acts[at->recom_off].action = acts->push_remove->action; + acts->rule_acts[at->recom_off].ipv6_ext.header = acts->push_remove->data; + acts->rule_acts[at->recom_off].ipv6_ext.offset = 0; + acts->push_remove_pos = at->recom_off; + if (!acts->push_remove->shared) { + act_data = __flow_hw_act_data_push_append(dev, acts, + RTE_FLOW_ACTION_TYPE_IPV6_EXT_PUSH, + recom_src, at->recom_off, push_size); + if (!act_data) + goto err; + } + return 0; +err: + if (acts->push_remove->action) + mlx5dr_action_destroy(acts->push_remove->action); +err1: + if (acts->push_remove) { + mlx5_free(acts->push_remove); + acts->push_remove = NULL; + } + return -EINVAL; +} + /** * Translate rte_flow actions to DR action. 
* @@ -1957,19 +2077,24 @@ __flow_hw_actions_translate(struct rte_eth_dev *dev, { struct mlx5_priv *priv = dev->data->dev_private; const struct rte_flow_template_table_attr *table_attr = &cfg->attr; + struct mlx5_hca_flex_attr *hca_attr = &priv->sh->cdev->config.hca_attr.flex; const struct rte_flow_attr *attr = &table_attr->flow_attr; struct rte_flow_action *actions = at->actions; struct rte_flow_action *masks = at->masks; enum mlx5dr_action_type refmt_type = MLX5DR_ACTION_TYP_LAST; + enum mlx5dr_action_type recom_type = MLX5DR_ACTION_TYP_LAST; const struct rte_flow_action_raw_encap *raw_encap_data; + const struct rte_flow_action_ipv6_ext_push *ipv6_ext_data; const struct rte_flow_item *enc_item = NULL, *enc_item_m = NULL; - uint16_t reformat_src = 0; + uint16_t reformat_src = 0, recom_src = 0; uint8_t *encap_data = NULL, *encap_data_m = NULL; - size_t data_size = 0; + uint8_t *push_data = NULL, *push_data_m = NULL; + size_t data_size = 0, push_size = 0; struct mlx5_hw_modify_header_action mhdr = { 0 }; bool actions_end = false; uint32_t type; bool reformat_used = false; + bool recom_used = false; unsigned int of_vlan_offset; uint16_t jump_pos; uint32_t ct_idx; @@ -2175,6 +2300,36 @@ __flow_hw_actions_translate(struct rte_eth_dev *dev, reformat_used = true; refmt_type = MLX5DR_ACTION_TYP_REFORMAT_TNL_L2_TO_L2; break; + case RTE_FLOW_ACTION_TYPE_IPV6_EXT_PUSH: + if (!hca_attr->query_match_sample_info || !hca_attr->parse_graph_anchor || + !priv->sh->srh_flex_parser.flex.mapnum) { + DRV_LOG(ERR, "SRv6 anchor is not supported."); + goto err; + } + MLX5_ASSERT(!recom_used && !recom_type); + recom_used = true; + recom_type = MLX5DR_ACTION_TYP_PUSH_IPV6_ROUTE_EXT; + ipv6_ext_data = + (const struct rte_flow_action_ipv6_ext_push *)masks->conf; + if (ipv6_ext_data) + push_data_m = ipv6_ext_data->data; + ipv6_ext_data = + (const struct rte_flow_action_ipv6_ext_push *)actions->conf; + if (ipv6_ext_data) { + push_data = ipv6_ext_data->data; + push_size = ipv6_ext_data->size; + } 
+ recom_src = src_pos; + break; + case RTE_FLOW_ACTION_TYPE_IPV6_EXT_REMOVE: + if (!hca_attr->query_match_sample_info || !hca_attr->parse_graph_anchor || + !priv->sh->srh_flex_parser.flex.mapnum) { + DRV_LOG(ERR, "SRv6 anchor is not supported."); + goto err; + } + recom_used = true; + recom_type = MLX5DR_ACTION_TYP_POP_IPV6_ROUTE_EXT; + break; case RTE_FLOW_ACTION_TYPE_SEND_TO_KERNEL: flow_hw_translate_group(dev, cfg, attr->group, &target_grp, error); @@ -2322,6 +2477,14 @@ __flow_hw_actions_translate(struct rte_eth_dev *dev, if (ret) goto err; } + if (recom_used) { + MLX5_ASSERT(at->recom_off != UINT16_MAX); + ret = mlx5_create_ipv6_ext_reformat(dev, cfg, acts, at, push_data, + push_data_m, push_size, recom_src, + recom_type); + if (ret) + goto err; + } return 0; err: err = rte_errno; @@ -2719,11 +2882,13 @@ flow_hw_actions_construct(struct rte_eth_dev *dev, const struct mlx5_hw_actions *hw_acts = &hw_at->acts; const struct rte_flow_action *action; const struct rte_flow_action_raw_encap *raw_encap_data; + const struct rte_flow_action_ipv6_ext_push *ipv6_push; const struct rte_flow_item *enc_item = NULL; const struct rte_flow_action_ethdev *port_action = NULL; const struct rte_flow_action_meter *meter = NULL; const struct rte_flow_action_age *age = NULL; uint8_t *buf = job->encap_data; + uint8_t *push_buf = job->push_data; struct rte_flow_attr attr = { .ingress = 1, }; @@ -2854,6 +3019,13 @@ flow_hw_actions_construct(struct rte_eth_dev *dev, MLX5_ASSERT(raw_encap_data->size == act_data->encap.len); break; + case RTE_FLOW_ACTION_TYPE_IPV6_EXT_PUSH: + ipv6_push = + (const struct rte_flow_action_ipv6_ext_push *)action->conf; + rte_memcpy((void *)push_buf, ipv6_push->data, + act_data->ipv6_ext.len); + MLX5_ASSERT(ipv6_push->size == act_data->ipv6_ext.len); + break; case RTE_FLOW_ACTION_TYPE_MODIFY_FIELD: if (action->type == RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_VID) ret = flow_hw_set_vlan_vid_construct(dev, job, @@ -3010,6 +3182,11 @@ flow_hw_actions_construct(struct 
rte_eth_dev *dev, job->flow->res_idx - 1; rule_acts[hw_acts->encap_decap_pos].reformat.data = buf; } + if (hw_acts->push_remove && !hw_acts->push_remove->shared) { + rule_acts[hw_acts->push_remove_pos].ipv6_ext.offset = + job->flow->res_idx - 1; + rule_acts[hw_acts->push_remove_pos].ipv6_ext.header = push_buf; + } if (mlx5_hws_cnt_id_valid(hw_acts->cnt_id)) job->flow->cnt_id = hw_acts->cnt_id; return 0; @@ -5113,6 +5290,38 @@ flow_hw_validate_action_indirect(struct rte_eth_dev *dev, return 0; } +/** + * Validate ipv6_ext_push action. + * + * @param[in] dev + * Pointer to rte_eth_dev structure. + * @param[in] action + * Pointer to the indirect action. + * @param[out] error + * Pointer to error structure. + * + * @return + * 0 on success, a negative errno value otherwise and rte_errno is set. + */ +static int +flow_hw_validate_action_ipv6_ext_push(struct rte_eth_dev *dev __rte_unused, + const struct rte_flow_action *action, + struct rte_flow_error *error) +{ + const struct rte_flow_action_ipv6_ext_push *raw_push_data = action->conf; + + if (!raw_push_data || !raw_push_data->size || !raw_push_data->data) + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, action, + "invalid ipv6_ext_push data"); + if (raw_push_data->type != IPPROTO_ROUTING || + raw_push_data->size > MLX5_PUSH_MAX_LEN) + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, action, + "Unsupported ipv6_ext_push type or length"); + return 0; +} + /** * Validate raw_encap action. * @@ -5340,6 +5549,7 @@ mlx5_flow_hw_actions_validate(struct rte_eth_dev *dev, #endif uint16_t i; int ret; + const struct rte_flow_action_ipv6_ext_remove *remove_data; /* FDB actions are only valid to proxy port. 
*/ if (attr->transfer && (!priv->sh->config.dv_esw_en || !priv->master)) @@ -5436,6 +5646,21 @@ mlx5_flow_hw_actions_validate(struct rte_eth_dev *dev, /* TODO: Validation logic */ action_flags |= MLX5_FLOW_ACTION_DECAP; break; + case RTE_FLOW_ACTION_TYPE_IPV6_EXT_PUSH: + ret = flow_hw_validate_action_ipv6_ext_push(dev, action, error); + if (ret < 0) + return ret; + action_flags |= MLX5_FLOW_ACTION_IPV6_ROUTING_PUSH; + break; + case RTE_FLOW_ACTION_TYPE_IPV6_EXT_REMOVE: + remove_data = action->conf; + /* Remove action must be shared. */ + if (remove_data->type != IPPROTO_ROUTING || !mask) { + DRV_LOG(ERR, "Only supports shared IPv6 routing remove"); + return -EINVAL; + } + action_flags |= MLX5_FLOW_ACTION_IPV6_ROUTING_REMOVE; + break; case RTE_FLOW_ACTION_TYPE_METER: /* TODO: Validation logic */ action_flags |= MLX5_FLOW_ACTION_METER; @@ -5551,6 +5776,8 @@ static enum mlx5dr_action_type mlx5_hw_dr_action_types[] = { [RTE_FLOW_ACTION_TYPE_OF_POP_VLAN] = MLX5DR_ACTION_TYP_POP_VLAN, [RTE_FLOW_ACTION_TYPE_OF_PUSH_VLAN] = MLX5DR_ACTION_TYP_PUSH_VLAN, [RTE_FLOW_ACTION_TYPE_SEND_TO_KERNEL] = MLX5DR_ACTION_TYP_DEST_ROOT, + [RTE_FLOW_ACTION_TYPE_IPV6_EXT_PUSH] = MLX5DR_ACTION_TYP_PUSH_IPV6_ROUTE_EXT, + [RTE_FLOW_ACTION_TYPE_IPV6_EXT_REMOVE] = MLX5DR_ACTION_TYP_POP_IPV6_ROUTE_EXT, }; static inline void @@ -5648,6 +5875,8 @@ flow_hw_template_actions_list(struct rte_flow_actions_template *at, /** * Create DR action template based on a provided sequence of flow actions. * + * @param[in] dev + * Pointer to the rte_eth_dev structure. * @param[in] at * Pointer to flow actions template to be updated. * @@ -5656,7 +5885,8 @@ flow_hw_template_actions_list(struct rte_flow_actions_template *at, * NULL otherwise. 
*/ static struct mlx5dr_action_template * -flow_hw_dr_actions_template_create(struct rte_flow_actions_template *at) +flow_hw_dr_actions_template_create(struct rte_eth_dev *dev, + struct rte_flow_actions_template *at) { struct mlx5dr_action_template *dr_template; enum mlx5dr_action_type action_types[MLX5_HW_MAX_ACTS] = { MLX5DR_ACTION_TYP_LAST }; @@ -5665,8 +5895,11 @@ flow_hw_dr_actions_template_create(struct rte_flow_actions_template *at) enum mlx5dr_action_type reformat_act_type = MLX5DR_ACTION_TYP_REFORMAT_TNL_L2_TO_L2; uint16_t reformat_off = UINT16_MAX; uint16_t mhdr_off = UINT16_MAX; + uint16_t recom_off = UINT16_MAX; uint16_t cnt_off = UINT16_MAX; + enum mlx5dr_action_type recom_type = MLX5DR_ACTION_TYP_LAST; int ret; + for (i = 0, curr_off = 0; at->actions[i].type != RTE_FLOW_ACTION_TYPE_END; ++i) { const struct rte_flow_action_raw_encap *raw_encap_data; size_t data_size; @@ -5698,6 +5931,16 @@ flow_hw_dr_actions_template_create(struct rte_flow_actions_template *at) reformat_off = curr_off++; reformat_act_type = mlx5_hw_dr_action_types[at->actions[i].type]; break; + case RTE_FLOW_ACTION_TYPE_IPV6_EXT_PUSH: + MLX5_ASSERT(recom_off == UINT16_MAX); + recom_type = MLX5DR_ACTION_TYP_PUSH_IPV6_ROUTE_EXT; + recom_off = curr_off++; + break; + case RTE_FLOW_ACTION_TYPE_IPV6_EXT_REMOVE: + MLX5_ASSERT(recom_off == UINT16_MAX); + recom_type = MLX5DR_ACTION_TYP_POP_IPV6_ROUTE_EXT; + recom_off = curr_off++; + break; case RTE_FLOW_ACTION_TYPE_RAW_ENCAP: raw_encap_data = at->actions[i].conf; data_size = raw_encap_data->size; @@ -5770,11 +6013,25 @@ flow_hw_dr_actions_template_create(struct rte_flow_actions_template *at) at->reformat_off = reformat_off; action_types[reformat_off] = reformat_act_type; } + if (recom_off != UINT16_MAX) { + at->recom_off = recom_off; + action_types[recom_off] = recom_type; + } dr_template = mlx5dr_action_template_create(action_types); - if (dr_template) + if (dr_template) { at->dr_actions_num = curr_off; - else + } else { DRV_LOG(ERR, "Failed 
to create DR action template: %d", rte_errno); + return NULL; + } + /* Create srh flex parser for remove anchor. */ + if ((recom_type == MLX5DR_ACTION_TYP_POP_IPV6_ROUTE_EXT || + recom_type == MLX5DR_ACTION_TYP_PUSH_IPV6_ROUTE_EXT) && + mlx5_alloc_srh_flex_parser(dev)) { + DRV_LOG(ERR, "Failed to create srv6 flex parser"); + claim_zero(mlx5dr_action_template_destroy(dr_template)); + return NULL; + } return dr_template; err_actions_num: DRV_LOG(ERR, "Number of HW actions (%u) exceeded maximum (%u) allowed in template", @@ -6183,7 +6440,7 @@ flow_hw_actions_template_create(struct rte_eth_dev *dev, break; } } - at->tmpl = flow_hw_dr_actions_template_create(at); + at->tmpl = flow_hw_dr_actions_template_create(dev, at); if (!at->tmpl) goto error; at->action_flags = action_flags; @@ -6220,6 +6477,9 @@ flow_hw_actions_template_destroy(struct rte_eth_dev *dev, struct rte_flow_actions_template *template, struct rte_flow_error *error __rte_unused) { + uint64_t flag = MLX5_FLOW_ACTION_IPV6_ROUTING_REMOVE | + MLX5_FLOW_ACTION_IPV6_ROUTING_PUSH; + if (__atomic_load_n(&template->refcnt, __ATOMIC_RELAXED) > 1) { DRV_LOG(WARNING, "Action template %p is still in use.", (void *)template); @@ -6228,6 +6488,8 @@ flow_hw_actions_template_destroy(struct rte_eth_dev *dev, NULL, "action template in using"); } + if (template->action_flags & flag) + mlx5_free_srh_flex_parser(dev); LIST_REMOVE(template, next); flow_hw_flex_item_release(dev, &template->flex_item); if (template->tmpl) @@ -8796,6 +9058,7 @@ flow_hw_configure(struct rte_eth_dev *dev, mem_size += (sizeof(struct mlx5_hw_q_job *) + sizeof(struct mlx5_hw_q_job) + sizeof(uint8_t) * MLX5_ENCAP_MAX_LEN + + sizeof(uint8_t) * MLX5_PUSH_MAX_LEN + sizeof(struct mlx5_modification_cmd) * MLX5_MHDR_MAX_CMD + sizeof(struct rte_flow_item) * @@ -8811,7 +9074,7 @@ flow_hw_configure(struct rte_eth_dev *dev, } for (i = 0; i < nb_q_updated; i++) { char mz_name[RTE_MEMZONE_NAMESIZE]; - uint8_t *encap = NULL; + uint8_t *encap = NULL, *push = NULL; 
struct mlx5_modification_cmd *mhdr_cmd = NULL; struct rte_flow_item *items = NULL; struct rte_flow_hw *upd_flow = NULL; @@ -8831,13 +9094,16 @@ flow_hw_configure(struct rte_eth_dev *dev, &job[_queue_attr[i]->size]; encap = (uint8_t *) &mhdr_cmd[_queue_attr[i]->size * MLX5_MHDR_MAX_CMD]; - items = (struct rte_flow_item *) + push = (uint8_t *) &encap[_queue_attr[i]->size * MLX5_ENCAP_MAX_LEN]; + items = (struct rte_flow_item *) + &push[_queue_attr[i]->size * MLX5_PUSH_MAX_LEN]; upd_flow = (struct rte_flow_hw *) &items[_queue_attr[i]->size * MLX5_HW_MAX_ITEMS]; for (j = 0; j < _queue_attr[i]->size; j++) { job[j].mhdr_cmd = &mhdr_cmd[j * MLX5_MHDR_MAX_CMD]; job[j].encap_data = &encap[j * MLX5_ENCAP_MAX_LEN]; + job[j].push_data = &push[j * MLX5_PUSH_MAX_LEN]; job[j].items = &items[j * MLX5_HW_MAX_ITEMS]; job[j].upd_flow = &upd_flow[j]; priv->hw_q[i].job[j] = &job[j];

From patchwork Tue Oct 31 10:51:31 2023
X-Patchwork-Submitter: Rongwei Liu
X-Patchwork-Id: 133650
X-Patchwork-Delegate: rasland@nvidia.com
From: Rongwei Liu
CC: Erez Shitrit
Subject: [PATCH v3 6/6] net/mlx5/hws: add stc reparse support for srv6 push pop
Date: Tue, 31 Oct 2023 12:51:31 +0200
Message-ID: <20231031105131.441078-7-rongweil@nvidia.com>
In-Reply-To: <20231031105131.441078-1-rongweil@nvidia.com>
List-Id: DPDK patches and discussions

After pushing/popping SRv6 into/from IPv6 packets, the checksum needs to
stay correct. To achieve this, each STE's reparse behavior must be
controlled individually (ConnectX-7 and above).

Add two more enumeration values to allow external control of the reparse
property in the STC:

1. Push
   a. 1st STE: insert header action, reparse ignored (default: reparse always)
   b. 2nd STE: modify IPv6 protocol, reparse always as default
   c. 3rd STE: modify header list, reparse always (default: reparse ignored)
2. Pop
   a. 1st STE: modify header list, reparse always (default: reparse ignored)
   b. 2nd STE: modify header list, reparse always (default: reparse ignored)
   c. 3rd STE: modify IPv6 protocol, reparse ignored (default: reparse
      always); remove header action, reparse always as default

For ConnectX-6 Lx and ConnectX-6 Dx, the reparse behavior is controlled by
the RTC as always; only the pop action can work correctly there.

Signed-off-by: Rongwei Liu
Reviewed-by: Erez Shitrit
Acked-by: Ori Kam
---
 drivers/net/mlx5/hws/mlx5dr_action.c | 115 +++++++++++++++++++--------
 drivers/net/mlx5/hws/mlx5dr_action.h |   7 ++
 2 files changed, 87 insertions(+), 35 deletions(-)

diff --git a/drivers/net/mlx5/hws/mlx5dr_action.c b/drivers/net/mlx5/hws/mlx5dr_action.c
index 281b09a582..daeabead2a 100644
--- a/drivers/net/mlx5/hws/mlx5dr_action.c
+++ b/drivers/net/mlx5/hws/mlx5dr_action.c
@@ -640,6 +640,7 @@ static void mlx5dr_action_fill_stc_attr(struct mlx5dr_action *action,
 	case MLX5DR_ACTION_TYP_REFORMAT_TNL_L3_TO_L2:
 	case MLX5DR_ACTION_TYP_MODIFY_HDR:
 		attr->action_offset = MLX5DR_ACTION_OFFSET_DW6;
+		attr->reparse_mode = MLX5_IFC_STC_REPARSE_IGNORE;
 		if (action->modify_header.require_reparse)
 			attr->reparse_mode = MLX5_IFC_STC_REPARSE_ALWAYS;
@@ -678,9 +679,12 @@ static void mlx5dr_action_fill_stc_attr(struct mlx5dr_action *action,
 	case MLX5DR_ACTION_TYP_REFORMAT_L2_TO_TNL_L2:
 	case MLX5DR_ACTION_TYP_REFORMAT_L2_TO_TNL_L3:
 	case MLX5DR_ACTION_TYP_INSERT_HEADER:
+		attr->reparse_mode = MLX5_IFC_STC_REPARSE_ALWAYS;
+		if (!action->reformat.require_reparse)
+			attr->reparse_mode = MLX5_IFC_STC_REPARSE_IGNORE;
+
 		attr->action_type = MLX5_IFC_STC_ACTION_TYPE_HEADER_INSERT;
 		attr->action_offset = MLX5DR_ACTION_OFFSET_DW6;
-		attr->reparse_mode = MLX5_IFC_STC_REPARSE_ALWAYS;
 		attr->insert_header.encap = action->reformat.encap;
 		attr->insert_header.insert_anchor = action->reformat.anchor;
 		attr->insert_header.arg_id = action->reformat.arg_obj->id;
@@ -1441,7 +1445,7 @@ static int
 mlx5dr_action_handle_insert_with_ptr(struct mlx5dr_action *action,
 				     uint8_t num_of_hdrs,
 				     struct mlx5dr_action_reformat_header *hdrs,
-				     uint32_t log_bulk_sz)
+				     uint32_t log_bulk_sz, uint32_t reparse)
 {
 	struct mlx5dr_devx_obj *arg_obj;
 	size_t max_sz = 0;
@@ -1478,6 +1482,11 @@ mlx5dr_action_handle_insert_with_ptr(struct mlx5dr_action *action,
 			action[i].reformat.encap = 1;
 		}
 
+		if (likely(reparse == MLX5DR_ACTION_STC_REPARSE_DEFAULT))
+			action[i].reformat.require_reparse = true;
+		else if (reparse == MLX5DR_ACTION_STC_REPARSE_ON)
+			action[i].reformat.require_reparse = true;
+
 		ret = mlx5dr_action_create_stcs(&action[i], NULL);
 		if (ret) {
 			DR_LOG(ERR, "Failed to create stc for reformat");
@@ -1514,7 +1523,8 @@ mlx5dr_action_handle_l2_to_tunnel_l3(struct mlx5dr_action *action,
 
 	ret = mlx5dr_action_handle_insert_with_ptr(action, num_of_hdrs, hdrs,
-						   log_bulk_sz);
+						   log_bulk_sz,
+						   MLX5DR_ACTION_STC_REPARSE_DEFAULT);
 	if (ret)
 		goto put_shared_stc;
@@ -1657,7 +1667,8 @@ mlx5dr_action_create_reformat_hws(struct mlx5dr_action *action,
 		ret = mlx5dr_action_create_stcs(action, NULL);
 		break;
 	case MLX5DR_ACTION_TYP_REFORMAT_L2_TO_TNL_L2:
-		ret = mlx5dr_action_handle_insert_with_ptr(action, num_of_hdrs, hdrs, bulk_size);
+		ret = mlx5dr_action_handle_insert_with_ptr(action, num_of_hdrs, hdrs, bulk_size,
+							   MLX5DR_ACTION_STC_REPARSE_DEFAULT);
 		break;
 	case MLX5DR_ACTION_TYP_REFORMAT_L2_TO_TNL_L3:
 		ret = mlx5dr_action_handle_l2_to_tunnel_l3(action, num_of_hdrs, hdrs, bulk_size);
@@ -1765,7 +1776,8 @@ static int
 mlx5dr_action_create_modify_header_hws(struct mlx5dr_action *action,
 				       uint8_t num_of_patterns,
 				       struct mlx5dr_action_mh_pattern *pattern,
-				       uint32_t log_bulk_size)
+				       uint32_t log_bulk_size,
+				       uint32_t reparse)
 {
 	struct mlx5dr_devx_obj *pat_obj, *arg_obj = NULL;
 	struct mlx5dr_context *ctx = action->ctx;
@@ -1799,8 +1811,12 @@ mlx5dr_action_create_modify_header_hws(struct mlx5dr_action *action,
 		action[i].modify_header.num_of_patterns = num_of_patterns;
 		action[i].modify_header.max_num_of_actions = max_mh_actions;
 		action[i].modify_header.num_of_actions = num_actions;
-		action[i].modify_header.require_reparse =
-			mlx5dr_pat_require_reparse(pattern[i].data, num_actions);
+
+		if (likely(reparse == MLX5DR_ACTION_STC_REPARSE_DEFAULT))
+			action[i].modify_header.require_reparse =
+				mlx5dr_pat_require_reparse(pattern[i].data, num_actions);
+		else if (reparse == MLX5DR_ACTION_STC_REPARSE_ON)
+			action[i].modify_header.require_reparse = true;
 
 		if (num_actions == 1) {
 			pat_obj = NULL;
@@ -1843,12 +1859,12 @@ mlx5dr_action_create_modify_header_hws(struct mlx5dr_action *action,
 	return rte_errno;
 }
 
-struct mlx5dr_action *
-mlx5dr_action_create_modify_header(struct mlx5dr_context *ctx,
-				   uint8_t num_of_patterns,
-				   struct mlx5dr_action_mh_pattern *patterns,
-				   uint32_t log_bulk_size,
-				   uint32_t flags)
+static struct mlx5dr_action *
+mlx5dr_action_create_modify_header_reparse(struct mlx5dr_context *ctx,
+					   uint8_t num_of_patterns,
+					   struct mlx5dr_action_mh_pattern *patterns,
+					   uint32_t log_bulk_size,
+					   uint32_t flags, uint32_t reparse)
 {
 	struct mlx5dr_action *action;
 	int ret;
@@ -1896,7 +1912,8 @@ mlx5dr_action_create_modify_header(struct mlx5dr_context *ctx,
 	ret = mlx5dr_action_create_modify_header_hws(action,
 						     num_of_patterns,
 						     patterns,
-						     log_bulk_size);
+						     log_bulk_size,
+						     reparse);
 	if (ret)
 		goto free_action;
@@ -1907,6 +1924,17 @@ mlx5dr_action_create_modify_header(struct mlx5dr_context *ctx,
 	return NULL;
 }
 
+struct mlx5dr_action *
+mlx5dr_action_create_modify_header(struct mlx5dr_context *ctx,
+				   uint8_t num_of_patterns,
+				   struct mlx5dr_action_mh_pattern *patterns,
+				   uint32_t log_bulk_size,
+				   uint32_t flags)
+{
+	return mlx5dr_action_create_modify_header_reparse(ctx, num_of_patterns, patterns,
+							  log_bulk_size, flags,
+							  MLX5DR_ACTION_STC_REPARSE_DEFAULT);
+}
 static struct mlx5dr_devx_obj *
 mlx5dr_action_dest_array_process_reformat(struct mlx5dr_context *ctx,
 					  enum mlx5dr_action_type type,
@@ -2254,12 +2282,12 @@ mlx5dr_action_create_reformat_trailer(struct mlx5dr_context *ctx,
 	return action;
 }
 
-struct mlx5dr_action *
-mlx5dr_action_create_insert_header(struct mlx5dr_context *ctx,
-				   uint8_t num_of_hdrs,
-				   struct mlx5dr_action_insert_header *hdrs,
-				   uint32_t log_bulk_size,
-				   uint32_t flags)
+static struct mlx5dr_action *
+mlx5dr_action_create_insert_header_reparse(struct mlx5dr_context *ctx,
+					   uint8_t num_of_hdrs,
+					   struct mlx5dr_action_insert_header *hdrs,
+					   uint32_t log_bulk_size,
+					   uint32_t flags, uint32_t reparse)
 {
 	struct mlx5dr_action_reformat_header *reformat_hdrs;
 	struct mlx5dr_action *action;
@@ -2312,7 +2340,8 @@ mlx5dr_action_create_insert_header(struct mlx5dr_context *ctx,
 	}
 
 	ret = mlx5dr_action_handle_insert_with_ptr(action, num_of_hdrs,
-						   reformat_hdrs, log_bulk_size);
+						   reformat_hdrs, log_bulk_size,
+						   reparse);
 	if (ret) {
 		DR_LOG(ERR, "Failed to create HWS reformat action");
 		goto free_reformat_hdrs;
@@ -2329,6 +2358,18 @@ mlx5dr_action_create_insert_header(struct mlx5dr_context *ctx,
 	return NULL;
 }
 
+struct mlx5dr_action *
+mlx5dr_action_create_insert_header(struct mlx5dr_context *ctx,
+				   uint8_t num_of_hdrs,
+				   struct mlx5dr_action_insert_header *hdrs,
+				   uint32_t log_bulk_size,
+				   uint32_t flags)
+{
+	return mlx5dr_action_create_insert_header_reparse(ctx, num_of_hdrs, hdrs,
+							  log_bulk_size, flags,
+							  MLX5DR_ACTION_STC_REPARSE_DEFAULT);
+}
+
 struct mlx5dr_action *
 mlx5dr_action_create_remove_header(struct mlx5dr_context *ctx,
 				   struct mlx5dr_action_remove_header_attr *attr,
@@ -2422,8 +2463,9 @@ mlx5dr_action_create_pop_ipv6_route_ext_mhdr1(struct mlx5dr_action *action)
 	pattern.data = cmd;
 	pattern.sz = sizeof(cmd);
 
-	return mlx5dr_action_create_modify_header(action->ctx, 1, &pattern,
-						  0, action->flags);
+	return mlx5dr_action_create_modify_header_reparse(action->ctx, 1, &pattern, 0,
+							  action->flags,
+							  MLX5DR_ACTION_STC_REPARSE_ON);
 }
 
 static void *
@@ -2469,8 +2511,9 @@ mlx5dr_action_create_pop_ipv6_route_ext_mhdr2(struct mlx5dr_action *action)
 	pattern.data = cmd;
 	pattern.sz = sizeof(cmd);
 
-	return mlx5dr_action_create_modify_header(action->ctx, 1, &pattern,
-						  0, action->flags);
+	return mlx5dr_action_create_modify_header_reparse(action->ctx, 1, &pattern, 0,
+							  action->flags,
+							  MLX5DR_ACTION_STC_REPARSE_ON);
 }
 
 static void *
@@ -2496,8 +2539,9 @@ mlx5dr_action_create_pop_ipv6_route_ext_mhdr3(struct mlx5dr_action *action)
 	pattern.data = (__be64 *)cmd;
 	pattern.sz = sizeof(cmd);
 
-	return mlx5dr_action_create_modify_header(action->ctx, 1, &pattern,
-						  0, action->flags);
+	return mlx5dr_action_create_modify_header_reparse(action->ctx, 1, &pattern, 0,
+							  action->flags,
+							  MLX5DR_ACTION_STC_REPARSE_OFF);
 }
 
 static int
@@ -2644,8 +2688,9 @@ mlx5dr_action_create_push_ipv6_route_ext(struct mlx5dr_action *action,
 	insert_hdr.hdr.sz = hdr->sz;
 	insert_hdr.hdr.data = header;
 	action->ipv6_route_ext.action[0] =
-		mlx5dr_action_create_insert_header(action->ctx, 1, &insert_hdr,
-						   bulk_size, action->flags);
+		mlx5dr_action_create_insert_header_reparse(action->ctx, 1, &insert_hdr,
+							   bulk_size, action->flags,
+							   MLX5DR_ACTION_STC_REPARSE_OFF);
 	action->ipv6_route_ext.action[1] =
 		mlx5dr_action_create_push_ipv6_route_ext_mhdr1(action);
 	action->ipv6_route_ext.action[2] =
@@ -2678,12 +2723,6 @@ mlx5dr_action_create_reformat_ipv6_ext(struct mlx5dr_context *ctx,
 	struct mlx5dr_action *action;
 	int ret;
 
-	if (mlx5dr_context_cap_dynamic_reparse(ctx)) {
-		DR_LOG(ERR, "IPv6 extension actions is not supported");
-		rte_errno = ENOTSUP;
-		return NULL;
-	}
-
 	if (!mlx5dr_action_is_hws_flags(flags) ||
 	    ((flags & MLX5DR_ACTION_FLAG_SHARED) && log_bulk_size)) {
 		DR_LOG(ERR, "IPv6 extension flags don't fit HWS (flags: 0x%x)", flags);
@@ -2708,6 +2747,12 @@ mlx5dr_action_create_reformat_ipv6_ext(struct mlx5dr_context *ctx,
 		ret = mlx5dr_action_create_pop_ipv6_route_ext(action);
 		break;
 	case MLX5DR_ACTION_TYP_PUSH_IPV6_ROUTE_EXT:
+		if (!mlx5dr_context_cap_dynamic_reparse(ctx)) {
+			DR_LOG(ERR, "IPv6 routing extension push actions is not supported");
+			rte_errno = ENOTSUP;
+			goto free_action;
+		}
+
 		ret = mlx5dr_action_create_push_ipv6_route_ext(action, hdr, log_bulk_size);
 		break;
 	default:
diff --git a/drivers/net/mlx5/hws/mlx5dr_action.h b/drivers/net/mlx5/hws/mlx5dr_action.h
index ce9091a336..ec6605bf7a 100644
--- a/drivers/net/mlx5/hws/mlx5dr_action.h
+++ b/drivers/net/mlx5/hws/mlx5dr_action.h
@@ -65,6 +65,12 @@ enum mlx5dr_action_setter_flag {
 	ASF_HIT = 1 << 7,
 };
 
+enum mlx5dr_action_stc_reparse {
+	MLX5DR_ACTION_STC_REPARSE_DEFAULT,
+	MLX5DR_ACTION_STC_REPARSE_ON,
+	MLX5DR_ACTION_STC_REPARSE_OFF,
+};
+
 struct mlx5dr_action_default_stc {
 	struct mlx5dr_pool_chunk nop_ctr;
 	struct mlx5dr_pool_chunk nop_dw5;
@@ -146,6 +152,7 @@ struct mlx5dr_action {
 			uint8_t anchor;
 			uint8_t offset;
 			bool encap;
+			uint8_t require_reparse;
 		} reformat;
 		struct {
 			struct mlx5dr_action