From patchwork Mon Apr 17 09:25:33 2023
X-Patchwork-Submitter: Rongwei Liu
X-Patchwork-Id: 126174
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Rongwei Liu
Cc: Ferruh Yigit, Andrew Rybchenko
Subject: [PATCH v1 1/8] ethdev: add IPv6 extension push remove action
Date: Mon, 17 Apr 2023 12:25:33 +0300
Message-ID: <20230417092540.2617450-2-rongweil@nvidia.com>
In-Reply-To: <20230417092540.2617450-1-rongweil@nvidia.com>
References: <20230417092540.2617450-1-rongweil@nvidia.com>
Add new rte_flow actions to push a specific type of IPv6 extension onto, and
remove it from, the original packets. A new extension to be pushed must be the
last extension because of next-header awareness. Remove supports an IPv6
extension in any position.

Signed-off-by: Rongwei Liu
Acked-by: Ori Kam
---
 doc/guides/prog_guide/rte_flow.rst | 21 ++++++++++++
 lib/ethdev/rte_flow.c              |  2 ++
 lib/ethdev/rte_flow.h              | 52 ++++++++++++++++++++++++++++++
 3 files changed, 75 insertions(+)

diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index 32fc45516a..2fe42e1cea 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -3300,6 +3300,27 @@ The ``quota`` value is reduced according to ``mode`` setting.
    | ``RTE_FLOW_QUOTA_MODE_L3`` | Count packet bytes starting from L3 |
    +------------------+----------------------------------------------------+
 
+Action: ``IPV6_EXT_PUSH``
+^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Add an IPv6 extension into the IPv6 header. Its template is provided in
+the action's data buffer, with the specific type as defined in the
+``rte_flow_action_ipv6_ext_push`` definition.
+
+This action modifies the payload of matched flows. The data supplied must
+be a valid extension of the specified type and must be added as the last
+extension if preceding extensions exist. When applied to the original
+packet, the resulting packet must be a valid packet.
+
+Action: ``IPV6_EXT_REMOVE``
+^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Remove an IPv6 extension whose type is provided in the ``type`` field of
+the ``rte_flow_action_ipv6_ext_remove`` definition.
+
+This action modifies the payload of the matched flow; the packet must
+remain valid after the extension is removed.
+
 Negative types
 ~~~~~~~~~~~~~~
 
diff --git a/lib/ethdev/rte_flow.c b/lib/ethdev/rte_flow.c
index 69e6e749f7..af4b3f6da4 100644
--- a/lib/ethdev/rte_flow.c
+++ b/lib/ethdev/rte_flow.c
@@ -259,6 +259,8 @@ static const struct rte_flow_desc_data rte_flow_desc_action[] = {
 	MK_FLOW_ACTION(METER_MARK, sizeof(struct rte_flow_action_meter_mark)),
 	MK_FLOW_ACTION(SEND_TO_KERNEL, 0),
 	MK_FLOW_ACTION(QUOTA, sizeof(struct rte_flow_action_quota)),
+	MK_FLOW_ACTION(IPV6_EXT_PUSH, sizeof(struct rte_flow_action_ipv6_ext_push)),
+	MK_FLOW_ACTION(IPV6_EXT_REMOVE, sizeof(struct rte_flow_action_ipv6_ext_remove)),
 };
 
 int
diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index 713ba8b65c..369ecbc6ba 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -2912,6 +2912,25 @@ enum rte_flow_action_type {
 	 * applied to the given ethdev Rx queue.
 	 */
 	RTE_FLOW_ACTION_TYPE_SKIP_CMAN,
+
+	/**
+	 * RTE_FLOW_ACTION_TYPE_IPV6_EXT_PUSH
+	 *
+	 * Push IPv6 extension into IPv6 packet.
+	 *
+	 * @see struct rte_flow_action_ipv6_ext_push.
+	 */
+	RTE_FLOW_ACTION_TYPE_IPV6_EXT_PUSH,
+
+	/**
+	 * RTE_FLOW_ACTION_TYPE_IPV6_EXT_REMOVE
+	 *
+	 * Remove IPv6 extension from IPv6 packet whose type
+	 * is provided in its configuration buffer.
+	 *
+	 * @see struct rte_flow_action_ipv6_ext_remove.
+	 */
+	RTE_FLOW_ACTION_TYPE_IPV6_EXT_REMOVE,
 };
 
 /**
@@ -3352,6 +3371,39 @@ struct rte_flow_action_vxlan_encap {
 	struct rte_flow_item *definition;
 };
 
+/**
+ * @warning
+ * @b EXPERIMENTAL: this structure may change without prior notice
+ *
+ * RTE_FLOW_ACTION_TYPE_IPV6_EXT_PUSH
+ *
+ * Valid flow definition for RTE_FLOW_ACTION_TYPE_IPV6_EXT_PUSH include:
+ *
+ * - IPV6_EXT TYPE / IPV6_EXT_HEADER_IN_TYPE / END
+ *
+ * size holds the number of bytes in @p data.
+ * The data must be added as the last IPv6 extension.
+ */
+struct rte_flow_action_ipv6_ext_push {
+	uint8_t *data; /**< IPv6 extension header data. */
+	size_t size; /**< Size of @p data. */
+	uint8_t type; /**< Type of IPv6 extension. */
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this structure may change without prior notice
+ *
+ * RTE_FLOW_ACTION_TYPE_IPV6_EXT_REMOVE
+ *
+ * Valid flow definition for RTE_FLOW_ACTION_TYPE_IPV6_EXT_REMOVE include:
+ *
+ * - IPV6_EXT TYPE / END
+ */
+struct rte_flow_action_ipv6_ext_remove {
+	uint8_t type; /**< Type of IPv6 extension. */
+};
+
 /**
  * @warning
  * @b EXPERIMENTAL: this structure may change without prior notice
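
For reference, a minimal usage sketch (not part of the patch) of how an
application might attach the new actions to a flow rule. The helper name,
the template buffer, and the routing-header type value (43, IPPROTO_ROUTING)
are illustrative assumptions; whether a PMD accepts these actions through the
synchronous rte_flow_create() path or only through the template API is
driver-specific.

/*
 * Hedged sketch: push a pre-built IPv6 routing (SRv6) extension header
 * template as the last extension of matched packets.
 */
#include <netinet/in.h>
#include <rte_flow.h>

static struct rte_flow *
create_srv6_push_flow(uint16_t port_id, uint8_t *ext_tmpl, size_t ext_len,
		      const struct rte_flow_attr *attr,
		      const struct rte_flow_item pattern[],
		      struct rte_flow_error *error)
{
	/* Template of the complete extension header to append last. */
	struct rte_flow_action_ipv6_ext_push push = {
		.data = ext_tmpl,        /* extension header bytes */
		.size = ext_len,         /* number of bytes in data */
		.type = IPPROTO_ROUTING, /* 43: IPv6 routing header (SRH) */
	};
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_IPV6_EXT_PUSH, .conf = &push },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};

	return rte_flow_create(port_id, attr, pattern, actions, error);
}

/*
 * The reverse direction would use the remove action, e.g.:
 *   struct rte_flow_action_ipv6_ext_remove rm = { .type = IPPROTO_ROUTING };
 *   { .type = RTE_FLOW_ACTION_TYPE_IPV6_EXT_REMOVE, .conf = &rm },
 */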
From patchwork Mon Apr 17 09:25:34 2023
X-Patchwork-Submitter: Rongwei Liu
X-Patchwork-Id: 126176
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Rongwei Liu
Cc: Aman Singh, Yuying Zhang
Subject: [PATCH v1 2/8] app/testpmd: add IPv6 extension push remove cli
Date: Mon, 17 Apr 2023 12:25:34 +0300
Message-ID: <20230417092540.2617450-3-rongweil@nvidia.com>
In-Reply-To: <20230417092540.2617450-1-rongweil@nvidia.com>
References: <20230417092540.2617450-1-rongweil@nvidia.com>
+WWCQWusRZM5rB4TbPQ5skjsPvfISO1pcnfrxaWVDzyQgCrafrZhcwWEdzIbuH/SUtv5e7mmFTcDl91TGWVWUH8lnodPSVBviWmWfLerokomH+bXahst6e9vciKf61Lynk4lFSvH2GkRbb3NZjEB2D1UxBuKK0/mnV6aLubPkUvFpv0aNIyT88zvcktuOWH+1qZPjloOskinQji1k6TMDb5D+6d3tSrtD4+DFsXFLrTZj3hvIBwyk7LEZjT4VWrXvrzZ8lkB2Bjwh+QyE1hq2LceyMX3qZxnveoAuvxsvmKo/h8BCedWZ2aUi3D/UP+0xa61+TU9XFOzXob9sgUQ31WdY+3kMlM3TwGR4SqW70ftU5HMMn+h+jkO9mi1zdrWDI25jmJqicaqrXlvlXLwyDw1AjYBBq9apXcTNAMTJWUMLXt/g9YNUbal6M3RagVv7MLYPGCAqm7/+wJKz6UKT+DndmcfGKPnn6aCbJGRxfL0fVlESq9aLwxeJ9MZkS8dycYeTs49wXyOlXWsz219YSJJkcppPOlOfpFuxnpD3MkMZvT8BNx9BmlvtteHEGit4L6x8twYioi6ysIkFyYymDyfdDrBIaZ07cYsXPFfIZVyq9Ta/szKIkGGqwsSoRFf1kIGf9Wt5JYu5ycYqyBE4NJcmng/06kGZCkE+CY8svo+DeusZtM4N4NONtRmO6qXSUJSDiG3tJ17RMf306ONWD2sLQrI89HOb5qNSG2Hj4Yl/YxTSiyXAEn/aLDE3gMW X-Forefront-Antispam-Report: CIP:216.228.117.160; CTRY:US; LANG:en; SCL:1; SRV:; IPV:NLI; SFV:NSPM; H:mail.nvidia.com; PTR:dc6edge1.nvidia.com; CAT:NONE; SFS:(13230028)(4636009)(346002)(136003)(396003)(39850400004)(376002)(451199021)(36840700001)(46966006)(8676002)(8936002)(5660300002)(2906002)(30864003)(36756003)(82310400005)(86362001)(40480700001)(55016003)(34020700004)(478600001)(7696005)(6666004)(54906003)(110136005)(16526019)(6286002)(186003)(2616005)(36860700001)(26005)(1076003)(70206006)(70586007)(82740400003)(7636003)(83380400001)(316002)(41300700001)(356005)(4326008)(47076005)(426003)(336012); DIR:OUT; SFP:1101; X-OriginatorOrg: Nvidia.com X-MS-Exchange-CrossTenant-OriginalArrivalTime: 17 Apr 2023 09:26:12.7653 (UTC) X-MS-Exchange-CrossTenant-Network-Message-Id: c325ad55-7dc1-4df6-173e-08db3f25c97b X-MS-Exchange-CrossTenant-Id: 43083d15-7273-40c1-b7db-39efd9ccc17a X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=43083d15-7273-40c1-b7db-39efd9ccc17a; Ip=[216.228.117.160]; Helo=[mail.nvidia.com] X-MS-Exchange-CrossTenant-AuthSource: BN8NAM11FT105.eop-nam11.prod.protection.outlook.com X-MS-Exchange-CrossTenant-AuthAs: Anonymous X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem X-MS-Exchange-Transport-CrossTenantHeadersStamped: BY5PR12MB4097 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Add command lines to generate IPv6 routing extension push and remove patterns and follow the raw_encap/decap style. Add the new actions to the action template parsing. Generating the action patterns 1. IPv6 routing extension push set ipv6_ext_push 1 ipv6_ext type is 43 / ipv6_routing_ext ext_type is 4 ext_next_hdr is 17 ext_seg_left is 2 / end_set 2. IPv6 routing extension remove set ipv6_ext_remove 1 ipv6_ext type is 43 / end_set Specifying the action in the template 1. actions_template_id 1 template ipv6_ext_push index 1 2. actions_template_id 1 template ipv6_ext_remove index 1 Signed-off-by: Rongwei Liu Acked-by: Ori Kam --- app/test-pmd/cmdline_flow.c | 443 +++++++++++++++++++++++++++++++++++- 1 file changed, 442 insertions(+), 1 deletion(-) diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c index 58939ec321..ea4cebce1c 100644 --- a/app/test-pmd/cmdline_flow.c +++ b/app/test-pmd/cmdline_flow.c @@ -74,6 +74,9 @@ enum index { SET_RAW_INDEX, SET_SAMPLE_ACTIONS, SET_SAMPLE_INDEX, + SET_IPV6_EXT_REMOVE, + SET_IPV6_EXT_PUSH, + SET_IPV6_EXT_INDEX, /* Top-level command. 
*/ FLOW, @@ -496,6 +499,8 @@ enum index { ITEM_QUOTA_STATE_NAME, ITEM_AGGR_AFFINITY, ITEM_AGGR_AFFINITY_VALUE, + ITEM_IPV6_PUSH_REMOVE_EXT, + ITEM_IPV6_PUSH_REMOVE_EXT_TYPE, /* Validate/create actions. */ ACTIONS, @@ -665,6 +670,12 @@ enum index { ACTION_QUOTA_QU_LIMIT, ACTION_QUOTA_QU_UPDATE_OP, ACTION_QUOTA_QU_UPDATE_OP_NAME, + ACTION_IPV6_EXT_REMOVE, + ACTION_IPV6_EXT_REMOVE_INDEX, + ACTION_IPV6_EXT_REMOVE_INDEX_VALUE, + ACTION_IPV6_EXT_PUSH, + ACTION_IPV6_EXT_PUSH_INDEX, + ACTION_IPV6_EXT_PUSH_INDEX_VALUE, }; /** Maximum size for pattern in struct rte_flow_item_raw. */ @@ -731,6 +742,42 @@ struct action_raw_decap_data { uint16_t idx; }; +/** Maximum data size in struct rte_flow_action_ipv6_ext_push. */ +#define ACTION_IPV6_EXT_PUSH_MAX_DATA 512 +#define IPV6_EXT_PUSH_CONFS_MAX_NUM 8 + +/** Storage for struct rte_flow_action_ipv6_ext_push. */ +struct ipv6_ext_push_conf { + uint8_t data[ACTION_IPV6_EXT_PUSH_MAX_DATA]; + size_t size; + uint8_t type; +}; + +struct ipv6_ext_push_conf ipv6_ext_push_confs[IPV6_EXT_PUSH_CONFS_MAX_NUM]; + +/** Storage for struct rte_flow_action_ipv6_ext_push including external data. */ +struct action_ipv6_ext_push_data { + struct rte_flow_action_ipv6_ext_push conf; + uint8_t data[ACTION_IPV6_EXT_PUSH_MAX_DATA]; + uint8_t type; + uint16_t idx; +}; + +/** Storage for struct rte_flow_action_ipv6_ext_remove. */ +struct ipv6_ext_remove_conf { + struct rte_flow_action_ipv6_ext_remove conf; + uint8_t type; +}; + +struct ipv6_ext_remove_conf ipv6_ext_remove_confs[IPV6_EXT_PUSH_CONFS_MAX_NUM]; + +/** Storage for struct rte_flow_action_ipv6_ext_remove including external data. */ +struct action_ipv6_ext_remove_data { + struct rte_flow_action_ipv6_ext_remove conf; + uint8_t type; + uint16_t idx; +}; + struct vxlan_encap_conf vxlan_encap_conf = { .select_ipv4 = 1, .select_vlan = 0, @@ -2022,6 +2069,8 @@ static const enum index next_action[] = { ACTION_SEND_TO_KERNEL, ACTION_QUOTA_CREATE, ACTION_QUOTA_QU, + ACTION_IPV6_EXT_REMOVE, + ACTION_IPV6_EXT_PUSH, ZERO, }; @@ -2230,6 +2279,18 @@ static const enum index action_raw_decap[] = { ZERO, }; +static const enum index action_ipv6_ext_remove[] = { + ACTION_IPV6_EXT_REMOVE_INDEX, + ACTION_NEXT, + ZERO, +}; + +static const enum index action_ipv6_ext_push[] = { + ACTION_IPV6_EXT_PUSH_INDEX, + ACTION_NEXT, + ZERO, +}; + static const enum index action_set_tag[] = { ACTION_SET_TAG_DATA, ACTION_SET_TAG_INDEX, @@ -2293,6 +2354,22 @@ static const enum index next_action_sample[] = { ZERO, }; +static const enum index item_ipv6_push_ext[] = { + ITEM_IPV6_PUSH_REMOVE_EXT, + ZERO, +}; + +static const enum index item_ipv6_push_ext_type[] = { + ITEM_IPV6_PUSH_REMOVE_EXT_TYPE, + ZERO, +}; + +static const enum index item_ipv6_push_ext_header[] = { + ITEM_IPV6_ROUTING_EXT, + ITEM_NEXT, + ZERO, +}; + static const enum index action_modify_field_dst[] = { ACTION_MODIFY_FIELD_DST_LEVEL, ACTION_MODIFY_FIELD_DST_OFFSET, @@ -2334,6 +2411,9 @@ static int parse_set_raw_encap_decap(struct context *, const struct token *, static int parse_set_sample_action(struct context *, const struct token *, const char *, unsigned int, void *, unsigned int); +static int parse_set_ipv6_ext_action(struct context *, const struct token *, + const char *, unsigned int, + void *, unsigned int); static int parse_set_init(struct context *, const struct token *, const char *, unsigned int, void *, unsigned int); @@ -2411,6 +2491,22 @@ static int parse_vc_action_raw_encap_index(struct context *, static int parse_vc_action_raw_decap_index(struct context *, const struct token *, 
const char *, unsigned int, void *, unsigned int); +static int parse_vc_action_ipv6_ext_remove(struct context *ctx, const struct token *token, + const char *str, unsigned int len, void *buf, + unsigned int size); +static int parse_vc_action_ipv6_ext_remove_index(struct context *ctx, + const struct token *token, + const char *str, unsigned int len, + void *buf, + unsigned int size); +static int parse_vc_action_ipv6_ext_push(struct context *ctx, const struct token *token, + const char *str, unsigned int len, void *buf, + unsigned int size); +static int parse_vc_action_ipv6_ext_push_index(struct context *ctx, + const struct token *token, + const char *str, unsigned int len, + void *buf, + unsigned int size); static int parse_vc_action_set_meta(struct context *ctx, const struct token *token, const char *str, unsigned int len, void *buf, @@ -2596,6 +2692,8 @@ static int comp_set_raw_index(struct context *, const struct token *, unsigned int, char *, unsigned int); static int comp_set_sample_index(struct context *, const struct token *, unsigned int, char *, unsigned int); +static int comp_set_ipv6_ext_index(struct context *ctx, const struct token *token, + unsigned int ent, char *buf, unsigned int size); static int comp_set_modify_field_op(struct context *, const struct token *, unsigned int, char *, unsigned int); static int comp_set_modify_field_id(struct context *, const struct token *, @@ -6472,6 +6570,48 @@ static const struct token token_list[] = { .next = NEXT(NEXT_ENTRY(ACTION_NEXT)), .call = parse_vc, }, + [ACTION_IPV6_EXT_REMOVE] = { + .name = "ipv6_ext_remove", + .help = "IPv6 extension type, defined by set ipv6_ext_remove", + .priv = PRIV_ACTION(IPV6_EXT_REMOVE, + sizeof(struct action_ipv6_ext_remove_data)), + .next = NEXT(action_ipv6_ext_remove), + .call = parse_vc_action_ipv6_ext_remove, + }, + [ACTION_IPV6_EXT_REMOVE_INDEX] = { + .name = "index", + .help = "the index of ipv6_ext_remove", + .next = NEXT(NEXT_ENTRY(ACTION_IPV6_EXT_REMOVE_INDEX_VALUE)), + }, + [ACTION_IPV6_EXT_REMOVE_INDEX_VALUE] = { + .name = "{index}", + .type = "UNSIGNED", + .help = "unsigned integer value", + .next = NEXT(NEXT_ENTRY(ACTION_NEXT)), + .call = parse_vc_action_ipv6_ext_remove_index, + .comp = comp_set_ipv6_ext_index, + }, + [ACTION_IPV6_EXT_PUSH] = { + .name = "ipv6_ext_push", + .help = "IPv6 extension data, defined by set ipv6_ext_push", + .priv = PRIV_ACTION(IPV6_EXT_PUSH, + sizeof(struct action_ipv6_ext_push_data)), + .next = NEXT(action_ipv6_ext_push), + .call = parse_vc_action_ipv6_ext_push, + }, + [ACTION_IPV6_EXT_PUSH_INDEX] = { + .name = "index", + .help = "the index of ipv6_ext_push", + .next = NEXT(NEXT_ENTRY(ACTION_IPV6_EXT_PUSH_INDEX_VALUE)), + }, + [ACTION_IPV6_EXT_PUSH_INDEX_VALUE] = { + .name = "{index}", + .type = "UNSIGNED", + .help = "unsigned integer value", + .next = NEXT(NEXT_ENTRY(ACTION_NEXT)), + .call = parse_vc_action_ipv6_ext_push_index, + .comp = comp_set_ipv6_ext_index, + }, /* Top level command. */ [SET] = { .name = "set", @@ -6481,7 +6621,9 @@ static const struct token token_list[] = { .next = NEXT(NEXT_ENTRY (SET_RAW_ENCAP, SET_RAW_DECAP, - SET_SAMPLE_ACTIONS)), + SET_SAMPLE_ACTIONS, + SET_IPV6_EXT_REMOVE, + SET_IPV6_EXT_PUSH)), .call = parse_set_init, }, /* Sub-level commands. 
*/ @@ -6529,6 +6671,49 @@ static const struct token token_list[] = { 0, RAW_SAMPLE_CONFS_MAX_NUM - 1)), .call = parse_set_sample_action, }, + [SET_IPV6_EXT_PUSH] = { + .name = "ipv6_ext_push", + .help = "set IPv6 extension header", + .next = NEXT(NEXT_ENTRY(SET_IPV6_EXT_INDEX)), + .args = ARGS(ARGS_ENTRY_ARB_BOUNDED + (offsetof(struct buffer, port), + sizeof(((struct buffer *)0)->port), + 0, IPV6_EXT_PUSH_CONFS_MAX_NUM - 1)), + .call = parse_set_ipv6_ext_action, + }, + [SET_IPV6_EXT_REMOVE] = { + .name = "ipv6_ext_remove", + .help = "set IPv6 extension header", + .next = NEXT(NEXT_ENTRY(SET_IPV6_EXT_INDEX)), + .args = ARGS(ARGS_ENTRY_ARB_BOUNDED + (offsetof(struct buffer, port), + sizeof(((struct buffer *)0)->port), + 0, IPV6_EXT_PUSH_CONFS_MAX_NUM - 1)), + .call = parse_set_ipv6_ext_action, + }, + [SET_IPV6_EXT_INDEX] = { + .name = "{index}", + .type = "UNSIGNED", + .help = "index of ipv6 extension push/remove actions", + .next = NEXT(item_ipv6_push_ext), + .call = parse_port, + }, + [ITEM_IPV6_PUSH_REMOVE_EXT] = { + .name = "ipv6_ext", + .help = "set IPv6 extension header", + .priv = PRIV_ITEM(IPV6_EXT, + sizeof(struct rte_flow_item_ipv6_ext)), + .next = NEXT(item_ipv6_push_ext_type), + .call = parse_vc, + }, + [ITEM_IPV6_PUSH_REMOVE_EXT_TYPE] = { + .name = "type", + .help = "set IPv6 extension type", + .args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_ipv6_ext, + next_hdr)), + .next = NEXT(item_ipv6_push_ext_header, NEXT_ENTRY(COMMON_UNSIGNED), + item_param), + }, [ACTION_SET_TAG] = { .name = "set_tag", .help = "set tag", @@ -8843,6 +9028,140 @@ parse_vc_action_raw_decap(struct context *ctx, const struct token *token, return ret; } +static int +parse_vc_action_ipv6_ext_remove(struct context *ctx, const struct token *token, + const char *str, unsigned int len, void *buf, + unsigned int size) +{ + struct buffer *out = buf; + struct rte_flow_action *action; + struct action_ipv6_ext_remove_data *ipv6_ext_remove_data = NULL; + int ret; + + ret = parse_vc(ctx, token, str, len, buf, size); + if (ret < 0) + return ret; + /* Nothing else to do if there is no buffer. */ + if (!out) + return ret; + if (!out->args.vc.actions_n) + return -1; + action = &out->args.vc.actions[out->args.vc.actions_n - 1]; + /* Point to selected object. */ + ctx->object = out->args.vc.data; + ctx->objmask = NULL; + /* Copy the headers to the buffer. 
*/ + ipv6_ext_remove_data = ctx->object; + ipv6_ext_remove_data->conf.type = ipv6_ext_remove_confs[0].type; + action->conf = &ipv6_ext_remove_data->conf; + return ret; +} + +static int +parse_vc_action_ipv6_ext_remove_index(struct context *ctx, const struct token *token, + const char *str, unsigned int len, void *buf, + unsigned int size) +{ + struct action_ipv6_ext_remove_data *action_ipv6_ext_remove_data; + struct rte_flow_action *action; + const struct arg *arg; + struct buffer *out = buf; + int ret; + uint16_t idx; + + RTE_SET_USED(token); + RTE_SET_USED(buf); + RTE_SET_USED(size); + arg = ARGS_ENTRY_ARB_BOUNDED + (offsetof(struct action_ipv6_ext_remove_data, idx), + sizeof(((struct action_ipv6_ext_remove_data *)0)->idx), + 0, IPV6_EXT_PUSH_CONFS_MAX_NUM - 1); + if (push_args(ctx, arg)) + return -1; + ret = parse_int(ctx, token, str, len, NULL, 0); + if (ret < 0) { + pop_args(ctx); + return -1; + } + if (!ctx->object) + return len; + action = &out->args.vc.actions[out->args.vc.actions_n - 1]; + action_ipv6_ext_remove_data = ctx->object; + idx = action_ipv6_ext_remove_data->idx; + action_ipv6_ext_remove_data->conf.type = ipv6_ext_remove_confs[idx].type; + action->conf = &action_ipv6_ext_remove_data->conf; + return len; +} + +static int +parse_vc_action_ipv6_ext_push(struct context *ctx, const struct token *token, + const char *str, unsigned int len, void *buf, + unsigned int size) +{ + struct buffer *out = buf; + struct rte_flow_action *action; + struct action_ipv6_ext_push_data *ipv6_ext_push_data = NULL; + int ret; + + ret = parse_vc(ctx, token, str, len, buf, size); + if (ret < 0) + return ret; + /* Nothing else to do if there is no buffer. */ + if (!out) + return ret; + if (!out->args.vc.actions_n) + return -1; + action = &out->args.vc.actions[out->args.vc.actions_n - 1]; + /* Point to selected object. */ + ctx->object = out->args.vc.data; + ctx->objmask = NULL; + /* Copy the headers to the buffer. 
*/ + ipv6_ext_push_data = ctx->object; + ipv6_ext_push_data->conf.type = ipv6_ext_push_confs[0].type; + ipv6_ext_push_data->conf.data = ipv6_ext_push_confs[0].data; + ipv6_ext_push_data->conf.size = ipv6_ext_push_confs[0].size; + action->conf = &ipv6_ext_push_data->conf; + return ret; +} + +static int +parse_vc_action_ipv6_ext_push_index(struct context *ctx, const struct token *token, + const char *str, unsigned int len, void *buf, + unsigned int size) +{ + struct action_ipv6_ext_push_data *action_ipv6_ext_push_data; + struct rte_flow_action *action; + const struct arg *arg; + struct buffer *out = buf; + int ret; + uint16_t idx; + + RTE_SET_USED(token); + RTE_SET_USED(buf); + RTE_SET_USED(size); + arg = ARGS_ENTRY_ARB_BOUNDED + (offsetof(struct action_ipv6_ext_push_data, idx), + sizeof(((struct action_ipv6_ext_push_data *)0)->idx), + 0, IPV6_EXT_PUSH_CONFS_MAX_NUM - 1); + if (push_args(ctx, arg)) + return -1; + ret = parse_int(ctx, token, str, len, NULL, 0); + if (ret < 0) { + pop_args(ctx); + return -1; + } + if (!ctx->object) + return len; + action = &out->args.vc.actions[out->args.vc.actions_n - 1]; + action_ipv6_ext_push_data = ctx->object; + idx = action_ipv6_ext_push_data->idx; + action_ipv6_ext_push_data->conf.type = ipv6_ext_push_confs[idx].type; + action_ipv6_ext_push_data->conf.size = ipv6_ext_push_confs[idx].size; + action_ipv6_ext_push_data->conf.data = ipv6_ext_push_confs[idx].data; + action->conf = &action_ipv6_ext_push_data->conf; + return len; +} + static int parse_vc_action_set_meta(struct context *ctx, const struct token *token, const char *str, unsigned int len, void *buf, @@ -10532,6 +10851,35 @@ parse_set_sample_action(struct context *ctx, const struct token *token, return len; } +/** Parse set command, initialize output buffer for subsequent tokens. */ +static int +parse_set_ipv6_ext_action(struct context *ctx, const struct token *token, + const char *str, unsigned int len, + void *buf, unsigned int size) +{ + struct buffer *out = buf; + + /* Token name must match. */ + if (parse_default(ctx, token, str, len, NULL, 0) < 0) + return -1; + /* Nothing else to do if there is no buffer. */ + if (!out) + return len; + /* Make sure buffer is large enough. */ + if (size < sizeof(*out)) + return -1; + ctx->objdata = 0; + ctx->objmask = NULL; + ctx->object = out; + if (!out->command) + return -1; + out->command = ctx->curr; + /* For ipv6_ext_push/remove we need is pattern */ + out->args.vc.pattern = (void *)RTE_ALIGN_CEIL((uintptr_t)(out + 1), + sizeof(double)); + return len; +} + /** * Parse set raw_encap/raw_decap command, * initialize output buffer for subsequent tokens. @@ -10961,6 +11309,24 @@ comp_set_raw_index(struct context *ctx, const struct token *token, return nb; } +/** Complete index number for set raw_ipv6_ext_push/ipv6_ext_remove commands. */ +static int +comp_set_ipv6_ext_index(struct context *ctx, const struct token *token, + unsigned int ent, char *buf, unsigned int size) +{ + uint16_t idx = 0; + uint16_t nb = 0; + + RTE_SET_USED(ctx); + RTE_SET_USED(token); + for (idx = 0; idx < IPV6_EXT_PUSH_CONFS_MAX_NUM; ++idx) { + if (buf && idx == ent) + return snprintf(buf, size, "%u", idx); + ++nb; + } + return nb; +} + /** Complete index number for set raw_encap/raw_decap commands. */ static int comp_set_sample_index(struct context *ctx, const struct token *token, @@ -11855,6 +12221,78 @@ flow_item_default_mask(const struct rte_flow_item *item) return mask; } +/** Dispatch parsed buffer to function calls. 
*/ +static void +cmd_set_ipv6_ext_parsed(const struct buffer *in) +{ + uint32_t n = in->args.vc.pattern_n; + int i = 0; + struct rte_flow_item *item = NULL; + size_t size = 0; + uint8_t *data = NULL; + uint8_t *type = NULL; + size_t *total_size = NULL; + uint16_t idx = in->port; /* We borrow port field as index */ + struct rte_flow_item_ipv6_routing_ext *ext; + const struct rte_flow_item_ipv6_ext *ipv6_ext; + + RTE_ASSERT(in->command == SET_IPV6_EXT_PUSH || + in->command == SET_IPV6_EXT_REMOVE); + + if (in->command == SET_IPV6_EXT_REMOVE) { + if (n != 1 || in->args.vc.pattern->type != + RTE_FLOW_ITEM_TYPE_IPV6_EXT) { + fprintf(stderr, "Error - Not supported item\n"); + return; + } + type = (uint8_t *)&ipv6_ext_remove_confs[idx].type; + item = in->args.vc.pattern; + ipv6_ext = item->spec; + *type = ipv6_ext->next_hdr; + return; + } + + total_size = &ipv6_ext_push_confs[idx].size; + data = (uint8_t *)&ipv6_ext_push_confs[idx].data; + type = (uint8_t *)&ipv6_ext_push_confs[idx].type; + + *total_size = 0; + memset(data, 0x00, ACTION_RAW_ENCAP_MAX_DATA); + for (i = n - 1 ; i >= 0; --i) { + item = in->args.vc.pattern + i; + switch (item->type) { + case RTE_FLOW_ITEM_TYPE_IPV6_EXT: + ipv6_ext = item->spec; + *type = ipv6_ext->next_hdr; + break; + case RTE_FLOW_ITEM_TYPE_IPV6_ROUTING_EXT: + ext = (struct rte_flow_item_ipv6_routing_ext *)(uintptr_t)item->spec; + if (!ext->hdr.hdr_len) { + size = sizeof(struct rte_ipv6_routing_ext) + + (ext->hdr.segments_left << 4); + ext->hdr.hdr_len = ext->hdr.segments_left << 1; + /* Indicate no TLV once SRH. */ + if (ext->hdr.type == 4) + ext->hdr.last_entry = ext->hdr.segments_left - 1; + } else { + size = sizeof(struct rte_ipv6_routing_ext) + + (ext->hdr.hdr_len << 3); + } + *total_size += size; + memcpy(data, ext, size); + break; + default: + fprintf(stderr, "Error - Not supported item\n"); + goto error; + } + } + RTE_ASSERT((*total_size) <= ACTION_IPV6_EXT_PUSH_MAX_DATA); + return; +error: + *total_size = 0; + memset(data, 0x00, ACTION_IPV6_EXT_PUSH_MAX_DATA); +} + /** Dispatch parsed buffer to function calls. 
 */
 static void
 cmd_set_raw_parsed_sample(const struct buffer *in)
@@ -11988,6 +12426,9 @@ cmd_set_raw_parsed(const struct buffer *in)
 
 	if (in->command == SET_SAMPLE_ACTIONS)
 		return cmd_set_raw_parsed_sample(in);
+	else if (in->command == SET_IPV6_EXT_PUSH ||
+		 in->command == SET_IPV6_EXT_REMOVE)
+		return cmd_set_ipv6_ext_parsed(in);
 	RTE_ASSERT(in->command == SET_RAW_ENCAP ||
 		   in->command == SET_RAW_DECAP);
 	if (in->command == SET_RAW_ENCAP) {
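
The sizing logic in cmd_set_ipv6_ext_parsed() above follows the SRH layout of
RFC 8754: each segment is a 16-byte IPv6 address, and hdr_len counts 8-byte
units beyond the first 8 bytes of the header. A stand-alone sketch of that
arithmetic is shown below; the helper name is illustrative only and not part
of the patch.

/*
 * Hedged sketch of the SRv6 routing-extension sizing used by the testpmd
 * parser: derive hdr_len/last_entry when the user left them zero, then
 * return the total number of bytes to copy into the push template.
 */
#include <stddef.h>
#include <stdint.h>
#include <rte_ip.h>

static size_t
srv6_header_size(struct rte_ipv6_routing_ext *ext)
{
	if (!ext->hdr_len) {
		/* hdr_len is in 8-byte units, two units per 16-byte segment. */
		ext->hdr_len = ext->segments_left << 1;
		if (ext->type == 4) /* SRH without TLVs: last_entry = n - 1 */
			ext->last_entry = ext->segments_left - 1;
		return sizeof(*ext) + ((size_t)ext->segments_left << 4);
	}
	/* hdr_len already provided: total size is 8 + hdr_len * 8 bytes. */
	return sizeof(*ext) + ((size_t)ext->hdr_len << 3);
}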
From patchwork Mon Apr 17 09:25:35 2023
X-Patchwork-Submitter: Rongwei Liu
X-Patchwork-Id: 126175
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Rongwei Liu
Subject: [PATCH v1 3/8] net/mlx5/hws: add no reparse support
Date: Mon, 17 Apr 2023 12:25:35 +0300
Message-ID: <20230417092540.2617450-4-rongweil@nvidia.com>
In-Reply-To: <20230417092540.2617450-1-rongweil@nvidia.com>
References: <20230417092540.2617450-1-rongweil@nvidia.com>
SFS:(13230028)(4636009)(136003)(346002)(396003)(39860400002)(376002)(451199021)(36840700001)(40470700004)(46966006)(110136005)(36756003)(82740400003)(36860700001)(7636003)(356005)(34020700004)(86362001)(82310400005)(40460700003)(8936002)(6666004)(316002)(40480700001)(30864003)(478600001)(5660300002)(426003)(336012)(1076003)(7696005)(6286002)(16526019)(186003)(26005)(2906002)(8676002)(41300700001)(83380400001)(2616005)(55016003)(70206006)(47076005)(70586007); DIR:OUT; SFP:1101; X-OriginatorOrg: Nvidia.com X-MS-Exchange-CrossTenant-OriginalArrivalTime: 17 Apr 2023 09:26:12.9302 (UTC) X-MS-Exchange-CrossTenant-Network-Message-Id: 94e08f1b-6234-49b4-7c39-08db3f25c98f X-MS-Exchange-CrossTenant-Id: 43083d15-7273-40c1-b7db-39efd9ccc17a X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=43083d15-7273-40c1-b7db-39efd9ccc17a; Ip=[216.228.117.161]; Helo=[mail.nvidia.com] X-MS-Exchange-CrossTenant-AuthSource: CO1NAM11FT003.eop-nam11.prod.protection.outlook.com X-MS-Exchange-CrossTenant-AuthAs: Anonymous X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem X-MS-Exchange-Transport-CrossTenantHeadersStamped: SA1PR12MB7149 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Allocate two groups of RTC to control hardware reparsing when flow is updated. 1. Group 1 always reparse the traffic after packet is modified by any hardware module. This is the default behavior which is the same as before. 2. Group 2 doesn't perform any packets reparsing. This will help the complex flow rules. Signed-off-by: Rongwei Liu --- drivers/net/mlx5/hws/mlx5dr_cmd.c | 5 +- drivers/net/mlx5/hws/mlx5dr_cmd.h | 1 + drivers/net/mlx5/hws/mlx5dr_debug.c | 8 +-- drivers/net/mlx5/hws/mlx5dr_matcher.c | 80 +++++++++++++++++---------- drivers/net/mlx5/hws/mlx5dr_matcher.h | 12 ++-- drivers/net/mlx5/hws/mlx5dr_rule.c | 65 ++++++++++++++++------ 6 files changed, 117 insertions(+), 54 deletions(-) diff --git a/drivers/net/mlx5/hws/mlx5dr_cmd.c b/drivers/net/mlx5/hws/mlx5dr_cmd.c index 0adcedd9c9..42bf1980db 100644 --- a/drivers/net/mlx5/hws/mlx5dr_cmd.c +++ b/drivers/net/mlx5/hws/mlx5dr_cmd.c @@ -283,7 +283,10 @@ mlx5dr_cmd_rtc_create(struct ibv_context *ctx, MLX5_SET(rtc, attr, ste_table_base_id, rtc_attr->ste_base); MLX5_SET(rtc, attr, ste_table_offset, rtc_attr->ste_offset); MLX5_SET(rtc, attr, miss_flow_table_id, rtc_attr->miss_ft_id); - MLX5_SET(rtc, attr, reparse_mode, MLX5_IFC_RTC_REPARSE_ALWAYS); + if (rtc_attr->is_reparse) + MLX5_SET(rtc, attr, reparse_mode, MLX5_IFC_RTC_REPARSE_ALWAYS); + else + MLX5_SET(rtc, attr, reparse_mode, MLX5_IFC_RTC_REPARSE_NEVER); devx_obj->obj = mlx5_glue->devx_obj_create(ctx, in, sizeof(in), out, sizeof(out)); if (!devx_obj->obj) { diff --git a/drivers/net/mlx5/hws/mlx5dr_cmd.h b/drivers/net/mlx5/hws/mlx5dr_cmd.h index 3f40c085be..b225171d4c 100644 --- a/drivers/net/mlx5/hws/mlx5dr_cmd.h +++ b/drivers/net/mlx5/hws/mlx5dr_cmd.h @@ -51,6 +51,7 @@ struct mlx5dr_cmd_rtc_create_attr { uint8_t match_definer_1; bool is_frst_jumbo; bool is_scnd_range; + uint8_t is_reparse; }; struct mlx5dr_cmd_alias_obj_create_attr { diff --git a/drivers/net/mlx5/hws/mlx5dr_debug.c b/drivers/net/mlx5/hws/mlx5dr_debug.c index 6b32ac4ee6..b8049a173d 100644 --- a/drivers/net/mlx5/hws/mlx5dr_debug.c +++ b/drivers/net/mlx5/hws/mlx5dr_debug.c @@ -224,9 +224,9 @@ static int mlx5dr_debug_dump_matcher(FILE *f, struct mlx5dr_matcher *matcher) } ret = 
fprintf(f, ",%d,%d,%d,%d", - matcher->match_ste.rtc_0 ? matcher->match_ste.rtc_0->id : 0, + matcher->match_ste.rtc_0_reparse ? matcher->match_ste.rtc_0_reparse->id : 0, ste_0 ? (int)ste_0->id : -1, - matcher->match_ste.rtc_1 ? matcher->match_ste.rtc_1->id : 0, + matcher->match_ste.rtc_1_reparse ? matcher->match_ste.rtc_1_reparse->id : 0, ste_1 ? (int)ste_1->id : -1); if (ret < 0) goto out_err; @@ -243,9 +243,9 @@ static int mlx5dr_debug_dump_matcher(FILE *f, struct mlx5dr_matcher *matcher) } ret = fprintf(f, ",%d,%d,%d,%d,%d\n", - matcher->action_ste.rtc_0 ? matcher->action_ste.rtc_0->id : 0, + matcher->action_ste.rtc_0_reparse ? matcher->action_ste.rtc_0_reparse->id : 0, ste_0 ? (int)ste_0->id : -1, - matcher->action_ste.rtc_1 ? matcher->action_ste.rtc_1->id : 0, + matcher->action_ste.rtc_1_reparse ? matcher->action_ste.rtc_1_reparse->id : 0, ste_1 ? (int)ste_1->id : -1, is_shared && !is_root ? matcher->match_ste.aliased_rtc_0->id : 0); diff --git a/drivers/net/mlx5/hws/mlx5dr_matcher.c b/drivers/net/mlx5/hws/mlx5dr_matcher.c index 1fe7ec1bc3..652d50f73a 100644 --- a/drivers/net/mlx5/hws/mlx5dr_matcher.c +++ b/drivers/net/mlx5/hws/mlx5dr_matcher.c @@ -101,7 +101,7 @@ static int mlx5dr_matcher_shared_create_alias_rtc(struct mlx5dr_matcher *matcher ctx->ibv_ctx, ctx->local_ibv_ctx, ctx->caps->shared_vhca_id, - matcher->match_ste.rtc_0->id, + matcher->match_ste.rtc_0_reparse->id, MLX5_GENERAL_OBJ_TYPE_RTC, &matcher->match_ste.aliased_rtc_0); if (ret) { @@ -156,7 +156,7 @@ static uint32_t mlx5dr_matcher_connect_get_rtc0(struct mlx5dr_matcher *matcher) { if (!matcher->match_ste.aliased_rtc_0) - return matcher->match_ste.rtc_0->id; + return matcher->match_ste.rtc_0_reparse->id; else return matcher->match_ste.aliased_rtc_0->id; } @@ -233,10 +233,10 @@ static int mlx5dr_matcher_connect(struct mlx5dr_matcher *matcher) /* Connect to next */ if (next) { - if (next->match_ste.rtc_0) - ft_attr.rtc_id_0 = next->match_ste.rtc_0->id; - if (next->match_ste.rtc_1) - ft_attr.rtc_id_1 = next->match_ste.rtc_1->id; + if (next->match_ste.rtc_0_reparse) + ft_attr.rtc_id_0 = next->match_ste.rtc_0_reparse->id; + if (next->match_ste.rtc_1_reparse) + ft_attr.rtc_id_1 = next->match_ste.rtc_1_reparse->id; ret = mlx5dr_cmd_flow_table_modify(matcher->end_ft, &ft_attr); if (ret) { @@ -248,10 +248,10 @@ static int mlx5dr_matcher_connect(struct mlx5dr_matcher *matcher) /* Connect to previous */ ft = prev ? 
prev->end_ft : tbl->ft; - if (matcher->match_ste.rtc_0) - ft_attr.rtc_id_0 = matcher->match_ste.rtc_0->id; - if (matcher->match_ste.rtc_1) - ft_attr.rtc_id_1 = matcher->match_ste.rtc_1->id; + if (matcher->match_ste.rtc_0_reparse) + ft_attr.rtc_id_0 = matcher->match_ste.rtc_0_reparse->id; + if (matcher->match_ste.rtc_1_reparse) + ft_attr.rtc_id_1 = matcher->match_ste.rtc_1_reparse->id; ret = mlx5dr_cmd_flow_table_modify(ft, &ft_attr); if (ret) { @@ -296,10 +296,10 @@ static int mlx5dr_matcher_disconnect(struct mlx5dr_matcher *matcher) if (next) { /* Connect previous end FT to next RTC if exists */ - if (next->match_ste.rtc_0) - ft_attr.rtc_id_0 = next->match_ste.rtc_0->id; - if (next->match_ste.rtc_1) - ft_attr.rtc_id_1 = next->match_ste.rtc_1->id; + if (next->match_ste.rtc_0_reparse) + ft_attr.rtc_id_0 = next->match_ste.rtc_0_reparse->id; + if (next->match_ste.rtc_1_reparse) + ft_attr.rtc_id_1 = next->match_ste.rtc_1_reparse->id; } else { /* Matcher is last, point prev end FT to default miss */ mlx5dr_cmd_set_attr_connect_miss_tbl(tbl->ctx, @@ -470,10 +470,11 @@ static int mlx5dr_matcher_create_rtc(struct mlx5dr_matcher *matcher, struct mlx5dr_pool_chunk *ste; int ret; + rtc_attr.is_reparse = true; switch (rtc_type) { case DR_MATCHER_RTC_TYPE_MATCH: - rtc_0 = &matcher->match_ste.rtc_0; - rtc_1 = &matcher->match_ste.rtc_1; + rtc_0 = &matcher->match_ste.rtc_0_reparse; + rtc_1 = &matcher->match_ste.rtc_1_reparse; ste_pool = matcher->match_ste.pool; ste = &matcher->match_ste.ste; ste->order = attr->table.sz_col_log + attr->table.sz_row_log; @@ -537,8 +538,8 @@ static int mlx5dr_matcher_create_rtc(struct mlx5dr_matcher *matcher, break; case DR_MATCHER_RTC_TYPE_STE_ARRAY: - rtc_0 = &matcher->action_ste.rtc_0; - rtc_1 = &matcher->action_ste.rtc_1; + rtc_0 = &matcher->action_ste.rtc_0_reparse; + rtc_1 = &matcher->action_ste.rtc_1_reparse; ste_pool = matcher->action_ste.pool; ste = &matcher->action_ste.ste; ste->order = rte_log2_u32(matcher->action_ste.max_stes) + @@ -558,6 +559,7 @@ static int mlx5dr_matcher_create_rtc(struct mlx5dr_matcher *matcher, return rte_errno; } +rertc: devx_obj = mlx5dr_pool_chunk_get_base_devx_obj(ste_pool, ste); rtc_attr.pd = ctx->pd_num; @@ -574,8 +576,8 @@ static int mlx5dr_matcher_create_rtc(struct mlx5dr_matcher *matcher, *rtc_0 = mlx5dr_cmd_rtc_create(ctx->ibv_ctx, &rtc_attr); if (!*rtc_0) { - DR_LOG(ERR, "Failed to create matcher RTC of type %s", - mlx5dr_matcher_rtc_type_to_str(rtc_type)); + DR_LOG(ERR, "Failed to create matcher RTC of type %s, reparse %u", + mlx5dr_matcher_rtc_type_to_str(rtc_type), rtc_attr.is_reparse); goto free_ste; } @@ -590,12 +592,25 @@ static int mlx5dr_matcher_create_rtc(struct mlx5dr_matcher *matcher, *rtc_1 = mlx5dr_cmd_rtc_create(ctx->ibv_ctx, &rtc_attr); if (!*rtc_1) { - DR_LOG(ERR, "Failed to create peer matcher RTC of type %s", - mlx5dr_matcher_rtc_type_to_str(rtc_type)); + DR_LOG(ERR, "Failed to create peer matcher RTC of type %s, reparse %u", + mlx5dr_matcher_rtc_type_to_str(rtc_type), rtc_attr.is_reparse); goto destroy_rtc_0; } } + /* RTC is created in reparse then no_reparse order and fw wqe. 
*/ + if (rtc_attr.is_reparse && !mlx5dr_matcher_req_fw_wqe(matcher)) { + rtc_attr.is_reparse = false; + if (rtc_type == DR_MATCHER_RTC_TYPE_MATCH) { + rtc_0 = &matcher->match_ste.rtc_0_no_reparse; + rtc_1 = &matcher->match_ste.rtc_1_no_reparse; + } else { + rtc_0 = &matcher->action_ste.rtc_0_no_reparse; + rtc_1 = &matcher->action_ste.rtc_1_no_reparse; + } + goto rertc; + } + return 0; destroy_rtc_0: @@ -609,21 +624,25 @@ static int mlx5dr_matcher_create_rtc(struct mlx5dr_matcher *matcher, static void mlx5dr_matcher_destroy_rtc(struct mlx5dr_matcher *matcher, enum mlx5dr_matcher_rtc_type rtc_type) { + struct mlx5dr_devx_obj *rtc_0, *rtc_1, *rtc_2, *rtc_3; struct mlx5dr_table *tbl = matcher->tbl; - struct mlx5dr_devx_obj *rtc_0, *rtc_1; struct mlx5dr_pool_chunk *ste; struct mlx5dr_pool *ste_pool; switch (rtc_type) { case DR_MATCHER_RTC_TYPE_MATCH: - rtc_0 = matcher->match_ste.rtc_0; - rtc_1 = matcher->match_ste.rtc_1; + rtc_0 = matcher->match_ste.rtc_0_reparse; + rtc_1 = matcher->match_ste.rtc_1_reparse; + rtc_2 = matcher->match_ste.rtc_0_no_reparse; + rtc_3 = matcher->match_ste.rtc_1_no_reparse; ste_pool = matcher->match_ste.pool; ste = &matcher->match_ste.ste; break; case DR_MATCHER_RTC_TYPE_STE_ARRAY: - rtc_0 = matcher->action_ste.rtc_0; - rtc_1 = matcher->action_ste.rtc_1; + rtc_0 = matcher->action_ste.rtc_0_reparse; + rtc_1 = matcher->action_ste.rtc_1_reparse; + rtc_2 = matcher->action_ste.rtc_0_no_reparse; + rtc_3 = matcher->action_ste.rtc_1_no_reparse; ste_pool = matcher->action_ste.pool; ste = &matcher->action_ste.ste; break; @@ -631,10 +650,15 @@ static void mlx5dr_matcher_destroy_rtc(struct mlx5dr_matcher *matcher, return; } - if (tbl->type == MLX5DR_TABLE_TYPE_FDB) + if (tbl->type == MLX5DR_TABLE_TYPE_FDB) { mlx5dr_cmd_destroy_obj(rtc_1); + if (rtc_3) + mlx5dr_cmd_destroy_obj(rtc_3); + } mlx5dr_cmd_destroy_obj(rtc_0); + if (rtc_2) + mlx5dr_cmd_destroy_obj(rtc_2); if (rtc_type == DR_MATCHER_RTC_TYPE_MATCH) mlx5dr_pool_chunk_free(ste_pool, ste); } diff --git a/drivers/net/mlx5/hws/mlx5dr_matcher.h b/drivers/net/mlx5/hws/mlx5dr_matcher.h index 4759068ab4..02fd283cd1 100644 --- a/drivers/net/mlx5/hws/mlx5dr_matcher.h +++ b/drivers/net/mlx5/hws/mlx5dr_matcher.h @@ -43,8 +43,10 @@ struct mlx5dr_match_template { struct mlx5dr_matcher_match_ste { struct mlx5dr_pool_chunk ste; - struct mlx5dr_devx_obj *rtc_0; - struct mlx5dr_devx_obj *rtc_1; + struct mlx5dr_devx_obj *rtc_0_reparse; + struct mlx5dr_devx_obj *rtc_1_reparse; + struct mlx5dr_devx_obj *rtc_0_no_reparse; + struct mlx5dr_devx_obj *rtc_1_no_reparse; struct mlx5dr_pool *pool; /* Currently not support FDB aliased */ struct mlx5dr_devx_obj *aliased_rtc_0; @@ -53,8 +55,10 @@ struct mlx5dr_matcher_match_ste { struct mlx5dr_matcher_action_ste { struct mlx5dr_pool_chunk ste; struct mlx5dr_pool_chunk stc; - struct mlx5dr_devx_obj *rtc_0; - struct mlx5dr_devx_obj *rtc_1; + struct mlx5dr_devx_obj *rtc_0_reparse; + struct mlx5dr_devx_obj *rtc_1_reparse; + struct mlx5dr_devx_obj *rtc_0_no_reparse; + struct mlx5dr_devx_obj *rtc_1_no_reparse; struct mlx5dr_pool *pool; uint8_t max_stes; }; diff --git a/drivers/net/mlx5/hws/mlx5dr_rule.c b/drivers/net/mlx5/hws/mlx5dr_rule.c index 2418ca0b26..70c6c08741 100644 --- a/drivers/net/mlx5/hws/mlx5dr_rule.c +++ b/drivers/net/mlx5/hws/mlx5dr_rule.c @@ -40,11 +40,35 @@ static void mlx5dr_rule_skip(struct mlx5dr_matcher *matcher, } } +static void mlxdr_rule_set_wqe_rtc_id(struct mlx5dr_send_ring_dep_wqe *wqe, + struct mlx5dr_matcher *matcher, + bool reparse, bool mirror) +{ + if (!mirror && !reparse) { + 
wqe->rtc_0 = matcher->match_ste.rtc_0_no_reparse->id; + wqe->retry_rtc_0 = matcher->col_matcher ? + matcher->col_matcher->match_ste.rtc_0_no_reparse->id : 0; + } else if (!mirror && reparse) { + wqe->rtc_0 = matcher->match_ste.rtc_0_reparse->id; + wqe->retry_rtc_0 = matcher->col_matcher ? + matcher->col_matcher->match_ste.rtc_0_reparse->id : 0; + } else if (mirror && reparse) { + wqe->rtc_1 = matcher->match_ste.rtc_1_reparse->id; + wqe->retry_rtc_1 = matcher->col_matcher ? + matcher->col_matcher->match_ste.rtc_1_reparse->id : 0; + } else if (mirror && !reparse) { + wqe->rtc_1 = matcher->match_ste.rtc_1_no_reparse->id; + wqe->retry_rtc_1 = matcher->col_matcher ? + matcher->col_matcher->match_ste.rtc_1_no_reparse->id : 0; + } +} + static void mlx5dr_rule_init_dep_wqe(struct mlx5dr_send_ring_dep_wqe *dep_wqe, struct mlx5dr_rule *rule, const struct rte_flow_item *items, struct mlx5dr_match_template *mt, - void *user_data) + void *user_data, + bool reparse) { struct mlx5dr_matcher *matcher = rule->matcher; struct mlx5dr_table *tbl = matcher->tbl; @@ -56,9 +80,7 @@ static void mlx5dr_rule_init_dep_wqe(struct mlx5dr_send_ring_dep_wqe *dep_wqe, switch (tbl->type) { case MLX5DR_TABLE_TYPE_NIC_RX: case MLX5DR_TABLE_TYPE_NIC_TX: - dep_wqe->rtc_0 = matcher->match_ste.rtc_0->id; - dep_wqe->retry_rtc_0 = matcher->col_matcher ? - matcher->col_matcher->match_ste.rtc_0->id : 0; + mlxdr_rule_set_wqe_rtc_id(dep_wqe, matcher, reparse, false); dep_wqe->rtc_1 = 0; dep_wqe->retry_rtc_1 = 0; break; @@ -67,18 +89,14 @@ static void mlx5dr_rule_init_dep_wqe(struct mlx5dr_send_ring_dep_wqe *dep_wqe, mlx5dr_rule_skip(matcher, mt, items, &skip_rx, &skip_tx); if (!skip_rx) { - dep_wqe->rtc_0 = matcher->match_ste.rtc_0->id; - dep_wqe->retry_rtc_0 = matcher->col_matcher ? - matcher->col_matcher->match_ste.rtc_0->id : 0; + mlxdr_rule_set_wqe_rtc_id(dep_wqe, matcher, reparse, false); } else { dep_wqe->rtc_0 = 0; dep_wqe->retry_rtc_0 = 0; } if (!skip_tx) { - dep_wqe->rtc_1 = matcher->match_ste.rtc_1->id; - dep_wqe->retry_rtc_1 = matcher->col_matcher ? - matcher->col_matcher->match_ste.rtc_1->id : 0; + mlxdr_rule_set_wqe_rtc_id(dep_wqe, matcher, reparse, true); } else { dep_wqe->rtc_1 = 0; dep_wqe->retry_rtc_1 = 0; @@ -265,8 +283,9 @@ static int mlx5dr_rule_create_hws_fw_wqe(struct mlx5dr_rule *rule, } mlx5dr_rule_create_init(rule, &ste_attr, &apply); - mlx5dr_rule_init_dep_wqe(&match_wqe, rule, items, mt, attr->user_data); - mlx5dr_rule_init_dep_wqe(&range_wqe, rule, items, mt, attr->user_data); + /* FW WQE doesn't look on rtc reparse, use default REPARSE_ALWAYS. */ + mlx5dr_rule_init_dep_wqe(&match_wqe, rule, items, mt, attr->user_data, true); + mlx5dr_rule_init_dep_wqe(&range_wqe, rule, items, mt, attr->user_data, true); ste_attr.direct_index = 0; ste_attr.rtc_0 = match_wqe.rtc_0; @@ -348,6 +367,7 @@ static int mlx5dr_rule_create_hws(struct mlx5dr_rule *rule, struct mlx5dr_actions_apply_data apply; struct mlx5dr_send_engine *queue; uint8_t total_stes, action_stes; + bool matcher_reparse; int i, ret; /* Insert rule using FW WQE if cannot use GTA WQE */ @@ -368,7 +388,9 @@ static int mlx5dr_rule_create_hws(struct mlx5dr_rule *rule, * dep_wqe buffers (ctrl, data) are also reused for all STE writes. */ dep_wqe = mlx5dr_send_add_new_dep_wqe(queue); - mlx5dr_rule_init_dep_wqe(dep_wqe, rule, items, mt, attr->user_data); + /* Jumbo matcher reparse is off. 
*/ + matcher_reparse = !is_jumbo && (at->setters[1].flags & ASF_REPARSE); + mlx5dr_rule_init_dep_wqe(dep_wqe, rule, items, mt, attr->user_data, matcher_reparse); ste_attr.wqe_ctrl = &dep_wqe->wqe_ctrl; ste_attr.wqe_data = &dep_wqe->wqe_data; @@ -389,9 +411,6 @@ static int mlx5dr_rule_create_hws(struct mlx5dr_rule *rule, mlx5dr_send_abort_new_dep_wqe(queue); return ret; } - /* Skip RX/TX based on the dep_wqe init */ - ste_attr.rtc_0 = dep_wqe->rtc_0 ? matcher->action_ste.rtc_0->id : 0; - ste_attr.rtc_1 = dep_wqe->rtc_1 ? matcher->action_ste.rtc_1->id : 0; /* Action STEs are written to a specific index last to first */ ste_attr.direct_index = rule->action_ste_idx + action_stes; apply.next_direct_idx = ste_attr.direct_index; @@ -400,7 +419,7 @@ static int mlx5dr_rule_create_hws(struct mlx5dr_rule *rule, } for (i = total_stes; i-- > 0;) { - mlx5dr_action_apply_setter(&apply, setter--, !i && is_jumbo); + mlx5dr_action_apply_setter(&apply, setter, !i && is_jumbo); if (i == 0) { /* Handle last match STE. @@ -431,9 +450,21 @@ static int mlx5dr_rule_create_hws(struct mlx5dr_rule *rule, ste_attr.direct_index = mlx5dr_matcher_is_insert_by_idx(matcher) ? attr->rule_idx : 0; } else { + if (setter->flags & ASF_REPARSE) { + ste_attr.rtc_0 = dep_wqe->rtc_0 ? + matcher->action_ste.rtc_0_reparse->id : 0; + ste_attr.rtc_1 = dep_wqe->rtc_1 ? + matcher->action_ste.rtc_1_reparse->id : 0; + } else { + ste_attr.rtc_0 = dep_wqe->rtc_0 ? + matcher->action_ste.rtc_0_no_reparse->id : 0; + ste_attr.rtc_1 = dep_wqe->rtc_1 ? + matcher->action_ste.rtc_1_no_reparse->id : 0; + } apply.next_direct_idx = --ste_attr.direct_index; } + setter--; mlx5dr_send_ste(queue, &ste_attr); } From patchwork Mon Apr 17 09:25:36 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Rongwei Liu X-Patchwork-Id: 126177 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id E0C054296B; Mon, 17 Apr 2023 11:26:41 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id A91BF42D31; Mon, 17 Apr 2023 11:26:25 +0200 (CEST) Received: from NAM12-MW2-obe.outbound.protection.outlook.com (mail-mw2nam12on2040.outbound.protection.outlook.com [40.107.244.40]) by mails.dpdk.org (Postfix) with ESMTP id 2E1FE42D17 for ; Mon, 17 Apr 2023 11:26:24 +0200 (CEST) ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none; b=MCZwYd8Oi8SBd97DTUpBqNt2tomdz8oZxl9/8CJLjnacnFST39DLj1Hohin41ej172YHwVADqZMnggQL/jTA/JOA9Un9/rlv+P/0yQUFSgKmF02BFwqvijo+3ZAYDAigNEKkm6DkfVPPf+6lnVZqcP4Y4yAogYQbkrG63qmV0YN8AmWW93lGyCWC1UdWOLX82hTPtU/a5CxnjEk6+sD3VAQslGiQegDRSUZpmnf/ckJhvXqOcnvpGBWm6xDjhptBrEHAWjP4+Q3wJpDAUVuBkrnpjyvBp8HMGLSMpCfGO8rIrCI6oTRL8cF3s0x9Y05gdYzDoqbze4oHzq/gr1DHpA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; s=arcselector9901; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1; bh=KQoDK4GyQm64RO8cLI/ZAEbsmWAdC+fNkZeYtpnc3UU=; 
From: Rongwei Liu
Subject: [PATCH v1 4/8] net/mlx5: sample the srv6 last segment
Date: Mon, 17 Apr 2023 12:25:36 +0300
Message-ID: <20230417092540.2617450-5-rongweil@nvidia.com>
In-Reply-To: <20230417092540.2617450-1-rongweil@nvidia.com>
References: <20230417092540.2617450-1-rongweil@nvidia.com>

When removing the IPv6 routing extension header from the packets, the
destination address should be updated to the last one in the segment
list. Enlarge the hardware sample scope to cover the last segment.
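As a reading aid (not part of the patch), below is a minimal, compilable
sketch of the byte offsets the five flex-parser samples cover, assuming the
RFC 8754 SRH layout in which Segment List[0] holds the last segment, i.e.
the final destination; struct srh_base is a simplified stand-in for
rte_ipv6_routing_ext.

#include <stdint.h>
#include <stdio.h>

/* Simplified stand-in for rte_ipv6_routing_ext (8-byte fixed SRH part). */
struct srh_base {
	uint8_t next_hdr;
	uint8_t hdr_len;
	uint8_t type;
	uint8_t segments_left;
	uint8_t last_entry;
	uint8_t flags;
	uint16_t tag;
};

int main(void)
{
	/* Sample 0 covers the first SRH dword (next_hdr .. segments_left). */
	printf("sample[0] offset: 0 bytes\n");
	/* Samples 1..4 cover the 16-byte Segment List[0] entry, i.e. the
	 * last segment / final destination, one 4-byte dword per sample. */
	for (unsigned int i = 1; i <= 4; i++)
		printf("sample[%u] offset: %u bytes\n",
		       i, (unsigned int)(sizeof(struct srh_base) + (i - 1) * 4));
	return 0;
}

Compiled and run, the sketch prints offsets 0, 8, 12, 16 and 20 bytes, which
match the flow_match_sample_field_base_offset values programmed in
mlx5_alloc_srh_flex_parser() below.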
Signed-off-by: Rongwei Liu --- drivers/net/mlx5/mlx5.c | 42 +++++++++++++++++++++++++----------- drivers/net/mlx5/mlx5.h | 1 + drivers/net/mlx5/mlx5_flow.h | 15 +++++++++++++ 3 files changed, 46 insertions(+), 12 deletions(-) diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c index f24e20a2ef..1418ffdea7 100644 --- a/drivers/net/mlx5/mlx5.c +++ b/drivers/net/mlx5/mlx5.c @@ -1054,7 +1054,7 @@ mlx5_alloc_srh_flex_parser(struct rte_eth_dev *dev) struct mlx5_devx_graph_node_attr node = { .modify_field_select = 0, }; - uint32_t ids[MLX5_GRAPH_NODE_SAMPLE_NUM]; + uint32_t i, ids[MLX5_GRAPH_NODE_SAMPLE_NUM]; struct mlx5_priv *priv = dev->data->dev_private; struct mlx5_common_dev_config *config = &priv->sh->cdev->config; void *fp = NULL, *ibv_ctx = priv->sh->cdev->ctx; @@ -1084,10 +1084,18 @@ mlx5_alloc_srh_flex_parser(struct rte_eth_dev *dev) node.next_header_field_size = 0x8; node.in[0].arc_parse_graph_node = MLX5_GRAPH_ARC_NODE_IP; node.in[0].compare_condition_value = IPPROTO_ROUTING; - node.sample[0].flow_match_sample_en = 1; - /* First come first serve no matter inner or outer. */ - node.sample[0].flow_match_sample_tunnel_mode = MLX5_GRAPH_SAMPLE_TUNNEL_FIRST; - node.sample[0].flow_match_sample_offset_mode = MLX5_GRAPH_SAMPLE_OFFSET_FIXED; + /* Final IPv6 address. */ + for (i = 0; i <= 4 && i < MLX5_GRAPH_NODE_SAMPLE_NUM; i++) { + node.sample[i].flow_match_sample_en = 1; + node.sample[i].flow_match_sample_offset_mode = + MLX5_GRAPH_SAMPLE_OFFSET_FIXED; + /* First come first serve no matter inner or outer. */ + node.sample[i].flow_match_sample_tunnel_mode = + MLX5_GRAPH_SAMPLE_TUNNEL_FIRST; + node.sample[i].flow_match_sample_field_base_offset = + (i + 1) * sizeof(uint32_t); /* in bytes */ + } + node.sample[0].flow_match_sample_field_base_offset = 0; node.out[0].arc_parse_graph_node = MLX5_GRAPH_ARC_NODE_TCP; node.out[0].compare_condition_value = IPPROTO_TCP; node.out[1].arc_parse_graph_node = MLX5_GRAPH_ARC_NODE_UDP; @@ -1100,8 +1108,8 @@ mlx5_alloc_srh_flex_parser(struct rte_eth_dev *dev) goto error; } priv->sh->srh_flex_parser.flex.devx_fp->devx_obj = fp; - priv->sh->srh_flex_parser.flex.mapnum = 1; - priv->sh->srh_flex_parser.flex.devx_fp->num_samples = 1; + priv->sh->srh_flex_parser.flex.mapnum = 5; + priv->sh->srh_flex_parser.flex.devx_fp->num_samples = 5; ret = mlx5_devx_cmd_query_parse_samples(fp, ids, priv->sh->srh_flex_parser.flex.mapnum, &priv->sh->srh_flex_parser.flex.devx_fp->anchor_id); @@ -1109,12 +1117,22 @@ mlx5_alloc_srh_flex_parser(struct rte_eth_dev *dev) DRV_LOG(ERR, "Failed to query sample IDs."); goto error; } - ret = mlx5_devx_cmd_match_sample_info_query(ibv_ctx, ids[0], - &priv->sh->srh_flex_parser.flex.devx_fp->sample_info[0]); - if (ret) { - DRV_LOG(ERR, "Failed to query sample id information."); - goto error; + for (i = 0; i <= 4 && i < MLX5_GRAPH_NODE_SAMPLE_NUM; i++) { + ret = mlx5_devx_cmd_match_sample_info_query(ibv_ctx, ids[i], + &priv->sh->srh_flex_parser.flex.devx_fp->sample_info[i]); + if (ret) { + DRV_LOG(ERR, "Failed to query sample id %u information.", ids[i]); + goto error; + } + } + for (i = 0; i <= 4 && i < MLX5_GRAPH_NODE_SAMPLE_NUM; i++) { + priv->sh->srh_flex_parser.flex.devx_fp->sample_ids[i] = ids[i]; + priv->sh->srh_flex_parser.flex.map[i].width = sizeof(uint32_t) * CHAR_BIT; + priv->sh->srh_flex_parser.flex.map[i].reg_id = i; + priv->sh->srh_flex_parser.flex.map[i].shift = + (i + 1) * sizeof(uint32_t) * CHAR_BIT; } + priv->sh->srh_flex_parser.flex.map[0].shift = 0; return 0; error: if (fp) diff --git a/drivers/net/mlx5/mlx5.h 
b/drivers/net/mlx5/mlx5.h index 9eae692037..3fbec4db9e 100644 --- a/drivers/net/mlx5/mlx5.h +++ b/drivers/net/mlx5/mlx5.h @@ -1323,6 +1323,7 @@ struct mlx5_flex_pattern_field { uint16_t shift:5; uint16_t reg_id:5; }; + #define MLX5_INVALID_SAMPLE_REG_ID 0x1F /* Port flex item context. */ diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h index 1d116ea0f6..821c6ca281 100644 --- a/drivers/net/mlx5/mlx5_flow.h +++ b/drivers/net/mlx5/mlx5_flow.h @@ -2666,4 +2666,19 @@ flow_hw_get_srh_flex_parser_byte_off_from_ctx(void *dr_ctx __rte_unused) #endif return UINT32_MAX; } + +static __rte_always_inline void * +flow_hw_get_dev_from_ctx(void *dr_ctx) +{ + uint16_t port; + struct mlx5_priv *priv; + + MLX5_ETH_FOREACH_DEV(port, NULL) { + priv = rte_eth_devices[port].data->dev_private; + if (priv->dr_ctx == dr_ctx) + return &rte_eth_devices[port]; + } + return NULL; +} + #endif /* RTE_PMD_MLX5_FLOW_H_ */ From patchwork Mon Apr 17 09:25:37 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Rongwei Liu X-Patchwork-Id: 126178 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id B88404296B; Mon, 17 Apr 2023 11:26:48 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id C9EA642D37; Mon, 17 Apr 2023 11:26:26 +0200 (CEST) Received: from NAM02-SN1-obe.outbound.protection.outlook.com (mail-sn1nam02on2052.outbound.protection.outlook.com [40.107.96.52]) by mails.dpdk.org (Postfix) with ESMTP id 5A71442D2C for ; Mon, 17 Apr 2023 11:26:24 +0200 (CEST) ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none; b=oT9vR1lMXDUBgtr8HHIfmE36VnWn50saDKj6W6Qh3TiLJKemIUWPAlCWRfm9CNTf7YCFHlzzcBqTz+WT98bP7rpx6SaCvjHOQR1WdcI/LC0OY9futNnTbchtYvpJgMBUQBzJtppiCvp+PKXK5K/U2tXFyqk4Loz/T8+mTDzjFnArfWJRkit+Dp2LY0rtFBLLHkhmhORIR/YYjIr17i9xOxnsXdZDddkaagCslmrrH0BgOZOxPqj2PKbwSa/bb9qjbYb581EwIc4qKBasAbbAcpIvnK2GNkriUiYWShuy9ZZ3T2n242N0zYFdJ0wBMnMiMUNdoNgXA+xxAXKqLXaqfA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; s=arcselector9901; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1; bh=5LYhBnVsHdORGiIFTQTojYb/ojcym9K8mPyM+5dZfL4=; b=Z3+VIfWUTc99TyOTtysN5euA0Uqfb1JK3qXOjkws6eIxwrlfZ/3MvsOuSjHxjLBdya57cTgyW/az/pJbJTfhlp0Nr5+OXhBY5IGKDcVTJrKCxZaE42I2r3coDEBzgbI1srdv0Kw34wK61vwRgH2YsZRsHWRNdAMNkgmAhXnYig6Z/57qTiK6eimUoNiSVXKJXscQ/danJHmscMqMY8nuG5ZPauONNucRczkk80OKwOWSgBaHekqTXLP4TGsLGkN/xR4L0cKyLE6NBUsmTizNbmb+ho95xle1YNhGdzPkJwiTdyS/6bw/8+GZE8gNoTlNU1rJ7rVYzV1jICgpGVcDhg== ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is 216.228.117.160) smtp.rcpttodomain=dpdk.org smtp.mailfrom=nvidia.com; dmarc=pass (p=reject sp=reject pct=100) action=none header.from=nvidia.com; dkim=none (message not signed); arc=none DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=Nvidia.com; s=selector2; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck; bh=5LYhBnVsHdORGiIFTQTojYb/ojcym9K8mPyM+5dZfL4=; 
From: Rongwei Liu
Subject: [PATCH v1 5/8] net/mlx5: generate srv6 modify header resource
Date: Mon, 17 Apr 2023 12:25:37 +0300
Message-ID: <20230417092540.2617450-6-rongweil@nvidia.com>
In-Reply-To: <20230417092540.2617450-1-rongweil@nvidia.com>
References: <20230417092540.2617450-1-rongweil@nvidia.com>

Both the checksum and the IPv6 next_hdr need to be updated when the srv6
header is added to or removed from IPv6 packets.

1. Add srv6:
   ste1 (push buffer with next_hdr 0) -->
   ste2 (set IPv6 next_hdr to 0x2b) -->
   ste3 (load next hop IPv6 address, restore srv6 next_hdr)

2. Remove srv6:
   ste1 (set srv6 next_hdr to 0 and save the original) -->
   ste2 (load final IPv6 destination, restore srv6 next_hdr) -->
   ste3 (remove srv6 and copy srv6 next_hdr to IPv6 next_hdr)

Add helpers to generate the two modify header resources for the add and
remove actions. Remove srv6 should be shared globally, while add srv6 can
be shared or unique per flow rule.
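For illustration only (not driver code), here is a software model of the
field movements those two STE chains perform. Assumptions in this sketch:
the structs are simplified stand-ins for rte_ipv6_hdr and
rte_ipv6_routing_ext, the real offload expresses each step as
modify-header commands plus a reformat, and ste1's save/clear of next_hdr,
which only matters for the hardware parsing and checksum handling, is
omitted.

#include <stdint.h>
#include <string.h>

struct ip6_model {               /* simplified stand-in for rte_ipv6_hdr */
	uint8_t vtc_flow[4];
	uint8_t payload_len[2];
	uint8_t proto;           /* IPv6 next header */
	uint8_t hop_limits;
	uint8_t src[16];
	uint8_t dst[16];
};

struct srh_model {               /* simplified stand-in for the SRv6 header */
	uint8_t next_hdr;
	uint8_t hdr_len;
	uint8_t type;
	uint8_t segments_left;
	uint8_t flags[4];
	uint8_t segments[][16];  /* Segment List[0] = last segment */
};

/* "Remove srv6": ste2 loads the final destination from the segment list,
 * ste3 copies the SRH next header back into the IPv6 next header before
 * the SRH itself is stripped by the reformat action. */
void srv6_remove_model(struct ip6_model *h, const struct srh_model *s)
{
	memcpy(h->dst, s->segments[0], 16);   /* ste2 */
	h->proto = s->next_hdr;               /* ste3 */
}

/* "Add srv6": ste2 points the IPv6 next header at the routing header,
 * ste3 loads the next hop address and restores the pushed SRH next_hdr
 * (the buffer was pushed with next_hdr 0). */
void srv6_add_model(struct ip6_model *h, struct srh_model *s,
		    const uint8_t next_hop[16], uint8_t inner_proto)
{
	h->proto = 0x2b;                      /* ste2: IPPROTO_ROUTING */
	memcpy(h->dst, next_hop, 16);         /* ste3 */
	s->next_hdr = inner_proto;            /* ste3 */
}

The actual insertion and removal of the SRH bytes is carried out by the
reformat part of the combined action introduced later in this series.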
Signed-off-by: Rongwei Liu --- drivers/net/mlx5/mlx5.h | 29 +++ drivers/net/mlx5/mlx5_flow_dv.c | 386 ++++++++++++++++++++++++++++++++ 2 files changed, 415 insertions(+) diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h index 3fbec4db9e..2cb6364957 100644 --- a/drivers/net/mlx5/mlx5.h +++ b/drivers/net/mlx5/mlx5.h @@ -2314,4 +2314,33 @@ void mlx5_flex_parser_clone_free_cb(void *tool_ctx, int mlx5_alloc_srh_flex_parser(struct rte_eth_dev *dev); void mlx5_free_srh_flex_parser(struct rte_eth_dev *dev); + +int +flow_dv_generate_ipv6_routing_pop_mhdr1(struct rte_eth_dev *dev, + const struct rte_flow_attr *attr, + struct mlx5_modification_cmd *cmd, + uint32_t cmd_num); + +int +flow_dv_generate_ipv6_routing_pop_mhdr2(struct rte_eth_dev *dev, + const struct rte_flow_attr *attr, + struct mlx5_modification_cmd *cmd, + uint32_t cmd_num); + +int +flow_dv_generate_ipv6_routing_push_mhdr1(struct rte_eth_dev *dev, + const struct rte_flow_attr *attr, + struct mlx5_modification_cmd *cmd, + uint32_t cmd_num); + +int +flow_dv_generate_ipv6_routing_push_mhdr2(struct rte_eth_dev *dev, + const struct rte_flow_attr *attr, + struct mlx5_modification_cmd *cmd, + uint32_t cmd_num, uint8_t *buf); + +int +flow_dv_ipv6_routing_pop_mhdr_cmd(struct rte_eth_dev *dev, uint8_t *mh_data, + uint8_t *anchor_id); + #endif /* RTE_PMD_MLX5_H_ */ diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c index f136f43b0a..4a1f61eeb7 100644 --- a/drivers/net/mlx5/mlx5_flow_dv.c +++ b/drivers/net/mlx5/mlx5_flow_dv.c @@ -2128,6 +2128,392 @@ flow_dv_convert_action_modify_field field, dcopy, resource, type, error); } +/** + * Generate the 1st modify header data for IPv6 routing pop. + * + * @param[in] dev + * Pointer to the rte_eth_dev structure. + * @param[in] attr + * Pointer to the rte_flow table attribute. + * @param[in,out] cmd + * Pointer to modify header command buffer. + * @param[in] cmd_num + * Modify header command number. + * + * @return + * Positive on success, a negative value otherwise. + */ +int +flow_dv_generate_ipv6_routing_pop_mhdr1(struct rte_eth_dev *dev, + const struct rte_flow_attr *attr, + struct mlx5_modification_cmd *cmd, + uint32_t cmd_num) +{ + struct mlx5_priv *priv = dev->data->dev_private; + struct rte_flow_action_modify_data data; + struct field_modify_info field[MLX5_ACT_MAX_MOD_FIELDS] = { + {0, 0, MLX5_MODI_OUT_NONE} }; + struct field_modify_info dcopy[MLX5_ACT_MAX_MOD_FIELDS] = { + {0, 0, MLX5_MODI_OUT_NONE} }; + uint32_t mask[MLX5_ACT_MAX_MOD_FIELDS] = { 0 }; + struct rte_flow_item item = { + .spec = NULL, + .mask = NULL + }; + union { + struct mlx5_flow_dv_modify_hdr_resource resource; + uint8_t data[sizeof(struct mlx5_flow_dv_modify_hdr_resource) + + sizeof(struct mlx5_modification_cmd) * MLX5_MHDR_MAX_CMD]; + } dummy; + struct mlx5_flow_dv_modify_hdr_resource *resource; + uint32_t value = 0; + struct rte_flow_error error; + +#define IPV6_ROUTING_POP_MHDR_NUM1 3 + if (cmd_num < IPV6_ROUTING_POP_MHDR_NUM1) { + DRV_LOG(ERR, "Not enough modify header buffer"); + return -1; + } + memset(&data, 0, sizeof(data)); + memset(&dummy, 0, sizeof(dummy)); + /* save next_hdr to seg_left. */ + data.field = RTE_FLOW_FIELD_FLEX_ITEM; + data.flex_handle = (struct rte_flow_item_flex_handle *) + (uintptr_t)&priv->sh->srh_flex_parser.flex; + data.offset = offsetof(struct rte_ipv6_routing_ext, segments_left) * CHAR_BIT; + /* For COPY fill the destination field (dcopy) without mask. 
*/ + mlx5_flow_field_id_to_modify_info(&data, dcopy, NULL, 8, dev, attr, &error); + /* Then construct the source field (field) with mask. */ + data.offset = offsetof(struct rte_ipv6_routing_ext, next_hdr) * CHAR_BIT; + mlx5_flow_field_id_to_modify_info(&data, field, mask, 8, dev, attr, &error); + item.mask = &mask; + resource = &dummy.resource; + if (flow_dv_convert_modify_action(&item, field, dcopy, resource, + MLX5_MODIFICATION_TYPE_COPY, &error)) { + DRV_LOG(ERR, "Generate save srv6 next header modify header failed"); + return -1; + } + MLX5_ASSERT(resource->actions_num == 1); + /* add nop. */ + resource->actions[1].data0 = 0; + resource->actions[1].action_type = MLX5_MODIFICATION_TYPE_NOP; + resource->actions[1].data0 = RTE_BE32(resource->actions[1].data0); + resource->actions[1].data1 = 0; + resource->actions_num += 1; + /* clear srv6 next_hdr. */ + memset(&field, 0, sizeof(field)); + memset(&dcopy, 0, sizeof(dcopy)); + memset(&mask, 0, sizeof(mask)); + mlx5_flow_field_id_to_modify_info(&data, field, mask, 8, dev, attr, &error); + item.spec = (void *)(uintptr_t)&value; + if (flow_dv_convert_modify_action(&item, field, dcopy, resource, + MLX5_MODIFICATION_TYPE_SET, &error)) { + DRV_LOG(ERR, "Generate clear srv6 next header modify header failed"); + return -1; + } + MLX5_ASSERT(resource->actions_num == IPV6_ROUTING_POP_MHDR_NUM1); +#undef IPV6_ROUTING_POP_MHDR_NUM1 + memcpy(cmd, resource->actions, + resource->actions_num * sizeof(struct mlx5_modification_cmd)); + return resource->actions_num; +} + +/** + * Generate the 2nd modify header data for IPv6 routing pop. + * + * @param[in] dev + * Pointer to the rte_eth_dev structure. + * @param[in] attr + * Pointer to the rte_flow table attribute. + * @param[in,out] cmd + * Pointer to modify header command buffer. + * @param[in] cmd_num + * Modify header command number. + * + * @return + * Positive on success, a negative value otherwise. 
+ */ +int +flow_dv_generate_ipv6_routing_pop_mhdr2(struct rte_eth_dev *dev, + const struct rte_flow_attr *attr, + struct mlx5_modification_cmd *cmd, + uint32_t cmd_num) +{ + struct mlx5_priv *priv = dev->data->dev_private; + struct rte_flow_action_modify_data data; + struct field_modify_info field[MLX5_ACT_MAX_MOD_FIELDS] = { + {0, 0, MLX5_MODI_OUT_NONE} }; + struct field_modify_info dcopy[MLX5_ACT_MAX_MOD_FIELDS] = { + {0, 0, MLX5_MODI_OUT_NONE} }; + uint32_t mask[MLX5_ACT_MAX_MOD_FIELDS] = { 0 }; + struct rte_flow_item item = { + .spec = NULL, + .mask = NULL + }; + union { + struct mlx5_flow_dv_modify_hdr_resource resource; + uint8_t data[sizeof(struct mlx5_flow_dv_modify_hdr_resource) + + sizeof(struct mlx5_modification_cmd) * MLX5_MHDR_MAX_CMD]; + } dummy; + struct mlx5_flow_dv_modify_hdr_resource *resource; + struct rte_flow_error error; + +#define IPV6_ROUTING_POP_MHDR_NUM2 5 + if (cmd_num < IPV6_ROUTING_POP_MHDR_NUM2) { + DRV_LOG(ERR, "Not enough modify header buffer"); + return -1; + } + memset(&data, 0, sizeof(data)); + memset(&dummy, 0, sizeof(dummy)); + resource = &dummy.resource; + item.mask = &mask; + data.field = RTE_FLOW_FIELD_IPV6_DST; + data.level = 0; + data.offset = 0; + mlx5_flow_field_id_to_modify_info(&data, dcopy, NULL, 128, dev, attr, &error); + data.field = RTE_FLOW_FIELD_FLEX_ITEM; + data.offset = 32; + data.flex_handle = (struct rte_flow_item_flex_handle *) + (uintptr_t)&priv->sh->srh_flex_parser.flex; + mlx5_flow_field_id_to_modify_info(&data, field, mask, 128, dev, attr, &error); + if (flow_dv_convert_modify_action(&item, field, dcopy, resource, + MLX5_MODIFICATION_TYPE_COPY, &error)) { + DRV_LOG(ERR, "Generate load final IPv6 address modify header failed"); + return -1; + } + MLX5_ASSERT(resource->actions_num == 4); + memset(&field, 0, sizeof(field)); + memset(&dcopy, 0, sizeof(dcopy)); + memset(&mask, 0, sizeof(mask)); + /* copy seg_left to srv6.next_hdr */ + data.offset = offsetof(struct rte_ipv6_routing_ext, next_hdr) * CHAR_BIT; + mlx5_flow_field_id_to_modify_info(&data, dcopy, NULL, 8, dev, attr, &error); + data.offset = offsetof(struct rte_ipv6_routing_ext, segments_left) * CHAR_BIT; + mlx5_flow_field_id_to_modify_info(&data, field, mask, 8, dev, attr, &error); + if (flow_dv_convert_modify_action(&item, field, dcopy, resource, + MLX5_MODIFICATION_TYPE_COPY, &error)) { + DRV_LOG(ERR, "Generate restore srv6 next header modify header failed"); + return -1; + } + MLX5_ASSERT(resource->actions_num == IPV6_ROUTING_POP_MHDR_NUM2); +#undef IPV6_ROUTING_POP_MHDR_NUM2 + memcpy(cmd, resource->actions, + resource->actions_num * sizeof(struct mlx5_modification_cmd)); + return resource->actions_num; +} + +/** + * Generate the 1st modify header data for IPv6 routing push. + * + * @param[in] dev + * Pointer to the rte_eth_dev structure. + * @param[in] attr + * Pointer to the rte_flow table attribute. + * @param[in,out] cmd + * Pointer to modify header command buffer. + * @param[in] cmd_num + * Modify header command number. + * + * @return + * Positive on success, a negative value otherwise.
+ */ +int +flow_dv_generate_ipv6_routing_push_mhdr1(struct rte_eth_dev *dev, + const struct rte_flow_attr *attr, + struct mlx5_modification_cmd *cmd, + uint32_t cmd_num) +{ + struct rte_flow_action_modify_data data; + struct field_modify_info field[MLX5_ACT_MAX_MOD_FIELDS] = { + {0, 0, MLX5_MODI_OUT_NONE} }; + struct field_modify_info dcopy[MLX5_ACT_MAX_MOD_FIELDS] = { + {0, 0, MLX5_MODI_OUT_NONE} }; + uint32_t mask[MLX5_ACT_MAX_MOD_FIELDS] = { 0 }; + struct rte_flow_item item = { + .spec = NULL, + .mask = NULL + }; + union { + struct mlx5_flow_dv_modify_hdr_resource resource; + uint8_t data[sizeof(struct mlx5_flow_dv_modify_hdr_resource) + + sizeof(struct mlx5_modification_cmd) * MLX5_MHDR_MAX_CMD]; + } dummy; + struct mlx5_flow_dv_modify_hdr_resource *resource; + struct rte_flow_error error; + uint8_t value; + +#define IPV6_ROUTING_PUSH_MHDR_NUM1 1 + if (cmd_num < IPV6_ROUTING_PUSH_MHDR_NUM1) { + DRV_LOG(ERR, "Not enough modify header buffer"); + return -1; + } + memset(&data, 0, sizeof(data)); + memset(&dummy, 0, sizeof(dummy)); + resource = &dummy.resource; + /* Set IPv6 proto to 0x2b. */ + data.field = RTE_FLOW_FIELD_IPV6_PROTO; + mlx5_flow_field_id_to_modify_info(&data, field, mask, 8, dev, attr, &error); + resource = &dummy.resource; + item.mask = &mask; + value = IPPROTO_ROUTING; + item.spec = (void *)(uintptr_t)&value; + if (flow_dv_convert_modify_action(&item, field, dcopy, resource, + MLX5_MODIFICATION_TYPE_SET, &error)) { + DRV_LOG(ERR, "Generate modify IPv6 protocol to 0x2b failed"); + return -1; + } + MLX5_ASSERT(resource->actions_num == IPV6_ROUTING_PUSH_MHDR_NUM1); +#undef IPV6_ROUTING_PUSH_MHDR_NUM1 + memcpy(cmd, resource->actions, + resource->actions_num * sizeof(struct mlx5_modification_cmd)); + return resource->actions_num; +} + +/** + * Generate the 2nd modify header data for IPv6 routing push. + * + * @param[in] dev + * Pointer to the rte_eth_dev structure. + * @param[in] attr + * Pointer to the rte_flow table attribute. + * @param[in,out] cmd + * Pointer to modify header command buffer. + * @param[in] cmd_num + * Modify header command number. + * + * @return + * Positive on success, a negative value otherwise. 
+ */ +int +flow_dv_generate_ipv6_routing_push_mhdr2(struct rte_eth_dev *dev, + const struct rte_flow_attr *attr, + struct mlx5_modification_cmd *cmd, + uint32_t cmd_num, uint8_t *buf) +{ + struct mlx5_priv *priv = dev->data->dev_private; + struct rte_flow_action_modify_data data; + struct field_modify_info field[MLX5_ACT_MAX_MOD_FIELDS] = { + {0, 0, MLX5_MODI_OUT_NONE} }; + struct field_modify_info dcopy[MLX5_ACT_MAX_MOD_FIELDS] = { + {0, 0, MLX5_MODI_OUT_NONE} }; + uint32_t mask[MLX5_ACT_MAX_MOD_FIELDS] = { 0 }; + struct rte_flow_item item = { + .spec = NULL, + .mask = NULL + }; + union { + struct mlx5_flow_dv_modify_hdr_resource resource; + uint8_t data[sizeof(struct mlx5_flow_dv_modify_hdr_resource) + + sizeof(struct mlx5_modification_cmd) * MLX5_MHDR_MAX_CMD]; + } dummy; + struct mlx5_flow_dv_modify_hdr_resource *resource; + struct rte_flow_error error; + uint8_t next_hdr = *buf; + +#define IPV6_ROUTING_PUSH_MHDR_NUM2 5 + if (cmd_num < IPV6_ROUTING_PUSH_MHDR_NUM2) { + DRV_LOG(ERR, "Not enough modify header buffer"); + return -1; + } + memset(&data, 0, sizeof(data)); + memset(&dummy, 0, sizeof(dummy)); + resource = &dummy.resource; + item.mask = &mask; + item.spec = buf + sizeof(struct rte_ipv6_routing_ext) + + (*(buf + 3) - 1) * 16; /* seg_left-1 IPv6 address */ + data.field = RTE_FLOW_FIELD_IPV6_DST; + mlx5_flow_field_id_to_modify_info(&data, field, mask, 128, dev, attr, &error); + if (flow_dv_convert_modify_action(&item, field, dcopy, resource, + MLX5_MODIFICATION_TYPE_SET, &error)) { + DRV_LOG(ERR, "Generate load srv6 next hop modify header failed"); + return -1; + } + MLX5_ASSERT(resource->actions_num == 4); + memset(&field, 0, sizeof(field)); + memset(&mask, 0, sizeof(mask)); + data.field = RTE_FLOW_FIELD_FLEX_ITEM; + data.flex_handle = (struct rte_flow_item_flex_handle *) + (uintptr_t)&priv->sh->srh_flex_parser.flex; + data.offset = offsetof(struct rte_ipv6_routing_ext, next_hdr) * CHAR_BIT; + item.spec = (void *)(uintptr_t)&next_hdr; + mlx5_flow_field_id_to_modify_info(&data, field, mask, 8, dev, attr, &error); + if (flow_dv_convert_modify_action(&item, field, dcopy, resource, + MLX5_MODIFICATION_TYPE_SET, &error)) { + DRV_LOG(ERR, "Generate srv6 next header restore modify header failed"); + return -1; + } + MLX5_ASSERT(resource->actions_num == IPV6_ROUTING_PUSH_MHDR_NUM2); +#undef IPV6_ROUTING_PUSH_MHDR_NUM2 + memcpy(cmd, resource->actions, + resource->actions_num * sizeof(struct mlx5_modification_cmd)); + return resource->actions_num; +} + +/** + * Generate IPv6 routing pop modification_cmd. + * + * @param[in] dev + * Pointer to the rte_eth_dev structure. + * @param[in,out] mh_data + * Pointer to modify header data buffer. + * @param[in,out] anchor_id + * Anchor ID for REMOVE command. + * + * @return + * Positive on success, a negative value otherwise. 
+ */ +int +flow_dv_ipv6_routing_pop_mhdr_cmd(struct rte_eth_dev *dev, uint8_t *mh_data, + uint8_t *anchor_id) +{ + struct rte_flow_action_modify_data data; + struct field_modify_info field[MLX5_ACT_MAX_MOD_FIELDS] = { + {0, 0, MLX5_MODI_OUT_NONE} }; + struct field_modify_info dcopy[MLX5_ACT_MAX_MOD_FIELDS] = { + {0, 0, MLX5_MODI_OUT_NONE} }; + uint32_t mask[MLX5_ACT_MAX_MOD_FIELDS] = { 0 }; + struct rte_flow_item item = { + .spec = NULL, + .mask = NULL + }; + union { + struct mlx5_flow_dv_modify_hdr_resource resource; + uint8_t data[sizeof(struct mlx5_flow_dv_modify_hdr_resource) + + sizeof(struct mlx5_modification_cmd) * MLX5_MHDR_MAX_CMD]; + } dummy; + struct mlx5_flow_dv_modify_hdr_resource *resource; + struct rte_flow_error error; + struct mlx5_priv *priv = dev->data->dev_private; + + if (!priv || !priv->sh->cdev->config.hca_attr.flex.parse_graph_anchor) { + DRV_LOG(ERR, "Doesn't support srv6 as reformat anchor"); + return -1; + } + /* Restore IPv6 protocol from flex parser. */ + memset(&data, 0, sizeof(data)); + memset(&dummy, 0, sizeof(dummy)); + data.field = RTE_FLOW_FIELD_IPV6_PROTO; + mlx5_flow_field_id_to_modify_info(&data, dcopy, NULL, 8, dev, NULL, &error); + /* Then construct the source field (field) with mask. */ + data.field = RTE_FLOW_FIELD_FLEX_ITEM; + data.flex_handle = (struct rte_flow_item_flex_handle *) + (uintptr_t)&priv->sh->srh_flex_parser.flex; + data.offset = offsetof(struct rte_ipv6_routing_ext, next_hdr) * CHAR_BIT; + mlx5_flow_field_id_to_modify_info(&data, field, mask, 8, dev, NULL, &error); + item.mask = &mask; + resource = &dummy.resource; + if (flow_dv_convert_modify_action(&item, field, dcopy, resource, + MLX5_MODIFICATION_TYPE_COPY, + &error)) { + DRV_LOG(ERR, "Generate copy IPv6 protocol from srv6 next header failed"); + return -1; + } + MLX5_ASSERT(resource->actions_num == 1); + memcpy(mh_data, resource->actions, sizeof(struct mlx5_modification_cmd)); + *anchor_id = priv->sh->srh_flex_parser.flex.devx_fp->anchor_id; + return 1; +} + /** * Validate MARK item. 
 *
From patchwork Mon Apr 17 09:25:38 2023
X-Patchwork-Submitter: Rongwei Liu
X-Patchwork-Id: 126180
From: Rongwei Liu
Subject: [PATCH v1 6/8] net/mlx5/hws: add IPv6 routing extension push pop actions
Date: Mon, 17 Apr 2023 12:25:38 +0300
Message-ID: <20230417092540.2617450-7-rongweil@nvidia.com>
In-Reply-To: <20230417092540.2617450-1-rongweil@nvidia.com>
References: <20230417092540.2617450-1-rongweil@nvidia.com>
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=43083d15-7273-40c1-b7db-39efd9ccc17a; Ip=[216.228.117.161]; Helo=[mail.nvidia.com] X-MS-Exchange-CrossTenant-AuthSource: CO1NAM11FT036.eop-nam11.prod.protection.outlook.com X-MS-Exchange-CrossTenant-AuthAs: Anonymous X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem X-MS-Exchange-Transport-CrossTenantHeadersStamped: SJ2PR12MB7821 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Add two dr_actions to implement IPv6 routing extension push and pop, the new actions are multiple actions combination instead of new types. Basically, there are two modify headers plus one reformat action. Action order is the same as encap and decap actions. Signed-off-by: Rongwei Liu --- drivers/common/mlx5/mlx5_prm.h | 1 + drivers/net/mlx5/hws/mlx5dr.h | 41 +++ drivers/net/mlx5/hws/mlx5dr_action.c | 380 ++++++++++++++++++++++++++- drivers/net/mlx5/hws/mlx5dr_action.h | 5 + drivers/net/mlx5/hws/mlx5dr_debug.c | 2 + 5 files changed, 428 insertions(+), 1 deletion(-) diff --git a/drivers/common/mlx5/mlx5_prm.h b/drivers/common/mlx5/mlx5_prm.h index ed3d5efbb7..241485f905 100644 --- a/drivers/common/mlx5/mlx5_prm.h +++ b/drivers/common/mlx5/mlx5_prm.h @@ -3438,6 +3438,7 @@ enum mlx5_ifc_header_anchors { MLX5_HEADER_ANCHOR_PACKET_START = 0x0, MLX5_HEADER_ANCHOR_FIRST_VLAN_START = 0x2, MLX5_HEADER_ANCHOR_IPV6_IPV4 = 0x07, + MLX5_HEADER_ANCHOR_TCP_UDP = 0x09, MLX5_HEADER_ANCHOR_INNER_MAC = 0x13, MLX5_HEADER_ANCHOR_INNER_IPV6_IPV4 = 0x19, }; diff --git a/drivers/net/mlx5/hws/mlx5dr.h b/drivers/net/mlx5/hws/mlx5dr.h index 2b02884dc3..da058bdb4b 100644 --- a/drivers/net/mlx5/hws/mlx5dr.h +++ b/drivers/net/mlx5/hws/mlx5dr.h @@ -45,6 +45,8 @@ enum mlx5dr_action_type { MLX5DR_ACTION_TYP_PUSH_VLAN, MLX5DR_ACTION_TYP_ASO_METER, MLX5DR_ACTION_TYP_ASO_CT, + MLX5DR_ACTION_TYP_IPV6_ROUTING_POP, + MLX5DR_ACTION_TYP_IPV6_ROUTING_PUSH, MLX5DR_ACTION_TYP_MAX, }; @@ -186,6 +188,12 @@ struct mlx5dr_rule_action { uint8_t *data; } reformat; + struct { + uint32_t offset; + uint8_t *data; + uint8_t *mhdr; + } recom; + struct { rte_be32_t vlan_hdr; } push_vlan; @@ -614,4 +622,37 @@ int mlx5dr_send_queue_action(struct mlx5dr_context *ctx, */ int mlx5dr_debug_dump(struct mlx5dr_context *ctx, FILE *f); +/* Check if mlx5dr action template contain srv6 push or pop actions. + * + * @param[in] at + * The action template going to be parsed. + * @return true if containing srv6 push/pop action, false otherwise. + */ +bool +mlx5dr_action_template_contain_srv6(struct mlx5dr_action_template *at); + +/* Create multiple direct actions combination action. + * + * @param[in] ctx + * The context in which the new action will be created. + * @param[in] type + * Type of direct rule action. + * @param[in] data_sz + * Size in bytes of data. + * @param[in] inline_data + * Header data array in case of inline action. + * @param[in] log_bulk_size + * Number of unique values used with this pattern. + * @param[in] flags + * Action creation flags. (enum mlx5dr_action_flags) + * @return pointer to mlx5dr_action on success NULL otherwise. 
+ */ +struct mlx5dr_action * +mlx5dr_action_create_recombination(struct mlx5dr_context *ctx, + enum mlx5dr_action_type type, + size_t data_sz, + void *inline_data, + uint32_t log_bulk_size, + uint32_t flags); + #endif diff --git a/drivers/net/mlx5/hws/mlx5dr_action.c b/drivers/net/mlx5/hws/mlx5dr_action.c index 2d93be717f..fa38654644 100644 --- a/drivers/net/mlx5/hws/mlx5dr_action.c +++ b/drivers/net/mlx5/hws/mlx5dr_action.c @@ -19,6 +19,7 @@ static const uint32_t action_order_arr[MLX5DR_TABLE_TYPE_MAX][MLX5DR_ACTION_TYP_ [MLX5DR_TABLE_TYPE_NIC_RX] = { BIT(MLX5DR_ACTION_TYP_TAG), BIT(MLX5DR_ACTION_TYP_TNL_L2_TO_L2) | + BIT(MLX5DR_ACTION_TYP_IPV6_ROUTING_POP) | BIT(MLX5DR_ACTION_TYP_TNL_L3_TO_L2), BIT(MLX5DR_ACTION_TYP_POP_VLAN), BIT(MLX5DR_ACTION_TYP_POP_VLAN), @@ -29,6 +30,7 @@ static const uint32_t action_order_arr[MLX5DR_TABLE_TYPE_MAX][MLX5DR_ACTION_TYP_ BIT(MLX5DR_ACTION_TYP_PUSH_VLAN), BIT(MLX5DR_ACTION_TYP_MODIFY_HDR), BIT(MLX5DR_ACTION_TYP_L2_TO_TNL_L2) | + BIT(MLX5DR_ACTION_TYP_IPV6_ROUTING_PUSH) | BIT(MLX5DR_ACTION_TYP_L2_TO_TNL_L3), BIT(MLX5DR_ACTION_TYP_FT) | BIT(MLX5DR_ACTION_TYP_MISS) | @@ -46,6 +48,7 @@ static const uint32_t action_order_arr[MLX5DR_TABLE_TYPE_MAX][MLX5DR_ACTION_TYP_ BIT(MLX5DR_ACTION_TYP_PUSH_VLAN), BIT(MLX5DR_ACTION_TYP_MODIFY_HDR), BIT(MLX5DR_ACTION_TYP_L2_TO_TNL_L2) | + BIT(MLX5DR_ACTION_TYP_IPV6_ROUTING_PUSH) | BIT(MLX5DR_ACTION_TYP_L2_TO_TNL_L3), BIT(MLX5DR_ACTION_TYP_FT) | BIT(MLX5DR_ACTION_TYP_MISS) | @@ -54,6 +57,7 @@ static const uint32_t action_order_arr[MLX5DR_TABLE_TYPE_MAX][MLX5DR_ACTION_TYP_ }, [MLX5DR_TABLE_TYPE_FDB] = { BIT(MLX5DR_ACTION_TYP_TNL_L2_TO_L2) | + BIT(MLX5DR_ACTION_TYP_IPV6_ROUTING_POP) | BIT(MLX5DR_ACTION_TYP_TNL_L3_TO_L2), BIT(MLX5DR_ACTION_TYP_POP_VLAN), BIT(MLX5DR_ACTION_TYP_POP_VLAN), @@ -64,6 +68,7 @@ static const uint32_t action_order_arr[MLX5DR_TABLE_TYPE_MAX][MLX5DR_ACTION_TYP_ BIT(MLX5DR_ACTION_TYP_PUSH_VLAN), BIT(MLX5DR_ACTION_TYP_MODIFY_HDR), BIT(MLX5DR_ACTION_TYP_L2_TO_TNL_L2) | + BIT(MLX5DR_ACTION_TYP_IPV6_ROUTING_PUSH) | BIT(MLX5DR_ACTION_TYP_L2_TO_TNL_L3), BIT(MLX5DR_ACTION_TYP_FT) | BIT(MLX5DR_ACTION_TYP_MISS) | @@ -227,6 +232,18 @@ static void mlx5dr_action_put_shared_stc(struct mlx5dr_action *action, mlx5dr_action_put_shared_stc_nic(ctx, stc_type, MLX5DR_TABLE_TYPE_FDB); } +bool mlx5dr_action_template_contain_srv6(struct mlx5dr_action_template *at) +{ + int i = 0; + + for (i = 0; i < at->num_actions; i++) { + if (at->action_type_arr[i] == MLX5DR_ACTION_TYP_IPV6_ROUTING_POP || + at->action_type_arr[i] == MLX5DR_ACTION_TYP_IPV6_ROUTING_PUSH) + return true; + } + return false; +} + static void mlx5dr_action_print_combo(enum mlx5dr_action_type *user_actions) { DR_LOG(ERR, "Invalid action_type sequence"); @@ -501,6 +518,7 @@ static void mlx5dr_action_fill_stc_attr(struct mlx5dr_action *action, attr->dest_tir_num = obj->id; break; case MLX5DR_ACTION_TYP_TNL_L3_TO_L2: + case MLX5DR_ACTION_TYP_IPV6_ROUTING_POP: case MLX5DR_ACTION_TYP_MODIFY_HDR: attr->action_offset = MLX5DR_ACTION_OFFSET_DW6; if (action->modify_header.num_of_actions == 1) { @@ -529,10 +547,14 @@ static void mlx5dr_action_fill_stc_attr(struct mlx5dr_action *action, attr->remove_header.end_anchor = MLX5_HEADER_ANCHOR_INNER_MAC; break; case MLX5DR_ACTION_TYP_L2_TO_TNL_L2: + case MLX5DR_ACTION_TYP_IPV6_ROUTING_PUSH: attr->action_type = MLX5_IFC_STC_ACTION_TYPE_HEADER_INSERT; attr->action_offset = MLX5DR_ACTION_OFFSET_DW6; attr->insert_header.encap = 1; - attr->insert_header.insert_anchor = MLX5_HEADER_ANCHOR_PACKET_START; + if (action->type == 
MLX5DR_ACTION_TYP_L2_TO_TNL_L2) + attr->insert_header.insert_anchor = MLX5_HEADER_ANCHOR_PACKET_START; + else + attr->insert_header.insert_anchor = MLX5_HEADER_ANCHOR_TCP_UDP; attr->insert_header.arg_id = action->reformat.arg_obj->id; attr->insert_header.header_size = action->reformat.header_size; break; @@ -1452,6 +1474,90 @@ mlx5dr_action_handle_tunnel_l3_to_l2(struct mlx5dr_context *ctx, return ret; } +static int +mlx5dr_action_handle_ipv6_routing_pop(struct mlx5dr_context *ctx, + struct mlx5dr_action *action) +{ + uint8_t mh_data[MLX5DR_ACTION_REFORMAT_DATA_SIZE] = {0}; + void *dev = flow_hw_get_dev_from_ctx(ctx); + int mh_data_size, ret; + uint8_t *srv6_data; + uint8_t anchor_id; + + if (dev == NULL) { + DR_LOG(ERR, "Invalid dev handle for IPv6 routing pop\n"); + return -1; + } + ret = flow_dv_ipv6_routing_pop_mhdr_cmd(dev, mh_data, &anchor_id); + if (ret < 0) { + DR_LOG(ERR, "Failed to generate modify-header pattern for IPv6 routing pop\n"); + return -1; + } + srv6_data = mh_data + MLX5DR_MODIFY_ACTION_SIZE * ret; + /* Remove SRv6 headers */ + MLX5_SET(stc_ste_param_remove, srv6_data, action_type, + MLX5_MODIFICATION_TYPE_REMOVE); + MLX5_SET(stc_ste_param_remove, srv6_data, decap, 0x1); + MLX5_SET(stc_ste_param_remove, srv6_data, remove_start_anchor, anchor_id); + MLX5_SET(stc_ste_param_remove, srv6_data, remove_end_anchor, + MLX5_HEADER_ANCHOR_TCP_UDP); + mh_data_size = (ret + 1) * MLX5DR_MODIFY_ACTION_SIZE; + + ret = mlx5dr_pat_arg_create_modify_header(ctx, action, mh_data_size, + (__be64 *)mh_data, 0); + if (ret) { + DR_LOG(ERR, "Failed allocating modify-header for IPv6 routing pop\n"); + return ret; + } + + ret = mlx5dr_action_create_stcs(action, NULL); + if (ret) + goto free_mh_obj; + + ret = mlx5dr_arg_write_inline_arg_data(ctx, + action->modify_header.arg_obj->id, + mh_data, mh_data_size); + if (ret) { + DR_LOG(ERR, "Failed writing INLINE arg IPv6 routing pop"); + goto clean_stc; + } + + return 0; + +clean_stc: + mlx5dr_action_destroy_stcs(action); +free_mh_obj: + mlx5dr_pat_arg_destroy_modify_header(ctx, action); + return ret; +} + +static int mlx5dr_action_handle_ipv6_routing_push(struct mlx5dr_context *ctx, + size_t data_sz, + void *data, + uint32_t bulk_size, + struct mlx5dr_action *action) +{ + int ret; + + ret = mlx5dr_action_handle_reformat_args(ctx, data_sz, data, bulk_size, action); + if (ret) { + DR_LOG(ERR, "Failed to create args for ipv6 routing push"); + return ret; + } + + ret = mlx5dr_action_create_stcs(action, NULL); + if (ret) { + DR_LOG(ERR, "Failed to create stc for ipv6 routing push"); + goto free_arg; + } + + return 0; + +free_arg: + mlx5dr_cmd_destroy_obj(action->reformat.arg_obj); + return ret; +} + static int mlx5dr_action_create_reformat_hws(struct mlx5dr_context *ctx, size_t data_sz, @@ -1484,6 +1590,78 @@ mlx5dr_action_create_reformat_hws(struct mlx5dr_context *ctx, return ret; } +static int +mlx5dr_action_create_push_pop_hws(struct mlx5dr_context *ctx, + size_t data_sz, + void *data, + uint32_t bulk_size, + struct mlx5dr_action *action) +{ + int ret; + + switch (action->type) { + case MLX5DR_ACTION_TYP_IPV6_ROUTING_POP: + ret = mlx5dr_action_handle_ipv6_routing_pop(ctx, action); + break; + + case MLX5DR_ACTION_TYP_IPV6_ROUTING_PUSH: + *((uint8_t *)data) = 0; + ret = mlx5dr_action_handle_ipv6_routing_push(ctx, data_sz, data, + bulk_size, action); + break; + + default: + assert(false); + rte_errno = ENOTSUP; + return rte_errno; + } + + return ret; +} + +static struct mlx5dr_action * +mlx5dr_action_create_push_pop(struct mlx5dr_context *ctx, + enum 
mlx5dr_action_type action_type, + size_t data_sz, + void *inline_data, + uint32_t log_bulk_size, + uint32_t flags) +{ + struct mlx5dr_action *action; + int ret; + + action = mlx5dr_action_create_generic(ctx, flags, action_type); + if (!action) + return NULL; + + if (mlx5dr_action_is_root_flags(flags)) { + DR_LOG(ERR, "IPv6 routing push/pop is not supported over root"); + rte_errno = ENOTSUP; + goto free_action; + } + + if (!mlx5dr_action_is_hws_flags(flags) || + ((flags & MLX5DR_ACTION_FLAG_SHARED) && log_bulk_size)) { + DR_LOG(ERR, "Push/pop flags don't fit HWS (flags: %x)\n", flags); + rte_errno = EINVAL; + goto free_action; + } + + ret = mlx5dr_action_create_push_pop_hws(ctx, data_sz, inline_data, + log_bulk_size, action); + if (ret) { + DR_LOG(ERR, "Failed to create push/pop HWS.\n"); + rte_errno = EINVAL; + goto free_action; + } + + return action; + +free_action: + simple_free(action); + return NULL; +} + struct mlx5dr_action * mlx5dr_action_create_reformat(struct mlx5dr_context *ctx, enum mlx5dr_action_reformat_type reformat_type, @@ -1540,6 +1718,169 @@ mlx5dr_action_create_reformat(struct mlx5dr_context *ctx, return NULL; } +static int +mlx5dr_action_create_recom_hws(struct mlx5dr_context *ctx, + size_t data_sz, + void *data, + uint32_t bulk_size, + struct mlx5dr_action *action) +{ + struct mlx5_modification_cmd cmd[MLX5_MHDR_MAX_CMD]; + void *eth_dev = flow_hw_get_dev_from_ctx(ctx); + struct mlx5dr_action *sub_action; + int ret; + + if (eth_dev == NULL) { + DR_LOG(ERR, "Invalid dev handle for recombination action"); + rte_errno = EINVAL; + return rte_errno; + } + memset(cmd, 0, sizeof(cmd)); + switch (action->type) { + case MLX5DR_ACTION_TYP_IPV6_ROUTING_POP: + ret = flow_dv_generate_ipv6_routing_pop_mhdr1(eth_dev, NULL, + cmd, MLX5_MHDR_MAX_CMD); + if (ret < 0) { + DR_LOG(ERR, "Failed to generate IPv6 routing pop action1 pattern"); + rte_errno = EINVAL; + return rte_errno; + } + sub_action = mlx5dr_action_create_modify_header(ctx, + sizeof(struct mlx5_modification_cmd) * ret, + (__be64 *)cmd, 0, action->flags); + if (!sub_action) { + DR_LOG(ERR, "Failed to create IPv6 routing pop action1"); + rte_errno = EINVAL; + return rte_errno; + } + action->recom.action1 = sub_action; + memset(cmd, 0, sizeof(struct mlx5_modification_cmd) * MLX5_MHDR_MAX_CMD); + ret = flow_dv_generate_ipv6_routing_pop_mhdr2(eth_dev, NULL, + cmd, MLX5_MHDR_MAX_CMD); + if (ret < 0) { + DR_LOG(ERR, "Failed to generate IPv6 routing pop action2 pattern"); + goto err; + } + sub_action = mlx5dr_action_create_modify_header(ctx, + sizeof(struct mlx5_modification_cmd) * ret, + (__be64 *)cmd, 0, action->flags); + if (!sub_action) { + DR_LOG(ERR, "Failed to create IPv6 routing pop action2"); + goto err; + } + action->recom.action2 = sub_action; + sub_action = mlx5dr_action_create_push_pop(ctx, + MLX5DR_ACTION_TYP_IPV6_ROUTING_POP, + data_sz, data, bulk_size, action->flags); + if (!sub_action) { + DR_LOG(ERR, "Failed to create IPv6 routing pop action3"); + goto err; + } + action->recom.action3 = sub_action; + break; + case MLX5DR_ACTION_TYP_IPV6_ROUTING_PUSH: + ret = flow_dv_generate_ipv6_routing_push_mhdr1(eth_dev, NULL, + cmd, MLX5_MHDR_MAX_CMD); + if (ret < 0) { + DR_LOG(ERR, "Failed to generate IPv6 routing push action2 pattern"); + rte_errno = EINVAL; + return rte_errno; + } + sub_action = mlx5dr_action_create_modify_header(ctx, + sizeof(struct mlx5_modification_cmd) * ret, + (__be64 *)cmd, 0, action->flags | MLX5DR_ACTION_FLAG_SHARED); + if (!sub_action) { + DR_LOG(ERR, "Failed to create IPv6 routing push action2"); 
+ rte_errno = EINVAL; + return rte_errno; + } + action->recom.action2 = sub_action; + memset(cmd, 0, sizeof(struct mlx5_modification_cmd) * MLX5_MHDR_MAX_CMD); + ret = flow_dv_generate_ipv6_routing_push_mhdr2(eth_dev, NULL, cmd, + MLX5_MHDR_MAX_CMD, data); + if (ret < 0) { + DR_LOG(ERR, "Failed to generate IPv6 routing push action3 pattern"); + goto err; + } + sub_action = mlx5dr_action_create_modify_header(ctx, + sizeof(struct mlx5_modification_cmd) * ret, + (__be64 *)cmd, bulk_size, action->flags); + if (!sub_action) { + DR_LOG(ERR, "Failed to create IPv6 routing push action3"); + goto err; + } + action->recom.action3 = sub_action; + sub_action = mlx5dr_action_create_push_pop(ctx, + MLX5DR_ACTION_TYP_IPV6_ROUTING_PUSH, + data_sz, data, bulk_size, action->flags); + if (!sub_action) { + DR_LOG(ERR, "Failed to create IPv6 routing push action1"); + goto err; + } + action->recom.action1 = sub_action; + break; + default: + assert(false); + rte_errno = ENOTSUP; + return rte_errno; + } + + return 0; + +err: + if (action->recom.action1) + mlx5dr_action_destroy(action->recom.action1); + if (action->recom.action2) + mlx5dr_action_destroy(action->recom.action2); + if (action->recom.action3) + mlx5dr_action_destroy(action->recom.action3); + rte_errno = EINVAL; + return rte_errno; +} + +struct mlx5dr_action * +mlx5dr_action_create_recombination(struct mlx5dr_context *ctx, + enum mlx5dr_action_type action_type, + size_t data_sz, + void *inline_data, + uint32_t log_bulk_size, + uint32_t flags) +{ + struct mlx5dr_action *action; + int ret; + + action = mlx5dr_action_create_generic(ctx, flags, action_type); + if (!action) + return NULL; + + if (!mlx5dr_action_is_hws_flags(flags) || + ((flags & MLX5DR_ACTION_FLAG_SHARED) && log_bulk_size)) { + DR_LOG(ERR, "Recom flags don't fit HWS (flags: %x)\n", flags); + rte_errno = EINVAL; + goto free_action; + } + + if (action_type == MLX5DR_ACTION_TYP_IPV6_ROUTING_POP && log_bulk_size) { + DR_LOG(ERR, "IPv6 POP must be shared"); + rte_errno = EINVAL; + goto free_action; + } + + ret = mlx5dr_action_create_recom_hws(ctx, data_sz, inline_data, + log_bulk_size, action); + if (ret) { + DR_LOG(ERR, "Failed to create recombination.\n"); + rte_errno = EINVAL; + goto free_action; + } + + return action; + +free_action: + simple_free(action); + return NULL; +} + static int mlx5dr_action_create_modify_header_root(struct mlx5dr_action *action, size_t actions_sz, @@ -1677,6 +2018,43 @@ static void mlx5dr_action_destroy_hws(struct mlx5dr_action *action) mlx5dr_action_destroy_stcs(action); mlx5dr_cmd_destroy_obj(action->reformat.arg_obj); break; + case MLX5DR_ACTION_TYP_IPV6_ROUTING_POP: + if (action->recom.action1) { + mlx5dr_action_destroy_stcs(action->recom.action1); + mlx5dr_pat_arg_destroy_modify_header(action->recom.action1->ctx, + action->recom.action1); + simple_free(action->recom.action1); + } + if (action->recom.action2) { + mlx5dr_action_destroy_stcs(action->recom.action2); + mlx5dr_pat_arg_destroy_modify_header(action->recom.action2->ctx, + action->recom.action2); + simple_free(action->recom.action2); + } + if (action->recom.action3) { + mlx5dr_action_destroy_stcs(action->recom.action3); + mlx5dr_pat_arg_destroy_modify_header(action->recom.action3->ctx, + action->recom.action3); + simple_free(action->recom.action3); + } + break; + case MLX5DR_ACTION_TYP_IPV6_ROUTING_PUSH: + if (action->recom.action1) { + mlx5dr_action_destroy_stcs(action->recom.action1); + mlx5dr_cmd_destroy_obj(action->recom.action1->reformat.arg_obj); + simple_free(action->recom.action1); + } + if 
(action->recom.action2) { + mlx5dr_action_destroy_stcs(action->recom.action2); + simple_free(action->recom.action2); + } + if (action->recom.action3) { + mlx5dr_action_destroy_stcs(action->recom.action3); + mlx5dr_pat_arg_destroy_modify_header(action->recom.action3->ctx, + action->recom.action3); + simple_free(action->recom.action3); + } + break; + } }
diff --git a/drivers/net/mlx5/hws/mlx5dr_action.h b/drivers/net/mlx5/hws/mlx5dr_action.h index 17619c0057..cb51f81da1 100644 --- a/drivers/net/mlx5/hws/mlx5dr_action.h +++ b/drivers/net/mlx5/hws/mlx5dr_action.h @@ -130,6 +130,11 @@ struct mlx5dr_action { struct mlx5dr_devx_obj *arg_obj; uint32_t header_size; } reformat; + struct { + struct mlx5dr_action *action1; + struct mlx5dr_action *action2; + struct mlx5dr_action *action3; + } recom; struct { struct mlx5dr_devx_obj *devx_obj; uint8_t return_reg_id;
diff --git a/drivers/net/mlx5/hws/mlx5dr_debug.c b/drivers/net/mlx5/hws/mlx5dr_debug.c index b8049a173d..1a6ad4dd71 100644 --- a/drivers/net/mlx5/hws/mlx5dr_debug.c +++ b/drivers/net/mlx5/hws/mlx5dr_debug.c @@ -22,6 +22,8 @@ const char *mlx5dr_debug_action_type_str[] = { [MLX5DR_ACTION_TYP_PUSH_VLAN] = "PUSH_VLAN", [MLX5DR_ACTION_TYP_ASO_METER] = "ASO_METER", [MLX5DR_ACTION_TYP_ASO_CT] = "ASO_CT", + [MLX5DR_ACTION_TYP_IPV6_ROUTING_POP] = "POP_IPV6_ROUTING", + [MLX5DR_ACTION_TYP_IPV6_ROUTING_PUSH] = "PUSH_IPV6_ROUTING", }; static_assert(ARRAY_SIZE(mlx5dr_debug_action_type_str) == MLX5DR_ACTION_TYP_MAX,

From patchwork Mon Apr 17 09:25:39 2023
X-Patchwork-Submitter: Rongwei Liu
X-Patchwork-Id: 126179
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Rongwei Liu
Subject: [PATCH v1 7/8] net/mlx5/hws: add setter for IPv6 routing push pop
Date: Mon, 17 Apr 2023 12:25:39 +0300
Message-ID: <20230417092540.2617450-8-rongweil@nvidia.com>
In-Reply-To: <20230417092540.2617450-1-rongweil@nvidia.com>
References: <20230417092540.2617450-1-rongweil@nvidia.com>
The rte action will be translated to 3 dr_actions, which need 3 setters to program them. Each setter may have different reparse properties. A setter that requires no reparse cannot share a slot with one that has reparse enabled, even if there is spare space.
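The three setters added below all follow one wrapper pattern: save the rule action, swap in one of the pre-created sub-actions (recom.action1/2/3), delegate to an existing base setter, then restore the original pointer. Here is a minimal standalone C sketch of just that pattern; struct action, struct rule_action, base_setter() and wrapper_setter() are simplified, hypothetical stand-ins, not the mlx5dr definitions.

    #include <assert.h>
    #include <stdio.h>

    /* Simplified stand-ins for the driver structures (illustrative only). */
    struct action {
            int type;
            struct action *sub_action[3]; /* pre-created action1..action3 */
    };

    struct rule_action {
            struct action *action;
    };

    /* Stand-in for an existing base setter (e.g. a modify-header setter). */
    static void base_setter(struct rule_action *ra)
    {
            printf("programming WQE segment for action type %d\n", ra->action->type);
    }

    /* Wrapper pattern: swap in sub-action n, reuse the base setter, restore. */
    static void wrapper_setter(struct rule_action *ra, int n)
    {
            struct action *orig = ra->action;

            assert(n >= 0 && n < 3);
            ra->action = orig->sub_action[n];
            base_setter(ra);
            ra->action = orig; /* restore for the setters that follow */
    }

    int main(void)
    {
            struct action subs[3] = { { .type = 1 }, { .type = 2 }, { .type = 3 } };
            struct action top = { .type = 0,
                                  .sub_action = { &subs[0], &subs[1], &subs[2] } };
            struct rule_action ra = { .action = &top };
            int i;

            for (i = 0; i < 3; i++)
                    wrapper_setter(&ra, i); /* one setter per sub-action */
            return 0;
    }

The real setters additionally assert the expected sub-action type and shared flag before delegating.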
Signed-off-by: Rongwei Liu --- drivers/net/mlx5/hws/mlx5dr_action.c | 144 +++++++++++++++++++++++++++ 1 file changed, 144 insertions(+) diff --git a/drivers/net/mlx5/hws/mlx5dr_action.c b/drivers/net/mlx5/hws/mlx5dr_action.c index fa38654644..9f2386479a 100644 --- a/drivers/net/mlx5/hws/mlx5dr_action.c +++ b/drivers/net/mlx5/hws/mlx5dr_action.c @@ -2318,6 +2318,57 @@ mlx5dr_action_setter_modify_header(struct mlx5dr_actions_apply_data *apply, } } +static void +mlx5dr_action_setter_ipv6_routing_pop1(struct mlx5dr_actions_apply_data *apply, + struct mlx5dr_actions_wqe_setter *setter) +{ + struct mlx5dr_rule_action *rule_action; + struct mlx5dr_action *action; + + rule_action = &apply->rule_action[setter->idx_double]; + action = rule_action->action; + MLX5_ASSERT(action->type == MLX5DR_ACTION_TYP_IPV6_ROUTING_POP); + rule_action->action = action->recom.action1; + MLX5_ASSERT(rule_action->action->flags & MLX5DR_ACTION_FLAG_SHARED); + MLX5_ASSERT(rule_action->action->type == MLX5DR_ACTION_TYP_MODIFY_HDR); + mlx5dr_action_setter_modify_header(apply, setter); + rule_action->action = action; +} + +static void +mlx5dr_action_setter_ipv6_routing_pop2(struct mlx5dr_actions_apply_data *apply, + struct mlx5dr_actions_wqe_setter *setter) +{ + struct mlx5dr_rule_action *rule_action; + struct mlx5dr_action *action; + + rule_action = &apply->rule_action[setter->idx_double]; + action = rule_action->action; + MLX5_ASSERT(action->type == MLX5DR_ACTION_TYP_IPV6_ROUTING_POP); + rule_action->action = action->recom.action2; + MLX5_ASSERT(rule_action->action->flags & MLX5DR_ACTION_FLAG_SHARED); + MLX5_ASSERT(rule_action->action->type == MLX5DR_ACTION_TYP_MODIFY_HDR); + mlx5dr_action_setter_modify_header(apply, setter); + rule_action->action = action; +} + +static void +mlx5dr_action_setter_ipv6_routing_pop3(struct mlx5dr_actions_apply_data *apply, + struct mlx5dr_actions_wqe_setter *setter) +{ + struct mlx5dr_rule_action *rule_action; + struct mlx5dr_action *action; + + rule_action = &apply->rule_action[setter->idx_double]; + action = rule_action->action; + MLX5_ASSERT(action->type == MLX5DR_ACTION_TYP_IPV6_ROUTING_POP); + rule_action->action = action->recom.action3; + MLX5_ASSERT(rule_action->action->flags & MLX5DR_ACTION_FLAG_SHARED); + MLX5_ASSERT(rule_action->action->type == MLX5DR_ACTION_TYP_IPV6_ROUTING_POP); + mlx5dr_action_setter_modify_header(apply, setter); + rule_action->action = action; +} + static void mlx5dr_action_setter_insert_ptr(struct mlx5dr_actions_apply_data *apply, struct mlx5dr_actions_wqe_setter *setter) @@ -2346,6 +2397,60 @@ mlx5dr_action_setter_insert_ptr(struct mlx5dr_actions_apply_data *apply, } } +static void +mlx5dr_action_setter_ipv6_routing_push1(struct mlx5dr_actions_apply_data *apply, + struct mlx5dr_actions_wqe_setter *setter) +{ + struct mlx5dr_rule_action *rule_action; + struct mlx5dr_rule_action tmp; + + rule_action = &apply->rule_action[setter->idx_double]; + tmp = *rule_action; + MLX5_ASSERT(rule_action->action->type == MLX5DR_ACTION_TYP_IPV6_ROUTING_PUSH); + rule_action->action = tmp.action->recom.action1; + MLX5_ASSERT(rule_action->action->type == MLX5DR_ACTION_TYP_IPV6_ROUTING_PUSH); + rule_action->reformat.offset = tmp.recom.offset; + rule_action->reformat.data = tmp.recom.data; + mlx5dr_action_setter_insert_ptr(apply, setter); + *rule_action = tmp; +} + +static void +mlx5dr_action_setter_ipv6_routing_push2(struct mlx5dr_actions_apply_data *apply, + struct mlx5dr_actions_wqe_setter *setter) +{ + struct mlx5dr_rule_action *rule_action; + struct mlx5dr_action *action; + + 
rule_action = &apply->rule_action[setter->idx_double]; + action = rule_action->action; + MLX5_ASSERT(action->type == MLX5DR_ACTION_TYP_IPV6_ROUTING_PUSH); + rule_action->action = action->recom.action2; + MLX5_ASSERT(rule_action->action->flags & MLX5DR_ACTION_FLAG_SHARED); + MLX5_ASSERT(rule_action->action->type == MLX5DR_ACTION_TYP_MODIFY_HDR); + mlx5dr_action_setter_modify_header(apply, setter); + rule_action->action = action; +} + +static void +mlx5dr_action_setter_ipv6_routing_push3(struct mlx5dr_actions_apply_data *apply, + struct mlx5dr_actions_wqe_setter *setter) +{ + struct mlx5dr_rule_action *rule_action; + struct mlx5dr_rule_action tmp; + + rule_action = &apply->rule_action[setter->idx_double]; + tmp = *rule_action; + MLX5_ASSERT(rule_action->action->type == MLX5DR_ACTION_TYP_IPV6_ROUTING_PUSH); + rule_action->action = tmp.action->recom.action3; + MLX5_ASSERT(rule_action->action->type == MLX5DR_ACTION_TYP_MODIFY_HDR); + rule_action->modify_header.offset = tmp.recom.offset; + rule_action->modify_header.data = tmp.recom.mhdr; + MLX5_ASSERT(rule_action->action->modify_header.num_of_actions > 1); + mlx5dr_action_setter_modify_header(apply, setter); + *rule_action = tmp; +} + static void mlx5dr_action_setter_tnl_l3_to_l2(struct mlx5dr_actions_apply_data *apply, struct mlx5dr_actions_wqe_setter *setter) @@ -2553,6 +2658,45 @@ int mlx5dr_action_template_process(struct mlx5dr_action_template *at) setter->idx_double = i; break; + case MLX5DR_ACTION_TYP_IPV6_ROUTING_POP: + /* Double internal modify header list */ + setter = mlx5dr_action_setter_find_first(last_setter, + ASF_DOUBLE | ASF_REMOVE); + setter->flags |= ASF_DOUBLE | ASF_MODIFY | ASF_REPARSE; + setter->set_double = &mlx5dr_action_setter_ipv6_routing_pop1; + setter->idx_double = i; + setter++; + setter->flags |= ASF_DOUBLE | ASF_MODIFY | ASF_REPARSE; + setter->set_double = &mlx5dr_action_setter_ipv6_routing_pop2; + setter->idx_double = i; + setter++; + /* restore IPv6 protocol + pop via modify list. 
*/ + setter->flags |= ASF_DOUBLE | ASF_MODIFY | ASF_REPARSE; + setter->set_double = &mlx5dr_action_setter_ipv6_routing_pop3; + setter->idx_double = i; + break; + + case MLX5DR_ACTION_TYP_IPV6_ROUTING_PUSH: + /* Double insert header with pointer */ + setter = mlx5dr_action_setter_find_first(last_setter, ASF_DOUBLE); + /* Can't squeeze with reparsing setter */ + if (setter->flags & ASF_REPARSE) + setter++; + setter->flags |= ASF_DOUBLE; + setter->set_double = &mlx5dr_action_setter_ipv6_routing_push1; + setter->idx_double = i; + setter++; + /* Set IPv6 protocol to 0x2b */ + setter->flags |= ASF_DOUBLE | ASF_MODIFY | ASF_REPARSE; + setter->set_double = &mlx5dr_action_setter_ipv6_routing_push2; + setter->idx_double = i; + setter++; + /* Load next hop IPv6 address and restore srv6.next_hdr */ + setter->flags |= ASF_DOUBLE | ASF_MODIFY | ASF_REPARSE; + setter->set_double = &mlx5dr_action_setter_ipv6_routing_push3; + setter->idx_double = i; + break; + case MLX5DR_ACTION_TYP_MODIFY_HDR: /* Double modify header list */ setter = mlx5dr_action_setter_find_first(last_setter, ASF_DOUBLE | ASF_REMOVE);

From patchwork Mon Apr 17 09:25:40 2023
X-Patchwork-Submitter: Rongwei Liu
X-Patchwork-Id: 126181
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Rongwei Liu
Subject: [PATCH v1 8/8] net/mlx5: implement IPv6 routing push pop
Date: Mon, 17 Apr 2023 12:25:40 +0300
Message-ID: <20230417092540.2617450-9-rongweil@nvidia.com>
In-Reply-To: <20230417092540.2617450-1-rongweil@nvidia.com>
References: <20230417092540.2617450-1-rongweil@nvidia.com>
Reserve a push data buffer for each job; the maximum length is set to 128 bytes for now. Only the IPPROTO_ROUTING type is supported when translating the rte flow action. Pop actions must be shared globally, and the next layer must be TCP or UDP.

Signed-off-by: Rongwei Liu --- doc/guides/nics/mlx5.rst | 9 +- drivers/net/mlx5/mlx5.h | 1 + drivers/net/mlx5/mlx5_flow.h | 25 ++- drivers/net/mlx5/mlx5_flow_hw.c | 268 ++++++++++++++++++++++++++++++-- 4 files changed, 291 insertions(+), 12 deletions(-)
diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst index 7a137d5f6a..11b7864d23 100644 --- a/doc/guides/nics/mlx5.rst +++ b/doc/guides/nics/mlx5.rst @@ -162,7 +162,7 @@ Features - Sub-Function. - Matching on represented port. - Matching on aggregated affinity. - +- Push or remove IPv6 routing extension. Limitations ----------- @@ -694,6 +694,13 @@ Limitations The flow engine of a process cannot move from active to standby mode if preceding active application rules are still present and vice versa. +- IPv6 routing extension push or remove: + + - Supported only with HW Steering enabled (``dv_flow_en`` = 2).
+ - Supported in non-zero group (no limits on the transfer domain if ``fdb_def_rule_en`` = 1, which is the default). + - Only TCP or UDP is supported as the next layer. + - IPv6 routing header must be the only extension present. + Statistics ----------
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h index 2cb6364957..5c568070a3 100644 --- a/drivers/net/mlx5/mlx5.h +++ b/drivers/net/mlx5/mlx5.h @@ -364,6 +364,7 @@ struct mlx5_hw_q_job { }; void *user_data; /* Job user data. */ uint8_t *encap_data; /* Encap data. */ + uint8_t *push_data; /* IPv6 routing push data. */ struct mlx5_modification_cmd *mhdr_cmd; struct rte_flow_item *items; union {
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h index 821c6ca281..97dc7c3b4d 100644 --- a/drivers/net/mlx5/mlx5_flow.h +++ b/drivers/net/mlx5/mlx5_flow.h @@ -311,6 +311,8 @@ enum mlx5_feature_name { #define MLX5_FLOW_ACTION_SEND_TO_KERNEL (1ull << 42) #define MLX5_FLOW_ACTION_INDIRECT_COUNT (1ull << 43) #define MLX5_FLOW_ACTION_INDIRECT_AGE (1ull << 44) +#define MLX5_FLOW_ACTION_IPV6_ROUTING_POP (1ull << 45) +#define MLX5_FLOW_ACTION_IPV6_ROUTING_PUSH (1ull << 46) #define MLX5_FLOW_DROP_INCLUSIVE_ACTIONS \ (MLX5_FLOW_ACTION_COUNT | MLX5_FLOW_ACTION_SAMPLE | MLX5_FLOW_ACTION_AGE) @@ -538,6 +540,7 @@ struct mlx5_flow_dv_matcher { struct mlx5_flow_dv_match_params mask; /**< Matcher mask. */ }; +#define MLX5_PUSH_MAX_LEN 128 #define MLX5_ENCAP_MAX_LEN 132 /* Encap/decap resource structure. */ @@ -1167,6 +1170,8 @@ struct rte_flow_hw { #pragma GCC diagnostic error "-Wpedantic" #endif +#define MLX5_MHDR_MAX_CMD ((MLX5_MAX_MODIFY_NUM) * 2 + 1) + /* rte flow action translate to DR action struct. */ struct mlx5_action_construct_data { LIST_ENTRY(mlx5_action_construct_data) next; @@ -1211,6 +1216,12 @@ struct mlx5_action_construct_data { struct { cnt_id_t id; } shared_counter; + struct { + /* IPv6 routing push data len. */ + uint16_t len; + /* Modify header actions to keep valid checksum. */ + struct mlx5_modification_cmd cmd[MLX5_MHDR_MAX_CMD]; + } recom; struct { uint32_t id; } shared_meter; @@ -1253,6 +1264,7 @@ struct rte_flow_actions_template { uint16_t *actions_off; /* DR action offset for given rte action offset. */ uint16_t reformat_off; /* Offset of DR reformat action. */ uint16_t mhdr_off; /* Offset of DR modify header action. */ + uint16_t recom_off; /* Offset of DR IPv6 routing push pop action. */ uint32_t refcnt; /* Reference counter. */ uint16_t rx_cpy_pos; /* Action position of Rx metadata to be copied. */ uint8_t flex_item; /* flex item index. */ @@ -1275,7 +1287,14 @@ struct mlx5_hw_encap_decap_action { uint8_t data[]; /* Action data. */ }; -#define MLX5_MHDR_MAX_CMD ((MLX5_MAX_MODIFY_NUM) * 2 + 1) +/* Push pop action struct. */ +struct mlx5_hw_push_pop_action { + struct mlx5dr_action *action; /* Action object. */ + /* Is push_pop action shared across flows in table. */ + uint8_t shared; + size_t data_size; /* Action metadata size. */ + uint8_t data[]; /* Action data. */ +}; /* Modify field action struct. */ struct mlx5_hw_modify_header_action { @@ -1304,6 +1323,9 @@ struct mlx5_hw_actions { /* Encap/Decap action. */ struct mlx5_hw_encap_decap_action *encap_decap; uint16_t encap_decap_pos; /* Encap/Decap action position. */ + /* Push/Pop action. */ + struct mlx5_hw_push_pop_action *push_pop; + uint16_t push_pop_pos; /* Push/Pop action position. */ uint32_t mark:1; /* Indicate the mark action. */ cnt_id_t cnt_id; /* Counter id. */ uint32_t mtr_id; /* Meter id.
*/ @@ -1329,7 +1351,6 @@ struct mlx5_flow_group { uint32_t idx; /* Group memory index. */ }; - #define MLX5_HW_TBL_MAX_ITEM_TEMPLATE 2 #define MLX5_HW_TBL_MAX_ACTION_TEMPLATE 32 diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c index 7e0ee8d883..d6b2953d55 100644 --- a/drivers/net/mlx5/mlx5_flow_hw.c +++ b/drivers/net/mlx5/mlx5_flow_hw.c @@ -479,6 +479,12 @@ __flow_hw_action_template_destroy(struct rte_eth_dev *dev, mlx5_free(acts->encap_decap); acts->encap_decap = NULL; } + if (acts->push_pop) { + if (acts->push_pop->action) + mlx5dr_action_destroy(acts->push_pop->action); + mlx5_free(acts->push_pop); + acts->push_pop = NULL; + } if (acts->mhdr) { if (acts->mhdr->action) mlx5dr_action_destroy(acts->mhdr->action); @@ -601,6 +607,53 @@ __flow_hw_act_data_encap_append(struct mlx5_priv *priv, return 0; } +/** + * Append dynamic push action to the dynamic action list. + * + * @param[in] dev + * Pointer to the port. + * @param[in] acts + * Pointer to the template HW steering DR actions. + * @param[in] type + * Action type. + * @param[in] action_src + * Offset of source rte flow action. + * @param[in] action_dst + * Offset of destination DR action. + * @param[in] len + * Length of the data to be updated. + * @param[in] buf + * Data to be updated. + * + * @return + * Data pointer on success, NULL otherwise and rte_errno is set. + */ +static __rte_always_inline void * +__flow_hw_act_data_push_append(struct rte_eth_dev *dev, + struct mlx5_hw_actions *acts, + enum rte_flow_action_type type, + uint16_t action_src, + uint16_t action_dst, + uint16_t len, uint8_t *buf) +{ + struct mlx5_modification_cmd cmd[MLX5_MHDR_MAX_CMD]; + struct mlx5_action_construct_data *act_data; + struct mlx5_priv *priv = dev->data->dev_private; + int ret; + + memset(cmd, 0, sizeof(cmd)); + ret = flow_dv_generate_ipv6_routing_push_mhdr2(dev, NULL, cmd, MLX5_MHDR_MAX_CMD, buf); + if (ret < 0) + return NULL; + act_data = __flow_hw_act_data_alloc(priv, type, action_src, action_dst); + if (!act_data) + return NULL; + act_data->recom.len = len; + memcpy(act_data->recom.cmd, cmd, ret * sizeof(struct mlx5_modification_cmd)); + LIST_INSERT_HEAD(&acts->act_list, act_data, next); + return act_data; +} + static __rte_always_inline int __flow_hw_act_data_hdr_modify_append(struct mlx5_priv *priv, struct mlx5_hw_actions *acts, @@ -1359,20 +1412,25 @@ __flow_hw_actions_translate(struct rte_eth_dev *dev, { struct mlx5_priv *priv = dev->data->dev_private; const struct rte_flow_template_table_attr *table_attr = &cfg->attr; + struct mlx5_hca_flex_attr *hca_attr = &priv->sh->cdev->config.hca_attr.flex; const struct rte_flow_attr *attr = &table_attr->flow_attr; struct rte_flow_action *actions = at->actions; struct rte_flow_action *action_start = actions; struct rte_flow_action *masks = at->masks; - enum mlx5dr_action_reformat_type refmt_type = 0; + enum mlx5dr_action_type refmt_type = MLX5DR_ACTION_TYP_LAST; + enum mlx5dr_action_type recom_type = (enum mlx5dr_action_type)0; const struct rte_flow_action_raw_encap *raw_encap_data; + const struct rte_flow_action_ipv6_ext_push *ipv6_ext_data; const struct rte_flow_item *enc_item = NULL, *enc_item_m = NULL; - uint16_t reformat_src = 0; + uint16_t reformat_src = 0, recom_src = 0; uint8_t *encap_data = NULL, *encap_data_m = NULL; - size_t data_size = 0; + uint8_t *push_data = NULL, *push_data_m = NULL; + size_t data_size = 0, push_size = 0; struct mlx5_hw_modify_header_action mhdr = { 0 }; bool actions_end = false; uint32_t type; bool reformat_used = false; + bool recom_used 
= false; unsigned int of_vlan_offset; uint16_t action_pos; uint16_t jump_pos; @@ -1564,6 +1622,36 @@ __flow_hw_actions_translate(struct rte_eth_dev *dev, reformat_used = true; refmt_type = MLX5DR_ACTION_REFORMAT_TYPE_TNL_L2_TO_L2; break; + case RTE_FLOW_ACTION_TYPE_IPV6_EXT_PUSH: + if (!hca_attr->query_match_sample_info || !hca_attr->parse_graph_anchor || + !priv->sh->srh_flex_parser.flex.mapnum) { + DRV_LOG(ERR, "SRv6 anchor is not supported."); + goto err; + } + MLX5_ASSERT(!recom_used && !recom_type); + recom_used = true; + recom_type = MLX5DR_ACTION_TYP_IPV6_ROUTING_PUSH; + if (masks) { + ipv6_ext_data = + (const struct rte_flow_action_ipv6_ext_push *)masks->conf; + if (ipv6_ext_data) + push_data_m = ipv6_ext_data->data; + } + ipv6_ext_data = + (const struct rte_flow_action_ipv6_ext_push *)actions->conf; + push_data = ipv6_ext_data->data; + push_size = ipv6_ext_data->size; + recom_src = actions - action_start; + break; + case RTE_FLOW_ACTION_TYPE_IPV6_EXT_REMOVE: + if (!hca_attr->query_match_sample_info || !hca_attr->parse_graph_anchor || + !priv->sh->srh_flex_parser.flex.mapnum) { + DRV_LOG(ERR, "SRv6 anchor is not supported."); + goto err; + } + recom_used = true; + recom_type = MLX5DR_ACTION_TYP_IPV6_ROUTING_POP; + break; case RTE_FLOW_ACTION_TYPE_SEND_TO_KERNEL: DRV_LOG(ERR, "send to kernel action is not supported in HW steering."); goto err; @@ -1767,6 +1855,47 @@ __flow_hw_actions_translate(struct rte_eth_dev *dev, acts->encap_decap->shared = shared_rfmt; acts->encap_decap_pos = at->reformat_off; } + if (recom_used) { + struct mlx5_action_construct_data *act_data; + uint32_t flag, bulk = 0; + + flag = mlx5_hw_act_flag[!!attr->group][type]; + if (push_data && !push_data_m) + bulk = rte_log2_u32(table_attr->nb_flows); + else + flag |= MLX5DR_ACTION_FLAG_SHARED; + + MLX5_ASSERT(at->recom_off != UINT16_MAX); + acts->push_pop = mlx5_malloc(MLX5_MEM_ZERO, + sizeof(*acts->push_pop) + push_size, 0, SOCKET_ID_ANY); + if (!acts->push_pop) + goto err; + if (push_data && push_size) { + acts->push_pop->data_size = push_size; + memcpy(acts->push_pop->data, push_data, push_size); + } + acts->push_pop->action = mlx5dr_action_create_recombination(priv->dr_ctx, + recom_type, push_size, push_data, bulk, flag); + if (!acts->push_pop->action) + goto err; + acts->rule_acts[at->recom_off].action = acts->push_pop->action; + acts->rule_acts[at->recom_off].recom.data = acts->push_pop->data; + acts->rule_acts[at->recom_off].recom.offset = 0; + acts->push_pop->shared = flag & MLX5DR_ACTION_FLAG_SHARED; + acts->push_pop_pos = at->recom_off; + if (!acts->push_pop->shared) { + act_data = __flow_hw_act_data_push_append(dev, acts, + RTE_FLOW_ACTION_TYPE_IPV6_EXT_PUSH, + recom_src, at->recom_off, push_size, + acts->push_pop->data); + if (!act_data) + goto err; + /* Clear srv6 next header */ + *acts->push_pop->data = 0; + acts->rule_acts[at->recom_off].recom.mhdr = + (uint8_t *)act_data->recom.cmd; + } + } return 0; err: err = rte_errno; @@ -2143,11 +2272,13 @@ flow_hw_actions_construct(struct rte_eth_dev *dev, const struct mlx5_hw_actions *hw_acts = &hw_at->acts; const struct rte_flow_action *action; const struct rte_flow_action_raw_encap *raw_encap_data; + const struct rte_flow_action_ipv6_ext_push *ipv6_push; const struct rte_flow_item *enc_item = NULL; const struct rte_flow_action_ethdev *port_action = NULL; const struct rte_flow_action_meter *meter = NULL; const struct rte_flow_action_age *age = NULL; uint8_t *buf = job->encap_data; + uint8_t *push_buf = job->push_data; struct rte_flow_attr attr = { .ingress 
= 1, }; @@ -2273,6 +2404,12 @@ flow_hw_actions_construct(struct rte_eth_dev *dev, MLX5_ASSERT(raw_encap_data->size == act_data->encap.len); break; + case RTE_FLOW_ACTION_TYPE_IPV6_EXT_PUSH: + ipv6_push = + (const struct rte_flow_action_ipv6_ext_push *)action->conf; + rte_memcpy((void *)push_buf, ipv6_push->data, act_data->recom.len); + MLX5_ASSERT(ipv6_push->size == act_data->recom.len); + break; case RTE_FLOW_ACTION_TYPE_MODIFY_FIELD: if (action->type == RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_VID) ret = flow_hw_set_vlan_vid_construct(dev, job, @@ -2428,6 +2565,32 @@ flow_hw_actions_construct(struct rte_eth_dev *dev, job->flow->idx - 1; rule_acts[hw_acts->encap_decap_pos].reformat.data = buf; } + if (hw_acts->push_pop && !hw_acts->push_pop->shared) { + struct mlx5_modification_cmd *mhdr; + uint32_t data_ofs, rule_data; + int i; + + rule_acts[hw_acts->push_pop_pos].recom.offset = + job->flow->idx - 1; + mhdr = (struct mlx5_modification_cmd *)rule_acts + [hw_acts->push_pop_pos].recom.mhdr; + /* Modify IPv6 dst address is in reverse order. */ + data_ofs = sizeof(struct rte_ipv6_routing_ext) + *(push_buf + 3) * 16; + data_ofs -= sizeof(uint32_t); + /* next_hop address. */ + for (i = 0; i < 4; i++) { + rule_data = flow_dv_fetch_field(push_buf + data_ofs, + sizeof(uint32_t)); + mhdr[i].data1 = rte_cpu_to_be_32(rule_data); + data_ofs -= sizeof(uint32_t); + } + /* next_hdr */ + rule_data = flow_dv_fetch_field(push_buf, sizeof(uint8_t)); + mhdr[i].data1 = rte_cpu_to_be_32(rule_data); + /* clear next_hdr for insert. */ + *push_buf = 0; + rule_acts[hw_acts->push_pop_pos].recom.data = push_buf; + } if (mlx5_hws_cnt_id_valid(hw_acts->cnt_id)) job->flow->cnt_id = hw_acts->cnt_id; return 0; @@ -3864,6 +4027,38 @@ flow_hw_validate_action_indirect(struct rte_eth_dev *dev, return 0; } +/** + * Validate ipv6_ext_push action. + * + * @param[in] dev + * Pointer to rte_eth_dev structure. + * @param[in] action + * Pointer to the indirect action. + * @param[out] error + * Pointer to error structure. + * + * @return + * 0 on success, a negative errno value otherwise and rte_errno is set. + */ +static int +flow_hw_validate_action_ipv6_ext_push(struct rte_eth_dev *dev __rte_unused, + const struct rte_flow_action *action, + struct rte_flow_error *error) +{ + const struct rte_flow_action_ipv6_ext_push *raw_push_data = action->conf; + + if (!raw_push_data || !raw_push_data->size || !raw_push_data->data) + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, action, + "invalid ipv6_ext_push data"); + if (raw_push_data->type != IPPROTO_ROUTING || + raw_push_data->size > MLX5_PUSH_MAX_LEN) + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, action, + "Unsupported ipv6_ext_push type or length"); + return 0; +} + /** * Validate raw_encap action. * @@ -4046,6 +4241,7 @@ mlx5_flow_hw_actions_validate(struct rte_eth_dev *dev, uint16_t i; bool actions_end = false; int ret; + const struct rte_flow_action_ipv6_ext_remove *remove_data; /* FDB actions are only valid to proxy port. 
*/ if (attr->transfer && (!priv->sh->config.dv_esw_en || !priv->master)) @@ -4122,6 +4318,21 @@ mlx5_flow_hw_actions_validate(struct rte_eth_dev *dev, /* TODO: Validation logic */ action_flags |= MLX5_FLOW_ACTION_DECAP; break; + case RTE_FLOW_ACTION_TYPE_IPV6_EXT_PUSH: + ret = flow_hw_validate_action_ipv6_ext_push(dev, action, error); + if (ret < 0) + return ret; + action_flags |= MLX5_FLOW_ACTION_IPV6_ROUTING_PUSH; + break; + case RTE_FLOW_ACTION_TYPE_IPV6_EXT_REMOVE: + remove_data = action->conf; + /* Remove action must be shared. */ + if (remove_data->type != IPPROTO_ROUTING || !mask) { + DRV_LOG(ERR, "Only supports shared IPv6 routing remove"); + return -EINVAL; + } + action_flags |= MLX5_FLOW_ACTION_IPV6_ROUTING_POP; + break; case RTE_FLOW_ACTION_TYPE_METER: /* TODO: Validation logic */ action_flags |= MLX5_FLOW_ACTION_METER; @@ -4229,6 +4440,8 @@ static enum mlx5dr_action_type mlx5_hw_dr_action_types[] = { [RTE_FLOW_ACTION_TYPE_CONNTRACK] = MLX5DR_ACTION_TYP_ASO_CT, [RTE_FLOW_ACTION_TYPE_OF_POP_VLAN] = MLX5DR_ACTION_TYP_POP_VLAN, [RTE_FLOW_ACTION_TYPE_OF_PUSH_VLAN] = MLX5DR_ACTION_TYP_PUSH_VLAN, + [RTE_FLOW_ACTION_TYPE_IPV6_EXT_PUSH] = MLX5DR_ACTION_TYP_IPV6_ROUTING_PUSH, + [RTE_FLOW_ACTION_TYPE_IPV6_EXT_REMOVE] = MLX5DR_ACTION_TYP_IPV6_ROUTING_POP, }; static int @@ -4285,6 +4498,8 @@ flow_hw_dr_actions_template_handle_shared(const struct rte_flow_action *mask, /** * Create DR action template based on a provided sequence of flow actions. * + * @param[in] dev + * Pointer to the rte_eth_dev structure. * @param[in] at * Pointer to flow actions template to be updated. * @@ -4293,7 +4508,8 @@ flow_hw_dr_actions_template_handle_shared(const struct rte_flow_action *mask, * NULL otherwise. */ static struct mlx5dr_action_template * -flow_hw_dr_actions_template_create(struct rte_flow_actions_template *at) +flow_hw_dr_actions_template_create(struct rte_eth_dev *dev, + struct rte_flow_actions_template *at) { struct mlx5dr_action_template *dr_template; enum mlx5dr_action_type action_types[MLX5_HW_MAX_ACTS] = { MLX5DR_ACTION_TYP_LAST }; @@ -4302,8 +4518,11 @@ flow_hw_dr_actions_template_create(struct rte_flow_actions_template *at) enum mlx5dr_action_type reformat_act_type = MLX5DR_ACTION_TYP_TNL_L2_TO_L2; uint16_t reformat_off = UINT16_MAX; uint16_t mhdr_off = UINT16_MAX; + uint16_t recom_off = UINT16_MAX; uint16_t cnt_off = UINT16_MAX; + enum mlx5dr_action_type recom_type = MLX5DR_ACTION_TYP_IPV6_ROUTING_POP; int ret; + for (i = 0, curr_off = 0; at->actions[i].type != RTE_FLOW_ACTION_TYPE_END; ++i) { const struct rte_flow_action_raw_encap *raw_encap_data; size_t data_size; @@ -4332,6 +4551,16 @@ flow_hw_dr_actions_template_create(struct rte_flow_actions_template *at) reformat_off = curr_off++; reformat_act_type = mlx5_hw_dr_action_types[at->actions[i].type]; break; + case RTE_FLOW_ACTION_TYPE_IPV6_EXT_PUSH: + MLX5_ASSERT(recom_off == UINT16_MAX); + recom_type = MLX5DR_ACTION_TYP_IPV6_ROUTING_PUSH; + recom_off = curr_off++; + break; + case RTE_FLOW_ACTION_TYPE_IPV6_EXT_REMOVE: + MLX5_ASSERT(recom_off == UINT16_MAX); + recom_type = MLX5DR_ACTION_TYP_IPV6_ROUTING_POP; + recom_off = curr_off++; + break; case RTE_FLOW_ACTION_TYPE_RAW_ENCAP: raw_encap_data = at->actions[i].conf; data_size = raw_encap_data->size; @@ -4404,11 +4633,25 @@ flow_hw_dr_actions_template_create(struct rte_flow_actions_template *at) at->reformat_off = reformat_off; action_types[reformat_off] = reformat_act_type; } + if (recom_off != UINT16_MAX) { + at->recom_off = recom_off; + action_types[recom_off] = recom_type; + } 
dr_template = mlx5dr_action_template_create(action_types); - if (dr_template) + if (dr_template) { at->dr_actions_num = curr_off; - else + } else { DRV_LOG(ERR, "Failed to create DR action template: %d", rte_errno); + return NULL; + } + /* Create srh flex parser for pop anchor. */ + if ((recom_type == MLX5DR_ACTION_TYP_IPV6_ROUTING_POP || + recom_type == MLX5DR_ACTION_TYP_IPV6_ROUTING_PUSH) && + mlx5_alloc_srh_flex_parser(dev)) { + DRV_LOG(ERR, "Failed to create srv6 flex parser"); + claim_zero(mlx5dr_action_template_destroy(dr_template)); + return NULL; + } return dr_template; err_actions_num: DRV_LOG(ERR, "Number of HW actions (%u) exceeded maximum (%u) allowed in template", @@ -4706,6 +4949,7 @@ flow_hw_actions_template_create(struct rte_eth_dev *dev, at->actions_off[i] = UINT16_MAX; at->reformat_off = UINT16_MAX; at->mhdr_off = UINT16_MAX; + at->recom_off = UINT16_MAX; at->rx_cpy_pos = pos; /* * mlx5 PMD hacks indirect action index directly to the action conf. @@ -4734,7 +4978,7 @@ flow_hw_actions_template_create(struct rte_eth_dev *dev, } } } - at->tmpl = flow_hw_dr_actions_template_create(at); + at->tmpl = flow_hw_dr_actions_template_create(dev, at); if (!at->tmpl) goto error; at->action_flags = action_flags; @@ -4779,6 +5023,8 @@ flow_hw_actions_template_destroy(struct rte_eth_dev *dev, NULL, "action template in using"); } + if (template->tmpl && mlx5dr_action_template_contain_srv6(template->tmpl)) + mlx5_free_srh_flex_parser(dev); LIST_REMOVE(template, next); flow_hw_flex_item_release(dev, &template->flex_item); if (template->tmpl) @@ -7230,6 +7476,7 @@ flow_hw_configure(struct rte_eth_dev *dev, mem_size += (sizeof(struct mlx5_hw_q_job *) + sizeof(struct mlx5_hw_q_job) + sizeof(uint8_t) * MLX5_ENCAP_MAX_LEN + + sizeof(uint8_t) * MLX5_PUSH_MAX_LEN + sizeof(struct mlx5_modification_cmd) * MLX5_MHDR_MAX_CMD + sizeof(struct rte_flow_item) * @@ -7244,7 +7491,7 @@ flow_hw_configure(struct rte_eth_dev *dev, } for (i = 0; i < nb_q_updated; i++) { char mz_name[RTE_MEMZONE_NAMESIZE]; - uint8_t *encap = NULL; + uint8_t *encap = NULL, *push = NULL; struct mlx5_modification_cmd *mhdr_cmd = NULL; struct rte_flow_item *items = NULL; @@ -7263,11 +7510,14 @@ flow_hw_configure(struct rte_eth_dev *dev, &job[_queue_attr[i]->size]; encap = (uint8_t *) &mhdr_cmd[_queue_attr[i]->size * MLX5_MHDR_MAX_CMD]; - items = (struct rte_flow_item *) + push = (uint8_t *) &encap[_queue_attr[i]->size * MLX5_ENCAP_MAX_LEN]; + items = (struct rte_flow_item *) + &push[_queue_attr[i]->size * MLX5_PUSH_MAX_LEN]; for (j = 0; j < _queue_attr[i]->size; j++) { job[j].mhdr_cmd = &mhdr_cmd[j * MLX5_MHDR_MAX_CMD]; job[j].encap_data = &encap[j * MLX5_ENCAP_MAX_LEN]; + job[j].push_data = &push[j * MLX5_PUSH_MAX_LEN]; job[j].items = &items[j * MLX5_HW_MAX_ITEMS]; priv->hw_q[i].job[j] = &job[j]; }
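The final hunk carves each queue's job storage out of a single allocation: the job array, then modify-header commands, encap buffers, the new IPv6 routing push buffers, and flow items, with every job pointing into its own slice. Below is a minimal standalone C sketch of that carving pattern under simplified assumptions; struct job, ENCAP_MAX_LEN and PUSH_MAX_LEN are illustrative placeholders (only the encap and push slices are modelled), not the driver's definitions.

    #include <stdio.h>
    #include <stdlib.h>

    /* Illustrative sizes standing in for MLX5_ENCAP_MAX_LEN / MLX5_PUSH_MAX_LEN. */
    #define ENCAP_MAX_LEN 132
    #define PUSH_MAX_LEN  128

    struct job {
            unsigned char *encap_data;
            unsigned char *push_data;
    };

    int main(void)
    {
            size_t queue_size = 4;
            /* One allocation holding the job array followed by the per-job buffers. */
            size_t mem_size = queue_size *
                              (sizeof(struct job) + ENCAP_MAX_LEN + PUSH_MAX_LEN);
            unsigned char *base = calloc(1, mem_size);
            struct job *jobs;
            unsigned char *encap, *push;
            size_t j;

            if (base == NULL)
                    return 1;
            jobs = (struct job *)base;
            encap = (unsigned char *)&jobs[queue_size];
            push = &encap[queue_size * ENCAP_MAX_LEN];
            for (j = 0; j < queue_size; j++) {
                    jobs[j].encap_data = &encap[j * ENCAP_MAX_LEN];
                    jobs[j].push_data = &push[j * PUSH_MAX_LEN];
            }
            printf("job[0] push buffer starts at offset %zu of %zu bytes\n",
                   (size_t)(jobs[0].push_data - base), mem_size);
            free(base);
            return 0;
    }

Keeping all per-job scratch buffers in one contiguous allocation means adding the push buffers only required one more term in mem_size and one more carving step per queue.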