From patchwork Thu Feb 10 16:29:14 2022
X-Patchwork-Submitter: Suanming Mou
X-Patchwork-Id: 107289
X-Patchwork-Delegate: rasland@nvidia.com
From: Suanming Mou
Subject: [PATCH 01/13] net/mlx5: introduce hardware steering operation
Date: Thu, 10 Feb 2022 18:29:14 +0200
Message-ID: <20220210162926.20436-2-suanmingm@nvidia.com>
In-Reply-To: <20220210162926.20436-1-suanmingm@nvidia.com>
References: <20220210162926.20436-1-suanmingm@nvidia.com>
A new hardware-based steering operation is going to be introduced to
provide a high flow insertion rate. This commit adds the basic driver
operation (a short sketch of the dispatch pattern it extends follows
the diff).

Signed-off-by: Suanming Mou
---
 drivers/net/mlx5/linux/mlx5_flow_os.h |  1 +
 drivers/net/mlx5/meson.build          |  1 +
 drivers/net/mlx5/mlx5_flow.c          |  1 +
 drivers/net/mlx5/mlx5_flow.h          |  1 +
 drivers/net/mlx5/mlx5_flow_hw.c       | 13 +++++++++++++
 5 files changed, 17 insertions(+)
 create mode 100644 drivers/net/mlx5/mlx5_flow_hw.c

diff --git a/drivers/net/mlx5/linux/mlx5_flow_os.h b/drivers/net/mlx5/linux/mlx5_flow_os.h
index 1926d26410..e28a9e0436 100644
--- a/drivers/net/mlx5/linux/mlx5_flow_os.h
+++ b/drivers/net/mlx5/linux/mlx5_flow_os.h
@@ -9,6 +9,7 @@
 
 #ifdef HAVE_IBV_FLOW_DV_SUPPORT
 extern const struct mlx5_flow_driver_ops mlx5_flow_dv_drv_ops;
+extern const struct mlx5_flow_driver_ops mlx5_flow_hw_drv_ops;
 #endif
 
 /**
diff --git a/drivers/net/mlx5/meson.build b/drivers/net/mlx5/meson.build
index 7d12dccdd4..edd4f126b3 100644
--- a/drivers/net/mlx5/meson.build
+++ b/drivers/net/mlx5/meson.build
@@ -16,6 +16,7 @@ sources = files(
         'mlx5_flow.c',
         'mlx5_flow_meter.c',
         'mlx5_flow_dv.c',
+        'mlx5_flow_hw.c',
         'mlx5_flow_aso.c',
         'mlx5_flow_flex.c',
         'mlx5_mac.c',
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index d7cb1eb89b..21d17aca44 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -76,6 +76,7 @@ const struct mlx5_flow_driver_ops *flow_drv_ops[] = {
         [MLX5_FLOW_TYPE_MIN] = &mlx5_flow_null_drv_ops,
 #if defined(HAVE_IBV_FLOW_DV_SUPPORT) || !defined(HAVE_INFINIBAND_VERBS_H)
         [MLX5_FLOW_TYPE_DV] = &mlx5_flow_dv_drv_ops,
+        [MLX5_FLOW_TYPE_HW] = &mlx5_flow_hw_drv_ops,
 #endif
         [MLX5_FLOW_TYPE_VERBS] = &mlx5_flow_verbs_drv_ops,
         [MLX5_FLOW_TYPE_MAX] = &mlx5_flow_null_drv_ops
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index 7fec79afb3..26f5d97a7d 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -452,6 +452,7 @@ enum mlx5_flow_drv_type {
         MLX5_FLOW_TYPE_MIN,
         MLX5_FLOW_TYPE_DV,
         MLX5_FLOW_TYPE_VERBS,
+        MLX5_FLOW_TYPE_HW,
         MLX5_FLOW_TYPE_MAX,
 };
 
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
new file mode 100644
index 0000000000..33875c7d08
--- /dev/null
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -0,0 +1,13 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2020 Mellanox Technologies, Ltd
+ */
+
+#include
+
+#include "mlx5_flow.h"
+
+#ifdef HAVE_IBV_FLOW_DV_SUPPORT
+
+const struct mlx5_flow_driver_ops mlx5_flow_hw_drv_ops;
+
+#endif
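The core change is the dispatch table in mlx5_flow.c: driver ops are
selected by indexing flow_drv_ops[] with the engine type, so a new
engine costs one enum value plus one table slot. A minimal,
self-contained sketch of that pattern, with illustrative names rather
than the PMD's own code:

#include <stdio.h>

enum flow_drv_type { FLOW_TYPE_MIN, FLOW_TYPE_DV, FLOW_TYPE_VERBS,
                     FLOW_TYPE_HW, FLOW_TYPE_MAX };

struct flow_driver_ops { const char *name; };

static const struct flow_driver_ops null_ops  = { "null" };
static const struct flow_driver_ops dv_ops    = { "dv" };
static const struct flow_driver_ops verbs_ops = { "verbs" };
static const struct flow_driver_ops hw_ops    = { "hw" }; /* the new slot */

/* The engine type indexes straight into the ops table. */
static const struct flow_driver_ops *flow_drv_ops[] = {
        [FLOW_TYPE_MIN]   = &null_ops,
        [FLOW_TYPE_DV]    = &dv_ops,
        [FLOW_TYPE_HW]    = &hw_ops,
        [FLOW_TYPE_VERBS] = &verbs_ops,
        [FLOW_TYPE_MAX]   = &null_ops,
};

int main(void)
{
        printf("%s\n", flow_drv_ops[FLOW_TYPE_HW]->name); /* prints "hw" */
        return 0;
}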
From patchwork Thu Feb 10 16:29:15 2022
X-Patchwork-Submitter: Suanming Mou
X-Patchwork-Id: 107287
X-Patchwork-Delegate: rasland@nvidia.com
From: Suanming Mou
Subject: [PATCH 02/13] net/mlx5: introduce hardware steering enable routine
Date: Thu, 10 Feb 2022 18:29:15 +0200
Message-ID: <20220210162926.20436-3-suanmingm@nvidia.com>
In-Reply-To: <20220210162926.20436-1-suanmingm@nvidia.com>
References: <20220210162926.20436-1-suanmingm@nvidia.com>

The new hardware steering operation will be implemented under the new
rte_flow_q APIs, which are not compatible with the existing rte_flow
PMD Direct Rules flow operation routine. This commit introduces an
extra dv_flow_en = 2 devarg value to select the new flow operation
initialization routine (a small demonstration of the parsing change
follows the diff).
Signed-off-by: Suanming Mou
---
 drivers/net/mlx5/linux/mlx5_os.c | 4 ++++
 drivers/net/mlx5/mlx5.c          | 2 +-
 drivers/net/mlx5/mlx5.h          | 3 ++-
 3 files changed, 7 insertions(+), 2 deletions(-)

diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c
index aecdc5a68a..52e52a4ad7 100644
--- a/drivers/net/mlx5/linux/mlx5_os.c
+++ b/drivers/net/mlx5/linux/mlx5_os.c
@@ -295,6 +295,8 @@ mlx5_alloc_shared_dr(struct mlx5_priv *priv)
         err = mlx5_alloc_table_hash_list(priv);
         if (err)
                 goto error;
+        if (priv->config.dv_flow_en == 2)
+                return 0;
         /* The resources below are only valid with DV support. */
 #ifdef HAVE_IBV_FLOW_DV_SUPPORT
         /* Init port id action list. */
@@ -1712,6 +1714,8 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
         priv->drop_queue.hrxq = mlx5_drop_action_create(eth_dev);
         if (!priv->drop_queue.hrxq)
                 goto error;
+        if (priv->config.dv_flow_en == 2)
+                return eth_dev;
         /* Port representor shares the same max priority with pf port. */
         if (!priv->sh->flow_priority_check_flag) {
                 /* Supported Verbs flow priority number detection. */
diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index 67eda41a60..a4826a583b 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -1933,7 +1933,7 @@ mlx5_args_check(const char *key, const char *val, void *opaque)
         } else if (strcmp(MLX5_DV_ESW_EN, key) == 0) {
                 config->dv_esw_en = !!tmp;
         } else if (strcmp(MLX5_DV_FLOW_EN, key) == 0) {
-                config->dv_flow_en = !!tmp;
+                config->dv_flow_en = tmp;
         } else if (strcmp(MLX5_DV_XMETA_EN, key) == 0) {
                 if (tmp != MLX5_XMETA_MODE_LEGACY &&
                     tmp != MLX5_XMETA_MODE_META16 &&
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 737ad6895c..f3b991e549 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -259,7 +259,8 @@ struct mlx5_dev_config {
         unsigned int l3_vxlan_en:1; /* Enable L3 VXLAN flow creation. */
         unsigned int vf_nl_en:1; /* Enable Netlink requests in VF mode. */
         unsigned int dv_esw_en:1; /* Enable E-Switch DV flow. */
-        unsigned int dv_flow_en:1; /* Enable DV flow. */
+        /* Enable DV flow. 1 means SW steering, 2 means HW steering. */
+        unsigned int dv_flow_en:2;
         unsigned int dv_xmeta_en:2; /* Enable extensive flow metadata. */
         unsigned int lacp_by_user:1;
         /* Enable user to manage LACP traffic. */
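Why both the bitfield width and the parsing change are needed: with the
old 1-bit field and the !!tmp normalization, a user-supplied
dv_flow_en=2 would collapse to 1 and the HW steering mode would be
lost. A minimal stand-alone demonstration (not PMD code; the field name
is reused purely for illustration):

#include <stdio.h>

struct cfg {
        /* 0 = disabled, 1 = SW (DV) steering, 2 = HW steering. */
        unsigned int dv_flow_en:2;
};

int main(void)
{
        struct cfg c = { 0 };
        unsigned long tmp = 2; /* as parsed from the dv_flow_en=2 devarg */

        c.dv_flow_en = !!tmp; /* old code: collapses 2 to 1 */
        printf("with !!tmp: %u\n", c.dv_flow_en);
        c.dv_flow_en = tmp;   /* new code: keeps mode 2 */
        printf("with tmp:   %u\n", c.dv_flow_en);
        return 0;
}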
From patchwork Thu Feb 10 16:29:16 2022
X-Patchwork-Submitter: Suanming Mou
X-Patchwork-Id: 107286
X-Patchwork-Delegate: rasland@nvidia.com
From: Suanming Mou
Subject: [PATCH 03/13] net/mlx5: add port flow configuration
Date: Thu, 10 Feb 2022 18:29:16 +0200
Message-ID: <20220210162926.20436-4-suanmingm@nvidia.com>
In-Reply-To: <20220210162926.20436-1-suanmingm@nvidia.com>
References: <20220210162926.20436-1-suanmingm@nvidia.com>
Hardware steering is the backend that supports the rte_flow_q API in
the mlx5 PMD. The port configuration function creates the queues and
the needed flow management resources. The PMD layer configuration
function allocates the queues' context and the per-queue job
descriptors. The number of job descriptors equals the queue size; a
job descriptor is popped from a LIFO to carry the flow information
during flow insertion/destruction, and when the result is polled the
flow information is extracted from the descriptor and the descriptor
is pushed back to the LIFO (see the sketch below; an application-side
configuration sketch follows the patch). The commit creates the flow
port queues and the job descriptors.
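A minimal sketch of the per-queue job descriptor LIFO described above;
the struct layout follows the patch, while the pop/push helpers are
hypothetical illustrations of the enqueue/poll bookkeeping, not PMD
functions:

#include <stdint.h>
#include <stddef.h>

struct mlx5_hw_q_job; /* carries the flow info between enqueue and poll */

struct mlx5_hw_q {
        uint32_t job_idx; /* number of free job descriptors */
        uint32_t size;    /* LIFO capacity == flow queue size */
        struct mlx5_hw_q_job **job;
};

/* Pop a free descriptor when a flow create/destroy is enqueued;
 * NULL means the queue is already full of in-flight operations. */
static struct mlx5_hw_q_job *
job_pop(struct mlx5_hw_q *q)
{
        return q->job_idx == 0 ? NULL : q->job[--q->job_idx];
}

/* Push the descriptor back once its result has been polled. */
static void
job_push(struct mlx5_hw_q *q, struct mlx5_hw_q_job *job)
{
        if (q->job_idx < q->size)
                q->job[q->job_idx++] = job;
}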
Signed-off-by: Suanming Mou
---
 drivers/net/mlx5/mlx5.c         |   3 +
 drivers/net/mlx5/mlx5.h         |  26 ++++++-
 drivers/net/mlx5/mlx5_flow.c    |  37 ++++++++
 drivers/net/mlx5/mlx5_flow.h    |   9 +++
 drivers/net/mlx5/mlx5_flow_hw.c | 132 ++++++++++++++++++++++++++++++++
 5 files changed, 206 insertions(+), 1 deletion(-)

diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index a4826a583b..f1933fd253 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -1567,6 +1567,9 @@ mlx5_dev_close(struct rte_eth_dev *dev)
         /* Free the eCPRI flex parser resource. */
         mlx5_flex_parser_ecpri_release(dev);
         mlx5_flex_item_port_cleanup(dev);
+#ifdef HAVE_IBV_FLOW_DV_SUPPORT
+        flow_hw_resource_release(dev);
+#endif
         if (priv->rxq_privs != NULL) {
                 /* XXX race condition if mlx5_rx_burst() is still running. */
                 rte_delay_us_sleep(1000);
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index f3b991e549..31a13ca69a 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -33,7 +33,7 @@
 #include "mlx5_utils.h"
 #include "mlx5_os.h"
 #include "mlx5_autoconf.h"
-
+#include "mlx5dr.h"
 
 #define MLX5_SH(dev) (((struct mlx5_priv *)(dev)->data->dev_private)->sh)
 
@@ -324,6 +324,26 @@ struct mlx5_lb_ctx {
         uint16_t refcnt; /* Reference count for representors. */
 };
 
+/* HW steering queue job descriptor type. */
+enum {
+        MLX5_HW_Q_JOB_TYPE_CREATE, /* Flow create job type. */
+        MLX5_HW_Q_JOB_TYPE_DESTROY, /* Flow destroy job type. */
+};
+
+/* HW steering flow management job descriptor. */
+struct mlx5_hw_q_job {
+        uint32_t type; /* Job type. */
+        struct rte_flow *flow; /* Flow attached to the job. */
+        void *user_data; /* Job user data. */
+};
+
+/* HW steering job descriptor LIFO header. */
+struct mlx5_hw_q {
+        uint32_t job_idx; /* Free job index. */
+        uint32_t size; /* LIFO size. */
+        struct mlx5_hw_q_job **job; /* LIFO pointer. */
+} __rte_cache_aligned;
+
 #define MLX5_COUNTERS_PER_POOL 512
 #define MLX5_MAX_PENDING_QUERIES 4
 #define MLX5_CNT_CONTAINER_RESIZE 64
@@ -1480,6 +1500,10 @@ struct mlx5_priv {
         struct mlx5_flex_item flex_item[MLX5_PORT_FLEX_ITEM_NUM];
         /* Flex items have been created on the port. */
         uint32_t flex_item_map; /* Map of allocated flex item elements. */
+        struct mlx5dr_context *dr_ctx; /**< HW steering DR context. */
+        uint32_t nb_queue; /* HW steering queue number. */
+        /* HW steering queue polling mechanism job descriptor LIFO. */
+        struct mlx5_hw_q *hw_q;
 };
 
 #define PORT_ID(priv) ((priv)->dev_data->port_id)
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index 21d17aca44..5ff96642b4 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -805,6 +805,12 @@ static int
 mlx5_flow_flex_item_release(struct rte_eth_dev *dev,
                             const struct rte_flow_item_flex_handle *handle,
                             struct rte_flow_error *error);
+static int
+mlx5_flow_port_configure(struct rte_eth_dev *dev,
+                         const struct rte_flow_port_attr *port_attr,
+                         uint16_t nb_queue,
+                         const struct rte_flow_queue_attr *queue_attr[],
+                         struct rte_flow_error *err);
 
 static const struct rte_flow_ops mlx5_flow_ops = {
         .validate = mlx5_flow_validate,
@@ -826,6 +832,7 @@ static const struct rte_flow_ops mlx5_flow_ops = {
         .get_restore_info = mlx5_flow_tunnel_get_restore_info,
         .flex_item_create = mlx5_flow_flex_item_create,
         .flex_item_release = mlx5_flow_flex_item_release,
+        .configure = mlx5_flow_port_configure,
 };
 
 /* Tunnel information. */
@@ -7814,6 +7821,36 @@ mlx5_counter_query(struct rte_eth_dev *dev, uint32_t cnt,
         return -ENOTSUP;
 }
 
+/**
+ * Configure port HWS resources.
+ *
+ * @param[in] dev
+ *   Pointer to the rte_eth_dev structure.
+ * @param[in] port_attr
+ *   Port configuration attributes.
+ * @param[in] nb_queue
+ *   Number of queues.
+ * @param[in] queue_attr
+ *   Array that holds attributes for each flow queue.
+ * @param[out] error
+ *   Pointer to error structure.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+static int
+mlx5_flow_port_configure(struct rte_eth_dev *dev,
+                         const struct rte_flow_port_attr *port_attr,
+                         uint16_t nb_queue,
+                         const struct rte_flow_queue_attr *queue_attr[],
+                         struct rte_flow_error *err)
+{
+        const struct mlx5_flow_driver_ops *fops =
+                        flow_get_drv_ops(MLX5_FLOW_TYPE_HW);
+
+        return fops->configure(dev, port_attr, nb_queue, queue_attr, err);
+}
+
 /**
  * Allocate a new memory for the counter values wrapped by all the needed
  * management.
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index 26f5d97a7d..731478ff05 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -1257,6 +1257,12 @@ typedef int (*mlx5_flow_item_update_t)
                         (struct rte_eth_dev *dev,
                          const struct rte_flow_item_flex_handle *handle,
                          const struct rte_flow_item_flex_conf *conf,
                          struct rte_flow_error *error);
+typedef int (*mlx5_flow_port_configure_t)
+                        (struct rte_eth_dev *dev,
+                         const struct rte_flow_port_attr *port_attr,
+                         uint16_t nb_queue,
+                         const struct rte_flow_queue_attr *queue_attr[],
+                         struct rte_flow_error *err);
 
 struct mlx5_flow_driver_ops {
         mlx5_flow_validate_t validate;
@@ -1295,6 +1301,7 @@ struct mlx5_flow_driver_ops {
         mlx5_flow_item_create_t item_create;
         mlx5_flow_item_release_t item_release;
         mlx5_flow_item_update_t item_update;
+        mlx5_flow_port_configure_t configure;
 };
 
 /* mlx5_flow.c */
@@ -1767,4 +1774,6 @@ const struct mlx5_flow_tunnel *
 mlx5_get_tof(const struct rte_flow_item *items,
              const struct rte_flow_action *actions,
              enum mlx5_tof_rule_type *rule_type);
+void
+flow_hw_resource_release(struct rte_eth_dev *dev);
 
 #endif /* RTE_PMD_MLX5_FLOW_H_ */
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 33875c7d08..4194f81ee9 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -4,10 +4,142 @@
 
 #include
 
+#include
+
+#include "mlx5_defs.h"
 #include "mlx5_flow.h"
 
 #ifdef HAVE_IBV_FLOW_DV_SUPPORT
 
 const struct mlx5_flow_driver_ops mlx5_flow_hw_drv_ops;
 
+/**
+ * Configure port HWS resources.
+ *
+ * @param[in] dev
+ *   Pointer to the rte_eth_dev structure.
+ * @param[in] port_attr
+ *   Port configuration attributes.
+ * @param[in] nb_queue
+ *   Number of queues.
+ * @param[in] queue_attr
+ *   Array that holds attributes for each flow queue.
+ * @param[out] error
+ *   Pointer to error structure.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+static int
+flow_hw_configure(struct rte_eth_dev *dev,
+                  const struct rte_flow_port_attr *port_attr,
+                  uint16_t nb_queue,
+                  const struct rte_flow_queue_attr *queue_attr[],
+                  struct rte_flow_error *error)
+{
+        struct mlx5_priv *priv = dev->data->dev_private;
+        struct mlx5dr_context *dr_ctx = NULL;
+        struct mlx5dr_context_attr dr_ctx_attr = {0};
+        struct mlx5_hw_q *hw_q;
+        struct mlx5_hw_q_job *job = NULL;
+        uint32_t mem_size, i, j;
+
+        if (!port_attr || !nb_queue || !queue_attr) {
+                rte_errno = EINVAL;
+                goto err;
+        }
+        /* In case re-configuring, release existing context at first. */
+        if (priv->dr_ctx) {
+                for (i = 0; i < nb_queue; i++) {
+                        hw_q = &priv->hw_q[i];
+                        /* Make sure all queues are empty. */
+                        if (hw_q->size != hw_q->job_idx) {
+                                rte_errno = EBUSY;
+                                goto err;
+                        }
+                }
+                flow_hw_resource_release(dev);
+        }
+        /* Allocate the queue job descriptor LIFO. */
+        mem_size = sizeof(priv->hw_q[0]) * nb_queue;
+        for (i = 0; i < nb_queue; i++) {
+                /*
+                 * Check if the queues' size are all the same as the
+                 * limitation from HWS layer.
+                 */
+                if (queue_attr[i]->size != queue_attr[0]->size) {
+                        rte_errno = EINVAL;
+                        goto err;
+                }
+                mem_size += (sizeof(struct mlx5_hw_q_job *) +
+                            sizeof(struct mlx5_hw_q_job)) *
+                            queue_attr[0]->size;
+        }
+        priv->hw_q = mlx5_malloc(MLX5_MEM_ZERO, mem_size,
+                                 64, SOCKET_ID_ANY);
+        if (!priv->hw_q) {
+                rte_errno = ENOMEM;
+                goto err;
+        }
+        for (i = 0; i < nb_queue; i++) {
+                priv->hw_q[i].job_idx = queue_attr[i]->size;
+                priv->hw_q[i].size = queue_attr[i]->size;
+                if (i == 0)
+                        priv->hw_q[i].job = (struct mlx5_hw_q_job **)
+                                            &priv->hw_q[nb_queue];
+                else
+                        priv->hw_q[i].job = (struct mlx5_hw_q_job **)
+                                            &job[queue_attr[i - 1]->size];
+                job = (struct mlx5_hw_q_job *)
+                      &priv->hw_q[i].job[queue_attr[i]->size];
+                for (j = 0; j < queue_attr[i]->size; j++)
+                        priv->hw_q[i].job[j] = &job[j];
+        }
+        dr_ctx_attr.pd = priv->sh->cdev->pd;
+        dr_ctx_attr.queues = nb_queue;
+        /* Queue size should all be the same. Take the first one. */
+        dr_ctx_attr.queue_size = queue_attr[0]->size;
+        dr_ctx = mlx5dr_context_open(priv->sh->cdev->ctx, &dr_ctx_attr);
+        /* rte_errno has been updated by HWS layer. */
+        if (!dr_ctx)
+                goto err;
+        priv->dr_ctx = dr_ctx;
+        priv->nb_queue = nb_queue;
+        return 0;
+err:
+        if (dr_ctx)
+                claim_zero(mlx5dr_context_close(dr_ctx));
+        if (priv->hw_q) {
+                mlx5_free(priv->hw_q);
+                priv->hw_q = NULL;
+        }
+        return rte_flow_error_set(error, rte_errno,
+                                  RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+                                  "fail to configure port");
+}
+
+/**
+ * Release HWS resources.
+ *
+ * @param[in] dev
+ *   Pointer to the rte_eth_dev structure.
+ */
+void
+flow_hw_resource_release(struct rte_eth_dev *dev)
+{
+        struct mlx5_priv *priv = dev->data->dev_private;
+
+        if (!priv->dr_ctx)
+                return;
+        mlx5_free(priv->hw_q);
+        priv->hw_q = NULL;
+        claim_zero(mlx5dr_context_close(priv->dr_ctx));
+        priv->dr_ctx = NULL;
+        priv->nb_queue = 0;
+}
+
+const struct mlx5_flow_driver_ops mlx5_flow_hw_drv_ops = {
+        .configure = flow_hw_configure,
+};
+
 #endif
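For context, an application reaches flow_hw_configure() through the
rte_flow_configure() entry point proposed by the rte_flow_q series; a
hedged usage sketch under that assumption, with all attribute values
chosen purely for illustration (note the PMD above requires every
queue to share one size):

#include <rte_flow.h>

static int
setup_flow_queues(uint16_t port_id)
{
        struct rte_flow_error error;
        const struct rte_flow_port_attr port_attr = { 0 };
        const struct rte_flow_queue_attr qattr = { .size = 256 };
        /* Two queues, both with the same size, as the PMD enforces. */
        const struct rte_flow_queue_attr *queue_attr[2] = { &qattr, &qattr };

        return rte_flow_configure(port_id, &port_attr, 2, queue_attr, &error);
}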
From patchwork Thu Feb 10 16:29:17 2022
X-Patchwork-Submitter: Suanming Mou
X-Patchwork-Id: 107288
X-Patchwork-Delegate: rasland@nvidia.com
From: Suanming Mou
Subject: [PATCH 04/13] net/mlx5: add pattern template management
Date: Thu, 10 Feb 2022 18:29:17 +0200
Message-ID: <20220210162926.20436-5-suanmingm@nvidia.com>
In-Reply-To: <20220210162926.20436-1-suanmingm@nvidia.com>
References: <20220210162926.20436-1-suanmingm@nvidia.com>
The pattern template defines flows that have the same matching fields
but different matching values. For example, when matching on a 5-tuple
TCP flow, the template is (eth(null) + IPv4(source + dest) + TCP(s_port
+ d_port)) while the values for each rule differ (see the sketch
below). Since a pattern template can be used in different domains, the
items are only cached at pattern template creation; when the template
is bound to a dedicated table, the HW criteria are created and saved to
that table. Different tables may create the same criteria, which are
not shared between tables in order to get better performance. This
commit adds the pattern template management.
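A hedged application-side sketch of the 5-tuple example above, assuming
the rte_flow_pattern_template_create() API from the accompanying
rte_flow_q series; the masks mark which header fields every rule using
this template will later supply:

#include <stdint.h>
#include <rte_flow.h>

static struct rte_flow_pattern_template *
make_5tuple_template(uint16_t port_id, struct rte_flow_error *err)
{
        /* Match IPv4 source/destination and TCP ports; eth is "null". */
        const struct rte_flow_item_ipv4 ip_mask = {
                .hdr = { .src_addr = UINT32_MAX, .dst_addr = UINT32_MAX },
        };
        const struct rte_flow_item_tcp tcp_mask = {
                .hdr = { .src_port = UINT16_MAX, .dst_port = UINT16_MAX },
        };
        const struct rte_flow_item pattern[] = {
                { .type = RTE_FLOW_ITEM_TYPE_ETH },
                { .type = RTE_FLOW_ITEM_TYPE_IPV4, .mask = &ip_mask },
                { .type = RTE_FLOW_ITEM_TYPE_TCP, .mask = &tcp_mask },
                { .type = RTE_FLOW_ITEM_TYPE_END },
        };
        const struct rte_flow_pattern_template_attr attr = {
                .relaxed_matching = 1,
        };

        return rte_flow_pattern_template_create(port_id, &attr, pattern, err);
}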
Signed-off-by: Suanming Mou
---
 drivers/net/mlx5/mlx5.h         |  2 +
 drivers/net/mlx5/mlx5_flow.c    | 64 +++++++++++++++++++++++++
 drivers/net/mlx5/mlx5_flow.h    | 20 ++++++++
 drivers/net/mlx5/mlx5_flow_hw.c | 82 +++++++++++++++++++++++++++++++
 4 files changed, 168 insertions(+)

diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 31a13ca69a..96048ad0ea 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -1500,6 +1500,8 @@ struct mlx5_priv {
         struct mlx5_flex_item flex_item[MLX5_PORT_FLEX_ITEM_NUM];
         /* Flex items have been created on the port. */
         uint32_t flex_item_map; /* Map of allocated flex item elements. */
+        /* Item template list. */
+        LIST_HEAD(flow_hw_itt, rte_flow_pattern_template) flow_hw_itt;
         struct mlx5dr_context *dr_ctx; /**< HW steering DR context. */
         uint32_t nb_queue; /* HW steering queue number. */
         /* HW steering queue polling mechanism job descriptor LIFO. */
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index 5ff96642b4..27a40a9627 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -812,6 +812,17 @@ mlx5_flow_port_configure(struct rte_eth_dev *dev,
                          const struct rte_flow_queue_attr *queue_attr[],
                          struct rte_flow_error *err);
 
+static struct rte_flow_pattern_template *
+mlx5_flow_pattern_template_create(struct rte_eth_dev *dev,
+                const struct rte_flow_pattern_template_attr *attr,
+                const struct rte_flow_item items[],
+                struct rte_flow_error *error);
+
+static int
+mlx5_flow_pattern_template_destroy(struct rte_eth_dev *dev,
+                struct rte_flow_pattern_template *template,
+                struct rte_flow_error *error);
+
 static const struct rte_flow_ops mlx5_flow_ops = {
         .validate = mlx5_flow_validate,
@@ -833,6 +844,8 @@ static const struct rte_flow_ops mlx5_flow_ops = {
         .flex_item_create = mlx5_flow_flex_item_create,
         .flex_item_release = mlx5_flow_flex_item_release,
         .configure = mlx5_flow_port_configure,
+        .pattern_template_create = mlx5_flow_pattern_template_create,
+        .pattern_template_destroy = mlx5_flow_pattern_template_destroy,
 };
 
 /* Tunnel information. */
@@ -7851,6 +7864,57 @@ mlx5_flow_port_configure(struct rte_eth_dev *dev,
         return fops->configure(dev, port_attr, nb_queue, queue_attr, err);
 }
 
+/**
+ * Create flow item template.
+ *
+ * @param[in] dev
+ *   Pointer to the rte_eth_dev structure.
+ * @param[in] attr
+ *   Pointer to the item template attributes.
+ * @param[in] items
+ *   The template item pattern.
+ * @param[out] error
+ *   Pointer to error structure.
+ *
+ * @return
+ *   Item template pointer on success, NULL otherwise and rte_errno is set.
+ */
+static struct rte_flow_pattern_template *
+mlx5_flow_pattern_template_create(struct rte_eth_dev *dev,
+                const struct rte_flow_pattern_template_attr *attr,
+                const struct rte_flow_item items[],
+                struct rte_flow_error *error)
+{
+        const struct mlx5_flow_driver_ops *fops =
+                        flow_get_drv_ops(MLX5_FLOW_TYPE_HW);
+
+        return fops->pattern_template_create(dev, attr, items, error);
+}
+
+/**
+ * Destroy flow item template.
+ *
+ * @param[in] dev
+ *   Pointer to the rte_eth_dev structure.
+ * @param[in] template
+ *   Pointer to the item template to be destroyed.
+ * @param[out] error
+ *   Pointer to error structure.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+static int
+mlx5_flow_pattern_template_destroy(struct rte_eth_dev *dev,
+                struct rte_flow_pattern_template *template,
+                struct rte_flow_error *error)
+{
+        const struct mlx5_flow_driver_ops *fops =
+                        flow_get_drv_ops(MLX5_FLOW_TYPE_HW);
+
+        return fops->pattern_template_destroy(dev, template, error);
+}
+
 /**
  * Allocate a new memory for the counter values wrapped by all the needed
  * management.
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index 731478ff05..88102f0991 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -1015,6 +1015,15 @@ struct rte_flow {
         uint32_t geneve_tlv_option; /**< Holds Geneve TLV option id. */
 } __rte_packed;
 
+/* Flow item template struct. */
+struct rte_flow_pattern_template {
+        LIST_ENTRY(rte_flow_pattern_template) next;
+        /* Template attributes. */
+        struct rte_flow_pattern_template_attr attr;
+        struct mlx5dr_match_template *mt; /* mlx5 match template. */
+        uint32_t refcnt; /* Reference counter. */
+};
+
 /*
  * Define list of valid combinations of RX Hash fields
  * (see enum ibv_rx_hash_fields).
@@ -1263,6 +1272,15 @@ typedef int (*mlx5_flow_port_configure_t)
                          uint16_t nb_queue,
                          const struct rte_flow_queue_attr *queue_attr[],
                          struct rte_flow_error *err);
+typedef struct rte_flow_pattern_template *(*mlx5_flow_pattern_template_create_t)
+                        (struct rte_eth_dev *dev,
+                         const struct rte_flow_pattern_template_attr *attr,
+                         const struct rte_flow_item items[],
+                         struct rte_flow_error *error);
+typedef int (*mlx5_flow_pattern_template_destroy_t)
+                        (struct rte_eth_dev *dev,
+                         struct rte_flow_pattern_template *template,
+                         struct rte_flow_error *error);
 
 struct mlx5_flow_driver_ops {
         mlx5_flow_validate_t validate;
@@ -1302,6 +1320,8 @@ struct mlx5_flow_driver_ops {
         mlx5_flow_item_release_t item_release;
         mlx5_flow_item_update_t item_update;
         mlx5_flow_port_configure_t configure;
+        mlx5_flow_pattern_template_create_t pattern_template_create;
+        mlx5_flow_pattern_template_destroy_t pattern_template_destroy;
 };
 
 /* mlx5_flow.c */
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 4194f81ee9..c984e520cd 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -12,6 +12,81 @@
 
 const struct mlx5_flow_driver_ops mlx5_flow_hw_drv_ops;
 
+/**
+ * Create flow item template.
+ *
+ * @param[in] dev
+ *   Pointer to the rte_eth_dev structure.
+ * @param[in] attr
+ *   Pointer to the item template attributes.
+ * @param[in] items
+ *   The template item pattern.
+ * @param[out] error
+ *   Pointer to error structure.
+ *
+ * @return
+ *   Item template pointer on success, NULL otherwise and rte_errno is set.
+ */
+static struct rte_flow_pattern_template *
+flow_hw_pattern_template_create(struct rte_eth_dev *dev,
+                const struct rte_flow_pattern_template_attr *attr,
+                const struct rte_flow_item items[],
+                struct rte_flow_error *error)
+{
+        struct mlx5_priv *priv = dev->data->dev_private;
+        struct rte_flow_pattern_template *it;
+
+        it = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*it), 0, SOCKET_ID_ANY);
+        if (!it) {
+                rte_flow_error_set(error, ENOMEM,
+                                   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+                                   NULL,
+                                   "cannot allocate item template");
+                return NULL;
+        }
+        it->attr = *attr;
+        it->mt = mlx5dr_match_template_create(items, attr->relaxed_matching);
+        if (!it->mt) {
+                mlx5_free(it);
+                return NULL;
+        }
+        __atomic_fetch_add(&it->refcnt, 1, __ATOMIC_RELAXED);
+        LIST_INSERT_HEAD(&priv->flow_hw_itt, it, next);
+        return it;
+}
+
+/**
+ * Destroy flow item template.
+ *
+ * @param[in] dev
+ *   Pointer to the rte_eth_dev structure.
+ * @param[in] template
+ *   Pointer to the item template to be destroyed.
+ * @param[out] error
+ *   Pointer to error structure.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+static int
+flow_hw_pattern_template_destroy(struct rte_eth_dev *dev __rte_unused,
+                struct rte_flow_pattern_template *template,
+                struct rte_flow_error *error __rte_unused)
+{
+        if (__atomic_load_n(&template->refcnt, __ATOMIC_RELAXED) > 1) {
+                DRV_LOG(WARNING, "Item template %p is still in use.",
+                        (void *)template);
+                return rte_flow_error_set(error, EBUSY,
+                                   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+                                   NULL,
+                                   "item template is in use");
+        }
+        LIST_REMOVE(template, next);
+        claim_zero(mlx5dr_match_template_destroy(template->mt));
+        mlx5_free(template);
+        return 0;
+}
+
 /**
  * Configure port HWS resources.
 *
@@ -128,9 +203,14 @@ void
 flow_hw_resource_release(struct rte_eth_dev *dev)
 {
         struct mlx5_priv *priv = dev->data->dev_private;
+        struct rte_flow_pattern_template *it;
 
         if (!priv->dr_ctx)
                 return;
+        while (!LIST_EMPTY(&priv->flow_hw_itt)) {
+                it = LIST_FIRST(&priv->flow_hw_itt);
+                flow_hw_pattern_template_destroy(dev, it, NULL);
+        }
         mlx5_free(priv->hw_q);
         priv->hw_q = NULL;
         claim_zero(mlx5dr_context_close(priv->dr_ctx));
@@ -140,6 +220,8 @@ flow_hw_resource_release(struct rte_eth_dev *dev)
 
 const struct mlx5_flow_driver_ops mlx5_flow_hw_drv_ops = {
         .configure = flow_hw_configure,
+        .pattern_template_create = flow_hw_pattern_template_create,
+        .pattern_template_destroy = flow_hw_pattern_template_destroy,
 };
 
 #endif
From patchwork Thu Feb 10 16:29:18 2022
X-Patchwork-Submitter: Suanming Mou
X-Patchwork-Id: 107291
X-Patchwork-Delegate: rasland@nvidia.com
From: Suanming Mou
Subject: [PATCH 05/13] net/mlx5: add action template management
Date: Thu, 10 Feb 2022 18:29:18 +0200
Message-ID: <20220210162926.20436-6-suanmingm@nvidia.com>
In-Reply-To: <20220210162926.20436-1-suanmingm@nvidia.com>
References: <20220210162926.20436-1-suanmingm@nvidia.com>
The action template holds a list of action types that will be used together on the same rule. The template's action instances are created only when the template is bound to its dedicated group, and the created actions are saved per individual group for best performance. The actions are not shared between groups.

This commit adds the action template management which caches the flow action template.

Signed-off-by: Suanming Mou
---
 drivers/net/mlx5/mlx5.h         |   2 +
 drivers/net/mlx5/mlx5_flow.c    |  66 ++++++++++++++++++
 drivers/net/mlx5/mlx5_flow.h    |  27 ++++++++
 drivers/net/mlx5/mlx5_flow_hw.c | 116 ++++++++++++++++++++++++++++++++
 4 files changed, 211 insertions(+)

diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 96048ad0ea..80dc72175d 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -1502,6 +1502,8 @@ struct mlx5_priv {
 	uint32_t flex_item_map; /* Map of allocated flex item elements. */
 	/* Item template list. */
 	LIST_HEAD(flow_hw_itt, rte_flow_pattern_template) flow_hw_itt;
+	/* Action template list. */
+	LIST_HEAD(flow_hw_at, rte_flow_actions_template) flow_hw_at;
 	struct mlx5dr_context *dr_ctx; /**< HW steering DR context. */
 	uint32_t nb_queue; /* HW steering queue number. */
 	/* HW steering queue polling mechanism job descriptor LIFO. */
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index 27a40a9627..be6a7ff336 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -822,6 +822,16 @@ static int
 mlx5_flow_pattern_template_destroy(struct rte_eth_dev *dev,
 		struct rte_flow_pattern_template *template,
 		struct rte_flow_error *error);
+static struct rte_flow_actions_template *
+mlx5_flow_actions_template_create(struct rte_eth_dev *dev,
+		const struct rte_flow_actions_template_attr *attr,
+		const struct rte_flow_action actions[],
+		const struct rte_flow_action masks[],
+		struct rte_flow_error *error);
+static int
+mlx5_flow_actions_template_destroy(struct rte_eth_dev *dev,
+		struct rte_flow_actions_template *template,
+		struct rte_flow_error *error);

 static const struct rte_flow_ops mlx5_flow_ops = {
 	.validate = mlx5_flow_validate,
@@ -846,6 +856,8 @@ static const struct rte_flow_ops mlx5_flow_ops = {
 	.configure = mlx5_flow_port_configure,
 	.pattern_template_create = mlx5_flow_pattern_template_create,
 	.pattern_template_destroy = mlx5_flow_pattern_template_destroy,
+	.actions_template_create = mlx5_flow_actions_template_create,
+	.actions_template_destroy = mlx5_flow_actions_template_destroy,
 };

 /* Tunnel information. */
@@ -7915,6 +7927,60 @@ mlx5_flow_pattern_template_destroy(struct rte_eth_dev *dev,
 	return fops->pattern_template_destroy(dev, template, error);
 }

+/**
+ * Create flow actions template.
+ *
+ * @param[in] dev
+ *   Pointer to the rte_eth_dev structure.
+ * @param[in] attr
+ *   Pointer to the action template attributes.
+ * @param[in] actions
+ *   Associated actions (list terminated by the END action).
+ * @param[in] masks
+ *   List of actions that mark which members of the associated actions are constant.
+ * @param[out] error
+ *   Pointer to error structure.
+ *
+ * @return
+ *   Action template pointer on success, NULL otherwise and rte_errno is set.
+ */
+static struct rte_flow_actions_template *
+mlx5_flow_actions_template_create(struct rte_eth_dev *dev,
+			const struct rte_flow_actions_template_attr *attr,
+			const struct rte_flow_action actions[],
+			const struct rte_flow_action masks[],
+			struct rte_flow_error *error)
+{
+	const struct mlx5_flow_driver_ops *fops =
+			flow_get_drv_ops(MLX5_FLOW_TYPE_HW);
+
+	return fops->actions_template_create(dev, attr, actions, masks, error);
+}
+
+/**
+ * Destroy flow action template.
+ *
+ * @param[in] dev
+ *   Pointer to the rte_eth_dev structure.
+ * @param[in] template
+ *   Pointer to the action template to be destroyed.
+ * @param[out] error
+ *   Pointer to error structure.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+static int
+mlx5_flow_actions_template_destroy(struct rte_eth_dev *dev,
+			struct rte_flow_actions_template *template,
+			struct rte_flow_error *error)
+{
+	const struct mlx5_flow_driver_ops *fops =
+			flow_get_drv_ops(MLX5_FLOW_TYPE_HW);
+
+	return fops->actions_template_destroy(dev, template, error);
+}
+
 /**
  * Allocate a new memory for the counter values wrapped by all the needed
  * management.
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index 88102f0991..f5ababb32f 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -1024,6 +1024,21 @@ struct rte_flow_pattern_template {
 	uint32_t refcnt;  /* Reference counter. */
 };

+/* Flow action template attribute. */
+struct rte_flow_actions_template_attr {
+	int32_t reserve;
+};
+
+/* Flow action template struct. */
+struct rte_flow_actions_template {
+	LIST_ENTRY(rte_flow_actions_template) next;
+	/* Template attributes. */
+	struct rte_flow_actions_template_attr attr;
+	struct rte_flow_action *actions; /* Cached flow actions. */
+	struct rte_flow_action *masks; /* Cached action masks. */
+	uint32_t refcnt; /* Reference counter. */
+};
+
 /*
  * Define list of valid combinations of RX Hash fields
  * (see enum ibv_rx_hash_fields).
@@ -1281,6 +1296,16 @@ typedef int (*mlx5_flow_pattern_template_destroy_t)
 			(struct rte_eth_dev *dev,
 			 struct rte_flow_pattern_template *template,
 			 struct rte_flow_error *error);
+typedef struct rte_flow_actions_template *(*mlx5_flow_actions_template_create_t)
+			(struct rte_eth_dev *dev,
+			 const struct rte_flow_actions_template_attr *attr,
+			 const struct rte_flow_action actions[],
+			 const struct rte_flow_action masks[],
+			 struct rte_flow_error *error);
+typedef int (*mlx5_flow_actions_template_destroy_t)
+			(struct rte_eth_dev *dev,
+			 struct rte_flow_actions_template *template,
+			 struct rte_flow_error *error);

 struct mlx5_flow_driver_ops {
 	mlx5_flow_validate_t validate;
@@ -1322,6 +1347,8 @@ struct mlx5_flow_driver_ops {
 	mlx5_flow_port_configure_t configure;
 	mlx5_flow_pattern_template_create_t pattern_template_create;
 	mlx5_flow_pattern_template_destroy_t pattern_template_destroy;
+	mlx5_flow_actions_template_create_t actions_template_create;
+	mlx5_flow_actions_template_destroy_t actions_template_destroy;
 };

 /* mlx5_flow.c */
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index c984e520cd..349c12a849 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -12,6 +12,115 @@

 const struct mlx5_flow_driver_ops mlx5_flow_hw_drv_ops;

+/**
+ * Create flow action template.
+ *
+ * @param[in] dev
+ *   Pointer to the rte_eth_dev structure.
+ * @param[in] attr
+ *   Pointer to the action template attributes.
+ * @param[in] actions
+ *   Associated actions (list terminated by the END action).
+ * @param[in] masks
+ *   List of actions that mark which members of the associated actions are constant.
+ * @param[out] error
+ *   Pointer to error structure.
+ *
+ * @return
+ *   Action template pointer on success, NULL otherwise and rte_errno is set.
+ */
+static struct rte_flow_actions_template *
+flow_hw_actions_template_create(struct rte_eth_dev *dev,
+			const struct rte_flow_actions_template_attr *attr,
+			const struct rte_flow_action actions[],
+			const struct rte_flow_action masks[],
+			struct rte_flow_error *error)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	int len, act_len, mask_len, i;
+	struct rte_flow_actions_template *at;
+
+	act_len = rte_flow_conv(RTE_FLOW_CONV_OP_ACTIONS,
+				NULL, 0, actions, error);
+	if (act_len <= 0)
+		return NULL;
+	len = RTE_ALIGN(act_len, 16);
+	mask_len = rte_flow_conv(RTE_FLOW_CONV_OP_ACTIONS,
+				 NULL, 0, masks, error);
+	if (mask_len <= 0)
+		return NULL;
+	len += RTE_ALIGN(mask_len, 16);
+	at = mlx5_malloc(MLX5_MEM_ZERO, len + sizeof(*at), 64, SOCKET_ID_ANY);
+	if (!at) {
+		rte_flow_error_set(error, ENOMEM,
+				   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				   NULL,
+				   "cannot allocate action template");
+		return NULL;
+	}
+	at->attr = *attr;
+	at->actions = (struct rte_flow_action *)(at + 1);
+	act_len = rte_flow_conv(RTE_FLOW_CONV_OP_ACTIONS, at->actions, len,
+				actions, error);
+	if (act_len <= 0)
+		goto error;
+	at->masks = (struct rte_flow_action *)
+		    (((uint8_t *)at->actions) + act_len);
+	mask_len = rte_flow_conv(RTE_FLOW_CONV_OP_ACTIONS, at->masks,
+				 len - act_len, masks, error);
+	if (mask_len <= 0)
+		goto error;
+	/*
+	 * The mlx5 PMD stores the indirect action index directly in the
+	 * action conf pointer, while rte_flow_conv() copies the content
+	 * the conf pointer references. The indirect action index therefore
+	 * needs to be restored here from the original action conf.
+	 */
+	for (i = 0; actions->type != RTE_FLOW_ACTION_TYPE_END;
+	     actions++, masks++, i++) {
+		if (actions->type == RTE_FLOW_ACTION_TYPE_INDIRECT) {
+			at->actions[i].conf = actions->conf;
+			at->masks[i].conf = masks->conf;
+		}
+	}
+	__atomic_fetch_add(&at->refcnt, 1, __ATOMIC_RELAXED);
+	LIST_INSERT_HEAD(&priv->flow_hw_at, at, next);
+	return at;
+error:
+	mlx5_free(at);
+	return NULL;
+}
+
+/**
+ * Destroy flow action template.
+ *
+ * @param[in] dev
+ *   Pointer to the rte_eth_dev structure.
+ * @param[in] template
+ *   Pointer to the action template to be destroyed.
+ * @param[out] error
+ *   Pointer to error structure.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+static int
+flow_hw_actions_template_destroy(struct rte_eth_dev *dev __rte_unused,
+				 struct rte_flow_actions_template *template,
+				 struct rte_flow_error *error __rte_unused)
+{
+	if (__atomic_load_n(&template->refcnt, __ATOMIC_RELAXED) > 1) {
+		DRV_LOG(WARNING, "Action template %p is still in use.",
+			(void *)template);
+		return rte_flow_error_set(error, EBUSY,
+				   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				   NULL,
+				   "action template is in use");
+	}
+	LIST_REMOVE(template, next);
+	mlx5_free(template);
+	return 0;
+}
+
 /**
  * Create flow item template.
 *
@@ -204,6 +313,7 @@ flow_hw_resource_release(struct rte_eth_dev *dev)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
 	struct rte_flow_pattern_template *it;
+	struct rte_flow_actions_template *at;
 	if (!priv->dr_ctx)
 		return;
@@ -211,6 +321,10 @@ flow_hw_resource_release(struct rte_eth_dev *dev)
 		it = LIST_FIRST(&priv->flow_hw_itt);
 		flow_hw_pattern_template_destroy(dev, it, NULL);
 	}
+	while (!LIST_EMPTY(&priv->flow_hw_at)) {
+		at = LIST_FIRST(&priv->flow_hw_at);
+		flow_hw_actions_template_destroy(dev, at, NULL);
+	}
 	mlx5_free(priv->hw_q);
 	priv->hw_q = NULL;
 	claim_zero(mlx5dr_context_close(priv->dr_ctx));
@@ -222,6 +336,8 @@ const struct mlx5_flow_driver_ops mlx5_flow_hw_drv_ops = {
 	.configure = flow_hw_configure,
 	.pattern_template_create = flow_hw_pattern_template_create,
 	.pattern_template_destroy = flow_hw_pattern_template_destroy,
+	.actions_template_create = flow_hw_actions_template_create,
+	.actions_template_destroy = flow_hw_actions_template_destroy,
 };

 #endif
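[Editor's note] To illustrate the template this patch caches, a minimal sketch of how an application would build an actions template with its matching masks, assuming the ethdev-level rte_flow_actions_template_create() wrapper from the accompanying rte_flow template API series (the wrapper signature is an assumption for illustration):

	/* Hypothetical usage sketch -- dispatches to flow_hw_actions_template_create(). */
	static struct rte_flow_actions_template *
	create_drop_actions_template(uint16_t port_id, struct rte_flow_error *error)
	{
		const struct rte_flow_actions_template_attr attr = { 0 };
		const struct rte_flow_action actions[] = {
			{ .type = RTE_FLOW_ACTION_TYPE_DROP },
			{ .type = RTE_FLOW_ACTION_TYPE_END },
		};
		/* A mask entry of the same type marks the action members as constant. */
		const struct rte_flow_action masks[] = {
			{ .type = RTE_FLOW_ACTION_TYPE_DROP },
			{ .type = RTE_FLOW_ACTION_TYPE_END },
		};

		return rte_flow_actions_template_create(port_id, &attr,
							actions, masks, error);
	}

Both arrays are cached verbatim inside struct rte_flow_actions_template via rte_flow_conv(); the DR action instances themselves are only created later, when the template is bound to a table in a given group.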
From patchwork Thu Feb 10 16:29:19 2022
X-Patchwork-Submitter: Suanming Mou
X-Patchwork-Id: 107290
X-Patchwork-Delegate: rasland@nvidia.com
From: Suanming Mou
Subject: [PATCH 06/13] net/mlx5: add table management
Date: Thu, 10 Feb 2022 18:29:19 +0200
Message-ID: <20220210162926.20436-7-suanmingm@nvidia.com>
In-Reply-To: <20220210162926.20436-1-suanmingm@nvidia.com>
References: <20220210162926.20436-1-suanmingm@nvidia.com>
A flow table is a group of flows with the same matching criteria and the same actions defined for them. The table defines rules that have the same matching fields but different matching values. For example, when matching on the 5-tuple, the table matcher is (IPv4 source + IPv4 dest + s_port + d_port + next_proto) while the values for each rule differ.

The templates' relevant matching criteria and action instances are created at table creation time and saved in the table. Since the table attributes indicate the supported flow number, the flow memory is also allocated at the same time.

This commit adds the table management functions.

Signed-off-by: Suanming Mou
---
 drivers/net/mlx5/mlx5.c         |  45 ++-
 drivers/net/mlx5/mlx5.h         |  21 +-
 drivers/net/mlx5/mlx5_flow.c    |  81 +++++
 drivers/net/mlx5/mlx5_flow.h    |  72 +++++
 drivers/net/mlx5/mlx5_flow_hw.c | 522 ++++++++++++++++++++++++++++++++
 5 files changed, 735 insertions(+), 6 deletions(-)

diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index f1933fd253..51f3d9bf99 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -1354,12 +1354,46 @@ void
 mlx5_free_table_hash_list(struct mlx5_priv *priv)
 {
 	struct mlx5_dev_ctx_shared *sh = priv->sh;
-
-	if (!sh->flow_tbls)
+	struct mlx5_hlist **tbls = (priv->config.dv_flow_en == 2) ?
+				   &sh->groups : &sh->flow_tbls;
+	if (*tbls == NULL)
 		return;
-	mlx5_hlist_destroy(sh->flow_tbls);
-	sh->flow_tbls = NULL;
+	mlx5_hlist_destroy(*tbls);
+	*tbls = NULL;
+}
+
+#if defined(HAVE_IBV_FLOW_DV_SUPPORT) || !defined(HAVE_INFINIBAND_VERBS_H)
+/**
+ * Allocate HW steering group hash list.
+ *
+ * @param[in] priv
+ *   Pointer to the private device data structure.
+ */
+static int
+mlx5_alloc_hw_group_hash_list(struct mlx5_priv *priv)
+{
+	int err = 0;
+	struct mlx5_dev_ctx_shared *sh = priv->sh;
+	char s[MLX5_NAME_SIZE];
+
+	MLX5_ASSERT(sh);
+	snprintf(s, sizeof(s), "%s_flow_groups", priv->sh->ibdev_name);
+	sh->groups = mlx5_hlist_create
+			(s, MLX5_FLOW_TABLE_HLIST_ARRAY_SIZE,
+			 false, true, sh,
+			 flow_hw_grp_create_cb,
+			 flow_hw_grp_match_cb,
+			 flow_hw_grp_remove_cb,
+			 flow_hw_grp_clone_cb,
+			 flow_hw_grp_clone_free_cb);
+	if (!sh->groups) {
+		DRV_LOG(ERR, "flow groups with hash creation failed.");
+		err = ENOMEM;
+	}
+	return err;
 }
+#endif
+
 /**
  * Initialize flow table hash list and create the root tables entry
@@ -1375,11 +1409,14 @@ int
 mlx5_alloc_table_hash_list(struct mlx5_priv *priv __rte_unused)
 {
 	int err = 0;
+	/* Tables are only used in DV and DR modes. */
 #if defined(HAVE_IBV_FLOW_DV_SUPPORT) || !defined(HAVE_INFINIBAND_VERBS_H)
 	struct mlx5_dev_ctx_shared *sh = priv->sh;
 	char s[MLX5_NAME_SIZE];

+	if (priv->config.dv_flow_en == 2)
+		return mlx5_alloc_hw_group_hash_list(priv);
 	MLX5_ASSERT(sh);
 	snprintf(s, sizeof(s), "%s_flow_table", priv->sh->ibdev_name);
 	sh->flow_tbls = mlx5_hlist_create(s, MLX5_FLOW_TABLE_HLIST_ARRAY_SIZE,
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 80dc72175d..a4bc8d1fb7 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -62,7 +62,9 @@ enum mlx5_ipool_index {
 	MLX5_IPOOL_PUSH_VLAN, /* Pool for push vlan resource. */
 	MLX5_IPOOL_TAG, /* Pool for tag resource. */
 	MLX5_IPOOL_PORT_ID, /* Pool for port id resource. */
-	MLX5_IPOOL_JUMP, /* Pool for SWS jump resource. */
+	/* Pool for HWS group. Jump action will be created internally. */
+	MLX5_IPOOL_HW_GRP = MLX5_IPOOL_JUMP,
 	MLX5_IPOOL_SAMPLE, /* Pool for sample resource. */
 	MLX5_IPOOL_DEST_ARRAY, /* Pool for destination array resource. */
 	MLX5_IPOOL_TUNNEL_ID, /* Pool for tunnel offload context */
@@ -106,6 +108,13 @@ enum mlx5_delay_drop_mode {
 	MLX5_DELAY_DROP_HAIRPIN = RTE_BIT32(1), /* Hairpin queues enable. */
 };

+/* The HWS action type root/non-root. */
+enum mlx5_hw_action_flag_type {
+	MLX5_HW_ACTION_FLAG_ROOT, /* Root action. */
+	MLX5_HW_ACTION_FLAG_NONE_ROOT, /* Non-root action. */
+	MLX5_HW_ACTION_FLAG_MAX, /* Maximum action flag. */
+};
+
 /* Hlist and list callback context. */
 struct mlx5_flow_cb_ctx {
 	struct rte_eth_dev *dev;
@@ -1203,7 +1212,10 @@ struct mlx5_dev_ctx_shared {
 	rte_spinlock_t uar_lock[MLX5_UAR_PAGE_NUM_MAX];
 	/* UAR same-page access control required in 32bit implementations. */
 #endif
-	struct mlx5_hlist *flow_tbls;
+	union {
+		struct mlx5_hlist *flow_tbls; /* SWS flow table. */
+		struct mlx5_hlist *groups; /* HWS flow group. */
+	};
 	struct mlx5_flow_tunnel_hub *tunnel_hub;
 	/* Direct Rules tables for FDB, NIC TX+RX */
 	void *dr_drop_action; /* Pointer to DR drop action, any domain. */
@@ -1508,6 +1520,11 @@ struct mlx5_priv {
 	uint32_t nb_queue; /* HW steering queue number. */
 	/* HW steering queue polling mechanism job descriptor LIFO. */
 	struct mlx5_hw_q *hw_q;
+	/* HW steering rte flow table list header. */
+	LIST_HEAD(flow_hw_tbl, rte_flow_template_table) flow_hw_tbl;
+	/* HW steering global drop action. */
+	struct mlx5dr_action *hw_drop[MLX5_HW_ACTION_FLAG_MAX]
+				     [MLX5DR_TABLE_TYPE_MAX];
 };

 #define PORT_ID(priv) ((priv)->dev_data->port_id)
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index be6a7ff336..2e70e1eaaf 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -833,6 +833,19 @@ mlx5_flow_actions_template_destroy(struct rte_eth_dev *dev,
 		struct rte_flow_actions_template *template,
 		struct rte_flow_error *error);
+static struct rte_flow_template_table *
+mlx5_flow_table_create(struct rte_eth_dev *dev,
+		       const struct rte_flow_template_table_attr *attr,
+		       struct rte_flow_pattern_template *item_templates[],
+		       uint8_t nb_item_templates,
+		       struct rte_flow_actions_template *action_templates[],
+		       uint8_t nb_action_templates,
+		       struct rte_flow_error *error);
+static int
+mlx5_flow_table_destroy(struct rte_eth_dev *dev,
+			struct rte_flow_template_table *table,
+			struct rte_flow_error *error);
+
 static const struct rte_flow_ops mlx5_flow_ops = {
 	.validate = mlx5_flow_validate,
 	.create = mlx5_flow_create,
@@ -858,6 +871,8 @@ static const struct rte_flow_ops mlx5_flow_ops = {
 	.pattern_template_destroy = mlx5_flow_pattern_template_destroy,
 	.actions_template_create = mlx5_flow_actions_template_create,
 	.actions_template_destroy = mlx5_flow_actions_template_destroy,
+	.template_table_create = mlx5_flow_table_create,
+	.template_table_destroy = mlx5_flow_table_destroy,
 };

 /* Tunnel information. */
@@ -7981,6 +7996,72 @@ mlx5_flow_actions_template_destroy(struct rte_eth_dev *dev,
 	return fops->actions_template_destroy(dev, template, error);
 }

+/**
+ * Create flow table.
+ *
+ * @param[in] dev
+ *   Pointer to the rte_eth_dev structure.
+ * @param[in] attr
+ *   Pointer to the table attributes.
+ * @param[in] item_templates
+ *   Item template array to be bound to the table.
+ * @param[in] nb_item_templates
+ *   Number of item templates.
+ * @param[in] action_templates
+ *   Action template array to be bound to the table.
+ * @param[in] nb_action_templates
+ *   Number of action templates.
+ * @param[out] error
+ *   Pointer to error structure.
+ *
+ * @return
+ *   Table on success, NULL otherwise and rte_errno is set.
+ */
+static struct rte_flow_template_table *
+mlx5_flow_table_create(struct rte_eth_dev *dev,
+		       const struct rte_flow_template_table_attr *attr,
+		       struct rte_flow_pattern_template *item_templates[],
+		       uint8_t nb_item_templates,
+		       struct rte_flow_actions_template *action_templates[],
+		       uint8_t nb_action_templates,
+		       struct rte_flow_error *error)
+{
+	const struct mlx5_flow_driver_ops *fops =
+			flow_get_drv_ops(MLX5_FLOW_TYPE_HW);
+
+	return fops->template_table_create(dev,
+					   attr,
+					   item_templates,
+					   nb_item_templates,
+					   action_templates,
+					   nb_action_templates,
+					   error);
+}
+
+/**
+ * Destroy flow table.
+ *
+ * @param[in] dev
+ *   Pointer to the rte_eth_dev structure.
+ * @param[in] table
+ *   Pointer to the table to be destroyed.
+ * @param[out] error
+ *   Pointer to error structure.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+static int
+mlx5_flow_table_destroy(struct rte_eth_dev *dev,
+			struct rte_flow_template_table *table,
+			struct rte_flow_error *error)
+{
+	const struct mlx5_flow_driver_ops *fops =
+			flow_get_drv_ops(MLX5_FLOW_TYPE_HW);
+
+	return fops->template_table_destroy(dev, table, error);
+}
+
 /**
  * Allocate a new memory for the counter values wrapped by all the needed
  * management.
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index f5ababb32f..02eb03e4cf 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -1039,6 +1039,54 @@ struct rte_flow_actions_template {
 	uint32_t refcnt; /* Reference counter. */
 };

+/* Jump action struct. */
+struct mlx5_hw_jump_action {
+	/* Action jump from root. */
+	struct mlx5dr_action *root_action;
+	/* HW steering jump action. */
+	struct mlx5dr_action *hws_action;
+};
+
+/* DR action set struct. */
+struct mlx5_hw_actions {
+	struct mlx5dr_action *drop; /* Drop action. */
+};
+
+/* mlx5 action template struct. */
+struct mlx5_hw_action_template {
+	/* Action template pointer. */
+	struct rte_flow_actions_template *action_template;
+	struct mlx5_hw_actions acts; /* Template actions. */
+};
+
+/* mlx5 flow group struct. */
+struct mlx5_flow_group {
+	struct mlx5_list_entry entry;
+	struct mlx5dr_table *tbl; /* HWS table object. */
+	struct mlx5_hw_jump_action jump; /* Jump action. */
+	enum mlx5dr_table_type type; /* Table type. */
+	uint32_t group_id; /* Group id. */
+	uint32_t idx; /* Group memory index. */
+};
+
+#define MLX5_HW_TBL_MAX_ITEM_TEMPLATE 2
+#define MLX5_HW_TBL_MAX_ACTION_TEMPLATE 32
+
+struct rte_flow_template_table {
+	LIST_ENTRY(rte_flow_template_table) next;
+	struct mlx5_flow_group *grp; /* The group rte_flow_template_table uses. */
+	struct mlx5dr_matcher *matcher; /* Template matcher. */
+	/* Item templates bound to the table. */
+	struct rte_flow_pattern_template *its[MLX5_HW_TBL_MAX_ITEM_TEMPLATE];
+	/* Action templates bound to the table. */
+	struct mlx5_hw_action_template ats[MLX5_HW_TBL_MAX_ACTION_TEMPLATE];
+	struct mlx5_indexed_pool *flow; /* The table's flow ipool. */
+	uint32_t type; /* Flow table type RX/TX/FDB. */
+	uint8_t nb_item_templates; /* Item template number. */
+	uint8_t nb_action_templates; /* Action template number. */
+	uint32_t refcnt; /* Table reference counter. */
+};
+
 /*
  * Define list of valid combinations of RX Hash fields
  * (see enum ibv_rx_hash_fields).
@@ -1306,6 +1354,18 @@ typedef int (*mlx5_flow_actions_template_destroy_t)
 			(struct rte_eth_dev *dev,
 			 struct rte_flow_actions_template *template,
 			 struct rte_flow_error *error);
+typedef struct rte_flow_template_table *(*mlx5_flow_table_create_t)
+		(struct rte_eth_dev *dev,
+		 const struct rte_flow_template_table_attr *attr,
+		 struct rte_flow_pattern_template *item_templates[],
+		 uint8_t nb_item_templates,
+		 struct rte_flow_actions_template *action_templates[],
+		 uint8_t nb_action_templates,
+		 struct rte_flow_error *error);
+typedef int (*mlx5_flow_table_destroy_t)
+		(struct rte_eth_dev *dev,
+		 struct rte_flow_template_table *table,
+		 struct rte_flow_error *error);

 struct mlx5_flow_driver_ops {
 	mlx5_flow_validate_t validate;
@@ -1349,6 +1409,8 @@ struct mlx5_flow_driver_ops {
 	mlx5_flow_pattern_template_destroy_t pattern_template_destroy;
 	mlx5_flow_actions_template_create_t actions_template_create;
 	mlx5_flow_actions_template_destroy_t actions_template_destroy;
+	mlx5_flow_table_create_t template_table_create;
+	mlx5_flow_table_destroy_t template_table_destroy;
 };

 /* mlx5_flow.c */
@@ -1784,6 +1846,16 @@ int flow_dv_query_count(struct rte_eth_dev *dev, uint32_t cnt_idx, void *data,
 			struct rte_flow_error *error);

+struct mlx5_list_entry *flow_hw_grp_create_cb(void *tool_ctx, void *cb_ctx);
+void flow_hw_grp_remove_cb(void *tool_ctx, struct mlx5_list_entry *entry);
+int flow_hw_grp_match_cb(void *tool_ctx,
+			 struct mlx5_list_entry *entry,
+			 void *cb_ctx);
+struct mlx5_list_entry *flow_hw_grp_clone_cb(void *tool_ctx,
+					     struct mlx5_list_entry *oentry,
+					     void *cb_ctx);
+void flow_hw_grp_clone_free_cb(void *tool_ctx, struct mlx5_list_entry *entry);
+
 struct mlx5_aso_age_action *flow_aso_age_get_by_idx(struct rte_eth_dev *dev,
 						    uint32_t age_idx);
 int flow_dev_geneve_tlv_option_resource_register(struct rte_eth_dev *dev,
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 349c12a849..5cb5e2ebb9 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -12,6 +12,303 @@

 const struct mlx5_flow_driver_ops mlx5_flow_hw_drv_ops;

+/* DR action flags with different table. */
+static uint32_t mlx5_hw_act_flag[MLX5_HW_ACTION_FLAG_MAX]
+				[MLX5DR_TABLE_TYPE_MAX] = {
+	{
+		MLX5DR_ACTION_FLAG_ROOT_RX,
+		MLX5DR_ACTION_FLAG_ROOT_TX,
+		MLX5DR_ACTION_FLAG_ROOT_FDB,
+	},
+	{
+		MLX5DR_ACTION_FLAG_HWS_RX,
+		MLX5DR_ACTION_FLAG_HWS_TX,
+		MLX5DR_ACTION_FLAG_HWS_FDB,
+	},
+};
+
+/**
+ * Destroy DR actions created by action template.
+ *
+ * DR actions are created during the table creation's action translation
+ * and need to be destroyed when the table is destroyed.
+ *
+ * @param[in] acts
+ *   Pointer to the template HW steering DR actions.
+ */
+static void
+__flow_hw_action_template_destroy(struct mlx5_hw_actions *acts __rte_unused)
+{
+}
+
+/**
+ * Translate rte_flow actions to DR actions.
+ *
+ * As the action template has already indicated the action types, the
+ * rte_flow actions are translated to DR actions here if possible, so the
+ * flow create stage saves the cycles otherwise spent organizing the
+ * actions. Actions with incomplete information at this stage need to be
+ * added to a list and constructed at flow creation time.
+ *
+ * @param[in] dev
+ *   Pointer to the rte_eth_dev structure.
+ * @param[in] table_attr
+ *   Pointer to the table attributes.
+ * @param[in/out] acts
+ *   Pointer to the template HW steering DR actions.
+ * @param[in] at
+ *   Action template.
+ * @param[out] error
+ *   Pointer to error structure.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+static int
+flow_hw_actions_translate(struct rte_eth_dev *dev,
+			  const struct rte_flow_template_table_attr *table_attr,
+			  struct mlx5_hw_actions *acts,
+			  struct rte_flow_actions_template *at,
+			  struct rte_flow_error *error __rte_unused)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	const struct rte_flow_attr *attr = &table_attr->flow_attr;
+	struct rte_flow_action *actions = at->actions;
+	struct rte_flow_action *masks = at->masks;
+	bool actions_end = false;
+	uint32_t type;
+
+	if (attr->transfer)
+		type = MLX5DR_TABLE_TYPE_FDB;
+	else if (attr->egress)
+		type = MLX5DR_TABLE_TYPE_NIC_TX;
+	else
+		type = MLX5DR_TABLE_TYPE_NIC_RX;
+	for (; !actions_end; actions++, masks++) {
+		switch (actions->type) {
+		case RTE_FLOW_ACTION_TYPE_INDIRECT:
+			break;
+		case RTE_FLOW_ACTION_TYPE_VOID:
+			break;
+		case RTE_FLOW_ACTION_TYPE_DROP:
+			acts->drop = priv->hw_drop[!!attr->group][type];
+			break;
+		case RTE_FLOW_ACTION_TYPE_END:
+			actions_end = true;
+			break;
+		default:
+			break;
+		}
+	}
+	return 0;
+}
+
+/**
+ * Create flow table.
+ *
+ * The input item and action templates will be bound to the table.
+ * Flow memory will also be allocated. The matcher will be created based
+ * on the item template, and the actions will be translated to the
+ * dedicated DR actions if possible.
+ *
+ * @param[in] dev
+ *   Pointer to the rte_eth_dev structure.
+ * @param[in] attr
+ *   Pointer to the table attributes.
+ * @param[in] item_templates
+ *   Item template array to be bound to the table.
+ * @param[in] nb_item_templates
+ *   Number of item templates.
+ * @param[in] action_templates
+ *   Action template array to be bound to the table.
+ * @param[in] nb_action_templates
+ *   Number of action templates.
+ * @param[out] error
+ *   Pointer to error structure.
+ *
+ * @return
+ *   Table on success, NULL otherwise and rte_errno is set.
+ */
+static struct rte_flow_template_table *
+flow_hw_table_create(struct rte_eth_dev *dev,
+		     const struct rte_flow_template_table_attr *attr,
+		     struct rte_flow_pattern_template *item_templates[],
+		     uint8_t nb_item_templates,
+		     struct rte_flow_actions_template *action_templates[],
+		     uint8_t nb_action_templates,
+		     struct rte_flow_error *error)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	struct mlx5dr_matcher_attr matcher_attr = {0};
+	struct rte_flow_template_table *tbl = NULL;
+	struct mlx5_flow_group *grp;
+	struct mlx5dr_match_template *mt[MLX5_HW_TBL_MAX_ITEM_TEMPLATE];
+	struct rte_flow_attr flow_attr = attr->flow_attr;
+	struct mlx5_flow_cb_ctx ctx = {
+		.dev = dev,
+		.error = error,
+		.data = &flow_attr,
+	};
+	struct mlx5_indexed_pool_config cfg = {
+		.size = sizeof(struct rte_flow),
+		.trunk_size = 1 << 12,
+		.per_core_cache = 1 << 13,
+		.need_lock = 1,
+		.release_mem_en = !!priv->config.reclaim_mode,
+		.malloc = mlx5_malloc,
+		.free = mlx5_free,
+		.type = "mlx5_hw_table_flow",
+	};
+	struct mlx5_list_entry *ge;
+	uint32_t i, max_tpl = MLX5_HW_TBL_MAX_ITEM_TEMPLATE;
+	uint32_t nb_flows = rte_align32pow2(attr->nb_flows);
+	int err;
+
+	/* HWS layer accepts only 1 item template with root table. */
+	if (!attr->flow_attr.group)
+		max_tpl = 1;
+	cfg.max_idx = nb_flows;
+	/* For a table with a limited number of flows, resize the cache and trunk size. */
+	if (nb_flows < cfg.trunk_size) {
+		cfg.per_core_cache = nb_flows >> 2;
+		cfg.trunk_size = nb_flows;
+	}
+	/* Check if too many templates are required. */
+	if (nb_item_templates > max_tpl ||
+	    nb_action_templates > MLX5_HW_TBL_MAX_ACTION_TEMPLATE) {
+		rte_errno = EINVAL;
+		goto error;
+	}
+	/* Allocate the table memory. */
+	tbl = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*tbl), 0, SOCKET_ID_ANY);
+	if (!tbl)
+		goto error;
+	/* Allocate flow indexed pool. */
+	tbl->flow = mlx5_ipool_create(&cfg);
+	if (!tbl->flow)
+		goto error;
+	/* Register the flow group. */
+	ge = mlx5_hlist_register(priv->sh->groups, attr->flow_attr.group, &ctx);
+	if (!ge)
+		goto error;
+	grp = container_of(ge, struct mlx5_flow_group, entry);
+	tbl->grp = grp;
+	/* Prepare matcher information. */
+	matcher_attr.priority = attr->flow_attr.priority;
+	matcher_attr.mode = MLX5DR_MATCHER_RESOURCE_MODE_RULE;
+	matcher_attr.rule.num_log = rte_log2_u32(nb_flows);
+	/* Build the item template. */
+	for (i = 0; i < nb_item_templates; i++) {
+		uint32_t ret;
+
+		ret = __atomic_add_fetch(&item_templates[i]->refcnt, 1,
+					 __ATOMIC_RELAXED);
+		if (ret <= 1) {
+			rte_errno = EINVAL;
+			goto it_error;
+		}
+		mt[i] = item_templates[i]->mt;
+		tbl->its[i] = item_templates[i];
+	}
+	tbl->matcher = mlx5dr_matcher_create
+		(tbl->grp->tbl, mt, nb_item_templates, &matcher_attr);
+	if (!tbl->matcher)
+		goto it_error;
+	tbl->nb_item_templates = nb_item_templates;
+	/* Build the action template. */
+	for (i = 0; i < nb_action_templates; i++) {
+		uint32_t ret;
+
+		ret = __atomic_add_fetch(&action_templates[i]->refcnt, 1,
+					 __ATOMIC_RELAXED);
+		if (ret <= 1) {
+			rte_errno = EINVAL;
+			goto at_error;
+		}
+		err = flow_hw_actions_translate(dev, attr,
+						&tbl->ats[i].acts,
+						action_templates[i], error);
+		if (err)
+			goto at_error;
+		tbl->ats[i].action_template = action_templates[i];
+	}
+	tbl->nb_action_templates = nb_action_templates;
+	tbl->type = attr->flow_attr.transfer ? MLX5DR_TABLE_TYPE_FDB :
+		    (attr->flow_attr.egress ? MLX5DR_TABLE_TYPE_NIC_TX :
+		    MLX5DR_TABLE_TYPE_NIC_RX);
+	LIST_INSERT_HEAD(&priv->flow_hw_tbl, tbl, next);
+	return tbl;
+at_error:
+	while (i--) {
+		__flow_hw_action_template_destroy(&tbl->ats[i].acts);
+		__atomic_sub_fetch(&action_templates[i]->refcnt,
+				   1, __ATOMIC_RELAXED);
+	}
+	i = nb_item_templates;
+it_error:
+	while (i--)
+		__atomic_sub_fetch(&item_templates[i]->refcnt,
+				   1, __ATOMIC_RELAXED);
+	mlx5dr_matcher_destroy(tbl->matcher);
+error:
+	err = rte_errno;
+	if (tbl) {
+		if (tbl->flow)
+			mlx5_ipool_destroy(tbl->flow);
+		mlx5_free(tbl);
+	}
+	rte_flow_error_set(error, err,
+			   RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+			   "failed to create template table");
+	return NULL;
+}
+
+/**
+ * Destroy flow table.
+ *
+ * @param[in] dev
+ *   Pointer to the rte_eth_dev structure.
+ * @param[in] table
+ *   Pointer to the table to be destroyed.
+ * @param[out] error
+ *   Pointer to error structure.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+static int
+flow_hw_table_destroy(struct rte_eth_dev *dev,
+		      struct rte_flow_template_table *table,
+		      struct rte_flow_error *error __rte_unused)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	int i;
+
+	if (table->refcnt) {
+		DRV_LOG(WARNING, "Table %p is still in use.", (void *)table);
+		return rte_flow_error_set(error, EBUSY,
+				   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				   NULL,
+				   "table is in use");
+	}
+	LIST_REMOVE(table, next);
+	for (i = 0; i < table->nb_item_templates; i++)
+		__atomic_sub_fetch(&table->its[i]->refcnt,
+				   1, __ATOMIC_RELAXED);
+	for (i = 0; i < table->nb_action_templates; i++) {
+		__flow_hw_action_template_destroy(&table->ats[i].acts);
+		__atomic_sub_fetch(&table->ats[i].action_template->refcnt,
+				   1, __ATOMIC_RELAXED);
+	}
+	mlx5dr_matcher_destroy(table->matcher);
+	mlx5_hlist_unregister(priv->sh->groups, &table->grp->entry);
+	mlx5_ipool_destroy(table->flow);
+	mlx5_free(table);
+	return 0;
+}
+
 /**
  * Create flow action template.
  *
@@ -196,6 +493,199 @@ flow_hw_pattern_template_destroy(struct rte_eth_dev *dev __rte_unused,
 	return 0;
 }

+/**
+ * Create group callback.
+ *
+ * @param[in] tool_ctx
+ *   Pointer to the hash list related context.
+ * @param[in] cb_ctx
+ *   Pointer to the group creation context.
+ *
+ * @return
+ *   Group entry on success, NULL otherwise and rte_errno is set.
+ */
+struct mlx5_list_entry *
+flow_hw_grp_create_cb(void *tool_ctx, void *cb_ctx)
+{
+	struct mlx5_dev_ctx_shared *sh = tool_ctx;
+	struct mlx5_flow_cb_ctx *ctx = cb_ctx;
+	struct rte_eth_dev *dev = ctx->dev;
+	struct rte_flow_attr *attr = (struct rte_flow_attr *)ctx->data;
+	struct mlx5_priv *priv = dev->data->dev_private;
+	struct mlx5dr_table_attr dr_tbl_attr = {0};
+	struct rte_flow_error *error = ctx->error;
+	struct mlx5_flow_group *grp_data;
+	struct mlx5dr_table *tbl = NULL;
+	struct mlx5dr_action *jump;
+	uint32_t idx = 0;
+
+	grp_data = mlx5_ipool_zmalloc(sh->ipool[MLX5_IPOOL_HW_GRP], &idx);
+	if (!grp_data) {
+		rte_flow_error_set(error, ENOMEM,
+				   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				   NULL,
+				   "cannot allocate flow table data entry");
+		return NULL;
+	}
+	dr_tbl_attr.level = attr->group;
+	if (attr->transfer)
+		dr_tbl_attr.type = MLX5DR_TABLE_TYPE_FDB;
+	else if (attr->egress)
+		dr_tbl_attr.type = MLX5DR_TABLE_TYPE_NIC_TX;
+	else
+		dr_tbl_attr.type = MLX5DR_TABLE_TYPE_NIC_RX;
+	tbl = mlx5dr_table_create(priv->dr_ctx, &dr_tbl_attr);
+	if (!tbl)
+		goto error;
+	grp_data->tbl = tbl;
+	if (attr->group) {
+		/* Jump action to be used by non-root table. */
+		jump = mlx5dr_action_create_dest_table
+			(priv->dr_ctx, tbl,
+			 mlx5_hw_act_flag[!!attr->group][dr_tbl_attr.type]);
+		if (!jump)
+			goto error;
+		grp_data->jump.hws_action = jump;
+		/* Jump action to be used by root table. */
+		jump = mlx5dr_action_create_dest_table
+			(priv->dr_ctx, tbl,
+			 mlx5_hw_act_flag[MLX5_HW_ACTION_FLAG_ROOT]
+					 [dr_tbl_attr.type]);
+		if (!jump)
+			goto error;
+		grp_data->jump.root_action = jump;
+	}
+	grp_data->idx = idx;
+	grp_data->group_id = attr->group;
+	grp_data->type = dr_tbl_attr.type;
+	return &grp_data->entry;
+error:
+	if (grp_data->jump.root_action)
+		mlx5dr_action_destroy(grp_data->jump.root_action);
+	if (grp_data->jump.hws_action)
+		mlx5dr_action_destroy(grp_data->jump.hws_action);
+	if (tbl)
+		mlx5dr_table_destroy(tbl);
+	if (idx)
+		mlx5_ipool_free(sh->ipool[MLX5_IPOOL_HW_GRP], idx);
+	rte_flow_error_set(error, ENOMEM,
+			   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+			   NULL,
+			   "cannot allocate flow dr table");
+	return NULL;
+}
+
+/**
+ * Remove group callback.
+ *
+ * @param[in] tool_ctx
+ *   Pointer to the hash list related context.
+ * @param[in] entry
+ *   Pointer to the entry to be removed.
+ */
+void
+flow_hw_grp_remove_cb(void *tool_ctx, struct mlx5_list_entry *entry)
+{
+	struct mlx5_dev_ctx_shared *sh = tool_ctx;
+	struct mlx5_flow_group *grp_data =
+		    container_of(entry, struct mlx5_flow_group, entry);
+
+	MLX5_ASSERT(entry && sh);
+	/* To use the wrapper glue functions instead. */
+	if (grp_data->jump.hws_action)
+		mlx5dr_action_destroy(grp_data->jump.hws_action);
+	if (grp_data->jump.root_action)
+		mlx5dr_action_destroy(grp_data->jump.root_action);
+	mlx5dr_table_destroy(grp_data->tbl);
+	mlx5_ipool_free(sh->ipool[MLX5_IPOOL_HW_GRP], grp_data->idx);
+}
+
+/**
+ * Match group callback.
+ *
+ * @param[in] tool_ctx
+ *   Pointer to the hash list related context.
+ * @param[in] entry
+ *   Pointer to the group to be matched.
+ * @param[in] cb_ctx
+ *   Pointer to the group matching context.
+ *
+ * @return
+ *   0 on match, non-zero otherwise.
+ */
+int
+flow_hw_grp_match_cb(void *tool_ctx __rte_unused, struct mlx5_list_entry *entry,
+		     void *cb_ctx)
+{
+	struct mlx5_flow_cb_ctx *ctx = cb_ctx;
+	struct mlx5_flow_group *grp_data =
+		container_of(entry, struct mlx5_flow_group, entry);
+	struct rte_flow_attr *attr =
+			(struct rte_flow_attr *)ctx->data;
+
+	return (grp_data->group_id != attr->group) ||
+		((grp_data->type != MLX5DR_TABLE_TYPE_FDB) &&
+		attr->transfer) ||
+		((grp_data->type != MLX5DR_TABLE_TYPE_NIC_TX) &&
+		attr->egress) ||
+		((grp_data->type != MLX5DR_TABLE_TYPE_NIC_RX) &&
+		attr->ingress);
+}
+
+/**
+ * Clone group entry callback.
+ *
+ * @param[in] tool_ctx
+ *   Pointer to the hash list related context.
+ * @param[in] oentry
+ *   Pointer to the group entry to be cloned.
+ * @param[in] cb_ctx
+ *   Pointer to the group matching context.
+ *
+ * @return
+ *   Group entry on success, NULL otherwise and rte_errno is set.
+ */
+struct mlx5_list_entry *
+flow_hw_grp_clone_cb(void *tool_ctx, struct mlx5_list_entry *oentry,
+		     void *cb_ctx)
+{
+	struct mlx5_dev_ctx_shared *sh = tool_ctx;
+	struct mlx5_flow_cb_ctx *ctx = cb_ctx;
+	struct mlx5_flow_group *grp_data;
+	struct rte_flow_error *error = ctx->error;
+	uint32_t idx = 0;
+
+	grp_data = mlx5_ipool_malloc(sh->ipool[MLX5_IPOOL_HW_GRP], &idx);
+	if (!grp_data) {
+		rte_flow_error_set(error, ENOMEM,
+				   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				   NULL,
+				   "cannot allocate flow table data entry");
+		return NULL;
+	}
+	memcpy(grp_data, oentry, sizeof(*grp_data));
+	grp_data->idx = idx;
+	return &grp_data->entry;
+}
+
+/**
+ * Free cloned group entry callback.
+ *
+ * @param[in] tool_ctx
+ *   Pointer to the hash list related context.
+ * @param[in] entry
+ *   Pointer to the group to be freed.
+ */
+void
+flow_hw_grp_clone_free_cb(void *tool_ctx, struct mlx5_list_entry *entry)
+{
+	struct mlx5_dev_ctx_shared *sh = tool_ctx;
+	struct mlx5_flow_group *grp_data =
+		    container_of(entry, struct mlx5_flow_group, entry);
+
+	mlx5_ipool_free(sh->ipool[MLX5_IPOOL_HW_GRP], grp_data->idx);
+}
+
 /**
  * Configure port HWS resources.
  *
@@ -213,6 +703,7 @@ flow_hw_pattern_template_destroy(struct rte_eth_dev *dev __rte_unused,
 * @return
 *   0 on success, a negative errno value otherwise and rte_errno is set.
 */
+
 static int
 flow_hw_configure(struct rte_eth_dev *dev,
 		  const struct rte_flow_port_attr *port_attr,
@@ -289,8 +780,24 @@ flow_hw_configure(struct rte_eth_dev *dev,
 		goto err;
 	priv->dr_ctx = dr_ctx;
 	priv->nb_queue = nb_queue;
+	/* Add global actions. */
+	for (i = 0; i < MLX5_HW_ACTION_FLAG_MAX; i++) {
+		for (j = 0; j < MLX5DR_TABLE_TYPE_MAX; j++) {
+			priv->hw_drop[i][j] = mlx5dr_action_create_dest_drop
+				(priv->dr_ctx, mlx5_hw_act_flag[i][j]);
+			if (!priv->hw_drop[i][j])
+				goto err;
+		}
+	}
 	return 0;
 err:
+	for (i = 0; i < MLX5_HW_ACTION_FLAG_MAX; i++) {
+		for (j = 0; j < MLX5DR_TABLE_TYPE_MAX; j++) {
+			if (!priv->hw_drop[i][j])
+				continue;
+			mlx5dr_action_destroy(priv->hw_drop[i][j]);
+		}
+	}
 	if (dr_ctx)
 		claim_zero(mlx5dr_context_close(dr_ctx));
 	if (priv->hw_q) {
@@ -312,11 +819,17 @@ void
 flow_hw_resource_release(struct rte_eth_dev *dev)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
+	struct rte_flow_template_table *tbl;
 	struct rte_flow_pattern_template *it;
 	struct rte_flow_actions_template *at;
+	int i, j;
 	if (!priv->dr_ctx)
 		return;
+	while (!LIST_EMPTY(&priv->flow_hw_tbl)) {
+		tbl = LIST_FIRST(&priv->flow_hw_tbl);
+		flow_hw_table_destroy(dev, tbl, NULL);
+	}
 	while (!LIST_EMPTY(&priv->flow_hw_itt)) {
 		it = LIST_FIRST(&priv->flow_hw_itt);
 		flow_hw_pattern_template_destroy(dev, it, NULL);
@@ -325,6 +838,13 @@ flow_hw_resource_release(struct rte_eth_dev *dev)
 		at = LIST_FIRST(&priv->flow_hw_at);
 		flow_hw_actions_template_destroy(dev, at, NULL);
 	}
+	for (i = 0; i < MLX5_HW_ACTION_FLAG_MAX; i++) {
+		for (j = 0; j < MLX5DR_TABLE_TYPE_MAX; j++) {
+			if (!priv->hw_drop[i][j])
+				continue;
+			mlx5dr_action_destroy(priv->hw_drop[i][j]);
+		}
+	}
 	mlx5_free(priv->hw_q);
 	priv->hw_q = NULL;
 	claim_zero(mlx5dr_context_close(priv->dr_ctx));
@@ -338,6 +858,8 @@ const struct mlx5_flow_driver_ops mlx5_flow_hw_drv_ops = {
 	.pattern_template_destroy = flow_hw_pattern_template_destroy,
 	.actions_template_create = flow_hw_actions_template_create,
 	.actions_template_destroy = flow_hw_actions_template_destroy,
+	.template_table_create = flow_hw_table_create,
+	.template_table_destroy = flow_hw_table_destroy,
 };

 #endif
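[Editor's note] Putting the pieces of this patch together, a minimal sketch of how an application would combine previously created pattern and actions templates into a template table. It assumes the ethdev-level rte_flow_template_table_create() wrapper from the accompanying rte_flow template API series (the wrapper signature is an assumption for illustration; the flow_attr and nb_flows attribute members are the ones this patch consumes):

	/* Hypothetical usage sketch -- dispatches to flow_hw_table_create(). */
	static struct rte_flow_template_table *
	create_table(uint16_t port_id, struct rte_flow_pattern_template *pt,
		     struct rte_flow_actions_template *at,
		     struct rte_flow_error *error)
	{
		const struct rte_flow_template_table_attr tbl_attr = {
			.flow_attr = {
				.group = 1,	/* Non-root group: up to 2 item templates. */
				.ingress = 1,	/* Selects the NIC RX table type. */
			},
			.nb_flows = 1 << 16,	/* Pre-sized; rounded up to a power of two. */
		};

		return rte_flow_template_table_create(port_id, &tbl_attr,
						      &pt, 1, &at, 1, error);
	}

Table creation registers the group in the new group hash list, creates the matcher from the item templates, and translates the action templates into per-table DR actions, so per-rule insertion later only has to fill in the values.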
From patchwork Thu Feb 10 16:29:20 2022
X-Patchwork-Submitter: Suanming Mou
X-Patchwork-Id: 107292
X-Patchwork-Delegate: rasland@nvidia.com
From: Suanming Mou
Subject: [PATCH 07/13] net/mlx5: add basic flow queue operation
Date: Thu, 10 Feb 2022 18:29:20 +0200
Message-ID: <20220210162926.20436-8-suanmingm@nvidia.com>
In-Reply-To: <20220210162926.20436-1-suanmingm@nvidia.com>
References: <20220210162926.20436-1-suanmingm@nvidia.com>
The HW steering uses a queue-based flow rule management mechanism. The matcher and part of the actions have already been prepared during flow table creation. The remaining actions are constructed during flow creation if needed. A flow postpone attribute bit describes whether the flow creation/destruction should be applied to the HW directly. An extra drain function is also provided to force-push all the cached flows to the HW. Once the flow has been applied to the HW, the pull function is called to retrieve the enqueued creation/destruction results.

The DR rule flow memory is owned by the PMD layer instead of being allocated from the HW steering layer, so while destroying the flow, the flow rule memory can only be freed after the CQE is received. The HW queue job descriptor is introduced to convey the flow information and operation type between flow insertion/destruction and the pull function. A usage sketch of this queue-based model follows.
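[Editor's note] A minimal sketch of the enqueue/push/pull cycle described above. The wrapper names are those listed below in this commit message; the exact signatures and the postpone attribute field name are assumptions for illustration:

	/* Hypothetical usage sketch of the queue-based flow operations. */
	static int
	insert_rule(uint16_t port_id, uint32_t queue,
		    struct rte_flow_template_table *tbl,
		    const struct rte_flow_item items[],
		    const struct rte_flow_action actions[],
		    struct rte_flow_error *error)
	{
		const struct rte_flow_q_ops_attr ops_attr = {
			.postpone = 1,	/* Assumed field: cache the op until the next push. */
		};
		struct rte_flow_q_op_res res[32];
		struct rte_flow *flow;
		int n;

		/* Index 0 selects the first pattern/action template of the table. */
		flow = rte_flow_q_flow_create(port_id, queue, &ops_attr, tbl,
					      items, 0, actions, 0, error);
		if (!flow)
			return -1;
		rte_flow_q_push(port_id, queue, error);	/* Drain cached ops to HW. */
		do {
			n = rte_flow_q_pull(port_id, queue, res,
					    RTE_DIM(res), error);
		} while (n == 0);	/* Poll until the completion arrives. */
		return n < 0 ? -1 : 0;
	}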
This commit adds the basic flow queue operation for: rte_flow_q_flow_create(); rte_flow_q_flow_destroy(); rte_flow_q_push(); rte_flow_q_pull(); Signed-off-by: Suanming Mou --- drivers/net/mlx5/mlx5.h | 2 +- drivers/net/mlx5/mlx5_flow.c | 158 ++++++++++++++++++ drivers/net/mlx5/mlx5_flow.h | 39 +++++ drivers/net/mlx5/mlx5_flow_hw.c | 278 +++++++++++++++++++++++++++++++- 4 files changed, 475 insertions(+), 2 deletions(-) diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h index a4bc8d1fb7..ec4eb7ee94 100644 --- a/drivers/net/mlx5/mlx5.h +++ b/drivers/net/mlx5/mlx5.h @@ -342,7 +342,7 @@ enum { /* HW steering flow management job descriptor. */ struct mlx5_hw_q_job { uint32_t type; /* Job type. */ - struct rte_flow *flow; /* Flow attached to the job. */ + struct rte_flow_hw *flow; /* Flow attached to the job. */ void *user_data; /* Job user data. */ }; diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c index 2e70e1eaaf..b48a3af0fb 100644 --- a/drivers/net/mlx5/mlx5_flow.c +++ b/drivers/net/mlx5/mlx5_flow.c @@ -845,6 +845,33 @@ static int mlx5_flow_table_destroy(struct rte_eth_dev *dev, struct rte_flow_template_table *table, struct rte_flow_error *error); +static struct rte_flow * +mlx5_flow_q_flow_create(struct rte_eth_dev *dev, + uint32_t queue, + const struct rte_flow_q_ops_attr *attr, + struct rte_flow_template_table *table, + const struct rte_flow_item items[], + uint8_t pattern_template_index, + const struct rte_flow_action actions[], + uint8_t action_template_index, + struct rte_flow_error *error); +static int +mlx5_flow_q_flow_destroy(struct rte_eth_dev *dev, + uint32_t queue, + const struct rte_flow_q_ops_attr *attr, + struct rte_flow *flow, + struct rte_flow_error *error); +static int +mlx5_flow_q_pull(struct rte_eth_dev *dev, + uint32_t queue, + struct rte_flow_q_op_res res[], + uint16_t n_res, + struct rte_flow_error *error); + +static int +mlx5_flow_q_push(struct rte_eth_dev *dev, + uint32_t queue, + struct rte_flow_error *error); static const struct rte_flow_ops mlx5_flow_ops = { .validate = mlx5_flow_validate, @@ -873,6 +900,10 @@ static const struct rte_flow_ops mlx5_flow_ops = { .actions_template_destroy = mlx5_flow_actions_template_destroy, .template_table_create = mlx5_flow_table_create, .template_table_destroy = mlx5_flow_table_destroy, + .q_flow_create = mlx5_flow_q_flow_create, + .q_flow_destroy = mlx5_flow_q_flow_destroy, + .q_pull = mlx5_flow_q_pull, + .q_push = mlx5_flow_q_push, }; /* Tunnel information. */ @@ -8062,6 +8093,133 @@ mlx5_flow_table_destroy(struct rte_eth_dev *dev, return fops->template_table_destroy(dev, table, error); } +/** + * Enqueue flow creation. + * + * @param[in] dev + * Pointer to the rte_eth_dev structure. + * @param[in] queue_id + * The queue to create the flow. + * @param[in] attr + * Pointer to the flow operation attributes. + * @param[in] items + * Items with flow spec value. + * @param[in] pattern_template_index + * The item pattern flow follows from the table. + * @param[in] actions + * Action with flow spec value. + * @param[in] action_template_index + * The action pattern flow follows from the table. + * @param[out] error + * Pointer to error structure. + * + * @return + * Flow pointer on success, NULL otherwise and rte_errno is set. 
+ */ +static struct rte_flow * +mlx5_flow_q_flow_create(struct rte_eth_dev *dev, + uint32_t queue_id, + const struct rte_flow_q_ops_attr *attr, + struct rte_flow_template_table *table, + const struct rte_flow_item items[], + uint8_t pattern_template_index, + const struct rte_flow_action actions[], + uint8_t action_template_index, + struct rte_flow_error *error) +{ + const struct mlx5_flow_driver_ops *fops = + flow_get_drv_ops(MLX5_FLOW_TYPE_HW); + + return fops->q_flow_create(dev, queue_id, attr, table, + items, pattern_template_index, + actions, action_template_index, + error); +} + +/** + * Enqueue flow destruction. + * + * @param[in] dev + * Pointer to the rte_eth_dev structure. + * @param[in] queue + * The queue to destroy the flow. + * @param[in] attr + * Pointer to the flow operation attributes. + * @param[in] flow + * Pointer to the flow to be destroyed. + * @param[out] error + * Pointer to error structure. + * + * @return + * 0 on success, negative value otherwise and rte_errno is set. + */ +static int +mlx5_flow_q_flow_destroy(struct rte_eth_dev *dev, + uint32_t queue, + const struct rte_flow_q_ops_attr *attr, + struct rte_flow *flow, + struct rte_flow_error *error) +{ + const struct mlx5_flow_driver_ops *fops = + flow_get_drv_ops(MLX5_FLOW_TYPE_HW); + + return fops->q_flow_destroy(dev, queue, attr, flow, error); +} + +/** + * Pull the enqueued flows. + * + * @param[in] dev + * Pointer to the rte_eth_dev structure. + * @param[in] queue + * The queue to pull the result. + * @param[in/out] res + * Array to save the results. + * @param[in] n_res + * Available result with the array. + * @param[out] error + * Pointer to error structure. + * + * @return + * Result number on success, negative value otherwise and rte_errno is set. + */ +static int +mlx5_flow_q_pull(struct rte_eth_dev *dev, + uint32_t queue, + struct rte_flow_q_op_res res[], + uint16_t n_res, + struct rte_flow_error *error) +{ + const struct mlx5_flow_driver_ops *fops = + flow_get_drv_ops(MLX5_FLOW_TYPE_HW); + + return fops->q_pull(dev, queue, res, n_res, error); +} + +/** + * Push the enqueued flows. + * + * @param[in] dev + * Pointer to the rte_eth_dev structure. + * @param[in] queue + * The queue to push the flows. + * @param[out] error + * Pointer to error structure. + * + * @return + * 0 on success, negative value otherwise and rte_errno is set. + */ +static int +mlx5_flow_q_push(struct rte_eth_dev *dev, + uint32_t queue, + struct rte_flow_error *error) +{ + const struct mlx5_flow_driver_ops *fops = + flow_get_drv_ops(MLX5_FLOW_TYPE_HW); + + return fops->q_push(dev, queue, error); +} + /** * Allocate a new memory for the counter values wrapped by all the needed * management. diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h index 02eb03e4cf..40eb8d79aa 100644 --- a/drivers/net/mlx5/mlx5_flow.h +++ b/drivers/net/mlx5/mlx5_flow.h @@ -1015,6 +1015,13 @@ struct rte_flow { uint32_t geneve_tlv_option; /**< Holds Geneve TLV option id. > */ } __rte_packed; +/* HWS flow struct. */ +struct rte_flow_hw { + uint32_t idx; /* Flow index from indexed pool. */ + struct rte_flow_template_table *table; /* The table flow allcated from. */ + struct mlx5dr_rule rule; /* HWS layer data struct. */ +} __rte_packed; + /* Flow item template struct. 
*/ struct rte_flow_pattern_template { LIST_ENTRY(rte_flow_pattern_template) next; @@ -1366,6 +1373,32 @@ typedef int (*mlx5_flow_table_destroy_t) (struct rte_eth_dev *dev, struct rte_flow_template_table *table, struct rte_flow_error *error); +typedef struct rte_flow *(*mlx5_flow_q_flow_create_t) + (struct rte_eth_dev *dev, + uint32_t queue, + const struct rte_flow_q_ops_attr *attr, + struct rte_flow_template_table *table, + const struct rte_flow_item items[], + uint8_t pattern_template_index, + const struct rte_flow_action actions[], + uint8_t action_template_index, + struct rte_flow_error *error); +typedef int (*mlx5_flow_q_flow_destroy_t) + (struct rte_eth_dev *dev, + uint32_t queue, + const struct rte_flow_q_ops_attr *attr, + struct rte_flow *flow, + struct rte_flow_error *error); +typedef int (*mlx5_flow_q_pull_t) + (struct rte_eth_dev *dev, + uint32_t queue, + struct rte_flow_q_op_res res[], + uint16_t n_res, + struct rte_flow_error *error); +typedef int (*mlx5_flow_q_push_t) + (struct rte_eth_dev *dev, + uint32_t queue, + struct rte_flow_error *error); struct mlx5_flow_driver_ops { mlx5_flow_validate_t validate; @@ -1411,6 +1444,10 @@ struct mlx5_flow_driver_ops { mlx5_flow_actions_template_destroy_t actions_template_destroy; mlx5_flow_table_create_t template_table_create; mlx5_flow_table_destroy_t template_table_destroy; + mlx5_flow_q_flow_create_t q_flow_create; + mlx5_flow_q_flow_destroy_t q_flow_destroy; + mlx5_flow_q_pull_t q_pull; + mlx5_flow_q_push_t q_push; }; /* mlx5_flow.c */ @@ -1581,6 +1618,8 @@ mlx5_translate_tunnel_etypes(uint64_t pattern_flags) return 0; } +int flow_hw_q_flow_flush(struct rte_eth_dev *dev, + struct rte_flow_error *error); int mlx5_flow_group_to_table(struct rte_eth_dev *dev, const struct mlx5_flow_tunnel *tunnel, uint32_t group, uint32_t *table, diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c index 5cb5e2ebb9..a74825312f 100644 --- a/drivers/net/mlx5/mlx5_flow_hw.c +++ b/drivers/net/mlx5/mlx5_flow_hw.c @@ -10,6 +10,9 @@ #ifdef HAVE_IBV_FLOW_DV_SUPPORT +/* The maximum actions support in the flow. */ +#define MLX5_HW_MAX_ACTS 16 + const struct mlx5_flow_driver_ops mlx5_flow_hw_drv_ops; /* DR action flags with different table. */ @@ -105,6 +108,275 @@ flow_hw_actions_translate(struct rte_eth_dev *dev, return 0; } +/** + * Construct flow action array. + * + * For action template contains dynamic actions, these actions need to + * be updated according to the rte_flow action during flow creation. + * + * @param[in] hw_acts + * Pointer to translated actions from template. + * @param[in] actions + * Array of rte_flow action need to be checked. + * @param[in] rule_acts + * Array of DR rule actions to be used during flow creation.. + * @param[in] acts_num + * Pointer to the real acts_num flow has. + * + * @return + * 0 on success, negative value otherwise and rte_errno is set. 
+ */
+static __rte_always_inline int
+flow_hw_actions_construct(struct mlx5_hw_actions *hw_acts,
+			  const struct rte_flow_action actions[],
+			  struct mlx5dr_rule_action *rule_acts,
+			  uint32_t *acts_num)
+{
+	bool actions_end = false;
+	uint32_t i;
+
+	for (i = 0; !actions_end && (i < MLX5_HW_MAX_ACTS); actions++) {
+		switch (actions->type) {
+		case RTE_FLOW_ACTION_TYPE_INDIRECT:
+			break;
+		case RTE_FLOW_ACTION_TYPE_VOID:
+			break;
+		case RTE_FLOW_ACTION_TYPE_DROP:
+			rule_acts[i++].action = hw_acts->drop;
+			break;
+		case RTE_FLOW_ACTION_TYPE_END:
+			actions_end = true;
+			break;
+		default:
+			break;
+		}
+	}
+	*acts_num = i;
+	return 0;
+}
+
+/**
+ * Enqueue HW steering flow creation.
+ *
+ * The flow will be applied to the HW only if the postpone bit is not set or
+ * the extra push function is called.
+ * The flow creation status should be checked from the dequeue result.
+ *
+ * @param[in] dev
+ *   Pointer to the rte_eth_dev structure.
+ * @param[in] queue
+ *   The queue to create the flow.
+ * @param[in] attr
+ *   Pointer to the flow operation attributes.
+ * @param[in] items
+ *   Items with flow spec value.
+ * @param[in] pattern_template_index
+ *   The item pattern flow follows from the table.
+ * @param[in] actions
+ *   Action with flow spec value.
+ * @param[in] action_template_index
+ *   The action pattern flow follows from the table.
+ * @param[out] error
+ *   Pointer to error structure.
+ *
+ * @return
+ *    Flow pointer on success, NULL otherwise and rte_errno is set.
+ */
+static struct rte_flow *
+flow_hw_q_flow_create(struct rte_eth_dev *dev,
+		      uint32_t queue,
+		      const struct rte_flow_q_ops_attr *attr,
+		      struct rte_flow_template_table *table,
+		      const struct rte_flow_item items[],
+		      uint8_t pattern_template_index,
+		      const struct rte_flow_action actions[],
+		      uint8_t action_template_index,
+		      struct rte_flow_error *error)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	struct mlx5dr_rule_attr rule_attr = {
+		.queue_id = queue,
+		.user_data = attr->user_data,
+		.burst = attr->postpone,
+	};
+	struct mlx5dr_rule_action rule_acts[MLX5_HW_MAX_ACTS];
+	struct mlx5_hw_actions *hw_acts;
+	struct rte_flow_hw *flow;
+	struct mlx5_hw_q_job *job;
+	uint32_t acts_num, flow_idx;
+	int ret;
+
+	if (unlikely(!priv->hw_q[queue].job_idx)) {
+		rte_errno = ENOMEM;
+		goto error;
+	}
+	flow = mlx5_ipool_zmalloc(table->flow, &flow_idx);
+	if (!flow)
+		goto error;
+	/*
+	 * Set the table here in order to know the destination table
+	 * when freeing the flow afterwards.
+	 */
+	flow->table = table;
+	flow->idx = flow_idx;
+	job = priv->hw_q[queue].job[--priv->hw_q[queue].job_idx];
+	/*
+	 * Set the job type here in order to know if the flow memory
+	 * should be freed or not when getting the result from dequeue.
+	 */
+	job->type = MLX5_HW_Q_JOB_TYPE_CREATE;
+	job->flow = flow;
+	job->user_data = attr->user_data;
+	rule_attr.user_data = job;
+	hw_acts = &table->ats[action_template_index].acts;
+	/* Construct the flow action array based on the input actions.*/
+	flow_hw_actions_construct(hw_acts, actions, rule_acts, &acts_num);
+	ret = mlx5dr_rule_create(table->matcher,
+				 pattern_template_index, items,
+				 rule_acts, acts_num,
+				 &rule_attr, &flow->rule);
+	if (likely(!ret))
+		return (struct rte_flow *)flow;
+	/* Flow creation failed, return the descriptor and flow memory. */
+	mlx5_ipool_free(table->flow, flow_idx);
+	priv->hw_q[queue].job[priv->hw_q[queue].job_idx++] = job;
+error:
+	rte_flow_error_set(error, rte_errno,
+			   RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+			   "fail to create rte flow");
+	return NULL;
+}
+
+/**
+ * Enqueue HW steering flow destruction.
+ *
+ * The flow will be applied to the HW only if the postpone bit is not set or
+ * the extra push function is called.
+ * The flow destruction status should be checked from the dequeue result.
+ *
+ * @param[in] dev
+ *   Pointer to the rte_eth_dev structure.
+ * @param[in] queue
+ *   The queue to destroy the flow.
+ * @param[in] attr
+ *   Pointer to the flow operation attributes.
+ * @param[in] flow
+ *   Pointer to the flow to be destroyed.
+ * @param[out] error
+ *   Pointer to error structure.
+ *
+ * @return
+ *    0 on success, negative value otherwise and rte_errno is set.
+ */
+static int
+flow_hw_q_flow_destroy(struct rte_eth_dev *dev,
+		       uint32_t queue,
+		       const struct rte_flow_q_ops_attr *attr,
+		       struct rte_flow *flow,
+		       struct rte_flow_error *error)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	struct mlx5dr_rule_attr rule_attr = {
+		.queue_id = queue,
+		.user_data = attr->user_data,
+		.burst = attr->postpone,
+	};
+	struct rte_flow_hw *fh = (struct rte_flow_hw *)flow;
+	struct mlx5_hw_q_job *job;
+	int ret;
+
+	if (unlikely(!priv->hw_q[queue].job_idx)) {
+		rte_errno = ENOMEM;
+		goto error;
+	}
+	job = priv->hw_q[queue].job[--priv->hw_q[queue].job_idx];
+	job->type = MLX5_HW_Q_JOB_TYPE_DESTROY;
+	job->user_data = attr->user_data;
+	job->flow = fh;
+	rule_attr.user_data = job;
+	ret = mlx5dr_rule_destroy(&fh->rule, &rule_attr);
+	if (ret)
+		goto error;
+	return 0;
+error:
+	return rte_flow_error_set(error, rte_errno,
+				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+				  "fail to destroy rte flow");
+}
+
+/**
+ * Pull the enqueued flows.
+ *
+ * For flows enqueued from creation/destruction, the status should be
+ * checked from the dequeue result.
+ *
+ * @param[in] dev
+ *   Pointer to the rte_eth_dev structure.
+ * @param[in] queue
+ *   The queue to pull the result.
+ * @param[in/out] res
+ *   Array to save the results.
+ * @param[in] n_res
+ *   Available result slots in the array.
+ * @param[out] error
+ *   Pointer to error structure.
+ *
+ * @return
+ *    Result number on success, negative value otherwise and rte_errno is set.
+ */
+static int
+flow_hw_q_pull(struct rte_eth_dev *dev,
+	       uint32_t queue,
+	       struct rte_flow_q_op_res res[],
+	       uint16_t n_res,
+	       struct rte_flow_error *error)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	struct mlx5_hw_q_job *job;
+	int ret, i;
+
+	ret = mlx5dr_send_queue_poll(priv->dr_ctx, queue, res, n_res);
+	if (ret < 0)
+		return rte_flow_error_set(error, rte_errno,
+					  RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+					  "fail to query flow queue");
+	for (i = 0; i <  ret; i++) {
+		job = (struct mlx5_hw_q_job *)res[i].user_data;
+		/* Restore user data. */
+		res[i].user_data = job->user_data;
+		if (job->type == MLX5_HW_Q_JOB_TYPE_DESTROY)
+			mlx5_ipool_free(job->flow->table->flow, job->flow->idx);
+		priv->hw_q[queue].job[priv->hw_q[queue].job_idx++] = job;
+	}
+	return ret;
+}
+
+/**
+ * Push the enqueued flows to HW.
+ *
+ * Force apply all the enqueued flows to the HW.
+ *
+ * @param[in] dev
+ *   Pointer to the rte_eth_dev structure.
+ * @param[in] queue
+ *   The queue to push the flow.
+ * @param[out] error
+ *   Pointer to error structure.
+ *
+ * @return
+ *    0 on success, negative value otherwise and rte_errno is set.
+ */
+static int
+flow_hw_q_push(struct rte_eth_dev *dev,
+	       uint32_t queue,
+	       struct rte_flow_error *error __rte_unused)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+
+	return mlx5dr_send_queue_action(priv->dr_ctx, queue,
+					MLX5DR_SEND_QUEUE_ACTION_DRAIN);
+}
+
 /**
  * Create flow table.
  *
@@ -152,7 +424,7 @@ flow_hw_table_create(struct rte_eth_dev *dev,
 		.data = &flow_attr,
 	};
 	struct mlx5_indexed_pool_config cfg = {
-		.size = sizeof(struct rte_flow),
+		.size = sizeof(struct rte_flow_hw),
 		.trunk_size = 1 << 12,
 		.per_core_cache = 1 << 13,
 		.need_lock = 1,
@@ -860,6 +1132,10 @@ const struct mlx5_flow_driver_ops mlx5_flow_hw_drv_ops = {
 	.actions_template_destroy = flow_hw_actions_template_destroy,
 	.template_table_create = flow_hw_table_create,
 	.template_table_destroy = flow_hw_table_destroy,
+	.q_flow_create = flow_hw_q_flow_create,
+	.q_flow_destroy = flow_hw_q_flow_destroy,
+	.q_pull = flow_hw_q_pull,
+	.q_push = flow_hw_q_push,
 };
 #endif

From patchwork Thu Feb 10 16:29:21 2022
X-Patchwork-Submitter: Suanming Mou
X-Patchwork-Id: 107293
X-Patchwork-Delegate: rasland@nvidia.com
From: Suanming Mou
Subject: [PATCH 08/13] net/mlx5: add flow flush function
Date: Thu, 10 Feb 2022 18:29:21 +0200
Message-ID: <20220210162926.20436-9-suanmingm@nvidia.com>
In-Reply-To: <20220210162926.20436-1-suanmingm@nvidia.com>
References: <20220210162926.20436-1-suanmingm@nvidia.com>
MIME-Version: 1.0
In case of port restart, all created flows should be flushed. This commit adds
the flow flush helper function.

Signed-off-by: Suanming Mou
---
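For reference, the drain pattern that the flush helper below relies on
reduces to the following hedged sketch; hw_q->size - hw_q->job_idx is the
number of outstanding jobs (queue capacity minus free job descriptors), and
BURST_THR is the burst size defined in this patch:

/* Push cached operations, then burst-pull completions until every
 * outstanding job has been accounted for. */
uint32_t pending = hw_q->size - hw_q->job_idx; /* outstanding jobs */
struct rte_flow_q_op_res comp[BURST_THR];

flow_hw_q_push(dev, queue, &error);
while (pending) {
	int n = flow_hw_q_pull(dev, queue, comp, BURST_THR, &error);

	if (n < 0)
		break;        /* queue query failed */
	pending -= n;         /* each completion retires one job */
}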
 drivers/net/mlx5/mlx5_flow.c    |   8 +++
 drivers/net/mlx5/mlx5_flow_hw.c | 117 ++++++++++++++++++++++++++++++++
 2 files changed, 125 insertions(+)

diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index b48a3af0fb..9ac96ac979 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -6991,6 +6991,14 @@ mlx5_flow_list_flush(struct rte_eth_dev *dev, enum mlx5_flow_type type,
 	uint32_t num_flushed = 0, fidx = 1;
 	struct rte_flow *flow;

+#ifdef HAVE_IBV_FLOW_DV_SUPPORT
+	if (priv->config.dv_flow_en == 2 &&
+	    type == MLX5_FLOW_TYPE_GEN) {
+		flow_hw_q_flow_flush(dev, NULL);
+		return;
+	}
+#endif
+
 	MLX5_IPOOL_FOREACH(priv->flows[type], fidx, flow) {
 		flow_list_destroy(dev, type, fidx);
 		num_flushed++;
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index a74825312f..dcf72ab89f 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -377,6 +377,123 @@ flow_hw_q_push(struct rte_eth_dev *dev,
 					MLX5DR_SEND_QUEUE_ACTION_DRAIN);
 }

+/**
+ * Drain the enqueued flows' completion.
+ *
+ * @param[in] dev
+ *   Pointer to the rte_eth_dev structure.
+ * @param[in] queue
+ *   The queue to pull the flow.
+ * @param[in] pending_rules
+ *   The pending flow number.
+ * @param[out] error
+ *   Pointer to error structure.
+ *
+ * @return
+ *    0 on success, negative value otherwise and rte_errno is set.
+ */
+static int
+__flow_hw_pull_comp(struct rte_eth_dev *dev,
+		    uint32_t queue,
+		    uint32_t pending_rules,
+		    struct rte_flow_error *error)
+{
+#define BURST_THR 32u
+	struct rte_flow_q_op_res comp[BURST_THR];
+	int ret, i, empty_loop = 0;
+
+	flow_hw_q_push(dev, queue, error);
+	while (pending_rules) {
+		ret = flow_hw_q_pull(dev, queue, comp, BURST_THR, error);
+		if (ret < 0)
+			return -1;
+		if (!ret) {
+			usleep(200);
+			if (++empty_loop > 5) {
+				DRV_LOG(WARNING, "No available dequeue, quit.");
+				break;
+			}
+			continue;
+		}
+		for (i = 0; i < ret; i++) {
+			if (comp[i].status == RTE_FLOW_Q_OP_ERROR)
+				DRV_LOG(WARNING, "Flow flush get error CQE.");
+		}
+		if ((uint32_t)ret > pending_rules) {
+			DRV_LOG(WARNING, "Flow flush get extra CQE.");
+			return -1;
+		}
+		pending_rules -= ret;
+		empty_loop = 0;
+	}
+	return 0;
+}
+
+/**
+ * Flush created flows.
+ *
+ * @param[in] dev
+ *   Pointer to the rte_eth_dev structure.
+ * @param[out] error
+ *   Pointer to error structure.
+ *
+ * @return
+ *    0 on success, negative value otherwise and rte_errno is set.
+ */
+int
+flow_hw_q_flow_flush(struct rte_eth_dev *dev,
+		     struct rte_flow_error *error)
+{
+#define DEFAULT_QUEUE 0
+	struct mlx5_priv *priv = dev->data->dev_private;
+	struct mlx5_hw_q *hw_q;
+	struct rte_flow_template_table *tbl;
+	struct rte_flow_hw *flow;
+	struct rte_flow_q_ops_attr attr = {
+		.postpone = 0,
+	};
+	uint32_t pending_rules = 0;
+	uint32_t queue;
+	uint32_t fidx;
+
+	/*
+	 * Ensure to push and dequeue all the enqueued flows in case the user
+	 * forgot to dequeue; otherwise the enqueued created flows would leak.
+	 * A missed dequeue would also make the flush pull unexpected extra
+	 * CQEs and drive pending_rules negative.
+	 */
+	for (queue = 0; queue < priv->nb_queue; queue++) {
+		hw_q = &priv->hw_q[queue];
+		if (__flow_hw_pull_comp(dev, queue, hw_q->size - hw_q->job_idx,
+					error))
+			return -1;
+	}
+	/* Flush flow per-table from DEFAULT_QUEUE. */
+	hw_q = &priv->hw_q[DEFAULT_QUEUE];
+	LIST_FOREACH(tbl, &priv->flow_hw_tbl, next) {
+		MLX5_IPOOL_FOREACH(tbl->flow, fidx, flow) {
+			if (flow_hw_q_flow_destroy(dev, DEFAULT_QUEUE, &attr,
+						   (struct rte_flow *)flow,
+						   error))
+				return -1;
+			pending_rules++;
+			/* Drain completion with queue size. */
+			if (pending_rules >= hw_q->size) {
+				if (__flow_hw_pull_comp(dev, DEFAULT_QUEUE,
+							pending_rules, error))
+					return -1;
+				pending_rules = 0;
+			}
+		}
+	}
+	/* Drain the remaining completions. */
+	if (pending_rules &&
+	    __flow_hw_pull_comp(dev, DEFAULT_QUEUE, pending_rules,
+				error))
+		return -1;
+	return 0;
+}
+
 /**
  * Create flow table.
 *

From patchwork Thu Feb 10 16:29:22 2022
X-Patchwork-Submitter: Suanming Mou
X-Patchwork-Id: 107294
X-Patchwork-Delegate: rasland@nvidia.com
From: Suanming Mou
Subject: [PATCH 09/13] net/mlx5: add flow jump action
Date: Thu, 10 Feb 2022 18:29:22 +0200
Message-ID: <20220210162926.20436-10-suanmingm@nvidia.com>
In-Reply-To: <20220210162926.20436-1-suanmingm@nvidia.com>
References: <20220210162926.20436-1-suanmingm@nvidia.com>
MIME-Version: 1.0
The jump action connects different levels of flow tables into a complete data
flow. A new action construct data struct is also added in this commit to help
handle the dynamic actions.

Signed-off-by: Suanming Mou
---
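To make the static/dynamic split concrete, here is a hedged sketch of the two
action-template cases this patch distinguishes, at the rte_flow level (group
number 3 is an arbitrary example value):

/* Case 1: jump target fully masked in the actions template. The group is
 * fixed for every flow of the table, so the DR jump action can be
 * registered once, at template table creation time. */
const struct rte_flow_action_jump jump_conf = { .group = 3 };
const struct rte_flow_action tmpl_actions[] = {
	{ .type = RTE_FLOW_ACTION_TYPE_JUMP, .conf = &jump_conf },
	{ .type = RTE_FLOW_ACTION_TYPE_END },
};
const struct rte_flow_action tmpl_masks[] = {
	{ .type = RTE_FLOW_ACTION_TYPE_JUMP, .conf = &jump_conf },
	{ .type = RTE_FLOW_ACTION_TYPE_END },
};

/* Case 2: mask conf left NULL. The jump group is only known per flow, so
 * translation appends an entry to the dynamic action construct list and
 * the DR action is registered at flow creation time instead. */
const struct rte_flow_action tmpl_masks_dyn[] = {
	{ .type = RTE_FLOW_ACTION_TYPE_JUMP, .conf = NULL },
	{ .type = RTE_FLOW_ACTION_TYPE_END },
};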
 drivers/net/mlx5/mlx5.h         |   1 +
 drivers/net/mlx5/mlx5_flow.h    |  25 ++-
 drivers/net/mlx5/mlx5_flow_hw.c | 270 +++++++++++++++++++++++++++++---
 3 files changed, 275 insertions(+), 21 deletions(-)

diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index ec4eb7ee94..0bc9897101 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -1525,6 +1525,7 @@ struct mlx5_priv {
 	/* HW steering global drop action. */
 	struct mlx5dr_action *hw_drop[MLX5_HW_ACTION_FLAG_MAX]
 				     [MLX5DR_TABLE_TYPE_MAX];
+	struct mlx5_indexed_pool *acts_ipool; /* Action data indexed pool. */
 };

 #define PORT_ID(priv) ((priv)->dev_data->port_id)
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index 40eb8d79aa..a1ab9173d9 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -1018,10 +1018,25 @@ struct rte_flow {
 /* HWS flow struct. */
 struct rte_flow_hw {
 	uint32_t idx; /* Flow index from indexed pool. */
+	uint32_t fate_type; /* Fate action type. */
+	union {
+		/* Jump action. */
+		struct mlx5_hw_jump_action *jump;
+	};
 	struct rte_flow_template_table *table; /* The table flow allcated from. */
 	struct mlx5dr_rule rule; /* HWS layer data struct. */
 } __rte_packed;

+/* rte flow action translate to DR action struct. */
+struct mlx5_action_construct_data {
+	LIST_ENTRY(mlx5_action_construct_data) next;
+	/* Ensure the action types are matched. */
+	int type;
+	uint32_t idx;  /* Data index. */
+	uint16_t action_src; /* rte_flow_action src offset. */
+	uint16_t action_dst; /* mlx5dr_rule_action dst offset. */
+};
+
 /* Flow item template struct. */
 struct rte_flow_pattern_template {
 	LIST_ENTRY(rte_flow_pattern_template) next;
@@ -1054,9 +1069,17 @@ struct mlx5_hw_jump_action {
 	struct mlx5dr_action *hws_action;
 };

+/* The maximum actions support in the flow. */
+#define MLX5_HW_MAX_ACTS 16
+
 /* DR action set struct. */
 struct mlx5_hw_actions {
-	struct mlx5dr_action *drop; /* Drop action. */
+	/* Dynamic action list. */
+	LIST_HEAD(act_list, mlx5_action_construct_data) act_list;
+	struct mlx5_hw_jump_action *jump; /* Jump action. */
+	uint32_t acts_num:4; /* Total action number. */
+	/* Translated DR action array from action template. */
+	struct mlx5dr_rule_action rule_acts[MLX5_HW_MAX_ACTS];
 };

 /* mlx5 action template struct. */
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index dcf72ab89f..a825766245 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -30,18 +30,158 @@ static uint32_t mlx5_hw_act_flag[MLX5_HW_ACTION_FLAG_MAX]
 	},
 };

+/**
+ * Register destination table DR jump action.
+ *
+ * @param[in] dev
+ *   Pointer to the rte_eth_dev structure.
+ * @param[in] attr
+ *   Pointer to the flow attributes.
+ * @param[in] dest_group
+ *   The destination group ID.
+ * @param[out] error
+ *   Pointer to error structure.
+ *
+ * @return
+ *    Jump action on success, NULL otherwise and rte_errno is set.
+ */
+static struct mlx5_hw_jump_action *
+flow_hw_jump_action_register(struct rte_eth_dev *dev,
+			     const struct rte_flow_attr *attr,
+			     uint32_t dest_group,
+			     struct rte_flow_error *error)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	struct rte_flow_attr jattr = *attr;
+	struct mlx5_flow_group *grp;
+	struct mlx5_flow_cb_ctx ctx = {
+		.dev = dev,
+		.error = error,
+		.data = &jattr,
+	};
+	struct mlx5_list_entry *ge;
+
+	jattr.group = dest_group;
+	ge = mlx5_hlist_register(priv->sh->flow_tbls, dest_group, &ctx);
+	if (!ge)
+		return NULL;
+	grp = container_of(ge, struct mlx5_flow_group, entry);
+	return &grp->jump;
+}
+
+/**
+ * Release jump action.
+ *
+ * @param[in] dev
+ *   Pointer to the rte_eth_dev structure.
+ * @param[in] jump
+ *   Pointer to the jump action.
+ */
+static void
+flow_hw_jump_release(struct rte_eth_dev *dev, struct mlx5_hw_jump_action *jump)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	struct mlx5_flow_group *grp;
+
+	grp = container_of
+		(jump, struct mlx5_flow_group, jump);
+	mlx5_hlist_unregister(priv->sh->flow_tbls, &grp->entry);
+}
+
 /**
  * Destroy DR actions created by action template.
  *
  * For DR actions created during table creation's action translate.
  * Need to destroy the DR action when destroying the table.
  *
+ * @param[in] dev
+ *   Pointer to the rte_eth_dev structure.
  * @param[in] acts
  *   Pointer to the template HW steering DR actions.
  */
 static void
-__flow_hw_action_template_destroy(struct mlx5_hw_actions *acts __rte_unused)
+__flow_hw_action_template_destroy(struct rte_eth_dev *dev,
+				  struct mlx5_hw_actions *acts)
 {
+	struct mlx5_priv *priv = dev->data->dev_private;
+
+	if (acts->jump) {
+		struct mlx5_flow_group *grp;
+
+		grp = container_of
+			(acts->jump, struct mlx5_flow_group, jump);
+		mlx5_hlist_unregister(priv->sh->flow_tbls, &grp->entry);
+		acts->jump = NULL;
+	}
+}
+
+/**
+ * Allocate the dynamic action construct data.
+ *
+ * @param[in] priv
+ *   Pointer to the port private data structure.
+ * @param[in] type
+ *   Action type.
+ * @param[in] action_src
+ *   Offset of source rte flow action.
+ * @param[in] action_dst
+ *   Offset of destination DR action.
+ *
+ * @return
+ *    Pointer to the allocated construct data on success, NULL otherwise and
+ *    rte_errno is set.
+ */
+static __rte_always_inline struct mlx5_action_construct_data *
+__flow_hw_act_data_alloc(struct mlx5_priv *priv,
+			 enum rte_flow_action_type type,
+			 uint16_t action_src,
+			 uint16_t action_dst)
+{
+	struct mlx5_action_construct_data *act_data;
+	uint32_t idx = 0;
+
+	act_data = mlx5_ipool_zmalloc(priv->acts_ipool, &idx);
+	if (!act_data)
+		return NULL;
+	act_data->idx = idx;
+	act_data->type = type;
+	act_data->action_src = action_src;
+	act_data->action_dst = action_dst;
+	return act_data;
+}
+
+/**
+ * Append dynamic action to the dynamic action list.
+ *
+ * @param[in] priv
+ *   Pointer to the port private data structure.
+ * @param[in] acts
+ *   Pointer to the template HW steering DR actions.
+ * @param[in] type
+ *   Action type.
+ * @param[in] action_src
+ *   Offset of source rte flow action.
+ * @param[in] action_dst
+ *   Offset of destination DR action.
+ *
+ * @return
+ *    0 on success, negative value otherwise and rte_errno is set.
+ */ +static __rte_always_inline int +__flow_hw_act_data_general_append(struct mlx5_priv *priv, + struct mlx5_hw_actions *acts, + enum rte_flow_action_type type, + uint16_t action_src, + uint16_t action_dst) +{ struct mlx5_action_construct_data *act_data; + + act_data = __flow_hw_act_data_alloc(priv, type, action_src, action_dst); + if (!act_data) + return -1; + LIST_INSERT_HEAD(&acts->act_list, act_data, next); + return 0; } /** @@ -74,14 +214,16 @@ flow_hw_actions_translate(struct rte_eth_dev *dev, const struct rte_flow_template_table_attr *table_attr, struct mlx5_hw_actions *acts, struct rte_flow_actions_template *at, - struct rte_flow_error *error __rte_unused) + struct rte_flow_error *error) { struct mlx5_priv *priv = dev->data->dev_private; const struct rte_flow_attr *attr = &table_attr->flow_attr; struct rte_flow_action *actions = at->actions; + struct rte_flow_action *action_start = actions; struct rte_flow_action *masks = at->masks; bool actions_end = false; - uint32_t type; + uint32_t type, i; + int err; if (attr->transfer) type = MLX5DR_TABLE_TYPE_FDB; @@ -89,14 +231,34 @@ flow_hw_actions_translate(struct rte_eth_dev *dev, type = MLX5DR_TABLE_TYPE_NIC_TX; else type = MLX5DR_TABLE_TYPE_NIC_RX; - for (; !actions_end; actions++, masks++) { + for (i = 0; !actions_end; actions++, masks++) { switch (actions->type) { case RTE_FLOW_ACTION_TYPE_INDIRECT: break; case RTE_FLOW_ACTION_TYPE_VOID: break; case RTE_FLOW_ACTION_TYPE_DROP: - acts->drop = priv->hw_drop[!!attr->group][type]; + acts->rule_acts[i++].action = + priv->hw_drop[!!attr->group][type]; + break; + case RTE_FLOW_ACTION_TYPE_JUMP: + if (masks->conf) { + uint32_t jump_group = + ((const struct rte_flow_action_jump *) + actions->conf)->group; + acts->jump = flow_hw_jump_action_register + (dev, attr, jump_group, error); + if (!acts->jump) + goto err; + acts->rule_acts[i].action = (!!attr->group) ? + acts->jump->hws_action : + acts->jump->root_action; + } else if (__flow_hw_act_data_general_append + (priv, acts, actions->type, + actions - action_start, i)){ + goto err; + } + i++; break; case RTE_FLOW_ACTION_TYPE_END: actions_end = true; @@ -105,7 +267,14 @@ flow_hw_actions_translate(struct rte_eth_dev *dev, break; } } + acts->acts_num = i; return 0; +err: + err = rte_errno; + __flow_hw_action_template_destroy(dev, acts); + return rte_flow_error_set(error, err, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, + "fail to create rte table"); } /** @@ -114,6 +283,10 @@ flow_hw_actions_translate(struct rte_eth_dev *dev, * For action template contains dynamic actions, these actions need to * be updated according to the rte_flow action during flow creation. * + * @param[in] dev + * Pointer to the rte_eth_dev structure. + * @param[in] job + * Pointer to job descriptor. * @param[in] hw_acts * Pointer to translated actions from template. * @param[in] actions @@ -127,31 +300,63 @@ flow_hw_actions_translate(struct rte_eth_dev *dev, * 0 on success, negative value otherwise and rte_errno is set. 
  */
 static __rte_always_inline int
-flow_hw_actions_construct(struct mlx5_hw_actions *hw_acts,
+flow_hw_actions_construct(struct rte_eth_dev *dev,
+			  struct mlx5_hw_q_job *job,
+			  struct mlx5_hw_actions *hw_acts,
 			  const struct rte_flow_action actions[],
 			  struct mlx5dr_rule_action *rule_acts,
 			  uint32_t *acts_num)
 {
-	bool actions_end = false;
-	uint32_t i;
+	struct rte_flow_template_table *table = job->flow->table;
+	struct mlx5_action_construct_data *act_data;
+	const struct rte_flow_action *action;
+	struct rte_flow_attr attr = {
+		.ingress = 1,
+	};

-	for (i = 0; !actions_end && (i < MLX5_HW_MAX_ACTS); actions++) {
-		switch (actions->type) {
+	memcpy(rule_acts, hw_acts->rule_acts,
+	       sizeof(*rule_acts) * hw_acts->acts_num);
+	*acts_num = hw_acts->acts_num;
+	if (LIST_EMPTY(&hw_acts->act_list))
+		return 0;
+	attr.group = table->grp->group_id;
+	if (table->type == MLX5DR_TABLE_TYPE_FDB) {
+		attr.transfer = 1;
+		attr.ingress = 1;
+	} else if (table->type == MLX5DR_TABLE_TYPE_NIC_TX) {
+		attr.egress = 1;
+		attr.ingress = 0;
+	} else {
+		attr.ingress = 1;
+	}
+	LIST_FOREACH(act_data, &hw_acts->act_list, next) {
+		uint32_t jump_group;
+		struct mlx5_hw_jump_action *jump;
+
+		action = &actions[act_data->action_src];
+		MLX5_ASSERT(action->type == RTE_FLOW_ACTION_TYPE_INDIRECT ||
+			    (int)action->type == act_data->type);
+		switch (action->type) {
 		case RTE_FLOW_ACTION_TYPE_INDIRECT:
 			break;
 		case RTE_FLOW_ACTION_TYPE_VOID:
 			break;
-		case RTE_FLOW_ACTION_TYPE_DROP:
-			rule_acts[i++].action = hw_acts->drop;
-			break;
-		case RTE_FLOW_ACTION_TYPE_END:
-			actions_end = true;
+		case RTE_FLOW_ACTION_TYPE_JUMP:
+			jump_group = ((const struct rte_flow_action_jump *)
+				      action->conf)->group;
+			jump = flow_hw_jump_action_register
+				(dev, &attr, jump_group, NULL);
+			if (!jump)
+				return -1;
+			rule_acts[act_data->action_dst].action =
+				(!!attr.group) ? jump->hws_action : jump->root_action;
+			job->flow->jump = jump;
+			job->flow->fate_type = MLX5_FLOW_FATE_JUMP;
 			break;
 		default:
 			break;
 		}
 	}
-	*acts_num = i;
 	return 0;
 }

@@ -230,7 +435,8 @@ flow_hw_q_flow_create(struct rte_eth_dev *dev,
 	rule_attr.user_data = job;
 	hw_acts = &table->ats[action_template_index].acts;
 	/* Construct the flow action array based on the input actions.*/
-	flow_hw_actions_construct(hw_acts, actions, rule_acts, &acts_num);
+	flow_hw_actions_construct(dev, job, hw_acts, actions,
+				  rule_acts, &acts_num);
 	ret = mlx5dr_rule_create(table->matcher,
 				 pattern_template_index, items,
 				 rule_acts, acts_num,
@@ -344,8 +550,11 @@ flow_hw_q_pull(struct rte_eth_dev *dev,
 		job = (struct mlx5_hw_q_job *)res[i].user_data;
 		/* Restore user data.
*/ res[i].user_data = job->user_data; - if (job->type == MLX5_HW_Q_JOB_TYPE_DESTROY) + if (job->type == MLX5_HW_Q_JOB_TYPE_DESTROY) { + if (job->flow->fate_type == MLX5_FLOW_FATE_JUMP) + flow_hw_jump_release(dev, job->flow->jump); mlx5_ipool_free(job->flow->table->flow, job->flow->idx); + } priv->hw_q[queue].job[priv->hw_q[queue].job_idx++] = job; } return ret; @@ -616,6 +825,7 @@ flow_hw_table_create(struct rte_eth_dev *dev, rte_errno = EINVAL; goto at_error; } + LIST_INIT(&tbl->ats[i].acts.act_list); err = flow_hw_actions_translate(dev, attr, &tbl->ats[i].acts, action_templates[i], error); @@ -631,7 +841,7 @@ flow_hw_table_create(struct rte_eth_dev *dev, return tbl; at_error: while (i--) { - __flow_hw_action_template_destroy(&tbl->ats[i].acts); + __flow_hw_action_template_destroy(dev, &tbl->ats[i].acts); __atomic_sub_fetch(&action_templates[i]->refcnt, 1, __ATOMIC_RELAXED); } @@ -687,7 +897,7 @@ flow_hw_table_destroy(struct rte_eth_dev *dev, __atomic_sub_fetch(&table->its[i]->refcnt, 1, __ATOMIC_RELAXED); for (i = 0; i < table->nb_action_templates; i++) { - __flow_hw_action_template_destroy(&table->ats[i].acts); + __flow_hw_action_template_destroy(dev, &table->ats[i].acts); __atomic_sub_fetch(&table->ats[i].action_template->refcnt, 1, __ATOMIC_RELAXED); } @@ -1106,6 +1316,15 @@ flow_hw_configure(struct rte_eth_dev *dev, struct mlx5_hw_q *hw_q; struct mlx5_hw_q_job *job = NULL; uint32_t mem_size, i, j; + struct mlx5_indexed_pool_config cfg = { + .size = sizeof(struct rte_flow_hw), + .trunk_size = 4096, + .need_lock = 1, + .release_mem_en = !!priv->config.reclaim_mode, + .malloc = mlx5_malloc, + .free = mlx5_free, + .type = "mlx5_hw_action_construct_data", + }; if (!port_attr || !nb_queue || !queue_attr) { rte_errno = EINVAL; @@ -1124,6 +1343,9 @@ flow_hw_configure(struct rte_eth_dev *dev, } flow_hw_resource_release(dev); } + priv->acts_ipool = mlx5_ipool_create(&cfg); + if (!priv->acts_ipool) + goto err; /* Allocate the queue job descriptor LIFO. 
 */
 	mem_size = sizeof(priv->hw_q[0]) * nb_queue;
 	for (i = 0; i < nb_queue; i++) {
@@ -1193,6 +1415,10 @@ flow_hw_configure(struct rte_eth_dev *dev,
 		mlx5_free(priv->hw_q);
 		priv->hw_q = NULL;
 	}
+	if (priv->acts_ipool) {
+		mlx5_ipool_destroy(priv->acts_ipool);
+		priv->acts_ipool = NULL;
+	}
 	return rte_flow_error_set(error, rte_errno,
 				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
 				  "fail to configure port");
@@ -1234,6 +1460,10 @@ flow_hw_resource_release(struct rte_eth_dev *dev)
 				mlx5dr_action_destroy(priv->hw_drop[i][j]);
 		}
 	}
+	if (priv->acts_ipool) {
+		mlx5_ipool_destroy(priv->acts_ipool);
+		priv->acts_ipool = NULL;
+	}
 	mlx5_free(priv->hw_q);
 	priv->hw_q = NULL;
 	claim_zero(mlx5dr_context_close(priv->dr_ctx));

From patchwork Thu Feb 10 16:29:23 2022
X-Patchwork-Submitter: Suanming Mou
X-Patchwork-Id: 107295
X-Patchwork-Delegate: rasland@nvidia.com
From: Suanming Mou
Subject: [PATCH 10/13] net/mlx5: add queue and RSS action
Date: Thu, 10 Feb 2022 18:29:23 +0200
Message-ID: <20220210162926.20436-11-suanmingm@nvidia.com>
In-Reply-To: <20220210162926.20436-1-suanmingm@nvidia.com>
References: <20220210162926.20436-1-suanmingm@nvidia.com>
MIME-Version: 1.0
This commit adds the queue and RSS actions. Similar to the jump action, dynamic ones will be added to the action construct list.

Since the queue and RSS actions in a template must not be destroyed during port restart, the actions are created with a standalone indirect table, as indirect actions are. When the port stops, the indirect table is detached from the action; when the port starts, it is attached back to the action.

One more change is made to accelerate action creation. Currently, mlx5_hrxq_get() returns the object index instead of the object pointer, which forces an extra index-to-object conversion through mlx5_ipool_get() in most cases. That extra conversion hurts multi-thread performance since mlx5_ipool_get() takes a global lock internally. As the hash Rx queue object itself also contains the index, returning the object directly achieves better performance without the global lock.

Signed-off-by: Suanming Mou --- drivers/net/mlx5/linux/mlx5_os.c | 18 ++-- drivers/net/mlx5/mlx5.h | 4 + drivers/net/mlx5/mlx5_devx.c | 10 ++ drivers/net/mlx5/mlx5_flow.c | 38 +++----- drivers/net/mlx5/mlx5_flow.h | 7 ++ drivers/net/mlx5/mlx5_flow_dv.c | 150 ++++++++++++++--------------- drivers/net/mlx5/mlx5_flow_hw.c | 101 +++++++++++++++++++ drivers/net/mlx5/mlx5_flow_verbs.c | 7 +- drivers/net/mlx5/mlx5_rx.h | 9 +- drivers/net/mlx5/mlx5_rxq.c | 78 +++++++++------ 10 files changed, 271 insertions(+), 151 deletions(-) diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c index 52e52a4ad7..8f0b15aad0 100644 --- a/drivers/net/mlx5/linux/mlx5_os.c +++ b/drivers/net/mlx5/linux/mlx5_os.c @@ -1714,6 +1714,15 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev, priv->drop_queue.hrxq = mlx5_drop_action_create(eth_dev); if (!priv->drop_queue.hrxq) goto error; + priv->hrxqs = mlx5_list_create("hrxq", eth_dev, true, + mlx5_hrxq_create_cb, + mlx5_hrxq_match_cb, + mlx5_hrxq_remove_cb, + mlx5_hrxq_clone_cb, + mlx5_hrxq_clone_free_cb); + if (!priv->hrxqs) + goto error; + rte_rwlock_init(&priv->ind_tbls_lock); if (priv->config.dv_flow_en == 2) return eth_dev; /* Port representor shares the same max priority with pf port.
*/ @@ -1744,15 +1753,6 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev, err = ENOTSUP; goto error; } - priv->hrxqs = mlx5_list_create("hrxq", eth_dev, true, - mlx5_hrxq_create_cb, - mlx5_hrxq_match_cb, - mlx5_hrxq_remove_cb, - mlx5_hrxq_clone_cb, - mlx5_hrxq_clone_free_cb); - if (!priv->hrxqs) - goto error; - rte_rwlock_init(&priv->ind_tbls_lock); /* Query availability of metadata reg_c's. */ if (!priv->sh->metadata_regc_check_flag) { err = mlx5_flow_discover_mreg_c(eth_dev); diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h index 0bc9897101..6fb82bf1f3 100644 --- a/drivers/net/mlx5/mlx5.h +++ b/drivers/net/mlx5/mlx5.h @@ -1286,6 +1286,7 @@ struct mlx5_flow_rss_desc { uint64_t hash_fields; /* Verbs Hash fields. */ uint8_t key[MLX5_RSS_HASH_KEY_LEN]; /**< RSS hash key. */ uint32_t key_len; /**< RSS hash key len. */ + uint32_t hws_flags; /**< HW steering action. */ uint32_t tunnel; /**< Queue in tunnel. */ uint32_t shared_rss; /**< Shared RSS index. */ struct mlx5_ind_table_obj *ind_tbl; @@ -1347,6 +1348,7 @@ struct mlx5_hrxq { #if defined(HAVE_IBV_FLOW_DV_SUPPORT) || !defined(HAVE_INFINIBAND_VERBS_H) void *action; /* DV QP action pointer. */ #endif + uint32_t hws_flags; /* Hw steering flags. */ uint64_t hash_fields; /* Verbs Hash fields. */ uint32_t rss_key_len; /* Hash key length in bytes. */ uint32_t idx; /* Hash Rx queue index. */ @@ -1477,6 +1479,8 @@ struct mlx5_priv { LIST_HEAD(txqobj, mlx5_txq_obj) txqsobj; /* Verbs/DevX Tx queues. */ /* Indirection tables. */ LIST_HEAD(ind_tables, mlx5_ind_table_obj) ind_tbls; + /* Standalone indirect tables. */ + LIST_HEAD(stdl_ind_tables, mlx5_ind_table_obj) standalone_ind_tbls; /* Pointer to next element. */ rte_rwlock_t ind_tbls_lock; uint32_t refcnt; /**< Reference counter. */ diff --git a/drivers/net/mlx5/mlx5_devx.c b/drivers/net/mlx5/mlx5_devx.c index 91243f684f..af131bcd1b 100644 --- a/drivers/net/mlx5/mlx5_devx.c +++ b/drivers/net/mlx5/mlx5_devx.c @@ -807,6 +807,14 @@ mlx5_devx_hrxq_new(struct rte_eth_dev *dev, struct mlx5_hrxq *hrxq, goto error; } #if defined(HAVE_IBV_FLOW_DV_SUPPORT) || !defined(HAVE_INFINIBAND_VERBS_H) + if (hrxq->hws_flags) { + hrxq->action = mlx5dr_action_create_dest_tir + (priv->dr_ctx, + (struct mlx5dr_devx_obj *)hrxq->tir, hrxq->hws_flags); + if (!hrxq->action) + goto error; + return 0; + } if (mlx5_flow_os_create_flow_action_dest_devx_tir(hrxq->tir, &hrxq->action)) { rte_errno = errno; @@ -1042,6 +1050,8 @@ mlx5_devx_drop_action_create(struct rte_eth_dev *dev) DRV_LOG(ERR, "Cannot create drop RX queue"); return ret; } + if (priv->config.dv_flow_en == 2) + return 0; /* hrxq->ind_table queues are NULL, drop RX queue ID will be used */ ret = mlx5_devx_ind_table_new(dev, 0, hrxq->ind_table); if (ret != 0) { diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c index 9ac96ac979..9cad84ebc6 100644 --- a/drivers/net/mlx5/mlx5_flow.c +++ b/drivers/net/mlx5/mlx5_flow.c @@ -9302,14 +9302,10 @@ int mlx5_action_handle_attach(struct rte_eth_dev *dev) { struct mlx5_priv *priv = dev->data->dev_private; - struct mlx5_indexed_pool *ipool = - priv->sh->ipool[MLX5_IPOOL_RSS_SHARED_ACTIONS]; - struct mlx5_shared_action_rss *shared_rss, *shared_rss_last; int ret = 0; - uint32_t idx; + struct mlx5_ind_table_obj *ind_tbl, *ind_tbl_last; - ILIST_FOREACH(ipool, priv->rss_shared_actions, idx, shared_rss, next) { - struct mlx5_ind_table_obj *ind_tbl = shared_rss->ind_tbl; + LIST_FOREACH(ind_tbl, &priv->standalone_ind_tbls, next) { const char *message; uint32_t queue_idx; @@ -9325,9 +9321,7 @@ 
mlx5_action_handle_attach(struct rte_eth_dev *dev) } if (ret != 0) return ret; - ILIST_FOREACH(ipool, priv->rss_shared_actions, idx, shared_rss, next) { - struct mlx5_ind_table_obj *ind_tbl = shared_rss->ind_tbl; - + LIST_FOREACH(ind_tbl, &priv->standalone_ind_tbls, next) { ret = mlx5_ind_table_obj_attach(dev, ind_tbl); if (ret != 0) { DRV_LOG(ERR, "Port %u could not attach " @@ -9336,13 +9330,12 @@ mlx5_action_handle_attach(struct rte_eth_dev *dev) goto error; } } + return 0; error: - shared_rss_last = shared_rss; - ILIST_FOREACH(ipool, priv->rss_shared_actions, idx, shared_rss, next) { - struct mlx5_ind_table_obj *ind_tbl = shared_rss->ind_tbl; - - if (shared_rss == shared_rss_last) + ind_tbl_last = ind_tbl; + LIST_FOREACH(ind_tbl, &priv->standalone_ind_tbls, next) { + if (ind_tbl == ind_tbl_last) break; if (mlx5_ind_table_obj_detach(dev, ind_tbl) != 0) DRV_LOG(CRIT, "Port %u could not detach " @@ -9365,15 +9358,10 @@ int mlx5_action_handle_detach(struct rte_eth_dev *dev) { struct mlx5_priv *priv = dev->data->dev_private; - struct mlx5_indexed_pool *ipool = - priv->sh->ipool[MLX5_IPOOL_RSS_SHARED_ACTIONS]; - struct mlx5_shared_action_rss *shared_rss, *shared_rss_last; int ret = 0; - uint32_t idx; - - ILIST_FOREACH(ipool, priv->rss_shared_actions, idx, shared_rss, next) { - struct mlx5_ind_table_obj *ind_tbl = shared_rss->ind_tbl; + struct mlx5_ind_table_obj *ind_tbl, *ind_tbl_last; + LIST_FOREACH(ind_tbl, &priv->standalone_ind_tbls, next) { ret = mlx5_ind_table_obj_detach(dev, ind_tbl); if (ret != 0) { DRV_LOG(ERR, "Port %u could not detach " @@ -9384,11 +9372,9 @@ mlx5_action_handle_detach(struct rte_eth_dev *dev) } return 0; error: - shared_rss_last = shared_rss; - ILIST_FOREACH(ipool, priv->rss_shared_actions, idx, shared_rss, next) { - struct mlx5_ind_table_obj *ind_tbl = shared_rss->ind_tbl; - - if (shared_rss == shared_rss_last) + ind_tbl_last = ind_tbl; + LIST_FOREACH(ind_tbl, &priv->standalone_ind_tbls, next) { + if (ind_tbl == ind_tbl_last) break; if (mlx5_ind_table_obj_attach(dev, ind_tbl) != 0) DRV_LOG(CRIT, "Port %u could not attach " diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h index a1ab9173d9..33094c8c07 100644 --- a/drivers/net/mlx5/mlx5_flow.h +++ b/drivers/net/mlx5/mlx5_flow.h @@ -1022,6 +1022,7 @@ struct rte_flow_hw { union { /* Jump action. */ struct mlx5_hw_jump_action *jump; + struct mlx5_hrxq *hrxq; /* TIR action. */ }; struct rte_flow_template_table *table; /* The table flow allcated from. */ struct mlx5dr_rule rule; /* HWS layer data struct. */ @@ -1077,6 +1078,7 @@ struct mlx5_hw_actions { /* Dynamic action list. */ LIST_HEAD(act_list, mlx5_action_construct_data) act_list; struct mlx5_hw_jump_action *jump; /* Jump action. */ + struct mlx5_hrxq *tir; /* TIR action. */ uint32_t acts_num:4; /* Total action number. */ /* Translated DR action array from action template. 
*/ struct mlx5dr_rule_action rule_acts[MLX5_HW_MAX_ACTS]; @@ -1907,6 +1909,11 @@ int flow_dv_query_count_ptr(struct rte_eth_dev *dev, uint32_t cnt_idx, int flow_dv_query_count(struct rte_eth_dev *dev, uint32_t cnt_idx, void *data, struct rte_flow_error *error); +void flow_dv_hashfields_set(uint64_t item_flags, + struct mlx5_flow_rss_desc *rss_desc, + uint64_t *hash_fields); +void flow_dv_action_rss_l34_hash_adjust(uint64_t rss_types, + uint64_t *hash_field); struct mlx5_list_entry *flow_hw_grp_create_cb(void *tool_ctx, void *cb_ctx); void flow_hw_grp_remove_cb(void *tool_ctx, struct mlx5_list_entry *entry); diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c index ef9c66eddf..c3d9d30dba 100644 --- a/drivers/net/mlx5/mlx5_flow_dv.c +++ b/drivers/net/mlx5/mlx5_flow_dv.c @@ -10966,78 +10966,83 @@ flow_dv_translate_item_tx_queue(struct rte_eth_dev *dev, /** * Set the hash fields according to the @p flow information. * - * @param[in] dev_flow - * Pointer to the mlx5_flow. + * @param[in] item_flags + * The match pattern item flags. * @param[in] rss_desc * Pointer to the mlx5_flow_rss_desc. + * @param[out] hash_fields + * Pointer to the RSS hash fields. */ -static void -flow_dv_hashfields_set(struct mlx5_flow *dev_flow, - struct mlx5_flow_rss_desc *rss_desc) +void +flow_dv_hashfields_set(uint64_t item_flags, + struct mlx5_flow_rss_desc *rss_desc, + uint64_t *hash_fields) { - uint64_t items = dev_flow->handle->layers; + uint64_t items = item_flags; + uint64_t fields = 0; int rss_inner = 0; uint64_t rss_types = rte_eth_rss_hf_refine(rss_desc->types); - dev_flow->hash_fields = 0; + *hash_fields = 0; #ifdef HAVE_IBV_DEVICE_TUNNEL_SUPPORT if (rss_desc->level >= 2) rss_inner = 1; #endif if ((rss_inner && (items & MLX5_FLOW_LAYER_INNER_L3_IPV4)) || - (!rss_inner && (items & MLX5_FLOW_LAYER_OUTER_L3_IPV4))) { + (!rss_inner && (items & MLX5_FLOW_LAYER_OUTER_L3_IPV4)) || + !items) { if (rss_types & MLX5_IPV4_LAYER_TYPES) { if (rss_types & RTE_ETH_RSS_L3_SRC_ONLY) - dev_flow->hash_fields |= IBV_RX_HASH_SRC_IPV4; + fields |= IBV_RX_HASH_SRC_IPV4; else if (rss_types & RTE_ETH_RSS_L3_DST_ONLY) - dev_flow->hash_fields |= IBV_RX_HASH_DST_IPV4; + fields |= IBV_RX_HASH_DST_IPV4; else - dev_flow->hash_fields |= MLX5_IPV4_IBV_RX_HASH; + fields |= MLX5_IPV4_IBV_RX_HASH; } } else if ((rss_inner && (items & MLX5_FLOW_LAYER_INNER_L3_IPV6)) || - (!rss_inner && (items & MLX5_FLOW_LAYER_OUTER_L3_IPV6))) { + (!rss_inner && (items & MLX5_FLOW_LAYER_OUTER_L3_IPV6)) || + !items) { if (rss_types & MLX5_IPV6_LAYER_TYPES) { if (rss_types & RTE_ETH_RSS_L3_SRC_ONLY) - dev_flow->hash_fields |= IBV_RX_HASH_SRC_IPV6; + fields |= IBV_RX_HASH_SRC_IPV6; else if (rss_types & RTE_ETH_RSS_L3_DST_ONLY) - dev_flow->hash_fields |= IBV_RX_HASH_DST_IPV6; + fields |= IBV_RX_HASH_DST_IPV6; else - dev_flow->hash_fields |= MLX5_IPV6_IBV_RX_HASH; + fields |= MLX5_IPV6_IBV_RX_HASH; } } - if (dev_flow->hash_fields == 0) + if (fields == 0) /* * There is no match between the RSS types and the * L3 protocol (IPv4/IPv6) defined in the flow rule. 
*/ return; if ((rss_inner && (items & MLX5_FLOW_LAYER_INNER_L4_UDP)) || - (!rss_inner && (items & MLX5_FLOW_LAYER_OUTER_L4_UDP))) { + (!rss_inner && (items & MLX5_FLOW_LAYER_OUTER_L4_UDP)) || + !items) { if (rss_types & RTE_ETH_RSS_UDP) { if (rss_types & RTE_ETH_RSS_L4_SRC_ONLY) - dev_flow->hash_fields |= - IBV_RX_HASH_SRC_PORT_UDP; + fields |= IBV_RX_HASH_SRC_PORT_UDP; else if (rss_types & RTE_ETH_RSS_L4_DST_ONLY) - dev_flow->hash_fields |= - IBV_RX_HASH_DST_PORT_UDP; + fields |= IBV_RX_HASH_DST_PORT_UDP; else - dev_flow->hash_fields |= MLX5_UDP_IBV_RX_HASH; + fields |= MLX5_UDP_IBV_RX_HASH; } } else if ((rss_inner && (items & MLX5_FLOW_LAYER_INNER_L4_TCP)) || - (!rss_inner && (items & MLX5_FLOW_LAYER_OUTER_L4_TCP))) { + (!rss_inner && (items & MLX5_FLOW_LAYER_OUTER_L4_TCP)) || + !items) { if (rss_types & RTE_ETH_RSS_TCP) { if (rss_types & RTE_ETH_RSS_L4_SRC_ONLY) - dev_flow->hash_fields |= - IBV_RX_HASH_SRC_PORT_TCP; + fields |= IBV_RX_HASH_SRC_PORT_TCP; else if (rss_types & RTE_ETH_RSS_L4_DST_ONLY) - dev_flow->hash_fields |= - IBV_RX_HASH_DST_PORT_TCP; + fields |= IBV_RX_HASH_DST_PORT_TCP; else - dev_flow->hash_fields |= MLX5_TCP_IBV_RX_HASH; + fields |= MLX5_TCP_IBV_RX_HASH; } } if (rss_inner) - dev_flow->hash_fields |= IBV_RX_HASH_INNER; + fields |= IBV_RX_HASH_INNER; + *hash_fields = fields; } /** @@ -11061,7 +11066,6 @@ flow_dv_hrxq_prepare(struct rte_eth_dev *dev, struct mlx5_flow_rss_desc *rss_desc, uint32_t *hrxq_idx) { - struct mlx5_priv *priv = dev->data->dev_private; struct mlx5_flow_handle *dh = dev_flow->handle; struct mlx5_hrxq *hrxq; @@ -11072,11 +11076,8 @@ flow_dv_hrxq_prepare(struct rte_eth_dev *dev, rss_desc->shared_rss = 0; if (rss_desc->hash_fields == 0) rss_desc->queue_num = 1; - *hrxq_idx = mlx5_hrxq_get(dev, rss_desc); - if (!*hrxq_idx) - return NULL; - hrxq = mlx5_ipool_get(priv->sh->ipool[MLX5_IPOOL_HRXQ], - *hrxq_idx); + hrxq = mlx5_hrxq_get(dev, rss_desc); + *hrxq_idx = hrxq ? hrxq->idx : 0; return hrxq; } @@ -11622,7 +11623,9 @@ flow_dv_translate_action_sample(struct rte_eth_dev *dev, * rss->level and rss.types should be set in advance * when expanding items for RSS. */ - flow_dv_hashfields_set(dev_flow, rss_desc); + flow_dv_hashfields_set(dev_flow->handle->layers, + rss_desc, + &dev_flow->hash_fields); hrxq = flow_dv_hrxq_prepare(dev, dev_flow, rss_desc, &hrxq_idx); if (!hrxq) @@ -13647,7 +13650,9 @@ flow_dv_translate(struct rte_eth_dev *dev, */ handle->layers |= item_flags; if (action_flags & MLX5_FLOW_ACTION_RSS) - flow_dv_hashfields_set(dev_flow, rss_desc); + flow_dv_hashfields_set(dev_flow->handle->layers, + rss_desc, + &dev_flow->hash_fields); /* If has RSS action in the sample action, the Sample/Mirror resource * should be registered after the hash filed be update. */ @@ -14596,20 +14601,18 @@ __flow_dv_action_rss_hrxqs_release(struct rte_eth_dev *dev, * MLX5_RSS_HASH_IPV4_DST_ONLY are mutually exclusive so they can share * same slot in mlx5_rss_hash_fields. * - * @param[in] rss - * Pointer to the shared action RSS conf. + * @param[in] rss_types + * RSS type. * @param[in, out] hash_field * hash_field variable needed to be adjusted. 
* * @return * void */ -static void -__flow_dv_action_rss_l34_hash_adjust(struct mlx5_shared_action_rss *rss, - uint64_t *hash_field) +void +flow_dv_action_rss_l34_hash_adjust(uint64_t rss_types, + uint64_t *hash_field) { - uint64_t rss_types = rss->origin.types; - switch (*hash_field & ~IBV_RX_HASH_INNER) { case MLX5_RSS_HASH_IPV4: if (rss_types & MLX5_IPV4_LAYER_TYPES) { @@ -14692,12 +14695,15 @@ __flow_dv_action_rss_setup(struct rte_eth_dev *dev, size_t i; int err; - if (mlx5_ind_table_obj_setup(dev, shared_rss->ind_tbl, - !!dev->data->dev_started)) { + shared_rss->ind_tbl = mlx5_ind_table_obj_new + (dev, shared_rss->origin.queue, + shared_rss->origin.queue_num, + true, + !!dev->data->dev_started); + if (!shared_rss->ind_tbl) return rte_flow_error_set(error, rte_errno, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, "cannot setup indirection table"); - } memcpy(rss_desc.key, shared_rss->origin.key, MLX5_RSS_HASH_KEY_LEN); rss_desc.key_len = MLX5_RSS_HASH_KEY_LEN; rss_desc.const_q = shared_rss->origin.queue; @@ -14706,19 +14712,20 @@ __flow_dv_action_rss_setup(struct rte_eth_dev *dev, rss_desc.shared_rss = action_idx; rss_desc.ind_tbl = shared_rss->ind_tbl; for (i = 0; i < MLX5_RSS_HASH_FIELDS_LEN; i++) { - uint32_t hrxq_idx; + struct mlx5_hrxq *hrxq; uint64_t hash_fields = mlx5_rss_hash_fields[i]; int tunnel = 0; - __flow_dv_action_rss_l34_hash_adjust(shared_rss, &hash_fields); + flow_dv_action_rss_l34_hash_adjust(shared_rss->origin.types, + &hash_fields); if (shared_rss->origin.level > 1) { hash_fields |= IBV_RX_HASH_INNER; tunnel = 1; } rss_desc.tunnel = tunnel; rss_desc.hash_fields = hash_fields; - hrxq_idx = mlx5_hrxq_get(dev, &rss_desc); - if (!hrxq_idx) { + hrxq = mlx5_hrxq_get(dev, &rss_desc); + if (!hrxq) { rte_flow_error_set (error, rte_errno, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, @@ -14726,14 +14733,14 @@ __flow_dv_action_rss_setup(struct rte_eth_dev *dev, goto error_hrxq_new; } err = __flow_dv_action_rss_hrxq_set - (shared_rss, hash_fields, hrxq_idx); + (shared_rss, hash_fields, hrxq->idx); MLX5_ASSERT(!err); } return 0; error_hrxq_new: err = rte_errno; __flow_dv_action_rss_hrxqs_release(dev, shared_rss); - if (!mlx5_ind_table_obj_release(dev, shared_rss->ind_tbl, true, true)) + if (!mlx5_ind_table_obj_release(dev, shared_rss->ind_tbl, true)) shared_rss->ind_tbl = NULL; rte_errno = err; return -rte_errno; @@ -14764,18 +14771,14 @@ __flow_dv_action_rss_create(struct rte_eth_dev *dev, { struct mlx5_priv *priv = dev->data->dev_private; struct mlx5_shared_action_rss *shared_rss = NULL; - void *queue = NULL; struct rte_flow_action_rss *origin; const uint8_t *rss_key; - uint32_t queue_size = rss->queue_num * sizeof(uint16_t); uint32_t idx; RTE_SET_USED(conf); - queue = mlx5_malloc(0, RTE_ALIGN_CEIL(queue_size, sizeof(void *)), - 0, SOCKET_ID_ANY); shared_rss = mlx5_ipool_zmalloc (priv->sh->ipool[MLX5_IPOOL_RSS_SHARED_ACTIONS], &idx); - if (!shared_rss || !queue) { + if (!shared_rss) { rte_flow_error_set(error, ENOMEM, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, "cannot allocate resource memory"); @@ -14787,18 +14790,6 @@ __flow_dv_action_rss_create(struct rte_eth_dev *dev, "rss action number out of range"); goto error_rss_init; } - shared_rss->ind_tbl = mlx5_malloc(MLX5_MEM_ZERO, - sizeof(*shared_rss->ind_tbl), - 0, SOCKET_ID_ANY); - if (!shared_rss->ind_tbl) { - rte_flow_error_set(error, ENOMEM, - RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, - "cannot allocate resource memory"); - goto error_rss_init; - } - memcpy(queue, rss->queue, queue_size); - shared_rss->ind_tbl->queues = queue; - 
shared_rss->ind_tbl->queues_n = rss->queue_num; origin = &shared_rss->origin; origin->func = rss->func; origin->level = rss->level; @@ -14809,10 +14800,12 @@ __flow_dv_action_rss_create(struct rte_eth_dev *dev, memcpy(shared_rss->key, rss_key, MLX5_RSS_HASH_KEY_LEN); origin->key = &shared_rss->key[0]; origin->key_len = MLX5_RSS_HASH_KEY_LEN; - origin->queue = queue; + origin->queue = rss->queue; origin->queue_num = rss->queue_num; if (__flow_dv_action_rss_setup(dev, idx, shared_rss, error)) goto error_rss_init; + /* Update queue with indirect table queue memory. */ + origin->queue = shared_rss->ind_tbl->queues; rte_spinlock_init(&shared_rss->action_rss_sl); __atomic_add_fetch(&shared_rss->refcnt, 1, __ATOMIC_RELAXED); rte_spinlock_lock(&priv->shared_act_sl); @@ -14823,12 +14816,11 @@ __flow_dv_action_rss_create(struct rte_eth_dev *dev, error_rss_init: if (shared_rss) { if (shared_rss->ind_tbl) - mlx5_free(shared_rss->ind_tbl); + mlx5_ind_table_obj_release(dev, shared_rss->ind_tbl, + !!dev->data->dev_started); mlx5_ipool_free(priv->sh->ipool[MLX5_IPOOL_RSS_SHARED_ACTIONS], idx); } - if (queue) - mlx5_free(queue); return 0; } @@ -14856,7 +14848,6 @@ __flow_dv_action_rss_release(struct rte_eth_dev *dev, uint32_t idx, mlx5_ipool_get(priv->sh->ipool[MLX5_IPOOL_RSS_SHARED_ACTIONS], idx); uint32_t old_refcnt = 1; int remaining; - uint16_t *queue = NULL; if (!shared_rss) return rte_flow_error_set(error, EINVAL, @@ -14875,8 +14866,7 @@ __flow_dv_action_rss_release(struct rte_eth_dev *dev, uint32_t idx, RTE_FLOW_ERROR_TYPE_ACTION, NULL, "shared rss hrxq has references"); - queue = shared_rss->ind_tbl->queues; - remaining = mlx5_ind_table_obj_release(dev, shared_rss->ind_tbl, true, + remaining = mlx5_ind_table_obj_release(dev, shared_rss->ind_tbl, !!dev->data->dev_started); if (remaining) return rte_flow_error_set(error, EBUSY, @@ -14884,7 +14874,6 @@ __flow_dv_action_rss_release(struct rte_eth_dev *dev, uint32_t idx, NULL, "shared rss indirection table has" " references"); - mlx5_free(queue); rte_spinlock_lock(&priv->shared_act_sl); ILIST_REMOVE(priv->sh->ipool[MLX5_IPOOL_RSS_SHARED_ACTIONS], &priv->rss_shared_actions, idx, shared_rss, next); @@ -16878,11 +16867,12 @@ __flow_dv_meter_get_rss_sub_policy(struct rte_eth_dev *dev, for (i = 0; i < MLX5_MTR_RTE_COLORS; i++) { if (!rss_desc[i]) continue; - hrxq_idx[i] = mlx5_hrxq_get(dev, rss_desc[i]); - if (!hrxq_idx[i]) { + hrxq = mlx5_hrxq_get(dev, rss_desc[i]); + if (!hrxq) { rte_spinlock_unlock(&mtr_policy->sl); return NULL; } + hrxq_idx[i] = hrxq->idx; } sub_policy_num = (mtr_policy->sub_policy_num >> (MLX5_MTR_SUB_POLICY_NUM_SHIFT * domain)) & diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c index a825766245..e59d812072 100644 --- a/drivers/net/mlx5/mlx5_flow_hw.c +++ b/drivers/net/mlx5/mlx5_flow_hw.c @@ -7,6 +7,7 @@ #include #include "mlx5_defs.h" #include "mlx5_flow.h" +#include "mlx5_rx.h" #ifdef HAVE_IBV_FLOW_DV_SUPPORT @@ -89,6 +90,56 @@ flow_hw_jump_release(struct rte_eth_dev *dev, struct mlx5_hw_jump_action *jump) mlx5_hlist_unregister(priv->sh->flow_tbls, &grp->entry); } +/** + * Register queue/RSS action. + * + * @param[in] dev + * Pointer to the rte_eth_dev structure. + * @param[in] hws_flags + * DR action flags. + * @param[in] action + * rte flow action. + * + * @return + * Hash Rx queue action on success, NULL otherwise and rte_errno is set.
+ */ +static inline struct mlx5_hrxq* +flow_hw_tir_action_register(struct rte_eth_dev *dev, + uint32_t hws_flags, + const struct rte_flow_action *action) +{ + struct mlx5_flow_rss_desc rss_desc = { + .hws_flags = hws_flags, + }; + struct mlx5_hrxq *hrxq; + + if (action->type == RTE_FLOW_ACTION_TYPE_QUEUE) { + const struct rte_flow_action_queue *queue = action->conf; + + rss_desc.const_q = &queue->index; + rss_desc.queue_num = 1; + } else { + const struct rte_flow_action_rss *rss = action->conf; + + rss_desc.queue_num = rss->queue_num; + rss_desc.const_q = rss->queue; + memcpy(rss_desc.key, + !rss->key ? rss_hash_default_key : rss->key, + MLX5_RSS_HASH_KEY_LEN); + rss_desc.key_len = MLX5_RSS_HASH_KEY_LEN; + rss_desc.types = !rss->types ? RTE_ETH_RSS_IP : rss->types; + flow_dv_hashfields_set(0, &rss_desc, &rss_desc.hash_fields); + flow_dv_action_rss_l34_hash_adjust(rss->types, + &rss_desc.hash_fields); + if (rss->level > 1) { + rss_desc.hash_fields |= IBV_RX_HASH_INNER; + rss_desc.tunnel = 1; + } + } + hrxq = mlx5_hrxq_get(dev, &rss_desc); + return hrxq; +} + /** * Destroy DR actions created by action template. * @@ -260,6 +311,40 @@ flow_hw_actions_translate(struct rte_eth_dev *dev, } i++; break; + case RTE_FLOW_ACTION_TYPE_QUEUE: + if (masks->conf) { + acts->tir = flow_hw_tir_action_register + (dev, + mlx5_hw_act_flag[!!attr->group][type], + actions); + if (!acts->tir) + goto err; + acts->rule_acts[i].action = + acts->tir->action; + } else if (__flow_hw_act_data_general_append + (priv, acts, actions->type, + actions - action_start, i)) { + goto err; + } + i++; + break; + case RTE_FLOW_ACTION_TYPE_RSS: + if (masks->conf) { + acts->tir = flow_hw_tir_action_register + (dev, + mlx5_hw_act_flag[!!attr->group][type], + actions); + if (!acts->tir) + goto err; + acts->rule_acts[i].action = + acts->tir->action; + } else if (__flow_hw_act_data_general_append + (priv, acts, actions->type, + actions - action_start, i)) { + goto err; + } + i++; + break; case RTE_FLOW_ACTION_TYPE_END: actions_end = true; break; @@ -313,6 +398,7 @@ flow_hw_actions_construct(struct rte_eth_dev *dev, struct rte_flow_attr attr = { .ingress = 1, }; + uint32_t ft_flag; memcpy(rule_acts, hw_acts->rule_acts, sizeof(*rule_acts) * hw_acts->acts_num); @@ -320,6 +406,7 @@ flow_hw_actions_construct(struct rte_eth_dev *dev, if (LIST_EMPTY(&hw_acts->act_list)) return 0; attr.group = table->grp->group_id; + ft_flag = mlx5_hw_act_flag[!!table->grp->group_id][table->type]; if (table->type == MLX5DR_TABLE_TYPE_FDB) { attr.transfer = 1; attr.ingress = 1; @@ -332,6 +419,7 @@ flow_hw_actions_construct(struct rte_eth_dev *dev, LIST_FOREACH(act_data, &hw_acts->act_list, next) { uint32_t jump_group; struct mlx5_hw_jump_action *jump; + struct mlx5_hrxq *hrxq; action = &actions[act_data->action_src]; MLX5_ASSERT(action->type == RTE_FLOW_ACTION_TYPE_INDIRECT || @@ -353,6 +441,17 @@ flow_hw_actions_construct(struct rte_eth_dev *dev, job->flow->jump = jump; job->flow->fate_type = MLX5_FLOW_FATE_JUMP; break; + case RTE_FLOW_ACTION_TYPE_RSS: + case RTE_FLOW_ACTION_TYPE_QUEUE: + hrxq = flow_hw_tir_action_register(dev, + ft_flag, + action); + if (!hrxq) + return -1; + rule_acts[act_data->action_dst].action = hrxq->action; + job->flow->hrxq = hrxq; + job->flow->fate_type = MLX5_FLOW_FATE_QUEUE; + break; default: break; } @@ -553,6 +652,8 @@ flow_hw_q_pull(struct rte_eth_dev *dev, if (job->type == MLX5_HW_Q_JOB_TYPE_DESTROY) { if (job->flow->fate_type == MLX5_FLOW_FATE_JUMP) flow_hw_jump_release(dev, job->flow->jump); + else if (job->flow->fate_type == 
MLX5_FLOW_FATE_QUEUE) + mlx5_hrxq_obj_release(dev, job->flow->hrxq); mlx5_ipool_free(job->flow->table->flow, job->flow->idx); } priv->hw_q[queue].job[priv->hw_q[queue].job_idx++] = job; diff --git a/drivers/net/mlx5/mlx5_flow_verbs.c b/drivers/net/mlx5/mlx5_flow_verbs.c index 90ccb9aaff..f08aa7a770 100644 --- a/drivers/net/mlx5/mlx5_flow_verbs.c +++ b/drivers/net/mlx5/mlx5_flow_verbs.c @@ -1943,7 +1943,6 @@ flow_verbs_apply(struct rte_eth_dev *dev, struct rte_flow *flow, MLX5_ASSERT(priv->drop_queue.hrxq); hrxq = priv->drop_queue.hrxq; } else { - uint32_t hrxq_idx; struct mlx5_flow_rss_desc *rss_desc = &wks->rss_desc; MLX5_ASSERT(rss_desc->queue_num); @@ -1952,9 +1951,7 @@ flow_verbs_apply(struct rte_eth_dev *dev, struct rte_flow *flow, rss_desc->tunnel = !!(handle->layers & MLX5_FLOW_LAYER_TUNNEL); rss_desc->shared_rss = 0; - hrxq_idx = mlx5_hrxq_get(dev, rss_desc); - hrxq = mlx5_ipool_get(priv->sh->ipool[MLX5_IPOOL_HRXQ], - hrxq_idx); + hrxq = mlx5_hrxq_get(dev, rss_desc); if (!hrxq) { rte_flow_error_set (error, rte_errno, @@ -1962,7 +1959,7 @@ flow_verbs_apply(struct rte_eth_dev *dev, struct rte_flow *flow, "cannot get hash queue"); goto error; } - handle->rix_hrxq = hrxq_idx; + handle->rix_hrxq = hrxq->idx; } MLX5_ASSERT(hrxq); handle->drv_flow = mlx5_glue->create_flow diff --git a/drivers/net/mlx5/mlx5_rx.h b/drivers/net/mlx5/mlx5_rx.h index cb5d51340d..468772ee27 100644 --- a/drivers/net/mlx5/mlx5_rx.h +++ b/drivers/net/mlx5/mlx5_rx.h @@ -225,9 +225,13 @@ int mlx5_ind_table_obj_verify(struct rte_eth_dev *dev); struct mlx5_ind_table_obj *mlx5_ind_table_obj_get(struct rte_eth_dev *dev, const uint16_t *queues, uint32_t queues_n); +struct mlx5_ind_table_obj *mlx5_ind_table_obj_new(struct rte_eth_dev *dev, + const uint16_t *queues, + uint32_t queues_n, + bool standalone, + bool ref_qs); int mlx5_ind_table_obj_release(struct rte_eth_dev *dev, struct mlx5_ind_table_obj *ind_tbl, - bool standalone, bool deref_rxqs); int mlx5_ind_table_obj_setup(struct rte_eth_dev *dev, struct mlx5_ind_table_obj *ind_tbl, @@ -250,8 +254,9 @@ struct mlx5_list_entry *mlx5_hrxq_clone_cb(void *tool_ctx, void *cb_ctx __rte_unused); void mlx5_hrxq_clone_free_cb(void *tool_ctx __rte_unused, struct mlx5_list_entry *entry); -uint32_t mlx5_hrxq_get(struct rte_eth_dev *dev, +struct mlx5_hrxq *mlx5_hrxq_get(struct rte_eth_dev *dev, struct mlx5_flow_rss_desc *rss_desc); +int mlx5_hrxq_obj_release(struct rte_eth_dev *dev, struct mlx5_hrxq *hrxq); int mlx5_hrxq_release(struct rte_eth_dev *dev, uint32_t hxrq_idx); uint32_t mlx5_hrxq_verify(struct rte_eth_dev *dev); enum mlx5_rxq_type mlx5_rxq_get_type(struct rte_eth_dev *dev, uint16_t idx); diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c index 580d7ae868..a892675646 100644 --- a/drivers/net/mlx5/mlx5_rxq.c +++ b/drivers/net/mlx5/mlx5_rxq.c @@ -2284,8 +2284,6 @@ mlx5_ind_table_obj_get(struct rte_eth_dev *dev, const uint16_t *queues, * Pointer to Ethernet device. * @param ind_table * Indirection table to release. - * @param standalone - * Indirection table for Standalone queue. * @param deref_rxqs * If true, then dereference RX queues related to indirection table. * Otherwise, no additional action will be taken. 
@@ -2296,7 +2294,6 @@ mlx5_ind_table_obj_get(struct rte_eth_dev *dev, const uint16_t *queues, int mlx5_ind_table_obj_release(struct rte_eth_dev *dev, struct mlx5_ind_table_obj *ind_tbl, - bool standalone, bool deref_rxqs) { struct mlx5_priv *priv = dev->data->dev_private; @@ -2304,7 +2301,7 @@ mlx5_ind_table_obj_release(struct rte_eth_dev *dev, rte_rwlock_write_lock(&priv->ind_tbls_lock); ret = __atomic_sub_fetch(&ind_tbl->refcnt, 1, __ATOMIC_RELAXED); - if (!ret && !standalone) + if (!ret) LIST_REMOVE(ind_tbl, next); rte_rwlock_write_unlock(&priv->ind_tbls_lock); if (ret) @@ -2413,7 +2410,7 @@ mlx5_ind_table_obj_setup(struct rte_eth_dev *dev, * @return * The Verbs/DevX object initialized, NULL otherwise and rte_errno is set. */ -static struct mlx5_ind_table_obj * +struct mlx5_ind_table_obj * mlx5_ind_table_obj_new(struct rte_eth_dev *dev, const uint16_t *queues, uint32_t queues_n, bool standalone, bool ref_qs) { @@ -2435,11 +2432,13 @@ mlx5_ind_table_obj_new(struct rte_eth_dev *dev, const uint16_t *queues, mlx5_free(ind_tbl); return NULL; } - if (!standalone) { - rte_rwlock_write_lock(&priv->ind_tbls_lock); + rte_rwlock_write_lock(&priv->ind_tbls_lock); + if (!standalone) LIST_INSERT_HEAD(&priv->ind_tbls, ind_tbl, next); - rte_rwlock_write_unlock(&priv->ind_tbls_lock); - } + else + LIST_INSERT_HEAD(&priv->standalone_ind_tbls, ind_tbl, next); + rte_rwlock_write_unlock(&priv->ind_tbls_lock); + return ind_tbl; } @@ -2605,6 +2604,7 @@ mlx5_hrxq_match_cb(void *tool_ctx __rte_unused, struct mlx5_list_entry *entry, return (hrxq->rss_key_len != rss_desc->key_len || memcmp(hrxq->rss_key, rss_desc->key, rss_desc->key_len) || + hrxq->hws_flags != rss_desc->hws_flags || hrxq->hash_fields != rss_desc->hash_fields || hrxq->ind_table->queues_n != rss_desc->queue_num || memcmp(hrxq->ind_table->queues, rss_desc->queue, @@ -2689,8 +2689,7 @@ mlx5_hrxq_modify(struct rte_eth_dev *dev, uint32_t hrxq_idx, } if (ind_tbl != hrxq->ind_table) { MLX5_ASSERT(!hrxq->standalone); - mlx5_ind_table_obj_release(dev, hrxq->ind_table, - hrxq->standalone, true); + mlx5_ind_table_obj_release(dev, hrxq->ind_table, true); hrxq->ind_table = ind_tbl; } hrxq->hash_fields = hash_fields; @@ -2700,8 +2699,7 @@ mlx5_hrxq_modify(struct rte_eth_dev *dev, uint32_t hrxq_idx, err = rte_errno; if (ind_tbl != hrxq->ind_table) { MLX5_ASSERT(!hrxq->standalone); - mlx5_ind_table_obj_release(dev, ind_tbl, hrxq->standalone, - true); + mlx5_ind_table_obj_release(dev, ind_tbl, true); } rte_errno = err; return -rte_errno; @@ -2713,12 +2711,16 @@ __mlx5_hrxq_remove(struct rte_eth_dev *dev, struct mlx5_hrxq *hrxq) struct mlx5_priv *priv = dev->data->dev_private; #ifdef HAVE_IBV_FLOW_DV_SUPPORT - mlx5_glue->destroy_flow_action(hrxq->action); + if (hrxq->hws_flags) + mlx5dr_action_destroy(hrxq->action); + else + mlx5_glue->destroy_flow_action(hrxq->action); #endif priv->obj_ops.hrxq_destroy(hrxq); if (!hrxq->standalone) { mlx5_ind_table_obj_release(dev, hrxq->ind_table, - hrxq->standalone, true); + hrxq->hws_flags ? + (!!dev->data->dev_started) : true); } mlx5_ipool_free(priv->sh->ipool[MLX5_IPOOL_HRXQ], hrxq->idx); } @@ -2762,11 +2764,12 @@ __mlx5_hrxq_create(struct rte_eth_dev *dev, int ret; queues_n = rss_desc->hash_fields ? 
queues_n : 1; - if (!ind_tbl) + if (!ind_tbl && !rss_desc->hws_flags) ind_tbl = mlx5_ind_table_obj_get(dev, queues, queues_n); if (!ind_tbl) ind_tbl = mlx5_ind_table_obj_new(dev, queues, queues_n, - standalone, + standalone || + rss_desc->hws_flags, !!dev->data->dev_started); if (!ind_tbl) return NULL; @@ -2778,6 +2781,7 @@ __mlx5_hrxq_create(struct rte_eth_dev *dev, hrxq->ind_table = ind_tbl; hrxq->rss_key_len = rss_key_len; hrxq->hash_fields = rss_desc->hash_fields; + hrxq->hws_flags = rss_desc->hws_flags; memcpy(hrxq->rss_key, rss_key, rss_key_len); ret = priv->obj_ops.hrxq_new(dev, hrxq, rss_desc->tunnel); if (ret < 0) @@ -2785,7 +2789,7 @@ __mlx5_hrxq_create(struct rte_eth_dev *dev, return hrxq; error: if (!rss_desc->ind_tbl) - mlx5_ind_table_obj_release(dev, ind_tbl, standalone, true); + mlx5_ind_table_obj_release(dev, ind_tbl, true); if (hrxq) mlx5_ipool_free(priv->sh->ipool[MLX5_IPOOL_HRXQ], hrxq_idx); return NULL; @@ -2839,13 +2843,13 @@ mlx5_hrxq_clone_free_cb(void *tool_ctx, struct mlx5_list_entry *entry) * RSS configuration for the Rx hash queue. * * @return - * An hash Rx queue index on success. + * A hash Rx queue on success. */ -uint32_t mlx5_hrxq_get(struct rte_eth_dev *dev, +struct mlx5_hrxq *mlx5_hrxq_get(struct rte_eth_dev *dev, struct mlx5_flow_rss_desc *rss_desc) { struct mlx5_priv *priv = dev->data->dev_private; - struct mlx5_hrxq *hrxq; + struct mlx5_hrxq *hrxq = NULL; struct mlx5_list_entry *entry; struct mlx5_flow_cb_ctx ctx = { .data = rss_desc, @@ -2856,12 +2860,10 @@ uint32_t mlx5_hrxq_get(struct rte_eth_dev *dev, } else { entry = mlx5_list_register(priv->hrxqs, &ctx); if (!entry) - return 0; + return NULL; hrxq = container_of(entry, typeof(*hrxq), entry); } - if (hrxq) - return hrxq->idx; - return 0; + return hrxq; } /** @@ -2870,17 +2872,15 @@ uint32_t mlx5_hrxq_get(struct rte_eth_dev *dev, * @param dev * Pointer to Ethernet device. * @param hrxq_idx - * Index to Hash Rx queue to release. + * Hash Rx queue to release. * * @return * 1 while a reference on it exists, 0 when freed. */ -int mlx5_hrxq_release(struct rte_eth_dev *dev, uint32_t hrxq_idx) +int mlx5_hrxq_obj_release(struct rte_eth_dev *dev, struct mlx5_hrxq *hrxq) { struct mlx5_priv *priv = dev->data->dev_private; - struct mlx5_hrxq *hrxq; - hrxq = mlx5_ipool_get(priv->sh->ipool[MLX5_IPOOL_HRXQ], hrxq_idx); if (!hrxq) return 0; if (!hrxq->standalone) @@ -2889,6 +2889,26 @@ int mlx5_hrxq_release(struct rte_eth_dev *dev, uint32_t hrxq_idx) return 0; } +/** + * Release the hash Rx queue by index. + * + * @param dev + * Pointer to Ethernet device. + * @param hrxq_idx + * Index to Hash Rx queue to release. + * + * @return + * 1 while a reference on it exists, 0 when freed. + */ +int mlx5_hrxq_release(struct rte_eth_dev *dev, uint32_t hrxq_idx) +{ + struct mlx5_priv *priv = dev->data->dev_private; + struct mlx5_hrxq *hrxq; + + hrxq = mlx5_ipool_get(priv->sh->ipool[MLX5_IPOOL_HRXQ], hrxq_idx); + return mlx5_hrxq_obj_release(dev, hrxq); +} + /** * Create a drop Rx Hash queue.
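The central refactor of this patch is that mlx5_hrxq_get() now returns the object pointer instead of the pool index, so callers skip the extra locked mlx5_ipool_get() lookup; code that still needs the index reads it from the object. Below is a compilable sketch of the two calling conventions, using simplified stand-in types rather than the driver structures.

    #include <stddef.h>
    #include <stdint.h>

    struct hrxq {
            uint32_t idx;  /* the object still carries its pool index */
            void *action;
    };

    static struct hrxq g_hrxq = { .idx = 1, .action = &g_hrxq };

    /* Old convention: return an index; reaching the object needs a
     * pool lookup that takes a global lock (mlx5_ipool_get()). */
    static uint32_t hrxq_get_idx(void) { return g_hrxq.idx; }
    static struct hrxq *pool_lookup(uint32_t idx)
    {
            return idx == g_hrxq.idx ? &g_hrxq : NULL; /* imagine a lock */
    }

    /* New convention: return the object directly, no locked lookup. */
    static struct hrxq *hrxq_get_obj(void) { return &g_hrxq; }

    void *fate_action_old(void)
    {
            struct hrxq *h = pool_lookup(hrxq_get_idx());
            return h ? h->action : NULL;
    }

    void *fate_action_new(void)
    {
            struct hrxq *h = hrxq_get_obj();
            return h ? h->action : NULL; /* index still at h->idx */
    }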
From patchwork Thu Feb 10 16:29:24 2022
X-Patchwork-Submitter: Suanming Mou
X-Patchwork-Id: 107298
From: Suanming Mou
Subject: [PATCH 11/13] net/mlx5: add mark action
Date: Thu, 10 Feb 2022 18:29:24 +0200
Message-ID: <20220210162926.20436-12-suanmingm@nvidia.com>
In-Reply-To: <20220210162926.20436-1-suanmingm@nvidia.com>
The mark action is implemented by the tag action internally. When it is added, the HW will add a tag to the packet. The mark value can be set as fixed or dynamic, as the action mask indicates.

Signed-off-by: Suanming Mou --- drivers/net/mlx5/mlx5.h | 3 ++ drivers/net/mlx5/mlx5_flow.h | 1 + drivers/net/mlx5/mlx5_flow_hw.c | 87 ++++++++++++++++++++++++++++++--- 3 files changed, 85 insertions(+), 6 deletions(-) diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h index 6fb82bf1f3..c78dc3c431 100644 --- a/drivers/net/mlx5/mlx5.h +++ b/drivers/net/mlx5/mlx5.h @@ -1529,6 +1529,9 @@ struct mlx5_priv { /* HW steering global drop action. */ struct mlx5dr_action *hw_drop[MLX5_HW_ACTION_FLAG_MAX] [MLX5DR_TABLE_TYPE_MAX]; + /* HW steering global tag action. */ + struct mlx5dr_action *hw_tag[MLX5_HW_ACTION_FLAG_MAX] + [MLX5DR_TABLE_TYPE_MAX]; struct mlx5_indexed_pool *acts_ipool; /* Action data indexed pool. */ }; diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h index 33094c8c07..8e65486a1f 100644 --- a/drivers/net/mlx5/mlx5_flow.h +++ b/drivers/net/mlx5/mlx5_flow.h @@ -1080,6 +1080,7 @@ struct mlx5_hw_actions { struct mlx5_hw_jump_action *jump; /* Jump action. */ struct mlx5_hrxq *tir; /* TIR action. */ uint32_t acts_num:4; /* Total action number. */ + uint32_t mark:1; /* Indicate the mark action. */ /* Translated DR action array from action template. */ struct mlx5dr_rule_action rule_acts[MLX5_HW_MAX_ACTS]; }; diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c index e59d812072..a754cdd084 100644 --- a/drivers/net/mlx5/mlx5_flow_hw.c +++ b/drivers/net/mlx5/mlx5_flow_hw.c @@ -31,6 +31,50 @@ static uint32_t mlx5_hw_act_flag[MLX5_HW_ACTION_FLAG_MAX] }, }; +/** + * Clear the Rx queue mark flag. + * + * @param[in] dev + * Pointer to the rte_eth_dev structure. + */ +static void +flow_hw_rxq_flag_trim(struct rte_eth_dev *dev) +{ + struct mlx5_priv *priv = dev->data->dev_private; + unsigned int i; + + if (!priv->mark_enabled) + return; + for (i = 0; i < priv->rxqs_n; ++i) { + struct mlx5_rxq_ctrl *rxq_ctrl = mlx5_rxq_ctrl_get(dev, i); + + rxq_ctrl->rxq.mark = 0; + } + priv->mark_enabled = 0; +} + +/** + * Set the Rx queue mark flag. + * + * @param[in] dev + * Pointer to the rte_eth_dev structure. + */ +static void +flow_hw_rxq_flag_set(struct rte_eth_dev *dev) +{ + struct mlx5_priv *priv = dev->data->dev_private; + unsigned int i; + + if (priv->mark_enabled) + return; + for (i = 0; i < priv->rxqs_n; ++i) { + struct mlx5_rxq_ctrl *rxq_ctrl = mlx5_rxq_ctrl_get(dev, i); + + rxq_ctrl->rxq.mark = 1; + } + priv->mark_enabled = 1; +} + /** * Register destination table DR jump action.
* @@ -292,6 +336,20 @@ flow_hw_actions_translate(struct rte_eth_dev *dev, acts->rule_acts[i++].action = priv->hw_drop[!!attr->group][type]; break; + case RTE_FLOW_ACTION_TYPE_MARK: + acts->mark = true; + if (masks->conf) + acts->rule_acts[i].tag.value = + mlx5_flow_mark_set + (((const struct rte_flow_action_mark *) + (masks->conf))->id); + else if (__flow_hw_act_data_general_append(priv, acts, + actions->type, actions - action_start, i)) + goto err; + acts->rule_acts[i++].action = + priv->hw_tag[!!attr->group][type]; + flow_hw_rxq_flag_set(dev); + break; case RTE_FLOW_ACTION_TYPE_JUMP: if (masks->conf) { uint32_t jump_group = @@ -418,6 +476,7 @@ flow_hw_actions_construct(struct rte_eth_dev *dev, } LIST_FOREACH(act_data, &hw_acts->act_list, next) { uint32_t jump_group; + uint32_t tag; struct mlx5_hw_jump_action *jump; struct mlx5_hrxq *hrxq; @@ -429,6 +488,12 @@ flow_hw_actions_construct(struct rte_eth_dev *dev, break; case RTE_FLOW_ACTION_TYPE_VOID: break; + case RTE_FLOW_ACTION_TYPE_MARK: + tag = mlx5_flow_mark_set + (((const struct rte_flow_action_mark *) + (action->conf))->id); + rule_acts[act_data->action_dst].tag.value = tag; + break; case RTE_FLOW_ACTION_TYPE_JUMP: jump_group = ((const struct rte_flow_action_jump *) action->conf)->group; @@ -998,6 +1063,8 @@ flow_hw_table_destroy(struct rte_eth_dev *dev, __atomic_sub_fetch(&table->its[i]->refcnt, 1, __ATOMIC_RELAXED); for (i = 0; i < table->nb_action_templates; i++) { + if (table->ats[i].acts.mark) + flow_hw_rxq_flag_trim(dev); __flow_hw_action_template_destroy(dev, &table->ats[i].acts); __atomic_sub_fetch(&table->ats[i].action_template->refcnt, 1, __ATOMIC_RELAXED); @@ -1499,15 +1566,21 @@ flow_hw_configure(struct rte_eth_dev *dev, (priv->dr_ctx, mlx5_hw_act_flag[i][j]); if (!priv->hw_drop[i][j]) goto err; + priv->hw_tag[i][j] = mlx5dr_action_create_tag + (priv->dr_ctx, mlx5_hw_act_flag[i][j]); + if (!priv->hw_tag[i][j]) + goto err; } } return 0; err: for (i = 0; i < MLX5_HW_ACTION_FLAG_MAX; i++) { for (j = 0; j < MLX5DR_TABLE_TYPE_MAX; j++) { - if (!priv->hw_drop[i][j]) - continue; - mlx5dr_action_destroy(priv->hw_drop[i][j]); + if (priv->hw_drop[i][j]) + mlx5dr_action_destroy(priv->hw_drop[i][j]); + if (priv->hw_tag[i][j]) + mlx5dr_action_destroy(priv->hw_tag[i][j]); + } } if (dr_ctx) @@ -1556,9 +1629,11 @@ flow_hw_resource_release(struct rte_eth_dev *dev) } for (i = 0; i < MLX5_HW_ACTION_FLAG_MAX; i++) { for (j = 0; j < MLX5DR_TABLE_TYPE_MAX; j++) { - if (!priv->hw_drop[i][j]) - continue; - mlx5dr_action_destroy(priv->hw_drop[i][j]); + if (priv->hw_drop[i][j]) + mlx5dr_action_destroy(priv->hw_drop[i][j]); + if (priv->hw_tag[i][j]) + mlx5dr_action_destroy(priv->hw_tag[i][j]); + } } if (priv->acts_ipool) {
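The flow_hw_rxq_flag_set()/flow_hw_rxq_flag_trim() helpers introduced above toggle the mark flag on every Rx queue of the port, guarded by priv->mark_enabled so the loops run only on the first enable and on the matching disable. A self-contained sketch of that behavior, with simplified stand-in structures rather than the driver's:

    #include <stdbool.h>

    #define NB_RXQ 4

    struct rxq { bool mark; };

    struct port {
            struct rxq rxq[NB_RXQ];
            bool mark_enabled; /* stands in for priv->mark_enabled */
    };

    static void mark_flag_set(struct port *p)
    {
            int i;

            if (p->mark_enabled)
                    return; /* already on: skip the per-queue loop */
            for (i = 0; i < NB_RXQ; i++)
                    p->rxq[i].mark = true; /* Rx burst copies tag to mbuf */
            p->mark_enabled = true;
    }

    static void mark_flag_trim(struct port *p)
    {
            int i;

            if (!p->mark_enabled)
                    return;
            for (i = 0; i < NB_RXQ; i++)
                    p->rxq[i].mark = false;
            p->mark_enabled = false;
    }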
From patchwork Thu Feb 10 16:29:25 2022
X-Patchwork-Submitter: Suanming Mou
X-Patchwork-Id: 107296
From: Suanming Mou
Subject: [PATCH 12/13] net/mlx5: add indirect action
Date: Thu, 10 Feb 2022 18:29:25 +0200
Message-ID: <20220210162926.20436-13-suanmingm@nvidia.com>
In-Reply-To: <20220210162926.20436-1-suanmingm@nvidia.com>

HW steering can support indirect actions as well. With an indirect action, flows can be created with more flexible shared RSS action selection, which saves keeping separate action templates for different RSS actions.
This commit adds the flow queue operation callbacks for: rte_flow_q_action_handle_create(); rte_flow_q_action_handle_destroy(); rte_flow_q_action_handle_update(); Signed-off-by: Suanming Mou --- drivers/net/mlx5/mlx5_flow.c | 116 +++++++++ drivers/net/mlx5/mlx5_flow.h | 56 +++++ drivers/net/mlx5/mlx5_flow_dv.c | 21 +- drivers/net/mlx5/mlx5_flow_hw.c | 402 +++++++++++++++++++++++++++++++- 4 files changed, 582 insertions(+), 13 deletions(-) diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c index 9cad84ebc6..46950044e0 100644 --- a/drivers/net/mlx5/mlx5_flow.c +++ b/drivers/net/mlx5/mlx5_flow.c @@ -873,6 +873,26 @@ mlx5_flow_q_push(struct rte_eth_dev *dev, uint32_t queue, struct rte_flow_error *error); +static struct rte_flow_action_handle * +mlx5_flow_q_action_handle_create(struct rte_eth_dev *dev, uint32_t queue, + const struct rte_flow_q_ops_attr *attr, + const struct rte_flow_indir_action_conf *conf, + const struct rte_flow_action *action, + struct rte_flow_error *error); + +static int +mlx5_flow_q_action_handle_update(struct rte_eth_dev *dev, uint32_t queue, + const struct rte_flow_q_ops_attr *attr, + struct rte_flow_action_handle *handle, + const void *update, + struct rte_flow_error *error); + +static int +mlx5_flow_q_action_handle_destroy(struct rte_eth_dev *dev, uint32_t queue, + const struct rte_flow_q_ops_attr *attr, + struct rte_flow_action_handle *handle, + struct rte_flow_error *error); + static const struct rte_flow_ops mlx5_flow_ops = { .validate = mlx5_flow_validate, .create = mlx5_flow_create, @@ -904,6 +924,9 @@ static const struct rte_flow_ops mlx5_flow_ops = { .q_flow_destroy = mlx5_flow_q_flow_destroy, .q_pull = mlx5_flow_q_pull, .q_push = mlx5_flow_q_push, + .q_action_handle_create = mlx5_flow_q_action_handle_create, + .q_action_handle_update = mlx5_flow_q_action_handle_update, + .q_action_handle_destroy = mlx5_flow_q_action_handle_destroy, }; /* Tunnel information. */ @@ -8228,6 +8251,99 @@ mlx5_flow_q_push(struct rte_eth_dev *dev, return fops->q_push(dev, queue, error); } +/** + * Create shared action. + * + * @param[in] dev + * Pointer to the rte_eth_dev structure. + * @param[in] queue + * Which queue to be used. + * @param[in] attr + * Operation attribute. + * @param[in] conf + * Indirect action configuration. + * @param[in] action + * rte_flow action detail. + * @param[out] error + * Pointer to error structure. + * + * @return + * Action handle on success, NULL otherwise and rte_errno is set. + */ +static struct rte_flow_action_handle * +mlx5_flow_q_action_handle_create(struct rte_eth_dev *dev, uint32_t queue, + const struct rte_flow_q_ops_attr *attr, + const struct rte_flow_indir_action_conf *conf, + const struct rte_flow_action *action, + struct rte_flow_error *error) +{ + const struct mlx5_flow_driver_ops *fops = + flow_get_drv_ops(MLX5_FLOW_TYPE_HW); + + return fops->q_action_create(dev, queue, attr, conf, action, error); +} + +/** + * Update shared action. + * + * @param[in] dev + * Pointer to the rte_eth_dev structure. + * @param[in] queue + * Which queue to be used. + * @param[in] attr + * Operation attribute. + * @param[in] handle + * Action handle to be updated. + * @param[in] update + * Update value. + * @param[out] error + * Pointer to error structure. + * + * @return + * 0 on success, negative value otherwise and rte_errno is set.
+ */ +static int +mlx5_flow_q_action_handle_update(struct rte_eth_dev *dev, uint32_t queue, + const struct rte_flow_q_ops_attr *attr, + struct rte_flow_action_handle *handle, + const void *update, + struct rte_flow_error *error) +{ + const struct mlx5_flow_driver_ops *fops = + flow_get_drv_ops(MLX5_FLOW_TYPE_HW); + + return fops->q_action_update(dev, queue, attr, handle, update, error); +} + +/** + * Destroy shared action. + * + * @param[in] dev + * Pointer to the rte_eth_dev structure. + * @param[in] queue + * Which queue to be used.. + * @param[in] attr + * Operation attribute. + * @param[in] handle + * Action handle to be destroyed. + * @param[out] error + * Pointer to error structure. + * + * @return + * 0 on success, negative value otherwise and rte_errno is set. + */ +static int +mlx5_flow_q_action_handle_destroy(struct rte_eth_dev *dev, uint32_t queue, + const struct rte_flow_q_ops_attr *attr, + struct rte_flow_action_handle *handle, + struct rte_flow_error *error) +{ + const struct mlx5_flow_driver_ops *fops = + flow_get_drv_ops(MLX5_FLOW_TYPE_HW); + + return fops->q_action_destroy(dev, queue, attr, handle, error); +} + /** * Allocate a new memory for the counter values wrapped by all the needed * management. diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h index 8e65486a1f..097e5bf587 100644 --- a/drivers/net/mlx5/mlx5_flow.h +++ b/drivers/net/mlx5/mlx5_flow.h @@ -41,6 +41,7 @@ enum mlx5_rte_flow_action_type { MLX5_RTE_FLOW_ACTION_TYPE_AGE, MLX5_RTE_FLOW_ACTION_TYPE_COUNT, MLX5_RTE_FLOW_ACTION_TYPE_JUMP, + MLX5_RTE_FLOW_ACTION_TYPE_RSS, }; #define MLX5_INDIRECT_ACTION_TYPE_OFFSET 30 @@ -1036,6 +1037,13 @@ struct mlx5_action_construct_data { uint32_t idx; /* Data index. */ uint16_t action_src; /* rte_flow_action src offset. */ uint16_t action_dst; /* mlx5dr_rule_action dst offset. */ + union { + struct { + uint64_t types; /* RSS hash types. */ + uint32_t level; /* RSS level. */ + uint32_t idx; /* Shared action index. */ + } shared_rss; + }; }; /* Flow item template struct. */ @@ -1044,6 +1052,7 @@ struct rte_flow_pattern_template { /* Template attributes. */ struct rte_flow_pattern_template_attr attr; struct mlx5dr_match_template *mt; /* mlx5 match template. */ + uint64_t item_flags; /* Item layer flags. */ uint32_t refcnt; /* Reference counter. 
*/ }; @@ -1426,6 +1435,29 @@ typedef int (*mlx5_flow_q_push_t) uint32_t queue, struct rte_flow_error *error); +typedef struct rte_flow_action_handle *(*mlx5_flow_q_action_handle_create_t) + (struct rte_eth_dev *dev, + uint32_t queue, + const struct rte_flow_q_ops_attr *attr, + const struct rte_flow_indir_action_conf *conf, + const struct rte_flow_action *action, + struct rte_flow_error *error); + +typedef int (*mlx5_flow_q_action_handle_update_t) + (struct rte_eth_dev *dev, + uint32_t queue, + const struct rte_flow_q_ops_attr *attr, + struct rte_flow_action_handle *handle, + const void *update, + struct rte_flow_error *error); + +typedef int (*mlx5_flow_q_action_handle_destroy_t) + (struct rte_eth_dev *dev, + uint32_t queue, + const struct rte_flow_q_ops_attr *attr, + struct rte_flow_action_handle *handle, + struct rte_flow_error *error); + struct mlx5_flow_driver_ops { mlx5_flow_validate_t validate; mlx5_flow_prepare_t prepare; @@ -1474,6 +1506,9 @@ struct mlx5_flow_driver_ops { mlx5_flow_q_flow_destroy_t q_flow_destroy; mlx5_flow_q_pull_t q_pull; mlx5_flow_q_push_t q_push; + mlx5_flow_q_action_handle_create_t q_action_create; + mlx5_flow_q_action_handle_update_t q_action_update; + mlx5_flow_q_action_handle_destroy_t q_action_destroy; }; /* mlx5_flow.c */ @@ -1915,6 +1950,8 @@ void flow_dv_hashfields_set(uint64_t item_flags, uint64_t *hash_fields); void flow_dv_action_rss_l34_hash_adjust(uint64_t rss_types, uint64_t *hash_field); +uint32_t flow_dv_action_rss_hrxq_lookup(struct rte_eth_dev *dev, uint32_t idx, + const uint64_t hash_fields); struct mlx5_list_entry *flow_hw_grp_create_cb(void *tool_ctx, void *cb_ctx); void flow_hw_grp_remove_cb(void *tool_ctx, struct mlx5_list_entry *entry); @@ -1965,4 +2002,23 @@ mlx5_get_tof(const struct rte_flow_item *items, enum mlx5_tof_rule_type *rule_type); void flow_hw_resource_release(struct rte_eth_dev *dev); +int flow_dv_action_validate(struct rte_eth_dev *dev, + const struct rte_flow_indir_action_conf *conf, + const struct rte_flow_action *action, + struct rte_flow_error *err); +struct rte_flow_action_handle *flow_dv_action_create(struct rte_eth_dev *dev, + const struct rte_flow_indir_action_conf *conf, + const struct rte_flow_action *action, + struct rte_flow_error *err); +int flow_dv_action_destroy(struct rte_eth_dev *dev, + struct rte_flow_action_handle *handle, + struct rte_flow_error *error); +int flow_dv_action_update(struct rte_eth_dev *dev, + struct rte_flow_action_handle *handle, + const void *update, + struct rte_flow_error *err); +int flow_dv_action_query(struct rte_eth_dev *dev, + const struct rte_flow_action_handle *handle, + void *data, + struct rte_flow_error *error); #endif /* RTE_PMD_MLX5_FLOW_H_ */ diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c index c3d9d30dba..ca8ae4214b 100644 --- a/drivers/net/mlx5/mlx5_flow_dv.c +++ b/drivers/net/mlx5/mlx5_flow_dv.c @@ -13837,9 +13837,9 @@ __flow_dv_action_rss_hrxq_set(struct mlx5_shared_action_rss *action, * @return * Valid hash RX queue index, otherwise 0. 
*/ -static uint32_t -__flow_dv_action_rss_hrxq_lookup(struct rte_eth_dev *dev, uint32_t idx, - const uint64_t hash_fields) +uint32_t +flow_dv_action_rss_hrxq_lookup(struct rte_eth_dev *dev, uint32_t idx, + const uint64_t hash_fields) { struct mlx5_priv *priv = dev->data->dev_private; struct mlx5_shared_action_rss *shared_rss = @@ -13967,7 +13967,7 @@ flow_dv_apply(struct rte_eth_dev *dev, struct rte_flow *flow, struct mlx5_hrxq *hrxq = NULL; uint32_t hrxq_idx; - hrxq_idx = __flow_dv_action_rss_hrxq_lookup(dev, + hrxq_idx = flow_dv_action_rss_hrxq_lookup(dev, rss_desc->shared_rss, dev_flow->hash_fields); if (hrxq_idx) @@ -14691,6 +14691,7 @@ __flow_dv_action_rss_setup(struct rte_eth_dev *dev, struct mlx5_shared_action_rss *shared_rss, struct rte_flow_error *error) { + struct mlx5_priv *priv = dev->data->dev_private; struct mlx5_flow_rss_desc rss_desc = { 0 }; size_t i; int err; @@ -14711,6 +14712,8 @@ __flow_dv_action_rss_setup(struct rte_eth_dev *dev, /* Set non-zero value to indicate a shared RSS. */ rss_desc.shared_rss = action_idx; rss_desc.ind_tbl = shared_rss->ind_tbl; + if (priv->config.dv_flow_en == 2) + rss_desc.hws_flags = MLX5DR_ACTION_FLAG_HWS_RX; for (i = 0; i < MLX5_RSS_HASH_FIELDS_LEN; i++) { struct mlx5_hrxq *hrxq; uint64_t hash_fields = mlx5_rss_hash_fields[i]; @@ -14902,7 +14905,7 @@ __flow_dv_action_rss_release(struct rte_eth_dev *dev, uint32_t idx, * A valid shared action handle in case of success, NULL otherwise and * rte_errno is set. */ -static struct rte_flow_action_handle * +struct rte_flow_action_handle * flow_dv_action_create(struct rte_eth_dev *dev, const struct rte_flow_indir_action_conf *conf, const struct rte_flow_action *action, @@ -14972,7 +14975,7 @@ flow_dv_action_create(struct rte_eth_dev *dev, * @return * 0 on success, otherwise negative errno value. */ -static int +int flow_dv_action_destroy(struct rte_eth_dev *dev, struct rte_flow_action_handle *handle, struct rte_flow_error *error) @@ -15181,7 +15184,7 @@ __flow_dv_action_ct_update(struct rte_eth_dev *dev, uint32_t idx, * @return * 0 on success, otherwise negative errno value. */ -static int +int flow_dv_action_update(struct rte_eth_dev *dev, struct rte_flow_action_handle *handle, const void *update, @@ -15895,7 +15898,7 @@ flow_dv_query_count_ptr(struct rte_eth_dev *dev, uint32_t cnt_idx, "counters are not available"); } -static int +int flow_dv_action_query(struct rte_eth_dev *dev, const struct rte_flow_action_handle *handle, void *data, struct rte_flow_error *error) @@ -17584,7 +17587,7 @@ flow_dv_counter_allocate(struct rte_eth_dev *dev) * @return * 0 on success, otherwise negative errno value. */ -static int +int flow_dv_action_validate(struct rte_eth_dev *dev, const struct rte_flow_indir_action_conf *conf, const struct rte_flow_action *action, diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c index a754cdd084..9fc6f24542 100644 --- a/drivers/net/mlx5/mlx5_flow_hw.c +++ b/drivers/net/mlx5/mlx5_flow_hw.c @@ -75,6 +75,72 @@ flow_hw_rxq_flag_set(struct rte_eth_dev *dev) priv->mark_enabled = 1; } +/** + * Generate the pattern item flags. + * Will be used for shared RSS action. + * + * @param[in] items + * Pointer to the list of items. + * + * @return + * Item flags. 
+ */ +static uint64_t +flow_hw_rss_item_flags_get(const struct rte_flow_item items[]) +{ + uint64_t item_flags = 0; + uint64_t last_item = 0; + + for (; items->type != RTE_FLOW_ITEM_TYPE_END; items++) { + int tunnel = !!(item_flags & MLX5_FLOW_LAYER_TUNNEL); + int item_type = items->type; + + switch (item_type) { + case RTE_FLOW_ITEM_TYPE_IPV4: + last_item = tunnel ? MLX5_FLOW_LAYER_INNER_L3_IPV4 : + MLX5_FLOW_LAYER_OUTER_L3_IPV4; + break; + case RTE_FLOW_ITEM_TYPE_IPV6: + last_item = tunnel ? MLX5_FLOW_LAYER_INNER_L3_IPV6 : + MLX5_FLOW_LAYER_OUTER_L3_IPV6; + break; + case RTE_FLOW_ITEM_TYPE_TCP: + last_item = tunnel ? MLX5_FLOW_LAYER_INNER_L4_TCP : + MLX5_FLOW_LAYER_OUTER_L4_TCP; + break; + case RTE_FLOW_ITEM_TYPE_UDP: + last_item = tunnel ? MLX5_FLOW_LAYER_INNER_L4_UDP : + MLX5_FLOW_LAYER_OUTER_L4_UDP; + break; + case RTE_FLOW_ITEM_TYPE_GRE: + last_item = MLX5_FLOW_LAYER_GRE; + break; + case RTE_FLOW_ITEM_TYPE_NVGRE: + last_item = MLX5_FLOW_LAYER_GRE; + break; + case RTE_FLOW_ITEM_TYPE_VXLAN: + last_item = MLX5_FLOW_LAYER_VXLAN; + break; + case RTE_FLOW_ITEM_TYPE_VXLAN_GPE: + last_item = MLX5_FLOW_LAYER_VXLAN_GPE; + break; + case RTE_FLOW_ITEM_TYPE_GENEVE: + last_item = MLX5_FLOW_LAYER_GENEVE; + break; + case RTE_FLOW_ITEM_TYPE_MPLS: + last_item = MLX5_FLOW_LAYER_MPLS; + break; + case RTE_FLOW_ITEM_TYPE_GTP: + last_item = MLX5_FLOW_LAYER_GTP; + break; + default: + break; + } + item_flags |= last_item; + } + return item_flags; +} + /** * Register destination table DR jump action. * @@ -279,6 +345,96 @@ __flow_hw_act_data_general_append(struct mlx5_priv *priv, return 0; } +/** + * Append shared RSS action to the dynamic action list. + * + * @param[in] priv + * Pointer to the port private data structure. + * @param[in] acts + * Pointer to the template HW steering DR actions. + * @param[in] type + * Action type. + * @param[in] action_src + * Offset of source rte flow action. + * @param[in] action_dst + * Offset of destination DR action. + * @param[in] idx + * Shared RSS index. + * @param[in] rss + * Pointer to the shared RSS info. + * + * @return + * 0 on success, negative value otherwise and rte_errno is set. + */ +static __rte_always_inline int +__flow_hw_act_data_shared_rss_append(struct mlx5_priv *priv, + struct mlx5_hw_actions *acts, + enum rte_flow_action_type type, + uint16_t action_src, + uint16_t action_dst, + uint32_t idx, + struct mlx5_shared_action_rss *rss) +{ struct mlx5_action_construct_data *act_data; + + act_data = __flow_hw_act_data_alloc(priv, type, action_src, action_dst); + if (!act_data) + return -1; + act_data->shared_rss.level = rss->origin.level; + act_data->shared_rss.types = !rss->origin.types ? RTE_ETH_RSS_IP : + rss->origin.types; + act_data->shared_rss.idx = idx; + LIST_INSERT_HEAD(&acts->act_list, act_data, next); + return 0; +} + +/** + * Translate shared indirect action. + * + * @param[in] dev + * Pointer to the rte_eth_dev data structure. + * @param[in] action + * Pointer to the shared indirect rte_flow action. + * @param[in] acts + * Pointer to the template HW steering DR actions. + * @param[in] action_src + * Offset of source rte flow action. + * @param[in] action_dst + * Offset of destination DR action. + * + * @return + * 0 on success, negative value otherwise and rte_errno is set. 
+ */ +static __rte_always_inline int +flow_hw_shared_action_translate(struct rte_eth_dev *dev, + const struct rte_flow_action *action, + struct mlx5_hw_actions *acts, + uint16_t action_src, + uint16_t action_dst) +{ + struct mlx5_priv *priv = dev->data->dev_private; + struct mlx5_shared_action_rss *shared_rss; + uint32_t act_idx = (uint32_t)(uintptr_t)action->conf; + uint32_t type = act_idx >> MLX5_INDIRECT_ACTION_TYPE_OFFSET; + uint32_t idx = act_idx & + ((1u << MLX5_INDIRECT_ACTION_TYPE_OFFSET) - 1); + + switch (type) { + case MLX5_INDIRECT_ACTION_TYPE_RSS: + shared_rss = mlx5_ipool_get + (priv->sh->ipool[MLX5_IPOOL_RSS_SHARED_ACTIONS], idx); + if (!shared_rss || __flow_hw_act_data_shared_rss_append + (priv, acts, + (enum rte_flow_action_type)MLX5_RTE_FLOW_ACTION_TYPE_RSS, + action_src, action_dst, idx, shared_rss)) + return -1; + break; + default: + DRV_LOG(WARNING, "Unsupported shared action type:%d", type); + break; + } + return 0; +} + /** * Translate rte_flow actions to DR action. * @@ -329,6 +485,20 @@ flow_hw_actions_translate(struct rte_eth_dev *dev, for (i = 0; !actions_end; actions++, masks++) { switch (actions->type) { case RTE_FLOW_ACTION_TYPE_INDIRECT: + if (!attr->group) { + DRV_LOG(ERR, "Indirect action is not supported in root table."); + goto err; + } + if (actions->conf && masks->conf) { + if (flow_hw_shared_action_translate + (dev, actions, acts, actions - action_start, i)) + goto err; + } else if (__flow_hw_act_data_general_append + (priv, acts, actions->type, + actions - action_start, i)){ + goto err; + } + i++; break; case RTE_FLOW_ACTION_TYPE_VOID: break; @@ -420,6 +590,115 @@ flow_hw_actions_translate(struct rte_eth_dev *dev, "fail to create rte table"); } +/** + * Get shared indirect action. + * + * @param[in] dev + * Pointer to the rte_eth_dev data structure. + * @param[in] act_data + * Pointer to the recorded action construct data. + * @param[in] item_flags + * The matcher itme_flags used for RSS lookup. + * @param[in] rule_act + * Pointer to the shared action's destination rule DR action. + * + * @return + * 0 on success, negative value otherwise and rte_errno is set. + */ +static __rte_always_inline int +flow_hw_shared_action_get(struct rte_eth_dev *dev, + struct mlx5_action_construct_data *act_data, + const uint64_t item_flags, + struct mlx5dr_rule_action *rule_act) +{ + struct mlx5_priv *priv = dev->data->dev_private; + struct mlx5_flow_rss_desc rss_desc = { 0 }; + uint64_t hash_fields = 0; + uint32_t hrxq_idx = 0; + struct mlx5_hrxq *hrxq = NULL; + int act_type = act_data->type; + + switch (act_type) { + case MLX5_RTE_FLOW_ACTION_TYPE_RSS: + rss_desc.level = act_data->shared_rss.level; + rss_desc.types = act_data->shared_rss.types; + flow_dv_hashfields_set(item_flags, &rss_desc, &hash_fields); + hrxq_idx = flow_dv_action_rss_hrxq_lookup + (dev, act_data->shared_rss.idx, hash_fields); + if (hrxq_idx) + hrxq = mlx5_ipool_get(priv->sh->ipool[MLX5_IPOOL_HRXQ], + hrxq_idx); + if (hrxq) { + rule_act->action = hrxq->action; + return 0; + } + break; + default: + DRV_LOG(WARNING, "Unsupported shared action type:%d", + act_data->type); + break; + } + return -1; +} + +/** + * Construct shared indirect action. + * + * @param[in] dev + * Pointer to the rte_eth_dev data structure. + * @param[in] action + * Pointer to the shared indirect rte_flow action. + * @param[in] table + * Pointer to the flow table. + * @param[in] it_idx + * Item template index the action template refer to. + * @param[in] rule_act + * Pointer to the shared action's destination rule DR action. 
+ * + * @return + * 0 on success, negative value otherwise and rte_errno is set. + */ +static __rte_always_inline int +flow_hw_shared_action_construct(struct rte_eth_dev *dev, + const struct rte_flow_action *action, + struct rte_flow_template_table *table, + const uint8_t it_idx, + struct mlx5dr_rule_action *rule_act) +{ + struct mlx5_priv *priv = dev->data->dev_private; + struct mlx5_action_construct_data act_data; + struct mlx5_shared_action_rss *shared_rss; + uint32_t act_idx = (uint32_t)(uintptr_t)action->conf; + uint32_t type = act_idx >> MLX5_INDIRECT_ACTION_TYPE_OFFSET; + uint32_t idx = act_idx & + ((1u << MLX5_INDIRECT_ACTION_TYPE_OFFSET) - 1); + uint64_t item_flags; + + memset(&act_data, 0, sizeof(act_data)); + switch (type) { + case MLX5_INDIRECT_ACTION_TYPE_RSS: + act_data.type = MLX5_RTE_FLOW_ACTION_TYPE_RSS; + shared_rss = mlx5_ipool_get + (priv->sh->ipool[MLX5_IPOOL_RSS_SHARED_ACTIONS], idx); + if (!shared_rss) + return -1; + act_data.shared_rss.idx = idx; + act_data.shared_rss.level = shared_rss->origin.level; + act_data.shared_rss.types = !shared_rss->origin.types ? + RTE_ETH_RSS_IP : + shared_rss->origin.types; + item_flags = table->its[it_idx]->item_flags; + if (flow_hw_shared_action_get + (dev, &act_data, item_flags, rule_act)) + return -1; + break; + default: + DRV_LOG(WARNING, "Unsupported shared action type:%d", type); + break; + } + return 0; +} + /** * Construct flow action array. * @@ -432,6 +711,8 @@ flow_hw_actions_translate(struct rte_eth_dev *dev, * Pointer to job descriptor. * @param[in] hw_acts * Pointer to translated actions from template. + * @param[in] it_idx + * Item template index the action template refer to. * @param[in] actions * Array of rte_flow action need to be checked. * @param[in] rule_acts @@ -445,7 +726,8 @@ flow_hw_actions_translate(struct rte_eth_dev *dev, static __rte_always_inline int flow_hw_actions_construct(struct rte_eth_dev *dev, struct mlx5_hw_q_job *job, - struct mlx5_hw_actions *hw_acts, + const struct mlx5_hw_actions *hw_acts, + const uint8_t it_idx, const struct rte_flow_action actions[], struct mlx5dr_rule_action *rule_acts, uint32_t *acts_num) @@ -477,14 +759,19 @@ flow_hw_actions_construct(struct rte_eth_dev *dev, LIST_FOREACH(act_data, &hw_acts->act_list, next) { uint32_t jump_group; uint32_t tag; + uint64_t item_flags; struct mlx5_hw_jump_action *jump; struct mlx5_hrxq *hrxq; action = &actions[act_data->action_src]; MLX5_ASSERT(action->type == RTE_FLOW_ACTION_TYPE_INDIRECT || (int)action->type == act_data->type); - switch (action->type) { + switch (act_data->type) { case RTE_FLOW_ACTION_TYPE_INDIRECT: + if (flow_hw_shared_action_construct + (dev, action, table, it_idx, + &rule_acts[act_data->action_dst])) + return -1; break; case RTE_FLOW_ACTION_TYPE_VOID: break; @@ -517,6 +804,13 @@ flow_hw_actions_construct(struct rte_eth_dev *dev, job->flow->hrxq = hrxq; job->flow->fate_type = MLX5_FLOW_FATE_QUEUE; break; + case MLX5_RTE_FLOW_ACTION_TYPE_RSS: + item_flags = table->its[it_idx]->item_flags; + if (flow_hw_shared_action_get + (dev, act_data, item_flags, + &rule_acts[act_data->action_dst])) + return -1; + break; default: break; } @@ -599,8 +893,8 @@ flow_hw_q_flow_create(struct rte_eth_dev *dev, rule_attr.user_data = job; hw_acts = &table->ats[action_template_index].acts; /* Construct the flow action array based on the input actions.*/ - flow_hw_actions_construct(dev, job, hw_acts, actions, - rule_acts, &acts_num); + flow_hw_actions_construct(dev, job, hw_acts, pattern_template_index, + actions, rule_acts, &acts_num); ret = 
mlx5dr_rule_create(table->matcher, pattern_template_index, items, rule_acts, acts_num, @@ -1223,6 +1517,7 @@ flow_hw_pattern_template_create(struct rte_eth_dev *dev, mlx5_free(it); return NULL; } + it->item_flags = flow_hw_rss_item_flags_get(items); __atomic_fetch_add(&it->refcnt, 1, __ATOMIC_RELAXED); LIST_INSERT_HEAD(&priv->flow_hw_itt, it, next); return it; @@ -1647,6 +1942,97 @@ flow_hw_resource_release(struct rte_eth_dev *dev) priv->nb_queue = 0; } +/** + * Create shared action. + * + * @param[in] dev + * Pointer to the rte_eth_dev structure. + * @param[in] queue + * Which queue to be used.. + * @param[in] attr + * Operation attribute. + * @param[in] conf + * Indirect action configuration. + * @param[in] action + * rte_flow action detail. + * @param[out] error + * Pointer to error structure. + * + * @return + * Action handle on success, NULL otherwise and rte_errno is set. + */ +static struct rte_flow_action_handle * +flow_hw_action_handle_create(struct rte_eth_dev *dev, uint32_t queue, + const struct rte_flow_q_ops_attr *attr, + const struct rte_flow_indir_action_conf *conf, + const struct rte_flow_action *action, + struct rte_flow_error *error) +{ + RTE_SET_USED(queue); + RTE_SET_USED(attr); + return flow_dv_action_create(dev, conf, action, error); +} + +/** + * Update shared action. + * + * @param[in] dev + * Pointer to the rte_eth_dev structure. + * @param[in] queue + * Which queue to be used.. + * @param[in] attr + * Operation attribute. + * @param[in] handle + * Action handle to be updated. + * @param[in] update + * Update value. + * @param[out] error + * Pointer to error structure. + * + * @return + * 0 on success, negative value otherwise and rte_errno is set. + */ +static int +flow_hw_action_handle_update(struct rte_eth_dev *dev, uint32_t queue, + const struct rte_flow_q_ops_attr *attr, + struct rte_flow_action_handle *handle, + const void *update, + struct rte_flow_error *error) +{ + RTE_SET_USED(queue); + RTE_SET_USED(attr); + return flow_dv_action_update(dev, handle, update, error); +} + +/** + * Destroy shared action. + * + * @param[in] dev + * Pointer to the rte_eth_dev structure. + * @param[in] queue + * Which queue to be used.. + * @param[in] attr + * Operation attribute. + * @param[in] handle + * Action handle to be destroyed. + * @param[out] error + * Pointer to error structure. + * + * @return + * 0 on success, negative value otherwise and rte_errno is set. 
+ */ +static int +flow_hw_action_handle_destroy(struct rte_eth_dev *dev, uint32_t queue, + const struct rte_flow_q_ops_attr *attr, + struct rte_flow_action_handle *handle, + struct rte_flow_error *error) +{ + RTE_SET_USED(queue); + RTE_SET_USED(attr); + return flow_dv_action_destroy(dev, handle, error); +} + + const struct mlx5_flow_driver_ops mlx5_flow_hw_drv_ops = { .configure = flow_hw_configure, .pattern_template_create = flow_hw_pattern_template_create, @@ -1659,6 +2045,14 @@ const struct mlx5_flow_driver_ops mlx5_flow_hw_drv_ops = { .q_flow_destroy = flow_hw_q_flow_destroy, .q_pull = flow_hw_q_pull, .q_push = flow_hw_q_push, + .q_action_create = flow_hw_action_handle_create, + .q_action_destroy = flow_hw_action_handle_destroy, + .q_action_update = flow_hw_action_handle_update, + .action_validate = flow_dv_action_validate, + .action_create = flow_dv_action_create, + .action_destroy = flow_dv_action_destroy, + .action_update = flow_dv_action_update, + .action_query = flow_dv_action_query, }; #endif
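A note on the handle layout this patch relies on: the 32-bit indirect action handle packs the action type in the bits above MLX5_INDIRECT_ACTION_TYPE_OFFSET (30) and the ipool index in the bits below it. A minimal stand-alone sketch of the decode performed in flow_hw_shared_action_translate() and flow_hw_shared_action_construct() above (constants copied from the diff; the print is for illustration only):

#include <stdint.h>
#include <stdio.h>

/* Copied from the diff above: the type bits start at bit 30. */
#define MLX5_INDIRECT_ACTION_TYPE_OFFSET 30

/* Decode an indirect action handle the way the HWS code does:
 * the handle arrives as action->conf, cast back to a 32-bit value. */
static void
decode_indirect_handle(const void *conf)
{
	uint32_t act_idx = (uint32_t)(uintptr_t)conf;
	uint32_t type = act_idx >> MLX5_INDIRECT_ACTION_TYPE_OFFSET;
	uint32_t idx = act_idx &
		       ((1u << MLX5_INDIRECT_ACTION_TYPE_OFFSET) - 1);

	printf("indirect action type %u, pool index %u\n", type, idx);
}

For MLX5_INDIRECT_ACTION_TYPE_RSS, the decoded index is used to look up the mlx5_shared_action_rss object in the shared-RSS ipool, and the per-flow hash Rx queue is then selected via flow_dv_action_rss_hrxq_lookup() based on the pattern template's item flags.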
From patchwork Thu Feb 10 16:29:26 2022 X-Patchwork-Submitter: Suanming Mou X-Patchwork-Id: 107297 X-Patchwork-Delegate: rasland@nvidia.com From: Suanming Mou Subject: [PATCH 13/13] net/mlx5: add header reformat action Date: Thu, 10 Feb 2022 18:29:26 +0200 Message-ID: <20220210162926.20436-14-suanmingm@nvidia.com> X-Mailer: git-send-email 2.18.1 In-Reply-To: <20220210162926.20436-1-suanmingm@nvidia.com> References: <20220210162926.20436-1-suanmingm@nvidia.com> MIME-Version: 1.0 List-Id: DPDK patches and discussions
The HW steering header reformat action can work in bulk mode. In this case, when the table is created, a bulk of header reformat actions is allocated at the low level. Afterwards, when creating a flow, it is enough to specify the action index within the bulk together with the encapsulation data for that action. Signed-off-by: Suanming Mou --- drivers/net/mlx5/mlx5.h | 1 + drivers/net/mlx5/mlx5_flow.h | 21 +++ drivers/net/mlx5/mlx5_flow_dv.c | 4 +- drivers/net/mlx5/mlx5_flow_hw.c | 228 +++++++++++++++++++++++++++++++- 4 files changed, 251 insertions(+), 3 deletions(-) diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h index c78dc3c431..e10f55bf8c 100644 --- a/drivers/net/mlx5/mlx5.h +++ b/drivers/net/mlx5/mlx5.h @@ -344,6 +344,7 @@ struct mlx5_hw_q_job { uint32_t type; /* Job type. */ struct rte_flow_hw *flow; /* Flow attached to the job. */ void *user_data; /* Job user data. */ + uint8_t *encap_data; /* Encap data. */ }; /* HW steering job descriptor LIFO header . */ diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h index 097e5bf587..16fb6e643b 100644 --- a/drivers/net/mlx5/mlx5_flow.h +++ b/drivers/net/mlx5/mlx5_flow.h @@ -1038,6 +1038,14 @@ struct mlx5_action_construct_data { uint16_t action_src; /* rte_flow_action src offset.
*/ uint16_t action_dst; /* mlx5dr_rule_action dst offset. */ union { + struct { + /* encap src(item) offset. */ + uint16_t src; + /* encap dst data offset. */ + uint16_t dst; + /* encap data len. */ + uint16_t len; + } encap; struct { uint64_t types; /* RSS hash types. */ uint32_t level; /* RSS level. */ @@ -1079,6 +1087,13 @@ struct mlx5_hw_jump_action { struct mlx5dr_action *hws_action; }; +/* Encap decap action struct. */ +struct mlx5_hw_encap_decap_action { + struct mlx5dr_action *action; /* Action object. */ + size_t data_size; /* Action metadata size. */ + uint8_t data[]; /* Action data. */ +}; + /* The maximum actions support in the flow. */ #define MLX5_HW_MAX_ACTS 16 @@ -1088,6 +1103,9 @@ struct mlx5_hw_actions { LIST_HEAD(act_list, mlx5_action_construct_data) act_list; struct mlx5_hw_jump_action *jump; /* Jump action. */ struct mlx5_hrxq *tir; /* TIR action. */ + /* Encap/Decap action. */ + struct mlx5_hw_encap_decap_action *encap_decap; + uint16_t encap_decap_pos; /* Encap/Decap action position. */ uint32_t acts_num:4; /* Total action number. */ uint32_t mark:1; /* Indicate the mark action. */ /* Translated DR action array from action template. */ @@ -2021,4 +2039,7 @@ int flow_dv_action_query(struct rte_eth_dev *dev, const struct rte_flow_action_handle *handle, void *data, struct rte_flow_error *error); +size_t flow_dv_get_item_hdr_len(const enum rte_flow_item_type item_type); +int flow_dv_convert_encap_data(const struct rte_flow_item *items, uint8_t *buf, + size_t *size, struct rte_flow_error *error); #endif /* RTE_PMD_MLX5_FLOW_H_ */ diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c index ca8ae4214b..377ed6c1db 100644 --- a/drivers/net/mlx5/mlx5_flow_dv.c +++ b/drivers/net/mlx5/mlx5_flow_dv.c @@ -4024,7 +4024,7 @@ flow_dv_push_vlan_action_resource_register * @return * sizeof struct item_type, 0 if void or irrelevant. */ -static size_t +size_t flow_dv_get_item_hdr_len(const enum rte_flow_item_type item_type) { size_t retval; @@ -4090,7 +4090,7 @@ flow_dv_get_item_hdr_len(const enum rte_flow_item_type item_type) * @return * 0 on success, a negative errno value otherwise and rte_errno is set. */ -static int +int flow_dv_convert_encap_data(const struct rte_flow_item *items, uint8_t *buf, size_t *size, struct rte_flow_error *error) { diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c index 9fc6f24542..5a652ac8e6 100644 --- a/drivers/net/mlx5/mlx5_flow_hw.c +++ b/drivers/net/mlx5/mlx5_flow_hw.c @@ -345,6 +345,50 @@ __flow_hw_act_data_general_append(struct mlx5_priv *priv, return 0; } +/** + * Append dynamic encap action to the dynamic action list. + * + * @param[in] priv + * Pointer to the port private data structure. + * @param[in] acts + * Pointer to the template HW steering DR actions. + * @param[in] type + * Action type. + * @param[in] action_src + * Offset of source rte flow action. + * @param[in] action_dst + * Offset of destination DR action. + * @param[in] encap_src + * Offset of source encap raw data. + * @param[in] encap_dst + * Offset of destination encap raw data. + * @param[in] len + * Length of the data to be updated. + * + * @return + * 0 on success, negative value otherwise and rte_errno is set. 
+ */ +static __rte_always_inline int +__flow_hw_act_data_encap_append(struct mlx5_priv *priv, + struct mlx5_hw_actions *acts, + enum rte_flow_action_type type, + uint16_t action_src, + uint16_t action_dst, + uint16_t encap_src, + uint16_t encap_dst, + uint16_t len) +{ struct mlx5_action_construct_data *act_data; + + act_data = __flow_hw_act_data_alloc(priv, type, action_src, action_dst); + if (!act_data) + return -1; + act_data->encap.src = encap_src; + act_data->encap.dst = encap_dst; + act_data->encap.len = len; + LIST_INSERT_HEAD(&acts->act_list, act_data, next); + return 0; +} + /** * Append shared RSS action to the dynamic action list. * @@ -435,6 +479,53 @@ flow_hw_shared_action_translate(struct rte_eth_dev *dev, return 0; } +/** + * Translate encap items to encapsulation list. + * + * @param[in] dev + * Pointer to the rte_eth_dev data structure. + * @param[in] acts + * Pointer to the template HW steering DR actions. + * @param[in] type + * Action type. + * @param[in] action_src + * Offset of source rte flow action. + * @param[in] action_dst + * Offset of destination DR action. + * @param[in] items + * Encap item pattern. + * @param[in] items_m + * Encap item mask indicates which part are constant and dynamic. + * + * @return + * 0 on success, negative value otherwise and rte_errno is set. + */ +static __rte_always_inline int +flow_hw_encap_item_translate(struct rte_eth_dev *dev, + struct mlx5_hw_actions *acts, + enum rte_flow_action_type type, + uint16_t action_src, + uint16_t action_dst, + const struct rte_flow_item *items, + const struct rte_flow_item *items_m) +{ + struct mlx5_priv *priv = dev->data->dev_private; + size_t len, total_len = 0; + uint32_t i = 0; + + for (; items->type != RTE_FLOW_ITEM_TYPE_END; items++, items_m++, i++) { + len = flow_dv_get_item_hdr_len(items->type); + if ((!items_m->spec || + memcmp(items_m->spec, items->spec, len)) && + __flow_hw_act_data_encap_append(priv, acts, type, + action_src, action_dst, i, + total_len, len)) + return -1; + total_len += len; + } + return 0; +} + /** * Translate rte_flow actions to DR action. 
* @@ -472,6 +563,12 @@ flow_hw_actions_translate(struct rte_eth_dev *dev, struct rte_flow_action *actions = at->actions; struct rte_flow_action *action_start = actions; struct rte_flow_action *masks = at->masks; + enum mlx5dr_action_reformat_type refmt_type = 0; + const struct rte_flow_action_raw_encap *raw_encap_data; + const struct rte_flow_item *enc_item = NULL, *enc_item_m = NULL; + uint16_t reformat_pos = MLX5_HW_MAX_ACTS, reformat_src = 0; + uint8_t *encap_data = NULL; + size_t data_size = 0; bool actions_end = false; uint32_t type, i; int err; @@ -573,6 +670,56 @@ flow_hw_actions_translate(struct rte_eth_dev *dev, } i++; break; + case RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP: + MLX5_ASSERT(reformat_pos == MLX5_HW_MAX_ACTS); + enc_item = ((const struct rte_flow_action_vxlan_encap *) + actions->conf)->definition; + enc_item_m = + ((const struct rte_flow_action_vxlan_encap *) + masks->conf)->definition; + reformat_pos = i++; + reformat_src = actions - action_start; + refmt_type = MLX5DR_ACTION_REFORMAT_TYPE_L2_TO_TNL_L2; + break; + case RTE_FLOW_ACTION_TYPE_NVGRE_ENCAP: + MLX5_ASSERT(reformat_pos == MLX5_HW_MAX_ACTS); + enc_item = ((const struct rte_flow_action_nvgre_encap *) + actions->conf)->definition; + enc_item_m = + ((const struct rte_flow_action_nvgre_encap *) + actions->conf)->definition; + reformat_pos = i++; + reformat_src = actions - action_start; + refmt_type = MLX5DR_ACTION_REFORMAT_TYPE_L2_TO_TNL_L2; + break; + case RTE_FLOW_ACTION_TYPE_VXLAN_DECAP: + case RTE_FLOW_ACTION_TYPE_NVGRE_DECAP: + MLX5_ASSERT(reformat_pos == MLX5_HW_MAX_ACTS); + reformat_pos = i++; + refmt_type = MLX5DR_ACTION_REFORMAT_TYPE_TNL_L2_TO_L2; + break; + case RTE_FLOW_ACTION_TYPE_RAW_ENCAP: + raw_encap_data = + (const struct rte_flow_action_raw_encap *) + actions->conf; + encap_data = raw_encap_data->data; + data_size = raw_encap_data->size; + if (reformat_pos != MLX5_HW_MAX_ACTS) { + refmt_type = data_size < + MLX5_ENCAPSULATION_DECISION_SIZE ? 
+ MLX5DR_ACTION_REFORMAT_TYPE_TNL_L3_TO_L2 : + MLX5DR_ACTION_REFORMAT_TYPE_L2_TO_TNL_L3; + } else { + reformat_pos = i++; + refmt_type = + MLX5DR_ACTION_REFORMAT_TYPE_L2_TO_TNL_L2; + } + reformat_src = actions - action_start; + break; + case RTE_FLOW_ACTION_TYPE_RAW_DECAP: + reformat_pos = i++; + refmt_type = MLX5DR_ACTION_REFORMAT_TYPE_TNL_L2_TO_L2; + break; case RTE_FLOW_ACTION_TYPE_END: actions_end = true; break; @@ -580,6 +727,45 @@ flow_hw_actions_translate(struct rte_eth_dev *dev, break; } } + if (reformat_pos != MLX5_HW_MAX_ACTS) { + uint8_t buf[MLX5_ENCAP_MAX_LEN]; + + if (enc_item) { + MLX5_ASSERT(!encap_data); + if (flow_dv_convert_encap_data + (enc_item, buf, &data_size, error) || + flow_hw_encap_item_translate + (dev, acts, (action_start + reformat_src)->type, + reformat_src, reformat_pos, + enc_item, enc_item_m)) + goto err; + encap_data = buf; + } else if (encap_data && __flow_hw_act_data_encap_append + (priv, acts, + (action_start + reformat_src)->type, + reformat_src, reformat_pos, 0, 0, data_size)) { + goto err; + } + acts->encap_decap = mlx5_malloc(MLX5_MEM_ZERO, + sizeof(*acts->encap_decap) + data_size, + 0, SOCKET_ID_ANY); + if (!acts->encap_decap) + goto err; + if (data_size) { + acts->encap_decap->data_size = data_size; + memcpy(acts->encap_decap->data, encap_data, data_size); + } + acts->encap_decap->action = mlx5dr_action_create_reformat + (priv->dr_ctx, refmt_type, + data_size, encap_data, + rte_log2_u32(table_attr->nb_flows), + mlx5_hw_act_flag[!!attr->group][type]); + if (!acts->encap_decap->action) + goto err; + acts->rule_acts[reformat_pos].action = + acts->encap_decap->action; + acts->encap_decap_pos = reformat_pos; + } acts->acts_num = i; return 0; err: @@ -735,6 +921,9 @@ flow_hw_actions_construct(struct rte_eth_dev *dev, struct rte_flow_template_table *table = job->flow->table; struct mlx5_action_construct_data *act_data; const struct rte_flow_action *action; + const struct rte_flow_action_raw_encap *raw_encap_data; + const struct rte_flow_item *enc_item = NULL; + uint8_t *buf = job->encap_data; struct rte_flow_attr attr = { .ingress = 1, }; @@ -756,6 +945,9 @@ flow_hw_actions_construct(struct rte_eth_dev *dev, } else { attr.ingress = 1; } + if (hw_acts->encap_decap && hw_acts->encap_decap->data_size) + memcpy(buf, hw_acts->encap_decap->data, + hw_acts->encap_decap->data_size); LIST_FOREACH(act_data, &hw_acts->act_list, next) { uint32_t jump_group; uint32_t tag; @@ -811,10 +1003,38 @@ flow_hw_actions_construct(struct rte_eth_dev *dev, &rule_acts[act_data->action_dst])) return -1; break; + case RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP: + enc_item = ((const struct rte_flow_action_vxlan_encap *) + action->conf)->definition; + rte_memcpy((void *)&buf[act_data->encap.dst], + enc_item[act_data->encap.src].spec, + act_data->encap.len); + break; + case RTE_FLOW_ACTION_TYPE_NVGRE_ENCAP: + enc_item = ((const struct rte_flow_action_nvgre_encap *) + action->conf)->definition; + rte_memcpy((void *)&buf[act_data->encap.dst], + enc_item[act_data->encap.src].spec, + act_data->encap.len); + break; + case RTE_FLOW_ACTION_TYPE_RAW_ENCAP: + raw_encap_data = + (const struct rte_flow_action_raw_encap *) + action->conf; + rte_memcpy((void *)&buf[act_data->encap.dst], + raw_encap_data->data, act_data->encap.len); + MLX5_ASSERT(raw_encap_data->size == + act_data->encap.len); + break; default: break; } } + if (hw_acts->encap_decap) { + rule_acts[hw_acts->encap_decap_pos].reformat.offset = + job->flow->idx - 1; + rule_acts[hw_acts->encap_decap_pos].reformat.data = buf; + } return 0; } @@ 
-1821,6 +2041,7 @@ flow_hw_configure(struct rte_eth_dev *dev, goto err; } mem_size += (sizeof(struct mlx5_hw_q_job *) + + sizeof(uint8_t) * MLX5_ENCAP_MAX_LEN + sizeof(struct mlx5_hw_q_job)) * queue_attr[0]->size; } @@ -1831,6 +2052,8 @@ flow_hw_configure(struct rte_eth_dev *dev, goto err; } for (i = 0; i < nb_queue; i++) { + uint8_t *encap = NULL; + priv->hw_q[i].job_idx = queue_attr[i]->size; priv->hw_q[i].size = queue_attr[i]->size; if (i == 0) @@ -1841,8 +2064,11 @@ flow_hw_configure(struct rte_eth_dev *dev, &job[queue_attr[i - 1]->size]; job = (struct mlx5_hw_q_job *) &priv->hw_q[i].job[queue_attr[i]->size]; - for (j = 0; j < queue_attr[i]->size; j++) + encap = (uint8_t *)&job[queue_attr[i]->size]; + for (j = 0; j < queue_attr[i]->size; j++) { + job[j].encap_data = &encap[j * MLX5_ENCAP_MAX_LEN]; priv->hw_q[i].job[j] = &job[j]; + } } dr_ctx_attr.pd = priv->sh->cdev->pd; dr_ctx_attr.queues = nb_queue;