From patchwork Fri Sep 23 14:43:08 2022
X-Patchwork-Submitter: Suanming Mou
X-Patchwork-Id: 116739
X-Patchwork-Delegate: thomas@monjalon.net
From: Suanming Mou
To: Matan Azrad, Viacheslav Ovsiienko
Subject: [PATCH 01/27] net/mlx5: fix invalid flow attributes
Date: Fri, 23 Sep 2022 17:43:08 +0300
Message-ID: <20220923144334.27736-2-suanmingm@nvidia.com>
In-Reply-To: <20220923144334.27736-1-suanmingm@nvidia.com>
References: <20220923144334.27736-1-suanmingm@nvidia.com>
In the function flow_get_drv_type(), attr is dereferenced when the port
is not running in HWS mode. If the user calls the HWS APIs while the
port is in SWS mode, the HWS entry points must pass a valid attr
instead of NULL, otherwise the NULL pointer is dereferenced and the
PMD crashes.

Fixes: 572801ab860f ("ethdev: backport upstream rte_flow_async codes")
Signed-off-by: Suanming Mou
---
 drivers/net/mlx5/mlx5_flow.c | 38 ++++++++++++++++++++++++------------
 1 file changed, 26 insertions(+), 12 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index 45109001ca..3abb39aa92 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -3740,6 +3740,8 @@ flow_get_drv_type(struct rte_eth_dev *dev, const struct rte_flow_attr *attr)
 	 */
 	if (priv->sh->config.dv_flow_en == 2)
 		return MLX5_FLOW_TYPE_HW;
+	if (!attr)
+		return MLX5_FLOW_TYPE_MIN;
 	/* If no OS specific type - continue with DV/VERBS selection */
 	if (attr->transfer && priv->sh->config.dv_esw_en)
 		type = MLX5_FLOW_TYPE_DV;
@@ -8252,8 +8254,9 @@ mlx5_flow_info_get(struct rte_eth_dev *dev,
 		   struct rte_flow_error *error)
 {
 	const struct mlx5_flow_driver_ops *fops;
+	struct rte_flow_attr attr = {0};
 
-	if (flow_get_drv_type(dev, NULL) != MLX5_FLOW_TYPE_HW)
+	if (flow_get_drv_type(dev, &attr) != MLX5_FLOW_TYPE_HW)
 		return rte_flow_error_set(error, ENOTSUP,
 				RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
 				NULL,
@@ -8287,8 +8290,9 @@ mlx5_flow_port_configure(struct rte_eth_dev *dev,
 			 struct rte_flow_error *error)
 {
 	const struct mlx5_flow_driver_ops *fops;
+	struct rte_flow_attr attr = {0};
 
-	if (flow_get_drv_type(dev, NULL) != MLX5_FLOW_TYPE_HW)
+	if (flow_get_drv_type(dev, &attr) != MLX5_FLOW_TYPE_HW)
 		return rte_flow_error_set(error, ENOTSUP,
 				RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
 				NULL,
@@ -8319,8 +8323,9 @@ mlx5_flow_pattern_template_create(struct rte_eth_dev *dev,
 				  struct rte_flow_error *error)
 {
 	const struct mlx5_flow_driver_ops *fops;
+	struct rte_flow_attr fattr = {0};
 
-	if (flow_get_drv_type(dev, NULL) != MLX5_FLOW_TYPE_HW) {
+	if (flow_get_drv_type(dev, &fattr) != MLX5_FLOW_TYPE_HW) {
 		rte_flow_error_set(error, ENOTSUP,
 				RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
 				NULL,
@@ -8350,8 +8355,9 @@ mlx5_flow_pattern_template_destroy(struct rte_eth_dev *dev,
 				   struct rte_flow_error *error)
 {
 	const struct mlx5_flow_driver_ops *fops;
+	struct rte_flow_attr attr = {0};
 
-	if (flow_get_drv_type(dev, NULL) != MLX5_FLOW_TYPE_HW)
+	if (flow_get_drv_type(dev, &attr) != MLX5_FLOW_TYPE_HW)
 		return rte_flow_error_set(error, ENOTSUP,
 				RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
 				NULL,
@@ -8385,8 +8391,9 @@ mlx5_flow_actions_template_create(struct rte_eth_dev *dev,
 				  struct rte_flow_error *error)
 {
 	const struct mlx5_flow_driver_ops *fops;
+	struct rte_flow_attr fattr = {0};
 
-	if (flow_get_drv_type(dev, NULL) != MLX5_FLOW_TYPE_HW) {
+	if (flow_get_drv_type(dev, &fattr) != MLX5_FLOW_TYPE_HW) {
 		rte_flow_error_set(error, ENOTSUP,
 				RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
 				NULL,
@@ -8416,8 +8423,9 @@ mlx5_flow_actions_template_destroy(struct rte_eth_dev *dev,
 				   struct rte_flow_error *error)
 {
 	const struct mlx5_flow_driver_ops *fops;
+	struct rte_flow_attr attr = {0};
 
-	if (flow_get_drv_type(dev, NULL) != MLX5_FLOW_TYPE_HW)
+	if (flow_get_drv_type(dev, &attr) != MLX5_FLOW_TYPE_HW)
 		return rte_flow_error_set(error, ENOTSUP,
 				RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
 				NULL,
@@ -8457,8 +8465,9 @@ mlx5_flow_table_create(struct rte_eth_dev *dev,
 		       struct rte_flow_error *error)
 {
 	const struct mlx5_flow_driver_ops *fops;
+	struct rte_flow_attr fattr = {0};
 
-	if (flow_get_drv_type(dev, NULL) != MLX5_FLOW_TYPE_HW) {
+	if (flow_get_drv_type(dev, &fattr) != MLX5_FLOW_TYPE_HW) {
 		rte_flow_error_set(error, ENOTSUP,
 				RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
 				NULL,
@@ -8494,8 +8503,9 @@ mlx5_flow_table_destroy(struct rte_eth_dev *dev,
 			struct rte_flow_error *error)
 {
 	const struct mlx5_flow_driver_ops *fops;
+	struct rte_flow_attr attr = {0};
 
-	if (flow_get_drv_type(dev, NULL) != MLX5_FLOW_TYPE_HW)
+	if (flow_get_drv_type(dev, &attr) != MLX5_FLOW_TYPE_HW)
 		return rte_flow_error_set(error, ENOTSUP,
 				RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
 				NULL,
@@ -8542,8 +8552,9 @@ mlx5_flow_async_flow_create(struct rte_eth_dev *dev,
 			    struct rte_flow_error *error)
 {
 	const struct mlx5_flow_driver_ops *fops;
+	struct rte_flow_attr fattr = {0};
 
-	if (flow_get_drv_type(dev, NULL) != MLX5_FLOW_TYPE_HW) {
+	if (flow_get_drv_type(dev, &fattr) != MLX5_FLOW_TYPE_HW) {
 		rte_flow_error_set(error, ENOTSUP,
 				RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
 				NULL,
@@ -8585,8 +8596,9 @@ mlx5_flow_async_flow_destroy(struct rte_eth_dev *dev,
 			     struct rte_flow_error *error)
 {
 	const struct mlx5_flow_driver_ops *fops;
+	struct rte_flow_attr fattr = {0};
 
-	if (flow_get_drv_type(dev, NULL) != MLX5_FLOW_TYPE_HW)
+	if (flow_get_drv_type(dev, &fattr) != MLX5_FLOW_TYPE_HW)
 		return rte_flow_error_set(error, ENOTSUP,
 				RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
 				NULL,
@@ -8621,8 +8633,9 @@ mlx5_flow_pull(struct rte_eth_dev *dev,
 	       struct rte_flow_error *error)
 {
 	const struct mlx5_flow_driver_ops *fops;
+	struct rte_flow_attr attr = {0};
 
-	if (flow_get_drv_type(dev, NULL) != MLX5_FLOW_TYPE_HW)
+	if (flow_get_drv_type(dev, &attr) != MLX5_FLOW_TYPE_HW)
 		return rte_flow_error_set(error, ENOTSUP,
 				RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
 				NULL,
@@ -8650,8 +8663,9 @@ mlx5_flow_push(struct rte_eth_dev *dev,
 	       struct rte_flow_error *error)
 {
 	const struct mlx5_flow_driver_ops *fops;
+	struct rte_flow_attr attr = {0};
 
-	if (flow_get_drv_type(dev, NULL) != MLX5_FLOW_TYPE_HW)
+	if (flow_get_drv_type(dev, &attr) != MLX5_FLOW_TYPE_HW)
		return rte_flow_error_set(error, ENOTSUP,
				RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
				NULL,
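To see the fixed behavior from the application side, here is a minimal
sketch (not part of the patch; the port and variable names are
illustrative) that calls one of the template-API entry points above on
a port assumed to be running in SWS mode (dv_flow_en=1). With this fix
the call fails cleanly with ENOTSUP instead of crashing on a NULL attr:

#include <stdio.h>
#include <rte_flow.h>

/* Sketch only: probe whether the async (template) flow API is usable.
 * On an mlx5 port in SWS mode this reaches mlx5_flow_info_get(), which
 * now probes the driver type with a zeroed rte_flow_attr instead of a
 * NULL pointer, so it returns ENOTSUP rather than dereferencing NULL. */
static void
probe_hws_support(uint16_t port_id)
{
	struct rte_flow_port_info port_info;
	struct rte_flow_queue_info queue_info;
	struct rte_flow_error error;

	if (rte_flow_info_get(port_id, &port_info, &queue_info, &error) < 0)
		printf("port %u: template API not supported (%s)\n",
		       port_id, error.message ? error.message : "-");
	else
		printf("port %u: up to %u flow queues\n",
		       port_id, port_info.max_nb_queues);
}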
From patchwork Fri Sep 23 14:43:09 2022
X-Patchwork-Submitter: Suanming Mou
X-Patchwork-Id: 116740
X-Patchwork-Delegate: thomas@monjalon.net
From: Suanming Mou
To: Matan Azrad, Viacheslav Ovsiienko
Subject: [PATCH 02/27] net/mlx5: fix IPv6 and TCP RSS hash fields
Date: Fri, 23 Sep 2022 17:43:09 +0300
Message-ID: <20220923144334.27736-3-suanmingm@nvidia.com>
In-Reply-To: <20220923144334.27736-1-suanmingm@nvidia.com>
References: <20220923144334.27736-1-suanmingm@nvidia.com>

In the flow_dv_hashfields_set() function, when item_flags was 0, the
"!items" shortcut made the code always take the first if branch, so the
else branches never had a chance to be checked. As a result, the IPv6
and TCP hash fields handled in those else branches were never set.

This commit adds a dedicated HW steering hash field set function to
generate the RSS hash fields.
Fixes: 6540da0b93b5 ("net/mlx5: fix RSS scaling issue")
Signed-off-by: Suanming Mou
---
 drivers/net/mlx5/mlx5_flow_dv.c | 12 +++----
 drivers/net/mlx5/mlx5_flow_hw.c | 59 ++++++++++++++++++++++++++++++++-
 2 files changed, 62 insertions(+), 9 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index 885b4c5588..3e5e6781bf 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -11302,8 +11302,7 @@ flow_dv_hashfields_set(uint64_t item_flags,
 		rss_inner = 1;
 #endif
 	if ((rss_inner && (items & MLX5_FLOW_LAYER_INNER_L3_IPV4)) ||
-	    (!rss_inner && (items & MLX5_FLOW_LAYER_OUTER_L3_IPV4)) ||
-	    !items) {
+	    (!rss_inner && (items & MLX5_FLOW_LAYER_OUTER_L3_IPV4))) {
 		if (rss_types & MLX5_IPV4_LAYER_TYPES) {
 			if (rss_types & RTE_ETH_RSS_L3_SRC_ONLY)
 				fields |= IBV_RX_HASH_SRC_IPV4;
@@ -11313,8 +11312,7 @@ flow_dv_hashfields_set(uint64_t item_flags,
 			fields |= MLX5_IPV4_IBV_RX_HASH;
 		}
 	} else if ((rss_inner && (items & MLX5_FLOW_LAYER_INNER_L3_IPV6)) ||
-		   (!rss_inner && (items & MLX5_FLOW_LAYER_OUTER_L3_IPV6)) ||
-		   !items) {
+		   (!rss_inner && (items & MLX5_FLOW_LAYER_OUTER_L3_IPV6))) {
 		if (rss_types & MLX5_IPV6_LAYER_TYPES) {
 			if (rss_types & RTE_ETH_RSS_L3_SRC_ONLY)
 				fields |= IBV_RX_HASH_SRC_IPV6;
@@ -11337,8 +11335,7 @@ flow_dv_hashfields_set(uint64_t item_flags,
 		return;
 	}
 	if ((rss_inner && (items & MLX5_FLOW_LAYER_INNER_L4_UDP)) ||
-	    (!rss_inner && (items & MLX5_FLOW_LAYER_OUTER_L4_UDP)) ||
-	    !items) {
+	    (!rss_inner && (items & MLX5_FLOW_LAYER_OUTER_L4_UDP))) {
 		if (rss_types & RTE_ETH_RSS_UDP) {
 			if (rss_types & RTE_ETH_RSS_L4_SRC_ONLY)
 				fields |= IBV_RX_HASH_SRC_PORT_UDP;
@@ -11348,8 +11345,7 @@ flow_dv_hashfields_set(uint64_t item_flags,
 			fields |= MLX5_UDP_IBV_RX_HASH;
 		}
 	} else if ((rss_inner && (items & MLX5_FLOW_LAYER_INNER_L4_TCP)) ||
-		   (!rss_inner && (items & MLX5_FLOW_LAYER_OUTER_L4_TCP)) ||
-		   !items) {
+		   (!rss_inner && (items & MLX5_FLOW_LAYER_OUTER_L4_TCP))) {
 		if (rss_types & RTE_ETH_RSS_TCP) {
 			if (rss_types & RTE_ETH_RSS_L4_SRC_ONLY)
 				fields |= IBV_RX_HASH_SRC_PORT_TCP;
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 7343d59f1f..46c4169b4f 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -62,6 +62,63 @@ flow_hw_rxq_flag_set(struct rte_eth_dev *dev, bool enable)
 	priv->mark_enabled = enable;
 }
 
+/**
+ * Set the hash fields according to the @p rss_desc information.
+ *
+ * @param[in] rss_desc
+ *   Pointer to the mlx5_flow_rss_desc.
+ * @param[out] hash_fields
+ *   Pointer to the RSS hash fields.
+ */
+static void
+flow_hw_hashfields_set(struct mlx5_flow_rss_desc *rss_desc,
+		       uint64_t *hash_fields)
+{
+	uint64_t fields = 0;
+	int rss_inner = 0;
+	uint64_t rss_types = rte_eth_rss_hf_refine(rss_desc->types);
+
+#ifdef HAVE_IBV_DEVICE_TUNNEL_SUPPORT
+	if (rss_desc->level >= 2)
+		rss_inner = 1;
+#endif
+	if (rss_types & MLX5_IPV4_LAYER_TYPES) {
+		if (rss_types & RTE_ETH_RSS_L3_SRC_ONLY)
+			fields |= IBV_RX_HASH_SRC_IPV4;
+		else if (rss_types & RTE_ETH_RSS_L3_DST_ONLY)
+			fields |= IBV_RX_HASH_DST_IPV4;
+		else
+			fields |= MLX5_IPV4_IBV_RX_HASH;
+	} else if (rss_types & MLX5_IPV6_LAYER_TYPES) {
+		if (rss_types & RTE_ETH_RSS_L3_SRC_ONLY)
+			fields |= IBV_RX_HASH_SRC_IPV6;
+		else if (rss_types & RTE_ETH_RSS_L3_DST_ONLY)
+			fields |= IBV_RX_HASH_DST_IPV6;
+		else
+			fields |= MLX5_IPV6_IBV_RX_HASH;
+	}
+	if (rss_types & RTE_ETH_RSS_UDP) {
+		if (rss_types & RTE_ETH_RSS_L4_SRC_ONLY)
+			fields |= IBV_RX_HASH_SRC_PORT_UDP;
+		else if (rss_types & RTE_ETH_RSS_L4_DST_ONLY)
+			fields |= IBV_RX_HASH_DST_PORT_UDP;
+		else
+			fields |= MLX5_UDP_IBV_RX_HASH;
+	} else if (rss_types & RTE_ETH_RSS_TCP) {
+		if (rss_types & RTE_ETH_RSS_L4_SRC_ONLY)
+			fields |= IBV_RX_HASH_SRC_PORT_TCP;
+		else if (rss_types & RTE_ETH_RSS_L4_DST_ONLY)
+			fields |= IBV_RX_HASH_DST_PORT_TCP;
+		else
+			fields |= MLX5_TCP_IBV_RX_HASH;
+	}
+	if (rss_types & RTE_ETH_RSS_ESP)
+		fields |= IBV_RX_HASH_IPSEC_SPI;
+	if (rss_inner)
+		fields |= IBV_RX_HASH_INNER;
+	*hash_fields = fields;
+}
+
 /**
  * Generate the pattern item flags.
  * Will be used for shared RSS action.
@@ -225,7 +282,7 @@ flow_hw_tir_action_register(struct rte_eth_dev *dev,
 		       MLX5_RSS_HASH_KEY_LEN);
 		rss_desc.key_len = MLX5_RSS_HASH_KEY_LEN;
 		rss_desc.types = !rss->types ? RTE_ETH_RSS_IP : rss->types;
-		flow_dv_hashfields_set(0, &rss_desc, &rss_desc.hash_fields);
+		flow_hw_hashfields_set(&rss_desc, &rss_desc.hash_fields);
 		flow_dv_action_rss_l34_hash_adjust(rss->types,
 						   &rss_desc.hash_fields);
 		if (rss->level > 1) {
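As a usage note (not part of the patch), the following sketch shows an
RSS action over IPv6 and TCP, the case this fix targets; the queue
numbers and hash function are illustrative. In HWS mode, item_flags is
0 at TIR registration, so the old shared helper never reached its
IPv6/TCP branches for such a configuration; the new
flow_hw_hashfields_set() derives the hash fields from rss->types alone:

#include <rte_common.h>
#include <rte_ethdev.h>
#include <rte_flow.h>

static const uint16_t rss_queues[] = { 0, 1, 2, 3 };

/* Hash outer IPv6 addresses plus TCP ports over four Rx queues. */
static const struct rte_flow_action_rss rss_conf = {
	.func = RTE_ETH_HASH_FUNCTION_TOEPLITZ,
	.level = 1, /* outer headers */
	.types = RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_NONFRAG_IPV6_TCP,
	.queue = rss_queues,
	.queue_num = RTE_DIM(rss_queues),
};

static const struct rte_flow_action rss_actions[] = {
	{ .type = RTE_FLOW_ACTION_TYPE_RSS, .conf = &rss_conf },
	{ .type = RTE_FLOW_ACTION_TYPE_END },
};

With these types, the new function selects the IPv6 source/destination
and TCP port hash fields that the old code path left unset.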
From patchwork Fri Sep 23 14:43:10 2022
X-Patchwork-Submitter: Suanming Mou
X-Patchwork-Id: 116741
X-Patchwork-Delegate: thomas@monjalon.net
From: Suanming Mou
To: Matan Azrad, Viacheslav Ovsiienko
Subject: [PATCH 03/27] net/mlx5: add shared header reformat support
Date: Fri, 23 Sep 2022 17:43:10 +0300
Message-ID: <20220923144334.27736-4-suanmingm@nvidia.com>
In-Reply-To: <20220923144334.27736-1-suanmingm@nvidia.com>
References: <20220923144334.27736-1-suanmingm@nvidia.com>
As the rte_flow_async API defines, an action mask with a non-zero field
value means the action will be shared by all the flows in the table. A
header reformat action whose action mask field is non-zero will
therefore be created as a constant shared action.

For the encapsulation header reformat action there are two kinds of
encapsulation data: raw_encap_data and rte_flow_item encap_data. Both
kinds of data can be identified as constant or dynamic from the action
mask conf.

Examples:
1. VXLAN encap (encap_data: rte_flow_item)
   action conf (eth/ipv4/udp/vxlan_hdr)
   a. action mask conf (eth/ipv4/udp/vxlan_hdr) - items are constant.
   b. action mask conf (NULL) - items will change.
2. RAW encap (encap_data: raw)
   action conf (raw_data)
   a. action mask conf (not NULL) - encap_data is constant.
   b. action mask conf (NULL) - encap_data will change.

A sketch of the two RAW encap cases is shown after the diff below.
Signed-off-by: Suanming Mou
---
 drivers/net/mlx5/mlx5_flow.h    |   6 +-
 drivers/net/mlx5/mlx5_flow_hw.c | 124 ++++++++++----------------------
 2 files changed, 39 insertions(+), 91 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index 1ad75fc8c6..74cb1cd235 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -1065,10 +1065,6 @@ struct mlx5_action_construct_data {
 	uint16_t action_dst; /* mlx5dr_rule_action dst offset. */
 	union {
 		struct {
-			/* encap src(item) offset. */
-			uint16_t src;
-			/* encap dst data offset. */
-			uint16_t dst;
 			/* encap data len. */
 			uint16_t len;
 		} encap;
@@ -1111,6 +1107,8 @@ struct mlx5_hw_jump_action {
 /* Encap decap action struct. */
 struct mlx5_hw_encap_decap_action {
 	struct mlx5dr_action *action; /* Action object. */
+	/* Is header_reformat action shared across flows in table. */
+	bool shared;
 	size_t data_size; /* Action metadata size. */
 	uint8_t data[]; /* Action data. */
 };
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 46c4169b4f..b6978bd051 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -402,10 +402,6 @@ __flow_hw_act_data_general_append(struct mlx5_priv *priv,
  *   Offset of source rte flow action.
  * @param[in] action_dst
  *   Offset of destination DR action.
- * @param[in] encap_src
- *   Offset of source encap raw data.
- * @param[in] encap_dst
- *   Offset of destination encap raw data.
  * @param[in] len
  *   Length of the data to be updated.
  *
@@ -418,16 +414,12 @@ __flow_hw_act_data_encap_append(struct mlx5_priv *priv,
 				enum rte_flow_action_type type,
 				uint16_t action_src,
 				uint16_t action_dst,
-				uint16_t encap_src,
-				uint16_t encap_dst,
 				uint16_t len)
 {
 	struct mlx5_action_construct_data *act_data;
 
 	act_data = __flow_hw_act_data_alloc(priv, type, action_src, action_dst);
 	if (!act_data)
 		return -1;
-	act_data->encap.src = encap_src;
-	act_data->encap.dst = encap_dst;
 	act_data->encap.len = len;
 	LIST_INSERT_HEAD(&acts->act_list, act_data, next);
 	return 0;
@@ -523,53 +515,6 @@ flow_hw_shared_action_translate(struct rte_eth_dev *dev,
 	return 0;
 }
 
-/**
- * Translate encap items to encapsulation list.
- *
- * @param[in] dev
- *   Pointer to the rte_eth_dev data structure.
- * @param[in] acts
- *   Pointer to the template HW steering DR actions.
- * @param[in] type
- *   Action type.
- * @param[in] action_src
- *   Offset of source rte flow action.
- * @param[in] action_dst
- *   Offset of destination DR action.
- * @param[in] items
- *   Encap item pattern.
- * @param[in] items_m
- *   Encap item mask indicates which part are constant and dynamic.
- *
- * @return
- *   0 on success, negative value otherwise and rte_errno is set.
- */
-static __rte_always_inline int
-flow_hw_encap_item_translate(struct rte_eth_dev *dev,
-			     struct mlx5_hw_actions *acts,
-			     enum rte_flow_action_type type,
-			     uint16_t action_src,
-			     uint16_t action_dst,
-			     const struct rte_flow_item *items,
-			     const struct rte_flow_item *items_m)
-{
-	struct mlx5_priv *priv = dev->data->dev_private;
-	size_t len, total_len = 0;
-	uint32_t i = 0;
-
-	for (; items->type != RTE_FLOW_ITEM_TYPE_END; items++, items_m++, i++) {
-		len = flow_dv_get_item_hdr_len(items->type);
-		if ((!items_m->spec ||
-		     memcmp(items_m->spec, items->spec, len)) &&
-		    __flow_hw_act_data_encap_append(priv, acts, type,
-						    action_src, action_dst, i,
-						    total_len, len))
-			return -1;
-		total_len += len;
-	}
-	return 0;
-}
-
 /**
  * Translate rte_flow actions to DR action.
  *
@@ -611,7 +556,7 @@ flow_hw_actions_translate(struct rte_eth_dev *dev,
 	const struct rte_flow_action_raw_encap *raw_encap_data;
 	const struct rte_flow_item *enc_item = NULL, *enc_item_m = NULL;
 	uint16_t reformat_pos = MLX5_HW_MAX_ACTS, reformat_src = 0;
-	uint8_t *encap_data = NULL;
+	uint8_t *encap_data = NULL, *encap_data_m = NULL;
 	size_t data_size = 0;
 	bool actions_end = false;
 	uint32_t type, i;
@@ -718,9 +663,9 @@ flow_hw_actions_translate(struct rte_eth_dev *dev,
 			MLX5_ASSERT(reformat_pos == MLX5_HW_MAX_ACTS);
 			enc_item = ((const struct rte_flow_action_vxlan_encap *)
 				   actions->conf)->definition;
-			enc_item_m =
-				((const struct rte_flow_action_vxlan_encap *)
-				 masks->conf)->definition;
+			if (masks->conf)
+				enc_item_m = ((const struct rte_flow_action_vxlan_encap *)
+					      masks->conf)->definition;
 			reformat_pos = i++;
 			reformat_src = actions - action_start;
 			refmt_type = MLX5DR_ACTION_REFORMAT_TYPE_L2_TO_TNL_L2;
@@ -729,9 +674,9 @@ flow_hw_actions_translate(struct rte_eth_dev *dev,
 			MLX5_ASSERT(reformat_pos == MLX5_HW_MAX_ACTS);
 			enc_item = ((const struct rte_flow_action_nvgre_encap *)
 				   actions->conf)->definition;
-			enc_item_m =
-				((const struct rte_flow_action_nvgre_encap *)
-				 actions->conf)->definition;
+			if (masks->conf)
+				enc_item_m = ((const struct rte_flow_action_nvgre_encap *)
+					      masks->conf)->definition;
 			reformat_pos = i++;
 			reformat_src = actions - action_start;
 			refmt_type = MLX5DR_ACTION_REFORMAT_TYPE_L2_TO_TNL_L2;
@@ -743,6 +688,11 @@ flow_hw_actions_translate(struct rte_eth_dev *dev,
 			refmt_type = MLX5DR_ACTION_REFORMAT_TYPE_TNL_L2_TO_L2;
 			break;
 		case RTE_FLOW_ACTION_TYPE_RAW_ENCAP:
+			raw_encap_data =
+				(const struct rte_flow_action_raw_encap *)
+				 masks->conf;
+			if (raw_encap_data)
+				encap_data_m = raw_encap_data->data;
 			raw_encap_data =
 				(const struct rte_flow_action_raw_encap *)
 				 actions->conf;
@@ -773,22 +723,17 @@ flow_hw_actions_translate(struct rte_eth_dev *dev,
 	}
 	if (reformat_pos != MLX5_HW_MAX_ACTS) {
 		uint8_t buf[MLX5_ENCAP_MAX_LEN];
+		bool shared_rfmt = true;
 
 		if (enc_item) {
 			MLX5_ASSERT(!encap_data);
-			if (flow_dv_convert_encap_data
-			    (enc_item, buf, &data_size, error) ||
-			    flow_hw_encap_item_translate
-			    (dev, acts, (action_start + reformat_src)->type,
-			     reformat_src, reformat_pos,
-			     enc_item, enc_item_m))
+			if (flow_dv_convert_encap_data(enc_item, buf, &data_size, error))
 				goto err;
 			encap_data = buf;
-		} else if (encap_data && __flow_hw_act_data_encap_append
-			   (priv, acts,
-			    (action_start + reformat_src)->type,
-			    reformat_src, reformat_pos, 0, 0, data_size)) {
-			goto err;
+			if (!enc_item_m)
+				shared_rfmt = false;
+		} else if (encap_data && !encap_data_m) {
+			shared_rfmt = false;
 		}
 		acts->encap_decap = mlx5_malloc(MLX5_MEM_ZERO,
 				    sizeof(*acts->encap_decap) + data_size,
@@ -802,12 +747,22 @@ flow_hw_actions_translate(struct rte_eth_dev *dev,
 		acts->encap_decap->action = mlx5dr_action_create_reformat
 				(priv->dr_ctx, refmt_type,
 				 data_size, encap_data,
-				 rte_log2_u32(table_attr->nb_flows),
-				 mlx5_hw_act_flag[!!attr->group][type]);
+				 shared_rfmt ? 0 : rte_log2_u32(table_attr->nb_flows),
+				 mlx5_hw_act_flag[!!attr->group][type] |
+				 (shared_rfmt ? MLX5DR_ACTION_FLAG_SHARED : 0));
 		if (!acts->encap_decap->action)
 			goto err;
 		acts->rule_acts[reformat_pos].action =
						acts->encap_decap->action;
+		acts->rule_acts[reformat_pos].reformat.data =
+						acts->encap_decap->data;
+		if (shared_rfmt)
+			acts->rule_acts[reformat_pos].reformat.offset = 0;
+		else if (__flow_hw_act_data_encap_append(priv, acts,
+				 (action_start + reformat_src)->type,
+				 reformat_src, reformat_pos, data_size))
+			goto err;
+		acts->encap_decap->shared = shared_rfmt;
 		acts->encap_decap_pos = reformat_pos;
 	}
 	acts->acts_num = i;
@@ -972,6 +927,7 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
 		.ingress = 1,
 	};
 	uint32_t ft_flag;
+	size_t encap_len = 0;
 
 	memcpy(rule_acts, hw_acts->rule_acts,
 	       sizeof(*rule_acts) * hw_acts->acts_num);
@@ -989,9 +945,6 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
 	} else {
 		attr.ingress = 1;
 	}
-	if (hw_acts->encap_decap && hw_acts->encap_decap->data_size)
-		memcpy(buf, hw_acts->encap_decap->data,
-		       hw_acts->encap_decap->data_size);
 	LIST_FOREACH(act_data, &hw_acts->act_list, next) {
 		uint32_t jump_group;
 		uint32_t tag;
@@ -1050,23 +1003,20 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
 		case RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP:
 			enc_item = ((const struct rte_flow_action_vxlan_encap *)
 				   action->conf)->definition;
-			rte_memcpy((void *)&buf[act_data->encap.dst],
-				   enc_item[act_data->encap.src].spec,
-				   act_data->encap.len);
+			if (flow_dv_convert_encap_data(enc_item, buf, &encap_len, NULL))
+				return -1;
 			break;
 		case RTE_FLOW_ACTION_TYPE_NVGRE_ENCAP:
 			enc_item = ((const struct rte_flow_action_nvgre_encap *)
 				   action->conf)->definition;
-			rte_memcpy((void *)&buf[act_data->encap.dst],
-				   enc_item[act_data->encap.src].spec,
-				   act_data->encap.len);
+			if (flow_dv_convert_encap_data(enc_item, buf, &encap_len, NULL))
+				return -1;
 			break;
 		case RTE_FLOW_ACTION_TYPE_RAW_ENCAP:
 			raw_encap_data =
 				(const struct rte_flow_action_raw_encap *)
 				 action->conf;
-			rte_memcpy((void *)&buf[act_data->encap.dst],
-				   raw_encap_data->data, act_data->encap.len);
+			rte_memcpy((void *)buf, raw_encap_data->data, act_data->encap.len);
 			MLX5_ASSERT(raw_encap_data->size ==
 				    act_data->encap.len);
 			break;
@@ -1074,7 +1024,7 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
 			break;
 		}
 	}
-	if (hw_acts->encap_decap) {
+	if (hw_acts->encap_decap && !hw_acts->encap_decap->shared) {
 		rule_acts[hw_acts->encap_decap_pos].reformat.offset =
 				job->flow->idx - 1;
 		rule_acts[hw_acts->encap_decap_pos].reformat.data = buf;
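To make the mask convention concrete, here is a sketch (not part of the
patch; the header buffer is a placeholder) of the two RAW encap cases
from the commit message, written as arguments for
rte_flow_actions_template_create():

#include <rte_flow.h>

static uint8_t encap_hdr[50]; /* placeholder encapsulation header bytes */

static const struct rte_flow_action_raw_encap raw_encap = {
	.data = encap_hdr,
	.size = sizeof(encap_hdr),
};

static const struct rte_flow_action encap_actions[] = {
	{ .type = RTE_FLOW_ACTION_TYPE_RAW_ENCAP, .conf = &raw_encap },
	{ .type = RTE_FLOW_ACTION_TYPE_END },
};

/* Case 2a: mask conf is not NULL - encap_data is constant, so the PMD
 * creates one shared reformat action for all flows in the table. */
static const struct rte_flow_action encap_masks_shared[] = {
	{ .type = RTE_FLOW_ACTION_TYPE_RAW_ENCAP, .conf = &raw_encap },
	{ .type = RTE_FLOW_ACTION_TYPE_END },
};

/* Case 2b: mask conf is NULL - encap_data is supplied per flow rule,
 * so the reformat action stays dynamic (one data slot per rule). */
static const struct rte_flow_action encap_masks_dynamic[] = {
	{ .type = RTE_FLOW_ACTION_TYPE_RAW_ENCAP, .conf = NULL },
	{ .type = RTE_FLOW_ACTION_TYPE_END },
};

Either masks array would be passed together with encap_actions as the
masks argument of rte_flow_actions_template_create().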
From patchwork Fri Sep 23 14:43:11 2022
X-Patchwork-Submitter: Suanming Mou
X-Patchwork-Id: 116742
X-Patchwork-Delegate: thomas@monjalon.net
From: Suanming Mou
To: Matan Azrad, Viacheslav Ovsiienko
CC: Dariusz Sosnowski
Subject: [PATCH 04/27] net/mlx5: add modify field hws support
Date: Fri, 23 Sep 2022 17:43:11 +0300
Message-ID: <20220923144334.27736-5-suanmingm@nvidia.com>
In-Reply-To: <20220923144334.27736-1-suanmingm@nvidia.com>
References: <20220923144334.27736-1-suanmingm@nvidia.com>

This patch introduces support for modify_field rte_flow actions in HWS
mode. Support includes:
- ingress and egress domains,
- SET and ADD operations,
- usage of arbitrary bit offsets and widths for packet and metadata
  fields.

Support is implemented in two phases:
1. On flow table creation the hardware commands are generated, based on
   the rte_flow action templates, and stored alongside the action
   template.
2. On flow rule creation/queueing the hardware commands are updated with
   the values provided by the user. Any masks over immediate values,
   provided in the action templates, are applied to these values before
   the rules are enqueued for creation.

Signed-off-by: Dariusz Sosnowski
Signed-off-by: Suanming Mou
---
 drivers/common/mlx5/mlx5_prm.h   |   1 +
 drivers/net/mlx5/linux/mlx5_os.c |  18 +-
 drivers/net/mlx5/mlx5.h          |   1 +
 drivers/net/mlx5/mlx5_flow.h     |  96 ++++++
 drivers/net/mlx5/mlx5_flow_dv.c  | 538 ++++++++++++++++---------------
 drivers/net/mlx5/mlx5_flow_hw.c  | 445 ++++++++++++++++++++++++-
 6 files changed, 825 insertions(+), 274 deletions(-)

diff --git a/drivers/common/mlx5/mlx5_prm.h b/drivers/common/mlx5/mlx5_prm.h
index b5624e7cd1..628bae72b2 100644
--- a/drivers/common/mlx5/mlx5_prm.h
+++ b/drivers/common/mlx5/mlx5_prm.h
@@ -751,6 +751,7 @@ enum mlx5_modification_field {
 	MLX5_MODI_IN_TCP_ACK_NUM = 0x5C,
 	MLX5_MODI_GTP_TEID = 0x6E,
 	MLX5_MODI_OUT_IP_ECN = 0x73,
+	MLX5_MODI_TUNNEL_HDR_DW_1 = 0x75,
 };
 
 /* Total number of metadata reg_c's. */
diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c
index 6906914ba8..1877b6bec8 100644
--- a/drivers/net/mlx5/linux/mlx5_os.c
+++ b/drivers/net/mlx5/linux/mlx5_os.c
@@ -1539,6 +1539,15 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
 				      mlx5_hrxq_clone_free_cb);
 	if (!priv->hrxqs)
 		goto error;
+	mlx5_set_metadata_mask(eth_dev);
+	if (sh->config.dv_xmeta_en != MLX5_XMETA_MODE_LEGACY &&
+	    !priv->sh->dv_regc0_mask) {
+		DRV_LOG(ERR, "metadata mode %u is not supported "
+			     "(no metadata reg_c[0] is available)",
+			     sh->config.dv_xmeta_en);
+		err = ENOTSUP;
+		goto error;
+	}
 	rte_rwlock_init(&priv->ind_tbls_lock);
 	if (priv->vport_meta_mask)
 		flow_hw_set_port_info(eth_dev);
@@ -1560,15 +1569,6 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
 			err = -err;
 			goto error;
 		}
-	mlx5_set_metadata_mask(eth_dev);
-	if (sh->config.dv_xmeta_en != MLX5_XMETA_MODE_LEGACY &&
-	    !priv->sh->dv_regc0_mask) {
-		DRV_LOG(ERR, "metadata mode %u is not supported "
-			     "(no metadata reg_c[0] is available)",
-			     sh->config.dv_xmeta_en);
-		err = ENOTSUP;
-		goto error;
-	}
 	/* Query availability of metadata reg_c's. */
 	if (!priv->sh->metadata_regc_check_flag) {
 		err = mlx5_flow_discover_mreg_c(eth_dev);
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 48ae2244da..f3bd45d4c5 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -343,6 +343,7 @@ struct mlx5_hw_q_job {
 	struct rte_flow_hw *flow; /* Flow attached to the job. */
 	void *user_data; /* Job user data. */
 	uint8_t *encap_data; /* Encap data. */
+	struct mlx5_modification_cmd *mhdr_cmd;
 };
 
 /* HW steering job descriptor LIFO pool. */
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index 74cb1cd235..a7235b524d 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -1008,6 +1008,51 @@ flow_items_to_tunnel(const struct rte_flow_item items[])
 	return items[0].spec;
 }
 
+/**
+ * Fetch 1, 2, 3 or 4 byte field from the byte array
+ * and return as unsigned integer in host-endian format.
+ *
+ * @param[in] data
+ *   Pointer to data array.
+ * @param[in] size
+ *   Size of field to extract.
+ *
+ * @return
+ *   converted field in host endian format.
+ */
+static inline uint32_t
+flow_dv_fetch_field(const uint8_t *data, uint32_t size)
+{
+	uint32_t ret;
+
+	switch (size) {
+	case 1:
+		ret = *data;
+		break;
+	case 2:
+		ret = rte_be_to_cpu_16(*(const unaligned_uint16_t *)data);
+		break;
+	case 3:
+		ret = rte_be_to_cpu_16(*(const unaligned_uint16_t *)data);
+		ret = (ret << 8) | *(data + sizeof(uint16_t));
+		break;
+	case 4:
+		ret = rte_be_to_cpu_32(*(const unaligned_uint32_t *)data);
+		break;
+	default:
+		MLX5_ASSERT(false);
+		ret = 0;
+		break;
+	}
+	return ret;
+}
+
+struct field_modify_info {
+	uint32_t size; /* Size of field in protocol header, in bytes. */
+	uint32_t offset; /* Offset of field in protocol header, in bytes. */
+	enum mlx5_modification_field id;
+};
+
 /* HW steering flow attributes. */
 struct mlx5_flow_attr {
 	uint32_t port_id; /* Port index. */
@@ -1068,6 +1113,29 @@ struct mlx5_action_construct_data {
 			/* encap data len. */
 			uint16_t len;
 		} encap;
+		struct {
+			/* Modify header action offset in pattern. */
+			uint16_t mhdr_cmds_off;
+			/* Offset in pattern after modify header actions. */
+			uint16_t mhdr_cmds_end;
+			/*
+			 * True if this action is masked and does not need to
+			 * be generated.
+			 */
+			bool shared;
+			/*
+			 * Modified field definitions in dst field (SET, ADD)
+			 * or src field (COPY).
+			 */
+			struct field_modify_info field[MLX5_ACT_MAX_MOD_FIELDS];
+			/* Modified field definitions in dst field (COPY). */
+			struct field_modify_info dcopy[MLX5_ACT_MAX_MOD_FIELDS];
+			/*
+			 * Masks applied to field values to generate
+			 * PRM actions.
+			 */
+			uint32_t mask[MLX5_ACT_MAX_MOD_FIELDS];
+		} modify_header;
 		struct {
 			uint64_t types; /* RSS hash types. */
 			uint32_t level; /* RSS level. */
@@ -1093,6 +1161,7 @@ struct rte_flow_actions_template {
 	struct rte_flow_actions_template_attr attr;
 	struct rte_flow_action *actions; /* Cached flow actions. */
 	struct rte_flow_action *masks; /* Cached action masks.*/
+	uint16_t mhdr_off; /* Offset of DR modify header action. */
 	uint32_t refcnt; /* Reference counter. */
 };
 
@@ -1113,6 +1182,22 @@ struct mlx5_hw_encap_decap_action {
 	uint8_t data[]; /* Action data. */
 };
 
+#define MLX5_MHDR_MAX_CMD ((MLX5_MAX_MODIFY_NUM) * 2 + 1)
+
+/* Modify field action struct. */
+struct mlx5_hw_modify_header_action {
+	/* Reference to DR action */
+	struct mlx5dr_action *action;
+	/* Modify header action position in action rule table. */
+	uint16_t pos;
+	/* Is MODIFY_HEADER action shared across flows in table. */
+	bool shared;
+	/* Amount of modification commands stored in the precompiled buffer. */
+	uint32_t mhdr_cmds_num;
+	/* Precompiled modification commands. */
+	struct mlx5_modification_cmd mhdr_cmds[MLX5_MHDR_MAX_CMD];
+};
+
 /* The maximum actions support in the flow. */
 #define MLX5_HW_MAX_ACTS 16
 
@@ -1122,6 +1207,7 @@ struct mlx5_hw_actions {
 	LIST_HEAD(act_list, mlx5_action_construct_data) act_list;
 	struct mlx5_hw_jump_action *jump; /* Jump action. */
 	struct mlx5_hrxq *tir; /* TIR action. */
+	struct mlx5_hw_modify_header_action *mhdr; /* Modify header action. */
 	/* Encap/Decap action. */
 	struct mlx5_hw_encap_decap_action *encap_decap;
 	uint16_t encap_decap_pos; /* Encap/Decap action position. */
*/ @@ -2200,6 +2286,16 @@ int flow_dv_action_query(struct rte_eth_dev *dev, size_t flow_dv_get_item_hdr_len(const enum rte_flow_item_type item_type); int flow_dv_convert_encap_data(const struct rte_flow_item *items, uint8_t *buf, size_t *size, struct rte_flow_error *error); +void mlx5_flow_field_id_to_modify_info + (const struct rte_flow_action_modify_data *data, + struct field_modify_info *info, uint32_t *mask, + uint32_t width, struct rte_eth_dev *dev, + const struct rte_flow_attr *attr, struct rte_flow_error *error); +int flow_dv_convert_modify_action(struct rte_flow_item *item, + struct field_modify_info *field, + struct field_modify_info *dcopy, + struct mlx5_flow_dv_modify_hdr_resource *resource, + uint32_t type, struct rte_flow_error *error); #define MLX5_PF_VPORT_ID 0 #define MLX5_ECPF_VPORT_ID 0xFFFE diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c index 3e5e6781bf..5d3e2d37bb 100644 --- a/drivers/net/mlx5/mlx5_flow_dv.c +++ b/drivers/net/mlx5/mlx5_flow_dv.c @@ -241,12 +241,6 @@ rte_col_2_mlx5_col(enum rte_color rcol) return MLX5_FLOW_COLOR_UNDEFINED; } -struct field_modify_info { - uint32_t size; /* Size of field in protocol header, in bytes. */ - uint32_t offset; /* Offset of field in protocol header, in bytes. */ - enum mlx5_modification_field id; -}; - struct field_modify_info modify_eth[] = { {4, 0, MLX5_MODI_OUT_DMAC_47_16}, {2, 4, MLX5_MODI_OUT_DMAC_15_0}, @@ -379,45 +373,6 @@ mlx5_update_vlan_vid_pcp(const struct rte_flow_action *action, } } -/** - * Fetch 1, 2, 3 or 4 byte field from the byte array - * and return as unsigned integer in host-endian format. - * - * @param[in] data - * Pointer to data array. - * @param[in] size - * Size of field to extract. - * - * @return - * converted field in host endian format. - */ -static inline uint32_t -flow_dv_fetch_field(const uint8_t *data, uint32_t size) -{ - uint32_t ret; - - switch (size) { - case 1: - ret = *data; - break; - case 2: - ret = rte_be_to_cpu_16(*(const unaligned_uint16_t *)data); - break; - case 3: - ret = rte_be_to_cpu_16(*(const unaligned_uint16_t *)data); - ret = (ret << 8) | *(data + sizeof(uint16_t)); - break; - case 4: - ret = rte_be_to_cpu_32(*(const unaligned_uint32_t *)data); - break; - default: - MLX5_ASSERT(false); - ret = 0; - break; - } - return ret; -} - /** * Convert modify-header action to DV specification. * @@ -446,7 +401,7 @@ flow_dv_fetch_field(const uint8_t *data, uint32_t size) * @return * 0 on success, a negative errno value otherwise and rte_errno is set. 
*/ -static int +int flow_dv_convert_modify_action(struct rte_flow_item *item, struct field_modify_info *field, struct field_modify_info *dcopy, @@ -1464,7 +1419,32 @@ mlx5_flow_item_field_width(struct rte_eth_dev *dev, return 0; } -static void +static __rte_always_inline uint8_t +flow_modify_info_mask_8(uint32_t length, uint32_t off) +{ + return (0xffu >> (8 - length)) << off; +} + +static __rte_always_inline uint16_t +flow_modify_info_mask_16(uint32_t length, uint32_t off) +{ + return rte_cpu_to_be_16((0xffffu >> (16 - length)) << off); +} + +static __rte_always_inline uint32_t +flow_modify_info_mask_32(uint32_t length, uint32_t off) +{ + return rte_cpu_to_be_32((0xffffffffu >> (32 - length)) << off); +} + +static __rte_always_inline uint32_t +flow_modify_info_mask_32_masked(uint32_t length, uint32_t off, uint32_t post_mask) +{ + uint32_t mask = (0xffffffffu >> (32 - length)) << off; + return rte_cpu_to_be_32(mask & post_mask); +} + +void mlx5_flow_field_id_to_modify_info (const struct rte_flow_action_modify_data *data, struct field_modify_info *info, uint32_t *mask, @@ -1473,323 +1453,340 @@ mlx5_flow_field_id_to_modify_info { struct mlx5_priv *priv = dev->data->dev_private; uint32_t idx = 0; - uint32_t off = 0; - - switch (data->field) { + uint32_t off_be = 0; + uint32_t length = 0; + switch ((int)data->field) { case RTE_FLOW_FIELD_START: /* not supported yet */ MLX5_ASSERT(false); break; case RTE_FLOW_FIELD_MAC_DST: - off = data->offset > 16 ? data->offset - 16 : 0; - if (mask) { - if (data->offset < 16) { - info[idx] = (struct field_modify_info){2, 4, - MLX5_MODI_OUT_DMAC_15_0}; - if (width < 16) { - mask[1] = rte_cpu_to_be_16(0xffff >> - (16 - width)); - width = 0; - } else { - mask[1] = RTE_BE16(0xffff); - width -= 16; - } - if (!width) - break; - ++idx; - } - info[idx] = (struct field_modify_info){4, 0, - MLX5_MODI_OUT_DMAC_47_16}; - mask[0] = rte_cpu_to_be_32((0xffffffff >> - (32 - width)) << off); + MLX5_ASSERT(data->offset + width <= 48); + off_be = 48 - (data->offset + width); + if (off_be < 16) { + info[idx] = (struct field_modify_info){2, 4, + MLX5_MODI_OUT_DMAC_15_0}; + length = off_be + width <= 16 ? width : 16 - off_be; + if (mask) + mask[1] = flow_modify_info_mask_16(length, + off_be); + else + info[idx].offset = off_be; + width -= length; + if (!width) + break; + off_be = 0; + idx++; } else { - if (data->offset < 16) - info[idx++] = (struct field_modify_info){2, 0, - MLX5_MODI_OUT_DMAC_15_0}; - info[idx] = (struct field_modify_info){4, off, - MLX5_MODI_OUT_DMAC_47_16}; + off_be -= 16; } + info[idx] = (struct field_modify_info){4, 0, + MLX5_MODI_OUT_DMAC_47_16}; + if (mask) + mask[0] = flow_modify_info_mask_32(width, off_be); + else + info[idx].offset = off_be; break; case RTE_FLOW_FIELD_MAC_SRC: - off = data->offset > 16 ? data->offset - 16 : 0; - if (mask) { - if (data->offset < 16) { - info[idx] = (struct field_modify_info){2, 4, - MLX5_MODI_OUT_SMAC_15_0}; - if (width < 16) { - mask[1] = rte_cpu_to_be_16(0xffff >> - (16 - width)); - width = 0; - } else { - mask[1] = RTE_BE16(0xffff); - width -= 16; - } - if (!width) - break; - ++idx; - } - info[idx] = (struct field_modify_info){4, 0, - MLX5_MODI_OUT_SMAC_47_16}; - mask[0] = rte_cpu_to_be_32((0xffffffff >> - (32 - width)) << off); + MLX5_ASSERT(data->offset + width <= 48); + off_be = 48 - (data->offset + width); + if (off_be < 16) { + info[idx] = (struct field_modify_info){2, 4, + MLX5_MODI_OUT_SMAC_15_0}; + length = off_be + width <= 16 ? 
width : 16 - off_be; + if (mask) + mask[1] = flow_modify_info_mask_16(length, + off_be); + else + info[idx].offset = off_be; + width -= length; + if (!width) + break; + off_be = 0; + idx++; } else { - if (data->offset < 16) - info[idx++] = (struct field_modify_info){2, 0, - MLX5_MODI_OUT_SMAC_15_0}; - info[idx] = (struct field_modify_info){4, off, - MLX5_MODI_OUT_SMAC_47_16}; + off_be -= 16; } + info[idx] = (struct field_modify_info){4, 0, + MLX5_MODI_OUT_SMAC_47_16}; + if (mask) + mask[0] = flow_modify_info_mask_32(width, off_be); + else + info[idx].offset = off_be; break; case RTE_FLOW_FIELD_VLAN_TYPE: /* not supported yet */ break; case RTE_FLOW_FIELD_VLAN_ID: + MLX5_ASSERT(data->offset + width <= 12); + off_be = 12 - (data->offset + width); info[idx] = (struct field_modify_info){2, 0, MLX5_MODI_OUT_FIRST_VID}; if (mask) - mask[idx] = rte_cpu_to_be_16(0x0fff >> (12 - width)); + mask[idx] = flow_modify_info_mask_16(width, off_be); + else + info[idx].offset = off_be; break; case RTE_FLOW_FIELD_MAC_TYPE: + MLX5_ASSERT(data->offset + width <= 16); + off_be = 16 - (data->offset + width); info[idx] = (struct field_modify_info){2, 0, MLX5_MODI_OUT_ETHERTYPE}; if (mask) - mask[idx] = rte_cpu_to_be_16(0xffff >> (16 - width)); + mask[idx] = flow_modify_info_mask_16(width, off_be); + else + info[idx].offset = off_be; break; case RTE_FLOW_FIELD_IPV4_DSCP: + MLX5_ASSERT(data->offset + width <= 6); + off_be = 6 - (data->offset + width); info[idx] = (struct field_modify_info){1, 0, MLX5_MODI_OUT_IP_DSCP}; if (mask) - mask[idx] = 0x3f >> (6 - width); + mask[idx] = flow_modify_info_mask_8(width, off_be); + else + info[idx].offset = off_be; break; case RTE_FLOW_FIELD_IPV4_TTL: + MLX5_ASSERT(data->offset + width <= 8); + off_be = 8 - (data->offset + width); info[idx] = (struct field_modify_info){1, 0, MLX5_MODI_OUT_IPV4_TTL}; if (mask) - mask[idx] = 0xff >> (8 - width); + mask[idx] = flow_modify_info_mask_8(width, off_be); + else + info[idx].offset = off_be; break; case RTE_FLOW_FIELD_IPV4_SRC: + MLX5_ASSERT(data->offset + width <= 32); + off_be = 32 - (data->offset + width); info[idx] = (struct field_modify_info){4, 0, MLX5_MODI_OUT_SIPV4}; if (mask) - mask[idx] = rte_cpu_to_be_32(0xffffffff >> - (32 - width)); + mask[idx] = flow_modify_info_mask_32(width, off_be); + else + info[idx].offset = off_be; break; case RTE_FLOW_FIELD_IPV4_DST: + MLX5_ASSERT(data->offset + width <= 32); + off_be = 32 - (data->offset + width); info[idx] = (struct field_modify_info){4, 0, MLX5_MODI_OUT_DIPV4}; if (mask) - mask[idx] = rte_cpu_to_be_32(0xffffffff >> - (32 - width)); + mask[idx] = flow_modify_info_mask_32(width, off_be); + else + info[idx].offset = off_be; break; case RTE_FLOW_FIELD_IPV6_DSCP: + MLX5_ASSERT(data->offset + width <= 6); + off_be = 6 - (data->offset + width); info[idx] = (struct field_modify_info){1, 0, MLX5_MODI_OUT_IP_DSCP}; if (mask) - mask[idx] = 0x3f >> (6 - width); + mask[idx] = flow_modify_info_mask_8(width, off_be); + else + info[idx].offset = off_be; break; case RTE_FLOW_FIELD_IPV6_HOPLIMIT: + MLX5_ASSERT(data->offset + width <= 8); + off_be = 8 - (data->offset + width); info[idx] = (struct field_modify_info){1, 0, MLX5_MODI_OUT_IPV6_HOPLIMIT}; if (mask) - mask[idx] = 0xff >> (8 - width); + mask[idx] = flow_modify_info_mask_8(width, off_be); + else + info[idx].offset = off_be; break; - case RTE_FLOW_FIELD_IPV6_SRC: - if (mask) { - if (data->offset < 32) { - info[idx] = (struct field_modify_info){4, 12, - MLX5_MODI_OUT_SIPV6_31_0}; - if (width < 32) { - mask[3] = - rte_cpu_to_be_32(0xffffffff >> 
- (32 - width)); - width = 0; - } else { - mask[3] = RTE_BE32(0xffffffff); - width -= 32; - } - if (!width) - break; - ++idx; - } - if (data->offset < 64) { - info[idx] = (struct field_modify_info){4, 8, - MLX5_MODI_OUT_SIPV6_63_32}; - if (width < 32) { - mask[2] = - rte_cpu_to_be_32(0xffffffff >> - (32 - width)); - width = 0; - } else { - mask[2] = RTE_BE32(0xffffffff); - width -= 32; - } - if (!width) - break; - ++idx; - } - if (data->offset < 96) { - info[idx] = (struct field_modify_info){4, 4, - MLX5_MODI_OUT_SIPV6_95_64}; - if (width < 32) { - mask[1] = - rte_cpu_to_be_32(0xffffffff >> - (32 - width)); - width = 0; - } else { - mask[1] = RTE_BE32(0xffffffff); - width -= 32; - } - if (!width) - break; - ++idx; + case RTE_FLOW_FIELD_IPV6_SRC: { + /* + * Fields corresponding to IPv6 source address bytes + * arranged according to network byte ordering. + */ + struct field_modify_info fields[] = { + { 4, 0, MLX5_MODI_OUT_SIPV6_127_96 }, + { 4, 4, MLX5_MODI_OUT_SIPV6_95_64 }, + { 4, 8, MLX5_MODI_OUT_SIPV6_63_32 }, + { 4, 12, MLX5_MODI_OUT_SIPV6_31_0 }, + }; + /* First mask to be modified is the mask of 4th address byte. */ + uint32_t midx = 3; + + MLX5_ASSERT(data->offset + width <= 128); + off_be = 128 - (data->offset + width); + while (width > 0 && midx > 0) { + if (off_be < 32) { + info[idx] = fields[midx]; + length = off_be + width <= 32 ? + width : 32 - off_be; + if (mask) + mask[midx] = flow_modify_info_mask_32 + (length, off_be); + else + info[idx].offset = off_be; + width -= length; + off_be = 0; + idx++; + } else { + off_be -= 32; } - info[idx] = (struct field_modify_info){4, 0, - MLX5_MODI_OUT_SIPV6_127_96}; - mask[0] = rte_cpu_to_be_32(0xffffffff >> (32 - width)); - } else { - if (data->offset < 32) - info[idx++] = (struct field_modify_info){4, 0, - MLX5_MODI_OUT_SIPV6_31_0}; - if (data->offset < 64) - info[idx++] = (struct field_modify_info){4, 0, - MLX5_MODI_OUT_SIPV6_63_32}; - if (data->offset < 96) - info[idx++] = (struct field_modify_info){4, 0, - MLX5_MODI_OUT_SIPV6_95_64}; - if (data->offset < 128) - info[idx++] = (struct field_modify_info){4, 0, - MLX5_MODI_OUT_SIPV6_127_96}; + midx--; } + if (!width) + break; + info[idx] = fields[midx]; + if (mask) + mask[midx] = flow_modify_info_mask_32(width, off_be); + else + info[idx].offset = off_be; break; - case RTE_FLOW_FIELD_IPV6_DST: - if (mask) { - if (data->offset < 32) { - info[idx] = (struct field_modify_info){4, 12, - MLX5_MODI_OUT_DIPV6_31_0}; - if (width < 32) { - mask[3] = - rte_cpu_to_be_32(0xffffffff >> - (32 - width)); - width = 0; - } else { - mask[3] = RTE_BE32(0xffffffff); - width -= 32; - } - if (!width) - break; - ++idx; - } - if (data->offset < 64) { - info[idx] = (struct field_modify_info){4, 8, - MLX5_MODI_OUT_DIPV6_63_32}; - if (width < 32) { - mask[2] = - rte_cpu_to_be_32(0xffffffff >> - (32 - width)); - width = 0; - } else { - mask[2] = RTE_BE32(0xffffffff); - width -= 32; - } - if (!width) - break; - ++idx; - } - if (data->offset < 96) { - info[idx] = (struct field_modify_info){4, 4, - MLX5_MODI_OUT_DIPV6_95_64}; - if (width < 32) { - mask[1] = - rte_cpu_to_be_32(0xffffffff >> - (32 - width)); - width = 0; - } else { - mask[1] = RTE_BE32(0xffffffff); - width -= 32; - } - if (!width) - break; - ++idx; + } + case RTE_FLOW_FIELD_IPV6_DST: { + /* + * Fields corresponding to IPv6 destination address bytes + * arranged according to network byte ordering. 
+ */ + struct field_modify_info fields[] = { + { 4, 0, MLX5_MODI_OUT_DIPV6_127_96 }, + { 4, 4, MLX5_MODI_OUT_DIPV6_95_64 }, + { 4, 8, MLX5_MODI_OUT_DIPV6_63_32 }, + { 4, 12, MLX5_MODI_OUT_DIPV6_31_0 }, + }; + /* First mask to be modified is the mask of 4th address byte. */ + uint32_t midx = 3; + + MLX5_ASSERT(data->offset + width <= 128); + off_be = 128 - (data->offset + width); + while (width > 0 && midx > 0) { + if (off_be < 32) { + info[idx] = fields[midx]; + length = off_be + width <= 32 ? + width : 32 - off_be; + if (mask) + mask[midx] = flow_modify_info_mask_32 + (length, off_be); + else + info[idx].offset = off_be; + width -= length; + off_be = 0; + idx++; + } else { + off_be -= 32; } - info[idx] = (struct field_modify_info){4, 0, - MLX5_MODI_OUT_DIPV6_127_96}; - mask[0] = rte_cpu_to_be_32(0xffffffff >> (32 - width)); - } else { - if (data->offset < 32) - info[idx++] = (struct field_modify_info){4, 0, - MLX5_MODI_OUT_DIPV6_31_0}; - if (data->offset < 64) - info[idx++] = (struct field_modify_info){4, 0, - MLX5_MODI_OUT_DIPV6_63_32}; - if (data->offset < 96) - info[idx++] = (struct field_modify_info){4, 0, - MLX5_MODI_OUT_DIPV6_95_64}; - if (data->offset < 128) - info[idx++] = (struct field_modify_info){4, 0, - MLX5_MODI_OUT_DIPV6_127_96}; + midx--; } + if (!width) + break; + info[idx] = fields[midx]; + if (mask) + mask[midx] = flow_modify_info_mask_32(width, off_be); + else + info[idx].offset = off_be; break; + } case RTE_FLOW_FIELD_TCP_PORT_SRC: + MLX5_ASSERT(data->offset + width <= 16); + off_be = 16 - (data->offset + width); info[idx] = (struct field_modify_info){2, 0, MLX5_MODI_OUT_TCP_SPORT}; if (mask) - mask[idx] = rte_cpu_to_be_16(0xffff >> (16 - width)); + mask[idx] = flow_modify_info_mask_16(width, off_be); + else + info[idx].offset = off_be; break; case RTE_FLOW_FIELD_TCP_PORT_DST: + MLX5_ASSERT(data->offset + width <= 16); + off_be = 16 - (data->offset + width); info[idx] = (struct field_modify_info){2, 0, MLX5_MODI_OUT_TCP_DPORT}; if (mask) - mask[idx] = rte_cpu_to_be_16(0xffff >> (16 - width)); + mask[idx] = flow_modify_info_mask_16(width, off_be); + else + info[idx].offset = off_be; break; case RTE_FLOW_FIELD_TCP_SEQ_NUM: + MLX5_ASSERT(data->offset + width <= 32); + off_be = 32 - (data->offset + width); info[idx] = (struct field_modify_info){4, 0, MLX5_MODI_OUT_TCP_SEQ_NUM}; if (mask) - mask[idx] = rte_cpu_to_be_32(0xffffffff >> - (32 - width)); + mask[idx] = flow_modify_info_mask_32(width, off_be); + else + info[idx].offset = off_be; break; case RTE_FLOW_FIELD_TCP_ACK_NUM: + MLX5_ASSERT(data->offset + width <= 32); + off_be = 32 - (data->offset + width); info[idx] = (struct field_modify_info){4, 0, MLX5_MODI_OUT_TCP_ACK_NUM}; if (mask) - mask[idx] = rte_cpu_to_be_32(0xffffffff >> - (32 - width)); + mask[idx] = flow_modify_info_mask_32(width, off_be); + else + info[idx].offset = off_be; break; case RTE_FLOW_FIELD_TCP_FLAGS: + MLX5_ASSERT(data->offset + width <= 9); + off_be = 9 - (data->offset + width); info[idx] = (struct field_modify_info){2, 0, MLX5_MODI_OUT_TCP_FLAGS}; if (mask) - mask[idx] = rte_cpu_to_be_16(0x1ff >> (9 - width)); + mask[idx] = flow_modify_info_mask_16(width, off_be); + else + info[idx].offset = off_be; break; case RTE_FLOW_FIELD_UDP_PORT_SRC: + MLX5_ASSERT(data->offset + width <= 16); + off_be = 16 - (data->offset + width); info[idx] = (struct field_modify_info){2, 0, MLX5_MODI_OUT_UDP_SPORT}; if (mask) - mask[idx] = rte_cpu_to_be_16(0xffff >> (16 - width)); + mask[idx] = flow_modify_info_mask_16(width, off_be); + else + info[idx].offset = off_be; 
break; case RTE_FLOW_FIELD_UDP_PORT_DST: + MLX5_ASSERT(data->offset + width <= 16); + off_be = 16 - (data->offset + width); info[idx] = (struct field_modify_info){2, 0, MLX5_MODI_OUT_UDP_DPORT}; if (mask) - mask[idx] = rte_cpu_to_be_16(0xffff >> (16 - width)); + mask[idx] = flow_modify_info_mask_16(width, off_be); + else + info[idx].offset = off_be; break; case RTE_FLOW_FIELD_VXLAN_VNI: - /* not supported yet */ + MLX5_ASSERT(data->offset + width <= 24); + /* VNI is on bits 31-8 of TUNNEL_HDR_DW_1. */ + off_be = 24 - (data->offset + width) + 8; + info[idx] = (struct field_modify_info){4, 0, + MLX5_MODI_TUNNEL_HDR_DW_1}; + if (mask) + mask[idx] = flow_modify_info_mask_32(width, off_be); + else + info[idx].offset = off_be; break; case RTE_FLOW_FIELD_GENEVE_VNI: /* not supported yet*/ break; case RTE_FLOW_FIELD_GTP_TEID: + MLX5_ASSERT(data->offset + width <= 32); + off_be = 32 - (data->offset + width); info[idx] = (struct field_modify_info){4, 0, MLX5_MODI_GTP_TEID}; if (mask) - mask[idx] = rte_cpu_to_be_32(0xffffffff >> - (32 - width)); + mask[idx] = flow_modify_info_mask_32(width, off_be); + else + info[idx].offset = off_be; break; case RTE_FLOW_FIELD_TAG: { - int reg = mlx5_flow_get_reg_id(dev, MLX5_APP_TAG, - data->level, error); + MLX5_ASSERT(data->offset + width <= 32); + int reg; + + if (priv->sh->config.dv_flow_en == 2) + reg = REG_C_1; + else + reg = mlx5_flow_get_reg_id(dev, MLX5_APP_TAG, + data->level, error); if (reg < 0) return; MLX5_ASSERT(reg != REG_NON); @@ -1797,15 +1794,18 @@ mlx5_flow_field_id_to_modify_info info[idx] = (struct field_modify_info){4, 0, reg_to_field[reg]}; if (mask) - mask[idx] = - rte_cpu_to_be_32(0xffffffff >> - (32 - width)); + mask[idx] = flow_modify_info_mask_32 + (width, data->offset); + else + info[idx].offset = data->offset; } break; case RTE_FLOW_FIELD_MARK: { uint32_t mark_mask = priv->sh->dv_mark_mask; uint32_t mark_count = __builtin_popcount(mark_mask); + RTE_SET_USED(mark_count); + MLX5_ASSERT(data->offset + width <= mark_count); int reg = mlx5_flow_get_reg_id(dev, MLX5_FLOW_MARK, 0, error); if (reg < 0) @@ -1815,14 +1815,18 @@ mlx5_flow_field_id_to_modify_info info[idx] = (struct field_modify_info){4, 0, reg_to_field[reg]}; if (mask) - mask[idx] = rte_cpu_to_be_32((mark_mask >> - (mark_count - width)) & mark_mask); + mask[idx] = flow_modify_info_mask_32_masked + (width, data->offset, mark_mask); + else + info[idx].offset = data->offset; } break; case RTE_FLOW_FIELD_META: { uint32_t meta_mask = priv->sh->dv_meta_mask; uint32_t meta_count = __builtin_popcount(meta_mask); + RTE_SET_USED(meta_count); + MLX5_ASSERT(data->offset + width <= meta_count); int reg = flow_dv_get_metadata_reg(dev, attr, error); if (reg < 0) return; @@ -1831,16 +1835,22 @@ mlx5_flow_field_id_to_modify_info info[idx] = (struct field_modify_info){4, 0, reg_to_field[reg]}; if (mask) - mask[idx] = rte_cpu_to_be_32((meta_mask >> - (meta_count - width)) & meta_mask); + mask[idx] = flow_modify_info_mask_32_masked + (width, data->offset, meta_mask); + else + info[idx].offset = data->offset; } break; case RTE_FLOW_FIELD_IPV4_ECN: case RTE_FLOW_FIELD_IPV6_ECN: + MLX5_ASSERT(data->offset + width <= 2); + off_be = 2 - (data->offset + width); info[idx] = (struct field_modify_info){1, 0, MLX5_MODI_OUT_IP_ECN}; if (mask) - mask[idx] = 0x3 >> (2 - width); + mask[idx] = flow_modify_info_mask_8(width, off_be); + else + info[idx].offset = off_be; break; case RTE_FLOW_FIELD_POINTER: case RTE_FLOW_FIELD_VALUE: diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c index 
b6978bd051..b89d2cc44f 100644 --- a/drivers/net/mlx5/mlx5_flow_hw.c +++ b/drivers/net/mlx5/mlx5_flow_hw.c @@ -319,6 +319,11 @@ __flow_hw_action_template_destroy(struct rte_eth_dev *dev, mlx5_hlist_unregister(priv->sh->flow_tbls, &grp->entry); acts->jump = NULL; } + if (acts->mhdr) { + if (acts->mhdr->action) + mlx5dr_action_destroy(acts->mhdr->action); + mlx5_free(acts->mhdr); + } } /** @@ -425,6 +430,37 @@ __flow_hw_act_data_encap_append(struct mlx5_priv *priv, return 0; } +static __rte_always_inline int +__flow_hw_act_data_hdr_modify_append(struct mlx5_priv *priv, + struct mlx5_hw_actions *acts, + enum rte_flow_action_type type, + uint16_t action_src, + uint16_t action_dst, + uint16_t mhdr_cmds_off, + uint16_t mhdr_cmds_end, + bool shared, + struct field_modify_info *field, + struct field_modify_info *dcopy, + uint32_t *mask) +{ + struct mlx5_action_construct_data *act_data; + + act_data = __flow_hw_act_data_alloc(priv, type, action_src, action_dst); + if (!act_data) + return -1; + act_data->modify_header.mhdr_cmds_off = mhdr_cmds_off; + act_data->modify_header.mhdr_cmds_end = mhdr_cmds_end; + act_data->modify_header.shared = shared; + rte_memcpy(act_data->modify_header.field, field, + sizeof(*field) * MLX5_ACT_MAX_MOD_FIELDS); + rte_memcpy(act_data->modify_header.dcopy, dcopy, + sizeof(*dcopy) * MLX5_ACT_MAX_MOD_FIELDS); + rte_memcpy(act_data->modify_header.mask, mask, + sizeof(*mask) * MLX5_ACT_MAX_MOD_FIELDS); + LIST_INSERT_HEAD(&acts->act_list, act_data, next); + return 0; +} + /** * Append shared RSS action to the dynamic action list. * @@ -515,6 +551,257 @@ flow_hw_shared_action_translate(struct rte_eth_dev *dev, return 0; } +static __rte_always_inline bool +flow_hw_action_modify_field_is_shared(const struct rte_flow_action *action, + const struct rte_flow_action *mask) +{ + const struct rte_flow_action_modify_field *v = action->conf; + const struct rte_flow_action_modify_field *m = mask->conf; + + if (v->src.field == RTE_FLOW_FIELD_VALUE) { + uint32_t j; + + if (m == NULL) + return false; + for (j = 0; j < RTE_DIM(m->src.value); ++j) { + /* + * Immediate value is considered to be masked + * (and thus shared by all flow rules), if mask + * is non-zero. Partial mask over immediate value + * is not allowed. + */ + if (m->src.value[j]) + return true; + } + return false; + } + if (v->src.field == RTE_FLOW_FIELD_POINTER) + return m->src.pvalue != NULL; + /* + * Source field types other than VALUE and + * POINTER are always shared. 
+ */ + return true; +} + +static __rte_always_inline bool +flow_hw_should_insert_nop(const struct mlx5_hw_modify_header_action *mhdr, + const struct mlx5_modification_cmd *cmd) +{ + struct mlx5_modification_cmd last_cmd = { { 0 } }; + struct mlx5_modification_cmd new_cmd = { { 0 } }; + const uint32_t cmds_num = mhdr->mhdr_cmds_num; + unsigned int last_type; + bool should_insert = false; + + if (cmds_num == 0) + return false; + last_cmd = *(&mhdr->mhdr_cmds[cmds_num - 1]); + last_cmd.data0 = rte_be_to_cpu_32(last_cmd.data0); + last_cmd.data1 = rte_be_to_cpu_32(last_cmd.data1); + last_type = last_cmd.action_type; + new_cmd = *cmd; + new_cmd.data0 = rte_be_to_cpu_32(new_cmd.data0); + new_cmd.data1 = rte_be_to_cpu_32(new_cmd.data1); + switch (new_cmd.action_type) { + case MLX5_MODIFICATION_TYPE_SET: + case MLX5_MODIFICATION_TYPE_ADD: + if (last_type == MLX5_MODIFICATION_TYPE_SET || + last_type == MLX5_MODIFICATION_TYPE_ADD) + should_insert = new_cmd.field == last_cmd.field; + else if (last_type == MLX5_MODIFICATION_TYPE_COPY) + should_insert = new_cmd.field == last_cmd.dst_field; + else if (last_type == MLX5_MODIFICATION_TYPE_NOP) + should_insert = false; + else + MLX5_ASSERT(false); /* Other types are not supported. */ + break; + case MLX5_MODIFICATION_TYPE_COPY: + if (last_type == MLX5_MODIFICATION_TYPE_SET || + last_type == MLX5_MODIFICATION_TYPE_ADD) + should_insert = (new_cmd.field == last_cmd.field || + new_cmd.dst_field == last_cmd.field); + else if (last_type == MLX5_MODIFICATION_TYPE_COPY) + should_insert = (new_cmd.field == last_cmd.dst_field || + new_cmd.dst_field == last_cmd.dst_field); + else if (last_type == MLX5_MODIFICATION_TYPE_NOP) + should_insert = false; + else + MLX5_ASSERT(false); /* Other types are not supported. */ + break; + default: + /* Other action types should be rejected on AT validation. */ + MLX5_ASSERT(false); + break; + } + return should_insert; +} + +static __rte_always_inline int +flow_hw_mhdr_cmd_nop_append(struct mlx5_hw_modify_header_action *mhdr) +{ + struct mlx5_modification_cmd *nop; + uint32_t num = mhdr->mhdr_cmds_num; + + if (num + 1 >= MLX5_MHDR_MAX_CMD) + return -ENOMEM; + nop = mhdr->mhdr_cmds + num; + nop->data0 = 0; + nop->action_type = MLX5_MODIFICATION_TYPE_NOP; + nop->data0 = rte_cpu_to_be_32(nop->data0); + nop->data1 = 0; + mhdr->mhdr_cmds_num = num + 1; + return 0; +} + +static __rte_always_inline int +flow_hw_mhdr_cmd_append(struct mlx5_hw_modify_header_action *mhdr, + struct mlx5_modification_cmd *cmd) +{ + uint32_t num = mhdr->mhdr_cmds_num; + + if (num + 1 >= MLX5_MHDR_MAX_CMD) + return -ENOMEM; + mhdr->mhdr_cmds[num] = *cmd; + mhdr->mhdr_cmds_num = num + 1; + return 0; +} + +static __rte_always_inline int +flow_hw_converted_mhdr_cmds_append(struct mlx5_hw_modify_header_action *mhdr, + struct mlx5_flow_dv_modify_hdr_resource *resource) +{ + uint32_t idx; + int ret; + + for (idx = 0; idx < resource->actions_num; ++idx) { + struct mlx5_modification_cmd *src = &resource->actions[idx]; + + if (flow_hw_should_insert_nop(mhdr, src)) { + ret = flow_hw_mhdr_cmd_nop_append(mhdr); + if (ret) + return ret; + } + ret = flow_hw_mhdr_cmd_append(mhdr, src); + if (ret) + return ret; + } + return 0; +} + +static __rte_always_inline void +flow_hw_modify_field_init(struct mlx5_hw_modify_header_action *mhdr, + struct rte_flow_actions_template *at) +{ + memset(mhdr, 0, sizeof(*mhdr)); + /* Modify header action without any commands is shared by default. 
*/ + mhdr->shared = true; + mhdr->pos = at->mhdr_off; +} + +static __rte_always_inline int +flow_hw_modify_field_compile(struct rte_eth_dev *dev, + const struct rte_flow_attr *attr, + const struct rte_flow_action *action_start, /* Start of AT actions. */ + const struct rte_flow_action *action, /* Current action from AT. */ + const struct rte_flow_action *action_mask, /* Current mask from AT. */ + struct mlx5_hw_actions *acts, + struct mlx5_hw_modify_header_action *mhdr, + struct rte_flow_error *error) +{ + struct mlx5_priv *priv = dev->data->dev_private; + const struct rte_flow_action_modify_field *conf = action->conf; + union { + struct mlx5_flow_dv_modify_hdr_resource resource; + uint8_t data[sizeof(struct mlx5_flow_dv_modify_hdr_resource) + + sizeof(struct mlx5_modification_cmd) * MLX5_MHDR_MAX_CMD]; + } dummy; + struct mlx5_flow_dv_modify_hdr_resource *resource; + struct rte_flow_item item = { + .spec = NULL, + .mask = NULL + }; + struct field_modify_info field[MLX5_ACT_MAX_MOD_FIELDS] = { + {0, 0, MLX5_MODI_OUT_NONE} }; + struct field_modify_info dcopy[MLX5_ACT_MAX_MOD_FIELDS] = { + {0, 0, MLX5_MODI_OUT_NONE} }; + uint32_t mask[MLX5_ACT_MAX_MOD_FIELDS] = { 0 }; + uint32_t type, value = 0; + uint16_t cmds_start, cmds_end; + bool shared; + int ret; + + /* + * Modify header action is shared if previous modify_field actions + * are shared and currently compiled action is shared. + */ + shared = flow_hw_action_modify_field_is_shared(action, action_mask); + mhdr->shared &= shared; + if (conf->src.field == RTE_FLOW_FIELD_POINTER || + conf->src.field == RTE_FLOW_FIELD_VALUE) { + type = conf->operation == RTE_FLOW_MODIFY_SET ? MLX5_MODIFICATION_TYPE_SET : + MLX5_MODIFICATION_TYPE_ADD; + /* For SET/ADD fill the destination field (field) first. */ + mlx5_flow_field_id_to_modify_info(&conf->dst, field, mask, + conf->width, dev, + attr, error); + item.spec = conf->src.field == RTE_FLOW_FIELD_POINTER ? + (void *)(uintptr_t)conf->src.pvalue : + (void *)(uintptr_t)&conf->src.value; + if (conf->dst.field == RTE_FLOW_FIELD_META || + conf->dst.field == RTE_FLOW_FIELD_TAG) { + value = *(const unaligned_uint32_t *)item.spec; + value = rte_cpu_to_be_32(value); + item.spec = &value; + } + } else { + type = MLX5_MODIFICATION_TYPE_COPY; + /* For COPY fill the destination field (dcopy) without mask. */ + mlx5_flow_field_id_to_modify_info(&conf->dst, dcopy, NULL, + conf->width, dev, + attr, error); + /* Then construct the source field (field) with mask. */ + mlx5_flow_field_id_to_modify_info(&conf->src, field, mask, + conf->width, dev, + attr, error); + } + item.mask = &mask; + memset(&dummy, 0, sizeof(dummy)); + resource = &dummy.resource; + ret = flow_dv_convert_modify_action(&item, field, dcopy, resource, type, error); + if (ret) + return ret; + MLX5_ASSERT(resource->actions_num > 0); + /* + * If previous modify field action collide with this one, then insert NOP command. + * This NOP command will not be a part of action's command range used to update commands + * on rule creation. 
+ */ + if (flow_hw_should_insert_nop(mhdr, &resource->actions[0])) { + ret = flow_hw_mhdr_cmd_nop_append(mhdr); + if (ret) + return rte_flow_error_set(error, ret, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, "too many modify field operations specified"); + } + cmds_start = mhdr->mhdr_cmds_num; + ret = flow_hw_converted_mhdr_cmds_append(mhdr, resource); + if (ret) + return rte_flow_error_set(error, ret, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, "too many modify field operations specified"); + + cmds_end = mhdr->mhdr_cmds_num; + if (shared) + return 0; + ret = __flow_hw_act_data_hdr_modify_append(priv, acts, RTE_FLOW_ACTION_TYPE_MODIFY_FIELD, + action - action_start, mhdr->pos, + cmds_start, cmds_end, shared, + field, dcopy, mask); + if (ret) + return rte_flow_error_set(error, ENOMEM, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, "not enough memory to store modify field metadata"); + return 0; +} + /** * Translate rte_flow actions to DR action. * @@ -558,10 +845,12 @@ flow_hw_actions_translate(struct rte_eth_dev *dev, uint16_t reformat_pos = MLX5_HW_MAX_ACTS, reformat_src = 0; uint8_t *encap_data = NULL, *encap_data_m = NULL; size_t data_size = 0; + struct mlx5_hw_modify_header_action mhdr = { 0 }; bool actions_end = false; uint32_t type, i; int err; + flow_hw_modify_field_init(&mhdr, at); if (attr->transfer) type = MLX5DR_TABLE_TYPE_FDB; else if (attr->egress) @@ -714,6 +1003,15 @@ flow_hw_actions_translate(struct rte_eth_dev *dev, reformat_pos = i++; refmt_type = MLX5DR_ACTION_REFORMAT_TYPE_TNL_L2_TO_L2; break; + case RTE_FLOW_ACTION_TYPE_MODIFY_FIELD: + if (mhdr.pos == UINT16_MAX) + mhdr.pos = i++; + err = flow_hw_modify_field_compile(dev, attr, action_start, + actions, masks, acts, &mhdr, + error); + if (err) + goto err; + break; case RTE_FLOW_ACTION_TYPE_END: actions_end = true; break; @@ -721,6 +1019,31 @@ flow_hw_actions_translate(struct rte_eth_dev *dev, break; } } + if (mhdr.pos != UINT16_MAX) { + uint32_t flags; + uint32_t bulk_size; + size_t mhdr_len; + + acts->mhdr = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*acts->mhdr), + 0, SOCKET_ID_ANY); + if (!acts->mhdr) + goto err; + rte_memcpy(acts->mhdr, &mhdr, sizeof(*acts->mhdr)); + mhdr_len = sizeof(struct mlx5_modification_cmd) * acts->mhdr->mhdr_cmds_num; + flags = mlx5_hw_act_flag[!!attr->group][type]; + if (acts->mhdr->shared) { + flags |= MLX5DR_ACTION_FLAG_SHARED; + bulk_size = 0; + } else { + bulk_size = rte_log2_u32(table_attr->nb_flows); + } + acts->mhdr->action = mlx5dr_action_create_modify_header + (priv->dr_ctx, mhdr_len, (__be64 *)acts->mhdr->mhdr_cmds, + bulk_size, flags); + if (!acts->mhdr->action) + goto err; + acts->rule_acts[acts->mhdr->pos].action = acts->mhdr->action; + } if (reformat_pos != MLX5_HW_MAX_ACTS) { uint8_t buf[MLX5_ENCAP_MAX_LEN]; bool shared_rfmt = true; @@ -884,6 +1207,100 @@ flow_hw_shared_action_construct(struct rte_eth_dev *dev, return 0; } +static __rte_always_inline int +flow_hw_mhdr_cmd_is_nop(const struct mlx5_modification_cmd *cmd) +{ + struct mlx5_modification_cmd cmd_he = { + .data0 = rte_be_to_cpu_32(cmd->data0), + .data1 = 0, + }; + + return cmd_he.action_type == MLX5_MODIFICATION_TYPE_NOP; +} + +/** + * Construct flow action array. + * + * For action template contains dynamic actions, these actions need to + * be updated according to the rte_flow action during flow creation. + * + * @param[in] dev + * Pointer to the rte_eth_dev structure. + * @param[in] job + * Pointer to job descriptor. + * @param[in] hw_acts + * Pointer to translated actions from template. 
+ * @param[in] it_idx + * Item template index the action template refer to. + * @param[in] actions + * Array of rte_flow action need to be checked. + * @param[in] rule_acts + * Array of DR rule actions to be used during flow creation.. + * @param[in] acts_num + * Pointer to the real acts_num flow has. + * + * @return + * 0 on success, negative value otherwise and rte_errno is set. + */ +static __rte_always_inline int +flow_hw_modify_field_construct(struct mlx5_hw_q_job *job, + struct mlx5_action_construct_data *act_data, + const struct mlx5_hw_actions *hw_acts, + const struct rte_flow_action *action) +{ + const struct rte_flow_action_modify_field *mhdr_action = action->conf; + uint8_t values[16] = { 0 }; + unaligned_uint32_t *value_p; + uint32_t i; + struct field_modify_info *field; + + if (!hw_acts->mhdr) + return -1; + if (hw_acts->mhdr->shared || act_data->modify_header.shared) + return 0; + MLX5_ASSERT(mhdr_action->operation == RTE_FLOW_MODIFY_SET || + mhdr_action->operation == RTE_FLOW_MODIFY_ADD); + if (mhdr_action->src.field != RTE_FLOW_FIELD_VALUE && + mhdr_action->src.field != RTE_FLOW_FIELD_POINTER) + return 0; + if (mhdr_action->src.field == RTE_FLOW_FIELD_VALUE) + rte_memcpy(values, &mhdr_action->src.value, sizeof(values)); + else + rte_memcpy(values, mhdr_action->src.pvalue, sizeof(values)); + if (mhdr_action->dst.field == RTE_FLOW_FIELD_META || + mhdr_action->dst.field == RTE_FLOW_FIELD_TAG) { + value_p = (unaligned_uint32_t *)values; + *value_p = rte_cpu_to_be_32(*value_p); + } + i = act_data->modify_header.mhdr_cmds_off; + field = act_data->modify_header.field; + do { + uint32_t off_b; + uint32_t mask; + uint32_t data; + const uint8_t *mask_src; + + if (i >= act_data->modify_header.mhdr_cmds_end) + return -1; + if (flow_hw_mhdr_cmd_is_nop(&job->mhdr_cmd[i])) { + ++i; + continue; + } + mask_src = (const uint8_t *)act_data->modify_header.mask; + mask = flow_dv_fetch_field(mask_src + field->offset, field->size); + if (!mask) { + ++field; + continue; + } + off_b = rte_bsf32(mask); + data = flow_dv_fetch_field(values + field->offset, field->size); + data = (data & mask) >> off_b; + job->mhdr_cmd[i++].data1 = rte_cpu_to_be_32(data); + ++field; + } while (field->size); + return 0; +} + /** * Construct flow action array. 
* @@ -928,6 +1345,7 @@ flow_hw_actions_construct(struct rte_eth_dev *dev, }; uint32_t ft_flag; size_t encap_len = 0; + int ret; memcpy(rule_acts, hw_acts->rule_acts, sizeof(*rule_acts) * hw_acts->acts_num); @@ -945,6 +1363,18 @@ flow_hw_actions_construct(struct rte_eth_dev *dev, } else { attr.ingress = 1; } + if (hw_acts->mhdr && hw_acts->mhdr->mhdr_cmds_num > 0) { + uint16_t pos = hw_acts->mhdr->pos; + + if (!hw_acts->mhdr->shared) { + rule_acts[pos].modify_header.offset = + job->flow->idx - 1; + rule_acts[pos].modify_header.data = + (uint8_t *)job->mhdr_cmd; + rte_memcpy(job->mhdr_cmd, hw_acts->mhdr->mhdr_cmds, + sizeof(*job->mhdr_cmd) * hw_acts->mhdr->mhdr_cmds_num); + } + } LIST_FOREACH(act_data, &hw_acts->act_list, next) { uint32_t jump_group; uint32_t tag; @@ -1020,6 +1450,14 @@ flow_hw_actions_construct(struct rte_eth_dev *dev, MLX5_ASSERT(raw_encap_data->size == act_data->encap.len); break; + case RTE_FLOW_ACTION_TYPE_MODIFY_FIELD: + ret = flow_hw_modify_field_construct(job, + act_data, + hw_acts, + action); + if (ret) + return -1; + break; default: break; } @@ -2093,6 +2531,8 @@ flow_hw_configure(struct rte_eth_dev *dev, } mem_size += (sizeof(struct mlx5_hw_q_job *) + sizeof(uint8_t) * MLX5_ENCAP_MAX_LEN + + sizeof(struct mlx5_modification_cmd) * + MLX5_MHDR_MAX_CMD + sizeof(struct mlx5_hw_q_job)) * queue_attr[0]->size; } @@ -2104,6 +2544,7 @@ flow_hw_configure(struct rte_eth_dev *dev, } for (i = 0; i < nb_queue; i++) { uint8_t *encap = NULL; + struct mlx5_modification_cmd *mhdr_cmd = NULL; priv->hw_q[i].job_idx = queue_attr[i]->size; priv->hw_q[i].size = queue_attr[i]->size; @@ -2115,8 +2556,10 @@ flow_hw_configure(struct rte_eth_dev *dev, &job[queue_attr[i - 1]->size]; job = (struct mlx5_hw_q_job *) &priv->hw_q[i].job[queue_attr[i]->size]; - encap = (uint8_t *)&job[queue_attr[i]->size]; + mhdr_cmd = (struct mlx5_modification_cmd *)&job[queue_attr[i]->size]; + encap = (uint8_t *)&mhdr_cmd[queue_attr[i]->size * MLX5_MHDR_MAX_CMD]; for (j = 0; j < queue_attr[i]->size; j++) { + job[j].mhdr_cmd = &mhdr_cmd[j * MLX5_MHDR_MAX_CMD]; job[j].encap_data = &encap[j * MLX5_ENCAP_MAX_LEN]; priv->hw_q[i].job[j] = &job[j]; } From patchwork Fri Sep 23 14:43:12 2022 X-Patchwork-Submitter: Suanming Mou X-Patchwork-Id: 116743 X-Patchwork-Delegate: thomas@monjalon.net
From: Suanming Mou To: Matan Azrad , Viacheslav Ovsiienko CC: , Dariusz Sosnowski Subject: [PATCH 05/27] net/mlx5: validate modify field action template Date: Fri, 23 Sep 2022 17:43:12 +0300 Message-ID: <20220923144334.27736-6-suanmingm@nvidia.com> In-Reply-To: <20220923144334.27736-1-suanmingm@nvidia.com> References: <20220923144334.27736-1-suanmingm@nvidia.com>
From: Dariusz Sosnowski This patch adds a validation step for action templates and validates whether RTE_FLOW_ACTION_TYPE_MODIFY_FIELD actions' fields are properly masked.
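As an illustrative sketch only (not taken from the patch itself), and assuming the DPDK 22.11 actions-template API with a hypothetical GTP TEID destination and immediate value, an actions/masks pair that satisfies these checks could look as follows. The mask entry repeats the operation and field ids of the template entry, fully masks level, offset, and width with UINT32_MAX, and supplies a non-zero immediate-value mask so the value is treated as shared across flow rules:

    #include <stdint.h>
    #include <rte_flow.h>

    /* Illustrative sketch: SET a 32-bit immediate value into GTP TEID. */
    static const struct rte_flow_action_modify_field mf_conf = {
        .operation = RTE_FLOW_MODIFY_SET,
        .dst = {
            .field = RTE_FLOW_FIELD_GTP_TEID,
            .level = 0,
            .offset = 0,
        },
        .src = {
            .field = RTE_FLOW_FIELD_VALUE,
            .value = { 0x12, 0x34, 0x56, 0x78 }, /* Hypothetical TEID. */
        },
        .width = 32,
    };

    /*
     * Mask entry: operation and field ids must equal the template entry,
     * level/offset/width must be fully masked, and a non-zero immediate
     * value mask marks the immediate as shared (see validation below).
     */
    static const struct rte_flow_action_modify_field mf_mask = {
        .operation = RTE_FLOW_MODIFY_SET,
        .dst = {
            .field = RTE_FLOW_FIELD_GTP_TEID,
            .level = UINT32_MAX,
            .offset = UINT32_MAX,
        },
        .src = {
            .field = RTE_FLOW_FIELD_VALUE,
            .value = { 0xff, 0xff, 0xff, 0xff },
        },
        .width = UINT32_MAX,
    };

    static const struct rte_flow_action actions[] = {
        { .type = RTE_FLOW_ACTION_TYPE_MODIFY_FIELD, .conf = &mf_conf },
        { .type = RTE_FLOW_ACTION_TYPE_END },
    };

    static const struct rte_flow_action masks[] = {
        { .type = RTE_FLOW_ACTION_TYPE_MODIFY_FIELD, .conf = &mf_mask },
        { .type = RTE_FLOW_ACTION_TYPE_END },
    };

Such a pair would then be passed as the actions and masks arguments of rte_flow_actions_template_create(); a partially masked width, level, or offset would instead be rejected with EINVAL by the checks added below.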
Signed-off-by: Dariusz Sosnowski --- drivers/net/mlx5/mlx5_flow_hw.c | 132 ++++++++++++++++++++++++++++++++ 1 file changed, 132 insertions(+) diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c index b89d2cc44f..1f98e1248a 100644 --- a/drivers/net/mlx5/mlx5_flow_hw.c +++ b/drivers/net/mlx5/mlx5_flow_hw.c @@ -2047,6 +2047,136 @@ flow_hw_table_destroy(struct rte_eth_dev *dev, return 0; } +static int +flow_hw_validate_action_modify_field(const struct rte_flow_action *action, + const struct rte_flow_action *mask, + struct rte_flow_error *error) +{ + const struct rte_flow_action_modify_field *action_conf = + action->conf; + const struct rte_flow_action_modify_field *mask_conf = + mask->conf; + + if (action_conf->operation != mask_conf->operation) + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, action, + "modify_field operation mask and template are not equal"); + if (action_conf->dst.field != mask_conf->dst.field) + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, action, + "destination field mask and template are not equal"); + if (action_conf->dst.field == RTE_FLOW_FIELD_POINTER || + action_conf->dst.field == RTE_FLOW_FIELD_VALUE) + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, action, + "immediate value and pointer cannot be used as destination"); + if (mask_conf->dst.level != UINT32_MAX) + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, action, + "destination encapsulation level must be fully masked"); + if (mask_conf->dst.offset != UINT32_MAX) + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, action, + "destination offset must be fully masked"); + if (action_conf->src.field != mask_conf->src.field) + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, action, + "source field mask and template are not equal"); + if (action_conf->src.field != RTE_FLOW_FIELD_POINTER && + action_conf->src.field != RTE_FLOW_FIELD_VALUE) { + if (mask_conf->src.level != UINT32_MAX) + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, action, + "source encapsulation level must be fully masked"); + if (mask_conf->src.offset != UINT32_MAX) + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, action, + "source offset must be fully masked"); + } + if (mask_conf->width != UINT32_MAX) + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, action, + "modify_field width field must be fully masked"); + return 0; +} + +static int +flow_hw_action_validate(const struct rte_flow_action actions[], + const struct rte_flow_action masks[], + struct rte_flow_error *error) +{ + int i; + bool actions_end = false; + int ret; + + for (i = 0; !actions_end; ++i) { + const struct rte_flow_action *action = &actions[i]; + const struct rte_flow_action *mask = &masks[i]; + + if (action->type != mask->type) + return rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_ACTION, + action, + "mask type does not match action type"); + switch (action->type) { + case RTE_FLOW_ACTION_TYPE_VOID: + break; + case RTE_FLOW_ACTION_TYPE_INDIRECT: + /* TODO: Validation logic */ + break; + case RTE_FLOW_ACTION_TYPE_MARK: + /* TODO: Validation logic */ + break; + case RTE_FLOW_ACTION_TYPE_DROP: + /* TODO: Validation logic */ + break; + case RTE_FLOW_ACTION_TYPE_JUMP: + /* TODO: Validation logic */ + break; + case RTE_FLOW_ACTION_TYPE_QUEUE: + /* TODO: Validation logic */ + break; + case RTE_FLOW_ACTION_TYPE_RSS: + /*
TODO: Validation logic */ + break; + case RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP: + /* TODO: Validation logic */ + break; + case RTE_FLOW_ACTION_TYPE_NVGRE_ENCAP: + /* TODO: Validation logic */ + break; + case RTE_FLOW_ACTION_TYPE_VXLAN_DECAP: + /* TODO: Validation logic */ + break; + case RTE_FLOW_ACTION_TYPE_NVGRE_DECAP: + /* TODO: Validation logic */ + break; + case RTE_FLOW_ACTION_TYPE_RAW_ENCAP: + /* TODO: Validation logic */ + break; + case RTE_FLOW_ACTION_TYPE_RAW_DECAP: + /* TODO: Validation logic */ + break; + case RTE_FLOW_ACTION_TYPE_MODIFY_FIELD: + ret = flow_hw_validate_action_modify_field(action, + mask, + error); + if (ret < 0) + return ret; + break; + case RTE_FLOW_ACTION_TYPE_END: + actions_end = true; + break; + default: + return rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_ACTION, + action, + "action not supported in template API"); + } + } + return 0; +} + /** * Create flow action template. * @@ -2075,6 +2205,8 @@ flow_hw_actions_template_create(struct rte_eth_dev *dev, int len, act_len, mask_len, i; struct rte_flow_actions_template *at; + if (flow_hw_action_validate(actions, masks, error)) + return NULL; act_len = rte_flow_conv(RTE_FLOW_CONV_OP_ACTIONS, NULL, 0, actions, error); if (act_len <= 0) From patchwork Fri Sep 23 14:43:13 2022 X-Patchwork-Submitter: Suanming Mou X-Patchwork-Id: 116744 X-Patchwork-Delegate: thomas@monjalon.net
From: Suanming Mou To: Matan Azrad , Viacheslav Ovsiienko , Raja Zidane CC: , Bing Zhao Subject: [PATCH 06/27] net/mlx5: enable mark flag for all ports in the same domain Date: Fri, 23 Sep 2022 17:43:13 +0300 Message-ID: <20220923144334.27736-7-suanmingm@nvidia.com> In-Reply-To: <20220923144334.27736-1-suanmingm@nvidia.com> References: <20220923144334.27736-1-suanmingm@nvidia.com>
From: Bing Zhao In switchdev mode, there is a single FDB domain shared by all the representors, and only the E-Switch manager can insert rules into this domain. Consider a flow rule like the one below: flow create 0 ingress transfer pattern port_id id is X / eth / end actions mark id 25 ... It is used for representor X, but the mark flag was not enabled for the queues of that port. To fix this, whenever the mark flag needs to be enabled in the FDB case, it is now set on the Rx queues of all ports belonging to the same domain, and only once. Fixes: e211aca851a7 ("net/mlx5: fix mark enabling for Rx") Signed-off-by: Bing Zhao --- drivers/net/mlx5/mlx5.h | 2 ++ drivers/net/mlx5/mlx5_flow.c | 28 ++++++++++++++++++++++++---- 2 files changed, 26 insertions(+), 4 deletions(-) diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h index f3bd45d4c5..18d70e795f 100644 --- a/drivers/net/mlx5/mlx5.h +++ b/drivers/net/mlx5/mlx5.h @@ -1202,6 +1202,8 @@ struct mlx5_dev_ctx_shared { uint32_t flow_priority_check_flag:1; /* Check Flag for flow priority. */ uint32_t metadata_regc_check_flag:1; /* Check Flag for metadata REGC. */ uint32_t hws_tags:1; /* Check if tags info for HWS initialized.
+	uint32_t shared_mark_enabled:1;
+	/* If mark action is enabled on Rxqs (shared E-Switch domain). */
 	uint32_t max_port; /* Maximal IB device port index. */
 	struct mlx5_bond_info bond; /* Bonding information. */
 	struct mlx5_common_device *cdev; /* Backend mlx5 device. */

diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index 3abb39aa92..c856d249db 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -1481,13 +1481,32 @@ flow_rxq_mark_flag_set(struct rte_eth_dev *dev)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
 	struct mlx5_rxq_ctrl *rxq_ctrl;
+	uint16_t port_id;

-	if (priv->mark_enabled)
+	if (priv->sh->shared_mark_enabled)
 		return;
-	LIST_FOREACH(rxq_ctrl, &priv->rxqsctrl, next) {
-		rxq_ctrl->rxq.mark = 1;
+	if (priv->master || priv->representor) {
+		MLX5_ETH_FOREACH_DEV(port_id, dev->device) {
+			struct mlx5_priv *opriv =
+				rte_eth_devices[port_id].data->dev_private;
+
+			if (!opriv ||
+			    opriv->sh != priv->sh ||
+			    opriv->domain_id != priv->domain_id ||
+			    opriv->mark_enabled)
+				continue;
+			LIST_FOREACH(rxq_ctrl, &opriv->rxqsctrl, next) {
+				rxq_ctrl->rxq.mark = 1;
+			}
+			opriv->mark_enabled = 1;
+		}
+	} else {
+		LIST_FOREACH(rxq_ctrl, &priv->rxqsctrl, next) {
+			rxq_ctrl->rxq.mark = 1;
+		}
+		priv->mark_enabled = 1;
 	}
-	priv->mark_enabled = 1;
+	priv->sh->shared_mark_enabled = 1;
 }

 /**
@@ -1623,6 +1642,7 @@ flow_rxq_flags_clear(struct rte_eth_dev *dev)
 			rxq->ctrl->rxq.tunnel = 0;
 	}
 	priv->mark_enabled = 0;
+	priv->sh->shared_mark_enabled = 0;
 }

 /**
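For context, the per-queue mark flag only controls whether the Rx burst
path copies the MARK id into the received mbuf; after this fix it is set
for the queues of every port sharing the E-Switch domain. A minimal,
illustrative sketch of the receiving side (standard rte_ethdev/rte_mbuf
API; handle_mark_burst is a hypothetical helper, not part of the patch):

#include <rte_ethdev.h>
#include <rte_mbuf.h>

static void
handle_mark_burst(uint16_t port_id, uint16_t queue_id)
{
	struct rte_mbuf *pkts[32];
	uint16_t nb, i;

	nb = rte_eth_rx_burst(port_id, queue_id, pkts, 32);
	for (i = 0; i < nb; i++) {
		/* Set only when a rule with "actions mark id N" matched. */
		if (pkts[i]->ol_flags & RTE_MBUF_F_RX_FDIR_ID) {
			uint32_t mark = pkts[i]->hash.fdir.hi;

			(void)mark; /* e.g. 25 for the rule quoted above */
		}
		rte_pktmbuf_free(pkts[i]);
	}
}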
From patchwork Fri Sep 23 14:43:14 2022
X-Patchwork-Submitter: Suanming Mou
X-Patchwork-Id: 116745
X-Patchwork-Delegate: thomas@monjalon.net
From: Suanming Mou
To: Matan Azrad , Viacheslav Ovsiienko
CC: , Dariusz Sosnowski
Subject: [PATCH 07/27] net/mlx5: create port actions
Date: Fri, 23 Sep 2022 17:43:14 +0300
Message-ID: <20220923144334.27736-8-suanmingm@nvidia.com>
In-Reply-To: <20220923144334.27736-1-suanmingm@nvidia.com>
References: <20220923144334.27736-1-suanmingm@nvidia.com>
From: Dariusz Sosnowski

This patch implements creating and caching of port actions for use with
HW Steering FDB flows. Actions are created when the flow template API is
configured, and only on the port designated as the master. Attaching and
detaching ports in the same switching domain updates the port actions
cache by, respectively, creating and destroying actions.

A new devarg, fdb_def_rule_en, is added to control whether the PMD
implicitly creates the default dedicated E-Switch rule; the PMD sets it
to 1 by default. If set to 0, the default E-Switch rule is not created
and the user can create a specific E-Switch rule on the root table if
needed.
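As a usage illustration of the transfer proxy concept wired up by this
patch, an application is expected to locate the proxy port before
creating transfer flows. A short sketch against the public rte_flow API
(error handling trimmed; get_transfer_proxy is a hypothetical helper):

#include <rte_flow.h>

static int
get_transfer_proxy(uint16_t port_id, uint16_t *proxy_port_id)
{
	struct rte_flow_error error;

	/* Ask the PMD which port owns the E-Switch (FDB) domain; transfer
	 * template tables and rules must be created through that port. */
	return rte_flow_pick_transfer_proxy(port_id, proxy_port_id, &error);
}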
Signed-off-by: Dariusz Sosnowski
---
 doc/guides/nics/mlx5.rst           |    9 +
 drivers/net/mlx5/linux/mlx5_os.c   |   12 +
 drivers/net/mlx5/mlx5.c            |   14 +
 drivers/net/mlx5/mlx5.h            |   24 +-
 drivers/net/mlx5/mlx5_flow.c       |   68 +-
 drivers/net/mlx5/mlx5_flow.h       |   22 +-
 drivers/net/mlx5/mlx5_flow_dv.c    |   93 +-
 drivers/net/mlx5/mlx5_flow_hw.c    | 1350 +++++++++++++++++++++++++++-
 drivers/net/mlx5/mlx5_flow_verbs.c |    4 +-
 drivers/net/mlx5/mlx5_trigger.c    |   69 +-
 10 files changed, 1554 insertions(+), 111 deletions(-)

diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
index 631f0840eb..c42ac482d8 100644
--- a/doc/guides/nics/mlx5.rst
+++ b/doc/guides/nics/mlx5.rst
@@ -1118,6 +1118,15 @@ for an additional list of options shared with other mlx5 drivers.

   By default, the PMD will set this value to 1.

+- ``fdb_def_rule_en`` parameter [int]
+
+  A non-zero value enables the PMD to create a dedicated rule on the E-Switch
+  root table. This dedicated rule forwards all incoming packets into table 1;
+  other rules will be created at the original E-Switch table level plus one.
+  This improves the flow insertion rate, because the firmware-managed root
+  table is skipped. If set to 0, all rules will be created on the original
+  E-Switch table level.
+
+  By default, the PMD will set this value to 1.

 Supported NICs
 --------------

diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c
index 1877b6bec8..28220d10ad 100644
--- a/drivers/net/mlx5/linux/mlx5_os.c
+++ b/drivers/net/mlx5/linux/mlx5_os.c
@@ -1554,6 +1554,13 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
 	if (priv->sh->config.dv_flow_en == 2) {
 		/* Only HWS requires this information. */
 		flow_hw_init_tags_set(eth_dev);
+		if (priv->sh->config.dv_esw_en &&
+		    flow_hw_create_vport_action(eth_dev)) {
+			DRV_LOG(ERR, "port %u failed to create vport action",
+				eth_dev->data->port_id);
+			err = EINVAL;
+			goto error;
+		}
 		return eth_dev;
 	}
 	/* Port representor shares the same max priority with pf port. */
@@ -1614,6 +1621,11 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
 	return eth_dev;
 error:
 	if (priv) {
+		if (eth_dev &&
+		    priv->sh &&
+		    priv->sh->config.dv_flow_en == 2 &&
+		    priv->sh->config.dv_esw_en)
+			flow_hw_destroy_vport_action(eth_dev);
 		if (priv->mreg_cp_tbl)
 			mlx5_hlist_destroy(priv->mreg_cp_tbl);
 		if (priv->sh)

diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index 556709c697..a21b8c69a9 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -172,6 +172,9 @@
 /* Device parameter to configure the delay drop when creating Rxqs. */
 #define MLX5_DELAY_DROP "delay_drop"

+/* Device parameter to create the FDB default rule in PMD. */
+#define MLX5_FDB_DEFAULT_RULE_EN "fdb_def_rule_en"
+
 /* Shared memory between primary and secondary processes. */
 struct mlx5_shared_data *mlx5_shared_data;

@@ -1239,6 +1242,8 @@ mlx5_dev_args_check_handler(const char *key, const char *val, void *opaque)
 		config->decap_en = !!tmp;
 	} else if (strcmp(MLX5_ALLOW_DUPLICATE_PATTERN, key) == 0) {
 		config->allow_duplicate_pattern = !!tmp;
+	} else if (strcmp(MLX5_FDB_DEFAULT_RULE_EN, key) == 0) {
+		config->fdb_def_rule = !!tmp;
 	}
 	return 0;
 }
@@ -1274,6 +1279,7 @@ mlx5_shared_dev_ctx_args_config(struct mlx5_dev_ctx_shared *sh,
 		MLX5_RECLAIM_MEM,
 		MLX5_DECAP_EN,
 		MLX5_ALLOW_DUPLICATE_PATTERN,
+		MLX5_FDB_DEFAULT_RULE_EN,
 		NULL,
 	};
 	int ret = 0;
@@ -1285,6 +1291,7 @@ mlx5_shared_dev_ctx_args_config(struct mlx5_dev_ctx_shared *sh,
 	config->dv_flow_en = 1;
 	config->decap_en = 1;
 	config->allow_duplicate_pattern = 1;
+	config->fdb_def_rule = 1;
 	if (mkvlist != NULL) {
 		/* Process parameters.
*/ ret = mlx5_kvargs_process(mkvlist, params, @@ -1360,6 +1367,7 @@ mlx5_shared_dev_ctx_args_config(struct mlx5_dev_ctx_shared *sh, DRV_LOG(DEBUG, "\"decap_en\" is %u.", config->decap_en); DRV_LOG(DEBUG, "\"allow_duplicate_pattern\" is %u.", config->allow_duplicate_pattern); + DRV_LOG(DEBUG, "\"fdb_def_rule_en\" is %u.", config->fdb_def_rule); return 0; } @@ -1943,6 +1951,7 @@ mlx5_dev_close(struct rte_eth_dev *dev) mlx5_flex_parser_ecpri_release(dev); mlx5_flex_item_port_cleanup(dev); #if defined(HAVE_IBV_FLOW_DV_SUPPORT) || !defined(HAVE_INFINIBAND_VERBS_H) + flow_hw_destroy_vport_action(dev); flow_hw_resource_release(dev); #endif flow_hw_clear_port_info(dev); @@ -2644,6 +2653,11 @@ mlx5_probe_again_args_validate(struct mlx5_common_device *cdev, sh->ibdev_name); goto error; } + if (sh->config.fdb_def_rule ^ config->fdb_def_rule) { + DRV_LOG(ERR, "\"fdb_def_rule_en\" configuration mismatch for shared %s context.", + sh->ibdev_name); + goto error; + } if (sh->config.l3_vxlan_en ^ config->l3_vxlan_en) { DRV_LOG(ERR, "\"l3_vxlan_en\" " "configuration mismatch for shared %s context.", diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h index 18d70e795f..77dbe3593e 100644 --- a/drivers/net/mlx5/mlx5.h +++ b/drivers/net/mlx5/mlx5.h @@ -309,6 +309,7 @@ struct mlx5_sh_config { uint32_t allow_duplicate_pattern:1; uint32_t lro_allowed:1; /* Whether LRO is allowed. */ /* Allow/Prevent the duplicate rules pattern. */ + uint32_t fdb_def_rule:1; /* Create FDB default jump rule */ }; @@ -337,6 +338,8 @@ enum { MLX5_HW_Q_JOB_TYPE_DESTROY, /* Flow destroy job type. */ }; +#define MLX5_HW_MAX_ITEMS (16) + /* HW steering flow management job descriptor. */ struct mlx5_hw_q_job { uint32_t type; /* Job type. */ @@ -344,6 +347,8 @@ struct mlx5_hw_q_job { void *user_data; /* Job user data. */ uint8_t *encap_data; /* Encap data. */ struct mlx5_modification_cmd *mhdr_cmd; + struct rte_flow_item *items; + struct rte_flow_item_ethdev port_spec; }; /* HW steering job descriptor LIFO pool. */ @@ -1452,6 +1457,12 @@ struct mlx5_obj_ops { #define MLX5_RSS_HASH_FIELDS_LEN RTE_DIM(mlx5_rss_hash_fields) +struct mlx5_hw_ctrl_flow { + LIST_ENTRY(mlx5_hw_ctrl_flow) next; + struct rte_eth_dev *owner_dev; + struct rte_flow *flow; +}; + struct mlx5_priv { struct rte_eth_dev_data *dev_data; /* Pointer to device data. */ struct mlx5_dev_ctx_shared *sh; /* Shared device context. */ @@ -1492,6 +1503,12 @@ struct mlx5_priv { unsigned int reta_idx_n; /* RETA index size. */ struct mlx5_drop drop_queue; /* Flow drop queues. */ void *root_drop_action; /* Pointer to root drop action. */ + rte_spinlock_t hw_ctrl_lock; + LIST_HEAD(hw_ctrl_flow, mlx5_hw_ctrl_flow) hw_ctrl_flows; + struct mlx5dr_action **hw_vport; + struct rte_flow_template_table *hw_esw_sq_miss_root_tbl; + struct rte_flow_template_table *hw_esw_sq_miss_tbl; + struct rte_flow_template_table *hw_esw_zero_tbl; struct mlx5_indexed_pool *flows[MLX5_FLOW_TYPE_MAXI]; /* RTE Flow rules. */ uint32_t ctrl_flows; /* Control flow rules. */ @@ -1553,10 +1570,9 @@ struct mlx5_priv { /* HW steering rte flow table list header. */ LIST_HEAD(flow_hw_tbl, rte_flow_template_table) flow_hw_tbl; /* HW steering global drop action. */ - struct mlx5dr_action *hw_drop[MLX5_HW_ACTION_FLAG_MAX] - [MLX5DR_TABLE_TYPE_MAX]; - /* HW steering global drop action. */ - struct mlx5dr_action *hw_tag[MLX5_HW_ACTION_FLAG_MAX]; + struct mlx5dr_action *hw_drop[2]; + /* HW steering global tag action. */ + struct mlx5dr_action *hw_tag[2]; struct mlx5_indexed_pool *acts_ipool; /* Action data indexed pool. 
*/ #endif }; diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c index c856d249db..9c44b2e99b 100644 --- a/drivers/net/mlx5/mlx5_flow.c +++ b/drivers/net/mlx5/mlx5_flow.c @@ -999,6 +999,7 @@ static const struct rte_flow_ops mlx5_flow_ops = { .flex_item_create = mlx5_flow_flex_item_create, .flex_item_release = mlx5_flow_flex_item_release, .info_get = mlx5_flow_info_get, + .pick_transfer_proxy = mlx5_flow_pick_transfer_proxy, .configure = mlx5_flow_port_configure, .pattern_template_create = mlx5_flow_pattern_template_create, .pattern_template_destroy = mlx5_flow_pattern_template_destroy, @@ -1242,7 +1243,7 @@ mlx5_get_lowest_priority(struct rte_eth_dev *dev, { struct mlx5_priv *priv = dev->data->dev_private; - if (!attr->group && !attr->transfer) + if (!attr->group && !(attr->transfer && priv->fdb_def_rule)) return priv->sh->flow_max_priority - 2; return MLX5_NON_ROOT_FLOW_MAX_PRIO - 1; } @@ -1269,11 +1270,14 @@ mlx5_get_matcher_priority(struct rte_eth_dev *dev, uint16_t priority = (uint16_t)attr->priority; struct mlx5_priv *priv = dev->data->dev_private; + /* NIC root rules */ if (!attr->group && !attr->transfer) { if (attr->priority == MLX5_FLOW_LOWEST_PRIO_INDICATOR) priority = priv->sh->flow_max_priority - 1; return mlx5_os_flow_adjust_priority(dev, priority, subpriority); - } else if (!external && attr->transfer && attr->group == 0 && + /* FDB root rules */ + } else if (attr->transfer && (!external || !priv->fdb_def_rule) && + attr->group == 0 && attr->priority == MLX5_FLOW_LOWEST_PRIO_INDICATOR) { return (priv->sh->flow_max_priority - 1) * 3; } @@ -2828,8 +2832,8 @@ mlx5_flow_validate_item_tcp(const struct rte_flow_item *item, * Item specification. * @param[in] item_flags * Bit-fields that holds the items detected until now. - * @param[in] attr - * Flow rule attributes. + * @param root + * Whether action is on root table. * @param[out] error * Pointer to error structure. * @@ -2841,7 +2845,7 @@ mlx5_flow_validate_item_vxlan(struct rte_eth_dev *dev, uint16_t udp_dport, const struct rte_flow_item *item, uint64_t item_flags, - const struct rte_flow_attr *attr, + bool root, struct rte_flow_error *error) { const struct rte_flow_item_vxlan *spec = item->spec; @@ -2878,12 +2882,11 @@ mlx5_flow_validate_item_vxlan(struct rte_eth_dev *dev, if (priv->sh->steering_format_version != MLX5_STEERING_LOGIC_FORMAT_CONNECTX_5 || !udp_dport || udp_dport == MLX5_UDP_PORT_VXLAN) { - /* FDB domain & NIC domain non-zero group */ - if ((attr->transfer || attr->group) && priv->sh->misc5_cap) + /* non-root table */ + if (!root && priv->sh->misc5_cap) valid_mask = &nic_mask; /* Group zero in NIC domain */ - if (!attr->group && !attr->transfer && - priv->sh->tunnel_header_0_1) + if (!root && priv->sh->tunnel_header_0_1) valid_mask = &nic_mask; } ret = mlx5_flow_item_acceptable @@ -3122,11 +3125,11 @@ mlx5_flow_validate_item_gre_option(struct rte_eth_dev *dev, if (mask->checksum_rsvd.checksum || mask->sequence.sequence) { if (priv->sh->steering_format_version == MLX5_STEERING_LOGIC_FORMAT_CONNECTX_5 || - ((attr->group || attr->transfer) && + ((attr->group || (attr->transfer && priv->fdb_def_rule)) && !priv->sh->misc5_cap) || (!(priv->sh->tunnel_header_0_1 && priv->sh->tunnel_header_2_3) && - !attr->group && !attr->transfer)) + !attr->group && (!attr->transfer || !priv->fdb_def_rule))) return rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, item, @@ -6183,7 +6186,8 @@ flow_create_split_metadata(struct rte_eth_dev *dev, } if (qrss) { /* Check if it is in meter suffix table. 
*/ - mtr_sfx = attr->group == (attr->transfer ? + mtr_sfx = attr->group == + ((attr->transfer && priv->fdb_def_rule) ? (MLX5_FLOW_TABLE_LEVEL_METER - 1) : MLX5_FLOW_TABLE_LEVEL_METER); /* @@ -11106,3 +11110,43 @@ int mlx5_flow_get_item_vport_id(struct rte_eth_dev *dev, return 0; } + +int +mlx5_flow_pick_transfer_proxy(struct rte_eth_dev *dev, + uint16_t *proxy_port_id, + struct rte_flow_error *error) +{ + const struct mlx5_priv *priv = dev->data->dev_private; + uint16_t port_id; + + if (!priv->sh->config.dv_esw_en) + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, + "unable to provide a proxy port" + " without E-Switch configured"); + if (!priv->master && !priv->representor) + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, + "unable to provide a proxy port" + " for port which is not a master" + " or a representor port"); + if (priv->master) { + *proxy_port_id = dev->data->port_id; + return 0; + } + MLX5_ETH_FOREACH_DEV(port_id, dev->device) { + const struct rte_eth_dev *port_dev = &rte_eth_devices[port_id]; + const struct mlx5_priv *port_priv = port_dev->data->dev_private; + + if (port_priv->master && + port_priv->domain_id == priv->domain_id) { + *proxy_port_id = port_id; + return 0; + } + } + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, "unable to find a proxy port"); +} diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h index a7235b524d..f661f858c7 100644 --- a/drivers/net/mlx5/mlx5_flow.h +++ b/drivers/net/mlx5/mlx5_flow.h @@ -1152,6 +1152,11 @@ struct rte_flow_pattern_template { struct mlx5dr_match_template *mt; /* mlx5 match template. */ uint64_t item_flags; /* Item layer flags. */ uint32_t refcnt; /* Reference counter. */ + /* + * If true, then rule pattern should be prepended with + * represented_port pattern item. + */ + bool implicit_port; }; /* Flow action template struct. */ @@ -1227,6 +1232,7 @@ struct mlx5_hw_action_template { /* mlx5 flow group struct. */ struct mlx5_flow_group { struct mlx5_list_entry entry; + struct rte_eth_dev *dev; /* Reference to corresponding device. */ struct mlx5dr_table *tbl; /* HWS table object. */ struct mlx5_hw_jump_action jump; /* Jump action. */ enum mlx5dr_table_type type; /* Table type. 
*/ @@ -1483,6 +1489,9 @@ void flow_hw_clear_port_info(struct rte_eth_dev *dev); void flow_hw_init_tags_set(struct rte_eth_dev *dev); void flow_hw_clear_tags_set(struct rte_eth_dev *dev); +int flow_hw_create_vport_action(struct rte_eth_dev *dev); +void flow_hw_destroy_vport_action(struct rte_eth_dev *dev); + typedef int (*mlx5_flow_validate_t)(struct rte_eth_dev *dev, const struct rte_flow_attr *attr, const struct rte_flow_item items[], @@ -2055,7 +2064,7 @@ int mlx5_flow_validate_item_vxlan(struct rte_eth_dev *dev, uint16_t udp_dport, const struct rte_flow_item *item, uint64_t item_flags, - const struct rte_flow_attr *attr, + bool root, struct rte_flow_error *error); int mlx5_flow_validate_item_vxlan_gpe(const struct rte_flow_item *item, uint64_t item_flags, @@ -2312,4 +2321,15 @@ int flow_dv_translate_items_hws(const struct rte_flow_item *items, uint32_t key_type, uint64_t *item_flags, uint8_t *match_criteria, struct rte_flow_error *error); + +int mlx5_flow_pick_transfer_proxy(struct rte_eth_dev *dev, + uint16_t *proxy_port_id, + struct rte_flow_error *error); + +int mlx5_flow_hw_flush_ctrl_flows(struct rte_eth_dev *dev); + +int mlx5_flow_hw_esw_create_mgr_sq_miss_flow(struct rte_eth_dev *dev); +int mlx5_flow_hw_esw_create_sq_miss_flow(struct rte_eth_dev *dev, + uint32_t txq); +int mlx5_flow_hw_esw_create_default_jump_flow(struct rte_eth_dev *dev); #endif /* RTE_PMD_MLX5_FLOW_H_ */ diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c index 5d3e2d37bb..d0f78cae8e 100644 --- a/drivers/net/mlx5/mlx5_flow_dv.c +++ b/drivers/net/mlx5/mlx5_flow_dv.c @@ -2460,8 +2460,8 @@ flow_dv_validate_item_gtp(struct rte_eth_dev *dev, * Previous validated item in the pattern items. * @param[in] gtp_item * Previous GTP item specification. - * @param[in] attr - * Pointer to flow attributes. + * @param root + * Whether action is on root table. * @param[out] error * Pointer to error structure. * @@ -2472,7 +2472,7 @@ static int flow_dv_validate_item_gtp_psc(const struct rte_flow_item *item, uint64_t last_item, const struct rte_flow_item *gtp_item, - const struct rte_flow_attr *attr, + bool root, struct rte_flow_error *error) { const struct rte_flow_item_gtp *gtp_spec; @@ -2497,7 +2497,7 @@ flow_dv_validate_item_gtp_psc(const struct rte_flow_item *item, (error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ITEM, item, "GTP E flag must be 1 to match GTP PSC"); /* Check the flow is not created in group zero. */ - if (!attr->transfer && !attr->group) + if (root) return rte_flow_error_set (error, ENOTSUP, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, "GTP PSC is not supported for group 0"); @@ -3362,20 +3362,19 @@ flow_dv_validate_action_set_tag(struct rte_eth_dev *dev, /** * Indicates whether ASO aging is supported. * - * @param[in] sh - * Pointer to shared device context structure. - * @param[in] attr - * Attributes of flow that includes AGE action. + * @param[in] priv + * Pointer to device private context structure. + * @param[in] root + * Whether action is on root table. * * @return * True when ASO aging is supported, false otherwise. */ static inline bool -flow_hit_aso_supported(const struct mlx5_dev_ctx_shared *sh, - const struct rte_flow_attr *attr) +flow_hit_aso_supported(const struct mlx5_priv *priv, bool root) { - MLX5_ASSERT(sh && attr); - return (sh->flow_hit_aso_en && (attr->transfer || attr->group)); + MLX5_ASSERT(priv); + return (priv->sh->flow_hit_aso_en && !root); } /** @@ -3387,8 +3386,8 @@ flow_hit_aso_supported(const struct mlx5_dev_ctx_shared *sh, * Indicator if action is shared. 
* @param[in] action_flags * Holds the actions detected until now. - * @param[in] attr - * Attributes of flow that includes this action. + * @param[in] root + * Whether action is on root table. * @param[out] error * Pointer to error structure. * @@ -3398,7 +3397,7 @@ flow_hit_aso_supported(const struct mlx5_dev_ctx_shared *sh, static int flow_dv_validate_action_count(struct rte_eth_dev *dev, bool shared, uint64_t action_flags, - const struct rte_flow_attr *attr, + bool root, struct rte_flow_error *error) { struct mlx5_priv *priv = dev->data->dev_private; @@ -3410,7 +3409,7 @@ flow_dv_validate_action_count(struct rte_eth_dev *dev, bool shared, RTE_FLOW_ERROR_TYPE_ACTION, NULL, "duplicate count actions set"); if (shared && (action_flags & MLX5_FLOW_ACTION_AGE) && - !flow_hit_aso_supported(priv->sh, attr)) + !flow_hit_aso_supported(priv, root)) return rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION, NULL, "old age and indirect count combination is not supported"); @@ -3641,8 +3640,8 @@ flow_dv_validate_action_raw_encap_decap * Holds the actions detected until now. * @param[in] item_flags * The items found in this flow rule. - * @param[in] attr - * Pointer to flow attributes. + * @param root + * Whether action is on root table. * @param[out] error * Pointer to error structure. * @@ -3653,12 +3652,12 @@ static int flow_dv_validate_action_aso_ct(struct rte_eth_dev *dev, uint64_t action_flags, uint64_t item_flags, - const struct rte_flow_attr *attr, + bool root, struct rte_flow_error *error) { RTE_SET_USED(dev); - if (attr->group == 0 && !attr->transfer) + if (root) return rte_flow_error_set(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, @@ -4908,6 +4907,8 @@ flow_dv_validate_action_modify_ttl(const uint64_t action_flags, * Pointer to the modify action. * @param[in] attr * Pointer to the flow attributes. + * @param root + * Whether action is on root table. * @param[out] error * Pointer to error structure. 
* @@ -4920,6 +4921,7 @@ flow_dv_validate_action_modify_field(struct rte_eth_dev *dev, const uint64_t action_flags, const struct rte_flow_action *action, const struct rte_flow_attr *attr, + bool root, struct rte_flow_error *error) { int ret = 0; @@ -4967,7 +4969,7 @@ flow_dv_validate_action_modify_field(struct rte_eth_dev *dev, } if (action_modify_field->src.field != RTE_FLOW_FIELD_VALUE && action_modify_field->src.field != RTE_FLOW_FIELD_POINTER) { - if (!attr->transfer && !attr->group) + if (root) return rte_flow_error_set(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ACTION, action, "modify field action is not" @@ -5057,8 +5059,7 @@ flow_dv_validate_action_modify_field(struct rte_eth_dev *dev, action_modify_field->src.field == RTE_FLOW_FIELD_IPV4_ECN || action_modify_field->dst.field == RTE_FLOW_FIELD_IPV6_ECN || action_modify_field->src.field == RTE_FLOW_FIELD_IPV6_ECN) - if (!hca_attr->modify_outer_ip_ecn && - !attr->transfer && !attr->group) + if (!hca_attr->modify_outer_ip_ecn && root) return rte_flow_error_set(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ACTION, action, "modifications of the ECN for current firmware is not supported"); @@ -5092,11 +5093,12 @@ flow_dv_validate_action_jump(struct rte_eth_dev *dev, bool external, struct rte_flow_error *error) { uint32_t target_group, table = 0; + struct mlx5_priv *priv = dev->data->dev_private; int ret = 0; struct flow_grp_info grp_info = { .external = !!external, .transfer = !!attributes->transfer, - .fdb_def_rule = 1, + .fdb_def_rule = !!priv->fdb_def_rule, .std_tbl_fix = 0 }; if (action_flags & (MLX5_FLOW_FATE_ACTIONS | @@ -5676,6 +5678,8 @@ flow_dv_modify_clone_free_cb(void *tool_ctx, struct mlx5_list_entry *entry) * Pointer to the COUNT action in sample action list. * @param[out] fdb_mirror_limit * Pointer to the FDB mirror limitation flag. + * @param root + * Whether action is on root table. * @param[out] error * Pointer to error structure. 
* @@ -5692,6 +5696,7 @@ flow_dv_validate_action_sample(uint64_t *action_flags, const struct rte_flow_action_rss **sample_rss, const struct rte_flow_action_count **count, int *fdb_mirror_limit, + bool root, struct rte_flow_error *error) { struct mlx5_priv *priv = dev->data->dev_private; @@ -5793,7 +5798,7 @@ flow_dv_validate_action_sample(uint64_t *action_flags, case RTE_FLOW_ACTION_TYPE_COUNT: ret = flow_dv_validate_action_count (dev, false, *action_flags | sub_action_flags, - attr, error); + root, error); if (ret < 0) return ret; *count = act->conf; @@ -7273,7 +7278,7 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr, case RTE_FLOW_ITEM_TYPE_VXLAN: ret = mlx5_flow_validate_item_vxlan(dev, udp_dport, items, item_flags, - attr, error); + is_root, error); if (ret < 0) return ret; last_item = MLX5_FLOW_LAYER_VXLAN; @@ -7367,7 +7372,7 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr, break; case RTE_FLOW_ITEM_TYPE_GTP_PSC: ret = flow_dv_validate_item_gtp_psc(items, last_item, - gtp_item, attr, + gtp_item, is_root, error); if (ret < 0) return ret; @@ -7584,7 +7589,7 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr, case RTE_FLOW_ACTION_TYPE_COUNT: ret = flow_dv_validate_action_count(dev, shared_count, action_flags, - attr, error); + is_root, error); if (ret < 0) return ret; count = actions->conf; @@ -7878,7 +7883,7 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr, rw_act_num += MLX5_ACT_NUM_SET_TAG; break; case MLX5_RTE_FLOW_ACTION_TYPE_AGE: - if (!attr->transfer && !attr->group) + if (is_root) return rte_flow_error_set(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, @@ -7903,7 +7908,7 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr, * Validate the regular AGE action (using counter) * mutual exclusion with indirect counter actions. 
*/ - if (!flow_hit_aso_supported(priv->sh, attr)) { + if (!flow_hit_aso_supported(priv, is_root)) { if (shared_count) return rte_flow_error_set (error, EINVAL, @@ -7959,6 +7964,7 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr, rss, &sample_rss, &sample_count, &fdb_mirror_limit, + is_root, error); if (ret < 0) return ret; @@ -7975,6 +7981,7 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr, action_flags, actions, attr, + is_root, error); if (ret < 0) return ret; @@ -7988,8 +7995,8 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr, break; case RTE_FLOW_ACTION_TYPE_CONNTRACK: ret = flow_dv_validate_action_aso_ct(dev, action_flags, - item_flags, attr, - error); + item_flags, + is_root, error); if (ret < 0) return ret; action_flags |= MLX5_FLOW_ACTION_CT; @@ -9189,15 +9196,18 @@ flow_dv_translate_item_vxlan(struct rte_eth_dev *dev, if (MLX5_ITEM_VALID(item, key_type)) return; MLX5_ITEM_UPDATE(item, key_type, vxlan_v, vxlan_m, &nic_mask); - if (item->mask == &nic_mask && - ((!attr->group && !priv->sh->tunnel_header_0_1) || - (attr->group && !priv->sh->misc5_cap))) + if ((item->mask == &nic_mask) && + ((!attr->group && !(attr->transfer && priv->fdb_def_rule) && + !priv->sh->tunnel_header_0_1) || + ((attr->group || (attr->transfer && priv->fdb_def_rule)) && + !priv->sh->misc5_cap))) vxlan_m = &rte_flow_item_vxlan_mask; if ((priv->sh->steering_format_version == MLX5_STEERING_LOGIC_FORMAT_CONNECTX_5 && dport != MLX5_UDP_PORT_VXLAN) || - (!attr->group && !attr->transfer) || - ((attr->group || attr->transfer) && !priv->sh->misc5_cap)) { + (!attr->group && !(attr->transfer && priv->fdb_def_rule)) || + ((attr->group || (attr->transfer && priv->fdb_def_rule)) && + !priv->sh->misc5_cap)) { misc_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters); size = sizeof(vxlan_m->vni); vni_v = MLX5_ADDR_OF(fte_match_set_misc, misc_v, vxlan_vni); @@ -14169,7 +14179,7 @@ flow_dv_translate(struct rte_eth_dev *dev, */ if (action_flags & MLX5_FLOW_ACTION_AGE) { if ((non_shared_age && count) || - !flow_hit_aso_supported(priv->sh, attr)) { + !flow_hit_aso_supported(priv, !dev_flow->dv.group)) { /* Creates age by counters. 
 */
 			cnt_act = flow_dv_prepare_counter
 					(dev, dev_flow,
@@ -18318,6 +18328,7 @@ flow_dv_action_validate(struct rte_eth_dev *dev,
 			struct rte_flow_error *err)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
+	/* called from RTE API */

 	RTE_SET_USED(conf);
 	switch (action->type) {
@@ -18345,7 +18356,7 @@ flow_dv_action_validate(struct rte_eth_dev *dev,
 					  "Indirect age action not supported");
 		return flow_dv_validate_action_age(0, action, dev, err);
 	case RTE_FLOW_ACTION_TYPE_COUNT:
-		return flow_dv_validate_action_count(dev, true, 0, NULL, err);
+		return flow_dv_validate_action_count(dev, true, 0, false, err);
 	case RTE_FLOW_ACTION_TYPE_CONNTRACK:
 		if (!priv->sh->ct_aso_en)
 			return rte_flow_error_set(err, ENOTSUP,
@@ -18522,6 +18533,8 @@ flow_dv_validate_mtr_policy_acts(struct rte_eth_dev *dev,
 	bool def_green = false;
 	bool def_yellow = false;
 	const struct rte_flow_action_rss *rss_color[RTE_COLORS] = {NULL};
+	/* Called from RTE API */
+	bool is_root = !(attr->group || (attr->transfer && priv->fdb_def_rule));

 	if (!dev_conf->dv_esw_en)
 		def_domain &= ~MLX5_MTR_DOMAIN_TRANSFER_BIT;
@@ -18723,7 +18736,7 @@ flow_dv_validate_mtr_policy_acts(struct rte_eth_dev *dev,
 			break;
 		case RTE_FLOW_ACTION_TYPE_MODIFY_FIELD:
 			ret = flow_dv_validate_action_modify_field(dev,
-				action_flags[i], act, attr, &flow_err);
+				action_flags[i], act, attr, is_root, &flow_err);
 			if (ret < 0)
 				return -rte_mtr_error_set(error, ENOTSUP,

diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 1f98e1248a..004eacc334 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -20,6 +20,14 @@
 /* Default queue to flush the flows. */
 #define MLX5_DEFAULT_FLUSH_QUEUE 0

+/* Maximum number of rules in control flow tables. */
+#define MLX5_HW_CTRL_FLOW_NB_RULES (4096)
+
+/* Flow group for SQ miss default flows. */
+#define MLX5_HW_SQ_MISS_GROUP (UINT32_MAX)
+
+static int flow_hw_flush_all_ctrl_flows(struct rte_eth_dev *dev);
+
 const struct mlx5_flow_driver_ops mlx5_flow_hw_drv_ops;

 /* DR action flags with different table.
 */
@@ -802,6 +810,77 @@ flow_hw_modify_field_compile(struct rte_eth_dev *dev,
 	return 0;
 }

+static int
+flow_hw_represented_port_compile(struct rte_eth_dev *dev,
+				 const struct rte_flow_attr *attr,
+				 const struct rte_flow_action *action_start,
+				 const struct rte_flow_action *action,
+				 const struct rte_flow_action *action_mask,
+				 struct mlx5_hw_actions *acts,
+				 uint16_t action_dst,
+				 struct rte_flow_error *error)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	const struct rte_flow_action_ethdev *v = action->conf;
+	const struct rte_flow_action_ethdev *m = action_mask->conf;
+	int ret;
+
+	if (!attr->group)
+		return rte_flow_error_set(error, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_ATTR, NULL,
+					  "represented_port action cannot"
+					  " be used on group 0");
+	if (!attr->transfer)
+		return rte_flow_error_set(error, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_ATTR_TRANSFER,
+					  NULL,
+					  "represented_port action requires"
+					  " transfer attribute");
+	if (attr->ingress || attr->egress)
+		return rte_flow_error_set(error, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_ATTR, NULL,
+					  "represented_port action cannot"
+					  " be used with direction attributes");
+	if (!priv->master)
+		return rte_flow_error_set(error, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+					  "represented_port action must"
+					  " be used on proxy port");
+	if (m && !!m->port_id) {
+		struct mlx5_priv *port_priv;
+
+		port_priv = mlx5_port_to_eswitch_info(v->port_id, false);
+		if (port_priv == NULL)
+			return rte_flow_error_set
+					(error, EINVAL,
+					 RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+					 "port does not exist or unable to"
+					 " obtain E-Switch info for port");
+		MLX5_ASSERT(priv->hw_vport != NULL);
+		if (priv->hw_vport[v->port_id]) {
+			acts->rule_acts[action_dst].action =
+					priv->hw_vport[v->port_id];
+		} else {
+			return rte_flow_error_set
+					(error, EINVAL,
+					 RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+					 "cannot use represented_port action"
+					 " with this port");
+		}
+	} else {
+		ret = __flow_hw_act_data_general_append
+				(priv, acts, action->type,
+				 action - action_start, action_dst);
+		if (ret)
+			return rte_flow_error_set
+					(error, ENOMEM,
+					 RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+					 "not enough memory to store"
+					 " vport action");
+	}
+	return 0;
+}
+
 /**
  * Translate rte_flow actions to DR action.
* @@ -879,7 +958,7 @@ flow_hw_actions_translate(struct rte_eth_dev *dev, break; case RTE_FLOW_ACTION_TYPE_DROP: acts->rule_acts[i++].action = - priv->hw_drop[!!attr->group][type]; + priv->hw_drop[!!attr->group]; break; case RTE_FLOW_ACTION_TYPE_MARK: acts->mark = true; @@ -1012,6 +1091,13 @@ flow_hw_actions_translate(struct rte_eth_dev *dev, if (err) goto err; break; + case RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT: + if (flow_hw_represented_port_compile + (dev, attr, action_start, actions, + masks, acts, i, error)) + goto err; + i++; + break; case RTE_FLOW_ACTION_TYPE_END: actions_end = true; break; @@ -1334,11 +1420,13 @@ flow_hw_actions_construct(struct rte_eth_dev *dev, struct mlx5dr_rule_action *rule_acts, uint32_t *acts_num) { + struct mlx5_priv *priv = dev->data->dev_private; struct rte_flow_template_table *table = job->flow->table; struct mlx5_action_construct_data *act_data; const struct rte_flow_action *action; const struct rte_flow_action_raw_encap *raw_encap_data; const struct rte_flow_item *enc_item = NULL; + const struct rte_flow_action_ethdev *port_action = NULL; uint8_t *buf = job->encap_data; struct rte_flow_attr attr = { .ingress = 1, @@ -1458,6 +1546,13 @@ flow_hw_actions_construct(struct rte_eth_dev *dev, if (ret) return -1; break; + case RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT: + port_action = action->conf; + if (!priv->hw_vport[port_action->port_id]) + return -1; + rule_acts[act_data->action_dst].action = + priv->hw_vport[port_action->port_id]; + break; default: break; } @@ -1470,6 +1565,52 @@ flow_hw_actions_construct(struct rte_eth_dev *dev, return 0; } +static const struct rte_flow_item * +flow_hw_get_rule_items(struct rte_eth_dev *dev, + struct rte_flow_template_table *table, + const struct rte_flow_item items[], + uint8_t pattern_template_index, + struct mlx5_hw_q_job *job) +{ + if (table->its[pattern_template_index]->implicit_port) { + const struct rte_flow_item *curr_item; + unsigned int nb_items; + bool found_end; + unsigned int i; + + /* Count number of pattern items. */ + nb_items = 0; + found_end = false; + for (curr_item = items; !found_end; ++curr_item) { + ++nb_items; + if (curr_item->type == RTE_FLOW_ITEM_TYPE_END) + found_end = true; + } + /* Prepend represented port item. */ + job->port_spec = (struct rte_flow_item_ethdev){ + .port_id = dev->data->port_id, + }; + job->items[0] = (struct rte_flow_item){ + .type = RTE_FLOW_ITEM_TYPE_REPRESENTED_PORT, + .spec = &job->port_spec, + }; + found_end = false; + for (i = 1; i < MLX5_HW_MAX_ITEMS && i - 1 < nb_items; ++i) { + job->items[i] = items[i - 1]; + if (items[i - 1].type == RTE_FLOW_ITEM_TYPE_END) { + found_end = true; + break; + } + } + if (i >= MLX5_HW_MAX_ITEMS && !found_end) { + rte_errno = ENOMEM; + return NULL; + } + return job->items; + } + return items; +} + /** * Enqueue HW steering flow creation. 
 *
@@ -1521,6 +1662,7 @@ flow_hw_async_flow_create(struct rte_eth_dev *dev,
 	struct mlx5_hw_actions *hw_acts;
 	struct rte_flow_hw *flow;
 	struct mlx5_hw_q_job *job;
+	const struct rte_flow_item *rule_items;
 	uint32_t acts_num, flow_idx;
 	int ret;

@@ -1547,15 +1689,23 @@ flow_hw_async_flow_create(struct rte_eth_dev *dev,
 	job->user_data = user_data;
 	rule_attr.user_data = job;
 	hw_acts = &table->ats[action_template_index].acts;
-	/* Construct the flow action array based on the input actions.*/
-	flow_hw_actions_construct(dev, job, hw_acts, pattern_template_index,
-				  actions, rule_acts, &acts_num);
+	/* Construct the flow actions based on the input actions.*/
+	if (flow_hw_actions_construct(dev, job, hw_acts, pattern_template_index,
+				      actions, rule_acts, &acts_num)) {
+		rte_errno = EINVAL;
+		goto free;
+	}
+	rule_items = flow_hw_get_rule_items(dev, table, items,
+					    pattern_template_index, job);
+	if (!rule_items)
+		goto free;
 	ret = mlx5dr_rule_create(table->matcher,
-				 pattern_template_index, items,
+				 pattern_template_index, rule_items,
 				 action_template_index, rule_acts,
 				 &rule_attr, &flow->rule);
 	if (likely(!ret))
 		return (struct rte_flow *)flow;
+free:
 	/* Flow created fail, return the descriptor and flow memory. */
 	mlx5_ipool_free(table->flow, flow_idx);
 	priv->hw_q[queue].job_idx++;
@@ -1736,7 +1886,9 @@ __flow_hw_pull_comp(struct rte_eth_dev *dev,
 	struct rte_flow_op_result comp[BURST_THR];
 	int ret, i, empty_loop = 0;

-	flow_hw_push(dev, queue, error);
+	ret = flow_hw_push(dev, queue, error);
+	if (ret < 0)
+		return ret;
 	while (pending_rules) {
 		ret = flow_hw_pull(dev, queue, comp, BURST_THR, error);
 		if (ret < 0)
@@ -2021,8 +2173,12 @@ flow_hw_table_destroy(struct rte_eth_dev *dev,
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
 	int i;
+	uint32_t fidx = 1;

-	if (table->refcnt) {
+	/* Build ipool allocated object bitmap. */
+	mlx5_ipool_flush_cache(table->flow);
+	/* Check if ipool has allocated objects.
*/ + if (table->refcnt || mlx5_ipool_get_next(table->flow, &fidx)) { DRV_LOG(WARNING, "Table %p is still in using.", (void *)table); return rte_flow_error_set(error, EBUSY, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, @@ -2101,7 +2257,51 @@ flow_hw_validate_action_modify_field(const struct rte_flow_action *action, } static int -flow_hw_action_validate(const struct rte_flow_action actions[], +flow_hw_validate_action_represented_port(struct rte_eth_dev *dev, + const struct rte_flow_action *action, + const struct rte_flow_action *mask, + struct rte_flow_error *error) +{ + const struct rte_flow_action_ethdev *action_conf = action->conf; + const struct rte_flow_action_ethdev *mask_conf = mask->conf; + struct mlx5_priv *priv = dev->data->dev_private; + + if (!priv->sh->config.dv_esw_en) + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, + "cannot use represented_port actions" + " without an E-Switch"); + if (mask_conf->port_id) { + struct mlx5_priv *port_priv; + struct mlx5_priv *dev_priv; + + port_priv = mlx5_port_to_eswitch_info(action_conf->port_id, false); + if (!port_priv) + return rte_flow_error_set(error, rte_errno, + RTE_FLOW_ERROR_TYPE_ACTION, + action, + "failed to obtain E-Switch" + " info for port"); + dev_priv = mlx5_dev_to_eswitch_info(dev); + if (!dev_priv) + return rte_flow_error_set(error, rte_errno, + RTE_FLOW_ERROR_TYPE_ACTION, + action, + "failed to obtain E-Switch" + " info for transfer proxy"); + if (port_priv->domain_id != dev_priv->domain_id) + return rte_flow_error_set(error, rte_errno, + RTE_FLOW_ERROR_TYPE_ACTION, + action, + "cannot forward to port from" + " a different E-Switch"); + } + return 0; +} + +static int +flow_hw_action_validate(struct rte_eth_dev *dev, + const struct rte_flow_action actions[], const struct rte_flow_action masks[], struct rte_flow_error *error) { @@ -2164,6 +2364,12 @@ flow_hw_action_validate(const struct rte_flow_action actions[], if (ret < 0) return ret; break; + case RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT: + ret = flow_hw_validate_action_represented_port + (dev, action, mask, error); + if (ret < 0) + return ret; + break; case RTE_FLOW_ACTION_TYPE_END: actions_end = true; break; @@ -2205,7 +2411,7 @@ flow_hw_actions_template_create(struct rte_eth_dev *dev, int len, act_len, mask_len, i; struct rte_flow_actions_template *at; - if (flow_hw_action_validate(actions, masks, error)) + if (flow_hw_action_validate(dev, actions, masks, error)) return NULL; act_len = rte_flow_conv(RTE_FLOW_CONV_OP_ACTIONS, NULL, 0, actions, error); @@ -2288,6 +2494,46 @@ flow_hw_actions_template_destroy(struct rte_eth_dev *dev __rte_unused, return 0; } +static struct rte_flow_item * +flow_hw_copy_prepend_port_item(const struct rte_flow_item *items, + struct rte_flow_error *error) +{ + const struct rte_flow_item *curr_item; + struct rte_flow_item *copied_items; + bool found_end; + unsigned int nb_items; + unsigned int i; + size_t size; + + /* Count number of pattern items. */ + nb_items = 0; + found_end = false; + for (curr_item = items; !found_end; ++curr_item) { + ++nb_items; + if (curr_item->type == RTE_FLOW_ITEM_TYPE_END) + found_end = true; + } + /* Allocate new array of items and prepend REPRESENTED_PORT item. 
*/ + size = sizeof(*copied_items) * (nb_items + 1); + copied_items = mlx5_malloc(MLX5_MEM_ZERO, size, 0, rte_socket_id()); + if (!copied_items) { + rte_flow_error_set(error, ENOMEM, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, + "cannot allocate item template"); + return NULL; + } + copied_items[0] = (struct rte_flow_item){ + .type = RTE_FLOW_ITEM_TYPE_REPRESENTED_PORT, + .spec = NULL, + .last = NULL, + .mask = &rte_flow_item_ethdev_mask, + }; + for (i = 1; i < nb_items + 1; ++i) + copied_items[i] = items[i - 1]; + return copied_items; +} + /** * Create flow item template. * @@ -2311,9 +2557,35 @@ flow_hw_pattern_template_create(struct rte_eth_dev *dev, { struct mlx5_priv *priv = dev->data->dev_private; struct rte_flow_pattern_template *it; + struct rte_flow_item *copied_items = NULL; + const struct rte_flow_item *tmpl_items; + if (priv->sh->config.dv_esw_en && attr->ingress) { + /* + * Disallow pattern template with ingress and egress/transfer + * attributes in order to forbid implicit port matching + * on egress and transfer traffic. + */ + if (attr->egress || attr->transfer) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, + "item template for ingress traffic" + " cannot be used for egress/transfer" + " traffic when E-Switch is enabled"); + return NULL; + } + copied_items = flow_hw_copy_prepend_port_item(items, error); + if (!copied_items) + return NULL; + tmpl_items = copied_items; + } else { + tmpl_items = items; + } it = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*it), 0, rte_socket_id()); if (!it) { + if (copied_items) + mlx5_free(copied_items); rte_flow_error_set(error, ENOMEM, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, @@ -2321,8 +2593,10 @@ flow_hw_pattern_template_create(struct rte_eth_dev *dev, return NULL; } it->attr = *attr; - it->mt = mlx5dr_match_template_create(items, attr->relaxed_matching); + it->mt = mlx5dr_match_template_create(tmpl_items, attr->relaxed_matching); if (!it->mt) { + if (copied_items) + mlx5_free(copied_items); mlx5_free(it); rte_flow_error_set(error, rte_errno, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, @@ -2330,9 +2604,12 @@ flow_hw_pattern_template_create(struct rte_eth_dev *dev, "cannot create match template"); return NULL; } - it->item_flags = flow_hw_rss_item_flags_get(items); + it->item_flags = flow_hw_rss_item_flags_get(tmpl_items); + it->implicit_port = !!copied_items; __atomic_fetch_add(&it->refcnt, 1, __ATOMIC_RELAXED); LIST_INSERT_HEAD(&priv->flow_hw_itt, it, next); + if (copied_items) + mlx5_free(copied_items); return it; } @@ -2458,6 +2735,7 @@ flow_hw_grp_create_cb(void *tool_ctx, void *cb_ctx) goto error; grp_data->jump.root_action = jump; } + grp_data->dev = dev; grp_data->idx = idx; grp_data->group_id = attr->group; grp_data->type = dr_tbl_attr.type; @@ -2526,7 +2804,8 @@ flow_hw_grp_match_cb(void *tool_ctx __rte_unused, struct mlx5_list_entry *entry, struct rte_flow_attr *attr = (struct rte_flow_attr *)ctx->data; - return (grp_data->group_id != attr->group) || + return (grp_data->dev != ctx->dev) || + (grp_data->group_id != attr->group) || ((grp_data->type != MLX5DR_TABLE_TYPE_FDB) && attr->transfer) || ((grp_data->type != MLX5DR_TABLE_TYPE_NIC_TX) && @@ -2589,6 +2868,545 @@ flow_hw_grp_clone_free_cb(void *tool_ctx, struct mlx5_list_entry *entry) mlx5_ipool_free(sh->ipool[MLX5_IPOOL_HW_GRP], grp_data->idx); } +/** + * Create and cache a vport action for given @p dev port. vport actions + * cache is used in HWS with FDB flows. 
+ *
+ * This function does not create any action if the proxy port for the
+ * @p dev port was not configured for HW Steering.
+ *
+ * This function assumes that E-Switch is enabled and PMD is running with
+ * HW Steering configured.
+ *
+ * @param dev
+ *   Pointer to Ethernet device which will be the action destination.
+ *
+ * @return
+ *   0 on success, a negative value otherwise.
+ */
+int
+flow_hw_create_vport_action(struct rte_eth_dev *dev)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	struct rte_eth_dev *proxy_dev;
+	struct mlx5_priv *proxy_priv;
+	uint16_t port_id = dev->data->port_id;
+	uint16_t proxy_port_id = port_id;
+	int ret;
+
+	ret = mlx5_flow_pick_transfer_proxy(dev, &proxy_port_id, NULL);
+	if (ret)
+		return ret;
+	proxy_dev = &rte_eth_devices[proxy_port_id];
+	proxy_priv = proxy_dev->data->dev_private;
+	if (!proxy_priv->hw_vport)
+		return 0;
+	if (proxy_priv->hw_vport[port_id]) {
+		DRV_LOG(ERR, "port %u HWS vport action already created",
+			port_id);
+		return -EINVAL;
+	}
+	proxy_priv->hw_vport[port_id] = mlx5dr_action_create_dest_vport
+			(proxy_priv->dr_ctx, priv->dev_port,
+			 MLX5DR_ACTION_FLAG_HWS_FDB);
+	if (!proxy_priv->hw_vport[port_id]) {
+		DRV_LOG(ERR, "port %u unable to create HWS vport action",
+			port_id);
+		return -EINVAL;
+	}
+	return 0;
+}
+
+/**
+ * Destroys the vport action associated with @p dev device
+ * from actions' cache.
+ *
+ * This function does not destroy any action if there is no action cached
+ * for @p dev or the proxy port was not configured for HW Steering.
+ *
+ * This function assumes that E-Switch is enabled and PMD is running with
+ * HW Steering configured.
+ *
+ * @param dev
+ *   Pointer to Ethernet device which will be the action destination.
+ */
+void
+flow_hw_destroy_vport_action(struct rte_eth_dev *dev)
+{
+	struct rte_eth_dev *proxy_dev;
+	struct mlx5_priv *proxy_priv;
+	uint16_t port_id = dev->data->port_id;
+	uint16_t proxy_port_id = port_id;
+
+	if (mlx5_flow_pick_transfer_proxy(dev, &proxy_port_id, NULL))
+		return;
+	proxy_dev = &rte_eth_devices[proxy_port_id];
+	proxy_priv = proxy_dev->data->dev_private;
+	if (!proxy_priv->hw_vport || !proxy_priv->hw_vport[port_id])
+		return;
+	mlx5dr_action_destroy(proxy_priv->hw_vport[port_id]);
+	proxy_priv->hw_vport[port_id] = NULL;
+}
+
+static int
+flow_hw_create_vport_actions(struct mlx5_priv *priv)
+{
+	uint16_t port_id;
+
+	MLX5_ASSERT(!priv->hw_vport);
+	priv->hw_vport = mlx5_malloc(MLX5_MEM_ZERO,
+				     sizeof(*priv->hw_vport) * RTE_MAX_ETHPORTS,
+				     0, SOCKET_ID_ANY);
+	if (!priv->hw_vport)
+		return -ENOMEM;
+	DRV_LOG(DEBUG, "port %u :: creating vport actions", priv->dev_data->port_id);
+	DRV_LOG(DEBUG, "port %u :: domain_id=%u", priv->dev_data->port_id, priv->domain_id);
+	MLX5_ETH_FOREACH_DEV(port_id, NULL) {
+		struct mlx5_priv *port_priv = rte_eth_devices[port_id].data->dev_private;
+
+		if (!port_priv ||
+		    port_priv->domain_id != priv->domain_id)
+			continue;
+		DRV_LOG(DEBUG, "port %u :: for port_id=%u, calling mlx5dr_action_create_dest_vport() with ibport=%u",
+			priv->dev_data->port_id, port_id, port_priv->dev_port);
+		priv->hw_vport[port_id] = mlx5dr_action_create_dest_vport
+				(priv->dr_ctx, port_priv->dev_port,
+				 MLX5DR_ACTION_FLAG_HWS_FDB);
+		DRV_LOG(DEBUG, "port %u :: priv->hw_vport[%u]=%p",
+			priv->dev_data->port_id, port_id, (void *)priv->hw_vport[port_id]);
+		if (!priv->hw_vport[port_id])
+			return -EINVAL;
+	}
+	return 0;
+}
+
+static void
+flow_hw_free_vport_actions(struct mlx5_priv *priv)
+{
+	uint16_t port_id;
+
+	if (!priv->hw_vport)
+		return;
+	for (port_id = 0; port_id <
RTE_MAX_ETHPORTS; ++port_id) + if (priv->hw_vport[port_id]) + mlx5dr_action_destroy(priv->hw_vport[port_id]); + mlx5_free(priv->hw_vport); + priv->hw_vport = NULL; +} + +/** + * Creates a flow pattern template used to match on E-Switch Manager. + * This template is used to set up a table for SQ miss default flow. + * + * @param dev + * Pointer to Ethernet device. + * + * @return + * Pointer to flow pattern template on success, NULL otherwise. + */ +static struct rte_flow_pattern_template * +flow_hw_create_ctrl_esw_mgr_pattern_template(struct rte_eth_dev *dev) +{ + struct rte_flow_pattern_template_attr attr = { + .relaxed_matching = 0, + .transfer = 1, + }; + struct rte_flow_item_ethdev port_spec = { + .port_id = MLX5_REPRESENTED_PORT_ESW_MGR, + }; + struct rte_flow_item_ethdev port_mask = { + .port_id = UINT16_MAX, + }; + struct rte_flow_item items[] = { + { + .type = RTE_FLOW_ITEM_TYPE_REPRESENTED_PORT, + .spec = &port_spec, + .mask = &port_mask, + }, + { + .type = RTE_FLOW_ITEM_TYPE_END, + }, + }; + + return flow_hw_pattern_template_create(dev, &attr, items, NULL); +} + +/** + * Creates a flow pattern template used to match on a TX queue. + * This template is used to set up a table for SQ miss default flow. + * + * @param dev + * Pointer to Ethernet device. + * + * @return + * Pointer to flow pattern template on success, NULL otherwise. + */ +static struct rte_flow_pattern_template * +flow_hw_create_ctrl_sq_pattern_template(struct rte_eth_dev *dev) +{ + struct rte_flow_pattern_template_attr attr = { + .relaxed_matching = 0, + .transfer = 1, + }; + struct mlx5_rte_flow_item_tx_queue queue_mask = { + .queue = UINT32_MAX, + }; + struct rte_flow_item items[] = { + { + .type = (enum rte_flow_item_type) + MLX5_RTE_FLOW_ITEM_TYPE_TX_QUEUE, + .mask = &queue_mask, + }, + { + .type = RTE_FLOW_ITEM_TYPE_END, + }, + }; + + return flow_hw_pattern_template_create(dev, &attr, items, NULL); +} + +/** + * Creates a flow pattern template with unmasked represented port matching. + * This template is used to set up a table for default transfer flows + * directing packets to group 1. + * + * @param dev + * Pointer to Ethernet device. + * + * @return + * Pointer to flow pattern template on success, NULL otherwise. + */ +static struct rte_flow_pattern_template * +flow_hw_create_ctrl_port_pattern_template(struct rte_eth_dev *dev) +{ + struct rte_flow_pattern_template_attr attr = { + .relaxed_matching = 0, + .transfer = 1, + }; + struct rte_flow_item_ethdev port_mask = { + .port_id = UINT16_MAX, + }; + struct rte_flow_item items[] = { + { + .type = RTE_FLOW_ITEM_TYPE_REPRESENTED_PORT, + .mask = &port_mask, + }, + { + .type = RTE_FLOW_ITEM_TYPE_END, + }, + }; + + return flow_hw_pattern_template_create(dev, &attr, items, NULL); +} + +/** + * Creates a flow actions template with an unmasked JUMP action. Flows + * based on this template will perform a jump to some group. This template + * is used to set up tables for control flows. + * + * @param dev + * Pointer to Ethernet device. + * @param group + * Destination group for this action template. + * + * @return + * Pointer to flow actions template on success, NULL otherwise. 
+ */
+static struct rte_flow_actions_template *
+flow_hw_create_ctrl_jump_actions_template(struct rte_eth_dev *dev,
+					  uint32_t group)
+{
+	struct rte_flow_actions_template_attr attr = {
+		.transfer = 1,
+	};
+	struct rte_flow_action_jump jump_v = {
+		.group = group,
+	};
+	struct rte_flow_action_jump jump_m = {
+		.group = UINT32_MAX,
+	};
+	struct rte_flow_action actions_v[] = {
+		{
+			.type = RTE_FLOW_ACTION_TYPE_JUMP,
+			.conf = &jump_v,
+		},
+		{
+			.type = RTE_FLOW_ACTION_TYPE_END,
+		}
+	};
+	struct rte_flow_action actions_m[] = {
+		{
+			.type = RTE_FLOW_ACTION_TYPE_JUMP,
+			.conf = &jump_m,
+		},
+		{
+			.type = RTE_FLOW_ACTION_TYPE_END,
+		}
+	};
+
+	return flow_hw_actions_template_create(dev, &attr, actions_v, actions_m,
+					       NULL);
+}
+
+/**
+ * Creates a flow actions template with an unmasked REPRESENTED_PORT action.
+ * It is used to create control flow tables.
+ *
+ * @param dev
+ *   Pointer to Ethernet device.
+ *
+ * @return
+ *   Pointer to flow actions template on success, NULL otherwise.
+ */
+static struct rte_flow_actions_template *
+flow_hw_create_ctrl_port_actions_template(struct rte_eth_dev *dev)
+{
+	struct rte_flow_actions_template_attr attr = {
+		.transfer = 1,
+	};
+	struct rte_flow_action_ethdev port_v = {
+		.port_id = 0,
+	};
+	struct rte_flow_action actions_v[] = {
+		{
+			.type = RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT,
+			.conf = &port_v,
+		},
+		{
+			.type = RTE_FLOW_ACTION_TYPE_END,
+		}
+	};
+	struct rte_flow_action_ethdev port_m = {
+		.port_id = 0,
+	};
+	struct rte_flow_action actions_m[] = {
+		{
+			.type = RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT,
+			.conf = &port_m,
+		},
+		{
+			.type = RTE_FLOW_ACTION_TYPE_END,
+		}
+	};
+
+	return flow_hw_actions_template_create(dev, &attr, actions_v, actions_m,
+					       NULL);
+}
+
+/**
+ * Creates a control flow table used to transfer traffic from the E-Switch
+ * Manager from group 0 (the root table) to the SQ miss group.
+ *
+ * @param dev
+ *   Pointer to Ethernet device.
+ * @param it
+ *   Pointer to flow pattern template.
+ * @param at
+ *   Pointer to flow actions template.
+ *
+ * @return
+ *   Pointer to flow table on success, NULL otherwise.
+ */
+static struct rte_flow_template_table*
+flow_hw_create_ctrl_sq_miss_root_table(struct rte_eth_dev *dev,
+				       struct rte_flow_pattern_template *it,
+				       struct rte_flow_actions_template *at)
+{
+	struct rte_flow_template_table_attr attr = {
+		.flow_attr = {
+			.group = 0,
+			.priority = 0,
+			.ingress = 0,
+			.egress = 0,
+			.transfer = 1,
+		},
+		.nb_flows = MLX5_HW_CTRL_FLOW_NB_RULES,
+	};
+
+	return flow_hw_table_create(dev, &attr, &it, 1, &at, 1, NULL);
+}
+
+/**
+ * Creates a control flow table used to forward traffic from a given TX queue
+ * to its destination represented port. Used for SQ miss default flows
+ * (non-root table, group MLX5_HW_SQ_MISS_GROUP).
+ *
+ * @param dev
+ *   Pointer to Ethernet device.
+ * @param it + * Pointer to flow pattern template. + * @param at + * Pointer to flow actions template. + * + * @return + * Pointer to flow table on success, NULL otherwise. + */ +static struct rte_flow_template_table * +flow_hw_create_ctrl_jump_table(struct rte_eth_dev *dev, + struct rte_flow_pattern_template *it, + struct rte_flow_actions_template *at) +{ + struct rte_flow_template_table_attr attr = { + .flow_attr = { + .group = 0, + .priority = 15, /* TODO: Flow priority discovery. */ + .ingress = 0, + .egress = 0, + .transfer = 1, + }, + .nb_flows = MLX5_HW_CTRL_FLOW_NB_RULES, + }; + + return flow_hw_table_create(dev, &attr, &it, 1, &at, 1, NULL); +} + +/** + * Creates a set of flow tables used to create control flows used + * when E-Switch is engaged. + * + * @param dev + * Pointer to Ethernet device. + * + * @return + * 0 on success, EINVAL otherwise + */ +static __rte_unused int +flow_hw_create_ctrl_tables(struct rte_eth_dev *dev) +{ + struct mlx5_priv *priv = dev->data->dev_private; + struct rte_flow_pattern_template *esw_mgr_items_tmpl = NULL; + struct rte_flow_pattern_template *sq_items_tmpl = NULL; + struct rte_flow_pattern_template *port_items_tmpl = NULL; + struct rte_flow_actions_template *jump_sq_actions_tmpl = NULL; + struct rte_flow_actions_template *port_actions_tmpl = NULL; + struct rte_flow_actions_template *jump_one_actions_tmpl = NULL; + + /* Item templates */ + esw_mgr_items_tmpl = flow_hw_create_ctrl_esw_mgr_pattern_template(dev); + if (!esw_mgr_items_tmpl) { + DRV_LOG(ERR, "port %u failed to create E-Switch Manager item" + " template for control flows", dev->data->port_id); + goto error; + } + sq_items_tmpl = flow_hw_create_ctrl_sq_pattern_template(dev); + if (!sq_items_tmpl) { + DRV_LOG(ERR, "port %u failed to create SQ item template for" + " control flows", dev->data->port_id); + goto error; + } + port_items_tmpl = flow_hw_create_ctrl_port_pattern_template(dev); + if (!port_items_tmpl) { + DRV_LOG(ERR, "port %u failed to create SQ item template for" + " control flows", dev->data->port_id); + goto error; + } + /* Action templates */ + jump_sq_actions_tmpl = flow_hw_create_ctrl_jump_actions_template(dev, + MLX5_HW_SQ_MISS_GROUP); + if (!jump_sq_actions_tmpl) { + DRV_LOG(ERR, "port %u failed to create jump action template" + " for control flows", dev->data->port_id); + goto error; + } + port_actions_tmpl = flow_hw_create_ctrl_port_actions_template(dev); + if (!port_actions_tmpl) { + DRV_LOG(ERR, "port %u failed to create port action template" + " for control flows", dev->data->port_id); + goto error; + } + jump_one_actions_tmpl = flow_hw_create_ctrl_jump_actions_template(dev, 1); + if (!jump_one_actions_tmpl) { + DRV_LOG(ERR, "port %u failed to create jump action template" + " for control flows", dev->data->port_id); + goto error; + } + /* Tables */ + MLX5_ASSERT(priv->hw_esw_sq_miss_root_tbl == NULL); + priv->hw_esw_sq_miss_root_tbl = flow_hw_create_ctrl_sq_miss_root_table + (dev, esw_mgr_items_tmpl, jump_sq_actions_tmpl); + if (!priv->hw_esw_sq_miss_root_tbl) { + DRV_LOG(ERR, "port %u failed to create table for default sq miss (root table)" + " for control flows", dev->data->port_id); + goto error; + } + MLX5_ASSERT(priv->hw_esw_sq_miss_tbl == NULL); + priv->hw_esw_sq_miss_tbl = flow_hw_create_ctrl_sq_miss_table(dev, sq_items_tmpl, + port_actions_tmpl); + if (!priv->hw_esw_sq_miss_tbl) { + DRV_LOG(ERR, "port %u failed to create table for default sq miss (non-root table)" + " for control flows", dev->data->port_id); + goto error; + } + 
MLX5_ASSERT(priv->hw_esw_zero_tbl == NULL); + priv->hw_esw_zero_tbl = flow_hw_create_ctrl_jump_table(dev, port_items_tmpl, + jump_one_actions_tmpl); + if (!priv->hw_esw_zero_tbl) { + DRV_LOG(ERR, "port %u failed to create table for default jump to group 1" + " for control flows", dev->data->port_id); + goto error; + } + return 0; +error: + if (priv->hw_esw_zero_tbl) { + flow_hw_table_destroy(dev, priv->hw_esw_zero_tbl, NULL); + priv->hw_esw_zero_tbl = NULL; + } + if (priv->hw_esw_sq_miss_tbl) { + flow_hw_table_destroy(dev, priv->hw_esw_sq_miss_tbl, NULL); + priv->hw_esw_sq_miss_tbl = NULL; + } + if (priv->hw_esw_sq_miss_root_tbl) { + flow_hw_table_destroy(dev, priv->hw_esw_sq_miss_root_tbl, NULL); + priv->hw_esw_sq_miss_root_tbl = NULL; + } + if (jump_one_actions_tmpl) + flow_hw_actions_template_destroy(dev, jump_one_actions_tmpl, NULL); + if (port_actions_tmpl) + flow_hw_actions_template_destroy(dev, port_actions_tmpl, NULL); + if (jump_sq_actions_tmpl) + flow_hw_actions_template_destroy(dev, jump_sq_actions_tmpl, NULL); + if (port_items_tmpl) + flow_hw_pattern_template_destroy(dev, port_items_tmpl, NULL); + if (sq_items_tmpl) + flow_hw_pattern_template_destroy(dev, sq_items_tmpl, NULL); + if (esw_mgr_items_tmpl) + flow_hw_pattern_template_destroy(dev, esw_mgr_items_tmpl, NULL); + return -EINVAL; +} + /** * Configure port HWS resources. * @@ -2606,7 +3424,6 @@ flow_hw_grp_clone_free_cb(void *tool_ctx, struct mlx5_list_entry *entry) * @return * 0 on success, a negative errno value otherwise and rte_errno is set. */ - static int flow_hw_configure(struct rte_eth_dev *dev, const struct rte_flow_port_attr *port_attr, @@ -2629,6 +3446,14 @@ flow_hw_configure(struct rte_eth_dev *dev, .free = mlx5_free, .type = "mlx5_hw_action_construct_data", }; + /* Adds one queue to be used by PMD. + * The last queue will be used by the PMD. + */ + uint16_t nb_q_updated; + struct rte_flow_queue_attr **_queue_attr = NULL; + struct rte_flow_queue_attr ctrl_queue_attr = {0}; + bool is_proxy = !!(priv->sh->config.dv_esw_en && priv->master); + int ret; if (!port_attr || !nb_queue || !queue_attr) { rte_errno = EINVAL; @@ -2637,7 +3462,7 @@ flow_hw_configure(struct rte_eth_dev *dev, /* In case re-configuring, release existing context at first. */ if (priv->dr_ctx) { /* */ - for (i = 0; i < nb_queue; i++) { + for (i = 0; i < priv->nb_queue; i++) { hw_q = &priv->hw_q[i]; /* Make sure all queues are empty. */ if (hw_q->size != hw_q->job_idx) { @@ -2647,26 +3472,42 @@ flow_hw_configure(struct rte_eth_dev *dev, } flow_hw_resource_release(dev); } + ctrl_queue_attr.size = queue_attr[0]->size; + nb_q_updated = nb_queue + 1; + _queue_attr = mlx5_malloc(MLX5_MEM_ZERO, + nb_q_updated * + sizeof(struct rte_flow_queue_attr *), + 64, SOCKET_ID_ANY); + if (!_queue_attr) { + rte_errno = ENOMEM; + goto err; + } + + memcpy(_queue_attr, queue_attr, + sizeof(void *) * nb_queue); + _queue_attr[nb_queue] = &ctrl_queue_attr; priv->acts_ipool = mlx5_ipool_create(&cfg); if (!priv->acts_ipool) goto err; /* Allocate the queue job descriptor LIFO. */ - mem_size = sizeof(priv->hw_q[0]) * nb_queue; - for (i = 0; i < nb_queue; i++) { + mem_size = sizeof(priv->hw_q[0]) * nb_q_updated; + for (i = 0; i < nb_q_updated; i++) { /* * Check if the queues' size are all the same as the * limitation from HWS layer. 
*/ - if (queue_attr[i]->size != queue_attr[0]->size) { + if (_queue_attr[i]->size != _queue_attr[0]->size) { rte_errno = EINVAL; goto err; } mem_size += (sizeof(struct mlx5_hw_q_job *) + + sizeof(struct mlx5_hw_q_job) + sizeof(uint8_t) * MLX5_ENCAP_MAX_LEN + sizeof(struct mlx5_modification_cmd) * MLX5_MHDR_MAX_CMD + - sizeof(struct mlx5_hw_q_job)) * - queue_attr[0]->size; + sizeof(struct rte_flow_item) * + MLX5_HW_MAX_ITEMS) * + _queue_attr[i]->size; } priv->hw_q = mlx5_malloc(MLX5_MEM_ZERO, mem_size, 64, SOCKET_ID_ANY); @@ -2674,58 +3515,82 @@ flow_hw_configure(struct rte_eth_dev *dev, rte_errno = ENOMEM; goto err; } - for (i = 0; i < nb_queue; i++) { + for (i = 0; i < nb_q_updated; i++) { uint8_t *encap = NULL; struct mlx5_modification_cmd *mhdr_cmd = NULL; + struct rte_flow_item *items = NULL; - priv->hw_q[i].job_idx = queue_attr[i]->size; - priv->hw_q[i].size = queue_attr[i]->size; + priv->hw_q[i].job_idx = _queue_attr[i]->size; + priv->hw_q[i].size = _queue_attr[i]->size; if (i == 0) priv->hw_q[i].job = (struct mlx5_hw_q_job **) - &priv->hw_q[nb_queue]; + &priv->hw_q[nb_q_updated]; else priv->hw_q[i].job = (struct mlx5_hw_q_job **) - &job[queue_attr[i - 1]->size]; + &job[_queue_attr[i - 1]->size - 1].items + [MLX5_HW_MAX_ITEMS]; job = (struct mlx5_hw_q_job *) - &priv->hw_q[i].job[queue_attr[i]->size]; - mhdr_cmd = (struct mlx5_modification_cmd *)&job[queue_attr[i]->size]; - encap = (uint8_t *)&mhdr_cmd[queue_attr[i]->size * MLX5_MHDR_MAX_CMD]; - for (j = 0; j < queue_attr[i]->size; j++) { + &priv->hw_q[i].job[_queue_attr[i]->size]; + mhdr_cmd = (struct mlx5_modification_cmd *) + &job[_queue_attr[i]->size]; + encap = (uint8_t *) + &mhdr_cmd[_queue_attr[i]->size * MLX5_MHDR_MAX_CMD]; + items = (struct rte_flow_item *) + &encap[_queue_attr[i]->size * MLX5_ENCAP_MAX_LEN]; + for (j = 0; j < _queue_attr[i]->size; j++) { job[j].mhdr_cmd = &mhdr_cmd[j * MLX5_MHDR_MAX_CMD]; job[j].encap_data = &encap[j * MLX5_ENCAP_MAX_LEN]; + job[j].items = &items[j * MLX5_HW_MAX_ITEMS]; priv->hw_q[i].job[j] = &job[j]; } } dr_ctx_attr.pd = priv->sh->cdev->pd; - dr_ctx_attr.queues = nb_queue; + dr_ctx_attr.queues = nb_q_updated; /* Queue size should all be the same. Take the first one. */ - dr_ctx_attr.queue_size = queue_attr[0]->size; + dr_ctx_attr.queue_size = _queue_attr[0]->size; dr_ctx = mlx5dr_context_open(priv->sh->cdev->ctx, &dr_ctx_attr); /* rte_errno has been updated by HWS layer. */ if (!dr_ctx) goto err; priv->dr_ctx = dr_ctx; - priv->nb_queue = nb_queue; + priv->nb_queue = nb_q_updated; + rte_spinlock_init(&priv->hw_ctrl_lock); + LIST_INIT(&priv->hw_ctrl_flows); /* Add global actions. 
*/ for (i = 0; i < MLX5_HW_ACTION_FLAG_MAX; i++) { - for (j = 0; j < MLX5DR_TABLE_TYPE_MAX; j++) { - priv->hw_drop[i][j] = mlx5dr_action_create_dest_drop - (priv->dr_ctx, mlx5_hw_act_flag[i][j]); - if (!priv->hw_drop[i][j]) - goto err; - } + uint32_t act_flags = 0; + + act_flags = mlx5_hw_act_flag[i][0] | mlx5_hw_act_flag[i][1]; + if (is_proxy) + act_flags |= mlx5_hw_act_flag[i][2]; + priv->hw_drop[i] = mlx5dr_action_create_dest_drop(priv->dr_ctx, act_flags); + if (!priv->hw_drop[i]) + goto err; priv->hw_tag[i] = mlx5dr_action_create_tag (priv->dr_ctx, mlx5_hw_act_flag[i][0]); if (!priv->hw_tag[i]) goto err; } + if (is_proxy) { + ret = flow_hw_create_vport_actions(priv); + if (ret) { + rte_errno = -ret; + goto err; + } + ret = flow_hw_create_ctrl_tables(dev); + if (ret) { + rte_errno = -ret; + goto err; + } + } + if (_queue_attr) + mlx5_free(_queue_attr); return 0; err: + flow_hw_free_vport_actions(priv); for (i = 0; i < MLX5_HW_ACTION_FLAG_MAX; i++) { - for (j = 0; j < MLX5DR_TABLE_TYPE_MAX; j++) { - if (priv->hw_drop[i][j]) - mlx5dr_action_destroy(priv->hw_drop[i][j]); - } + if (priv->hw_drop[i]) + mlx5dr_action_destroy(priv->hw_drop[i]); if (priv->hw_tag[i]) mlx5dr_action_destroy(priv->hw_tag[i]); } @@ -2737,6 +3602,8 @@ flow_hw_configure(struct rte_eth_dev *dev, mlx5_ipool_destroy(priv->acts_ipool); priv->acts_ipool = NULL; } + if (_queue_attr) + mlx5_free(_queue_attr); return rte_flow_error_set(error, rte_errno, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, "fail to configure port"); @@ -2755,10 +3622,11 @@ flow_hw_resource_release(struct rte_eth_dev *dev) struct rte_flow_template_table *tbl; struct rte_flow_pattern_template *it; struct rte_flow_actions_template *at; - int i, j; + int i; if (!priv->dr_ctx) return; + flow_hw_flush_all_ctrl_flows(dev); while (!LIST_EMPTY(&priv->flow_hw_tbl)) { tbl = LIST_FIRST(&priv->flow_hw_tbl); flow_hw_table_destroy(dev, tbl, NULL); @@ -2772,13 +3640,12 @@ flow_hw_resource_release(struct rte_eth_dev *dev) flow_hw_actions_template_destroy(dev, at, NULL); } for (i = 0; i < MLX5_HW_ACTION_FLAG_MAX; i++) { - for (j = 0; j < MLX5DR_TABLE_TYPE_MAX; j++) { - if (priv->hw_drop[i][j]) - mlx5dr_action_destroy(priv->hw_drop[i][j]); - } + if (priv->hw_drop[i]) + mlx5dr_action_destroy(priv->hw_drop[i]); if (priv->hw_tag[i]) mlx5dr_action_destroy(priv->hw_tag[i]); } + flow_hw_free_vport_actions(priv); if (priv->acts_ipool) { mlx5_ipool_destroy(priv->acts_ipool); priv->acts_ipool = NULL; @@ -3021,4 +3888,397 @@ const struct mlx5_flow_driver_ops mlx5_flow_hw_drv_ops = { .action_query = flow_dv_action_query, }; +static uint32_t +flow_hw_get_ctrl_queue(struct mlx5_priv *priv) +{ + MLX5_ASSERT(priv->nb_queue > 0); + return priv->nb_queue - 1; +} + +/** + * Creates a control flow using flow template API on @p proxy_dev device, + * on behalf of @p owner_dev device. + * + * This function uses locks internally to synchronize access to the + * flow queue. + * + * Created flow is stored in private list associated with @p proxy_dev device. + * + * @param owner_dev + * Pointer to Ethernet device on behalf of which flow is created. + * @param proxy_dev + * Pointer to Ethernet device on which flow is created. + * @param table + * Pointer to flow table. + * @param items + * Pointer to flow rule items. + * @param item_template_idx + * Index of an item template associated with @p table. + * @param actions + * Pointer to flow rule actions. + * @param action_template_idx + * Index of an action template associated with @p table. 
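+ *
+ * @note The rule is enqueued on the PMD-reserved control queue, and the
+ *   queue is pushed and polled for completion before returning, so the
+ *   insertion is synchronous from the caller's perspective.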
+ * + * @return + * 0 on success, negative errno value otherwise and rte_errno set. + */ +static __rte_unused int +flow_hw_create_ctrl_flow(struct rte_eth_dev *owner_dev, + struct rte_eth_dev *proxy_dev, + struct rte_flow_template_table *table, + struct rte_flow_item items[], + uint8_t item_template_idx, + struct rte_flow_action actions[], + uint8_t action_template_idx) +{ + struct mlx5_priv *priv = proxy_dev->data->dev_private; + uint32_t queue = flow_hw_get_ctrl_queue(priv); + struct rte_flow_op_attr op_attr = { + .postpone = 0, + }; + struct rte_flow *flow = NULL; + struct mlx5_hw_ctrl_flow *entry = NULL; + int ret; + + rte_spinlock_lock(&priv->hw_ctrl_lock); + entry = mlx5_malloc(MLX5_MEM_ZERO | MLX5_MEM_SYS, sizeof(*entry), + 0, SOCKET_ID_ANY); + if (!entry) { + DRV_LOG(ERR, "port %u not enough memory to create control flows", + proxy_dev->data->port_id); + rte_errno = ENOMEM; + ret = -rte_errno; + goto error; + } + flow = flow_hw_async_flow_create(proxy_dev, queue, &op_attr, table, + items, item_template_idx, + actions, action_template_idx, + NULL, NULL); + if (!flow) { + DRV_LOG(ERR, "port %u failed to enqueue create control" + " flow operation", proxy_dev->data->port_id); + ret = -rte_errno; + goto error; + } + ret = flow_hw_push(proxy_dev, queue, NULL); + if (ret) { + DRV_LOG(ERR, "port %u failed to drain control flow queue", + proxy_dev->data->port_id); + goto error; + } + ret = __flow_hw_pull_comp(proxy_dev, queue, 1, NULL); + if (ret) { + DRV_LOG(ERR, "port %u failed to insert control flow", + proxy_dev->data->port_id); + rte_errno = EINVAL; + ret = -rte_errno; + goto error; + } + entry->owner_dev = owner_dev; + entry->flow = flow; + LIST_INSERT_HEAD(&priv->hw_ctrl_flows, entry, next); + rte_spinlock_unlock(&priv->hw_ctrl_lock); + return 0; +error: + if (entry) + mlx5_free(entry); + rte_spinlock_unlock(&priv->hw_ctrl_lock); + return ret; +} + +/** + * Destroys a control flow @p flow using flow template API on @p dev device. + * + * This function uses locks internally to synchronize access to the + * flow queue. + * + * If the @p flow is stored on any private list/pool, then caller must free up + * the relevant resources. + * + * @param dev + * Pointer to Ethernet device. + * @param flow + * Pointer to flow rule. + * + * @return + * 0 on success, non-zero value otherwise. + */ +static int +flow_hw_destroy_ctrl_flow(struct rte_eth_dev *dev, struct rte_flow *flow) +{ + struct mlx5_priv *priv = dev->data->dev_private; + uint32_t queue = flow_hw_get_ctrl_queue(priv); + struct rte_flow_op_attr op_attr = { + .postpone = 0, + }; + int ret; + + rte_spinlock_lock(&priv->hw_ctrl_lock); + ret = flow_hw_async_flow_destroy(dev, queue, &op_attr, flow, NULL, NULL); + if (ret) { + DRV_LOG(ERR, "port %u failed to enqueue destroy control" + " flow operation", dev->data->port_id); + goto exit; + } + ret = flow_hw_push(dev, queue, NULL); + if (ret) { + DRV_LOG(ERR, "port %u failed to drain control flow queue", + dev->data->port_id); + goto exit; + } + ret = __flow_hw_pull_comp(dev, queue, 1, NULL); + if (ret) { + DRV_LOG(ERR, "port %u failed to destroy control flow", + dev->data->port_id); + rte_errno = EINVAL; + ret = -rte_errno; + goto exit; + } +exit: + rte_spinlock_unlock(&priv->hw_ctrl_lock); + return ret; +} + +/** + * Destroys control flows created on behalf of @p owner_dev device. + * + * @param owner_dev + * Pointer to Ethernet device owning control flows. + * + * @return + * 0 on success, otherwise negative error code is returned and + * rte_errno is set. 
+ */ +int +mlx5_flow_hw_flush_ctrl_flows(struct rte_eth_dev *owner_dev) +{ + struct mlx5_priv *owner_priv = owner_dev->data->dev_private; + struct rte_eth_dev *proxy_dev; + struct mlx5_priv *proxy_priv; + struct mlx5_hw_ctrl_flow *cf; + struct mlx5_hw_ctrl_flow *cf_next; + uint16_t owner_port_id = owner_dev->data->port_id; + uint16_t proxy_port_id = owner_dev->data->port_id; + int ret; + + if (owner_priv->sh->config.dv_esw_en) { + if (rte_flow_pick_transfer_proxy(owner_port_id, &proxy_port_id, NULL)) { + DRV_LOG(ERR, "Unable to find proxy port for port %u", + owner_port_id); + rte_errno = EINVAL; + return -rte_errno; + } + proxy_dev = &rte_eth_devices[proxy_port_id]; + proxy_priv = proxy_dev->data->dev_private; + } else { + proxy_dev = owner_dev; + proxy_priv = owner_priv; + } + cf = LIST_FIRST(&proxy_priv->hw_ctrl_flows); + while (cf != NULL) { + cf_next = LIST_NEXT(cf, next); + if (cf->owner_dev == owner_dev) { + ret = flow_hw_destroy_ctrl_flow(proxy_dev, cf->flow); + if (ret) { + rte_errno = ret; + return -ret; + } + LIST_REMOVE(cf, next); + mlx5_free(cf); + } + cf = cf_next; + } + return 0; +} + +/** + * Destroys all control flows created on @p dev device. + * + * @param owner_dev + * Pointer to Ethernet device. + * + * @return + * 0 on success, otherwise negative error code is returned and + * rte_errno is set. + */ +static int +flow_hw_flush_all_ctrl_flows(struct rte_eth_dev *dev) +{ + struct mlx5_priv *priv = dev->data->dev_private; + struct mlx5_hw_ctrl_flow *cf; + struct mlx5_hw_ctrl_flow *cf_next; + int ret; + + cf = LIST_FIRST(&priv->hw_ctrl_flows); + while (cf != NULL) { + cf_next = LIST_NEXT(cf, next); + ret = flow_hw_destroy_ctrl_flow(dev, cf->flow); + if (ret) { + rte_errno = ret; + return -ret; + } + LIST_REMOVE(cf, next); + mlx5_free(cf); + cf = cf_next; + } + return 0; +} + +int +mlx5_flow_hw_esw_create_mgr_sq_miss_flow(struct rte_eth_dev *dev) +{ + struct mlx5_priv *priv = dev->data->dev_private; + struct rte_flow_item_ethdev port_spec = { + .port_id = MLX5_REPRESENTED_PORT_ESW_MGR, + }; + struct rte_flow_item_ethdev port_mask = { + .port_id = MLX5_REPRESENTED_PORT_ESW_MGR, + }; + struct rte_flow_item items[] = { + { + .type = RTE_FLOW_ITEM_TYPE_REPRESENTED_PORT, + .spec = &port_spec, + .mask = &port_mask, + }, + { + .type = RTE_FLOW_ITEM_TYPE_END, + }, + }; + struct rte_flow_action_jump jump = { + .group = MLX5_HW_SQ_MISS_GROUP, + }; + struct rte_flow_action actions[] = { + { + .type = RTE_FLOW_ACTION_TYPE_JUMP, + .conf = &jump, + }, + { + .type = RTE_FLOW_ACTION_TYPE_END, + }, + }; + + MLX5_ASSERT(priv->master); + if (!priv->dr_ctx || + !priv->hw_esw_sq_miss_root_tbl) + return 0; + return flow_hw_create_ctrl_flow(dev, dev, + priv->hw_esw_sq_miss_root_tbl, + items, 0, actions, 0); +} + +int +mlx5_flow_hw_esw_create_sq_miss_flow(struct rte_eth_dev *dev, uint32_t txq) +{ + uint16_t port_id = dev->data->port_id; + struct mlx5_rte_flow_item_tx_queue queue_spec = { + .queue = txq, + }; + struct mlx5_rte_flow_item_tx_queue queue_mask = { + .queue = UINT32_MAX, + }; + struct rte_flow_item items[] = { + { + .type = (enum rte_flow_item_type) + MLX5_RTE_FLOW_ITEM_TYPE_TX_QUEUE, + .spec = &queue_spec, + .mask = &queue_mask, + }, + { + .type = RTE_FLOW_ITEM_TYPE_END, + }, + }; + struct rte_flow_action_ethdev port = { + .port_id = port_id, + }; + struct rte_flow_action actions[] = { + { + .type = RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT, + .conf = &port, + }, + { + .type = RTE_FLOW_ACTION_TYPE_END, + }, + }; + struct rte_eth_dev *proxy_dev; + struct mlx5_priv *proxy_priv; + uint16_t 
proxy_port_id = dev->data->port_id; + int ret; + + RTE_SET_USED(txq); + ret = rte_flow_pick_transfer_proxy(port_id, &proxy_port_id, NULL); + if (ret) { + DRV_LOG(ERR, "Unable to pick proxy port for port %u", port_id); + return ret; + } + proxy_dev = &rte_eth_devices[proxy_port_id]; + proxy_priv = proxy_dev->data->dev_private; + if (!proxy_priv->dr_ctx) + return 0; + if (!proxy_priv->hw_esw_sq_miss_root_tbl || + !proxy_priv->hw_esw_sq_miss_tbl) { + DRV_LOG(ERR, "port %u proxy port %u was configured but default" + " flow tables are not created", + port_id, proxy_port_id); + rte_errno = ENOMEM; + return -rte_errno; + } + return flow_hw_create_ctrl_flow(dev, proxy_dev, + proxy_priv->hw_esw_sq_miss_tbl, + items, 0, actions, 0); +} + +int +mlx5_flow_hw_esw_create_default_jump_flow(struct rte_eth_dev *dev) +{ + uint16_t port_id = dev->data->port_id; + struct rte_flow_item_ethdev port_spec = { + .port_id = port_id, + }; + struct rte_flow_item items[] = { + { + .type = RTE_FLOW_ITEM_TYPE_REPRESENTED_PORT, + .spec = &port_spec, + }, + { + .type = RTE_FLOW_ITEM_TYPE_END, + }, + }; + struct rte_flow_action_jump jump = { + .group = 1, + }; + struct rte_flow_action actions[] = { + { + .type = RTE_FLOW_ACTION_TYPE_JUMP, + .conf = &jump, + }, + { + .type = RTE_FLOW_ACTION_TYPE_END, + } + }; + struct rte_eth_dev *proxy_dev; + struct mlx5_priv *proxy_priv; + uint16_t proxy_port_id = dev->data->port_id; + int ret; + + ret = rte_flow_pick_transfer_proxy(port_id, &proxy_port_id, NULL); + if (ret) { + DRV_LOG(ERR, "Unable to pick proxy port for port %u", port_id); + return ret; + } + proxy_dev = &rte_eth_devices[proxy_port_id]; + proxy_priv = proxy_dev->data->dev_private; + if (!proxy_priv->dr_ctx) + return 0; + if (!proxy_priv->hw_esw_zero_tbl) { + DRV_LOG(ERR, "port %u proxy port %u was configured but default" + " flow tables are not created", + port_id, proxy_port_id); + rte_errno = EINVAL; + return -rte_errno; + } + return flow_hw_create_ctrl_flow(dev, proxy_dev, + proxy_priv->hw_esw_zero_tbl, + items, 0, actions, 0); +} + #endif diff --git a/drivers/net/mlx5/mlx5_flow_verbs.c b/drivers/net/mlx5/mlx5_flow_verbs.c index fd902078f8..7ffaf4c227 100644 --- a/drivers/net/mlx5/mlx5_flow_verbs.c +++ b/drivers/net/mlx5/mlx5_flow_verbs.c @@ -1245,12 +1245,14 @@ flow_verbs_validate(struct rte_eth_dev *dev, uint16_t ether_type = 0; bool is_empty_vlan = false; uint16_t udp_dport = 0; + bool is_root; if (items == NULL) return -1; ret = mlx5_flow_validate_attributes(dev, attr, error); if (ret < 0) return ret; + is_root = ret; for (; items->type != RTE_FLOW_ITEM_TYPE_END; items++) { int tunnel = !!(item_flags & MLX5_FLOW_LAYER_TUNNEL); int ret = 0; @@ -1380,7 +1382,7 @@ flow_verbs_validate(struct rte_eth_dev *dev, case RTE_FLOW_ITEM_TYPE_VXLAN: ret = mlx5_flow_validate_item_vxlan(dev, udp_dport, items, item_flags, - attr, error); + is_root, error); if (ret < 0) return ret; last_item = MLX5_FLOW_LAYER_VXLAN; diff --git a/drivers/net/mlx5/mlx5_trigger.c b/drivers/net/mlx5/mlx5_trigger.c index c68b32cf14..3ef31671b1 100644 --- a/drivers/net/mlx5/mlx5_trigger.c +++ b/drivers/net/mlx5/mlx5_trigger.c @@ -1280,6 +1280,48 @@ mlx5_dev_stop(struct rte_eth_dev *dev) return 0; } +static int +mlx5_traffic_enable_hws(struct rte_eth_dev *dev) +{ + struct mlx5_priv *priv = dev->data->dev_private; + unsigned int i; + int ret; + + if (priv->sh->config.dv_esw_en && priv->master) { + if (mlx5_flow_hw_esw_create_mgr_sq_miss_flow(dev)) + goto error; + } + for (i = 0; i < priv->txqs_n; ++i) { + struct mlx5_txq_ctrl *txq = mlx5_txq_get(dev, i); 
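+		/*
+		 * The HW SQ number is resolved below; hairpin queues store
+		 * it in a different object than regular Tx queues.
+		 */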
+		uint32_t queue;
+
+		if (!txq)
+			continue;
+		if (txq->is_hairpin)
+			queue = txq->obj->sq->id;
+		else
+			queue = txq->obj->sq_obj.sq->id;
+		if ((priv->representor || priv->master) &&
+		    priv->sh->config.dv_esw_en) {
+			if (mlx5_flow_hw_esw_create_sq_miss_flow(dev, queue)) {
+				mlx5_txq_release(dev, i);
+				goto error;
+			}
+		}
+		mlx5_txq_release(dev, i);
+	}
+	if ((priv->master || priv->representor) && priv->sh->config.dv_esw_en) {
+		if (mlx5_flow_hw_esw_create_default_jump_flow(dev))
+			goto error;
+	}
+	return 0;
+error:
+	ret = rte_errno;
+	mlx5_flow_hw_flush_ctrl_flows(dev);
+	rte_errno = ret;
+	return -rte_errno;
+}
+
 /**
  * Enable traffic flows configured by control plane
  *
@@ -1316,6 +1358,8 @@ mlx5_traffic_enable(struct rte_eth_dev *dev)
 	unsigned int j;
 	int ret;
 
+	if (priv->sh->config.dv_flow_en == 2)
+		return mlx5_traffic_enable_hws(dev);
 	/*
 	 * Hairpin txq default flow should be created no matter if it is
 	 * isolation mode. Or else all the packets to be sent will be sent
@@ -1346,13 +1390,17 @@ mlx5_traffic_enable(struct rte_eth_dev *dev)
 		}
 		mlx5_txq_release(dev, i);
 	}
-	if (priv->sh->config.dv_esw_en) {
-		if (mlx5_flow_create_esw_table_zero_flow(dev))
-			priv->fdb_def_rule = 1;
-		else
-			DRV_LOG(INFO, "port %u FDB default rule cannot be"
-				" configured - only Eswitch group 0 flows are"
-				" supported.", dev->data->port_id);
+	if (priv->sh->config.fdb_def_rule) {
+		if (priv->sh->config.dv_esw_en) {
+			if (mlx5_flow_create_esw_table_zero_flow(dev))
+				priv->fdb_def_rule = 1;
+			else
+				DRV_LOG(INFO, "port %u FDB default rule cannot be configured - only Eswitch group 0 flows are supported.",
+					dev->data->port_id);
+		}
+	} else {
+		DRV_LOG(INFO, "port %u FDB default rule is disabled",
+			dev->data->port_id);
 	}
 	if (!priv->sh->config.lacp_by_user && priv->pf_bond >= 0) {
 		ret = mlx5_flow_lacp_miss(dev);
@@ -1470,7 +1518,12 @@ mlx5_traffic_enable(struct rte_eth_dev *dev)
 void
 mlx5_traffic_disable(struct rte_eth_dev *dev)
 {
-	mlx5_flow_list_flush(dev, MLX5_FLOW_TYPE_CTL, false);
+	struct mlx5_priv *priv = dev->data->dev_private;
+
+	if (priv->sh->config.dv_flow_en == 2)
+		mlx5_flow_hw_flush_ctrl_flows(dev);
+	else
+		mlx5_flow_list_flush(dev, MLX5_FLOW_TYPE_CTL, false);
 }
 /**

From patchwork Fri Sep 23 14:43:15 2022
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Suanming Mou
X-Patchwork-Id: 116747
X-Patchwork-Delegate: thomas@monjalon.net
From: Suanming Mou
To: Matan Azrad , Viacheslav Ovsiienko
CC: , Bing Zhao
Subject: [PATCH 08/27] net/mlx5: add extended metadata mode for hardware steering
Date: Fri, 23 Sep 2022 17:43:15 +0300
Message-ID: <20220923144334.27736-9-suanmingm@nvidia.com>
X-Mailer: git-send-email 2.18.1
In-Reply-To: <20220923144334.27736-1-suanmingm@nvidia.com>
References: <20220923144334.27736-1-suanmingm@nvidia.com>
MIME-Version: 1.0
X-BeenThere: dev@dpdk.org
List-Id: DPDK patches and discussions
Errors-To: dev-bounces@dpdk.org

From: Bing Zhao

The new mode 4 of the devarg "dv_xmeta_en" is added for HWS only. In this
mode, the 32b-wide Rx / Tx metadata is copied between the FDB and NIC
domains. The mark is supported only in the NIC domain and is not copied.
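For illustration only (assuming the usual mlx5 devargs syntax; the PCI
address below is a placeholder), this mode would be selected together with
the hardware steering engine like:

    dpdk-testpmd -a 0000:08:00.0,dv_flow_en=2,dv_xmeta_en=4 -- -i

Here dv_flow_en=2 enables the HW steering flow engine and dv_xmeta_en=4
selects the new 32b metadata mode described above.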
Signed-off-by: Bing Zhao --- drivers/net/mlx5/linux/mlx5_os.c | 10 +- drivers/net/mlx5/mlx5.c | 7 +- drivers/net/mlx5/mlx5.h | 8 +- drivers/net/mlx5/mlx5_flow.c | 8 +- drivers/net/mlx5/mlx5_flow.h | 14 + drivers/net/mlx5/mlx5_flow_dv.c | 21 +- drivers/net/mlx5/mlx5_flow_hw.c | 862 ++++++++++++++++++++++++++++--- drivers/net/mlx5/mlx5_trigger.c | 3 + 8 files changed, 851 insertions(+), 82 deletions(-) diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c index 28220d10ad..41940d7ce7 100644 --- a/drivers/net/mlx5/linux/mlx5_os.c +++ b/drivers/net/mlx5/linux/mlx5_os.c @@ -1552,6 +1552,15 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev, if (priv->vport_meta_mask) flow_hw_set_port_info(eth_dev); if (priv->sh->config.dv_flow_en == 2) { + if (priv->sh->config.dv_esw_en && + priv->sh->config.dv_xmeta_en != MLX5_XMETA_MODE_LEGACY && + priv->sh->config.dv_xmeta_en != MLX5_XMETA_MODE_META32_HWS) { + DRV_LOG(ERR, + "metadata mode %u is not supported in HWS eswitch mode", + priv->sh->config.dv_xmeta_en); + err = ENOTSUP; + goto error; + } /* Only HWS requires this information. */ flow_hw_init_tags_set(eth_dev); if (priv->sh->config.dv_esw_en && @@ -1563,7 +1572,6 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev, } return eth_dev; } - /* Port representor shares the same max priority with pf port. */ if (!priv->sh->flow_priority_check_flag) { /* Supported Verbs flow priority number detection. */ err = mlx5_flow_discover_priorities(eth_dev); diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c index a21b8c69a9..4abb207077 100644 --- a/drivers/net/mlx5/mlx5.c +++ b/drivers/net/mlx5/mlx5.c @@ -1218,7 +1218,8 @@ mlx5_dev_args_check_handler(const char *key, const char *val, void *opaque) if (tmp != MLX5_XMETA_MODE_LEGACY && tmp != MLX5_XMETA_MODE_META16 && tmp != MLX5_XMETA_MODE_META32 && - tmp != MLX5_XMETA_MODE_MISS_INFO) { + tmp != MLX5_XMETA_MODE_MISS_INFO && + tmp != MLX5_XMETA_MODE_META32_HWS) { DRV_LOG(ERR, "Invalid extensive metadata parameter."); rte_errno = EINVAL; return -rte_errno; @@ -2849,6 +2850,10 @@ mlx5_set_metadata_mask(struct rte_eth_dev *dev) meta = UINT32_MAX; mark = (reg_c0 >> rte_bsf32(reg_c0)) & MLX5_FLOW_MARK_MASK; break; + case MLX5_XMETA_MODE_META32_HWS: + meta = UINT32_MAX; + mark = MLX5_FLOW_MARK_MASK; + break; default: meta = 0; mark = 0; diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h index 77dbe3593e..3364c4735c 100644 --- a/drivers/net/mlx5/mlx5.h +++ b/drivers/net/mlx5/mlx5.h @@ -298,8 +298,8 @@ struct mlx5_sh_config { uint32_t reclaim_mode:2; /* Memory reclaim mode. */ uint32_t dv_esw_en:1; /* Enable E-Switch DV flow. */ /* Enable DV flow. 1 means SW steering, 2 means HW steering. */ - unsigned int dv_flow_en:2; - uint32_t dv_xmeta_en:2; /* Enable extensive flow metadata. */ + uint32_t dv_flow_en:2; /* Enable DV flow. */ + uint32_t dv_xmeta_en:3; /* Enable extensive flow metadata. */ uint32_t dv_miss_info:1; /* Restore packet after partial hw miss. */ uint32_t l3_vxlan_en:1; /* Enable L3 VXLAN flow creation. */ uint32_t vf_nl_en:1; /* Enable Netlink requests in VF mode. */ @@ -312,7 +312,6 @@ struct mlx5_sh_config { uint32_t fdb_def_rule:1; /* Create FDB default jump rule */ }; - /* Structure for VF VLAN workaround. */ struct mlx5_vf_vlan { uint32_t tag:12; @@ -1279,12 +1278,12 @@ struct mlx5_dev_ctx_shared { struct mlx5_lb_ctx self_lb; /* QP to enable self loopback for Devx. */ unsigned int flow_max_priority; enum modify_reg flow_mreg_c[MLX5_MREG_C_NUM]; + /* Availability of mreg_c's. 
*/ void *devx_channel_lwm; struct rte_intr_handle *intr_handle_lwm; pthread_mutex_t lwm_config_lock; uint32_t host_shaper_rate:8; uint32_t lwm_triggered:1; - /* Availability of mreg_c's. */ struct mlx5_dev_shared_port port[]; /* per device port data array. */ }; @@ -1509,6 +1508,7 @@ struct mlx5_priv { struct rte_flow_template_table *hw_esw_sq_miss_root_tbl; struct rte_flow_template_table *hw_esw_sq_miss_tbl; struct rte_flow_template_table *hw_esw_zero_tbl; + struct rte_flow_template_table *hw_tx_meta_cpy_tbl; struct mlx5_indexed_pool *flows[MLX5_FLOW_TYPE_MAXI]; /* RTE Flow rules. */ uint32_t ctrl_flows; /* Control flow rules. */ diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c index 9c44b2e99b..b570ed7f69 100644 --- a/drivers/net/mlx5/mlx5_flow.c +++ b/drivers/net/mlx5/mlx5_flow.c @@ -1107,6 +1107,8 @@ mlx5_flow_get_reg_id(struct rte_eth_dev *dev, return REG_C_0; case MLX5_XMETA_MODE_META32: return REG_C_1; + case MLX5_XMETA_MODE_META32_HWS: + return REG_C_1; } break; case MLX5_METADATA_TX: @@ -1119,11 +1121,14 @@ mlx5_flow_get_reg_id(struct rte_eth_dev *dev, return REG_C_0; case MLX5_XMETA_MODE_META32: return REG_C_1; + case MLX5_XMETA_MODE_META32_HWS: + return REG_C_1; } break; case MLX5_FLOW_MARK: switch (config->dv_xmeta_en) { case MLX5_XMETA_MODE_LEGACY: + case MLX5_XMETA_MODE_META32_HWS: return REG_NON; case MLX5_XMETA_MODE_META16: return REG_C_1; @@ -4442,7 +4447,8 @@ static bool flow_check_modify_action_type(struct rte_eth_dev *dev, return true; case RTE_FLOW_ACTION_TYPE_FLAG: case RTE_FLOW_ACTION_TYPE_MARK: - if (priv->sh->config.dv_xmeta_en != MLX5_XMETA_MODE_LEGACY) + if (priv->sh->config.dv_xmeta_en != MLX5_XMETA_MODE_LEGACY && + priv->sh->config.dv_xmeta_en != MLX5_XMETA_MODE_META32_HWS) return true; else return false; diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h index f661f858c7..15c5826d8a 100644 --- a/drivers/net/mlx5/mlx5_flow.h +++ b/drivers/net/mlx5/mlx5_flow.h @@ -49,6 +49,12 @@ enum mlx5_rte_flow_action_type { MLX5_RTE_FLOW_ACTION_TYPE_RSS, }; +/* Private (internal) Field IDs for MODIFY_FIELD action. */ +enum mlx5_rte_flow_field_id { + MLX5_RTE_FLOW_FIELD_END = INT_MIN, + MLX5_RTE_FLOW_FIELD_META_REG, +}; + #define MLX5_INDIRECT_ACTION_TYPE_OFFSET 30 enum { @@ -1168,6 +1174,7 @@ struct rte_flow_actions_template { struct rte_flow_action *masks; /* Cached action masks.*/ uint16_t mhdr_off; /* Offset of DR modify header action. */ uint32_t refcnt; /* Reference counter. */ + uint16_t rx_cpy_pos; /* Action position of Rx metadata to be copied. */ }; /* Jump action struct. */ @@ -1244,6 +1251,11 @@ struct mlx5_flow_group { #define MLX5_HW_TBL_MAX_ITEM_TEMPLATE 2 #define MLX5_HW_TBL_MAX_ACTION_TEMPLATE 32 +struct mlx5_flow_template_table_cfg { + struct rte_flow_template_table_attr attr; /* Table attributes passed through flow API. */ + bool external; /* True if created by flow API, false if table is internal to PMD. */ +}; + struct rte_flow_template_table { LIST_ENTRY(rte_flow_template_table) next; struct mlx5_flow_group *grp; /* The group rte_flow_template_table uses. */ @@ -1253,6 +1265,7 @@ struct rte_flow_template_table { /* Action templates bind to the table. */ struct mlx5_hw_action_template ats[MLX5_HW_TBL_MAX_ACTION_TEMPLATE]; struct mlx5_indexed_pool *flow; /* The table's flow ipool. */ + struct mlx5_flow_template_table_cfg cfg; uint32_t type; /* Flow table type RX/TX/FDB. */ uint8_t nb_item_templates; /* Item template number. */ uint8_t nb_action_templates; /* Action template number. 
*/ @@ -2332,4 +2345,5 @@ int mlx5_flow_hw_esw_create_mgr_sq_miss_flow(struct rte_eth_dev *dev); int mlx5_flow_hw_esw_create_sq_miss_flow(struct rte_eth_dev *dev, uint32_t txq); int mlx5_flow_hw_esw_create_default_jump_flow(struct rte_eth_dev *dev); +int mlx5_flow_hw_create_tx_default_mreg_copy_flow(struct rte_eth_dev *dev); #endif /* RTE_PMD_MLX5_FLOW_H_ */ diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c index d0f78cae8e..d1f0d63fdc 100644 --- a/drivers/net/mlx5/mlx5_flow_dv.c +++ b/drivers/net/mlx5/mlx5_flow_dv.c @@ -1783,7 +1783,8 @@ mlx5_flow_field_id_to_modify_info int reg; if (priv->sh->config.dv_flow_en == 2) - reg = REG_C_1; + reg = flow_hw_get_reg_id(RTE_FLOW_ITEM_TYPE_TAG, + data->level); else reg = mlx5_flow_get_reg_id(dev, MLX5_APP_TAG, data->level, error); @@ -1852,6 +1853,24 @@ mlx5_flow_field_id_to_modify_info else info[idx].offset = off_be; break; + case MLX5_RTE_FLOW_FIELD_META_REG: + { + uint32_t meta_mask = priv->sh->dv_meta_mask; + uint32_t meta_count = __builtin_popcount(meta_mask); + uint32_t reg = data->level; + + RTE_SET_USED(meta_count); + MLX5_ASSERT(data->offset + width <= meta_count); + MLX5_ASSERT(reg != REG_NON); + MLX5_ASSERT(reg < RTE_DIM(reg_to_field)); + info[idx] = (struct field_modify_info){4, 0, reg_to_field[reg]}; + if (mask) + mask[idx] = flow_modify_info_mask_32_masked + (width, data->offset, meta_mask); + else + info[idx].offset = data->offset; + } + break; case RTE_FLOW_FIELD_POINTER: case RTE_FLOW_FIELD_VALUE: default: diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c index 004eacc334..dfbf885530 100644 --- a/drivers/net/mlx5/mlx5_flow_hw.c +++ b/drivers/net/mlx5/mlx5_flow_hw.c @@ -20,13 +20,27 @@ /* Default queue to flush the flows. */ #define MLX5_DEFAULT_FLUSH_QUEUE 0 -/* Maximum number of rules in control flow tables */ +/* Maximum number of rules in control flow tables. */ #define MLX5_HW_CTRL_FLOW_NB_RULES (4096) -/* Flow group for SQ miss default flows/ */ -#define MLX5_HW_SQ_MISS_GROUP (UINT32_MAX) +/* Lowest flow group usable by an application. */ +#define MLX5_HW_LOWEST_USABLE_GROUP (1) + +/* Maximum group index usable by user applications for transfer flows. */ +#define MLX5_HW_MAX_TRANSFER_GROUP (UINT32_MAX - 1) + +/* Lowest priority for HW root table. */ +#define MLX5_HW_LOWEST_PRIO_ROOT 15 + +/* Lowest priority for HW non-root table. 
*/ +#define MLX5_HW_LOWEST_PRIO_NON_ROOT (UINT32_MAX) static int flow_hw_flush_all_ctrl_flows(struct rte_eth_dev *dev); +static int flow_hw_translate_group(struct rte_eth_dev *dev, + const struct mlx5_flow_template_table_cfg *cfg, + uint32_t group, + uint32_t *table_group, + struct rte_flow_error *error); const struct mlx5_flow_driver_ops mlx5_flow_hw_drv_ops; @@ -210,12 +224,12 @@ flow_hw_rss_item_flags_get(const struct rte_flow_item items[]) */ static struct mlx5_hw_jump_action * flow_hw_jump_action_register(struct rte_eth_dev *dev, - const struct rte_flow_attr *attr, + const struct mlx5_flow_template_table_cfg *cfg, uint32_t dest_group, struct rte_flow_error *error) { struct mlx5_priv *priv = dev->data->dev_private; - struct rte_flow_attr jattr = *attr; + struct rte_flow_attr jattr = cfg->attr.flow_attr; struct mlx5_flow_group *grp; struct mlx5_flow_cb_ctx ctx = { .dev = dev, @@ -223,9 +237,13 @@ flow_hw_jump_action_register(struct rte_eth_dev *dev, .data = &jattr, }; struct mlx5_list_entry *ge; + uint32_t target_group; - jattr.group = dest_group; - ge = mlx5_hlist_register(priv->sh->flow_tbls, dest_group, &ctx); + target_group = dest_group; + if (flow_hw_translate_group(dev, cfg, dest_group, &target_group, error)) + return NULL; + jattr.group = target_group; + ge = mlx5_hlist_register(priv->sh->flow_tbls, target_group, &ctx); if (!ge) return NULL; grp = container_of(ge, struct mlx5_flow_group, entry); @@ -757,7 +775,8 @@ flow_hw_modify_field_compile(struct rte_eth_dev *dev, (void *)(uintptr_t)conf->src.pvalue : (void *)(uintptr_t)&conf->src.value; if (conf->dst.field == RTE_FLOW_FIELD_META || - conf->dst.field == RTE_FLOW_FIELD_TAG) { + conf->dst.field == RTE_FLOW_FIELD_TAG || + conf->dst.field == (enum rte_flow_field_id)MLX5_RTE_FLOW_FIELD_META_REG) { value = *(const unaligned_uint32_t *)item.spec; value = rte_cpu_to_be_32(value); item.spec = &value; @@ -849,6 +868,9 @@ flow_hw_represented_port_compile(struct rte_eth_dev *dev, if (m && !!m->port_id) { struct mlx5_priv *port_priv; + if (!v) + return rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION, + action, "port index was not provided"); port_priv = mlx5_port_to_eswitch_info(v->port_id, false); if (port_priv == NULL) return rte_flow_error_set @@ -892,8 +914,8 @@ flow_hw_represented_port_compile(struct rte_eth_dev *dev, * * @param[in] dev * Pointer to the rte_eth_dev structure. - * @param[in] table_attr - * Pointer to the table attributes. + * @param[in] cfg + * Pointer to the table configuration. * @param[in] item_templates * Item template array to be binded to the table. * @param[in/out] acts @@ -908,12 +930,13 @@ flow_hw_represented_port_compile(struct rte_eth_dev *dev, */ static int flow_hw_actions_translate(struct rte_eth_dev *dev, - const struct rte_flow_template_table_attr *table_attr, + const struct mlx5_flow_template_table_cfg *cfg, struct mlx5_hw_actions *acts, struct rte_flow_actions_template *at, struct rte_flow_error *error) { struct mlx5_priv *priv = dev->data->dev_private; + const struct rte_flow_template_table_attr *table_attr = &cfg->attr; const struct rte_flow_attr *attr = &table_attr->flow_attr; struct rte_flow_action *actions = at->actions; struct rte_flow_action *action_start = actions; @@ -980,7 +1003,7 @@ flow_hw_actions_translate(struct rte_eth_dev *dev, ((const struct rte_flow_action_jump *) actions->conf)->group; acts->jump = flow_hw_jump_action_register - (dev, attr, jump_group, error); + (dev, cfg, jump_group, error); if (!acts->jump) goto err; acts->rule_acts[i].action = (!!attr->group) ? 
@@ -1090,6 +1113,16 @@ flow_hw_actions_translate(struct rte_eth_dev *dev,
 							  error);
 			if (err)
 				goto err;
+			/*
+			 * Adjust the action source position for the following:
+			 * ... / MODIFY_FIELD: rx_cpy_pos / (QUEUE|RSS) / ...
+			 * The next action will be Q/RSS; there will be no
+			 * further adjustment and the real source position of
+			 * the following actions will be decreased by 1.
+			 * The total number of actions in the new template
+			 * does not change.
+			 */
+			if ((actions - action_start) == at->rx_cpy_pos)
+				action_start += 1;
 			break;
 		case RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT:
 			if (flow_hw_represented_port_compile
@@ -1354,7 +1387,8 @@ flow_hw_modify_field_construct(struct mlx5_hw_q_job *job,
 	else
 		rte_memcpy(values, mhdr_action->src.pvalue, sizeof(values));
 	if (mhdr_action->dst.field == RTE_FLOW_FIELD_META ||
-	    mhdr_action->dst.field == RTE_FLOW_FIELD_TAG) {
+	    mhdr_action->dst.field == RTE_FLOW_FIELD_TAG ||
+	    mhdr_action->dst.field == (enum rte_flow_field_id)MLX5_RTE_FLOW_FIELD_META_REG) {
 		value_p = (unaligned_uint32_t *)values;
 		*value_p = rte_cpu_to_be_32(*value_p);
 	}
@@ -1492,7 +1526,7 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
 			jump_group = ((const struct rte_flow_action_jump *)
 						action->conf)->group;
 			jump = flow_hw_jump_action_register
-				(dev, &attr, jump_group, NULL);
+				(dev, &table->cfg, jump_group, NULL);
 			if (!jump)
 				return -1;
 			rule_acts[act_data->action_dst].action =
@@ -1689,7 +1723,13 @@ flow_hw_async_flow_create(struct rte_eth_dev *dev,
 	job->user_data = user_data;
 	rule_attr.user_data = job;
 	hw_acts = &table->ats[action_template_index].acts;
-	/* Construct the flow actions based on the input actions.*/
+	/*
+	 * Construct the flow actions based on the input actions.
+	 * The implicitly appended action, like the metadata copy from
+	 * FDB to NIC Rx, is always fixed. There is no need to copy and
+	 * construct a new "actions" list based on the user's input,
+	 * which saves the cost.
+	 */
 	if (flow_hw_actions_construct(dev, job, hw_acts,
 				      pattern_template_index, actions,
 				      rule_acts, &acts_num)) {
 		rte_errno = EINVAL;
@@ -1997,8 +2037,8 @@ flow_hw_q_flow_flush(struct rte_eth_dev *dev,
  *
  * @param[in] dev
  *   Pointer to the rte_eth_dev structure.
- * @param[in] attr
- *   Pointer to the table attributes.
+ * @param[in] table_cfg
+ *   Pointer to the table configuration.
  * @param[in] item_templates
  *   Item template array to be binded to the table.
  * @param[in] nb_item_templates
@@ -2015,7 +2055,7 @@ flow_hw_q_flow_flush(struct rte_eth_dev *dev,
  */
 static struct rte_flow_template_table *
 flow_hw_table_create(struct rte_eth_dev *dev,
-		     const struct rte_flow_template_table_attr *attr,
+		     const struct mlx5_flow_template_table_cfg *table_cfg,
 		     struct rte_flow_pattern_template *item_templates[],
 		     uint8_t nb_item_templates,
 		     struct rte_flow_actions_template *action_templates[],
@@ -2027,6 +2067,7 @@ flow_hw_table_create(struct rte_eth_dev *dev,
 	struct rte_flow_template_table *tbl = NULL;
 	struct mlx5_flow_group *grp;
 	struct mlx5dr_match_template *mt[MLX5_HW_TBL_MAX_ITEM_TEMPLATE];
+	const struct rte_flow_template_table_attr *attr = &table_cfg->attr;
 	struct rte_flow_attr flow_attr = attr->flow_attr;
 	struct mlx5_flow_cb_ctx ctx = {
 		.dev = dev,
@@ -2067,6 +2108,7 @@ flow_hw_table_create(struct rte_eth_dev *dev,
 	tbl = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*tbl), 0, rte_socket_id());
 	if (!tbl)
 		goto error;
+	tbl->cfg = *table_cfg;
 	/* Allocate flow indexed pool.
*/ tbl->flow = mlx5_ipool_create(&cfg); if (!tbl->flow) @@ -2110,7 +2152,7 @@ flow_hw_table_create(struct rte_eth_dev *dev, goto at_error; } LIST_INIT(&tbl->ats[i].acts.act_list); - err = flow_hw_actions_translate(dev, attr, + err = flow_hw_actions_translate(dev, &tbl->cfg, &tbl->ats[i].acts, action_templates[i], error); if (err) { @@ -2153,6 +2195,96 @@ flow_hw_table_create(struct rte_eth_dev *dev, return NULL; } +/** + * Translates group index specified by the user in @p attr to internal + * group index. + * + * Translation is done by incrementing group index, so group n becomes n + 1. + * + * @param[in] dev + * Pointer to Ethernet device. + * @param[in] cfg + * Pointer to the template table configuration. + * @param[in] group + * Currently used group index (table group or jump destination). + * @param[out] table_group + * Pointer to output group index. + * @param[out] error + * Pointer to error structure. + * + * @return + * 0 on success. Otherwise, returns negative error code, rte_errno is set + * and error structure is filled. + */ +static int +flow_hw_translate_group(struct rte_eth_dev *dev, + const struct mlx5_flow_template_table_cfg *cfg, + uint32_t group, + uint32_t *table_group, + struct rte_flow_error *error) +{ + struct mlx5_priv *priv = dev->data->dev_private; + const struct rte_flow_attr *flow_attr = &cfg->attr.flow_attr; + + if (priv->sh->config.dv_esw_en && cfg->external && flow_attr->transfer) { + if (group > MLX5_HW_MAX_TRANSFER_GROUP) + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ATTR_GROUP, + NULL, + "group index not supported"); + *table_group = group + 1; + } else { + *table_group = group; + } + return 0; +} + +/** + * Create flow table. + * + * This function is a wrapper over @ref flow_hw_table_create(), which translates parameters + * provided by user to proper internal values. + * + * @param[in] dev + * Pointer to Ethernet device. + * @param[in] attr + * Pointer to the table attributes. + * @param[in] item_templates + * Item template array to be binded to the table. + * @param[in] nb_item_templates + * Number of item templates. + * @param[in] action_templates + * Action template array to be binded to the table. + * @param[in] nb_action_templates + * Number of action templates. + * @param[out] error + * Pointer to error structure. + * + * @return + * Table on success, Otherwise, returns negative error code, rte_errno is set + * and error structure is filled. + */ +static struct rte_flow_template_table * +flow_hw_template_table_create(struct rte_eth_dev *dev, + const struct rte_flow_template_table_attr *attr, + struct rte_flow_pattern_template *item_templates[], + uint8_t nb_item_templates, + struct rte_flow_actions_template *action_templates[], + uint8_t nb_action_templates, + struct rte_flow_error *error) +{ + struct mlx5_flow_template_table_cfg cfg = { + .attr = *attr, + .external = true, + }; + uint32_t group = attr->flow_attr.group; + + if (flow_hw_translate_group(dev, &cfg, group, &cfg.attr.flow_attr.group, error)) + return NULL; + return flow_hw_table_create(dev, &cfg, item_templates, nb_item_templates, + action_templates, nb_action_templates, error); +} + /** * Destroy flow table. 
* @@ -2271,10 +2403,13 @@ flow_hw_validate_action_represented_port(struct rte_eth_dev *dev, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, "cannot use represented_port actions" " without an E-Switch"); - if (mask_conf->port_id) { + if (mask_conf && mask_conf->port_id) { struct mlx5_priv *port_priv; struct mlx5_priv *dev_priv; + if (!action_conf) + return rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION, + action, "port index was not provided"); port_priv = mlx5_port_to_eswitch_info(action_conf->port_id, false); if (!port_priv) return rte_flow_error_set(error, rte_errno, @@ -2299,20 +2434,77 @@ flow_hw_validate_action_represented_port(struct rte_eth_dev *dev, return 0; } +static inline int +flow_hw_action_meta_copy_insert(const struct rte_flow_action actions[], + const struct rte_flow_action masks[], + const struct rte_flow_action *ins_actions, + const struct rte_flow_action *ins_masks, + struct rte_flow_action *new_actions, + struct rte_flow_action *new_masks, + uint16_t *ins_pos) +{ + uint16_t idx, total = 0; + bool ins = false; + bool act_end = false; + + MLX5_ASSERT(actions && masks); + MLX5_ASSERT(new_actions && new_masks); + MLX5_ASSERT(ins_actions && ins_masks); + for (idx = 0; !act_end; idx++) { + if (idx >= MLX5_HW_MAX_ACTS) + return -1; + if (actions[idx].type == RTE_FLOW_ACTION_TYPE_RSS || + actions[idx].type == RTE_FLOW_ACTION_TYPE_QUEUE) { + ins = true; + *ins_pos = idx; + } + if (actions[idx].type == RTE_FLOW_ACTION_TYPE_END) + act_end = true; + } + if (!ins) + return 0; + else if (idx == MLX5_HW_MAX_ACTS) + return -1; /* No more space. */ + total = idx; + /* Before the position, no change for the actions. */ + for (idx = 0; idx < *ins_pos; idx++) { + new_actions[idx] = actions[idx]; + new_masks[idx] = masks[idx]; + } + /* Insert the new action and mask to the position. */ + new_actions[idx] = *ins_actions; + new_masks[idx] = *ins_masks; + /* Remaining content is right shifted by one position. */ + for (; idx < total; idx++) { + new_actions[idx + 1] = actions[idx]; + new_masks[idx + 1] = masks[idx]; + } + return 0; +} + static int flow_hw_action_validate(struct rte_eth_dev *dev, + const struct rte_flow_actions_template_attr *attr, const struct rte_flow_action actions[], const struct rte_flow_action masks[], struct rte_flow_error *error) { - int i; + struct mlx5_priv *priv = dev->data->dev_private; + uint16_t i; bool actions_end = false; int ret; + /* FDB actions are only valid to proxy port. 
*/ + if (attr->transfer && (!priv->sh->config.dv_esw_en || !priv->master)) + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, + "transfer actions are only valid to proxy port"); for (i = 0; !actions_end; ++i) { const struct rte_flow_action *action = &actions[i]; const struct rte_flow_action *mask = &masks[i]; + MLX5_ASSERT(i < MLX5_HW_MAX_ACTS); if (action->type != mask->type) return rte_flow_error_set(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ACTION, @@ -2409,21 +2601,77 @@ flow_hw_actions_template_create(struct rte_eth_dev *dev, { struct mlx5_priv *priv = dev->data->dev_private; int len, act_len, mask_len, i; - struct rte_flow_actions_template *at; + struct rte_flow_actions_template *at = NULL; + uint16_t pos = MLX5_HW_MAX_ACTS; + struct rte_flow_action tmp_action[MLX5_HW_MAX_ACTS]; + struct rte_flow_action tmp_mask[MLX5_HW_MAX_ACTS]; + const struct rte_flow_action *ra; + const struct rte_flow_action *rm; + const struct rte_flow_action_modify_field rx_mreg = { + .operation = RTE_FLOW_MODIFY_SET, + .dst = { + .field = (enum rte_flow_field_id)MLX5_RTE_FLOW_FIELD_META_REG, + .level = REG_B, + }, + .src = { + .field = (enum rte_flow_field_id)MLX5_RTE_FLOW_FIELD_META_REG, + .level = REG_C_1, + }, + .width = 32, + }; + const struct rte_flow_action_modify_field rx_mreg_mask = { + .operation = RTE_FLOW_MODIFY_SET, + .dst = { + .field = (enum rte_flow_field_id)MLX5_RTE_FLOW_FIELD_META_REG, + .level = UINT32_MAX, + .offset = UINT32_MAX, + }, + .src = { + .field = (enum rte_flow_field_id)MLX5_RTE_FLOW_FIELD_META_REG, + .level = UINT32_MAX, + .offset = UINT32_MAX, + }, + .width = UINT32_MAX, + }; + const struct rte_flow_action rx_cpy = { + .type = RTE_FLOW_ACTION_TYPE_MODIFY_FIELD, + .conf = &rx_mreg, + }; + const struct rte_flow_action rx_cpy_mask = { + .type = RTE_FLOW_ACTION_TYPE_MODIFY_FIELD, + .conf = &rx_mreg_mask, + }; - if (flow_hw_action_validate(dev, actions, masks, error)) + if (flow_hw_action_validate(dev, attr, actions, masks, error)) return NULL; - act_len = rte_flow_conv(RTE_FLOW_CONV_OP_ACTIONS, - NULL, 0, actions, error); + if (priv->sh->config.dv_xmeta_en == MLX5_XMETA_MODE_META32_HWS && + priv->sh->config.dv_esw_en) { + if (flow_hw_action_meta_copy_insert(actions, masks, &rx_cpy, &rx_cpy_mask, + tmp_action, tmp_mask, &pos)) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, NULL, + "Failed to concatenate new action/mask"); + return NULL; + } + } + /* Application should make sure only one Q/RSS exist in one rule. */ + if (pos == MLX5_HW_MAX_ACTS) { + ra = actions; + rm = masks; + } else { + ra = tmp_action; + rm = tmp_mask; + } + act_len = rte_flow_conv(RTE_FLOW_CONV_OP_ACTIONS, NULL, 0, ra, error); if (act_len <= 0) return NULL; len = RTE_ALIGN(act_len, 16); - mask_len = rte_flow_conv(RTE_FLOW_CONV_OP_ACTIONS, - NULL, 0, masks, error); + mask_len = rte_flow_conv(RTE_FLOW_CONV_OP_ACTIONS, NULL, 0, rm, error); if (mask_len <= 0) return NULL; len += RTE_ALIGN(mask_len, 16); - at = mlx5_malloc(MLX5_MEM_ZERO, len + sizeof(*at), 64, rte_socket_id()); + at = mlx5_malloc(MLX5_MEM_ZERO, len + sizeof(*at), + RTE_CACHE_LINE_SIZE, rte_socket_id()); if (!at) { rte_flow_error_set(error, ENOMEM, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, @@ -2431,18 +2679,20 @@ flow_hw_actions_template_create(struct rte_eth_dev *dev, "cannot allocate action template"); return NULL; } + /* Actions part is in the first half. 
*/ at->attr = *attr; at->actions = (struct rte_flow_action *)(at + 1); - act_len = rte_flow_conv(RTE_FLOW_CONV_OP_ACTIONS, at->actions, len, - actions, error); + act_len = rte_flow_conv(RTE_FLOW_CONV_OP_ACTIONS, at->actions, + len, ra, error); if (act_len <= 0) goto error; - at->masks = (struct rte_flow_action *) - (((uint8_t *)at->actions) + act_len); + /* Masks part is in the second half. */ + at->masks = (struct rte_flow_action *)(((uint8_t *)at->actions) + act_len); mask_len = rte_flow_conv(RTE_FLOW_CONV_OP_ACTIONS, at->masks, - len - act_len, masks, error); + len - act_len, rm, error); if (mask_len <= 0) goto error; + at->rx_cpy_pos = pos; /* * mlx5 PMD hacks indirect action index directly to the action conf. * The rte_flow_conv() function copies the content from conf pointer. @@ -2459,7 +2709,8 @@ flow_hw_actions_template_create(struct rte_eth_dev *dev, LIST_INSERT_HEAD(&priv->flow_hw_at, at, next); return at; error: - mlx5_free(at); + if (at) + mlx5_free(at); return NULL; } @@ -2534,6 +2785,80 @@ flow_hw_copy_prepend_port_item(const struct rte_flow_item *items, return copied_items; } +static int +flow_hw_pattern_validate(struct rte_eth_dev *dev, + const struct rte_flow_pattern_template_attr *attr, + const struct rte_flow_item items[], + struct rte_flow_error *error) +{ + int i; + bool items_end = false; + RTE_SET_USED(dev); + RTE_SET_USED(attr); + + for (i = 0; !items_end; i++) { + int type = items[i].type; + + switch (type) { + case RTE_FLOW_ITEM_TYPE_TAG: + { + int reg; + const struct rte_flow_item_tag *tag = + (const struct rte_flow_item_tag *)items[i].spec; + + reg = flow_hw_get_reg_id(RTE_FLOW_ITEM_TYPE_TAG, tag->index); + if (reg == REG_NON) + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, + "Unsupported tag index"); + break; + } + case MLX5_RTE_FLOW_ITEM_TYPE_TAG: + { + const struct rte_flow_item_tag *tag = + (const struct rte_flow_item_tag *)items[i].spec; + struct mlx5_priv *priv = dev->data->dev_private; + uint8_t regcs = (uint8_t)priv->sh->cdev->config.hca_attr.set_reg_c; + + if (!((1 << (tag->index - REG_C_0)) & regcs)) + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, + "Unsupported internal tag index"); + } + case RTE_FLOW_ITEM_TYPE_VOID: + case RTE_FLOW_ITEM_TYPE_ETH: + case RTE_FLOW_ITEM_TYPE_VLAN: + case RTE_FLOW_ITEM_TYPE_IPV4: + case RTE_FLOW_ITEM_TYPE_IPV6: + case RTE_FLOW_ITEM_TYPE_UDP: + case RTE_FLOW_ITEM_TYPE_TCP: + case RTE_FLOW_ITEM_TYPE_GTP: + case RTE_FLOW_ITEM_TYPE_GTP_PSC: + case RTE_FLOW_ITEM_TYPE_REPRESENTED_PORT: + case RTE_FLOW_ITEM_TYPE_VXLAN: + case MLX5_RTE_FLOW_ITEM_TYPE_TX_QUEUE: + case RTE_FLOW_ITEM_TYPE_META: + case RTE_FLOW_ITEM_TYPE_GRE: + case RTE_FLOW_ITEM_TYPE_GRE_KEY: + case RTE_FLOW_ITEM_TYPE_GRE_OPTION: + case RTE_FLOW_ITEM_TYPE_ICMP: + case RTE_FLOW_ITEM_TYPE_ICMP6: + break; + case RTE_FLOW_ITEM_TYPE_END: + items_end = true; + break; + default: + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, + "Unsupported item type"); + } + } + return 0; +} + /** * Create flow item template. 
* @@ -2560,6 +2885,8 @@ flow_hw_pattern_template_create(struct rte_eth_dev *dev, struct rte_flow_item *copied_items = NULL; const struct rte_flow_item *tmpl_items; + if (flow_hw_pattern_validate(dev, attr, items, error)) + return NULL; if (priv->sh->config.dv_esw_en && attr->ingress) { /* * Disallow pattern template with ingress and egress/transfer @@ -2994,6 +3321,17 @@ flow_hw_free_vport_actions(struct mlx5_priv *priv) priv->hw_vport = NULL; } +static uint32_t +flow_hw_usable_lsb_vport_mask(struct mlx5_priv *priv) +{ + uint32_t usable_mask = ~priv->vport_meta_mask; + + if (usable_mask) + return (1 << rte_bsf32(usable_mask)); + else + return 0; +} + /** * Creates a flow pattern template used to match on E-Switch Manager. * This template is used to set up a table for SQ miss default flow. @@ -3032,7 +3370,10 @@ flow_hw_create_ctrl_esw_mgr_pattern_template(struct rte_eth_dev *dev) } /** - * Creates a flow pattern template used to match on a TX queue. + * Creates a flow pattern template used to match REG_C_0 and a TX queue. + * Matching on REG_C_0 is set up to match on least significant bit usable + * by user-space, which is set when packet was originated from E-Switch Manager. + * * This template is used to set up a table for SQ miss default flow. * * @param dev @@ -3042,16 +3383,30 @@ flow_hw_create_ctrl_esw_mgr_pattern_template(struct rte_eth_dev *dev) * Pointer to flow pattern template on success, NULL otherwise. */ static struct rte_flow_pattern_template * -flow_hw_create_ctrl_sq_pattern_template(struct rte_eth_dev *dev) +flow_hw_create_ctrl_regc_sq_pattern_template(struct rte_eth_dev *dev) { + struct mlx5_priv *priv = dev->data->dev_private; + uint32_t marker_bit = flow_hw_usable_lsb_vport_mask(priv); struct rte_flow_pattern_template_attr attr = { .relaxed_matching = 0, .transfer = 1, }; + struct rte_flow_item_tag reg_c0_spec = { + .index = (uint8_t)REG_C_0, + }; + struct rte_flow_item_tag reg_c0_mask = { + .index = 0xff, + }; struct mlx5_rte_flow_item_tx_queue queue_mask = { .queue = UINT32_MAX, }; struct rte_flow_item items[] = { + { + .type = (enum rte_flow_item_type) + MLX5_RTE_FLOW_ITEM_TYPE_TAG, + .spec = ®_c0_spec, + .mask = ®_c0_mask, + }, { .type = (enum rte_flow_item_type) MLX5_RTE_FLOW_ITEM_TYPE_TX_QUEUE, @@ -3062,6 +3417,12 @@ flow_hw_create_ctrl_sq_pattern_template(struct rte_eth_dev *dev) }, }; + if (!marker_bit) { + DRV_LOG(ERR, "Unable to set up pattern template for SQ miss table"); + return NULL; + } + reg_c0_spec.data = marker_bit; + reg_c0_mask.data = marker_bit; return flow_hw_pattern_template_create(dev, &attr, items, NULL); } @@ -3099,6 +3460,132 @@ flow_hw_create_ctrl_port_pattern_template(struct rte_eth_dev *dev) return flow_hw_pattern_template_create(dev, &attr, items, NULL); } +/* + * Creating a flow pattern template with all ETH packets matching. + * This template is used to set up a table for default Tx copy (Tx metadata + * to REG_C_1) flow rule usage. + * + * @param dev + * Pointer to Ethernet device. + * + * @return + * Pointer to flow pattern template on success, NULL otherwise. 
+ */ +static struct rte_flow_pattern_template * +flow_hw_create_tx_default_mreg_copy_pattern_template(struct rte_eth_dev *dev) +{ + struct rte_flow_pattern_template_attr tx_pa_attr = { + .relaxed_matching = 0, + .egress = 1, + }; + struct rte_flow_item_eth promisc = { + .dst.addr_bytes = "\x00\x00\x00\x00\x00\x00", + .src.addr_bytes = "\x00\x00\x00\x00\x00\x00", + .type = 0, + }; + struct rte_flow_item eth_all[] = { + [0] = { + .type = RTE_FLOW_ITEM_TYPE_ETH, + .spec = &promisc, + .mask = &promisc, + }, + [1] = { + .type = RTE_FLOW_ITEM_TYPE_END, + }, + }; + struct rte_flow_error drop_err; + + RTE_SET_USED(drop_err); + return flow_hw_pattern_template_create(dev, &tx_pa_attr, eth_all, &drop_err); +} + +/** + * Creates a flow actions template with modify field action and masked jump action. + * Modify field action sets the least significant bit of REG_C_0 (usable by user-space) + * to 1, meaning that packet was originated from E-Switch Manager. Jump action + * transfers steering to group 1. + * + * @param dev + * Pointer to Ethernet device. + * + * @return + * Pointer to flow actions template on success, NULL otherwise. + */ +static struct rte_flow_actions_template * +flow_hw_create_ctrl_regc_jump_actions_template(struct rte_eth_dev *dev) +{ + struct mlx5_priv *priv = dev->data->dev_private; + uint32_t marker_bit = flow_hw_usable_lsb_vport_mask(priv); + uint32_t marker_bit_mask = UINT32_MAX; + struct rte_flow_actions_template_attr attr = { + .transfer = 1, + }; + struct rte_flow_action_modify_field set_reg_v = { + .operation = RTE_FLOW_MODIFY_SET, + .dst = { + .field = (enum rte_flow_field_id)MLX5_RTE_FLOW_FIELD_META_REG, + .level = REG_C_0, + }, + .src = { + .field = RTE_FLOW_FIELD_VALUE, + }, + .width = 1, + }; + struct rte_flow_action_modify_field set_reg_m = { + .operation = RTE_FLOW_MODIFY_SET, + .dst = { + .field = (enum rte_flow_field_id)MLX5_RTE_FLOW_FIELD_META_REG, + .level = UINT32_MAX, + .offset = UINT32_MAX, + }, + .src = { + .field = RTE_FLOW_FIELD_VALUE, + }, + .width = UINT32_MAX, + }; + struct rte_flow_action_jump jump_v = { + .group = MLX5_HW_LOWEST_USABLE_GROUP, + }; + struct rte_flow_action_jump jump_m = { + .group = UINT32_MAX, + }; + struct rte_flow_action actions_v[] = { + { + .type = RTE_FLOW_ACTION_TYPE_MODIFY_FIELD, + .conf = &set_reg_v, + }, + { + .type = RTE_FLOW_ACTION_TYPE_JUMP, + .conf = &jump_v, + }, + { + .type = RTE_FLOW_ACTION_TYPE_END, + } + }; + struct rte_flow_action actions_m[] = { + { + .type = RTE_FLOW_ACTION_TYPE_MODIFY_FIELD, + .conf = &set_reg_m, + }, + { + .type = RTE_FLOW_ACTION_TYPE_JUMP, + .conf = &jump_m, + }, + { + .type = RTE_FLOW_ACTION_TYPE_END, + } + }; + + if (!marker_bit) { + DRV_LOG(ERR, "Unable to set up actions template for SQ miss table"); + return NULL; + } + set_reg_v.dst.offset = rte_bsf32(marker_bit); + rte_memcpy(set_reg_v.src.value, &marker_bit, sizeof(marker_bit)); + rte_memcpy(set_reg_m.src.value, &marker_bit_mask, sizeof(marker_bit_mask)); + return flow_hw_actions_template_create(dev, &attr, actions_v, actions_m, NULL); +} + /** * Creates a flow actions template with an unmasked JUMP action. Flows * based on this template will perform a jump to some group. This template @@ -3193,6 +3680,73 @@ flow_hw_create_ctrl_port_actions_template(struct rte_eth_dev *dev) NULL); } +/* + * Creating an actions template to use header modify action for register + * copying. This template is used to set up a table for copy flow. + * + * @param dev + * Pointer to Ethernet device. 
+ * + * @return + * Pointer to flow actions template on success, NULL otherwise. + */ +static struct rte_flow_actions_template * +flow_hw_create_tx_default_mreg_copy_actions_template(struct rte_eth_dev *dev) +{ + struct rte_flow_actions_template_attr tx_act_attr = { + .egress = 1, + }; + const struct rte_flow_action_modify_field mreg_action = { + .operation = RTE_FLOW_MODIFY_SET, + .dst = { + .field = (enum rte_flow_field_id)MLX5_RTE_FLOW_FIELD_META_REG, + .level = REG_C_1, + }, + .src = { + .field = (enum rte_flow_field_id)MLX5_RTE_FLOW_FIELD_META_REG, + .level = REG_A, + }, + .width = 32, + }; + const struct rte_flow_action_modify_field mreg_mask = { + .operation = RTE_FLOW_MODIFY_SET, + .dst = { + .field = (enum rte_flow_field_id)MLX5_RTE_FLOW_FIELD_META_REG, + .level = UINT32_MAX, + .offset = UINT32_MAX, + }, + .src = { + .field = (enum rte_flow_field_id)MLX5_RTE_FLOW_FIELD_META_REG, + .level = UINT32_MAX, + .offset = UINT32_MAX, + }, + .width = UINT32_MAX, + }; + const struct rte_flow_action copy_reg_action[] = { + [0] = { + .type = RTE_FLOW_ACTION_TYPE_MODIFY_FIELD, + .conf = &mreg_action, + }, + [1] = { + .type = RTE_FLOW_ACTION_TYPE_END, + }, + }; + const struct rte_flow_action copy_reg_mask[] = { + [0] = { + .type = RTE_FLOW_ACTION_TYPE_MODIFY_FIELD, + .conf = &mreg_mask, + }, + [1] = { + .type = RTE_FLOW_ACTION_TYPE_END, + }, + }; + struct rte_flow_error drop_err; + + RTE_SET_USED(drop_err); + return flow_hw_actions_template_create(dev, &tx_act_attr, copy_reg_action, + copy_reg_mask, &drop_err); +} + /** * Creates a control flow table used to transfer traffic from E-Switch Manager * and TX queues from group 0 to group 1. @@ -3222,8 +3776,12 @@ flow_hw_create_ctrl_sq_miss_root_table(struct rte_eth_dev *dev, }, .nb_flows = MLX5_HW_CTRL_FLOW_NB_RULES, }; + struct mlx5_flow_template_table_cfg cfg = { + .attr = attr, + .external = false, + }; - return flow_hw_table_create(dev, &attr, &it, 1, &at, 1, NULL); + return flow_hw_table_create(dev, &cfg, &it, 1, &at, 1, NULL); } @@ -3248,16 +3806,56 @@ flow_hw_create_ctrl_sq_miss_table(struct rte_eth_dev *dev, { struct rte_flow_template_table_attr attr = { .flow_attr = { - .group = MLX5_HW_SQ_MISS_GROUP, - .priority = 0, + .group = 1, + .priority = MLX5_HW_LOWEST_PRIO_NON_ROOT, .ingress = 0, .egress = 0, .transfer = 1, }, .nb_flows = MLX5_HW_CTRL_FLOW_NB_RULES, }; + struct mlx5_flow_template_table_cfg cfg = { + .attr = attr, + .external = false, + }; - return flow_hw_table_create(dev, &attr, &it, 1, &at, 1, NULL); + return flow_hw_table_create(dev, &cfg, &it, 1, &at, 1, NULL); +} + +/* + * Creating the default Tx metadata copy table on NIC Tx group 0. + * + * @param dev + * Pointer to Ethernet device. + * @param pt + * Pointer to flow pattern template. + * @param at + * Pointer to flow actions template. + * + * @return + * Pointer to flow table on success, NULL otherwise. + */ +static struct rte_flow_template_table* +flow_hw_create_tx_default_mreg_copy_table(struct rte_eth_dev *dev, + struct rte_flow_pattern_template *pt, + struct rte_flow_actions_template *at) +{ + struct rte_flow_template_table_attr tx_tbl_attr = { + .flow_attr = { + .group = 0, /* Root */ + .priority = MLX5_HW_LOWEST_PRIO_ROOT, + .egress = 1, + }, + .nb_flows = 1, /* One default flow rule for all. 
*/ + }; + struct mlx5_flow_template_table_cfg tx_tbl_cfg = { + .attr = tx_tbl_attr, + .external = false, + }; + struct rte_flow_error drop_err; + + RTE_SET_USED(drop_err); + return flow_hw_table_create(dev, &tx_tbl_cfg, &pt, 1, &at, 1, &drop_err); } /** @@ -3282,15 +3880,19 @@ flow_hw_create_ctrl_jump_table(struct rte_eth_dev *dev, struct rte_flow_template_table_attr attr = { .flow_attr = { .group = 0, - .priority = 15, /* TODO: Flow priority discovery. */ + .priority = MLX5_HW_LOWEST_PRIO_ROOT, .ingress = 0, .egress = 0, .transfer = 1, }, .nb_flows = MLX5_HW_CTRL_FLOW_NB_RULES, }; + struct mlx5_flow_template_table_cfg cfg = { + .attr = attr, + .external = false, + }; - return flow_hw_table_create(dev, &attr, &it, 1, &at, 1, NULL); + return flow_hw_table_create(dev, &cfg, &it, 1, &at, 1, NULL); } /** @@ -3308,11 +3910,14 @@ flow_hw_create_ctrl_tables(struct rte_eth_dev *dev) { struct mlx5_priv *priv = dev->data->dev_private; struct rte_flow_pattern_template *esw_mgr_items_tmpl = NULL; - struct rte_flow_pattern_template *sq_items_tmpl = NULL; + struct rte_flow_pattern_template *regc_sq_items_tmpl = NULL; struct rte_flow_pattern_template *port_items_tmpl = NULL; - struct rte_flow_actions_template *jump_sq_actions_tmpl = NULL; + struct rte_flow_pattern_template *tx_meta_items_tmpl = NULL; + struct rte_flow_actions_template *regc_jump_actions_tmpl = NULL; struct rte_flow_actions_template *port_actions_tmpl = NULL; struct rte_flow_actions_template *jump_one_actions_tmpl = NULL; + struct rte_flow_actions_template *tx_meta_actions_tmpl = NULL; + uint32_t xmeta = priv->sh->config.dv_xmeta_en; /* Item templates */ esw_mgr_items_tmpl = flow_hw_create_ctrl_esw_mgr_pattern_template(dev); @@ -3321,8 +3926,8 @@ flow_hw_create_ctrl_tables(struct rte_eth_dev *dev) " template for control flows", dev->data->port_id); goto error; } - sq_items_tmpl = flow_hw_create_ctrl_sq_pattern_template(dev); - if (!sq_items_tmpl) { + regc_sq_items_tmpl = flow_hw_create_ctrl_regc_sq_pattern_template(dev); + if (!regc_sq_items_tmpl) { DRV_LOG(ERR, "port %u failed to create SQ item template for" " control flows", dev->data->port_id); goto error; @@ -3333,11 +3938,18 @@ flow_hw_create_ctrl_tables(struct rte_eth_dev *dev) " control flows", dev->data->port_id); goto error; } + if (xmeta == MLX5_XMETA_MODE_META32_HWS) { + tx_meta_items_tmpl = flow_hw_create_tx_default_mreg_copy_pattern_template(dev); + if (!tx_meta_items_tmpl) { + DRV_LOG(ERR, "port %u failed to Tx metadata copy pattern" + " template for control flows", dev->data->port_id); + goto error; + } + } /* Action templates */ - jump_sq_actions_tmpl = flow_hw_create_ctrl_jump_actions_template(dev, - MLX5_HW_SQ_MISS_GROUP); - if (!jump_sq_actions_tmpl) { - DRV_LOG(ERR, "port %u failed to create jump action template" + regc_jump_actions_tmpl = flow_hw_create_ctrl_regc_jump_actions_template(dev); + if (!regc_jump_actions_tmpl) { + DRV_LOG(ERR, "port %u failed to create REG_C set and jump action template" " for control flows", dev->data->port_id); goto error; } @@ -3347,23 +3959,32 @@ flow_hw_create_ctrl_tables(struct rte_eth_dev *dev) " for control flows", dev->data->port_id); goto error; } - jump_one_actions_tmpl = flow_hw_create_ctrl_jump_actions_template(dev, 1); + jump_one_actions_tmpl = flow_hw_create_ctrl_jump_actions_template + (dev, MLX5_HW_LOWEST_USABLE_GROUP); if (!jump_one_actions_tmpl) { DRV_LOG(ERR, "port %u failed to create jump action template" " for control flows", dev->data->port_id); goto error; } + if (xmeta == MLX5_XMETA_MODE_META32_HWS) { + 
tx_meta_actions_tmpl = flow_hw_create_tx_default_mreg_copy_actions_template(dev); + if (!tx_meta_actions_tmpl) { + DRV_LOG(ERR, "port %u failed to Tx metadata copy actions" + " template for control flows", dev->data->port_id); + goto error; + } + } /* Tables */ MLX5_ASSERT(priv->hw_esw_sq_miss_root_tbl == NULL); priv->hw_esw_sq_miss_root_tbl = flow_hw_create_ctrl_sq_miss_root_table - (dev, esw_mgr_items_tmpl, jump_sq_actions_tmpl); + (dev, esw_mgr_items_tmpl, regc_jump_actions_tmpl); if (!priv->hw_esw_sq_miss_root_tbl) { DRV_LOG(ERR, "port %u failed to create table for default sq miss (root table)" " for control flows", dev->data->port_id); goto error; } MLX5_ASSERT(priv->hw_esw_sq_miss_tbl == NULL); - priv->hw_esw_sq_miss_tbl = flow_hw_create_ctrl_sq_miss_table(dev, sq_items_tmpl, + priv->hw_esw_sq_miss_tbl = flow_hw_create_ctrl_sq_miss_table(dev, regc_sq_items_tmpl, port_actions_tmpl); if (!priv->hw_esw_sq_miss_tbl) { DRV_LOG(ERR, "port %u failed to create table for default sq miss (non-root table)" @@ -3378,6 +3999,16 @@ flow_hw_create_ctrl_tables(struct rte_eth_dev *dev) " for control flows", dev->data->port_id); goto error; } + if (xmeta == MLX5_XMETA_MODE_META32_HWS) { + MLX5_ASSERT(priv->hw_tx_meta_cpy_tbl == NULL); + priv->hw_tx_meta_cpy_tbl = flow_hw_create_tx_default_mreg_copy_table(dev, + tx_meta_items_tmpl, tx_meta_actions_tmpl); + if (!priv->hw_tx_meta_cpy_tbl) { + DRV_LOG(ERR, "port %u failed to create table for default" + " Tx metadata copy flow rule", dev->data->port_id); + goto error; + } + } return 0; error: if (priv->hw_esw_zero_tbl) { @@ -3392,16 +4023,20 @@ flow_hw_create_ctrl_tables(struct rte_eth_dev *dev) flow_hw_table_destroy(dev, priv->hw_esw_sq_miss_root_tbl, NULL); priv->hw_esw_sq_miss_root_tbl = NULL; } + if (xmeta == MLX5_XMETA_MODE_META32_HWS && tx_meta_actions_tmpl) + flow_hw_actions_template_destroy(dev, tx_meta_actions_tmpl, NULL); if (jump_one_actions_tmpl) flow_hw_actions_template_destroy(dev, jump_one_actions_tmpl, NULL); if (port_actions_tmpl) flow_hw_actions_template_destroy(dev, port_actions_tmpl, NULL); - if (jump_sq_actions_tmpl) - flow_hw_actions_template_destroy(dev, jump_sq_actions_tmpl, NULL); + if (regc_jump_actions_tmpl) + flow_hw_actions_template_destroy(dev, regc_jump_actions_tmpl, NULL); + if (xmeta == MLX5_XMETA_MODE_META32_HWS && tx_meta_items_tmpl) + flow_hw_pattern_template_destroy(dev, tx_meta_items_tmpl, NULL); if (port_items_tmpl) flow_hw_pattern_template_destroy(dev, port_items_tmpl, NULL); - if (sq_items_tmpl) - flow_hw_pattern_template_destroy(dev, sq_items_tmpl, NULL); + if (regc_sq_items_tmpl) + flow_hw_pattern_template_destroy(dev, regc_sq_items_tmpl, NULL); if (esw_mgr_items_tmpl) flow_hw_pattern_template_destroy(dev, esw_mgr_items_tmpl, NULL); return -EINVAL; @@ -3453,7 +4088,7 @@ flow_hw_configure(struct rte_eth_dev *dev, struct rte_flow_queue_attr **_queue_attr = NULL; struct rte_flow_queue_attr ctrl_queue_attr = {0}; bool is_proxy = !!(priv->sh->config.dv_esw_en && priv->master); - int ret; + int ret = 0; if (!port_attr || !nb_queue || !queue_attr) { rte_errno = EINVAL; @@ -3604,6 +4239,9 @@ flow_hw_configure(struct rte_eth_dev *dev, } if (_queue_attr) mlx5_free(_queue_attr); + /* Do not overwrite the internal errno information. 
*/ + if (ret) + return ret; return rte_flow_error_set(error, rte_errno, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, "fail to configure port"); @@ -3712,17 +4350,17 @@ void flow_hw_init_tags_set(struct rte_eth_dev *dev) return; unset |= 1 << (priv->mtr_color_reg - REG_C_0); unset |= 1 << (REG_C_6 - REG_C_0); - if (meta_mode == MLX5_XMETA_MODE_META32_HWS) { - unset |= 1 << (REG_C_1 - REG_C_0); + if (priv->sh->config.dv_esw_en) unset |= 1 << (REG_C_0 - REG_C_0); - } + if (meta_mode == MLX5_XMETA_MODE_META32_HWS) + unset |= 1 << (REG_C_1 - REG_C_0); masks &= ~unset; if (mlx5_flow_hw_avl_tags_init_cnt) { for (i = 0; i < MLX5_FLOW_HW_TAGS_MAX; i++) { if (mlx5_flow_hw_avl_tags[i] != REG_NON && !!((1 << i) & masks)) { copy[mlx5_flow_hw_avl_tags[i] - REG_C_0] = mlx5_flow_hw_avl_tags[i]; - copy_masks |= (1 << i); + copy_masks |= (1 << (mlx5_flow_hw_avl_tags[i] - REG_C_0)); } } if (copy_masks != masks) { @@ -3864,7 +4502,6 @@ flow_hw_action_handle_destroy(struct rte_eth_dev *dev, uint32_t queue, return flow_dv_action_destroy(dev, handle, error); } - const struct mlx5_flow_driver_ops mlx5_flow_hw_drv_ops = { .info_get = flow_hw_info_get, .configure = flow_hw_configure, @@ -3872,7 +4509,7 @@ const struct mlx5_flow_driver_ops mlx5_flow_hw_drv_ops = { .pattern_template_destroy = flow_hw_pattern_template_destroy, .actions_template_create = flow_hw_actions_template_create, .actions_template_destroy = flow_hw_actions_template_destroy, - .template_table_create = flow_hw_table_create, + .template_table_create = flow_hw_template_table_create, .template_table_destroy = flow_hw_table_destroy, .async_flow_create = flow_hw_async_flow_create, .async_flow_destroy = flow_hw_async_flow_destroy, @@ -3888,13 +4525,6 @@ const struct mlx5_flow_driver_ops mlx5_flow_hw_drv_ops = { .action_query = flow_dv_action_query, }; -static uint32_t -flow_hw_get_ctrl_queue(struct mlx5_priv *priv) -{ - MLX5_ASSERT(priv->nb_queue > 0); - return priv->nb_queue - 1; -} - /** * Creates a control flow using flow template API on @p proxy_dev device, * on behalf of @p owner_dev device. 
@@ -3932,7 +4562,7 @@ flow_hw_create_ctrl_flow(struct rte_eth_dev *owner_dev, uint8_t action_template_idx) { struct mlx5_priv *priv = proxy_dev->data->dev_private; - uint32_t queue = flow_hw_get_ctrl_queue(priv); + uint32_t queue = priv->nb_queue - 1; struct rte_flow_op_attr op_attr = { .postpone = 0, }; @@ -4007,7 +4637,7 @@ static int flow_hw_destroy_ctrl_flow(struct rte_eth_dev *dev, struct rte_flow *flow) { struct mlx5_priv *priv = dev->data->dev_private; - uint32_t queue = flow_hw_get_ctrl_queue(priv); + uint32_t queue = priv->nb_queue - 1; struct rte_flow_op_attr op_attr = { .postpone = 0, }; @@ -4144,10 +4774,24 @@ mlx5_flow_hw_esw_create_mgr_sq_miss_flow(struct rte_eth_dev *dev) .type = RTE_FLOW_ITEM_TYPE_END, }, }; + struct rte_flow_action_modify_field modify_field = { + .operation = RTE_FLOW_MODIFY_SET, + .dst = { + .field = (enum rte_flow_field_id)MLX5_RTE_FLOW_FIELD_META_REG, + }, + .src = { + .field = RTE_FLOW_FIELD_VALUE, + }, + .width = 1, + }; struct rte_flow_action_jump jump = { - .group = MLX5_HW_SQ_MISS_GROUP, + .group = 1, }; struct rte_flow_action actions[] = { + { + .type = RTE_FLOW_ACTION_TYPE_MODIFY_FIELD, + .conf = &modify_field, + }, { .type = RTE_FLOW_ACTION_TYPE_JUMP, .conf = &jump, @@ -4170,6 +4814,12 @@ int mlx5_flow_hw_esw_create_sq_miss_flow(struct rte_eth_dev *dev, uint32_t txq) { uint16_t port_id = dev->data->port_id; + struct rte_flow_item_tag reg_c0_spec = { + .index = (uint8_t)REG_C_0, + }; + struct rte_flow_item_tag reg_c0_mask = { + .index = 0xff, + }; struct mlx5_rte_flow_item_tx_queue queue_spec = { .queue = txq, }; @@ -4177,6 +4827,12 @@ mlx5_flow_hw_esw_create_sq_miss_flow(struct rte_eth_dev *dev, uint32_t txq) .queue = UINT32_MAX, }; struct rte_flow_item items[] = { + { + .type = (enum rte_flow_item_type) + MLX5_RTE_FLOW_ITEM_TYPE_TAG, + .spec = ®_c0_spec, + .mask = ®_c0_mask, + }, { .type = (enum rte_flow_item_type) MLX5_RTE_FLOW_ITEM_TYPE_TX_QUEUE, @@ -4202,6 +4858,7 @@ mlx5_flow_hw_esw_create_sq_miss_flow(struct rte_eth_dev *dev, uint32_t txq) struct rte_eth_dev *proxy_dev; struct mlx5_priv *proxy_priv; uint16_t proxy_port_id = dev->data->port_id; + uint32_t marker_bit; int ret; RTE_SET_USED(txq); @@ -4222,6 +4879,14 @@ mlx5_flow_hw_esw_create_sq_miss_flow(struct rte_eth_dev *dev, uint32_t txq) rte_errno = ENOMEM; return -rte_errno; } + marker_bit = flow_hw_usable_lsb_vport_mask(proxy_priv); + if (!marker_bit) { + DRV_LOG(ERR, "Unable to set up control flow in SQ miss table"); + rte_errno = EINVAL; + return -rte_errno; + } + reg_c0_spec.data = marker_bit; + reg_c0_mask.data = marker_bit; return flow_hw_create_ctrl_flow(dev, proxy_dev, proxy_priv->hw_esw_sq_miss_tbl, items, 0, actions, 0); @@ -4281,4 +4946,53 @@ mlx5_flow_hw_esw_create_default_jump_flow(struct rte_eth_dev *dev) items, 0, actions, 0); } +int +mlx5_flow_hw_create_tx_default_mreg_copy_flow(struct rte_eth_dev *dev) +{ + struct mlx5_priv *priv = dev->data->dev_private; + struct rte_flow_item_eth promisc = { + .dst.addr_bytes = "\x00\x00\x00\x00\x00\x00", + .src.addr_bytes = "\x00\x00\x00\x00\x00\x00", + .type = 0, + }; + struct rte_flow_item eth_all[] = { + [0] = { + .type = RTE_FLOW_ITEM_TYPE_ETH, + .spec = &promisc, + .mask = &promisc, + }, + [1] = { + .type = RTE_FLOW_ITEM_TYPE_END, + }, + }; + struct rte_flow_action_modify_field mreg_action = { + .operation = RTE_FLOW_MODIFY_SET, + .dst = { + .field = (enum rte_flow_field_id)MLX5_RTE_FLOW_FIELD_META_REG, + .level = REG_C_1, + }, + .src = { + .field = (enum rte_flow_field_id)MLX5_RTE_FLOW_FIELD_META_REG, + .level = REG_A, + }, + 
.width = 32,
+	};
+	struct rte_flow_action copy_reg_action[] = {
+		[0] = {
+			.type = RTE_FLOW_ACTION_TYPE_MODIFY_FIELD,
+			.conf = &mreg_action,
+		},
+		[1] = {
+			.type = RTE_FLOW_ACTION_TYPE_END,
+		},
+	};
+
+	MLX5_ASSERT(priv->master);
+	if (!priv->dr_ctx || !priv->hw_tx_meta_cpy_tbl)
+		return 0;
+	return flow_hw_create_ctrl_flow(dev, dev,
+					priv->hw_tx_meta_cpy_tbl,
+					eth_all, 0, copy_reg_action, 0);
+}
+
 #endif
diff --git a/drivers/net/mlx5/mlx5_trigger.c b/drivers/net/mlx5/mlx5_trigger.c
index 3ef31671b1..9e458356a0 100644
--- a/drivers/net/mlx5/mlx5_trigger.c
+++ b/drivers/net/mlx5/mlx5_trigger.c
@@ -1290,6 +1290,9 @@ mlx5_traffic_enable_hws(struct rte_eth_dev *dev)
 	if (priv->sh->config.dv_esw_en && priv->master) {
 		if (mlx5_flow_hw_esw_create_mgr_sq_miss_flow(dev))
 			goto error;
+		if (priv->sh->config.dv_xmeta_en == MLX5_XMETA_MODE_META32_HWS)
+			if (mlx5_flow_hw_create_tx_default_mreg_copy_flow(dev))
+				goto error;
 	}
 	for (i = 0; i < priv->txqs_n; ++i) {
 		struct mlx5_txq_ctrl *txq = mlx5_txq_get(dev, i);
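The flow_hw_action_meta_copy_insert() helper introduced earlier in this patch follows a simple splice scheme: scan the END-terminated action array, remember the position of the first QUEUE/RSS action, and shift the tail right by one slot to make room. A minimal standalone sketch of the same scheme, under simplified assumptions (MAX_ACTS and insert_before_fate() are illustrative names, not the driver's identifiers, and error reporting is reduced to a return code):

#include <string.h>
#include <rte_flow.h>

#define MAX_ACTS 16 /* illustrative bound, stands in for MLX5_HW_MAX_ACTS */

/* Copy `src` into `dst`, inserting `ins` before the first QUEUE/RSS
 * action. Returns the number of entries written (including END), or
 * -1 if the result would not fit. Plain copy if no QUEUE/RSS found.
 */
static int
insert_before_fate(struct rte_flow_action dst[MAX_ACTS],
		   const struct rte_flow_action src[],
		   const struct rte_flow_action *ins)
{
	int i, pos = -1;

	for (i = 0; ; i++) {
		if (i >= MAX_ACTS)
			return -1;
		if (pos < 0 && (src[i].type == RTE_FLOW_ACTION_TYPE_QUEUE ||
				src[i].type == RTE_FLOW_ACTION_TYPE_RSS))
			pos = i;
		if (src[i].type == RTE_FLOW_ACTION_TYPE_END)
			break;
	}
	if (pos < 0) { /* nothing to insert, copy actions plus END */
		memcpy(dst, src, sizeof(*src) * (i + 1));
		return i + 1;
	}
	if (i + 1 >= MAX_ACTS) /* no room for one more entry */
		return -1;
	memcpy(dst, src, sizeof(*src) * pos);
	dst[pos] = *ins;
	memcpy(&dst[pos + 1], &src[pos], sizeof(*src) * (i + 1 - pos));
	return i + 2;
}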
From patchwork Fri Sep 23 14:43:16 2022
X-Patchwork-Submitter: Suanming Mou
X-Patchwork-Id: 116746
X-Patchwork-Delegate: thomas@monjalon.net
From: Suanming Mou
To: Ori Kam, Thomas Monjalon, "Ferruh Yigit", Andrew Rybchenko
CC: , Alexander Kozyrev
Subject: [PATCH 09/27] ethdev: add meter profiles/policies config
Date: Fri, 23 Sep 2022 17:43:16 +0300
Message-ID: <20220923144334.27736-10-suanmingm@nvidia.com>
In-Reply-To: <20220923144334.27736-1-suanmingm@nvidia.com>
References: <20220923144334.27736-1-suanmingm@nvidia.com>

From: Alexander Kozyrev

Provide the ability to specify the number of meter profiles/policies
alongside the number of meters during the Flow engine configuration.

Signed-off-by: Alexander Kozyrev

---
 lib/ethdev/rte_flow.h | 22 +++++++++++++++++++++-
 1 file changed, 21 insertions(+), 1 deletion(-)
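Assuming the new rte_flow_port_attr fields land as in the diff below, application-side usage might look like this minimal sketch (one flow queue; the counts are arbitrary example values and configure_meter_resources() is a hypothetical helper, not part of the patch):

#include <rte_flow.h>

/* Reserve meter objects, profiles and policies at configure time. */
static int
configure_meter_resources(uint16_t port_id)
{
	const struct rte_flow_port_attr port_attr = {
		.nb_meters = 1024,
		.nb_meter_profiles = 8, /* field added by this patch */
		.nb_meter_policies = 8, /* field added by this patch */
	};
	const struct rte_flow_queue_attr queue_attr = { .size = 64 };
	const struct rte_flow_queue_attr *attr_list[] = { &queue_attr };
	struct rte_flow_error error;

	return rte_flow_configure(port_id, &port_attr, 1, attr_list, &error);
}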
diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index a79f1e7ef0..abb475bdee 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -4898,10 +4898,20 @@ struct rte_flow_port_info {
 	 */
 	uint32_t max_nb_aging_objects;
 	/**
-	 * Maximum number traffic meters.
+	 * Maximum number of traffic meters.
 	 * @see RTE_FLOW_ACTION_TYPE_METER
 	 */
 	uint32_t max_nb_meters;
+	/**
+	 * Maximum number of traffic meter profiles.
+	 * @see RTE_FLOW_ACTION_TYPE_METER
+	 */
+	uint32_t max_nb_meter_profiles;
+	/**
+	 * Maximum number of traffic meter policies.
+	 * @see RTE_FLOW_ACTION_TYPE_METER
+	 */
+	uint32_t max_nb_meter_policies;
 };
 
 /**
@@ -4971,6 +4981,16 @@ struct rte_flow_port_attr {
 	 * @see RTE_FLOW_ACTION_TYPE_METER
 	 */
 	uint32_t nb_meters;
+	/**
+	 * Number of traffic meter profiles to configure.
+	 * @see RTE_FLOW_ACTION_TYPE_METER
+	 */
+	uint32_t nb_meter_profiles;
+	/**
+	 * Number of traffic meter policies to configure.
+	 * @see RTE_FLOW_ACTION_TYPE_METER
+	 */
+	uint32_t nb_meter_policies;
 };
 
 /**
From patchwork Fri Sep 23 14:43:17 2022
X-Patchwork-Submitter: Suanming Mou
X-Patchwork-Id: 116748
X-Patchwork-Delegate: thomas@monjalon.net
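The rte_flow_port_info counterpart fields above let an application validate its requested counts against the device limits before configuring; a small complementary sketch (meter_limits_ok() is a hypothetical helper):

#include <rte_flow.h>

/* Check requested meter profile/policy counts against device limits. */
static int
meter_limits_ok(uint16_t port_id, uint32_t profiles, uint32_t policies)
{
	struct rte_flow_port_info info;
	struct rte_flow_queue_info qinfo;
	struct rte_flow_error error;

	if (rte_flow_info_get(port_id, &info, &qinfo, &error) < 0)
		return 0;
	return profiles <= info.max_nb_meter_profiles &&
	       policies <= info.max_nb_meter_policies;
}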
From: Suanming Mou
To: Matan Azrad, Viacheslav Ovsiienko
CC: , Alexander Kozyrev
Subject: [PATCH 10/27] net/mlx5: add HW steering meter action
Date: Fri, 23 Sep 2022 17:43:17 +0300
Message-ID: <20220923144334.27736-11-suanmingm@nvidia.com>
In-Reply-To: <20220923144334.27736-1-suanmingm@nvidia.com>
References: <20220923144334.27736-1-suanmingm@nvidia.com>

From: Alexander Kozyrev

This commit adds the meter action for HWS steering. The HW steering
meter is based on ASO. The number of meters that will be used by flows
should be specified in advance in the flow configure API.

Signed-off-by: Alexander Kozyrev

---
 drivers/net/mlx5/mlx5.h            |  58 +-
 drivers/net/mlx5/mlx5_flow.c       |  71 +++
 drivers/net/mlx5/mlx5_flow.h       |  50 ++
 drivers/net/mlx5/mlx5_flow_aso.c   |  30 +-
 drivers/net/mlx5/mlx5_flow_dv.c    |  25 -
 drivers/net/mlx5/mlx5_flow_hw.c    | 113 +++-
 drivers/net/mlx5/mlx5_flow_meter.c | 851 ++++++++++++++++++++++++++++-
 7 files changed, 1138 insertions(+), 60 deletions(-)
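As the driver-side translation later in this patch shows, a meter becomes a template-time constant when the action mask carries a non-zero mtr_id. A usage sketch from the application side, assuming the meter object itself was created beforehand through the rte_mtr API (create_meter_actions_template() and all values are illustrative):

#include <stdint.h>
#include <rte_flow.h>

/* Build an actions template carrying a fixed meter. The non-zero
 * mtr_id in the mask marks the meter as constant for every flow
 * rule created from this template.
 */
static struct rte_flow_actions_template *
create_meter_actions_template(uint16_t port_id, uint32_t mtr_id)
{
	const struct rte_flow_actions_template_attr attr = { .ingress = 1 };
	const struct rte_flow_action_meter meter_conf = { .mtr_id = mtr_id };
	const struct rte_flow_action_meter meter_mask = { .mtr_id = UINT32_MAX };
	const struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_METER, .conf = &meter_conf },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};
	const struct rte_flow_action masks[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_METER, .conf = &meter_mask },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};
	struct rte_flow_error error;

	return rte_flow_actions_template_create(port_id, &attr, actions,
						masks, &error);
}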
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 3364c4735c..263b502d37 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -357,6 +357,9 @@ struct mlx5_hw_q {
 	struct mlx5_hw_q_job **job; /* LIFO header. */
 } __rte_cache_aligned;
 
+
+
+
 #define MLX5_COUNTERS_PER_POOL 512
 #define MLX5_MAX_PENDING_QUERIES 4
 #define MLX5_CNT_CONTAINER_RESIZE 64
@@ -782,15 +785,29 @@ struct mlx5_flow_meter_policy {
 	/* Is meter action in policy table. */
 	uint32_t hierarchy_drop_cnt:1;
 	/* Is any meter in hierarchy contains drop_cnt. */
+	uint32_t skip_r:1;
+	/* If red color policy is skipped. */
 	uint32_t skip_y:1;
 	/* If yellow color policy is skipped. */
 	uint32_t skip_g:1;
 	/* If green color policy is skipped. */
 	uint32_t mark:1;
 	/* If policy contains mark action. */
+	uint32_t initialized:1;
+	/* Initialized. */
+	uint16_t group;
+	/* The group. */
 	rte_spinlock_t sl;
 	uint32_t ref_cnt;
 	/* Use count. */
+	struct rte_flow_pattern_template *hws_item_templ;
+	/* Hardware steering item templates. */
+	struct rte_flow_actions_template *hws_act_templ[MLX5_MTR_DOMAIN_MAX];
+	/* Hardware steering action templates. */
+	struct rte_flow_template_table *hws_flow_table[MLX5_MTR_DOMAIN_MAX];
+	/* Hardware steering tables. */
+	struct rte_flow *hws_flow_rule[MLX5_MTR_DOMAIN_MAX][RTE_COLORS];
+	/* Hardware steering rules. */
 	struct mlx5_meter_policy_action_container act_cnt[MLX5_MTR_RTE_COLORS];
 	/* Policy actions container. */
 	void *dr_drop_action[MLX5_MTR_DOMAIN_MAX];
@@ -865,6 +882,7 @@ struct mlx5_flow_meter_info {
 	 */
 	uint32_t transfer:1;
 	uint32_t def_policy:1;
+	uint32_t initialized:1;
 	/* Meter points to default policy. */
 	uint32_t color_aware:1;
 	/* Meter is color aware mode. */
@@ -880,6 +898,10 @@ struct mlx5_flow_meter_info {
 	/**< Flow meter action. */
 	void *meter_action_y;
 	/**< Flow meter action for yellow init_color. */
+	uint32_t meter_offset;
+	/**< Flow meter offset. */
+	uint16_t group;
+	/**< Flow meter group. */
 };
 
 /* PPS(packets per second) map to BPS(Bytes per second).
@@ -914,6 +936,7 @@ struct mlx5_flow_meter_profile {
 	uint32_t ref_cnt; /**< Use count. */
 	uint32_t g_support:1; /**< If G color will be generated.
*/ uint32_t y_support:1; /**< If Y color will be generated. */ + uint32_t initialized:1; /**< Initialized. */ }; /* 2 meters in each ASO cache line */ @@ -934,13 +957,20 @@ enum mlx5_aso_mtr_state { ASO_METER_READY, /* CQE received. */ }; +/*aso flow meter type*/ +enum mlx5_aso_mtr_type { + ASO_METER_INDIRECT, + ASO_METER_DIRECT, +}; + /* Generic aso_flow_meter information. */ struct mlx5_aso_mtr { LIST_ENTRY(mlx5_aso_mtr) next; + enum mlx5_aso_mtr_type type; struct mlx5_flow_meter_info fm; /**< Pointer to the next aso flow meter structure. */ uint8_t state; /**< ASO flow meter state. */ - uint8_t offset; + uint32_t offset; }; /* Generic aso_flow_meter pool structure. */ @@ -964,6 +994,14 @@ struct mlx5_aso_mtr_pools_mng { struct mlx5_aso_mtr_pool **pools; /* ASO flow meter pool array. */ }; +/* Bulk management structure for ASO flow meter. */ +struct mlx5_mtr_bulk { + uint32_t size; /* Number of ASO objects. */ + struct mlx5dr_action *action; /* HWS action */ + struct mlx5_devx_obj *devx_obj; /* DEVX object. */ + struct mlx5_aso_mtr *aso; /* Array of ASO objects. */ +}; + /* Meter management structure for global flow meter resource. */ struct mlx5_flow_mtr_mng { struct mlx5_aso_mtr_pools_mng pools_mng; @@ -1017,6 +1055,7 @@ struct mlx5_flow_tbl_resource { #define MLX5_FLOW_TABLE_LEVEL_METER (MLX5_MAX_TABLES - 3) #define MLX5_FLOW_TABLE_LEVEL_POLICY (MLX5_MAX_TABLES - 4) #define MLX5_MAX_TABLES_EXTERNAL MLX5_FLOW_TABLE_LEVEL_POLICY +#define MLX5_FLOW_TABLE_HWS_POLICY (MLX5_MAX_TABLES - 10) #define MLX5_MAX_TABLES_FDB UINT16_MAX #define MLX5_FLOW_TABLE_FACTOR 10 @@ -1303,6 +1342,12 @@ TAILQ_HEAD(mlx5_mtr_profiles, mlx5_flow_meter_profile); /* MTR list. */ TAILQ_HEAD(mlx5_legacy_flow_meters, mlx5_legacy_flow_meter); +struct mlx5_mtr_config { + uint32_t nb_meters; /**< Number of configured meters */ + uint32_t nb_meter_profiles; /**< Number of configured meter profiles */ + uint32_t nb_meter_policies; /**< Number of configured meter policies */ +}; + /* RSS description. */ struct mlx5_flow_rss_desc { uint32_t level; @@ -1539,12 +1584,16 @@ struct mlx5_priv { struct mlx5_nl_vlan_vmwa_context *vmwa_context; /* VLAN WA context. */ struct mlx5_hlist *mreg_cp_tbl; /* Hash table of Rx metadata register copy table. */ + struct mlx5_mtr_config mtr_config; /* Meter configuration */ uint8_t mtr_sfx_reg; /* Meter prefix-suffix flow match REG_C. */ uint8_t mtr_color_reg; /* Meter color match REG_C. */ struct mlx5_legacy_flow_meters flow_meters; /* MTR list. */ struct mlx5_l3t_tbl *mtr_profile_tbl; /* Meter index lookup table. */ + struct mlx5_flow_meter_profile *mtr_profile_arr; /* Profile array. */ struct mlx5_l3t_tbl *policy_idx_tbl; /* Policy index lookup table. */ + struct mlx5_flow_meter_policy *mtr_policy_arr; /* Policy array. */ struct mlx5_l3t_tbl *mtr_idx_tbl; /* Meter index lookup table. */ + struct mlx5_mtr_bulk mtr_bulk; /* Meter index mapping for HWS */ uint8_t skip_default_rss_reta; /* Skip configuration of default reta. */ uint8_t fdb_def_rule; /* Whether fdb jump to table 1 is configured. 
*/ struct mlx5_mp_id mp_id; /* ID of a multi-process process */ @@ -1579,6 +1628,7 @@ struct mlx5_priv { #define PORT_ID(priv) ((priv)->dev_data->port_id) #define ETH_DEV(priv) (&rte_eth_devices[PORT_ID(priv)]) +#define CTRL_QUEUE_ID(priv) ((priv)->nb_queue - 1) struct rte_hairpin_peer_info { uint32_t qp_id; @@ -1890,6 +1940,10 @@ void mlx5_pmd_socket_uninit(void); /* mlx5_flow_meter.c */ +int mlx5_flow_meter_init(struct rte_eth_dev *dev, + uint32_t nb_meters, + uint32_t nb_meter_profiles, + uint32_t nb_meter_policies); int mlx5_flow_meter_ops_get(struct rte_eth_dev *dev, void *arg); struct mlx5_flow_meter_info *mlx5_flow_meter_find(struct mlx5_priv *priv, uint32_t meter_id, uint32_t *mtr_idx); @@ -1964,7 +2018,7 @@ int mlx5_aso_flow_hit_queue_poll_stop(struct mlx5_dev_ctx_shared *sh); void mlx5_aso_queue_uninit(struct mlx5_dev_ctx_shared *sh, enum mlx5_access_aso_opc_mod aso_opc_mod); int mlx5_aso_meter_update_by_wqe(struct mlx5_dev_ctx_shared *sh, - struct mlx5_aso_mtr *mtr); + struct mlx5_aso_mtr *mtr, struct mlx5_mtr_bulk *bulk); int mlx5_aso_mtr_wait(struct mlx5_dev_ctx_shared *sh, struct mlx5_aso_mtr *mtr); int mlx5_aso_ct_update_by_wqe(struct mlx5_dev_ctx_shared *sh, diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c index b570ed7f69..fb3be940e5 100644 --- a/drivers/net/mlx5/mlx5_flow.c +++ b/drivers/net/mlx5/mlx5_flow.c @@ -8331,6 +8331,40 @@ mlx5_flow_port_configure(struct rte_eth_dev *dev, return fops->configure(dev, port_attr, nb_queue, queue_attr, error); } +/** + * Validate item template. + * + * @param[in] dev + * Pointer to the rte_eth_dev structure. + * @param[in] attr + * Pointer to the item template attributes. + * @param[in] items + * The template item pattern. + * @param[out] error + * Pointer to error structure. + * + * @return + * 0 on success, a negative errno value otherwise and rte_errno is set. + */ +int +mlx5_flow_pattern_validate(struct rte_eth_dev *dev, + const struct rte_flow_pattern_template_attr *attr, + const struct rte_flow_item items[], + struct rte_flow_error *error) +{ + const struct mlx5_flow_driver_ops *fops; + struct rte_flow_attr fattr = {0}; + + if (flow_get_drv_type(dev, &fattr) != MLX5_FLOW_TYPE_HW) { + rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, + "pattern validate with incorrect steering mode"); + return -ENOTSUP; + } + fops = flow_get_drv_ops(MLX5_FLOW_TYPE_HW); + return fops->pattern_validate(dev, attr, items, error); +} + /** * Create flow item template. * @@ -8396,6 +8430,43 @@ mlx5_flow_pattern_template_destroy(struct rte_eth_dev *dev, return fops->pattern_template_destroy(dev, template, error); } +/** + * Validate flow actions template. + * + * @param[in] dev + * Pointer to the rte_eth_dev structure. + * @param[in] attr + * Pointer to the action template attributes. + * @param[in] actions + * Associated actions (list terminated by the END action). + * @param[in] masks + * List of actions that marks which of the action's member is constant. + * @param[out] error + * Pointer to error structure. + * + * @return + * 0 on success, a negative errno value otherwise and rte_errno is set. 
+ */ +int +mlx5_flow_actions_validate(struct rte_eth_dev *dev, + const struct rte_flow_actions_template_attr *attr, + const struct rte_flow_action actions[], + const struct rte_flow_action masks[], + struct rte_flow_error *error) +{ + const struct mlx5_flow_driver_ops *fops; + struct rte_flow_attr fattr = {0}; + + if (flow_get_drv_type(dev, &fattr) != MLX5_FLOW_TYPE_HW) { + rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, + "actions validate with incorrect steering mode"); + return -ENOTSUP; + } + fops = flow_get_drv_ops(MLX5_FLOW_TYPE_HW); + return fops->actions_validate(dev, attr, actions, masks, error); +} + /** * Create flow item template. * diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h index 15c5826d8a..c5190b1d4f 100644 --- a/drivers/net/mlx5/mlx5_flow.h +++ b/drivers/net/mlx5/mlx5_flow.h @@ -1653,6 +1653,11 @@ typedef int (*mlx5_flow_port_configure_t) uint16_t nb_queue, const struct rte_flow_queue_attr *queue_attr[], struct rte_flow_error *err); +typedef int (*mlx5_flow_pattern_validate_t) + (struct rte_eth_dev *dev, + const struct rte_flow_pattern_template_attr *attr, + const struct rte_flow_item items[], + struct rte_flow_error *error); typedef struct rte_flow_pattern_template *(*mlx5_flow_pattern_template_create_t) (struct rte_eth_dev *dev, const struct rte_flow_pattern_template_attr *attr, @@ -1662,6 +1667,12 @@ typedef int (*mlx5_flow_pattern_template_destroy_t) (struct rte_eth_dev *dev, struct rte_flow_pattern_template *template, struct rte_flow_error *error); +typedef int (*mlx5_flow_actions_validate_t) + (struct rte_eth_dev *dev, + const struct rte_flow_actions_template_attr *attr, + const struct rte_flow_action actions[], + const struct rte_flow_action masks[], + struct rte_flow_error *error); typedef struct rte_flow_actions_template *(*mlx5_flow_actions_template_create_t) (struct rte_eth_dev *dev, const struct rte_flow_actions_template_attr *attr, @@ -1778,8 +1789,10 @@ struct mlx5_flow_driver_ops { mlx5_flow_item_update_t item_update; mlx5_flow_info_get_t info_get; mlx5_flow_port_configure_t configure; + mlx5_flow_pattern_validate_t pattern_validate; mlx5_flow_pattern_template_create_t pattern_template_create; mlx5_flow_pattern_template_destroy_t pattern_template_destroy; + mlx5_flow_actions_validate_t actions_validate; mlx5_flow_actions_template_create_t actions_template_create; mlx5_flow_actions_template_destroy_t actions_template_destroy; mlx5_flow_table_create_t template_table_create; @@ -1861,6 +1874,8 @@ mlx5_aso_meter_by_idx(struct mlx5_priv *priv, uint32_t idx) /* Decrease to original index. */ idx--; + if (priv->mtr_bulk.aso) + return priv->mtr_bulk.aso + idx; MLX5_ASSERT(idx / MLX5_ASO_MTRS_PER_POOL < pools_mng->n); rte_rwlock_read_lock(&pools_mng->resize_mtrwl); pool = pools_mng->pools[idx / MLX5_ASO_MTRS_PER_POOL]; @@ -1963,6 +1978,32 @@ mlx5_translate_tunnel_etypes(uint64_t pattern_flags) int flow_hw_q_flow_flush(struct rte_eth_dev *dev, struct rte_flow_error *error); + +/* + * Convert rte_mtr_color to mlx5 color. + * + * @param[in] rcol + * rte_mtr_color. + * + * @return + * mlx5 color. 
+ */ +static inline int +rte_col_2_mlx5_col(enum rte_color rcol) +{ + switch (rcol) { + case RTE_COLOR_GREEN: + return MLX5_FLOW_COLOR_GREEN; + case RTE_COLOR_YELLOW: + return MLX5_FLOW_COLOR_YELLOW; + case RTE_COLOR_RED: + return MLX5_FLOW_COLOR_RED; + default: + break; + } + return MLX5_FLOW_COLOR_UNDEFINED; +} + int mlx5_flow_group_to_table(struct rte_eth_dev *dev, const struct mlx5_flow_tunnel *tunnel, uint32_t group, uint32_t *table, @@ -2346,4 +2387,13 @@ int mlx5_flow_hw_esw_create_sq_miss_flow(struct rte_eth_dev *dev, uint32_t txq); int mlx5_flow_hw_esw_create_default_jump_flow(struct rte_eth_dev *dev); int mlx5_flow_hw_create_tx_default_mreg_copy_flow(struct rte_eth_dev *dev); +int mlx5_flow_actions_validate(struct rte_eth_dev *dev, + const struct rte_flow_actions_template_attr *attr, + const struct rte_flow_action actions[], + const struct rte_flow_action masks[], + struct rte_flow_error *error); +int mlx5_flow_pattern_validate(struct rte_eth_dev *dev, + const struct rte_flow_pattern_template_attr *attr, + const struct rte_flow_item items[], + struct rte_flow_error *error); #endif /* RTE_PMD_MLX5_FLOW_H_ */ diff --git a/drivers/net/mlx5/mlx5_flow_aso.c b/drivers/net/mlx5/mlx5_flow_aso.c index 4129e3a9e0..60d0280367 100644 --- a/drivers/net/mlx5/mlx5_flow_aso.c +++ b/drivers/net/mlx5/mlx5_flow_aso.c @@ -642,7 +642,8 @@ mlx5_aso_flow_hit_queue_poll_stop(struct mlx5_dev_ctx_shared *sh) static uint16_t mlx5_aso_mtr_sq_enqueue_single(struct mlx5_dev_ctx_shared *sh, struct mlx5_aso_sq *sq, - struct mlx5_aso_mtr *aso_mtr) + struct mlx5_aso_mtr *aso_mtr, + struct mlx5_mtr_bulk *bulk) { volatile struct mlx5_aso_wqe *wqe = NULL; struct mlx5_flow_meter_info *fm = NULL; @@ -653,6 +654,7 @@ mlx5_aso_mtr_sq_enqueue_single(struct mlx5_dev_ctx_shared *sh, uint32_t dseg_idx = 0; struct mlx5_aso_mtr_pool *pool = NULL; uint32_t param_le; + int id; rte_spinlock_lock(&sq->sqsl); res = size - (uint16_t)(sq->head - sq->tail); @@ -666,14 +668,19 @@ mlx5_aso_mtr_sq_enqueue_single(struct mlx5_dev_ctx_shared *sh, /* Fill next WQE. */ fm = &aso_mtr->fm; sq->elts[sq->head & mask].mtr = aso_mtr; - pool = container_of(aso_mtr, struct mlx5_aso_mtr_pool, - mtrs[aso_mtr->offset]); - wqe->general_cseg.misc = rte_cpu_to_be_32(pool->devx_obj->id + - (aso_mtr->offset >> 1)); - wqe->general_cseg.opcode = rte_cpu_to_be_32(MLX5_OPCODE_ACCESS_ASO | - (ASO_OPC_MOD_POLICER << - WQE_CSEG_OPC_MOD_OFFSET) | - sq->pi << WQE_CSEG_WQE_INDEX_OFFSET); + if (aso_mtr->type == ASO_METER_INDIRECT) { + pool = container_of(aso_mtr, struct mlx5_aso_mtr_pool, + mtrs[aso_mtr->offset]); + id = pool->devx_obj->id; + } else { + id = bulk->devx_obj->id; + } + wqe->general_cseg.misc = rte_cpu_to_be_32(id + + (aso_mtr->offset >> 1)); + wqe->general_cseg.opcode = + rte_cpu_to_be_32(MLX5_OPCODE_ACCESS_ASO | + (ASO_OPC_MOD_POLICER << WQE_CSEG_OPC_MOD_OFFSET) | + sq->pi << WQE_CSEG_WQE_INDEX_OFFSET); /* There are 2 meters in one ASO cache line. */ dseg_idx = aso_mtr->offset & 0x1; wqe->aso_cseg.data_mask = @@ -811,14 +818,15 @@ mlx5_aso_mtr_completion_handle(struct mlx5_aso_sq *sq) */ int mlx5_aso_meter_update_by_wqe(struct mlx5_dev_ctx_shared *sh, - struct mlx5_aso_mtr *mtr) + struct mlx5_aso_mtr *mtr, + struct mlx5_mtr_bulk *bulk) { struct mlx5_aso_sq *sq = &sh->mtrmng->pools_mng.sq; uint32_t poll_wqe_times = MLX5_MTR_POLL_WQE_CQE_TIMES; do { mlx5_aso_mtr_completion_handle(sq); - if (mlx5_aso_mtr_sq_enqueue_single(sh, sq, mtr)) + if (mlx5_aso_mtr_sq_enqueue_single(sh, sq, mtr, bulk)) return 0; /* Waiting for wqe resource. 
*/ rte_delay_us_sleep(MLX5_ASO_WQE_CQE_RESPONSE_DELAY); diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c index d1f0d63fdc..80539fd75d 100644 --- a/drivers/net/mlx5/mlx5_flow_dv.c +++ b/drivers/net/mlx5/mlx5_flow_dv.c @@ -216,31 +216,6 @@ flow_dv_attr_init(const struct rte_flow_item *item, union flow_dv_attr *attr, attr->valid = 1; } -/* - * Convert rte_mtr_color to mlx5 color. - * - * @param[in] rcol - * rte_mtr_color. - * - * @return - * mlx5 color. - */ -static inline int -rte_col_2_mlx5_col(enum rte_color rcol) -{ - switch (rcol) { - case RTE_COLOR_GREEN: - return MLX5_FLOW_COLOR_GREEN; - case RTE_COLOR_YELLOW: - return MLX5_FLOW_COLOR_YELLOW; - case RTE_COLOR_RED: - return MLX5_FLOW_COLOR_RED; - default: - break; - } - return MLX5_FLOW_COLOR_UNDEFINED; -} - struct field_modify_info modify_eth[] = { {4, 0, MLX5_MODI_OUT_DMAC_47_16}, {2, 4, MLX5_MODI_OUT_DMAC_15_0}, diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c index dfbf885530..959d566d68 100644 --- a/drivers/net/mlx5/mlx5_flow_hw.c +++ b/drivers/net/mlx5/mlx5_flow_hw.c @@ -903,6 +903,38 @@ flow_hw_represented_port_compile(struct rte_eth_dev *dev, return 0; } +static __rte_always_inline int +flow_hw_meter_compile(struct rte_eth_dev *dev, + const struct mlx5_flow_template_table_cfg *cfg, + uint32_t start_pos, const struct rte_flow_action *action, + struct mlx5_hw_actions *acts, uint32_t *end_pos, + struct rte_flow_error *error) +{ + struct mlx5_priv *priv = dev->data->dev_private; + struct mlx5_aso_mtr *aso_mtr; + const struct rte_flow_action_meter *meter = action->conf; + uint32_t pos = start_pos; + uint32_t group = cfg->attr.flow_attr.group; + + aso_mtr = mlx5_aso_meter_by_idx(priv, meter->mtr_id); + acts->rule_acts[pos].action = priv->mtr_bulk.action; + acts->rule_acts[pos].aso_meter.offset = aso_mtr->offset; + acts->jump = flow_hw_jump_action_register + (dev, cfg, aso_mtr->fm.group, error); + if (!acts->jump) { + *end_pos = start_pos; + return -ENOMEM; + } + acts->rule_acts[++pos].action = (!!group) ? + acts->jump->hws_action : + acts->jump->root_action; + *end_pos = pos; + if (mlx5_aso_mtr_wait(priv->sh, aso_mtr)) { + *end_pos = start_pos; + return -ENOMEM; + } + return 0; +} /** * Translate rte_flow actions to DR action. 
* @@ -1131,6 +1163,21 @@ flow_hw_actions_translate(struct rte_eth_dev *dev, goto err; i++; break; + case RTE_FLOW_ACTION_TYPE_METER: + if (actions->conf && masks->conf && + ((const struct rte_flow_action_meter *) + masks->conf)->mtr_id) { + err = flow_hw_meter_compile(dev, cfg, + i, actions, acts, &i, error); + if (err) + goto err; + } else if (__flow_hw_act_data_general_append(priv, acts, + actions->type, + actions - action_start, + i)) + goto err; + i++; + break; case RTE_FLOW_ACTION_TYPE_END: actions_end = true; break; @@ -1461,6 +1508,7 @@ flow_hw_actions_construct(struct rte_eth_dev *dev, const struct rte_flow_action_raw_encap *raw_encap_data; const struct rte_flow_item *enc_item = NULL; const struct rte_flow_action_ethdev *port_action = NULL; + const struct rte_flow_action_meter *meter = NULL; uint8_t *buf = job->encap_data; struct rte_flow_attr attr = { .ingress = 1, @@ -1468,6 +1516,8 @@ flow_hw_actions_construct(struct rte_eth_dev *dev, uint32_t ft_flag; size_t encap_len = 0; int ret; + struct mlx5_aso_mtr *mtr; + uint32_t mtr_id; memcpy(rule_acts, hw_acts->rule_acts, sizeof(*rule_acts) * hw_acts->acts_num); @@ -1587,6 +1637,29 @@ flow_hw_actions_construct(struct rte_eth_dev *dev, rule_acts[act_data->action_dst].action = priv->hw_vport[port_action->port_id]; break; + case RTE_FLOW_ACTION_TYPE_METER: + meter = action->conf; + mtr_id = meter->mtr_id; + mtr = mlx5_aso_meter_by_idx(priv, mtr_id); + rule_acts[act_data->action_dst].action = + priv->mtr_bulk.action; + rule_acts[act_data->action_dst].aso_meter.offset = + mtr->offset; + jump = flow_hw_jump_action_register + (dev, &table->cfg, mtr->fm.group, NULL); + if (!jump) + return -1; + MLX5_ASSERT + (!rule_acts[act_data->action_dst + 1].action); + rule_acts[act_data->action_dst + 1].action = + (!!attr.group) ? jump->hws_action : + jump->root_action; + job->flow->jump = jump; + job->flow->fate_type = MLX5_FLOW_FATE_JUMP; + (*acts_num)++; + if (mlx5_aso_mtr_wait(priv->sh, mtr)) + return -1; + break; default: break; } @@ -2483,7 +2556,7 @@ flow_hw_action_meta_copy_insert(const struct rte_flow_action actions[], } static int -flow_hw_action_validate(struct rte_eth_dev *dev, +flow_hw_actions_validate(struct rte_eth_dev *dev, const struct rte_flow_actions_template_attr *attr, const struct rte_flow_action actions[], const struct rte_flow_action masks[], @@ -2549,6 +2622,9 @@ flow_hw_action_validate(struct rte_eth_dev *dev, case RTE_FLOW_ACTION_TYPE_RAW_DECAP: /* TODO: Validation logic */ break; + case RTE_FLOW_ACTION_TYPE_METER: + /* TODO: Validation logic */ + break; case RTE_FLOW_ACTION_TYPE_MODIFY_FIELD: ret = flow_hw_validate_action_modify_field(action, mask, @@ -2642,7 +2718,7 @@ flow_hw_actions_template_create(struct rte_eth_dev *dev, .conf = &rx_mreg_mask, }; - if (flow_hw_action_validate(dev, attr, actions, masks, error)) + if (flow_hw_actions_validate(dev, attr, actions, masks, error)) return NULL; if (priv->sh->config.dv_xmeta_en == MLX5_XMETA_MODE_META32_HWS && priv->sh->config.dv_esw_en) { @@ -2988,15 +3064,27 @@ flow_hw_pattern_template_destroy(struct rte_eth_dev *dev __rte_unused, * 0 on success, a negative errno value otherwise and rte_errno is set. 
*/ static int -flow_hw_info_get(struct rte_eth_dev *dev __rte_unused, - struct rte_flow_port_info *port_info __rte_unused, - struct rte_flow_queue_info *queue_info __rte_unused, +flow_hw_info_get(struct rte_eth_dev *dev, + struct rte_flow_port_info *port_info, + struct rte_flow_queue_info *queue_info, struct rte_flow_error *error __rte_unused) { - /* Nothing to be updated currently. */ + uint16_t port_id = dev->data->port_id; + struct rte_mtr_capabilities mtr_cap; + int ret; + memset(port_info, 0, sizeof(*port_info)); /* Queue size is unlimited from low-level. */ + port_info->max_nb_queues = UINT32_MAX; queue_info->max_size = UINT32_MAX; + + memset(&mtr_cap, 0, sizeof(struct rte_mtr_capabilities)); + ret = rte_mtr_capabilities_get(port_id, &mtr_cap, NULL); + if (!ret) { + port_info->max_nb_meters = mtr_cap.n_max; + port_info->max_nb_meter_profiles = UINT32_MAX; + port_info->max_nb_meter_policies = UINT32_MAX; + } return 0; } @@ -4191,6 +4279,13 @@ flow_hw_configure(struct rte_eth_dev *dev, priv->nb_queue = nb_q_updated; rte_spinlock_init(&priv->hw_ctrl_lock); LIST_INIT(&priv->hw_ctrl_flows); + /* Initialize meter library*/ + if (port_attr->nb_meters) + if (mlx5_flow_meter_init(dev, + port_attr->nb_meters, + port_attr->nb_meter_profiles, + port_attr->nb_meter_policies)) + goto err; /* Add global actions. */ for (i = 0; i < MLX5_HW_ACTION_FLAG_MAX; i++) { uint32_t act_flags = 0; @@ -4505,8 +4600,10 @@ flow_hw_action_handle_destroy(struct rte_eth_dev *dev, uint32_t queue, const struct mlx5_flow_driver_ops mlx5_flow_hw_drv_ops = { .info_get = flow_hw_info_get, .configure = flow_hw_configure, + .pattern_validate = flow_hw_pattern_validate, .pattern_template_create = flow_hw_pattern_template_create, .pattern_template_destroy = flow_hw_pattern_template_destroy, + .actions_validate = flow_hw_actions_validate, .actions_template_create = flow_hw_actions_template_create, .actions_template_destroy = flow_hw_actions_template_destroy, .template_table_create = flow_hw_template_table_create, @@ -4562,7 +4659,7 @@ flow_hw_create_ctrl_flow(struct rte_eth_dev *owner_dev, uint8_t action_template_idx) { struct mlx5_priv *priv = proxy_dev->data->dev_private; - uint32_t queue = priv->nb_queue - 1; + uint32_t queue = CTRL_QUEUE_ID(priv); struct rte_flow_op_attr op_attr = { .postpone = 0, }; @@ -4637,7 +4734,7 @@ static int flow_hw_destroy_ctrl_flow(struct rte_eth_dev *dev, struct rte_flow *flow) { struct mlx5_priv *priv = dev->data->dev_private; - uint32_t queue = priv->nb_queue - 1; + uint32_t queue = CTRL_QUEUE_ID(priv); struct rte_flow_op_attr op_attr = { .postpone = 0, }; diff --git a/drivers/net/mlx5/mlx5_flow_meter.c b/drivers/net/mlx5/mlx5_flow_meter.c index d4aafe4eea..b69021f6a0 100644 --- a/drivers/net/mlx5/mlx5_flow_meter.c +++ b/drivers/net/mlx5/mlx5_flow_meter.c @@ -18,6 +18,157 @@ static int mlx5_flow_meter_disable(struct rte_eth_dev *dev, uint32_t meter_id, struct rte_mtr_error *error); +static void +mlx5_flow_meter_uninit(struct rte_eth_dev *dev) +{ + struct mlx5_priv *priv = dev->data->dev_private; + + if (priv->mtr_policy_arr) { + mlx5_free(priv->mtr_policy_arr); + priv->mtr_policy_arr = NULL; + } + if (priv->mtr_profile_arr) { + mlx5_free(priv->mtr_profile_arr); + priv->mtr_profile_arr = NULL; + } + if (priv->mtr_bulk.aso) { + mlx5_free(priv->mtr_bulk.aso); + priv->mtr_bulk.aso = NULL; + priv->mtr_bulk.size = 0; + mlx5_aso_queue_uninit(priv->sh, ASO_OPC_MOD_POLICER); + } + if (priv->mtr_bulk.action) { + mlx5dr_action_destroy(priv->mtr_bulk.action); + priv->mtr_bulk.action = NULL; + } + if 
(priv->mtr_bulk.devx_obj) {
+		claim_zero(mlx5_devx_cmd_destroy(priv->mtr_bulk.devx_obj));
+		priv->mtr_bulk.devx_obj = NULL;
+	}
+}
+
+int
+mlx5_flow_meter_init(struct rte_eth_dev *dev,
+		     uint32_t nb_meters,
+		     uint32_t nb_meter_profiles,
+		     uint32_t nb_meter_policies)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	struct mlx5_devx_obj *dcs = NULL;
+	uint32_t log_obj_size;
+	int ret = 0;
+	int reg_id;
+	struct mlx5_aso_mtr *aso;
+	uint32_t i;
+	struct rte_mtr_error error;
+
+	if (!nb_meters || !nb_meter_profiles || !nb_meter_policies) {
+		ret = EINVAL;
+		rte_mtr_error_set(&error, EINVAL,
+				  RTE_MTR_ERROR_TYPE_UNSPECIFIED,
+				  NULL, "Meter configuration is invalid.");
+		goto err;
+	}
+	if (!priv->mtr_en || !priv->sh->meter_aso_en) {
+		ret = ENOTSUP;
+		rte_mtr_error_set(&error, ENOTSUP,
+				  RTE_MTR_ERROR_TYPE_UNSPECIFIED,
+				  NULL, "Meter ASO is not supported.");
+		goto err;
+	}
+	priv->mtr_config.nb_meters = nb_meters;
+	if (mlx5_aso_queue_init(priv->sh, ASO_OPC_MOD_POLICER)) {
+		ret = ENOMEM;
+		rte_mtr_error_set(&error, ENOMEM,
+				  RTE_MTR_ERROR_TYPE_UNSPECIFIED,
+				  NULL, "Meter ASO queue allocation failed.");
+		goto err;
+	}
+	log_obj_size = rte_log2_u32(nb_meters >> 1);
+	dcs = mlx5_devx_cmd_create_flow_meter_aso_obj
+		(priv->sh->cdev->ctx, priv->sh->cdev->pdn,
+		 log_obj_size);
+	if (!dcs) {
+		ret = ENOMEM;
+		rte_mtr_error_set(&error, ENOMEM,
+				  RTE_MTR_ERROR_TYPE_UNSPECIFIED,
+				  NULL, "Meter ASO object allocation failed.");
+		goto err;
+	}
+	priv->mtr_bulk.devx_obj = dcs;
+	reg_id = mlx5_flow_get_reg_id(dev, MLX5_MTR_COLOR, 0, NULL);
+	if (reg_id < 0) {
+		ret = ENOTSUP;
+		rte_mtr_error_set(&error, ENOTSUP,
+				  RTE_MTR_ERROR_TYPE_UNSPECIFIED,
+				  NULL, "Meter register is not available.");
+		goto err;
+	}
+	priv->mtr_bulk.action = mlx5dr_action_create_aso_meter
+			(priv->dr_ctx, (struct mlx5dr_devx_obj *)dcs,
+			 reg_id - REG_C_0, MLX5DR_ACTION_FLAG_HWS_RX |
+			 MLX5DR_ACTION_FLAG_HWS_TX |
+			 MLX5DR_ACTION_FLAG_HWS_FDB);
+	if (!priv->mtr_bulk.action) {
+		ret = ENOMEM;
+		rte_mtr_error_set(&error, ENOMEM,
+				  RTE_MTR_ERROR_TYPE_UNSPECIFIED,
+				  NULL, "Meter action creation failed.");
+		goto err;
+	}
+	priv->mtr_bulk.aso = mlx5_malloc(MLX5_MEM_ZERO,
+					 sizeof(struct mlx5_aso_mtr) *
+					 nb_meters,
+					 RTE_CACHE_LINE_SIZE,
+					 SOCKET_ID_ANY);
+	if (!priv->mtr_bulk.aso) {
+		ret = ENOMEM;
+		rte_mtr_error_set(&error, ENOMEM,
+				  RTE_MTR_ERROR_TYPE_UNSPECIFIED,
+				  NULL, "Meter bulk ASO allocation failed.");
+		goto err;
+	}
+	priv->mtr_bulk.size = nb_meters;
+	aso = priv->mtr_bulk.aso;
+	for (i = 0; i < priv->mtr_bulk.size; i++) {
+		aso->type = ASO_METER_DIRECT;
+		aso->state = ASO_METER_WAIT;
+		aso->offset = i;
+		aso++;
+	}
+	priv->mtr_config.nb_meter_profiles = nb_meter_profiles;
+	priv->mtr_profile_arr =
+		mlx5_malloc(MLX5_MEM_ZERO,
+			    sizeof(struct mlx5_flow_meter_profile) *
+			    nb_meter_profiles,
+			    RTE_CACHE_LINE_SIZE,
+			    SOCKET_ID_ANY);
+	if (!priv->mtr_profile_arr) {
+		ret = ENOMEM;
+		rte_mtr_error_set(&error, ENOMEM,
+				  RTE_MTR_ERROR_TYPE_UNSPECIFIED,
+				  NULL, "Meter profile allocation failed.");
+		goto err;
+	}
+	priv->mtr_config.nb_meter_policies = nb_meter_policies;
+	priv->mtr_policy_arr =
+		mlx5_malloc(MLX5_MEM_ZERO,
+			    sizeof(struct mlx5_flow_meter_policy) *
+			    nb_meter_policies,
+			    RTE_CACHE_LINE_SIZE,
+			    SOCKET_ID_ANY);
+	if (!priv->mtr_policy_arr) {
+		ret = ENOMEM;
+		rte_mtr_error_set(&error, ENOMEM,
+				  RTE_MTR_ERROR_TYPE_UNSPECIFIED,
+				  NULL, "Meter policy allocation failed.");
+		goto err;
+	}
+	return 0;
+err:
+	mlx5_flow_meter_uninit(dev);
+	return ret;
+}
+
 /**
  * Create the meter action.
* @@ -98,6 +249,8 @@ mlx5_flow_meter_profile_find(struct mlx5_priv *priv, uint32_t meter_profile_id) union mlx5_l3t_data data; int32_t ret; + if (priv->mtr_profile_arr) + return &priv->mtr_profile_arr[meter_profile_id]; if (mlx5_l3t_get_entry(priv->mtr_profile_tbl, meter_profile_id, &data) || !data.ptr) return NULL; @@ -145,17 +298,29 @@ mlx5_flow_meter_profile_validate(struct rte_eth_dev *dev, RTE_MTR_ERROR_TYPE_METER_PROFILE, NULL, "Meter profile is null."); /* Meter profile ID must be valid. */ - if (meter_profile_id == UINT32_MAX) - return -rte_mtr_error_set(error, EINVAL, - RTE_MTR_ERROR_TYPE_METER_PROFILE_ID, - NULL, "Meter profile id not valid."); - /* Meter profile must not exist. */ - fmp = mlx5_flow_meter_profile_find(priv, meter_profile_id); - if (fmp) - return -rte_mtr_error_set(error, EEXIST, - RTE_MTR_ERROR_TYPE_METER_PROFILE_ID, - NULL, - "Meter profile already exists."); + if (priv->mtr_profile_arr) { + if (meter_profile_id >= priv->mtr_config.nb_meter_profiles) + return -rte_mtr_error_set(error, EINVAL, + RTE_MTR_ERROR_TYPE_METER_PROFILE_ID, + NULL, "Meter profile id not valid."); + fmp = mlx5_flow_meter_profile_find(priv, meter_profile_id); + /* Meter profile must not exist. */ + if (fmp->initialized) + return -rte_mtr_error_set(error, EEXIST, + RTE_MTR_ERROR_TYPE_METER_PROFILE_ID, + NULL, "Meter profile already exists."); + } else { + if (meter_profile_id == UINT32_MAX) + return -rte_mtr_error_set(error, EINVAL, + RTE_MTR_ERROR_TYPE_METER_PROFILE_ID, + NULL, "Meter profile id not valid."); + fmp = mlx5_flow_meter_profile_find(priv, meter_profile_id); + /* Meter profile must not exist. */ + if (fmp) + return -rte_mtr_error_set(error, EEXIST, + RTE_MTR_ERROR_TYPE_METER_PROFILE_ID, + NULL, "Meter profile already exists."); + } if (!priv->sh->meter_aso_en) { /* Old version is even not supported. */ if (!priv->sh->cdev->config.hca_attr.qos.flow_meter_old) @@ -574,6 +739,96 @@ mlx5_flow_meter_profile_delete(struct rte_eth_dev *dev, return 0; } +/** + * Callback to add MTR profile with HWS. + * + * @param[in] dev + * Pointer to Ethernet device. + * @param[in] meter_profile_id + * Meter profile id. + * @param[in] profile + * Pointer to meter profile detail. + * @param[out] error + * Pointer to the error structure. + * + * @return + * 0 on success, a negative errno value otherwise and rte_errno is set. + */ +static int +mlx5_flow_meter_profile_hws_add(struct rte_eth_dev *dev, + uint32_t meter_profile_id, + struct rte_mtr_meter_profile *profile, + struct rte_mtr_error *error) +{ + struct mlx5_priv *priv = dev->data->dev_private; + struct mlx5_flow_meter_profile *fmp; + int ret; + + if (!priv->mtr_profile_arr) + return -rte_mtr_error_set(error, ENOTSUP, + RTE_MTR_ERROR_TYPE_UNSPECIFIED, + NULL, "Meter profile array is not allocated"); + /* Check input params. */ + ret = mlx5_flow_meter_profile_validate(dev, meter_profile_id, + profile, error); + if (ret) + return ret; + fmp = mlx5_flow_meter_profile_find(priv, meter_profile_id); + /* Fill profile info. */ + fmp->id = meter_profile_id; + fmp->profile = *profile; + fmp->initialized = 1; + /* Fill the flow meter parameters for the PRM. */ + return mlx5_flow_meter_param_fill(fmp, error); +} + +/** + * Callback to delete MTR profile with HWS. + * + * @param[in] dev + * Pointer to Ethernet device. + * @param[in] meter_profile_id + * Meter profile id. + * @param[out] error + * Pointer to the error structure. + * + * @return + * 0 on success, a negative errno value otherwise and rte_errno is set. 
+ */ +static int +mlx5_flow_meter_profile_hws_delete(struct rte_eth_dev *dev, + uint32_t meter_profile_id, + struct rte_mtr_error *error) +{ + struct mlx5_priv *priv = dev->data->dev_private; + struct mlx5_flow_meter_profile *fmp; + + if (!priv->mtr_profile_arr) + return -rte_mtr_error_set(error, ENOTSUP, + RTE_MTR_ERROR_TYPE_UNSPECIFIED, + NULL, "Meter profile array is not allocated"); + /* Meter id must be valid. */ + if (meter_profile_id >= priv->mtr_config.nb_meter_profiles) + return -rte_mtr_error_set(error, EINVAL, + RTE_MTR_ERROR_TYPE_METER_PROFILE_ID, + &meter_profile_id, + "Meter profile id not valid."); + /* Meter profile must exist. */ + fmp = mlx5_flow_meter_profile_find(priv, meter_profile_id); + if (!fmp->initialized) + return -rte_mtr_error_set(error, ENOENT, + RTE_MTR_ERROR_TYPE_METER_PROFILE_ID, + &meter_profile_id, + "Meter profile id is invalid."); + /* Check profile is unused. */ + if (fmp->ref_cnt) + return -rte_mtr_error_set(error, EBUSY, + RTE_MTR_ERROR_TYPE_METER_PROFILE_ID, + NULL, "Meter profile is in use."); + memset(fmp, 0, sizeof(struct mlx5_flow_meter_profile)); + return 0; +} + /** * Find policy by id. * @@ -594,6 +849,11 @@ mlx5_flow_meter_policy_find(struct rte_eth_dev *dev, struct mlx5_flow_meter_sub_policy *sub_policy = NULL; union mlx5_l3t_data data; + if (priv->mtr_policy_arr) { + if (policy_idx) + *policy_idx = policy_id; + return &priv->mtr_policy_arr[policy_id]; + } if (policy_id > MLX5_MAX_SUB_POLICY_TBL_NUM || !priv->policy_idx_tbl) return NULL; if (mlx5_l3t_get_entry(priv->policy_idx_tbl, policy_id, &data) || @@ -710,6 +970,43 @@ mlx5_flow_meter_policy_validate(struct rte_eth_dev *dev, return 0; } +/** + * Callback to check MTR policy action validate for HWS + * + * @param[in] dev + * Pointer to Ethernet device. + * @param[in] actions + * Pointer to meter policy action detail. + * @param[out] error + * Pointer to the error structure. + * + * @return + * 0 on success, a negative errno value otherwise and rte_errno is set. + */ +static int +mlx5_flow_meter_policy_hws_validate(struct rte_eth_dev *dev, + struct rte_mtr_meter_policy_params *policy, + struct rte_mtr_error *error) +{ + struct mlx5_priv *priv = dev->data->dev_private; + const struct rte_flow_actions_template_attr attr = { + .transfer = priv->sh->config.dv_esw_en ? 1 : 0 }; + int ret; + int i; + + if (!priv->mtr_en || !priv->sh->meter_aso_en) + return -rte_mtr_error_set(error, ENOTSUP, + RTE_MTR_ERROR_TYPE_METER_POLICY, + NULL, "meter policy unsupported."); + for (i = 0; i < RTE_COLORS; i++) { + ret = mlx5_flow_actions_validate(dev, &attr, policy->actions[i], + policy->actions[i], NULL); + if (ret) + return ret; + } + return 0; +} + static int __mlx5_flow_meter_policy_delete(struct rte_eth_dev *dev, uint32_t policy_id, @@ -1004,6 +1301,338 @@ mlx5_flow_meter_policy_delete(struct rte_eth_dev *dev, return 0; } +/** + * Callback to delete MTR policy for HWS. + * + * @param[in] dev + * Pointer to Ethernet device. + * @param[in] policy_id + * Meter policy id. + * @param[out] error + * Pointer to the error structure. + * + * @return + * 0 on success, a negative errno value otherwise and rte_errno is set. 
+ */
+static int
+mlx5_flow_meter_policy_hws_delete(struct rte_eth_dev *dev,
+			uint32_t policy_id,
+			struct rte_mtr_error *error)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	struct mlx5_flow_meter_policy *mtr_policy;
+	uint32_t i, j;
+	uint32_t nb_flows = 0;
+	int ret;
+	struct rte_flow_op_attr op_attr = { .postpone = 1 };
+	struct rte_flow_op_result result[RTE_COLORS * MLX5_MTR_DOMAIN_MAX];
+
+	if (!priv->mtr_policy_arr)
+		return -rte_mtr_error_set(error, ENOTSUP,
+					  RTE_MTR_ERROR_TYPE_UNSPECIFIED,
+					  NULL, "Meter policy array is not allocated.");
+	/* Meter policy id must be valid. */
+	if (policy_id >= priv->mtr_config.nb_meter_policies)
+		return -rte_mtr_error_set(error, EINVAL,
+					  RTE_MTR_ERROR_TYPE_METER_POLICY_ID,
+					  &policy_id,
+					  "Meter policy id not valid.");
+	/* Meter policy must exist. */
+	mtr_policy = mlx5_flow_meter_policy_find(dev, policy_id, NULL);
+	if (!mtr_policy->initialized)
+		return -rte_mtr_error_set(error, ENOENT,
+					  RTE_MTR_ERROR_TYPE_METER_POLICY_ID, NULL,
+					  "Meter policy does not exist.");
+	/* Check policy is unused. */
+	if (mtr_policy->ref_cnt)
+		return -rte_mtr_error_set(error, EBUSY,
+					  RTE_MTR_ERROR_TYPE_METER_POLICY_ID,
+					  NULL, "Meter policy is in use.");
+	rte_spinlock_lock(&priv->hw_ctrl_lock);
+	for (i = 0; i < MLX5_MTR_DOMAIN_MAX; i++) {
+		for (j = 0; j < RTE_COLORS; j++) {
+			if (mtr_policy->hws_flow_rule[i][j]) {
+				ret = rte_flow_async_destroy(dev->data->port_id,
+					CTRL_QUEUE_ID(priv), &op_attr,
+					mtr_policy->hws_flow_rule[i][j],
+					NULL, NULL);
+				if (ret < 0)
+					continue;
+				nb_flows++;
+			}
+		}
+	}
+	ret = rte_flow_push(dev->data->port_id, CTRL_QUEUE_ID(priv), NULL);
+	while (nb_flows && (ret >= 0)) {
+		ret = rte_flow_pull(dev->data->port_id,
+				    CTRL_QUEUE_ID(priv), result,
+				    nb_flows, NULL);
+		nb_flows -= ret;
+	}
+	for (i = 0; i < MLX5_MTR_DOMAIN_MAX; i++) {
+		if (mtr_policy->hws_flow_table[i])
+			rte_flow_template_table_destroy(dev->data->port_id,
+				mtr_policy->hws_flow_table[i], NULL);
+	}
+	for (i = 0; i < RTE_COLORS; i++) {
+		if (mtr_policy->hws_act_templ[i])
+			rte_flow_actions_template_destroy(dev->data->port_id,
+				mtr_policy->hws_act_templ[i], NULL);
+	}
+	if (mtr_policy->hws_item_templ)
+		rte_flow_pattern_template_destroy(dev->data->port_id,
+						  mtr_policy->hws_item_templ, NULL);
+	rte_spinlock_unlock(&priv->hw_ctrl_lock);
+	memset(mtr_policy, 0, sizeof(struct mlx5_flow_meter_policy));
+	return 0;
+}
+
+/**
+ * Callback to add MTR policy for HWS.
+ *
+ * @param[in] dev
+ *   Pointer to Ethernet device.
+ * @param[in] policy_id
+ *   Meter policy id.
+ * @param[in] policy
+ *   Pointer to meter policy action detail.
+ * @param[out] error
+ *   Pointer to the error structure.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+static int
+mlx5_flow_meter_policy_hws_add(struct rte_eth_dev *dev,
+			uint32_t policy_id,
+			struct rte_mtr_meter_policy_params *policy,
+			struct rte_mtr_error *error)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	struct mlx5_flow_meter_policy *mtr_policy = NULL;
+	const struct rte_flow_action *act;
+	const struct rte_flow_action_meter *mtr;
+	struct mlx5_flow_meter_info *fm;
+	struct mlx5_flow_meter_policy *plc;
+	uint8_t domain_color = MLX5_MTR_ALL_DOMAIN_BIT;
+	bool is_rss = false;
+	bool is_hierarchy = false;
+	int i, j;
+	uint32_t nb_colors = 0;
+	uint32_t nb_flows = 0;
+	int color;
+	int ret;
+	struct rte_flow_pattern_template_attr pta = {0};
+	struct rte_flow_actions_template_attr ata = {0};
+	struct rte_flow_template_table_attr ta = { {0}, 0 };
+	struct rte_flow_op_attr op_attr = { .postpone = 1 };
+	struct rte_flow_op_result result[RTE_COLORS * MLX5_MTR_DOMAIN_MAX];
+	const uint32_t color_mask = (UINT32_C(1) << MLX5_MTR_COLOR_BITS) - 1;
+	int color_reg_c_idx = mlx5_flow_get_reg_id(dev, MLX5_MTR_COLOR,
+						   0, NULL);
+	struct rte_flow_item_tag tag_spec = {
+		.data = 0,
+		.index = color_reg_c_idx
+	};
+	struct rte_flow_item_tag tag_mask = {
+		.data = color_mask,
+		.index = 0xff};
+	struct rte_flow_item pattern[] = {
+		[0] = {
+			.type = (enum rte_flow_item_type)
+				MLX5_RTE_FLOW_ITEM_TYPE_TAG,
+			.spec = &tag_spec,
+			.mask = &tag_mask,
+		},
+		[1] = { .type = RTE_FLOW_ITEM_TYPE_END }
+	};
+
+	if (!priv->mtr_policy_arr)
+		return -rte_mtr_error_set(error, ENOTSUP,
+					  RTE_MTR_ERROR_TYPE_METER_POLICY,
+					  NULL, "Meter policy array is not allocated.");
+	if (policy_id >= priv->mtr_config.nb_meter_policies)
+		return -rte_mtr_error_set(error, ENOTSUP,
+					  RTE_MTR_ERROR_TYPE_METER_POLICY_ID,
+					  NULL, "Meter policy id not valid.");
+	mtr_policy = mlx5_flow_meter_policy_find(dev, policy_id, NULL);
+	if (mtr_policy->initialized)
+		return -rte_mtr_error_set(error, EEXIST,
+					  RTE_MTR_ERROR_TYPE_METER_POLICY_ID,
+					  NULL, "Meter policy already exists.");
+	if (!policy ||
+	    !policy->actions[RTE_COLOR_RED] ||
+	    !policy->actions[RTE_COLOR_YELLOW] ||
+	    !policy->actions[RTE_COLOR_GREEN])
+		return -rte_mtr_error_set(error, EINVAL,
+					  RTE_MTR_ERROR_TYPE_METER_POLICY,
+					  NULL, "Meter policy actions are not valid.");
+	if (policy->actions[RTE_COLOR_RED]->type == RTE_FLOW_ACTION_TYPE_END)
+		mtr_policy->skip_r = 1;
+	if (policy->actions[RTE_COLOR_YELLOW]->type == RTE_FLOW_ACTION_TYPE_END)
+		mtr_policy->skip_y = 1;
+	if (policy->actions[RTE_COLOR_GREEN]->type == RTE_FLOW_ACTION_TYPE_END)
+		mtr_policy->skip_g = 1;
+	if (mtr_policy->skip_r && mtr_policy->skip_y && mtr_policy->skip_g)
+		return -rte_mtr_error_set(error, ENOTSUP,
+					  RTE_MTR_ERROR_TYPE_METER_POLICY_ID,
+					  NULL, "Meter policy actions are empty.");
+	for (i = 0; i < RTE_COLORS; i++) {
+		act = policy->actions[i];
+		while (act && act->type != RTE_FLOW_ACTION_TYPE_END) {
+			switch (act->type) {
+			case RTE_FLOW_ACTION_TYPE_PORT_ID:
+				/* fall-through. */
+			case RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT:
+				domain_color &= ~(MLX5_MTR_DOMAIN_INGRESS_BIT |
+						  MLX5_MTR_DOMAIN_EGRESS_BIT);
+				break;
+			case RTE_FLOW_ACTION_TYPE_RSS:
+				is_rss = true;
+				/* fall-through. */
+			case RTE_FLOW_ACTION_TYPE_QUEUE:
+				domain_color &= ~(MLX5_MTR_DOMAIN_EGRESS_BIT |
+						  MLX5_MTR_DOMAIN_TRANSFER_BIT);
+				break;
+			case RTE_FLOW_ACTION_TYPE_METER:
+				is_hierarchy = true;
+				mtr = act->conf;
+				fm = mlx5_flow_meter_find(priv,
+							  mtr->mtr_id, NULL);
+				if (!fm)
+					return -rte_mtr_error_set(error, EINVAL,
+						RTE_MTR_ERROR_TYPE_MTR_ID, NULL,
+						"Meter not found in meter hierarchy.");
+				plc = mlx5_flow_meter_policy_find(dev,
+								  fm->policy_id,
+								  NULL);
+				MLX5_ASSERT(plc);
+				domain_color &= MLX5_MTR_ALL_DOMAIN_BIT &
+					((plc->ingress <<
+					  MLX5_MTR_DOMAIN_INGRESS) |
+					 (plc->egress <<
+					  MLX5_MTR_DOMAIN_EGRESS) |
+					 (plc->transfer <<
+					  MLX5_MTR_DOMAIN_TRANSFER));
+				break;
+			default:
+				break;
+			}
+			act++;
+		}
+	}
+	if (!domain_color)
+		return -rte_mtr_error_set(error, ENOTSUP,
+					  RTE_MTR_ERROR_TYPE_METER_POLICY_ID,
+					  NULL, "Meter policy domains are conflicting.");
+	mtr_policy->is_rss = is_rss;
+	mtr_policy->ingress = !!(domain_color & MLX5_MTR_DOMAIN_INGRESS_BIT);
+	pta.ingress = mtr_policy->ingress;
+	mtr_policy->egress = !!(domain_color & MLX5_MTR_DOMAIN_EGRESS_BIT);
+	pta.egress = mtr_policy->egress;
+	mtr_policy->transfer = !!(domain_color & MLX5_MTR_DOMAIN_TRANSFER_BIT);
+	pta.transfer = mtr_policy->transfer;
+	mtr_policy->group = MLX5_FLOW_TABLE_HWS_POLICY - policy_id;
+	mtr_policy->is_hierarchy = is_hierarchy;
+	mtr_policy->initialized = 1;
+	rte_spinlock_lock(&priv->hw_ctrl_lock);
+	mtr_policy->hws_item_templ =
+		rte_flow_pattern_template_create(dev->data->port_id,
+						 &pta, pattern, NULL);
+	if (!mtr_policy->hws_item_templ)
+		goto policy_add_err;
+	for (i = 0; i < RTE_COLORS; i++) {
+		if (mtr_policy->skip_g && i == RTE_COLOR_GREEN)
+			continue;
+		if (mtr_policy->skip_y && i == RTE_COLOR_YELLOW)
+			continue;
+		if (mtr_policy->skip_r && i == RTE_COLOR_RED)
+			continue;
+		mtr_policy->hws_act_templ[nb_colors] =
+			rte_flow_actions_template_create(dev->data->port_id,
+					&ata, policy->actions[i],
+					policy->actions[i], NULL);
+		if (!mtr_policy->hws_act_templ[nb_colors])
+			goto policy_add_err;
+		nb_colors++;
+	}
+	for (i = 0; i < MLX5_MTR_DOMAIN_MAX; i++) {
+		memset(&ta, 0, sizeof(ta));
+		ta.nb_flows = RTE_COLORS;
+		ta.flow_attr.group = mtr_policy->group;
+		if (i == MLX5_MTR_DOMAIN_INGRESS) {
+			if (!mtr_policy->ingress)
+				continue;
+			ta.flow_attr.ingress = 1;
+		} else if (i == MLX5_MTR_DOMAIN_EGRESS) {
+			if (!mtr_policy->egress)
+				continue;
+			ta.flow_attr.egress = 1;
+		} else if (i == MLX5_MTR_DOMAIN_TRANSFER) {
+			if (!mtr_policy->transfer)
+				continue;
+			ta.flow_attr.transfer = 1;
+		}
+		mtr_policy->hws_flow_table[i] =
+			rte_flow_template_table_create(dev->data->port_id,
+					&ta, &mtr_policy->hws_item_templ, 1,
+					mtr_policy->hws_act_templ, nb_colors,
+					NULL);
+		if (!mtr_policy->hws_flow_table[i])
+			goto policy_add_err;
+		nb_colors = 0;
+		for (j = 0; j < RTE_COLORS; j++) {
+			if (mtr_policy->skip_g && j == RTE_COLOR_GREEN)
+				continue;
+			if (mtr_policy->skip_y && j == RTE_COLOR_YELLOW)
+				continue;
+			if (mtr_policy->skip_r && j == RTE_COLOR_RED)
+				continue;
+			color = rte_col_2_mlx5_col((enum rte_color)j);
+			tag_spec.data = color;
+			mtr_policy->hws_flow_rule[i][j] =
+				rte_flow_async_create(dev->data->port_id,
+					CTRL_QUEUE_ID(priv), &op_attr,
+					mtr_policy->hws_flow_table[i],
+					pattern, 0, policy->actions[j],
+					nb_colors, NULL, NULL);
+			if (!mtr_policy->hws_flow_rule[i][j])
+				goto policy_add_err;
+			nb_colors++;
+			nb_flows++;
+		}
+		ret = rte_flow_push(dev->data->port_id,
+				    CTRL_QUEUE_ID(priv), NULL);
+		if (ret < 0)
+			goto
policy_add_err; + while (nb_flows) { + ret = rte_flow_pull(dev->data->port_id, + CTRL_QUEUE_ID(priv), result, + nb_flows, NULL); + if (ret < 0) + goto policy_add_err; + for (j = 0; j < ret; j++) { + if (result[j].status == RTE_FLOW_OP_ERROR) + goto policy_add_err; + } + nb_flows -= ret; + } + } + rte_spinlock_unlock(&priv->hw_ctrl_lock); + return 0; +policy_add_err: + rte_spinlock_unlock(&priv->hw_ctrl_lock); + ret = mlx5_flow_meter_policy_hws_delete(dev, policy_id, error); + memset(mtr_policy, 0, sizeof(struct mlx5_flow_meter_policy)); + if (ret) + return ret; + return -rte_mtr_error_set(error, ENOTSUP, + RTE_MTR_ERROR_TYPE_UNSPECIFIED, + NULL, "Failed to create meter policy."); +} + /** * Check meter validation. * @@ -1087,7 +1716,8 @@ mlx5_flow_meter_action_modify(struct mlx5_priv *priv, if (priv->sh->meter_aso_en) { fm->is_enable = !!is_enable; aso_mtr = container_of(fm, struct mlx5_aso_mtr, fm); - ret = mlx5_aso_meter_update_by_wqe(priv->sh, aso_mtr); + ret = mlx5_aso_meter_update_by_wqe(priv->sh, aso_mtr, + &priv->mtr_bulk); if (ret) return ret; ret = mlx5_aso_mtr_wait(priv->sh, aso_mtr); @@ -1336,7 +1966,8 @@ mlx5_flow_meter_create(struct rte_eth_dev *dev, uint32_t meter_id, /* If ASO meter supported, update ASO flow meter by wqe. */ if (priv->sh->meter_aso_en) { aso_mtr = container_of(fm, struct mlx5_aso_mtr, fm); - ret = mlx5_aso_meter_update_by_wqe(priv->sh, aso_mtr); + ret = mlx5_aso_meter_update_by_wqe(priv->sh, aso_mtr, + &priv->mtr_bulk); if (ret) goto error; if (!priv->mtr_idx_tbl) { @@ -1369,6 +2000,90 @@ mlx5_flow_meter_create(struct rte_eth_dev *dev, uint32_t meter_id, NULL, "Failed to create devx meter."); } +/** + * Create meter rules. + * + * @param[in] dev + * Pointer to Ethernet device. + * @param[in] meter_id + * Meter id. + * @param[in] params + * Pointer to rte meter parameters. + * @param[in] shared + * Meter shared with other flow or not. + * @param[out] error + * Pointer to rte meter error structure. + * + * @return + * 0 on success, a negative errno value otherwise and rte_errno is set. + */ +static int +mlx5_flow_meter_hws_create(struct rte_eth_dev *dev, uint32_t meter_id, + struct rte_mtr_params *params, int shared, + struct rte_mtr_error *error) +{ + struct mlx5_priv *priv = dev->data->dev_private; + struct mlx5_flow_meter_profile *profile; + struct mlx5_flow_meter_info *fm; + struct mlx5_flow_meter_policy *policy = NULL; + struct mlx5_aso_mtr *aso_mtr; + int ret; + + if (!priv->mtr_profile_arr || + !priv->mtr_policy_arr || + !priv->mtr_bulk.aso) + return -rte_mtr_error_set(error, ENOTSUP, + RTE_MTR_ERROR_TYPE_UNSPECIFIED, NULL, + "Meter bulk array is not allocated."); + /* Meter profile must exist. */ + profile = mlx5_flow_meter_profile_find(priv, params->meter_profile_id); + if (!profile->initialized) + return -rte_mtr_error_set(error, ENOENT, + RTE_MTR_ERROR_TYPE_METER_PROFILE_ID, + NULL, "Meter profile id not valid."); + /* Meter policy must exist. */ + policy = mlx5_flow_meter_policy_find(dev, + params->meter_policy_id, NULL); + if (!policy->initialized) + return -rte_mtr_error_set(error, ENOENT, + RTE_MTR_ERROR_TYPE_METER_POLICY_ID, + NULL, "Meter policy id not valid."); + /* Meter ID must be valid. */ + if (meter_id >= priv->mtr_config.nb_meters) + return -rte_mtr_error_set(error, EINVAL, + RTE_MTR_ERROR_TYPE_MTR_ID, + NULL, "Meter id not valid."); + /* Find ASO object. 
*/ + aso_mtr = mlx5_aso_meter_by_idx(priv, meter_id); + fm = &aso_mtr->fm; + if (fm->initialized) + return -rte_mtr_error_set(error, ENOENT, + RTE_MTR_ERROR_TYPE_MTR_ID, + NULL, "Meter object already exists."); + /* Fill the flow meter parameters. */ + fm->meter_id = meter_id; + fm->policy_id = params->meter_policy_id; + fm->profile = profile; + fm->meter_offset = meter_id; + fm->group = policy->group; + /* Add to the flow meter list. */ + fm->active_state = 1; /* Config meter starts as active. */ + fm->is_enable = params->meter_enable; + fm->shared = !!shared; + fm->initialized = 1; + /* Update ASO flow meter by wqe. */ + ret = mlx5_aso_meter_update_by_wqe(priv->sh, aso_mtr, + &priv->mtr_bulk); + if (ret) + return -rte_mtr_error_set(error, ENOTSUP, + RTE_MTR_ERROR_TYPE_UNSPECIFIED, + NULL, "Failed to create devx meter."); + fm->active_state = params->meter_enable; + __atomic_add_fetch(&fm->profile->ref_cnt, 1, __ATOMIC_RELAXED); + __atomic_add_fetch(&policy->ref_cnt, 1, __ATOMIC_RELAXED); + return 0; +} + static int mlx5_flow_meter_params_flush(struct rte_eth_dev *dev, struct mlx5_flow_meter_info *fm, @@ -1475,6 +2190,58 @@ mlx5_flow_meter_destroy(struct rte_eth_dev *dev, uint32_t meter_id, return 0; } +/** + * Destroy meter rules. + * + * @param[in] dev + * Pointer to Ethernet device. + * @param[in] meter_id + * Meter id. + * @param[out] error + * Pointer to rte meter error structure. + * + * @return + * 0 on success, a negative errno value otherwise and rte_errno is set. + */ +static int +mlx5_flow_meter_hws_destroy(struct rte_eth_dev *dev, uint32_t meter_id, + struct rte_mtr_error *error) +{ + struct mlx5_priv *priv = dev->data->dev_private; + struct mlx5_aso_mtr *aso_mtr; + struct mlx5_flow_meter_info *fm; + struct mlx5_flow_meter_policy *policy; + + if (!priv->mtr_profile_arr || + !priv->mtr_policy_arr || + !priv->mtr_bulk.aso) + return -rte_mtr_error_set(error, ENOTSUP, + RTE_MTR_ERROR_TYPE_METER_POLICY, NULL, + "Meter bulk array is not allocated."); + /* Find ASO object. */ + aso_mtr = mlx5_aso_meter_by_idx(priv, meter_id); + fm = &aso_mtr->fm; + if (!fm->initialized) + return -rte_mtr_error_set(error, ENOENT, + RTE_MTR_ERROR_TYPE_MTR_ID, + NULL, "Meter object id not valid."); + /* Meter object must not have any owner. */ + if (fm->ref_cnt > 0) + return -rte_mtr_error_set(error, EBUSY, + RTE_MTR_ERROR_TYPE_UNSPECIFIED, + NULL, "Meter object is being used."); + /* Destroy the meter profile. */ + __atomic_sub_fetch(&fm->profile->ref_cnt, + 1, __ATOMIC_RELAXED); + /* Destroy the meter policy. */ + policy = mlx5_flow_meter_policy_find(dev, + fm->policy_id, NULL); + __atomic_sub_fetch(&policy->ref_cnt, + 1, __ATOMIC_RELAXED); + memset(fm, 0, sizeof(struct mlx5_flow_meter_info)); + return 0; +} + /** * Modify meter state. 
 *
@@ -1798,6 +2565,23 @@ static const struct rte_mtr_ops mlx5_flow_mtr_ops = {
 	.stats_read = mlx5_flow_meter_stats_read,
 };
 
+static const struct rte_mtr_ops mlx5_flow_mtr_hws_ops = {
+	.capabilities_get = mlx5_flow_mtr_cap_get,
+	.meter_profile_add = mlx5_flow_meter_profile_hws_add,
+	.meter_profile_delete = mlx5_flow_meter_profile_hws_delete,
+	.meter_policy_validate = mlx5_flow_meter_policy_hws_validate,
+	.meter_policy_add = mlx5_flow_meter_policy_hws_add,
+	.meter_policy_delete = mlx5_flow_meter_policy_hws_delete,
+	.create = mlx5_flow_meter_hws_create,
+	.destroy = mlx5_flow_meter_hws_destroy,
+	.meter_enable = mlx5_flow_meter_enable,
+	.meter_disable = mlx5_flow_meter_disable,
+	.meter_profile_update = mlx5_flow_meter_profile_update,
+	.meter_dscp_table_update = NULL,
+	.stats_update = NULL,
+	.stats_read = NULL,
+};
+
 /**
  * Get meter operations.
  *
@@ -1812,7 +2596,12 @@ static const struct rte_mtr_ops mlx5_flow_mtr_ops = {
 int
 mlx5_flow_meter_ops_get(struct rte_eth_dev *dev __rte_unused, void *arg)
 {
-	*(const struct rte_mtr_ops **)arg = &mlx5_flow_mtr_ops;
+	struct mlx5_priv *priv = dev->data->dev_private;
+
+	if (priv->sh->config.dv_flow_en == 2)
+		*(const struct rte_mtr_ops **)arg = &mlx5_flow_mtr_hws_ops;
+	else
+		*(const struct rte_mtr_ops **)arg = &mlx5_flow_mtr_ops;
 	return 0;
 }
 
@@ -1841,6 +2630,12 @@ mlx5_flow_meter_find(struct mlx5_priv *priv, uint32_t meter_id,
 	union mlx5_l3t_data data;
 	uint16_t n_valid;
 
+	if (priv->mtr_bulk.aso) {
+		if (mtr_idx)
+			*mtr_idx = meter_id;
+		aso_mtr = priv->mtr_bulk.aso + meter_id;
+		return &aso_mtr->fm;
+	}
 	if (priv->sh->meter_aso_en) {
 		rte_rwlock_read_lock(&pools_mng->resize_mtrwl);
 		n_valid = pools_mng->n_valid;
@@ -2185,6 +2980,7 @@ mlx5_flow_meter_flush(struct rte_eth_dev *dev, struct rte_mtr_error *error)
 	struct mlx5_flow_meter_profile *fmp;
 	struct mlx5_legacy_flow_meter *legacy_fm;
 	struct mlx5_flow_meter_info *fm;
+	struct mlx5_flow_meter_policy *policy;
 	struct mlx5_flow_meter_sub_policy *sub_policy;
 	void *tmp;
 	uint32_t i, mtr_idx, policy_idx;
@@ -2219,6 +3015,14 @@ mlx5_flow_meter_flush(struct rte_eth_dev *dev, struct rte_mtr_error *error)
 					NULL, "MTR object meter profile invalid.");
 		}
 	}
+	if (priv->mtr_bulk.aso) {
+		for (i = 1; i <= priv->mtr_config.nb_meters; i++) {
+			aso_mtr = mlx5_aso_meter_by_idx(priv, i);
+			fm = &aso_mtr->fm;
+			if (fm->initialized)
+				mlx5_flow_meter_hws_destroy(dev, i, error);
+		}
+	}
 	if (priv->policy_idx_tbl) {
 		MLX5_L3T_FOREACH(priv->policy_idx_tbl, i, entry) {
 			policy_idx = *(uint32_t *)entry;
@@ -2244,6 +3048,15 @@ mlx5_flow_meter_flush(struct rte_eth_dev *dev, struct rte_mtr_error *error)
 		mlx5_l3t_destroy(priv->policy_idx_tbl);
 		priv->policy_idx_tbl = NULL;
 	}
+	if (priv->mtr_policy_arr) {
+		for (i = 0; i < priv->mtr_config.nb_meter_policies; i++) {
+			policy = mlx5_flow_meter_policy_find(dev, i,
+							     &policy_idx);
+			if (policy->initialized)
+				mlx5_flow_meter_policy_hws_delete(dev, i,
+								  error);
+		}
+	}
 	if (priv->mtr_profile_tbl) {
 		MLX5_L3T_FOREACH(priv->mtr_profile_tbl, i, entry) {
 			fmp = entry;
@@ -2257,9 +3070,19 @@ mlx5_flow_meter_flush(struct rte_eth_dev *dev, struct rte_mtr_error *error)
 		mlx5_l3t_destroy(priv->mtr_profile_tbl);
 		priv->mtr_profile_tbl = NULL;
 	}
+	if (priv->mtr_profile_arr) {
+		for (i = 0; i < priv->mtr_config.nb_meter_profiles; i++) {
+			fmp = mlx5_flow_meter_profile_find(priv, i);
+			if (fmp->initialized)
+				mlx5_flow_meter_profile_hws_delete(dev, i,
+								   error);
+		}
+	}
 	/* Delete default policy table. */
 	mlx5_flow_destroy_def_policy(dev);
 	if (priv->sh->refcnt == 1)
 		mlx5_flow_destroy_mtr_drop_tbls(dev);
+	/* Destroy HWS configuration. */
+	mlx5_flow_meter_uninit(dev);
 	return 0;
 }

From patchwork Fri Sep 23 14:43:18 2022
X-Patchwork-Submitter: Suanming Mou
X-Patchwork-Id: 116750
X-Patchwork-Delegate: thomas@monjalon.net
From: Suanming Mou
To: Matan Azrad, Viacheslav Ovsiienko, Ray Kinsella
CC: Xiaoyu Min
Subject: [PATCH 11/27] net/mlx5: add HW steering counter action
Date: Fri, 23 Sep 2022 17:43:18 +0300
Message-ID: <20220923144334.27736-12-suanmingm@nvidia.com>
X-Mailer: git-send-email 2.18.1
In-Reply-To: <20220923144334.27736-1-suanmingm@nvidia.com>
References: <20220923144334.27736-1-suanmingm@nvidia.com>
From: Xiaoyu Min

This commit adds HW steering counter action support.

The pool mechanism is the basic data structure for the HW steering
counters. The counter pool is based on the zero-copy variant of
rte_ring. There are two global rte_rings:

1. free_list:
   Stores the counter indexes which are ready for use.
2. wait_reset_list:
   Stores the counter indexes which were just freed by the user and
   need a hardware counter query to collect the reset value before
   the counter can be reused.

The counter pool also supports a cache per HW steering queue, again
based on the zero-copy variant of rte_ring. The cache size, preload,
threshold, and fetch size are all configurable via device args.

The main operations of the counter pool are as follows:

- Get one counter from the pool:
  1. The user calls the _get_* API.
  2. If the cache is enabled, dequeue one counter index from the local
     cache:
     2.A: If the dequeued one is still in reset status (the counter's
          query_gen_when_free is equal to the pool's query gen):
       I.   Flush all counters in the local cache back to the global
            wait_reset_list.
       II.  Fetch _fetch_sz_ counters into the cache from the global
            free list.
       III. Fetch one counter from the cache.
  3. If the cache is empty, fetch _fetch_sz_ counters from the global
     free list into the cache and fetch one counter from the cache.

- Free one counter into the pool:
  1. The user calls the _put_* API.
  2. Put the counter into the local cache.
  3. If the local cache is full:
     3.A: Write back all counters above _threshold_ into the global
          wait_reset_list.
     3.B: Also write back this counter into the global wait_reset_list.

When the local cache is disabled, the _get_/_put_ operations work
directly on the global lists.

Signed-off-by: Xiaoyu Min

net/mlx5: move ASO query counter into ASO file

Move the ASO counter query logic into the dedicated ASO file. The
function name is changed accordingly. Also use the max SQ size for the
ASO counter query.
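To make the get/put flow above concrete, here is a minimal C sketch of the
two paths. All names (hws_cnt_pool, hws_cnt_pool_get, hws_cnt_pool_put,
query_gen_when_free and so on) are hypothetical stand-ins rather than the
driver's real identifiers, and the plain copying rte_ring element API is used
for brevity; the actual patch implements this in mlx5_hws_cnt.c/.h on top of
the zero-copy rte_ring variants.

#include <errno.h>
#include <stdint.h>
#include <rte_ring.h>
#include <rte_ring_elem.h>

typedef uint32_t cnt_id_t;

struct hws_cnt_pool {
	struct rte_ring *free_list;       /* indexes ready for (re)use */
	struct rte_ring *wait_reset_list; /* freed, reset query pending */
	uint32_t *query_gen_when_free;    /* per-counter stamp, set on put */
	uint32_t query_gen;               /* advanced by the query service */
	uint32_t fetch_sz;                /* refill burst size */
	uint32_t threshold;               /* cache write-back watermark */
};

/* Get one counter index through a per-queue cache ring. */
static inline int
hws_cnt_pool_get(struct hws_cnt_pool *p, struct rte_ring *cache, cnt_id_t *id)
{
	cnt_id_t buf[64]; /* assumes fetch_sz <= 64 for this sketch */
	unsigned int n;

	if (rte_ring_dequeue_elem(cache, id, sizeof(*id)) == 0) {
		if (p->query_gen_when_free[*id] != p->query_gen)
			return 0; /* reset value collected, safe to reuse */
		/* Step 2.A: still in reset status, flush the whole cache
		 * (this counter included) back to wait_reset_list.
		 */
		do {
			rte_ring_enqueue_elem(p->wait_reset_list, id,
					      sizeof(*id));
		} while (rte_ring_dequeue_elem(cache, id, sizeof(*id)) == 0);
	}
	/* Steps 2.A.II/3: refill the cache from the global free list. */
	n = rte_ring_dequeue_burst_elem(p->free_list, buf, sizeof(buf[0]),
					RTE_MIN(p->fetch_sz,
						(uint32_t)RTE_DIM(buf)),
					NULL);
	if (n == 0)
		return -ENOENT; /* exhausted until the next reset cycle */
	*id = buf[0]; /* hand one out, cache the rest */
	rte_ring_enqueue_burst_elem(cache, &buf[1], sizeof(buf[0]), n - 1,
				    NULL);
	return 0;
}

/* Put one counter index back through the same per-queue cache. */
static inline void
hws_cnt_pool_put(struct hws_cnt_pool *p, struct rte_ring *cache, cnt_id_t id)
{
	cnt_id_t spill;

	p->query_gen_when_free[id] = p->query_gen; /* stamp for reuse check */
	if (rte_ring_enqueue_elem(cache, &id, sizeof(id)) == 0)
		return;
	/* Steps 3.A/3.B: cache full, write back everything above the
	 * threshold, plus this counter, to the global wait_reset_list
	 * (assumed sized to hold all counters, so these enqueues succeed).
	 */
	while (rte_ring_count(cache) > p->threshold &&
	       rte_ring_dequeue_elem(cache, &spill, sizeof(spill)) == 0)
		rte_ring_enqueue_elem(p->wait_reset_list, &spill,
				      sizeof(spill));
	rte_ring_enqueue_elem(p->wait_reset_list, &id, sizeof(id));
}

The generation stamp is the point of the design: a counter freed during the
current query cycle may still carry its old hardware value, so the pool will
not hand it out again until the background query service (configured through
the service_core and svc_cycle_time device args introduced below) has
advanced query_gen and collected the reset value from hardware.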
Signed-off-by: Xiaoyu Min --- drivers/common/mlx5/mlx5_devx_cmds.c | 50 +++ drivers/common/mlx5/mlx5_devx_cmds.h | 27 ++ drivers/common/mlx5/mlx5_prm.h | 20 +- drivers/common/mlx5/version.map | 1 + drivers/net/mlx5/meson.build | 1 + drivers/net/mlx5/mlx5.c | 14 + drivers/net/mlx5/mlx5.h | 27 ++ drivers/net/mlx5/mlx5_defs.h | 2 + drivers/net/mlx5/mlx5_flow.c | 27 +- drivers/net/mlx5/mlx5_flow.h | 2 + drivers/net/mlx5/mlx5_flow_aso.c | 261 ++++++++++++- drivers/net/mlx5/mlx5_flow_hw.c | 142 ++++++- drivers/net/mlx5/mlx5_flow_meter.c | 8 +- drivers/net/mlx5/mlx5_hws_cnt.c | 523 +++++++++++++++++++++++++ drivers/net/mlx5/mlx5_hws_cnt.h | 558 +++++++++++++++++++++++++++ 15 files changed, 1635 insertions(+), 28 deletions(-) create mode 100644 drivers/net/mlx5/mlx5_hws_cnt.c create mode 100644 drivers/net/mlx5/mlx5_hws_cnt.h diff --git a/drivers/common/mlx5/mlx5_devx_cmds.c b/drivers/common/mlx5/mlx5_devx_cmds.c index ac6891145d..eef7a98248 100644 --- a/drivers/common/mlx5/mlx5_devx_cmds.c +++ b/drivers/common/mlx5/mlx5_devx_cmds.c @@ -176,6 +176,41 @@ mlx5_devx_cmd_register_write(void *ctx, uint16_t reg_id, uint32_t arg, return 0; } +struct mlx5_devx_obj * +mlx5_devx_cmd_flow_counter_alloc_general(void *ctx, + struct mlx5_devx_counter_attr *attr) +{ + struct mlx5_devx_obj *dcs = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*dcs), + 0, SOCKET_ID_ANY); + uint32_t in[MLX5_ST_SZ_DW(alloc_flow_counter_in)] = {0}; + uint32_t out[MLX5_ST_SZ_DW(alloc_flow_counter_out)] = {0}; + + if (!dcs) { + rte_errno = ENOMEM; + return NULL; + } + MLX5_SET(alloc_flow_counter_in, in, opcode, + MLX5_CMD_OP_ALLOC_FLOW_COUNTER); + if (attr->bulk_log_max_alloc) + MLX5_SET(alloc_flow_counter_in, in, flow_counter_bulk_log_size, + attr->flow_counter_bulk_log_size); + else + MLX5_SET(alloc_flow_counter_in, in, flow_counter_bulk, + attr->bulk_n_128); + if (attr->pd_valid) + MLX5_SET(alloc_flow_counter_in, in, pd, attr->pd); + dcs->obj = mlx5_glue->devx_obj_create(ctx, in, + sizeof(in), out, sizeof(out)); + if (!dcs->obj) { + DRV_LOG(ERR, "Can't allocate counters - error %d", errno); + rte_errno = errno; + mlx5_free(dcs); + return NULL; + } + dcs->id = MLX5_GET(alloc_flow_counter_out, out, flow_counter_id); + return dcs; +} + /** * Allocate flow counters via devx interface. 
* @@ -967,6 +1002,16 @@ mlx5_devx_cmd_query_hca_attr(void *ctx, general_obj_types) & MLX5_GENERAL_OBJ_TYPES_CAP_CONN_TRACK_OFFLOAD); attr->rq_delay_drop = MLX5_GET(cmd_hca_cap, hcattr, rq_delay_drop); + attr->max_flow_counter_15_0 = MLX5_GET(cmd_hca_cap, hcattr, + max_flow_counter_15_0); + attr->max_flow_counter_31_16 = MLX5_GET(cmd_hca_cap, hcattr, + max_flow_counter_31_16); + attr->alloc_flow_counter_pd = MLX5_GET(cmd_hca_cap, hcattr, + alloc_flow_counter_pd); + attr->flow_counter_access_aso = MLX5_GET(cmd_hca_cap, hcattr, + flow_counter_access_aso); + attr->flow_access_aso_opc_mod = MLX5_GET(cmd_hca_cap, hcattr, + flow_access_aso_opc_mod); if (attr->crypto) { attr->aes_xts = MLX5_GET(cmd_hca_cap, hcattr, aes_xts); hcattr = mlx5_devx_get_hca_cap(ctx, in, out, &rc, @@ -989,6 +1034,11 @@ mlx5_devx_cmd_query_hca_attr(void *ctx, } attr->log_min_stride_wqe_sz = MLX5_GET(cmd_hca_cap_2, hcattr, log_min_stride_wqe_sz); + attr->flow_counter_bulk_log_max_alloc = MLX5_GET(cmd_hca_cap_2, + hcattr, flow_counter_bulk_log_max_alloc); + attr->flow_counter_bulk_log_granularity = + MLX5_GET(cmd_hca_cap_2, hcattr, + flow_counter_bulk_log_granularity); } if (attr->log_min_stride_wqe_sz == 0) attr->log_min_stride_wqe_sz = MLX5_MPRQ_LOG_MIN_STRIDE_WQE_SIZE; diff --git a/drivers/common/mlx5/mlx5_devx_cmds.h b/drivers/common/mlx5/mlx5_devx_cmds.h index d69dad613e..15b46f2acd 100644 --- a/drivers/common/mlx5/mlx5_devx_cmds.h +++ b/drivers/common/mlx5/mlx5_devx_cmds.h @@ -15,6 +15,16 @@ #define MLX5_DEVX_MAX_KLM_ENTRIES ((UINT16_MAX - \ MLX5_ST_SZ_DW(create_mkey_in) * 4) / (MLX5_ST_SZ_DW(klm) * 4)) +struct mlx5_devx_counter_attr { + uint32_t pd_valid:1; + uint32_t pd:24; + uint32_t bulk_log_max_alloc:1; + union { + uint8_t flow_counter_bulk_log_size; + uint8_t bulk_n_128; + }; +}; + struct mlx5_devx_mkey_attr { uint64_t addr; uint64_t size; @@ -263,6 +273,18 @@ struct mlx5_hca_attr { uint32_t set_reg_c:8; uint32_t nic_flow_table:1; uint32_t modify_outer_ip_ecn:1; + union { + uint32_t max_flow_counter; + struct { + uint16_t max_flow_counter_15_0; + uint16_t max_flow_counter_31_16; + }; + }; + uint32_t flow_counter_bulk_log_max_alloc:5; + uint32_t flow_counter_bulk_log_granularity:5; + uint32_t alloc_flow_counter_pd:1; + uint32_t flow_counter_access_aso:1; + uint32_t flow_access_aso_opc_mod:8; }; /* LAG Context. 
*/ @@ -593,6 +615,11 @@ struct mlx5_devx_crypto_login_attr { /* mlx5_devx_cmds.c */ +__rte_internal +struct mlx5_devx_obj * +mlx5_devx_cmd_flow_counter_alloc_general(void *ctx, + struct mlx5_devx_counter_attr *attr); + __rte_internal struct mlx5_devx_obj *mlx5_devx_cmd_flow_counter_alloc(void *ctx, uint32_t bulk_sz); diff --git a/drivers/common/mlx5/mlx5_prm.h b/drivers/common/mlx5/mlx5_prm.h index 628bae72b2..88121d5563 100644 --- a/drivers/common/mlx5/mlx5_prm.h +++ b/drivers/common/mlx5/mlx5_prm.h @@ -1169,8 +1169,10 @@ struct mlx5_ifc_alloc_flow_counter_in_bits { u8 reserved_at_10[0x10]; u8 reserved_at_20[0x10]; u8 op_mod[0x10]; - u8 flow_counter_id[0x20]; - u8 reserved_at_40[0x18]; + u8 reserved_at_40[0x8]; + u8 pd[0x18]; + u8 reserved_at_60[0x13]; + u8 flow_counter_bulk_log_size[0x5]; u8 flow_counter_bulk[0x8]; }; @@ -1404,7 +1406,13 @@ enum { #define MLX5_STEERING_LOGIC_FORMAT_CONNECTX_6DX 0x1 struct mlx5_ifc_cmd_hca_cap_bits { - u8 reserved_at_0[0x20]; + u8 access_other_hca_roce[0x1]; + u8 alloc_flow_counter_pd[0x1]; + u8 flow_counter_access_aso[0x1]; + u8 reserved_at_3[0x5]; + u8 flow_access_aso_opc_mod[0x8]; + u8 reserved_at_10[0xf]; + u8 vhca_resource_manager[0x1]; u8 hca_cap_2[0x1]; u8 reserved_at_21[0xf]; u8 vhca_id[0x10]; @@ -2111,7 +2119,11 @@ struct mlx5_ifc_cmd_hca_cap_2_bits { u8 format_select_dw_8_6_ext[0x1]; u8 reserved_at_1ac[0x14]; u8 general_obj_types_127_64[0x40]; - u8 reserved_at_200[0x80]; + u8 reserved_at_200[0x53]; + u8 flow_counter_bulk_log_max_alloc[0x5]; + u8 reserved_at_258[0x3]; + u8 flow_counter_bulk_log_granularity[0x5]; + u8 reserved_at_260[0x20]; u8 format_select_dw_gtpu_dw_0[0x8]; u8 format_select_dw_gtpu_dw_1[0x8]; u8 format_select_dw_gtpu_dw_2[0x8]; diff --git a/drivers/common/mlx5/version.map b/drivers/common/mlx5/version.map index 413dec14ab..4f72900519 100644 --- a/drivers/common/mlx5/version.map +++ b/drivers/common/mlx5/version.map @@ -40,6 +40,7 @@ INTERNAL { mlx5_devx_cmd_create_virtq; mlx5_devx_cmd_destroy; mlx5_devx_cmd_flow_counter_alloc; + mlx5_devx_cmd_flow_counter_alloc_general; mlx5_devx_cmd_flow_counter_query; mlx5_devx_cmd_flow_dump; mlx5_devx_cmd_flow_single_dump; diff --git a/drivers/net/mlx5/meson.build b/drivers/net/mlx5/meson.build index f9b266c900..4433849c89 100644 --- a/drivers/net/mlx5/meson.build +++ b/drivers/net/mlx5/meson.build @@ -37,6 +37,7 @@ sources = files( 'mlx5_vlan.c', 'mlx5_utils.c', 'mlx5_devx.c', + 'mlx5_hws_cnt.c', ) if is_linux diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c index 4abb207077..314176022a 100644 --- a/drivers/net/mlx5/mlx5.c +++ b/drivers/net/mlx5/mlx5.c @@ -175,6 +175,12 @@ /* Device parameter to create the fdb default rule in PMD */ #define MLX5_FDB_DEFAULT_RULE_EN "fdb_def_rule_en" +/* HW steering counter configuration. */ +#define MLX5_HWS_CNT_SERVICE_CORE "service_core" + +/* HW steering counter's query interval. */ +#define MLX5_HWS_CNT_CYCLE_TIME "svc_cycle_time" + /* Shared memory between primary and secondary processes. 
*/ struct mlx5_shared_data *mlx5_shared_data; @@ -1245,6 +1251,10 @@ mlx5_dev_args_check_handler(const char *key, const char *val, void *opaque) config->allow_duplicate_pattern = !!tmp; } else if (strcmp(MLX5_FDB_DEFAULT_RULE_EN, key) == 0) { config->fdb_def_rule = !!tmp; + } else if (strcmp(MLX5_HWS_CNT_SERVICE_CORE, key) == 0) { + config->cnt_svc.service_core = tmp; + } else if (strcmp(MLX5_HWS_CNT_CYCLE_TIME, key) == 0) { + config->cnt_svc.cycle_time = tmp; } return 0; } @@ -1281,6 +1291,8 @@ mlx5_shared_dev_ctx_args_config(struct mlx5_dev_ctx_shared *sh, MLX5_DECAP_EN, MLX5_ALLOW_DUPLICATE_PATTERN, MLX5_FDB_DEFAULT_RULE_EN, + MLX5_HWS_CNT_SERVICE_CORE, + MLX5_HWS_CNT_CYCLE_TIME, NULL, }; int ret = 0; @@ -1293,6 +1305,8 @@ mlx5_shared_dev_ctx_args_config(struct mlx5_dev_ctx_shared *sh, config->decap_en = 1; config->allow_duplicate_pattern = 1; config->fdb_def_rule = 1; + config->cnt_svc.cycle_time = MLX5_CNT_SVC_CYCLE_TIME_DEFAULT; + config->cnt_svc.service_core = rte_get_main_lcore(); if (mkvlist != NULL) { /* Process parameters. */ ret = mlx5_kvargs_process(mkvlist, params, diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h index 263b502d37..8d82c68569 100644 --- a/drivers/net/mlx5/mlx5.h +++ b/drivers/net/mlx5/mlx5.h @@ -308,6 +308,10 @@ struct mlx5_sh_config { uint32_t hw_fcs_strip:1; /* FCS stripping is supported. */ uint32_t allow_duplicate_pattern:1; uint32_t lro_allowed:1; /* Whether LRO is allowed. */ + struct { + uint16_t service_core; + uint32_t cycle_time; /* query cycle time in milli-second. */ + } cnt_svc; /* configure for HW steering's counter's service. */ /* Allow/Prevent the duplicate rules pattern. */ uint32_t fdb_def_rule:1; /* Create FDB default jump rule */ }; @@ -1224,6 +1228,22 @@ struct mlx5_flex_item { struct mlx5_flex_pattern_field map[MLX5_FLEX_ITEM_MAPPING_NUM]; }; +#define HWS_CNT_ASO_SQ_NUM 4 + +struct mlx5_hws_aso_mng { + uint16_t sq_num; + struct mlx5_aso_sq sqs[HWS_CNT_ASO_SQ_NUM]; +}; + +struct mlx5_hws_cnt_svc_mng { + uint32_t refcnt; + uint32_t service_core; + uint32_t query_interval; + pthread_t service_thread; + uint8_t svc_running; + struct mlx5_hws_aso_mng aso_mng __rte_cache_aligned; +}; + /* * Shared Infiniband device context for Master/Representors * which belong to same IB device with multiple IB ports. @@ -1323,6 +1343,7 @@ struct mlx5_dev_ctx_shared { pthread_mutex_t lwm_config_lock; uint32_t host_shaper_rate:8; uint32_t lwm_triggered:1; + struct mlx5_hws_cnt_svc_mng *cnt_svc; struct mlx5_dev_shared_port port[]; /* per device port data array. */ }; @@ -1623,6 +1644,7 @@ struct mlx5_priv { /* HW steering global tag action. */ struct mlx5dr_action *hw_tag[2]; struct mlx5_indexed_pool *acts_ipool; /* Action data indexed pool. */ + struct mlx5_hws_cnt_pool *hws_cpool; /* HW steering's counter pool. 
*/ #endif }; @@ -2036,6 +2058,11 @@ mlx5_get_supported_sw_parsing_offloads(const struct mlx5_hca_attr *attr); uint32_t mlx5_get_supported_tunneling_offloads(const struct mlx5_hca_attr *attr); +int mlx5_aso_cnt_queue_init(struct mlx5_dev_ctx_shared *sh); +void mlx5_aso_cnt_queue_uninit(struct mlx5_dev_ctx_shared *sh); +int mlx5_aso_cnt_query(struct mlx5_dev_ctx_shared *sh, + struct mlx5_hws_cnt_pool *cpool); + /* mlx5_flow_flex.c */ struct rte_flow_item_flex_handle * diff --git a/drivers/net/mlx5/mlx5_defs.h b/drivers/net/mlx5/mlx5_defs.h index 585afb0a98..d064abfef3 100644 --- a/drivers/net/mlx5/mlx5_defs.h +++ b/drivers/net/mlx5/mlx5_defs.h @@ -188,4 +188,6 @@ #define static_assert _Static_assert #endif +#define MLX5_CNT_SVC_CYCLE_TIME_DEFAULT 500 + #endif /* RTE_PMD_MLX5_DEFS_H_ */ diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c index fb3be940e5..658cc69750 100644 --- a/drivers/net/mlx5/mlx5_flow.c +++ b/drivers/net/mlx5/mlx5_flow.c @@ -7832,24 +7832,33 @@ mlx5_flow_isolate(struct rte_eth_dev *dev, */ static int flow_drv_query(struct rte_eth_dev *dev, - uint32_t flow_idx, + struct rte_flow *eflow, const struct rte_flow_action *actions, void *data, struct rte_flow_error *error) { struct mlx5_priv *priv = dev->data->dev_private; const struct mlx5_flow_driver_ops *fops; - struct rte_flow *flow = mlx5_ipool_get(priv->flows[MLX5_FLOW_TYPE_GEN], - flow_idx); - enum mlx5_flow_drv_type ftype; + struct rte_flow *flow = NULL; + enum mlx5_flow_drv_type ftype = MLX5_FLOW_TYPE_MIN; + if (priv->sh->config.dv_flow_en == 2) { +#if defined(HAVE_IBV_FLOW_DV_SUPPORT) || !defined(HAVE_INFINIBAND_VERBS_H) + flow = eflow; + ftype = MLX5_FLOW_TYPE_HW; +#endif + } else { + flow = (struct rte_flow *)mlx5_ipool_get(priv->flows[MLX5_FLOW_TYPE_GEN], + (uintptr_t)(void *)eflow); + } if (!flow) { return rte_flow_error_set(error, ENOENT, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, "invalid flow handle"); } - ftype = flow->drv_type; + if (ftype == MLX5_FLOW_TYPE_MIN) + ftype = flow->drv_type; MLX5_ASSERT(ftype > MLX5_FLOW_TYPE_MIN && ftype < MLX5_FLOW_TYPE_MAX); fops = flow_get_drv_ops(ftype); @@ -7870,14 +7879,8 @@ mlx5_flow_query(struct rte_eth_dev *dev, struct rte_flow_error *error) { int ret; - struct mlx5_priv *priv = dev->data->dev_private; - if (priv->sh->config.dv_flow_en == 2) - return rte_flow_error_set(error, ENOTSUP, - RTE_FLOW_ERROR_TYPE_UNSPECIFIED, - NULL, - "Flow non-Q query not supported"); - ret = flow_drv_query(dev, (uintptr_t)(void *)flow, actions, data, + ret = flow_drv_query(dev, flow, actions, data, error); if (ret < 0) return ret; diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h index c5190b1d4f..cdea4076d8 100644 --- a/drivers/net/mlx5/mlx5_flow.h +++ b/drivers/net/mlx5/mlx5_flow.h @@ -1104,6 +1104,7 @@ struct rte_flow_hw { }; struct rte_flow_template_table *table; /* The table flow allcated from. */ struct mlx5dr_rule rule; /* HWS layer data struct. */ + uint32_t cnt_id; } __rte_packed; /* rte flow action translate to DR action struct. */ @@ -1225,6 +1226,7 @@ struct mlx5_hw_actions { uint16_t encap_decap_pos; /* Encap/Decap action position. */ uint32_t acts_num:4; /* Total action number. */ uint32_t mark:1; /* Indicate the mark action. */ + uint32_t cnt_id; /* Counter id. */ /* Translated DR action array from action template. 
*/ struct mlx5dr_rule_action rule_acts[MLX5_HW_MAX_ACTS]; }; diff --git a/drivers/net/mlx5/mlx5_flow_aso.c b/drivers/net/mlx5/mlx5_flow_aso.c index 60d0280367..ed9272e583 100644 --- a/drivers/net/mlx5/mlx5_flow_aso.c +++ b/drivers/net/mlx5/mlx5_flow_aso.c @@ -12,6 +12,9 @@ #include "mlx5.h" #include "mlx5_flow.h" +#include "mlx5_hws_cnt.h" + +#define MLX5_ASO_CNT_QUEUE_LOG_DESC 14 /** * Free MR resources. @@ -79,6 +82,33 @@ mlx5_aso_destroy_sq(struct mlx5_aso_sq *sq) memset(sq, 0, sizeof(*sq)); } +/** + * Initialize Send Queue used for ASO access counter. + * + * @param[in] sq + * ASO SQ to initialize. + */ +static void +mlx5_aso_cnt_init_sq(struct mlx5_aso_sq *sq) +{ + volatile struct mlx5_aso_wqe *restrict wqe; + int i; + int size = 1 << sq->log_desc_n; + + /* All the next fields state should stay constant. */ + for (i = 0, wqe = &sq->sq_obj.aso_wqes[0]; i < size; ++i, ++wqe) { + wqe->general_cseg.sq_ds = rte_cpu_to_be_32((sq->sqn << 8) | + (sizeof(*wqe) >> 4)); + wqe->aso_cseg.operand_masks = rte_cpu_to_be_32 + (0u | + (ASO_OPER_LOGICAL_OR << ASO_CSEG_COND_OPER_OFFSET) | + (ASO_OP_ALWAYS_FALSE << ASO_CSEG_COND_1_OPER_OFFSET) | + (ASO_OP_ALWAYS_FALSE << ASO_CSEG_COND_0_OPER_OFFSET) | + (BYTEWISE_64BYTE << ASO_CSEG_DATA_MASK_MODE_OFFSET)); + wqe->aso_cseg.data_mask = RTE_BE64(UINT64_MAX); + } +} + /** * Initialize Send Queue used for ASO access. * @@ -191,7 +221,7 @@ mlx5_aso_ct_init_sq(struct mlx5_aso_sq *sq) */ static int mlx5_aso_sq_create(struct mlx5_common_device *cdev, struct mlx5_aso_sq *sq, - void *uar) + void *uar, uint16_t log_desc_n) { struct mlx5_devx_cq_attr cq_attr = { .uar_page_id = mlx5_os_get_devx_uar_page_id(uar), @@ -212,12 +242,12 @@ mlx5_aso_sq_create(struct mlx5_common_device *cdev, struct mlx5_aso_sq *sq, int ret; if (mlx5_devx_cq_create(cdev->ctx, &sq->cq.cq_obj, - MLX5_ASO_QUEUE_LOG_DESC, &cq_attr, + log_desc_n, &cq_attr, SOCKET_ID_ANY)) goto error; sq->cq.cq_ci = 0; - sq->cq.log_desc_n = MLX5_ASO_QUEUE_LOG_DESC; - sq->log_desc_n = MLX5_ASO_QUEUE_LOG_DESC; + sq->cq.log_desc_n = log_desc_n; + sq->log_desc_n = log_desc_n; sq_attr.cqn = sq->cq.cq_obj.cq->id; /* for mlx5_aso_wqe that is twice the size of mlx5_wqe */ log_wqbb_n = sq->log_desc_n + 1; @@ -269,7 +299,8 @@ mlx5_aso_queue_init(struct mlx5_dev_ctx_shared *sh, sq_desc_n, &sh->aso_age_mng->aso_sq.mr)) return -1; if (mlx5_aso_sq_create(cdev, &sh->aso_age_mng->aso_sq, - sh->tx_uar.obj)) { + sh->tx_uar.obj, + MLX5_ASO_QUEUE_LOG_DESC)) { mlx5_aso_dereg_mr(cdev, &sh->aso_age_mng->aso_sq.mr); return -1; } @@ -277,7 +308,7 @@ mlx5_aso_queue_init(struct mlx5_dev_ctx_shared *sh, break; case ASO_OPC_MOD_POLICER: if (mlx5_aso_sq_create(cdev, &sh->mtrmng->pools_mng.sq, - sh->tx_uar.obj)) + sh->tx_uar.obj, MLX5_ASO_QUEUE_LOG_DESC)) return -1; mlx5_aso_mtr_init_sq(&sh->mtrmng->pools_mng.sq); break; @@ -287,7 +318,7 @@ mlx5_aso_queue_init(struct mlx5_dev_ctx_shared *sh, &sh->ct_mng->aso_sq.mr)) return -1; if (mlx5_aso_sq_create(cdev, &sh->ct_mng->aso_sq, - sh->tx_uar.obj)) { + sh->tx_uar.obj, MLX5_ASO_QUEUE_LOG_DESC)) { mlx5_aso_dereg_mr(cdev, &sh->ct_mng->aso_sq.mr); return -1; } @@ -1403,3 +1434,219 @@ mlx5_aso_ct_available(struct mlx5_dev_ctx_shared *sh, rte_errno = EBUSY; return -rte_errno; } + +int +mlx5_aso_cnt_queue_init(struct mlx5_dev_ctx_shared *sh) +{ + struct mlx5_hws_aso_mng *aso_mng = NULL; + uint8_t idx; + struct mlx5_aso_sq *sq; + + MLX5_ASSERT(sh); + MLX5_ASSERT(sh->cnt_svc); + aso_mng = &sh->cnt_svc->aso_mng; + aso_mng->sq_num = HWS_CNT_ASO_SQ_NUM; + for (idx = 0; idx < HWS_CNT_ASO_SQ_NUM; idx++) { + sq = 
&aso_mng->sqs[idx]; + if (mlx5_aso_sq_create(sh->cdev, sq, sh->tx_uar.obj, + MLX5_ASO_CNT_QUEUE_LOG_DESC)) + goto error; + mlx5_aso_cnt_init_sq(sq); + } + return 0; +error: + mlx5_aso_cnt_queue_uninit(sh); + return -1; +} + +void +mlx5_aso_cnt_queue_uninit(struct mlx5_dev_ctx_shared *sh) +{ + uint16_t idx; + + for (idx = 0; idx < sh->cnt_svc->aso_mng.sq_num; idx++) + mlx5_aso_destroy_sq(&sh->cnt_svc->aso_mng.sqs[idx]); + sh->cnt_svc->aso_mng.sq_num = 0; +} + +static uint16_t +mlx5_aso_cnt_sq_enqueue_burst(struct mlx5_hws_cnt_pool *cpool, + struct mlx5_dev_ctx_shared *sh, + struct mlx5_aso_sq *sq, uint32_t n, + uint32_t offset, uint32_t dcs_id_base) +{ + volatile struct mlx5_aso_wqe *wqe; + uint16_t size = 1 << sq->log_desc_n; + uint16_t mask = size - 1; + uint16_t max; + uint32_t upper_offset = offset; + uint64_t addr; + uint32_t ctrl_gen_id = 0; + uint8_t opcmod = sh->cdev->config.hca_attr.flow_access_aso_opc_mod; + rte_be32_t lkey = rte_cpu_to_be_32(cpool->raw_mng->mr.lkey); + uint16_t aso_n = (uint16_t)(RTE_ALIGN_CEIL(n, 4) / 4); + uint32_t ccntid; + + max = RTE_MIN(size - (uint16_t)(sq->head - sq->tail), aso_n); + if (unlikely(!max)) + return 0; + upper_offset += (max * 4); + /* Because only one burst at one time, we can use the same elt. */ + sq->elts[0].burst_size = max; + ctrl_gen_id = dcs_id_base; + ctrl_gen_id /= 4; + do { + ccntid = upper_offset - max * 4; + wqe = &sq->sq_obj.aso_wqes[sq->head & mask]; + rte_prefetch0(&sq->sq_obj.aso_wqes[(sq->head + 1) & mask]); + wqe->general_cseg.misc = rte_cpu_to_be_32(ctrl_gen_id); + wqe->general_cseg.flags = RTE_BE32(MLX5_COMP_ONLY_FIRST_ERR << + MLX5_COMP_MODE_OFFSET); + wqe->general_cseg.opcode = rte_cpu_to_be_32 + (MLX5_OPCODE_ACCESS_ASO | + (opcmod << + WQE_CSEG_OPC_MOD_OFFSET) | + (sq->pi << + WQE_CSEG_WQE_INDEX_OFFSET)); + addr = (uint64_t)RTE_PTR_ADD(cpool->raw_mng->raw, + ccntid * sizeof(struct flow_counter_stats)); + wqe->aso_cseg.va_h = rte_cpu_to_be_32((uint32_t)(addr >> 32)); + wqe->aso_cseg.va_l_r = rte_cpu_to_be_32((uint32_t)addr | 1u); + wqe->aso_cseg.lkey = lkey; + sq->pi += 2; /* Each WQE contains 2 WQEBB's. */ + sq->head++; + sq->next++; + ctrl_gen_id++; + max--; + } while (max); + wqe->general_cseg.flags = RTE_BE32(MLX5_COMP_ALWAYS << + MLX5_COMP_MODE_OFFSET); + mlx5_doorbell_ring(&sh->tx_uar.bf_db, *(volatile uint64_t *)wqe, + sq->pi, &sq->sq_obj.db_rec[MLX5_SND_DBR], + !sh->tx_uar.dbnc); + return sq->elts[0].burst_size; +} + +static uint16_t +mlx5_aso_cnt_completion_handle(struct mlx5_aso_sq *sq) +{ + struct mlx5_aso_cq *cq = &sq->cq; + volatile struct mlx5_cqe *restrict cqe; + const unsigned int cq_size = 1 << cq->log_desc_n; + const unsigned int mask = cq_size - 1; + uint32_t idx; + uint32_t next_idx = cq->cq_ci & mask; + const uint16_t max = (uint16_t)(sq->head - sq->tail); + uint16_t i = 0; + int ret; + if (unlikely(!max)) + return 0; + idx = next_idx; + next_idx = (cq->cq_ci + 1) & mask; + rte_prefetch0(&cq->cq_obj.cqes[next_idx]); + cqe = &cq->cq_obj.cqes[idx]; + ret = check_cqe(cqe, cq_size, cq->cq_ci); + /* + * Be sure owner read is done before any other cookie field or + * opaque field. + */ + rte_io_rmb(); + if (unlikely(ret != MLX5_CQE_STATUS_SW_OWN)) { + if (likely(ret == MLX5_CQE_STATUS_HW_OWN)) + return 0; /* return immediately. 
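+				 * The CQE is still owned by HW;
+				 * there is no completion to consume yet.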
*/ + mlx5_aso_cqe_err_handle(sq); + } + i += sq->elts[0].burst_size; + sq->elts[0].burst_size = 0; + cq->cq_ci++; + if (likely(i)) { + sq->tail += i; + rte_io_wmb(); + cq->cq_obj.db_rec[0] = rte_cpu_to_be_32(cq->cq_ci); + } + return i; +} + +static uint16_t +mlx5_aso_cnt_query_one_dcs(struct mlx5_dev_ctx_shared *sh, + struct mlx5_hws_cnt_pool *cpool, + uint8_t dcs_idx, uint32_t num) +{ + uint32_t dcs_id = cpool->dcs_mng.dcs[dcs_idx].obj->id; + uint64_t cnt_num = cpool->dcs_mng.dcs[dcs_idx].batch_sz; + uint64_t left; + uint32_t iidx = cpool->dcs_mng.dcs[dcs_idx].iidx; + uint32_t offset; + uint16_t mask; + uint16_t sq_idx; + uint64_t burst_sz = (uint64_t)(1 << MLX5_ASO_CNT_QUEUE_LOG_DESC) * 4 * + sh->cnt_svc->aso_mng.sq_num; + uint64_t qburst_sz = burst_sz / sh->cnt_svc->aso_mng.sq_num; + uint64_t n; + struct mlx5_aso_sq *sq; + + cnt_num = RTE_MIN(num, cnt_num); + left = cnt_num; + while (left) { + mask = 0; + for (sq_idx = 0; sq_idx < sh->cnt_svc->aso_mng.sq_num; + sq_idx++) { + if (left == 0) { + mask |= (1 << sq_idx); + continue; + } + n = RTE_MIN(left, qburst_sz); + offset = cnt_num - left; + offset += iidx; + mlx5_aso_cnt_sq_enqueue_burst(cpool, sh, + &sh->cnt_svc->aso_mng.sqs[sq_idx], n, + offset, dcs_id); + left -= n; + } + do { + for (sq_idx = 0; sq_idx < sh->cnt_svc->aso_mng.sq_num; + sq_idx++) { + sq = &sh->cnt_svc->aso_mng.sqs[sq_idx]; + if (mlx5_aso_cnt_completion_handle(sq)) + mask |= (1 << sq_idx); + } + } while (mask < ((1 << sh->cnt_svc->aso_mng.sq_num) - 1)); + } + return cnt_num; +} + +/* + * Query FW counter via ASO WQE. + * + * ASO query counter use _sync_ mode, means: + * 1. each SQ issue one burst with several WQEs + * 2. ask for CQE at last WQE + * 3. busy poll CQ of each SQ's + * 4. If all SQ's CQE are received then goto step 1, issue next burst + * + * @param[in] sh + * Pointer to shared device. + * @param[in] cpool + * Pointer to counter pool. + * + * @return + * 0 on success, -1 on failure. + */ +int +mlx5_aso_cnt_query(struct mlx5_dev_ctx_shared *sh, + struct mlx5_hws_cnt_pool *cpool) +{ + uint32_t idx; + uint32_t num; + uint32_t cnt_num = mlx5_hws_cnt_pool_get_size(cpool) - + rte_ring_count(cpool->free_list); + + for (idx = 0; idx < cpool->dcs_mng.batch_total; idx++) { + num = RTE_MIN(cnt_num, cpool->dcs_mng.dcs[idx].batch_sz); + mlx5_aso_cnt_query_one_dcs(sh, cpool, idx, num); + cnt_num -= num; + if (cnt_num == 0) + break; + } + return 0; +} diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c index 959d566d68..8891f4a4e3 100644 --- a/drivers/net/mlx5/mlx5_flow_hw.c +++ b/drivers/net/mlx5/mlx5_flow_hw.c @@ -10,6 +10,9 @@ #include "mlx5_rx.h" #if defined(HAVE_IBV_FLOW_DV_SUPPORT) || !defined(HAVE_INFINIBAND_VERBS_H) +#include "mlx5dr_context.h" +#include "mlx5dr_send.h" +#include "mlx5_hws_cnt.h" /* The maximum actions support in the flow. 
*/ #define MLX5_HW_MAX_ACTS 16 @@ -350,6 +353,10 @@ __flow_hw_action_template_destroy(struct rte_eth_dev *dev, mlx5dr_action_destroy(acts->mhdr->action); mlx5_free(acts->mhdr); } + if (mlx5_hws_cnt_id_valid(acts->cnt_id)) { + mlx5_hws_cnt_shared_put(priv->hws_cpool, &acts->cnt_id); + acts->cnt_id = 0; + } } /** @@ -935,6 +942,30 @@ flow_hw_meter_compile(struct rte_eth_dev *dev, } return 0; } + +static __rte_always_inline int +flow_hw_cnt_compile(struct rte_eth_dev *dev, uint32_t start_pos, + struct mlx5_hw_actions *acts) +{ + struct mlx5_priv *priv = dev->data->dev_private; + uint32_t pos = start_pos; + cnt_id_t cnt_id; + int ret; + + ret = mlx5_hws_cnt_shared_get(priv->hws_cpool, &cnt_id); + if (ret != 0) + return ret; + ret = mlx5_hws_cnt_pool_get_action_offset + (priv->hws_cpool, + cnt_id, + &acts->rule_acts[pos].action, + &acts->rule_acts[pos].counter.offset); + if (ret != 0) + return ret; + acts->cnt_id = cnt_id; + return 0; +} + /** * Translate rte_flow actions to DR action. * @@ -1178,6 +1209,20 @@ flow_hw_actions_translate(struct rte_eth_dev *dev, goto err; i++; break; + case RTE_FLOW_ACTION_TYPE_COUNT: + if (masks->conf && + ((const struct rte_flow_action_count *) + masks->conf)->id) { + err = flow_hw_cnt_compile(dev, i, acts); + if (err) + goto err; + } else if (__flow_hw_act_data_general_append + (priv, acts, actions->type, + actions - action_start, i)) { + goto err; + } + i++; + break; case RTE_FLOW_ACTION_TYPE_END: actions_end = true; break; @@ -1499,7 +1544,8 @@ flow_hw_actions_construct(struct rte_eth_dev *dev, const uint8_t it_idx, const struct rte_flow_action actions[], struct mlx5dr_rule_action *rule_acts, - uint32_t *acts_num) + uint32_t *acts_num, + uint32_t queue) { struct mlx5_priv *priv = dev->data->dev_private; struct rte_flow_template_table *table = job->flow->table; @@ -1553,6 +1599,7 @@ flow_hw_actions_construct(struct rte_eth_dev *dev, uint64_t item_flags; struct mlx5_hw_jump_action *jump; struct mlx5_hrxq *hrxq; + cnt_id_t cnt_id; action = &actions[act_data->action_src]; MLX5_ASSERT(action->type == RTE_FLOW_ACTION_TYPE_INDIRECT || @@ -1660,6 +1707,21 @@ flow_hw_actions_construct(struct rte_eth_dev *dev, if (mlx5_aso_mtr_wait(priv->sh, mtr)) return -1; break; + case RTE_FLOW_ACTION_TYPE_COUNT: + ret = mlx5_hws_cnt_pool_get(priv->hws_cpool, &queue, + &cnt_id); + if (ret != 0) + return ret; + ret = mlx5_hws_cnt_pool_get_action_offset + (priv->hws_cpool, + cnt_id, + &rule_acts[act_data->action_dst].action, + &rule_acts[act_data->action_dst].counter.offset + ); + if (ret != 0) + return ret; + job->flow->cnt_id = cnt_id; + break; default: break; } @@ -1669,6 +1731,8 @@ flow_hw_actions_construct(struct rte_eth_dev *dev, job->flow->idx - 1; rule_acts[hw_acts->encap_decap_pos].reformat.data = buf; } + if (mlx5_hws_cnt_id_valid(hw_acts->cnt_id)) + job->flow->cnt_id = hw_acts->cnt_id; return 0; } @@ -1804,7 +1868,7 @@ flow_hw_async_flow_create(struct rte_eth_dev *dev, * user's input, in order to save the cost. 
*/ if (flow_hw_actions_construct(dev, job, hw_acts, pattern_template_index, - actions, rule_acts, &acts_num)) { + actions, rule_acts, &acts_num, queue)) { rte_errno = EINVAL; goto free; } @@ -1934,6 +1998,13 @@ flow_hw_pull(struct rte_eth_dev *dev, flow_hw_jump_release(dev, job->flow->jump); else if (job->flow->fate_type == MLX5_FLOW_FATE_QUEUE) mlx5_hrxq_obj_release(dev, job->flow->hrxq); + if (mlx5_hws_cnt_id_valid(job->flow->cnt_id) && + mlx5_hws_cnt_is_shared + (priv->hws_cpool, job->flow->cnt_id) == false) { + mlx5_hws_cnt_pool_put(priv->hws_cpool, &queue, + &job->flow->cnt_id); + job->flow->cnt_id = 0; + } mlx5_ipool_free(job->flow->table->flow, job->flow->idx); } priv->hw_q[queue].job[priv->hw_q[queue].job_idx++] = job; @@ -2638,6 +2709,9 @@ flow_hw_actions_validate(struct rte_eth_dev *dev, if (ret < 0) return ret; break; + case RTE_FLOW_ACTION_TYPE_COUNT: + /* TODO: Validation logic */ + break; case RTE_FLOW_ACTION_TYPE_END: actions_end = true; break; @@ -4315,6 +4389,12 @@ flow_hw_configure(struct rte_eth_dev *dev, } if (_queue_attr) mlx5_free(_queue_attr); + if (port_attr->nb_counters) { + priv->hws_cpool = mlx5_hws_cnt_pool_create(dev, port_attr, + nb_queue); + if (priv->hws_cpool == NULL) + goto err; + } return 0; err: flow_hw_free_vport_actions(priv); @@ -4383,6 +4463,8 @@ flow_hw_resource_release(struct rte_eth_dev *dev) mlx5_ipool_destroy(priv->acts_ipool); priv->acts_ipool = NULL; } + if (priv->hws_cpool) + mlx5_hws_cnt_pool_destroy(priv->sh, priv->hws_cpool); mlx5_free(priv->hw_q); priv->hw_q = NULL; claim_zero(mlx5dr_context_close(priv->dr_ctx)); @@ -4597,6 +4679,61 @@ flow_hw_action_handle_destroy(struct rte_eth_dev *dev, uint32_t queue, return flow_dv_action_destroy(dev, handle, error); } +static int +flow_hw_query_counter(const struct rte_eth_dev *dev, uint32_t counter, + void *data, struct rte_flow_error *error) +{ + struct mlx5_priv *priv = dev->data->dev_private; + struct mlx5_hws_cnt *cnt; + struct rte_flow_query_count *qc = data; + uint32_t iidx = mlx5_hws_cnt_iidx(priv->hws_cpool, counter); + uint64_t pkts, bytes; + + if (!mlx5_hws_cnt_id_valid(counter)) + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, + "counter are not available"); + cnt = &priv->hws_cpool->pool[iidx]; + __hws_cnt_query_raw(priv->hws_cpool, counter, &pkts, &bytes); + qc->hits_set = 1; + qc->bytes_set = 1; + qc->hits = pkts - cnt->reset.hits; + qc->bytes = bytes - cnt->reset.bytes; + if (qc->reset) { + cnt->reset.bytes = bytes; + cnt->reset.hits = pkts; + } + return 0; +} + +static int +flow_hw_query(struct rte_eth_dev *dev, + struct rte_flow *flow __rte_unused, + const struct rte_flow_action *actions __rte_unused, + void *data __rte_unused, + struct rte_flow_error *error __rte_unused) +{ + int ret = -EINVAL; + struct rte_flow_hw *hw_flow = (struct rte_flow_hw *)flow; + + for (; actions->type != RTE_FLOW_ACTION_TYPE_END; actions++) { + switch (actions->type) { + case RTE_FLOW_ACTION_TYPE_VOID: + break; + case RTE_FLOW_ACTION_TYPE_COUNT: + ret = flow_hw_query_counter(dev, hw_flow->cnt_id, data, + error); + break; + default: + return rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_ACTION, + actions, + "action not supported"); + } + } + return ret; +} + const struct mlx5_flow_driver_ops mlx5_flow_hw_drv_ops = { .info_get = flow_hw_info_get, .configure = flow_hw_configure, @@ -4620,6 +4757,7 @@ const struct mlx5_flow_driver_ops mlx5_flow_hw_drv_ops = { .action_destroy = flow_dv_action_destroy, .action_update = flow_dv_action_update, .action_query = 
flow_dv_action_query,
+	.query = flow_hw_query,
 };
 
 /**
diff --git a/drivers/net/mlx5/mlx5_flow_meter.c b/drivers/net/mlx5/mlx5_flow_meter.c
index b69021f6a0..7221bfb642 100644
--- a/drivers/net/mlx5/mlx5_flow_meter.c
+++ b/drivers/net/mlx5/mlx5_flow_meter.c
@@ -61,6 +61,7 @@ mlx5_flow_meter_init(struct rte_eth_dev *dev,
 	struct mlx5_aso_mtr *aso;
 	uint32_t i;
 	struct rte_mtr_error error;
+	uint32_t flags;
 
 	if (!nb_meters || !nb_meter_profiles || !nb_meter_policies) {
 		ret = ENOTSUP;
@@ -104,11 +105,12 @@ mlx5_flow_meter_init(struct rte_eth_dev *dev,
 				NULL, "Meter register is not available.");
 		goto err;
 	}
+	flags = MLX5DR_ACTION_FLAG_HWS_RX | MLX5DR_ACTION_FLAG_HWS_TX;
+	if (priv->sh->config.dv_esw_en && priv->master)
+		flags |= MLX5DR_ACTION_FLAG_HWS_FDB;
 	priv->mtr_bulk.action = mlx5dr_action_create_aso_meter
 			(priv->dr_ctx, (struct mlx5dr_devx_obj *)dcs,
-			 reg_id - REG_C_0, MLX5DR_ACTION_FLAG_HWS_RX |
-			 MLX5DR_ACTION_FLAG_HWS_TX |
-			 MLX5DR_ACTION_FLAG_HWS_FDB);
+			 reg_id - REG_C_0, flags);
 	if (!priv->mtr_bulk.action) {
 		ret = ENOMEM;
 		rte_mtr_error_set(&error, ENOMEM,
diff --git a/drivers/net/mlx5/mlx5_hws_cnt.c b/drivers/net/mlx5/mlx5_hws_cnt.c
new file mode 100644
index 0000000000..f7bf36de09
--- /dev/null
+++ b/drivers/net/mlx5/mlx5_hws_cnt.c
@@ -0,0 +1,523 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2020 Mellanox Technologies, Ltd
+ */
+
+#include
+#include
+#include
+#include
+#include
+#include
+
+#include "mlx5_utils.h"
+#include "mlx5_hws_cnt.h"
+
+#define HWS_CNT_CACHE_SZ_DEFAULT 511
+#define HWS_CNT_CACHE_PRELOAD_DEFAULT 254
+#define HWS_CNT_CACHE_FETCH_DEFAULT 254
+#define HWS_CNT_CACHE_THRESHOLD_DEFAULT 254
+#define HWS_CNT_ALLOC_FACTOR_DEFAULT 20
+
+static void
+__hws_cnt_id_load(struct mlx5_hws_cnt_pool *cpool)
+{
+	uint32_t preload;
+	uint32_t q_num = cpool->cache->q_num;
+	uint32_t cnt_num = mlx5_hws_cnt_pool_get_size(cpool);
+	cnt_id_t cnt_id, iidx = 0;
+	uint32_t qidx;
+	struct rte_ring *qcache = NULL;
+
+	/*
+	 * Counter ID order is important for tracking the maximum number of
+	 * counters in use for querying, which means the counter internal
+	 * index order must run from zero up to the number the user
+	 * configured, i.e. 0 - 8000000.
+	 * The counter IDs need to be loaded in this order into the per-queue
+	 * caches first, and then into the global free list.
+	 * In the end, the user fetches counters from the minimal to the
+	 * maximal index.
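+	 *
+	 * Illustrative example (assumed numbers, not from this patch):
+	 * with q_num = 2, preload_sz = 3 and 10 counters in total, queue
+	 * cache 0 is loaded with the IDs for internal indexes 0-2, queue
+	 * cache 1 with 3-5, and indexes 6-9 go to the global free list.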
+ */ + preload = RTE_MIN(cpool->cache->preload_sz, cnt_num / q_num); + for (qidx = 0; qidx < q_num; qidx++) { + for (; iidx < preload * (qidx + 1); iidx++) { + cnt_id = mlx5_hws_cnt_id_gen(cpool, iidx); + qcache = cpool->cache->qcache[qidx]; + if (qcache) + rte_ring_enqueue_elem(qcache, &cnt_id, + sizeof(cnt_id)); + } + } + for (; iidx < cnt_num; iidx++) { + cnt_id = mlx5_hws_cnt_id_gen(cpool, iidx); + rte_ring_enqueue_elem(cpool->free_list, &cnt_id, + sizeof(cnt_id)); + } +} + +static void +__mlx5_hws_cnt_svc(struct mlx5_dev_ctx_shared *sh, + struct mlx5_hws_cnt_pool *cpool) +{ + struct rte_ring *reset_list = cpool->wait_reset_list; + struct rte_ring *reuse_list = cpool->reuse_list; + uint32_t reset_cnt_num; + struct rte_ring_zc_data zcdr = {0}; + struct rte_ring_zc_data zcdu = {0}; + + reset_cnt_num = rte_ring_count(reset_list); + do { + cpool->query_gen++; + mlx5_aso_cnt_query(sh, cpool); + zcdr.n1 = 0; + zcdu.n1 = 0; + rte_ring_enqueue_zc_burst_elem_start(reuse_list, + sizeof(cnt_id_t), reset_cnt_num, &zcdu, + NULL); + rte_ring_dequeue_zc_burst_elem_start(reset_list, + sizeof(cnt_id_t), reset_cnt_num, &zcdr, + NULL); + __hws_cnt_r2rcpy(&zcdu, &zcdr, reset_cnt_num); + rte_ring_dequeue_zc_elem_finish(reset_list, + reset_cnt_num); + rte_ring_enqueue_zc_elem_finish(reuse_list, + reset_cnt_num); + reset_cnt_num = rte_ring_count(reset_list); + } while (reset_cnt_num > 0); +} + +static void +mlx5_hws_cnt_raw_data_free(struct mlx5_dev_ctx_shared *sh, + struct mlx5_hws_cnt_raw_data_mng *mng) +{ + if (mng == NULL) + return; + sh->cdev->mr_scache.dereg_mr_cb(&mng->mr); + mlx5_free(mng->raw); + mlx5_free(mng); +} + +__rte_unused +static struct mlx5_hws_cnt_raw_data_mng * +mlx5_hws_cnt_raw_data_alloc(struct mlx5_dev_ctx_shared *sh, uint32_t n) +{ + struct mlx5_hws_cnt_raw_data_mng *mng = NULL; + int ret; + size_t sz = n * sizeof(struct flow_counter_stats); + + mng = mlx5_malloc(MLX5_MEM_ANY | MLX5_MEM_ZERO, sizeof(*mng), 0, + SOCKET_ID_ANY); + if (mng == NULL) + goto error; + mng->raw = mlx5_malloc(MLX5_MEM_ANY | MLX5_MEM_ZERO, sz, 0, + SOCKET_ID_ANY); + if (mng->raw == NULL) + goto error; + ret = sh->cdev->mr_scache.reg_mr_cb(sh->cdev->pd, mng->raw, sz, + &mng->mr); + if (ret) { + rte_errno = errno; + goto error; + } + return mng; +error: + mlx5_hws_cnt_raw_data_free(sh, mng); + return NULL; +} + +static void * +mlx5_hws_cnt_svc(void *opaque) +{ + struct mlx5_dev_ctx_shared *sh = + (struct mlx5_dev_ctx_shared *)opaque; + uint64_t interval = + (uint64_t)sh->cnt_svc->query_interval * (US_PER_S / MS_PER_S); + uint16_t port_id; + uint64_t start_cycle, query_cycle = 0; + uint64_t query_us; + uint64_t sleep_us; + + while (sh->cnt_svc->svc_running != 0) { + start_cycle = rte_rdtsc(); + MLX5_ETH_FOREACH_DEV(port_id, sh->cdev->dev) { + struct mlx5_priv *opriv = + rte_eth_devices[port_id].data->dev_private; + if (opriv != NULL && + opriv->sh == sh && + opriv->hws_cpool != NULL) { + __mlx5_hws_cnt_svc(sh, opriv->hws_cpool); + } + } + query_cycle = rte_rdtsc() - start_cycle; + query_us = query_cycle / (rte_get_timer_hz() / US_PER_S); + sleep_us = interval - query_us; + if (interval > query_us) + rte_delay_us_sleep(sleep_us); + } + return NULL; +} + +struct mlx5_hws_cnt_pool * +mlx5_hws_cnt_pool_init(const struct mlx5_hws_cnt_pool_cfg *pcfg, + const struct mlx5_hws_cache_param *ccfg) +{ + char mz_name[RTE_MEMZONE_NAMESIZE]; + struct mlx5_hws_cnt_pool *cntp; + uint64_t cnt_num = 0; + uint32_t qidx; + + MLX5_ASSERT(pcfg); + MLX5_ASSERT(ccfg); + cntp = mlx5_malloc(MLX5_MEM_ANY | MLX5_MEM_ZERO, sizeof(*cntp), 0, + 
SOCKET_ID_ANY); + if (cntp == NULL) + return NULL; + + cntp->cfg = *pcfg; + cntp->cache = mlx5_malloc(MLX5_MEM_ANY | MLX5_MEM_ZERO, + sizeof(*cntp->cache) + + sizeof(((struct mlx5_hws_cnt_pool_caches *)0)->qcache[0]) + * ccfg->q_num, 0, SOCKET_ID_ANY); + if (cntp->cache == NULL) + goto error; + /* store the necessary cache parameters. */ + cntp->cache->fetch_sz = ccfg->fetch_sz; + cntp->cache->preload_sz = ccfg->preload_sz; + cntp->cache->threshold = ccfg->threshold; + cntp->cache->q_num = ccfg->q_num; + cnt_num = pcfg->request_num * (100 + pcfg->alloc_factor) / 100; + if (cnt_num > UINT32_MAX) { + DRV_LOG(ERR, "counter number %lu is out of 32bit range", + cnt_num); + goto error; + } + cntp->pool = mlx5_malloc(MLX5_MEM_ANY | MLX5_MEM_ZERO, + sizeof(struct mlx5_hws_cnt) * + pcfg->request_num * (100 + pcfg->alloc_factor) / 100, + 0, SOCKET_ID_ANY); + if (cntp->pool == NULL) + goto error; + snprintf(mz_name, sizeof(mz_name), "%s_F_RING", pcfg->name); + cntp->free_list = rte_ring_create_elem(mz_name, sizeof(cnt_id_t), + (uint32_t)cnt_num, SOCKET_ID_ANY, + RING_F_SP_ENQ | RING_F_MC_HTS_DEQ | RING_F_EXACT_SZ); + if (cntp->free_list == NULL) { + DRV_LOG(ERR, "failed to create free list ring"); + goto error; + } + snprintf(mz_name, sizeof(mz_name), "%s_R_RING", pcfg->name); + cntp->wait_reset_list = rte_ring_create_elem(mz_name, sizeof(cnt_id_t), + (uint32_t)cnt_num, SOCKET_ID_ANY, + RING_F_MP_HTS_ENQ | RING_F_SC_DEQ | RING_F_EXACT_SZ); + if (cntp->wait_reset_list == NULL) { + DRV_LOG(ERR, "failed to create free list ring"); + goto error; + } + snprintf(mz_name, sizeof(mz_name), "%s_U_RING", pcfg->name); + cntp->reuse_list = rte_ring_create_elem(mz_name, sizeof(cnt_id_t), + (uint32_t)cnt_num, SOCKET_ID_ANY, + RING_F_SP_ENQ | RING_F_MC_HTS_DEQ | RING_F_EXACT_SZ); + if (cntp->reuse_list == NULL) { + DRV_LOG(ERR, "failed to create reuse list ring"); + goto error; + } + for (qidx = 0; qidx < ccfg->q_num; qidx++) { + snprintf(mz_name, sizeof(mz_name), "%s_cache/%u", pcfg->name, + qidx); + cntp->cache->qcache[qidx] = rte_ring_create(mz_name, ccfg->size, + SOCKET_ID_ANY, + RING_F_SP_ENQ | RING_F_SC_DEQ | + RING_F_EXACT_SZ); + if (cntp->cache->qcache[qidx] == NULL) + goto error; + } + return cntp; +error: + mlx5_hws_cnt_pool_deinit(cntp); + return NULL; +} + +void +mlx5_hws_cnt_pool_deinit(struct mlx5_hws_cnt_pool * const cntp) +{ + uint32_t qidx = 0; + if (cntp == NULL) + return; + rte_ring_free(cntp->free_list); + rte_ring_free(cntp->wait_reset_list); + rte_ring_free(cntp->reuse_list); + if (cntp->cache) { + for (qidx = 0; qidx < cntp->cache->q_num; qidx++) + rte_ring_free(cntp->cache->qcache[qidx]); + } + mlx5_free(cntp->cache); + mlx5_free(cntp->raw_mng); + mlx5_free(cntp->pool); + mlx5_free(cntp); +} + +int +mlx5_hws_cnt_service_thread_create(struct mlx5_dev_ctx_shared *sh) +{ + char name[NAME_MAX]; + cpu_set_t cpuset; + int ret; + uint32_t service_core = sh->cnt_svc->service_core; + + CPU_ZERO(&cpuset); + sh->cnt_svc->svc_running = 1; + ret = pthread_create(&sh->cnt_svc->service_thread, NULL, + mlx5_hws_cnt_svc, sh); + if (ret != 0) { + DRV_LOG(ERR, "Failed to create HW steering's counter service thread."); + return -ENOSYS; + } + snprintf(name, NAME_MAX - 1, "%s/svc@%d", + sh->ibdev_name, service_core); + rte_thread_setname(sh->cnt_svc->service_thread, name); + CPU_SET(service_core, &cpuset); + pthread_setaffinity_np(sh->cnt_svc->service_thread, sizeof(cpuset), + &cpuset); + return 0; +} + +void +mlx5_hws_cnt_service_thread_destroy(struct mlx5_dev_ctx_shared *sh) +{ + if 
(sh->cnt_svc->service_thread == 0) + return; + sh->cnt_svc->svc_running = 0; + pthread_join(sh->cnt_svc->service_thread, NULL); + sh->cnt_svc->service_thread = 0; +} + +int +mlx5_hws_cnt_pool_dcs_alloc(struct mlx5_dev_ctx_shared *sh, + struct mlx5_hws_cnt_pool *cpool) +{ + struct mlx5_hca_attr *hca_attr = &sh->cdev->config.hca_attr; + uint32_t max_log_bulk_sz = 0; + uint32_t log_bulk_sz; + uint32_t idx, alloced = 0; + unsigned int cnt_num = mlx5_hws_cnt_pool_get_size(cpool); + struct mlx5_devx_counter_attr attr = {0}; + struct mlx5_devx_obj *dcs; + + if (hca_attr->flow_counter_bulk_log_max_alloc == 0) { + DRV_LOG(ERR, + "Fw doesn't support bulk log max alloc"); + return -1; + } + max_log_bulk_sz = 23; /* hard code to 8M (1 << 23). */ + cnt_num = RTE_ALIGN_CEIL(cnt_num, 4); /* minimal 4 counter in bulk. */ + log_bulk_sz = RTE_MIN(max_log_bulk_sz, rte_log2_u32(cnt_num)); + attr.pd = sh->cdev->pdn; + attr.pd_valid = 1; + attr.bulk_log_max_alloc = 1; + attr.flow_counter_bulk_log_size = log_bulk_sz; + idx = 0; + dcs = mlx5_devx_cmd_flow_counter_alloc_general(sh->cdev->ctx, &attr); + if (dcs == NULL) + goto error; + cpool->dcs_mng.dcs[idx].obj = dcs; + cpool->dcs_mng.dcs[idx].batch_sz = (1 << log_bulk_sz); + cpool->dcs_mng.batch_total++; + idx++; + cpool->dcs_mng.dcs[0].iidx = 0; + alloced = cpool->dcs_mng.dcs[0].batch_sz; + if (cnt_num > cpool->dcs_mng.dcs[0].batch_sz) { + for (; idx < MLX5_HWS_CNT_DCS_NUM; idx++) { + attr.flow_counter_bulk_log_size = --max_log_bulk_sz; + dcs = mlx5_devx_cmd_flow_counter_alloc_general + (sh->cdev->ctx, &attr); + if (dcs == NULL) + goto error; + cpool->dcs_mng.dcs[idx].obj = dcs; + cpool->dcs_mng.dcs[idx].batch_sz = + (1 << max_log_bulk_sz); + cpool->dcs_mng.dcs[idx].iidx = alloced; + alloced += cpool->dcs_mng.dcs[idx].batch_sz; + cpool->dcs_mng.batch_total++; + } + } + return 0; +error: + DRV_LOG(DEBUG, + "Cannot alloc device counter, allocated[%" PRIu32 "] request[%" PRIu32 "]", + alloced, cnt_num); + for (idx = 0; idx < cpool->dcs_mng.batch_total; idx++) { + mlx5_devx_cmd_destroy(cpool->dcs_mng.dcs[idx].obj); + cpool->dcs_mng.dcs[idx].obj = NULL; + cpool->dcs_mng.dcs[idx].batch_sz = 0; + cpool->dcs_mng.dcs[idx].iidx = 0; + } + cpool->dcs_mng.batch_total = 0; + return -1; +} + +void +mlx5_hws_cnt_pool_dcs_free(struct mlx5_dev_ctx_shared *sh, + struct mlx5_hws_cnt_pool *cpool) +{ + uint32_t idx; + + if (cpool == NULL) + return; + for (idx = 0; idx < MLX5_HWS_CNT_DCS_NUM; idx++) + mlx5_devx_cmd_destroy(cpool->dcs_mng.dcs[idx].obj); + if (cpool->raw_mng) { + mlx5_hws_cnt_raw_data_free(sh, cpool->raw_mng); + cpool->raw_mng = NULL; + } +} + +int +mlx5_hws_cnt_pool_action_create(struct mlx5_priv *priv, + struct mlx5_hws_cnt_pool *cpool) +{ + uint32_t idx; + int ret = 0; + struct mlx5_hws_cnt_dcs *dcs; + uint32_t flags; + + flags = MLX5DR_ACTION_FLAG_HWS_RX | MLX5DR_ACTION_FLAG_HWS_TX; + if (priv->sh->config.dv_esw_en && priv->master) + flags |= MLX5DR_ACTION_FLAG_HWS_FDB; + for (idx = 0; idx < cpool->dcs_mng.batch_total; idx++) { + dcs = &cpool->dcs_mng.dcs[idx]; + dcs->dr_action = mlx5dr_action_create_counter(priv->dr_ctx, + (struct mlx5dr_devx_obj *)dcs->obj, + flags); + if (dcs->dr_action == NULL) { + mlx5_hws_cnt_pool_action_destroy(cpool); + ret = -ENOSYS; + break; + } + } + return ret; +} + +void +mlx5_hws_cnt_pool_action_destroy(struct mlx5_hws_cnt_pool *cpool) +{ + uint32_t idx; + struct mlx5_hws_cnt_dcs *dcs; + + for (idx = 0; idx < cpool->dcs_mng.batch_total; idx++) { + dcs = &cpool->dcs_mng.dcs[idx]; + if (dcs->dr_action != NULL) { + 
mlx5dr_action_destroy(dcs->dr_action); + dcs->dr_action = NULL; + } + } +} + +struct mlx5_hws_cnt_pool * +mlx5_hws_cnt_pool_create(struct rte_eth_dev *dev, + const struct rte_flow_port_attr *pattr, uint16_t nb_queue) +{ + struct mlx5_hws_cnt_pool *cpool = NULL; + struct mlx5_priv *priv = dev->data->dev_private; + struct mlx5_hws_cache_param cparam = {0}; + struct mlx5_hws_cnt_pool_cfg pcfg = {0}; + char *mp_name; + int ret = 0; + size_t sz; + + /* init cnt service if not. */ + if (priv->sh->cnt_svc == NULL) { + ret = mlx5_hws_cnt_svc_init(priv->sh); + if (ret != 0) + return NULL; + } + cparam.fetch_sz = HWS_CNT_CACHE_FETCH_DEFAULT; + cparam.preload_sz = HWS_CNT_CACHE_PRELOAD_DEFAULT; + cparam.q_num = nb_queue; + cparam.threshold = HWS_CNT_CACHE_THRESHOLD_DEFAULT; + cparam.size = HWS_CNT_CACHE_SZ_DEFAULT; + pcfg.alloc_factor = HWS_CNT_ALLOC_FACTOR_DEFAULT; + mp_name = mlx5_malloc(MLX5_MEM_ZERO, RTE_MEMZONE_NAMESIZE, 0, + SOCKET_ID_ANY); + if (mp_name == NULL) + goto error; + snprintf(mp_name, RTE_MEMZONE_NAMESIZE, "MLX5_HWS_CNT_POOL_%u", + dev->data->port_id); + pcfg.name = mp_name; + pcfg.request_num = pattr->nb_counters; + cpool = mlx5_hws_cnt_pool_init(&pcfg, &cparam); + if (cpool == NULL) + goto error; + ret = mlx5_hws_cnt_pool_dcs_alloc(priv->sh, cpool); + if (ret != 0) + goto error; + sz = RTE_ALIGN_CEIL(mlx5_hws_cnt_pool_get_size(cpool), 4); + cpool->raw_mng = mlx5_hws_cnt_raw_data_alloc(priv->sh, sz); + if (cpool->raw_mng == NULL) + goto error; + __hws_cnt_id_load(cpool); + /* + * Bump query gen right after pool create so the + * pre-loaded counters can be used directly + * because they already have init value no need + * to wait for query. + */ + cpool->query_gen = 1; + ret = mlx5_hws_cnt_pool_action_create(priv, cpool); + if (ret != 0) + goto error; + priv->sh->cnt_svc->refcnt++; + return cpool; +error: + mlx5_hws_cnt_pool_destroy(priv->sh, cpool); + return NULL; +} + +void +mlx5_hws_cnt_pool_destroy(struct mlx5_dev_ctx_shared *sh, + struct mlx5_hws_cnt_pool *cpool) +{ + if (cpool == NULL) + return; + if (--sh->cnt_svc->refcnt == 0) + mlx5_hws_cnt_svc_deinit(sh); + mlx5_hws_cnt_pool_action_destroy(cpool); + mlx5_hws_cnt_pool_dcs_free(sh, cpool); + mlx5_hws_cnt_raw_data_free(sh, cpool->raw_mng); + mlx5_free((void *)cpool->cfg.name); + mlx5_hws_cnt_pool_deinit(cpool); +} + +int +mlx5_hws_cnt_svc_init(struct mlx5_dev_ctx_shared *sh) +{ + int ret; + + sh->cnt_svc = mlx5_malloc(MLX5_MEM_ANY | MLX5_MEM_ZERO, + sizeof(*sh->cnt_svc), 0, SOCKET_ID_ANY); + if (sh->cnt_svc == NULL) + return -1; + sh->cnt_svc->query_interval = sh->config.cnt_svc.cycle_time; + sh->cnt_svc->service_core = sh->config.cnt_svc.service_core; + ret = mlx5_aso_cnt_queue_init(sh); + if (ret != 0) { + mlx5_free(sh->cnt_svc); + sh->cnt_svc = NULL; + return -1; + } + ret = mlx5_hws_cnt_service_thread_create(sh); + if (ret != 0) { + mlx5_aso_cnt_queue_uninit(sh); + mlx5_free(sh->cnt_svc); + sh->cnt_svc = NULL; + } + return 0; +} + +void +mlx5_hws_cnt_svc_deinit(struct mlx5_dev_ctx_shared *sh) +{ + if (sh->cnt_svc == NULL) + return; + mlx5_hws_cnt_service_thread_destroy(sh); + mlx5_aso_cnt_queue_uninit(sh); + mlx5_free(sh->cnt_svc); + sh->cnt_svc = NULL; +} diff --git a/drivers/net/mlx5/mlx5_hws_cnt.h b/drivers/net/mlx5/mlx5_hws_cnt.h new file mode 100644 index 0000000000..312b053c59 --- /dev/null +++ b/drivers/net/mlx5/mlx5_hws_cnt.h @@ -0,0 +1,558 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright 2022 Mellanox Technologies, Ltd + */ + +#ifndef _MLX5_HWS_CNT_H_ +#define _MLX5_HWS_CNT_H_ + +#include +#include 
"mlx5_utils.h" +#include "mlx5_flow.h" + +/* + * COUNTER ID's layout + * 3 2 1 0 + * 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0 + * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ + * | T | | D | | + * ~ Y | | C | IDX ~ + * | P | | S | | + * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ + * + * Bit 31:30 = TYPE = MLX5_INDIRECT_ACTION_TYPE_COUNT = b'10 + * Bit 25:24 = DCS index + * Bit 23:00 = IDX in this counter belonged DCS bulk. + */ +typedef uint32_t cnt_id_t; + +#define MLX5_HWS_CNT_DCS_NUM 4 +#define MLX5_HWS_CNT_DCS_IDX_OFFSET 24 +#define MLX5_HWS_CNT_DCS_IDX_MASK 0x3 +#define MLX5_HWS_CNT_IDX_MASK ((1UL << MLX5_HWS_CNT_DCS_IDX_OFFSET) - 1) + +struct mlx5_hws_cnt_dcs { + void *dr_action; + uint32_t batch_sz; + uint32_t iidx; /* internal index of first counter in this bulk. */ + struct mlx5_devx_obj *obj; +}; + +struct mlx5_hws_cnt_dcs_mng { + uint32_t batch_total; + struct mlx5_hws_cnt_dcs dcs[MLX5_HWS_CNT_DCS_NUM]; +}; + +struct mlx5_hws_cnt { + struct flow_counter_stats reset; + union { + uint32_t share: 1; + /* + * share will be set to 1 when this counter is used as indirect + * action. Only meaningful when user own this counter. + */ + uint32_t query_gen_when_free; + /* + * When PMD own this counter (user put back counter to PMD + * counter pool, i.e), this field recorded value of counter + * pools query generation at time user release the counter. + */ + }; +}; + +struct mlx5_hws_cnt_raw_data_mng { + struct flow_counter_stats *raw; + struct mlx5_pmd_mr mr; +}; + +struct mlx5_hws_cache_param { + uint32_t size; + uint32_t q_num; + uint32_t fetch_sz; + uint32_t threshold; + uint32_t preload_sz; +}; + +struct mlx5_hws_cnt_pool_cfg { + char *name; + uint32_t request_num; + uint32_t alloc_factor; +}; + +struct mlx5_hws_cnt_pool_caches { + uint32_t fetch_sz; + uint32_t threshold; + uint32_t preload_sz; + uint32_t q_num; + struct rte_ring *qcache[]; +}; + +struct mlx5_hws_cnt_pool { + struct mlx5_hws_cnt_pool_cfg cfg __rte_cache_aligned; + struct mlx5_hws_cnt_dcs_mng dcs_mng __rte_cache_aligned; + uint32_t query_gen __rte_cache_aligned; + struct mlx5_hws_cnt *pool; + struct mlx5_hws_cnt_raw_data_mng *raw_mng; + struct rte_ring *reuse_list; + struct rte_ring *free_list; + struct rte_ring *wait_reset_list; + struct mlx5_hws_cnt_pool_caches *cache; +} __rte_cache_aligned; + +/** + * Translate counter id into internal index (start from 0), which can be used + * as index of raw/cnt pool. + * + * @param cnt_id + * The external counter id + * @return + * Internal index + */ +static __rte_always_inline cnt_id_t +mlx5_hws_cnt_iidx(struct mlx5_hws_cnt_pool *cpool, cnt_id_t cnt_id) +{ + uint8_t dcs_idx = cnt_id >> MLX5_HWS_CNT_DCS_IDX_OFFSET; + uint32_t offset = cnt_id & MLX5_HWS_CNT_IDX_MASK; + + dcs_idx &= MLX5_HWS_CNT_DCS_IDX_MASK; + return (cpool->dcs_mng.dcs[dcs_idx].iidx + offset); +} + +/** + * Check if it's valid counter id. + */ +static __rte_always_inline bool +mlx5_hws_cnt_id_valid(cnt_id_t cnt_id) +{ + return (cnt_id >> MLX5_INDIRECT_ACTION_TYPE_OFFSET) == + MLX5_INDIRECT_ACTION_TYPE_COUNT ? true : false; +} + +/** + * Generate Counter id from internal index. + * + * @param cpool + * The pointer to counter pool + * @param index + * The internal counter index. 
+ *
+ * @return
+ *   Counter id
+ */
+static __rte_always_inline cnt_id_t
+mlx5_hws_cnt_id_gen(struct mlx5_hws_cnt_pool *cpool, cnt_id_t iidx)
+{
+	struct mlx5_hws_cnt_dcs_mng *dcs_mng = &cpool->dcs_mng;
+	uint32_t idx;
+	uint32_t offset;
+	cnt_id_t cnt_id;
+
+	for (idx = 0, offset = iidx; idx < dcs_mng->batch_total; idx++) {
+		if (dcs_mng->dcs[idx].batch_sz <= offset)
+			offset -= dcs_mng->dcs[idx].batch_sz;
+		else
+			break;
+	}
+	cnt_id = offset;
+	cnt_id |= (idx << MLX5_HWS_CNT_DCS_IDX_OFFSET);
+	return (MLX5_INDIRECT_ACTION_TYPE_COUNT <<
+			MLX5_INDIRECT_ACTION_TYPE_OFFSET) | cnt_id;
+}
+
+static __rte_always_inline void
+__hws_cnt_query_raw(struct mlx5_hws_cnt_pool *cpool, cnt_id_t cnt_id,
+		uint64_t *raw_pkts, uint64_t *raw_bytes)
+{
+	struct mlx5_hws_cnt_raw_data_mng *raw_mng = cpool->raw_mng;
+	struct flow_counter_stats s[2];
+	uint8_t i = 0x1;
+	size_t stat_sz = sizeof(s[0]);
+	uint32_t iidx = mlx5_hws_cnt_iidx(cpool, cnt_id);
+
+	/*
+	 * The raw data may be updated concurrently by the service thread,
+	 * so re-read until two consecutive snapshots match.
+	 */
+	memcpy(&s[0], &raw_mng->raw[iidx], stat_sz);
+	do {
+		memcpy(&s[i & 1], &raw_mng->raw[iidx], stat_sz);
+		if (memcmp(&s[0], &s[1], stat_sz) == 0) {
+			*raw_pkts = rte_be_to_cpu_64(s[0].hits);
+			*raw_bytes = rte_be_to_cpu_64(s[0].bytes);
+			break;
+		}
+		i = ~i;
+	} while (1);
+}
+
+/**
+ * Copy elements from one zero-copy ring to another zero-copy ring in place.
+ *
+ * The input is an rte ring zero-copy data structure, which has two pointers;
+ * ptr2 is only meaningful when a wrap-around happened.
+ *
+ * This routine therefore has to consider the situation that the address
+ * ranges given by both the source and the destination may be wrapped.
+ * First, calculate the number of elements to be copied up to the first
+ * wrapped address, which can be in the source or in the destination.
+ * Second, copy the elements up to the second wrapped address; if in the
+ * first step the wrapped address was in the source, this time it must be
+ * in the destination, and vice versa.
+ * Third, copy all the elements left.
+ *
+ * In the worst case, three pieces of contiguous memory need to be copied.
+ *
+ * @param zcdd
+ *   A pointer to the zero-copy data of the destination ring.
+ * @param zcds
+ *   A pointer to the zero-copy data of the source ring.
+ * @param n
+ *   Number of elements to copy.
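+ *
+ * Worked example (illustrative): for n = 6 with the source wrapping
+ * after 4 elements and the destination wrapping after 5, the first
+ * piece copies 4 elements (up to the source wrap), the second piece
+ * copies 1 element (up to the destination wrap), and the third piece
+ * copies the remaining 1 element.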
+ */ +static __rte_always_inline void +__hws_cnt_r2rcpy(struct rte_ring_zc_data *zcdd, struct rte_ring_zc_data *zcds, + unsigned int n) +{ + unsigned int n1, n2, n3; + void *s1, *s2, *s3; + void *d1, *d2, *d3; + + s1 = zcds->ptr1; + d1 = zcdd->ptr1; + n1 = RTE_MIN(zcdd->n1, zcds->n1); + if (zcds->n1 > n1) { + n2 = zcds->n1 - n1; + s2 = RTE_PTR_ADD(zcds->ptr1, sizeof(cnt_id_t) * n1); + d2 = zcdd->ptr2; + n3 = n - n1 - n2; + s3 = zcds->ptr2; + d3 = RTE_PTR_ADD(zcdd->ptr2, sizeof(cnt_id_t) * n2); + } else { + n2 = zcdd->n1 - n1; + s2 = zcds->ptr2; + d2 = RTE_PTR_ADD(zcdd->ptr1, sizeof(cnt_id_t) * n1); + n3 = n - n1 - n2; + s3 = RTE_PTR_ADD(zcds->ptr2, sizeof(cnt_id_t) * n2); + d3 = zcdd->ptr2; + } + memcpy(d1, s1, n1 * sizeof(cnt_id_t)); + if (n2 != 0) { + memcpy(d2, s2, n2 * sizeof(cnt_id_t)); + if (n3 != 0) + memcpy(d3, s3, n3 * sizeof(cnt_id_t)); + } +} + +static __rte_always_inline int +mlx5_hws_cnt_pool_cache_flush(struct mlx5_hws_cnt_pool *cpool, + uint32_t queue_id) +{ + unsigned int ret; + struct rte_ring_zc_data zcdr = {0}; + struct rte_ring_zc_data zcdc = {0}; + struct rte_ring *reset_list = NULL; + struct rte_ring *qcache = cpool->cache->qcache[queue_id]; + + ret = rte_ring_dequeue_zc_burst_elem_start(qcache, + sizeof(cnt_id_t), rte_ring_count(qcache), &zcdc, + NULL); + MLX5_ASSERT(ret); + reset_list = cpool->wait_reset_list; + rte_ring_enqueue_zc_burst_elem_start(reset_list, + sizeof(cnt_id_t), ret, &zcdr, NULL); + __hws_cnt_r2rcpy(&zcdr, &zcdc, ret); + rte_ring_enqueue_zc_elem_finish(reset_list, ret); + rte_ring_dequeue_zc_elem_finish(qcache, ret); + return 0; +} + +static __rte_always_inline int +mlx5_hws_cnt_pool_cache_fetch(struct mlx5_hws_cnt_pool *cpool, + uint32_t queue_id) +{ + struct rte_ring *qcache = cpool->cache->qcache[queue_id]; + struct rte_ring *free_list = NULL; + struct rte_ring *reuse_list = NULL; + struct rte_ring *list = NULL; + struct rte_ring_zc_data zcdf = {0}; + struct rte_ring_zc_data zcdc = {0}; + struct rte_ring_zc_data zcdu = {0}; + struct rte_ring_zc_data zcds = {0}; + struct mlx5_hws_cnt_pool_caches *cache = cpool->cache; + unsigned int ret; + + reuse_list = cpool->reuse_list; + ret = rte_ring_dequeue_zc_burst_elem_start(reuse_list, + sizeof(cnt_id_t), cache->fetch_sz, &zcdu, NULL); + zcds = zcdu; + list = reuse_list; + if (unlikely(ret == 0)) { /* no reuse counter. */ + rte_ring_dequeue_zc_elem_finish(reuse_list, 0); + free_list = cpool->free_list; + ret = rte_ring_dequeue_zc_burst_elem_start(free_list, + sizeof(cnt_id_t), cache->fetch_sz, &zcdf, NULL); + zcds = zcdf; + list = free_list; + if (unlikely(ret == 0)) { /* no free counter. */ + rte_ring_dequeue_zc_elem_finish(free_list, 0); + if (rte_ring_count(cpool->wait_reset_list)) + return -EAGAIN; + return -ENOENT; + } + } + rte_ring_enqueue_zc_burst_elem_start(qcache, sizeof(cnt_id_t), + ret, &zcdc, NULL); + __hws_cnt_r2rcpy(&zcdc, &zcds, ret); + rte_ring_dequeue_zc_elem_finish(list, ret); + rte_ring_enqueue_zc_elem_finish(qcache, ret); + return 0; +} + +static __rte_always_inline int +__mlx5_hws_cnt_pool_enqueue_revert(struct rte_ring *r, unsigned int n, + struct rte_ring_zc_data *zcd) +{ + uint32_t current_head = 0; + uint32_t revert2head = 0; + + MLX5_ASSERT(r->prod.sync_type == RTE_RING_SYNC_ST); + MLX5_ASSERT(r->cons.sync_type == RTE_RING_SYNC_ST); + current_head = __atomic_load_n(&r->prod.head, __ATOMIC_RELAXED); + MLX5_ASSERT(n <= r->capacity); + MLX5_ASSERT(n <= rte_ring_count(r)); + revert2head = current_head - n; + r->prod.head = revert2head; /* This ring should be SP. 
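+	 * Moving prod.head back by n exposes the n most recently
+	 * enqueued elements through the zero-copy data below, so the
+	 * caller can copy them out (e.g. to the wait-reset list).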
+	 */
+	__rte_ring_get_elem_addr(r, revert2head, sizeof(cnt_id_t), n,
+			&zcd->ptr1, &zcd->n1, &zcd->ptr2);
+	/* Update tail */
+	__atomic_store_n(&r->prod.tail, revert2head, __ATOMIC_RELEASE);
+	return n;
+}
+
+/**
+ * Put one counter back into the counter pool.
+ *
+ * @param cpool
+ *   A pointer to the counter pool structure.
+ * @param queue
+ *   A pointer to the HWS queue. If null, the counter bypasses the queue
+ *   cache and goes directly to the wait-reset list.
+ * @param cnt_id
+ *   The counter id to be put back.
+ * @return
+ *   - 0: Success; the counter is put back.
+ *   - -ENOENT: failed to enqueue the counter to the reset list.
+ */
+static __rte_always_inline int
+mlx5_hws_cnt_pool_put(struct mlx5_hws_cnt_pool *cpool,
+		uint32_t *queue, cnt_id_t *cnt_id)
+{
+	unsigned int ret = 0;
+	struct rte_ring_zc_data zcdc = {0};
+	struct rte_ring_zc_data zcdr = {0};
+	struct rte_ring *qcache = NULL;
+	unsigned int wb_num = 0; /* cache write-back number. */
+	cnt_id_t iidx;
+
+	iidx = mlx5_hws_cnt_iidx(cpool, *cnt_id);
+	cpool->pool[iidx].query_gen_when_free =
+		__atomic_load_n(&cpool->query_gen, __ATOMIC_RELAXED);
+	if (likely(queue != NULL))
+		qcache = cpool->cache->qcache[*queue];
+	if (unlikely(qcache == NULL)) {
+		ret = rte_ring_enqueue_elem(cpool->wait_reset_list, cnt_id,
+				sizeof(cnt_id_t));
+		MLX5_ASSERT(ret == 0);
+		return ret;
+	}
+	ret = rte_ring_enqueue_burst_elem(qcache, cnt_id, sizeof(cnt_id_t), 1,
+			NULL);
+	if (unlikely(ret == 0)) { /* cache is full. */
+		wb_num = rte_ring_count(qcache) - cpool->cache->threshold;
+		MLX5_ASSERT(wb_num < rte_ring_count(qcache));
+		__mlx5_hws_cnt_pool_enqueue_revert(qcache, wb_num, &zcdc);
+		rte_ring_enqueue_zc_burst_elem_start(cpool->wait_reset_list,
+				sizeof(cnt_id_t), wb_num, &zcdr, NULL);
+		__hws_cnt_r2rcpy(&zcdr, &zcdc, wb_num);
+		rte_ring_enqueue_zc_elem_finish(cpool->wait_reset_list,
+				wb_num);
+		/* write-back THIS counter too */
+		ret = rte_ring_enqueue_burst_elem(cpool->wait_reset_list,
+				cnt_id, sizeof(cnt_id_t), 1, NULL);
+	}
+	return ret == 1 ? 0 : -ENOENT;
+}
+
+/**
+ * Get one counter from the pool.
+ *
+ * If @p queue is not null, objects are retrieved first from the queue's
+ * cache and subsequently from the common pool. Note that it can return
+ * -ENOENT when the local cache and the common pool are empty, even if the
+ * caches of other queues are full.
+ *
+ * @param cpool
+ *   A pointer to the counter pool structure.
+ * @param queue
+ *   A pointer to the HWS queue. If null, fetch from the common pool.
+ * @param cnt_id
+ *   A pointer to a cnt_id_t (counter id) that will be filled.
+ * @return
+ *   - 0: Success; objects taken.
+ *   - -ENOENT: Not enough entries in the pool; no object is retrieved.
+ *   - -EAGAIN: counter is not ready; try again.
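+ *
+ * Usage sketch (illustrative only; assumes an initialized pool "cpool"
+ * and a valid HWS queue index "queue"):
+ *
+ *	cnt_id_t cnt_id;
+ *	int ret = mlx5_hws_cnt_pool_get(cpool, &queue, &cnt_id);
+ *
+ *	if (ret == 0) {
+ *		(attach the counter to a rule, query it, ...)
+ *		mlx5_hws_cnt_pool_put(cpool, &queue, &cnt_id);
+ *	} else if (ret == -EAGAIN) {
+ *		(retry after the service thread completes a query cycle)
+ *	}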
+ */ +static __rte_always_inline int +mlx5_hws_cnt_pool_get(struct mlx5_hws_cnt_pool *cpool, + uint32_t *queue, cnt_id_t *cnt_id) +{ + unsigned int ret; + struct rte_ring_zc_data zcdc = {0}; + struct rte_ring *qcache = NULL; + uint32_t query_gen = 0; + cnt_id_t iidx, tmp_cid = 0; + + if (likely(queue != NULL)) + qcache = cpool->cache->qcache[*queue]; + if (unlikely(qcache == NULL)) { + ret = rte_ring_dequeue_elem(cpool->reuse_list, &tmp_cid, + sizeof(cnt_id_t)); + if (unlikely(ret != 0)) { + ret = rte_ring_dequeue_elem(cpool->free_list, &tmp_cid, + sizeof(cnt_id_t)); + if (unlikely(ret != 0)) { + if (rte_ring_count(cpool->wait_reset_list)) + return -EAGAIN; + return -ENOENT; + } + } + *cnt_id = tmp_cid; + iidx = mlx5_hws_cnt_iidx(cpool, *cnt_id); + __hws_cnt_query_raw(cpool, *cnt_id, + &cpool->pool[iidx].reset.hits, + &cpool->pool[iidx].reset.bytes); + return 0; + } + ret = rte_ring_dequeue_zc_burst_elem_start(qcache, sizeof(cnt_id_t), 1, + &zcdc, NULL); + if (unlikely(ret == 0)) { /* local cache is empty. */ + rte_ring_dequeue_zc_elem_finish(qcache, 0); + /* let's fetch from global free list. */ + ret = mlx5_hws_cnt_pool_cache_fetch(cpool, *queue); + if (unlikely(ret != 0)) + return ret; + rte_ring_dequeue_zc_burst_elem_start(qcache, sizeof(cnt_id_t), + 1, &zcdc, NULL); + } + /* get one from local cache. */ + *cnt_id = (*(cnt_id_t *)zcdc.ptr1); + iidx = mlx5_hws_cnt_iidx(cpool, *cnt_id); + query_gen = cpool->pool[iidx].query_gen_when_free; + if (cpool->query_gen == query_gen) { /* counter is waiting to reset. */ + rte_ring_dequeue_zc_elem_finish(qcache, 0); + /* write-back counter to reset list. */ + mlx5_hws_cnt_pool_cache_flush(cpool, *queue); + /* let's fetch from global free list. */ + ret = mlx5_hws_cnt_pool_cache_fetch(cpool, *queue); + if (unlikely(ret != 0)) + return ret; + rte_ring_dequeue_zc_burst_elem_start(qcache, sizeof(cnt_id_t), + 1, &zcdc, NULL); + *cnt_id = *(cnt_id_t *)zcdc.ptr1; + } + __hws_cnt_query_raw(cpool, *cnt_id, &cpool->pool[iidx].reset.hits, + &cpool->pool[iidx].reset.bytes); + rte_ring_dequeue_zc_elem_finish(qcache, 1); + cpool->pool[iidx].share = 0; + return 0; +} + +static __always_inline unsigned int +mlx5_hws_cnt_pool_get_size(struct mlx5_hws_cnt_pool *cpool) +{ + return rte_ring_get_capacity(cpool->free_list); +} + +static __always_inline int +mlx5_hws_cnt_pool_get_action_offset(struct mlx5_hws_cnt_pool *cpool, + cnt_id_t cnt_id, struct mlx5dr_action **action, + uint32_t *offset) +{ + uint8_t idx = cnt_id >> MLX5_HWS_CNT_DCS_IDX_OFFSET; + + idx &= MLX5_HWS_CNT_DCS_IDX_MASK; + *action = cpool->dcs_mng.dcs[idx].dr_action; + *offset = cnt_id & MLX5_HWS_CNT_IDX_MASK; + return 0; +} + +static __rte_always_inline int +mlx5_hws_cnt_shared_get(struct mlx5_hws_cnt_pool *cpool, cnt_id_t *cnt_id) +{ + int ret; + uint32_t iidx; + + ret = mlx5_hws_cnt_pool_get(cpool, NULL, cnt_id); + if (ret != 0) + return ret; + iidx = mlx5_hws_cnt_iidx(cpool, *cnt_id); + MLX5_ASSERT(cpool->pool[iidx].share == 0); + cpool->pool[iidx].share = 1; + return 0; +} + +static __rte_always_inline int +mlx5_hws_cnt_shared_put(struct mlx5_hws_cnt_pool *cpool, cnt_id_t *cnt_id) +{ + int ret; + uint32_t iidx = mlx5_hws_cnt_iidx(cpool, *cnt_id); + + cpool->pool[iidx].share = 0; + ret = mlx5_hws_cnt_pool_put(cpool, NULL, cnt_id); + if (unlikely(ret != 0)) + cpool->pool[iidx].share = 1; /* fail to release, restore. 
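+	 * The counter stays marked as shared, so the release can be
+	 * retried later.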
+	 */
+	return ret;
+}
+
+static __rte_always_inline bool
+mlx5_hws_cnt_is_shared(struct mlx5_hws_cnt_pool *cpool, cnt_id_t cnt_id)
+{
+	uint32_t iidx = mlx5_hws_cnt_iidx(cpool, cnt_id);
+
+	return cpool->pool[iidx].share ? true : false;
+}
+
+/* init HWS counter pool. */
+struct mlx5_hws_cnt_pool *
+mlx5_hws_cnt_pool_init(const struct mlx5_hws_cnt_pool_cfg *pcfg,
+		const struct mlx5_hws_cache_param *ccfg);
+
+void
+mlx5_hws_cnt_pool_deinit(struct mlx5_hws_cnt_pool *cntp);
+
+int
+mlx5_hws_cnt_service_thread_create(struct mlx5_dev_ctx_shared *sh);
+
+void
+mlx5_hws_cnt_service_thread_destroy(struct mlx5_dev_ctx_shared *sh);
+
+int
+mlx5_hws_cnt_pool_dcs_alloc(struct mlx5_dev_ctx_shared *sh,
+		struct mlx5_hws_cnt_pool *cpool);
+void
+mlx5_hws_cnt_pool_dcs_free(struct mlx5_dev_ctx_shared *sh,
+		struct mlx5_hws_cnt_pool *cpool);
+
+int
+mlx5_hws_cnt_pool_action_create(struct mlx5_priv *priv,
+		struct mlx5_hws_cnt_pool *cpool);
+
+void
+mlx5_hws_cnt_pool_action_destroy(struct mlx5_hws_cnt_pool *cpool);
+
+struct mlx5_hws_cnt_pool *
+mlx5_hws_cnt_pool_create(struct rte_eth_dev *dev,
+		const struct rte_flow_port_attr *pattr, uint16_t nb_queue);
+
+void
+mlx5_hws_cnt_pool_destroy(struct mlx5_dev_ctx_shared *sh,
+		struct mlx5_hws_cnt_pool *cpool);
+
+int
+mlx5_hws_cnt_svc_init(struct mlx5_dev_ctx_shared *sh);
+
+void
+mlx5_hws_cnt_svc_deinit(struct mlx5_dev_ctx_shared *sh);
+
+#endif /* _MLX5_HWS_CNT_H_ */

From patchwork Fri Sep 23 14:43:19 2022
X-Patchwork-Submitter: Suanming Mou
X-Patchwork-Id: 116749
X-Patchwork-Delegate: thomas@monjalon.net
From: Suanming Mou
To: Matan Azrad, Viacheslav Ovsiienko
CC:
Subject: [PATCH 12/27] net/mlx5: support caching queue action
Date: Fri, 23 Sep 2022 17:43:19 +0300
Message-ID: <20220923144334.27736-13-suanmingm@nvidia.com>
In-Reply-To: <20220923144334.27736-1-suanmingm@nvidia.com>
References: <20220923144334.27736-1-suanmingm@nvidia.com>
Signed-off-by: Suanming Mou
---
 drivers/net/mlx5/mlx5.h         |  2 +
 drivers/net/mlx5/mlx5_flow.h    |  2 +
 drivers/net/mlx5/mlx5_flow_hw.c | 95 +++++++++++++++++++++++++++++----
 drivers/net/mlx5/mlx5_trigger.c |  8 +++
 4 files changed, 97 insertions(+), 10 deletions(-)

diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h index 8d82c68569..be60038810 100644 --- a/drivers/net/mlx5/mlx5.h +++ b/drivers/net/mlx5/mlx5.h @@ -1643,6 +1643,8 @@ struct mlx5_priv { struct mlx5dr_action *hw_drop[2]; /* HW steering global tag action. */ struct mlx5dr_action *hw_tag[2]; + /* HW steering create ongoing rte flow table list header. */ + LIST_HEAD(flow_hw_tbl_ongo, rte_flow_template_table) flow_hw_tbl_ongo; struct mlx5_indexed_pool *acts_ipool; /* Action data indexed pool. */ struct mlx5_hws_cnt_pool *hws_cpool; /* HW steering's counter pool.
*/ #endif diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h index cdea4076d8..746cf439fc 100644 --- a/drivers/net/mlx5/mlx5_flow.h +++ b/drivers/net/mlx5/mlx5_flow.h @@ -2398,4 +2398,6 @@ int mlx5_flow_pattern_validate(struct rte_eth_dev *dev, const struct rte_flow_pattern_template_attr *attr, const struct rte_flow_item items[], struct rte_flow_error *error); +int flow_hw_table_update(struct rte_eth_dev *dev, + struct rte_flow_error *error); #endif /* RTE_PMD_MLX5_FLOW_H_ */ diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c index 8891f4a4e3..fe40b02c49 100644 --- a/drivers/net/mlx5/mlx5_flow_hw.c +++ b/drivers/net/mlx5/mlx5_flow_hw.c @@ -992,11 +992,11 @@ flow_hw_cnt_compile(struct rte_eth_dev *dev, uint32_t start_pos, * Table on success, NULL otherwise and rte_errno is set. */ static int -flow_hw_actions_translate(struct rte_eth_dev *dev, - const struct mlx5_flow_template_table_cfg *cfg, - struct mlx5_hw_actions *acts, - struct rte_flow_actions_template *at, - struct rte_flow_error *error) +__flow_hw_actions_translate(struct rte_eth_dev *dev, + const struct mlx5_flow_template_table_cfg *cfg, + struct mlx5_hw_actions *acts, + struct rte_flow_actions_template *at, + struct rte_flow_error *error) { struct mlx5_priv *priv = dev->data->dev_private; const struct rte_flow_template_table_attr *table_attr = &cfg->attr; @@ -1309,6 +1309,40 @@ flow_hw_actions_translate(struct rte_eth_dev *dev, "fail to create rte table"); } +/** + * Translate rte_flow actions to DR action. + * + * @param[in] dev + * Pointer to the rte_eth_dev structure. + * @param[in] tbl + * Pointer to the flow template table. + * @param[out] error + * Pointer to error structure. + * + * @return + * 0 on success, negative value otherwise and rte_errno is set. + */ +static int +flow_hw_actions_translate(struct rte_eth_dev *dev, + struct rte_flow_template_table *tbl, + struct rte_flow_error *error) +{ + uint32_t i; + + for (i = 0; i < tbl->nb_action_templates; i++) { + if (__flow_hw_actions_translate(dev, &tbl->cfg, + &tbl->ats[i].acts, + tbl->ats[i].action_template, + error)) + goto err; + } + return 0; +err: + while (i--) + __flow_hw_action_template_destroy(dev, &tbl->ats[i].acts); + return -1; +} + /** * Get shared indirect action. * @@ -1837,6 +1871,10 @@ flow_hw_async_flow_create(struct rte_eth_dev *dev, uint32_t acts_num, flow_idx; int ret; + if (unlikely((!dev->data->dev_started))) { + rte_errno = EINVAL; + goto error; + } if (unlikely(!priv->hw_q[queue].job_idx)) { rte_errno = ENOMEM; goto error; @@ -2231,6 +2269,7 @@ flow_hw_table_create(struct rte_eth_dev *dev, struct mlx5_list_entry *ge; uint32_t i, max_tpl = MLX5_HW_TBL_MAX_ITEM_TEMPLATE; uint32_t nb_flows = rte_align32pow2(attr->nb_flows); + bool port_started = !!dev->data->dev_started; int err; /* HWS layer accepts only 1 item template with root table. */ @@ -2295,21 +2334,26 @@ flow_hw_table_create(struct rte_eth_dev *dev, rte_errno = EINVAL; goto at_error; } + tbl->ats[i].action_template = action_templates[i]; LIST_INIT(&tbl->ats[i].acts.act_list); - err = flow_hw_actions_translate(dev, &tbl->cfg, - &tbl->ats[i].acts, - action_templates[i], error); + if (!port_started) + continue; + err = __flow_hw_actions_translate(dev, &tbl->cfg, + &tbl->ats[i].acts, + action_templates[i], error); if (err) { i++; goto at_error; } - tbl->ats[i].action_template = action_templates[i]; } tbl->nb_action_templates = nb_action_templates; tbl->type = attr->flow_attr.transfer ? MLX5DR_TABLE_TYPE_FDB : (attr->flow_attr.egress ? 
MLX5DR_TABLE_TYPE_NIC_TX : MLX5DR_TABLE_TYPE_NIC_RX); - LIST_INSERT_HEAD(&priv->flow_hw_tbl, tbl, next); + if (port_started) + LIST_INSERT_HEAD(&priv->flow_hw_tbl, tbl, next); + else + LIST_INSERT_HEAD(&priv->flow_hw_tbl_ongo, tbl, next); return tbl; at_error: while (i--) { @@ -2339,6 +2383,33 @@ flow_hw_table_create(struct rte_eth_dev *dev, return NULL; } +/** + * Update flow template table. + * + * @param[in] dev + * Pointer to the rte_eth_dev structure. + * @param[out] error + * Pointer to error structure. + * + * @return + * 0 on success, negative value otherwise and rte_errno is set. + */ +int +flow_hw_table_update(struct rte_eth_dev *dev, + struct rte_flow_error *error) +{ + struct mlx5_priv *priv = dev->data->dev_private; + struct rte_flow_template_table *tbl; + + while ((tbl = LIST_FIRST(&priv->flow_hw_tbl_ongo)) != NULL) { + if (flow_hw_actions_translate(dev, tbl, error)) + return -1; + LIST_REMOVE(tbl, next); + LIST_INSERT_HEAD(&priv->flow_hw_tbl, tbl, next); + } + return 0; +} + /** * Translates group index specified by the user in @p attr to internal * group index. @@ -4440,6 +4511,10 @@ flow_hw_resource_release(struct rte_eth_dev *dev) if (!priv->dr_ctx) return; flow_hw_flush_all_ctrl_flows(dev); + while (!LIST_EMPTY(&priv->flow_hw_tbl_ongo)) { + tbl = LIST_FIRST(&priv->flow_hw_tbl_ongo); + flow_hw_table_destroy(dev, tbl, NULL); + } while (!LIST_EMPTY(&priv->flow_hw_tbl)) { tbl = LIST_FIRST(&priv->flow_hw_tbl); flow_hw_table_destroy(dev, tbl, NULL); diff --git a/drivers/net/mlx5/mlx5_trigger.c b/drivers/net/mlx5/mlx5_trigger.c index 9e458356a0..ab2b83a870 100644 --- a/drivers/net/mlx5/mlx5_trigger.c +++ b/drivers/net/mlx5/mlx5_trigger.c @@ -1170,6 +1170,14 @@ mlx5_dev_start(struct rte_eth_dev *dev) dev->data->port_id, rte_strerror(rte_errno)); goto error; } + if (priv->sh->config.dv_flow_en == 2) { + ret = flow_hw_table_update(dev, NULL); + if (ret) { + DRV_LOG(ERR, "port %u failed to update HWS tables", + dev->data->port_id); + goto error; + } + } ret = mlx5_traffic_enable(dev); if (ret) { DRV_LOG(ERR, "port %u failed to set defaults flows",

From patchwork Fri Sep 23 14:43:20 2022
X-Patchwork-Submitter: Suanming Mou
X-Patchwork-Id: 116752
From: Suanming Mou
To: Matan Azrad, Viacheslav Ovsiienko
CC: Dariusz Sosnowski
Subject: [PATCH 13/27] net/mlx5: support DR action template API
Date: Fri, 23 Sep 2022 17:43:20 +0300
Message-ID: <20220923144334.27736-14-suanmingm@nvidia.com>
From: Dariusz Sosnowski

This patch adapts the mlx5 PMD to the changes in the mlx5dr API regarding
action templates. It changes the following:

1. Actions template creation:
   - Flow action types are translated to mlx5dr action types in order to
     create the mlx5dr_action_template object.
   - An offset is assigned to each flow action. This offset is used to
     predetermine the action's location in the rule_acts array passed on
     rule creation.

2. Template table creation:
   - Fixed actions are created and put in the rule_acts cache using the
     predetermined offsets.
   - The mlx5dr matcher is parametrized by the action templates bound to
     the template table.
   - The mlx5dr matcher is configured to optimize rule creation based on
     passed rule indices.

3. Flow rule creation:
   - The mlx5dr rule is parametrized by the action template on which the
     rule's actions are based.
   - A rule index hint is provided to mlx5dr.
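To make the offset bookkeeping concrete, a hypothetical illustration (the
template contents and values below are made up, not taken from the driver):

	/*
	 * For an actions template [ MARK, RAW_ENCAP, COUNT, END ], template
	 * creation would record one DR slot per flow action, e.g.
	 *
	 *     at->actions_off = { 0, 1, 2 };
	 *     at->dr_actions_num = 3;
	 *
	 * Translation and rule construction then address the DR action
	 * array directly:
	 *
	 *     action_pos = at->actions_off[actions - action_start];
	 *     acts->rule_acts[action_pos].action = ...;
	 *
	 * instead of recomputing action positions on every rule creation.
	 * (A METER flow action is the exception: per the diff below it is
	 * compiled to two DR slots, ASO_METER plus a jump FT, so it would
	 * consume offsets {n, n+1}.)
	 */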
Signed-off-by: Dariusz Sosnowski
---
 drivers/net/mlx5/mlx5_flow.h    |   6 +-
 drivers/net/mlx5/mlx5_flow_hw.c | 351 ++++++++++++++++++++++++--------
 2 files changed, 268 insertions(+), 89 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h index 746cf439fc..c982cb953a 100644 --- a/drivers/net/mlx5/mlx5_flow.h +++ b/drivers/net/mlx5/mlx5_flow.h @@ -1173,6 +1173,11 @@ struct rte_flow_actions_template { struct rte_flow_actions_template_attr attr; struct rte_flow_action *actions; /* Cached flow actions. */ struct rte_flow_action *masks; /* Cached action masks.*/ + struct mlx5dr_action_template *tmpl; /* mlx5dr action template. */ + uint16_t dr_actions_num; /* Amount of DR rules actions. */ + uint16_t actions_num; /* Amount of flow actions */ + uint16_t *actions_off; /* DR action offset for given rte action offset. */ + uint16_t reformat_off; /* Offset of DR reformat action. */ uint16_t mhdr_off; /* Offset of DR modify header action. */ uint32_t refcnt; /* Reference counter. */ uint16_t rx_cpy_pos; /* Action position of Rx metadata to be copied. */ @@ -1224,7 +1229,6 @@ struct mlx5_hw_actions { /* Encap/Decap action. */ struct mlx5_hw_encap_decap_action *encap_decap; uint16_t encap_decap_pos; /* Encap/Decap action position. */ - uint32_t acts_num:4; /* Total action number. */ uint32_t mark:1; /* Indicate the mark action. */ uint32_t cnt_id; /* Counter id. */ /* Translated DR action array from action template. */ diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c index fe40b02c49..6a1ed7e790 100644 --- a/drivers/net/mlx5/mlx5_flow_hw.c +++ b/drivers/net/mlx5/mlx5_flow_hw.c @@ -913,33 +913,29 @@ flow_hw_represented_port_compile(struct rte_eth_dev *dev, static __rte_always_inline int flow_hw_meter_compile(struct rte_eth_dev *dev, const struct mlx5_flow_template_table_cfg *cfg, - uint32_t start_pos, const struct rte_flow_action *action, - struct mlx5_hw_actions *acts, uint32_t *end_pos, + uint16_t aso_mtr_pos, + uint16_t jump_pos, + const struct rte_flow_action *action, + struct mlx5_hw_actions *acts, struct rte_flow_error *error) { struct mlx5_priv *priv = dev->data->dev_private; struct mlx5_aso_mtr *aso_mtr; const struct rte_flow_action_meter *meter = action->conf; - uint32_t pos = start_pos; uint32_t group = cfg->attr.flow_attr.group; aso_mtr = mlx5_aso_meter_by_idx(priv, meter->mtr_id); - acts->rule_acts[pos].action = priv->mtr_bulk.action; - acts->rule_acts[pos].aso_meter.offset = aso_mtr->offset; - acts->jump = flow_hw_jump_action_register + acts->rule_acts[aso_mtr_pos].action = priv->mtr_bulk.action; + acts->rule_acts[aso_mtr_pos].aso_meter.offset = aso_mtr->offset; + acts->jump = flow_hw_jump_action_register (dev, cfg, aso_mtr->fm.group, error); - if (!acts->jump) { - *end_pos = start_pos; + if (!acts->jump) return -ENOMEM; - } - acts->rule_acts[++pos].action = (!!group) ?
acts->jump->hws_action : acts->jump->root_action; - *end_pos = pos; - if (mlx5_aso_mtr_wait(priv->sh, aso_mtr)) { - *end_pos = start_pos; + if (mlx5_aso_mtr_wait(priv->sh, aso_mtr)) return -ENOMEM; - } return 0; } @@ -1007,12 +1003,15 @@ __flow_hw_actions_translate(struct rte_eth_dev *dev, enum mlx5dr_action_reformat_type refmt_type = 0; const struct rte_flow_action_raw_encap *raw_encap_data; const struct rte_flow_item *enc_item = NULL, *enc_item_m = NULL; - uint16_t reformat_pos = MLX5_HW_MAX_ACTS, reformat_src = 0; + uint16_t reformat_src = 0; uint8_t *encap_data = NULL, *encap_data_m = NULL; size_t data_size = 0; struct mlx5_hw_modify_header_action mhdr = { 0 }; bool actions_end = false; - uint32_t type, i; + uint32_t type; + bool reformat_used = false; + uint16_t action_pos; + uint16_t jump_pos; int err; flow_hw_modify_field_init(&mhdr, at); @@ -1022,46 +1021,53 @@ __flow_hw_actions_translate(struct rte_eth_dev *dev, type = MLX5DR_TABLE_TYPE_NIC_TX; else type = MLX5DR_TABLE_TYPE_NIC_RX; - for (i = 0; !actions_end; actions++, masks++) { + for (; !actions_end; actions++, masks++) { switch (actions->type) { case RTE_FLOW_ACTION_TYPE_INDIRECT: + action_pos = at->actions_off[actions - action_start]; if (!attr->group) { DRV_LOG(ERR, "Indirect action is not supported in root table."); goto err; } if (actions->conf && masks->conf) { if (flow_hw_shared_action_translate - (dev, actions, acts, actions - action_start, i)) + (dev, actions, acts, actions - action_start, action_pos)) goto err; } else if (__flow_hw_act_data_general_append (priv, acts, actions->type, - actions - action_start, i)){ + actions - action_start, action_pos)){ goto err; } - i++; break; case RTE_FLOW_ACTION_TYPE_VOID: break; case RTE_FLOW_ACTION_TYPE_DROP: - acts->rule_acts[i++].action = + action_pos = at->actions_off[actions - action_start]; + acts->rule_acts[action_pos].action = priv->hw_drop[!!attr->group]; break; case RTE_FLOW_ACTION_TYPE_MARK: + action_pos = at->actions_off[actions - action_start]; acts->mark = true; - if (masks->conf) - acts->rule_acts[i].tag.value = + if (masks->conf && + ((const struct rte_flow_action_mark *) + masks->conf)->id) + acts->rule_acts[action_pos].tag.value = mlx5_flow_mark_set (((const struct rte_flow_action_mark *) (masks->conf))->id); else if (__flow_hw_act_data_general_append(priv, acts, - actions->type, actions - action_start, i)) + actions->type, actions - action_start, action_pos)) goto err; - acts->rule_acts[i++].action = + acts->rule_acts[action_pos].action = priv->hw_tag[!!attr->group]; flow_hw_rxq_flag_set(dev, true); break; case RTE_FLOW_ACTION_TYPE_JUMP: - if (masks->conf) { + action_pos = at->actions_off[actions - action_start]; + if (masks->conf && + ((const struct rte_flow_action_jump *) + masks->conf)->group) { uint32_t jump_group = ((const struct rte_flow_action_jump *) actions->conf)->group; @@ -1069,76 +1075,77 @@ __flow_hw_actions_translate(struct rte_eth_dev *dev, (dev, cfg, jump_group, error); if (!acts->jump) goto err; - acts->rule_acts[i].action = (!!attr->group) ? + acts->rule_acts[action_pos].action = (!!attr->group) ? 
acts->jump->hws_action : acts->jump->root_action; } else if (__flow_hw_act_data_general_append (priv, acts, actions->type, - actions - action_start, i)){ + actions - action_start, action_pos)){ goto err; } - i++; break; case RTE_FLOW_ACTION_TYPE_QUEUE: - if (masks->conf) { + action_pos = at->actions_off[actions - action_start]; + if (masks->conf && + ((const struct rte_flow_action_queue *) + masks->conf)->index) { acts->tir = flow_hw_tir_action_register (dev, mlx5_hw_act_flag[!!attr->group][type], actions); if (!acts->tir) goto err; - acts->rule_acts[i].action = + acts->rule_acts[action_pos].action = acts->tir->action; } else if (__flow_hw_act_data_general_append (priv, acts, actions->type, - actions - action_start, i)) { + actions - action_start, action_pos)) { goto err; } - i++; break; case RTE_FLOW_ACTION_TYPE_RSS: - if (masks->conf) { + action_pos = at->actions_off[actions - action_start]; + if (actions->conf && masks->conf) { acts->tir = flow_hw_tir_action_register (dev, mlx5_hw_act_flag[!!attr->group][type], actions); if (!acts->tir) goto err; - acts->rule_acts[i].action = + acts->rule_acts[action_pos].action = acts->tir->action; } else if (__flow_hw_act_data_general_append (priv, acts, actions->type, - actions - action_start, i)) { + actions - action_start, action_pos)) { goto err; } - i++; break; case RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP: - MLX5_ASSERT(reformat_pos == MLX5_HW_MAX_ACTS); + MLX5_ASSERT(!reformat_used); enc_item = ((const struct rte_flow_action_vxlan_encap *) actions->conf)->definition; if (masks->conf) enc_item_m = ((const struct rte_flow_action_vxlan_encap *) masks->conf)->definition; - reformat_pos = i++; + reformat_used = true; reformat_src = actions - action_start; refmt_type = MLX5DR_ACTION_REFORMAT_TYPE_L2_TO_TNL_L2; break; case RTE_FLOW_ACTION_TYPE_NVGRE_ENCAP: - MLX5_ASSERT(reformat_pos == MLX5_HW_MAX_ACTS); + MLX5_ASSERT(!reformat_used); enc_item = ((const struct rte_flow_action_nvgre_encap *) actions->conf)->definition; if (masks->conf) enc_item_m = ((const struct rte_flow_action_nvgre_encap *) masks->conf)->definition; - reformat_pos = i++; + reformat_used = true; reformat_src = actions - action_start; refmt_type = MLX5DR_ACTION_REFORMAT_TYPE_L2_TO_TNL_L2; break; case RTE_FLOW_ACTION_TYPE_VXLAN_DECAP: case RTE_FLOW_ACTION_TYPE_NVGRE_DECAP: - MLX5_ASSERT(reformat_pos == MLX5_HW_MAX_ACTS); - reformat_pos = i++; + MLX5_ASSERT(!reformat_used); + reformat_used = true; refmt_type = MLX5DR_ACTION_REFORMAT_TYPE_TNL_L2_TO_L2; break; case RTE_FLOW_ACTION_TYPE_RAW_ENCAP: @@ -1152,25 +1159,23 @@ __flow_hw_actions_translate(struct rte_eth_dev *dev, actions->conf; encap_data = raw_encap_data->data; data_size = raw_encap_data->size; - if (reformat_pos != MLX5_HW_MAX_ACTS) { + if (reformat_used) { refmt_type = data_size < MLX5_ENCAPSULATION_DECISION_SIZE ? 
MLX5DR_ACTION_REFORMAT_TYPE_TNL_L3_TO_L2 : MLX5DR_ACTION_REFORMAT_TYPE_L2_TO_TNL_L3; } else { - reformat_pos = i++; + reformat_used = true; refmt_type = MLX5DR_ACTION_REFORMAT_TYPE_L2_TO_TNL_L2; } reformat_src = actions - action_start; break; case RTE_FLOW_ACTION_TYPE_RAW_DECAP: - reformat_pos = i++; + reformat_used = true; refmt_type = MLX5DR_ACTION_REFORMAT_TYPE_TNL_L2_TO_L2; break; case RTE_FLOW_ACTION_TYPE_MODIFY_FIELD: - if (mhdr.pos == UINT16_MAX) - mhdr.pos = i++; err = flow_hw_modify_field_compile(dev, attr, action_start, actions, masks, acts, &mhdr, error); @@ -1188,40 +1193,46 @@ __flow_hw_actions_translate(struct rte_eth_dev *dev, action_start += 1; break; case RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT: + action_pos = at->actions_off[actions - action_start]; if (flow_hw_represented_port_compile (dev, attr, action_start, actions, - masks, acts, i, error)) + masks, acts, action_pos, error)) goto err; - i++; break; case RTE_FLOW_ACTION_TYPE_METER: + /* + * METER action is compiled to 2 DR actions - ASO_METER and FT. + * Calculated DR offset is stored only for ASO_METER and FT + * is assumed to be the next action. + */ + action_pos = at->actions_off[actions - action_start]; + jump_pos = action_pos + 1; if (actions->conf && masks->conf && ((const struct rte_flow_action_meter *) masks->conf)->mtr_id) { err = flow_hw_meter_compile(dev, cfg, - i, actions, acts, &i, error); + action_pos, jump_pos, actions, acts, error); if (err) goto err; } else if (__flow_hw_act_data_general_append(priv, acts, actions->type, actions - action_start, - i)) + action_pos)) goto err; - i++; break; case RTE_FLOW_ACTION_TYPE_COUNT: + action_pos = at->actions_off[actions - action_start]; if (masks->conf && ((const struct rte_flow_action_count *) masks->conf)->id) { - err = flow_hw_cnt_compile(dev, i, acts); + err = flow_hw_cnt_compile(dev, action_pos, acts); if (err) goto err; } else if (__flow_hw_act_data_general_append (priv, acts, actions->type, - actions - action_start, i)) { + actions - action_start, action_pos)) { goto err; } - i++; break; case RTE_FLOW_ACTION_TYPE_END: actions_end = true; @@ -1255,10 +1266,11 @@ __flow_hw_actions_translate(struct rte_eth_dev *dev, goto err; acts->rule_acts[acts->mhdr->pos].action = acts->mhdr->action; } - if (reformat_pos != MLX5_HW_MAX_ACTS) { + if (reformat_used) { uint8_t buf[MLX5_ENCAP_MAX_LEN]; bool shared_rfmt = true; + MLX5_ASSERT(at->reformat_off != UINT16_MAX); if (enc_item) { MLX5_ASSERT(!encap_data); if (flow_dv_convert_encap_data(enc_item, buf, &data_size, error)) @@ -1286,20 +1298,17 @@ __flow_hw_actions_translate(struct rte_eth_dev *dev, (shared_rfmt ? 
MLX5DR_ACTION_FLAG_SHARED : 0)); if (!acts->encap_decap->action) goto err; - acts->rule_acts[reformat_pos].action = - acts->encap_decap->action; - acts->rule_acts[reformat_pos].reformat.data = - acts->encap_decap->data; + acts->rule_acts[at->reformat_off].action = acts->encap_decap->action; + acts->rule_acts[at->reformat_off].reformat.data = acts->encap_decap->data; if (shared_rfmt) - acts->rule_acts[reformat_pos].reformat.offset = 0; + acts->rule_acts[at->reformat_off].reformat.offset = 0; else if (__flow_hw_act_data_encap_append(priv, acts, (action_start + reformat_src)->type, - reformat_src, reformat_pos, data_size)) + reformat_src, at->reformat_off, data_size)) goto err; acts->encap_decap->shared = shared_rfmt; - acts->encap_decap_pos = reformat_pos; + acts->encap_decap_pos = at->reformat_off; } - acts->acts_num = i; return 0; err: err = rte_errno; @@ -1574,16 +1583,17 @@ flow_hw_modify_field_construct(struct mlx5_hw_q_job *job, static __rte_always_inline int flow_hw_actions_construct(struct rte_eth_dev *dev, struct mlx5_hw_q_job *job, - const struct mlx5_hw_actions *hw_acts, + const struct mlx5_hw_action_template *hw_at, const uint8_t it_idx, const struct rte_flow_action actions[], struct mlx5dr_rule_action *rule_acts, - uint32_t *acts_num, uint32_t queue) { struct mlx5_priv *priv = dev->data->dev_private; struct rte_flow_template_table *table = job->flow->table; struct mlx5_action_construct_data *act_data; + const struct rte_flow_actions_template *at = hw_at->action_template; + const struct mlx5_hw_actions *hw_acts = &hw_at->acts; const struct rte_flow_action *action; const struct rte_flow_action_raw_encap *raw_encap_data; const struct rte_flow_item *enc_item = NULL; @@ -1599,11 +1609,7 @@ flow_hw_actions_construct(struct rte_eth_dev *dev, struct mlx5_aso_mtr *mtr; uint32_t mtr_id; - memcpy(rule_acts, hw_acts->rule_acts, - sizeof(*rule_acts) * hw_acts->acts_num); - *acts_num = hw_acts->acts_num; - if (LIST_EMPTY(&hw_acts->act_list)) - return 0; + rte_memcpy(rule_acts, hw_acts->rule_acts, sizeof(*rule_acts) * at->dr_actions_num); attr.group = table->grp->group_id; ft_flag = mlx5_hw_act_flag[!!table->grp->group_id][table->type]; if (table->type == MLX5DR_TABLE_TYPE_FDB) { @@ -1737,7 +1743,6 @@ flow_hw_actions_construct(struct rte_eth_dev *dev, jump->root_action; job->flow->jump = jump; job->flow->fate_type = MLX5_FLOW_FATE_JUMP; - (*acts_num)++; if (mlx5_aso_mtr_wait(priv->sh, mtr)) return -1; break; @@ -1864,11 +1869,10 @@ flow_hw_async_flow_create(struct rte_eth_dev *dev, .burst = attr->postpone, }; struct mlx5dr_rule_action rule_acts[MLX5_HW_MAX_ACTS]; - struct mlx5_hw_actions *hw_acts; struct rte_flow_hw *flow; struct mlx5_hw_q_job *job; const struct rte_flow_item *rule_items; - uint32_t acts_num, flow_idx; + uint32_t flow_idx; int ret; if (unlikely((!dev->data->dev_started))) { @@ -1897,7 +1901,7 @@ flow_hw_async_flow_create(struct rte_eth_dev *dev, job->flow = flow; job->user_data = user_data; rule_attr.user_data = job; - hw_acts = &table->ats[action_template_index].acts; + rule_attr.rule_idx = flow_idx; /* * Construct the flow actions based on the input actions. * The implicitly appended action is always fixed, like metadata @@ -1905,8 +1909,8 @@ flow_hw_async_flow_create(struct rte_eth_dev *dev, * No need to copy and contrust a new "actions" list based on the * user's input, in order to save the cost. 
*/ - if (flow_hw_actions_construct(dev, job, hw_acts, pattern_template_index, - actions, rule_acts, &acts_num, queue)) { + if (flow_hw_actions_construct(dev, job, &table->ats[action_template_index], + pattern_template_index, actions, rule_acts, queue)) { rte_errno = EINVAL; goto free; } @@ -1915,7 +1919,7 @@ flow_hw_async_flow_create(struct rte_eth_dev *dev, if (!rule_items) goto free; ret = mlx5dr_rule_create(table->matcher, - pattern_template_index, items, + pattern_template_index, rule_items, action_template_index, rule_acts, &rule_attr, &flow->rule); if (likely(!ret)) @@ -2249,6 +2253,7 @@ flow_hw_table_create(struct rte_eth_dev *dev, struct rte_flow_template_table *tbl = NULL; struct mlx5_flow_group *grp; struct mlx5dr_match_template *mt[MLX5_HW_TBL_MAX_ITEM_TEMPLATE]; + struct mlx5dr_action_template *at[MLX5_HW_TBL_MAX_ACTION_TEMPLATE]; const struct rte_flow_template_table_attr *attr = &table_cfg->attr; struct rte_flow_attr flow_attr = attr->flow_attr; struct mlx5_flow_cb_ctx ctx = { @@ -2304,6 +2309,7 @@ flow_hw_table_create(struct rte_eth_dev *dev, tbl->grp = grp; /* Prepare matcher information. */ matcher_attr.priority = attr->flow_attr.priority; + matcher_attr.optimize_using_rule_idx = true; matcher_attr.mode = MLX5DR_MATCHER_RESOURCE_MODE_RULE; matcher_attr.rule.num_log = rte_log2_u32(nb_flows); /* Build the item template. */ @@ -2319,10 +2325,6 @@ flow_hw_table_create(struct rte_eth_dev *dev, mt[i] = item_templates[i]->mt; tbl->its[i] = item_templates[i]; } - tbl->matcher = mlx5dr_matcher_create - (tbl->grp->tbl, mt, nb_item_templates, NULL, 0, &matcher_attr); - if (!tbl->matcher) - goto it_error; tbl->nb_item_templates = nb_item_templates; /* Build the action template. */ for (i = 0; i < nb_action_templates; i++) { @@ -2334,6 +2336,7 @@ flow_hw_table_create(struct rte_eth_dev *dev, rte_errno = EINVAL; goto at_error; } + at[i] = action_templates[i]->tmpl; tbl->ats[i].action_template = action_templates[i]; LIST_INIT(&tbl->ats[i].acts.act_list); if (!port_started) @@ -2347,6 +2350,10 @@ flow_hw_table_create(struct rte_eth_dev *dev, } } tbl->nb_action_templates = nb_action_templates; + tbl->matcher = mlx5dr_matcher_create + (tbl->grp->tbl, mt, nb_item_templates, at, nb_action_templates, &matcher_attr); + if (!tbl->matcher) + goto at_error; tbl->type = attr->flow_attr.transfer ? MLX5DR_TABLE_TYPE_FDB : (attr->flow_attr.egress ? 
MLX5DR_TABLE_TYPE_NIC_TX : MLX5DR_TABLE_TYPE_NIC_RX); @@ -2366,7 +2373,6 @@ flow_hw_table_create(struct rte_eth_dev *dev, while (i--) __atomic_sub_fetch(&item_templates[i]->refcnt, 1, __ATOMIC_RELAXED); - mlx5dr_matcher_destroy(tbl->matcher); error: err = rte_errno; if (tbl) { @@ -2796,6 +2802,154 @@ flow_hw_actions_validate(struct rte_eth_dev *dev, return 0; } +static enum mlx5dr_action_type mlx5_hw_dr_action_types[] = { + [RTE_FLOW_ACTION_TYPE_MARK] = MLX5DR_ACTION_TYP_TAG, + [RTE_FLOW_ACTION_TYPE_DROP] = MLX5DR_ACTION_TYP_DROP, + [RTE_FLOW_ACTION_TYPE_JUMP] = MLX5DR_ACTION_TYP_FT, + [RTE_FLOW_ACTION_TYPE_QUEUE] = MLX5DR_ACTION_TYP_TIR, + [RTE_FLOW_ACTION_TYPE_RSS] = MLX5DR_ACTION_TYP_TIR, + [RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP] = MLX5DR_ACTION_TYP_L2_TO_TNL_L2, + [RTE_FLOW_ACTION_TYPE_NVGRE_ENCAP] = MLX5DR_ACTION_TYP_L2_TO_TNL_L2, + [RTE_FLOW_ACTION_TYPE_VXLAN_DECAP] = MLX5DR_ACTION_TYP_TNL_L2_TO_L2, + [RTE_FLOW_ACTION_TYPE_NVGRE_DECAP] = MLX5DR_ACTION_TYP_TNL_L2_TO_L2, + [RTE_FLOW_ACTION_TYPE_MODIFY_FIELD] = MLX5DR_ACTION_TYP_MODIFY_HDR, + [RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT] = MLX5DR_ACTION_TYP_VPORT, + [RTE_FLOW_ACTION_TYPE_COUNT] = MLX5DR_ACTION_TYP_CTR, +}; + +static int +flow_hw_dr_actions_template_handle_shared(const struct rte_flow_action *mask, + unsigned int action_src, + enum mlx5dr_action_type *action_types, + uint16_t *curr_off, + struct rte_flow_actions_template *at) +{ + uint32_t act_idx; + uint32_t type; + + if (!mask->conf) { + DRV_LOG(WARNING, "Unable to determine indirect action type " + "without a mask specified"); + return -EINVAL; + } + act_idx = (uint32_t)(uintptr_t)mask->conf; + type = act_idx >> MLX5_INDIRECT_ACTION_TYPE_OFFSET; + switch (type) { + case MLX5_INDIRECT_ACTION_TYPE_RSS: + at->actions_off[action_src] = *curr_off; + action_types[*curr_off] = MLX5DR_ACTION_TYP_TIR; + *curr_off = *curr_off + 1; + break; + default: + DRV_LOG(WARNING, "Unsupported shared action type: %d", type); + return -EINVAL; + } + return 0; +} + +/** + * Create DR action template based on a provided sequence of flow actions. + * + * @param[in] at + * Pointer to flow actions template to be updated. + * + * @return + * DR action template pointer on success and action offsets in @p at are updated. + * NULL otherwise. 
+ */ +static struct mlx5dr_action_template * +flow_hw_dr_actions_template_create(struct rte_flow_actions_template *at) +{ + struct mlx5dr_action_template *dr_template; + enum mlx5dr_action_type action_types[MLX5_HW_MAX_ACTS] = { MLX5DR_ACTION_TYP_LAST }; + unsigned int i; + uint16_t curr_off; + enum mlx5dr_action_type reformat_act_type = MLX5DR_ACTION_TYP_TNL_L2_TO_L2; + uint16_t reformat_off = UINT16_MAX; + uint16_t mhdr_off = UINT16_MAX; + int ret; + for (i = 0, curr_off = 0; at->actions[i].type != RTE_FLOW_ACTION_TYPE_END; ++i) { + const struct rte_flow_action_raw_encap *raw_encap_data; + size_t data_size; + enum mlx5dr_action_type type; + + if (curr_off >= MLX5_HW_MAX_ACTS) + goto err_actions_num; + switch (at->actions[i].type) { + case RTE_FLOW_ACTION_TYPE_VOID: + break; + case RTE_FLOW_ACTION_TYPE_INDIRECT: + ret = flow_hw_dr_actions_template_handle_shared(&at->masks[i], i, + action_types, + &curr_off, at); + if (ret) + return NULL; + break; + case RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP: + case RTE_FLOW_ACTION_TYPE_NVGRE_ENCAP: + case RTE_FLOW_ACTION_TYPE_VXLAN_DECAP: + case RTE_FLOW_ACTION_TYPE_NVGRE_DECAP: + MLX5_ASSERT(reformat_off == UINT16_MAX); + reformat_off = curr_off++; + reformat_act_type = mlx5_hw_dr_action_types[at->actions[i].type]; + break; + case RTE_FLOW_ACTION_TYPE_RAW_ENCAP: + raw_encap_data = at->actions[i].conf; + data_size = raw_encap_data->size; + if (reformat_off != UINT16_MAX) { + reformat_act_type = data_size < MLX5_ENCAPSULATION_DECISION_SIZE ? + MLX5DR_ACTION_TYP_TNL_L3_TO_L2 : + MLX5DR_ACTION_TYP_L2_TO_TNL_L3; + } else { + reformat_off = curr_off++; + reformat_act_type = MLX5DR_ACTION_TYP_L2_TO_TNL_L2; + } + break; + case RTE_FLOW_ACTION_TYPE_RAW_DECAP: + reformat_off = curr_off++; + reformat_act_type = MLX5DR_ACTION_TYP_TNL_L2_TO_L2; + break; + case RTE_FLOW_ACTION_TYPE_MODIFY_FIELD: + if (mhdr_off == UINT16_MAX) { + mhdr_off = curr_off++; + type = mlx5_hw_dr_action_types[at->actions[i].type]; + action_types[mhdr_off] = type; + } + break; + case RTE_FLOW_ACTION_TYPE_METER: + at->actions_off[i] = curr_off; + action_types[curr_off++] = MLX5DR_ACTION_TYP_ASO_METER; + if (curr_off >= MLX5_HW_MAX_ACTS) + goto err_actions_num; + action_types[curr_off++] = MLX5DR_ACTION_TYP_FT; + break; + default: + type = mlx5_hw_dr_action_types[at->actions[i].type]; + at->actions_off[i] = curr_off; + action_types[curr_off++] = type; + break; + } + } + if (curr_off >= MLX5_HW_MAX_ACTS) + goto err_actions_num; + if (mhdr_off != UINT16_MAX) + at->mhdr_off = mhdr_off; + if (reformat_off != UINT16_MAX) { + at->reformat_off = reformat_off; + action_types[reformat_off] = reformat_act_type; + } + dr_template = mlx5dr_action_template_create(action_types); + if (dr_template) + at->dr_actions_num = curr_off; + else + DRV_LOG(ERR, "Failed to create DR action template: %d", rte_errno); + return dr_template; +err_actions_num: + DRV_LOG(ERR, "Number of HW actions (%u) exceeded maximum (%u) allowed in template", + curr_off, MLX5_HW_MAX_ACTS); + return NULL; +} + /** * Create flow action template. 
* @@ -2821,7 +2975,8 @@ flow_hw_actions_template_create(struct rte_eth_dev *dev, struct rte_flow_error *error) { struct mlx5_priv *priv = dev->data->dev_private; - int len, act_len, mask_len, i; + int len, act_num, act_len, mask_len; + unsigned int i; struct rte_flow_actions_template *at = NULL; uint16_t pos = MLX5_HW_MAX_ACTS; struct rte_flow_action tmp_action[MLX5_HW_MAX_ACTS]; @@ -2891,6 +3046,11 @@ flow_hw_actions_template_create(struct rte_eth_dev *dev, if (mask_len <= 0) return NULL; len += RTE_ALIGN(mask_len, 16); + /* Count flow actions to allocate required space for storing DR offsets. */ + act_num = 0; + for (i = 0; ra[i].type != RTE_FLOW_ACTION_TYPE_END; ++i) + act_num++; + len += RTE_ALIGN(act_num * sizeof(*at->actions_off), 16); at = mlx5_malloc(MLX5_MEM_ZERO, len + sizeof(*at), RTE_CACHE_LINE_SIZE, rte_socket_id()); if (!at) { @@ -2900,19 +3060,26 @@ flow_hw_actions_template_create(struct rte_eth_dev *dev, "cannot allocate action template"); return NULL; } - /* Actions part is in the first half. */ + /* Actions part is in the first part. */ at->attr = *attr; at->actions = (struct rte_flow_action *)(at + 1); act_len = rte_flow_conv(RTE_FLOW_CONV_OP_ACTIONS, at->actions, len, ra, error); if (act_len <= 0) goto error; - /* Masks part is in the second half. */ + /* Masks part is in the second part. */ at->masks = (struct rte_flow_action *)(((uint8_t *)at->actions) + act_len); mask_len = rte_flow_conv(RTE_FLOW_CONV_OP_ACTIONS, at->masks, len - act_len, rm, error); if (mask_len <= 0) goto error; + /* DR actions offsets in the third part. */ + at->actions_off = (uint16_t *)((uint8_t *)at->masks + mask_len); + at->actions_num = act_num; + for (i = 0; i < at->actions_num; ++i) + at->actions_off[i] = UINT16_MAX; + at->reformat_off = UINT16_MAX; + at->mhdr_off = UINT16_MAX; at->rx_cpy_pos = pos; /* * mlx5 PMD hacks indirect action index directly to the action conf. 
@@ -2926,12 +3093,18 @@ flow_hw_actions_template_create(struct rte_eth_dev *dev, at->masks[i].conf = masks->conf; } } + at->tmpl = flow_hw_dr_actions_template_create(at); + if (!at->tmpl) + goto error; __atomic_fetch_add(&at->refcnt, 1, __ATOMIC_RELAXED); LIST_INSERT_HEAD(&priv->flow_hw_at, at, next); return at; error: - if (at) + if (at) { + if (at->tmpl) + mlx5dr_action_template_destroy(at->tmpl); mlx5_free(at); + } return NULL; } @@ -2962,6 +3135,8 @@ flow_hw_actions_template_destroy(struct rte_eth_dev *dev __rte_unused, "action template in using"); } LIST_REMOVE(template, next); + if (template->tmpl) + mlx5dr_action_template_destroy(template->tmpl); mlx5_free(template); return 0; }

From patchwork Fri Sep 23 14:43:21 2022
X-Patchwork-Submitter: Suanming Mou
X-Patchwork-Id: 116753
From: Suanming Mou
To: Matan Azrad, Viacheslav Ovsiienko, Dariusz Sosnowski
Subject: [PATCH 14/27] net/mlx5: fix indirect action validate
Date: Fri, 23 Sep 2022 17:43:21 +0300
Message-ID: <20220923144334.27736-15-suanmingm@nvidia.com>

For indirect actions, the action mask type indicates the indirect action
type, and a NULL action mask conf means the indirect action object will be
provided through the flow action conf. This commit fixes the indirect
action validation accordingly.

Fixes: 393e0eb555c0 ("net/mlx5: support DR action template API")
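For instance, a hypothetical application-side template pair that the fixed
validation must accept (illustrative only, not code from this patch):

	/* The action entry carries no conf: the indirect action object is
	 * supplied through the flow action conf at rule creation. */
	static const struct rte_flow_action tmpl_actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_INDIRECT, .conf = NULL },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};
	/* The mask entry's type names the indirect action type (here RSS),
	 * so it legitimately differs from the INDIRECT type above. */
	static const struct rte_flow_action tmpl_masks[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_RSS },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};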
Signed-off-by: Suanming Mou
---
 drivers/net/mlx5/mlx5_flow_hw.c | 16 ++++++++++------
 1 file changed, 10 insertions(+), 6 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c index 6a1ed7e790..d828d49613 100644 --- a/drivers/net/mlx5/mlx5_flow_hw.c +++ b/drivers/net/mlx5/mlx5_flow_hw.c @@ -2726,7 +2726,8 @@ flow_hw_actions_validate(struct rte_eth_dev *dev, const struct rte_flow_action *mask = &masks[i]; MLX5_ASSERT(i < MLX5_HW_MAX_ACTS); - if (action->type != mask->type) + if (action->type != RTE_FLOW_ACTION_TYPE_INDIRECT && + action->type != mask->type) return rte_flow_error_set(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ACTION, action, @@ -2824,22 +2825,25 @@ flow_hw_dr_actions_template_handle_shared(const struct rte_flow_action *mask, uint16_t *curr_off, struct rte_flow_actions_template *at) { - uint32_t act_idx; uint32_t type; - if (!mask->conf) { + if (!mask) { DRV_LOG(WARNING, "Unable to determine indirect action type " "without a mask specified"); return -EINVAL; } - act_idx = (uint32_t)(uintptr_t)mask->conf; - type = act_idx >> MLX5_INDIRECT_ACTION_TYPE_OFFSET; + type = mask->type; switch (type) { - case MLX5_INDIRECT_ACTION_TYPE_RSS: + case RTE_FLOW_ACTION_TYPE_RSS: at->actions_off[action_src] = *curr_off; action_types[*curr_off] = MLX5DR_ACTION_TYP_TIR; *curr_off = *curr_off + 1; break; + case RTE_FLOW_ACTION_TYPE_COUNT: + at->actions_off[action_src] = *curr_off; + action_types[*curr_off] = MLX5DR_ACTION_TYP_CTR; + *curr_off = *curr_off + 1; + break; default: DRV_LOG(WARNING, "Unsupported shared action type: %d", type); return -EINVAL;

From patchwork Fri Sep 23 14:43:22 2022
X-Patchwork-Submitter: Suanming Mou
X-Patchwork-Id: 116751
From: Suanming Mou
To: Matan Azrad, Viacheslav Ovsiienko
CC: Xiaoyu Min
Subject: [PATCH 15/27] net/mlx5: update indirect actions ops to HW variation
Date: Fri, 23 Sep 2022 17:43:22 +0300
Message-ID: <20220923144334.27736-16-suanmingm@nvidia.com>

From: Xiaoyu Min

Each flow engine should have its own callback functions for each of the
flow ops. Create new callback functions for the indirect action ops; these
are in fact thin wrappers around their mlx5_hw_async_* counterparts.
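Condensed, each new synchronous callback forwards to its asynchronous
counterpart, passing UINT32_MAX as the queue and no operation attributes;
for example (mirroring flow_hw_action_create() in the diff below):

	static struct rte_flow_action_handle *
	flow_hw_action_create(struct rte_eth_dev *dev,
			      const struct rte_flow_indir_action_conf *conf,
			      const struct rte_flow_action *action,
			      struct rte_flow_error *err)
	{
		/* UINT32_MAX queue and NULL op attrs: no async queue is used. */
		return flow_hw_action_handle_create(dev, UINT32_MAX, NULL, conf,
						    action, NULL, err);
	}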
From: Xiaoyu Min

Each flow engine should have its own callback functions for each flow op.
Create new callback functions for the indirect actions' ops, which are
actually wrappers around their mlx5_hw_async_* counterparts.

Signed-off-by: Xiaoyu Min
---
 drivers/net/mlx5/mlx5_flow_hw.c | 98 +++++++++++++++++++++++++++++++--
 1 file changed, 94 insertions(+), 4 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index d828d49613..de82396a04 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -4988,6 +4988,96 @@ flow_hw_query(struct rte_eth_dev *dev,
     return ret;
 }
 
+/**
+ * Create indirect action.
+ *
+ * @param[in] dev
+ *   Pointer to the Ethernet device structure.
+ * @param[in] conf
+ *   Shared action configuration.
+ * @param[in] action
+ *   Action specification used to create indirect action.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL. Initialized in case of
+ *   error only.
+ *
+ * @return
+ *   A valid shared action handle in case of success, NULL otherwise and
+ *   rte_errno is set.
+ */
+static struct rte_flow_action_handle *
+flow_hw_action_create(struct rte_eth_dev *dev,
+                      const struct rte_flow_indir_action_conf *conf,
+                      const struct rte_flow_action *action,
+                      struct rte_flow_error *err)
+{
+    return flow_hw_action_handle_create(dev, UINT32_MAX, NULL, conf, action,
+                                        NULL, err);
+}
+
+/**
+ * Destroy the indirect action.
+ * Release action related resources on the NIC and the memory.
+ * Lock free, (mutex should be acquired by caller).
+ * Dispatcher for action type specific call.
+ *
+ * @param[in] dev
+ *   Pointer to the Ethernet device structure.
+ * @param[in] handle
+ *   The indirect action object handle to be removed.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL. Initialized in case of
+ *   error only.
+ *
+ * @return
+ *   0 on success, otherwise negative errno value.
+ */
+static int
+flow_hw_action_destroy(struct rte_eth_dev *dev,
+                       struct rte_flow_action_handle *handle,
+                       struct rte_flow_error *error)
+{
+    return flow_hw_action_handle_destroy(dev, UINT32_MAX, NULL, handle,
+                                         NULL, error);
+}
+
+/**
+ * Updates in place shared action configuration.
+ *
+ * @param[in] dev
+ *   Pointer to the Ethernet device structure.
+ * @param[in] handle
+ *   The indirect action object handle to be updated.
+ * @param[in] update
+ *   Action specification used to modify the action pointed by *handle*.
+ *   *update* could be of same type with the action pointed by the *handle*
+ *   handle argument, or some other structures like a wrapper, depending on
+ *   the indirect action type.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL. Initialized in case of
+ *   error only.
+ *
+ * @return
+ *   0 on success, otherwise negative errno value.
+ */
+static int
+flow_hw_action_update(struct rte_eth_dev *dev,
+                      struct rte_flow_action_handle *handle,
+                      const void *update,
+                      struct rte_flow_error *err)
+{
+    return flow_hw_action_handle_update(dev, UINT32_MAX, NULL, handle,
+                                        update, NULL, err);
+}
+
+static int
+flow_hw_action_query(struct rte_eth_dev *dev,
+                     const struct rte_flow_action_handle *handle, void *data,
+                     struct rte_flow_error *error)
+{
+    return flow_dv_action_query(dev, handle, data, error);
+}
+
 const struct mlx5_flow_driver_ops mlx5_flow_hw_drv_ops = {
     .info_get = flow_hw_info_get,
     .configure = flow_hw_configure,
@@ -5007,10 +5097,10 @@ const struct mlx5_flow_driver_ops mlx5_flow_hw_drv_ops = {
     .async_action_destroy = flow_hw_action_handle_destroy,
     .async_action_update = flow_hw_action_handle_update,
     .action_validate = flow_dv_action_validate,
-    .action_create = flow_dv_action_create,
-    .action_destroy = flow_dv_action_destroy,
-    .action_update = flow_dv_action_update,
-    .action_query = flow_dv_action_query,
+    .action_create = flow_hw_action_create,
+    .action_destroy = flow_hw_action_destroy,
+    .action_update = flow_hw_action_update,
+    .action_query = flow_hw_action_query,
     .query = flow_hw_query,
 };
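For reference, these synchronous entry points are reached through the
generic rte_flow indirect action API. Below is a minimal sketch of a
create/query/destroy round trip, assuming a port already set up for HW
steering (dv_flow_en=2); the COUNT action shown here is only wired to the
HWS counter pool later in this series, and port_id is illustrative:

#include <rte_errno.h>
#include <rte_flow.h>

/* Sketch only: exercise the .action_create/.action_query/.action_destroy
 * driver ops above via the public API. Error handling is abbreviated. */
static int
indirect_count_roundtrip(uint16_t port_id)
{
    struct rte_flow_error error;
    const struct rte_flow_indir_action_conf conf = { .ingress = 1 };
    const struct rte_flow_action action = {
        .type = RTE_FLOW_ACTION_TYPE_COUNT,
    };
    struct rte_flow_query_count counters = { .reset = 0 };
    struct rte_flow_action_handle *handle;

    /* Dispatches to flow_hw_action_create() on this driver. */
    handle = rte_flow_action_handle_create(port_id, &conf, &action, &error);
    if (handle == NULL)
        return -rte_errno;
    /* Dispatches to flow_hw_action_query(). */
    if (rte_flow_action_handle_query(port_id, handle, &counters, &error) != 0)
        return -rte_errno;
    /* Dispatches to flow_hw_action_destroy(). */
    return rte_flow_action_handle_destroy(port_id, handle, &error);
}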
From patchwork Fri Sep 23 14:43:23 2022
X-Patchwork-Submitter: Suanming Mou
X-Patchwork-Id: 116754

From: Suanming Mou
To: Matan Azrad, Viacheslav Ovsiienko
CC: Xiaoyu Min
Subject: [PATCH 16/27] net/mlx5: support indirect count action for HW steering
Date: Fri, 23 Sep 2022 17:43:23 +0300
Message-ID: <20220923144334.27736-17-suanmingm@nvidia.com>
In-Reply-To: <20220923144334.27736-1-suanmingm@nvidia.com>
From: Xiaoyu Min

The indirect counter action is taken as a _shared_ counter between the
flows that use it. This _shared_ counter is taken from the counter pool
at the time the indirect action is created, and is put back into the
counter pool when the indirect action is destroyed.

Signed-off-by: Xiaoyu Min
---
 drivers/net/mlx5/mlx5_flow.h    |   3 +
 drivers/net/mlx5/mlx5_flow_hw.c | 104 +++++++++++++++++++++++++++++++-
 2 files changed, 104 insertions(+), 3 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index c982cb953a..a39dacc60a 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -1148,6 +1148,9 @@ struct mlx5_action_construct_data {
             uint32_t level; /* RSS level. */
             uint32_t idx; /* Shared action index. */
         } shared_rss;
+        struct {
+            uint32_t id;
+        } shared_counter;
     };
 };
 
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index de82396a04..92b61b63d1 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -536,6 +536,44 @@ __flow_hw_act_data_shared_rss_append(struct mlx5_priv *priv,
     return 0;
 }
 
+/**
+ * Append shared counter action to the dynamic action list.
+ *
+ * @param[in] priv
+ *   Pointer to the port private data structure.
+ * @param[in] acts
+ *   Pointer to the template HW steering DR actions.
+ * @param[in] type
+ *   Action type.
+ * @param[in] action_src
+ *   Offset of source rte flow action.
+ * @param[in] action_dst
+ *   Offset of destination DR action.
+ * @param[in] cnt_id
+ *   Shared counter id.
+ *
+ * @return
+ *   0 on success, negative value otherwise and rte_errno is set.
+ */
+static __rte_always_inline int
+__flow_hw_act_data_shared_cnt_append(struct mlx5_priv *priv,
+                                     struct mlx5_hw_actions *acts,
+                                     enum rte_flow_action_type type,
+                                     uint16_t action_src,
+                                     uint16_t action_dst,
+                                     cnt_id_t cnt_id)
+{
+    struct mlx5_action_construct_data *act_data;
+
+    act_data = __flow_hw_act_data_alloc(priv, type, action_src, action_dst);
+    if (!act_data)
+        return -1;
+    act_data->type = type;
+    act_data->shared_counter.id = cnt_id;
+    LIST_INSERT_HEAD(&acts->act_list, act_data, next);
+    return 0;
+}
+
 /**
  * Translate shared indirect action.
  *
@@ -577,6 +615,13 @@ flow_hw_shared_action_translate(struct rte_eth_dev *dev,
             action_src, action_dst, idx, shared_rss))
             return -1;
         break;
+    case MLX5_INDIRECT_ACTION_TYPE_COUNT:
+        if (__flow_hw_act_data_shared_cnt_append(priv, acts,
+            (enum rte_flow_action_type)
+            MLX5_RTE_FLOW_ACTION_TYPE_COUNT,
+            action_src, action_dst, act_idx))
+            return -1;
+        break;
     default:
         DRV_LOG(WARNING, "Unsupported shared action type:%d", type);
         break;
@@ -1454,6 +1499,13 @@ flow_hw_shared_action_construct(struct rte_eth_dev *dev,
             (dev, &act_data, item_flags, rule_act))
             return -1;
         break;
+    case MLX5_INDIRECT_ACTION_TYPE_COUNT:
+        if (mlx5_hws_cnt_pool_get_action_offset(priv->hws_cpool,
+                                                act_idx,
+                                                &rule_act->action,
+                                                &rule_act->counter.offset))
+            return -1;
+        break;
     default:
         DRV_LOG(WARNING, "Unsupported shared action type:%d", type);
         break;
@@ -1761,6 +1813,17 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
                 return ret;
             job->flow->cnt_id = cnt_id;
             break;
+        case MLX5_RTE_FLOW_ACTION_TYPE_COUNT:
+            ret = mlx5_hws_cnt_pool_get_action_offset
+                (priv->hws_cpool,
+                 act_data->shared_counter.id,
+                 &rule_acts[act_data->action_dst].action,
+                 &rule_acts[act_data->action_dst].counter.offset);
+            if (ret != 0)
+                return ret;
+            job->flow->cnt_id = act_data->shared_counter.id;
+            break;
         default:
             break;
         }
@@ -4860,10 +4923,28 @@ flow_hw_action_handle_create(struct rte_eth_dev *dev, uint32_t queue,
                  void *user_data,
                  struct rte_flow_error *error)
 {
+    struct rte_flow_action_handle *handle = NULL;
+    struct mlx5_priv *priv = dev->data->dev_private;
+    cnt_id_t cnt_id;
+
     RTE_SET_USED(queue);
     RTE_SET_USED(attr);
     RTE_SET_USED(user_data);
-    return flow_dv_action_create(dev, conf, action, error);
+    switch (action->type) {
+    case RTE_FLOW_ACTION_TYPE_COUNT:
+        if (mlx5_hws_cnt_shared_get(priv->hws_cpool, &cnt_id))
+            rte_flow_error_set(error, ENODEV,
+                               RTE_FLOW_ERROR_TYPE_ACTION,
+                               NULL,
+                               "counter are not configured!");
+        else
+            handle = (struct rte_flow_action_handle *)
+                     (uintptr_t)cnt_id;
+        break;
+    default:
+        handle = flow_dv_action_create(dev, conf, action, error);
+    }
+    return handle;
 }
 
 /**
@@ -4927,10 +5008,19 @@ flow_hw_action_handle_destroy(struct rte_eth_dev *dev, uint32_t queue,
                   void *user_data,
                   struct rte_flow_error *error)
 {
+    uint32_t act_idx = (uint32_t)(uintptr_t)handle;
+    uint32_t type = act_idx >> MLX5_INDIRECT_ACTION_TYPE_OFFSET;
+    struct mlx5_priv *priv = dev->data->dev_private;
+
     RTE_SET_USED(queue);
     RTE_SET_USED(attr);
     RTE_SET_USED(user_data);
-    return flow_dv_action_destroy(dev, handle, error);
+    switch (type) {
+    case MLX5_INDIRECT_ACTION_TYPE_COUNT:
+        return mlx5_hws_cnt_shared_put(priv->hws_cpool, &act_idx);
+    default:
+        return flow_dv_action_destroy(dev, handle, error);
+    }
 }
 
 static int
@@ -5075,7 +5165,15 @@ flow_hw_action_query(struct rte_eth_dev *dev,
              const struct rte_flow_action_handle *handle, void *data,
              struct rte_flow_error *error)
 {
-    return flow_dv_action_query(dev, handle, data, error);
+    uint32_t act_idx = (uint32_t)(uintptr_t)handle;
+    uint32_t type = act_idx >> MLX5_INDIRECT_ACTION_TYPE_OFFSET;
+
+    switch (type) {
+    case MLX5_INDIRECT_ACTION_TYPE_COUNT:
+        return flow_hw_query_counter(dev, act_idx, data, error);
+    default:
+        return flow_dv_action_query(dev, handle, data, error);
+    }
 }
 
 const struct mlx5_flow_driver_ops mlx5_flow_hw_drv_ops = {
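The destroy/query dispatch above relies on the mlx5 convention of packing
the indirect action type into the upper bits of the pointer-sized handle,
so no per-handle allocation is needed for a shared counter. Below is a
self-contained sketch of that encode/decode idiom; the bit offset and enum
values are local assumptions for illustration, not the driver's
authoritative constants:

#include <stdint.h>

#define IND_ACT_TYPE_OFFSET 29 /* assumed bit position for this sketch */

enum ind_act_type {
    IND_ACT_TYPE_RSS = 1,
    IND_ACT_TYPE_COUNT = 2,
};

/* Pack the action type and object id into an opaque handle value. */
static inline void *
ind_act_encode(enum ind_act_type type, uint32_t id)
{
    uint32_t idx = ((uint32_t)type << IND_ACT_TYPE_OFFSET) | id;

    return (void *)(uintptr_t)idx;
}

/* Recover the action type from the handle, as the switch above does. */
static inline enum ind_act_type
ind_act_type_of(const void *handle)
{
    return (enum ind_act_type)
           ((uint32_t)(uintptr_t)handle >> IND_ACT_TYPE_OFFSET);
}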
From patchwork Fri Sep 23 14:43:24 2022
X-Patchwork-Submitter: Suanming Mou
X-Patchwork-Id: 116755

From: Suanming Mou
To: Matan Azrad, Viacheslav Ovsiienko
CC: Dariusz Sosnowski
Subject: [PATCH 17/27] net/mlx5: add pattern and table attribute validation
Date: Fri, 23 Sep 2022 17:43:24 +0300
Message-ID: <20220923144334.27736-18-suanmingm@nvidia.com>
In-Reply-To: <20220923144334.27736-1-suanmingm@nvidia.com>
From: Dariusz Sosnowski

This patch adds validation of the direction attributes of pattern
templates and template tables.

For pattern templates, the following configurations are allowed (noting
for each whether implicit pattern items are added):

1. If E-Switch is enabled (i.e. the dv_esw_en devarg is set to 1):
   1. If a port is a VF/SF representor:
      1. Ingress only - implicit pattern items are added.
      2. Egress only - implicit pattern items are added.
   2. If a port is a transfer proxy port (E-Switch Manager/PF representor):
      1. Ingress, egress and transfer - no implicit items are added.
         This setting is useful for applications which need to receive
         traffic from devices connected to the E-Switch that did not hit
         any transfer flow rules.
      2. Ingress only - implicit pattern items are added.
      3. Egress only - implicit pattern items are added.
      4. Transfer only - no implicit pattern items are added.
2. If E-Switch is disabled (i.e. the dv_esw_en devarg is set to 0):
   1. Ingress only - no implicit pattern items are added.
   2. Egress only - no implicit pattern items are added.
   3. Ingress and egress - no implicit pattern items are added.
   4. Transfer is not allowed.

For template tables, the table attributes must be consistent with the
attributes of the associated pattern templates.
Signed-off-by: Dariusz Sosnowski
---
 drivers/net/mlx5/mlx5_flow_hw.c | 80 +++++++++++++++++++++++++--------
 1 file changed, 62 insertions(+), 18 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 92b61b63d1..dfbc434d54 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -2379,6 +2379,13 @@ flow_hw_table_create(struct rte_eth_dev *dev,
     for (i = 0; i < nb_item_templates; i++) {
         uint32_t ret;
 
+        if ((flow_attr.ingress && !item_templates[i]->attr.ingress) ||
+            (flow_attr.egress && !item_templates[i]->attr.egress) ||
+            (flow_attr.transfer && !item_templates[i]->attr.transfer)) {
+            DRV_LOG(ERR, "pattern template and template table attribute mismatch");
+            rte_errno = EINVAL;
+            goto it_error;
+        }
         ret = __atomic_add_fetch(&item_templates[i]->refcnt, 1,
                      __ATOMIC_RELAXED);
         if (ret <= 1) {
@@ -2557,6 +2564,7 @@ flow_hw_template_table_create(struct rte_eth_dev *dev,
                   uint8_t nb_action_templates,
                   struct rte_flow_error *error)
 {
+    struct mlx5_priv *priv = dev->data->dev_private;
     struct mlx5_flow_template_table_cfg cfg = {
         .attr = *attr,
         .external = true,
@@ -2565,6 +2573,12 @@ flow_hw_template_table_create(struct rte_eth_dev *dev,
 
     if (flow_hw_translate_group(dev, &cfg, group, &cfg.attr.flow_attr.group, error))
         return NULL;
+    if (priv->sh->config.dv_esw_en && cfg.attr.flow_attr.egress) {
+        rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ATTR, NULL,
+                           "egress flows are not supported with HW Steering"
+                           " when E-Switch is enabled");
+        return NULL;
+    }
     return flow_hw_table_create(dev, &cfg, item_templates, nb_item_templates,
                     action_templates, nb_action_templates, error);
 }
@@ -3254,11 +3268,48 @@ flow_hw_pattern_validate(struct rte_eth_dev *dev,
              const struct rte_flow_item items[],
              struct rte_flow_error *error)
 {
+    struct mlx5_priv *priv = dev->data->dev_private;
     int i;
     bool items_end = false;
 
-    RTE_SET_USED(dev);
-    RTE_SET_USED(attr);
+    if (!attr->ingress && !attr->egress && !attr->transfer)
+        return rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ATTR, NULL,
+                                  "at least one of the direction attributes"
+                                  " must be specified");
+    if (priv->sh->config.dv_esw_en) {
+        MLX5_ASSERT(priv->master || priv->representor);
+        if (priv->master) {
+            /*
+             * It is allowed to specify ingress, egress and transfer attributes
+             * at the same time, in order to construct flows catching all missed
+             * FDB traffic and forwarding it to the master port.
+             */
+            if (!(attr->ingress ^ attr->egress ^ attr->transfer))
+                return rte_flow_error_set(error, EINVAL,
+                                          RTE_FLOW_ERROR_TYPE_ATTR, NULL,
+                                          "only one or all direction attributes"
+                                          " at once can be used on transfer proxy"
+                                          " port");
+        } else {
+            if (attr->transfer)
+                return rte_flow_error_set(error, EINVAL,
+                                          RTE_FLOW_ERROR_TYPE_ATTR_TRANSFER, NULL,
+                                          "transfer attribute cannot be used with"
+                                          " port representors");
+            if (attr->ingress && attr->egress)
+                return rte_flow_error_set(error, EINVAL,
+                                          RTE_FLOW_ERROR_TYPE_ATTR, NULL,
+                                          "ingress and egress direction attributes"
+                                          " cannot be used at the same time on"
+                                          " port representors");
+        }
+    } else {
+        if (attr->transfer)
+            return rte_flow_error_set(error, EINVAL,
+                                      RTE_FLOW_ERROR_TYPE_ATTR_TRANSFER, NULL,
+                                      "transfer attribute cannot be used when"
+                                      " E-Switch is disabled");
+    }
     for (i = 0; !items_end; i++) {
         int type = items[i].type;
 
@@ -3289,7 +3340,15 @@ flow_hw_pattern_validate(struct rte_eth_dev *dev,
                     RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
                     "Unsupported internal tag index");
+            break;
         }
+        case RTE_FLOW_ITEM_TYPE_REPRESENTED_PORT:
+            if (attr->ingress || attr->egress)
+                return rte_flow_error_set(error, EINVAL,
+                                          RTE_FLOW_ERROR_TYPE_ITEM, NULL,
+                                          "represented port item cannot be used"
+                                          " when transfer attribute is set");
+            break;
         case RTE_FLOW_ITEM_TYPE_VOID:
         case RTE_FLOW_ITEM_TYPE_ETH:
         case RTE_FLOW_ITEM_TYPE_VLAN:
@@ -3299,7 +3358,6 @@ flow_hw_pattern_validate(struct rte_eth_dev *dev,
         case RTE_FLOW_ITEM_TYPE_TCP:
         case RTE_FLOW_ITEM_TYPE_GTP:
         case RTE_FLOW_ITEM_TYPE_GTP_PSC:
-        case RTE_FLOW_ITEM_TYPE_REPRESENTED_PORT:
         case RTE_FLOW_ITEM_TYPE_VXLAN:
         case MLX5_RTE_FLOW_ITEM_TYPE_TX_QUEUE:
         case RTE_FLOW_ITEM_TYPE_META:
@@ -3350,21 +3408,7 @@ flow_hw_pattern_template_create(struct rte_eth_dev *dev,
 
     if (flow_hw_pattern_validate(dev, attr, items, error))
         return NULL;
-    if (priv->sh->config.dv_esw_en && attr->ingress) {
-        /*
-         * Disallow pattern template with ingress and egress/transfer
-         * attributes in order to forbid implicit port matching
-         * on egress and transfer traffic.
-         */
-        if (attr->egress || attr->transfer) {
-            rte_flow_error_set(error, EINVAL,
-                               RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
-                               NULL,
-                               "item template for ingress traffic"
-                               " cannot be used for egress/transfer"
-                               " traffic when E-Switch is enabled");
-            return NULL;
-        }
+    if (priv->sh->config.dv_esw_en && attr->ingress && !attr->egress && !attr->transfer) {
         copied_items = flow_hw_copy_prepend_port_item(items, error);
         if (!copied_items)
             return NULL;
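To make the accepted attribute combinations concrete, here is a minimal
sketch of creating a transfer-only pattern template through the public
template API; per the rules above it is accepted on a transfer proxy port
with E-Switch enabled, and no implicit items are prepended. The port is
assumed to be already configured with rte_flow_configure(), and the
single-ETH pattern is illustrative only:

#include <rte_flow.h>

static struct rte_flow_pattern_template *
make_transfer_only_template(uint16_t port_id)
{
    const struct rte_flow_pattern_template_attr attr = {
        .transfer = 1, /* transfer only: ingress/egress left clear */
    };
    const struct rte_flow_item pattern[] = {
        { .type = RTE_FLOW_ITEM_TYPE_ETH },
        { .type = RTE_FLOW_ITEM_TYPE_END },
    };
    struct rte_flow_error error;

    return rte_flow_pattern_template_create(port_id, &attr, pattern, &error);
}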
From patchwork Fri Sep 23 14:43:25 2022
X-Patchwork-Submitter: Suanming Mou
X-Patchwork-Id: 116756

From: Suanming Mou
To: Matan Azrad, Viacheslav Ovsiienko
CC: Dariusz Sosnowski
Subject: [PATCH 18/27] net/mlx5: add meta item support in egress
Date: Fri, 23 Sep 2022 17:43:25 +0300
Message-ID: <20220923144334.27736-19-suanmingm@nvidia.com>
In-Reply-To: <20220923144334.27736-1-suanmingm@nvidia.com>
From: Dariusz Sosnowski

This patch adds support for the META item in HW Steering mode, in the
NIC TX domain. Due to API limitations, use of the META item requires that
all mlx5 ports use the same configuration of the dv_esw_en and dv_xmeta_en
device arguments, so that the META item is consistently translated to the
appropriate register. If mlx5 ports use different configurations, the
configuration of the first probed device is used.

Signed-off-by: Dariusz Sosnowski
---
 drivers/net/mlx5/linux/mlx5_os.c |  1 +
 drivers/net/mlx5/mlx5.c          |  4 ++-
 drivers/net/mlx5/mlx5_flow.h     | 22 +++++++++++-
 drivers/net/mlx5/mlx5_flow_hw.c  | 61 ++++++++++++++++++++++++++++++--
 4 files changed, 84 insertions(+), 4 deletions(-)

diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c
index 41940d7ce7..54e7164663 100644
--- a/drivers/net/mlx5/linux/mlx5_os.c
+++ b/drivers/net/mlx5/linux/mlx5_os.c
@@ -1563,6 +1563,7 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
     }
     /* Only HWS requires this information. */
     flow_hw_init_tags_set(eth_dev);
+    flow_hw_init_flow_metadata_config(eth_dev);
     if (priv->sh->config.dv_esw_en &&
         flow_hw_create_vport_action(eth_dev)) {
         DRV_LOG(ERR, "port %u failed to create vport action",
diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index 314176022a..87cbcd473d 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -1970,8 +1970,10 @@ mlx5_dev_close(struct rte_eth_dev *dev)
     flow_hw_resource_release(dev);
 #endif
     flow_hw_clear_port_info(dev);
-    if (priv->sh->config.dv_flow_en == 2)
+    if (priv->sh->config.dv_flow_en == 2) {
+        flow_hw_clear_flow_metadata_config();
         flow_hw_clear_tags_set(dev);
+    }
     if (priv->rxq_privs != NULL) {
         /* XXX race condition if mlx5_rx_burst() is still running.
          */
         rte_delay_us_sleep(1000);
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index a39dacc60a..dae2fe6b37 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -1485,6 +1485,13 @@ flow_hw_get_wire_port(struct ibv_context *ibctx)
     return NULL;
 }
 
+extern uint32_t mlx5_flow_hw_flow_metadata_config_refcnt;
+extern uint8_t mlx5_flow_hw_flow_metadata_esw_en;
+extern uint8_t mlx5_flow_hw_flow_metadata_xmeta_en;
+
+void flow_hw_init_flow_metadata_config(struct rte_eth_dev *dev);
+void flow_hw_clear_flow_metadata_config(void);
+
 /*
  * Convert metadata or tag to the actual register.
  * META: Can only be used to match in the FDB in this stage, fixed C_1.
@@ -1496,7 +1503,20 @@ flow_hw_get_reg_id(enum rte_flow_item_type type, uint32_t id)
 {
     switch (type) {
     case RTE_FLOW_ITEM_TYPE_META:
-        return REG_C_1;
+        if (mlx5_flow_hw_flow_metadata_esw_en &&
+            mlx5_flow_hw_flow_metadata_xmeta_en == MLX5_XMETA_MODE_META32_HWS) {
+            return REG_C_1;
+        }
+        /*
+         * On root table - PMD allows only egress META matching, thus
+         * REG_A matching is sufficient.
+         *
+         * On non-root tables - REG_A corresponds to general_purpose_lookup_field,
+         * which translates to REG_A in NIC TX and to REG_B in NIC RX.
+         * However, current FW does not implement REG_B case right now, so
+         * REG_B case should be rejected on pattern template validation.
+         */
+        return REG_A;
     case RTE_FLOW_ITEM_TYPE_TAG:
         MLX5_ASSERT(id < MLX5_FLOW_HW_TAGS_MAX);
         return mlx5_flow_hw_avl_tags[id];
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index dfbc434d54..55a14d39eb 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -3332,7 +3332,6 @@ flow_hw_pattern_validate(struct rte_eth_dev *dev,
         {
             const struct rte_flow_item_tag *tag =
                 (const struct rte_flow_item_tag *)items[i].spec;
-            struct mlx5_priv *priv = dev->data->dev_private;
             uint8_t regcs = (uint8_t)priv->sh->cdev->config.hca_attr.set_reg_c;
 
             if (!((1 << (tag->index - REG_C_0)) & regcs))
@@ -3349,6 +3348,17 @@ flow_hw_pattern_validate(struct rte_eth_dev *dev,
                                       "represented port item cannot be used"
                                       " when transfer attribute is set");
             break;
+        case RTE_FLOW_ITEM_TYPE_META:
+            if (!priv->sh->config.dv_esw_en ||
+                priv->sh->config.dv_xmeta_en != MLX5_XMETA_MODE_META32_HWS) {
+                if (attr->ingress)
+                    return rte_flow_error_set(error, EINVAL,
+                                              RTE_FLOW_ERROR_TYPE_ITEM, NULL,
+                                              "META item is not supported"
+                                              " on current FW with ingress"
+                                              " attribute");
+            }
+            break;
         case RTE_FLOW_ITEM_TYPE_VOID:
         case RTE_FLOW_ITEM_TYPE_ETH:
         case RTE_FLOW_ITEM_TYPE_VLAN:
@@ -3360,7 +3370,6 @@ flow_hw_pattern_validate(struct rte_eth_dev *dev,
         case RTE_FLOW_ITEM_TYPE_GTP_PSC:
         case RTE_FLOW_ITEM_TYPE_VXLAN:
         case MLX5_RTE_FLOW_ITEM_TYPE_TX_QUEUE:
-        case RTE_FLOW_ITEM_TYPE_META:
         case RTE_FLOW_ITEM_TYPE_GRE:
         case RTE_FLOW_ITEM_TYPE_GRE_KEY:
         case RTE_FLOW_ITEM_TYPE_GRE_OPTION:
@@ -4938,6 +4947,54 @@ void flow_hw_clear_tags_set(struct rte_eth_dev *dev)
            sizeof(enum modify_reg) * MLX5_FLOW_HW_TAGS_MAX);
 }
 
+uint32_t mlx5_flow_hw_flow_metadata_config_refcnt;
+uint8_t mlx5_flow_hw_flow_metadata_esw_en;
+uint8_t mlx5_flow_hw_flow_metadata_xmeta_en;
+
+/**
+ * Initializes static configuration of META flow items.
+ *
+ * As a temporary workaround, META flow item is translated to a register,
+ * based on statically saved dv_esw_en and dv_xmeta_en device arguments.
+ * It is a workaround for flow_hw_get_reg_id() where port specific information
+ * is not available at runtime.
+ *
+ * Values of dv_esw_en and dv_xmeta_en device arguments are taken from the first opened port.
+ * This means that each mlx5 port will use the same configuration for translation
+ * of META flow items.
+ *
+ * @param[in] dev
+ *   Pointer to Ethernet device.
+ */
+void
+flow_hw_init_flow_metadata_config(struct rte_eth_dev *dev)
+{
+    uint32_t refcnt;
+
+    refcnt = __atomic_fetch_add(&mlx5_flow_hw_flow_metadata_config_refcnt, 1,
+                                __ATOMIC_RELAXED);
+    if (refcnt > 0)
+        return;
+    mlx5_flow_hw_flow_metadata_esw_en = MLX5_SH(dev)->config.dv_esw_en;
+    mlx5_flow_hw_flow_metadata_xmeta_en = MLX5_SH(dev)->config.dv_xmeta_en;
+}
+
+/**
+ * Clears statically stored configuration related to META flow items.
+ */
+void
+flow_hw_clear_flow_metadata_config(void)
+{
+    uint32_t refcnt;
+
+    refcnt = __atomic_sub_fetch(&mlx5_flow_hw_flow_metadata_config_refcnt, 1,
+                                __ATOMIC_RELAXED);
+    if (refcnt > 0)
+        return;
+    mlx5_flow_hw_flow_metadata_esw_en = 0;
+    mlx5_flow_hw_flow_metadata_xmeta_en = 0;
+}
+
 /**
  * Create shared action.
  *
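The init/clear pair above is an instance of a refcounted process-global
configuration: the first port latches the values, the last port clears
them, and intermediate calls only adjust the counter. Below is a
standalone sketch of the same idiom with C11 atomics; names are local to
the sketch, and, like the driver code, it assumes init calls are
serialized by device probing, since relaxed ordering alone does not make
the configuration stores race-free:

#include <stdatomic.h>
#include <stdint.h>

static atomic_uint g_cfg_refcnt;
static uint8_t g_esw_en;
static uint8_t g_xmeta_en;

static void
cfg_init(uint8_t esw_en, uint8_t xmeta_en)
{
    /* fetch_add returns the previous count: only the first caller stores. */
    if (atomic_fetch_add_explicit(&g_cfg_refcnt, 1, memory_order_relaxed) > 0)
        return;
    g_esw_en = esw_en;
    g_xmeta_en = xmeta_en;
}

static void
cfg_clear(void)
{
    /* fetch_sub returns the previous count: only the last caller clears. */
    if (atomic_fetch_sub_explicit(&g_cfg_refcnt, 1, memory_order_relaxed) > 1)
        return;
    g_esw_en = 0;
    g_xmeta_en = 0;
}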
From patchwork Fri Sep 23 14:43:26 2022
X-Patchwork-Submitter: Suanming Mou
X-Patchwork-Id: 116758

From: Suanming Mou
To: Matan Azrad, Viacheslav Ovsiienko
CC: Bing Zhao
Subject: [PATCH 19/27] net/mlx5: add support for ASO return register
Date: Fri, 23 Sep 2022 17:43:26 +0300
Message-ID: <20220923144334.27736-20-suanmingm@nvidia.com>
In-Reply-To: <20220923144334.27736-1-suanmingm@nvidia.com>
From: Bing Zhao

A REG_C_x metadata register is needed to store the result after an ASO
action. As in SWS, the meter color register is used for all the ASO
actions right now, and this register was already filtered out from the
available tags. It is assumed for now that all devices use the same meter
color register within one application. In the next stage, the available
tags and the allocation of the other metadata registers will be stored
per device.
Signed-off-by: Bing Zhao
---
 drivers/net/mlx5/mlx5_flow.c    | 1 +
 drivers/net/mlx5/mlx5_flow.h    | 3 +++
 drivers/net/mlx5/mlx5_flow_hw.c | 2 ++
 3 files changed, 6 insertions(+)

diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index 658cc69750..cbf9c31984 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -49,6 +49,7 @@ struct flow_hw_port_info mlx5_flow_hw_port_infos[RTE_MAX_ETHPORTS];
  */
 uint32_t mlx5_flow_hw_avl_tags_init_cnt;
 enum modify_reg mlx5_flow_hw_avl_tags[MLX5_FLOW_HW_TAGS_MAX] = {REG_NON};
+enum modify_reg mlx5_flow_hw_aso_tag;
 
 struct tunnel_default_miss_ctx {
     uint16_t *queue;
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index dae2fe6b37..a6bd002dca 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -1445,6 +1445,7 @@ extern struct flow_hw_port_info mlx5_flow_hw_port_infos[RTE_MAX_ETHPORTS];
 #define MLX5_FLOW_HW_TAGS_MAX 8
 extern uint32_t mlx5_flow_hw_avl_tags_init_cnt;
 extern enum modify_reg mlx5_flow_hw_avl_tags[];
+extern enum modify_reg mlx5_flow_hw_aso_tag;
 
 /*
  * Get metadata match tag and mask for given rte_eth_dev port.
@@ -1517,6 +1518,8 @@ flow_hw_get_reg_id(enum rte_flow_item_type type, uint32_t id)
          * REG_B case should be rejected on pattern template validation.
          */
         return REG_A;
+    case RTE_FLOW_ITEM_TYPE_CONNTRACK:
+        return mlx5_flow_hw_aso_tag;
     case RTE_FLOW_ITEM_TYPE_TAG:
         MLX5_ASSERT(id < MLX5_FLOW_HW_TAGS_MAX);
         return mlx5_flow_hw_avl_tags[id];
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 55a14d39eb..b9d4402aed 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -4903,6 +4903,7 @@ void flow_hw_init_tags_set(struct rte_eth_dev *dev)
     unset |= 1 << (REG_C_1 - REG_C_0);
     masks &= ~unset;
     if (mlx5_flow_hw_avl_tags_init_cnt) {
+        MLX5_ASSERT(mlx5_flow_hw_aso_tag == priv->mtr_color_reg);
         for (i = 0; i < MLX5_FLOW_HW_TAGS_MAX; i++) {
             if (mlx5_flow_hw_avl_tags[i] != REG_NON && !!((1 << i) & masks)) {
                 copy[mlx5_flow_hw_avl_tags[i] - REG_C_0] =
@@ -4925,6 +4926,7 @@ void flow_hw_init_tags_set(struct rte_eth_dev *dev)
         }
     }
     priv->sh->hws_tags = 1;
+    mlx5_flow_hw_aso_tag = (enum modify_reg)priv->mtr_color_reg;
     mlx5_flow_hw_avl_tags_init_cnt++;
 }
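With RTE_FLOW_ITEM_TYPE_CONNTRACK now resolving to the ASO return
register, a pattern can match on the result of the connection tracking
check. A minimal sketch of such a pattern item pair follows; the flag
choice is illustrative and the surrounding template/rule creation is
omitted:

#include <rte_flow.h>

/* Match packets whose conntrack ASO check reported a valid state.
 * On mlx5 HWS this match is backed by the meter color REG_C_x register
 * resolved by flow_hw_get_reg_id() above. */
static const struct rte_flow_item_conntrack ct_spec = {
    .flags = RTE_FLOW_CONNTRACK_PKT_STATE_VALID,
};
static const struct rte_flow_item_conntrack ct_mask = {
    .flags = RTE_FLOW_CONNTRACK_PKT_STATE_VALID,
};
static const struct rte_flow_item ct_pattern[] = {
    {
        .type = RTE_FLOW_ITEM_TYPE_CONNTRACK,
        .spec = &ct_spec,
        .mask = &ct_mask,
    },
    { .type = RTE_FLOW_ITEM_TYPE_END },
};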
From patchwork Fri Sep 23 14:43:27 2022
X-Patchwork-Submitter: Suanming Mou
X-Patchwork-Id: 116759
X-Patchwork-Delegate: thomas@monjalon.net
From: Suanming Mou
To: Ori Kam, Thomas Monjalon, "Ferruh Yigit", Andrew Rybchenko
CC:
Subject: [PATCH 20/27] lib/ethdev: add connection tracking configuration
Date: Fri, 23 Sep 2022 17:43:27 +0300
Message-ID: <20220923144334.27736-21-suanmingm@nvidia.com>
In-Reply-To: <20220923144334.27736-1-suanmingm@nvidia.com>
References: <20220923144334.27736-1-suanmingm@nvidia.com>

This commit adds the configuration of the maximum number of connection tracking objects for the async flow engine.

Signed-off-by: Suanming Mou --- lib/ethdev/rte_flow.h | 5 +++++ 1 file changed, 5 insertions(+) diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h index abb475bdee..e9a1bce38b 100644 --- a/lib/ethdev/rte_flow.h +++ b/lib/ethdev/rte_flow.h @@ -4991,6 +4991,11 @@ struct rte_flow_port_attr { * @see RTE_FLOW_ACTION_TYPE_METER */ uint32_t nb_meter_policies; + /** + * Number of connection tracking to configure.
+ * @see RTE_FLOW_ACTION_TYPE_CONNTRACK + */ + uint32_t nb_cts; }; /**
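[Editor's note] As a usage illustration (hypothetical application code, not part of the patch), CT objects would be requested through the new field when pre-allocating port resources, before any template or table creation. A minimal sketch with made-up sizes, assuming one flow queue:

	#include <rte_flow.h>

	static int
	example_configure(uint16_t port_id)
	{
		struct rte_flow_error error;
		const struct rte_flow_port_attr port_attr = {
			.nb_counters = 1 << 16,	/* pre-existing field */
			.nb_cts = 1 << 10,	/* new: CT objects for the async engine */
		};
		const struct rte_flow_queue_attr queue_attr = { .size = 64 };
		const struct rte_flow_queue_attr *queue_attrs[] = { &queue_attr };

		/* Must be called before template/table creation. */
		return rte_flow_configure(port_id, &port_attr, 1, queue_attrs, &error);
	}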
From patchwork Fri Sep 23 14:43:28 2022
X-Patchwork-Submitter: Suanming Mou
X-Patchwork-Id: 116760
X-Patchwork-Delegate: thomas@monjalon.net
From: Suanming Mou
To: Matan Azrad, Viacheslav Ovsiienko
CC:
Subject: [PATCH 21/27] net/mlx5: add HW steering connection tracking support
Date: Fri, 23 Sep 2022 17:43:28 +0300
Message-ID: <20220923144334.27736-22-suanmingm@nvidia.com>
In-Reply-To: <20220923144334.27736-1-suanmingm@nvidia.com>
References: <20220923144334.27736-1-suanmingm@nvidia.com>
This commit adds connection tracking support to HW steering, matching what SW steering already provides. Unlike the SW steering implementation, HW steering can take advantage of bulk action allocation, so only one single CT pool is needed. An indexed pool is introduced to record the actions allocated from the bulk, the CT action state, and so on. Whenever a CT action is allocated from the bulk, an indexed object is also allocated from the indexed pool, and likewise for deallocation. In this way, mlx5_aso_ct_action objects can be managed by the indexed pool and no longer need to be reserved from mlx5_aso_ct_pool. The single CT pool is also stored directly in the mlx5_aso_ct_action struct. The ASO operation functions are shared with the SW steering implementation.

Signed-off-by: Suanming Mou --- drivers/net/mlx5/mlx5.h | 27 ++- drivers/net/mlx5/mlx5_flow.h | 4 + drivers/net/mlx5/mlx5_flow_aso.c | 19 +- drivers/net/mlx5/mlx5_flow_dv.c | 6 +- drivers/net/mlx5/mlx5_flow_hw.c | 342 ++++++++++++++++++++++++++++++- 5 files changed, 388 insertions(+), 10 deletions(-) diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h index be60038810..ee4823f649 100644 --- a/drivers/net/mlx5/mlx5.h +++ b/drivers/net/mlx5/mlx5.h @@ -1159,7 +1159,12 @@ enum mlx5_aso_ct_state { /* Generic ASO connection tracking structure. */ struct mlx5_aso_ct_action { - LIST_ENTRY(mlx5_aso_ct_action) next; /* Pointer to the next ASO CT. */ + union { + LIST_ENTRY(mlx5_aso_ct_action) next; + /* Pointer to the next ASO CT. Used only in SWS. */ + struct mlx5_aso_ct_pool *pool; + /* Pointer to action pool. Used only in HWS. */ + }; void *dr_action_orig; /* General action object for original dir. */ void *dr_action_rply; /* General action object for reply dir. */ uint32_t refcnt; /* Action used count in device flows. */ @@ -1173,15 +1178,30 @@ struct mlx5_aso_ct_action { #define MLX5_ASO_CT_UPDATE_STATE(c, s) \ __atomic_store_n(&((c)->state), (s), __ATOMIC_RELAXED) +#ifdef PEDANTIC +#pragma GCC diagnostic ignored "-Wpedantic" +#endif + /* ASO connection tracking software pool definition. */ struct mlx5_aso_ct_pool { uint16_t index; /* Pool index in pools array. */ + /* Free ASO CT index in the pool. Used by HWS. */ + struct mlx5_indexed_pool *cts; struct mlx5_devx_obj *devx_obj; - /* The first devx object in the bulk, used for freeing (not yet). */ - struct mlx5_aso_ct_action actions[MLX5_ASO_CT_ACTIONS_PER_POOL]; + union { + void *dummy_action; + /* Dummy action to increase the reference count in the driver. */ + struct mlx5dr_action *dr_action; + /* HWS action. */ + }; + struct mlx5_aso_ct_action actions[0]; /* CT action structures bulk. */ }; +#ifdef PEDANTIC +#pragma GCC diagnostic error "-Wpedantic" +#endif + LIST_HEAD(aso_ct_list, mlx5_aso_ct_action); /* Pools management structure for ASO connection tracking pools.
*/ @@ -1647,6 +1667,7 @@ struct mlx5_priv { LIST_HEAD(flow_hw_tbl_ongo, rte_flow_template_table) flow_hw_tbl_ongo; struct mlx5_indexed_pool *acts_ipool; /* Action data indexed pool. */ struct mlx5_hws_cnt_pool *hws_cpool; /* HW steering's counter pool. */ + struct mlx5_aso_ct_pool *hws_ctpool; /* HW steering's CT pool. */ #endif }; diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h index a6bd002dca..f7bedd9605 100644 --- a/drivers/net/mlx5/mlx5_flow.h +++ b/drivers/net/mlx5/mlx5_flow.h @@ -83,6 +83,10 @@ enum { #define MLX5_INDIRECT_ACT_CT_GET_IDX(index) \ ((index) & ((1 << MLX5_INDIRECT_ACT_CT_OWNER_SHIFT) - 1)) +#define MLX5_ACTION_CTX_CT_GET_IDX MLX5_INDIRECT_ACT_CT_GET_IDX +#define MLX5_ACTION_CTX_CT_GET_OWNER MLX5_INDIRECT_ACT_CT_GET_OWNER +#define MLX5_ACTION_CTX_CT_GEN_IDX MLX5_INDIRECT_ACT_CT_GEN_IDX + /* Matches on selected register. */ struct mlx5_rte_flow_item_tag { enum modify_reg id; diff --git a/drivers/net/mlx5/mlx5_flow_aso.c b/drivers/net/mlx5/mlx5_flow_aso.c index ed9272e583..34fed3f4b8 100644 --- a/drivers/net/mlx5/mlx5_flow_aso.c +++ b/drivers/net/mlx5/mlx5_flow_aso.c @@ -903,6 +903,15 @@ mlx5_aso_mtr_wait(struct mlx5_dev_ctx_shared *sh, return -1; } +static inline struct mlx5_aso_ct_pool* +__mlx5_aso_ct_get_pool(struct mlx5_dev_ctx_shared *sh, + struct mlx5_aso_ct_action *ct) +{ + if (likely(sh->config.dv_flow_en == 2)) + return ct->pool; + return container_of(ct, struct mlx5_aso_ct_pool, actions[ct->offset]); +} + /* * Post a WQE to the ASO CT SQ to modify the context. * @@ -945,7 +954,7 @@ mlx5_aso_ct_sq_enqueue_single(struct mlx5_dev_ctx_shared *sh, MLX5_ASO_CT_UPDATE_STATE(ct, ASO_CONNTRACK_WAIT); sq->elts[sq->head & mask].ct = ct; sq->elts[sq->head & mask].query_data = NULL; - pool = container_of(ct, struct mlx5_aso_ct_pool, actions[ct->offset]); + pool = __mlx5_aso_ct_get_pool(sh, ct); /* Each WQE will have a single CT object. */ wqe->general_cseg.misc = rte_cpu_to_be_32(pool->devx_obj->id + ct->offset); @@ -1113,7 +1122,7 @@ mlx5_aso_ct_sq_query_single(struct mlx5_dev_ctx_shared *sh, wqe_idx = sq->head & mask; sq->elts[wqe_idx].ct = ct; sq->elts[wqe_idx].query_data = data; - pool = container_of(ct, struct mlx5_aso_ct_pool, actions[ct->offset]); + pool = __mlx5_aso_ct_get_pool(sh, ct); /* Each WQE will have a single CT object. */ wqe->general_cseg.misc = rte_cpu_to_be_32(pool->devx_obj->id + ct->offset); @@ -1231,7 +1240,7 @@ mlx5_aso_ct_update_by_wqe(struct mlx5_dev_ctx_shared *sh, /* Waiting for wqe resource. */ rte_delay_us_sleep(10u); } while (--poll_wqe_times); - pool = container_of(ct, struct mlx5_aso_ct_pool, actions[ct->offset]); + pool = __mlx5_aso_ct_get_pool(sh, ct); DRV_LOG(ERR, "Fail to send WQE for ASO CT %d in pool %d", ct->offset, pool->index); return -1; @@ -1267,7 +1276,7 @@ mlx5_aso_ct_wait_ready(struct mlx5_dev_ctx_shared *sh, /* Waiting for CQE ready, consider should block or sleep. 
*/ rte_delay_us_sleep(MLX5_ASO_WQE_CQE_RESPONSE_DELAY); } while (--poll_cqe_times); - pool = container_of(ct, struct mlx5_aso_ct_pool, actions[ct->offset]); + pool = __mlx5_aso_ct_get_pool(sh, ct); DRV_LOG(ERR, "Fail to poll CQE for ASO CT %d in pool %d", ct->offset, pool->index); return -1; @@ -1383,7 +1392,7 @@ mlx5_aso_ct_query_by_wqe(struct mlx5_dev_ctx_shared *sh, else rte_delay_us_sleep(10u); } while (--poll_wqe_times); - pool = container_of(ct, struct mlx5_aso_ct_pool, actions[ct->offset]); + pool = __mlx5_aso_ct_get_pool(sh, ct); DRV_LOG(ERR, "Fail to send WQE for ASO CT %d in pool %d", ct->offset, pool->index); return -1; diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c index 80539fd75d..e2794c1d26 100644 --- a/drivers/net/mlx5/mlx5_flow_dv.c +++ b/drivers/net/mlx5/mlx5_flow_dv.c @@ -12790,6 +12790,7 @@ flow_dv_ct_pool_create(struct rte_eth_dev *dev, struct mlx5_devx_obj *obj = NULL; uint32_t i; uint32_t log_obj_size = rte_log2_u32(MLX5_ASO_CT_ACTIONS_PER_POOL); + size_t mem_size; obj = mlx5_devx_cmd_create_conn_track_offload_obj(priv->sh->cdev->ctx, priv->sh->cdev->pdn, @@ -12799,7 +12800,10 @@ flow_dv_ct_pool_create(struct rte_eth_dev *dev, DRV_LOG(ERR, "Failed to create conn_track_offload_obj using DevX."); return NULL; } - pool = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*pool), 0, SOCKET_ID_ANY); + mem_size = sizeof(struct mlx5_aso_ct_action) * + MLX5_ASO_CT_ACTIONS_PER_POOL + + sizeof(*pool); + pool = mlx5_malloc(MLX5_MEM_ZERO, mem_size, 0, SOCKET_ID_ANY); if (!pool) { rte_errno = ENOMEM; claim_zero(mlx5_devx_cmd_destroy(obj)); diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c index b9d4402aed..a4a0882d15 100644 --- a/drivers/net/mlx5/mlx5_flow_hw.c +++ b/drivers/net/mlx5/mlx5_flow_hw.c @@ -14,9 +14,19 @@ #include "mlx5dr_send.h" #include "mlx5_hws_cnt.h" +#define MLX5_HW_INV_QUEUE UINT32_MAX + /* The maximum actions support in the flow. */ #define MLX5_HW_MAX_ACTS 16 +/* + * The default ipool threshold value indicates which per_core_cache + * value to set. + */ +#define MLX5_HW_IPOOL_SIZE_THRESHOLD (1 << 19) +/* The default min local cache size. */ +#define MLX5_HW_IPOOL_CACHE_MIN (1 << 9) + /* Default push burst threshold. */ #define BURST_THR 32u @@ -323,6 +333,24 @@ flow_hw_tir_action_register(struct rte_eth_dev *dev, return hrxq; } +static __rte_always_inline int +flow_hw_ct_compile(struct rte_eth_dev *dev, uint32_t idx, + struct mlx5dr_rule_action *rule_act) +{ + struct mlx5_priv *priv = dev->data->dev_private; + struct mlx5_aso_ct_action *ct; + + ct = mlx5_ipool_get(priv->hws_ctpool->cts, idx); + if (!ct || mlx5_aso_ct_available(priv->sh, ct)) + return -1; + rule_act->action = priv->hws_ctpool->dr_action; + rule_act->aso_ct.offset = ct->offset; + rule_act->aso_ct.direction = ct->is_original ? + MLX5DR_ACTION_ASO_CT_DIRECTION_INITIATOR : + MLX5DR_ACTION_ASO_CT_DIRECTION_RESPONDER; + return 0; +} + /** * Destroy DR actions created by action template. 
* @@ -622,6 +650,10 @@ flow_hw_shared_action_translate(struct rte_eth_dev *dev, action_src, action_dst, act_idx)) return -1; break; + case MLX5_INDIRECT_ACTION_TYPE_CT: + if (flow_hw_ct_compile(dev, idx, &acts->rule_acts[action_dst])) + return -1; + break; default: DRV_LOG(WARNING, "Unsupported shared action type:%d", type); break; @@ -1057,6 +1089,7 @@ __flow_hw_actions_translate(struct rte_eth_dev *dev, bool reformat_used = false; uint16_t action_pos; uint16_t jump_pos; + uint32_t ct_idx; int err; flow_hw_modify_field_init(&mhdr, at); @@ -1279,6 +1312,20 @@ __flow_hw_actions_translate(struct rte_eth_dev *dev, goto err; } break; + case RTE_FLOW_ACTION_TYPE_CONNTRACK: + action_pos = at->actions_off[actions - action_start]; + if (masks->conf) { + ct_idx = MLX5_ACTION_CTX_CT_GET_IDX + ((uint32_t)(uintptr_t)actions->conf); + if (flow_hw_ct_compile(dev, ct_idx, + &acts->rule_acts[action_pos])) + goto err; + } else if (__flow_hw_act_data_general_append + (priv, acts, actions->type, + actions - action_start, action_pos)) { + goto err; + } + break; case RTE_FLOW_ACTION_TYPE_END: actions_end = true; break; @@ -1506,6 +1553,10 @@ flow_hw_shared_action_construct(struct rte_eth_dev *dev, &rule_act->counter.offset)) return -1; break; + case MLX5_INDIRECT_ACTION_TYPE_CT: + if (flow_hw_ct_compile(dev, idx, rule_act)) + return -1; + break; default: DRV_LOG(WARNING, "Unsupported shared action type:%d", type); break; @@ -1691,6 +1742,7 @@ flow_hw_actions_construct(struct rte_eth_dev *dev, uint64_t item_flags; struct mlx5_hw_jump_action *jump; struct mlx5_hrxq *hrxq; + uint32_t ct_idx; cnt_id_t cnt_id; action = &actions[act_data->action_src]; @@ -1824,6 +1876,13 @@ flow_hw_actions_construct(struct rte_eth_dev *dev, return ret; job->flow->cnt_id = act_data->shared_counter.id; break; + case RTE_FLOW_ACTION_TYPE_CONNTRACK: + ct_idx = MLX5_ACTION_CTX_CT_GET_IDX + ((uint32_t)(uintptr_t)action->conf); + if (flow_hw_ct_compile(dev, ct_idx, + &rule_acts[act_data->action_dst])) + return -1; + break; default: break; } @@ -2348,6 +2407,8 @@ flow_hw_table_create(struct rte_eth_dev *dev, if (nb_flows < cfg.trunk_size) { cfg.per_core_cache = 0; cfg.trunk_size = nb_flows; + } else if (nb_flows <= MLX5_HW_IPOOL_SIZE_THRESHOLD) { + cfg.per_core_cache = MLX5_HW_IPOOL_CACHE_MIN; } /* Check if we requires too many templates. 
*/ if (nb_item_templates > max_tpl || @@ -2867,6 +2928,9 @@ flow_hw_actions_validate(struct rte_eth_dev *dev, case RTE_FLOW_ACTION_TYPE_COUNT: /* TODO: Validation logic */ break; + case RTE_FLOW_ACTION_TYPE_CONNTRACK: + /* TODO: Validation logic */ + break; case RTE_FLOW_ACTION_TYPE_END: actions_end = true; break; @@ -2893,6 +2957,7 @@ static enum mlx5dr_action_type mlx5_hw_dr_action_types[] = { [RTE_FLOW_ACTION_TYPE_MODIFY_FIELD] = MLX5DR_ACTION_TYP_MODIFY_HDR, [RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT] = MLX5DR_ACTION_TYP_VPORT, [RTE_FLOW_ACTION_TYPE_COUNT] = MLX5DR_ACTION_TYP_CTR, + [RTE_FLOW_ACTION_TYPE_CONNTRACK] = MLX5DR_ACTION_TYP_ASO_CT, }; static int @@ -2921,6 +2986,11 @@ flow_hw_dr_actions_template_handle_shared(const struct rte_flow_action *mask, action_types[*curr_off] = MLX5DR_ACTION_TYP_CTR; *curr_off = *curr_off + 1; break; + case RTE_FLOW_ACTION_TYPE_CONNTRACK: + at->actions_off[action_src] = *curr_off; + action_types[*curr_off] = MLX5DR_ACTION_TYP_ASO_CT; + *curr_off = *curr_off + 1; + break; default: DRV_LOG(WARNING, "Unsupported shared action type: %d", type); return -EINVAL; @@ -3375,6 +3445,7 @@ flow_hw_pattern_validate(struct rte_eth_dev *dev, case RTE_FLOW_ITEM_TYPE_GRE_OPTION: case RTE_FLOW_ITEM_TYPE_ICMP: case RTE_FLOW_ITEM_TYPE_ICMP6: + case RTE_FLOW_ITEM_TYPE_CONNTRACK: break; case RTE_FLOW_ITEM_TYPE_END: items_end = true; @@ -4570,6 +4641,84 @@ flow_hw_create_ctrl_tables(struct rte_eth_dev *dev) return -EINVAL; } +static void +flow_hw_ct_pool_destroy(struct rte_eth_dev *dev __rte_unused, + struct mlx5_aso_ct_pool *pool) +{ + if (pool->dr_action) + mlx5dr_action_destroy(pool->dr_action); + if (pool->devx_obj) + claim_zero(mlx5_devx_cmd_destroy(pool->devx_obj)); + if (pool->cts) + mlx5_ipool_destroy(pool->cts); + mlx5_free(pool); +} + +static struct mlx5_aso_ct_pool * +flow_hw_ct_pool_create(struct rte_eth_dev *dev, + const struct rte_flow_port_attr *port_attr) +{ + struct mlx5_priv *priv = dev->data->dev_private; + struct mlx5_aso_ct_pool *pool; + struct mlx5_devx_obj *obj; + uint32_t nb_cts = rte_align32pow2(port_attr->nb_cts); + uint32_t log_obj_size = rte_log2_u32(nb_cts); + struct mlx5_indexed_pool_config cfg = { + .size = sizeof(struct mlx5_aso_ct_action), + .trunk_size = 1 << 12, + .per_core_cache = 1 << 13, + .need_lock = 1, + .release_mem_en = !!priv->sh->config.reclaim_mode, + .malloc = mlx5_malloc, + .free = mlx5_free, + .type = "mlx5_hw_ct_action", + }; + int reg_id; + uint32_t flags; + + pool = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*pool), 0, SOCKET_ID_ANY); + if (!pool) { + rte_errno = ENOMEM; + return NULL; + } + obj = mlx5_devx_cmd_create_conn_track_offload_obj(priv->sh->cdev->ctx, + priv->sh->cdev->pdn, + log_obj_size); + if (!obj) { + rte_errno = ENODATA; + DRV_LOG(ERR, "Failed to create conn_track_offload_obj using DevX."); + goto err; + } + pool->devx_obj = obj; + reg_id = mlx5_flow_get_reg_id(dev, MLX5_ASO_CONNTRACK, 0, NULL); + flags = MLX5DR_ACTION_FLAG_HWS_RX | MLX5DR_ACTION_FLAG_HWS_TX; + if (priv->sh->config.dv_esw_en && priv->master) + flags |= MLX5DR_ACTION_FLAG_HWS_FDB; + pool->dr_action = mlx5dr_action_create_aso_ct(priv->dr_ctx, + (struct mlx5dr_devx_obj *)obj, + reg_id - REG_C_0, flags); + if (!pool->dr_action) + goto err; + /* + * No need for local cache if CT number is a small number. Since + * flow insertion rate will be very limited in that case. Here let's + * set the number to less than default trunk size 4K. 
+ */ + if (nb_cts <= cfg.trunk_size) { + cfg.per_core_cache = 0; + cfg.trunk_size = nb_cts; + } else if (nb_cts <= MLX5_HW_IPOOL_SIZE_THRESHOLD) { + cfg.per_core_cache = MLX5_HW_IPOOL_CACHE_MIN; + } + pool->cts = mlx5_ipool_create(&cfg); + if (!pool->cts) + goto err; + return pool; +err: + flow_hw_ct_pool_destroy(dev, pool); + return NULL; +} + /** * Configure port HWS resources. * @@ -4755,6 +4904,11 @@ flow_hw_configure(struct rte_eth_dev *dev, } if (_queue_attr) mlx5_free(_queue_attr); + if (port_attr->nb_cts) { + priv->hws_ctpool = flow_hw_ct_pool_create(dev, port_attr); + if (!priv->hws_ctpool) + goto err; + } if (port_attr->nb_counters) { priv->hws_cpool = mlx5_hws_cnt_pool_create(dev, port_attr, nb_queue); @@ -4763,6 +4917,10 @@ flow_hw_configure(struct rte_eth_dev *dev, } return 0; err: + if (priv->hws_ctpool) { + flow_hw_ct_pool_destroy(dev, priv->hws_ctpool); + priv->hws_ctpool = NULL; + } flow_hw_free_vport_actions(priv); for (i = 0; i < MLX5_HW_ACTION_FLAG_MAX; i++) { if (priv->hw_drop[i]) @@ -4835,6 +4993,10 @@ flow_hw_resource_release(struct rte_eth_dev *dev) } if (priv->hws_cpool) mlx5_hws_cnt_pool_destroy(priv->sh, priv->hws_cpool); + if (priv->hws_ctpool) { + flow_hw_ct_pool_destroy(dev, priv->hws_ctpool); + priv->hws_ctpool = NULL; + } mlx5_free(priv->hw_q); priv->hw_q = NULL; claim_zero(mlx5dr_context_close(priv->dr_ctx)); @@ -4997,6 +5159,169 @@ flow_hw_clear_flow_metadata_config(void) mlx5_flow_hw_flow_metadata_xmeta_en = 0; } +static int +flow_hw_conntrack_destroy(struct rte_eth_dev *dev __rte_unused, + uint32_t idx, + struct rte_flow_error *error) +{ + uint16_t owner = (uint16_t)MLX5_ACTION_CTX_CT_GET_OWNER(idx); + uint32_t ct_idx = MLX5_ACTION_CTX_CT_GET_IDX(idx); + struct rte_eth_dev *owndev = &rte_eth_devices[owner]; + struct mlx5_priv *priv = owndev->data->dev_private; + struct mlx5_aso_ct_pool *pool = priv->hws_ctpool; + struct mlx5_aso_ct_action *ct; + + ct = mlx5_ipool_get(pool->cts, ct_idx); + if (!ct) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, + "Invalid CT destruction index"); + } + __atomic_store_n(&ct->state, ASO_CONNTRACK_FREE, + __ATOMIC_RELAXED); + mlx5_ipool_free(pool->cts, ct_idx); + return 0; +} + +static int +flow_hw_conntrack_query(struct rte_eth_dev *dev, uint32_t idx, + struct rte_flow_action_conntrack *profile, + struct rte_flow_error *error) +{ + struct mlx5_priv *priv = dev->data->dev_private; + struct mlx5_aso_ct_pool *pool = priv->hws_ctpool; + struct mlx5_aso_ct_action *ct; + uint16_t owner = (uint16_t)MLX5_ACTION_CTX_CT_GET_OWNER(idx); + uint32_t ct_idx; + + if (owner != PORT_ID(priv)) + return rte_flow_error_set(error, EACCES, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, + "Can't query CT object owned by another port"); + ct_idx = MLX5_ACTION_CTX_CT_GET_IDX(idx); + ct = mlx5_ipool_get(pool->cts, ct_idx); + if (!ct) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, + "Invalid CT query index"); + } + profile->peer_port = ct->peer; + profile->is_original_dir = ct->is_original; + if (mlx5_aso_ct_query_by_wqe(priv->sh, ct, profile)) + return rte_flow_error_set(error, EIO, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, + "Failed to query CT context"); + return 0; +} + + +static int +flow_hw_conntrack_update(struct rte_eth_dev *dev, uint32_t queue, + const struct rte_flow_modify_conntrack *action_conf, + uint32_t idx, struct rte_flow_error *error) +{ + struct mlx5_priv *priv = dev->data->dev_private; + struct mlx5_aso_ct_pool *pool = priv->hws_ctpool; + struct 
mlx5_aso_ct_action *ct; + const struct rte_flow_action_conntrack *new_prf; + uint16_t owner = (uint16_t)MLX5_ACTION_CTX_CT_GET_OWNER(idx); + uint32_t ct_idx; + int ret = 0; + + if (PORT_ID(priv) != owner) + return rte_flow_error_set(error, EACCES, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, + "Can't update CT object owned by another port"); + ct_idx = MLX5_ACTION_CTX_CT_GET_IDX(idx); + ct = mlx5_ipool_get(pool->cts, ct_idx); + if (!ct) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, + "Invalid CT update index"); + } + new_prf = &action_conf->new_ct; + if (action_conf->direction) + ct->is_original = !!new_prf->is_original_dir; + if (action_conf->state) { + /* Only validate the profile when it needs to be updated. */ + ret = mlx5_validate_action_ct(dev, new_prf, error); + if (ret) + return ret; + ret = mlx5_aso_ct_update_by_wqe(priv->sh, ct, new_prf); + if (ret) + return rte_flow_error_set(error, EIO, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, + "Failed to send CT context update WQE"); + if (queue != MLX5_HW_INV_QUEUE) + return 0; + /* Block until ready or a failure in synchronous mode. */ + ret = mlx5_aso_ct_available(priv->sh, ct); + if (ret) + rte_flow_error_set(error, rte_errno, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, + "Timeout to get the CT update"); + } + return ret; +} + +static struct rte_flow_action_handle * +flow_hw_conntrack_create(struct rte_eth_dev *dev, uint32_t queue, + const struct rte_flow_action_conntrack *pro, + struct rte_flow_error *error) +{ + struct mlx5_priv *priv = dev->data->dev_private; + struct mlx5_aso_ct_pool *pool = priv->hws_ctpool; + struct mlx5_aso_ct_action *ct; + uint32_t ct_idx = 0; + int ret; + + if (!pool) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, NULL, + "CT is not enabled"); + return 0; + } + ct = mlx5_ipool_zmalloc(pool->cts, &ct_idx); + if (!ct) { + rte_flow_error_set(error, rte_errno, + RTE_FLOW_ERROR_TYPE_ACTION, NULL, + "Failed to allocate CT object"); + return 0; + } + ct->offset = ct_idx - 1; + ct->is_original = !!pro->is_original_dir; + ct->peer = pro->peer_port; + ct->pool = pool; + if (mlx5_aso_ct_update_by_wqe(priv->sh, ct, pro)) { + mlx5_ipool_free(pool->cts, ct_idx); + rte_flow_error_set(error, EBUSY, + RTE_FLOW_ERROR_TYPE_ACTION, NULL, + "Failed to update CT"); + return 0; + } + if (queue == MLX5_HW_INV_QUEUE) { + ret = mlx5_aso_ct_available(priv->sh, ct); + if (ret) { + mlx5_ipool_free(pool->cts, ct_idx); + rte_flow_error_set(error, rte_errno, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, + "Timeout to get the CT update"); + return 0; + } + } + return (struct rte_flow_action_handle *)(uintptr_t) + MLX5_ACTION_CTX_CT_GEN_IDX(PORT_ID(priv), ct_idx); +} + /** * Create shared action. 
* @@ -5044,6 +5369,9 @@ flow_hw_action_handle_create(struct rte_eth_dev *dev, uint32_t queue, handle = (struct rte_flow_action_handle *) (uintptr_t)cnt_id; break; + case RTE_FLOW_ACTION_TYPE_CONNTRACK: + handle = flow_hw_conntrack_create(dev, queue, action->conf, error); + break; default: handle = flow_dv_action_create(dev, conf, action, error); } @@ -5079,10 +5407,18 @@ flow_hw_action_handle_update(struct rte_eth_dev *dev, uint32_t queue, void *user_data, struct rte_flow_error *error) { + uint32_t act_idx = (uint32_t)(uintptr_t)handle; + uint32_t type = act_idx >> MLX5_INDIRECT_ACTION_TYPE_OFFSET; + RTE_SET_USED(queue); RTE_SET_USED(attr); RTE_SET_USED(user_data); - return flow_dv_action_update(dev, handle, update, error); + switch (type) { + case MLX5_INDIRECT_ACTION_TYPE_CT: + return flow_hw_conntrack_update(dev, queue, update, act_idx, error); + default: + return flow_dv_action_update(dev, handle, update, error); + } } /** @@ -5121,6 +5457,8 @@ flow_hw_action_handle_destroy(struct rte_eth_dev *dev, uint32_t queue, switch (type) { case MLX5_INDIRECT_ACTION_TYPE_COUNT: return mlx5_hws_cnt_shared_put(priv->hws_cpool, &act_idx); + case MLX5_INDIRECT_ACTION_TYPE_CT: + return flow_hw_conntrack_destroy(dev, act_idx, error); default: return flow_dv_action_destroy(dev, handle, error); } @@ -5274,6 +5612,8 @@ flow_hw_action_query(struct rte_eth_dev *dev, switch (type) { case MLX5_INDIRECT_ACTION_TYPE_COUNT: return flow_hw_query_counter(dev, act_idx, data, error); + case MLX5_INDIRECT_ACTION_TYPE_CT: + return flow_hw_conntrack_query(dev, act_idx, data, error); default: return flow_dv_action_query(dev, handle, data, error); }
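[Editor's note] As a usage illustration (hypothetical application code, not part of the patch), the new flow_hw_conntrack_create() path is reached through the async indirect-action API. A minimal sketch; the CT profile values are made up:

	#include <rte_flow.h>

	static struct rte_flow_action_handle *
	example_ct_create(uint16_t port_id, uint32_t queue_id)
	{
		struct rte_flow_error error;
		const struct rte_flow_op_attr op_attr = { .postpone = 0 };
		const struct rte_flow_indir_action_conf indir_conf = {
			.ingress = 1,
		};
		const struct rte_flow_action_conntrack profile = {
			.peer_port = port_id,	/* made-up profile values */
			.is_original_dir = 1,
			.enable = 1,
		};
		const struct rte_flow_action action = {
			.type = RTE_FLOW_ACTION_TYPE_CONNTRACK,
			.conf = &profile,
		};

		/* Serviced by flow_hw_conntrack_create() in this PMD. */
		return rte_flow_async_action_handle_create(port_id, queue_id,
							   &op_attr, &indir_conf,
							   &action, NULL, &error);
	}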
From patchwork Fri Sep 23 14:43:29 2022
X-Patchwork-Submitter: Suanming Mou
X-Patchwork-Id: 116757
X-Patchwork-Delegate: thomas@monjalon.net
From: Suanming Mou
To: Matan Azrad, Viacheslav Ovsiienko
CC: , Gregory Etelson
Subject: [PATCH 22/27] net/mlx5: add HW steering VLAN push, pop and VID modify flow actions
Date: Fri, 23 Sep 2022 17:43:29 +0300
Message-ID: <20220923144334.27736-23-suanmingm@nvidia.com>
In-Reply-To: <20220923144334.27736-1-suanmingm@nvidia.com>
References: <20220923144334.27736-1-suanmingm@nvidia.com>
From: Gregory Etelson

Add the PMD implementation for the HW steering VLAN push, pop and VID modify flow actions.

The HWS VLAN push flow action is triggered by a sequence of the mandatory OF_PUSH_VLAN and OF_SET_VLAN_VID flow action commands, optionally followed by OF_SET_VLAN_PCP. The commands must be arranged in this exact order: OF_PUSH_VLAN / OF_SET_VLAN_VID [ / OF_SET_VLAN_PCP ]. In a masked HWS VLAN push flow action template *ALL* of the above flow actions must be masked. In a non-masked HWS VLAN push flow action template *NONE* of the above flow actions may be masked.
Example:

 flow actions_template create \
   actions_template_id \
   template \
     of_push_vlan / \
     of_set_vlan_vid \
     [ / of_set_vlan_pcp ] / end \
   mask \
     of_push_vlan ethertype 0 / \
     of_set_vlan_vid vlan_vid 0 \
     [ / of_set_vlan_pcp vlan_pcp 0 ] / end

 flow actions_template create \
   actions_template_id \
   template \
     of_push_vlan ethertype / \
     of_set_vlan_vid vlan_vid \
     [ / of_set_vlan_pcp ] / end \
   mask \
     of_push_vlan ethertype / \
     of_set_vlan_vid vlan_vid \
     [ / of_set_vlan_pcp vlan_pcp ] / end

The HWS VLAN pop flow action is triggered by the OF_POP_VLAN flow action command. The HWS VLAN pop action template is always non-masked.

Example:

 flow actions_template create \
   actions_template_id \
   template of_pop_vlan / end mask of_pop_vlan / end

The HWS VLAN VID modify flow action is triggered by a standalone OF_SET_VLAN_VID flow action command. The HWS VLAN VID modify action template can be either masked or non-masked.

Example:

 flow actions_template create \
   actions_template_id \
   template of_set_vlan_vid / end mask of_set_vlan_vid vlan_vid 0 / end

 flow actions_template create \
   actions_template_id \
   template of_set_vlan_vid vlan_vid 0x101 / end \
   mask of_set_vlan_vid vlan_vid 0xffff / end

Signed-off-by: Gregory Etelson --- drivers/net/mlx5/mlx5.h | 2 + drivers/net/mlx5/mlx5_flow.h | 4 + drivers/net/mlx5/mlx5_flow_dv.c | 2 +- drivers/net/mlx5/mlx5_flow_hw.c | 360 ++++++++++++++++++++++++++++++-- 4 files changed, 348 insertions(+), 20 deletions(-) diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h index ee4823f649..ec08014832 100644 --- a/drivers/net/mlx5/mlx5.h +++ b/drivers/net/mlx5/mlx5.h @@ -1590,6 +1590,8 @@ struct mlx5_priv { void *root_drop_action; /* Pointer to root drop action. */ rte_spinlock_t hw_ctrl_lock; LIST_HEAD(hw_ctrl_flow, mlx5_hw_ctrl_flow) hw_ctrl_flows; + struct mlx5dr_action *hw_push_vlan[MLX5DR_TABLE_TYPE_MAX]; + struct mlx5dr_action *hw_pop_vlan[MLX5DR_TABLE_TYPE_MAX]; struct mlx5dr_action **hw_vport; struct rte_flow_template_table *hw_esw_sq_miss_root_tbl; struct rte_flow_template_table *hw_esw_sq_miss_tbl; diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h index f7bedd9605..2d1a9dba27 100644 --- a/drivers/net/mlx5/mlx5_flow.h +++ b/drivers/net/mlx5/mlx5_flow.h @@ -2434,4 +2434,8 @@ int mlx5_flow_pattern_validate(struct rte_eth_dev *dev, struct rte_flow_error *error); int flow_hw_table_update(struct rte_eth_dev *dev, struct rte_flow_error *error); +int mlx5_flow_item_field_width(struct rte_eth_dev *dev, + enum rte_flow_field_id field, int inherit, + const struct rte_flow_attr *attr, + struct rte_flow_error *error); #endif /* RTE_PMD_MLX5_FLOW_H_ */ diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c index e2794c1d26..36059beb71 100644 --- a/drivers/net/mlx5/mlx5_flow_dv.c +++ b/drivers/net/mlx5/mlx5_flow_dv.c @@ -1326,7 +1326,7 @@ flow_dv_convert_action_modify_ipv6_dscp MLX5_MODIFICATION_TYPE_SET, error); } -static int +int mlx5_flow_item_field_width(struct rte_eth_dev *dev, enum rte_flow_field_id field, int inherit, const struct rte_flow_attr *attr, diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c index a4a0882d15..7e7b48f884 100644 --- a/drivers/net/mlx5/mlx5_flow_hw.c +++ b/drivers/net/mlx5/mlx5_flow_hw.c @@ -48,12 +48,22 @@ /* Lowest priority for HW non-root table.
*/ #define MLX5_HW_LOWEST_PRIO_NON_ROOT (UINT32_MAX) +#define MLX5_HW_VLAN_PUSH_TYPE_IDX 0 +#define MLX5_HW_VLAN_PUSH_VID_IDX 1 +#define MLX5_HW_VLAN_PUSH_PCP_IDX 2 + static int flow_hw_flush_all_ctrl_flows(struct rte_eth_dev *dev); static int flow_hw_translate_group(struct rte_eth_dev *dev, const struct mlx5_flow_template_table_cfg *cfg, uint32_t group, uint32_t *table_group, struct rte_flow_error *error); +static __rte_always_inline int +flow_hw_set_vlan_vid_construct(struct rte_eth_dev *dev, + struct mlx5_hw_q_job *job, + struct mlx5_action_construct_data *act_data, + const struct mlx5_hw_actions *hw_acts, + const struct rte_flow_action *action); const struct mlx5_flow_driver_ops mlx5_flow_hw_drv_ops; @@ -1039,6 +1049,52 @@ flow_hw_cnt_compile(struct rte_eth_dev *dev, uint32_t start_pos, return 0; } +static __rte_always_inline bool +is_of_vlan_pcp_present(const struct rte_flow_action *actions) +{ + /* + * Order of RTE VLAN push actions is + * OF_PUSH_VLAN / OF_SET_VLAN_VID [ / OF_SET_VLAN_PCP ] + */ + return actions[MLX5_HW_VLAN_PUSH_PCP_IDX].type == + RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_PCP; +} + +static __rte_always_inline bool +is_template_masked_push_vlan(const struct rte_flow_action_of_push_vlan *mask) +{ + /* + * In masked push VLAN template all RTE push actions are masked. + */ + return mask && mask->ethertype != 0; +} + +static rte_be32_t vlan_hdr_to_be32(const struct rte_flow_action *actions) +{ +/* + * OpenFlow Switch Specification defines 801.1q VID as 12+1 bits. + */ + rte_be32_t type, vid, pcp; +#if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN + rte_be32_t vid_lo, vid_hi; +#endif + + type = ((const struct rte_flow_action_of_push_vlan *) + actions[MLX5_HW_VLAN_PUSH_TYPE_IDX].conf)->ethertype; + vid = ((const struct rte_flow_action_of_set_vlan_vid *) + actions[MLX5_HW_VLAN_PUSH_VID_IDX].conf)->vlan_vid; + pcp = is_of_vlan_pcp_present(actions) ? + ((const struct rte_flow_action_of_set_vlan_pcp *) + actions[MLX5_HW_VLAN_PUSH_PCP_IDX].conf)->vlan_pcp : 0; +#if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN + vid_hi = vid & 0xff; + vid_lo = vid >> 8; + return (((vid_lo << 8) | (pcp << 5) | vid_hi) << 16) | type; +#else + return (type << 16) | (pcp << 13) | vid; +#endif +} + /** * Translate rte_flow actions to DR action. * @@ -1141,6 +1197,26 @@ __flow_hw_actions_translate(struct rte_eth_dev *dev, priv->hw_tag[!!attr->group]; flow_hw_rxq_flag_set(dev, true); break; + case RTE_FLOW_ACTION_TYPE_OF_PUSH_VLAN: + action_pos = at->actions_off[actions - at->actions]; + acts->rule_acts[action_pos].action = + priv->hw_push_vlan[type]; + if (is_template_masked_push_vlan(masks->conf)) + acts->rule_acts[action_pos].push_vlan.vlan_hdr = + vlan_hdr_to_be32(actions); + else if (__flow_hw_act_data_general_append + (priv, acts, actions->type, + actions - action_start, action_pos)) + goto err; + actions += is_of_vlan_pcp_present(actions) ? 
+ MLX5_HW_VLAN_PUSH_PCP_IDX : + MLX5_HW_VLAN_PUSH_VID_IDX; + break; + case RTE_FLOW_ACTION_TYPE_OF_POP_VLAN: + action_pos = at->actions_off[actions - at->actions]; + acts->rule_acts[action_pos].action = + priv->hw_pop_vlan[type]; + break; case RTE_FLOW_ACTION_TYPE_JUMP: action_pos = at->actions_off[actions - action_start]; if (masks->conf && @@ -1746,8 +1822,17 @@ flow_hw_actions_construct(struct rte_eth_dev *dev, cnt_id_t cnt_id; action = &actions[act_data->action_src]; - MLX5_ASSERT(action->type == RTE_FLOW_ACTION_TYPE_INDIRECT || - (int)action->type == act_data->type); + /* + * action template construction replaces + * OF_SET_VLAN_VID with MODIFY_FIELD + */ + if (action->type == RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_VID) + MLX5_ASSERT(act_data->type == + RTE_FLOW_ACTION_TYPE_MODIFY_FIELD); + else + MLX5_ASSERT(action->type == + RTE_FLOW_ACTION_TYPE_INDIRECT || + (int)action->type == act_data->type); switch (act_data->type) { case RTE_FLOW_ACTION_TYPE_INDIRECT: if (flow_hw_shared_action_construct @@ -1763,6 +1848,10 @@ flow_hw_actions_construct(struct rte_eth_dev *dev, (action->conf))->id); rule_acts[act_data->action_dst].tag.value = tag; break; + case RTE_FLOW_ACTION_TYPE_OF_PUSH_VLAN: + rule_acts[act_data->action_dst].push_vlan.vlan_hdr = + vlan_hdr_to_be32(action); + break; case RTE_FLOW_ACTION_TYPE_JUMP: jump_group = ((const struct rte_flow_action_jump *) action->conf)->group; @@ -1814,10 +1903,16 @@ flow_hw_actions_construct(struct rte_eth_dev *dev, act_data->encap.len); break; case RTE_FLOW_ACTION_TYPE_MODIFY_FIELD: - ret = flow_hw_modify_field_construct(job, - act_data, - hw_acts, - action); + if (action->type == RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_VID) + ret = flow_hw_set_vlan_vid_construct(dev, job, + act_data, + hw_acts, + action); + else + ret = flow_hw_modify_field_construct(job, + act_data, + hw_acts, + action); if (ret) return -1; break; @@ -2841,6 +2936,56 @@ flow_hw_action_meta_copy_insert(const struct rte_flow_action actions[], return 0; } +static int +flow_hw_validate_action_push_vlan(struct rte_eth_dev *dev, + const + struct rte_flow_actions_template_attr *attr, + const struct rte_flow_action *action, + const struct rte_flow_action *mask, + struct rte_flow_error *error) +{ +#define X_FIELD(ptr, t, f) ((t *)((ptr)->conf))->f + /* + * 1. Mandatory actions order: + * OF_PUSH_VLAN / OF_SET_VLAN_VID [ / OF_SET_VLAN_PCP ] + * 2. All actions ether masked or not. 
+ */ + const bool masked_action = action[MLX5_HW_VLAN_PUSH_TYPE_IDX].conf && + X_FIELD(action + MLX5_HW_VLAN_PUSH_TYPE_IDX, + const struct rte_flow_action_of_push_vlan, + ethertype) != 0; + bool masked_param; + + RTE_SET_USED(dev); + RTE_SET_USED(attr); + RTE_SET_USED(mask); + if (action[MLX5_HW_VLAN_PUSH_VID_IDX].type != + RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_VID) + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, + action, "OF_PUSH_VLAN: invalid actions order"); + masked_param = action[MLX5_HW_VLAN_PUSH_VID_IDX].conf && + X_FIELD(action + MLX5_HW_VLAN_PUSH_VID_IDX, + const struct rte_flow_action_of_set_vlan_vid, vlan_vid); + if (!(masked_action & masked_param)) + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, + action, "OF_SET_VLAN_VID: template mask does not match OF_PUSH_VLAN"); + if (is_of_vlan_pcp_present(action)) { + masked_param = action[MLX5_HW_VLAN_PUSH_PCP_IDX].conf && + X_FIELD(action + MLX5_HW_VLAN_PUSH_PCP_IDX, + const struct rte_flow_action_of_set_vlan_pcp, + vlan_pcp); + if (!(masked_action & masked_param)) + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, + action, "OF_SET_VLAN_PCP: template mask does not match OF_PUSH_VLAN"); + } + + return 0; +#undef X_FIELD +} + static int flow_hw_actions_validate(struct rte_eth_dev *dev, const struct rte_flow_actions_template_attr *attr, @@ -2931,6 +3076,18 @@ flow_hw_actions_validate(struct rte_eth_dev *dev, case RTE_FLOW_ACTION_TYPE_CONNTRACK: /* TODO: Validation logic */ break; + case RTE_FLOW_ACTION_TYPE_OF_POP_VLAN: + case RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_VID: + break; + case RTE_FLOW_ACTION_TYPE_OF_PUSH_VLAN: + ret = flow_hw_validate_action_push_vlan + (dev, attr, action, mask, error); + if (ret != 0) + return ret; + i += is_of_vlan_pcp_present(action) ? + MLX5_HW_VLAN_PUSH_PCP_IDX : + MLX5_HW_VLAN_PUSH_VID_IDX; + break; case RTE_FLOW_ACTION_TYPE_END: actions_end = true; break; @@ -2958,6 +3115,8 @@ static enum mlx5dr_action_type mlx5_hw_dr_action_types[] = { [RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT] = MLX5DR_ACTION_TYP_VPORT, [RTE_FLOW_ACTION_TYPE_COUNT] = MLX5DR_ACTION_TYP_CTR, [RTE_FLOW_ACTION_TYPE_CONNTRACK] = MLX5DR_ACTION_TYP_ASO_CT, + [RTE_FLOW_ACTION_TYPE_OF_POP_VLAN] = MLX5DR_ACTION_TYP_POP_VLAN, + [RTE_FLOW_ACTION_TYPE_OF_PUSH_VLAN] = MLX5DR_ACTION_TYP_PUSH_VLAN, }; static int @@ -3074,6 +3233,14 @@ flow_hw_dr_actions_template_create(struct rte_flow_actions_template *at) goto err_actions_num; action_types[curr_off++] = MLX5DR_ACTION_TYP_FT; break; + case RTE_FLOW_ACTION_TYPE_OF_PUSH_VLAN: + type = mlx5_hw_dr_action_types[at->actions[i].type]; + at->actions_off[i] = curr_off; + action_types[curr_off++] = type; + i += is_of_vlan_pcp_present(at->actions + i) ? 
+ MLX5_HW_VLAN_PUSH_PCP_IDX : + MLX5_HW_VLAN_PUSH_VID_IDX; + break; default: type = mlx5_hw_dr_action_types[at->actions[i].type]; at->actions_off[i] = curr_off; @@ -3101,6 +3268,95 @@ flow_hw_dr_actions_template_create(struct rte_flow_actions_template *at) return NULL; } +static void +flow_hw_set_vlan_vid(struct rte_eth_dev *dev, + const struct rte_flow_action *actions, + const struct rte_flow_action *masks, + struct rte_flow_action *ra, struct rte_flow_action *rm, + struct rte_flow_action_modify_field *spec, + struct rte_flow_action_modify_field *mask, + uint32_t act_num, int set_vlan_vid_ix) +{ + struct rte_flow_error error; + const bool masked = masks[set_vlan_vid_ix].conf && + (((const struct rte_flow_action_of_set_vlan_vid *) + masks[set_vlan_vid_ix].conf)->vlan_vid != 0); + const struct rte_flow_action_of_set_vlan_vid *conf = + actions[set_vlan_vid_ix].conf; + rte_be16_t vid = masked ? conf->vlan_vid : 0; + int width = mlx5_flow_item_field_width(dev, RTE_FLOW_FIELD_VLAN_ID, 0, + NULL, &error); + if (actions == ra) { + size_t copy_sz = sizeof(ra[0]) * act_num; + rte_memcpy(ra, actions, copy_sz); + rte_memcpy(rm, masks, copy_sz); + } + *spec = (typeof(*spec)) { + .operation = RTE_FLOW_MODIFY_SET, + .dst = { + .field = RTE_FLOW_FIELD_VLAN_ID, + .level = 0, .offset = 0, + }, + .src = { + .field = RTE_FLOW_FIELD_VALUE, + .level = vid, + .offset = 0, + }, + .width = width, + }; + *mask = (typeof(*mask)) { + .operation = RTE_FLOW_MODIFY_SET, + .dst = { + .field = RTE_FLOW_FIELD_VLAN_ID, + .level = 0xffffffff, .offset = 0xffffffff, + }, + .src = { + .field = RTE_FLOW_FIELD_VALUE, + .level = masked ? (1U << width) - 1 : 0, + .offset = 0, + }, + .width = 0xffffffff, + }; + ra[set_vlan_vid_ix].type = RTE_FLOW_ACTION_TYPE_MODIFY_FIELD; + ra[set_vlan_vid_ix].conf = spec; + rm[set_vlan_vid_ix].type = RTE_FLOW_ACTION_TYPE_MODIFY_FIELD; + rm[set_vlan_vid_ix].conf = mask; +} + +static __rte_always_inline int +flow_hw_set_vlan_vid_construct(struct rte_eth_dev *dev, + struct mlx5_hw_q_job *job, + struct mlx5_action_construct_data *act_data, + const struct mlx5_hw_actions *hw_acts, + const struct rte_flow_action *action) +{ + struct rte_flow_error error; + rte_be16_t vid = ((const struct rte_flow_action_of_set_vlan_vid *) + action->conf)->vlan_vid; + int width = mlx5_flow_item_field_width(dev, RTE_FLOW_FIELD_VLAN_ID, 0, + NULL, &error); + struct rte_flow_action_modify_field conf = { + .operation = RTE_FLOW_MODIFY_SET, + .dst = { + .field = RTE_FLOW_FIELD_VLAN_ID, + .level = 0, .offset = 0, + }, + .src = { + .field = RTE_FLOW_FIELD_VALUE, + .level = vid, + .offset = 0, + }, + .width = width, + }; + struct rte_flow_action modify_action = { + .type = RTE_FLOW_ACTION_TYPE_MODIFY_FIELD, + .conf = &conf + }; + + return flow_hw_modify_field_construct(job, act_data, hw_acts, + &modify_action); +} + /** * Create flow action template. 
* @@ -3132,8 +3388,11 @@ flow_hw_actions_template_create(struct rte_eth_dev *dev, uint16_t pos = MLX5_HW_MAX_ACTS; struct rte_flow_action tmp_action[MLX5_HW_MAX_ACTS]; struct rte_flow_action tmp_mask[MLX5_HW_MAX_ACTS]; - const struct rte_flow_action *ra; - const struct rte_flow_action *rm; + struct rte_flow_action *ra = (void *)(uintptr_t)actions; + struct rte_flow_action *rm = (void *)(uintptr_t)masks; + int set_vlan_vid_ix = -1; + struct rte_flow_action_modify_field set_vlan_vid_spec = {0, }; + struct rte_flow_action_modify_field set_vlan_vid_mask = {0, }; const struct rte_flow_action_modify_field rx_mreg = { .operation = RTE_FLOW_MODIFY_SET, .dst = { @@ -3173,22 +3432,42 @@ flow_hw_actions_template_create(struct rte_eth_dev *dev, return NULL; if (priv->sh->config.dv_xmeta_en == MLX5_XMETA_MODE_META32_HWS && priv->sh->config.dv_esw_en) { + /* Application should make sure only one Q/RSS exist in one rule. */ if (flow_hw_action_meta_copy_insert(actions, masks, &rx_cpy, &rx_cpy_mask, tmp_action, tmp_mask, &pos)) { rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION, NULL, "Failed to concatenate new action/mask"); return NULL; + } else if (pos != MLX5_HW_MAX_ACTS) { + ra = tmp_action; + rm = tmp_mask; } } - /* Application should make sure only one Q/RSS exist in one rule. */ - if (pos == MLX5_HW_MAX_ACTS) { - ra = actions; - rm = masks; - } else { - ra = tmp_action; - rm = tmp_mask; + for (i = 0; ra[i].type != RTE_FLOW_ACTION_TYPE_END; ++i) { + switch (ra[i].type) { + /* OF_PUSH_VLAN *MUST* come before OF_SET_VLAN_VID */ + case RTE_FLOW_ACTION_TYPE_OF_PUSH_VLAN: + i += is_of_vlan_pcp_present(ra + i) ? + MLX5_HW_VLAN_PUSH_PCP_IDX : + MLX5_HW_VLAN_PUSH_VID_IDX; + break; + case RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_VID: + set_vlan_vid_ix = i; + break; + default: + break; + } } + /* Count flow actions to allocate required space for storing DR offsets. */ + act_num = i; + if (act_num >= MLX5_HW_MAX_ACTS) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, NULL, "Too many actions"); + return NULL; + } + if (set_vlan_vid_ix != -1) + flow_hw_set_vlan_vid(dev, actions, masks, ra, rm, + &set_vlan_vid_spec, &set_vlan_vid_mask, + act_num, set_vlan_vid_ix); act_len = rte_flow_conv(RTE_FLOW_CONV_OP_ACTIONS, NULL, 0, ra, error); if (act_len <= 0) return NULL; @@ -3197,10 +3476,6 @@ flow_hw_actions_template_create(struct rte_eth_dev *dev, if (mask_len <= 0) return NULL; len += RTE_ALIGN(mask_len, 16); - /* Count flow actions to allocate required space for storing DR offsets.
 */ - act_num = 0; - for (i = 0; ra[i].type != RTE_FLOW_ACTION_TYPE_END; ++i) - act_num++; len += RTE_ALIGN(act_num * sizeof(*at->actions_off), 16); at = mlx5_malloc(MLX5_MEM_ZERO, len + sizeof(*at), RTE_CACHE_LINE_SIZE, rte_socket_id()); @@ -4719,6 +4994,48 @@ flow_hw_ct_pool_create(struct rte_eth_dev *dev, return NULL; } +static void +flow_hw_destroy_vlan(struct rte_eth_dev *dev) +{ + struct mlx5_priv *priv = dev->data->dev_private; + enum mlx5dr_table_type i; + + for (i = MLX5DR_TABLE_TYPE_NIC_RX; i < MLX5DR_TABLE_TYPE_MAX; i++) { + if (priv->hw_pop_vlan[i]) { + mlx5dr_action_destroy(priv->hw_pop_vlan[i]); + priv->hw_pop_vlan[i] = NULL; + } + if (priv->hw_push_vlan[i]) { + mlx5dr_action_destroy(priv->hw_push_vlan[i]); + priv->hw_push_vlan[i] = NULL; + } + } +} + +static int +flow_hw_create_vlan(struct rte_eth_dev *dev) +{ + struct mlx5_priv *priv = dev->data->dev_private; + enum mlx5dr_table_type i; + const enum mlx5dr_action_flags flags[MLX5DR_TABLE_TYPE_MAX] = { + MLX5DR_ACTION_FLAG_HWS_RX, + MLX5DR_ACTION_FLAG_HWS_TX, + MLX5DR_ACTION_FLAG_HWS_FDB + }; + + for (i = MLX5DR_TABLE_TYPE_NIC_RX; i < MLX5DR_TABLE_TYPE_MAX; i++) { + priv->hw_pop_vlan[i] = + mlx5dr_action_create_pop_vlan(priv->dr_ctx, flags[i]); + if (!priv->hw_pop_vlan[i]) + return -ENOENT; + priv->hw_push_vlan[i] = + mlx5dr_action_create_push_vlan(priv->dr_ctx, flags[i]); + if (!priv->hw_push_vlan[i]) + return -ENOENT; + } + return 0; +} + /** * Configure port HWS resources. * @@ -4915,6 +5232,9 @@ flow_hw_configure(struct rte_eth_dev *dev, if (priv->hws_cpool == NULL) goto err; } + ret = flow_hw_create_vlan(dev); + if (ret) + goto err; return 0; err: if (priv->hws_ctpool) { @@ -4928,6 +5248,7 @@ flow_hw_configure(struct rte_eth_dev *dev, if (priv->hw_tag[i]) mlx5dr_action_destroy(priv->hw_tag[i]); } + flow_hw_destroy_vlan(dev); if (dr_ctx) claim_zero(mlx5dr_context_close(dr_ctx)); mlx5_free(priv->hw_q); @@ -4986,6 +5307,7 @@ flow_hw_resource_release(struct rte_eth_dev *dev) if (priv->hw_tag[i]) mlx5dr_action_destroy(priv->hw_tag[i]); } + flow_hw_destroy_vlan(dev); flow_hw_free_vport_actions(priv); if (priv->acts_ipool) { mlx5_ipool_destroy(priv->acts_ipool);
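
For illustration only (not part of the patch), a minimal sketch of an actions template exercising the new VLAN push path. It follows the ordering rule enforced by flow_hw_validate_action_push_vlan() above (OF_PUSH_VLAN immediately followed by OF_SET_VLAN_VID) and masks both members, satisfying the "both masked or neither" check. The helper name and the VID/ethertype values are made up for the example, and the port is assumed to be already set up with rte_flow_configure():

#include <rte_byteorder.h>
#include <rte_ether.h>
#include <rte_flow.h>

/* Illustrative sketch: build a fully masked push-VLAN actions template. */
static struct rte_flow_actions_template *
make_push_vlan_template(uint16_t port_id, struct rte_flow_error *err)
{
	/* Values: push an 802.1Q header and set VID 100. */
	static const struct rte_flow_action_of_push_vlan push = {
		.ethertype = RTE_BE16(RTE_ETHER_TYPE_VLAN),
	};
	static const struct rte_flow_action_of_set_vlan_vid vid = {
		.vlan_vid = RTE_BE16(100),
	};
	/* Masks: both fields non-zero, so the template is fully masked. */
	static const struct rte_flow_action_of_push_vlan push_mask = {
		.ethertype = RTE_BE16(0xffff),
	};
	static const struct rte_flow_action_of_set_vlan_vid vid_mask = {
		.vlan_vid = RTE_BE16(0x0fff),
	};
	const struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_OF_PUSH_VLAN, .conf = &push },
		{ .type = RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_VID, .conf = &vid },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};
	const struct rte_flow_action masks[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_OF_PUSH_VLAN, .conf = &push_mask },
		{ .type = RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_VID, .conf = &vid_mask },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};
	const struct rte_flow_actions_template_attr attr = { .ingress = 1 };

	return rte_flow_actions_template_create(port_id, &attr, actions,
						masks, err);
}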
From patchwork Fri Sep 23 14:43:30 2022
X-Patchwork-Submitter: Suanming Mou
X-Patchwork-Id: 116761
X-Patchwork-Delegate: thomas@monjalon.net
From: Suanming Mou
To: Matan Azrad , Viacheslav Ovsiienko
CC: , Alexander Kozyrev
Subject: [PATCH 23/27] net/mlx5: add meter color flow matching in dv
Date: Fri, 23 Sep 2022 17:43:30 +0300
Message-ID: <20220923144334.27736-24-suanmingm@nvidia.com>
In-Reply-To: <20220923144334.27736-1-suanmingm@nvidia.com>
References: <20220923144334.27736-1-suanmingm@nvidia.com>
From: Alexander Kozyrev

Create firmware and software steering meter color support.
Allow matching on a meter color in both root and non-root groups.

Signed-off-by: Alexander Kozyrev
---
 drivers/net/mlx5/mlx5_flow.h    |   3 +
 drivers/net/mlx5/mlx5_flow_dv.c | 113 ++++++++++++++++++++++++++++++++
 2 files changed, 116 insertions(+)

diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h index 2d1a9dba27..99d3c40f36 100644 --- a/drivers/net/mlx5/mlx5_flow.h +++ b/drivers/net/mlx5/mlx5_flow.h @@ -208,6 +208,9 @@ enum mlx5_feature_name { #define MLX5_FLOW_ITEM_PORT_REPRESENTOR (UINT64_C(1) << 41) #define MLX5_FLOW_ITEM_REPRESENTED_PORT (UINT64_C(1) << 42) +/* Meter color item */ +#define MLX5_FLOW_ITEM_METER_COLOR (UINT64_C(1) << 44) + /* Outer Masks.
*/ #define MLX5_FLOW_LAYER_OUTER_L3 \ (MLX5_FLOW_LAYER_OUTER_L3_IPV4 | MLX5_FLOW_LAYER_OUTER_L3_IPV6) diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c index 36059beb71..e1db68b532 100644 --- a/drivers/net/mlx5/mlx5_flow_dv.c +++ b/drivers/net/mlx5/mlx5_flow_dv.c @@ -3676,6 +3676,69 @@ flow_dv_validate_action_aso_ct(struct rte_eth_dev *dev, return 0; } +/** + * Validate METER_COLOR item. + * + * @param[in] dev + * Pointer to the rte_eth_dev structure. + * @param[in] item + * Item specification. + * @param[in] attr + * Attributes of flow that includes this item. + * @param[out] error + * Pointer to error structure. + * + * @return + * 0 on success, a negative errno value otherwise and rte_errno is set. + */ +static int +flow_dv_validate_item_meter_color(struct rte_eth_dev *dev, + const struct rte_flow_item *item, + const struct rte_flow_attr *attr __rte_unused, + struct rte_flow_error *error) +{ + struct mlx5_priv *priv = dev->data->dev_private; + const struct rte_flow_item_meter_color *spec = item->spec; + const struct rte_flow_item_meter_color *mask = item->mask; + struct rte_flow_item_meter_color nic_mask = { + .color = RTE_COLORS + }; + int ret; + + if (priv->mtr_color_reg == REG_NON) + return rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_ITEM, item, + "meter color register" + " isn't available"); + ret = mlx5_flow_get_reg_id(dev, MLX5_MTR_COLOR, 0, error); + if (ret < 0) + return ret; + if (!spec) + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM_SPEC, + item->spec, + "data cannot be empty"); + if (spec->color > RTE_COLORS) + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION_CONF, + &spec->color, + "meter color is invalid"); + if (!mask) + mask = &rte_flow_item_meter_color_mask; + if (!mask->color) + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM_SPEC, NULL, + "mask cannot be zero"); + + ret = mlx5_flow_item_acceptable(item, (const uint8_t *)mask, + (const uint8_t *)&nic_mask, + sizeof(struct rte_flow_item_meter_color), + MLX5_ITEM_RANGE_NOT_ACCEPTED, error); + if (ret < 0) + return ret; + return 0; +} + int flow_dv_encap_decap_match_cb(void *tool_ctx __rte_unused, struct mlx5_list_entry *entry, void *cb_ctx) @@ -7410,6 +7473,13 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr, if (ret < 0) return ret; break; + case RTE_FLOW_ITEM_TYPE_METER_COLOR: + ret = flow_dv_validate_item_meter_color(dev, items, + attr, error); + if (ret < 0) + return ret; + last_item = MLX5_FLOW_ITEM_METER_COLOR; + break; default: return rte_flow_error_set(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ITEM, @@ -10485,6 +10555,45 @@ flow_dv_translate_item_flex(struct rte_eth_dev *dev, void *matcher, void *key, mlx5_flex_flow_translate_item(dev, matcher, key, item, is_inner); } +/** + * Add METER_COLOR item to matcher + * + * @param[in] dev + * The device to configure through. + * @param[in, out] key + * Flow matcher value. + * @param[in] item + * Flow pattern to translate. + * @param[in] key_type + * Set flow matcher mask or value. 
+ */ +static void +flow_dv_translate_item_meter_color(struct rte_eth_dev *dev, void *key, + const struct rte_flow_item *item, + uint32_t key_type) +{ + const struct rte_flow_item_meter_color *color_m = item->mask; + const struct rte_flow_item_meter_color *color_v = item->spec; + uint32_t value, mask; + int reg = REG_NON; + + MLX5_ASSERT(color_v); + if (MLX5_ITEM_VALID(item, key_type)) + return; + MLX5_ITEM_UPDATE(item, key_type, color_v, color_m, + &rte_flow_item_meter_color_mask); + value = rte_col_2_mlx5_col(color_v->color); + mask = color_m ? + color_m->color : (UINT32_C(1) << MLX5_MTR_COLOR_BITS) - 1; + if (!!(key_type & MLX5_SET_MATCHER_SW)) + reg = mlx5_flow_get_reg_id(dev, MLX5_MTR_COLOR, 0, NULL); + else + reg = flow_hw_get_reg_id(RTE_FLOW_ITEM_TYPE_METER_COLOR, 0); + if (reg == REG_NON) + return; + flow_dv_match_meta_reg(key, (enum modify_reg)reg, value, mask); +} + static uint32_t matcher_zero[MLX5_ST_SZ_DW(fte_match_param)] = { 0 }; #define HEADER_IS_ZERO(match_criteria, headers) \ @@ -13234,6 +13343,10 @@ flow_dv_translate_items(struct rte_eth_dev *dev, /* No other protocol should follow eCPRI layer. */ last_item = MLX5_FLOW_LAYER_ECPRI; break; + case RTE_FLOW_ITEM_TYPE_METER_COLOR: + flow_dv_translate_item_meter_color(dev, key, items, key_type); + last_item = MLX5_FLOW_ITEM_METER_COLOR; + break; default: break; }
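
As a usage sketch (not part of the patch), the new item can be matched with the classic synchronous API once a meter has colored the packet earlier in the pipeline. The group number and the drop policy here are illustrative assumptions:

#include <rte_flow.h>
#include <rte_meter.h>

/* Illustrative sketch: in a group reached after metering, drop
 * everything the meter marked red. */
static struct rte_flow *
drop_red_packets(uint16_t port_id, struct rte_flow_error *err)
{
	const struct rte_flow_attr attr = { .group = 1, .ingress = 1 };
	const struct rte_flow_item_meter_color red = {
		.color = RTE_COLOR_RED,
	};
	const struct rte_flow_item pattern[] = {
		{
			.type = RTE_FLOW_ITEM_TYPE_METER_COLOR,
			.spec = &red,
			/* NULL mask: rte_flow_item_meter_color_mask applies. */
		},
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	const struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_DROP },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};

	return rte_flow_create(port_id, &attr, pattern, actions, err);
}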
From patchwork Fri Sep 23 14:43:31 2022
X-Patchwork-Submitter: Suanming Mou
X-Patchwork-Id: 116763
X-Patchwork-Delegate: thomas@monjalon.net
From: Suanming Mou
To: Matan Azrad , Viacheslav Ovsiienko
CC: , Alexander Kozyrev
Subject: [PATCH 24/27] net/mlx5: add meter color flow matching in hws
Date: Fri, 23 Sep 2022 17:43:31 +0300
Message-ID: <20220923144334.27736-25-suanmingm@nvidia.com>
In-Reply-To: <20220923144334.27736-1-suanmingm@nvidia.com>
References: <20220923144334.27736-1-suanmingm@nvidia.com>
From: Alexander Kozyrev

Create hardware steering meter color support.
Allow matching on a meter color using hardware steering.

Signed-off-by: Alexander Kozyrev
---
 drivers/net/mlx5/mlx5_flow.h    |  1 +
 drivers/net/mlx5/mlx5_flow_dv.c | 32 ++++++++++++++++++++++++++++++--
 drivers/net/mlx5/mlx5_flow_hw.c | 12 ++++++++++++
 3 files changed, 43 insertions(+), 2 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h index 99d3c40f36..514903dbe1 100644 --- a/drivers/net/mlx5/mlx5_flow.h +++ b/drivers/net/mlx5/mlx5_flow.h @@ -1526,6 +1526,7 @@ flow_hw_get_reg_id(enum rte_flow_item_type type, uint32_t id) */ return REG_A; case RTE_FLOW_ITEM_TYPE_CONNTRACK: + case RTE_FLOW_ITEM_TYPE_METER_COLOR: return mlx5_flow_hw_aso_tag; case RTE_FLOW_ITEM_TYPE_TAG: MLX5_ASSERT(id < MLX5_FLOW_HW_TAGS_MAX); diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c index e1db68b532..0785734217 100644 --- a/drivers/net/mlx5/mlx5_flow_dv.c +++ b/drivers/net/mlx5/mlx5_flow_dv.c @@ -1387,6 +1387,7 @@ mlx5_flow_item_field_width(struct rte_eth_dev *dev, return inherit < 0 ?
0 : inherit; case RTE_FLOW_FIELD_IPV4_ECN: case RTE_FLOW_FIELD_IPV6_ECN: + case RTE_FLOW_FIELD_METER_COLOR: return 2; default: MLX5_ASSERT(false); @@ -1846,6 +1847,31 @@ mlx5_flow_field_id_to_modify_info info[idx].offset = data->offset; } break; + case RTE_FLOW_FIELD_METER_COLOR: + { + const uint32_t color_mask = + (UINT32_C(1) << MLX5_MTR_COLOR_BITS) - 1; + int reg; + + if (priv->sh->config.dv_flow_en == 2) + reg = flow_hw_get_reg_id + (RTE_FLOW_ITEM_TYPE_METER_COLOR, 0); + else + reg = mlx5_flow_get_reg_id(dev, MLX5_MTR_COLOR, + 0, error); + if (reg < 0) + return; + MLX5_ASSERT(reg != REG_NON); + MLX5_ASSERT((unsigned int)reg < RTE_DIM(reg_to_field)); + info[idx] = (struct field_modify_info){4, 0, + reg_to_field[reg]}; + if (mask) + mask[idx] = flow_modify_info_mask_32_masked + (width, data->offset, color_mask); + else + info[idx].offset = data->offset; + } + break; case RTE_FLOW_FIELD_POINTER: case RTE_FLOW_FIELD_VALUE: default: @@ -1893,7 +1919,7 @@ flow_dv_convert_action_modify_field uint32_t type, meta = 0; if (conf->src.field == RTE_FLOW_FIELD_POINTER || - conf->src.field == RTE_FLOW_FIELD_VALUE) { + conf->src.field == RTE_FLOW_FIELD_VALUE) { type = MLX5_MODIFICATION_TYPE_SET; /** For SET fill the destination field (field) first. */ mlx5_flow_field_id_to_modify_info(&conf->dst, field, mask, @@ -1902,7 +1928,9 @@ flow_dv_convert_action_modify_field item.spec = conf->src.field == RTE_FLOW_FIELD_POINTER ? (void *)(uintptr_t)conf->src.pvalue : (void *)(uintptr_t)&conf->src.value; - if (conf->dst.field == RTE_FLOW_FIELD_META) { + if (conf->dst.field == RTE_FLOW_FIELD_META || + conf->dst.field == RTE_FLOW_FIELD_TAG || + conf->dst.field == RTE_FLOW_FIELD_METER_COLOR) { meta = *(const unaligned_uint32_t *)item.spec; meta = rte_cpu_to_be_32(meta); item.spec = &meta; diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c index 7e7b48f884..87b3e34cb4 100644 --- a/drivers/net/mlx5/mlx5_flow_hw.c +++ b/drivers/net/mlx5/mlx5_flow_hw.c @@ -870,6 +870,7 @@ flow_hw_modify_field_compile(struct rte_eth_dev *dev, (void *)(uintptr_t)&conf->src.value; if (conf->dst.field == RTE_FLOW_FIELD_META || conf->dst.field == RTE_FLOW_FIELD_TAG || + conf->dst.field == RTE_FLOW_FIELD_METER_COLOR || conf->dst.field == (enum rte_flow_field_id)MLX5_RTE_FLOW_FIELD_META_REG) { value = *(const unaligned_uint32_t *)item.spec; value = rte_cpu_to_be_32(value); @@ -1702,6 +1703,7 @@ flow_hw_modify_field_construct(struct mlx5_hw_q_job *job, rte_memcpy(values, mhdr_action->src.pvalue, sizeof(values)); if (mhdr_action->dst.field == RTE_FLOW_FIELD_META || mhdr_action->dst.field == RTE_FLOW_FIELD_TAG || + mhdr_action->dst.field == RTE_FLOW_FIELD_METER_COLOR || mhdr_action->dst.field == (enum rte_flow_field_id)MLX5_RTE_FLOW_FIELD_META_REG) { value_p = (unaligned_uint32_t *)values; *value_p = rte_cpu_to_be_32(*value_p); @@ -3704,6 +3706,16 @@ flow_hw_pattern_validate(struct rte_eth_dev *dev, " attribute"); } break; + case RTE_FLOW_ITEM_TYPE_METER_COLOR: + { + int reg = flow_hw_get_reg_id(RTE_FLOW_ITEM_TYPE_METER_COLOR, 0); + if (reg == REG_NON) + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, + "Unsupported meter color register"); + break; + } case RTE_FLOW_ITEM_TYPE_VOID: case RTE_FLOW_ITEM_TYPE_ETH: case RTE_FLOW_ITEM_TYPE_VLAN:
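
For the hardware steering path the same item goes through the template API; a sketch (not part of the patch) of a pattern template that flow_hw_pattern_validate() above would accept when a meter color register is available. Using RTE_COLORS as the full 2-bit mask mirrors the nic_mask convention in flow_dv_validate_item_meter_color() and is an assumption of this example:

#include <rte_flow.h>
#include <rte_meter.h>

/* Illustrative sketch: a pattern template matching on meter color
 * for the asynchronous (template) API. */
static struct rte_flow_pattern_template *
make_meter_color_pattern(uint16_t port_id, struct rte_flow_error *err)
{
	static const struct rte_flow_item_meter_color color_mask = {
		.color = RTE_COLORS, /* 0x3: cover both color register bits */
	};
	const struct rte_flow_item pattern[] = {
		{
			.type = RTE_FLOW_ITEM_TYPE_METER_COLOR,
			.mask = &color_mask,
		},
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	const struct rte_flow_pattern_template_attr attr = {
		.relaxed_matching = 0,
		.ingress = 1,
	};

	return rte_flow_pattern_template_create(port_id, &attr, pattern, err);
}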
From patchwork Fri Sep 23 14:43:32 2022
X-Patchwork-Submitter: Suanming Mou
X-Patchwork-Id: 116762
X-Patchwork-Delegate: thomas@monjalon.net
From: Suanming Mou
To: Matan Azrad , Viacheslav Ovsiienko
CC: , Alexander Kozyrev
Subject: [PATCH 25/27] net/mlx5: implement profile/policy get
Date: Fri, 23 Sep 2022 17:43:32 +0300
Message-ID: <20220923144334.27736-26-suanmingm@nvidia.com>
In-Reply-To: <20220923144334.27736-1-suanmingm@nvidia.com>
References: <20220923144334.27736-1-suanmingm@nvidia.com>
From: Alexander Kozyrev

Add callback functions for both software and hardware steering
to get pointers to a meter profile/policy by their IDs.

Signed-off-by: Alexander Kozyrev
---
 drivers/net/mlx5/mlx5_flow_meter.c | 65 ++++++++++++++++++++++++++++++
 1 file changed, 65 insertions(+)

diff --git a/drivers/net/mlx5/mlx5_flow_meter.c b/drivers/net/mlx5/mlx5_flow_meter.c index 7221bfb642..893dc42cef 100644 --- a/drivers/net/mlx5/mlx5_flow_meter.c +++ b/drivers/net/mlx5/mlx5_flow_meter.c @@ -741,6 +741,36 @@ mlx5_flow_meter_profile_delete(struct rte_eth_dev *dev, return 0; } +/** + * Callback to get MTR profile. + * + * @param[in] dev + * Pointer to Ethernet device. + * @param[in] meter_profile_id + * Meter profile id. + * @param[out] error + * Pointer to the error structure. + * + * @return + * A valid handle in case of success, NULL otherwise. + */ +static struct rte_flow_meter_profile * +mlx5_flow_meter_profile_get(struct rte_eth_dev *dev, + uint32_t meter_profile_id, + struct rte_mtr_error *error) +{ + struct mlx5_priv *priv = dev->data->dev_private; + + if (!priv->mtr_en) { + rte_mtr_error_set(error, ENOTSUP, + RTE_MTR_ERROR_TYPE_UNSPECIFIED, NULL, + "Meter is not supported"); + return NULL; + } + return (void *)(uintptr_t)mlx5_flow_meter_profile_find(priv, + meter_profile_id); +} + /** * Callback to add MTR profile with HWS. * @@ -1303,6 +1333,37 @@ mlx5_flow_meter_policy_delete(struct rte_eth_dev *dev, return 0; } +/** + * Callback to get MTR policy. + * + * @param[in] dev + * Pointer to Ethernet device. + * @param[in] policy_id + * Meter policy id. + * @param[out] error + * Pointer to the error structure. + * + * @return + * A valid handle in case of success, NULL otherwise. + */ +static struct rte_flow_meter_policy * +mlx5_flow_meter_policy_get(struct rte_eth_dev *dev, + uint32_t policy_id, + struct rte_mtr_error *error) +{ + struct mlx5_priv *priv = dev->data->dev_private; + uint32_t policy_idx; + + if (!priv->mtr_en) { + rte_mtr_error_set(error, ENOTSUP, + RTE_MTR_ERROR_TYPE_UNSPECIFIED, NULL, + "Meter is not supported"); + return NULL; + } + return (void *)(uintptr_t)mlx5_flow_meter_policy_find(dev, policy_id, + &policy_idx); +} + /** * Callback to delete MTR policy for HWS.
 * @@ -2554,9 +2615,11 @@ static const struct rte_mtr_ops mlx5_flow_mtr_ops = { .capabilities_get = mlx5_flow_mtr_cap_get, .meter_profile_add = mlx5_flow_meter_profile_add, .meter_profile_delete = mlx5_flow_meter_profile_delete, + .meter_profile_get = mlx5_flow_meter_profile_get, .meter_policy_validate = mlx5_flow_meter_policy_validate, .meter_policy_add = mlx5_flow_meter_policy_add, .meter_policy_delete = mlx5_flow_meter_policy_delete, + .meter_policy_get = mlx5_flow_meter_policy_get, .create = mlx5_flow_meter_create, .destroy = mlx5_flow_meter_destroy, .meter_enable = mlx5_flow_meter_enable, @@ -2571,9 +2634,11 @@ static const struct rte_mtr_ops mlx5_flow_mtr_hws_ops = { .capabilities_get = mlx5_flow_mtr_cap_get, .meter_profile_add = mlx5_flow_meter_profile_hws_add, .meter_profile_delete = mlx5_flow_meter_profile_hws_delete, + .meter_profile_get = mlx5_flow_meter_profile_get, .meter_policy_validate = mlx5_flow_meter_policy_hws_validate, .meter_policy_add = mlx5_flow_meter_policy_hws_add, .meter_policy_delete = mlx5_flow_meter_policy_hws_delete, + .meter_policy_get = mlx5_flow_meter_policy_get, .create = mlx5_flow_meter_hws_create, .destroy = mlx5_flow_meter_hws_destroy, .meter_enable = mlx5_flow_meter_enable,
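
From the application side these ops surface through the rte_mtr get calls introduced by this series' ethdev counterpart. A sketch, assuming the profile and policy were created beforehand:

#include <rte_mtr.h>

/* Illustrative sketch: resolve meter profile/policy IDs to the
 * handles consumed by e.g. the METER_MARK flow action. */
static int
lookup_meter_objects(uint16_t port_id, uint32_t profile_id,
		     uint32_t policy_id,
		     struct rte_flow_meter_profile **profile,
		     struct rte_flow_meter_policy **policy)
{
	struct rte_mtr_error error;

	*profile = rte_mtr_meter_profile_get(port_id, profile_id, &error);
	if (*profile == NULL)
		return -1;
	*policy = rte_mtr_meter_policy_get(port_id, policy_id, &error);
	if (*policy == NULL)
		return -1;
	return 0;
}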
From patchwork Fri Sep 23 14:43:33 2022
X-Patchwork-Submitter: Suanming Mou
X-Patchwork-Id: 116765
X-Patchwork-Delegate: thomas@monjalon.net
From: Suanming Mou
To: Matan Azrad , Viacheslav Ovsiienko
CC: , Alexander Kozyrev
Subject: [PATCH 26/27] net/mlx5: implement METER MARK action for HWS
Date: Fri, 23 Sep 2022 17:43:33 +0300
Message-ID: <20220923144334.27736-27-suanmingm@nvidia.com>
In-Reply-To: <20220923144334.27736-1-suanmingm@nvidia.com>
References: <20220923144334.27736-1-suanmingm@nvidia.com>
From: Alexander Kozyrev

Implement METER_MARK action for hardware steering case.

Signed-off-by: Alexander Kozyrev
---
 drivers/net/mlx5/mlx5.h            |   9 ++-
 drivers/net/mlx5/mlx5_flow.h       |   2 +
 drivers/net/mlx5/mlx5_flow_aso.c   |   7 +-
 drivers/net/mlx5/mlx5_flow_hw.c    | 116 +++++++++++++++++++++++++++--
 drivers/net/mlx5/mlx5_flow_meter.c | 107 ++++++++++++++++++--------
 5 files changed, 204 insertions(+), 37 deletions(-)

diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h index ec08014832..ff02d4cf13 100644 --- a/drivers/net/mlx5/mlx5.h +++ b/drivers/net/mlx5/mlx5.h @@ -969,12 +969,16 @@ enum mlx5_aso_mtr_type { /* Generic aso_flow_meter information. */ struct mlx5_aso_mtr { - LIST_ENTRY(mlx5_aso_mtr) next; + union { + LIST_ENTRY(mlx5_aso_mtr) next; + struct mlx5_aso_mtr_pool *pool; + }; enum mlx5_aso_mtr_type type; struct mlx5_flow_meter_info fm; /**< Pointer to the next aso flow meter structure. */ uint8_t state; /**< ASO flow meter state. */ uint32_t offset; + enum rte_color init_color; }; /* Generic aso_flow_meter pool structure.
*/ @@ -983,6 +987,8 @@ struct mlx5_aso_mtr_pool { /*Must be the first in pool*/ struct mlx5_devx_obj *devx_obj; /* The devx object of the minimum aso flow meter ID. */ + struct mlx5dr_action *action; /* HWS action. */ + struct mlx5_indexed_pool *idx_pool; /* HWS index pool. */ uint32_t index; /* Pool index in management structure. */ }; @@ -1670,6 +1676,7 @@ struct mlx5_priv { struct mlx5_indexed_pool *acts_ipool; /* Action data indexed pool. */ struct mlx5_hws_cnt_pool *hws_cpool; /* HW steering's counter pool. */ struct mlx5_aso_ct_pool *hws_ctpool; /* HW steering's CT pool. */ + struct mlx5_aso_mtr_pool *hws_mpool; /* Meter mark indexed pool. */ #endif }; diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h index 514903dbe1..e1eb0ab697 100644 --- a/drivers/net/mlx5/mlx5_flow.h +++ b/drivers/net/mlx5/mlx5_flow.h @@ -1112,6 +1112,7 @@ struct rte_flow_hw { struct rte_flow_template_table *table; /* The table flow allcated from. */ struct mlx5dr_rule rule; /* HWS layer data struct. */ uint32_t cnt_id; + uint32_t mtr_id; } __rte_packed; /* rte flow action translate to DR action struct. */ @@ -1241,6 +1242,7 @@ struct mlx5_hw_actions { uint16_t encap_decap_pos; /* Encap/Decap action position. */ uint32_t mark:1; /* Indicate the mark action. */ uint32_t cnt_id; /* Counter id. */ + uint32_t mtr_id; /* Meter id. */ /* Translated DR action array from action template. */ struct mlx5dr_rule_action rule_acts[MLX5_HW_MAX_ACTS]; }; diff --git a/drivers/net/mlx5/mlx5_flow_aso.c b/drivers/net/mlx5/mlx5_flow_aso.c index 34fed3f4b8..8bb7d4ef39 100644 --- a/drivers/net/mlx5/mlx5_flow_aso.c +++ b/drivers/net/mlx5/mlx5_flow_aso.c @@ -700,8 +700,11 @@ mlx5_aso_mtr_sq_enqueue_single(struct mlx5_dev_ctx_shared *sh, fm = &aso_mtr->fm; sq->elts[sq->head & mask].mtr = aso_mtr; if (aso_mtr->type == ASO_METER_INDIRECT) { - pool = container_of(aso_mtr, struct mlx5_aso_mtr_pool, - mtrs[aso_mtr->offset]); + if (likely(sh->config.dv_flow_en == 2)) + pool = aso_mtr->pool; + else + pool = container_of(aso_mtr, struct mlx5_aso_mtr_pool, + mtrs[aso_mtr->offset]); id = pool->devx_obj->id; } else { id = bulk->devx_obj->id; diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c index 87b3e34cb4..90a6c0c78f 100644 --- a/drivers/net/mlx5/mlx5_flow_hw.c +++ b/drivers/net/mlx5/mlx5_flow_hw.c @@ -395,6 +395,10 @@ __flow_hw_action_template_destroy(struct rte_eth_dev *dev, mlx5_hws_cnt_shared_put(priv->hws_cpool, &acts->cnt_id); acts->cnt_id = 0; } + if (acts->mtr_id) { + mlx5_ipool_free(priv->hws_mpool->idx_pool, acts->mtr_id); + acts->mtr_id = 0; + } } /** @@ -1096,6 +1100,70 @@ static rte_be32_t vlan_hdr_to_be32(const struct rte_flow_action *actions) #endif } +static __rte_always_inline struct mlx5_aso_mtr * +flow_hw_meter_mark_alloc(struct rte_eth_dev *dev, + const struct rte_flow_action *action) +{ + struct mlx5_priv *priv = dev->data->dev_private; + struct mlx5_aso_mtr_pool *pool = priv->hws_mpool; + const struct rte_flow_action_meter_mark *meter_mark = action->conf; + struct mlx5_aso_mtr *aso_mtr; + struct mlx5_flow_meter_info *fm; + uint32_t mtr_id; + + aso_mtr = mlx5_ipool_malloc(priv->hws_mpool->idx_pool, &mtr_id); + if (!aso_mtr) + return NULL; + /* Fill the flow meter parameters. 
*/ + aso_mtr->type = ASO_METER_INDIRECT; + fm = &aso_mtr->fm; + fm->meter_id = mtr_id; + fm->profile = (struct mlx5_flow_meter_profile *)(meter_mark->profile); + fm->is_enable = meter_mark->state; + fm->color_aware = meter_mark->color_mode; + aso_mtr->pool = pool; + aso_mtr->state = ASO_METER_WAIT; + aso_mtr->offset = mtr_id - 1; + aso_mtr->init_color = (meter_mark->color_mode) ? + meter_mark->init_color : RTE_COLOR_GREEN; + /* Update ASO flow meter by wqe. */ + if (mlx5_aso_meter_update_by_wqe(priv->sh, aso_mtr, &priv->mtr_bulk)) { + mlx5_ipool_free(pool->idx_pool, mtr_id); + return NULL; + } + /* Wait for ASO object completion. */ + if (mlx5_aso_mtr_wait(priv->sh, aso_mtr)) { + mlx5_ipool_free(pool->idx_pool, mtr_id); + return NULL; + } + return aso_mtr; +} + +static __rte_always_inline int +flow_hw_meter_mark_compile(struct rte_eth_dev *dev, + uint16_t aso_mtr_pos, + const struct rte_flow_action *action, + struct mlx5dr_rule_action *acts, + uint32_t *index) +{ + struct mlx5_priv *priv = dev->data->dev_private; + struct mlx5_aso_mtr_pool *pool = priv->hws_mpool; + struct mlx5_aso_mtr *aso_mtr; + + aso_mtr = flow_hw_meter_mark_alloc(dev, action); + if (!aso_mtr) + return -1; + + /* Compile METER_MARK action */ + acts[aso_mtr_pos].action = pool->action; + acts[aso_mtr_pos].aso_meter.offset = aso_mtr->offset; + acts[aso_mtr_pos].aso_meter.init_color = + (enum mlx5dr_action_aso_meter_color) + rte_col_2_mlx5_col(aso_mtr->init_color); + *index = aso_mtr->fm.meter_id; + return 0; +} + /** * Translate rte_flow actions to DR action. * @@ -1403,6 +1471,23 @@ __flow_hw_actions_translate(struct rte_eth_dev *dev, goto err; } break; + case RTE_FLOW_ACTION_TYPE_METER_MARK: + action_pos = at->actions_off[actions - action_start]; + if (actions->conf && masks->conf && + ((const struct rte_flow_action_meter_mark *) + masks->conf)->profile) { + ret = flow_hw_meter_mark_compile(dev, + action_pos, actions, + acts->rule_acts, + &acts->mtr_id); + if (ret) + goto err; + } else if (__flow_hw_act_data_general_append(priv, acts, + actions->type, + actions - action_start, + action_pos)) + goto err; + break; case RTE_FLOW_ACTION_TYPE_END: actions_end = true; break; @@ -1788,7 +1873,6 @@ flow_hw_actions_construct(struct rte_eth_dev *dev, size_t encap_len = 0; int ret; struct mlx5_aso_mtr *mtr; - uint32_t mtr_id; rte_memcpy(rule_acts, hw_acts->rule_acts, sizeof(*rule_acts) * at->dr_actions_num); attr.group = table->grp->group_id; @@ -1822,6 +1906,7 @@ flow_hw_actions_construct(struct rte_eth_dev *dev, struct mlx5_hrxq *hrxq; uint32_t ct_idx; cnt_id_t cnt_id; + uint32_t mtr_id; action = &actions[act_data->action_src]; /* @@ -1928,13 +2013,13 @@ flow_hw_actions_construct(struct rte_eth_dev *dev, case RTE_FLOW_ACTION_TYPE_METER: meter = action->conf; mtr_id = meter->mtr_id; - mtr = mlx5_aso_meter_by_idx(priv, mtr_id); + aso_mtr = mlx5_aso_meter_by_idx(priv, mtr_id); rule_acts[act_data->action_dst].action = priv->mtr_bulk.action; rule_acts[act_data->action_dst].aso_meter.offset = - mtr->offset; + aso_mtr->offset; jump = flow_hw_jump_action_register - (dev, &table->cfg, mtr->fm.group, NULL); + (dev, &table->cfg, aso_mtr->fm.group, NULL); if (!jump) return -1; MLX5_ASSERT @@ -1944,7 +2029,7 @@ flow_hw_actions_construct(struct rte_eth_dev *dev, jump->root_action; job->flow->jump = jump; job->flow->fate_type = MLX5_FLOW_FATE_JUMP; - if (mlx5_aso_mtr_wait(priv->sh, mtr)) + if (mlx5_aso_mtr_wait(priv->sh, aso_mtr)) return -1; break; case RTE_FLOW_ACTION_TYPE_COUNT: @@ -1980,6 +2065,13 @@ flow_hw_actions_construct(struct 
rte_eth_dev *dev, &rule_acts[act_data->action_dst])) return -1; break; + case RTE_FLOW_ACTION_TYPE_METER_MARK: + ret = flow_hw_meter_mark_compile(dev, + act_data->action_dst, action, + rule_acts, &job->flow->mtr_id); + if (ret != 0) + return ret; + break; default: break; } @@ -2242,6 +2334,7 @@ flow_hw_pull(struct rte_eth_dev *dev, struct rte_flow_error *error) { struct mlx5_priv *priv = dev->data->dev_private; + struct mlx5_aso_mtr_pool *pool = priv->hws_mpool; struct mlx5_hw_q_job *job; int ret, i; @@ -2266,6 +2359,10 @@ flow_hw_pull(struct rte_eth_dev *dev, &job->flow->cnt_id); job->flow->cnt_id = 0; } + if (job->flow->mtr_id) { + mlx5_ipool_free(pool->idx_pool, job->flow->mtr_id); + job->flow->mtr_id = 0; + } mlx5_ipool_free(job->flow->table->flow, job->flow->idx); } priv->hw_q[queue].job[priv->hw_q[queue].job_idx++] = job; @@ -3059,6 +3156,9 @@ flow_hw_actions_validate(struct rte_eth_dev *dev, case RTE_FLOW_ACTION_TYPE_METER: /* TODO: Validation logic */ break; + case RTE_FLOW_ACTION_TYPE_METER_MARK: + /* TODO: Validation logic */ + break; case RTE_FLOW_ACTION_TYPE_MODIFY_FIELD: ret = flow_hw_validate_action_modify_field(action, mask, @@ -3243,6 +3343,12 @@ flow_hw_dr_actions_template_create(struct rte_flow_actions_template *at) MLX5_HW_VLAN_PUSH_PCP_IDX : MLX5_HW_VLAN_PUSH_VID_IDX; break; + case RTE_FLOW_ACTION_TYPE_METER_MARK: + at->actions_off[i] = curr_off; + action_types[curr_off++] = MLX5DR_ACTION_TYP_ASO_METER; + if (curr_off >= MLX5_HW_MAX_ACTS) + goto err_actions_num; + break; default: type = mlx5_hw_dr_action_types[at->actions[i].type]; at->actions_off[i] = curr_off; diff --git a/drivers/net/mlx5/mlx5_flow_meter.c b/drivers/net/mlx5/mlx5_flow_meter.c index 893dc42cef..1c8bb5fc8c 100644 --- a/drivers/net/mlx5/mlx5_flow_meter.c +++ b/drivers/net/mlx5/mlx5_flow_meter.c @@ -17,6 +17,13 @@ static int mlx5_flow_meter_disable(struct rte_eth_dev *dev, uint32_t meter_id, struct rte_mtr_error *error); +/* + * The default ipool threshold value indicates which per_core_cache + * value to set. + */ +#define MLX5_MTR_IPOOL_SIZE_THRESHOLD (1 << 19) +/* The default min local cache size. 
*/ +#define MLX5_MTR_IPOOL_CACHE_MIN (1 << 9) static void mlx5_flow_meter_uninit(struct rte_eth_dev *dev) @@ -31,6 +38,11 @@ mlx5_flow_meter_uninit(struct rte_eth_dev *dev) mlx5_free(priv->mtr_profile_arr); priv->mtr_profile_arr = NULL; } + if (priv->hws_mpool) { + mlx5_ipool_destroy(priv->hws_mpool->idx_pool); + mlx5_free(priv->hws_mpool); + priv->hws_mpool = NULL; + } if (priv->mtr_bulk.aso) { mlx5_free(priv->mtr_bulk.aso); priv->mtr_bulk.aso = NULL; @@ -62,27 +74,39 @@ mlx5_flow_meter_init(struct rte_eth_dev *dev, uint32_t i; struct rte_mtr_error error; uint32_t flags; + uint32_t nb_mtrs = rte_align32pow2(nb_meters); + struct mlx5_indexed_pool_config cfg = { + .size = sizeof(struct mlx5_aso_mtr), + .trunk_size = 1 << 12, + .per_core_cache = 1 << 13, + .need_lock = 1, + .release_mem_en = !!priv->sh->config.reclaim_mode, + .malloc = mlx5_malloc, + .max_idx = nb_meters, + .free = mlx5_free, + .type = "mlx5_hw_mtr_mark_action", + }; if (!nb_meters || !nb_meter_profiles || !nb_meter_policies) { ret = ENOTSUP; rte_mtr_error_set(&error, ENOMEM, - RTE_MTR_ERROR_TYPE_UNSPECIFIED, - NULL, "Meter configuration is invalid."); + RTE_MTR_ERROR_TYPE_UNSPECIFIED, + NULL, "Meter configuration is invalid."); goto err; } if (!priv->mtr_en || !priv->sh->meter_aso_en) { ret = ENOTSUP; rte_mtr_error_set(&error, ENOMEM, - RTE_MTR_ERROR_TYPE_UNSPECIFIED, - NULL, "Meter ASO is not supported."); + RTE_MTR_ERROR_TYPE_UNSPECIFIED, + NULL, "Meter ASO is not supported."); goto err; } priv->mtr_config.nb_meters = nb_meters; if (mlx5_aso_queue_init(priv->sh, ASO_OPC_MOD_POLICER)) { ret = ENOMEM; rte_mtr_error_set(&error, ENOMEM, - RTE_MTR_ERROR_TYPE_UNSPECIFIED, - NULL, "Meter ASO queue allocation failed."); + RTE_MTR_ERROR_TYPE_UNSPECIFIED, + NULL, "Meter ASO queue allocation failed."); goto err; } log_obj_size = rte_log2_u32(nb_meters >> 1); @@ -92,8 +116,8 @@ mlx5_flow_meter_init(struct rte_eth_dev *dev, if (!dcs) { ret = ENOMEM; rte_mtr_error_set(&error, ENOMEM, - RTE_MTR_ERROR_TYPE_UNSPECIFIED, - NULL, "Meter ASO object allocation failed."); + RTE_MTR_ERROR_TYPE_UNSPECIFIED, + NULL, "Meter ASO object allocation failed."); goto err; } priv->mtr_bulk.devx_obj = dcs; @@ -101,8 +125,8 @@ mlx5_flow_meter_init(struct rte_eth_dev *dev, if (reg_id < 0) { ret = ENOTSUP; rte_mtr_error_set(&error, ENOMEM, - RTE_MTR_ERROR_TYPE_UNSPECIFIED, - NULL, "Meter register is not available."); + RTE_MTR_ERROR_TYPE_UNSPECIFIED, + NULL, "Meter register is not available."); goto err; } flags = MLX5DR_ACTION_FLAG_HWS_RX | MLX5DR_ACTION_FLAG_HWS_TX; @@ -114,19 +138,20 @@ mlx5_flow_meter_init(struct rte_eth_dev *dev, if (!priv->mtr_bulk.action) { ret = ENOMEM; rte_mtr_error_set(&error, ENOMEM, - RTE_MTR_ERROR_TYPE_UNSPECIFIED, - NULL, "Meter action creation failed."); + RTE_MTR_ERROR_TYPE_UNSPECIFIED, + NULL, "Meter action creation failed."); goto err; } priv->mtr_bulk.aso = mlx5_malloc(MLX5_MEM_ZERO, - sizeof(struct mlx5_aso_mtr) * nb_meters, - RTE_CACHE_LINE_SIZE, - SOCKET_ID_ANY); + sizeof(struct mlx5_aso_mtr) * + nb_meters, + RTE_CACHE_LINE_SIZE, + SOCKET_ID_ANY); if (!priv->mtr_bulk.aso) { ret = ENOMEM; rte_mtr_error_set(&error, ENOMEM, - RTE_MTR_ERROR_TYPE_UNSPECIFIED, - NULL, "Meter bulk ASO allocation failed."); + RTE_MTR_ERROR_TYPE_UNSPECIFIED, + NULL, "Meter bulk ASO allocation failed."); goto err; } priv->mtr_bulk.size = nb_meters; @@ -137,32 +162,56 @@ mlx5_flow_meter_init(struct rte_eth_dev *dev, aso->offset = i; aso++; } + priv->hws_mpool = mlx5_malloc(MLX5_MEM_ZERO, + sizeof(struct mlx5_aso_mtr_pool), + RTE_CACHE_LINE_SIZE, 
SOCKET_ID_ANY); + if (!priv->hws_mpool) { + ret = ENOMEM; + rte_mtr_error_set(&error, ENOMEM, + RTE_MTR_ERROR_TYPE_UNSPECIFIED, + NULL, "Meter ipool allocation failed."); + goto err; + } + priv->hws_mpool->devx_obj = priv->mtr_bulk.devx_obj; + priv->hws_mpool->action = priv->mtr_bulk.action; + /* + * No need for a local cache if the meter count is small, since the + * flow insertion rate will be very limited in that case. Set the + * trunk size below the default 4K accordingly. + */ + if (nb_mtrs <= cfg.trunk_size) { + cfg.per_core_cache = 0; + cfg.trunk_size = nb_mtrs; + } else if (nb_mtrs <= MLX5_MTR_IPOOL_SIZE_THRESHOLD) { + cfg.per_core_cache = MLX5_MTR_IPOOL_CACHE_MIN; + } + priv->hws_mpool->idx_pool = mlx5_ipool_create(&cfg); priv->mtr_config.nb_meter_profiles = nb_meter_profiles; priv->mtr_profile_arr = mlx5_malloc(MLX5_MEM_ZERO, - sizeof(struct mlx5_flow_meter_profile) * - nb_meter_profiles, - RTE_CACHE_LINE_SIZE, - SOCKET_ID_ANY); + sizeof(struct mlx5_flow_meter_profile) * + nb_meter_profiles, + RTE_CACHE_LINE_SIZE, + SOCKET_ID_ANY); if (!priv->mtr_profile_arr) { ret = ENOMEM; rte_mtr_error_set(&error, ENOMEM, - RTE_MTR_ERROR_TYPE_UNSPECIFIED, - NULL, "Meter profile allocation failed."); + RTE_MTR_ERROR_TYPE_UNSPECIFIED, + NULL, "Meter profile allocation failed."); goto err; } priv->mtr_config.nb_meter_policies = nb_meter_policies; priv->mtr_policy_arr = mlx5_malloc(MLX5_MEM_ZERO, - sizeof(struct mlx5_flow_meter_policy) * - nb_meter_policies, - RTE_CACHE_LINE_SIZE, - SOCKET_ID_ANY); + sizeof(struct mlx5_flow_meter_policy) * + nb_meter_policies, + RTE_CACHE_LINE_SIZE, + SOCKET_ID_ANY); if (!priv->mtr_policy_arr) { ret = ENOMEM; rte_mtr_error_set(&error, ENOMEM, - RTE_MTR_ERROR_TYPE_UNSPECIFIED, - NULL, "Meter policy allocation failed."); + RTE_MTR_ERROR_TYPE_UNSPECIFIED, + NULL, "Meter policy allocation failed."); goto err; } return 0;

From patchwork Fri Sep 23 14:43:34 2022 X-Patchwork-Submitter: Suanming Mou X-Patchwork-Id: 116764 X-Patchwork-Delegate: thomas@monjalon.net
From: Suanming Mou To: Matan Azrad , Viacheslav Ovsiienko CC: , Alexander Kozyrev Subject: [PATCH 27/27] net/mlx5: implement METER MARK indirect action for HWS Date: Fri, 23 Sep 2022 17:43:34 +0300 Message-ID: <20220923144334.27736-28-suanmingm@nvidia.com> X-Mailer: git-send-email 2.18.1 In-Reply-To: <20220923144334.27736-1-suanmingm@nvidia.com> References: <20220923144334.27736-1-suanmingm@nvidia.com> MIME-Version: 1.0
From: Alexander Kozyrev  Add the ability to create an indirect action handle for METER_MARK. This allows one meter to be shared between several different actions.
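As background for the mlx5_flow.h changes in this patch: an indirect action handle on this driver is a bare 32-bit value, with the action type packed into the top bits and the object index in the remaining low bits, and the type field is moved from bit offset 30 down to 29 to make room for the new METER_MARK type. Below is a minimal standalone sketch of that packing scheme; the macro names only mirror MLX5_INDIRECT_ACTION_TYPE_OFFSET and the indirect action type enum, and the sample index 42 is hypothetical.

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Mirrors MLX5_INDIRECT_ACTION_TYPE_OFFSET after this patch. */
#define INDIR_TYPE_OFFSET 29
/* METER_MARK is the fifth entry of the indirect action type enum. */
#define INDIR_TYPE_METER_MARK 4

static uint32_t indir_handle_pack(uint32_t type, uint32_t index)
{
	/* Bits 31-29 carry the type, bits 28-0 the object index. */
	return (type << INDIR_TYPE_OFFSET) |
	       (index & ((1u << INDIR_TYPE_OFFSET) - 1));
}

int main(void)
{
	uint32_t handle = indir_handle_pack(INDIR_TYPE_METER_MARK, 42);
	uint32_t type = handle >> INDIR_TYPE_OFFSET;
	uint32_t index = handle & ((1u << INDIR_TYPE_OFFSET) - 1);

	printf("handle=0x%08" PRIx32 " type=%" PRIu32 " index=%" PRIu32 "\n",
	       handle, type, index);
	return 0;
}

The decode on the last two lines of main() is exactly what flow_hw_action_handle_update() and flow_hw_action_handle_destroy() below do to recover the meter index from the handle.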
Signed-off-by: Alexander Kozyrev --- drivers/net/mlx5/mlx5_flow.c | 6 ++ drivers/net/mlx5/mlx5_flow.h | 25 ++++- drivers/net/mlx5/mlx5_flow_hw.c | 160 +++++++++++++++++++++++++++++++- 3 files changed, 183 insertions(+), 8 deletions(-) diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c index cbf9c31984..9627ffc979 100644 --- a/drivers/net/mlx5/mlx5_flow.c +++ b/drivers/net/mlx5/mlx5_flow.c @@ -4221,6 +4221,12 @@ flow_action_handles_translate(struct rte_eth_dev *dev, MLX5_RTE_FLOW_ACTION_TYPE_COUNT; translated[handle->index].conf = (void *)(uintptr_t)idx; break; + case MLX5_INDIRECT_ACTION_TYPE_METER_MARK: + translated[handle->index].type = + (enum rte_flow_action_type) + MLX5_RTE_FLOW_ACTION_TYPE_METER_MARK; + translated[handle->index].conf = (void *)(uintptr_t)idx; + break; case MLX5_INDIRECT_ACTION_TYPE_AGE: if (priv->sh->flow_hit_aso_en) { translated[handle->index].type = diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h index e1eb0ab697..30b8e1df99 100644 --- a/drivers/net/mlx5/mlx5_flow.h +++ b/drivers/net/mlx5/mlx5_flow.h @@ -47,6 +47,7 @@ enum mlx5_rte_flow_action_type { MLX5_RTE_FLOW_ACTION_TYPE_COUNT, MLX5_RTE_FLOW_ACTION_TYPE_JUMP, MLX5_RTE_FLOW_ACTION_TYPE_RSS, + MLX5_RTE_FLOW_ACTION_TYPE_METER_MARK, }; /* Private (internal) Field IDs for MODIFY_FIELD action. */ @@ -55,22 +56,35 @@ enum mlx5_rte_flow_field_id { MLX5_RTE_FLOW_FIELD_META_REG, }; -#define MLX5_INDIRECT_ACTION_TYPE_OFFSET 30 +#define MLX5_INDIRECT_ACTION_TYPE_OFFSET 29 enum { MLX5_INDIRECT_ACTION_TYPE_RSS, MLX5_INDIRECT_ACTION_TYPE_AGE, MLX5_INDIRECT_ACTION_TYPE_COUNT, MLX5_INDIRECT_ACTION_TYPE_CT, + MLX5_INDIRECT_ACTION_TYPE_METER_MARK, }; -/* Now, the maximal ports will be supported is 256, action number is 4M. */ -#define MLX5_INDIRECT_ACT_CT_MAX_PORT 0x100 +enum MLX5_SET_MATCHER { + MLX5_SET_MATCHER_SW_V = 1 << 0, + MLX5_SET_MATCHER_SW_M = 1 << 1, + MLX5_SET_MATCHER_HS_V = 1 << 2, + MLX5_SET_MATCHER_HS_M = 1 << 3, +}; + +#define MLX5_SET_MATCHER_SW (MLX5_SET_MATCHER_SW_V | MLX5_SET_MATCHER_SW_M) +#define MLX5_SET_MATCHER_HS (MLX5_SET_MATCHER_HS_V | MLX5_SET_MATCHER_HS_M) +#define MLX5_SET_MATCHER_V (MLX5_SET_MATCHER_SW_V | MLX5_SET_MATCHER_HS_V) +#define MLX5_SET_MATCHER_M (MLX5_SET_MATCHER_SW_M | MLX5_SET_MATCHER_HS_M) + +/* Now, the maximal ports will be supported is 16, action number is 32M. */ +#define MLX5_ACTION_CTX_CT_MAX_PORT 0x10 #define MLX5_INDIRECT_ACT_CT_OWNER_SHIFT 22 #define MLX5_INDIRECT_ACT_CT_OWNER_MASK (MLX5_INDIRECT_ACT_CT_MAX_PORT - 1) -/* 30-31: type, 22-29: owner port, 0-21: index. */ +/* 29-31: type, 25-28: owner port, 0-24: index */ #define MLX5_INDIRECT_ACT_CT_GEN_IDX(owner, index) \ ((MLX5_INDIRECT_ACTION_TYPE_CT << MLX5_INDIRECT_ACTION_TYPE_OFFSET) | \ (((owner) & MLX5_INDIRECT_ACT_CT_OWNER_MASK) << \ @@ -1159,6 +1173,9 @@ struct mlx5_action_construct_data { struct { uint32_t id; } shared_counter; + struct { + uint32_t id; + } shared_meter; }; }; diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c index 90a6c0c78f..e114bf11c1 100644 --- a/drivers/net/mlx5/mlx5_flow_hw.c +++ b/drivers/net/mlx5/mlx5_flow_hw.c @@ -615,6 +615,42 @@ __flow_hw_act_data_shared_cnt_append(struct mlx5_priv *priv, return 0; } +/** + * Append shared meter_mark action to the dynamic action list. + * + * @param[in] priv + * Pointer to the port private data structure. + * @param[in] acts + * Pointer to the template HW steering DR actions. + * @param[in] type + * Action type. + * @param[in] action_src + * Offset of source rte flow action. 
+ * @param[in] action_dst + * Offset of destination DR action. + * @param[in] mtr_id + * Shared meter id. + * + * @return + * 0 on success, negative value otherwise and rte_errno is set. + */ +static __rte_always_inline int +__flow_hw_act_data_shared_mtr_append(struct mlx5_priv *priv, + struct mlx5_hw_actions *acts, + enum rte_flow_action_type type, + uint16_t action_src, + uint16_t action_dst, + cnt_id_t mtr_id) +{ struct mlx5_action_construct_data *act_data; + + act_data = __flow_hw_act_data_alloc(priv, type, action_src, action_dst); + if (!act_data) + return -1; + act_data->type = type; + act_data->shared_meter.id = mtr_id; + LIST_INSERT_HEAD(&acts->act_list, act_data, next); + return 0; +} /** * Translate shared indirect action. @@ -668,6 +704,13 @@ flow_hw_shared_action_translate(struct rte_eth_dev *dev, if (flow_hw_ct_compile(dev, idx, &acts->rule_acts[action_dst])) return -1; break; + case MLX5_INDIRECT_ACTION_TYPE_METER_MARK: + if (__flow_hw_act_data_shared_mtr_append(priv, acts, + (enum rte_flow_action_type) + MLX5_RTE_FLOW_ACTION_TYPE_METER_MARK, + action_src, action_dst, idx)) + return -1; + break; default: DRV_LOG(WARNING, "Unsupported shared action type:%d", type); break; @@ -1682,8 +1725,10 @@ flow_hw_shared_action_construct(struct rte_eth_dev *dev, struct mlx5dr_rule_action *rule_act) { struct mlx5_priv *priv = dev->data->dev_private; + struct mlx5_aso_mtr_pool *pool = priv->hws_mpool; struct mlx5_action_construct_data act_data; struct mlx5_shared_action_rss *shared_rss; + struct mlx5_aso_mtr *aso_mtr; uint32_t act_idx = (uint32_t)(uintptr_t)action->conf; uint32_t type = act_idx >> MLX5_INDIRECT_ACTION_TYPE_OFFSET; uint32_t idx = act_idx & @@ -1719,6 +1764,17 @@ flow_hw_shared_action_construct(struct rte_eth_dev *dev, if (flow_hw_ct_compile(dev, idx, rule_act)) return -1; break; + case MLX5_INDIRECT_ACTION_TYPE_METER_MARK: + /* Find ASO object. */ + aso_mtr = mlx5_ipool_get(pool->idx_pool, idx); + if (!aso_mtr) + return -1; + rule_act->action = pool->action; + rule_act->aso_meter.offset = aso_mtr->offset; + rule_act->aso_meter.init_color = + (enum mlx5dr_action_aso_meter_color) + rte_col_2_mlx5_col(aso_mtr->init_color); + break; default: DRV_LOG(WARNING, "Unsupported shared action type:%d", type); break; @@ -1856,6 +1912,7 @@ flow_hw_actions_construct(struct rte_eth_dev *dev, uint32_t queue) { struct mlx5_priv *priv = dev->data->dev_private; + struct mlx5_aso_mtr_pool *pool = priv->hws_mpool; struct rte_flow_template_table *table = job->flow->table; struct mlx5_action_construct_data *act_data; const struct rte_flow_actions_template *at = hw_at->action_template; @@ -2065,6 +2122,21 @@ flow_hw_actions_construct(struct rte_eth_dev *dev, &rule_acts[act_data->action_dst])) return -1; break; + case MLX5_RTE_FLOW_ACTION_TYPE_METER_MARK: + mtr_id = act_data->shared_meter.id & + ((1u << MLX5_INDIRECT_ACTION_TYPE_OFFSET) - 1); + /* Find ASO object. 
*/ + aso_mtr = mlx5_ipool_get(pool->idx_pool, mtr_id); + if (!aso_mtr) + return -1; + rule_acts[act_data->action_dst].action = + pool->action; + rule_acts[act_data->action_dst].aso_meter.offset = + aso_mtr->offset; + rule_acts[act_data->action_dst].aso_meter.init_color = + (enum mlx5dr_action_aso_meter_color) + rte_col_2_mlx5_col(aso_mtr->init_color); + break; case RTE_FLOW_ACTION_TYPE_METER_MARK: ret = flow_hw_meter_mark_compile(dev, act_data->action_dst, action, @@ -3252,6 +3324,11 @@ flow_hw_dr_actions_template_handle_shared(const struct rte_flow_action *mask, action_types[*curr_off] = MLX5DR_ACTION_TYP_ASO_CT; *curr_off = *curr_off + 1; break; + case RTE_FLOW_ACTION_TYPE_METER_MARK: + at->actions_off[action_src] = *curr_off; + action_types[*curr_off] = MLX5DR_ACTION_TYP_ASO_METER; + *curr_off = *curr_off + 1; + break; default: DRV_LOG(WARNING, "Unsupported shared action type: %d", type); return -EINVAL; @@ -5793,7 +5870,9 @@ flow_hw_action_handle_create(struct rte_eth_dev *dev, uint32_t queue, { struct rte_flow_action_handle *handle = NULL; struct mlx5_priv *priv = dev->data->dev_private; + struct mlx5_aso_mtr *aso_mtr; cnt_id_t cnt_id; + uint32_t mtr_id; RTE_SET_USED(queue); RTE_SET_USED(attr); @@ -5812,6 +5891,14 @@ flow_hw_action_handle_create(struct rte_eth_dev *dev, uint32_t queue, case RTE_FLOW_ACTION_TYPE_CONNTRACK: handle = flow_hw_conntrack_create(dev, queue, action->conf, error); break; + case RTE_FLOW_ACTION_TYPE_METER_MARK: + aso_mtr = flow_hw_meter_mark_alloc(dev, action); + if (!aso_mtr) + break; + mtr_id = (MLX5_INDIRECT_ACTION_TYPE_METER_MARK << + MLX5_INDIRECT_ACTION_TYPE_OFFSET) | (aso_mtr->fm.meter_id); + handle = (struct rte_flow_action_handle *)(uintptr_t)mtr_id; + break; default: handle = flow_dv_action_create(dev, conf, action, error); } @@ -5847,18 +5934,58 @@ flow_hw_action_handle_update(struct rte_eth_dev *dev, uint32_t queue, void *user_data, struct rte_flow_error *error) { - uint32_t act_idx = (uint32_t)(uintptr_t)handle; - uint32_t type = act_idx >> MLX5_INDIRECT_ACTION_TYPE_OFFSET; - RTE_SET_USED(queue); RTE_SET_USED(attr); RTE_SET_USED(user_data); + struct mlx5_priv *priv = dev->data->dev_private; + struct mlx5_aso_mtr_pool *pool = priv->hws_mpool; + const struct rte_flow_update_meter_mark *upd_meter_mark = + (const struct rte_flow_update_meter_mark *)update; + const struct rte_flow_action_meter_mark *meter_mark; + struct mlx5_aso_mtr *aso_mtr; + struct mlx5_flow_meter_info *fm; + uint32_t act_idx = (uint32_t)(uintptr_t)handle; + uint32_t type = act_idx >> MLX5_INDIRECT_ACTION_TYPE_OFFSET; + uint32_t idx = act_idx & ((1u << MLX5_INDIRECT_ACTION_TYPE_OFFSET) - 1); + switch (type) { case MLX5_INDIRECT_ACTION_TYPE_CT: return flow_hw_conntrack_update(dev, queue, update, act_idx, error); + case MLX5_INDIRECT_ACTION_TYPE_METER_MARK: + meter_mark = &upd_meter_mark->meter_mark; + /* Find ASO object. */ + aso_mtr = mlx5_ipool_get(pool->idx_pool, idx); + if (!aso_mtr) + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, "Invalid meter_mark update index"); + fm = &aso_mtr->fm; + if (upd_meter_mark->profile_valid) + fm->profile = (struct mlx5_flow_meter_profile *) + (meter_mark->profile); + if (upd_meter_mark->color_mode_valid) + fm->color_aware = meter_mark->color_mode; + if (upd_meter_mark->init_color_valid) + aso_mtr->init_color = (meter_mark->color_mode) ? + meter_mark->init_color : RTE_COLOR_GREEN; + if (upd_meter_mark->state_valid) + fm->is_enable = meter_mark->state; + /* Update ASO flow meter by wqe. 
*/ + if (mlx5_aso_meter_update_by_wqe(priv->sh, aso_mtr, + &priv->mtr_bulk)) + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, "Unable to update ASO meter WQE"); + /* Wait for ASO object completion. */ + if (mlx5_aso_mtr_wait(priv->sh, aso_mtr)) + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, "Unable to wait for ASO meter CQE"); + return 0; default: - return flow_dv_action_update(dev, handle, update, error); + break; } + return flow_dv_action_update(dev, handle, update, error); } /** @@ -5889,7 +6016,11 @@ flow_hw_action_handle_destroy(struct rte_eth_dev *dev, uint32_t queue, { uint32_t act_idx = (uint32_t)(uintptr_t)handle; uint32_t type = act_idx >> MLX5_INDIRECT_ACTION_TYPE_OFFSET; + uint32_t idx = act_idx & ((1u << MLX5_INDIRECT_ACTION_TYPE_OFFSET) - 1); struct mlx5_priv *priv = dev->data->dev_private; + struct mlx5_aso_mtr_pool *pool = priv->hws_mpool; + struct mlx5_aso_mtr *aso_mtr; + struct mlx5_flow_meter_info *fm; RTE_SET_USED(queue); RTE_SET_USED(attr); @@ -5899,6 +6030,27 @@ flow_hw_action_handle_destroy(struct rte_eth_dev *dev, uint32_t queue, return mlx5_hws_cnt_shared_put(priv->hws_cpool, &act_idx); case MLX5_INDIRECT_ACTION_TYPE_CT: return flow_hw_conntrack_destroy(dev, act_idx, error); + case MLX5_INDIRECT_ACTION_TYPE_METER_MARK: + aso_mtr = mlx5_ipool_get(pool->idx_pool, idx); + if (!aso_mtr) + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, "Invalid meter_mark destroy index"); + fm = &aso_mtr->fm; + fm->is_enable = 0; + /* Update ASO flow meter by wqe. */ + if (mlx5_aso_meter_update_by_wqe(priv->sh, aso_mtr, + &priv->mtr_bulk)) + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, "Unable to update ASO meter WQE"); + /* Wait for ASO object completion. */ + if (mlx5_aso_mtr_wait(priv->sh, aso_mtr)) + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, "Unable to wait for ASO meter CQE"); + mlx5_ipool_free(pool->idx_pool, idx); + return 0; default: return flow_dv_action_destroy(dev, handle, error); }
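To close out the series, here is a sketch (not part of the patch) of how an application could exercise the new METER_MARK indirect action end to end through the generic rte_flow entry points that dispatch into the flow_hw_* handlers above. The rte_flow_action_meter_mark and rte_flow_update_meter_mark field names follow their usage in this patch; the profile pointer and the color/state choices are illustrative assumptions.

#include <rte_flow.h>
#include <rte_meter.h>

/* Create one shared meter handle; many flow rules may then reference it. */
static struct rte_flow_action_handle *
shared_meter_create(uint16_t port_id, struct rte_flow_meter_profile *profile,
		    struct rte_flow_error *err)
{
	const struct rte_flow_action_meter_mark mm = {
		.profile = profile,            /* assumed pre-created profile */
		.color_mode = 1,               /* color-aware metering */
		.init_color = RTE_COLOR_GREEN,
		.state = 1,                    /* meter starts enabled */
	};
	const struct rte_flow_indir_action_conf conf = { .ingress = 1 };
	const struct rte_flow_action action = {
		.type = RTE_FLOW_ACTION_TYPE_METER_MARK,
		.conf = &mm,
	};

	return rte_flow_action_handle_create(port_id, &conf, &action, err);
}

/* Disable the shared meter in place; only the state field is touched,
 * so only state_valid is set in the update descriptor. */
static int
shared_meter_disable(uint16_t port_id, struct rte_flow_action_handle *handle,
		     struct rte_flow_error *err)
{
	const struct rte_flow_update_meter_mark upd = {
		.meter_mark = { .state = 0 },
		.state_valid = 1,
	};

	return rte_flow_action_handle_update(port_id, handle, &upd, err);
}

The update path maps onto the METER_MARK case of flow_hw_action_handle_update() above: the driver patches only the fields whose *_valid bits are set, pushes an ASO WQE, and waits for the CQE before returning.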