From patchwork Thu Feb 29 16:05:03 2024
X-Patchwork-Submitter: Gregory Etelson
X-Patchwork-Id: 137508
X-Patchwork-Delegate: rasland@nvidia.com
From: Gregory Etelson
To:
CC: , , , , Ori Kam , Dariusz Sosnowski , Viacheslav Ovsiienko ,
 Suanming Mou , Matan Azrad , "Yongseok Koh" , Shahaf Shuler
Subject: [PATCH 1/2] net/mlx5: remove code duplications
Date: Thu, 29 Feb 2024 18:05:03 +0200
Message-ID: <20240229160505.630586-2-getelson@nvidia.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20240229160505.630586-1-getelson@nvidia.com>
References: <20240229160505.630586-1-getelson@nvidia.com>
MIME-Version: 1.0

Remove code duplications in DV L3 items validation and translation.

Fixes: 3193c2494eea ("net/mlx5: fix L4 protocol validation")
Cc: stable@dpdk.org

Signed-off-by: Gregory Etelson
Acked-by: Ori Kam
---
 drivers/net/mlx5/mlx5_flow_dv.c | 151 +++++++++-----------------------
 1 file changed, 43 insertions(+), 108 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index 18f09b22be..fe0a06f364 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -7488,6 +7488,40 @@ flow_dv_validate_item_flex(struct rte_eth_dev *dev,
 	return 0;
 }
 
+static __rte_always_inline uint8_t
+mlx5_flow_l3_next_protocol(const struct rte_flow_item *l3_item,
+			   enum MLX5_SET_MATCHER key_type)
+{
+#define MLX5_L3_NEXT_PROTOCOL(i, ms) \
+	((i)->type == RTE_FLOW_ITEM_TYPE_IPV4 ? \
+	((const struct rte_flow_item_ipv4 *)(i)->ms)->hdr.next_proto_id : \
+	(i)->type == RTE_FLOW_ITEM_TYPE_IPV6 ? \
+	((const struct rte_flow_item_ipv6 *)(i)->ms)->hdr.proto : \
+	(i)->type == RTE_FLOW_ITEM_TYPE_IPV6_FRAG_EXT ? \
+	((const struct rte_flow_item_ipv6_frag_ext *)(i)->ms)->hdr.next_header :\
+	0xff)
+
+	uint8_t next_protocol;
+
+	if (l3_item->mask != NULL && l3_item->spec != NULL) {
+		next_protocol = MLX5_L3_NEXT_PROTOCOL(l3_item, spec);
+		if (next_protocol)
+			next_protocol &= MLX5_L3_NEXT_PROTOCOL(l3_item, mask);
+		else
+			next_protocol = 0xff;
+	} else if (key_type == MLX5_SET_MATCHER_HS_M && l3_item->mask != NULL) {
+		next_protocol = MLX5_L3_NEXT_PROTOCOL(l3_item, mask);
+	} else if (key_type == MLX5_SET_MATCHER_HS_V && l3_item->spec != NULL) {
+		next_protocol = MLX5_L3_NEXT_PROTOCOL(l3_item, spec);
+	} else {
+		/* Reset for inner layer. */
+		next_protocol = 0xff;
+	}
+	return next_protocol;
+
+#undef MLX5_L3_NEXT_PROTOCOL
+}
+
 /**
  * Validate IB BTH item.
  *
@@ -7770,19 +7804,8 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
 				return ret;
 			last_item = tunnel ? MLX5_FLOW_LAYER_INNER_L3_IPV4 :
 					     MLX5_FLOW_LAYER_OUTER_L3_IPV4;
-			if (items->mask != NULL &&
-			    ((const struct rte_flow_item_ipv4 *)
-			     items->mask)->hdr.next_proto_id) {
-				next_protocol =
-					((const struct rte_flow_item_ipv4 *)
-					 (items->spec))->hdr.next_proto_id;
-				next_protocol &=
-					((const struct rte_flow_item_ipv4 *)
-					 (items->mask))->hdr.next_proto_id;
-			} else {
-				/* Reset for inner layer. */
-				next_protocol = 0xff;
-			}
+			next_protocol = mlx5_flow_l3_next_protocol
+					(items, (enum MLX5_SET_MATCHER)-1);
 			break;
 		case RTE_FLOW_ITEM_TYPE_IPV6:
 			mlx5_flow_tunnel_ip_check(items, next_protocol,
@@ -7796,22 +7819,8 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
 				return ret;
 			last_item = tunnel ? MLX5_FLOW_LAYER_INNER_L3_IPV6 :
 					     MLX5_FLOW_LAYER_OUTER_L3_IPV6;
-			if (items->mask != NULL &&
-			    ((const struct rte_flow_item_ipv6 *)
-			     items->mask)->hdr.proto) {
-				item_ipv6_proto =
-					((const struct rte_flow_item_ipv6 *)
-					 items->spec)->hdr.proto;
-				next_protocol =
-					((const struct rte_flow_item_ipv6 *)
-					 items->spec)->hdr.proto;
-				next_protocol &=
-					((const struct rte_flow_item_ipv6 *)
-					 items->mask)->hdr.proto;
-			} else {
-				/* Reset for inner layer. */
-				next_protocol = 0xff;
-			}
+			next_protocol = mlx5_flow_l3_next_protocol
+					(items, (enum MLX5_SET_MATCHER)-1);
 			break;
 		case RTE_FLOW_ITEM_TYPE_IPV6_FRAG_EXT:
 			ret = flow_dv_validate_item_ipv6_frag_ext(items,
@@ -7822,19 +7831,8 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
 			last_item = tunnel ?
 					MLX5_FLOW_LAYER_INNER_L3_IPV6_FRAG_EXT :
 					MLX5_FLOW_LAYER_OUTER_L3_IPV6_FRAG_EXT;
-			if (items->mask != NULL &&
-			    ((const struct rte_flow_item_ipv6_frag_ext *)
-			     items->mask)->hdr.next_header) {
-				next_protocol =
-					((const struct rte_flow_item_ipv6_frag_ext *)
-					 items->spec)->hdr.next_header;
-				next_protocol &=
-					((const struct rte_flow_item_ipv6_frag_ext *)
-					 items->mask)->hdr.next_header;
-			} else {
-				/* Reset for inner layer. */
-				next_protocol = 0xff;
-			}
+			next_protocol = mlx5_flow_l3_next_protocol
+					(items, (enum MLX5_SET_MATCHER)-1);
 			break;
 		case RTE_FLOW_ITEM_TYPE_TCP:
 			ret = mlx5_flow_validate_item_tcp
@@ -13997,28 +13995,7 @@ flow_dv_translate_items(struct rte_eth_dev *dev,
 		wks->priority = MLX5_PRIORITY_MAP_L3;
 		last_item = tunnel ? MLX5_FLOW_LAYER_INNER_L3_IPV4 :
 				     MLX5_FLOW_LAYER_OUTER_L3_IPV4;
-		if (items->mask != NULL &&
-		    items->spec != NULL &&
-		    ((const struct rte_flow_item_ipv4 *)
-		     items->mask)->hdr.next_proto_id) {
-			next_protocol =
-				((const struct rte_flow_item_ipv4 *)
-				 (items->spec))->hdr.next_proto_id;
-			next_protocol &=
-				((const struct rte_flow_item_ipv4 *)
-				 (items->mask))->hdr.next_proto_id;
-		} else if (key_type == MLX5_SET_MATCHER_HS_M &&
-			   items->mask != NULL) {
-			next_protocol = ((const struct rte_flow_item_ipv4 *)
-					 (items->mask))->hdr.next_proto_id;
-		} else if (key_type == MLX5_SET_MATCHER_HS_V &&
-			   items->spec != NULL) {
-			next_protocol = ((const struct rte_flow_item_ipv4 *)
-					 (items->spec))->hdr.next_proto_id;
-		} else {
-			/* Reset for inner layer. */
-			next_protocol = 0xff;
-		}
+		next_protocol = mlx5_flow_l3_next_protocol(items, key_type);
 		break;
 	case RTE_FLOW_ITEM_TYPE_IPV6:
 		mlx5_flow_tunnel_ip_check(items, next_protocol,
@@ -14028,56 +14005,14 @@ flow_dv_translate_items(struct rte_eth_dev *dev,
 		wks->priority = MLX5_PRIORITY_MAP_L3;
 		last_item = tunnel ? MLX5_FLOW_LAYER_INNER_L3_IPV6 :
 				     MLX5_FLOW_LAYER_OUTER_L3_IPV6;
-		if (items->mask != NULL &&
-		    items->spec != NULL &&
-		    ((const struct rte_flow_item_ipv6 *)
-		     items->mask)->hdr.proto) {
-			next_protocol =
-				((const struct rte_flow_item_ipv6 *)
-				 items->spec)->hdr.proto;
-			next_protocol &=
-				((const struct rte_flow_item_ipv6 *)
-				 items->mask)->hdr.proto;
-		} else if (key_type == MLX5_SET_MATCHER_HS_M &&
-			   items->mask != NULL) {
-			next_protocol = ((const struct rte_flow_item_ipv6 *)
-					 (items->mask))->hdr.proto;
-		} else if (key_type == MLX5_SET_MATCHER_HS_V &&
-			   items->spec != NULL) {
-			next_protocol = ((const struct rte_flow_item_ipv6 *)
-					 (items->spec))->hdr.proto;
-		} else {
-			/* Reset for inner layer. */
-			next_protocol = 0xff;
-		}
+		next_protocol = mlx5_flow_l3_next_protocol(items, key_type);
 		break;
 	case RTE_FLOW_ITEM_TYPE_IPV6_FRAG_EXT:
 		flow_dv_translate_item_ipv6_frag_ext
 					(key, items, tunnel, key_type);
 		last_item = tunnel ? MLX5_FLOW_LAYER_INNER_L3_IPV6_FRAG_EXT :
 				     MLX5_FLOW_LAYER_OUTER_L3_IPV6_FRAG_EXT;
-		if (items->mask != NULL &&
-		    items->spec != NULL &&
-		    ((const struct rte_flow_item_ipv6_frag_ext *)
-		     items->mask)->hdr.next_header) {
-			next_protocol =
-				((const struct rte_flow_item_ipv6_frag_ext *)
-				 items->spec)->hdr.next_header;
-			next_protocol &=
-				((const struct rte_flow_item_ipv6_frag_ext *)
-				 items->mask)->hdr.next_header;
-		} else if (key_type == MLX5_SET_MATCHER_HS_M &&
-			   items->mask != NULL) {
-			next_protocol = ((const struct rte_flow_item_ipv6_frag_ext *)
-					 (items->mask))->hdr.next_header;
-		} else if (key_type == MLX5_SET_MATCHER_HS_V &&
-			   items->spec != NULL) {
-			next_protocol = ((const struct rte_flow_item_ipv6_frag_ext *)
-					 (items->spec))->hdr.next_header;
-		} else {
-			/* Reset for inner layer. */
-			next_protocol = 0xff;
-		}
+		next_protocol = mlx5_flow_l3_next_protocol(items, key_type);
 		break;
 	case RTE_FLOW_ITEM_TYPE_TCP:
 		flow_dv_translate_item_tcp(key, items, tunnel, key_type);
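
Note (illustration only, not part of the patch): the new helper applies one
spec/mask resolution for the IPv4, IPv6 and IPv6 fragment-extension
next-protocol fields instead of open-coding it per item type. Below is a
minimal standalone sketch of that resolution; the struct, enum and function
names are hypothetical stand-ins for the rte_flow item pointers and the
MLX5_SET_MATCHER key type, not mlx5 code.

#include <stdint.h>
#include <stdio.h>

/* Stand-in for the next-protocol byte taken from item->spec / item->mask. */
struct l3_proto_field {
	const uint8_t *spec;	/* from item->spec, or NULL */
	const uint8_t *mask;	/* from item->mask, or NULL */
};

enum matcher_side { MATCHER_NONE, MATCHER_MASK_ONLY, MATCHER_VALUE_ONLY };

static uint8_t
l3_next_protocol(const struct l3_proto_field *f, enum matcher_side side)
{
	/* Both sides present: match the masked protocol, wildcard on zero. */
	if (f->spec != NULL && f->mask != NULL)
		return *f->spec ? (uint8_t)(*f->spec & *f->mask) : 0xff;
	/* Building only the mask half or only the value half of the matcher. */
	if (side == MATCHER_MASK_ONLY && f->mask != NULL)
		return *f->mask;
	if (side == MATCHER_VALUE_ONLY && f->spec != NULL)
		return *f->spec;
	return 0xff;	/* reset for inner layer */
}

int main(void)
{
	uint8_t tcp = 6, full = 0xff;
	struct l3_proto_field f = { .spec = &tcp, .mask = &full };

	printf("next protocol: 0x%02x\n", l3_next_protocol(&f, MATCHER_NONE));
	return 0;
}

The validate path passes an out-of-range key type, so only the combined
spec/mask branch or the 0xff reset is taken there, while the translate path
selects the mask-only or value-only branch according to the matcher half
being built.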