From patchwork Mon May 31 10:19:40 2021
X-Patchwork-Submitter: Rongwei Liu
X-Patchwork-Id: 93619
X-Patchwork-Delegate: rasland@nvidia.com
From: rongwei liu
To: , , , , Shahaf Shuler
CC: ,
Date: Mon, 31 May 2021 13:19:40 +0300
Message-ID: <20210531101941.60916-2-rongweil@nvidia.com>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20210531101941.60916-1-rongweil@nvidia.com>
References: <20210531101941.60916-1-rongweil@nvidia.com>
Subject: [dpdk-dev] [RFC 1/2] drivers: add VXLAN header the last 8-bits matching support
This update adds support for matching the last 8 bits (the reserved "alert" bits) of the VXLAN header when creating steering rules. At the PCIe probe stage, a dummy VXLAN matcher using misc5 is created to check the rdma-core library's capability. The logic is: for group 0, HCA_CAP decides whether misc or misc5 is used for VXLAN matching, while for non-zero groups the choice depends on the rdma-core capability.

Signed-off-by: rongwei liu
---
doc/guides/nics/mlx5.rst | 11 +- drivers/common/mlx5/mlx5_devx_cmds.c | 3 + drivers/common/mlx5/mlx5_devx_cmds.h | 6 ++ drivers/common/mlx5/mlx5_prm.h | 41 ++++++-- drivers/net/mlx5/linux/mlx5_os.c | 77 ++++++++++++++ drivers/net/mlx5/mlx5.h | 2 + drivers/net/mlx5/mlx5_flow.c | 26 ++++- drivers/net/mlx5/mlx5_flow.h | 4 +- drivers/net/mlx5/mlx5_flow_dv.c | 152 +++++++++++++++++++-------- drivers/net/mlx5/mlx5_flow_verbs.c | 3 +- drivers/vdpa/mlx5/mlx5_vdpa_steer.c | 6 +- 11 files changed, 270 insertions(+), 61 deletions(-) diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst index 83299646dd..0bfe55dbdc 100644 --- a/doc/guides/nics/mlx5.rst +++ b/doc/guides/nics/mlx5.rst @@ -186,8 +186,15 @@ Limitations size and ``txq_inline_min`` settings and may be from 2 (worst case forced by maximal inline settings) to 58. -- Flows with a VXLAN Network Identifier equal (or ends to be equal) - to 0 are not supported. +- Match on VXLAN supports the following fields only: + + - VNI + - Last reserved 8-bits + + Last reserved 8-bits matching is only supported when using DV flow + engine (``dv_flow_en`` = 1). + Group zero's behavior may differ which depends on FW. + Matching value equals 0 (value & mask) is not supported. - L3 VXLAN and VXLAN-GPE tunnels cannot be supported together with MPLSoGRE and MPLSoUDP. diff --git a/drivers/common/mlx5/mlx5_devx_cmds.c b/drivers/common/mlx5/mlx5_devx_cmds.c index f5914bce32..63ae95832d 100644 --- a/drivers/common/mlx5/mlx5_devx_cmds.c +++ b/drivers/common/mlx5/mlx5_devx_cmds.c @@ -947,6 +947,9 @@ mlx5_devx_cmd_query_hca_attr(void *ctx, attr->log_max_ft_sampler_num = MLX5_GET (flow_table_nic_cap, hcattr, flow_table_properties_nic_receive.log_max_ft_sampler_num); + attr->flow.tunnel_header_0_1 = MLX5_GET + (flow_table_nic_cap, hcattr, + ft_field_support_2_nic_receive.tunnel_header_0_1); attr->pkt_integrity_match = mlx5_devx_query_pkt_integrity_match(hcattr); /* Query HCA offloads for Ethernet protocol. */ memset(in, 0, sizeof(in)); diff --git a/drivers/common/mlx5/mlx5_devx_cmds.h b/drivers/common/mlx5/mlx5_devx_cmds.h index f8a17b886b..124f43e852 100644 --- a/drivers/common/mlx5/mlx5_devx_cmds.h +++ b/drivers/common/mlx5/mlx5_devx_cmds.h @@ -89,6 +89,11 @@ struct mlx5_hca_vdpa_attr { uint64_t doorbell_bar_offset; }; +struct mlx5_hca_flow_attr { + uint32_t tunnel_header_0_1; + uint32_t tunnel_header_2_3; +}; + /* HCA supports this number of time periods for LRO.
*/ #define MLX5_LRO_NUM_SUPP_PERIODS 4 @@ -155,6 +160,7 @@ struct mlx5_hca_attr { uint32_t pkt_integrity_match:1; /* 1 if HW supports integrity item */ struct mlx5_hca_qos_attr qos; struct mlx5_hca_vdpa_attr vdpa; + struct mlx5_hca_flow_attr flow; int log_max_qp_sz; int log_max_cq_sz; int log_max_qp; diff --git a/drivers/common/mlx5/mlx5_prm.h b/drivers/common/mlx5/mlx5_prm.h index 26761f5bd3..7950070976 100644 --- a/drivers/common/mlx5/mlx5_prm.h +++ b/drivers/common/mlx5/mlx5_prm.h @@ -977,6 +977,18 @@ struct mlx5_ifc_fte_match_set_misc4_bits { u8 reserved_at_100[0x100]; }; +struct mlx5_ifc_fte_match_set_misc5_bits { + u8 macsec_tag_0[0x20]; + u8 macsec_tag_1[0x20]; + u8 macsec_tag_2[0x20]; + u8 macsec_tag_3[0x20]; + u8 tunnel_header_0[0x20]; + u8 tunnel_header_1[0x20]; + u8 tunnel_header_2[0x20]; + u8 tunnel_header_3[0x20]; + u8 reserved[0x100]; +}; + /* Flow matcher. */ struct mlx5_ifc_fte_match_param_bits { struct mlx5_ifc_fte_match_set_lyr_2_4_bits outer_headers; @@ -985,12 +997,13 @@ struct mlx5_ifc_fte_match_param_bits { struct mlx5_ifc_fte_match_set_misc2_bits misc_parameters_2; struct mlx5_ifc_fte_match_set_misc3_bits misc_parameters_3; struct mlx5_ifc_fte_match_set_misc4_bits misc_parameters_4; + struct mlx5_ifc_fte_match_set_misc5_bits misc_parameters_5; /* * Add reserved bit to match the struct size with the size defined in PRM. * This extension is not required in Linux. */ #ifndef HAVE_INFINIBAND_VERBS_H - u8 reserved_0[0x400]; + u8 reserved_0[0x200]; #endif }; @@ -1007,6 +1020,7 @@ enum { MLX5_MATCH_CRITERIA_ENABLE_MISC2_BIT, MLX5_MATCH_CRITERIA_ENABLE_MISC3_BIT, MLX5_MATCH_CRITERIA_ENABLE_MISC4_BIT, + MLX5_MATCH_CRITERIA_ENABLE_MISC5_BIT, }; enum { @@ -1784,7 +1798,12 @@ struct mlx5_ifc_roce_caps_bits { * Table 1872 - Flow Table Fields Supported 2 Format */ struct mlx5_ifc_ft_fields_support_2_bits { - u8 reserved_at_0[0x14]; + u8 reserved_at_0[0xf]; + u8 tunnel_header_2_3[0x1]; + u8 tunnel_header_0_1[0x1]; + u8 macsec_syndrome[0x1]; + u8 macsec_tag[0x1]; + u8 outer_lrh_sl[0x1]; u8 inner_ipv4_ihl[0x1]; u8 outer_ipv4_ihl[0x1]; u8 psp_syndrome[0x1]; @@ -1797,18 +1816,26 @@ struct mlx5_ifc_ft_fields_support_2_bits { u8 inner_l4_checksum_ok[0x1]; u8 outer_ipv4_checksum_ok[0x1]; u8 outer_l4_checksum_ok[0x1]; + u8 reserved_at_20[0x60]; }; struct mlx5_ifc_flow_table_nic_cap_bits { u8 reserved_at_0[0x200]; struct mlx5_ifc_flow_table_prop_layout_bits - flow_table_properties_nic_receive; + flow_table_properties_nic_receive; + struct mlx5_ifc_flow_table_prop_layout_bits + flow_table_properties_nic_receive_rdma; + struct mlx5_ifc_flow_table_prop_layout_bits + flow_table_properties_nic_receive_sniffer; + struct mlx5_ifc_flow_table_prop_layout_bits + flow_table_properties_nic_transmit; + struct mlx5_ifc_flow_table_prop_layout_bits + flow_table_properties_nic_transmit_rdma; struct mlx5_ifc_flow_table_prop_layout_bits - flow_table_properties_unused[5]; - u8 reserved_at_1C0[0x200]; - u8 header_modify_nic_receive[0x400]; + flow_table_properties_nic_transmit_sniffer; + u8 reserved_at_e00[0x600]; struct mlx5_ifc_ft_fields_support_2_bits - ft_field_support_2_nic_receive; + ft_field_support_2_nic_receive; }; struct mlx5_ifc_cmd_hca_cap_2_bits { diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c index 534a56a555..386db44795 100644 --- a/drivers/net/mlx5/linux/mlx5_os.c +++ b/drivers/net/mlx5/linux/mlx5_os.c @@ -193,6 +193,79 @@ mlx5_alloc_verbs_buf(size_t size, void *data) return ret; } +/** + * Detect misc5 support or not + * + * @param[in] priv + * Device private 
data pointer + */ +#ifdef HAVE_MLX5DV_DR +static void +__mlx5_discovery_misc5_cap(struct mlx5_priv *priv) +{ +#ifdef HAVE_IBV_FLOW_DV_SUPPORT + /* Dummy VxLAN matcher to detect rdma-core misc5 cap + * Case: IPv4--->UDP--->VxLAN--->vni + */ + void *tbl; + struct mlx5_flow_dv_match_params matcher_mask; + void *match_m; + void *matcher; + void *headers_m; + void *misc5_m; + uint32_t *tunnel_header_m; + struct mlx5dv_flow_matcher_attr dv_attr; + + memset(&matcher_mask, 0, sizeof(matcher_mask)); + matcher_mask.size = sizeof(matcher_mask.buf); + match_m = matcher_mask.buf; + headers_m = MLX5_ADDR_OF(fte_match_param, match_m, outer_headers); + misc5_m = MLX5_ADDR_OF(fte_match_param, + match_m, misc_parameters_5); + tunnel_header_m = (uint32_t *) + MLX5_ADDR_OF(fte_match_set_misc5, + misc5_m, tunnel_header_1); + MLX5_SET(fte_match_set_lyr_2_4, headers_m, ip_protocol, 0xff); + MLX5_SET(fte_match_set_lyr_2_4, headers_m, ip_version, 4); + MLX5_SET(fte_match_set_lyr_2_4, headers_m, udp_dport, 0xffff); + *tunnel_header_m = 0xffffff; + + tbl = mlx5_glue->dr_create_flow_tbl(priv->sh->rx_domain, 1); + if (!tbl) { + DRV_LOG(ERR, "dr_create_flow_tbl failed"); + return; + } + dv_attr.type = IBV_FLOW_ATTR_NORMAL, + dv_attr.match_mask = (void *)&matcher_mask, + dv_attr.match_criteria_enable = + (1 << MLX5_MATCH_CRITERIA_ENABLE_OUTER_BIT) | + (1 << MLX5_MATCH_CRITERIA_ENABLE_MISC5_BIT); + dv_attr.priority = 3; +#ifdef HAVE_MLX5DV_DR_ESWITCH + void *misc2_m; + if (priv->config.dv_esw_en) { + /* FDB enabled reg_c_0 */ + dv_attr.match_criteria_enable |= + (1 << MLX5_MATCH_CRITERIA_ENABLE_MISC2_BIT); + misc2_m = MLX5_ADDR_OF(fte_match_param, + match_m, misc_parameters_2); + MLX5_SET(fte_match_set_misc2, misc2_m, + metadata_reg_c_0, 0xffff); + } +#endif + matcher = mlx5_glue->dv_create_flow_matcher(priv->sh->ctx, + &dv_attr, tbl); + if (matcher) { + priv->sh->misc5_cap = 1; + mlx5_glue->dv_destroy_flow_matcher(matcher); + } + mlx5_glue->dr_destroy_flow_tbl(tbl); +#else + RTE_SET_USED(priv); +#endif +} +#endif + /** * Verbs callback to free a memory. * @@ -355,6 +428,8 @@ mlx5_alloc_shared_dr(struct mlx5_priv *priv) mlx5_glue->dr_reclaim_domain_memory(sh->fdb_domain, 1); } sh->pop_vlan_action = mlx5_glue->dr_create_flow_action_pop_vlan(); + + __mlx5_discovery_misc5_cap(priv); #endif /* HAVE_MLX5DV_DR */ sh->default_miss_action = mlx5_glue->dr_create_flow_action_default_miss(); @@ -1312,6 +1387,8 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev, goto error; } } + if (config->hca_attr.flow.tunnel_header_0_1) + sh->tunnel_header_0_1 = 1; #endif #ifdef HAVE_MLX5_DR_CREATE_ACTION_ASO if (config->hca_attr.flow_hit_aso && diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h index 32b2817bf2..2b2ddd02cf 100644 --- a/drivers/net/mlx5/mlx5.h +++ b/drivers/net/mlx5/mlx5.h @@ -1071,6 +1071,8 @@ struct mlx5_dev_ctx_shared { uint32_t qp_ts_format:2; /* QP timestamp formats supported. */ uint32_t meter_aso_en:1; /* Flow Meter ASO is supported. */ uint32_t ct_aso_en:1; /* Connection Tracking ASO is supported. */ + uint32_t tunnel_header_0_1:1; /* tunnel_header_0_1 is supported. */ + uint32_t misc5_cap:1; /* misc5 matcher parameter is supported. */ uint32_t max_port; /* Maximal IB device port index. */ struct mlx5_bond_info bond; /* Bonding information. */ void *ctx; /* Verbs/DV/DevX context. 
*/ diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c index dbeca571b6..309f9f94bf 100644 --- a/drivers/net/mlx5/mlx5_flow.c +++ b/drivers/net/mlx5/mlx5_flow.c @@ -2368,12 +2368,14 @@ mlx5_flow_validate_item_tcp(const struct rte_flow_item *item, /** * Validate VXLAN item. * + * @param[in] dev + * Pointer to the Ethernet device structure. * @param[in] item * Item specification. * @param[in] item_flags * Bit-fields that holds the items detected until now. - * @param[in] target_protocol - * The next protocol in the previous item. + * @param[in] attr + * Flow rule attributes. * @param[out] error * Pointer to error structure. * @@ -2381,24 +2383,32 @@ mlx5_flow_validate_item_tcp(const struct rte_flow_item *item, * 0 on success, a negative errno value otherwise and rte_errno is set. */ int -mlx5_flow_validate_item_vxlan(const struct rte_flow_item *item, +mlx5_flow_validate_item_vxlan(struct rte_eth_dev *dev, + const struct rte_flow_item *item, uint64_t item_flags, + const struct rte_flow_attr *attr, struct rte_flow_error *error) { const struct rte_flow_item_vxlan *spec = item->spec; const struct rte_flow_item_vxlan *mask = item->mask; int ret; + struct mlx5_priv *priv = dev->data->dev_private; union vni { uint32_t vlan_id; uint8_t vni[4]; } id = { .vlan_id = 0, }; - + const struct rte_flow_item_vxlan nic_mask = { + .vni = "\xff\xff\xff", + .rsvd1 = 0xff, + }; + const struct rte_flow_item_vxlan *valid_mask; if (item_flags & MLX5_FLOW_LAYER_TUNNEL) return rte_flow_error_set(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ITEM, item, "multiple tunnel layers not" " supported"); + valid_mask = &rte_flow_item_vxlan_mask; /* * Verify only UDPv4 is present as defined in * https://tools.ietf.org/html/rfc7348 @@ -2409,9 +2419,15 @@ mlx5_flow_validate_item_vxlan(const struct rte_flow_item *item, "no outer UDP layer found"); if (!mask) mask = &rte_flow_item_vxlan_mask; + /* FDB domain & NIC domain non-zero group */ + if ((attr->transfer || attr->group) && priv->sh->misc5_cap) + valid_mask = &nic_mask; + /* Group zero in NIC domain */ + if (!attr->group && !attr->transfer && priv->sh->tunnel_header_0_1) + valid_mask = &nic_mask; ret = mlx5_flow_item_acceptable (item, (const uint8_t *)mask, - (const uint8_t *)&rte_flow_item_vxlan_mask, + (const uint8_t *)valid_mask, sizeof(struct rte_flow_item_vxlan), MLX5_ITEM_RANGE_NOT_ACCEPTED, error); if (ret < 0) diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h index 2f2aa962f9..3739dcc319 100644 --- a/drivers/net/mlx5/mlx5_flow.h +++ b/drivers/net/mlx5/mlx5_flow.h @@ -1521,8 +1521,10 @@ int mlx5_flow_validate_item_vlan(const struct rte_flow_item *item, uint64_t item_flags, struct rte_eth_dev *dev, struct rte_flow_error *error); -int mlx5_flow_validate_item_vxlan(const struct rte_flow_item *item, +int mlx5_flow_validate_item_vxlan(struct rte_eth_dev *dev, + const struct rte_flow_item *item, uint64_t item_flags, + const struct rte_flow_attr *attr, struct rte_flow_error *error); int mlx5_flow_validate_item_vxlan_gpe(const struct rte_flow_item *item, uint64_t item_flags, diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c index c50649a107..14500421ad 100644 --- a/drivers/net/mlx5/mlx5_flow_dv.c +++ b/drivers/net/mlx5/mlx5_flow_dv.c @@ -6861,7 +6861,8 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr, last_item = MLX5_FLOW_LAYER_GRE_KEY; break; case RTE_FLOW_ITEM_TYPE_VXLAN: - ret = mlx5_flow_validate_item_vxlan(items, item_flags, + ret = mlx5_flow_validate_item_vxlan(dev, items, + 
item_flags, attr, error); if (ret < 0) return ret; @@ -7820,15 +7821,7 @@ flow_dv_prepare(struct rte_eth_dev *dev, memset(dev_flow, 0, sizeof(*dev_flow)); dev_flow->handle = dev_handle; dev_flow->handle_idx = handle_idx; - /* - * In some old rdma-core releases, before continuing, a check of the - * length of matching parameter will be done at first. It needs to use - * the length without misc4 param. If the flow has misc4 support, then - * the length needs to be adjusted accordingly. Each param member is - * aligned with a 64B boundary naturally. - */ - dev_flow->dv.value.size = MLX5_ST_SZ_BYTES(fte_match_param) - - MLX5_ST_SZ_BYTES(fte_match_set_misc4); + dev_flow->dv.value.size = MLX5_ST_SZ_BYTES(fte_match_param); dev_flow->ingress = attr->ingress; dev_flow->dv.transfer = attr->transfer; return dev_flow; @@ -8609,6 +8602,10 @@ flow_dv_translate_item_nvgre(void *matcher, void *key, /** * Add VXLAN item to matcher and to the value. * + * @param[in] dev + * Pointer to the Ethernet device structure. + * @param[in] attr + * Flow rule attributes. * @param[in, out] matcher * Flow matcher. * @param[in, out] key @@ -8619,7 +8616,9 @@ flow_dv_translate_item_nvgre(void *matcher, void *key, * Item is inner pattern. */ static void -flow_dv_translate_item_vxlan(void *matcher, void *key, +flow_dv_translate_item_vxlan(struct rte_eth_dev *dev, + const struct rte_flow_attr *attr, + void *matcher, void *key, const struct rte_flow_item *item, int inner) { @@ -8627,14 +8626,25 @@ flow_dv_translate_item_vxlan(void *matcher, void *key, const struct rte_flow_item_vxlan *vxlan_v = item->spec; void *headers_m; void *headers_v; - void *misc_m = MLX5_ADDR_OF(fte_match_param, matcher, misc_parameters); - void *misc_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters); + void *misc_m; + void *misc_v; + void *misc5_m; + void *misc5_v; + uint32_t *tunnel_header_v; + uint32_t *tunnel_header_m; char *vni_m; char *vni_v; uint16_t dport; int size; int i; + struct mlx5_priv *priv = dev->data->dev_private; + const struct rte_flow_item_vxlan nic_mask = { + .vni = "\xff\xff\xff", + .rsvd1 = 0xff, + }; + misc5_m = MLX5_ADDR_OF(fte_match_param, matcher, misc_parameters_5); + misc5_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters_5); if (inner) { headers_m = MLX5_ADDR_OF(fte_match_param, matcher, inner_headers); @@ -8652,14 +8662,44 @@ flow_dv_translate_item_vxlan(void *matcher, void *key, } if (!vxlan_v) return; - if (!vxlan_m) - vxlan_m = &rte_flow_item_vxlan_mask; - size = sizeof(vxlan_m->vni); - vni_m = MLX5_ADDR_OF(fte_match_set_misc, misc_m, vxlan_vni); - vni_v = MLX5_ADDR_OF(fte_match_set_misc, misc_v, vxlan_vni); - memcpy(vni_m, vxlan_m->vni, size); - for (i = 0; i < size; ++i) - vni_v[i] = vni_m[i] & vxlan_v->vni[i]; + if (!vxlan_m) { + if ((!attr->group && !priv->sh->tunnel_header_0_1) || + (attr->group && !priv->sh->misc5_cap)) + vxlan_m = &rte_flow_item_vxlan_mask; + else + vxlan_m = &nic_mask; + } + if ((!attr->group && !attr->transfer && !priv->sh->tunnel_header_0_1) || + ((attr->group || attr->transfer) && !priv->sh->misc5_cap)) { + misc_m = MLX5_ADDR_OF(fte_match_param, + matcher, misc_parameters); + misc_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters); + size = sizeof(vxlan_m->vni); + vni_m = MLX5_ADDR_OF(fte_match_set_misc, misc_m, vxlan_vni); + vni_v = MLX5_ADDR_OF(fte_match_set_misc, misc_v, vxlan_vni); + memcpy(vni_m, vxlan_m->vni, size); + for (i = 0; i < size; ++i) + vni_v[i] = vni_m[i] & vxlan_v->vni[i]; + return; + } + tunnel_header_v = (uint32_t *)MLX5_ADDR_OF(fte_match_set_misc5, + 
misc5_v, + tunnel_header_1); + tunnel_header_m = (uint32_t *)MLX5_ADDR_OF(fte_match_set_misc5, + misc5_m, + tunnel_header_1); + *tunnel_header_v = (vxlan_v->vni[0] & vxlan_m->vni[0]) | + (vxlan_v->vni[1] & vxlan_m->vni[1]) << 8 | + (vxlan_v->vni[2] & vxlan_m->vni[2]) << 16; + if (*tunnel_header_v) + *tunnel_header_m = vxlan_m->vni[0] | + vxlan_m->vni[1] << 8 | + vxlan_m->vni[2] << 16; + else + *tunnel_header_m = 0x0; + *tunnel_header_v |= (vxlan_v->rsvd1 & vxlan_m->rsvd1) << 24; + if (vxlan_v->rsvd1 & vxlan_m->rsvd1) + *tunnel_header_m |= vxlan_m->rsvd1 << 24; } /** @@ -9834,9 +9874,32 @@ flow_dv_matcher_enable(uint32_t *match_criteria) match_criteria_enable |= (!HEADER_IS_ZERO(match_criteria, misc_parameters_4)) << MLX5_MATCH_CRITERIA_ENABLE_MISC4_BIT; + match_criteria_enable |= + (!HEADER_IS_ZERO(match_criteria, misc_parameters_5)) << + MLX5_MATCH_CRITERIA_ENABLE_MISC5_BIT; return match_criteria_enable; } +static void +__flow_dv_adjust_buf_size(size_t *size, uint8_t match_criteria) +{ + /* + * Check flow matching criteria first, subtract misc5/4 length if flow + * doesn't own misc5/4 parameters. In some old rdma-core releases, + * misc5/4 are not supported, and matcher creation failure is expected + * w/o subtration. If misc5 is provided, misc4 must be counted in since + * misc5 is right after misc4. + */ + if (!(match_criteria & (1 << MLX5_MATCH_CRITERIA_ENABLE_MISC5_BIT))) { + *size = MLX5_ST_SZ_BYTES(fte_match_param) - + MLX5_ST_SZ_BYTES(fte_match_set_misc5); + if (!(match_criteria & (1 << + MLX5_MATCH_CRITERIA_ENABLE_MISC4_BIT))) { + *size -= MLX5_ST_SZ_BYTES(fte_match_set_misc4); + } + } +} + struct mlx5_hlist_entry * flow_dv_tbl_create_cb(struct mlx5_hlist *list, uint64_t key64, void *cb_ctx) { @@ -10103,6 +10166,8 @@ flow_dv_matcher_create_cb(struct mlx5_cache_list *list, *cache = *ref; dv_attr.match_criteria_enable = flow_dv_matcher_enable(cache->mask.buf); + __flow_dv_adjust_buf_size(&ref->mask.size, + dv_attr.match_criteria_enable); dv_attr.priority = ref->priority; if (tbl->is_egress) dv_attr.flags |= IBV_FLOW_ATTR_FLAGS_EGRESS; @@ -10152,7 +10217,6 @@ flow_dv_matcher_register(struct rte_eth_dev *dev, .error = error, .data = ref, }; - /** * tunnel offload API requires this registration for cases when * tunnel match rule was inserted before tunnel set rule. @@ -12011,8 +12075,7 @@ flow_dv_translate(struct rte_eth_dev *dev, uint64_t action_flags = 0; struct mlx5_flow_dv_matcher matcher = { .mask = { - .size = sizeof(matcher.mask.buf) - - MLX5_ST_SZ_BYTES(fte_match_set_misc4), + .size = sizeof(matcher.mask.buf), }, }; int actions_n = 0; @@ -12819,7 +12882,8 @@ flow_dv_translate(struct rte_eth_dev *dev, last_item = MLX5_FLOW_LAYER_GRE; break; case RTE_FLOW_ITEM_TYPE_VXLAN: - flow_dv_translate_item_vxlan(match_mask, match_value, + flow_dv_translate_item_vxlan(dev, attr, + match_mask, match_value, items, tunnel); matcher.priority = MLX5_TUNNEL_PRIO_GET(rss_desc); last_item = MLX5_FLOW_LAYER_VXLAN; @@ -12917,10 +12981,6 @@ flow_dv_translate(struct rte_eth_dev *dev, NULL, "cannot create eCPRI parser"); } - /* Adjust the length matcher and device flow value. */ - matcher.mask.size = MLX5_ST_SZ_BYTES(fte_match_param); - dev_flow->dv.value.size = - MLX5_ST_SZ_BYTES(fte_match_param); flow_dv_translate_item_ecpri(dev, match_mask, match_value, items); /* No other protocol should follow eCPRI layer. 
*/ @@ -13221,6 +13281,7 @@ flow_dv_apply(struct rte_eth_dev *dev, struct rte_flow *flow, int idx; struct mlx5_flow_workspace *wks = mlx5_flow_get_thread_workspace(); struct mlx5_flow_rss_desc *rss_desc = &wks->rss_desc; + uint8_t misc_mask; MLX5_ASSERT(wks); for (idx = wks->flow_idx - 1; idx >= 0; idx--) { @@ -13291,6 +13352,8 @@ flow_dv_apply(struct rte_eth_dev *dev, struct rte_flow *flow, } dv->actions[n++] = priv->sh->default_miss_action; } + misc_mask = flow_dv_matcher_enable(dv->value.buf); + __flow_dv_adjust_buf_size(&dv->value.size, misc_mask); err = mlx5_flow_os_create_flow(dv_h->matcher->matcher_object, (void *)&dv->value, n, dv->actions, &dh->drv_flow); @@ -15339,14 +15402,13 @@ __flow_dv_create_policy_flow(struct rte_eth_dev *dev, { int ret; struct mlx5_flow_dv_match_params value = { - .size = sizeof(value.buf) - - MLX5_ST_SZ_BYTES(fte_match_set_misc4), + .size = sizeof(value.buf), }; struct mlx5_flow_dv_match_params matcher = { - .size = sizeof(matcher.buf) - - MLX5_ST_SZ_BYTES(fte_match_set_misc4), + .size = sizeof(matcher.buf), }; struct mlx5_priv *priv = dev->data->dev_private; + uint8_t misc_mask; if (!is_default_policy && (priv->representor || priv->master)) { if (flow_dv_translate_item_port_id(dev, matcher.buf, @@ -15360,6 +15422,8 @@ __flow_dv_create_policy_flow(struct rte_eth_dev *dev, (enum modify_reg)color_reg_c_idx, rte_col_2_mlx5_col(color), UINT32_MAX); + misc_mask = flow_dv_matcher_enable(value.buf); + __flow_dv_adjust_buf_size(&value.size, misc_mask); ret = mlx5_flow_os_create_flow(matcher_object, (void *)&value, actions_n, actions, rule); if (ret) { @@ -15382,14 +15446,12 @@ __flow_dv_create_policy_matcher(struct rte_eth_dev *dev, struct mlx5_flow_tbl_resource *tbl_rsc = sub_policy->tbl_rsc; struct mlx5_flow_dv_matcher matcher = { .mask = { - .size = sizeof(matcher.mask.buf) - - MLX5_ST_SZ_BYTES(fte_match_set_misc4), + .size = sizeof(matcher.mask.buf), }, .tbl = tbl_rsc, }; struct mlx5_flow_dv_match_params value = { - .size = sizeof(value.buf) - - MLX5_ST_SZ_BYTES(fte_match_set_misc4), + .size = sizeof(value.buf), }; struct mlx5_flow_cb_ctx ctx = { .error = error, @@ -15766,12 +15828,10 @@ flow_dv_create_mtr_tbls(struct rte_eth_dev *dev, int domain, ret, i; struct mlx5_flow_counter *cnt; struct mlx5_flow_dv_match_params value = { - .size = sizeof(value.buf) - - MLX5_ST_SZ_BYTES(fte_match_set_misc4), + .size = sizeof(value.buf), }; struct mlx5_flow_dv_match_params matcher_para = { - .size = sizeof(matcher_para.buf) - - MLX5_ST_SZ_BYTES(fte_match_set_misc4), + .size = sizeof(matcher_para.buf), }; int mtr_id_reg_c = mlx5_flow_get_reg_id(dev, MLX5_MTR_ID, 0, &error); @@ -15780,8 +15840,7 @@ flow_dv_create_mtr_tbls(struct rte_eth_dev *dev, struct mlx5_cache_entry *entry; struct mlx5_flow_dv_matcher matcher = { .mask = { - .size = sizeof(matcher.mask.buf) - - MLX5_ST_SZ_BYTES(fte_match_set_misc4), + .size = sizeof(matcher.mask.buf), }, }; struct mlx5_flow_dv_matcher *drop_matcher; @@ -15789,6 +15848,7 @@ flow_dv_create_mtr_tbls(struct rte_eth_dev *dev, .error = &error, .data = &matcher, }; + uint8_t misc_mask; if (!priv->mtr_en || mtr_id_reg_c < 0) { rte_errno = ENOTSUP; @@ -15838,6 +15898,8 @@ flow_dv_create_mtr_tbls(struct rte_eth_dev *dev, actions[i++] = priv->sh->dr_drop_action; flow_dv_match_meta_reg(matcher_para.buf, value.buf, (enum modify_reg)mtr_id_reg_c, 0, 0); + misc_mask = flow_dv_matcher_enable(value.buf); + __flow_dv_adjust_buf_size(&value.size, misc_mask); ret = mlx5_flow_os_create_flow (mtrmng->def_matcher[domain]->matcher_object, (void *)&value, i, 
actions, @@ -15881,6 +15943,8 @@ flow_dv_create_mtr_tbls(struct rte_eth_dev *dev, fm->drop_cnt, NULL); actions[i++] = cnt->action; actions[i++] = priv->sh->dr_drop_action; + misc_mask = flow_dv_matcher_enable(value.buf); + __flow_dv_adjust_buf_size(&value.size, misc_mask); ret = mlx5_flow_os_create_flow(drop_matcher->matcher_object, (void *)&value, i, actions, &fm->drop_rule[domain]); @@ -16161,10 +16225,12 @@ mlx5_flow_dv_discover_counter_offset_support(struct rte_eth_dev *dev) if (ret) goto err; dv_attr.match_criteria_enable = flow_dv_matcher_enable(mask.buf); + __flow_dv_adjust_buf_size(&mask.size, dv_attr.match_criteria_enable); ret = mlx5_flow_os_create_flow_matcher(sh->ctx, &dv_attr, tbl->obj, &matcher); if (ret) goto err; + __flow_dv_adjust_buf_size(&value.size, dv_attr.match_criteria_enable); ret = mlx5_flow_os_create_flow(matcher, (void *)&value, 1, actions, &flow); err: diff --git a/drivers/net/mlx5/mlx5_flow_verbs.c b/drivers/net/mlx5/mlx5_flow_verbs.c index fe9673310a..7b3d0b320d 100644 --- a/drivers/net/mlx5/mlx5_flow_verbs.c +++ b/drivers/net/mlx5/mlx5_flow_verbs.c @@ -1381,7 +1381,8 @@ flow_verbs_validate(struct rte_eth_dev *dev, MLX5_FLOW_LAYER_OUTER_L4_TCP; break; case RTE_FLOW_ITEM_TYPE_VXLAN: - ret = mlx5_flow_validate_item_vxlan(items, item_flags, + ret = mlx5_flow_validate_item_vxlan(dev, items, + item_flags, attr, error); if (ret < 0) return ret; diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_steer.c b/drivers/vdpa/mlx5/mlx5_vdpa_steer.c index 1fcd24c002..383f003966 100644 --- a/drivers/vdpa/mlx5/mlx5_vdpa_steer.c +++ b/drivers/vdpa/mlx5/mlx5_vdpa_steer.c @@ -140,11 +140,13 @@ mlx5_vdpa_rss_flows_create(struct mlx5_vdpa_priv *priv) /**< Matcher value. This value is used as the mask or a key. */ } matcher_mask = { .size = sizeof(matcher_mask.buf) - - MLX5_ST_SZ_BYTES(fte_match_set_misc4), + MLX5_ST_SZ_BYTES(fte_match_set_misc4) - + MLX5_ST_SZ_BYTES(fte_match_set_misc5), }, matcher_value = { .size = sizeof(matcher_value.buf) - - MLX5_ST_SZ_BYTES(fte_match_set_misc4), + MLX5_ST_SZ_BYTES(fte_match_set_misc4) - + MLX5_ST_SZ_BYTES(fte_match_set_misc5), }; struct mlx5dv_flow_matcher_attr dv_attr = { .type = IBV_FLOW_ATTR_NORMAL,

From patchwork Mon May 31 10:19:41 2021
X-Patchwork-Submitter: Rongwei Liu
X-Patchwork-Id: 93618
X-Patchwork-Delegate: rasland@nvidia.com
From: rongwei liu
To: , , , , Xiaoyun Li
CC: ,
Date: Mon, 31 May 2021 13:19:41 +0300
Message-ID: <20210531101941.60916-3-rongweil@nvidia.com>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20210531101941.60916-1-rongweil@nvidia.com>
References: <20210531101941.60916-1-rongweil@nvidia.com>
Subject: [dpdk-dev] [RFC 2/2] app/testpmd: support VXLAN last 8-bits field matching

Add a new testpmd pattern field 'last_rsvd' that supports matching the last 8 bits of the VXLAN header. Examples for the "last_rsvd" pattern field:

1. ...pattern eth / ipv4 / udp / vxlan last_rsvd is 0x80 / end ...
   This flow matches packets whose last 8 bits are exactly 0x80.

2. ...pattern eth / ipv4 / udp / vxlan last_rsvd spec 0x80 vxlan mask 0x80 / end ...
   This flow matches only the MSB of the last 8 bits, requiring it to be 1.
Signed-off-by: rongwei liu --- app/test-pmd/cmdline_flow.c | 9 +++++++++ app/test-pmd/util.c | 5 +++-- doc/guides/testpmd_app_ug/testpmd_funcs.rst | 1 + 3 files changed, 13 insertions(+), 2 deletions(-) diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c index 1c587bb7b8..6e76a625ca 100644 --- a/app/test-pmd/cmdline_flow.c +++ b/app/test-pmd/cmdline_flow.c @@ -207,6 +207,7 @@ enum index { ITEM_SCTP_CKSUM, ITEM_VXLAN, ITEM_VXLAN_VNI, + ITEM_VXLAN_LAST_RSVD, ITEM_E_TAG, ITEM_E_TAG_GRP_ECID_B, ITEM_NVGRE, @@ -1129,6 +1130,7 @@ static const enum index item_sctp[] = { static const enum index item_vxlan[] = { ITEM_VXLAN_VNI, + ITEM_VXLAN_LAST_RSVD, ITEM_NEXT, ZERO, }; @@ -2806,6 +2808,13 @@ static const struct token token_list[] = { .next = NEXT(item_vxlan, NEXT_ENTRY(UNSIGNED), item_param), .args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_vxlan, vni)), }, + [ITEM_VXLAN_LAST_RSVD] = { + .name = "last_rsvd", + .help = "VXLAN last reserved bits", + .next = NEXT(item_vxlan, NEXT_ENTRY(UNSIGNED), item_param), + .args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_vxlan, + rsvd1)), + }, [ITEM_E_TAG] = { .name = "e_tag", .help = "match E-Tag header", diff --git a/app/test-pmd/util.c b/app/test-pmd/util.c index a9e431a8b2..59626518d5 100644 --- a/app/test-pmd/util.c +++ b/app/test-pmd/util.c @@ -266,8 +266,9 @@ dump_pkt_burst(uint16_t port_id, uint16_t queue, struct rte_mbuf *pkts[], vx_vni = rte_be_to_cpu_32(vxlan_hdr->vx_vni); MKDUMPSTR(print_buf, buf_size, cur_len, " - VXLAN packet: packet type =%d, " - "Destination UDP port =%d, VNI = %d", - packet_type, udp_port, vx_vni >> 8); + "Destination UDP port =%d, VNI = %d, " + "last_rsvd = %d", packet_type, + udp_port, vx_vni >> 8, vx_vni & 0xff); } } MKDUMPSTR(print_buf, buf_size, cur_len, diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst index 33857acf54..4ca3103067 100644 --- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst +++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst @@ -3694,6 +3694,7 @@ This section lists supported pattern items and their attributes, if any. - ``vxlan``: match VXLAN header. - ``vni {unsigned}``: VXLAN identifier. + - ``last_rsvd {unsigned}``: VXLAN last reserved 8-bits. - ``e_tag``: match IEEE 802.1BR E-Tag header.
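
As a usage illustration beyond testpmd, the snippet below is a minimal sketch of how an application could request the same match through the rte_flow API once this series is applied: it steers VXLAN packets whose last reserved byte equals 0x80 to a given Rx queue. The helper name create_vxlan_last_rsvd_rule, the port id, the queue index, and the 0x80 value/mask are illustrative placeholders only; rsvd1 is the existing field of struct rte_flow_item_vxlan that this series reuses for the last 8 bits.

#include <rte_flow.h>
#include <rte_errno.h>

/* Sketch only: match VXLAN packets whose last reserved byte is 0x80 and
 * steer them to Rx queue 1. Port id, queue index and value/mask are
 * placeholders; error handling is reduced to returning -rte_errno.
 */
static int
create_vxlan_last_rsvd_rule(uint16_t port_id, struct rte_flow_error *error)
{
	struct rte_flow_attr attr = { .ingress = 1 };
	struct rte_flow_item_vxlan vxlan_spec = { .rsvd1 = 0x80 };
	struct rte_flow_item_vxlan vxlan_mask = { .rsvd1 = 0xff };
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH },
		{ .type = RTE_FLOW_ITEM_TYPE_IPV4 },
		{ .type = RTE_FLOW_ITEM_TYPE_UDP },
		{ .type = RTE_FLOW_ITEM_TYPE_VXLAN,
		  .spec = &vxlan_spec, .mask = &vxlan_mask },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct rte_flow_action_queue queue = { .index = 1 };
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};
	struct rte_flow *flow;

	flow = rte_flow_create(port_id, &attr, pattern, actions, error);
	return flow ? 0 : -rte_errno;
}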