From patchwork Fri Feb 3 16:48:48 2023
X-Patchwork-Submitter: Ferruh Yigit
X-Patchwork-Id: 123041
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Ferruh Yigit
To: Thomas Monjalon, Wisam Jaddo, Ori Kam, Aman Singh, Yuying Zhang,
 Ajit Khaparde, Somnath Kotur, Chas Williams, "Min Hu (Connor)",
 Rahul Lakkireddy, Hemant Agrawal, Sachin Saxena, Simei Su, Wenjun Wu,
 John Daley, Hyong Youb Kim, Ziyang Xuan, Xiaoyun Wang, Guoyang Zhou,
 Dongdong Liu, Yisen Zhuang, Beilei Xing, Jingjing Wu, Qiming Yang,
 Qi Zhang, Junfeng Guo, Rosen Xu, Matan Azrad, Viacheslav Ovsiienko,
 Liron Himi, Chaoyong He, Niklas Söderlund, Andrew Rybchenko,
 Jiawen Wu, Jian Wang
CC: David Marchand
Subject: [PATCH v7 1/7] ethdev: use Ethernet protocol struct for flow matching
Date: Fri, 3 Feb 2023 16:48:48 +0000
Message-ID: <20230203164854.602595-2-ferruh.yigit@amd.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230203164854.602595-1-ferruh.yigit@amd.com>
References: <20221025214410.715864-1-thomas@monjalon.net>
 <20230203164854.602595-1-ferruh.yigit@amd.com>
X-BeenThere: dev@dpdk.org
List-Id: DPDK patches and discussions

From: Thomas Monjalon

As announced in the deprecation notice, flow item structures should
re-use the protocol header definitions from the directory lib/net/.

The Ethernet header (including VLAN) structures are used instead
of the redundant fields in the flow items.

The remaining protocols to clean up are listed for future work
in the deprecation list. Some protocols are not even defined
in lib/net/ yet.

Signed-off-by: Thomas Monjalon
Acked-by: Ferruh Yigit
Reviewed-by: Niklas Söderlund
Acked-by: Ori Kam
Acked-by: Andrew Rybchenko
---
 app/test-flow-perf/items_gen.c           |   4 +-
 app/test-pmd/cmdline_flow.c              | 140 +++++++++++------
 doc/guides/prog_guide/rte_flow.rst       |   7 +-
 doc/guides/rel_notes/deprecation.rst     |   2 +
 drivers/net/bnxt/bnxt_flow.c             |  42 +++----
 drivers/net/bnxt/tf_ulp/ulp_rte_parser.c |  58 +++++-----
 drivers/net/bonding/rte_eth_bond_pmd.c   |  12 +-
 drivers/net/cxgbe/cxgbe_flow.c           |  44 +++----
 drivers/net/dpaa2/dpaa2_flow.c           |  48 ++++----
 drivers/net/dpaa2/dpaa2_mux.c            |   2 +-
 drivers/net/e1000/igb_flow.c             |  14 +--
 drivers/net/enic/enic_flow.c             |  24 ++--
 drivers/net/enic/enic_fm_flow.c          |  16 +--
 drivers/net/hinic/hinic_pmd_flow.c       |  14 +--
 drivers/net/hns3/hns3_flow.c             |  28 ++---
 drivers/net/i40e/i40e_flow.c             | 100 ++++++++--------
 drivers/net/i40e/i40e_hash.c             |   4 +-
 drivers/net/iavf/iavf_fdir.c             |  10 +-
 drivers/net/iavf/iavf_fsub.c             |  10 +-
 drivers/net/iavf/iavf_ipsec_crypto.c     |   4 +-
 drivers/net/ice/ice_acl_filter.c         |  20 ++--
 drivers/net/ice/ice_fdir_filter.c        |  14 +--
 drivers/net/ice/ice_switch_filter.c      |  34 +++---
 drivers/net/igc/igc_flow.c               |   8 +-
 drivers/net/ipn3ke/ipn3ke_flow.c         |   8 +-
 drivers/net/ixgbe/ixgbe_flow.c           |  40 +++----
 drivers/net/mlx4/mlx4_flow.c             |  38 +++---
 drivers/net/mlx5/hws/mlx5dr_definer.c    |  26 ++---
 drivers/net/mlx5/mlx5_flow.c             |  24 ++--
 drivers/net/mlx5/mlx5_flow_dv.c          |  94 +++++++--------
 drivers/net/mlx5/mlx5_flow_hw.c          |  80 ++++++-------
 drivers/net/mlx5/mlx5_flow_verbs.c       |  30 ++---
 drivers/net/mlx5/mlx5_trigger.c          |  28 ++---
 drivers/net/mvpp2/mrvl_flow.c            |  28 ++---
 drivers/net/nfp/nfp_flow.c               |  12 +-
 drivers/net/sfc/sfc_flow.c               |  46 ++++----
 drivers/net/sfc/sfc_mae.c                |  38 +++---
 drivers/net/tap/tap_flow.c               |  58 +++++-----
 drivers/net/txgbe/txgbe_flow.c           |  28 ++---
 39 files changed, 618 insertions(+), 619 deletions(-)

diff --git a/app/test-flow-perf/items_gen.c b/app/test-flow-perf/items_gen.c
index a73de9031f54..b7f51030a119 100644
--- a/app/test-flow-perf/items_gen.c
+++ b/app/test-flow-perf/items_gen.c
@@ -37,10 +37,10 @@ add_vlan(struct rte_flow_item *items,
 	__rte_unused struct additional_para para)
 {
 	static struct rte_flow_item_vlan vlan_spec = {
-		.tci = RTE_BE16(VLAN_VALUE),
+		.hdr.vlan_tci = RTE_BE16(VLAN_VALUE),
 	};
 	static struct rte_flow_item_vlan vlan_mask = {
-		.tci = RTE_BE16(0xffff),
+		.hdr.vlan_tci = RTE_BE16(0xffff),
 	};
 	items[items_counter].type = RTE_FLOW_ITEM_TYPE_VLAN;
diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index 88108498e0c3..694a7eb647c5 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -3633,19 +3633,19 @@ static const struct token token_list[] = {
 		.name = "dst",
 		.help = "destination MAC",
 		.next = NEXT(item_eth, NEXT_ENTRY(COMMON_MAC_ADDR), item_param),
-		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_eth, dst)),
+		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_eth, hdr.dst_addr)),
 	},
 	[ITEM_ETH_SRC] = {
 		.name = "src",
 		.help = "source MAC",
 		.next = NEXT(item_eth, NEXT_ENTRY(COMMON_MAC_ADDR), item_param),
-		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_eth, src)),
+		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_eth, hdr.src_addr)),
 	},
 	[ITEM_ETH_TYPE] = {
 		.name = "type",
 		.help
= "EtherType", .next = NEXT(item_eth, NEXT_ENTRY(COMMON_UNSIGNED), item_param), - .args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_eth, type)), + .args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_eth, hdr.ether_type)), }, [ITEM_ETH_HAS_VLAN] = { .name = "has_vlan", @@ -3666,7 +3666,7 @@ static const struct token token_list[] = { .help = "tag control information", .next = NEXT(item_vlan, NEXT_ENTRY(COMMON_UNSIGNED), item_param), - .args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_vlan, tci)), + .args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_vlan, hdr.vlan_tci)), }, [ITEM_VLAN_PCP] = { .name = "pcp", @@ -3674,7 +3674,7 @@ static const struct token token_list[] = { .next = NEXT(item_vlan, NEXT_ENTRY(COMMON_UNSIGNED), item_param), .args = ARGS(ARGS_ENTRY_MASK_HTON(struct rte_flow_item_vlan, - tci, "\xe0\x00")), + hdr.vlan_tci, "\xe0\x00")), }, [ITEM_VLAN_DEI] = { .name = "dei", @@ -3682,7 +3682,7 @@ static const struct token token_list[] = { .next = NEXT(item_vlan, NEXT_ENTRY(COMMON_UNSIGNED), item_param), .args = ARGS(ARGS_ENTRY_MASK_HTON(struct rte_flow_item_vlan, - tci, "\x10\x00")), + hdr.vlan_tci, "\x10\x00")), }, [ITEM_VLAN_VID] = { .name = "vid", @@ -3690,7 +3690,7 @@ static const struct token token_list[] = { .next = NEXT(item_vlan, NEXT_ENTRY(COMMON_UNSIGNED), item_param), .args = ARGS(ARGS_ENTRY_MASK_HTON(struct rte_flow_item_vlan, - tci, "\x0f\xff")), + hdr.vlan_tci, "\x0f\xff")), }, [ITEM_VLAN_INNER_TYPE] = { .name = "inner_type", @@ -3698,7 +3698,7 @@ static const struct token token_list[] = { .next = NEXT(item_vlan, NEXT_ENTRY(COMMON_UNSIGNED), item_param), .args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_vlan, - inner_type)), + hdr.eth_proto)), }, [ITEM_VLAN_HAS_MORE_VLAN] = { .name = "has_more_vlan", @@ -7487,10 +7487,10 @@ parse_setup_vxlan_encap_data(struct action_vxlan_encap_data *action_vxlan_encap_ .type = RTE_FLOW_ITEM_TYPE_END, }, }, - .item_eth.type = 0, + .item_eth.hdr.ether_type = 0, .item_vlan = { - .tci = vxlan_encap_conf.vlan_tci, - 
.inner_type = 0, + .hdr.vlan_tci = vxlan_encap_conf.vlan_tci, + .hdr.eth_proto = 0, }, .item_ipv4.hdr = { .src_addr = vxlan_encap_conf.ipv4_src, @@ -7502,9 +7502,9 @@ parse_setup_vxlan_encap_data(struct action_vxlan_encap_data *action_vxlan_encap_ }, .item_vxlan.flags = 0, }; - memcpy(action_vxlan_encap_data->item_eth.dst.addr_bytes, + memcpy(action_vxlan_encap_data->item_eth.hdr.dst_addr.addr_bytes, vxlan_encap_conf.eth_dst, RTE_ETHER_ADDR_LEN); - memcpy(action_vxlan_encap_data->item_eth.src.addr_bytes, + memcpy(action_vxlan_encap_data->item_eth.hdr.src_addr.addr_bytes, vxlan_encap_conf.eth_src, RTE_ETHER_ADDR_LEN); if (!vxlan_encap_conf.select_ipv4) { memcpy(&action_vxlan_encap_data->item_ipv6.hdr.src_addr, @@ -7622,10 +7622,10 @@ parse_setup_nvgre_encap_data(struct action_nvgre_encap_data *action_nvgre_encap_ .type = RTE_FLOW_ITEM_TYPE_END, }, }, - .item_eth.type = 0, + .item_eth.hdr.ether_type = 0, .item_vlan = { - .tci = nvgre_encap_conf.vlan_tci, - .inner_type = 0, + .hdr.vlan_tci = nvgre_encap_conf.vlan_tci, + .hdr.eth_proto = 0, }, .item_ipv4.hdr = { .src_addr = nvgre_encap_conf.ipv4_src, @@ -7635,9 +7635,9 @@ parse_setup_nvgre_encap_data(struct action_nvgre_encap_data *action_nvgre_encap_ .item_nvgre.protocol = RTE_BE16(RTE_ETHER_TYPE_TEB), .item_nvgre.flow_id = 0, }; - memcpy(action_nvgre_encap_data->item_eth.dst.addr_bytes, + memcpy(action_nvgre_encap_data->item_eth.hdr.dst_addr.addr_bytes, nvgre_encap_conf.eth_dst, RTE_ETHER_ADDR_LEN); - memcpy(action_nvgre_encap_data->item_eth.src.addr_bytes, + memcpy(action_nvgre_encap_data->item_eth.hdr.src_addr.addr_bytes, nvgre_encap_conf.eth_src, RTE_ETHER_ADDR_LEN); if (!nvgre_encap_conf.select_ipv4) { memcpy(&action_nvgre_encap_data->item_ipv6.hdr.src_addr, @@ -7698,10 +7698,10 @@ parse_vc_action_l2_encap(struct context *ctx, const struct token *token, struct buffer *out = buf; struct rte_flow_action *action; struct action_raw_encap_data *action_encap_data; - struct rte_flow_item_eth eth = { .type = 0, }; + 
struct rte_flow_item_eth eth = { .hdr.ether_type = 0, }; struct rte_flow_item_vlan vlan = { - .tci = mplsoudp_encap_conf.vlan_tci, - .inner_type = 0, + .hdr.vlan_tci = mplsoudp_encap_conf.vlan_tci, + .hdr.eth_proto = 0, }; uint8_t *header; int ret; @@ -7728,22 +7728,22 @@ parse_vc_action_l2_encap(struct context *ctx, const struct token *token, }; header = action_encap_data->data; if (l2_encap_conf.select_vlan) - eth.type = rte_cpu_to_be_16(RTE_ETHER_TYPE_VLAN); + eth.hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_VLAN); else if (l2_encap_conf.select_ipv4) - eth.type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4); + eth.hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4); else - eth.type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6); - memcpy(eth.dst.addr_bytes, + eth.hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6); + memcpy(eth.hdr.dst_addr.addr_bytes, l2_encap_conf.eth_dst, RTE_ETHER_ADDR_LEN); - memcpy(eth.src.addr_bytes, + memcpy(eth.hdr.src_addr.addr_bytes, l2_encap_conf.eth_src, RTE_ETHER_ADDR_LEN); memcpy(header, ð, sizeof(eth)); header += sizeof(eth); if (l2_encap_conf.select_vlan) { if (l2_encap_conf.select_ipv4) - vlan.inner_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4); + vlan.hdr.eth_proto = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4); else - vlan.inner_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6); + vlan.hdr.eth_proto = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6); memcpy(header, &vlan, sizeof(vlan)); header += sizeof(vlan); } @@ -7762,10 +7762,10 @@ parse_vc_action_l2_decap(struct context *ctx, const struct token *token, struct buffer *out = buf; struct rte_flow_action *action; struct action_raw_decap_data *action_decap_data; - struct rte_flow_item_eth eth = { .type = 0, }; + struct rte_flow_item_eth eth = { .hdr.ether_type = 0, }; struct rte_flow_item_vlan vlan = { - .tci = mplsoudp_encap_conf.vlan_tci, - .inner_type = 0, + .hdr.vlan_tci = mplsoudp_encap_conf.vlan_tci, + .hdr.eth_proto = 0, }; uint8_t *header; int ret; @@ -7792,7 +7792,7 @@ 
parse_vc_action_l2_decap(struct context *ctx, const struct token *token, }; header = action_decap_data->data; if (l2_decap_conf.select_vlan) - eth.type = rte_cpu_to_be_16(RTE_ETHER_TYPE_VLAN); + eth.hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_VLAN); memcpy(header, ð, sizeof(eth)); header += sizeof(eth); if (l2_decap_conf.select_vlan) { @@ -7816,10 +7816,10 @@ parse_vc_action_mplsogre_encap(struct context *ctx, const struct token *token, struct buffer *out = buf; struct rte_flow_action *action; struct action_raw_encap_data *action_encap_data; - struct rte_flow_item_eth eth = { .type = 0, }; + struct rte_flow_item_eth eth = { .hdr.ether_type = 0, }; struct rte_flow_item_vlan vlan = { - .tci = mplsogre_encap_conf.vlan_tci, - .inner_type = 0, + .hdr.vlan_tci = mplsogre_encap_conf.vlan_tci, + .hdr.eth_proto = 0, }; struct rte_flow_item_ipv4 ipv4 = { .hdr = { @@ -7868,22 +7868,22 @@ parse_vc_action_mplsogre_encap(struct context *ctx, const struct token *token, }; header = action_encap_data->data; if (mplsogre_encap_conf.select_vlan) - eth.type = rte_cpu_to_be_16(RTE_ETHER_TYPE_VLAN); + eth.hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_VLAN); else if (mplsogre_encap_conf.select_ipv4) - eth.type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4); + eth.hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4); else - eth.type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6); - memcpy(eth.dst.addr_bytes, + eth.hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6); + memcpy(eth.hdr.dst_addr.addr_bytes, mplsogre_encap_conf.eth_dst, RTE_ETHER_ADDR_LEN); - memcpy(eth.src.addr_bytes, + memcpy(eth.hdr.src_addr.addr_bytes, mplsogre_encap_conf.eth_src, RTE_ETHER_ADDR_LEN); memcpy(header, ð, sizeof(eth)); header += sizeof(eth); if (mplsogre_encap_conf.select_vlan) { if (mplsogre_encap_conf.select_ipv4) - vlan.inner_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4); + vlan.hdr.eth_proto = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4); else - vlan.inner_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6); + 
vlan.hdr.eth_proto = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6); memcpy(header, &vlan, sizeof(vlan)); header += sizeof(vlan); } @@ -7922,8 +7922,8 @@ parse_vc_action_mplsogre_decap(struct context *ctx, const struct token *token, struct buffer *out = buf; struct rte_flow_action *action; struct action_raw_decap_data *action_decap_data; - struct rte_flow_item_eth eth = { .type = 0, }; - struct rte_flow_item_vlan vlan = {.tci = 0}; + struct rte_flow_item_eth eth = { .hdr.ether_type = 0, }; + struct rte_flow_item_vlan vlan = {.hdr.vlan_tci = 0}; struct rte_flow_item_ipv4 ipv4 = { .hdr = { .next_proto_id = IPPROTO_GRE, @@ -7963,22 +7963,22 @@ parse_vc_action_mplsogre_decap(struct context *ctx, const struct token *token, }; header = action_decap_data->data; if (mplsogre_decap_conf.select_vlan) - eth.type = rte_cpu_to_be_16(RTE_ETHER_TYPE_VLAN); + eth.hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_VLAN); else if (mplsogre_encap_conf.select_ipv4) - eth.type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4); + eth.hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4); else - eth.type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6); - memcpy(eth.dst.addr_bytes, + eth.hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6); + memcpy(eth.hdr.dst_addr.addr_bytes, mplsogre_encap_conf.eth_dst, RTE_ETHER_ADDR_LEN); - memcpy(eth.src.addr_bytes, + memcpy(eth.hdr.src_addr.addr_bytes, mplsogre_encap_conf.eth_src, RTE_ETHER_ADDR_LEN); memcpy(header, ð, sizeof(eth)); header += sizeof(eth); if (mplsogre_encap_conf.select_vlan) { if (mplsogre_encap_conf.select_ipv4) - vlan.inner_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4); + vlan.hdr.eth_proto = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4); else - vlan.inner_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6); + vlan.hdr.eth_proto = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6); memcpy(header, &vlan, sizeof(vlan)); header += sizeof(vlan); } @@ -8009,10 +8009,10 @@ parse_vc_action_mplsoudp_encap(struct context *ctx, const struct token *token, struct buffer *out = buf; struct 
rte_flow_action *action; struct action_raw_encap_data *action_encap_data; - struct rte_flow_item_eth eth = { .type = 0, }; + struct rte_flow_item_eth eth = { .hdr.ether_type = 0, }; struct rte_flow_item_vlan vlan = { - .tci = mplsoudp_encap_conf.vlan_tci, - .inner_type = 0, + .hdr.vlan_tci = mplsoudp_encap_conf.vlan_tci, + .hdr.eth_proto = 0, }; struct rte_flow_item_ipv4 ipv4 = { .hdr = { @@ -8062,22 +8062,22 @@ parse_vc_action_mplsoudp_encap(struct context *ctx, const struct token *token, }; header = action_encap_data->data; if (mplsoudp_encap_conf.select_vlan) - eth.type = rte_cpu_to_be_16(RTE_ETHER_TYPE_VLAN); + eth.hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_VLAN); else if (mplsoudp_encap_conf.select_ipv4) - eth.type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4); + eth.hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4); else - eth.type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6); - memcpy(eth.dst.addr_bytes, + eth.hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6); + memcpy(eth.hdr.dst_addr.addr_bytes, mplsoudp_encap_conf.eth_dst, RTE_ETHER_ADDR_LEN); - memcpy(eth.src.addr_bytes, + memcpy(eth.hdr.src_addr.addr_bytes, mplsoudp_encap_conf.eth_src, RTE_ETHER_ADDR_LEN); memcpy(header, ð, sizeof(eth)); header += sizeof(eth); if (mplsoudp_encap_conf.select_vlan) { if (mplsoudp_encap_conf.select_ipv4) - vlan.inner_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4); + vlan.hdr.eth_proto = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4); else - vlan.inner_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6); + vlan.hdr.eth_proto = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6); memcpy(header, &vlan, sizeof(vlan)); header += sizeof(vlan); } @@ -8116,8 +8116,8 @@ parse_vc_action_mplsoudp_decap(struct context *ctx, const struct token *token, struct buffer *out = buf; struct rte_flow_action *action; struct action_raw_decap_data *action_decap_data; - struct rte_flow_item_eth eth = { .type = 0, }; - struct rte_flow_item_vlan vlan = {.tci = 0}; + struct rte_flow_item_eth eth = { .hdr.ether_type = 0, }; + 
struct rte_flow_item_vlan vlan = {.hdr.vlan_tci = 0}; struct rte_flow_item_ipv4 ipv4 = { .hdr = { .next_proto_id = IPPROTO_UDP, @@ -8159,22 +8159,22 @@ parse_vc_action_mplsoudp_decap(struct context *ctx, const struct token *token, }; header = action_decap_data->data; if (mplsoudp_decap_conf.select_vlan) - eth.type = rte_cpu_to_be_16(RTE_ETHER_TYPE_VLAN); + eth.hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_VLAN); else if (mplsoudp_encap_conf.select_ipv4) - eth.type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4); + eth.hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4); else - eth.type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6); - memcpy(eth.dst.addr_bytes, + eth.hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6); + memcpy(eth.hdr.dst_addr.addr_bytes, mplsoudp_encap_conf.eth_dst, RTE_ETHER_ADDR_LEN); - memcpy(eth.src.addr_bytes, + memcpy(eth.hdr.src_addr.addr_bytes, mplsoudp_encap_conf.eth_src, RTE_ETHER_ADDR_LEN); memcpy(header, ð, sizeof(eth)); header += sizeof(eth); if (mplsoudp_encap_conf.select_vlan) { if (mplsoudp_encap_conf.select_ipv4) - vlan.inner_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4); + vlan.hdr.eth_proto = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4); else - vlan.inner_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6); + vlan.hdr.eth_proto = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6); memcpy(header, &vlan, sizeof(vlan)); header += sizeof(vlan); } diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst index 3e6242803dc0..27c3780c4f17 100644 --- a/doc/guides/prog_guide/rte_flow.rst +++ b/doc/guides/prog_guide/rte_flow.rst @@ -840,9 +840,7 @@ instead of using the ``type`` field. If the ``type`` and ``has_vlan`` fields are not specified, then both tagged and untagged packets will match the pattern. -- ``dst``: destination MAC. -- ``src``: source MAC. -- ``type``: EtherType or TPID. +- ``hdr``: header definition (``rte_ether.h``). - ``has_vlan``: packet header contains at least one VLAN. 
- Default ``mask`` matches destination and source addresses only. @@ -861,8 +859,7 @@ instead of using the ``inner_type field``. If the ``inner_type`` and ``has_more_vlan`` fields are not specified, then any tagged packets will match the pattern. -- ``tci``: tag control information. -- ``inner_type``: inner EtherType or TPID. +- ``hdr``: header definition (``rte_ether.h``). - ``has_more_vlan``: packet header contains at least one more VLAN, after this VLAN. - Default ``mask`` matches the VID part of TCI only (lower 12 bits). diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst index eea99b454005..4782d2e680d3 100644 --- a/doc/guides/rel_notes/deprecation.rst +++ b/doc/guides/rel_notes/deprecation.rst @@ -63,6 +63,8 @@ Deprecation Notices should start with relevant protocol header structure from lib/net/. The individual protocol header fields and the protocol header struct may be kept together in a union as a first migration step. + In future (target is DPDK 23.11), the protocol header fields will be cleaned + and only protocol header struct will remain. These items are not compliant (not including struct from lib/net/): diff --git a/drivers/net/bnxt/bnxt_flow.c b/drivers/net/bnxt/bnxt_flow.c index 96ef00460cf5..8f660493402c 100644 --- a/drivers/net/bnxt/bnxt_flow.c +++ b/drivers/net/bnxt/bnxt_flow.c @@ -199,10 +199,10 @@ bnxt_validate_and_parse_flow_type(const struct rte_flow_attr *attr, * Destination MAC address mask must not be partially * set. Should be all 1's or all 0's. 
*/ - if ((!rte_is_zero_ether_addr(ð_mask->src) && - !rte_is_broadcast_ether_addr(ð_mask->src)) || - (!rte_is_zero_ether_addr(ð_mask->dst) && - !rte_is_broadcast_ether_addr(ð_mask->dst))) { + if ((!rte_is_zero_ether_addr(ð_mask->hdr.src_addr) && + !rte_is_broadcast_ether_addr(ð_mask->hdr.src_addr)) || + (!rte_is_zero_ether_addr(ð_mask->hdr.dst_addr) && + !rte_is_broadcast_ether_addr(ð_mask->hdr.dst_addr))) { rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, @@ -212,8 +212,8 @@ bnxt_validate_and_parse_flow_type(const struct rte_flow_attr *attr, } /* Mask is not allowed. Only exact matches are */ - if (eth_mask->type && - eth_mask->type != RTE_BE16(0xffff)) { + if (eth_mask->hdr.ether_type && + eth_mask->hdr.ether_type != RTE_BE16(0xffff)) { rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, item, @@ -221,8 +221,8 @@ bnxt_validate_and_parse_flow_type(const struct rte_flow_attr *attr, return -rte_errno; } - if (rte_is_broadcast_ether_addr(ð_mask->dst)) { - dst = ð_spec->dst; + if (rte_is_broadcast_ether_addr(ð_mask->hdr.dst_addr)) { + dst = ð_spec->hdr.dst_addr; if (!rte_is_valid_assigned_ether_addr(dst)) { rte_flow_error_set(error, EINVAL, @@ -234,7 +234,7 @@ bnxt_validate_and_parse_flow_type(const struct rte_flow_attr *attr, return -rte_errno; } rte_memcpy(filter->dst_macaddr, - ð_spec->dst, RTE_ETHER_ADDR_LEN); + ð_spec->hdr.dst_addr, RTE_ETHER_ADDR_LEN); en |= use_ntuple ? 
NTUPLE_FLTR_ALLOC_INPUT_EN_DST_MACADDR : EM_FLOW_ALLOC_INPUT_EN_DST_MACADDR; @@ -245,8 +245,8 @@ bnxt_validate_and_parse_flow_type(const struct rte_flow_attr *attr, PMD_DRV_LOG(DEBUG, "Creating a priority flow\n"); } - if (rte_is_broadcast_ether_addr(ð_mask->src)) { - src = ð_spec->src; + if (rte_is_broadcast_ether_addr(ð_mask->hdr.src_addr)) { + src = ð_spec->hdr.src_addr; if (!rte_is_valid_assigned_ether_addr(src)) { rte_flow_error_set(error, EINVAL, @@ -258,7 +258,7 @@ bnxt_validate_and_parse_flow_type(const struct rte_flow_attr *attr, return -rte_errno; } rte_memcpy(filter->src_macaddr, - ð_spec->src, RTE_ETHER_ADDR_LEN); + ð_spec->hdr.src_addr, RTE_ETHER_ADDR_LEN); en |= use_ntuple ? NTUPLE_FLTR_ALLOC_INPUT_EN_SRC_MACADDR : EM_FLOW_ALLOC_INPUT_EN_SRC_MACADDR; @@ -270,9 +270,9 @@ bnxt_validate_and_parse_flow_type(const struct rte_flow_attr *attr, * PMD_DRV_LOG(ERR, "Handle this condition\n"); * } */ - if (eth_mask->type) { + if (eth_mask->hdr.ether_type) { filter->ethertype = - rte_be_to_cpu_16(eth_spec->type); + rte_be_to_cpu_16(eth_spec->hdr.ether_type); en |= en_ethertype; } if (inner) @@ -295,11 +295,11 @@ bnxt_validate_and_parse_flow_type(const struct rte_flow_attr *attr, " supported"); return -rte_errno; } - if (vlan_mask->tci && - vlan_mask->tci == RTE_BE16(0x0fff)) { + if (vlan_mask->hdr.vlan_tci && + vlan_mask->hdr.vlan_tci == RTE_BE16(0x0fff)) { /* Only the VLAN ID can be matched. 
*/ filter->l2_ovlan = - rte_be_to_cpu_16(vlan_spec->tci & + rte_be_to_cpu_16(vlan_spec->hdr.vlan_tci & RTE_BE16(0x0fff)); en |= EM_FLOW_ALLOC_INPUT_EN_OVLAN_VID; } else { @@ -310,8 +310,8 @@ bnxt_validate_and_parse_flow_type(const struct rte_flow_attr *attr, "VLAN mask is invalid"); return -rte_errno; } - if (vlan_mask->inner_type && - vlan_mask->inner_type != RTE_BE16(0xffff)) { + if (vlan_mask->hdr.eth_proto && + vlan_mask->hdr.eth_proto != RTE_BE16(0xffff)) { rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, item, @@ -319,9 +319,9 @@ bnxt_validate_and_parse_flow_type(const struct rte_flow_attr *attr, " valid"); return -rte_errno; } - if (vlan_mask->inner_type) { + if (vlan_mask->hdr.eth_proto) { filter->ethertype = - rte_be_to_cpu_16(vlan_spec->inner_type); + rte_be_to_cpu_16(vlan_spec->hdr.eth_proto); en |= en_ethertype; } diff --git a/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c index 1be649a16c49..2928598ced55 100644 --- a/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c +++ b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c @@ -627,13 +627,13 @@ ulp_rte_eth_hdr_handler(const struct rte_flow_item *item, /* Perform validations */ if (eth_spec) { /* Todo: work around to avoid multicast and broadcast addr */ - if (ulp_rte_parser_is_bcmc_addr(ð_spec->dst)) + if (ulp_rte_parser_is_bcmc_addr(ð_spec->hdr.dst_addr)) return BNXT_TF_RC_PARSE_ERR; - if (ulp_rte_parser_is_bcmc_addr(ð_spec->src)) + if (ulp_rte_parser_is_bcmc_addr(ð_spec->hdr.src_addr)) return BNXT_TF_RC_PARSE_ERR; - eth_type = eth_spec->type; + eth_type = eth_spec->hdr.ether_type; } if (ulp_rte_prsr_fld_size_validate(params, &idx, @@ -646,22 +646,22 @@ ulp_rte_eth_hdr_handler(const struct rte_flow_item *item, * header fields */ dmac_idx = idx; - size = sizeof(((struct rte_flow_item_eth *)NULL)->dst.addr_bytes); + size = sizeof(((struct rte_flow_item_eth *)NULL)->hdr.dst_addr.addr_bytes); ulp_rte_prsr_fld_mask(params, &idx, size, - ulp_deference_struct(eth_spec, 
dst.addr_bytes), - ulp_deference_struct(eth_mask, dst.addr_bytes), + ulp_deference_struct(eth_spec, hdr.dst_addr.addr_bytes), + ulp_deference_struct(eth_mask, hdr.dst_addr.addr_bytes), ULP_PRSR_ACT_DEFAULT); - size = sizeof(((struct rte_flow_item_eth *)NULL)->src.addr_bytes); + size = sizeof(((struct rte_flow_item_eth *)NULL)->hdr.src_addr.addr_bytes); ulp_rte_prsr_fld_mask(params, &idx, size, - ulp_deference_struct(eth_spec, src.addr_bytes), - ulp_deference_struct(eth_mask, src.addr_bytes), + ulp_deference_struct(eth_spec, hdr.src_addr.addr_bytes), + ulp_deference_struct(eth_mask, hdr.src_addr.addr_bytes), ULP_PRSR_ACT_DEFAULT); - size = sizeof(((struct rte_flow_item_eth *)NULL)->type); + size = sizeof(((struct rte_flow_item_eth *)NULL)->hdr.ether_type); ulp_rte_prsr_fld_mask(params, &idx, size, - ulp_deference_struct(eth_spec, type), - ulp_deference_struct(eth_mask, type), + ulp_deference_struct(eth_spec, hdr.ether_type), + ulp_deference_struct(eth_mask, hdr.ether_type), ULP_PRSR_ACT_MATCH_IGNORE); /* Update the protocol hdr bitmap */ @@ -706,15 +706,15 @@ ulp_rte_vlan_hdr_handler(const struct rte_flow_item *item, uint32_t size; if (vlan_spec) { - vlan_tag = ntohs(vlan_spec->tci); + vlan_tag = ntohs(vlan_spec->hdr.vlan_tci); priority = htons(vlan_tag >> ULP_VLAN_PRIORITY_SHIFT); vlan_tag &= ULP_VLAN_TAG_MASK; vlan_tag = htons(vlan_tag); - eth_type = vlan_spec->inner_type; + eth_type = vlan_spec->hdr.eth_proto; } if (vlan_mask) { - vlan_tag_mask = ntohs(vlan_mask->tci); + vlan_tag_mask = ntohs(vlan_mask->hdr.vlan_tci); priority_mask = htons(vlan_tag_mask >> ULP_VLAN_PRIORITY_SHIFT); vlan_tag_mask &= 0xfff; @@ -741,7 +741,7 @@ ulp_rte_vlan_hdr_handler(const struct rte_flow_item *item, * Copy the rte_flow_item for vlan into hdr_field using Vlan * header fields */ - size = sizeof(((struct rte_flow_item_vlan *)NULL)->tci); + size = sizeof(((struct rte_flow_item_vlan *)NULL)->hdr.vlan_tci); /* * The priority field is ignored since OVS is setting it as * wild card match 
and it is not supported. This is a work @@ -757,10 +757,10 @@ ulp_rte_vlan_hdr_handler(const struct rte_flow_item *item, (vlan_mask) ? &vlan_tag_mask : NULL, ULP_PRSR_ACT_DEFAULT); - size = sizeof(((struct rte_flow_item_vlan *)NULL)->inner_type); + size = sizeof(((struct rte_flow_item_vlan *)NULL)->hdr.eth_proto); ulp_rte_prsr_fld_mask(params, &idx, size, - ulp_deference_struct(vlan_spec, inner_type), - ulp_deference_struct(vlan_mask, inner_type), + ulp_deference_struct(vlan_spec, hdr.eth_proto), + ulp_deference_struct(vlan_mask, hdr.eth_proto), ULP_PRSR_ACT_MATCH_IGNORE); /* Get the outer tag and inner tag counts */ @@ -1673,14 +1673,14 @@ ulp_rte_enc_eth_hdr_handler(struct ulp_rte_parser_params *params, uint32_t size; field = &params->enc_field[BNXT_ULP_ENC_FIELD_ETH_DMAC]; - size = sizeof(eth_spec->dst.addr_bytes); - field = ulp_rte_parser_fld_copy(field, eth_spec->dst.addr_bytes, size); + size = sizeof(eth_spec->hdr.dst_addr.addr_bytes); + field = ulp_rte_parser_fld_copy(field, eth_spec->hdr.dst_addr.addr_bytes, size); - size = sizeof(eth_spec->src.addr_bytes); - field = ulp_rte_parser_fld_copy(field, eth_spec->src.addr_bytes, size); + size = sizeof(eth_spec->hdr.src_addr.addr_bytes); + field = ulp_rte_parser_fld_copy(field, eth_spec->hdr.src_addr.addr_bytes, size); - size = sizeof(eth_spec->type); - field = ulp_rte_parser_fld_copy(field, &eth_spec->type, size); + size = sizeof(eth_spec->hdr.ether_type); + field = ulp_rte_parser_fld_copy(field, &eth_spec->hdr.ether_type, size); ULP_BITMAP_SET(params->enc_hdr_bitmap.bits, BNXT_ULP_HDR_BIT_O_ETH); } @@ -1704,11 +1704,11 @@ ulp_rte_enc_vlan_hdr_handler(struct ulp_rte_parser_params *params, BNXT_ULP_HDR_BIT_OI_VLAN); } - size = sizeof(vlan_spec->tci); - field = ulp_rte_parser_fld_copy(field, &vlan_spec->tci, size); + size = sizeof(vlan_spec->hdr.vlan_tci); + field = ulp_rte_parser_fld_copy(field, &vlan_spec->hdr.vlan_tci, size); - size = sizeof(vlan_spec->inner_type); - field = ulp_rte_parser_fld_copy(field,
&vlan_spec->inner_type, size); + size = sizeof(vlan_spec->hdr.eth_proto); + field = ulp_rte_parser_fld_copy(field, &vlan_spec->hdr.eth_proto, size); } /* Function to handle the parsing of RTE Flow item ipv4 Header. */ diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c b/drivers/net/bonding/rte_eth_bond_pmd.c index f70c2c290577..f0c4f7d26b86 100644 --- a/drivers/net/bonding/rte_eth_bond_pmd.c +++ b/drivers/net/bonding/rte_eth_bond_pmd.c @@ -122,15 +122,15 @@ is_lacp_packets(uint16_t ethertype, uint8_t subtype, struct rte_mbuf *mbuf) */ static struct rte_flow_item_eth flow_item_eth_type_8023ad = { - .dst.addr_bytes = { 0 }, - .src.addr_bytes = { 0 }, - .type = RTE_BE16(RTE_ETHER_TYPE_SLOW), + .hdr.dst_addr.addr_bytes = { 0 }, + .hdr.src_addr.addr_bytes = { 0 }, + .hdr.ether_type = RTE_BE16(RTE_ETHER_TYPE_SLOW), }; static struct rte_flow_item_eth flow_item_eth_mask_type_8023ad = { - .dst.addr_bytes = { 0 }, - .src.addr_bytes = { 0 }, - .type = 0xFFFF, + .hdr.dst_addr.addr_bytes = { 0 }, + .hdr.src_addr.addr_bytes = { 0 }, + .hdr.ether_type = 0xFFFF, }; static struct rte_flow_item flow_item_8023ad[] = { diff --git a/drivers/net/cxgbe/cxgbe_flow.c b/drivers/net/cxgbe/cxgbe_flow.c index d66672a9e6b8..f5787c247f1f 100644 --- a/drivers/net/cxgbe/cxgbe_flow.c +++ b/drivers/net/cxgbe/cxgbe_flow.c @@ -188,22 +188,22 @@ ch_rte_parsetype_eth(const void *dmask, const struct rte_flow_item *item, return 0; /* we don't support SRC_MAC filtering*/ - if (!rte_is_zero_ether_addr(&spec->src) || - (umask && !rte_is_zero_ether_addr(&umask->src))) + if (!rte_is_zero_ether_addr(&spec->hdr.src_addr) || + (umask && !rte_is_zero_ether_addr(&umask->hdr.src_addr))) return rte_flow_error_set(e, ENOTSUP, RTE_FLOW_ERROR_TYPE_ITEM, item, "src mac filtering not supported"); - if (!rte_is_zero_ether_addr(&spec->dst) || - (umask && !rte_is_zero_ether_addr(&umask->dst))) { + if (!rte_is_zero_ether_addr(&spec->hdr.dst_addr) || + (umask && !rte_is_zero_ether_addr(&umask->hdr.dst_addr))) { 
CXGBE_FILL_FS(0, 0x1ff, macidx); - CXGBE_FILL_FS_MEMCPY(spec->dst.addr_bytes, mask->dst.addr_bytes, + CXGBE_FILL_FS_MEMCPY(spec->hdr.dst_addr.addr_bytes, mask->hdr.dst_addr.addr_bytes, dmac); } - if (spec->type || (umask && umask->type)) - CXGBE_FILL_FS(be16_to_cpu(spec->type), - be16_to_cpu(mask->type), ethtype); + if (spec->hdr.ether_type || (umask && umask->hdr.ether_type)) + CXGBE_FILL_FS(be16_to_cpu(spec->hdr.ether_type), + be16_to_cpu(mask->hdr.ether_type), ethtype); return 0; } @@ -239,26 +239,26 @@ ch_rte_parsetype_vlan(const void *dmask, const struct rte_flow_item *item, if (fs->val.ethtype == RTE_ETHER_TYPE_QINQ) { CXGBE_FILL_FS(1, 1, ovlan_vld); if (spec) { - if (spec->tci || (umask && umask->tci)) - CXGBE_FILL_FS(be16_to_cpu(spec->tci), - be16_to_cpu(mask->tci), ovlan); + if (spec->hdr.vlan_tci || (umask && umask->hdr.vlan_tci)) + CXGBE_FILL_FS(be16_to_cpu(spec->hdr.vlan_tci), + be16_to_cpu(mask->hdr.vlan_tci), ovlan); fs->mask.ethtype = 0; fs->val.ethtype = 0; } } else { CXGBE_FILL_FS(1, 1, ivlan_vld); if (spec) { - if (spec->tci || (umask && umask->tci)) - CXGBE_FILL_FS(be16_to_cpu(spec->tci), - be16_to_cpu(mask->tci), ivlan); + if (spec->hdr.vlan_tci || (umask && umask->hdr.vlan_tci)) + CXGBE_FILL_FS(be16_to_cpu(spec->hdr.vlan_tci), + be16_to_cpu(mask->hdr.vlan_tci), ivlan); fs->mask.ethtype = 0; fs->val.ethtype = 0; } } - if (spec && (spec->inner_type || (umask && umask->inner_type))) - CXGBE_FILL_FS(be16_to_cpu(spec->inner_type), - be16_to_cpu(mask->inner_type), ethtype); + if (spec && (spec->hdr.eth_proto || (umask && umask->hdr.eth_proto))) + CXGBE_FILL_FS(be16_to_cpu(spec->hdr.eth_proto), + be16_to_cpu(mask->hdr.eth_proto), ethtype); return 0; } @@ -889,17 +889,17 @@ static struct chrte_fparse parseitem[] = { [RTE_FLOW_ITEM_TYPE_ETH] = { .fptr = ch_rte_parsetype_eth, .dmask = &(const struct rte_flow_item_eth){ - .dst.addr_bytes = "\xff\xff\xff\xff\xff\xff", - .src.addr_bytes = "\x00\x00\x00\x00\x00\x00", - .type = 0xffff, + 
.hdr.dst_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff", + .hdr.src_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00", + .hdr.ether_type = 0xffff, } }, [RTE_FLOW_ITEM_TYPE_VLAN] = { .fptr = ch_rte_parsetype_vlan, .dmask = &(const struct rte_flow_item_vlan){ - .tci = 0xffff, - .inner_type = 0xffff, + .hdr.vlan_tci = 0xffff, + .hdr.eth_proto = 0xffff, } }, diff --git a/drivers/net/dpaa2/dpaa2_flow.c b/drivers/net/dpaa2/dpaa2_flow.c index df06c3862e7c..eec7e6065097 100644 --- a/drivers/net/dpaa2/dpaa2_flow.c +++ b/drivers/net/dpaa2/dpaa2_flow.c @@ -100,13 +100,13 @@ enum rte_flow_action_type dpaa2_supported_fs_action_type[] = { #ifndef __cplusplus static const struct rte_flow_item_eth dpaa2_flow_item_eth_mask = { - .dst.addr_bytes = "\xff\xff\xff\xff\xff\xff", - .src.addr_bytes = "\xff\xff\xff\xff\xff\xff", - .type = RTE_BE16(0xffff), + .hdr.dst_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff", + .hdr.src_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff", + .hdr.ether_type = RTE_BE16(0xffff), }; static const struct rte_flow_item_vlan dpaa2_flow_item_vlan_mask = { - .tci = RTE_BE16(0xffff), + .hdr.vlan_tci = RTE_BE16(0xffff), }; static const struct rte_flow_item_ipv4 dpaa2_flow_item_ipv4_mask = { @@ -966,7 +966,7 @@ dpaa2_configure_flow_eth(struct rte_flow *flow, return -1; } - if (memcmp((const char *)&mask->src, zero_cmp, RTE_ETHER_ADDR_LEN)) { + if (memcmp((const char *)&mask->hdr.src_addr, zero_cmp, RTE_ETHER_ADDR_LEN)) { index = dpaa2_flow_extract_search( &priv->extract.qos_key_extract.dpkg, NET_PROT_ETH, NH_FLD_ETH_SA); @@ -1009,8 +1009,8 @@ dpaa2_configure_flow_eth(struct rte_flow *flow, &flow->qos_rule, NET_PROT_ETH, NH_FLD_ETH_SA, - &spec->src.addr_bytes, - &mask->src.addr_bytes, + &spec->hdr.src_addr.addr_bytes, + &mask->hdr.src_addr.addr_bytes, sizeof(struct rte_ether_addr)); if (ret) { DPAA2_PMD_ERR("QoS NH_FLD_ETH_SA rule data set failed"); @@ -1022,8 +1022,8 @@ dpaa2_configure_flow_eth(struct rte_flow *flow, &flow->fs_rule, NET_PROT_ETH, NH_FLD_ETH_SA, - 
&spec->src.addr_bytes, - &mask->src.addr_bytes, + &spec->hdr.src_addr.addr_bytes, + &mask->hdr.src_addr.addr_bytes, sizeof(struct rte_ether_addr)); if (ret) { DPAA2_PMD_ERR("FS NH_FLD_ETH_SA rule data set failed"); @@ -1031,7 +1031,7 @@ dpaa2_configure_flow_eth(struct rte_flow *flow, } } - if (memcmp((const char *)&mask->dst, zero_cmp, RTE_ETHER_ADDR_LEN)) { + if (memcmp((const char *)&mask->hdr.dst_addr, zero_cmp, RTE_ETHER_ADDR_LEN)) { index = dpaa2_flow_extract_search( &priv->extract.qos_key_extract.dpkg, NET_PROT_ETH, NH_FLD_ETH_DA); @@ -1076,8 +1076,8 @@ dpaa2_configure_flow_eth(struct rte_flow *flow, &flow->qos_rule, NET_PROT_ETH, NH_FLD_ETH_DA, - &spec->dst.addr_bytes, - &mask->dst.addr_bytes, + &spec->hdr.dst_addr.addr_bytes, + &mask->hdr.dst_addr.addr_bytes, sizeof(struct rte_ether_addr)); if (ret) { DPAA2_PMD_ERR("QoS NH_FLD_ETH_DA rule data set failed"); @@ -1089,8 +1089,8 @@ dpaa2_configure_flow_eth(struct rte_flow *flow, &flow->fs_rule, NET_PROT_ETH, NH_FLD_ETH_DA, - &spec->dst.addr_bytes, - &mask->dst.addr_bytes, + &spec->hdr.dst_addr.addr_bytes, + &mask->hdr.dst_addr.addr_bytes, sizeof(struct rte_ether_addr)); if (ret) { DPAA2_PMD_ERR("FS NH_FLD_ETH_DA rule data set failed"); @@ -1098,7 +1098,7 @@ dpaa2_configure_flow_eth(struct rte_flow *flow, } } - if (memcmp((const char *)&mask->type, zero_cmp, sizeof(rte_be16_t))) { + if (memcmp((const char *)&mask->hdr.ether_type, zero_cmp, sizeof(rte_be16_t))) { index = dpaa2_flow_extract_search( &priv->extract.qos_key_extract.dpkg, NET_PROT_ETH, NH_FLD_ETH_TYPE); @@ -1142,8 +1142,8 @@ dpaa2_configure_flow_eth(struct rte_flow *flow, &flow->qos_rule, NET_PROT_ETH, NH_FLD_ETH_TYPE, - &spec->type, - &mask->type, + &spec->hdr.ether_type, + &mask->hdr.ether_type, sizeof(rte_be16_t)); if (ret) { DPAA2_PMD_ERR("QoS NH_FLD_ETH_TYPE rule data set failed"); @@ -1155,8 +1155,8 @@ dpaa2_configure_flow_eth(struct rte_flow *flow, &flow->fs_rule, NET_PROT_ETH, NH_FLD_ETH_TYPE, - &spec->type, - &mask->type, + 
&spec->hdr.ether_type, + &mask->hdr.ether_type, sizeof(rte_be16_t)); if (ret) { DPAA2_PMD_ERR("FS NH_FLD_ETH_TYPE rule data set failed"); @@ -1266,7 +1266,7 @@ dpaa2_configure_flow_vlan(struct rte_flow *flow, return -1; } - if (!mask->tci) + if (!mask->hdr.vlan_tci) return 0; index = dpaa2_flow_extract_search( @@ -1314,8 +1314,8 @@ dpaa2_configure_flow_vlan(struct rte_flow *flow, &flow->qos_rule, NET_PROT_VLAN, NH_FLD_VLAN_TCI, - &spec->tci, - &mask->tci, + &spec->hdr.vlan_tci, + &mask->hdr.vlan_tci, sizeof(rte_be16_t)); if (ret) { DPAA2_PMD_ERR("QoS NH_FLD_VLAN_TCI rule data set failed"); @@ -1327,8 +1327,8 @@ dpaa2_configure_flow_vlan(struct rte_flow *flow, &flow->fs_rule, NET_PROT_VLAN, NH_FLD_VLAN_TCI, - &spec->tci, - &mask->tci, + &spec->hdr.vlan_tci, + &mask->hdr.vlan_tci, sizeof(rte_be16_t)); if (ret) { DPAA2_PMD_ERR("FS NH_FLD_VLAN_TCI rule data set failed"); diff --git a/drivers/net/dpaa2/dpaa2_mux.c b/drivers/net/dpaa2/dpaa2_mux.c index 7456f43f425c..2ff1a98fda7c 100644 --- a/drivers/net/dpaa2/dpaa2_mux.c +++ b/drivers/net/dpaa2/dpaa2_mux.c @@ -150,7 +150,7 @@ rte_pmd_dpaa2_mux_flow_create(uint32_t dpdmux_id, kg_cfg.num_extracts = 1; spec = (const struct rte_flow_item_eth *)pattern[0]->spec; - eth_type = rte_constant_bswap16(spec->type); + eth_type = rte_constant_bswap16(spec->hdr.ether_type); memcpy((void *)key_iova, (const void *)ð_type, sizeof(rte_be16_t)); memcpy(mask_iova, pattern[0]->mask, sizeof(uint16_t)); diff --git a/drivers/net/e1000/igb_flow.c b/drivers/net/e1000/igb_flow.c index b77531065196..ea9b290e1cb5 100644 --- a/drivers/net/e1000/igb_flow.c +++ b/drivers/net/e1000/igb_flow.c @@ -555,16 +555,16 @@ cons_parse_ethertype_filter(const struct rte_flow_attr *attr, * Mask bits of destination MAC address must be full * of 1 or full of 0. 
*/ - if (!rte_is_zero_ether_addr(&eth_mask->src) || - (!rte_is_zero_ether_addr(&eth_mask->dst) && - !rte_is_broadcast_ether_addr(&eth_mask->dst))) { + if (!rte_is_zero_ether_addr(&eth_mask->hdr.src_addr) || + (!rte_is_zero_ether_addr(&eth_mask->hdr.dst_addr) && + !rte_is_broadcast_ether_addr(&eth_mask->hdr.dst_addr))) { rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, item, "Invalid ether address mask"); return -rte_errno; } - if ((eth_mask->type & UINT16_MAX) != UINT16_MAX) { + if ((eth_mask->hdr.ether_type & UINT16_MAX) != UINT16_MAX) { rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, item, "Invalid ethertype mask"); @@ -574,13 +574,13 @@ cons_parse_ethertype_filter(const struct rte_flow_attr *attr, /* If mask bits of destination MAC address * are full of 1, set RTE_ETHTYPE_FLAGS_MAC. */ - if (rte_is_broadcast_ether_addr(&eth_mask->dst)) { - filter->mac_addr = eth_spec->dst; + if (rte_is_broadcast_ether_addr(&eth_mask->hdr.dst_addr)) { + filter->mac_addr = eth_spec->hdr.dst_addr; filter->flags |= RTE_ETHTYPE_FLAGS_MAC; } else { filter->flags &= ~RTE_ETHTYPE_FLAGS_MAC; } - filter->ether_type = rte_be_to_cpu_16(eth_spec->type); + filter->ether_type = rte_be_to_cpu_16(eth_spec->hdr.ether_type); /* Check if the next non-void item is END.
*/ index++; diff --git a/drivers/net/enic/enic_flow.c b/drivers/net/enic/enic_flow.c index cf51793cfef0..e6c9ad442ac0 100644 --- a/drivers/net/enic/enic_flow.c +++ b/drivers/net/enic/enic_flow.c @@ -656,17 +656,17 @@ enic_copy_item_eth_v2(struct copy_item_args *arg) if (!mask) mask = &rte_flow_item_eth_mask; - memcpy(enic_spec.dst_addr.addr_bytes, spec->dst.addr_bytes, + memcpy(enic_spec.dst_addr.addr_bytes, spec->hdr.dst_addr.addr_bytes, RTE_ETHER_ADDR_LEN); - memcpy(enic_spec.src_addr.addr_bytes, spec->src.addr_bytes, + memcpy(enic_spec.src_addr.addr_bytes, spec->hdr.src_addr.addr_bytes, RTE_ETHER_ADDR_LEN); - memcpy(enic_mask.dst_addr.addr_bytes, mask->dst.addr_bytes, + memcpy(enic_mask.dst_addr.addr_bytes, mask->hdr.dst_addr.addr_bytes, RTE_ETHER_ADDR_LEN); - memcpy(enic_mask.src_addr.addr_bytes, mask->src.addr_bytes, + memcpy(enic_mask.src_addr.addr_bytes, mask->hdr.src_addr.addr_bytes, RTE_ETHER_ADDR_LEN); - enic_spec.ether_type = spec->type; - enic_mask.ether_type = mask->type; + enic_spec.ether_type = spec->hdr.ether_type; + enic_mask.ether_type = mask->hdr.ether_type; /* outer header */ memcpy(gp->layer[FILTER_GENERIC_1_L2].mask, &enic_mask, @@ -715,16 +715,16 @@ enic_copy_item_vlan_v2(struct copy_item_args *arg) struct rte_vlan_hdr *vlan; vlan = (struct rte_vlan_hdr *)(eth_mask + 1); - vlan->eth_proto = mask->inner_type; + vlan->eth_proto = mask->hdr.eth_proto; vlan = (struct rte_vlan_hdr *)(eth_val + 1); - vlan->eth_proto = spec->inner_type; + vlan->eth_proto = spec->hdr.eth_proto; } else { - eth_mask->ether_type = mask->inner_type; - eth_val->ether_type = spec->inner_type; + eth_mask->ether_type = mask->hdr.eth_proto; + eth_val->ether_type = spec->hdr.eth_proto; } /* For TCI, use the vlan mask/val fields (little endian). 
*/ - gp->mask_vlan = rte_be_to_cpu_16(mask->tci); - gp->val_vlan = rte_be_to_cpu_16(spec->tci); + gp->mask_vlan = rte_be_to_cpu_16(mask->hdr.vlan_tci); + gp->val_vlan = rte_be_to_cpu_16(spec->hdr.vlan_tci); return 0; } diff --git a/drivers/net/enic/enic_fm_flow.c b/drivers/net/enic/enic_fm_flow.c index c87d3af8476c..90027dc67695 100644 --- a/drivers/net/enic/enic_fm_flow.c +++ b/drivers/net/enic/enic_fm_flow.c @@ -462,10 +462,10 @@ enic_fm_copy_item_vlan(struct copy_item_args *arg) eth_val = (void *)&fm_data->l2.eth; /* - * Outer TPID cannot be matched. If inner_type is 0, use what is + * Outer TPID cannot be matched. If protocol is 0, use what is * in the eth header. */ - if (eth_mask->ether_type && mask->inner_type) + if (eth_mask->ether_type && mask->hdr.eth_proto) return -ENOTSUP; /* @@ -473,14 +473,14 @@ enic_fm_copy_item_vlan(struct copy_item_args *arg) * L2, regardless of vlan stripping settings. So, the inner type * from vlan becomes the ether type of the eth header. */ - if (mask->inner_type) { - eth_mask->ether_type = mask->inner_type; - eth_val->ether_type = spec->inner_type; + if (mask->hdr.eth_proto) { + eth_mask->ether_type = mask->hdr.eth_proto; + eth_val->ether_type = spec->hdr.eth_proto; } fm_data->fk_header_select |= FKH_ETHER | FKH_QTAG; fm_mask->fk_header_select |= FKH_ETHER | FKH_QTAG; - fm_data->fk_vlan = rte_be_to_cpu_16(spec->tci); - fm_mask->fk_vlan = rte_be_to_cpu_16(mask->tci); + fm_data->fk_vlan = rte_be_to_cpu_16(spec->hdr.vlan_tci); + fm_mask->fk_vlan = rte_be_to_cpu_16(mask->hdr.vlan_tci); return 0; } @@ -1385,7 +1385,7 @@ enic_fm_copy_vxlan_encap(struct enic_flowman *fm, ENICPMD_LOG(DEBUG, "vxlan-encap: vlan"); spec = item->spec; - fm_op.encap.outer_vlan = rte_be_to_cpu_16(spec->tci); + fm_op.encap.outer_vlan = rte_be_to_cpu_16(spec->hdr.vlan_tci); item++; flow_item_skip_void(&item); } diff --git a/drivers/net/hinic/hinic_pmd_flow.c b/drivers/net/hinic/hinic_pmd_flow.c index 358b372e07e8..d1a564a16303 100644 --- 
a/drivers/net/hinic/hinic_pmd_flow.c +++ b/drivers/net/hinic/hinic_pmd_flow.c @@ -310,15 +310,15 @@ static int cons_parse_ethertype_filter(const struct rte_flow_attr *attr, * Mask bits of destination MAC address must be full * of 1 or full of 0. */ - if (!rte_is_zero_ether_addr(&eth_mask->src) || - (!rte_is_zero_ether_addr(&eth_mask->dst) && - !rte_is_broadcast_ether_addr(&eth_mask->dst))) { + if (!rte_is_zero_ether_addr(&eth_mask->hdr.src_addr) || + (!rte_is_zero_ether_addr(&eth_mask->hdr.dst_addr) && + !rte_is_broadcast_ether_addr(&eth_mask->hdr.dst_addr))) { rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, item, "Invalid ether address mask"); return -rte_errno; } - if ((eth_mask->type & UINT16_MAX) != UINT16_MAX) { + if ((eth_mask->hdr.ether_type & UINT16_MAX) != UINT16_MAX) { rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, item, "Invalid ethertype mask"); return -rte_errno; @@ -328,13 +328,13 @@ static int cons_parse_ethertype_filter(const struct rte_flow_attr *attr, * If mask bits of destination MAC address * are full of 1, set RTE_ETHTYPE_FLAGS_MAC. */ - if (rte_is_broadcast_ether_addr(&eth_mask->dst)) { - filter->mac_addr = eth_spec->dst; + if (rte_is_broadcast_ether_addr(&eth_mask->hdr.dst_addr)) { + filter->mac_addr = eth_spec->hdr.dst_addr; filter->flags |= RTE_ETHTYPE_FLAGS_MAC; } else { filter->flags &= ~RTE_ETHTYPE_FLAGS_MAC; } - filter->ether_type = rte_be_to_cpu_16(eth_spec->type); + filter->ether_type = rte_be_to_cpu_16(eth_spec->hdr.ether_type); /* Check if the next non-void item is END.
*/ item = next_no_void_pattern(pattern, item); diff --git a/drivers/net/hns3/hns3_flow.c b/drivers/net/hns3/hns3_flow.c index a2c1589c3980..ef1832982dee 100644 --- a/drivers/net/hns3/hns3_flow.c +++ b/drivers/net/hns3/hns3_flow.c @@ -493,28 +493,28 @@ hns3_parse_eth(const struct rte_flow_item *item, struct hns3_fdir_rule *rule, if (item->mask) { eth_mask = item->mask; - if (eth_mask->type) { + if (eth_mask->hdr.ether_type) { hns3_set_bit(rule->input_set, INNER_ETH_TYPE, 1); rule->key_conf.mask.ether_type = - rte_be_to_cpu_16(eth_mask->type); + rte_be_to_cpu_16(eth_mask->hdr.ether_type); } - if (!rte_is_zero_ether_addr(&eth_mask->src)) { + if (!rte_is_zero_ether_addr(&eth_mask->hdr.src_addr)) { hns3_set_bit(rule->input_set, INNER_SRC_MAC, 1); memcpy(rule->key_conf.mask.src_mac, - eth_mask->src.addr_bytes, RTE_ETHER_ADDR_LEN); + eth_mask->hdr.src_addr.addr_bytes, RTE_ETHER_ADDR_LEN); } - if (!rte_is_zero_ether_addr(&eth_mask->dst)) { + if (!rte_is_zero_ether_addr(&eth_mask->hdr.dst_addr)) { hns3_set_bit(rule->input_set, INNER_DST_MAC, 1); memcpy(rule->key_conf.mask.dst_mac, - eth_mask->dst.addr_bytes, RTE_ETHER_ADDR_LEN); + eth_mask->hdr.dst_addr.addr_bytes, RTE_ETHER_ADDR_LEN); } } eth_spec = item->spec; - rule->key_conf.spec.ether_type = rte_be_to_cpu_16(eth_spec->type); - memcpy(rule->key_conf.spec.src_mac, eth_spec->src.addr_bytes, + rule->key_conf.spec.ether_type = rte_be_to_cpu_16(eth_spec->hdr.ether_type); + memcpy(rule->key_conf.spec.src_mac, eth_spec->hdr.src_addr.addr_bytes, RTE_ETHER_ADDR_LEN); - memcpy(rule->key_conf.spec.dst_mac, eth_spec->dst.addr_bytes, + memcpy(rule->key_conf.spec.dst_mac, eth_spec->hdr.dst_addr.addr_bytes, RTE_ETHER_ADDR_LEN); return 0; } @@ -538,17 +538,17 @@ hns3_parse_vlan(const struct rte_flow_item *item, struct hns3_fdir_rule *rule, if (item->mask) { vlan_mask = item->mask; - if (vlan_mask->tci) { + if (vlan_mask->hdr.vlan_tci) { if (rule->key_conf.vlan_num == 1) { hns3_set_bit(rule->input_set, INNER_VLAN_TAG1, 1);
rule->key_conf.mask.vlan_tag1 = - rte_be_to_cpu_16(vlan_mask->tci); + rte_be_to_cpu_16(vlan_mask->hdr.vlan_tci); } else { hns3_set_bit(rule->input_set, INNER_VLAN_TAG2, 1); rule->key_conf.mask.vlan_tag2 = - rte_be_to_cpu_16(vlan_mask->tci); + rte_be_to_cpu_16(vlan_mask->hdr.vlan_tci); } } } @@ -556,10 +556,10 @@ hns3_parse_vlan(const struct rte_flow_item *item, struct hns3_fdir_rule *rule, vlan_spec = item->spec; if (rule->key_conf.vlan_num == 1) rule->key_conf.spec.vlan_tag1 = - rte_be_to_cpu_16(vlan_spec->tci); + rte_be_to_cpu_16(vlan_spec->hdr.vlan_tci); else rule->key_conf.spec.vlan_tag2 = - rte_be_to_cpu_16(vlan_spec->tci); + rte_be_to_cpu_16(vlan_spec->hdr.vlan_tci); return 0; } diff --git a/drivers/net/i40e/i40e_flow.c b/drivers/net/i40e/i40e_flow.c index 65a826d51c17..0acbd5a061e0 100644 --- a/drivers/net/i40e/i40e_flow.c +++ b/drivers/net/i40e/i40e_flow.c @@ -1322,9 +1322,9 @@ i40e_flow_parse_ethertype_pattern(struct rte_eth_dev *dev, * Mask bits of destination MAC address must be full * of 1 or full of 0. */ - if (!rte_is_zero_ether_addr(&eth_mask->src) || - (!rte_is_zero_ether_addr(&eth_mask->dst) && - !rte_is_broadcast_ether_addr(&eth_mask->dst))) { + if (!rte_is_zero_ether_addr(&eth_mask->hdr.src_addr) || + (!rte_is_zero_ether_addr(&eth_mask->hdr.dst_addr) && + !rte_is_broadcast_ether_addr(&eth_mask->hdr.dst_addr))) { rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, item, @@ -1332,7 +1332,7 @@ i40e_flow_parse_ethertype_pattern(struct rte_eth_dev *dev, return -rte_errno; } - if ((eth_mask->type & UINT16_MAX) != UINT16_MAX) { + if ((eth_mask->hdr.ether_type & UINT16_MAX) != UINT16_MAX) { rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, item, @@ -1343,13 +1343,13 @@ i40e_flow_parse_ethertype_pattern(struct rte_eth_dev *dev, /* If mask bits of destination MAC address * are full of 1, set RTE_ETHTYPE_FLAGS_MAC.
*/ - if (rte_is_broadcast_ether_addr(&eth_mask->dst)) { - filter->mac_addr = eth_spec->dst; + if (rte_is_broadcast_ether_addr(&eth_mask->hdr.dst_addr)) { + filter->mac_addr = eth_spec->hdr.dst_addr; filter->flags |= RTE_ETHTYPE_FLAGS_MAC; } else { filter->flags &= ~RTE_ETHTYPE_FLAGS_MAC; } - filter->ether_type = rte_be_to_cpu_16(eth_spec->type); + filter->ether_type = rte_be_to_cpu_16(eth_spec->hdr.ether_type); if (filter->ether_type == RTE_ETHER_TYPE_IPV4 || filter->ether_type == RTE_ETHER_TYPE_IPV6 || @@ -1662,25 +1662,25 @@ i40e_flow_parse_fdir_pattern(struct rte_eth_dev *dev, } if (eth_spec && eth_mask) { - if (rte_is_broadcast_ether_addr(&eth_mask->dst) && - rte_is_zero_ether_addr(&eth_mask->src)) { + if (rte_is_broadcast_ether_addr(&eth_mask->hdr.dst_addr) && + rte_is_zero_ether_addr(&eth_mask->hdr.src_addr)) { filter->input.flow.l2_flow.dst = - eth_spec->dst; + eth_spec->hdr.dst_addr; input_set |= I40E_INSET_DMAC; - } else if (rte_is_zero_ether_addr(&eth_mask->dst) && - rte_is_broadcast_ether_addr(&eth_mask->src)) { + } else if (rte_is_zero_ether_addr(&eth_mask->hdr.dst_addr) && + rte_is_broadcast_ether_addr(&eth_mask->hdr.src_addr)) { filter->input.flow.l2_flow.src = - eth_spec->src; + eth_spec->hdr.src_addr; input_set |= I40E_INSET_SMAC; - } else if (rte_is_broadcast_ether_addr(&eth_mask->dst) && - rte_is_broadcast_ether_addr(&eth_mask->src)) { + } else if (rte_is_broadcast_ether_addr(&eth_mask->hdr.dst_addr) && + rte_is_broadcast_ether_addr(&eth_mask->hdr.src_addr)) { filter->input.flow.l2_flow.dst = - eth_spec->dst; + eth_spec->hdr.dst_addr; filter->input.flow.l2_flow.src = - eth_spec->src; + eth_spec->hdr.src_addr; input_set |= (I40E_INSET_DMAC | I40E_INSET_SMAC); - } else if (!rte_is_zero_ether_addr(&eth_mask->src) || - !rte_is_zero_ether_addr(&eth_mask->dst)) { + } else if (!rte_is_zero_ether_addr(&eth_mask->hdr.src_addr) || + !rte_is_zero_ether_addr(&eth_mask->hdr.dst_addr)) { rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, item, @@ -1690,7 +1690,7 @@ i40e_flow_parse_fdir_pattern(struct
rte_eth_dev *dev, } if (eth_spec && eth_mask && next_type == RTE_FLOW_ITEM_TYPE_END) { - if (eth_mask->type != RTE_BE16(0xffff)) { + if (eth_mask->hdr.ether_type != RTE_BE16(0xffff)) { rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, item, @@ -1698,7 +1698,7 @@ i40e_flow_parse_fdir_pattern(struct rte_eth_dev *dev, return -rte_errno; } - ether_type = rte_be_to_cpu_16(eth_spec->type); + ether_type = rte_be_to_cpu_16(eth_spec->hdr.ether_type); if (next_type == RTE_FLOW_ITEM_TYPE_VLAN || ether_type == RTE_ETHER_TYPE_IPV4 || @@ -1712,7 +1712,7 @@ i40e_flow_parse_fdir_pattern(struct rte_eth_dev *dev, } input_set |= I40E_INSET_LAST_ETHER_TYPE; filter->input.flow.l2_flow.ether_type = - eth_spec->type; + eth_spec->hdr.ether_type; } pctype = I40E_FILTER_PCTYPE_L2_PAYLOAD; @@ -1725,13 +1725,13 @@ i40e_flow_parse_fdir_pattern(struct rte_eth_dev *dev, RTE_ASSERT(!(input_set & I40E_INSET_LAST_ETHER_TYPE)); if (vlan_spec && vlan_mask) { - if (vlan_mask->tci != + if (vlan_mask->hdr.vlan_tci != rte_cpu_to_be_16(I40E_VLAN_TCI_MASK) && - vlan_mask->tci != + vlan_mask->hdr.vlan_tci != rte_cpu_to_be_16(I40E_VLAN_PRI_MASK) && - vlan_mask->tci != + vlan_mask->hdr.vlan_tci != rte_cpu_to_be_16(I40E_VLAN_CFI_MASK) && - vlan_mask->tci != + vlan_mask->hdr.vlan_tci != rte_cpu_to_be_16(I40E_VLAN_VID_MASK)) { rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, @@ -1740,10 +1740,10 @@ i40e_flow_parse_fdir_pattern(struct rte_eth_dev *dev, } input_set |= I40E_INSET_VLAN_INNER; filter->input.flow_ext.vlan_tci = - vlan_spec->tci; + vlan_spec->hdr.vlan_tci; } - if (vlan_spec && vlan_mask && vlan_mask->inner_type) { - if (vlan_mask->inner_type != RTE_BE16(0xffff)) { + if (vlan_spec && vlan_mask && vlan_mask->hdr.eth_proto) { + if (vlan_mask->hdr.eth_proto != RTE_BE16(0xffff)) { rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, item, @@ -1753,7 +1753,7 @@ i40e_flow_parse_fdir_pattern(struct rte_eth_dev *dev, } ether_type = - rte_be_to_cpu_16(vlan_spec->inner_type); + 
rte_be_to_cpu_16(vlan_spec->hdr.eth_proto); if (ether_type == RTE_ETHER_TYPE_IPV4 || ether_type == RTE_ETHER_TYPE_IPV6 || @@ -1766,7 +1766,7 @@ i40e_flow_parse_fdir_pattern(struct rte_eth_dev *dev, } input_set |= I40E_INSET_LAST_ETHER_TYPE; filter->input.flow.l2_flow.ether_type = - vlan_spec->inner_type; + vlan_spec->hdr.eth_proto; } pctype = I40E_FILTER_PCTYPE_L2_PAYLOAD; @@ -2908,9 +2908,9 @@ i40e_flow_parse_vxlan_pattern(__rte_unused struct rte_eth_dev *dev, /* DST address of inner MAC shouldn't be masked. * SRC address of Inner MAC should be masked. */ - if (!rte_is_broadcast_ether_addr(&eth_mask->dst) || - !rte_is_zero_ether_addr(&eth_mask->src) || - eth_mask->type) { + if (!rte_is_broadcast_ether_addr(&eth_mask->hdr.dst_addr) || + !rte_is_zero_ether_addr(&eth_mask->hdr.src_addr) || + eth_mask->hdr.ether_type) { rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, item, @@ -2920,12 +2920,12 @@ i40e_flow_parse_vxlan_pattern(__rte_unused struct rte_eth_dev *dev, if (!vxlan_flag) { rte_memcpy(&filter->outer_mac, - &eth_spec->dst, + &eth_spec->hdr.dst_addr, RTE_ETHER_ADDR_LEN); filter_type |= RTE_ETH_TUNNEL_FILTER_OMAC; } else { rte_memcpy(&filter->inner_mac, - &eth_spec->dst, + &eth_spec->hdr.dst_addr, RTE_ETHER_ADDR_LEN); filter_type |= RTE_ETH_TUNNEL_FILTER_IMAC; } @@ -2935,7 +2935,7 @@ i40e_flow_parse_vxlan_pattern(__rte_unused struct rte_eth_dev *dev, vlan_spec = item->spec; vlan_mask = item->mask; if (!(vlan_spec && vlan_mask) || - vlan_mask->inner_type) { + vlan_mask->hdr.eth_proto) { rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, item, @@ -2944,10 +2944,10 @@ i40e_flow_parse_vxlan_pattern(__rte_unused struct rte_eth_dev *dev, } if (vlan_spec && vlan_mask) { - if (vlan_mask->tci == + if (vlan_mask->hdr.vlan_tci == rte_cpu_to_be_16(I40E_VLAN_TCI_MASK)) filter->inner_vlan = - rte_be_to_cpu_16(vlan_spec->tci) & + rte_be_to_cpu_16(vlan_spec->hdr.vlan_tci) & I40E_VLAN_TCI_MASK; filter_type |= RTE_ETH_TUNNEL_FILTER_IVLAN; } @@ -3138,9 +3138,9 @@
i40e_flow_parse_nvgre_pattern(__rte_unused struct rte_eth_dev *dev, /* DST address of inner MAC shouldn't be masked. * SRC address of Inner MAC should be masked. */ - if (!rte_is_broadcast_ether_addr(&eth_mask->dst) || - !rte_is_zero_ether_addr(&eth_mask->src) || - eth_mask->type) { + if (!rte_is_broadcast_ether_addr(&eth_mask->hdr.dst_addr) || + !rte_is_zero_ether_addr(&eth_mask->hdr.src_addr) || + eth_mask->hdr.ether_type) { rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, item, @@ -3150,12 +3150,12 @@ i40e_flow_parse_nvgre_pattern(__rte_unused struct rte_eth_dev *dev, if (!nvgre_flag) { rte_memcpy(&filter->outer_mac, - &eth_spec->dst, + &eth_spec->hdr.dst_addr, RTE_ETHER_ADDR_LEN); filter_type |= RTE_ETH_TUNNEL_FILTER_OMAC; } else { rte_memcpy(&filter->inner_mac, - &eth_spec->dst, + &eth_spec->hdr.dst_addr, RTE_ETHER_ADDR_LEN); filter_type |= RTE_ETH_TUNNEL_FILTER_IMAC; } @@ -3166,7 +3166,7 @@ i40e_flow_parse_nvgre_pattern(__rte_unused struct rte_eth_dev *dev, vlan_spec = item->spec; vlan_mask = item->mask; if (!(vlan_spec && vlan_mask) || - vlan_mask->inner_type) { + vlan_mask->hdr.eth_proto) { rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, item, @@ -3175,10 +3175,10 @@ i40e_flow_parse_nvgre_pattern(__rte_unused struct rte_eth_dev *dev, } if (vlan_spec && vlan_mask) { - if (vlan_mask->tci == + if (vlan_mask->hdr.vlan_tci == rte_cpu_to_be_16(I40E_VLAN_TCI_MASK)) filter->inner_vlan = - rte_be_to_cpu_16(vlan_spec->tci) & + rte_be_to_cpu_16(vlan_spec->hdr.vlan_tci) & I40E_VLAN_TCI_MASK; filter_type |= RTE_ETH_TUNNEL_FILTER_IVLAN; } @@ -3675,7 +3675,7 @@ i40e_flow_parse_qinq_pattern(__rte_unused struct rte_eth_dev *dev, vlan_mask = item->mask; if (!(vlan_spec && vlan_mask) || - vlan_mask->inner_type) { + vlan_mask->hdr.eth_proto) { rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, item, @@ -3701,8 +3701,8 @@ i40e_flow_parse_qinq_pattern(__rte_unused struct rte_eth_dev *dev, /* Get filter specification */ if (o_vlan_mask != NULL && i_vlan_mask != NULL) { -
filter->outer_vlan = rte_be_to_cpu_16(o_vlan_spec->tci); - filter->inner_vlan = rte_be_to_cpu_16(i_vlan_spec->tci); + filter->outer_vlan = rte_be_to_cpu_16(o_vlan_spec->hdr.vlan_tci); + filter->inner_vlan = rte_be_to_cpu_16(i_vlan_spec->hdr.vlan_tci); } else { rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, diff --git a/drivers/net/i40e/i40e_hash.c b/drivers/net/i40e/i40e_hash.c index 0c848189776d..02e1457d8017 100644 --- a/drivers/net/i40e/i40e_hash.c +++ b/drivers/net/i40e/i40e_hash.c @@ -986,7 +986,7 @@ i40e_hash_parse_queue_region(const struct rte_eth_dev *dev, vlan_spec = pattern->spec; vlan_mask = pattern->mask; if (!vlan_spec || !vlan_mask || - (rte_be_to_cpu_16(vlan_mask->tci) >> 13) != 7) + (rte_be_to_cpu_16(vlan_mask->hdr.vlan_tci) >> 13) != 7) return rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, pattern, "Pattern error."); @@ -1033,7 +1033,7 @@ i40e_hash_parse_queue_region(const struct rte_eth_dev *dev, rss_conf->region_queue_num = (uint8_t)rss_act->queue_num; rss_conf->region_queue_start = rss_act->queue[0]; - rss_conf->region_priority = rte_be_to_cpu_16(vlan_spec->tci) >> 13; + rss_conf->region_priority = rte_be_to_cpu_16(vlan_spec->hdr.vlan_tci) >> 13; return 0; } diff --git a/drivers/net/iavf/iavf_fdir.c b/drivers/net/iavf/iavf_fdir.c index 8f8087392538..a6c88cb55b88 100644 --- a/drivers/net/iavf/iavf_fdir.c +++ b/drivers/net/iavf/iavf_fdir.c @@ -850,27 +850,27 @@ iavf_fdir_parse_pattern(__rte_unused struct iavf_adapter *ad, } if (eth_spec && eth_mask) { - if (!rte_is_zero_ether_addr(&eth_mask->dst)) { + if (!rte_is_zero_ether_addr(&eth_mask->hdr.dst_addr)) { input_set |= IAVF_INSET_DMAC; VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr1, ETH, DST); - } else if (!rte_is_zero_ether_addr(&eth_mask->src)) { + } else if (!rte_is_zero_ether_addr(&eth_mask->hdr.src_addr)) { input_set |= IAVF_INSET_SMAC; VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr1, ETH, SRC); } - if (eth_mask->type) { - if (eth_mask->type != RTE_BE16(0xffff)) { + if (eth_mask->hdr.ether_type) {
+ if (eth_mask->hdr.ether_type != RTE_BE16(0xffff)) { rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, item, "Invalid type mask."); return -rte_errno; } - ether_type = rte_be_to_cpu_16(eth_spec->type); + ether_type = rte_be_to_cpu_16(eth_spec->hdr.ether_type); if (ether_type == RTE_ETHER_TYPE_IPV4 || ether_type == RTE_ETHER_TYPE_IPV6) { rte_flow_error_set(error, EINVAL, diff --git a/drivers/net/iavf/iavf_fsub.c b/drivers/net/iavf/iavf_fsub.c index 4082c0069f31..74e1e7099b8c 100644 --- a/drivers/net/iavf/iavf_fsub.c +++ b/drivers/net/iavf/iavf_fsub.c @@ -254,7 +254,7 @@ iavf_fsub_parse_pattern(const struct rte_flow_item pattern[], if (eth_spec && eth_mask) { input = &outer_input_set; - if (!rte_is_zero_ether_addr(ð_mask->dst)) { + if (!rte_is_zero_ether_addr(ð_mask->hdr.dst_addr)) { *input |= IAVF_INSET_DMAC; input_set_byte += 6; } else { @@ -262,12 +262,12 @@ iavf_fsub_parse_pattern(const struct rte_flow_item pattern[], input_set_byte += 6; } - if (!rte_is_zero_ether_addr(ð_mask->src)) { + if (!rte_is_zero_ether_addr(ð_mask->hdr.src_addr)) { *input |= IAVF_INSET_SMAC; input_set_byte += 6; } - if (eth_mask->type) { + if (eth_mask->hdr.ether_type) { *input |= IAVF_INSET_ETHERTYPE; input_set_byte += 2; } @@ -487,10 +487,10 @@ iavf_fsub_parse_pattern(const struct rte_flow_item pattern[], *input |= IAVF_INSET_VLAN_OUTER; - if (vlan_mask->tci) + if (vlan_mask->hdr.vlan_tci) input_set_byte += 2; - if (vlan_mask->inner_type) { + if (vlan_mask->hdr.eth_proto) { rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, item, diff --git a/drivers/net/iavf/iavf_ipsec_crypto.c b/drivers/net/iavf/iavf_ipsec_crypto.c index 868921cac595..08a80137e5b9 100644 --- a/drivers/net/iavf/iavf_ipsec_crypto.c +++ b/drivers/net/iavf/iavf_ipsec_crypto.c @@ -1682,9 +1682,9 @@ parse_eth_item(const struct rte_flow_item_eth *item, struct rte_ether_hdr *eth) { memcpy(eth->src_addr.addr_bytes, - item->src.addr_bytes, sizeof(eth->src_addr)); + item->hdr.src_addr.addr_bytes, 
sizeof(eth->src_addr)); memcpy(eth->dst_addr.addr_bytes, - item->dst.addr_bytes, sizeof(eth->dst_addr)); + item->hdr.dst_addr.addr_bytes, sizeof(eth->dst_addr)); } static void diff --git a/drivers/net/ice/ice_acl_filter.c b/drivers/net/ice/ice_acl_filter.c index 8fe6f5aeb0cd..f2ddbd7b9b2e 100644 --- a/drivers/net/ice/ice_acl_filter.c +++ b/drivers/net/ice/ice_acl_filter.c @@ -675,36 +675,36 @@ ice_acl_parse_pattern(__rte_unused struct ice_adapter *ad, eth_mask = item->mask; if (eth_spec && eth_mask) { - if (rte_is_broadcast_ether_addr(ð_mask->src) || - rte_is_broadcast_ether_addr(ð_mask->dst)) { + if (rte_is_broadcast_ether_addr(ð_mask->hdr.src_addr) || + rte_is_broadcast_ether_addr(ð_mask->hdr.dst_addr)) { rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, item, "Invalid mac addr mask"); return -rte_errno; } - if (!rte_is_zero_ether_addr(ð_spec->src) && - !rte_is_zero_ether_addr(ð_mask->src)) { + if (!rte_is_zero_ether_addr(ð_spec->hdr.src_addr) && + !rte_is_zero_ether_addr(ð_mask->hdr.src_addr)) { input_set |= ICE_INSET_SMAC; ice_memcpy(&filter->input.ext_data.src_mac, - ð_spec->src, + ð_spec->hdr.src_addr, RTE_ETHER_ADDR_LEN, ICE_NONDMA_TO_NONDMA); ice_memcpy(&filter->input.ext_mask.src_mac, - ð_mask->src, + ð_mask->hdr.src_addr, RTE_ETHER_ADDR_LEN, ICE_NONDMA_TO_NONDMA); } - if (!rte_is_zero_ether_addr(ð_spec->dst) && - !rte_is_zero_ether_addr(ð_mask->dst)) { + if (!rte_is_zero_ether_addr(ð_spec->hdr.dst_addr) && + !rte_is_zero_ether_addr(ð_mask->hdr.dst_addr)) { input_set |= ICE_INSET_DMAC; ice_memcpy(&filter->input.ext_data.dst_mac, - ð_spec->dst, + ð_spec->hdr.dst_addr, RTE_ETHER_ADDR_LEN, ICE_NONDMA_TO_NONDMA); ice_memcpy(&filter->input.ext_mask.dst_mac, - ð_mask->dst, + ð_mask->hdr.dst_addr, RTE_ETHER_ADDR_LEN, ICE_NONDMA_TO_NONDMA); } diff --git a/drivers/net/ice/ice_fdir_filter.c b/drivers/net/ice/ice_fdir_filter.c index 7914ba940731..5d297afc290e 100644 --- a/drivers/net/ice/ice_fdir_filter.c +++ b/drivers/net/ice/ice_fdir_filter.c @@ -1971,17 
+1971,17 @@ ice_fdir_parse_pattern(__rte_unused struct ice_adapter *ad, if (!(eth_spec && eth_mask)) break; - if (!rte_is_zero_ether_addr(ð_mask->dst)) + if (!rte_is_zero_ether_addr(ð_mask->hdr.dst_addr)) *input_set |= ICE_INSET_DMAC; - if (!rte_is_zero_ether_addr(ð_mask->src)) + if (!rte_is_zero_ether_addr(ð_mask->hdr.src_addr)) *input_set |= ICE_INSET_SMAC; next_type = (item + 1)->type; /* Ignore this field except for ICE_FLTR_PTYPE_NON_IP_L2 */ - if (eth_mask->type == RTE_BE16(0xffff) && + if (eth_mask->hdr.ether_type == RTE_BE16(0xffff) && next_type == RTE_FLOW_ITEM_TYPE_END) { *input_set |= ICE_INSET_ETHERTYPE; - ether_type = rte_be_to_cpu_16(eth_spec->type); + ether_type = rte_be_to_cpu_16(eth_spec->hdr.ether_type); if (ether_type == RTE_ETHER_TYPE_IPV4 || ether_type == RTE_ETHER_TYPE_IPV6) { @@ -1997,11 +1997,11 @@ ice_fdir_parse_pattern(__rte_unused struct ice_adapter *ad, &filter->input.ext_data_outer : &filter->input.ext_data; rte_memcpy(&p_ext_data->src_mac, - ð_spec->src, RTE_ETHER_ADDR_LEN); + ð_spec->hdr.src_addr, RTE_ETHER_ADDR_LEN); rte_memcpy(&p_ext_data->dst_mac, - ð_spec->dst, RTE_ETHER_ADDR_LEN); + ð_spec->hdr.dst_addr, RTE_ETHER_ADDR_LEN); rte_memcpy(&p_ext_data->ether_type, - ð_spec->type, sizeof(eth_spec->type)); + ð_spec->hdr.ether_type, sizeof(eth_spec->hdr.ether_type)); break; case RTE_FLOW_ITEM_TYPE_IPV4: flow_type = ICE_FLTR_PTYPE_NONF_IPV4_OTHER; diff --git a/drivers/net/ice/ice_switch_filter.c b/drivers/net/ice/ice_switch_filter.c index 60f7934a1697..d84061340e6c 100644 --- a/drivers/net/ice/ice_switch_filter.c +++ b/drivers/net/ice/ice_switch_filter.c @@ -592,8 +592,8 @@ ice_switch_parse_pattern(const struct rte_flow_item pattern[], eth_spec = item->spec; eth_mask = item->mask; if (eth_spec && eth_mask) { - const uint8_t *a = eth_mask->src.addr_bytes; - const uint8_t *b = eth_mask->dst.addr_bytes; + const uint8_t *a = eth_mask->hdr.src_addr.addr_bytes; + const uint8_t *b = eth_mask->hdr.dst_addr.addr_bytes; if (tunnel_valid) input = 
&inner_input_set; else @@ -610,7 +610,7 @@ ice_switch_parse_pattern(const struct rte_flow_item pattern[], break; } } - if (eth_mask->type) + if (eth_mask->hdr.ether_type) *input |= ICE_INSET_ETHERTYPE; list[t].type = (tunnel_valid == 0) ? ICE_MAC_OFOS : ICE_MAC_IL; @@ -620,31 +620,31 @@ ice_switch_parse_pattern(const struct rte_flow_item pattern[], h = &list[t].h_u.eth_hdr; m = &list[t].m_u.eth_hdr; for (j = 0; j < RTE_ETHER_ADDR_LEN; j++) { - if (eth_mask->src.addr_bytes[j]) { + if (eth_mask->hdr.src_addr.addr_bytes[j]) { h->src_addr[j] = - eth_spec->src.addr_bytes[j]; + eth_spec->hdr.src_addr.addr_bytes[j]; m->src_addr[j] = - eth_mask->src.addr_bytes[j]; + eth_mask->hdr.src_addr.addr_bytes[j]; i = 1; input_set_byte++; } - if (eth_mask->dst.addr_bytes[j]) { + if (eth_mask->hdr.dst_addr.addr_bytes[j]) { h->dst_addr[j] = - eth_spec->dst.addr_bytes[j]; + eth_spec->hdr.dst_addr.addr_bytes[j]; m->dst_addr[j] = - eth_mask->dst.addr_bytes[j]; + eth_mask->hdr.dst_addr.addr_bytes[j]; i = 1; input_set_byte++; } } if (i) t++; - if (eth_mask->type) { + if (eth_mask->hdr.ether_type) { list[t].type = ICE_ETYPE_OL; list[t].h_u.ethertype.ethtype_id = - eth_spec->type; + eth_spec->hdr.ether_type; list[t].m_u.ethertype.ethtype_id = - eth_mask->type; + eth_mask->hdr.ether_type; input_set_byte += 2; t++; } @@ -1087,14 +1087,14 @@ ice_switch_parse_pattern(const struct rte_flow_item pattern[], *input |= ICE_INSET_VLAN_INNER; } - if (vlan_mask->tci) { + if (vlan_mask->hdr.vlan_tci) { list[t].h_u.vlan_hdr.vlan = - vlan_spec->tci; + vlan_spec->hdr.vlan_tci; list[t].m_u.vlan_hdr.vlan = - vlan_mask->tci; + vlan_mask->hdr.vlan_tci; input_set_byte += 2; } - if (vlan_mask->inner_type) { + if (vlan_mask->hdr.eth_proto) { rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, item, @@ -1879,7 +1879,7 @@ ice_switch_parse_pattern_action(struct ice_adapter *ad, eth_mask = item->mask; else continue; - if (eth_mask->type == UINT16_MAX) + if (eth_mask->hdr.ether_type == UINT16_MAX) tun_type = 
ICE_SW_TUN_AND_NON_TUN; } diff --git a/drivers/net/igc/igc_flow.c b/drivers/net/igc/igc_flow.c index 58a6a8a539c6..b677a0d61340 100644 --- a/drivers/net/igc/igc_flow.c +++ b/drivers/net/igc/igc_flow.c @@ -327,14 +327,14 @@ igc_parse_pattern_ether(const struct rte_flow_item *item, IGC_SET_FILTER_MASK(filter, IGC_FILTER_MASK_ETHER); /* destination and source MAC address are not supported */ - if (!rte_is_zero_ether_addr(&mask->src) || - !rte_is_zero_ether_addr(&mask->dst)) + if (!rte_is_zero_ether_addr(&mask->hdr.src_addr) || + !rte_is_zero_ether_addr(&mask->hdr.dst_addr)) return rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM_MASK, item, "Only support ether-type"); /* ether-type mask bits must be all 1 */ - if (IGC_NOT_ALL_BITS_SET(mask->type)) + if (IGC_NOT_ALL_BITS_SET(mask->hdr.ether_type)) return rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM_MASK, item, "Ethernet type mask bits must be all 1"); @@ -342,7 +342,7 @@ igc_parse_pattern_ether(const struct rte_flow_item *item, ether = &filter->ethertype; /* get ether-type */ - ether->ether_type = rte_be_to_cpu_16(spec->type); + ether->ether_type = rte_be_to_cpu_16(spec->hdr.ether_type); /* ether-type should not be IPv4 and IPv6 */ if (ether->ether_type == RTE_ETHER_TYPE_IPV4 || diff --git a/drivers/net/ipn3ke/ipn3ke_flow.c b/drivers/net/ipn3ke/ipn3ke_flow.c index 5b57ee9341d3..ee56d0f43d93 100644 --- a/drivers/net/ipn3ke/ipn3ke_flow.c +++ b/drivers/net/ipn3ke/ipn3ke_flow.c @@ -101,7 +101,7 @@ ipn3ke_pattern_vxlan(const struct rte_flow_item patterns[], eth = item->spec; rte_memcpy(&parser->key[0], - eth->src.addr_bytes, + eth->hdr.src_addr.addr_bytes, RTE_ETHER_ADDR_LEN); break; @@ -165,7 +165,7 @@ ipn3ke_pattern_mac(const struct rte_flow_item patterns[], eth = item->spec; rte_memcpy(parser->key, - eth->src.addr_bytes, + eth->hdr.src_addr.addr_bytes, RTE_ETHER_ADDR_LEN); break; @@ -227,13 +227,13 @@ ipn3ke_pattern_qinq(const struct rte_flow_item patterns[], if (!outer_vlan) { outer_vlan = 
item->spec; - tci = rte_be_to_cpu_16(outer_vlan->tci); + tci = rte_be_to_cpu_16(outer_vlan->hdr.vlan_tci); parser->key[0] = (tci & 0xff0) >> 4; parser->key[1] |= (tci & 0x00f) << 4; } else { inner_vlan = item->spec; - tci = rte_be_to_cpu_16(inner_vlan->tci); + tci = rte_be_to_cpu_16(inner_vlan->hdr.vlan_tci); parser->key[1] |= (tci & 0xf00) >> 8; parser->key[2] = (tci & 0x0ff); } diff --git a/drivers/net/ixgbe/ixgbe_flow.c b/drivers/net/ixgbe/ixgbe_flow.c index 110ff34fcceb..a11da3dc8beb 100644 --- a/drivers/net/ixgbe/ixgbe_flow.c +++ b/drivers/net/ixgbe/ixgbe_flow.c @@ -744,16 +744,16 @@ cons_parse_ethertype_filter(const struct rte_flow_attr *attr, * Mask bits of destination MAC address must be full * of 1 or full of 0. */ - if (!rte_is_zero_ether_addr(&eth_mask->src) || - (!rte_is_zero_ether_addr(&eth_mask->dst) && - !rte_is_broadcast_ether_addr(&eth_mask->dst))) { + if (!rte_is_zero_ether_addr(&eth_mask->hdr.src_addr) || + (!rte_is_zero_ether_addr(&eth_mask->hdr.dst_addr) && + !rte_is_broadcast_ether_addr(&eth_mask->hdr.dst_addr))) { rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, item, "Invalid ether address mask"); return -rte_errno; } - if ((eth_mask->type & UINT16_MAX) != UINT16_MAX) { + if ((eth_mask->hdr.ether_type & UINT16_MAX) != UINT16_MAX) { rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, item, "Invalid ethertype mask"); @@ -763,13 +763,13 @@ cons_parse_ethertype_filter(const struct rte_flow_attr *attr, /* If mask bits of destination MAC address * are full of 1, set RTE_ETHTYPE_FLAGS_MAC. */ - if (rte_is_broadcast_ether_addr(&eth_mask->dst)) { - filter->mac_addr = eth_spec->dst; + if (rte_is_broadcast_ether_addr(&eth_mask->hdr.dst_addr)) { + filter->mac_addr = eth_spec->hdr.dst_addr; filter->flags |= RTE_ETHTYPE_FLAGS_MAC; } else { filter->flags &= ~RTE_ETHTYPE_FLAGS_MAC; } - filter->ether_type = rte_be_to_cpu_16(eth_spec->type); + filter->ether_type = rte_be_to_cpu_16(eth_spec->hdr.ether_type); /* Check if the next non-void item is END.
*/ item = next_no_void_pattern(pattern, item); @@ -1698,7 +1698,7 @@ ixgbe_parse_fdir_filter_normal(struct rte_eth_dev *dev, /* Get the dst MAC. */ for (j = 0; j < RTE_ETHER_ADDR_LEN; j++) { rule->ixgbe_fdir.formatted.inner_mac[j] = - eth_spec->dst.addr_bytes[j]; + eth_spec->hdr.dst_addr.addr_bytes[j]; } } @@ -1709,7 +1709,7 @@ ixgbe_parse_fdir_filter_normal(struct rte_eth_dev *dev, eth_mask = item->mask; /* Ether type should be masked. */ - if (eth_mask->type || + if (eth_mask->hdr.ether_type || rule->mode == RTE_FDIR_MODE_SIGNATURE) { memset(rule, 0, sizeof(struct ixgbe_fdir_rule)); rte_flow_error_set(error, EINVAL, @@ -1726,8 +1726,8 @@ ixgbe_parse_fdir_filter_normal(struct rte_eth_dev *dev, * and don't support dst MAC address mask. */ for (j = 0; j < RTE_ETHER_ADDR_LEN; j++) { - if (eth_mask->src.addr_bytes[j] || - eth_mask->dst.addr_bytes[j] != 0xFF) { + if (eth_mask->hdr.src_addr.addr_bytes[j] || + eth_mask->hdr.dst_addr.addr_bytes[j] != 0xFF) { memset(rule, 0, sizeof(struct ixgbe_fdir_rule)); rte_flow_error_set(error, EINVAL, @@ -1790,9 +1790,9 @@ ixgbe_parse_fdir_filter_normal(struct rte_eth_dev *dev, vlan_spec = item->spec; vlan_mask = item->mask; - rule->ixgbe_fdir.formatted.vlan_id = vlan_spec->tci; + rule->ixgbe_fdir.formatted.vlan_id = vlan_spec->hdr.vlan_tci; - rule->mask.vlan_tci_mask = vlan_mask->tci; + rule->mask.vlan_tci_mask = vlan_mask->hdr.vlan_tci; rule->mask.vlan_tci_mask &= rte_cpu_to_be_16(0xEFFF); /* More than one tags are not supported. */ @@ -2642,7 +2642,7 @@ ixgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr, eth_mask = item->mask; /* Ether type should be masked. */ - if (eth_mask->type) { + if (eth_mask->hdr.ether_type) { memset(rule, 0, sizeof(struct ixgbe_fdir_rule)); rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, @@ -2652,7 +2652,7 @@ ixgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr, /* src MAC address should be masked. 
*/ for (j = 0; j < RTE_ETHER_ADDR_LEN; j++) { - if (eth_mask->src.addr_bytes[j]) { + if (eth_mask->hdr.src_addr.addr_bytes[j]) { memset(rule, 0, sizeof(struct ixgbe_fdir_rule)); rte_flow_error_set(error, EINVAL, @@ -2664,9 +2664,9 @@ ixgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr, rule->mask.mac_addr_byte_mask = 0; for (j = 0; j < RTE_ETHER_ADDR_LEN; j++) { /* It's a per byte mask. */ - if (eth_mask->dst.addr_bytes[j] == 0xFF) { + if (eth_mask->hdr.dst_addr.addr_bytes[j] == 0xFF) { rule->mask.mac_addr_byte_mask |= 0x1 << j; - } else if (eth_mask->dst.addr_bytes[j]) { + } else if (eth_mask->hdr.dst_addr.addr_bytes[j]) { memset(rule, 0, sizeof(struct ixgbe_fdir_rule)); rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, @@ -2685,7 +2685,7 @@ ixgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr, /* Get the dst MAC. */ for (j = 0; j < RTE_ETHER_ADDR_LEN; j++) { rule->ixgbe_fdir.formatted.inner_mac[j] = - eth_spec->dst.addr_bytes[j]; + eth_spec->hdr.dst_addr.addr_bytes[j]; } } @@ -2722,9 +2722,9 @@ ixgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr, vlan_spec = item->spec; vlan_mask = item->mask; - rule->ixgbe_fdir.formatted.vlan_id = vlan_spec->tci; + rule->ixgbe_fdir.formatted.vlan_id = vlan_spec->hdr.vlan_tci; - rule->mask.vlan_tci_mask = vlan_mask->tci; + rule->mask.vlan_tci_mask = vlan_mask->hdr.vlan_tci; rule->mask.vlan_tci_mask &= rte_cpu_to_be_16(0xEFFF); /* More than one tags are not supported. 
*/ diff --git a/drivers/net/mlx4/mlx4_flow.c b/drivers/net/mlx4/mlx4_flow.c index 9d7247cf81d0..8ef9fd2db44e 100644 --- a/drivers/net/mlx4/mlx4_flow.c +++ b/drivers/net/mlx4/mlx4_flow.c @@ -207,17 +207,17 @@ mlx4_flow_merge_eth(struct rte_flow *flow, uint32_t sum_dst = 0; uint32_t sum_src = 0; - for (i = 0; i != sizeof(mask->dst.addr_bytes); ++i) { - sum_dst += mask->dst.addr_bytes[i]; - sum_src += mask->src.addr_bytes[i]; + for (i = 0; i != sizeof(mask->hdr.dst_addr.addr_bytes); ++i) { + sum_dst += mask->hdr.dst_addr.addr_bytes[i]; + sum_src += mask->hdr.src_addr.addr_bytes[i]; } if (sum_src) { msg = "mlx4 does not support source MAC matching"; goto error; } else if (!sum_dst) { flow->promisc = 1; - } else if (sum_dst == 1 && mask->dst.addr_bytes[0] == 1) { - if (!(spec->dst.addr_bytes[0] & 1)) { + } else if (sum_dst == 1 && mask->hdr.dst_addr.addr_bytes[0] == 1) { + if (!(spec->hdr.dst_addr.addr_bytes[0] & 1)) { msg = "mlx4 does not support the explicit" " exclusion of all multicast traffic"; goto error; @@ -251,8 +251,8 @@ mlx4_flow_merge_eth(struct rte_flow *flow, flow->promisc = 1; return 0; } - memcpy(eth->val.dst_mac, spec->dst.addr_bytes, RTE_ETHER_ADDR_LEN); - memcpy(eth->mask.dst_mac, mask->dst.addr_bytes, RTE_ETHER_ADDR_LEN); + memcpy(eth->val.dst_mac, spec->hdr.dst_addr.addr_bytes, RTE_ETHER_ADDR_LEN); + memcpy(eth->mask.dst_mac, mask->hdr.dst_addr.addr_bytes, RTE_ETHER_ADDR_LEN); /* Remove unwanted bits from values. 
*/ for (i = 0; i < RTE_ETHER_ADDR_LEN; ++i) eth->val.dst_mac[i] &= eth->mask.dst_mac[i]; @@ -297,12 +297,12 @@ mlx4_flow_merge_vlan(struct rte_flow *flow, struct ibv_flow_spec_eth *eth; const char *msg; - if (!mask || !mask->tci) { + if (!mask || !mask->hdr.vlan_tci) { msg = "mlx4 cannot match all VLAN traffic while excluding" " non-VLAN traffic, TCI VID must be specified"; goto error; } - if (mask->tci != RTE_BE16(0x0fff)) { + if (mask->hdr.vlan_tci != RTE_BE16(0x0fff)) { msg = "mlx4 does not support partial TCI VID matching"; goto error; } @@ -310,8 +310,8 @@ mlx4_flow_merge_vlan(struct rte_flow *flow, return 0; eth = (void *)((uintptr_t)flow->ibv_attr + flow->ibv_attr_size - sizeof(*eth)); - eth->val.vlan_tag = spec->tci; - eth->mask.vlan_tag = mask->tci; + eth->val.vlan_tag = spec->hdr.vlan_tci; + eth->mask.vlan_tag = mask->hdr.vlan_tci; eth->val.vlan_tag &= eth->mask.vlan_tag; if (flow->ibv_attr->type == IBV_FLOW_ATTR_ALL_DEFAULT) flow->ibv_attr->type = IBV_FLOW_ATTR_NORMAL; @@ -582,7 +582,7 @@ static const struct mlx4_flow_proc_item mlx4_flow_proc_item_list[] = { RTE_FLOW_ITEM_TYPE_IPV4), .mask_support = &(const struct rte_flow_item_eth){ /* Only destination MAC can be matched. */ - .dst.addr_bytes = "\xff\xff\xff\xff\xff\xff", + .hdr.dst_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff", }, .mask_default = &rte_flow_item_eth_mask, .mask_sz = sizeof(struct rte_flow_item_eth), @@ -593,7 +593,7 @@ static const struct mlx4_flow_proc_item mlx4_flow_proc_item_list[] = { .next_item = NEXT_ITEM(RTE_FLOW_ITEM_TYPE_IPV4), .mask_support = &(const struct rte_flow_item_vlan){ /* Only TCI VID matching is supported. 
*/ - .tci = RTE_BE16(0x0fff), + .hdr.vlan_tci = RTE_BE16(0x0fff), }, .mask_default = &rte_flow_item_vlan_mask, .mask_sz = sizeof(struct rte_flow_item_vlan), @@ -1304,14 +1304,14 @@ mlx4_flow_internal(struct mlx4_priv *priv, struct rte_flow_error *error) }; struct rte_flow_item_eth eth_spec; const struct rte_flow_item_eth eth_mask = { - .dst.addr_bytes = "\xff\xff\xff\xff\xff\xff", + .hdr.dst_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff", }; const struct rte_flow_item_eth eth_allmulti = { - .dst.addr_bytes = "\x01\x00\x00\x00\x00\x00", + .hdr.dst_addr.addr_bytes = "\x01\x00\x00\x00\x00\x00", }; struct rte_flow_item_vlan vlan_spec; const struct rte_flow_item_vlan vlan_mask = { - .tci = RTE_BE16(0x0fff), + .hdr.vlan_tci = RTE_BE16(0x0fff), }; struct rte_flow_item pattern[] = { { @@ -1356,12 +1356,12 @@ mlx4_flow_internal(struct mlx4_priv *priv, struct rte_flow_error *error) .type = RTE_FLOW_ACTION_TYPE_END, }, }; - struct rte_ether_addr *rule_mac = &eth_spec.dst; + struct rte_ether_addr *rule_mac = &eth_spec.hdr.dst_addr; rte_be16_t *rule_vlan = (ETH_DEV(priv)->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER) && !ETH_DEV(priv)->data->promiscuous ? - &vlan_spec.tci : + &vlan_spec.hdr.vlan_tci : NULL; uint16_t vlan = 0; struct rte_flow *flow; @@ -1399,7 +1399,7 @@ mlx4_flow_internal(struct mlx4_priv *priv, struct rte_flow_error *error) if (i < RTE_DIM(priv->mac)) mac = &priv->mac[i]; else - mac = &eth_mask.dst; + mac = &eth_mask.hdr.dst_addr; if (rte_is_zero_ether_addr(mac)) continue; /* Check if MAC flow rule is already present.
*/ diff --git a/drivers/net/mlx5/hws/mlx5dr_definer.c b/drivers/net/mlx5/hws/mlx5dr_definer.c index 6b98eb8c9666..604384a24253 100644 --- a/drivers/net/mlx5/hws/mlx5dr_definer.c +++ b/drivers/net/mlx5/hws/mlx5dr_definer.c @@ -109,12 +109,12 @@ struct mlx5dr_definer_conv_data { /* Xmacro used to create generic item setter from items */ #define LIST_OF_FIELDS_INFO \ - X(SET_BE16, eth_type, v->type, rte_flow_item_eth) \ - X(SET_BE32P, eth_smac_47_16, &v->src.addr_bytes[0], rte_flow_item_eth) \ - X(SET_BE16P, eth_smac_15_0, &v->src.addr_bytes[4], rte_flow_item_eth) \ - X(SET_BE32P, eth_dmac_47_16, &v->dst.addr_bytes[0], rte_flow_item_eth) \ - X(SET_BE16P, eth_dmac_15_0, &v->dst.addr_bytes[4], rte_flow_item_eth) \ - X(SET_BE16, tci, v->tci, rte_flow_item_vlan) \ + X(SET_BE16, eth_type, v->hdr.ether_type, rte_flow_item_eth) \ + X(SET_BE32P, eth_smac_47_16, &v->hdr.src_addr.addr_bytes[0], rte_flow_item_eth) \ + X(SET_BE16P, eth_smac_15_0, &v->hdr.src_addr.addr_bytes[4], rte_flow_item_eth) \ + X(SET_BE32P, eth_dmac_47_16, &v->hdr.dst_addr.addr_bytes[0], rte_flow_item_eth) \ + X(SET_BE16P, eth_dmac_15_0, &v->hdr.dst_addr.addr_bytes[4], rte_flow_item_eth) \ + X(SET_BE16, tci, v->hdr.vlan_tci, rte_flow_item_vlan) \ X(SET, ipv4_ihl, v->ihl, rte_ipv4_hdr) \ X(SET, ipv4_tos, v->type_of_service, rte_ipv4_hdr) \ X(SET, ipv4_time_to_live, v->time_to_live, rte_ipv4_hdr) \ @@ -416,7 +416,7 @@ mlx5dr_definer_conv_item_eth(struct mlx5dr_definer_conv_data *cd, return rte_errno; } - if (m->type) { + if (m->hdr.ether_type) { fc = &cd->fc[DR_CALC_FNAME(ETH_TYPE, inner)]; fc->item_idx = item_idx; fc->tag_set = &mlx5dr_definer_eth_type_set; @@ -424,7 +424,7 @@ mlx5dr_definer_conv_item_eth(struct mlx5dr_definer_conv_data *cd, } /* Check SMAC 47_16 */ - if (memcmp(m->src.addr_bytes, empty_mac, 4)) { + if (memcmp(m->hdr.src_addr.addr_bytes, empty_mac, 4)) { fc = &cd->fc[DR_CALC_FNAME(ETH_SMAC_48_16, inner)]; fc->item_idx = item_idx; fc->tag_set = &mlx5dr_definer_eth_smac_47_16_set; @@ -432,7 
+432,7 @@ mlx5dr_definer_conv_item_eth(struct mlx5dr_definer_conv_data *cd, } /* Check SMAC 15_0 */ - if (memcmp(m->src.addr_bytes + 4, empty_mac + 4, 2)) { + if (memcmp(m->hdr.src_addr.addr_bytes + 4, empty_mac + 4, 2)) { fc = &cd->fc[DR_CALC_FNAME(ETH_SMAC_15_0, inner)]; fc->item_idx = item_idx; fc->tag_set = &mlx5dr_definer_eth_smac_15_0_set; @@ -440,7 +440,7 @@ mlx5dr_definer_conv_item_eth(struct mlx5dr_definer_conv_data *cd, } /* Check DMAC 47_16 */ - if (memcmp(m->dst.addr_bytes, empty_mac, 4)) { + if (memcmp(m->hdr.dst_addr.addr_bytes, empty_mac, 4)) { fc = &cd->fc[DR_CALC_FNAME(ETH_DMAC_48_16, inner)]; fc->item_idx = item_idx; fc->tag_set = &mlx5dr_definer_eth_dmac_47_16_set; @@ -448,7 +448,7 @@ mlx5dr_definer_conv_item_eth(struct mlx5dr_definer_conv_data *cd, } /* Check DMAC 15_0 */ - if (memcmp(m->dst.addr_bytes + 4, empty_mac + 4, 2)) { + if (memcmp(m->hdr.dst_addr.addr_bytes + 4, empty_mac + 4, 2)) { fc = &cd->fc[DR_CALC_FNAME(ETH_DMAC_15_0, inner)]; fc->item_idx = item_idx; fc->tag_set = &mlx5dr_definer_eth_dmac_15_0_set; @@ -493,14 +493,14 @@ mlx5dr_definer_conv_item_vlan(struct mlx5dr_definer_conv_data *cd, DR_CALC_SET(fc, eth_l2, first_vlan_qualifier, inner); } - if (m->tci) { + if (m->hdr.vlan_tci) { fc = &cd->fc[DR_CALC_FNAME(VLAN_TCI, inner)]; fc->item_idx = item_idx; fc->tag_set = &mlx5dr_definer_tci_set; DR_CALC_SET(fc, eth_l2, tci, inner); } - if (m->inner_type) { + if (m->hdr.eth_proto) { fc = &cd->fc[DR_CALC_FNAME(ETH_TYPE, inner)]; fc->item_idx = item_idx; fc->tag_set = &mlx5dr_definer_eth_type_set; diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c index a0cf677fb099..2512d6b52db9 100644 --- a/drivers/net/mlx5/mlx5_flow.c +++ b/drivers/net/mlx5/mlx5_flow.c @@ -301,13 +301,13 @@ mlx5_flow_expand_rss_item_complete(const struct rte_flow_item *item) return RTE_FLOW_ITEM_TYPE_VOID; switch (item->type) { case RTE_FLOW_ITEM_TYPE_ETH: - MLX5_XSET_ITEM_MASK_SPEC(eth, type); + MLX5_XSET_ITEM_MASK_SPEC(eth, hdr.ether_type); if 
(!mask) return RTE_FLOW_ITEM_TYPE_VOID; ret = mlx5_ethertype_to_item_type(spec, mask, false); break; case RTE_FLOW_ITEM_TYPE_VLAN: - MLX5_XSET_ITEM_MASK_SPEC(vlan, inner_type); + MLX5_XSET_ITEM_MASK_SPEC(vlan, hdr.eth_proto); if (!mask) return RTE_FLOW_ITEM_TYPE_VOID; ret = mlx5_ethertype_to_item_type(spec, mask, false); @@ -2431,9 +2431,9 @@ mlx5_flow_validate_item_eth(const struct rte_flow_item *item, { const struct rte_flow_item_eth *mask = item->mask; const struct rte_flow_item_eth nic_mask = { - .dst.addr_bytes = "\xff\xff\xff\xff\xff\xff", - .src.addr_bytes = "\xff\xff\xff\xff\xff\xff", - .type = RTE_BE16(0xffff), + .hdr.dst_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff", + .hdr.src_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff", + .hdr.ether_type = RTE_BE16(0xffff), .has_vlan = ext_vlan_sup ? 1 : 0, }; int ret; @@ -2493,8 +2493,8 @@ mlx5_flow_validate_item_vlan(const struct rte_flow_item *item, const struct rte_flow_item_vlan *spec = item->spec; const struct rte_flow_item_vlan *mask = item->mask; const struct rte_flow_item_vlan nic_mask = { - .tci = RTE_BE16(UINT16_MAX), - .inner_type = RTE_BE16(UINT16_MAX), + .hdr.vlan_tci = RTE_BE16(UINT16_MAX), + .hdr.eth_proto = RTE_BE16(UINT16_MAX), }; uint16_t vlan_tag = 0; const int tunnel = !!(item_flags & MLX5_FLOW_LAYER_TUNNEL); @@ -2522,7 +2522,7 @@ mlx5_flow_validate_item_vlan(const struct rte_flow_item *item, MLX5_ITEM_RANGE_NOT_ACCEPTED, error); if (ret) return ret; - if (!tunnel && mask->tci != RTE_BE16(0x0fff)) { + if (!tunnel && mask->hdr.vlan_tci != RTE_BE16(0x0fff)) { struct mlx5_priv *priv = dev->data->dev_private; if (priv->vmwa_context) { @@ -2542,8 +2542,8 @@ mlx5_flow_validate_item_vlan(const struct rte_flow_item *item, } } if (spec) { - vlan_tag = spec->tci; - vlan_tag &= mask->tci; + vlan_tag = spec->hdr.vlan_tci; + vlan_tag &= mask->hdr.vlan_tci; } /* * From verbs perspective an empty VLAN is equivalent @@ -7877,10 +7877,10 @@ mlx5_flow_lacp_miss(struct rte_eth_dev *dev) * a multicast dst mac causes 
kernel to give low priority to this flow. */ static const struct rte_flow_item_eth lacp_spec = { - .type = RTE_BE16(0x8809), + .hdr.ether_type = RTE_BE16(0x8809), }; static const struct rte_flow_item_eth lacp_mask = { - .type = 0xffff, + .hdr.ether_type = 0xffff, }; const struct rte_flow_attr attr = { .ingress = 1, diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c index 62c38b87a1f0..ff915183b7cc 100644 --- a/drivers/net/mlx5/mlx5_flow_dv.c +++ b/drivers/net/mlx5/mlx5_flow_dv.c @@ -594,17 +594,17 @@ flow_dv_convert_action_modify_mac memset(&eth, 0, sizeof(eth)); memset(&eth_mask, 0, sizeof(eth_mask)); if (action->type == RTE_FLOW_ACTION_TYPE_SET_MAC_SRC) { - memcpy(&eth.src.addr_bytes, &conf->mac_addr, - sizeof(eth.src.addr_bytes)); - memcpy(&eth_mask.src.addr_bytes, - &rte_flow_item_eth_mask.src.addr_bytes, - sizeof(eth_mask.src.addr_bytes)); + memcpy(&eth.hdr.src_addr.addr_bytes, &conf->mac_addr, + sizeof(eth.hdr.src_addr.addr_bytes)); + memcpy(&eth_mask.hdr.src_addr.addr_bytes, + &rte_flow_item_eth_mask.hdr.src_addr.addr_bytes, + sizeof(eth_mask.hdr.src_addr.addr_bytes)); } else { - memcpy(&eth.dst.addr_bytes, &conf->mac_addr, - sizeof(eth.dst.addr_bytes)); - memcpy(&eth_mask.dst.addr_bytes, - &rte_flow_item_eth_mask.dst.addr_bytes, - sizeof(eth_mask.dst.addr_bytes)); + memcpy(&eth.hdr.dst_addr.addr_bytes, &conf->mac_addr, + sizeof(eth.hdr.dst_addr.addr_bytes)); + memcpy(&eth_mask.hdr.dst_addr.addr_bytes, + &rte_flow_item_eth_mask.hdr.dst_addr.addr_bytes, + sizeof(eth_mask.hdr.dst_addr.addr_bytes)); } item.spec = &eth; item.mask = &eth_mask; @@ -2370,8 +2370,8 @@ flow_dv_validate_item_vlan(const struct rte_flow_item *item, { const struct rte_flow_item_vlan *mask = item->mask; const struct rte_flow_item_vlan nic_mask = { - .tci = RTE_BE16(UINT16_MAX), - .inner_type = RTE_BE16(UINT16_MAX), + .hdr.vlan_tci = RTE_BE16(UINT16_MAX), + .hdr.eth_proto = RTE_BE16(UINT16_MAX), .has_more_vlan = 1, }; const int tunnel = !!(item_flags & MLX5_FLOW_LAYER_TUNNEL); @@ -2399,7 +2399,7 @@
flow_dv_validate_item_vlan(const struct rte_flow_item *item, MLX5_ITEM_RANGE_NOT_ACCEPTED, error); if (ret) return ret; - if (!tunnel && mask->tci != RTE_BE16(0x0fff)) { + if (!tunnel && mask->hdr.vlan_tci != RTE_BE16(0x0fff)) { struct mlx5_priv *priv = dev->data->dev_private; if (priv->vmwa_context) { @@ -2920,9 +2920,9 @@ flow_dev_get_vlan_info_from_items(const struct rte_flow_item *items, struct rte_vlan_hdr *vlan) { const struct rte_flow_item_vlan nic_mask = { - .tci = RTE_BE16(MLX5DV_FLOW_VLAN_PCP_MASK | + .hdr.vlan_tci = RTE_BE16(MLX5DV_FLOW_VLAN_PCP_MASK | MLX5DV_FLOW_VLAN_VID_MASK), - .inner_type = RTE_BE16(0xffff), + .hdr.eth_proto = RTE_BE16(0xffff), }; if (items == NULL) @@ -2944,23 +2944,23 @@ flow_dev_get_vlan_info_from_items(const struct rte_flow_item *items, if (!vlan_m) vlan_m = &nic_mask; /* Only full match values are accepted */ - if ((vlan_m->tci & MLX5DV_FLOW_VLAN_PCP_MASK_BE) == + if ((vlan_m->hdr.vlan_tci & MLX5DV_FLOW_VLAN_PCP_MASK_BE) == MLX5DV_FLOW_VLAN_PCP_MASK_BE) { vlan->vlan_tci &= ~MLX5DV_FLOW_VLAN_PCP_MASK; vlan->vlan_tci |= - rte_be_to_cpu_16(vlan_v->tci & + rte_be_to_cpu_16(vlan_v->hdr.vlan_tci & MLX5DV_FLOW_VLAN_PCP_MASK_BE); } - if ((vlan_m->tci & MLX5DV_FLOW_VLAN_VID_MASK_BE) == + if ((vlan_m->hdr.vlan_tci & MLX5DV_FLOW_VLAN_VID_MASK_BE) == MLX5DV_FLOW_VLAN_VID_MASK_BE) { vlan->vlan_tci &= ~MLX5DV_FLOW_VLAN_VID_MASK; vlan->vlan_tci |= - rte_be_to_cpu_16(vlan_v->tci & + rte_be_to_cpu_16(vlan_v->hdr.vlan_tci & MLX5DV_FLOW_VLAN_VID_MASK_BE); } - if (vlan_m->inner_type == nic_mask.inner_type) - vlan->eth_proto = rte_be_to_cpu_16(vlan_v->inner_type & - vlan_m->inner_type); + if (vlan_m->hdr.eth_proto == nic_mask.hdr.eth_proto) + vlan->eth_proto = rte_be_to_cpu_16(vlan_v->hdr.eth_proto & + vlan_m->hdr.eth_proto); } } @@ -3010,8 +3010,8 @@ flow_dv_validate_action_push_vlan(struct rte_eth_dev *dev, "push vlan action for VF representor " "not supported on NIC table"); if (vlan_m && - (vlan_m->tci & MLX5DV_FLOW_VLAN_PCP_MASK_BE) && - 
(vlan_m->tci & MLX5DV_FLOW_VLAN_PCP_MASK_BE) != + (vlan_m->hdr.vlan_tci & MLX5DV_FLOW_VLAN_PCP_MASK_BE) && + (vlan_m->hdr.vlan_tci & MLX5DV_FLOW_VLAN_PCP_MASK_BE) != MLX5DV_FLOW_VLAN_PCP_MASK_BE && !(action_flags & MLX5_FLOW_ACTION_OF_SET_VLAN_PCP) && !(mlx5_flow_find_action @@ -3023,8 +3023,8 @@ flow_dv_validate_action_push_vlan(struct rte_eth_dev *dev, "push VLAN action cannot figure out " "PCP value"); if (vlan_m && - (vlan_m->tci & MLX5DV_FLOW_VLAN_VID_MASK_BE) && - (vlan_m->tci & MLX5DV_FLOW_VLAN_VID_MASK_BE) != + (vlan_m->hdr.vlan_tci & MLX5DV_FLOW_VLAN_VID_MASK_BE) && + (vlan_m->hdr.vlan_tci & MLX5DV_FLOW_VLAN_VID_MASK_BE) != MLX5DV_FLOW_VLAN_VID_MASK_BE && !(action_flags & MLX5_FLOW_ACTION_OF_SET_VLAN_VID) && !(mlx5_flow_find_action @@ -7130,10 +7130,10 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr, if (items->mask != NULL && items->spec != NULL) { ether_type = ((const struct rte_flow_item_eth *) - items->spec)->type; + items->spec)->hdr.ether_type; ether_type &= ((const struct rte_flow_item_eth *) - items->mask)->type; + items->mask)->hdr.ether_type; ether_type = rte_be_to_cpu_16(ether_type); } else { ether_type = 0; @@ -7149,10 +7149,10 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr, if (items->mask != NULL && items->spec != NULL) { ether_type = ((const struct rte_flow_item_vlan *) - items->spec)->inner_type; + items->spec)->hdr.eth_proto; ether_type &= ((const struct rte_flow_item_vlan *) - items->mask)->inner_type; + items->mask)->hdr.eth_proto; ether_type = rte_be_to_cpu_16(ether_type); } else { ether_type = 0; @@ -8460,9 +8460,9 @@ flow_dv_translate_item_eth(void *key, const struct rte_flow_item *item, const struct rte_flow_item_eth *eth_m; const struct rte_flow_item_eth *eth_v; const struct rte_flow_item_eth nic_mask = { - .dst.addr_bytes = "\xff\xff\xff\xff\xff\xff", - .src.addr_bytes = "\xff\xff\xff\xff\xff\xff", - .type = RTE_BE16(0xffff), + .hdr.dst_addr.addr_bytes = 
"\xff\xff\xff\xff\xff\xff", + .hdr.src_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff", + .hdr.ether_type = RTE_BE16(0xffff), .has_vlan = 0, }; void *hdrs_v; @@ -8480,12 +8480,12 @@ flow_dv_translate_item_eth(void *key, const struct rte_flow_item *item, hdrs_v = MLX5_ADDR_OF(fte_match_param, key, outer_headers); /* The value must be in the range of the mask. */ l24_v = MLX5_ADDR_OF(fte_match_set_lyr_2_4, hdrs_v, dmac_47_16); - for (i = 0; i < sizeof(eth_m->dst); ++i) - l24_v[i] = eth_m->dst.addr_bytes[i] & eth_v->dst.addr_bytes[i]; + for (i = 0; i < sizeof(eth_m->hdr.dst_addr); ++i) + l24_v[i] = eth_m->hdr.dst_addr.addr_bytes[i] & eth_v->hdr.dst_addr.addr_bytes[i]; l24_v = MLX5_ADDR_OF(fte_match_set_lyr_2_4, hdrs_v, smac_47_16); /* The value must be in the range of the mask. */ - for (i = 0; i < sizeof(eth_m->dst); ++i) - l24_v[i] = eth_m->src.addr_bytes[i] & eth_v->src.addr_bytes[i]; + for (i = 0; i < sizeof(eth_m->hdr.dst_addr); ++i) + l24_v[i] = eth_m->hdr.src_addr.addr_bytes[i] & eth_v->hdr.src_addr.addr_bytes[i]; /* * HW supports match on one Ethertype, the Ethertype following the last * VLAN tag of the packet (see PRM). @@ -8494,8 +8494,8 @@ flow_dv_translate_item_eth(void *key, const struct rte_flow_item *item, * ethertype, and use ip_version field instead. * eCPRI over Ether layer will use type value 0xAEFE. */ - if (eth_m->type == 0xFFFF) { - rte_be16_t type = eth_v->type; + if (eth_m->hdr.ether_type == 0xFFFF) { + rte_be16_t type = eth_v->hdr.ether_type; /* * When set the matcher mask, refer to the original spec @@ -8503,7 +8503,7 @@ flow_dv_translate_item_eth(void *key, const struct rte_flow_item *item, */ if (key_type == MLX5_SET_MATCHER_SW_M) { MLX5_SET(fte_match_set_lyr_2_4, hdrs_v, cvlan_tag, 1); - type = eth_vv->type; + type = eth_vv->hdr.ether_type; } /* Set cvlan_tag mask for any single\multi\un-tagged case. 
*/ switch (type) { @@ -8539,7 +8539,7 @@ flow_dv_translate_item_eth(void *key, const struct rte_flow_item *item, return; } l24_v = MLX5_ADDR_OF(fte_match_set_lyr_2_4, hdrs_v, ethertype); - *(uint16_t *)(l24_v) = eth_m->type & eth_v->type; + *(uint16_t *)(l24_v) = eth_m->hdr.ether_type & eth_v->hdr.ether_type; } /** @@ -8576,7 +8576,7 @@ flow_dv_translate_item_vlan(void *key, const struct rte_flow_item *item, * and pre-validated. */ if (vlan_vv) - wks->vlan_tag = rte_be_to_cpu_16(vlan_vv->tci) & 0x0fff; + wks->vlan_tag = rte_be_to_cpu_16(vlan_vv->hdr.vlan_tci) & 0x0fff; } /* * When VLAN item exists in flow, mark packet as tagged, @@ -8588,7 +8588,7 @@ flow_dv_translate_item_vlan(void *key, const struct rte_flow_item *item, return; MLX5_ITEM_UPDATE(item, key_type, vlan_v, vlan_m, &rte_flow_item_vlan_mask); - tci_v = rte_be_to_cpu_16(vlan_m->tci & vlan_v->tci); + tci_v = rte_be_to_cpu_16(vlan_m->hdr.vlan_tci & vlan_v->hdr.vlan_tci); MLX5_SET(fte_match_set_lyr_2_4, hdrs_v, first_vid, tci_v); MLX5_SET(fte_match_set_lyr_2_4, hdrs_v, first_cfi, tci_v >> 12); MLX5_SET(fte_match_set_lyr_2_4, hdrs_v, first_prio, tci_v >> 13); @@ -8596,15 +8596,15 @@ flow_dv_translate_item_vlan(void *key, const struct rte_flow_item *item, * HW is optimized for IPv4/IPv6. In such cases, avoid setting * ethertype, and use ip_version field instead. */ - if (vlan_m->inner_type == 0xFFFF) { - rte_be16_t inner_type = vlan_v->inner_type; + if (vlan_m->hdr.eth_proto == 0xFFFF) { + rte_be16_t inner_type = vlan_v->hdr.eth_proto; /* * When set the matcher mask, refer to the original spec * value. 
*/ if (key_type == MLX5_SET_MATCHER_SW_M) - inner_type = vlan_vv->inner_type; + inner_type = vlan_vv->hdr.eth_proto; switch (inner_type) { case RTE_BE16(RTE_ETHER_TYPE_VLAN): MLX5_SET(fte_match_set_lyr_2_4, hdrs_v, svlan_tag, 1); @@ -8632,7 +8632,7 @@ flow_dv_translate_item_vlan(void *key, const struct rte_flow_item *item, return; } MLX5_SET(fte_match_set_lyr_2_4, hdrs_v, ethertype, - rte_be_to_cpu_16(vlan_m->inner_type & vlan_v->inner_type)); + rte_be_to_cpu_16(vlan_m->hdr.eth_proto & vlan_v->hdr.eth_proto)); } /** diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c index a3c8056515da..b8f96839c8bf 100644 --- a/drivers/net/mlx5/mlx5_flow_hw.c +++ b/drivers/net/mlx5/mlx5_flow_hw.c @@ -91,68 +91,68 @@ static uint32_t mlx5_hw_act_flag[MLX5_HW_ACTION_FLAG_MAX] /* Ethernet item spec for promiscuous mode. */ static const struct rte_flow_item_eth ctrl_rx_eth_promisc_spec = { - .dst.addr_bytes = "\x00\x00\x00\x00\x00\x00", - .src.addr_bytes = "\x00\x00\x00\x00\x00\x00", - .type = 0, + .hdr.dst_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00", + .hdr.src_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00", + .hdr.ether_type = 0, }; /* Ethernet item mask for promiscuous mode. */ static const struct rte_flow_item_eth ctrl_rx_eth_promisc_mask = { - .dst.addr_bytes = "\x00\x00\x00\x00\x00\x00", - .src.addr_bytes = "\x00\x00\x00\x00\x00\x00", - .type = 0, + .hdr.dst_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00", + .hdr.src_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00", + .hdr.ether_type = 0, }; /* Ethernet item spec for all multicast mode. */ static const struct rte_flow_item_eth ctrl_rx_eth_mcast_spec = { - .dst.addr_bytes = "\x01\x00\x00\x00\x00\x00", - .src.addr_bytes = "\x00\x00\x00\x00\x00\x00", - .type = 0, + .hdr.dst_addr.addr_bytes = "\x01\x00\x00\x00\x00\x00", + .hdr.src_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00", + .hdr.ether_type = 0, }; /* Ethernet item mask for all multicast mode. 
*/ static const struct rte_flow_item_eth ctrl_rx_eth_mcast_mask = { - .dst.addr_bytes = "\x01\x00\x00\x00\x00\x00", - .src.addr_bytes = "\x00\x00\x00\x00\x00\x00", - .type = 0, + .hdr.dst_addr.addr_bytes = "\x01\x00\x00\x00\x00\x00", + .hdr.src_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00", + .hdr.ether_type = 0, }; /* Ethernet item spec for IPv4 multicast traffic. */ static const struct rte_flow_item_eth ctrl_rx_eth_ipv4_mcast_spec = { - .dst.addr_bytes = "\x01\x00\x5e\x00\x00\x00", - .src.addr_bytes = "\x00\x00\x00\x00\x00\x00", - .type = 0, + .hdr.dst_addr.addr_bytes = "\x01\x00\x5e\x00\x00\x00", + .hdr.src_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00", + .hdr.ether_type = 0, }; /* Ethernet item mask for IPv4 multicast traffic. */ static const struct rte_flow_item_eth ctrl_rx_eth_ipv4_mcast_mask = { - .dst.addr_bytes = "\xff\xff\xff\x00\x00\x00", - .src.addr_bytes = "\x00\x00\x00\x00\x00\x00", - .type = 0, + .hdr.dst_addr.addr_bytes = "\xff\xff\xff\x00\x00\x00", + .hdr.src_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00", + .hdr.ether_type = 0, }; /* Ethernet item spec for IPv6 multicast traffic. */ static const struct rte_flow_item_eth ctrl_rx_eth_ipv6_mcast_spec = { - .dst.addr_bytes = "\x33\x33\x00\x00\x00\x00", - .src.addr_bytes = "\x00\x00\x00\x00\x00\x00", - .type = 0, + .hdr.dst_addr.addr_bytes = "\x33\x33\x00\x00\x00\x00", + .hdr.src_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00", + .hdr.ether_type = 0, }; /* Ethernet item mask for IPv6 multicast traffic. */ static const struct rte_flow_item_eth ctrl_rx_eth_ipv6_mcast_mask = { - .dst.addr_bytes = "\xff\xff\x00\x00\x00\x00", - .src.addr_bytes = "\x00\x00\x00\x00\x00\x00", - .type = 0, + .hdr.dst_addr.addr_bytes = "\xff\xff\x00\x00\x00\x00", + .hdr.src_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00", + .hdr.ether_type = 0, }; /* Ethernet item mask for unicast traffic. 
*/ static const struct rte_flow_item_eth ctrl_rx_eth_dmac_mask = { - .dst.addr_bytes = "\xff\xff\xff\xff\xff\xff", - .src.addr_bytes = "\x00\x00\x00\x00\x00\x00", - .type = 0, + .hdr.dst_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff", + .hdr.src_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00", + .hdr.ether_type = 0, }; /* Ethernet item spec for broadcast. */ static const struct rte_flow_item_eth ctrl_rx_eth_bcast_spec = { - .dst.addr_bytes = "\xff\xff\xff\xff\xff\xff", - .src.addr_bytes = "\x00\x00\x00\x00\x00\x00", - .type = 0, + .hdr.dst_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff", + .hdr.src_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00", + .hdr.ether_type = 0, }; /** @@ -5682,9 +5682,9 @@ flow_hw_create_tx_default_mreg_copy_pattern_template(struct rte_eth_dev *dev) .egress = 1, }; struct rte_flow_item_eth promisc = { - .dst.addr_bytes = "\x00\x00\x00\x00\x00\x00", - .src.addr_bytes = "\x00\x00\x00\x00\x00\x00", - .type = 0, + .hdr.dst_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00", + .hdr.src_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00", + .hdr.ether_type = 0, }; struct rte_flow_item eth_all[] = { [0] = { @@ -8776,9 +8776,9 @@ mlx5_flow_hw_create_tx_default_mreg_copy_flow(struct rte_eth_dev *dev) { struct mlx5_priv *priv = dev->data->dev_private; struct rte_flow_item_eth promisc = { - .dst.addr_bytes = "\x00\x00\x00\x00\x00\x00", - .src.addr_bytes = "\x00\x00\x00\x00\x00\x00", - .type = 0, + .hdr.dst_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00", + .hdr.src_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00", + .hdr.ether_type = 0, }; struct rte_flow_item eth_all[] = { [0] = { @@ -9036,7 +9036,7 @@ __flow_hw_ctrl_flows_single_vlan(struct rte_eth_dev *dev, for (i = 0; i < priv->vlan_filter_n; ++i) { uint16_t vlan = priv->vlan_filter[i]; struct rte_flow_item_vlan vlan_spec = { - .tci = rte_cpu_to_be_16(vlan), + .hdr.vlan_tci = rte_cpu_to_be_16(vlan), }; items[1].spec = &vlan_spec; @@ -9080,7 +9080,7 @@ __flow_hw_ctrl_flows_unicast(struct rte_eth_dev *dev, if (!memcmp(mac, &cmp, 
sizeof(*mac))) continue; - memcpy(&eth_spec.dst.addr_bytes, mac->addr_bytes, RTE_ETHER_ADDR_LEN); + memcpy(&eth_spec.hdr.dst_addr.addr_bytes, mac->addr_bytes, RTE_ETHER_ADDR_LEN); if (flow_hw_create_ctrl_flow(dev, dev, tbl, items, 0, actions, 0)) return -rte_errno; } @@ -9123,11 +9123,11 @@ __flow_hw_ctrl_flows_unicast_vlan(struct rte_eth_dev *dev, if (!memcmp(mac, &cmp, sizeof(*mac))) continue; - memcpy(&eth_spec.dst.addr_bytes, mac->addr_bytes, RTE_ETHER_ADDR_LEN); + memcpy(&eth_spec.hdr.dst_addr.addr_bytes, mac->addr_bytes, RTE_ETHER_ADDR_LEN); for (j = 0; j < priv->vlan_filter_n; ++j) { uint16_t vlan = priv->vlan_filter[j]; struct rte_flow_item_vlan vlan_spec = { - .tci = rte_cpu_to_be_16(vlan), + .hdr.vlan_tci = rte_cpu_to_be_16(vlan), }; items[1].spec = &vlan_spec; diff --git a/drivers/net/mlx5/mlx5_flow_verbs.c b/drivers/net/mlx5/mlx5_flow_verbs.c index 28ea28bfbe02..1902b97ec6d4 100644 --- a/drivers/net/mlx5/mlx5_flow_verbs.c +++ b/drivers/net/mlx5/mlx5_flow_verbs.c @@ -417,16 +417,16 @@ flow_verbs_translate_item_eth(struct mlx5_flow *dev_flow, if (spec) { unsigned int i; - memcpy(&eth.val.dst_mac, spec->dst.addr_bytes, + memcpy(&eth.val.dst_mac, spec->hdr.dst_addr.addr_bytes, RTE_ETHER_ADDR_LEN); - memcpy(&eth.val.src_mac, spec->src.addr_bytes, + memcpy(&eth.val.src_mac, spec->hdr.src_addr.addr_bytes, RTE_ETHER_ADDR_LEN); - eth.val.ether_type = spec->type; - memcpy(&eth.mask.dst_mac, mask->dst.addr_bytes, + eth.val.ether_type = spec->hdr.ether_type; + memcpy(&eth.mask.dst_mac, mask->hdr.dst_addr.addr_bytes, RTE_ETHER_ADDR_LEN); - memcpy(&eth.mask.src_mac, mask->src.addr_bytes, + memcpy(&eth.mask.src_mac, mask->hdr.src_addr.addr_bytes, RTE_ETHER_ADDR_LEN); - eth.mask.ether_type = mask->type; + eth.mask.ether_type = mask->hdr.ether_type; /* Remove unwanted bits from values. 
*/ for (i = 0; i < RTE_ETHER_ADDR_LEN; ++i) { eth.val.dst_mac[i] &= eth.mask.dst_mac[i]; @@ -502,11 +502,11 @@ flow_verbs_translate_item_vlan(struct mlx5_flow *dev_flow, if (!mask) mask = &rte_flow_item_vlan_mask; if (spec) { - eth.val.vlan_tag = spec->tci; - eth.mask.vlan_tag = mask->tci; + eth.val.vlan_tag = spec->hdr.vlan_tci; + eth.mask.vlan_tag = mask->hdr.vlan_tci; eth.val.vlan_tag &= eth.mask.vlan_tag; - eth.val.ether_type = spec->inner_type; - eth.mask.ether_type = mask->inner_type; + eth.val.ether_type = spec->hdr.eth_proto; + eth.mask.ether_type = mask->hdr.eth_proto; eth.val.ether_type &= eth.mask.ether_type; } if (!(item_flags & l2m)) @@ -515,7 +515,7 @@ flow_verbs_translate_item_vlan(struct mlx5_flow *dev_flow, flow_verbs_item_vlan_update(&dev_flow->verbs.attr, &eth); if (!tunnel) dev_flow->handle->vf_vlan.tag = - rte_be_to_cpu_16(spec->tci) & 0x0fff; + rte_be_to_cpu_16(spec->hdr.vlan_tci) & 0x0fff; } /** @@ -1305,10 +1305,10 @@ flow_verbs_validate(struct rte_eth_dev *dev, if (items->mask != NULL && items->spec != NULL) { ether_type = ((const struct rte_flow_item_eth *) - items->spec)->type; + items->spec)->hdr.ether_type; ether_type &= ((const struct rte_flow_item_eth *) - items->mask)->type; + items->mask)->hdr.ether_type; if (ether_type == RTE_BE16(RTE_ETHER_TYPE_VLAN)) is_empty_vlan = true; ether_type = rte_be_to_cpu_16(ether_type); @@ -1328,10 +1328,10 @@ flow_verbs_validate(struct rte_eth_dev *dev, if (items->mask != NULL && items->spec != NULL) { ether_type = ((const struct rte_flow_item_vlan *) - items->spec)->inner_type; + items->spec)->hdr.eth_proto; ether_type &= ((const struct rte_flow_item_vlan *) - items->mask)->inner_type; + items->mask)->hdr.eth_proto; ether_type = rte_be_to_cpu_16(ether_type); } else { ether_type = 0; diff --git a/drivers/net/mlx5/mlx5_trigger.c b/drivers/net/mlx5/mlx5_trigger.c index f54443ed1ac4..3457bf65d3e1 100644 --- a/drivers/net/mlx5/mlx5_trigger.c +++ b/drivers/net/mlx5/mlx5_trigger.c @@ -1552,19 +1552,19 @@ 
mlx5_traffic_enable(struct rte_eth_dev *dev) { struct mlx5_priv *priv = dev->data->dev_private; struct rte_flow_item_eth bcast = { - .dst.addr_bytes = "\xff\xff\xff\xff\xff\xff", + .hdr.dst_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff", }; struct rte_flow_item_eth ipv6_multi_spec = { - .dst.addr_bytes = "\x33\x33\x00\x00\x00\x00", + .hdr.dst_addr.addr_bytes = "\x33\x33\x00\x00\x00\x00", }; struct rte_flow_item_eth ipv6_multi_mask = { - .dst.addr_bytes = "\xff\xff\x00\x00\x00\x00", + .hdr.dst_addr.addr_bytes = "\xff\xff\x00\x00\x00\x00", }; struct rte_flow_item_eth unicast = { - .src.addr_bytes = "\x00\x00\x00\x00\x00\x00", + .hdr.src_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00", }; struct rte_flow_item_eth unicast_mask = { - .dst.addr_bytes = "\xff\xff\xff\xff\xff\xff", + .hdr.dst_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff", }; const unsigned int vlan_filter_n = priv->vlan_filter_n; const struct rte_ether_addr cmp = { @@ -1637,9 +1637,9 @@ mlx5_traffic_enable(struct rte_eth_dev *dev) return 0; if (dev->data->promiscuous) { struct rte_flow_item_eth promisc = { - .dst.addr_bytes = "\x00\x00\x00\x00\x00\x00", - .src.addr_bytes = "\x00\x00\x00\x00\x00\x00", - .type = 0, + .hdr.dst_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00", + .hdr.src_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00", + .hdr.ether_type = 0, }; ret = mlx5_ctrl_flow(dev, &promisc, &promisc); @@ -1648,9 +1648,9 @@ mlx5_traffic_enable(struct rte_eth_dev *dev) } if (dev->data->all_multicast) { struct rte_flow_item_eth multicast = { - .dst.addr_bytes = "\x01\x00\x00\x00\x00\x00", - .src.addr_bytes = "\x00\x00\x00\x00\x00\x00", - .type = 0, + .hdr.dst_addr.addr_bytes = "\x01\x00\x00\x00\x00\x00", + .hdr.src_addr.addr_bytes = "\x00\x00\x00\x00\x00\x00", + .hdr.ether_type = 0, }; ret = mlx5_ctrl_flow(dev, &multicast, &multicast); @@ -1662,7 +1662,7 @@ mlx5_traffic_enable(struct rte_eth_dev *dev) uint16_t vlan = priv->vlan_filter[i]; struct rte_flow_item_vlan vlan_spec = { - .tci = rte_cpu_to_be_16(vlan), + 
.hdr.vlan_tci = rte_cpu_to_be_16(vlan), }; struct rte_flow_item_vlan vlan_mask = rte_flow_item_vlan_mask; @@ -1697,14 +1697,14 @@ mlx5_traffic_enable(struct rte_eth_dev *dev) if (!memcmp(mac, &cmp, sizeof(*mac))) continue; - memcpy(&unicast.dst.addr_bytes, + memcpy(&unicast.hdr.dst_addr.addr_bytes, mac->addr_bytes, RTE_ETHER_ADDR_LEN); for (j = 0; j != vlan_filter_n; ++j) { uint16_t vlan = priv->vlan_filter[j]; struct rte_flow_item_vlan vlan_spec = { - .tci = rte_cpu_to_be_16(vlan), + .hdr.vlan_tci = rte_cpu_to_be_16(vlan), }; struct rte_flow_item_vlan vlan_mask = rte_flow_item_vlan_mask; diff --git a/drivers/net/mvpp2/mrvl_flow.c b/drivers/net/mvpp2/mrvl_flow.c index 99695b91c496..e74a5f83f55b 100644 --- a/drivers/net/mvpp2/mrvl_flow.c +++ b/drivers/net/mvpp2/mrvl_flow.c @@ -189,14 +189,14 @@ mrvl_parse_mac(const struct rte_flow_item_eth *spec, const uint8_t *k, *m; if (parse_dst) { - k = spec->dst.addr_bytes; - m = mask->dst.addr_bytes; + k = spec->hdr.dst_addr.addr_bytes; + m = mask->hdr.dst_addr.addr_bytes; flow->table_key.proto_field[flow->rule.num_fields].field.eth = MV_NET_ETH_F_DA; } else { - k = spec->src.addr_bytes; - m = mask->src.addr_bytes; + k = spec->hdr.src_addr.addr_bytes; + m = mask->hdr.src_addr.addr_bytes; flow->table_key.proto_field[flow->rule.num_fields].field.eth = MV_NET_ETH_F_SA; @@ -275,7 +275,7 @@ mrvl_parse_type(const struct rte_flow_item_eth *spec, mrvl_alloc_key_mask(key_field); key_field->size = 2; - k = rte_be_to_cpu_16(spec->type); + k = rte_be_to_cpu_16(spec->hdr.ether_type); snprintf((char *)key_field->key, MRVL_CLS_STR_SIZE_MAX, "%u", k); flow->table_key.proto_field[flow->rule.num_fields].proto = @@ -311,7 +311,7 @@ mrvl_parse_vlan_id(const struct rte_flow_item_vlan *spec, mrvl_alloc_key_mask(key_field); key_field->size = 2; - k = rte_be_to_cpu_16(spec->tci) & MRVL_VLAN_ID_MASK; + k = rte_be_to_cpu_16(spec->hdr.vlan_tci) & MRVL_VLAN_ID_MASK; snprintf((char *)key_field->key, MRVL_CLS_STR_SIZE_MAX, "%u", k); 
flow->table_key.proto_field[flow->rule.num_fields].proto = @@ -347,7 +347,7 @@ mrvl_parse_vlan_pri(const struct rte_flow_item_vlan *spec, mrvl_alloc_key_mask(key_field); key_field->size = 1; - k = (rte_be_to_cpu_16(spec->tci) & MRVL_VLAN_PRI_MASK) >> 13; + k = (rte_be_to_cpu_16(spec->hdr.vlan_tci) & MRVL_VLAN_PRI_MASK) >> 13; snprintf((char *)key_field->key, MRVL_CLS_STR_SIZE_MAX, "%u", k); flow->table_key.proto_field[flow->rule.num_fields].proto = @@ -856,19 +856,19 @@ mrvl_parse_eth(const struct rte_flow_item *item, struct rte_flow *flow, memset(&zero, 0, sizeof(zero)); - if (memcmp(&mask->dst, &zero, sizeof(mask->dst))) { + if (memcmp(&mask->hdr.dst_addr, &zero, sizeof(mask->hdr.dst_addr))) { ret = mrvl_parse_dmac(spec, mask, flow); if (ret) goto out; } - if (memcmp(&mask->src, &zero, sizeof(mask->src))) { + if (memcmp(&mask->hdr.src_addr, &zero, sizeof(mask->hdr.src_addr))) { ret = mrvl_parse_smac(spec, mask, flow); if (ret) goto out; } - if (mask->type) { + if (mask->hdr.ether_type) { MRVL_LOG(WARNING, "eth type mask is ignored"); ret = mrvl_parse_type(spec, mask, flow); if (ret) @@ -905,7 +905,7 @@ mrvl_parse_vlan(const struct rte_flow_item *item, if (ret) return ret; - m = rte_be_to_cpu_16(mask->tci); + m = rte_be_to_cpu_16(mask->hdr.vlan_tci); if (m & MRVL_VLAN_ID_MASK) { MRVL_LOG(WARNING, "vlan id mask is ignored"); ret = mrvl_parse_vlan_id(spec, mask, flow); @@ -920,12 +920,12 @@ mrvl_parse_vlan(const struct rte_flow_item *item, goto out; } - if (mask->inner_type) { + if (mask->hdr.eth_proto) { struct rte_flow_item_eth spec_eth = { - .type = spec->inner_type, + .hdr.ether_type = spec->hdr.eth_proto, }; struct rte_flow_item_eth mask_eth = { - .type = mask->inner_type, + .hdr.ether_type = mask->hdr.eth_proto, }; /* TPID is not supported so if ETH_TYPE was selected, diff --git a/drivers/net/nfp/nfp_flow.c b/drivers/net/nfp/nfp_flow.c index ff2e21c817b4..bd3a8d2a3b2f 100644 --- a/drivers/net/nfp/nfp_flow.c +++ b/drivers/net/nfp/nfp_flow.c @@ -1099,11 +1099,11 
@@ nfp_flow_merge_eth(__rte_unused struct nfp_app_fw_flower *app_fw_flower, eth = (void *)*mbuf_off; if (is_mask) { - memcpy(eth->mac_src, mask->src.addr_bytes, RTE_ETHER_ADDR_LEN); - memcpy(eth->mac_dst, mask->dst.addr_bytes, RTE_ETHER_ADDR_LEN); + memcpy(eth->mac_src, mask->hdr.src_addr.addr_bytes, RTE_ETHER_ADDR_LEN); + memcpy(eth->mac_dst, mask->hdr.dst_addr.addr_bytes, RTE_ETHER_ADDR_LEN); } else { - memcpy(eth->mac_src, spec->src.addr_bytes, RTE_ETHER_ADDR_LEN); - memcpy(eth->mac_dst, spec->dst.addr_bytes, RTE_ETHER_ADDR_LEN); + memcpy(eth->mac_src, spec->hdr.src_addr.addr_bytes, RTE_ETHER_ADDR_LEN); + memcpy(eth->mac_dst, spec->hdr.dst_addr.addr_bytes, RTE_ETHER_ADDR_LEN); } eth->mpls_lse = 0; @@ -1136,10 +1136,10 @@ nfp_flow_merge_vlan(__rte_unused struct nfp_app_fw_flower *app_fw_flower, mask = item->mask ? item->mask : proc->mask_default; if (is_mask) { meta_tci = (struct nfp_flower_meta_tci *)nfp_flow->payload.mask_data; - meta_tci->tci |= mask->tci; + meta_tci->tci |= mask->hdr.vlan_tci; } else { meta_tci = (struct nfp_flower_meta_tci *)nfp_flow->payload.unmasked_data; - meta_tci->tci |= spec->tci; + meta_tci->tci |= spec->hdr.vlan_tci; } return 0; diff --git a/drivers/net/sfc/sfc_flow.c b/drivers/net/sfc/sfc_flow.c index fb59abd0b563..f098edc6eb33 100644 --- a/drivers/net/sfc/sfc_flow.c +++ b/drivers/net/sfc/sfc_flow.c @@ -280,12 +280,12 @@ sfc_flow_parse_eth(const struct rte_flow_item *item, const struct rte_flow_item_eth *spec = NULL; const struct rte_flow_item_eth *mask = NULL; const struct rte_flow_item_eth supp_mask = { - .dst.addr_bytes = { 0xff, 0xff, 0xff, 0xff, 0xff, 0xff }, - .src.addr_bytes = { 0xff, 0xff, 0xff, 0xff, 0xff, 0xff }, - .type = 0xffff, + .hdr.dst_addr.addr_bytes = { 0xff, 0xff, 0xff, 0xff, 0xff, 0xff }, + .hdr.src_addr.addr_bytes = { 0xff, 0xff, 0xff, 0xff, 0xff, 0xff }, + .hdr.ether_type = 0xffff, }; const struct rte_flow_item_eth ifrm_supp_mask = { - .dst.addr_bytes = { 0xff, 0xff, 0xff, 0xff, 0xff, 0xff }, + 
.hdr.dst_addr.addr_bytes = { 0xff, 0xff, 0xff, 0xff, 0xff, 0xff }, }; const uint8_t ig_mask[EFX_MAC_ADDR_LEN] = { 0x01, 0x00, 0x00, 0x00, 0x00, 0x00 @@ -319,15 +319,15 @@ sfc_flow_parse_eth(const struct rte_flow_item *item, if (spec == NULL) return 0; - if (rte_is_same_ether_addr(&mask->dst, &supp_mask.dst)) { + if (rte_is_same_ether_addr(&mask->hdr.dst_addr, &supp_mask.hdr.dst_addr)) { efx_spec->efs_match_flags |= is_ifrm ? EFX_FILTER_MATCH_IFRM_LOC_MAC : EFX_FILTER_MATCH_LOC_MAC; - rte_memcpy(loc_mac, spec->dst.addr_bytes, + rte_memcpy(loc_mac, spec->hdr.dst_addr.addr_bytes, EFX_MAC_ADDR_LEN); - } else if (memcmp(mask->dst.addr_bytes, ig_mask, + } else if (memcmp(mask->hdr.dst_addr.addr_bytes, ig_mask, EFX_MAC_ADDR_LEN) == 0) { - if (rte_is_unicast_ether_addr(&spec->dst)) + if (rte_is_unicast_ether_addr(&spec->hdr.dst_addr)) efx_spec->efs_match_flags |= is_ifrm ? EFX_FILTER_MATCH_IFRM_UNKNOWN_UCAST_DST : EFX_FILTER_MATCH_UNKNOWN_UCAST_DST; @@ -335,7 +335,7 @@ sfc_flow_parse_eth(const struct rte_flow_item *item, efx_spec->efs_match_flags |= is_ifrm ? 
EFX_FILTER_MATCH_IFRM_UNKNOWN_MCAST_DST : EFX_FILTER_MATCH_UNKNOWN_MCAST_DST; - } else if (!rte_is_zero_ether_addr(&mask->dst)) { + } else if (!rte_is_zero_ether_addr(&mask->hdr.dst_addr)) { goto fail_bad_mask; } @@ -344,11 +344,11 @@ sfc_flow_parse_eth(const struct rte_flow_item *item, * ethertype masks are equal to zero in inner frame, * so these fields are filled in only for the outer frame */ - if (rte_is_same_ether_addr(&mask->src, &supp_mask.src)) { + if (rte_is_same_ether_addr(&mask->hdr.src_addr, &supp_mask.hdr.src_addr)) { efx_spec->efs_match_flags |= EFX_FILTER_MATCH_REM_MAC; - rte_memcpy(efx_spec->efs_rem_mac, spec->src.addr_bytes, + rte_memcpy(efx_spec->efs_rem_mac, spec->hdr.src_addr.addr_bytes, EFX_MAC_ADDR_LEN); - } else if (!rte_is_zero_ether_addr(&mask->src)) { + } else if (!rte_is_zero_ether_addr(&mask->hdr.src_addr)) { goto fail_bad_mask; } @@ -356,10 +356,10 @@ sfc_flow_parse_eth(const struct rte_flow_item *item, * Ether type is in big-endian byte order in item and * in little-endian in efx_spec, so byte swap is used */ - if (mask->type == supp_mask.type) { + if (mask->hdr.ether_type == supp_mask.hdr.ether_type) { efx_spec->efs_match_flags |= EFX_FILTER_MATCH_ETHER_TYPE; - efx_spec->efs_ether_type = rte_bswap16(spec->type); - } else if (mask->type != 0) { + efx_spec->efs_ether_type = rte_bswap16(spec->hdr.ether_type); + } else if (mask->hdr.ether_type != 0) { goto fail_bad_mask; } @@ -394,8 +394,8 @@ sfc_flow_parse_vlan(const struct rte_flow_item *item, const struct rte_flow_item_vlan *spec = NULL; const struct rte_flow_item_vlan *mask = NULL; const struct rte_flow_item_vlan supp_mask = { - .tci = rte_cpu_to_be_16(RTE_ETH_VLAN_ID_MAX), - .inner_type = RTE_BE16(0xffff), + .hdr.vlan_tci = rte_cpu_to_be_16(RTE_ETH_VLAN_ID_MAX), + .hdr.eth_proto = RTE_BE16(0xffff), }; rc = sfc_flow_parse_init(item, @@ -414,9 +414,9 @@ sfc_flow_parse_vlan(const struct rte_flow_item *item, * If two VLAN items are included, the first matches * the outer tag and the 
next matches the inner tag. */ - if (mask->tci == supp_mask.tci) { + if (mask->hdr.vlan_tci == supp_mask.hdr.vlan_tci) { /* Apply mask to keep VID only */ - vid = rte_bswap16(spec->tci & mask->tci); + vid = rte_bswap16(spec->hdr.vlan_tci & mask->hdr.vlan_tci); if (!(efx_spec->efs_match_flags & EFX_FILTER_MATCH_OUTER_VID)) { @@ -445,13 +445,13 @@ sfc_flow_parse_vlan(const struct rte_flow_item *item, "VLAN TPID matching is not supported"); return -rte_errno; } - if (mask->inner_type == supp_mask.inner_type) { + if (mask->hdr.eth_proto == supp_mask.hdr.eth_proto) { efx_spec->efs_match_flags |= EFX_FILTER_MATCH_ETHER_TYPE; - efx_spec->efs_ether_type = rte_bswap16(spec->inner_type); - } else if (mask->inner_type) { + efx_spec->efs_ether_type = rte_bswap16(spec->hdr.eth_proto); + } else if (mask->hdr.eth_proto) { rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, item, - "Bad mask for VLAN inner_type"); + "Bad mask for VLAN inner type"); return -rte_errno; } diff --git a/drivers/net/sfc/sfc_mae.c b/drivers/net/sfc/sfc_mae.c index 421bb6da9582..710d04be13af 100644 --- a/drivers/net/sfc/sfc_mae.c +++ b/drivers/net/sfc/sfc_mae.c @@ -1701,18 +1701,18 @@ static const struct sfc_mae_field_locator flocs_eth[] = { * The field is handled by sfc_mae_rule_process_pattern_data(). 
*/ SFC_MAE_FIELD_HANDLING_DEFERRED, - RTE_SIZEOF_FIELD(struct rte_flow_item_eth, type), - offsetof(struct rte_flow_item_eth, type), + RTE_SIZEOF_FIELD(struct rte_flow_item_eth, hdr.ether_type), + offsetof(struct rte_flow_item_eth, hdr.ether_type), }, { EFX_MAE_FIELD_ETH_DADDR_BE, - RTE_SIZEOF_FIELD(struct rte_flow_item_eth, dst), - offsetof(struct rte_flow_item_eth, dst), + RTE_SIZEOF_FIELD(struct rte_flow_item_eth, hdr.dst_addr), + offsetof(struct rte_flow_item_eth, hdr.dst_addr), }, { EFX_MAE_FIELD_ETH_SADDR_BE, - RTE_SIZEOF_FIELD(struct rte_flow_item_eth, src), - offsetof(struct rte_flow_item_eth, src), + RTE_SIZEOF_FIELD(struct rte_flow_item_eth, hdr.src_addr), + offsetof(struct rte_flow_item_eth, hdr.src_addr), }, }; @@ -1770,8 +1770,8 @@ sfc_mae_rule_parse_item_eth(const struct rte_flow_item *item, * sfc_mae_rule_process_pattern_data() will consider them * altogether when the rest of the items have been parsed. */ - ethertypes[0].value = item_spec->type; - ethertypes[0].mask = item_mask->type; + ethertypes[0].value = item_spec->hdr.ether_type; + ethertypes[0].mask = item_mask->hdr.ether_type; if (item_mask->has_vlan) { pdata->has_ovlan_mask = B_TRUE; if (item_spec->has_vlan) @@ -1794,8 +1794,8 @@ static const struct sfc_mae_field_locator flocs_vlan[] = { /* Outermost tag */ { EFX_MAE_FIELD_VLAN0_TCI_BE, - RTE_SIZEOF_FIELD(struct rte_flow_item_vlan, tci), - offsetof(struct rte_flow_item_vlan, tci), + RTE_SIZEOF_FIELD(struct rte_flow_item_vlan, hdr.vlan_tci), + offsetof(struct rte_flow_item_vlan, hdr.vlan_tci), }, { /* @@ -1803,15 +1803,15 @@ static const struct sfc_mae_field_locator flocs_vlan[] = { * The field is handled by sfc_mae_rule_process_pattern_data(). 
*/ SFC_MAE_FIELD_HANDLING_DEFERRED, - RTE_SIZEOF_FIELD(struct rte_flow_item_vlan, inner_type), - offsetof(struct rte_flow_item_vlan, inner_type), + RTE_SIZEOF_FIELD(struct rte_flow_item_vlan, hdr.eth_proto), + offsetof(struct rte_flow_item_vlan, hdr.eth_proto), }, /* Innermost tag */ { EFX_MAE_FIELD_VLAN1_TCI_BE, - RTE_SIZEOF_FIELD(struct rte_flow_item_vlan, tci), - offsetof(struct rte_flow_item_vlan, tci), + RTE_SIZEOF_FIELD(struct rte_flow_item_vlan, hdr.vlan_tci), + offsetof(struct rte_flow_item_vlan, hdr.vlan_tci), }, { /* @@ -1819,8 +1819,8 @@ static const struct sfc_mae_field_locator flocs_vlan[] = { * The field is handled by sfc_mae_rule_process_pattern_data(). */ SFC_MAE_FIELD_HANDLING_DEFERRED, - RTE_SIZEOF_FIELD(struct rte_flow_item_vlan, inner_type), - offsetof(struct rte_flow_item_vlan, inner_type), + RTE_SIZEOF_FIELD(struct rte_flow_item_vlan, hdr.eth_proto), + offsetof(struct rte_flow_item_vlan, hdr.eth_proto), }, }; @@ -1899,9 +1899,9 @@ sfc_mae_rule_parse_item_vlan(const struct rte_flow_item *item, * sfc_mae_rule_process_pattern_data() will consider them * altogether when the rest of the items have been parsed. 
*/ - et[pdata->nb_vlan_tags + 1].value = item_spec->inner_type; - et[pdata->nb_vlan_tags + 1].mask = item_mask->inner_type; - pdata->tci_masks[pdata->nb_vlan_tags] = item_mask->tci; + et[pdata->nb_vlan_tags + 1].value = item_spec->hdr.eth_proto; + et[pdata->nb_vlan_tags + 1].mask = item_mask->hdr.eth_proto; + pdata->tci_masks[pdata->nb_vlan_tags] = item_mask->hdr.vlan_tci; if (item_mask->has_more_vlan) { if (pdata->nb_vlan_tags == SFC_MAE_MATCH_VLAN_MAX_NTAGS) { diff --git a/drivers/net/tap/tap_flow.c b/drivers/net/tap/tap_flow.c index efe66fe0593d..ed4d42f92f9f 100644 --- a/drivers/net/tap/tap_flow.c +++ b/drivers/net/tap/tap_flow.c @@ -258,9 +258,9 @@ static const struct tap_flow_items tap_flow_items[] = { RTE_FLOW_ITEM_TYPE_IPV4, RTE_FLOW_ITEM_TYPE_IPV6), .mask = &(const struct rte_flow_item_eth){ - .dst.addr_bytes = "\xff\xff\xff\xff\xff\xff", - .src.addr_bytes = "\xff\xff\xff\xff\xff\xff", - .type = -1, + .hdr.dst_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff", + .hdr.src_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff", + .hdr.ether_type = -1, }, .mask_sz = sizeof(struct rte_flow_item_eth), .default_mask = &rte_flow_item_eth_mask, @@ -272,11 +272,11 @@ static const struct tap_flow_items tap_flow_items[] = { .mask = &(const struct rte_flow_item_vlan){ /* DEI matching is not supported */ #if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN - .tci = 0xffef, + .hdr.vlan_tci = 0xffef, #else - .tci = 0xefff, + .hdr.vlan_tci = 0xefff, #endif - .inner_type = -1, + .hdr.eth_proto = -1, }, .mask_sz = sizeof(struct rte_flow_item_vlan), .default_mask = &rte_flow_item_vlan_mask, @@ -391,7 +391,7 @@ static struct remote_rule implicit_rte_flows[TAP_REMOTE_MAX_IDX] = { .items[0] = { .type = RTE_FLOW_ITEM_TYPE_ETH, .mask = &(const struct rte_flow_item_eth){ - .dst.addr_bytes = "\xff\xff\xff\xff\xff\xff", + .hdr.dst_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff", }, }, .items[1] = { @@ -408,10 +408,10 @@ static struct remote_rule implicit_rte_flows[TAP_REMOTE_MAX_IDX] = { .items[0] = { .type = 
RTE_FLOW_ITEM_TYPE_ETH, .mask = &(const struct rte_flow_item_eth){ - .dst.addr_bytes = "\xff\xff\xff\xff\xff\xff", + .hdr.dst_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff", }, .spec = &(const struct rte_flow_item_eth){ - .dst.addr_bytes = "\xff\xff\xff\xff\xff\xff", + .hdr.dst_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff", }, }, .items[1] = { @@ -428,10 +428,10 @@ static struct remote_rule implicit_rte_flows[TAP_REMOTE_MAX_IDX] = { .items[0] = { .type = RTE_FLOW_ITEM_TYPE_ETH, .mask = &(const struct rte_flow_item_eth){ - .dst.addr_bytes = "\x33\x33\x00\x00\x00\x00", + .hdr.dst_addr.addr_bytes = "\x33\x33\x00\x00\x00\x00", }, .spec = &(const struct rte_flow_item_eth){ - .dst.addr_bytes = "\x33\x33\x00\x00\x00\x00", + .hdr.dst_addr.addr_bytes = "\x33\x33\x00\x00\x00\x00", }, }, .items[1] = { @@ -462,10 +462,10 @@ static struct remote_rule implicit_rte_flows[TAP_REMOTE_MAX_IDX] = { .items[0] = { .type = RTE_FLOW_ITEM_TYPE_ETH, .mask = &(const struct rte_flow_item_eth){ - .dst.addr_bytes = "\x01\x00\x00\x00\x00\x00", + .hdr.dst_addr.addr_bytes = "\x01\x00\x00\x00\x00\x00", }, .spec = &(const struct rte_flow_item_eth){ - .dst.addr_bytes = "\x01\x00\x00\x00\x00\x00", + .hdr.dst_addr.addr_bytes = "\x01\x00\x00\x00\x00\x00", }, }, .items[1] = { @@ -527,31 +527,31 @@ tap_flow_create_eth(const struct rte_flow_item *item, void *data) if (!mask) mask = tap_flow_items[RTE_FLOW_ITEM_TYPE_ETH].default_mask; /* TC does not support eth_type masking. Only accept if exact match. 
*/ - if (mask->type && mask->type != 0xffff) + if (mask->hdr.ether_type && mask->hdr.ether_type != 0xffff) return -1; if (!spec) return 0; /* store eth_type for consistency if ipv4/6 pattern item comes next */ - if (spec->type & mask->type) - info->eth_type = spec->type; + if (spec->hdr.ether_type & mask->hdr.ether_type) + info->eth_type = spec->hdr.ether_type; if (!flow) return 0; msg = &flow->msg; - if (!rte_is_zero_ether_addr(&mask->dst)) { + if (!rte_is_zero_ether_addr(&mask->hdr.dst_addr)) { tap_nlattr_add(&msg->nh, TCA_FLOWER_KEY_ETH_DST, RTE_ETHER_ADDR_LEN, - &spec->dst.addr_bytes); + &spec->hdr.dst_addr.addr_bytes); tap_nlattr_add(&msg->nh, TCA_FLOWER_KEY_ETH_DST_MASK, RTE_ETHER_ADDR_LEN, - &mask->dst.addr_bytes); + &mask->hdr.dst_addr.addr_bytes); } - if (!rte_is_zero_ether_addr(&mask->src)) { + if (!rte_is_zero_ether_addr(&mask->hdr.src_addr)) { tap_nlattr_add(&msg->nh, TCA_FLOWER_KEY_ETH_SRC, RTE_ETHER_ADDR_LEN, - &spec->src.addr_bytes); + &spec->hdr.src_addr.addr_bytes); tap_nlattr_add(&msg->nh, TCA_FLOWER_KEY_ETH_SRC_MASK, RTE_ETHER_ADDR_LEN, - &mask->src.addr_bytes); + &mask->hdr.src_addr.addr_bytes); } return 0; } @@ -587,11 +587,11 @@ tap_flow_create_vlan(const struct rte_flow_item *item, void *data) if (info->vlan) return -1; info->vlan = 1; - if (mask->inner_type) { + if (mask->hdr.eth_proto) { /* TC does not support partial eth_type masking */ - if (mask->inner_type != RTE_BE16(0xffff)) + if (mask->hdr.eth_proto != RTE_BE16(0xffff)) return -1; - info->eth_type = spec->inner_type; + info->eth_type = spec->hdr.eth_proto; } if (!flow) return 0; @@ -601,8 +601,8 @@ tap_flow_create_vlan(const struct rte_flow_item *item, void *data) #define VLAN_ID(tci) ((tci) & 0xfff) if (!spec) return 0; - if (spec->tci) { - uint16_t tci = ntohs(spec->tci) & mask->tci; + if (spec->hdr.vlan_tci) { + uint16_t tci = ntohs(spec->hdr.vlan_tci) & mask->hdr.vlan_tci; uint16_t prio = VLAN_PRIO(tci); uint8_t vid = VLAN_ID(tci); @@ -1681,7 +1681,7 @@ int 
tap_flow_implicit_create(struct pmd_internals *pmd, }; struct rte_flow_item *items = implicit_rte_flows[idx].items; struct rte_flow_attr *attr = &implicit_rte_flows[idx].attr; - struct rte_flow_item_eth eth_local = { .type = 0 }; + struct rte_flow_item_eth eth_local = { .hdr.ether_type = 0 }; unsigned int if_index = pmd->remote_if_index; struct rte_flow *remote_flow = NULL; struct nlmsg *msg = NULL; @@ -1718,7 +1718,7 @@ int tap_flow_implicit_create(struct pmd_internals *pmd, * eth addr couldn't be set in implicit_rte_flows[] as it is not * known at compile time. */ - memcpy(&eth_local.dst, &pmd->eth_addr, sizeof(pmd->eth_addr)); + memcpy(&eth_local.hdr.dst_addr, &pmd->eth_addr, sizeof(pmd->eth_addr)); items = items_local; } tc_init_msg(msg, if_index, RTM_NEWTFILTER, flags); diff --git a/drivers/net/txgbe/txgbe_flow.c b/drivers/net/txgbe/txgbe_flow.c index 7b18dca7e8d2..7ef52d0b0fcd 100644 --- a/drivers/net/txgbe/txgbe_flow.c +++ b/drivers/net/txgbe/txgbe_flow.c @@ -706,16 +706,16 @@ cons_parse_ethertype_filter(const struct rte_flow_attr *attr, * Mask bits of destination MAC address must be full * of 1 or full of 0. */ - if (!rte_is_zero_ether_addr(&eth_mask->src) || - (!rte_is_zero_ether_addr(&eth_mask->dst) && - !rte_is_broadcast_ether_addr(&eth_mask->dst))) { + if (!rte_is_zero_ether_addr(&eth_mask->hdr.src_addr) || + (!rte_is_zero_ether_addr(&eth_mask->hdr.dst_addr) && + !rte_is_broadcast_ether_addr(&eth_mask->hdr.dst_addr))) { rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, item, "Invalid ether address mask"); return -rte_errno; } - if ((eth_mask->type & UINT16_MAX) != UINT16_MAX) { + if ((eth_mask->hdr.ether_type & UINT16_MAX) != UINT16_MAX) { rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, item, "Invalid ethertype mask"); @@ -725,13 +725,13 @@ cons_parse_ethertype_filter(const struct rte_flow_attr *attr, /* If mask bits of destination MAC address * are full of 1, set RTE_ETHTYPE_FLAGS_MAC. 
*/ - if (rte_is_broadcast_ether_addr(ð_mask->dst)) { - filter->mac_addr = eth_spec->dst; + if (rte_is_broadcast_ether_addr(ð_mask->hdr.dst_addr)) { + filter->mac_addr = eth_spec->hdr.dst_addr; filter->flags |= RTE_ETHTYPE_FLAGS_MAC; } else { filter->flags &= ~RTE_ETHTYPE_FLAGS_MAC; } - filter->ether_type = rte_be_to_cpu_16(eth_spec->type); + filter->ether_type = rte_be_to_cpu_16(eth_spec->hdr.ether_type); /* Check if the next non-void item is END. */ item = next_no_void_pattern(pattern, item); @@ -1635,7 +1635,7 @@ txgbe_parse_fdir_filter_normal(struct rte_eth_dev *dev __rte_unused, eth_mask = item->mask; /* Ether type should be masked. */ - if (eth_mask->type || + if (eth_mask->hdr.ether_type || rule->mode == RTE_FDIR_MODE_SIGNATURE) { memset(rule, 0, sizeof(struct txgbe_fdir_rule)); rte_flow_error_set(error, EINVAL, @@ -1652,8 +1652,8 @@ txgbe_parse_fdir_filter_normal(struct rte_eth_dev *dev __rte_unused, * and don't support dst MAC address mask. */ for (j = 0; j < RTE_ETHER_ADDR_LEN; j++) { - if (eth_mask->src.addr_bytes[j] || - eth_mask->dst.addr_bytes[j] != 0xFF) { + if (eth_mask->hdr.src_addr.addr_bytes[j] || + eth_mask->hdr.dst_addr.addr_bytes[j] != 0xFF) { memset(rule, 0, sizeof(struct txgbe_fdir_rule)); rte_flow_error_set(error, EINVAL, @@ -2381,7 +2381,7 @@ txgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr, eth_mask = item->mask; /* Ether type should be masked. */ - if (eth_mask->type) { + if (eth_mask->hdr.ether_type) { memset(rule, 0, sizeof(struct txgbe_fdir_rule)); rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, @@ -2391,7 +2391,7 @@ txgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr, /* src MAC address should be masked. 
 */
	for (j = 0; j < RTE_ETHER_ADDR_LEN; j++) {
-		if (eth_mask->src.addr_bytes[j]) {
+		if (eth_mask->hdr.src_addr.addr_bytes[j]) {
 			memset(rule, 0, sizeof(struct txgbe_fdir_rule));
 			rte_flow_error_set(error, EINVAL,
@@ -2403,9 +2403,9 @@ txgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr,
 	rule->mask.mac_addr_byte_mask = 0;
 	for (j = 0; j < ETH_ADDR_LEN; j++) {
 		/* It's a per byte mask. */
-		if (eth_mask->dst.addr_bytes[j] == 0xFF) {
+		if (eth_mask->hdr.dst_addr.addr_bytes[j] == 0xFF) {
 			rule->mask.mac_addr_byte_mask |= 0x1 << j;
-		} else if (eth_mask->dst.addr_bytes[j]) {
+		} else if (eth_mask->hdr.dst_addr.addr_bytes[j]) {
 			memset(rule, 0, sizeof(struct txgbe_fdir_rule));
 			rte_flow_error_set(error, EINVAL,
 					RTE_FLOW_ERROR_TYPE_ITEM,

From patchwork Fri Feb 3 16:48:49 2023
X-Patchwork-Submitter: Ferruh Yigit
X-Patchwork-Id: 123042
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Ferruh Yigit
To: Thomas Monjalon , Olivier Matz
CC: David Marchand ,
Subject: [PATCH v7 2/7] net: add smaller fields for VXLAN
Date: Fri, 3 Feb 2023 16:48:49 +0000
Message-ID: <20230203164854.602595-3-ferruh.yigit@amd.com>
In-Reply-To: <20230203164854.602595-1-ferruh.yigit@amd.com>
References: <20221025214410.715864-1-thomas@monjalon.net> <20230203164854.602595-1-ferruh.yigit@amd.com>
From: Thomas Monjalon

The VXLAN and VXLAN-GPE headers were including reserved fields with other
fields in big uint32_t struct members. Some more precise definitions are
added as unions of the old ones. The new struct members are smaller in
size and in names.

Signed-off-by: Thomas Monjalon
Acked-by: Ferruh Yigit
---
 lib/net/rte_vxlan.h | 35 +++++++++++++++++++++++++++++------
 1 file changed, 29 insertions(+), 6 deletions(-)

diff --git a/lib/net/rte_vxlan.h b/lib/net/rte_vxlan.h
index 929fa7a1dd01..997fc784fc84 100644
--- a/lib/net/rte_vxlan.h
+++ b/lib/net/rte_vxlan.h
@@ -30,9 +30,20 @@ extern "C" {
  * Contains the 8-bit flag, 24-bit VXLAN Network Identifier and
  * Reserved fields (24 bits and 8 bits)
  */
+__extension__ /* no named member in struct */
 struct rte_vxlan_hdr {
-	rte_be32_t vx_flags; /**< flag (8) + Reserved (24). */
-	rte_be32_t vx_vni;   /**< VNI (24) + Reserved (8). */
+	union {
+		struct {
+			rte_be32_t vx_flags; /**< flags (8) + Reserved (24). */
+			rte_be32_t vx_vni;   /**< VNI (24) + Reserved (8). */
+		};
+		struct {
+			uint8_t flags;    /**< Should be 8 (I flag). */
+			uint8_t rsvd0[3]; /**< Reserved. */
+			uint8_t vni[3];   /**< VXLAN identifier. */
+			uint8_t rsvd1;    /**< Reserved. */
+		};
+	};
 } __rte_packed;

 /** VXLAN tunnel header length. */
@@ -45,11 +56,23 @@ struct rte_vxlan_hdr {
  * Contains the 8-bit flag, 8-bit next-protocol, 24-bit VXLAN Network
  * Identifier and Reserved fields (16 bits and 8 bits).
  */
+__extension__ /* no named member in struct */
 struct rte_vxlan_gpe_hdr {
-	uint8_t vx_flags;    /**< flag (8). */
-	uint8_t reserved[2]; /**< Reserved (16). */
-	uint8_t proto;       /**< next-protocol (8). */
-	rte_be32_t vx_vni;   /**< VNI (24) + Reserved (8). */
+	union {
+		struct {
+			uint8_t vx_flags;    /**< flag (8). */
+			uint8_t reserved[2]; /**< Reserved (16). */
+			uint8_t protocol;    /**< next-protocol (8). */
+			rte_be32_t vx_vni;   /**< VNI (24) + Reserved (8). */
+		};
+		struct {
+			uint8_t flags;    /**< Flags. */
+			uint8_t rsvd0[2]; /**< Reserved. */
+			uint8_t proto;    /**< Next protocol. */
+			uint8_t vni[3];   /**< VXLAN identifier. */
+			uint8_t rsvd1;    /**< Reserved. */
+		};
+	};
 } __rte_packed;

 /** VXLAN-GPE tunnel header length. */

From patchwork Fri Feb 3 16:48:50 2023
X-Patchwork-Submitter: Ferruh Yigit
X-Patchwork-Id: 123044
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Ferruh Yigit
To: Thomas Monjalon , Wisam Jaddo , Ori Kam , Aman Singh , Yuying Zhang , Ajit Khaparde , Somnath Kotur , Dongdong Liu , Yisen Zhuang , Beilei Xing , Qiming Yang , Qi Zhang , Rosen Xu , Wenjun Wu , Matan Azrad , Viacheslav Ovsiienko , Andrew Rybchenko
CC: David Marchand ,
Subject: [PATCH v7 3/7] ethdev: use VXLAN protocol struct for flow matching
Date: Fri, 3 Feb 2023 16:48:50 +0000
Message-ID: <20230203164854.602595-4-ferruh.yigit@amd.com>
In-Reply-To: <20230203164854.602595-1-ferruh.yigit@amd.com>
References: <20221025214410.715864-1-thomas@monjalon.net> <20230203164854.602595-1-ferruh.yigit@amd.com>
From: Thomas Monjalon

As announced in the deprecation notice, flow item structures should re-use
the protocol header definitions from the directory lib/net/.

In the case of VXLAN-GPE, the protocol struct is added in an unnamed union,
keeping old field names.

The VXLAN headers (including VXLAN-GPE) are used in apps and drivers
instead of the redundant fields in the flow items.

Signed-off-by: Thomas Monjalon
Acked-by: Ferruh Yigit
Acked-by: Ori Kam
Acked-by: Andrew Rybchenko
---
 app/test-flow-perf/actions_gen.c         |  2 +-
 app/test-flow-perf/items_gen.c           | 12 +++----
 app/test-pmd/cmdline_flow.c              | 10 +++---
 doc/guides/prog_guide/rte_flow.rst       | 11 ++-----
 doc/guides/rel_notes/deprecation.rst     |  1 -
 drivers/net/bnxt/bnxt_flow.c             | 12 ++++---
 drivers/net/bnxt/tf_ulp/ulp_rte_parser.c | 42 ++++++++++++------------
 drivers/net/hns3/hns3_flow.c             | 12 +++----
 drivers/net/i40e/i40e_flow.c             |  4 +--
 drivers/net/ice/ice_switch_filter.c      | 18 +++++-----
 drivers/net/ipn3ke/ipn3ke_flow.c         |  4 +--
 drivers/net/ixgbe/ixgbe_flow.c           | 18 +++++-----
 drivers/net/mlx5/mlx5_flow.c             | 16 ++++-----
 drivers/net/mlx5/mlx5_flow_dv.c          | 40 +++++++++++-----------
 drivers/net/mlx5/mlx5_flow_verbs.c       |  8 ++---
 drivers/net/sfc/sfc_flow.c               |  6 ++--
 drivers/net/sfc/sfc_mae.c                |  8 ++---
 lib/ethdev/rte_flow.h                    | 24 ++++++++++----
 18 files changed, 126 insertions(+), 122 deletions(-)

diff --git a/app/test-flow-perf/actions_gen.c b/app/test-flow-perf/actions_gen.c
index 63f05d87fa86..f1d59313256d 100644
--- a/app/test-flow-perf/actions_gen.c
+++ b/app/test-flow-perf/actions_gen.c
@@ -874,7 +874,7 @@ add_vxlan_encap(struct rte_flow_action *actions,
 	items[2].type = RTE_FLOW_ITEM_TYPE_UDP;
- item_vxlan.vni[2] = 1; + item_vxlan.hdr.vni[2] = 1; items[3].spec = &item_vxlan; items[3].mask = &item_vxlan; items[3].type = RTE_FLOW_ITEM_TYPE_VXLAN; diff --git a/app/test-flow-perf/items_gen.c b/app/test-flow-perf/items_gen.c index b7f51030a119..a58245239ba1 100644 --- a/app/test-flow-perf/items_gen.c +++ b/app/test-flow-perf/items_gen.c @@ -128,12 +128,12 @@ add_vxlan(struct rte_flow_item *items, /* Set standard vxlan vni */ for (i = 0; i < 3; i++) { - vxlan_specs[ti].vni[2 - i] = vni_value >> (i * 8); - vxlan_masks[ti].vni[2 - i] = 0xff; + vxlan_specs[ti].hdr.vni[2 - i] = vni_value >> (i * 8); + vxlan_masks[ti].hdr.vni[2 - i] = 0xff; } /* Standard vxlan flags */ - vxlan_specs[ti].flags = 0x8; + vxlan_specs[ti].hdr.flags = 0x8; items[items_counter].type = RTE_FLOW_ITEM_TYPE_VXLAN; items[items_counter].spec = &vxlan_specs[ti]; @@ -155,12 +155,12 @@ add_vxlan_gpe(struct rte_flow_item *items, /* Set vxlan-gpe vni */ for (i = 0; i < 3; i++) { - vxlan_gpe_specs[ti].vni[2 - i] = vni_value >> (i * 8); - vxlan_gpe_masks[ti].vni[2 - i] = 0xff; + vxlan_gpe_specs[ti].hdr.vni[2 - i] = vni_value >> (i * 8); + vxlan_gpe_masks[ti].hdr.vni[2 - i] = 0xff; } /* vxlan-gpe flags */ - vxlan_gpe_specs[ti].flags = 0x0c; + vxlan_gpe_specs[ti].hdr.flags = 0x0c; items[items_counter].type = RTE_FLOW_ITEM_TYPE_VXLAN_GPE; items[items_counter].spec = &vxlan_gpe_specs[ti]; diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c index 694a7eb647c5..b904f8c3d45c 100644 --- a/app/test-pmd/cmdline_flow.c +++ b/app/test-pmd/cmdline_flow.c @@ -3984,7 +3984,7 @@ static const struct token token_list[] = { .help = "VXLAN identifier", .next = NEXT(item_vxlan, NEXT_ENTRY(COMMON_UNSIGNED), item_param), - .args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_vxlan, vni)), + .args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_vxlan, hdr.vni)), }, [ITEM_VXLAN_LAST_RSVD] = { .name = "last_rsvd", @@ -3992,7 +3992,7 @@ static const struct token token_list[] = { .next = NEXT(item_vxlan, 
NEXT_ENTRY(COMMON_UNSIGNED), item_param), .args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_vxlan, - rsvd1)), + hdr.rsvd1)), }, [ITEM_E_TAG] = { .name = "e_tag", @@ -4210,7 +4210,7 @@ static const struct token token_list[] = { .next = NEXT(item_vxlan_gpe, NEXT_ENTRY(COMMON_UNSIGNED), item_param), .args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_vxlan_gpe, - vni)), + hdr.vni)), }, [ITEM_ARP_ETH_IPV4] = { .name = "arp_eth_ipv4", @@ -7500,7 +7500,7 @@ parse_setup_vxlan_encap_data(struct action_vxlan_encap_data *action_vxlan_encap_ .src_port = vxlan_encap_conf.udp_src, .dst_port = vxlan_encap_conf.udp_dst, }, - .item_vxlan.flags = 0, + .item_vxlan.hdr.flags = 0, }; memcpy(action_vxlan_encap_data->item_eth.hdr.dst_addr.addr_bytes, vxlan_encap_conf.eth_dst, RTE_ETHER_ADDR_LEN); @@ -7554,7 +7554,7 @@ parse_setup_vxlan_encap_data(struct action_vxlan_encap_data *action_vxlan_encap_ &ipv6_mask_tos; } } - memcpy(action_vxlan_encap_data->item_vxlan.vni, vxlan_encap_conf.vni, + memcpy(action_vxlan_encap_data->item_vxlan.hdr.vni, vxlan_encap_conf.vni, RTE_DIM(vxlan_encap_conf.vni)); return 0; } diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst index 27c3780c4f17..116722351486 100644 --- a/doc/guides/prog_guide/rte_flow.rst +++ b/doc/guides/prog_guide/rte_flow.rst @@ -935,10 +935,7 @@ Item: ``VXLAN`` Matches a VXLAN header (RFC 7348). -- ``flags``: normally 0x08 (I flag). -- ``rsvd0``: reserved, normally 0x000000. -- ``vni``: VXLAN network identifier. -- ``rsvd1``: reserved, normally 0x00. +- ``hdr``: header definition (``rte_vxlan.h``). - Default ``mask`` matches VNI only. Item: ``E_TAG`` @@ -1104,11 +1101,7 @@ Item: ``VXLAN-GPE`` Matches a VXLAN-GPE header (draft-ietf-nvo3-vxlan-gpe-05). -- ``flags``: normally 0x0C (I and P flags). -- ``rsvd0``: reserved, normally 0x0000. -- ``protocol``: protocol type. -- ``vni``: VXLAN network identifier. -- ``rsvd1``: reserved, normally 0x00. +- ``hdr``: header definition (``rte_vxlan.h``). 
- Default ``mask`` matches VNI only. Item: ``ARP_ETH_IPV4`` diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst index 4782d2e680d3..df8b5bcb1b64 100644 --- a/doc/guides/rel_notes/deprecation.rst +++ b/doc/guides/rel_notes/deprecation.rst @@ -90,7 +90,6 @@ Deprecation Notices - ``rte_flow_item_pfcp`` - ``rte_flow_item_pppoe`` - ``rte_flow_item_pppoe_proto_id`` - - ``rte_flow_item_vxlan_gpe`` * ethdev: Queue specific stats fields will be removed from ``struct rte_eth_stats``. Mentioned fields are: ``q_ipackets``, ``q_opackets``, ``q_ibytes``, ``q_obytes``, diff --git a/drivers/net/bnxt/bnxt_flow.c b/drivers/net/bnxt/bnxt_flow.c index 8f660493402c..4a107e81e955 100644 --- a/drivers/net/bnxt/bnxt_flow.c +++ b/drivers/net/bnxt/bnxt_flow.c @@ -563,9 +563,11 @@ bnxt_validate_and_parse_flow_type(const struct rte_flow_attr *attr, break; } - if (vxlan_spec->rsvd1 || vxlan_spec->rsvd0[0] || - vxlan_spec->rsvd0[1] || vxlan_spec->rsvd0[2] || - vxlan_spec->flags != 0x8) { + if ((vxlan_spec->hdr.rsvd0[0] != 0) || + (vxlan_spec->hdr.rsvd0[1] != 0) || + (vxlan_spec->hdr.rsvd0[2] != 0) || + (vxlan_spec->hdr.rsvd1 != 0) || + (vxlan_spec->hdr.flags != 8)) { rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, @@ -577,7 +579,7 @@ bnxt_validate_and_parse_flow_type(const struct rte_flow_attr *attr, /* Check if VNI is masked. 
*/ if (vxlan_mask != NULL) { vni_masked = - !!memcmp(vxlan_mask->vni, vni_mask, + !!memcmp(vxlan_mask->hdr.vni, vni_mask, RTE_DIM(vni_mask)); if (vni_masked) { rte_flow_error_set @@ -590,7 +592,7 @@ bnxt_validate_and_parse_flow_type(const struct rte_flow_attr *attr, } rte_memcpy(((uint8_t *)&tenant_id_be + 1), - vxlan_spec->vni, 3); + vxlan_spec->hdr.vni, 3); filter->vni = rte_be_to_cpu_32(tenant_id_be); filter->tunnel_type = diff --git a/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c index 2928598ced55..80869b79c3fe 100644 --- a/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c +++ b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c @@ -1414,28 +1414,28 @@ ulp_rte_vxlan_hdr_handler(const struct rte_flow_item *item, * Copy the rte_flow_item for vxlan into hdr_field using vxlan * header fields */ - size = sizeof(((struct rte_flow_item_vxlan *)NULL)->flags); + size = sizeof(((struct rte_flow_item_vxlan *)NULL)->hdr.flags); ulp_rte_prsr_fld_mask(params, &idx, size, - ulp_deference_struct(vxlan_spec, flags), - ulp_deference_struct(vxlan_mask, flags), + ulp_deference_struct(vxlan_spec, hdr.flags), + ulp_deference_struct(vxlan_mask, hdr.flags), ULP_PRSR_ACT_DEFAULT); - size = sizeof(((struct rte_flow_item_vxlan *)NULL)->rsvd0); + size = sizeof(((struct rte_flow_item_vxlan *)NULL)->hdr.rsvd0); ulp_rte_prsr_fld_mask(params, &idx, size, - ulp_deference_struct(vxlan_spec, rsvd0), - ulp_deference_struct(vxlan_mask, rsvd0), + ulp_deference_struct(vxlan_spec, hdr.rsvd0), + ulp_deference_struct(vxlan_mask, hdr.rsvd0), ULP_PRSR_ACT_DEFAULT); - size = sizeof(((struct rte_flow_item_vxlan *)NULL)->vni); + size = sizeof(((struct rte_flow_item_vxlan *)NULL)->hdr.vni); ulp_rte_prsr_fld_mask(params, &idx, size, - ulp_deference_struct(vxlan_spec, vni), - ulp_deference_struct(vxlan_mask, vni), + ulp_deference_struct(vxlan_spec, hdr.vni), + ulp_deference_struct(vxlan_mask, hdr.vni), ULP_PRSR_ACT_DEFAULT); - size = sizeof(((struct rte_flow_item_vxlan *)NULL)->rsvd1); + 
size = sizeof(((struct rte_flow_item_vxlan *)NULL)->hdr.rsvd1); ulp_rte_prsr_fld_mask(params, &idx, size, - ulp_deference_struct(vxlan_spec, rsvd1), - ulp_deference_struct(vxlan_mask, rsvd1), + ulp_deference_struct(vxlan_spec, hdr.rsvd1), + ulp_deference_struct(vxlan_mask, hdr.rsvd1), ULP_PRSR_ACT_DEFAULT); /* Update the hdr_bitmap with vxlan */ @@ -1827,17 +1827,17 @@ ulp_rte_enc_vxlan_hdr_handler(struct ulp_rte_parser_params *params, uint32_t size; field = ¶ms->enc_field[BNXT_ULP_ENC_FIELD_VXLAN_FLAGS]; - size = sizeof(vxlan_spec->flags); - field = ulp_rte_parser_fld_copy(field, &vxlan_spec->flags, size); + size = sizeof(vxlan_spec->hdr.flags); + field = ulp_rte_parser_fld_copy(field, &vxlan_spec->hdr.flags, size); - size = sizeof(vxlan_spec->rsvd0); - field = ulp_rte_parser_fld_copy(field, &vxlan_spec->rsvd0, size); + size = sizeof(vxlan_spec->hdr.rsvd0); + field = ulp_rte_parser_fld_copy(field, &vxlan_spec->hdr.rsvd0, size); - size = sizeof(vxlan_spec->vni); - field = ulp_rte_parser_fld_copy(field, &vxlan_spec->vni, size); + size = sizeof(vxlan_spec->hdr.vni); + field = ulp_rte_parser_fld_copy(field, &vxlan_spec->hdr.vni, size); - size = sizeof(vxlan_spec->rsvd1); - field = ulp_rte_parser_fld_copy(field, &vxlan_spec->rsvd1, size); + size = sizeof(vxlan_spec->hdr.rsvd1); + field = ulp_rte_parser_fld_copy(field, &vxlan_spec->hdr.rsvd1, size); ULP_BITMAP_SET(params->enc_hdr_bitmap.bits, BNXT_ULP_HDR_BIT_T_VXLAN); } @@ -1989,7 +1989,7 @@ ulp_rte_vxlan_encap_act_handler(const struct rte_flow_action *action_item, vxlan_size = sizeof(struct rte_flow_item_vxlan); /* copy the vxlan details */ memcpy(&vxlan_spec, item->spec, vxlan_size); - vxlan_spec.flags = 0x08; + vxlan_spec.hdr.flags = 0x08; vxlan_size = tfp_cpu_to_be_32(vxlan_size); memcpy(&ap->act_details[BNXT_ULP_ACT_PROP_IDX_ENCAP_TUN_SZ], &vxlan_size, sizeof(uint32_t)); diff --git a/drivers/net/hns3/hns3_flow.c b/drivers/net/hns3/hns3_flow.c index ef1832982dee..e88f9b7e452b 100644 --- 
a/drivers/net/hns3/hns3_flow.c +++ b/drivers/net/hns3/hns3_flow.c @@ -933,23 +933,23 @@ hns3_parse_vxlan(const struct rte_flow_item *item, struct hns3_fdir_rule *rule, vxlan_mask = item->mask; vxlan_spec = item->spec; - if (vxlan_mask->flags) + if (vxlan_mask->hdr.flags) return rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM_MASK, item, "Flags is not supported in VxLAN"); /* VNI must be totally masked or not. */ - if (memcmp(vxlan_mask->vni, full_mask, VNI_OR_TNI_LEN) && - memcmp(vxlan_mask->vni, zero_mask, VNI_OR_TNI_LEN)) + if (memcmp(vxlan_mask->hdr.vni, full_mask, VNI_OR_TNI_LEN) && + memcmp(vxlan_mask->hdr.vni, zero_mask, VNI_OR_TNI_LEN)) return rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM_MASK, item, "VNI must be totally masked or not in VxLAN"); - if (vxlan_mask->vni[0]) { + if (vxlan_mask->hdr.vni[0]) { hns3_set_bit(rule->input_set, OUTER_TUN_VNI, 1); - memcpy(rule->key_conf.mask.outer_tun_vni, vxlan_mask->vni, + memcpy(rule->key_conf.mask.outer_tun_vni, vxlan_mask->hdr.vni, VNI_OR_TNI_LEN); } - memcpy(rule->key_conf.spec.outer_tun_vni, vxlan_spec->vni, + memcpy(rule->key_conf.spec.outer_tun_vni, vxlan_spec->hdr.vni, VNI_OR_TNI_LEN); return 0; } diff --git a/drivers/net/i40e/i40e_flow.c b/drivers/net/i40e/i40e_flow.c index 0acbd5a061e0..2855b14fe679 100644 --- a/drivers/net/i40e/i40e_flow.c +++ b/drivers/net/i40e/i40e_flow.c @@ -3009,7 +3009,7 @@ i40e_flow_parse_vxlan_pattern(__rte_unused struct rte_eth_dev *dev, /* Check if VNI is masked. 
*/ if (vxlan_spec && vxlan_mask) { is_vni_masked = - !!memcmp(vxlan_mask->vni, vni_mask, + !!memcmp(vxlan_mask->hdr.vni, vni_mask, RTE_DIM(vni_mask)); if (is_vni_masked) { rte_flow_error_set(error, EINVAL, @@ -3020,7 +3020,7 @@ i40e_flow_parse_vxlan_pattern(__rte_unused struct rte_eth_dev *dev, } rte_memcpy(((uint8_t *)&tenant_id_be + 1), - vxlan_spec->vni, 3); + vxlan_spec->hdr.vni, 3); filter->tenant_id = rte_be_to_cpu_32(tenant_id_be); filter_type |= RTE_ETH_TUNNEL_FILTER_TENID; diff --git a/drivers/net/ice/ice_switch_filter.c b/drivers/net/ice/ice_switch_filter.c index d84061340e6c..7cb20fa0b4f8 100644 --- a/drivers/net/ice/ice_switch_filter.c +++ b/drivers/net/ice/ice_switch_filter.c @@ -990,17 +990,17 @@ ice_switch_parse_pattern(const struct rte_flow_item pattern[], input = &inner_input_set; if (vxlan_spec && vxlan_mask) { list[t].type = ICE_VXLAN; - if (vxlan_mask->vni[0] || - vxlan_mask->vni[1] || - vxlan_mask->vni[2]) { + if (vxlan_mask->hdr.vni[0] || + vxlan_mask->hdr.vni[1] || + vxlan_mask->hdr.vni[2]) { list[t].h_u.tnl_hdr.vni = - (vxlan_spec->vni[2] << 16) | - (vxlan_spec->vni[1] << 8) | - vxlan_spec->vni[0]; + (vxlan_spec->hdr.vni[2] << 16) | + (vxlan_spec->hdr.vni[1] << 8) | + vxlan_spec->hdr.vni[0]; list[t].m_u.tnl_hdr.vni = - (vxlan_mask->vni[2] << 16) | - (vxlan_mask->vni[1] << 8) | - vxlan_mask->vni[0]; + (vxlan_mask->hdr.vni[2] << 16) | + (vxlan_mask->hdr.vni[1] << 8) | + vxlan_mask->hdr.vni[0]; *input |= ICE_INSET_VXLAN_VNI; input_set_byte += 2; } diff --git a/drivers/net/ipn3ke/ipn3ke_flow.c b/drivers/net/ipn3ke/ipn3ke_flow.c index ee56d0f43d93..d20a29b9a2d6 100644 --- a/drivers/net/ipn3ke/ipn3ke_flow.c +++ b/drivers/net/ipn3ke/ipn3ke_flow.c @@ -108,7 +108,7 @@ ipn3ke_pattern_vxlan(const struct rte_flow_item patterns[], case RTE_FLOW_ITEM_TYPE_VXLAN: vxlan = item->spec; - rte_memcpy(&parser->key[6], vxlan->vni, 3); + rte_memcpy(&parser->key[6], vxlan->hdr.vni, 3); break; default: @@ -576,7 +576,7 @@ ipn3ke_pattern_vxlan_ip_udp(const struct 
rte_flow_item patterns[], case RTE_FLOW_ITEM_TYPE_VXLAN: vxlan = item->spec; - rte_memcpy(&parser->key[0], vxlan->vni, 3); + rte_memcpy(&parser->key[0], vxlan->hdr.vni, 3); break; case RTE_FLOW_ITEM_TYPE_IPV4: diff --git a/drivers/net/ixgbe/ixgbe_flow.c b/drivers/net/ixgbe/ixgbe_flow.c index a11da3dc8beb..fe710b79008d 100644 --- a/drivers/net/ixgbe/ixgbe_flow.c +++ b/drivers/net/ixgbe/ixgbe_flow.c @@ -2481,7 +2481,7 @@ ixgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr, rule->mask.tunnel_type_mask = 1; vxlan_mask = item->mask; - if (vxlan_mask->flags) { + if (vxlan_mask->hdr.flags) { memset(rule, 0, sizeof(struct ixgbe_fdir_rule)); rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, @@ -2489,11 +2489,11 @@ ixgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr, return -rte_errno; } /* VNI must be totally masked or not. */ - if ((vxlan_mask->vni[0] || vxlan_mask->vni[1] || - vxlan_mask->vni[2]) && - ((vxlan_mask->vni[0] != 0xFF) || - (vxlan_mask->vni[1] != 0xFF) || - (vxlan_mask->vni[2] != 0xFF))) { + if ((vxlan_mask->hdr.vni[0] || vxlan_mask->hdr.vni[1] || + vxlan_mask->hdr.vni[2]) && + ((vxlan_mask->hdr.vni[0] != 0xFF) || + (vxlan_mask->hdr.vni[1] != 0xFF) || + (vxlan_mask->hdr.vni[2] != 0xFF))) { memset(rule, 0, sizeof(struct ixgbe_fdir_rule)); rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, @@ -2501,15 +2501,15 @@ ixgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr, return -rte_errno; } - rte_memcpy(&rule->mask.tunnel_id_mask, vxlan_mask->vni, - RTE_DIM(vxlan_mask->vni)); + rte_memcpy(&rule->mask.tunnel_id_mask, vxlan_mask->hdr.vni, + RTE_DIM(vxlan_mask->hdr.vni)); if (item->spec) { rule->b_spec = TRUE; vxlan_spec = item->spec; rte_memcpy(((uint8_t *) &rule->ixgbe_fdir.formatted.tni_vni), - vxlan_spec->vni, RTE_DIM(vxlan_spec->vni)); + vxlan_spec->hdr.vni, RTE_DIM(vxlan_spec->hdr.vni)); } } diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c index 2512d6b52db9..ff08a629e2c6 100644 --- 
a/drivers/net/mlx5/mlx5_flow.c +++ b/drivers/net/mlx5/mlx5_flow.c @@ -333,7 +333,7 @@ mlx5_flow_expand_rss_item_complete(const struct rte_flow_item *item) ret = mlx5_ethertype_to_item_type(spec, mask, true); break; case RTE_FLOW_ITEM_TYPE_VXLAN_GPE: - MLX5_XSET_ITEM_MASK_SPEC(vxlan_gpe, protocol); + MLX5_XSET_ITEM_MASK_SPEC(vxlan_gpe, hdr.proto); ret = mlx5_nsh_proto_to_item_type(spec, mask); break; default: @@ -2919,8 +2919,8 @@ mlx5_flow_validate_item_vxlan(struct rte_eth_dev *dev, uint8_t vni[4]; } id = { .vlan_id = 0, }; const struct rte_flow_item_vxlan nic_mask = { - .vni = "\xff\xff\xff", - .rsvd1 = 0xff, + .hdr.vni = "\xff\xff\xff", + .hdr.rsvd1 = 0xff, }; const struct rte_flow_item_vxlan *valid_mask; @@ -2959,8 +2959,8 @@ mlx5_flow_validate_item_vxlan(struct rte_eth_dev *dev, if (ret < 0) return ret; if (spec) { - memcpy(&id.vni[1], spec->vni, 3); - memcpy(&id.vni[1], mask->vni, 3); + memcpy(&id.vni[1], spec->hdr.vni, 3); + memcpy(&id.vni[1], mask->hdr.vni, 3); } if (!(item_flags & MLX5_FLOW_LAYER_OUTER)) return rte_flow_error_set(error, ENOTSUP, @@ -3030,14 +3030,14 @@ mlx5_flow_validate_item_vxlan_gpe(const struct rte_flow_item *item, if (ret < 0) return ret; if (spec) { - if (spec->protocol) + if (spec->hdr.proto) return rte_flow_error_set(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ITEM, item, "VxLAN-GPE protocol" " not supported"); - memcpy(&id.vni[1], spec->vni, 3); - memcpy(&id.vni[1], mask->vni, 3); + memcpy(&id.vni[1], spec->hdr.vni, 3); + memcpy(&id.vni[1], mask->hdr.vni, 3); } if (!(item_flags & MLX5_FLOW_LAYER_OUTER)) return rte_flow_error_set(error, ENOTSUP, diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c index ff915183b7cc..261c60a5c33a 100644 --- a/drivers/net/mlx5/mlx5_flow_dv.c +++ b/drivers/net/mlx5/mlx5_flow_dv.c @@ -9235,8 +9235,8 @@ flow_dv_translate_item_vxlan(struct rte_eth_dev *dev, int i; struct mlx5_priv *priv = dev->data->dev_private; const struct rte_flow_item_vxlan nic_mask = { - .vni = "\xff\xff\xff", - 
.rsvd1 = 0xff, + .hdr.vni = "\xff\xff\xff", + .hdr.rsvd1 = 0xff, }; misc5_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters_5); @@ -9274,29 +9274,29 @@ flow_dv_translate_item_vxlan(struct rte_eth_dev *dev, ((attr->group || (attr->transfer && priv->fdb_def_rule)) && !priv->sh->misc5_cap)) { misc_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters); - size = sizeof(vxlan_m->vni); + size = sizeof(vxlan_m->hdr.vni); vni_v = MLX5_ADDR_OF(fte_match_set_misc, misc_v, vxlan_vni); for (i = 0; i < size; ++i) - vni_v[i] = vxlan_m->vni[i] & vxlan_v->vni[i]; + vni_v[i] = vxlan_m->hdr.vni[i] & vxlan_v->hdr.vni[i]; return; } tunnel_header_v = (uint32_t *)MLX5_ADDR_OF(fte_match_set_misc5, misc5_v, tunnel_header_1); - tunnel_v = (vxlan_v->vni[0] & vxlan_m->vni[0]) | - (vxlan_v->vni[1] & vxlan_m->vni[1]) << 8 | - (vxlan_v->vni[2] & vxlan_m->vni[2]) << 16; + tunnel_v = (vxlan_v->hdr.vni[0] & vxlan_m->hdr.vni[0]) | + (vxlan_v->hdr.vni[1] & vxlan_m->hdr.vni[1]) << 8 | + (vxlan_v->hdr.vni[2] & vxlan_m->hdr.vni[2]) << 16; *tunnel_header_v = tunnel_v; if (key_type == MLX5_SET_MATCHER_SW_M) { - tunnel_v = (vxlan_vv->vni[0] & vxlan_m->vni[0]) | - (vxlan_vv->vni[1] & vxlan_m->vni[1]) << 8 | - (vxlan_vv->vni[2] & vxlan_m->vni[2]) << 16; + tunnel_v = (vxlan_vv->hdr.vni[0] & vxlan_m->hdr.vni[0]) | + (vxlan_vv->hdr.vni[1] & vxlan_m->hdr.vni[1]) << 8 | + (vxlan_vv->hdr.vni[2] & vxlan_m->hdr.vni[2]) << 16; if (!tunnel_v) *tunnel_header_v = 0x0; - if (vxlan_vv->rsvd1 & vxlan_m->rsvd1) - *tunnel_header_v |= vxlan_v->rsvd1 << 24; + if (vxlan_vv->hdr.rsvd1 & vxlan_m->hdr.rsvd1) + *tunnel_header_v |= vxlan_v->hdr.rsvd1 << 24; } else { - *tunnel_header_v |= (vxlan_v->rsvd1 & vxlan_m->rsvd1) << 24; + *tunnel_header_v |= (vxlan_v->hdr.rsvd1 & vxlan_m->hdr.rsvd1) << 24; } } @@ -9327,7 +9327,7 @@ flow_dv_translate_item_vxlan_gpe(void *key, const struct rte_flow_item *item, MLX5_ADDR_OF(fte_match_param, key, misc_parameters_3); char *vni_v = MLX5_ADDR_OF(fte_match_set_misc3, misc_v, 
outer_vxlan_gpe_vni); - int i, size = sizeof(vxlan_m->vni); + int i, size = sizeof(vxlan_m->hdr.vni); uint8_t flags_m = 0xff; uint8_t flags_v = 0xc; uint8_t m_protocol, v_protocol; @@ -9352,15 +9352,15 @@ flow_dv_translate_item_vxlan_gpe(void *key, const struct rte_flow_item *item, else if (key_type == MLX5_SET_MATCHER_HS_V) vxlan_m = vxlan_v; for (i = 0; i < size; ++i) - vni_v[i] = vxlan_m->vni[i] & vxlan_v->vni[i]; - if (vxlan_m->flags) { - flags_m = vxlan_m->flags; - flags_v = vxlan_v->flags; + vni_v[i] = vxlan_m->hdr.vni[i] & vxlan_v->hdr.vni[i]; + if (vxlan_m->hdr.flags) { + flags_m = vxlan_m->hdr.flags; + flags_v = vxlan_v->hdr.flags; } MLX5_SET(fte_match_set_misc3, misc_v, outer_vxlan_gpe_flags, flags_m & flags_v); - m_protocol = vxlan_m->protocol; - v_protocol = vxlan_v->protocol; + m_protocol = vxlan_m->hdr.protocol; + v_protocol = vxlan_v->hdr.protocol; if (!m_protocol) { /* Force next protocol to ensure next headers parsing. */ if (pattern_flags & MLX5_FLOW_LAYER_INNER_L2) diff --git a/drivers/net/mlx5/mlx5_flow_verbs.c b/drivers/net/mlx5/mlx5_flow_verbs.c index 1902b97ec6d4..4ef4f3044515 100644 --- a/drivers/net/mlx5/mlx5_flow_verbs.c +++ b/drivers/net/mlx5/mlx5_flow_verbs.c @@ -765,9 +765,9 @@ flow_verbs_translate_item_vxlan(struct mlx5_flow *dev_flow, if (!mask) mask = &rte_flow_item_vxlan_mask; if (spec) { - memcpy(&id.vni[1], spec->vni, 3); + memcpy(&id.vni[1], spec->hdr.vni, 3); vxlan.val.tunnel_id = id.vlan_id; - memcpy(&id.vni[1], mask->vni, 3); + memcpy(&id.vni[1], mask->hdr.vni, 3); vxlan.mask.tunnel_id = id.vlan_id; /* Remove unwanted bits from values. 
*/ vxlan.val.tunnel_id &= vxlan.mask.tunnel_id; @@ -807,9 +807,9 @@ flow_verbs_translate_item_vxlan_gpe(struct mlx5_flow *dev_flow, if (!mask) mask = &rte_flow_item_vxlan_gpe_mask; if (spec) { - memcpy(&id.vni[1], spec->vni, 3); + memcpy(&id.vni[1], spec->hdr.vni, 3); vxlan_gpe.val.tunnel_id = id.vlan_id; - memcpy(&id.vni[1], mask->vni, 3); + memcpy(&id.vni[1], mask->hdr.vni, 3); vxlan_gpe.mask.tunnel_id = id.vlan_id; /* Remove unwanted bits from values. */ vxlan_gpe.val.tunnel_id &= vxlan_gpe.mask.tunnel_id; diff --git a/drivers/net/sfc/sfc_flow.c b/drivers/net/sfc/sfc_flow.c index f098edc6eb33..fe1f5ba55f86 100644 --- a/drivers/net/sfc/sfc_flow.c +++ b/drivers/net/sfc/sfc_flow.c @@ -921,7 +921,7 @@ sfc_flow_parse_vxlan(const struct rte_flow_item *item, const struct rte_flow_item_vxlan *spec = NULL; const struct rte_flow_item_vxlan *mask = NULL; const struct rte_flow_item_vxlan supp_mask = { - .vni = { 0xff, 0xff, 0xff } + .hdr.vni = { 0xff, 0xff, 0xff } }; rc = sfc_flow_parse_init(item, @@ -945,8 +945,8 @@ sfc_flow_parse_vxlan(const struct rte_flow_item *item, if (spec == NULL) return 0; - rc = sfc_flow_set_efx_spec_vni_or_vsid(efx_spec, spec->vni, - mask->vni, item, error); + rc = sfc_flow_set_efx_spec_vni_or_vsid(efx_spec, spec->hdr.vni, + mask->hdr.vni, item, error); return rc; } diff --git a/drivers/net/sfc/sfc_mae.c b/drivers/net/sfc/sfc_mae.c index 710d04be13af..aab697b204c2 100644 --- a/drivers/net/sfc/sfc_mae.c +++ b/drivers/net/sfc/sfc_mae.c @@ -2223,8 +2223,8 @@ static const struct sfc_mae_field_locator flocs_tunnel[] = { * The size and offset values are relevant * for Geneve and NVGRE, too. 
*/ - .size = RTE_SIZEOF_FIELD(struct rte_flow_item_vxlan, vni), - .ofst = offsetof(struct rte_flow_item_vxlan, vni), + .size = RTE_SIZEOF_FIELD(struct rte_flow_item_vxlan, hdr.vni), + .ofst = offsetof(struct rte_flow_item_vxlan, hdr.vni), }, }; @@ -2359,10 +2359,10 @@ sfc_mae_rule_parse_item_tunnel(const struct rte_flow_item *item, * The extra byte is 0 both in the mask and in the value. */ vxp = (const struct rte_flow_item_vxlan *)spec; - memcpy(vnet_id_v + 1, &vxp->vni, sizeof(vxp->vni)); + memcpy(vnet_id_v + 1, &vxp->hdr.vni, sizeof(vxp->hdr.vni)); vxp = (const struct rte_flow_item_vxlan *)mask; - memcpy(vnet_id_m + 1, &vxp->vni, sizeof(vxp->vni)); + memcpy(vnet_id_m + 1, &vxp->hdr.vni, sizeof(vxp->hdr.vni)); rc = efx_mae_match_spec_field_set(ctx_mae->match_spec, EFX_MAE_FIELD_ENC_VNET_ID_BE, diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h index b60987db4b4f..e2364823d622 100644 --- a/lib/ethdev/rte_flow.h +++ b/lib/ethdev/rte_flow.h @@ -988,7 +988,7 @@ struct rte_flow_item_vxlan { /** Default mask for RTE_FLOW_ITEM_TYPE_VXLAN. */ #ifndef __cplusplus static const struct rte_flow_item_vxlan rte_flow_item_vxlan_mask = { - .hdr.vx_vni = RTE_BE32(0xffffff00), /* (0xffffff << 8) */ + .hdr.vni = "\xff\xff\xff", }; #endif @@ -1205,18 +1205,28 @@ static const struct rte_flow_item_geneve rte_flow_item_geneve_mask = { * * Matches a VXLAN-GPE header. */ +RTE_STD_C11 struct rte_flow_item_vxlan_gpe { - uint8_t flags; /**< Normally 0x0c (I and P flags). */ - uint8_t rsvd0[2]; /**< Reserved, normally 0x0000. */ - uint8_t protocol; /**< Protocol type. */ - uint8_t vni[3]; /**< VXLAN identifier. */ - uint8_t rsvd1; /**< Reserved, normally 0x00. */ + union { + struct { + /* + * These are old fields kept for compatibility. + * Please prefer hdr field below. + */ + uint8_t flags; /**< Normally 0x0c (I and P flags). */ + uint8_t rsvd0[2]; /**< Reserved, normally 0x0000. */ + uint8_t protocol; /**< Protocol type. */ + uint8_t vni[3]; /**< VXLAN identifier. 
*/ + uint8_t rsvd1; /**< Reserved, normally 0x00. */ + }; + struct rte_vxlan_gpe_hdr hdr; + }; }; /** Default mask for RTE_FLOW_ITEM_TYPE_VXLAN_GPE. */ #ifndef __cplusplus static const struct rte_flow_item_vxlan_gpe rte_flow_item_vxlan_gpe_mask = { - .vni = "\xff\xff\xff", + .hdr.vni = "\xff\xff\xff", }; #endif

From patchwork Fri Feb 3 16:48:51 2023
X-Patchwork-Submitter: Ferruh Yigit
X-Patchwork-Id: 123043
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Ferruh Yigit
To: Thomas Monjalon, Wisam Jaddo, Ori Kam, Aman Singh, Yuying Zhang, Beilei Xing, Jingjing Wu, Qiming Yang, Qi Zhang, Matan Azrad, Viacheslav Ovsiienko, Andrew Rybchenko
CC: David Marchand
Subject: [PATCH v7 4/7] ethdev: use GTP protocol struct for flow matching
Date: Fri, 3 Feb 2023 16:48:51 +0000
Message-ID: <20230203164854.602595-5-ferruh.yigit@amd.com>
In-Reply-To: <20230203164854.602595-1-ferruh.yigit@amd.com>
References: <20221025214410.715864-1-thomas@monjalon.net> <20230203164854.602595-1-ferruh.yigit@amd.com>
From: Thomas Monjalon

As announced in the deprecation notice, flow item structures should re-use the protocol header definitions from the directory lib/net/. The protocol struct is added in an unnamed union, keeping old field names. The GTP header struct members are used in apps and drivers instead of the redundant fields in the flow items.

Signed-off-by: Thomas Monjalon
Acked-by: Ferruh Yigit
Acked-by: Andrew Rybchenko
---
 app/test-flow-perf/items_gen.c        |  4 ++--
 app/test-pmd/cmdline_flow.c           |  8 +++----
 doc/guides/prog_guide/rte_flow.rst    | 10 ++-------
 doc/guides/rel_notes/deprecation.rst  |  1 -
 drivers/net/i40e/i40e_fdir.c          | 14 ++++++------
 drivers/net/i40e/i40e_flow.c          | 20 ++++++++---------
 drivers/net/iavf/iavf_fdir.c          |  8 +++----
 drivers/net/ice/ice_fdir_filter.c     | 10 ++++-----
 drivers/net/ice/ice_switch_filter.c   | 12 +++++-----
 drivers/net/mlx5/hws/mlx5dr_definer.c | 14 ++++++------
 drivers/net/mlx5/mlx5_flow_dv.c       | 20 ++++++++---------
 lib/ethdev/rte_flow.h                 | 32 ++++++++++++++++++---------
 12 files changed, 78 insertions(+), 75 deletions(-)

diff --git a/app/test-flow-perf/items_gen.c b/app/test-flow-perf/items_gen.c index a58245239ba1..85928349eee0 100644 --- a/app/test-flow-perf/items_gen.c +++ b/app/test-flow-perf/items_gen.c @@ -213,10 +213,10 @@ add_gtp(struct rte_flow_item *items, __rte_unused struct additional_para para) { static struct rte_flow_item_gtp gtp_spec = { - .teid = RTE_BE32(TEID_VALUE), + .hdr.teid = RTE_BE32(TEID_VALUE), }; static struct rte_flow_item_gtp gtp_mask = { - .teid = RTE_BE32(0xffffffff), + .hdr.teid =
RTE_BE32(0xffffffff), }; items[items_counter].type = RTE_FLOW_ITEM_TYPE_GTP; diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c index b904f8c3d45c..429d9cab8217 100644 --- a/app/test-pmd/cmdline_flow.c +++ b/app/test-pmd/cmdline_flow.c @@ -4137,19 +4137,19 @@ static const struct token token_list[] = { .help = "GTP flags", .next = NEXT(item_gtp, NEXT_ENTRY(COMMON_UNSIGNED), item_param), .args = ARGS(ARGS_ENTRY(struct rte_flow_item_gtp, - v_pt_rsv_flags)), + hdr.gtp_hdr_info)), }, [ITEM_GTP_MSG_TYPE] = { .name = "msg_type", .help = "GTP message type", .next = NEXT(item_gtp, NEXT_ENTRY(COMMON_UNSIGNED), item_param), - .args = ARGS(ARGS_ENTRY(struct rte_flow_item_gtp, msg_type)), + .args = ARGS(ARGS_ENTRY(struct rte_flow_item_gtp, hdr.msg_type)), }, [ITEM_GTP_TEID] = { .name = "teid", .help = "tunnel endpoint identifier", .next = NEXT(item_gtp, NEXT_ENTRY(COMMON_UNSIGNED), item_param), - .args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_gtp, teid)), + .args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_gtp, hdr.teid)), }, [ITEM_GTPC] = { .name = "gtpc", @@ -11224,7 +11224,7 @@ cmd_set_raw_parsed(const struct buffer *in) goto error; } gtp = item->spec; - if ((gtp->v_pt_rsv_flags & 0x07) != 0x04) { + if (gtp->hdr.s == 1 || gtp->hdr.pn == 1) { /* Only E flag should be set. */ fprintf(stderr, "Error - GTP unsupported flags\n"); diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst index 116722351486..c4b96b5d324b 100644 --- a/doc/guides/prog_guide/rte_flow.rst +++ b/doc/guides/prog_guide/rte_flow.rst @@ -1068,12 +1068,7 @@ Note: GTP, GTPC and GTPU use the same structure. GTPC and GTPU item are defined for a user-friendly API when creating GTP-C and GTP-U flow rules. -- ``v_pt_rsv_flags``: version (3b), protocol type (1b), reserved (1b), - extension header flag (1b), sequence number flag (1b), N-PDU number - flag (1b). -- ``msg_type``: message type. -- ``msg_len``: message length. -- ``teid``: tunnel endpoint identifier. 
+- ``hdr``: header definition (``rte_gtp.h``). - Default ``mask`` matches teid only. Item: ``ESP`` @@ -1239,8 +1234,7 @@ Item: ``GTP_PSC`` Matches a GTP PDU extension header with type 0x85. -- ``pdu_type``: PDU type. -- ``qfi``: QoS flow identifier. +- ``hdr``: header definition (``rte_gtp.h``). - Default ``mask`` matches QFI only. Item: ``PPPOES``, ``PPPOED`` diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst index df8b5bcb1b64..838d5854ad9b 100644 --- a/doc/guides/rel_notes/deprecation.rst +++ b/doc/guides/rel_notes/deprecation.rst @@ -74,7 +74,6 @@ Deprecation Notices - ``rte_flow_item_geneve`` - ``rte_flow_item_geneve_opt`` - ``rte_flow_item_gre`` - - ``rte_flow_item_gtp`` - ``rte_flow_item_icmp6`` - ``rte_flow_item_icmp6_nd_na`` - ``rte_flow_item_icmp6_nd_ns`` diff --git a/drivers/net/i40e/i40e_fdir.c b/drivers/net/i40e/i40e_fdir.c index afcaa593eb58..47f79ecf11cc 100644 --- a/drivers/net/i40e/i40e_fdir.c +++ b/drivers/net/i40e/i40e_fdir.c @@ -761,26 +761,26 @@ i40e_flow_fdir_construct_pkt(struct i40e_pf *pf, gtp = (struct rte_flow_item_gtp *) ((unsigned char *)udp + sizeof(struct rte_udp_hdr)); - gtp->msg_len = + gtp->hdr.plen = rte_cpu_to_be_16(I40E_FDIR_GTP_DEFAULT_LEN); - gtp->teid = fdir_input->flow.gtp_flow.teid; - gtp->msg_type = I40E_FDIR_GTP_MSG_TYPE_0X01; + gtp->hdr.teid = fdir_input->flow.gtp_flow.teid; + gtp->hdr.msg_type = I40E_FDIR_GTP_MSG_TYPE_0X01; /* GTP-C message type is not supported. 
*/ if (cus_pctype->index == I40E_CUSTOMIZED_GTPC) { udp->dst_port = rte_cpu_to_be_16(I40E_FDIR_GTPC_DST_PORT); - gtp->v_pt_rsv_flags = + gtp->hdr.gtp_hdr_info = I40E_FDIR_GTP_VER_FLAG_0X32; } else { udp->dst_port = rte_cpu_to_be_16(I40E_FDIR_GTPU_DST_PORT); - gtp->v_pt_rsv_flags = + gtp->hdr.gtp_hdr_info = I40E_FDIR_GTP_VER_FLAG_0X30; } if (cus_pctype->index == I40E_CUSTOMIZED_GTPU_IPV4) { - gtp->msg_type = I40E_FDIR_GTP_MSG_TYPE_0XFF; + gtp->hdr.msg_type = I40E_FDIR_GTP_MSG_TYPE_0XFF; gtp_ipv4 = (struct rte_ipv4_hdr *) ((unsigned char *)gtp + sizeof(struct rte_flow_item_gtp)); @@ -794,7 +794,7 @@ i40e_flow_fdir_construct_pkt(struct i40e_pf *pf, sizeof(struct rte_ipv4_hdr); } else if (cus_pctype->index == I40E_CUSTOMIZED_GTPU_IPV6) { - gtp->msg_type = I40E_FDIR_GTP_MSG_TYPE_0XFF; + gtp->hdr.msg_type = I40E_FDIR_GTP_MSG_TYPE_0XFF; gtp_ipv6 = (struct rte_ipv6_hdr *) ((unsigned char *)gtp + sizeof(struct rte_flow_item_gtp)); diff --git a/drivers/net/i40e/i40e_flow.c b/drivers/net/i40e/i40e_flow.c index 2855b14fe679..3c550733f2bb 100644 --- a/drivers/net/i40e/i40e_flow.c +++ b/drivers/net/i40e/i40e_flow.c @@ -2135,10 +2135,10 @@ i40e_flow_parse_fdir_pattern(struct rte_eth_dev *dev, gtp_mask = item->mask; if (gtp_spec && gtp_mask) { - if (gtp_mask->v_pt_rsv_flags || - gtp_mask->msg_type || - gtp_mask->msg_len || - gtp_mask->teid != UINT32_MAX) { + if (gtp_mask->hdr.gtp_hdr_info || + gtp_mask->hdr.msg_type || + gtp_mask->hdr.plen || + gtp_mask->hdr.teid != UINT32_MAX) { rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, item, @@ -2147,7 +2147,7 @@ i40e_flow_parse_fdir_pattern(struct rte_eth_dev *dev, } filter->input.flow.gtp_flow.teid = - gtp_spec->teid; + gtp_spec->hdr.teid; filter->input.flow_ext.customized_pctype = true; cus_proto = item_type; } @@ -3570,10 +3570,10 @@ i40e_flow_parse_gtp_pattern(struct rte_eth_dev *dev, return -rte_errno; } - if (gtp_mask->v_pt_rsv_flags || - gtp_mask->msg_type || - gtp_mask->msg_len || - gtp_mask->teid != UINT32_MAX) { + if 
(gtp_mask->hdr.gtp_hdr_info || + gtp_mask->hdr.msg_type || + gtp_mask->hdr.plen || + gtp_mask->hdr.teid != UINT32_MAX) { rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, item, @@ -3586,7 +3586,7 @@ i40e_flow_parse_gtp_pattern(struct rte_eth_dev *dev, else if (item_type == RTE_FLOW_ITEM_TYPE_GTPU) filter->tunnel_type = I40E_TUNNEL_TYPE_GTPU; - filter->tenant_id = rte_be_to_cpu_32(gtp_spec->teid); + filter->tenant_id = rte_be_to_cpu_32(gtp_spec->hdr.teid); break; default: diff --git a/drivers/net/iavf/iavf_fdir.c b/drivers/net/iavf/iavf_fdir.c index a6c88cb55b88..811a10287b70 100644 --- a/drivers/net/iavf/iavf_fdir.c +++ b/drivers/net/iavf/iavf_fdir.c @@ -1277,16 +1277,16 @@ iavf_fdir_parse_pattern(__rte_unused struct iavf_adapter *ad, VIRTCHNL_SET_PROTO_HDR_TYPE(hdr, GTPU_IP); if (gtp_spec && gtp_mask) { - if (gtp_mask->v_pt_rsv_flags || - gtp_mask->msg_type || - gtp_mask->msg_len) { + if (gtp_mask->hdr.gtp_hdr_info || + gtp_mask->hdr.msg_type || + gtp_mask->hdr.plen) { rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, item, "Invalid GTP mask"); return -rte_errno; } - if (gtp_mask->teid == UINT32_MAX) { + if (gtp_mask->hdr.teid == UINT32_MAX) { input_set |= IAVF_INSET_GTPU_TEID; VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, GTPU_IP, TEID); } diff --git a/drivers/net/ice/ice_fdir_filter.c b/drivers/net/ice/ice_fdir_filter.c index 5d297afc290e..480b369af816 100644 --- a/drivers/net/ice/ice_fdir_filter.c +++ b/drivers/net/ice/ice_fdir_filter.c @@ -2341,9 +2341,9 @@ ice_fdir_parse_pattern(__rte_unused struct ice_adapter *ad, if (!(gtp_spec && gtp_mask)) break; - if (gtp_mask->v_pt_rsv_flags || - gtp_mask->msg_type || - gtp_mask->msg_len) { + if (gtp_mask->hdr.gtp_hdr_info || + gtp_mask->hdr.msg_type || + gtp_mask->hdr.plen) { rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, item, @@ -2351,10 +2351,10 @@ ice_fdir_parse_pattern(__rte_unused struct ice_adapter *ad, return -rte_errno; } - if (gtp_mask->teid == UINT32_MAX) + if (gtp_mask->hdr.teid == 
UINT32_MAX) input_set_o |= ICE_INSET_GTPU_TEID; - filter->input.gtpu_data.teid = gtp_spec->teid; + filter->input.gtpu_data.teid = gtp_spec->hdr.teid; break; case RTE_FLOW_ITEM_TYPE_GTP_PSC: tunnel_type = ICE_FDIR_TUNNEL_TYPE_GTPU_EH; diff --git a/drivers/net/ice/ice_switch_filter.c b/drivers/net/ice/ice_switch_filter.c index 7cb20fa0b4f8..110d8895fea3 100644 --- a/drivers/net/ice/ice_switch_filter.c +++ b/drivers/net/ice/ice_switch_filter.c @@ -1405,9 +1405,9 @@ ice_switch_parse_pattern(const struct rte_flow_item pattern[], return false; } if (gtp_spec && gtp_mask) { - if (gtp_mask->v_pt_rsv_flags || - gtp_mask->msg_type || - gtp_mask->msg_len) { + if (gtp_mask->hdr.gtp_hdr_info || + gtp_mask->hdr.msg_type || + gtp_mask->hdr.plen) { rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, item, @@ -1415,13 +1415,13 @@ ice_switch_parse_pattern(const struct rte_flow_item pattern[], return false; } input = &outer_input_set; - if (gtp_mask->teid) + if (gtp_mask->hdr.teid) *input |= ICE_INSET_GTPU_TEID; list[t].type = ICE_GTP; list[t].h_u.gtp_hdr.teid = - gtp_spec->teid; + gtp_spec->hdr.teid; list[t].m_u.gtp_hdr.teid = - gtp_mask->teid; + gtp_mask->hdr.teid; input_set_byte += 4; t++; } diff --git a/drivers/net/mlx5/hws/mlx5dr_definer.c b/drivers/net/mlx5/hws/mlx5dr_definer.c index 604384a24253..fbcfe3665748 100644 --- a/drivers/net/mlx5/hws/mlx5dr_definer.c +++ b/drivers/net/mlx5/hws/mlx5dr_definer.c @@ -145,9 +145,9 @@ struct mlx5dr_definer_conv_data { X(SET_BE16, tcp_src_port, v->hdr.src_port, rte_flow_item_tcp) \ X(SET_BE16, tcp_dst_port, v->hdr.dst_port, rte_flow_item_tcp) \ X(SET, gtp_udp_port, RTE_GTPU_UDP_PORT, rte_flow_item_gtp) \ - X(SET_BE32, gtp_teid, v->teid, rte_flow_item_gtp) \ - X(SET, gtp_msg_type, v->msg_type, rte_flow_item_gtp) \ - X(SET, gtp_ext_flag, !!v->v_pt_rsv_flags, rte_flow_item_gtp) \ + X(SET_BE32, gtp_teid, v->hdr.teid, rte_flow_item_gtp) \ + X(SET, gtp_msg_type, v->hdr.msg_type, rte_flow_item_gtp) \ + X(SET, gtp_ext_flag, 
!!v->hdr.gtp_hdr_info, rte_flow_item_gtp) \ X(SET, gtp_next_ext_hdr, GTP_PDU_SC, rte_flow_item_gtp_psc) \ X(SET, gtp_ext_hdr_pdu, v->hdr.type, rte_flow_item_gtp_psc) \ X(SET, gtp_ext_hdr_qfi, v->hdr.qfi, rte_flow_item_gtp_psc) \ @@ -830,12 +830,12 @@ mlx5dr_definer_conv_item_gtp(struct mlx5dr_definer_conv_data *cd, if (!m) return 0; - if (m->msg_len || m->v_pt_rsv_flags & ~MLX5DR_DEFINER_GTP_EXT_HDR_BIT) { + if (m->hdr.plen || m->hdr.gtp_hdr_info & ~MLX5DR_DEFINER_GTP_EXT_HDR_BIT) { rte_errno = ENOTSUP; return rte_errno; } - if (m->teid) { + if (m->hdr.teid) { if (!(cd->caps->flex_protocols & MLX5_HCA_FLEX_GTPU_TEID_ENABLED)) { rte_errno = ENOTSUP; return rte_errno; @@ -847,7 +847,7 @@ mlx5dr_definer_conv_item_gtp(struct mlx5dr_definer_conv_data *cd, fc->byte_off = cd->caps->format_select_gtpu_dw_1 * DW_SIZE; } - if (m->v_pt_rsv_flags) { + if (m->hdr.gtp_hdr_info) { if (!(cd->caps->flex_protocols & MLX5_HCA_FLEX_GTPU_DW_0_ENABLED)) { rte_errno = ENOTSUP; return rte_errno; @@ -861,7 +861,7 @@ mlx5dr_definer_conv_item_gtp(struct mlx5dr_definer_conv_data *cd, } - if (m->msg_type) { + if (m->hdr.msg_type) { if (!(cd->caps->flex_protocols & MLX5_HCA_FLEX_GTPU_DW_0_ENABLED)) { rte_errno = ENOTSUP; return rte_errno; diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c index 261c60a5c33a..54cd4ca7344c 100644 --- a/drivers/net/mlx5/mlx5_flow_dv.c +++ b/drivers/net/mlx5/mlx5_flow_dv.c @@ -2458,9 +2458,9 @@ flow_dv_validate_item_gtp(struct rte_eth_dev *dev, const struct rte_flow_item_gtp *spec = item->spec; const struct rte_flow_item_gtp *mask = item->mask; const struct rte_flow_item_gtp nic_mask = { - .v_pt_rsv_flags = MLX5_GTP_FLAGS_MASK, - .msg_type = 0xff, - .teid = RTE_BE32(0xffffffff), + .hdr.gtp_hdr_info = MLX5_GTP_FLAGS_MASK, + .hdr.msg_type = 0xff, + .hdr.teid = RTE_BE32(0xffffffff), }; if (!priv->sh->cdev->config.hca_attr.tunnel_stateless_gtp) @@ -2478,7 +2478,7 @@ flow_dv_validate_item_gtp(struct rte_eth_dev *dev, "no outer UDP layer 
found"); if (!mask) mask = &rte_flow_item_gtp_mask; - if (spec && spec->v_pt_rsv_flags & ~MLX5_GTP_FLAGS_MASK) + if (spec && spec->hdr.gtp_hdr_info & ~MLX5_GTP_FLAGS_MASK) return rte_flow_error_set(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ITEM, item, "Match is supported for GTP" @@ -2529,8 +2529,8 @@ flow_dv_validate_item_gtp_psc(const struct rte_flow_item *item, gtp_mask = gtp_item->mask ? gtp_item->mask : &rte_flow_item_gtp_mask; /* GTP spec and E flag is requested to match zero. */ if (gtp_spec && - (gtp_mask->v_pt_rsv_flags & - ~gtp_spec->v_pt_rsv_flags & MLX5_GTP_EXT_HEADER_FLAG)) + (gtp_mask->hdr.gtp_hdr_info & + ~gtp_spec->hdr.gtp_hdr_info & MLX5_GTP_EXT_HEADER_FLAG)) return rte_flow_error_set (error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ITEM, item, "GTP E flag must be 1 to match GTP PSC"); @@ -9318,7 +9318,7 @@ flow_dv_translate_item_vxlan_gpe(void *key, const struct rte_flow_item *item, const uint64_t pattern_flags, uint32_t key_type) { - static const struct rte_flow_item_vxlan_gpe dummy_vxlan_gpe_hdr = {0, }; + static const struct rte_flow_item_vxlan_gpe dummy_vxlan_gpe_hdr = {{{0}}}; const struct rte_flow_item_vxlan_gpe *vxlan_m = item->mask; const struct rte_flow_item_vxlan_gpe *vxlan_v = item->spec; /* The item was validated to be on the outer side */ @@ -10356,11 +10356,11 @@ flow_dv_translate_item_gtp(void *key, const struct rte_flow_item *item, MLX5_ITEM_UPDATE(item, key_type, gtp_v, gtp_m, &rte_flow_item_gtp_mask); MLX5_SET(fte_match_set_misc3, misc3_v, gtpu_msg_flags, - gtp_v->v_pt_rsv_flags & gtp_m->v_pt_rsv_flags); + gtp_v->hdr.gtp_hdr_info & gtp_m->hdr.gtp_hdr_info); MLX5_SET(fte_match_set_misc3, misc3_v, gtpu_msg_type, - gtp_v->msg_type & gtp_m->msg_type); + gtp_v->hdr.msg_type & gtp_m->hdr.msg_type); MLX5_SET(fte_match_set_misc3, misc3_v, gtpu_teid, - rte_be_to_cpu_32(gtp_v->teid & gtp_m->teid)); + rte_be_to_cpu_32(gtp_v->hdr.teid & gtp_m->hdr.teid)); } /** diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h index e2364823d622..8e8925277eb3 100644 
--- a/lib/ethdev/rte_flow.h +++ b/lib/ethdev/rte_flow.h @@ -1139,23 +1139,33 @@ static const struct rte_flow_item_fuzzy rte_flow_item_fuzzy_mask = { * * Matches a GTPv1 header. */ +RTE_STD_C11 struct rte_flow_item_gtp { - /** - * Version (3b), protocol type (1b), reserved (1b), - * Extension header flag (1b), - * Sequence number flag (1b), - * N-PDU number flag (1b). - */ - uint8_t v_pt_rsv_flags; - uint8_t msg_type; /**< Message type. */ - rte_be16_t msg_len; /**< Message length. */ - rte_be32_t teid; /**< Tunnel endpoint identifier. */ + union { + struct { + /* + * These are old fields kept for compatibility. + * Please prefer hdr field below. + */ + /** + * Version (3b), protocol type (1b), reserved (1b), + * Extension header flag (1b), + * Sequence number flag (1b), + * N-PDU number flag (1b). + */ + uint8_t v_pt_rsv_flags; + uint8_t msg_type; /**< Message type. */ + rte_be16_t msg_len; /**< Message length. */ + rte_be32_t teid; /**< Tunnel endpoint identifier. */ + }; + struct rte_gtp_hdr hdr; /**< GTP header definition. */ + }; }; /** Default mask for RTE_FLOW_ITEM_TYPE_GTP. 
*/ #ifndef __cplusplus static const struct rte_flow_item_gtp rte_flow_item_gtp_mask = { - .teid = RTE_BE32(0xffffffff), + .hdr.teid = RTE_BE32(UINT32_MAX), }; #endif
From patchwork Fri Feb 3 16:48:52 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Ferruh Yigit
X-Patchwork-Id: 123045
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Ferruh Yigit
To: Thomas Monjalon, Ori Kam, Aman Singh, Yuying Zhang, Andrew Rybchenko
CC: David Marchand
Subject: [PATCH v7 5/7] ethdev: use ARP protocol struct for flow matching
Date: Fri, 3 Feb 2023 16:48:52 +0000
Message-ID: <20230203164854.602595-6-ferruh.yigit@amd.com>
In-Reply-To: <20230203164854.602595-1-ferruh.yigit@amd.com>
References: <20221025214410.715864-1-thomas@monjalon.net> <20230203164854.602595-1-ferruh.yigit@amd.com>
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BY5PR12MB4146 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org From: Thomas Monjalon As announced in the deprecation notice, flow item structures should re-use the protocol header definitions from the directory lib/net/. The protocol struct is added in an unnamed union, keeping old field names. The ARP header struct members are used in testpmd instead of the redundant fields in the flow items. Signed-off-by: Thomas Monjalon Acked-by: Ferruh Yigit Acked-by: Ori Kam Acked-by: Andrew Rybchenko --- app/test-pmd/cmdline_flow.c | 8 +++--- doc/guides/prog_guide/rte_flow.rst | 10 +------- doc/guides/rel_notes/deprecation.rst | 1 - lib/ethdev/rte_flow.h | 37 ++++++++++++++++++---------- 4 files changed, 29 insertions(+), 27 deletions(-) diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c index 429d9cab8217..275a1f9d3b5e 100644 --- a/app/test-pmd/cmdline_flow.c +++ b/app/test-pmd/cmdline_flow.c @@ -4226,7 +4226,7 @@ static const struct token token_list[] = { .next = NEXT(item_arp_eth_ipv4, NEXT_ENTRY(COMMON_MAC_ADDR), item_param), .args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_arp_eth_ipv4, - sha)), + hdr.arp_data.arp_sha)), }, [ITEM_ARP_ETH_IPV4_SPA] = { .name = "spa", @@ -4234,7 +4234,7 @@ static const struct token token_list[] = { .next = NEXT(item_arp_eth_ipv4, NEXT_ENTRY(COMMON_IPV4_ADDR), item_param), .args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_arp_eth_ipv4, - spa)), + hdr.arp_data.arp_sip)), }, [ITEM_ARP_ETH_IPV4_THA] = { .name = "tha", @@ -4242,7 +4242,7 @@ static const struct token token_list[] = { .next = NEXT(item_arp_eth_ipv4, NEXT_ENTRY(COMMON_MAC_ADDR), item_param), .args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_arp_eth_ipv4, - tha)), + hdr.arp_data.arp_tha)), }, [ITEM_ARP_ETH_IPV4_TPA] = { .name = "tpa", @@ -4250,7 
+4250,7 @@ static const struct token token_list[] = { .next = NEXT(item_arp_eth_ipv4, NEXT_ENTRY(COMMON_IPV4_ADDR), item_param), .args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_arp_eth_ipv4, - tpa)), + hdr.arp_data.arp_tip)), }, [ITEM_IPV6_EXT] = { .name = "ipv6_ext", diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst index c4b96b5d324b..085c93c89b3b 100644 --- a/doc/guides/prog_guide/rte_flow.rst +++ b/doc/guides/prog_guide/rte_flow.rst @@ -1104,15 +1104,7 @@ Item: ``ARP_ETH_IPV4`` Matches an ARP header for Ethernet/IPv4. -- ``hdr``: hardware type, normally 1. -- ``pro``: protocol type, normally 0x0800. -- ``hln``: hardware address length, normally 6. -- ``pln``: protocol address length, normally 4. -- ``op``: opcode (1 for request, 2 for reply). -- ``sha``: sender hardware address. -- ``spa``: sender IPv4 address. -- ``tha``: target hardware address. -- ``tpa``: target IPv4 address. +- ``hdr``: header definition (``rte_arp.h``). - Default ``mask`` matches SHA, SPA, THA and TPA. Item: ``IPV6_EXT`` diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst index 838d5854ad9b..6097eb5e0c5b 100644 --- a/doc/guides/rel_notes/deprecation.rst +++ b/doc/guides/rel_notes/deprecation.rst @@ -69,7 +69,6 @@ Deprecation Notices These items are not compliant (not including struct from lib/net/): - ``rte_flow_item_ah`` - - ``rte_flow_item_arp_eth_ipv4`` - ``rte_flow_item_e_tag`` - ``rte_flow_item_geneve`` - ``rte_flow_item_geneve_opt`` diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h index 8e8925277eb3..b8f66d668cac 100644 --- a/lib/ethdev/rte_flow.h +++ b/lib/ethdev/rte_flow.h @@ -20,6 +20,7 @@ #include #include #include +#include #include #include #include @@ -1245,26 +1246,36 @@ static const struct rte_flow_item_vxlan_gpe rte_flow_item_vxlan_gpe_mask = { * * Matches an ARP header for Ethernet/IPv4. */ +RTE_STD_C11 struct rte_flow_item_arp_eth_ipv4 { - rte_be16_t hrd; /**< Hardware type, normally 1. 
*/ - rte_be16_t pro; /**< Protocol type, normally 0x0800. */ - uint8_t hln; /**< Hardware address length, normally 6. */ - uint8_t pln; /**< Protocol address length, normally 4. */ - rte_be16_t op; /**< Opcode (1 for request, 2 for reply). */ - struct rte_ether_addr sha; /**< Sender hardware address. */ - rte_be32_t spa; /**< Sender IPv4 address. */ - struct rte_ether_addr tha; /**< Target hardware address. */ - rte_be32_t tpa; /**< Target IPv4 address. */ + union { + struct { + /* + * These are old fields kept for compatibility. + * Please prefer hdr field below. + */ + rte_be16_t hrd; /**< Hardware type, normally 1. */ + rte_be16_t pro; /**< Protocol type, normally 0x0800. */ + uint8_t hln; /**< Hardware address length, normally 6. */ + uint8_t pln; /**< Protocol address length, normally 4. */ + rte_be16_t op; /**< Opcode (1 for request, 2 for reply). */ + struct rte_ether_addr sha; /**< Sender hardware address. */ + rte_be32_t spa; /**< Sender IPv4 address. */ + struct rte_ether_addr tha; /**< Target hardware address. */ + rte_be32_t tpa; /**< Target IPv4 address. */ + }; + struct rte_arp_hdr hdr; /**< ARP header definition. */ + }; }; /** Default mask for RTE_FLOW_ITEM_TYPE_ARP_ETH_IPV4. 
*/ #ifndef __cplusplus static const struct rte_flow_item_arp_eth_ipv4 rte_flow_item_arp_eth_ipv4_mask = { - .sha.addr_bytes = "\xff\xff\xff\xff\xff\xff", - .spa = RTE_BE32(0xffffffff), - .tha.addr_bytes = "\xff\xff\xff\xff\xff\xff", - .tpa = RTE_BE32(0xffffffff), + .hdr.arp_data.arp_sha.addr_bytes = "\xff\xff\xff\xff\xff\xff", + .hdr.arp_data.arp_sip = RTE_BE32(UINT32_MAX), + .hdr.arp_data.arp_tha.addr_bytes = "\xff\xff\xff\xff\xff\xff", + .hdr.arp_data.arp_tip = RTE_BE32(UINT32_MAX), }; #endif
From patchwork Fri Feb 3 16:48:53 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Ferruh Yigit
X-Patchwork-Id: 123046
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Ferruh Yigit
To: Thomas Monjalon, Ori Kam, Wenjun Wu, Ferruh Yigit, Jie Wang, Andrew Rybchenko
CC: David Marchand
Subject: [PATCH v7 6/7] doc: fix description of L2TPV2 flow item
Date: Fri, 3 Feb 2023 16:48:53 +0000
Message-ID: <20230203164854.602595-7-ferruh.yigit@amd.com>
In-Reply-To: <20230203164854.602595-1-ferruh.yigit@amd.com>
References: <20221025214410.715864-1-thomas@monjalon.net> <20230203164854.602595-1-ferruh.yigit@amd.com>
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM4PR12MB5392 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org From: Thomas Monjalon The flow item structure includes the protocol definition from the directory lib/net/, so it is reflected in the guide. Section title underlining is also fixed around. Fixes: 3a929df1f286 ("ethdev: support L2TPv2 and PPP procotol") Cc: stable@dpdk.org Signed-off-by: Thomas Monjalon Acked-by: Ferruh Yigit Acked-by: Andrew Rybchenko --- Cc: jie1x.wang@intel.com --- doc/guides/prog_guide/rte_flow.rst | 13 +++---------- 1 file changed, 3 insertions(+), 10 deletions(-) diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst index 085c93c89b3b..f532cb1675ff 100644 --- a/doc/guides/prog_guide/rte_flow.rst +++ b/doc/guides/prog_guide/rte_flow.rst @@ -1489,22 +1489,15 @@ rte_flow_flex_item_create() routine. value and mask. Item: ``L2TPV2`` -^^^^^^^^^^^^^^^^^^^ +^^^^^^^^^^^^^^^^ Matches a L2TPv2 header. -- ``flags_version``: flags(12b), version(4b). -- ``length``: total length of the message. -- ``tunnel_id``: identifier for the control connection. -- ``session_id``: identifier for a session within a tunnel. -- ``ns``: sequence number for this date or control message. -- ``nr``: sequence number expected in the next control message to be received. -- ``offset_size``: offset of payload data. -- ``offset_padding``: offset padding, variable length. +- ``hdr``: header definition (``rte_l2tpv2.h``). - Default ``mask`` matches flags_version only. Item: ``PPP`` -^^^^^^^^^^^^^^^^^^^ +^^^^^^^^^^^^^ Matches a PPP header. 
From patchwork Fri Feb 3 16:48:54 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Ferruh Yigit
X-Patchwork-Id: 123047
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Ferruh Yigit
To: Thomas Monjalon, Ori Kam, Andrew Rybchenko, Olivier Matz
CC: David Marchand
Subject: [PATCH v7 7/7] net: mark all big endian types
Date: Fri, 3 Feb 2023 16:48:54 +0000
Message-ID: <20230203164854.602595-8-ferruh.yigit@amd.com>
In-Reply-To: <20230203164854.602595-1-ferruh.yigit@amd.com>
References: <20221025214410.715864-1-thomas@monjalon.net> <20230203164854.602595-1-ferruh.yigit@amd.com>
List-Id: DPDK patches and discussions

From: Thomas Monjalon

Some protocols (ARP, MPLS and HIGIG2) were using uint16_t and uint32_t types for their 16 and 32-bit fields. It was correct but not conveying the big endian nature of these fields. As for other protocols defined in this directory, all types are explicitly marked as big endian fields.
Signed-off-by: Thomas Monjalon
Acked-by: Ferruh Yigit
Acked-by: Andrew Rybchenko
---
 lib/ethdev/rte_flow.h |  4 ++--
 lib/net/rte_arp.h     | 28 ++++++++++++++--------------
 lib/net/rte_gre.h     |  2 +-
 lib/net/rte_higig.h   |  6 +++---
 lib/net/rte_mpls.h    |  2 +-
 5 files changed, 21 insertions(+), 21 deletions(-)

diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index b8f66d668cac..7b780f70a56f 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -642,8 +642,8 @@ struct rte_flow_item_higig2_hdr {
 static const struct rte_flow_item_higig2_hdr rte_flow_item_higig2_hdr_mask = {
 	.hdr = {
 		.ppt1 = {
-			.classification = 0xffff,
-			.vid = 0xfff,
+			.classification = RTE_BE16(UINT16_MAX),
+			.vid = RTE_BE16(0xfff),
 		},
 	},
 };
diff --git a/lib/net/rte_arp.h b/lib/net/rte_arp.h
index 076c8ab314ee..c3cd0afb5ca8 100644
--- a/lib/net/rte_arp.h
+++ b/lib/net/rte_arp.h
@@ -23,28 +23,28 @@ extern "C" {
  */
 struct rte_arp_ipv4 {
 	struct rte_ether_addr arp_sha;  /**< sender hardware address */
-	uint32_t          arp_sip;      /**< sender IP address */
+	rte_be32_t        arp_sip;      /**< sender IP address */
 	struct rte_ether_addr arp_tha;  /**< target hardware address */
-	uint32_t          arp_tip;      /**< target IP address */
+	rte_be32_t        arp_tip;      /**< target IP address */
 } __rte_packed __rte_aligned(2);
 
 /**
  * ARP header.
  */
 struct rte_arp_hdr {
-	uint16_t arp_hardware;    /* format of hardware address */
-#define RTE_ARP_HRD_ETHER     1  /* ARP Ethernet address format */
+	rte_be16_t arp_hardware;  /**< format of hardware address */
+#define RTE_ARP_HRD_ETHER     1  /**< ARP Ethernet address format */
 
-	uint16_t arp_protocol;    /* format of protocol address */
-	uint8_t  arp_hlen;        /* length of hardware address */
-	uint8_t  arp_plen;        /* length of protocol address */
-	uint16_t arp_opcode;      /* ARP opcode (command) */
-#define	RTE_ARP_OP_REQUEST    1 /* request to resolve address */
-#define	RTE_ARP_OP_REPLY      2 /* response to previous request */
-#define	RTE_ARP_OP_REVREQUEST 3 /* request proto addr given hardware */
-#define	RTE_ARP_OP_REVREPLY   4 /* response giving protocol address */
-#define	RTE_ARP_OP_INVREQUEST 8 /* request to identify peer */
-#define	RTE_ARP_OP_INVREPLY   9 /* response identifying peer */
+	rte_be16_t arp_protocol;  /**< format of protocol address */
+	uint8_t    arp_hlen;      /**< length of hardware address */
+	uint8_t    arp_plen;      /**< length of protocol address */
+	rte_be16_t arp_opcode;    /**< ARP opcode (command) */
+#define	RTE_ARP_OP_REQUEST    1 /**< request to resolve address */
+#define	RTE_ARP_OP_REPLY      2 /**< response to previous request */
+#define	RTE_ARP_OP_REVREQUEST 3 /**< request proto addr given hardware */
+#define	RTE_ARP_OP_REVREPLY   4 /**< response giving protocol address */
+#define	RTE_ARP_OP_INVREQUEST 8 /**< request to identify peer */
+#define	RTE_ARP_OP_INVREPLY   9 /**< response identifying peer */
 
 	struct rte_arp_ipv4 arp_data;
 } __rte_packed __rte_aligned(2);
diff --git a/lib/net/rte_gre.h b/lib/net/rte_gre.h
index 6c6aef6fcaa0..8da8027b43da 100644
--- a/lib/net/rte_gre.h
+++ b/lib/net/rte_gre.h
@@ -45,7 +45,7 @@ struct rte_gre_hdr {
 	uint16_t res3:5;  /**< Reserved */
 	uint16_t ver:3;   /**< Version Number */
 #endif
-	uint16_t proto;   /**< Protocol Type */
+	rte_be16_t proto; /**< Protocol Type */
 } __rte_packed;
 
 /**
diff --git a/lib/net/rte_higig.h b/lib/net/rte_higig.h
index b55fb1a7db44..bba3898a883f 100644
--- a/lib/net/rte_higig.h
+++ b/lib/net/rte_higig.h
@@ -112,9 +112,9 @@ struct rte_higig2_ppt_type0 {
  */
__extension__
 struct rte_higig2_ppt_type1 {
-	uint16_t classification;
-	uint16_t resv;
-	uint16_t vid;
+	rte_be16_t classification;
+	rte_be16_t resv;
+	rte_be16_t vid;
 #if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
 	uint16_t opcode:3;
 	uint16_t resv1:2;
diff --git a/lib/net/rte_mpls.h b/lib/net/rte_mpls.h
index 3e8cb90ec383..51523e7a1188 100644
--- a/lib/net/rte_mpls.h
+++ b/lib/net/rte_mpls.h
@@ -23,7 +23,7 @@ extern "C" {
  */
__extension__
 struct rte_mpls_hdr {
-	uint16_t tag_msb;   /**< Label(msb). */
+	rte_be16_t tag_msb; /**< Label(msb). */
 #if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
 	uint8_t tag_lsb:4;  /**< Label(lsb). */
 	uint8_t tc:3;       /**< Traffic class. */