From patchwork Sat Nov 3 06:18:41 2018
X-Patchwork-Submitter: Slava Ovsiienko
X-Patchwork-Id: 47781
X-Patchwork-Delegate: shahafs@mellanox.com
From: Slava Ovsiienko <viacheslavo@mellanox.com>
To: Shahaf Shuler
CC: "dev@dpdk.org", Yongseok Koh, Slava Ovsiienko
Thread-Topic: [PATCH v5 08/13] net/mlx5: add VXLAN support to flow translate routine
Date: Sat, 3 Nov 2018 06:18:41 +0000
Message-ID: <1541225876-8817-9-git-send-email-viacheslavo@mellanox.com>
References: <1541181152-15788-2-git-send-email-viacheslavo@mellanox.com> <1541225876-8817-1-git-send-email-viacheslavo@mellanox.com>
In-Reply-To: <1541225876-8817-1-git-send-email-viacheslavo@mellanox.com>
Subject: [dpdk-dev] [PATCH v5 08/13] net/mlx5: add VXLAN support to flow translate routine
List-Id: DPDK patches and discussions

This part of the patchset adds support for VXLAN-related items and actions to the flow translation routine. Other tunnel types besides VXLAN (such as GRE) can be added later. No VTEP devices are created at this point; the flow rule is only translated, not yet applied.

Suggested-by: Adrien Mazarguil
Signed-off-by: Viacheslav Ovsiienko
Acked-by: Yongseok Koh
---
 drivers/net/mlx5/mlx5_flow_tcf.c | 533 ++++++++++++++++++++++++++++++++++-----
 1 file changed, 476 insertions(+), 57 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_flow_tcf.c b/drivers/net/mlx5/mlx5_flow_tcf.c index 017f2bd..ba17806 100644 --- a/drivers/net/mlx5/mlx5_flow_tcf.c +++ b/drivers/net/mlx5/mlx5_flow_tcf.c @@ -2799,6 +2799,241 @@ struct pedit_parser { } /** + * Convert VXLAN VNI to 32-bit integer. + * + * @param[in] vni + * VXLAN VNI in 24-bit wire format. + * + * @return + * VXLAN VNI as a 32-bit integer value in network endian. + */ +static inline rte_be32_t +vxlan_vni_as_be32(const uint8_t vni[3]) +{ + union { + uint8_t vni[4]; + rte_be32_t dword; + } ret = { + .vni = { 0, vni[0], vni[1], vni[2] }, + }; + return ret.dword; +} + +/** + * Helper function to process RTE_FLOW_ITEM_TYPE_ETH entry in configuration + * of action RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP. Fills the MAC address fields + * in the encapsulation parameters structure. The item must be prevalidated; + * no validation checks are performed by this function. + * + * @param[in] spec + * RTE_FLOW_ITEM_TYPE_ETH entry specification.
+ * @param[in] mask + * RTE_FLOW_ITEM_TYPE_ETH entry mask. + * @param[out] encap + * Structure to fill the gathered MAC address data. + */ +static void +flow_tcf_parse_vxlan_encap_eth(const struct rte_flow_item_eth *spec, + const struct rte_flow_item_eth *mask, + struct flow_tcf_vxlan_encap *encap) +{ + /* Item must be validated before. No redundant checks. */ + assert(spec); + if (!mask || !memcmp(&mask->dst, + &rte_flow_item_eth_mask.dst, + sizeof(rte_flow_item_eth_mask.dst))) { + /* + * Ethernet addresses are not supported by + * tc as tunnel_key parameters. Destination + * address is needed to form encap packet + * header and retrieved by kernel from + * implicit sources (ARP table, etc), + * address masks are not supported at all. + */ + encap->eth.dst = spec->dst; + encap->mask |= FLOW_TCF_ENCAP_ETH_DST; + } + if (!mask || !memcmp(&mask->src, + &rte_flow_item_eth_mask.src, + sizeof(rte_flow_item_eth_mask.src))) { + /* + * Ethernet addresses are not supported by + * tc as tunnel_key parameters. Source ethernet + * address is ignored anyway. + */ + encap->eth.src = spec->src; + encap->mask |= FLOW_TCF_ENCAP_ETH_SRC; + } +} + +/** + * Helper function to process RTE_FLOW_ITEM_TYPE_IPV4 entry in configuration + * of action RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP. Fills the IPV4 address fields + * in the encapsulation parameters structure. The item must be prevalidated; + * no validation checks are performed by this function. + * + * @param[in] spec + * RTE_FLOW_ITEM_TYPE_IPV4 entry specification. + * @param[out] encap + * Structure to fill the gathered IPV4 address data. + */ +static void +flow_tcf_parse_vxlan_encap_ipv4(const struct rte_flow_item_ipv4 *spec, + struct flow_tcf_vxlan_encap *encap) +{ + /* Item must be validated before. No redundant checks.
*/ + assert(spec); + encap->ipv4.dst = spec->hdr.dst_addr; + encap->ipv4.src = spec->hdr.src_addr; + encap->mask |= FLOW_TCF_ENCAP_IPV4_SRC | + FLOW_TCF_ENCAP_IPV4_DST; +} + +/** + * Helper function to process RTE_FLOW_ITEM_TYPE_IPV6 entry in configuration + * of action RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP. Fills the IPV6 address fields + * in the encapsulation parameters structure. The item must be prevalidated; + * no validation checks are performed by this function. + * + * @param[in] spec + * RTE_FLOW_ITEM_TYPE_IPV6 entry specification. + * @param[out] encap + * Structure to fill the gathered IPV6 address data. + */ +static void +flow_tcf_parse_vxlan_encap_ipv6(const struct rte_flow_item_ipv6 *spec, + struct flow_tcf_vxlan_encap *encap) +{ + /* Item must be validated before. No redundant checks. */ + assert(spec); + memcpy(encap->ipv6.dst, spec->hdr.dst_addr, IPV6_ADDR_LEN); + memcpy(encap->ipv6.src, spec->hdr.src_addr, IPV6_ADDR_LEN); + encap->mask |= FLOW_TCF_ENCAP_IPV6_SRC | + FLOW_TCF_ENCAP_IPV6_DST; +} + +/** + * Helper function to process RTE_FLOW_ITEM_TYPE_UDP entry in configuration + * of action RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP. Fills the UDP port fields + * in the encapsulation parameters structure. The item must be prevalidated; + * no validation checks are performed by this function. + * + * @param[in] spec + * RTE_FLOW_ITEM_TYPE_UDP entry specification. + * @param[in] mask + * RTE_FLOW_ITEM_TYPE_UDP entry mask. + * @param[out] encap + * Structure to fill the gathered UDP port data.
+ */ +static void +flow_tcf_parse_vxlan_encap_udp(const struct rte_flow_item_udp *spec, + const struct rte_flow_item_udp *mask, + struct flow_tcf_vxlan_encap *encap) +{ + assert(spec); + encap->udp.dst = spec->hdr.dst_port; + encap->mask |= FLOW_TCF_ENCAP_UDP_DST; + if (!mask || mask->hdr.src_port != RTE_BE16(0x0000)) { + encap->udp.src = spec->hdr.src_port; + encap->mask |= FLOW_TCF_ENCAP_UDP_SRC; + } +} + +/** + * Helper function to process RTE_FLOW_ITEM_TYPE_VXLAN entry in configuration + * of action RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP. Fills the VNI fields + * in the encapsulation parameters structure. The item must be prevalidated; + * no validation checks are performed by this function. + * + * @param[in] spec + * RTE_FLOW_ITEM_TYPE_VXLAN entry specification. + * @param[out] encap + * Structure to fill the gathered VNI address data. + */ +static void +flow_tcf_parse_vxlan_encap_vni(const struct rte_flow_item_vxlan *spec, + struct flow_tcf_vxlan_encap *encap) +{ + /* Item must be validated before. No redundant checks. */ + assert(spec); + memcpy(encap->vxlan.vni, spec->vni, sizeof(encap->vxlan.vni)); + encap->mask |= FLOW_TCF_ENCAP_VXLAN_VNI; +} + +/** + * Populate consolidated encapsulation object from list of pattern items. + * + * Helper function to process configuration of action such as + * RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP. The item list must be + * prevalidated; there is no way to return a meaningful error. + * + * @param[in] action + * RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP action object. + * List of pattern items to gather data from. + * @param[out] encap + * Structure to fill gathered data.
+ */ +static void +flow_tcf_vxlan_encap_parse(const struct rte_flow_action *action, + struct flow_tcf_vxlan_encap *encap) +{ + union { + const struct rte_flow_item_eth *eth; + const struct rte_flow_item_ipv4 *ipv4; + const struct rte_flow_item_ipv6 *ipv6; + const struct rte_flow_item_udp *udp; + const struct rte_flow_item_vxlan *vxlan; + } spec, mask; + const struct rte_flow_item *items; + + assert(action->type == RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP); + assert(action->conf); + + items = ((const struct rte_flow_action_vxlan_encap *) + action->conf)->definition; + assert(items); + for (; items->type != RTE_FLOW_ITEM_TYPE_END; items++) { + switch (items->type) { + case RTE_FLOW_ITEM_TYPE_VOID: + break; + case RTE_FLOW_ITEM_TYPE_ETH: + mask.eth = items->mask; + spec.eth = items->spec; + flow_tcf_parse_vxlan_encap_eth(spec.eth, mask.eth, + encap); + break; + case RTE_FLOW_ITEM_TYPE_IPV4: + spec.ipv4 = items->spec; + flow_tcf_parse_vxlan_encap_ipv4(spec.ipv4, encap); + break; + case RTE_FLOW_ITEM_TYPE_IPV6: + spec.ipv6 = items->spec; + flow_tcf_parse_vxlan_encap_ipv6(spec.ipv6, encap); + break; + case RTE_FLOW_ITEM_TYPE_UDP: + mask.udp = items->mask; + spec.udp = items->spec; + flow_tcf_parse_vxlan_encap_udp(spec.udp, mask.udp, + encap); + break; + case RTE_FLOW_ITEM_TYPE_VXLAN: + spec.vxlan = items->spec; + flow_tcf_parse_vxlan_encap_vni(spec.vxlan, encap); + break; + default: + assert(false); + DRV_LOG(WARNING, + "unsupported item %p type %d," + " items must be validated" + " before flow creation", + (const void *)items, items->type); + encap->mask = 0; + return; + } + } +} + +/** * Translate flow for Linux TC flower and construct Netlink message. 
* * @param[in] priv @@ -2832,6 +3067,7 @@ struct pedit_parser { const struct rte_flow_item_ipv6 *ipv6; const struct rte_flow_item_tcp *tcp; const struct rte_flow_item_udp *udp; + const struct rte_flow_item_vxlan *vxlan; } spec, mask; union { const struct rte_flow_action_port_id *port_id; @@ -2842,6 +3078,18 @@ struct pedit_parser { const struct rte_flow_action_of_set_vlan_pcp * of_set_vlan_pcp; } conf; + union { + struct flow_tcf_tunnel_hdr *hdr; + struct flow_tcf_vxlan_decap *vxlan; + } decap = { + .hdr = NULL, + }; + union { + struct flow_tcf_tunnel_hdr *hdr; + struct flow_tcf_vxlan_encap *vxlan; + } encap = { + .hdr = NULL, + }; struct flow_tcf_ptoi ptoi[PTOI_TABLE_SZ_MAX(dev)]; struct nlmsghdr *nlh = dev_flow->tcf.nlh; struct tcmsg *tcm = dev_flow->tcf.tcm; @@ -2859,6 +3107,20 @@ struct pedit_parser { claim_nonzero(flow_tcf_build_ptoi_table(dev, ptoi, PTOI_TABLE_SZ_MAX(dev))); + if (dev_flow->tcf.tunnel) { + switch (dev_flow->tcf.tunnel->type) { + case FLOW_TCF_TUNACT_VXLAN_DECAP: + decap.vxlan = dev_flow->tcf.vxlan_decap; + break; + case FLOW_TCF_TUNACT_VXLAN_ENCAP: + encap.vxlan = dev_flow->tcf.vxlan_encap; + break; + /* New tunnel actions can be added here. */ + default: + assert(false); + break; + } + } nlh = dev_flow->tcf.nlh; tcm = dev_flow->tcf.tcm; /* Prepare API must have been called beforehand. */ @@ -2876,7 +3138,6 @@ struct pedit_parser { mnl_attr_put_u32(nlh, TCA_CHAIN, attr->group); mnl_attr_put_strz(nlh, TCA_KIND, "flower"); na_flower = mnl_attr_nest_start(nlh, TCA_OPTIONS); - mnl_attr_put_u32(nlh, TCA_FLOWER_FLAGS, TCA_CLS_FLAGS_SKIP_SW); for (; items->type != RTE_FLOW_ITEM_TYPE_END; items++) { unsigned int i; @@ -2904,7 +3165,9 @@ struct pedit_parser { tcm->tcm_ifindex = ptoi[i].ifindex; break; case RTE_FLOW_ITEM_TYPE_ETH: - item_flags |= MLX5_FLOW_LAYER_OUTER_L2; + item_flags |= (item_flags & MLX5_FLOW_LAYER_VXLAN) ? 
+ MLX5_FLOW_LAYER_INNER_L2 : + MLX5_FLOW_LAYER_OUTER_L2; mask.eth = flow_tcf_item_mask (items, &rte_flow_item_eth_mask, &flow_tcf_mask_supported.eth, @@ -2915,6 +3178,14 @@ struct pedit_parser { if (mask.eth == &flow_tcf_mask_empty.eth) break; spec.eth = items->spec; + if (decap.vxlan && + !(item_flags & MLX5_FLOW_LAYER_VXLAN)) { + DRV_LOG(WARNING, + "outer L2 addresses cannot be forced" + " for vxlan decapsulation, parameter" + " ignored"); + break; + } if (mask.eth->type) { mnl_attr_put_u16(nlh, TCA_FLOWER_KEY_ETH_TYPE, spec.eth->type); @@ -2936,8 +3207,11 @@ struct pedit_parser { ETHER_ADDR_LEN, mask.eth->src.addr_bytes); } + assert(dev_flow->tcf.nlsize >= nlh->nlmsg_len); break; case RTE_FLOW_ITEM_TYPE_VLAN: + assert(!encap.hdr); + assert(!decap.hdr); item_flags |= MLX5_FLOW_LAYER_OUTER_VLAN; mask.vlan = flow_tcf_item_mask (items, &rte_flow_item_vlan_mask, @@ -2969,6 +3243,7 @@ struct pedit_parser { rte_be_to_cpu_16 (spec.vlan->tci & RTE_BE16(0x0fff))); + assert(dev_flow->tcf.nlsize >= nlh->nlmsg_len); break; case RTE_FLOW_ITEM_TYPE_IPV4: item_flags |= MLX5_FLOW_LAYER_OUTER_L3_IPV4; @@ -2979,36 +3254,53 @@ struct pedit_parser { sizeof(flow_tcf_mask_supported.ipv4), error); assert(mask.ipv4); - if (!eth_type_set || !vlan_eth_type_set) - mnl_attr_put_u16(nlh, + spec.ipv4 = items->spec; + if (!decap.vxlan) { + if (!eth_type_set && !vlan_eth_type_set) + mnl_attr_put_u16 + (nlh, vlan_present ? 
TCA_FLOWER_KEY_VLAN_ETH_TYPE : TCA_FLOWER_KEY_ETH_TYPE, RTE_BE16(ETH_P_IP)); - eth_type_set = 1; - vlan_eth_type_set = 1; - if (mask.ipv4 == &flow_tcf_mask_empty.ipv4) - break; - spec.ipv4 = items->spec; - if (mask.ipv4->hdr.next_proto_id) { - mnl_attr_put_u8(nlh, TCA_FLOWER_KEY_IP_PROTO, - spec.ipv4->hdr.next_proto_id); - ip_proto_set = 1; + eth_type_set = 1; + vlan_eth_type_set = 1; + if (mask.ipv4 == &flow_tcf_mask_empty.ipv4) + break; + if (mask.ipv4->hdr.next_proto_id) { + mnl_attr_put_u8 + (nlh, TCA_FLOWER_KEY_IP_PROTO, + spec.ipv4->hdr.next_proto_id); + ip_proto_set = 1; + } + } else { + assert(mask.ipv4 != &flow_tcf_mask_empty.ipv4); } if (mask.ipv4->hdr.src_addr) { - mnl_attr_put_u32(nlh, TCA_FLOWER_KEY_IPV4_SRC, - spec.ipv4->hdr.src_addr); - mnl_attr_put_u32(nlh, - TCA_FLOWER_KEY_IPV4_SRC_MASK, - mask.ipv4->hdr.src_addr); + mnl_attr_put_u32 + (nlh, decap.vxlan ? + TCA_FLOWER_KEY_ENC_IPV4_SRC : + TCA_FLOWER_KEY_IPV4_SRC, + spec.ipv4->hdr.src_addr); + mnl_attr_put_u32 + (nlh, decap.vxlan ? + TCA_FLOWER_KEY_ENC_IPV4_SRC_MASK : + TCA_FLOWER_KEY_IPV4_SRC_MASK, + mask.ipv4->hdr.src_addr); } if (mask.ipv4->hdr.dst_addr) { - mnl_attr_put_u32(nlh, TCA_FLOWER_KEY_IPV4_DST, - spec.ipv4->hdr.dst_addr); - mnl_attr_put_u32(nlh, - TCA_FLOWER_KEY_IPV4_DST_MASK, - mask.ipv4->hdr.dst_addr); + mnl_attr_put_u32 + (nlh, decap.vxlan ? + TCA_FLOWER_KEY_ENC_IPV4_DST : + TCA_FLOWER_KEY_IPV4_DST, + spec.ipv4->hdr.dst_addr); + mnl_attr_put_u32 + (nlh, decap.vxlan ? 
+ TCA_FLOWER_KEY_ENC_IPV4_DST_MASK : + TCA_FLOWER_KEY_IPV4_DST_MASK, + mask.ipv4->hdr.dst_addr); } + assert(dev_flow->tcf.nlsize >= nlh->nlmsg_len); break; case RTE_FLOW_ITEM_TYPE_IPV6: item_flags |= MLX5_FLOW_LAYER_OUTER_L3_IPV6; @@ -3019,38 +3311,54 @@ struct pedit_parser { sizeof(flow_tcf_mask_supported.ipv6), error); assert(mask.ipv6); - if (!eth_type_set || !vlan_eth_type_set) - mnl_attr_put_u16(nlh, + spec.ipv6 = items->spec; + if (!decap.vxlan) { + if (!eth_type_set || !vlan_eth_type_set) { + mnl_attr_put_u16 + (nlh, vlan_present ? TCA_FLOWER_KEY_VLAN_ETH_TYPE : TCA_FLOWER_KEY_ETH_TYPE, RTE_BE16(ETH_P_IPV6)); - eth_type_set = 1; - vlan_eth_type_set = 1; - if (mask.ipv6 == &flow_tcf_mask_empty.ipv6) - break; - spec.ipv6 = items->spec; - if (mask.ipv6->hdr.proto) { - mnl_attr_put_u8(nlh, TCA_FLOWER_KEY_IP_PROTO, - spec.ipv6->hdr.proto); - ip_proto_set = 1; + } + eth_type_set = 1; + vlan_eth_type_set = 1; + if (mask.ipv6 == &flow_tcf_mask_empty.ipv6) + break; + if (mask.ipv6->hdr.proto) { + mnl_attr_put_u8 + (nlh, TCA_FLOWER_KEY_IP_PROTO, + spec.ipv6->hdr.proto); + ip_proto_set = 1; + } + } else { + assert(mask.ipv6 != &flow_tcf_mask_empty.ipv6); } if (!IN6_IS_ADDR_UNSPECIFIED(mask.ipv6->hdr.src_addr)) { - mnl_attr_put(nlh, TCA_FLOWER_KEY_IPV6_SRC, - sizeof(spec.ipv6->hdr.src_addr), + mnl_attr_put(nlh, decap.vxlan ? + TCA_FLOWER_KEY_ENC_IPV6_SRC : + TCA_FLOWER_KEY_IPV6_SRC, + IPV6_ADDR_LEN, spec.ipv6->hdr.src_addr); - mnl_attr_put(nlh, TCA_FLOWER_KEY_IPV6_SRC_MASK, - sizeof(mask.ipv6->hdr.src_addr), + mnl_attr_put(nlh, decap.vxlan ? + TCA_FLOWER_KEY_ENC_IPV6_SRC_MASK : + TCA_FLOWER_KEY_IPV6_SRC_MASK, + IPV6_ADDR_LEN, mask.ipv6->hdr.src_addr); } if (!IN6_IS_ADDR_UNSPECIFIED(mask.ipv6->hdr.dst_addr)) { - mnl_attr_put(nlh, TCA_FLOWER_KEY_IPV6_DST, - sizeof(spec.ipv6->hdr.dst_addr), + mnl_attr_put(nlh, decap.vxlan ? 
+ TCA_FLOWER_KEY_ENC_IPV6_DST : + TCA_FLOWER_KEY_IPV6_DST, + IPV6_ADDR_LEN, spec.ipv6->hdr.dst_addr); - mnl_attr_put(nlh, TCA_FLOWER_KEY_IPV6_DST_MASK, - sizeof(mask.ipv6->hdr.dst_addr), + mnl_attr_put(nlh, decap.vxlan ? + TCA_FLOWER_KEY_ENC_IPV6_DST_MASK : + TCA_FLOWER_KEY_IPV6_DST_MASK, + IPV6_ADDR_LEN, mask.ipv6->hdr.dst_addr); } + assert(dev_flow->tcf.nlsize >= nlh->nlmsg_len); break; case RTE_FLOW_ITEM_TYPE_UDP: item_flags |= MLX5_FLOW_LAYER_OUTER_L4_UDP; @@ -3061,26 +3369,45 @@ struct pedit_parser { sizeof(flow_tcf_mask_supported.udp), error); assert(mask.udp); - if (!ip_proto_set) - mnl_attr_put_u8(nlh, TCA_FLOWER_KEY_IP_PROTO, - IPPROTO_UDP); - if (mask.udp == &flow_tcf_mask_empty.udp) - break; spec.udp = items->spec; + if (!decap.vxlan) { + if (!ip_proto_set) + mnl_attr_put_u8 + (nlh, TCA_FLOWER_KEY_IP_PROTO, + IPPROTO_UDP); + if (mask.udp == &flow_tcf_mask_empty.udp) + break; + } else { + assert(mask.udp != &flow_tcf_mask_empty.udp); + decap.vxlan->udp_port = + rte_be_to_cpu_16 + (spec.udp->hdr.dst_port); + } if (mask.udp->hdr.src_port) { - mnl_attr_put_u16(nlh, TCA_FLOWER_KEY_UDP_SRC, - spec.udp->hdr.src_port); - mnl_attr_put_u16(nlh, - TCA_FLOWER_KEY_UDP_SRC_MASK, - mask.udp->hdr.src_port); + mnl_attr_put_u16 + (nlh, decap.vxlan ? + TCA_FLOWER_KEY_ENC_UDP_SRC_PORT : + TCA_FLOWER_KEY_UDP_SRC, + spec.udp->hdr.src_port); + mnl_attr_put_u16 + (nlh, decap.vxlan ? + TCA_FLOWER_KEY_ENC_UDP_SRC_PORT_MASK : + TCA_FLOWER_KEY_UDP_SRC_MASK, + mask.udp->hdr.src_port); } if (mask.udp->hdr.dst_port) { - mnl_attr_put_u16(nlh, TCA_FLOWER_KEY_UDP_DST, - spec.udp->hdr.dst_port); - mnl_attr_put_u16(nlh, - TCA_FLOWER_KEY_UDP_DST_MASK, - mask.udp->hdr.dst_port); + mnl_attr_put_u16 + (nlh, decap.vxlan ? + TCA_FLOWER_KEY_ENC_UDP_DST_PORT : + TCA_FLOWER_KEY_UDP_DST, + spec.udp->hdr.dst_port); + mnl_attr_put_u16 + (nlh, decap.vxlan ? 
+ TCA_FLOWER_KEY_ENC_UDP_DST_PORT_MASK : + TCA_FLOWER_KEY_UDP_DST_MASK, + mask.udp->hdr.dst_port); } + assert(dev_flow->tcf.nlsize >= nlh->nlmsg_len); break; case RTE_FLOW_ITEM_TYPE_TCP: item_flags |= MLX5_FLOW_LAYER_OUTER_L4_TCP; @@ -3123,6 +3450,16 @@ struct pedit_parser { rte_cpu_to_be_16 (mask.tcp->hdr.tcp_flags)); } + assert(dev_flow->tcf.nlsize >= nlh->nlmsg_len); + break; + case RTE_FLOW_ITEM_TYPE_VXLAN: + assert(decap.vxlan); + item_flags |= MLX5_FLOW_LAYER_VXLAN; + spec.vxlan = items->spec; + mnl_attr_put_u32(nlh, + TCA_FLOWER_KEY_ENC_KEY_ID, + vxlan_vni_as_be32(spec.vxlan->vni)); + assert(dev_flow->tcf.nlsize >= nlh->nlmsg_len); break; default: return rte_flow_error_set(error, ENOTSUP, @@ -3156,6 +3493,14 @@ struct pedit_parser { mnl_attr_put_strz(nlh, TCA_ACT_KIND, "mirred"); na_act = mnl_attr_nest_start(nlh, TCA_ACT_OPTIONS); assert(na_act); + if (encap.hdr) { + assert(dev_flow->tcf.tunnel); + dev_flow->tcf.tunnel->ifindex_ptr = + &((struct tc_mirred *) + mnl_attr_get_payload + (mnl_nlmsg_get_payload_tail + (nlh)))->ifindex; + } mnl_attr_put(nlh, TCA_MIRRED_PARMS, sizeof(struct tc_mirred), &(struct tc_mirred){ @@ -3273,6 +3618,74 @@ struct pedit_parser { conf.of_set_vlan_pcp->vlan_pcp; } break; + case RTE_FLOW_ACTION_TYPE_VXLAN_DECAP: + assert(decap.vxlan); + assert(dev_flow->tcf.tunnel); + dev_flow->tcf.tunnel->ifindex_ptr = + (unsigned int *)&tcm->tcm_ifindex; + na_act_index = + mnl_attr_nest_start(nlh, na_act_index_cur++); + assert(na_act_index); + mnl_attr_put_strz(nlh, TCA_ACT_KIND, "tunnel_key"); + na_act = mnl_attr_nest_start(nlh, TCA_ACT_OPTIONS); + assert(na_act); + mnl_attr_put(nlh, TCA_TUNNEL_KEY_PARMS, + sizeof(struct tc_tunnel_key), + &(struct tc_tunnel_key){ + .action = TC_ACT_PIPE, + .t_action = TCA_TUNNEL_KEY_ACT_RELEASE, + }); + mnl_attr_nest_end(nlh, na_act); + mnl_attr_nest_end(nlh, na_act_index); + assert(dev_flow->tcf.nlsize >= nlh->nlmsg_len); + break; + case RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP: + assert(encap.vxlan); + 
flow_tcf_vxlan_encap_parse(actions, encap.vxlan); + na_act_index = + mnl_attr_nest_start(nlh, na_act_index_cur++); + assert(na_act_index); + mnl_attr_put_strz(nlh, TCA_ACT_KIND, "tunnel_key"); + na_act = mnl_attr_nest_start(nlh, TCA_ACT_OPTIONS); + assert(na_act); + mnl_attr_put(nlh, TCA_TUNNEL_KEY_PARMS, + sizeof(struct tc_tunnel_key), + &(struct tc_tunnel_key){ + .action = TC_ACT_PIPE, + .t_action = TCA_TUNNEL_KEY_ACT_SET, + }); + if (encap.vxlan->mask & FLOW_TCF_ENCAP_UDP_DST) + mnl_attr_put_u16(nlh, + TCA_TUNNEL_KEY_ENC_DST_PORT, + encap.vxlan->udp.dst); + if (encap.vxlan->mask & FLOW_TCF_ENCAP_IPV4_SRC) + mnl_attr_put_u32(nlh, + TCA_TUNNEL_KEY_ENC_IPV4_SRC, + encap.vxlan->ipv4.src); + if (encap.vxlan->mask & FLOW_TCF_ENCAP_IPV4_DST) + mnl_attr_put_u32(nlh, + TCA_TUNNEL_KEY_ENC_IPV4_DST, + encap.vxlan->ipv4.dst); + if (encap.vxlan->mask & FLOW_TCF_ENCAP_IPV6_SRC) + mnl_attr_put(nlh, + TCA_TUNNEL_KEY_ENC_IPV6_SRC, + sizeof(encap.vxlan->ipv6.src), + &encap.vxlan->ipv6.src); + if (encap.vxlan->mask & FLOW_TCF_ENCAP_IPV6_DST) + mnl_attr_put(nlh, + TCA_TUNNEL_KEY_ENC_IPV6_DST, + sizeof(encap.vxlan->ipv6.dst), + &encap.vxlan->ipv6.dst); + if (encap.vxlan->mask & FLOW_TCF_ENCAP_VXLAN_VNI) + mnl_attr_put_u32(nlh, + TCA_TUNNEL_KEY_ENC_KEY_ID, + vxlan_vni_as_be32 + (encap.vxlan->vxlan.vni)); + mnl_attr_put_u8(nlh, TCA_TUNNEL_KEY_NO_CSUM, 0); + mnl_attr_nest_end(nlh, na_act); + mnl_attr_nest_end(nlh, na_act_index); + assert(dev_flow->tcf.nlsize >= nlh->nlmsg_len); + break; case RTE_FLOW_ACTION_TYPE_SET_IPV4_SRC: case RTE_FLOW_ACTION_TYPE_SET_IPV4_DST: case RTE_FLOW_ACTION_TYPE_SET_IPV6_SRC: @@ -3299,7 +3712,13 @@ struct pedit_parser { assert(na_flower); assert(na_flower_act); mnl_attr_nest_end(nlh, na_flower_act); + mnl_attr_put_u32(nlh, TCA_FLOWER_FLAGS, decap.vxlan ? 
+ 0 : TCA_CLS_FLAGS_SKIP_SW); mnl_attr_nest_end(nlh, na_flower); + if (dev_flow->tcf.tunnel && dev_flow->tcf.tunnel->ifindex_ptr) + dev_flow->tcf.tunnel->ifindex_org = + *dev_flow->tcf.tunnel->ifindex_ptr; + assert(dev_flow->tcf.nlsize >= nlh->nlmsg_len); return 0; }