From patchwork Wed Jun 27 18:08:16 2018
X-Patchwork-Submitter: Adrien Mazarguil
X-Patchwork-Id: 41725
Date: Wed, 27 Jun 2018 20:08:16 +0200
From: Adrien Mazarguil
To: Shahaf Shuler
Cc: Nelio Laranjeiro, Yongseok Koh, dev@dpdk.org
Message-ID: <20180627173355.4718-5-adrien.mazarguil@6wind.com>
References: <20180627173355.4718-1-adrien.mazarguil@6wind.com>
In-Reply-To: <20180627173355.4718-1-adrien.mazarguil@6wind.com>
Subject: [dpdk-dev] [PATCH 4/6] net/mlx5: add L2-L4 pattern items to switch flow rules
List-Id: DPDK patches and discussions

This enables flow rules to explicitly match supported combinations of
Ethernet, IPv4, IPv6, TCP and UDP headers at the switch level.
Testpmd example:

- Dropping TCPv4 traffic with a specific destination on port ID 2:

  flow create 2 ingress transfer pattern eth / ipv4 / tcp dst is 42 /
     end actions drop / end

Signed-off-by: Adrien Mazarguil
Acked-by: Yongseok Koh
---
 drivers/net/mlx5/mlx5_nl_flow.c | 397 ++++++++++++++++++++++++++++++++++-
 1 file changed, 396 insertions(+), 1 deletion(-)

diff --git a/drivers/net/mlx5/mlx5_nl_flow.c b/drivers/net/mlx5/mlx5_nl_flow.c
index 70da85fd5..ad1e001c6 100644
--- a/drivers/net/mlx5/mlx5_nl_flow.c
+++ b/drivers/net/mlx5/mlx5_nl_flow.c
@@ -3,6 +3,7 @@
  * Copyright 2018 Mellanox Technologies, Ltd
  */
 
+#include
 #include
 #include
 #include
@@ -12,7 +13,9 @@
 #include
 #include
 #include
+#include
 #include
+#include
 #include
 #include
 #include
@@ -20,6 +23,7 @@
 #include
 #include
+#include
 #include
 
 #include "mlx5.h"
@@ -31,6 +35,11 @@ enum mlx5_nl_flow_trans {
 	ATTR,
 	PATTERN,
 	ITEM_VOID,
+	ITEM_ETH,
+	ITEM_IPV4,
+	ITEM_IPV6,
+	ITEM_TCP,
+	ITEM_UDP,
 	ACTIONS,
 	ACTION_VOID,
 	ACTION_PORT_ID,
@@ -52,8 +61,13 @@ static const enum mlx5_nl_flow_trans *const mlx5_nl_flow_trans[] = {
 	[INVALID] = NULL,
 	[BACK] = NULL,
 	[ATTR] = TRANS(PATTERN),
-	[PATTERN] = TRANS(PATTERN_COMMON),
+	[PATTERN] = TRANS(ITEM_ETH, PATTERN_COMMON),
 	[ITEM_VOID] = TRANS(BACK),
+	[ITEM_ETH] = TRANS(ITEM_IPV4, ITEM_IPV6, PATTERN_COMMON),
+	[ITEM_IPV4] = TRANS(ITEM_TCP, ITEM_UDP, PATTERN_COMMON),
+	[ITEM_IPV6] = TRANS(ITEM_TCP, ITEM_UDP, PATTERN_COMMON),
+	[ITEM_TCP] = TRANS(PATTERN_COMMON),
+	[ITEM_UDP] = TRANS(PATTERN_COMMON),
 	[ACTIONS] = TRANS(ACTIONS_FATE, ACTIONS_COMMON),
 	[ACTION_VOID] = TRANS(BACK),
 	[ACTION_PORT_ID] = TRANS(ACTION_VOID, END),
@@ -61,6 +75,126 @@ static const enum mlx5_nl_flow_trans *const mlx5_nl_flow_trans[] = {
 	[END] = NULL,
 };
 
+/** Empty masks for known item types. */
+static const union {
+	struct rte_flow_item_eth eth;
+	struct rte_flow_item_ipv4 ipv4;
+	struct rte_flow_item_ipv6 ipv6;
+	struct rte_flow_item_tcp tcp;
+	struct rte_flow_item_udp udp;
+} mlx5_nl_flow_mask_empty;
+
+/** Supported masks for known item types. */
+static const struct {
+	struct rte_flow_item_eth eth;
+	struct rte_flow_item_ipv4 ipv4;
+	struct rte_flow_item_ipv6 ipv6;
+	struct rte_flow_item_tcp tcp;
+	struct rte_flow_item_udp udp;
+} mlx5_nl_flow_mask_supported = {
+	.eth = {
+		.type = RTE_BE16(0xffff),
+		.dst.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+		.src.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+	},
+	.ipv4.hdr = {
+		.next_proto_id = 0xff,
+		.src_addr = RTE_BE32(0xffffffff),
+		.dst_addr = RTE_BE32(0xffffffff),
+	},
+	.ipv6.hdr = {
+		.proto = 0xff,
+		.src_addr =
+			"\xff\xff\xff\xff\xff\xff\xff\xff"
+			"\xff\xff\xff\xff\xff\xff\xff\xff",
+		.dst_addr =
+			"\xff\xff\xff\xff\xff\xff\xff\xff"
+			"\xff\xff\xff\xff\xff\xff\xff\xff",
+	},
+	.tcp.hdr = {
+		.src_port = RTE_BE16(0xffff),
+		.dst_port = RTE_BE16(0xffff),
+	},
+	.udp.hdr = {
+		.src_port = RTE_BE16(0xffff),
+		.dst_port = RTE_BE16(0xffff),
+	},
+};
+
+/**
+ * Retrieve mask for pattern item.
+ *
+ * This function does basic sanity checks on a pattern item in order to
+ * return the most appropriate mask for it.
+ *
+ * @param[in] item
+ *   Item specification.
+ * @param[in] mask_default
+ *   Default mask for pattern item as specified by the flow API.
+ * @param[in] mask_supported
+ *   Mask fields supported by the implementation.
+ * @param[in] mask_empty
+ *   Empty mask to return when there is no specification.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *
+ * @return
+ *   Either @p item->mask or one of the mask parameters on success, NULL
+ *   otherwise and rte_errno is set.
+ */
+static const void *
+mlx5_nl_flow_item_mask(const struct rte_flow_item *item,
+		       const void *mask_default,
+		       const void *mask_supported,
+		       const void *mask_empty,
+		       size_t mask_size,
+		       struct rte_flow_error *error)
+{
+	const uint8_t *mask;
+	size_t i;
+
+	/* item->last and item->mask cannot exist without item->spec. */
+	if (!item->spec && (item->mask || item->last)) {
+		rte_flow_error_set
+			(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, item,
+			 "\"mask\" or \"last\" field provided without a"
+			 " corresponding \"spec\"");
+		return NULL;
+	}
+	/* No spec, no mask, no problem. */
+	if (!item->spec)
+		return mask_empty;
+	mask = item->mask ? item->mask : mask_default;
+	assert(mask);
+	/*
+	 * Single-pass check to make sure that:
+	 * - Mask is supported, no bits are set outside mask_supported.
+	 * - Both item->spec and item->last are included in mask.
+	 */
+	for (i = 0; i != mask_size; ++i) {
+		if (!mask[i])
+			continue;
+		if ((mask[i] | ((const uint8_t *)mask_supported)[i]) !=
+		    ((const uint8_t *)mask_supported)[i]) {
+			rte_flow_error_set
+				(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ITEM_MASK,
+				 mask, "unsupported field found in \"mask\"");
+			return NULL;
+		}
+		if (item->last &&
+		    (((const uint8_t *)item->spec)[i] & mask[i]) !=
+		    (((const uint8_t *)item->last)[i] & mask[i])) {
+			rte_flow_error_set
+				(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ITEM_LAST,
+				 item->last,
+				 "range between \"spec\" and \"last\" not"
+				 " comprised in \"mask\"");
+			return NULL;
+		}
+	}
+	return mask;
+}
+
 /**
  * Transpose flow rule description to rtnetlink message.
  *
@@ -107,6 +241,8 @@ mlx5_nl_flow_transpose(void *buf,
 	const struct rte_flow_action *action;
 	unsigned int n;
 	uint32_t act_index_cur;
+	bool eth_type_set;
+	bool ip_proto_set;
 	struct nlattr *na_flower;
 	struct nlattr *na_flower_act;
 	const enum mlx5_nl_flow_trans *trans;
@@ -119,6 +255,8 @@ mlx5_nl_flow_transpose(void *buf,
 	action = actions;
 	n = 0;
 	act_index_cur = 0;
+	eth_type_set = false;
+	ip_proto_set = false;
 	na_flower = NULL;
 	na_flower_act = NULL;
 	trans = TRANS(ATTR);
@@ -126,6 +264,13 @@ mlx5_nl_flow_transpose(void *buf,
 trans:
 	switch (trans[n++]) {
 		union {
+			const struct rte_flow_item_eth *eth;
+			const struct rte_flow_item_ipv4 *ipv4;
+			const struct rte_flow_item_ipv6 *ipv6;
+			const struct rte_flow_item_tcp *tcp;
+			const struct rte_flow_item_udp *udp;
+		} spec, mask;
+		union {
 			const struct rte_flow_action_port_id *port_id;
 		} conf;
 		struct nlmsghdr *nlh;
@@ -214,6 +359,256 @@ mlx5_nl_flow_transpose(void *buf,
 			goto trans;
 		++item;
 		break;
+	case ITEM_ETH:
+		if (item->type != RTE_FLOW_ITEM_TYPE_ETH)
+			goto trans;
+		mask.eth = mlx5_nl_flow_item_mask
+			(item, &rte_flow_item_eth_mask,
+			 &mlx5_nl_flow_mask_supported.eth,
+			 &mlx5_nl_flow_mask_empty.eth,
+			 sizeof(mlx5_nl_flow_mask_supported.eth), error);
+		if (!mask.eth)
+			return -rte_errno;
+		if (mask.eth == &mlx5_nl_flow_mask_empty.eth) {
+			++item;
+			break;
+		}
+		spec.eth = item->spec;
+		if (mask.eth->type && mask.eth->type != RTE_BE16(0xffff))
+			return rte_flow_error_set
+				(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ITEM_MASK,
+				 mask.eth,
+				 "no support for partial mask on"
+				 " \"type\" field");
+		if (mask.eth->type) {
+			if (!mnl_attr_put_u16_check(buf, size,
+						    TCA_FLOWER_KEY_ETH_TYPE,
+						    spec.eth->type))
+				goto error_nobufs;
+			eth_type_set = 1;
+		}
+		if ((!is_zero_ether_addr(&mask.eth->dst) &&
+		     (!mnl_attr_put_check(buf, size,
+					  TCA_FLOWER_KEY_ETH_DST,
+					  ETHER_ADDR_LEN,
+					  spec.eth->dst.addr_bytes) ||
+		      !mnl_attr_put_check(buf, size,
+					  TCA_FLOWER_KEY_ETH_DST_MASK,
+					  ETHER_ADDR_LEN,
+					  mask.eth->dst.addr_bytes))) ||
+		    (!is_zero_ether_addr(&mask.eth->src) &&
+		     (!mnl_attr_put_check(buf, size,
+					  TCA_FLOWER_KEY_ETH_SRC,
+					  ETHER_ADDR_LEN,
+					  spec.eth->src.addr_bytes) ||
+		      !mnl_attr_put_check(buf, size,
+					  TCA_FLOWER_KEY_ETH_SRC_MASK,
+					  ETHER_ADDR_LEN,
+					  mask.eth->src.addr_bytes))))
+			goto error_nobufs;
+		++item;
+		break;
+	case ITEM_IPV4:
+		if (item->type != RTE_FLOW_ITEM_TYPE_IPV4)
+			goto trans;
+		mask.ipv4 = mlx5_nl_flow_item_mask
+			(item, &rte_flow_item_ipv4_mask,
+			 &mlx5_nl_flow_mask_supported.ipv4,
+			 &mlx5_nl_flow_mask_empty.ipv4,
+			 sizeof(mlx5_nl_flow_mask_supported.ipv4), error);
+		if (!mask.ipv4)
+			return -rte_errno;
+		if (!eth_type_set &&
+		    !mnl_attr_put_u16_check(buf, size,
+					    TCA_FLOWER_KEY_ETH_TYPE,
+					    RTE_BE16(ETH_P_IP)))
+			goto error_nobufs;
+		eth_type_set = 1;
+		if (mask.ipv4 == &mlx5_nl_flow_mask_empty.ipv4) {
+			++item;
+			break;
+		}
+		spec.ipv4 = item->spec;
+		if (mask.ipv4->hdr.next_proto_id &&
+		    mask.ipv4->hdr.next_proto_id != 0xff)
+			return rte_flow_error_set
+				(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ITEM_MASK,
+				 mask.ipv4,
+				 "no support for partial mask on"
+				 " \"hdr.next_proto_id\" field");
+		if (mask.ipv4->hdr.next_proto_id) {
+			if (!mnl_attr_put_u8_check
+			    (buf, size, TCA_FLOWER_KEY_IP_PROTO,
+			     spec.ipv4->hdr.next_proto_id))
+				goto error_nobufs;
+			ip_proto_set = 1;
+		}
+		if ((mask.ipv4->hdr.src_addr &&
+		     (!mnl_attr_put_u32_check(buf, size,
+					      TCA_FLOWER_KEY_IPV4_SRC,
+					      spec.ipv4->hdr.src_addr) ||
+		      !mnl_attr_put_u32_check(buf, size,
+					      TCA_FLOWER_KEY_IPV4_SRC_MASK,
+					      mask.ipv4->hdr.src_addr))) ||
+		    (mask.ipv4->hdr.dst_addr &&
+		     (!mnl_attr_put_u32_check(buf, size,
+					      TCA_FLOWER_KEY_IPV4_DST,
+					      spec.ipv4->hdr.dst_addr) ||
+		      !mnl_attr_put_u32_check(buf, size,
+					      TCA_FLOWER_KEY_IPV4_DST_MASK,
+					      mask.ipv4->hdr.dst_addr))))
+			goto error_nobufs;
+		++item;
+		break;
+	case ITEM_IPV6:
+		if (item->type != RTE_FLOW_ITEM_TYPE_IPV6)
+			goto trans;
+		mask.ipv6 = mlx5_nl_flow_item_mask
+			(item, &rte_flow_item_ipv6_mask,
+			 &mlx5_nl_flow_mask_supported.ipv6,
+			 &mlx5_nl_flow_mask_empty.ipv6,
+			 sizeof(mlx5_nl_flow_mask_supported.ipv6), error);
+		if (!mask.ipv6)
+			return -rte_errno;
+		if (!eth_type_set &&
+		    !mnl_attr_put_u16_check(buf, size,
+					    TCA_FLOWER_KEY_ETH_TYPE,
+					    RTE_BE16(ETH_P_IPV6)))
+			goto error_nobufs;
+		eth_type_set = 1;
+		if (mask.ipv6 == &mlx5_nl_flow_mask_empty.ipv6) {
+			++item;
+			break;
+		}
+		spec.ipv6 = item->spec;
+		if (mask.ipv6->hdr.proto && mask.ipv6->hdr.proto != 0xff)
+			return rte_flow_error_set
+				(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ITEM_MASK,
+				 mask.ipv6,
+				 "no support for partial mask on"
+				 " \"hdr.proto\" field");
+		if (mask.ipv6->hdr.proto) {
+			if (!mnl_attr_put_u8_check
+			    (buf, size, TCA_FLOWER_KEY_IP_PROTO,
+			     spec.ipv6->hdr.proto))
+				goto error_nobufs;
+			ip_proto_set = 1;
+		}
+		if ((!IN6_IS_ADDR_UNSPECIFIED(mask.ipv6->hdr.src_addr) &&
+		     (!mnl_attr_put_check(buf, size,
+					  TCA_FLOWER_KEY_IPV6_SRC,
+					  sizeof(spec.ipv6->hdr.src_addr),
+					  spec.ipv6->hdr.src_addr) ||
+		      !mnl_attr_put_check(buf, size,
+					  TCA_FLOWER_KEY_IPV6_SRC_MASK,
+					  sizeof(mask.ipv6->hdr.src_addr),
+					  mask.ipv6->hdr.src_addr))) ||
+		    (!IN6_IS_ADDR_UNSPECIFIED(mask.ipv6->hdr.dst_addr) &&
+		     (!mnl_attr_put_check(buf, size,
+					  TCA_FLOWER_KEY_IPV6_DST,
+					  sizeof(spec.ipv6->hdr.dst_addr),
+					  spec.ipv6->hdr.dst_addr) ||
+		      !mnl_attr_put_check(buf, size,
+					  TCA_FLOWER_KEY_IPV6_DST_MASK,
+					  sizeof(mask.ipv6->hdr.dst_addr),
+					  mask.ipv6->hdr.dst_addr))))
+			goto error_nobufs;
+		++item;
+		break;
+	case ITEM_TCP:
+		if (item->type != RTE_FLOW_ITEM_TYPE_TCP)
+			goto trans;
+		mask.tcp = mlx5_nl_flow_item_mask
+			(item, &rte_flow_item_tcp_mask,
+			 &mlx5_nl_flow_mask_supported.tcp,
+			 &mlx5_nl_flow_mask_empty.tcp,
+			 sizeof(mlx5_nl_flow_mask_supported.tcp), error);
+		if (!mask.tcp)
+			return -rte_errno;
+		if (!ip_proto_set &&
+		    !mnl_attr_put_u8_check(buf, size,
+					   TCA_FLOWER_KEY_IP_PROTO,
+					   IPPROTO_TCP))
+			goto error_nobufs;
+		if (mask.tcp == &mlx5_nl_flow_mask_empty.tcp) {
+			++item;
+			break;
+		}
+		spec.tcp = item->spec;
+		if ((mask.tcp->hdr.src_port &&
+		     mask.tcp->hdr.src_port != RTE_BE16(0xffff)) ||
+		    (mask.tcp->hdr.dst_port &&
+		     mask.tcp->hdr.dst_port != RTE_BE16(0xffff)))
+			return rte_flow_error_set
+				(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ITEM_MASK,
+				 mask.tcp,
+				 "no support for partial masks on"
+				 " \"hdr.src_port\" and \"hdr.dst_port\""
+				 " fields");
+		if ((mask.tcp->hdr.src_port &&
+		     (!mnl_attr_put_u16_check(buf, size,
+					      TCA_FLOWER_KEY_TCP_SRC,
+					      spec.tcp->hdr.src_port) ||
+		      !mnl_attr_put_u16_check(buf, size,
+					      TCA_FLOWER_KEY_TCP_SRC_MASK,
+					      mask.tcp->hdr.src_port))) ||
+		    (mask.tcp->hdr.dst_port &&
+		     (!mnl_attr_put_u16_check(buf, size,
+					      TCA_FLOWER_KEY_TCP_DST,
+					      spec.tcp->hdr.dst_port) ||
+		      !mnl_attr_put_u16_check(buf, size,
+					      TCA_FLOWER_KEY_TCP_DST_MASK,
+					      mask.tcp->hdr.dst_port))))
+			goto error_nobufs;
+		++item;
+		break;
+	case ITEM_UDP:
+		if (item->type != RTE_FLOW_ITEM_TYPE_UDP)
+			goto trans;
+		mask.udp = mlx5_nl_flow_item_mask
+			(item, &rte_flow_item_udp_mask,
+			 &mlx5_nl_flow_mask_supported.udp,
+			 &mlx5_nl_flow_mask_empty.udp,
+			 sizeof(mlx5_nl_flow_mask_supported.udp), error);
+		if (!mask.udp)
+			return -rte_errno;
+		if (!ip_proto_set &&
+		    !mnl_attr_put_u8_check(buf, size,
+					   TCA_FLOWER_KEY_IP_PROTO,
+					   IPPROTO_UDP))
+			goto error_nobufs;
+		if (mask.udp == &mlx5_nl_flow_mask_empty.udp) {
+			++item;
+			break;
+		}
+		spec.udp = item->spec;
+		if ((mask.udp->hdr.src_port &&
+		     mask.udp->hdr.src_port != RTE_BE16(0xffff)) ||
+		    (mask.udp->hdr.dst_port &&
+		     mask.udp->hdr.dst_port != RTE_BE16(0xffff)))
+			return rte_flow_error_set
+				(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ITEM_MASK,
+				 mask.udp,
+				 "no support for partial masks on"
+				 " \"hdr.src_port\" and \"hdr.dst_port\""
+				 " fields");
+		if ((mask.udp->hdr.src_port &&
+		     (!mnl_attr_put_u16_check(buf, size,
+					      TCA_FLOWER_KEY_UDP_SRC,
+					      spec.udp->hdr.src_port) ||
+		      !mnl_attr_put_u16_check(buf, size,
+					      TCA_FLOWER_KEY_UDP_SRC_MASK,
+					      mask.udp->hdr.src_port))) ||
+		    (mask.udp->hdr.dst_port &&
+		     (!mnl_attr_put_u16_check(buf, size,
+					      TCA_FLOWER_KEY_UDP_DST,
+					      spec.udp->hdr.dst_port) ||
+		      !mnl_attr_put_u16_check(buf, size,
+					      TCA_FLOWER_KEY_UDP_DST_MASK,
+					      mask.udp->hdr.dst_port))))
+			goto error_nobufs;
+		++item;
+		break;
 	case ACTIONS:
 		if (item->type != RTE_FLOW_ITEM_TYPE_END)
 			goto trans;