From patchwork Wed Jun 27 15:07:40 2018
X-Patchwork-Submitter: Nélio Laranjeiro
X-Patchwork-Id: 41682
X-Patchwork-Delegate: shahafs@mellanox.com
From: Nelio Laranjeiro
To: dev@dpdk.org, Adrien Mazarguil, Yongseok Koh
Date: Wed, 27 Jun 2018 17:07:40 +0200
Message-Id: <7693b4973a8e683972895820d6db36aa7b12a404.1530111623.git.nelio.laranjeiro@6wind.com>
Subject: [dpdk-dev] [PATCH v2 08/20] net/mlx5: add flow IPv4 item

Signed-off-by: Nelio Laranjeiro
---
 drivers/net/mlx5/mlx5_flow.c | 83 ++++++++++++++++++++++++++++++++++++
 1 file changed, 83 insertions(+)

diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index 6a576ddd9..8e7a0bb5a 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -540,6 +540,86 @@ mlx5_flow_item_vlan(const struct rte_flow_item *item, struct rte_flow *flow,
 	return size;
 }
 
+/**
+ * Validate IPv4 layer and possibly create the Verbs specification.
+ *
+ * @param[in] item
+ *   Item specification.
+ * @param[in, out] flow
+ *   Pointer to flow structure.
+ * @param[in] flow_size
+ *   Size in bytes of the available space to store the flow information.
+ * @param error
+ *   Pointer to error structure.
+ *
+ * @return
+ *   Size in bytes necessary for the conversion, a negative errno value
+ *   otherwise and rte_errno is set.
+ */
+static int
+mlx5_flow_item_ipv4(const struct rte_flow_item *item, struct rte_flow *flow,
+		    const size_t flow_size, struct rte_flow_error *error)
+{
+	const struct rte_flow_item_ipv4 *spec = item->spec;
+	const struct rte_flow_item_ipv4 *mask = item->mask;
+	const struct rte_flow_item_ipv4 nic_mask = {
+		.hdr = {
+			.src_addr = RTE_BE32(0xffffffff),
+			.dst_addr = RTE_BE32(0xffffffff),
+			.type_of_service = 0xff,
+			.next_proto_id = 0xff,
+		},
+	};
+	unsigned int size = sizeof(struct ibv_flow_spec_ipv4_ext);
+	struct ibv_flow_spec_ipv4_ext ipv4 = {
+		.type = IBV_FLOW_SPEC_IPV4_EXT,
+		.size = size,
+	};
+	int ret;
+
+	if (flow->layers & MLX5_FLOW_LAYER_OUTER_L3)
+		return rte_flow_error_set(error, ENOTSUP,
+					  RTE_FLOW_ERROR_TYPE_ITEM,
+					  item,
+					  "multiple L3 layers not supported");
+	else if (flow->layers & MLX5_FLOW_LAYER_OUTER_L4)
+		return rte_flow_error_set(error, ENOTSUP,
+					  RTE_FLOW_ERROR_TYPE_ITEM,
+					  item,
+					  "L3 cannot follow an L4 layer.");
+	if (!mask)
+		mask = &rte_flow_item_ipv4_mask;
+	ret = mlx5_flow_item_validate(item, (const uint8_t *)mask,
+				      (const uint8_t *)&nic_mask,
+				      sizeof(struct rte_flow_item_ipv4),
+				      error);
+	if (ret < 0)
+		return ret;
+	flow->layers |= MLX5_FLOW_LAYER_OUTER_L3_IPV4;
+	if (size > flow_size)
+		return size;
+	if (spec) {
+		ipv4.val = (struct ibv_flow_ipv4_ext_filter){
+			.src_ip = spec->hdr.src_addr,
+			.dst_ip = spec->hdr.dst_addr,
+			.proto = spec->hdr.next_proto_id,
+			.tos = spec->hdr.type_of_service,
+		};
+		ipv4.mask = (struct ibv_flow_ipv4_ext_filter){
+			.src_ip = mask->hdr.src_addr,
+			.dst_ip = mask->hdr.dst_addr,
+			.proto = mask->hdr.next_proto_id,
+			.tos = mask->hdr.type_of_service,
+		};
+		/* Remove unwanted bits from values. */
+		ipv4.val.src_ip &= ipv4.mask.src_ip;
+		ipv4.val.dst_ip &= ipv4.mask.dst_ip;
+		ipv4.val.proto &= ipv4.mask.proto;
+		ipv4.val.tos &= ipv4.mask.tos;
+	}
+	mlx5_flow_spec_verbs_add(flow, &ipv4, size);
+	return size;
+}
+
 /**
  * Validate items provided by the user.
  *
@@ -576,6 +656,9 @@ mlx5_flow_items(const struct rte_flow_item items[],
 		case RTE_FLOW_ITEM_TYPE_VLAN:
 			ret = mlx5_flow_item_vlan(items, flow, remain, error);
 			break;
+		case RTE_FLOW_ITEM_TYPE_IPV4:
+			ret = mlx5_flow_item_ipv4(items, flow, remain, error);
+			break;
 		default:
 			return rte_flow_error_set(error, ENOTSUP,
 						  RTE_FLOW_ERROR_TYPE_ITEM,
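
For context, here is a minimal application-side sketch (not part of the patch) of an rte_flow rule that exercises the new IPv4 item through the public rte_flow API; with the mlx5 PMD, the IPv4 item below is what mlx5_flow_item_ipv4() translates into an ibv_flow_spec_ipv4_ext Verbs specification. The helper name create_ipv4_dst_rule(), the port ID, the queue index and the 192.0.2.1 destination address are illustrative assumptions.

#include <rte_flow.h>
#include <rte_byteorder.h>

/* Illustrative helper: match outer IPv4 with destination 192.0.2.1 and
 * steer matching packets to Rx queue 1. Addresses in rte_flow_item_ipv4
 * are big-endian, hence RTE_BE32().
 */
static struct rte_flow *
create_ipv4_dst_rule(uint16_t port_id, struct rte_flow_error *error)
{
	struct rte_flow_attr attr = { .ingress = 1 };
	struct rte_flow_item_ipv4 ipv4_spec = {
		.hdr = { .dst_addr = RTE_BE32(0xc0000201) }, /* 192.0.2.1 */
	};
	struct rte_flow_item_ipv4 ipv4_mask = {
		.hdr = { .dst_addr = RTE_BE32(0xffffffff) },
	};
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH },
		{
			.type = RTE_FLOW_ITEM_TYPE_IPV4,
			.spec = &ipv4_spec,
			.mask = &ipv4_mask,
		},
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct rte_flow_action_queue queue = { .index = 1 };
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};

	/* In this rework, validation reuses the same conversion path as
	 * creation, so the IPV4 case added above runs for both calls.
	 */
	if (rte_flow_validate(port_id, &attr, pattern, actions, error))
		return NULL;
	return rte_flow_create(port_id, &attr, pattern, actions, error);
}

Note how the application-supplied mask is combined with the driver-side nic_mask: fields left unmasked by the application default to rte_flow_item_ipv4_mask, and the handler then clears any value bits not covered by the final mask before building the Verbs specification.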