From patchwork Thu Feb 28 07:03:03 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Hyong Youb Kim (hyonkim)" X-Patchwork-Id: 50598 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id A0E534C8E; Thu, 28 Feb 2019 08:03:54 +0100 (CET) Received: from alln-iport-7.cisco.com (alln-iport-7.cisco.com [173.37.142.94]) by dpdk.org (Postfix) with ESMTP id B45743798 for ; Thu, 28 Feb 2019 08:03:52 +0100 (CET) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=cisco.com; i=@cisco.com; l=4513; q=dns/txt; s=iport; t=1551337432; x=1552547032; h=from:to:cc:subject:date:message-id:in-reply-to: references; bh=qY++9IAi7eZoqcQZ70tK2Bql4EUAUdfvLKwnnFnrad0=; b=POWtcuvKRREOIESd755O/WLee2/WeXn287HdZjF9BKWlJyan372t65Tb 954HDxM+q7dbPjmZORXk8mHrhzXaXWLD3yBOxbMztVCAUOQ4308jsS/m8 KN7+5jn5GMuEOXDptLsaTYbDcdk+mx72ZTQlEsonIBpBa40jD72m/oBcF g=; X-IronPort-AV: E=Sophos;i="5.58,422,1544486400"; d="scan'208";a="240672986" Received: from alln-core-1.cisco.com ([173.36.13.131]) by alln-iport-7.cisco.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 28 Feb 2019 07:03:51 +0000 Received: from cisco.com (savbu-usnic-a.cisco.com [10.193.184.48]) by alln-core-1.cisco.com (8.15.2/8.15.2) with ESMTP id x1S73paO007969; Thu, 28 Feb 2019 07:03:51 GMT Received: by cisco.com (Postfix, from userid 508933) id 5CAF320F2001; Wed, 27 Feb 2019 23:03:51 -0800 (PST) From: Hyong Youb Kim To: Ferruh Yigit Cc: dev@dpdk.org, John Daley , Hyong Youb Kim Date: Wed, 27 Feb 2019 23:03:03 -0800 Message-Id: <20190228070317.17002-2-hyonkim@cisco.com> X-Mailer: git-send-email 2.16.2 In-Reply-To: <20190228070317.17002-1-hyonkim@cisco.com> References: <20190228070317.17002-1-hyonkim@cisco.com> X-Outbound-SMTP-Client: 10.193.184.48, savbu-usnic-a.cisco.com X-Outbound-Node: alln-core-1.cisco.com Subject: [dpdk-dev] [PATCH 01/15] net/enic: remove unused code X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Remove unused functions. Specifically, vnic_set_rss_key() is obsolete. enic_{add,del}_vlan() has never been supported in the firmware. And, remove vnic_rss.c altogether as it becomes empty. These were discovered by cppcheck. 
Signed-off-by: Hyong Youb Kim Reviewed-by: John Daley --- drivers/net/enic/Makefile | 1 - drivers/net/enic/base/vnic_rss.c | 23 ----------------------- drivers/net/enic/base/vnic_rss.h | 5 ----- drivers/net/enic/enic_res.c | 26 -------------------------- drivers/net/enic/enic_res.h | 2 -- drivers/net/enic/meson.build | 1 - 6 files changed, 58 deletions(-) delete mode 100644 drivers/net/enic/base/vnic_rss.c diff --git a/drivers/net/enic/Makefile b/drivers/net/enic/Makefile index e39e47631..04bae35e3 100644 --- a/drivers/net/enic/Makefile +++ b/drivers/net/enic/Makefile @@ -37,7 +37,6 @@ SRCS-$(CONFIG_RTE_LIBRTE_ENIC_PMD) += base/vnic_wq.c SRCS-$(CONFIG_RTE_LIBRTE_ENIC_PMD) += base/vnic_dev.c SRCS-$(CONFIG_RTE_LIBRTE_ENIC_PMD) += base/vnic_intr.c SRCS-$(CONFIG_RTE_LIBRTE_ENIC_PMD) += base/vnic_rq.c -SRCS-$(CONFIG_RTE_LIBRTE_ENIC_PMD) += base/vnic_rss.c # The current implementation assumes 64-bit pointers CC_AVX2_SUPPORT=0 diff --git a/drivers/net/enic/base/vnic_rss.c b/drivers/net/enic/base/vnic_rss.c deleted file mode 100644 index f41b8660f..000000000 --- a/drivers/net/enic/base/vnic_rss.c +++ /dev/null @@ -1,23 +0,0 @@ -/* SPDX-License-Identifier: BSD-3-Clause - * Copyright 2008-2017 Cisco Systems, Inc. All rights reserved. - * Copyright 2007 Nuova Systems, Inc. All rights reserved. - */ - -#include "enic_compat.h" -#include "vnic_rss.h" - -void vnic_set_rss_key(union vnic_rss_key *rss_key, u8 *key) -{ - u32 i; - u32 *p; - u16 *q; - - for (i = 0; i < 4; ++i) { - p = (u32 *)(key + (10 * i)); - iowrite32(*p++, &rss_key->key[i].b[0]); - iowrite32(*p++, &rss_key->key[i].b[4]); - q = (u16 *)p; - iowrite32(*q, &rss_key->key[i].b[8]); - } -} - diff --git a/drivers/net/enic/base/vnic_rss.h b/drivers/net/enic/base/vnic_rss.h index abd7b9f13..039041ece 100644 --- a/drivers/net/enic/base/vnic_rss.h +++ b/drivers/net/enic/base/vnic_rss.h @@ -24,9 +24,4 @@ union vnic_rss_cpu { u64 raw[32]; }; -void vnic_set_rss_key(union vnic_rss_key *rss_key, u8 *key); -void vnic_set_rss_cpu(union vnic_rss_cpu *rss_cpu, u8 *cpu); -void vnic_get_rss_key(union vnic_rss_key *rss_key, u8 *key); -void vnic_get_rss_cpu(union vnic_rss_cpu *rss_cpu, u8 *cpu); - #endif /* _VNIC_RSS_H_ */ diff --git a/drivers/net/enic/enic_res.c b/drivers/net/enic/enic_res.c index 24b2844f3..d289f3da8 100644 --- a/drivers/net/enic/enic_res.c +++ b/drivers/net/enic/enic_res.c @@ -212,32 +212,6 @@ int enic_get_vnic_config(struct enic *enic) return 0; } -int enic_add_vlan(struct enic *enic, u16 vlanid) -{ - u64 a0 = vlanid, a1 = 0; - int wait = 1000; - int err; - - err = vnic_dev_cmd(enic->vdev, CMD_VLAN_ADD, &a0, &a1, wait); - if (err) - dev_err(enic_get_dev(enic), "Can't add vlan id, %d\n", err); - - return err; -} - -int enic_del_vlan(struct enic *enic, u16 vlanid) -{ - u64 a0 = vlanid, a1 = 0; - int wait = 1000; - int err; - - err = vnic_dev_cmd(enic->vdev, CMD_VLAN_DEL, &a0, &a1, wait); - if (err) - dev_err(enic_get_dev(enic), "Can't delete vlan id, %d\n", err); - - return err; -} - int enic_set_nic_cfg(struct enic *enic, u8 rss_default_cpu, u8 rss_hash_type, u8 rss_hash_bits, u8 rss_base_cpu, u8 rss_enable, u8 tso_ipid_split_en, u8 ig_vlan_strip_en) diff --git a/drivers/net/enic/enic_res.h b/drivers/net/enic/enic_res.h index 3786bc0e2..faaaad9bd 100644 --- a/drivers/net/enic/enic_res.h +++ b/drivers/net/enic/enic_res.h @@ -59,8 +59,6 @@ struct enic; int enic_get_vnic_config(struct enic *); -int enic_add_vlan(struct enic *enic, u16 vlanid); -int enic_del_vlan(struct enic *enic, u16 vlanid); int enic_set_nic_cfg(struct enic *enic, u8 
rss_default_cpu, u8 rss_hash_type, u8 rss_hash_bits, u8 rss_base_cpu, u8 rss_enable, u8 tso_ipid_split_en, u8 ig_vlan_strip_en); diff --git a/drivers/net/enic/meson.build b/drivers/net/enic/meson.build index 064487118..c381f1496 100644 --- a/drivers/net/enic/meson.build +++ b/drivers/net/enic/meson.build @@ -6,7 +6,6 @@ sources = files( 'base/vnic_dev.c', 'base/vnic_intr.c', 'base/vnic_rq.c', - 'base/vnic_rss.c', 'base/vnic_wq.c', 'enic_clsf.c', 'enic_ethdev.c', From patchwork Thu Feb 28 07:03:04 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Hyong Youb Kim (hyonkim)" X-Patchwork-Id: 50599 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 6F7A04CA0; Thu, 28 Feb 2019 08:04:06 +0100 (CET) Received: from rcdn-iport-5.cisco.com (rcdn-iport-5.cisco.com [173.37.86.76]) by dpdk.org (Postfix) with ESMTP id CDF914CA0; Thu, 28 Feb 2019 08:04:04 +0100 (CET) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=cisco.com; i=@cisco.com; l=6263; q=dns/txt; s=iport; t=1551337445; x=1552547045; h=from:to:cc:subject:date:message-id:in-reply-to: references; bh=ZPsKdvLdQAH0HwTkrCbDfh/BDdqYIfPt3+eD8Mf5Jzk=; b=EGvDX+X6oub/xh6w4EBiBeSAu2xasHnx1Nh3V8M5tUVcTIja+yYGLclY WCHEoDLh5ePQetSCaHWmw7QP+znf/V55eSyCMXJ+dnB9lzXGmal4c7Lyk zYvxD2ygbr3VhD0iuxrBJgE7JgXpr4hhyAEQkIvHX1d2O9R8jOvFxBhFx g=; X-IronPort-AV: E=Sophos;i="5.58,422,1544486400"; d="scan'208";a="307840146" Received: from alln-core-7.cisco.com ([173.36.13.140]) by rcdn-iport-5.cisco.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 28 Feb 2019 07:04:04 +0000 Received: from cisco.com (savbu-usnic-a.cisco.com [10.193.184.48]) by alln-core-7.cisco.com (8.15.2/8.15.2) with ESMTP id x1S743oS022898; Thu, 28 Feb 2019 07:04:03 GMT Received: by cisco.com (Postfix, from userid 508933) id 7284C20F2001; Wed, 27 Feb 2019 23:04:03 -0800 (PST) From: Hyong Youb Kim To: Ferruh Yigit Cc: dev@dpdk.org, John Daley , Hyong Youb Kim , stable@dpdk.org Date: Wed, 27 Feb 2019 23:03:04 -0800 Message-Id: <20190228070317.17002-3-hyonkim@cisco.com> X-Mailer: git-send-email 2.16.2 In-Reply-To: <20190228070317.17002-1-hyonkim@cisco.com> References: <20190228070317.17002-1-hyonkim@cisco.com> X-Outbound-SMTP-Client: 10.193.184.48, savbu-usnic-a.cisco.com X-Outbound-Node: alln-core-7.cisco.com Subject: [dpdk-dev] [PATCH 02/15] net/enic: fix flow director SCTP matching X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" The firmware filter API does not have flags indicating "match SCTP packet". Instead, the driver needs to explicitly add an IP match and set the protocol number (132 for SCTP) in the IP header. The existing code (copy_fltr_v2) has two bugs. 1. It sets the protocol number (132) in the match value, but not the mask. The mask remains 0, so the match becomes a wildcard match. The NIC ends up matching all protocol numbers (i.e. thinks non-SCTP packets are SCTP). 2. It modifies the input argument (rte_eth_fdir_input). The driver tracks filters using rte_hash_{add,del}_key(input). So, addding (RTE_ETH_FILTER_ADD) and deleting (RTE_ETH_FILTER_DELETE) must use the same input argument for the same filter. 
But, overwriting the protocol number while adding the filter breaks this assumption, and causes delete operation to fail. So, set the mask as well as protocol value. Do not modify the input argument, and use const in function signatures to make the intention clear. Also move a couple function declarations to enic_clsf.c from enic.h as they are strictly local. Fixes: dfbd6a9cb504 ("net/enic: extend flow director support for 1300 series") Cc: stable@dpdk.org Signed-off-by: Hyong Youb Kim Reviewed-by: John Daley --- drivers/net/enic/enic.h | 8 ++------ drivers/net/enic/enic_clsf.c | 38 ++++++++++++++++++++++++++------------ 2 files changed, 28 insertions(+), 18 deletions(-) diff --git a/drivers/net/enic/enic.h b/drivers/net/enic/enic.h index 6c497e9a2..fa4d5590e 100644 --- a/drivers/net/enic/enic.h +++ b/drivers/net/enic/enic.h @@ -76,8 +76,8 @@ struct enic_fdir { u32 modes; u32 types_mask; void (*copy_fltr_fn)(struct filter_v2 *filt, - struct rte_eth_fdir_input *input, - struct rte_eth_fdir_masks *masks); + const struct rte_eth_fdir_input *input, + const struct rte_eth_fdir_masks *masks); }; struct enic_soft_stats { @@ -342,9 +342,5 @@ int enic_link_update(struct enic *enic); bool enic_use_vector_rx_handler(struct enic *enic); void enic_fdir_info(struct enic *enic); void enic_fdir_info_get(struct enic *enic, struct rte_eth_fdir_info *stats); -void copy_fltr_v1(struct filter_v2 *fltr, struct rte_eth_fdir_input *input, - struct rte_eth_fdir_masks *masks); -void copy_fltr_v2(struct filter_v2 *fltr, struct rte_eth_fdir_input *input, - struct rte_eth_fdir_masks *masks); extern const struct rte_flow_ops enic_flow_ops; #endif /* _ENIC_H_ */ diff --git a/drivers/net/enic/enic_clsf.c b/drivers/net/enic/enic_clsf.c index 9e9e548c2..48c8e6264 100644 --- a/drivers/net/enic/enic_clsf.c +++ b/drivers/net/enic/enic_clsf.c @@ -36,6 +36,13 @@ #define ENICPMD_CLSF_HASH_ENTRIES ENICPMD_FDIR_MAX +static void copy_fltr_v1(struct filter_v2 *fltr, + const struct rte_eth_fdir_input *input, + const struct rte_eth_fdir_masks *masks); +static void copy_fltr_v2(struct filter_v2 *fltr, + const struct rte_eth_fdir_input *input, + const struct rte_eth_fdir_masks *masks); + void enic_fdir_stats_get(struct enic *enic, struct rte_eth_fdir_stats *stats) { *stats = enic->fdir.stats; @@ -79,9 +86,9 @@ enic_set_layer(struct filter_generic_1 *gp, unsigned int flag, /* Copy Flow Director filter to a VIC ipv4 filter (for Cisco VICs * without advanced filter support. */ -void -copy_fltr_v1(struct filter_v2 *fltr, struct rte_eth_fdir_input *input, - __rte_unused struct rte_eth_fdir_masks *masks) +static void +copy_fltr_v1(struct filter_v2 *fltr, const struct rte_eth_fdir_input *input, + __rte_unused const struct rte_eth_fdir_masks *masks) { fltr->type = FILTER_IPV4_5TUPLE; fltr->u.ipv4.src_addr = rte_be_to_cpu_32( @@ -104,9 +111,9 @@ copy_fltr_v1(struct filter_v2 *fltr, struct rte_eth_fdir_input *input, /* Copy Flow Director filter to a VIC generic filter (requires advanced * filter support. 
*/ -void -copy_fltr_v2(struct filter_v2 *fltr, struct rte_eth_fdir_input *input, - struct rte_eth_fdir_masks *masks) +static void +copy_fltr_v2(struct filter_v2 *fltr, const struct rte_eth_fdir_input *input, + const struct rte_eth_fdir_masks *masks) { struct filter_generic_1 *gp = &fltr->u.generic_1; @@ -163,9 +170,11 @@ copy_fltr_v2(struct filter_v2 *fltr, struct rte_eth_fdir_input *input, sctp_val.tag = input->flow.sctp4_flow.verify_tag; } - /* v4 proto should be 132, override ip4_flow.proto */ - input->flow.ip4_flow.proto = 132; - + /* + * Unlike UDP/TCP (FILTER_GENERIC_1_{UDP,TCP}), the firmware + * has no "packet is SCTP" flag. Use flag=0 (generic L4) and + * manually set proto_id=sctp below. + */ enic_set_layer(gp, 0, FILTER_GENERIC_1_L4, &sctp_mask, &sctp_val, sizeof(struct sctp_hdr)); } @@ -189,6 +198,10 @@ copy_fltr_v2(struct filter_v2 *fltr, struct rte_eth_fdir_input *input, if (input->flow.ip4_flow.proto) { ip4_mask.next_proto_id = masks->ipv4_mask.proto; ip4_val.next_proto_id = input->flow.ip4_flow.proto; + } else if (input->flow_type == RTE_ETH_FLOW_NONFRAG_IPV4_SCTP) { + /* Explicitly match the SCTP protocol number */ + ip4_mask.next_proto_id = 0xff; + ip4_val.next_proto_id = IPPROTO_SCTP; } if (input->flow.ip4_flow.src_ip) { ip4_mask.src_addr = masks->ipv4_mask.src_ip; @@ -251,9 +264,6 @@ copy_fltr_v2(struct filter_v2 *fltr, struct rte_eth_fdir_input *input, sctp_val.tag = input->flow.sctp6_flow.verify_tag; } - /* v4 proto should be 132, override ipv6_flow.proto */ - input->flow.ipv6_flow.proto = 132; - enic_set_layer(gp, 0, FILTER_GENERIC_1_L4, &sctp_mask, &sctp_val, sizeof(struct sctp_hdr)); } @@ -269,6 +279,10 @@ copy_fltr_v2(struct filter_v2 *fltr, struct rte_eth_fdir_input *input, if (input->flow.ipv6_flow.proto) { ipv6_mask.proto = masks->ipv6_mask.proto; ipv6_val.proto = input->flow.ipv6_flow.proto; + } else if (input->flow_type == RTE_ETH_FLOW_NONFRAG_IPV6_SCTP) { + /* See comments for IPv4 SCTP above. 
*/ + ipv6_mask.proto = 0xff; + ipv6_val.proto = IPPROTO_SCTP; } memcpy(ipv6_mask.src_addr, masks->ipv6_mask.src_ip, sizeof(ipv6_mask.src_addr)); From patchwork Thu Feb 28 07:03:05 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Hyong Youb Kim (hyonkim)" X-Patchwork-Id: 50600 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 039344CE4; Thu, 28 Feb 2019 08:04:19 +0100 (CET) Received: from rcdn-iport-4.cisco.com (rcdn-iport-4.cisco.com [173.37.86.75]) by dpdk.org (Postfix) with ESMTP id DCF2F4CC0; Thu, 28 Feb 2019 08:04:17 +0100 (CET) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=cisco.com; i=@cisco.com; l=2848; q=dns/txt; s=iport; t=1551337458; x=1552547058; h=from:to:cc:subject:date:message-id:in-reply-to: references; bh=EoulLgFiP2sV34OOop8XhEHNXfSmjvz9TpR+cs1WmcA=; b=dHpTEcyx8KQXTFRBVHtSrOUY4yTt0Q42VeXKxF0ZGNLbDKB40uELBJQv 6E2toP9s6oG4j8jrGp3paBJJaC13/1Bb+Wgw32ZtQ4H9ZCa5grVLNSd1C mreSePeMWFIjmlHAi5zJv2vK26nTb2DhuoQ/uMf368S+1pNGwg9v9q3hq M=; X-IronPort-AV: E=Sophos;i="5.58,422,1544486400"; d="scan'208";a="525792479" Received: from alln-core-12.cisco.com ([173.36.13.134]) by rcdn-iport-4.cisco.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 28 Feb 2019 07:04:17 +0000 Received: from cisco.com (savbu-usnic-a.cisco.com [10.193.184.48]) by alln-core-12.cisco.com (8.15.2/8.15.2) with ESMTP id x1S74G5s020701; Thu, 28 Feb 2019 07:04:16 GMT Received: by cisco.com (Postfix, from userid 508933) id 8771C20F2001; Wed, 27 Feb 2019 23:04:16 -0800 (PST) From: Hyong Youb Kim To: Ferruh Yigit Cc: dev@dpdk.org, John Daley , Hyong Youb Kim , stable@dpdk.org Date: Wed, 27 Feb 2019 23:03:05 -0800 Message-Id: <20190228070317.17002-4-hyonkim@cisco.com> X-Mailer: git-send-email 2.16.2 In-Reply-To: <20190228070317.17002-1-hyonkim@cisco.com> References: <20190228070317.17002-1-hyonkim@cisco.com> X-Outbound-SMTP-Client: 10.193.184.48, savbu-usnic-a.cisco.com X-Outbound-Node: alln-core-12.cisco.com Subject: [dpdk-dev] [PATCH 03/15] net/enic: fix SCTP match for flow API X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" The driver needs to explicitly set the protocol number (132) in the IP header pattern, as the current firmware filter API lacks "match SCTP packet" flag. Otherwise, the resulting NIC filter may lead to false positives (i.e. NIC reporting non-SCTP packets as SCTP packets). The flow director handler does the same (enic_clsf.c). 
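A minimal standalone sketch of the technique (the ip_match structure and names below are illustrative only, not the driver's filter_generic_1 layout): matching SCTP means writing IPPROTO_SCTP into the IP protocol value and 0xff into the corresponding mask byte, because a zero mask turns the field into a wildcard.

/* Illustration only; ip_match is a toy stand-in for the NIC filter pattern. */
#include <netinet/in.h>   /* IPPROTO_SCTP (132) */
#include <stdint.h>
#include <stdio.h>

struct ip_match {
	uint8_t proto_val;    /* value compared against the packet */
	uint8_t proto_mask;   /* which bits of the value are significant */
};

static void match_sctp(struct ip_match *m)
{
	m->proto_val  = IPPROTO_SCTP; /* 132 */
	m->proto_mask = 0xff;         /* without this, any protocol matches */
}

int main(void)
{
	struct ip_match m = { 0, 0 };
	match_sctp(&m);
	printf("proto=%u mask=0x%02x\n", m.proto_val, m.proto_mask);
	return 0;
}

The actual change, shown in the hunks below, applies the same idea to the IPv4 and IPv6 header patterns of the generic NIC filter.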
Fixes: 6ced137607d0 ("net/enic: flow API for NICs with advanced filters enabled") Cc: stable@dpdk.org Signed-off-by: Hyong Youb Kim Reviewed-by: John Daley --- drivers/net/enic/enic_flow.c | 28 ++++++++++++++++++++++++++-- 1 file changed, 26 insertions(+), 2 deletions(-) diff --git a/drivers/net/enic/enic_flow.c b/drivers/net/enic/enic_flow.c index bb9ed037a..55d8d50a1 100644 --- a/drivers/net/enic/enic_flow.c +++ b/drivers/net/enic/enic_flow.c @@ -70,7 +70,6 @@ static enic_copy_item_fn enic_copy_item_ipv6_v2; static enic_copy_item_fn enic_copy_item_udp_v2; static enic_copy_item_fn enic_copy_item_tcp_v2; static enic_copy_item_fn enic_copy_item_sctp_v2; -static enic_copy_item_fn enic_copy_item_sctp_v2; static enic_copy_item_fn enic_copy_item_vxlan_v2; static copy_action_fn enic_copy_action_v1; static copy_action_fn enic_copy_action_v2; @@ -237,7 +236,7 @@ static const struct enic_items enic_items_v3[] = { }, [RTE_FLOW_ITEM_TYPE_SCTP] = { .copy_item = enic_copy_item_sctp_v2, - .valid_start_item = 1, + .valid_start_item = 0, .prev_items = (const enum rte_flow_item_type[]) { RTE_FLOW_ITEM_TYPE_IPV4, RTE_FLOW_ITEM_TYPE_IPV6, @@ -819,12 +818,37 @@ enic_copy_item_sctp_v2(const struct rte_flow_item *item, const struct rte_flow_item_sctp *spec = item->spec; const struct rte_flow_item_sctp *mask = item->mask; struct filter_generic_1 *gp = &enic_filter->u.generic_1; + uint8_t *ip_proto_mask = NULL; + uint8_t *ip_proto = NULL; FLOW_TRACE(); if (*inner_ofst) return ENOTSUP; + /* + * The NIC filter API has no flags for "match sctp", so explicitly set + * the protocol number in the IP pattern. + */ + if (gp->val_flags & FILTER_GENERIC_1_IPV4) { + struct ipv4_hdr *ip; + ip = (struct ipv4_hdr *)gp->layer[FILTER_GENERIC_1_L3].mask; + ip_proto_mask = &ip->next_proto_id; + ip = (struct ipv4_hdr *)gp->layer[FILTER_GENERIC_1_L3].val; + ip_proto = &ip->next_proto_id; + } else if (gp->val_flags & FILTER_GENERIC_1_IPV6) { + struct ipv6_hdr *ip; + ip = (struct ipv6_hdr *)gp->layer[FILTER_GENERIC_1_L3].mask; + ip_proto_mask = &ip->proto; + ip = (struct ipv6_hdr *)gp->layer[FILTER_GENERIC_1_L3].val; + ip_proto = &ip->proto; + } else { + /* Need IPv4/IPv6 pattern first */ + return EINVAL; + } + *ip_proto = IPPROTO_SCTP; + *ip_proto_mask = 0xff; + /* Match all if no spec */ if (!spec) return 0; From patchwork Thu Feb 28 07:03:06 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Hyong Youb Kim (hyonkim)" X-Patchwork-Id: 50601 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id C54FC4C94; Thu, 28 Feb 2019 08:04:32 +0100 (CET) Received: from rcdn-iport-4.cisco.com (rcdn-iport-4.cisco.com [173.37.86.75]) by dpdk.org (Postfix) with ESMTP id A676A4CC7; Thu, 28 Feb 2019 08:04:30 +0100 (CET) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=cisco.com; i=@cisco.com; l=3240; q=dns/txt; s=iport; t=1551337470; x=1552547070; h=from:to:cc:subject:date:message-id:in-reply-to: references; bh=e1UStQsyXH9iS/grFWjUiFWo2XqrmoPfNg5VN5/IL2c=; b=jQlsirIkxX7DEWUv0y35PMjmz2Mq6IrN2oMA2BnhAI/s53b8VhIFqb5f Yh1eY4ukF7Qvm8dzbpyAtXe8ukoLLNueL4rgL7ISEVI+zL/hT26LpdCGQ lXlTn+JuOuMi04VHTzDCgZsbxo2nxprF6MfYUABk3wi/xkB4PUD59TGFI I=; X-IronPort-AV: E=Sophos;i="5.58,422,1544486400"; d="scan'208";a="525792558" Received: from alln-core-2.cisco.com ([173.36.13.135]) by rcdn-iport-4.cisco.com with 
ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 28 Feb 2019 07:04:28 +0000 Received: from cisco.com (savbu-usnic-a.cisco.com [10.193.184.48]) by alln-core-2.cisco.com (8.15.2/8.15.2) with ESMTP id x1S74S29027636; Thu, 28 Feb 2019 07:04:28 GMT Received: by cisco.com (Postfix, from userid 508933) id 245AA20F2001; Wed, 27 Feb 2019 23:04:28 -0800 (PST) From: Hyong Youb Kim To: Ferruh Yigit Cc: dev@dpdk.org, John Daley , Hyong Youb Kim , stable@dpdk.org Date: Wed, 27 Feb 2019 23:03:06 -0800 Message-Id: <20190228070317.17002-5-hyonkim@cisco.com> X-Mailer: git-send-email 2.16.2 In-Reply-To: <20190228070317.17002-1-hyonkim@cisco.com> References: <20190228070317.17002-1-hyonkim@cisco.com> X-Outbound-SMTP-Client: 10.193.184.48, savbu-usnic-a.cisco.com X-Outbound-Node: alln-core-2.cisco.com Subject: [dpdk-dev] [PATCH 04/15] net/enic: allow flow mark ID 0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" The driver currently accepts mark ID 0 but does not report it in matching packet's mbuf. For example, the following testpmd command succeeds. But, the mbuf of a matching IPv4 UDP packet does not have PKT_RX_FDIR_ID set. flow create 0 ingress pattern ... actions mark id 0 / queue index 0 / end The problem has to do with mapping mark IDs (32-bit) to NIC filter IDs. Filter ID is currently 16-bit, so values greater than 0xffff are rejected. The firmware reserves filter ID 0 for filters that do not mark (e.g. steer w/o mark). And, the driver reserves 0xffff for the flag action. This leaves 1...0xfffe for app use. It is possible to simply reject mark ID 0 as unsupported. But, 0 is commonly used (e.g. OVS-DPDK and VPP). So, when adding a filter, set filter ID = mark ID + 1 to support mark ID 0. The receive handler subtracts 1 from filter ID to get back the original mark ID. Fixes: dfbd6a9cb504 ("net/enic: extend flow director support for 1300 series") Cc: stable@dpdk.org Signed-off-by: Hyong Youb Kim Reviewed-by: John Daley --- drivers/net/enic/enic_flow.c | 15 +++++++++++---- drivers/net/enic/enic_rxtx_common.h | 3 ++- 2 files changed, 13 insertions(+), 5 deletions(-) diff --git a/drivers/net/enic/enic_flow.c b/drivers/net/enic/enic_flow.c index 55d8d50a1..e12a6ec73 100644 --- a/drivers/net/enic/enic_flow.c +++ b/drivers/net/enic/enic_flow.c @@ -1081,12 +1081,18 @@ enic_copy_action_v2(const struct rte_flow_action actions[], if (overlap & MARK) return ENOTSUP; overlap |= MARK; - /* ENIC_MAGIC_FILTER_ID is reserved and is the highest - * in the range of allows mark ids. + /* + * Map mark ID (32-bit) to filter ID (16-bit): + * - Reject values > 16 bits + * - Filter ID 0 is reserved for filters that steer + * but not mark. So add 1 to the mark ID to avoid + * using 0. + * - Filter ID (ENIC_MAGIC_FILTER_ID = 0xffff) is + * reserved for the "flag" action below. 
*/ - if (mark->id >= ENIC_MAGIC_FILTER_ID) + if (mark->id >= ENIC_MAGIC_FILTER_ID - 1) return EINVAL; - enic_action->filter_id = mark->id; + enic_action->filter_id = mark->id + 1; enic_action->flags |= FILTER_ACTION_FILTER_ID_FLAG; break; } @@ -1094,6 +1100,7 @@ enic_copy_action_v2(const struct rte_flow_action actions[], if (overlap & MARK) return ENOTSUP; overlap |= MARK; + /* ENIC_MAGIC_FILTER_ID is reserved for flagging */ enic_action->filter_id = ENIC_MAGIC_FILTER_ID; enic_action->flags |= FILTER_ACTION_FILTER_ID_FLAG; break; diff --git a/drivers/net/enic/enic_rxtx_common.h b/drivers/net/enic/enic_rxtx_common.h index bfbb4909e..66f631dfe 100644 --- a/drivers/net/enic/enic_rxtx_common.h +++ b/drivers/net/enic/enic_rxtx_common.h @@ -226,7 +226,8 @@ enic_cq_rx_to_pkt_flags(struct cq_desc *cqd, struct rte_mbuf *mbuf) if (filter_id) { pkt_flags |= PKT_RX_FDIR; if (filter_id != ENIC_MAGIC_FILTER_ID) { - mbuf->hash.fdir.hi = clsf_cqd->filter_id; + /* filter_id = mark id + 1, so subtract 1 */ + mbuf->hash.fdir.hi = filter_id - 1; pkt_flags |= PKT_RX_FDIR_ID; } } From patchwork Thu Feb 28 07:03:07 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Hyong Youb Kim (hyonkim)" X-Patchwork-Id: 50602 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 121D24C96; Thu, 28 Feb 2019 08:04:43 +0100 (CET) Received: from alln-iport-6.cisco.com (alln-iport-6.cisco.com [173.37.142.93]) by dpdk.org (Postfix) with ESMTP id BC32D4C9F; Thu, 28 Feb 2019 08:04:41 +0100 (CET) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=cisco.com; i=@cisco.com; l=2755; q=dns/txt; s=iport; t=1551337482; x=1552547082; h=from:to:cc:subject:date:message-id:in-reply-to: references; bh=he4ktedDHvic2UrYgYx+RP6VxHvgT2Ka5BFBiaSTAos=; b=D/VCXU6us+dGkbb+X/DIwgv+4t9G8Kz4RzbbsMrKuDKsfCFeXDxVBMHn /ZGLG1QPMeeQR072MjKYCWYbB6xG/d3G0MUghE1n3WyIsoUyI5t/MPekT hZieHI9f870JwXGUkdB8uVBu9xMSBkmMpN5HPOFEDsfoxmuyQWX+X6n/h U=; X-IronPort-AV: E=Sophos;i="5.58,422,1544486400"; d="scan'208";a="241773635" Received: from alln-core-10.cisco.com ([173.36.13.132]) by alln-iport-6.cisco.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 28 Feb 2019 07:04:40 +0000 Received: from cisco.com (savbu-usnic-a.cisco.com [10.193.184.48]) by alln-core-10.cisco.com (8.15.2/8.15.2) with ESMTP id x1S74ebB027424; Thu, 28 Feb 2019 07:04:40 GMT Received: by cisco.com (Postfix, from userid 508933) id 57EAF20F2001; Wed, 27 Feb 2019 23:04:40 -0800 (PST) From: Hyong Youb Kim To: Ferruh Yigit Cc: dev@dpdk.org, John Daley , Hyong Youb Kim , stable@dpdk.org Date: Wed, 27 Feb 2019 23:03:07 -0800 Message-Id: <20190228070317.17002-6-hyonkim@cisco.com> X-Mailer: git-send-email 2.16.2 In-Reply-To: <20190228070317.17002-1-hyonkim@cisco.com> References: <20190228070317.17002-1-hyonkim@cisco.com> X-Outbound-SMTP-Client: 10.193.184.48, savbu-usnic-a.cisco.com X-Outbound-Node: alln-core-10.cisco.com Subject: [dpdk-dev] [PATCH 05/15] net/enic: check for unsupported flow item types X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Currently a pattern with an unsupported item type causes segfault, because the flow handler is using the type as an array index without checking bounds. 
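To make the failure mode concrete, here is a toy illustration (the table and names below are made up, not the driver's enic_items array) of why indexing a fixed-size table with an unchecked item type reads out of bounds; the fix described next adds exactly such a range check.

/* Toy illustration only; not driver code. */
#include <stddef.h>

enum item_type { ITEM_ETH, ITEM_IPV4, ITEM_UDP, ITEM_MAX };

struct item_info {
	int (*copy_item)(void);   /* NULL means the item is unsupported */
};

static const struct item_info table[ITEM_MAX] = {{ NULL }};

static const struct item_info *lookup(int type)
{
	/* Range-check before indexing, then reject entries with no handler. */
	if (type < 0 || type >= ITEM_MAX || table[type].copy_item == NULL)
		return NULL;   /* "unsupported item" instead of reading past the table */
	return &table[type];
}

int main(void)
{
	/* An out-of-range or handler-less type is now rejected cleanly. */
	return lookup(1000) == NULL ? 0 : 1;
}
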
Add an explicit check for unsupported item types and avoid out-of-bound accesses. Fixes: 6ced137607d0 ("net/enic: flow API for NICs with advanced filters enabled") Cc: stable@dpdk.org Signed-off-by: Hyong Youb Kim Reviewed-by: John Daley --- drivers/net/enic/enic_flow.c | 18 +++++++++++++++--- 1 file changed, 15 insertions(+), 3 deletions(-) diff --git a/drivers/net/enic/enic_flow.c b/drivers/net/enic/enic_flow.c index e12a6ec73..c60476c8c 100644 --- a/drivers/net/enic/enic_flow.c +++ b/drivers/net/enic/enic_flow.c @@ -40,6 +40,8 @@ struct enic_items { struct enic_filter_cap { /** list of valid items and their handlers and attributes. */ const struct enic_items *item_info; + /* Max type in the above list, used to detect unsupported types */ + enum rte_flow_item_type max_item_type; }; /* functions for copying flow actions into enic actions */ @@ -257,12 +259,15 @@ static const struct enic_items enic_items_v3[] = { static const struct enic_filter_cap enic_filter_cap[] = { [FILTER_IPV4_5TUPLE] = { .item_info = enic_items_v1, + .max_item_type = RTE_FLOW_ITEM_TYPE_TCP, }, [FILTER_USNIC_IP] = { .item_info = enic_items_v2, + .max_item_type = RTE_FLOW_ITEM_TYPE_VXLAN, }, [FILTER_DPDK_1] = { .item_info = enic_items_v3, + .max_item_type = RTE_FLOW_ITEM_TYPE_VXLAN, }, }; @@ -946,7 +951,7 @@ item_stacking_valid(enum rte_flow_item_type prev_item, */ static int enic_copy_filter(const struct rte_flow_item pattern[], - const struct enic_items *items_info, + const struct enic_filter_cap *cap, struct filter_v2 *enic_filter, struct rte_flow_error *error) { @@ -969,7 +974,14 @@ enic_copy_filter(const struct rte_flow_item pattern[], if (item->type == RTE_FLOW_ITEM_TYPE_VOID) continue; - item_info = &items_info[item->type]; + item_info = &cap->item_info[item->type]; + if (item->type > cap->max_item_type || + item_info->copy_item == NULL) { + rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_ITEM, + NULL, "Unsupported item."); + return -rte_errno; + } /* check to see if item stacking is valid */ if (!item_stacking_valid(prev_item, item_info, is_first_item)) @@ -1423,7 +1435,7 @@ enic_flow_parse(struct rte_eth_dev *dev, return -rte_errno; } enic_filter->type = enic->flow_filter_mode; - ret = enic_copy_filter(pattern, enic_filter_cap->item_info, + ret = enic_copy_filter(pattern, enic_filter_cap, enic_filter, error); return ret; } From patchwork Thu Feb 28 07:03:08 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Hyong Youb Kim (hyonkim)" X-Patchwork-Id: 50603 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id A8E524C8F; Thu, 28 Feb 2019 08:04:55 +0100 (CET) Received: from rcdn-iport-7.cisco.com (rcdn-iport-7.cisco.com [173.37.86.78]) by dpdk.org (Postfix) with ESMTP id B88074C90 for ; Thu, 28 Feb 2019 08:04:53 +0100 (CET) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=cisco.com; i=@cisco.com; l=4689; q=dns/txt; s=iport; t=1551337493; x=1552547093; h=from:to:cc:subject:date:message-id:in-reply-to: references; bh=ZX5bRDjjH7sqmImGNWWQ4EdWYZQAw6sSlKbvUrp1Y1Y=; b=gsPWzW1MQo4Zmi5CEuoBCsbwD0EqaGyCiB32vNQH3baoDOQCCjJ+YOcR e1x0vTv5LJd1nfLB6GYg5kaoKjdSFjQnqN0ug2QBTbK/d9kj7M0k50J+Q icrwVifCfBRTHNt5FduDAKsRdexq459spzX/Fj5yqAvjQ/3fxzP5BceSE I=; X-IronPort-AV: E=Sophos;i="5.58,422,1544486400"; d="scan'208";a="522692798" Received: from alln-core-5.cisco.com ([173.36.13.138]) by 
rcdn-iport-7.cisco.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 28 Feb 2019 07:04:52 +0000 Received: from cisco.com (savbu-usnic-a.cisco.com [10.193.184.48]) by alln-core-5.cisco.com (8.15.2/8.15.2) with ESMTP id x1S74qUp021795; Thu, 28 Feb 2019 07:04:52 GMT Received: by cisco.com (Postfix, from userid 508933) id 6A7B820F2001; Wed, 27 Feb 2019 23:04:52 -0800 (PST) From: Hyong Youb Kim To: Ferruh Yigit Cc: dev@dpdk.org, John Daley , Hyong Youb Kim Date: Wed, 27 Feb 2019 23:03:08 -0800 Message-Id: <20190228070317.17002-7-hyonkim@cisco.com> X-Mailer: git-send-email 2.16.2 In-Reply-To: <20190228070317.17002-1-hyonkim@cisco.com> References: <20190228070317.17002-1-hyonkim@cisco.com> X-Outbound-SMTP-Client: 10.193.184.48, savbu-usnic-a.cisco.com X-Outbound-Node: alln-core-5.cisco.com Subject: [dpdk-dev] [PATCH 06/15] net/enic: enable limited RSS flow action X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Some apps like OVS-DPDK use MARK+RSS flow rules in order to offload packet matching to the NIC. The RSS action in such flow rules simply indicates "receive packet normally", not trying to override the port wide RSS. The action is included in the flow rules simply to terminate them, as MARK is not a fate-deciding action. And, the RSS action has a most basic config: default hash, level, types, null key, and identity queue mapping. Recent VIC adapters can support these "mark and receive" flow rules. So, enable support for RSS action for this limited use case. Signed-off-by: Hyong Youb Kim Reviewed-by: John Daley --- drivers/net/enic/enic_flow.c | 48 ++++++++++++++++++++++++++++++++++++++------ 1 file changed, 42 insertions(+), 6 deletions(-) diff --git a/drivers/net/enic/enic_flow.c b/drivers/net/enic/enic_flow.c index c60476c8c..0f6b6b930 100644 --- a/drivers/net/enic/enic_flow.c +++ b/drivers/net/enic/enic_flow.c @@ -45,7 +45,8 @@ struct enic_filter_cap { }; /* functions for copying flow actions into enic actions */ -typedef int (copy_action_fn)(const struct rte_flow_action actions[], +typedef int (copy_action_fn)(struct enic *enic, + const struct rte_flow_action actions[], struct filter_action_v2 *enic_action); /* functions for copying items into enic filters */ @@ -57,8 +58,7 @@ struct enic_action_cap { /** list of valid actions */ const enum rte_flow_action_type *actions; /** copy function for a particular NIC */ - int (*copy_fn)(const struct rte_flow_action actions[], - struct filter_action_v2 *enic_action); + copy_action_fn *copy_fn; }; /* Forward declarations */ @@ -282,6 +282,7 @@ static const enum rte_flow_action_type enic_supported_actions_v2_id[] = { RTE_FLOW_ACTION_TYPE_QUEUE, RTE_FLOW_ACTION_TYPE_MARK, RTE_FLOW_ACTION_TYPE_FLAG, + RTE_FLOW_ACTION_TYPE_RSS, RTE_FLOW_ACTION_TYPE_END, }; @@ -290,6 +291,7 @@ static const enum rte_flow_action_type enic_supported_actions_v2_drop[] = { RTE_FLOW_ACTION_TYPE_MARK, RTE_FLOW_ACTION_TYPE_FLAG, RTE_FLOW_ACTION_TYPE_DROP, + RTE_FLOW_ACTION_TYPE_RSS, RTE_FLOW_ACTION_TYPE_END, }; @@ -299,6 +301,7 @@ static const enum rte_flow_action_type enic_supported_actions_v2_count[] = { RTE_FLOW_ACTION_TYPE_FLAG, RTE_FLOW_ACTION_TYPE_DROP, RTE_FLOW_ACTION_TYPE_COUNT, + RTE_FLOW_ACTION_TYPE_RSS, RTE_FLOW_ACTION_TYPE_END, }; @@ -1016,7 +1019,8 @@ enic_copy_filter(const struct rte_flow_item pattern[], * @param error[out] */ static int -enic_copy_action_v1(const struct rte_flow_action 
actions[], +enic_copy_action_v1(__rte_unused struct enic *enic, + const struct rte_flow_action actions[], struct filter_action_v2 *enic_action) { enum { FATE = 1, }; @@ -1062,7 +1066,8 @@ enic_copy_action_v1(const struct rte_flow_action actions[], * @param error[out] */ static int -enic_copy_action_v2(const struct rte_flow_action actions[], +enic_copy_action_v2(struct enic *enic, + const struct rte_flow_action actions[], struct filter_action_v2 *enic_action) { enum { FATE = 1, MARK = 2, }; @@ -1128,6 +1133,37 @@ enic_copy_action_v2(const struct rte_flow_action actions[], enic_action->flags |= FILTER_ACTION_COUNTER_FLAG; break; } + case RTE_FLOW_ACTION_TYPE_RSS: { + const struct rte_flow_action_rss *rss = + (const struct rte_flow_action_rss *) + actions->conf; + bool allow; + uint16_t i; + + /* + * Hardware does not support general RSS actions, but + * we can still support the dummy one that is used to + * "receive normally". + */ + allow = rss->func == RTE_ETH_HASH_FUNCTION_DEFAULT && + rss->level == 0 && + (rss->types == 0 || + rss->types == enic->rss_hf) && + rss->queue_num == enic->rq_count && + rss->key_len == 0; + /* Identity queue map is ok */ + for (i = 0; i < rss->queue_num; i++) + allow = allow && (i == rss->queue[i]); + if (!allow) + return ENOTSUP; + if (overlap & FATE) + return ENOTSUP; + /* Need MARK or FLAG */ + if (!(overlap & MARK)) + return ENOTSUP; + overlap |= FATE; + break; + } case RTE_FLOW_ACTION_TYPE_VOID: continue; default: @@ -1418,7 +1454,7 @@ enic_flow_parse(struct rte_eth_dev *dev, action, "Invalid action."); return -rte_errno; } - ret = enic_action_cap->copy_fn(actions, enic_action); + ret = enic_action_cap->copy_fn(enic, actions, enic_action); if (ret) { rte_flow_error_set(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_HANDLE, NULL, "Unsupported action."); From patchwork Thu Feb 28 07:03:09 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Hyong Youb Kim (hyonkim)" X-Patchwork-Id: 50604 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id C6F3F4C9F; Thu, 28 Feb 2019 08:05:08 +0100 (CET) Received: from rcdn-iport-7.cisco.com (rcdn-iport-7.cisco.com [173.37.86.78]) by dpdk.org (Postfix) with ESMTP id 2CE304C74 for ; Thu, 28 Feb 2019 08:05:07 +0100 (CET) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=cisco.com; i=@cisco.com; l=2394; q=dns/txt; s=iport; t=1551337507; x=1552547107; h=from:to:cc:subject:date:message-id:in-reply-to: references; bh=dJxA7mHEa28LA2OoO/18M9iWskXi4f4sjC127MAo8e0=; b=e9IddGKhrjspDF58trS3wZR8VjJC9n33+mwYg0E4cuFHp/ICSOVJZWrc jD74qDGLDRb/HAXnRafrXLbPmQr4yb67LVkUOPCTldH9fFt9gqZzVUobh bD2RRogBVhvhxMQi2jXRdLZypXEHYCzuUh95tLH+O3pOx3TeQxS2TiM2+ U=; X-IronPort-AV: E=Sophos;i="5.58,422,1544486400"; d="scan'208";a="522692944" Received: from alln-core-3.cisco.com ([173.36.13.136]) by rcdn-iport-7.cisco.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 28 Feb 2019 07:05:06 +0000 Received: from cisco.com (savbu-usnic-a.cisco.com [10.193.184.48]) by alln-core-3.cisco.com (8.15.2/8.15.2) with ESMTP id x1S756XR019889; Thu, 28 Feb 2019 07:05:06 GMT Received: by cisco.com (Postfix, from userid 508933) id F0ABB20F2001; Wed, 27 Feb 2019 23:05:05 -0800 (PST) From: Hyong Youb Kim To: Ferruh Yigit Cc: dev@dpdk.org, John Daley , Hyong Youb Kim Date: Wed, 27 Feb 2019 23:03:09 -0800 Message-Id: <20190228070317.17002-8-hyonkim@cisco.com> 
X-Mailer: git-send-email 2.16.2 In-Reply-To: <20190228070317.17002-1-hyonkim@cisco.com> References: <20190228070317.17002-1-hyonkim@cisco.com> X-Outbound-SMTP-Client: 10.193.184.48, savbu-usnic-a.cisco.com X-Outbound-Node: alln-core-3.cisco.com Subject: [dpdk-dev] [PATCH 07/15] net/enic: enable limited PASSTHRU flow action X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Some apps like VPP use PASSTHRU+MARK flow rules to offload packet matching to the NIC. Just like MARK+RSS used by OVS-DPDK and others, PASSTHRU+MARK is used to "mark and then receive normally". Recent VIC adapters support such flow rules, so enable PASSTHRU for this limited use case. Signed-off-by: Hyong Youb Kim --- drivers/net/enic/enic_flow.c | 20 ++++++++++++++++++++ 1 file changed, 20 insertions(+) diff --git a/drivers/net/enic/enic_flow.c b/drivers/net/enic/enic_flow.c index 0f6b6b930..c6ed9e1b9 100644 --- a/drivers/net/enic/enic_flow.c +++ b/drivers/net/enic/enic_flow.c @@ -283,6 +283,7 @@ static const enum rte_flow_action_type enic_supported_actions_v2_id[] = { RTE_FLOW_ACTION_TYPE_MARK, RTE_FLOW_ACTION_TYPE_FLAG, RTE_FLOW_ACTION_TYPE_RSS, + RTE_FLOW_ACTION_TYPE_PASSTHRU, RTE_FLOW_ACTION_TYPE_END, }; @@ -292,6 +293,7 @@ static const enum rte_flow_action_type enic_supported_actions_v2_drop[] = { RTE_FLOW_ACTION_TYPE_FLAG, RTE_FLOW_ACTION_TYPE_DROP, RTE_FLOW_ACTION_TYPE_RSS, + RTE_FLOW_ACTION_TYPE_PASSTHRU, RTE_FLOW_ACTION_TYPE_END, }; @@ -302,6 +304,7 @@ static const enum rte_flow_action_type enic_supported_actions_v2_count[] = { RTE_FLOW_ACTION_TYPE_DROP, RTE_FLOW_ACTION_TYPE_COUNT, RTE_FLOW_ACTION_TYPE_RSS, + RTE_FLOW_ACTION_TYPE_PASSTHRU, RTE_FLOW_ACTION_TYPE_END, }; @@ -1072,6 +1075,7 @@ enic_copy_action_v2(struct enic *enic, { enum { FATE = 1, MARK = 2, }; uint32_t overlap = 0; + bool passthru = false; FLOW_TRACE(); @@ -1164,6 +1168,19 @@ enic_copy_action_v2(struct enic *enic, overlap |= FATE; break; } + case RTE_FLOW_ACTION_TYPE_PASSTHRU: { + /* + * Like RSS above, PASSTHRU + MARK may be used to + * "mark and then receive normally". MARK usually comes + * after PASSTHRU, so remember we have seen passthru + * and check for mark later. 
+ */ + if (overlap & FATE) + return ENOTSUP; + overlap |= FATE; + passthru = true; + break; + } case RTE_FLOW_ACTION_TYPE_VOID: continue; default: @@ -1171,6 +1188,9 @@ enic_copy_action_v2(struct enic *enic, break; } } + /* Only PASSTHRU + MARK is allowed */ + if (passthru && !(overlap & MARK)) + return ENOTSUP; if (!(overlap & FATE)) return ENOTSUP; enic_action->type = FILTER_ACTION_V2; From patchwork Thu Feb 28 07:03:10 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Hyong Youb Kim (hyonkim)" X-Patchwork-Id: 50605 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 6627C4CE4; Thu, 28 Feb 2019 08:05:20 +0100 (CET) Received: from alln-iport-7.cisco.com (alln-iport-7.cisco.com [173.37.142.94]) by dpdk.org (Postfix) with ESMTP id 13A5F5398 for ; Thu, 28 Feb 2019 08:05:18 +0100 (CET) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=cisco.com; i=@cisco.com; l=12796; q=dns/txt; s=iport; t=1551337519; x=1552547119; h=from:to:cc:subject:date:message-id:in-reply-to: references; bh=rdHxnqIyQBskoLwua9qplxTUODQ0Fv+xtWRCk9m64Sg=; b=iweqdCaxTgZvoXghWXUom9AlgXsduI+7S5LaNRE27cyYAtTwvJ/0XdWk YgmtaZRvrpQJcnW/Hv+fTjucKihs6kO4EqG1S+Gz2nzAD+hp+WeiqYYVC QInDSLtbdmt1RMta6SyBuGmcyAa3fgc6hQnjg7XA5swbO4RmXZz13jM3S 0=; X-IronPort-AV: E=Sophos;i="5.58,422,1544486400"; d="scan'208";a="240673077" Received: from alln-core-2.cisco.com ([173.36.13.135]) by alln-iport-7.cisco.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 28 Feb 2019 07:05:18 +0000 Received: from cisco.com (savbu-usnic-a.cisco.com [10.193.184.48]) by alln-core-2.cisco.com (8.15.2/8.15.2) with ESMTP id x1S75INP028649; Thu, 28 Feb 2019 07:05:18 GMT Received: by cisco.com (Postfix, from userid 508933) id F020620F2001; Wed, 27 Feb 2019 23:05:17 -0800 (PST) From: Hyong Youb Kim To: Ferruh Yigit Cc: dev@dpdk.org, John Daley , Hyong Youb Kim Date: Wed, 27 Feb 2019 23:03:10 -0800 Message-Id: <20190228070317.17002-9-hyonkim@cisco.com> X-Mailer: git-send-email 2.16.2 In-Reply-To: <20190228070317.17002-1-hyonkim@cisco.com> References: <20190228070317.17002-1-hyonkim@cisco.com> X-Outbound-SMTP-Client: 10.193.184.48, savbu-usnic-a.cisco.com X-Outbound-Node: alln-core-2.cisco.com Subject: [dpdk-dev] [PATCH 08/15] net/enic: move arguments into struct X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" There are many copy_item functions, all with the same arguments, which makes it difficult to add/change arguments. Move the arguments into a struct to help subsequent commits that will add/fix features. Also remove self-explanatory verbose comments for these local functions. These changes are purely mechanical and have no impact on functionalities. Signed-off-by: Hyong Youb Kim --- drivers/net/enic/enic_flow.c | 209 ++++++++++++++----------------------------- 1 file changed, 67 insertions(+), 142 deletions(-) diff --git a/drivers/net/enic/enic_flow.c b/drivers/net/enic/enic_flow.c index c6ed9e1b9..fda641b6f 100644 --- a/drivers/net/enic/enic_flow.c +++ b/drivers/net/enic/enic_flow.c @@ -23,11 +23,27 @@ rte_log(RTE_LOG_ ## level, enicpmd_logtype_flow, \ fmt "\n", ##args) +/* + * Common arguments passed to copy_item functions. 
Use this structure + * so we can easily add new arguments. + * item: Item specification. + * filter: Partially filled in NIC filter structure. + * inner_ofst: If zero, this is an outer header. If non-zero, this is + * the offset into L5 where the header begins. + */ +struct copy_item_args { + const struct rte_flow_item *item; + struct filter_v2 *filter; + uint8_t *inner_ofst; +}; + +/* functions for copying items into enic filters */ +typedef int (enic_copy_item_fn)(struct copy_item_args *arg); + /** Info about how to copy items into enic filters. */ struct enic_items { /** Function for copying and validating an item. */ - int (*copy_item)(const struct rte_flow_item *item, - struct filter_v2 *enic_filter, u8 *inner_ofst); + enic_copy_item_fn *copy_item; /** List of valid previous items. */ const enum rte_flow_item_type * const prev_items; /** True if it's OK for this item to be the first item. For some NIC @@ -49,10 +65,6 @@ typedef int (copy_action_fn)(struct enic *enic, const struct rte_flow_action actions[], struct filter_action_v2 *enic_action); -/* functions for copying items into enic filters */ -typedef int(enic_copy_item_fn)(const struct rte_flow_item *item, - struct filter_v2 *enic_filter, u8 *inner_ofst); - /** Action capabilities for various NICs. */ struct enic_action_cap { /** list of valid actions */ @@ -340,20 +352,12 @@ mask_exact_match(const u8 *supported, const u8 *supplied, return 1; } -/** - * Copy IPv4 item into version 1 NIC filter. - * - * @param item[in] - * Item specification. - * @param enic_filter[out] - * Partially filled in NIC filter structure. - * @param inner_ofst[in] - * Should always be 0 for version 1. - */ static int -enic_copy_item_ipv4_v1(const struct rte_flow_item *item, - struct filter_v2 *enic_filter, u8 *inner_ofst) +enic_copy_item_ipv4_v1(struct copy_item_args *arg) { + const struct rte_flow_item *item = arg->item; + struct filter_v2 *enic_filter = arg->filter; + uint8_t *inner_ofst = arg->inner_ofst; const struct rte_flow_item_ipv4 *spec = item->spec; const struct rte_flow_item_ipv4 *mask = item->mask; struct filter_ipv4_5tuple *enic_5tup = &enic_filter->u.ipv4; @@ -390,20 +394,12 @@ enic_copy_item_ipv4_v1(const struct rte_flow_item *item, return 0; } -/** - * Copy UDP item into version 1 NIC filter. - * - * @param item[in] - * Item specification. - * @param enic_filter[out] - * Partially filled in NIC filter structure. - * @param inner_ofst[in] - * Should always be 0 for version 1. - */ static int -enic_copy_item_udp_v1(const struct rte_flow_item *item, - struct filter_v2 *enic_filter, u8 *inner_ofst) +enic_copy_item_udp_v1(struct copy_item_args *arg) { + const struct rte_flow_item *item = arg->item; + struct filter_v2 *enic_filter = arg->filter; + uint8_t *inner_ofst = arg->inner_ofst; const struct rte_flow_item_udp *spec = item->spec; const struct rte_flow_item_udp *mask = item->mask; struct filter_ipv4_5tuple *enic_5tup = &enic_filter->u.ipv4; @@ -441,20 +437,12 @@ enic_copy_item_udp_v1(const struct rte_flow_item *item, return 0; } -/** - * Copy TCP item into version 1 NIC filter. - * - * @param item[in] - * Item specification. - * @param enic_filter[out] - * Partially filled in NIC filter structure. - * @param inner_ofst[in] - * Should always be 0 for version 1. 
- */ static int -enic_copy_item_tcp_v1(const struct rte_flow_item *item, - struct filter_v2 *enic_filter, u8 *inner_ofst) +enic_copy_item_tcp_v1(struct copy_item_args *arg) { + const struct rte_flow_item *item = arg->item; + struct filter_v2 *enic_filter = arg->filter; + uint8_t *inner_ofst = arg->inner_ofst; const struct rte_flow_item_tcp *spec = item->spec; const struct rte_flow_item_tcp *mask = item->mask; struct filter_ipv4_5tuple *enic_5tup = &enic_filter->u.ipv4; @@ -492,21 +480,12 @@ enic_copy_item_tcp_v1(const struct rte_flow_item *item, return 0; } -/** - * Copy ETH item into version 2 NIC filter. - * - * @param item[in] - * Item specification. - * @param enic_filter[out] - * Partially filled in NIC filter structure. - * @param inner_ofst[in] - * If zero, this is an outer header. If non-zero, this is the offset into L5 - * where the header begins. - */ static int -enic_copy_item_eth_v2(const struct rte_flow_item *item, - struct filter_v2 *enic_filter, u8 *inner_ofst) +enic_copy_item_eth_v2(struct copy_item_args *arg) { + const struct rte_flow_item *item = arg->item; + struct filter_v2 *enic_filter = arg->filter; + uint8_t *inner_ofst = arg->inner_ofst; struct ether_hdr enic_spec; struct ether_hdr enic_mask; const struct rte_flow_item_eth *spec = item->spec; @@ -555,21 +534,12 @@ enic_copy_item_eth_v2(const struct rte_flow_item *item, return 0; } -/** - * Copy VLAN item into version 2 NIC filter. - * - * @param item[in] - * Item specification. - * @param enic_filter[out] - * Partially filled in NIC filter structure. - * @param inner_ofst[in] - * If zero, this is an outer header. If non-zero, this is the offset into L5 - * where the header begins. - */ static int -enic_copy_item_vlan_v2(const struct rte_flow_item *item, - struct filter_v2 *enic_filter, u8 *inner_ofst) +enic_copy_item_vlan_v2(struct copy_item_args *arg) { + const struct rte_flow_item *item = arg->item; + struct filter_v2 *enic_filter = arg->filter; + uint8_t *inner_ofst = arg->inner_ofst; const struct rte_flow_item_vlan *spec = item->spec; const struct rte_flow_item_vlan *mask = item->mask; struct filter_generic_1 *gp = &enic_filter->u.generic_1; @@ -612,20 +582,12 @@ enic_copy_item_vlan_v2(const struct rte_flow_item *item, return 0; } -/** - * Copy IPv4 item into version 2 NIC filter. - * - * @param item[in] - * Item specification. - * @param enic_filter[out] - * Partially filled in NIC filter structure. - * @param inner_ofst[in] - * Must be 0. Don't support inner IPv4 filtering. - */ static int -enic_copy_item_ipv4_v2(const struct rte_flow_item *item, - struct filter_v2 *enic_filter, u8 *inner_ofst) +enic_copy_item_ipv4_v2(struct copy_item_args *arg) { + const struct rte_flow_item *item = arg->item; + struct filter_v2 *enic_filter = arg->filter; + uint8_t *inner_ofst = arg->inner_ofst; const struct rte_flow_item_ipv4 *spec = item->spec; const struct rte_flow_item_ipv4 *mask = item->mask; struct filter_generic_1 *gp = &enic_filter->u.generic_1; @@ -662,20 +624,12 @@ enic_copy_item_ipv4_v2(const struct rte_flow_item *item, return 0; } -/** - * Copy IPv6 item into version 2 NIC filter. - * - * @param item[in] - * Item specification. - * @param enic_filter[out] - * Partially filled in NIC filter structure. - * @param inner_ofst[in] - * Must be 0. Don't support inner IPv6 filtering. 
- */ static int -enic_copy_item_ipv6_v2(const struct rte_flow_item *item, - struct filter_v2 *enic_filter, u8 *inner_ofst) +enic_copy_item_ipv6_v2(struct copy_item_args *arg) { + const struct rte_flow_item *item = arg->item; + struct filter_v2 *enic_filter = arg->filter; + uint8_t *inner_ofst = arg->inner_ofst; const struct rte_flow_item_ipv6 *spec = item->spec; const struct rte_flow_item_ipv6 *mask = item->mask; struct filter_generic_1 *gp = &enic_filter->u.generic_1; @@ -712,20 +666,12 @@ enic_copy_item_ipv6_v2(const struct rte_flow_item *item, return 0; } -/** - * Copy UDP item into version 2 NIC filter. - * - * @param item[in] - * Item specification. - * @param enic_filter[out] - * Partially filled in NIC filter structure. - * @param inner_ofst[in] - * Must be 0. Don't support inner UDP filtering. - */ static int -enic_copy_item_udp_v2(const struct rte_flow_item *item, - struct filter_v2 *enic_filter, u8 *inner_ofst) +enic_copy_item_udp_v2(struct copy_item_args *arg) { + const struct rte_flow_item *item = arg->item; + struct filter_v2 *enic_filter = arg->filter; + uint8_t *inner_ofst = arg->inner_ofst; const struct rte_flow_item_udp *spec = item->spec; const struct rte_flow_item_udp *mask = item->mask; struct filter_generic_1 *gp = &enic_filter->u.generic_1; @@ -762,20 +708,12 @@ enic_copy_item_udp_v2(const struct rte_flow_item *item, return 0; } -/** - * Copy TCP item into version 2 NIC filter. - * - * @param item[in] - * Item specification. - * @param enic_filter[out] - * Partially filled in NIC filter structure. - * @param inner_ofst[in] - * Must be 0. Don't support inner TCP filtering. - */ static int -enic_copy_item_tcp_v2(const struct rte_flow_item *item, - struct filter_v2 *enic_filter, u8 *inner_ofst) +enic_copy_item_tcp_v2(struct copy_item_args *arg) { + const struct rte_flow_item *item = arg->item; + struct filter_v2 *enic_filter = arg->filter; + uint8_t *inner_ofst = arg->inner_ofst; const struct rte_flow_item_tcp *spec = item->spec; const struct rte_flow_item_tcp *mask = item->mask; struct filter_generic_1 *gp = &enic_filter->u.generic_1; @@ -812,20 +750,12 @@ enic_copy_item_tcp_v2(const struct rte_flow_item *item, return 0; } -/** - * Copy SCTP item into version 2 NIC filter. - * - * @param item[in] - * Item specification. - * @param enic_filter[out] - * Partially filled in NIC filter structure. - * @param inner_ofst[in] - * Must be 0. Don't support inner SCTP filtering. - */ static int -enic_copy_item_sctp_v2(const struct rte_flow_item *item, - struct filter_v2 *enic_filter, u8 *inner_ofst) +enic_copy_item_sctp_v2(struct copy_item_args *arg) { + const struct rte_flow_item *item = arg->item; + struct filter_v2 *enic_filter = arg->filter; + uint8_t *inner_ofst = arg->inner_ofst; const struct rte_flow_item_sctp *spec = item->spec; const struct rte_flow_item_sctp *mask = item->mask; struct filter_generic_1 *gp = &enic_filter->u.generic_1; @@ -874,20 +804,12 @@ enic_copy_item_sctp_v2(const struct rte_flow_item *item, return 0; } -/** - * Copy UDP item into version 2 NIC filter. - * - * @param item[in] - * Item specification. - * @param enic_filter[out] - * Partially filled in NIC filter structure. - * @param inner_ofst[in] - * Must be 0. VxLAN headers always start at the beginning of L5. 
- */ static int -enic_copy_item_vxlan_v2(const struct rte_flow_item *item, - struct filter_v2 *enic_filter, u8 *inner_ofst) +enic_copy_item_vxlan_v2(struct copy_item_args *arg) { + const struct rte_flow_item *item = arg->item; + struct filter_v2 *enic_filter = arg->filter; + uint8_t *inner_ofst = arg->inner_ofst; const struct rte_flow_item_vxlan *spec = item->spec; const struct rte_flow_item_vxlan *mask = item->mask; struct filter_generic_1 *gp = &enic_filter->u.generic_1; @@ -966,13 +888,15 @@ enic_copy_filter(const struct rte_flow_item pattern[], u8 inner_ofst = 0; /* If encapsulated, ofst into L5 */ enum rte_flow_item_type prev_item; const struct enic_items *item_info; - + struct copy_item_args args; u8 is_first_item = 1; FLOW_TRACE(); prev_item = 0; + args.filter = enic_filter; + args.inner_ofst = &inner_ofst; for (; item->type != RTE_FLOW_ITEM_TYPE_END; item++) { /* Get info about how to validate and copy the item. If NULL * is returned the nic does not support the item. @@ -993,7 +917,8 @@ enic_copy_filter(const struct rte_flow_item pattern[], if (!item_stacking_valid(prev_item, item_info, is_first_item)) goto stacking_error; - ret = item_info->copy_item(item, enic_filter, &inner_ofst); + args.item = item; + ret = item_info->copy_item(&args); if (ret) goto item_not_supported; prev_item = item->type; From patchwork Thu Feb 28 07:03:11 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Hyong Youb Kim (hyonkim)" X-Patchwork-Id: 50606 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id B41F8548B; Thu, 28 Feb 2019 08:05:32 +0100 (CET) Received: from alln-iport-2.cisco.com (alln-iport-2.cisco.com [173.37.142.89]) by dpdk.org (Postfix) with ESMTP id D1D4556A1 for ; Thu, 28 Feb 2019 08:05:30 +0100 (CET) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=cisco.com; i=@cisco.com; l=3964; q=dns/txt; s=iport; t=1551337531; x=1552547131; h=from:to:cc:subject:date:message-id:in-reply-to: references; bh=pHmtyHOjxTOoD9FhCIhryRC2t7OQiNxCavFW1tEuqpM=; b=F5d2lueNk7d5cLVMK/sk13dpX3E5Y77BeKMg+9mAvIiDdl/i6MII/8t+ RosBFOf/uaokCP69TAKTuxe8xvls1Q+BIWDQ9ha0HTt0ZAIeOK9nJPfA2 L0zdGEqpDrN5CZ9e9CMprmArGwrIUg7tfe4KzDZm/56S2BFFLq+xufiNO w=; X-IronPort-AV: E=Sophos;i="5.58,422,1544486400"; d="scan'208";a="241659324" Received: from alln-core-1.cisco.com ([173.36.13.131]) by alln-iport-2.cisco.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 28 Feb 2019 07:05:30 +0000 Received: from cisco.com (savbu-usnic-a.cisco.com [10.193.184.48]) by alln-core-1.cisco.com (8.15.2/8.15.2) with ESMTP id x1S75TcI009225; Thu, 28 Feb 2019 07:05:29 GMT Received: by cisco.com (Postfix, from userid 508933) id 88A0F20F2001; Wed, 27 Feb 2019 23:05:29 -0800 (PST) From: Hyong Youb Kim To: Ferruh Yigit Cc: dev@dpdk.org, John Daley , Hyong Youb Kim Date: Wed, 27 Feb 2019 23:03:11 -0800 Message-Id: <20190228070317.17002-10-hyonkim@cisco.com> X-Mailer: git-send-email 2.16.2 In-Reply-To: <20190228070317.17002-1-hyonkim@cisco.com> References: <20190228070317.17002-1-hyonkim@cisco.com> X-Outbound-SMTP-Client: 10.193.184.48, savbu-usnic-a.cisco.com X-Outbound-Node: alln-core-1.cisco.com Subject: [dpdk-dev] [PATCH 09/15] net/enic: enable limited support for RAW flow item X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: 
List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Some apps like VPP use a raw item to match UDP tunnel headers like VXLAN or GENEVE. The NIC hardware supports such usage via L5 match, which does pattern match on packet data immediately following the outer L4 header. Accept raw items for these limited use cases. Signed-off-by: Hyong Youb Kim --- drivers/net/enic/enic_flow.c | 65 ++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 65 insertions(+) diff --git a/drivers/net/enic/enic_flow.c b/drivers/net/enic/enic_flow.c index fda641b6f..ffc6ce1da 100644 --- a/drivers/net/enic/enic_flow.c +++ b/drivers/net/enic/enic_flow.c @@ -77,6 +77,7 @@ struct enic_action_cap { static enic_copy_item_fn enic_copy_item_ipv4_v1; static enic_copy_item_fn enic_copy_item_udp_v1; static enic_copy_item_fn enic_copy_item_tcp_v1; +static enic_copy_item_fn enic_copy_item_raw_v2; static enic_copy_item_fn enic_copy_item_eth_v2; static enic_copy_item_fn enic_copy_item_vlan_v2; static enic_copy_item_fn enic_copy_item_ipv4_v2; @@ -123,6 +124,14 @@ static const struct enic_items enic_items_v1[] = { * that layer 3 must be specified. */ static const struct enic_items enic_items_v2[] = { + [RTE_FLOW_ITEM_TYPE_RAW] = { + .copy_item = enic_copy_item_raw_v2, + .valid_start_item = 0, + .prev_items = (const enum rte_flow_item_type[]) { + RTE_FLOW_ITEM_TYPE_UDP, + RTE_FLOW_ITEM_TYPE_END, + }, + }, [RTE_FLOW_ITEM_TYPE_ETH] = { .copy_item = enic_copy_item_eth_v2, .valid_start_item = 1, @@ -196,6 +205,14 @@ static const struct enic_items enic_items_v2[] = { /** NICs with Advanced filters enabled */ static const struct enic_items enic_items_v3[] = { + [RTE_FLOW_ITEM_TYPE_RAW] = { + .copy_item = enic_copy_item_raw_v2, + .valid_start_item = 0, + .prev_items = (const enum rte_flow_item_type[]) { + RTE_FLOW_ITEM_TYPE_UDP, + RTE_FLOW_ITEM_TYPE_END, + }, + }, [RTE_FLOW_ITEM_TYPE_ETH] = { .copy_item = enic_copy_item_eth_v2, .valid_start_item = 1, @@ -835,6 +852,54 @@ enic_copy_item_vxlan_v2(struct copy_item_args *arg) return 0; } +/* + * Copy raw item into version 2 NIC filter. Currently, raw pattern match is + * very limited. It is intended for matching UDP tunnel header (e.g. vxlan + * or geneve). + */ +static int +enic_copy_item_raw_v2(struct copy_item_args *arg) +{ + const struct rte_flow_item *item = arg->item; + struct filter_v2 *enic_filter = arg->filter; + uint8_t *inner_ofst = arg->inner_ofst; + const struct rte_flow_item_raw *spec = item->spec; + const struct rte_flow_item_raw *mask = item->mask; + struct filter_generic_1 *gp = &enic_filter->u.generic_1; + + FLOW_TRACE(); + + /* Cannot be used for inner packet */ + if (*inner_ofst) + return EINVAL; + /* Need both spec and mask */ + if (!spec || !mask) + return EINVAL; + /* Only supports relative with offset 0 */ + if (!spec->relative || spec->offset != 0 || spec->search || spec->limit) + return EINVAL; + /* Need non-null pattern that fits within the NIC's filter pattern */ + if (spec->length == 0 || spec->length > FILTER_GENERIC_1_KEY_LEN || + !spec->pattern || !mask->pattern) + return EINVAL; + /* + * Mask fields, including length, are often set to zero. Assume that + * means "same as spec" to avoid breaking existing apps. If length + * is not zero, then it should be >= spec length. + * + * No more pattern follows this, so append to the L4 layer instead of + * L5 to work with both recent and older VICs. 
+ */ + if (mask->length != 0 && mask->length < spec->length) + return EINVAL; + memcpy(gp->layer[FILTER_GENERIC_1_L4].mask + sizeof(struct udp_hdr), + mask->pattern, spec->length); + memcpy(gp->layer[FILTER_GENERIC_1_L4].val + sizeof(struct udp_hdr), + spec->pattern, spec->length); + + return 0; +} + /** * Return 1 if current item is valid on top of the previous one. * From patchwork Thu Feb 28 07:03:12 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Hyong Youb Kim (hyonkim)" X-Patchwork-Id: 50607 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id CDE684CA0; Thu, 28 Feb 2019 08:05:47 +0100 (CET) Received: from rcdn-iport-2.cisco.com (rcdn-iport-2.cisco.com [173.37.86.73]) by dpdk.org (Postfix) with ESMTP id 5B8C64CA0; Thu, 28 Feb 2019 08:05:45 +0100 (CET) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=cisco.com; i=@cisco.com; l=1454; q=dns/txt; s=iport; t=1551337545; x=1552547145; h=from:to:cc:subject:date:message-id:in-reply-to: references; bh=ugkcOu7f4kvORdbpVKyKEtAjKMfvS+Vwf79wlDxNUDg=; b=Oe2yWuVXmKu4xh0QGjtiRlXElRE1Db/yN4/O4QoKkkTKfISPTsZ5dcDW Up+7mo9uf4PvDZZVqCSBr9kJ2vSy/G+PCg4jobt283oqP/zCvNBlDZrpy 5AjbgAo0JIsOsGsjaQ5wnEzG3PAtAju0C3+X04gUMdMcK0N1qsJ5TTT6l 8=; X-IronPort-AV: E=Sophos;i="5.58,422,1544486400"; d="scan'208";a="529108229" Received: from alln-core-9.cisco.com ([173.36.13.129]) by rcdn-iport-2.cisco.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 28 Feb 2019 07:05:44 +0000 Received: from cisco.com (savbu-usnic-a.cisco.com [10.193.184.48]) by alln-core-9.cisco.com (8.15.2/8.15.2) with ESMTP id x1S75ies017956; Thu, 28 Feb 2019 07:05:44 GMT Received: by cisco.com (Postfix, from userid 508933) id 18F3120F2001; Wed, 27 Feb 2019 23:05:44 -0800 (PST) From: Hyong Youb Kim To: Ferruh Yigit Cc: dev@dpdk.org, John Daley , Hyong Youb Kim , stable@dpdk.org Date: Wed, 27 Feb 2019 23:03:12 -0800 Message-Id: <20190228070317.17002-11-hyonkim@cisco.com> X-Mailer: git-send-email 2.16.2 In-Reply-To: <20190228070317.17002-1-hyonkim@cisco.com> References: <20190228070317.17002-1-hyonkim@cisco.com> X-Outbound-SMTP-Client: 10.193.184.48, savbu-usnic-a.cisco.com X-Outbound-Node: alln-core-9.cisco.com Subject: [dpdk-dev] [PATCH 10/15] net/enic: initialize VXLAN port regardless of overlay offload X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Currently, the driver resets the vxlan port register only if overlay offload is enabled. But, the register is actually tied to hardware vxlan parsing, which is an independent feature and is always enabled even if overlay offload is disabled. If left uninitialized, it can affect flow rules that match vxlan. So always reset the port number when HW vxlan parsing is available. 
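For context, a minimal sketch of the application-facing API that this register backs (not part of the patch; port id 0 and the non-default port 8472 are made-up values). If the register were left stale from a previous run, vxlan flow rules could mismatch until the application re-adds the port:

    #include <rte_ethdev.h>

    /* Ask the NIC to treat UDP destination port 8472 as VXLAN. The enic
     * PMD stores this value in the same vxlan port register that the
     * patch below resets to the default (4789) at init time. */
    static int set_vxlan_port(uint16_t port_id)
    {
            struct rte_eth_udp_tunnel tunnel = {
                    .udp_port = 8472,
                    .prot_type = RTE_TUNNEL_TYPE_VXLAN,
            };

            return rte_eth_dev_udp_tunnel_port_add(port_id, &tunnel);
    }
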
Fixes: 8a4efd17410c ("net/enic: add handlers to add/delete vxlan port number") Cc: stable@dpdk.org Signed-off-by: Hyong Youb Kim --- drivers/net/enic/enic_main.c | 9 ++++++++- 1 file changed, 8 insertions(+), 1 deletion(-) diff --git a/drivers/net/enic/enic_main.c b/drivers/net/enic/enic_main.c index 2652949a2..ea9eb2edf 100644 --- a/drivers/net/enic/enic_main.c +++ b/drivers/net/enic/enic_main.c @@ -1714,8 +1714,15 @@ static int enic_dev_init(struct enic *enic) PKT_TX_OUTER_IP_CKSUM | PKT_TX_TUNNEL_MASK; enic->overlay_offload = true; - enic->vxlan_port = ENIC_DEFAULT_VXLAN_PORT; dev_info(enic, "Overlay offload is enabled\n"); + } + /* + * Reset the vxlan port if HW vxlan parsing is available. It + * is always enabled regardless of overlay offload + * enable/disable. + */ + if (enic->vxlan) { + enic->vxlan_port = ENIC_DEFAULT_VXLAN_PORT; /* * Reset the vxlan port to the default, as the NIC firmware * does not reset it automatically and keeps the old setting. From patchwork Thu Feb 28 07:03:13 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Hyong Youb Kim (hyonkim)" X-Patchwork-Id: 50608 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 69B0E4C94; Thu, 28 Feb 2019 08:06:01 +0100 (CET) Received: from rcdn-iport-8.cisco.com (rcdn-iport-8.cisco.com [173.37.86.79]) by dpdk.org (Postfix) with ESMTP id DC8614C8F; Thu, 28 Feb 2019 08:05:59 +0100 (CET) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=cisco.com; i=@cisco.com; l=4058; q=dns/txt; s=iport; t=1551337560; x=1552547160; h=from:to:cc:subject:date:message-id:in-reply-to: references; bh=fari03nMWiH7RL1Syo4YrEl7wR62zhIXiwbUqVq61qE=; b=buVa8974Zis3I4K2gIt3GAHjBztNYlCrhsw8cUEKgj3TVK5oWsWU9Xtg pFHGvjYYjFRowCTjtSv5Oduu0joEAjsqpeAbnwUCv9zoh3F87cURFSmLV OLmG4miCptHb14pk2DiR9eEhyExX6hb1NZMSb0hawWysYIDmL4e0Y4KLJ 8=; X-IronPort-AV: E=Sophos;i="5.58,422,1544486400"; d="scan'208";a="523882831" Received: from alln-core-3.cisco.com ([173.36.13.136]) by rcdn-iport-8.cisco.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 28 Feb 2019 07:05:58 +0000 Received: from cisco.com (savbu-usnic-a.cisco.com [10.193.184.48]) by alln-core-3.cisco.com (8.15.2/8.15.2) with ESMTP id x1S75wGn020457; Thu, 28 Feb 2019 07:05:58 GMT Received: by cisco.com (Postfix, from userid 508933) id 79E7520F2001; Wed, 27 Feb 2019 23:05:58 -0800 (PST) From: Hyong Youb Kim To: Ferruh Yigit Cc: dev@dpdk.org, John Daley , Hyong Youb Kim , stable@dpdk.org Date: Wed, 27 Feb 2019 23:03:13 -0800 Message-Id: <20190228070317.17002-12-hyonkim@cisco.com> X-Mailer: git-send-email 2.16.2 In-Reply-To: <20190228070317.17002-1-hyonkim@cisco.com> References: <20190228070317.17002-1-hyonkim@cisco.com> X-Outbound-SMTP-Client: 10.193.184.48, savbu-usnic-a.cisco.com X-Outbound-Node: alln-core-3.cisco.com Subject: [dpdk-dev] [PATCH 11/15] net/enic: fix a couple issues with VXLAN match X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" The filter API does not have flags for "match VXLAN". Explicitly set the UDP destination port and mask in the L4 pattern. Otherwise, UDP packets with non-VXLAN ports may be falsely reported as VXLAN. 1400 series VIC adapters have hardware VXLAN parsing. 
The L5 buffer on the NIC starts with the inner Ethernet header, and the VXLAN header is now in the L4 buffer following the UDP header. So the VXLAN spec/mask needs to be in the L4 pattern, not L5. Older models still expect the VXLAN spec/mask in the L5 pattern. Fix up the L4/L5 patterns accordingly. Fixes: 6ced137607d0 ("net/enic: flow API for NICs with advanced filters enabled") Cc: stable@dpdk.org Signed-off-by: Hyong Youb Kim --- drivers/net/enic/enic_flow.c | 46 +++++++++++++++++++++++++++++++++++++++++++- 1 file changed, 45 insertions(+), 1 deletion(-) diff --git a/drivers/net/enic/enic_flow.c b/drivers/net/enic/enic_flow.c index ffc6ce1da..da43b31dc 100644 --- a/drivers/net/enic/enic_flow.c +++ b/drivers/net/enic/enic_flow.c @@ -830,12 +830,23 @@ enic_copy_item_vxlan_v2(struct copy_item_args *arg) const struct rte_flow_item_vxlan *spec = item->spec; const struct rte_flow_item_vxlan *mask = item->mask; struct filter_generic_1 *gp = &enic_filter->u.generic_1; + struct udp_hdr *udp; FLOW_TRACE(); if (*inner_ofst) return EINVAL; + /* + * The NIC filter API has no flags for "match vxlan". Set UDP port to + * avoid false positives. + */ + gp->mask_flags |= FILTER_GENERIC_1_UDP; + gp->val_flags |= FILTER_GENERIC_1_UDP; + udp = (struct udp_hdr *)gp->layer[FILTER_GENERIC_1_L4].mask; + udp->dst_port = 0xffff; + udp = (struct udp_hdr *)gp->layer[FILTER_GENERIC_1_L4].val; + udp->dst_port = RTE_BE16(4789); /* Match all if no spec */ if (!spec) return 0; @@ -931,6 +942,36 @@ item_stacking_valid(enum rte_flow_item_type prev_item, return 0; } +/* + * Fix up the L5 layer.. HW vxlan parsing removes vxlan header from L5. + * Instead it is in L4 following the UDP header. Append the vxlan + * pattern to L4 (udp) and shift any inner packet pattern in L5. + */ +static void +fixup_l5_layer(struct enic *enic, struct filter_generic_1 *gp, + uint8_t inner_ofst) +{ + uint8_t layer[FILTER_GENERIC_1_KEY_LEN]; + uint8_t inner; + uint8_t vxlan; + + if (!(inner_ofst > 0 && enic->vxlan)) + return; + FLOW_TRACE(); + vxlan = sizeof(struct vxlan_hdr); + memcpy(gp->layer[FILTER_GENERIC_1_L4].mask + sizeof(struct udp_hdr), + gp->layer[FILTER_GENERIC_1_L5].mask, vxlan); + memcpy(gp->layer[FILTER_GENERIC_1_L4].val + sizeof(struct udp_hdr), + gp->layer[FILTER_GENERIC_1_L5].val, vxlan); + inner = inner_ofst - vxlan; + memset(layer, 0, sizeof(layer)); + memcpy(layer, gp->layer[FILTER_GENERIC_1_L5].mask + vxlan, inner); + memcpy(gp->layer[FILTER_GENERIC_1_L5].mask, layer, sizeof(layer)); + memset(layer, 0, sizeof(layer)); + memcpy(layer, gp->layer[FILTER_GENERIC_1_L5].val + vxlan, inner); + memcpy(gp->layer[FILTER_GENERIC_1_L5].val, layer, sizeof(layer)); +} + /** * Build the intenal enic filter structure from the provided pattern. The * pattern is validated as the items are copied. 
@@ -945,6 +986,7 @@ item_stacking_valid(enum rte_flow_item_type prev_item, static int enic_copy_filter(const struct rte_flow_item pattern[], const struct enic_filter_cap *cap, + struct enic *enic, struct filter_v2 *enic_filter, struct rte_flow_error *error) { @@ -989,6 +1031,8 @@ enic_copy_filter(const struct rte_flow_item pattern[], prev_item = item->type; is_first_item = 0; } + fixup_l5_layer(enic, &enic_filter->u.generic_1, inner_ofst); + return 0; item_not_supported: @@ -1481,7 +1525,7 @@ enic_flow_parse(struct rte_eth_dev *dev, return -rte_errno; } enic_filter->type = enic->flow_filter_mode; - ret = enic_copy_filter(pattern, enic_filter_cap, + ret = enic_copy_filter(pattern, enic_filter_cap, enic, enic_filter, error); return ret; } From patchwork Thu Feb 28 07:03:14 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Hyong Youb Kim (hyonkim)" X-Patchwork-Id: 50609 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 29BB74D3A; Thu, 28 Feb 2019 08:06:15 +0100 (CET) Received: from alln-iport-7.cisco.com (alln-iport-7.cisco.com [173.37.142.94]) by dpdk.org (Postfix) with ESMTP id 3E1FF4CE4; Thu, 28 Feb 2019 08:06:13 +0100 (CET) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=cisco.com; i=@cisco.com; l=1427; q=dns/txt; s=iport; t=1551337573; x=1552547173; h=from:to:cc:subject:date:message-id:in-reply-to: references; bh=fhn+uyjtcsIF5r3lXkgilf9flSuVGStPhrFoEr2nwus=; b=j2pm+o/e5mwzzK2oL+sjlFWP7BttdxUo/263ZBdwiQVtui6CXKfdsZUE pjMmy2WpN51zxxPnPINI5+OrkGM/wBx+SNTgPe6sSIFwPjYVt5F9CN4dB zaB6l8shYjidZIxTL2zFxrLb855HAfpk4EaOMnEveTSIFWsvGQndxfzF6 Y=; X-IronPort-AV: E=Sophos;i="5.58,422,1544486400"; d="scan'208";a="240673121" Received: from alln-core-11.cisco.com ([173.36.13.133]) by alln-iport-7.cisco.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 28 Feb 2019 07:06:12 +0000 Received: from cisco.com (savbu-usnic-a.cisco.com [10.193.184.48]) by alln-core-11.cisco.com (8.15.2/8.15.2) with ESMTP id x1S76CX9030143; Thu, 28 Feb 2019 07:06:12 GMT Received: by cisco.com (Postfix, from userid 508933) id 2825620F2001; Wed, 27 Feb 2019 23:06:12 -0800 (PST) From: Hyong Youb Kim To: Ferruh Yigit Cc: dev@dpdk.org, John Daley , Hyong Youb Kim , stable@dpdk.org Date: Wed, 27 Feb 2019 23:03:14 -0800 Message-Id: <20190228070317.17002-13-hyonkim@cisco.com> X-Mailer: git-send-email 2.16.2 In-Reply-To: <20190228070317.17002-1-hyonkim@cisco.com> References: <20190228070317.17002-1-hyonkim@cisco.com> X-Outbound-SMTP-Client: 10.193.184.48, savbu-usnic-a.cisco.com X-Outbound-Node: alln-core-11.cisco.com Subject: [dpdk-dev] [PATCH 12/15] net/enic: fix an endian bug in VLAN match X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" The VLAN fields in the NIC filter use little endian. The VLAN item is in big endian, so swap bytes. 
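As an illustration (a hypothetical helper, not part of the patch), the rte_flow VLAN item carries the TCI in network byte order, while the filter's vlan fields want CPU (little-endian) order, hence the rte_be_to_cpu_16() conversions in the diff below:

    #include <rte_byteorder.h>
    #include <rte_flow.h>

    /* Convert the big-endian TCI from an rte_flow VLAN item into the
     * host-order value expected by the NIC filter fields. */
    static uint16_t vlan_tci_to_filter(const struct rte_flow_item_vlan *vlan)
    {
            return rte_be_to_cpu_16(vlan->tci);
    }

    /* e.g. a spec matching VLAN ID 100 would set .tci = RTE_BE16(100),
     * with .tci = RTE_BE16(0x0fff) in the mask to match only the ID bits. */
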
Fixes: 6ced137607d0 ("net/enic: flow API for NICs with advanced filters enabled") Cc: stable@dpdk.org Signed-off-by: Hyong Youb Kim --- drivers/net/enic/enic_flow.c | 12 ++++++++---- 1 file changed, 8 insertions(+), 4 deletions(-) diff --git a/drivers/net/enic/enic_flow.c b/drivers/net/enic/enic_flow.c index da43b31dc..b3172e7be 100644 --- a/drivers/net/enic/enic_flow.c +++ b/drivers/net/enic/enic_flow.c @@ -579,12 +579,16 @@ enic_copy_item_vlan_v2(struct copy_item_args *arg) /* Outer TPID cannot be matched */ if (eth_mask->ether_type) return ENOTSUP; + /* + * When packet matching, the VIC always compares vlan-stripped + * L2, regardless of vlan stripping settings. So, the inner type + * from vlan becomes the ether type of the eth header. + */ eth_mask->ether_type = mask->inner_type; eth_val->ether_type = spec->inner_type; - - /* Outer header. Use the vlan mask/val fields */ - gp->mask_vlan = mask->tci; - gp->val_vlan = spec->tci; + /* For TCI, use the vlan mask/val fields (little endian). */ + gp->mask_vlan = rte_be_to_cpu_16(mask->tci); + gp->val_vlan = rte_be_to_cpu_16(spec->tci); } else { /* Inner header. Mask/Val start at *inner_ofst into L5 */ if ((*inner_ofst + sizeof(struct vlan_hdr)) > From patchwork Thu Feb 28 07:03:15 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Hyong Youb Kim (hyonkim)" X-Patchwork-Id: 50610 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id A7F2E4CA6; Thu, 28 Feb 2019 08:06:34 +0100 (CET) Received: from alln-iport-6.cisco.com (alln-iport-6.cisco.com [173.37.142.93]) by dpdk.org (Postfix) with ESMTP id E667C4CA6 for ; Thu, 28 Feb 2019 08:06:32 +0100 (CET) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=cisco.com; i=@cisco.com; l=26366; q=dns/txt; s=iport; t=1551337593; x=1552547193; h=from:to:cc:subject:date:message-id:in-reply-to: references; bh=12Ye7idERSgT0gdcFl812cMzRIpyP38wqKOjf96ENJQ=; b=UHBLwTr7ptCNL7skxFY+azvzhPeL17uTkxRpfFRY4v1tNRnejwMYK+MY tCF78qd8kALnWj3HUJniSAMd9om/D4NQkxMMgBFO+UbLNoEPnToV23v0L y3Dy+5Kq3zg+UangirFurwpdHQ9RZw92u/6xWNWDkEbBtISR/V/jH3d8w I=; X-IronPort-AV: E=Sophos;i="5.58,422,1544486400"; d="scan'208";a="241773764" Received: from alln-core-5.cisco.com ([173.36.13.138]) by alln-iport-6.cisco.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 28 Feb 2019 07:06:32 +0000 Received: from cisco.com (savbu-usnic-a.cisco.com [10.193.184.48]) by alln-core-5.cisco.com (8.15.2/8.15.2) with ESMTP id x1S76VIQ022661; Thu, 28 Feb 2019 07:06:32 GMT Received: by cisco.com (Postfix, from userid 508933) id C568B20F2001; Wed, 27 Feb 2019 23:06:31 -0800 (PST) From: Hyong Youb Kim To: Ferruh Yigit Cc: dev@dpdk.org, John Daley , Hyong Youb Kim Date: Wed, 27 Feb 2019 23:03:15 -0800 Message-Id: <20190228070317.17002-14-hyonkim@cisco.com> X-Mailer: git-send-email 2.16.2 In-Reply-To: <20190228070317.17002-1-hyonkim@cisco.com> References: <20190228070317.17002-1-hyonkim@cisco.com> X-Outbound-SMTP-Client: 10.193.184.48, savbu-usnic-a.cisco.com X-Outbound-Node: alln-core-5.cisco.com Subject: [dpdk-dev] [PATCH 13/15] net/enic: fix several issues with inner packet matching X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Inner packet matching is currently 
buggy in many cases. 1. Mishandling null spec ("match any"). The copy_item functions do nothing if spec is null. This is incorrect, as all patterns should be appended to the L5 pattern buffer even for null spec (treated as all zeros). 2. Accessing null spec causing segfault. 3. Not setting protocol fields. The NIC filter API currently has no flags for "match inner IPv4, IPv6, UDP, TCP, and so on". So, the driver needs to explicitly set EtherType and IP protocol fields in the L5 pattern buffer to avoid false positives (e.g. reporting IPv6 as IPv4). Instead of keep adding "if inner, do something differently" cases to the existing copy_item functions, introduce separate functions for inner packet patterns and address the above issues in those functions. The changes to the previous outer-packet copy_item functions are mechanical, due to reduced indentation. Fixes: 6ced137607d0 ("net/enic: flow API for NICs with advanced filters enabled") Signed-off-by: Hyong Youb Kim --- drivers/net/enic/enic_flow.c | 371 ++++++++++++++++++++++++++----------------- 1 file changed, 224 insertions(+), 147 deletions(-) diff --git a/drivers/net/enic/enic_flow.c b/drivers/net/enic/enic_flow.c index b3172e7be..5924a01e3 100644 --- a/drivers/net/enic/enic_flow.c +++ b/drivers/net/enic/enic_flow.c @@ -30,11 +30,15 @@ * filter: Partially filled in NIC filter structure. * inner_ofst: If zero, this is an outer header. If non-zero, this is * the offset into L5 where the header begins. + * l2_proto_off: offset to EtherType eth or vlan header. + * l3_proto_off: offset to next protocol field in IPv4 or 6 header. */ struct copy_item_args { const struct rte_flow_item *item; struct filter_v2 *filter; uint8_t *inner_ofst; + uint8_t l2_proto_off; + uint8_t l3_proto_off; }; /* functions for copying items into enic filters */ @@ -50,6 +54,8 @@ struct enic_items { * versions, it's invalid to start the stack above layer 3. */ const u8 valid_start_item; + /* Inner packet version of copy_item. */ + enic_copy_item_fn *inner_copy_item; }; /** Filtering capabilities for various NIC and firmware versions. 
*/ @@ -86,6 +92,12 @@ static enic_copy_item_fn enic_copy_item_udp_v2; static enic_copy_item_fn enic_copy_item_tcp_v2; static enic_copy_item_fn enic_copy_item_sctp_v2; static enic_copy_item_fn enic_copy_item_vxlan_v2; +static enic_copy_item_fn enic_copy_item_inner_eth_v2; +static enic_copy_item_fn enic_copy_item_inner_vlan_v2; +static enic_copy_item_fn enic_copy_item_inner_ipv4_v2; +static enic_copy_item_fn enic_copy_item_inner_ipv6_v2; +static enic_copy_item_fn enic_copy_item_inner_udp_v2; +static enic_copy_item_fn enic_copy_item_inner_tcp_v2; static copy_action_fn enic_copy_action_v1; static copy_action_fn enic_copy_action_v2; @@ -100,6 +112,7 @@ static const struct enic_items enic_items_v1[] = { .prev_items = (const enum rte_flow_item_type[]) { RTE_FLOW_ITEM_TYPE_END, }, + .inner_copy_item = NULL, }, [RTE_FLOW_ITEM_TYPE_UDP] = { .copy_item = enic_copy_item_udp_v1, @@ -108,6 +121,7 @@ static const struct enic_items enic_items_v1[] = { RTE_FLOW_ITEM_TYPE_IPV4, RTE_FLOW_ITEM_TYPE_END, }, + .inner_copy_item = NULL, }, [RTE_FLOW_ITEM_TYPE_TCP] = { .copy_item = enic_copy_item_tcp_v1, @@ -116,6 +130,7 @@ static const struct enic_items enic_items_v1[] = { RTE_FLOW_ITEM_TYPE_IPV4, RTE_FLOW_ITEM_TYPE_END, }, + .inner_copy_item = NULL, }, }; @@ -131,6 +146,7 @@ static const struct enic_items enic_items_v2[] = { RTE_FLOW_ITEM_TYPE_UDP, RTE_FLOW_ITEM_TYPE_END, }, + .inner_copy_item = NULL, }, [RTE_FLOW_ITEM_TYPE_ETH] = { .copy_item = enic_copy_item_eth_v2, @@ -139,6 +155,7 @@ static const struct enic_items enic_items_v2[] = { RTE_FLOW_ITEM_TYPE_VXLAN, RTE_FLOW_ITEM_TYPE_END, }, + .inner_copy_item = enic_copy_item_inner_eth_v2, }, [RTE_FLOW_ITEM_TYPE_VLAN] = { .copy_item = enic_copy_item_vlan_v2, @@ -147,6 +164,7 @@ static const struct enic_items enic_items_v2[] = { RTE_FLOW_ITEM_TYPE_ETH, RTE_FLOW_ITEM_TYPE_END, }, + .inner_copy_item = enic_copy_item_inner_vlan_v2, }, [RTE_FLOW_ITEM_TYPE_IPV4] = { .copy_item = enic_copy_item_ipv4_v2, @@ -156,6 +174,7 @@ static const struct enic_items enic_items_v2[] = { RTE_FLOW_ITEM_TYPE_VLAN, RTE_FLOW_ITEM_TYPE_END, }, + .inner_copy_item = enic_copy_item_inner_ipv4_v2, }, [RTE_FLOW_ITEM_TYPE_IPV6] = { .copy_item = enic_copy_item_ipv6_v2, @@ -165,6 +184,7 @@ static const struct enic_items enic_items_v2[] = { RTE_FLOW_ITEM_TYPE_VLAN, RTE_FLOW_ITEM_TYPE_END, }, + .inner_copy_item = enic_copy_item_inner_ipv6_v2, }, [RTE_FLOW_ITEM_TYPE_UDP] = { .copy_item = enic_copy_item_udp_v2, @@ -174,6 +194,7 @@ static const struct enic_items enic_items_v2[] = { RTE_FLOW_ITEM_TYPE_IPV6, RTE_FLOW_ITEM_TYPE_END, }, + .inner_copy_item = enic_copy_item_inner_udp_v2, }, [RTE_FLOW_ITEM_TYPE_TCP] = { .copy_item = enic_copy_item_tcp_v2, @@ -183,6 +204,7 @@ static const struct enic_items enic_items_v2[] = { RTE_FLOW_ITEM_TYPE_IPV6, RTE_FLOW_ITEM_TYPE_END, }, + .inner_copy_item = enic_copy_item_inner_tcp_v2, }, [RTE_FLOW_ITEM_TYPE_SCTP] = { .copy_item = enic_copy_item_sctp_v2, @@ -192,6 +214,7 @@ static const struct enic_items enic_items_v2[] = { RTE_FLOW_ITEM_TYPE_IPV6, RTE_FLOW_ITEM_TYPE_END, }, + .inner_copy_item = NULL, }, [RTE_FLOW_ITEM_TYPE_VXLAN] = { .copy_item = enic_copy_item_vxlan_v2, @@ -200,6 +223,7 @@ static const struct enic_items enic_items_v2[] = { RTE_FLOW_ITEM_TYPE_UDP, RTE_FLOW_ITEM_TYPE_END, }, + .inner_copy_item = NULL, }, }; @@ -212,6 +236,7 @@ static const struct enic_items enic_items_v3[] = { RTE_FLOW_ITEM_TYPE_UDP, RTE_FLOW_ITEM_TYPE_END, }, + .inner_copy_item = NULL, }, [RTE_FLOW_ITEM_TYPE_ETH] = { .copy_item = enic_copy_item_eth_v2, @@ -220,6 +245,7 @@ static const 
struct enic_items enic_items_v3[] = { RTE_FLOW_ITEM_TYPE_VXLAN, RTE_FLOW_ITEM_TYPE_END, }, + .inner_copy_item = enic_copy_item_inner_eth_v2, }, [RTE_FLOW_ITEM_TYPE_VLAN] = { .copy_item = enic_copy_item_vlan_v2, @@ -228,6 +254,7 @@ static const struct enic_items enic_items_v3[] = { RTE_FLOW_ITEM_TYPE_ETH, RTE_FLOW_ITEM_TYPE_END, }, + .inner_copy_item = enic_copy_item_inner_vlan_v2, }, [RTE_FLOW_ITEM_TYPE_IPV4] = { .copy_item = enic_copy_item_ipv4_v2, @@ -237,6 +264,7 @@ static const struct enic_items enic_items_v3[] = { RTE_FLOW_ITEM_TYPE_VLAN, RTE_FLOW_ITEM_TYPE_END, }, + .inner_copy_item = enic_copy_item_inner_ipv4_v2, }, [RTE_FLOW_ITEM_TYPE_IPV6] = { .copy_item = enic_copy_item_ipv6_v2, @@ -246,6 +274,7 @@ static const struct enic_items enic_items_v3[] = { RTE_FLOW_ITEM_TYPE_VLAN, RTE_FLOW_ITEM_TYPE_END, }, + .inner_copy_item = enic_copy_item_inner_ipv6_v2, }, [RTE_FLOW_ITEM_TYPE_UDP] = { .copy_item = enic_copy_item_udp_v2, @@ -255,6 +284,7 @@ static const struct enic_items enic_items_v3[] = { RTE_FLOW_ITEM_TYPE_IPV6, RTE_FLOW_ITEM_TYPE_END, }, + .inner_copy_item = enic_copy_item_inner_udp_v2, }, [RTE_FLOW_ITEM_TYPE_TCP] = { .copy_item = enic_copy_item_tcp_v2, @@ -264,6 +294,7 @@ static const struct enic_items enic_items_v3[] = { RTE_FLOW_ITEM_TYPE_IPV6, RTE_FLOW_ITEM_TYPE_END, }, + .inner_copy_item = enic_copy_item_inner_tcp_v2, }, [RTE_FLOW_ITEM_TYPE_SCTP] = { .copy_item = enic_copy_item_sctp_v2, @@ -273,6 +304,7 @@ static const struct enic_items enic_items_v3[] = { RTE_FLOW_ITEM_TYPE_IPV6, RTE_FLOW_ITEM_TYPE_END, }, + .inner_copy_item = NULL, }, [RTE_FLOW_ITEM_TYPE_VXLAN] = { .copy_item = enic_copy_item_vxlan_v2, @@ -281,6 +313,7 @@ static const struct enic_items enic_items_v3[] = { RTE_FLOW_ITEM_TYPE_UDP, RTE_FLOW_ITEM_TYPE_END, }, + .inner_copy_item = NULL, }, }; @@ -374,7 +407,6 @@ enic_copy_item_ipv4_v1(struct copy_item_args *arg) { const struct rte_flow_item *item = arg->item; struct filter_v2 *enic_filter = arg->filter; - uint8_t *inner_ofst = arg->inner_ofst; const struct rte_flow_item_ipv4 *spec = item->spec; const struct rte_flow_item_ipv4 *mask = item->mask; struct filter_ipv4_5tuple *enic_5tup = &enic_filter->u.ipv4; @@ -385,9 +417,6 @@ enic_copy_item_ipv4_v1(struct copy_item_args *arg) FLOW_TRACE(); - if (*inner_ofst) - return ENOTSUP; - if (!mask) mask = &rte_flow_item_ipv4_mask; @@ -416,7 +445,6 @@ enic_copy_item_udp_v1(struct copy_item_args *arg) { const struct rte_flow_item *item = arg->item; struct filter_v2 *enic_filter = arg->filter; - uint8_t *inner_ofst = arg->inner_ofst; const struct rte_flow_item_udp *spec = item->spec; const struct rte_flow_item_udp *mask = item->mask; struct filter_ipv4_5tuple *enic_5tup = &enic_filter->u.ipv4; @@ -427,9 +455,6 @@ enic_copy_item_udp_v1(struct copy_item_args *arg) FLOW_TRACE(); - if (*inner_ofst) - return ENOTSUP; - if (!mask) mask = &rte_flow_item_udp_mask; @@ -459,7 +484,6 @@ enic_copy_item_tcp_v1(struct copy_item_args *arg) { const struct rte_flow_item *item = arg->item; struct filter_v2 *enic_filter = arg->filter; - uint8_t *inner_ofst = arg->inner_ofst; const struct rte_flow_item_tcp *spec = item->spec; const struct rte_flow_item_tcp *mask = item->mask; struct filter_ipv4_5tuple *enic_5tup = &enic_filter->u.ipv4; @@ -470,9 +494,6 @@ enic_copy_item_tcp_v1(struct copy_item_args *arg) FLOW_TRACE(); - if (*inner_ofst) - return ENOTSUP; - if (!mask) mask = &rte_flow_item_tcp_mask; @@ -497,12 +518,150 @@ enic_copy_item_tcp_v1(struct copy_item_args *arg) return 0; } +/* + * The common 'copy' function for all inner packet 
patterns. Patterns are + * first appended to the L5 pattern buffer. Then, since the NIC filter + * API has no special support for inner packet matching at the moment, + * we set EtherType and IP proto as necessary. + */ +static int +copy_inner_common(struct filter_generic_1 *gp, uint8_t *inner_ofst, + const void *val, const void *mask, uint8_t val_size, + uint8_t proto_off, uint16_t proto_val, uint8_t proto_size) +{ + uint8_t *l5_mask, *l5_val; + uint8_t start_off; + + /* No space left in the L5 pattern buffer. */ + start_off = *inner_ofst; + if ((start_off + val_size) > FILTER_GENERIC_1_KEY_LEN) + return ENOTSUP; + l5_mask = gp->layer[FILTER_GENERIC_1_L5].mask; + l5_val = gp->layer[FILTER_GENERIC_1_L5].val; + /* Copy the pattern into the L5 buffer. */ + if (val) { + memcpy(l5_mask + start_off, mask, val_size); + memcpy(l5_val + start_off, val, val_size); + } + /* Set the protocol field in the previous header. */ + if (proto_off) { + void *m, *v; + + m = l5_mask + proto_off; + v = l5_val + proto_off; + if (proto_size == 1) { + *(uint8_t *)m = 0xff; + *(uint8_t *)v = (uint8_t)proto_val; + } else if (proto_size == 2) { + *(uint16_t *)m = 0xffff; + *(uint16_t *)v = proto_val; + } + } + /* All inner headers land in L5 buffer even if their spec is null. */ + *inner_ofst += val_size; + return 0; +} + +static int +enic_copy_item_inner_eth_v2(struct copy_item_args *arg) +{ + const void *mask = arg->item->mask; + uint8_t *off = arg->inner_ofst; + + FLOW_TRACE(); + if (!mask) + mask = &rte_flow_item_eth_mask; + arg->l2_proto_off = *off + offsetof(struct ether_hdr, ether_type); + return copy_inner_common(&arg->filter->u.generic_1, off, + arg->item->spec, mask, sizeof(struct ether_hdr), + 0 /* no previous protocol */, 0, 0); +} + +static int +enic_copy_item_inner_vlan_v2(struct copy_item_args *arg) +{ + const void *mask = arg->item->mask; + uint8_t *off = arg->inner_ofst; + uint8_t eth_type_off; + + FLOW_TRACE(); + if (!mask) + mask = &rte_flow_item_vlan_mask; + /* Append vlan header to L5 and set ether type = TPID */ + eth_type_off = arg->l2_proto_off; + arg->l2_proto_off = *off + offsetof(struct vlan_hdr, eth_proto); + return copy_inner_common(&arg->filter->u.generic_1, off, + arg->item->spec, mask, sizeof(struct vlan_hdr), + eth_type_off, rte_cpu_to_be_16(ETHER_TYPE_VLAN), 2); +} + +static int +enic_copy_item_inner_ipv4_v2(struct copy_item_args *arg) +{ + const void *mask = arg->item->mask; + uint8_t *off = arg->inner_ofst; + + FLOW_TRACE(); + if (!mask) + mask = &rte_flow_item_ipv4_mask; + /* Append ipv4 header to L5 and set ether type = ipv4 */ + arg->l3_proto_off = *off + offsetof(struct ipv4_hdr, next_proto_id); + return copy_inner_common(&arg->filter->u.generic_1, off, + arg->item->spec, mask, sizeof(struct ipv4_hdr), + arg->l2_proto_off, rte_cpu_to_be_16(ETHER_TYPE_IPv4), 2); +} + +static int +enic_copy_item_inner_ipv6_v2(struct copy_item_args *arg) +{ + const void *mask = arg->item->mask; + uint8_t *off = arg->inner_ofst; + + FLOW_TRACE(); + if (!mask) + mask = &rte_flow_item_ipv6_mask; + /* Append ipv6 header to L5 and set ether type = ipv6 */ + arg->l3_proto_off = *off + offsetof(struct ipv6_hdr, proto); + return copy_inner_common(&arg->filter->u.generic_1, off, + arg->item->spec, mask, sizeof(struct ipv6_hdr), + arg->l2_proto_off, rte_cpu_to_be_16(ETHER_TYPE_IPv6), 2); +} + +static int +enic_copy_item_inner_udp_v2(struct copy_item_args *arg) +{ + const void *mask = arg->item->mask; + uint8_t *off = arg->inner_ofst; + + FLOW_TRACE(); + if (!mask) + mask = &rte_flow_item_udp_mask; + /* 
Append udp header to L5 and set ip proto = udp */ + return copy_inner_common(&arg->filter->u.generic_1, off, + arg->item->spec, mask, sizeof(struct udp_hdr), + arg->l3_proto_off, IPPROTO_UDP, 1); +} + +static int +enic_copy_item_inner_tcp_v2(struct copy_item_args *arg) +{ + const void *mask = arg->item->mask; + uint8_t *off = arg->inner_ofst; + + FLOW_TRACE(); + if (!mask) + mask = &rte_flow_item_tcp_mask; + /* Append tcp header to L5 and set ip proto = tcp */ + return copy_inner_common(&arg->filter->u.generic_1, off, + arg->item->spec, mask, sizeof(struct tcp_hdr), + arg->l3_proto_off, IPPROTO_TCP, 1); +} + static int enic_copy_item_eth_v2(struct copy_item_args *arg) { const struct rte_flow_item *item = arg->item; struct filter_v2 *enic_filter = arg->filter; - uint8_t *inner_ofst = arg->inner_ofst; struct ether_hdr enic_spec; struct ether_hdr enic_mask; const struct rte_flow_item_eth *spec = item->spec; @@ -530,24 +689,11 @@ enic_copy_item_eth_v2(struct copy_item_args *arg) enic_spec.ether_type = spec->type; enic_mask.ether_type = mask->type; - if (*inner_ofst == 0) { - /* outer header */ - memcpy(gp->layer[FILTER_GENERIC_1_L2].mask, &enic_mask, - sizeof(struct ether_hdr)); - memcpy(gp->layer[FILTER_GENERIC_1_L2].val, &enic_spec, - sizeof(struct ether_hdr)); - } else { - /* inner header */ - if ((*inner_ofst + sizeof(struct ether_hdr)) > - FILTER_GENERIC_1_KEY_LEN) - return ENOTSUP; - /* Offset into L5 where inner Ethernet header goes */ - memcpy(&gp->layer[FILTER_GENERIC_1_L5].mask[*inner_ofst], - &enic_mask, sizeof(struct ether_hdr)); - memcpy(&gp->layer[FILTER_GENERIC_1_L5].val[*inner_ofst], - &enic_spec, sizeof(struct ether_hdr)); - *inner_ofst += sizeof(struct ether_hdr); - } + /* outer header */ + memcpy(gp->layer[FILTER_GENERIC_1_L2].mask, &enic_mask, + sizeof(struct ether_hdr)); + memcpy(gp->layer[FILTER_GENERIC_1_L2].val, &enic_spec, + sizeof(struct ether_hdr)); return 0; } @@ -556,10 +702,11 @@ enic_copy_item_vlan_v2(struct copy_item_args *arg) { const struct rte_flow_item *item = arg->item; struct filter_v2 *enic_filter = arg->filter; - uint8_t *inner_ofst = arg->inner_ofst; const struct rte_flow_item_vlan *spec = item->spec; const struct rte_flow_item_vlan *mask = item->mask; struct filter_generic_1 *gp = &enic_filter->u.generic_1; + struct ether_hdr *eth_mask; + struct ether_hdr *eth_val; FLOW_TRACE(); @@ -570,36 +717,21 @@ enic_copy_item_vlan_v2(struct copy_item_args *arg) if (!mask) mask = &rte_flow_item_vlan_mask; - if (*inner_ofst == 0) { - struct ether_hdr *eth_mask = - (void *)gp->layer[FILTER_GENERIC_1_L2].mask; - struct ether_hdr *eth_val = - (void *)gp->layer[FILTER_GENERIC_1_L2].val; - - /* Outer TPID cannot be matched */ - if (eth_mask->ether_type) - return ENOTSUP; - /* - * When packet matching, the VIC always compares vlan-stripped - * L2, regardless of vlan stripping settings. So, the inner type - * from vlan becomes the ether type of the eth header. - */ - eth_mask->ether_type = mask->inner_type; - eth_val->ether_type = spec->inner_type; - /* For TCI, use the vlan mask/val fields (little endian). */ - gp->mask_vlan = rte_be_to_cpu_16(mask->tci); - gp->val_vlan = rte_be_to_cpu_16(spec->tci); - } else { - /* Inner header. 
Mask/Val start at *inner_ofst into L5 */ - if ((*inner_ofst + sizeof(struct vlan_hdr)) > - FILTER_GENERIC_1_KEY_LEN) - return ENOTSUP; - memcpy(&gp->layer[FILTER_GENERIC_1_L5].mask[*inner_ofst], - mask, sizeof(struct vlan_hdr)); - memcpy(&gp->layer[FILTER_GENERIC_1_L5].val[*inner_ofst], - spec, sizeof(struct vlan_hdr)); - *inner_ofst += sizeof(struct vlan_hdr); - } + eth_mask = (void *)gp->layer[FILTER_GENERIC_1_L2].mask; + eth_val = (void *)gp->layer[FILTER_GENERIC_1_L2].val; + /* Outer TPID cannot be matched */ + if (eth_mask->ether_type) + return ENOTSUP; + /* + * When packet matching, the VIC always compares vlan-stripped + * L2, regardless of vlan stripping settings. So, the inner type + * from vlan becomes the ether type of the eth header. + */ + eth_mask->ether_type = mask->inner_type; + eth_val->ether_type = spec->inner_type; + /* For TCI, use the vlan mask/val fields (little endian). */ + gp->mask_vlan = rte_be_to_cpu_16(mask->tci); + gp->val_vlan = rte_be_to_cpu_16(spec->tci); return 0; } @@ -608,40 +740,27 @@ enic_copy_item_ipv4_v2(struct copy_item_args *arg) { const struct rte_flow_item *item = arg->item; struct filter_v2 *enic_filter = arg->filter; - uint8_t *inner_ofst = arg->inner_ofst; const struct rte_flow_item_ipv4 *spec = item->spec; const struct rte_flow_item_ipv4 *mask = item->mask; struct filter_generic_1 *gp = &enic_filter->u.generic_1; FLOW_TRACE(); - if (*inner_ofst == 0) { - /* Match IPv4 */ - gp->mask_flags |= FILTER_GENERIC_1_IPV4; - gp->val_flags |= FILTER_GENERIC_1_IPV4; + /* Match IPv4 */ + gp->mask_flags |= FILTER_GENERIC_1_IPV4; + gp->val_flags |= FILTER_GENERIC_1_IPV4; - /* Match all if no spec */ - if (!spec) - return 0; + /* Match all if no spec */ + if (!spec) + return 0; - if (!mask) - mask = &rte_flow_item_ipv4_mask; + if (!mask) + mask = &rte_flow_item_ipv4_mask; - memcpy(gp->layer[FILTER_GENERIC_1_L3].mask, &mask->hdr, - sizeof(struct ipv4_hdr)); - memcpy(gp->layer[FILTER_GENERIC_1_L3].val, &spec->hdr, - sizeof(struct ipv4_hdr)); - } else { - /* Inner IPv4 header. Mask/Val start at *inner_ofst into L5 */ - if ((*inner_ofst + sizeof(struct ipv4_hdr)) > - FILTER_GENERIC_1_KEY_LEN) - return ENOTSUP; - memcpy(&gp->layer[FILTER_GENERIC_1_L5].mask[*inner_ofst], - mask, sizeof(struct ipv4_hdr)); - memcpy(&gp->layer[FILTER_GENERIC_1_L5].val[*inner_ofst], - spec, sizeof(struct ipv4_hdr)); - *inner_ofst += sizeof(struct ipv4_hdr); - } + memcpy(gp->layer[FILTER_GENERIC_1_L3].mask, &mask->hdr, + sizeof(struct ipv4_hdr)); + memcpy(gp->layer[FILTER_GENERIC_1_L3].val, &spec->hdr, + sizeof(struct ipv4_hdr)); return 0; } @@ -650,7 +769,6 @@ enic_copy_item_ipv6_v2(struct copy_item_args *arg) { const struct rte_flow_item *item = arg->item; struct filter_v2 *enic_filter = arg->filter; - uint8_t *inner_ofst = arg->inner_ofst; const struct rte_flow_item_ipv6 *spec = item->spec; const struct rte_flow_item_ipv6 *mask = item->mask; struct filter_generic_1 *gp = &enic_filter->u.generic_1; @@ -668,22 +786,10 @@ enic_copy_item_ipv6_v2(struct copy_item_args *arg) if (!mask) mask = &rte_flow_item_ipv6_mask; - if (*inner_ofst == 0) { - memcpy(gp->layer[FILTER_GENERIC_1_L3].mask, &mask->hdr, - sizeof(struct ipv6_hdr)); - memcpy(gp->layer[FILTER_GENERIC_1_L3].val, &spec->hdr, - sizeof(struct ipv6_hdr)); - } else { - /* Inner IPv6 header. 
Mask/Val start at *inner_ofst into L5 */ - if ((*inner_ofst + sizeof(struct ipv6_hdr)) > - FILTER_GENERIC_1_KEY_LEN) - return ENOTSUP; - memcpy(&gp->layer[FILTER_GENERIC_1_L5].mask[*inner_ofst], - mask, sizeof(struct ipv6_hdr)); - memcpy(&gp->layer[FILTER_GENERIC_1_L5].val[*inner_ofst], - spec, sizeof(struct ipv6_hdr)); - *inner_ofst += sizeof(struct ipv6_hdr); - } + memcpy(gp->layer[FILTER_GENERIC_1_L3].mask, &mask->hdr, + sizeof(struct ipv6_hdr)); + memcpy(gp->layer[FILTER_GENERIC_1_L3].val, &spec->hdr, + sizeof(struct ipv6_hdr)); return 0; } @@ -692,7 +798,6 @@ enic_copy_item_udp_v2(struct copy_item_args *arg) { const struct rte_flow_item *item = arg->item; struct filter_v2 *enic_filter = arg->filter; - uint8_t *inner_ofst = arg->inner_ofst; const struct rte_flow_item_udp *spec = item->spec; const struct rte_flow_item_udp *mask = item->mask; struct filter_generic_1 *gp = &enic_filter->u.generic_1; @@ -710,22 +815,10 @@ enic_copy_item_udp_v2(struct copy_item_args *arg) if (!mask) mask = &rte_flow_item_udp_mask; - if (*inner_ofst == 0) { - memcpy(gp->layer[FILTER_GENERIC_1_L4].mask, &mask->hdr, - sizeof(struct udp_hdr)); - memcpy(gp->layer[FILTER_GENERIC_1_L4].val, &spec->hdr, - sizeof(struct udp_hdr)); - } else { - /* Inner IPv6 header. Mask/Val start at *inner_ofst into L5 */ - if ((*inner_ofst + sizeof(struct udp_hdr)) > - FILTER_GENERIC_1_KEY_LEN) - return ENOTSUP; - memcpy(&gp->layer[FILTER_GENERIC_1_L5].mask[*inner_ofst], - mask, sizeof(struct udp_hdr)); - memcpy(&gp->layer[FILTER_GENERIC_1_L5].val[*inner_ofst], - spec, sizeof(struct udp_hdr)); - *inner_ofst += sizeof(struct udp_hdr); - } + memcpy(gp->layer[FILTER_GENERIC_1_L4].mask, &mask->hdr, + sizeof(struct udp_hdr)); + memcpy(gp->layer[FILTER_GENERIC_1_L4].val, &spec->hdr, + sizeof(struct udp_hdr)); return 0; } @@ -734,7 +827,6 @@ enic_copy_item_tcp_v2(struct copy_item_args *arg) { const struct rte_flow_item *item = arg->item; struct filter_v2 *enic_filter = arg->filter; - uint8_t *inner_ofst = arg->inner_ofst; const struct rte_flow_item_tcp *spec = item->spec; const struct rte_flow_item_tcp *mask = item->mask; struct filter_generic_1 *gp = &enic_filter->u.generic_1; @@ -752,22 +844,10 @@ enic_copy_item_tcp_v2(struct copy_item_args *arg) if (!mask) return ENOTSUP; - if (*inner_ofst == 0) { - memcpy(gp->layer[FILTER_GENERIC_1_L4].mask, &mask->hdr, - sizeof(struct tcp_hdr)); - memcpy(gp->layer[FILTER_GENERIC_1_L4].val, &spec->hdr, - sizeof(struct tcp_hdr)); - } else { - /* Inner IPv6 header. 
Mask/Val start at *inner_ofst into L5 */ - if ((*inner_ofst + sizeof(struct tcp_hdr)) > - FILTER_GENERIC_1_KEY_LEN) - return ENOTSUP; - memcpy(&gp->layer[FILTER_GENERIC_1_L5].mask[*inner_ofst], - mask, sizeof(struct tcp_hdr)); - memcpy(&gp->layer[FILTER_GENERIC_1_L5].val[*inner_ofst], - spec, sizeof(struct tcp_hdr)); - *inner_ofst += sizeof(struct tcp_hdr); - } + memcpy(gp->layer[FILTER_GENERIC_1_L4].mask, &mask->hdr, + sizeof(struct tcp_hdr)); + memcpy(gp->layer[FILTER_GENERIC_1_L4].val, &spec->hdr, + sizeof(struct tcp_hdr)); return 0; } @@ -776,7 +856,6 @@ enic_copy_item_sctp_v2(struct copy_item_args *arg) { const struct rte_flow_item *item = arg->item; struct filter_v2 *enic_filter = arg->filter; - uint8_t *inner_ofst = arg->inner_ofst; const struct rte_flow_item_sctp *spec = item->spec; const struct rte_flow_item_sctp *mask = item->mask; struct filter_generic_1 *gp = &enic_filter->u.generic_1; @@ -785,9 +864,6 @@ enic_copy_item_sctp_v2(struct copy_item_args *arg) FLOW_TRACE(); - if (*inner_ofst) - return ENOTSUP; - /* * The NIC filter API has no flags for "match sctp", so explicitly set * the protocol number in the IP pattern. @@ -838,9 +914,6 @@ enic_copy_item_vxlan_v2(struct copy_item_args *arg) FLOW_TRACE(); - if (*inner_ofst) - return EINVAL; - /* * The NIC filter API has no flags for "match vxlan". Set UDP port to * avoid false positives. @@ -1000,6 +1073,7 @@ enic_copy_filter(const struct rte_flow_item pattern[], enum rte_flow_item_type prev_item; const struct enic_items *item_info; struct copy_item_args args; + enic_copy_item_fn *copy_fn; u8 is_first_item = 1; FLOW_TRACE(); @@ -1017,7 +1091,8 @@ enic_copy_filter(const struct rte_flow_item pattern[], item_info = &cap->item_info[item->type]; if (item->type > cap->max_item_type || - item_info->copy_item == NULL) { + item_info->copy_item == NULL || + (inner_ofst > 0 && item_info->inner_copy_item == NULL)) { rte_flow_error_set(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ITEM, NULL, "Unsupported item."); @@ -1029,7 +1104,9 @@ enic_copy_filter(const struct rte_flow_item pattern[], goto stacking_error; args.item = item; - ret = item_info->copy_item(&args); + copy_fn = inner_ofst > 0 ? 
item_info->inner_copy_item : + item_info->copy_item; + ret = copy_fn(&args); if (ret) goto item_not_supported; prev_item = item->type; From patchwork Thu Feb 28 07:03:16 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Hyong Youb Kim (hyonkim)" X-Patchwork-Id: 50611 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 73D664CA9; Thu, 28 Feb 2019 08:06:59 +0100 (CET) Received: from alln-iport-3.cisco.com (alln-iport-3.cisco.com [173.37.142.90]) by dpdk.org (Postfix) with ESMTP id DBAAB4CA7 for ; Thu, 28 Feb 2019 08:06:58 +0100 (CET) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=cisco.com; i=@cisco.com; l=2839; q=dns/txt; s=iport; t=1551337619; x=1552547219; h=from:to:cc:subject:date:message-id:in-reply-to: references; bh=d8QT7EHzgXjeVnT4dapdnkvpxehgoVGwwVTVjzluz3g=; b=Tu2nK4W+caMSSyfOn831TwyGWU1L1qhz9uPejxgDW3dPJjBeXhaBTK+4 bSK/92JLTPfXfE4SlpO4DbSyU/eHZ+31rfKnIi/lInDjTpedqxYRxjRcr 97b6i4gwSZq+RR9m2PiJbNvgFUmkQrjVdWakL7H++vBwL7+Ci02hTQAqW I=; X-IronPort-AV: E=Sophos;i="5.58,422,1544486400"; d="scan'208";a="242654321" Received: from alln-core-8.cisco.com ([173.36.13.141]) by alln-iport-3.cisco.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 28 Feb 2019 07:06:58 +0000 Received: from cisco.com (savbu-usnic-a.cisco.com [10.193.184.48]) by alln-core-8.cisco.com (8.15.2/8.15.2) with ESMTP id x1S76vAh018139; Thu, 28 Feb 2019 07:06:57 GMT Received: by cisco.com (Postfix, from userid 508933) id 5F16D20F2001; Wed, 27 Feb 2019 23:06:57 -0800 (PST) From: Hyong Youb Kim To: Ferruh Yigit Cc: dev@dpdk.org, John Daley , Hyong Youb Kim Date: Wed, 27 Feb 2019 23:03:16 -0800 Message-Id: <20190228070317.17002-15-hyonkim@cisco.com> X-Mailer: git-send-email 2.16.2 In-Reply-To: <20190228070317.17002-1-hyonkim@cisco.com> References: <20190228070317.17002-1-hyonkim@cisco.com> X-Outbound-SMTP-Client: 10.193.184.48, savbu-usnic-a.cisco.com X-Outbound-Node: alln-core-8.cisco.com Subject: [dpdk-dev] [PATCH 14/15] doc: update enic guide X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Minor changes to text around flow API. - Add vlan to the supported items. - Describe VLAN stripping's effect on ETH/VLAN match - Mention limitations on MARK, RAW, RSS, and PASSTHRU Signed-off-by: Hyong Youb Kim --- doc/guides/nics/enic.rst | 16 ++++++++++++++-- 1 file changed, 14 insertions(+), 2 deletions(-) diff --git a/doc/guides/nics/enic.rst b/doc/guides/nics/enic.rst index bc38f51aa..8b6be7ef4 100644 --- a/doc/guides/nics/enic.rst +++ b/doc/guides/nics/enic.rst @@ -247,7 +247,7 @@ Generic Flow API is supported. The baseline support is: in the pattern. - Attributes: ingress - - Items: eth, ipv4, ipv6, udp, tcp, vxlan, inner eth, ipv4, ipv6, udp, tcp + - Items: eth, vlan, ipv4, ipv6, udp, tcp, vxlan, inner eth, vlan, ipv4, ipv6, udp, tcp - Actions: queue and void - Selectors: 'is', 'spec' and 'mask'. 'last' is not supported - In total, up to 64 bytes of mask is allowed across all headers @@ -255,7 +255,7 @@ Generic Flow API is supported. 
The baseline support is: - **1300 and later series VICS with advanced filters enabled** - Attributes: ingress - - Items: eth, ipv4, ipv6, udp, tcp, vxlan, inner eth, ipv4, ipv6, udp, tcp + - Items: eth, vlan, ipv4, ipv6, udp, tcp, vxlan, inner eth, vlan, ipv4, ipv6, udp, tcp - Actions: queue, mark, drop, flag and void - Selectors: 'is', 'spec' and 'mask'. 'last' is not supported - In total, up to 64 bytes of mask is allowed across all headers @@ -266,6 +266,12 @@ Generic Flow API is supported. The baseline support is: - Action: count +The VIC performs packet matching after applying VLAN strip. If VLAN +stripping is enabled, EtherType in the ETH item corresponds to the +stripped VLAN header's EtherType. Stripping does not affect the VLAN +item. TCI and EtherType in the VLAN item are matched against those in +the (stripped) VLAN header whether stripping is enabled or disabled. + More features may be added in future firmware and new versions of the VIC. Please refer to the release notes. @@ -450,6 +456,12 @@ PKT_RX_VLAN_STRIPPED mbuf flags would not be set. This mode is enabled with the 1000 for 1300 series VICs). Filters are checked for matching in the order they were added. Since there currently is no grouping or priority support, 'catch-all' filters should be added last. + - The supported range of IDs for the 'MARK' action is 0 - 0xFFFD. + - RSS and PASSTHRU actions only support "receive normally". They are limited + to supporting MARK + RSS and PASSTHRU + MARK to allow the application to mark + packets and then receive them normally. These require 1400 series VIC adapters + and latest firmware. + - RAW items are limited to matching UDP tunnel headers like VXLAN. - **Statistics** From patchwork Thu Feb 28 07:03:17 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Hyong Youb Kim (hyonkim)" X-Patchwork-Id: 50612 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 635F45A4A; Thu, 28 Feb 2019 08:07:16 +0100 (CET) Received: from rcdn-iport-6.cisco.com (rcdn-iport-6.cisco.com [173.37.86.77]) by dpdk.org (Postfix) with ESMTP id 1A4905A4A for ; Thu, 28 Feb 2019 08:07:11 +0100 (CET) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=cisco.com; i=@cisco.com; l=792; q=dns/txt; s=iport; t=1551337632; x=1552547232; h=from:to:cc:subject:date:message-id:in-reply-to: references; bh=CbQ/jdBQbRqsY2zbOiX5aeOf45uHLnpe4HkqFJVD7gM=; b=OTFFdSsxRc0JjRUto16vMzrS4HwFXr0JXwwdXTb6CtQTXjpEN4RRkP3o ep7vmJnAA8z6jwozt4NbIex0GbdGGYidiIJHYDNOKtyPSVitDHf3/GudL ow5rqzCzfYFq7h3TQFa8/pSSGTx4X7asEhUybBRddj5tb2/AAv/SWjwdG I=; X-IronPort-AV: E=Sophos;i="5.58,422,1544486400"; d="scan'208";a="526001980" Received: from alln-core-12.cisco.com ([173.36.13.134]) by rcdn-iport-6.cisco.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 28 Feb 2019 07:07:11 +0000 Received: from cisco.com (savbu-usnic-a.cisco.com [10.193.184.48]) by alln-core-12.cisco.com (8.15.2/8.15.2) with ESMTP id x1S77AFd023068; Thu, 28 Feb 2019 07:07:11 GMT Received: by cisco.com (Postfix, from userid 508933) id BE0CE20F2001; Wed, 27 Feb 2019 23:07:10 -0800 (PST) From: Hyong Youb Kim To: Ferruh Yigit Cc: dev@dpdk.org, John Daley , Hyong Youb Kim Date: Wed, 27 Feb 2019 23:03:17 -0800 Message-Id: <20190228070317.17002-16-hyonkim@cisco.com> X-Mailer: git-send-email 2.16.2 In-Reply-To: <20190228070317.17002-1-hyonkim@cisco.com> References: 
<20190228070317.17002-1-hyonkim@cisco.com> X-Outbound-SMTP-Client: 10.193.184.48, savbu-usnic-a.cisco.com X-Outbound-Node: alln-core-12.cisco.com Subject: [dpdk-dev] [PATCH 15/15] doc: update release notes for enic X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Signed-off-by: Hyong Youb Kim --- doc/guides/rel_notes/release_19_05.rst | 5 +++++ 1 file changed, 5 insertions(+) diff --git a/doc/guides/rel_notes/release_19_05.rst b/doc/guides/rel_notes/release_19_05.rst index 4a3e2a7f3..9226dc5e9 100644 --- a/doc/guides/rel_notes/release_19_05.rst +++ b/doc/guides/rel_notes/release_19_05.rst @@ -77,6 +77,11 @@ New Features which includes the directory name, lib name, filenames, makefile, docs, macros, functions, structs and any other strings in the code. +* **Updated the enic driver.** + + * Fixed several flow (director) bugs related to MARK, SCTP, VLAN, VXLAN, and + inner packet matching. + * Added limited support for RAW, RSS, and PASSTHRU. Removed Items -------------