From patchwork Mon Oct 12 10:43:00 2020
X-Patchwork-Id: 80342
From: Dekel Peled
To: orika@nvidia.com, thomas@monjalon.net, ferruh.yigit@intel.com,
 arybchenko@solarflare.com, konstantin.ananyev@intel.com,
 olivier.matz@6wind.com, wenzhuo.lu@intel.com, beilei.xing@intel.com,
 bernard.iremonger@intel.com, matan@nvidia.com, shahafs@nvidia.com,
 viacheslavo@nvidia.com
Cc: dev@dpdk.org
Date: Mon, 12 Oct 2020 13:43:00 +0300
Subject: [dpdk-dev] [PATCH v5 01/11] ethdev: add extensions attributes to IPv6 item

With the current DPDK implementation, an application cannot easily match
IPv6 packets based on the extension headers they carry. The 'Next Header'
field in the IPv6 header indicates the type of the first extension header
only; subsequent extension headers cannot be identified by inspecting the
IPv6 header alone. As a result, the existence or absence of specific
extension headers can't be used for packet matching.

For example, fragmented IPv6 packets contain a dedicated extension header
(implemented in a later patch of this series), while non-fragmented packets
don't contain it. The current implementation gives an application no
suitable way to match non-fragmented IPv6 packets: matching on the
'Next Header' field is not sufficient, since additional extension headers
might be present in the same packet. Matching fragmented IPv6 packets poses
the same difficulty.

This patch implements the update as detailed in RFC [1]. A set of
additional flags is added to the IPv6 item struct. Each flag indicates the
existence of one defined extension header type, providing a simple way to
identify which extensions exist in the packet header.

Continuing the above example, fragmented packets can be identified using
the flag indicating existence of the fragment extension header:
- To match on non-fragmented IPv6 packets, use frag_ext_exist 0.
- To match on fragmented IPv6 packets, use frag_ext_exist 1.
- To match on any IPv6 packets, the frag_ext_exist field should not be
  specified for match.
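As an illustration (not part of the patch), a minimal sketch of how an
application could use the new field through the rte_flow API; the port id,
attributes and drop action here are arbitrary assumptions:

#include <rte_flow.h>

/* Drop all non-fragmented IPv6 packets received on the given port. */
static struct rte_flow *
drop_non_fragmented_ipv6(uint16_t port_id, struct rte_flow_error *err)
{
	struct rte_flow_attr attr = { .ingress = 1 };
	/* frag_ext_exist spec 0, mask 1: the fragment extension header
	 * must be absent for a packet to match. */
	struct rte_flow_item_ipv6 ipv6_spec = { .frag_ext_exist = 0 };
	struct rte_flow_item_ipv6 ipv6_mask = { .frag_ext_exist = 1 };
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH },
		{ .type = RTE_FLOW_ITEM_TYPE_IPV6,
		  .spec = &ipv6_spec, .mask = &ipv6_mask },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_DROP },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};

	return rte_flow_create(port_id, &attr, pattern, actions, err);
}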
[1] https://mails.dpdk.org/archives/dev/2020-August/177257.html Signed-off-by: Dekel Peled Acked-by: Ori Kam --- doc/guides/prog_guide/rte_flow.rst | 22 +++++++++++++++++++--- lib/librte_ethdev/rte_flow.h | 25 +++++++++++++++++++++++-- 2 files changed, 42 insertions(+), 5 deletions(-) diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst index 119b128..ae090db 100644 --- a/doc/guides/prog_guide/rte_flow.rst +++ b/doc/guides/prog_guide/rte_flow.rst @@ -946,11 +946,27 @@ Item: ``IPV6`` Matches an IPv6 header. -Note: IPv6 options are handled by dedicated pattern items, see `Item: -IPV6_EXT`_. +Dedicated flags indicate existence of specific extension headers. +Every type of extension header can use a dedicated pattern item, or +the generic `Item: IPV6_EXT`_. +To match on packets containing a specific extension header, an application +should match on the dedicated flag set to 1. +To match on packets not containing a specific extension header, an application +should match on the dedicated flag clear to 0. +In case application doesn't care about the existence of a specific extension +header, it should not specify the dedicated flag for matching. - ``hdr``: IPv6 header definition (``rte_ip.h``). -- Default ``mask`` matches source and destination addresses only. +- ``hop_ext_exist``: Hop-by-Hop Options extension header exists. +- ``rout_ext_exist``: Routing extension header exists. +- ``frag_ext_exist``: Fragment extension header exists. +- ``auth_ext_exist``: Authentication extension header exists. +- ``esp_ext_exist``: Encapsulation Security Payload extension header exists. +- ``dest_ext_exist``: Destination Options extension header exists. +- ``mobil_ext_exist``: Mobility extension header exists. +- ``hip_ext_exist``: Host Identity Protocol extension header exists. +- ``shim6_ext_exist``: Shim6 Protocol extension header exists. +- Default ``mask`` matches ``hdr`` source and destination addresses only. Item: ``ICMP`` ^^^^^^^^^^^^^^ diff --git a/lib/librte_ethdev/rte_flow.h b/lib/librte_ethdev/rte_flow.h index da8bfa5..5b5bed2 100644 --- a/lib/librte_ethdev/rte_flow.h +++ b/lib/librte_ethdev/rte_flow.h @@ -792,11 +792,32 @@ struct rte_flow_item_ipv4 { * * Matches an IPv6 header. * - * Note: IPv6 options are handled by dedicated pattern items, see - * RTE_FLOW_ITEM_TYPE_IPV6_EXT. + * Dedicated flags indicate existence of specific extension headers. + * Every type of extension header can use a dedicated pattern item, or + * the generic item RTE_FLOW_ITEM_TYPE_IPV6_EXT. */ struct rte_flow_item_ipv6 { struct rte_ipv6_hdr hdr; /**< IPv6 header definition. */ + uint32_t hop_ext_exist:1; + /**< Hop-by-Hop Options extension header exists. */ + uint32_t rout_ext_exist:1; + /**< Routing extension header exists. */ + uint32_t frag_ext_exist:1; + /**< Fragment extension header exists. */ + uint32_t auth_ext_exist:1; + /**< Authentication extension header exists. */ + uint32_t esp_ext_exist:1; + /**< Encapsulation Security Payload extension header exists. */ + uint32_t dest_ext_exist:1; + /**< Destination Options extension header exists. */ + uint32_t mobil_ext_exist:1; + /**< Mobility extension header exists. */ + uint32_t hip_ext_exist:1; + /**< Host Identity Protocol extension header exists. */ + uint32_t shim6_ext_exist:1; + /**< Shim6 Protocol extension header exists. */ + uint32_t reserved:23; + /**< Reserved for future extension headers, must be zero. */ }; /** Default mask for RTE_FLOW_ITEM_TYPE_IPV6. 
 */

From patchwork Mon Oct 12 10:43:01 2020
X-Patchwork-Id: 80349
From: Dekel Peled
Date: Mon, 12 Oct 2020 13:43:01 +0300
Subject: [dpdk-dev] [PATCH v5 02/11] ethdev: add IPv6 fragment extension header item

Applications handling fragmented IPv6 packets need to match on the IPv6
fragment extension header, in order to identify the order and location of
the fragments in a packet.

This patch introduces the IPv6 fragment extension header item, as proposed
in [1].

The relevant definitions are moved from lib/librte_ip_frag/rte_ip_frag.h
to lib/librte_net/rte_ip.h, as they are needed for IPv6 header handling.
Struct ipv6_extension_fragment is renamed to rte_ipv6_fragment_ext to
follow the common naming convention. (A usage sketch of the relocated
definitions is shown after the diff below.)

No default mask is defined, since all fields are optional.

[1] http://mails.dpdk.org/archives/dev/2020-March/160255.html

Signed-off-by: Dekel Peled
Acked-by: Ori Kam
---
 doc/guides/prog_guide/rte_flow.rst | 16 ++++++++++++++--
 lib/librte_ethdev/rte_flow.c       |  1 +
 lib/librte_ethdev/rte_flow.h       | 20 ++++++++++++++++++++
 lib/librte_ip_frag/rte_ip_frag.h   | 26 ++------------------------
 lib/librte_net/rte_ip.h            | 26 ++++++++++++++++++++++++--
 5 files changed, 61 insertions(+), 28 deletions(-)

diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index ae090db..02b1d58 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -947,8 +947,8 @@ Item: ``IPV6``
 Matches an IPv6 header.

 Dedicated flags indicate existence of specific extension headers.
-Every type of extension header can use a dedicated pattern item, or
-the generic `Item: IPV6_EXT`_.
+Every type of extension header can use a dedicated pattern item,
+for example `Item: IPV6_FRAG_EXT`_, or the generic `Item: IPV6_EXT`_.
To match on packets containing a specific extension header, an application should match on the dedicated flag set to 1. To match on packets not containing a specific extension header, an application @@ -1193,6 +1193,18 @@ Normally preceded by any of: - `Item: IPV6`_ - `Item: IPV6_EXT`_ +Item: ``IPV6_FRAG_EXT`` +^^^^^^^^^^^^^^^^^^^^^^^ + +Matches the presence of IPv6 fragment extension header. + +- ``hdr``: IPv6 fragment extension header definition (``rte_ip.h``). + +Normally preceded by any of: + +- `Item: IPV6`_ +- `Item: IPV6_EXT`_ + Item: ``ICMP6`` ^^^^^^^^^^^^^^^ diff --git a/lib/librte_ethdev/rte_flow.c b/lib/librte_ethdev/rte_flow.c index 8d1b279..6239fbf 100644 --- a/lib/librte_ethdev/rte_flow.c +++ b/lib/librte_ethdev/rte_flow.c @@ -72,6 +72,7 @@ struct rte_flow_desc_data { MK_FLOW_ITEM(VXLAN_GPE, sizeof(struct rte_flow_item_vxlan_gpe)), MK_FLOW_ITEM(ARP_ETH_IPV4, sizeof(struct rte_flow_item_arp_eth_ipv4)), MK_FLOW_ITEM(IPV6_EXT, sizeof(struct rte_flow_item_ipv6_ext)), + MK_FLOW_ITEM(IPV6_FRAG_EXT, sizeof(struct rte_flow_item_ipv6_frag_ext)), MK_FLOW_ITEM(ICMP6, sizeof(struct rte_flow_item_icmp6)), MK_FLOW_ITEM(ICMP6_ND_NS, sizeof(struct rte_flow_item_icmp6_nd_ns)), MK_FLOW_ITEM(ICMP6_ND_NA, sizeof(struct rte_flow_item_icmp6_nd_na)), diff --git a/lib/librte_ethdev/rte_flow.h b/lib/librte_ethdev/rte_flow.h index 5b5bed2..85376a3 100644 --- a/lib/librte_ethdev/rte_flow.h +++ b/lib/librte_ethdev/rte_flow.h @@ -537,6 +537,12 @@ enum rte_flow_item_type { */ RTE_FLOW_ITEM_TYPE_ECPRI, + /** + * Matches the presence of IPv6 fragment extension header. + * + * See struct rte_flow_item_ipv6_frag_ext. + */ + RTE_FLOW_ITEM_TYPE_IPV6_FRAG_EXT, }; /** @@ -1188,6 +1194,20 @@ struct rte_flow_item_ipv6_ext rte_flow_item_ipv6_ext_mask = { #endif /** + * RTE_FLOW_ITEM_TYPE_IPV6_FRAG_EXT + * + * Matches the presence of IPv6 fragment extension header. + * + * Preceded by any of: + * + * - RTE_FLOW_ITEM_TYPE_IPV6 + * - RTE_FLOW_ITEM_TYPE_IPV6_EXT + */ +struct rte_flow_item_ipv6_frag_ext { + struct rte_ipv6_fragment_ext hdr; +}; + +/** * RTE_FLOW_ITEM_TYPE_ICMP6 * * Matches any ICMPv6 header. diff --git a/lib/librte_ip_frag/rte_ip_frag.h b/lib/librte_ip_frag/rte_ip_frag.h index 66edd7e..0bfe64b 100644 --- a/lib/librte_ip_frag/rte_ip_frag.h +++ b/lib/librte_ip_frag/rte_ip_frag.h @@ -110,30 +110,8 @@ struct rte_ip_frag_tbl { __extension__ struct ip_frag_pkt pkt[0]; /**< hash table. */ }; -/** IPv6 fragment extension header */ -#define RTE_IPV6_EHDR_MF_SHIFT 0 -#define RTE_IPV6_EHDR_MF_MASK 1 -#define RTE_IPV6_EHDR_FO_SHIFT 3 -#define RTE_IPV6_EHDR_FO_MASK (~((1 << RTE_IPV6_EHDR_FO_SHIFT) - 1)) -#define RTE_IPV6_EHDR_FO_ALIGN (1 << RTE_IPV6_EHDR_FO_SHIFT) - -#define RTE_IPV6_FRAG_USED_MASK \ - (RTE_IPV6_EHDR_MF_MASK | RTE_IPV6_EHDR_FO_MASK) - -#define RTE_IPV6_GET_MF(x) ((x) & RTE_IPV6_EHDR_MF_MASK) -#define RTE_IPV6_GET_FO(x) ((x) >> RTE_IPV6_EHDR_FO_SHIFT) - -#define RTE_IPV6_SET_FRAG_DATA(fo, mf) \ - (((fo) & RTE_IPV6_EHDR_FO_MASK) | ((mf) & RTE_IPV6_EHDR_MF_MASK)) - -struct ipv6_extension_fragment { - uint8_t next_header; /**< Next header type */ - uint8_t reserved; /**< Reserved */ - uint16_t frag_data; /**< All fragmentation data */ - uint32_t id; /**< Packet ID */ -} __rte_packed; - - +/* struct ipv6_extension_fragment moved to librte_net/rte_ip.h and renamed. */ +#define ipv6_extension_fragment rte_ipv6_fragment_ext /** * Create a new IP fragmentation table. 
diff --git a/lib/librte_net/rte_ip.h b/lib/librte_net/rte_ip.h
index bb55ebb..fbf5575 100644
--- a/lib/librte_net/rte_ip.h
+++ b/lib/librte_net/rte_ip.h
@@ -461,8 +461,30 @@ struct rte_ipv6_hdr {
 	return (uint16_t)cksum;
 }

-/* IPv6 fragmentation header size */
-#define RTE_IPV6_FRAG_HDR_SIZE 8
+/** IPv6 fragment extension header. */
+#define RTE_IPV6_EHDR_MF_SHIFT	0
+#define RTE_IPV6_EHDR_MF_MASK	1
+#define RTE_IPV6_EHDR_FO_SHIFT	3
+#define RTE_IPV6_EHDR_FO_MASK	(~((1 << RTE_IPV6_EHDR_FO_SHIFT) - 1))
+#define RTE_IPV6_EHDR_FO_ALIGN	(1 << RTE_IPV6_EHDR_FO_SHIFT)
+
+#define RTE_IPV6_FRAG_USED_MASK (RTE_IPV6_EHDR_MF_MASK | RTE_IPV6_EHDR_FO_MASK)
+
+#define RTE_IPV6_GET_MF(x) ((x) & RTE_IPV6_EHDR_MF_MASK)
+#define RTE_IPV6_GET_FO(x) ((x) >> RTE_IPV6_EHDR_FO_SHIFT)
+
+#define RTE_IPV6_SET_FRAG_DATA(fo, mf)	\
+	(((fo) & RTE_IPV6_EHDR_FO_MASK) | ((mf) & RTE_IPV6_EHDR_MF_MASK))
+
+struct rte_ipv6_fragment_ext {
+	uint8_t next_header;	/**< Next header type */
+	uint8_t reserved;	/**< Reserved */
+	rte_be16_t frag_data;	/**< All fragmentation data */
+	rte_be32_t id;		/**< Packet ID */
+} __rte_packed;
+
+/* IPv6 fragment extension header size */
+#define RTE_IPV6_FRAG_HDR_SIZE	sizeof(struct rte_ipv6_fragment_ext)

 /**
  * Parse next IPv6 header extension
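As an illustration (not part of the patch), a minimal sketch using the
relocated struct and macros to decode a received fragment extension header;
the function name and the printed text are arbitrary:

#include <stdio.h>
#include <rte_byteorder.h>
#include <rte_ip.h>

/* frag_ext is assumed to point to a fragment extension header inside a
 * parsed packet. */
static void
print_frag_info(const struct rte_ipv6_fragment_ext *frag_ext)
{
	/* frag_data is big-endian on the wire; convert before using the
	 * RTE_IPV6_GET_* accessors. */
	uint16_t frag_data = rte_be_to_cpu_16(frag_ext->frag_data);

	printf("MF=%u, fragment offset=%u (in 8-byte units)\n",
	       (unsigned int)RTE_IPV6_GET_MF(frag_data),
	       (unsigned int)RTE_IPV6_GET_FO(frag_data));
}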
From patchwork Mon Oct 12 10:43:02 2020
X-Patchwork-Id: 80345
From: Dekel Peled
Date: Mon, 12 Oct 2020 13:43:02 +0300
Subject: [dpdk-dev] [PATCH v5 03/11] app/testpmd: support IPv4 fragments

This patch updates the testpmd CLI to support the fragment_offset field
of the IPv4 header item.

To match on non-fragmented IPv4 packets, use the pattern:
... ipv4 fragment_offset spec 0 fragment_offset mask 0x3fff ...

To match on fragmented IPv4 packets, use the pattern:
... ipv4 fragment_offset spec 1 fragment_offset last 0x3fff fragment_offset mask 0x3fff ...
(Use the full available range 1 to 0x3fff to include all possible values.)

To match on any IPv4 packets, fragmented and non-fragmented alike, the
fragment_offset field should not be specified for match. (A complete
command example is shown after the diff below.)

Signed-off-by: Dekel Peled
Acked-by: Ori Kam
---
 app/test-pmd/cmdline_flow.c | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index 6e04d53..a9bf29f 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -129,6 +129,7 @@ enum index {
 	ITEM_VLAN_INNER_TYPE,
 	ITEM_IPV4,
 	ITEM_IPV4_TOS,
+	ITEM_IPV4_FRAGMENT_OFFSET,
 	ITEM_IPV4_TTL,
 	ITEM_IPV4_PROTO,
 	ITEM_IPV4_SRC,
@@ -873,6 +874,7 @@ struct parse_action_priv {
 static const enum index item_ipv4[] = {
 	ITEM_IPV4_TOS,
+	ITEM_IPV4_FRAGMENT_OFFSET,
 	ITEM_IPV4_TTL,
 	ITEM_IPV4_PROTO,
 	ITEM_IPV4_SRC,
@@ -2097,6 +2099,13 @@ static int comp_set_raw_index(struct context *, const struct token *,
 		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_ipv4,
 					     hdr.type_of_service)),
 	},
+	[ITEM_IPV4_FRAGMENT_OFFSET] = {
+		.name = "fragment_offset",
+		.help = "fragmentation flags and fragment offset",
+		.next = NEXT(item_ipv4, NEXT_ENTRY(UNSIGNED), item_param),
+		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_ipv4,
+					     hdr.fragment_offset)),
+	},
 	[ITEM_IPV4_TTL] = {
 		.name = "ttl",
 		.help = "time to live",
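For illustration, the fragmented-packet match above as a complete testpmd
command; port 0 and the queue action are placeholder choices, not part of
the patch:

testpmd> flow create 0 ingress pattern eth / ipv4 fragment_offset spec 1 fragment_offset last 0x3fff fragment_offset mask 0x3fff / end actions queue index 1 / end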
From patchwork Mon Oct 12 10:43:03 2020
X-Patchwork-Id: 80346
From: Dekel Peled
Date: Mon, 12 Oct 2020 13:43:03 +0300
Subject: [dpdk-dev] [PATCH v5 04/11] app/testpmd: support IPv6 fragments

The rte_flow update following RFC [1] introduced the frag_ext_exist field
for the IPv6 header item, used to indicate match on fragmented or
non-fragmented packets. This patch updates the testpmd CLI to support the
new field.

To match on non-fragmented IPv6 packets, use the pattern:
... ipv6 frag_ext_exist spec 0 frag_ext_exist mask 1 ...

To match on fragmented IPv6 packets, use the pattern:
... ipv6 frag_ext_exist spec 1 frag_ext_exist mask 1 ...

To match on any IPv6 packets, the frag_ext_exist field should not be
specified for match. (A complete command example is shown after the diff
below.)

[1] https://mails.dpdk.org/archives/dev/2020-August/177257.html

Signed-off-by: Dekel Peled
Acked-by: Ori Kam
---
 app/test-pmd/cmdline_flow.c | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index a9bf29f..b078095 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -141,6 +141,7 @@ enum index {
 	ITEM_IPV6_HOP,
 	ITEM_IPV6_SRC,
 	ITEM_IPV6_DST,
+	ITEM_IPV6_FRAG_EXT_EXIST,
 	ITEM_ICMP,
 	ITEM_ICMP_TYPE,
 	ITEM_ICMP_CODE,
@@ -890,6 +891,7 @@ struct parse_action_priv {
 	ITEM_IPV6_HOP,
 	ITEM_IPV6_SRC,
 	ITEM_IPV6_DST,
+	ITEM_IPV6_FRAG_EXT_EXIST,
 	ITEM_NEXT,
 	ZERO,
 };
@@ -2185,6 +2187,13 @@ static int comp_set_raw_index(struct context *, const struct token *,
 		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_ipv6,
 					     hdr.dst_addr)),
 	},
+	[ITEM_IPV6_FRAG_EXT_EXIST] = {
+		.name = "frag_ext_exist",
+		.help = "fragment packet attribute",
+		.next = NEXT(item_ipv6, NEXT_ENTRY(UNSIGNED), item_param),
+		.args = ARGS(ARGS_ENTRY_BF(struct rte_flow_item_ipv6,
+					   frag_ext_exist, 1)),
+	},
 	[ITEM_ICMP] = {
 		.name = "icmp",
 		.help = "match ICMP header",
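For illustration, the non-fragmented match above as a complete testpmd
command; port 0 and the queue action are placeholder choices, not part of
the patch:

testpmd> flow create 0 ingress pattern eth / ipv6 frag_ext_exist spec 0 frag_ext_exist mask 1 / end actions queue index 1 / end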
From patchwork Mon Oct 12 10:43:04 2020
X-Patchwork-Id: 80344
From: Dekel Peled
Date: Mon, 12 Oct 2020 13:43:04 +0300
Subject: [dpdk-dev] [PATCH v5 05/11] app/testpmd: support IPv6 fragment extension item

The rte_flow update following RFC [1] added the rte_flow item
ipv6_frag_ext to ethdev. This patch updates the testpmd CLI to support the
new item and its fields.

To match on fragmented IPv6 packets, this item is added to the pattern:
... ipv6 / ipv6_frag_ext ...
(A complete command example is shown after the diff below.)

[1] http://mails.dpdk.org/archives/dev/2020-March/160255.html

Signed-off-by: Dekel Peled
Acked-by: Ori Kam
---
 app/test-pmd/cmdline_flow.c | 35 +++++++++++++++++++++++++++++++++++
 1 file changed, 35 insertions(+)

diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index b078095..1f800eb 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -195,6 +195,9 @@ enum index {
 	ITEM_ARP_ETH_IPV4_TPA,
 	ITEM_IPV6_EXT,
 	ITEM_IPV6_EXT_NEXT_HDR,
+	ITEM_IPV6_FRAG_EXT,
+	ITEM_IPV6_FRAG_EXT_NEXT_HDR,
+	ITEM_IPV6_FRAG_EXT_FRAG_DATA,
 	ITEM_ICMP6,
 	ITEM_ICMP6_TYPE,
 	ITEM_ICMP6_CODE,
@@ -786,6 +789,7 @@ struct parse_action_priv {
 	ITEM_VXLAN_GPE,
 	ITEM_ARP_ETH_IPV4,
 	ITEM_IPV6_EXT,
+	ITEM_IPV6_FRAG_EXT,
 	ITEM_ICMP6,
 	ITEM_ICMP6_ND_NS,
 	ITEM_ICMP6_ND_NA,
@@ -1007,6 +1011,13 @@ struct parse_action_priv {
 	ZERO,
 };

+static const enum index item_ipv6_frag_ext[] = {
+	ITEM_IPV6_FRAG_EXT_NEXT_HDR,
+	ITEM_IPV6_FRAG_EXT_FRAG_DATA,
+	ITEM_NEXT,
+	ZERO,
+};
+
 static const enum index item_icmp6[] = {
 	ITEM_ICMP6_TYPE,
 	ITEM_ICMP6_CODE,
@@ -2578,6 +2589,30 @@ static int comp_set_raw_index(struct context *, const struct token *,
 		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_ipv6_ext,
 					     next_hdr)),
 	},
+	[ITEM_IPV6_FRAG_EXT] = {
+		.name = "ipv6_frag_ext",
+		.help = "match presence of IPv6 fragment extension header",
+		.priv = PRIV_ITEM(IPV6_FRAG_EXT,
+				  sizeof(struct rte_flow_item_ipv6_frag_ext)),
+		.next = NEXT(item_ipv6_frag_ext),
+		.call = parse_vc,
+	},
+	[ITEM_IPV6_FRAG_EXT_NEXT_HDR] = {
+		.name = "next_hdr",
+		.help = "next header",
+		.next = NEXT(item_ipv6_frag_ext, NEXT_ENTRY(UNSIGNED),
+			     item_param),
+		.args = ARGS(ARGS_ENTRY(struct rte_flow_item_ipv6_frag_ext,
+					hdr.next_header)),
+	},
+	[ITEM_IPV6_FRAG_EXT_FRAG_DATA] = {
+		.name = "frag_data",
+		.help = "Fragment flags and offset",
+		.next = NEXT(item_ipv6_frag_ext, NEXT_ENTRY(UNSIGNED),
+			     item_param),
+		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_ipv6_frag_ext,
+					     hdr.frag_data)),
+	},
 	[ITEM_ICMP6] = {
 		.name = "icmp6",
 		.help = "match any ICMPv6 header",
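For illustration, a complete testpmd command using the new item; port 0
and the drop action are placeholder choices, not part of the patch:

testpmd> flow create 0 ingress pattern eth / ipv6 / ipv6_frag_ext / end actions drop / end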
From patchwork Mon Oct 12 10:43:05 2020
X-Patchwork-Id: 80343
From: Dekel Peled
Date: Mon, 12 Oct 2020 13:43:05 +0300
Subject: [dpdk-dev] [PATCH v5 06/11] net/mlx5: remove handling of ICMP fragmented packets

Commit [1] forced a match on the 'frag' bit for ICMP flows, with mask 1
and value 0. A previous patch in this series added support for matching on
fragmented and non-fragmented packets using the L3 items, so this setting
is now redundant. This patch removes the changes done in [1].

[1] commit 85407db9f60d ("net/mlx5: fix matching for ICMP fragments")

Signed-off-by: Dekel Peled
Acked-by: Ori Kam
---
 drivers/net/mlx5/mlx5_flow_dv.c | 12 ------------
 1 file changed, 12 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index 2bbfcea..c0fb311 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -7345,12 +7345,6 @@ struct field_modify_info modify_tcp[] = {
 		return;
 	if (!icmp6_m)
 		icmp6_m = &rte_flow_item_icmp6_mask;
-	/*
-	 * Force flow only to match the non-fragmented IPv6 ICMPv6 packets.
-	 * If only the protocol is specified, no need to match the frag.
-	 */
-	MLX5_SET(fte_match_set_lyr_2_4, headers_m, frag, 1);
-	MLX5_SET(fte_match_set_lyr_2_4, headers_v, frag, 0);
 	MLX5_SET(fte_match_set_misc3, misc3_m, icmpv6_type, icmp6_m->type);
 	MLX5_SET(fte_match_set_misc3, misc3_v, icmpv6_type,
 		 icmp6_v->type & icmp6_m->type);
@@ -7400,12 +7394,6 @@ struct field_modify_info modify_tcp[] = {
 		return;
 	if (!icmp_m)
 		icmp_m = &rte_flow_item_icmp_mask;
-	/*
-	 * Force flow only to match the non-fragmented IPv4 ICMP packets.
-	 * If only the protocol is specified, no need to match the frag.
-	 */
-	MLX5_SET(fte_match_set_lyr_2_4, headers_m, frag, 1);
-	MLX5_SET(fte_match_set_lyr_2_4, headers_v, frag, 0);
 	MLX5_SET(fte_match_set_misc3, misc3_m, icmp_type, icmp_m->hdr.icmp_type);
 	MLX5_SET(fte_match_set_misc3, misc3_v, icmp_type,
 		 icmp_v->hdr.icmp_type & icmp_m->hdr.icmp_type);

From patchwork Mon Oct 12 10:43:06 2020
X-Patchwork-Id: 80348
From: Dekel Peled
Date: Mon, 12 Oct 2020 13:43:06 +0300
Subject: [dpdk-dev] [PATCH v5 07/11] net/mlx5: support match on IPv4 fragment packets

This patch adds support to the MLX5 PMD for matching on fragmented and
non-fragmented IPv4 packets, using the fragment_offset field of the IPv4
header item. (A sketch of an accepted range match is shown after the diff
below.)

Signed-off-by: Dekel Peled
Acked-by: Ori Kam
---
 drivers/net/mlx5/mlx5_flow.c       |  48 ++++++++----
 drivers/net/mlx5/mlx5_flow.h       |  10 +++
 drivers/net/mlx5/mlx5_flow_dv.c    | 156 +++++++++++++++++++++++++++++++------
 drivers/net/mlx5/mlx5_flow_verbs.c |   9 ++-
 4 files changed, 178 insertions(+), 45 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index 0a54818..38cfd0f 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -800,6 +800,8 @@ struct mlx5_flow_tunnel_info {
 *   Bit-masks covering supported fields by the NIC to compare with user mask.
 * @param[in] size
 *   Bit-masks size in bytes.
+ * @param[in] range_accepted
+ *   True if range of values is accepted for specific fields, false otherwise.
 * @param[out] error
 *   Pointer to error structure.
* @@ -811,6 +813,7 @@ struct mlx5_flow_tunnel_info { const uint8_t *mask, const uint8_t *nic_mask, unsigned int size, + bool range_accepted, struct rte_flow_error *error) { unsigned int i; @@ -828,7 +831,7 @@ struct mlx5_flow_tunnel_info { RTE_FLOW_ERROR_TYPE_ITEM, item, "mask/last without a spec is not" " supported"); - if (item->spec && item->last) { + if (item->spec && item->last && !range_accepted) { uint8_t spec[size]; uint8_t last[size]; unsigned int i; @@ -1603,7 +1606,8 @@ struct mlx5_flow_tunnel_info { ret = mlx5_flow_item_acceptable (item, (const uint8_t *)mask, (const uint8_t *)&rte_flow_item_icmp6_mask, - sizeof(struct rte_flow_item_icmp6), error); + sizeof(struct rte_flow_item_icmp6), + MLX5_ITEM_RANGE_NOT_ACCEPTED, error); if (ret < 0) return ret; return 0; @@ -1661,7 +1665,8 @@ struct mlx5_flow_tunnel_info { ret = mlx5_flow_item_acceptable (item, (const uint8_t *)mask, (const uint8_t *)&nic_mask, - sizeof(struct rte_flow_item_icmp), error); + sizeof(struct rte_flow_item_icmp), + MLX5_ITEM_RANGE_NOT_ACCEPTED, error); if (ret < 0) return ret; return 0; @@ -1716,7 +1721,7 @@ struct mlx5_flow_tunnel_info { ret = mlx5_flow_item_acceptable(item, (const uint8_t *)mask, (const uint8_t *)&nic_mask, sizeof(struct rte_flow_item_eth), - error); + MLX5_ITEM_RANGE_NOT_ACCEPTED, error); return ret; } @@ -1770,7 +1775,7 @@ struct mlx5_flow_tunnel_info { ret = mlx5_flow_item_acceptable(item, (const uint8_t *)mask, (const uint8_t *)&nic_mask, sizeof(struct rte_flow_item_vlan), - error); + MLX5_ITEM_RANGE_NOT_ACCEPTED, error); if (ret) return ret; if (!tunnel && mask->tci != RTE_BE16(0x0fff)) { @@ -1822,6 +1827,8 @@ struct mlx5_flow_tunnel_info { * @param[in] acc_mask * Acceptable mask, if NULL default internal default mask * will be used to check whether item fields are supported. + * @param[in] range_accepted + * True if range of values is accepted for specific fields, false otherwise. * @param[out] error * Pointer to error structure. * @@ -1834,6 +1841,7 @@ struct mlx5_flow_tunnel_info { uint64_t last_item, uint16_t ether_type, const struct rte_flow_item_ipv4 *acc_mask, + bool range_accepted, struct rte_flow_error *error) { const struct rte_flow_item_ipv4 *mask = item->mask; @@ -1904,7 +1912,7 @@ struct mlx5_flow_tunnel_info { acc_mask ? (const uint8_t *)acc_mask : (const uint8_t *)&nic_mask, sizeof(struct rte_flow_item_ipv4), - error); + range_accepted, error); if (ret < 0) return ret; return 0; @@ -2003,7 +2011,7 @@ struct mlx5_flow_tunnel_info { acc_mask ? 
(const uint8_t *)acc_mask : (const uint8_t *)&nic_mask, sizeof(struct rte_flow_item_ipv6), - error); + MLX5_ITEM_RANGE_NOT_ACCEPTED, error); if (ret < 0) return ret; return 0; @@ -2058,7 +2066,8 @@ struct mlx5_flow_tunnel_info { ret = mlx5_flow_item_acceptable (item, (const uint8_t *)mask, (const uint8_t *)&rte_flow_item_udp_mask, - sizeof(struct rte_flow_item_udp), error); + sizeof(struct rte_flow_item_udp), MLX5_ITEM_RANGE_NOT_ACCEPTED, + error); if (ret < 0) return ret; return 0; @@ -2113,7 +2122,8 @@ struct mlx5_flow_tunnel_info { ret = mlx5_flow_item_acceptable (item, (const uint8_t *)mask, (const uint8_t *)flow_mask, - sizeof(struct rte_flow_item_tcp), error); + sizeof(struct rte_flow_item_tcp), MLX5_ITEM_RANGE_NOT_ACCEPTED, + error); if (ret < 0) return ret; return 0; @@ -2167,7 +2177,7 @@ struct mlx5_flow_tunnel_info { (item, (const uint8_t *)mask, (const uint8_t *)&rte_flow_item_vxlan_mask, sizeof(struct rte_flow_item_vxlan), - error); + MLX5_ITEM_RANGE_NOT_ACCEPTED, error); if (ret < 0) return ret; if (spec) { @@ -2238,7 +2248,7 @@ struct mlx5_flow_tunnel_info { (item, (const uint8_t *)mask, (const uint8_t *)&rte_flow_item_vxlan_gpe_mask, sizeof(struct rte_flow_item_vxlan_gpe), - error); + MLX5_ITEM_RANGE_NOT_ACCEPTED, error); if (ret < 0) return ret; if (spec) { @@ -2312,7 +2322,7 @@ struct mlx5_flow_tunnel_info { ret = mlx5_flow_item_acceptable (item, (const uint8_t *)mask, (const uint8_t *)&gre_key_default_mask, - sizeof(rte_be32_t), error); + sizeof(rte_be32_t), MLX5_ITEM_RANGE_NOT_ACCEPTED, error); return ret; } @@ -2364,7 +2374,8 @@ struct mlx5_flow_tunnel_info { ret = mlx5_flow_item_acceptable (item, (const uint8_t *)mask, (const uint8_t *)&nic_mask, - sizeof(struct rte_flow_item_gre), error); + sizeof(struct rte_flow_item_gre), MLX5_ITEM_RANGE_NOT_ACCEPTED, + error); if (ret < 0) return ret; #ifndef HAVE_MLX5DV_DR @@ -2439,7 +2450,8 @@ struct mlx5_flow_tunnel_info { ret = mlx5_flow_item_acceptable (item, (const uint8_t *)mask, (const uint8_t *)&nic_mask, - sizeof(struct rte_flow_item_geneve), error); + sizeof(struct rte_flow_item_geneve), + MLX5_ITEM_RANGE_NOT_ACCEPTED, error); if (ret) return ret; if (spec) { @@ -2522,7 +2534,8 @@ struct mlx5_flow_tunnel_info { ret = mlx5_flow_item_acceptable (item, (const uint8_t *)mask, (const uint8_t *)&rte_flow_item_mpls_mask, - sizeof(struct rte_flow_item_mpls), error); + sizeof(struct rte_flow_item_mpls), + MLX5_ITEM_RANGE_NOT_ACCEPTED, error); if (ret < 0) return ret; return 0; @@ -2577,7 +2590,8 @@ struct mlx5_flow_tunnel_info { ret = mlx5_flow_item_acceptable (item, (const uint8_t *)mask, (const uint8_t *)&rte_flow_item_nvgre_mask, - sizeof(struct rte_flow_item_nvgre), error); + sizeof(struct rte_flow_item_nvgre), + MLX5_ITEM_RANGE_NOT_ACCEPTED, error); if (ret < 0) return ret; return 0; @@ -2671,7 +2685,7 @@ struct mlx5_flow_tunnel_info { acc_mask ? (const uint8_t *)acc_mask : (const uint8_t *)&nic_mask, sizeof(struct rte_flow_item_ecpri), - error); + MLX5_ITEM_RANGE_NOT_ACCEPTED, error); } /* Allocate unique ID for the split Q/RSS subflows. */ diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h index 279daf2..1e30c93 100644 --- a/drivers/net/mlx5/mlx5_flow.h +++ b/drivers/net/mlx5/mlx5_flow.h @@ -330,6 +330,14 @@ enum mlx5_feature_name { #define MLX5_ENCAPSULATION_DECISION_SIZE (sizeof(struct rte_flow_item_eth) + \ sizeof(struct rte_flow_item_ipv4)) +/* IPv4 fragment_offset field contains relevant data in bits 2 to 15. 
*/ +#define MLX5_IPV4_FRAG_OFFSET_MASK \ + (RTE_IPV4_HDR_OFFSET_MASK | RTE_IPV4_HDR_MF_FLAG) + +/* Specific item's fields can accept a range of values (using spec and last). */ +#define MLX5_ITEM_RANGE_NOT_ACCEPTED false +#define MLX5_ITEM_RANGE_ACCEPTED true + /* Software header modify action numbers of a flow. */ #define MLX5_ACT_NUM_MDF_IPV4 1 #define MLX5_ACT_NUM_MDF_IPV6 4 @@ -985,6 +993,7 @@ int mlx5_flow_item_acceptable(const struct rte_flow_item *item, const uint8_t *mask, const uint8_t *nic_mask, unsigned int size, + bool range_accepted, struct rte_flow_error *error); int mlx5_flow_validate_item_eth(const struct rte_flow_item *item, uint64_t item_flags, @@ -1002,6 +1011,7 @@ int mlx5_flow_validate_item_ipv4(const struct rte_flow_item *item, uint64_t last_item, uint16_t ether_type, const struct rte_flow_item_ipv4 *acc_mask, + bool range_accepted, struct rte_flow_error *error); int mlx5_flow_validate_item_ipv6(const struct rte_flow_item *item, uint64_t item_flags, diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c index c0fb311..08e6f74 100644 --- a/drivers/net/mlx5/mlx5_flow_dv.c +++ b/drivers/net/mlx5/mlx5_flow_dv.c @@ -1418,7 +1418,7 @@ struct field_modify_info modify_tcp[] = { ret = mlx5_flow_item_acceptable(item, (const uint8_t *)mask, (const uint8_t *)&nic_mask, sizeof(struct rte_flow_item_mark), - error); + MLX5_ITEM_RANGE_NOT_ACCEPTED, error); if (ret < 0) return ret; return 0; @@ -1494,7 +1494,7 @@ struct field_modify_info modify_tcp[] = { ret = mlx5_flow_item_acceptable(item, (const uint8_t *)mask, (const uint8_t *)&nic_mask, sizeof(struct rte_flow_item_meta), - error); + MLX5_ITEM_RANGE_NOT_ACCEPTED, error); return ret; } @@ -1547,7 +1547,7 @@ struct field_modify_info modify_tcp[] = { ret = mlx5_flow_item_acceptable(item, (const uint8_t *)mask, (const uint8_t *)&nic_mask, sizeof(struct rte_flow_item_tag), - error); + MLX5_ITEM_RANGE_NOT_ACCEPTED, error); if (ret < 0) return ret; if (mask->index != 0xff) @@ -1618,7 +1618,7 @@ struct field_modify_info modify_tcp[] = { (item, (const uint8_t *)mask, (const uint8_t *)&rte_flow_item_port_id_mask, sizeof(struct rte_flow_item_port_id), - error); + MLX5_ITEM_RANGE_NOT_ACCEPTED, error); if (ret) return ret; if (!spec) @@ -1691,7 +1691,7 @@ struct field_modify_info modify_tcp[] = { ret = mlx5_flow_item_acceptable(item, (const uint8_t *)mask, (const uint8_t *)&nic_mask, sizeof(struct rte_flow_item_vlan), - error); + MLX5_ITEM_RANGE_NOT_ACCEPTED, error); if (ret) return ret; if (!tunnel && mask->tci != RTE_BE16(0x0fff)) { @@ -1778,11 +1778,126 @@ struct field_modify_info modify_tcp[] = { RTE_FLOW_ERROR_TYPE_ITEM, item, "Match is supported for GTP" " flags only"); - return mlx5_flow_item_acceptable - (item, (const uint8_t *)mask, - (const uint8_t *)&nic_mask, - sizeof(struct rte_flow_item_gtp), - error); + return mlx5_flow_item_acceptable(item, (const uint8_t *)mask, + (const uint8_t *)&nic_mask, + sizeof(struct rte_flow_item_gtp), + MLX5_ITEM_RANGE_NOT_ACCEPTED, error); +} + +/** + * Validate IPV4 item. + * Use existing validation function mlx5_flow_validate_item_ipv4(), and + * add specific validation of fragment_offset field, + * + * @param[in] item + * Item specification. + * @param[in] item_flags + * Bit-fields that holds the items detected until now. + * @param[out] error + * Pointer to error structure. + * + * @return + * 0 on success, a negative errno value otherwise and rte_errno is set. 
+ */ +static int +flow_dv_validate_item_ipv4(const struct rte_flow_item *item, + uint64_t item_flags, + uint64_t last_item, + uint16_t ether_type, + struct rte_flow_error *error) +{ + int ret; + const struct rte_flow_item_ipv4 *spec = item->spec; + const struct rte_flow_item_ipv4 *last = item->last; + const struct rte_flow_item_ipv4 *mask = item->mask; + rte_be16_t fragment_offset_spec = 0; + rte_be16_t fragment_offset_last = 0; + const struct rte_flow_item_ipv4 nic_ipv4_mask = { + .hdr = { + .src_addr = RTE_BE32(0xffffffff), + .dst_addr = RTE_BE32(0xffffffff), + .type_of_service = 0xff, + .fragment_offset = RTE_BE16(0xffff), + .next_proto_id = 0xff, + .time_to_live = 0xff, + }, + }; + + ret = mlx5_flow_validate_item_ipv4(item, item_flags, last_item, + ether_type, &nic_ipv4_mask, + MLX5_ITEM_RANGE_ACCEPTED, error); + if (ret < 0) + return ret; + if (spec && mask) + fragment_offset_spec = spec->hdr.fragment_offset & + mask->hdr.fragment_offset; + if (!fragment_offset_spec) + return 0; + /* + * spec and mask are valid, enforce using full mask to make sure the + * complete value is used correctly. + */ + if ((mask->hdr.fragment_offset & RTE_BE16(MLX5_IPV4_FRAG_OFFSET_MASK)) + != RTE_BE16(MLX5_IPV4_FRAG_OFFSET_MASK)) + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM_MASK, + item, "must use full mask for" + " fragment_offset"); + /* + * Match on fragment_offset 0x2000 means MF is 1 and frag-offset is 0, + * indicating this is 1st fragment of fragmented packet. + * This is not yet supported in MLX5, return appropriate error message. + */ + if (fragment_offset_spec == RTE_BE16(RTE_IPV4_HDR_MF_FLAG)) + return rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_ITEM, item, + "match on first fragment not " + "supported"); + if (fragment_offset_spec && !last) + return rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_ITEM, item, + "specified value not supported"); + /* spec and last are valid, validate the specified range. */ + fragment_offset_last = last->hdr.fragment_offset & + mask->hdr.fragment_offset; + /* + * Match on fragment_offset spec 0x2001 and last 0x3fff + * means MF is 1 and frag-offset is > 0. + * This packet is fragment 2nd and onward, excluding last. + * This is not yet supported in MLX5, return appropriate + * error message. + */ + if (fragment_offset_spec == RTE_BE16(RTE_IPV4_HDR_MF_FLAG + 1) && + fragment_offset_last == RTE_BE16(MLX5_IPV4_FRAG_OFFSET_MASK)) + return rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_ITEM_LAST, + last, "match on following " + "fragments not supported"); + /* + * Match on fragment_offset spec 0x0001 and last 0x1fff + * means MF is 0 and frag-offset is > 0. + * This packet is last fragment of fragmented packet. + * This is not yet supported in MLX5, return appropriate + * error message. + */ + if (fragment_offset_spec == RTE_BE16(1) && + fragment_offset_last == RTE_BE16(RTE_IPV4_HDR_OFFSET_MASK)) + return rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_ITEM_LAST, + last, "match on last " + "fragment not supported"); + /* + * Match on fragment_offset spec 0x0001 and last 0x3fff + * means MF and/or frag-offset is not 0. + * This is a fragmented packet. + * Other range values are invalid and rejected. 
+	 */
+	if (!(fragment_offset_spec == RTE_BE16(1) &&
+	      fragment_offset_last == RTE_BE16(MLX5_IPV4_FRAG_OFFSET_MASK)))
+		return rte_flow_error_set(error, ENOTSUP,
+					  RTE_FLOW_ERROR_TYPE_ITEM_LAST, last,
+					  "specified range not supported");
+	return 0;
 }

 /**
@@ -5084,15 +5199,6 @@ struct field_modify_info modify_tcp[] = {
 			.dst_port = RTE_BE16(UINT16_MAX),
 		}
 	};
-	const struct rte_flow_item_ipv4 nic_ipv4_mask = {
-		.hdr = {
-			.src_addr = RTE_BE32(0xffffffff),
-			.dst_addr = RTE_BE32(0xffffffff),
-			.type_of_service = 0xff,
-			.next_proto_id = 0xff,
-			.time_to_live = 0xff,
-		},
-	};
 	const struct rte_flow_item_ipv6 nic_ipv6_mask = {
 		.hdr = {
 			.src_addr =
@@ -5192,11 +5298,9 @@ struct field_modify_info modify_tcp[] = {
 		case RTE_FLOW_ITEM_TYPE_IPV4:
 			mlx5_flow_tunnel_ip_check(items, next_protocol,
 						  &item_flags, &tunnel);
-			ret = mlx5_flow_validate_item_ipv4(items, item_flags,
-							   last_item,
-							   ether_type,
-							   &nic_ipv4_mask,
-							   error);
+			ret = flow_dv_validate_item_ipv4(items, item_flags,
+							 last_item, ether_type,
+							 error);
 			if (ret < 0)
 				return ret;
 			last_item = tunnel ? MLX5_FLOW_LAYER_INNER_L3_IPV4 :
@@ -6296,6 +6400,10 @@ struct field_modify_info modify_tcp[] = {
 					 ipv4_m->hdr.time_to_live);
 		MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_ttl_hoplimit,
 			 ipv4_v->hdr.time_to_live & ipv4_m->hdr.time_to_live);
+		MLX5_SET(fte_match_set_lyr_2_4, headers_m, frag,
+			 !!(ipv4_m->hdr.fragment_offset));
+		MLX5_SET(fte_match_set_lyr_2_4, headers_v, frag,
+			 !!(ipv4_v->hdr.fragment_offset & ipv4_m->hdr.fragment_offset));
 	}

 /**
diff --git a/drivers/net/mlx5/mlx5_flow_verbs.c b/drivers/net/mlx5/mlx5_flow_verbs.c
index 62c18b8..276bcb5 100644
--- a/drivers/net/mlx5/mlx5_flow_verbs.c
+++ b/drivers/net/mlx5/mlx5_flow_verbs.c
@@ -1312,10 +1312,11 @@
 			}
 			break;
 		case RTE_FLOW_ITEM_TYPE_IPV4:
-			ret = mlx5_flow_validate_item_ipv4(items, item_flags,
-							   last_item,
-							   ether_type, NULL,
-							   error);
+			ret = mlx5_flow_validate_item_ipv4
+						(items, item_flags,
+						 last_item, ether_type, NULL,
+						 MLX5_ITEM_RANGE_NOT_ACCEPTED,
+						 error);
 			if (ret < 0)
 				return ret;
 			last_item = tunnel ? MLX5_FLOW_LAYER_INNER_L3_IPV4 :
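As an illustration (not part of the patch), a pattern item the new
validation logic accepts, matching any fragmented IPv4 packet via a
spec/last range, as described in the testpmd patch earlier in this series:

#include <rte_byteorder.h>
#include <rte_flow.h>

/* fragment_offset in [1, 0x3fff]: MF set and/or offset non-zero, i.e.
 * any fragment. The PMD requires the full 14-bit mask here. */
static const struct rte_flow_item_ipv4 ipv4_spec = {
	.hdr = { .fragment_offset = RTE_BE16(1) },
};
static const struct rte_flow_item_ipv4 ipv4_last = {
	.hdr = { .fragment_offset = RTE_BE16(0x3fff) },
};
static const struct rte_flow_item_ipv4 ipv4_mask = {
	.hdr = { .fragment_offset = RTE_BE16(0x3fff) },
};
static const struct rte_flow_item ipv4_frag_item = {
	.type = RTE_FLOW_ITEM_TYPE_IPV4,
	.spec = &ipv4_spec,
	.last = &ipv4_last,
	.mask = &ipv4_mask,
};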
From patchwork Mon Oct 12 10:43:07 2020
X-Patchwork-Id: 80347
From: Dekel Peled
Date: Mon, 12 Oct 2020 13:43:07 +0300
Subject: [dpdk-dev] [PATCH v5 08/11] net/mlx5: support match on IPv6 fragment packets

This patch adds support to the MLX5 PMD for matching on fragmented and
non-fragmented IPv6 packets, using the new frag_ext_exist field added to
rte_flow following RFC [1]. (A usage sketch is shown after the diff below.)

[1] https://mails.dpdk.org/archives/dev/2020-August/177257.html

Signed-off-by: Dekel Peled
Acked-by: Ori Kam
---
 drivers/net/mlx5/mlx5_flow_dv.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index 08e6f74..49bfa5f 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -5211,6 +5211,7 @@ struct field_modify_info modify_tcp[] = {
 			.proto = 0xff,
 			.hop_limits = 0xff,
 		},
+		.frag_ext_exist = 1,
 	};
 	const struct rte_flow_item_ecpri nic_ecpri_mask = {
 		.hdr = {
@@ -6519,6 +6520,10 @@ struct field_modify_info modify_tcp[] = {
 					 ipv6_m->hdr.hop_limits);
 		MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_ttl_hoplimit,
 			 ipv6_v->hdr.hop_limits & ipv6_m->hdr.hop_limits);
+		MLX5_SET(fte_match_set_lyr_2_4, headers_m, frag,
+			 !!(ipv6_m->frag_ext_exist));
+		MLX5_SET(fte_match_set_lyr_2_4, headers_v, frag,
+			 !!(ipv6_v->frag_ext_exist & ipv6_m->frag_ext_exist));
 	}

 /**
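As an illustration (not part of the patch), the pattern item an application
would pass to match only fragmented IPv6 packets with this support; the
surrounding attributes and actions are omitted:

#include <rte_flow.h>

/* frag_ext_exist spec 1, mask 1: the fragment extension header must be
 * present, so only fragmented IPv6 packets match. */
static const struct rte_flow_item_ipv6 ipv6_spec = { .frag_ext_exist = 1 };
static const struct rte_flow_item_ipv6 ipv6_mask = { .frag_ext_exist = 1 };
static const struct rte_flow_item ipv6_frag_item = {
	.type = RTE_FLOW_ITEM_TYPE_IPV6,
	.spec = &ipv6_spec,
	.mask = &ipv6_mask,
};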
From patchwork Mon Oct 12 10:43:08 2020
X-Patchwork-Id: 80350
From: Dekel Peled
Date: Mon, 12 Oct 2020 13:43:08 +0300
Subject: [dpdk-dev] [PATCH v5 09/11] net/mlx5: support match on IPv6 fragment ext. item

The rte_flow update following RFC [1] added the rte_flow item
ipv6_frag_ext to ethdev. This patch adds to the MLX5 PMD the option to
match on this item type.

[1] http://mails.dpdk.org/archives/dev/2020-March/160255.html

Signed-off-by: Dekel Peled
Acked-by: Ori Kam
---
 drivers/net/mlx5/mlx5_flow.h    |   4 +
 drivers/net/mlx5/mlx5_flow_dv.c | 209 ++++++++++++++++++++++++++++++++++++++++
 2 files changed, 213 insertions(+)

diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index 1e30c93..376519f 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -122,6 +122,10 @@ enum mlx5_feature_name {
 /* Pattern eCPRI Layer bit. */
 #define MLX5_FLOW_LAYER_ECPRI (UINT64_C(1) << 29)

+/* IPv6 Fragment Extension Header bit. */
+#define MLX5_FLOW_LAYER_OUTER_L3_IPV6_FRAG_EXT (1u << 30)
+#define MLX5_FLOW_LAYER_INNER_L3_IPV6_FRAG_EXT (1u << 31)
+
 /* Outer Masks. */
 #define MLX5_FLOW_LAYER_OUTER_L3 \
 	(MLX5_FLOW_LAYER_OUTER_L3_IPV4 | MLX5_FLOW_LAYER_OUTER_L3_IPV6)
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index 49bfa5f..e298918 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -1901,6 +1901,120 @@ struct field_modify_info modify_tcp[] = {
 }

 /**
+ * Validate IPV6 fragment extension item.
+ *
+ * @param[in] item
+ *   Item specification.
+ * @param[in] item_flags
+ *   Bit-fields that holds the items detected until now.
+ * @param[out] error
+ *   Pointer to error structure.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+static int
+flow_dv_validate_item_ipv6_frag_ext(const struct rte_flow_item *item,
+				    uint64_t item_flags,
+				    struct rte_flow_error *error)
+{
+	const struct rte_flow_item_ipv6_frag_ext *spec = item->spec;
+	const struct rte_flow_item_ipv6_frag_ext *last = item->last;
+	const struct rte_flow_item_ipv6_frag_ext *mask = item->mask;
+	rte_be16_t frag_data_spec = 0;
+	rte_be16_t frag_data_last = 0;
+	const int tunnel = !!(item_flags & MLX5_FLOW_LAYER_TUNNEL);
+	const uint64_t l4m = tunnel ?
MLX5_FLOW_LAYER_INNER_L4 : + MLX5_FLOW_LAYER_OUTER_L4; + int ret = 0; + struct rte_flow_item_ipv6_frag_ext nic_mask = { + .hdr = { + .next_header = 0xff, + .frag_data = RTE_BE16(0xffff), + }, + }; + + if (item_flags & l4m) + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, item, + "ipv6 fragment extension item cannot " + "follow L4 item."); + if ((tunnel && !(item_flags & MLX5_FLOW_LAYER_INNER_L3_IPV6)) || + (!tunnel && !(item_flags & MLX5_FLOW_LAYER_OUTER_L3_IPV6))) + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, item, + "ipv6 fragment extension item must " + "follow ipv6 item"); + if (spec && mask) + frag_data_spec = spec->hdr.frag_data & mask->hdr.frag_data; + if (!frag_data_spec) + return 0; + /* + * spec and mask are valid, enforce using full mask to make sure the + * complete value is used correctly. + */ + if ((mask->hdr.frag_data & RTE_BE16(RTE_IPV6_FRAG_USED_MASK)) != + RTE_BE16(RTE_IPV6_FRAG_USED_MASK)) + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM_MASK, + item, "must use full mask for" + " frag_data"); + /* + * Match on frag_data 0x00001 means M is 1 and frag-offset is 0. + * This is 1st fragment of fragmented packet. + */ + if (frag_data_spec == RTE_BE16(RTE_IPV6_EHDR_MF_MASK)) + return rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_ITEM, item, + "match on first fragment not " + "supported"); + if (frag_data_spec && !last) + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, item, + "specified value not supported"); + ret = mlx5_flow_item_acceptable + (item, (const uint8_t *)mask, + (const uint8_t *)&nic_mask, + sizeof(struct rte_flow_item_ipv6_frag_ext), + MLX5_ITEM_RANGE_ACCEPTED, error); + if (ret) + return ret; + /* spec and last are valid, validate the specified range. */ + frag_data_last = last->hdr.frag_data & mask->hdr.frag_data; + /* + * Match on frag_data spec 0x0009 and last 0xfff9 + * means M is 1 and frag-offset is > 0. + * This packet is fragment 2nd and onward, excluding last. + * This is not yet supported in MLX5, return appropriate + * error message. + */ + if (frag_data_spec == RTE_BE16(RTE_IPV6_EHDR_FO_ALIGN | + RTE_IPV6_EHDR_MF_MASK) && + frag_data_last == RTE_BE16(RTE_IPV6_FRAG_USED_MASK)) + return rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_ITEM_LAST, + last, "match on following " + "fragments not supported"); + /* + * Match on frag_data spec 0x0008 and last 0xfff8 + * means M is 0 and frag-offset is > 0. + * This packet is last fragment of fragmented packet. + * This is not yet supported in MLX5, return appropriate + * error message. + */ + if (frag_data_spec == RTE_BE16(RTE_IPV6_EHDR_FO_ALIGN) && + frag_data_last == RTE_BE16(RTE_IPV6_EHDR_FO_MASK)) + return rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_ITEM_LAST, + last, "match on last " + "fragment not supported"); + /* Other range values are invalid and rejected. */ + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM_LAST, last, + "specified range not supported"); +} + +/** * Validate the pop VLAN action. * * @param[in] dev @@ -5349,6 +5463,29 @@ struct field_modify_info modify_tcp[] = { next_protocol = 0xff; } break; + case RTE_FLOW_ITEM_TYPE_IPV6_FRAG_EXT: + ret = flow_dv_validate_item_ipv6_frag_ext(items, + item_flags, + error); + if (ret < 0) + return ret; + last_item = tunnel ? 
/** * Validate the pop VLAN action. * * @param[in] dev @@ -5349,6 +5463,29 @@ struct field_modify_info modify_tcp[] = { next_protocol = 0xff; } break; + case RTE_FLOW_ITEM_TYPE_IPV6_FRAG_EXT: + ret = flow_dv_validate_item_ipv6_frag_ext(items, + item_flags, + error); + if (ret < 0) + return ret; + last_item = tunnel ? + MLX5_FLOW_LAYER_INNER_L3_IPV6_FRAG_EXT : + MLX5_FLOW_LAYER_OUTER_L3_IPV6_FRAG_EXT; + if (items->mask != NULL && + ((const struct rte_flow_item_ipv6_frag_ext *) + items->mask)->hdr.next_header) { + next_protocol = + ((const struct rte_flow_item_ipv6_frag_ext *) + items->spec)->hdr.next_header; + next_protocol &= + ((const struct rte_flow_item_ipv6_frag_ext *) + items->mask)->hdr.next_header; + } else { + /* Reset for inner layer. */ + next_protocol = 0xff; + } + break; case RTE_FLOW_ITEM_TYPE_TCP: ret = mlx5_flow_validate_item_tcp (items, item_flags, @@ -6527,6 +6664,57 @@ struct field_modify_info modify_tcp[] = { } /** + * Add IPV6 fragment extension item to matcher and to the value. + * + * @param[in, out] matcher + * Flow matcher. + * @param[in, out] key + * Flow matcher value. + * @param[in] item + * Flow pattern to translate. + * @param[in] inner + * Item is inner pattern. + */ +static void +flow_dv_translate_item_ipv6_frag_ext(void *matcher, void *key, + const struct rte_flow_item *item, + int inner) +{ + const struct rte_flow_item_ipv6_frag_ext *ipv6_frag_ext_m = item->mask; + const struct rte_flow_item_ipv6_frag_ext *ipv6_frag_ext_v = item->spec; + const struct rte_flow_item_ipv6_frag_ext nic_mask = { + .hdr = { + .next_header = 0xff, + .frag_data = RTE_BE16(0xffff), + }, + }; + void *headers_m; + void *headers_v; + + if (inner) { + headers_m = MLX5_ADDR_OF(fte_match_param, matcher, + inner_headers); + headers_v = MLX5_ADDR_OF(fte_match_param, key, inner_headers); + } else { + headers_m = MLX5_ADDR_OF(fte_match_param, matcher, + outer_headers); + headers_v = MLX5_ADDR_OF(fte_match_param, key, outer_headers); + } + /* IPv6 fragment extension item exists, so packet is IP fragment. */ + MLX5_SET(fte_match_set_lyr_2_4, headers_m, frag, 1); + MLX5_SET(fte_match_set_lyr_2_4, headers_v, frag, 1); + if (!ipv6_frag_ext_v) + return; + if (!ipv6_frag_ext_m) + ipv6_frag_ext_m = &nic_mask; + MLX5_SET(fte_match_set_lyr_2_4, headers_m, ip_protocol, + ipv6_frag_ext_m->hdr.next_header); + MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_protocol, + ipv6_frag_ext_v->hdr.next_header & + ipv6_frag_ext_m->hdr.next_header); +} +
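The translation above always sets the 'frag' bit once the item is present, and additionally propagates a masked next_header into ip_protocol. A sketch of a spec/mask pair exercising that second path follows; the UDP protocol choice and the variable names are only illustrative assumptions.

#include <netinet/in.h>
#include <rte_flow.h>

/* Match fragments whose fragment header is followed by UDP. frag_data
 * is left zero, so only presence plus next_header are matched; the
 * 0xff mask mirrors the nic_mask used by the translation above. */
static const struct rte_flow_item_ipv6_frag_ext frag_udp_spec = {
	.hdr = { .next_header = IPPROTO_UDP },
};
static const struct rte_flow_item_ipv6_frag_ext frag_udp_mask = {
	.hdr = { .next_header = 0xff },
};
static const struct rte_flow_item frag_udp_item = {
	.type = RTE_FLOW_ITEM_TYPE_IPV6_FRAG_EXT,
	.spec = &frag_udp_spec,
	.mask = &frag_udp_mask,
};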
/** * Add TCP item to matcher and to the value. * * @param[in, out] matcher @@ -8881,6 +9069,27 @@ struct field_modify_info modify_tcp[] = { next_protocol = 0xff; } break; + case RTE_FLOW_ITEM_TYPE_IPV6_FRAG_EXT: + flow_dv_translate_item_ipv6_frag_ext(match_mask, + match_value, + items, tunnel); + last_item = tunnel ? + MLX5_FLOW_LAYER_INNER_L3_IPV6_FRAG_EXT : + MLX5_FLOW_LAYER_OUTER_L3_IPV6_FRAG_EXT; + if (items->mask != NULL && + ((const struct rte_flow_item_ipv6_frag_ext *) + items->mask)->hdr.next_header) { + next_protocol = + ((const struct rte_flow_item_ipv6_frag_ext *) + items->spec)->hdr.next_header; + next_protocol &= + ((const struct rte_flow_item_ipv6_frag_ext *) + items->mask)->hdr.next_header; + } else { + /* Reset for inner layer. */ + next_protocol = 0xff; + } + break; case RTE_FLOW_ITEM_TYPE_TCP: flow_dv_translate_item_tcp(match_mask, match_value, items, tunnel);

From patchwork Mon Oct 12 10:43:09 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Dekel Peled X-Patchwork-Id: 80352 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id 8908AA04B6; Mon, 12 Oct 2020 12:48:17 +0200 (CEST) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 2DF6D1D72A; Mon, 12 Oct 2020 12:44:39 +0200 (CEST) Received: from mellanox.co.il (mail-il-dmz.mellanox.com [193.47.165.129]) by dpdk.org (Postfix) with ESMTP id 556ED1D6D9 for ; Mon, 12 Oct 2020 12:44:13 +0200 (CEST) Received: from Internal Mail-Server by MTLPINE1 (envelope-from dekelp@nvidia.com) with SMTP; 12 Oct 2020 13:44:07 +0300 Received: from mtl-vdi-280.wap.labs.mlnx. (mtl-vdi-280.wap.labs.mlnx [10.228.134.250]) by labmailer.mlnx (8.13.8/8.13.8) with ESMTP id 09CAhqw1025485; Mon, 12 Oct 2020 13:44:07 +0300 From: Dekel Peled To: orika@nvidia.com, thomas@monjalon.net, ferruh.yigit@intel.com, arybchenko@solarflare.com, konstantin.ananyev@intel.com, olivier.matz@6wind.com, wenzhuo.lu@intel.com, beilei.xing@intel.com, bernard.iremonger@intel.com, matan@nvidia.com, shahafs@nvidia.com, viacheslavo@nvidia.com Cc: dev@dpdk.org Date: Mon, 12 Oct 2020 13:43:09 +0300 Message-Id: <9154a4684a9e1954fb6d20e38d1067cb3d20b1b0.1602494556.git.dekelp@nvidia.com> X-Mailer: git-send-email 1.7.1 In-Reply-To: References: Subject: [dpdk-dev] [PATCH v5 10/11] doc: update release notes for MLX5 L3 frag support X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev"

This patch updates the 20.11 release notes with the changes included in this series: 1) MLX5 support for matching on fragmented/non-fragmented IPv4/IPv6 packets. 2) ABI change in the ethdev struct rte_flow_item_ipv6.

Signed-off-by: Dekel Peled Acked-by: Ori Kam --- doc/guides/rel_notes/release_20_11.rst | 10 ++++++++++ 1 file changed, 10 insertions(+)

diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst index 35dd938..9894ad6 100644 --- a/doc/guides/rel_notes/release_20_11.rst +++ b/doc/guides/rel_notes/release_20_11.rst @@ -148,6 +148,11 @@ New Features * Extern objects and functions can be plugged into the pipeline. * Transaction-oriented table updates. +* **Updated Mellanox mlx5 driver.** + + Updated Mellanox mlx5 driver with new features and improvements, including: + + * Added support for matching on fragmented/non-fragmented IPv4/IPv6 packets. Removed Items ------------- @@ -300,6 +305,11 @@ ABI Changes * ``ethdev`` internal functions are marked with ``__rte_internal`` tag. + * Added extensions' attributes to struct ``rte_flow_item_ipv6``. + A set of additional values was added to the struct, indicating the + existence of every defined extension header type. + Applications should use the new values for identification of existing + extensions in the packet header. Known Issues ------------
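To illustrate the ABI change described in the release note above, here is a sketch of matching only non-fragmented IPv6 packets through the new attributes; the field name frag_ext_exist follows the definitions introduced earlier in this series, and the variable names are assumptions for illustration.

#include <rte_flow.h>

/* Match non-fragmented IPv6 only: the spec clears frag_ext_exist and
 * the mask selects that single bit, so fragmented packets cannot
 * match this item. */
static const struct rte_flow_item_ipv6 ipv6_no_frag_spec = {
	.frag_ext_exist = 0,
};
static const struct rte_flow_item_ipv6 ipv6_no_frag_mask = {
	.frag_ext_exist = 1,
};
static const struct rte_flow_item ipv6_no_frag_item = {
	.type = RTE_FLOW_ITEM_TYPE_IPV6,
	.spec = &ipv6_no_frag_spec,
	.mask = &ipv6_no_frag_mask,
};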
From patchwork Mon Oct 12 10:43:10 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Dekel Peled X-Patchwork-Id: 80351 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id 17091A04B6; Mon, 12 Oct 2020 12:47:56 +0200 (CEST) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 6D7781D722; Mon, 12 Oct 2020 12:44:37 +0200 (CEST) Received: from mellanox.co.il (mail-il-dmz.mellanox.com [193.47.165.129]) by dpdk.org (Postfix) with ESMTP id 5D6C61D6DA for ; Mon, 12 Oct 2020 12:44:13 +0200 (CEST) Received: from Internal Mail-Server by MTLPINE1 (envelope-from dekelp@nvidia.com) with SMTP; 12 Oct 2020 13:44:08 +0300 Received: from mtl-vdi-280.wap.labs.mlnx. (mtl-vdi-280.wap.labs.mlnx [10.228.134.250]) by labmailer.mlnx (8.13.8/8.13.8) with ESMTP id 09CAhqw2025485; Mon, 12 Oct 2020 13:44:08 +0300 From: Dekel Peled To: orika@nvidia.com, thomas@monjalon.net, ferruh.yigit@intel.com, arybchenko@solarflare.com, konstantin.ananyev@intel.com, olivier.matz@6wind.com, wenzhuo.lu@intel.com, beilei.xing@intel.com, bernard.iremonger@intel.com, matan@nvidia.com, shahafs@nvidia.com, viacheslavo@nvidia.com Cc: dev@dpdk.org Date: Mon, 12 Oct 2020 13:43:10 +0300 Message-Id: X-Mailer: git-send-email 1.7.1 In-Reply-To: References: Subject: [dpdk-dev] [PATCH v5 11/11] net/mlx5: enforce limitation on IPv6 next proto X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev"

Due to a PRM requirement, the IPv6 header item 'proto' field, indicating the next header protocol, must not be set to an extension header type. This patch adds the relevant validation and documents the limitation.

Signed-off-by: Dekel Peled Acked-by: Ori Kam --- doc/guides/nics/mlx5.rst | 7 +++++++ drivers/net/mlx5/mlx5_flow.c | 14 ++++++++++++-- 2 files changed, 19 insertions(+), 2 deletions(-)

diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst index a174cdd..b7a4dce 100644 --- a/doc/guides/nics/mlx5.rst +++ b/doc/guides/nics/mlx5.rst @@ -311,6 +311,13 @@ Limitations for some NICs (such as ConnectX-6 Dx and BlueField 2). The capability bit ``scatter_fcs_w_decap_disable`` shows NIC support. +- The IPv6 header item 'proto' field, indicating the next header protocol, + must not be set to an extension header type. + If the next header is an extension header, do not specify it in the IPv6 + header item 'proto' field; instead, use the last extension header item's + 'next header' field to specify the following header protocol type.
+ Statistics ---------- diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c index 38cfd0f..84931a3 100644 --- a/drivers/net/mlx5/mlx5_flow.c +++ b/drivers/net/mlx5/mlx5_flow.c @@ -1977,9 +1977,9 @@ struct mlx5_flow_tunnel_info { RTE_FLOW_ERROR_TYPE_ITEM, item, "IPv6 cannot follow L2/VLAN layer " "which ether type is not IPv6"); + if (mask && spec) + next_proto = mask->hdr.proto & spec->hdr.proto; if (item_flags & MLX5_FLOW_LAYER_IPV6_ENCAP) { - if (mask && spec) - next_proto = mask->hdr.proto & spec->hdr.proto; if (next_proto == IPPROTO_IPIP || next_proto == IPPROTO_IPV6) return rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, @@ -1987,6 +1987,16 @@ struct mlx5_flow_tunnel_info { "multiple tunnel " "not supported"); } + if (next_proto == IPPROTO_HOPOPTS || + next_proto == IPPROTO_ROUTING || + next_proto == IPPROTO_FRAGMENT || + next_proto == IPPROTO_ESP || + next_proto == IPPROTO_AH || + next_proto == IPPROTO_DSTOPTS) + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, item, + "IPv6 proto (next header) should " + "not be set as extension header"); if (item_flags & MLX5_FLOW_LAYER_IPIP) return rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, item,
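A sketch contrasting a pattern that the validation above rejects with the supported alternative; the protocol values and variable names are hypothetical, chosen only for illustration.

#include <netinet/in.h>
#include <rte_flow.h>

/* Rejected: the masked 'proto' names the fragment extension header
 * (IPPROTO_FRAGMENT, 44) directly, which the new check refuses with
 * EINVAL. Both spec and mask are needed for the check to apply. */
static const struct rte_flow_item_ipv6 ipv6_bad_spec = {
	.hdr = { .proto = IPPROTO_FRAGMENT },
};
static const struct rte_flow_item_ipv6 ipv6_bad_mask = {
	.hdr = { .proto = 0xff },
};

/* Supported: leave 'proto' unset and put the following protocol in
 * the IPV6_FRAG_EXT item's 'next header' field instead. */
static const struct rte_flow_item_ipv6_frag_ext frag_ext_spec = {
	.hdr = { .next_header = IPPROTO_TCP },
};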