From patchwork Thu Oct 1 21:14:58 2020
From: Dekel Peled
Date: Fri, 2 Oct 2020 00:14:58 +0300
Subject: [dpdk-dev] [PATCH v2 01/11] ethdev: add extensions attributes to IPv6 item

With the current DPDK implementation, an application cannot match IPv6
packets based on their extension headers in a simple way. The 'Next
Header' field in the IPv6 header indicates the type of the first
extension header only; subsequent extension headers cannot be identified
by inspecting the IPv6 header. As a result, the existence or absence of
specific extension headers can't be used for packet matching.

For example, fragmented IPv6 packets contain a dedicated extension
header (which is implemented in a later patch of this series), while
non-fragmented packets don't contain the fragment extension header. For
an application to match on non-fragmented IPv6 packets, the current
implementation doesn't provide a suitable solution. Matching on the
Next Header field is not sufficient, since additional extension headers
might be present in the same packet. The same difficulty exists for
matching on fragmented IPv6 packets.

This patch implements the update detailed in RFC [1]. A set of
additional values is added to the IPv6 header struct. These values
indicate the existence of every defined extension header type, providing
a simple means to identify which extensions exist in the packet header.
Continuing the above example, fragmented packets can be identified using
the specific value indicating existence of the fragment extension
header. A usage sketch follows below.
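[Editor's note] For illustration only, a minimal sketch of how an
application might use the new attribute to match only non-fragmented
IPv6 packets. The helper name and the attr/actions plumbing are
hypothetical; only struct rte_flow_item_ipv6 and its frag_ext_exist
bit come from this patch:

    #include <rte_flow.h>

    /* Sketch: create a rule matching non-fragmented IPv6 packets.
     * Port setup, attributes and actions are assumed to exist. */
    static struct rte_flow *
    match_non_fragmented_ipv6(uint16_t port_id,
                              const struct rte_flow_attr *attr,
                              const struct rte_flow_action actions[],
                              struct rte_flow_error *error)
    {
        /* spec = 0, mask = 1: fragment extension header must not exist. */
        struct rte_flow_item_ipv6 ipv6_spec = { .frag_ext_exist = 0 };
        struct rte_flow_item_ipv6 ipv6_mask = { .frag_ext_exist = 1 };
        struct rte_flow_item pattern[] = {
            { .type = RTE_FLOW_ITEM_TYPE_ETH },
            {
                .type = RTE_FLOW_ITEM_TYPE_IPV6,
                .spec = &ipv6_spec,
                .mask = &ipv6_mask,
            },
            { .type = RTE_FLOW_ITEM_TYPE_END },
        };

        return rte_flow_create(port_id, attr, pattern, actions, error);
    }

Setting spec to 1 with the same mask would match fragmented packets
instead; leaving the bit unmasked matches both.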
[1] https://mails.dpdk.org/archives/dev/2020-August/177257.html Signed-off-by: Dekel Peled Acked-by: Ori Kam --- doc/guides/prog_guide/rte_flow.rst | 16 +++++++++++++--- lib/librte_ethdev/rte_flow.h | 25 +++++++++++++++++++++++-- 2 files changed, 36 insertions(+), 5 deletions(-) diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst index 119b128..0b476da 100644 --- a/doc/guides/prog_guide/rte_flow.rst +++ b/doc/guides/prog_guide/rte_flow.rst @@ -946,11 +946,21 @@ Item: ``IPV6`` Matches an IPv6 header. -Note: IPv6 options are handled by dedicated pattern items, see `Item: -IPV6_EXT`_. +Dedicated flags indicate existence of specific extension headers. +Every type of extension header can use a dedicated pattern item, or +the generic `Item: IPV6_EXT`_. - ``hdr``: IPv6 header definition (``rte_ip.h``). -- Default ``mask`` matches source and destination addresses only. +- ``hop_ext_exist``: Hop-by-Hop Options extension header exists. +- ``rout_ext_exist``: Routing extension header exists. +- ``frag_ext_exist``: Fragment extension header exists. +- ``auth_ext_exist``: Authentication extension header exists. +- ``esp_ext_exist``: Encapsulation Security Payload extension header exists. +- ``dest_ext_exist``: Destination Options extension header exists. +- ``mobil_ext_exist``: Mobility extension header exists. +- ``hip_ext_exist``: Host Identity Protocol extension header exists. +- ``shim6_ext_exist``: Shim6 Protocol extension header exists. +- Default ``mask`` matches ``hdr`` source and destination addresses only. Item: ``ICMP`` ^^^^^^^^^^^^^^ diff --git a/lib/librte_ethdev/rte_flow.h b/lib/librte_ethdev/rte_flow.h index da8bfa5..5b5bed2 100644 --- a/lib/librte_ethdev/rte_flow.h +++ b/lib/librte_ethdev/rte_flow.h @@ -792,11 +792,32 @@ struct rte_flow_item_ipv4 { * * Matches an IPv6 header. * - * Note: IPv6 options are handled by dedicated pattern items, see - * RTE_FLOW_ITEM_TYPE_IPV6_EXT. + * Dedicated flags indicate existence of specific extension headers. + * Every type of extension header can use a dedicated pattern item, or + * the generic item RTE_FLOW_ITEM_TYPE_IPV6_EXT. */ struct rte_flow_item_ipv6 { struct rte_ipv6_hdr hdr; /**< IPv6 header definition. */ + uint32_t hop_ext_exist:1; + /**< Hop-by-Hop Options extension header exists. */ + uint32_t rout_ext_exist:1; + /**< Routing extension header exists. */ + uint32_t frag_ext_exist:1; + /**< Fragment extension header exists. */ + uint32_t auth_ext_exist:1; + /**< Authentication extension header exists. */ + uint32_t esp_ext_exist:1; + /**< Encapsulation Security Payload extension header exists. */ + uint32_t dest_ext_exist:1; + /**< Destination Options extension header exists. */ + uint32_t mobil_ext_exist:1; + /**< Mobility extension header exists. */ + uint32_t hip_ext_exist:1; + /**< Host Identity Protocol extension header exists. */ + uint32_t shim6_ext_exist:1; + /**< Shim6 Protocol extension header exists. */ + uint32_t reserved:23; + /**< Reserved for future extension headers, must be zero. */ }; /** Default mask for RTE_FLOW_ITEM_TYPE_IPV6. 
 */

From patchwork Thu Oct 1 21:14:59 2020
From: Dekel Peled
Date: Fri, 2 Oct 2020 00:14:59 +0300
Subject: [dpdk-dev] [PATCH v2 02/11] ethdev: add IPv6 fragment extension header item

Applications handling fragmented IPv6 packets need to match on the IPv6
fragment extension header, in order to identify the fragments' order
and location in the packet. This patch introduces the IPv6 fragment
extension header item, proposed in [1].

The relevant definitions are moved from lib/librte_ip_frag/rte_ip_frag.h
to lib/librte_net/rte_ip.h, as they are needed for IPv6 header handling.
Struct ipv6_extension_fragment is renamed to rte_ipv6_fragment_ext to
adapt it to the common naming convention.

A default mask is not defined, since all fields are optional.
A small decoding sketch follows after the diff.

[1] http://mails.dpdk.org/archives/dev/2020-March/160255.html

Signed-off-by: Dekel Peled
---
 doc/guides/prog_guide/rte_flow.rst | 16 ++++++++++++++--
 lib/librte_ethdev/rte_flow.c       |  1 +
 lib/librte_ethdev/rte_flow.h       | 21 +++++++++++++++++++++
 lib/librte_ip_frag/rte_ip_frag.h   | 26 ++------------------------
 lib/librte_net/rte_ip.h            | 26 ++++++++++++++++++++++++--
 5 files changed, 62 insertions(+), 28 deletions(-)

diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index 0b476da..826e45d 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -947,8 +947,8 @@ Item: ``IPV6``
 Matches an IPv6 header.
 
 Dedicated flags indicate existence of specific extension headers.
-Every type of extension header can use a dedicated pattern item, or
-the generic `Item: IPV6_EXT`_.
+Every type of extension header can use a dedicated pattern item,
+for example `Item: IPV6_FRAG_EXT`_, or the generic `Item: IPV6_EXT`_.
 
 - ``hdr``: IPv6 header definition (``rte_ip.h``).
 - ``hop_ext_exist``: Hop-by-Hop Options extension header exists.
@@ -1187,6 +1187,18 @@ Normally preceded by any of: - `Item: IPV6`_ - `Item: IPV6_EXT`_ +Item: ``IPV6_FRAG_EXT`` +^^^^^^^^^^^^^^^^^^^^^^^ + +Matches the presence of IPv6 fragment extension header. + +- ``hdr``: IPv6 fragment extension header definition (``rte_ip.h``). + +Normally preceded by any of: + +- `Item: IPV6`_ +- `Item: IPV6_EXT`_ + Item: ``ICMP6`` ^^^^^^^^^^^^^^^ diff --git a/lib/librte_ethdev/rte_flow.c b/lib/librte_ethdev/rte_flow.c index f8fdd68..c1f3132 100644 --- a/lib/librte_ethdev/rte_flow.c +++ b/lib/librte_ethdev/rte_flow.c @@ -72,6 +72,7 @@ struct rte_flow_desc_data { MK_FLOW_ITEM(VXLAN_GPE, sizeof(struct rte_flow_item_vxlan_gpe)), MK_FLOW_ITEM(ARP_ETH_IPV4, sizeof(struct rte_flow_item_arp_eth_ipv4)), MK_FLOW_ITEM(IPV6_EXT, sizeof(struct rte_flow_item_ipv6_ext)), + MK_FLOW_ITEM(IPV6_FRAG_EXT, sizeof(struct rte_flow_item_ipv6_frag_ext)), MK_FLOW_ITEM(ICMP6, sizeof(struct rte_flow_item_icmp6)), MK_FLOW_ITEM(ICMP6_ND_NS, sizeof(struct rte_flow_item_icmp6_nd_ns)), MK_FLOW_ITEM(ICMP6_ND_NA, sizeof(struct rte_flow_item_icmp6_nd_na)), diff --git a/lib/librte_ethdev/rte_flow.h b/lib/librte_ethdev/rte_flow.h index 5b5bed2..1443e6a 100644 --- a/lib/librte_ethdev/rte_flow.h +++ b/lib/librte_ethdev/rte_flow.h @@ -368,6 +368,13 @@ enum rte_flow_item_type { RTE_FLOW_ITEM_TYPE_IPV6_EXT, /** + * Matches the presence of IPv6 fragment extension header. + * + * See struct rte_flow_item_ipv6_frag_ext. + */ + RTE_FLOW_ITEM_TYPE_IPV6_FRAG_EXT, + + /** * Matches any ICMPv6 header. * * See struct rte_flow_item_icmp6. @@ -1188,6 +1195,20 @@ struct rte_flow_item_ipv6_ext rte_flow_item_ipv6_ext_mask = { #endif /** + * RTE_FLOW_ITEM_TYPE_IPV6_FRAG_EXT + * + * Matches the presence of IPv6 fragment extension header. + * + * Preceded by any of: + * + * - RTE_FLOW_ITEM_TYPE_IPV6 + * - RTE_FLOW_ITEM_TYPE_IPV6_EXT + */ +struct rte_flow_item_ipv6_frag_ext { + struct rte_ipv6_fragment_ext hdr; +}; + +/** * RTE_FLOW_ITEM_TYPE_ICMP6 * * Matches any ICMPv6 header. diff --git a/lib/librte_ip_frag/rte_ip_frag.h b/lib/librte_ip_frag/rte_ip_frag.h index 66edd7e..0bfe64b 100644 --- a/lib/librte_ip_frag/rte_ip_frag.h +++ b/lib/librte_ip_frag/rte_ip_frag.h @@ -110,30 +110,8 @@ struct rte_ip_frag_tbl { __extension__ struct ip_frag_pkt pkt[0]; /**< hash table. */ }; -/** IPv6 fragment extension header */ -#define RTE_IPV6_EHDR_MF_SHIFT 0 -#define RTE_IPV6_EHDR_MF_MASK 1 -#define RTE_IPV6_EHDR_FO_SHIFT 3 -#define RTE_IPV6_EHDR_FO_MASK (~((1 << RTE_IPV6_EHDR_FO_SHIFT) - 1)) -#define RTE_IPV6_EHDR_FO_ALIGN (1 << RTE_IPV6_EHDR_FO_SHIFT) - -#define RTE_IPV6_FRAG_USED_MASK \ - (RTE_IPV6_EHDR_MF_MASK | RTE_IPV6_EHDR_FO_MASK) - -#define RTE_IPV6_GET_MF(x) ((x) & RTE_IPV6_EHDR_MF_MASK) -#define RTE_IPV6_GET_FO(x) ((x) >> RTE_IPV6_EHDR_FO_SHIFT) - -#define RTE_IPV6_SET_FRAG_DATA(fo, mf) \ - (((fo) & RTE_IPV6_EHDR_FO_MASK) | ((mf) & RTE_IPV6_EHDR_MF_MASK)) - -struct ipv6_extension_fragment { - uint8_t next_header; /**< Next header type */ - uint8_t reserved; /**< Reserved */ - uint16_t frag_data; /**< All fragmentation data */ - uint32_t id; /**< Packet ID */ -} __rte_packed; - - +/* struct ipv6_extension_fragment moved to librte_net/rte_ip.h and renamed. */ +#define ipv6_extension_fragment rte_ipv6_fragment_ext /** * Create a new IP fragmentation table. 
diff --git a/lib/librte_net/rte_ip.h b/lib/librte_net/rte_ip.h
index fcd1eb3..3081c46 100644
--- a/lib/librte_net/rte_ip.h
+++ b/lib/librte_net/rte_ip.h
@@ -456,8 +456,30 @@ struct rte_ipv6_hdr {
 	return (uint16_t)cksum;
 }
 
-/* IPv6 fragmentation header size */
-#define RTE_IPV6_FRAG_HDR_SIZE 8
+/** IPv6 fragment extension header. */
+#define RTE_IPV6_EHDR_MF_SHIFT	0
+#define RTE_IPV6_EHDR_MF_MASK	1
+#define RTE_IPV6_EHDR_FO_SHIFT	3
+#define RTE_IPV6_EHDR_FO_MASK	(~((1 << RTE_IPV6_EHDR_FO_SHIFT) - 1))
+#define RTE_IPV6_EHDR_FO_ALIGN	(1 << RTE_IPV6_EHDR_FO_SHIFT)
+
+#define RTE_IPV6_FRAG_USED_MASK	(RTE_IPV6_EHDR_MF_MASK | RTE_IPV6_EHDR_FO_MASK)
+
+#define RTE_IPV6_GET_MF(x)	((x) & RTE_IPV6_EHDR_MF_MASK)
+#define RTE_IPV6_GET_FO(x)	((x) >> RTE_IPV6_EHDR_FO_SHIFT)
+
+#define RTE_IPV6_SET_FRAG_DATA(fo, mf)	\
+	(((fo) & RTE_IPV6_EHDR_FO_MASK) | ((mf) & RTE_IPV6_EHDR_MF_MASK))
+
+struct rte_ipv6_fragment_ext {
+	uint8_t next_header;	/**< Next header type */
+	uint8_t reserved;	/**< Reserved */
+	rte_be16_t frag_data;	/**< All fragmentation data */
+	rte_be32_t id;		/**< Packet ID */
+} __rte_packed;
+
+/* IPv6 fragment extension header size */
+#define RTE_IPV6_FRAG_HDR_SIZE	sizeof(struct rte_ipv6_fragment_ext)
 
 /**
  * Parse next IPv6 header extension
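[Editor's note] For illustration only, a minimal sketch decoding the
frag_data field of the relocated struct; the helper name is
hypothetical. frag_data is big-endian, so it is converted to CPU order
before the bit macros are applied, and the fragment offset is expressed
in 8-byte units:

    #include <stdio.h>
    #include <rte_byteorder.h>
    #include <rte_ip.h>

    static void
    dump_frag_ext(const struct rte_ipv6_fragment_ext *fh)
    {
        uint16_t frag_data = rte_be_to_cpu_16(fh->frag_data);
        unsigned int more_frags = RTE_IPV6_GET_MF(frag_data);
        unsigned int offset_bytes = RTE_IPV6_GET_FO(frag_data) *
                                    RTE_IPV6_EHDR_FO_ALIGN;

        printf("next_header=%u MF=%u offset=%u bytes\n",
               (unsigned int)fh->next_header, more_frags, offset_bytes);
    }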
From patchwork Thu Oct 1 21:15:00 2020
From: Dekel Peled
Date: Fri, 2 Oct 2020 00:15:00 +0300
Subject: [dpdk-dev] [PATCH v2 03/11] app/testpmd: support IPv4 fragments

This patch updates the testpmd CLI to support the fragment_offset field
of the IPv4 header item.

To match on non-fragmented IPv4 packets, use the pattern:
... ipv4 fragment_offset spec 0 fragment_offset mask 0x3fff ...

To match on fragmented IPv4 packets, use the pattern:
... ipv4 fragment_offset spec 1 fragment_offset last 0x3fff
    fragment_offset mask 0x3fff ...
(Use the full available range 1 to 0x3fff to include all possible
values.)

To match on any IPv4 packets, fragmented and non-fragmented, the
fragment_offset field should not be specified at all. Complete command
examples follow after the diff.

Signed-off-by: Dekel Peled
Acked-by: Ori Kam
---
 app/test-pmd/cmdline_flow.c | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index 6e04d53..a9bf29f 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -129,6 +129,7 @@ enum index {
 	ITEM_VLAN_INNER_TYPE,
 	ITEM_IPV4,
 	ITEM_IPV4_TOS,
+	ITEM_IPV4_FRAGMENT_OFFSET,
 	ITEM_IPV4_TTL,
 	ITEM_IPV4_PROTO,
 	ITEM_IPV4_SRC,
@@ -873,6 +874,7 @@ struct parse_action_priv {
 static const enum index item_ipv4[] = {
 	ITEM_IPV4_TOS,
+	ITEM_IPV4_FRAGMENT_OFFSET,
 	ITEM_IPV4_TTL,
 	ITEM_IPV4_PROTO,
 	ITEM_IPV4_SRC,
@@ -2097,6 +2099,13 @@ static int comp_set_raw_index(struct context *, const struct token *,
 		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_ipv4,
 					     hdr.type_of_service)),
 	},
+	[ITEM_IPV4_FRAGMENT_OFFSET] = {
+		.name = "fragment_offset",
+		.help = "fragmentation flags and fragment offset",
+		.next = NEXT(item_ipv4, NEXT_ENTRY(UNSIGNED), item_param),
+		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_ipv4,
+					     hdr.fragment_offset)),
+	},
 	[ITEM_IPV4_TTL] = {
 		.name = "ttl",
 		.help = "time to live",
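[Editor's note] For illustration only, complete testpmd commands built
from the patterns above; port 0 and queue index 1 are hypothetical, and
any other terminating action works equally well:

    testpmd> flow create 0 ingress pattern eth
             / ipv4 fragment_offset spec 1 fragment_offset last 0x3fff
               fragment_offset mask 0x3fff
             / end actions queue index 1 / end

    testpmd> flow create 0 ingress pattern eth
             / ipv4 fragment_offset spec 0 fragment_offset mask 0x3fff
             / end actions queue index 1 / end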
From patchwork Thu Oct 1 21:15:01 2020
From: Dekel Peled
Date: Fri, 2 Oct 2020 00:15:01 +0300
Subject: [dpdk-dev] [PATCH v2 04/11] app/testpmd: support IPv6 fragments

The rte_flow update, following RFC [1], introduced the frag_ext_exist
field for the IPv6 header item, used to match on fragmented and
non-fragmented packets. This patch updates the testpmd CLI to support
the new field.

To match on non-fragmented IPv6 packets, use the pattern:
... ipv6 frag_ext_exist spec 0 frag_ext_exist mask 1 ...

To match on fragmented IPv6 packets, use the pattern:
... ipv6 frag_ext_exist spec 1 frag_ext_exist mask 1 ...

To match on any IPv6 packets, the frag_ext_exist field should not be
specified at all. A complete command example follows after the diff.

[1] https://mails.dpdk.org/archives/dev/2020-August/177257.html

Signed-off-by: Dekel Peled
Acked-by: Ori Kam
---
 app/test-pmd/cmdline_flow.c | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index a9bf29f..b078095 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -141,6 +141,7 @@ enum index {
 	ITEM_IPV6_HOP,
 	ITEM_IPV6_SRC,
 	ITEM_IPV6_DST,
+	ITEM_IPV6_FRAG_EXT_EXIST,
 	ITEM_ICMP,
 	ITEM_ICMP_TYPE,
 	ITEM_ICMP_CODE,
@@ -890,6 +891,7 @@ struct parse_action_priv {
 	ITEM_IPV6_HOP,
 	ITEM_IPV6_SRC,
 	ITEM_IPV6_DST,
+	ITEM_IPV6_FRAG_EXT_EXIST,
 	ITEM_NEXT,
 	ZERO,
 };
@@ -2185,6 +2187,13 @@ static int comp_set_raw_index(struct context *, const struct token *,
 		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_ipv6,
 					     hdr.dst_addr)),
 	},
+	[ITEM_IPV6_FRAG_EXT_EXIST] = {
+		.name = "frag_ext_exist",
+		.help = "fragment packet attribute",
+		.next = NEXT(item_ipv6, NEXT_ENTRY(UNSIGNED), item_param),
+		.args = ARGS(ARGS_ENTRY_BF(struct rte_flow_item_ipv6,
+					   frag_ext_exist, 1)),
+	},
 	[ITEM_ICMP] = {
 		.name = "icmp",
 		.help = "match ICMP header",
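[Editor's note] For illustration only, a complete testpmd command built
from the pattern above; port 0 and the drop action are hypothetical
choices:

    testpmd> flow create 0 ingress pattern eth
             / ipv6 frag_ext_exist spec 1 frag_ext_exist mask 1
             / end actions drop / end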
From patchwork Thu Oct 1 21:15:02 2020
From: Dekel Peled
Date: Fri, 2 Oct 2020 00:15:02 +0300
Subject: [dpdk-dev] [PATCH v2 05/11] app/testpmd: support IPv6 fragment extension item

The rte_flow update, following RFC [1], added to ethdev the rte_flow
item ipv6_frag_ext. This patch updates the testpmd CLI to support the
new item and its fields.

To match on fragmented IPv6 packets, this item is added to the pattern:
... ipv6 / ipv6_frag_ext ...
A complete command example follows after the diff.

[1] http://mails.dpdk.org/archives/dev/2020-March/160255.html

Signed-off-by: Dekel Peled
Acked-by: Ori Kam
---
 app/test-pmd/cmdline_flow.c | 35 +++++++++++++++++++++++++++++++++++
 1 file changed, 35 insertions(+)

diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index b078095..1f800eb 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -195,6 +195,9 @@ enum index {
 	ITEM_ARP_ETH_IPV4_TPA,
 	ITEM_IPV6_EXT,
 	ITEM_IPV6_EXT_NEXT_HDR,
+	ITEM_IPV6_FRAG_EXT,
+	ITEM_IPV6_FRAG_EXT_NEXT_HDR,
+	ITEM_IPV6_FRAG_EXT_FRAG_DATA,
 	ITEM_ICMP6,
 	ITEM_ICMP6_TYPE,
 	ITEM_ICMP6_CODE,
@@ -786,6 +789,7 @@ struct parse_action_priv {
 	ITEM_VXLAN_GPE,
 	ITEM_ARP_ETH_IPV4,
 	ITEM_IPV6_EXT,
+	ITEM_IPV6_FRAG_EXT,
 	ITEM_ICMP6,
 	ITEM_ICMP6_ND_NS,
 	ITEM_ICMP6_ND_NA,
@@ -1007,6 +1011,13 @@ struct parse_action_priv {
 	ZERO,
 };
 
+static const enum index item_ipv6_frag_ext[] = {
+	ITEM_IPV6_FRAG_EXT_NEXT_HDR,
+	ITEM_IPV6_FRAG_EXT_FRAG_DATA,
+	ITEM_NEXT,
+	ZERO,
+};
+
 static const enum index item_icmp6[] = {
 	ITEM_ICMP6_TYPE,
 	ITEM_ICMP6_CODE,
@@ -2578,6 +2589,30 @@ static int comp_set_raw_index(struct context *, const struct token *,
 		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_ipv6_ext,
 					     next_hdr)),
 	},
+	[ITEM_IPV6_FRAG_EXT] = {
+		.name = "ipv6_frag_ext",
+		.help = "match presence of IPv6 fragment extension header",
+		.priv = PRIV_ITEM(IPV6_FRAG_EXT,
+				  sizeof(struct rte_flow_item_ipv6_frag_ext)),
+		.next = NEXT(item_ipv6_frag_ext),
+		.call = parse_vc,
+	},
+	[ITEM_IPV6_FRAG_EXT_NEXT_HDR] = {
+		.name = "next_hdr",
+		.help = "next header",
+		.next = NEXT(item_ipv6_frag_ext, NEXT_ENTRY(UNSIGNED),
+			     item_param),
+		.args = ARGS(ARGS_ENTRY(struct rte_flow_item_ipv6_frag_ext,
+					hdr.next_header)),
+	},
+	[ITEM_IPV6_FRAG_EXT_FRAG_DATA] = {
+		.name = "frag_data",
+		.help = "fragment flags and offset",
+		.next = NEXT(item_ipv6_frag_ext, NEXT_ENTRY(UNSIGNED),
+			     item_param),
+		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_ipv6_frag_ext,
+					     hdr.frag_data)),
+	},
 	[ITEM_ICMP6] = {
 		.name = "icmp6",
 		.help = "match any ICMPv6 header",
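[Editor's note] For illustration only, a complete testpmd command using
the new item as a presence-only match on any fragmented IPv6 packet;
port 0 and queue index 1 are hypothetical. The optional next_hdr and
frag_data fields can be added with the usual spec/last/mask modifiers:

    testpmd> flow create 0 ingress pattern eth / ipv6 / ipv6_frag_ext
             / end actions queue index 1 / end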
From patchwork Thu Oct 1 21:15:03 2020
From: Dekel Peled
Date: Fri, 2 Oct 2020 00:15:03 +0300
Subject: [dpdk-dev] [PATCH v2 06/11] net/mlx5: remove handling of ICMP fragmented packets

Commit [1] forced setting of the match on the 'frag' bit to mask 1 and
value 0. A previous patch in this series added support for matching on
fragmented and non-fragmented packets using the L3 items, so this
setting is now redundant. This patch removes the changes done in [1].

[1] commit 85407db9f60d ("net/mlx5: fix matching for ICMP fragments")

Signed-off-by: Dekel Peled
Acked-by: Ori Kam
---
 drivers/net/mlx5/mlx5_flow_dv.c | 12 ------------
 1 file changed, 12 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index 79fdf34..0a0a5a4 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -7345,12 +7345,6 @@ struct field_modify_info modify_tcp[] = {
 		return;
 	if (!icmp6_m)
 		icmp6_m = &rte_flow_item_icmp6_mask;
-	/*
-	 * Force flow only to match the non-fragmented IPv6 ICMPv6 packets.
-	 * If only the protocol is specified, no need to match the frag.
-	 */
-	MLX5_SET(fte_match_set_lyr_2_4, headers_m, frag, 1);
-	MLX5_SET(fte_match_set_lyr_2_4, headers_v, frag, 0);
 	MLX5_SET(fte_match_set_misc3, misc3_m, icmpv6_type, icmp6_m->type);
 	MLX5_SET(fte_match_set_misc3, misc3_v, icmpv6_type,
 		 icmp6_v->type & icmp6_m->type);
@@ -7398,12 +7392,6 @@ struct field_modify_info modify_tcp[] = {
 		return;
 	if (!icmp_m)
 		icmp_m = &rte_flow_item_icmp_mask;
-	/*
-	 * Force flow only to match the non-fragmented IPv4 ICMP packets.
-	 * If only the protocol is specified, no need to match the frag.
-	 */
-	MLX5_SET(fte_match_set_lyr_2_4, headers_m, frag, 1);
-	MLX5_SET(fte_match_set_lyr_2_4, headers_v, frag, 0);
 	MLX5_SET(fte_match_set_misc3, misc3_m, icmp_type,
 		 icmp_m->hdr.icmp_type);
 	MLX5_SET(fte_match_set_misc3, misc3_v, icmp_type,

From patchwork Thu Oct 1 21:15:04 2020
From: Dekel Peled
Date: Fri, 2 Oct 2020 00:15:04 +0300
Subject: [dpdk-dev] [PATCH v2 07/11] net/mlx5: support match on IPv4 fragment packets

This patch adds to the MLX5 PMD support for matching on fragmented and
non-fragmented IPv4 packets, using the IPv4 header fragment_offset
field. An illustrative spec/last/mask sketch follows after the diff.

Signed-off-by: Dekel Peled
Acked-by: Ori Kam
---
 drivers/net/mlx5/mlx5_flow.c       |  48 ++++++----
 drivers/net/mlx5/mlx5_flow.h       |  10 +++
 drivers/net/mlx5/mlx5_flow_dv.c    | 156 +++++++++++++++++++++++++++++------
 drivers/net/mlx5/mlx5_flow_verbs.c |   9 ++-
 4 files changed, 178 insertions(+), 45 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index ffa7646..906741f 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -474,6 +474,8 @@ struct mlx5_flow_tunnel_info {
  *   Bit-masks covering supported fields by the NIC to compare with user mask.
  * @param[in] size
  *   Bit-masks size in bytes.
+ * @param[in] range_accepted
+ *   True if range of values is accepted for specific fields, false otherwise.
  * @param[out] error
  *   Pointer to error structure.
* @@ -485,6 +487,7 @@ struct mlx5_flow_tunnel_info { const uint8_t *mask, const uint8_t *nic_mask, unsigned int size, + bool range_accepted, struct rte_flow_error *error) { unsigned int i; @@ -502,7 +505,7 @@ struct mlx5_flow_tunnel_info { RTE_FLOW_ERROR_TYPE_ITEM, item, "mask/last without a spec is not" " supported"); - if (item->spec && item->last) { + if (item->spec && item->last && !range_accepted) { uint8_t spec[size]; uint8_t last[size]; unsigned int i; @@ -1277,7 +1280,8 @@ struct mlx5_flow_tunnel_info { ret = mlx5_flow_item_acceptable (item, (const uint8_t *)mask, (const uint8_t *)&rte_flow_item_icmp6_mask, - sizeof(struct rte_flow_item_icmp6), error); + sizeof(struct rte_flow_item_icmp6), + MLX5_ITEM_RANGE_NOT_ACCEPTED, error); if (ret < 0) return ret; return 0; @@ -1329,7 +1333,8 @@ struct mlx5_flow_tunnel_info { ret = mlx5_flow_item_acceptable (item, (const uint8_t *)mask, (const uint8_t *)&rte_flow_item_icmp_mask, - sizeof(struct rte_flow_item_icmp), error); + sizeof(struct rte_flow_item_icmp), + MLX5_ITEM_RANGE_NOT_ACCEPTED, error); if (ret < 0) return ret; return 0; @@ -1384,7 +1389,7 @@ struct mlx5_flow_tunnel_info { ret = mlx5_flow_item_acceptable(item, (const uint8_t *)mask, (const uint8_t *)&nic_mask, sizeof(struct rte_flow_item_eth), - error); + MLX5_ITEM_RANGE_NOT_ACCEPTED, error); return ret; } @@ -1438,7 +1443,7 @@ struct mlx5_flow_tunnel_info { ret = mlx5_flow_item_acceptable(item, (const uint8_t *)mask, (const uint8_t *)&nic_mask, sizeof(struct rte_flow_item_vlan), - error); + MLX5_ITEM_RANGE_NOT_ACCEPTED, error); if (ret) return ret; if (!tunnel && mask->tci != RTE_BE16(0x0fff)) { @@ -1502,6 +1507,7 @@ struct mlx5_flow_tunnel_info { uint64_t last_item, uint16_t ether_type, const struct rte_flow_item_ipv4 *acc_mask, + bool range_accepted, struct rte_flow_error *error) { const struct rte_flow_item_ipv4 *mask = item->mask; @@ -1572,7 +1578,7 @@ struct mlx5_flow_tunnel_info { acc_mask ? (const uint8_t *)acc_mask : (const uint8_t *)&nic_mask, sizeof(struct rte_flow_item_ipv4), - error); + range_accepted, error); if (ret < 0) return ret; return 0; @@ -1592,6 +1598,8 @@ struct mlx5_flow_tunnel_info { * @param[in] acc_mask * Acceptable mask, if NULL default internal default mask * will be used to check whether item fields are supported. + * @param[in] range_accepted + * True if range of values is accepted for specific fields, false otherwise. * @param[out] error * Pointer to error structure. * @@ -1671,7 +1679,7 @@ struct mlx5_flow_tunnel_info { acc_mask ? 
(const uint8_t *)acc_mask : (const uint8_t *)&nic_mask, sizeof(struct rte_flow_item_ipv6), - error); + MLX5_ITEM_RANGE_NOT_ACCEPTED, error); if (ret < 0) return ret; return 0; @@ -1726,7 +1734,8 @@ struct mlx5_flow_tunnel_info { ret = mlx5_flow_item_acceptable (item, (const uint8_t *)mask, (const uint8_t *)&rte_flow_item_udp_mask, - sizeof(struct rte_flow_item_udp), error); + sizeof(struct rte_flow_item_udp), MLX5_ITEM_RANGE_NOT_ACCEPTED, + error); if (ret < 0) return ret; return 0; @@ -1781,7 +1790,8 @@ struct mlx5_flow_tunnel_info { ret = mlx5_flow_item_acceptable (item, (const uint8_t *)mask, (const uint8_t *)flow_mask, - sizeof(struct rte_flow_item_tcp), error); + sizeof(struct rte_flow_item_tcp), MLX5_ITEM_RANGE_NOT_ACCEPTED, + error); if (ret < 0) return ret; return 0; @@ -1835,7 +1845,7 @@ struct mlx5_flow_tunnel_info { (item, (const uint8_t *)mask, (const uint8_t *)&rte_flow_item_vxlan_mask, sizeof(struct rte_flow_item_vxlan), - error); + MLX5_ITEM_RANGE_NOT_ACCEPTED, error); if (ret < 0) return ret; if (spec) { @@ -1906,7 +1916,7 @@ struct mlx5_flow_tunnel_info { (item, (const uint8_t *)mask, (const uint8_t *)&rte_flow_item_vxlan_gpe_mask, sizeof(struct rte_flow_item_vxlan_gpe), - error); + MLX5_ITEM_RANGE_NOT_ACCEPTED, error); if (ret < 0) return ret; if (spec) { @@ -1980,7 +1990,7 @@ struct mlx5_flow_tunnel_info { ret = mlx5_flow_item_acceptable (item, (const uint8_t *)mask, (const uint8_t *)&gre_key_default_mask, - sizeof(rte_be32_t), error); + sizeof(rte_be32_t), MLX5_ITEM_RANGE_NOT_ACCEPTED, error); return ret; } @@ -2032,7 +2042,8 @@ struct mlx5_flow_tunnel_info { ret = mlx5_flow_item_acceptable (item, (const uint8_t *)mask, (const uint8_t *)&nic_mask, - sizeof(struct rte_flow_item_gre), error); + sizeof(struct rte_flow_item_gre), MLX5_ITEM_RANGE_NOT_ACCEPTED, + error); if (ret < 0) return ret; #ifndef HAVE_MLX5DV_DR @@ -2107,7 +2118,8 @@ struct mlx5_flow_tunnel_info { ret = mlx5_flow_item_acceptable (item, (const uint8_t *)mask, (const uint8_t *)&nic_mask, - sizeof(struct rte_flow_item_geneve), error); + sizeof(struct rte_flow_item_geneve), + MLX5_ITEM_RANGE_NOT_ACCEPTED, error); if (ret) return ret; if (spec) { @@ -2190,7 +2202,8 @@ struct mlx5_flow_tunnel_info { ret = mlx5_flow_item_acceptable (item, (const uint8_t *)mask, (const uint8_t *)&rte_flow_item_mpls_mask, - sizeof(struct rte_flow_item_mpls), error); + sizeof(struct rte_flow_item_mpls), + MLX5_ITEM_RANGE_NOT_ACCEPTED, error); if (ret < 0) return ret; return 0; @@ -2245,7 +2258,8 @@ struct mlx5_flow_tunnel_info { ret = mlx5_flow_item_acceptable (item, (const uint8_t *)mask, (const uint8_t *)&rte_flow_item_nvgre_mask, - sizeof(struct rte_flow_item_nvgre), error); + sizeof(struct rte_flow_item_nvgre), + MLX5_ITEM_RANGE_NOT_ACCEPTED, error); if (ret < 0) return ret; return 0; @@ -2339,7 +2353,7 @@ struct mlx5_flow_tunnel_info { acc_mask ? (const uint8_t *)acc_mask : (const uint8_t *)&nic_mask, sizeof(struct rte_flow_item_ecpri), - error); + MLX5_ITEM_RANGE_NOT_ACCEPTED, error); } /* Allocate unique ID for the split Q/RSS subflows. */ diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h index 279daf2..1e30c93 100644 --- a/drivers/net/mlx5/mlx5_flow.h +++ b/drivers/net/mlx5/mlx5_flow.h @@ -330,6 +330,14 @@ enum mlx5_feature_name { #define MLX5_ENCAPSULATION_DECISION_SIZE (sizeof(struct rte_flow_item_eth) + \ sizeof(struct rte_flow_item_ipv4)) +/* IPv4 fragment_offset field contains relevant data in bits 2 to 15. 
*/ +#define MLX5_IPV4_FRAG_OFFSET_MASK \ + (RTE_IPV4_HDR_OFFSET_MASK | RTE_IPV4_HDR_MF_FLAG) + +/* Specific item's fields can accept a range of values (using spec and last). */ +#define MLX5_ITEM_RANGE_NOT_ACCEPTED false +#define MLX5_ITEM_RANGE_ACCEPTED true + /* Software header modify action numbers of a flow. */ #define MLX5_ACT_NUM_MDF_IPV4 1 #define MLX5_ACT_NUM_MDF_IPV6 4 @@ -985,6 +993,7 @@ int mlx5_flow_item_acceptable(const struct rte_flow_item *item, const uint8_t *mask, const uint8_t *nic_mask, unsigned int size, + bool range_accepted, struct rte_flow_error *error); int mlx5_flow_validate_item_eth(const struct rte_flow_item *item, uint64_t item_flags, @@ -1002,6 +1011,7 @@ int mlx5_flow_validate_item_ipv4(const struct rte_flow_item *item, uint64_t last_item, uint16_t ether_type, const struct rte_flow_item_ipv4 *acc_mask, + bool range_accepted, struct rte_flow_error *error); int mlx5_flow_validate_item_ipv6(const struct rte_flow_item *item, uint64_t item_flags, diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c index 0a0a5a4..3379caf 100644 --- a/drivers/net/mlx5/mlx5_flow_dv.c +++ b/drivers/net/mlx5/mlx5_flow_dv.c @@ -1418,7 +1418,7 @@ struct field_modify_info modify_tcp[] = { ret = mlx5_flow_item_acceptable(item, (const uint8_t *)mask, (const uint8_t *)&nic_mask, sizeof(struct rte_flow_item_mark), - error); + MLX5_ITEM_RANGE_NOT_ACCEPTED, error); if (ret < 0) return ret; return 0; @@ -1494,7 +1494,7 @@ struct field_modify_info modify_tcp[] = { ret = mlx5_flow_item_acceptable(item, (const uint8_t *)mask, (const uint8_t *)&nic_mask, sizeof(struct rte_flow_item_meta), - error); + MLX5_ITEM_RANGE_NOT_ACCEPTED, error); return ret; } @@ -1547,7 +1547,7 @@ struct field_modify_info modify_tcp[] = { ret = mlx5_flow_item_acceptable(item, (const uint8_t *)mask, (const uint8_t *)&nic_mask, sizeof(struct rte_flow_item_tag), - error); + MLX5_ITEM_RANGE_NOT_ACCEPTED, error); if (ret < 0) return ret; if (mask->index != 0xff) @@ -1618,7 +1618,7 @@ struct field_modify_info modify_tcp[] = { (item, (const uint8_t *)mask, (const uint8_t *)&rte_flow_item_port_id_mask, sizeof(struct rte_flow_item_port_id), - error); + MLX5_ITEM_RANGE_NOT_ACCEPTED, error); if (ret) return ret; if (!spec) @@ -1691,7 +1691,7 @@ struct field_modify_info modify_tcp[] = { ret = mlx5_flow_item_acceptable(item, (const uint8_t *)mask, (const uint8_t *)&nic_mask, sizeof(struct rte_flow_item_vlan), - error); + MLX5_ITEM_RANGE_NOT_ACCEPTED, error); if (ret) return ret; if (!tunnel && mask->tci != RTE_BE16(0x0fff)) { @@ -1778,11 +1778,126 @@ struct field_modify_info modify_tcp[] = { RTE_FLOW_ERROR_TYPE_ITEM, item, "Match is supported for GTP" " flags only"); - return mlx5_flow_item_acceptable - (item, (const uint8_t *)mask, - (const uint8_t *)&nic_mask, - sizeof(struct rte_flow_item_gtp), - error); + return mlx5_flow_item_acceptable(item, (const uint8_t *)mask, + (const uint8_t *)&nic_mask, + sizeof(struct rte_flow_item_gtp), + MLX5_ITEM_RANGE_NOT_ACCEPTED, error); +} + +/** + * Validate IPV4 item. + * Use existing validation function mlx5_flow_validate_item_ipv4(), and + * add specific validation of fragment_offset field, + * + * @param[in] item + * Item specification. + * @param[in] item_flags + * Bit-fields that holds the items detected until now. + * @param[out] error + * Pointer to error structure. + * + * @return + * 0 on success, a negative errno value otherwise and rte_errno is set. 
+ */ +static int +flow_dv_validate_item_ipv4(const struct rte_flow_item *item, + uint64_t item_flags, + uint64_t last_item, + uint16_t ether_type, + struct rte_flow_error *error) +{ + int ret; + const struct rte_flow_item_ipv4 *spec = item->spec; + const struct rte_flow_item_ipv4 *last = item->last; + const struct rte_flow_item_ipv4 *mask = item->mask; + rte_be16_t fragment_offset_spec = 0; + rte_be16_t fragment_offset_last = 0; + const struct rte_flow_item_ipv4 nic_ipv4_mask = { + .hdr = { + .src_addr = RTE_BE32(0xffffffff), + .dst_addr = RTE_BE32(0xffffffff), + .type_of_service = 0xff, + .fragment_offset = RTE_BE16(0xffff), + .next_proto_id = 0xff, + .time_to_live = 0xff, + }, + }; + + ret = mlx5_flow_validate_item_ipv4(item, item_flags, last_item, + ether_type, &nic_ipv4_mask, + MLX5_ITEM_RANGE_ACCEPTED, error); + if (ret < 0) + return ret; + if (spec && mask) + fragment_offset_spec = spec->hdr.fragment_offset & + mask->hdr.fragment_offset; + if (!fragment_offset_spec) + return 0; + /* + * spec and mask are valid, enforce using full mask to make sure the + * complete value is used correctly. + */ + if ((mask->hdr.fragment_offset & RTE_BE16(MLX5_IPV4_FRAG_OFFSET_MASK)) + != RTE_BE16(MLX5_IPV4_FRAG_OFFSET_MASK)) + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM_MASK, + item, "must use full mask for" + " fragment_offset"); + /* + * Match on fragment_offset 0x2000 means MF is 1 and frag-offset is 0, + * indicating this is 1st fragment of fragmented packet. + * This is not yet supported in MLX5, return appropriate error message. + */ + if (fragment_offset_spec == RTE_BE16(RTE_IPV4_HDR_MF_FLAG)) + return rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_ITEM, item, + "match on first fragment not " + "supported"); + if (fragment_offset_spec && !last) + return rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_ITEM, item, + "specified value not supported"); + /* spec and last are valid, validate the specified range. */ + fragment_offset_last = last->hdr.fragment_offset & + mask->hdr.fragment_offset; + /* + * Match on fragment_offset spec 0x2001 and last 0x3fff + * means MF is 1 and frag-offset is > 0. + * This packet is fragment 2nd and onward, excluding last. + * This is not yet supported in MLX5, return appropriate + * error message. + */ + if (fragment_offset_spec == RTE_BE16(RTE_IPV4_HDR_MF_FLAG + 1) && + fragment_offset_last == RTE_BE16(MLX5_IPV4_FRAG_OFFSET_MASK)) + return rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_ITEM_LAST, + last, "match on following " + "fragments not supported"); + /* + * Match on fragment_offset spec 0x0001 and last 0x1fff + * means MF is 0 and frag-offset is > 0. + * This packet is last fragment of fragmented packet. + * This is not yet supported in MLX5, return appropriate + * error message. + */ + if (fragment_offset_spec == RTE_BE16(1) && + fragment_offset_last == RTE_BE16(RTE_IPV4_HDR_OFFSET_MASK)) + return rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_ITEM_LAST, + last, "match on last " + "fragment not supported"); + /* + * Match on fragment_offset spec 0x0001 and last 0x3fff + * means MF and/or frag-offset is not 0. + * This is a fragmented packet. + * Other range values are invalid and rejected. 
+	 */
+	if (!(fragment_offset_spec == RTE_BE16(1) &&
+	      fragment_offset_last == RTE_BE16(MLX5_IPV4_FRAG_OFFSET_MASK)))
+		return rte_flow_error_set(error, ENOTSUP,
+					  RTE_FLOW_ERROR_TYPE_ITEM_LAST, last,
+					  "specified range not supported");
+	return 0;
 }
 
 /**
@@ -5084,15 +5199,6 @@ struct field_modify_info modify_tcp[] = {
 			.dst_port = RTE_BE16(UINT16_MAX),
 		}
 	};
-	const struct rte_flow_item_ipv4 nic_ipv4_mask = {
-		.hdr = {
-			.src_addr = RTE_BE32(0xffffffff),
-			.dst_addr = RTE_BE32(0xffffffff),
-			.type_of_service = 0xff,
-			.next_proto_id = 0xff,
-			.time_to_live = 0xff,
-		},
-	};
 	const struct rte_flow_item_ipv6 nic_ipv6_mask = {
 		.hdr = {
 			.src_addr =
@@ -5192,11 +5298,9 @@ struct field_modify_info modify_tcp[] = {
 		case RTE_FLOW_ITEM_TYPE_IPV4:
 			mlx5_flow_tunnel_ip_check(items, next_protocol,
 						  &item_flags, &tunnel);
-			ret = mlx5_flow_validate_item_ipv4(items, item_flags,
-							   last_item,
-							   ether_type,
-							   &nic_ipv4_mask,
-							   error);
+			ret = flow_dv_validate_item_ipv4(items, item_flags,
+							 last_item, ether_type,
+							 error);
 			if (ret < 0)
 				return ret;
 			last_item = tunnel ? MLX5_FLOW_LAYER_INNER_L3_IPV4 :
@@ -6296,6 +6400,10 @@ struct field_modify_info modify_tcp[] = {
 		 ipv4_m->hdr.time_to_live);
 	MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_ttl_hoplimit,
 		 ipv4_v->hdr.time_to_live & ipv4_m->hdr.time_to_live);
+	MLX5_SET(fte_match_set_lyr_2_4, headers_m, frag,
+		 !!(ipv4_m->hdr.fragment_offset));
+	MLX5_SET(fte_match_set_lyr_2_4, headers_v, frag,
+		 !!(ipv4_v->hdr.fragment_offset & ipv4_m->hdr.fragment_offset));
 }
 
 /**
diff --git a/drivers/net/mlx5/mlx5_flow_verbs.c b/drivers/net/mlx5/mlx5_flow_verbs.c
index 62c18b8..276bcb5 100644
--- a/drivers/net/mlx5/mlx5_flow_verbs.c
+++ b/drivers/net/mlx5/mlx5_flow_verbs.c
@@ -1312,10 +1312,11 @@
 			}
 			break;
 		case RTE_FLOW_ITEM_TYPE_IPV4:
-			ret = mlx5_flow_validate_item_ipv4(items, item_flags,
-							   last_item,
-							   ether_type, NULL,
-							   error);
+			ret = mlx5_flow_validate_item_ipv4
+						(items, item_flags,
+						 last_item, ether_type, NULL,
+						 MLX5_ITEM_RANGE_NOT_ACCEPTED,
+						 error);
 			if (ret < 0)
 				return ret;
 			last_item = tunnel ? MLX5_FLOW_LAYER_INNER_L3_IPV4 :
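[Editor's note] For illustration only, a minimal sketch of the
spec/last/mask combination that flow_dv_validate_item_ipv4() accepts
for matching all fragmented IPv4 packets (MF == 1 or fragment offset
greater than 0); the item name is hypothetical, the values come from
the validation logic above:

    #include <rte_byteorder.h>
    #include <rte_flow.h>

    /* Values are big-endian in the item; the full 0x3fff mask is
     * mandatory, and the 1..0x3fff range covers all fragments. */
    static const struct rte_flow_item_ipv4 ipv4_spec = {
        .hdr.fragment_offset = RTE_BE16(1),
    };
    static const struct rte_flow_item_ipv4 ipv4_last = {
        .hdr.fragment_offset = RTE_BE16(0x3fff),
    };
    static const struct rte_flow_item_ipv4 ipv4_mask = {
        .hdr.fragment_offset = RTE_BE16(0x3fff),
    };
    static const struct rte_flow_item frag_ipv4_item = {
        .type = RTE_FLOW_ITEM_TYPE_IPV4,
        .spec = &ipv4_spec,
        .last = &ipv4_last,
        .mask = &ipv4_mask,
    };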
From patchwork Thu Oct 1 21:15:05 2020
From: Dekel Peled
Date: Fri, 2 Oct 2020 00:15:05 +0300
Subject: [dpdk-dev] [PATCH v2 08/11] net/mlx5: support match on IPv6 fragment packets

This patch adds to the MLX5 PMD support for matching on fragmented and
non-fragmented IPv6 packets, using the new frag_ext_exist field added
to rte_flow following RFC [1]. An illustrative item sketch follows
after the diff.

[1] https://mails.dpdk.org/archives/dev/2020-August/177257.html

Signed-off-by: Dekel Peled
---
 drivers/net/mlx5/mlx5_flow_dv.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index 3379caf..4403abc 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -5211,6 +5211,7 @@ struct field_modify_info modify_tcp[] = {
 			.proto = 0xff,
 			.hop_limits = 0xff,
 		},
+		.frag_ext_exist = 1,
 	};
 	const struct rte_flow_item_ecpri nic_ecpri_mask = {
 		.hdr = {
@@ -6519,6 +6520,10 @@ struct field_modify_info modify_tcp[] = {
 		 ipv6_m->hdr.hop_limits);
 	MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_ttl_hoplimit,
 		 ipv6_v->hdr.hop_limits & ipv6_m->hdr.hop_limits);
+	MLX5_SET(fte_match_set_lyr_2_4, headers_m, frag,
+		 !!(ipv6_m->frag_ext_exist));
+	MLX5_SET(fte_match_set_lyr_2_4, headers_v, frag,
+		 !!(ipv6_v->frag_ext_exist & ipv6_m->frag_ext_exist));
 }
 
 /**
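[Editor's note] For illustration only, the kind of item this translation
handles, here matching fragmented IPv6 packets; spec 0 with the same
mask would match non-fragmented packets instead. The item name is
hypothetical:

    #include <rte_flow.h>

    static const struct rte_flow_item_ipv6 ipv6_spec = {
        .frag_ext_exist = 1,
    };
    static const struct rte_flow_item_ipv6 ipv6_mask = {
        .frag_ext_exist = 1,
    };
    static const struct rte_flow_item frag_ipv6_item = {
        .type = RTE_FLOW_ITEM_TYPE_IPV6,
        .spec = &ipv6_spec,
        .mask = &ipv6_mask,
    };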
From patchwork Thu Oct 1 21:15:06 2020
From: Dekel Peled
Date: Fri, 2 Oct 2020 00:15:06 +0300
Subject: [dpdk-dev] [PATCH v2 09/11] net/mlx5: support match on IPv6 fragment ext. item

The rte_flow update, following RFC [1], added to ethdev the rte_flow
item ipv6_frag_ext. This patch adds to the MLX5 PMD the option to match
on this item type.

[1] http://mails.dpdk.org/archives/dev/2020-March/160255.html

Signed-off-by: Dekel Peled
Acked-by: Ori Kam
---
 drivers/net/mlx5/mlx5_flow.h    |   4 +
 drivers/net/mlx5/mlx5_flow_dv.c | 209 ++++++++++++++++++++++++++++++++++++++++
 2 files changed, 213 insertions(+)

diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index 1e30c93..376519f 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -122,6 +122,10 @@ enum mlx5_feature_name {
 /* Pattern eCPRI Layer bit. */
 #define MLX5_FLOW_LAYER_ECPRI (UINT64_C(1) << 29)
 
+/* IPv6 Fragment Extension Header bit. */
+#define MLX5_FLOW_LAYER_OUTER_L3_IPV6_FRAG_EXT (1u << 30)
+#define MLX5_FLOW_LAYER_INNER_L3_IPV6_FRAG_EXT (1u << 31)
+
 /* Outer Masks. */
 #define MLX5_FLOW_LAYER_OUTER_L3 \
 	(MLX5_FLOW_LAYER_OUTER_L3_IPV4 | MLX5_FLOW_LAYER_OUTER_L3_IPV6)
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index 4403abc..eb1db12 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -1901,6 +1901,120 @@ struct field_modify_info modify_tcp[] = {
 }
 
 /**
+ * Validate IPV6 fragment extension item.
+ *
+ * @param[in] item
+ *   Item specification.
+ * @param[in] item_flags
+ *   Bit-fields that holds the items detected until now.
+ * @param[out] error
+ *   Pointer to error structure.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+static int
+flow_dv_validate_item_ipv6_frag_ext(const struct rte_flow_item *item,
+				    uint64_t item_flags,
+				    struct rte_flow_error *error)
+{
+	const struct rte_flow_item_ipv6_frag_ext *spec = item->spec;
+	const struct rte_flow_item_ipv6_frag_ext *last = item->last;
+	const struct rte_flow_item_ipv6_frag_ext *mask = item->mask;
+	rte_be16_t frag_data_spec = 0;
+	rte_be16_t frag_data_last = 0;
+	const int tunnel = !!(item_flags & MLX5_FLOW_LAYER_TUNNEL);
+	const uint64_t l4m = tunnel ?
MLX5_FLOW_LAYER_INNER_L4 : + MLX5_FLOW_LAYER_OUTER_L4; + int ret = 0; + struct rte_flow_item_ipv6_frag_ext nic_mask = { + .hdr = { + .next_header = 0xff, + .frag_data = RTE_BE16(0xffff), + }, + }; + + if (item_flags & l4m) + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, item, + "ipv6 fragment extension item cannot " + "follow L4 item."); + if ((tunnel && !(item_flags & MLX5_FLOW_LAYER_INNER_L3_IPV6)) || + (!tunnel && !(item_flags & MLX5_FLOW_LAYER_OUTER_L3_IPV6))) + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, item, + "ipv6 fragment extension item must " + "follow ipv6 item"); + if (spec && mask) + frag_data_spec = spec->hdr.frag_data & mask->hdr.frag_data; + if (!frag_data_spec) + return 0; + /* + * spec and mask are valid, enforce using full mask to make sure the + * complete value is used correctly. + */ + if ((mask->hdr.frag_data & RTE_BE16(RTE_IPV6_FRAG_USED_MASK)) != + RTE_BE16(RTE_IPV6_FRAG_USED_MASK)) + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM_MASK, + item, "must use full mask for" + " frag_data"); + /* + * Match on frag_data 0x00001 means M is 1 and frag-offset is 0. + * This is 1st fragment of fragmented packet. + */ + if (frag_data_spec == RTE_BE16(RTE_IPV6_EHDR_MF_MASK)) + return rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_ITEM, item, + "match on first fragment not " + "supported"); + if (frag_data_spec && !last) + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, item, + "specified value not supported"); + ret = mlx5_flow_item_acceptable + (item, (const uint8_t *)mask, + (const uint8_t *)&nic_mask, + sizeof(struct rte_flow_item_ipv6_frag_ext), + MLX5_ITEM_RANGE_ACCEPTED, error); + if (ret) + return ret; + /* spec and last are valid, validate the specified range. */ + frag_data_last = last->hdr.frag_data & mask->hdr.frag_data; + /* + * Match on frag_data spec 0x0009 and last 0xfff9 + * means M is 1 and frag-offset is > 0. + * This packet is fragment 2nd and onward, excluding last. + * This is not yet supported in MLX5, return appropriate + * error message. + */ + if (frag_data_spec == RTE_BE16(RTE_IPV6_EHDR_FO_ALIGN | + RTE_IPV6_EHDR_MF_MASK) && + frag_data_last == RTE_BE16(RTE_IPV6_FRAG_USED_MASK)) + return rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_ITEM_LAST, + last, "match on following " + "fragments not supported"); + /* + * Match on frag_data spec 0x0008 and last 0xfff8 + * means M is 0 and frag-offset is > 0. + * This packet is last fragment of fragmented packet. + * This is not yet supported in MLX5, return appropriate + * error message. + */ + if (frag_data_spec == RTE_BE16(RTE_IPV6_EHDR_FO_ALIGN) && + frag_data_last == RTE_BE16(RTE_IPV6_EHDR_FO_MASK)) + return rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_ITEM_LAST, + last, "match on last " + "fragment not supported"); + /* Other range values are invalid and rejected. */ + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM_LAST, last, + "specified range not supported"); +} + +/** * Validate the pop VLAN action. * * @param[in] dev @@ -5349,6 +5463,29 @@ struct field_modify_info modify_tcp[] = { next_protocol = 0xff; } break; + case RTE_FLOW_ITEM_TYPE_IPV6_FRAG_EXT: + ret = flow_dv_validate_item_ipv6_frag_ext(items, + item_flags, + error); + if (ret < 0) + return ret; + last_item = tunnel ? 
@@ -5349,6 +5463,29 @@ struct field_modify_info modify_tcp[] = {
 				next_protocol = 0xff;
 			}
 			break;
+		case RTE_FLOW_ITEM_TYPE_IPV6_FRAG_EXT:
+			ret = flow_dv_validate_item_ipv6_frag_ext(items,
+								  item_flags,
+								  error);
+			if (ret < 0)
+				return ret;
+			last_item = tunnel ?
+					MLX5_FLOW_LAYER_INNER_L3_IPV6_FRAG_EXT :
+					MLX5_FLOW_LAYER_OUTER_L3_IPV6_FRAG_EXT;
+			if (items->mask != NULL &&
+			    ((const struct rte_flow_item_ipv6_frag_ext *)
+			     items->mask)->hdr.next_header) {
+				next_protocol =
+					((const struct rte_flow_item_ipv6_frag_ext *)
+					 items->spec)->hdr.next_header;
+				next_protocol &=
+					((const struct rte_flow_item_ipv6_frag_ext *)
+					 items->mask)->hdr.next_header;
+			} else {
+				/* Reset for inner layer. */
+				next_protocol = 0xff;
+			}
+			break;
 		case RTE_FLOW_ITEM_TYPE_TCP:
 			ret = mlx5_flow_validate_item_tcp
 						(items, item_flags,
@@ -6527,6 +6664,57 @@ struct field_modify_info modify_tcp[] = {
 }
 
 /**
+ * Add IPV6 fragment extension item to matcher and to the value.
+ *
+ * @param[in, out] matcher
+ *   Flow matcher.
+ * @param[in, out] key
+ *   Flow matcher value.
+ * @param[in] item
+ *   Flow pattern to translate.
+ * @param[in] inner
+ *   Item is inner pattern.
+ */
+static void
+flow_dv_translate_item_ipv6_frag_ext(void *matcher, void *key,
+				     const struct rte_flow_item *item,
+				     int inner)
+{
+	const struct rte_flow_item_ipv6_frag_ext *ipv6_frag_ext_m = item->mask;
+	const struct rte_flow_item_ipv6_frag_ext *ipv6_frag_ext_v = item->spec;
+	const struct rte_flow_item_ipv6_frag_ext nic_mask = {
+		.hdr = {
+			.next_header = 0xff,
+			.frag_data = RTE_BE16(0xffff),
+		},
+	};
+	void *headers_m;
+	void *headers_v;
+
+	if (inner) {
+		headers_m = MLX5_ADDR_OF(fte_match_param, matcher,
+					 inner_headers);
+		headers_v = MLX5_ADDR_OF(fte_match_param, key, inner_headers);
+	} else {
+		headers_m = MLX5_ADDR_OF(fte_match_param, matcher,
+					 outer_headers);
+		headers_v = MLX5_ADDR_OF(fte_match_param, key, outer_headers);
+	}
+	/* IPv6 fragment extension item exists, so packet is IP fragment. */
+	MLX5_SET(fte_match_set_lyr_2_4, headers_m, frag, 1);
+	MLX5_SET(fte_match_set_lyr_2_4, headers_v, frag, 1);
+	if (!ipv6_frag_ext_v)
+		return;
+	if (!ipv6_frag_ext_m)
+		ipv6_frag_ext_m = &nic_mask;
+	MLX5_SET(fte_match_set_lyr_2_4, headers_m, ip_protocol,
+		 ipv6_frag_ext_m->hdr.next_header);
+	MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_protocol,
+		 ipv6_frag_ext_v->hdr.next_header &
+		 ipv6_frag_ext_m->hdr.next_header);
+}
+
+/**
  * Add TCP item to matcher and to the value.
  *
  * @param[in, out] matcher
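
The translation above maps the item's next_header field onto the
ip_protocol match, so an application can narrow a rule to fragments that
carry a given payload protocol. A hypothetical sketch (not part of this
patch) matching IPv6 fragments whose fragment header announces UDP:

    #include <netinet/in.h>
    #include <rte_flow.h>

    /*
     * frag_data is left unmasked on purpose: range matches on it are
     * rejected as unsupported by the validation function above.
     */
    static const struct rte_flow_item_ipv6_frag_ext frag_spec = {
            .hdr = { .next_header = IPPROTO_UDP },
    };
    static const struct rte_flow_item_ipv6_frag_ext frag_mask = {
            .hdr = { .next_header = 0xff },
    };

    static const struct rte_flow_item pattern[] = {
            { .type = RTE_FLOW_ITEM_TYPE_ETH },
            { .type = RTE_FLOW_ITEM_TYPE_IPV6 },
            { .type = RTE_FLOW_ITEM_TYPE_IPV6_FRAG_EXT,
              .spec = &frag_spec, .mask = &frag_mask },
            { .type = RTE_FLOW_ITEM_TYPE_END },
    };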
@@ -8868,6 +9056,27 @@ struct field_modify_info modify_tcp[] = {
 				next_protocol = 0xff;
 			}
 			break;
+		case RTE_FLOW_ITEM_TYPE_IPV6_FRAG_EXT:
+			flow_dv_translate_item_ipv6_frag_ext(match_mask,
+							     match_value,
+							     items, tunnel);
+			last_item = tunnel ?
+					MLX5_FLOW_LAYER_INNER_L3_IPV6_FRAG_EXT :
+					MLX5_FLOW_LAYER_OUTER_L3_IPV6_FRAG_EXT;
+			if (items->mask != NULL &&
+			    ((const struct rte_flow_item_ipv6_frag_ext *)
+			     items->mask)->hdr.next_header) {
+				next_protocol =
+					((const struct rte_flow_item_ipv6_frag_ext *)
+					 items->spec)->hdr.next_header;
+				next_protocol &=
+					((const struct rte_flow_item_ipv6_frag_ext *)
+					 items->mask)->hdr.next_header;
+			} else {
+				/* Reset for inner layer. */
+				next_protocol = 0xff;
+			}
+			break;
 		case RTE_FLOW_ITEM_TYPE_TCP:
 			flow_dv_translate_item_tcp(match_mask, match_value,
 						   items, tunnel);

From patchwork Thu Oct 1 21:15:07 2020
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Dekel Peled
X-Patchwork-Id: 79507
X-Patchwork-Delegate: rasland@nvidia.com
Return-Path: 
X-Original-To: patchwork@inbox.dpdk.org
Delivered-To: patchwork@inbox.dpdk.org
Received: from dpdk.org (dpdk.org [92.243.14.124])
 by inbox.dpdk.org (Postfix) with ESMTP id 580DBA04BA;
 Thu, 1 Oct 2020 23:18:50 +0200 (CEST)
Received: from [92.243.14.124] (localhost [127.0.0.1])
 by dpdk.org (Postfix) with ESMTP id 15E231DABD;
 Thu, 1 Oct 2020 23:15:51 +0200 (CEST)
Received: from mellanox.co.il (mail-il-dmz.mellanox.com [193.47.165.129])
 by dpdk.org (Postfix) with ESMTP id A7C1F1D6F3 for ;
 Thu, 1 Oct 2020 23:15:42 +0200 (CEST)
Received: from Internal Mail-Server by MTLPINE1 (envelope-from
 dekelp@nvidia.com) with SMTP; 2 Oct 2020 00:15:38 +0300
Received: from mtl-vdi-280.wap.labs.mlnx.
 (mtl-vdi-280.wap.labs.mlnx [10.228.134.250])
 by labmailer.mlnx (8.13.8/8.13.8) with ESMTP id 091LFOe6021286;
 Fri, 2 Oct 2020 00:15:38 +0300
From: Dekel Peled
To: orika@nvidia.com, thomas@monjalon.net, ferruh.yigit@intel.com,
 arybchenko@solarflare.com, konstantin.ananyev@intel.com,
 olivier.matz@6wind.com, wenzhuo.lu@intel.com, beilei.xing@intel.com,
 bernard.iremonger@intel.com, matan@nvidia.com, shahafs@nvidia.com,
 viacheslavo@nvidia.com
Cc: dev@dpdk.org
Date: Fri, 2 Oct 2020 00:15:07 +0300
Message-Id: 
X-Mailer: git-send-email 1.7.1
In-Reply-To: 
References: 
Subject: [dpdk-dev] [PATCH v2 10/11] doc: update release notes for MLX5 L3
 frag support
X-BeenThere: dev@dpdk.org
X-Mailman-Version: 2.1.15
Precedence: list
List-Id: DPDK patches and discussions
List-Unsubscribe: ,
List-Archive: 
List-Post: 
List-Help: 
List-Subscribe: ,
Errors-To: dev-bounces@dpdk.org
Sender: "dev"

This patch updates the 20.11 release notes with the changes included in
this series:
1) MLX5 support for matching on fragmented/non-fragmented IPv4/IPv6
   packets.
2) ABI change in the ethdev struct rte_flow_item_ipv6.

Signed-off-by: Dekel Peled
Acked-by: Ori Kam
---
 doc/guides/rel_notes/release_20_11.rst | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
index 7f9d0dd..91e1773 100644
--- a/doc/guides/rel_notes/release_20_11.rst
+++ b/doc/guides/rel_notes/release_20_11.rst
@@ -90,6 +90,11 @@ New Features
 
   * Added support for 200G PAM4 link speed.
 
+* **Updated Mellanox mlx5 driver.**
+
+  Updated Mellanox mlx5 driver with new features and improvements, including:
+
+  * Added support for matching on fragmented/non-fragmented IPv4/IPv6 packets.
 
 Removed Items
 -------------
@@ -215,6 +220,11 @@ ABI Changes
 
 * ``ethdev`` internal functions are marked with ``__rte_internal`` tag.
 
+  * Added extensions' attributes to struct ``rte_flow_item_ipv6``.
+    A set of additional values was added to the struct, indicating the
+    existence of every defined extension header type.
+    Applications should use the new values to identify existing
+    extensions in the packet header.
 
 Known Issues
 ------------
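
Together with the ABI change noted above, fragmented or non-fragmented
traffic can also be matched directly on the IPv6 item. A hypothetical
sketch using the frag_ext_exist attribute introduced earlier in this
series (illustrative application code only):

    #include <rte_flow.h>

    /* Match only non-fragmented IPv6: frag_ext_exist must be 0. */
    static const struct rte_flow_item_ipv6 ipv6_spec = {
            .frag_ext_exist = 0,
    };
    static const struct rte_flow_item_ipv6 ipv6_mask = {
            .frag_ext_exist = 1, /* consider only this attribute */
    };

    static const struct rte_flow_item pattern[] = {
            { .type = RTE_FLOW_ITEM_TYPE_ETH },
            { .type = RTE_FLOW_ITEM_TYPE_IPV6,
              .spec = &ipv6_spec, .mask = &ipv6_mask },
            { .type = RTE_FLOW_ITEM_TYPE_END },
    };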
From patchwork Thu Oct 1 21:15:08 2020
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Dekel Peled
X-Patchwork-Id: 79508
X-Patchwork-Delegate: rasland@nvidia.com
Return-Path: 
X-Original-To: patchwork@inbox.dpdk.org
Delivered-To: patchwork@inbox.dpdk.org
Received: from dpdk.org (dpdk.org [92.243.14.124])
 by inbox.dpdk.org (Postfix) with ESMTP id 2DAC7A04BA;
 Thu, 1 Oct 2020 23:19:07 +0200 (CEST)
Received: from [92.243.14.124] (localhost [127.0.0.1])
 by dpdk.org (Postfix) with ESMTP id 18BB01DAD7;
 Thu, 1 Oct 2020 23:15:52 +0200 (CEST)
Received: from mellanox.co.il (mail-il-dmz.mellanox.com [193.47.165.129])
 by dpdk.org (Postfix) with ESMTP id AC1D01D6F6 for ;
 Thu, 1 Oct 2020 23:15:42 +0200 (CEST)
Received: from Internal Mail-Server by MTLPINE1 (envelope-from
 dekelp@nvidia.com) with SMTP; 2 Oct 2020 00:15:39 +0300
Received: from mtl-vdi-280.wap.labs.mlnx.
 (mtl-vdi-280.wap.labs.mlnx [10.228.134.250])
 by labmailer.mlnx (8.13.8/8.13.8) with ESMTP id 091LFOe7021286;
 Fri, 2 Oct 2020 00:15:39 +0300
From: Dekel Peled
To: orika@nvidia.com, thomas@monjalon.net, ferruh.yigit@intel.com,
 arybchenko@solarflare.com, konstantin.ananyev@intel.com,
 olivier.matz@6wind.com, wenzhuo.lu@intel.com, beilei.xing@intel.com,
 bernard.iremonger@intel.com, matan@nvidia.com, shahafs@nvidia.com,
 viacheslavo@nvidia.com
Cc: dev@dpdk.org
Date: Fri, 2 Oct 2020 00:15:08 +0300
Message-Id: <5a25973431babe75cf681df7827ee69942297446.1601586564.git.dekelp@nvidia.com>
X-Mailer: git-send-email 1.7.1
In-Reply-To: 
References: 
Subject: [dpdk-dev] [PATCH v2 11/11] net/mlx5: enforce limitation on IPv6
 next proto
X-BeenThere: dev@dpdk.org
X-Mailman-Version: 2.1.15
Precedence: list
List-Id: DPDK patches and discussions
List-Unsubscribe: ,
List-Archive: 
List-Post: 
List-Help: 
List-Subscribe: ,
Errors-To: dev-bounces@dpdk.org
Sender: "dev"

Due to a PRM requirement, the IPv6 header item 'proto' field, indicating
the next header protocol, should not be set to an extension header type.
This patch adds the relevant validation and documents the limitation.
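
In practice the limitation means an application must not announce an
extension header through the IPv6 item itself; the extension item that
follows carries the next protocol instead. A hypothetical sketch of the
rejected and accepted forms (illustrative only, not part of this patch):

    #include <netinet/in.h>
    #include <rte_flow.h>

    /*
     * Rejected by this patch (assuming a full 0xff proto mask):
     * 'proto' names an extension header type.
     */
    static const struct rte_flow_item_ipv6 bad_ipv6_spec = {
            .hdr = { .proto = IPPROTO_FRAGMENT },
    };

    /*
     * Accepted: leave 'proto' unset in the IPv6 item and let the
     * IPV6_FRAG_EXT item that follows announce the next protocol.
     */
    static const struct rte_flow_item_ipv6 good_ipv6_spec = { 0 };
    static const struct rte_flow_item_ipv6_frag_ext frag_spec = {
            .hdr = { .next_header = IPPROTO_UDP },
    };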

Signed-off-by: Dekel Peled
Acked-by: Ori Kam
---
 doc/guides/nics/mlx5.rst     |  7 +++++++
 drivers/net/mlx5/mlx5_flow.c | 14 ++++++++++++--
 2 files changed, 19 insertions(+), 2 deletions(-)

diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
index 211c0c5..e6ca5e1 100644
--- a/doc/guides/nics/mlx5.rst
+++ b/doc/guides/nics/mlx5.rst
@@ -311,6 +311,13 @@ Limitations
   for some NICs (such as ConnectX-6 Dx and BlueField 2).
   The capability bit ``scatter_fcs_w_decap_disable`` shows NIC support.
 
+- IPv6 header item 'proto' field, indicating the next header protocol, should
+  not be set to an extension header type.
+  If the next header is an extension header, it should not be specified in
+  the IPv6 header item 'proto' field.
+  The last extension header item 'next header' field can specify the
+  following header protocol type.
+
 Statistics
 ----------
 
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index 906741f..7a438cf 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -1645,9 +1645,9 @@ struct mlx5_flow_tunnel_info {
 					  RTE_FLOW_ERROR_TYPE_ITEM, item,
 					  "IPv6 cannot follow L2/VLAN layer "
 					  "which ether type is not IPv6");
+	if (mask && spec)
+		next_proto = mask->hdr.proto & spec->hdr.proto;
 	if (item_flags & MLX5_FLOW_LAYER_IPV6_ENCAP) {
-		if (mask && spec)
-			next_proto = mask->hdr.proto & spec->hdr.proto;
 		if (next_proto == IPPROTO_IPIP || next_proto == IPPROTO_IPV6)
 			return rte_flow_error_set(error, EINVAL,
 						  RTE_FLOW_ERROR_TYPE_ITEM,
@@ -1655,6 +1655,16 @@ struct mlx5_flow_tunnel_info {
 						  "multiple tunnel "
 						  "not supported");
 	}
+	if (next_proto == IPPROTO_HOPOPTS ||
+	    next_proto == IPPROTO_ROUTING ||
+	    next_proto == IPPROTO_FRAGMENT ||
+	    next_proto == IPPROTO_ESP ||
+	    next_proto == IPPROTO_AH ||
+	    next_proto == IPPROTO_DSTOPTS)
+		return rte_flow_error_set(error, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_ITEM, item,
+					  "IPv6 proto (next header) should "
+					  "not be set as extension header");
 	if (item_flags & MLX5_FLOW_LAYER_IPIP)
 		return rte_flow_error_set(error, EINVAL,
 					  RTE_FLOW_ERROR_TYPE_ITEM, item,