[v1] net/mlx5: fix RSS expansion for inner tunnel VLAN

Message ID 20210803181322.1409492-1-lmargalit@nvidia.com (mailing list archive)
State Accepted, archived
Delegated to: Raslan Darawsheh
Series [v1] net/mlx5: fix RSS expansion for inner tunnel VLAN

Checks

Context Check Description
ci/checkpatch success coding style OK
ci/github-robot success github build: passed
ci/Intel-compilation success Compilation OK
ci/iol-broadcom-Functional success Functional Testing PASS
ci/iol-intel-Performance success Performance Testing PASS
ci/iol-broadcom-Performance success Performance Testing PASS
ci/iol-intel-Functional success Functional Testing PASS
ci/iol-aarch64-unit-testing fail Testing issues
ci/iol-aarch64-compile-testing success Testing PASS
ci/iol-mellanox-Performance success Performance Testing PASS
ci/intel-Testing success Testing PASS
ci/iol-abi-testing success Testing PASS
ci/iol-x86_64-unit-testing success Testing PASS
ci/iol-x86_64-compile-testing fail Testing issues

Commit Message

Lior Margalit Aug. 3, 2021, 6:13 p.m. UTC
  The RSS expansion algorithm uses a graph to find the possible expansion
paths. The VLAN item in the flow pattern requires special treatment,
because it should not be added implicitly by the expansion algorithm.
If the flow pattern ends with an ETH item, the pattern is expanded
with IPv4 and IPv6. For example:
testpmd> flow create ... eth / end actions rss / end
ETH END
ETH IPV4 END
ETH IPV6 END
If a VLAN item follows the ETH item in the flow pattern, the pattern
is expanded with IPv4 and IPv6 following the VLAN item.
For example:
testpmd> flow create ... eth / vlan / end actions rss level 1 / end
ETH VLAN END
ETH VLAN IPV4 END
ETH VLAN IPV6 END

The case of an inner (tunnel) VLAN item was not handled, so the flow
pattern was not expanded with IPv4 and IPv6 as expected.
Example with an inner VLAN:
testpmd> flow create ... / vxlan / eth / vlan / end actions rss level 2
/ end
The current result of the expansion algorithm:
ETH IPV6 UDP VXLAN ETH VLAN END
The expected result of the expansion algorithm:
ETH IPV6 UDP VXLAN ETH VLAN END
ETH IPV6 UDP VXLAN ETH VLAN IPV4 END
ETH IPV6 UDP VXLAN ETH VLAN IPV6 END

The fix is to introduce a new flag that is set on a graph expansion
node to apply 'explicit' behavior: the node is not implicitly added to
the expanded pattern if it is not present in the flow pattern, but the
expansion algorithm can still go deeper to its next nodes.
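
For illustration only, here is a minimal self-contained sketch of the
matching step once such a flag exists. The names (expand_node, match_item,
NODE_EXPLICIT) are simplified stand-ins for the driver's structures, not
the actual code; the real change is in the patch below.

/* Simplified sketch (not the driver code): how a successor flagged
 * "explicit" is handled while matching a flow item against the
 * expansion graph. Structures are reduced to the fields needed here.
 */
#include <stddef.h>
#include <stdint.h>

#define NODE_EXPLICIT (UINT64_C(1) << 1)

struct expand_node {
	const int *next;     /* zero-terminated list of graph indexes */
	int type;            /* flow item type this node matches */
	uint64_t node_flags; /* NODE_* flags */
};

/*
 * Return the successor of 'node' matching 'item_type'. A successor with
 * NODE_EXPLICIT set (e.g. VLAN) is never consumed implicitly: when it
 * does not match, the search continues among its own successors.
 */
static const struct expand_node *
match_item(const struct expand_node *graph, const struct expand_node *node,
	   int item_type)
{
	int i = 0;

	while (node->next && node->next[i]) {
		const struct expand_node *next = &graph[node->next[i]];

		if (next->type == item_type)
			return next;
		if (next->node_flags & NODE_EXPLICIT) {
			node = next; /* descend through the explicit node */
			i = 0;
		} else {
			++i;
		}
	}
	return NULL;
}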

Fixes: c7870bfe09dc ("ethdev: move RSS expansion code to mlx5 driver")
Cc: stable@dpdk.org

Signed-off-by: Lior Margalit <lmargalit@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
 drivers/net/mlx5/mlx5_flow.c | 97 ++++++++++++++++++------------------
 1 file changed, 48 insertions(+), 49 deletions(-)
  

Comments

Raslan Darawsheh Aug. 17, 2021, 9:47 a.m. UTC | #1
Hi,

> -----Original Message-----
> From: dev <dev-bounces@dpdk.org> On Behalf Of Lior Margalit
> Sent: Tuesday, August 3, 2021 9:13 PM
> To: Matan Azrad <matan@nvidia.com>
> Cc: dev@dpdk.org; Lior Margalit <lmargalit@nvidia.com>; stable@dpdk.org
> Subject: [dpdk-dev] [PATCH v1] net/mlx5: fix RSS expansion for inner tunnel
> VLAN
> [full commit message quoted above]

Patch applied to next-net-mlx,

Kindest regards,
Raslan Darawsheh
  

Patch

diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index 0689e6d45d..858d25f837 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -100,10 +100,25 @@  struct mlx5_flow_expand_node {
 	 * RSS types bit-field associated with this node
 	 * (see ETH_RSS_* definitions).
 	 */
-	uint8_t optional;
-	/**< optional expand field. Default 0 to expand, 1 not go deeper. */
+	uint64_t node_flags;
+	/**<
+	 *  Bit-fields that define how the node is used in the expansion.
+	 * (see MLX5_EXPANSION_NODE_* definitions).
+	 */
 };
 
+/* Optional expand field. The expansion alg will not go deeper. */
+#define MLX5_EXPANSION_NODE_OPTIONAL (UINT64_C(1) << 0)
+
+/* The node is not added implicitly as expansion to the flow pattern.
+ * If the node type does not match the flow pattern item type, the
+ * expansion alg will go deeper to its next items.
+ * In the current implementation, the list of next nodes indexes can
+ * have up to one node with this flag set and it has to be the last
+ * node index (before the list terminator).
+ */
+#define MLX5_EXPANSION_NODE_EXPLICIT (UINT64_C(1) << 1)
+
 /** Object returned by mlx5_flow_expand_rss(). */
 struct mlx5_flow_expand_rss {
 	uint32_t entries;
@@ -308,10 +323,17 @@  mlx5_flow_expand_rss(struct mlx5_flow_expand_rss *buf, size_t size,
 			continue;
 		}
 		last_item = item;
-		for (i = 0; node->next && node->next[i]; ++i) {
+		i = 0;
+		while (node->next && node->next[i]) {
 			next = &graph[node->next[i]];
 			if (next->type == item->type)
 				break;
+			if (next->node_flags & MLX5_EXPANSION_NODE_EXPLICIT) {
+				node = next;
+				i = 0;
+			} else {
+				++i;
+			}
 		}
 		if (next)
 			node = next;
@@ -370,6 +392,16 @@  mlx5_flow_expand_rss(struct mlx5_flow_expand_rss *buf, size_t size,
 	}
 	memset(flow_items, 0, sizeof(flow_items));
 	next_node = node->next;
+	while (next_node) {
+		/*
+		 * Skip the nodes with the MLX5_EXPANSION_NODE_EXPLICIT
+		 * flag set, because they were not found in the flow pattern.
+		 */
+		node = &graph[*next_node];
+		if (!(node->node_flags & MLX5_EXPANSION_NODE_EXPLICIT))
+			break;
+		next_node = node->next;
+	}
 	stack[stack_pos] = next_node;
 	node = next_node ? &graph[*next_node] : NULL;
 	while (node) {
@@ -404,7 +436,8 @@  mlx5_flow_expand_rss(struct mlx5_flow_expand_rss *buf, size_t size,
 			addr = (void *)(((uintptr_t)addr) + n);
 		}
 		/* Go deeper. */
-		if (!node->optional && node->next) {
+		if (!(node->node_flags & MLX5_EXPANSION_NODE_OPTIONAL) &&
+				node->next) {
 			next_node = node->next;
 			if (stack_pos++ == MLX5_RSS_EXP_ELT_N) {
 				rte_errno = E2BIG;
@@ -429,10 +462,7 @@  mlx5_flow_expand_rss(struct mlx5_flow_expand_rss *buf, size_t size,
 enum mlx5_expansion {
 	MLX5_EXPANSION_ROOT,
 	MLX5_EXPANSION_ROOT_OUTER,
-	MLX5_EXPANSION_ROOT_ETH_VLAN,
-	MLX5_EXPANSION_ROOT_OUTER_ETH_VLAN,
 	MLX5_EXPANSION_OUTER_ETH,
-	MLX5_EXPANSION_OUTER_ETH_VLAN,
 	MLX5_EXPANSION_OUTER_VLAN,
 	MLX5_EXPANSION_OUTER_IPV4,
 	MLX5_EXPANSION_OUTER_IPV4_UDP,
@@ -447,7 +477,6 @@  enum mlx5_expansion {
 	MLX5_EXPANSION_GRE_KEY,
 	MLX5_EXPANSION_MPLS,
 	MLX5_EXPANSION_ETH,
-	MLX5_EXPANSION_ETH_VLAN,
 	MLX5_EXPANSION_VLAN,
 	MLX5_EXPANSION_IPV4,
 	MLX5_EXPANSION_IPV4_UDP,
@@ -473,22 +502,7 @@  static const struct mlx5_flow_expand_node mlx5_support_expansion[] = {
 						  MLX5_EXPANSION_OUTER_IPV6),
 		.type = RTE_FLOW_ITEM_TYPE_END,
 	},
-	[MLX5_EXPANSION_ROOT_ETH_VLAN] = {
-		.next = MLX5_FLOW_EXPAND_RSS_NEXT(MLX5_EXPANSION_ETH_VLAN),
-		.type = RTE_FLOW_ITEM_TYPE_END,
-	},
-	[MLX5_EXPANSION_ROOT_OUTER_ETH_VLAN] = {
-		.next = MLX5_FLOW_EXPAND_RSS_NEXT
-						(MLX5_EXPANSION_OUTER_ETH_VLAN),
-		.type = RTE_FLOW_ITEM_TYPE_END,
-	},
 	[MLX5_EXPANSION_OUTER_ETH] = {
-		.next = MLX5_FLOW_EXPAND_RSS_NEXT(MLX5_EXPANSION_OUTER_IPV4,
-						  MLX5_EXPANSION_OUTER_IPV6),
-		.type = RTE_FLOW_ITEM_TYPE_ETH,
-		.rss_types = 0,
-	},
-	[MLX5_EXPANSION_OUTER_ETH_VLAN] = {
 		.next = MLX5_FLOW_EXPAND_RSS_NEXT(MLX5_EXPANSION_OUTER_VLAN),
 		.type = RTE_FLOW_ITEM_TYPE_ETH,
 		.rss_types = 0,
@@ -497,6 +511,7 @@  static const struct mlx5_flow_expand_node mlx5_support_expansion[] = {
 		.next = MLX5_FLOW_EXPAND_RSS_NEXT(MLX5_EXPANSION_OUTER_IPV4,
 						  MLX5_EXPANSION_OUTER_IPV6),
 		.type = RTE_FLOW_ITEM_TYPE_VLAN,
+		.node_flags = MLX5_EXPANSION_NODE_EXPLICIT,
 	},
 	[MLX5_EXPANSION_OUTER_IPV4] = {
 		.next = MLX5_FLOW_EXPAND_RSS_NEXT
@@ -570,7 +585,7 @@  static const struct mlx5_flow_expand_node mlx5_support_expansion[] = {
 						  MLX5_EXPANSION_IPV6,
 						  MLX5_EXPANSION_MPLS),
 		.type = RTE_FLOW_ITEM_TYPE_GRE_KEY,
-		.optional = 1,
+		.node_flags = MLX5_EXPANSION_NODE_OPTIONAL,
 	},
 	[MLX5_EXPANSION_NVGRE] = {
 		.next = MLX5_FLOW_EXPAND_RSS_NEXT(MLX5_EXPANSION_ETH),
@@ -581,14 +596,9 @@  static const struct mlx5_flow_expand_node mlx5_support_expansion[] = {
 						  MLX5_EXPANSION_IPV6,
 						  MLX5_EXPANSION_ETH),
 		.type = RTE_FLOW_ITEM_TYPE_MPLS,
-		.optional = 1,
+		.node_flags = MLX5_EXPANSION_NODE_OPTIONAL,
 	},
 	[MLX5_EXPANSION_ETH] = {
-		.next = MLX5_FLOW_EXPAND_RSS_NEXT(MLX5_EXPANSION_IPV4,
-						  MLX5_EXPANSION_IPV6),
-		.type = RTE_FLOW_ITEM_TYPE_ETH,
-	},
-	[MLX5_EXPANSION_ETH_VLAN] = {
 		.next = MLX5_FLOW_EXPAND_RSS_NEXT(MLX5_EXPANSION_VLAN),
 		.type = RTE_FLOW_ITEM_TYPE_ETH,
 	},
@@ -596,6 +606,7 @@  static const struct mlx5_flow_expand_node mlx5_support_expansion[] = {
 		.next = MLX5_FLOW_EXPAND_RSS_NEXT(MLX5_EXPANSION_IPV4,
 						  MLX5_EXPANSION_IPV6),
 		.type = RTE_FLOW_ITEM_TYPE_VLAN,
+		.node_flags = MLX5_EXPANSION_NODE_EXPLICIT,
 	},
 	[MLX5_EXPANSION_IPV4] = {
 		.next = MLX5_FLOW_EXPAND_RSS_NEXT(MLX5_EXPANSION_IPV4_UDP,
@@ -632,10 +643,10 @@  static const struct mlx5_flow_expand_node mlx5_support_expansion[] = {
 		.type = RTE_FLOW_ITEM_TYPE_IPV6_FRAG_EXT,
 	},
 	[MLX5_EXPANSION_GTP] = {
-			.next = MLX5_FLOW_EXPAND_RSS_NEXT(MLX5_EXPANSION_IPV4,
-							  MLX5_EXPANSION_IPV6),
-			.type = RTE_FLOW_ITEM_TYPE_GTP
-	}
+		.next = MLX5_FLOW_EXPAND_RSS_NEXT(MLX5_EXPANSION_IPV4,
+						  MLX5_EXPANSION_IPV6),
+		.type = RTE_FLOW_ITEM_TYPE_GTP,
+	},
 };
 
 static struct rte_flow_action_handle *
@@ -3765,20 +3776,8 @@  flow_get_shared_rss_action(struct rte_eth_dev *dev,
 }
 
 static unsigned int
-find_graph_root(const struct rte_flow_item pattern[], uint32_t rss_level)
+find_graph_root(uint32_t rss_level)
 {
-	const struct rte_flow_item *item;
-	unsigned int has_vlan = 0;
-
-	for (item = pattern; item->type != RTE_FLOW_ITEM_TYPE_END; item++) {
-		if (item->type == RTE_FLOW_ITEM_TYPE_VLAN) {
-			has_vlan = 1;
-			break;
-		}
-	}
-	if (has_vlan)
-		return rss_level < 2 ? MLX5_EXPANSION_ROOT_ETH_VLAN :
-				       MLX5_EXPANSION_ROOT_OUTER_ETH_VLAN;
 	return rss_level < 2 ? MLX5_EXPANSION_ROOT :
 			       MLX5_EXPANSION_ROOT_OUTER;
 }
@@ -6272,7 +6271,7 @@  flow_list_create(struct rte_eth_dev *dev, enum mlx5_flow_type type,
 	int indir_actions_n = MLX5_MAX_INDIRECT_ACTIONS;
 	union {
 		struct mlx5_flow_expand_rss buf;
-		uint8_t buffer[2048];
+		uint8_t buffer[4096];
 	} expand_buffer;
 	union {
 		struct rte_flow_action actions[MLX5_MAX_SPLIT_ACTIONS];
@@ -6363,7 +6362,7 @@  flow_list_create(struct rte_eth_dev *dev, enum mlx5_flow_type type,
 	if (rss && rss->types) {
 		unsigned int graph_root;
 
-		graph_root = find_graph_root(items, rss->level);
+		graph_root = find_graph_root(rss->level);
 		ret = mlx5_flow_expand_rss(buf, sizeof(expand_buffer.buffer),
 					   items, rss->types,
 					   mlx5_support_expansion, graph_root);
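
As a usage note (not part of the patch): the inner-VLAN tunnel scenario
from the commit message can also be built with the rte_flow C API rather
than testpmd. This is a sketch only; the outer items are filled in to
match the expansion output shown above (ETH IPV6 UDP VXLAN ETH VLAN), and
the queue list and ETH_RSS_IP hash types are arbitrary example values.

#include <rte_common.h>
#include <rte_ethdev.h>
#include <rte_flow.h>

/* Example only: eth / ipv6 / udp / vxlan / eth / vlan pattern with an
 * RSS action at level 2, i.e. hashing on the inner headers, which is
 * the case fixed by this patch.
 */
static struct rte_flow *
create_inner_vlan_rss_flow(uint16_t port_id, struct rte_flow_error *err)
{
	static const uint16_t queues[] = { 0, 1, 2, 3 };
	const struct rte_flow_attr attr = { .ingress = 1 };
	const struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH },
		{ .type = RTE_FLOW_ITEM_TYPE_IPV6 },
		{ .type = RTE_FLOW_ITEM_TYPE_UDP },
		{ .type = RTE_FLOW_ITEM_TYPE_VXLAN },
		{ .type = RTE_FLOW_ITEM_TYPE_ETH },  /* inner Ethernet */
		{ .type = RTE_FLOW_ITEM_TYPE_VLAN }, /* inner VLAN */
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	const struct rte_flow_action_rss rss = {
		.level = 2,            /* hash on the inner packet */
		.types = ETH_RSS_IP,   /* expanded to inner IPv4/IPv6 paths */
		.queue_num = RTE_DIM(queues),
		.queue = queues,
	};
	const struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_RSS, .conf = &rss },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};

	return rte_flow_create(port_id, &attr, pattern, actions, err);
}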