[dpdk-dev] net/mlx5: fix RSS action for tunneled packets
Commit Message
The flow engine in mlx5 searches for the most specific layer in the
pattern in order to set the flow rule priority properly.
Since RSS can currently be performed only on the outer headers, avoid
updating the layer for the inner headers.
Fixes: 8086cf08b2f0 ("net/mlx5: handle RSS hash configuration in RSS flow")
Cc: nelio.laranjeiro@6wind.com
Signed-off-by: Shahaf Shuler <shahafs@mellanox.com>
---
drivers/net/mlx5/mlx5_flow.c | 34 +++++++++++++++++++++++-----------
1 file changed, 23 insertions(+), 11 deletions(-)
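For reference, the case this fix targets is a flow rule whose pattern also describes the inner (tunneled) headers while requesting RSS. The sketch below is a minimal, hypothetical example of such a rule built through the rte_flow API; it uses the struct rte_flow_action_rss layout from later DPDK releases (18.05 onwards), not the 17.11-era API this patch was written against, and the port, queue list and function name are placeholders.

#include <rte_ethdev.h>
#include <rte_flow.h>

/* Hypothetical helper: request RSS for VXLAN traffic with a pattern that
 * also matches the inner headers. With this fix, only the outer
 * ETH/IPv4/UDP items select the hash Rx queue type; the inner items no
 * longer override it.
 */
static struct rte_flow *
install_tunneled_rss_rule(uint16_t port_id, struct rte_flow_error *error)
{
	static const uint16_t queues[] = { 0, 1, 2, 3 };
	struct rte_flow_attr attr = { .ingress = 1 };
	struct rte_flow_action_rss rss = {
		.func = RTE_ETH_HASH_FUNCTION_DEFAULT,
		.level = 0,                  /* outer headers */
		.types = RTE_ETH_RSS_IP,     /* ETH_RSS_IP on older releases */
		.queue_num = 4,
		.queue = queues,
	};
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH },   /* outer */
		{ .type = RTE_FLOW_ITEM_TYPE_IPV4 },
		{ .type = RTE_FLOW_ITEM_TYPE_UDP },
		{ .type = RTE_FLOW_ITEM_TYPE_VXLAN },
		{ .type = RTE_FLOW_ITEM_TYPE_ETH },   /* inner */
		{ .type = RTE_FLOW_ITEM_TYPE_IPV4 },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_RSS, .conf = &rss },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};

	return rte_flow_create(port_id, &attr, pattern, actions, error);
}

Before this patch, the parser recorded the innermost item (the inner IPv4 above) as the rule's layer; with it, the outer UDP/IPv4 pair keeps selecting the hash Rx queue, which is the only place RSS is computed.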
Comments
On Thu, Oct 26, 2017 at 08:41:57PM +0300, Shahaf Shuler wrote:
> The flow engine in mlx5 searches for the most specific layer in the
> pattern in order to set the flow rule priority properly.
>
> Since the RSS can be currently performed only for the outer headers, avoid
> updating the layer for the inner headers.
>
> Fixes: 8086cf08b2f0 ("net/mlx5: handle RSS hash configuration in RSS flow")
> Cc: nelio.laranjeiro@6wind.com
>
> Signed-off-by: Shahaf Shuler <shahafs@mellanox.com>
Acked-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
On 10/26/2017 11:27 PM, Nélio Laranjeiro wrote:
> On Thu, Oct 26, 2017 at 08:41:57PM +0300, Shahaf Shuler wrote:
>> The flow engine in mlx5 searches for the most specific layer in the
>> pattern in order to set the flow rule priority properly.
>>
>> Since the RSS can be currently performed only for the outer headers, avoid
>> updating the layer for the inner headers.
>>
>> Fixes: 8086cf08b2f0 ("net/mlx5: handle RSS hash configuration in RSS flow")
>> Cc: nelio.laranjeiro@6wind.com
>>
>> Signed-off-by: Shahaf Shuler <shahafs@mellanox.com>
> Acked-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
Applied to dpdk-next-net/master, thanks.
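For readers of the hunks below: in this version of mlx5_flow.c the parser keeps the most specific layer seen so far in parser->layer (one of the HASH_RXQ_* values) and sets parser->inner once a tunnel item such as VXLAN has been parsed. The following self-contained toy model, not driver code, mimics that state and shows that with the guard added here a pattern of eth / ipv4 / udp / vxlan / eth / ipv4 keeps the outer HASH_RXQ_UDPV4 layer.

#include <stdio.h>

/* Simplified stand-ins for the driver's hash Rx queue types. */
enum hash_rxq_type { HASH_RXQ_ETH, HASH_RXQ_IPV4, HASH_RXQ_UDPV4 };

struct toy_parser {
	unsigned int inner;       /* set once a tunnel item (VXLAN) is seen */
	enum hash_rxq_type layer; /* most specific outer layer seen so far */
};

/* Mirror of the guarded update introduced by the patch. */
static void
set_layer(struct toy_parser *p, enum hash_rxq_type layer)
{
	/* Don't update layer for the inner pattern. */
	if (!p->inner)
		p->layer = layer;
}

int
main(void)
{
	struct toy_parser p = { .inner = 0, .layer = HASH_RXQ_ETH };

	/* Outer eth / ipv4 / udp items. */
	set_layer(&p, HASH_RXQ_ETH);
	set_layer(&p, HASH_RXQ_IPV4);
	set_layer(&p, HASH_RXQ_UDPV4);
	/* The VXLAN item marks everything that follows as inner. */
	p.inner = 1;
	/* Inner eth / ipv4 items no longer override the outer layer. */
	set_layer(&p, HASH_RXQ_ETH);
	set_layer(&p, HASH_RXQ_IPV4);

	printf("selected layer: %d (HASH_RXQ_UDPV4 is %d)\n",
	       p.layer, HASH_RXQ_UDPV4);
	return 0;
}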
@@ -1291,7 +1291,9 @@ mlx5_flow_create_eth(const struct rte_flow_item *item,
.size = eth_size,
};
- parser->layer = HASH_RXQ_ETH;
+ /* Don't update layer for the inner pattern. */
+ if (!parser->inner)
+ parser->layer = HASH_RXQ_ETH;
if (spec) {
unsigned int i;
@@ -1386,7 +1388,9 @@ mlx5_flow_create_ipv4(const struct rte_flow_item *item,
.size = ipv4_size,
};
- parser->layer = HASH_RXQ_IPV4;
+ /* Don't update layer for the inner pattern. */
+ if (!parser->inner)
+ parser->layer = HASH_RXQ_IPV4;
if (spec) {
if (!mask)
mask = default_mask;
@@ -1436,7 +1440,9 @@ mlx5_flow_create_ipv6(const struct rte_flow_item *item,
.size = ipv6_size,
};
- parser->layer = HASH_RXQ_IPV6;
+ /* Don't update layer for the inner pattern. */
+ if (!parser->inner)
+ parser->layer = HASH_RXQ_IPV6;
if (spec) {
unsigned int i;
@@ -1490,10 +1496,13 @@ mlx5_flow_create_udp(const struct rte_flow_item *item,
.size = udp_size,
};
- if (parser->layer == HASH_RXQ_IPV4)
- parser->layer = HASH_RXQ_UDPV4;
- else
- parser->layer = HASH_RXQ_UDPV6;
+ /* Don't update layer for the inner pattern. */
+ if (!parser->inner) {
+ if (parser->layer == HASH_RXQ_IPV4)
+ parser->layer = HASH_RXQ_UDPV4;
+ else
+ parser->layer = HASH_RXQ_UDPV6;
+ }
if (spec) {
if (!mask)
mask = default_mask;
@@ -1533,10 +1542,13 @@ mlx5_flow_create_tcp(const struct rte_flow_item *item,
.size = tcp_size,
};
- if (parser->layer == HASH_RXQ_IPV4)
- parser->layer = HASH_RXQ_TCPV4;
- else
- parser->layer = HASH_RXQ_TCPV6;
+ /* Don't update layer for the inner pattern. */
+ if (!parser->inner) {
+ if (parser->layer == HASH_RXQ_IPV4)
+ parser->layer = HASH_RXQ_TCPV4;
+ else
+ parser->layer = HASH_RXQ_TCPV6;
+ }
if (spec) {
if (!mask)
mask = default_mask;
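The same three-line guard is now repeated in all five item handlers. A possible follow-up consolidation, not part of this patch, is sketched below; it assumes the parser type is struct mlx5_flow_parse and the HASH_RXQ_* values come from enum hash_rxq_type as in this version of the driver, and the helper name is made up.

/* Hypothetical consolidation (not in this patch): one helper carrying the
 * "don't update layer for the inner pattern" rule, since RSS is computed
 * on the outer headers only.
 */
static inline void
mlx5_flow_parser_set_layer(struct mlx5_flow_parse *parser,
			   enum hash_rxq_type layer)
{
	if (!parser->inner)
		parser->layer = layer;
}

/* Each item handler would then reduce to a single call, e.g. in
 * mlx5_flow_create_udp():
 *
 *	mlx5_flow_parser_set_layer(parser,
 *				   parser->layer == HASH_RXQ_IPV4 ?
 *				   HASH_RXQ_UDPV4 : HASH_RXQ_UDPV6);
 */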