net/mlx5: fix Tx metadata endianness in data path

Message ID: 20210927080203.23877-1-bingz@nvidia.com (mailing list archive)
State: Accepted, archived
Delegated to: Raslan Darawsheh
Series: net/mlx5: fix Tx metadata endianness in data path

Checks

Context                Check     Description
ci/checkpatch          success   coding style OK
ci/Intel-compilation   success   Compilation OK
ci/intel-Testing       success   Testing PASS
ci/iol-testing         warning   apply patch failure

Commit Message

Bing Zhao Sept. 27, 2021, 8:02 a.m. UTC
  The metadata can be set in the mbuf dynamic field and then used by
flow rules to steer packets in the egress direction. The hardware
expects network byte order both when a rule is inserted and when a
packet is sent. Strictly speaking, the endianness itself is not
restricted; what matters is that the byte order used when sending a
packet is consistent with the one used in its steering rule.

In the past, no endianness conversion was done in the data path for
performance reasons. The flow rule path converted the metadata into
little endian for the hardware (if needed), and the packet hit the
flow rule with little-endian metadata as well.

After the flow rule path switched the metadata to big endian, the
missing adaptation in the data path resulted in flow misses for
egress packets.

Converting the metadata to big endian before posting a WQE to the
hardware solves this issue.

Fixes: b57e414b48c0 ("net/mlx5: convert meta register to big-endian")
Cc: akozyrev@nvidia.com
Cc: stable@dpdk.org

Signed-off-by: Bing Zhao <bingz@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
---
 drivers/net/mlx5/mlx5_tx.h | 14 +++++++++-----
 1 file changed, 9 insertions(+), 5 deletions(-)
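
For context only (not part of the patch): below is a rough, application-side
sketch of the usage this fix targets. The same 32-bit metadata value is
attached to an egress mbuf and matched by an egress flow rule, so the Tx data
path and the rule-insertion path must present it to the hardware in the same
byte order. The helper names are illustrative, error handling is minimal,
rte_flow_dynf_metadata_register() is assumed to have succeeded at startup,
and the drop action merely stands in for whatever egress steering the
application actually needs.

#include <rte_ethdev.h>
#include <rte_flow.h>
#include <rte_mbuf.h>

/* Attach metadata to an egress packet; the mlx5 PMD copies the dynamic
 * field into the WQE Ethernet segment in the Tx path touched by this patch. */
static int
send_with_metadata(uint16_t port_id, struct rte_mbuf *m, uint32_t meta)
{
	*RTE_FLOW_DYNF_METADATA(m) = meta;
	m->ol_flags |= PKT_TX_DYNF_METADATA;
	return rte_eth_tx_burst(port_id, 0, &m, 1) == 1 ? 0 : -1;
}

/* Egress rule matching the same metadata value. */
static struct rte_flow *
create_meta_drop_rule(uint16_t port_id, uint32_t meta)
{
	struct rte_flow_attr attr = { .egress = 1 };
	struct rte_flow_item_meta meta_spec = { .data = meta };
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_META, .spec = &meta_spec },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_DROP },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};
	struct rte_flow_error error;

	return rte_flow_create(port_id, &attr, pattern, actions, &error);
}

Without this fix, on little-endian hosts the value written by
send_with_metadata() and the value programmed for the rule would reach the
hardware in different byte orders, so the egress packet would miss the rule.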
  

Comments

Thomas Monjalon Sept. 29, 2021, 9:31 p.m. UTC | #1
27/09/2021 10:02, Bing Zhao:
> Converting the metadata to big endian before posting a WQE to the
> hardware solves this issue.
[...]
> Signed-off-by: Bing Zhao <bingz@nvidia.com>
> Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>

Applied in next-net-mlx, thanks.
  

Patch

diff --git a/drivers/net/mlx5/mlx5_tx.h b/drivers/net/mlx5/mlx5_tx.h
index 1a35919371..77d6069755 100644
--- a/drivers/net/mlx5/mlx5_tx.h
+++ b/drivers/net/mlx5/mlx5_tx.h
@@ -953,7 +953,8 @@  mlx5_tx_eseg_none(struct mlx5_txq_data *__rte_restrict txq __rte_unused,
 	/* Fill metadata field if needed. */
 	es->metadata = MLX5_TXOFF_CONFIG(METADATA) ?
 		       loc->mbuf->ol_flags & PKT_TX_DYNF_METADATA ?
-		       *RTE_FLOW_DYNF_METADATA(loc->mbuf) : 0 : 0;
+		       rte_cpu_to_be_32(*RTE_FLOW_DYNF_METADATA(loc->mbuf)) :
+		       0 : 0;
 	/* Engage VLAN tag insertion feature if requested. */
 	if (MLX5_TXOFF_CONFIG(VLAN) &&
 	    loc->mbuf->ol_flags & PKT_TX_VLAN_PKT) {
@@ -1013,7 +1014,8 @@  mlx5_tx_eseg_dmin(struct mlx5_txq_data *__rte_restrict txq __rte_unused,
 	/* Fill metadata field if needed. */
 	es->metadata = MLX5_TXOFF_CONFIG(METADATA) ?
 		       loc->mbuf->ol_flags & PKT_TX_DYNF_METADATA ?
-		       *RTE_FLOW_DYNF_METADATA(loc->mbuf) : 0 : 0;
+		       rte_cpu_to_be_32(*RTE_FLOW_DYNF_METADATA(loc->mbuf)) :
+		       0 : 0;
 	psrc = rte_pktmbuf_mtod(loc->mbuf, uint8_t *);
 	es->inline_hdr_sz = RTE_BE16(MLX5_ESEG_MIN_INLINE_SIZE);
 	es->inline_data = *(unaligned_uint16_t *)psrc;
@@ -1096,7 +1098,8 @@  mlx5_tx_eseg_data(struct mlx5_txq_data *__rte_restrict txq,
 	/* Fill metadata field if needed. */
 	es->metadata = MLX5_TXOFF_CONFIG(METADATA) ?
 		       loc->mbuf->ol_flags & PKT_TX_DYNF_METADATA ?
-		       *RTE_FLOW_DYNF_METADATA(loc->mbuf) : 0 : 0;
+		       rte_cpu_to_be_32(*RTE_FLOW_DYNF_METADATA(loc->mbuf)) :
+		       0 : 0;
 	psrc = rte_pktmbuf_mtod(loc->mbuf, uint8_t *);
 	es->inline_hdr_sz = rte_cpu_to_be_16(inlen);
 	es->inline_data = *(unaligned_uint16_t *)psrc;
@@ -1308,7 +1311,8 @@  mlx5_tx_eseg_mdat(struct mlx5_txq_data *__rte_restrict txq,
 	/* Fill metadata field if needed. */
 	es->metadata = MLX5_TXOFF_CONFIG(METADATA) ?
 		       loc->mbuf->ol_flags & PKT_TX_DYNF_METADATA ?
-		       *RTE_FLOW_DYNF_METADATA(loc->mbuf) : 0 : 0;
+		       rte_cpu_to_be_32(*RTE_FLOW_DYNF_METADATA(loc->mbuf)) :
+		       0 : 0;
 	MLX5_ASSERT(inlen >= MLX5_ESEG_MIN_INLINE_SIZE);
 	pdst = (uint8_t *)&es->inline_data;
 	if (MLX5_TXOFF_CONFIG(VLAN) && vlan) {
@@ -2470,7 +2474,7 @@  mlx5_tx_match_empw(struct mlx5_txq_data *__rte_restrict txq,
 	/* Fill metadata field if needed. */
 	if (MLX5_TXOFF_CONFIG(METADATA) &&
 		es->metadata != (loc->mbuf->ol_flags & PKT_TX_DYNF_METADATA ?
-				 *RTE_FLOW_DYNF_METADATA(loc->mbuf) : 0))
+		rte_cpu_to_be_32(*RTE_FLOW_DYNF_METADATA(loc->mbuf)) : 0))
 		return false;
 	/* Legacy MPW can send packets with the same length only. */
 	if (MLX5_TXOFF_CONFIG(MPW) &&
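
As a small standalone illustration (not part of the patch), the snippet below
shows what rte_cpu_to_be_32() does to the metadata value: a byte swap on
little-endian hosts and a no-op on big-endian ones. After this patch the Tx
path writes the converted value into the WQE Ethernet segment and uses it in
the mlx5_tx_match_empw() comparison above, so it lines up with the big-endian
value the flow rule path already programs.

#include <inttypes.h>
#include <stdio.h>
#include <rte_byteorder.h>

int
main(void)
{
	uint32_t meta = 0x12345678;

	/* Prints "host 0x12345678 -> WQE 0x78563412" on a little-endian
	 * CPU; on a big-endian CPU the value is unchanged. */
	printf("host 0x%08" PRIx32 " -> WQE 0x%08" PRIx32 "\n",
	       meta, rte_cpu_to_be_32(meta));
	return 0;
}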