[1/1] net/mlx5: fix inline data length for multisegment packets

Message ID 20231110094938.21171-1-viacheslavo@nvidia.com (mailing list archive)
State Accepted, archived
Delegated to: Raslan Darawsheh
Series [1/1] net/mlx5: fix inline data length for multisegment packets

Checks

Context Check Description
ci/checkpatch success coding style OK
ci/loongarch-compilation success Compilation OK
ci/loongarch-unit-testing success Unit Testing PASS
ci/github-robot: build success github build: passed
ci/iol-intel-Performance success Performance Testing PASS
ci/iol-mellanox-Performance success Performance Testing PASS
ci/iol-intel-Functional success Functional Testing PASS
ci/iol-compile-amd64-testing success Testing PASS
ci/iol-unit-arm64-testing success Testing PASS
ci/iol-compile-arm64-testing success Testing PASS
ci/iol-unit-amd64-testing success Testing PASS
ci/iol-sample-apps-testing success Testing PASS
ci/iol-broadcom-Performance success Performance Testing PASS
ci/iol-broadcom-Functional success Functional Testing PASS

Commit Message

Slava Ovsiienko Nov. 10, 2023, 9:49 a.m. UTC
  If the packet data length exceeds the configured limit for the packet
to be inlined in the queue descriptor, the driver checks whether the
hardware requires minimal data inlining or the VLAN insertion offload
is requested but not supported by the hardware (which means the VLAN
insertion has to be done in software along with inline data). The
driver then scans the mbuf chain to find the minimal number of
segments that provides the data needed for the minimal inline.
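
For illustration only, a minimal sketch of that kind of chain scan is
given below; the segment structure, field names, and threshold are
simplified assumptions and not the actual mlx5_tx.h code. The VLAN
bytes are counted because a tag inserted in software occupies inline
space in the descriptor just like packet data.

#include <stdint.h>
#include <stddef.h>

/* Simplified stand-in for a chained packet segment (not the real rte_mbuf). */
struct seg {
        uint32_t data_len;   /* bytes of packet data in this segment */
        struct seg *next;    /* next segment in the chain, or NULL */
};

/*
 * Count how many leading segments are needed to provide at least
 * `min_inline` bytes of inline data. Minimal inlining is requested
 * either because the hardware demands it or because a VLAN tag must
 * be inserted in software together with inline data; `vlan` is the
 * length of that tag (0 if none). Returns 0 if the chain ends before
 * enough data is found.
 */
static unsigned int
segs_for_min_inline(const struct seg *m, uint32_t min_inline, uint32_t vlan)
{
        uint32_t acc = vlan;
        unsigned int nseg = 0;

        while (m != NULL) {
                acc += m->data_len;
                ++nseg;
                if (acc >= min_inline)
                        return nseg;
                m = m->next;
        }
        return 0; /* not enough data in the packet */
}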

The first segment inline data length was calculated incorrectly, with
the length of the VLAN header being inserted not taken into account.
This could lead to a segmentation fault during the mbuf chain scan,
for example for a packet like the following:

  packet:
    mbuf0 pkt_len = 288, data_len = 156
    mbuf1 pkt_len = 132, data_len = 132

  txq->inlen_send = 290

The driver was trying to collect inlen_send bytes of inline data, but
since the length of the VLAN header to be inserted was not counted,
it ran out of the mbuf chain (there was simply not enough data in the
packet to satisfy the criteria).
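
To put numbers on it (assuming the usual 4-byte 802.1Q VLAN header):
with the header counted, the first segment contributes 156 + 4 = 160
bytes and the second segment brings the total to 160 + 132 = 292 >= 290,
so the scan stops at the last mbuf. Without the VLAN bytes the driver
sees only 156 + 132 = 288 < 290 and steps past the end of the
two-segment chain.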

Fixes: 18a1c20044c0 ("net/mlx5: implement Tx burst template")
Fixes: ec837ad0fc7c ("net/mlx5: fix multi-segment inline for the first segments")
Cc: stable@dpdk.org

Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Acked-by: Suanming Mou <suanmingm@nvidia.com>

---
 drivers/net/mlx5/mlx5_tx.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
  

Comments

Raslan Darawsheh Nov. 12, 2023, 2:41 p.m. UTC | #1
Hi,

> -----Original Message-----
> From: Slava Ovsiienko <viacheslavo@nvidia.com>
> Sent: Friday, November 10, 2023 11:50 AM
> To: dev@dpdk.org
> Cc: Raslan Darawsheh <rasland@nvidia.com>; Matan Azrad
> <matan@nvidia.com>; Suanming Mou <suanmingm@nvidia.com>;
> stable@dpdk.org
> Subject: [PATCH 1/1] net/mlx5: fix inline data length for multisegment packets
> 
> If the packet data length exceeds the configured limit for the packet
> to be inlined in the queue descriptor, the driver checks whether the
> hardware requires minimal data inlining or the VLAN insertion offload
> is requested but not supported by the hardware (which means the VLAN
> insertion has to be done in software along with inline data). The
> driver then scans the mbuf chain to find the minimal number of
> segments that provides the data needed for the minimal inline.
> 
> The first segment inline data length was calculated incorrectly, with
> the length of the VLAN header being inserted not taken into account.
> This could lead to a segmentation fault during the mbuf chain scan,
> for example for a packet like the following:
> 
>   packet:
>     mbuf0 pkt_len = 288, data_len = 156
>     mbuf1 pkt_len = 132, data_len = 132
> 
>   txq->inlen_send = 290
> 
> The driver was trying to collect inlen_send bytes of inline data, but
> since the length of the VLAN header to be inserted was not counted,
> it ran out of the mbuf chain (there was simply not enough data in the
> packet to satisfy the criteria).
> 
> Fixes: 18a1c20044c0 ("net/mlx5: implement Tx burst template")
> Fixes: ec837ad0fc7c ("net/mlx5: fix multi-segment inline for the first segments")
> Cc: stable@dpdk.org
> 
> Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
> Acked-by: Suanming Mou <suanmingm@nvidia.com>

Patch applied to next-net-mlx,

Kindest regards,
Raslan Darawsheh
  

Patch

diff --git a/drivers/net/mlx5/mlx5_tx.h b/drivers/net/mlx5/mlx5_tx.h
index 264cc192dc..e59ce37667 100644
--- a/drivers/net/mlx5/mlx5_tx.h
+++ b/drivers/net/mlx5/mlx5_tx.h
@@ -2046,7 +2046,7 @@  mlx5_tx_packet_multi_inline(struct mlx5_txq_data *__rte_restrict txq,
 		uintptr_t start;
 
 		mbuf = loc->mbuf;
-		nxlen = rte_pktmbuf_data_len(mbuf);
+		nxlen = rte_pktmbuf_data_len(mbuf) + vlan;
 		/*
 		 * Packet length exceeds the allowed inline data length,
 		 * check whether the minimal inlining is required.
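
The arithmetic above can be checked with a small self-contained toy
program (a sketch with made-up names, assuming a 4-byte VLAN header;
it only mimics the accumulation over the example packet, not the real
transmit path in mlx5_tx.h):

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

/* Toy two-segment chain matching the packet from the commit message. */
struct toy_seg { uint32_t data_len; struct toy_seg *next; };

int
main(void)
{
        struct toy_seg m1 = { 132, NULL };
        struct toy_seg m0 = { 156, &m1 };
        const uint32_t inlen_send = 290;
        const uint32_t vlan = 4;             /* 802.1Q tag inserted in software */
        const struct toy_seg *m = &m0;
        uint32_t nxlen = m->data_len + vlan; /* the fixed computation */

        /* Accumulate further segments until the inline target is reached. */
        while (nxlen < inlen_send) {
                m = m->next;
                assert(m != NULL); /* without "+ vlan" this assert would fire */
                nxlen += m->data_len;
        }
        printf("first segments can be inlined up to %u bytes\n", (unsigned)nxlen);
        return 0;
}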