examples/packet_ordering: fix segfault in disable_reorder mode
Commit Message
The packet_ordering example works in two modes, selected by the --disable-reorder flag:
- When reorder is enabled: rx_thread - N*worker_thread - send_thread
- When reorder is disabled: rx_thread - N*worker_thread - tx_thread
N parallel worker_thread(s) generate out-of-order packets.
When reorder is enabled, send_thread uses the sequence number written by rx_thread (Line 459) to enforce packet ordering. Otherwise tx_thread just sends any packet it receives.
rx_thread writes the sequence number into a dynamic mbuf field, which is only registered, via rte_reorder_create() (Line 741), when reorder is enabled. However, rx_thread marks a sequence number onto each packet regardless of whether reorder is enabled. When reorder is disabled the field offset is never registered, so the write overwrites the leading bytes of the packet mbufs, resulting in segfaults when the PMD tries to DMA the packets.
An if (!disable_reorder) check is added around the write to fix the bug.
---
examples/packet_ordering/main.c | 7 +++++--
1 file changed, 5 insertions(+), 2 deletions(-)
Comments
> -----Original Message-----
> From: Qian Hao <qi_an_hao@126.com>
> Sent: Friday, December 8, 2023 1:43 PM
> To: dev@dpdk.org
> Cc: Volodymyr Fialko <vfialko@marvell.com>
> Subject: [EXT] [PATCH] examples/packet_ordering: fix segfault in disable_reorder mode
<snip>
Good catch overall, but a few comments:
1. Please fix checkpatch coding style issues:
http://patchwork.dpdk.org/project/dpdk/patch/20231208124231.198138-1-qi_an_hao@126.com/
Check dpdk contributing guide to see how to run it locally:
https://doc.dpdk.org/guides/contributing/patches.html#checking-the-patches
2. This patch adds an if check per burst of packets (even though it will be easy
for the CPU to branch-predict, since the flag never changes). I still think it
would be better to check this condition only once, before starting the rx_thread,
and let the compiler inline the rest. So something like this:
// mark rx_thread inline with explicit parameter
static __rte_always_inline int
rx_thread(struct rte_ring *ring_out, bool disable_reorder)

// create two separate functions with baked flag
static __rte_noinline int
rx_thread_reorder(struct rte_ring *ring_out)
{
	return rx_thread(ring_out, false);
}

static __rte_noinline int
rx_thread_reorder_disabled(struct rte_ring *ring_out)
{
	return rx_thread(ring_out, true);
}

// dispatch only once in main
/* Start rx_thread() on the main core */
if (disable_reorder)
	rx_thread_reorder_disabled(rx_to_workers);
else
	rx_thread_reorder(rx_to_workers);
/Volodymyr
@@ -455,8 +455,11 @@ rx_thread(struct rte_ring *ring_out)
 		app_stats.rx.rx_pkts += nb_rx_pkts;
 		/* mark sequence number */
-		for (i = 0; i < nb_rx_pkts; )
-			*rte_reorder_seqn(pkts[i++]) = seqn++;
+		if (!disable_reorder) {
+			for (i = 0; i < nb_rx_pkts;) {
+				*rte_reorder_seqn(pkts[i++]) = seqn++;
+			}
+		}
 		/* enqueue to rx_to_workers ring */
 		ret = rte_ring_enqueue_burst(ring_out,