[dpdk-dev,v3] net/mlx5: fix deadlock due to buffered slots in Rx SW ring

Message ID 8beeef1f03eadc02d67e86c002a40c1fc56d6c55.1507644171.git.yskoh@mellanox.com (mailing list archive)
State Accepted, archived
Delegated to: Ferruh Yigit
Checks

Context Check Description
ci/checkpatch success coding style OK
ci/Intel-compilation fail Compilation issues

Commit Message

Yongseok Koh Oct. 10, 2017, 2:04 p.m. UTC
When replenishing the Rx ring, there are always buffered slots reserved
between the consumed entries and the HW-owned entries. These have to be
filled with fake mbufs to protect against possible overflow, rather than
optimistically expecting successful replenishment, which can cause a
deadlock with a small-sized queue.

Fixes: fc048bd52cb7 ("net/mlx5: fix overflow of Rx SW ring")
Cc: stable@dpdk.org

Reported-by: Martin Weiser <martin.weiser@allegro-packets.com>
Signed-off-by: Yongseok Koh <yskoh@mellanox.com>
Tested-by: Martin Weiser <martin.weiser@allegro-packets.com>
---
v3:
* Rebased on top of dpdk-next-net/master

v2:
* Replace vector st/ld with regular assignment, as the performance gain is
negligible for the short loop. This also makes the function shareable with
the ARM NEON path.

 drivers/net/mlx5/mlx5_rxtx_vec.h      |  6 +++++-
 drivers/net/mlx5/mlx5_rxtx_vec_neon.h | 13 ++-----------
 drivers/net/mlx5/mlx5_rxtx_vec_sse.h  | 13 ++-----------
 3 files changed, 9 insertions(+), 23 deletions(-)
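
Before the discussion below, a minimal standalone sketch of the pattern the
fix adopts. This is not the driver code itself: RING_SIZE, DESCS_PER_LOOP,
struct mbuf and pad_tail() are hypothetical stand-ins for the mlx5
equivalents (1 << elts_n, MLX5_VPMD_DESCS_PER_LOOP, struct rte_mbuf and the
replenish function).

#include <stdint.h>

#define RING_SIZE      256                /* power of two, like 1 << elts_n */
#define RING_MASK      (RING_SIZE - 1)
#define DESCS_PER_LOOP 4                  /* chunk width of the vector Rx path */

struct mbuf { void *buf_addr; };          /* stand-in for struct rte_mbuf */

static struct mbuf fake_mbuf;             /* sentinel, never given to the app */
/* Over-allocated by DESCS_PER_LOOP so the sentinel writes below never wrap. */
static struct mbuf *sw_ring[RING_SIZE + DESCS_PER_LOOP];

/* After advancing the producer index rq_ci, pad the slots between the newly
 * posted entries and the consumed region with sentinels.  A chunked consumer
 * that reads DESCS_PER_LOOP slots at a time then picks up &fake_mbuf instead
 * of an mbuf the application still owns. */
static void
pad_tail(uint16_t rq_ci)
{
	uint16_t idx = rq_ci & RING_MASK;
	unsigned int i;

	for (i = 0; i < DESCS_PER_LOOP; ++i)
		sw_ring[idx + i] = &fake_mbuf;
}

The design point is that the padding happens unconditionally on every
replenish, so progress in the Rx burst path never depends on the last
replenishment having succeeded; that is what breaks the deadlock on small
queues.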
  

Comments

Ferruh Yigit Oct. 11, 2017, 12:43 a.m. UTC | #1
On 10/10/2017 3:04 PM, Yongseok Koh wrote:
> When replenishing the Rx ring, there are always buffered slots reserved
> between the consumed entries and the HW-owned entries. These have to be
> filled with fake mbufs to protect against possible overflow, rather than
> optimistically expecting successful replenishment, which can cause a
> deadlock with a small-sized queue.
> 
> Fixes: fc048bd52cb7 ("net/mlx5: fix overflow of Rx SW ring")
> Cc: stable@dpdk.org
> 
> Reported-by: Martin Weiser <martin.weiser@allegro-packets.com>
> Signed-off-by: Yongseok Koh <yskoh@mellanox.com>
> Tested-by: Martin Weiser <martin.weiser@allegro-packets.com>

Applied to dpdk-next-net/master, thanks.
  

Patch

diff --git a/drivers/net/mlx5/mlx5_rxtx_vec.h b/drivers/net/mlx5/mlx5_rxtx_vec.h
index 426169037..1f08ed0b2 100644
--- a/drivers/net/mlx5/mlx5_rxtx_vec.h
+++ b/drivers/net/mlx5/mlx5_rxtx_vec.h
@@ -101,7 +101,7 @@  mlx5_rx_replenish_bulk_mbuf(struct mlx5_rxq_data *rxq, uint16_t n)
 {
 	const uint16_t q_n = 1 << rxq->elts_n;
 	const uint16_t q_mask = q_n - 1;
-	const uint16_t elts_idx = rxq->rq_ci & q_mask;
+	uint16_t elts_idx = rxq->rq_ci & q_mask;
 	struct rte_mbuf **elts = &(*rxq->elts)[elts_idx];
 	volatile struct mlx5_wqe_data_seg *wq = &(*rxq->wqes)[elts_idx];
 	unsigned int i;
@@ -119,6 +119,10 @@  mlx5_rx_replenish_bulk_mbuf(struct mlx5_rxq_data *rxq, uint16_t n)
 		wq[i].addr = rte_cpu_to_be_64((uintptr_t)elts[i]->buf_addr +
 					      RTE_PKTMBUF_HEADROOM);
 	rxq->rq_ci += n;
+	/* Prevent overflowing into consumed mbufs. */
+	elts_idx = rxq->rq_ci & q_mask;
+	for (i = 0; i < MLX5_VPMD_DESCS_PER_LOOP; ++i)
+		(*rxq->elts)[elts_idx + i] = &rxq->fake_mbuf;
 	rte_io_wmb();
 	*rxq->rq_db = rte_cpu_to_be_32(rxq->rq_ci);
 }
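
Note that the two stores added above index elts[] past rq_ci without masking
the index. This relies on the SW ring being over-allocated by
MLX5_VPMD_DESCS_PER_LOOP trailing slots. A hedged sketch of that sizing
invariant, with alloc_sw_ring() as a hypothetical helper (the real sizing is
done in the driver's Rx queue setup code, which is not part of this patch):

#include <rte_malloc.h>                 /* rte_calloc */
#include <rte_mbuf.h>                   /* struct rte_mbuf */

#define MLX5_VPMD_DESCS_PER_LOOP 4      /* as defined by the driver */

static struct rte_mbuf **
alloc_sw_ring(uint16_t desc)
{
	/* The spare trailing slots let the padding loop write
	 * elts[elts_idx + i] linearly even when elts_idx points at the
	 * last real ring entry. */
	const uint16_t desc_n = desc + MLX5_VPMD_DESCS_PER_LOOP;

	return rte_calloc("rxq_elts", desc_n, sizeof(struct rte_mbuf *), 0);
}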
diff --git a/drivers/net/mlx5/mlx5_rxtx_vec_neon.h b/drivers/net/mlx5/mlx5_rxtx_vec_neon.h
index 6dd18b619..86b37d5c6 100644
--- a/drivers/net/mlx5/mlx5_rxtx_vec_neon.h
+++ b/drivers/net/mlx5/mlx5_rxtx_vec_neon.h
@@ -446,13 +446,6 @@  rxq_cq_decompress_v(struct mlx5_rxq_data *rxq, volatile struct mlx5_cqe *cq,
 	};
 
 	/*
-	 * Not to overflow elts array. Decompress next time after mbuf
-	 * replenishment.
-	 */
-	if (unlikely(mcqe_n + MLX5_VPMD_DESCS_PER_LOOP >
-		     (uint16_t)(rxq->rq_ci - rxq->cq_ci)))
-		return;
-	/*
 	 * A. load mCQEs into a 128bit register.
 	 * B. store rearm data to mbuf.
 	 * C. combine data from mCQEs with rx_descriptor_fields1.
@@ -778,10 +771,8 @@  rxq_burst_v(struct mlx5_rxq_data *rxq, struct rte_mbuf **pkts, uint16_t pkts_n)
 	}
 	elts_idx = rxq->rq_pi & q_mask;
 	elts = &(*rxq->elts)[elts_idx];
-	pkts_n = RTE_MIN(pkts_n - rcvd_pkt,
-			 (uint16_t)(rxq->rq_ci - rxq->cq_ci));
-	/* Not to overflow pkts/elts array. */
-	pkts_n = RTE_ALIGN_FLOOR(pkts_n, MLX5_VPMD_DESCS_PER_LOOP);
+	/* Not to overflow pkts array. */
+	pkts_n = RTE_ALIGN_FLOOR(pkts_n - rcvd_pkt, MLX5_VPMD_DESCS_PER_LOOP);
 	/* Not to cross queue end. */
 	pkts_n = RTE_MIN(pkts_n, q_n - elts_idx);
 	if (!pkts_n)
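
The guard deleted from rxq_cq_decompress_v() above is where a small queue
could wedge. The same excerpt again, annotated with an illustrative scenario
(the numbers are made up for the example):

	/* Suppose a 16-entry queue where a failed replenish has left
	 * rq_ci - cq_ci == 2.  A compressed session with mcqe_n == 8 always
	 * satisfies 8 + 4 > 2, so the function returned without consuming
	 * any CQEs; no mbufs were freed back to the pool, so the next
	 * replenish failed too: a deadlock.  With the ring tail now padded
	 * with fake mbufs, decompression is safe regardless of how far the
	 * last replenishment got, and the guard can go. */
	if (unlikely(mcqe_n + MLX5_VPMD_DESCS_PER_LOOP >
		     (uint16_t)(rxq->rq_ci - rxq->cq_ci)))
		return;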
diff --git a/drivers/net/mlx5/mlx5_rxtx_vec_sse.h b/drivers/net/mlx5/mlx5_rxtx_vec_sse.h
index 88c5d75fa..c2142d7ca 100644
--- a/drivers/net/mlx5/mlx5_rxtx_vec_sse.h
+++ b/drivers/net/mlx5/mlx5_rxtx_vec_sse.h
@@ -437,13 +437,6 @@  rxq_cq_decompress_v(struct mlx5_rxq_data *rxq, volatile struct mlx5_cqe *cq,
 #endif
 
 	/*
-	 * Not to overflow elts array. Decompress next time after mbuf
-	 * replenishment.
-	 */
-	if (unlikely(mcqe_n + MLX5_VPMD_DESCS_PER_LOOP >
-		     (uint16_t)(rxq->rq_ci - rxq->cq_ci)))
-		return;
-	/*
 	 * A. load mCQEs into a 128bit register.
 	 * B. store rearm data to mbuf.
 	 * C. combine data from mCQEs with rx_descriptor_fields1.
@@ -764,10 +757,8 @@  rxq_burst_v(struct mlx5_rxq_data *rxq, struct rte_mbuf **pkts, uint16_t pkts_n)
 	}
 	elts_idx = rxq->rq_pi & q_mask;
 	elts = &(*rxq->elts)[elts_idx];
-	pkts_n = RTE_MIN(pkts_n - rcvd_pkt,
-			 (uint16_t)(rxq->rq_ci - rxq->cq_ci));
-	/* Not to overflow pkts/elts array. */
-	pkts_n = RTE_ALIGN_FLOOR(pkts_n, MLX5_VPMD_DESCS_PER_LOOP);
+	/* Not to overflow pkts array. */
+	pkts_n = RTE_ALIGN_FLOOR(pkts_n - rcvd_pkt, MLX5_VPMD_DESCS_PER_LOOP);
 	/* Not to cross queue end. */
 	pkts_n = RTE_MIN(pkts_n, q_n - elts_idx);
 	if (!pkts_n)
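
The reworked clamping in rxq_burst_v() (identical in the NEON and SSE paths)
reduces to two macros from rte_common.h. A standalone sketch with
illustrative numbers; MLX5_VPMD_DESCS_PER_LOOP is redefined here only to
keep the snippet self-contained:

#include <stdint.h>
#include <rte_common.h>              /* RTE_MIN, RTE_ALIGN_FLOOR */

#define MLX5_VPMD_DESCS_PER_LOOP 4   /* vector chunk width, as in the driver */

int main(void)
{
	const uint16_t q_n = 256;    /* ring size */
	uint16_t elts_idx = 240;     /* current SW ring position */
	uint16_t pkts_n = 37;        /* burst size requested by the app */
	uint16_t rcvd_pkt = 5;       /* already returned by decompression */

	/* Round down to whole vector iterations: (37 - 5) -> 32. */
	pkts_n = RTE_ALIGN_FLOOR((uint16_t)(pkts_n - rcvd_pkt),
				 MLX5_VPMD_DESCS_PER_LOOP);
	/* Do not run past the queue end: min(32, 256 - 240) -> 16. */
	pkts_n = RTE_MIN(pkts_n, (uint16_t)(q_n - elts_idx));
	return pkts_n;               /* 16 */
}

The former clamp against rxq->rq_ci - rxq->cq_ci is gone on purpose: with
the tail of the SW ring always padded, it is safe for the vector loop to
read a chunk that extends past the last replenished slot.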