net/cxgbe: fix segfault when accessing empty Tx mbuf list

Message ID 1598980809-19942-1-git-send-email-rahul.lakkireddy@chelsio.com
State Accepted, archived
Delegated to: Ferruh Yigit
Series
  • net/cxgbe: fix segfault when accessing empty Tx mbuf list

Checks

Context Check Description
ci/checkpatch success coding style OK
ci/Intel-compilation success Compilation OK
ci/travis-robot success Travis build: passed
ci/iol-mellanox-Performance success Performance Testing PASS
ci/iol-testing success Testing PASS
ci/iol-intel-Performance success Performance Testing PASS
ci/iol-intel-Functional success Functional Testing PASS

Commit Message

Rahul Lakkireddy Sept. 1, 2020, 5:20 p.m. UTC
Ensure packets are available before accessing the mbuf list in Tx
burst function. Otherwise, just reclaim completed Tx descriptors and
exit.

Fixes: b1df19e43e1d ("net/cxgbe: fix prefetch for non-coalesced Tx packets")
Cc: stable@dpdk.org

Reported-by: Brian Poole <brian90013@gmail.com>
Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
---
 drivers/net/cxgbe/cxgbe_ethdev.c | 4 ++++
 1 file changed, 4 insertions(+)

Comments

Ferruh Yigit Sept. 17, 2020, 3:09 p.m. UTC | #1
On 9/1/2020 6:20 PM, Rahul Lakkireddy wrote:
> Ensure packets are available before accessing the mbuf list in Tx
> burst function. Otherwise, just reclaim completed Tx descriptors and
> exit.
> 
> Fixes: b1df19e43e1d ("net/cxgbe: fix prefetch for non-coalesced Tx packets")
> Cc: stable@dpdk.org
> 
> Reported-by: Brian Poole <brian90013@gmail.com>
> Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>

Applied to dpdk-next-net/main, thanks.

Patch

diff --git a/drivers/net/cxgbe/cxgbe_ethdev.c b/drivers/net/cxgbe/cxgbe_ethdev.c
index 60d325723..38b43772c 100644
--- a/drivers/net/cxgbe/cxgbe_ethdev.c
+++ b/drivers/net/cxgbe/cxgbe_ethdev.c
@@ -71,6 +71,9 @@  uint16_t cxgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 	t4_os_lock(&txq->txq_lock);
 	/* free up desc from already completed tx */
 	reclaim_completed_tx(&txq->q);
+	if (unlikely(!nb_pkts))
+		goto out_unlock;
+
 	rte_prefetch0(rte_pktmbuf_mtod(tx_pkts[0], volatile void *));
 	while (total_sent < nb_pkts) {
 		pkts_remain = nb_pkts - total_sent;
@@ -91,6 +94,7 @@  uint16_t cxgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 		reclaim_completed_tx(&txq->q);
 	}
 
+out_unlock:
 	t4_os_unlock(&txq->txq_lock);
 	return total_sent;
 }