[1/1] net/qede: fix receive packet drop

Message ID 20190312165114.23740-1-shshaikh@marvell.com (mailing list archive)
State Accepted, archived
Delegated to: Ferruh Yigit
Series [1/1] net/qede: fix receive packet drop

Checks

Context Check Description
ci/checkpatch success coding style OK
ci/mellanox-Performance-Testing success Performance Testing PASS
ci/intel-Performance-Testing success Performance Testing PASS
ci/Intel-compilation success Compilation OK

Commit Message

Shahed Shaikh March 12, 2019, 4:51 p.m. UTC
  There is a corner case in which the driver does not post
receive buffers after it has processed all received packets
in a single loop (i.e. hw_consumer == sw_consumer), and the
HW then starts dropping packets because it never sees new
receive buffers posted.

This corner case is seen when the size of the Rx ring is less than
or equal to the Rx packet burst count for dev->rx_pkt_burst().

Fixes: 8f2312474529 ("net/qede: fix performance bottleneck in Rx path")
Cc: stable@dpdk.org

Signed-off-by: Shahed Shaikh <shshaikh@marvell.com>
---
 drivers/net/qede/qede_rxtx.c | 15 ++++++++-------
 1 file changed, 8 insertions(+), 7 deletions(-)
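
The fix comes down to the order of two steps inside the burst receive handler:
re-posting the buffers whose allocation was deferred from the previous call,
and the early return taken when no new completions have arrived. Below is a
minimal sketch of that ordering; it is not the qede driver code, and the names
(rxq_sketch, ring_refill(), rx_burst_sketch()) are hypothetical stand-ins for
the real hw_cons_ptr/ecore_chain machinery shown in the patch.

 #include <stdint.h>

 struct rxq_sketch {
 	uint16_t hw_cons;    /* completions produced by HW */
 	uint16_t sw_cons;    /* completions consumed by SW */
 	uint16_t deferred;   /* buffers freed in the last call, not yet re-posted */
 };

 /* Re-post the buffers whose allocation was deferred from the previous call. */
 static void ring_refill(struct rxq_sketch *q)
 {
 	q->deferred = 0;     /* stand-in for posting fresh mbufs to the ring */
 }

 static uint16_t rx_burst_sketch(struct rxq_sketch *q)
 {
 	uint16_t work = 0;

 	/* Refill first. With the old ordering the empty check below came
 	 * first, so a call that found no new completions returned before the
 	 * deferred buffers were re-posted; if the ring size is <= the burst
 	 * count, the previous call may have consumed the whole ring, leaving
 	 * it empty and the HW with nowhere to place incoming packets. */
 	if (q->deferred)
 		ring_refill(q);

 	if (q->hw_cons == q->sw_cons)
 		return 0;        /* nothing new from HW */

 	while (q->sw_cons != q->hw_cons) {
 		q->sw_cons++;    /* stand-in for processing one completion */
 		q->deferred++;   /* its buffer is re-posted on the next call */
 		work++;
 	}
 	return work;
 }

The patch applies exactly this reordering: the refill block is moved above the
hw_comp_cons == sw_comp_cons early return, so a call that finds no new packets
still replenishes the ring before returning.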
  

Comments

Rasesh Mody March 13, 2019, 5:55 p.m. UTC | #1
>From: dev <dev-bounces@dpdk.org> On Behalf Of Shahed Shaikh
>Sent: Tuesday, March 12, 2019 9:51 AM
>
>There is a corner case in which the driver does not post receive buffers after
>it has processed all received packets in a single loop (i.e. hw_consumer ==
>sw_consumer), and the HW then starts dropping packets because it never sees
>new receive buffers posted.
>
>This corner case is seen when the size of the Rx ring is less than or equal to
>the Rx packet burst count for dev->rx_pkt_burst().
>
>Fixes: 8f2312474529 ("net/qede: fix performance bottleneck in Rx path")
>Cc: stable@dpdk.org
>
>Signed-off-by: Shahed Shaikh <shshaikh@marvell.com>
>---

Acked-by: Rasesh Mody <rmody@marvell.com>

> drivers/net/qede/qede_rxtx.c | 15 ++++++++-------
> 1 file changed, 8 insertions(+), 7 deletions(-)
>
>diff --git a/drivers/net/qede/qede_rxtx.c b/drivers/net/qede/qede_rxtx.c
>index 70c32e3..27bac09 100644
>--- a/drivers/net/qede/qede_rxtx.c
>+++ b/drivers/net/qede/qede_rxtx.c
>@@ -1420,13 +1420,6 @@ qede_recv_pkts(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
> 	uint32_t rss_hash;
> 	int rx_alloc_count = 0;
>
>-	hw_comp_cons = rte_le_to_cpu_16(*rxq->hw_cons_ptr);
>-	sw_comp_cons = ecore_chain_get_cons_idx(&rxq->rx_comp_ring);
>-
>-	rte_rmb();
>-
>-	if (hw_comp_cons == sw_comp_cons)
>-		return 0;
>
> 	/* Allocate buffers that we used in previous loop */
> 	if (rxq->rx_alloc_count) {
>@@ -1447,6 +1440,14 @@ qede_recv_pkts(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
> 		rxq->rx_alloc_count = 0;
> 	}
>
>+	hw_comp_cons = rte_le_to_cpu_16(*rxq->hw_cons_ptr);
>+	sw_comp_cons = ecore_chain_get_cons_idx(&rxq->rx_comp_ring);
>+
>+	rte_rmb();
>+
>+	if (hw_comp_cons == sw_comp_cons)
>+		return 0;
>+
> 	while (sw_comp_cons != hw_comp_cons) {
> 		ol_flags = 0;
> 		packet_type = RTE_PTYPE_UNKNOWN;
>--
>2.7.4
  
Ferruh Yigit March 19, 2019, 7:01 p.m. UTC | #2
On 3/13/2019 5:55 PM, Rasesh Mody wrote:
>> From: dev <dev-bounces@dpdk.org> On Behalf Of Shahed Shaikh
>> Sent: Tuesday, March 12, 2019 9:51 AM
>>
>> There is a corner case in which the driver does not post receive buffers after
>> it has processed all received packets in a single loop (i.e. hw_consumer ==
>> sw_consumer), and the HW then starts dropping packets because it never sees
>> new receive buffers posted.
>>
>> This corner case is seen when the size of the Rx ring is less than or equal to
>> the Rx packet burst count for dev->rx_pkt_burst().
>>
>> Fixes: 8f2312474529 ("net/qede: fix performance bottleneck in Rx path")
>> Cc: stable@dpdk.org
>>
>> Signed-off-by: Shahed Shaikh <shshaikh@marvell.com>
>> ---
> 
> Acked-by: Rasesh Mody <rmody@marvell.com>

Applied to dpdk-next-net/master, thanks.
  

Patch

diff --git a/drivers/net/qede/qede_rxtx.c b/drivers/net/qede/qede_rxtx.c
index 70c32e3..27bac09 100644
--- a/drivers/net/qede/qede_rxtx.c
+++ b/drivers/net/qede/qede_rxtx.c
@@ -1420,13 +1420,6 @@  qede_recv_pkts(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 	uint32_t rss_hash;
 	int rx_alloc_count = 0;
 
-	hw_comp_cons = rte_le_to_cpu_16(*rxq->hw_cons_ptr);
-	sw_comp_cons = ecore_chain_get_cons_idx(&rxq->rx_comp_ring);
-
-	rte_rmb();
-
-	if (hw_comp_cons == sw_comp_cons)
-		return 0;
 
 	/* Allocate buffers that we used in previous loop */
 	if (rxq->rx_alloc_count) {
@@ -1447,6 +1440,14 @@  qede_recv_pkts(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 		rxq->rx_alloc_count = 0;
 	}
 
+	hw_comp_cons = rte_le_to_cpu_16(*rxq->hw_cons_ptr);
+	sw_comp_cons = ecore_chain_get_cons_idx(&rxq->rx_comp_ring);
+
+	rte_rmb();
+
+	if (hw_comp_cons == sw_comp_cons)
+		return 0;
+
 	while (sw_comp_cons != hw_comp_cons) {
 		ol_flags = 0;
 		packet_type = RTE_PTYPE_UNKNOWN;