[v1] net/i40e: remove the SMP barrier in HW scanning func

Message ID 20210604073405.14880-1-joyce.kong@arm.com (mailing list archive)
State Superseded, archived
Delegated to: Qi Zhang
Series [v1] net/i40e: remove the SMP barrier in HW scanning func

Checks

Context Check Description
ci/checkpatch success coding style OK
ci/iol-intel-Functional success Functional Testing PASS
ci/iol-intel-Performance success Performance Testing PASS
ci/Intel-compilation fail Compilation issues
ci/intel-Testing success Testing PASS
ci/iol-abi-testing warning Testing issues
ci/iol-testing fail Testing issues
ci/github-robot success github build: passed

Commit Message

Joyce Kong June 4, 2021, 7:34 a.m. UTC
  Add the logic to determine how many DD bits have been set
for contiguous packets, so that the SMP barrier can be
removed while reading the descriptors.

Signed-off-by: Joyce Kong <joyce.kong@arm.com>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
---
 drivers/net/i40e/i40e_rxtx.c | 13 ++++++++-----
 1 file changed, 8 insertions(+), 5 deletions(-)
  

Comments

Honnappa Nagarahalli June 4, 2021, 4:12 p.m. UTC | #1
<snip>
> 
> Add the logic to determine how many DD bits have been set for contiguous
> packets, for removing the SMP barrier while reading descs.
Are there any performance numbers with this change?

> 
> Signed-off-by: Joyce Kong <joyce.kong@arm.com>
> Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
> ---
>  drivers/net/i40e/i40e_rxtx.c | 13 ++++++++-----
>  1 file changed, 8 insertions(+), 5 deletions(-)
> 
> diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c index
> 6c58decec..410a81f30 100644
> --- a/drivers/net/i40e/i40e_rxtx.c
> +++ b/drivers/net/i40e/i40e_rxtx.c
> @@ -452,7 +452,7 @@ i40e_rx_scan_hw_ring(struct i40e_rx_queue *rxq)
>  	uint16_t pkt_len;
>  	uint64_t qword1;
>  	uint32_t rx_status;
> -	int32_t s[I40E_LOOK_AHEAD], nb_dd;
> +	int32_t s[I40E_LOOK_AHEAD], var, nb_dd;
>  	int32_t i, j, nb_rx = 0;
>  	uint64_t pkt_flags;
>  	uint32_t *ptype_tbl = rxq->vsi->adapter->ptype_tbl; @@ -482,11
> +482,14 @@ i40e_rx_scan_hw_ring(struct i40e_rx_queue *rxq)
>  					I40E_RXD_QW1_STATUS_SHIFT;
>  		}
> 
> -		rte_smp_rmb();
> -
>  		/* Compute how many status bits were set */
> -		for (j = 0, nb_dd = 0; j < I40E_LOOK_AHEAD; j++)
> -			nb_dd += s[j] & (1 <<
> I40E_RX_DESC_STATUS_DD_SHIFT);
> +		for (j = 0, nb_dd = 0; j < I40E_LOOK_AHEAD; j++) {
> +			var = s[j] & (1 << I40E_RX_DESC_STATUS_DD_SHIFT);
> +			if (var)
> +				nb_dd += 1;
> +			else
> +				break;
> +		}
> 
>  		nb_rx += nb_dd;
> 
> --
> 2.17.1
  
Qi Zhang June 6, 2021, 2:17 p.m. UTC | #2
> -----Original Message-----
> From: Joyce Kong <joyce.kong@arm.com>
> Sent: Friday, June 4, 2021 3:34 PM
> To: Xing, Beilei <beilei.xing@intel.com>; Zhang, Qi Z <qi.z.zhang@intel.com>;
> ruifeng.wang@arm.com; honnappa.nagarahalli@arm.com
> Cc: dev@dpdk.org; nd@arm.com
> Subject: [PATCH v1] net/i40e: remove the SMP barrier in HW scanning func
> 
> Add the logic to determine how many DD bits have been set for contiguous
> packets, for removing the SMP barrier while reading descs.

I didn't understand this.
The current logic already guarantees that the DD bits read out are from contiguous packets, as it reads the Rx descriptors in reverse order from the ring.
So I don't see any new logic being added; could you describe the purpose of this patch more clearly?
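
For reference, this is the read-out loop just above the changed hunk in i40e_rx_scan_hw_ring() (lightly abridged):

	/* Scan the look-ahead window in reverse; with in-order reads,
	 * no set DD bit can be observed beyond the first clear one. */
	for (j = I40E_LOOK_AHEAD - 1; j >= 0; j--) {
		qword1 = rte_le_to_cpu_64(
				rxdp[j].wb.qword1.status_error_len);
		s[j] = (qword1 & I40E_RXD_QW1_STATUS_MASK) >>
				I40E_RXD_QW1_STATUS_SHIFT;
	}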

> 
> Signed-off-by: Joyce Kong <joyce.kong@arm.com>
> Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
> ---
>  drivers/net/i40e/i40e_rxtx.c | 13 ++++++++-----
>  1 file changed, 8 insertions(+), 5 deletions(-)
> 
> diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c index
> 6c58decec..410a81f30 100644
> --- a/drivers/net/i40e/i40e_rxtx.c
> +++ b/drivers/net/i40e/i40e_rxtx.c
> @@ -452,7 +452,7 @@ i40e_rx_scan_hw_ring(struct i40e_rx_queue *rxq)
>  	uint16_t pkt_len;
>  	uint64_t qword1;
>  	uint32_t rx_status;
> -	int32_t s[I40E_LOOK_AHEAD], nb_dd;
> +	int32_t s[I40E_LOOK_AHEAD], var, nb_dd;
>  	int32_t i, j, nb_rx = 0;
>  	uint64_t pkt_flags;
>  	uint32_t *ptype_tbl = rxq->vsi->adapter->ptype_tbl; @@ -482,11 +482,14
> @@ i40e_rx_scan_hw_ring(struct i40e_rx_queue *rxq)
>  					I40E_RXD_QW1_STATUS_SHIFT;
>  		}
> 
> -		rte_smp_rmb();

Is there any performance gain from removing this? And it is not necessary for it to be combined with the change below, right?
 
> -
>  		/* Compute how many status bits were set */
> -		for (j = 0, nb_dd = 0; j < I40E_LOOK_AHEAD; j++)
> -			nb_dd += s[j] & (1 << I40E_RX_DESC_STATUS_DD_SHIFT);
> +		for (j = 0, nb_dd = 0; j < I40E_LOOK_AHEAD; j++) {
> +			var = s[j] & (1 << I40E_RX_DESC_STATUS_DD_SHIFT);
> +			if (var)
> +				nb_dd += 1;
> +			else
> +				break;
> +		}
> 
>  		nb_rx += nb_dd;
> 
> --
> 2.17.1
  
Honnappa Nagarahalli June 6, 2021, 6:33 p.m. UTC | #3
<snip>

> >
> > Add the logic to determine how many DD bits have been set for
> > contiguous packets, for removing the SMP barrier while reading descs.
> 
> I didn't understand this.
> The current logic already guarantee the read out DD bits are from continue
> packets, as it read Rx descriptor in a reversed order from the ring.
Qi, the comments in the code mention that there is a race condition if the descriptors are not read in reverse order, but they do not mention what the race condition is or how it can occur. I would appreciate it if you could explain that.

On x86, the reads are not re-ordered (though the compiler can re-order them). On Arm, the reads can get re-ordered and hence the barriers are required. In order to avoid the barriers, we are trying to process only those descriptors whose DD bits are set such that they are contiguous, e.g. if the DD bits are 1011, we process only the first descriptor.
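
In other words, the loop in this patch only counts a contiguous prefix of set DD bits:

	/* s[] holds the status words already read from the ring. Stop at
	 * the first clear DD bit so that a stale word read out of order
	 * (e.g. the hole in 1011) is never acted upon. */
	for (j = 0, nb_dd = 0; j < I40E_LOOK_AHEAD; j++) {
		if (s[j] & (1 << I40E_RX_DESC_STATUS_DD_SHIFT))
			nb_dd += 1;
		else
			break;
	}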

> So I didn't see the a new logic be added, would you describe more clear about
> the purpose of this patch?
> 
> >
> > Signed-off-by: Joyce Kong <joyce.kong@arm.com>
> > Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
> > ---
> >  drivers/net/i40e/i40e_rxtx.c | 13 ++++++++-----
> >  1 file changed, 8 insertions(+), 5 deletions(-)
> >
> > diff --git a/drivers/net/i40e/i40e_rxtx.c
> > b/drivers/net/i40e/i40e_rxtx.c index
> > 6c58decec..410a81f30 100644
> > --- a/drivers/net/i40e/i40e_rxtx.c
> > +++ b/drivers/net/i40e/i40e_rxtx.c
> > @@ -452,7 +452,7 @@ i40e_rx_scan_hw_ring(struct i40e_rx_queue *rxq)
> >  	uint16_t pkt_len;
> >  	uint64_t qword1;
> >  	uint32_t rx_status;
> > -	int32_t s[I40E_LOOK_AHEAD], nb_dd;
> > +	int32_t s[I40E_LOOK_AHEAD], var, nb_dd;
> >  	int32_t i, j, nb_rx = 0;
> >  	uint64_t pkt_flags;
> >  	uint32_t *ptype_tbl = rxq->vsi->adapter->ptype_tbl; @@ -482,11
> > +482,14 @@ i40e_rx_scan_hw_ring(struct i40e_rx_queue *rxq)
> >  					I40E_RXD_QW1_STATUS_SHIFT;
> >  		}
> >
> > -		rte_smp_rmb();
> 
> Any performance gain by removing this? and it is not necessary to be
> combined with below change, right?
> 
> > -
> >  		/* Compute how many status bits were set */
> > -		for (j = 0, nb_dd = 0; j < I40E_LOOK_AHEAD; j++)
> > -			nb_dd += s[j] & (1 <<
> I40E_RX_DESC_STATUS_DD_SHIFT);
> > +		for (j = 0, nb_dd = 0; j < I40E_LOOK_AHEAD; j++) {
> > +			var = s[j] & (1 << I40E_RX_DESC_STATUS_DD_SHIFT);
> > +			if (var)
> > +				nb_dd += 1;
> > +			else
> > +				break;
> > +		}
> >
> >  		nb_rx += nb_dd;
> >
> > --
> > 2.17.1
  
Qi Zhang June 7, 2021, 2:55 p.m. UTC | #4
> -----Original Message-----
> From: Honnappa Nagarahalli <Honnappa.Nagarahalli@arm.com>
> Sent: Monday, June 7, 2021 2:33 AM
> To: Zhang, Qi Z <qi.z.zhang@intel.com>; Joyce Kong <Joyce.Kong@arm.com>;
> Xing, Beilei <beilei.xing@intel.com>; Ruifeng Wang <Ruifeng.Wang@arm.com>
> Cc: dev@dpdk.org; nd <nd@arm.com>; Honnappa Nagarahalli
> <Honnappa.Nagarahalli@arm.com>; nd <nd@arm.com>
> Subject: RE: [PATCH v1] net/i40e: remove the SMP barrier in HW scanning
> func
> 
> <snip>
> 
> > >
> > > Add the logic to determine how many DD bits have been set for
> > > contiguous packets, for removing the SMP barrier while reading descs.
> >
> > I didn't understand this.
> > The current logic already guarantee the read out DD bits are from
> > continue packets, as it read Rx descriptor in a reversed order from the ring.
> Qi, the comments in the code mention that there is a race condition if the
> descriptors are not read in the reverse order. But, they do not mention what
> the race condition is and how it can occur. Appreciate if you could explain
> that.

The race condition happens between the NIC and the CPU. If the DD bits were written and read in the same order, there might be a hole (e.g. 1011); with the reverse read order, we make sure there is no more "1" after the first "0".
As the read addresses are declared volatile, the compiler will not reorder the reads.

> 
> On x86, the reads are not re-ordered (though the compiler can re-order). On
> ARM, the reads can get re-ordered and hence the barriers are required. In
> order to avoid the barriers, we are trying to process only those descriptors
> whose DD bits are set such that they are contiguous. i.e. if the DD bits are
> 1011, we process only the first descriptor.

Ok, I see. Thanks for the explanation.
At this moment, I would prefer not to change the behavior on x86, so a compile option for Arm can be added. In the future, when we observe no performance impact on x86 as well, we can consider removing it. What do you think?
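
Something like the split below is what I have in mind (just a sketch of the idea, not a final patch):

	#if defined(RTE_ARCH_ARM64) || defined(RTE_ARCH_ARM)
		/* No barrier: only count a contiguous prefix of set DD bits. */
		for (j = 0, nb_dd = 0; j < I40E_LOOK_AHEAD; j++) {
			if (s[j] & (1 << I40E_RX_DESC_STATUS_DD_SHIFT))
				nb_dd += 1;
			else
				break;
		}
	#else
		rte_smp_rmb();
		/* Keep the current x86 behavior: sum all set DD bits. */
		for (j = 0, nb_dd = 0; j < I40E_LOOK_AHEAD; j++)
			nb_dd += s[j] & (1 << I40E_RX_DESC_STATUS_DD_SHIFT);
	#endif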

> 
> > So I didn't see the a new logic be added, would you describe more
> > clear about the purpose of this patch?
> >
> > >
> > > Signed-off-by: Joyce Kong <joyce.kong@arm.com>
> > > Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
> > > ---
> > >  drivers/net/i40e/i40e_rxtx.c | 13 ++++++++-----
> > >  1 file changed, 8 insertions(+), 5 deletions(-)
> > >
> > > diff --git a/drivers/net/i40e/i40e_rxtx.c
> > > b/drivers/net/i40e/i40e_rxtx.c index
> > > 6c58decec..410a81f30 100644
> > > --- a/drivers/net/i40e/i40e_rxtx.c
> > > +++ b/drivers/net/i40e/i40e_rxtx.c
> > > @@ -452,7 +452,7 @@ i40e_rx_scan_hw_ring(struct i40e_rx_queue *rxq)
> > >  	uint16_t pkt_len;
> > >  	uint64_t qword1;
> > >  	uint32_t rx_status;
> > > -	int32_t s[I40E_LOOK_AHEAD], nb_dd;
> > > +	int32_t s[I40E_LOOK_AHEAD], var, nb_dd;
> > >  	int32_t i, j, nb_rx = 0;
> > >  	uint64_t pkt_flags;
> > >  	uint32_t *ptype_tbl = rxq->vsi->adapter->ptype_tbl; @@ -482,11
> > > +482,14 @@ i40e_rx_scan_hw_ring(struct i40e_rx_queue *rxq)
> > >  					I40E_RXD_QW1_STATUS_SHIFT;
> > >  		}
> > >
> > > -		rte_smp_rmb();
> >
> > Any performance gain by removing this? and it is not necessary to be
> > combined with below change, right?
> >
> > > -
> > >  		/* Compute how many status bits were set */
> > > -		for (j = 0, nb_dd = 0; j < I40E_LOOK_AHEAD; j++)
> > > -			nb_dd += s[j] & (1 <<
> > I40E_RX_DESC_STATUS_DD_SHIFT);
> > > +		for (j = 0, nb_dd = 0; j < I40E_LOOK_AHEAD; j++) {
> > > +			var = s[j] & (1 << I40E_RX_DESC_STATUS_DD_SHIFT);
> > > +			if (var)
> > > +				nb_dd += 1;
> > > +			else
> > > +				break;
> > > +		}
> > >
> > >  		nb_rx += nb_dd;
> > >
> > > --
> > > 2.17.1
  
Honnappa Nagarahalli June 7, 2021, 9:36 p.m. UTC | #5
<snip>

> >
> > > >
> > > > Add the logic to determine how many DD bits have been set for
> > > > contiguous packets, for removing the SMP barrier while reading descs.
> > >
> > > I didn't understand this.
> > > The current logic already guarantee the read out DD bits are from
> > > continue packets, as it read Rx descriptor in a reversed order from the
> ring.
> > Qi, the comments in the code mention that there is a race condition if
> > the descriptors are not read in the reverse order. But, they do not
> > mention what the race condition is and how it can occur. Appreciate if
> > you could explain that.
> 
> The Race condition happens between the NIC and CPU, if write and read DD
> bit in the same order, there might be a hole (e.g. 1011)  with the reverse read
> order, we make sure no more "1" after the first "0"
> as the read address are declared as volatile, compiler will not re-ordered
> them.
My understanding is that

1) the NIC will write an entire cache line of descriptors to memory "atomically" (i.e. the entire cache line is visible to the CPU at once) if there are enough descriptors ready to fill one cache line.
2) But, if there are not enough descriptors ready (because, for example, there is not enough traffic), then it might write partial cache lines.

Please correct me if I am wrong.

For #1, I do not think it matters if we read the descriptors in reverse order or not, as the cache line is written atomically.
For #1, if we read in reverse order, does it make sense to not check the DD bits of descriptors that are earlier in the order once we encounter a descriptor that has its DD bit set? This is because the NIC updates the descriptors in order.
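
For example, something like this (a hypothetical sketch, assuming the status words were read in reverse order and the reads are not reordered):

	/* Walk backwards; the first set DD bit found gives the count
	 * directly, because the NIC sets DD in ascending ring order. */
	nb_dd = 0;
	for (j = I40E_LOOK_AHEAD - 1; j >= 0; j--) {
		if (s[j] & (1 << I40E_RX_DESC_STATUS_DD_SHIFT)) {
			nb_dd = j + 1;
			break;
		}
	}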

> 
> >
> > On x86, the reads are not re-ordered (though the compiler can
> > re-order). On ARM, the reads can get re-ordered and hence the barriers
> > are required. In order to avoid the barriers, we are trying to process
> > only those descriptors whose DD bits are set such that they are
> > contiguous. i.e. if the DD bits are 1011, we process only the first descriptor.
> 
> Ok, I see. thanks for the explanation.
> At this moment, I may prefer not change the behavior of x86, so compile
> option for arm can be added, in future when we observe no performance
> impact for x86 as well, we can consider to remove it, what do you think?
I am ok with this approach.

> 
> >
> > > So I didn't see the a new logic be added, would you describe more
> > > clear about the purpose of this patch?
> > >
> > > >
> > > > Signed-off-by: Joyce Kong <joyce.kong@arm.com>
> > > > Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
> > > > ---
> > > >  drivers/net/i40e/i40e_rxtx.c | 13 ++++++++-----
> > > >  1 file changed, 8 insertions(+), 5 deletions(-)
> > > >
> > > > diff --git a/drivers/net/i40e/i40e_rxtx.c
> > > > b/drivers/net/i40e/i40e_rxtx.c index
> > > > 6c58decec..410a81f30 100644
> > > > --- a/drivers/net/i40e/i40e_rxtx.c
> > > > +++ b/drivers/net/i40e/i40e_rxtx.c
> > > > @@ -452,7 +452,7 @@ i40e_rx_scan_hw_ring(struct i40e_rx_queue
> *rxq)
> > > >  	uint16_t pkt_len;
> > > >  	uint64_t qword1;
> > > >  	uint32_t rx_status;
> > > > -	int32_t s[I40E_LOOK_AHEAD], nb_dd;
> > > > +	int32_t s[I40E_LOOK_AHEAD], var, nb_dd;
> > > >  	int32_t i, j, nb_rx = 0;
> > > >  	uint64_t pkt_flags;
> > > >  	uint32_t *ptype_tbl = rxq->vsi->adapter->ptype_tbl; @@ -482,11
> > > > +482,14 @@ i40e_rx_scan_hw_ring(struct i40e_rx_queue *rxq)
> > > >  					I40E_RXD_QW1_STATUS_SHIFT;
> > > >  		}
> > > >
> > > > -		rte_smp_rmb();
> > >
> > > Any performance gain by removing this? and it is not necessary to be
> > > combined with below change, right?
> > >
> > > > -
> > > >  		/* Compute how many status bits were set */
> > > > -		for (j = 0, nb_dd = 0; j < I40E_LOOK_AHEAD; j++)
> > > > -			nb_dd += s[j] & (1 <<
> > > I40E_RX_DESC_STATUS_DD_SHIFT);
> > > > +		for (j = 0, nb_dd = 0; j < I40E_LOOK_AHEAD; j++) {
> > > > +			var = s[j] & (1 << I40E_RX_DESC_STATUS_DD_SHIFT);
> > > > +			if (var)
> > > > +				nb_dd += 1;
> > > > +			else
> > > > +				break;
> > > > +		}
> > > >
> > > >  		nb_rx += nb_dd;
> > > >
> > > > --
> > > > 2.17.1
  
Joyce Kong June 15, 2021, 6:30 a.m. UTC | #6
<snip>
 
> > > > > Add the logic to determine how many DD bits have been set for
> > > > > contiguous packets, for removing the SMP barrier while reading descs.
> > > >
> > > > I didn't understand this.
> > > > The current logic already guarantee the read out DD bits are from
> > > > continue packets, as it read Rx descriptor in a reversed order
> > > > from the ring.
> > > Qi, the comments in the code mention that there is a race condition
> > > if the descriptors are not read in the reverse order. But, they do
> > > not mention what the race condition is and how it can occur.
> > > Appreciate if you could explain that.
> >
> > The Race condition happens between the NIC and CPU, if write and read
> > DD bit in the same order, there might be a hole (e.g. 1011)  with the
> > reverse read order, we make sure no more "1" after the first "0"
> > as the read address are declared as volatile, compiler will not
> > re-ordered them.
> My understanding is that
> 
> 1) the NIC will write an entire cache line of descriptors to memory
> "atomically" (i.e. the entire cache line is visible to the CPU at once) if there
> are enough descriptors ready to fill one cache line.
> 2) But, if there are not enough descriptors ready (because for ex: there is not
> enough traffic), then it might write partial cache lines.
> 
> Please correct me if I am wrong.
> 
> For #1, I do not think it matters if we read the descriptors in reverse order or
> not as the cache line is written atomically.
> For #1, if we read in reverse order, does it make sense to not check the DD
> bits of descriptors that are earlier in the order once we encounter a
> descriptor that has its DD bit set? This is because NIC updates the descriptors
> in order.
> 
> >
> > >
> > > On x86, the reads are not re-ordered (though the compiler can
> > > re-order). On ARM, the reads can get re-ordered and hence the
> > > barriers are required. In order to avoid the barriers, we are trying
> > > to process only those descriptors whose DD bits are set such that
> > > they are contiguous. i.e. if the DD bits are 1011, we process only the first
> descriptor.
> >
> > Ok, I see. thanks for the explanation.
> > At this moment, I may prefer not change the behavior of x86, so
> > compile option for arm can be added, in future when we observe no
> > performance impact for x86 as well, we can consider to remove it, what do
> you think?
> I am ok with this approach.
> 

Thanks for your comments, I will modify the patch according to your suggestions.

> >
> > >
> > > > So I didn't see the a new logic be added, would you describe more
> > > > clear about the purpose of this patch?
> > > >
> > > > >
> > > > > Signed-off-by: Joyce Kong <joyce.kong@arm.com>
> > > > > Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
> > > > > ---
> > > > >  drivers/net/i40e/i40e_rxtx.c | 13 ++++++++-----
> > > > >  1 file changed, 8 insertions(+), 5 deletions(-)
> > > > >
> > > > > diff --git a/drivers/net/i40e/i40e_rxtx.c
> > > > > b/drivers/net/i40e/i40e_rxtx.c index
> > > > > 6c58decec..410a81f30 100644
> > > > > --- a/drivers/net/i40e/i40e_rxtx.c
> > > > > +++ b/drivers/net/i40e/i40e_rxtx.c
> > > > > @@ -452,7 +452,7 @@ i40e_rx_scan_hw_ring(struct i40e_rx_queue
> > *rxq)
> > > > >  	uint16_t pkt_len;
> > > > >  	uint64_t qword1;
> > > > >  	uint32_t rx_status;
> > > > > -	int32_t s[I40E_LOOK_AHEAD], nb_dd;
> > > > > +	int32_t s[I40E_LOOK_AHEAD], var, nb_dd;
> > > > >  	int32_t i, j, nb_rx = 0;
> > > > >  	uint64_t pkt_flags;
> > > > >  	uint32_t *ptype_tbl = rxq->vsi->adapter->ptype_tbl; @@ -482,11
> > > > > +482,14 @@ i40e_rx_scan_hw_ring(struct i40e_rx_queue *rxq)
> > > > >  					I40E_RXD_QW1_STATUS_SHIFT;
> > > > >  		}
> > > > >
> > > > > -		rte_smp_rmb();
> > > >
> > > > Any performance gain by removing this? and it is not necessary to
> > > > be combined with below change, right?
> > > >

I have tested the patch on both x86 and Arm platforms; there seems to be no performance change.
As Honnappa explained, we combined these to avoid the barriers. In this way, we only
process those descriptors whose DD bits are set such that they are contiguous.

> > > > > -
> > > > >  		/* Compute how many status bits were set */
> > > > > -		for (j = 0, nb_dd = 0; j < I40E_LOOK_AHEAD; j++)
> > > > > -			nb_dd += s[j] & (1 <<
> > > > I40E_RX_DESC_STATUS_DD_SHIFT);
> > > > > +		for (j = 0, nb_dd = 0; j < I40E_LOOK_AHEAD; j++) {
> > > > > +			var = s[j] & (1 <<
> I40E_RX_DESC_STATUS_DD_SHIFT);
> > > > > +			if (var)
> > > > > +				nb_dd += 1;
> > > > > +			else
> > > > > +				break;
> > > > > +		}
> > > > >
> > > > >  		nb_rx += nb_dd;
> > > > >
> > > > > --
> > > > > 2.17.1
>
  
Qi Zhang June 16, 2021, 1:29 p.m. UTC | #7
Hi

> -----Original Message-----
> From: Honnappa Nagarahalli <Honnappa.Nagarahalli@arm.com>
> Sent: Tuesday, June 8, 2021 5:36 AM
> To: Zhang, Qi Z <qi.z.zhang@intel.com>; Joyce Kong <Joyce.Kong@arm.com>;
> Xing, Beilei <beilei.xing@intel.com>; Ruifeng Wang <Ruifeng.Wang@arm.com>
> Cc: dev@dpdk.org; nd <nd@arm.com>; Honnappa Nagarahalli
> <Honnappa.Nagarahalli@arm.com>; nd <nd@arm.com>
> Subject: RE: [PATCH v1] net/i40e: remove the SMP barrier in HW scanning
> func
> 
> <snip>
> 
> > >
> > > > >
> > > > > Add the logic to determine how many DD bits have been set for
> > > > > contiguous packets, for removing the SMP barrier while reading descs.
> > > >
> > > > I didn't understand this.
> > > > The current logic already guarantee the read out DD bits are from
> > > > continue packets, as it read Rx descriptor in a reversed order
> > > > from the
> > ring.
> > > Qi, the comments in the code mention that there is a race condition
> > > if the descriptors are not read in the reverse order. But, they do
> > > not mention what the race condition is and how it can occur.
> > > Appreciate if you could explain that.
> >
> > The Race condition happens between the NIC and CPU, if write and read
> > DD bit in the same order, there might be a hole (e.g. 1011)  with the
> > reverse read order, we make sure no more "1" after the first "0"
> > as the read address are declared as volatile, compiler will not
> > re-ordered them.
> My understanding is that
> 
> 1) the NIC will write an entire cache line of descriptors to memory "atomically"
> (i.e. the entire cache line is visible to the CPU at once) if there are enough
> descriptors ready to fill one cache line.
> 2) But, if there are not enough descriptors ready (because for ex: there is not
> enough traffic), then it might write partial cache lines.

Yes. For example, a cache line contains 4 x 16-byte descriptors, and it is possible we get 1 1 1 0 for the DD bits at some moment.

> 
> Please correct me if I am wrong.
> 
> For #1, I do not think it matters if we read the descriptors in reverse order or
> not as the cache line is written atomically.

I think the case below may happen if we don't read in reverse order.

1. The CPU reads the first cache line as 1 1 1 0 in a loop.
2. New packets arrive, and the NIC appends the last 1 to the first cache line, plus a new cache line with 1 1 1 1.
3. The CPU continues with the new cache line of 1 1 1 1 in the same loop, but the last 1 of the first cache line was missed, so finally it gets 1 1 1 0 1 1 1 1.


> For #1, if we read in reverse order, does it make sense to not check the DD bits
> of descriptors that are earlier in the order once we encounter a descriptor that
> has its DD bit set? This is because NIC updates the descriptors in order.

I think the answer is yes. When we meet the first set DD bit, we should be able to calculate the exact number based on the index, but I am not sure how much performance gain that would bring.


> 
> >
> > >
> > > On x86, the reads are not re-ordered (though the compiler can
> > > re-order). On ARM, the reads can get re-ordered and hence the
> > > barriers are required. In order to avoid the barriers, we are trying
> > > to process only those descriptors whose DD bits are set such that
> > > they are contiguous. i.e. if the DD bits are 1011, we process only the first
> descriptor.
> >
> > Ok, I see. thanks for the explanation.
> > At this moment, I may prefer not change the behavior of x86, so
> > compile option for arm can be added, in future when we observe no
> > performance impact for x86 as well, we can consider to remove it, what do
> you think?
> I am ok with this approach.
> 
> >
> > >
> > > > So I didn't see the a new logic be added, would you describe more
> > > > clear about the purpose of this patch?
> > > >
> > > > >
> > > > > Signed-off-by: Joyce Kong <joyce.kong@arm.com>
> > > > > Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
> > > > > ---
> > > > >  drivers/net/i40e/i40e_rxtx.c | 13 ++++++++-----
> > > > >  1 file changed, 8 insertions(+), 5 deletions(-)
> > > > >
> > > > > diff --git a/drivers/net/i40e/i40e_rxtx.c
> > > > > b/drivers/net/i40e/i40e_rxtx.c index
> > > > > 6c58decec..410a81f30 100644
> > > > > --- a/drivers/net/i40e/i40e_rxtx.c
> > > > > +++ b/drivers/net/i40e/i40e_rxtx.c
> > > > > @@ -452,7 +452,7 @@ i40e_rx_scan_hw_ring(struct i40e_rx_queue
> > *rxq)
> > > > >  	uint16_t pkt_len;
> > > > >  	uint64_t qword1;
> > > > >  	uint32_t rx_status;
> > > > > -	int32_t s[I40E_LOOK_AHEAD], nb_dd;
> > > > > +	int32_t s[I40E_LOOK_AHEAD], var, nb_dd;
> > > > >  	int32_t i, j, nb_rx = 0;
> > > > >  	uint64_t pkt_flags;
> > > > >  	uint32_t *ptype_tbl = rxq->vsi->adapter->ptype_tbl; @@ -482,11
> > > > > +482,14 @@ i40e_rx_scan_hw_ring(struct i40e_rx_queue *rxq)
> > > > >  					I40E_RXD_QW1_STATUS_SHIFT;
> > > > >  		}
> > > > >
> > > > > -		rte_smp_rmb();
> > > >
> > > > Any performance gain by removing this? and it is not necessary to
> > > > be combined with below change, right?
> > > >
> > > > > -
> > > > >  		/* Compute how many status bits were set */
> > > > > -		for (j = 0, nb_dd = 0; j < I40E_LOOK_AHEAD; j++)
> > > > > -			nb_dd += s[j] & (1 <<
> > > > I40E_RX_DESC_STATUS_DD_SHIFT);
> > > > > +		for (j = 0, nb_dd = 0; j < I40E_LOOK_AHEAD; j++) {
> > > > > +			var = s[j] & (1 << I40E_RX_DESC_STATUS_DD_SHIFT);
> > > > > +			if (var)
> > > > > +				nb_dd += 1;
> > > > > +			else
> > > > > +				break;
> > > > > +		}
> > > > >
> > > > >  		nb_rx += nb_dd;
> > > > >
> > > > > --
> > > > > 2.17.1
  
Bruce Richardson June 16, 2021, 1:37 p.m. UTC | #8
On Wed, Jun 16, 2021 at 01:29:24PM +0000, Zhang, Qi Z wrote:
> Hi
> 
> > -----Original Message-----
> > From: Honnappa Nagarahalli <Honnappa.Nagarahalli@arm.com>
> > Sent: Tuesday, June 8, 2021 5:36 AM
> > To: Zhang, Qi Z <qi.z.zhang@intel.com>; Joyce Kong <Joyce.Kong@arm.com>;
> > Xing, Beilei <beilei.xing@intel.com>; Ruifeng Wang <Ruifeng.Wang@arm.com>
> > Cc: dev@dpdk.org; nd <nd@arm.com>; Honnappa Nagarahalli
> > <Honnappa.Nagarahalli@arm.com>; nd <nd@arm.com>
> > Subject: RE: [PATCH v1] net/i40e: remove the SMP barrier in HW scanning
> > func
> > 
> > <snip>
> > 
> > > >
> > > > > >
> > > > > > Add the logic to determine how many DD bits have been set for
> > > > > > contiguous packets, for removing the SMP barrier while reading descs.
> > > > >
> > > > > I didn't understand this.
> > > > > The current logic already guarantee the read out DD bits are from
> > > > > continue packets, as it read Rx descriptor in a reversed order
> > > > > from the
> > > ring.
> > > > Qi, the comments in the code mention that there is a race condition
> > > > if the descriptors are not read in the reverse order. But, they do
> > > > not mention what the race condition is and how it can occur.
> > > > Appreciate if you could explain that.
> > >
> > > The Race condition happens between the NIC and CPU, if write and read
> > > DD bit in the same order, there might be a hole (e.g. 1011)  with the
> > > reverse read order, we make sure no more "1" after the first "0"
> > > as the read address are declared as volatile, compiler will not
> > > re-ordered them.
> > My understanding is that
> > 
> > 1) the NIC will write an entire cache line of descriptors to memory "atomically"
> > (i.e. the entire cache line is visible to the CPU at once) if there are enough
> > descriptors ready to fill one cache line.
> > 2) But, if there are not enough descriptors ready (because for ex: there is not
> > enough traffic), then it might write partial cache lines.
> 
> Yes, for example a cache line contains 4 x16 bytes descriptors and it is possible we get 1 1 1 0 for DD bit at some moment.
> 
> > 
> > Please correct me if I am wrong.
> > 
> > For #1, I do not think it matters if we read the descriptors in reverse order or
> > not as the cache line is written atomically.
> 
> I think below cases may happens if we don't read in reserve order.
> 
> 1. CPU get first cache line as 1 1 1 0 in a loop
> 2. new packets coming and NIC append last 1 to the first cache and a new cache line with 1 1 1 1.
> 3. CPU continue new cache line with 1 1 1 1 in the same loop, but the last 1 of first cache line is missed, so finally it get 1 1 1 0 1 1 1 1. 
> 

The one-sentence answer here is: when two entities are moving along a line
in the same direction - like two runners in a race - then they can pass
each other multiple times as each goes slower or faster at any point in
time, whereas if they are moving in opposite directions there will only
ever be one cross-over point no matter how the speed of each changes. 

In the case of NIC and software this fact means that there will always be a
clear cross-over point from DD set to not-set.

> 
> > For #1, if we read in reverse order, does it make sense to not check the DD bits
> > of descriptors that are earlier in the order once we encounter a descriptor that
> > has its DD bit set? This is because NIC updates the descriptors in order.
> 
> I think the answer is yes, when we met the first DD bit, we should able to calculated the exact number base on the index, but not sure how much performance gain.
> 
The other factors here are:
1. The driver does not do a straight read of all 32 DD bits in one go,
rather it does 8 at a time and aborts at the end of a set of 8 if not all
are valid.
2. For any that are set, we have to read the descriptor anyway to get the
packet data out of it, so in the shortcut case of the last descriptor being
set, we still have to read the other 7 anyway, and DD comes for free as
part of it.
3. Blindly reading 8 at a time reduces the branching to just a single
decision point at the end of each set of 8, reducing possible branch
mispredicts.
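
That is, the scan loop has roughly this shape (simplified from i40e_rx_scan_hw_ring()):

	for (i = 0; i < I40E_RX_MAX_BURST;
			i += I40E_LOOK_AHEAD, rxdp += I40E_LOOK_AHEAD) {
		/* Read all 8 status words unconditionally ... */
		/* ... count nb_dd and extract packets for those slots ... */
		nb_rx += nb_dd;
		/* Single branch per block of 8. */
		if (nb_dd != I40E_LOOK_AHEAD)
			break;
	}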
  
Honnappa Nagarahalli June 16, 2021, 8:26 p.m. UTC | #9
<snip>

> > > > >
> > > > > > >
> > > > > > > Add the logic to determine how many DD bits have been set
> > > > > > > for contiguous packets, for removing the SMP barrier while reading
> descs.
> > > > > >
> > > > > > I didn't understand this.
> > > > > > The current logic already guarantee the read out DD bits are
> > > > > > from continue packets, as it read Rx descriptor in a reversed
> > > > > > order from the
> > > > ring.
> > > > > Qi, the comments in the code mention that there is a race
> > > > > condition if the descriptors are not read in the reverse order.
> > > > > But, they do not mention what the race condition is and how it can
> occur.
> > > > > Appreciate if you could explain that.
> > > >
> > > > The Race condition happens between the NIC and CPU, if write and
> > > > read DD bit in the same order, there might be a hole (e.g. 1011)
> > > > with the reverse read order, we make sure no more "1" after the first "0"
> > > > as the read address are declared as volatile, compiler will not
> > > > re-ordered them.
> > > My understanding is that
> > >
> > > 1) the NIC will write an entire cache line of descriptors to memory
> "atomically"
> > > (i.e. the entire cache line is visible to the CPU at once) if there
> > > are enough descriptors ready to fill one cache line.
> > > 2) But, if there are not enough descriptors ready (because for ex:
> > > there is not enough traffic), then it might write partial cache lines.
> >
> > Yes, for example a cache line contains 4 x16 bytes descriptors and it is
> possible we get 1 1 1 0 for DD bit at some moment.
> >
> > >
> > > Please correct me if I am wrong.
> > >
> > > For #1, I do not think it matters if we read the descriptors in
> > > reverse order or not as the cache line is written atomically.
> >
> > I think below cases may happens if we don't read in reserve order.
> >
> > 1. CPU get first cache line as 1 1 1 0 in a loop 2. new packets coming
> > and NIC append last 1 to the first cache and a new cache line with 1 1 1 1.
> > 3. CPU continue new cache line with 1 1 1 1 in the same loop, but the last 1
> of first cache line is missed, so finally it get 1 1 1 0 1 1 1 1.
> >
> 
> The one-sentence answer here is: when two entities are moving along a line in
> the same direction - like two runners in a race - then they can pass each other
> multiple times as each goes slower or faster at any point in time, whereas if
> they are moving in opposite directions there will only ever be one cross-over
> point no matter how the speed of each changes.
> 
> In the case of NIC and software this fact means that there will always be a
> clear cross-over point from DD set to not-set.
Thanks Bruce, that is a great analogy to describe the problem, assuming that the reads actually happen in program order.

On the Arm platform, even though the program reads in reverse order, the reads might get executed in any order. We have 2 solutions here:
1) Enforce the order with barriers, or
2) Only process descriptors with contiguous DD bits set.

> 
> >
> > > For #1, if we read in reverse order, does it make sense to not check
> > > the DD bits of descriptors that are earlier in the order once we
> > > encounter a descriptor that has its DD bit set? This is because NIC updates
> the descriptors in order.
> >
> > I think the answer is yes, when we met the first DD bit, we should able to
> calculated the exact number base on the index, but not sure how much
> performance gain.
> >
> The other factors here are:
> 1. The driver does not do a straight read of all 32 DD bits in one go, rather it
> does 8 at a time and aborts at the end of a set of 8 if not all are valid.
> 2. For any that are set, we have to read the descriptor anyway to get the
> packet data out of it, so in the shortcut case of the last descriptor being set,
> we still have to read the other 7 anyway, and DD comes for free as part of it.
> 3. Blindly reading 8 at a time reduces the branching to just a single decision
> point at the end of each set of 8, reducing possible branch mispredicts.
Agree.
I think there is another requirement. The other words in the descriptor should be read only after reading the word containing the DD bit.

On x86, the program order takes care of this (although a compiler barrier is required).
On Arm, this needs to be taken care of explicitly, using barriers.
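
For example, one way to express that on Arm could be an acquire load on the word holding the DD bit (just a sketch on my part, not necessarily what the next revision should use):

	/* Acquire-load the qword containing DD so the subsequent reads of
	 * the rest of the descriptor cannot be hoisted above it. */
	qword1 = __atomic_load_n(&rxdp[j].wb.qword1.status_error_len,
				 __ATOMIC_ACQUIRE);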
  

Patch

diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
index 6c58decec..410a81f30 100644
--- a/drivers/net/i40e/i40e_rxtx.c
+++ b/drivers/net/i40e/i40e_rxtx.c
@@ -452,7 +452,7 @@  i40e_rx_scan_hw_ring(struct i40e_rx_queue *rxq)
 	uint16_t pkt_len;
 	uint64_t qword1;
 	uint32_t rx_status;
-	int32_t s[I40E_LOOK_AHEAD], nb_dd;
+	int32_t s[I40E_LOOK_AHEAD], var, nb_dd;
 	int32_t i, j, nb_rx = 0;
 	uint64_t pkt_flags;
 	uint32_t *ptype_tbl = rxq->vsi->adapter->ptype_tbl;
@@ -482,11 +482,14 @@  i40e_rx_scan_hw_ring(struct i40e_rx_queue *rxq)
 					I40E_RXD_QW1_STATUS_SHIFT;
 		}
 
-		rte_smp_rmb();
-
 		/* Compute how many status bits were set */
-		for (j = 0, nb_dd = 0; j < I40E_LOOK_AHEAD; j++)
-			nb_dd += s[j] & (1 << I40E_RX_DESC_STATUS_DD_SHIFT);
+		for (j = 0, nb_dd = 0; j < I40E_LOOK_AHEAD; j++) {
+			var = s[j] & (1 << I40E_RX_DESC_STATUS_DD_SHIFT);
+			if (var)
+				nb_dd += 1;
+			else
+				break;
+		}
 
 		nb_rx += nb_dd;