[v3,1/3] rwlock: reimplement with atomic builtins

Message ID 1552569304-74817-2-git-send-email-joyce.kong@arm.com (mailing list archive)
State Superseded, archived
Delegated to: Thomas Monjalon
Series rwlock: reimplement rwlock with atomic and add relevant perf test case

Checks

Context Check Description
ci/checkpatch success coding style OK
ci/intel-Performance-Testing success Performance Testing PASS
ci/mellanox-Performance-Testing success Performance Testing PASS
ci/Intel-compilation success Compilation OK

Commit Message

Joyce Kong March 14, 2019, 1:15 p.m. UTC
  From: Gavin Hu <gavin.hu@arm.com>

The __sync builtin based implementation generates full memory
barriers ('dmb ish') on Arm platforms. Use C11 atomic builtins
instead to generate one-way barriers.

Here is the assembly code of the __sync_bool_compare_and_swap builtin.
__sync_bool_compare_and_swap(dst, exp, src);
   0x000000000090f1b0 <+16>:    e0 07 40 f9 ldr x0, [sp, #8]
   0x000000000090f1b4 <+20>:    e1 0f 40 79 ldrh    w1, [sp, #6]
   0x000000000090f1b8 <+24>:    e2 0b 40 79 ldrh    w2, [sp, #4]
   0x000000000090f1bc <+28>:    21 3c 00 12 and w1, w1, #0xffff
   0x000000000090f1c0 <+32>:    03 7c 5f 48 ldxrh   w3, [x0]
   0x000000000090f1c4 <+36>:    7f 00 01 6b cmp w3, w1
   0x000000000090f1c8 <+40>:    61 00 00 54 b.ne    0x90f1d4 <rte_atomic16_cmpset+52>  // b.any
   0x000000000090f1cc <+44>:    02 fc 04 48 stlxrh  w4, w2, [x0]
   0x000000000090f1d0 <+48>:    84 ff ff 35 cbnz    w4, 0x90f1c0 <rte_atomic16_cmpset+32>
   0x000000000090f1d4 <+52>:    bf 3b 03 d5 dmb ish
   0x000000000090f1d8 <+56>:    e0 17 9f 1a cset    w0, eq  // eq = none
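
For comparison, a minimal stand-alone sketch (illustrative only; the helper name is ours, not part of the patch) of the acquire-ordered C11 CAS this series switches to. On AArch64, an acquire-only CAS can compile to ldaxrh/stxrh exclusives with no trailing 'dmb ish':

#include <stdint.h>
#include <stdbool.h>

/* Weak CAS (4th argument = 1): may fail spuriously, which is fine
 * inside a retry loop such as the lock-acquire paths below. Acquire
 * ordering on success, relaxed ordering on failure. */
static inline bool
cas_acquire_16(uint16_t *dst, uint16_t exp, uint16_t src)
{
	return __atomic_compare_exchange_n(dst, &exp, src, 1,
			__ATOMIC_ACQUIRE, __ATOMIC_RELAXED);
}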

Signed-off-by: Gavin Hu <gavin.hu@arm.com>
Tested-by: Joyce Kong <joyce.kong@arm.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
---
 lib/librte_eal/common/include/generic/rte_rwlock.h | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)
  

Comments

Stephen Hemminger March 14, 2019, 3:54 p.m. UTC | #1
On Thu, 14 Mar 2019 21:15:02 +0800
Joyce Kong <joyce.kong@arm.com> wrote:

> -		success = rte_atomic32_cmpset((volatile uint32_t *)&rwl->cnt,
> -					      (uint32_t)x, (uint32_t)(x + 1));
> +		success = __atomic_compare_exchange_n(&rwl->cnt, &x, x+1, 1,
> +					__ATOMIC_ACQUIRE, __ATOMIC_RELAXED);

Would it be possible to have rte_atomic32_cmpset be an inline function
that became __atomic_compare_exchange? Then all usages would be the same.
  
Gavin Hu March 15, 2019, 3:04 a.m. UTC | #2
Hi Stephen,

> -----Original Message-----
> From: Stephen Hemminger <stephen@networkplumber.org>
> Sent: Thursday, March 14, 2019 11:54 PM
> To: Joyce Kong (Arm Technology China) <Joyce.Kong@arm.com>
> Cc: dev@dpdk.org; nd <nd@arm.com>; Gavin Hu (Arm Technology China)
> <Gavin.Hu@arm.com>; jerinj@marvell.com; konstantin.ananyev@intel.com;
> chaozhu@linux.vnet.ibm.com; bruce.richardson@intel.com;
> thomas@monjalon.net; hemant.agrawal@nxp.com; Honnappa Nagarahalli
> <Honnappa.Nagarahalli@arm.com>
> Subject: Re: [dpdk-dev] [PATCH v3 1/3] rwlock: reimplement with atomic
> builtins
> 
> On Thu, 14 Mar 2019 21:15:02 +0800
> Joyce Kong <joyce.kong@arm.com> wrote:
> 
> > -		success = rte_atomic32_cmpset((volatile uint32_t *)&rwl-
> >cnt,
> > -					      (uint32_t)x, (uint32_t)(x + 1));
> > +		success = __atomic_compare_exchange_n(&rwl->cnt, &x,
> x+1, 1,
> > +					__ATOMIC_ACQUIRE,
> __ATOMIC_RELAXED);
> 
> Would it be possible to have rte_atomic32_cmpset be an inline function
> that became __atomic_compare_exchange? Then all usages would be the
> same.

There is already a patch for this, and Honnappa commented on it:
https://mails.dpdk.org/archives/dev/2019-January/124297.html
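
For reference, a minimal sketch of what such a wrapper could look like (hypothetical, not in the tree). One limitation is that the legacy API carries no ordering parameter, so the wrapper has to pick a single fixed ordering and cannot express the weaker per-call-site orderings the open-coded __atomic calls use:

#include <stdint.h>

/* Hypothetical wrapper: keep the rte_atomic32_cmpset() signature but
 * implement it with the C11 builtin, mapping the full-barrier __sync
 * semantics to SEQ_CST on both the success and failure paths. */
static inline int
rte_atomic32_cmpset_c11(volatile uint32_t *dst, uint32_t exp, uint32_t src)
{
	return __atomic_compare_exchange_n(dst, &exp, src, 0,
			__ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST);
}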
  
Ananyev, Konstantin March 15, 2019, 11:41 a.m. UTC | #3
Hi,


> The __sync builtin based implementation generates full memory
> barriers ('dmb ish') on Arm platforms. Use C11 atomic builtins
> instead to generate one-way barriers.
> 
> Here is the assembly code of the __sync_bool_compare_and_swap builtin.
> __sync_bool_compare_and_swap(dst, exp, src);
>    0x000000000090f1b0 <+16>:    e0 07 40 f9 ldr x0, [sp, #8]
>    0x000000000090f1b4 <+20>:    e1 0f 40 79 ldrh    w1, [sp, #6]
>    0x000000000090f1b8 <+24>:    e2 0b 40 79 ldrh    w2, [sp, #4]
>    0x000000000090f1bc <+28>:    21 3c 00 12 and w1, w1, #0xffff
>    0x000000000090f1c0 <+32>:    03 7c 5f 48 ldxrh   w3, [x0]
>    0x000000000090f1c4 <+36>:    7f 00 01 6b cmp w3, w1
>    0x000000000090f1c8 <+40>:    61 00 00 54 b.ne    0x90f1d4
> <rte_atomic16_cmpset+52>  // b.any
>    0x000000000090f1cc <+44>:    02 fc 04 48 stlxrh  w4, w2, [x0]
>    0x000000000090f1d0 <+48>:    84 ff ff 35 cbnz    w4, 0x90f1c0
> <rte_atomic16_cmpset+32>
>    0x000000000090f1d4 <+52>:    bf 3b 03 d5 dmb ish
>    0x000000000090f1d8 <+56>:    e0 17 9f 1a cset    w0, eq  // eq = none
> 
> Signed-off-by: Gavin Hu <gavin.hu@arm.com>
> Tested-by: Joyce Kong <joyce.kong@arm.com>
> Acked-by: Jerin Jacob <jerinj@marvell.com>
> ---

Wouldn't it be plausible to change _try_ functions to use __atomic too (for consistency)?
Apart from that looks good to me.
Konstantin
  
Gavin Hu March 19, 2019, 8:31 a.m. UTC | #4
Hi Konstantin, 

> -----Original Message-----
> From: Ananyev, Konstantin <konstantin.ananyev@intel.com>
> Sent: Friday, March 15, 2019 7:41 PM
> To: Joyce Kong (Arm Technology China) <Joyce.Kong@arm.com>;
> dev@dpdk.org
> Cc: nd <nd@arm.com>; Gavin Hu (Arm Technology China)
> <Gavin.Hu@arm.com>; jerinj@marvell.com; chaozhu@linux.vnet.ibm.com;
> Richardson, Bruce <bruce.richardson@intel.com>; thomas@monjalon.net;
> hemant.agrawal@nxp.com; Honnappa Nagarahalli
> <Honnappa.Nagarahalli@arm.com>
> Subject: RE: [PATCH v3 1/3] rwlock: reimplement with atomic builtins
> 
> Hi,
> 
> 
> > The __sync builtin based implementation generates full memory
> > barriers ('dmb ish') on Arm platforms. Use C11 atomic builtins
> > instead to generate one-way barriers.
> >
> > Here is the assembly code of the __sync_bool_compare_and_swap builtin.
> > __sync_bool_compare_and_swap(dst, exp, src);
> >    0x000000000090f1b0 <+16>:    e0 07 40 f9 ldr x0, [sp, #8]
> >    0x000000000090f1b4 <+20>:    e1 0f 40 79 ldrh    w1, [sp, #6]
> >    0x000000000090f1b8 <+24>:    e2 0b 40 79 ldrh    w2, [sp, #4]
> >    0x000000000090f1bc <+28>:    21 3c 00 12 and w1, w1, #0xffff
> >    0x000000000090f1c0 <+32>:    03 7c 5f 48 ldxrh   w3, [x0]
> >    0x000000000090f1c4 <+36>:    7f 00 01 6b cmp w3, w1
> >    0x000000000090f1c8 <+40>:    61 00 00 54 b.ne    0x90f1d4
> > <rte_atomic16_cmpset+52>  // b.any
> >    0x000000000090f1cc <+44>:    02 fc 04 48 stlxrh  w4, w2, [x0]
> >    0x000000000090f1d0 <+48>:    84 ff ff 35 cbnz    w4, 0x90f1c0
> > <rte_atomic16_cmpset+32>
> >    0x000000000090f1d4 <+52>:    bf 3b 03 d5 dmb ish
> >    0x000000000090f1d8 <+56>:    e0 17 9f 1a cset    w0, eq  // eq = none
> >
> > Signed-off-by: Gavin Hu <gavin.hu@arm.com>
> > Tested-by: Joyce Kong <joyce.kong@arm.com>
> > Acked-by: Jerin Jacob <jerinj@marvell.com>
> > ---
> 
> Wouldn't it be plausible to change _try_ functions to use __atomic too (for
> consistency)?
> Apart from that looks good to me.
> Konstantin

Sure, we will do it in the next version.
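
For reference, a sketch of what the read-trylock conversion might look like (our guess under the same acquire/relaxed orderings as the patch below, not the posted code; assumes <errno.h> for EBUSY):

static inline int
rte_rwlock_read_trylock(rte_rwlock_t *rwl)
{
	int32_t x;
	int success = 0;

	while (success == 0) {
		x = __atomic_load_n(&rwl->cnt, __ATOMIC_RELAXED);
		/* write lock is held: give up instead of spinning */
		if (x < 0)
			return -EBUSY;
		success = __atomic_compare_exchange_n(&rwl->cnt, &x, x + 1, 1,
					__ATOMIC_ACQUIRE, __ATOMIC_RELAXED);
	}
	return 0;
}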
  

Patch

diff --git a/lib/librte_eal/common/include/generic/rte_rwlock.h b/lib/librte_eal/common/include/generic/rte_rwlock.h
index b05d85a..3317a21 100644
--- a/lib/librte_eal/common/include/generic/rte_rwlock.h
+++ b/lib/librte_eal/common/include/generic/rte_rwlock.h
@@ -64,14 +64,14 @@  rte_rwlock_read_lock(rte_rwlock_t *rwl)
 	int success = 0;
 
 	while (success == 0) {
-		x = rwl->cnt;
+		x = __atomic_load_n(&rwl->cnt, __ATOMIC_RELAXED);
 		/* write lock is held */
 		if (x < 0) {
 			rte_pause();
 			continue;
 		}
-		success = rte_atomic32_cmpset((volatile uint32_t *)&rwl->cnt,
-					      (uint32_t)x, (uint32_t)(x + 1));
+		success = __atomic_compare_exchange_n(&rwl->cnt, &x, x+1, 1,
+					__ATOMIC_ACQUIRE, __ATOMIC_RELAXED);
 	}
 }
 
@@ -114,7 +114,7 @@  rte_rwlock_read_trylock(rte_rwlock_t *rwl)
 static inline void
 rte_rwlock_read_unlock(rte_rwlock_t *rwl)
 {
-	rte_atomic32_dec((rte_atomic32_t *)(intptr_t)&rwl->cnt);
+	__atomic_fetch_sub(&rwl->cnt, 1, __ATOMIC_RELEASE);
 }
 
 /**
@@ -156,14 +156,14 @@  rte_rwlock_write_lock(rte_rwlock_t *rwl)
 	int success = 0;
 
 	while (success == 0) {
-		x = rwl->cnt;
+		x = __atomic_load_n(&rwl->cnt, __ATOMIC_RELAXED);
 		/* a lock is held */
 		if (x != 0) {
 			rte_pause();
 			continue;
 		}
-		success = rte_atomic32_cmpset((volatile uint32_t *)&rwl->cnt,
-					      0, (uint32_t)-1);
+		success = __atomic_compare_exchange_n(&rwl->cnt, &x, -1, 1,
+					__ATOMIC_ACQUIRE, __ATOMIC_RELAXED);
 	}
 }
 
@@ -176,7 +176,7 @@  rte_rwlock_write_lock(rte_rwlock_t *rwl)
 static inline void
 rte_rwlock_write_unlock(rte_rwlock_t *rwl)
 {
-	rte_atomic32_inc((rte_atomic32_t *)(intptr_t)&rwl->cnt);
+	__atomic_store_n(&rwl->cnt, 0, __ATOMIC_RELEASE);
 }
 
 /**
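
For context, a minimal usage sketch of the API whose internals change above (rte_rwlock_read_lock/unlock and RTE_RWLOCK_INITIALIZER are the existing public API; the stats names are ours):

#include <stdint.h>
#include <rte_rwlock.h>

static rte_rwlock_t stats_lock = RTE_RWLOCK_INITIALIZER;
static uint64_t stats_counter;

static uint64_t
read_stats(void)
{
	uint64_t v;

	/* acquire on lock: the protected load cannot hoist above it */
	rte_rwlock_read_lock(&stats_lock);
	v = stats_counter;
	/* release on unlock: the protected load cannot sink below it */
	rte_rwlock_read_unlock(&stats_lock);
	return v;
}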