From patchwork Thu Dec 20 17:42:28 2018
X-Patchwork-Submitter: Gavin Hu
X-Patchwork-Id: 49217
X-Patchwork-Delegate: thomas@monjalon.net
From: Gavin Hu
To: dev@dpdk.org
Cc: thomas@monjalon.net, bruce.richardson@intel.com, jerinj@marvell.com,
 hemant.agrawal@nxp.com, ferruh.yigit@intel.com, Honnappa.Nagarahalli@arm.com,
 nd@arm.com, Gavin Hu
Date: Fri, 21 Dec 2018 01:42:28 +0800
Message-Id: <20181220174229.5834-5-gavin.hu@arm.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20181220174229.5834-1-gavin.hu@arm.com>
References: <20181220104246.5590-1-gavin.hu@arm.com>
 <20181220174229.5834-1-gavin.hu@arm.com>
Subject: [dpdk-dev] [PATCH v2 4/5] spinlock: reimplement with atomic one-way
 barrier builtins

The __sync builtin based implementation generates full memory barriers
('dmb ish') on Arm platforms. Reimplementing with C11 atomic builtins
generates one-way barriers instead.

Here is the assembly code generated for the __sync_bool_compare_and_swap
builtin:

__sync_bool_compare_and_swap(dst, exp, src);
   0x000000000090f1b0 <+16>: e0 07 40 f9   ldr     x0, [sp, #8]
   0x000000000090f1b4 <+20>: e1 0f 40 79   ldrh    w1, [sp, #6]
   0x000000000090f1b8 <+24>: e2 0b 40 79   ldrh    w2, [sp, #4]
   0x000000000090f1bc <+28>: 21 3c 00 12   and     w1, w1, #0xffff
   0x000000000090f1c0 <+32>: 03 7c 5f 48   ldxrh   w3, [x0]
   0x000000000090f1c4 <+36>: 7f 00 01 6b   cmp     w3, w1
   0x000000000090f1c8 <+40>: 61 00 00 54   b.ne    0x90f1d4  // b.any
   0x000000000090f1cc <+44>: 02 fc 04 48   stlxrh  w4, w2, [x0]
   0x000000000090f1d0 <+48>: 84 ff ff 35   cbnz    w4, 0x90f1c0
   0x000000000090f1d4 <+52>: bf 3b 03 d5   dmb     ish
   0x000000000090f1d8 <+56>: e0 17 9f 1a   cset    w0, eq  // eq = none

The benchmarking results showed a 3x performance gain on Cavium ThunderX2,
a 13% gain on Qualcomm Falkor, and a 3.7% gain on the quad-A72 Marvell
MacchiatoBin.
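For readers who want to reproduce the barrier comparison outside of DPDK, a
minimal standalone sketch is shown below (illustrative only, not part of this
patch; the file and function names are made up). Compiling it for aarch64 with
"gcc -O2 -S" shows the trailing 'dmb ish' only in the __sync variant, while the
__atomic acquire variant typically relies on load-acquire/store-exclusive
ordering instead:

/* cas_barrier_demo.c - illustrative sketch, not part of this patch.
 * Build: aarch64-linux-gnu-gcc -O2 -S cas_barrier_demo.c
 * With the toolchains discussed here, cas_sync() ends with 'dmb ish',
 * while cas_acquire() typically uses a load-acquire (ldaxrh) instead.
 */
#include <stdint.h>

int cas_sync(uint16_t *dst, uint16_t exp, uint16_t src)
{
	/* legacy builtin: full-barrier semantics on Arm */
	return __sync_bool_compare_and_swap(dst, exp, src);
}

int cas_acquire(uint16_t *dst, uint16_t exp, uint16_t src)
{
	/* C11 builtin: acquire ordering on success, relaxed on failure */
	return __atomic_compare_exchange_n(dst, &exp, src, 0,
			__ATOMIC_ACQUIRE, __ATOMIC_RELAXED);
}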
Here is the example test result on TX2:

*** spinlock_autotest without this patch ***
Core [123] Cost Time = 639822 us
Core [124] Cost Time = 633253 us
Core [125] Cost Time = 646030 us
Core [126] Cost Time = 643189 us
Core [127] Cost Time = 647039 us
Total Cost Time = 95433298 us

*** spinlock_autotest with this patch ***
Core [123] Cost Time = 163615 us
Core [124] Cost Time = 166471 us
Core [125] Cost Time = 189044 us
Core [126] Cost Time = 195745 us
Core [127] Cost Time = 78423 us
Total Cost Time = 27339656 us

Signed-off-by: Gavin Hu
Reviewed-by: Phil Yang
Reviewed-by: Honnappa Nagarahalli
Reviewed-by: Ola Liljedahl
Reviewed-by: Steve Capper
---
 lib/librte_eal/common/include/generic/rte_spinlock.h | 18 +++++++++++++-----
 1 file changed, 13 insertions(+), 5 deletions(-)

diff --git a/lib/librte_eal/common/include/generic/rte_spinlock.h b/lib/librte_eal/common/include/generic/rte_spinlock.h
index c4c3fc31e..87ae7a4f1 100644
--- a/lib/librte_eal/common/include/generic/rte_spinlock.h
+++ b/lib/librte_eal/common/include/generic/rte_spinlock.h
@@ -61,9 +61,14 @@ rte_spinlock_lock(rte_spinlock_t *sl);
 static inline void
 rte_spinlock_lock(rte_spinlock_t *sl)
 {
-	while (__sync_lock_test_and_set(&sl->locked, 1))
-		while(sl->locked)
+	int exp = 0;
+
+	while (!__atomic_compare_exchange_n(&sl->locked, &exp, 1, 0,
+				__ATOMIC_ACQUIRE, __ATOMIC_RELAXED)) {
+		while (__atomic_load_n(&sl->locked, __ATOMIC_RELAXED))
 			rte_pause();
+		exp = 0;
+	}
 }
 #endif

@@ -80,7 +85,7 @@ rte_spinlock_unlock (rte_spinlock_t *sl);
 static inline void
 rte_spinlock_unlock (rte_spinlock_t *sl)
 {
-	__sync_lock_release(&sl->locked);
+	__atomic_store_n(&sl->locked, 0, __ATOMIC_RELEASE);
 }
 #endif

@@ -99,7 +104,10 @@ rte_spinlock_trylock (rte_spinlock_t *sl);
 static inline int
 rte_spinlock_trylock (rte_spinlock_t *sl)
 {
-	return __sync_lock_test_and_set(&sl->locked,1) == 0;
+	int exp = 0;
+	return __atomic_compare_exchange_n(&sl->locked, &exp, 1,
+			0, /* disallow spurious failure */
+			__ATOMIC_ACQUIRE, __ATOMIC_RELAXED);
 }
 #endif

@@ -113,7 +121,7 @@ rte_spinlock_trylock (rte_spinlock_t *sl)
  */
 static inline int rte_spinlock_is_locked (rte_spinlock_t *sl)
 {
-	return sl->locked;
+	return __atomic_load_n(&sl->locked, __ATOMIC_ACQUIRE);
 }

 /**
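The change is transparent to callers of the spinlock API. As a caller-side
reference, a minimal usage sketch is shown below (illustrative only; the
counter variable and helper names are made up, while the rte_spinlock_*
functions and RTE_SPINLOCK_INITIALIZER are the existing DPDK API touched by
this patch):

/* spinlock_usage_demo.c - illustrative usage sketch, not part of the patch. */
#include <stdint.h>
#include <rte_spinlock.h>

static rte_spinlock_t counter_lock = RTE_SPINLOCK_INITIALIZER;
static uint64_t counter;

static void
counter_add(uint64_t v)
{
	rte_spinlock_lock(&counter_lock);   /* acquire-ordered CAS loop after this patch */
	counter += v;
	rte_spinlock_unlock(&counter_lock); /* release-ordered store after this patch */
}

static int
counter_try_add(uint64_t v)
{
	if (!rte_spinlock_trylock(&counter_lock)) /* single acquire CAS attempt */
		return 0;
	counter += v;
	rte_spinlock_unlock(&counter_lock);
	return 1;
}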