From patchwork Tue Jan 15 07:54:06 2019
X-Patchwork-Submitter: Gavin Hu
X-Patchwork-Id: 49815
From: Gavin Hu
To: dev@dpdk.org
Cc: thomas@monjalon.net, jerinj@marvell.com, hemant.agrawal@nxp.com,
 stephen@networkplumber.org, Honnappa.Nagarahalli@arm.com, gavin.hu@arm.com,
 nd@arm.com, stable@dpdk.org
Date: Tue, 15 Jan 2019 15:54:06 +0800
Message-Id: <1547538849-10996-2-git-send-email-gavin.hu@arm.com>
In-Reply-To: <1547538849-10996-1-git-send-email-gavin.hu@arm.com>
References: <1547538849-10996-1-git-send-email-gavin.hu@arm.com>
Subject: [dpdk-dev] [PATCH v4 1/4] eal: fix clang compilation error on x86

When CONFIG_RTE_FORCE_INTRINSICS is enabled for x86, clang fails to compile
with:

include/generic/rte_atomic.h:215:9: error: implicit declaration of function
        '__atomic_exchange_2' is invalid in C99
include/generic/rte_atomic.h:494:9: error: implicit declaration of function
        '__atomic_exchange_4' is invalid in C99
include/generic/rte_atomic.h:772:9: error: implicit declaration of function
        '__atomic_exchange_8' is invalid in C99

Use __atomic_exchange_n instead of __atomic_exchange_(2/4/8).
For more information, please refer to:
http://mails.dpdk.org/archives/dev/2018-April/096776.html

Fixes: 7bdccb93078e ("eal: fix ARM build with clang")
Cc: stable@dpdk.org

Signed-off-by: Gavin Hu
Acked-by: Jerin Jacob
---
 lib/librte_eal/common/include/generic/rte_atomic.h | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/lib/librte_eal/common/include/generic/rte_atomic.h b/lib/librte_eal/common/include/generic/rte_atomic.h
index b99ba46..ed5b125 100644
--- a/lib/librte_eal/common/include/generic/rte_atomic.h
+++ b/lib/librte_eal/common/include/generic/rte_atomic.h
@@ -212,7 +212,7 @@ rte_atomic16_exchange(volatile uint16_t *dst, uint16_t val);
 static inline uint16_t
 rte_atomic16_exchange(volatile uint16_t *dst, uint16_t val)
 {
-#if defined(RTE_ARCH_ARM64) && defined(RTE_TOOLCHAIN_CLANG)
+#if defined(RTE_TOOLCHAIN_CLANG)
 	return __atomic_exchange_n(dst, val, __ATOMIC_SEQ_CST);
 #else
 	return __atomic_exchange_2(dst, val, __ATOMIC_SEQ_CST);
@@ -495,7 +495,7 @@ rte_atomic32_exchange(volatile uint32_t *dst, uint32_t val);
 static inline uint32_t
 rte_atomic32_exchange(volatile uint32_t *dst, uint32_t val)
 {
-#if defined(RTE_ARCH_ARM64) && defined(RTE_TOOLCHAIN_CLANG)
+#if defined(RTE_TOOLCHAIN_CLANG)
 	return __atomic_exchange_n(dst, val, __ATOMIC_SEQ_CST);
 #else
 	return __atomic_exchange_4(dst, val, __ATOMIC_SEQ_CST);
@@ -777,7 +777,7 @@ rte_atomic64_exchange(volatile uint64_t *dst, uint64_t val);
 static inline uint64_t
 rte_atomic64_exchange(volatile uint64_t *dst, uint64_t val)
 {
-#if defined(RTE_ARCH_ARM64) && defined(RTE_TOOLCHAIN_CLANG)
+#if defined(RTE_TOOLCHAIN_CLANG)
 	return __atomic_exchange_n(dst, val, __ATOMIC_SEQ_CST);
 #else
 	return __atomic_exchange_8(dst, val, __ATOMIC_SEQ_CST);
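As background, the size-suffixed __atomic_exchange_2/4/8 forms are handled by
gcc as builtins but treated by clang as undeclared out-of-line libatomic
calls, which is what produces the implicit-declaration errors quoted above.
The type-generic __atomic_exchange_n deduces the operand width from the
pointer type and is accepted by both compilers. A minimal sketch, not part of
the patch and with an illustrative function name:

#include <stdint.h>

/* Exchange a 16-bit value with the type-generic builtin; the operand
 * width is inferred from *dst, so no size-suffixed call is emitted. */
static inline uint16_t
exchange16(volatile uint16_t *dst, uint16_t val)
{
	return __atomic_exchange_n(dst, val, __ATOMIC_SEQ_CST);
}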
From patchwork Tue Jan 15 07:54:07 2019
X-Patchwork-Submitter: Gavin Hu
X-Patchwork-Id: 49816
From: Gavin Hu
To: dev@dpdk.org
Cc: thomas@monjalon.net, jerinj@marvell.com, hemant.agrawal@nxp.com,
 stephen@networkplumber.org, Honnappa.Nagarahalli@arm.com, gavin.hu@arm.com,
 nd@arm.com
Date: Tue, 15 Jan 2019 15:54:07 +0800
Message-Id: <1547538849-10996-3-git-send-email-gavin.hu@arm.com>
In-Reply-To: <1547538849-10996-1-git-send-email-gavin.hu@arm.com>
References: <1547538849-10996-1-git-send-email-gavin.hu@arm.com>
Subject: [dpdk-dev] [PATCH v4 2/4] test/spinlock: remove 1us delay for correct benchmarking

The test benchmarks spinlock performance by counting the number of lock
acquire and release operations completed within a specified time. A typical
pair of lock and unlock operations costs only tens to hundreds of
nanoseconds; by comparison, a 1 us delay outside the locked region dominates
each iteration and defeats the goal of benchmarking lock and unlock
performance.

Signed-off-by: Gavin Hu
Reviewed-by: Ruifeng Wang
Reviewed-by: Joyce Kong
Reviewed-by: Phil Yang
Reviewed-by: Honnappa Nagarahalli
Reviewed-by: Ola Liljedahl
Acked-by: Jerin Jacob
---
 test/test/test_spinlock.c | 2 --
 1 file changed, 2 deletions(-)

diff --git a/test/test/test_spinlock.c b/test/test/test_spinlock.c
index 73bff12..6795195 100644
--- a/test/test/test_spinlock.c
+++ b/test/test/test_spinlock.c
@@ -120,8 +120,6 @@ load_loop_fn(void *func_param)
 		lcount++;
 		if (use_lock)
 			rte_spinlock_unlock(&lk);
-		/* delay to make lock duty cycle slighlty realistic */
-		rte_delay_us(1);
 		time_diff = rte_get_timer_cycles() - begin;
 	}
 	lock_count[lcore] = lcount;
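To put numbers on the argument above (the figures are illustrative
assumptions, not measurements): if a lock/unlock pair costs about 100 ns,
each iteration with the 1 us delay takes roughly 1.1 us, so about 91% of the
measured time is the delay itself and even a large change in lock cost barely
moves the reported count. A small standalone check of that arithmetic:

#include <stdio.h>

int main(void)
{
	/* assumed, illustrative costs */
	double lock_pair_ns = 100.0;	/* one lock + unlock */
	double delay_ns = 1000.0;	/* the removed rte_delay_us(1) */

	double delay_share = delay_ns / (lock_pair_ns + delay_ns);
	printf("delay share of each iteration: %.0f%%\n", delay_share * 100.0);
	return 0;
}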
From patchwork Tue Jan 15 07:54:08 2019
X-Patchwork-Submitter: Gavin Hu
X-Patchwork-Id: 49817
From: Gavin Hu
To: dev@dpdk.org
Cc: thomas@monjalon.net, jerinj@marvell.com, hemant.agrawal@nxp.com,
 stephen@networkplumber.org, Honnappa.Nagarahalli@arm.com, gavin.hu@arm.com,
 nd@arm.com
Date: Tue, 15 Jan 2019 15:54:08 +0800
Message-Id: <1547538849-10996-4-git-send-email-gavin.hu@arm.com>
In-Reply-To: <1547538849-10996-1-git-send-email-gavin.hu@arm.com>
References: <1547538849-10996-1-git-send-email-gavin.hu@arm.com>
Subject: [dpdk-dev] [PATCH v4 3/4] test/spinlock: amortize the cost of getting time

Instead of reading the timer on every iteration, take timestamps only before
and after a fixed number of iterations; amortizing the timer overhead over
the whole loop gives more precise benchmarking results.

Change-Id: I32642b181f883cd57bc1f7c4f56a4744b0438781
Signed-off-by: Gavin Hu
Reviewed-by: Joyce Kong
---
 test/test/test_spinlock.c | 29 ++++++++++++++++-------------
 1 file changed, 16 insertions(+), 13 deletions(-)

diff --git a/test/test/test_spinlock.c b/test/test/test_spinlock.c
index 6795195..6ac7495 100644
--- a/test/test/test_spinlock.c
+++ b/test/test/test_spinlock.c
@@ -96,16 +96,16 @@ test_spinlock_recursive_per_core(__attribute__((unused)) void *arg)
 }
 
 static rte_spinlock_t lk = RTE_SPINLOCK_INITIALIZER;
-static uint64_t lock_count[RTE_MAX_LCORE] = {0};
+static uint64_t time_count[RTE_MAX_LCORE] = {0};
 
-#define TIME_MS 100
+#define MAX_LOOP 10000
 
 static int
 load_loop_fn(void *func_param)
 {
 	uint64_t time_diff = 0, begin;
 	uint64_t hz = rte_get_timer_hz();
-	uint64_t lcount = 0;
+	volatile uint64_t lcount = 0;
 	const int use_lock = *(int*)func_param;
 	const unsigned lcore = rte_lcore_id();
@@ -114,15 +114,15 @@ load_loop_fn(void *func_param)
 	while (rte_atomic32_read(&synchro) == 0);
 
 	begin = rte_get_timer_cycles();
-	while (time_diff < hz * TIME_MS / 1000) {
+	while (lcount < MAX_LOOP) {
 		if (use_lock)
 			rte_spinlock_lock(&lk);
 		lcount++;
 		if (use_lock)
 			rte_spinlock_unlock(&lk);
-		time_diff = rte_get_timer_cycles() - begin;
 	}
-	lock_count[lcore] = lcount;
+	time_diff = rte_get_timer_cycles() - begin;
+	time_count[lcore] = time_diff * 1000000 / hz;
 	return 0;
 }
@@ -136,14 +136,16 @@ test_spinlock_perf(void)
 
 	printf("\nTest with no lock on single core...\n");
 	load_loop_fn(&lock);
-	printf("Core [%u] count = %"PRIu64"\n", lcore, lock_count[lcore]);
-	memset(lock_count, 0, sizeof(lock_count));
+	printf("Core [%u] Cost Time = %"PRIu64" us\n", lcore,
+		time_count[lcore]);
+	memset(time_count, 0, sizeof(time_count));
 
 	printf("\nTest with lock on single core...\n");
 	lock = 1;
 	load_loop_fn(&lock);
-	printf("Core [%u] count = %"PRIu64"\n", lcore, lock_count[lcore]);
-	memset(lock_count, 0, sizeof(lock_count));
+	printf("Core [%u] Cost Time = %"PRIu64" us\n", lcore,
+		time_count[lcore]);
+	memset(time_count, 0, sizeof(time_count));
 
 	printf("\nTest with lock on %u cores...\n", rte_lcore_count());
@@ -158,11 +160,12 @@ test_spinlock_perf(void)
 	rte_eal_mp_wait_lcore();
 
 	RTE_LCORE_FOREACH(i) {
-		printf("Core [%u] count = %"PRIu64"\n", i, lock_count[i]);
-		total += lock_count[i];
+		printf("Core [%u] Cost Time = %"PRIu64" us\n", i,
+			time_count[i]);
+		total += time_count[i];
 	}
 
-	printf("Total count = %"PRIu64"\n", total);
+	printf("Total Cost Time = %"PRIu64" us\n", total);
 
 	return 0;
 }
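The measurement pattern the patch switches to can be summarized outside of
DPDK as follows; this is a simplified sketch that uses clock_gettime() in
place of rte_get_timer_cycles(), and the helper names are illustrative only:

#include <stdint.h>
#include <time.h>

#define MAX_LOOP 10000

/* illustrative clock helper standing in for rte_get_timer_cycles() */
static uint64_t now_ns(void)
{
	struct timespec ts;
	clock_gettime(CLOCK_MONOTONIC, &ts);
	return (uint64_t)ts.tv_sec * 1000000000ULL + ts.tv_nsec;
}

/* One timestamp before and one after the loop, instead of one per
 * iteration: the clock overhead is paid twice, not MAX_LOOP times. */
static uint64_t measure(void (*op)(void))
{
	volatile uint64_t i;
	uint64_t begin = now_ns();

	for (i = 0; i < MAX_LOOP; i++)
		op();
	return now_ns() - begin;
}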
From patchwork Tue Jan 15 07:54:09 2019
X-Patchwork-Submitter: Gavin Hu
X-Patchwork-Id: 49818
From: Gavin Hu
To: dev@dpdk.org
Cc: thomas@monjalon.net, jerinj@marvell.com, hemant.agrawal@nxp.com,
 stephen@networkplumber.org, Honnappa.Nagarahalli@arm.com, gavin.hu@arm.com,
 nd@arm.com
Date: Tue, 15 Jan 2019 15:54:09 +0800
Message-Id: <1547538849-10996-5-git-send-email-gavin.hu@arm.com>
In-Reply-To: <1547538849-10996-1-git-send-email-gavin.hu@arm.com>
References: <1547538849-10996-1-git-send-email-gavin.hu@arm.com>
Subject: [dpdk-dev] [PATCH v4 4/4] spinlock: reimplement with atomic one-way barrier builtins

The __sync builtin based implementation generates full memory barriers
('dmb ish') on Arm platforms. Use the C11 __atomic builtins instead to
generate one-way (acquire/release) barriers.

Here is the assembly code generated for the __sync builtin:

__sync_bool_compare_and_swap(dst, exp, src);
   0x000000000090f1b0 <+16>:	e0 07 40 f9	ldr	x0, [sp, #8]
   0x000000000090f1b4 <+20>:	e1 0f 40 79	ldrh	w1, [sp, #6]
   0x000000000090f1b8 <+24>:	e2 0b 40 79	ldrh	w2, [sp, #4]
   0x000000000090f1bc <+28>:	21 3c 00 12	and	w1, w1, #0xffff
   0x000000000090f1c0 <+32>:	03 7c 5f 48	ldxrh	w3, [x0]
   0x000000000090f1c4 <+36>:	7f 00 01 6b	cmp	w3, w1
   0x000000000090f1c8 <+40>:	61 00 00 54	b.ne	0x90f1d4	// b.any
   0x000000000090f1cc <+44>:	02 fc 04 48	stlxrh	w4, w2, [x0]
   0x000000000090f1d0 <+48>:	84 ff ff 35	cbnz	w4, 0x90f1c0
   0x000000000090f1d4 <+52>:	bf 3b 03 d5	dmb	ish
   0x000000000090f1d8 <+56>:	e0 17 9f 1a	cset	w0, eq	// eq = none

The benchmarking results showed a 3x performance gain on Cavium ThunderX2,
13% on Qualcomm Falkor and 3.7% on a 4-core Cortex-A72 Marvell MACCHIATObin.
Here is the example test result on TX2:

*** spinlock_autotest without this patch ***
Core [123] Cost Time = 639822 us
Core [124] Cost Time = 633253 us
Core [125] Cost Time = 646030 us
Core [126] Cost Time = 643189 us
Core [127] Cost Time = 647039 us
Total Cost Time = 95433298 us

*** spinlock_autotest with this patch ***
Core [123] Cost Time = 163615 us
Core [124] Cost Time = 166471 us
Core [125] Cost Time = 189044 us
Core [126] Cost Time = 195745 us
Core [127] Cost Time = 78423 us
Total Cost Time = 27339656 us

Change-Id: I888f22120697d42d5a63fda6b4f77d93a0409aab
Signed-off-by: Gavin Hu
Reviewed-by: Phil Yang
Reviewed-by: Honnappa Nagarahalli
Reviewed-by: Ola Liljedahl
Reviewed-by: Steve Capper
---
 lib/librte_eal/common/include/generic/rte_spinlock.h | 18 +++++++++++++-----
 1 file changed, 13 insertions(+), 5 deletions(-)

diff --git a/lib/librte_eal/common/include/generic/rte_spinlock.h b/lib/librte_eal/common/include/generic/rte_spinlock.h
index c4c3fc3..87ae7a4 100644
--- a/lib/librte_eal/common/include/generic/rte_spinlock.h
+++ b/lib/librte_eal/common/include/generic/rte_spinlock.h
@@ -61,9 +61,14 @@ rte_spinlock_lock(rte_spinlock_t *sl);
 static inline void
 rte_spinlock_lock(rte_spinlock_t *sl)
 {
-	while (__sync_lock_test_and_set(&sl->locked, 1))
-		while(sl->locked)
+	int exp = 0;
+
+	while (!__atomic_compare_exchange_n(&sl->locked, &exp, 1, 0,
+				__ATOMIC_ACQUIRE, __ATOMIC_RELAXED)) {
+		while (__atomic_load_n(&sl->locked, __ATOMIC_RELAXED))
 			rte_pause();
+		exp = 0;
+	}
 }
 #endif
@@ -80,7 +85,7 @@ rte_spinlock_unlock (rte_spinlock_t *sl);
 static inline void
 rte_spinlock_unlock (rte_spinlock_t *sl)
 {
-	__sync_lock_release(&sl->locked);
+	__atomic_store_n(&sl->locked, 0, __ATOMIC_RELEASE);
 }
 #endif
@@ -99,7 +104,10 @@ rte_spinlock_trylock (rte_spinlock_t *sl);
 static inline int
 rte_spinlock_trylock (rte_spinlock_t *sl)
 {
-	return __sync_lock_test_and_set(&sl->locked,1) == 0;
+	int exp = 0;
+	return __atomic_compare_exchange_n(&sl->locked, &exp, 1,
+				0, /* disallow spurious failure */
+				__ATOMIC_ACQUIRE, __ATOMIC_RELAXED);
 }
 #endif
@@ -113,7 +121,7 @@ rte_spinlock_trylock (rte_spinlock_t *sl)
  */
 static inline int
 rte_spinlock_is_locked (rte_spinlock_t *sl)
 {
-	return sl->locked;
+	return __atomic_load_n(&sl->locked, __ATOMIC_ACQUIRE);
 }
 
 /**
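For reference, the same acquire/release locking pattern can be written as a
standalone sketch with C11 <stdatomic.h>; this is illustrative only (DPDK
itself uses the __atomic builtins shown in the diff above), and the type and
function names are made up for the example:

#include <stdatomic.h>

typedef struct { atomic_int locked; } spinlock_t;

static inline void spin_lock(spinlock_t *sl)
{
	int exp = 0;

	/* acquire on success orders the critical section after the lock */
	while (!atomic_compare_exchange_strong_explicit(&sl->locked, &exp, 1,
			memory_order_acquire, memory_order_relaxed)) {
		/* spin on relaxed loads so failed attempts do not keep
		 * bouncing the cache line with read-modify-write traffic */
		while (atomic_load_explicit(&sl->locked, memory_order_relaxed))
			;
		exp = 0;	/* CAS overwrote exp with the observed value */
	}
}

static inline void spin_unlock(spinlock_t *sl)
{
	/* release orders the critical section before the unlocking store */
	atomic_store_explicit(&sl->locked, 0, memory_order_release);
}

On strongly ordered x86 this compiles to essentially the same code as the
__sync version, while on Arm the acquire/release pair avoids the full
'dmb ish' barrier, which is where the gains reported above come from.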