From patchwork Fri Mar 15 06:56:27 2019 X-Patchwork-Submitter: Joyce Kong X-Patchwork-Id: 51219 X-Patchwork-Delegate: thomas@monjalon.net From: Joyce Kong To: dev@dpdk.org Cc: nd@arm.com, stephen@networkplumber.org, jerin.jacob@caviumnetworks.com, thomas@monjalon.net, honnappa.nagarahalli@arm.com, gavin.hu@arm.com, Joyce Kong Date: Fri, 15 Mar 2019 14:56:27 +0800 Message-Id: <1552632988-80787-2-git-send-email-joyce.kong@arm.com> X-Mailer: git-send-email 2.7.4 In-Reply-To: <1552632988-80787-1-git-send-email-joyce.kong@arm.com> References: <1552632988-80787-1-git-send-email-joyce.kong@arm.com> In-Reply-To: <1547802943-18711-1-git-send-email-joyce.kong@arm.com> References: <1547802943-18711-1-git-send-email-joyce.kong@arm.com> Subject: [dpdk-dev] [PATCH v6 1/2] eal/ticketlock: ticket based to improve fairness List-Id: DPDK patches and discussions Sender: "dev" The spinlock implementation is unfair: some threads may take the lock repeatedly while leaving the
other threads starving for a long time. This patch introduces a ticketlock, which gives each waiting thread a ticket so that threads take the lock one by one: first come, first served. This avoids prolonged starvation and makes lock acquisition more predictable. Suggested-by: Jerin Jacob Signed-off-by: Joyce Kong Reviewed-by: Gavin Hu Reviewed-by: Ola Liljedahl Reviewed-by: Honnappa Nagarahalli --- MAINTAINERS | 5 + doc/api/doxy-api-index.md | 1 + lib/librte_eal/common/Makefile | 2 +- .../common/include/arch/arm/rte_ticketlock.h | 64 +++++ .../common/include/generic/rte_ticketlock.h | 308 +++++++++++++++++++++ lib/librte_eal/common/meson.build | 1 + 6 files changed, 380 insertions(+), 1 deletion(-) create mode 100644 lib/librte_eal/common/include/arch/arm/rte_ticketlock.h create mode 100644 lib/librte_eal/common/include/generic/rte_ticketlock.h diff --git a/MAINTAINERS b/MAINTAINERS index 452b8eb..7d87e25 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -210,6 +210,11 @@ M: Cristian Dumitrescu F: lib/librte_eal/common/include/rte_bitmap.h F: app/test/test_bitmap.c +Ticketlock +M: Joyce Kong +F: lib/librte_eal/common/include/generic/rte_ticketlock.h +F: lib/librte_eal/common/include/arch/arm/rte_ticketlock.h + ARM v7 M: Jan Viktorin M: Gavin Hu diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md index d95ad56..aacc66b 100644 --- a/doc/api/doxy-api-index.md +++ b/doc/api/doxy-api-index.md @@ -65,6 +65,7 @@ The public API headers are grouped by topics: [atomic] (@ref rte_atomic.h), [rwlock] (@ref rte_rwlock.h), [spinlock] (@ref rte_spinlock.h) + [ticketlock] (@ref rte_ticketlock.h) - **CPU arch**: [branch prediction] (@ref rte_branch_prediction.h), diff --git a/lib/librte_eal/common/Makefile b/lib/librte_eal/common/Makefile index c487201..ac3305c 100644 --- a/lib/librte_eal/common/Makefile +++ b/lib/librte_eal/common/Makefile @@ -20,7 +20,7 @@ INC += rte_bitmap.h rte_vfio.h rte_hypervisor.h rte_test.h INC += rte_reciprocal.h rte_fbarray.h rte_uuid.h GENERIC_INC :=
rte_atomic.h rte_byteorder.h rte_cycles.h rte_prefetch.h -GENERIC_INC += rte_spinlock.h rte_memcpy.h rte_cpuflags.h rte_rwlock.h +GENERIC_INC += rte_spinlock.h rte_memcpy.h rte_cpuflags.h rte_rwlock.h rte_ticketlock.h GENERIC_INC += rte_vect.h rte_pause.h rte_io.h # defined in mk/arch/$(RTE_ARCH)/rte.vars.mk diff --git a/lib/librte_eal/common/include/arch/arm/rte_ticketlock.h b/lib/librte_eal/common/include/arch/arm/rte_ticketlock.h new file mode 100644 index 0000000..57deb0b --- /dev/null +++ b/lib/librte_eal/common/include/arch/arm/rte_ticketlock.h @@ -0,0 +1,64 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2019 Arm Limited + */ + +#ifndef _RTE_TICKETLOCK_ARM_H_ +#define _RTE_TICKETLOCK_ARM_H_ + +#ifndef RTE_FORCE_INTRINSICS +# error Platform must be built with CONFIG_RTE_FORCE_INTRINSICS +#endif + +#ifdef __cplusplus +extern "C" { +#endif + +#include +#include "generic/rte_ticketlock.h" + +static inline int rte_tm_supported(void) +{ + return 0; +} + +static inline void +rte_ticketlock_lock_tm(rte_ticketlock_t *tl) +{ + rte_ticketlock_lock(tl); /* fall-back */ +} + +static inline int +rte_ticketlock_trylock_tm(rte_ticketlock_t *tl) +{ + return rte_ticketlock_trylock(tl); +} + +static inline void +rte_ticketlock_unlock_tm(rte_ticketlock_t *tl) +{ + rte_ticketlock_unlock(tl); +} + +static inline void +rte_ticketlock_recursive_lock_tm(rte_ticketlock_recursive_t *tlr) +{ + rte_ticketlock_recursive_lock(tlr); /* fall-back */ +} + +static inline void +rte_ticketlock_recursive_unlock_tm(rte_ticketlock_recursive_t *tlr) +{ + rte_ticketlock_recursive_unlock(tlr); +} + +static inline int +rte_ticketlock_recursive_trylock_tm(rte_ticketlock_recursive_t *tlr) +{ + return rte_ticketlock_recursive_trylock(tlr); +} + +#ifdef __cplusplus +} +#endif + +#endif /* _RTE_TICKETLOCK_ARM_H_ */ diff --git a/lib/librte_eal/common/include/generic/rte_ticketlock.h b/lib/librte_eal/common/include/generic/rte_ticketlock.h new file mode 100644 index 0000000..d63aaaa --- 
/dev/null +++ b/lib/librte_eal/common/include/generic/rte_ticketlock.h @@ -0,0 +1,308 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2019 Arm Limited + */ + +#ifndef _RTE_TICKETLOCK_H_ +#define _RTE_TICKETLOCK_H_ + +/** + * @file + * + * RTE ticket locks + * + * This file defines an API for ticket locks, which give each waiting + * thread a ticket so threads take the lock one by one: first come, + * first served. + * + * All locks must be initialised before use, and only initialised once. + * + */ + +#ifdef __cplusplus extern "C" { #endif + +#include +#include +#include + +/** + * The rte_ticketlock_t type. + */ +typedef struct { + uint16_t current; + uint16_t next; +} rte_ticketlock_t; + +/** + * A static ticketlock initializer. + */ +#define RTE_TICKETLOCK_INITIALIZER { 0 } + +/** + * Initialize the ticketlock to an unlocked state. + * + * @param tl + * A pointer to the ticketlock. + */ +static inline __rte_experimental void +rte_ticketlock_init(rte_ticketlock_t *tl) +{ + __atomic_store_n(&tl->current, 0, __ATOMIC_RELAXED); + __atomic_store_n(&tl->next, 0, __ATOMIC_RELAXED); +} + +/** + * Take the ticketlock. + * + * @param tl + * A pointer to the ticketlock. + */ +static inline __rte_experimental void +rte_ticketlock_lock(rte_ticketlock_t *tl) +{ + uint16_t me = __atomic_fetch_add(&tl->next, 1, __ATOMIC_RELAXED); + while (__atomic_load_n(&tl->current, __ATOMIC_ACQUIRE) != me) + rte_pause(); +} + +/** + * Release the ticketlock. + * + * @param tl + * A pointer to the ticketlock. + */ +static inline __rte_experimental void +rte_ticketlock_unlock(rte_ticketlock_t *tl) +{ + uint16_t i = __atomic_load_n(&tl->current, __ATOMIC_RELAXED); + __atomic_store_n(&tl->current, i+1, __ATOMIC_RELEASE); +} + +/** + * Try to take the lock. + * + * @param tl + * A pointer to the ticketlock. + * @return + * 1 if the lock is successfully taken; 0 otherwise.
+ */ +static inline __rte_experimental int +rte_ticketlock_trylock(rte_ticketlock_t *tl) +{ + uint16_t next = __atomic_load_n(&tl->next, __ATOMIC_RELAXED); + uint16_t cur = __atomic_load_n(&tl->current, __ATOMIC_RELAXED); + if (next == cur) { + if (__atomic_compare_exchange_n(&tl->next, &next, next+1, + 0, __ATOMIC_ACQUIRE, __ATOMIC_RELAXED)) + return 1; + } + + return 0; +} + +/** + * Test if the lock is taken. + * + * @param tl + * A pointer to the ticketlock. + * @return + * 1 if the lock is currently taken; 0 otherwise. + */ +static inline __rte_experimental int +rte_ticketlock_is_locked(rte_ticketlock_t *tl) +{ + return (__atomic_load_n(&tl->current, __ATOMIC_ACQUIRE) != + __atomic_load_n(&tl->next, __ATOMIC_ACQUIRE)); +} + +/** + * Test if hardware transactional memory (lock elision) is supported + * + * @return + * 1 if the hardware transactional memory is supported; 0 otherwise. + */ +static inline int rte_tm_supported(void); + +/** + * Try to execute critical section in a hardware memory transaction; + * if it fails or is not available, take the ticketlock. + * + * NOTE: An attempt to perform a HW I/O operation inside a hardware memory + * transaction always aborts the transaction since the CPU is not able to + * roll-back should the transaction fail. Therefore, hardware transactional + * locks are not advised to be used around rte_eth_rx_burst() and + * rte_eth_tx_burst() calls. + * + * @param tl + * A pointer to the ticketlock. + */ +static inline void +rte_ticketlock_lock_tm(rte_ticketlock_t *tl); + +/** + * Commit hardware memory transaction or release the ticketlock if + * the ticketlock is used as a fall-back + * + * @param tl + * A pointer to the ticketlock. + */ +static inline void +rte_ticketlock_unlock_tm(rte_ticketlock_t *tl); + +/** + * Try to execute critical section in a hardware memory transaction; + * if it fails or is not available, try to take the lock.
+ * + * NOTE: An attempt to perform a HW I/O operation inside a hardware memory + * transaction always aborts the transaction since the CPU is not able to + * roll-back should the transaction fail. Therefore, hardware transactional + * locks are not advised to be used around rte_eth_rx_burst() and + * rte_eth_tx_burst() calls. + * + * @param tl + * A pointer to the ticketlock. + * @return + * 1 if the hardware memory transaction is successfully started + * or lock is successfully taken; 0 otherwise. + */ +static inline int +rte_ticketlock_trylock_tm(rte_ticketlock_t *tl); + +/** + * The rte_ticketlock_recursive_t type. + */ +#define TICKET_LOCK_INVALID_ID -1 + +typedef struct { + rte_ticketlock_t tl; /**< the actual ticketlock */ + int user; /**< core id using lock, TICKET_LOCK_INVALID_ID for unused */ + unsigned int count; /**< number of times this lock has been recursively taken */ +} rte_ticketlock_recursive_t; + +/** + * A static recursive ticketlock initializer. + */ +#define RTE_TICKETLOCK_RECURSIVE_INITIALIZER {RTE_TICKETLOCK_INITIALIZER, \ + TICKET_LOCK_INVALID_ID, 0} + +/** + * Initialize the recursive ticketlock to an unlocked state. + * + * @param tlr + * A pointer to the recursive ticketlock. + */ +static inline __rte_experimental void +rte_ticketlock_recursive_init(rte_ticketlock_recursive_t *tlr) +{ + rte_ticketlock_init(&tlr->tl); + __atomic_store_n(&tlr->user, TICKET_LOCK_INVALID_ID, __ATOMIC_RELAXED); + tlr->count = 0; +} + +/** + * Take the recursive ticketlock. + * + * @param tlr + * A pointer to the recursive ticketlock. + */ +static inline __rte_experimental void +rte_ticketlock_recursive_lock(rte_ticketlock_recursive_t *tlr) +{ + int id = rte_gettid(); + + if (__atomic_load_n(&tlr->user, __ATOMIC_RELAXED) != id) { + rte_ticketlock_lock(&tlr->tl); + __atomic_store_n(&tlr->user, id, __ATOMIC_RELAXED); + } + tlr->count++; +} + +/** + * Release the recursive ticketlock. + * + * @param tlr + * A pointer to the recursive ticketlock.
+ */ +static inline __rte_experimental void +rte_ticketlock_recursive_unlock(rte_ticketlock_recursive_t *tlr) +{ + if (--(tlr->count) == 0) { + __atomic_store_n(&tlr->user, TICKET_LOCK_INVALID_ID, + __ATOMIC_RELAXED); + rte_ticketlock_unlock(&tlr->tl); + } +} + +/** + * Try to take the recursive lock. + * + * @param tlr + * A pointer to the recursive ticketlock. + * @return + * 1 if the lock is successfully taken; 0 otherwise. + */ +static inline __rte_experimental int +rte_ticketlock_recursive_trylock(rte_ticketlock_recursive_t *tlr) +{ + int id = rte_gettid(); + + if (__atomic_load_n(&tlr->user, __ATOMIC_RELAXED) != id) { + if (rte_ticketlock_trylock(&tlr->tl) == 0) + return 0; + __atomic_store_n(&tlr->user, id, __ATOMIC_RELAXED); + } + tlr->count++; + return 1; +} + +/** + * Try to execute critical section in a hardware memory transaction; + * if it fails or is not available, take the recursive ticketlock. + * + * NOTE: An attempt to perform a HW I/O operation inside a hardware memory + * transaction always aborts the transaction since the CPU is not able to + * roll-back should the transaction fail. Therefore, hardware transactional + * locks are not advised to be used around rte_eth_rx_burst() and + * rte_eth_tx_burst() calls. + * + * @param tlr + * A pointer to the recursive ticketlock. + */ +static inline void +rte_ticketlock_recursive_lock_tm(rte_ticketlock_recursive_t *tlr); + +/** + * Commit hardware memory transaction or release the recursive ticketlock + * if the recursive ticketlock is used as a fall-back + * + * @param tlr + * A pointer to the recursive ticketlock.
+ */ +static inline void +rte_ticketlock_recursive_unlock_tm(rte_ticketlock_recursive_t *tlr); + +/** + * Try to execute critical section in a hardware memory transaction; + * if it fails or is not available, try to take the recursive lock. + * + * NOTE: An attempt to perform a HW I/O operation inside a hardware memory + * transaction always aborts the transaction since the CPU is not able to + * roll-back should the transaction fail. Therefore, hardware transactional + * locks are not advised to be used around rte_eth_rx_burst() and + * rte_eth_tx_burst() calls. + * + * @param tlr + * A pointer to the recursive ticketlock. + * @return + * 1 if the hardware memory transaction is successfully started + * or lock is successfully taken; 0 otherwise. + */ +static inline int +rte_ticketlock_recursive_trylock_tm(rte_ticketlock_recursive_t *tlr); + +#ifdef __cplusplus +} +#endif + +#endif /* _RTE_TICKETLOCK_H_ */ diff --git a/lib/librte_eal/common/meson.build b/lib/librte_eal/common/meson.build index 5ecae0b..0670e41 100644 --- a/lib/librte_eal/common/meson.build +++ b/lib/librte_eal/common/meson.build @@ -99,6 +99,7 @@ generic_headers = files( 'include/generic/rte_prefetch.h', 'include/generic/rte_rwlock.h', 'include/generic/rte_spinlock.h', + 'include/generic/rte_ticketlock.h', 'include/generic/rte_vect.h') install_headers(generic_headers, subdir: 'generic') From patchwork Fri Mar 15 06:56:28 2019 X-Patchwork-Submitter: Joyce Kong X-Patchwork-Id: 51220 X-Patchwork-Delegate: thomas@monjalon.net From: Joyce Kong To: dev@dpdk.org Cc: nd@arm.com, stephen@networkplumber.org, jerin.jacob@caviumnetworks.com, thomas@monjalon.net, honnappa.nagarahalli@arm.com, gavin.hu@arm.com Date: Fri, 15 Mar 2019 14:56:28 +0800 Message-Id: <1552632988-80787-3-git-send-email-joyce.kong@arm.com> X-Mailer: git-send-email 2.7.4 In-Reply-To: <1552632988-80787-1-git-send-email-joyce.kong@arm.com> References: <1552632988-80787-1-git-send-email-joyce.kong@arm.com> In-Reply-To: <1547802943-18711-1-git-send-email-joyce.kong@arm.com> References: <1547802943-18711-1-git-send-email-joyce.kong@arm.com> Subject: [dpdk-dev] [PATCH v6 2/2] test/ticketlock: add ticket lock test case List-Id: DPDK patches and discussions Sender: "dev" Add test cases for the ticket lock, the recursive ticket lock, and ticket lock performance.
Signed-off-by: Joyce Kong Reviewed-by: Gavin Hu Reviewed-by: Phil Yang --- MAINTAINERS | 1 + app/test/Makefile | 1 + app/test/autotest_data.py | 6 + app/test/meson.build | 1 + app/test/test_ticketlock.c | 311 +++++++++++++++++++++++++++++++++++++++++++++ 5 files changed, 320 insertions(+) create mode 100644 app/test/test_ticketlock.c diff --git a/MAINTAINERS b/MAINTAINERS index 7d87e25..b9ffd76 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -214,6 +214,7 @@ Ticketlock M: Joyce Kong F: lib/librte_eal/common/include/generic/rte_ticketlock.h F: lib/librte_eal/common/include/arch/arm/rte_ticketlock.h +F: app/test/test_ticketlock.c ARM v7 M: Jan Viktorin diff --git a/app/test/Makefile b/app/test/Makefile index 89949c2..d6aa28b 100644 --- a/app/test/Makefile +++ b/app/test/Makefile @@ -65,6 +65,7 @@ SRCS-y += test_barrier.c SRCS-y += test_malloc.c SRCS-y += test_cycles.c SRCS-y += test_spinlock.c +SRCS-y += test_ticketlock.c SRCS-y += test_memory.c SRCS-y += test_memzone.c SRCS-y += test_bitmap.c diff --git a/app/test/autotest_data.py b/app/test/autotest_data.py index 5f87bb9..db25274 100644 --- a/app/test/autotest_data.py +++ b/app/test/autotest_data.py @@ -171,6 +171,12 @@ "Report": None, }, { + "Name": "Ticketlock autotest", + "Command": "ticketlock_autotest", + "Func": ticketlock_autotest, + "Report": None, + }, + { "Name": "Byte order autotest", "Command": "byteorder_autotest", "Func": default_autotest, diff --git a/app/test/meson.build b/app/test/meson.build index 05e5dde..ddb4d09 100644 --- a/app/test/meson.build +++ b/app/test/meson.build @@ -107,6 +107,7 @@ test_sources = files('commands.c', 'test_timer.c', 'test_timer_perf.c', 'test_timer_racecond.c', + 'test_ticketlock.c', 'test_version.c', 'virtual_pmd.c' ) diff --git a/app/test/test_ticketlock.c b/app/test/test_ticketlock.c new file mode 100644 index 0000000..67281ce --- /dev/null +++ b/app/test/test_ticketlock.c @@ -0,0 +1,311 @@ +/* SPDX-License-Identifier: BSD-3-Clause + *
Copyright(c) 2018-2019 Arm Limited + */ + +#include +#include +#include +#include +#include +#include + +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include "test.h" + +/* + * Ticketlock test + * =============== + * + * - There is a global ticketlock and a table of ticketlocks (one per lcore). + * + * - The test function takes all of these locks and launches the + * ``test_ticketlock_per_core()`` function on each core (except the master). + * + * - The function takes the global lock, displays a message, then releases + * the global lock. + * - The function takes the per-lcore lock, displays a message, then releases + * the per-lcore lock. + * + * - The main function unlocks the per-lcore locks sequentially and + * waits between each lock. This triggers the display of a message + * for each core, in the correct order. The autotest script checks that + * this order is correct. + * + * - A load test is carried out, with all cores attempting to lock a single + * lock multiple times. + */ + +static rte_ticketlock_t tl, tl_try; +static rte_ticketlock_t tl_tab[RTE_MAX_LCORE]; +static rte_ticketlock_recursive_t tlr; +static unsigned int count; + +static rte_atomic32_t synchro; + +static int +test_ticketlock_per_core(__attribute__((unused)) void *arg) +{ + rte_ticketlock_lock(&tl); + printf("Global lock taken on core %u\n", rte_lcore_id()); + rte_ticketlock_unlock(&tl); + + rte_ticketlock_lock(&tl_tab[rte_lcore_id()]); + printf("Hello from core %u !\n", rte_lcore_id()); + rte_ticketlock_unlock(&tl_tab[rte_lcore_id()]); + + return 0; +} + +static int +test_ticketlock_recursive_per_core(__attribute__((unused)) void *arg) +{ + unsigned int id = rte_lcore_id(); + + rte_ticketlock_recursive_lock(&tlr); + printf("Global recursive lock taken on core %u - count = %d\n", + id, tlr.count); + rte_ticketlock_recursive_lock(&tlr); + printf("Global recursive lock taken on core %u - count = %d\n", + id, tlr.count); +
rte_ticketlock_recursive_lock(&tlr); + printf("Global recursive lock taken on core %u - count = %d\n", + id, tlr.count); + + printf("Hello from within recursive locks from core %u !\n", id); + + rte_ticketlock_recursive_unlock(&tlr); + printf("Global recursive lock released on core %u - count = %d\n", + id, tlr.count); + rte_ticketlock_recursive_unlock(&tlr); + printf("Global recursive lock released on core %u - count = %d\n", + id, tlr.count); + rte_ticketlock_recursive_unlock(&tlr); + printf("Global recursive lock released on core %u - count = %d\n", + id, tlr.count); + + return 0; +} + +static rte_ticketlock_t lk = RTE_TICKETLOCK_INITIALIZER; +static uint64_t lock_count[RTE_MAX_LCORE] = {0}; + +#define TIME_MS 100 + +static int +load_loop_fn(void *func_param) +{ + uint64_t time_diff = 0, begin; + uint64_t hz = rte_get_timer_hz(); + uint64_t lcount = 0; + const int use_lock = *(int *)func_param; + const unsigned int lcore = rte_lcore_id(); + + /* wait synchro for slaves */ + if (lcore != rte_get_master_lcore()) + while (rte_atomic32_read(&synchro) == 0) + ; + + begin = rte_get_timer_cycles(); + while (time_diff < hz * TIME_MS / 1000) { + if (use_lock) + rte_ticketlock_lock(&lk); + lcount++; + if (use_lock) + rte_ticketlock_unlock(&lk); + /* delay to make lock duty cycle slightly realistic */ + rte_delay_us(1); + time_diff = rte_get_timer_cycles() - begin; + } + lock_count[lcore] = lcount; + return 0; +} + +static int +test_ticketlock_perf(void) +{ + unsigned int i; + uint64_t total = 0; + int lock = 0; + const unsigned int lcore = rte_lcore_id(); + + printf("\nTest with no lock on single core...\n"); + load_loop_fn(&lock); + printf("Core [%u] count = %"PRIu64"\n", lcore, lock_count[lcore]); + memset(lock_count, 0, sizeof(lock_count)); + + printf("\nTest with lock on single core...\n"); + lock = 1; + load_loop_fn(&lock); + printf("Core [%u] count = %"PRIu64"\n", lcore, lock_count[lcore]); + memset(lock_count, 0, sizeof(lock_count)); + + printf("\nTest with lock on
%u cores...\n", rte_lcore_count()); + + /* Clear synchro and start slaves */ + rte_atomic32_set(&synchro, 0); + rte_eal_mp_remote_launch(load_loop_fn, &lock, SKIP_MASTER); + + /* start synchro and launch test on master */ + rte_atomic32_set(&synchro, 1); + load_loop_fn(&lock); + + rte_eal_mp_wait_lcore(); + + RTE_LCORE_FOREACH(i) { + printf("Core [%u] count = %"PRIu64"\n", i, lock_count[i]); + total += lock_count[i]; + } + + printf("Total count = %"PRIu64"\n", total); + + return 0; +} + +/* + * Use rte_ticketlock_trylock() to try to take a ticketlock object. + * If the lock cannot be taken, the function returns immediately and + * the "count" variable is increased by one each time. The final value + * of "count" is checked as the result later. + */ +static int +test_ticketlock_try(__attribute__((unused)) void *arg) +{ + if (rte_ticketlock_trylock(&tl_try) == 0) { + rte_ticketlock_lock(&tl); + count++; + rte_ticketlock_unlock(&tl); + } + + return 0; +} + + +/* + * Test rte_eal_get_lcore_state() in addition to ticketlocks + * as we have "waiting" then "running" lcores.
+ */ +static int +test_ticketlock(void) +{ + int ret = 0; + int i; + + /* slave cores should be waiting: print it */ + RTE_LCORE_FOREACH_SLAVE(i) { + printf("lcore %d state: %d\n", i, + (int) rte_eal_get_lcore_state(i)); + } + + rte_ticketlock_init(&tl); + rte_ticketlock_init(&tl_try); + rte_ticketlock_recursive_init(&tlr); + RTE_LCORE_FOREACH_SLAVE(i) { + rte_ticketlock_init(&tl_tab[i]); + } + + rte_ticketlock_lock(&tl); + + RTE_LCORE_FOREACH_SLAVE(i) { + rte_ticketlock_lock(&tl_tab[i]); + rte_eal_remote_launch(test_ticketlock_per_core, NULL, i); + } + + /* slave cores should be busy: print it */ + RTE_LCORE_FOREACH_SLAVE(i) { + printf("lcore %d state: %d\n", i, + (int) rte_eal_get_lcore_state(i)); + } + rte_ticketlock_unlock(&tl); + + RTE_LCORE_FOREACH_SLAVE(i) { + rte_ticketlock_unlock(&tl_tab[i]); + rte_delay_ms(10); + } + + rte_eal_mp_wait_lcore(); + + rte_ticketlock_recursive_lock(&tlr); + + /* + * Try to acquire a lock that we already own + */ + if (!rte_ticketlock_recursive_trylock(&tlr)) { + printf("rte_ticketlock_recursive_trylock failed on a lock that " + "we already own\n"); + ret = -1; + } else + rte_ticketlock_recursive_unlock(&tlr); + + RTE_LCORE_FOREACH_SLAVE(i) { + rte_eal_remote_launch(test_ticketlock_recursive_per_core, + NULL, i); + } + rte_ticketlock_recursive_unlock(&tlr); + rte_eal_mp_wait_lcore(); + + /* + * Test that try-locking a locked object returns immediately. + * Here it will lock the ticketlock object first, then launch all the + * slave lcores to trylock the same ticketlock object. + * All the slave lcores should give up try-locking the locked object, + * return immediately, and then increase the "count" variable + * (initialized to zero) by one each time. + * We can then check whether "count" finally equals the number of + * slave lcores to see if the behavior of try-locking a locked + * ticketlock object is correct.
+ */ + if (rte_ticketlock_trylock(&tl_try) == 0) + return -1; + + count = 0; + RTE_LCORE_FOREACH_SLAVE(i) { + rte_eal_remote_launch(test_ticketlock_try, NULL, i); + } + rte_eal_mp_wait_lcore(); + rte_ticketlock_unlock(&tl_try); + if (rte_ticketlock_is_locked(&tl)) { + printf("ticketlock is locked but it should not be\n"); + return -1; + } + rte_ticketlock_lock(&tl); + if (count != (rte_lcore_count() - 1)) + ret = -1; + + rte_ticketlock_unlock(&tl); + + /* + * Test if it can trylock recursively. + * Use rte_ticketlock_recursive_trylock() to check if it can lock + * a ticketlock object recursively. Here it will try to lock a + * ticketlock object twice. + */ + if (rte_ticketlock_recursive_trylock(&tlr) == 0) { + printf("It failed to do the first ticketlock_recursive_trylock " + "but it should be able to\n"); + return -1; + } + if (rte_ticketlock_recursive_trylock(&tlr) == 0) { + printf("It failed to do the second ticketlock_recursive_trylock " + "but it should be able to\n"); + return -1; + } + rte_ticketlock_recursive_unlock(&tlr); + rte_ticketlock_recursive_unlock(&tlr); + + if (test_ticketlock_perf() < 0) + return -1; + + return ret; +} + +REGISTER_TEST_COMMAND(ticketlock_autotest, test_ticketlock);
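For readers outside the DPDK tree, the ticket algorithm from patch 1/2 can be sketched as a stand-alone C11 version. This is a minimal illustration, not DPDK API: the `ticketlock_t` type and the `memory_order_*` mappings below are assumptions standing in for the GCC `__atomic` builtins and `rte_pause()` used in the real header.

```c
#include <stdatomic.h>
#include <stdint.h>

/* Hypothetical stand-alone mirror of the generic rte_ticketlock:
 * "next" hands out tickets, "current" is the ticket now being served. */
typedef struct {
	_Atomic uint16_t current;
	_Atomic uint16_t next;
} ticketlock_t;

static void ticketlock_init(ticketlock_t *tl)
{
	atomic_store_explicit(&tl->current, 0, memory_order_relaxed);
	atomic_store_explicit(&tl->next, 0, memory_order_relaxed);
}

static void ticketlock_lock(ticketlock_t *tl)
{
	/* Take a ticket, then spin until "current" reaches it. */
	uint16_t me = atomic_fetch_add_explicit(&tl->next, 1,
						memory_order_relaxed);
	while (atomic_load_explicit(&tl->current,
				    memory_order_acquire) != me)
		; /* the real implementation calls rte_pause() here */
}

static void ticketlock_unlock(ticketlock_t *tl)
{
	/* Serve the next ticket; the release store publishes the
	 * critical section to the next waiter. */
	uint16_t cur = atomic_load_explicit(&tl->current,
					    memory_order_relaxed);
	atomic_store_explicit(&tl->current, cur + 1, memory_order_release);
}

static int ticketlock_trylock(ticketlock_t *tl)
{
	/* Only take a ticket if it would be served immediately. */
	uint16_t next = atomic_load_explicit(&tl->next, memory_order_relaxed);
	uint16_t cur = atomic_load_explicit(&tl->current,
					    memory_order_relaxed);
	if (next == cur &&
	    atomic_compare_exchange_strong_explicit(&tl->next, &next,
			next + 1, memory_order_acquire, memory_order_relaxed))
		return 1;
	return 0;
}
```

A thread takes a ticket with a relaxed fetch-add on `next` and spins until `current`, loaded with acquire semantics, reaches its ticket; unlock stores `current + 1` with release semantics, handing the lock to waiters in strict FIFO order. That FIFO hand-off is what provides the fairness the commit message describes, at the cost of more cache-line traffic than a plain spinlock.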