From patchwork Thu Jan 14 17:34:54 2021
X-Patchwork-Submitter: Stephen Hemminger
X-Patchwork-Id: 86641
X-Patchwork-Delegate: thomas@monjalon.net
From: Stephen Hemminger <stephen@networkplumber.org>
To: dev@dpdk.org
Cc: Stephen Hemminger <stephen@networkplumber.org>
Date: Thu, 14 Jan 2021 09:34:54 -0800
Message-Id: <20210114173454.56657-1-stephen@networkplumber.org>
In-Reply-To: <20210112060524.409412-1-stephen@networkplumber.org>
References: <20210112060524.409412-1-stephen@networkplumber.org>
Subject: [dpdk-dev] [PATCH v1] eal: add ticket based reader writer lock

This patch implements a reader/writer ticket lock. This lock type acts
like rte_rwlock() but uses a ticket algorithm and is fair to multiple
writers and readers. Writers have priority over readers.

The tests are a clone of the existing rte_rwlock tests, with test and
function names changed, so the new ticket rwlock should be a drop-in
replacement for most users.

Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
P.S.: I have additional tests for rwlock that test for fairness.
Would these be valuable?
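For reviewers, the ticket discipline used by this patch can be sketched standalone. The following is a simplified model in C11 atomics, not the patch's code: the patch keeps the three counters in one union so write-unlock can bump `read` and `write` with a single store, whereas this sketch uses three separate atomic counters and plain spin loops.

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdint.h>

/* Model of the ticket rwlock: each acquirer takes a ticket from
 * 'next' and waits until the matching serving counter reaches it. */
struct ticket_rwlock {
	_Atomic uint16_t write; /* ticket currently allowed to write */
	_Atomic uint16_t read;  /* ticket currently allowed to read */
	_Atomic uint16_t next;  /* next ticket to hand out */
};

static void write_lock(struct ticket_rwlock *l)
{
	uint16_t me = atomic_fetch_add_explicit(&l->next, 1,
						memory_order_relaxed);
	while (atomic_load_explicit(&l->write, memory_order_acquire) != me)
		; /* spin until our ticket is served */
}

static void write_unlock(struct ticket_rwlock *l)
{
	/* pass the ticket on to the next waiter, reader or writer alike */
	atomic_fetch_add_explicit(&l->read, 1, memory_order_relaxed);
	atomic_fetch_add_explicit(&l->write, 1, memory_order_release);
}

static void read_lock(struct ticket_rwlock *l)
{
	uint16_t me = atomic_fetch_add_explicit(&l->next, 1,
						memory_order_relaxed);
	while (atomic_load_explicit(&l->read, memory_order_acquire) != me)
		;
	/* admit the next reader immediately: readers share the lock */
	atomic_fetch_add_explicit(&l->read, 1, memory_order_relaxed);
}

static void read_unlock(struct ticket_rwlock *l)
{
	/* a writer behind us is served only after every earlier
	 * reader has bumped 'write' */
	atomic_fetch_add_explicit(&l->write, 1, memory_order_release);
}
```

Because tickets are handed out in arrival order, a writer that arrives between two readers blocks the second reader until it has run, which is the fairness property the cover text claims.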
 app/test/autotest_data.py                     |   6 +
 app/test/meson.build                          |   5 +
 app/test/test_ticket_rwlock.c                 | 554 ++++++++++++++++++
 doc/api/doxy-api-index.md                     |   1 +
 lib/librte_eal/arm/include/meson.build        |   1 +
 .../arm/include/rte_ticket_rwlock.h           |  22 +
 .../include/generic/rte_ticket_rwlock.h       | 218 +++++++
 lib/librte_eal/include/meson.build            |   1 +
 lib/librte_eal/ppc/include/meson.build        |   1 +
 .../ppc/include/rte_ticket_rwlock.h           |  18 +
 lib/librte_eal/x86/include/meson.build        |   1 +
 .../x86/include/rte_ticket_rwlock.h           |  18 +
 12 files changed, 846 insertions(+)
 create mode 100644 app/test/test_ticket_rwlock.c
 create mode 100644 lib/librte_eal/arm/include/rte_ticket_rwlock.h
 create mode 100644 lib/librte_eal/include/generic/rte_ticket_rwlock.h
 create mode 100644 lib/librte_eal/ppc/include/rte_ticket_rwlock.h
 create mode 100644 lib/librte_eal/x86/include/rte_ticket_rwlock.h

diff --git a/app/test/autotest_data.py b/app/test/autotest_data.py
index 097638941f19..62816c36d873 100644
--- a/app/test/autotest_data.py
+++ b/app/test/autotest_data.py
@@ -231,6 +231,12 @@
         "Func": ticketlock_autotest,
         "Report": None,
     },
+    {
+        "Name": "Ticket rwlock autotest",
+        "Command": "ticket_rwlock_autotest",
+        "Func": ticketrwlock_autotest,
+        "Report": None,
+    },
     {
         "Name": "MCSlock autotest",
         "Command": "mcslock_autotest",
diff --git a/app/test/meson.build b/app/test/meson.build
index 94fd39fecb82..26bf0c15097d 100644
--- a/app/test/meson.build
+++ b/app/test/meson.build
@@ -135,6 +135,7 @@ test_sources = files('commands.c',
 	'test_timer_racecond.c',
 	'test_timer_secondary.c',
 	'test_ticketlock.c',
+	'test_ticket_rwlock.c',
 	'test_trace.c',
 	'test_trace_register.c',
 	'test_trace_perf.c',
@@ -245,6 +246,10 @@ fast_tests = [
         ['string_autotest', true],
         ['table_autotest', true],
         ['tailq_autotest', true],
+        ['ticketrwlock_test1_autotest', true],
+        ['ticketrwlock_rda_autotest', true],
+        ['ticketrwlock_rds_wrm_autotest', true],
+        ['ticketrwlock_rde_wro_autotest', true],
         ['timer_autotest', false],
         ['user_delay_us', true],
         ['version_autotest', true],
diff --git a/app/test/test_ticket_rwlock.c b/app/test/test_ticket_rwlock.c
new file mode 100644
index 000000000000..cffc9bf23ef6
--- /dev/null
+++ b/app/test/test_ticket_rwlock.c
@@ -0,0 +1,554 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2010-2014 Intel Corporation
+ */
+
+#include <stdio.h>
+#include <stdint.h>
+#include <inttypes.h>
+#include <unistd.h>
+#include <sys/queue.h>
+#include <string.h>
+
+#include <rte_common.h>
+#include <rte_memory.h>
+#include <rte_per_lcore.h>
+#include <rte_launch.h>
+#include <rte_pause.h>
+#include <rte_ticket_rwlock.h>
+#include <rte_eal.h>
+#include <rte_lcore.h>
+#include <rte_cycles.h>
+
+#include "test.h"
+
+/*
+ * ticket rwlock test
+ * ==================
+ * Provides UT for the rte_ticket_rwlock API.
+ * Main concern is on functional testing, but also provides some
+ * performance measurements.
+ * Obviously, for proper testing it needs to be executed with more
+ * than one lcore.
+ */
+
+#define ITER_NUM	0x80
+
+#define TEST_SEC	5
+
+static rte_rwticketlock_t sl;
+static rte_rwticketlock_t sl_tab[RTE_MAX_LCORE];
+static uint32_t synchro;
+
+enum {
+	LC_TYPE_RDLOCK,
+	LC_TYPE_WRLOCK,
+};
+
+static struct {
+	rte_rwticketlock_t lock;
+	uint64_t tick;
+	volatile union {
+		uint8_t u8[RTE_CACHE_LINE_SIZE];
+		uint64_t u64[RTE_CACHE_LINE_SIZE / sizeof(uint64_t)];
+	} data;
+} __rte_cache_aligned try_rwlock_data;
+
+struct try_rwlock_lcore {
+	int32_t rc;
+	int32_t type;
+	struct {
+		uint64_t tick;
+		uint64_t fail;
+		uint64_t success;
+	} stat;
+} __rte_cache_aligned;
+
+static struct try_rwlock_lcore try_lcore_data[RTE_MAX_LCORE];
+
+static int
+test_rwlock_per_core(__rte_unused void *arg)
+{
+	rte_rwticket_write_lock(&sl);
+	printf("Global write lock taken on core %u\n", rte_lcore_id());
+	rte_rwticket_write_unlock(&sl);
+
+	rte_rwticket_write_lock(&sl_tab[rte_lcore_id()]);
+	printf("Hello from core %u !\n", rte_lcore_id());
+	rte_rwticket_write_unlock(&sl_tab[rte_lcore_id()]);
+
+	rte_rwticket_read_lock(&sl);
+	printf("Global read lock taken on core %u\n", rte_lcore_id());
+	rte_delay_ms(100);
+	printf("Release global read lock on core %u\n", rte_lcore_id());
+	rte_rwticket_read_unlock(&sl);
+
+	return 0;
+}
+
+static rte_rwticketlock_t lk = RTE_RWTICKETLOCK_INITIALIZER;
+static volatile uint64_t rwlock_data;
+static uint64_t time_count[RTE_MAX_LCORE] = {0};
+
+#define MAX_LOOP 10000
+#define TEST_RWLOCK_DEBUG 0
+
+static int
+load_loop_fn(__rte_unused void *arg)
+{
+	uint64_t time_diff = 0, begin;
+	uint64_t hz = rte_get_timer_hz();
+	uint64_t lcount = 0;
+	const unsigned int lcore = rte_lcore_id();
+
+	/* wait synchro for workers */
+	if (lcore != rte_get_main_lcore())
+		rte_wait_until_equal_32(&synchro, 1, __ATOMIC_RELAXED);
+
+	begin = rte_rdtsc_precise();
+	while (lcount < MAX_LOOP) {
+		rte_rwticket_write_lock(&lk);
+		++rwlock_data;
+		rte_rwticket_write_unlock(&lk);
+
+		rte_rwticket_read_lock(&lk);
+		if (TEST_RWLOCK_DEBUG && !(lcount % 100))
+			printf("Core [%u] rwlock_data = %"PRIu64"\n",
+			       lcore, rwlock_data);
+		rte_rwticket_read_unlock(&lk);
+
+		lcount++;
+		/* delay to make lock duty cycle slightly realistic */
+		rte_pause();
+	}
+
+	time_diff = rte_rdtsc_precise() - begin;
+	time_count[lcore] = time_diff * 1000000 / hz;
+	return 0;
+}
+
+static int
+test_rwlock_perf(void)
+{
+	unsigned int i;
+	uint64_t total = 0;
+
+	printf("\nTicket rwlock Perf Test on %u cores...\n",
+	       rte_lcore_count());
+
+	/* clear synchro and start workers */
+	synchro = 0;
+	if (rte_eal_mp_remote_launch(load_loop_fn, NULL, SKIP_MAIN) < 0)
+		return -1;
+
+	/* start synchro and launch test on main */
+	__atomic_store_n(&synchro, 1, __ATOMIC_RELAXED);
+	load_loop_fn(NULL);
+
+	rte_eal_mp_wait_lcore();
+
+	RTE_LCORE_FOREACH(i) {
+		printf("Core [%u] cost time = %"PRIu64" us\n",
+		       i, time_count[i]);
+		total += time_count[i];
+	}
+
+	printf("Total cost time = %"PRIu64" us\n", total);
+	memset(time_count, 0, sizeof(time_count));
+
+	return 0;
+}
+
+/*
+ * - There is a global rwlock and a table of rwlocks (one per lcore).
+ *
+ * - The test function takes all of these locks and launches the
+ *   ``test_rwlock_per_core()`` function on each core (except the main).
+ *
+ *   - The function takes the global write lock, displays something,
+ *     then releases the global lock.
+ *   - Then, it takes the per-lcore write lock, displays something, and
+ *     releases the per-core lock.
+ *   - Finally, a read lock is taken during 100 ms, then released.
+ *
+ * - The main function unlocks the per-lcore locks sequentially and
+ *   waits between each lock. This triggers the display of a message
+ *   for each core, in the correct order.
+ *
+ *   Then, it tries to take the global write lock and displays the last
+ *   message. The autotest script checks that the message order is correct.
+ */
+static int
+rwlock_test1(void)
+{
+	int i;
+
+	rte_rwticketlock_init(&sl);
+	for (i = 0; i < RTE_MAX_LCORE; i++)
+		rte_rwticketlock_init(&sl_tab[i]);
+
+	rte_rwticket_write_lock(&sl);
+
+	RTE_LCORE_FOREACH_WORKER(i) {
+		rte_rwticket_write_lock(&sl_tab[i]);
+		rte_eal_remote_launch(test_rwlock_per_core, NULL, i);
+	}
+
+	/* release the global write lock; the workers are still waiting
+	 * on their own per-lcore lock */
+	rte_rwticket_write_unlock(&sl);
+
+	RTE_LCORE_FOREACH_WORKER(i) {
+		rte_rwticket_write_unlock(&sl_tab[i]);
+		rte_delay_ms(100);
+	}
+
+	rte_rwticket_write_lock(&sl);
+	/* this message should be the last message of test */
+	printf("Global write lock taken on main core %u\n", rte_lcore_id());
+	rte_rwticket_write_unlock(&sl);
+
+	rte_eal_mp_wait_lcore();
+
+	if (test_rwlock_perf() < 0)
+		return -1;
+
+	return 0;
+}
+
+static int
+try_read(uint32_t lc)
+{
+	int32_t rc;
+	uint32_t i;
+
+	rc = rte_rwticket_read_trylock(&try_rwlock_data.lock);
+	if (rc != 0)
+		return rc;
+
+	for (i = 0; i != RTE_DIM(try_rwlock_data.data.u64); i++) {
+		/* race condition occurred, lock doesn't work properly */
+		if (try_rwlock_data.data.u64[i] != 0) {
+			printf("%s(%u) error: unexpected data pattern\n",
+			       __func__, lc);
+			rc = -EFAULT;
+			break;
+		}
+	}
+
+	rte_rwticket_read_unlock(&try_rwlock_data.lock);
+	return rc;
+}
+
+static int
+try_write(uint32_t lc)
+{
+	int32_t rc;
+	uint32_t i, v;
+
+	v = RTE_MAX(lc % UINT8_MAX, 1U);
+	rc = rte_rwticket_write_trylock(&try_rwlock_data.lock);
+	if (rc != 0)
+		return rc;
+
+	/* update by bytes in reverse order */
+	for (i = RTE_DIM(try_rwlock_data.data.u8); i-- != 0; ) {
+		/* race condition occurred, lock doesn't work properly */
+		if (try_rwlock_data.data.u8[i] != 0) {
+			printf("%s(%u) error: unexpected data pattern\n",
+			       __func__, lc);
+			rc = -EFAULT;
+			break;
+		}
+		try_rwlock_data.data.u8[i] = v;
+	}
+
+	/* restore by bytes in reverse order */
+	for (i = RTE_DIM(try_rwlock_data.data.u8); i-- != 0; ) {
+		/* race condition occurred, lock doesn't work properly */
+		if (try_rwlock_data.data.u8[i] != v) {
+			printf("%s(%u) error: unexpected data pattern\n",
+			       __func__, lc);
+			rc = -EFAULT;
+			break;
+		}
+		try_rwlock_data.data.u8[i] = 0;
+	}
+
+	rte_rwticket_write_unlock(&try_rwlock_data.lock);
+	return rc;
+}
+
+static int
+try_read_lcore(__rte_unused void *data)
+{
+	int32_t rc;
+	uint32_t i, lc;
+	uint64_t ftm, stm, tm;
+	struct try_rwlock_lcore *lcd;
+
+	lc = rte_lcore_id();
+	lcd = try_lcore_data + lc;
+	lcd->type = LC_TYPE_RDLOCK;
+
+	ftm = try_rwlock_data.tick;
+	stm = rte_get_timer_cycles();
+
+	do {
+		for (i = 0; i != ITER_NUM; i++) {
+			rc = try_read(lc);
+			if (rc == 0)
+				lcd->stat.success++;
+			else if (rc == -EBUSY)
+				lcd->stat.fail++;
+			else
+				break;
+			rc = 0;
+		}
+		tm = rte_get_timer_cycles() - stm;
+	} while (tm < ftm && rc == 0);
+
+	lcd->rc = rc;
+	lcd->stat.tick = tm;
+	return rc;
+}
+
+static int
+try_write_lcore(__rte_unused void *data)
+{
+	int32_t rc;
+	uint32_t i, lc;
+	uint64_t ftm, stm, tm;
+	struct try_rwlock_lcore *lcd;
+
+	lc = rte_lcore_id();
+	lcd = try_lcore_data + lc;
+	lcd->type = LC_TYPE_WRLOCK;
+
+	ftm = try_rwlock_data.tick;
+	stm = rte_get_timer_cycles();
+
+	do {
+		for (i = 0; i != ITER_NUM; i++) {
+			rc = try_write(lc);
+			if (rc == 0)
+				lcd->stat.success++;
+			else if (rc == -EBUSY)
+				lcd->stat.fail++;
+			else
+				break;
+			rc = 0;
+		}
+		tm = rte_get_timer_cycles() - stm;
+	} while (tm < ftm && rc == 0);
+
+	lcd->rc = rc;
+	lcd->stat.tick = tm;
+	return rc;
+}
+
+static void
+print_try_lcore_stats(const struct try_rwlock_lcore *tlc, uint32_t lc)
+{
+	uint64_t f, s;
+
+	f = RTE_MAX(tlc->stat.fail, 1ULL);
+	s = RTE_MAX(tlc->stat.success, 1ULL);
+
+	printf("try_lcore_data[%u]={\n"
+		"\trc=%d,\n"
+		"\ttype=%s,\n"
+		"\tfail=%" PRIu64 ",\n"
+		"\tsuccess=%" PRIu64
+		",\n"
+		"\tcycles=%" PRIu64 ",\n"
+		"\tcycles/op=%#Lf,\n"
+		"\tcycles/success=%#Lf,\n"
+		"\tsuccess/fail=%#Lf,\n"
+		"};\n",
+		lc,
+		tlc->rc,
+		tlc->type == LC_TYPE_RDLOCK ? "RDLOCK" : "WRLOCK",
+		tlc->stat.fail,
+		tlc->stat.success,
+		tlc->stat.tick,
+		(long double)tlc->stat.tick /
+			(tlc->stat.fail + tlc->stat.success),
+		(long double)tlc->stat.tick / s,
+		(long double)tlc->stat.success / f);
+}
+
+static void
+collect_try_lcore_stats(struct try_rwlock_lcore *tlc,
+			const struct try_rwlock_lcore *lc)
+{
+	tlc->stat.tick += lc->stat.tick;
+	tlc->stat.fail += lc->stat.fail;
+	tlc->stat.success += lc->stat.success;
+}
+
+/*
+ * Process collected results:
+ * - check status
+ * - collect and print statistics
+ */
+static int
+process_try_lcore_stats(void)
+{
+	int32_t rc;
+	uint32_t lc, rd, wr;
+	struct try_rwlock_lcore rlc, wlc;
+
+	memset(&rlc, 0, sizeof(rlc));
+	memset(&wlc, 0, sizeof(wlc));
+
+	rlc.type = LC_TYPE_RDLOCK;
+	wlc.type = LC_TYPE_WRLOCK;
+	rd = 0;
+	wr = 0;
+
+	rc = 0;
+	RTE_LCORE_FOREACH(lc) {
+		rc |= try_lcore_data[lc].rc;
+		if (try_lcore_data[lc].type == LC_TYPE_RDLOCK) {
+			collect_try_lcore_stats(&rlc, try_lcore_data + lc);
+			rd++;
+		} else {
+			collect_try_lcore_stats(&wlc, try_lcore_data + lc);
+			wr++;
+		}
+	}
+
+	if (rc == 0) {
+		RTE_LCORE_FOREACH(lc)
+			print_try_lcore_stats(try_lcore_data + lc, lc);
+
+		if (rd != 0) {
+			printf("aggregated stats for %u RDLOCK cores:\n", rd);
+			print_try_lcore_stats(&rlc, rd);
+		}
+
+		if (wr != 0) {
+			printf("aggregated stats for %u WRLOCK cores:\n", wr);
+			print_try_lcore_stats(&wlc, wr);
+		}
+	}
+
+	return rc;
+}
+
+static void
+try_test_reset(void)
+{
+	memset(&try_lcore_data, 0, sizeof(try_lcore_data));
+	memset(&try_rwlock_data, 0, sizeof(try_rwlock_data));
+	try_rwlock_data.tick = TEST_SEC * rte_get_tsc_hz();
+}
+
+/* all lcores grab RDLOCK */
+static int
+try_rwlock_test_rda(void)
+{
+	try_test_reset();
+
+	/* start read test on all available lcores */
+	rte_eal_mp_remote_launch(try_read_lcore, NULL,
+			CALL_MAIN);
+	rte_eal_mp_wait_lcore();
+
+	return process_try_lcore_stats();
+}
+
+/* all worker lcores grab RDLOCK, main one grabs WRLOCK */
+static int
+try_rwlock_test_rds_wrm(void)
+{
+	try_test_reset();
+
+	rte_eal_mp_remote_launch(try_read_lcore, NULL, SKIP_MAIN);
+	try_write_lcore(NULL);
+	rte_eal_mp_wait_lcore();
+
+	return process_try_lcore_stats();
+}
+
+/* main and even worker lcores grab RDLOCK, odd lcores grab WRLOCK */
+static int
+try_rwlock_test_rde_wro(void)
+{
+	uint32_t lc, mlc;
+
+	try_test_reset();
+
+	mlc = rte_get_main_lcore();
+
+	RTE_LCORE_FOREACH(lc) {
+		if (lc != mlc) {
+			if ((lc & 1) == 0)
+				rte_eal_remote_launch(try_read_lcore,
+						      NULL, lc);
+			else
+				rte_eal_remote_launch(try_write_lcore,
+						      NULL, lc);
+		}
+	}
+	try_read_lcore(NULL);
+	rte_eal_mp_wait_lcore();
+
+	return process_try_lcore_stats();
+}
+
+static int
+test_rwlock(void)
+{
+	uint32_t i;
+	int32_t rc, ret;
+
+	static const struct {
+		const char *name;
+		int (*ftst)(void);
+	} test[] = {
+		{
+			.name = "rwlock_test1",
+			.ftst = rwlock_test1,
+		},
+		{
+			.name = "try_rwlock_test_rda",
+			.ftst = try_rwlock_test_rda,
+		},
+		{
+			.name = "try_rwlock_test_rds_wrm",
+			.ftst = try_rwlock_test_rds_wrm,
+		},
+		{
+			.name = "try_rwlock_test_rde_wro",
+			.ftst = try_rwlock_test_rde_wro,
+		},
+	};
+
+	ret = 0;
+	for (i = 0; i != RTE_DIM(test); i++) {
+		printf("starting test %s;\n", test[i].name);
+		rc = test[i].ftst();
+		printf("test %s completed with status %d\n", test[i].name, rc);
+		ret |= rc;
+	}
+
+	return ret;
+}
+
+REGISTER_TEST_COMMAND(ticketrwlock_autotest, test_rwlock);
+
+/* subtests used in meson for CI */
+REGISTER_TEST_COMMAND(ticketrwlock_test1_autotest, rwlock_test1);
+REGISTER_TEST_COMMAND(ticketrwlock_rda_autotest, try_rwlock_test_rda);
+REGISTER_TEST_COMMAND(ticketrwlock_rds_wrm_autotest, try_rwlock_test_rds_wrm);
+REGISTER_TEST_COMMAND(ticketrwlock_rde_wro_autotest, try_rwlock_test_rde_wro);
diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md
index 748514e24316..d76a4c8ba1c4 100644
--- a/doc/api/doxy-api-index.md
+++ b/doc/api/doxy-api-index.md
@@ -76,6 +76,7 @@ The public API headers are grouped by topics:
   [rwlock]             (@ref rte_rwlock.h),
   [spinlock]           (@ref rte_spinlock.h),
   [ticketlock]         (@ref rte_ticketlock.h),
+  [ticketrwlock]       (@ref rte_ticket_rwlock.h),
   [RCU]                (@ref rte_rcu_qsbr.h)

 - **CPU arch**:
diff --git a/lib/librte_eal/arm/include/meson.build b/lib/librte_eal/arm/include/meson.build
index 770766de1a34..951a527ffa64 100644
--- a/lib/librte_eal/arm/include/meson.build
+++ b/lib/librte_eal/arm/include/meson.build
@@ -28,6 +28,7 @@ arch_headers = files(
 	'rte_rwlock.h',
 	'rte_spinlock.h',
 	'rte_ticketlock.h',
+	'rte_ticket_rwlock.h',
 	'rte_vect.h',
 )
 install_headers(arch_headers, subdir: get_option('include_subdir_arch'))
diff --git a/lib/librte_eal/arm/include/rte_ticket_rwlock.h b/lib/librte_eal/arm/include/rte_ticket_rwlock.h
new file mode 100644
index 000000000000..273137a5abba
--- /dev/null
+++ b/lib/librte_eal/arm/include/rte_ticket_rwlock.h
@@ -0,0 +1,22 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Microsoft Corporation
+ */
+
+#ifndef _RTE_FAIR_RWLOCK_ARM_H_
+#define _RTE_FAIR_RWLOCK_ARM_H_
+
+#ifndef RTE_FORCE_INTRINSICS
+#  error Platform must be built with RTE_FORCE_INTRINSICS
+#endif
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include "generic/rte_ticket_rwlock.h"
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_FAIR_RWLOCK_ARM_H_ */
diff --git a/lib/librte_eal/include/generic/rte_ticket_rwlock.h b/lib/librte_eal/include/generic/rte_ticket_rwlock.h
new file mode 100644
index 000000000000..b3637358c1f7
--- /dev/null
+++ b/lib/librte_eal/include/generic/rte_ticket_rwlock.h
@@ -0,0 +1,218 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Microsoft Corporation
+ */
+
+#ifndef _RTE_TICKET_RWLOCK_H_
+#define _RTE_TICKET_RWLOCK_H_
+
+/**
+ * @file
+ *
+ * Ticket based reader/writer lock
+ *
+ * This file defines an API for ticket style read-write locks.
+ * This type of lock acts like rte_rwlock but provides fairness:
+ * requests are handled first come, first served.
+ *
+ * All locks must be initialized before use, and only initialized once.
+ *
+ * References:
+ *  "Spinlocks and Read-Write Locks"
+ *     http://locklessinc.com/articles/locks/
+ *  "Scalable Reader-Writer Synchronization for Shared-Memory Multiprocessors"
+ *     https://www.cs.rochester.edu/research/synchronization/pseudocode/rw.html
+ */
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+typedef union {
+	uint64_t tickets;
+	struct {
+		union {
+			struct {
+				uint16_t write; /* current writer */
+				uint16_t read;  /* current reader */
+			};
+			uint32_t readwrite; /* atomic for both read and write */
+		};
+		uint16_t next; /* next ticket */
+	};
+} rte_rwticketlock_t;
+
+/**
+ * A static rwticket initializer.
+ */
+#define RTE_RWTICKETLOCK_INITIALIZER { 0 }
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Initialize the rwticketlock to an unlocked state.
+ *
+ * @param rwl
+ *   A pointer to the rwticketlock structure.
+ */
+__rte_experimental
+static inline void
+rte_rwticketlock_init(rte_rwticketlock_t *rwl)
+{
+	rwl->tickets = 0;
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Take a write lock. Loop until the lock is held.
+ *
+ * @param rwl
+ *   A pointer to a rwticketlock structure.
+ */
+__rte_experimental
+static inline void
+rte_rwticket_write_lock(rte_rwticketlock_t *rwl)
+{
+	uint16_t me;
+
+	me = __atomic_fetch_add(&rwl->next, 1, __ATOMIC_RELAXED);
+	rte_wait_until_equal_16(&rwl->write, me, __ATOMIC_ACQUIRE);
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Try to take a write lock.
+ *
+ * @param rwl
+ *   A pointer to a rwticketlock structure.
+ * @return
+ *   - zero if the lock is successfully taken
+ *   - -EBUSY if the lock could not be acquired for writing because
+ *     it was already locked for reading or writing
+ */
+__rte_experimental
+static inline int
+rte_rwticket_write_trylock(rte_rwticketlock_t *rwl)
+{
+	rte_rwticketlock_t old, new;
+
+	old.tickets = __atomic_load_n(&rwl->tickets, __ATOMIC_RELAXED);
+	if (old.write != old.next)
+		return -EBUSY;
+
+	new.tickets = old.tickets;
+	new.next = old.next + 1;
+	if (__atomic_compare_exchange_n(&rwl->tickets, &old.tickets,
+					new.tickets, 0,
+					__ATOMIC_ACQUIRE, __ATOMIC_RELAXED))
+		return 0;
+	else
+		return -EBUSY;
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Release a write lock.
+ *
+ * @param rwl
+ *   A pointer to a rwticketlock structure.
+ */
+__rte_experimental
+static inline void
+rte_rwticket_write_unlock(rte_rwticketlock_t *rwl)
+{
+	rte_rwticketlock_t t;
+
+	t.tickets = __atomic_load_n(&rwl->tickets, __ATOMIC_RELAXED);
+	t.write++;
+	t.read++;
+	__atomic_store_n(&rwl->readwrite, t.readwrite, __ATOMIC_RELEASE);
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Take a read lock. Loop until the lock is held.
+ *
+ * @param rwl
+ *   A pointer to a rwticketlock structure.
+ */
+__rte_experimental
+static inline void
+rte_rwticket_read_lock(rte_rwticketlock_t *rwl)
+{
+	uint16_t me;
+
+	me = __atomic_fetch_add(&rwl->next, 1, __ATOMIC_RELAXED);
+	rte_wait_until_equal_16(&rwl->read, me, __ATOMIC_ACQUIRE);
+	__atomic_fetch_add(&rwl->read, 1, __ATOMIC_RELAXED);
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Try to take a read lock.
+ *
+ * @param rwl
+ *   A pointer to a rwticketlock structure.
+ *
+ * @return
+ *   - zero if the lock is successfully taken
+ *   - -EBUSY if the lock could not be acquired for reading because a
+ *     writer holds the lock
+ */
+__rte_experimental
+static inline int
+rte_rwticket_read_trylock(rte_rwticketlock_t *rwl)
+{
+	rte_rwticketlock_t old, new;
+	int success;
+
+	old.tickets = __atomic_load_n(&rwl->tickets, __ATOMIC_RELAXED);
+
+	do {
+		uint16_t me = old.next; /* this is our ticket */
+
+		/* does a writer have the lock now? */
+		if (old.read != me && old.write != me)
+			return -EBUSY;
+
+		/* expect to be the next reader */
+		new.tickets = old.tickets;
+		old.read = me;
+		new.read = new.next = me + 1;
+		success = __atomic_compare_exchange_n(&rwl->tickets,
+						&old.tickets, new.tickets, 0,
+						__ATOMIC_ACQUIRE,
+						__ATOMIC_RELAXED);
+	} while (!success);
+
+	return 0;
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Release a read lock.
+ *
+ * @param rwl
+ *   A pointer to the rwticketlock structure.
+ */
+__rte_experimental
+static inline void
+rte_rwticket_read_unlock(rte_rwticketlock_t *rwl)
+{
+	__atomic_add_fetch(&rwl->write, 1, __ATOMIC_RELEASE);
+}
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_TICKET_RWLOCK_H_ */
diff --git a/lib/librte_eal/include/meson.build b/lib/librte_eal/include/meson.build
index 0dea342e1deb..fe5c19748926 100644
--- a/lib/librte_eal/include/meson.build
+++ b/lib/librte_eal/include/meson.build
@@ -65,6 +65,7 @@ generic_headers = files(
 	'generic/rte_rwlock.h',
 	'generic/rte_spinlock.h',
 	'generic/rte_ticketlock.h',
+	'generic/rte_ticket_rwlock.h',
 	'generic/rte_vect.h',
 )
 install_headers(generic_headers, subdir: 'generic')
diff --git a/lib/librte_eal/ppc/include/meson.build b/lib/librte_eal/ppc/include/meson.build
index dae40ede546e..0bc560327749 100644
--- a/lib/librte_eal/ppc/include/meson.build
+++ b/lib/librte_eal/ppc/include/meson.build
@@ -16,6 +16,7 @@ arch_headers = files(
 	'rte_rwlock.h',
 	'rte_spinlock.h',
 	'rte_ticketlock.h',
+	'rte_ticket_rwlock.h',
 	'rte_vect.h',
 )
 install_headers(arch_headers, subdir: get_option('include_subdir_arch'))
diff --git a/lib/librte_eal/ppc/include/rte_ticket_rwlock.h b/lib/librte_eal/ppc/include/rte_ticket_rwlock.h
new file mode 100644
index 000000000000..4768d5bfa8ef
--- /dev/null
+++ b/lib/librte_eal/ppc/include/rte_ticket_rwlock.h
@@ -0,0 +1,18 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Microsoft Corporation
+ */
+
+#ifndef _RTE_FAIR_RWLOCK_PPC_64_H_
+#define _RTE_FAIR_RWLOCK_PPC_64_H_
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include "generic/rte_ticket_rwlock.h"
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_FAIR_RWLOCK_PPC_64_H_ */
diff --git a/lib/librte_eal/x86/include/meson.build b/lib/librte_eal/x86/include/meson.build
index 549cc21a42ed..e9169f0d1da5 100644
--- a/lib/librte_eal/x86/include/meson.build
+++ b/lib/librte_eal/x86/include/meson.build
@@ -20,6 +20,7 @@ arch_headers = files(
 	'rte_rwlock.h',
 	'rte_spinlock.h',
 	'rte_ticketlock.h',
+	'rte_ticket_rwlock.h',
 	'rte_vect.h',
 )
 install_headers(arch_headers, subdir: get_option('include_subdir_arch'))
diff --git a/lib/librte_eal/x86/include/rte_ticket_rwlock.h b/lib/librte_eal/x86/include/rte_ticket_rwlock.h
new file mode 100644
index 000000000000..83c8bd0899d3
--- /dev/null
+++ b/lib/librte_eal/x86/include/rte_ticket_rwlock.h
@@ -0,0 +1,18 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Microsoft Corporation
+ */
+
+#ifndef _RTE_FAIR_RWLOCK_X86_64_H_
+#define _RTE_FAIR_RWLOCK_X86_64_H_
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include "generic/rte_ticket_rwlock.h"
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_FAIR_RWLOCK_X86_64_H_ */
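A note on the trylock semantics in the generic header: a write trylock can only succeed when no ticket at all is outstanding (write == next), and a read trylock only when our ticket would be served as a reader immediately. The scalar, single-threaded model below (hypothetical `model_*` names, plain fields instead of the patch's single 64-bit compare-and-swap) shows just these state transitions:

```c
#include <assert.h>
#include <stdint.h>

/* Scalar model of the ticket counters from rte_ticket_rwlock.h. */
struct tickets {
	uint16_t write; /* ticket currently allowed to write */
	uint16_t read;  /* ticket currently allowed to read */
	uint16_t next;  /* next ticket to hand out */
};

/* succeeds only when no ticket at all is outstanding */
static int model_write_trylock(struct tickets *t)
{
	if (t->write != t->next)
		return -1; /* -EBUSY in the real API */
	t->next++; /* our ticket is immediately the one being served */
	return 0;
}

static void model_write_unlock(struct tickets *t)
{
	t->write++;
	t->read++;
}

/* succeeds only when our ticket would be served as a reader at once */
static int model_read_trylock(struct tickets *t)
{
	uint16_t me = t->next;

	if (t->read != me && t->write != me)
		return -1; /* a writer is ahead of us: -EBUSY */
	t->next = t->read = me + 1; /* admit ourselves and the next reader */
	return 0;
}

static void model_read_unlock(struct tickets *t)
{
	t->write++;
}
```

The model makes it easy to see why readers share the lock (each successful read trylock advances `read` past its own ticket) while a held write lock rejects both kinds of trylock until `write` catches up with `next`.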