From patchwork Fri Jan 18 15:23:22 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Eads, Gage" X-Patchwork-Id: 49953 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id EB8372BAE; Fri, 18 Jan 2019 16:24:36 +0100 (CET) Received: from mga11.intel.com (mga11.intel.com [192.55.52.93]) by dpdk.org (Postfix) with ESMTP id 6CAF4288C for ; Fri, 18 Jan 2019 16:24:34 +0100 (CET) X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga005.jf.intel.com ([10.7.209.41]) by fmsmga102.fm.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 18 Jan 2019 07:24:33 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.56,491,1539673200"; d="scan'208";a="292610268" Received: from txasoft-yocto.an.intel.com (HELO txasoft-yocto.an.intel.com.) ([10.123.72.192]) by orsmga005.jf.intel.com with ESMTP; 18 Jan 2019 07:24:32 -0800 From: Gage Eads To: dev@dpdk.org Cc: olivier.matz@6wind.com, arybchenko@solarflare.com, bruce.richardson@intel.com, konstantin.ananyev@intel.com, stephen@networkplumber.org Date: Fri, 18 Jan 2019 09:23:22 -0600 Message-Id: <20190118152326.22686-2-gage.eads@intel.com> X-Mailer: git-send-email 2.13.6 In-Reply-To: <20190118152326.22686-1-gage.eads@intel.com> References: <20190115235227.14013-1-gage.eads@intel.com> <20190118152326.22686-1-gage.eads@intel.com> Subject: [dpdk-dev] [PATCH v3 1/5] ring: add 64-bit headtail structure X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" 64-bit head and tail index widths greatly increase the time it takes for them to wrap around (at current CPU speeds, it won't happen within the author's lifetime). This is important in avoiding the ABA problem -- in which a thread mistakes reading the same tail index in two accesses to mean that the ring was not modified in the intervening time -- in the upcoming non-blocking ring implementation. Using a 64-bit index makes the possibility of this occurring effectively zero. This commit places the new producer and consumer structures in the same location in struct rte_ring as their 32-bit counterparts. Since the 32-bit versions are padded out to a cache line, there is space for the new structure without affecting the layout of struct rte_ring. Thus, the ABI is preserved. 
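As a rough illustration of the wrap-around claim, the following standalone snippet (illustrative only, not part of the patch; the one-operation-per-nanosecond rate is an assumed figure) compares how long 32-bit and 64-bit indexes take to wrap:

  #include <stdint.h>
  #include <stdio.h>

  int main(void)
  {
          /* Assumed ring throughput: one head/tail increment per nanosecond. */
          const double ops_per_sec = 1e9;

          double secs_32 = ((double)UINT32_MAX + 1) / ops_per_sec;
          double secs_64 = ((double)UINT64_MAX + 1) / ops_per_sec;

          printf("32-bit index wraps after ~%.1f seconds\n", secs_32);
          printf("64-bit index wraps after ~%.0f years\n",
                 secs_64 / (365.25 * 24 * 3600));
          return 0;
  }

At that rate a 32-bit index wraps in roughly 4.3 seconds, while a 64-bit index takes on the order of 585 years, which is why the ABA window is treated as effectively zero.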
Signed-off-by: Gage Eads --- lib/librte_eventdev/rte_event_ring.h | 2 +- lib/librte_ring/Makefile | 3 +- lib/librte_ring/rte_ring.h | 24 +++++- lib/librte_ring/rte_ring_generic_64.h | 152 ++++++++++++++++++++++++++++++++++ 4 files changed, 176 insertions(+), 5 deletions(-) create mode 100644 lib/librte_ring/rte_ring_generic_64.h diff --git a/lib/librte_eventdev/rte_event_ring.h b/lib/librte_eventdev/rte_event_ring.h index 827a3209e..5fcb2d5f7 100644 --- a/lib/librte_eventdev/rte_event_ring.h +++ b/lib/librte_eventdev/rte_event_ring.h @@ -1,5 +1,5 @@ /* SPDX-License-Identifier: BSD-3-Clause - * Copyright(c) 2016-2017 Intel Corporation + * Copyright(c) 2016-2019 Intel Corporation */ /** diff --git a/lib/librte_ring/Makefile b/lib/librte_ring/Makefile index 21a36770d..18c48fbc8 100644 --- a/lib/librte_ring/Makefile +++ b/lib/librte_ring/Makefile @@ -19,6 +19,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_RING) := rte_ring.c # install includes SYMLINK-$(CONFIG_RTE_LIBRTE_RING)-include := rte_ring.h \ rte_ring_generic.h \ - rte_ring_c11_mem.h + rte_ring_c11_mem.h \ + rte_ring_generic_64.h include $(RTE_SDK)/mk/rte.lib.mk diff --git a/lib/librte_ring/rte_ring.h b/lib/librte_ring/rte_ring.h index af5444a9f..b270a4746 100644 --- a/lib/librte_ring/rte_ring.h +++ b/lib/librte_ring/rte_ring.h @@ -1,6 +1,6 @@ /* SPDX-License-Identifier: BSD-3-Clause * - * Copyright (c) 2010-2017 Intel Corporation + * Copyright (c) 2010-2019 Intel Corporation * Copyright (c) 2007-2009 Kip Macy kmacy@freebsd.org * All rights reserved. * Derived from FreeBSD's bufring.h @@ -70,6 +70,15 @@ struct rte_ring_headtail { uint32_t single; /**< True if single prod/cons */ }; +/* 64-bit version of rte_ring_headtail, for use by rings that need to avoid + * head/tail wrap-around. + */ +struct rte_ring_headtail_64 { + volatile uint64_t head; /**< Prod/consumer head. */ + volatile uint64_t tail; /**< Prod/consumer tail. */ + uint32_t single; /**< True if single prod/cons */ +}; + /** * An RTE ring structure. * @@ -97,11 +106,19 @@ struct rte_ring { char pad0 __rte_cache_aligned; /**< empty cache line */ /** Ring producer status. */ - struct rte_ring_headtail prod __rte_cache_aligned; + RTE_STD_C11 + union { + struct rte_ring_headtail prod __rte_cache_aligned; + struct rte_ring_headtail_64 prod_64 __rte_cache_aligned; + }; char pad1 __rte_cache_aligned; /**< empty cache line */ /** Ring consumer status. */ - struct rte_ring_headtail cons __rte_cache_aligned; + RTE_STD_C11 + union { + struct rte_ring_headtail cons __rte_cache_aligned; + struct rte_ring_headtail_64 cons_64 __rte_cache_aligned; + }; char pad2 __rte_cache_aligned; /**< empty cache line */ }; @@ -312,6 +329,7 @@ void rte_ring_dump(FILE *f, const struct rte_ring *r); #else #include "rte_ring_generic.h" #endif +#include "rte_ring_generic_64.h" /** * @internal Enqueue several objects on the ring diff --git a/lib/librte_ring/rte_ring_generic_64.h b/lib/librte_ring/rte_ring_generic_64.h new file mode 100644 index 000000000..58de144c6 --- /dev/null +++ b/lib/librte_ring/rte_ring_generic_64.h @@ -0,0 +1,152 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * + * Copyright (c) 2010-2019 Intel Corporation + * Copyright (c) 2007-2009 Kip Macy kmacy@freebsd.org + * All rights reserved. + * Derived from FreeBSD's bufring.h + * Used as BSD-3 Licensed with permission from Kip Macy. + */ + +#ifndef _RTE_RING_GENERIC_64_H_ +#define _RTE_RING_GENERIC_64_H_ + +/** + * @internal This function updates the producer head for enqueue using + * 64-bit head/tail values. 
+ * + * @param r + * A pointer to the ring structure + * @param is_sp + * Indicates whether multi-producer path is needed or not + * @param n + * The number of elements we will want to enqueue, i.e. how far should the + * head be moved + * @param behavior + * RTE_RING_QUEUE_FIXED: Enqueue a fixed number of items from a ring + * RTE_RING_QUEUE_VARIABLE: Enqueue as many items as possible from ring + * @param old_head + * Returns head value as it was before the move, i.e. where enqueue starts + * @param new_head + * Returns the current/new head value i.e. where enqueue finishes + * @param free_entries + * Returns the amount of free space in the ring BEFORE head was moved + * @return + * Actual number of objects enqueued. + * If behavior == RTE_RING_QUEUE_FIXED, this will be 0 or n only. + */ +static __rte_always_inline unsigned int +__rte_ring_move_prod_head_64(struct rte_ring *r, unsigned int is_sp, + unsigned int n, enum rte_ring_queue_behavior behavior, + uint64_t *old_head, uint64_t *new_head, + uint32_t *free_entries) +{ + const uint32_t capacity = r->capacity; + unsigned int max = n; + int success; + + do { + /* Reset n to the initial burst count */ + n = max; + + *old_head = r->prod_64.head; + + /* add rmb barrier to avoid load/load reorder in weak + * memory model. It is noop on x86 + */ + rte_smp_rmb(); + + /* + * The subtraction is done between two unsigned 64bits value + * (the result is always modulo 64 bits even if we have + * *old_head > cons_tail). So 'free_entries' is always between 0 + * and capacity (which is < size). + */ + *free_entries = (capacity + r->cons_64.tail - *old_head); + + /* check that we have enough room in ring */ + if (unlikely(n > *free_entries)) + n = (behavior == RTE_RING_QUEUE_FIXED) ? + 0 : *free_entries; + + if (n == 0) + return 0; + + *new_head = *old_head + n; + if (is_sp) + r->prod_64.head = *new_head, success = 1; + else + success = rte_atomic64_cmpset(&r->prod_64.head, + *old_head, *new_head); + } while (unlikely(success == 0)); + return n; +} + +/** + * @internal This function updates the consumer head for dequeue using + * 64-bit head/tail values. + * + * @param r + * A pointer to the ring structure + * @param is_sc + * Indicates whether multi-consumer path is needed or not + * @param n + * The number of elements we will want to enqueue, i.e. how far should the + * head be moved + * @param behavior + * RTE_RING_QUEUE_FIXED: Dequeue a fixed number of items from a ring + * RTE_RING_QUEUE_VARIABLE: Dequeue as many items as possible from ring + * @param old_head + * Returns head value as it was before the move, i.e. where dequeue starts + * @param new_head + * Returns the current/new head value i.e. where dequeue finishes + * @param entries + * Returns the number of entries in the ring BEFORE head was moved + * @return + * - Actual number of objects dequeued. + * If behavior == RTE_RING_QUEUE_FIXED, this will be 0 or n only. + */ +static __rte_always_inline unsigned int +__rte_ring_move_cons_head_64(struct rte_ring *r, unsigned int is_sc, + unsigned int n, enum rte_ring_queue_behavior behavior, + uint64_t *old_head, uint64_t *new_head, + uint32_t *entries) +{ + unsigned int max = n; + int success; + + do { + /* Restore n as it may change every loop */ + n = max; + + *old_head = r->cons_64.head; + + /* add rmb barrier to avoid load/load reorder in weak + * memory model. 
It is noop on x86 + */ + rte_smp_rmb(); + + /* The subtraction is done between two unsigned 64bits value + * (the result is always modulo 64 bits even if we have + * cons_head > prod_tail). So 'entries' is always between 0 + * and size(ring)-1. + */ + *entries = (r->prod_64.tail - *old_head); + + /* Set the actual entries for dequeue */ + if (n > *entries) + n = (behavior == RTE_RING_QUEUE_FIXED) ? 0 : *entries; + + if (unlikely(n == 0)) + return 0; + + *new_head = *old_head + n; + if (is_sc) + r->cons_64.head = *new_head, success = 1; + else + success = rte_atomic64_cmpset(&r->cons_64.head, + *old_head, *new_head); + } while (unlikely(success == 0)); + return n; +} + +#endif /* _RTE_RING_GENERIC_64_H_ */ From patchwork Fri Jan 18 15:23:23 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Eads, Gage" X-Patchwork-Id: 49954 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id F132E2BE2; Fri, 18 Jan 2019 16:24:38 +0100 (CET) Received: from mga11.intel.com (mga11.intel.com [192.55.52.93]) by dpdk.org (Postfix) with ESMTP id 435732B9E for ; Fri, 18 Jan 2019 16:24:35 +0100 (CET) X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga005.jf.intel.com ([10.7.209.41]) by fmsmga102.fm.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 18 Jan 2019 07:24:34 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.56,491,1539673200"; d="scan'208";a="292610275" Received: from txasoft-yocto.an.intel.com (HELO txasoft-yocto.an.intel.com.) ([10.123.72.192]) by orsmga005.jf.intel.com with ESMTP; 18 Jan 2019 07:24:33 -0800 From: Gage Eads To: dev@dpdk.org Cc: olivier.matz@6wind.com, arybchenko@solarflare.com, bruce.richardson@intel.com, konstantin.ananyev@intel.com, stephen@networkplumber.org Date: Fri, 18 Jan 2019 09:23:23 -0600 Message-Id: <20190118152326.22686-3-gage.eads@intel.com> X-Mailer: git-send-email 2.13.6 In-Reply-To: <20190118152326.22686-1-gage.eads@intel.com> References: <20190115235227.14013-1-gage.eads@intel.com> <20190118152326.22686-1-gage.eads@intel.com> Subject: [dpdk-dev] [PATCH v3 2/5] ring: add a non-blocking implementation X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" This commit adds support for non-blocking circular ring enqueue and dequeue functions. The ring uses a 128-bit compare-and-swap instruction, and thus is currently limited to x86_64. The algorithm is based on the original rte ring (derived from FreeBSD's bufring.h) and inspired by Michael and Scott's non-blocking concurrent queue. Importantly, it adds a modification counter to each ring entry to ensure only one thread can write to an unused entry. ----- Algorithm: Multi-producer non-blocking enqueue: 1. Move the producer head index 'n' locations forward, effectively reserving 'n' locations. 2. For each pointer: a. Read the producer tail index, then ring[tail]. If ring[tail]'s modification counter isn't 'tail', retry. b. Construct the new entry: {pointer, tail + ring size} c. Compare-and-swap the old entry with the new. If unsuccessful, the next loop iteration will try to enqueue this pointer again. d. Compare-and-swap the tail index with 'tail + 1', whether or not step 2c succeeded. 
This guarantees threads can make forward progress. Multi-consumer non-blocking dequeue: 1. Move the consumer head index 'n' locations forward, effectively reserving 'n' pointers to be dequeued. 2. Copy 'n' pointers into the caller's object table (ignoring the modification counter), starting from ring[tail], then compare-and-swap the tail index with 'tail + n'. If unsuccessful, repeat step 2. ----- Discussion: There are two cases where the ABA problem is mitigated: 1. Enqueueing a pointer to the ring: without a modification counter tied to the tail index, the index could become stale by the time the enqueue happens, causing it to overwrite valid data. Tying the counter to the tail index gives us an expected value (as opposed to, say, a monotonically incrementing counter). Since the counter will eventually wrap, there is potential for the ABA problem. However, using a 64-bit counter makes this likelihood effectively zero. 2. Updating a tail index: the ABA problem can occur if the thread is preempted and the tail index wraps around. However, using 64-bit indexes makes this likelihood effectively zero. With no contention, an enqueue of n pointers uses (1 + 2n) CAS operations and a dequeue of n pointers uses 2. This algorithm has worse average-case performance than the regular rte ring (particularly for a highly contended ring with large bulk accesses); however: - For applications with preemptible pthreads, the regular rte ring's worst-case performance (i.e. one thread being preempted in the update_tail() critical section) is much worse than the non-blocking ring's. - Software caching can mitigate the average-case performance penalty for ring-based algorithms. For example, a non-blocking-ring-based mempool (a likely use case for this ring) can use per-thread caching. The non-blocking ring is enabled via a new flag, RING_F_NB. Because the ring's memsize is now a function of its flags (the non-blocking ring requires 128b for each entry), this commit adds a new argument ('flags') to rte_ring_get_memsize(). An API deprecation notice will be sent in a separate commit. For ease of use, the existing ring enqueue and dequeue functions work on both regular and non-blocking rings. This introduces an additional branch in the datapath, but it should be a highly predictable branch. ring_perf_autotest shows a negligible performance impact; it's hard to distinguish a real difference from system noise.

Test                              | cycles (with branch - without branch)
-------------------------------------------------------------------------
SP/SC single enq/dequeue          |  0.33
MP/MC single enq/dequeue          | -4.00
SP/SC burst enq/dequeue (size 8)  |  0.00
MP/MC burst enq/dequeue (size 8)  |  0.00
SP/SC burst enq/dequeue (size 32) |  0.00
MP/MC burst enq/dequeue (size 32) |  0.00
SC empty dequeue                  |  1.00
MC empty dequeue                  |  0.00

Single lcore:
SP/SC bulk enq/dequeue (size 8)   |  0.49
MP/MC bulk enq/dequeue (size 8)   |  0.08
SP/SC bulk enq/dequeue (size 32)  |  0.07
MP/MC bulk enq/dequeue (size 32)  |  0.09

Two physical cores:
SP/SC bulk enq/dequeue (size 8)   |  0.19
MP/MC bulk enq/dequeue (size 8)   | -0.37
SP/SC bulk enq/dequeue (size 32)  |  0.09
MP/MC bulk enq/dequeue (size 32)  | -0.05

Two NUMA nodes:
SP/SC bulk enq/dequeue (size 8)   | -1.96
MP/MC bulk enq/dequeue (size 8)   |  0.88
SP/SC bulk enq/dequeue (size 32)  |  0.10
MP/MC bulk enq/dequeue (size 32)  |  0.46

Test setup: x86_64 build with default config, dual-socket Xeon E5-2699 v4, running on isolcpus cores with a tickless scheduler. Each test was run three times and the results were averaged. 
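For reference, a minimal usage sketch of the new flag (assuming this series is applied and the application is built with ALLOW_EXPERIMENTAL_API, which the non-blocking multi-producer path in the patch below checks for; ring name and size are arbitrary):

  #include <rte_ring.h>
  #include <rte_lcore.h>
  #include <rte_errno.h>

  static int
  nb_ring_example(void)
  {
          void *burst[8] = { NULL };  /* stand-ins for application objects */
          void *out[8];
          struct rte_ring *r;
          unsigned int n;

          /* RING_F_NB selects the non-blocking enqueue/dequeue paths;
           * creation fails with rte_errno = EINVAL on non-x86_64 platforms.
           */
          r = rte_ring_create("nb_example", 1024, rte_socket_id(), RING_F_NB);
          if (r == NULL)
                  return -rte_errno;

          /* The regular burst APIs are used as-is; the RING_F_NB branch
           * inside them dispatches to the non-blocking implementation.
           */
          n = rte_ring_enqueue_burst(r, burst, 8, NULL);
          n = rte_ring_dequeue_burst(r, out, n, NULL);

          rte_ring_free(r);
          return (int)n;
  }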
Signed-off-by: Gage Eads --- lib/librte_ring/rte_ring.c | 72 ++++- lib/librte_ring/rte_ring.h | 550 +++++++++++++++++++++++++++++++++-- lib/librte_ring/rte_ring_version.map | 7 + 3 files changed, 587 insertions(+), 42 deletions(-) diff --git a/lib/librte_ring/rte_ring.c b/lib/librte_ring/rte_ring.c index d215acecc..f3378dccd 100644 --- a/lib/librte_ring/rte_ring.c +++ b/lib/librte_ring/rte_ring.c @@ -45,9 +45,9 @@ EAL_REGISTER_TAILQ(rte_ring_tailq) /* return the size of memory occupied by a ring */ ssize_t -rte_ring_get_memsize(unsigned count) +rte_ring_get_memsize_v1905(unsigned int count, unsigned int flags) { - ssize_t sz; + ssize_t sz, elt_sz; /* count must be a power of 2 */ if ((!POWEROF2(count)) || (count > RTE_RING_SZ_MASK )) { @@ -57,10 +57,23 @@ rte_ring_get_memsize(unsigned count) return -EINVAL; } - sz = sizeof(struct rte_ring) + count * sizeof(void *); + elt_sz = (flags & RING_F_NB) ? 2 * sizeof(void *) : sizeof(void *); + + sz = sizeof(struct rte_ring) + count * elt_sz; sz = RTE_ALIGN(sz, RTE_CACHE_LINE_SIZE); return sz; } +BIND_DEFAULT_SYMBOL(rte_ring_get_memsize, _v1905, 19.05); +MAP_STATIC_SYMBOL(ssize_t rte_ring_get_memsize(unsigned int count, + unsigned int flags), + rte_ring_get_memsize_v1905); + +ssize_t +rte_ring_get_memsize_v20(unsigned int count) +{ + return rte_ring_get_memsize_v1905(count, 0); +} +VERSION_SYMBOL(rte_ring_get_memsize, _v20, 2.0); int rte_ring_init(struct rte_ring *r, const char *name, unsigned count, @@ -82,8 +95,6 @@ rte_ring_init(struct rte_ring *r, const char *name, unsigned count, if (ret < 0 || ret >= (int)sizeof(r->name)) return -ENAMETOOLONG; r->flags = flags; - r->prod.single = (flags & RING_F_SP_ENQ) ? __IS_SP : __IS_MP; - r->cons.single = (flags & RING_F_SC_DEQ) ? __IS_SC : __IS_MC; if (flags & RING_F_EXACT_SZ) { r->size = rte_align32pow2(count + 1); @@ -100,8 +111,30 @@ rte_ring_init(struct rte_ring *r, const char *name, unsigned count, r->mask = count - 1; r->capacity = r->mask; } - r->prod.head = r->cons.head = 0; - r->prod.tail = r->cons.tail = 0; + + if (flags & RING_F_NB) { + uint64_t i; + + r->prod_64.single = (flags & RING_F_SP_ENQ) ? __IS_SP : __IS_MP; + r->cons_64.single = (flags & RING_F_SC_DEQ) ? __IS_SC : __IS_MC; + r->prod_64.head = r->cons_64.head = 0; + r->prod_64.tail = r->cons_64.tail = 0; + + for (i = 0; i < r->size; i++) { + struct nb_ring_entry *ring_ptr, *base; + + base = ((struct nb_ring_entry *)&r[1]); + + ring_ptr = &base[i & r->mask]; + + ring_ptr->cnt = i; + } + } else { + r->prod.single = (flags & RING_F_SP_ENQ) ? __IS_SP : __IS_MP; + r->cons.single = (flags & RING_F_SC_DEQ) ? 
__IS_SC : __IS_MC; + r->prod.head = r->cons.head = 0; + r->prod.tail = r->cons.tail = 0; + } return 0; } @@ -123,11 +156,19 @@ rte_ring_create(const char *name, unsigned count, int socket_id, ring_list = RTE_TAILQ_CAST(rte_ring_tailq.head, rte_ring_list); +#if !defined(RTE_ARCH_X86_64) + if (flags & RING_F_NB) { + printf("RING_F_NB is only supported on x86-64 platforms\n"); + rte_errno = EINVAL; + return NULL; + } +#endif + /* for an exact size ring, round up from count to a power of two */ if (flags & RING_F_EXACT_SZ) count = rte_align32pow2(count + 1); - ring_size = rte_ring_get_memsize(count); + ring_size = rte_ring_get_memsize(count, flags); if (ring_size < 0) { rte_errno = ring_size; return NULL; @@ -227,10 +268,17 @@ rte_ring_dump(FILE *f, const struct rte_ring *r) fprintf(f, " flags=%x\n", r->flags); fprintf(f, " size=%"PRIu32"\n", r->size); fprintf(f, " capacity=%"PRIu32"\n", r->capacity); - fprintf(f, " ct=%"PRIu32"\n", r->cons.tail); - fprintf(f, " ch=%"PRIu32"\n", r->cons.head); - fprintf(f, " pt=%"PRIu32"\n", r->prod.tail); - fprintf(f, " ph=%"PRIu32"\n", r->prod.head); + if (r->flags & RING_F_NB) { + fprintf(f, " ct=%"PRIu64"\n", r->cons_64.tail); + fprintf(f, " ch=%"PRIu64"\n", r->cons_64.head); + fprintf(f, " pt=%"PRIu64"\n", r->prod_64.tail); + fprintf(f, " ph=%"PRIu64"\n", r->prod_64.head); + } else { + fprintf(f, " ct=%"PRIu32"\n", r->cons.tail); + fprintf(f, " ch=%"PRIu32"\n", r->cons.head); + fprintf(f, " pt=%"PRIu32"\n", r->prod.tail); + fprintf(f, " ph=%"PRIu32"\n", r->prod.head); + } fprintf(f, " used=%u\n", rte_ring_count(r)); fprintf(f, " avail=%u\n", rte_ring_free_count(r)); } diff --git a/lib/librte_ring/rte_ring.h b/lib/librte_ring/rte_ring.h index b270a4746..08c9de6a6 100644 --- a/lib/librte_ring/rte_ring.h +++ b/lib/librte_ring/rte_ring.h @@ -134,6 +134,18 @@ struct rte_ring { */ #define RING_F_EXACT_SZ 0x0004 #define RTE_RING_SZ_MASK (0x7fffffffU) /**< Ring size mask */ +/** + * The ring uses non-blocking enqueue and dequeue functions. These functions + * do not have the "non-preemptive" constraint of a regular rte ring, and thus + * are suited for applications using preemptible pthreads. However, the + * non-blocking functions have worse average-case performance than their + * regular rte ring counterparts. When used as the handler for a mempool, + * per-thread caching can mitigate the performance difference by reducing the + * number (and contention) of ring accesses. + * + * This flag is only supported on x86_64 platforms. + */ +#define RING_F_NB 0x0008 /* @internal defines for passing to the enqueue dequeue worker functions */ #define __IS_SP 1 @@ -151,11 +163,15 @@ struct rte_ring { * * @param count * The number of elements in the ring (must be a power of 2). + * @param flags + * The flags the ring will be created with. * @return * - The memory size needed for the ring on success. * - -EINVAL if count is not a power of 2. */ -ssize_t rte_ring_get_memsize(unsigned count); +ssize_t rte_ring_get_memsize(unsigned int count, unsigned int flags); +ssize_t rte_ring_get_memsize_v20(unsigned int count); +ssize_t rte_ring_get_memsize_v1905(unsigned int count, unsigned int flags); /** * Initialize a ring structure. @@ -188,6 +204,10 @@ ssize_t rte_ring_get_memsize(unsigned count); * - RING_F_SC_DEQ: If this flag is set, the default behavior when * using ``rte_ring_dequeue()`` or ``rte_ring_dequeue_bulk()`` * is "single-consumer". Otherwise, it is "multi-consumers". 
+ * - RING_F_EXACT_SZ: If this flag is set, count can be a non-power-of-2 + * number, but up to half the ring space may be wasted. + * - RING_F_NB: (x86_64 only) If this flag is set, the ring uses + * non-blocking variants of the dequeue and enqueue functions. * @return * 0 on success, or a negative value on error. */ @@ -223,12 +243,17 @@ int rte_ring_init(struct rte_ring *r, const char *name, unsigned count, * - RING_F_SC_DEQ: If this flag is set, the default behavior when * using ``rte_ring_dequeue()`` or ``rte_ring_dequeue_bulk()`` * is "single-consumer". Otherwise, it is "multi-consumers". + * - RING_F_EXACT_SZ: If this flag is set, count can be a non-power-of-2 + * number, but up to half the ring space may be wasted. + * - RING_F_NB: (x86_64 only) If this flag is set, the ring uses + * non-blocking variants of the dequeue and enqueue functions. * @return * On success, the pointer to the new allocated ring. NULL on error with * rte_errno set appropriately. Possible errno values include: * - E_RTE_NO_CONFIG - function could not get pointer to rte_config structure * - E_RTE_SECONDARY - function was called from a secondary process instance - * - EINVAL - count provided is not a power of 2 + * - EINVAL - count provided is not a power of 2, or RING_F_NB is used on an + * unsupported platform * - ENOSPC - the maximum number of memzones has already been allocated * - EEXIST - a memzone with the same name already exists * - ENOMEM - no appropriate memory area found in which to create memzone @@ -284,6 +309,50 @@ void rte_ring_dump(FILE *f, const struct rte_ring *r); } \ } while (0) +/* The actual enqueue of pointers on the ring. + * Used only by the single-producer non-blocking enqueue function, but + * out-lined here for code readability. + */ +#define ENQUEUE_PTRS_NB(r, ring_start, prod_head, obj_table, n) do { \ + unsigned int i; \ + const uint32_t size = (r)->size; \ + size_t idx = prod_head & (r)->mask; \ + size_t new_cnt = prod_head + size; \ + struct nb_ring_entry *ring = (struct nb_ring_entry *)ring_start; \ + if (likely(idx + n < size)) { \ + for (i = 0; i < (n & ((~(unsigned)0x3))); i += 4, idx += 4) { \ + ring[idx].ptr = obj_table[i]; \ + ring[idx].cnt = new_cnt + i; \ + ring[idx + 1].ptr = obj_table[i + 1]; \ + ring[idx + 1].cnt = new_cnt + i + 1; \ + ring[idx + 2].ptr = obj_table[i + 2]; \ + ring[idx + 2].cnt = new_cnt + i + 2; \ + ring[idx + 3].ptr = obj_table[i + 3]; \ + ring[idx + 3].cnt = new_cnt + i + 3; \ + } \ + switch (n & 0x3) { \ + case 3: \ + ring[idx].cnt = new_cnt + i; \ + ring[idx++].ptr = obj_table[i++]; /* fallthrough */ \ + case 2: \ + ring[idx].cnt = new_cnt + i; \ + ring[idx++].ptr = obj_table[i++]; /* fallthrough */ \ + case 1: \ + ring[idx].cnt = new_cnt + i; \ + ring[idx++].ptr = obj_table[i++]; \ + } \ + } else { \ + for (i = 0; idx < size; i++, idx++) { \ + ring[idx].cnt = new_cnt + i; \ + ring[idx].ptr = obj_table[i]; \ + } \ + for (idx = 0; i < n; i++, idx++) { \ + ring[idx].cnt = new_cnt + i; \ + ring[idx].ptr = obj_table[i]; \ + } \ + } \ +} while (0) + /* the actual copy of pointers on the ring to obj_table. * Placed here since identical code needed in both * single and multi consumer dequeue functions */ @@ -315,6 +384,39 @@ void rte_ring_dump(FILE *f, const struct rte_ring *r); } \ } while (0) +/* The actual copy of pointers on the ring to obj_table. + * Placed here since identical code needed in both + * single and multi consumer non-blocking dequeue functions. 
+ */ +#define DEQUEUE_PTRS_NB(r, ring_start, cons_head, obj_table, n) do { \ + unsigned int i; \ + size_t idx = cons_head & (r)->mask; \ + const uint32_t size = (r)->size; \ + struct nb_ring_entry *ring = (struct nb_ring_entry *)ring_start; \ + if (likely(idx + n < size)) { \ + for (i = 0; i < (n & (~(unsigned)0x3)); i += 4, idx += 4) {\ + obj_table[i] = ring[idx].ptr; \ + obj_table[i + 1] = ring[idx + 1].ptr; \ + obj_table[i + 2] = ring[idx + 2].ptr; \ + obj_table[i + 3] = ring[idx + 3].ptr; \ + } \ + switch (n & 0x3) { \ + case 3: \ + obj_table[i++] = ring[idx++].ptr; /* fallthrough */ \ + case 2: \ + obj_table[i++] = ring[idx++].ptr; /* fallthrough */ \ + case 1: \ + obj_table[i++] = ring[idx++].ptr; \ + } \ + } else { \ + for (i = 0; idx < size; i++, idx++) \ + obj_table[i] = ring[idx].ptr; \ + for (idx = 0; i < n; i++, idx++) \ + obj_table[i] = ring[idx].ptr; \ + } \ +} while (0) + + /* Between load and load. there might be cpu reorder in weak model * (powerpc/arm). * There are 2 choices for the users @@ -331,6 +433,319 @@ void rte_ring_dump(FILE *f, const struct rte_ring *r); #endif #include "rte_ring_generic_64.h" +/* @internal 128-bit structure used by the non-blocking ring */ +struct nb_ring_entry { + void *ptr; /**< Data pointer */ + uint64_t cnt; /**< Modification counter */ +}; + +/* The non-blocking ring algorithm is based on the original rte ring (derived + * from FreeBSD's bufring.h) and inspired by Michael and Scott's non-blocking + * concurrent queue. + */ + +/** + * @internal + * Enqueue several objects on the non-blocking ring (single-producer only) + * + * @param r + * A pointer to the ring structure. + * @param obj_table + * A pointer to a table of void * pointers (objects). + * @param n + * The number of objects to add in the ring from the obj_table. + * @param behavior + * RTE_RING_QUEUE_FIXED: Enqueue a fixed number of items to the ring + * RTE_RING_QUEUE_VARIABLE: Enqueue as many items as possible to the ring + * @param free_space + * returns the amount of space after the enqueue operation has finished + * @return + * Actual number of objects enqueued. + * If behavior == RTE_RING_QUEUE_FIXED, this will be 0 or n only. + */ +static __rte_always_inline unsigned int +__rte_ring_do_nb_enqueue_sp(struct rte_ring *r, void * const *obj_table, + unsigned int n, + enum rte_ring_queue_behavior behavior, + unsigned int *free_space) +{ + uint32_t free_entries; + size_t head, next; + + n = __rte_ring_move_prod_head_64(r, 1, n, behavior, + &head, &next, &free_entries); + if (n == 0) + goto end; + + ENQUEUE_PTRS_NB(r, &r[1], head, obj_table, n); + + r->prod_64.tail += n; + +end: + if (free_space != NULL) + *free_space = free_entries - n; + return n; +} + +/** + * @internal + * Enqueue several objects on the non-blocking ring (multi-producer safe) + * + * @param r + * A pointer to the ring structure. + * @param obj_table + * A pointer to a table of void * pointers (objects). + * @param n + * The number of objects to add in the ring from the obj_table. + * @param behavior + * RTE_RING_QUEUE_FIXED: Enqueue a fixed number of items to the ring + * RTE_RING_QUEUE_VARIABLE: Enqueue as many items as possible to the ring + * @param free_space + * returns the amount of space after the enqueue operation has finished + * @return + * Actual number of objects enqueued. + * If behavior == RTE_RING_QUEUE_FIXED, this will be 0 or n only. 
+ */ +static __rte_always_inline unsigned int +__rte_ring_do_nb_enqueue_mp(struct rte_ring *r, void * const *obj_table, + unsigned int n, + enum rte_ring_queue_behavior behavior, + unsigned int *free_space) +{ +#if !defined(RTE_ARCH_X86_64) || !defined(ALLOW_EXPERIMENTAL_API) + RTE_SET_USED(r); + RTE_SET_USED(obj_table); + RTE_SET_USED(n); + RTE_SET_USED(behavior); + RTE_SET_USED(free_space); +#ifndef ALLOW_EXPERIMENTAL_API + printf("[%s()] RING_F_NB requires an experimental API." + " Recompile with ALLOW_EXPERIMENTAL_API to use it.\n" + , __func__); +#endif + return 0; +#endif +#if defined(RTE_ARCH_X86_64) && defined(ALLOW_EXPERIMENTAL_API) + size_t head, next, tail; + uint32_t free_entries; + unsigned int i; + + n = __rte_ring_move_prod_head_64(r, 0, n, behavior, + &head, &next, &free_entries); + if (n == 0) + goto end; + + for (i = 0; i < n; /* i incremented if enqueue succeeds */) { + struct nb_ring_entry old_value, new_value; + struct nb_ring_entry *ring_ptr; + + /* Enqueue to the tail entry. If another thread wins the race, + * retry with the new tail. + */ + tail = r->prod_64.tail; + + ring_ptr = &((struct nb_ring_entry *)&r[1])[tail & r->mask]; + + old_value = *ring_ptr; + + /* If the tail entry's modification counter doesn't match the + * producer tail index, it's already been updated. + */ + if (old_value.cnt != tail) + continue; + + /* Prepare the new entry. The cnt field mitigates the ABA + * problem on the ring write. + */ + new_value.ptr = obj_table[i]; + new_value.cnt = tail + r->size; + + if (rte_atomic128_cmpset((volatile rte_int128_t *)ring_ptr, + (rte_int128_t *)&old_value, + (rte_int128_t *)&new_value)) + i++; + + /* Every thread attempts the cmpset, so they don't have to wait + * for the thread that successfully enqueued to the ring. + * Using a 64-bit tail mitigates the ABA problem here. + * + * Built-in used to handle variable-sized tail index. + */ + __sync_bool_compare_and_swap(&r->prod_64.tail, tail, tail + 1); + } + +end: + if (free_space != NULL) + *free_space = free_entries - n; + return n; +#endif +} + +/** + * @internal Enqueue several objects on the non-blocking ring + * + * @param r + * A pointer to the ring structure. + * @param obj_table + * A pointer to a table of void * pointers (objects). + * @param n + * The number of objects to add in the ring from the obj_table. + * @param behavior + * RTE_RING_QUEUE_FIXED: Enqueue a fixed number of items to the ring + * RTE_RING_QUEUE_VARIABLE: Enqueue as many items as possible to the ring + * @param is_sp + * Indicates whether to use single producer or multi-producer head update + * @param free_space + * returns the amount of space after the enqueue operation has finished + * @return + * Actual number of objects enqueued. + * If behavior == RTE_RING_QUEUE_FIXED, this will be 0 or n only. + */ +static __rte_always_inline unsigned int +__rte_ring_do_nb_enqueue(struct rte_ring *r, void * const *obj_table, + unsigned int n, enum rte_ring_queue_behavior behavior, + unsigned int is_sp, unsigned int *free_space) +{ + if (is_sp) + return __rte_ring_do_nb_enqueue_sp(r, obj_table, n, + behavior, free_space); + else + return __rte_ring_do_nb_enqueue_mp(r, obj_table, n, + behavior, free_space); +} + +/** + * @internal + * Dequeue several objects from the non-blocking ring (single-consumer only) + * + * @param r + * A pointer to the ring structure. + * @param obj_table + * A pointer to a table of void * pointers (objects). + * @param n + * The number of objects to pull from the ring. 
+ * @param behavior + * RTE_RING_QUEUE_FIXED: Dequeue a fixed number of items from the ring + * RTE_RING_QUEUE_VARIABLE: Dequeue as many items as possible from the ring + * @param available + * returns the number of remaining ring entries after the dequeue has finished + * @return + * - Actual number of objects dequeued. + * If behavior == RTE_RING_QUEUE_FIXED, this will be 0 or n only. + */ +static __rte_always_inline unsigned int +__rte_ring_do_nb_dequeue_sc(struct rte_ring *r, void **obj_table, + unsigned int n, + enum rte_ring_queue_behavior behavior, + unsigned int *available) +{ + size_t head, next; + uint32_t entries; + + n = __rte_ring_move_cons_head_64(r, 1, n, behavior, + &head, &next, &entries); + if (n == 0) + goto end; + + DEQUEUE_PTRS_NB(r, &r[1], head, obj_table, n); + + r->cons_64.tail += n; + +end: + if (available != NULL) + *available = entries - n; + return n; +} + +/** + * @internal + * Dequeue several objects from the non-blocking ring (multi-consumer safe) + * + * @param r + * A pointer to the ring structure. + * @param obj_table + * A pointer to a table of void * pointers (objects). + * @param n + * The number of objects to pull from the ring. + * @param behavior + * RTE_RING_QUEUE_FIXED: Dequeue a fixed number of items from the ring + * RTE_RING_QUEUE_VARIABLE: Dequeue as many items as possible from the ring + * @param available + * returns the number of remaining ring entries after the dequeue has finished + * @return + * - Actual number of objects dequeued. + * If behavior == RTE_RING_QUEUE_FIXED, this will be 0 or n only. + */ +static __rte_always_inline unsigned int +__rte_ring_do_nb_dequeue_mc(struct rte_ring *r, void **obj_table, + unsigned int n, + enum rte_ring_queue_behavior behavior, + unsigned int *available) +{ + size_t head, next; + uint32_t entries; + + n = __rte_ring_move_cons_head_64(r, 0, n, behavior, + &head, &next, &entries); + if (n == 0) + goto end; + + while (1) { + size_t tail = r->cons_64.tail; + + /* Dequeue from the cons tail onwards. If multiple threads read + * the same pointers, the thread that successfully performs the + * CAS will keep them and the other(s) will retry. + */ + DEQUEUE_PTRS_NB(r, &r[1], tail, obj_table, n); + + next = tail + n; + + /* Built-in used to handle variable-sized tail index. */ + if (__sync_bool_compare_and_swap(&r->cons_64.tail, tail, next)) + /* There is potential for the ABA problem here, but + * that is mitigated by the large (64-bit) tail. + */ + break; + } + +end: + if (available != NULL) + *available = entries - n; + return n; +} + +/** + * @internal Dequeue several objects from the non-blocking ring + * + * @param r + * A pointer to the ring structure. + * @param obj_table + * A pointer to a table of void * pointers (objects). + * @param n + * The number of objects to pull from the ring. + * @param behavior + * RTE_RING_QUEUE_FIXED: Dequeue a fixed number of items from the ring + * RTE_RING_QUEUE_VARIABLE: Dequeue as many items as possible from the ring + * @param available + * returns the number of remaining ring entries after the dequeue has finished + * @return + * - Actual number of objects dequeued. + * If behavior == RTE_RING_QUEUE_FIXED, this will be 0 or n only. 
+ */ +static __rte_always_inline unsigned int +__rte_ring_do_nb_dequeue(struct rte_ring *r, void **obj_table, + unsigned int n, enum rte_ring_queue_behavior behavior, + unsigned int is_sc, unsigned int *available) +{ + if (is_sc) + return __rte_ring_do_nb_dequeue_sc(r, obj_table, n, + behavior, available); + else + return __rte_ring_do_nb_dequeue_mc(r, obj_table, n, + behavior, available); +} + /** * @internal Enqueue several objects on the ring * @@ -438,8 +853,14 @@ static __rte_always_inline unsigned int rte_ring_mp_enqueue_bulk(struct rte_ring *r, void * const *obj_table, unsigned int n, unsigned int *free_space) { - return __rte_ring_do_enqueue(r, obj_table, n, RTE_RING_QUEUE_FIXED, - __IS_MP, free_space); + if (r->flags & RING_F_NB) + return __rte_ring_do_nb_enqueue(r, obj_table, n, + RTE_RING_QUEUE_FIXED, __IS_MP, + free_space); + else + return __rte_ring_do_enqueue(r, obj_table, n, + RTE_RING_QUEUE_FIXED, __IS_MP, + free_space); } /** @@ -461,8 +882,14 @@ static __rte_always_inline unsigned int rte_ring_sp_enqueue_bulk(struct rte_ring *r, void * const *obj_table, unsigned int n, unsigned int *free_space) { - return __rte_ring_do_enqueue(r, obj_table, n, RTE_RING_QUEUE_FIXED, - __IS_SP, free_space); + if (r->flags & RING_F_NB) + return __rte_ring_do_nb_enqueue(r, obj_table, n, + RTE_RING_QUEUE_FIXED, __IS_SP, + free_space); + else + return __rte_ring_do_enqueue(r, obj_table, n, + RTE_RING_QUEUE_FIXED, __IS_SP, + free_space); } /** @@ -488,8 +915,14 @@ static __rte_always_inline unsigned int rte_ring_enqueue_bulk(struct rte_ring *r, void * const *obj_table, unsigned int n, unsigned int *free_space) { - return __rte_ring_do_enqueue(r, obj_table, n, RTE_RING_QUEUE_FIXED, - r->prod.single, free_space); + if (r->flags & RING_F_NB) + return __rte_ring_do_nb_enqueue(r, obj_table, n, + RTE_RING_QUEUE_FIXED, + r->prod_64.single, free_space); + else + return __rte_ring_do_enqueue(r, obj_table, n, + RTE_RING_QUEUE_FIXED, + r->prod.single, free_space); } /** @@ -572,8 +1005,14 @@ static __rte_always_inline unsigned int rte_ring_mc_dequeue_bulk(struct rte_ring *r, void **obj_table, unsigned int n, unsigned int *available) { - return __rte_ring_do_dequeue(r, obj_table, n, RTE_RING_QUEUE_FIXED, - __IS_MC, available); + if (r->flags & RING_F_NB) + return __rte_ring_do_nb_dequeue(r, obj_table, n, + RTE_RING_QUEUE_FIXED, __IS_MC, + available); + else + return __rte_ring_do_dequeue(r, obj_table, n, + RTE_RING_QUEUE_FIXED, __IS_MC, + available); } /** @@ -596,8 +1035,14 @@ static __rte_always_inline unsigned int rte_ring_sc_dequeue_bulk(struct rte_ring *r, void **obj_table, unsigned int n, unsigned int *available) { - return __rte_ring_do_dequeue(r, obj_table, n, RTE_RING_QUEUE_FIXED, - __IS_SC, available); + if (r->flags & RING_F_NB) + return __rte_ring_do_nb_dequeue(r, obj_table, n, + RTE_RING_QUEUE_FIXED, __IS_SC, + available); + else + return __rte_ring_do_dequeue(r, obj_table, n, + RTE_RING_QUEUE_FIXED, __IS_SC, + available); } /** @@ -623,8 +1068,14 @@ static __rte_always_inline unsigned int rte_ring_dequeue_bulk(struct rte_ring *r, void **obj_table, unsigned int n, unsigned int *available) { - return __rte_ring_do_dequeue(r, obj_table, n, RTE_RING_QUEUE_FIXED, - r->cons.single, available); + if (r->flags & RING_F_NB) + return __rte_ring_do_nb_dequeue(r, obj_table, n, + RTE_RING_QUEUE_FIXED, + r->cons_64.single, available); + else + return __rte_ring_do_dequeue(r, obj_table, n, + RTE_RING_QUEUE_FIXED, + r->cons.single, available); } /** @@ -699,9 +1150,13 @@ rte_ring_dequeue(struct rte_ring *r, 
void **obj_p) static inline unsigned rte_ring_count(const struct rte_ring *r) { - uint32_t prod_tail = r->prod.tail; - uint32_t cons_tail = r->cons.tail; - uint32_t count = (prod_tail - cons_tail) & r->mask; + uint32_t count; + + if (r->flags & RING_F_NB) + count = (r->prod_64.tail - r->cons_64.tail) & r->mask; + else + count = (r->prod.tail - r->cons.tail) & r->mask; + return (count > r->capacity) ? r->capacity : count; } @@ -821,8 +1276,14 @@ static __rte_always_inline unsigned rte_ring_mp_enqueue_burst(struct rte_ring *r, void * const *obj_table, unsigned int n, unsigned int *free_space) { - return __rte_ring_do_enqueue(r, obj_table, n, - RTE_RING_QUEUE_VARIABLE, __IS_MP, free_space); + if (r->flags & RING_F_NB) + return __rte_ring_do_nb_enqueue(r, obj_table, n, + RTE_RING_QUEUE_VARIABLE, + __IS_MP, free_space); + else + return __rte_ring_do_enqueue(r, obj_table, n, + RTE_RING_QUEUE_VARIABLE, + __IS_MP, free_space); } /** @@ -844,8 +1305,14 @@ static __rte_always_inline unsigned rte_ring_sp_enqueue_burst(struct rte_ring *r, void * const *obj_table, unsigned int n, unsigned int *free_space) { - return __rte_ring_do_enqueue(r, obj_table, n, - RTE_RING_QUEUE_VARIABLE, __IS_SP, free_space); + if (r->flags & RING_F_NB) + return __rte_ring_do_nb_enqueue(r, obj_table, n, + RTE_RING_QUEUE_VARIABLE, + __IS_SP, free_space); + else + return __rte_ring_do_enqueue(r, obj_table, n, + RTE_RING_QUEUE_VARIABLE, + __IS_SP, free_space); } /** @@ -871,8 +1338,14 @@ static __rte_always_inline unsigned rte_ring_enqueue_burst(struct rte_ring *r, void * const *obj_table, unsigned int n, unsigned int *free_space) { - return __rte_ring_do_enqueue(r, obj_table, n, RTE_RING_QUEUE_VARIABLE, - r->prod.single, free_space); + if (r->flags & RING_F_NB) + return __rte_ring_do_nb_enqueue(r, obj_table, n, + RTE_RING_QUEUE_VARIABLE, + r->prod_64.single, free_space); + else + return __rte_ring_do_enqueue(r, obj_table, n, + RTE_RING_QUEUE_VARIABLE, + r->prod.single, free_space); } /** @@ -899,8 +1372,14 @@ static __rte_always_inline unsigned rte_ring_mc_dequeue_burst(struct rte_ring *r, void **obj_table, unsigned int n, unsigned int *available) { - return __rte_ring_do_dequeue(r, obj_table, n, - RTE_RING_QUEUE_VARIABLE, __IS_MC, available); + if (r->flags & RING_F_NB) + return __rte_ring_do_nb_dequeue(r, obj_table, n, + RTE_RING_QUEUE_VARIABLE, + __IS_MC, available); + else + return __rte_ring_do_dequeue(r, obj_table, n, + RTE_RING_QUEUE_VARIABLE, + __IS_MC, available); } /** @@ -924,8 +1403,14 @@ static __rte_always_inline unsigned rte_ring_sc_dequeue_burst(struct rte_ring *r, void **obj_table, unsigned int n, unsigned int *available) { - return __rte_ring_do_dequeue(r, obj_table, n, - RTE_RING_QUEUE_VARIABLE, __IS_SC, available); + if (r->flags & RING_F_NB) + return __rte_ring_do_nb_dequeue(r, obj_table, n, + RTE_RING_QUEUE_VARIABLE, + __IS_SC, available); + else + return __rte_ring_do_dequeue(r, obj_table, n, + RTE_RING_QUEUE_VARIABLE, + __IS_SC, available); } /** @@ -951,9 +1436,14 @@ static __rte_always_inline unsigned rte_ring_dequeue_burst(struct rte_ring *r, void **obj_table, unsigned int n, unsigned int *available) { - return __rte_ring_do_dequeue(r, obj_table, n, - RTE_RING_QUEUE_VARIABLE, - r->cons.single, available); + if (r->flags & RING_F_NB) + return __rte_ring_do_nb_dequeue(r, obj_table, n, + RTE_RING_QUEUE_VARIABLE, + r->cons_64.single, available); + else + return __rte_ring_do_dequeue(r, obj_table, n, + RTE_RING_QUEUE_VARIABLE, + r->cons.single, available); } #ifdef __cplusplus diff --git 
a/lib/librte_ring/rte_ring_version.map b/lib/librte_ring/rte_ring_version.map index d935efd0d..8969467af 100644 --- a/lib/librte_ring/rte_ring_version.map +++ b/lib/librte_ring/rte_ring_version.map @@ -17,3 +17,10 @@ DPDK_2.2 { rte_ring_free; } DPDK_2.0; + +DPDK_19.05 { + global: + + rte_ring_get_memsize; + +} DPDK_2.2; From patchwork Fri Jan 18 15:23:24 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Eads, Gage" X-Patchwork-Id: 49956 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 380052C28; Fri, 18 Jan 2019 16:24:44 +0100 (CET) Received: from mga11.intel.com (mga11.intel.com [192.55.52.93]) by dpdk.org (Postfix) with ESMTP id 462832BA2 for ; Fri, 18 Jan 2019 16:24:36 +0100 (CET) X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga005.jf.intel.com ([10.7.209.41]) by fmsmga102.fm.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 18 Jan 2019 07:24:34 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.56,491,1539673200"; d="scan'208";a="292610284" Received: from txasoft-yocto.an.intel.com (HELO txasoft-yocto.an.intel.com.) ([10.123.72.192]) by orsmga005.jf.intel.com with ESMTP; 18 Jan 2019 07:24:34 -0800 From: Gage Eads To: dev@dpdk.org Cc: olivier.matz@6wind.com, arybchenko@solarflare.com, bruce.richardson@intel.com, konstantin.ananyev@intel.com, stephen@networkplumber.org Date: Fri, 18 Jan 2019 09:23:24 -0600 Message-Id: <20190118152326.22686-4-gage.eads@intel.com> X-Mailer: git-send-email 2.13.6 In-Reply-To: <20190118152326.22686-1-gage.eads@intel.com> References: <20190115235227.14013-1-gage.eads@intel.com> <20190118152326.22686-1-gage.eads@intel.com> Subject: [dpdk-dev] [PATCH v3 3/5] test_ring: add non-blocking ring autotest X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" ring_nb_autotest re-uses the ring_autotest code by wrapping its top-level function with one that takes a 'flags' argument. 
Signed-off-by: Gage Eads --- test/test/test_ring.c | 57 ++++++++++++++++++++++++++++++++------------------- 1 file changed, 36 insertions(+), 21 deletions(-) diff --git a/test/test/test_ring.c b/test/test/test_ring.c index aaf1e70ad..ff410d978 100644 --- a/test/test/test_ring.c +++ b/test/test/test_ring.c @@ -1,5 +1,5 @@ /* SPDX-License-Identifier: BSD-3-Clause - * Copyright(c) 2010-2014 Intel Corporation + * Copyright(c) 2010-2019 Intel Corporation */ #include @@ -601,18 +601,20 @@ test_ring_burst_basic(struct rte_ring *r) * it will always fail to create ring with a wrong ring size number in this function */ static int -test_ring_creation_with_wrong_size(void) +test_ring_creation_with_wrong_size(unsigned int flags) { struct rte_ring * rp = NULL; /* Test if ring size is not power of 2 */ - rp = rte_ring_create("test_bad_ring_size", RING_SIZE + 1, SOCKET_ID_ANY, 0); + rp = rte_ring_create("test_bad_ring_size", RING_SIZE + 1, + SOCKET_ID_ANY, flags); if (NULL != rp) { return -1; } /* Test if ring size is exceeding the limit */ - rp = rte_ring_create("test_bad_ring_size", (RTE_RING_SZ_MASK + 1), SOCKET_ID_ANY, 0); + rp = rte_ring_create("test_bad_ring_size", (RTE_RING_SZ_MASK + 1), + SOCKET_ID_ANY, flags); if (NULL != rp) { return -1; } @@ -623,11 +625,11 @@ test_ring_creation_with_wrong_size(void) * it tests if it would always fail to create ring with an used ring name */ static int -test_ring_creation_with_an_used_name(void) +test_ring_creation_with_an_used_name(unsigned int flags) { struct rte_ring * rp; - rp = rte_ring_create("test", RING_SIZE, SOCKET_ID_ANY, 0); + rp = rte_ring_create("test", RING_SIZE, SOCKET_ID_ANY, flags); if (NULL != rp) return -1; @@ -639,10 +641,10 @@ test_ring_creation_with_an_used_name(void) * function to fail correctly */ static int -test_create_count_odd(void) +test_create_count_odd(unsigned int flags) { struct rte_ring *r = rte_ring_create("test_ring_count", - 4097, SOCKET_ID_ANY, 0 ); + 4097, SOCKET_ID_ANY, flags); if(r != NULL){ return -1; } @@ -665,7 +667,7 @@ test_lookup_null(void) * it tests some more basic ring operations */ static int -test_ring_basic_ex(void) +test_ring_basic_ex(unsigned int flags) { int ret = -1; unsigned i; @@ -679,7 +681,7 @@ test_ring_basic_ex(void) } rp = rte_ring_create("test_ring_basic_ex", RING_SIZE, SOCKET_ID_ANY, - RING_F_SP_ENQ | RING_F_SC_DEQ); + RING_F_SP_ENQ | RING_F_SC_DEQ | flags); if (rp == NULL) { printf("test_ring_basic_ex fail to create ring\n"); goto fail_test; @@ -737,7 +739,7 @@ test_ring_basic_ex(void) } static int -test_ring_with_exact_size(void) +test_ring_with_exact_size(unsigned int flags) { struct rte_ring *std_ring = NULL, *exact_sz_ring = NULL; void *ptr_array[16]; @@ -746,13 +748,13 @@ test_ring_with_exact_size(void) int ret = -1; std_ring = rte_ring_create("std", ring_sz, rte_socket_id(), - RING_F_SP_ENQ | RING_F_SC_DEQ); + RING_F_SP_ENQ | RING_F_SC_DEQ | flags); if (std_ring == NULL) { printf("%s: error, can't create std ring\n", __func__); goto end; } exact_sz_ring = rte_ring_create("exact sz", ring_sz, rte_socket_id(), - RING_F_SP_ENQ | RING_F_SC_DEQ | RING_F_EXACT_SZ); + RING_F_SP_ENQ | RING_F_SC_DEQ | RING_F_EXACT_SZ | flags); if (exact_sz_ring == NULL) { printf("%s: error, can't create exact size ring\n", __func__); goto end; @@ -808,17 +810,17 @@ test_ring_with_exact_size(void) } static int -test_ring(void) +__test_ring(unsigned int flags) { struct rte_ring *r = NULL; /* some more basic operations */ - if (test_ring_basic_ex() < 0) + if (test_ring_basic_ex(flags) < 0) goto test_fail; 
rte_atomic32_init(&synchro); - r = rte_ring_create("test", RING_SIZE, SOCKET_ID_ANY, 0); + r = rte_ring_create("test", RING_SIZE, SOCKET_ID_ANY, flags); if (r == NULL) goto test_fail; @@ -837,27 +839,27 @@ test_ring(void) goto test_fail; /* basic operations */ - if ( test_create_count_odd() < 0){ + if (test_create_count_odd(flags) < 0) { printf("Test failed to detect odd count\n"); goto test_fail; } else printf("Test detected odd count\n"); - if ( test_lookup_null() < 0){ + if (test_lookup_null() < 0) { printf("Test failed to detect NULL ring lookup\n"); goto test_fail; } else printf("Test detected NULL ring lookup\n"); /* test of creating ring with wrong size */ - if (test_ring_creation_with_wrong_size() < 0) + if (test_ring_creation_with_wrong_size(flags) < 0) goto test_fail; /* test of creation ring with an used name */ - if (test_ring_creation_with_an_used_name() < 0) + if (test_ring_creation_with_an_used_name(flags) < 0) goto test_fail; - if (test_ring_with_exact_size() < 0) + if (test_ring_with_exact_size(flags) < 0) goto test_fail; /* dump the ring status */ @@ -873,4 +875,17 @@ test_ring(void) return -1; } +static int +test_ring(void) +{ + return __test_ring(0); +} + +static int +test_nb_ring(void) +{ + return __test_ring(RING_F_NB); +} + REGISTER_TEST_COMMAND(ring_autotest, test_ring); +REGISTER_TEST_COMMAND(ring_nb_autotest, test_nb_ring); From patchwork Fri Jan 18 15:23:25 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Eads, Gage" X-Patchwork-Id: 49955 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 211022C17; Fri, 18 Jan 2019 16:24:42 +0100 (CET) Received: from mga11.intel.com (mga11.intel.com [192.55.52.93]) by dpdk.org (Postfix) with ESMTP id 38CB52B9E for ; Fri, 18 Jan 2019 16:24:36 +0100 (CET) X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga005.jf.intel.com ([10.7.209.41]) by fmsmga102.fm.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 18 Jan 2019 07:24:35 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.56,491,1539673200"; d="scan'208";a="292610290" Received: from txasoft-yocto.an.intel.com (HELO txasoft-yocto.an.intel.com.) ([10.123.72.192]) by orsmga005.jf.intel.com with ESMTP; 18 Jan 2019 07:24:34 -0800 From: Gage Eads To: dev@dpdk.org Cc: olivier.matz@6wind.com, arybchenko@solarflare.com, bruce.richardson@intel.com, konstantin.ananyev@intel.com, stephen@networkplumber.org Date: Fri, 18 Jan 2019 09:23:25 -0600 Message-Id: <20190118152326.22686-5-gage.eads@intel.com> X-Mailer: git-send-email 2.13.6 In-Reply-To: <20190118152326.22686-1-gage.eads@intel.com> References: <20190115235227.14013-1-gage.eads@intel.com> <20190118152326.22686-1-gage.eads@intel.com> Subject: [dpdk-dev] [PATCH v3 4/5] test_ring_perf: add non-blocking ring perf test X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" nb_ring_perf_autotest re-uses the ring_perf_autotest code by wrapping its top-level function with one that takes a 'flags' argument. 
Signed-off-by: Gage Eads --- test/test/test_ring_perf.c | 19 ++++++++++++++++--- 1 file changed, 16 insertions(+), 3 deletions(-) diff --git a/test/test/test_ring_perf.c b/test/test/test_ring_perf.c index ebb3939f5..380c4b4a1 100644 --- a/test/test/test_ring_perf.c +++ b/test/test/test_ring_perf.c @@ -1,5 +1,5 @@ /* SPDX-License-Identifier: BSD-3-Clause - * Copyright(c) 2010-2014 Intel Corporation + * Copyright(c) 2010-2019 Intel Corporation */ @@ -363,12 +363,12 @@ test_bulk_enqueue_dequeue(struct rte_ring *r) } static int -test_ring_perf(void) +__test_ring_perf(unsigned int flags) { struct lcore_pair cores; struct rte_ring *r = NULL; - r = rte_ring_create(RING_NAME, RING_SIZE, rte_socket_id(), 0); + r = rte_ring_create(RING_NAME, RING_SIZE, rte_socket_id(), flags); if (r == NULL) return -1; @@ -398,4 +398,17 @@ test_ring_perf(void) return 0; } +static int +test_ring_perf(void) +{ + return __test_ring_perf(0); +} + +static int +test_nb_ring_perf(void) +{ + return __test_ring_perf(RING_F_NB); +} + REGISTER_TEST_COMMAND(ring_perf_autotest, test_ring_perf); +REGISTER_TEST_COMMAND(ring_nb_perf_autotest, test_nb_ring_perf); From patchwork Fri Jan 18 15:23:26 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Eads, Gage" X-Patchwork-Id: 49957 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id D6ED42C38; Fri, 18 Jan 2019 16:24:45 +0100 (CET) Received: from mga11.intel.com (mga11.intel.com [192.55.52.93]) by dpdk.org (Postfix) with ESMTP id CF3BF2BAA for ; Fri, 18 Jan 2019 16:24:36 +0100 (CET) X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga005.jf.intel.com ([10.7.209.41]) by fmsmga102.fm.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 18 Jan 2019 07:24:35 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.56,491,1539673200"; d="scan'208";a="292610298" Received: from txasoft-yocto.an.intel.com (HELO txasoft-yocto.an.intel.com.) ([10.123.72.192]) by orsmga005.jf.intel.com with ESMTP; 18 Jan 2019 07:24:35 -0800 From: Gage Eads To: dev@dpdk.org Cc: olivier.matz@6wind.com, arybchenko@solarflare.com, bruce.richardson@intel.com, konstantin.ananyev@intel.com, stephen@networkplumber.org Date: Fri, 18 Jan 2019 09:23:26 -0600 Message-Id: <20190118152326.22686-6-gage.eads@intel.com> X-Mailer: git-send-email 2.13.6 In-Reply-To: <20190118152326.22686-1-gage.eads@intel.com> References: <20190115235227.14013-1-gage.eads@intel.com> <20190118152326.22686-1-gage.eads@intel.com> Subject: [dpdk-dev] [PATCH v3 5/5] mempool/ring: add non-blocking ring handlers X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" These handlers allow an application to create a mempool based on the non-blocking ring, with any combination of single/multi producer/consumer. Also, add a note to the programmer's guide's "known issues" section. 
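A hedged sketch of how an application might select one of these handlers follows; the pool name, object size, and counts are placeholder values, and "ring_mp_mc_nb" is one of the handler names registered by this patch:

  #include <rte_mempool.h>
  #include <rte_lcore.h>

  static struct rte_mempool *
  create_nb_mempool(void)
  {
          struct rte_mempool *mp;

          /* Create an empty pool, attach the non-blocking MP/MC handler,
           * then populate it with objects.
           */
          mp = rte_mempool_create_empty("nb_pool", 8192, 2048,
                                        256, 0, rte_socket_id(), 0);
          if (mp == NULL)
                  return NULL;

          if (rte_mempool_set_ops_byname(mp, "ring_mp_mc_nb", NULL) < 0 ||
              rte_mempool_populate_default(mp) < 0) {
                  rte_mempool_free(mp);
                  return NULL;
          }

          return mp;
  }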
Signed-off-by: Gage Eads Acked-by: Andrew Rybchenko --- doc/guides/prog_guide/env_abstraction_layer.rst | 2 +- drivers/mempool/ring/Makefile | 1 + drivers/mempool/ring/meson.build | 2 + drivers/mempool/ring/rte_mempool_ring.c | 58 +++++++++++++++++++++++-- 4 files changed, 59 insertions(+), 4 deletions(-) diff --git a/doc/guides/prog_guide/env_abstraction_layer.rst b/doc/guides/prog_guide/env_abstraction_layer.rst index 9497b879c..b6ac236d6 100644 --- a/doc/guides/prog_guide/env_abstraction_layer.rst +++ b/doc/guides/prog_guide/env_abstraction_layer.rst @@ -541,7 +541,7 @@ Known Issues 5. It MUST not be used by multi-producer/consumer pthreads, whose scheduling policies are SCHED_FIFO or SCHED_RR. - Alternatively, x86_64 applications can use the non-blocking stack mempool handler. When considering this handler, note that: + Alternatively, x86_64 applications can use the non-blocking ring or stack mempool handlers. When considering one of them, note that: - it is limited to the x86_64 platform, because it uses an instruction (16-byte compare-and-swap) that is not available on other platforms. - it has worse average-case performance than the non-preemptive rte_ring, but software caching (e.g. the mempool cache) can mitigate this by reducing the number of handler operations. diff --git a/drivers/mempool/ring/Makefile b/drivers/mempool/ring/Makefile index ddab522fe..012ba6966 100644 --- a/drivers/mempool/ring/Makefile +++ b/drivers/mempool/ring/Makefile @@ -10,6 +10,7 @@ LIB = librte_mempool_ring.a CFLAGS += -O3 CFLAGS += $(WERROR_FLAGS) +CFLAGS += -DALLOW_EXPERIMENTAL_API LDLIBS += -lrte_eal -lrte_mempool -lrte_ring EXPORT_MAP := rte_mempool_ring_version.map diff --git a/drivers/mempool/ring/meson.build b/drivers/mempool/ring/meson.build index a021e908c..b1cb673cc 100644 --- a/drivers/mempool/ring/meson.build +++ b/drivers/mempool/ring/meson.build @@ -1,4 +1,6 @@ # SPDX-License-Identifier: BSD-3-Clause # Copyright(c) 2017 Intel Corporation +allow_experimental_apis = true + sources = files('rte_mempool_ring.c') diff --git a/drivers/mempool/ring/rte_mempool_ring.c b/drivers/mempool/ring/rte_mempool_ring.c index bc123fc52..013dac3bc 100644 --- a/drivers/mempool/ring/rte_mempool_ring.c +++ b/drivers/mempool/ring/rte_mempool_ring.c @@ -1,5 +1,5 @@ /* SPDX-License-Identifier: BSD-3-Clause - * Copyright(c) 2010-2016 Intel Corporation + * Copyright(c) 2010-2019 Intel Corporation */ #include @@ -47,11 +47,11 @@ common_ring_get_count(const struct rte_mempool *mp) static int -common_ring_alloc(struct rte_mempool *mp) +__common_ring_alloc(struct rte_mempool *mp, int rg_flags) { - int rg_flags = 0, ret; char rg_name[RTE_RING_NAMESIZE]; struct rte_ring *r; + int ret; ret = snprintf(rg_name, sizeof(rg_name), RTE_MEMPOOL_MZ_FORMAT, mp->name); @@ -82,6 +82,18 @@ common_ring_alloc(struct rte_mempool *mp) return 0; } +static int +common_ring_alloc(struct rte_mempool *mp) +{ + return __common_ring_alloc(mp, 0); +} + +static int +common_ring_alloc_nb(struct rte_mempool *mp) +{ + return __common_ring_alloc(mp, RING_F_NB); +} + static void common_ring_free(struct rte_mempool *mp) { @@ -130,7 +142,47 @@ static const struct rte_mempool_ops ops_sp_mc = { .get_count = common_ring_get_count, }; +static const struct rte_mempool_ops ops_mp_mc_nb = { + .name = "ring_mp_mc_nb", + .alloc = common_ring_alloc_nb, + .free = common_ring_free, + .enqueue = common_ring_mp_enqueue, + .dequeue = common_ring_mc_dequeue, + .get_count = common_ring_get_count, +}; + +static const struct rte_mempool_ops ops_sp_sc_nb = { + .name = 
"ring_sp_sc_nb", + .alloc = common_ring_alloc_nb, + .free = common_ring_free, + .enqueue = common_ring_sp_enqueue, + .dequeue = common_ring_sc_dequeue, + .get_count = common_ring_get_count, +}; + +static const struct rte_mempool_ops ops_mp_sc_nb = { + .name = "ring_mp_sc_nb", + .alloc = common_ring_alloc_nb, + .free = common_ring_free, + .enqueue = common_ring_mp_enqueue, + .dequeue = common_ring_sc_dequeue, + .get_count = common_ring_get_count, +}; + +static const struct rte_mempool_ops ops_sp_mc_nb = { + .name = "ring_sp_mc_nb", + .alloc = common_ring_alloc_nb, + .free = common_ring_free, + .enqueue = common_ring_sp_enqueue, + .dequeue = common_ring_mc_dequeue, + .get_count = common_ring_get_count, +}; + MEMPOOL_REGISTER_OPS(ops_mp_mc); MEMPOOL_REGISTER_OPS(ops_sp_sc); MEMPOOL_REGISTER_OPS(ops_mp_sc); MEMPOOL_REGISTER_OPS(ops_sp_mc); +MEMPOOL_REGISTER_OPS(ops_mp_mc_nb); +MEMPOOL_REGISTER_OPS(ops_sp_sc_nb); +MEMPOOL_REGISTER_OPS(ops_mp_sc_nb); +MEMPOOL_REGISTER_OPS(ops_sp_mc_nb);