From patchwork Fri Dec 20 04:45:08 2019
X-Patchwork-Submitter: Honnappa Nagarahalli
X-Patchwork-Id: 64048
X-Patchwork-Delegate: david.marchand@redhat.com
From: Honnappa Nagarahalli
To: olivier.matz@6wind.com, sthemmin@microsoft.com, jerinj@marvell.com, bruce.richardson@intel.com, david.marchand@redhat.com, pbhagavatula@marvell.com, konstantin.ananyev@intel.com, honnappa.nagarahalli@arm.com
Cc: dev@dpdk.org, dharmik.thakkar@arm.com, ruifeng.wang@arm.com, gavin.hu@arm.com, nd@arm.com
Date: Thu, 19 Dec 2019 22:45:08 -0600
Message-Id: <20191220044524.32910-2-honnappa.nagarahalli@arm.com>
In-Reply-To: <20191220044524.32910-1-honnappa.nagarahalli@arm.com>
References: <20190906190510.11146-1-honnappa.nagarahalli@arm.com> <20191220044524.32910-1-honnappa.nagarahalli@arm.com>
Subject: [dpdk-dev] [PATCH v7 01/17] test/ring: use division for cycle count calculation

Use division instead of a bit shift to calculate a more accurate cycle count.
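As a standalone illustration of why this matters (not part of the patch; the iter_shift value and cycle delta below are hypothetical), note that the test derives its iteration count as 1 << iter_shift, so the old right shift is an integer division that throws away the fractional cycles per operation, while the floating-point division keeps them:

#include <stdio.h>
#include <inttypes.h>

int main(void)
{
	const unsigned int iter_shift = 23;                /* hypothetical */
	const uint64_t iterations = 1ULL << iter_shift;
	const uint64_t cycles = iterations * 5 + 3000000;  /* hypothetical rdtsc delta */

	/* old style: the shift truncates to a whole number of cycles */
	uint64_t avg_shift = cycles >> iter_shift;
	/* new style: double division preserves the fraction */
	double avg_div = (double)cycles / iterations;

	printf("shift: %" PRIu64 " cycles/op, division: %.2f cycles/op\n",
	       avg_shift, avg_div);
	return 0;
}

For the same delta this prints 5 cycles/op with the shift and 5.36 cycles/op with the division.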
Signed-off-by: Honnappa Nagarahalli Acked-by: Olivier Matz --- app/test/test_ring_perf.c | 22 ++++++++++++---------- 1 file changed, 12 insertions(+), 10 deletions(-) diff --git a/app/test/test_ring_perf.c b/app/test/test_ring_perf.c index 70ee46ffe..6c2aca483 100644 --- a/app/test/test_ring_perf.c +++ b/app/test/test_ring_perf.c @@ -357,10 +357,10 @@ test_single_enqueue_dequeue(struct rte_ring *r) } const uint64_t mc_end = rte_rdtsc(); - printf("SP/SC single enq/dequeue: %"PRIu64"\n", - (sc_end-sc_start) >> iter_shift); - printf("MP/MC single enq/dequeue: %"PRIu64"\n", - (mc_end-mc_start) >> iter_shift); + printf("SP/SC single enq/dequeue: %.2F\n", + ((double)(sc_end-sc_start)) / iterations); + printf("MP/MC single enq/dequeue: %.2F\n", + ((double)(mc_end-mc_start)) / iterations); } /* @@ -395,13 +395,15 @@ test_burst_enqueue_dequeue(struct rte_ring *r) } const uint64_t mc_end = rte_rdtsc(); - uint64_t mc_avg = ((mc_end-mc_start) >> iter_shift) / bulk_sizes[sz]; - uint64_t sc_avg = ((sc_end-sc_start) >> iter_shift) / bulk_sizes[sz]; + double mc_avg = ((double)(mc_end-mc_start) / iterations) / + bulk_sizes[sz]; + double sc_avg = ((double)(sc_end-sc_start) / iterations) / + bulk_sizes[sz]; - printf("SP/SC burst enq/dequeue (size: %u): %"PRIu64"\n", bulk_sizes[sz], - sc_avg); - printf("MP/MC burst enq/dequeue (size: %u): %"PRIu64"\n", bulk_sizes[sz], - mc_avg); + printf("SP/SC burst enq/dequeue (size: %u): %.2F\n", + bulk_sizes[sz], sc_avg); + printf("MP/MC burst enq/dequeue (size: %u): %.2F\n", + bulk_sizes[sz], mc_avg); } } From patchwork Fri Dec 20 04:45:09 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Honnappa Nagarahalli X-Patchwork-Id: 64050 X-Patchwork-Delegate: david.marchand@redhat.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id 6B7C9A04F3; Fri, 20 Dec 2019 05:46:17 +0100 (CET) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id A55861BF80; Fri, 20 Dec 2019 05:46:04 +0100 (CET) Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by dpdk.org (Postfix) with ESMTP id 33EE51BF74 for ; Fri, 20 Dec 2019 05:45:58 +0100 (CET) Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id E8B5E31B; Thu, 19 Dec 2019 20:45:56 -0800 (PST) Received: from qc2400f-1.austin.arm.com (qc2400f-1.austin.arm.com [10.118.14.48]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id C8ADD3F718; Thu, 19 Dec 2019 20:45:56 -0800 (PST) From: Honnappa Nagarahalli To: olivier.matz@6wind.com, sthemmin@microsoft.com, jerinj@marvell.com, bruce.richardson@intel.com, david.marchand@redhat.com, pbhagavatula@marvell.com, konstantin.ananyev@intel.com, honnappa.nagarahalli@arm.com Cc: dev@dpdk.org, dharmik.thakkar@arm.com, ruifeng.wang@arm.com, gavin.hu@arm.com, nd@arm.com Date: Thu, 19 Dec 2019 22:45:09 -0600 Message-Id: <20191220044524.32910-3-honnappa.nagarahalli@arm.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20191220044524.32910-1-honnappa.nagarahalli@arm.com> References: <20190906190510.11146-1-honnappa.nagarahalli@arm.com> <20191220044524.32910-1-honnappa.nagarahalli@arm.com> Subject: [dpdk-dev] [PATCH v7 02/17] lib/ring: apis to support configurable element size X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions 
List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Current APIs assume ring elements to be pointers. However, in many use cases, the size can be different. Add new APIs to support configurable ring element sizes. Signed-off-by: Honnappa Nagarahalli Reviewed-by: Dharmik Thakkar Reviewed-by: Gavin Hu Reviewed-by: Ruifeng Wang --- lib/librte_ring/Makefile | 3 +- lib/librte_ring/meson.build | 4 + lib/librte_ring/rte_ring.c | 41 +- lib/librte_ring/rte_ring.h | 1 + lib/librte_ring/rte_ring_elem.h | 1002 ++++++++++++++++++++++++++ lib/librte_ring/rte_ring_version.map | 2 + 6 files changed, 1044 insertions(+), 9 deletions(-) create mode 100644 lib/librte_ring/rte_ring_elem.h diff --git a/lib/librte_ring/Makefile b/lib/librte_ring/Makefile index 22454b084..917c560ad 100644 --- a/lib/librte_ring/Makefile +++ b/lib/librte_ring/Makefile @@ -6,7 +6,7 @@ include $(RTE_SDK)/mk/rte.vars.mk # library name LIB = librte_ring.a -CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR) -O3 +CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR) -O3 -DALLOW_EXPERIMENTAL_API LDLIBS += -lrte_eal EXPORT_MAP := rte_ring_version.map @@ -16,6 +16,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_RING) := rte_ring.c # install includes SYMLINK-$(CONFIG_RTE_LIBRTE_RING)-include := rte_ring.h \ + rte_ring_elem.h \ rte_ring_generic.h \ rte_ring_c11_mem.h diff --git a/lib/librte_ring/meson.build b/lib/librte_ring/meson.build index ca8a435e9..f2f3ccc88 100644 --- a/lib/librte_ring/meson.build +++ b/lib/librte_ring/meson.build @@ -3,5 +3,9 @@ sources = files('rte_ring.c') headers = files('rte_ring.h', + 'rte_ring_elem.h', 'rte_ring_c11_mem.h', 'rte_ring_generic.h') + +# rte_ring_create_elem and rte_ring_get_memsize_elem are experimental +allow_experimental_apis = true diff --git a/lib/librte_ring/rte_ring.c b/lib/librte_ring/rte_ring.c index d9b308036..3e15dc398 100644 --- a/lib/librte_ring/rte_ring.c +++ b/lib/librte_ring/rte_ring.c @@ -33,6 +33,7 @@ #include #include "rte_ring.h" +#include "rte_ring_elem.h" TAILQ_HEAD(rte_ring_list, rte_tailq_entry); @@ -46,23 +47,38 @@ EAL_REGISTER_TAILQ(rte_ring_tailq) /* return the size of memory occupied by a ring */ ssize_t -rte_ring_get_memsize(unsigned count) +rte_ring_get_memsize_elem(unsigned int esize, unsigned int count) { ssize_t sz; + /* Check if element size is a multiple of 4B */ + if (esize % 4 != 0) { + RTE_LOG(ERR, RING, "element size is not a multiple of 4\n"); + + return -EINVAL; + } + /* count must be a power of 2 */ if ((!POWEROF2(count)) || (count > RTE_RING_SZ_MASK )) { RTE_LOG(ERR, RING, - "Requested size is invalid, must be power of 2, and " - "do not exceed the size limit %u\n", RTE_RING_SZ_MASK); + "Requested number of elements is invalid, must be power of 2, and not exceed %u\n", + RTE_RING_SZ_MASK); + return -EINVAL; } - sz = sizeof(struct rte_ring) + count * sizeof(void *); + sz = sizeof(struct rte_ring) + count * esize; sz = RTE_ALIGN(sz, RTE_CACHE_LINE_SIZE); return sz; } +/* return the size of memory occupied by a ring */ +ssize_t +rte_ring_get_memsize(unsigned count) +{ + return rte_ring_get_memsize_elem(sizeof(void *), count); +} + void rte_ring_reset(struct rte_ring *r) { @@ -114,10 +130,10 @@ rte_ring_init(struct rte_ring *r, const char *name, unsigned count, return 0; } -/* create the ring */ +/* create the ring for a given element size */ struct rte_ring * -rte_ring_create(const char *name, unsigned count, int socket_id, - unsigned flags) +rte_ring_create_elem(const char *name, unsigned int esize, unsigned int count, + int socket_id, 
unsigned int flags) { char mz_name[RTE_MEMZONE_NAMESIZE]; struct rte_ring *r; @@ -135,7 +151,7 @@ rte_ring_create(const char *name, unsigned count, int socket_id, if (flags & RING_F_EXACT_SZ) count = rte_align32pow2(count + 1); - ring_size = rte_ring_get_memsize(count); + ring_size = rte_ring_get_memsize_elem(esize, count); if (ring_size < 0) { rte_errno = ring_size; return NULL; @@ -182,6 +198,15 @@ rte_ring_create(const char *name, unsigned count, int socket_id, return r; } +/* create the ring */ +struct rte_ring * +rte_ring_create(const char *name, unsigned count, int socket_id, + unsigned flags) +{ + return rte_ring_create_elem(name, sizeof(void *), count, socket_id, + flags); +} + /* free the ring */ void rte_ring_free(struct rte_ring *r) diff --git a/lib/librte_ring/rte_ring.h b/lib/librte_ring/rte_ring.h index 2a9f768a1..18fc5d845 100644 --- a/lib/librte_ring/rte_ring.h +++ b/lib/librte_ring/rte_ring.h @@ -216,6 +216,7 @@ int rte_ring_init(struct rte_ring *r, const char *name, unsigned count, */ struct rte_ring *rte_ring_create(const char *name, unsigned count, int socket_id, unsigned flags); + /** * De-allocate all memory used by the ring. * diff --git a/lib/librte_ring/rte_ring_elem.h b/lib/librte_ring/rte_ring_elem.h new file mode 100644 index 000000000..fc7fe127c --- /dev/null +++ b/lib/librte_ring/rte_ring_elem.h @@ -0,0 +1,1002 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * + * Copyright (c) 2019 Arm Limited + * Copyright (c) 2010-2017 Intel Corporation + * Copyright (c) 2007-2009 Kip Macy kmacy@freebsd.org + * All rights reserved. + * Derived from FreeBSD's bufring.h + * Used as BSD-3 Licensed with permission from Kip Macy. + */ + +#ifndef _RTE_RING_ELEM_H_ +#define _RTE_RING_ELEM_H_ + +/** + * @file + * RTE Ring with user defined element size + */ + +#ifdef __cplusplus +extern "C" { +#endif + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include "rte_ring.h" + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice + * + * Calculate the memory size needed for a ring with given element size + * + * This function returns the number of bytes needed for a ring, given + * the number of elements in it and the size of the element. This value + * is the sum of the size of the structure rte_ring and the size of the + * memory needed for storing the elements. The value is aligned to a cache + * line size. + * + * @param esize + * The size of ring element, in bytes. It must be a multiple of 4. + * @param count + * The number of elements in the ring (must be a power of 2). + * @return + * - The memory size needed for the ring on success. + * - -EINVAL - esize is not a multiple of 4 or count provided is not a + * power of 2. + */ +__rte_experimental +ssize_t rte_ring_get_memsize_elem(unsigned int esize, unsigned int count); + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice + * + * Create a new ring named *name* that stores elements with given size. + * + * This function uses ``memzone_reserve()`` to allocate memory. Then it + * calls rte_ring_init() to initialize an empty ring. + * + * The new ring size is set to *count*, which must be a power of + * two. Water marking is disabled by default. The real usable ring size + * is *count-1* instead of *count* to differentiate a free ring from an + * empty ring. + * + * The ring is added in RTE_TAILQ_RING list. + * + * @param name + * The name of the ring. 
+ * @param esize + * The size of ring element, in bytes. It must be a multiple of 4. + * @param count + * The number of elements in the ring (must be a power of 2). + * @param socket_id + * The *socket_id* argument is the socket identifier in case of + * NUMA. The value can be *SOCKET_ID_ANY* if there is no NUMA + * constraint for the reserved zone. + * @param flags + * An OR of the following: + * - RING_F_SP_ENQ: If this flag is set, the default behavior when + * using ``rte_ring_enqueue()`` or ``rte_ring_enqueue_bulk()`` + * is "single-producer". Otherwise, it is "multi-producers". + * - RING_F_SC_DEQ: If this flag is set, the default behavior when + * using ``rte_ring_dequeue()`` or ``rte_ring_dequeue_bulk()`` + * is "single-consumer". Otherwise, it is "multi-consumers". + * @return + * On success, the pointer to the new allocated ring. NULL on error with + * rte_errno set appropriately. Possible errno values include: + * - E_RTE_NO_CONFIG - function could not get pointer to rte_config structure + * - E_RTE_SECONDARY - function was called from a secondary process instance + * - EINVAL - esize is not a multiple of 4 or count provided is not a + * power of 2. + * - ENOSPC - the maximum number of memzones has already been allocated + * - EEXIST - a memzone with the same name already exists + * - ENOMEM - no appropriate memory area found in which to create memzone + */ +__rte_experimental +struct rte_ring *rte_ring_create_elem(const char *name, unsigned int esize, + unsigned int count, int socket_id, unsigned int flags); + +static __rte_always_inline void +enqueue_elems_32(struct rte_ring *r, uint32_t idx, + const void *obj_table, uint32_t n) +{ + unsigned int i; + const uint32_t size = r->size; + uint32_t *ring = (uint32_t *)&r[1]; + const uint32_t *obj = (const uint32_t *)obj_table; + if (likely(idx + n < size)) { + for (i = 0; i < (n & ~0x7); i += 8, idx += 8) { + ring[idx] = obj[i]; + ring[idx + 1] = obj[i + 1]; + ring[idx + 2] = obj[i + 2]; + ring[idx + 3] = obj[i + 3]; + ring[idx + 4] = obj[i + 4]; + ring[idx + 5] = obj[i + 5]; + ring[idx + 6] = obj[i + 6]; + ring[idx + 7] = obj[i + 7]; + } + switch (n & 0x7) { + case 7: + ring[idx++] = obj[i++]; /* fallthrough */ + case 6: + ring[idx++] = obj[i++]; /* fallthrough */ + case 5: + ring[idx++] = obj[i++]; /* fallthrough */ + case 4: + ring[idx++] = obj[i++]; /* fallthrough */ + case 3: + ring[idx++] = obj[i++]; /* fallthrough */ + case 2: + ring[idx++] = obj[i++]; /* fallthrough */ + case 1: + ring[idx++] = obj[i++]; /* fallthrough */ + } + } else { + for (i = 0; idx < size; i++, idx++) + ring[idx] = obj[i]; + /* Start at the beginning */ + for (idx = 0; i < n; i++, idx++) + ring[idx] = obj[i]; + } +} + +static __rte_always_inline void +enqueue_elems_64(struct rte_ring *r, uint32_t prod_head, + const void *obj_table, uint32_t n) +{ + unsigned int i; + const uint32_t size = r->size; + uint32_t idx = prod_head & r->mask; + uint64_t *ring = (uint64_t *)&r[1]; + const uint64_t *obj = (const uint64_t *)obj_table; + if (likely(idx + n < size)) { + for (i = 0; i < (n & ~0x3); i += 4, idx += 4) { + ring[idx] = obj[i]; + ring[idx + 1] = obj[i + 1]; + ring[idx + 2] = obj[i + 2]; + ring[idx + 3] = obj[i + 3]; + } + switch (n & 0x3) { + case 3: + ring[idx++] = obj[i++]; /* fallthrough */ + case 2: + ring[idx++] = obj[i++]; /* fallthrough */ + case 1: + ring[idx++] = obj[i++]; + } + } else { + for (i = 0; idx < size; i++, idx++) + ring[idx] = obj[i]; + /* Start at the beginning */ + for (idx = 0; i < n; i++, idx++) + ring[idx] = obj[i]; + } +} + 
+static __rte_always_inline void +enqueue_elems_128(struct rte_ring *r, uint32_t prod_head, + const void *obj_table, uint32_t n) +{ + unsigned int i; + const uint32_t size = r->size; + uint32_t idx = prod_head & r->mask; + __uint128_t *ring = (__uint128_t *)&r[1]; + const __uint128_t *obj = (const __uint128_t *)obj_table; + if (likely(idx + n < size)) { + for (i = 0; i < (n & ~0x1); i += 2, idx += 2) { + ring[idx] = obj[i]; + ring[idx + 1] = obj[i + 1]; + } + switch (n & 0x1) { + case 1: + ring[idx++] = obj[i++]; + } + } else { + for (i = 0; idx < size; i++, idx++) + ring[idx] = obj[i]; + /* Start at the beginning */ + for (idx = 0; i < n; i++, idx++) + ring[idx] = obj[i]; + } +} + +/* the actual enqueue of elements on the ring. + * Placed here since identical code needed in both + * single and multi producer enqueue functions. + */ +static __rte_always_inline void +enqueue_elems(struct rte_ring *r, uint32_t prod_head, const void *obj_table, + uint32_t esize, uint32_t num) +{ + uint32_t idx, nr_idx, nr_num; + + /* 8B and 16B copies implemented individually to retain + * the current performance. + */ + if (esize == 8) + enqueue_elems_64(r, prod_head, obj_table, num); + else if (esize == 16) + enqueue_elems_128(r, prod_head, obj_table, num); + else { + /* Normalize to uint32_t */ + uint32_t scale = esize / sizeof(uint32_t); + nr_num = num * scale; + idx = prod_head & r->mask; + nr_idx = idx * scale; + enqueue_elems_32(r, nr_idx, obj_table, nr_num); + } +} + +static __rte_always_inline void +dequeue_elems_32(struct rte_ring *r, uint32_t idx, + void *obj_table, uint32_t n) +{ + unsigned int i; + const uint32_t size = r->size; + uint32_t *ring = (uint32_t *)&r[1]; + uint32_t *obj = (uint32_t *)obj_table; + if (likely(idx + n < size)) { + for (i = 0; i < (n & ~0x7); i += 8, idx += 8) { + obj[i] = ring[idx]; + obj[i + 1] = ring[idx + 1]; + obj[i + 2] = ring[idx + 2]; + obj[i + 3] = ring[idx + 3]; + obj[i + 4] = ring[idx + 4]; + obj[i + 5] = ring[idx + 5]; + obj[i + 6] = ring[idx + 6]; + obj[i + 7] = ring[idx + 7]; + } + switch (n & 0x7) { + case 7: + obj[i++] = ring[idx++]; /* fallthrough */ + case 6: + obj[i++] = ring[idx++]; /* fallthrough */ + case 5: + obj[i++] = ring[idx++]; /* fallthrough */ + case 4: + obj[i++] = ring[idx++]; /* fallthrough */ + case 3: + obj[i++] = ring[idx++]; /* fallthrough */ + case 2: + obj[i++] = ring[idx++]; /* fallthrough */ + case 1: + obj[i++] = ring[idx++]; /* fallthrough */ + } + } else { + for (i = 0; idx < size; i++, idx++) + obj[i] = ring[idx]; + /* Start at the beginning */ + for (idx = 0; i < n; i++, idx++) + obj[i] = ring[idx]; + } +} + +static __rte_always_inline void +dequeue_elems_64(struct rte_ring *r, uint32_t prod_head, + void *obj_table, uint32_t n) +{ + unsigned int i; + const uint32_t size = r->size; + uint32_t idx = prod_head & r->mask; + uint64_t *ring = (uint64_t *)&r[1]; + uint64_t *obj = (uint64_t *)obj_table; + if (likely(idx + n < size)) { + for (i = 0; i < (n & ~0x3); i += 4, idx += 4) { + obj[i] = ring[idx]; + obj[i + 1] = ring[idx + 1]; + obj[i + 2] = ring[idx + 2]; + obj[i + 3] = ring[idx + 3]; + } + switch (n & 0x3) { + case 3: + obj[i++] = ring[idx++]; /* fallthrough */ + case 2: + obj[i++] = ring[idx++]; /* fallthrough */ + case 1: + obj[i++] = ring[idx++]; /* fallthrough */ + } + } else { + for (i = 0; idx < size; i++, idx++) + obj[i] = ring[idx]; + /* Start at the beginning */ + for (idx = 0; i < n; i++, idx++) + obj[i] = ring[idx]; + } +} + +static __rte_always_inline void +dequeue_elems_128(struct rte_ring *r, uint32_t 
prod_head, + void *obj_table, uint32_t n) +{ + unsigned int i; + const uint32_t size = r->size; + uint32_t idx = prod_head & r->mask; + __uint128_t *ring = (__uint128_t *)&r[1]; + __uint128_t *obj = (__uint128_t *)obj_table; + if (likely(idx + n < size)) { + for (i = 0; i < (n & ~0x1); i += 2, idx += 2) { + obj[i] = ring[idx]; + obj[i + 1] = ring[idx + 1]; + } + switch (n & 0x1) { + case 1: + obj[i++] = ring[idx++]; /* fallthrough */ + } + } else { + for (i = 0; idx < size; i++, idx++) + obj[i] = ring[idx]; + /* Start at the beginning */ + for (idx = 0; i < n; i++, idx++) + obj[i] = ring[idx]; + } +} + +/* the actual dequeue of elements from the ring. + * Placed here since identical code needed in both + * single and multi producer enqueue functions. + */ +static __rte_always_inline void +dequeue_elems(struct rte_ring *r, uint32_t cons_head, void *obj_table, + uint32_t esize, uint32_t num) +{ + uint32_t idx, nr_idx, nr_num; + + /* 8B and 16B copies implemented individually to retain + * the current performance. + */ + if (esize == 8) + dequeue_elems_64(r, cons_head, obj_table, num); + else if (esize == 16) + dequeue_elems_128(r, cons_head, obj_table, num); + else { + /* Normalize to uint32_t */ + uint32_t scale = esize / sizeof(uint32_t); + nr_num = num * scale; + idx = cons_head & r->mask; + nr_idx = idx * scale; + dequeue_elems_32(r, nr_idx, obj_table, nr_num); + } +} + +/* Between load and load. there might be cpu reorder in weak model + * (powerpc/arm). + * There are 2 choices for the users + * 1.use rmb() memory barrier + * 2.use one-direction load_acquire/store_release barrier,defined by + * CONFIG_RTE_USE_C11_MEM_MODEL=y + * It depends on performance test results. + * By default, move common functions to rte_ring_generic.h + */ +#ifdef RTE_USE_C11_MEM_MODEL +#include "rte_ring_c11_mem.h" +#else +#include "rte_ring_generic.h" +#endif + +/** + * @internal Enqueue several objects on the ring + * + * @param r + * A pointer to the ring structure. + * @param obj_table + * A pointer to a table of void * pointers (objects). + * @param esize + * The size of ring element, in bytes. It must be a multiple of 4. + * This must be the same value used while creating the ring. Otherwise + * the results are undefined. + * @param n + * The number of objects to add in the ring from the obj_table. + * @param behavior + * RTE_RING_QUEUE_FIXED: Enqueue a fixed number of items from a ring + * RTE_RING_QUEUE_VARIABLE: Enqueue as many items as possible from ring + * @param is_sp + * Indicates whether to use single producer or multi-producer head update + * @param free_space + * returns the amount of space after the enqueue operation has finished + * @return + * Actual number of objects enqueued. + * If behavior == RTE_RING_QUEUE_FIXED, this will be 0 or n only. + */ +static __rte_always_inline unsigned int +__rte_ring_do_enqueue_elem(struct rte_ring *r, const void *obj_table, + unsigned int esize, unsigned int n, + enum rte_ring_queue_behavior behavior, unsigned int is_sp, + unsigned int *free_space) +{ + uint32_t prod_head, prod_next; + uint32_t free_entries; + + n = __rte_ring_move_prod_head(r, is_sp, n, behavior, + &prod_head, &prod_next, &free_entries); + if (n == 0) + goto end; + + enqueue_elems(r, prod_head, obj_table, esize, n); + + update_tail(&r->prod, prod_head, prod_next, is_sp, 1); +end: + if (free_space != NULL) + *free_space = free_entries - n; + return n; +} + +/** + * @internal Dequeue several objects from the ring + * + * @param r + * A pointer to the ring structure. 
+ * @param obj_table + * A pointer to a table of void * pointers (objects). + * @param esize + * The size of ring element, in bytes. It must be a multiple of 4. + * This must be the same value used while creating the ring. Otherwise + * the results are undefined. + * @param n + * The number of objects to pull from the ring. + * @param behavior + * RTE_RING_QUEUE_FIXED: Dequeue a fixed number of items from a ring + * RTE_RING_QUEUE_VARIABLE: Dequeue as many items as possible from ring + * @param is_sc + * Indicates whether to use single consumer or multi-consumer head update + * @param available + * returns the number of remaining ring entries after the dequeue has finished + * @return + * - Actual number of objects dequeued. + * If behavior == RTE_RING_QUEUE_FIXED, this will be 0 or n only. + */ +static __rte_always_inline unsigned int +__rte_ring_do_dequeue_elem(struct rte_ring *r, void *obj_table, + unsigned int esize, unsigned int n, + enum rte_ring_queue_behavior behavior, unsigned int is_sc, + unsigned int *available) +{ + uint32_t cons_head, cons_next; + uint32_t entries; + + n = __rte_ring_move_cons_head(r, (int)is_sc, n, behavior, + &cons_head, &cons_next, &entries); + if (n == 0) + goto end; + + dequeue_elems(r, cons_head, obj_table, esize, n); + + update_tail(&r->cons, cons_head, cons_next, is_sc, 0); + +end: + if (available != NULL) + *available = entries - n; + return n; +} + +/** + * Enqueue several objects on the ring (multi-producers safe). + * + * This function uses a "compare and set" instruction to move the + * producer index atomically. + * + * @param r + * A pointer to the ring structure. + * @param obj_table + * A pointer to a table of void * pointers (objects). + * @param esize + * The size of ring element, in bytes. It must be a multiple of 4. + * This must be the same value used while creating the ring. Otherwise + * the results are undefined. + * @param n + * The number of objects to add in the ring from the obj_table. + * @param free_space + * if non-NULL, returns the amount of space in the ring after the + * enqueue operation has finished. + * @return + * The number of objects enqueued, either 0 or n + */ +static __rte_always_inline unsigned int +rte_ring_mp_enqueue_bulk_elem(struct rte_ring *r, const void *obj_table, + unsigned int esize, unsigned int n, unsigned int *free_space) +{ + return __rte_ring_do_enqueue_elem(r, obj_table, esize, n, + RTE_RING_QUEUE_FIXED, __IS_MP, free_space); +} + +/** + * Enqueue several objects on a ring + * + * @warning This API is NOT multi-producers safe + * + * @param r + * A pointer to the ring structure. + * @param obj_table + * A pointer to a table of void * pointers (objects). + * @param esize + * The size of ring element, in bytes. It must be a multiple of 4. + * This must be the same value used while creating the ring. Otherwise + * the results are undefined. + * @param n + * The number of objects to add in the ring from the obj_table. + * @param free_space + * if non-NULL, returns the amount of space in the ring after the + * enqueue operation has finished. + * @return + * The number of objects enqueued, either 0 or n + */ +static __rte_always_inline unsigned int +rte_ring_sp_enqueue_bulk_elem(struct rte_ring *r, const void *obj_table, + unsigned int esize, unsigned int n, unsigned int *free_space) +{ + return __rte_ring_do_enqueue_elem(r, obj_table, esize, n, + RTE_RING_QUEUE_FIXED, __IS_SP, free_space); +} + +/** + * Enqueue several objects on a ring. 
+ * + * This function calls the multi-producer or the single-producer + * version depending on the default behavior that was specified at + * ring creation time (see flags). + * + * @param r + * A pointer to the ring structure. + * @param obj_table + * A pointer to a table of void * pointers (objects). + * @param esize + * The size of ring element, in bytes. It must be a multiple of 4. + * This must be the same value used while creating the ring. Otherwise + * the results are undefined. + * @param n + * The number of objects to add in the ring from the obj_table. + * @param free_space + * if non-NULL, returns the amount of space in the ring after the + * enqueue operation has finished. + * @return + * The number of objects enqueued, either 0 or n + */ +static __rte_always_inline unsigned int +rte_ring_enqueue_bulk_elem(struct rte_ring *r, const void *obj_table, + unsigned int esize, unsigned int n, unsigned int *free_space) +{ + return __rte_ring_do_enqueue_elem(r, obj_table, esize, n, + RTE_RING_QUEUE_FIXED, r->prod.single, free_space); +} + +/** + * Enqueue one object on a ring (multi-producers safe). + * + * This function uses a "compare and set" instruction to move the + * producer index atomically. + * + * @param r + * A pointer to the ring structure. + * @param obj + * A pointer to the object to be added. + * @param esize + * The size of ring element, in bytes. It must be a multiple of 4. + * This must be the same value used while creating the ring. Otherwise + * the results are undefined. + * @return + * - 0: Success; objects enqueued. + * - -ENOBUFS: Not enough room in the ring to enqueue; no object is enqueued. + */ +static __rte_always_inline int +rte_ring_mp_enqueue_elem(struct rte_ring *r, void *obj, unsigned int esize) +{ + return rte_ring_mp_enqueue_bulk_elem(r, obj, esize, 1, NULL) ? 0 : + -ENOBUFS; +} + +/** + * Enqueue one object on a ring + * + * @warning This API is NOT multi-producers safe + * + * @param r + * A pointer to the ring structure. + * @param obj + * A pointer to the object to be added. + * @param esize + * The size of ring element, in bytes. It must be a multiple of 4. + * This must be the same value used while creating the ring. Otherwise + * the results are undefined. + * @return + * - 0: Success; objects enqueued. + * - -ENOBUFS: Not enough room in the ring to enqueue; no object is enqueued. + */ +static __rte_always_inline int +rte_ring_sp_enqueue_elem(struct rte_ring *r, void *obj, unsigned int esize) +{ + return rte_ring_sp_enqueue_bulk_elem(r, obj, esize, 1, NULL) ? 0 : + -ENOBUFS; +} + +/** + * Enqueue one object on a ring. + * + * This function calls the multi-producer or the single-producer + * version, depending on the default behaviour that was specified at + * ring creation time (see flags). + * + * @param r + * A pointer to the ring structure. + * @param obj + * A pointer to the object to be added. + * @param esize + * The size of ring element, in bytes. It must be a multiple of 4. + * This must be the same value used while creating the ring. Otherwise + * the results are undefined. + * @return + * - 0: Success; objects enqueued. + * - -ENOBUFS: Not enough room in the ring to enqueue; no object is enqueued. + */ +static __rte_always_inline int +rte_ring_enqueue_elem(struct rte_ring *r, void *obj, unsigned int esize) +{ + return rte_ring_enqueue_bulk_elem(r, obj, esize, 1, NULL) ? 0 : + -ENOBUFS; +} + +/** + * Dequeue several objects from a ring (multi-consumers safe). 
+ * + * This function uses a "compare and set" instruction to move the + * consumer index atomically. + * + * @param r + * A pointer to the ring structure. + * @param obj_table + * A pointer to a table of void * pointers (objects) that will be filled. + * @param esize + * The size of ring element, in bytes. It must be a multiple of 4. + * This must be the same value used while creating the ring. Otherwise + * the results are undefined. + * @param n + * The number of objects to dequeue from the ring to the obj_table. + * @param available + * If non-NULL, returns the number of remaining ring entries after the + * dequeue has finished. + * @return + * The number of objects dequeued, either 0 or n + */ +static __rte_always_inline unsigned int +rte_ring_mc_dequeue_bulk_elem(struct rte_ring *r, void *obj_table, + unsigned int esize, unsigned int n, unsigned int *available) +{ + return __rte_ring_do_dequeue_elem(r, obj_table, esize, n, + RTE_RING_QUEUE_FIXED, __IS_MC, available); +} + +/** + * Dequeue several objects from a ring (NOT multi-consumers safe). + * + * @param r + * A pointer to the ring structure. + * @param obj_table + * A pointer to a table of void * pointers (objects) that will be filled. + * @param esize + * The size of ring element, in bytes. It must be a multiple of 4. + * This must be the same value used while creating the ring. Otherwise + * the results are undefined. + * @param n + * The number of objects to dequeue from the ring to the obj_table, + * must be strictly positive. + * @param available + * If non-NULL, returns the number of remaining ring entries after the + * dequeue has finished. + * @return + * The number of objects dequeued, either 0 or n + */ +static __rte_always_inline unsigned int +rte_ring_sc_dequeue_bulk_elem(struct rte_ring *r, void *obj_table, + unsigned int esize, unsigned int n, unsigned int *available) +{ + return __rte_ring_do_dequeue_elem(r, obj_table, esize, n, + RTE_RING_QUEUE_FIXED, __IS_SC, available); +} + +/** + * Dequeue several objects from a ring. + * + * This function calls the multi-consumers or the single-consumer + * version, depending on the default behaviour that was specified at + * ring creation time (see flags). + * + * @param r + * A pointer to the ring structure. + * @param obj_table + * A pointer to a table of void * pointers (objects) that will be filled. + * @param esize + * The size of ring element, in bytes. It must be a multiple of 4. + * This must be the same value used while creating the ring. Otherwise + * the results are undefined. + * @param n + * The number of objects to dequeue from the ring to the obj_table. + * @param available + * If non-NULL, returns the number of remaining ring entries after the + * dequeue has finished. + * @return + * The number of objects dequeued, either 0 or n + */ +static __rte_always_inline unsigned int +rte_ring_dequeue_bulk_elem(struct rte_ring *r, void *obj_table, + unsigned int esize, unsigned int n, unsigned int *available) +{ + return __rte_ring_do_dequeue_elem(r, obj_table, esize, n, + RTE_RING_QUEUE_FIXED, r->cons.single, available); +} + +/** + * Dequeue one object from a ring (multi-consumers safe). + * + * This function uses a "compare and set" instruction to move the + * consumer index atomically. + * + * @param r + * A pointer to the ring structure. + * @param obj_p + * A pointer to a void * pointer (object) that will be filled. + * @param esize + * The size of ring element, in bytes. It must be a multiple of 4. + * This must be the same value used while creating the ring. 
Otherwise + * the results are undefined. + * @return + * - 0: Success; objects dequeued. + * - -ENOENT: Not enough entries in the ring to dequeue; no object is + * dequeued. + */ +static __rte_always_inline int +rte_ring_mc_dequeue_elem(struct rte_ring *r, void *obj_p, + unsigned int esize) +{ + return rte_ring_mc_dequeue_bulk_elem(r, obj_p, esize, 1, NULL) ? 0 : + -ENOENT; +} + +/** + * Dequeue one object from a ring (NOT multi-consumers safe). + * + * @param r + * A pointer to the ring structure. + * @param obj_p + * A pointer to a void * pointer (object) that will be filled. + * @param esize + * The size of ring element, in bytes. It must be a multiple of 4. + * This must be the same value used while creating the ring. Otherwise + * the results are undefined. + * @return + * - 0: Success; objects dequeued. + * - -ENOENT: Not enough entries in the ring to dequeue, no object is + * dequeued. + */ +static __rte_always_inline int +rte_ring_sc_dequeue_elem(struct rte_ring *r, void *obj_p, + unsigned int esize) +{ + return rte_ring_sc_dequeue_bulk_elem(r, obj_p, esize, 1, NULL) ? 0 : + -ENOENT; +} + +/** + * Dequeue one object from a ring. + * + * This function calls the multi-consumers or the single-consumer + * version depending on the default behaviour that was specified at + * ring creation time (see flags). + * + * @param r + * A pointer to the ring structure. + * @param obj_p + * A pointer to a void * pointer (object) that will be filled. + * @param esize + * The size of ring element, in bytes. It must be a multiple of 4. + * This must be the same value used while creating the ring. Otherwise + * the results are undefined. + * @return + * - 0: Success, objects dequeued. + * - -ENOENT: Not enough entries in the ring to dequeue, no object is + * dequeued. + */ +static __rte_always_inline int +rte_ring_dequeue_elem(struct rte_ring *r, void *obj_p, unsigned int esize) +{ + return rte_ring_dequeue_bulk_elem(r, obj_p, esize, 1, NULL) ? 0 : + -ENOENT; +} + +/** + * Enqueue several objects on the ring (multi-producers safe). + * + * This function uses a "compare and set" instruction to move the + * producer index atomically. + * + * @param r + * A pointer to the ring structure. + * @param obj_table + * A pointer to a table of void * pointers (objects). + * @param esize + * The size of ring element, in bytes. It must be a multiple of 4. + * This must be the same value used while creating the ring. Otherwise + * the results are undefined. + * @param n + * The number of objects to add in the ring from the obj_table. + * @param free_space + * if non-NULL, returns the amount of space in the ring after the + * enqueue operation has finished. + * @return + * - n: Actual number of objects enqueued. + */ +static __rte_always_inline unsigned +rte_ring_mp_enqueue_burst_elem(struct rte_ring *r, const void *obj_table, + unsigned int esize, unsigned int n, unsigned int *free_space) +{ + return __rte_ring_do_enqueue_elem(r, obj_table, esize, n, + RTE_RING_QUEUE_VARIABLE, __IS_MP, free_space); +} + +/** + * Enqueue several objects on a ring + * + * @warning This API is NOT multi-producers safe + * + * @param r + * A pointer to the ring structure. + * @param obj_table + * A pointer to a table of void * pointers (objects). + * @param esize + * The size of ring element, in bytes. It must be a multiple of 4. + * This must be the same value used while creating the ring. Otherwise + * the results are undefined. + * @param n + * The number of objects to add in the ring from the obj_table. 
+ * @param free_space + * if non-NULL, returns the amount of space in the ring after the + * enqueue operation has finished. + * @return + * - n: Actual number of objects enqueued. + */ +static __rte_always_inline unsigned +rte_ring_sp_enqueue_burst_elem(struct rte_ring *r, const void *obj_table, + unsigned int esize, unsigned int n, unsigned int *free_space) +{ + return __rte_ring_do_enqueue_elem(r, obj_table, esize, n, + RTE_RING_QUEUE_VARIABLE, __IS_SP, free_space); +} + +/** + * Enqueue several objects on a ring. + * + * This function calls the multi-producer or the single-producer + * version depending on the default behavior that was specified at + * ring creation time (see flags). + * + * @param r + * A pointer to the ring structure. + * @param obj_table + * A pointer to a table of void * pointers (objects). + * @param esize + * The size of ring element, in bytes. It must be a multiple of 4. + * This must be the same value used while creating the ring. Otherwise + * the results are undefined. + * @param n + * The number of objects to add in the ring from the obj_table. + * @param free_space + * if non-NULL, returns the amount of space in the ring after the + * enqueue operation has finished. + * @return + * - n: Actual number of objects enqueued. + */ +static __rte_always_inline unsigned +rte_ring_enqueue_burst_elem(struct rte_ring *r, const void *obj_table, + unsigned int esize, unsigned int n, unsigned int *free_space) +{ + return __rte_ring_do_enqueue_elem(r, obj_table, esize, n, + RTE_RING_QUEUE_VARIABLE, r->prod.single, free_space); +} + +/** + * Dequeue several objects from a ring (multi-consumers safe). When the request + * objects are more than the available objects, only dequeue the actual number + * of objects + * + * This function uses a "compare and set" instruction to move the + * consumer index atomically. + * + * @param r + * A pointer to the ring structure. + * @param obj_table + * A pointer to a table of void * pointers (objects) that will be filled. + * @param esize + * The size of ring element, in bytes. It must be a multiple of 4. + * This must be the same value used while creating the ring. Otherwise + * the results are undefined. + * @param n + * The number of objects to dequeue from the ring to the obj_table. + * @param available + * If non-NULL, returns the number of remaining ring entries after the + * dequeue has finished. + * @return + * - n: Actual number of objects dequeued, 0 if ring is empty + */ +static __rte_always_inline unsigned +rte_ring_mc_dequeue_burst_elem(struct rte_ring *r, void *obj_table, + unsigned int esize, unsigned int n, unsigned int *available) +{ + return __rte_ring_do_dequeue_elem(r, obj_table, esize, n, + RTE_RING_QUEUE_VARIABLE, __IS_MC, available); +} + +/** + * Dequeue several objects from a ring (NOT multi-consumers safe).When the + * request objects are more than the available objects, only dequeue the + * actual number of objects + * + * @param r + * A pointer to the ring structure. + * @param obj_table + * A pointer to a table of void * pointers (objects) that will be filled. + * @param esize + * The size of ring element, in bytes. It must be a multiple of 4. + * This must be the same value used while creating the ring. Otherwise + * the results are undefined. + * @param n + * The number of objects to dequeue from the ring to the obj_table. + * @param available + * If non-NULL, returns the number of remaining ring entries after the + * dequeue has finished. 
+ * @return + * - n: Actual number of objects dequeued, 0 if ring is empty + */ +static __rte_always_inline unsigned +rte_ring_sc_dequeue_burst_elem(struct rte_ring *r, void *obj_table, + unsigned int esize, unsigned int n, unsigned int *available) +{ + return __rte_ring_do_dequeue_elem(r, obj_table, esize, n, + RTE_RING_QUEUE_VARIABLE, __IS_SC, available); +} + +/** + * Dequeue multiple objects from a ring up to a maximum number. + * + * This function calls the multi-consumers or the single-consumer + * version, depending on the default behaviour that was specified at + * ring creation time (see flags). + * + * @param r + * A pointer to the ring structure. + * @param obj_table + * A pointer to a table of void * pointers (objects) that will be filled. + * @param esize + * The size of ring element, in bytes. It must be a multiple of 4. + * This must be the same value used while creating the ring. Otherwise + * the results are undefined. + * @param n + * The number of objects to dequeue from the ring to the obj_table. + * @param available + * If non-NULL, returns the number of remaining ring entries after the + * dequeue has finished. + * @return + * - Number of objects dequeued + */ +static __rte_always_inline unsigned int +rte_ring_dequeue_burst_elem(struct rte_ring *r, void *obj_table, + unsigned int esize, unsigned int n, unsigned int *available) +{ + return __rte_ring_do_dequeue_elem(r, obj_table, esize, n, + RTE_RING_QUEUE_VARIABLE, + r->cons.single, available); +} + +#ifdef __cplusplus +} +#endif + +#endif /* _RTE_RING_ELEM_H_ */ diff --git a/lib/librte_ring/rte_ring_version.map b/lib/librte_ring/rte_ring_version.map index 89d84bcf4..7a5328dd5 100644 --- a/lib/librte_ring/rte_ring_version.map +++ b/lib/librte_ring/rte_ring_version.map @@ -15,6 +15,8 @@ DPDK_20.0 { EXPERIMENTAL { global: + rte_ring_create_elem; + rte_ring_get_memsize_elem; rte_ring_reset; }; From patchwork Fri Dec 20 04:45:10 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Honnappa Nagarahalli X-Patchwork-Id: 64049 X-Patchwork-Delegate: david.marchand@redhat.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id 1CDFEA04F3; Fri, 20 Dec 2019 05:46:08 +0100 (CET) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id AD9F91BF78; Fri, 20 Dec 2019 05:46:02 +0100 (CET) Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by dpdk.org (Postfix) with ESMTP id 2A2AC1BF73 for ; Fri, 20 Dec 2019 05:45:58 +0100 (CET) Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 0EEAA106F; Thu, 19 Dec 2019 20:45:57 -0800 (PST) Received: from qc2400f-1.austin.arm.com (qc2400f-1.austin.arm.com [10.118.14.48]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id E821D3F86C; Thu, 19 Dec 2019 20:45:56 -0800 (PST) From: Honnappa Nagarahalli To: olivier.matz@6wind.com, sthemmin@microsoft.com, jerinj@marvell.com, bruce.richardson@intel.com, david.marchand@redhat.com, pbhagavatula@marvell.com, konstantin.ananyev@intel.com, honnappa.nagarahalli@arm.com Cc: dev@dpdk.org, dharmik.thakkar@arm.com, ruifeng.wang@arm.com, gavin.hu@arm.com, nd@arm.com Date: Thu, 19 Dec 2019 22:45:10 -0600 Message-Id: <20191220044524.32910-4-honnappa.nagarahalli@arm.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: 
<20191220044524.32910-1-honnappa.nagarahalli@arm.com> References: <20190906190510.11146-1-honnappa.nagarahalli@arm.com> <20191220044524.32910-1-honnappa.nagarahalli@arm.com> Subject: [dpdk-dev] [PATCH v7 03/17] test/ring: add functional tests for rte_ring_xxx_elem APIs X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Add basic infrastructure to test rte_ring_xxx_elem APIs. Add test cases for testing burst and bulk tests. Signed-off-by: Honnappa Nagarahalli Reviewed-by: Gavin Hu --- app/test/test_ring.c | 466 ++++++++++++++++++++----------------------- app/test/test_ring.h | 203 +++++++++++++++++++ 2 files changed, 419 insertions(+), 250 deletions(-) create mode 100644 app/test/test_ring.h diff --git a/app/test/test_ring.c b/app/test/test_ring.c index aaf1e70ad..e7a8b468b 100644 --- a/app/test/test_ring.c +++ b/app/test/test_ring.c @@ -23,11 +23,13 @@ #include #include #include +#include #include #include #include #include "test.h" +#include "test_ring.h" /* * Ring @@ -67,6 +69,50 @@ static rte_atomic32_t synchro; #define TEST_RING_FULL_EMTPY_ITER 8 +static int esize[] = {-1, 4, 8, 16}; + +static void +test_ring_mem_init(void *obj, unsigned int count, int esize) +{ + unsigned int i; + + /* Legacy queue APIs? */ + if (esize == -1) + for (i = 0; i < count; i++) + ((void **)obj)[i] = (void *)(unsigned long)i; + else + for (i = 0; i < (count * esize / sizeof(uint32_t)); i++) + ((uint32_t *)obj)[i] = i; +} + +static void +test_ring_print_test_string(const char *istr, unsigned int api_type, int esize) +{ + printf("\n%s: ", istr); + + if (esize == -1) + printf("legacy APIs: "); + else + printf("elem APIs: element size %dB ", esize); + + if (api_type == TEST_RING_IGNORE_API_TYPE) + return; + + if ((api_type & TEST_RING_N) == TEST_RING_N) + printf(": default enqueue/dequeue: "); + else if ((api_type & TEST_RING_S) == TEST_RING_S) + printf(": SP/SC: "); + else if ((api_type & TEST_RING_M) == TEST_RING_M) + printf(": MP/MC: "); + + if ((api_type & TEST_RING_SL) == TEST_RING_SL) + printf("single\n"); + else if ((api_type & TEST_RING_BL) == TEST_RING_BL) + printf("bulk\n"); + else if ((api_type & TEST_RING_BR) == TEST_RING_BR) + printf("burst\n"); +} + /* * helper routine for test_ring_basic */ @@ -314,286 +360,203 @@ test_ring_basic(struct rte_ring *r) return -1; } +/* + * Burst and bulk operations with sp/sc, mp/mc and default (during creation) + */ static int -test_ring_burst_basic(struct rte_ring *r) +test_ring_burst_bulk_tests(unsigned int api_type) { + struct rte_ring *r; void **src = NULL, **cur_src = NULL, **dst = NULL, **cur_dst = NULL; int ret; - unsigned i; + unsigned int i, j; + unsigned int num_elems; - /* alloc dummy object pointers */ - src = malloc(RING_SIZE*2*sizeof(void *)); - if (src == NULL) - goto fail; - - for (i = 0; i < RING_SIZE*2 ; i++) { - src[i] = (void *)(unsigned long)i; - } - cur_src = src; + for (i = 0; i < RTE_DIM(esize); i++) { + test_ring_print_test_string("Test standard ring", api_type, + esize[i]); - /* alloc some room for copied objects */ - dst = malloc(RING_SIZE*2*sizeof(void *)); - if (dst == NULL) - goto fail; + /* Create the ring */ + TEST_RING_CREATE("test_ring_burst_bulk_tests", esize[i], + RING_SIZE, SOCKET_ID_ANY, 0, r); - memset(dst, 0, RING_SIZE*2*sizeof(void *)); - cur_dst = dst; - - printf("Test SP & SC basic functions \n"); - printf("enqueue 1 obj\n"); - ret = 
rte_ring_sp_enqueue_burst(r, cur_src, 1, NULL); - cur_src += 1; - if (ret != 1) - goto fail; - - printf("enqueue 2 objs\n"); - ret = rte_ring_sp_enqueue_burst(r, cur_src, 2, NULL); - cur_src += 2; - if (ret != 2) - goto fail; - - printf("enqueue MAX_BULK objs\n"); - ret = rte_ring_sp_enqueue_burst(r, cur_src, MAX_BULK, NULL); - cur_src += MAX_BULK; - if (ret != MAX_BULK) - goto fail; - - printf("dequeue 1 obj\n"); - ret = rte_ring_sc_dequeue_burst(r, cur_dst, 1, NULL); - cur_dst += 1; - if (ret != 1) - goto fail; - - printf("dequeue 2 objs\n"); - ret = rte_ring_sc_dequeue_burst(r, cur_dst, 2, NULL); - cur_dst += 2; - if (ret != 2) - goto fail; + /* alloc dummy object pointers */ + src = test_ring_calloc(RING_SIZE * 2, esize[i]); + if (src == NULL) + goto fail; + test_ring_mem_init(src, RING_SIZE * 2, esize[i]); + cur_src = src; - printf("dequeue MAX_BULK objs\n"); - ret = rte_ring_sc_dequeue_burst(r, cur_dst, MAX_BULK, NULL); - cur_dst += MAX_BULK; - if (ret != MAX_BULK) - goto fail; + /* alloc some room for copied objects */ + dst = test_ring_calloc(RING_SIZE * 2, esize[i]); + if (dst == NULL) + goto fail; + cur_dst = dst; - /* check data */ - if (memcmp(src, dst, cur_dst - dst)) { - rte_hexdump(stdout, "src", src, cur_src - src); - rte_hexdump(stdout, "dst", dst, cur_dst - dst); - printf("data after dequeue is not the same\n"); - goto fail; - } + printf("enqueue 1 obj\n"); + TEST_RING_ENQUEUE(r, cur_src, esize[i], 1, ret, api_type); + if (ret != 1) + goto fail; + TEST_RING_INCP(cur_src, esize[i], 1); - cur_src = src; - cur_dst = dst; + printf("enqueue 2 objs\n"); + TEST_RING_ENQUEUE(r, cur_src, esize[i], 2, ret, api_type); + if (ret != 2) + goto fail; + TEST_RING_INCP(cur_src, esize[i], 2); - printf("Test enqueue without enough memory space \n"); - for (i = 0; i< (RING_SIZE/MAX_BULK - 1); i++) { - ret = rte_ring_sp_enqueue_burst(r, cur_src, MAX_BULK, NULL); - cur_src += MAX_BULK; + printf("enqueue MAX_BULK objs\n"); + TEST_RING_ENQUEUE(r, cur_src, esize[i], MAX_BULK, ret, + api_type); if (ret != MAX_BULK) goto fail; - } - - printf("Enqueue 2 objects, free entries = MAX_BULK - 2 \n"); - ret = rte_ring_sp_enqueue_burst(r, cur_src, 2, NULL); - cur_src += 2; - if (ret != 2) - goto fail; + TEST_RING_INCP(cur_src, esize[i], MAX_BULK); - printf("Enqueue the remaining entries = MAX_BULK - 2 \n"); - /* Always one free entry left */ - ret = rte_ring_sp_enqueue_burst(r, cur_src, MAX_BULK, NULL); - cur_src += MAX_BULK - 3; - if (ret != MAX_BULK - 3) - goto fail; - - printf("Test if ring is full \n"); - if (rte_ring_full(r) != 1) - goto fail; + printf("dequeue 1 obj\n"); + TEST_RING_DEQUEUE(r, cur_dst, esize[i], 1, ret, api_type); + if (ret != 1) + goto fail; + TEST_RING_INCP(cur_dst, esize[i], 1); - printf("Test enqueue for a full entry \n"); - ret = rte_ring_sp_enqueue_burst(r, cur_src, MAX_BULK, NULL); - if (ret != 0) - goto fail; + printf("dequeue 2 objs\n"); + TEST_RING_DEQUEUE(r, cur_dst, esize[i], 2, ret, api_type); + if (ret != 2) + goto fail; + TEST_RING_INCP(cur_dst, esize[i], 2); - printf("Test dequeue without enough objects \n"); - for (i = 0; i +#include +#include + +/* API type to call + * N - Calls default APIs + * S - Calls SP or SC API + * M - Calls MP or MC API + */ +#define TEST_RING_N 1 +#define TEST_RING_S 2 +#define TEST_RING_M 4 + +/* API type to call + * SL - Calls single element APIs + * BL - Calls bulk APIs + * BR - Calls burst APIs + */ +#define TEST_RING_SL 8 +#define TEST_RING_BL 16 +#define TEST_RING_BR 32 + +#define TEST_RING_IGNORE_API_TYPE ~0U + +#define 
TEST_RING_INCP(obj, esize, n) do { \ + /* Legacy queue APIs? */ \ + if ((esize) == -1) \ + obj = ((void **)obj) + n; \ + else \ + obj = (void **)(((uint32_t *)obj) + \ + (n * esize / sizeof(uint32_t))); \ +} while (0) + +#define TEST_RING_CREATE(name, esize, count, socket_id, flags, r) do { \ + /* Legacy queue APIs? */ \ + if ((esize) == -1) \ + r = rte_ring_create((name), (count), (socket_id), (flags)); \ + else \ + r = rte_ring_create_elem((name), (esize), (count), \ + (socket_id), (flags)); \ +} while (0) + +#define TEST_RING_ENQUEUE(r, obj, esize, n, ret, api_type) do { \ + /* Legacy queue APIs? */ \ + if ((esize) == -1) \ + switch (api_type) { \ + case (TEST_RING_N | TEST_RING_SL): \ + ret = rte_ring_enqueue(r, obj); \ + break; \ + case (TEST_RING_S | TEST_RING_SL): \ + ret = rte_ring_sp_enqueue(r, obj); \ + break; \ + case (TEST_RING_M | TEST_RING_SL): \ + ret = rte_ring_mp_enqueue(r, obj); \ + break; \ + case (TEST_RING_N | TEST_RING_BL): \ + ret = rte_ring_enqueue_bulk(r, obj, n, NULL); \ + break; \ + case (TEST_RING_S | TEST_RING_BL): \ + ret = rte_ring_sp_enqueue_bulk(r, obj, n, NULL); \ + break; \ + case (TEST_RING_M | TEST_RING_BL): \ + ret = rte_ring_mp_enqueue_bulk(r, obj, n, NULL); \ + break; \ + case (TEST_RING_N | TEST_RING_BR): \ + ret = rte_ring_enqueue_burst(r, obj, n, NULL); \ + break; \ + case (TEST_RING_S | TEST_RING_BR): \ + ret = rte_ring_sp_enqueue_burst(r, obj, n, NULL); \ + break; \ + case (TEST_RING_M | TEST_RING_BR): \ + ret = rte_ring_mp_enqueue_burst(r, obj, n, NULL); \ + } \ + else \ + switch (api_type) { \ + case (TEST_RING_N | TEST_RING_SL): \ + ret = rte_ring_enqueue_elem(r, obj, esize); \ + break; \ + case (TEST_RING_S | TEST_RING_SL): \ + ret = rte_ring_sp_enqueue_elem(r, obj, esize); \ + break; \ + case (TEST_RING_M | TEST_RING_SL): \ + ret = rte_ring_mp_enqueue_elem(r, obj, esize); \ + break; \ + case (TEST_RING_N | TEST_RING_BL): \ + ret = rte_ring_enqueue_bulk_elem(r, obj, esize, n, \ + NULL); \ + break; \ + case (TEST_RING_S | TEST_RING_BL): \ + ret = rte_ring_sp_enqueue_bulk_elem(r, obj, esize, n, \ + NULL); \ + break; \ + case (TEST_RING_M | TEST_RING_BL): \ + ret = rte_ring_mp_enqueue_bulk_elem(r, obj, esize, n, \ + NULL); \ + break; \ + case (TEST_RING_N | TEST_RING_BR): \ + ret = rte_ring_enqueue_burst_elem(r, obj, esize, n, \ + NULL); \ + break; \ + case (TEST_RING_S | TEST_RING_BR): \ + ret = rte_ring_sp_enqueue_burst_elem(r, obj, esize, n, \ + NULL); \ + break; \ + case (TEST_RING_M | TEST_RING_BR): \ + ret = rte_ring_mp_enqueue_burst_elem(r, obj, esize, n, \ + NULL); \ + } \ +} while (0) + +#define TEST_RING_DEQUEUE(r, obj, esize, n, ret, api_type) do { \ + /* Legacy queue APIs? 
*/ \ + if ((esize) == -1) \ + switch (api_type) { \ + case (TEST_RING_N | TEST_RING_SL): \ + ret = rte_ring_dequeue(r, obj); \ + break; \ + case (TEST_RING_S | TEST_RING_SL): \ + ret = rte_ring_sc_dequeue(r, obj); \ + break; \ + case (TEST_RING_M | TEST_RING_SL): \ + ret = rte_ring_mc_dequeue(r, obj); \ + break; \ + case (TEST_RING_N | TEST_RING_BL): \ + ret = rte_ring_dequeue_bulk(r, obj, n, NULL); \ + break; \ + case (TEST_RING_S | TEST_RING_BL): \ + ret = rte_ring_sc_dequeue_bulk(r, obj, n, NULL); \ + break; \ + case (TEST_RING_M | TEST_RING_BL): \ + ret = rte_ring_mc_dequeue_bulk(r, obj, n, NULL); \ + break; \ + case (TEST_RING_N | TEST_RING_BR): \ + ret = rte_ring_dequeue_burst(r, obj, n, NULL); \ + break; \ + case (TEST_RING_S | TEST_RING_BR): \ + ret = rte_ring_sc_dequeue_burst(r, obj, n, NULL); \ + break; \ + case (TEST_RING_M | TEST_RING_BR): \ + ret = rte_ring_mc_dequeue_burst(r, obj, n, NULL); \ + } \ + else \ + switch (api_type) { \ + case (TEST_RING_N | TEST_RING_SL): \ + ret = rte_ring_dequeue_elem(r, obj, esize); \ + break; \ + case (TEST_RING_S | TEST_RING_SL): \ + ret = rte_ring_sc_dequeue_elem(r, obj, esize); \ + break; \ + case (TEST_RING_M | TEST_RING_SL): \ + ret = rte_ring_mc_dequeue_elem(r, obj, esize); \ + break; \ + case (TEST_RING_N | TEST_RING_BL): \ + ret = rte_ring_dequeue_bulk_elem(r, obj, esize, n, \ + NULL); \ + break; \ + case (TEST_RING_S | TEST_RING_BL): \ + ret = rte_ring_sc_dequeue_bulk_elem(r, obj, esize, n, \ + NULL); \ + break; \ + case (TEST_RING_M | TEST_RING_BL): \ + ret = rte_ring_mc_dequeue_bulk_elem(r, obj, esize, n, \ + NULL); \ + break; \ + case (TEST_RING_N | TEST_RING_BR): \ + ret = rte_ring_dequeue_burst_elem(r, obj, esize, n, \ + NULL); \ + break; \ + case (TEST_RING_S | TEST_RING_BR): \ + ret = rte_ring_sc_dequeue_burst_elem(r, obj, esize, n, \ + NULL); \ + break; \ + case (TEST_RING_M | TEST_RING_BR): \ + ret = rte_ring_mc_dequeue_burst_elem(r, obj, esize, n, \ + NULL); \ + } \ +} while (0) + +/* This function is placed here as it is required for both + * performance and functional tests. + */ +static __rte_always_inline void * +test_ring_calloc(unsigned int rsize, int esize) +{ + unsigned int sz; + void *p; + + /* Legacy queue APIs? 
*/ + if (esize == -1) + sz = sizeof(void *); + else + sz = esize; + + p = rte_zmalloc(NULL, rsize * sz, RTE_CACHE_LINE_SIZE); + if (p == NULL) + printf("Failed to allocate memory\n"); + + return p; +} From patchwork Fri Dec 20 04:45:11 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Honnappa Nagarahalli X-Patchwork-Id: 64052 X-Patchwork-Delegate: david.marchand@redhat.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id 71DC1A04F3; Fri, 20 Dec 2019 05:46:35 +0100 (CET) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id B2A6D1BF8E; Fri, 20 Dec 2019 05:46:09 +0100 (CET) Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by dpdk.org (Postfix) with ESMTP id 4024C1BF75 for ; Fri, 20 Dec 2019 05:45:58 +0100 (CET) Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 18CC711B3; Thu, 19 Dec 2019 20:45:57 -0800 (PST) Received: from qc2400f-1.austin.arm.com (qc2400f-1.austin.arm.com [10.118.14.48]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 0E4CF3F718; Thu, 19 Dec 2019 20:45:57 -0800 (PST) From: Honnappa Nagarahalli To: olivier.matz@6wind.com, sthemmin@microsoft.com, jerinj@marvell.com, bruce.richardson@intel.com, david.marchand@redhat.com, pbhagavatula@marvell.com, konstantin.ananyev@intel.com, honnappa.nagarahalli@arm.com Cc: dev@dpdk.org, dharmik.thakkar@arm.com, ruifeng.wang@arm.com, gavin.hu@arm.com, nd@arm.com Date: Thu, 19 Dec 2019 22:45:11 -0600 Message-Id: <20191220044524.32910-5-honnappa.nagarahalli@arm.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20191220044524.32910-1-honnappa.nagarahalli@arm.com> References: <20190906190510.11146-1-honnappa.nagarahalli@arm.com> <20191220044524.32910-1-honnappa.nagarahalli@arm.com> Subject: [dpdk-dev] [PATCH v7 04/17] test/ring: test burst APIs with random empty-full test case X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" The random empty-full test case should be tested with burst APIs as well. Hence the test case is consolidated in test_ring_burst_bulk_tests function. 
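To show how the api_type bitmask above is meant to be consumed, here is an illustrative driver, not code from the patch: run_burst_bulk_matrix is a hypothetical helper assumed to live in app/test/test_ring.c next to test_ring_burst_bulk_tests(), and it assumes the usual 0/-1 return convention of the test routines.

static int
run_burst_bulk_matrix(void)
{
	/* default, SP/SC and MP/MC synchronisation, each with bulk and burst calls */
	static const unsigned int api_types[] = {
		TEST_RING_N | TEST_RING_BL,
		TEST_RING_S | TEST_RING_BL,
		TEST_RING_M | TEST_RING_BL,
		TEST_RING_N | TEST_RING_BR,
		TEST_RING_S | TEST_RING_BR,
		TEST_RING_M | TEST_RING_BR,
	};
	unsigned int i;

	for (i = 0; i < RTE_DIM(api_types); i++)
		if (test_ring_burst_bulk_tests(api_types[i]) < 0)
			return -1;

	return 0;
}

Keeping the sweep in one parameterized routine is what lets the random full/empty case below run against the burst calls as well as the bulk ones.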
Signed-off-by: Honnappa Nagarahalli Reviewed-by: Gavin Hu --- app/test/test_ring.c | 91 +++++++++++++++++++++----------------------- 1 file changed, 43 insertions(+), 48 deletions(-) diff --git a/app/test/test_ring.c b/app/test/test_ring.c index e7a8b468b..d4f40ad20 100644 --- a/app/test/test_ring.c +++ b/app/test/test_ring.c @@ -113,50 +113,6 @@ test_ring_print_test_string(const char *istr, unsigned int api_type, int esize) printf("burst\n"); } -/* - * helper routine for test_ring_basic - */ -static int -test_ring_basic_full_empty(struct rte_ring *r, void * const src[], void *dst[]) -{ - unsigned i, rand; - const unsigned rsz = RING_SIZE - 1; - - printf("Basic full/empty test\n"); - - for (i = 0; TEST_RING_FULL_EMTPY_ITER != i; i++) { - - /* random shift in the ring */ - rand = RTE_MAX(rte_rand() % RING_SIZE, 1UL); - printf("%s: iteration %u, random shift: %u;\n", - __func__, i, rand); - TEST_RING_VERIFY(rte_ring_enqueue_bulk(r, src, rand, - NULL) != 0); - TEST_RING_VERIFY(rte_ring_dequeue_bulk(r, dst, rand, - NULL) == rand); - - /* fill the ring */ - TEST_RING_VERIFY(rte_ring_enqueue_bulk(r, src, rsz, NULL) != 0); - TEST_RING_VERIFY(0 == rte_ring_free_count(r)); - TEST_RING_VERIFY(rsz == rte_ring_count(r)); - TEST_RING_VERIFY(rte_ring_full(r)); - TEST_RING_VERIFY(0 == rte_ring_empty(r)); - - /* empty the ring */ - TEST_RING_VERIFY(rte_ring_dequeue_bulk(r, dst, rsz, - NULL) == rsz); - TEST_RING_VERIFY(rsz == rte_ring_free_count(r)); - TEST_RING_VERIFY(0 == rte_ring_count(r)); - TEST_RING_VERIFY(0 == rte_ring_full(r)); - TEST_RING_VERIFY(rte_ring_empty(r)); - - /* check data */ - TEST_RING_VERIFY(0 == memcmp(src, dst, rsz)); - rte_ring_dump(stdout, r); - } - return 0; -} - static int test_ring_basic(struct rte_ring *r) { @@ -294,9 +250,6 @@ test_ring_basic(struct rte_ring *r) goto fail; } - if (test_ring_basic_full_empty(r, src, dst) != 0) - goto fail; - cur_src = src; cur_dst = dst; @@ -371,6 +324,8 @@ test_ring_burst_bulk_tests(unsigned int api_type) int ret; unsigned int i, j; unsigned int num_elems; + int rand; + const unsigned int rsz = RING_SIZE - 1; for (i = 0; i < RTE_DIM(esize); i++) { test_ring_print_test_string("Test standard ring", api_type, @@ -483,7 +438,6 @@ test_ring_burst_bulk_tests(unsigned int api_type) goto fail; TEST_RING_INCP(cur_src, esize[i], 2); - printf("Enqueue the remaining entries = MAX_BULK - 3\n"); /* Bulk APIs enqueue exact number of elements */ if ((api_type & TEST_RING_BL) == TEST_RING_BL) @@ -546,6 +500,47 @@ test_ring_burst_bulk_tests(unsigned int api_type) goto fail; } + printf("Random full/empty test\n"); + cur_src = src; + cur_dst = dst; + + for (j = 0; j != TEST_RING_FULL_EMTPY_ITER; j++) { + /* random shift in the ring */ + rand = RTE_MAX(rte_rand() % RING_SIZE, 1UL); + printf("%s: iteration %u, random shift: %u;\n", + __func__, i, rand); + TEST_RING_ENQUEUE(r, cur_src, esize[i], rand, + ret, api_type); + TEST_RING_VERIFY(ret != 0); + + TEST_RING_DEQUEUE(r, cur_dst, esize[i], rand, + ret, api_type); + TEST_RING_VERIFY(ret == rand); + + /* fill the ring */ + TEST_RING_ENQUEUE(r, cur_src, esize[i], rsz, + ret, api_type); + TEST_RING_VERIFY(ret != 0); + + TEST_RING_VERIFY(rte_ring_free_count(r) == 0); + TEST_RING_VERIFY(rsz == rte_ring_count(r)); + TEST_RING_VERIFY(rte_ring_full(r)); + TEST_RING_VERIFY(rte_ring_empty(r) == 0); + + /* empty the ring */ + TEST_RING_DEQUEUE(r, cur_dst, esize[i], rsz, + ret, api_type); + TEST_RING_VERIFY(ret == (int)rsz); + TEST_RING_VERIFY(rsz == rte_ring_free_count(r)); + TEST_RING_VERIFY(rte_ring_count(r) == 0); + 
TEST_RING_VERIFY(rte_ring_full(r) == 0); + TEST_RING_VERIFY(rte_ring_empty(r)); + + /* check data */ + TEST_RING_VERIFY(memcmp(src, dst, rsz) == 0); + rte_ring_dump(stdout, r); + } + /* Free memory before test completed */ rte_ring_free(r); rte_free(src); From patchwork Fri Dec 20 04:45:12 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Honnappa Nagarahalli X-Patchwork-Id: 64051 X-Patchwork-Delegate: david.marchand@redhat.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id 9751CA04F3; Fri, 20 Dec 2019 05:46:27 +0100 (CET) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 49BA61BF87; Fri, 20 Dec 2019 05:46:07 +0100 (CET) Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by dpdk.org (Postfix) with ESMTP id 456E11BF76 for ; Fri, 20 Dec 2019 05:45:58 +0100 (CET) Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 2C30C11D4; Thu, 19 Dec 2019 20:45:57 -0800 (PST) Received: from qc2400f-1.austin.arm.com (qc2400f-1.austin.arm.com [10.118.14.48]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 187AA3F86C; Thu, 19 Dec 2019 20:45:57 -0800 (PST) From: Honnappa Nagarahalli To: olivier.matz@6wind.com, sthemmin@microsoft.com, jerinj@marvell.com, bruce.richardson@intel.com, david.marchand@redhat.com, pbhagavatula@marvell.com, konstantin.ananyev@intel.com, honnappa.nagarahalli@arm.com Cc: dev@dpdk.org, dharmik.thakkar@arm.com, ruifeng.wang@arm.com, gavin.hu@arm.com, nd@arm.com Date: Thu, 19 Dec 2019 22:45:12 -0600 Message-Id: <20191220044524.32910-6-honnappa.nagarahalli@arm.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20191220044524.32910-1-honnappa.nagarahalli@arm.com> References: <20190906190510.11146-1-honnappa.nagarahalli@arm.com> <20191220044524.32910-1-honnappa.nagarahalli@arm.com> Subject: [dpdk-dev] [PATCH v7 05/17] test/ring: add default, single element test cases X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Add default, single element test cases for rte_ring_xxx_elem APIs. The burst APIs are kept as is since they are being tested with a ring created with SP/SC flags. They are further enhanced with bulk APIs. 
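For context, the single element path of the rte_ring_xxx_elem family copies esize bytes per slot instead of a void pointer, with esize required to be a multiple of 4. A minimal standalone sketch of one create/enqueue/dequeue cycle; the struct and ring names are illustrative, and rte_ring_elem.h is assumed to be the header added earlier in this series.

#include <stdint.h>
#include <rte_ring.h>
#include <rte_ring_elem.h>

/* 16B payload stored by value in the ring */
struct flow_key {
        uint64_t id;
        uint32_t a;
        uint32_t b;
};

static int
elem_single_enq_deq(void)
{
        struct flow_key in = {1, 2, 3}, out;
        struct rte_ring *r;

        /* esize must be a multiple of 4; here each slot holds 16 bytes */
        r = rte_ring_create_elem("flow_ring", sizeof(struct flow_key),
                                 1024, SOCKET_ID_ANY,
                                 RING_F_SP_ENQ | RING_F_SC_DEQ);
        if (r == NULL)
                return -1;

        /* single element enqueue/dequeue copies the object by value */
        if (rte_ring_enqueue_elem(r, &in, sizeof(in)) != 0)
                goto fail;
        if (rte_ring_dequeue_elem(r, &out, sizeof(out)) != 0)
                goto fail;

        rte_ring_free(r);
        return 0;
fail:
        rte_ring_free(r);
        return -1;
}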
Signed-off-by: Honnappa Nagarahalli Reviewed-by: Gavin Hu --- app/test/test_ring.c | 129 +++++++++++++++++++++++++++---------------- 1 file changed, 81 insertions(+), 48 deletions(-) diff --git a/app/test/test_ring.c b/app/test/test_ring.c index d4f40ad20..1025097c8 100644 --- a/app/test/test_ring.c +++ b/app/test/test_ring.c @@ -620,78 +620,111 @@ test_lookup_null(void) } /* - * it tests some more basic ring operations + * Test default, single element, bulk and burst APIs */ static int test_ring_basic_ex(void) { int ret = -1; - unsigned i; + unsigned int i, j; struct rte_ring *rp = NULL; - void **obj = NULL; + void *obj = NULL; - obj = rte_calloc("test_ring_basic_ex_malloc", RING_SIZE, sizeof(void *), 0); - if (obj == NULL) { - printf("test_ring_basic_ex fail to rte_malloc\n"); - goto fail_test; - } + for (i = 0; i < RTE_DIM(esize); i++) { + obj = test_ring_calloc(RING_SIZE, esize[i]); + if (obj == NULL) { + printf("test_ring_basic_ex fail to rte_malloc\n"); + goto fail_test; + } - rp = rte_ring_create("test_ring_basic_ex", RING_SIZE, SOCKET_ID_ANY, - RING_F_SP_ENQ | RING_F_SC_DEQ); - if (rp == NULL) { - printf("test_ring_basic_ex fail to create ring\n"); - goto fail_test; - } + TEST_RING_CREATE("test_ring_basic_ex", esize[i], RING_SIZE, + SOCKET_ID_ANY, + RING_F_SP_ENQ | RING_F_SC_DEQ, rp); + if (rp == NULL) { + printf("test_ring_basic_ex fail to create ring\n"); + goto fail_test; + } - if (rte_ring_lookup("test_ring_basic_ex") != rp) { - goto fail_test; - } + if (rte_ring_lookup("test_ring_basic_ex") != rp) { + printf("test_ring_basic_ex ring is not found\n"); + goto fail_test; + } - if (rte_ring_empty(rp) != 1) { - printf("test_ring_basic_ex ring is not empty but it should be\n"); - goto fail_test; - } + if (rte_ring_empty(rp) != 1) { + printf("test_ring_basic_ex ring is not empty but it should be\n"); + goto fail_test; + } - printf("%u ring entries are now free\n", rte_ring_free_count(rp)); + printf("%u ring entries are now free\n", + rte_ring_free_count(rp)); - for (i = 0; i < RING_SIZE; i ++) { - rte_ring_enqueue(rp, obj[i]); - } + for (j = 0; j < RING_SIZE; j++) { + TEST_RING_ENQUEUE(rp, obj, esize[i], 1, ret, + TEST_RING_N | TEST_RING_SL); + } - if (rte_ring_full(rp) != 1) { - printf("test_ring_basic_ex ring is not full but it should be\n"); - goto fail_test; - } + if (rte_ring_full(rp) != 1) { + printf("test_ring_basic_ex ring is not full but it should be\n"); + goto fail_test; + } - for (i = 0; i < RING_SIZE; i ++) { - rte_ring_dequeue(rp, &obj[i]); - } + for (j = 0; j < RING_SIZE; j++) { + TEST_RING_DEQUEUE(rp, obj, esize[i], 1, ret, + TEST_RING_N | TEST_RING_SL); + } - if (rte_ring_empty(rp) != 1) { - printf("test_ring_basic_ex ring is not empty but it should be\n"); - goto fail_test; - } + if (rte_ring_empty(rp) != 1) { + printf("test_ring_basic_ex ring is not empty but it should be\n"); + goto fail_test; + } - /* Covering the ring burst operation */ - ret = rte_ring_enqueue_burst(rp, obj, 2, NULL); - if (ret != 2) { - printf("test_ring_basic_ex: rte_ring_enqueue_burst fails \n"); - goto fail_test; - } + /* Following tests use the configured flags to decide + * SP/SC or MP/MC. 
+ */ + /* Covering the ring burst operation */ + TEST_RING_ENQUEUE(rp, obj, esize[i], 2, ret, + TEST_RING_N | TEST_RING_BR); + if (ret != 2) { + printf("test_ring_basic_ex: rte_ring_enqueue_burst fails\n"); + goto fail_test; + } + + TEST_RING_DEQUEUE(rp, obj, esize[i], 2, ret, + TEST_RING_N | TEST_RING_BR); + if (ret != 2) { + printf("test_ring_basic_ex: rte_ring_dequeue_burst fails\n"); + goto fail_test; + } + + /* Covering the ring bulk operation */ + TEST_RING_ENQUEUE(rp, obj, esize[i], 2, ret, + TEST_RING_N | TEST_RING_BL); + if (ret != 2) { + printf("test_ring_basic_ex: rte_ring_enqueue_bulk fails\n"); + goto fail_test; + } + + TEST_RING_DEQUEUE(rp, obj, esize[i], 2, ret, + TEST_RING_N | TEST_RING_BL); + if (ret != 2) { + printf("test_ring_basic_ex: rte_ring_dequeue_bulk fails\n"); + goto fail_test; + } - ret = rte_ring_dequeue_burst(rp, obj, 2, NULL); - if (ret != 2) { - printf("test_ring_basic_ex: rte_ring_dequeue_burst fails \n"); - goto fail_test; + rte_ring_free(rp); + rte_free(obj); + rp = NULL; + obj = NULL; } - ret = 0; + return 0; + fail_test: rte_ring_free(rp); if (obj != NULL) rte_free(obj); - return ret; + return -1; } static int From patchwork Fri Dec 20 04:45:13 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Honnappa Nagarahalli X-Patchwork-Id: 64054 X-Patchwork-Delegate: david.marchand@redhat.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id 55A46A04F3; Fri, 20 Dec 2019 05:46:53 +0100 (CET) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id AC6C11BF9D; Fri, 20 Dec 2019 05:46:13 +0100 (CET) Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by dpdk.org (Postfix) with ESMTP id AAADD1BF73 for ; Fri, 20 Dec 2019 05:45:58 +0100 (CET) Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 386EC11FB; Thu, 19 Dec 2019 20:45:57 -0800 (PST) Received: from qc2400f-1.austin.arm.com (qc2400f-1.austin.arm.com [10.118.14.48]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 2D4093F718; Thu, 19 Dec 2019 20:45:57 -0800 (PST) From: Honnappa Nagarahalli To: olivier.matz@6wind.com, sthemmin@microsoft.com, jerinj@marvell.com, bruce.richardson@intel.com, david.marchand@redhat.com, pbhagavatula@marvell.com, konstantin.ananyev@intel.com, honnappa.nagarahalli@arm.com Cc: dev@dpdk.org, dharmik.thakkar@arm.com, ruifeng.wang@arm.com, gavin.hu@arm.com, nd@arm.com Date: Thu, 19 Dec 2019 22:45:13 -0600 Message-Id: <20191220044524.32910-7-honnappa.nagarahalli@arm.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20191220044524.32910-1-honnappa.nagarahalli@arm.com> References: <20190906190510.11146-1-honnappa.nagarahalli@arm.com> <20191220044524.32910-1-honnappa.nagarahalli@arm.com> Subject: [dpdk-dev] [PATCH v7 06/17] test/ring: rte_ring_xxx_elem test cases for exact size ring X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Test cases for the exact size ring are changed to test rte_ring_xxx_elem APIs. 
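The checks below hinge on how RING_F_EXACT_SZ changes the size/capacity relationship: a standard ring created with a count of 16 keeps one slot reserved and can hold only 15 objects, while an exact-size ring rounds its internal size up (here to 32) but reports and enforces a capacity of exactly 16. A standalone sketch of that relationship with the legacy create API (names illustrative):

#include <stdio.h>
#include <rte_ring.h>

static void
exact_size_demo(void)
{
        struct rte_ring *std_r, *exact_r;

        std_r = rte_ring_create("std_demo", 16, SOCKET_ID_ANY,
                                RING_F_SP_ENQ | RING_F_SC_DEQ);
        exact_r = rte_ring_create("exact_demo", 16, SOCKET_ID_ANY,
                                  RING_F_SP_ENQ | RING_F_SC_DEQ |
                                  RING_F_EXACT_SZ);
        if (std_r == NULL || exact_r == NULL)
                goto out;

        /* standard ring: size 16, usable capacity 15 (one slot reserved) */
        printf("std:   size %u, capacity %u\n",
               rte_ring_get_size(std_r), rte_ring_get_capacity(std_r));
        /* exact-size ring: size rounded up to 32, capacity exactly 16 */
        printf("exact: size %u, capacity %u\n",
               rte_ring_get_size(exact_r), rte_ring_get_capacity(exact_r));
out:
        rte_ring_free(std_r);      /* rte_ring_free(NULL) is a no-op */
        rte_ring_free(exact_r);
}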
Signed-off-by: Honnappa Nagarahalli Reviewed-by: Gavin Hu --- app/test/test_ring.c | 147 ++++++++++++++++++++++++++----------------- 1 file changed, 89 insertions(+), 58 deletions(-) diff --git a/app/test/test_ring.c b/app/test/test_ring.c index 1025097c8..294e3ee10 100644 --- a/app/test/test_ring.c +++ b/app/test/test_ring.c @@ -727,75 +727,106 @@ test_ring_basic_ex(void) return -1; } +/* + * Basic test cases with exact size ring. + */ static int test_ring_with_exact_size(void) { - struct rte_ring *std_ring = NULL, *exact_sz_ring = NULL; - void *ptr_array[16]; - static const unsigned int ring_sz = RTE_DIM(ptr_array); - unsigned int i; + struct rte_ring *std_r = NULL, *exact_sz_r = NULL; + void *obj; + const unsigned int ring_sz = 16; + unsigned int i, j; int ret = -1; - std_ring = rte_ring_create("std", ring_sz, rte_socket_id(), - RING_F_SP_ENQ | RING_F_SC_DEQ); - if (std_ring == NULL) { - printf("%s: error, can't create std ring\n", __func__); - goto end; - } - exact_sz_ring = rte_ring_create("exact sz", ring_sz, rte_socket_id(), - RING_F_SP_ENQ | RING_F_SC_DEQ | RING_F_EXACT_SZ); - if (exact_sz_ring == NULL) { - printf("%s: error, can't create exact size ring\n", __func__); - goto end; - } - - /* - * Check that the exact size ring is bigger than the standard ring - */ - if (rte_ring_get_size(std_ring) >= rte_ring_get_size(exact_sz_ring)) { - printf("%s: error, std ring (size: %u) is not smaller than exact size one (size %u)\n", - __func__, - rte_ring_get_size(std_ring), - rte_ring_get_size(exact_sz_ring)); - goto end; - } - /* - * check that the exact_sz_ring can hold one more element than the - * standard ring. (16 vs 15 elements) - */ - for (i = 0; i < ring_sz - 1; i++) { - rte_ring_enqueue(std_ring, NULL); - rte_ring_enqueue(exact_sz_ring, NULL); - } - if (rte_ring_enqueue(std_ring, NULL) != -ENOBUFS) { - printf("%s: error, unexpected successful enqueue\n", __func__); - goto end; - } - if (rte_ring_enqueue(exact_sz_ring, NULL) == -ENOBUFS) { - printf("%s: error, enqueue failed\n", __func__); - goto end; - } + for (i = 0; i < RTE_DIM(esize); i++) { + test_ring_print_test_string("Test exact size ring", + TEST_RING_IGNORE_API_TYPE, + esize[i]); + + /* alloc object pointers */ + obj = test_ring_calloc(16, esize[i]); + if (obj == NULL) + goto test_fail; + + TEST_RING_CREATE("std", esize[i], ring_sz, rte_socket_id(), + RING_F_SP_ENQ | RING_F_SC_DEQ, std_r); + if (std_r == NULL) { + printf("%s: error, can't create std ring\n", __func__); + goto test_fail; + } + TEST_RING_CREATE("exact sz", esize[i], ring_sz, rte_socket_id(), + RING_F_SP_ENQ | RING_F_SC_DEQ | RING_F_EXACT_SZ, + exact_sz_r); + if (exact_sz_r == NULL) { + printf("%s: error, can't create exact size ring\n", + __func__); + goto test_fail; + } - /* check that dequeue returns the expected number of elements */ - if (rte_ring_dequeue_burst(exact_sz_ring, ptr_array, - RTE_DIM(ptr_array), NULL) != ring_sz) { - printf("%s: error, failed to dequeue expected nb of elements\n", + /* + * Check that the exact size ring is bigger than the + * standard ring + */ + if (rte_ring_get_size(std_r) >= rte_ring_get_size(exact_sz_r)) { + printf("%s: error, std ring (size: %u) is not smaller than exact size one (size %u)\n", + __func__, + rte_ring_get_size(std_r), + rte_ring_get_size(exact_sz_r)); + goto test_fail; + } + /* + * check that the exact_sz_ring can hold one more element + * than the standard ring. 
(16 vs 15 elements) + */ + for (j = 0; j < ring_sz - 1; j++) { + TEST_RING_ENQUEUE(std_r, obj, esize[i], 1, ret, + TEST_RING_N | TEST_RING_SL); + TEST_RING_ENQUEUE(exact_sz_r, obj, esize[i], 1, + ret, TEST_RING_N | TEST_RING_SL); + } + TEST_RING_ENQUEUE(std_r, obj, esize[i], 1, ret, + TEST_RING_N | TEST_RING_SL); + if (ret != -ENOBUFS) { + printf("%s: error, unexpected successful enqueue\n", __func__); - goto end; - } + goto test_fail; + } + TEST_RING_ENQUEUE(exact_sz_r, obj, esize[i], 1, ret, + TEST_RING_N | TEST_RING_SL); + if (ret == -ENOBUFS) { + printf("%s: error, enqueue failed\n", __func__); + goto test_fail; + } - /* check that the capacity function returns expected value */ - if (rte_ring_get_capacity(exact_sz_ring) != ring_sz) { - printf("%s: error, incorrect ring capacity reported\n", + /* check that dequeue returns the expected number of elements */ + TEST_RING_DEQUEUE(exact_sz_r, obj, esize[i], ring_sz, + ret, TEST_RING_N | TEST_RING_BR); + if (ret != (int)ring_sz) { + printf("%s: error, failed to dequeue expected nb of elements\n", __func__); - goto end; + goto test_fail; + } + + /* check that the capacity function returns expected value */ + if (rte_ring_get_capacity(exact_sz_r) != ring_sz) { + printf("%s: error, incorrect ring capacity reported\n", + __func__); + goto test_fail; + } + + rte_free(obj); + rte_ring_free(std_r); + rte_ring_free(exact_sz_r); } - ret = 0; /* all ok if we get here */ -end: - rte_ring_free(std_ring); - rte_ring_free(exact_sz_ring); - return ret; + return 0; + +test_fail: + rte_free(obj); + rte_ring_free(std_r); + rte_ring_free(exact_sz_r); + return -1; } static int From patchwork Fri Dec 20 04:45:14 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Honnappa Nagarahalli X-Patchwork-Id: 64053 X-Patchwork-Delegate: david.marchand@redhat.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id 2CEBAA04F3; Fri, 20 Dec 2019 05:46:46 +0100 (CET) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id AC3741BF91; Fri, 20 Dec 2019 05:46:11 +0100 (CET) Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by dpdk.org (Postfix) with ESMTP id A14561BE8E for ; Fri, 20 Dec 2019 05:45:58 +0100 (CET) Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 4D5AC12FC; Thu, 19 Dec 2019 20:45:57 -0800 (PST) Received: from qc2400f-1.austin.arm.com (qc2400f-1.austin.arm.com [10.118.14.48]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 37F363F86C; Thu, 19 Dec 2019 20:45:57 -0800 (PST) From: Honnappa Nagarahalli To: olivier.matz@6wind.com, sthemmin@microsoft.com, jerinj@marvell.com, bruce.richardson@intel.com, david.marchand@redhat.com, pbhagavatula@marvell.com, konstantin.ananyev@intel.com, honnappa.nagarahalli@arm.com Cc: dev@dpdk.org, dharmik.thakkar@arm.com, ruifeng.wang@arm.com, gavin.hu@arm.com, nd@arm.com Date: Thu, 19 Dec 2019 22:45:14 -0600 Message-Id: <20191220044524.32910-8-honnappa.nagarahalli@arm.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20191220044524.32910-1-honnappa.nagarahalli@arm.com> References: <20190906190510.11146-1-honnappa.nagarahalli@arm.com> <20191220044524.32910-1-honnappa.nagarahalli@arm.com> Subject: [dpdk-dev] [PATCH v7 07/17] test/ring: negative test cases for rte_ring_xxx_elem APIs X-BeenThere: 
dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" All the negative test cases are consolidated into a single function. This provides the ability to add test cases for rte_ring_xxx_elem APIs easily. Signed-off-by: Honnappa Nagarahalli Reviewed-by: Gavin Hu --- app/test/test_ring.c | 176 ++++++++++++++++++++++--------------------- 1 file changed, 91 insertions(+), 85 deletions(-) diff --git a/app/test/test_ring.c b/app/test/test_ring.c index 294e3ee10..552e8b53a 100644 --- a/app/test/test_ring.c +++ b/app/test/test_ring.c @@ -113,6 +113,93 @@ test_ring_print_test_string(const char *istr, unsigned int api_type, int esize) printf("burst\n"); } +/* + * Various negative test cases. + */ +static int +test_ring_negative_tests(void) +{ + struct rte_ring *rp = NULL; + struct rte_ring *rt = NULL; + unsigned int i; + + /* Test with esize not a multiple of 4 */ + TEST_RING_CREATE("test_bad_element_size", 23, + RING_SIZE + 1, SOCKET_ID_ANY, 0, rp); + if (rp != NULL) { + printf("Test failed to detect invalid element size\n"); + goto test_fail; + } + + + for (i = 0; i < RTE_DIM(esize); i++) { + /* Test if ring size is not power of 2 */ + TEST_RING_CREATE("test_bad_ring_size", esize[i], + RING_SIZE + 1, SOCKET_ID_ANY, 0, rp); + if (rp != NULL) { + printf("Test failed to detect odd count\n"); + goto test_fail; + } + + /* Test if ring size is exceeding the limit */ + TEST_RING_CREATE("test_bad_ring_size", esize[i], + RTE_RING_SZ_MASK + 1, SOCKET_ID_ANY, + 0, rp); + if (rp != NULL) { + printf("Test failed to detect limits\n"); + goto test_fail; + } + + /* Tests if lookup returns NULL on non-existing ring */ + rp = rte_ring_lookup("ring_not_found"); + if (rp != NULL && rte_errno != ENOENT) { + printf("Test failed to detect NULL ring lookup\n"); + goto test_fail; + } + + /* Test to if a non-power of 2 count causes the create + * function to fail correctly + */ + TEST_RING_CREATE("test_ring_count", esize[i], 4097, + SOCKET_ID_ANY, 0, rp); + if (rp != NULL) + goto test_fail; + + TEST_RING_CREATE("test_ring_negative", esize[i], RING_SIZE, + SOCKET_ID_ANY, + RING_F_SP_ENQ | RING_F_SC_DEQ, rp); + if (rp == NULL) { + printf("test_ring_negative fail to create ring\n"); + goto test_fail; + } + + if (rte_ring_lookup("test_ring_negative") != rp) + goto test_fail; + + if (rte_ring_empty(rp) != 1) { + printf("test_ring_nagative ring is not empty but it should be\n"); + goto test_fail; + } + + /* Tests if it would always fail to create ring with an used + * ring name. 
+ */ + TEST_RING_CREATE("test_ring_negative", esize[i], RING_SIZE, + SOCKET_ID_ANY, 0, rt); + if (rt != NULL) + goto test_fail; + + rte_ring_free(rp); + } + + return 0; + +test_fail: + + rte_ring_free(rp); + return -1; +} + static int test_ring_basic(struct rte_ring *r) { @@ -555,70 +642,6 @@ test_ring_burst_bulk_tests(unsigned int api_type) return -1; } -/* - * it will always fail to create ring with a wrong ring size number in this function - */ -static int -test_ring_creation_with_wrong_size(void) -{ - struct rte_ring * rp = NULL; - - /* Test if ring size is not power of 2 */ - rp = rte_ring_create("test_bad_ring_size", RING_SIZE + 1, SOCKET_ID_ANY, 0); - if (NULL != rp) { - return -1; - } - - /* Test if ring size is exceeding the limit */ - rp = rte_ring_create("test_bad_ring_size", (RTE_RING_SZ_MASK + 1), SOCKET_ID_ANY, 0); - if (NULL != rp) { - return -1; - } - return 0; -} - -/* - * it tests if it would always fail to create ring with an used ring name - */ -static int -test_ring_creation_with_an_used_name(void) -{ - struct rte_ring * rp; - - rp = rte_ring_create("test", RING_SIZE, SOCKET_ID_ANY, 0); - if (NULL != rp) - return -1; - - return 0; -} - -/* - * Test to if a non-power of 2 count causes the create - * function to fail correctly - */ -static int -test_create_count_odd(void) -{ - struct rte_ring *r = rte_ring_create("test_ring_count", - 4097, SOCKET_ID_ANY, 0 ); - if(r != NULL){ - return -1; - } - return 0; -} - -static int -test_lookup_null(void) -{ - struct rte_ring *rlp = rte_ring_lookup("ring_not_found"); - if (rlp ==NULL) - if (rte_errno != ENOENT){ - printf( "test failed to returnn error on null pointer\n"); - return -1; - } - return 0; -} - /* * Test default, single element, bulk and burst APIs */ @@ -835,6 +858,10 @@ test_ring(void) unsigned int i, j; struct rte_ring *r = NULL; + /* Negative test cases */ + if (test_ring_negative_tests() < 0) + goto test_fail; + /* some more basic operations */ if (test_ring_basic_ex() < 0) goto test_fail; @@ -861,27 +888,6 @@ test_ring(void) if (test_ring_basic(r) < 0) goto test_fail; - /* basic operations */ - if ( test_create_count_odd() < 0){ - printf("Test failed to detect odd count\n"); - goto test_fail; - } else - printf("Test detected odd count\n"); - - if ( test_lookup_null() < 0){ - printf("Test failed to detect NULL ring lookup\n"); - goto test_fail; - } else - printf("Test detected NULL ring lookup\n"); - - /* test of creating ring with wrong size */ - if (test_ring_creation_with_wrong_size() < 0) - goto test_fail; - - /* test of creation ring with an used name */ - if (test_ring_creation_with_an_used_name() < 0) - goto test_fail; - if (test_ring_with_exact_size() < 0) goto test_fail; From patchwork Fri Dec 20 04:45:15 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Honnappa Nagarahalli X-Patchwork-Id: 64055 X-Patchwork-Delegate: david.marchand@redhat.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id 03D91A04F3; Fri, 20 Dec 2019 05:47:03 +0100 (CET) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 2707F1BF90; Fri, 20 Dec 2019 05:46:16 +0100 (CET) Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by dpdk.org (Postfix) with ESMTP id D30B41BF78 for ; Fri, 20 Dec 2019 05:45:58 +0100 (CET) Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by 
usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 63CE71396; Thu, 19 Dec 2019 20:45:57 -0800 (PST) Received: from qc2400f-1.austin.arm.com (qc2400f-1.austin.arm.com [10.118.14.48]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 4E79F3F718; Thu, 19 Dec 2019 20:45:57 -0800 (PST) From: Honnappa Nagarahalli To: olivier.matz@6wind.com, sthemmin@microsoft.com, jerinj@marvell.com, bruce.richardson@intel.com, david.marchand@redhat.com, pbhagavatula@marvell.com, konstantin.ananyev@intel.com, honnappa.nagarahalli@arm.com Cc: dev@dpdk.org, dharmik.thakkar@arm.com, ruifeng.wang@arm.com, gavin.hu@arm.com, nd@arm.com Date: Thu, 19 Dec 2019 22:45:15 -0600 Message-Id: <20191220044524.32910-9-honnappa.nagarahalli@arm.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20191220044524.32910-1-honnappa.nagarahalli@arm.com> References: <20190906190510.11146-1-honnappa.nagarahalli@arm.com> <20191220044524.32910-1-honnappa.nagarahalli@arm.com> Subject: [dpdk-dev] [PATCH v7 08/17] test/ring: remove duplicate test cases X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" The test cases in the function test_ring_basic are already covered by the function test_ring_burst_bulk_tests and others. Signed-off-by: Honnappa Nagarahalli Reviewed-by: Gavin Hu --- app/test/test_ring.c | 218 ------------------------------------------- 1 file changed, 218 deletions(-) diff --git a/app/test/test_ring.c b/app/test/test_ring.c index 552e8b53a..a082f0137 100644 --- a/app/test/test_ring.c +++ b/app/test/test_ring.c @@ -200,206 +200,6 @@ test_ring_negative_tests(void) return -1; } -static int -test_ring_basic(struct rte_ring *r) -{ - void **src = NULL, **cur_src = NULL, **dst = NULL, **cur_dst = NULL; - int ret; - unsigned i, num_elems; - - /* alloc dummy object pointers */ - src = malloc(RING_SIZE*2*sizeof(void *)); - if (src == NULL) - goto fail; - - for (i = 0; i < RING_SIZE*2 ; i++) { - src[i] = (void *)(unsigned long)i; - } - cur_src = src; - - /* alloc some room for copied objects */ - dst = malloc(RING_SIZE*2*sizeof(void *)); - if (dst == NULL) - goto fail; - - memset(dst, 0, RING_SIZE*2*sizeof(void *)); - cur_dst = dst; - - printf("enqueue 1 obj\n"); - ret = rte_ring_sp_enqueue_bulk(r, cur_src, 1, NULL); - cur_src += 1; - if (ret == 0) - goto fail; - - printf("enqueue 2 objs\n"); - ret = rte_ring_sp_enqueue_bulk(r, cur_src, 2, NULL); - cur_src += 2; - if (ret == 0) - goto fail; - - printf("enqueue MAX_BULK objs\n"); - ret = rte_ring_sp_enqueue_bulk(r, cur_src, MAX_BULK, NULL); - cur_src += MAX_BULK; - if (ret == 0) - goto fail; - - printf("dequeue 1 obj\n"); - ret = rte_ring_sc_dequeue_bulk(r, cur_dst, 1, NULL); - cur_dst += 1; - if (ret == 0) - goto fail; - - printf("dequeue 2 objs\n"); - ret = rte_ring_sc_dequeue_bulk(r, cur_dst, 2, NULL); - cur_dst += 2; - if (ret == 0) - goto fail; - - printf("dequeue MAX_BULK objs\n"); - ret = rte_ring_sc_dequeue_bulk(r, cur_dst, MAX_BULK, NULL); - cur_dst += MAX_BULK; - if (ret == 0) - goto fail; - - /* check data */ - if (memcmp(src, dst, cur_dst - dst)) { - rte_hexdump(stdout, "src", src, cur_src - src); - rte_hexdump(stdout, "dst", dst, cur_dst - dst); - printf("data after dequeue is not the same\n"); - goto fail; - } - cur_src = src; - cur_dst = dst; - - printf("enqueue 1 obj\n"); - ret = rte_ring_mp_enqueue_bulk(r, cur_src, 1, NULL); - cur_src += 1; - if (ret == 0) - goto fail; - - 
printf("enqueue 2 objs\n"); - ret = rte_ring_mp_enqueue_bulk(r, cur_src, 2, NULL); - cur_src += 2; - if (ret == 0) - goto fail; - - printf("enqueue MAX_BULK objs\n"); - ret = rte_ring_mp_enqueue_bulk(r, cur_src, MAX_BULK, NULL); - cur_src += MAX_BULK; - if (ret == 0) - goto fail; - - printf("dequeue 1 obj\n"); - ret = rte_ring_mc_dequeue_bulk(r, cur_dst, 1, NULL); - cur_dst += 1; - if (ret == 0) - goto fail; - - printf("dequeue 2 objs\n"); - ret = rte_ring_mc_dequeue_bulk(r, cur_dst, 2, NULL); - cur_dst += 2; - if (ret == 0) - goto fail; - - printf("dequeue MAX_BULK objs\n"); - ret = rte_ring_mc_dequeue_bulk(r, cur_dst, MAX_BULK, NULL); - cur_dst += MAX_BULK; - if (ret == 0) - goto fail; - - /* check data */ - if (memcmp(src, dst, cur_dst - dst)) { - rte_hexdump(stdout, "src", src, cur_src - src); - rte_hexdump(stdout, "dst", dst, cur_dst - dst); - printf("data after dequeue is not the same\n"); - goto fail; - } - cur_src = src; - cur_dst = dst; - - printf("fill and empty the ring\n"); - for (i = 0; i X-Patchwork-Id: 64056 X-Patchwork-Delegate: david.marchand@redhat.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id C6267A04F3; Fri, 20 Dec 2019 05:47:12 +0100 (CET) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 998E11BFA1; Fri, 20 Dec 2019 05:46:17 +0100 (CET) Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by dpdk.org (Postfix) with ESMTP id DA47E1BF74 for ; Fri, 20 Dec 2019 05:45:58 +0100 (CET) Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 6D605139F; Thu, 19 Dec 2019 20:45:57 -0800 (PST) Received: from qc2400f-1.austin.arm.com (qc2400f-1.austin.arm.com [10.118.14.48]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 64E403F718; Thu, 19 Dec 2019 20:45:57 -0800 (PST) From: Honnappa Nagarahalli To: olivier.matz@6wind.com, sthemmin@microsoft.com, jerinj@marvell.com, bruce.richardson@intel.com, david.marchand@redhat.com, pbhagavatula@marvell.com, konstantin.ananyev@intel.com, honnappa.nagarahalli@arm.com Cc: dev@dpdk.org, dharmik.thakkar@arm.com, ruifeng.wang@arm.com, gavin.hu@arm.com, nd@arm.com Date: Thu, 19 Dec 2019 22:45:16 -0600 Message-Id: <20191220044524.32910-10-honnappa.nagarahalli@arm.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20191220044524.32910-1-honnappa.nagarahalli@arm.com> References: <20190906190510.11146-1-honnappa.nagarahalli@arm.com> <20191220044524.32910-1-honnappa.nagarahalli@arm.com> Subject: [dpdk-dev] [PATCH v7 09/17] test/ring: removed unused variable synchro X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Remove unused variable synchro Signed-off-by: Honnappa Nagarahalli Reviewed-by: Gavin Hu --- app/test/test_ring.c | 4 ---- 1 file changed, 4 deletions(-) diff --git a/app/test/test_ring.c b/app/test/test_ring.c index a082f0137..57fbd897c 100644 --- a/app/test/test_ring.c +++ b/app/test/test_ring.c @@ -57,8 +57,6 @@ #define RING_SIZE 4096 #define MAX_BULK 32 -static rte_atomic32_t synchro; - #define TEST_RING_VERIFY(exp) \ if (!(exp)) { \ printf("error at %s:%d\tcondition " #exp " failed\n", \ @@ -665,8 +663,6 @@ test_ring(void) if (test_ring_basic_ex() < 0) goto test_fail; - 
rte_atomic32_init(&synchro); - /* Burst and bulk operations with sp/sc, mp/mc and default */ for (j = TEST_RING_BL; j <= TEST_RING_BR; j <<= 1) for (i = TEST_RING_N; i <= TEST_RING_M; i <<= 1) From patchwork Fri Dec 20 04:45:17 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Honnappa Nagarahalli X-Patchwork-Id: 64058 X-Patchwork-Delegate: david.marchand@redhat.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id A976DA04F3; Fri, 20 Dec 2019 05:47:27 +0100 (CET) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 4556F1BFAD; Fri, 20 Dec 2019 05:46:20 +0100 (CET) Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by dpdk.org (Postfix) with ESMTP id E98C51BF7A for ; Fri, 20 Dec 2019 05:45:58 +0100 (CET) Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 820C613A1; Thu, 19 Dec 2019 20:45:57 -0800 (PST) Received: from qc2400f-1.austin.arm.com (qc2400f-1.austin.arm.com [10.118.14.48]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 6D9173F86C; Thu, 19 Dec 2019 20:45:57 -0800 (PST) From: Honnappa Nagarahalli To: olivier.matz@6wind.com, sthemmin@microsoft.com, jerinj@marvell.com, bruce.richardson@intel.com, david.marchand@redhat.com, pbhagavatula@marvell.com, konstantin.ananyev@intel.com, honnappa.nagarahalli@arm.com Cc: dev@dpdk.org, dharmik.thakkar@arm.com, ruifeng.wang@arm.com, gavin.hu@arm.com, nd@arm.com Date: Thu, 19 Dec 2019 22:45:17 -0600 Message-Id: <20191220044524.32910-11-honnappa.nagarahalli@arm.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20191220044524.32910-1-honnappa.nagarahalli@arm.com> References: <20190906190510.11146-1-honnappa.nagarahalli@arm.com> <20191220044524.32910-1-honnappa.nagarahalli@arm.com> Subject: [dpdk-dev] [PATCH v7 10/17] test/ring: modify single element enq/deq perf test cases X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Add test cases to test rte_ring_xxx_elem APIs for single element enqueue/dequeue test cases. 
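The single element measurement follows the usual rdtsc pattern: time a long loop of enqueue+dequeue pairs and divide by the iteration count so the fractional part of the average is kept. A minimal sketch of that loop with the legacy APIs; the function name and iteration count are illustrative.

#include <stdint.h>
#include <stdio.h>
#include <rte_cycles.h>
#include <rte_ring.h>

static void
time_single_enq_deq(struct rte_ring *r)
{
        const unsigned int iter_shift = 24;
        const unsigned int iterations = 1 << iter_shift;
        void *obj = NULL, *tmp;
        unsigned int i;

        const uint64_t start = rte_rdtsc();
        for (i = 0; i < iterations; i++) {
                /* ring never fills: each enqueue is drained immediately */
                rte_ring_enqueue(r, obj);
                rte_ring_dequeue(r, &tmp);
        }
        const uint64_t end = rte_rdtsc();

        /* division (not a shift) keeps the fractional part of the average */
        printf("single enq/dequeue: %.2f cycles\n",
               (double)(end - start) / iterations);
}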
Signed-off-by: Honnappa Nagarahalli Reviewed-by: Gavin Hu --- app/test/test_ring_perf.c | 100 ++++++++++++++++++++++++++++++-------- 1 file changed, 80 insertions(+), 20 deletions(-) diff --git a/app/test/test_ring_perf.c b/app/test/test_ring_perf.c index 6c2aca483..5829718c1 100644 --- a/app/test/test_ring_perf.c +++ b/app/test/test_ring_perf.c @@ -13,6 +13,7 @@ #include #include "test.h" +#include "test_ring.h" /* * Ring @@ -41,6 +42,35 @@ struct lcore_pair { static volatile unsigned lcore_count = 0; +static void +test_ring_print_test_string(unsigned int api_type, int esize, + unsigned int bsz, double value) +{ + if (esize == -1) + printf("legacy APIs"); + else + printf("elem APIs: element size %dB", esize); + + if (api_type == TEST_RING_IGNORE_API_TYPE) + return; + + if ((api_type & TEST_RING_N) == TEST_RING_N) + printf(": default enqueue/dequeue: "); + else if ((api_type & TEST_RING_S) == TEST_RING_S) + printf(": SP/SC: "); + else if ((api_type & TEST_RING_M) == TEST_RING_M) + printf(": MP/MC: "); + + if ((api_type & TEST_RING_SL) == TEST_RING_SL) + printf("single: "); + else if ((api_type & TEST_RING_BL) == TEST_RING_BL) + printf("bulk (size: %u): ", bsz); + else if ((api_type & TEST_RING_BR) == TEST_RING_BR) + printf("burst (size: %u): ", bsz); + + printf("%.2F\n", value); +} + /**** Functions to analyse our core mask to get cores for different tests ***/ static int @@ -335,32 +365,35 @@ run_on_all_cores(struct rte_ring *r) * Test function that determines how long an enqueue + dequeue of a single item * takes on a single lcore. Result is for comparison with the bulk enq+deq. */ -static void -test_single_enqueue_dequeue(struct rte_ring *r) +static int +test_single_enqueue_dequeue(struct rte_ring *r, const int esize, + const unsigned int api_type) { - const unsigned iter_shift = 24; - const unsigned iterations = 1< X-Patchwork-Id: 64064 X-Patchwork-Delegate: david.marchand@redhat.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id AF4AEA04F3; Fri, 20 Dec 2019 05:48:21 +0100 (CET) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 48CB21BFD1; Fri, 20 Dec 2019 05:46:28 +0100 (CET) Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by dpdk.org (Postfix) with ESMTP id 107AA1BF73 for ; Fri, 20 Dec 2019 05:46:00 +0100 (CET) Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 8DEA9142F; Thu, 19 Dec 2019 20:45:57 -0800 (PST) Received: from qc2400f-1.austin.arm.com (qc2400f-1.austin.arm.com [10.118.14.48]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 833093F718; Thu, 19 Dec 2019 20:45:57 -0800 (PST) From: Honnappa Nagarahalli To: olivier.matz@6wind.com, sthemmin@microsoft.com, jerinj@marvell.com, bruce.richardson@intel.com, david.marchand@redhat.com, pbhagavatula@marvell.com, konstantin.ananyev@intel.com, honnappa.nagarahalli@arm.com Cc: dev@dpdk.org, dharmik.thakkar@arm.com, ruifeng.wang@arm.com, gavin.hu@arm.com, nd@arm.com Date: Thu, 19 Dec 2019 22:45:18 -0600 Message-Id: <20191220044524.32910-12-honnappa.nagarahalli@arm.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20191220044524.32910-1-honnappa.nagarahalli@arm.com> References: <20190906190510.11146-1-honnappa.nagarahalli@arm.com> <20191220044524.32910-1-honnappa.nagarahalli@arm.com> Subject: [dpdk-dev] [PATCH v7 11/17] test/ring: modify burst enq/deq perf 
test cases X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Add test cases to test legacy and rte_ring_xxx_elem APIs for burst enqueue/dequeue test cases. Signed-off-by: Honnappa Nagarahalli Reviewed-by: Gavin Hu --- app/test/test_ring_perf.c | 78 ++++++++++++++++++++------------------- 1 file changed, 40 insertions(+), 38 deletions(-) diff --git a/app/test/test_ring_perf.c b/app/test/test_ring_perf.c index 5829718c1..508c688dc 100644 --- a/app/test/test_ring_perf.c +++ b/app/test/test_ring_perf.c @@ -397,47 +397,40 @@ test_single_enqueue_dequeue(struct rte_ring *r, const int esize, } /* - * Test that does both enqueue and dequeue on a core using the burst() API calls - * instead of the bulk() calls used in other tests. Results should be the same - * as for the bulk function called on a single lcore. + * Test that does both enqueue and dequeue on a core using the burst/bulk API + * calls Results should be the same as for the bulk function called on a + * single lcore. */ -static void -test_burst_enqueue_dequeue(struct rte_ring *r) +static int +test_burst_bulk_enqueue_dequeue(struct rte_ring *r, const int esize, + const unsigned int api_type) { - const unsigned iter_shift = 23; - const unsigned iterations = 1< X-Patchwork-Id: 64059 X-Patchwork-Delegate: david.marchand@redhat.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id AF5D5A04F3; Fri, 20 Dec 2019 05:47:35 +0100 (CET) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id A8F761BFB4; Fri, 20 Dec 2019 05:46:21 +0100 (CET) Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by dpdk.org (Postfix) with ESMTP id 4B38B1BF7C for ; Fri, 20 Dec 2019 05:45:59 +0100 (CET) Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 972351435; Thu, 19 Dec 2019 20:45:57 -0800 (PST) Received: from qc2400f-1.austin.arm.com (qc2400f-1.austin.arm.com [10.118.14.48]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 8D50C3F86C; Thu, 19 Dec 2019 20:45:57 -0800 (PST) From: Honnappa Nagarahalli To: olivier.matz@6wind.com, sthemmin@microsoft.com, jerinj@marvell.com, bruce.richardson@intel.com, david.marchand@redhat.com, pbhagavatula@marvell.com, konstantin.ananyev@intel.com, honnappa.nagarahalli@arm.com Cc: dev@dpdk.org, dharmik.thakkar@arm.com, ruifeng.wang@arm.com, gavin.hu@arm.com, nd@arm.com Date: Thu, 19 Dec 2019 22:45:19 -0600 Message-Id: <20191220044524.32910-13-honnappa.nagarahalli@arm.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20191220044524.32910-1-honnappa.nagarahalli@arm.com> References: <20190906190510.11146-1-honnappa.nagarahalli@arm.com> <20191220044524.32910-1-honnappa.nagarahalli@arm.com> Subject: [dpdk-dev] [PATCH v7 12/17] test/ring: modify bulk enq/deq perf test cases X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Modify test cases to test legacy and rte_ring_xxx_elem APIs for bulk enqueue/dequeue test cases. 
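The bulk variants are all-or-nothing: a call either moves all n objects and returns n, or moves nothing and returns 0, and this holds for the legacy and _elem paths alike, which is what lets one helper drive both. A sketch of one bulk round trip of 16B elements; the struct and function names are illustrative, and the ring is assumed to have been created with rte_ring_create_elem and the same element size.

#include <stdint.h>
#include <string.h>
#include <rte_common.h>
#include <rte_ring.h>
#include <rte_ring_elem.h>

struct rec16 {
        uint64_t a;
        uint64_t b;
};

static int
bulk_roundtrip_16B(struct rte_ring *r, unsigned int n)
{
        struct rec16 in[32], out[32];

        if (n > RTE_DIM(in))
                return -1;
        memset(in, 0, sizeof(in));

        /* bulk enqueue: returns n on success, 0 if there is no room for all */
        if (rte_ring_enqueue_bulk_elem(r, in, sizeof(struct rec16),
                                       n, NULL) != n)
                return -1;
        /* bulk dequeue: returns n on success, 0 if fewer than n are available */
        if (rte_ring_dequeue_bulk_elem(r, out, sizeof(struct rec16),
                                       n, NULL) != n)
                return -1;
        return 0;
}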
Signed-off-by: Honnappa Nagarahalli Reviewed-by: Gavin Hu --- app/test/test_ring_perf.c | 57 ++++++++++----------------------------- 1 file changed, 14 insertions(+), 43 deletions(-) diff --git a/app/test/test_ring_perf.c b/app/test/test_ring_perf.c index 508c688dc..8a543b6f0 100644 --- a/app/test/test_ring_perf.c +++ b/app/test/test_ring_perf.c @@ -433,46 +433,6 @@ test_burst_bulk_enqueue_dequeue(struct rte_ring *r, const int esize, return 0; } -/* Times enqueue and dequeue on a single lcore */ -static void -test_bulk_enqueue_dequeue(struct rte_ring *r) -{ - const unsigned iter_shift = 23; - const unsigned iterations = 1< X-Patchwork-Id: 64061 X-Patchwork-Delegate: david.marchand@redhat.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id 39D52A04F3; Fri, 20 Dec 2019 05:47:53 +0100 (CET) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 6CF751BFC1; Fri, 20 Dec 2019 05:46:24 +0100 (CET) Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by dpdk.org (Postfix) with ESMTP id 6C72D1BE3D for ; Fri, 20 Dec 2019 05:45:59 +0100 (CET) Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id A0F7E1474; Thu, 19 Dec 2019 20:45:57 -0800 (PST) Received: from qc2400f-1.austin.arm.com (qc2400f-1.austin.arm.com [10.118.14.48]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 971573F718; Thu, 19 Dec 2019 20:45:57 -0800 (PST) From: Honnappa Nagarahalli To: olivier.matz@6wind.com, sthemmin@microsoft.com, jerinj@marvell.com, bruce.richardson@intel.com, david.marchand@redhat.com, pbhagavatula@marvell.com, konstantin.ananyev@intel.com, honnappa.nagarahalli@arm.com Cc: dev@dpdk.org, dharmik.thakkar@arm.com, ruifeng.wang@arm.com, gavin.hu@arm.com, nd@arm.com Date: Thu, 19 Dec 2019 22:45:20 -0600 Message-Id: <20191220044524.32910-14-honnappa.nagarahalli@arm.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20191220044524.32910-1-honnappa.nagarahalli@arm.com> References: <20190906190510.11146-1-honnappa.nagarahalli@arm.com> <20191220044524.32910-1-honnappa.nagarahalli@arm.com> Subject: [dpdk-dev] [PATCH v7 13/17] test/ring: modify bulk empty deq perf test cases X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Modify test cases to test legacy and rte_ring_xxx_elem APIs for empty bulk dequeue test cases. Signed-off-by: Honnappa Nagarahalli Reviewed-by: Gavin Hu --- app/test/test_ring_perf.c | 38 +++++++++++++++++++------------------- 1 file changed, 19 insertions(+), 19 deletions(-) diff --git a/app/test/test_ring_perf.c b/app/test/test_ring_perf.c index 8a543b6f0..0f578c9ae 100644 --- a/app/test/test_ring_perf.c +++ b/app/test/test_ring_perf.c @@ -147,27 +147,24 @@ get_two_sockets(struct lcore_pair *lcp) /* Get cycle counts for dequeuing from an empty ring. 
Should be 2 or 3 cycles */ static void -test_empty_dequeue(struct rte_ring *r) +test_empty_dequeue(struct rte_ring *r, const int esize, + const unsigned int api_type) { - const unsigned iter_shift = 26; - const unsigned iterations = 1< X-Patchwork-Id: 64063 X-Patchwork-Delegate: david.marchand@redhat.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id 4E9DFA04F3; Fri, 20 Dec 2019 05:48:12 +0100 (CET) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 11C701BFCB; Fri, 20 Dec 2019 05:46:27 +0100 (CET) Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by dpdk.org (Postfix) with ESMTP id BD6641B9B5 for ; Fri, 20 Dec 2019 05:45:59 +0100 (CET) Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id B5E071476; Thu, 19 Dec 2019 20:45:57 -0800 (PST) Received: from qc2400f-1.austin.arm.com (qc2400f-1.austin.arm.com [10.118.14.48]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id A07073F86C; Thu, 19 Dec 2019 20:45:57 -0800 (PST) From: Honnappa Nagarahalli To: olivier.matz@6wind.com, sthemmin@microsoft.com, jerinj@marvell.com, bruce.richardson@intel.com, david.marchand@redhat.com, pbhagavatula@marvell.com, konstantin.ananyev@intel.com, honnappa.nagarahalli@arm.com Cc: dev@dpdk.org, dharmik.thakkar@arm.com, ruifeng.wang@arm.com, gavin.hu@arm.com, nd@arm.com Date: Thu, 19 Dec 2019 22:45:21 -0600 Message-Id: <20191220044524.32910-15-honnappa.nagarahalli@arm.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20191220044524.32910-1-honnappa.nagarahalli@arm.com> References: <20190906190510.11146-1-honnappa.nagarahalli@arm.com> <20191220044524.32910-1-honnappa.nagarahalli@arm.com> Subject: [dpdk-dev] [PATCH v7 14/17] test/ring: modify multi-lcore perf test cases X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Modify test cases to test the performance of legacy and rte_ring_xxx_elem APIs for multi lcore scenarios. Signed-off-by: Honnappa Nagarahalli Reviewed-by: Gavin Hu --- app/test/test_ring_perf.c | 175 +++++++++++++++++++++++++------------- 1 file changed, 115 insertions(+), 60 deletions(-) diff --git a/app/test/test_ring_perf.c b/app/test/test_ring_perf.c index 0f578c9ae..b893b5779 100644 --- a/app/test/test_ring_perf.c +++ b/app/test/test_ring_perf.c @@ -178,19 +178,21 @@ struct thread_params { }; /* - * Function that uses rdtsc to measure timing for ring enqueue. Needs pair - * thread running dequeue_bulk function + * Helper function to call bulk SP/MP enqueue functions. 
+ * flag == 0 -> enqueue + * flag == 1 -> dequeue */ -static int -enqueue_bulk(void *p) +static __rte_always_inline int +enqueue_dequeue_bulk_helper(const unsigned int flag, const int esize, + struct thread_params *p) { - const unsigned iter_shift = 23; - const unsigned iterations = 1<r; - const unsigned size = params->size; - unsigned i; - void *burst[MAX_BURST] = {0}; + int ret; + const unsigned int iter_shift = 23; + const unsigned int iterations = 1 << iter_shift; + struct rte_ring *r = p->r; + unsigned int bsize = p->size; + unsigned int i; + void *burst = NULL; #ifdef RTE_USE_C11_MEM_MODEL if (__atomic_add_fetch(&lcore_count, 1, __ATOMIC_RELAXED) != 2) @@ -200,23 +202,55 @@ enqueue_bulk(void *p) while(lcore_count != 2) rte_pause(); + burst = test_ring_calloc(MAX_BURST, esize); + if (burst == NULL) + return -1; + const uint64_t sp_start = rte_rdtsc(); for (i = 0; i < iterations; i++) - while (rte_ring_sp_enqueue_bulk(r, burst, size, NULL) == 0) - rte_pause(); + do { + if (flag == 0) + TEST_RING_ENQUEUE(r, burst, esize, bsize, ret, + TEST_RING_S | TEST_RING_BL); + else if (flag == 1) + TEST_RING_DEQUEUE(r, burst, esize, bsize, ret, + TEST_RING_S | TEST_RING_BL); + if (ret == 0) + rte_pause(); + } while (!ret); const uint64_t sp_end = rte_rdtsc(); const uint64_t mp_start = rte_rdtsc(); for (i = 0; i < iterations; i++) - while (rte_ring_mp_enqueue_bulk(r, burst, size, NULL) == 0) - rte_pause(); + do { + if (flag == 0) + TEST_RING_ENQUEUE(r, burst, esize, bsize, ret, + TEST_RING_M | TEST_RING_BL); + else if (flag == 1) + TEST_RING_DEQUEUE(r, burst, esize, bsize, ret, + TEST_RING_M | TEST_RING_BL); + if (ret == 0) + rte_pause(); + } while (!ret); const uint64_t mp_end = rte_rdtsc(); - params->spsc = ((double)(sp_end - sp_start))/(iterations*size); - params->mpmc = ((double)(mp_end - mp_start))/(iterations*size); + p->spsc = ((double)(sp_end - sp_start))/(iterations * bsize); + p->mpmc = ((double)(mp_end - mp_start))/(iterations * bsize); return 0; } +/* + * Function that uses rdtsc to measure timing for ring enqueue. Needs pair + * thread running dequeue_bulk function + */ +static int +enqueue_bulk(void *p) +{ + struct thread_params *params = p; + + return enqueue_dequeue_bulk_helper(0, -1, params); +} + /* * Function that uses rdtsc to measure timing for ring dequeue. Needs pair * thread running enqueue_bulk function @@ -224,45 +258,41 @@ enqueue_bulk(void *p) static int dequeue_bulk(void *p) { - const unsigned iter_shift = 23; - const unsigned iterations = 1<r; - const unsigned size = params->size; - unsigned i; - void *burst[MAX_BURST] = {0}; -#ifdef RTE_USE_C11_MEM_MODEL - if (__atomic_add_fetch(&lcore_count, 1, __ATOMIC_RELAXED) != 2) -#else - if (__sync_add_and_fetch(&lcore_count, 1) != 2) -#endif - while(lcore_count != 2) - rte_pause(); + return enqueue_dequeue_bulk_helper(1, -1, params); +} - const uint64_t sc_start = rte_rdtsc(); - for (i = 0; i < iterations; i++) - while (rte_ring_sc_dequeue_bulk(r, burst, size, NULL) == 0) - rte_pause(); - const uint64_t sc_end = rte_rdtsc(); +/* + * Function that uses rdtsc to measure timing for ring enqueue. 
Needs pair + * thread running dequeue_bulk function + */ +static int +enqueue_bulk_16B(void *p) +{ + struct thread_params *params = p; - const uint64_t mc_start = rte_rdtsc(); - for (i = 0; i < iterations; i++) - while (rte_ring_mc_dequeue_bulk(r, burst, size, NULL) == 0) - rte_pause(); - const uint64_t mc_end = rte_rdtsc(); + return enqueue_dequeue_bulk_helper(0, 16, params); +} - params->spsc = ((double)(sc_end - sc_start))/(iterations*size); - params->mpmc = ((double)(mc_end - mc_start))/(iterations*size); - return 0; +/* + * Function that uses rdtsc to measure timing for ring dequeue. Needs pair + * thread running enqueue_bulk function + */ +static int +dequeue_bulk_16B(void *p) +{ + struct thread_params *params = p; + + return enqueue_dequeue_bulk_helper(1, 16, params); } /* * Function that calls the enqueue and dequeue bulk functions on pairs of cores. * used to measure ring perf between hyperthreads, cores and sockets. */ -static void -run_on_core_pair(struct lcore_pair *cores, struct rte_ring *r, +static int +run_on_core_pair(struct lcore_pair *cores, struct rte_ring *r, int esize, lcore_function_t f1, lcore_function_t f2) { struct thread_params param1 = {0}, param2 = {0}; @@ -278,14 +308,20 @@ run_on_core_pair(struct lcore_pair *cores, struct rte_ring *r, } else { rte_eal_remote_launch(f1, ¶m1, cores->c1); rte_eal_remote_launch(f2, ¶m2, cores->c2); - rte_eal_wait_lcore(cores->c1); - rte_eal_wait_lcore(cores->c2); + if (rte_eal_wait_lcore(cores->c1) < 0) + return -1; + if (rte_eal_wait_lcore(cores->c2) < 0) + return -1; } - printf("SP/SC bulk enq/dequeue (size: %u): %.2F\n", bulk_sizes[i], - param1.spsc + param2.spsc); - printf("MP/MC bulk enq/dequeue (size: %u): %.2F\n", bulk_sizes[i], - param1.mpmc + param2.mpmc); + test_ring_print_test_string(TEST_RING_S | TEST_RING_BL, esize, + bulk_sizes[i], + param1.spsc + param2.spsc); + test_ring_print_test_string(TEST_RING_M | TEST_RING_BL, esize, + bulk_sizes[i], + param1.mpmc + param2.mpmc); } + + return 0; } static rte_atomic32_t synchro; @@ -466,6 +502,24 @@ test_ring_perf(void) printf("\n### Testing empty bulk deq ###\n"); test_empty_dequeue(r, -1, TEST_RING_S | TEST_RING_BL); test_empty_dequeue(r, -1, TEST_RING_M | TEST_RING_BL); + if (get_two_hyperthreads(&cores) == 0) { + printf("\n### Testing using two hyperthreads ###\n"); + if (run_on_core_pair(&cores, r, -1, enqueue_bulk, + dequeue_bulk) < 0) + return -1; + } + if (get_two_cores(&cores) == 0) { + printf("\n### Testing using two physical cores ###\n"); + if (run_on_core_pair(&cores, r, -1, enqueue_bulk, + dequeue_bulk) < 0) + return -1; + } + if (get_two_sockets(&cores) == 0) { + printf("\n### Testing using two NUMA nodes ###\n"); + if (run_on_core_pair(&cores, r, -1, enqueue_bulk, + dequeue_bulk) < 0) + return -1; + } rte_ring_free(r); TEST_RING_CREATE(RING_NAME, 16, RING_SIZE, rte_socket_id(), 0, r); @@ -494,29 +548,30 @@ test_ring_perf(void) printf("\n### Testing empty bulk deq ###\n"); test_empty_dequeue(r, 16, TEST_RING_S | TEST_RING_BL); test_empty_dequeue(r, 16, TEST_RING_M | TEST_RING_BL); - rte_ring_free(r); - - r = rte_ring_create(RING_NAME, RING_SIZE, rte_socket_id(), 0); - if (r == NULL) - return -1; - if (get_two_hyperthreads(&cores) == 0) { printf("\n### Testing using two hyperthreads ###\n"); - run_on_core_pair(&cores, r, enqueue_bulk, dequeue_bulk); + if (run_on_core_pair(&cores, r, 16, enqueue_bulk_16B, + dequeue_bulk_16B) < 0) + return -1; } if (get_two_cores(&cores) == 0) { printf("\n### Testing using two physical cores ###\n"); - run_on_core_pair(&cores, r, 
enqueue_bulk, dequeue_bulk); + if (run_on_core_pair(&cores, r, 16, enqueue_bulk_16B, + dequeue_bulk_16B) < 0) + return -1; } if (get_two_sockets(&cores) == 0) { printf("\n### Testing using two NUMA nodes ###\n"); - run_on_core_pair(&cores, r, enqueue_bulk, dequeue_bulk); + if (run_on_core_pair(&cores, r, 16, enqueue_bulk_16B, + dequeue_bulk_16B) < 0) + return -1; } printf("\n### Testing using all slave nodes ###\n"); run_on_all_cores(r); rte_ring_free(r); + return 0; } From patchwork Fri Dec 20 04:45:22 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Honnappa Nagarahalli X-Patchwork-Id: 64057 X-Patchwork-Delegate: david.marchand@redhat.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id 3565FA04F3; Fri, 20 Dec 2019 05:47:20 +0100 (CET) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id DBD281BFA7; Fri, 20 Dec 2019 05:46:18 +0100 (CET) Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by dpdk.org (Postfix) with ESMTP id 21EE21BF75 for ; Fri, 20 Dec 2019 05:45:59 +0100 (CET) Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id C9634147A; Thu, 19 Dec 2019 20:45:57 -0800 (PST) Received: from qc2400f-1.austin.arm.com (qc2400f-1.austin.arm.com [10.118.14.48]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id B6EF63F718; Thu, 19 Dec 2019 20:45:57 -0800 (PST) From: Honnappa Nagarahalli To: olivier.matz@6wind.com, sthemmin@microsoft.com, jerinj@marvell.com, bruce.richardson@intel.com, david.marchand@redhat.com, pbhagavatula@marvell.com, konstantin.ananyev@intel.com, honnappa.nagarahalli@arm.com Cc: dev@dpdk.org, dharmik.thakkar@arm.com, ruifeng.wang@arm.com, gavin.hu@arm.com, nd@arm.com Date: Thu, 19 Dec 2019 22:45:22 -0600 Message-Id: <20191220044524.32910-16-honnappa.nagarahalli@arm.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20191220044524.32910-1-honnappa.nagarahalli@arm.com> References: <20190906190510.11146-1-honnappa.nagarahalli@arm.com> <20191220044524.32910-1-honnappa.nagarahalli@arm.com> Subject: [dpdk-dev] [PATCH v7 15/17] test/ring: adjust run-on-all-cores perf test cases X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Adjust run-on-all-cores test case to use legacy APIs. 
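The all-cores case uses the standard EAL launch/wait pattern: start the worker on every other lcore, run it on the master lcore as well, then wait for all workers. A minimal sketch under the assumption that the master/slave lcore iteration macros of this DPDK release are available; load_all_cores() is an illustrative stand-in for the real worker, not test code.

#include <rte_launch.h>
#include <rte_lcore.h>
#include <rte_ring.h>

/* trivial per-lcore worker: one enqueue/dequeue pair on the shared ring */
static int
load_all_cores(void *arg)
{
        struct rte_ring *r = arg;
        void *obj = NULL;

        rte_ring_enqueue(r, obj);
        rte_ring_dequeue(r, &obj);
        return 0;
}

static int
run_on_all_lcores(struct rte_ring *r)
{
        unsigned int lcore_id;

        RTE_LCORE_FOREACH_SLAVE(lcore_id)
                rte_eal_remote_launch(load_all_cores, r, lcore_id);

        load_all_cores(r);      /* the master lcore participates as well */

        RTE_LCORE_FOREACH_SLAVE(lcore_id)
                if (rte_eal_wait_lcore(lcore_id) < 0)
                        return -1;
        return 0;
}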
Signed-off-by: Honnappa Nagarahalli Reviewed-by: Gavin Hu --- app/test/test_ring_perf.c | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/app/test/test_ring_perf.c b/app/test/test_ring_perf.c index b893b5779..fb95e4f2c 100644 --- a/app/test/test_ring_perf.c +++ b/app/test/test_ring_perf.c @@ -520,6 +520,9 @@ test_ring_perf(void) dequeue_bulk) < 0) return -1; } + printf("\n### Testing using all slave nodes ###\n"); + if (run_on_all_cores(r) < 0) + return -1; rte_ring_free(r); TEST_RING_CREATE(RING_NAME, 16, RING_SIZE, rte_socket_id(), 0, r); @@ -567,9 +570,6 @@ test_ring_perf(void) return -1; } - printf("\n### Testing using all slave nodes ###\n"); - run_on_all_cores(r); - rte_ring_free(r); return 0; From patchwork Fri Dec 20 04:45:23 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Honnappa Nagarahalli X-Patchwork-Id: 64062 X-Patchwork-Delegate: david.marchand@redhat.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id 751FBA04F3; Fri, 20 Dec 2019 05:48:02 +0100 (CET) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 900211BFC8; Fri, 20 Dec 2019 05:46:25 +0100 (CET) Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by dpdk.org (Postfix) with ESMTP id 27B5E1BF76 for ; Fri, 20 Dec 2019 05:45:59 +0100 (CET) Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id DFE6D14BF; Thu, 19 Dec 2019 20:45:57 -0800 (PST) Received: from qc2400f-1.austin.arm.com (qc2400f-1.austin.arm.com [10.118.14.48]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id CA2543F86C; Thu, 19 Dec 2019 20:45:57 -0800 (PST) From: Honnappa Nagarahalli To: olivier.matz@6wind.com, sthemmin@microsoft.com, jerinj@marvell.com, bruce.richardson@intel.com, david.marchand@redhat.com, pbhagavatula@marvell.com, konstantin.ananyev@intel.com, honnappa.nagarahalli@arm.com Cc: dev@dpdk.org, dharmik.thakkar@arm.com, ruifeng.wang@arm.com, gavin.hu@arm.com, nd@arm.com Date: Thu, 19 Dec 2019 22:45:23 -0600 Message-Id: <20191220044524.32910-17-honnappa.nagarahalli@arm.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20191220044524.32910-1-honnappa.nagarahalli@arm.com> References: <20190906190510.11146-1-honnappa.nagarahalli@arm.com> <20191220044524.32910-1-honnappa.nagarahalli@arm.com> Subject: [dpdk-dev] [PATCH v7 16/17] lib/hash: use ring with 32b element size to save memory X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" The freelist and external bucket indices are 32b. Using rings that use 32b element sizes will save memory. 
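As a rough illustration of the saving: on 64-bit targets the legacy ring stores each free-slot index as a void * (8 bytes per slot), while a 4-byte-element ring stores the uint32_t index directly, roughly halving the ring's slot storage. A minimal sketch of how such a free-slot ring could be built with the element APIs added earlier in this series (the helper name and the rte_ring_elem.h header name are assumptions for illustration):

#include <stdint.h>
#include <rte_common.h>
#include <rte_ring.h>
#include <rte_ring_elem.h>

/*
 * Build a free-slot ring that carries the 32-bit indices themselves
 * (4 bytes per slot) instead of void * casts (8 bytes on 64-bit).
 * Entry 0 stays reserved for key misses, as in rte_hash_create().
 */
static struct rte_ring *
make_free_slot_ring(const char *name, uint32_t num_slots, int socket_id)
{
	struct rte_ring *r;
	uint32_t i;

	r = rte_ring_create_elem(name, sizeof(uint32_t),
			rte_align32pow2(num_slots), socket_id, 0);
	if (r == NULL)
		return NULL;

	for (i = 1; i < num_slots; i++)
		rte_ring_sp_enqueue_elem(r, &i, sizeof(uint32_t));

	return r;
}

The element APIs expect the element size to be a multiple of 4 bytes, which both the uint32_t indices here and the 16-byte elements used elsewhere in the series satisfy.
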
Signed-off-by: Honnappa Nagarahalli Reviewed-by: Gavin Hu Reviewed-by: Ola Liljedahl --- lib/librte_hash/rte_cuckoo_hash.c | 97 ++++++++++++++++--------------- lib/librte_hash/rte_cuckoo_hash.h | 2 +- 2 files changed, 51 insertions(+), 48 deletions(-) diff --git a/lib/librte_hash/rte_cuckoo_hash.c b/lib/librte_hash/rte_cuckoo_hash.c index 87a4c01f2..734bec2ac 100644 --- a/lib/librte_hash/rte_cuckoo_hash.c +++ b/lib/librte_hash/rte_cuckoo_hash.c @@ -24,7 +24,7 @@ #include #include #include -#include +#include #include #include #include @@ -136,7 +136,6 @@ rte_hash_create(const struct rte_hash_parameters *params) char ring_name[RTE_RING_NAMESIZE]; char ext_ring_name[RTE_RING_NAMESIZE]; unsigned num_key_slots; - unsigned i; unsigned int hw_trans_mem_support = 0, use_local_cache = 0; unsigned int ext_table_support = 0; unsigned int readwrite_concur_support = 0; @@ -213,8 +212,8 @@ rte_hash_create(const struct rte_hash_parameters *params) snprintf(ring_name, sizeof(ring_name), "HT_%s", params->name); /* Create ring (Dummy slot index is not enqueued) */ - r = rte_ring_create(ring_name, rte_align32pow2(num_key_slots), - params->socket_id, 0); + r = rte_ring_create_elem(ring_name, sizeof(uint32_t), + rte_align32pow2(num_key_slots), params->socket_id, 0); if (r == NULL) { RTE_LOG(ERR, HASH, "memory allocation failed\n"); goto err; @@ -227,7 +226,7 @@ rte_hash_create(const struct rte_hash_parameters *params) if (ext_table_support) { snprintf(ext_ring_name, sizeof(ext_ring_name), "HT_EXT_%s", params->name); - r_ext = rte_ring_create(ext_ring_name, + r_ext = rte_ring_create_elem(ext_ring_name, sizeof(uint32_t), rte_align32pow2(num_buckets + 1), params->socket_id, 0); @@ -294,8 +293,8 @@ rte_hash_create(const struct rte_hash_parameters *params) * use bucket index for the linked list and 0 means NULL * for next bucket */ - for (i = 1; i <= num_buckets; i++) - rte_ring_sp_enqueue(r_ext, (void *)((uintptr_t) i)); + for (uint32_t i = 1; i <= num_buckets; i++) + rte_ring_sp_enqueue_elem(r_ext, &i, sizeof(uint32_t)); if (readwrite_concur_lf_support) { ext_bkt_to_free = rte_zmalloc(NULL, sizeof(uint32_t) * @@ -433,8 +432,8 @@ rte_hash_create(const struct rte_hash_parameters *params) } /* Populate free slots ring. Entry zero is reserved for key misses. */ - for (i = 1; i < num_key_slots; i++) - rte_ring_sp_enqueue(r, (void *)((uintptr_t) i)); + for (uint32_t i = 1; i < num_key_slots; i++) + rte_ring_sp_enqueue_elem(r, &i, sizeof(uint32_t)); te->data = (void *) h; TAILQ_INSERT_TAIL(hash_list, te, next); @@ -598,13 +597,13 @@ rte_hash_reset(struct rte_hash *h) tot_ring_cnt = h->entries; for (i = 1; i < tot_ring_cnt + 1; i++) - rte_ring_sp_enqueue(h->free_slots, (void *)((uintptr_t) i)); + rte_ring_sp_enqueue_elem(h->free_slots, &i, sizeof(uint32_t)); /* Repopulate the free ext bkt ring. */ if (h->ext_table_support) { for (i = 1; i <= h->num_buckets; i++) - rte_ring_sp_enqueue(h->free_ext_bkts, - (void *)((uintptr_t) i)); + rte_ring_sp_enqueue_elem(h->free_ext_bkts, &i, + sizeof(uint32_t)); } if (h->use_local_cache) { @@ -623,13 +622,14 @@ rte_hash_reset(struct rte_hash *h) static inline void enqueue_slot_back(const struct rte_hash *h, struct lcore_cache *cached_free_slots, - void *slot_id) + uint32_t slot_id) { if (h->use_local_cache) { cached_free_slots->objs[cached_free_slots->len] = slot_id; cached_free_slots->len++; } else - rte_ring_sp_enqueue(h->free_slots, slot_id); + rte_ring_sp_enqueue_elem(h->free_slots, &slot_id, + sizeof(uint32_t)); } /* Search a key from bucket and update its data. 
@@ -923,9 +923,8 @@ __rte_hash_add_key_with_hash(const struct rte_hash *h, const void *key, uint32_t prim_bucket_idx, sec_bucket_idx; struct rte_hash_bucket *prim_bkt, *sec_bkt, *cur_bkt; struct rte_hash_key *new_k, *keys = h->key_store; - void *slot_id = NULL; - void *ext_bkt_id = NULL; - uint32_t new_idx, bkt_id; + uint32_t slot_id; + uint32_t ext_bkt_id; int ret; unsigned n_slots; unsigned lcore_id; @@ -968,8 +967,9 @@ __rte_hash_add_key_with_hash(const struct rte_hash *h, const void *key, /* Try to get a free slot from the local cache */ if (cached_free_slots->len == 0) { /* Need to get another burst of free slots from global ring */ - n_slots = rte_ring_mc_dequeue_burst(h->free_slots, + n_slots = rte_ring_mc_dequeue_burst_elem(h->free_slots, cached_free_slots->objs, + sizeof(uint32_t), LCORE_CACHE_SIZE, NULL); if (n_slots == 0) { return -ENOSPC; @@ -982,13 +982,13 @@ __rte_hash_add_key_with_hash(const struct rte_hash *h, const void *key, cached_free_slots->len--; slot_id = cached_free_slots->objs[cached_free_slots->len]; } else { - if (rte_ring_sc_dequeue(h->free_slots, &slot_id) != 0) { + if (rte_ring_sc_dequeue_elem(h->free_slots, &slot_id, + sizeof(uint32_t)) != 0) { return -ENOSPC; } } - new_k = RTE_PTR_ADD(keys, (uintptr_t)slot_id * h->key_entry_size); - new_idx = (uint32_t)((uintptr_t) slot_id); + new_k = RTE_PTR_ADD(keys, slot_id * h->key_entry_size); /* The store to application data (by the application) at *data should * not leak after the store of pdata in the key store. i.e. pdata is * the guard variable. Release the application data to the readers. @@ -1001,9 +1001,9 @@ __rte_hash_add_key_with_hash(const struct rte_hash *h, const void *key, /* Find an empty slot and insert */ ret = rte_hash_cuckoo_insert_mw(h, prim_bkt, sec_bkt, key, data, - short_sig, new_idx, &ret_val); + short_sig, slot_id, &ret_val); if (ret == 0) - return new_idx - 1; + return slot_id - 1; else if (ret == 1) { enqueue_slot_back(h, cached_free_slots, slot_id); return ret_val; @@ -1011,9 +1011,9 @@ __rte_hash_add_key_with_hash(const struct rte_hash *h, const void *key, /* Primary bucket full, need to make space for new entry */ ret = rte_hash_cuckoo_make_space_mw(h, prim_bkt, sec_bkt, key, data, - short_sig, prim_bucket_idx, new_idx, &ret_val); + short_sig, prim_bucket_idx, slot_id, &ret_val); if (ret == 0) - return new_idx - 1; + return slot_id - 1; else if (ret == 1) { enqueue_slot_back(h, cached_free_slots, slot_id); return ret_val; @@ -1021,10 +1021,10 @@ __rte_hash_add_key_with_hash(const struct rte_hash *h, const void *key, /* Also search secondary bucket to get better occupancy */ ret = rte_hash_cuckoo_make_space_mw(h, sec_bkt, prim_bkt, key, data, - short_sig, sec_bucket_idx, new_idx, &ret_val); + short_sig, sec_bucket_idx, slot_id, &ret_val); if (ret == 0) - return new_idx - 1; + return slot_id - 1; else if (ret == 1) { enqueue_slot_back(h, cached_free_slots, slot_id); return ret_val; @@ -1067,10 +1067,10 @@ __rte_hash_add_key_with_hash(const struct rte_hash *h, const void *key, * and key. */ __atomic_store_n(&cur_bkt->key_idx[i], - new_idx, + slot_id, __ATOMIC_RELEASE); __hash_rw_writer_unlock(h); - return new_idx - 1; + return slot_id - 1; } } } @@ -1078,26 +1078,26 @@ __rte_hash_add_key_with_hash(const struct rte_hash *h, const void *key, /* Failed to get an empty entry from extendable buckets. Link a new * extendable bucket. We first get a free bucket from ring. 
*/ - if (rte_ring_sc_dequeue(h->free_ext_bkts, &ext_bkt_id) != 0) { + if (rte_ring_sc_dequeue_elem(h->free_ext_bkts, &ext_bkt_id, + sizeof(uint32_t)) != 0) { ret = -ENOSPC; goto failure; } - bkt_id = (uint32_t)((uintptr_t)ext_bkt_id) - 1; /* Use the first location of the new bucket */ - (h->buckets_ext[bkt_id]).sig_current[0] = short_sig; + (h->buckets_ext[ext_bkt_id - 1]).sig_current[0] = short_sig; /* Store to signature and key should not leak after * the store to key_idx. i.e. key_idx is the guard variable * for signature and key. */ - __atomic_store_n(&(h->buckets_ext[bkt_id]).key_idx[0], - new_idx, + __atomic_store_n(&(h->buckets_ext[ext_bkt_id - 1]).key_idx[0], + slot_id, __ATOMIC_RELEASE); /* Link the new bucket to sec bucket linked list */ last = rte_hash_get_last_bkt(sec_bkt); - last->next = &h->buckets_ext[bkt_id]; + last->next = &h->buckets_ext[ext_bkt_id - 1]; __hash_rw_writer_unlock(h); - return new_idx - 1; + return slot_id - 1; failure: __hash_rw_writer_unlock(h); @@ -1373,8 +1373,9 @@ remove_entry(const struct rte_hash *h, struct rte_hash_bucket *bkt, unsigned i) /* Cache full, need to free it. */ if (cached_free_slots->len == LCORE_CACHE_SIZE) { /* Need to enqueue the free slots in global ring. */ - n_slots = rte_ring_mp_enqueue_burst(h->free_slots, + n_slots = rte_ring_mp_enqueue_burst_elem(h->free_slots, cached_free_slots->objs, + sizeof(uint32_t), LCORE_CACHE_SIZE, NULL); ERR_IF_TRUE((n_slots == 0), "%s: could not enqueue free slots in global ring\n", @@ -1383,11 +1384,11 @@ remove_entry(const struct rte_hash *h, struct rte_hash_bucket *bkt, unsigned i) } /* Put index of new free slot in cache. */ cached_free_slots->objs[cached_free_slots->len] = - (void *)((uintptr_t)bkt->key_idx[i]); + bkt->key_idx[i]; cached_free_slots->len++; } else { - rte_ring_sp_enqueue(h->free_slots, - (void *)((uintptr_t)bkt->key_idx[i])); + rte_ring_sp_enqueue_elem(h->free_slots, + &bkt->key_idx[i], sizeof(uint32_t)); } } @@ -1551,7 +1552,8 @@ __rte_hash_del_key_with_hash(const struct rte_hash *h, const void *key, */ h->ext_bkt_to_free[ret] = index; else - rte_ring_sp_enqueue(h->free_ext_bkts, (void *)(uintptr_t)index); + rte_ring_sp_enqueue_elem(h->free_ext_bkts, &index, + sizeof(uint32_t)); } __hash_rw_writer_unlock(h); return ret; @@ -1614,7 +1616,8 @@ rte_hash_free_key_with_position(const struct rte_hash *h, uint32_t index = h->ext_bkt_to_free[position]; if (index) { /* Recycle empty ext bkt to free list. */ - rte_ring_sp_enqueue(h->free_ext_bkts, (void *)(uintptr_t)index); + rte_ring_sp_enqueue_elem(h->free_ext_bkts, &index, + sizeof(uint32_t)); h->ext_bkt_to_free[position] = 0; } } @@ -1625,19 +1628,19 @@ rte_hash_free_key_with_position(const struct rte_hash *h, /* Cache full, need to free it. */ if (cached_free_slots->len == LCORE_CACHE_SIZE) { /* Need to enqueue the free slots in global ring. */ - n_slots = rte_ring_mp_enqueue_burst(h->free_slots, + n_slots = rte_ring_mp_enqueue_burst_elem(h->free_slots, cached_free_slots->objs, + sizeof(uint32_t), LCORE_CACHE_SIZE, NULL); RETURN_IF_TRUE((n_slots == 0), -EFAULT); cached_free_slots->len -= n_slots; } /* Put index of new free slot in cache. 
*/ - cached_free_slots->objs[cached_free_slots->len] = - (void *)((uintptr_t)key_idx); + cached_free_slots->objs[cached_free_slots->len] = key_idx; cached_free_slots->len++; } else { - rte_ring_sp_enqueue(h->free_slots, - (void *)((uintptr_t)key_idx)); + rte_ring_sp_enqueue_elem(h->free_slots, &key_idx, + sizeof(uint32_t)); } return 0; diff --git a/lib/librte_hash/rte_cuckoo_hash.h b/lib/librte_hash/rte_cuckoo_hash.h index fb19bb27d..345de6bf9 100644 --- a/lib/librte_hash/rte_cuckoo_hash.h +++ b/lib/librte_hash/rte_cuckoo_hash.h @@ -124,7 +124,7 @@ const rte_hash_cmp_eq_t cmp_jump_table[NUM_KEY_CMP_CASES] = { struct lcore_cache { unsigned len; /**< Cache len */ - void *objs[LCORE_CACHE_SIZE]; /**< Cache objects */ + uint32_t objs[LCORE_CACHE_SIZE]; /**< Cache objects */ } __rte_cache_aligned; /* Structure that stores key-value pair */ From patchwork Fri Dec 20 04:45:24 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Honnappa Nagarahalli X-Patchwork-Id: 64060 X-Patchwork-Delegate: david.marchand@redhat.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id 0B9AAA04F3; Fri, 20 Dec 2019 05:47:43 +0100 (CET) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 230A91BFBB; Fri, 20 Dec 2019 05:46:23 +0100 (CET) Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by dpdk.org (Postfix) with ESMTP id 27C9B1BF7B for ; Fri, 20 Dec 2019 05:45:59 +0100 (CET) Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id EB81814F6; Thu, 19 Dec 2019 20:45:57 -0800 (PST) Received: from qc2400f-1.austin.arm.com (qc2400f-1.austin.arm.com [10.118.14.48]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id DF4963F718; Thu, 19 Dec 2019 20:45:57 -0800 (PST) From: Honnappa Nagarahalli To: olivier.matz@6wind.com, sthemmin@microsoft.com, jerinj@marvell.com, bruce.richardson@intel.com, david.marchand@redhat.com, pbhagavatula@marvell.com, konstantin.ananyev@intel.com, honnappa.nagarahalli@arm.com Cc: dev@dpdk.org, dharmik.thakkar@arm.com, ruifeng.wang@arm.com, gavin.hu@arm.com, nd@arm.com Date: Thu, 19 Dec 2019 22:45:24 -0600 Message-Id: <20191220044524.32910-18-honnappa.nagarahalli@arm.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20191220044524.32910-1-honnappa.nagarahalli@arm.com> References: <20190906190510.11146-1-honnappa.nagarahalli@arm.com> <20191220044524.32910-1-honnappa.nagarahalli@arm.com> Subject: [dpdk-dev] [PATCH v7 17/17] lib/eventdev: use custom element size ring for event rings X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Use custom element size ring APIs to replace event ring implementation. This avoids code duplication. 
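In other words, rte_event_ring becomes a thin wrapper: enqueue and dequeue go through the element ring path with esize equal to sizeof(struct rte_event), i.e. 16 bytes. A short usage sketch of the wrapper after this change (the burst size and helper name are illustrative only):

#include <string.h>
#include <rte_common.h>
#include <rte_eventdev.h>
#include <rte_event_ring.h>

/* Round-trip a burst of (empty) events through the wrapper ring. */
static int
bounce_events(struct rte_event_ring *r)
{
	struct rte_event ev[32];
	uint16_t space, avail;
	unsigned int n;

	memset(ev, 0, sizeof(ev));

	/* after this patch this resolves to rte_ring_enqueue_burst_elem(&r->r,
	 * ev, sizeof(struct rte_event), 32, &space) under the hood */
	n = rte_event_ring_enqueue_burst(r, ev, RTE_DIM(ev), &space);
	if (n == 0)
		return -1;

	n = rte_event_ring_dequeue_burst(r, ev, n, &avail);
	return (int)n;
}

The caller-visible API is unchanged; only the implementation behind rte_event_ring_enqueue_burst() and rte_event_ring_dequeue_burst() is replaced by the generic element ring, which is what removes the duplicated ring code.
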
Signed-off-by: Honnappa Nagarahalli Reviewed-by: Gavin Hu Reviewed-by: Ola Liljedahl --- lib/librte_eventdev/rte_event_ring.c | 147 ++------------------------- lib/librte_eventdev/rte_event_ring.h | 45 ++++---- 2 files changed, 24 insertions(+), 168 deletions(-) diff --git a/lib/librte_eventdev/rte_event_ring.c b/lib/librte_eventdev/rte_event_ring.c index 50190de01..d27e23901 100644 --- a/lib/librte_eventdev/rte_event_ring.c +++ b/lib/librte_eventdev/rte_event_ring.c @@ -1,5 +1,6 @@ /* SPDX-License-Identifier: BSD-3-Clause * Copyright(c) 2017 Intel Corporation + * Copyright(c) 2019 Arm Limited */ #include @@ -11,13 +12,6 @@ #include #include "rte_event_ring.h" -TAILQ_HEAD(rte_event_ring_list, rte_tailq_entry); - -static struct rte_tailq_elem rte_event_ring_tailq = { - .name = RTE_TAILQ_EVENT_RING_NAME, -}; -EAL_REGISTER_TAILQ(rte_event_ring_tailq) - int rte_event_ring_init(struct rte_event_ring *r, const char *name, unsigned int count, unsigned int flags) @@ -35,150 +29,21 @@ struct rte_event_ring * rte_event_ring_create(const char *name, unsigned int count, int socket_id, unsigned int flags) { - char mz_name[RTE_MEMZONE_NAMESIZE]; - struct rte_event_ring *r; - struct rte_tailq_entry *te; - const struct rte_memzone *mz; - ssize_t ring_size; - int mz_flags = 0; - struct rte_event_ring_list *ring_list = NULL; - const unsigned int requested_count = count; - int ret; - - ring_list = RTE_TAILQ_CAST(rte_event_ring_tailq.head, - rte_event_ring_list); - - /* for an exact size ring, round up from count to a power of two */ - if (flags & RING_F_EXACT_SZ) - count = rte_align32pow2(count + 1); - else if (!rte_is_power_of_2(count)) { - rte_errno = EINVAL; - return NULL; - } - - ring_size = sizeof(*r) + (count * sizeof(struct rte_event)); - - ret = snprintf(mz_name, sizeof(mz_name), "%s%s", - RTE_RING_MZ_PREFIX, name); - if (ret < 0 || ret >= (int)sizeof(mz_name)) { - rte_errno = ENAMETOOLONG; - return NULL; - } - - te = rte_zmalloc("RING_TAILQ_ENTRY", sizeof(*te), 0); - if (te == NULL) { - RTE_LOG(ERR, RING, "Cannot reserve memory for tailq\n"); - rte_errno = ENOMEM; - return NULL; - } - - rte_mcfg_tailq_write_lock(); - - /* - * reserve a memory zone for this ring. 
If we can't get rte_config or - * we are secondary process, the memzone_reserve function will set - * rte_errno for us appropriately - hence no check in this this function - */ - mz = rte_memzone_reserve(mz_name, ring_size, socket_id, mz_flags); - if (mz != NULL) { - r = mz->addr; - /* Check return value in case rte_ring_init() fails on size */ - int err = rte_event_ring_init(r, name, requested_count, flags); - if (err) { - RTE_LOG(ERR, RING, "Ring init failed\n"); - if (rte_memzone_free(mz) != 0) - RTE_LOG(ERR, RING, "Cannot free memzone\n"); - rte_free(te); - rte_mcfg_tailq_write_unlock(); - return NULL; - } - - te->data = (void *) r; - r->r.memzone = mz; - - TAILQ_INSERT_TAIL(ring_list, te, next); - } else { - r = NULL; - RTE_LOG(ERR, RING, "Cannot reserve memory\n"); - rte_free(te); - } - rte_mcfg_tailq_write_unlock(); - - return r; + return (struct rte_event_ring *)rte_ring_create_elem(name, + sizeof(struct rte_event), + count, socket_id, flags); } struct rte_event_ring * rte_event_ring_lookup(const char *name) { - struct rte_tailq_entry *te; - struct rte_event_ring *r = NULL; - struct rte_event_ring_list *ring_list; - - ring_list = RTE_TAILQ_CAST(rte_event_ring_tailq.head, - rte_event_ring_list); - - rte_mcfg_tailq_read_lock(); - - TAILQ_FOREACH(te, ring_list, next) { - r = (struct rte_event_ring *) te->data; - if (strncmp(name, r->r.name, RTE_RING_NAMESIZE) == 0) - break; - } - - rte_mcfg_tailq_read_unlock(); - - if (te == NULL) { - rte_errno = ENOENT; - return NULL; - } - - return r; + return (struct rte_event_ring *)rte_ring_lookup(name); } /* free the ring */ void rte_event_ring_free(struct rte_event_ring *r) { - struct rte_event_ring_list *ring_list = NULL; - struct rte_tailq_entry *te; - - if (r == NULL) - return; - - /* - * Ring was not created with rte_event_ring_create, - * therefore, there is no memzone to free. 
- */ - if (r->r.memzone == NULL) { - RTE_LOG(ERR, RING, - "Cannot free ring (not created with rte_event_ring_create()"); - return; - } - - if (rte_memzone_free(r->r.memzone) != 0) { - RTE_LOG(ERR, RING, "Cannot free memory\n"); - return; - } - - ring_list = RTE_TAILQ_CAST(rte_event_ring_tailq.head, - rte_event_ring_list); - rte_mcfg_tailq_write_lock(); - - /* find out tailq entry */ - TAILQ_FOREACH(te, ring_list, next) { - if (te->data == (void *) r) - break; - } - - if (te == NULL) { - rte_mcfg_tailq_write_unlock(); - return; - } - - TAILQ_REMOVE(ring_list, te, next); - - rte_mcfg_tailq_write_unlock(); - - rte_free(te); + rte_ring_free((struct rte_ring *)r); } diff --git a/lib/librte_eventdev/rte_event_ring.h b/lib/librte_eventdev/rte_event_ring.h index 827a3209e..c0861b0ec 100644 --- a/lib/librte_eventdev/rte_event_ring.h +++ b/lib/librte_eventdev/rte_event_ring.h @@ -1,5 +1,6 @@ /* SPDX-License-Identifier: BSD-3-Clause * Copyright(c) 2016-2017 Intel Corporation + * Copyright(c) 2019 Arm Limited */ /** @@ -19,6 +20,7 @@ #include #include #include +#include #include "rte_eventdev.h" #define RTE_TAILQ_EVENT_RING_NAME "RTE_EVENT_RING" @@ -88,22 +90,17 @@ rte_event_ring_enqueue_burst(struct rte_event_ring *r, const struct rte_event *events, unsigned int n, uint16_t *free_space) { - uint32_t prod_head, prod_next; - uint32_t free_entries; + unsigned int num; + uint32_t space; - n = __rte_ring_move_prod_head(&r->r, r->r.prod.single, n, - RTE_RING_QUEUE_VARIABLE, - &prod_head, &prod_next, &free_entries); - if (n == 0) - goto end; + num = rte_ring_enqueue_burst_elem(&r->r, events, + sizeof(struct rte_event), n, + &space); - ENQUEUE_PTRS(&r->r, &r[1], prod_head, events, n, struct rte_event); - - update_tail(&r->r.prod, prod_head, prod_next, r->r.prod.single, 1); -end: if (free_space != NULL) - *free_space = free_entries - n; - return n; + *free_space = space; + + return num; } /** @@ -129,23 +126,17 @@ rte_event_ring_dequeue_burst(struct rte_event_ring *r, struct rte_event *events, unsigned int n, uint16_t *available) { - uint32_t cons_head, cons_next; - uint32_t entries; - - n = __rte_ring_move_cons_head(&r->r, r->r.cons.single, n, - RTE_RING_QUEUE_VARIABLE, - &cons_head, &cons_next, &entries); - if (n == 0) - goto end; + unsigned int num; + uint32_t remaining; - DEQUEUE_PTRS(&r->r, &r[1], cons_head, events, n, struct rte_event); + num = rte_ring_dequeue_burst_elem(&r->r, events, + sizeof(struct rte_event), n, + &remaining); - update_tail(&r->r.cons, cons_head, cons_next, r->r.cons.single, 0); - -end: if (available != NULL) - *available = entries - n; - return n; + *available = remaining; + + return num; } /*