From patchwork Mon Dec 21 11:13:57 2020
X-Patchwork-Submitter: Feifei Wang
X-Patchwork-Id: 85596
X-Patchwork-Delegate: david.marchand@redhat.com
From: Feifei Wang
To: Honnappa Nagarahalli, Konstantin Ananyev, Olivier Matz, Gavin Hu
Cc: dev@dpdk.org, nd@arm.com, Feifei Wang, stable@dpdk.org
Date: Mon, 21 Dec 2020 05:13:57 -0600
Message-Id: <20201221111359.22013-2-feifei.wang2@arm.com>
In-Reply-To: <20201221111359.22013-1-feifei.wang2@arm.com>
References: <20201221111359.22013-1-feifei.wang2@arm.com>
Subject: [dpdk-dev] [PATCH v1 1/3] test/ring: reduce iteration numbers to make test duration shorter

When testing ring performance in the case that multiple lcores are mapped to
the same physical core, e.g. --lcores '(0-3)@10', it takes a very long time
for "enqueue_dequeue_bulk_helper" to finish. This is because the iteration
count is too high and enqueue/dequeue are extremely inefficient with this
kind of core mapping.

The following test results show the problem:

x86-Intel(R) Xeon(R) Gold 6240:
$ sudo ./app/test/dpdk-test --lcores '(0-1)@25'
Testing using two hyperthreads (bulk (size: 8)):
iter_shift:          3    5    7    9      11     13     *15    17     19     21      23
run time:            7s   7s   7s   8s     9s     16s    47s    170s   660s   >0.5h   >1h
legacy APIs: SP/SC:  37   11   6    40525  40525  40209  40367  40407  40541  NoData  NoData
legacy APIs: MP/MC:  56   14   11   50657  40526  40526  40526  40625  40585  NoData  NoData

aarch64-n1sdp:
$ sudo ./app/test/dpdk-test --lcores '(0-1)@1'
Testing using two hyperthreads (bulk (size: 8)):
iter_shift:          3    5    7    9    11   13   *15  17    19    21     23
run time:            8s   8s   8s   9s   9s   14s  34s  111s  418s  25min  >1h
legacy APIs: SP/SC:  0.4  0.2  0.1  488  488  488  488  488   489   489    NoData
legacy APIs: MP/MC:  0.4  0.3  0.2  488  488  488  488  490   489   489    NoData

As the number of iterations increases, so does the run time. With the current
value (iter_shift = 23) the test takes more than an hour to finish. To fix
this, "iter_shift" should be reduced while still keeping enough iterations
for the test data to stay stable.
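For reference, the loop below is a condensed sketch of how the iteration
count drives the run time. It is modelled on the shape of
"enqueue_dequeue_bulk_helper" in app/test/test_ring_perf.c rather than copied
from it, and the function name is made up for the illustration. Lowering
"iter_shift" from 23 to 15 divides the number of enqueue/dequeue rounds by
2^8 = 256, which is what brings the total run time from over an hour down to
under a minute:

#include <stdint.h>
#include <rte_cycles.h>
#include <rte_ring.h>

/* Condensed sketch (not the actual test code): time 'iterations' rounds of
 * bulk enqueue + dequeue on ring 'r', using a burst of 'bsize' pointers.
 */
static double
timed_bulk_loop(struct rte_ring *r, void **burst, unsigned int bsize,
		unsigned int iter_shift)
{
	const unsigned int iterations = 1 << iter_shift; /* 1 << 23 before, 1 << 15 now */
	const uint64_t start = rte_rdtsc();
	unsigned int i;

	for (i = 0; i < iterations; i++) {
		rte_ring_enqueue_bulk(r, burst, bsize, NULL);
		rte_ring_dequeue_bulk(r, burst, bsize, NULL);
	}

	/* Run time is linear in 'iterations', so the reported cycles per
	 * element stay comparable while the wall-clock time shrinks by 256x.
	 */
	return (double)(rte_rdtsc() - start) / ((double)iterations * bsize);
}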
To confirm that a smaller "iter_shift" still gives stable results, the test
was also run with the "-l" EAL argument:

x86-Intel(R) Xeon(R) Gold 6240:
$ sudo ./app/test/dpdk-test -l 25-26
Testing using two NUMA nodes (bulk (size: 8)):
iter_shift:          3    5    7    9    11   13   *15  17   19   21   23
run time:            6s   6s   6s   6s   6s   6s   6s   7s   8s   11s  27s
legacy APIs: SP/SC:  47   20   13   22   54   83   91   73   81   75   95
legacy APIs: MP/MC:  44   18   18   240  245  270  250  249  252  250  253

aarch64-n1sdp:
$ sudo ./app/test/dpdk-test -l 1-2
Testing using two physical cores (bulk (size: 8)):
iter_shift:          3    5    7    9    11   13   *15  17   19   21   23
run time:            8s   8s   8s   8s   8s   8s   8s   9s   9s   11s  23s
legacy APIs: SP/SC:  0.7  0.4  1.2  1.8  2.0  2.0  2.0  2.0  2.0  2.0  2.0
legacy APIs: MP/MC:  0.3  0.4  1.3  1.9  2.9  2.9  2.9  2.9  2.9  2.9  2.9

According to the above data, with "iter_shift" set to 15 the test run time
drops below one minute while the results remain stable on both the x86 and
aarch64 servers.

Fixes: 1fa5d0099efc ("test/ring: add custom element size performance tests")
Cc: honnappa.nagarahalli@arm.com
Cc: stable@dpdk.org

Signed-off-by: Feifei Wang
Reviewed-by: Honnappa Nagarahalli
Reviewed-by: Ruifeng Wang
Acked-by: Konstantin Ananyev
---
 app/test/test_ring_perf.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/app/test/test_ring_perf.c b/app/test/test_ring_perf.c
index e63e25a86..fd82e2041 100644
--- a/app/test/test_ring_perf.c
+++ b/app/test/test_ring_perf.c
@@ -178,7 +178,7 @@ enqueue_dequeue_bulk_helper(const unsigned int flag, const int esize,
 		struct thread_params *p)
 {
 	int ret;
-	const unsigned int iter_shift = 23;
+	const unsigned int iter_shift = 15;
 	const unsigned int iterations = 1 << iter_shift;
 	struct rte_ring *r = p->r;
 	unsigned int bsize = p->size;

From patchwork Mon Dec 21 11:13:58 2020
X-Patchwork-Submitter: Feifei Wang
X-Patchwork-Id: 85597
X-Patchwork-Delegate: david.marchand@redhat.com
From: Feifei Wang
To: Honnappa Nagarahalli, Konstantin Ananyev
Cc: dev@dpdk.org, nd@arm.com, Feifei Wang
Date: Mon, 21 Dec 2020 05:13:58 -0600
Message-Id: <20201221111359.22013-3-feifei.wang2@arm.com>
In-Reply-To: <20201221111359.22013-1-feifei.wang2@arm.com>
References: <20201221111359.22013-1-feifei.wang2@arm.com>
Subject: [dpdk-dev] [PATCH v1 2/3] ring: add rte prefix before update tail API

Add the __rte prefix to the update_tail API because it is an internal
function.
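The rename matters because this helper is a static inline function in
installed public headers, so every application that includes rte_ring.h also
compiles it. The snippet below is a hypothetical illustration of the clash
the reserved __rte namespace avoids; "my_queue" and the application-side
"update_tail" are invented for the example:

/* app.c - hypothetical application code, not part of the patch */
#include <stdint.h>
#include <rte_ring.h>	/* also pulls in the ring's static inline helpers */

struct my_queue {
	uint32_t tail;
};

/* Before the rename, the ring headers defined a static inline update_tail(),
 * so an application-local function with the same name, as below, would fail
 * to compile with a conflicting-types error. With __rte_ring_update_tail()
 * the internal helper stays in the reserved __rte namespace and this name is
 * free for applications to use.
 */
static void
update_tail(struct my_queue *q, uint32_t new_tail)
{
	q->tail = new_tail;
}

int
main(void)
{
	struct my_queue q = { .tail = 0 };

	update_tail(&q, 1);
	return q.tail == 1 ? 0 : 1;
}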
Signed-off-by: Feifei Wang
Reviewed-by: Honnappa Nagarahalli
Reviewed-by: Ruifeng Wang
Acked-by: Konstantin Ananyev
---
 lib/librte_ring/rte_ring_c11_mem.h | 4 ++--
 lib/librte_ring/rte_ring_elem.h    | 4 ++--
 lib/librte_ring/rte_ring_generic.h | 4 ++--
 3 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/lib/librte_ring/rte_ring_c11_mem.h b/lib/librte_ring/rte_ring_c11_mem.h
index 0fb73a337..7f5eba262 100644
--- a/lib/librte_ring/rte_ring_c11_mem.h
+++ b/lib/librte_ring/rte_ring_c11_mem.h
@@ -11,8 +11,8 @@
 #define _RTE_RING_C11_MEM_H_
 
 static __rte_always_inline void
-update_tail(struct rte_ring_headtail *ht, uint32_t old_val, uint32_t new_val,
-	uint32_t single, uint32_t enqueue)
+__rte_ring_update_tail(struct rte_ring_headtail *ht, uint32_t old_val,
+	uint32_t new_val, uint32_t single, uint32_t enqueue)
 {
 	RTE_SET_USED(enqueue);
 
diff --git a/lib/librte_ring/rte_ring_elem.h b/lib/librte_ring/rte_ring_elem.h
index 7034d29c0..57344d47d 100644
--- a/lib/librte_ring/rte_ring_elem.h
+++ b/lib/librte_ring/rte_ring_elem.h
@@ -423,7 +423,7 @@ __rte_ring_do_enqueue_elem(struct rte_ring *r, const void *obj_table,
 
 	__rte_ring_enqueue_elems(r, prod_head, obj_table, esize, n);
 
-	update_tail(&r->prod, prod_head, prod_next, is_sp, 1);
+	__rte_ring_update_tail(&r->prod, prod_head, prod_next, is_sp, 1);
 end:
 	if (free_space != NULL)
 		*free_space = free_entries - n;
@@ -470,7 +470,7 @@ __rte_ring_do_dequeue_elem(struct rte_ring *r, void *obj_table,
 
 	__rte_ring_dequeue_elems(r, cons_head, obj_table, esize, n);
 
-	update_tail(&r->cons, cons_head, cons_next, is_sc, 0);
+	__rte_ring_update_tail(&r->cons, cons_head, cons_next, is_sc, 0);
 end:
 
 	if (available != NULL)
diff --git a/lib/librte_ring/rte_ring_generic.h b/lib/librte_ring/rte_ring_generic.h
index 953cdbbd5..37c62b8d6 100644
--- a/lib/librte_ring/rte_ring_generic.h
+++ b/lib/librte_ring/rte_ring_generic.h
@@ -11,8 +11,8 @@
 #define _RTE_RING_GENERIC_H_
 
 static __rte_always_inline void
-update_tail(struct rte_ring_headtail *ht, uint32_t old_val, uint32_t new_val,
-	uint32_t single, uint32_t enqueue)
+__rte_ring_update_tail(struct rte_ring_headtail *ht, uint32_t old_val,
+	uint32_t new_val, uint32_t single, uint32_t enqueue)
 {
 	if (enqueue)
 		rte_smp_wmb();

From patchwork Mon Dec 21 11:13:59 2020
X-Patchwork-Submitter: Feifei Wang
X-Patchwork-Id: 85598
X-Patchwork-Delegate: david.marchand@redhat.com
From: Feifei Wang
To: Honnappa Nagarahalli, Konstantin Ananyev
Cc: dev@dpdk.org, nd@arm.com, Feifei Wang
Date: Mon, 21 Dec 2020 05:13:59 -0600
Message-Id:
<20201221111359.22013-4-feifei.wang2@arm.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20201221111359.22013-1-feifei.wang2@arm.com> References: <20201221111359.22013-1-feifei.wang2@arm.com> Subject: [dpdk-dev] [PATCH v1 3/3] ring: rename and refactor ring library X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" For legacy modes, rename ring_generic/c11 to ring_generic/c11_pvt. Furthermore, add new file ring_elem_pvt.h which includes ring_do_eq/deq and ring element copy/delete APIs. For other modes, rename xx_c11_mem to xx_elem_pvt. Move all private APIs into these new header files. Suggested-by: Honnappa Nagarahalli Signed-off-by: Feifei Wang Reviewed-by: Honnappa Nagarahalli Reviewed-by: Ruifeng Wang --- lib/librte_ring/meson.build | 15 +- .../{rte_ring_c11_mem.h => ring_c11_pvt.h} | 9 +- lib/librte_ring/ring_elem_pvt.h | 385 ++++++++++++++++++ ...{rte_ring_generic.h => ring_generic_pvt.h} | 6 +- ...ring_hts_c11_mem.h => ring_hts_elem_pvt.h} | 88 +++- ...ng_peek_c11_mem.h => ring_peek_elem_pvt.h} | 75 +++- ...ring_rts_c11_mem.h => ring_rts_elem_pvt.h} | 88 +++- lib/librte_ring/rte_ring_elem.h | 374 +---------------- lib/librte_ring/rte_ring_hts.h | 84 +--- lib/librte_ring/rte_ring_peek.h | 71 +--- lib/librte_ring/rte_ring_peek_zc.h | 2 +- lib/librte_ring/rte_ring_rts.h | 84 +--- 12 files changed, 646 insertions(+), 635 deletions(-) rename lib/librte_ring/{rte_ring_c11_mem.h => ring_c11_pvt.h} (96%) create mode 100644 lib/librte_ring/ring_elem_pvt.h rename lib/librte_ring/{rte_ring_generic.h => ring_generic_pvt.h} (98%) rename lib/librte_ring/{rte_ring_hts_c11_mem.h => ring_hts_elem_pvt.h} (60%) rename lib/librte_ring/{rte_ring_peek_c11_mem.h => ring_peek_elem_pvt.h} (62%) rename lib/librte_ring/{rte_ring_rts_c11_mem.h => ring_rts_elem_pvt.h} (62%) diff --git a/lib/librte_ring/meson.build b/lib/librte_ring/meson.build index 36fdcb6a5..98eac5810 100644 --- a/lib/librte_ring/meson.build +++ b/lib/librte_ring/meson.build @@ -2,15 +2,16 @@ # Copyright(c) 2017 Intel Corporation sources = files('rte_ring.c') -headers = files('rte_ring.h', +headers = files('ring_c11_pvt.h', + 'ring_elem_pvt.h', + 'ring_generic_pvt.h', + 'ring_hts_elem_pvt.h', + 'ring_peek_elem_pvt.h', + 'ring_rts_elem_pvt.h', + 'rte_ring.h', 'rte_ring_core.h', 'rte_ring_elem.h', - 'rte_ring_c11_mem.h', - 'rte_ring_generic.h', 'rte_ring_hts.h', - 'rte_ring_hts_c11_mem.h', 'rte_ring_peek.h', - 'rte_ring_peek_c11_mem.h', 'rte_ring_peek_zc.h', - 'rte_ring_rts.h', - 'rte_ring_rts_c11_mem.h') + 'rte_ring_rts.h') diff --git a/lib/librte_ring/rte_ring_c11_mem.h b/lib/librte_ring/ring_c11_pvt.h similarity index 96% rename from lib/librte_ring/rte_ring_c11_mem.h rename to lib/librte_ring/ring_c11_pvt.h index 7f5eba262..9f2f5318f 100644 --- a/lib/librte_ring/rte_ring_c11_mem.h +++ b/lib/librte_ring/ring_c11_pvt.h @@ -7,8 +7,8 @@ * Used as BSD-3 Licensed with permission from Kip Macy. */ -#ifndef _RTE_RING_C11_MEM_H_ -#define _RTE_RING_C11_MEM_H_ +#ifndef _RING_C11_PVT_H_ +#define _RING_C11_PVT_H_ static __rte_always_inline void __rte_ring_update_tail(struct rte_ring_headtail *ht, uint32_t old_val, @@ -69,9 +69,6 @@ __rte_ring_move_prod_head(struct rte_ring *r, unsigned int is_sp, /* Ensure the head is read before tail */ __atomic_thread_fence(__ATOMIC_ACQUIRE); - /* load-acquire synchronize with store-release of ht->tail - * in update_tail. 
- */ cons_tail = __atomic_load_n(&r->cons.tail, __ATOMIC_ACQUIRE); @@ -178,4 +175,4 @@ __rte_ring_move_cons_head(struct rte_ring *r, int is_sc, return n; } -#endif /* _RTE_RING_C11_MEM_H_ */ +#endif /* _RING_C11_PVT_H_ */ diff --git a/lib/librte_ring/ring_elem_pvt.h b/lib/librte_ring/ring_elem_pvt.h new file mode 100644 index 000000000..8003e5edc --- /dev/null +++ b/lib/librte_ring/ring_elem_pvt.h @@ -0,0 +1,385 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * + * Copyright (c) 2017,2018 HXT-semitech Corporation. + * Copyright (c) 2007-2009 Kip Macy kmacy@freebsd.org + * All rights reserved. + * Derived from FreeBSD's bufring.h + * Used as BSD-3 Licensed with permission from Kip Macy. + */ + +#ifndef _RING_ELEM_PVT_H_ +#define _RING_ELEM_PVT_H_ + +static __rte_always_inline void +__rte_ring_enqueue_elems_32(struct rte_ring *r, const uint32_t size, + uint32_t idx, const void *obj_table, uint32_t n) +{ + unsigned int i; + uint32_t *ring = (uint32_t *)&r[1]; + const uint32_t *obj = (const uint32_t *)obj_table; + if (likely(idx + n < size)) { + for (i = 0; i < (n & ~0x7); i += 8, idx += 8) { + ring[idx] = obj[i]; + ring[idx + 1] = obj[i + 1]; + ring[idx + 2] = obj[i + 2]; + ring[idx + 3] = obj[i + 3]; + ring[idx + 4] = obj[i + 4]; + ring[idx + 5] = obj[i + 5]; + ring[idx + 6] = obj[i + 6]; + ring[idx + 7] = obj[i + 7]; + } + switch (n & 0x7) { + case 7: + ring[idx++] = obj[i++]; /* fallthrough */ + case 6: + ring[idx++] = obj[i++]; /* fallthrough */ + case 5: + ring[idx++] = obj[i++]; /* fallthrough */ + case 4: + ring[idx++] = obj[i++]; /* fallthrough */ + case 3: + ring[idx++] = obj[i++]; /* fallthrough */ + case 2: + ring[idx++] = obj[i++]; /* fallthrough */ + case 1: + ring[idx++] = obj[i++]; /* fallthrough */ + } + } else { + for (i = 0; idx < size; i++, idx++) + ring[idx] = obj[i]; + /* Start at the beginning */ + for (idx = 0; i < n; i++, idx++) + ring[idx] = obj[i]; + } +} + +static __rte_always_inline void +__rte_ring_enqueue_elems_64(struct rte_ring *r, uint32_t prod_head, + const void *obj_table, uint32_t n) +{ + unsigned int i; + const uint32_t size = r->size; + uint32_t idx = prod_head & r->mask; + uint64_t *ring = (uint64_t *)&r[1]; + const unaligned_uint64_t *obj = (const unaligned_uint64_t *)obj_table; + if (likely(idx + n < size)) { + for (i = 0; i < (n & ~0x3); i += 4, idx += 4) { + ring[idx] = obj[i]; + ring[idx + 1] = obj[i + 1]; + ring[idx + 2] = obj[i + 2]; + ring[idx + 3] = obj[i + 3]; + } + switch (n & 0x3) { + case 3: + ring[idx++] = obj[i++]; /* fallthrough */ + case 2: + ring[idx++] = obj[i++]; /* fallthrough */ + case 1: + ring[idx++] = obj[i++]; + } + } else { + for (i = 0; idx < size; i++, idx++) + ring[idx] = obj[i]; + /* Start at the beginning */ + for (idx = 0; i < n; i++, idx++) + ring[idx] = obj[i]; + } +} + +static __rte_always_inline void +__rte_ring_enqueue_elems_128(struct rte_ring *r, uint32_t prod_head, + const void *obj_table, uint32_t n) +{ + unsigned int i; + const uint32_t size = r->size; + uint32_t idx = prod_head & r->mask; + rte_int128_t *ring = (rte_int128_t *)&r[1]; + const rte_int128_t *obj = (const rte_int128_t *)obj_table; + if (likely(idx + n < size)) { + for (i = 0; i < (n & ~0x1); i += 2, idx += 2) + memcpy((void *)(ring + idx), + (const void *)(obj + i), 32); + switch (n & 0x1) { + case 1: + memcpy((void *)(ring + idx), + (const void *)(obj + i), 16); + } + } else { + for (i = 0; idx < size; i++, idx++) + memcpy((void *)(ring + idx), + (const void *)(obj + i), 16); + /* Start at the beginning */ + for (idx = 0; i < n; i++, idx++) + 
memcpy((void *)(ring + idx), + (const void *)(obj + i), 16); + } +} + +/* the actual enqueue of elements on the ring. + * Placed here since identical code needed in both + * single and multi producer enqueue functions. + */ +static __rte_always_inline void +__rte_ring_enqueue_elems(struct rte_ring *r, uint32_t prod_head, + const void *obj_table, uint32_t esize, uint32_t num) +{ + /* 8B and 16B copies implemented individually to retain + * the current performance. + */ + if (esize == 8) + __rte_ring_enqueue_elems_64(r, prod_head, obj_table, num); + else if (esize == 16) + __rte_ring_enqueue_elems_128(r, prod_head, obj_table, num); + else { + uint32_t idx, scale, nr_idx, nr_num, nr_size; + + /* Normalize to uint32_t */ + scale = esize / sizeof(uint32_t); + nr_num = num * scale; + idx = prod_head & r->mask; + nr_idx = idx * scale; + nr_size = r->size * scale; + __rte_ring_enqueue_elems_32(r, nr_size, nr_idx, + obj_table, nr_num); + } +} + +static __rte_always_inline void +__rte_ring_dequeue_elems_32(struct rte_ring *r, const uint32_t size, + uint32_t idx, void *obj_table, uint32_t n) +{ + unsigned int i; + uint32_t *ring = (uint32_t *)&r[1]; + uint32_t *obj = (uint32_t *)obj_table; + if (likely(idx + n < size)) { + for (i = 0; i < (n & ~0x7); i += 8, idx += 8) { + obj[i] = ring[idx]; + obj[i + 1] = ring[idx + 1]; + obj[i + 2] = ring[idx + 2]; + obj[i + 3] = ring[idx + 3]; + obj[i + 4] = ring[idx + 4]; + obj[i + 5] = ring[idx + 5]; + obj[i + 6] = ring[idx + 6]; + obj[i + 7] = ring[idx + 7]; + } + switch (n & 0x7) { + case 7: + obj[i++] = ring[idx++]; /* fallthrough */ + case 6: + obj[i++] = ring[idx++]; /* fallthrough */ + case 5: + obj[i++] = ring[idx++]; /* fallthrough */ + case 4: + obj[i++] = ring[idx++]; /* fallthrough */ + case 3: + obj[i++] = ring[idx++]; /* fallthrough */ + case 2: + obj[i++] = ring[idx++]; /* fallthrough */ + case 1: + obj[i++] = ring[idx++]; /* fallthrough */ + } + } else { + for (i = 0; idx < size; i++, idx++) + obj[i] = ring[idx]; + /* Start at the beginning */ + for (idx = 0; i < n; i++, idx++) + obj[i] = ring[idx]; + } +} + +static __rte_always_inline void +__rte_ring_dequeue_elems_64(struct rte_ring *r, uint32_t prod_head, + void *obj_table, uint32_t n) +{ + unsigned int i; + const uint32_t size = r->size; + uint32_t idx = prod_head & r->mask; + uint64_t *ring = (uint64_t *)&r[1]; + unaligned_uint64_t *obj = (unaligned_uint64_t *)obj_table; + if (likely(idx + n < size)) { + for (i = 0; i < (n & ~0x3); i += 4, idx += 4) { + obj[i] = ring[idx]; + obj[i + 1] = ring[idx + 1]; + obj[i + 2] = ring[idx + 2]; + obj[i + 3] = ring[idx + 3]; + } + switch (n & 0x3) { + case 3: + obj[i++] = ring[idx++]; /* fallthrough */ + case 2: + obj[i++] = ring[idx++]; /* fallthrough */ + case 1: + obj[i++] = ring[idx++]; /* fallthrough */ + } + } else { + for (i = 0; idx < size; i++, idx++) + obj[i] = ring[idx]; + /* Start at the beginning */ + for (idx = 0; i < n; i++, idx++) + obj[i] = ring[idx]; + } +} + +static __rte_always_inline void +__rte_ring_dequeue_elems_128(struct rte_ring *r, uint32_t prod_head, + void *obj_table, uint32_t n) +{ + unsigned int i; + const uint32_t size = r->size; + uint32_t idx = prod_head & r->mask; + rte_int128_t *ring = (rte_int128_t *)&r[1]; + rte_int128_t *obj = (rte_int128_t *)obj_table; + if (likely(idx + n < size)) { + for (i = 0; i < (n & ~0x1); i += 2, idx += 2) + memcpy((void *)(obj + i), (void *)(ring + idx), 32); + switch (n & 0x1) { + case 1: + memcpy((void *)(obj + i), (void *)(ring + idx), 16); + } + } else { + for (i = 0; idx < size; i++, 
idx++) + memcpy((void *)(obj + i), (void *)(ring + idx), 16); + /* Start at the beginning */ + for (idx = 0; i < n; i++, idx++) + memcpy((void *)(obj + i), (void *)(ring + idx), 16); + } +} + +/* the actual dequeue of elements from the ring. + * Placed here since identical code needed in both + * single and multi producer enqueue functions. + */ +static __rte_always_inline void +__rte_ring_dequeue_elems(struct rte_ring *r, uint32_t cons_head, + void *obj_table, uint32_t esize, uint32_t num) +{ + /* 8B and 16B copies implemented individually to retain + * the current performance. + */ + if (esize == 8) + __rte_ring_dequeue_elems_64(r, cons_head, obj_table, num); + else if (esize == 16) + __rte_ring_dequeue_elems_128(r, cons_head, obj_table, num); + else { + uint32_t idx, scale, nr_idx, nr_num, nr_size; + + /* Normalize to uint32_t */ + scale = esize / sizeof(uint32_t); + nr_num = num * scale; + idx = cons_head & r->mask; + nr_idx = idx * scale; + nr_size = r->size * scale; + __rte_ring_dequeue_elems_32(r, nr_size, nr_idx, + obj_table, nr_num); + } +} + +/* Between load and load. there might be cpu reorder in weak model + * (powerpc/arm). + * There are 2 choices for the users + * 1.use rmb() memory barrier + * 2.use one-direction load_acquire/store_release barrier + * It depends on performance test results. + */ +#ifdef RTE_USE_C11_MEM_MODEL +#include "ring_c11_pvt.h" +#else +#include "ring_generic_pvt.h" +#endif + +/** + * @internal Enqueue several objects on the ring + * + * @param r + * A pointer to the ring structure. + * @param obj_table + * A pointer to a table of objects. + * @param esize + * The size of ring element, in bytes. It must be a multiple of 4. + * This must be the same value used while creating the ring. Otherwise + * the results are undefined. + * @param n + * The number of objects to add in the ring from the obj_table. + * @param behavior + * RTE_RING_QUEUE_FIXED: Enqueue a fixed number of items from a ring + * RTE_RING_QUEUE_VARIABLE: Enqueue as many items as possible from ring + * @param is_sp + * Indicates whether to use single producer or multi-producer head update + * @param free_space + * returns the amount of space after the enqueue operation has finished + * @return + * Actual number of objects enqueued. + * If behavior == RTE_RING_QUEUE_FIXED, this will be 0 or n only. + */ +static __rte_always_inline unsigned int +__rte_ring_do_enqueue_elem(struct rte_ring *r, const void *obj_table, + unsigned int esize, unsigned int n, + enum rte_ring_queue_behavior behavior, unsigned int is_sp, + unsigned int *free_space) +{ + uint32_t prod_head, prod_next; + uint32_t free_entries; + + n = __rte_ring_move_prod_head(r, is_sp, n, behavior, + &prod_head, &prod_next, &free_entries); + if (n == 0) + goto end; + + __rte_ring_enqueue_elems(r, prod_head, obj_table, esize, n); + + __rte_ring_update_tail(&r->prod, prod_head, prod_next, is_sp, 1); +end: + if (free_space != NULL) + *free_space = free_entries - n; + return n; +} + +/** + * @internal Dequeue several objects from the ring + * + * @param r + * A pointer to the ring structure. + * @param obj_table + * A pointer to a table of objects. + * @param esize + * The size of ring element, in bytes. It must be a multiple of 4. + * This must be the same value used while creating the ring. Otherwise + * the results are undefined. + * @param n + * The number of objects to pull from the ring. 
+ * @param behavior + * RTE_RING_QUEUE_FIXED: Dequeue a fixed number of items from a ring + * RTE_RING_QUEUE_VARIABLE: Dequeue as many items as possible from ring + * @param is_sc + * Indicates whether to use single consumer or multi-consumer head update + * @param available + * returns the number of remaining ring entries after the dequeue has finished + * @return + * - Actual number of objects dequeued. + * If behavior == RTE_RING_QUEUE_FIXED, this will be 0 or n only. + */ +static __rte_always_inline unsigned int +__rte_ring_do_dequeue_elem(struct rte_ring *r, void *obj_table, + unsigned int esize, unsigned int n, + enum rte_ring_queue_behavior behavior, unsigned int is_sc, + unsigned int *available) +{ + uint32_t cons_head, cons_next; + uint32_t entries; + + n = __rte_ring_move_cons_head(r, (int)is_sc, n, behavior, + &cons_head, &cons_next, &entries); + if (n == 0) + goto end; + + __rte_ring_dequeue_elems(r, cons_head, obj_table, esize, n); + + __rte_ring_update_tail(&r->cons, cons_head, cons_next, is_sc, 0); + +end: + if (available != NULL) + *available = entries - n; + return n; +} + +#endif /* _RING_ELEM_PVT_H_ */ diff --git a/lib/librte_ring/rte_ring_generic.h b/lib/librte_ring/ring_generic_pvt.h similarity index 98% rename from lib/librte_ring/rte_ring_generic.h rename to lib/librte_ring/ring_generic_pvt.h index 37c62b8d6..fc46a27b2 100644 --- a/lib/librte_ring/rte_ring_generic.h +++ b/lib/librte_ring/ring_generic_pvt.h @@ -7,8 +7,8 @@ * Used as BSD-3 Licensed with permission from Kip Macy. */ -#ifndef _RTE_RING_GENERIC_H_ -#define _RTE_RING_GENERIC_H_ +#ifndef _RING_GENERIC_PVT_H_ +#define _RING_GENERIC_PVT_H_ static __rte_always_inline void __rte_ring_update_tail(struct rte_ring_headtail *ht, uint32_t old_val, @@ -170,4 +170,4 @@ __rte_ring_move_cons_head(struct rte_ring *r, unsigned int is_sc, return n; } -#endif /* _RTE_RING_GENERIC_H_ */ +#endif /* _RING_GENERIC_PVT_H_ */ diff --git a/lib/librte_ring/rte_ring_hts_c11_mem.h b/lib/librte_ring/ring_hts_elem_pvt.h similarity index 60% rename from lib/librte_ring/rte_ring_hts_c11_mem.h rename to lib/librte_ring/ring_hts_elem_pvt.h index 16e54b6ff..9268750b0 100644 --- a/lib/librte_ring/rte_ring_hts_c11_mem.h +++ b/lib/librte_ring/ring_hts_elem_pvt.h @@ -7,8 +7,8 @@ * Used as BSD-3 Licensed with permission from Kip Macy. */ -#ifndef _RTE_RING_HTS_C11_MEM_H_ -#define _RTE_RING_HTS_C11_MEM_H_ +#ifndef _RING_HTS_ELEM_PVT_H_ +#define _RING_HTS_ELEM_PVT_H_ /** * @file rte_ring_hts_c11_mem.h @@ -161,4 +161,86 @@ __rte_ring_hts_move_cons_head(struct rte_ring *r, unsigned int num, return n; } -#endif /* _RTE_RING_HTS_C11_MEM_H_ */ +/** + * @internal Enqueue several objects on the HTS ring. + * + * @param r + * A pointer to the ring structure. + * @param obj_table + * A pointer to a table of objects. + * @param esize + * The size of ring element, in bytes. It must be a multiple of 4. + * This must be the same value used while creating the ring. Otherwise + * the results are undefined. + * @param n + * The number of objects to add in the ring from the obj_table. + * @param behavior + * RTE_RING_QUEUE_FIXED: Enqueue a fixed number of items from a ring + * RTE_RING_QUEUE_VARIABLE: Enqueue as many items as possible from ring + * @param free_space + * returns the amount of space after the enqueue operation has finished + * @return + * Actual number of objects enqueued. + * If behavior == RTE_RING_QUEUE_FIXED, this will be 0 or n only. 
+ */ +static __rte_always_inline unsigned int +__rte_ring_do_hts_enqueue_elem(struct rte_ring *r, const void *obj_table, + uint32_t esize, uint32_t n, enum rte_ring_queue_behavior behavior, + uint32_t *free_space) +{ + uint32_t free, head; + + n = __rte_ring_hts_move_prod_head(r, n, behavior, &head, &free); + + if (n != 0) { + __rte_ring_enqueue_elems(r, head, obj_table, esize, n); + __rte_ring_hts_update_tail(&r->hts_prod, head, n, 1); + } + + if (free_space != NULL) + *free_space = free - n; + return n; +} + +/** + * @internal Dequeue several objects from the HTS ring. + * + * @param r + * A pointer to the ring structure. + * @param obj_table + * A pointer to a table of objects. + * @param esize + * The size of ring element, in bytes. It must be a multiple of 4. + * This must be the same value used while creating the ring. Otherwise + * the results are undefined. + * @param n + * The number of objects to pull from the ring. + * @param behavior + * RTE_RING_QUEUE_FIXED: Dequeue a fixed number of items from a ring + * RTE_RING_QUEUE_VARIABLE: Dequeue as many items as possible from ring + * @param available + * returns the number of remaining ring entries after the dequeue has finished + * @return + * - Actual number of objects dequeued. + * If behavior == RTE_RING_QUEUE_FIXED, this will be 0 or n only. + */ +static __rte_always_inline unsigned int +__rte_ring_do_hts_dequeue_elem(struct rte_ring *r, void *obj_table, + uint32_t esize, uint32_t n, enum rte_ring_queue_behavior behavior, + uint32_t *available) +{ + uint32_t entries, head; + + n = __rte_ring_hts_move_cons_head(r, n, behavior, &head, &entries); + + if (n != 0) { + __rte_ring_dequeue_elems(r, head, obj_table, esize, n); + __rte_ring_hts_update_tail(&r->hts_cons, head, n, 0); + } + + if (available != NULL) + *available = entries - n; + return n; +} + +#endif /* _RING_HTS_ELEM_PVT_H_ */ diff --git a/lib/librte_ring/rte_ring_peek_c11_mem.h b/lib/librte_ring/ring_peek_elem_pvt.h similarity index 62% rename from lib/librte_ring/rte_ring_peek_c11_mem.h rename to lib/librte_ring/ring_peek_elem_pvt.h index 283c7e70b..1c57bcdd6 100644 --- a/lib/librte_ring/rte_ring_peek_c11_mem.h +++ b/lib/librte_ring/ring_peek_elem_pvt.h @@ -7,8 +7,8 @@ * Used as BSD-3 Licensed with permission from Kip Macy. */ -#ifndef _RTE_RING_PEEK_C11_MEM_H_ -#define _RTE_RING_PEEK_C11_MEM_H_ +#ifndef _RING_PEEK_ELEM_PVT_H_ +#define _RING_PEEK_ELEM_PVT_H_ /** * @file rte_ring_peek_c11_mem.h @@ -107,4 +107,73 @@ __rte_ring_hts_set_head_tail(struct rte_ring_hts_headtail *ht, uint32_t tail, __atomic_store_n(&ht->ht.raw, p.raw, __ATOMIC_RELEASE); } -#endif /* _RTE_RING_PEEK_C11_MEM_H_ */ +/** + * @internal This function moves prod head value. + */ +static __rte_always_inline unsigned int +__rte_ring_do_enqueue_start(struct rte_ring *r, uint32_t n, + enum rte_ring_queue_behavior behavior, uint32_t *free_space) +{ + uint32_t free, head, next; + + switch (r->prod.sync_type) { + case RTE_RING_SYNC_ST: + n = __rte_ring_move_prod_head(r, RTE_RING_SYNC_ST, n, + behavior, &head, &next, &free); + break; + case RTE_RING_SYNC_MT_HTS: + n = __rte_ring_hts_move_prod_head(r, n, behavior, + &head, &free); + break; + case RTE_RING_SYNC_MT: + case RTE_RING_SYNC_MT_RTS: + default: + /* unsupported mode, shouldn't be here */ + RTE_ASSERT(0); + n = 0; + free = 0; + } + + if (free_space != NULL) + *free_space = free - n; + return n; +} + +/** + * @internal This function moves cons head value and copies up to *n* + * objects from the ring to the user provided obj_table. 
+ */ +static __rte_always_inline unsigned int +__rte_ring_do_dequeue_start(struct rte_ring *r, void *obj_table, + uint32_t esize, uint32_t n, enum rte_ring_queue_behavior behavior, + uint32_t *available) +{ + uint32_t avail, head, next; + + switch (r->cons.sync_type) { + case RTE_RING_SYNC_ST: + n = __rte_ring_move_cons_head(r, RTE_RING_SYNC_ST, n, + behavior, &head, &next, &avail); + break; + case RTE_RING_SYNC_MT_HTS: + n = __rte_ring_hts_move_cons_head(r, n, behavior, + &head, &avail); + break; + case RTE_RING_SYNC_MT: + case RTE_RING_SYNC_MT_RTS: + default: + /* unsupported mode, shouldn't be here */ + RTE_ASSERT(0); + n = 0; + avail = 0; + } + + if (n != 0) + __rte_ring_dequeue_elems(r, head, obj_table, esize, n); + + if (available != NULL) + *available = avail - n; + return n; +} + +#endif /* _RING_PEEK_ELEM_PVT_H_ */ diff --git a/lib/librte_ring/rte_ring_rts_c11_mem.h b/lib/librte_ring/ring_rts_elem_pvt.h similarity index 62% rename from lib/librte_ring/rte_ring_rts_c11_mem.h rename to lib/librte_ring/ring_rts_elem_pvt.h index 327f22796..cbcec73eb 100644 --- a/lib/librte_ring/rte_ring_rts_c11_mem.h +++ b/lib/librte_ring/ring_rts_elem_pvt.h @@ -7,8 +7,8 @@ * Used as BSD-3 Licensed with permission from Kip Macy. */ -#ifndef _RTE_RING_RTS_C11_MEM_H_ -#define _RTE_RING_RTS_C11_MEM_H_ +#ifndef _RING_RTS_ELEM_PVT_H_ +#define _RING_RTS_ELEM_PVT_H_ /** * @file rte_ring_rts_c11_mem.h @@ -176,4 +176,86 @@ __rte_ring_rts_move_cons_head(struct rte_ring *r, uint32_t num, return n; } -#endif /* _RTE_RING_RTS_C11_MEM_H_ */ +/** + * @internal Enqueue several objects on the RTS ring. + * + * @param r + * A pointer to the ring structure. + * @param obj_table + * A pointer to a table of objects. + * @param esize + * The size of ring element, in bytes. It must be a multiple of 4. + * This must be the same value used while creating the ring. Otherwise + * the results are undefined. + * @param n + * The number of objects to add in the ring from the obj_table. + * @param behavior + * RTE_RING_QUEUE_FIXED: Enqueue a fixed number of items from a ring + * RTE_RING_QUEUE_VARIABLE: Enqueue as many items as possible from ring + * @param free_space + * returns the amount of space after the enqueue operation has finished + * @return + * Actual number of objects enqueued. + * If behavior == RTE_RING_QUEUE_FIXED, this will be 0 or n only. + */ +static __rte_always_inline unsigned int +__rte_ring_do_rts_enqueue_elem(struct rte_ring *r, const void *obj_table, + uint32_t esize, uint32_t n, enum rte_ring_queue_behavior behavior, + uint32_t *free_space) +{ + uint32_t free, head; + + n = __rte_ring_rts_move_prod_head(r, n, behavior, &head, &free); + + if (n != 0) { + __rte_ring_enqueue_elems(r, head, obj_table, esize, n); + __rte_ring_rts_update_tail(&r->rts_prod); + } + + if (free_space != NULL) + *free_space = free - n; + return n; +} + +/** + * @internal Dequeue several objects from the RTS ring. + * + * @param r + * A pointer to the ring structure. + * @param obj_table + * A pointer to a table of objects. + * @param esize + * The size of ring element, in bytes. It must be a multiple of 4. + * This must be the same value used while creating the ring. Otherwise + * the results are undefined. + * @param n + * The number of objects to pull from the ring. 
+ * @param behavior + * RTE_RING_QUEUE_FIXED: Dequeue a fixed number of items from a ring + * RTE_RING_QUEUE_VARIABLE: Dequeue as many items as possible from ring + * @param available + * returns the number of remaining ring entries after the dequeue has finished + * @return + * - Actual number of objects dequeued. + * If behavior == RTE_RING_QUEUE_FIXED, this will be 0 or n only. + */ +static __rte_always_inline unsigned int +__rte_ring_do_rts_dequeue_elem(struct rte_ring *r, void *obj_table, + uint32_t esize, uint32_t n, enum rte_ring_queue_behavior behavior, + uint32_t *available) +{ + uint32_t entries, head; + + n = __rte_ring_rts_move_cons_head(r, n, behavior, &head, &entries); + + if (n != 0) { + __rte_ring_dequeue_elems(r, head, obj_table, esize, n); + __rte_ring_rts_update_tail(&r->rts_cons); + } + + if (available != NULL) + *available = entries - n; + return n; +} + +#endif /* _RING_RTS_ELEM_PVT_H_ */ diff --git a/lib/librte_ring/rte_ring_elem.h b/lib/librte_ring/rte_ring_elem.h index 57344d47d..119b5c0b6 100644 --- a/lib/librte_ring/rte_ring_elem.h +++ b/lib/librte_ring/rte_ring_elem.h @@ -21,6 +21,7 @@ extern "C" { #endif #include +#include /** * Calculate the memory size needed for a ring with given element size @@ -105,379 +106,6 @@ ssize_t rte_ring_get_memsize_elem(unsigned int esize, unsigned int count); struct rte_ring *rte_ring_create_elem(const char *name, unsigned int esize, unsigned int count, int socket_id, unsigned int flags); -static __rte_always_inline void -__rte_ring_enqueue_elems_32(struct rte_ring *r, const uint32_t size, - uint32_t idx, const void *obj_table, uint32_t n) -{ - unsigned int i; - uint32_t *ring = (uint32_t *)&r[1]; - const uint32_t *obj = (const uint32_t *)obj_table; - if (likely(idx + n < size)) { - for (i = 0; i < (n & ~0x7); i += 8, idx += 8) { - ring[idx] = obj[i]; - ring[idx + 1] = obj[i + 1]; - ring[idx + 2] = obj[i + 2]; - ring[idx + 3] = obj[i + 3]; - ring[idx + 4] = obj[i + 4]; - ring[idx + 5] = obj[i + 5]; - ring[idx + 6] = obj[i + 6]; - ring[idx + 7] = obj[i + 7]; - } - switch (n & 0x7) { - case 7: - ring[idx++] = obj[i++]; /* fallthrough */ - case 6: - ring[idx++] = obj[i++]; /* fallthrough */ - case 5: - ring[idx++] = obj[i++]; /* fallthrough */ - case 4: - ring[idx++] = obj[i++]; /* fallthrough */ - case 3: - ring[idx++] = obj[i++]; /* fallthrough */ - case 2: - ring[idx++] = obj[i++]; /* fallthrough */ - case 1: - ring[idx++] = obj[i++]; /* fallthrough */ - } - } else { - for (i = 0; idx < size; i++, idx++) - ring[idx] = obj[i]; - /* Start at the beginning */ - for (idx = 0; i < n; i++, idx++) - ring[idx] = obj[i]; - } -} - -static __rte_always_inline void -__rte_ring_enqueue_elems_64(struct rte_ring *r, uint32_t prod_head, - const void *obj_table, uint32_t n) -{ - unsigned int i; - const uint32_t size = r->size; - uint32_t idx = prod_head & r->mask; - uint64_t *ring = (uint64_t *)&r[1]; - const unaligned_uint64_t *obj = (const unaligned_uint64_t *)obj_table; - if (likely(idx + n < size)) { - for (i = 0; i < (n & ~0x3); i += 4, idx += 4) { - ring[idx] = obj[i]; - ring[idx + 1] = obj[i + 1]; - ring[idx + 2] = obj[i + 2]; - ring[idx + 3] = obj[i + 3]; - } - switch (n & 0x3) { - case 3: - ring[idx++] = obj[i++]; /* fallthrough */ - case 2: - ring[idx++] = obj[i++]; /* fallthrough */ - case 1: - ring[idx++] = obj[i++]; - } - } else { - for (i = 0; idx < size; i++, idx++) - ring[idx] = obj[i]; - /* Start at the beginning */ - for (idx = 0; i < n; i++, idx++) - ring[idx] = obj[i]; - } -} - -static __rte_always_inline void 
-__rte_ring_enqueue_elems_128(struct rte_ring *r, uint32_t prod_head, - const void *obj_table, uint32_t n) -{ - unsigned int i; - const uint32_t size = r->size; - uint32_t idx = prod_head & r->mask; - rte_int128_t *ring = (rte_int128_t *)&r[1]; - const rte_int128_t *obj = (const rte_int128_t *)obj_table; - if (likely(idx + n < size)) { - for (i = 0; i < (n & ~0x1); i += 2, idx += 2) - memcpy((void *)(ring + idx), - (const void *)(obj + i), 32); - switch (n & 0x1) { - case 1: - memcpy((void *)(ring + idx), - (const void *)(obj + i), 16); - } - } else { - for (i = 0; idx < size; i++, idx++) - memcpy((void *)(ring + idx), - (const void *)(obj + i), 16); - /* Start at the beginning */ - for (idx = 0; i < n; i++, idx++) - memcpy((void *)(ring + idx), - (const void *)(obj + i), 16); - } -} - -/* the actual enqueue of elements on the ring. - * Placed here since identical code needed in both - * single and multi producer enqueue functions. - */ -static __rte_always_inline void -__rte_ring_enqueue_elems(struct rte_ring *r, uint32_t prod_head, - const void *obj_table, uint32_t esize, uint32_t num) -{ - /* 8B and 16B copies implemented individually to retain - * the current performance. - */ - if (esize == 8) - __rte_ring_enqueue_elems_64(r, prod_head, obj_table, num); - else if (esize == 16) - __rte_ring_enqueue_elems_128(r, prod_head, obj_table, num); - else { - uint32_t idx, scale, nr_idx, nr_num, nr_size; - - /* Normalize to uint32_t */ - scale = esize / sizeof(uint32_t); - nr_num = num * scale; - idx = prod_head & r->mask; - nr_idx = idx * scale; - nr_size = r->size * scale; - __rte_ring_enqueue_elems_32(r, nr_size, nr_idx, - obj_table, nr_num); - } -} - -static __rte_always_inline void -__rte_ring_dequeue_elems_32(struct rte_ring *r, const uint32_t size, - uint32_t idx, void *obj_table, uint32_t n) -{ - unsigned int i; - uint32_t *ring = (uint32_t *)&r[1]; - uint32_t *obj = (uint32_t *)obj_table; - if (likely(idx + n < size)) { - for (i = 0; i < (n & ~0x7); i += 8, idx += 8) { - obj[i] = ring[idx]; - obj[i + 1] = ring[idx + 1]; - obj[i + 2] = ring[idx + 2]; - obj[i + 3] = ring[idx + 3]; - obj[i + 4] = ring[idx + 4]; - obj[i + 5] = ring[idx + 5]; - obj[i + 6] = ring[idx + 6]; - obj[i + 7] = ring[idx + 7]; - } - switch (n & 0x7) { - case 7: - obj[i++] = ring[idx++]; /* fallthrough */ - case 6: - obj[i++] = ring[idx++]; /* fallthrough */ - case 5: - obj[i++] = ring[idx++]; /* fallthrough */ - case 4: - obj[i++] = ring[idx++]; /* fallthrough */ - case 3: - obj[i++] = ring[idx++]; /* fallthrough */ - case 2: - obj[i++] = ring[idx++]; /* fallthrough */ - case 1: - obj[i++] = ring[idx++]; /* fallthrough */ - } - } else { - for (i = 0; idx < size; i++, idx++) - obj[i] = ring[idx]; - /* Start at the beginning */ - for (idx = 0; i < n; i++, idx++) - obj[i] = ring[idx]; - } -} - -static __rte_always_inline void -__rte_ring_dequeue_elems_64(struct rte_ring *r, uint32_t prod_head, - void *obj_table, uint32_t n) -{ - unsigned int i; - const uint32_t size = r->size; - uint32_t idx = prod_head & r->mask; - uint64_t *ring = (uint64_t *)&r[1]; - unaligned_uint64_t *obj = (unaligned_uint64_t *)obj_table; - if (likely(idx + n < size)) { - for (i = 0; i < (n & ~0x3); i += 4, idx += 4) { - obj[i] = ring[idx]; - obj[i + 1] = ring[idx + 1]; - obj[i + 2] = ring[idx + 2]; - obj[i + 3] = ring[idx + 3]; - } - switch (n & 0x3) { - case 3: - obj[i++] = ring[idx++]; /* fallthrough */ - case 2: - obj[i++] = ring[idx++]; /* fallthrough */ - case 1: - obj[i++] = ring[idx++]; /* fallthrough */ - } - } else { - for (i = 0; idx < 
size; i++, idx++) - obj[i] = ring[idx]; - /* Start at the beginning */ - for (idx = 0; i < n; i++, idx++) - obj[i] = ring[idx]; - } -} - -static __rte_always_inline void -__rte_ring_dequeue_elems_128(struct rte_ring *r, uint32_t prod_head, - void *obj_table, uint32_t n) -{ - unsigned int i; - const uint32_t size = r->size; - uint32_t idx = prod_head & r->mask; - rte_int128_t *ring = (rte_int128_t *)&r[1]; - rte_int128_t *obj = (rte_int128_t *)obj_table; - if (likely(idx + n < size)) { - for (i = 0; i < (n & ~0x1); i += 2, idx += 2) - memcpy((void *)(obj + i), (void *)(ring + idx), 32); - switch (n & 0x1) { - case 1: - memcpy((void *)(obj + i), (void *)(ring + idx), 16); - } - } else { - for (i = 0; idx < size; i++, idx++) - memcpy((void *)(obj + i), (void *)(ring + idx), 16); - /* Start at the beginning */ - for (idx = 0; i < n; i++, idx++) - memcpy((void *)(obj + i), (void *)(ring + idx), 16); - } -} - -/* the actual dequeue of elements from the ring. - * Placed here since identical code needed in both - * single and multi producer enqueue functions. - */ -static __rte_always_inline void -__rte_ring_dequeue_elems(struct rte_ring *r, uint32_t cons_head, - void *obj_table, uint32_t esize, uint32_t num) -{ - /* 8B and 16B copies implemented individually to retain - * the current performance. - */ - if (esize == 8) - __rte_ring_dequeue_elems_64(r, cons_head, obj_table, num); - else if (esize == 16) - __rte_ring_dequeue_elems_128(r, cons_head, obj_table, num); - else { - uint32_t idx, scale, nr_idx, nr_num, nr_size; - - /* Normalize to uint32_t */ - scale = esize / sizeof(uint32_t); - nr_num = num * scale; - idx = cons_head & r->mask; - nr_idx = idx * scale; - nr_size = r->size * scale; - __rte_ring_dequeue_elems_32(r, nr_size, nr_idx, - obj_table, nr_num); - } -} - -/* Between load and load. there might be cpu reorder in weak model - * (powerpc/arm). - * There are 2 choices for the users - * 1.use rmb() memory barrier - * 2.use one-direction load_acquire/store_release barrier - * It depends on performance test results. - * By default, move common functions to rte_ring_generic.h - */ -#ifdef RTE_USE_C11_MEM_MODEL -#include "rte_ring_c11_mem.h" -#else -#include "rte_ring_generic.h" -#endif - -/** - * @internal Enqueue several objects on the ring - * - * @param r - * A pointer to the ring structure. - * @param obj_table - * A pointer to a table of objects. - * @param esize - * The size of ring element, in bytes. It must be a multiple of 4. - * This must be the same value used while creating the ring. Otherwise - * the results are undefined. - * @param n - * The number of objects to add in the ring from the obj_table. - * @param behavior - * RTE_RING_QUEUE_FIXED: Enqueue a fixed number of items from a ring - * RTE_RING_QUEUE_VARIABLE: Enqueue as many items as possible from ring - * @param is_sp - * Indicates whether to use single producer or multi-producer head update - * @param free_space - * returns the amount of space after the enqueue operation has finished - * @return - * Actual number of objects enqueued. - * If behavior == RTE_RING_QUEUE_FIXED, this will be 0 or n only. 
- */ -static __rte_always_inline unsigned int -__rte_ring_do_enqueue_elem(struct rte_ring *r, const void *obj_table, - unsigned int esize, unsigned int n, - enum rte_ring_queue_behavior behavior, unsigned int is_sp, - unsigned int *free_space) -{ - uint32_t prod_head, prod_next; - uint32_t free_entries; - - n = __rte_ring_move_prod_head(r, is_sp, n, behavior, - &prod_head, &prod_next, &free_entries); - if (n == 0) - goto end; - - __rte_ring_enqueue_elems(r, prod_head, obj_table, esize, n); - - __rte_ring_update_tail(&r->prod, prod_head, prod_next, is_sp, 1); -end: - if (free_space != NULL) - *free_space = free_entries - n; - return n; -} - -/** - * @internal Dequeue several objects from the ring - * - * @param r - * A pointer to the ring structure. - * @param obj_table - * A pointer to a table of objects. - * @param esize - * The size of ring element, in bytes. It must be a multiple of 4. - * This must be the same value used while creating the ring. Otherwise - * the results are undefined. - * @param n - * The number of objects to pull from the ring. - * @param behavior - * RTE_RING_QUEUE_FIXED: Dequeue a fixed number of items from a ring - * RTE_RING_QUEUE_VARIABLE: Dequeue as many items as possible from ring - * @param is_sc - * Indicates whether to use single consumer or multi-consumer head update - * @param available - * returns the number of remaining ring entries after the dequeue has finished - * @return - * - Actual number of objects dequeued. - * If behavior == RTE_RING_QUEUE_FIXED, this will be 0 or n only. - */ -static __rte_always_inline unsigned int -__rte_ring_do_dequeue_elem(struct rte_ring *r, void *obj_table, - unsigned int esize, unsigned int n, - enum rte_ring_queue_behavior behavior, unsigned int is_sc, - unsigned int *available) -{ - uint32_t cons_head, cons_next; - uint32_t entries; - - n = __rte_ring_move_cons_head(r, (int)is_sc, n, behavior, - &cons_head, &cons_next, &entries); - if (n == 0) - goto end; - - __rte_ring_dequeue_elems(r, cons_head, obj_table, esize, n); - - __rte_ring_update_tail(&r->cons, cons_head, cons_next, is_sc, 0); - -end: - if (available != NULL) - *available = entries - n; - return n; -} - /** * Enqueue several objects on the ring (multi-producers safe). * diff --git a/lib/librte_ring/rte_ring_hts.h b/lib/librte_ring/rte_ring_hts.h index 359b15771..bdbdafc9f 100644 --- a/lib/librte_ring/rte_ring_hts.h +++ b/lib/librte_ring/rte_ring_hts.h @@ -29,89 +29,7 @@ extern "C" { #endif -#include - -/** - * @internal Enqueue several objects on the HTS ring. - * - * @param r - * A pointer to the ring structure. - * @param obj_table - * A pointer to a table of objects. - * @param esize - * The size of ring element, in bytes. It must be a multiple of 4. - * This must be the same value used while creating the ring. Otherwise - * the results are undefined. - * @param n - * The number of objects to add in the ring from the obj_table. - * @param behavior - * RTE_RING_QUEUE_FIXED: Enqueue a fixed number of items from a ring - * RTE_RING_QUEUE_VARIABLE: Enqueue as many items as possible from ring - * @param free_space - * returns the amount of space after the enqueue operation has finished - * @return - * Actual number of objects enqueued. - * If behavior == RTE_RING_QUEUE_FIXED, this will be 0 or n only. 
- */ -static __rte_always_inline unsigned int -__rte_ring_do_hts_enqueue_elem(struct rte_ring *r, const void *obj_table, - uint32_t esize, uint32_t n, enum rte_ring_queue_behavior behavior, - uint32_t *free_space) -{ - uint32_t free, head; - - n = __rte_ring_hts_move_prod_head(r, n, behavior, &head, &free); - - if (n != 0) { - __rte_ring_enqueue_elems(r, head, obj_table, esize, n); - __rte_ring_hts_update_tail(&r->hts_prod, head, n, 1); - } - - if (free_space != NULL) - *free_space = free - n; - return n; -} - -/** - * @internal Dequeue several objects from the HTS ring. - * - * @param r - * A pointer to the ring structure. - * @param obj_table - * A pointer to a table of objects. - * @param esize - * The size of ring element, in bytes. It must be a multiple of 4. - * This must be the same value used while creating the ring. Otherwise - * the results are undefined. - * @param n - * The number of objects to pull from the ring. - * @param behavior - * RTE_RING_QUEUE_FIXED: Dequeue a fixed number of items from a ring - * RTE_RING_QUEUE_VARIABLE: Dequeue as many items as possible from ring - * @param available - * returns the number of remaining ring entries after the dequeue has finished - * @return - * - Actual number of objects dequeued. - * If behavior == RTE_RING_QUEUE_FIXED, this will be 0 or n only. - */ -static __rte_always_inline unsigned int -__rte_ring_do_hts_dequeue_elem(struct rte_ring *r, void *obj_table, - uint32_t esize, uint32_t n, enum rte_ring_queue_behavior behavior, - uint32_t *available) -{ - uint32_t entries, head; - - n = __rte_ring_hts_move_cons_head(r, n, behavior, &head, &entries); - - if (n != 0) { - __rte_ring_dequeue_elems(r, head, obj_table, esize, n); - __rte_ring_hts_update_tail(&r->hts_cons, head, n, 0); - } - - if (available != NULL) - *available = entries - n; - return n; -} +#include /** * Enqueue several objects on the HTS ring (multi-producers safe). diff --git a/lib/librte_ring/rte_ring_peek.h b/lib/librte_ring/rte_ring_peek.h index 45f707dc7..0dd402be4 100644 --- a/lib/librte_ring/rte_ring_peek.h +++ b/lib/librte_ring/rte_ring_peek.h @@ -48,39 +48,7 @@ extern "C" { #endif -#include - -/** - * @internal This function moves prod head value. - */ -static __rte_always_inline unsigned int -__rte_ring_do_enqueue_start(struct rte_ring *r, uint32_t n, - enum rte_ring_queue_behavior behavior, uint32_t *free_space) -{ - uint32_t free, head, next; - - switch (r->prod.sync_type) { - case RTE_RING_SYNC_ST: - n = __rte_ring_move_prod_head(r, RTE_RING_SYNC_ST, n, - behavior, &head, &next, &free); - break; - case RTE_RING_SYNC_MT_HTS: - n = __rte_ring_hts_move_prod_head(r, n, behavior, - &head, &free); - break; - case RTE_RING_SYNC_MT: - case RTE_RING_SYNC_MT_RTS: - default: - /* unsupported mode, shouldn't be here */ - RTE_ASSERT(0); - n = 0; - free = 0; - } - - if (free_space != NULL) - *free_space = free - n; - return n; -} +#include /** * Start to enqueue several objects on the ring. @@ -248,43 +216,6 @@ rte_ring_enqueue_finish(struct rte_ring *r, void * const *obj_table, rte_ring_enqueue_elem_finish(r, obj_table, sizeof(uintptr_t), n); } -/** - * @internal This function moves cons head value and copies up to *n* - * objects from the ring to the user provided obj_table. 
- */ -static __rte_always_inline unsigned int -__rte_ring_do_dequeue_start(struct rte_ring *r, void *obj_table, - uint32_t esize, uint32_t n, enum rte_ring_queue_behavior behavior, - uint32_t *available) -{ - uint32_t avail, head, next; - - switch (r->cons.sync_type) { - case RTE_RING_SYNC_ST: - n = __rte_ring_move_cons_head(r, RTE_RING_SYNC_ST, n, - behavior, &head, &next, &avail); - break; - case RTE_RING_SYNC_MT_HTS: - n = __rte_ring_hts_move_cons_head(r, n, behavior, - &head, &avail); - break; - case RTE_RING_SYNC_MT: - case RTE_RING_SYNC_MT_RTS: - default: - /* unsupported mode, shouldn't be here */ - RTE_ASSERT(0); - n = 0; - avail = 0; - } - - if (n != 0) - __rte_ring_dequeue_elems(r, head, obj_table, esize, n); - - if (available != NULL) - *available = avail - n; - return n; -} - /** * Start to dequeue several objects from the ring. * Note that user has to call appropriate dequeue_finish() diff --git a/lib/librte_ring/rte_ring_peek_zc.h b/lib/librte_ring/rte_ring_peek_zc.h index cb3bbd067..bc8252a18 100644 --- a/lib/librte_ring/rte_ring_peek_zc.h +++ b/lib/librte_ring/rte_ring_peek_zc.h @@ -72,7 +72,7 @@ extern "C" { #endif -#include +#include /** * Ring zero-copy information structure. diff --git a/lib/librte_ring/rte_ring_rts.h b/lib/librte_ring/rte_ring_rts.h index afc12abe2..83d9903e2 100644 --- a/lib/librte_ring/rte_ring_rts.h +++ b/lib/librte_ring/rte_ring_rts.h @@ -56,89 +56,7 @@ extern "C" { #endif -#include - -/** - * @internal Enqueue several objects on the RTS ring. - * - * @param r - * A pointer to the ring structure. - * @param obj_table - * A pointer to a table of objects. - * @param esize - * The size of ring element, in bytes. It must be a multiple of 4. - * This must be the same value used while creating the ring. Otherwise - * the results are undefined. - * @param n - * The number of objects to add in the ring from the obj_table. - * @param behavior - * RTE_RING_QUEUE_FIXED: Enqueue a fixed number of items from a ring - * RTE_RING_QUEUE_VARIABLE: Enqueue as many items as possible from ring - * @param free_space - * returns the amount of space after the enqueue operation has finished - * @return - * Actual number of objects enqueued. - * If behavior == RTE_RING_QUEUE_FIXED, this will be 0 or n only. - */ -static __rte_always_inline unsigned int -__rte_ring_do_rts_enqueue_elem(struct rte_ring *r, const void *obj_table, - uint32_t esize, uint32_t n, enum rte_ring_queue_behavior behavior, - uint32_t *free_space) -{ - uint32_t free, head; - - n = __rte_ring_rts_move_prod_head(r, n, behavior, &head, &free); - - if (n != 0) { - __rte_ring_enqueue_elems(r, head, obj_table, esize, n); - __rte_ring_rts_update_tail(&r->rts_prod); - } - - if (free_space != NULL) - *free_space = free - n; - return n; -} - -/** - * @internal Dequeue several objects from the RTS ring. - * - * @param r - * A pointer to the ring structure. - * @param obj_table - * A pointer to a table of objects. - * @param esize - * The size of ring element, in bytes. It must be a multiple of 4. - * This must be the same value used while creating the ring. Otherwise - * the results are undefined. - * @param n - * The number of objects to pull from the ring. - * @param behavior - * RTE_RING_QUEUE_FIXED: Dequeue a fixed number of items from a ring - * RTE_RING_QUEUE_VARIABLE: Dequeue as many items as possible from ring - * @param available - * returns the number of remaining ring entries after the dequeue has finished - * @return - * - Actual number of objects dequeued. 
- * If behavior == RTE_RING_QUEUE_FIXED, this will be 0 or n only. - */ -static __rte_always_inline unsigned int -__rte_ring_do_rts_dequeue_elem(struct rte_ring *r, void *obj_table, - uint32_t esize, uint32_t n, enum rte_ring_queue_behavior behavior, - uint32_t *available) -{ - uint32_t entries, head; - - n = __rte_ring_rts_move_cons_head(r, n, behavior, &head, &entries); - - if (n != 0) { - __rte_ring_dequeue_elems(r, head, obj_table, esize, n); - __rte_ring_rts_update_tail(&r->rts_cons); - } - - if (available != NULL) - *available = entries - n; - return n; -} +#include /** * Enqueue several objects on the RTS ring (multi-producers safe).