From patchwork Fri Feb 22 16:06:55 2019
X-Patchwork-Submitter: "Eads, Gage"
X-Patchwork-Id: 50465
X-Patchwork-Delegate: thomas@monjalon.net
From: Gage Eads
To: dev@dpdk.org
Cc: olivier.matz@6wind.com, arybchenko@solarflare.com,
 bruce.richardson@intel.com, konstantin.ananyev@intel.com, gavin.hu@arm.com,
 Honnappa.Nagarahalli@arm.com, nd@arm.com, thomas@monjalon.net
Date: Fri, 22 Feb 2019 10:06:55 -0600
Message-Id: <20190222160655.3346-8-gage.eads@intel.com>
X-Mailer: git-send-email 2.13.6
In-Reply-To: <20190222160655.3346-1-gage.eads@intel.com>
References: <20190222160655.3346-1-gage.eads@intel.com>
Subject: [dpdk-dev] [PATCH 7/7] mempool/stack: add non-blocking stack mempool handler
List-Id: DPDK patches and discussions <dev@dpdk.org>

This commit adds support for a non-blocking (linked list based) stack
mempool handler.

In mempool_perf_autotest the lock-based stack outperforms the
non-blocking handler for certain lcore/alloc count/free count
combinations*, however:
- For applications with preemptible pthreads, a lock-based stack's
  worst-case performance (i.e. one thread being preempted while holding
  the spinlock) is much worse than the non-blocking stack's.
- Using per-thread mempool caches will largely mitigate the performance
  difference.

*Test setup: x86_64 build with default config, dual-socket Xeon E5-2699 v4,
running on isolcpus cores with a tickless scheduler. The lock-based stack's
rate_persec was 0.6x-3.5x the non-blocking stack's.

Signed-off-by: Gage Eads
Reviewed-by: Olivier Matz
---
 doc/guides/prog_guide/env_abstraction_layer.rst |  5 +++++
 doc/guides/rel_notes/release_19_05.rst          |  5 +++++
 drivers/mempool/stack/rte_mempool_stack.c       | 26 +++++++++++++++++++++++--
 3 files changed, 34 insertions(+), 2 deletions(-)

diff --git a/doc/guides/prog_guide/env_abstraction_layer.rst b/doc/guides/prog_guide/env_abstraction_layer.rst
index 929d76dba..5c2dbc706 100644
--- a/doc/guides/prog_guide/env_abstraction_layer.rst
+++ b/doc/guides/prog_guide/env_abstraction_layer.rst
@@ -541,6 +541,11 @@ Known Issues
   5. It MUST not be used by multi-producer/consumer pthreads, whose
      scheduling policies are SCHED_FIFO or SCHED_RR.
 
+  Alternatively, applications can use the non-blocking stack mempool handler. When considering this handler, note that:
+
+  - it is currently limited to the x86_64 platform, because it uses an instruction (16-byte compare-and-swap) that is not yet available on other platforms.
+  - it has worse average-case performance than the non-preemptive rte_ring, but software caching (e.g. the mempool cache) can mitigate this by reducing the number of stack accesses.
+
   rte_timer
 
   Running ``rte_timer_manage()`` on a non-EAL pthread is not allowed. However, resetting/stopping the timer from a non-EAL pthread is allowed.
diff --git a/doc/guides/rel_notes/release_19_05.rst b/doc/guides/rel_notes/release_19_05.rst
index 52c5ba78e..111a93ea6 100644
--- a/doc/guides/rel_notes/release_19_05.rst
+++ b/doc/guides/rel_notes/release_19_05.rst
@@ -74,6 +74,11 @@ New Features
   The library supports two stack implementations: lock-based and non-blocking.
   The non-blocking implementation is currently limited to x86-64 platforms.
 
+* **Added Non-blocking Stack Mempool Handler.**
+
+  Added a new non-blocking stack handler, which uses the newly added stack
+  library.
+
 
 Removed Items
 -------------
diff --git a/drivers/mempool/stack/rte_mempool_stack.c b/drivers/mempool/stack/rte_mempool_stack.c
index 25ccdb9af..eae71aa0c 100644
--- a/drivers/mempool/stack/rte_mempool_stack.c
+++ b/drivers/mempool/stack/rte_mempool_stack.c
@@ -7,7 +7,7 @@
 #include <rte_stack.h>
 
 static int
-stack_alloc(struct rte_mempool *mp)
+__stack_alloc(struct rte_mempool *mp, uint32_t flags)
 {
 	char name[RTE_STACK_NAMESIZE];
 	struct rte_stack *s;
@@ -20,7 +20,7 @@ stack_alloc(struct rte_mempool *mp)
 		return -rte_errno;
 	}
 
-	s = rte_stack_create(name, mp->size, mp->socket_id, 0);
+	s = rte_stack_create(name, mp->size, mp->socket_id, flags);
 	if (s == NULL)
 		return -rte_errno;
 
@@ -30,6 +30,18 @@ stack_alloc(struct rte_mempool *mp)
 }
 
 static int
+stack_alloc(struct rte_mempool *mp)
+{
+	return __stack_alloc(mp, 0);
+}
+
+static int
+nb_stack_alloc(struct rte_mempool *mp)
+{
+	return __stack_alloc(mp, STACK_F_NB);
+}
+
+static int
 stack_enqueue(struct rte_mempool *mp, void * const *obj_table,
 	      unsigned int n)
 {
@@ -72,4 +84,14 @@ static struct rte_mempool_ops ops_stack = {
 	.get_count = stack_get_count
 };
 
+static struct rte_mempool_ops ops_nb_stack = {
+	.name = "nb_stack",
+	.alloc = nb_stack_alloc,
+	.free = stack_free,
+	.enqueue = stack_enqueue,
+	.dequeue = stack_dequeue,
+	.get_count = stack_get_count
+};
+
 MEMPOOL_REGISTER_OPS(ops_stack);
+MEMPOOL_REGISTER_OPS(ops_nb_stack);
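
For readers unfamiliar with mempool ops selection, the sketch below (not part of
this patch) shows how an application might bind a mempool to the "nb_stack"
handler registered above, using the standard rte_mempool API. The pool name,
object count, object size, and cache size are made-up values for illustration.

/* Illustrative sketch only -- not part of this patch. The pool name, object
 * count, object size, and cache size are assumptions, not values from the
 * patch or the DPDK documentation.
 */
#include <rte_mempool.h>

#define EXAMPLE_NUM_OBJS   8192
#define EXAMPLE_OBJ_SIZE   2048
#define EXAMPLE_CACHE_SIZE 256

static struct rte_mempool *
example_create_nb_stack_pool(void)
{
	struct rte_mempool *mp;

	/* Create an empty mempool, then bind it to the "nb_stack" ops
	 * before populating it with objects.
	 */
	mp = rte_mempool_create_empty("example_nb_pool", EXAMPLE_NUM_OBJS,
				      EXAMPLE_OBJ_SIZE, EXAMPLE_CACHE_SIZE,
				      0, SOCKET_ID_ANY, 0);
	if (mp == NULL)
		return NULL;

	if (rte_mempool_set_ops_byname(mp, "nb_stack", NULL) != 0) {
		rte_mempool_free(mp);
		return NULL;
	}

	if (rte_mempool_populate_default(mp) < 0) {
		rte_mempool_free(mp);
		return NULL;
	}

	return mp;
}

A non-zero per-lcore cache (EXAMPLE_CACHE_SIZE here) is worth noting: as the
documentation hunk above says, the mempool cache reduces the number of stack
accesses and so helps offset the non-blocking stack's higher per-operation cost.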