From patchwork Tue May 26 09:01:07 2020
X-Patchwork-Submitter: Phil Yang
X-Patchwork-Id: 70583
X-Patchwork-Delegate: david.marchand@redhat.com
From: Phil Yang <phil.yang@arm.com>
To: dev@dpdk.org
Cc: mattias.ronnblom@ericsson.com, mb@smartsharesystems.com,
 stephen@networkplumber.org, thomas@monjalon.net, bruce.richardson@intel.com,
 ferruh.yigit@intel.com, hemant.agrawal@nxp.com, jerinj@marvell.com,
 ktraynor@redhat.com, konstantin.ananyev@intel.com,
 maxime.coquelin@redhat.com, olivier.matz@6wind.com,
 harry.van.haaren@intel.com, erik.g.carrillo@intel.com,
 drc@linux.vnet.ibm.com, david.marchand@redhat.com, zhaoyan.chen@intel.com,
 ola.liljedahl@arm.com, honnappa.nagarahalli@arm.com, ruifeng.wang@arm.com,
 phil.yang@arm.com, nd@arm.com
Date: Tue, 26 May 2020 17:01:07 +0800
Message-Id: <1590483667-10318-5-git-send-email-phil.yang@arm.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1590483667-10318-1-git-send-email-phil.yang@arm.com>
References: <1589270586-4480-1-git-send-email-phil.yang@arm.com>
 <1590483667-10318-1-git-send-email-phil.yang@arm.com>
Subject: [dpdk-dev] [PATCH v5 4/4] eal/atomic: add wrapper for c11 atomic
 thread fence

Provide a wrapper for the __atomic_thread_fence builtin so that an
optimized implementation can be used for the __ATOMIC_SEQ_CST memory
order on x86 platforms.
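
For example, a call site that currently issues the builtin directly
(a hypothetical snippet, not taken from this patch) can switch to the
wrapper and keep the same ordering guarantee while letting each
architecture pick the cheapest instruction:

	/* before: always maps straight to the compiler builtin,
	 * which emits a full mfence on x86 */
	__atomic_thread_fence(__ATOMIC_SEQ_CST);

	/* after: same semantics, but the __ATOMIC_SEQ_CST case is
	 * optimized on x86; other orders still use the builtin */
	rte_atomic_thread_fence(__ATOMIC_SEQ_CST);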
Suggested-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Signed-off-by: Phil Yang <phil.yang@arm.com>
Reviewed-by: Ola Liljedahl <ola.liljedahl@arm.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 lib/librte_eal/arm/include/rte_atomic_32.h  |  6 ++++++
 lib/librte_eal/arm/include/rte_atomic_64.h  |  6 ++++++
 lib/librte_eal/include/generic/rte_atomic.h |  6 ++++++
 lib/librte_eal/ppc/include/rte_atomic.h     |  6 ++++++
 lib/librte_eal/x86/include/rte_atomic.h     | 17 +++++++++++++++++
 5 files changed, 41 insertions(+)

diff --git a/lib/librte_eal/arm/include/rte_atomic_32.h b/lib/librte_eal/arm/include/rte_atomic_32.h
index 7dc0d06..dbe7cc6 100644
--- a/lib/librte_eal/arm/include/rte_atomic_32.h
+++ b/lib/librte_eal/arm/include/rte_atomic_32.h
@@ -37,6 +37,12 @@ extern "C" {
 
 #define rte_cio_rmb() rte_rmb()
 
+static __rte_always_inline void
+rte_atomic_thread_fence(int mo)
+{
+	__atomic_thread_fence(mo);
+}
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/librte_eal/arm/include/rte_atomic_64.h b/lib/librte_eal/arm/include/rte_atomic_64.h
index 7b7099c..22ff8ec 100644
--- a/lib/librte_eal/arm/include/rte_atomic_64.h
+++ b/lib/librte_eal/arm/include/rte_atomic_64.h
@@ -41,6 +41,12 @@ extern "C" {
 
 #define rte_cio_rmb()	asm volatile("dmb oshld" : : : "memory")
 
+static __rte_always_inline void
+rte_atomic_thread_fence(int mo)
+{
+	__atomic_thread_fence(mo);
+}
+
 /*------------------------ 128 bit atomic operations -------------------------*/
 
 #if defined(__ARM_FEATURE_ATOMICS) || defined(RTE_ARM_FEATURE_ATOMICS)
diff --git a/lib/librte_eal/include/generic/rte_atomic.h b/lib/librte_eal/include/generic/rte_atomic.h
index e6ab15a..5b941db 100644
--- a/lib/librte_eal/include/generic/rte_atomic.h
+++ b/lib/librte_eal/include/generic/rte_atomic.h
@@ -158,6 +158,12 @@ static inline void rte_cio_rmb(void);
 	asm volatile ("" : : : "memory"); \
 } while(0)
 
+/**
+ * Synchronization fence between threads based on the specified
+ * memory order.
+ */
+static inline void rte_atomic_thread_fence(int mo);
+
 /*------------------------- 16 bit atomic operations -------------------------*/
 
 /**
diff --git a/lib/librte_eal/ppc/include/rte_atomic.h b/lib/librte_eal/ppc/include/rte_atomic.h
index 7e3e131..91c5f30 100644
--- a/lib/librte_eal/ppc/include/rte_atomic.h
+++ b/lib/librte_eal/ppc/include/rte_atomic.h
@@ -40,6 +40,12 @@ extern "C" {
 
 #define rte_cio_rmb() rte_rmb()
 
+static __rte_always_inline void
+rte_atomic_thread_fence(int mo)
+{
+	__atomic_thread_fence(mo);
+}
+
 /*------------------------- 16 bit atomic operations -------------------------*/
 
 /* To be compatible with Power7, use GCC built-in functions for 16 bit
  * operations */
diff --git a/lib/librte_eal/x86/include/rte_atomic.h b/lib/librte_eal/x86/include/rte_atomic.h
index b9dcd30..bd256e7 100644
--- a/lib/librte_eal/x86/include/rte_atomic.h
+++ b/lib/librte_eal/x86/include/rte_atomic.h
@@ -83,6 +83,23 @@ rte_smp_mb(void)
 
 #define rte_cio_rmb() rte_compiler_barrier()
 
+/**
+ * Synchronization fence between threads based on the specified
+ * memory order.
+ *
+ * On x86 the __atomic_thread_fence(__ATOMIC_SEQ_CST) generates
+ * full 'mfence' which is quite expensive. The optimized
+ * implementation of rte_smp_mb is used instead.
+ */
+static __rte_always_inline void
+rte_atomic_thread_fence(int mo)
+{
+	if (mo == __ATOMIC_SEQ_CST)
+		rte_smp_mb();
+	else
+		__atomic_thread_fence(mo);
+}
+
 /*------------------------- 16 bit atomic operations -------------------------*/
 
 #ifndef RTE_FORCE_INTRINSICS
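
A usage note (a minimal sketch, not part of this patch; the names
flag1/flag2 and thread1 are hypothetical): the sequentially consistent
fence matters for the store->load case, the one reordering x86 actually
performs. In the classic Dekker-style handshake below, omitting the
fence would allow the load of flag2 to be reordered before the store to
flag1 becomes visible, letting both threads enter the critical section:

	static int flag1, flag2;

	void thread1(void)
	{
		__atomic_store_n(&flag1, 1, __ATOMIC_RELAXED);
		/* StoreLoad barrier: the optimized rte_smp_mb() on x86
		 * instead of a full mfence, per the comment above */
		rte_atomic_thread_fence(__ATOMIC_SEQ_CST);
		if (__atomic_load_n(&flag2, __ATOMIC_RELAXED) == 0) {
			/* thread2 has not claimed entry; safe to proceed */
		}
	}

	/* thread2 mirrors this with flag1 and flag2 swapped. */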