[v8,03/14] eal: use barrier intrinsics

Message ID 1682997341-2271-4-git-send-email-roretzla@linux.microsoft.com (mailing list archive)
State Superseded, archived
Delegated to: Thomas Monjalon
Series: msvc integration changes

Checks

Context         Check     Description
ci/checkpatch   success   coding style OK

Commit Message

Tyler Retzlaff May 2, 2023, 3:15 a.m. UTC
  Inline assembly is not supported for MSVC x64. Instead, expand
rte_compiler_barrier as the _ReadWriteBarrier intrinsic and rte_smp_mb
as the _mm_mfence intrinsic.

Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Konstantin Ananyev <konstantin.v.ananyev@yandex.ru>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
---
 lib/eal/include/generic/rte_atomic.h | 4 ++++
 lib/eal/x86/include/rte_atomic.h     | 4 ++++
 2 files changed, 8 insertions(+)
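
For illustration only (not part of the patch), here is a minimal standalone
sketch of how the two barriers expand under MSVC versus GCC/Clang. The macro
and variable names are placeholders, not the DPDK ones, and the GCC fallback
uses a plain mfence rather than DPDK's lock-addl idiom to keep the comparison
with _mm_mfence obvious:

#include <stdio.h>

#ifdef _MSC_VER
#include <intrin.h>
/* Compiler-level barrier only: prevents the compiler from reordering
 * memory accesses across this point; emits no instruction. */
#define cc_barrier()  _ReadWriteBarrier()
/* Full hardware fence: emits MFENCE, ordering prior stores before
 * later loads as observed by other CPUs. */
#define smp_barrier() _mm_mfence()
#else
#define cc_barrier()  __asm__ __volatile__("" ::: "memory")
#define smp_barrier() __asm__ __volatile__("mfence" ::: "memory")
#endif

static volatile int flag;

int main(void)
{
	flag = 1;
	cc_barrier();   /* keep the store above this point at compile time */
	smp_barrier();  /* make the store globally visible before later loads */
	printf("flag = %d\n", flag);
	return 0;
}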
  

Patch

diff --git a/lib/eal/include/generic/rte_atomic.h b/lib/eal/include/generic/rte_atomic.h
index 234b268..e973184 100644
--- a/lib/eal/include/generic/rte_atomic.h
+++ b/lib/eal/include/generic/rte_atomic.h
@@ -116,9 +116,13 @@ 
  * Guarantees that operation reordering does not occur at compile time
  * for operations directly before and after the barrier.
  */
+#ifndef RTE_TOOLCHAIN_MSVC
 #define	rte_compiler_barrier() do {		\
 	asm volatile ("" : : : "memory");	\
 } while(0)
+#else
+#define rte_compiler_barrier() _ReadWriteBarrier()
+#endif
 
 /**
  * Synchronization fence between threads based on the specified memory order.
diff --git a/lib/eal/x86/include/rte_atomic.h b/lib/eal/x86/include/rte_atomic.h
index f2ee1a9..569d3ab 100644
--- a/lib/eal/x86/include/rte_atomic.h
+++ b/lib/eal/x86/include/rte_atomic.h
@@ -66,11 +66,15 @@ 
 static __rte_always_inline void
 rte_smp_mb(void)
 {
+#ifndef RTE_TOOLCHAIN_MSVC
 #ifdef RTE_ARCH_I686
 	asm volatile("lock addl $0, -128(%%esp); " ::: "memory");
 #else
 	asm volatile("lock addl $0, -128(%%rsp); " ::: "memory");
 #endif
+#else
+	_mm_mfence();
+#endif
 }
 
 #define rte_io_mb() rte_mb()
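
As a usage note (not part of the patch), a sketch of the classic
store-then-load case where the full barrier matters and a compiler barrier
alone is not enough; the function and variable names are illustrative:

#include <rte_atomic.h>

/* Each thread sets its own flag, issues a full barrier, then reads the
 * peer's flag. Without rte_smp_mb() the CPU may reorder the load before
 * the store (store buffering), and both threads could observe 0. */
static volatile int flag0, flag1;

static int
thread0_try_enter(void)
{
	flag0 = 1;
	rte_smp_mb();       /* expands to MFENCE under MSVC with this patch */
	return flag1 == 0;  /* only enter if the peer has not signalled */
}

static int
thread1_try_enter(void)
{
	flag1 = 1;
	rte_smp_mb();
	return flag0 == 0;
}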