[v2,2/6] eal: use rte atomic thread fence
Use rte_atomic_thread_fence instead of directly using the
__atomic_thread_fence gcc builtin.

Update rte_mcslock.h to use rte_atomic_thread_fence instead of
directly using the internal __rte_atomic_thread_fence.
Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
---
 lib/eal/common/eal_common_trace.c | 2 +-
 lib/eal/include/rte_mcslock.h     | 4 ++--
2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/lib/eal/common/eal_common_trace.c b/lib/eal/common/eal_common_trace.c
--- a/lib/eal/common/eal_common_trace.c
+++ b/lib/eal/common/eal_common_trace.c
@@ -526,7 +526,7 @@ rte_trace_mode rte_trace_mode_get(void)
/* Add the trace point at tail */
STAILQ_INSERT_TAIL(&tp_list, tp, next);
- __atomic_thread_fence(rte_memory_order_release);
+ rte_atomic_thread_fence(rte_memory_order_release);
/* All Good !!! */
return 0;

diff --git a/lib/eal/include/rte_mcslock.h b/lib/eal/include/rte_mcslock.h
--- a/lib/eal/include/rte_mcslock.h
+++ b/lib/eal/include/rte_mcslock.h
@@ -83,7 +83,7 @@
* store to prev->next. Otherwise it will cause a deadlock. Need a
* store-load barrier.
*/
- __rte_atomic_thread_fence(rte_memory_order_acq_rel);
+ rte_atomic_thread_fence(rte_memory_order_acq_rel);
/* If the lock has already been acquired, it first atomically
* places the node at the end of the queue and then proceeds
* to spin on me->locked until the previous lock holder resets
@@ -117,7 +117,7 @@
* while-loop first. This has the potential to cause a
* deadlock. Need a load barrier.
*/
- __rte_atomic_thread_fence(rte_memory_order_acquire);
+ rte_atomic_thread_fence(rte_memory_order_acquire);
/* More nodes added to the queue by other CPUs.
* Wait until the next pointer is set.
*/