From patchwork Tue Jul  7 11:13:20 2020
X-Patchwork-Submitter: Phil Yang
X-Patchwork-Id: 73421
X-Patchwork-Delegate: jerinj@marvell.com
From: Phil Yang
To: thomas@monjalon.net, erik.g.carrillo@intel.com, dev@dpdk.org
Cc: jerinj@marvell.com, Honnappa.Nagarahalli@arm.com, drc@linux.vnet.ibm.com, Ruifeng.Wang@arm.com, Dharmik.Thakkar@arm.com, nd@arm.com, david.marchand@redhat.com, mdr@ashroe.eu, nhorman@tuxdriver.com, dodji@redhat.com, stable@dpdk.org
Date: Tue, 7 Jul 2020 19:13:20 +0800
Message-Id: <1594120403-17643-1-git-send-email-phil.yang@arm.com>
In-Reply-To: <1593667604-12029-1-git-send-email-phil.yang@arm.com>
References: <1593667604-12029-1-git-send-email-phil.yang@arm.com>
Subject: [dpdk-dev] [PATCH v3 1/4] eventdev: fix race condition on timer list counter

The n_poll_lcores counter and poll_lcores array are shared between
lcores, but updates to them are not protected by the spinlock on each
lcore's timer list. The read-modify-write operations on the counter are
not atomic, so there is a potential race condition between lcores: two
lcores can read the same counter value and claim the same array slot.
Use C11 atomics with RELAXED ordering to prevent such conflicts.

Fixes: cc7b73ea9e3b ("eventdev: add new software timer adapter")
Cc: erik.g.carrillo@intel.com
Cc: stable@dpdk.org

Signed-off-by: Phil Yang
Reviewed-by: Dharmik Thakkar
Reviewed-by: Ruifeng Wang
Acked-by: Erik Gabriel Carrillo
---
v2: Align the code. (Erik)
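To illustrate the race and the fix outside the adapter, here is a minimal
standalone sketch; the variable and function names are hypothetical, and it
only models the pattern the diff below changes:

#include <stdint.h>

#define MAX_LCORES 128

static unsigned int poll_lcores[MAX_LCORES];
static int n_poll_lcores;

/* Racy: two lcores can read the same n_poll_lcores value, write the
 * same array slot, and leave the counter incremented by only one.
 */
static void register_lcore_racy(unsigned int lcore)
{
	poll_lcores[n_poll_lcores++] = lcore;	/* non-atomic RMW */
}

/* Fixed: __atomic_fetch_add() reserves a unique slot for each caller.
 * RELAXED ordering suffices because the surrounding flag/lock protocol
 * provides inter-thread ordering; the counter only needs atomicity.
 */
static void register_lcore_atomic(unsigned int lcore)
{
	int n = __atomic_fetch_add(&n_poll_lcores, 1, __ATOMIC_RELAXED);

	__atomic_store_n(&poll_lcores[n], lcore, __ATOMIC_RELAXED);
}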
 lib/librte_eventdev/rte_event_timer_adapter.c | 16 ++++++++++++----
 1 file changed, 12 insertions(+), 4 deletions(-)

diff --git a/lib/librte_eventdev/rte_event_timer_adapter.c b/lib/librte_eventdev/rte_event_timer_adapter.c
index 2321803..370ea40 100644
--- a/lib/librte_eventdev/rte_event_timer_adapter.c
+++ b/lib/librte_eventdev/rte_event_timer_adapter.c
@@ -583,6 +583,7 @@ swtim_callback(struct rte_timer *tim)
 	uint16_t nb_evs_invalid = 0;
 	uint64_t opaque;
 	int ret;
+	int n_lcores;

 	opaque = evtim->impl_opaque[1];
 	adapter = (struct rte_event_timer_adapter *)(uintptr_t)opaque;
@@ -605,8 +606,12 @@ swtim_callback(struct rte_timer *tim)
 					      "with immediate expiry value");
 		}

-		if (unlikely(rte_atomic16_test_and_set(&sw->in_use[lcore].v)))
-			sw->poll_lcores[sw->n_poll_lcores++] = lcore;
+		if (unlikely(rte_atomic16_test_and_set(&sw->in_use[lcore].v))) {
+			n_lcores = __atomic_fetch_add(&sw->n_poll_lcores, 1,
+						__ATOMIC_RELAXED);
+			__atomic_store_n(&sw->poll_lcores[n_lcores], lcore,
+					__ATOMIC_RELAXED);
+		}
 	} else {
 		EVTIM_BUF_LOG_DBG("buffered an event timer expiry event");

@@ -1011,6 +1016,7 @@ __swtim_arm_burst(const struct rte_event_timer_adapter *adapter,
 	uint32_t lcore_id = rte_lcore_id();
 	struct rte_timer *tim, *tims[nb_evtims];
 	uint64_t cycles;
+	int n_lcores;

 #ifdef RTE_LIBRTE_EVENTDEV_DEBUG
 	/* Check that the service is running. */
@@ -1033,8 +1039,10 @@ __swtim_arm_burst(const struct rte_event_timer_adapter *adapter,
 	if (unlikely(rte_atomic16_test_and_set(&sw->in_use[lcore_id].v))) {
 		EVTIM_LOG_DBG("Adding lcore id = %u to list of lcores to poll",
 			      lcore_id);
-		sw->poll_lcores[sw->n_poll_lcores] = lcore_id;
-		++sw->n_poll_lcores;
+		n_lcores = __atomic_fetch_add(&sw->n_poll_lcores, 1,
+					__ATOMIC_RELAXED);
+		__atomic_store_n(&sw->poll_lcores[n_lcores], lcore_id,
+				__ATOMIC_RELAXED);
 	}

 	ret = rte_mempool_get_bulk(sw->tim_pool, (void **)tims,

From patchwork Tue Jul  7 11:13:21 2020
X-Patchwork-Submitter: Phil Yang
X-Patchwork-Id: 73422
X-Patchwork-Delegate: jerinj@marvell.com
From: Phil Yang
To: thomas@monjalon.net, erik.g.carrillo@intel.com, dev@dpdk.org
Cc: jerinj@marvell.com, Honnappa.Nagarahalli@arm.com, drc@linux.vnet.ibm.com, Ruifeng.Wang@arm.com, Dharmik.Thakkar@arm.com, nd@arm.com, david.marchand@redhat.com, mdr@ashroe.eu, nhorman@tuxdriver.com, dodji@redhat.com
Date: Tue, 7 Jul 2020 19:13:21 +0800
Message-Id: <1594120403-17643-2-git-send-email-phil.yang@arm.com>
In-Reply-To: <1594120403-17643-1-git-send-email-phil.yang@arm.com>
References: <1593667604-12029-1-git-send-email-phil.yang@arm.com> <1594120403-17643-1-git-send-email-phil.yang@arm.com>
Subject: [dpdk-dev] [PATCH v3 2/4] eventdev: use C11 atomics for lcore timer armed flag

The in_use flag is a per-lcore variable that is not shared between
lcores in the normal case, and accesses to it are ordered on the same
core. However, if a non-EAL thread picks the highest lcore to insert
timers into, there is a possibility of conflicts on this flag between
threads, so an atomic CAS operation is needed. Use the C11 atomic CAS
instead of the generic rte_atomic operations to avoid the unnecessary
barrier on aarch64.

Signed-off-by: Phil Yang
Reviewed-by: Dharmik Thakkar
Reviewed-by: Ruifeng Wang
Acked-by: Erik Gabriel Carrillo
---
v2:
1. Make the code comments more accurate. (Erik)
2. Define the in_use flag as an unsigned type. (Stephen)

 lib/librte_eventdev/rte_event_timer_adapter.c | 15 +++++++++++----
 1 file changed, 11 insertions(+), 4 deletions(-)
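As a simplified model of the CAS pattern this patch adopts (hypothetical
names; a sketch, not the adapter code), the flag transitions 0 -> 1 exactly
once, and only the thread whose CAS succeeds registers the lcore:

#include <stdbool.h>
#include <stdint.h>

static uint16_t in_use;	/* 0 = this lcore's timer list is not in use */

/* Returns true for exactly one caller: the one that flips 0 -> 1.
 * RELAXED suffices for both success and failure orderings because the
 * flag carries no payload; it only de-duplicates list registration.
 */
static bool claim_timer_list(void)
{
	uint16_t expected = 0;

	return __atomic_compare_exchange_n(&in_use, &expected, 1,
					   0 /* strong CAS */,
					   __ATOMIC_RELAXED,
					   __ATOMIC_RELAXED);
}

Unlike rte_atomic16_test_and_set(), this compiles without the full barriers
that the generic rte_atomic operations imply on aarch64.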
diff --git a/lib/librte_eventdev/rte_event_timer_adapter.c b/lib/librte_eventdev/rte_event_timer_adapter.c
index 370ea40..6d01a34 100644
--- a/lib/librte_eventdev/rte_event_timer_adapter.c
+++ b/lib/librte_eventdev/rte_event_timer_adapter.c
@@ -554,7 +554,7 @@ struct swtim {
 	uint32_t timer_data_id;
 	/* Track which cores have actually armed a timer */
 	struct {
-		rte_atomic16_t v;
+		uint16_t v;
 	} __rte_cache_aligned in_use[RTE_MAX_LCORE];
 	/* Track which cores' timer lists should be polled */
 	unsigned int poll_lcores[RTE_MAX_LCORE];
@@ -606,7 +606,8 @@ swtim_callback(struct rte_timer *tim)
 					      "with immediate expiry value");
 		}

-		if (unlikely(rte_atomic16_test_and_set(&sw->in_use[lcore].v))) {
+		if (unlikely(sw->in_use[lcore].v == 0)) {
+			sw->in_use[lcore].v = 1;
 			n_lcores = __atomic_fetch_add(&sw->n_poll_lcores, 1,
 						__ATOMIC_RELAXED);
 			__atomic_store_n(&sw->poll_lcores[n_lcores], lcore,
@@ -834,7 +835,7 @@ swtim_init(struct rte_event_timer_adapter *adapter)

 	/* Initialize the variables that track in-use timer lists */
 	for (i = 0; i < RTE_MAX_LCORE; i++)
-		rte_atomic16_init(&sw->in_use[i].v);
+		sw->in_use[i].v = 0;

 	/* Initialize the timer subsystem and allocate timer data instance */
 	ret = rte_timer_subsystem_init();
@@ -1017,6 +1018,8 @@ __swtim_arm_burst(const struct rte_event_timer_adapter *adapter,
 	struct rte_timer *tim, *tims[nb_evtims];
 	uint64_t cycles;
 	int n_lcores;
+	/* Timer list for this lcore is not in use. */
+	uint16_t exp_state = 0;

 #ifdef RTE_LIBRTE_EVENTDEV_DEBUG
 	/* Check that the service is running. */
@@ -1035,8 +1038,12 @@ __swtim_arm_burst(const struct rte_event_timer_adapter *adapter,
 	/* If this is the first time we're arming an event timer on this lcore,
 	 * mark this lcore as "in use"; this will cause the service
 	 * function to process the timer list that corresponds to this lcore.
+	 * The atomic CAS operation can prevent the race condition on in_use
+	 * flag between multiple non-EAL threads.
 	 */
-	if (unlikely(rte_atomic16_test_and_set(&sw->in_use[lcore_id].v))) {
+	if (unlikely(__atomic_compare_exchange_n(&sw->in_use[lcore_id].v,
+			&exp_state, 1, 0,
+			__ATOMIC_RELAXED, __ATOMIC_RELAXED))) {
 		EVTIM_LOG_DBG("Adding lcore id = %u to list of lcores to poll",
 			      lcore_id);
 		n_lcores = __atomic_fetch_add(&sw->n_poll_lcores, 1,

From patchwork Tue Jul  7 11:13:22 2020
X-Patchwork-Submitter: Phil Yang
X-Patchwork-Id: 73423
X-Patchwork-Delegate: jerinj@marvell.com
From: Phil Yang
To: thomas@monjalon.net, erik.g.carrillo@intel.com, dev@dpdk.org
Cc: jerinj@marvell.com, Honnappa.Nagarahalli@arm.com, drc@linux.vnet.ibm.com, Ruifeng.Wang@arm.com, Dharmik.Thakkar@arm.com, nd@arm.com, david.marchand@redhat.com, mdr@ashroe.eu, nhorman@tuxdriver.com, dodji@redhat.com
Date: Tue, 7 Jul 2020 19:13:22 +0800
Message-Id: <1594120403-17643-3-git-send-email-phil.yang@arm.com>
In-Reply-To: <1594120403-17643-1-git-send-email-phil.yang@arm.com>
References: <1593667604-12029-1-git-send-email-phil.yang@arm.com> <1594120403-17643-1-git-send-email-phil.yang@arm.com>
Subject: [dpdk-dev] [PATCH v3 3/4] eventdev: remove redundant code

No thread will access the impl_opaque data after a timer is canceled;
when a new timer is armed, the data is refilled. So the cleanup at
cancel time is unnecessary.

Signed-off-by: Phil Yang
Reviewed-by: Dharmik Thakkar
---
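A compact sketch of the lifecycle argument (simplified types and hypothetical
names, purely illustrative): impl_opaque is only dereferenced while the timer
is armed, and the arm path refills it before publishing that state, so
zeroing it at cancel time has no observable effect.

#include <stdint.h>

enum timer_state { NOT_ARMED, ARMED, CANCELED };

struct event_timer {
	enum timer_state state;
	uint64_t impl_opaque[2];	/* implementation-private words */
};

/* Arm overwrites impl_opaque before the timer is marked ARMED, so a
 * stale value left behind by an earlier cancel is never observed.
 */
static void arm(struct event_timer *t, uint64_t tim, uint64_t adapter)
{
	t->impl_opaque[0] = tim;
	t->impl_opaque[1] = adapter;
	t->state = ARMED;
}

/* Cancel can therefore skip zeroing impl_opaque. */
static void cancel(struct event_timer *t)
{
	t->state = CANCELED;
}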
 lib/librte_eventdev/rte_event_timer_adapter.c | 2 --
 1 file changed, 2 deletions(-)

diff --git a/lib/librte_eventdev/rte_event_timer_adapter.c b/lib/librte_eventdev/rte_event_timer_adapter.c
index 6d01a34..d75415c 100644
--- a/lib/librte_eventdev/rte_event_timer_adapter.c
+++ b/lib/librte_eventdev/rte_event_timer_adapter.c
@@ -1167,8 +1167,6 @@ swtim_cancel_burst(const struct rte_event_timer_adapter *adapter,
 		rte_mempool_put(sw->tim_pool, (void **)timp);

 		evtims[i]->state = RTE_EVENT_TIMER_CANCELED;
-		evtims[i]->impl_opaque[0] = 0;
-		evtims[i]->impl_opaque[1] = 0;

 		rte_smp_wmb();
 	}

From patchwork Tue Jul  7 11:13:23 2020
X-Patchwork-Submitter: Phil Yang
X-Patchwork-Id: 73424
X-Patchwork-Delegate: jerinj@marvell.com
From: Phil Yang
To: thomas@monjalon.net, erik.g.carrillo@intel.com, dev@dpdk.org
Cc: jerinj@marvell.com, Honnappa.Nagarahalli@arm.com, drc@linux.vnet.ibm.com, Ruifeng.Wang@arm.com, Dharmik.Thakkar@arm.com, nd@arm.com, david.marchand@redhat.com, mdr@ashroe.eu, nhorman@tuxdriver.com, dodji@redhat.com
Date: Tue, 7 Jul 2020 19:13:23 +0800
Message-Id: <1594120403-17643-4-git-send-email-phil.yang@arm.com>
In-Reply-To: <1594120403-17643-1-git-send-email-phil.yang@arm.com>
References: <1593667604-12029-1-git-send-email-phil.yang@arm.com> <1594120403-17643-1-git-send-email-phil.yang@arm.com>
Subject: [dpdk-dev] [PATCH v3 4/4] eventdev: relax smp barriers with C11 atomics

The impl_opaque field is shared between the timer arm and cancel
operations, and the state flag acts as a guard variable that keeps
updates of impl_opaque synchronized. The original code used rte_smp
barriers to achieve this. This patch uses C11 atomics with explicit
one-way memory ordering instead of the full barriers
rte_smp_wmb()/rte_smp_rmb(), avoiding unnecessary barriers on aarch64.

Since compilers can generate the same instructions for volatile and
non-volatile variables when the C11 __atomic built-ins are used, keep
the volatile keyword in front of the state enum to avoid an ABI break.

Signed-off-by: Phil Yang
Reviewed-by: Dharmik Thakkar
Reviewed-by: Ruifeng Wang
Acked-by: Erik Gabriel Carrillo
---
v3: Fix ABI issue: revert to 'volatile enum rte_event_timer_state type state'.
v2:
1. Removed implementation-specific opaque data cleanup code.
2. Replaced thread fence with atomic ACQUIRE/RELEASE ordering on state access.
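The ordering the diff below relies on can be modeled with a short
producer/consumer sketch (hypothetical names; a sketch under the assumption
that state is the only guard variable): the armer writes the payload and then
publishes state with RELEASE; the canceler loads state with ACQUIRE before
touching the payload. On aarch64 these typically compile to one-way barrier
instructions rather than the full barriers of rte_smp_wmb()/rte_smp_rmb().

#include <stdint.h>

enum timer_state { NOT_ARMED, ARMED, CANCELED };

struct event_timer {
	enum timer_state state;		/* guard variable */
	uint64_t impl_opaque[2];	/* payload guarded by state */
};

/* Producer (arm): everything sequenced before the RELEASE store is
 * visible to any thread whose ACQUIRE load observes ARMED.
 */
static void publish_armed(struct event_timer *t, uint64_t tim)
{
	t->impl_opaque[0] = tim;
	__atomic_store_n(&t->state, ARMED, __ATOMIC_RELEASE);
}

/* Consumer (cancel): the ACQUIRE load pairs with the RELEASE store
 * above, so reading impl_opaque afterwards is safe.
 */
static int try_cancel(struct event_timer *t, uint64_t *tim)
{
	enum timer_state s = __atomic_load_n(&t->state, __ATOMIC_ACQUIRE);

	if (s != ARMED)
		return -1;
	*tim = t->impl_opaque[0];	/* ordered after the ACQUIRE load */
	__atomic_store_n(&t->state, CANCELED, __ATOMIC_RELEASE);
	return 0;
}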
 lib/librte_eventdev/rte_event_timer_adapter.c | 55 ++++++++++++++++++---------
 1 file changed, 37 insertions(+), 18 deletions(-)

diff --git a/lib/librte_eventdev/rte_event_timer_adapter.c b/lib/librte_eventdev/rte_event_timer_adapter.c
index d75415c..eb2c93a 100644
--- a/lib/librte_eventdev/rte_event_timer_adapter.c
+++ b/lib/librte_eventdev/rte_event_timer_adapter.c
@@ -629,7 +629,8 @@ swtim_callback(struct rte_timer *tim)
 		sw->expired_timers[sw->n_expired_timers++] = tim;
 		sw->stats.evtim_exp_count++;

-		evtim->state = RTE_EVENT_TIMER_NOT_ARMED;
+		__atomic_store_n(&evtim->state, RTE_EVENT_TIMER_NOT_ARMED,
+				__ATOMIC_RELEASE);
 	}

 	if (event_buffer_batch_ready(&sw->buffer)) {
@@ -1020,6 +1021,7 @@ __swtim_arm_burst(const struct rte_event_timer_adapter *adapter,
 	int n_lcores;
 	/* Timer list for this lcore is not in use. */
 	uint16_t exp_state = 0;
+	enum rte_event_timer_state n_state;

 #ifdef RTE_LIBRTE_EVENTDEV_DEBUG
 	/* Check that the service is running. */
@@ -1060,30 +1062,36 @@ __swtim_arm_burst(const struct rte_event_timer_adapter *adapter,
 	}

 	for (i = 0; i < nb_evtims; i++) {
-		/* Don't modify the event timer state in these cases */
-		if (evtims[i]->state == RTE_EVENT_TIMER_ARMED) {
+		n_state = __atomic_load_n(&evtims[i]->state, __ATOMIC_ACQUIRE);
+		if (n_state == RTE_EVENT_TIMER_ARMED) {
 			rte_errno = EALREADY;
 			break;
-		} else if (!(evtims[i]->state == RTE_EVENT_TIMER_NOT_ARMED ||
-			     evtims[i]->state == RTE_EVENT_TIMER_CANCELED)) {
+		} else if (!(n_state == RTE_EVENT_TIMER_NOT_ARMED ||
+			     n_state == RTE_EVENT_TIMER_CANCELED)) {
 			rte_errno = EINVAL;
 			break;
 		}

 		ret = check_timeout(evtims[i], adapter);
 		if (unlikely(ret == -1)) {
-			evtims[i]->state = RTE_EVENT_TIMER_ERROR_TOOLATE;
+			__atomic_store_n(&evtims[i]->state,
+					RTE_EVENT_TIMER_ERROR_TOOLATE,
+					__ATOMIC_RELAXED);
 			rte_errno = EINVAL;
 			break;
 		} else if (unlikely(ret == -2)) {
-			evtims[i]->state = RTE_EVENT_TIMER_ERROR_TOOEARLY;
+			__atomic_store_n(&evtims[i]->state,
+					RTE_EVENT_TIMER_ERROR_TOOEARLY,
+					__ATOMIC_RELAXED);
 			rte_errno = EINVAL;
 			break;
 		}

 		if (unlikely(check_destination_event_queue(evtims[i],
 							   adapter) < 0)) {
-			evtims[i]->state = RTE_EVENT_TIMER_ERROR;
+			__atomic_store_n(&evtims[i]->state,
+					RTE_EVENT_TIMER_ERROR,
+					__ATOMIC_RELAXED);
 			rte_errno = EINVAL;
 			break;
 		}
@@ -1099,13 +1107,18 @@ __swtim_arm_burst(const struct rte_event_timer_adapter *adapter,
 				SINGLE, lcore_id, NULL, evtims[i]);
 		if (ret < 0) {
 			/* tim was in RUNNING or CONFIG state */
-			evtims[i]->state = RTE_EVENT_TIMER_ERROR;
+			__atomic_store_n(&evtims[i]->state,
+					RTE_EVENT_TIMER_ERROR,
+					__ATOMIC_RELEASE);
 			break;
 		}

-		rte_smp_wmb();
 		EVTIM_LOG_DBG("armed an event timer");
+		/* RELEASE ordering guarantees the adapter specific value
+		 * changes observed before the update of state.
+		 */
+		__atomic_store_n(&evtims[i]->state, RTE_EVENT_TIMER_ARMED,
+				__ATOMIC_RELEASE);
 	}

 	if (i < nb_evtims)
@@ -1132,6 +1145,7 @@ swtim_cancel_burst(const struct rte_event_timer_adapter *adapter,
 	struct rte_timer *timp;
 	uint64_t opaque;
 	struct swtim *sw = swtim_pmd_priv(adapter);
+	enum rte_event_timer_state n_state;

 #ifdef RTE_LIBRTE_EVENTDEV_DEBUG
 	/* Check that the service is running.
 	 */
@@ -1143,16 +1157,18 @@ swtim_cancel_burst(const struct rte_event_timer_adapter *adapter,

 	for (i = 0; i < nb_evtims; i++) {
 		/* Don't modify the event timer state in these cases */
-		if (evtims[i]->state == RTE_EVENT_TIMER_CANCELED) {
+		/* ACQUIRE ordering guarantees the access of implementation
+		 * specific opaque data under the correct state.
+		 */
+		n_state = __atomic_load_n(&evtims[i]->state, __ATOMIC_ACQUIRE);
+		if (n_state == RTE_EVENT_TIMER_CANCELED) {
 			rte_errno = EALREADY;
 			break;
-		} else if (evtims[i]->state != RTE_EVENT_TIMER_ARMED) {
+		} else if (n_state != RTE_EVENT_TIMER_ARMED) {
 			rte_errno = EINVAL;
 			break;
 		}

-		rte_smp_rmb();
-
 		opaque = evtims[i]->impl_opaque[0];
 		timp = (struct rte_timer *)(uintptr_t)opaque;
 		RTE_ASSERT(timp != NULL);
@@ -1166,9 +1182,12 @@ swtim_cancel_burst(const struct rte_event_timer_adapter *adapter,

 		rte_mempool_put(sw->tim_pool, (void **)timp);

-		evtims[i]->state = RTE_EVENT_TIMER_CANCELED;
-
-		rte_smp_wmb();
+		/* The RELEASE ordering here pairs with the ACQUIRE ordering
+		 * above to make sure the state update is observed between
+		 * threads.
+		 */
+		__atomic_store_n(&evtims[i]->state, RTE_EVENT_TIMER_CANCELED,
+				__ATOMIC_RELEASE);
 	}

 	return i;