From patchwork Tue Oct 13 16:25:37 2020
X-Patchwork-Submitter: Honnappa Nagarahalli
X-Patchwork-Id: 80603
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Honnappa Nagarahalli
To: dev@dpdk.org, honnappa.nagarahalli@arm.com, phil.yang@arm.com,
	thomas@monjalon.net, arybchenko@solarflare.com, ferruh.yigit@intel.com,
	konstantin.ananyev@intel.com, jerinj@marvell.com, tgw_team@tencent.com
Cc: abhinandan.gujjar@intel.com, nd@arm.com, bruce.richardson@intel.com,
	john.mcnamara@intel.com, reshma.pattan@intel.com, stable@dpdk.org
Date: Tue, 13 Oct 2020 11:25:37 -0500
Message-Id: <20201013162537.24029-2-honnappa.nagarahalli@arm.com>
In-Reply-To: <20201013162537.24029-1-honnappa.nagarahalli@arm.com>
References: <20201002000711.41511-1-honnappa.nagarahalli@arm.com>
	<20201013162537.24029-1-honnappa.nagarahalli@arm.com>
Subject: [dpdk-dev] [PATCH v2 2/2] lib/ethdev: fix memory ordering for call back functions
List-Id: DPDK patches and discussions

Callback functions are registered on the control plane and accessed
from the data plane. Hence, correct memory orderings should be used to
avoid race conditions.

Fixes: 4dc294158cac ("ethdev: support optional Rx and Tx callbacks")
Fixes: c8231c63ddcb ("ethdev: insert Rx callback as head of list")
Cc: bruce.richardson@intel.com
Cc: john.mcnamara@intel.com
Cc: reshma.pattan@intel.com
Cc: stable@dpdk.org

Signed-off-by: Honnappa Nagarahalli
Reviewed-by: Ola Liljedahl
Reviewed-by: Phil Yang
Acked-by: Konstantin Ananyev
---
v2
 - load the callback function pointer using __atomic_load_n

 lib/librte_ethdev/rte_ethdev.c | 28 ++++++++++++++++++-----
 lib/librte_ethdev/rte_ethdev.h | 42 ++++++++++++++++++++++++++--------
 2 files changed, 55 insertions(+), 15 deletions(-)

diff --git a/lib/librte_ethdev/rte_ethdev.c b/lib/librte_ethdev/rte_ethdev.c
index 59a41c07f..d89fcdc77 100644
--- a/lib/librte_ethdev/rte_ethdev.c
+++ b/lib/librte_ethdev/rte_ethdev.c
@@ -4486,12 +4486,20 @@ rte_eth_add_rx_callback(uint16_t port_id, uint16_t queue_id,
 		rte_eth_devices[port_id].post_rx_burst_cbs[queue_id];
 
 	if (!tail) {
-		rte_eth_devices[port_id].post_rx_burst_cbs[queue_id] = cb;
+		/* Stores to cb->fn and cb->param should complete before
+		 * cb is visible to data plane.
+		 */
+		__atomic_store_n(
+			&rte_eth_devices[port_id].post_rx_burst_cbs[queue_id],
+			cb, __ATOMIC_RELEASE);
 	} else {
 		while (tail->next)
 			tail = tail->next;
-		tail->next = cb;
+		/* Stores to cb->fn and cb->param should complete before
+		 * cb is visible to data plane.
+		 */
+		__atomic_store_n(&tail->next, cb, __ATOMIC_RELEASE);
 	}
 	rte_spinlock_unlock(&rte_eth_rx_cb_lock);
 
@@ -4576,12 +4584,20 @@ rte_eth_add_tx_callback(uint16_t port_id, uint16_t queue_id,
 		rte_eth_devices[port_id].pre_tx_burst_cbs[queue_id];
 
 	if (!tail) {
-		rte_eth_devices[port_id].pre_tx_burst_cbs[queue_id] = cb;
+		/* Stores to cb->fn and cb->param should complete before
+		 * cb is visible to data plane.
+		 */
+		__atomic_store_n(
+			&rte_eth_devices[port_id].pre_tx_burst_cbs[queue_id],
+			cb, __ATOMIC_RELEASE);
 	} else {
 		while (tail->next)
 			tail = tail->next;
-		tail->next = cb;
+		/* Stores to cb->fn and cb->param should complete before
+		 * cb is visible to data plane.
+		 */
+		__atomic_store_n(&tail->next, cb, __ATOMIC_RELEASE);
 	}
 	rte_spinlock_unlock(&rte_eth_tx_cb_lock);
 
@@ -4612,7 +4628,7 @@ rte_eth_remove_rx_callback(uint16_t port_id, uint16_t queue_id,
 		cb = *prev_cb;
 		if (cb == user_cb) {
 			/* Remove the user cb from the callback list. */
-			*prev_cb = cb->next;
+			__atomic_store_n(prev_cb, cb->next, __ATOMIC_RELAXED);
 			ret = 0;
 			break;
 		}
@@ -4646,7 +4662,7 @@ rte_eth_remove_tx_callback(uint16_t port_id, uint16_t queue_id,
 		cb = *prev_cb;
 		if (cb == user_cb) {
 			/* Remove the user cb from the callback list. */
-			*prev_cb = cb->next;
+			__atomic_store_n(prev_cb, cb->next, __ATOMIC_RELAXED);
 			ret = 0;
 			break;
 		}
diff --git a/lib/librte_ethdev/rte_ethdev.h b/lib/librte_ethdev/rte_ethdev.h
index 70295d7ab..6ad2a549b 100644
--- a/lib/librte_ethdev/rte_ethdev.h
+++ b/lib/librte_ethdev/rte_ethdev.h
@@ -3734,7 +3734,8 @@ struct rte_eth_rxtx_callback;
  *   The callback function
  * @param user_param
  *   A generic pointer parameter which will be passed to each invocation of the
- *   callback function on this port and queue.
+ *   callback function on this port and queue. Inter-thread synchronization
+ *   of any user data changes is the responsibility of the user.
  *
  * @return
  *   NULL on error.
@@ -3763,7 +3764,8 @@ rte_eth_add_rx_callback(uint16_t port_id, uint16_t queue_id,
 *   The callback function
 * @param user_param
 *   A generic pointer parameter which will be passed to each invocation of the
- *   callback function on this port and queue.
+ *   callback function on this port and queue. Inter-thread synchronization
+ *   of any user data changes is the responsibility of the user.
 *
 * @return
 *   NULL on error.
@@ -3791,7 +3793,8 @@ rte_eth_add_first_rx_callback(uint16_t port_id, uint16_t queue_id,
 *   The callback function
 * @param user_param
 *   A generic pointer parameter which will be passed to each invocation of the
- *   callback function on this port and queue.
+ *   callback function on this port and queue. Inter-thread synchronization
+ *   of any user data changes is the responsibility of the user.
 *
 * @return
 *   NULL on error.
@@ -3816,7 +3819,9 @@ rte_eth_add_tx_callback(uint16_t port_id, uint16_t queue_id,
 *   on that queue.
 *
 * - After a short delay - where the delay is sufficient to allow any
- *   in-flight callbacks to complete.
+ *   in-flight callbacks to complete. Alternately, the RCU mechanism can be
+ *   used to detect when data plane threads have ceased referencing the
+ *   callback memory.
 *
 * @param port_id
 *   The port identifier of the Ethernet device.
@@ -3849,7 +3854,9 @@ int rte_eth_remove_rx_callback(uint16_t port_id, uint16_t queue_id,
 *   on that queue.
 *
 * - After a short delay - where the delay is sufficient to allow any
- *   in-flight callbacks to complete.
+ *   in-flight callbacks to complete. Alternately, the RCU mechanism can be
+ *   used to detect when data plane threads have ceased referencing the
+ *   callback memory.
 *
 * @param port_id
 *   The port identifier of the Ethernet device.
@@ -4510,10 +4517,18 @@ rte_eth_rx_burst(uint16_t port_id, uint16_t queue_id,
 				     rx_pkts, nb_pkts);
 
 #ifdef RTE_ETHDEV_RXTX_CALLBACKS
-	if (unlikely(dev->post_rx_burst_cbs[queue_id] != NULL)) {
-		struct rte_eth_rxtx_callback *cb =
-				dev->post_rx_burst_cbs[queue_id];
+	struct rte_eth_rxtx_callback *cb;
+
+	/* __ATOMIC_RELEASE memory order was used when the
+	 * call back was inserted into the list.
+	 * Since there is a clear dependency between loading
+	 * cb and cb->fn/cb->next, __ATOMIC_ACQUIRE memory order is
+	 * not required.
+	 */
+	cb = __atomic_load_n(&dev->post_rx_burst_cbs[queue_id],
+			__ATOMIC_RELAXED);
+
+	if (unlikely(cb != NULL)) {
 		do {
 			nb_rx = cb->fn.rx(port_id, queue_id, rx_pkts, nb_rx,
 					nb_pkts, cb->param);
@@ -4775,7 +4790,16 @@ rte_eth_tx_burst(uint16_t port_id, uint16_t queue_id,
 #endif
 
 #ifdef RTE_ETHDEV_RXTX_CALLBACKS
-	struct rte_eth_rxtx_callback *cb = dev->pre_tx_burst_cbs[queue_id];
+	struct rte_eth_rxtx_callback *cb;
+
+	/* __ATOMIC_RELEASE memory order was used when the
+	 * call back was inserted into the list.
+	 * Since there is a clear dependency between loading
+	 * cb and cb->fn/cb->next, __ATOMIC_ACQUIRE memory order is
+	 * not required.
+	 */
+	cb = __atomic_load_n(&dev->pre_tx_burst_cbs[queue_id],
+			__ATOMIC_RELAXED);
 
 	if (unlikely(cb != NULL)) {
 		do {