From patchwork Mon Oct 16 23:09:00 2023
X-Patchwork-Submitter: Tyler Retzlaff
X-Patchwork-Id: 132671
X-Patchwork-Delegate: david.marchand@redhat.com
From: Tyler Retzlaff
To: dev@dpdk.org
Cc: Akhil Goyal, Anatoly Burakov, Andrew Rybchenko, Bruce Richardson,
    Chenbo Xia, Ciara Power, David Christensen, David Hunt, Dmitry Kozlyuk,
    Dmitry Malloy, Elena Agostini, Erik Gabriel Carrillo, Fan Zhang,
    Ferruh Yigit, Harman Kalra, Harry van Haaren, Honnappa Nagarahalli,
    Jerin Jacob, Konstantin Ananyev, Matan Azrad, Maxime Coquelin,
    Narcisa Ana Maria Vasile, Nicolas Chautru, Olivier Matz, Ori Kam,
    Pallavi Kadam, Pavan Nikhilesh, Reshma Pattan, Sameh Gobriel,
    Shijith Thotton, Sivaprasad Tummala, Stephen Hemminger, Suanming Mou,
    Sunil Kumar
    Kori, Thomas Monjalon, Viacheslav Ovsiienko, Vladimir Medvedkin,
    Yipeng Wang, Tyler Retzlaff
Subject: [PATCH 16/21] cryptodev: use rte optional stdatomic API
Date: Mon, 16 Oct 2023 16:09:00 -0700
Message-Id: <1697497745-20664-17-git-send-email-roretzla@linux.microsoft.com>
In-Reply-To: <1697497745-20664-1-git-send-email-roretzla@linux.microsoft.com>
References: <1697497745-20664-1-git-send-email-roretzla@linux.microsoft.com>
List-Id: DPDK patches and discussions

Replace the use of gcc builtin __atomic_xxx intrinsics with
corresponding rte_atomic_xxx optional stdatomic API

Signed-off-by: Tyler Retzlaff
---
 lib/cryptodev/rte_cryptodev.c | 22 ++++++++++++----------
 lib/cryptodev/rte_cryptodev.h | 16 ++++++++--------
 2 files changed, 20 insertions(+), 18 deletions(-)

diff --git a/lib/cryptodev/rte_cryptodev.c b/lib/cryptodev/rte_cryptodev.c
index 314710b..b258827 100644
--- a/lib/cryptodev/rte_cryptodev.c
+++ b/lib/cryptodev/rte_cryptodev.c
@@ -1535,12 +1535,12 @@ struct rte_cryptodev_cb *
 		/* Stores to cb->fn and cb->param should complete before
 		 * cb is visible to data plane.
 		 */
-		__atomic_store_n(&tail->next, cb, __ATOMIC_RELEASE);
+		rte_atomic_store_explicit(&tail->next, cb, rte_memory_order_release);
 	} else {
 		/* Stores to cb->fn and cb->param should complete before
 		 * cb is visible to data plane.
 		 */
-		__atomic_store_n(&list->next, cb, __ATOMIC_RELEASE);
+		rte_atomic_store_explicit(&list->next, cb, rte_memory_order_release);
 	}

 	rte_spinlock_unlock(&rte_cryptodev_callback_lock);
@@ -1555,7 +1555,8 @@ struct rte_cryptodev_cb *
 		struct rte_cryptodev_cb *cb)
 {
 	struct rte_cryptodev *dev;
-	struct rte_cryptodev_cb **prev_cb, *curr_cb;
+	RTE_ATOMIC(struct rte_cryptodev_cb *) *prev_cb;
+	struct rte_cryptodev_cb *curr_cb;
 	struct rte_cryptodev_cb_rcu *list;
 	int ret;

@@ -1601,8 +1602,8 @@ struct rte_cryptodev_cb *
 		curr_cb = *prev_cb;
 		if (curr_cb == cb) {
 			/* Remove the user cb from the callback list. */
-			__atomic_store_n(prev_cb, curr_cb->next,
-				__ATOMIC_RELAXED);
+			rte_atomic_store_explicit(prev_cb, curr_cb->next,
+				rte_memory_order_relaxed);
 			ret = 0;
 			break;
 		}
@@ -1673,12 +1674,12 @@ struct rte_cryptodev_cb *
 		/* Stores to cb->fn and cb->param should complete before
 		 * cb is visible to data plane.
 		 */
-		__atomic_store_n(&tail->next, cb, __ATOMIC_RELEASE);
+		rte_atomic_store_explicit(&tail->next, cb, rte_memory_order_release);
 	} else {
 		/* Stores to cb->fn and cb->param should complete before
 		 * cb is visible to data plane.
 		 */
-		__atomic_store_n(&list->next, cb, __ATOMIC_RELEASE);
+		rte_atomic_store_explicit(&list->next, cb, rte_memory_order_release);
 	}

 	rte_spinlock_unlock(&rte_cryptodev_callback_lock);
@@ -1694,7 +1695,8 @@ struct rte_cryptodev_cb *
 		struct rte_cryptodev_cb *cb)
 {
 	struct rte_cryptodev *dev;
-	struct rte_cryptodev_cb **prev_cb, *curr_cb;
+	RTE_ATOMIC(struct rte_cryptodev_cb *) *prev_cb;
+	struct rte_cryptodev_cb *curr_cb;
 	struct rte_cryptodev_cb_rcu *list;
 	int ret;

@@ -1740,8 +1742,8 @@ struct rte_cryptodev_cb *
 		curr_cb = *prev_cb;
 		if (curr_cb == cb) {
 			/* Remove the user cb from the callback list. */
-			__atomic_store_n(prev_cb, curr_cb->next,
-				__ATOMIC_RELAXED);
+			rte_atomic_store_explicit(prev_cb, curr_cb->next,
+				rte_memory_order_relaxed);
 			ret = 0;
 			break;
 		}
diff --git a/lib/cryptodev/rte_cryptodev.h b/lib/cryptodev/rte_cryptodev.h
index be0698c..9092118 100644
--- a/lib/cryptodev/rte_cryptodev.h
+++ b/lib/cryptodev/rte_cryptodev.h
@@ -979,7 +979,7 @@ struct rte_cryptodev_config {
  * queue pair on enqueue/dequeue.
  */
 struct rte_cryptodev_cb {
-	struct rte_cryptodev_cb *next;
+	RTE_ATOMIC(struct rte_cryptodev_cb *) next;
 	/**< Pointer to next callback */
 	rte_cryptodev_callback_fn fn;
 	/**< Pointer to callback function */
@@ -992,7 +992,7 @@ struct rte_cryptodev_cb {
 * Structure used to hold information about the RCU for a queue pair.
 */
 struct rte_cryptodev_cb_rcu {
-	struct rte_cryptodev_cb *next;
+	RTE_ATOMIC(struct rte_cryptodev_cb *) next;
 	/**< Pointer to next callback */
 	struct rte_rcu_qsbr *qsbr;
 	/**< RCU QSBR variable per queue pair */
@@ -1947,15 +1947,15 @@ int rte_cryptodev_remove_deq_callback(uint8_t dev_id,
 	struct rte_cryptodev_cb_rcu *list;
 	struct rte_cryptodev_cb *cb;

-	/* __ATOMIC_RELEASE memory order was used when the
+	/* rte_memory_order_release memory order was used when the
 	 * call back was inserted into the list.
 	 * Since there is a clear dependency between loading
-	 * cb and cb->fn/cb->next, __ATOMIC_ACQUIRE memory order is
+	 * cb and cb->fn/cb->next, rte_memory_order_acquire memory order is
 	 * not required.
 	 */
 	list = &fp_ops->qp.deq_cb[qp_id];
 	rte_rcu_qsbr_thread_online(list->qsbr, 0);
-	cb = __atomic_load_n(&list->next, __ATOMIC_RELAXED);
+	cb = rte_atomic_load_explicit(&list->next, rte_memory_order_relaxed);

 	while (cb != NULL) {
 		nb_ops = cb->fn(dev_id, qp_id, ops, nb_ops,
@@ -2014,15 +2014,15 @@ int rte_cryptodev_remove_deq_callback(uint8_t dev_id,
 	struct rte_cryptodev_cb_rcu *list;
 	struct rte_cryptodev_cb *cb;

-	/* __ATOMIC_RELEASE memory order was used when the
+	/* rte_memory_order_release memory order was used when the
 	 * call back was inserted into the list.
 	 * Since there is a clear dependency between loading
-	 * cb and cb->fn/cb->next, __ATOMIC_ACQUIRE memory order is
+	 * cb and cb->fn/cb->next, rte_memory_order_acquire memory order is
 	 * not required.
 	 */
 	list = &fp_ops->qp.enq_cb[qp_id];
 	rte_rcu_qsbr_thread_online(list->qsbr, 0);
-	cb = __atomic_load_n(&list->next, __ATOMIC_RELAXED);
+	cb = rte_atomic_load_explicit(&list->next, rte_memory_order_relaxed);

 	while (cb != NULL) {
 		nb_ops = cb->fn(dev_id, qp_id, ops, nb_ops,