From patchwork Tue Oct 17 20:31:12 2023
X-Patchwork-Submitter: Tyler Retzlaff
X-Patchwork-Id: 132818
X-Patchwork-Delegate: david.marchand@redhat.com
From: Tyler Retzlaff
To: dev@dpdk.org
Cc: Akhil Goyal, Anatoly Burakov, Andrew Rybchenko, Bruce Richardson,
 Chenbo Xia, Ciara Power, David Christensen, David Hunt, Dmitry Kozlyuk,
 Dmitry Malloy, Elena Agostini, Erik Gabriel Carrillo, Fan Zhang,
 Ferruh Yigit, Harman Kalra, Harry van Haaren, Honnappa Nagarahalli,
 Jerin Jacob, Konstantin Ananyev, Matan Azrad, Maxime Coquelin,
 Narcisa Ana Maria Vasile, Nicolas Chautru, Olivier Matz, Ori Kam,
 Pallavi Kadam, Pavan Nikhilesh, Reshma Pattan, Sameh Gobriel,
 Shijith Thotton, Sivaprasad Tummala, Stephen Hemminger, Suanming Mou,
 Sunil Kumar Kori, Thomas Monjalon, Viacheslav Ovsiienko,
 Vladimir Medvedkin, Yipeng Wang, Tyler Retzlaff
Subject: [PATCH v2 14/19] cryptodev: use rte optional stdatomic API
Date: Tue, 17 Oct 2023 13:31:12 -0700
Message-Id: <1697574677-16578-15-git-send-email-roretzla@linux.microsoft.com>
In-Reply-To: <1697574677-16578-1-git-send-email-roretzla@linux.microsoft.com>
References: <1697497745-20664-1-git-send-email-roretzla@linux.microsoft.com>
 <1697574677-16578-1-git-send-email-roretzla@linux.microsoft.com>

Replace the use of gcc builtin __atomic_xxx intrinsics with the
corresponding rte_atomic_xxx optional stdatomic API.

Signed-off-by: Tyler Retzlaff
---
 lib/cryptodev/rte_cryptodev.c | 22 ++++++++++++----------
 lib/cryptodev/rte_cryptodev.h | 16 ++++++++--------
 2 files changed, 20 insertions(+), 18 deletions(-)

diff --git a/lib/cryptodev/rte_cryptodev.c b/lib/cryptodev/rte_cryptodev.c
index 314710b..b258827 100644
--- a/lib/cryptodev/rte_cryptodev.c
+++ b/lib/cryptodev/rte_cryptodev.c
@@ -1535,12 +1535,12 @@ struct rte_cryptodev_cb *
 		/* Stores to cb->fn and cb->param should complete before
 		 * cb is visible to data plane.
 		 */
-		__atomic_store_n(&tail->next, cb, __ATOMIC_RELEASE);
+		rte_atomic_store_explicit(&tail->next, cb, rte_memory_order_release);
 	} else {
 		/* Stores to cb->fn and cb->param should complete before
 		 * cb is visible to data plane.
 		 */
-		__atomic_store_n(&list->next, cb, __ATOMIC_RELEASE);
+		rte_atomic_store_explicit(&list->next, cb, rte_memory_order_release);
 	}
 	rte_spinlock_unlock(&rte_cryptodev_callback_lock);
@@ -1555,7 +1555,8 @@ struct rte_cryptodev_cb *
 		struct rte_cryptodev_cb *cb)
 {
 	struct rte_cryptodev *dev;
-	struct rte_cryptodev_cb **prev_cb, *curr_cb;
+	RTE_ATOMIC(struct rte_cryptodev_cb *) *prev_cb;
+	struct rte_cryptodev_cb *curr_cb;
 	struct rte_cryptodev_cb_rcu *list;
 	int ret;
@@ -1601,8 +1602,8 @@ struct rte_cryptodev_cb *
 		curr_cb = *prev_cb;
 		if (curr_cb == cb) {
 			/* Remove the user cb from the callback list. */
-			__atomic_store_n(prev_cb, curr_cb->next,
-				__ATOMIC_RELAXED);
+			rte_atomic_store_explicit(prev_cb, curr_cb->next,
+				rte_memory_order_relaxed);
 			ret = 0;
 			break;
 		}
@@ -1673,12 +1674,12 @@ struct rte_cryptodev_cb *
 		/* Stores to cb->fn and cb->param should complete before
 		 * cb is visible to data plane.
 		 */
-		__atomic_store_n(&tail->next, cb, __ATOMIC_RELEASE);
+		rte_atomic_store_explicit(&tail->next, cb, rte_memory_order_release);
 	} else {
 		/* Stores to cb->fn and cb->param should complete before
 		 * cb is visible to data plane.
 		 */
-		__atomic_store_n(&list->next, cb, __ATOMIC_RELEASE);
+		rte_atomic_store_explicit(&list->next, cb, rte_memory_order_release);
 	}
 	rte_spinlock_unlock(&rte_cryptodev_callback_lock);
@@ -1694,7 +1695,8 @@ struct rte_cryptodev_cb *
 		struct rte_cryptodev_cb *cb)
 {
 	struct rte_cryptodev *dev;
-	struct rte_cryptodev_cb **prev_cb, *curr_cb;
+	RTE_ATOMIC(struct rte_cryptodev_cb *) *prev_cb;
+	struct rte_cryptodev_cb *curr_cb;
 	struct rte_cryptodev_cb_rcu *list;
 	int ret;
@@ -1740,8 +1742,8 @@ struct rte_cryptodev_cb *
 		curr_cb = *prev_cb;
 		if (curr_cb == cb) {
 			/* Remove the user cb from the callback list.
 			 */
-			__atomic_store_n(prev_cb, curr_cb->next,
-				__ATOMIC_RELAXED);
+			rte_atomic_store_explicit(prev_cb, curr_cb->next,
+				rte_memory_order_relaxed);
 			ret = 0;
 			break;
 		}
diff --git a/lib/cryptodev/rte_cryptodev.h b/lib/cryptodev/rte_cryptodev.h
index be0698c..9092118 100644
--- a/lib/cryptodev/rte_cryptodev.h
+++ b/lib/cryptodev/rte_cryptodev.h
@@ -979,7 +979,7 @@ struct rte_cryptodev_config {
 * queue pair on enqueue/dequeue.
 */
 struct rte_cryptodev_cb {
-	struct rte_cryptodev_cb *next;
+	RTE_ATOMIC(struct rte_cryptodev_cb *) next;
 	/**< Pointer to next callback */
 	rte_cryptodev_callback_fn fn;
 	/**< Pointer to callback function */
@@ -992,7 +992,7 @@ struct rte_cryptodev_cb {
 * Structure used to hold information about the RCU for a queue pair.
 */
 struct rte_cryptodev_cb_rcu {
-	struct rte_cryptodev_cb *next;
+	RTE_ATOMIC(struct rte_cryptodev_cb *) next;
 	/**< Pointer to next callback */
 	struct rte_rcu_qsbr *qsbr;
 	/**< RCU QSBR variable per queue pair */
@@ -1947,15 +1947,15 @@ int rte_cryptodev_remove_deq_callback(uint8_t dev_id,
 	struct rte_cryptodev_cb_rcu *list;
 	struct rte_cryptodev_cb *cb;

-	/* __ATOMIC_RELEASE memory order was used when the
+	/* rte_memory_order_release memory order was used when the
 	 * call back was inserted into the list.
 	 * Since there is a clear dependency between loading
-	 * cb and cb->fn/cb->next, __ATOMIC_ACQUIRE memory order is
+	 * cb and cb->fn/cb->next, rte_memory_order_acquire memory order is
 	 * not required.
 	 */
 	list = &fp_ops->qp.deq_cb[qp_id];
 	rte_rcu_qsbr_thread_online(list->qsbr, 0);
-	cb = __atomic_load_n(&list->next, __ATOMIC_RELAXED);
+	cb = rte_atomic_load_explicit(&list->next, rte_memory_order_relaxed);

 	while (cb != NULL) {
 		nb_ops = cb->fn(dev_id, qp_id, ops, nb_ops,
@@ -2014,15 +2014,15 @@ int rte_cryptodev_remove_deq_callback(uint8_t dev_id,
 	struct rte_cryptodev_cb_rcu *list;
 	struct rte_cryptodev_cb *cb;

-	/* __ATOMIC_RELEASE memory order was used when the
+	/* rte_memory_order_release memory order was used when the
 	 * call back was inserted into the list.
 	 * Since there is a clear dependency between loading
-	 * cb and cb->fn/cb->next, __ATOMIC_ACQUIRE memory order is
+	 * cb and cb->fn/cb->next, rte_memory_order_acquire memory order is
 	 * not required.
 	 */
 	list = &fp_ops->qp.enq_cb[qp_id];
 	rte_rcu_qsbr_thread_online(list->qsbr, 0);
-	cb = __atomic_load_n(&list->next, __ATOMIC_RELAXED);
+	cb = rte_atomic_load_explicit(&list->next, rte_memory_order_relaxed);

 	while (cb != NULL) {
 		nb_ops = cb->fn(dev_id, qp_id, ops, nb_ops,