From patchwork Tue Oct 17 20:31:05 2023
X-Patchwork-Submitter: Tyler Retzlaff
X-Patchwork-Id: 132811
X-Patchwork-Delegate: david.marchand@redhat.com
From: Tyler Retzlaff
To: dev@dpdk.org
Cc: Akhil Goyal, Anatoly Burakov, Andrew Rybchenko, Bruce Richardson,
 Chenbo Xia, Ciara Power, David Christensen, David Hunt, Dmitry Kozlyuk,
 Dmitry Malloy, Elena Agostini, Erik Gabriel Carrillo, Fan Zhang,
 Ferruh Yigit, Harman Kalra, Harry van Haaren, Honnappa Nagarahalli,
 Jerin Jacob, Konstantin Ananyev, Matan Azrad, Maxime Coquelin,
 Narcisa Ana Maria Vasile, Nicolas Chautru, Olivier Matz, Ori Kam,
 Pallavi Kadam, Pavan Nikhilesh, Reshma Pattan, Sameh Gobriel,
 Shijith Thotton, Sivaprasad Tummala, Stephen Hemminger, Suanming Mou,
 Sunil Kumar Kori, Thomas Monjalon, Viacheslav Ovsiienko,
 Vladimir Medvedkin, Yipeng Wang, Tyler Retzlaff
Subject: [PATCH v2 07/19] mbuf: use rte optional stdatomic API
Date: Tue, 17 Oct 2023 13:31:05 -0700
Message-Id: <1697574677-16578-8-git-send-email-roretzla@linux.microsoft.com>
In-Reply-To: <1697574677-16578-1-git-send-email-roretzla@linux.microsoft.com>
References: <1697497745-20664-1-git-send-email-roretzla@linux.microsoft.com>
 <1697574677-16578-1-git-send-email-roretzla@linux.microsoft.com>
List-Id: DPDK patches and discussions

Replace the use of the gcc builtin __atomic_xxx intrinsics with the
corresponding rte_atomic_xxx optional stdatomic API.

Signed-off-by: Tyler Retzlaff
Acked-by: Konstantin Ananyev
---
 lib/mbuf/rte_mbuf.h      | 20 ++++++++++----------
 lib/mbuf/rte_mbuf_core.h |  5 +++--
 2 files changed, 13 insertions(+), 12 deletions(-)

diff --git a/lib/mbuf/rte_mbuf.h b/lib/mbuf/rte_mbuf.h
index 913c459..b8ab477 100644
--- a/lib/mbuf/rte_mbuf.h
+++ b/lib/mbuf/rte_mbuf.h
@@ -361,7 +361,7 @@ struct rte_pktmbuf_pool_private {
 static inline uint16_t
 rte_mbuf_refcnt_read(const struct rte_mbuf *m)
 {
-	return __atomic_load_n(&m->refcnt, __ATOMIC_RELAXED);
+	return rte_atomic_load_explicit(&m->refcnt, rte_memory_order_relaxed);
 }
 
 /**
@@ -374,15 +374,15 @@ struct rte_pktmbuf_pool_private {
 static inline void
 rte_mbuf_refcnt_set(struct rte_mbuf *m, uint16_t new_value)
 {
-	__atomic_store_n(&m->refcnt, new_value, __ATOMIC_RELAXED);
+	rte_atomic_store_explicit(&m->refcnt, new_value, rte_memory_order_relaxed);
 }
 
 /* internal */
 static inline uint16_t
 __rte_mbuf_refcnt_update(struct rte_mbuf *m, int16_t value)
 {
-	return __atomic_fetch_add(&m->refcnt, value,
-			__ATOMIC_ACQ_REL) + value;
+	return rte_atomic_fetch_add_explicit(&m->refcnt, value,
+			rte_memory_order_acq_rel) + value;
 }
 
 /**
@@ -463,7 +463,7 @@ struct rte_pktmbuf_pool_private {
 static inline uint16_t
 rte_mbuf_ext_refcnt_read(const struct rte_mbuf_ext_shared_info *shinfo)
 {
-	return __atomic_load_n(&shinfo->refcnt, __ATOMIC_RELAXED);
+	return rte_atomic_load_explicit(&shinfo->refcnt, rte_memory_order_relaxed);
 }
 
 /**
@@ -478,7 +478,7 @@ struct rte_pktmbuf_pool_private {
 rte_mbuf_ext_refcnt_set(struct rte_mbuf_ext_shared_info *shinfo,
 	uint16_t new_value)
 {
-	__atomic_store_n(&shinfo->refcnt, new_value, __ATOMIC_RELAXED);
+	rte_atomic_store_explicit(&shinfo->refcnt, new_value, rte_memory_order_relaxed);
 }
 
 /**
@@ -502,8 +502,8 @@ struct rte_pktmbuf_pool_private {
 		return (uint16_t)value;
 	}
 
-	return __atomic_fetch_add(&shinfo->refcnt, value,
-			__ATOMIC_ACQ_REL) + value;
+	return rte_atomic_fetch_add_explicit(&shinfo->refcnt, value,
+			rte_memory_order_acq_rel) + value;
 }
 
 /** Mbuf prefetch */
@@ -1315,8 +1315,8 @@ static inline int __rte_pktmbuf_pinned_extbuf_decref(struct rte_mbuf *m)
 	 * Direct usage of add primitive to avoid
 	 * duplication of comparing with one.
 	 */
-	if (likely(__atomic_fetch_add(&shinfo->refcnt, -1,
-			__ATOMIC_ACQ_REL) - 1))
+	if (likely(rte_atomic_fetch_add_explicit(&shinfo->refcnt, -1,
+			rte_memory_order_acq_rel) - 1))
 		return 1;
 
 	/* Reinitialize counter before mbuf freeing. */
diff --git a/lib/mbuf/rte_mbuf_core.h b/lib/mbuf/rte_mbuf_core.h
index e9bc0d1..5688683 100644
--- a/lib/mbuf/rte_mbuf_core.h
+++ b/lib/mbuf/rte_mbuf_core.h
@@ -19,6 +19,7 @@
 #include <stdint.h>
 
 #include <rte_byteorder.h>
+#include <rte_stdatomic.h>
 
 #ifdef __cplusplus
 extern "C" {
@@ -497,7 +498,7 @@ struct rte_mbuf {
 	 * rte_mbuf_refcnt_set(). The functionality of these functions (atomic,
 	 * or non-atomic) is controlled by the RTE_MBUF_REFCNT_ATOMIC flag.
 	 */
-	uint16_t refcnt;
+	RTE_ATOMIC(uint16_t) refcnt;
 
 	/**
 	 * Number of segments. Only valid for the first segment of an mbuf
@@ -674,7 +675,7 @@ struct rte_mbuf {
 struct rte_mbuf_ext_shared_info {
 	rte_mbuf_extbuf_free_callback_t free_cb; /**< Free callback function */
 	void *fcb_opaque;                        /**< Free callback argument */
-	uint16_t refcnt;
+	RTE_ATOMIC(uint16_t) refcnt;
 };
 
 /** Maximum number of nb_segs allowed. */
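
For reference, a minimal standalone sketch of the pattern this patch applies
(it is not part of the patch; struct refcnt_demo, demo_refcnt_read and
demo_refcnt_update are hypothetical names used only for illustration). It
shows a counter declared with RTE_ATOMIC() being read with a relaxed load and
updated with an acquire-release fetch-add through the rte_atomic_*_explicit
API from <rte_stdatomic.h>, mirroring rte_mbuf_refcnt_read() and
__rte_mbuf_refcnt_update() above. Depending on how DPDK is built, these
wrappers expand either to standard C11 atomics or to the compiler's
__atomic builtins.

/* Hypothetical demo of the rte_atomic_* optional stdatomic API;
 * only the RTE_ATOMIC()/rte_atomic_*_explicit usage mirrors the patch.
 */
#include <stdint.h>
#include <stdio.h>

#include <rte_stdatomic.h>

struct refcnt_demo {
	RTE_ATOMIC(uint16_t) refcnt; /* same declaration style as rte_mbuf::refcnt */
};

/* Relaxed load, as in rte_mbuf_refcnt_read(). */
static inline uint16_t
demo_refcnt_read(const struct refcnt_demo *d)
{
	return rte_atomic_load_explicit(&d->refcnt, rte_memory_order_relaxed);
}

/* Acquire-release fetch-add returning the new value, as in
 * __rte_mbuf_refcnt_update().
 */
static inline uint16_t
demo_refcnt_update(struct refcnt_demo *d, int16_t value)
{
	return rte_atomic_fetch_add_explicit(&d->refcnt, value,
			rte_memory_order_acq_rel) + value;
}

int
main(void)
{
	struct refcnt_demo d;

	/* Relaxed store, as in rte_mbuf_refcnt_set(). */
	rte_atomic_store_explicit(&d.refcnt, 1, rte_memory_order_relaxed);
	printf("after init: %u\n", demo_refcnt_read(&d));        /* 1 */
	printf("after +1:   %u\n", demo_refcnt_update(&d, 1));   /* 2 */
	printf("after -1:   %u\n", demo_refcnt_update(&d, -1));  /* 1 */
	return 0;
}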