From patchwork Wed Mar 27 22:37:22 2024
X-Patchwork-Submitter: Tyler Retzlaff
X-Patchwork-Id: 138886
X-Patchwork-Delegate: thomas@monjalon.net
From: Tyler Retzlaff
To: dev@dpdk.org
Cc: Mattias Rönnblom, Morten Brørup, Abdullah Sevincer, Ajit Khaparde,
 Alok Prasad, Anatoly Burakov, Andrew Rybchenko, Anoob Joseph,
 Bruce Richardson, Byron Marohn, Chenbo Xia, Chengwen Feng, Ciara Loftus,
 Ciara Power, Dariusz Sosnowski, David Hunt, Devendra Singh Rawat,
 Erik Gabriel Carrillo, Guoyang Zhou, Harman Kalra, Harry van Haaren,
 Honnappa Nagarahalli, Jakub Grajciar, Jerin Jacob, Jeroen de Borst,
 Jian Wang, Jiawen Wu, Jie Hai, Jingjing Wu, Joshua Washington, Joyce Kong,
 Junfeng Guo, Kevin Laatz, Konstantin Ananyev, Liang Ma, Long Li,
 Maciej Czekaj, Matan Azrad, Maxime Coquelin, Nicolas Chautru, Ori Kam,
 Pavan Nikhilesh, Peter Mccarthy, Rahul Lakkireddy, Reshma Pattan, Rosen Xu,
 Ruifeng Wang, Rushil Gupta, Sameh Gobriel, Sivaprasad Tummala,
 Somnath Kotur, Stephen Hemminger, Suanming Mou, Sunil Kumar Kori,
 Sunil Uttarwar, Tetsuya Mukawa, Vamsi Attunuru, Viacheslav Ovsiienko,
 Vladimir Medvedkin, Xiaoyun Wang, Yipeng Wang, Yisen Zhuang, Yuying Zhang,
 Ziyang Xuan, Tyler Retzlaff
Subject: [PATCH v3 09/45] net/af_xdp: use rte stdatomic API
Date: Wed, 27 Mar 2024 15:37:22 -0700
Message-Id: <1711579078-10624-10-git-send-email-roretzla@linux.microsoft.com>
In-Reply-To: <1711579078-10624-1-git-send-email-roretzla@linux.microsoft.com>
References: <1710967892-7046-1-git-send-email-roretzla@linux.microsoft.com>
 <1711579078-10624-1-git-send-email-roretzla@linux.microsoft.com>

Replace the use of the gcc builtin __atomic_xxx intrinsics with the
corresponding rte_atomic_xxx calls from the optional rte stdatomic API.
Signed-off-by: Tyler Retzlaff
Acked-by: Stephen Hemminger
---
 drivers/net/af_xdp/rte_eth_af_xdp.c | 20 +++++++++++---------
 1 file changed, 11 insertions(+), 9 deletions(-)

diff --git a/drivers/net/af_xdp/rte_eth_af_xdp.c b/drivers/net/af_xdp/rte_eth_af_xdp.c
index 268a130..4833180 100644
--- a/drivers/net/af_xdp/rte_eth_af_xdp.c
+++ b/drivers/net/af_xdp/rte_eth_af_xdp.c
@@ -116,7 +116,7 @@ struct xsk_umem_info {
 	const struct rte_memzone *mz;
 	struct rte_mempool *mb_pool;
 	void *buffer;
-	uint8_t refcnt;
+	RTE_ATOMIC(uint8_t) refcnt;
 	uint32_t max_xsks;
 };
 
@@ -995,7 +995,8 @@ static int link_xdp_prog_with_dev(int ifindex, int fd, __u32 flags)
 			break;
 		xsk_socket__delete(rxq->xsk);
 
-		if (__atomic_fetch_sub(&rxq->umem->refcnt, 1, __ATOMIC_ACQUIRE) - 1 == 0)
+		if (rte_atomic_fetch_sub_explicit(&rxq->umem->refcnt, 1,
+				rte_memory_order_acquire) - 1 == 0)
 			xdp_umem_destroy(rxq->umem);
 
 		/* free pkt_tx_queue */
@@ -1097,8 +1098,8 @@ static inline uintptr_t get_base_addr(struct rte_mempool *mp, uint64_t *align)
 			ret = -1;
 			goto out;
 		}
-		if (__atomic_load_n(&internals->rx_queues[i].umem->refcnt,
-				    __ATOMIC_ACQUIRE)) {
+		if (rte_atomic_load_explicit(&internals->rx_queues[i].umem->refcnt,
+				    rte_memory_order_acquire)) {
 			*umem = internals->rx_queues[i].umem;
 			goto out;
 		}
@@ -1131,11 +1132,11 @@ xsk_umem_info *xdp_umem_configure(struct pmd_internals *internals,
 			return NULL;
 
 		if (umem != NULL &&
-		    __atomic_load_n(&umem->refcnt, __ATOMIC_ACQUIRE) <
+		    rte_atomic_load_explicit(&umem->refcnt, rte_memory_order_acquire) <
 		    umem->max_xsks) {
 			AF_XDP_LOG(INFO, "%s,qid%i sharing UMEM\n",
 					internals->if_name, rxq->xsk_queue_idx);
-			__atomic_fetch_add(&umem->refcnt, 1, __ATOMIC_ACQUIRE);
+			rte_atomic_fetch_add_explicit(&umem->refcnt, 1, rte_memory_order_acquire);
 		}
 	}
 
@@ -1177,7 +1178,7 @@ xsk_umem_info *xdp_umem_configure(struct pmd_internals *internals,
 					mb_pool->name, umem->max_xsks);
 		}
 
-		__atomic_store_n(&umem->refcnt, 1, __ATOMIC_RELEASE);
+		rte_atomic_store_explicit(&umem->refcnt, 1, rte_memory_order_release);
 	}
 
 	return umem;
@@ -1606,7 +1607,8 @@ struct msg_internal {
 	if (rxq->umem == NULL)
 		return -ENOMEM;
 	txq->umem = rxq->umem;
-	reserve_before = __atomic_load_n(&rxq->umem->refcnt, __ATOMIC_ACQUIRE) <= 1;
+	reserve_before = rte_atomic_load_explicit(&rxq->umem->refcnt,
+			rte_memory_order_acquire) <= 1;
 
 #if defined(XDP_UMEM_UNALIGNED_CHUNK_FLAG)
 	ret = rte_pktmbuf_alloc_bulk(rxq->umem->mb_pool, fq_bufs, reserve_size);
@@ -1723,7 +1725,7 @@ struct msg_internal {
 out_xsk:
 	xsk_socket__delete(rxq->xsk);
 out_umem:
-	if (__atomic_fetch_sub(&rxq->umem->refcnt, 1, __ATOMIC_ACQUIRE) - 1 == 0)
+	if (rte_atomic_fetch_sub_explicit(&rxq->umem->refcnt, 1, rte_memory_order_acquire) - 1 == 0)
 		xdp_umem_destroy(rxq->umem);
 
 	return ret;