From patchwork Mon Oct 16 23:08:56 2023
X-Patchwork-Submitter: Tyler Retzlaff
X-Patchwork-Id: 132676
X-Patchwork-Delegate: david.marchand@redhat.com
From: Tyler Retzlaff <roretzla@linux.microsoft.com>
To: dev@dpdk.org
Cc: Akhil Goyal, Anatoly Burakov, Andrew Rybchenko, Bruce Richardson,
 Chenbo Xia, Ciara Power, David Christensen, David Hunt, Dmitry Kozlyuk,
 Dmitry Malloy, Elena Agostini, Erik Gabriel Carrillo, Fan Zhang,
 Ferruh Yigit, Harman Kalra, Harry van Haaren, Honnappa Nagarahalli,
 Jerin Jacob, Konstantin Ananyev, Matan Azrad, Maxime Coquelin,
 Narcisa Ana Maria Vasile, Nicolas Chautru, Olivier Matz, Ori Kam,
 Pallavi Kadam, Pavan Nikhilesh, Reshma Pattan, Sameh Gobriel,
 Shijith Thotton, Sivaprasad Tummala, Stephen Hemminger, Suanming Mou,
 Sunil Kumar Kori, Thomas Monjalon, Viacheslav Ovsiienko,
 Vladimir Medvedkin, Yipeng Wang, Tyler Retzlaff
Subject: [PATCH 12/21] pdump: use rte optional stdatomic API
Date: Mon, 16 Oct 2023 16:08:56 -0700
Message-Id: <1697497745-20664-13-git-send-email-roretzla@linux.microsoft.com>
In-Reply-To: <1697497745-20664-1-git-send-email-roretzla@linux.microsoft.com>
References: <1697497745-20664-1-git-send-email-roretzla@linux.microsoft.com>
List-Id: DPDK patches and discussions

Replace the use of the gcc builtin __atomic_xxx intrinsics with the
corresponding rte_atomic_xxx optional stdatomic API.

Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
---
 lib/pdump/rte_pdump.c | 14 +++++++-------
 lib/pdump/rte_pdump.h |  8 ++++----
 2 files changed, 11 insertions(+), 11 deletions(-)

diff --git a/lib/pdump/rte_pdump.c b/lib/pdump/rte_pdump.c
index 53cca10..80b90c6 100644
--- a/lib/pdump/rte_pdump.c
+++ b/lib/pdump/rte_pdump.c
@@ -110,8 +110,8 @@ struct pdump_response {
 		 * then packet doesn't match the filter (will be ignored).
 		 */
 		if (cbs->filter && rcs[i] == 0) {
-			__atomic_fetch_add(&stats->filtered,
-					1, __ATOMIC_RELAXED);
+			rte_atomic_fetch_add_explicit(&stats->filtered,
+					1, rte_memory_order_relaxed);
 			continue;
 		}
 
@@ -127,18 +127,18 @@ struct pdump_response {
 
 		p = rte_pktmbuf_copy(pkts[i], mp, 0, cbs->snaplen);
 		if (unlikely(p == NULL))
-			__atomic_fetch_add(&stats->nombuf, 1, __ATOMIC_RELAXED);
+			rte_atomic_fetch_add_explicit(&stats->nombuf, 1, rte_memory_order_relaxed);
 		else
 			dup_bufs[d_pkts++] = p;
 	}
 
-	__atomic_fetch_add(&stats->accepted, d_pkts, __ATOMIC_RELAXED);
+	rte_atomic_fetch_add_explicit(&stats->accepted, d_pkts, rte_memory_order_relaxed);
 
 	ring_enq = rte_ring_enqueue_burst(ring, (void *)&dup_bufs[0], d_pkts, NULL);
 	if (unlikely(ring_enq < d_pkts)) {
 		unsigned int drops = d_pkts - ring_enq;
 
-		__atomic_fetch_add(&stats->ringfull, drops, __ATOMIC_RELAXED);
+		rte_atomic_fetch_add_explicit(&stats->ringfull, drops, rte_memory_order_relaxed);
 		rte_pktmbuf_free_bulk(&dup_bufs[ring_enq], drops);
 	}
 }
@@ -720,10 +720,10 @@ struct pdump_response {
 	uint16_t qid;
 
 	for (qid = 0; qid < nq; qid++) {
-		const uint64_t *perq = (const uint64_t *)&stats[port][qid];
+		const RTE_ATOMIC(uint64_t) *perq = (const uint64_t __rte_atomic *)&stats[port][qid];
 
 		for (i = 0; i < sizeof(*total) / sizeof(uint64_t); i++) {
-			val = __atomic_load_n(&perq[i], __ATOMIC_RELAXED);
+			val = rte_atomic_load_explicit(&perq[i], rte_memory_order_relaxed);
 			sum[i] += val;
 		}
 	}
diff --git a/lib/pdump/rte_pdump.h b/lib/pdump/rte_pdump.h
index b1a3918..7feb2b6 100644
--- a/lib/pdump/rte_pdump.h
+++ b/lib/pdump/rte_pdump.h
@@ -233,10 +233,10 @@ enum {
  * The statistics are sum of both receive and transmit queues.
  */
 struct rte_pdump_stats {
-	uint64_t accepted; /**< Number of packets accepted by filter. */
-	uint64_t filtered; /**< Number of packets rejected by filter. */
-	uint64_t nombuf; /**< Number of mbuf allocation failures. */
-	uint64_t ringfull; /**< Number of missed packets due to ring full. */
+	RTE_ATOMIC(uint64_t) accepted; /**< Number of packets accepted by filter. */
+	RTE_ATOMIC(uint64_t) filtered; /**< Number of packets rejected by filter. */
+	RTE_ATOMIC(uint64_t) nombuf; /**< Number of mbuf allocation failures. */
+	RTE_ATOMIC(uint64_t) ringfull; /**< Number of missed packets due to ring full. */
 	uint64_t reserved[4]; /**< Reserved and pad to cache line */
 };