From patchwork Wed Mar 27 22:37:47 2024
X-Patchwork-Submitter: Tyler Retzlaff
X-Patchwork-Id: 138910
X-Patchwork-Delegate: thomas@monjalon.net
From: Tyler Retzlaff
To: dev@dpdk.org
Cc: Mattias Rönnblom, Morten Brørup, Abdullah Sevincer, Ajit Khaparde,
 Alok Prasad, Anatoly Burakov, Andrew Rybchenko, Anoob Joseph,
 Bruce Richardson, Byron Marohn, Chenbo Xia, Chengwen Feng, Ciara Loftus,
 Ciara Power, Dariusz Sosnowski, David Hunt, Devendra Singh Rawat,
 Erik Gabriel Carrillo, Guoyang Zhou, Harman Kalra, Harry van Haaren,
 Honnappa Nagarahalli, Jakub Grajciar, Jerin Jacob, Jeroen de Borst,
 Jian Wang, Jiawen Wu, Jie Hai, Jingjing Wu, Joshua Washington, Joyce Kong,
 Junfeng Guo, Kevin Laatz, Konstantin Ananyev, Liang Ma, Long Li,
 Maciej Czekaj, Matan Azrad, Maxime Coquelin, Nicolas Chautru, Ori Kam,
 Pavan Nikhilesh, Peter Mccarthy, Rahul Lakkireddy, Reshma Pattan, Rosen Xu,
 Ruifeng Wang, Rushil Gupta, Sameh Gobriel, Sivaprasad Tummala,
 Somnath Kotur, Stephen Hemminger, Suanming Mou, Sunil Kumar Kori,
 Sunil Uttarwar, Tetsuya Mukawa, Vamsi Attunuru, Viacheslav Ovsiienko,
 Vladimir Medvedkin, Xiaoyun Wang, Yipeng Wang, Yisen Zhuang, Yuying Zhang,
 Ziyang Xuan, Tyler Retzlaff
Subject: [PATCH v3 34/45] event/dlb2: use rte stdatomic API
Date: Wed, 27 Mar 2024 15:37:47 -0700
Message-Id: <1711579078-10624-35-git-send-email-roretzla@linux.microsoft.com>
In-Reply-To: <1711579078-10624-1-git-send-email-roretzla@linux.microsoft.com>
References: <1710967892-7046-1-git-send-email-roretzla@linux.microsoft.com>
 <1711579078-10624-1-git-send-email-roretzla@linux.microsoft.com>
List-Id: DPDK patches and discussions

Replace the use of gcc builtin __atomic_xxx intrinsics with the
corresponding rte_atomic_xxx optional rte stdatomic API.
Signed-off-by: Tyler Retzlaff
Acked-by: Stephen Hemminger
---
 drivers/event/dlb2/dlb2.c        | 34 +++++++++++++++++-----------------
 drivers/event/dlb2/dlb2_priv.h   | 15 +++++++--------
 drivers/event/dlb2/dlb2_xstats.c |  2 +-
 3 files changed, 25 insertions(+), 26 deletions(-)

diff --git a/drivers/event/dlb2/dlb2.c b/drivers/event/dlb2/dlb2.c
index 628ddef..0b91f03 100644
--- a/drivers/event/dlb2/dlb2.c
+++ b/drivers/event/dlb2/dlb2.c
@@ -1005,7 +1005,7 @@ struct process_local_port_data
 	}
 
 	dlb2->new_event_limit = config->nb_events_limit;
-	__atomic_store_n(&dlb2->inflights, 0, __ATOMIC_SEQ_CST);
+	rte_atomic_store_explicit(&dlb2->inflights, 0, rte_memory_order_seq_cst);
 
 	/* Save number of ports/queues for this event dev */
 	dlb2->num_ports = config->nb_event_ports;
@@ -2668,10 +2668,10 @@ static int dlb2_num_dir_queues_setup(struct dlb2_eventdev *dlb2)
 		batch_size = credits;
 
 	if (likely(credits &&
-		   __atomic_compare_exchange_n(
+		   rte_atomic_compare_exchange_strong_explicit(
 			qm_port->credit_pool[type],
-			&credits, credits - batch_size, false,
-			__ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST)))
+			&credits, credits - batch_size,
+			rte_memory_order_seq_cst, rte_memory_order_seq_cst)))
 		return batch_size;
 	else
 		return 0;
@@ -2687,7 +2687,7 @@ static int dlb2_num_dir_queues_setup(struct dlb2_eventdev *dlb2)
 		/* Replenish credits, saving one quanta for enqueues */
 		uint16_t val = ev_port->inflight_credits - quanta;
 
-		__atomic_fetch_sub(&dlb2->inflights, val, __ATOMIC_SEQ_CST);
+		rte_atomic_fetch_sub_explicit(&dlb2->inflights, val, rte_memory_order_seq_cst);
 		ev_port->inflight_credits -= val;
 	}
 }
@@ -2696,8 +2696,8 @@ static int dlb2_num_dir_queues_setup(struct dlb2_eventdev *dlb2)
 dlb2_check_enqueue_sw_credits(struct dlb2_eventdev *dlb2,
 			      struct dlb2_eventdev_port *ev_port)
 {
-	uint32_t sw_inflights = __atomic_load_n(&dlb2->inflights,
-						__ATOMIC_SEQ_CST);
+	uint32_t sw_inflights = rte_atomic_load_explicit(&dlb2->inflights,
+						rte_memory_order_seq_cst);
 	const int num = 1;
 
 	if (unlikely(ev_port->inflight_max < sw_inflights)) {
@@ -2719,8 +2719,8 @@ static int dlb2_num_dir_queues_setup(struct dlb2_eventdev *dlb2)
 			return 1;
 		}
 
-		__atomic_fetch_add(&dlb2->inflights, credit_update_quanta,
-				   __ATOMIC_SEQ_CST);
+		rte_atomic_fetch_add_explicit(&dlb2->inflights, credit_update_quanta,
+				   rte_memory_order_seq_cst);
 		ev_port->inflight_credits += (credit_update_quanta);
 
 		if (ev_port->inflight_credits < num) {
@@ -3234,17 +3234,17 @@ static int dlb2_num_dir_queues_setup(struct dlb2_eventdev *dlb2)
 	if (qm_port->dlb2->version == DLB2_HW_V2) {
 		qm_port->cached_ldb_credits += num;
 		if (qm_port->cached_ldb_credits >= 2 * batch_size) {
-			__atomic_fetch_add(
+			rte_atomic_fetch_add_explicit(
 				qm_port->credit_pool[DLB2_LDB_QUEUE],
-				batch_size, __ATOMIC_SEQ_CST);
+				batch_size, rte_memory_order_seq_cst);
 			qm_port->cached_ldb_credits -= batch_size;
 		}
 	} else {
 		qm_port->cached_credits += num;
 		if (qm_port->cached_credits >= 2 * batch_size) {
-			__atomic_fetch_add(
+			rte_atomic_fetch_add_explicit(
 				qm_port->credit_pool[DLB2_COMBINED_POOL],
-				batch_size, __ATOMIC_SEQ_CST);
+				batch_size, rte_memory_order_seq_cst);
 			qm_port->cached_credits -= batch_size;
 		}
 	}
@@ -3252,17 +3252,17 @@ static int dlb2_num_dir_queues_setup(struct dlb2_eventdev *dlb2)
 	if (qm_port->dlb2->version == DLB2_HW_V2) {
 		qm_port->cached_dir_credits += num;
 		if (qm_port->cached_dir_credits >= 2 * batch_size) {
-			__atomic_fetch_add(
+			rte_atomic_fetch_add_explicit(
 				qm_port->credit_pool[DLB2_DIR_QUEUE],
-				batch_size, __ATOMIC_SEQ_CST);
+				batch_size, rte_memory_order_seq_cst);
 			qm_port->cached_dir_credits -= batch_size;
 		}
 	} else {
 		qm_port->cached_credits += num;
 		if (qm_port->cached_credits >= 2 * batch_size) {
-			__atomic_fetch_add(
+			rte_atomic_fetch_add_explicit(
 				qm_port->credit_pool[DLB2_COMBINED_POOL],
-				batch_size, __ATOMIC_SEQ_CST);
+				batch_size, rte_memory_order_seq_cst);
 			qm_port->cached_credits -= batch_size;
 		}
 	}
diff --git a/drivers/event/dlb2/dlb2_priv.h b/drivers/event/dlb2/dlb2_priv.h
index 31a3bee..4ff340d 100644
--- a/drivers/event/dlb2/dlb2_priv.h
+++ b/drivers/event/dlb2/dlb2_priv.h
@@ -348,7 +348,7 @@ struct dlb2_port {
 	uint32_t dequeue_depth;
 	enum dlb2_token_pop_mode token_pop_mode;
 	union dlb2_port_config cfg;
-	uint32_t *credit_pool[DLB2_NUM_QUEUE_TYPES]; /* use __atomic builtins */
+	RTE_ATOMIC(uint32_t) *credit_pool[DLB2_NUM_QUEUE_TYPES];
 	union {
 		struct {
 			uint16_t cached_ldb_credits;
@@ -586,7 +586,7 @@ struct dlb2_eventdev {
 	uint32_t xstats_count_mode_dev;
 	uint32_t xstats_count_mode_port;
 	uint32_t xstats_count;
-	uint32_t inflights; /* use __atomic builtins */
+	RTE_ATOMIC(uint32_t) inflights;
 	uint32_t new_event_limit;
 	int max_num_events_override;
 	int num_dir_credits_override;
@@ -623,15 +623,14 @@ struct dlb2_eventdev {
 		struct {
 			uint16_t max_ldb_credits;
 			uint16_t max_dir_credits;
-			/* use __atomic builtins */ /* shared hw cred */
-			uint32_t ldb_credit_pool __rte_cache_aligned;
-			/* use __atomic builtins */ /* shared hw cred */
-			uint32_t dir_credit_pool __rte_cache_aligned;
+			RTE_ATOMIC(uint32_t) ldb_credit_pool
+				__rte_cache_aligned;
+			RTE_ATOMIC(uint32_t) dir_credit_pool
+				__rte_cache_aligned;
 		};
 		struct {
 			uint16_t max_credits;
-			/* use __atomic builtins */ /* shared hw cred */
-			uint32_t credit_pool __rte_cache_aligned;
+			RTE_ATOMIC(uint32_t) credit_pool __rte_cache_aligned;
 		};
 	};
 	uint32_t cos_ports[DLB2_COS_NUM_VALS]; /* total ldb ports in each class */
diff --git a/drivers/event/dlb2/dlb2_xstats.c b/drivers/event/dlb2/dlb2_xstats.c
index ff15271..22094f3 100644
--- a/drivers/event/dlb2/dlb2_xstats.c
+++ b/drivers/event/dlb2/dlb2_xstats.c
@@ -173,7 +173,7 @@ struct dlb2_xstats_entry {
 	case nb_events_limit:
 		return dlb2->new_event_limit;
 	case inflight_events:
-		return __atomic_load_n(&dlb2->inflights, __ATOMIC_SEQ_CST);
+		return rte_atomic_load_explicit(&dlb2->inflights, rte_memory_order_seq_cst);
 	case ldb_pool_size:
 		return dlb2->num_ldb_credits;
 	case dir_pool_size: