From patchwork Wed Mar 27 22:37:16 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Tyler Retzlaff
X-Patchwork-Id: 138879
X-Patchwork-Delegate: thomas@monjalon.net
From: Tyler Retzlaff
To: dev@dpdk.org
Cc: Mattias Rönnblom, Morten Brørup, Abdullah Sevincer, Ajit Khaparde,
 Alok Prasad, Anatoly Burakov, Andrew Rybchenko, Anoob Joseph,
 Bruce Richardson, Byron Marohn, Chenbo Xia, Chengwen Feng, Ciara Loftus,
 Ciara Power, Dariusz Sosnowski, David Hunt, Devendra Singh Rawat,
 Erik Gabriel Carrillo, Guoyang Zhou, Harman Kalra, Harry van Haaren,
 Honnappa Nagarahalli, Jakub Grajciar, Jerin Jacob, Jeroen de Borst,
 Jian Wang, Jiawen Wu, Jie Hai, Jingjing Wu, Joshua Washington, Joyce Kong,
 Junfeng Guo, Kevin Laatz, Konstantin Ananyev, Liang Ma, Long Li,
 Maciej Czekaj, Matan Azrad, Maxime Coquelin, Nicolas Chautru, Ori Kam,
 Pavan Nikhilesh, Peter Mccarthy, Rahul Lakkireddy, Reshma Pattan, Rosen Xu,
 Ruifeng Wang, Rushil Gupta, Sameh Gobriel, Sivaprasad Tummala,
 Somnath Kotur, Stephen Hemminger, Suanming Mou, Sunil Kumar Kori,
 Sunil Uttarwar, Tetsuya Mukawa, Vamsi Attunuru, Viacheslav Ovsiienko,
 Vladimir Medvedkin, Xiaoyun Wang, Yipeng Wang, Yisen Zhuang, Yuying Zhang,
 Ziyang Xuan, Tyler Retzlaff
Subject: [PATCH v3 03/45] net/iavf: use rte stdatomic API
Date: Wed, 27 Mar 2024 15:37:16 -0700
Message-Id: <1711579078-10624-4-git-send-email-roretzla@linux.microsoft.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1711579078-10624-1-git-send-email-roretzla@linux.microsoft.com>
References: <1710967892-7046-1-git-send-email-roretzla@linux.microsoft.com>
 <1711579078-10624-1-git-send-email-roretzla@linux.microsoft.com>
List-Id: DPDK patches and discussions

Replace the use of the gcc builtin __atomic_xxx intrinsics with the
corresponding rte_atomic_xxx calls of the optional rte stdatomic API.
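The conversion pattern is mechanical; as a rough sketch (illustrative
only, not part of the diff; `cnt' is a hypothetical variable):

    #include <rte_stdatomic.h>

    /* before (gcc builtin, shown for contrast):
     *     static uint32_t cnt;
     *     __atomic_fetch_add(&cnt, 1, __ATOMIC_RELAXED);
     */

    /* after: the object is marked with RTE_ATOMIC() and manipulated
     * through the rte_atomic_*_explicit API with rte_memory_order_*
     * ordering tokens
     */
    static RTE_ATOMIC(uint32_t) cnt;

    static inline void
    bump(void)
    {
        rte_atomic_fetch_add_explicit(&cnt, 1, rte_memory_order_relaxed);
    }

One API-shape difference worth noting: __atomic_compare_exchange() takes
a pointer to the desired value plus a weak flag, while
rte_atomic_compare_exchange_strong_explicit() takes the desired value
directly and encodes strong/weak in the function name, hence the
dropped `&' and `0' arguments in the hunks below.
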
Signed-off-by: Tyler Retzlaff
Acked-by: Stephen Hemminger
---
 drivers/net/iavf/iavf.h               | 16 ++++++++--------
 drivers/net/iavf/iavf_rxtx.c          |  4 ++--
 drivers/net/iavf/iavf_rxtx_vec_neon.c |  2 +-
 drivers/net/iavf/iavf_vchnl.c         | 14 +++++++-------
 4 files changed, 18 insertions(+), 18 deletions(-)

diff --git a/drivers/net/iavf/iavf.h b/drivers/net/iavf/iavf.h
index 824ae4a..6b977e5 100644
--- a/drivers/net/iavf/iavf.h
+++ b/drivers/net/iavf/iavf.h
@@ -238,8 +238,8 @@ struct iavf_info {
 	struct virtchnl_vlan_caps vlan_v2_caps;
 	uint64_t supported_rxdid;
 	uint8_t *proto_xtr; /* proto xtr type for all queues */
-	volatile enum virtchnl_ops pend_cmd; /* pending command not finished */
-	uint32_t pend_cmd_count;
+	volatile RTE_ATOMIC(enum virtchnl_ops) pend_cmd; /* pending command not finished */
+	RTE_ATOMIC(uint32_t) pend_cmd_count;
 	int cmd_retval; /* return value of the cmd response from PF */
 	uint8_t *aq_resp; /* buffer to store the adminq response from PF */

@@ -456,13 +456,13 @@ struct iavf_cmd_info {
 _atomic_set_cmd(struct iavf_info *vf, enum virtchnl_ops ops)
 {
 	enum virtchnl_ops op_unk = VIRTCHNL_OP_UNKNOWN;
-	int ret = __atomic_compare_exchange(&vf->pend_cmd, &op_unk, &ops,
-		0, __ATOMIC_ACQUIRE, __ATOMIC_ACQUIRE);
+	int ret = rte_atomic_compare_exchange_strong_explicit(&vf->pend_cmd, &op_unk, ops,
+		rte_memory_order_acquire, rte_memory_order_acquire);

 	if (!ret)
 		PMD_DRV_LOG(ERR, "There is incomplete cmd %d", vf->pend_cmd);

-	__atomic_store_n(&vf->pend_cmd_count, 1, __ATOMIC_RELAXED);
+	rte_atomic_store_explicit(&vf->pend_cmd_count, 1, rte_memory_order_relaxed);

 	return !ret;
 }
@@ -472,13 +472,13 @@ struct iavf_cmd_info {
 _atomic_set_async_response_cmd(struct iavf_info *vf, enum virtchnl_ops ops)
 {
 	enum virtchnl_ops op_unk = VIRTCHNL_OP_UNKNOWN;
-	int ret = __atomic_compare_exchange(&vf->pend_cmd, &op_unk, &ops,
-		0, __ATOMIC_ACQUIRE, __ATOMIC_ACQUIRE);
+	int ret = rte_atomic_compare_exchange_strong_explicit(&vf->pend_cmd, &op_unk, ops,
+		rte_memory_order_acquire, rte_memory_order_acquire);

 	if (!ret)
 		PMD_DRV_LOG(ERR, "There is incomplete cmd %d", vf->pend_cmd);

-	__atomic_store_n(&vf->pend_cmd_count, 2, __ATOMIC_RELAXED);
+	rte_atomic_store_explicit(&vf->pend_cmd_count, 2, rte_memory_order_relaxed);

 	return !ret;
 }
diff --git a/drivers/net/iavf/iavf_rxtx.c b/drivers/net/iavf/iavf_rxtx.c
index 0a5246d..d1d4e9f 100644
--- a/drivers/net/iavf/iavf_rxtx.c
+++ b/drivers/net/iavf/iavf_rxtx.c
@@ -2025,7 +2025,7 @@ struct iavf_txq_ops iavf_txq_release_mbufs_ops[] = {
 			s[j] = rte_le_to_cpu_16(rxdp[j].wb.status_error0);

 		/* This barrier is to order loads of different words in the descriptor */
-		rte_atomic_thread_fence(__ATOMIC_ACQUIRE);
+		rte_atomic_thread_fence(rte_memory_order_acquire);

 		/* Compute how many contiguous DD bits were set */
 		for (j = 0, nb_dd = 0; j < IAVF_LOOK_AHEAD; j++) {
@@ -2152,7 +2152,7 @@ struct iavf_txq_ops iavf_txq_release_mbufs_ops[] = {
 		}

 		/* This barrier is to order loads of different words in the descriptor */
-		rte_atomic_thread_fence(__ATOMIC_ACQUIRE);
+		rte_atomic_thread_fence(rte_memory_order_acquire);

 		/* Compute how many contiguous DD bits were set */
 		for (j = 0, nb_dd = 0; j < IAVF_LOOK_AHEAD; j++) {
diff --git a/drivers/net/iavf/iavf_rxtx_vec_neon.c b/drivers/net/iavf/iavf_rxtx_vec_neon.c
index 83825aa..20b656e 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_neon.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_neon.c
@@ -273,7 +273,7 @@
 		descs[0] = vld1q_u64((uint64_t *)(rxdp));

 		/* Use acquire fence to order loads of descriptor qwords */
-		rte_atomic_thread_fence(__ATOMIC_ACQUIRE);
+		rte_atomic_thread_fence(rte_memory_order_acquire);
 		/* A.2 reload qword0 to make it ordered after qword1 load */
 		descs[3] = vld1q_lane_u64((uint64_t *)(rxdp + 3), descs[3], 0);
 		descs[2] = vld1q_lane_u64((uint64_t *)(rxdp + 2), descs[2], 0);
diff --git a/drivers/net/iavf/iavf_vchnl.c b/drivers/net/iavf/iavf_vchnl.c
index 1111d30..6d5969f 100644
--- a/drivers/net/iavf/iavf_vchnl.c
+++ b/drivers/net/iavf/iavf_vchnl.c
@@ -41,7 +41,7 @@ struct iavf_event_element {
 };

 struct iavf_event_handler {
-	uint32_t ndev;
+	RTE_ATOMIC(uint32_t) ndev;
 	rte_thread_t tid;
 	int fd[2];
 	pthread_mutex_t lock;
@@ -129,7 +129,7 @@ struct iavf_event_handler {
 {
 	struct iavf_event_handler *handler = &event_handler;

-	if (__atomic_fetch_add(&handler->ndev, 1, __ATOMIC_RELAXED) + 1 != 1)
+	if (rte_atomic_fetch_add_explicit(&handler->ndev, 1, rte_memory_order_relaxed) + 1 != 1)
 		return 0;
 #if defined(RTE_EXEC_ENV_IS_WINDOWS) && RTE_EXEC_ENV_IS_WINDOWS != 0
 	int err = _pipe(handler->fd, MAX_EVENT_PENDING, O_BINARY);
@@ -137,7 +137,7 @@ struct iavf_event_handler {
 	int err = pipe(handler->fd);
 #endif
 	if (err != 0) {
-		__atomic_fetch_sub(&handler->ndev, 1, __ATOMIC_RELAXED);
+		rte_atomic_fetch_sub_explicit(&handler->ndev, 1, rte_memory_order_relaxed);
 		return -1;
 	}

@@ -146,7 +146,7 @@ struct iavf_event_handler {
 	if (rte_thread_create_internal_control(&handler->tid, "iavf-event",
 				iavf_dev_event_handle, NULL)) {
-		__atomic_fetch_sub(&handler->ndev, 1, __ATOMIC_RELAXED);
+		rte_atomic_fetch_sub_explicit(&handler->ndev, 1, rte_memory_order_relaxed);
 		return -1;
 	}

@@ -158,7 +158,7 @@ struct iavf_event_handler {
 {
 	struct iavf_event_handler *handler = &event_handler;

-	if (__atomic_fetch_sub(&handler->ndev, 1, __ATOMIC_RELAXED) - 1 != 0)
+	if (rte_atomic_fetch_sub_explicit(&handler->ndev, 1, rte_memory_order_relaxed) - 1 != 0)
 		return;

 	int unused = pthread_cancel((pthread_t)handler->tid.opaque_id);
@@ -574,8 +574,8 @@ struct iavf_event_handler {
 		/* read message and it's expected one */
 		if (msg_opc == vf->pend_cmd) {
 			uint32_t cmd_count =
-				__atomic_fetch_sub(&vf->pend_cmd_count,
-					1, __ATOMIC_RELAXED) - 1;
+				rte_atomic_fetch_sub_explicit(&vf->pend_cmd_count,
+					1, rte_memory_order_relaxed) - 1;
 			if (cmd_count == 0)
 				_notify_cmd(vf, msg_ret);
 		} else {
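
For context, the two sides of the pend_cmd/pend_cmd_count handshake
converted above read as follows when put side by side (a simplified
sketch; the PMD logging and the virtchnl send/receive plumbing are
elided):

    /* issuer (_atomic_set_cmd): claim the single pending-command slot;
     * the CAS succeeds only when no command is outstanding
     */
    enum virtchnl_ops op_unk = VIRTCHNL_OP_UNKNOWN;
    if (!rte_atomic_compare_exchange_strong_explicit(&vf->pend_cmd, &op_unk,
            ops, rte_memory_order_acquire, rte_memory_order_acquire))
        return -1; /* previous command still incomplete */
    rte_atomic_store_explicit(&vf->pend_cmd_count, 1,
            rte_memory_order_relaxed);

    /* responder (iavf_vchnl.c): count down one expected response and
     * wake the waiter once the count reaches zero
     */
    uint32_t cmd_count = rte_atomic_fetch_sub_explicit(&vf->pend_cmd_count,
            1, rte_memory_order_relaxed) - 1;
    if (cmd_count == 0)
        _notify_cmd(vf, msg_ret);

_atomic_set_async_response_cmd() follows the same pattern but stores 2,
since two responses must arrive before the waiter is notified.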