From patchwork Sat May 13 09:27:26 2017
X-Patchwork-Submitter: Jerin Jacob
X-Patchwork-Id: 24279
From: Jerin Jacob
To: dev@dpdk.org
Cc: thomas@monjalon.net, ferruh.yigit@intel.com, Jerin Jacob
Date: Sat, 13 May 2017 14:57:26 +0530
Message-Id: <20170513092728.30050-2-jerin.jacob@caviumnetworks.com>
X-Mailer: git-send-email 2.13.0
In-Reply-To: <20170513092728.30050-1-jerin.jacob@caviumnetworks.com>
References: <20170513092728.30050-1-jerin.jacob@caviumnetworks.com>
MIME-Version: 1.0
Subject: [dpdk-dev] [PATCH 2/4] eal: use the rte macro for always inline
List-Id: DPDK patches and discussions
Errors-To: dev-bounces@dpdk.org
Sender: "dev"

Replaced the open-coded inline __attribute__((always_inline)) with the
__rte_always_inline macro.

Verified the change by comparing the output binary files: no difference
was found in the output binaries with this change.
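For readers new to the macro: __rte_always_inline bundles the `inline`
keyword with GCC's always_inline attribute, so each call site names the
qualifier once instead of spelling the attribute out. A minimal sketch of
the conversion this patch applies mechanically across the tree (the macro
definition shown is an assumption based on the substitutions below; the
helper name is hypothetical, not from this patch):

    /* Assumed definition, provided by the EAL headers earlier in this
     * series; copied here only for illustration. */
    #define __rte_always_inline inline __attribute__((always_inline))

    /* Before: attribute spelled out, return type before it. */
    static inline uint16_t __attribute__((always_inline))
    min_u16(uint16_t a, uint16_t b)     /* hypothetical helper */
    {
        return a < b ? a : b;
    }

    /* After: the macro supplies both `inline` and the attribute, and
     * the return type moves after it. */
    static __rte_always_inline uint16_t
    min_u16(uint16_t a, uint16_t b)
    {
        return a < b ? a : b;
    }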
Signed-off-by: Jerin Jacob
---
 drivers/crypto/dpaa2_sec/hw/compat.h               |  4 +-
 drivers/crypto/scheduler/scheduler_failover.c      |  2 +-
 drivers/crypto/scheduler/scheduler_pmd_private.h   |  6 +--
 drivers/event/octeontx/ssovf_worker.c              | 16 ++++----
 drivers/event/octeontx/ssovf_worker.h              | 22 +++++------
 drivers/event/sw/event_ring.h                      | 14 +++----
 drivers/event/sw/iq_ring.h                         | 20 ++++------
 drivers/event/sw/sw_evdev_scheduler.c              |  4 +-
 drivers/net/fm10k/fm10k_rxtx_vec.c                 |  4 +-
 drivers/net/i40e/i40e_rxtx.c                       |  2 +-
 drivers/net/i40e/i40e_rxtx_vec_common.h            |  4 +-
 drivers/net/ixgbe/ixgbe_rxtx.c                     |  2 +-
 drivers/net/ixgbe/ixgbe_rxtx_vec_common.h          |  4 +-
 drivers/net/mlx5/mlx5_rxtx.c                       | 34 +++++++---------
 drivers/net/xenvirt/virtqueue.h                    | 12 +++---
 .../ip_pipeline/pipeline/pipeline_passthrough_be.c |  8 ++--
 .../ip_pipeline/pipeline/pipeline_routing_be.c     |  4 +-
 examples/l3fwd/l3fwd_em.h                          |  2 +-
 examples/l3fwd/l3fwd_em_hlm_sse.h                  |  6 +--
 examples/l3fwd/l3fwd_em_sse.h                      |  2 +-
 examples/l3fwd/l3fwd_lpm.h                         |  2 +-
 examples/l3fwd/l3fwd_lpm_sse.h                     |  4 +-
 examples/l3fwd/l3fwd_sse.h                         |  6 +--
 examples/performance-thread/common/lthread_pool.h  | 10 ++---
 examples/performance-thread/common/lthread_queue.h | 10 ++---
 examples/performance-thread/common/lthread_sched.c |  4 +-
 examples/performance-thread/common/lthread_sched.h | 12 +++---
 examples/performance-thread/l3fwd-thread/main.c    |  8 ++--
 examples/tep_termination/main.c                    |  2 +-
 examples/vhost/main.c                              | 18 ++++-----
 examples/vhost/virtio_net.c                        |  4 +-
 examples/vhost_xen/main.c                          | 12 +++---
 lib/librte_acl/acl_run_altivec.h                   |  4 +-
 lib/librte_acl/acl_run_avx2.h                      |  2 +-
 lib/librte_acl/acl_run_neon.h                      |  6 +--
 lib/librte_acl/acl_run_sse.h                       |  4 +-
 lib/librte_eal/common/include/arch/arm/rte_io_64.h | 32 +++++++--------
 .../common/include/arch/x86/rte_memcpy.h           |  5 ++-
 lib/librte_eal/common/include/generic/rte_io.h     | 32 +++++++--------
 lib/librte_ether/rte_ethdev.h                      |  2 +-
 lib/librte_mbuf/rte_mbuf.h                         |  7 ++--
 lib/librte_mempool/rte_mempool.h                   | 20 +++++-----
 lib/librte_net/net_crc_sse.h                       | 10 ++---
 lib/librte_net/rte_net_crc.c                       |  2 +-
 lib/librte_port/rte_port_ring.c                    |  4 +-
 lib/librte_ring/rte_ring.h                         | 46 +++++++++++-----------
 lib/librte_vhost/rte_vhost.h                       |  2 +-
 lib/librte_vhost/vhost.h                           |  8 ++--
 lib/librte_vhost/virtio_net.c                      | 30 +++++++-------
 test/test/test_xmmt_ops.h                          |  4 +-
 50 files changed, 234 insertions(+), 250 deletions(-)

diff --git a/drivers/crypto/dpaa2_sec/hw/compat.h b/drivers/crypto/dpaa2_sec/hw/compat.h
index 11fdaa8e3..ab95ce6bb 100644
--- a/drivers/crypto/dpaa2_sec/hw/compat.h
+++ b/drivers/crypto/dpaa2_sec/hw/compat.h
@@ -49,7 +49,9 @@
 #include
 #include
 #include
+
 #include
+#include
 
 #ifndef __BYTE_ORDER__
 #error "Undefined endianness"
@@ -60,7 +62,7 @@
 #endif
 
 #ifndef __always_inline
-#define __always_inline (inline __attribute__((always_inline)))
+#define __always_inline __rte_always_inline
 #endif
 
 #ifndef __always_unused
diff --git a/drivers/crypto/scheduler/scheduler_failover.c b/drivers/crypto/scheduler/scheduler_failover.c
index 2471a5f14..162a29bb6 100644
--- a/drivers/crypto/scheduler/scheduler_failover.c
+++ b/drivers/crypto/scheduler/scheduler_failover.c
@@ -48,7 +48,7 @@ struct fo_scheduler_qp_ctx {
 	uint8_t deq_idx;
 };
 
-static inline uint16_t __attribute__((always_inline))
+static __rte_always_inline uint16_t
 failover_slave_enqueue(struct scheduler_slave *slave, uint8_t slave_idx,
 		struct rte_crypto_op **ops, uint16_t nb_ops)
 {
diff --git a/drivers/crypto/scheduler/scheduler_pmd_private.h b/drivers/crypto/scheduler/scheduler_pmd_private.h
index 421dae371..05a5916c3 100644
--- a/drivers/crypto/scheduler/scheduler_pmd_private.h
+++ b/drivers/crypto/scheduler/scheduler_pmd_private.h
@@ -105,7 +105,7 @@ struct scheduler_session {
 			RTE_CRYPTODEV_SCHEDULER_MAX_NB_SLAVES];
 };
 
-static inline uint16_t __attribute__((always_inline))
+static __rte_always_inline uint16_t
 get_max_enqueue_order_count(struct rte_ring *order_ring, uint16_t nb_ops)
 {
 	uint32_t count = rte_ring_free_count(order_ring);
@@ -113,7 +113,7 @@ get_max_enqueue_order_count(struct rte_ring *order_ring, uint16_t nb_ops)
 	return count > nb_ops ? nb_ops : count;
 }
 
-static inline void __attribute__((always_inline))
+static __rte_always_inline void
 scheduler_order_insert(struct rte_ring *order_ring,
 		struct rte_crypto_op **ops, uint16_t nb_ops)
 {
@@ -125,7 +125,7 @@ scheduler_order_insert(struct rte_ring *order_ring,
 	op = ring[(order_ring->cons.head + pos) & order_ring->mask]; \
 } while (0)
 
-static inline uint16_t __attribute__((always_inline))
+static __rte_always_inline uint16_t
 scheduler_order_drain(struct rte_ring *order_ring,
 		struct rte_crypto_op **ops, uint16_t nb_ops)
 {
diff --git a/drivers/event/octeontx/ssovf_worker.c b/drivers/event/octeontx/ssovf_worker.c
index ad3fe684d..fcb5f316c 100644
--- a/drivers/event/octeontx/ssovf_worker.c
+++ b/drivers/event/octeontx/ssovf_worker.c
@@ -32,7 +32,7 @@
 
 #include "ssovf_worker.h"
 
-static force_inline void
+static __rte_always_inline void
 ssows_new_event(struct ssows *ws, const struct rte_event *ev)
 {
 	const uint64_t event_ptr = ev->u64;
@@ -43,7 +43,7 @@ ssows_new_event(struct ssows *ws, const struct rte_event *ev)
 	ssows_add_work(ws, event_ptr, tag, new_tt, grp);
 }
 
-static force_inline void
+static __rte_always_inline void
 ssows_fwd_swtag(struct ssows *ws, const struct rte_event *ev, const uint8_t grp)
 {
 	const uint8_t cur_tt = ws->cur_tt;
@@ -72,7 +72,7 @@ ssows_fwd_swtag(struct ssows *ws, const struct rte_event *ev, const uint8_t grp)
 
 #define OCT_EVENT_TYPE_GRP_FWD (RTE_EVENT_TYPE_MAX - 1)
 
-static force_inline void
+static __rte_always_inline void
 ssows_fwd_group(struct ssows *ws, const struct rte_event *ev, const uint8_t grp)
 {
 	const uint64_t event_ptr = ev->u64;
@@ -95,7 +95,7 @@ ssows_fwd_group(struct ssows *ws, const struct rte_event *ev, const uint8_t grp)
 	ssows_add_work(ws, event_ptr, tag, new_tt, grp);
 }
 
-static force_inline void
+static __rte_always_inline void
 ssows_forward_event(struct ssows *ws, const struct rte_event *ev)
 {
 	const uint8_t grp = ev->queue_id;
@@ -112,14 +112,14 @@ ssows_forward_event(struct ssows *ws, const struct rte_event *ev)
 		ssows_fwd_group(ws, ev, grp);
 }
 
-static force_inline void
+static __rte_always_inline void
 ssows_release_event(struct ssows *ws)
 {
 	if (likely(ws->cur_tt != SSO_SYNC_UNTAGGED))
 		ssows_swtag_untag(ws);
 }
 
-force_inline uint16_t __hot
+__rte_always_inline uint16_t __hot
 ssows_deq(void *port, struct rte_event *ev, uint64_t timeout_ticks)
 {
 	struct ssows *ws = port;
@@ -135,7 +135,7 @@ ssows_deq(void *port, struct rte_event *ev, uint64_t timeout_ticks)
 	}
 }
 
-force_inline uint16_t __hot
+__rte_always_inline uint16_t __hot
 ssows_deq_timeout(void *port, struct rte_event *ev, uint64_t timeout_ticks)
 {
 	struct ssows *ws = port;
@@ -171,7 +171,7 @@ ssows_deq_timeout_burst(void *port, struct rte_event ev[], uint16_t nb_events,
 	return ssows_deq_timeout(port, ev, timeout_ticks);
 }
 
-force_inline uint16_t __hot
+__rte_always_inline uint16_t __hot
 ssows_enq(void *port, const struct rte_event *ev)
 {
 	struct ssows *ws = port;
diff --git a/drivers/event/octeontx/ssovf_worker.h b/drivers/event/octeontx/ssovf_worker.h
index 300dfae83..40c5c5531 100644
--- a/drivers/event/octeontx/ssovf_worker.h
+++ b/drivers/event/octeontx/ssovf_worker.h
@@ -42,17 +42,13 @@ enum {
 	SSO_SYNC_EMPTY
 };
 
-#ifndef force_inline
-#define force_inline inline __attribute__((always_inline))
-#endif
-
 #ifndef __hot
 #define __hot __attribute__((hot))
 #endif
 
 /* SSO Operations */
 
-static force_inline uint16_t
+static __rte_always_inline uint16_t
 ssows_get_work(struct ssows *ws, struct rte_event *ev)
 {
 	uint64_t get_work0, get_work1;
@@ -70,7 +66,7 @@ ssows_get_work(struct ssows *ws, struct rte_event *ev)
 	return !!get_work1;
 }
 
-static force_inline void
+static __rte_always_inline void
 ssows_add_work(struct ssows *ws, const uint64_t event_ptr, const uint32_t tag,
 			const uint8_t new_tt, const uint8_t grp)
 {
@@ -80,7 +76,7 @@ ssows_add_work(struct ssows *ws, const uint64_t event_ptr, const uint32_t tag,
 	ssovf_store_pair(add_work0, event_ptr, ws->grps[grp]);
 }
 
-static force_inline void
+static __rte_always_inline void
 ssows_swtag_full(struct ssows *ws, const uint64_t event_ptr, const uint32_t tag,
 			const uint8_t new_tt, const uint8_t grp)
 {
@@ -92,7 +88,7 @@ ssows_swtag_full(struct ssows *ws, const uint64_t event_ptr, const uint32_t tag,
 			SSOW_VHWS_OP_SWTAG_FULL0));
 }
 
-static force_inline void
+static __rte_always_inline void
 ssows_swtag_desched(struct ssows *ws, uint32_t tag, uint8_t new_tt, uint8_t grp)
 {
 	uint64_t val;
@@ -101,7 +97,7 @@ ssows_swtag_desched(struct ssows *ws, uint32_t tag, uint8_t new_tt, uint8_t grp)
 	ssovf_write64(val, ws->base + SSOW_VHWS_OP_SWTAG_DESCHED);
 }
 
-static force_inline void
+static __rte_always_inline void
 ssows_swtag_norm(struct ssows *ws, uint32_t tag, uint8_t new_tt)
 {
 	uint64_t val;
@@ -110,27 +106,27 @@ ssows_swtag_norm(struct ssows *ws, uint32_t tag, uint8_t new_tt)
 	ssovf_write64(val, ws->base + SSOW_VHWS_OP_SWTAG_NORM);
 }
 
-static force_inline void
+static __rte_always_inline void
 ssows_swtag_untag(struct ssows *ws)
 {
 	ssovf_write64(0, ws->base + SSOW_VHWS_OP_SWTAG_UNTAG);
 	ws->cur_tt = SSO_SYNC_UNTAGGED;
 }
 
-static force_inline void
+static __rte_always_inline void
 ssows_upd_wqp(struct ssows *ws, uint8_t grp, uint64_t event_ptr)
 {
 	ssovf_store_pair((uint64_t)grp << 34, event_ptr,
 			(ws->base + SSOW_VHWS_OP_UPD_WQP_GRP0));
 }
 
-static force_inline void
+static __rte_always_inline void
 ssows_desched(struct ssows *ws)
 {
 	ssovf_write64(0, ws->base + SSOW_VHWS_OP_DESCHED);
 }
 
-static force_inline void
+static __rte_always_inline void
 ssows_swtag_wait(struct ssows *ws)
 {
 	/* Wait for the SWTAG/SWTAG_FULL operation */
diff --git a/drivers/event/sw/event_ring.h b/drivers/event/sw/event_ring.h
index cdaee95d3..734a3b4b1 100644
--- a/drivers/event/sw/event_ring.h
+++ b/drivers/event/sw/event_ring.h
@@ -61,10 +61,6 @@ struct qe_ring {
 	struct rte_event ring[0] __rte_cache_aligned;
 };
 
-#ifndef force_inline
-#define force_inline inline __attribute__((always_inline))
-#endif
-
 static inline struct qe_ring *
 qe_ring_create(const char *name, unsigned int size, unsigned int socket_id)
 {
@@ -91,19 +87,19 @@ qe_ring_destroy(struct qe_ring *r)
 	rte_free(r);
 }
 
-static force_inline unsigned int
+static __rte_always_inline unsigned int
 qe_ring_count(const struct qe_ring *r)
 {
 	return r->write_idx - r->read_idx;
 }
 
-static force_inline unsigned int
+static __rte_always_inline unsigned int
 qe_ring_free_count(const struct qe_ring *r)
 {
 	return r->size - qe_ring_count(r);
 }
 
-static force_inline unsigned int
+static __rte_always_inline unsigned int
 qe_ring_enqueue_burst(struct qe_ring *r, const struct rte_event *qes,
 		unsigned int nb_qes, uint16_t *free_count)
 {
@@ -130,7 +126,7 @@ qe_ring_enqueue_burst(struct qe_ring *r, const struct rte_event *qes,
 	return nb_qes;
 }
 
-static force_inline unsigned int
+static __rte_always_inline unsigned int
 qe_ring_enqueue_burst_with_ops(struct qe_ring *r, const struct rte_event *qes,
 		unsigned int nb_qes, uint8_t *ops)
 {
@@ -157,7 +153,7 @@ qe_ring_enqueue_burst_with_ops(struct qe_ring *r, const struct rte_event *qes,
 	return nb_qes;
 }
 
-static force_inline unsigned int
+static __rte_always_inline unsigned int
 qe_ring_dequeue_burst(struct qe_ring *r, struct rte_event *qes,
 		unsigned int nb_qes)
 {
diff --git a/drivers/event/sw/iq_ring.h b/drivers/event/sw/iq_ring.h
index d480d1560..64cf6784c 100644
--- a/drivers/event/sw/iq_ring.h
+++ b/drivers/event/sw/iq_ring.h
@@ -56,10 +56,6 @@ struct iq_ring {
 	struct rte_event ring[QID_IQ_DEPTH];
 };
 
-#ifndef force_inline
-#define force_inline inline __attribute__((always_inline))
-#endif
-
 static inline struct iq_ring *
 iq_ring_create(const char *name, unsigned int socket_id)
 {
@@ -81,19 +77,19 @@ iq_ring_destroy(struct iq_ring *r)
 	rte_free(r);
 }
 
-static force_inline uint16_t
+static __rte_always_inline uint16_t
 iq_ring_count(const struct iq_ring *r)
 {
 	return r->write_idx - r->read_idx;
 }
 
-static force_inline uint16_t
+static __rte_always_inline uint16_t
 iq_ring_free_count(const struct iq_ring *r)
 {
 	return QID_IQ_MASK - iq_ring_count(r);
 }
 
-static force_inline uint16_t
+static __rte_always_inline uint16_t
 iq_ring_enqueue_burst(struct iq_ring *r, struct rte_event *qes, uint16_t nb_qes)
 {
 	const uint16_t read = r->read_idx;
@@ -112,7 +108,7 @@ iq_ring_enqueue_burst(struct iq_ring *r, struct rte_event *qes, uint16_t nb_qes)
 	return nb_qes;
 }
 
-static force_inline uint16_t
+static __rte_always_inline uint16_t
 iq_ring_dequeue_burst(struct iq_ring *r, struct rte_event *qes, uint16_t nb_qes)
 {
 	uint16_t read = r->read_idx;
@@ -132,7 +128,7 @@ iq_ring_dequeue_burst(struct iq_ring *r, struct rte_event *qes, uint16_t nb_qes)
 }
 
 /* assumes there is space, from a previous dequeue_burst */
-static force_inline uint16_t
+static __rte_always_inline uint16_t
 iq_ring_put_back(struct iq_ring *r, struct rte_event *qes, uint16_t nb_qes)
 {
 	uint16_t i, read = r->read_idx;
@@ -144,19 +140,19 @@ iq_ring_put_back(struct iq_ring *r, struct rte_event *qes, uint16_t nb_qes)
 	return nb_qes;
 }
 
-static force_inline const struct rte_event *
+static __rte_always_inline const struct rte_event *
 iq_ring_peek(const struct iq_ring *r)
 {
 	return &r->ring[r->read_idx & QID_IQ_MASK];
 }
 
-static force_inline void
+static __rte_always_inline void
 iq_ring_pop(struct iq_ring *r)
 {
 	r->read_idx++;
 }
 
-static force_inline int
+static __rte_always_inline int
 iq_ring_enqueue(struct iq_ring *r, const struct rte_event *qe)
 {
 	const uint16_t read = r->read_idx;
diff --git a/drivers/event/sw/sw_evdev_scheduler.c b/drivers/event/sw/sw_evdev_scheduler.c
index a333a6f0a..35f8f175a 100644
--- a/drivers/event/sw/sw_evdev_scheduler.c
+++ b/drivers/event/sw/sw_evdev_scheduler.c
@@ -362,7 +362,7 @@ sw_schedule_reorder(struct sw_evdev *sw, int qid_start, int qid_end)
 	return pkts_iter;
 }
 
-static inline void __attribute__((always_inline))
+static __rte_always_inline void
 sw_refill_pp_buf(struct sw_evdev *sw, struct sw_port *port)
 {
 	RTE_SET_USED(sw);
@@ -372,7 +372,7 @@ sw_refill_pp_buf(struct sw_evdev *sw, struct sw_port *port)
 			RTE_DIM(port->pp_buf));
 }
 
-static inline uint32_t __attribute__((always_inline))
+static __rte_always_inline uint32_t
 __pull_port_lb(struct sw_evdev *sw, uint32_t port_id, int allow_reorder)
 {
 	static struct reorder_buffer_entry dummy_rob;
diff --git a/drivers/net/fm10k/fm10k_rxtx_vec.c b/drivers/net/fm10k/fm10k_rxtx_vec.c
index 411bc4450..03f6fd70e 100644
--- a/drivers/net/fm10k/fm10k_rxtx_vec.c
+++ b/drivers/net/fm10k/fm10k_rxtx_vec.c
@@ -738,7 +738,7 @@ vtx(volatile struct fm10k_tx_desc *txdp,
 		vtx1(txdp, *pkt, flags);
 }
 
-static inline int __attribute__((always_inline))
+static __rte_always_inline int
 fm10k_tx_free_bufs(struct fm10k_tx_queue *txq)
 {
 	struct rte_mbuf **txep;
@@ -794,7 +794,7 @@ fm10k_tx_free_bufs(struct fm10k_tx_queue *txq)
 	return txq->rs_thresh;
 }
 
-static inline void __attribute__((always_inline))
+static __rte_always_inline void
 tx_backlog_entry(struct rte_mbuf **txep,
 		struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 {
diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
index 351cb94dd..0aefb2f46 100644
--- a/drivers/net/i40e/i40e_rxtx.c
+++ b/drivers/net/i40e/i40e_rxtx.c
@@ -1257,7 +1257,7 @@ i40e_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 	return nb_tx;
 }
 
-static inline int __attribute__((always_inline))
+static __rte_always_inline int
 i40e_tx_free_bufs(struct i40e_tx_queue *txq)
 {
 	struct i40e_tx_entry *txep;
diff --git a/drivers/net/i40e/i40e_rxtx_vec_common.h b/drivers/net/i40e/i40e_rxtx_vec_common.h
index 692096684..39a6da061 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_common.h
+++ b/drivers/net/i40e/i40e_rxtx_vec_common.h
@@ -102,7 +102,7 @@ reassemble_packets(struct i40e_rx_queue *rxq, struct rte_mbuf **rx_bufs,
 	return pkt_idx;
 }
 
-static inline int __attribute__((always_inline))
+static __rte_always_inline int
 i40e_tx_free_bufs(struct i40e_tx_queue *txq)
 {
 	struct i40e_tx_entry *txep;
@@ -159,7 +159,7 @@ i40e_tx_free_bufs(struct i40e_tx_queue *txq)
 	return txq->tx_rs_thresh;
 }
 
-static inline void __attribute__((always_inline))
+static __rte_always_inline void
 tx_backlog_entry(struct i40e_tx_entry *txep,
 		struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 {
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
index 1e0789595..ee8ad9626 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx.c
@@ -126,7 +126,7 @@ uint16_t ixgbe_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
 * Check for descriptors with their DD bit set and free mbufs.
 * Return the total number of buffers freed.
 */
-static inline int __attribute__((always_inline))
+static __rte_always_inline int
 ixgbe_tx_free_bufs(struct ixgbe_tx_queue *txq)
 {
 	struct ixgbe_tx_entry *txep;
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h b/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
index 1c34bb5f3..9fc112b1c 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
@@ -101,7 +101,7 @@ reassemble_packets(struct ixgbe_rx_queue *rxq, struct rte_mbuf **rx_bufs,
 	return pkt_idx;
 }
 
-static inline int __attribute__((always_inline))
+static __rte_always_inline int
 ixgbe_tx_free_bufs(struct ixgbe_tx_queue *txq)
 {
 	struct ixgbe_tx_entry_v *txep;
@@ -158,7 +158,7 @@ ixgbe_tx_free_bufs(struct ixgbe_tx_queue *txq)
 	return txq->tx_rs_thresh;
 }
 
-static inline void __attribute__((always_inline))
+static __rte_always_inline void
 tx_backlog_entry(struct ixgbe_tx_entry_v *txep,
 		struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 {
diff --git a/drivers/net/mlx5/mlx5_rxtx.c b/drivers/net/mlx5/mlx5_rxtx.c
index de6e0fa4a..53b5c68bd 100644
--- a/drivers/net/mlx5/mlx5_rxtx.c
+++ b/drivers/net/mlx5/mlx5_rxtx.c
@@ -69,34 +69,28 @@
 #include "mlx5_defs.h"
 #include "mlx5_prm.h"
 
-static inline int
+static __rte_always_inline int
 check_cqe(volatile struct mlx5_cqe *cqe,
-	  unsigned int cqes_n, const uint16_t ci)
-	  __attribute__((always_inline));
+	  unsigned int cqes_n, const uint16_t ci);
 
-static inline void
-txq_complete(struct txq *txq) __attribute__((always_inline));
+static __rte_always_inline void
+txq_complete(struct txq *txq);
 
-static inline uint32_t
-txq_mp2mr(struct txq *txq, struct rte_mempool *mp)
-	__attribute__((always_inline));
+static __rte_always_inline uint32_t
+txq_mp2mr(struct txq *txq, struct rte_mempool *mp);
 
-static inline void
-mlx5_tx_dbrec(struct txq *txq, volatile struct mlx5_wqe *wqe)
-	__attribute__((always_inline));
+static __rte_always_inline void
+mlx5_tx_dbrec(struct txq *txq, volatile struct mlx5_wqe *wqe);
 
-static inline uint32_t
-rxq_cq_to_pkt_type(volatile struct mlx5_cqe *cqe)
-	__attribute__((always_inline));
+static __rte_always_inline uint32_t
+rxq_cq_to_pkt_type(volatile struct mlx5_cqe *cqe);
 
-static inline int
+static __rte_always_inline int
 mlx5_rx_poll_len(struct rxq *rxq, volatile struct mlx5_cqe *cqe,
-		 uint16_t cqe_cnt, uint32_t *rss_hash)
-		 __attribute__((always_inline));
+		 uint16_t cqe_cnt, uint32_t *rss_hash);
 
-static inline uint32_t
-rxq_cq_to_ol_flags(struct rxq *rxq, volatile struct mlx5_cqe *cqe)
-	__attribute__((always_inline));
+static __rte_always_inline uint32_t
+rxq_cq_to_ol_flags(struct rxq *rxq, volatile struct mlx5_cqe *cqe);
 
 #ifndef NDEBUG
diff --git a/drivers/net/xenvirt/virtqueue.h b/drivers/net/xenvirt/virtqueue.h
index 350eae3ec..1bb6877cd 100644
--- a/drivers/net/xenvirt/virtqueue.h
+++ b/drivers/net/xenvirt/virtqueue.h
@@ -123,7 +123,7 @@ void virtqueue_dump(struct virtqueue *vq);
 */
 struct rte_mbuf * virtqueue_detatch_unused(struct virtqueue *vq);
 
-static inline int __attribute__((always_inline))
+static __rte_always_inline int
 virtqueue_full(const struct virtqueue *vq)
 {
 	return vq->vq_free_cnt == 0;
@@ -131,7 +131,7 @@ virtqueue_full(const struct virtqueue *vq)
 
 #define VIRTQUEUE_NUSED(vq) ((uint16_t)((vq)->vq_ring.used->idx - (vq)->vq_used_cons_idx))
 
-static inline void __attribute__((always_inline))
+static __rte_always_inline void
 vq_ring_update_avail(struct virtqueue *vq, uint16_t desc_idx)
 {
 	uint16_t avail_idx;
@@ -148,7 +148,7 @@ vq_ring_update_avail(struct virtqueue *vq, uint16_t desc_idx)
 	vq->vq_ring.avail->idx++;
 }
 
-static inline void __attribute__((always_inline))
+static __rte_always_inline void
 vq_ring_free_chain(struct virtqueue *vq, uint16_t desc_idx)
 {
 	struct vring_desc *dp;
@@ -171,7 +171,7 @@ vq_ring_free_chain(struct virtqueue *vq, uint16_t desc_idx)
 	vq->vq_desc_head_idx = desc_idx;
 }
 
-static inline int __attribute__((always_inline))
+static __rte_always_inline int
 virtqueue_enqueue_recv_refill(struct virtqueue *rxvq, struct rte_mbuf *cookie)
 {
 	const uint16_t needed = 1;
@@ -201,7 +201,7 @@ virtqueue_enqueue_recv_refill(struct virtqueue *rxvq, struct rte_mbuf *cookie)
 	return 0;
 }
 
-static inline int __attribute__((always_inline))
+static __rte_always_inline int
 virtqueue_enqueue_xmit(struct virtqueue *txvq,
 		struct rte_mbuf *cookie)
 {
@@ -242,7 +242,7 @@ virtqueue_enqueue_xmit(struct virtqueue *txvq, struct rte_mbuf *cookie)
 	return 0;
 }
 
-static inline uint16_t __attribute__((always_inline))
+static __rte_always_inline uint16_t
 virtqueue_dequeue_burst(struct virtqueue *vq, struct rte_mbuf **rx_pkts, uint32_t *len, uint16_t num)
 {
 	struct vring_used_elem *uep;
diff --git a/examples/ip_pipeline/pipeline/pipeline_passthrough_be.c b/examples/ip_pipeline/pipeline/pipeline_passthrough_be.c
index 7ab0afedb..8cb2f0c71 100644
--- a/examples/ip_pipeline/pipeline/pipeline_passthrough_be.c
+++ b/examples/ip_pipeline/pipeline/pipeline_passthrough_be.c
@@ -76,7 +76,7 @@ static pipeline_msg_req_handler handlers[] = {
 	pipeline_msg_req_invalid_handler,
 };
 
-static inline __attribute__((always_inline)) void
+static __rte_always_inline void
 pkt_work_dma(
 	struct rte_mbuf *pkt,
 	void *arg,
@@ -121,7 +121,7 @@ pkt_work_dma(
 	}
 }
 
-static inline __attribute__((always_inline)) void
+static __rte_always_inline void
 pkt4_work_dma(
 	struct rte_mbuf **pkts,
 	void *arg,
@@ -217,7 +217,7 @@ pkt4_work_dma(
 	}
 }
 
-static inline __attribute__((always_inline)) void
+static __rte_always_inline void
 pkt_work_swap(
 	struct rte_mbuf *pkt,
 	void *arg)
@@ -241,7 +241,7 @@ pkt_work_swap(
 	}
 }
 
-static inline __attribute__((always_inline)) void
+static __rte_always_inline void
 pkt4_work_swap(
 	struct rte_mbuf **pkts,
 	void *arg)
diff --git a/examples/ip_pipeline/pipeline/pipeline_routing_be.c b/examples/ip_pipeline/pipeline/pipeline_routing_be.c
index 21ac7888f..78317165d 100644
--- a/examples/ip_pipeline/pipeline/pipeline_routing_be.c
+++ b/examples/ip_pipeline/pipeline/pipeline_routing_be.c
@@ -191,7 +191,7 @@ struct layout {
 	dst->c = src->c;			\
 }
 
-static inline __attribute__((always_inline)) void
+static __rte_always_inline void
 pkt_work_routing(
 	struct rte_mbuf *pkt,
 	struct rte_pipeline_table_entry *table_entry,
@@ -317,7 +317,7 @@ pkt_work_routing(
 	}
 }
 
-static inline __attribute__((always_inline)) void
+static __rte_always_inline void
 pkt4_work_routing(
 	struct rte_mbuf **pkts,
 	struct rte_pipeline_table_entry **table_entries,
diff --git a/examples/l3fwd/l3fwd_em.h b/examples/l3fwd/l3fwd_em.h
index 2284bbd5c..d509a1fcd 100644
--- a/examples/l3fwd/l3fwd_em.h
+++ b/examples/l3fwd/l3fwd_em.h
@@ -34,7 +34,7 @@
 #ifndef __L3FWD_EM_H__
 #define __L3FWD_EM_H__
 
-static inline __attribute__((always_inline)) void
+static __rte_always_inline void
 l3fwd_em_simple_forward(struct rte_mbuf *m, uint8_t portid,
 		struct lcore_conf *qconf)
 {
diff --git a/examples/l3fwd/l3fwd_em_hlm_sse.h b/examples/l3fwd/l3fwd_em_hlm_sse.h
index 7714a20ce..d272f1121 100644
--- a/examples/l3fwd/l3fwd_em_hlm_sse.h
+++ b/examples/l3fwd/l3fwd_em_hlm_sse.h
@@ -36,7 +36,7 @@
 
 #include "l3fwd_sse.h"
 
-static inline __attribute__((always_inline)) void
+static __rte_always_inline void
 em_get_dst_port_ipv4x8(struct lcore_conf *qconf, struct rte_mbuf *m[8],
 		uint8_t portid, uint16_t dst_port[8])
 {
@@ -160,7 +160,7 @@ get_ipv6_5tuple(struct rte_mbuf *m0, __m128i mask0,
 	key->xmm[2] = _mm_and_si128(tmpdata2, mask1);
 }
 
-static inline __attribute__((always_inline)) void
+static __rte_always_inline void
 em_get_dst_port_ipv6x8(struct lcore_conf *qconf, struct rte_mbuf *m[8],
 		uint8_t portid, uint16_t dst_port[8])
 {
@@ -232,7 +232,7 @@ em_get_dst_port_ipv6x8(struct lcore_conf *qconf, struct rte_mbuf *m[8],
 
 }
 
-static inline __attribute__((always_inline)) uint16_t
+static __rte_always_inline uint16_t
 em_get_dst_port(const struct lcore_conf *qconf, struct rte_mbuf *pkt,
 		uint8_t portid)
 {
diff --git a/examples/l3fwd/l3fwd_em_sse.h b/examples/l3fwd/l3fwd_em_sse.h
index c0a9725a6..6c794b6a5 100644
--- a/examples/l3fwd/l3fwd_em_sse.h
+++ b/examples/l3fwd/l3fwd_em_sse.h
@@ -45,7 +45,7 @@
 
 #include "l3fwd_sse.h"
 
-static inline __attribute__((always_inline)) uint16_t
+static __rte_always_inline uint16_t
 em_get_dst_port(const struct lcore_conf *qconf, struct rte_mbuf *pkt,
 		uint8_t portid)
 {
diff --git a/examples/l3fwd/l3fwd_lpm.h b/examples/l3fwd/l3fwd_lpm.h
index 258a82fec..4d77b5807 100644
--- a/examples/l3fwd/l3fwd_lpm.h
+++ b/examples/l3fwd/l3fwd_lpm.h
@@ -58,7 +58,7 @@ lpm_get_ipv6_dst_port(void *ipv6_hdr, uint8_t portid, void *lookup_struct)
 		&next_hop) == 0) ? next_hop : portid);
 }
 
-static inline __attribute__((always_inline)) void
+static __rte_always_inline void
 l3fwd_lpm_simple_forward(struct rte_mbuf *m, uint8_t portid,
 		struct lcore_conf *qconf)
 {
diff --git a/examples/l3fwd/l3fwd_lpm_sse.h b/examples/l3fwd/l3fwd_lpm_sse.h
index aa06b6d34..5d77a942f 100644
--- a/examples/l3fwd/l3fwd_lpm_sse.h
+++ b/examples/l3fwd/l3fwd_lpm_sse.h
@@ -36,7 +36,7 @@
 
 #include "l3fwd_sse.h"
 
-static inline __attribute__((always_inline)) uint16_t
+static __rte_always_inline uint16_t
 lpm_get_dst_port(const struct lcore_conf *qconf, struct rte_mbuf *pkt,
 		uint8_t portid)
 {
@@ -75,7 +75,7 @@ lpm_get_dst_port(const struct lcore_conf *qconf, struct rte_mbuf *pkt,
 * precalculated. If packet is ipv6 dst_addr is taken directly from packet
 * header and dst_ipv4 value is not used.
 */
-static inline __attribute__((always_inline)) uint16_t
+static __rte_always_inline uint16_t
 lpm_get_dst_port_with_ipv4(const struct lcore_conf *qconf, struct rte_mbuf *pkt,
 	uint32_t dst_ipv4, uint8_t portid)
 {
diff --git a/examples/l3fwd/l3fwd_sse.h b/examples/l3fwd/l3fwd_sse.h
index 1afa1f006..cc2329312 100644
--- a/examples/l3fwd/l3fwd_sse.h
+++ b/examples/l3fwd/l3fwd_sse.h
@@ -57,7 +57,7 @@
 * If we encounter invalid IPV4 packet, then set destination port for it
 * to BAD_PORT value.
 */
-static inline __attribute__((always_inline)) void
+static __rte_always_inline void
 rfc1812_process(struct ipv4_hdr *ipv4_hdr, uint16_t *dp, uint32_t ptype)
 {
 	uint8_t ihl;
@@ -314,7 +314,7 @@ process_packet(struct rte_mbuf *pkt, uint16_t *dst_port)
 	_mm_storeu_si128((__m128i *)eth_hdr, te);
 }
 
-static inline __attribute__((always_inline)) void
+static __rte_always_inline void
 send_packetsx4(struct lcore_conf *qconf, uint8_t port, struct rte_mbuf *m[],
 		uint32_t num)
 {
@@ -395,7 +395,7 @@ send_packetsx4(struct lcore_conf *qconf, uint8_t port, struct rte_mbuf *m[],
 /**
 * Send packets burst from pkts_burst to the ports in dst_port array
 */
-static inline __attribute__((always_inline)) void
+static __rte_always_inline void
 send_packets_multi(struct lcore_conf *qconf, struct rte_mbuf **pkts_burst,
 		uint16_t dst_port[MAX_PKT_BURST], int nb_rx)
 {
diff --git a/examples/performance-thread/common/lthread_pool.h b/examples/performance-thread/common/lthread_pool.h
index fb0c578b0..315a2e21e 100644
--- a/examples/performance-thread/common/lthread_pool.h
+++ b/examples/performance-thread/common/lthread_pool.h
@@ -174,7 +174,7 @@ _qnode_pool_create(const char *name, int prealloc_size) {
 /*
 * Insert a node into the pool
 */
-static inline void __attribute__ ((always_inline))
+static __rte_always_inline void
 _qnode_pool_insert(struct qnode_pool *p, struct qnode *n)
 {
 	n->next = NULL;
@@ -198,7 +198,7 @@ _qnode_pool_insert(struct qnode_pool *p, struct qnode *n)
* last item from the queue incurs the penalty of an atomic exchange. Since the
* pool is maintained with a bulk pre-allocation the cost of this is amortised.
*/
-static inline struct qnode *__attribute__ ((always_inline))
+static __rte_always_inline struct qnode *
 _pool_remove(struct qnode_pool *p)
 {
 	struct qnode *head;
@@ -239,7 +239,7 @@ _pool_remove(struct qnode_pool *p)
* This adds a retry to the _pool_remove function
* defined above
*/
-static inline struct qnode *__attribute__ ((always_inline))
+static __rte_always_inline struct qnode *
 _qnode_pool_remove(struct qnode_pool *p)
 {
 	struct qnode *n;
@@ -259,7 +259,7 @@ _qnode_pool_remove(struct qnode_pool *p)
* Allocate a node from the pool
* If the pool is empty add mode nodes
*/
-static inline struct qnode *__attribute__ ((always_inline))
+static __rte_always_inline struct qnode *
 _qnode_alloc(void)
 {
 	struct qnode_pool *p = (THIS_SCHED)->qnode_pool;
@@ -304,7 +304,7 @@ _qnode_alloc(void)
 /*
* free a queue node to the per scheduler pool from which it came
*/
-static inline void __attribute__ ((always_inline))
+static __rte_always_inline void
 _qnode_free(struct qnode *n)
 {
 	struct qnode_pool *p = n->pool;
diff --git a/examples/performance-thread/common/lthread_queue.h b/examples/performance-thread/common/lthread_queue.h
index 4fc2074e4..833ed92b5 100644
--- a/examples/performance-thread/common/lthread_queue.h
+++ b/examples/performance-thread/common/lthread_queue.h
@@ -154,7 +154,7 @@ _lthread_queue_create(const char *name)
 /**
 * Return true if the queue is empty
 */
-static inline int __attribute__ ((always_inline))
+static __rte_always_inline int
 _lthread_queue_empty(struct lthread_queue *q)
 {
 	return q->tail == q->head;
@@ -185,7 +185,7 @@ RTE_DECLARE_PER_LCORE(struct lthread_sched *, this_sched);
* Insert a node into a queue
* this implementation is multi producer safe
*/
-static inline struct qnode *__attribute__ ((always_inline))
+static __rte_always_inline struct qnode *
 _lthread_queue_insert_mp(struct lthread_queue
 			 *q, void *data)
 {
@@ -219,7 +219,7 @@ _lthread_queue_insert_mp(struct lthread_queue
* Insert an node into a queue in single producer mode
* this implementation is NOT mult producer safe
*/
-static inline struct qnode *__attribute__ ((always_inline))
+static __rte_always_inline struct qnode *
 _lthread_queue_insert_sp(struct lthread_queue
 			 *q, void *data)
 {
@@ -247,7 +247,7 @@ _lthread_queue_insert_sp(struct lthread_queue
 /*
 * Remove a node from a queue
 */
-static inline void *__attribute__ ((always_inline))
+static __rte_always_inline void *
 _lthread_queue_poll(struct lthread_queue *q)
 {
 	void *data = NULL;
@@ -278,7 +278,7 @@ _lthread_queue_poll(struct lthread_queue *q)
 /*
 * Remove a node from a queue
 */
-static inline void *__attribute__ ((always_inline))
+static __rte_always_inline void *
 _lthread_queue_remove(struct lthread_queue *q)
 {
 	void *data = NULL;
diff --git a/examples/performance-thread/common/lthread_sched.c b/examples/performance-thread/common/lthread_sched.c
index c64c21ffb..98291478e 100644
--- a/examples/performance-thread/common/lthread_sched.c
+++ b/examples/performance-thread/common/lthread_sched.c
@@ -369,8 +369,8 @@ void lthread_scheduler_shutdown_all(void)
 /*
 * Resume a suspended lthread
 */
-static inline void
-_lthread_resume(struct lthread *lt) __attribute__ ((always_inline));
+static __rte_always_inline void
+_lthread_resume(struct lthread *lt);
 static inline void _lthread_resume(struct lthread *lt)
 {
 	struct lthread_sched *sched = THIS_SCHED;
diff --git a/examples/performance-thread/common/lthread_sched.h b/examples/performance-thread/common/lthread_sched.h
index 7cddda9c5..aa2f0c488 100644
--- a/examples/performance-thread/common/lthread_sched.h
+++ b/examples/performance-thread/common/lthread_sched.h
@@ -112,8 +112,8 @@ static inline uint64_t _sched_now(void)
 	return 1;
 }
 
-static inline void
-_affinitize(void) __attribute__ ((always_inline));
+static __rte_always_inline void
+_affinitize(void);
 static inline void
 _affinitize(void)
 {
@@ -123,8 +123,8 @@ _affinitize(void)
 	ctx_switch(&(THIS_SCHED)->ctx, &lt->ctx);
 }
 
-static inline void
-_suspend(void) __attribute__ ((always_inline));
+static __rte_always_inline void
+_suspend(void);
 static inline void
 _suspend(void)
 {
@@ -136,8 +136,8 @@ _suspend(void)
 	(THIS_SCHED)->nb_blocked_threads--;
 }
 
-static inline void
-_reschedule(void) __attribute__ ((always_inline));
+static __rte_always_inline void
+_reschedule(void);
 static inline void
 _reschedule(void)
 {
diff --git a/examples/performance-thread/l3fwd-thread/main.c b/examples/performance-thread/l3fwd-thread/main.c
index 2d98473eb..c59cd4233 100644
--- a/examples/performance-thread/l3fwd-thread/main.c
+++ b/examples/performance-thread/l3fwd-thread/main.c
@@ -720,7 +720,7 @@ send_single_packet(struct rte_mbuf *m, uint8_t port)
 #if ((APP_LOOKUP_METHOD == APP_LOOKUP_LPM) && \
 	(ENABLE_MULTI_BUFFER_OPTIMIZE == 1))
 
-static inline __attribute__((always_inline)) void
+static __rte_always_inline void
 send_packetsx4(uint8_t port,
 	struct rte_mbuf *m[], uint32_t num)
 {
@@ -1281,7 +1281,7 @@ simple_ipv6_fwd_8pkts(struct rte_mbuf *m[8], uint8_t portid)
 }
 #endif /* APP_LOOKUP_METHOD */
 
-static inline __attribute__((always_inline)) void
+static __rte_always_inline void
 l3fwd_simple_forward(struct rte_mbuf *m, uint8_t portid)
 {
 	struct ether_hdr *eth_hdr;
@@ -1369,7 +1369,7 @@ l3fwd_simple_forward(struct rte_mbuf *m, uint8_t portid)
* If we encounter invalid IPV4 packet, then set destination port for it
* to BAD_PORT value.
*/
-static inline __attribute__((always_inline)) void
+static __rte_always_inline void
 rfc1812_process(struct ipv4_hdr *ipv4_hdr, uint16_t *dp, uint32_t ptype)
 {
 	uint8_t ihl;
@@ -1397,7 +1397,7 @@ rfc1812_process(struct ipv4_hdr *ipv4_hdr, uint16_t *dp, uint32_t ptype)
 #if ((APP_LOOKUP_METHOD == APP_LOOKUP_LPM) && \
 	(ENABLE_MULTI_BUFFER_OPTIMIZE == 1))
 
-static inline __attribute__((always_inline)) uint16_t
+static __rte_always_inline uint16_t
 get_dst_port(struct rte_mbuf *pkt, uint32_t dst_ipv4, uint8_t portid)
 {
 	uint32_t next_hop;
diff --git a/examples/tep_termination/main.c b/examples/tep_termination/main.c
index cd6e3f1cf..83c2189ff 100644
--- a/examples/tep_termination/main.c
+++ b/examples/tep_termination/main.c
@@ -559,7 +559,7 @@ check_ports_num(unsigned max_nb_ports)
* This function routes the TX packet to the correct interface. This may be a local device
* or the physical port.
*/
-static inline void __attribute__((always_inline))
+static __rte_always_inline void
 virtio_tx_route(struct vhost_dev *vdev, struct rte_mbuf *m)
 {
 	struct mbuf_table *tx_q;
diff --git a/examples/vhost/main.c b/examples/vhost/main.c
index e07f86693..b625c52a0 100644
--- a/examples/vhost/main.c
+++ b/examples/vhost/main.c
@@ -691,7 +691,7 @@ static unsigned check_ports_num(unsigned nb_ports)
 	return valid_num_ports;
 }
 
-static inline struct vhost_dev *__attribute__((always_inline))
+static __rte_always_inline struct vhost_dev *
 find_vhost_dev(struct ether_addr *mac)
 {
 	struct vhost_dev *vdev;
@@ -791,7 +791,7 @@ unlink_vmdq(struct vhost_dev *vdev)
 	}
 }
 
-static inline void __attribute__((always_inline))
+static __rte_always_inline void
 virtio_xmit(struct vhost_dev *dst_vdev, struct vhost_dev *src_vdev,
 	    struct rte_mbuf *m)
 {
@@ -815,7 +815,7 @@ virtio_xmit(struct vhost_dev *dst_vdev, struct vhost_dev *src_vdev,
* Check if the packet destination MAC address is for a local device. If so then put
* the packet on that devices RX queue. If not then return.
*/
-static inline int __attribute__((always_inline))
+static __rte_always_inline int
 virtio_tx_local(struct vhost_dev *vdev, struct rte_mbuf *m)
 {
 	struct ether_hdr *pkt_hdr;
@@ -851,7 +851,7 @@ virtio_tx_local(struct vhost_dev *vdev, struct rte_mbuf *m)
* Check if the destination MAC of a packet is one local VM,
* and get its vlan tag, and offset if it is.
*/
-static inline int __attribute__((always_inline))
+static __rte_always_inline int
 find_local_dest(struct vhost_dev *vdev, struct rte_mbuf *m,
 	uint32_t *offset, uint16_t *vlan_tag)
 {
@@ -919,7 +919,7 @@ free_pkts(struct rte_mbuf **pkts, uint16_t n)
 		rte_pktmbuf_free(pkts[n]);
 }
 
-static inline void __attribute__((always_inline))
+static __rte_always_inline void
 do_drain_mbuf_table(struct mbuf_table *tx_q)
 {
 	uint16_t count;
@@ -936,7 +936,7 @@ do_drain_mbuf_table(struct mbuf_table *tx_q)
* This function routes the TX packet to the correct interface. This
* may be a local device or the physical port.
*/
-static inline void __attribute__((always_inline))
+static __rte_always_inline void
 virtio_tx_route(struct vhost_dev *vdev, struct rte_mbuf *m, uint16_t vlan_tag)
 {
 	struct mbuf_table *tx_q;
@@ -1024,7 +1024,7 @@ virtio_tx_route(struct vhost_dev *vdev, struct rte_mbuf *m, uint16_t vlan_tag)
 }
 
 
-static inline void __attribute__((always_inline))
+static __rte_always_inline void
 drain_mbuf_table(struct mbuf_table *tx_q)
 {
 	static uint64_t prev_tsc;
@@ -1044,7 +1044,7 @@ drain_mbuf_table(struct mbuf_table *tx_q)
 	}
 }
 
-static inline void __attribute__((always_inline))
+static __rte_always_inline void
 drain_eth_rx(struct vhost_dev *vdev)
 {
 	uint16_t rx_count, enqueue_count;
@@ -1088,7 +1088,7 @@ drain_eth_rx(struct vhost_dev *vdev)
 		free_pkts(pkts, rx_count);
 }
 
-static inline void __attribute__((always_inline))
+static __rte_always_inline void
 drain_virtio_tx(struct vhost_dev *vdev)
 {
 	struct rte_mbuf *pkts[MAX_PKT_BURST];
diff --git a/examples/vhost/virtio_net.c b/examples/vhost/virtio_net.c
index cc2c3d882..7d184b8d2 100644
--- a/examples/vhost/virtio_net.c
+++ b/examples/vhost/virtio_net.c
@@ -80,7 +80,7 @@ vs_vhost_net_remove(struct vhost_dev *dev)
 	free(dev->mem);
 }
 
-static inline int __attribute__((always_inline))
+static __rte_always_inline int
 enqueue_pkt(struct vhost_dev *dev, struct rte_vhost_vring *vr,
 	    struct rte_mbuf *m, uint16_t desc_idx)
 {
@@ -217,7 +217,7 @@ vs_enqueue_pkts(struct vhost_dev *dev, uint16_t queue_id,
 	return count;
 }
 
-static inline int __attribute__((always_inline))
+static __rte_always_inline int
 dequeue_pkt(struct vhost_dev *dev, struct rte_vhost_vring *vr,
 	    struct rte_mbuf *m, uint16_t desc_idx,
 	    struct rte_mempool *mbuf_pool)
diff --git a/examples/vhost_xen/main.c b/examples/vhost_xen/main.c
index d9ef140f7..f83789176 100644
--- a/examples/vhost_xen/main.c
+++ b/examples/vhost_xen/main.c
@@ -510,7 +510,7 @@ static unsigned check_ports_num(unsigned nb_ports)
* Function to convert guest physical addresses to vhost virtual addresses. This
* is used to convert virtio buffer addresses.
*/
-static inline uint64_t __attribute__((always_inline))
+static __rte_always_inline uint64_t
 gpa_to_vva(struct virtio_net *dev, uint64_t guest_pa)
 {
 	struct virtio_memory_regions *region;
@@ -537,7 +537,7 @@ gpa_to_vva(struct virtio_net *dev, uint64_t guest_pa)
* count is returned to indicate the number of packets that were succesfully
* added to the RX queue.
*/
-static inline uint32_t __attribute__((always_inline))
+static __rte_always_inline uint32_t
 virtio_dev_rx(struct virtio_net *dev, struct rte_mbuf **pkts, uint32_t count)
 {
 	struct vhost_virtqueue *vq;
@@ -662,7 +662,7 @@ virtio_dev_rx(struct virtio_net *dev, struct rte_mbuf **pkts, uint32_t count)
 /*
* Compares a packet destination MAC address to a device MAC address.
*/
-static inline int __attribute__((always_inline))
+static __rte_always_inline int
 ether_addr_cmp(struct ether_addr *ea, struct ether_addr *eb)
 {
 	return ((*(uint64_t *)ea ^ *(uint64_t *)eb) & MAC_ADDR_CMP) == 0;
@@ -757,7 +757,7 @@ unlink_vmdq(struct virtio_net *dev)
* Check if the packet destination MAC address is for a local device. If so then put
* the packet on that devices RX queue. If not then return.
*/
-static inline unsigned __attribute__((always_inline))
+static __rte_always_inline unsigned
 virtio_tx_local(struct virtio_net *dev, struct rte_mbuf *m)
 {
 	struct virtio_net_data_ll *dev_ll;
@@ -814,7 +814,7 @@ virtio_tx_local(struct virtio_net *dev, struct rte_mbuf *m)
* This function routes the TX packet to the correct interface. This may be a local device
* or the physical port.
*/
-static inline void __attribute__((always_inline))
+static __rte_always_inline void
 virtio_tx_route(struct virtio_net* dev, struct rte_mbuf *m, struct rte_mempool *mbuf_pool, uint16_t vlan_tag)
 {
 	struct mbuf_table *tx_q;
@@ -883,7 +883,7 @@ virtio_tx_route(struct virtio_net* dev, struct rte_mbuf *m, struct rte_mempool *
 	return;
 }
 
-static inline void __attribute__((always_inline))
+static __rte_always_inline void
 virtio_dev_tx(struct virtio_net* dev, struct rte_mempool *mbuf_pool)
 {
 	struct rte_mbuf m;
diff --git a/lib/librte_acl/acl_run_altivec.h b/lib/librte_acl/acl_run_altivec.h
index 7d329bcf3..62fd6a22f 100644
--- a/lib/librte_acl/acl_run_altivec.h
+++ b/lib/librte_acl/acl_run_altivec.h
@@ -104,13 +104,13 @@ resolve_priority_altivec(uint64_t transition, int n,
 /*
 * Check for any match in 4 transitions
 */
-static inline __attribute__((always_inline)) uint32_t
+static __rte_always_inline uint32_t
 check_any_match_x4(uint64_t val[])
 {
 	return (val[0] | val[1] | val[2] | val[3]) & RTE_ACL_NODE_MATCH;
 }
 
-static inline __attribute__((always_inline)) void
+static __rte_always_inline void
 acl_match_check_x4(int slot, const struct rte_acl_ctx *ctx, struct parms *parms,
 	struct acl_flow_data *flows, uint64_t transitions[])
 {
diff --git a/lib/librte_acl/acl_run_avx2.h b/lib/librte_acl/acl_run_avx2.h
index b01a46a5c..804e45afa 100644
--- a/lib/librte_acl/acl_run_avx2.h
+++ b/lib/librte_acl/acl_run_avx2.h
@@ -86,7 +86,7 @@ static const rte_ymm_t ymm_range_base = {
* tr_hi contains high 32 bits for 8 transition.
* next_input contains up to 4 input bytes for 8 flows.
*/
-static inline __attribute__((always_inline)) ymm_t
+static __rte_always_inline ymm_t
 transition8(ymm_t next_input, const uint64_t *trans, ymm_t *tr_lo, ymm_t *tr_hi)
 {
 	const int32_t *tr;
diff --git a/lib/librte_acl/acl_run_neon.h b/lib/librte_acl/acl_run_neon.h
index d233ff007..dfa38f5eb 100644
--- a/lib/librte_acl/acl_run_neon.h
+++ b/lib/librte_acl/acl_run_neon.h
@@ -99,13 +99,13 @@ resolve_priority_neon(uint64_t transition, int n, const struct rte_acl_ctx *ctx,
 /*
 * Check for any match in 4 transitions
 */
-static inline __attribute__((always_inline)) uint32_t
+static __rte_always_inline uint32_t
 check_any_match_x4(uint64_t val[])
 {
 	return (val[0] | val[1] | val[2] | val[3]) & RTE_ACL_NODE_MATCH;
 }
 
-static inline __attribute__((always_inline)) void
+static __rte_always_inline void
 acl_match_check_x4(int slot, const struct rte_acl_ctx *ctx, struct parms *parms,
 	struct acl_flow_data *flows, uint64_t transitions[])
 {
@@ -124,7 +124,7 @@ acl_match_check_x4(int slot, const struct rte_acl_ctx *ctx, struct parms *parms,
 /*
 * Process 4 transitions (in 2 NEON Q registers) in parallel
 */
-static inline __attribute__((always_inline)) int32x4_t
+static __rte_always_inline int32x4_t
 transition4(int32x4_t next_input, const uint64_t *trans, uint64_t transitions[])
 {
 	int32x4x2_t tr_hi_lo;
diff --git a/lib/librte_acl/acl_run_sse.h b/lib/librte_acl/acl_run_sse.h
index ad40a6745..72f66e4fc 100644
--- a/lib/librte_acl/acl_run_sse.h
+++ b/lib/librte_acl/acl_run_sse.h
@@ -149,7 +149,7 @@ acl_process_matches(xmm_t *indices, int slot, const struct rte_acl_ctx *ctx,
 /*
 * Check for any match in 4 transitions (contained in 2 SSE registers)
 */
-static inline __attribute__((always_inline)) void
+static __rte_always_inline void
 acl_match_check_x4(int slot, const struct rte_acl_ctx *ctx, struct parms *parms,
 	struct acl_flow_data *flows, xmm_t *indices1, xmm_t *indices2,
 	xmm_t match_mask)
@@ -176,7 +176,7 @@ acl_match_check_x4(int slot, const struct rte_acl_ctx *ctx, struct parms *parms,
 /*
 * Process 4 transitions (in 2 XMM registers) in parallel
 */
-static inline __attribute__((always_inline)) xmm_t
+static __rte_always_inline xmm_t
 transition4(xmm_t next_input, const uint64_t *trans,
 	xmm_t *indices1, xmm_t *indices2)
 {
diff --git a/lib/librte_eal/common/include/arch/arm/rte_io_64.h b/lib/librte_eal/common/include/arch/arm/rte_io_64.h
index 0402125bb..e59e22a0b 100644
--- a/lib/librte_eal/common/include/arch/arm/rte_io_64.h
+++ b/lib/librte_eal/common/include/arch/arm/rte_io_64.h
@@ -44,7 +44,7 @@ extern "C" {
 #include "generic/rte_io.h"
 #include "rte_atomic_64.h"
 
-static inline uint8_t __attribute__((always_inline))
+static __rte_always_inline uint8_t
 rte_read8_relaxed(const volatile void *addr)
 {
 	uint8_t val;
@@ -56,7 +56,7 @@ rte_read8_relaxed(const volatile void *addr)
 	return val;
 }
 
-static inline uint16_t __attribute__((always_inline))
+static __rte_always_inline uint16_t
 rte_read16_relaxed(const volatile void *addr)
 {
 	uint16_t val;
@@ -68,7 +68,7 @@ rte_read16_relaxed(const volatile void *addr)
 	return val;
 }
 
-static inline uint32_t __attribute__((always_inline))
+static __rte_always_inline uint32_t
 rte_read32_relaxed(const volatile void *addr)
 {
 	uint32_t val;
@@ -80,7 +80,7 @@ rte_read32_relaxed(const volatile void *addr)
 	return val;
 }
 
-static inline uint64_t __attribute__((always_inline))
+static __rte_always_inline uint64_t
 rte_read64_relaxed(const volatile void *addr)
 {
 	uint64_t val;
@@ -92,7 +92,7 @@ rte_read64_relaxed(const volatile void *addr)
 	return val;
 }
 
-static inline void __attribute__((always_inline))
+static __rte_always_inline void
 rte_write8_relaxed(uint8_t val, volatile void *addr)
 {
 	asm volatile(
@@ -101,7 +101,7 @@ rte_write8_relaxed(uint8_t val, volatile void *addr)
 		: [val] "r" (val), [addr] "r" (addr));
 }
 
-static inline void __attribute__((always_inline))
+static __rte_always_inline void
 rte_write16_relaxed(uint16_t val, volatile void *addr)
 {
 	asm volatile(
@@ -110,7 +110,7 @@ rte_write16_relaxed(uint16_t val, volatile void *addr)
 		: [val] "r" (val), [addr] "r" (addr));
 }
 
-static inline void __attribute__((always_inline))
+static __rte_always_inline void
 rte_write32_relaxed(uint32_t val, volatile void *addr)
 {
 	asm volatile(
@@ -119,7 +119,7 @@ rte_write32_relaxed(uint32_t val, volatile void *addr)
 		: [val] "r" (val), [addr] "r" (addr));
 }
 
-static inline void __attribute__((always_inline))
+static __rte_always_inline void
 rte_write64_relaxed(uint64_t val, volatile void *addr)
 {
 	asm volatile(
@@ -128,7 +128,7 @@ rte_write64_relaxed(uint64_t val, volatile void *addr)
 		: [val] "r" (val), [addr] "r" (addr));
 }
 
-static inline uint8_t __attribute__((always_inline))
+static __rte_always_inline uint8_t
 rte_read8(const volatile void *addr)
 {
 	uint8_t val;
@@ -137,7 +137,7 @@ rte_read8(const volatile void *addr)
 	return val;
 }
 
-static inline uint16_t __attribute__((always_inline))
+static __rte_always_inline uint16_t
 rte_read16(const volatile void *addr)
 {
 	uint16_t val;
@@ -146,7 +146,7 @@ rte_read16(const volatile void *addr)
 	return val;
 }
 
-static inline uint32_t __attribute__((always_inline))
+static __rte_always_inline uint32_t
 rte_read32(const volatile void *addr)
 {
 	uint32_t val;
@@ -155,7 +155,7 @@ rte_read32(const volatile void *addr)
 	return val;
 }
 
-static inline uint64_t __attribute__((always_inline))
+static __rte_always_inline uint64_t
 rte_read64(const volatile void *addr)
 {
 	uint64_t val;
@@ -164,28 +164,28 @@ rte_read64(const volatile void *addr)
 	return val;
 }
 
-static inline void __attribute__((always_inline))
+static __rte_always_inline void
 rte_write8(uint8_t value, volatile void *addr)
 {
 	rte_io_wmb();
 	rte_write8_relaxed(value, addr);
 }
 
-static inline void __attribute__((always_inline))
+static __rte_always_inline void
 rte_write16(uint16_t value, volatile void *addr)
 {
 	rte_io_wmb();
 	rte_write16_relaxed(value, addr);
 }
 
-static inline void __attribute__((always_inline))
+static __rte_always_inline void
 rte_write32(uint32_t value, volatile void *addr)
 {
 	rte_io_wmb();
 	rte_write32_relaxed(value, addr);
 }
 
-static inline void __attribute__((always_inline))
+static __rte_always_inline void
 rte_write64(uint64_t value, volatile void *addr)
 {
 	rte_io_wmb();
diff --git a/lib/librte_eal/common/include/arch/x86/rte_memcpy.h b/lib/librte_eal/common/include/arch/x86/rte_memcpy.h
index b9785e85e..74c280c2c 100644
--- a/lib/librte_eal/common/include/arch/x86/rte_memcpy.h
+++ b/lib/librte_eal/common/include/arch/x86/rte_memcpy.h
@@ -44,6 +44,7 @@
 #include
 #include
 #include
+#include
 
 #ifdef __cplusplus
 extern "C" {
@@ -64,8 +65,8 @@ extern "C" {
 * @return
 *   Pointer to the destination data.
 */
-static inline void *
-rte_memcpy(void *dst, const void *src, size_t n) __attribute__((always_inline));
+static __rte_always_inline void *
+rte_memcpy(void *dst, const void *src, size_t n);
 
 #ifdef RTE_MACHINE_CPUFLAG_AVX512F
diff --git a/lib/librte_eal/common/include/generic/rte_io.h b/lib/librte_eal/common/include/generic/rte_io.h
index d82ee6951..477e7b592 100644
--- a/lib/librte_eal/common/include/generic/rte_io.h
+++ b/lib/librte_eal/common/include/generic/rte_io.h
@@ -264,55 +264,55 @@ rte_write64(uint64_t value, volatile void *addr);
 
 #ifndef RTE_OVERRIDE_IO_H
 
-static inline uint8_t __attribute__((always_inline))
+static __rte_always_inline uint8_t
 rte_read8_relaxed(const volatile void *addr)
 {
 	return *(const volatile uint8_t *)addr;
 }
 
-static inline uint16_t __attribute__((always_inline))
+static __rte_always_inline uint16_t
 rte_read16_relaxed(const volatile void *addr)
 {
 	return *(const volatile uint16_t *)addr;
 }
 
-static inline uint32_t __attribute__((always_inline))
+static __rte_always_inline uint32_t
 rte_read32_relaxed(const volatile void *addr)
 {
 	return *(const volatile uint32_t *)addr;
 }
 
-static inline uint64_t __attribute__((always_inline))
+static __rte_always_inline uint64_t
 rte_read64_relaxed(const volatile void *addr)
 {
 	return *(const volatile uint64_t *)addr;
 }
 
-static inline void __attribute__((always_inline))
+static __rte_always_inline void
 rte_write8_relaxed(uint8_t value, volatile void *addr)
 {
 	*(volatile uint8_t *)addr = value;
 }
 
-static inline void __attribute__((always_inline))
+static __rte_always_inline void
 rte_write16_relaxed(uint16_t value, volatile void *addr)
 {
 	*(volatile uint16_t *)addr = value;
 }
 
-static inline void __attribute__((always_inline))
+static __rte_always_inline void
 rte_write32_relaxed(uint32_t value, volatile void *addr)
 {
 	*(volatile uint32_t *)addr = value;
 }
 
-static inline void __attribute__((always_inline))
+static __rte_always_inline void
 rte_write64_relaxed(uint64_t value, volatile void *addr)
 {
 	*(volatile uint64_t *)addr = value;
 }
 
-static inline uint8_t __attribute__((always_inline))
+static __rte_always_inline uint8_t
 rte_read8(const volatile void *addr)
 {
 	uint8_t val;
@@ -321,7 +321,7 @@ rte_read8(const volatile void *addr)
 	return val;
 }
 
-static inline uint16_t __attribute__((always_inline))
+static __rte_always_inline uint16_t
 rte_read16(const volatile void *addr)
 {
 	uint16_t val;
@@ -330,7 +330,7 @@ rte_read16(const volatile void *addr)
	return val;
 }
 
-static inline uint32_t __attribute__((always_inline))
+static __rte_always_inline uint32_t
 rte_read32(const volatile void *addr)
 {
	uint32_t val;
@@ -339,7 +339,7 @@ rte_read32(const volatile void *addr)
	return val;
 }
 
-static inline uint64_t __attribute__((always_inline))
+static __rte_always_inline uint64_t
 rte_read64(const volatile void *addr)
 {
	uint64_t val;
@@ -348,28 +348,28 @@ rte_read64(const volatile void *addr)
	return val;
 }
 
-static inline void __attribute__((always_inline))
+static __rte_always_inline void
 rte_write8(uint8_t value, volatile void *addr)
 {
	rte_io_wmb();
	rte_write8_relaxed(value, addr);
 }
 
-static inline void __attribute__((always_inline))
+static __rte_always_inline void
 rte_write16(uint16_t value, volatile void *addr)
 {
	rte_io_wmb();
	rte_write16_relaxed(value, addr);
 }
 
-static inline void __attribute__((always_inline))
+static __rte_always_inline void
 rte_write32(uint32_t value, volatile void *addr)
 {
	rte_io_wmb();
	rte_write32_relaxed(value, addr);
 }
 
-static inline void __attribute__((always_inline))
+static __rte_always_inline void
 rte_write64(uint64_t value, volatile void *addr)
 {
	rte_io_wmb();
diff --git a/lib/librte_ether/rte_ethdev.h b/lib/librte_ether/rte_ethdev.h
index 0f38b45f8..121058c12 100644
--- a/lib/librte_ether/rte_ethdev.h
+++ b/lib/librte_ether/rte_ethdev.h
@@ -3266,7 +3266,7 @@ rte_eth_tx_buffer_flush(uint8_t port_id, uint16_t queue_id,
  *   causing N packets to be sent, and the error callback to be called for
  *   the rest.
  */
-static inline uint16_t __attribute__((always_inline))
+static __rte_always_inline uint16_t
 rte_eth_tx_buffer(uint8_t port_id, uint16_t queue_id,
		struct rte_eth_dev_tx_buffer *buffer, struct rte_mbuf *tx_pkt)
 {
diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
index 1cb03109c..fe605c7a4 100644
--- a/lib/librte_mbuf/rte_mbuf.h
+++ b/lib/librte_mbuf/rte_mbuf.h
@@ -840,7 +840,7 @@ static inline struct rte_mbuf *rte_mbuf_raw_alloc(struct rte_mempool *mp)
  * @param m
  *   The mbuf to be freed.
  */
-static inline void __attribute__((always_inline))
+static __rte_always_inline void
 rte_mbuf_raw_free(struct rte_mbuf *m)
 {
	RTE_ASSERT(RTE_MBUF_DIRECT(m));
@@ -1287,8 +1287,7 @@ static inline void rte_pktmbuf_detach(struct rte_mbuf *m)
  *   - (m) if it is the last reference. It can be recycled or freed.
  *   - (NULL) if the mbuf still has remaining references on it.
  */
-__attribute__((always_inline))
-static inline struct rte_mbuf *
+static __rte_always_inline struct rte_mbuf *
 rte_pktmbuf_prefree_seg(struct rte_mbuf *m)
 {
	__rte_mbuf_sanity_check(m, 0);
@@ -1339,7 +1338,7 @@ __rte_pktmbuf_prefree_seg(struct rte_mbuf *m)
  * @param m
  *   The packet mbuf segment to be freed.
  */
-static inline void __attribute__((always_inline))
+static __rte_always_inline void
 rte_pktmbuf_free_seg(struct rte_mbuf *m)
 {
	m = rte_pktmbuf_prefree_seg(m);
diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index 48bc8ea3c..76b5b3b15 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -993,7 +993,7 @@ rte_mempool_cache_free(struct rte_mempool_cache *cache);
  * @param mp
  *   A pointer to the mempool.
  */
-static inline void __attribute__((always_inline))
+static __rte_always_inline void
 rte_mempool_cache_flush(struct rte_mempool_cache *cache,
			struct rte_mempool *mp)
 {
@@ -1011,7 +1011,7 @@ rte_mempool_cache_flush(struct rte_mempool_cache *cache,
  * @return
  *   A pointer to the mempool cache or NULL if disabled or non-EAL thread.
  */
-static inline struct rte_mempool_cache *__attribute__((always_inline))
+static __rte_always_inline struct rte_mempool_cache *
 rte_mempool_default_cache(struct rte_mempool *mp, unsigned lcore_id)
 {
	if (mp->cache_size == 0)
@@ -1038,7 +1038,7 @@ rte_mempool_default_cache(struct rte_mempool *mp, unsigned lcore_id)
  *   The flags used for the mempool creation.
  *   Single-producer (MEMPOOL_F_SP_PUT flag) or multi-producers.
  */
-static inline void __attribute__((always_inline))
+static __rte_always_inline void
 __mempool_generic_put(struct rte_mempool *mp, void * const *obj_table,
		      unsigned n, struct rte_mempool_cache *cache)
 {
@@ -1100,7 +1100,7 @@ __mempool_generic_put(struct rte_mempool *mp, void * const *obj_table,
  *   The flags used for the mempool creation.
  *   Single-producer (MEMPOOL_F_SP_PUT flag) or multi-producers.
  */
-static inline void __attribute__((always_inline))
+static __rte_always_inline void
 rte_mempool_generic_put(struct rte_mempool *mp, void * const *obj_table,
			unsigned n, struct rte_mempool_cache *cache,
			__rte_unused int flags)
@@ -1123,7 +1123,7 @@ rte_mempool_generic_put(struct rte_mempool *mp, void * const *obj_table,
  * @param n
  *   The number of objects to add in the mempool from obj_table.
  */
-static inline void __attribute__((always_inline))
+static __rte_always_inline void
 rte_mempool_put_bulk(struct rte_mempool *mp, void * const *obj_table,
		     unsigned n)
 {
@@ -1144,7 +1144,7 @@ rte_mempool_put_bulk(struct rte_mempool *mp, void * const *obj_table,
  * @param obj
  *   A pointer to the object to be added.
  */
-static inline void __attribute__((always_inline))
+static __rte_always_inline void
 rte_mempool_put(struct rte_mempool *mp, void *obj)
 {
	rte_mempool_put_bulk(mp, &obj, 1);
@@ -1167,7 +1167,7 @@ rte_mempool_put(struct rte_mempool *mp, void *obj)
  *   - >=0: Success; number of objects supplied.
  *   - <0: Error; code of ring dequeue function.
  */
-static inline int __attribute__((always_inline))
+static __rte_always_inline int
 __mempool_generic_get(struct rte_mempool *mp, void **obj_table,
		      unsigned n, struct rte_mempool_cache *cache)
 {
@@ -1248,7 +1248,7 @@ __mempool_generic_get(struct rte_mempool *mp, void **obj_table,
  *   - 0: Success; objects taken.
  *   - -ENOENT: Not enough entries in the mempool; no object is retrieved.
  */
-static inline int __attribute__((always_inline))
+static __rte_always_inline int
 rte_mempool_generic_get(struct rte_mempool *mp, void **obj_table, unsigned n,
			struct rte_mempool_cache *cache, __rte_unused int flags)
 {
@@ -1281,7 +1281,7 @@ rte_mempool_generic_get(struct rte_mempool *mp, void **obj_table, unsigned n,
  *   - 0: Success; objects taken
  *   - -ENOENT: Not enough entries in the mempool; no object is retrieved.
  */
-static inline int __attribute__((always_inline))
+static __rte_always_inline int
 rte_mempool_get_bulk(struct rte_mempool *mp, void **obj_table, unsigned n)
 {
	struct rte_mempool_cache *cache;
@@ -1309,7 +1309,7 @@ rte_mempool_get_bulk(struct rte_mempool *mp, void **obj_table, unsigned n)
  *   - 0: Success; objects taken.
  *   - -ENOENT: Not enough entries in the mempool; no object is retrieved.
  */
-static inline int __attribute__((always_inline))
+static __rte_always_inline int
 rte_mempool_get(struct rte_mempool *mp, void **obj_p)
 {
	return rte_mempool_get_bulk(mp, obj_p, 1);
diff --git a/lib/librte_net/net_crc_sse.h b/lib/librte_net/net_crc_sse.h
index 8bce522a7..ac93637bf 100644
--- a/lib/librte_net/net_crc_sse.h
+++ b/lib/librte_net/net_crc_sse.h
@@ -73,7 +73,7 @@ struct crc_pclmulqdq_ctx crc16_ccitt_pclmulqdq __rte_aligned(16);
  * @return
  *   New 16 byte folded data
  */
-static inline __attribute__((always_inline)) __m128i
+static __rte_always_inline __m128i
 crcr32_folding_round(__m128i data_block,
		__m128i precomp,
		__m128i fold)
@@ -96,7 +96,7 @@ crcr32_folding_round(__m128i data_block,
  *   64 bits reduced data
  */
-static inline __attribute__((always_inline)) __m128i
+static __rte_always_inline __m128i
 crcr32_reduce_128_to_64(__m128i data128, __m128i precomp)
 {
	__m128i tmp0, tmp1, tmp2;
@@ -125,7 +125,7 @@ crcr32_reduce_128_to_64(__m128i data128, __m128i precomp)
  *   reduced 32 bits data
  */
-static inline __attribute__((always_inline)) uint32_t
+static __rte_always_inline uint32_t
 crcr32_reduce_64_to_32(__m128i data64, __m128i precomp)
 {
	static const uint32_t mask1[4] __rte_aligned(16) = {
@@ -171,7 +171,7 @@ static const uint8_t crc_xmm_shift_tab[48] __rte_aligned(16) = {
  *   reg << (num * 8)
  */
-static inline __attribute__((always_inline)) __m128i
+static __rte_always_inline __m128i
 xmm_shift_left(__m128i reg, const unsigned int num)
 {
	const __m128i *p = (const __m128i *)(crc_xmm_shift_tab + 16 - num);
@@ -179,7 +179,7 @@ xmm_shift_left(__m128i reg, const unsigned int num)
	return _mm_shuffle_epi8(reg, _mm_loadu_si128(p));
 }
 
-static inline __attribute__((always_inline)) uint32_t
+static __rte_always_inline uint32_t
 crc32_eth_calc_pclmulqdq(
	const uint8_t *data,
	uint32_t data_len,
diff --git a/lib/librte_net/rte_net_crc.c b/lib/librte_net/rte_net_crc.c
index 9d1ee63fa..0391c7209 100644
--- a/lib/librte_net/rte_net_crc.c
+++ b/lib/librte_net/rte_net_crc.c
@@ -116,7 +116,7 @@ crc32_eth_init_lut(uint32_t poly,
	}
 }
 
-static inline __attribute__((always_inline)) uint32_t
+static __rte_always_inline uint32_t
 crc32_eth_calc_lut(const uint8_t *data,
	uint32_t data_len,
	uint32_t crc,
diff --git a/lib/librte_port/rte_port_ring.c b/lib/librte_port/rte_port_ring.c
index 64bd965f5..a4e709c96 100644
--- a/lib/librte_port/rte_port_ring.c
+++ b/lib/librte_port/rte_port_ring.c
@@ -293,7 +293,7 @@ rte_port_ring_multi_writer_tx(void *port, struct rte_mbuf *pkt)
	return 0;
 }
 
-static inline int __attribute__((always_inline))
+static __rte_always_inline int
 rte_port_ring_writer_tx_bulk_internal(void *port,
		struct rte_mbuf **pkts,
		uint64_t pkts_mask,
@@ -609,7 +609,7 @@ rte_port_ring_multi_writer_nodrop_tx(void *port, struct rte_mbuf *pkt)
	return 0;
 }
 
-static inline int __attribute__((always_inline))
+static __rte_always_inline int
 rte_port_ring_writer_nodrop_tx_bulk_internal(void *port,
		struct rte_mbuf **pkts,
		uint64_t pkts_mask,
diff --git a/lib/librte_ring/rte_ring.h b/lib/librte_ring/rte_ring.h
index 97f025a1f..e4e910b4f 100644
--- a/lib/librte_ring/rte_ring.h
+++ b/lib/librte_ring/rte_ring.h
@@ -345,7 +345,7 @@ void rte_ring_dump(FILE *f, const struct rte_ring *r);
	} \
 } while (0)
 
-static inline __attribute__((always_inline)) void
+static __rte_always_inline void
 update_tail(struct rte_ring_headtail *ht, uint32_t old_val, uint32_t new_val,
		uint32_t single)
 {
@@ -383,7 +383,7 @@ update_tail(struct rte_ring_headtail *ht, uint32_t old_val, uint32_t new_val,
  *   Actual number of objects enqueued.
  *   If behavior == RTE_RING_QUEUE_FIXED, this will be 0 or n only.
  */
-static inline __attribute__((always_inline)) unsigned int
+static __rte_always_inline unsigned int
 __rte_ring_move_prod_head(struct rte_ring *r, int is_sp,
		unsigned int n, enum rte_ring_queue_behavior behavior,
		uint32_t *old_head, uint32_t *new_head,
@@ -443,7 +443,7 @@ __rte_ring_move_prod_head(struct rte_ring *r, int is_sp,
  *   Actual number of objects enqueued.
  *   If behavior == RTE_RING_QUEUE_FIXED, this will be 0 or n only.
  */
-static inline __attribute__((always_inline)) unsigned int
+static __rte_always_inline unsigned int
 __rte_ring_do_enqueue(struct rte_ring *r, void * const *obj_table,
		unsigned int n, enum rte_ring_queue_behavior behavior,
		int is_sp, unsigned int *free_space)
@@ -489,7 +489,7 @@ __rte_ring_do_enqueue(struct rte_ring *r, void * const *obj_table,
  *   - Actual number of objects dequeued.
  *     If behavior == RTE_RING_QUEUE_FIXED, this will be 0 or n only.
  */
-static inline __attribute__((always_inline)) unsigned int
+static __rte_always_inline unsigned int
 __rte_ring_move_cons_head(struct rte_ring *r, int is_sc,
		unsigned int n, enum rte_ring_queue_behavior behavior,
		uint32_t *old_head, uint32_t *new_head,
@@ -548,7 +548,7 @@ __rte_ring_move_cons_head(struct rte_ring *r, int is_sc,
  *   - Actual number of objects dequeued.
  *     If behavior == RTE_RING_QUEUE_FIXED, this will be 0 or n only.
  */
-static inline __attribute__((always_inline)) unsigned int
+static __rte_always_inline unsigned int
 __rte_ring_do_dequeue(struct rte_ring *r, void **obj_table,
		unsigned int n, enum rte_ring_queue_behavior behavior,
		int is_sc, unsigned int *available)
@@ -590,7 +590,7 @@ __rte_ring_do_dequeue(struct rte_ring *r, void **obj_table,
  * @return
  *   The number of objects enqueued, either 0 or n
  */
-static inline unsigned int __attribute__((always_inline))
+static __rte_always_inline unsigned int
 rte_ring_mp_enqueue_bulk(struct rte_ring *r, void * const *obj_table,
			 unsigned int n, unsigned int *free_space)
 {
@@ -613,7 +613,7 @@ rte_ring_mp_enqueue_bulk(struct rte_ring *r, void * const *obj_table,
  * @return
  *   The number of objects enqueued, either 0 or n
  */
-static inline unsigned int __attribute__((always_inline))
+static __rte_always_inline unsigned int
 rte_ring_sp_enqueue_bulk(struct rte_ring *r, void * const *obj_table,
			 unsigned int n, unsigned int *free_space)
 {
@@ -640,7 +640,7 @@ rte_ring_sp_enqueue_bulk(struct rte_ring *r, void * const *obj_table,
  * @return
  *   The number of objects enqueued, either 0 or n
  */
-static inline unsigned int __attribute__((always_inline))
+static __rte_always_inline unsigned int
 rte_ring_enqueue_bulk(struct rte_ring *r, void * const *obj_table,
		      unsigned int n, unsigned int *free_space)
 {
@@ -662,7 +662,7 @@ rte_ring_enqueue_bulk(struct rte_ring *r, void * const *obj_table,
  *   - 0: Success; objects enqueued.
  *   - -ENOBUFS: Not enough room in the ring to enqueue; no object is enqueued.
  */
-static inline int __attribute__((always_inline))
+static __rte_always_inline int
 rte_ring_mp_enqueue(struct rte_ring *r, void *obj)
 {
	return rte_ring_mp_enqueue_bulk(r, &obj, 1, NULL) ? 0 : -ENOBUFS;
@@ -679,7 +679,7 @@ rte_ring_mp_enqueue(struct rte_ring *r, void *obj)
  *   - 0: Success; objects enqueued.
  *   - -ENOBUFS: Not enough room in the ring to enqueue; no object is enqueued.
  */
-static inline int __attribute__((always_inline))
+static __rte_always_inline int
 rte_ring_sp_enqueue(struct rte_ring *r, void *obj)
 {
	return rte_ring_sp_enqueue_bulk(r, &obj, 1, NULL) ? 0 : -ENOBUFS;
@@ -700,7 +700,7 @@ rte_ring_sp_enqueue(struct rte_ring *r, void *obj)
  *   - 0: Success; objects enqueued.
  *   - -ENOBUFS: Not enough room in the ring to enqueue; no object is enqueued.
  */
-static inline int __attribute__((always_inline))
+static __rte_always_inline int
 rte_ring_enqueue(struct rte_ring *r, void *obj)
 {
	return rte_ring_enqueue_bulk(r, &obj, 1, NULL) ? 0 : -ENOBUFS;
@@ -724,7 +724,7 @@ rte_ring_enqueue(struct rte_ring *r, void *obj)
  * @return
  *   The number of objects dequeued, either 0 or n
  */
-static inline unsigned int __attribute__((always_inline))
+static __rte_always_inline unsigned int
 rte_ring_mc_dequeue_bulk(struct rte_ring *r, void **obj_table,
		unsigned int n, unsigned int *available)
 {
@@ -748,7 +748,7 @@ rte_ring_mc_dequeue_bulk(struct rte_ring *r, void **obj_table,
  * @return
  *   The number of objects dequeued, either 0 or n
  */
-static inline unsigned int __attribute__((always_inline))
+static __rte_always_inline unsigned int
 rte_ring_sc_dequeue_bulk(struct rte_ring *r, void **obj_table,
		unsigned int n, unsigned int *available)
 {
@@ -775,7 +775,7 @@ rte_ring_sc_dequeue_bulk(struct rte_ring *r, void **obj_table,
  * @return
  *   The number of objects dequeued, either 0 or n
  */
-static inline unsigned int __attribute__((always_inline))
+static __rte_always_inline unsigned int
 rte_ring_dequeue_bulk(struct rte_ring *r, void **obj_table, unsigned int n,
		unsigned int *available)
 {
@@ -798,7 +798,7 @@ rte_ring_dequeue_bulk(struct rte_ring *r, void **obj_table, unsigned int n,
  *   - -ENOENT: Not enough entries in the ring to dequeue; no object is
  *     dequeued.
  */
-static inline int __attribute__((always_inline))
+static __rte_always_inline int
 rte_ring_mc_dequeue(struct rte_ring *r, void **obj_p)
 {
	return rte_ring_mc_dequeue_bulk(r, obj_p, 1, NULL) ? 0 : -ENOBUFS;
@@ -816,7 +816,7 @@ rte_ring_mc_dequeue(struct rte_ring *r, void **obj_p)
  *   - -ENOENT: Not enough entries in the ring to dequeue, no object is
  *     dequeued.
  */
-static inline int __attribute__((always_inline))
+static __rte_always_inline int
 rte_ring_sc_dequeue(struct rte_ring *r, void **obj_p)
 {
	return rte_ring_sc_dequeue_bulk(r, obj_p, 1, NULL) ? 0 : -ENOBUFS;
@@ -838,7 +838,7 @@ rte_ring_sc_dequeue(struct rte_ring *r, void **obj_p)
  *   - -ENOENT: Not enough entries in the ring to dequeue, no object is
  *     dequeued.
  */
-static inline int __attribute__((always_inline))
+static __rte_always_inline int
 rte_ring_dequeue(struct rte_ring *r, void **obj_p)
 {
	return rte_ring_dequeue_bulk(r, obj_p, 1, NULL) ? 0 : -ENOENT;
@@ -962,7 +962,7 @@ struct rte_ring *rte_ring_lookup(const char *name);
  * @return
  *   - n: Actual number of objects enqueued.
  */
-static inline unsigned __attribute__((always_inline))
+static __rte_always_inline unsigned
 rte_ring_mp_enqueue_burst(struct rte_ring *r, void * const *obj_table,
			 unsigned int n, unsigned int *free_space)
 {
@@ -985,7 +985,7 @@ rte_ring_mp_enqueue_burst(struct rte_ring *r, void * const *obj_table,
  * @return
  *   - n: Actual number of objects enqueued.
  */
-static inline unsigned __attribute__((always_inline))
+static __rte_always_inline unsigned
 rte_ring_sp_enqueue_burst(struct rte_ring *r, void * const *obj_table,
			 unsigned int n, unsigned int *free_space)
 {
@@ -1012,7 +1012,7 @@ rte_ring_sp_enqueue_burst(struct rte_ring *r, void * const *obj_table,
  * @return
  *   - n: Actual number of objects enqueued.
  */
-static inline unsigned __attribute__((always_inline))
+static __rte_always_inline unsigned
 rte_ring_enqueue_burst(struct rte_ring *r, void * const *obj_table,
		      unsigned int n, unsigned int *free_space)
 {
@@ -1040,7 +1040,7 @@ rte_ring_enqueue_burst(struct rte_ring *r, void * const *obj_table,
  * @return
  *   - n: Actual number of objects dequeued, 0 if ring is empty
  */
-static inline unsigned __attribute__((always_inline))
+static __rte_always_inline unsigned
 rte_ring_mc_dequeue_burst(struct rte_ring *r, void **obj_table,
		unsigned int n, unsigned int *available)
 {
@@ -1065,7 +1065,7 @@ rte_ring_mc_dequeue_burst(struct rte_ring *r, void **obj_table,
  * @return
  *   - n: Actual number of objects dequeued, 0 if ring is empty
  */
-static inline unsigned __attribute__((always_inline))
+static __rte_always_inline unsigned
 rte_ring_sc_dequeue_burst(struct rte_ring *r, void **obj_table,
		unsigned int n, unsigned int *available)
 {
@@ -1092,7 +1092,7 @@ rte_ring_sc_dequeue_burst(struct rte_ring *r, void **obj_table,
  * @return
  *   - Number of objects dequeued
  */
-static inline unsigned __attribute__((always_inline))
+static __rte_always_inline unsigned
 rte_ring_dequeue_burst(struct rte_ring *r, void **obj_table,
		unsigned int n, unsigned int *available)
 {
diff --git a/lib/librte_vhost/rte_vhost.h b/lib/librte_vhost/rte_vhost.h
index 605e47cbf..22d0db23d 100644
--- a/lib/librte_vhost/rte_vhost.h
+++ b/lib/librte_vhost/rte_vhost.h
@@ -120,7 +120,7 @@ struct vhost_device_ops {
  * @return
  *   the host virtual address on success, 0 on failure
  */
-static inline uint64_t __attribute__((always_inline))
+static __rte_always_inline uint64_t
 rte_vhost_gpa_to_vva(struct rte_vhost_memory *mem, uint64_t gpa)
 {
	struct rte_vhost_mem_region *reg;
diff --git a/lib/librte_vhost/vhost.h b/lib/librte_vhost/vhost.h
index ddd8a9c43..0f294f395 100644
--- a/lib/librte_vhost/vhost.h
+++ b/lib/librte_vhost/vhost.h
@@ -201,13 +201,13 @@ struct virtio_net {
 
 #define VHOST_LOG_PAGE	4096
 
-static inline void __attribute__((always_inline))
+static __rte_always_inline void
 vhost_log_page(uint8_t *log_base, uint64_t page)
 {
	log_base[page / 8] |= 1 << (page % 8);
 }
 
-static inline void __attribute__((always_inline))
+static __rte_always_inline void
 vhost_log_write(struct virtio_net *dev, uint64_t addr, uint64_t len)
 {
	uint64_t page;
@@ -229,7 +229,7 @@ vhost_log_write(struct virtio_net *dev, uint64_t addr, uint64_t len)
	}
 }
 
-static inline void __attribute__((always_inline))
+static __rte_always_inline void
 vhost_log_used_vring(struct virtio_net *dev, struct vhost_virtqueue *vq,
		     uint64_t offset, uint64_t len)
 {
@@ -272,7 +272,7 @@ extern uint64_t VHOST_FEATURES;
 extern struct virtio_net *vhost_devices[MAX_VHOST_DEVICE];
 
 /* Convert guest physical address to host physical address */
-static inline phys_addr_t __attribute__((always_inline))
+static __rte_always_inline phys_addr_t
 gpa_to_hpa(struct virtio_net *dev, uint64_t gpa, uint64_t size)
 {
	uint32_t i;
diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c
index 48219e050..b5d809676 100644
--- a/lib/librte_vhost/virtio_net.c
+++ b/lib/librte_vhost/virtio_net.c
@@ -55,7 +55,7 @@ is_valid_virt_queue_idx(uint32_t idx, int is_tx, uint32_t nr_vring)
	return (is_tx ^ (idx & 1)) == 0 && idx < nr_vring;
 }
 
-static inline void __attribute__((always_inline))
+static __rte_always_inline void
 do_flush_shadow_used_ring(struct virtio_net *dev, struct vhost_virtqueue *vq,
			  uint16_t to, uint16_t from, uint16_t size)
 {
@@ -67,7 +67,7 @@ do_flush_shadow_used_ring(struct virtio_net *dev, struct vhost_virtqueue *vq,
			size * sizeof(struct vring_used_elem));
 }
 
-static inline void __attribute__((always_inline))
+static __rte_always_inline void
 flush_shadow_used_ring(struct virtio_net *dev, struct vhost_virtqueue *vq)
 {
	uint16_t used_idx = vq->last_used_idx & (vq->size - 1);
@@ -95,7 +95,7 @@ flush_shadow_used_ring(struct virtio_net *dev, struct vhost_virtqueue *vq)
			sizeof(vq->used->idx));
 }
 
-static inline void __attribute__((always_inline))
+static __rte_always_inline void
 update_shadow_used_ring(struct vhost_virtqueue *vq,
			 uint16_t desc_idx, uint16_t len)
 {
@@ -153,7 +153,7 @@ virtio_enqueue_offload(struct rte_mbuf *m_buf, struct virtio_net_hdr *net_hdr)
	}
 }
 
-static inline int __attribute__((always_inline))
+static __rte_always_inline int
 copy_mbuf_to_desc(struct virtio_net *dev, struct vring_desc *descs,
		  struct rte_mbuf *m, uint16_t desc_idx, uint32_t size)
 {
@@ -237,7 +237,7 @@ copy_mbuf_to_desc(struct virtio_net *dev, struct vring_desc *descs,
  * added to the RX queue. This function works when the mbuf is scattered, but
  * it doesn't support the mergeable feature.
  */
-static inline uint32_t __attribute__((always_inline))
+static __rte_always_inline uint32_t
 virtio_dev_rx(struct virtio_net *dev, uint16_t queue_id,
	      struct rte_mbuf **pkts, uint32_t count)
 {
@@ -335,7 +335,7 @@ virtio_dev_rx(struct virtio_net *dev, uint16_t queue_id,
	return count;
 }
 
-static inline int __attribute__((always_inline))
+static __rte_always_inline int
 fill_vec_buf(struct virtio_net *dev, struct vhost_virtqueue *vq,
			 uint32_t avail_idx, uint32_t *vec_idx,
			 struct buf_vector *buf_vec, uint16_t *desc_chain_head,
@@ -424,7 +424,7 @@ reserve_avail_buf_mergeable(struct virtio_net *dev, struct vhost_virtqueue *vq,
	return 0;
 }
 
-static inline int __attribute__((always_inline))
+static __rte_always_inline int
 copy_mbuf_to_desc_mergeable(struct virtio_net *dev, struct rte_mbuf *m,
			    struct buf_vector *buf_vec, uint16_t num_buffers)
 {
@@ -512,7 +512,7 @@ copy_mbuf_to_desc_mergeable(struct virtio_net *dev, struct rte_mbuf *m,
	return 0;
 }
 
-static inline uint32_t __attribute__((always_inline))
+static __rte_always_inline uint32_t
 virtio_dev_merge_rx(struct virtio_net *dev, uint16_t queue_id,
	struct rte_mbuf **pkts, uint32_t count)
 {
@@ -655,7 +655,7 @@ parse_ethernet(struct rte_mbuf *m, uint16_t *l4_proto, void **l4_hdr)
	}
 }
 
-static inline void __attribute__((always_inline))
+static __rte_always_inline void
 vhost_dequeue_offload(struct virtio_net_hdr *hdr, struct rte_mbuf *m)
 {
	uint16_t l4_proto = 0;
@@ -743,13 +743,13 @@ make_rarp_packet(struct rte_mbuf *rarp_mbuf, const struct ether_addr *mac)
	return 0;
 }
 
-static inline void __attribute__((always_inline))
+static __rte_always_inline void
 put_zmbuf(struct zcopy_mbuf *zmbuf)
 {
	zmbuf->in_use = 0;
 }
 
-static inline int __attribute__((always_inline))
+static __rte_always_inline int
 copy_desc_to_mbuf(struct virtio_net *dev, struct vring_desc *descs,
		  uint16_t max_desc, struct rte_mbuf *m, uint16_t desc_idx,
		  struct rte_mempool *mbuf_pool)
@@ -899,7 +899,7 @@ copy_desc_to_mbuf(struct virtio_net *dev, struct vring_desc *descs,
	return 0;
 }
 
-static inline void __attribute__((always_inline))
+static __rte_always_inline void
 update_used_ring(struct virtio_net *dev, struct vhost_virtqueue *vq,
		 uint32_t used_idx, uint32_t desc_idx)
 {
@@ -910,7 +910,7 @@ update_used_ring(struct virtio_net *dev, struct vhost_virtqueue *vq,
			sizeof(vq->used->ring[used_idx]));
 }
 
-static inline void __attribute__((always_inline))
+static __rte_always_inline void
 update_used_idx(struct virtio_net *dev, struct vhost_virtqueue *vq,
		uint32_t count)
 {
@@ -930,7 +930,7 @@ update_used_idx(struct virtio_net *dev, struct vhost_virtqueue *vq,
		eventfd_write(vq->callfd, (eventfd_t)1);
 }
 
-static inline struct zcopy_mbuf *__attribute__((always_inline))
+static __rte_always_inline struct zcopy_mbuf *
 get_zmbuf(struct vhost_virtqueue *vq)
 {
	uint16_t i;
@@ -961,7 +961,7 @@ get_zmbuf(struct vhost_virtqueue *vq)
	return NULL;
 }
 
-static inline bool __attribute__((always_inline))
+static __rte_always_inline bool
 mbuf_is_consumed(struct rte_mbuf *m)
 {
	while (m) {
diff --git a/test/test/test_xmmt_ops.h b/test/test/test_xmmt_ops.h
index 42174d2c9..ef014818b 100644
--- a/test/test/test_xmmt_ops.h
+++ b/test/test/test_xmmt_ops.h
@@ -44,7 +44,7 @@
 #define vect_loadu_sil128(p) vld1q_s32((const int32_t *)p)
 
 /* sets the 4 signed 32-bit integer values and returns the xmm_t variable */
-static inline xmm_t __attribute__((always_inline))
+static __rte_always_inline xmm_t
 vect_set_epi32(int i3, int i2, int i1, int i0)
 {
	int32_t data[4] = {i0, i1, i2, i3};
@@ -70,7 +70,7 @@ vect_set_epi32(int i3, int i2, int i1, int i0)
 #define vect_loadu_sil128(p) vec_ld(0, p)
 
 /* sets the 4 signed 32-bit integer values and returns the xmm_t variable */
-static inline xmm_t __attribute__((always_inline))
+static __rte_always_inline xmm_t
 vect_set_epi32(int i3, int i2, int i1, int i0)
 {
	xmm_t data = (xmm_t){i0, i1, i2, i3};
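
For readers skimming the conversion: every hunk above is mechanical. The __rte_always_inline macro that the call sites now use comes from rte_common.h (which is why the x86 rte_memcpy.h hunk adds that include), and is expected to expand to exactly the attribute string being removed, i.e.

    #define __rte_always_inline inline __attribute__((always_inline))

so the generated code should be unchanged; only the spelling at each definition site differs. Below is a minimal, self-contained sketch of the pattern for reference; the macro body mirrors what this series adds to rte_common.h, and demo_u32_add() is a hypothetical helper, not part of the patch:

    #include <stdint.h>
    #include <stdio.h>

    /* stand-in for the definition this series adds to rte_common.h */
    #define __rte_always_inline inline __attribute__((always_inline))

    /* hypothetical helper, written in the converted style */
    static __rte_always_inline uint32_t
    demo_u32_add(uint32_t a, uint32_t b)
    {
    	return a + b;
    }

    int main(void)
    {
    	/* GCC/clang inline always_inline functions regardless of -O level */
    	printf("%u\n", demo_u32_add(40, 2));
    	return 0;
    }

Centralizing the attribute behind one macro gives a single point of control if a compiler ever needs a different spelling, which appears to be the motivation for the churn above.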