From patchwork Mon Feb 10 16:20:24 2020
X-Patchwork-Submitter: Thomas Monjalon
X-Patchwork-Id: 65702
X-Patchwork-Delegate: david.marchand@redhat.com
From: Thomas Monjalon
To: dev@dpdk.org
Cc: Jerin Jacob, Qiming Yang, Wenzhuo Lu, Bruce Richardson,
 Konstantin Ananyev, David Hunt
Date: Mon, 10 Feb 2020 17:20:24 +0100
Message-Id: <20200210162032.1177478-8-thomas@monjalon.net>
In-Reply-To: <20200210162032.1177478-1-thomas@monjalon.net>
References: <20200210162032.1177478-1-thomas@monjalon.net>
Subject: [dpdk-dev] [PATCH 20.05 07/15] replace always-inline attributes
List-Id: DPDK patches and discussions

There is a macro __rte_always_inline, forcing functions to be inlined,
which is now used where appropriate for consistency.
Signed-off-by: Thomas Monjalon
---
 app/test-eventdev/test_order_atq.c            |  2 +-
 app/test-eventdev/test_order_common.h         |  4 ++--
 app/test-eventdev/test_order_queue.c          |  2 +-
 app/test-eventdev/test_perf_atq.c             |  4 ++--
 app/test-eventdev/test_perf_common.h          |  4 ++--
 app/test-eventdev/test_perf_queue.c           |  4 ++--
 drivers/net/ice/ice_rxtx.c                    |  2 +-
 .../common/include/arch/x86/rte_rtm.h         |  6 +++---
 lib/librte_power/rte_power_empty_poll.c       | 18 +++++++++---------
 9 files changed, 23 insertions(+), 23 deletions(-)

diff --git a/app/test-eventdev/test_order_atq.c b/app/test-eventdev/test_order_atq.c
index abccbccacb..3366cfce9a 100644
--- a/app/test-eventdev/test_order_atq.c
+++ b/app/test-eventdev/test_order_atq.c
@@ -9,7 +9,7 @@
 /* See http://doc.dpdk.org/guides/tools/testeventdev.html for test details */
 
-static inline __attribute__((always_inline)) void
+static __rte_always_inline void
 order_atq_process_stage_0(struct rte_event *const ev)
 {
 	ev->sub_event_type = 1; /* move to stage 1 (atomic) on the same queue */
diff --git a/app/test-eventdev/test_order_common.h b/app/test-eventdev/test_order_common.h
index 22a1cc8325..e0fe9c968a 100644
--- a/app/test-eventdev/test_order_common.h
+++ b/app/test-eventdev/test_order_common.h
@@ -62,7 +62,7 @@ order_nb_event_ports(struct evt_options *opt)
 	return evt_nr_active_lcores(opt->wlcores) + 1 /* producer */;
 }
 
-static inline __attribute__((always_inline)) void
+static __rte_always_inline void
 order_process_stage_1(struct test_order *const t,
 		struct rte_event *const ev, const uint32_t nb_flows,
 		uint32_t *const expected_flow_seq,
@@ -87,7 +87,7 @@ order_process_stage_1(struct test_order *const t,
 	rte_atomic64_sub(outstand_pkts, 1);
 }
 
-static inline __attribute__((always_inline)) void
+static __rte_always_inline void
 order_process_stage_invalid(struct test_order *const t,
 			struct rte_event *const ev)
 {
diff --git a/app/test-eventdev/test_order_queue.c b/app/test-eventdev/test_order_queue.c
index 7ac570c730..495efd92f9 100644
--- a/app/test-eventdev/test_order_queue.c
+++ b/app/test-eventdev/test_order_queue.c
@@ -9,7 +9,7 @@
 /* See http://doc.dpdk.org/guides/tools/testeventdev.html for test details */
 
-static inline __attribute__((always_inline)) void
+static __rte_always_inline void
 order_queue_process_stage_0(struct rte_event *const ev)
 {
 	ev->queue_id = 1; /* q1 atomic queue */
diff --git a/app/test-eventdev/test_perf_atq.c b/app/test-eventdev/test_perf_atq.c
index d0241ec4ae..8fd51004ee 100644
--- a/app/test-eventdev/test_perf_atq.c
+++ b/app/test-eventdev/test_perf_atq.c
@@ -14,7 +14,7 @@ atq_nb_event_queues(struct evt_options *opt)
 		rte_eth_dev_count_avail() : evt_nr_active_lcores(opt->plcores);
 }
 
-static inline __attribute__((always_inline)) void
+static __rte_always_inline void
 atq_mark_fwd_latency(struct rte_event *const ev)
 {
 	if (unlikely(ev->sub_event_type == 0)) {
@@ -24,7 +24,7 @@ atq_mark_fwd_latency(struct rte_event *const ev)
 	}
 }
 
-static inline __attribute__((always_inline)) void
+static __rte_always_inline void
 atq_fwd_event(struct rte_event *const ev, uint8_t *const sched_type_list,
 		const uint8_t nb_stages)
 {
diff --git a/app/test-eventdev/test_perf_common.h b/app/test-eventdev/test_perf_common.h
index d8fbee6d89..ff9705df88 100644
--- a/app/test-eventdev/test_perf_common.h
+++ b/app/test-eventdev/test_perf_common.h
@@ -91,7 +91,7 @@ struct perf_elt {
 	printf("%s(): lcore %d dev_id %d port=%d\n", __func__,\
 			rte_lcore_id(), dev, port)
 
-static inline __attribute__((always_inline)) int
+static __rte_always_inline int
 perf_process_last_stage(struct rte_mempool *const pool,
 		struct rte_event *const ev, struct worker_data *const w,
 		void *bufs[], int const buf_sz, uint8_t count)
@@ -107,7 +107,7 @@ perf_process_last_stage(struct rte_mempool *const pool,
 	return count;
 }
 
-static inline __attribute__((always_inline)) uint8_t
+static __rte_always_inline uint8_t
 perf_process_last_stage_latency(struct rte_mempool *const pool,
 		struct rte_event *const ev, struct worker_data *const w,
 		void *bufs[], int const buf_sz, uint8_t count)
diff --git a/app/test-eventdev/test_perf_queue.c b/app/test-eventdev/test_perf_queue.c
index 29098580e7..f4ea3a795f 100644
--- a/app/test-eventdev/test_perf_queue.c
+++ b/app/test-eventdev/test_perf_queue.c
@@ -15,7 +15,7 @@ perf_queue_nb_event_queues(struct evt_options *opt)
 	return nb_prod * opt->nb_stages;
 }
 
-static inline __attribute__((always_inline)) void
+static __rte_always_inline void
 mark_fwd_latency(struct rte_event *const ev,
 		const uint8_t nb_stages)
 {
@@ -26,7 +26,7 @@ mark_fwd_latency(struct rte_event *const ev,
 	}
 }
 
-static inline __attribute__((always_inline)) void
+static __rte_always_inline void
 fwd_event(struct rte_event *const ev, uint8_t *const sched_type_list,
 		const uint8_t nb_stages)
 {
diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c
index ce5b8e6ca3..045680533f 100644
--- a/drivers/net/ice/ice_rxtx.c
+++ b/drivers/net/ice/ice_rxtx.c
@@ -2655,7 +2655,7 @@ ice_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 	return nb_tx;
 }
 
-static inline int __attribute__((always_inline))
+static __rte_always_inline int
 ice_tx_free_bufs(struct ice_tx_queue *txq)
 {
 	struct ice_tx_entry *txep;
diff --git a/lib/librte_eal/common/include/arch/x86/rte_rtm.h b/lib/librte_eal/common/include/arch/x86/rte_rtm.h
index eb0f8e81e1..36bf49846f 100644
--- a/lib/librte_eal/common/include/arch/x86/rte_rtm.h
+++ b/lib/librte_eal/common/include/arch/x86/rte_rtm.h
@@ -25,7 +25,7 @@ extern "C" {
 #define RTE_XABORT_NESTED	(1 << 5)
 #define RTE_XABORT_CODE(x)	(((x) >> 24) & 0xff)
 
-static __attribute__((__always_inline__)) inline
+static __rte_always_inline
 unsigned int rte_xbegin(void)
 {
 	unsigned int ret = RTE_XBEGIN_STARTED;
@@ -34,7 +34,7 @@ unsigned int rte_xbegin(void)
 	return ret;
 }
 
-static __attribute__((__always_inline__)) inline
+static __rte_always_inline
 void rte_xend(void)
 {
 	asm volatile(".byte 0x0f,0x01,0xd5" ::: "memory");
@@ -45,7 +45,7 @@ void rte_xend(void)
 	asm volatile(".byte 0xc6,0xf8,%P0" :: "i" (status) : "memory"); \
 } while (0)
 
-static __attribute__((__always_inline__)) inline
+static __rte_always_inline
 int rte_xtest(void)
 {
 	unsigned char out;
diff --git a/lib/librte_power/rte_power_empty_poll.c b/lib/librte_power/rte_power_empty_poll.c
index 0a8024ddca..70c07b1533 100644
--- a/lib/librte_power/rte_power_empty_poll.c
+++ b/lib/librte_power/rte_power_empty_poll.c
@@ -52,13 +52,13 @@ set_power_freq(int lcore_id, enum freq_val freq, bool specific_freq)
 }
 
-static inline void __attribute__((always_inline))
+static __rte_always_inline void
 exit_training_state(struct priority_worker *poll_stats)
 {
 	RTE_SET_USED(poll_stats);
 }
 
-static inline void __attribute__((always_inline))
+static __rte_always_inline void
 enter_training_state(struct priority_worker *poll_stats)
 {
 	poll_stats->iter_counter = 0;
@@ -66,7 +66,7 @@ enter_training_state(struct priority_worker *poll_stats)
 	poll_stats->queue_state = TRAINING;
 }
 
-static inline void __attribute__((always_inline))
+static __rte_always_inline void
 enter_normal_state(struct priority_worker *poll_stats)
 {
 	/* Clear the averages arrays and strs */
@@ -86,7 +86,7 @@ enter_normal_state(struct priority_worker *poll_stats)
 	poll_stats->thresh[HGH].threshold_percent = high_to_med_threshold;
 }
 
-static inline void __attribute__((always_inline))
+static __rte_always_inline void
 enter_busy_state(struct priority_worker *poll_stats)
 {
 	memset(poll_stats->edpi_av, 0, sizeof(poll_stats->edpi_av));
@@ -101,14 +101,14 @@ enter_busy_state(struct priority_worker *poll_stats)
 	set_power_freq(poll_stats->lcore_id, HGH, false);
 }
 
-static inline void __attribute__((always_inline))
+static __rte_always_inline void
 enter_purge_state(struct priority_worker *poll_stats)
 {
 	poll_stats->iter_counter = 0;
 	poll_stats->queue_state = LOW_PURGE;
 }
 
-static inline void __attribute__((always_inline))
+static __rte_always_inline void
 set_state(struct priority_worker *poll_stats,
 		enum queue_state new_state)
 {
@@ -131,7 +131,7 @@ set_state(struct priority_worker *poll_stats,
 	}
 }
 
-static inline void __attribute__((always_inline))
+static __rte_always_inline void
 set_policy(struct priority_worker *poll_stats,
 		struct ep_policy *policy)
 {
@@ -204,7 +204,7 @@ update_training_stats(struct priority_worker *poll_stats,
 	}
 }
 
-static inline uint32_t __attribute__((always_inline))
+static __rte_always_inline uint32_t
 update_stats(struct priority_worker *poll_stats)
 {
 	uint64_t tot_edpi = 0, tot_ppi = 0;
@@ -249,7 +249,7 @@ update_stats(struct priority_worker *poll_stats)
 }
 
-static inline void __attribute__((always_inline))
+static __rte_always_inline void
 update_stats_normal(struct priority_worker *poll_stats)
 {
 	uint32_t percent;