From patchwork Mon Apr 15 20:03:43 2024
From: Tyler Retzlaff
To: dev@dpdk.org
Cc: Mattias Rönnblom, Min Hu (Connor), Morten Brørup, Abdullah Sevincer,
 Ajit Khaparde, Akhil Goyal, Alok Prasad, Amit Bernstein, Anatoly Burakov,
 Andrew Boyer, Andrew Rybchenko, Ankur Dwivedi, Anoob Joseph, Ashish Gupta,
 Ashwin Sekhar T K, Bruce Richardson, Byron Marohn, Chaoyong He,
 Chas Williams, Chenbo Xia, Chengwen Feng, Conor Walsh, Cristian Dumitrescu,
 Dariusz Sosnowski, David Hunt, Devendra Singh Rawat, Ed Czeck,
 Evgeny Schemeilin, Fan Zhang, Gagandeep Singh, Guoyang Zhou, Harman Kalra,
 Harry van Haaren, Hemant Agrawal, Honnappa Nagarahalli, Hyong Youb Kim,
 Jakub Grajciar, Jerin Jacob, Jian Wang, Jiawen Wu, Jie Hai, Jingjing Wu,
 John Daley, John Miller, Joyce Kong, Kai Ji, Kevin Laatz, Kiran Kumar K,
 Konstantin Ananyev, Lee Daly, Liang Ma, Liron Himi, Long Li, Maciej Czekaj,
 Matan Azrad, Matt Peters, Maxime Coquelin, Michael Shamis,
 Nagadheeraj Rottela, Nicolas Chautru, Nithin Dabilpuram, Ori Kam,
 Pablo de Lara, Pavan Nikhilesh, Peter Mccarthy, Radu Nicolau,
 Rahul Lakkireddy, Rakesh Kudurumalla, Raveendra Padasalagi, Reshma Pattan,
 Ron Beider, Ruifeng Wang, Sachin Saxena, Selwin Sebastian, Shai Brandes,
 Shepard Siegel, Shijith Thotton, Sivaprasad Tummala, Somnath Kotur,
 Srikanth Yalavarthi, Stephen Hemminger, Steven Webster, Suanming Mou,
 Sunil Kumar Kori, Sunil Uttarwar, Sunila Sahu, Tejasree Kondoj,
 Viacheslav Ovsiienko, Vikas Gupta, Volodymyr Fialko, Wajeeh Atrash,
 Wisam Jaddo, Xiaoyun Wang, Yipeng Wang, Yisen Zhuang, Yuying Zhang,
 Zhangfei Gao, Zhirun Yan, Ziyang Xuan, Tyler Retzlaff
Subject: [PATCH v2 21/83] net/mlx5: move alignment attribute on types
Date: Mon, 15 Apr 2024 13:03:43 -0700
Message-Id: <1713211485-9021-22-git-send-email-roretzla@linux.microsoft.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1713211485-9021-1-git-send-email-roretzla@linux.microsoft.com>
References: <1710949096-5786-1-git-send-email-roretzla@linux.microsoft.com>
 <1713211485-9021-1-git-send-email-roretzla@linux.microsoft.com>
Move the __rte_aligned(a) attribute to its new conventional location.
Placing it between the {struct,union} keyword and the tag lets the
desired alignment be imparted on the type regardless of the toolchain
in use, for both C and C++. It also avoids confusing Doxygen when
generating documentation.

(A standalone sketch contrasting the two placements follows the diff.)

Signed-off-by: Tyler Retzlaff
Acked-by: Morten Brørup
---
 drivers/net/mlx5/hws/mlx5dr_send.h |  4 ++--
 drivers/net/mlx5/mlx5.h            |  6 +++---
 drivers/net/mlx5/mlx5_flow.h       |  4 ++--
 drivers/net/mlx5/mlx5_hws_cnt.h    | 14 +++++++-------
 drivers/net/mlx5/mlx5_rx.h         |  4 ++--
 drivers/net/mlx5/mlx5_rxtx.c       |  6 +++---
 drivers/net/mlx5/mlx5_tx.h         | 10 +++++-----
 drivers/net/mlx5/mlx5_utils.h      |  2 +-
 8 files changed, 25 insertions(+), 25 deletions(-)

diff --git a/drivers/net/mlx5/hws/mlx5dr_send.h b/drivers/net/mlx5/hws/mlx5dr_send.h
index c4eaea5..0c67a9e 100644
--- a/drivers/net/mlx5/hws/mlx5dr_send.h
+++ b/drivers/net/mlx5/hws/mlx5dr_send.h
@@ -144,7 +144,7 @@ struct mlx5dr_completed_poll {
 	uint16_t mask;
 };
 
-struct mlx5dr_send_engine {
+struct __rte_cache_aligned mlx5dr_send_engine {
 	struct mlx5dr_send_ring send_ring[MLX5DR_NUM_SEND_RINGS]; /* For now 1:1 mapping */
 	struct mlx5dv_devx_uar *uar; /* Uar is shared between rings of a queue */
 	struct mlx5dr_completed_poll completed;
@@ -153,7 +153,7 @@ struct mlx5dr_send_engine {
 	uint16_t rings;
 	uint16_t num_entries;
 	bool err;
-} __rte_cache_aligned;
+};
 
 struct mlx5dr_send_engine_post_ctrl {
 	struct mlx5dr_send_engine *queue;
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 0091a24..3646d20 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -415,7 +415,7 @@ struct mlx5_hw_q_job {
 };
 
 /* HW steering job descriptor LIFO pool. */
-struct mlx5_hw_q {
+struct __rte_cache_aligned mlx5_hw_q {
 	uint32_t job_idx; /* Free job index. */
 	uint32_t size; /* Job LIFO queue size. */
 	uint32_t ongoing_flow_ops; /* Number of ongoing flow operations. */
@@ -424,7 +424,7 @@ struct mlx5_hw_q {
 	struct rte_ring *indir_iq; /* Indirect action SW in progress queue. */
 	struct rte_ring *flow_transfer_pending;
 	struct rte_ring *flow_transfer_completed;
-} __rte_cache_aligned;
+};
 
 #define MLX5_COUNTER_POOLS_MAX_NUM (1 << 15)
 
@@ -1405,7 +1405,7 @@ struct mlx5_hws_cnt_svc_mng {
 	uint32_t query_interval;
 	rte_thread_t service_thread;
 	uint8_t svc_running;
-	struct mlx5_hws_aso_mng aso_mng __rte_cache_aligned;
+	alignas(RTE_CACHE_LINE_SIZE) struct mlx5_hws_aso_mng aso_mng;
 };
 
 #define MLX5_FLOW_HW_TAGS_MAX 12
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index 0065727..cc1e8cf 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -1654,9 +1654,9 @@ struct mlx5_matcher_info {
 	RTE_ATOMIC(uint32_t) refcnt;
 };
 
-struct mlx5_dr_rule_action_container {
+struct __rte_cache_aligned mlx5_dr_rule_action_container {
 	struct mlx5dr_rule_action acts[MLX5_HW_MAX_ACTS];
-} __rte_cache_aligned;
+};
 
 struct rte_flow_template_table {
 	LIST_ENTRY(rte_flow_template_table) next;
diff --git a/drivers/net/mlx5/mlx5_hws_cnt.h b/drivers/net/mlx5/mlx5_hws_cnt.h
index e005960..1cb0564 100644
--- a/drivers/net/mlx5/mlx5_hws_cnt.h
+++ b/drivers/net/mlx5/mlx5_hws_cnt.h
@@ -97,11 +97,11 @@ struct mlx5_hws_cnt_pool_caches {
 	struct rte_ring *qcache[];
 };
 
-struct mlx5_hws_cnt_pool {
+struct __rte_cache_aligned mlx5_hws_cnt_pool {
 	LIST_ENTRY(mlx5_hws_cnt_pool) next;
-	struct mlx5_hws_cnt_pool_cfg cfg __rte_cache_aligned;
-	struct mlx5_hws_cnt_dcs_mng dcs_mng __rte_cache_aligned;
-	uint32_t query_gen __rte_cache_aligned;
+	alignas(RTE_CACHE_LINE_SIZE) struct mlx5_hws_cnt_pool_cfg cfg;
+	alignas(RTE_CACHE_LINE_SIZE) struct mlx5_hws_cnt_dcs_mng dcs_mng;
+	alignas(RTE_CACHE_LINE_SIZE) uint32_t query_gen;
 	struct mlx5_hws_cnt *pool;
 	struct mlx5_hws_cnt_raw_data_mng *raw_mng;
 	struct rte_ring *reuse_list;
@@ -110,7 +110,7 @@ struct mlx5_hws_cnt_pool {
 	struct mlx5_hws_cnt_pool_caches *cache;
 	uint64_t time_of_last_age_check;
 	struct mlx5_priv *priv;
-} __rte_cache_aligned;
+};
 
 /* HWS AGE status. */
 enum {
@@ -133,7 +133,7 @@ enum {
 };
 
 /* HWS counter age parameter. */
-struct mlx5_hws_age_param {
+struct __rte_cache_aligned mlx5_hws_age_param {
 	uint32_t timeout; /* Aging timeout in seconds (atomically accessed). */
 	uint32_t sec_since_last_hit;
 	/* Time in seconds since last hit (atomically accessed). */
@@ -149,7 +149,7 @@ struct mlx5_hws_age_param {
 	cnt_id_t own_cnt_index;
 	/* Counter action created specifically for this AGE action. */
 	void *context; /* Flow AGE context. */
-} __rte_packed __rte_cache_aligned;
+} __rte_packed;
 
 /**
diff --git a/drivers/net/mlx5/mlx5_rx.h b/drivers/net/mlx5/mlx5_rx.h
index 2fce908..fb4d8e6 100644
--- a/drivers/net/mlx5/mlx5_rx.h
+++ b/drivers/net/mlx5/mlx5_rx.h
@@ -79,7 +79,7 @@ struct mlx5_eth_rxseg {
 };
 
 /* RX queue descriptor. */
-struct mlx5_rxq_data {
+struct __rte_cache_aligned mlx5_rxq_data {
 	unsigned int csum:1; /* Enable checksum offloading. */
 	unsigned int hw_timestamp:1; /* Enable HW timestamp. */
 	unsigned int rt_timestamp:1; /* Realtime timestamp format. */
@@ -146,7 +146,7 @@ struct mlx5_rxq_data {
 	uint32_t rxseg_n; /* Number of split segment descriptions. */
 	struct mlx5_eth_rxseg rxseg[MLX5_MAX_RXQ_NSEG];
 	/* Buffer split segment descriptions - sizes, offsets, pools. */
-} __rte_cache_aligned;
+};
 
 /* RX queue control descriptor. */
 struct mlx5_rxq_ctrl {
diff --git a/drivers/net/mlx5/mlx5_rxtx.c b/drivers/net/mlx5/mlx5_rxtx.c
index 54d410b..d3d4470 100644
--- a/drivers/net/mlx5/mlx5_rxtx.c
+++ b/drivers/net/mlx5/mlx5_rxtx.c
@@ -77,12 +77,12 @@
 static_assert(MLX5_WQE_SIZE == 4 * MLX5_WSEG_SIZE, "invalid WQE size");
 
-uint32_t mlx5_ptype_table[] __rte_cache_aligned = {
+alignas(RTE_CACHE_LINE_SIZE) uint32_t mlx5_ptype_table[] = {
 	[0xff] = RTE_PTYPE_ALL_MASK, /* Last entry for errored packet. */
 };
 
-uint8_t mlx5_cksum_table[1 << 10] __rte_cache_aligned;
-uint8_t mlx5_swp_types_table[1 << 10] __rte_cache_aligned;
+alignas(RTE_CACHE_LINE_SIZE) uint8_t mlx5_cksum_table[1 << 10];
+alignas(RTE_CACHE_LINE_SIZE) uint8_t mlx5_swp_types_table[1 << 10];
 
 uint64_t rte_net_mlx5_dynf_inline_mask;
diff --git a/drivers/net/mlx5/mlx5_tx.h b/drivers/net/mlx5/mlx5_tx.h
index b1e8ea1..107d7ab 100644
--- a/drivers/net/mlx5/mlx5_tx.h
+++ b/drivers/net/mlx5/mlx5_tx.h
@@ -83,9 +83,9 @@ enum mlx5_txcmp_code {
 extern uint64_t rte_net_mlx5_dynf_inline_mask;
 #define RTE_MBUF_F_TX_DYNF_NOINLINE rte_net_mlx5_dynf_inline_mask
 
-extern uint32_t mlx5_ptype_table[] __rte_cache_aligned;
-extern uint8_t mlx5_cksum_table[1 << 10] __rte_cache_aligned;
-extern uint8_t mlx5_swp_types_table[1 << 10] __rte_cache_aligned;
+extern alignas(RTE_CACHE_LINE_SIZE) uint32_t mlx5_ptype_table[];
+extern alignas(RTE_CACHE_LINE_SIZE) uint8_t mlx5_cksum_table[1 << 10];
+extern alignas(RTE_CACHE_LINE_SIZE) uint8_t mlx5_swp_types_table[1 << 10];
 
 struct mlx5_txq_stats {
 #ifdef MLX5_PMD_SOFT_COUNTERS
@@ -112,7 +112,7 @@ struct mlx5_txq_local {
 
 /* TX queue descriptor. */
 __extension__
-struct mlx5_txq_data {
+struct __rte_cache_aligned mlx5_txq_data {
 	uint16_t elts_head; /* Current counter in (*elts)[]. */
 	uint16_t elts_tail; /* Counter of first element awaiting completion. */
 	uint16_t elts_comp; /* elts index since last completion request. */
@@ -173,7 +173,7 @@ struct mlx5_txq_data {
 	struct mlx5_uar_data uar_data;
 	struct rte_mbuf *elts[];
 	/* Storage for queued packets, must be the last field. */
-} __rte_cache_aligned;
+};
 
 /* TX queue control descriptor. */
 __extension__
diff --git a/drivers/net/mlx5/mlx5_utils.h b/drivers/net/mlx5/mlx5_utils.h
index f3c0d76..b51d977 100644
--- a/drivers/net/mlx5/mlx5_utils.h
+++ b/drivers/net/mlx5/mlx5_utils.h
@@ -235,7 +235,7 @@ struct mlx5_indexed_trunk {
 	uint32_t next; /* Next free trunk in free list. */
 	uint32_t free; /* Free entries available */
 	struct rte_bitmap *bmp;
-	uint8_t data[] __rte_cache_aligned; /* Entry data start. */
+	alignas(RTE_CACHE_LINE_SIZE) uint8_t data[]; /* Entry data start. */
 };
 
 struct mlx5_indexed_cache {
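
For reference, a minimal standalone sketch contrasting the two
placements this patch converts between. The type and variable names
here are hypothetical, and plain GNU/C11 syntax (with 64 standing in
for RTE_CACHE_LINE_SIZE) is used in place of DPDK's __rte_cache_aligned
and __rte_aligned macros:

#include <stdalign.h>	/* C11 alignas */
#include <stdint.h>

/* Old placement: the attribute trails the closing brace. GCC and
 * Clang accept this, but MSVC has no equivalent syntax in this
 * position. */
struct old_style {
	uint32_t counter;
} __attribute__((aligned(64)));

/* New placement: between the struct keyword and the tag. C compilers
 * accept the GNU attribute (or MSVC __declspec(align(64))) here, and
 * C++11 accepts alignas here, so a macro expanded in this position
 * can pick the right spelling for every toolchain. */
struct __attribute__((aligned(64))) new_style {
	uint32_t counter;
};

/* Members and variables take standard C11 alignas directly, replacing
 * the trailing attribute on the declarator, as the patch does for
 * e.g. mlx5_ptype_table and the cfg/dcs_mng/query_gen members. */
alignas(64) static uint8_t lookup_table[1 << 10];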