From patchwork Wed Mar 27 19:56:44 2024
X-Patchwork-Submitter: Tyler Retzlaff
X-Patchwork-Id: 138874
X-Patchwork-Delegate: thomas@monjalon.net
From: Tyler Retzlaff
To: dev@dpdk.org
Cc: Ajit Khaparde, Andrew Boyer, Andrew Rybchenko, Bruce Richardson,
 Chenbo Xia, Chengwen Feng, Dariusz Sosnowski, David Christensen,
 Hyong Youb Kim, Jerin Jacob, Jie Hai, Jingjing Wu, John Daley,
 Kevin Laatz, Kiran Kumar K, Konstantin Ananyev, Maciej Czekaj,
 Matan Azrad, Maxime Coquelin, Nithin Dabilpuram, Ori Kam,
 Ruifeng Wang, Satha Rao, Somnath Kotur, Suanming Mou,
 Sunil Kumar Kori, Viacheslav Ovsiienko, Yisen Zhuang, Yuying Zhang,
 mb@smartsharesystems.com, Tyler Retzlaff
Subject: [PATCH v8 2/4] mbuf: remove rte marker fields
Date: Wed, 27 Mar 2024 12:56:44 -0700
Message-Id: <1711569406-7750-3-git-send-email-roretzla@linux.microsoft.com>
In-Reply-To: <1711569406-7750-1-git-send-email-roretzla@linux.microsoft.com>
References: <1706657173-26166-1-git-send-email-roretzla@linux.microsoft.com>
 <1711569406-7750-1-git-send-email-roretzla@linux.microsoft.com>
List-Id: DPDK patches and discussions

RTE_MARKER typedefs are a GCC extension unsupported by MSVC. Remove the
RTE_MARKER fields from the rte_mbuf struct.

Maintain alignment of the fields after the removed cacheline1 marker by
placing C11 alignas(RTE_CACHE_LINE_MIN_SIZE).

Provide new rearm_data and rx_descriptor_fields1 fields in anonymous
unions as single-element arrays with types matching the original markers
to maintain API compatibility.

Signed-off-by: Tyler Retzlaff
---
 doc/guides/rel_notes/release_24_03.rst |   2 +
 lib/mbuf/rte_mbuf.h                    |   4 +-
 lib/mbuf/rte_mbuf_core.h               | 200 +++++++++++++++++----------------
 3 files changed, 108 insertions(+), 98 deletions(-)

diff --git a/doc/guides/rel_notes/release_24_03.rst b/doc/guides/rel_notes/release_24_03.rst
index 34d7bad..a82bb4f 100644
--- a/doc/guides/rel_notes/release_24_03.rst
+++ b/doc/guides/rel_notes/release_24_03.rst
@@ -216,6 +216,8 @@ Removed Items
 
 * acc101: Removed obsolete code for non productized HW variant.
 
+* mbuf: ``RTE_MARKER`` fields ``cacheline0`` and ``cacheline1``
+  have been removed from ``struct rte_mbuf``.
 
 API Changes
 -----------
 
diff --git a/lib/mbuf/rte_mbuf.h b/lib/mbuf/rte_mbuf.h
index 286b32b..4c4722e 100644
--- a/lib/mbuf/rte_mbuf.h
+++ b/lib/mbuf/rte_mbuf.h
@@ -108,7 +108,7 @@
 static inline void
 rte_mbuf_prefetch_part1(struct rte_mbuf *m)
 {
-	rte_prefetch0(&m->cacheline0);
+	rte_prefetch0(m);
 }
 
 /**
@@ -126,7 +126,7 @@
 rte_mbuf_prefetch_part2(struct rte_mbuf *m)
 {
 #if RTE_CACHE_LINE_SIZE == 64
-	rte_prefetch0(&m->cacheline1);
+	rte_prefetch0(RTE_PTR_ADD(m, RTE_CACHE_LINE_MIN_SIZE));
 #else
 	RTE_SET_USED(m);
 #endif
diff --git a/lib/mbuf/rte_mbuf_core.h b/lib/mbuf/rte_mbuf_core.h
index 9f58076..9d838b8 100644
--- a/lib/mbuf/rte_mbuf_core.h
+++ b/lib/mbuf/rte_mbuf_core.h
@@ -465,8 +465,6 @@ enum {
  * The generic rte_mbuf, containing a packet mbuf.
  */
 struct __rte_cache_aligned rte_mbuf {
-	RTE_MARKER cacheline0;
-
 	void *buf_addr;           /**< Virtual address of segment buffer. */
 #if RTE_IOVA_IN_MBUF
 	/**
@@ -488,127 +486,138 @@ struct __rte_cache_aligned rte_mbuf {
 #endif
 
 	/* next 8 bytes are initialised on RX descriptor rearm */
-	RTE_MARKER64 rearm_data;
-	uint16_t data_off;
-
-	/**
-	 * Reference counter. Its size should at least equal to the size
-	 * of port field (16 bits), to support zero-copy broadcast.
-	 * It should only be accessed using the following functions:
-	 * rte_mbuf_refcnt_update(), rte_mbuf_refcnt_read(), and
-	 * rte_mbuf_refcnt_set(). The functionality of these functions (atomic,
-	 * or non-atomic) is controlled by the RTE_MBUF_REFCNT_ATOMIC flag.
-	 */
-	RTE_ATOMIC(uint16_t) refcnt;
+	union {
+		uint64_t rearm_data[1];
+		__extension__
+		struct {
+			uint16_t data_off;
+
+			/**
+			 * Reference counter. Its size should at least equal to the size
+			 * of port field (16 bits), to support zero-copy broadcast.
+			 * It should only be accessed using the following functions:
+			 * rte_mbuf_refcnt_update(), rte_mbuf_refcnt_read(), and
+			 * rte_mbuf_refcnt_set(). The functionality of these functions (atomic,
+			 * or non-atomic) is controlled by the RTE_MBUF_REFCNT_ATOMIC flag.
+			 */
+			RTE_ATOMIC(uint16_t) refcnt;
 
-	/**
-	 * Number of segments. Only valid for the first segment of an mbuf
-	 * chain.
-	 */
-	uint16_t nb_segs;
+			/**
+			 * Number of segments. Only valid for the first segment of an mbuf
+			 * chain.
+			 */
+			uint16_t nb_segs;
 
-	/** Input port (16 bits to support more than 256 virtual ports).
-	 * The event eth Tx adapter uses this field to specify the output port.
-	 */
-	uint16_t port;
+			/** Input port (16 bits to support more than 256 virtual ports).
+			 * The event eth Tx adapter uses this field to specify the output port.
+			 */
+			uint16_t port;
+		};
+	};
 
 	uint64_t ol_flags;        /**< Offload features. */
 
-	/* remaining bytes are set on RX when pulling packet from descriptor */
-	RTE_MARKER rx_descriptor_fields1;
-
-	/*
-	 * The packet type, which is the combination of outer/inner L2, L3, L4
-	 * and tunnel types. The packet_type is about data really present in the
-	 * mbuf. Example: if vlan stripping is enabled, a received vlan packet
-	 * would have RTE_PTYPE_L2_ETHER and not RTE_PTYPE_L2_VLAN because the
-	 * vlan is stripped from the data.
-	 */
+	/* remaining 24 bytes are set on RX when pulling packet from descriptor */
 	union {
-		uint32_t packet_type; /**< L2/L3/L4 and tunnel information. */
+		/* void * type of the array elements is retained for driver compatibility. */
+		void *rx_descriptor_fields1[24 / sizeof(void *)];
 		__extension__
 		struct {
-			uint8_t l2_type:4;   /**< (Outer) L2 type. */
-			uint8_t l3_type:4;   /**< (Outer) L3 type. */
-			uint8_t l4_type:4;   /**< (Outer) L4 type. */
-			uint8_t tun_type:4;  /**< Tunnel type. */
+			/*
+			 * The packet type, which is the combination of outer/inner L2, L3, L4
+			 * and tunnel types. The packet_type is about data really present in the
+			 * mbuf. Example: if vlan stripping is enabled, a received vlan packet
+			 * would have RTE_PTYPE_L2_ETHER and not RTE_PTYPE_L2_VLAN because the
+			 * vlan is stripped from the data.
+			 */
 			union {
-				uint8_t inner_esp_next_proto;
-				/**< ESP next protocol type, valid if
-				 * RTE_PTYPE_TUNNEL_ESP tunnel type is set
-				 * on both Tx and Rx.
-				 */
+				uint32_t packet_type; /**< L2/L3/L4 and tunnel information. */
 				__extension__
 				struct {
-					uint8_t inner_l2_type:4;
-					/**< Inner L2 type. */
-					uint8_t inner_l3_type:4;
-					/**< Inner L3 type. */
+					uint8_t l2_type:4;  /**< (Outer) L2 type. */
+					uint8_t l3_type:4;  /**< (Outer) L3 type. */
+					uint8_t l4_type:4;  /**< (Outer) L4 type. */
+					uint8_t tun_type:4; /**< Tunnel type. */
+					union {
+						/**< ESP next protocol type, valid if
						 * RTE_PTYPE_TUNNEL_ESP tunnel type is set
+						 * on both Tx and Rx.
+						 */
+						uint8_t inner_esp_next_proto;
+						__extension__
+						struct {
+							/**< Inner L2 type. */
+							uint8_t inner_l2_type:4;
+							/**< Inner L3 type. */
+							uint8_t inner_l3_type:4;
+						};
+					};
+					uint8_t inner_l4_type:4; /**< Inner L4 type. */
 				};
 			};
-			uint8_t inner_l4_type:4; /**< Inner L4 type. */
-		};
-	};
 
-	uint32_t pkt_len;         /**< Total pkt len: sum of all segments. */
-	uint16_t data_len;        /**< Amount of data in segment buffer. */
-	/** VLAN TCI (CPU order), valid if RTE_MBUF_F_RX_VLAN is set. */
-	uint16_t vlan_tci;
+			uint32_t pkt_len;         /**< Total pkt len: sum of all segments. */
 
-	union {
-		union {
-			uint32_t rss;     /**< RSS hash result if RSS enabled */
-			struct {
-				union {
-					struct {
-						uint16_t hash;
-						uint16_t id;
-					};
-					uint32_t lo;
-					/**< Second 4 flexible bytes */
-				};
-				uint32_t hi;
-				/**< First 4 flexible bytes or FD ID, dependent
-				 * on RTE_MBUF_F_RX_FDIR_* flag in ol_flags.
-				 */
-			} fdir;	/**< Filter identifier if FDIR enabled */
-			struct rte_mbuf_sched sched;
-			/**< Hierarchical scheduler : 8 bytes */
-			struct {
-				uint32_t reserved1;
-				uint16_t reserved2;
-				uint16_t txq;
-				/**< The event eth Tx adapter uses this field
-				 * to store Tx queue id.
-				 * @see rte_event_eth_tx_adapter_txq_set()
-				 */
-			} txadapter; /**< Eventdev ethdev Tx adapter */
-			uint32_t usr;
-			/**< User defined tags. See rte_distributor_process() */
-		} hash;                   /**< hash information */
-	};
+			uint16_t data_len;        /**< Amount of data in segment buffer. */
+			/** VLAN TCI (CPU order), valid if RTE_MBUF_F_RX_VLAN is set. */
+			uint16_t vlan_tci;
+
+			union {
+				union {
+					uint32_t rss;     /**< RSS hash result if RSS enabled */
+					struct {
+						union {
+							struct {
+								uint16_t hash;
+								uint16_t id;
+							};
+							/**< Second 4 flexible bytes */
+							uint32_t lo;
+						};
+						/**< First 4 flexible bytes or FD ID, dependent
+						 * on RTE_MBUF_F_RX_FDIR_* flag in ol_flags.
+						 */
+						uint32_t hi;
+					} fdir;	/**< Filter identifier if FDIR enabled */
+					struct rte_mbuf_sched sched;
+					/**< Hierarchical scheduler : 8 bytes */
+					struct {
+						uint32_t reserved1;
+						uint16_t reserved2;
+						/**< The event eth Tx adapter uses this field
+						 * to store Tx queue id.
+						 * @see rte_event_eth_tx_adapter_txq_set()
+						 */
+						uint16_t txq;
+					} txadapter; /**< Eventdev ethdev Tx adapter */
+					/**< User defined tags. See rte_distributor_process() */
+					uint32_t usr;
+				} hash;                   /**< hash information */
+			};
 
-	/** Outer VLAN TCI (CPU order), valid if RTE_MBUF_F_RX_QINQ is set. */
-	uint16_t vlan_tci_outer;
+			/** Outer VLAN TCI (CPU order), valid if RTE_MBUF_F_RX_QINQ is set. */
+			uint16_t vlan_tci_outer;
 
-	uint16_t buf_len;         /**< Length of segment buffer. */
+			uint16_t buf_len;         /**< Length of segment buffer. */
+		};
+	};
 
 	struct rte_mempool *pool; /**< Pool from which mbuf was allocated. */
 
 	/* second cache line - fields only used in slow path or on TX */
-	alignas(RTE_CACHE_LINE_MIN_SIZE) RTE_MARKER cacheline1;
-
 #if RTE_IOVA_IN_MBUF
 	/**
 	 * Next segment of scattered packet. Must be NULL in the last
 	 * segment or in case of non-segmented packet.
 	 */
+	alignas(RTE_CACHE_LINE_MIN_SIZE)
 	struct rte_mbuf *next;
 #else
 	/**
 	 * Reserved for dynamic fields
 	 * when the next pointer is in first cache line (i.e. RTE_IOVA_IN_MBUF is 0).
 	 */
+	alignas(RTE_CACHE_LINE_MIN_SIZE)
 	uint64_t dynfield2;
 #endif
 
@@ -617,17 +626,16 @@ struct __rte_cache_aligned rte_mbuf {
 		uint64_t tx_offload;       /**< combined for easy fetch */
 		__extension__
 		struct {
-			uint64_t l2_len:RTE_MBUF_L2_LEN_BITS;
 			/**< L2 (MAC) Header Length for non-tunneling pkt.
 			 * Outer_L4_len + ... + Inner_L2_len for tunneling pkt.
 			 */
-			uint64_t l3_len:RTE_MBUF_L3_LEN_BITS;
+			uint64_t l2_len:RTE_MBUF_L2_LEN_BITS;
 			/**< L3 (IP) Header Length. */
-			uint64_t l4_len:RTE_MBUF_L4_LEN_BITS;
+			uint64_t l3_len:RTE_MBUF_L3_LEN_BITS;
 			/**< L4 (TCP/UDP) Header Length. */
-			uint64_t tso_segsz:RTE_MBUF_TSO_SEGSZ_BITS;
+			uint64_t l4_len:RTE_MBUF_L4_LEN_BITS;
 			/**< TCP TSO segment size */
-
+			uint64_t tso_segsz:RTE_MBUF_TSO_SEGSZ_BITS;
 			/*
 			 * Fields for Tx offloading of tunnels.
 			 * These are undefined for packets which don't request
@@ -640,10 +648,10 @@ struct __rte_cache_aligned rte_mbuf {
 			 * Applications are expected to set appropriate tunnel
 			 * offload flags when they fill in these fields.
 			 */
-			uint64_t outer_l3_len:RTE_MBUF_OUTL3_LEN_BITS;
 			/**< Outer L3 (IP) Hdr Length. */
-			uint64_t outer_l2_len:RTE_MBUF_OUTL2_LEN_BITS;
+			uint64_t outer_l3_len:RTE_MBUF_OUTL3_LEN_BITS;
 			/**< Outer L2 (MAC) Hdr Length. */
+			uint64_t outer_l2_len:RTE_MBUF_OUTL2_LEN_BITS;
 
 			/* uint64_t unused:RTE_MBUF_TXOFLD_UNUSED_BITS; */
 		};