From patchwork Wed Mar 20 17:24:33 2019
X-Patchwork-Submitter: "Ananyev, Konstantin"
X-Patchwork-Id: 51415
X-Patchwork-Delegate: gakhil@marvell.com
From: Konstantin Ananyev
To: dev@dpdk.org
Cc: akhil.goyal@nxp.com, olivier.matz@6wind.com, Konstantin Ananyev
Date: Wed, 20 Mar 2019 17:24:33 +0000
Message-Id: <1553102679-23576-2-git-send-email-konstantin.ananyev@intel.com>
In-Reply-To: <1551381661-21078-1-git-send-email-konstantin.ananyev@intel.com>
References: <1551381661-21078-1-git-send-email-konstantin.ananyev@intel.com>
Subject: [dpdk-dev] [PATCH v2 1/7] mbuf: new function to generate raw Tx offload value

Operations that set or update bit-fields often cause compilers to generate
suboptimal code. To help avoid such situations for the tx_offload fields:
introduce a new enum for the tx_offload bit-field lengths and offsets, and
a new function to generate a raw tx_offload value.

Signed-off-by: Konstantin Ananyev
---
 lib/librte_mbuf/rte_mbuf.h | 71 ++++++++++++++++++++++++++++++++++----
 1 file changed, 64 insertions(+), 7 deletions(-)

diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
index d961ccaf6..b967ad17e 100644
--- a/lib/librte_mbuf/rte_mbuf.h
+++ b/lib/librte_mbuf/rte_mbuf.h
@@ -479,6 +479,26 @@ struct rte_mbuf_sched {
 	uint16_t reserved;   /**< Reserved. */
 }; /**< Hierarchical scheduler */
 
+/** enum for the tx_offload bit-field lengths and offsets. */
+enum {
+	RTE_MBUF_L2_LEN_BITS = 7,
+	RTE_MBUF_L3_LEN_BITS = 9,
+	RTE_MBUF_L4_LEN_BITS = 8,
+	RTE_MBUF_TSO_SEGSZ_BITS = 16,
+	RTE_MBUF_OL3_LEN_BITS = 9,
+	RTE_MBUF_OL2_LEN_BITS = 7,
+	RTE_MBUF_L2_LEN_OFS = 0,
+	RTE_MBUF_L3_LEN_OFS = RTE_MBUF_L2_LEN_OFS + RTE_MBUF_L2_LEN_BITS,
+	RTE_MBUF_L4_LEN_OFS = RTE_MBUF_L3_LEN_OFS + RTE_MBUF_L3_LEN_BITS,
+	RTE_MBUF_TSO_SEGSZ_OFS = RTE_MBUF_L4_LEN_OFS + RTE_MBUF_L4_LEN_BITS,
+	RTE_MBUF_OL3_LEN_OFS = RTE_MBUF_TSO_SEGSZ_OFS + RTE_MBUF_TSO_SEGSZ_BITS,
+	RTE_MBUF_OL2_LEN_OFS = RTE_MBUF_OL3_LEN_OFS + RTE_MBUF_OL3_LEN_BITS,
+	RTE_MBUF_TXOFLD_UNUSED_OFS =
+		RTE_MBUF_OL2_LEN_OFS + RTE_MBUF_OL2_LEN_BITS,
+	RTE_MBUF_TXOFLD_UNUSED_BITS =
+		sizeof(uint64_t) * CHAR_BIT - RTE_MBUF_TXOFLD_UNUSED_OFS,
+};
+
 /**
  * The generic rte_mbuf, containing a packet mbuf.
  */
@@ -640,19 +660,24 @@ struct rte_mbuf {
 		uint64_t tx_offload;       /**< combined for easy fetch */
 		__extension__
 		struct {
-			uint64_t l2_len:7;
+			uint64_t l2_len:RTE_MBUF_L2_LEN_BITS;
 			/**< L2 (MAC) Header Length for non-tunneling pkt.
 			 * Outer_L4_len + ... + Inner_L2_len for tunneling pkt.
 			 */
-			uint64_t l3_len:9; /**< L3 (IP) Header Length. */
-			uint64_t l4_len:8; /**< L4 (TCP/UDP) Header Length. */
-			uint64_t tso_segsz:16; /**< TCP TSO segment size */
+			uint64_t l3_len:RTE_MBUF_L3_LEN_BITS;
+			/**< L3 (IP) Header Length. */
+			uint64_t l4_len:RTE_MBUF_L4_LEN_BITS;
+			/**< L4 (TCP/UDP) Header Length. */
+			uint64_t tso_segsz:RTE_MBUF_TSO_SEGSZ_BITS;
+			/**< TCP TSO segment size */
 
 			/* fields for TX offloading of tunnels */
-			uint64_t outer_l3_len:9; /**< Outer L3 (IP) Hdr Length. */
-			uint64_t outer_l2_len:7; /**< Outer L2 (MAC) Hdr Length. */
+			uint64_t outer_l3_len:RTE_MBUF_OL3_LEN_BITS;
+			/**< Outer L3 (IP) Hdr Length. */
+			uint64_t outer_l2_len:RTE_MBUF_OL2_LEN_BITS;
+			/**< Outer L2 (MAC) Hdr Length. */
 
-			/* uint64_t unused:8; */
+			/* uint64_t unused:RTE_MBUF_TXOFLD_UNUSED_BITS; */
 		};
 	};
 
@@ -2243,6 +2268,38 @@ static inline int rte_pktmbuf_chain(struct rte_mbuf *head, struct rte_mbuf *tail
 	return 0;
 }
 
+/**
+ * @warning
+ * @b EXPERIMENTAL: This API may change without prior notice.
+ *
+ * For given input values generate raw tx_offload value.
+ * @param il2
+ *   l2_len value.
+ * @param il3
+ *   l3_len value.
+ * @param il4
+ *   l4_len value.
+ * @param tso
+ *   tso_segsz value.
+ * @param ol3
+ *   outer_l3_len value.
+ * @param ol2
+ *   outer_l2_len value.
+ * @return
+ *   raw tx_offload value.
+ */
+static inline uint64_t
+rte_mbuf_tx_offload(uint64_t il2, uint64_t il3, uint64_t il4, uint64_t tso,
+	uint64_t ol3, uint64_t ol2)
+{
+	return il2 << RTE_MBUF_L2_LEN_OFS |
+		il3 << RTE_MBUF_L3_LEN_OFS |
+		il4 << RTE_MBUF_L4_LEN_OFS |
+		tso << RTE_MBUF_TSO_SEGSZ_OFS |
+		ol3 << RTE_MBUF_OL3_LEN_OFS |
+		ol2 << RTE_MBUF_OL2_LEN_OFS;
+}
+
 /**
  * Validate general requirements for Tx offload in mbuf.
  *
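
For context, a minimal usage sketch of the new helper (not part of the diff
above; the function name set_tcp_tso_offload, the mbuf pointer m, the header
lengths and the MSS are illustrative assumptions, and the PKT_TX_* flag names
are those in use at the time of this patch):

    #include <rte_mbuf.h>

    /*
     * Illustrative sketch only: build the Tx offload metadata for a plain
     * (non-tunnelled) TCP/IPv4 packet prepared for TSO. The lengths are
     * example values: 14B Ethernet, 20B IPv4, 20B TCP, 1460B MSS.
     */
    static void
    set_tcp_tso_offload(struct rte_mbuf *m)
    {
        /*
         * One computed value and a single 64-bit store, instead of six
         * read-modify-write updates of individual bit-fields.
         * Argument order: il2, il3, il4, tso, ol3, ol2.
         */
        m->tx_offload = rte_mbuf_tx_offload(14, 20, 20, 1460, 0, 0);
        m->ol_flags |= PKT_TX_TCP_SEG | PKT_TX_IP_CKSUM | PKT_TX_IPV4;
    }

When the arguments are compile-time constants, the shifts fold into a single
constant, so the compiler can emit one store to tx_offload rather than a chain
of dependent bit-field updates, which is the situation the commit message is
aiming to avoid.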