From patchwork Tue Jan 17 13:26:17 2023
X-Patchwork-Submitter: Simei Su
X-Patchwork-Id: 122185
X-Patchwork-Delegate: qi.z.zhang@intel.com
From: Simei Su
To: qi.z.zhang@intel.com, junfeng.guo@intel.com
Cc: dev@dpdk.org, wenjun1.wu@intel.com, Simei Su
Subject: [PATCH v2 1/3] net/igc: code refactoring
Date: Tue, 17 Jan 2023 21:26:17 +0800
Message-Id: <20230117132619.83712-2-simei.su@intel.com>
X-Mailer: git-send-email 2.9.5
In-Reply-To: <20230117132619.83712-1-simei.su@intel.com>
References: <20221220034103.441524-1-simei.su@intel.com>
 <20230117132619.83712-1-simei.su@intel.com>

This patch moves some structure definitions from igc_txrx.c to igc_txrx.h
in preparation for the timesync enabling feature. With the definitions in
the header, fields of the "igc_rx_queue" structure can be accessed from
both igc_ethdev.c and igc_txrx.c more conveniently. This is also
consistent with the coding style of other PMDs.

Signed-off-by: Simei Su
---
 drivers/net/igc/igc_txrx.c | 118 ---------------------------------------------
 drivers/net/igc/igc_txrx.h | 115 +++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 115 insertions(+), 118 deletions(-)

diff --git a/drivers/net/igc/igc_txrx.c b/drivers/net/igc/igc_txrx.c
index ffd219b..c462e91 100644
--- a/drivers/net/igc/igc_txrx.c
+++ b/drivers/net/igc/igc_txrx.c
@@ -93,124 +93,6 @@
 #define IGC_TX_OFFLOAD_NOTSUP_MASK (RTE_MBUF_F_TX_OFFLOAD_MASK ^ IGC_TX_OFFLOAD_MASK)
 
-/**
- * Structure associated with each descriptor of the RX ring of a RX queue.
- */
-struct igc_rx_entry {
-        struct rte_mbuf *mbuf; /**< mbuf associated with RX descriptor. */
-};
-
-/**
- * Structure associated with each RX queue.
- */
-struct igc_rx_queue {
-        struct rte_mempool *mb_pool; /**< mbuf pool to populate RX ring. */
-        volatile union igc_adv_rx_desc *rx_ring;
-        /**< RX ring virtual address. */
-        uint64_t rx_ring_phys_addr; /**< RX ring DMA address. */
-        volatile uint32_t *rdt_reg_addr; /**< RDT register address. */
-        volatile uint32_t *rdh_reg_addr; /**< RDH register address. */
-        struct igc_rx_entry *sw_ring; /**< address of RX software ring. */
-        struct rte_mbuf *pkt_first_seg; /**< First segment of current packet. */
-        struct rte_mbuf *pkt_last_seg; /**< Last segment of current packet. */
-        uint16_t nb_rx_desc; /**< number of RX descriptors. */
-        uint16_t rx_tail; /**< current value of RDT register. */
-        uint16_t nb_rx_hold; /**< number of held free RX desc. */
-        uint16_t rx_free_thresh; /**< max free RX desc to hold. */
-        uint16_t queue_id; /**< RX queue index. */
-        uint16_t reg_idx; /**< RX queue register index. */
-        uint16_t port_id; /**< Device port identifier. */
-        uint8_t pthresh; /**< Prefetch threshold register. */
-        uint8_t hthresh; /**< Host threshold register. */
-        uint8_t wthresh; /**< Write-back threshold register. */
-        uint8_t crc_len; /**< 0 if CRC stripped, 4 otherwise. */
-        uint8_t drop_en; /**< If not 0, set SRRCTL.Drop_En. */
-        uint32_t flags; /**< RX flags. */
-        uint64_t offloads; /**< offloads of RTE_ETH_RX_OFFLOAD_* */
-};
-
-/** Offload features */
-union igc_tx_offload {
-        uint64_t data;
-        struct {
-                uint64_t l3_len:9; /**< L3 (IP) Header Length. */
-                uint64_t l2_len:7; /**< L2 (MAC) Header Length. */
-                uint64_t vlan_tci:16;
-                /**< VLAN Tag Control Identifier(CPU order). */
-                uint64_t l4_len:8; /**< L4 (TCP/UDP) Header Length. */
-                uint64_t tso_segsz:16; /**< TCP TSO segment size. */
-                /* uint64_t unused:8; */
-        };
-};
-
-/*
- * Compare mask for igc_tx_offload.data,
- * should be in sync with igc_tx_offload layout.
- */
-#define TX_MACIP_LEN_CMP_MASK 0x000000000000FFFFULL /**< L2L3 header mask. */
-#define TX_VLAN_CMP_MASK 0x00000000FFFF0000ULL /**< Vlan mask. */
-#define TX_TCP_LEN_CMP_MASK 0x000000FF00000000ULL /**< TCP header mask. */
-#define TX_TSO_MSS_CMP_MASK 0x00FFFF0000000000ULL /**< TSO segsz mask. */
-/** Mac + IP + TCP + Mss mask. */
-#define TX_TSO_CMP_MASK \
-        (TX_MACIP_LEN_CMP_MASK | TX_TCP_LEN_CMP_MASK | TX_TSO_MSS_CMP_MASK)
-
-/**
- * Structure to check if new context need be built
- */
-struct igc_advctx_info {
-        uint64_t flags; /**< ol_flags related to context build. */
-        /** tx offload: vlan, tso, l2-l3-l4 lengths. */
-        union igc_tx_offload tx_offload;
-        /** compare mask for tx offload. */
-        union igc_tx_offload tx_offload_mask;
-};
-
-/**
- * Hardware context number
- */
-enum {
-        IGC_CTX_0 = 0, /**< CTX0 */
-        IGC_CTX_1 = 1, /**< CTX1 */
-        IGC_CTX_NUM = 2, /**< CTX_NUM */
-};
-
-/**
- * Structure associated with each descriptor of the TX ring of a TX queue.
- */
-struct igc_tx_entry {
-        struct rte_mbuf *mbuf; /**< mbuf associated with TX desc, if any. */
-        uint16_t next_id; /**< Index of next descriptor in ring. */
-        uint16_t last_id; /**< Index of last scattered descriptor. */
-};
-
-/**
- * Structure associated with each TX queue.
- */
-struct igc_tx_queue {
-        volatile union igc_adv_tx_desc *tx_ring; /**< TX ring address */
-        uint64_t tx_ring_phys_addr; /**< TX ring DMA address. */
-        struct igc_tx_entry *sw_ring; /**< virtual address of SW ring. */
-        volatile uint32_t *tdt_reg_addr; /**< Address of TDT register. */
-        uint32_t txd_type; /**< Device-specific TXD type */
-        uint16_t nb_tx_desc; /**< number of TX descriptors. */
-        uint16_t tx_tail; /**< Current value of TDT register. */
-        uint16_t tx_head;
-        /**< Index of first used TX descriptor. */
-        uint16_t queue_id; /**< TX queue index. */
-        uint16_t reg_idx; /**< TX queue register index. */
-        uint16_t port_id; /**< Device port identifier. */
-        uint8_t pthresh; /**< Prefetch threshold register. */
-        uint8_t hthresh; /**< Host threshold register. */
-        uint8_t wthresh; /**< Write-back threshold register. */
-        uint8_t ctx_curr;
-
-        /**< Start context position for transmit queue. */
-        struct igc_advctx_info ctx_cache[IGC_CTX_NUM];
-        /**< Hardware context history.*/
-        uint64_t offloads; /**< offloads of RTE_ETH_TX_OFFLOAD_* */
-};
-
 static inline uint64_t
 rx_desc_statuserr_to_pkt_flags(uint32_t statuserr)
 {
diff --git a/drivers/net/igc/igc_txrx.h b/drivers/net/igc/igc_txrx.h
index 02a0a05..5731761 100644
--- a/drivers/net/igc/igc_txrx.h
+++ b/drivers/net/igc/igc_txrx.h
@@ -11,6 +11,121 @@
 extern "C" {
 #endif
 
+struct igc_rx_entry {
+        struct rte_mbuf *mbuf; /**< mbuf associated with RX descriptor. */
+};
+
+/**
+ * Structure associated with each RX queue.
+ */
+struct igc_rx_queue {
+        struct rte_mempool *mb_pool; /**< mbuf pool to populate RX ring. */
+        volatile union igc_adv_rx_desc *rx_ring;
+        /**< RX ring virtual address. */
+        uint64_t rx_ring_phys_addr; /**< RX ring DMA address. */
+        volatile uint32_t *rdt_reg_addr; /**< RDT register address. */
+        volatile uint32_t *rdh_reg_addr; /**< RDH register address. */
+        struct igc_rx_entry *sw_ring; /**< address of RX software ring. */
+        struct rte_mbuf *pkt_first_seg; /**< First segment of current packet. */
+        struct rte_mbuf *pkt_last_seg; /**< Last segment of current packet. */
+        uint16_t nb_rx_desc; /**< number of RX descriptors. */
+        uint16_t rx_tail; /**< current value of RDT register. */
+        uint16_t nb_rx_hold; /**< number of held free RX desc. */
+        uint16_t rx_free_thresh; /**< max free RX desc to hold. */
+        uint16_t queue_id; /**< RX queue index. */
+        uint16_t reg_idx; /**< RX queue register index. */
+        uint16_t port_id; /**< Device port identifier. */
+        uint8_t pthresh; /**< Prefetch threshold register. */
+        uint8_t hthresh; /**< Host threshold register. */
+        uint8_t wthresh; /**< Write-back threshold register. */
+        uint8_t crc_len; /**< 0 if CRC stripped, 4 otherwise. */
+        uint8_t drop_en; /**< If not 0, set SRRCTL.Drop_En. */
+        uint32_t flags; /**< RX flags. */
+        uint64_t offloads; /**< offloads of RTE_ETH_RX_OFFLOAD_* */
+};
+
+/** Offload features */
+union igc_tx_offload {
+        uint64_t data;
+        struct {
+                uint64_t l3_len:9; /**< L3 (IP) Header Length. */
+                uint64_t l2_len:7; /**< L2 (MAC) Header Length. */
+                uint64_t vlan_tci:16;
+                /**< VLAN Tag Control Identifier(CPU order). */
+                uint64_t l4_len:8; /**< L4 (TCP/UDP) Header Length. */
+                uint64_t tso_segsz:16; /**< TCP TSO segment size. */
+                /* uint64_t unused:8; */
+        };
+};
+
+/**
+ * Compare mask for igc_tx_offload.data,
+ * should be in sync with igc_tx_offload layout.
+ */
+#define TX_MACIP_LEN_CMP_MASK 0x000000000000FFFFULL /**< L2L3 header mask. */
+#define TX_VLAN_CMP_MASK 0x00000000FFFF0000ULL /**< Vlan mask. */
+#define TX_TCP_LEN_CMP_MASK 0x000000FF00000000ULL /**< TCP header mask. */
+#define TX_TSO_MSS_CMP_MASK 0x00FFFF0000000000ULL /**< TSO segsz mask. */
+/** Mac + IP + TCP + Mss mask. */
+#define TX_TSO_CMP_MASK \
+        (TX_MACIP_LEN_CMP_MASK | TX_TCP_LEN_CMP_MASK | TX_TSO_MSS_CMP_MASK)
+
+/**
+ * Structure to check if new context need be built
+ */
+struct igc_advctx_info {
+        uint64_t flags; /**< ol_flags related to context build. */
+        /** tx offload: vlan, tso, l2-l3-l4 lengths. */
+        union igc_tx_offload tx_offload;
+        /** compare mask for tx offload. */
+        union igc_tx_offload tx_offload_mask;
+};
+
+/**
+ * Hardware context number
+ */
+enum {
+        IGC_CTX_0 = 0, /**< CTX0 */
+        IGC_CTX_1 = 1, /**< CTX1 */
+        IGC_CTX_NUM = 2, /**< CTX_NUM */
+};
+
+/**
+ * Structure associated with each descriptor of the TX ring of a TX queue.
+ */
+struct igc_tx_entry {
+        struct rte_mbuf *mbuf; /**< mbuf associated with TX desc, if any. */
+        uint16_t next_id; /**< Index of next descriptor in ring. */
+        uint16_t last_id; /**< Index of last scattered descriptor. */
+};
+
+/**
+ * Structure associated with each TX queue.
+ */
+struct igc_tx_queue {
+        volatile union igc_adv_tx_desc *tx_ring; /**< TX ring address */
+        uint64_t tx_ring_phys_addr; /**< TX ring DMA address. */
+        struct igc_tx_entry *sw_ring; /**< virtual address of SW ring. */
+        volatile uint32_t *tdt_reg_addr; /**< Address of TDT register. */
+        uint32_t txd_type; /**< Device-specific TXD type */
+        uint16_t nb_tx_desc; /**< number of TX descriptors. */
+        uint16_t tx_tail; /**< Current value of TDT register. */
+        uint16_t tx_head;
+        /**< Index of first used TX descriptor. */
+        uint16_t queue_id; /**< TX queue index. */
+        uint16_t reg_idx; /**< TX queue register index. */
+        uint16_t port_id; /**< Device port identifier. */
+        uint8_t pthresh; /**< Prefetch threshold register. */
+        uint8_t hthresh; /**< Host threshold register. */
+        uint8_t wthresh; /**< Write-back threshold register. */
+        uint8_t ctx_curr;
+
+        /**< Start context position for transmit queue. */
+        struct igc_advctx_info ctx_cache[IGC_CTX_NUM];
+        /**< Hardware context history.*/
+        uint64_t offloads; /**< offloads of RTE_ETH_TX_OFFLOAD_* */
+};
+
 /*
  * RX/TX function prototypes
  */
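
For illustration only, not part of the patch itself: once these definitions
live in igc_txrx.h, code outside igc_txrx.c (for example, the timesync
handlers in igc_ethdev.c) can include the header and read per-queue fields
directly. A minimal sketch, assuming igc_txrx.h is on the include path; the
helper name and the flag parameter below are hypothetical:

/* Hypothetical helper, shown only to illustrate header-level access to
 * struct igc_rx_queue fields from outside igc_txrx.c. */
#include <stdint.h>
#include <stddef.h>
#include "igc_txrx.h"

/* Count the RX queues of a port that have a given bit set in rxq->flags. */
static uint16_t
igc_count_rxq_with_flag(struct igc_rx_queue **rxq, uint16_t nb_rxq,
                        uint32_t flag)
{
        uint16_t i, cnt = 0;

        for (i = 0; i < nb_rxq; i++)
                if (rxq[i] != NULL && (rxq[i]->flags & flag) != 0)
                        cnt++;
        return cnt;
}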