[dpdk-dev,04/13] mbuf: expand ol_flags field to 64-bits

Message ID 1409759378-10113-5-git-send-email-bruce.richardson@intel.com (mailing list archive)
State Superseded, archived

Commit Message

Bruce Richardson Sept. 3, 2014, 3:49 p.m. UTC
  The offload flags field (ol_flags) was 16 bits wide and had no further
room for expansion. This patch increases the field size to 64 bits,
using up the remaining reserved space in the single-cache-line mbuf.

NOTE: none of the values for existing flags have been changed, i.e. no
new numbers have been explicitly reserved between existing flag
definitions.
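
Concretely, the structural change (abridged from the rte_mbuf.h hunk in
the patch below) swaps the two reserved padding fields for the widened
flags word, so the mbuf keeps its size and single-cache-line layout:

    struct rte_mbuf {
            /* ... */
            uint8_t nb_segs;        /**< Number of segments. */
            uint8_t port;           /**< Input port. */

            /* Previously 16-bit flags plus 48 bits of padding:
             *   uint16_t ol_flags;
             *   uint16_t reserved0;
             *   uint32_t reserved1;
             * Now one 64-bit flags field in the same 8 bytes. */
            uint64_t ol_flags;      /**< Offload features. */
            /* ... */
    };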

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
 app/test-pmd/config.c             |  8 ++---
 app/test-pmd/csumonly.c           |  8 ++---
 app/test-pmd/rxonly.c             |  2 +-
 app/test-pmd/testpmd.h            |  4 +--
 app/test-pmd/txonly.c             |  2 +-
 lib/librte_mbuf/rte_mbuf.c        |  2 +-
 lib/librte_mbuf/rte_mbuf.h        |  4 +--
 lib/librte_pmd_e1000/em_rxtx.c    | 34 ++++++++++-----------
 lib/librte_pmd_e1000/igb_rxtx.c   | 62 ++++++++++++++++++---------------------
 lib/librte_pmd_i40e/i40e_rxtx.c   | 31 ++++++++++----------
 lib/librte_pmd_ixgbe/ixgbe_rxtx.c | 61 ++++++++++++++++++--------------------
 lib/librte_pmd_ixgbe/ixgbe_rxtx.h |  2 +-
 12 files changed, 104 insertions(+), 116 deletions(-)
  

Comments

Olivier Matz Sept. 8, 2014, 10:25 a.m. UTC | #1
Hi Bruce,

On 09/03/2014 05:49 PM, Bruce Richardson wrote:
> The offload flags field (ol_flags) was 16 bits wide and had no further
> room for expansion. This patch increases the field size to 64 bits,
> using up the remaining reserved space in the single-cache-line mbuf.
> 
> NOTE: none of the values for existing flags have been changed, i.e. no
> new numbers have been explicitly reserved between existing flag
> definitions.
> 
> Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>

The initial series I proposed [1][2] had one more enhancement: the
first patch [1] allowed removing the definition of flag names in
testpmd. That duplication is not really good because the names must be
kept synchronized with the flags in rte_mbuf. What do you think about
this patch? Should it be integrated in your series? Or later? Or never? ;)
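
To illustrate the problem, a hypothetical sketch (not the actual code
from [1]): testpmd ends up carrying a name table along these lines, and
nothing forces it to stay in sync when rte_mbuf.h gains a new PKT_*
flag:

    /* Hypothetical testpmd-side table mirroring the PKT_* flag
     * definitions in rte_mbuf.h; it must be updated by hand whenever
     * a flag is added, renamed or renumbered. */
    struct flag_name {
            uint64_t flag;          /* PKT_* bit value */
            const char *name;       /* printable name */
    };

    static const struct flag_name rx_flag_names[] = {
            { PKT_RX_VLAN_PKT,     "PKT_RX_VLAN_PKT" },
            { PKT_RX_RSS_HASH,     "PKT_RX_RSS_HASH" },
            { PKT_RX_IP_CKSUM_BAD, "PKT_RX_IP_CKSUM_BAD" },
            { PKT_RX_L4_CKSUM_BAD, "PKT_RX_L4_CKSUM_BAD" },
    };

Keeping one such table next to the flag definitions themselves would
remove the duplication.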

The second patch [2] changes the value of the flags. This is not needed
now, but if we do it in the future, we should not forget to change
app/test-pmd/cmdline.c accordingly. Maybe this could go in your patch
directly as it does not hurt?

Olivier


[1] http://dpdk.org/ml/archives/dev/2014-May/002545.html
[2] http://dpdk.org/ml/archives/dev/2014-May/002546.html
  
Bruce Richardson Sept. 9, 2014, 9 a.m. UTC | #2
> -----Original Message-----
> From: Olivier MATZ [mailto:olivier.matz@6wind.com]
> Sent: Monday, September 08, 2014 11:26 AM
> To: Richardson, Bruce; dev@dpdk.org
> Subject: Re: [dpdk-dev] [PATCH 04/13] mbuf: expand ol_flags field to 64-bits
> 
> Hi Bruce,
> 
> On 09/03/2014 05:49 PM, Bruce Richardson wrote:
> > The offload flags field (ol_flags) was 16 bits wide and had no further
> > room for expansion. This patch increases the field size to 64 bits,
> > using up the remaining reserved space in the single-cache-line mbuf.
> >
> > NOTE: none of the values for existing flags have been changed, i.e. no
> > new numbers have been explicitly reserved between existing flag
> > definitions.
> >
> > Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
> 
> The initial series I proposed [1][2] had one more enhancement: the
> first patch [1] allowed removing the definition of flag names in
> testpmd. That duplication is not really good because the names must be
> kept synchronized with the flags in rte_mbuf. What do you think about
> this patch? Should it be integrated in your series? Or later? Or never? ;)

No, it is a good change - I've just kept it out of my series for simplicity, as I'm largely trying to keep the scope as small as possible. I would love to see that go in as a separate patch, maybe once the mbuf rework is finished.

> 
> The second patch [2] changes the value of the flags. This is not needed
> now, but if we do it in the future, we should not forget to change
> app/test-pmd/cmdline.c accordingly. Maybe this could go in your patch
> directly as it does not hurt?

Same as above for now. Right now I'm just trying to get the structure worked out and deal with any performance regressions that are found (such as the one Pablo found last Friday :-( ).

/Bruce

> 
> Olivier
> 
> 
> [1] http://dpdk.org/ml/archives/dev/2014-May/002545.html
> [2] http://dpdk.org/ml/archives/dev/2014-May/002546.html
  

Patch

diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index 606e34a..adfa9a8 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -1714,14 +1714,14 @@  set_qmap(portid_t port_id, uint8_t is_rx, uint16_t queue_id, uint8_t map_value)
 }
 
 void
-tx_cksum_set(portid_t port_id, uint8_t cksum_mask)
+tx_cksum_set(portid_t port_id, uint64_t ol_flags)
 {
-	uint16_t tx_ol_flags;
+	uint64_t tx_ol_flags;
 	if (port_id_is_invalid(port_id))
 		return;
 	/* Clear last 4 bits and then set L3/4 checksum mask again */
-	tx_ol_flags = (uint16_t) (ports[port_id].tx_ol_flags & 0xFFF0);
-	ports[port_id].tx_ol_flags = (uint16_t) ((cksum_mask & 0xf) | tx_ol_flags);
+	tx_ol_flags = ports[port_id].tx_ol_flags & (~0x0Full);
+	ports[port_id].tx_ol_flags = ((ol_flags & 0xf) | tx_ol_flags);
 }
 
 void
diff --git a/app/test-pmd/csumonly.c b/app/test-pmd/csumonly.c
index 2ce4c42..fcc4876 100644
--- a/app/test-pmd/csumonly.c
+++ b/app/test-pmd/csumonly.c
@@ -217,9 +217,9 @@  pkt_burst_checksum_forward(struct fwd_stream *fs)
 	uint16_t nb_rx;
 	uint16_t nb_tx;
 	uint16_t i;
-	uint16_t ol_flags;
-	uint16_t pkt_ol_flags;
-	uint16_t tx_ol_flags;
+	uint64_t ol_flags;
+	uint64_t pkt_ol_flags;
+	uint64_t tx_ol_flags;
 	uint16_t l4_proto;
 	uint16_t eth_type;
 	uint8_t  l2_len;
@@ -261,7 +261,7 @@  pkt_burst_checksum_forward(struct fwd_stream *fs)
 		mb = pkts_burst[i];
 		l2_len  = sizeof(struct ether_hdr);
 		pkt_ol_flags = mb->ol_flags;
-		ol_flags = (uint16_t) (pkt_ol_flags & (~PKT_TX_L4_MASK));
+		ol_flags = (pkt_ol_flags & (~PKT_TX_L4_MASK));
 
 		eth_hdr = rte_pktmbuf_mtod(mb, struct ether_hdr *);
 		eth_type = rte_be_to_cpu_16(eth_hdr->ether_type);
diff --git a/app/test-pmd/rxonly.c b/app/test-pmd/rxonly.c
index 7ba36a1..98c788b 100644
--- a/app/test-pmd/rxonly.c
+++ b/app/test-pmd/rxonly.c
@@ -109,7 +109,7 @@  pkt_burst_receive(struct fwd_stream *fs)
 	struct rte_mbuf  *mb;
 	struct ether_hdr *eth_hdr;
 	uint16_t eth_type;
-	uint16_t ol_flags;
+	uint64_t ol_flags;
 	uint16_t nb_rx;
 	uint16_t i;
 #ifdef RTE_TEST_PMD_RECORD_CORE_CYCLES
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index 09923a8..142091d 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -141,7 +141,7 @@  struct rte_port {
 	struct fwd_stream       *rx_stream; /**< Port RX stream, if unique */
 	struct fwd_stream       *tx_stream; /**< Port TX stream, if unique */
 	unsigned int            socket_id;  /**< For NUMA support */
-	uint16_t                tx_ol_flags;/**< Offload Flags of TX packets. */
+	uint64_t                tx_ol_flags;/**< Offload Flags of TX packets. */
 	uint16_t                tx_vlan_id; /**< Tag Id. in TX VLAN packets. */
 	void                    *fwd_ctx;   /**< Forwarding mode context */
 	uint64_t                rx_bad_ip_csum; /**< rx pkts with bad ip checksum  */
@@ -494,7 +494,7 @@  void tx_vlan_pvid_set(portid_t port_id, uint16_t vlan_id, int on);
 
 void set_qmap(portid_t port_id, uint8_t is_rx, uint16_t queue_id, uint8_t map_value);
 
-void tx_cksum_set(portid_t port_id, uint8_t cksum_mask);
+void tx_cksum_set(portid_t port_id, uint64_t ol_flags);
 
 void set_verbose_level(uint16_t vb_level);
 void set_tx_pkt_segments(unsigned *seg_lengths, unsigned nb_segs);
diff --git a/app/test-pmd/txonly.c b/app/test-pmd/txonly.c
index 2b3f0b9..3d08005 100644
--- a/app/test-pmd/txonly.c
+++ b/app/test-pmd/txonly.c
@@ -203,7 +203,7 @@  pkt_burst_transmit(struct fwd_stream *fs)
 	uint16_t nb_tx;
 	uint16_t nb_pkt;
 	uint16_t vlan_tci;
-	uint16_t ol_flags;
+	uint64_t ol_flags;
 	uint8_t  i;
 #ifdef RTE_TEST_PMD_RECORD_CORE_CYCLES
 	uint64_t start_tsc;
diff --git a/lib/librte_mbuf/rte_mbuf.c b/lib/librte_mbuf/rte_mbuf.c
index 26e36eb..9dfcac3 100644
--- a/lib/librte_mbuf/rte_mbuf.c
+++ b/lib/librte_mbuf/rte_mbuf.c
@@ -174,7 +174,7 @@  rte_pktmbuf_dump(FILE *f, const struct rte_mbuf *m, unsigned dump_len)
 
 	fprintf(f, "dump mbuf at 0x%p, phys=%"PRIx64", buf_len=%u\n",
 	       m, (uint64_t)m->buf_physaddr, (unsigned)m->buf_len);
-	fprintf(f, "  pkt_len=%"PRIu32", ol_flags=%"PRIx16", nb_segs=%u, "
+	fprintf(f, "  pkt_len=%"PRIu32", ol_flags=%"PRIx64", nb_segs=%u, "
 	       "in_port=%u\n", m->pkt_len, m->ol_flags,
 	       (unsigned)m->nb_segs, (unsigned)m->port);
 	nb_segs = m->nb_segs;
diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
index 8d0c6fb..6af5b2f 100644
--- a/lib/librte_mbuf/rte_mbuf.h
+++ b/lib/librte_mbuf/rte_mbuf.h
@@ -141,9 +141,7 @@  struct rte_mbuf {
 	uint8_t nb_segs;        /**< Number of segments. */
 	uint8_t port;           /**< Input port. */
 
-	uint16_t ol_flags;      /**< Offload features. */
-	uint16_t reserved0;     /**< Unused field. Required for padding */
-	uint32_t reserved1;     /**< Unused field. Required for padding */
+	uint64_t ol_flags;      /**< Offload features. */
 
 	/* remaining bytes are set on RX when pulling packet from descriptor */
 	uint16_t packet_type;   /**< Type of packet, e.g. protocols used */
diff --git a/lib/librte_pmd_e1000/em_rxtx.c b/lib/librte_pmd_e1000/em_rxtx.c
index 67736d6..eb76660 100644
--- a/lib/librte_pmd_e1000/em_rxtx.c
+++ b/lib/librte_pmd_e1000/em_rxtx.c
@@ -168,7 +168,7 @@  union em_vlan_macip {
  * Structure to check if new context need be built
  */
 struct em_ctx_info {
-	uint16_t flags;               /**< ol_flags related to context build. */
+	uint64_t flags;               /**< ol_flags related to context build. */
 	uint32_t cmp_mask;            /**< compare mask */
 	union em_vlan_macip hdrlen;  /**< L2 and L3 header lenghts */
 };
@@ -238,7 +238,7 @@  struct em_tx_queue {
 static inline void
 em_set_xmit_ctx(struct em_tx_queue* txq,
 		volatile struct e1000_context_desc *ctx_txd,
-		uint16_t flags,
+		uint64_t flags,
 		union em_vlan_macip hdrlen)
 {
 	uint32_t cmp_mask, cmd_len;
@@ -304,7 +304,7 @@  em_set_xmit_ctx(struct em_tx_queue* txq,
  * or create a new context descriptor.
  */
 static inline uint32_t
-what_ctx_update(struct em_tx_queue *txq, uint16_t flags,
+what_ctx_update(struct em_tx_queue *txq, uint64_t flags,
 		union em_vlan_macip hdrlen)
 {
 	/* If match with the current context */
@@ -377,7 +377,7 @@  em_xmit_cleanup(struct em_tx_queue *txq)
 }
 
 static inline uint32_t
-tx_desc_cksum_flags_to_upper(uint16_t ol_flags)
+tx_desc_cksum_flags_to_upper(uint64_t ol_flags)
 {
 	static const uint32_t l4_olinfo[2] = {0, E1000_TXD_POPTS_TXSM << 8};
 	static const uint32_t l3_olinfo[2] = {0, E1000_TXD_POPTS_IXSM << 8};
@@ -403,12 +403,12 @@  eth_em_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 	uint32_t popts_spec;
 	uint32_t cmd_type_len;
 	uint16_t slen;
-	uint16_t ol_flags;
+	uint64_t ol_flags;
 	uint16_t tx_id;
 	uint16_t tx_last;
 	uint16_t nb_tx;
 	uint16_t nb_used;
-	uint16_t tx_ol_req;
+	uint64_t tx_ol_req;
 	uint32_t ctx;
 	uint32_t new_ctx;
 	union em_vlan_macip hdrlen;
@@ -438,8 +438,7 @@  eth_em_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 		ol_flags = tx_pkt->ol_flags;
 
 		/* If hardware offload required */
-		tx_ol_req = (uint16_t)(ol_flags & (PKT_TX_IP_CKSUM |
-							PKT_TX_L4_MASK));
+		tx_ol_req = (ol_flags & (PKT_TX_IP_CKSUM | PKT_TX_L4_MASK));
 		if (tx_ol_req) {
 			hdrlen.f.vlan_tci = tx_pkt->vlan_tci;
 			hdrlen.f.l2_len = tx_pkt->l2_len;
@@ -642,22 +641,21 @@  end_of_tx:
  *
  **********************************************************************/
 
-static inline uint16_t
+static inline uint64_t
 rx_desc_status_to_pkt_flags(uint32_t rx_status)
 {
-	uint16_t pkt_flags;
+	uint64_t pkt_flags;
 
 	/* Check if VLAN present */
-	pkt_flags = (uint16_t)((rx_status & E1000_RXD_STAT_VP) ?
-						PKT_RX_VLAN_PKT : 0);
+	pkt_flags = ((rx_status & E1000_RXD_STAT_VP) ?  PKT_RX_VLAN_PKT : 0);
 
 	return pkt_flags;
 }
 
-static inline uint16_t
+static inline uint64_t
 rx_desc_error_to_pkt_flags(uint32_t rx_error)
 {
-	uint16_t pkt_flags = 0;
+	uint64_t pkt_flags = 0;
 
 	if (rx_error & E1000_RXD_ERR_IPE)
 		pkt_flags |= PKT_RX_IP_CKSUM_BAD;
@@ -801,8 +799,8 @@  eth_em_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 		rxm->port = rxq->port_id;
 
 		rxm->ol_flags = rx_desc_status_to_pkt_flags(status);
-		rxm->ol_flags = (uint16_t)(rxm->ol_flags |
-				rx_desc_error_to_pkt_flags(rxd.errors));
+		rxm->ol_flags = rxm->ol_flags |
+				rx_desc_error_to_pkt_flags(rxd.errors);
 
 		/* Only valid if PKT_RX_VLAN_PKT set in pkt_flags */
 		rxm->vlan_tci = rte_le_to_cpu_16(rxd.special);
@@ -1027,8 +1025,8 @@  eth_em_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 		first_seg->port = rxq->port_id;
 
 		first_seg->ol_flags = rx_desc_status_to_pkt_flags(status);
-		first_seg->ol_flags = (uint16_t)(first_seg->ol_flags |
-					rx_desc_error_to_pkt_flags(rxd.errors));
+		first_seg->ol_flags = first_seg->ol_flags |
+					rx_desc_error_to_pkt_flags(rxd.errors);
 
 		/* Only valid if PKT_RX_VLAN_PKT set in pkt_flags */
 		rxm->vlan_tci = rte_le_to_cpu_16(rxd.special);
diff --git a/lib/librte_pmd_e1000/igb_rxtx.c b/lib/librte_pmd_e1000/igb_rxtx.c
index e364dda..fab888d 100644
--- a/lib/librte_pmd_e1000/igb_rxtx.c
+++ b/lib/librte_pmd_e1000/igb_rxtx.c
@@ -176,7 +176,7 @@  union igb_vlan_macip {
  * Strucutre to check if new context need be built
  */
 struct igb_advctx_info {
-	uint16_t flags;           /**< ol_flags related to context build. */
+	uint64_t flags;           /**< ol_flags related to context build. */
 	uint32_t cmp_mask;        /**< compare mask for vlan_macip_lens */
 	union igb_vlan_macip vlan_macip_lens; /**< vlan, mac & ip length. */
 };
@@ -244,7 +244,7 @@  struct igb_tx_queue {
 static inline void
 igbe_set_xmit_ctx(struct igb_tx_queue* txq,
 		volatile struct e1000_adv_tx_context_desc *ctx_txd,
-		uint16_t ol_flags, uint32_t vlan_macip_lens)
+		uint64_t ol_flags, uint32_t vlan_macip_lens)
 {
 	uint32_t type_tucmd_mlhl;
 	uint32_t mss_l4len_idx;
@@ -309,7 +309,7 @@  igbe_set_xmit_ctx(struct igb_tx_queue* txq,
  * or create a new context descriptor.
  */
 static inline uint32_t
-what_advctx_update(struct igb_tx_queue *txq, uint16_t flags,
+what_advctx_update(struct igb_tx_queue *txq, uint64_t flags,
 		uint32_t vlan_macip_lens)
 {
 	/* If match with the current context */
@@ -332,7 +332,7 @@  what_advctx_update(struct igb_tx_queue *txq, uint16_t flags,
 }
 
 static inline uint32_t
-tx_desc_cksum_flags_to_olinfo(uint16_t ol_flags)
+tx_desc_cksum_flags_to_olinfo(uint64_t ol_flags)
 {
 	static const uint32_t l4_olinfo[2] = {0, E1000_ADVTXD_POPTS_TXSM};
 	static const uint32_t l3_olinfo[2] = {0, E1000_ADVTXD_POPTS_IXSM};
@@ -344,7 +344,7 @@  tx_desc_cksum_flags_to_olinfo(uint16_t ol_flags)
 }
 
 static inline uint32_t
-tx_desc_vlan_flags_to_cmdtype(uint16_t ol_flags)
+tx_desc_vlan_flags_to_cmdtype(uint64_t ol_flags)
 {
 	static uint32_t vlan_cmd[2] = {0, E1000_ADVTXD_DCMD_VLE};
 	return vlan_cmd[(ol_flags & PKT_TX_VLAN_PKT) != 0];
@@ -367,12 +367,12 @@  eth_igb_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 	uint32_t cmd_type_len;
 	uint32_t pkt_len;
 	uint16_t slen;
-	uint16_t ol_flags;
+	uint64_t ol_flags;
 	uint16_t tx_end;
 	uint16_t tx_id;
 	uint16_t tx_last;
 	uint16_t nb_tx;
-	uint16_t tx_ol_req;
+	uint64_t tx_ol_req;
 	uint32_t new_ctx = 0;
 	uint32_t ctx = 0;
 
@@ -402,7 +402,7 @@  eth_igb_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 		vlan_macip_lens.f.vlan_tci = tx_pkt->vlan_tci;
 		vlan_macip_lens.f.l2_len = tx_pkt->l2_len;
 		vlan_macip_lens.f.l3_len = tx_pkt->l3_len;
-		tx_ol_req = (uint16_t)(ol_flags & PKT_TX_OFFLOAD_MASK);
+		tx_ol_req = ol_flags & PKT_TX_OFFLOAD_MASK;
 
 		/* If a Context Descriptor need be built . */
 		if (tx_ol_req) {
@@ -589,12 +589,12 @@  eth_igb_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
  *  RX functions
  *
  **********************************************************************/
-static inline uint16_t
+static inline uint64_t
 rx_desc_hlen_type_rss_to_pkt_flags(uint32_t hl_tp_rs)
 {
-	uint16_t pkt_flags;
+	uint64_t pkt_flags;
 
-	static uint16_t ip_pkt_types_map[16] = {
+	static uint64_t ip_pkt_types_map[16] = {
 		0, PKT_RX_IPV4_HDR, PKT_RX_IPV4_HDR_EXT, PKT_RX_IPV4_HDR_EXT,
 		PKT_RX_IPV6_HDR, 0, 0, 0,
 		PKT_RX_IPV6_HDR_EXT, 0, 0, 0,
@@ -607,34 +607,32 @@  rx_desc_hlen_type_rss_to_pkt_flags(uint32_t hl_tp_rs)
 		0, 0, 0, 0,
 	};
 
-	pkt_flags = (uint16_t)((hl_tp_rs & E1000_RXDADV_PKTTYPE_ETQF) ?
+	pkt_flags = (hl_tp_rs & E1000_RXDADV_PKTTYPE_ETQF) ?
 				ip_pkt_etqf_map[(hl_tp_rs >> 4) & 0x07] :
-				ip_pkt_types_map[(hl_tp_rs >> 4) & 0x0F]);
+				ip_pkt_types_map[(hl_tp_rs >> 4) & 0x0F];
 #else
-	pkt_flags = (uint16_t)((hl_tp_rs & E1000_RXDADV_PKTTYPE_ETQF) ? 0 :
-				ip_pkt_types_map[(hl_tp_rs >> 4) & 0x0F]);
+	pkt_flags = (hl_tp_rs & E1000_RXDADV_PKTTYPE_ETQF) ? 0 :
+				ip_pkt_types_map[(hl_tp_rs >> 4) & 0x0F];
 #endif
-	return (uint16_t)(pkt_flags | (((hl_tp_rs & 0x0F) == 0) ?
-						0 : PKT_RX_RSS_HASH));
+	return pkt_flags | (((hl_tp_rs & 0x0F) == 0) ?  0 : PKT_RX_RSS_HASH);
 }
 
-static inline uint16_t
+static inline uint64_t
 rx_desc_status_to_pkt_flags(uint32_t rx_status)
 {
-	uint16_t pkt_flags;
+	uint64_t pkt_flags;
 
 	/* Check if VLAN present */
-	pkt_flags = (uint16_t)((rx_status & E1000_RXD_STAT_VP) ?
-						PKT_RX_VLAN_PKT : 0);
+	pkt_flags = (rx_status & E1000_RXD_STAT_VP) ?  PKT_RX_VLAN_PKT : 0;
 
 #if defined(RTE_LIBRTE_IEEE1588)
 	if (rx_status & E1000_RXD_STAT_TMST)
-		pkt_flags = (uint16_t)(pkt_flags | PKT_RX_IEEE1588_TMST);
+		pkt_flags = pkt_flags | PKT_RX_IEEE1588_TMST;
 #endif
 	return pkt_flags;
 }
 
-static inline uint16_t
+static inline uint64_t
 rx_desc_error_to_pkt_flags(uint32_t rx_status)
 {
 	/*
@@ -642,7 +640,7 @@  rx_desc_error_to_pkt_flags(uint32_t rx_status)
 	 * Bit 29: L4I, L4I integrity error
 	 */
 
-	static uint16_t error_to_pkt_flags_map[4] = {
+	static uint64_t error_to_pkt_flags_map[4] = {
 		0,  PKT_RX_L4_CKSUM_BAD, PKT_RX_IP_CKSUM_BAD,
 		PKT_RX_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD
 	};
@@ -669,7 +667,7 @@  eth_igb_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 	uint16_t rx_id;
 	uint16_t nb_rx;
 	uint16_t nb_hold;
-	uint16_t pkt_flags;
+	uint64_t pkt_flags;
 
 	nb_rx = 0;
 	nb_hold = 0;
@@ -788,10 +786,8 @@  eth_igb_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 		rxm->vlan_tci = rte_le_to_cpu_16(rxd.wb.upper.vlan);
 
 		pkt_flags = rx_desc_hlen_type_rss_to_pkt_flags(hlen_type_rss);
-		pkt_flags = (uint16_t)(pkt_flags |
-				rx_desc_status_to_pkt_flags(staterr));
-		pkt_flags = (uint16_t)(pkt_flags |
-				rx_desc_error_to_pkt_flags(staterr));
+		pkt_flags = pkt_flags | rx_desc_status_to_pkt_flags(staterr);
+		pkt_flags = pkt_flags | rx_desc_error_to_pkt_flags(staterr);
 		rxm->ol_flags = pkt_flags;
 
 		/*
@@ -848,7 +844,7 @@  eth_igb_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 	uint16_t nb_rx;
 	uint16_t nb_hold;
 	uint16_t data_len;
-	uint16_t pkt_flags;
+	uint64_t pkt_flags;
 
 	nb_rx = 0;
 	nb_hold = 0;
@@ -1024,10 +1020,8 @@  eth_igb_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 		first_seg->vlan_tci = rte_le_to_cpu_16(rxd.wb.upper.vlan);
 		hlen_type_rss = rte_le_to_cpu_32(rxd.wb.lower.lo_dword.data);
 		pkt_flags = rx_desc_hlen_type_rss_to_pkt_flags(hlen_type_rss);
-		pkt_flags = (uint16_t)(pkt_flags |
-				rx_desc_status_to_pkt_flags(staterr));
-		pkt_flags = (uint16_t)(pkt_flags |
-				rx_desc_error_to_pkt_flags(staterr));
+		pkt_flags = pkt_flags | rx_desc_status_to_pkt_flags(staterr);
+		pkt_flags = pkt_flags | rx_desc_error_to_pkt_flags(staterr);
 		first_seg->ol_flags = pkt_flags;
 
 		/* Prefetch data of first segment, if configured to do so. */
diff --git a/lib/librte_pmd_i40e/i40e_rxtx.c b/lib/librte_pmd_i40e/i40e_rxtx.c
index 25a5f6f..cd05654 100644
--- a/lib/librte_pmd_i40e/i40e_rxtx.c
+++ b/lib/librte_pmd_i40e/i40e_rxtx.c
@@ -91,27 +91,27 @@  static uint16_t i40e_xmit_pkts_simple(void *tx_queue,
 				      uint16_t nb_pkts);
 
 /* Translate the rx descriptor status to pkt flags */
-static inline uint16_t
+static inline uint64_t
 i40e_rxd_status_to_pkt_flags(uint64_t qword)
 {
-	uint16_t flags;
+	uint64_t flags;
 
 	/* Check if VLAN packet */
-	flags = (uint16_t)(qword & (1 << I40E_RX_DESC_STATUS_L2TAG1P_SHIFT) ?
-							PKT_RX_VLAN_PKT : 0);
+	flags = qword & (1 << I40E_RX_DESC_STATUS_L2TAG1P_SHIFT) ?
+							PKT_RX_VLAN_PKT : 0;
 
 	/* Check if RSS_HASH */
-	flags |= (uint16_t)((((qword >> I40E_RX_DESC_STATUS_FLTSTAT_SHIFT) &
+	flags |= (((qword >> I40E_RX_DESC_STATUS_FLTSTAT_SHIFT) &
 					I40E_RX_DESC_FLTSTAT_RSS_HASH) ==
-			I40E_RX_DESC_FLTSTAT_RSS_HASH) ? PKT_RX_RSS_HASH : 0);
+			I40E_RX_DESC_FLTSTAT_RSS_HASH) ? PKT_RX_RSS_HASH : 0;
 
 	return flags;
 }
 
-static inline uint16_t
+static inline uint64_t
 i40e_rxd_error_to_pkt_flags(uint64_t qword)
 {
-	uint16_t flags = 0;
+	uint64_t flags = 0;
 	uint64_t error_bits = (qword >> I40E_RXD_QW1_ERROR_SHIFT);
 
 #define I40E_RX_ERR_BITS 0x3f
@@ -143,12 +143,12 @@  i40e_rxd_error_to_pkt_flags(uint64_t qword)
 }
 
 /* Translate pkt types to pkt flags */
-static inline uint16_t
+static inline uint64_t
 i40e_rxd_ptype_to_pkt_flags(uint64_t qword)
 {
 	uint8_t ptype = (uint8_t)((qword & I40E_RXD_QW1_PTYPE_MASK) >>
 					I40E_RXD_QW1_PTYPE_SHIFT);
-	static const uint16_t ip_ptype_map[I40E_MAX_PKT_TYPE] = {
+	static const uint64_t ip_ptype_map[I40E_MAX_PKT_TYPE] = {
 		0, /* PTYPE 0 */
 		0, /* PTYPE 1 */
 		0, /* PTYPE 2 */
@@ -567,7 +567,7 @@  i40e_rx_scan_hw_ring(struct i40e_rx_queue *rxq)
 	uint32_t rx_status;
 	int32_t s[I40E_LOOK_AHEAD], nb_dd;
 	int32_t i, j, nb_rx = 0;
-	uint16_t pkt_flags;
+	uint64_t pkt_flags;
 
 	rxdp = &rxq->rx_ring[rxq->rx_tail];
 	rxep = &rxq->sw_ring[rxq->rx_tail];
@@ -789,7 +789,7 @@  i40e_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 	uint16_t rx_packet_len;
 	uint16_t rx_id, nb_hold;
 	uint64_t dma_addr;
-	uint16_t pkt_flags;
+	uint64_t pkt_flags;
 
 	nb_rx = 0;
 	nb_hold = 0;
@@ -896,10 +896,11 @@  i40e_recv_scattered_pkts(void *rx_queue,
 	struct rte_mbuf *last_seg = rxq->pkt_last_seg;
 	struct rte_mbuf *nmb, *rxm;
 	uint16_t rx_id = rxq->rx_tail;
-	uint16_t nb_rx = 0, nb_hold = 0, rx_packet_len, pkt_flags;
+	uint16_t nb_rx = 0, nb_hold = 0, rx_packet_len;
 	uint32_t rx_status;
 	uint64_t qword1;
 	uint64_t dma_addr;
+	uint64_t pkt_flags;
 
 	while (nb_rx < nb_pkts) {
 		rxdp = &rx_ring[rx_id];
@@ -1046,7 +1047,7 @@  i40e_recv_scattered_pkts(void *rx_queue,
 
 /* Check if the context descriptor is needed for TX offloading */
 static inline uint16_t
-i40e_calc_context_desc(uint16_t flags)
+i40e_calc_context_desc(uint64_t flags)
 {
 	uint16_t mask = 0;
 
@@ -1075,7 +1076,7 @@  i40e_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 	uint32_t td_offset;
 	uint32_t tx_flags;
 	uint32_t td_tag;
-	uint16_t ol_flags;
+	uint64_t ol_flags;
 	uint8_t l2_len;
 	uint8_t l3_len;
 	uint16_t nb_used;
diff --git a/lib/librte_pmd_ixgbe/ixgbe_rxtx.c b/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
index 964ae06..c7f1642 100644
--- a/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
+++ b/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
@@ -358,7 +358,7 @@  ixgbe_xmit_pkts_simple(void *tx_queue, struct rte_mbuf **tx_pkts,
 static inline void
 ixgbe_set_xmit_ctx(struct igb_tx_queue* txq,
 		volatile struct ixgbe_adv_tx_context_desc *ctx_txd,
-		uint16_t ol_flags, uint32_t vlan_macip_lens)
+		uint64_t ol_flags, uint32_t vlan_macip_lens)
 {
 	uint32_t type_tucmd_mlhl;
 	uint32_t mss_l4len_idx;
@@ -421,7 +421,7 @@  ixgbe_set_xmit_ctx(struct igb_tx_queue* txq,
  * or create a new context descriptor.
  */
 static inline uint32_t
-what_advctx_update(struct igb_tx_queue *txq, uint16_t flags,
+what_advctx_update(struct igb_tx_queue *txq, uint64_t flags,
 		uint32_t vlan_macip_lens)
 {
 	/* If match with the current used context */
@@ -444,7 +444,7 @@  what_advctx_update(struct igb_tx_queue *txq, uint16_t flags,
 }
 
 static inline uint32_t
-tx_desc_cksum_flags_to_olinfo(uint16_t ol_flags)
+tx_desc_cksum_flags_to_olinfo(uint64_t ol_flags)
 {
 	static const uint32_t l4_olinfo[2] = {0, IXGBE_ADVTXD_POPTS_TXSM};
 	static const uint32_t l3_olinfo[2] = {0, IXGBE_ADVTXD_POPTS_IXSM};
@@ -456,7 +456,7 @@  tx_desc_cksum_flags_to_olinfo(uint16_t ol_flags)
 }
 
 static inline uint32_t
-tx_desc_vlan_flags_to_cmdtype(uint16_t ol_flags)
+tx_desc_vlan_flags_to_cmdtype(uint64_t ol_flags)
 {
 	static const uint32_t vlan_cmd[2] = {0, IXGBE_ADVTXD_DCMD_VLE};
 	return vlan_cmd[(ol_flags & PKT_TX_VLAN_PKT) != 0];
@@ -546,12 +546,12 @@  ixgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 	uint32_t cmd_type_len;
 	uint32_t pkt_len;
 	uint16_t slen;
-	uint16_t ol_flags;
+	uint64_t ol_flags;
 	uint16_t tx_id;
 	uint16_t tx_last;
 	uint16_t nb_tx;
 	uint16_t nb_used;
-	uint16_t tx_ol_req;
+	uint64_t tx_ol_req;
 	uint32_t ctx = 0;
 	uint32_t new_ctx;
 
@@ -584,7 +584,7 @@  ixgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 		vlan_macip_lens.f.l3_len = tx_pkt->l3_len;
 
 		/* If hardware offload required */
-		tx_ol_req = (uint16_t)(ol_flags & PKT_TX_OFFLOAD_MASK);
+		tx_ol_req = ol_flags & PKT_TX_OFFLOAD_MASK;
 		if (tx_ol_req) {
 			/* If new context need be built or reuse the exist ctx. */
 			ctx = what_advctx_update(txq, tx_ol_req,
@@ -814,19 +814,19 @@  end_of_tx:
  *  RX functions
  *
  **********************************************************************/
-static inline uint16_t
+static inline uint64_t
 rx_desc_hlen_type_rss_to_pkt_flags(uint32_t hl_tp_rs)
 {
 	uint16_t pkt_flags;
 
-	static uint16_t ip_pkt_types_map[16] = {
+	static uint64_t ip_pkt_types_map[16] = {
 		0, PKT_RX_IPV4_HDR, PKT_RX_IPV4_HDR_EXT, PKT_RX_IPV4_HDR_EXT,
 		PKT_RX_IPV6_HDR, 0, 0, 0,
 		PKT_RX_IPV6_HDR_EXT, 0, 0, 0,
 		PKT_RX_IPV6_HDR_EXT, 0, 0, 0,
 	};
 
-	static uint16_t ip_rss_types_map[16] = {
+	static uint64_t ip_rss_types_map[16] = {
 		0, PKT_RX_RSS_HASH, PKT_RX_RSS_HASH, PKT_RX_RSS_HASH,
 		0, PKT_RX_RSS_HASH, 0, PKT_RX_RSS_HASH,
 		PKT_RX_RSS_HASH, 0, 0, 0,
@@ -839,45 +839,44 @@  rx_desc_hlen_type_rss_to_pkt_flags(uint32_t hl_tp_rs)
 		0, 0, 0, 0,
 	};
 
-	pkt_flags = (uint16_t) ((hl_tp_rs & IXGBE_RXDADV_PKTTYPE_ETQF) ?
-				ip_pkt_etqf_map[(hl_tp_rs >> 4) & 0x07] :
-				ip_pkt_types_map[(hl_tp_rs >> 4) & 0x0F]);
+	pkt_flags = (hl_tp_rs & IXGBE_RXDADV_PKTTYPE_ETQF) ?
+			ip_pkt_etqf_map[(hl_tp_rs >> 4) & 0x07] :
+			ip_pkt_types_map[(hl_tp_rs >> 4) & 0x0F];
 #else
-	pkt_flags = (uint16_t) ((hl_tp_rs & IXGBE_RXDADV_PKTTYPE_ETQF) ? 0 :
-				ip_pkt_types_map[(hl_tp_rs >> 4) & 0x0F]);
+	pkt_flags = (hl_tp_rs & IXGBE_RXDADV_PKTTYPE_ETQF) ? 0 :
+			ip_pkt_types_map[(hl_tp_rs >> 4) & 0x0F];
 
 #endif
-	return (uint16_t)(pkt_flags | ip_rss_types_map[hl_tp_rs & 0xF]);
+	return pkt_flags | ip_rss_types_map[hl_tp_rs & 0xF];
 }
 
-static inline uint16_t
+static inline uint64_t
 rx_desc_status_to_pkt_flags(uint32_t rx_status)
 {
-	uint16_t pkt_flags;
+	uint64_t pkt_flags;
 
 	/*
 	 * Check if VLAN present only.
 	 * Do not check whether L3/L4 rx checksum done by NIC or not,
 	 * That can be found from rte_eth_rxmode.hw_ip_checksum flag
 	 */
-	pkt_flags = (uint16_t)((rx_status & IXGBE_RXD_STAT_VP) ?
-						PKT_RX_VLAN_PKT : 0);
+	pkt_flags = (rx_status & IXGBE_RXD_STAT_VP) ?  PKT_RX_VLAN_PKT : 0;
 
 #ifdef RTE_LIBRTE_IEEE1588
 	if (rx_status & IXGBE_RXD_STAT_TMST)
-		pkt_flags = (uint16_t)(pkt_flags | PKT_RX_IEEE1588_TMST);
+		pkt_flags = pkt_flags | PKT_RX_IEEE1588_TMST;
 #endif
 	return pkt_flags;
 }
 
-static inline uint16_t
+static inline uint64_t
 rx_desc_error_to_pkt_flags(uint32_t rx_status)
 {
 	/*
 	 * Bit 31: IPE, IPv4 checksum error
 	 * Bit 30: L4I, L4I integrity error
 	 */
-	static uint16_t error_to_pkt_flags_map[4] = {
+	static uint64_t error_to_pkt_flags_map[4] = {
 		0,  PKT_RX_L4_CKSUM_BAD, PKT_RX_IP_CKSUM_BAD,
 		PKT_RX_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD
 	};
@@ -948,10 +947,10 @@  ixgbe_rx_scan_hw_ring(struct igb_rx_queue *rxq)
 			mb->ol_flags  = rx_desc_hlen_type_rss_to_pkt_flags(
 					rxdp[j].wb.lower.lo_dword.data);
 			/* reuse status field from scan list */
-			mb->ol_flags = (uint16_t)(mb->ol_flags |
-					rx_desc_status_to_pkt_flags(s[j]));
-			mb->ol_flags = (uint16_t)(mb->ol_flags |
-					rx_desc_error_to_pkt_flags(s[j]));
+			mb->ol_flags = mb->ol_flags |
+					rx_desc_status_to_pkt_flags(s[j]);
+			mb->ol_flags = mb->ol_flags |
+					rx_desc_error_to_pkt_flags(s[j]);
 		}
 
 		/* Move mbuf pointers from the S/W ring to the stage */
@@ -1144,7 +1143,7 @@  ixgbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 	uint16_t rx_id;
 	uint16_t nb_rx;
 	uint16_t nb_hold;
-	uint16_t pkt_flags;
+	uint64_t pkt_flags;
 
 	nb_rx = 0;
 	nb_hold = 0;
@@ -1262,10 +1261,8 @@  ixgbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 		rxm->vlan_tci = rte_le_to_cpu_16(rxd.wb.upper.vlan);
 
 		pkt_flags = rx_desc_hlen_type_rss_to_pkt_flags(hlen_type_rss);
-		pkt_flags = (uint16_t)(pkt_flags |
-				rx_desc_status_to_pkt_flags(staterr));
-		pkt_flags = (uint16_t)(pkt_flags |
-				rx_desc_error_to_pkt_flags(staterr));
+		pkt_flags = pkt_flags | rx_desc_status_to_pkt_flags(staterr);
+		pkt_flags = pkt_flags | rx_desc_error_to_pkt_flags(staterr);
 		rxm->ol_flags = pkt_flags;
 
 		if (likely(pkt_flags & PKT_RX_RSS_HASH))
diff --git a/lib/librte_pmd_ixgbe/ixgbe_rxtx.h b/lib/librte_pmd_ixgbe/ixgbe_rxtx.h
index 38a3a03..70dbcb0 100644
--- a/lib/librte_pmd_ixgbe/ixgbe_rxtx.h
+++ b/lib/librte_pmd_ixgbe/ixgbe_rxtx.h
@@ -178,7 +178,7 @@  union ixgbe_vlan_macip {
  */
 
 struct ixgbe_advctx_info {
-	uint16_t flags;           /**< ol_flags for context build. */
+	uint64_t flags;           /**< ol_flags for context build. */
 	uint32_t cmp_mask;        /**< compare mask for vlan_macip_lens */
 	union ixgbe_vlan_macip vlan_macip_lens; /**< vlan, mac ip length. */
 };