Show a single patch, including its metadata, mail headers, commit message, and diff.

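Before the raw request/response below, here is a minimal Python sketch of working with this endpoint. The helper names (`patch_url`, `summarize`) are illustrative, not part of the Patchwork API; only the URL pattern and the JSON fields come from the response shown. A real client would fetch the URL over HTTP (e.g. with `requests` or `urllib`) instead of parsing the inline sample used here.

```python
import json

# Base URL of the Patchwork instance shown in this example.
API_BASE = "https://patches.dpdk.org/api"

def patch_url(patch_id):
    """Build the REST endpoint for a single patch (illustrative helper)."""
    return f"{API_BASE}/patches/{patch_id}/"

def summarize(patch):
    """Pull out a few commonly used fields from a patch object."""
    return {
        "id": patch["id"],
        "name": patch["name"],
        "state": patch["state"],
        "submitter": patch["submitter"]["name"],
        "mbox": patch["mbox"],
    }

# Trimmed copy of the response body documented below, stood in
# for an actual HTTP GET so the sketch is self-contained.
sample = json.loads("""{
    "id": 255,
    "name": "[dpdk-dev,4/6] mbuf: remove the rte_pktmbuf structure",
    "state": "superseded",
    "submitter": {"name": "Bruce Richardson"},
    "mbox": "https://patches.dpdk.org/patch/255/mbox/"
}""")
print(summarize(sample)["state"])  # superseded
```

The `mbox` field in the summary points at a ready-to-apply mailbox rendering of the patch, which is usually what tooling (e.g. `git am`) consumes rather than the JSON `diff` string.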
GET /api/patches/255/
HTTP 200 OK
Allow: GET, PUT, PATCH, HEAD, OPTIONS
Content-Type: application/json
Vary: Accept

{
    "id": 255,
    "url": "https://patches.dpdk.org/api/patches/255/",
    "web_url": "https://patches.dpdk.org/patch/255/",
    "project": {
        "id": 1,
        "url": "https://patches.dpdk.org/api/projects/1/",
        "name": "DPDK",
        "link_name": "dpdk",
        "list_id": "dev.dpdk.org",
        "list_email": "dev@dpdk.org",
        "web_url": "http://core.dpdk.org",
        "scm_url": "git://dpdk.org/dpdk",
        "webscm_url": "http://git.dpdk.org/dpdk"
    },
    "msgid": "<1409154628-30825-5-git-send-email-bruce.richardson@intel.com>",
    "date": "2014-08-27T15:50:26",
    "name": "[dpdk-dev,4/6] mbuf: remove the rte_pktmbuf structure",
    "commit_ref": "",
    "pull_url": "",
    "state": "superseded",
    "archived": true,
    "hash": "3f07ee54a428bf1cf2018b822e6b9ab4c0e65210",
    "submitter": {
        "id": 20,
        "url": "https://patches.dpdk.org/api/people/20/",
        "name": "Bruce Richardson",
        "email": "bruce.richardson@intel.com"
    },
    "delegate": null,
    "mbox": "https://patches.dpdk.org/patch/255/mbox/",
    "series": [],
    "comments": "https://patches.dpdk.org/api/patches/255/comments/",
    "check": "pending",
    "checks": "https://patches.dpdk.org/api/patches/255/checks/",
    "tags": {},
    "headers": {
        "List-Subscribe": "<http://dpdk.org/ml/listinfo/dev>,\r\n\t<mailto:dev-request@dpdk.org?subject=subscribe>",
        "X-IronPort-AV": "E=Sophos;i=\"5.04,412,1406617200\"; d=\"scan'208\";a=\"582430906\"",
        "Precedence": "list",
        "X-Mailman-Version": "2.1.15",
        "List-Post": "<mailto:dev@dpdk.org>",
        "References": "<1409154628-30825-1-git-send-email-bruce.richardson@intel.com>",
        "X-BeenThere": "dev@dpdk.org",
        "List-Id": "patches and discussions about DPDK <dev.dpdk.org>",
        "Subject": "[dpdk-dev] [PATCH 4/6] mbuf: remove the rte_pktmbuf structure",
        "From": "Bruce Richardson <bruce.richardson@intel.com>",
        "Received": [
            "from mga01.intel.com (mga01.intel.com [192.55.52.88])\r\n\tby dpdk.org (Postfix) with ESMTP id EC3C5B387\r\n\tfor <dev@dpdk.org>; Wed, 27 Aug 2014 17:46:55 +0200 (CEST)",
            "from fmsmga001.fm.intel.com ([10.253.24.23])\r\n\tby fmsmga101.fm.intel.com with ESMTP; 27 Aug 2014 08:50:50 -0700",
            "from irvmail001.ir.intel.com ([163.33.26.43])\r\n\tby fmsmga001.fm.intel.com with ESMTP; 27 Aug 2014 08:50:30 -0700",
            "from sivswdev02.ir.intel.com (sivswdev02.ir.intel.com\r\n\t[10.237.217.46])\r\n\tby irvmail001.ir.intel.com (8.14.3/8.13.6/MailSET/Hub) with ESMTP id\r\n\ts7RFoT4x026040; Wed, 27 Aug 2014 16:50:29 +0100",
            "from sivswdev02.ir.intel.com (localhost [127.0.0.1])\r\n\tby sivswdev02.ir.intel.com with ESMTP id s7RFoTns031317;\r\n\tWed, 27 Aug 2014 16:50:29 +0100",
            "(from bricha3@localhost)\r\n\tby sivswdev02.ir.intel.com with  id s7RFoTZv031313;\r\n\tWed, 27 Aug 2014 16:50:29 +0100"
        ],
        "To": "dev@dpdk.org",
        "X-Mailer": "git-send-email 1.7.0.7",
        "List-Unsubscribe": "<http://dpdk.org/ml/options/dev>,\r\n\t<mailto:dev-request@dpdk.org?subject=unsubscribe>",
        "X-List-Received-Date": "Wed, 27 Aug 2014 15:46:59 -0000",
        "X-ExtLoop1": "1",
        "Date": "Wed, 27 Aug 2014 16:50:26 +0100",
        "List-Archive": "<http://dpdk.org/ml/archives/dev/>",
        "In-Reply-To": "<1409154628-30825-1-git-send-email-bruce.richardson@intel.com>",
        "List-Help": "<mailto:dev-request@dpdk.org?subject=help>",
        "Message-Id": "<1409154628-30825-5-git-send-email-bruce.richardson@intel.com>",
        "Return-Path": "<bricha3@ecsmtp.ir.intel.com>"
    },
    "content": "From: Olivier Matz <olivier.matz@6wind.com>\n\nThe rte_pktmbuf structure was initially included in the rte_mbuf\nstructure. This was needed when there was 2 types of mbuf (ctrl and\npacket). As the control mbuf has been removed, we can merge the\nrte_pktmbuf into the rte_mbuf structure.\n\nAdvantages of doing this:\n  - the access to mbuf fields is easier (ex: m->data instead of m->pkt.data)\n  - make the structure more consistent: for instance, there was no reason\n    to have the ol_flags field in rte_mbuf\n  - it will allow a deeper reorganization of the rte_mbuf structure in the\n    next commits, allowing to gain several bytes in it\n\nSigned-off-by: Olivier Matz <olivier.matz@6wind.com>\n\nUpdated to work with latest code, and to include new example apps.\n\nSigned-off-by: Bruce Richardson <bruce.richardson@intel.com>\n---\n app/test-pmd/cmdline.c                             |   1 -\n app/test-pmd/csumonly.c                            |   6 +-\n app/test-pmd/flowgen.c                             |  16 +--\n app/test-pmd/icmpecho.c                            |   4 +-\n app/test-pmd/ieee1588fwd.c                         |   6 +-\n app/test-pmd/macfwd-retry.c                        |   2 +-\n app/test-pmd/macfwd.c                              |   8 +-\n app/test-pmd/macswap.c                             |   8 +-\n app/test-pmd/rxonly.c                              |  12 +-\n app/test-pmd/testpmd.c                             |   8 +-\n app/test-pmd/testpmd.h                             |   2 +-\n app/test-pmd/txonly.c                              |  42 +++---\n app/test/commands.c                                |   1 -\n app/test/packet_burst_generator.c                  |  46 +++----\n app/test/test_distributor.c                        |  18 +--\n app/test/test_distributor_perf.c                   |   4 +-\n app/test/test_mbuf.c                               |  12 +-\n app/test/test_sched.c                              |   4 +-\n 
app/test/test_table_acl.c                          |   6 +-\n app/test/test_table_pipeline.c                     |   4 +-\n examples/dpdk_qat/crypto.c                         |  22 ++--\n examples/dpdk_qat/main.c                           |   2 +-\n examples/exception_path/main.c                     |  10 +-\n examples/ip_fragmentation/main.c                   |   6 +-\n examples/ip_pipeline/pipeline_rx.c                 |   4 +-\n examples/ip_pipeline/pipeline_tx.c                 |   2 +-\n examples/ip_reassembly/main.c                      |   8 +-\n examples/ipv4_multicast/main.c                     |  14 +-\n examples/l3fwd-acl/main.c                          |   2 +-\n examples/l3fwd-power/main.c                        |   2 +-\n examples/l3fwd-vf/main.c                           |   2 +-\n examples/l3fwd/main.c                              |  10 +-\n examples/load_balancer/runtime.c                   |   2 +-\n .../client_server_mp/mp_client/client.c            |   2 +-\n examples/quota_watermark/qw/main.c                 |   4 +-\n examples/vhost/main.c                              |  70 +++++-----\n examples/vhost_xen/main.c                          |  22 ++--\n lib/librte_distributor/rte_distributor.c           |   2 +-\n lib/librte_ip_frag/ip_frag_common.h                |  14 +-\n lib/librte_ip_frag/rte_ipv4_fragmentation.c        |  40 +++---\n lib/librte_ip_frag/rte_ipv4_reassembly.c           |   6 +-\n lib/librte_ip_frag/rte_ipv6_fragmentation.c        |  38 +++---\n lib/librte_ip_frag/rte_ipv6_reassembly.c           |   4 +-\n lib/librte_mbuf/rte_mbuf.c                         |  26 ++--\n lib/librte_mbuf/rte_mbuf.h                         | 142 ++++++++++-----------\n lib/librte_pmd_bond/rte_eth_bond_pmd.c             |   4 +-\n lib/librte_pmd_e1000/em_rxtx.c                     |  64 +++++-----\n lib/librte_pmd_e1000/igb_rxtx.c                    |  68 +++++-----\n lib/librte_pmd_i40e/i40e_rxtx.c                    |  98 +++++++-------\n 
lib/librte_pmd_ixgbe/ixgbe_rxtx.c                  | 100 +++++++--------\n lib/librte_pmd_ixgbe/ixgbe_rxtx.h                  |   2 +-\n lib/librte_pmd_ixgbe/ixgbe_rxtx_vec.c              |  20 ++-\n lib/librte_pmd_pcap/rte_eth_pcap.c                 |  14 +-\n lib/librte_pmd_virtio/virtio_rxtx.c                |  20 +--\n lib/librte_pmd_virtio/virtqueue.h                  |   2 +-\n lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c              |  26 ++--\n lib/librte_pmd_xenvirt/rte_eth_xenvirt.c           |  12 +-\n lib/librte_pmd_xenvirt/virtqueue.h                 |   4 +-\n lib/librte_port/rte_port_frag.c                    |   2 +-\n lib/librte_sched/rte_sched.c                       |  14 +-\n lib/librte_sched/rte_sched.h                       |  10 +-\n 61 files changed, 560 insertions(+), 566 deletions(-)",
    "diff": "diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c\r\nindex 6dd576a..e81530b 100644\r\n--- a/app/test-pmd/cmdline.c\r\n+++ b/app/test-pmd/cmdline.c\r\n@@ -6376,7 +6376,6 @@ dump_struct_sizes(void)\r\n {\r\n #define DUMP_SIZE(t) printf(\"sizeof(\" #t \") = %u\\n\", (unsigned)sizeof(t));\r\n \tDUMP_SIZE(struct rte_mbuf);\r\n-\tDUMP_SIZE(struct rte_pktmbuf);\r\n \tDUMP_SIZE(struct rte_mempool);\r\n \tDUMP_SIZE(struct rte_ring);\r\n #undef DUMP_SIZE\r\ndiff --git a/app/test-pmd/csumonly.c b/app/test-pmd/csumonly.c\r\nindex e5a1f52..655b6d8 100644\r\n--- a/app/test-pmd/csumonly.c\r\n+++ b/app/test-pmd/csumonly.c\r\n@@ -263,7 +263,7 @@ pkt_burst_checksum_forward(struct fwd_stream *fs)\r\n \t\tpkt_ol_flags = mb->ol_flags;\r\n \t\tol_flags = (uint16_t) (pkt_ol_flags & (~PKT_TX_L4_MASK));\r\n \r\n-\t\teth_hdr = (struct ether_hdr *) mb->pkt.data;\r\n+\t\teth_hdr = (struct ether_hdr *) mb->data;\r\n \t\teth_type = rte_be_to_cpu_16(eth_hdr->ether_type);\r\n \t\tif (eth_type == ETHER_TYPE_VLAN) {\r\n \t\t\t/* Only allow single VLAN label here */\r\n@@ -432,8 +432,8 @@ pkt_burst_checksum_forward(struct fwd_stream *fs)\r\n \t\t}\r\n \r\n \t\t/* Combine the packet header write. VLAN is not consider here */\r\n-\t\tmb->pkt.vlan_macip.f.l2_len = l2_len;\r\n-\t\tmb->pkt.vlan_macip.f.l3_len = l3_len;\r\n+\t\tmb->vlan_macip.f.l2_len = l2_len;\r\n+\t\tmb->vlan_macip.f.l3_len = l3_len;\r\n \t\tmb->ol_flags = ol_flags;\r\n \t}\r\n \tnb_tx = rte_eth_tx_burst(fs->tx_port, fs->tx_queue, pkts_burst, nb_rx);\r\ndiff --git a/app/test-pmd/flowgen.c b/app/test-pmd/flowgen.c\r\nindex a8f2a65..17dbf83 100644\r\n--- a/app/test-pmd/flowgen.c\r\n+++ b/app/test-pmd/flowgen.c\r\n@@ -171,11 +171,11 @@ pkt_burst_flow_gen(struct fwd_stream *fs)\r\n \t\tif (!pkt)\r\n \t\t\tbreak;\r\n \r\n-\t\tpkt->pkt.data_len = pkt_size;\r\n-\t\tpkt->pkt.next = NULL;\r\n+\t\tpkt->data_len = pkt_size;\r\n+\t\tpkt->next = NULL;\r\n \r\n \t\t/* Initialize Ethernet header. 
*/\r\n-\t\teth_hdr = (struct ether_hdr *)pkt->pkt.data;\r\n+\t\teth_hdr = (struct ether_hdr *)pkt->data;\r\n \t\tether_addr_copy(&cfg_ether_dst, &eth_hdr->d_addr);\r\n \t\tether_addr_copy(&cfg_ether_src, &eth_hdr->s_addr);\r\n \t\teth_hdr->ether_type = rte_cpu_to_be_16(ETHER_TYPE_IPv4);\r\n@@ -205,12 +205,12 @@ pkt_burst_flow_gen(struct fwd_stream *fs)\r\n \t\tudp_hdr->dgram_len\t= RTE_CPU_TO_BE_16(pkt_size -\r\n \t\t\t\t\t\t\t   sizeof(*eth_hdr) -\r\n \t\t\t\t\t\t\t   sizeof(*ip_hdr));\r\n-\t\tpkt->pkt.nb_segs\t\t= 1;\r\n-\t\tpkt->pkt.pkt_len\t\t= pkt_size;\r\n+\t\tpkt->nb_segs\t\t\t= 1;\r\n+\t\tpkt->pkt_len\t\t\t= pkt_size;\r\n \t\tpkt->ol_flags\t\t\t= ol_flags;\r\n-\t\tpkt->pkt.vlan_macip.f.vlan_tci\t= vlan_tci;\r\n-\t\tpkt->pkt.vlan_macip.f.l2_len\t= sizeof(struct ether_hdr);\r\n-\t\tpkt->pkt.vlan_macip.f.l3_len\t= sizeof(struct ipv4_hdr);\r\n+\t\tpkt->vlan_macip.f.vlan_tci\t= vlan_tci;\r\n+\t\tpkt->vlan_macip.f.l2_len\t= sizeof(struct ether_hdr);\r\n+\t\tpkt->vlan_macip.f.l3_len\t= sizeof(struct ipv4_hdr);\r\n \t\tpkts_burst[nb_pkt]\t\t= pkt;\r\n \r\n \t\tnext_flow = (next_flow + 1) % cfg_n_flows;\r\ndiff --git a/app/test-pmd/icmpecho.c b/app/test-pmd/icmpecho.c\r\nindex c28ff5a..4a277b8 100644\r\n--- a/app/test-pmd/icmpecho.c\r\n+++ b/app/test-pmd/icmpecho.c\r\n@@ -330,12 +330,12 @@ reply_to_icmp_echo_rqsts(struct fwd_stream *fs)\r\n \tnb_replies = 0;\r\n \tfor (i = 0; i < nb_rx; i++) {\r\n \t\tpkt = pkts_burst[i];\r\n-\t\teth_h = (struct ether_hdr *) pkt->pkt.data;\r\n+\t\teth_h = (struct ether_hdr *) pkt->data;\r\n \t\teth_type = RTE_BE_TO_CPU_16(eth_h->ether_type);\r\n \t\tl2_len = sizeof(struct ether_hdr);\r\n \t\tif (verbose_level > 0) {\r\n \t\t\tprintf(\"\\nPort %d pkt-len=%u nb-segs=%u\\n\",\r\n-\t\t\t       fs->rx_port, pkt->pkt.pkt_len, pkt->pkt.nb_segs);\r\n+\t\t\t       fs->rx_port, pkt->pkt_len, pkt->nb_segs);\r\n \t\t\tether_addr_dump(\"  ETH:  src=\", &eth_h->s_addr);\r\n \t\t\tether_addr_dump(\" dst=\", &eth_h->d_addr);\r\n \t\t}\r\ndiff --git 
a/app/test-pmd/ieee1588fwd.c b/app/test-pmd/ieee1588fwd.c\r\nindex 3ce9979..ab5e06e 100644\r\n--- a/app/test-pmd/ieee1588fwd.c\r\n+++ b/app/test-pmd/ieee1588fwd.c\r\n@@ -546,7 +546,7 @@ ieee1588_packet_fwd(struct fwd_stream *fs)\r\n \t * Check that the received packet is a PTP packet that was detected\r\n \t * by the hardware.\r\n \t */\r\n-\teth_hdr = (struct ether_hdr *)mb->pkt.data;\r\n+\teth_hdr = (struct ether_hdr *)mb->data;\r\n \teth_type = rte_be_to_cpu_16(eth_hdr->ether_type);\r\n \tif (! (mb->ol_flags & PKT_RX_IEEE1588_PTP)) {\r\n \t\tif (eth_type == ETHER_TYPE_1588) {\r\n@@ -557,7 +557,7 @@ ieee1588_packet_fwd(struct fwd_stream *fs)\r\n \t\t\tprintf(\"Port %u Received non PTP packet type=0x%4x \"\r\n \t\t\t       \"len=%u\\n\",\r\n \t\t\t       (unsigned) fs->rx_port, eth_type,\r\n-\t\t\t       (unsigned) mb->pkt.pkt_len);\r\n+\t\t\t       (unsigned) mb->pkt_len);\r\n \t\t}\r\n \t\trte_pktmbuf_free(mb);\r\n \t\treturn;\r\n@@ -574,7 +574,7 @@ ieee1588_packet_fwd(struct fwd_stream *fs)\r\n \t * Check that the received PTP packet is a PTP V2 packet of type\r\n \t * PTP_SYNC_MESSAGE.\r\n \t */\r\n-\tptp_hdr = (struct ptpv2_msg *) ((char *) mb->pkt.data +\r\n+\tptp_hdr = (struct ptpv2_msg *) ((char *) mb->data +\r\n \t\t\t\t\tsizeof(struct ether_hdr));\r\n \tif (ptp_hdr->version != 0x02) {\r\n \t\tprintf(\"Port %u Received PTP V2 Ethernet frame with wrong PTP\"\r\ndiff --git a/app/test-pmd/macfwd-retry.c b/app/test-pmd/macfwd-retry.c\r\nindex f4e06c4..5122983 100644\r\n--- a/app/test-pmd/macfwd-retry.c\r\n+++ b/app/test-pmd/macfwd-retry.c\r\n@@ -119,7 +119,7 @@ pkt_burst_mac_retry_forward(struct fwd_stream *fs)\r\n \tfs->rx_packets += nb_rx;\r\n \tfor (i = 0; i < nb_rx; i++) {\r\n \t\tmb = pkts_burst[i];\r\n-\t\teth_hdr = (struct ether_hdr *) mb->pkt.data;\r\n+\t\teth_hdr = (struct ether_hdr *) mb->data;\r\n \t\tether_addr_copy(&peer_eth_addrs[fs->peer_addr],\r\n \t\t\t\t&eth_hdr->d_addr);\r\n \t\tether_addr_copy(&ports[fs->tx_port].eth_addr,\r\ndiff --git 
a/app/test-pmd/macfwd.c b/app/test-pmd/macfwd.c\r\nindex fc8f749..999c8e3 100644\r\n--- a/app/test-pmd/macfwd.c\r\n+++ b/app/test-pmd/macfwd.c\r\n@@ -110,15 +110,15 @@ pkt_burst_mac_forward(struct fwd_stream *fs)\r\n \ttxp = &ports[fs->tx_port];\r\n \tfor (i = 0; i < nb_rx; i++) {\r\n \t\tmb = pkts_burst[i];\r\n-\t\teth_hdr = (struct ether_hdr *) mb->pkt.data;\r\n+\t\teth_hdr = (struct ether_hdr *) mb->data;\r\n \t\tether_addr_copy(&peer_eth_addrs[fs->peer_addr],\r\n \t\t\t\t&eth_hdr->d_addr);\r\n \t\tether_addr_copy(&ports[fs->tx_port].eth_addr,\r\n \t\t\t\t&eth_hdr->s_addr);\r\n \t\tmb->ol_flags = txp->tx_ol_flags;\r\n-\t\tmb->pkt.vlan_macip.f.l2_len = sizeof(struct ether_hdr);\r\n-\t\tmb->pkt.vlan_macip.f.l3_len = sizeof(struct ipv4_hdr);\r\n-\t\tmb->pkt.vlan_macip.f.vlan_tci = txp->tx_vlan_id;\r\n+\t\tmb->vlan_macip.f.l2_len = sizeof(struct ether_hdr);\r\n+\t\tmb->vlan_macip.f.l3_len = sizeof(struct ipv4_hdr);\r\n+\t\tmb->vlan_macip.f.vlan_tci = txp->tx_vlan_id;\r\n \t}\r\n \tnb_tx = rte_eth_tx_burst(fs->tx_port, fs->tx_queue, pkts_burst, nb_rx);\r\n \tfs->tx_packets += nb_tx;\r\ndiff --git a/app/test-pmd/macswap.c b/app/test-pmd/macswap.c\r\nindex 4ed6096..731f487 100644\r\n--- a/app/test-pmd/macswap.c\r\n+++ b/app/test-pmd/macswap.c\r\n@@ -110,7 +110,7 @@ pkt_burst_mac_swap(struct fwd_stream *fs)\r\n \ttxp = &ports[fs->tx_port];\r\n \tfor (i = 0; i < nb_rx; i++) {\r\n \t\tmb = pkts_burst[i];\r\n-\t\teth_hdr = (struct ether_hdr *) mb->pkt.data;\r\n+\t\teth_hdr = (struct ether_hdr *) mb->data;\r\n \r\n \t\t/* Swap dest and src mac addresses. 
*/\r\n \t\tether_addr_copy(&eth_hdr->d_addr, &addr);\r\n@@ -118,9 +118,9 @@ pkt_burst_mac_swap(struct fwd_stream *fs)\r\n \t\tether_addr_copy(&addr, &eth_hdr->s_addr);\r\n \r\n \t\tmb->ol_flags = txp->tx_ol_flags;\r\n-\t\tmb->pkt.vlan_macip.f.l2_len = sizeof(struct ether_hdr);\r\n-\t\tmb->pkt.vlan_macip.f.l3_len = sizeof(struct ipv4_hdr);\r\n-\t\tmb->pkt.vlan_macip.f.vlan_tci = txp->tx_vlan_id;\r\n+\t\tmb->vlan_macip.f.l2_len = sizeof(struct ether_hdr);\r\n+\t\tmb->vlan_macip.f.l3_len = sizeof(struct ipv4_hdr);\r\n+\t\tmb->vlan_macip.f.vlan_tci = txp->tx_vlan_id;\r\n \t}\r\n \tnb_tx = rte_eth_tx_burst(fs->tx_port, fs->tx_queue, pkts_burst, nb_rx);\r\n \tfs->tx_packets += nb_tx;\r\ndiff --git a/app/test-pmd/rxonly.c b/app/test-pmd/rxonly.c\r\nindex 5f21a3e..c34a5e1 100644\r\n--- a/app/test-pmd/rxonly.c\r\n+++ b/app/test-pmd/rxonly.c\r\n@@ -149,24 +149,24 @@ pkt_burst_receive(struct fwd_stream *fs)\r\n \t\t\trte_pktmbuf_free(mb);\r\n \t\t\tcontinue;\r\n \t\t}\r\n-\t\teth_hdr = (struct ether_hdr *) mb->pkt.data;\r\n+\t\teth_hdr = (struct ether_hdr *) mb->data;\r\n \t\teth_type = RTE_BE_TO_CPU_16(eth_hdr->ether_type);\r\n \t\tol_flags = mb->ol_flags;\r\n \t\tprint_ether_addr(\"  src=\", &eth_hdr->s_addr);\r\n \t\tprint_ether_addr(\" - dst=\", &eth_hdr->d_addr);\r\n \t\tprintf(\" - type=0x%04x - length=%u - nb_segs=%d\",\r\n-\t\t       eth_type, (unsigned) mb->pkt.pkt_len,\r\n-\t\t       (int)mb->pkt.nb_segs);\r\n+\t\t       eth_type, (unsigned) mb->pkt_len,\r\n+\t\t       (int)mb->nb_segs);\r\n \t\tif (ol_flags & PKT_RX_RSS_HASH) {\r\n-\t\t\tprintf(\" - RSS hash=0x%x\", (unsigned) mb->pkt.hash.rss);\r\n+\t\t\tprintf(\" - RSS hash=0x%x\", (unsigned) mb->hash.rss);\r\n \t\t\tprintf(\" - RSS queue=0x%x\",(unsigned) fs->rx_queue);\r\n \t\t}\r\n \t\telse if (ol_flags & PKT_RX_FDIR)\r\n \t\t\tprintf(\" - FDIR hash=0x%x - FDIR id=0x%x \",\r\n-\t\t\t       mb->pkt.hash.fdir.hash, mb->pkt.hash.fdir.id);\r\n+\t\t\t       mb->hash.fdir.hash, mb->hash.fdir.id);\r\n \t\tif 
(ol_flags & PKT_RX_VLAN_PKT)\r\n \t\t\tprintf(\" - VLAN tci=0x%x\",\r\n-\t\t\t\tmb->pkt.vlan_macip.f.vlan_tci);\r\n+\t\t\t\tmb->vlan_macip.f.vlan_tci);\r\n \t\tprintf(\"\\n\");\r\n \t\tif (ol_flags != 0) {\r\n \t\t\tint rxf;\r\ndiff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c\r\nindex 5368d01..81272a0 100644\r\n--- a/app/test-pmd/testpmd.c\r\n+++ b/app/test-pmd/testpmd.c\r\n@@ -404,10 +404,10 @@ testpmd_mbuf_ctor(struct rte_mempool *mp,\r\n \t\t\tmb_ctor_arg->seg_buf_offset);\r\n \tmb->buf_len      = mb_ctor_arg->seg_buf_size;\r\n \tmb->ol_flags     = 0;\r\n-\tmb->pkt.data     = (char *) mb->buf_addr + RTE_PKTMBUF_HEADROOM;\r\n-\tmb->pkt.nb_segs  = 1;\r\n-\tmb->pkt.vlan_macip.data = 0;\r\n-\tmb->pkt.hash.rss = 0;\r\n+\tmb->data         = (char *) mb->buf_addr + RTE_PKTMBUF_HEADROOM;\r\n+\tmb->nb_segs      = 1;\r\n+\tmb->vlan_macip.data = 0;\r\n+\tmb->hash.rss     = 0;\r\n }\r\n \r\n static void\r\ndiff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h\r\nindex ac86bfe..c1afd91 100644\r\n--- a/app/test-pmd/testpmd.h\r\n+++ b/app/test-pmd/testpmd.h\r\n@@ -60,7 +60,7 @@ int main(int argc, char **argv);\r\n  * The maximum number of segments per packet is used when creating\r\n  * scattered transmit packets composed of a list of mbufs.\r\n  */\r\n-#define RTE_MAX_SEGS_PER_PKT 255 /**< pkt.nb_segs is a 8-bit unsigned char. */\r\n+#define RTE_MAX_SEGS_PER_PKT 255 /**< nb_segs is a 8-bit unsigned char. 
*/\r\n \r\n #define MAX_PKT_BURST 512\r\n #define DEF_PKT_BURST 32\r\ndiff --git a/app/test-pmd/txonly.c b/app/test-pmd/txonly.c\r\nindex d634096..1b2f661 100644\r\n--- a/app/test-pmd/txonly.c\r\n+++ b/app/test-pmd/txonly.c\r\n@@ -106,18 +106,18 @@ copy_buf_to_pkt_segs(void* buf, unsigned len, struct rte_mbuf *pkt,\r\n \tunsigned copy_len;\r\n \r\n \tseg = pkt;\r\n-\twhile (offset >= seg->pkt.data_len) {\r\n-\t\toffset -= seg->pkt.data_len;\r\n-\t\tseg = seg->pkt.next;\r\n+\twhile (offset >= seg->data_len) {\r\n+\t\toffset -= seg->data_len;\r\n+\t\tseg = seg->next;\r\n \t}\r\n-\tcopy_len = seg->pkt.data_len - offset;\r\n-\tseg_buf = ((char *) seg->pkt.data + offset);\r\n+\tcopy_len = seg->data_len - offset;\r\n+\tseg_buf = ((char *) seg->data + offset);\r\n \twhile (len > copy_len) {\r\n \t\trte_memcpy(seg_buf, buf, (size_t) copy_len);\r\n \t\tlen -= copy_len;\r\n \t\tbuf = ((char*) buf + copy_len);\r\n-\t\tseg = seg->pkt.next;\r\n-\t\tseg_buf = seg->pkt.data;\r\n+\t\tseg = seg->next;\r\n+\t\tseg_buf = seg->data;\r\n \t}\r\n \trte_memcpy(seg_buf, buf, (size_t) len);\r\n }\r\n@@ -125,8 +125,8 @@ copy_buf_to_pkt_segs(void* buf, unsigned len, struct rte_mbuf *pkt,\r\n static inline void\r\n copy_buf_to_pkt(void* buf, unsigned len, struct rte_mbuf *pkt, unsigned offset)\r\n {\r\n-\tif (offset + len <= pkt->pkt.data_len) {\r\n-\t\trte_memcpy(((char *) pkt->pkt.data + offset), buf, (size_t) len);\r\n+\tif (offset + len <= pkt->data_len) {\r\n+\t\trte_memcpy(((char *) pkt->data + offset), buf, (size_t) len);\r\n \t\treturn;\r\n \t}\r\n \tcopy_buf_to_pkt_segs(buf, len, pkt, offset);\r\n@@ -225,19 +225,19 @@ pkt_burst_transmit(struct fwd_stream *fs)\r\n \t\t\t\treturn;\r\n \t\t\tbreak;\r\n \t\t}\r\n-\t\tpkt->pkt.data_len = tx_pkt_seg_lengths[0];\r\n+\t\tpkt->data_len = tx_pkt_seg_lengths[0];\r\n \t\tpkt_seg = pkt;\r\n \t\tfor (i = 1; i < tx_pkt_nb_segs; i++) {\r\n-\t\t\tpkt_seg->pkt.next = tx_mbuf_alloc(mbp);\r\n-\t\t\tif (pkt_seg->pkt.next == NULL) 
{\r\n-\t\t\t\tpkt->pkt.nb_segs = i;\r\n+\t\t\tpkt_seg->next = tx_mbuf_alloc(mbp);\r\n+\t\t\tif (pkt_seg->next == NULL) {\r\n+\t\t\t\tpkt->nb_segs = i;\r\n \t\t\t\trte_pktmbuf_free(pkt);\r\n \t\t\t\tgoto nomore_mbuf;\r\n \t\t\t}\r\n-\t\t\tpkt_seg = pkt_seg->pkt.next;\r\n-\t\t\tpkt_seg->pkt.data_len = tx_pkt_seg_lengths[i];\r\n+\t\t\tpkt_seg = pkt_seg->next;\r\n+\t\t\tpkt_seg->data_len = tx_pkt_seg_lengths[i];\r\n \t\t}\r\n-\t\tpkt_seg->pkt.next = NULL; /* Last segment of packet. */\r\n+\t\tpkt_seg->next = NULL; /* Last segment of packet. */\r\n \r\n \t\t/*\r\n \t\t * Initialize Ethernet header.\r\n@@ -260,12 +260,12 @@ pkt_burst_transmit(struct fwd_stream *fs)\r\n \t\t * Complete first mbuf of packet and append it to the\r\n \t\t * burst of packets to be transmitted.\r\n \t\t */\r\n-\t\tpkt->pkt.nb_segs = tx_pkt_nb_segs;\r\n-\t\tpkt->pkt.pkt_len = tx_pkt_length;\r\n+\t\tpkt->nb_segs = tx_pkt_nb_segs;\r\n+\t\tpkt->pkt_len = tx_pkt_length;\r\n \t\tpkt->ol_flags = ol_flags;\r\n-\t\tpkt->pkt.vlan_macip.f.vlan_tci  = vlan_tci;\r\n-\t\tpkt->pkt.vlan_macip.f.l2_len = sizeof(struct ether_hdr);\r\n-\t\tpkt->pkt.vlan_macip.f.l3_len = sizeof(struct ipv4_hdr);\r\n+\t\tpkt->vlan_macip.f.vlan_tci  = vlan_tci;\r\n+\t\tpkt->vlan_macip.f.l2_len = sizeof(struct ether_hdr);\r\n+\t\tpkt->vlan_macip.f.l3_len = sizeof(struct ipv4_hdr);\r\n \t\tpkts_burst[nb_pkt] = pkt;\r\n \t}\r\n \tnb_tx = rte_eth_tx_burst(fs->tx_port, fs->tx_queue, pkts_burst, nb_pkt);\r\ndiff --git a/app/test/commands.c b/app/test/commands.c\r\nindex f38d419..668b48d 100644\r\n--- a/app/test/commands.c\r\n+++ b/app/test/commands.c\r\n@@ -275,7 +275,6 @@ dump_struct_sizes(void)\r\n {\r\n #define DUMP_SIZE(t) printf(\"sizeof(\" #t \") = %u\\n\", (unsigned)sizeof(t));\r\n \tDUMP_SIZE(struct rte_mbuf);\r\n-\tDUMP_SIZE(struct rte_pktmbuf);\r\n \tDUMP_SIZE(struct rte_mempool);\r\n \tDUMP_SIZE(struct rte_ring);\r\n #undef DUMP_SIZE\r\ndiff --git a/app/test/packet_burst_generator.c b/app/test/packet_burst_generator.c\r\nindex 
5d539f1..8740348 100644\r\n--- a/app/test/packet_burst_generator.c\r\n+++ b/app/test/packet_burst_generator.c\r\n@@ -54,18 +54,18 @@ copy_buf_to_pkt_segs(void *buf, unsigned len, struct rte_mbuf *pkt,\r\n \tunsigned copy_len;\r\n \r\n \tseg = pkt;\r\n-\twhile (offset >= seg->pkt.data_len) {\r\n-\t\toffset -= seg->pkt.data_len;\r\n-\t\tseg = seg->pkt.next;\r\n+\twhile (offset >= seg->data_len) {\r\n+\t\toffset -= seg->data_len;\r\n+\t\tseg = seg->next;\r\n \t}\r\n-\tcopy_len = seg->pkt.data_len - offset;\r\n-\tseg_buf = ((char *) seg->pkt.data + offset);\r\n+\tcopy_len = seg->data_len - offset;\r\n+\tseg_buf = ((char *) seg->data + offset);\r\n \twhile (len > copy_len) {\r\n \t\trte_memcpy(seg_buf, buf, (size_t) copy_len);\r\n \t\tlen -= copy_len;\r\n \t\tbuf = ((char *) buf + copy_len);\r\n-\t\tseg = seg->pkt.next;\r\n-\t\tseg_buf = seg->pkt.data;\r\n+\t\tseg = seg->next;\r\n+\t\tseg_buf = seg->data;\r\n \t}\r\n \trte_memcpy(seg_buf, buf, (size_t) len);\r\n }\r\n@@ -73,8 +73,8 @@ copy_buf_to_pkt_segs(void *buf, unsigned len, struct rte_mbuf *pkt,\r\n static inline void\r\n copy_buf_to_pkt(void *buf, unsigned len, struct rte_mbuf *pkt, unsigned offset)\r\n {\r\n-\tif (offset + len <= pkt->pkt.data_len) {\r\n-\t\trte_memcpy(((char *) pkt->pkt.data + offset), buf, (size_t) len);\r\n+\tif (offset + len <= pkt->data_len) {\r\n+\t\trte_memcpy(((char *) pkt->data + offset), buf, (size_t) len);\r\n \t\treturn;\r\n \t}\r\n \tcopy_buf_to_pkt_segs(buf, len, pkt, offset);\r\n@@ -220,19 +220,19 @@ nomore_mbuf:\r\n \t\t\tbreak;\r\n \t\t}\r\n \r\n-\t\tpkt->pkt.data_len = tx_pkt_seg_lengths[0];\r\n+\t\tpkt->data_len = tx_pkt_seg_lengths[0];\r\n \t\tpkt_seg = pkt;\r\n \t\tfor (i = 1; i < tx_pkt_nb_segs; i++) {\r\n-\t\t\tpkt_seg->pkt.next = rte_pktmbuf_alloc(mp);\r\n-\t\t\tif (pkt_seg->pkt.next == NULL) {\r\n-\t\t\t\tpkt->pkt.nb_segs = i;\r\n+\t\t\tpkt_seg->next = rte_pktmbuf_alloc(mp);\r\n+\t\t\tif (pkt_seg->next == NULL) {\r\n+\t\t\t\tpkt->nb_segs = i;\r\n 
\t\t\t\trte_pktmbuf_free(pkt);\r\n \t\t\t\tgoto nomore_mbuf;\r\n \t\t\t}\r\n-\t\t\tpkt_seg = pkt_seg->pkt.next;\r\n-\t\t\tpkt_seg->pkt.data_len = tx_pkt_seg_lengths[i];\r\n+\t\t\tpkt_seg = pkt_seg->next;\r\n+\t\t\tpkt_seg->data_len = tx_pkt_seg_lengths[i];\r\n \t\t}\r\n-\t\tpkt_seg->pkt.next = NULL; /* Last segment of packet. */\r\n+\t\tpkt_seg->next = NULL; /* Last segment of packet. */\r\n \r\n \t\t/*\r\n \t\t * Copy headers in first packet segment(s).\r\n@@ -258,21 +258,21 @@ nomore_mbuf:\r\n \t\t * Complete first mbuf of packet and append it to the\r\n \t\t * burst of packets to be transmitted.\r\n \t\t */\r\n-\t\tpkt->pkt.nb_segs = tx_pkt_nb_segs;\r\n-\t\tpkt->pkt.pkt_len = tx_pkt_length;\r\n-\t\tpkt->pkt.vlan_macip.f.l2_len = eth_hdr_size;\r\n+\t\tpkt->nb_segs = tx_pkt_nb_segs;\r\n+\t\tpkt->pkt_len = tx_pkt_length;\r\n+\t\tpkt->vlan_macip.f.l2_len = eth_hdr_size;\r\n \r\n \t\tif (ipv4) {\r\n-\t\t\tpkt->pkt.vlan_macip.f.vlan_tci  = ETHER_TYPE_IPv4;\r\n-\t\t\tpkt->pkt.vlan_macip.f.l3_len = sizeof(struct ipv4_hdr);\r\n+\t\t\tpkt->vlan_macip.f.vlan_tci  = ETHER_TYPE_IPv4;\r\n+\t\t\tpkt->vlan_macip.f.l3_len = sizeof(struct ipv4_hdr);\r\n \r\n \t\t\tif (vlan_enabled)\r\n \t\t\t\tpkt->ol_flags = PKT_RX_IPV4_HDR | PKT_RX_VLAN_PKT;\r\n \t\t\telse\r\n \t\t\t\tpkt->ol_flags = PKT_RX_IPV4_HDR;\r\n \t\t} else {\r\n-\t\t\tpkt->pkt.vlan_macip.f.vlan_tci  = ETHER_TYPE_IPv6;\r\n-\t\t\tpkt->pkt.vlan_macip.f.l3_len = sizeof(struct ipv6_hdr);\r\n+\t\t\tpkt->vlan_macip.f.vlan_tci  = ETHER_TYPE_IPv6;\r\n+\t\t\tpkt->vlan_macip.f.l3_len = sizeof(struct ipv6_hdr);\r\n \r\n \t\t\tif (vlan_enabled)\r\n \t\t\t\tpkt->ol_flags = PKT_RX_IPV6_HDR | PKT_RX_VLAN_PKT;\r\ndiff --git a/app/test/test_distributor.c b/app/test/test_distributor.c\r\nindex e7dc1fb..0e96d42 100644\r\n--- a/app/test/test_distributor.c\r\n+++ b/app/test/test_distributor.c\r\n@@ -121,7 +121,7 @@ sanity_test(struct rte_distributor *d, struct rte_mempool *p)\r\n \t/* now set all hash values in all buffers to zero, so all 
pkts go to the\r\n \t * one worker thread */\r\n \tfor (i = 0; i < BURST; i++)\r\n-\t\tbufs[i]->pkt.hash.rss = 0;\r\n+\t\tbufs[i]->hash.rss = 0;\r\n \r\n \trte_distributor_process(d, bufs, BURST);\r\n \trte_distributor_flush(d);\r\n@@ -143,7 +143,7 @@ sanity_test(struct rte_distributor *d, struct rte_mempool *p)\r\n \tif (rte_lcore_count() >= 3) {\r\n \t\tclear_packet_count();\r\n \t\tfor (i = 0; i < BURST; i++)\r\n-\t\t\tbufs[i]->pkt.hash.rss = (i & 1) << 8;\r\n+\t\t\tbufs[i]->hash.rss = (i & 1) << 8;\r\n \r\n \t\trte_distributor_process(d, bufs, BURST);\r\n \t\trte_distributor_flush(d);\r\n@@ -168,7 +168,7 @@ sanity_test(struct rte_distributor *d, struct rte_mempool *p)\r\n \t * so load gets distributed */\r\n \tclear_packet_count();\r\n \tfor (i = 0; i < BURST; i++)\r\n-\t\tbufs[i]->pkt.hash.rss = i;\r\n+\t\tbufs[i]->hash.rss = i;\r\n \r\n \trte_distributor_process(d, bufs, BURST);\r\n \trte_distributor_flush(d);\r\n@@ -200,7 +200,7 @@ sanity_test(struct rte_distributor *d, struct rte_mempool *p)\r\n \t\treturn -1;\r\n \t}\r\n \tfor (i = 0; i < BIG_BATCH; i++)\r\n-\t\tmany_bufs[i]->pkt.hash.rss = i << 2;\r\n+\t\tmany_bufs[i]->hash.rss = i << 2;\r\n \r\n \tfor (i = 0; i < BIG_BATCH/BURST; i++) {\r\n \t\trte_distributor_process(d, &many_bufs[i*BURST], BURST);\r\n@@ -281,7 +281,7 @@ sanity_test_with_mbuf_alloc(struct rte_distributor *d, struct rte_mempool *p)\r\n \t\twhile (rte_mempool_get_bulk(p, (void *)bufs, BURST) < 0)\r\n \t\t\trte_distributor_process(d, NULL, 0);\r\n \t\tfor (j = 0; j < BURST; j++) {\r\n-\t\t\tbufs[j]->pkt.hash.rss = (i+j) << 1;\r\n+\t\t\tbufs[j]->hash.rss = (i+j) << 1;\r\n \t\t\tbufs[j]->refcnt = 1;\r\n \t\t}\r\n \r\n@@ -360,7 +360,7 @@ sanity_test_with_worker_shutdown(struct rte_distributor *d,\r\n \t/* now set all hash values in all buffers to zero, so all pkts go to the\r\n \t * one worker thread */\r\n \tfor (i = 0; i < BURST; i++)\r\n-\t\tbufs[i]->pkt.hash.rss = 0;\r\n+\t\tbufs[i]->hash.rss = 0;\r\n \r\n \trte_distributor_process(d, 
bufs, BURST);\r\n \t/* at this point, we will have processed some packets and have a full\r\n@@ -373,7 +373,7 @@ sanity_test_with_worker_shutdown(struct rte_distributor *d,\r\n \t\treturn -1;\r\n \t}\r\n \tfor (i = 0; i < BURST; i++)\r\n-\t\tbufs[i]->pkt.hash.rss = 0;\r\n+\t\tbufs[i]->hash.rss = 0;\r\n \r\n \t/* get worker zero to quit */\r\n \tzero_quit = 1;\r\n@@ -417,7 +417,7 @@ test_flush_with_worker_shutdown(struct rte_distributor *d,\r\n \t/* now set all hash values in all buffers to zero, so all pkts go to the\r\n \t * one worker thread */\r\n \tfor (i = 0; i < BURST; i++)\r\n-\t\tbufs[i]->pkt.hash.rss = 0;\r\n+\t\tbufs[i]->hash.rss = 0;\r\n \r\n \trte_distributor_process(d, bufs, BURST);\r\n \t/* at this point, we will have processed some packets and have a full\r\n@@ -489,7 +489,7 @@ quit_workers(struct rte_distributor *d, struct rte_mempool *p)\r\n \tzero_quit = 0;\r\n \tquit = 1;\r\n \tfor (i = 0; i < num_workers; i++)\r\n-\t\tbufs[i]->pkt.hash.rss = i << 1;\r\n+\t\tbufs[i]->hash.rss = i << 1;\r\n \trte_distributor_process(d, bufs, num_workers);\r\n \r\n \trte_mempool_put_bulk(p, (void *)bufs, num_workers);\r\ndiff --git a/app/test/test_distributor_perf.c b/app/test/test_distributor_perf.c\r\nindex 1031baa..7ecbc6b 100644\r\n--- a/app/test/test_distributor_perf.c\r\n+++ b/app/test/test_distributor_perf.c\r\n@@ -160,7 +160,7 @@ perf_test(struct rte_distributor *d, struct rte_mempool *p)\r\n \t}\r\n \t/* ensure we have different hash value for each pkt */\r\n \tfor (i = 0; i < BURST; i++)\r\n-\t\tbufs[i]->pkt.hash.rss = i;\r\n+\t\tbufs[i]->hash.rss = i;\r\n \r\n \tstart = rte_rdtsc();\r\n \tfor (i = 0; i < (1<<ITER_POWER); i++)\r\n@@ -199,7 +199,7 @@ quit_workers(struct rte_distributor *d, struct rte_mempool *p)\r\n \r\n \tquit = 1;\r\n \tfor (i = 0; i < num_workers; i++)\r\n-\t\tbufs[i]->pkt.hash.rss = i << 1;\r\n+\t\tbufs[i]->hash.rss = i << 1;\r\n \trte_distributor_process(d, bufs, num_workers);\r\n \r\n \trte_mempool_put_bulk(p, (void *)bufs, 
num_workers);\r\ndiff --git a/app/test/test_mbuf.c b/app/test/test_mbuf.c\r\nindex 6b32d9d..fd26d6f 100644\r\n--- a/app/test/test_mbuf.c\r\n+++ b/app/test/test_mbuf.c\r\n@@ -344,8 +344,8 @@ testclone_testupdate_testdetach(void)\r\n \t\tGOTO_FAIL(\"cannot clone data\\n\");\r\n \trte_pktmbuf_free(clone);\r\n \r\n-\tmc->pkt.next = rte_pktmbuf_alloc(pktmbuf_pool);\r\n-\tif(mc->pkt.next == NULL)\r\n+\tmc->next = rte_pktmbuf_alloc(pktmbuf_pool);\r\n+\tif(mc->next == NULL)\r\n \t\tGOTO_FAIL(\"Next Pkt Null\\n\");\r\n \r\n \tclone = rte_pktmbuf_clone(mc, pktmbuf_pool);\r\n@@ -432,7 +432,7 @@ test_pktmbuf_pool_ptr(void)\r\n \t\t\tprintf(\"rte_pktmbuf_alloc() failed (%u)\\n\", i);\r\n \t\t\tret = -1;\r\n \t\t}\r\n-\t\tm[i]->pkt.data = RTE_PTR_ADD(m[i]->pkt.data, 64);\r\n+\t\tm[i]->data = RTE_PTR_ADD(m[i]->data, 64);\r\n \t}\r\n \r\n \t/* free them */\r\n@@ -451,8 +451,8 @@ test_pktmbuf_pool_ptr(void)\r\n \t\t\tprintf(\"rte_pktmbuf_alloc() failed (%u)\\n\", i);\r\n \t\t\tret = -1;\r\n \t\t}\r\n-\t\tif (m[i]->pkt.data != RTE_PTR_ADD(m[i]->buf_addr, RTE_PKTMBUF_HEADROOM)) {\r\n-\t\t\tprintf (\"pkt.data pointer not set properly\\n\");\r\n+\t\tif (m[i]->data != RTE_PTR_ADD(m[i]->buf_addr, RTE_PKTMBUF_HEADROOM)) {\r\n+\t\t\tprintf (\"data pointer not set properly\\n\");\r\n \t\t\tret = -1;\r\n \t\t}\r\n \t}\r\n@@ -493,7 +493,7 @@ test_pktmbuf_free_segment(void)\r\n \t\t\tmb = m[i];\r\n \t\t\twhile(mb != NULL) {\r\n \t\t\t\tmt = mb;\r\n-\t\t\t\tmb = mb->pkt.next;\r\n+\t\t\t\tmb = mb->next;\r\n \t\t\t\trte_pktmbuf_free_seg(mt);\r\n \t\t\t}\r\n \t\t}\r\ndiff --git a/app/test/test_sched.c b/app/test/test_sched.c\r\nindex d9abb51..c8f4e33 100644\r\n--- a/app/test/test_sched.c\r\n+++ b/app/test/test_sched.c\r\n@@ -147,8 +147,8 @@ prepare_pkt(struct rte_mbuf *mbuf)\r\n \trte_sched_port_pkt_write(mbuf, SUBPORT, PIPE, TC, QUEUE, e_RTE_METER_YELLOW);\r\n \r\n \t/* 64 byte packet */\r\n-\tmbuf->pkt.pkt_len  = 60;\r\n-\tmbuf->pkt.data_len = 60;\r\n+\tmbuf->pkt_len  = 60;\r\n+\tmbuf->data_len 
= 60;\r\n }\r\n \r\n \r\ndiff --git a/app/test/test_table_acl.c b/app/test/test_table_acl.c\r\nindex 5bcc8b8..70e2e68 100644\r\n--- a/app/test/test_table_acl.c\r\n+++ b/app/test/test_table_acl.c\r\n@@ -515,7 +515,7 @@ test_pipeline_single_filter(int expected_count)\r\n \t\t\tstruct rte_mbuf *mbuf;\r\n \r\n \t\t\tmbuf = rte_pktmbuf_alloc(pool);\r\n-\t\t\tmemset(mbuf->pkt.data, 0x00,\r\n+\t\t\tmemset(mbuf->data, 0x00,\r\n \t\t\t\tsizeof(struct ipv4_5tuple));\r\n \r\n \t\t\tfive_tuple.proto = j;\r\n@@ -524,7 +524,7 @@ test_pipeline_single_filter(int expected_count)\r\n \t\t\tfive_tuple.port_src = rte_bswap16(100 + j);\r\n \t\t\tfive_tuple.port_dst = rte_bswap16(200 + j);\r\n \r\n-\t\t\tmemcpy(mbuf->pkt.data, &five_tuple,\r\n+\t\t\tmemcpy(mbuf->data, &five_tuple,\r\n \t\t\t\tsizeof(struct ipv4_5tuple));\r\n \t\t\tRTE_LOG(INFO, PIPELINE, \"%s: Enqueue onto ring %d\\n\",\r\n \t\t\t\t__func__, i);\r\n@@ -551,7 +551,7 @@ test_pipeline_single_filter(int expected_count)\r\n \t\t\tprintf(\"Got %d object(s) from ring %d!\\n\", ret, i);\r\n \t\t\tfor (j = 0; j < ret; j++) {\r\n \t\t\t\tmbuf = (struct rte_mbuf *)objs[j];\r\n-\t\t\t\trte_hexdump(stdout, \"mbuf\", mbuf->pkt.data, 64);\r\n+\t\t\t\trte_hexdump(stdout, \"mbuf\", mbuf->data, 64);\r\n \t\t\t\trte_pktmbuf_free(mbuf);\r\n \t\t\t}\r\n \t\t\ttx_count += ret;\r\ndiff --git a/app/test/test_table_pipeline.c b/app/test/test_table_pipeline.c\r\nindex 35644a6..c5368ae 100644\r\n--- a/app/test/test_table_pipeline.c\r\n+++ b/app/test/test_table_pipeline.c\r\n@@ -504,8 +504,8 @@ test_pipeline_single_filter(int test_type, int expected_count)\r\n \t\t\tprintf(\"Got %d object(s) from ring %d!\\n\", ret, i);\r\n \t\t\tfor (j = 0; j < ret; j++) {\r\n \t\t\t\tmbuf = (struct rte_mbuf *)objs[j];\r\n-\t\t\t\trte_hexdump(stdout, \"Object:\", mbuf->pkt.data,\r\n-\t\t\t\t\tmbuf->pkt.data_len);\r\n+\t\t\t\trte_hexdump(stdout, \"Object:\", mbuf->data,\r\n+\t\t\t\t\tmbuf->data_len);\r\n \t\t\t\trte_pktmbuf_free(mbuf);\r\n \t\t\t}\r\n 
\t\t\ttx_count += ret;\r\ndiff --git a/examples/dpdk_qat/crypto.c b/examples/dpdk_qat/crypto.c\r\nindex 577ab32..318d47c 100644\r\n--- a/examples/dpdk_qat/crypto.c\r\n+++ b/examples/dpdk_qat/crypto.c\r\n@@ -183,7 +183,7 @@ struct glob_keys g_crypto_hash_keys = {\r\n  *\r\n  */\r\n #define PACKET_DATA_START_PHYS(p) \\\r\n-\t\t((p)->buf_physaddr + ((char *)p->pkt.data - (char *)p->buf_addr))\r\n+\t\t((p)->buf_physaddr + ((char *)p->data - (char *)p->buf_addr))\r\n \r\n /*\r\n  * A fixed offset to where the crypto is to be performed, which is the first\r\n@@ -773,7 +773,7 @@ enum crypto_result\r\n crypto_encrypt(struct rte_mbuf *rte_buff, enum cipher_alg c, enum hash_alg h)\r\n {\r\n \tCpaCySymDpOpData *opData =\r\n-\t\t\t(CpaCySymDpOpData *) ((char *) (rte_buff->pkt.data)\r\n+\t\t\t(CpaCySymDpOpData *) ((char *) (rte_buff->data)\r\n \t\t\t\t\t+ CRYPTO_OFFSET_TO_OPDATA);\r\n \tuint32_t lcore_id;\r\n \r\n@@ -785,7 +785,7 @@ crypto_encrypt(struct rte_mbuf *rte_buff, enum cipher_alg c, enum hash_alg h)\r\n \tbzero(opData, sizeof(CpaCySymDpOpData));\r\n \r\n \topData->srcBuffer = opData->dstBuffer = PACKET_DATA_START_PHYS(rte_buff);\r\n-\topData->srcBufferLen = opData->dstBufferLen = rte_buff->pkt.data_len;\r\n+\topData->srcBufferLen = opData->dstBufferLen = rte_buff->data_len;\r\n \topData->sessionCtx = qaCoreConf[lcore_id].encryptSessionHandleTbl[c][h];\r\n \topData->thisPhys = PACKET_DATA_START_PHYS(rte_buff)\r\n \t\t\t+ CRYPTO_OFFSET_TO_OPDATA;\r\n@@ -805,7 +805,7 @@ crypto_encrypt(struct rte_mbuf *rte_buff, enum cipher_alg c, enum hash_alg h)\r\n \t\t\topData->ivLenInBytes = IV_LENGTH_8_BYTES;\r\n \r\n \t\topData->cryptoStartSrcOffsetInBytes = CRYPTO_START_OFFSET;\r\n-\t\topData->messageLenToCipherInBytes = rte_buff->pkt.data_len\r\n+\t\topData->messageLenToCipherInBytes = rte_buff->data_len\r\n \t\t\t\t- CRYPTO_START_OFFSET;\r\n \t\t/*\r\n \t\t * Work around for padding, message length has to be a multiple of\r\n@@ -818,7 +818,7 @@ crypto_encrypt(struct rte_mbuf 
*rte_buff, enum cipher_alg c, enum hash_alg h)\r\n \tif (NO_HASH != h) {\r\n \r\n \t\topData->hashStartSrcOffsetInBytes = HASH_START_OFFSET;\r\n-\t\topData->messageLenToHashInBytes = rte_buff->pkt.data_len\r\n+\t\topData->messageLenToHashInBytes = rte_buff->data_len\r\n \t\t\t\t- HASH_START_OFFSET;\r\n \t\t/*\r\n \t\t * Work around for padding, message length has to be a multiple of block\r\n@@ -831,7 +831,7 @@ crypto_encrypt(struct rte_mbuf *rte_buff, enum cipher_alg c, enum hash_alg h)\r\n \t\t * Assumption: Ok ignore the passed digest pointer and place HMAC at end\r\n \t\t * of packet.\r\n \t\t */\r\n-\t\topData->digestResult = rte_buff->buf_physaddr + rte_buff->pkt.data_len;\r\n+\t\topData->digestResult = rte_buff->buf_physaddr + rte_buff->data_len;\r\n \t}\r\n \r\n \tif (CPA_STATUS_SUCCESS != enqueueOp(opData, lcore_id)) {\r\n@@ -848,7 +848,7 @@ enum crypto_result\r\n crypto_decrypt(struct rte_mbuf *rte_buff, enum cipher_alg c, enum hash_alg h)\r\n {\r\n \r\n-\tCpaCySymDpOpData *opData = (void*) (((char *) rte_buff->pkt.data)\r\n+\tCpaCySymDpOpData *opData = (void*) (((char *) rte_buff->data)\r\n \t\t\t+ CRYPTO_OFFSET_TO_OPDATA);\r\n \tuint32_t lcore_id;\r\n \r\n@@ -860,7 +860,7 @@ crypto_decrypt(struct rte_mbuf *rte_buff, enum cipher_alg c, enum hash_alg h)\r\n \tbzero(opData, sizeof(CpaCySymDpOpData));\r\n \r\n \topData->dstBuffer = opData->srcBuffer = PACKET_DATA_START_PHYS(rte_buff);\r\n-\topData->dstBufferLen = opData->srcBufferLen = rte_buff->pkt.data_len;\r\n+\topData->dstBufferLen = opData->srcBufferLen = rte_buff->data_len;\r\n \topData->thisPhys = PACKET_DATA_START_PHYS(rte_buff)\r\n \t\t\t+ CRYPTO_OFFSET_TO_OPDATA;\r\n \topData->sessionCtx = qaCoreConf[lcore_id].decryptSessionHandleTbl[c][h];\r\n@@ -880,7 +880,7 @@ crypto_decrypt(struct rte_mbuf *rte_buff, enum cipher_alg c, enum hash_alg h)\r\n \t\t\topData->ivLenInBytes = IV_LENGTH_8_BYTES;\r\n \r\n \t\topData->cryptoStartSrcOffsetInBytes = 
CRYPTO_START_OFFSET;\r\n-\t\topData->messageLenToCipherInBytes = rte_buff->pkt.data_len\r\n+\t\topData->messageLenToCipherInBytes = rte_buff->data_len\r\n \t\t\t\t- CRYPTO_START_OFFSET;\r\n \r\n \t\t/*\r\n@@ -892,7 +892,7 @@ crypto_decrypt(struct rte_mbuf *rte_buff, enum cipher_alg c, enum hash_alg h)\r\n \t}\r\n \tif (NO_HASH != h) {\r\n \t\topData->hashStartSrcOffsetInBytes = HASH_START_OFFSET;\r\n-\t\topData->messageLenToHashInBytes = rte_buff->pkt.data_len\r\n+\t\topData->messageLenToHashInBytes = rte_buff->data_len\r\n \t\t\t\t- HASH_START_OFFSET;\r\n \t\t/*\r\n \t\t * Work around for padding, message length has to be a multiple of block\r\n@@ -900,7 +900,7 @@ crypto_decrypt(struct rte_mbuf *rte_buff, enum cipher_alg c, enum hash_alg h)\r\n \t\t */\r\n \t\topData->messageLenToHashInBytes -= opData->messageLenToHashInBytes\r\n \t\t\t\t% HASH_BLOCK_DEFAULT_SIZE;\r\n-\t\topData->digestResult = rte_buff->buf_physaddr + rte_buff->pkt.data_len;\r\n+\t\topData->digestResult = rte_buff->buf_physaddr + rte_buff->data_len;\r\n \t}\r\n \r\n \tif (CPA_STATUS_SUCCESS != enqueueOp(opData, lcore_id)) {\r\ndiff --git a/examples/dpdk_qat/main.c b/examples/dpdk_qat/main.c\r\nindex d61db4c..75c9876 100644\r\n--- a/examples/dpdk_qat/main.c\r\n+++ b/examples/dpdk_qat/main.c\r\n@@ -384,7 +384,7 @@ main_loop(__attribute__((unused)) void *dummy)\r\n \t\t\t}\r\n \t\t}\r\n \r\n-\t\tport = dst_ports[pkt->pkt.in_port];\r\n+\t\tport = dst_ports[pkt->in_port];\r\n \r\n \t\t/* Transmit the packet */\r\n \t\tnic_tx_send_packet(pkt, (uint8_t)port);\r\ndiff --git a/examples/exception_path/main.c b/examples/exception_path/main.c\r\nindex 0204116..5045ef8 100644\r\n--- a/examples/exception_path/main.c\r\n+++ b/examples/exception_path/main.c\r\n@@ -302,16 +302,16 @@ main_loop(__attribute__((unused)) void *arg)\r\n \t\t\tif (m == NULL)\r\n \t\t\t\tcontinue;\r\n \r\n-\t\t\tret = read(tap_fd, m->pkt.data, MAX_PACKET_SZ);\r\n+\t\t\tret = read(tap_fd, m->data, MAX_PACKET_SZ);\r\n 
\t\t\tlcore_stats[lcore_id].rx++;\r\n \t\t\tif (unlikely(ret < 0)) {\r\n \t\t\t\tFATAL_ERROR(\"Reading from %s interface failed\",\r\n \t\t\t\t            tap_name);\r\n \t\t\t}\r\n-\t\t\tm->pkt.nb_segs = 1;\r\n-\t\t\tm->pkt.next = NULL;\r\n-\t\t\tm->pkt.pkt_len = (uint16_t)ret;\r\n-\t\t\tm->pkt.data_len = (uint16_t)ret;\r\n+\t\t\tm->nb_segs = 1;\r\n+\t\t\tm->next = NULL;\r\n+\t\t\tm->pkt_len = (uint16_t)ret;\r\n+\t\t\tm->data_len = (uint16_t)ret;\r\n \t\t\tret = rte_eth_tx_burst(port_ids[lcore_id], 0, &m, 1);\r\n \t\t\tif (unlikely(ret < 1)) {\r\n \t\t\t\trte_pktmbuf_free(m);\r\ndiff --git a/examples/ip_fragmentation/main.c b/examples/ip_fragmentation/main.c\r\nindex 72cd2b2..ac8d4f7 100644\r\n--- a/examples/ip_fragmentation/main.c\r\n+++ b/examples/ip_fragmentation/main.c\r\n@@ -342,7 +342,7 @@ l3fwd_simple_forward(struct rte_mbuf *m, struct lcore_queue_conf *qconf,\r\n \t\t}\r\n \r\n \t\t/* if we don't need to do any fragmentation */\r\n-\t\tif (likely (IPV4_MTU_DEFAULT >= m->pkt.pkt_len)) {\r\n+\t\tif (likely (IPV4_MTU_DEFAULT >= m->pkt_len)) {\r\n \t\t\tqconf->tx_mbufs[port_out].m_table[len] = m;\r\n \t\t\tlen2 = 1;\r\n \t\t} else {\r\n@@ -379,7 +379,7 @@ l3fwd_simple_forward(struct rte_mbuf *m, struct lcore_queue_conf *qconf,\r\n \t\t}\r\n \r\n \t\t/* if we don't need to do any fragmentation */\r\n-\t\tif (likely (IPV6_MTU_DEFAULT >= m->pkt.pkt_len)) {\r\n+\t\tif (likely (IPV6_MTU_DEFAULT >= m->pkt_len)) {\r\n \t\t\tqconf->tx_mbufs[port_out].m_table[len] = m;\r\n \t\t\tlen2 = 1;\r\n \t\t} else {\r\n@@ -413,7 +413,7 @@ l3fwd_simple_forward(struct rte_mbuf *m, struct lcore_queue_conf *qconf,\r\n \t\t\trte_panic(\"No headroom in mbuf.\\n\");\r\n \t\t}\r\n \r\n-\t\tm->pkt.vlan_macip.f.l2_len = sizeof(struct ether_hdr);\r\n+\t\tm->vlan_macip.f.l2_len = sizeof(struct ether_hdr);\r\n \r\n \t\t/* 02:00:00:00:00:xx */\r\n \t\td_addr_bytes = &eth_hdr->d_addr.addr_bytes[0];\r\ndiff --git a/examples/ip_pipeline/pipeline_rx.c b/examples/ip_pipeline/pipeline_rx.c\r\nindex 
e43ebfa..7a8309c 100644\r\n--- a/examples/ip_pipeline/pipeline_rx.c\r\n+++ b/examples/ip_pipeline/pipeline_rx.c\r\n@@ -255,8 +255,8 @@ app_pkt_metadata_fill(struct rte_mbuf *m)\r\n \t/* Pop Ethernet header */\r\n \tif (app.ether_hdr_pop_push) {\r\n \t\trte_pktmbuf_adj(m, (uint16_t)sizeof(struct ether_hdr));\r\n-\t\tm->pkt.vlan_macip.f.l2_len = 0;\r\n-\t\tm->pkt.vlan_macip.f.l3_len = sizeof(struct ipv4_hdr);\r\n+\t\tm->vlan_macip.f.l2_len = 0;\r\n+\t\tm->vlan_macip.f.l3_len = sizeof(struct ipv4_hdr);\r\n \t}\r\n }\r\n \r\ndiff --git a/examples/ip_pipeline/pipeline_tx.c b/examples/ip_pipeline/pipeline_tx.c\r\nindex 3bf2c8b..b9491e3 100644\r\n--- a/examples/ip_pipeline/pipeline_tx.c\r\n+++ b/examples/ip_pipeline/pipeline_tx.c\r\n@@ -66,7 +66,7 @@ app_pkt_metadata_flush(struct rte_mbuf *pkt)\r\n \tether_addr_copy(&pkt_meta->nh_arp, &ether_hdr->d_addr);\r\n \tether_addr_copy(&local_ether_addr, &ether_hdr->s_addr);\r\n \tether_hdr->ether_type = rte_bswap16(ETHER_TYPE_IPv4);\r\n-\tpkt->pkt.vlan_macip.f.l2_len = sizeof(struct ether_hdr);\r\n+\tpkt->vlan_macip.f.l2_len = sizeof(struct ether_hdr);\r\n }\r\n \r\n static int\r\ndiff --git a/examples/ip_reassembly/main.c b/examples/ip_reassembly/main.c\r\nindex 3bb6afd..8184aad 100644\r\n--- a/examples/ip_reassembly/main.c\r\n+++ b/examples/ip_reassembly/main.c\r\n@@ -412,8 +412,8 @@ reassemble(struct rte_mbuf *m, uint8_t portid, uint32_t queue,\r\n \t\t\tdr = &qconf->death_row;\r\n \r\n \t\t\t/* prepare mbuf: setup l2_len/l3_len. */\r\n-\t\t\tm->pkt.vlan_macip.f.l2_len = sizeof(*eth_hdr);\r\n-\t\t\tm->pkt.vlan_macip.f.l3_len = sizeof(*ip_hdr);\r\n+\t\t\tm->vlan_macip.f.l2_len = sizeof(*eth_hdr);\r\n+\t\t\tm->vlan_macip.f.l3_len = sizeof(*ip_hdr);\r\n \r\n \t\t\t/* process this fragment. 
*/\r\n \t\t\tmo = rte_ipv4_frag_reassemble_packet(tbl, dr, m, tms, ip_hdr);\r\n@@ -455,8 +455,8 @@ reassemble(struct rte_mbuf *m, uint8_t portid, uint32_t queue,\r\n \t\t\tdr  = &qconf->death_row;\r\n \r\n \t\t\t/* prepare mbuf: setup l2_len/l3_len. */\r\n-\t\t\tm->pkt.vlan_macip.f.l2_len = sizeof(*eth_hdr);\r\n-\t\t\tm->pkt.vlan_macip.f.l3_len = sizeof(*ip_hdr) + sizeof(*frag_hdr);\r\n+\t\t\tm->vlan_macip.f.l2_len = sizeof(*eth_hdr);\r\n+\t\t\tm->vlan_macip.f.l3_len = sizeof(*ip_hdr) + sizeof(*frag_hdr);\r\n \r\n \t\t\tmo = rte_ipv6_frag_reassemble_packet(tbl, dr, m, tms, ip_hdr, frag_hdr);\r\n \t\t\tif (mo == NULL)\r\ndiff --git a/examples/ipv4_multicast/main.c b/examples/ipv4_multicast/main.c\r\nindex 7b53296..cc12d9d 100644\r\n--- a/examples/ipv4_multicast/main.c\r\n+++ b/examples/ipv4_multicast/main.c\r\n@@ -329,17 +329,17 @@ mcast_out_pkt(struct rte_mbuf *pkt, int use_clone)\r\n \t}\r\n \r\n \t/* prepend new header */\r\n-\thdr->pkt.next = pkt;\r\n+\thdr->next = pkt;\r\n \r\n \r\n \t/* update header's fields */\r\n-\thdr->pkt.pkt_len = (uint16_t)(hdr->pkt.data_len + pkt->pkt.pkt_len);\r\n-\thdr->pkt.nb_segs = (uint8_t)(pkt->pkt.nb_segs + 1);\r\n+\thdr->pkt_len = (uint16_t)(hdr->data_len + pkt->pkt_len);\r\n+\thdr->nb_segs = (uint8_t)(pkt->nb_segs + 1);\r\n \r\n \t/* copy metadata from source packet*/\r\n-\thdr->pkt.in_port = pkt->pkt.in_port;\r\n-\thdr->pkt.vlan_macip = pkt->pkt.vlan_macip;\r\n-\thdr->pkt.hash = pkt->pkt.hash;\r\n+\thdr->in_port = pkt->in_port;\r\n+\thdr->vlan_macip = pkt->vlan_macip;\r\n+\thdr->hash = pkt->hash;\r\n \r\n \thdr->ol_flags = pkt->ol_flags;\r\n \r\n@@ -412,7 +412,7 @@ mcast_forward(struct rte_mbuf *m, struct lcore_queue_conf *qconf)\r\n \r\n \t/* Should we use rte_pktmbuf_clone() or not. 
*/\r\n \tuse_clone = (port_num <= MCAST_CLONE_PORTS &&\r\n-\t    m->pkt.nb_segs <= MCAST_CLONE_SEGS);\r\n+\t    m->nb_segs <= MCAST_CLONE_SEGS);\r\n \r\n \t/* Mark all packet's segments as referenced port_num times */\r\n \tif (use_clone == 0)\r\ndiff --git a/examples/l3fwd-acl/main.c b/examples/l3fwd-acl/main.c\r\nindex 9b2c21b..406611f 100644\r\n--- a/examples/l3fwd-acl/main.c\r\n+++ b/examples/l3fwd-acl/main.c\r\n@@ -709,7 +709,7 @@ prepare_one_packet(struct rte_mbuf **pkts_in, struct acl_search_t *acl,\r\n \t\t\tunsigned char *) + sizeof(struct ether_hdr));\r\n \r\n \t\t/* Check to make sure the packet is valid (RFC1812) */\r\n-\t\tif (is_valid_ipv4_pkt(ipv4_hdr, pkt->pkt.pkt_len) >= 0) {\r\n+\t\tif (is_valid_ipv4_pkt(ipv4_hdr, pkt->pkt_len) >= 0) {\r\n \r\n \t\t\t/* Update time to live and header checksum */\r\n \t\t\t--(ipv4_hdr->time_to_live);\r\ndiff --git a/examples/l3fwd-power/main.c b/examples/l3fwd-power/main.c\r\nindex 57fc371..a9d5c80 100644\r\n--- a/examples/l3fwd-power/main.c\r\n+++ b/examples/l3fwd-power/main.c\r\n@@ -687,7 +687,7 @@ l3fwd_simple_forward(struct rte_mbuf *m, uint8_t portid,\r\n \r\n #ifdef DO_RFC_1812_CHECKS\r\n \t\t/* Check to make sure the packet is valid (RFC1812) */\r\n-\t\tif (is_valid_ipv4_pkt(ipv4_hdr, m->pkt.pkt_len) < 0) {\r\n+\t\tif (is_valid_ipv4_pkt(ipv4_hdr, m->pkt_len) < 0) {\r\n \t\t\trte_pktmbuf_free(m);\r\n \t\t\treturn;\r\n \t\t}\r\ndiff --git a/examples/l3fwd-vf/main.c b/examples/l3fwd-vf/main.c\r\nindex 2ca5c21..7b1e08a 100644\r\n--- a/examples/l3fwd-vf/main.c\r\n+++ b/examples/l3fwd-vf/main.c\r\n@@ -489,7 +489,7 @@ l3fwd_simple_forward(struct rte_mbuf *m, uint8_t portid, lookup_struct_t * l3fwd\r\n \r\n #ifdef DO_RFC_1812_CHECKS\r\n \t/* Check to make sure the packet is valid (RFC1812) */\r\n-\tif (is_valid_ipv4_pkt(ipv4_hdr, m->pkt.pkt_len) < 0) {\r\n+\tif (is_valid_ipv4_pkt(ipv4_hdr, m->pkt_len) < 0) {\r\n \t\trte_pktmbuf_free(m);\r\n \t\treturn;\r\n \t}\r\ndiff --git a/examples/l3fwd/main.c 
b/examples/l3fwd/main.c\r\nindex bef409a..e3e3463 100755\r\n--- a/examples/l3fwd/main.c\r\n+++ b/examples/l3fwd/main.c\r\n@@ -809,19 +809,19 @@ simple_ipv4_fwd_4pkts(struct rte_mbuf* m[4], uint8_t portid, struct lcore_conf *\r\n #ifdef DO_RFC_1812_CHECKS\r\n \t/* Check to make sure the packet is valid (RFC1812) */\r\n \tuint8_t valid_mask = MASK_ALL_PKTS;\r\n-\tif (is_valid_ipv4_pkt(ipv4_hdr[0], m[0]->pkt.pkt_len) < 0) {\r\n+\tif (is_valid_ipv4_pkt(ipv4_hdr[0], m[0]->pkt_len) < 0) {\r\n \t\trte_pktmbuf_free(m[0]);\r\n \t\tvalid_mask &= EXECLUDE_1ST_PKT;\r\n \t}\r\n-\tif (is_valid_ipv4_pkt(ipv4_hdr[1], m[1]->pkt.pkt_len) < 0) {\r\n+\tif (is_valid_ipv4_pkt(ipv4_hdr[1], m[1]->pkt_len) < 0) {\r\n \t\trte_pktmbuf_free(m[1]);\r\n \t\tvalid_mask &= EXECLUDE_2ND_PKT;\r\n \t}\r\n-\tif (is_valid_ipv4_pkt(ipv4_hdr[2], m[2]->pkt.pkt_len) < 0) {\r\n+\tif (is_valid_ipv4_pkt(ipv4_hdr[2], m[2]->pkt_len) < 0) {\r\n \t\trte_pktmbuf_free(m[2]);\r\n \t\tvalid_mask &= EXECLUDE_3RD_PKT;\r\n \t}\r\n-\tif (is_valid_ipv4_pkt(ipv4_hdr[3], m[3]->pkt.pkt_len) < 0) {\r\n+\tif (is_valid_ipv4_pkt(ipv4_hdr[3], m[3]->pkt_len) < 0) {\r\n \t\trte_pktmbuf_free(m[3]);\r\n \t\tvalid_mask &= EXECLUDE_4TH_PKT;\r\n \t}\r\n@@ -1009,7 +1009,7 @@ l3fwd_simple_forward(struct rte_mbuf *m, uint8_t portid, struct lcore_conf *qcon\r\n \r\n #ifdef DO_RFC_1812_CHECKS\r\n \t\t/* Check to make sure the packet is valid (RFC1812) */\r\n-\t\tif (is_valid_ipv4_pkt(ipv4_hdr, m->pkt.pkt_len) < 0) {\r\n+\t\tif (is_valid_ipv4_pkt(ipv4_hdr, m->pkt_len) < 0) {\r\n \t\t\trte_pktmbuf_free(m);\r\n \t\t\treturn;\r\n \t\t}\r\ndiff --git a/examples/load_balancer/runtime.c b/examples/load_balancer/runtime.c\r\nindex 9612392..b69917b 100644\r\n--- a/examples/load_balancer/runtime.c\r\n+++ b/examples/load_balancer/runtime.c\r\n@@ -540,7 +540,7 @@ app_lcore_worker(\r\n \t\t\tipv4_dst = rte_be_to_cpu_32(ipv4_hdr->dst_addr);\r\n \r\n \t\t\tif (unlikely(rte_lpm_lookup(lp->lpm_table, ipv4_dst, &port) != 0)) {\r\n-\t\t\t\tport = 
pkt->pkt.in_port;\r\n+\t\t\t\tport = pkt->in_port;\r\n \t\t\t}\r\n \r\n \t\t\tpos = lp->mbuf_out[port].n_mbufs;\r\ndiff --git a/examples/multi_process/client_server_mp/mp_client/client.c b/examples/multi_process/client_server_mp/mp_client/client.c\r\nindex 91f70eb..71e4a48 100644\r\n--- a/examples/multi_process/client_server_mp/mp_client/client.c\r\n+++ b/examples/multi_process/client_server_mp/mp_client/client.c\r\n@@ -211,7 +211,7 @@ enqueue_packet(struct rte_mbuf *buf, uint8_t port)\r\n static void\r\n handle_packet(struct rte_mbuf *buf)\r\n {\r\n-\tconst uint8_t in_port = buf->pkt.in_port;\r\n+\tconst uint8_t in_port = buf->in_port;\r\n \tconst uint8_t out_port = output_ports[in_port];\r\n \r\n \tenqueue_packet(buf, out_port);\r\ndiff --git a/examples/quota_watermark/qw/main.c b/examples/quota_watermark/qw/main.c\r\nindex 579698b..c8bd62f 100644\r\n--- a/examples/quota_watermark/qw/main.c\r\n+++ b/examples/quota_watermark/qw/main.c\r\n@@ -104,8 +104,8 @@ static void send_pause_frame(uint8_t port_id, uint16_t duration)\r\n     pause_frame->opcode = rte_cpu_to_be_16(0x0001);\r\n     pause_frame->param  = rte_cpu_to_be_16(duration);\r\n \r\n-    mbuf->pkt.pkt_len  = 60;\r\n-    mbuf->pkt.data_len = 60;\r\n+    mbuf->pkt_len  = 60;\r\n+    mbuf->data_len = 60;\r\n \r\n     rte_eth_tx_burst(port_id, 0, &mbuf, 1);\r\n }\r\ndiff --git a/examples/vhost/main.c b/examples/vhost/main.c\r\nindex fe28912..afbab31 100644\r\n--- a/examples/vhost/main.c\r\n+++ b/examples/vhost/main.c\r\n@@ -1027,7 +1027,7 @@ virtio_dev_rx(struct virtio_net *dev, struct rte_mbuf **pkts, uint32_t count)\r\n \t\tvq->used->ring[res_cur_idx & (vq->size - 1)].len = packet_len;\r\n \r\n \t\t/* Copy mbuf data to buffer */\r\n-\t\trte_memcpy((void *)(uintptr_t)buff_addr, (const void*)buff->pkt.data, rte_pktmbuf_data_len(buff));\r\n+\t\trte_memcpy((void *)(uintptr_t)buff_addr, (const void*)buff->data, rte_pktmbuf_data_len(buff));\r\n \r\n \t\tres_cur_idx++;\r\n \t\tpacket_success++;\r\n@@ -1089,7 
+1089,7 @@ link_vmdq(struct virtio_net *dev, struct rte_mbuf *m)\r\n \tint i, ret;\r\n \r\n \t/* Learn MAC address of guest device from packet */\r\n-\tpkt_hdr = (struct ether_hdr *)m->pkt.data;\r\n+\tpkt_hdr = (struct ether_hdr *)m->data;\r\n \r\n \tdev_ll = ll_root_used;\r\n \r\n@@ -1176,7 +1176,7 @@ virtio_tx_local(struct virtio_net *dev, struct rte_mbuf *m)\r\n \tstruct ether_hdr *pkt_hdr;\r\n \tuint64_t ret = 0;\r\n \r\n-\tpkt_hdr = (struct ether_hdr *)m->pkt.data;\r\n+\tpkt_hdr = (struct ether_hdr *)m->data;\r\n \r\n \t/*get the used devices list*/\r\n \tdev_ll = ll_root_used;\r\n@@ -1235,7 +1235,7 @@ virtio_tx_route(struct virtio_net* dev, struct rte_mbuf *m, struct rte_mempool *\r\n \tunsigned len, ret, offset = 0;\r\n \tconst uint16_t lcore_id = rte_lcore_id();\r\n \tstruct virtio_net_data_ll *dev_ll = ll_root_used;\r\n-\tstruct ether_hdr *pkt_hdr = (struct ether_hdr *)m->pkt.data;\r\n+\tstruct ether_hdr *pkt_hdr = (struct ether_hdr *)m->data;\r\n \r\n \t/*check if destination is local VM*/\r\n \tif ((vm2vm_mode == VM2VM_SOFTWARE) && (virtio_tx_local(dev, m) == 0))\r\n@@ -1288,22 +1288,22 @@ virtio_tx_route(struct virtio_net* dev, struct rte_mbuf *m, struct rte_mempool *\r\n \t\treturn;\r\n \t}\r\n \r\n-\tmbuf->pkt.data_len = m->pkt.data_len + VLAN_HLEN + offset;\r\n-\tmbuf->pkt.pkt_len = mbuf->pkt.data_len;\r\n+\tmbuf->data_len = m->data_len + VLAN_HLEN + offset;\r\n+\tmbuf->pkt_len = mbuf->data_len;\r\n \r\n \t/* Copy ethernet header to mbuf. */\r\n-\trte_memcpy((void*)mbuf->pkt.data, (const void*)m->pkt.data, ETH_HLEN);\r\n+\trte_memcpy((void*)mbuf->data, (const void*)m->data, ETH_HLEN);\r\n \r\n \r\n \t/* Setup vlan header. 
Bytes need to be re-ordered for network with htons()*/\r\n-\tvlan_hdr = (struct vlan_ethhdr *) mbuf->pkt.data;\r\n+\tvlan_hdr = (struct vlan_ethhdr *) mbuf->data;\r\n \tvlan_hdr->h_vlan_encapsulated_proto = vlan_hdr->h_vlan_proto;\r\n \tvlan_hdr->h_vlan_proto = htons(ETH_P_8021Q);\r\n \tvlan_hdr->h_vlan_TCI = htons(vlan_tag);\r\n \r\n \t/* Copy the remaining packet contents to the mbuf. */\r\n-\trte_memcpy((void*) ((uint8_t*)mbuf->pkt.data + VLAN_ETH_HLEN),\r\n-\t\t(const void*) ((uint8_t*)m->pkt.data + ETH_HLEN), (m->pkt.data_len - ETH_HLEN));\r\n+\trte_memcpy((void*) ((uint8_t*)mbuf->data + VLAN_ETH_HLEN),\r\n+\t\t(const void*) ((uint8_t*)m->data + ETH_HLEN), (m->data_len - ETH_HLEN));\r\n \ttx_q->m_table[len] = mbuf;\r\n \tlen++;\r\n \tif (enable_stats) {\r\n@@ -1393,8 +1393,8 @@ virtio_dev_tx(struct virtio_net* dev, struct rte_mempool *mbuf_pool)\r\n \t\tvq->used->ring[used_idx].len = 0;\r\n \r\n \t\t/* Setup dummy mbuf. This is copied to a real mbuf if transmitted out the physical port. 
*/\r\n-\t\tm.pkt.data_len = desc->len;\r\n-\t\tm.pkt.data = (void*)(uintptr_t)buff_addr;\r\n+\t\tm.data_len = desc->len;\r\n+\t\tm.data = (void*)(uintptr_t)buff_addr;\r\n \r\n \t\tPRINT_PACKET(dev, (uintptr_t)buff_addr, desc->len, 0);\r\n \r\n@@ -1713,9 +1713,9 @@ attach_rxmbuf_zcp(struct virtio_net *dev)\r\n \t}\r\n \r\n \tmbuf->buf_addr = (void *)(uintptr_t)(buff_addr - RTE_PKTMBUF_HEADROOM);\r\n-\tmbuf->pkt.data = (void *)(uintptr_t)(buff_addr);\r\n+\tmbuf->data = (void *)(uintptr_t)(buff_addr);\r\n \tmbuf->buf_physaddr = phys_addr - RTE_PKTMBUF_HEADROOM;\r\n-\tmbuf->pkt.data_len = desc->len;\r\n+\tmbuf->data_len = desc->len;\r\n \tMBUF_HEADROOM_UINT32(mbuf) = (uint32_t)desc_idx;\r\n \r\n \tLOG_DEBUG(VHOST_DATA,\r\n@@ -1750,9 +1750,9 @@ static inline void pktmbuf_detach_zcp(struct rte_mbuf *m)\r\n \r\n \tbuf_ofs = (RTE_PKTMBUF_HEADROOM <= m->buf_len) ?\r\n \t\t\tRTE_PKTMBUF_HEADROOM : m->buf_len;\r\n-\tm->pkt.data = (char *) m->buf_addr + buf_ofs;\r\n+\tm->data = (char *) m->buf_addr + buf_ofs;\r\n \r\n-\tm->pkt.data_len = 0;\r\n+\tm->data_len = 0;\r\n }\r\n \r\n /*\r\n@@ -1984,7 +1984,7 @@ virtio_tx_route_zcp(struct virtio_net *dev, struct rte_mbuf *m,\r\n \tunsigned len, ret, offset = 0;\r\n \tstruct vpool *vpool;\r\n \tstruct virtio_net_data_ll *dev_ll = ll_root_used;\r\n-\tstruct ether_hdr *pkt_hdr = (struct ether_hdr *)m->pkt.data;\r\n+\tstruct ether_hdr *pkt_hdr = (struct ether_hdr *)m->data;\r\n \tuint16_t vlan_tag = (uint16_t)vlan_tags[(uint16_t)dev->device_fh];\r\n \r\n \t/*Add packet to the port tx queue*/\r\n@@ -2055,24 +2055,24 @@ virtio_tx_route_zcp(struct virtio_net *dev, struct rte_mbuf *m,\r\n \t\t}\r\n \t}\r\n \r\n-\tmbuf->pkt.nb_segs = m->pkt.nb_segs;\r\n-\tmbuf->pkt.next = m->pkt.next;\r\n-\tmbuf->pkt.data_len = m->pkt.data_len + offset;\r\n-\tmbuf->pkt.pkt_len = mbuf->pkt.data_len;\r\n+\tmbuf->nb_segs = m->nb_segs;\r\n+\tmbuf->next = m->next;\r\n+\tmbuf->data_len = m->data_len + offset;\r\n+\tmbuf->pkt_len = mbuf->data_len;\r\n \tif 
(unlikely(need_copy)) {\r\n \t\t/* Copy the packet contents to the mbuf. */\r\n-\t\trte_memcpy((void *)((uint8_t *)mbuf->pkt.data),\r\n-\t\t\t(const void *) ((uint8_t *)m->pkt.data),\r\n-\t\t\tm->pkt.data_len);\r\n+\t\trte_memcpy((void *)((uint8_t *)mbuf->data),\r\n+\t\t\t(const void *) ((uint8_t *)m->data),\r\n+\t\t\tm->data_len);\r\n \t} else {\r\n-\t\tmbuf->pkt.data = m->pkt.data;\r\n+\t\tmbuf->data = m->data;\r\n \t\tmbuf->buf_physaddr = m->buf_physaddr;\r\n \t\tmbuf->buf_addr = m->buf_addr;\r\n \t}\r\n \tmbuf->ol_flags = PKT_TX_VLAN_PKT;\r\n-\tmbuf->pkt.vlan_macip.f.vlan_tci = vlan_tag;\r\n-\tmbuf->pkt.vlan_macip.f.l2_len = sizeof(struct ether_hdr);\r\n-\tmbuf->pkt.vlan_macip.f.l3_len = sizeof(struct ipv4_hdr);\r\n+\tmbuf->vlan_macip.f.vlan_tci = vlan_tag;\r\n+\tmbuf->vlan_macip.f.l2_len = sizeof(struct ether_hdr);\r\n+\tmbuf->vlan_macip.f.l3_len = sizeof(struct ipv4_hdr);\r\n \tMBUF_HEADROOM_UINT32(mbuf) = (uint32_t)desc_idx;\r\n \r\n \ttx_q->m_table[len] = mbuf;\r\n@@ -2081,8 +2081,8 @@ virtio_tx_route_zcp(struct virtio_net *dev, struct rte_mbuf *m,\r\n \tLOG_DEBUG(VHOST_DATA,\r\n \t\t\"(%\"PRIu64\") in tx_route_zcp: pkt: nb_seg: %d, next:%s\\n\",\r\n \t\tdev->device_fh,\r\n-\t\tmbuf->pkt.nb_segs,\r\n-\t\t(mbuf->pkt.next == NULL) ? \"null\" : \"non-null\");\r\n+\t\tmbuf->nb_segs,\r\n+\t\t(mbuf->next == NULL) ? \"null\" : \"non-null\");\r\n \r\n \tif (enable_stats) {\r\n \t\tdev_statistics[dev->device_fh].tx_total++;\r\n@@ -2196,11 +2196,11 @@ virtio_dev_tx_zcp(struct virtio_net *dev)\r\n \t\t * Setup dummy mbuf. 
This is copied to a real mbuf if\r\n \t\t * transmitted out the physical port.\r\n \t\t */\r\n-\t\tm.pkt.data_len = desc->len;\r\n-\t\tm.pkt.nb_segs = 1;\r\n-\t\tm.pkt.next = NULL;\r\n-\t\tm.pkt.data = (void *)(uintptr_t)buff_addr;\r\n-\t\tm.buf_addr = m.pkt.data;\r\n+\t\tm.data_len = desc->len;\r\n+\t\tm.nb_segs = 1;\r\n+\t\tm.next = NULL;\r\n+\t\tm.data = (void *)(uintptr_t)buff_addr;\r\n+\t\tm.buf_addr = m.data;\r\n \t\tm.buf_physaddr = phys_addr;\r\n \r\n \t\t/*\r\ndiff --git a/examples/vhost_xen/main.c b/examples/vhost_xen/main.c\r\nindex b275747..8162cd8 100644\r\n--- a/examples/vhost_xen/main.c\r\n+++ b/examples/vhost_xen/main.c\r\n@@ -677,7 +677,7 @@ virtio_dev_rx(struct virtio_net *dev, struct rte_mbuf **pkts, uint32_t count)\r\n \t\tvq->used->ring[res_cur_idx & (vq->size - 1)].len = packet_len;\r\n \r\n \t\t/* Copy mbuf data to buffer */\r\n-\t\trte_memcpy((void *)(uintptr_t)buff_addr, (const void*)buff->pkt.data, rte_pktmbuf_data_len(buff));\r\n+\t\trte_memcpy((void *)(uintptr_t)buff_addr, (const void*)buff->data, rte_pktmbuf_data_len(buff));\r\n \r\n \t\tres_cur_idx++;\r\n \t\tpacket_success++;\r\n@@ -808,7 +808,7 @@ virtio_tx_local(struct virtio_net *dev, struct rte_mbuf *m)\r\n \tstruct ether_hdr *pkt_hdr;\r\n \tuint64_t ret = 0;\r\n \r\n-\tpkt_hdr = (struct ether_hdr *)m->pkt.data;\r\n+\tpkt_hdr = (struct ether_hdr *)m->data;\r\n \r\n \t/*get the used devices list*/\r\n \tdev_ll = ll_root_used;\r\n@@ -879,22 +879,22 @@ virtio_tx_route(struct virtio_net* dev, struct rte_mbuf *m, struct rte_mempool *\r\n \tif(!mbuf)\r\n \t\treturn;\r\n \r\n-\tmbuf->pkt.data_len = m->pkt.data_len + VLAN_HLEN;\r\n-\tmbuf->pkt.pkt_len = mbuf->pkt.data_len;\r\n+\tmbuf->data_len = m->data_len + VLAN_HLEN;\r\n+\tmbuf->pkt_len = mbuf->data_len;\r\n \r\n \t/* Copy ethernet header to mbuf. */\r\n-\trte_memcpy((void*)mbuf->pkt.data, (const void*)m->pkt.data, ETH_HLEN);\r\n+\trte_memcpy((void*)mbuf->data, (const void*)m->data, ETH_HLEN);\r\n \r\n \r\n \t/* Setup vlan header. 
Bytes need to be re-ordered for network with htons()*/\r\n-\tvlan_hdr = (struct vlan_ethhdr *) mbuf->pkt.data;\r\n+\tvlan_hdr = (struct vlan_ethhdr *) mbuf->data;\r\n \tvlan_hdr->h_vlan_encapsulated_proto = vlan_hdr->h_vlan_proto;\r\n \tvlan_hdr->h_vlan_proto = htons(ETH_P_8021Q);\r\n \tvlan_hdr->h_vlan_TCI = htons(vlan_tag);\r\n \r\n \t/* Copy the remaining packet contents to the mbuf. */\r\n-\trte_memcpy((void*) ((uint8_t*)mbuf->pkt.data + VLAN_ETH_HLEN),\r\n-\t\t(const void*) ((uint8_t*)m->pkt.data + ETH_HLEN), (m->pkt.data_len - ETH_HLEN));\r\n+\trte_memcpy((void*) ((uint8_t*)mbuf->data + VLAN_ETH_HLEN),\r\n+\t\t(const void*) ((uint8_t*)m->data + ETH_HLEN), (m->data_len - ETH_HLEN));\r\n \ttx_q->m_table[len] = mbuf;\r\n \tlen++;\r\n \tif (enable_stats) {\r\n@@ -980,9 +980,9 @@ virtio_dev_tx(struct virtio_net* dev, struct rte_mempool *mbuf_pool)\r\n \t\trte_prefetch0((void*)(uintptr_t)buff_addr);\r\n \r\n \t\t/* Setup dummy mbuf. This is copied to a real mbuf if transmitted out the physical port. 
*/\r\n-\t\tm.pkt.data_len = desc->len;\r\n-\t\tm.pkt.data = (void*)(uintptr_t)buff_addr;\r\n-\t\tm.pkt.nb_segs = 1;\r\n+\t\tm.data_len = desc->len;\r\n+\t\tm.data = (void*)(uintptr_t)buff_addr;\r\n+\t\tm.nb_segs = 1;\r\n \r\n \t\tvirtio_tx_route(dev, &m, mbuf_pool, 0);\r\n \r\ndiff --git a/lib/librte_distributor/rte_distributor.c b/lib/librte_distributor/rte_distributor.c\r\nindex 2d92e45..585ff88 100644\r\n--- a/lib/librte_distributor/rte_distributor.c\r\n+++ b/lib/librte_distributor/rte_distributor.c\r\n@@ -282,7 +282,7 @@ rte_distributor_process(struct rte_distributor *d,\r\n \t\t\tnext_mb = mbufs[next_idx++];\r\n \t\t\tnext_value = (((int64_t)(uintptr_t)next_mb)\r\n \t\t\t\t\t<< RTE_DISTRIB_FLAG_BITS);\r\n-\t\t\tnew_tag = (next_mb->pkt.hash.rss | 1);\r\n+\t\t\tnew_tag = (next_mb->hash.rss | 1);\r\n \r\n \t\t\tuint32_t match = 0;\r\n \t\t\tunsigned i;\r\ndiff --git a/lib/librte_ip_frag/ip_frag_common.h b/lib/librte_ip_frag/ip_frag_common.h\r\nindex 70be949..81ca23a 100644\r\n--- a/lib/librte_ip_frag/ip_frag_common.h\r\n+++ b/lib/librte_ip_frag/ip_frag_common.h\r\n@@ -173,20 +173,20 @@ ip_frag_chain(struct rte_mbuf *mn, struct rte_mbuf *mp)\r\n \tstruct rte_mbuf *ms;\r\n \r\n \t/* adjust start of the last fragment data. */\r\n-\trte_pktmbuf_adj(mp, (uint16_t)(mp->pkt.vlan_macip.f.l2_len +\r\n-\t\tmp->pkt.vlan_macip.f.l3_len));\r\n+\trte_pktmbuf_adj(mp, (uint16_t)(mp->vlan_macip.f.l2_len +\r\n+\t\tmp->vlan_macip.f.l3_len));\r\n \r\n \t/* chain two fragments. */\r\n \tms = rte_pktmbuf_lastseg(mn);\r\n-\tms->pkt.next = mp;\r\n+\tms->next = mp;\r\n \r\n \t/* accumulate number of segments and total length. */\r\n-\tmn->pkt.nb_segs = (uint8_t)(mn->pkt.nb_segs + mp->pkt.nb_segs);\r\n-\tmn->pkt.pkt_len += mp->pkt.pkt_len;\r\n+\tmn->nb_segs = (uint8_t)(mn->nb_segs + mp->nb_segs);\r\n+\tmn->pkt_len += mp->pkt_len;\r\n \r\n \t/* reset pkt_len and nb_segs for chained fragment. 
*/\r\n-\tmp->pkt.pkt_len = mp->pkt.data_len;\r\n-\tmp->pkt.nb_segs = 1;\r\n+\tmp->pkt_len = mp->data_len;\r\n+\tmp->nb_segs = 1;\r\n }\r\n \r\n \r\ndiff --git a/lib/librte_ip_frag/rte_ipv4_fragmentation.c b/lib/librte_ip_frag/rte_ipv4_fragmentation.c\r\nindex 9d4e1f7..0b10310 100644\r\n--- a/lib/librte_ip_frag/rte_ipv4_fragmentation.c\r\n+++ b/lib/librte_ip_frag/rte_ipv4_fragmentation.c\r\n@@ -109,7 +109,7 @@ rte_ipv4_fragment_packet(struct rte_mbuf *pkt_in,\r\n \t/* Fragment size should be a multiply of 8. */\r\n \tIP_FRAG_ASSERT((frag_size & IPV4_HDR_FO_MASK) == 0);\r\n \r\n-\tin_hdr = (struct ipv4_hdr *) pkt_in->pkt.data;\r\n+\tin_hdr = (struct ipv4_hdr *) pkt_in->data;\r\n \tflag_offset = rte_cpu_to_be_16(in_hdr->fragment_offset);\r\n \r\n \t/* If Don't Fragment flag is set */\r\n@@ -118,7 +118,7 @@ rte_ipv4_fragment_packet(struct rte_mbuf *pkt_in,\r\n \r\n \t/* Check that pkts_out is big enough to hold all fragments */\r\n \tif (unlikely(frag_size * nb_pkts_out <\r\n-\t    (uint16_t)(pkt_in->pkt.pkt_len - sizeof (struct ipv4_hdr))))\r\n+\t    (uint16_t)(pkt_in->pkt_len - sizeof (struct ipv4_hdr))))\r\n \t\treturn -EINVAL;\r\n \r\n \tin_seg = pkt_in;\r\n@@ -140,8 +140,8 @@ rte_ipv4_fragment_packet(struct rte_mbuf *pkt_in,\r\n \t\t}\r\n \r\n \t\t/* Reserve space for the IP header that will be built later */\r\n-\t\tout_pkt->pkt.data_len = sizeof(struct ipv4_hdr);\r\n-\t\tout_pkt->pkt.pkt_len = sizeof(struct ipv4_hdr);\r\n+\t\tout_pkt->data_len = sizeof(struct ipv4_hdr);\r\n+\t\tout_pkt->pkt_len = sizeof(struct ipv4_hdr);\r\n \r\n \t\tout_seg_prev = out_pkt;\r\n \t\tmore_out_segs = 1;\r\n@@ -156,29 +156,29 @@ rte_ipv4_fragment_packet(struct rte_mbuf *pkt_in,\r\n \t\t\t\t__free_fragments(pkts_out, out_pkt_pos);\r\n \t\t\t\treturn -ENOMEM;\r\n \t\t\t}\r\n-\t\t\tout_seg_prev->pkt.next = out_seg;\r\n+\t\t\tout_seg_prev->next = out_seg;\r\n \t\t\tout_seg_prev = out_seg;\r\n \r\n \t\t\t/* Prepare indirect buffer */\r\n \t\t\trte_pktmbuf_attach(out_seg, 
in_seg);\r\n-\t\t\tlen = mtu_size - out_pkt->pkt.pkt_len;\r\n-\t\t\tif (len > (in_seg->pkt.data_len - in_seg_data_pos)) {\r\n-\t\t\t\tlen = in_seg->pkt.data_len - in_seg_data_pos;\r\n+\t\t\tlen = mtu_size - out_pkt->pkt_len;\r\n+\t\t\tif (len > (in_seg->data_len - in_seg_data_pos)) {\r\n+\t\t\t\tlen = in_seg->data_len - in_seg_data_pos;\r\n \t\t\t}\r\n-\t\t\tout_seg->pkt.data = (char*) in_seg->pkt.data + (uint16_t)in_seg_data_pos;\r\n-\t\t\tout_seg->pkt.data_len = (uint16_t)len;\r\n-\t\t\tout_pkt->pkt.pkt_len = (uint16_t)(len +\r\n-\t\t\t    out_pkt->pkt.pkt_len);\r\n-\t\t\tout_pkt->pkt.nb_segs += 1;\r\n+\t\t\tout_seg->data = (char*) in_seg->data + (uint16_t)in_seg_data_pos;\r\n+\t\t\tout_seg->data_len = (uint16_t)len;\r\n+\t\t\tout_pkt->pkt_len = (uint16_t)(len +\r\n+\t\t\t    out_pkt->pkt_len);\r\n+\t\t\tout_pkt->nb_segs += 1;\r\n \t\t\tin_seg_data_pos += len;\r\n \r\n \t\t\t/* Current output packet (i.e. fragment) done ? */\r\n-\t\t\tif (unlikely(out_pkt->pkt.pkt_len >= mtu_size))\r\n+\t\t\tif (unlikely(out_pkt->pkt_len >= mtu_size))\r\n \t\t\t\tmore_out_segs = 0;\r\n \r\n \t\t\t/* Current input segment done ? 
*/\r\n-\t\t\tif (unlikely(in_seg_data_pos == in_seg->pkt.data_len)) {\r\n-\t\t\t\tin_seg = in_seg->pkt.next;\r\n+\t\t\tif (unlikely(in_seg_data_pos == in_seg->data_len)) {\r\n+\t\t\t\tin_seg = in_seg->next;\r\n \t\t\t\tin_seg_data_pos = 0;\r\n \r\n \t\t\t\tif (unlikely(in_seg == NULL))\r\n@@ -188,17 +188,17 @@ rte_ipv4_fragment_packet(struct rte_mbuf *pkt_in,\r\n \r\n \t\t/* Build the IP header */\r\n \r\n-\t\tout_hdr = (struct ipv4_hdr*) out_pkt->pkt.data;\r\n+\t\tout_hdr = (struct ipv4_hdr*) out_pkt->data;\r\n \r\n \t\t__fill_ipv4hdr_frag(out_hdr, in_hdr,\r\n-\t\t    (uint16_t)out_pkt->pkt.pkt_len,\r\n+\t\t    (uint16_t)out_pkt->pkt_len,\r\n \t\t    flag_offset, fragment_offset, more_in_segs);\r\n \r\n \t\tfragment_offset = (uint16_t)(fragment_offset +\r\n-\t\t    out_pkt->pkt.pkt_len - sizeof(struct ipv4_hdr));\r\n+\t\t    out_pkt->pkt_len - sizeof(struct ipv4_hdr));\r\n \r\n \t\tout_pkt->ol_flags |= PKT_TX_IP_CKSUM;\r\n-\t\tout_pkt->pkt.vlan_macip.f.l3_len = sizeof(struct ipv4_hdr);\r\n+\t\tout_pkt->vlan_macip.f.l3_len = sizeof(struct ipv4_hdr);\r\n \r\n \t\t/* Write the fragment to the output list */\r\n \t\tpkts_out[out_pkt_pos] = out_pkt;\r\ndiff --git a/lib/librte_ip_frag/rte_ipv4_reassembly.c b/lib/librte_ip_frag/rte_ipv4_reassembly.c\r\nindex a27b23a..06c37af 100644\r\n--- a/lib/librte_ip_frag/rte_ipv4_reassembly.c\r\n+++ b/lib/librte_ip_frag/rte_ipv4_reassembly.c\r\n@@ -87,10 +87,10 @@ ipv4_frag_reassemble(const struct ip_frag_pkt *fp)\r\n \r\n \t/* update ipv4 header for the reassmebled packet */\r\n \tip_hdr = (struct ipv4_hdr*)(rte_pktmbuf_mtod(m, uint8_t *) +\r\n-\t\tm->pkt.vlan_macip.f.l2_len);\r\n+\t\tm->vlan_macip.f.l2_len);\r\n \r\n \tip_hdr->total_length = rte_cpu_to_be_16((uint16_t)(fp->total_size +\r\n-\t\tm->pkt.vlan_macip.f.l3_len));\r\n+\t\tm->vlan_macip.f.l3_len));\r\n \tip_hdr->fragment_offset = (uint16_t)(ip_hdr->fragment_offset &\r\n \t\trte_cpu_to_be_16(IPV4_HDR_DF_FLAG));\r\n \tip_hdr->hdr_checksum = 0;\r\n@@ -137,7 +137,7 @@ 
rte_ipv4_frag_reassemble_packet(struct rte_ip_frag_tbl *tbl,\r\n \r\n \tip_ofs *= IPV4_HDR_OFFSET_UNITS;\r\n \tip_len = (uint16_t)(rte_be_to_cpu_16(ip_hdr->total_length) -\r\n-\t\tmb->pkt.vlan_macip.f.l3_len);\r\n+\t\tmb->vlan_macip.f.l3_len);\r\n \r\n \tIP_FRAG_LOG(DEBUG, \"%s:%d:\\n\"\r\n \t\t\"mbuf: %p, tms: %\" PRIu64\r\ndiff --git a/lib/librte_ip_frag/rte_ipv6_fragmentation.c b/lib/librte_ip_frag/rte_ipv6_fragmentation.c\r\nindex fa04991..e007662 100644\r\n--- a/lib/librte_ip_frag/rte_ipv6_fragmentation.c\r\n+++ b/lib/librte_ip_frag/rte_ipv6_fragmentation.c\r\n@@ -122,10 +122,10 @@ rte_ipv6_fragment_packet(struct rte_mbuf *pkt_in,\r\n \r\n \t/* Check that pkts_out is big enough to hold all fragments */\r\n \tif (unlikely (frag_size * nb_pkts_out <\r\n-\t    (uint16_t)(pkt_in->pkt.pkt_len - sizeof (struct ipv6_hdr))))\r\n+\t    (uint16_t)(pkt_in->pkt_len - sizeof (struct ipv6_hdr))))\r\n \t\treturn (-EINVAL);\r\n \r\n-\tin_hdr = (struct ipv6_hdr *) pkt_in->pkt.data;\r\n+\tin_hdr = (struct ipv6_hdr *) pkt_in->data;\r\n \r\n \tin_seg = pkt_in;\r\n \tin_seg_data_pos = sizeof(struct ipv6_hdr);\r\n@@ -146,8 +146,8 @@ rte_ipv6_fragment_packet(struct rte_mbuf *pkt_in,\r\n \t\t}\r\n \r\n \t\t/* Reserve space for the IP header that will be built later */\r\n-\t\tout_pkt->pkt.data_len = sizeof(struct ipv6_hdr) + sizeof(struct ipv6_extension_fragment);\r\n-\t\tout_pkt->pkt.pkt_len  = sizeof(struct ipv6_hdr) + sizeof(struct ipv6_extension_fragment);\r\n+\t\tout_pkt->data_len = sizeof(struct ipv6_hdr) + sizeof(struct ipv6_extension_fragment);\r\n+\t\tout_pkt->pkt_len  = sizeof(struct ipv6_hdr) + sizeof(struct ipv6_extension_fragment);\r\n \r\n \t\tout_seg_prev = out_pkt;\r\n \t\tmore_out_segs = 1;\r\n@@ -162,30 +162,30 @@ rte_ipv6_fragment_packet(struct rte_mbuf *pkt_in,\r\n \t\t\t\t__free_fragments(pkts_out, out_pkt_pos);\r\n \t\t\t\treturn (-ENOMEM);\r\n \t\t\t}\r\n-\t\t\tout_seg_prev->pkt.next = out_seg;\r\n+\t\t\tout_seg_prev->next = out_seg;\r\n \t\t\tout_seg_prev = 
out_seg;\r\n \r\n \t\t\t/* Prepare indirect buffer */\r\n \t\t\trte_pktmbuf_attach(out_seg, in_seg);\r\n-\t\t\tlen = mtu_size - out_pkt->pkt.pkt_len;\r\n-\t\t\tif (len > (in_seg->pkt.data_len - in_seg_data_pos)) {\r\n-\t\t\t\tlen = in_seg->pkt.data_len - in_seg_data_pos;\r\n+\t\t\tlen = mtu_size - out_pkt->pkt_len;\r\n+\t\t\tif (len > (in_seg->data_len - in_seg_data_pos)) {\r\n+\t\t\t\tlen = in_seg->data_len - in_seg_data_pos;\r\n \t\t\t}\r\n-\t\t\tout_seg->pkt.data = (char *) in_seg->pkt.data + (uint16_t) in_seg_data_pos;\r\n-\t\t\tout_seg->pkt.data_len = (uint16_t)len;\r\n-\t\t\tout_pkt->pkt.pkt_len = (uint16_t)(len +\r\n-\t\t\t    out_pkt->pkt.pkt_len);\r\n-\t\t\tout_pkt->pkt.nb_segs += 1;\r\n+\t\t\tout_seg->data = (char *) in_seg->data + (uint16_t) in_seg_data_pos;\r\n+\t\t\tout_seg->data_len = (uint16_t)len;\r\n+\t\t\tout_pkt->pkt_len = (uint16_t)(len +\r\n+\t\t\t    out_pkt->pkt_len);\r\n+\t\t\tout_pkt->nb_segs += 1;\r\n \t\t\tin_seg_data_pos += len;\r\n \r\n \t\t\t/* Current output packet (i.e. fragment) done ? */\r\n-\t\t\tif (unlikely(out_pkt->pkt.pkt_len >= mtu_size)) {\r\n+\t\t\tif (unlikely(out_pkt->pkt_len >= mtu_size)) {\r\n \t\t\t\tmore_out_segs = 0;\r\n \t\t\t}\r\n \r\n \t\t\t/* Current input segment done ? 
*/\r\n-\t\t\tif (unlikely(in_seg_data_pos == in_seg->pkt.data_len)) {\r\n-\t\t\t\tin_seg = in_seg->pkt.next;\r\n+\t\t\tif (unlikely(in_seg_data_pos == in_seg->data_len)) {\r\n+\t\t\t\tin_seg = in_seg->next;\r\n \t\t\t\tin_seg_data_pos = 0;\r\n \r\n \t\t\t\tif (unlikely(in_seg == NULL)) {\r\n@@ -196,14 +196,14 @@ rte_ipv6_fragment_packet(struct rte_mbuf *pkt_in,\r\n \r\n \t\t/* Build the IP header */\r\n \r\n-\t\tout_hdr = (struct ipv6_hdr *) out_pkt->pkt.data;\r\n+\t\tout_hdr = (struct ipv6_hdr *) out_pkt->data;\r\n \r\n \t\t__fill_ipv6hdr_frag(out_hdr, in_hdr,\r\n-\t\t    (uint16_t) out_pkt->pkt.pkt_len - sizeof(struct ipv6_hdr),\r\n+\t\t    (uint16_t) out_pkt->pkt_len - sizeof(struct ipv6_hdr),\r\n \t\t    fragment_offset, more_in_segs);\r\n \r\n \t\tfragment_offset = (uint16_t)(fragment_offset +\r\n-\t\t    out_pkt->pkt.pkt_len - sizeof(struct ipv6_hdr)\r\n+\t\t    out_pkt->pkt_len - sizeof(struct ipv6_hdr)\r\n \t\t\t- sizeof(struct ipv6_extension_fragment));\r\n \r\n \t\t/* Write the fragment to the output list */\r\ndiff --git a/lib/librte_ip_frag/rte_ipv6_reassembly.c b/lib/librte_ip_frag/rte_ipv6_reassembly.c\r\nindex 3f06960..dee3425 100644\r\n--- a/lib/librte_ip_frag/rte_ipv6_reassembly.c\r\n+++ b/lib/librte_ip_frag/rte_ipv6_reassembly.c\r\n@@ -109,7 +109,7 @@ ipv6_frag_reassemble(const struct ip_frag_pkt *fp)\r\n \r\n \t/* update ipv6 header for the reassembled datagram */\r\n \tip_hdr = (struct ipv6_hdr *) (rte_pktmbuf_mtod(m, uint8_t *) +\r\n-\t\t\t\t\t\t\t\t  m->pkt.vlan_macip.f.l2_len);\r\n+\t\t\t\t\t\t\t\t  m->vlan_macip.f.l2_len);\r\n \r\n \tip_hdr->payload_len = rte_cpu_to_be_16(payload_len);\r\n \r\n@@ -120,7 +120,7 @@ ipv6_frag_reassemble(const struct ip_frag_pkt *fp)\r\n \t * other headers, so we assume there are no other headers and thus update\r\n \t * the main IPv6 header instead.\r\n \t */\r\n-\tmove_len = m->pkt.vlan_macip.f.l2_len + m->pkt.vlan_macip.f.l3_len -\r\n+\tmove_len = m->vlan_macip.f.l2_len + m->vlan_macip.f.l3_len -\r\n 
\t\t\tsizeof(*frag_hdr);\r\n \tfrag_hdr = (struct ipv6_extension_fragment *) (ip_hdr + 1);\r\n \tip_hdr->proto = frag_hdr->next_header;\r\ndiff --git a/lib/librte_mbuf/rte_mbuf.c b/lib/librte_mbuf/rte_mbuf.c\r\nindex 39865c1..967e4d8 100644\r\n--- a/lib/librte_mbuf/rte_mbuf.c\r\n+++ b/lib/librte_mbuf/rte_mbuf.c\r\n@@ -117,12 +117,12 @@ rte_pktmbuf_init(struct rte_mempool *mp,\r\n \tm->buf_len = (uint16_t)buf_len;\r\n \r\n \t/* keep some headroom between start of buffer and data */\r\n-\tm->pkt.data = (char*) m->buf_addr + RTE_MIN(RTE_PKTMBUF_HEADROOM, m->buf_len);\r\n+\tm->data = (char*) m->buf_addr + RTE_MIN(RTE_PKTMBUF_HEADROOM, m->buf_len);\r\n \r\n \t/* init some constant fields */\r\n \tm->pool = mp;\r\n-\tm->pkt.nb_segs = 1;\r\n-\tm->pkt.in_port = 0xff;\r\n+\tm->nb_segs = 1;\r\n+\tm->in_port = 0xff;\r\n }\r\n \r\n /* do some sanity checks on a mbuf: panic if it fails */\r\n@@ -153,10 +153,10 @@ rte_mbuf_sanity_check(const struct rte_mbuf *m, int is_header)\r\n \tif (is_header == 0)\r\n \t\treturn;\r\n \r\n-\tnb_segs = m->pkt.nb_segs;\r\n+\tnb_segs = m->nb_segs;\r\n \tm_seg = m;\r\n \twhile (m_seg && nb_segs != 0) {\r\n-\t\tm_seg = m_seg->pkt.next;\r\n+\t\tm_seg = m_seg->next;\r\n \t\tnb_segs --;\r\n \t}\r\n \tif (nb_segs != 0)\r\n@@ -175,22 +175,22 @@ rte_pktmbuf_dump(FILE *f, const struct rte_mbuf *m, unsigned dump_len)\r\n \tfprintf(f, \"dump mbuf at 0x%p, phys=%\"PRIx64\", buf_len=%u\\n\",\r\n \t       m, (uint64_t)m->buf_physaddr, (unsigned)m->buf_len);\r\n \tfprintf(f, \"  pkt_len=%\"PRIu32\", ol_flags=%\"PRIx16\", nb_segs=%u, \"\r\n-\t       \"in_port=%u\\n\", m->pkt.pkt_len, m->ol_flags,\r\n-\t       (unsigned)m->pkt.nb_segs, (unsigned)m->pkt.in_port);\r\n-\tnb_segs = m->pkt.nb_segs;\r\n+\t       \"in_port=%u\\n\", m->pkt_len, m->ol_flags,\r\n+\t       (unsigned)m->nb_segs, (unsigned)m->in_port);\r\n+\tnb_segs = m->nb_segs;\r\n \r\n \twhile (m && nb_segs != 0) {\r\n \t\t__rte_mbuf_sanity_check(m, 0);\r\n \r\n \t\tfprintf(f, \"  segment at 0x%p, 
data=0x%p, data_len=%u\\n\",\r\n-\t\t       m, m->pkt.data, (unsigned)m->pkt.data_len);\r\n+\t\t       m, m->data, (unsigned)m->data_len);\r\n \t\tlen = dump_len;\r\n-\t\tif (len > m->pkt.data_len)\r\n-\t\t\tlen = m->pkt.data_len;\r\n+\t\tif (len > m->data_len)\r\n+\t\t\tlen = m->data_len;\r\n \t\tif (len != 0)\r\n-\t\t\trte_hexdump(f, NULL, m->pkt.data, len);\r\n+\t\t\trte_hexdump(f, NULL, m->data, len);\r\n \t\tdump_len -= len;\r\n-\t\tm = m->pkt.next;\r\n+\t\tm = m->next;\r\n \t\tnb_segs --;\r\n \t}\r\n }\r\ndiff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h\r\nindex cdb92dc..d5b9a50 100644\r\n--- a/lib/librte_mbuf/rte_mbuf.h\r\n+++ b/lib/librte_mbuf/rte_mbuf.h\r\n@@ -133,32 +133,6 @@ union rte_vlan_macip {\r\n #define TX_MACIP_LEN_CMP_MASK   (TX_MAC_LEN_CMP_MASK | TX_IP_LEN_CMP_MASK)\r\n \r\n /**\r\n- * A packet message buffer.\r\n- */\r\n-struct rte_pktmbuf {\r\n-\t/* valid for any segment */\r\n-\tstruct rte_mbuf *next;  /**< Next segment of scattered packet. */\r\n-\tvoid* data;             /**< Start address of data in segment buffer. */\r\n-\tuint16_t data_len;      /**< Amount of data in segment buffer. */\r\n-\r\n-\t/* these fields are valid for first segment only */\r\n-\tuint8_t nb_segs;        /**< Number of segments. */\r\n-\tuint8_t in_port;        /**< Input port. */\r\n-\tuint32_t pkt_len;       /**< Total pkt len: sum of all segment data_len. 
*/\r\n-\r\n-\t/* offload features */\r\n-\tunion rte_vlan_macip vlan_macip;\r\n-\tunion {\r\n-\t\tuint32_t rss;       /**< RSS hash result if RSS enabled */\r\n-\t\tstruct {\r\n-\t\t\tuint16_t hash;\r\n-\t\t\tuint16_t id;\r\n-\t\t} fdir;             /**< Filter identifier if FDIR enabled */\r\n-\t\tuint32_t sched;     /**< Hierarchical scheduler */\r\n-\t} hash;                 /**< hash information */\r\n-};\r\n-\r\n-/**\r\n  * The generic rte_mbuf, containing a packet mbuf.\r\n  */\r\n struct rte_mbuf {\r\n@@ -185,7 +159,26 @@ struct rte_mbuf {\r\n \tuint16_t reserved;            /**< Unused field. Required for padding */\r\n \tuint16_t ol_flags;            /**< Offload features. */\r\n \r\n-\tstruct rte_pktmbuf pkt;\r\n+\t/* valid for any segment */\r\n+\tstruct rte_mbuf *next;  /**< Next segment of scattered packet. */\r\n+\tvoid* data;             /**< Start address of data in segment buffer. */\r\n+\tuint16_t data_len;      /**< Amount of data in segment buffer. */\r\n+\r\n+\t/* these fields are valid for first segment only */\r\n+\tuint8_t nb_segs;        /**< Number of segments. */\r\n+\tuint8_t in_port;        /**< Input port. */\r\n+\tuint32_t pkt_len;       /**< Total pkt len: sum of all segment data_len. 
*/\r\n+\r\n+\t/* offload features, valid for first segment only */\r\n+\tunion rte_vlan_macip vlan_macip;\r\n+\tunion {\r\n+\t\tuint32_t rss;       /**< RSS hash result if RSS enabled */\r\n+\t\tstruct {\r\n+\t\t\tuint16_t hash;\r\n+\t\t\tuint16_t id;\r\n+\t\t} fdir;             /**< Filter identifier if FDIR enabled */\r\n+\t\tuint32_t sched;     /**< Hierarchical scheduler */\r\n+\t} hash;                 /**< hash information */\r\n \r\n \tunion {\r\n \t\tuint8_t metadata[0];\r\n@@ -478,7 +471,7 @@ void rte_ctrlmbuf_init(struct rte_mempool *mp, void *opaque_arg,\r\n  * @param m\r\n  *   The control mbuf.\r\n  */\r\n-#define rte_ctrlmbuf_data(m) ((m)->pkt.data)\r\n+#define rte_ctrlmbuf_data(m) ((m)->data)\r\n \r\n /**\r\n  * A macro that returns the length of the carried data.\r\n@@ -545,18 +538,18 @@ static inline void rte_pktmbuf_reset(struct rte_mbuf *m)\r\n {\r\n \tuint32_t buf_ofs;\r\n \r\n-\tm->pkt.next = NULL;\r\n-\tm->pkt.pkt_len = 0;\r\n-\tm->pkt.vlan_macip.data = 0;\r\n-\tm->pkt.nb_segs = 1;\r\n-\tm->pkt.in_port = 0xff;\r\n+\tm->next = NULL;\r\n+\tm->pkt_len = 0;\r\n+\tm->vlan_macip.data = 0;\r\n+\tm->nb_segs = 1;\r\n+\tm->in_port = 0xff;\r\n \r\n \tm->ol_flags = 0;\r\n \tbuf_ofs = (RTE_PKTMBUF_HEADROOM <= m->buf_len) ?\r\n \t\t\tRTE_PKTMBUF_HEADROOM : m->buf_len;\r\n-\tm->pkt.data = (char*) m->buf_addr + buf_ofs;\r\n+\tm->data = (char*) m->buf_addr + buf_ofs;\r\n \r\n-\tm->pkt.data_len = 0;\r\n+\tm->data_len = 0;\r\n \t__rte_mbuf_sanity_check(m, 1);\r\n }\r\n \r\n@@ -610,11 +603,16 @@ static inline void rte_pktmbuf_attach(struct rte_mbuf *mi, struct rte_mbuf *md)\r\n \tmi->buf_addr = md->buf_addr;\r\n \tmi->buf_len = md->buf_len;\r\n \r\n-\tmi->pkt = md->pkt;\r\n+\tmi->next = md->next;\r\n+\tmi->data = md->data;\r\n+\tmi->data_len = md->data_len;\r\n+\tmi->in_port = md->in_port;\r\n+\tmi->vlan_macip = md->vlan_macip;\r\n+\tmi->hash = md->hash;\r\n \r\n-\tmi->pkt.next = NULL;\r\n-\tmi->pkt.pkt_len = mi->pkt.data_len;\r\n-\tmi->pkt.nb_segs = 
1;\r\n+\tmi->next = NULL;\r\n+\tmi->pkt_len = mi->data_len;\r\n+\tmi->nb_segs = 1;\r\n \tmi->ol_flags = md->ol_flags;\r\n \r\n \t__rte_mbuf_sanity_check(mi, 1);\r\n@@ -644,9 +642,9 @@ static inline void rte_pktmbuf_detach(struct rte_mbuf *m)\r\n \r\n \tbuf_ofs = (RTE_PKTMBUF_HEADROOM <= m->buf_len) ?\r\n \t\t\tRTE_PKTMBUF_HEADROOM : m->buf_len;\r\n-\tm->pkt.data = (char*) m->buf_addr + buf_ofs;\r\n+\tm->data = (char*) m->buf_addr + buf_ofs;\r\n \r\n-\tm->pkt.data_len = 0;\r\n+\tm->data_len = 0;\r\n }\r\n \r\n #endif /* RTE_MBUF_REFCNT */\r\n@@ -713,7 +711,7 @@ static inline void rte_pktmbuf_free(struct rte_mbuf *m)\r\n \t__rte_mbuf_sanity_check(m, 1);\r\n \r\n \twhile (m != NULL) {\r\n-\t\tm_next = m->pkt.next;\r\n+\t\tm_next = m->next;\r\n \t\trte_pktmbuf_free_seg(m);\r\n \t\tm = m_next;\r\n \t}\r\n@@ -749,21 +747,21 @@ static inline struct rte_mbuf *rte_pktmbuf_clone(struct rte_mbuf *md,\r\n \t\treturn (NULL);\r\n \r\n \tmi = mc;\r\n-\tprev = &mi->pkt.next;\r\n-\tpktlen = md->pkt.pkt_len;\r\n+\tprev = &mi->next;\r\n+\tpktlen = md->pkt_len;\r\n \tnseg = 0;\r\n \r\n \tdo {\r\n \t\tnseg++;\r\n \t\trte_pktmbuf_attach(mi, md);\r\n \t\t*prev = mi;\r\n-\t\tprev = &mi->pkt.next;\r\n-\t} while ((md = md->pkt.next) != NULL &&\r\n+\t\tprev = &mi->next;\r\n+\t} while ((md = md->next) != NULL &&\r\n \t    (mi = rte_pktmbuf_alloc(mp)) != NULL);\r\n \r\n \t*prev = NULL;\r\n-\tmc->pkt.nb_segs = nseg;\r\n-\tmc->pkt.pkt_len = pktlen;\r\n+\tmc->nb_segs = nseg;\r\n+\tmc->pkt_len = pktlen;\r\n \r\n \t/* Allocation of new indirect segment failed */\r\n \tif (unlikely (mi == NULL)) {\r\n@@ -792,7 +790,7 @@ static inline void rte_pktmbuf_refcnt_update(struct rte_mbuf *m, int16_t v)\r\n \r\n \tdo {\r\n \t\trte_mbuf_refcnt_update(m, v);\r\n-\t} while ((m = m->pkt.next) != NULL);\r\n+\t} while ((m = m->next) != NULL);\r\n }\r\n \r\n #endif /* RTE_MBUF_REFCNT */\r\n@@ -808,7 +806,7 @@ static inline void rte_pktmbuf_refcnt_update(struct rte_mbuf *m, int16_t v)\r\n static inline uint16_t 
rte_pktmbuf_headroom(const struct rte_mbuf *m)\r\n {\r\n \t__rte_mbuf_sanity_check(m, 1);\r\n-\treturn (uint16_t) ((char*) m->pkt.data - (char*) m->buf_addr);\r\n+\treturn (uint16_t) ((char*) m->data - (char*) m->buf_addr);\r\n }\r\n \r\n /**\r\n@@ -823,7 +821,7 @@ static inline uint16_t rte_pktmbuf_tailroom(const struct rte_mbuf *m)\r\n {\r\n \t__rte_mbuf_sanity_check(m, 1);\r\n \treturn (uint16_t)(m->buf_len - rte_pktmbuf_headroom(m) -\r\n-\t\t\t  m->pkt.data_len);\r\n+\t\t\t  m->data_len);\r\n }\r\n \r\n /**\r\n@@ -839,8 +837,8 @@ static inline struct rte_mbuf *rte_pktmbuf_lastseg(struct rte_mbuf *m)\r\n \tstruct rte_mbuf *m2 = (struct rte_mbuf *)m;\r\n \r\n \t__rte_mbuf_sanity_check(m, 1);\r\n-\twhile (m2->pkt.next != NULL)\r\n-\t\tm2 = m2->pkt.next;\r\n+\twhile (m2->next != NULL)\r\n+\t\tm2 = m2->next;\r\n \treturn m2;\r\n }\r\n \r\n@@ -856,7 +854,7 @@ static inline struct rte_mbuf *rte_pktmbuf_lastseg(struct rte_mbuf *m)\r\n  * @param t\r\n  *   The type to cast the result into.\r\n  */\r\n-#define rte_pktmbuf_mtod(m, t) ((t)((m)->pkt.data))\r\n+#define rte_pktmbuf_mtod(m, t) ((t)((m)->data))\r\n \r\n /**\r\n  * A macro that returns the length of the packet.\r\n@@ -866,7 +864,7 @@ static inline struct rte_mbuf *rte_pktmbuf_lastseg(struct rte_mbuf *m)\r\n  * @param m\r\n  *   The packet mbuf.\r\n  */\r\n-#define rte_pktmbuf_pkt_len(m) ((m)->pkt.pkt_len)\r\n+#define rte_pktmbuf_pkt_len(m) ((m)->pkt_len)\r\n \r\n /**\r\n  * A macro that returns the length of the segment.\r\n@@ -876,7 +874,7 @@ static inline struct rte_mbuf *rte_pktmbuf_lastseg(struct rte_mbuf *m)\r\n  * @param m\r\n  *   The packet mbuf.\r\n  */\r\n-#define rte_pktmbuf_data_len(m) ((m)->pkt.data_len)\r\n+#define rte_pktmbuf_data_len(m) ((m)->data_len)\r\n \r\n /**\r\n  * Prepend len bytes to an mbuf data area.\r\n@@ -901,11 +899,11 @@ static inline char *rte_pktmbuf_prepend(struct rte_mbuf *m,\r\n \tif (unlikely(len > rte_pktmbuf_headroom(m)))\r\n \t\treturn NULL;\r\n \r\n-\tm->pkt.data = 
(char*) m->pkt.data - len;\r\n-\tm->pkt.data_len = (uint16_t)(m->pkt.data_len + len);\r\n-\tm->pkt.pkt_len  = (m->pkt.pkt_len + len);\r\n+\tm->data = (char*) m->data - len;\r\n+\tm->data_len = (uint16_t)(m->data_len + len);\r\n+\tm->pkt_len  = (m->pkt_len + len);\r\n \r\n-\treturn (char*) m->pkt.data;\r\n+\treturn (char*) m->data;\r\n }\r\n \r\n /**\r\n@@ -934,9 +932,9 @@ static inline char *rte_pktmbuf_append(struct rte_mbuf *m, uint16_t len)\r\n \tif (unlikely(len > rte_pktmbuf_tailroom(m_last)))\r\n \t\treturn NULL;\r\n \r\n-\ttail = (char*) m_last->pkt.data + m_last->pkt.data_len;\r\n-\tm_last->pkt.data_len = (uint16_t)(m_last->pkt.data_len + len);\r\n-\tm->pkt.pkt_len  = (m->pkt.pkt_len + len);\r\n+\ttail = (char*) m_last->data + m_last->data_len;\r\n+\tm_last->data_len = (uint16_t)(m_last->data_len + len);\r\n+\tm->pkt_len  = (m->pkt_len + len);\r\n \treturn (char*) tail;\r\n }\r\n \r\n@@ -958,13 +956,13 @@ static inline char *rte_pktmbuf_adj(struct rte_mbuf *m, uint16_t len)\r\n {\r\n \t__rte_mbuf_sanity_check(m, 1);\r\n \r\n-\tif (unlikely(len > m->pkt.data_len))\r\n+\tif (unlikely(len > m->data_len))\r\n \t\treturn NULL;\r\n \r\n-\tm->pkt.data_len = (uint16_t)(m->pkt.data_len - len);\r\n-\tm->pkt.data = ((char*) m->pkt.data + len);\r\n-\tm->pkt.pkt_len  = (m->pkt.pkt_len - len);\r\n-\treturn (char*) m->pkt.data;\r\n+\tm->data_len = (uint16_t)(m->data_len - len);\r\n+\tm->data = ((char*) m->data + len);\r\n+\tm->pkt_len  = (m->pkt_len - len);\r\n+\treturn (char*) m->data;\r\n }\r\n \r\n /**\r\n@@ -988,11 +986,11 @@ static inline int rte_pktmbuf_trim(struct rte_mbuf *m, uint16_t len)\r\n \t__rte_mbuf_sanity_check(m, 1);\r\n \r\n \tm_last = rte_pktmbuf_lastseg(m);\r\n-\tif (unlikely(len > m_last->pkt.data_len))\r\n+\tif (unlikely(len > m_last->data_len))\r\n \t\treturn -1;\r\n \r\n-\tm_last->pkt.data_len = (uint16_t)(m_last->pkt.data_len - len);\r\n-\tm->pkt.pkt_len  = (m->pkt.pkt_len - len);\r\n+\tm_last->data_len = (uint16_t)(m_last->data_len - 
len);\r\n+\tm->pkt_len  = (m->pkt_len - len);\r\n \treturn 0;\r\n }\r\n \r\n@@ -1008,7 +1006,7 @@ static inline int rte_pktmbuf_trim(struct rte_mbuf *m, uint16_t len)\r\n static inline int rte_pktmbuf_is_contiguous(const struct rte_mbuf *m)\r\n {\r\n \t__rte_mbuf_sanity_check(m, 1);\r\n-\treturn !!(m->pkt.nb_segs == 1);\r\n+\treturn !!(m->nb_segs == 1);\r\n }\r\n \r\n /**\r\ndiff --git a/lib/librte_pmd_bond/rte_eth_bond_pmd.c b/lib/librte_pmd_bond/rte_eth_bond_pmd.c\r\nindex d72d6ed..5979ce5 100644\r\n--- a/lib/librte_pmd_bond/rte_eth_bond_pmd.c\r\n+++ b/lib/librte_pmd_bond/rte_eth_bond_pmd.c\r\n@@ -198,14 +198,14 @@ xmit_slave_hash(const struct rte_mbuf *buf, uint8_t slave_count, uint8_t policy)\r\n \r\n \tswitch (policy) {\r\n \tcase BALANCE_XMIT_POLICY_LAYER2:\r\n-\t\teth_hdr = (struct ether_hdr *)buf->pkt.data;\r\n+\t\teth_hdr = (struct ether_hdr *)buf->data;\r\n \r\n \t\thash = ether_hash(eth_hdr);\r\n \t\thash ^= hash >> 8;\r\n \t\treturn hash % slave_count;\r\n \r\n \tcase BALANCE_XMIT_POLICY_LAYER23:\r\n-\t\teth_hdr = (struct ether_hdr *)buf->pkt.data;\r\n+\t\teth_hdr = (struct ether_hdr *)buf->data;\r\n \r\n \t\tif (buf->ol_flags & PKT_RX_VLAN_PKT)\r\n \t\t\teth_offset = sizeof(struct ether_hdr) + sizeof(struct vlan_hdr);\r\ndiff --git a/lib/librte_pmd_e1000/em_rxtx.c b/lib/librte_pmd_e1000/em_rxtx.c\r\nindex 3304f50..058e1bd 100644\r\n--- a/lib/librte_pmd_e1000/em_rxtx.c\r\n+++ b/lib/librte_pmd_e1000/em_rxtx.c\r\n@@ -91,7 +91,7 @@ rte_rxmbuf_alloc(struct rte_mempool *mp)\r\n \r\n #define RTE_MBUF_DATA_DMA_ADDR(mb)             \\\r\n \t(uint64_t) ((mb)->buf_physaddr +       \\\r\n-\t(uint64_t) ((char *)((mb)->pkt.data) - (char *)(mb)->buf_addr))\r\n+\t(uint64_t) ((char *)((mb)->data) - (char *)(mb)->buf_addr))\r\n \r\n #define RTE_MBUF_DATA_DMA_ADDR_DEFAULT(mb) \\\r\n \t(uint64_t) ((mb)->buf_physaddr + RTE_PKTMBUF_HEADROOM)\r\n@@ -421,7 +421,7 @@ eth_em_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,\r\n \t\ttx_ol_req = (uint16_t)(ol_flags & 
(PKT_TX_IP_CKSUM |\r\n \t\t\t\t\t\t\tPKT_TX_L4_MASK));\r\n \t\tif (tx_ol_req) {\r\n-\t\t\thdrlen = tx_pkt->pkt.vlan_macip;\r\n+\t\t\thdrlen = tx_pkt->vlan_macip;\r\n \t\t\t/* If new context to be built or reuse the exist ctx. */\r\n \t\t\tctx = what_ctx_update(txq, tx_ol_req, hdrlen);\r\n \r\n@@ -434,7 +434,7 @@ eth_em_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,\r\n \t\t * This will always be the number of segments + the number of\r\n \t\t * Context descriptors required to transmit the packet\r\n \t\t */\r\n-\t\tnb_used = (uint16_t)(tx_pkt->pkt.nb_segs + new_ctx);\r\n+\t\tnb_used = (uint16_t)(tx_pkt->nb_segs + new_ctx);\r\n \r\n \t\t/*\r\n \t\t * The number of descriptors that must be allocated for a\r\n@@ -454,7 +454,7 @@ eth_em_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,\r\n \t\t\t\" tx_first=%u tx_last=%u\\n\",\r\n \t\t\t(unsigned) txq->port_id,\r\n \t\t\t(unsigned) txq->queue_id,\r\n-\t\t\t(unsigned) tx_pkt->pkt.pkt_len,\r\n+\t\t\t(unsigned) tx_pkt->pkt_len,\r\n \t\t\t(unsigned) tx_id,\r\n \t\t\t(unsigned) tx_last);\r\n \r\n@@ -516,7 +516,7 @@ eth_em_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,\r\n \t\t/* Set VLAN Tag offload fields. 
*/\r\n \t\tif (ol_flags & PKT_TX_VLAN_PKT) {\r\n \t\t\tcmd_type_len |= E1000_TXD_CMD_VLE;\r\n-\t\t\tpopts_spec = tx_pkt->pkt.vlan_macip.f.vlan_tci <<\r\n+\t\t\tpopts_spec = tx_pkt->vlan_macip.f.vlan_tci <<\r\n \t\t\t\tE1000_TXD_VLAN_SHIFT;\r\n \t\t}\r\n \r\n@@ -566,7 +566,7 @@ eth_em_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,\r\n \t\t\t/*\r\n \t\t\t * Set up Transmit Data Descriptor.\r\n \t\t\t */\r\n-\t\t\tslen = m_seg->pkt.data_len;\r\n+\t\t\tslen = m_seg->data_len;\r\n \t\t\tbuf_dma_addr = RTE_MBUF_DATA_DMA_ADDR(m_seg);\r\n \r\n \t\t\ttxd->buffer_addr = rte_cpu_to_le_64(buf_dma_addr);\r\n@@ -576,7 +576,7 @@ eth_em_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,\r\n \t\t\ttxe->last_id = tx_last;\r\n \t\t\ttx_id = txe->next_id;\r\n \t\t\ttxe = txn;\r\n-\t\t\tm_seg = m_seg->pkt.next;\r\n+\t\t\tm_seg = m_seg->next;\r\n \t\t} while (m_seg != NULL);\r\n \r\n \t\t/*\r\n@@ -771,20 +771,20 @@ eth_em_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,\r\n \t\t */\r\n \t\tpkt_len = (uint16_t) (rte_le_to_cpu_16(rxd.length) -\r\n \t\t\t\trxq->crc_len);\r\n-\t\trxm->pkt.data = (char*) rxm->buf_addr + RTE_PKTMBUF_HEADROOM;\r\n-\t\trte_packet_prefetch(rxm->pkt.data);\r\n-\t\trxm->pkt.nb_segs = 1;\r\n-\t\trxm->pkt.next = NULL;\r\n-\t\trxm->pkt.pkt_len = pkt_len;\r\n-\t\trxm->pkt.data_len = pkt_len;\r\n-\t\trxm->pkt.in_port = rxq->port_id;\r\n+\t\trxm->data = (char*) rxm->buf_addr + RTE_PKTMBUF_HEADROOM;\r\n+\t\trte_packet_prefetch(rxm->data);\r\n+\t\trxm->nb_segs = 1;\r\n+\t\trxm->next = NULL;\r\n+\t\trxm->pkt_len = pkt_len;\r\n+\t\trxm->data_len = pkt_len;\r\n+\t\trxm->in_port = rxq->port_id;\r\n \r\n \t\trxm->ol_flags = rx_desc_status_to_pkt_flags(status);\r\n \t\trxm->ol_flags = (uint16_t)(rxm->ol_flags |\r\n \t\t\t\trx_desc_error_to_pkt_flags(rxd.errors));\r\n \r\n \t\t/* Only valid if PKT_RX_VLAN_PKT set in pkt_flags */\r\n-\t\trxm->pkt.vlan_macip.f.vlan_tci = rte_le_to_cpu_16(rxd.special);\r\n+\t\trxm->vlan_macip.f.vlan_tci = 
rte_le_to_cpu_16(rxd.special);\r\n \r\n \t\t/*\r\n \t\t * Store the mbuf address into the next entry of the array\r\n@@ -940,8 +940,8 @@ eth_em_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,\r\n \t\t * Set data length & data buffer address of mbuf.\r\n \t\t */\r\n \t\tdata_len = rte_le_to_cpu_16(rxd.length);\r\n-\t\trxm->pkt.data_len = data_len;\r\n-\t\trxm->pkt.data = (char*) rxm->buf_addr + RTE_PKTMBUF_HEADROOM;\r\n+\t\trxm->data_len = data_len;\r\n+\t\trxm->data = (char*) rxm->buf_addr + RTE_PKTMBUF_HEADROOM;\r\n \r\n \t\t/*\r\n \t\t * If this is the first buffer of the received packet,\r\n@@ -953,12 +953,12 @@ eth_em_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,\r\n \t\t */\r\n \t\tif (first_seg == NULL) {\r\n \t\t\tfirst_seg = rxm;\r\n-\t\t\tfirst_seg->pkt.pkt_len = data_len;\r\n-\t\t\tfirst_seg->pkt.nb_segs = 1;\r\n+\t\t\tfirst_seg->pkt_len = data_len;\r\n+\t\t\tfirst_seg->nb_segs = 1;\r\n \t\t} else {\r\n-\t\t\tfirst_seg->pkt.pkt_len += data_len;\r\n-\t\t\tfirst_seg->pkt.nb_segs++;\r\n-\t\t\tlast_seg->pkt.next = rxm;\r\n+\t\t\tfirst_seg->pkt_len += data_len;\r\n+\t\t\tfirst_seg->nb_segs++;\r\n+\t\t\tlast_seg->next = rxm;\r\n \t\t}\r\n \r\n \t\t/*\r\n@@ -981,18 +981,18 @@ eth_em_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,\r\n \t\t *     mbuf, subtract the length of that CRC part from the\r\n \t\t *     data length of the previous mbuf.\r\n \t\t */\r\n-\t\trxm->pkt.next = NULL;\r\n+\t\trxm->next = NULL;\r\n \t\tif (unlikely(rxq->crc_len > 0)) {\r\n-\t\t\tfirst_seg->pkt.pkt_len -= ETHER_CRC_LEN;\r\n+\t\t\tfirst_seg->pkt_len -= ETHER_CRC_LEN;\r\n \t\t\tif (data_len <= ETHER_CRC_LEN) {\r\n \t\t\t\trte_pktmbuf_free_seg(rxm);\r\n-\t\t\t\tfirst_seg->pkt.nb_segs--;\r\n-\t\t\t\tlast_seg->pkt.data_len = (uint16_t)\r\n-\t\t\t\t\t(last_seg->pkt.data_len -\r\n+\t\t\t\tfirst_seg->nb_segs--;\r\n+\t\t\t\tlast_seg->data_len = (uint16_t)\r\n+\t\t\t\t\t(last_seg->data_len -\r\n \t\t\t\t\t (ETHER_CRC_LEN - 
data_len));\r\n-\t\t\t\tlast_seg->pkt.next = NULL;\r\n+\t\t\t\tlast_seg->next = NULL;\r\n \t\t\t} else\r\n-\t\t\t\trxm->pkt.data_len =\r\n+\t\t\t\trxm->data_len =\r\n \t\t\t\t\t(uint16_t) (data_len - ETHER_CRC_LEN);\r\n \t\t}\r\n \r\n@@ -1003,17 +1003,17 @@ eth_em_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,\r\n \t\t *      - IP checksum flag,\r\n \t\t *      - error flags.\r\n \t\t */\r\n-\t\tfirst_seg->pkt.in_port = rxq->port_id;\r\n+\t\tfirst_seg->in_port = rxq->port_id;\r\n \r\n \t\tfirst_seg->ol_flags = rx_desc_status_to_pkt_flags(status);\r\n \t\tfirst_seg->ol_flags = (uint16_t)(first_seg->ol_flags |\r\n \t\t\t\t\trx_desc_error_to_pkt_flags(rxd.errors));\r\n \r\n \t\t/* Only valid if PKT_RX_VLAN_PKT set in pkt_flags */\r\n-\t\trxm->pkt.vlan_macip.f.vlan_tci = rte_le_to_cpu_16(rxd.special);\r\n+\t\trxm->vlan_macip.f.vlan_tci = rte_le_to_cpu_16(rxd.special);\r\n \r\n \t\t/* Prefetch data of first segment, if configured to do so. */\r\n-\t\trte_packet_prefetch(first_seg->pkt.data);\r\n+\t\trte_packet_prefetch(first_seg->data);\r\n \r\n \t\t/*\r\n \t\t * Store the mbuf address into the next entry of the array\r\ndiff --git a/lib/librte_pmd_e1000/igb_rxtx.c b/lib/librte_pmd_e1000/igb_rxtx.c\r\nindex b0112be..99bb9d9 100644\r\n--- a/lib/librte_pmd_e1000/igb_rxtx.c\r\n+++ b/lib/librte_pmd_e1000/igb_rxtx.c\r\n@@ -96,7 +96,7 @@ rte_rxmbuf_alloc(struct rte_mempool *mp)\r\n \r\n #define RTE_MBUF_DATA_DMA_ADDR(mb) \\\r\n \t(uint64_t) ((mb)->buf_physaddr +\t\t   \\\r\n-\t\t\t(uint64_t) ((char *)((mb)->pkt.data) -     \\\r\n+\t\t\t(uint64_t) ((char *)((mb)->data) -     \\\r\n \t\t\t\t(char *)(mb)->buf_addr))\r\n \r\n #define RTE_MBUF_DATA_DMA_ADDR_DEFAULT(mb) \\\r\n@@ -365,7 +365,7 @@ eth_igb_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,\r\n \r\n \tfor (nb_tx = 0; nb_tx < nb_pkts; nb_tx++) {\r\n \t\ttx_pkt = *tx_pkts++;\r\n-\t\tpkt_len = tx_pkt->pkt.pkt_len;\r\n+\t\tpkt_len = tx_pkt->pkt_len;\r\n \r\n \t\tRTE_MBUF_PREFETCH_TO_FREE(txe->mbuf);\r\n 
\r\n@@ -377,10 +377,10 @@ eth_igb_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,\r\n \t\t * for the packet, starting from the current position (tx_id)\r\n \t\t * in the ring.\r\n \t\t */\r\n-\t\ttx_last = (uint16_t) (tx_id + tx_pkt->pkt.nb_segs - 1);\r\n+\t\ttx_last = (uint16_t) (tx_id + tx_pkt->nb_segs - 1);\r\n \r\n \t\tol_flags = tx_pkt->ol_flags;\r\n-\t\tvlan_macip_lens = tx_pkt->pkt.vlan_macip.data;\r\n+\t\tvlan_macip_lens = tx_pkt->vlan_macip.data;\r\n \t\ttx_ol_req = (uint16_t)(ol_flags & PKT_TX_OFFLOAD_MASK);\r\n \r\n \t\t/* If a Context Descriptor need be built . */\r\n@@ -527,7 +527,7 @@ eth_igb_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,\r\n \t\t\t/*\r\n \t\t\t * Set up transmit descriptor.\r\n \t\t\t */\r\n-\t\t\tslen = (uint16_t) m_seg->pkt.data_len;\r\n+\t\t\tslen = (uint16_t) m_seg->data_len;\r\n \t\t\tbuf_dma_addr = RTE_MBUF_DATA_DMA_ADDR(m_seg);\r\n \t\t\ttxd->read.buffer_addr =\r\n \t\t\t\trte_cpu_to_le_64(buf_dma_addr);\r\n@@ -538,7 +538,7 @@ eth_igb_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,\r\n \t\t\ttxe->last_id = tx_last;\r\n \t\t\ttx_id = txe->next_id;\r\n \t\t\ttxe = txn;\r\n-\t\t\tm_seg = m_seg->pkt.next;\r\n+\t\t\tm_seg = m_seg->next;\r\n \t\t} while (m_seg != NULL);\r\n \r\n \t\t/*\r\n@@ -753,18 +753,18 @@ eth_igb_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,\r\n \t\t */\r\n \t\tpkt_len = (uint16_t) (rte_le_to_cpu_16(rxd.wb.upper.length) -\r\n \t\t\t\t      rxq->crc_len);\r\n-\t\trxm->pkt.data = (char*) rxm->buf_addr + RTE_PKTMBUF_HEADROOM;\r\n-\t\trte_packet_prefetch(rxm->pkt.data);\r\n-\t\trxm->pkt.nb_segs = 1;\r\n-\t\trxm->pkt.next = NULL;\r\n-\t\trxm->pkt.pkt_len = pkt_len;\r\n-\t\trxm->pkt.data_len = pkt_len;\r\n-\t\trxm->pkt.in_port = rxq->port_id;\r\n-\r\n-\t\trxm->pkt.hash.rss = rxd.wb.lower.hi_dword.rss;\r\n+\t\trxm->data = (char*) rxm->buf_addr + RTE_PKTMBUF_HEADROOM;\r\n+\t\trte_packet_prefetch(rxm->data);\r\n+\t\trxm->nb_segs = 1;\r\n+\t\trxm->next = NULL;\r\n+\t\trxm->pkt_len = 
pkt_len;\r\n+\t\trxm->data_len = pkt_len;\r\n+\t\trxm->in_port = rxq->port_id;\r\n+\r\n+\t\trxm->hash.rss = rxd.wb.lower.hi_dword.rss;\r\n \t\thlen_type_rss = rte_le_to_cpu_32(rxd.wb.lower.lo_dword.data);\r\n \t\t/* Only valid if PKT_RX_VLAN_PKT set in pkt_flags */\r\n-\t\trxm->pkt.vlan_macip.f.vlan_tci =\r\n+\t\trxm->vlan_macip.f.vlan_tci =\r\n \t\t\trte_le_to_cpu_16(rxd.wb.upper.vlan);\r\n \r\n \t\tpkt_flags = rx_desc_hlen_type_rss_to_pkt_flags(hlen_type_rss);\r\n@@ -929,8 +929,8 @@ eth_igb_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,\r\n \t\t * Set data length & data buffer address of mbuf.\r\n \t\t */\r\n \t\tdata_len = rte_le_to_cpu_16(rxd.wb.upper.length);\r\n-\t\trxm->pkt.data_len = data_len;\r\n-\t\trxm->pkt.data = (char*) rxm->buf_addr + RTE_PKTMBUF_HEADROOM;\r\n+\t\trxm->data_len = data_len;\r\n+\t\trxm->data = (char*) rxm->buf_addr + RTE_PKTMBUF_HEADROOM;\r\n \r\n \t\t/*\r\n \t\t * If this is the first buffer of the received packet,\r\n@@ -942,12 +942,12 @@ eth_igb_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,\r\n \t\t */\r\n \t\tif (first_seg == NULL) {\r\n \t\t\tfirst_seg = rxm;\r\n-\t\t\tfirst_seg->pkt.pkt_len = data_len;\r\n-\t\t\tfirst_seg->pkt.nb_segs = 1;\r\n+\t\t\tfirst_seg->pkt_len = data_len;\r\n+\t\t\tfirst_seg->nb_segs = 1;\r\n \t\t} else {\r\n-\t\t\tfirst_seg->pkt.pkt_len += data_len;\r\n-\t\t\tfirst_seg->pkt.nb_segs++;\r\n-\t\t\tlast_seg->pkt.next = rxm;\r\n+\t\t\tfirst_seg->pkt_len += data_len;\r\n+\t\t\tfirst_seg->nb_segs++;\r\n+\t\t\tlast_seg->next = rxm;\r\n \t\t}\r\n \r\n \t\t/*\r\n@@ -970,18 +970,18 @@ eth_igb_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,\r\n \t\t *     mbuf, subtract the length of that CRC part from the\r\n \t\t *     data length of the previous mbuf.\r\n \t\t */\r\n-\t\trxm->pkt.next = NULL;\r\n+\t\trxm->next = NULL;\r\n \t\tif (unlikely(rxq->crc_len > 0)) {\r\n-\t\t\tfirst_seg->pkt.pkt_len -= ETHER_CRC_LEN;\r\n+\t\t\tfirst_seg->pkt_len -= ETHER_CRC_LEN;\r\n 
\t\t\tif (data_len <= ETHER_CRC_LEN) {\r\n \t\t\t\trte_pktmbuf_free_seg(rxm);\r\n-\t\t\t\tfirst_seg->pkt.nb_segs--;\r\n-\t\t\t\tlast_seg->pkt.data_len = (uint16_t)\r\n-\t\t\t\t\t(last_seg->pkt.data_len -\r\n+\t\t\t\tfirst_seg->nb_segs--;\r\n+\t\t\t\tlast_seg->data_len = (uint16_t)\r\n+\t\t\t\t\t(last_seg->data_len -\r\n \t\t\t\t\t (ETHER_CRC_LEN - data_len));\r\n-\t\t\t\tlast_seg->pkt.next = NULL;\r\n+\t\t\t\tlast_seg->next = NULL;\r\n \t\t\t} else\r\n-\t\t\t\trxm->pkt.data_len =\r\n+\t\t\t\trxm->data_len =\r\n \t\t\t\t\t(uint16_t) (data_len - ETHER_CRC_LEN);\r\n \t\t}\r\n \r\n@@ -994,14 +994,14 @@ eth_igb_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,\r\n \t\t *      - VLAN TCI, if any,\r\n \t\t *      - error flags.\r\n \t\t */\r\n-\t\tfirst_seg->pkt.in_port = rxq->port_id;\r\n-\t\tfirst_seg->pkt.hash.rss = rxd.wb.lower.hi_dword.rss;\r\n+\t\tfirst_seg->in_port = rxq->port_id;\r\n+\t\tfirst_seg->hash.rss = rxd.wb.lower.hi_dword.rss;\r\n \r\n \t\t/*\r\n \t\t * The vlan_tci field is only valid when PKT_RX_VLAN_PKT is\r\n \t\t * set in the pkt_flags field.\r\n \t\t */\r\n-\t\tfirst_seg->pkt.vlan_macip.f.vlan_tci =\r\n+\t\tfirst_seg->vlan_macip.f.vlan_tci =\r\n \t\t\trte_le_to_cpu_16(rxd.wb.upper.vlan);\r\n \t\thlen_type_rss = rte_le_to_cpu_32(rxd.wb.lower.lo_dword.data);\r\n \t\tpkt_flags = rx_desc_hlen_type_rss_to_pkt_flags(hlen_type_rss);\r\n@@ -1012,7 +1012,7 @@ eth_igb_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,\r\n \t\tfirst_seg->ol_flags = pkt_flags;\r\n \r\n \t\t/* Prefetch data of first segment, if configured to do so. 
*/\r\n-\t\trte_packet_prefetch(first_seg->pkt.data);\r\n+\t\trte_packet_prefetch(first_seg->data);\r\n \r\n \t\t/*\r\n \t\t * Store the mbuf address into the next entry of the array\r\ndiff --git a/lib/librte_pmd_i40e/i40e_rxtx.c b/lib/librte_pmd_i40e/i40e_rxtx.c\r\nindex 76ff019..faf6095 100644\r\n--- a/lib/librte_pmd_i40e/i40e_rxtx.c\r\n+++ b/lib/librte_pmd_i40e/i40e_rxtx.c\r\n@@ -79,7 +79,7 @@\r\n \r\n #define RTE_MBUF_DATA_DMA_ADDR(mb) \\\r\n \t((uint64_t)((mb)->buf_physaddr + \\\r\n-\t(uint64_t)((char *)((mb)->pkt.data) - \\\r\n+\t(uint64_t)((char *)((mb)->data) - \\\r\n \t(char *)(mb)->buf_addr)))\r\n \r\n static const struct rte_memzone *\r\n@@ -614,9 +614,9 @@ i40e_rx_scan_hw_ring(struct i40e_rx_queue *rxq)\r\n \t\t\t\t\t\tI40E_RXD_QW1_STATUS_SHIFT;\r\n \t\t\tpkt_len = ((qword1 & I40E_RXD_QW1_LENGTH_PBUF_MASK) >>\r\n \t\t\t\tI40E_RXD_QW1_LENGTH_PBUF_SHIFT) - rxq->crc_len;\r\n-\t\t\tmb->pkt.data_len = pkt_len;\r\n-\t\t\tmb->pkt.pkt_len = pkt_len;\r\n-\t\t\tmb->pkt.vlan_macip.f.vlan_tci = rx_status &\r\n+\t\t\tmb->data_len = pkt_len;\r\n+\t\t\tmb->pkt_len = pkt_len;\r\n+\t\t\tmb->vlan_macip.f.vlan_tci = rx_status &\r\n \t\t\t\t(1 << I40E_RX_DESC_STATUS_L2TAG1P_SHIFT) ?\r\n \t\t\trte_le_to_cpu_16(\\\r\n \t\t\t\trxdp[j].wb.qword0.lo_dword.l2tag1) : 0;\r\n@@ -625,7 +625,7 @@ i40e_rx_scan_hw_ring(struct i40e_rx_queue *rxq)\r\n \t\t\tpkt_flags |= i40e_rxd_ptype_to_pkt_flags(qword1);\r\n \t\t\tmb->ol_flags = pkt_flags;\r\n \t\t\tif (pkt_flags & PKT_RX_RSS_HASH)\r\n-\t\t\t\tmb->pkt.hash.rss = rte_le_to_cpu_32(\\\r\n+\t\t\t\tmb->hash.rss = rte_le_to_cpu_32(\\\r\n \t\t\t\t\trxdp->wb.qword0.hi_dword.rss);\r\n \t\t}\r\n \r\n@@ -687,10 +687,10 @@ i40e_rx_alloc_bufs(struct i40e_rx_queue *rxq)\r\n \tfor (i = 0; i < rxq->rx_free_thresh; i++) {\r\n \t\tmb = rxep[i].mbuf;\r\n \t\trte_mbuf_refcnt_set(mb, 1);\r\n-\t\tmb->pkt.next = NULL;\r\n-\t\tmb->pkt.data = (char *)mb->buf_addr + RTE_PKTMBUF_HEADROOM;\r\n-\t\tmb->pkt.nb_segs = 1;\r\n-\t\tmb->pkt.in_port = 
rxq->port_id;\r\n+\t\tmb->next = NULL;\r\n+\t\tmb->data = (char *)mb->buf_addr + RTE_PKTMBUF_HEADROOM;\r\n+\t\tmb->nb_segs = 1;\r\n+\t\tmb->in_port = rxq->port_id;\r\n \t\tdma_addr = rte_cpu_to_le_64(\\\r\n \t\t\tRTE_MBUF_DATA_DMA_ADDR_DEFAULT(mb));\r\n \t\trxdp[i].read.hdr_addr = dma_addr;\r\n@@ -845,15 +845,15 @@ i40e_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)\r\n \t\trx_packet_len = ((qword1 & I40E_RXD_QW1_LENGTH_PBUF_MASK) >>\r\n \t\t\t\tI40E_RXD_QW1_LENGTH_PBUF_SHIFT) - rxq->crc_len;\r\n \r\n-\t\trxm->pkt.data = (char *)rxm->buf_addr + RTE_PKTMBUF_HEADROOM;\r\n-\t\trte_prefetch0(rxm->pkt.data);\r\n-\t\trxm->pkt.nb_segs = 1;\r\n-\t\trxm->pkt.next = NULL;\r\n-\t\trxm->pkt.pkt_len = rx_packet_len;\r\n-\t\trxm->pkt.data_len = rx_packet_len;\r\n-\t\trxm->pkt.in_port = rxq->port_id;\r\n+\t\trxm->data = (char *)rxm->buf_addr + RTE_PKTMBUF_HEADROOM;\r\n+\t\trte_prefetch0(rxm->data);\r\n+\t\trxm->nb_segs = 1;\r\n+\t\trxm->next = NULL;\r\n+\t\trxm->pkt_len = rx_packet_len;\r\n+\t\trxm->data_len = rx_packet_len;\r\n+\t\trxm->in_port = rxq->port_id;\r\n \r\n-\t\trxm->pkt.vlan_macip.f.vlan_tci = rx_status &\r\n+\t\trxm->vlan_macip.f.vlan_tci = rx_status &\r\n \t\t\t(1 << I40E_RX_DESC_STATUS_L2TAG1P_SHIFT) ?\r\n \t\t\trte_le_to_cpu_16(rxd.wb.qword0.lo_dword.l2tag1) : 0;\r\n \t\tpkt_flags = i40e_rxd_status_to_pkt_flags(qword1);\r\n@@ -861,7 +861,7 @@ i40e_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)\r\n \t\tpkt_flags |= i40e_rxd_ptype_to_pkt_flags(qword1);\r\n \t\trxm->ol_flags = pkt_flags;\r\n \t\tif (pkt_flags & PKT_RX_RSS_HASH)\r\n-\t\t\trxm->pkt.hash.rss =\r\n+\t\t\trxm->hash.rss =\r\n \t\t\t\trte_le_to_cpu_32(rxd.wb.qword0.hi_dword.rss);\r\n \r\n \t\trx_pkts[nb_rx++] = rxm;\r\n@@ -948,8 +948,8 @@ i40e_recv_scattered_pkts(void *rx_queue,\r\n \t\trxdp->read.pkt_addr = dma_addr;\r\n \t\trx_packet_len = (qword1 & I40E_RXD_QW1_LENGTH_PBUF_MASK) >>\r\n \t\t\t\t\tI40E_RXD_QW1_LENGTH_PBUF_SHIFT;\r\n-\t\trxm->pkt.data_len = 
rx_packet_len;\r\n-\t\trxm->pkt.data = (char *)rxm->buf_addr + RTE_PKTMBUF_HEADROOM;\r\n+\t\trxm->data_len = rx_packet_len;\r\n+\t\trxm->data = (char *)rxm->buf_addr + RTE_PKTMBUF_HEADROOM;\r\n \r\n \t\t/**\r\n \t\t * If this is the first buffer of the received packet, set the\r\n@@ -960,14 +960,14 @@ i40e_recv_scattered_pkts(void *rx_queue,\r\n \t\t */\r\n \t\tif (!first_seg) {\r\n \t\t\tfirst_seg = rxm;\r\n-\t\t\tfirst_seg->pkt.nb_segs = 1;\r\n-\t\t\tfirst_seg->pkt.pkt_len = rx_packet_len;\r\n+\t\t\tfirst_seg->nb_segs = 1;\r\n+\t\t\tfirst_seg->pkt_len = rx_packet_len;\r\n \t\t} else {\r\n-\t\t\tfirst_seg->pkt.pkt_len =\r\n-\t\t\t\t(uint16_t)(first_seg->pkt.pkt_len +\r\n+\t\t\tfirst_seg->pkt_len =\r\n+\t\t\t\t(uint16_t)(first_seg->pkt_len +\r\n \t\t\t\t\t\trx_packet_len);\r\n-\t\t\tfirst_seg->pkt.nb_segs++;\r\n-\t\t\tlast_seg->pkt.next = rxm;\r\n+\t\t\tfirst_seg->nb_segs++;\r\n+\t\t\tlast_seg->next = rxm;\r\n \t\t}\r\n \r\n \t\t/**\r\n@@ -990,23 +990,23 @@ i40e_recv_scattered_pkts(void *rx_queue,\r\n \t\t *  the length of that CRC part from the data length of the\r\n \t\t *  previous mbuf.\r\n \t\t */\r\n-\t\trxm->pkt.next = NULL;\r\n+\t\trxm->next = NULL;\r\n \t\tif (unlikely(rxq->crc_len > 0)) {\r\n-\t\t\tfirst_seg->pkt.pkt_len -= ETHER_CRC_LEN;\r\n+\t\t\tfirst_seg->pkt_len -= ETHER_CRC_LEN;\r\n \t\t\tif (rx_packet_len <= ETHER_CRC_LEN) {\r\n \t\t\t\trte_pktmbuf_free_seg(rxm);\r\n-\t\t\t\tfirst_seg->pkt.nb_segs--;\r\n-\t\t\t\tlast_seg->pkt.data_len =\r\n-\t\t\t\t\t(uint16_t)(last_seg->pkt.data_len -\r\n+\t\t\t\tfirst_seg->nb_segs--;\r\n+\t\t\t\tlast_seg->data_len =\r\n+\t\t\t\t\t(uint16_t)(last_seg->data_len -\r\n \t\t\t\t\t(ETHER_CRC_LEN - rx_packet_len));\r\n-\t\t\t\tlast_seg->pkt.next = NULL;\r\n+\t\t\t\tlast_seg->next = NULL;\r\n \t\t\t} else\r\n-\t\t\t\trxm->pkt.data_len = (uint16_t)(rx_packet_len -\r\n+\t\t\t\trxm->data_len = (uint16_t)(rx_packet_len -\r\n \t\t\t\t\t\t\t\tETHER_CRC_LEN);\r\n \t\t}\r\n \r\n-\t\tfirst_seg->pkt.in_port = 
rxq->port_id;\r\n-\t\tfirst_seg->pkt.vlan_macip.f.vlan_tci = (rx_status &\r\n+\t\tfirst_seg->in_port = rxq->port_id;\r\n+\t\tfirst_seg->vlan_macip.f.vlan_tci = (rx_status &\r\n \t\t\t(1 << I40E_RX_DESC_STATUS_L2TAG1P_SHIFT)) ?\r\n \t\t\trte_le_to_cpu_16(rxd.wb.qword0.lo_dword.l2tag1) : 0;\r\n \t\tpkt_flags = i40e_rxd_status_to_pkt_flags(qword1);\r\n@@ -1014,11 +1014,11 @@ i40e_recv_scattered_pkts(void *rx_queue,\r\n \t\tpkt_flags |= i40e_rxd_ptype_to_pkt_flags(qword1);\r\n \t\tfirst_seg->ol_flags = pkt_flags;\r\n \t\tif (pkt_flags & PKT_RX_RSS_HASH)\r\n-\t\t\trxm->pkt.hash.rss =\r\n+\t\t\trxm->hash.rss =\r\n \t\t\t\trte_le_to_cpu_32(rxd.wb.qword0.hi_dword.rss);\r\n \r\n \t\t/* Prefetch data of first segment, if configured to do so. */\r\n-\t\trte_prefetch0(first_seg->pkt.data);\r\n+\t\trte_prefetch0(first_seg->data);\r\n \t\trx_pkts[nb_rx++] = first_seg;\r\n \t\tfirst_seg = NULL;\r\n \t}\r\n@@ -1108,8 +1108,8 @@ i40e_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)\r\n \t\tRTE_MBUF_PREFETCH_TO_FREE(txe->mbuf);\r\n \r\n \t\tol_flags = tx_pkt->ol_flags;\r\n-\t\tl2_len = tx_pkt->pkt.vlan_macip.f.l2_len;\r\n-\t\tl3_len = tx_pkt->pkt.vlan_macip.f.l3_len;\r\n+\t\tl2_len = tx_pkt->vlan_macip.f.l2_len;\r\n+\t\tl3_len = tx_pkt->vlan_macip.f.l3_len;\r\n \r\n \t\t/* Calculate the number of context descriptors needed. 
*/\r\n \t\tnb_ctx = i40e_calc_context_desc(ol_flags);\r\n@@ -1119,7 +1119,7 @@ i40e_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)\r\n \t\t * a packet equals to the number of the segments of that\r\n \t\t * packet plus 1 context descriptor if needed.\r\n \t\t */\r\n-\t\tnb_used = (uint16_t)(tx_pkt->pkt.nb_segs + nb_ctx);\r\n+\t\tnb_used = (uint16_t)(tx_pkt->nb_segs + nb_ctx);\r\n \t\ttx_last = (uint16_t)(tx_id + nb_used - 1);\r\n \r\n \t\t/* Circular ring */\r\n@@ -1145,7 +1145,7 @@ i40e_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)\r\n \r\n \t\t/* Descriptor based VLAN insertion */\r\n \t\tif (ol_flags & PKT_TX_VLAN_PKT) {\r\n-\t\t\ttx_flags |= tx_pkt->pkt.vlan_macip.f.vlan_tci <<\r\n+\t\t\ttx_flags |= tx_pkt->vlan_macip.f.vlan_tci <<\r\n \t\t\t\t\t\tI40E_TX_FLAG_L2TAG1_SHIFT;\r\n \t\t\ttx_flags |= I40E_TX_FLAG_INSERT_VLAN;\r\n \t\t\ttd_cmd |= I40E_TX_DESC_CMD_IL2TAG1;\r\n@@ -1202,7 +1202,7 @@ i40e_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)\r\n \t\t\ttxe->mbuf = m_seg;\r\n \r\n \t\t\t/* Setup TX Descriptor */\r\n-\t\t\tslen = m_seg->pkt.data_len;\r\n+\t\t\tslen = m_seg->data_len;\r\n \t\t\tbuf_dma_addr = RTE_MBUF_DATA_DMA_ADDR(m_seg);\r\n \t\t\ttxd->buffer_addr = rte_cpu_to_le_64(buf_dma_addr);\r\n \t\t\ttxd->cmd_type_offset_bsz = i40e_build_ctob(td_cmd,\r\n@@ -1210,7 +1210,7 @@ i40e_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)\r\n \t\t\ttxe->last_id = tx_last;\r\n \t\t\ttx_id = txe->next_id;\r\n \t\t\ttxe = txn;\r\n-\t\t\tm_seg = m_seg->pkt.next;\r\n+\t\t\tm_seg = m_seg->next;\r\n \t\t} while (m_seg != NULL);\r\n \r\n \t\t/* The last packet data descriptor needs End Of Packet (EOP) */\r\n@@ -1298,7 +1298,7 @@ tx4(volatile struct i40e_tx_desc *txdp, struct rte_mbuf **pkts)\r\n \t\ttxdp->buffer_addr = rte_cpu_to_le_64(dma_addr);\r\n \t\ttxdp->cmd_type_offset_bsz =\r\n \t\t\ti40e_build_ctob((uint32_t)I40E_TD_CMD, 0,\r\n-\t\t\t\t\t(*pkts)->pkt.data_len, 
0);\r\n+\t\t\t\t\t(*pkts)->data_len, 0);\r\n \t}\r\n }\r\n \r\n@@ -1312,7 +1312,7 @@ tx1(volatile struct i40e_tx_desc *txdp, struct rte_mbuf **pkts)\r\n \ttxdp->buffer_addr = rte_cpu_to_le_64(dma_addr);\r\n \ttxdp->cmd_type_offset_bsz =\r\n \t\ti40e_build_ctob((uint32_t)I40E_TD_CMD, 0,\r\n-\t\t\t\t(*pkts)->pkt.data_len, 0);\r\n+\t\t\t\t(*pkts)->data_len, 0);\r\n }\r\n \r\n /* Fill hardware descriptor ring with mbuf data */\r\n@@ -2019,10 +2019,10 @@ i40e_alloc_rx_queue_mbufs(struct i40e_rx_queue *rxq)\r\n \t\t}\r\n \r\n \t\trte_mbuf_refcnt_set(mbuf, 1);\r\n-\t\tmbuf->pkt.next = NULL;\r\n-\t\tmbuf->pkt.data = (char *)mbuf->buf_addr + RTE_PKTMBUF_HEADROOM;\r\n-\t\tmbuf->pkt.nb_segs = 1;\r\n-\t\tmbuf->pkt.in_port = rxq->port_id;\r\n+\t\tmbuf->next = NULL;\r\n+\t\tmbuf->data = (char *)mbuf->buf_addr + RTE_PKTMBUF_HEADROOM;\r\n+\t\tmbuf->nb_segs = 1;\r\n+\t\tmbuf->in_port = rxq->port_id;\r\n \r\n \t\tdma_addr =\r\n \t\t\trte_cpu_to_le_64(RTE_MBUF_DATA_DMA_ADDR_DEFAULT(mbuf));\r\ndiff --git a/lib/librte_pmd_ixgbe/ixgbe_rxtx.c b/lib/librte_pmd_ixgbe/ixgbe_rxtx.c\r\nindex 40ea4f8..c95e117 100644\r\n--- a/lib/librte_pmd_ixgbe/ixgbe_rxtx.c\r\n+++ b/lib/librte_pmd_ixgbe/ixgbe_rxtx.c\r\n@@ -178,7 +178,7 @@ tx4(volatile union ixgbe_adv_tx_desc *txdp, struct rte_mbuf **pkts)\r\n \r\n \tfor (i = 0; i < 4; ++i, ++txdp, ++pkts) {\r\n \t\tbuf_dma_addr = RTE_MBUF_DATA_DMA_ADDR(*pkts);\r\n-\t\tpkt_len = (*pkts)->pkt.data_len;\r\n+\t\tpkt_len = (*pkts)->data_len;\r\n \r\n \t\t/* write data to descriptor */\r\n \t\ttxdp->read.buffer_addr = buf_dma_addr;\r\n@@ -197,7 +197,7 @@ tx1(volatile union ixgbe_adv_tx_desc *txdp, struct rte_mbuf **pkts)\r\n \tuint32_t pkt_len;\r\n \r\n \tbuf_dma_addr = RTE_MBUF_DATA_DMA_ADDR(*pkts);\r\n-\tpkt_len = (*pkts)->pkt.data_len;\r\n+\tpkt_len = (*pkts)->data_len;\r\n \r\n \t/* write data to descriptor */\r\n \ttxdp->read.buffer_addr = buf_dma_addr;\r\n@@ -570,7 +570,7 @@ ixgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,\r\n \tfor (nb_tx = 0; 
nb_tx < nb_pkts; nb_tx++) {\r\n \t\tnew_ctx = 0;\r\n \t\ttx_pkt = *tx_pkts++;\r\n-\t\tpkt_len = tx_pkt->pkt.pkt_len;\r\n+\t\tpkt_len = tx_pkt->pkt_len;\r\n \r\n \t\tRTE_MBUF_PREFETCH_TO_FREE(txe->mbuf);\r\n \r\n@@ -579,7 +579,7 @@ ixgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,\r\n \t\t * are needed for offload functionality.\r\n \t\t */\r\n \t\tol_flags = tx_pkt->ol_flags;\r\n-\t\tvlan_macip_lens = tx_pkt->pkt.vlan_macip.data;\r\n+\t\tvlan_macip_lens = tx_pkt->vlan_macip.data;\r\n \r\n \t\t/* If hardware offload required */\r\n \t\ttx_ol_req = (uint16_t)(ol_flags & PKT_TX_OFFLOAD_MASK);\r\n@@ -597,7 +597,7 @@ ixgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,\r\n \t\t * This will always be the number of segments + the number of\r\n \t\t * Context descriptors required to transmit the packet\r\n \t\t */\r\n-\t\tnb_used = (uint16_t)(tx_pkt->pkt.nb_segs + new_ctx);\r\n+\t\tnb_used = (uint16_t)(tx_pkt->nb_segs + new_ctx);\r\n \r\n \t\t/*\r\n \t\t * The number of descriptors that must be allocated for a\r\n@@ -757,7 +757,7 @@ ixgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,\r\n \t\t\t/*\r\n \t\t\t * Set up Transmit Data Descriptor.\r\n \t\t\t */\r\n-\t\t\tslen = m_seg->pkt.data_len;\r\n+\t\t\tslen = m_seg->data_len;\r\n \t\t\tbuf_dma_addr = RTE_MBUF_DATA_DMA_ADDR(m_seg);\r\n \t\t\ttxd->read.buffer_addr =\r\n \t\t\t\trte_cpu_to_le_64(buf_dma_addr);\r\n@@ -768,7 +768,7 @@ ixgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,\r\n \t\t\ttxe->last_id = tx_last;\r\n \t\t\ttx_id = txe->next_id;\r\n \t\t\ttxe = txn;\r\n-\t\t\tm_seg = m_seg->pkt.next;\r\n+\t\t\tm_seg = m_seg->next;\r\n \t\t} while (m_seg != NULL);\r\n \r\n \t\t/*\r\n@@ -937,10 +937,10 @@ ixgbe_rx_scan_hw_ring(struct igb_rx_queue *rxq)\r\n \t\t\tmb = rxep[j].mbuf;\r\n \t\t\tpkt_len = (uint16_t)(rxdp[j].wb.upper.length -\r\n \t\t\t\t\t\t\trxq->crc_len);\r\n-\t\t\tmb->pkt.data_len = pkt_len;\r\n-\t\t\tmb->pkt.pkt_len = pkt_len;\r\n-\t\t\tmb->pkt.vlan_macip.f.vlan_tci = 
rxdp[j].wb.upper.vlan;\r\n-\t\t\tmb->pkt.hash.rss = rxdp[j].wb.lower.hi_dword.rss;\r\n+\t\t\tmb->data_len = pkt_len;\r\n+\t\t\tmb->pkt_len = pkt_len;\r\n+\t\t\tmb->vlan_macip.f.vlan_tci = rxdp[j].wb.upper.vlan;\r\n+\t\t\tmb->hash.rss = rxdp[j].wb.lower.hi_dword.rss;\r\n \r\n \t\t\t/* convert descriptor fields to rte mbuf flags */\r\n \t\t\tmb->ol_flags  = rx_desc_hlen_type_rss_to_pkt_flags(\r\n@@ -995,10 +995,10 @@ ixgbe_rx_alloc_bufs(struct igb_rx_queue *rxq)\r\n \t\t/* populate the static rte mbuf fields */\r\n \t\tmb = rxep[i].mbuf;\r\n \t\trte_mbuf_refcnt_set(mb, 1);\r\n-\t\tmb->pkt.next = NULL;\r\n-\t\tmb->pkt.data = (char *)mb->buf_addr + RTE_PKTMBUF_HEADROOM;\r\n-\t\tmb->pkt.nb_segs = 1;\r\n-\t\tmb->pkt.in_port = rxq->port_id;\r\n+\t\tmb->next = NULL;\r\n+\t\tmb->data = (char *)mb->buf_addr + RTE_PKTMBUF_HEADROOM;\r\n+\t\tmb->nb_segs = 1;\r\n+\t\tmb->in_port = rxq->port_id;\r\n \r\n \t\t/* populate the descriptors */\r\n \t\tdma_addr = (uint64_t)mb->buf_physaddr + RTE_PKTMBUF_HEADROOM;\r\n@@ -1247,17 +1247,17 @@ ixgbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,\r\n \t\t */\r\n \t\tpkt_len = (uint16_t) (rte_le_to_cpu_16(rxd.wb.upper.length) -\r\n \t\t\t\t      rxq->crc_len);\r\n-\t\trxm->pkt.data = (char*) rxm->buf_addr + RTE_PKTMBUF_HEADROOM;\r\n-\t\trte_packet_prefetch(rxm->pkt.data);\r\n-\t\trxm->pkt.nb_segs = 1;\r\n-\t\trxm->pkt.next = NULL;\r\n-\t\trxm->pkt.pkt_len = pkt_len;\r\n-\t\trxm->pkt.data_len = pkt_len;\r\n-\t\trxm->pkt.in_port = rxq->port_id;\r\n+\t\trxm->data = (char*) rxm->buf_addr + RTE_PKTMBUF_HEADROOM;\r\n+\t\trte_packet_prefetch(rxm->data);\r\n+\t\trxm->nb_segs = 1;\r\n+\t\trxm->next = NULL;\r\n+\t\trxm->pkt_len = pkt_len;\r\n+\t\trxm->data_len = pkt_len;\r\n+\t\trxm->in_port = rxq->port_id;\r\n \r\n \t\thlen_type_rss = rte_le_to_cpu_32(rxd.wb.lower.lo_dword.data);\r\n \t\t/* Only valid if PKT_RX_VLAN_PKT set in pkt_flags */\r\n-\t\trxm->pkt.vlan_macip.f.vlan_tci =\r\n+\t\trxm->vlan_macip.f.vlan_tci =\r\n 
\t\t\trte_le_to_cpu_16(rxd.wb.upper.vlan);\r\n \r\n \t\tpkt_flags = rx_desc_hlen_type_rss_to_pkt_flags(hlen_type_rss);\r\n@@ -1268,12 +1268,12 @@ ixgbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,\r\n \t\trxm->ol_flags = pkt_flags;\r\n \r\n \t\tif (likely(pkt_flags & PKT_RX_RSS_HASH))\r\n-\t\t\trxm->pkt.hash.rss = rxd.wb.lower.hi_dword.rss;\r\n+\t\t\trxm->hash.rss = rxd.wb.lower.hi_dword.rss;\r\n \t\telse if (pkt_flags & PKT_RX_FDIR) {\r\n-\t\t\trxm->pkt.hash.fdir.hash =\r\n+\t\t\trxm->hash.fdir.hash =\r\n \t\t\t\t(uint16_t)((rxd.wb.lower.hi_dword.csum_ip.csum)\r\n \t\t\t\t\t   & IXGBE_ATR_HASH_MASK);\r\n-\t\t\trxm->pkt.hash.fdir.id = rxd.wb.lower.hi_dword.csum_ip.ip_id;\r\n+\t\t\trxm->hash.fdir.id = rxd.wb.lower.hi_dword.csum_ip.ip_id;\r\n \t\t}\r\n \t\t/*\r\n \t\t * Store the mbuf address into the next entry of the array\r\n@@ -1430,8 +1430,8 @@ ixgbe_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,\r\n \t\t * Set data length & data buffer address of mbuf.\r\n \t\t */\r\n \t\tdata_len = rte_le_to_cpu_16(rxd.wb.upper.length);\r\n-\t\trxm->pkt.data_len = data_len;\r\n-\t\trxm->pkt.data = (char*) rxm->buf_addr + RTE_PKTMBUF_HEADROOM;\r\n+\t\trxm->data_len = data_len;\r\n+\t\trxm->data = (char*) rxm->buf_addr + RTE_PKTMBUF_HEADROOM;\r\n \r\n \t\t/*\r\n \t\t * If this is the first buffer of the received packet,\r\n@@ -1443,13 +1443,13 @@ ixgbe_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,\r\n \t\t */\r\n \t\tif (first_seg == NULL) {\r\n \t\t\tfirst_seg = rxm;\r\n-\t\t\tfirst_seg->pkt.pkt_len = data_len;\r\n-\t\t\tfirst_seg->pkt.nb_segs = 1;\r\n+\t\t\tfirst_seg->pkt_len = data_len;\r\n+\t\t\tfirst_seg->nb_segs = 1;\r\n \t\t} else {\r\n-\t\t\tfirst_seg->pkt.pkt_len = (uint16_t)(first_seg->pkt.pkt_len\r\n+\t\t\tfirst_seg->pkt_len = (uint16_t)(first_seg->pkt_len\r\n \t\t\t\t\t+ data_len);\r\n-\t\t\tfirst_seg->pkt.nb_segs++;\r\n-\t\t\tlast_seg->pkt.next = rxm;\r\n+\t\t\tfirst_seg->nb_segs++;\r\n+\t\t\tlast_seg->next = rxm;\r\n 
\t\t}\r\n \r\n \t\t/*\r\n@@ -1472,18 +1472,18 @@ ixgbe_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,\r\n \t\t *     mbuf, subtract the length of that CRC part from the\r\n \t\t *     data length of the previous mbuf.\r\n \t\t */\r\n-\t\trxm->pkt.next = NULL;\r\n+\t\trxm->next = NULL;\r\n \t\tif (unlikely(rxq->crc_len > 0)) {\r\n-\t\t\tfirst_seg->pkt.pkt_len -= ETHER_CRC_LEN;\r\n+\t\t\tfirst_seg->pkt_len -= ETHER_CRC_LEN;\r\n \t\t\tif (data_len <= ETHER_CRC_LEN) {\r\n \t\t\t\trte_pktmbuf_free_seg(rxm);\r\n-\t\t\t\tfirst_seg->pkt.nb_segs--;\r\n-\t\t\t\tlast_seg->pkt.data_len = (uint16_t)\r\n-\t\t\t\t\t(last_seg->pkt.data_len -\r\n+\t\t\t\tfirst_seg->nb_segs--;\r\n+\t\t\t\tlast_seg->data_len = (uint16_t)\r\n+\t\t\t\t\t(last_seg->data_len -\r\n \t\t\t\t\t (ETHER_CRC_LEN - data_len));\r\n-\t\t\t\tlast_seg->pkt.next = NULL;\r\n+\t\t\t\tlast_seg->next = NULL;\r\n \t\t\t} else\r\n-\t\t\t\trxm->pkt.data_len =\r\n+\t\t\t\trxm->data_len =\r\n \t\t\t\t\t(uint16_t) (data_len - ETHER_CRC_LEN);\r\n \t\t}\r\n \r\n@@ -1496,13 +1496,13 @@ ixgbe_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,\r\n \t\t *      - VLAN TCI, if any,\r\n \t\t *      - error flags.\r\n \t\t */\r\n-\t\tfirst_seg->pkt.in_port = rxq->port_id;\r\n+\t\tfirst_seg->in_port = rxq->port_id;\r\n \r\n \t\t/*\r\n \t\t * The vlan_tci field is only valid when PKT_RX_VLAN_PKT is\r\n \t\t * set in the pkt_flags field.\r\n \t\t */\r\n-\t\tfirst_seg->pkt.vlan_macip.f.vlan_tci =\r\n+\t\tfirst_seg->vlan_macip.f.vlan_tci =\r\n \t\t\t\trte_le_to_cpu_16(rxd.wb.upper.vlan);\r\n \t\thlen_type_rss = rte_le_to_cpu_32(rxd.wb.lower.lo_dword.data);\r\n \t\tpkt_flags = rx_desc_hlen_type_rss_to_pkt_flags(hlen_type_rss);\r\n@@ -1513,17 +1513,17 @@ ixgbe_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,\r\n \t\tfirst_seg->ol_flags = pkt_flags;\r\n \r\n \t\tif (likely(pkt_flags & PKT_RX_RSS_HASH))\r\n-\t\t\tfirst_seg->pkt.hash.rss = rxd.wb.lower.hi_dword.rss;\r\n+\t\t\tfirst_seg->hash.rss = 
rxd.wb.lower.hi_dword.rss;\r\n \t\telse if (pkt_flags & PKT_RX_FDIR) {\r\n-\t\t\tfirst_seg->pkt.hash.fdir.hash =\r\n+\t\t\tfirst_seg->hash.fdir.hash =\r\n \t\t\t\t(uint16_t)((rxd.wb.lower.hi_dword.csum_ip.csum)\r\n \t\t\t\t\t   & IXGBE_ATR_HASH_MASK);\r\n-\t\t\tfirst_seg->pkt.hash.fdir.id =\r\n+\t\t\tfirst_seg->hash.fdir.id =\r\n \t\t\t\trxd.wb.lower.hi_dword.csum_ip.ip_id;\r\n \t\t}\r\n \r\n \t\t/* Prefetch data of first segment, if configured to do so. */\r\n-\t\trte_packet_prefetch(first_seg->pkt.data);\r\n+\t\trte_packet_prefetch(first_seg->data);\r\n \r\n \t\t/*\r\n \t\t * Store the mbuf address into the next entry of the array\r\n@@ -3212,10 +3212,10 @@ ixgbe_alloc_rx_queue_mbufs(struct igb_rx_queue *rxq)\r\n \t\t}\r\n \r\n \t\trte_mbuf_refcnt_set(mbuf, 1);\r\n-\t\tmbuf->pkt.next = NULL;\r\n-\t\tmbuf->pkt.data = (char *)mbuf->buf_addr + RTE_PKTMBUF_HEADROOM;\r\n-\t\tmbuf->pkt.nb_segs = 1;\r\n-\t\tmbuf->pkt.in_port = rxq->port_id;\r\n+\t\tmbuf->next = NULL;\r\n+\t\tmbuf->data = (char *)mbuf->buf_addr + RTE_PKTMBUF_HEADROOM;\r\n+\t\tmbuf->nb_segs = 1;\r\n+\t\tmbuf->in_port = rxq->port_id;\r\n \r\n \t\tdma_addr =\r\n \t\t\trte_cpu_to_le_64(RTE_MBUF_DATA_DMA_ADDR_DEFAULT(mbuf));\r\ndiff --git a/lib/librte_pmd_ixgbe/ixgbe_rxtx.h b/lib/librte_pmd_ixgbe/ixgbe_rxtx.h\r\nindex 64c0695..4c9cb74 100644\r\n--- a/lib/librte_pmd_ixgbe/ixgbe_rxtx.h\r\n+++ b/lib/librte_pmd_ixgbe/ixgbe_rxtx.h\r\n@@ -47,7 +47,7 @@\r\n #endif\r\n \r\n #define RTE_MBUF_DATA_DMA_ADDR(mb) \\\r\n-\t(uint64_t) ((mb)->buf_physaddr + (uint64_t)((char *)((mb)->pkt.data) - \\\r\n+\t(uint64_t) ((mb)->buf_physaddr + (uint64_t)((char *)((mb)->data) - \\\r\n \t(char *)(mb)->buf_addr))\r\n \r\n #define RTE_MBUF_DATA_DMA_ADDR_DEFAULT(mb) \\\r\ndiff --git a/lib/librte_pmd_ixgbe/ixgbe_rxtx_vec.c b/lib/librte_pmd_ixgbe/ixgbe_rxtx_vec.c\r\nindex 047acf0..bafb215 100644\r\n--- a/lib/librte_pmd_ixgbe/ixgbe_rxtx_vec.c\r\n+++ b/lib/librte_pmd_ixgbe/ixgbe_rxtx_vec.c\r\n@@ -48,9 +48,7 @@ static inline void\r\n 
ixgbe_rxq_rearm(struct igb_rx_queue *rxq)\r\n {\r\n \tstatic const struct rte_mbuf mb_def = {\r\n-\t\t.pkt = {\r\n-\t\t\t.nb_segs = 1,\r\n-\t\t},\r\n+\t\t.nb_segs = 1,\r\n \t};\r\n \tint i;\r\n \tuint16_t rx_id;\r\n@@ -68,7 +66,7 @@ ixgbe_rxq_rearm(struct igb_rx_queue *rxq)\r\n \r\n \trxdp = rxq->rx_ring + rxq->rxrearm_start;\r\n \r\n-\tdef_low = _mm_load_si128((__m128i *)&(mb_def.pkt));\r\n+\tdef_low = _mm_load_si128((__m128i *)&(mb_def.next));\r\n \r\n \t/* Initialize the mbufs in vector, process 2 mbufs in one loop */\r\n \tfor (i = 0; i < RTE_IXGBE_RXQ_REARM_THRESH; i += 2, rxep += 2) {\r\n@@ -99,8 +97,8 @@ ixgbe_rxq_rearm(struct igb_rx_queue *rxq)\r\n \t\t_mm_store_si128((__m128i *)&rxdp++->read, dma_addr1);\r\n \r\n \t\t/* flush mbuf with pkt template */\r\n-\t\t_mm_store_si128((__m128i *)&mb0->pkt, vaddr0);\r\n-\t\t_mm_store_si128((__m128i *)&mb1->pkt, vaddr1);\r\n+\t\t_mm_store_si128((__m128i *)&mb0->next, vaddr0);\r\n+\t\t_mm_store_si128((__m128i *)&mb1->next, vaddr1);\r\n \r\n \t\t/* update refcnt per pkt */\r\n \t\trte_mbuf_refcnt_set(mb0, 1);\r\n@@ -299,9 +297,9 @@ ixgbe_recv_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,\r\n \t\tstaterr = _mm_unpacklo_epi32(sterr_tmp1, sterr_tmp2);\r\n \r\n \t\t/* D.3 copy final 3,4 data to rx_pkts */\r\n-\t\t_mm_storeu_si128((__m128i *)&(rx_pkts[pos+3]->pkt.data_len),\r\n+\t\t_mm_storeu_si128((__m128i *)&(rx_pkts[pos+3]->data_len),\r\n \t\t\t\tpkt_mb4);\r\n-\t\t_mm_storeu_si128((__m128i *)&(rx_pkts[pos+2]->pkt.data_len),\r\n+\t\t_mm_storeu_si128((__m128i *)&(rx_pkts[pos+2]->data_len),\r\n \t\t\t\tpkt_mb3);\r\n \r\n \t\t/* D.2 pkt 1,2 set in_port/nb_seg and remove crc */\r\n@@ -313,9 +311,9 @@ ixgbe_recv_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,\r\n \t\tstaterr = _mm_packs_epi32(staterr, zero);\r\n \r\n \t\t/* D.3 copy final 1,2 data to rx_pkts */\r\n-\t\t_mm_storeu_si128((__m128i *)&(rx_pkts[pos+1]->pkt.data_len),\r\n+\t\t_mm_storeu_si128((__m128i *)&(rx_pkts[pos+1]->data_len),\r\n 
\t\t\t\tpkt_mb2);\r\n-\t\t_mm_storeu_si128((__m128i *)&(rx_pkts[pos]->pkt.data_len),\r\n+\t\t_mm_storeu_si128((__m128i *)&(rx_pkts[pos]->data_len),\r\n \t\t\t\tpkt_mb1);\r\n \r\n \t\t/* C.4 calc avaialbe number of desc */\r\n@@ -342,7 +340,7 @@ vtx1(volatile union ixgbe_adv_tx_desc *txdp,\r\n \t/* load buf_addr/buf_physaddr in t0 */\r\n \tt0 = _mm_loadu_si128((__m128i *)&(pkt->buf_addr));\r\n \t/* load data, ... pkt_len in t1 */\r\n-\tt1 = _mm_loadu_si128((__m128i *)&(pkt->pkt.data));\r\n+\tt1 = _mm_loadu_si128((__m128i *)&(pkt->data));\r\n \r\n \t/* calc offset = (data - buf_adr) */\r\n \toffset = _mm_sub_epi64(t1, t0);\r\ndiff --git a/lib/librte_pmd_pcap/rte_eth_pcap.c b/lib/librte_pmd_pcap/rte_eth_pcap.c\r\nindex eebe768..121de65 100644\r\n--- a/lib/librte_pmd_pcap/rte_eth_pcap.c\r\n+++ b/lib/librte_pmd_pcap/rte_eth_pcap.c\r\n@@ -151,9 +151,9 @@ eth_pcap_rx(void *queue,\r\n \r\n \t\tif (header.len <= buf_size) {\r\n \t\t\t/* pcap packet will fit in the mbuf, go ahead and copy */\r\n-\t\t\trte_memcpy(mbuf->pkt.data, packet, header.len);\r\n-\t\t\tmbuf->pkt.data_len = (uint16_t)header.len;\r\n-\t\t\tmbuf->pkt.pkt_len = mbuf->pkt.data_len;\r\n+\t\t\trte_memcpy(mbuf->data, packet, header.len);\r\n+\t\t\tmbuf->data_len = (uint16_t)header.len;\r\n+\t\t\tmbuf->pkt_len = mbuf->data_len;\r\n \t\t\tbufs[num_rx] = mbuf;\r\n \t\t\tnum_rx++;\r\n \t\t} else {\r\n@@ -200,9 +200,9 @@ eth_pcap_tx_dumper(void *queue,\r\n \tfor (i = 0; i < nb_pkts; i++) {\r\n \t\tmbuf = bufs[i];\r\n \t\tcalculate_timestamp(&header.ts);\r\n-\t\theader.len = mbuf->pkt.data_len;\r\n+\t\theader.len = mbuf->data_len;\r\n \t\theader.caplen = header.len;\r\n-\t\tpcap_dump((u_char*) dumper_q->dumper, &header, mbuf->pkt.data);\r\n+\t\tpcap_dump((u_char*) dumper_q->dumper, &header, mbuf->data);\r\n \t\trte_pktmbuf_free(mbuf);\r\n \t\tnum_tx++;\r\n \t}\r\n@@ -237,8 +237,8 @@ eth_pcap_tx(void *queue,\r\n \r\n \tfor (i = 0; i < nb_pkts; i++) {\r\n \t\tmbuf = bufs[i];\r\n-\t\tret = 
pcap_sendpacket(tx_queue->pcap, (u_char*) mbuf->pkt.data,\r\n-\t\t\t\tmbuf->pkt.data_len);\r\n+\t\tret = pcap_sendpacket(tx_queue->pcap, (u_char*) mbuf->data,\r\n+\t\t\t\tmbuf->data_len);\r\n \t\tif (unlikely(ret != 0))\r\n \t\t\tbreak;\r\n \t\tnum_tx++;\r\ndiff --git a/lib/librte_pmd_virtio/virtio_rxtx.c b/lib/librte_pmd_virtio/virtio_rxtx.c\r\nindex 186514d..07bc7b2 100644\r\n--- a/lib/librte_pmd_virtio/virtio_rxtx.c\r\n+++ b/lib/librte_pmd_virtio/virtio_rxtx.c\r\n@@ -118,7 +118,7 @@ virtqueue_dequeue_burst_rx(struct virtqueue *vq, struct rte_mbuf **rx_pkts,\r\n \t\t}\r\n \r\n \t\trte_prefetch0(cookie);\r\n-\t\trte_packet_prefetch(cookie->pkt.data);\r\n+\t\trte_packet_prefetch(cookie->data);\r\n \t\trx_pkts[i]  = cookie;\r\n \t\tvq->vq_used_cons_idx++;\r\n \t\tvq_ring_free_chain(vq, desc_idx);\r\n@@ -209,7 +209,7 @@ virtqueue_enqueue_xmit(struct virtqueue *txvq, struct rte_mbuf *cookie)\r\n \tstart_dp[idx].flags = VRING_DESC_F_NEXT;\r\n \tidx = start_dp[idx].next;\r\n \tstart_dp[idx].addr  = RTE_MBUF_DATA_DMA_ADDR(cookie);\r\n-\tstart_dp[idx].len   = cookie->pkt.data_len;\r\n+\tstart_dp[idx].len   = cookie->data_len;\r\n \tstart_dp[idx].flags = 0;\r\n \tidx = start_dp[idx].next;\r\n \ttxvq->vq_desc_head_idx = idx;\r\n@@ -469,16 +469,16 @@ virtio_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)\r\n \t\t\tcontinue;\r\n \t\t}\r\n \r\n-\t\trxm->pkt.in_port = rxvq->port_id;\r\n-\t\trxm->pkt.data = (char *)rxm->buf_addr + RTE_PKTMBUF_HEADROOM;\r\n-\t\trxm->pkt.nb_segs = 1;\r\n-\t\trxm->pkt.next = NULL;\r\n-\t\trxm->pkt.pkt_len  = (uint32_t)(len[i]\r\n+\t\trxm->in_port = rxvq->port_id;\r\n+\t\trxm->data = (char *)rxm->buf_addr + RTE_PKTMBUF_HEADROOM;\r\n+\t\trxm->nb_segs = 1;\r\n+\t\trxm->next = NULL;\r\n+\t\trxm->pkt_len  = (uint32_t)(len[i]\r\n \t\t\t\t\t       - sizeof(struct virtio_net_hdr));\r\n-\t\trxm->pkt.data_len = (uint16_t)(len[i]\r\n+\t\trxm->data_len = (uint16_t)(len[i]\r\n \t\t\t\t\t       - sizeof(struct virtio_net_hdr));\r\n 
\r\n-\t\tVIRTIO_DUMP_PACKET(rxm, rxm->pkt.data_len);\r\n+\t\tVIRTIO_DUMP_PACKET(rxm, rxm->data_len);\r\n \r\n \t\trx_pkts[nb_rx++] = rxm;\r\n \t\trxvq->bytes += len[i] - sizeof(struct virtio_net_hdr);\r\n@@ -555,7 +555,7 @@ virtio_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)\r\n \t\t\t\tbreak;\r\n \t\t\t}\r\n \t\t\tnb_tx++;\r\n-\t\t\ttxvq->bytes += txm->pkt.data_len;\r\n+\t\t\ttxvq->bytes += txm->data_len;\r\n \t\t} else {\r\n \t\t\tPMD_TX_LOG(ERR, \"No free tx descriptors to transmit\");\r\n \t\t\tbreak;\r\ndiff --git a/lib/librte_pmd_virtio/virtqueue.h b/lib/librte_pmd_virtio/virtqueue.h\r\nindex 87db35f..d777feb 100644\r\n--- a/lib/librte_pmd_virtio/virtqueue.h\r\n+++ b/lib/librte_pmd_virtio/virtqueue.h\r\n@@ -59,7 +59,7 @@\r\n #define VIRTQUEUE_MAX_NAME_SZ 32\r\n \r\n #define RTE_MBUF_DATA_DMA_ADDR(mb) \\\r\n-\t(uint64_t) ((mb)->buf_physaddr + (uint64_t)((char *)((mb)->pkt.data) - \\\r\n+\t(uint64_t) ((mb)->buf_physaddr + (uint64_t)((char *)((mb)->data) - \\\r\n \t(char *)(mb)->buf_addr))\r\n \r\n #define VTNET_SQ_RQ_QUEUE_IDX 0\r\ndiff --git a/lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c b/lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c\r\nindex 2470c8e..7dd5a98 100644\r\n--- a/lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c\r\n+++ b/lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c\r\n@@ -79,7 +79,7 @@\r\n \r\n \r\n #define RTE_MBUF_DATA_DMA_ADDR(mb) \\\r\n-\t(uint64_t) ((mb)->buf_physaddr + (uint64_t)((char *)((mb)->pkt.data) - \\\r\n+\t(uint64_t) ((mb)->buf_physaddr + (uint64_t)((char *)((mb)->data) - \\\r\n \t(char *)(mb)->buf_addr))\r\n \r\n #define RTE_MBUF_DATA_DMA_ADDR_DEFAULT(mb) \\\r\n@@ -288,7 +288,7 @@ vmxnet3_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,\r\n \r\n \t\t\ttxm = tx_pkts[nb_tx];\r\n \t\t\t/* Don't support scatter packets yet, free them if met */\r\n-\t\t\tif (txm->pkt.nb_segs != 1) {\r\n+\t\t\tif (txm->nb_segs != 1) {\r\n \t\t\t\tPMD_TX_LOG(DEBUG, \"Don't support scatter packets yet, drop!\");\r\n 
\t\t\t\trte_pktmbuf_free(tx_pkts[nb_tx]);\r\n \t\t\t\ttxq->stats.drop_total++;\r\n@@ -298,7 +298,7 @@ vmxnet3_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,\r\n \t\t\t}\r\n \r\n \t\t\t/* Needs to minus ether header len */\r\n-\t\t\tif (txm->pkt.data_len > (hw->cur_mtu + ETHER_HDR_LEN)) {\r\n+\t\t\tif (txm->data_len > (hw->cur_mtu + ETHER_HDR_LEN)) {\r\n \t\t\t\tPMD_TX_LOG(DEBUG, \"Packet data_len higher than MTU\");\r\n \t\t\t\trte_pktmbuf_free(tx_pkts[nb_tx]);\r\n \t\t\t\ttxq->stats.drop_total++;\r\n@@ -313,7 +313,7 @@ vmxnet3_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,\r\n \t\t\ttbi = txq->cmd_ring.buf_info + txq->cmd_ring.next2fill;\r\n \t\t\ttbi->bufPA = RTE_MBUF_DATA_DMA_ADDR(txm);\r\n \t\t\ttxd->addr = tbi->bufPA;\r\n-\t\t\ttxd->len = txm->pkt.data_len;\r\n+\t\t\ttxd->len = txm->data_len;\r\n \r\n \t\t\t/* Mark the last descriptor as End of Packet. */\r\n \t\t\ttxd->cq = 1;\r\n@@ -550,21 +550,21 @@ vmxnet3_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)\r\n \t\t\t\t\t       rte_pktmbuf_mtod(rxm, void *));\r\n #endif\r\n \t\t\t\t/* Copy vlan tag in packet buffer */\r\n-\t\t\t\trxm->pkt.vlan_macip.f.vlan_tci =\r\n+\t\t\t\trxm->vlan_macip.f.vlan_tci =\r\n \t\t\t\t\trte_le_to_cpu_16((uint16_t)rcd->tci);\r\n \r\n \t\t\t} else\r\n \t\t\t\trxm->ol_flags = 0;\r\n \r\n \t\t\t/* Initialize newly received packet buffer */\r\n-\t\t\trxm->pkt.in_port = rxq->port_id;\r\n-\t\t\trxm->pkt.nb_segs = 1;\r\n-\t\t\trxm->pkt.next = NULL;\r\n-\t\t\trxm->pkt.pkt_len = (uint16_t)rcd->len;\r\n-\t\t\trxm->pkt.data_len = (uint16_t)rcd->len;\r\n-\t\t\trxm->pkt.in_port = rxq->port_id;\r\n-\t\t\trxm->pkt.vlan_macip.f.vlan_tci = 0;\r\n-\t\t\trxm->pkt.data = (char *)rxm->buf_addr + RTE_PKTMBUF_HEADROOM;\r\n+\t\t\trxm->in_port = rxq->port_id;\r\n+\t\t\trxm->nb_segs = 1;\r\n+\t\t\trxm->next = NULL;\r\n+\t\t\trxm->pkt_len = (uint16_t)rcd->len;\r\n+\t\t\trxm->data_len = (uint16_t)rcd->len;\r\n+\t\t\trxm->in_port = 
rxq->port_id;\r\n+\t\t\trxm->vlan_macip.f.vlan_tci = 0;\r\n+\t\t\trxm->data = (char *)rxm->buf_addr + RTE_PKTMBUF_HEADROOM;\r\n \r\n \t\t\trx_pkts[nb_rx++] = rxm;\r\n \r\ndiff --git a/lib/librte_pmd_xenvirt/rte_eth_xenvirt.c b/lib/librte_pmd_xenvirt/rte_eth_xenvirt.c\r\nindex ba82319..c118652 100644\r\n--- a/lib/librte_pmd_xenvirt/rte_eth_xenvirt.c\r\n+++ b/lib/librte_pmd_xenvirt/rte_eth_xenvirt.c\r\n@@ -109,12 +109,12 @@ eth_xenvirt_rx(void *q, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)\r\n \tfor (i = 0; i < num ; i ++) {\r\n \t\trxm = rx_pkts[i];\r\n \t\tPMD_RX_LOG(DEBUG, \"packet len:%d\\n\", len[i]);\r\n-\t\trxm->pkt.next = NULL;\r\n-\t\trxm->pkt.data = (char *)rxm->buf_addr + RTE_PKTMBUF_HEADROOM;\r\n-\t\trxm->pkt.data_len = (uint16_t)(len[i] - sizeof(struct virtio_net_hdr));\r\n-\t\trxm->pkt.nb_segs = 1;\r\n-\t\trxm->pkt.in_port = pi->port_id;\r\n-\t\trxm->pkt.pkt_len  = (uint32_t)(len[i] - sizeof(struct virtio_net_hdr));\r\n+\t\trxm->next = NULL;\r\n+\t\trxm->data = (char *)rxm->buf_addr + RTE_PKTMBUF_HEADROOM;\r\n+\t\trxm->data_len = (uint16_t)(len[i] - sizeof(struct virtio_net_hdr));\r\n+\t\trxm->nb_segs = 1;\r\n+\t\trxm->in_port = pi->port_id;\r\n+\t\trxm->pkt_len  = (uint32_t)(len[i] - sizeof(struct virtio_net_hdr));\r\n \t}\r\n \t/* allocate new mbuf for the used descriptor */\r\n \twhile (likely(!virtqueue_full(rxvq))) {\r\ndiff --git a/lib/librte_pmd_xenvirt/virtqueue.h b/lib/librte_pmd_xenvirt/virtqueue.h\r\nindex 81cd938..d8717fe 100644\r\n--- a/lib/librte_pmd_xenvirt/virtqueue.h\r\n+++ b/lib/librte_pmd_xenvirt/virtqueue.h\r\n@@ -54,7 +54,7 @@\r\n  * rather than gpa<->hva in virito spec.\r\n  */\r\n #define RTE_MBUF_DATA_DMA_ADDR(mb) \\\r\n-\t((uint64_t)((mb)->pkt.data))\r\n+\t((uint64_t)((mb)->data))\r\n \r\n enum { VTNET_RQ = 0, VTNET_TQ = 1, VTNET_CQ = 2 };\r\n \r\n@@ -238,7 +238,7 @@ virtqueue_enqueue_xmit(struct virtqueue *txvq, struct rte_mbuf *cookie)\r\n \tstart_dp[idx].addr  = (uintptr_t)NULL;\r\n \tidx = start_dp[idx].next;\r\n 
\tstart_dp[idx].addr  = RTE_MBUF_DATA_DMA_ADDR(cookie);\r\n-\tstart_dp[idx].len   = cookie->pkt.data_len;\r\n+\tstart_dp[idx].len   = cookie->data_len;\r\n \tstart_dp[idx].flags = 0;\r\n \tidx = start_dp[idx].next;\r\n \ttxvq->vq_desc_head_idx = idx;\r\ndiff --git a/lib/librte_port/rte_port_frag.c b/lib/librte_port/rte_port_frag.c\r\nindex ce5026f..9f1bd3c 100644\r\n--- a/lib/librte_port/rte_port_frag.c\r\n+++ b/lib/librte_port/rte_port_frag.c\r\n@@ -159,7 +159,7 @@ rte_port_ring_reader_ipv4_frag_rx(void *port,\r\n \t\tp->n_pkts--;\r\n \r\n \t\t/* If not jumbo, pass current packet to output */\r\n-\t\tif (pkt->pkt.pkt_len <= IPV4_MTU_DEFAULT) {\r\n+\t\tif (pkt->pkt_len <= IPV4_MTU_DEFAULT) {\r\n \t\t\tpkts[n_pkts_out++] = pkt;\r\n \r\n \t\t\tn_pkts_to_provide = n_pkts - n_pkts_out;\r\ndiff --git a/lib/librte_sched/rte_sched.c b/lib/librte_sched/rte_sched.c\r\nindex 968c2b3..ba60277 100644\r\n--- a/lib/librte_sched/rte_sched.c\r\n+++ b/lib/librte_sched/rte_sched.c\r\n@@ -1015,7 +1015,7 @@ rte_sched_port_update_subport_stats(struct rte_sched_port *port, uint32_t qindex\r\n {\r\n \tstruct rte_sched_subport *s = port->subport + (qindex / rte_sched_port_queues_per_subport(port));\r\n \tuint32_t tc_index = (qindex >> 2) & 0x3;\r\n-\tuint32_t pkt_len = pkt->pkt.pkt_len;\r\n+\tuint32_t pkt_len = pkt->pkt_len;\r\n \r\n \ts->stats.n_pkts_tc[tc_index] += 1;\r\n \ts->stats.n_bytes_tc[tc_index] += pkt_len;\r\n@@ -1026,7 +1026,7 @@ rte_sched_port_update_subport_stats_on_drop(struct rte_sched_port *port, uint32_\r\n {\r\n \tstruct rte_sched_subport *s = port->subport + (qindex / rte_sched_port_queues_per_subport(port));\r\n \tuint32_t tc_index = (qindex >> 2) & 0x3;\r\n-\tuint32_t pkt_len = pkt->pkt.pkt_len;\r\n+\tuint32_t pkt_len = pkt->pkt_len;\r\n \r\n \ts->stats.n_pkts_tc_dropped[tc_index] += 1;\r\n \ts->stats.n_bytes_tc_dropped[tc_index] += pkt_len;\r\n@@ -1036,7 +1036,7 @@ static inline void\r\n rte_sched_port_update_queue_stats(struct rte_sched_port *port, uint32_t qindex, 
struct rte_mbuf *pkt)\r\n {\r\n \tstruct rte_sched_queue_extra *qe = port->queue_extra + qindex;\r\n-\tuint32_t pkt_len = pkt->pkt.pkt_len;\r\n+\tuint32_t pkt_len = pkt->pkt_len;\r\n \r\n \tqe->stats.n_pkts += 1;\r\n \tqe->stats.n_bytes += pkt_len;\r\n@@ -1046,7 +1046,7 @@ static inline void\r\n rte_sched_port_update_queue_stats_on_drop(struct rte_sched_port *port, uint32_t qindex, struct rte_mbuf *pkt)\r\n {\r\n \tstruct rte_sched_queue_extra *qe = port->queue_extra + qindex;\r\n-\tuint32_t pkt_len = pkt->pkt.pkt_len;\r\n+\tuint32_t pkt_len = pkt->pkt_len;\r\n \r\n \tqe->stats.n_pkts_dropped += 1;\r\n \tqe->stats.n_bytes_dropped += pkt_len;\r\n@@ -1563,7 +1563,7 @@ grinder_credits_check(struct rte_sched_port *port, uint32_t pos)\r\n \tstruct rte_sched_pipe *pipe = grinder->pipe;\r\n \tstruct rte_mbuf *pkt = grinder->pkt;\r\n \tuint32_t tc_index = grinder->tc_index;\r\n-\tuint32_t pkt_len = pkt->pkt.pkt_len + port->frame_overhead;\r\n+\tuint32_t pkt_len = pkt->pkt_len + port->frame_overhead;\r\n \tuint32_t subport_tb_credits = subport->tb_credits;\r\n \tuint32_t subport_tc_credits = subport->tc_credits[tc_index];\r\n \tuint32_t pipe_tb_credits = pipe->tb_credits;\r\n@@ -1599,7 +1599,7 @@ grinder_credits_check(struct rte_sched_port *port, uint32_t pos)\r\n \tstruct rte_sched_pipe *pipe = grinder->pipe;\r\n \tstruct rte_mbuf *pkt = grinder->pkt;\r\n \tuint32_t tc_index = grinder->tc_index;\r\n-\tuint32_t pkt_len = pkt->pkt.pkt_len + port->frame_overhead;\r\n+\tuint32_t pkt_len = pkt->pkt_len + port->frame_overhead;\r\n \tuint32_t subport_tb_credits = subport->tb_credits;\r\n \tuint32_t subport_tc_credits = subport->tc_credits[tc_index];\r\n \tuint32_t pipe_tb_credits = pipe->tb_credits;\r\n@@ -1640,7 +1640,7 @@ grinder_schedule(struct rte_sched_port *port, uint32_t pos)\r\n \tstruct rte_sched_grinder *grinder = port->grinder + pos;\r\n \tstruct rte_sched_queue *queue = grinder->queue[grinder->qpos];\r\n \tstruct rte_mbuf *pkt = grinder->pkt;\r\n-\tuint32_t pkt_len = 
pkt->pkt.pkt_len + port->frame_overhead;\r\n+\tuint32_t pkt_len = pkt->pkt_len + port->frame_overhead;\r\n \r\n #if RTE_SCHED_TS_CREDITS_CHECK\r\n \tif (!grinder_credits_check(port, pos)) {\r\ndiff --git a/lib/librte_sched/rte_sched.h b/lib/librte_sched/rte_sched.h\r\nindex 3f27755..e6bba22 100644\r\n--- a/lib/librte_sched/rte_sched.h\r\n+++ b/lib/librte_sched/rte_sched.h\r\n@@ -106,7 +106,7 @@ extern \"C\" {\r\n    2. Start of Frame Delimiter (SFD):       1 byte;\r\n    3. Frame Check Sequence (FCS):           4 bytes;\r\n    4. Inter Frame Gap (IFG):               12 bytes.\r\n-The FCS is considered overhead only if not included in the packet length (field pkt.pkt_len\r\n+The FCS is considered overhead only if not included in the packet length (field pkt_len\r\n of struct rte_mbuf). */\r\n #ifndef RTE_SCHED_FRAME_OVERHEAD_DEFAULT\r\n #define RTE_SCHED_FRAME_OVERHEAD_DEFAULT      24\r\n@@ -196,7 +196,7 @@ struct rte_sched_port_params {\r\n };\r\n \r\n /** Path through the scheduler hierarchy used by the scheduler enqueue operation to\r\n-identify the destination queue for the current packet. Stored in the field pkt.hash.sched\r\n+identify the destination queue for the current packet. 
Stored in the field hash.sched\r\n of struct rte_mbuf of each packet, typically written by the classification stage and read by\r\n scheduler enqueue.*/\r\n struct rte_sched_port_hierarchy {\r\n@@ -352,7 +352,7 @@ static inline void\r\n rte_sched_port_pkt_write(struct rte_mbuf *pkt,\r\n \tuint32_t subport, uint32_t pipe, uint32_t traffic_class, uint32_t queue, enum rte_meter_color color)\r\n {\r\n-\tstruct rte_sched_port_hierarchy *sched = (struct rte_sched_port_hierarchy *) &pkt->pkt.hash.sched;\r\n+\tstruct rte_sched_port_hierarchy *sched = (struct rte_sched_port_hierarchy *) &pkt->hash.sched;\r\n \r\n \tsched->color = (uint32_t) color;\r\n \tsched->subport = subport;\r\n@@ -381,7 +381,7 @@ rte_sched_port_pkt_write(struct rte_mbuf *pkt,\r\n static inline void\r\n rte_sched_port_pkt_read_tree_path(struct rte_mbuf *pkt, uint32_t *subport, uint32_t *pipe, uint32_t *traffic_class, uint32_t *queue)\r\n {\r\n-\tstruct rte_sched_port_hierarchy *sched = (struct rte_sched_port_hierarchy *) &pkt->pkt.hash.sched;\r\n+\tstruct rte_sched_port_hierarchy *sched = (struct rte_sched_port_hierarchy *) &pkt->hash.sched;\r\n \r\n \t*subport = sched->subport;\r\n \t*pipe = sched->pipe;\r\n@@ -392,7 +392,7 @@ rte_sched_port_pkt_read_tree_path(struct rte_mbuf *pkt, uint32_t *subport, uint3\r\n static inline enum rte_meter_color\r\n rte_sched_port_pkt_read_color(struct rte_mbuf *pkt)\r\n {\r\n-\tstruct rte_sched_port_hierarchy *sched = (struct rte_sched_port_hierarchy *) &pkt->pkt.hash.sched;\r\n+\tstruct rte_sched_port_hierarchy *sched = (struct rte_sched_port_hierarchy *) &pkt->hash.sched;\r\n \r\n \treturn (enum rte_meter_color) sched->color;\r\n }\r\n",
    "prefixes": [
        "dpdk-dev",
        "4/6"
    ]
}
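
The response above can be consumed programmatically. A minimal sketch, parsing an excerpt of the JSON shown rather than fetching it live (field names are taken verbatim from the dump; the "actionable" check is an illustrative convention, not part of the Patchwork API):

```python
import json

# Excerpt of the GET /api/patches/255/ response body shown above,
# trimmed to the fields used below.
response_body = """
{
    "id": 255,
    "name": "[dpdk-dev,4/6] mbuf: remove the rte_pktmbuf structure",
    "state": "superseded",
    "archived": true,
    "submitter": {
        "id": 20,
        "name": "Bruce Richardson",
        "email": "bruce.richardson@intel.com"
    },
    "mbox": "https://patches.dpdk.org/patch/255/mbox/"
}
"""

patch = json.loads(response_body)

# Hypothetical triage rule: treat a patch as still needing review only if
# it is neither archived nor in a terminal state.
actionable = (not patch["archived"]
              and patch["state"] not in ("superseded", "rejected", "accepted"))

print(patch["name"])
print(patch["submitter"]["email"])
print("actionable:", actionable)
```

To apply the patch itself, the `mbox` URL can be downloaded and fed to `git am`; this one is marked `superseded`, so a later revision should be used instead.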