get:
Show a patch.

patch:
Partially update a patch; only the fields supplied in the request body are changed.

put:
Update a patch, replacing the full set of writable fields.
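
Updates go through PATCH or PUT with an authenticated request; the captured exchange below only exercises GET. A minimal libcurl sketch of a partial update, assuming a maintainer API token — the token value and the chosen field ("state") are placeholders, not taken from this page:

#include <curl/curl.h>
#include <stdio.h>

int main(void)
{
	struct curl_slist *hdrs = NULL;
	CURLcode rc;
	CURL *c;

	curl_global_init(CURL_GLOBAL_DEFAULT);
	c = curl_easy_init();
	if (c == NULL)
		return 1;

	/* Placeholder token: write access requires a real Patchwork API token. */
	hdrs = curl_slist_append(hdrs, "Authorization: Token 0123456789abcdef");
	hdrs = curl_slist_append(hdrs, "Content-Type: application/json");

	curl_easy_setopt(c, CURLOPT_URL, "http://patches.dpdk.org/api/patches/425/");
	curl_easy_setopt(c, CURLOPT_CUSTOMREQUEST, "PATCH");   /* partial update */
	curl_easy_setopt(c, CURLOPT_HTTPHEADER, hdrs);
	curl_easy_setopt(c, CURLOPT_POSTFIELDS, "{\"state\": \"accepted\"}");

	rc = curl_easy_perform(c);
	if (rc != CURLE_OK)
		fprintf(stderr, "PATCH failed: %s\n", curl_easy_strerror(rc));

	curl_slist_free_all(hdrs);
	curl_easy_cleanup(c);
	curl_global_cleanup();
	return rc == CURLE_OK ? 0 : 1;
}

A PUT would send the complete set of writable fields instead of just the ones being changed.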

GET /api/patches/425/?format=api
HTTP 200 OK
Allow: GET, PUT, PATCH, HEAD, OPTIONS
Content-Type: application/json
Vary: Accept

{
    "id": 425,
    "url": "http://patches.dpdk.org/api/patches/425/?format=api",
    "web_url": "http://patches.dpdk.org/project/dpdk/patch/1411046390-29478-3-git-send-email-huawei.xie@intel.com/",
    "project": {
        "id": 1,
        "url": "http://patches.dpdk.org/api/projects/1/?format=api",
        "name": "DPDK",
        "link_name": "dpdk",
        "list_id": "dev.dpdk.org",
        "list_email": "dev@dpdk.org",
        "web_url": "http://core.dpdk.org",
        "scm_url": "git://dpdk.org/dpdk",
        "webscm_url": "http://git.dpdk.org/dpdk",
        "list_archive_url": "https://inbox.dpdk.org/dev",
        "list_archive_url_format": "https://inbox.dpdk.org/dev/{}",
        "commit_url_format": ""
    },
    "msgid": "<1411046390-29478-3-git-send-email-huawei.xie@intel.com>",
    "list_archive_url": "https://inbox.dpdk.org/dev/1411046390-29478-3-git-send-email-huawei.xie@intel.com",
    "date": "2014-09-18T13:19:50",
    "name": "[dpdk-dev,2/2] examples/vhost: vhost example modification to use vhost lib API",
    "commit_ref": null,
    "pull_url": null,
    "state": "superseded",
    "archived": true,
    "hash": "0ef7b40fc221d42d116250c9c55e617a2b047fb7",
    "submitter": {
        "id": 16,
        "url": "http://patches.dpdk.org/api/people/16/?format=api",
        "name": "Huawei Xie",
        "email": "huawei.xie@intel.com"
    },
    "delegate": null,
    "mbox": "http://patches.dpdk.org/project/dpdk/patch/1411046390-29478-3-git-send-email-huawei.xie@intel.com/mbox/",
    "series": [],
    "comments": "http://patches.dpdk.org/api/patches/425/comments/",
    "check": "pending",
    "checks": "http://patches.dpdk.org/api/patches/425/checks/",
    "tags": {},
    "related": [],
    "headers": {
        "Return-Path": "<dev-bounces@dpdk.org>",
        "X-Original-To": "patchwork@dpdk.org",
        "Delivered-To": "patchwork@dpdk.org",
        "Received": [
            "from [92.243.14.124] (localhost [IPv6:::1])\n\tby dpdk.org (Postfix) with ESMTP id B0A2BB3AB;\n\tThu, 18 Sep 2014 15:14:36 +0200 (CEST)",
            "from mga11.intel.com (mga11.intel.com [192.55.52.93])\n\tby dpdk.org (Postfix) with ESMTP id CEE33B3AA\n\tfor <dev@dpdk.org>; Thu, 18 Sep 2014 15:14:29 +0200 (CEST)",
            "from fmsmga001.fm.intel.com ([10.253.24.23])\n\tby fmsmga102.fm.intel.com with ESMTP; 18 Sep 2014 06:20:12 -0700",
            "from shvmail01.sh.intel.com ([10.239.29.42])\n\tby fmsmga001.fm.intel.com with ESMTP; 18 Sep 2014 06:20:02 -0700",
            "from shecgisg003.sh.intel.com (shecgisg003.sh.intel.com\n\t[10.239.29.90])\n\tby shvmail01.sh.intel.com with ESMTP id s8IDJxbT001715;\n\tThu, 18 Sep 2014 21:19:59 +0800",
            "from shecgisg003.sh.intel.com (localhost [127.0.0.1])\n\tby shecgisg003.sh.intel.com (8.13.6/8.13.6/SuSE Linux 0.8) with ESMTP\n\tid s8IDJvsq029529; Thu, 18 Sep 2014 21:19:59 +0800",
            "(from hxie5@localhost)\n\tby shecgisg003.sh.intel.com (8.13.6/8.13.6/Submit) id s8IDJvam029523; \n\tThu, 18 Sep 2014 21:19:57 +0800"
        ],
        "X-ExtLoop1": "1",
        "X-IronPort-AV": "E=Sophos;i=\"5.04,547,1406617200\"; d=\"scan'208\";a=\"593248728\"",
        "From": "Huawei Xie <huawei.xie@intel.com>",
        "To": "dev@dpdk.org",
        "Date": "Thu, 18 Sep 2014 21:19:50 +0800",
        "Message-Id": "<1411046390-29478-3-git-send-email-huawei.xie@intel.com>",
        "X-Mailer": "git-send-email 1.7.4.1",
        "In-Reply-To": "<1411046390-29478-1-git-send-email-huawei.xie@intel.com>",
        "References": "<1411046390-29478-1-git-send-email-huawei.xie@intel.com>",
        "Subject": "[dpdk-dev] [PATCH 2/2] examples/vhost: vhost example modification\n\tto use vhost lib API",
        "X-BeenThere": "dev@dpdk.org",
        "X-Mailman-Version": "2.1.15",
        "Precedence": "list",
        "List-Id": "patches and discussions about DPDK <dev.dpdk.org>",
        "List-Unsubscribe": "<http://dpdk.org/ml/options/dev>,\n\t<mailto:dev-request@dpdk.org?subject=unsubscribe>",
        "List-Archive": "<http://dpdk.org/ml/archives/dev/>",
        "List-Post": "<mailto:dev@dpdk.org>",
        "List-Help": "<mailto:dev-request@dpdk.org?subject=help>",
        "List-Subscribe": "<http://dpdk.org/ml/listinfo/dev>,\n\t<mailto:dev-request@dpdk.org?subject=subscribe>",
        "Errors-To": "dev-bounces@dpdk.org",
        "Sender": "\"dev\" <dev-bounces@dpdk.org>"
    },
    "content": "This vhost example demonstrates how to integrate user space vhost with DPDK\naccelerated ethernet vSwitch.\n 1) rte_vhost_driver_register initialises vhost driver.\n 2) rte_vhost_driver_callback_register registers new_device/destroy_device callbacks.\n    Those callbacks should be implemented in ethernet switch application.\n    new_device is called when a virtio_device is ready for processing in vSwitch.\n    destroy_device is called when a virtio_device is de-activated by guest OS.\n 3) rte_vhost_driver_session_start starts vhost driver session loop.\n 4) rte_vhost_enqueue/dequeue_burst to send packets to or receive packets from\n    guest virtio device. virtio_dev_rx/tx is removed.\n 5) zero copy feature is implemented in the example.\n 6) mergable rx/tx will be implemented in vhost lib.\n\nSigned-off-by: Huawei Xie <huawei.xie@intel.com>\n---\n examples/vhost/Makefile |   52 ++\n examples/vhost/main.c   | 1455 +++++++++++++----------------------------------\n examples/vhost/main.h   |   47 +-\n 3 files changed, 483 insertions(+), 1071 deletions(-)\n create mode 100644 examples/vhost/Makefile",
    "diff": "diff --git a/examples/vhost/Makefile b/examples/vhost/Makefile\nnew file mode 100644\nindex 0000000..a4d4fb0\n--- /dev/null\n+++ b/examples/vhost/Makefile\n@@ -0,0 +1,52 @@\n+#   BSD LICENSE\n+#\n+#   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.\n+#   All rights reserved.\n+#\n+#   Redistribution and use in source and binary forms, with or without\n+#   modification, are permitted provided that the following conditions\n+#   are met:\n+#\n+#     * Redistributions of source code must retain the above copyright\n+#       notice, this list of conditions and the following disclaimer.\n+#     * Redistributions in binary form must reproduce the above copyright\n+#       notice, this list of conditions and the following disclaimer in\n+#       the documentation and/or other materials provided with the\n+#       distribution.\n+#     * Neither the name of Intel Corporation nor the names of its\n+#       contributors may be used to endorse or promote products derived\n+#       from this software without specific prior written permission.\n+#\n+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS\n+#   \"AS IS\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT\n+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR\n+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT\n+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,\n+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT\n+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,\n+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY\n+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT\n+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\n+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n+\n+ifeq ($(RTE_SDK),)\n+$(error \"Please define RTE_SDK environment variable\")\n+endif\n+\n+# Default target, can be overriden by command line or environment\n+RTE_TARGET ?= x86_64-native-linuxapp-gcc\n+\n+include $(RTE_SDK)/mk/rte.vars.mk\n+\n+# binary name\n+APP = vhost-switch\n+\n+# all source are stored in SRCS-y\n+#SRCS-y := cusedrv.c loopback-userspace.c\n+SRCS-y := main.c\n+\n+CFLAGS += -O2 -I/usr/local/include -D_FILE_OFFSET_BITS=64 -Wno-unused-parameter\n+CFLAGS += $(WERROR_FLAGS)\n+LDFLAGS += -lfuse\n+\n+include $(RTE_SDK)/mk/rte.extapp.mk\ndiff --git a/examples/vhost/main.c b/examples/vhost/main.c\nindex 7d9e6a2..a796dbd 100644\n--- a/examples/vhost/main.c\n+++ b/examples/vhost/main.c\n@@ -49,10 +49,9 @@\n #include <rte_log.h>\n #include <rte_string_fns.h>\n #include <rte_malloc.h>\n+#include <rte_virtio_net.h>\n \n #include \"main.h\"\n-#include \"virtio-net.h\"\n-#include \"vhost-net-cdev.h\"\n \n #define MAX_QUEUES 128\n \n@@ -100,7 +99,6 @@\n #define TX_WTHRESH 0  /* Default values of TX write-back threshold reg. */\n \n #define MAX_PKT_BURST 32 \t\t/* Max burst size for RX/TX */\n-#define MAX_MRG_PKT_BURST 16 \t/* Max burst for merge buffers. Set to 1 due to performance issue. 
*/\n #define BURST_TX_DRAIN_US 100 \t/* TX drain every ~100us */\n \n #define BURST_RX_WAIT_US 15 \t/* Defines how long we wait between retries on RX */\n@@ -168,13 +166,14 @@ static uint32_t num_switching_cores = 0;\n \n /* number of devices/queues to support*/\n static uint32_t num_queues = 0;\n-uint32_t num_devices = 0;\n+static uint32_t num_devices;\n \n /*\n  * Enable zero copy, pkts buffer will directly dma to hw descriptor,\n  * disabled on default.\n  */\n static uint32_t zero_copy;\n+static int mergeable;\n \n /* number of descriptors to apply*/\n static uint32_t num_rx_descriptor = RTE_TEST_RX_DESC_DEFAULT_ZCP;\n@@ -218,12 +217,6 @@ static uint32_t burst_rx_retry_num = BURST_RX_RETRIES;\n /* Character device basename. Can be set by user. */\n static char dev_basename[MAX_BASENAME_SZ] = \"vhost-net\";\n \n-/* Charater device index. Can be set by user. */\n-static uint32_t dev_index = 0;\n-\n-/* This can be set by the user so it is made available here. */\n-extern uint64_t VHOST_FEATURES;\n-\n /* Default configuration for rx and tx thresholds etc. */\n static struct rte_eth_rxconf rx_conf_default = {\n \t.rx_thresh = {\n@@ -678,11 +671,12 @@ us_vhost_parse_args(int argc, char **argv)\n \t\t\t\t\tus_vhost_usage(prgname);\n \t\t\t\t\treturn -1;\n \t\t\t\t} else {\n+\t\t\t\t\tmergeable = !!ret;\n \t\t\t\t\tif (ret) {\n \t\t\t\t\t\tvmdq_conf_default.rxmode.jumbo_frame = 1;\n \t\t\t\t\t\tvmdq_conf_default.rxmode.max_rx_pkt_len\n \t\t\t\t\t\t\t= JUMBO_FRAME_MAX_SIZE;\n-\t\t\t\t\t\tVHOST_FEATURES = (1ULL << VIRTIO_NET_F_MRG_RXBUF);\n+\n \t\t\t\t\t}\n \t\t\t\t}\n \t\t\t}\n@@ -708,17 +702,6 @@ us_vhost_parse_args(int argc, char **argv)\n \t\t\t\t}\n \t\t\t}\n \n-\t\t\t/* Set character device index. */\n-\t\t\tif (!strncmp(long_option[option_index].name, \"dev-index\", MAX_LONG_OPT_SZ)) {\n-\t\t\t\tret = parse_num_opt(optarg, INT32_MAX);\n-\t\t\t\tif (ret == -1) {\n-\t\t\t\t\tRTE_LOG(INFO, VHOST_CONFIG, \"Invalid argument for character device index [0..N]\\n\");\n-\t\t\t\t\tus_vhost_usage(prgname);\n-\t\t\t\t\treturn -1;\n-\t\t\t\t} else\n-\t\t\t\t\tdev_index = ret;\n-\t\t\t}\n-\n \t\t\t/* Enable/disable rx/tx zero copy. */\n \t\t\tif (!strncmp(long_option[option_index].name,\n \t\t\t\t\"zero-copy\", MAX_LONG_OPT_SZ)) {\n@@ -867,36 +850,11 @@ static unsigned check_ports_num(unsigned nb_ports)\n #endif\n \n /*\n- * Function to convert guest physical addresses to vhost virtual addresses. 
This\n- * is used to convert virtio buffer addresses.\n- */\n-static inline uint64_t __attribute__((always_inline))\n-gpa_to_vva(struct virtio_net *dev, uint64_t guest_pa)\n-{\n-\tstruct virtio_memory_regions *region;\n-\tuint32_t regionidx;\n-\tuint64_t vhost_va = 0;\n-\n-\tfor (regionidx = 0; regionidx < dev->mem->nregions; regionidx++) {\n-\t\tregion = &dev->mem->regions[regionidx];\n-\t\tif ((guest_pa >= region->guest_phys_address) &&\n-\t\t\t(guest_pa <= region->guest_phys_address_end)) {\n-\t\t\tvhost_va = region->address_offset + guest_pa;\n-\t\t\tbreak;\n-\t\t}\n-\t}\n-\tLOG_DEBUG(VHOST_DATA, \"(%\"PRIu64\") GPA %p| VVA %p\\n\",\n-\t\tdev->device_fh, (void*)(uintptr_t)guest_pa, (void*)(uintptr_t)vhost_va);\n-\n-\treturn vhost_va;\n-}\n-\n-/*\n  * Function to convert guest physical addresses to vhost physical addresses.\n  * This is used to convert virtio buffer addresses.\n  */\n static inline uint64_t __attribute__((always_inline))\n-gpa_to_hpa(struct virtio_net *dev, uint64_t guest_pa,\n+gpa_to_hpa(struct vhost_dev  *vdev, uint64_t guest_pa,\n \tuint32_t buf_len, hpa_type *addr_type)\n {\n \tstruct virtio_memory_regions_hpa *region;\n@@ -905,8 +863,8 @@ gpa_to_hpa(struct virtio_net *dev, uint64_t guest_pa,\n \n \t*addr_type = PHYS_ADDR_INVALID;\n \n-\tfor (regionidx = 0; regionidx < dev->mem->nregions_hpa; regionidx++) {\n-\t\tregion = &dev->mem->regions_hpa[regionidx];\n+\tfor (regionidx = 0; regionidx < vdev->nregions_hpa; regionidx++) {\n+\t\tregion = &vdev->regions_hpa[regionidx];\n \t\tif ((guest_pa >= region->guest_phys_address) &&\n \t\t\t(guest_pa <= region->guest_phys_address_end)) {\n \t\t\tvhost_pa = region->host_phys_addr_offset + guest_pa;\n@@ -927,497 +885,6 @@ gpa_to_hpa(struct virtio_net *dev, uint64_t guest_pa,\n }\n \n /*\n- * This function adds buffers to the virtio devices RX virtqueue. Buffers can\n- * be received from the physical port or from another virtio device. A packet\n- * count is returned to indicate the number of packets that were succesfully\n- * added to the RX queue. This function works when mergeable is disabled.\n- */\n-static inline uint32_t __attribute__((always_inline))\n-virtio_dev_rx(struct virtio_net *dev, struct rte_mbuf **pkts, uint32_t count)\n-{\n-\tstruct vhost_virtqueue *vq;\n-\tstruct vring_desc *desc;\n-\tstruct rte_mbuf *buff;\n-\t/* The virtio_hdr is initialised to 0. */\n-\tstruct virtio_net_hdr_mrg_rxbuf virtio_hdr = {{0,0,0,0,0,0},0};\n-\tuint64_t buff_addr = 0;\n-\tuint64_t buff_hdr_addr = 0;\n-\tuint32_t head[MAX_PKT_BURST], packet_len = 0;\n-\tuint32_t head_idx, packet_success = 0;\n-\tuint32_t retry = 0;\n-\tuint16_t avail_idx, res_cur_idx;\n-\tuint16_t res_base_idx, res_end_idx;\n-\tuint16_t free_entries;\n-\tuint8_t success = 0;\n-\n-\tLOG_DEBUG(VHOST_DATA, \"(%\"PRIu64\") virtio_dev_rx()\\n\", dev->device_fh);\n-\tvq = dev->virtqueue[VIRTIO_RXQ];\n-\tcount = (count > MAX_PKT_BURST) ? MAX_PKT_BURST : count;\n-\n-\t/* As many data cores may want access to available buffers, they need to be reserved. */\n-\tdo {\n-\t\tres_base_idx = vq->last_used_idx_res;\n-\t\tavail_idx = *((volatile uint16_t *)&vq->avail->idx);\n-\n-\t\tfree_entries = (avail_idx - res_base_idx);\n-\t\t/* If retry is enabled and the queue is full then we wait and retry to avoid packet loss. 
*/\n-\t\tif (enable_retry && unlikely(count > free_entries)) {\n-\t\t\tfor (retry = 0; retry < burst_rx_retry_num; retry++) {\n-\t\t\t\trte_delay_us(burst_rx_delay_time);\n-\t\t\t\tavail_idx =\n-\t\t\t\t\t*((volatile uint16_t *)&vq->avail->idx);\n-\t\t\t\tfree_entries = (avail_idx - res_base_idx);\n-\t\t\t\tif (count <= free_entries)\n-\t\t\t\t\tbreak;\n-\t\t\t}\n-\t\t}\n-\n-\t\t/*check that we have enough buffers*/\n-\t\tif (unlikely(count > free_entries))\n-\t\t\tcount = free_entries;\n-\n-\t\tif (count == 0)\n-\t\t\treturn 0;\n-\n-\t\tres_end_idx = res_base_idx + count;\n-\t\t/* vq->last_used_idx_res is atomically updated. */\n-\t\tsuccess = rte_atomic16_cmpset(&vq->last_used_idx_res, res_base_idx,\n-\t\t\t\t\t\t\t\t\tres_end_idx);\n-\t} while (unlikely(success == 0));\n-\tres_cur_idx = res_base_idx;\n-\tLOG_DEBUG(VHOST_DATA, \"(%\"PRIu64\") Current Index %d| End Index %d\\n\", dev->device_fh, res_cur_idx, res_end_idx);\n-\n-\t/* Prefetch available ring to retrieve indexes. */\n-\trte_prefetch0(&vq->avail->ring[res_cur_idx & (vq->size - 1)]);\n-\n-\t/* Retrieve all of the head indexes first to avoid caching issues. */\n-\tfor (head_idx = 0; head_idx < count; head_idx++)\n-\t\thead[head_idx] = vq->avail->ring[(res_cur_idx + head_idx) & (vq->size - 1)];\n-\n-\t/*Prefetch descriptor index. */\n-\trte_prefetch0(&vq->desc[head[packet_success]]);\n-\n-\twhile (res_cur_idx != res_end_idx) {\n-\t\t/* Get descriptor from available ring */\n-\t\tdesc = &vq->desc[head[packet_success]];\n-\n-\t\tbuff = pkts[packet_success];\n-\n-\t\t/* Convert from gpa to vva (guest physical addr -> vhost virtual addr) */\n-\t\tbuff_addr = gpa_to_vva(dev, desc->addr);\n-\t\t/* Prefetch buffer address. */\n-\t\trte_prefetch0((void*)(uintptr_t)buff_addr);\n-\n-\t\t/* Copy virtio_hdr to packet and increment buffer address */\n-\t\tbuff_hdr_addr = buff_addr;\n-\t\tpacket_len = rte_pktmbuf_data_len(buff) + vq->vhost_hlen;\n-\n-\t\t/*\n-\t\t * If the descriptors are chained the header and data are\n-\t\t * placed in separate buffers.\n-\t\t */\n-\t\tif (desc->flags & VRING_DESC_F_NEXT) {\n-\t\t\tdesc->len = vq->vhost_hlen;\n-\t\t\tdesc = &vq->desc[desc->next];\n-\t\t\t/* Buffer address translation. */\n-\t\t\tbuff_addr = gpa_to_vva(dev, desc->addr);\n-\t\t\tdesc->len = rte_pktmbuf_data_len(buff);\n-\t\t} else {\n-\t\t\tbuff_addr += vq->vhost_hlen;\n-\t\t\tdesc->len = packet_len;\n-\t\t}\n-\n-\t\t/* Update used ring with desc information */\n-\t\tvq->used->ring[res_cur_idx & (vq->size - 1)].id = head[packet_success];\n-\t\tvq->used->ring[res_cur_idx & (vq->size - 1)].len = packet_len;\n-\n-\t\t/* Copy mbuf data to buffer */\n-\t\trte_memcpy((void *)(uintptr_t)buff_addr,\n-\t\t\t(const void *)buff->pkt.data,\n-\t\t\trte_pktmbuf_data_len(buff));\n-\t\tPRINT_PACKET(dev, (uintptr_t)buff_addr,\n-\t\t\trte_pktmbuf_data_len(buff), 0);\n-\n-\t\tres_cur_idx++;\n-\t\tpacket_success++;\n-\n-\t\trte_memcpy((void *)(uintptr_t)buff_hdr_addr,\n-\t\t\t(const void *)&virtio_hdr, vq->vhost_hlen);\n-\n-\t\tPRINT_PACKET(dev, (uintptr_t)buff_hdr_addr, vq->vhost_hlen, 1);\n-\n-\t\tif (res_cur_idx < res_end_idx) {\n-\t\t\t/* Prefetch descriptor index. */\n-\t\t\trte_prefetch0(&vq->desc[head[packet_success]]);\n-\t\t}\n-\t}\n-\n-\trte_compiler_barrier();\n-\n-\t/* Wait until it's our turn to add our buffer to the used ring. */\n-\twhile (unlikely(vq->last_used_idx != res_base_idx))\n-\t\trte_pause();\n-\n-\t*(volatile uint16_t *)&vq->used->idx += count;\n-\tvq->last_used_idx = res_end_idx;\n-\n-\t/* Kick the guest if necessary. 
*/\n-\tif (!(vq->avail->flags & VRING_AVAIL_F_NO_INTERRUPT))\n-\t\teventfd_write((int)vq->kickfd, 1);\n-\treturn count;\n-}\n-\n-static inline uint32_t __attribute__((always_inline))\n-copy_from_mbuf_to_vring(struct virtio_net *dev,\n-\tuint16_t res_base_idx, uint16_t res_end_idx,\n-\tstruct rte_mbuf *pkt)\n-{\n-\tuint32_t vec_idx = 0;\n-\tuint32_t entry_success = 0;\n-\tstruct vhost_virtqueue *vq;\n-\t/* The virtio_hdr is initialised to 0. */\n-\tstruct virtio_net_hdr_mrg_rxbuf virtio_hdr = {\n-\t\t{0, 0, 0, 0, 0, 0}, 0};\n-\tuint16_t cur_idx = res_base_idx;\n-\tuint64_t vb_addr = 0;\n-\tuint64_t vb_hdr_addr = 0;\n-\tuint32_t seg_offset = 0;\n-\tuint32_t vb_offset = 0;\n-\tuint32_t seg_avail;\n-\tuint32_t vb_avail;\n-\tuint32_t cpy_len, entry_len;\n-\n-\tif (pkt == NULL)\n-\t\treturn 0;\n-\n-\tLOG_DEBUG(VHOST_DATA, \"(%\"PRIu64\") Current Index %d| \"\n-\t\t\"End Index %d\\n\",\n-\t\tdev->device_fh, cur_idx, res_end_idx);\n-\n-\t/*\n-\t * Convert from gpa to vva\n-\t * (guest physical addr -> vhost virtual addr)\n-\t */\n-\tvq = dev->virtqueue[VIRTIO_RXQ];\n-\tvb_addr =\n-\t\tgpa_to_vva(dev, vq->buf_vec[vec_idx].buf_addr);\n-\tvb_hdr_addr = vb_addr;\n-\n-\t/* Prefetch buffer address. */\n-\trte_prefetch0((void *)(uintptr_t)vb_addr);\n-\n-\tvirtio_hdr.num_buffers = res_end_idx - res_base_idx;\n-\n-\tLOG_DEBUG(VHOST_DATA, \"(%\"PRIu64\") RX: Num merge buffers %d\\n\",\n-\t\tdev->device_fh, virtio_hdr.num_buffers);\n-\n-\trte_memcpy((void *)(uintptr_t)vb_hdr_addr,\n-\t\t(const void *)&virtio_hdr, vq->vhost_hlen);\n-\n-\tPRINT_PACKET(dev, (uintptr_t)vb_hdr_addr, vq->vhost_hlen, 1);\n-\n-\tseg_avail = rte_pktmbuf_data_len(pkt);\n-\tvb_offset = vq->vhost_hlen;\n-\tvb_avail =\n-\t\tvq->buf_vec[vec_idx].buf_len - vq->vhost_hlen;\n-\n-\tentry_len = vq->vhost_hlen;\n-\n-\tif (vb_avail == 0) {\n-\t\tuint32_t desc_idx =\n-\t\t\tvq->buf_vec[vec_idx].desc_idx;\n-\t\tvq->desc[desc_idx].len = vq->vhost_hlen;\n-\n-\t\tif ((vq->desc[desc_idx].flags\n-\t\t\t& VRING_DESC_F_NEXT) == 0) {\n-\t\t\t/* Update used ring with desc information */\n-\t\t\tvq->used->ring[cur_idx & (vq->size - 1)].id\n-\t\t\t\t= vq->buf_vec[vec_idx].desc_idx;\n-\t\t\tvq->used->ring[cur_idx & (vq->size - 1)].len\n-\t\t\t\t= entry_len;\n-\n-\t\t\tentry_len = 0;\n-\t\t\tcur_idx++;\n-\t\t\tentry_success++;\n-\t\t}\n-\n-\t\tvec_idx++;\n-\t\tvb_addr =\n-\t\t\tgpa_to_vva(dev, vq->buf_vec[vec_idx].buf_addr);\n-\n-\t\t/* Prefetch buffer address. 
*/\n-\t\trte_prefetch0((void *)(uintptr_t)vb_addr);\n-\t\tvb_offset = 0;\n-\t\tvb_avail = vq->buf_vec[vec_idx].buf_len;\n-\t}\n-\n-\tcpy_len = RTE_MIN(vb_avail, seg_avail);\n-\n-\twhile (cpy_len > 0) {\n-\t\t/* Copy mbuf data to vring buffer */\n-\t\trte_memcpy((void *)(uintptr_t)(vb_addr + vb_offset),\n-\t\t\t(const void *)(rte_pktmbuf_mtod(pkt, char*) + seg_offset),\n-\t\t\tcpy_len);\n-\n-\t\tPRINT_PACKET(dev,\n-\t\t\t(uintptr_t)(vb_addr + vb_offset),\n-\t\t\tcpy_len, 0);\n-\n-\t\tseg_offset += cpy_len;\n-\t\tvb_offset += cpy_len;\n-\t\tseg_avail -= cpy_len;\n-\t\tvb_avail -= cpy_len;\n-\t\tentry_len += cpy_len;\n-\n-\t\tif (seg_avail != 0) {\n-\t\t\t/*\n-\t\t\t * The virtio buffer in this vring\n-\t\t\t * entry reach to its end.\n-\t\t\t * But the segment doesn't complete.\n-\t\t\t */\n-\t\t\tif ((vq->desc[vq->buf_vec[vec_idx].desc_idx].flags &\n-\t\t\t\tVRING_DESC_F_NEXT) == 0) {\n-\t\t\t\t/* Update used ring with desc information */\n-\t\t\t\tvq->used->ring[cur_idx & (vq->size - 1)].id\n-\t\t\t\t\t= vq->buf_vec[vec_idx].desc_idx;\n-\t\t\t\tvq->used->ring[cur_idx & (vq->size - 1)].len\n-\t\t\t\t\t= entry_len;\n-\t\t\t\tentry_len = 0;\n-\t\t\t\tcur_idx++;\n-\t\t\t\tentry_success++;\n-\t\t\t}\n-\n-\t\t\tvec_idx++;\n-\t\t\tvb_addr = gpa_to_vva(dev,\n-\t\t\t\tvq->buf_vec[vec_idx].buf_addr);\n-\t\t\tvb_offset = 0;\n-\t\t\tvb_avail = vq->buf_vec[vec_idx].buf_len;\n-\t\t\tcpy_len = RTE_MIN(vb_avail, seg_avail);\n-\t\t} else {\n-\t\t\t/*\n-\t\t\t * This current segment complete, need continue to\n-\t\t\t * check if the whole packet complete or not.\n-\t\t\t */\n-\t\t\tpkt = pkt->pkt.next;\n-\t\t\tif (pkt != NULL) {\n-\t\t\t\t/*\n-\t\t\t\t * There are more segments.\n-\t\t\t\t */\n-\t\t\t\tif (vb_avail == 0) {\n-\t\t\t\t\t/*\n-\t\t\t\t\t * This current buffer from vring is\n-\t\t\t\t\t * used up, need fetch next buffer\n-\t\t\t\t\t * from buf_vec.\n-\t\t\t\t\t */\n-\t\t\t\t\tuint32_t desc_idx =\n-\t\t\t\t\t\tvq->buf_vec[vec_idx].desc_idx;\n-\t\t\t\t\tvq->desc[desc_idx].len = vb_offset;\n-\n-\t\t\t\t\tif ((vq->desc[desc_idx].flags &\n-\t\t\t\t\t\tVRING_DESC_F_NEXT) == 0) {\n-\t\t\t\t\t\tuint16_t wrapped_idx =\n-\t\t\t\t\t\t\tcur_idx & (vq->size - 1);\n-\t\t\t\t\t\t/*\n-\t\t\t\t\t\t * Update used ring with the\n-\t\t\t\t\t\t * descriptor information\n-\t\t\t\t\t\t */\n-\t\t\t\t\t\tvq->used->ring[wrapped_idx].id\n-\t\t\t\t\t\t\t= desc_idx;\n-\t\t\t\t\t\tvq->used->ring[wrapped_idx].len\n-\t\t\t\t\t\t\t= entry_len;\n-\t\t\t\t\t\tentry_success++;\n-\t\t\t\t\t\tentry_len = 0;\n-\t\t\t\t\t\tcur_idx++;\n-\t\t\t\t\t}\n-\n-\t\t\t\t\t/* Get next buffer from buf_vec. 
*/\n-\t\t\t\t\tvec_idx++;\n-\t\t\t\t\tvb_addr = gpa_to_vva(dev,\n-\t\t\t\t\t\tvq->buf_vec[vec_idx].buf_addr);\n-\t\t\t\t\tvb_avail =\n-\t\t\t\t\t\tvq->buf_vec[vec_idx].buf_len;\n-\t\t\t\t\tvb_offset = 0;\n-\t\t\t\t}\n-\n-\t\t\t\tseg_offset = 0;\n-\t\t\t\tseg_avail = rte_pktmbuf_data_len(pkt);\n-\t\t\t\tcpy_len = RTE_MIN(vb_avail, seg_avail);\n-\t\t\t} else {\n-\t\t\t\t/*\n-\t\t\t\t * This whole packet completes.\n-\t\t\t\t */\n-\t\t\t\tuint32_t desc_idx =\n-\t\t\t\t\tvq->buf_vec[vec_idx].desc_idx;\n-\t\t\t\tvq->desc[desc_idx].len = vb_offset;\n-\n-\t\t\t\twhile (vq->desc[desc_idx].flags &\n-\t\t\t\t\tVRING_DESC_F_NEXT) {\n-\t\t\t\t\tdesc_idx = vq->desc[desc_idx].next;\n-\t\t\t\t\t vq->desc[desc_idx].len = 0;\n-\t\t\t\t}\n-\n-\t\t\t\t/* Update used ring with desc information */\n-\t\t\t\tvq->used->ring[cur_idx & (vq->size - 1)].id\n-\t\t\t\t\t= vq->buf_vec[vec_idx].desc_idx;\n-\t\t\t\tvq->used->ring[cur_idx & (vq->size - 1)].len\n-\t\t\t\t\t= entry_len;\n-\t\t\t\tentry_len = 0;\n-\t\t\t\tcur_idx++;\n-\t\t\t\tentry_success++;\n-\t\t\t\tseg_avail = 0;\n-\t\t\t\tcpy_len = RTE_MIN(vb_avail, seg_avail);\n-\t\t\t}\n-\t\t}\n-\t}\n-\n-\treturn entry_success;\n-}\n-\n-/*\n- * This function adds buffers to the virtio devices RX virtqueue. Buffers can\n- * be received from the physical port or from another virtio device. A packet\n- * count is returned to indicate the number of packets that were succesfully\n- * added to the RX queue. This function works for mergeable RX.\n- */\n-static inline uint32_t __attribute__((always_inline))\n-virtio_dev_merge_rx(struct virtio_net *dev, struct rte_mbuf **pkts,\n-\tuint32_t count)\n-{\n-\tstruct vhost_virtqueue *vq;\n-\tuint32_t pkt_idx = 0, entry_success = 0;\n-\tuint32_t retry = 0;\n-\tuint16_t avail_idx, res_cur_idx;\n-\tuint16_t res_base_idx, res_end_idx;\n-\tuint8_t success = 0;\n-\n-\tLOG_DEBUG(VHOST_DATA, \"(%\"PRIu64\") virtio_dev_merge_rx()\\n\",\n-\t\tdev->device_fh);\n-\tvq = dev->virtqueue[VIRTIO_RXQ];\n-\tcount = RTE_MIN((uint32_t)MAX_PKT_BURST, count);\n-\n-\tif (count == 0)\n-\t\treturn 0;\n-\n-\tfor (pkt_idx = 0; pkt_idx < count; pkt_idx++) {\n-\t\tuint32_t secure_len = 0;\n-\t\tuint16_t need_cnt;\n-\t\tuint32_t vec_idx = 0;\n-\t\tuint32_t pkt_len = pkts[pkt_idx]->pkt.pkt_len + vq->vhost_hlen;\n-\t\tuint16_t i, id;\n-\n-\t\tdo {\n-\t\t\t/*\n-\t\t\t * As many data cores may want access to available\n-\t\t\t * buffers, they need to be reserved.\n-\t\t\t */\n-\t\t\tres_base_idx = vq->last_used_idx_res;\n-\t\t\tres_cur_idx = res_base_idx;\n-\n-\t\t\tdo {\n-\t\t\t\tavail_idx = *((volatile uint16_t *)&vq->avail->idx);\n-\t\t\t\tif (unlikely(res_cur_idx == avail_idx)) {\n-\t\t\t\t\t/*\n-\t\t\t\t\t * If retry is enabled and the queue is\n-\t\t\t\t\t * full then we wait and retry to avoid\n-\t\t\t\t\t * packet loss.\n-\t\t\t\t\t */\n-\t\t\t\t\tif (enable_retry) {\n-\t\t\t\t\t\tuint8_t cont = 0;\n-\t\t\t\t\t\tfor (retry = 0; retry < burst_rx_retry_num; retry++) {\n-\t\t\t\t\t\t\trte_delay_us(burst_rx_delay_time);\n-\t\t\t\t\t\t\tavail_idx =\n-\t\t\t\t\t\t\t\t*((volatile uint16_t *)&vq->avail->idx);\n-\t\t\t\t\t\t\tif (likely(res_cur_idx != avail_idx)) {\n-\t\t\t\t\t\t\t\tcont = 1;\n-\t\t\t\t\t\t\t\tbreak;\n-\t\t\t\t\t\t\t}\n-\t\t\t\t\t\t}\n-\t\t\t\t\t\tif (cont == 1)\n-\t\t\t\t\t\t\tcontinue;\n-\t\t\t\t\t}\n-\n-\t\t\t\t\tLOG_DEBUG(VHOST_DATA,\n-\t\t\t\t\t\t\"(%\"PRIu64\") Failed \"\n-\t\t\t\t\t\t\"to get enough desc from \"\n-\t\t\t\t\t\t\"vring\\n\",\n-\t\t\t\t\t\tdev->device_fh);\n-\t\t\t\t\treturn pkt_idx;\n-\t\t\t\t} else {\n-\t\t\t\t\tuint16_t 
wrapped_idx =\n-\t\t\t\t\t\t(res_cur_idx) & (vq->size - 1);\n-\t\t\t\t\tuint32_t idx =\n-\t\t\t\t\t\tvq->avail->ring[wrapped_idx];\n-\t\t\t\t\tuint8_t next_desc;\n-\n-\t\t\t\t\tdo {\n-\t\t\t\t\t\tnext_desc = 0;\n-\t\t\t\t\t\tsecure_len += vq->desc[idx].len;\n-\t\t\t\t\t\tif (vq->desc[idx].flags &\n-\t\t\t\t\t\t\tVRING_DESC_F_NEXT) {\n-\t\t\t\t\t\t\tidx = vq->desc[idx].next;\n-\t\t\t\t\t\t\tnext_desc = 1;\n-\t\t\t\t\t\t}\n-\t\t\t\t\t} while (next_desc);\n-\n-\t\t\t\t\tres_cur_idx++;\n-\t\t\t\t}\n-\t\t\t} while (pkt_len > secure_len);\n-\n-\t\t\t/* vq->last_used_idx_res is atomically updated. */\n-\t\t\tsuccess = rte_atomic16_cmpset(&vq->last_used_idx_res,\n-\t\t\t\t\t\t\tres_base_idx,\n-\t\t\t\t\t\t\tres_cur_idx);\n-\t\t} while (success == 0);\n-\n-\t\tid = res_base_idx;\n-\t\tneed_cnt = res_cur_idx - res_base_idx;\n-\n-\t\tfor (i = 0; i < need_cnt; i++, id++) {\n-\t\t\tuint16_t wrapped_idx = id & (vq->size - 1);\n-\t\t\tuint32_t idx = vq->avail->ring[wrapped_idx];\n-\t\t\tuint8_t next_desc;\n-\t\t\tdo {\n-\t\t\t\tnext_desc = 0;\n-\t\t\t\tvq->buf_vec[vec_idx].buf_addr =\n-\t\t\t\t\tvq->desc[idx].addr;\n-\t\t\t\tvq->buf_vec[vec_idx].buf_len =\n-\t\t\t\t\tvq->desc[idx].len;\n-\t\t\t\tvq->buf_vec[vec_idx].desc_idx = idx;\n-\t\t\t\tvec_idx++;\n-\n-\t\t\t\tif (vq->desc[idx].flags & VRING_DESC_F_NEXT) {\n-\t\t\t\t\tidx = vq->desc[idx].next;\n-\t\t\t\t\tnext_desc = 1;\n-\t\t\t\t}\n-\t\t\t} while (next_desc);\n-\t\t}\n-\n-\t\tres_end_idx = res_cur_idx;\n-\n-\t\tentry_success = copy_from_mbuf_to_vring(dev, res_base_idx,\n-\t\t\tres_end_idx, pkts[pkt_idx]);\n-\n-\t\trte_compiler_barrier();\n-\n-\t\t/*\n-\t\t * Wait until it's our turn to add our buffer\n-\t\t * to the used ring.\n-\t\t */\n-\t\twhile (unlikely(vq->last_used_idx != res_base_idx))\n-\t\t\trte_pause();\n-\n-\t\t*(volatile uint16_t *)&vq->used->idx += entry_success;\n-\t\tvq->last_used_idx = res_end_idx;\n-\n-\t\t/* Kick the guest if necessary. */\n-\t\tif (!(vq->avail->flags & VRING_AVAIL_F_NO_INTERRUPT))\n-\t\t\teventfd_write((int)vq->kickfd, 1);\n-\t}\n-\n-\treturn count;\n-}\n-\n-/*\n  * Compares a packet destination MAC address to a device MAC address.\n  */\n static inline int __attribute__((always_inline))\n@@ -1431,10 +898,11 @@ ether_addr_cmp(struct ether_addr *ea, struct ether_addr *eb)\n  * vlan tag to a VMDQ.\n  */\n static int\n-link_vmdq(struct virtio_net *dev, struct rte_mbuf *m)\n+link_vmdq(struct vhost_dev *vdev, struct rte_mbuf *m)\n {\n \tstruct ether_hdr *pkt_hdr;\n \tstruct virtio_net_data_ll *dev_ll;\n+\tstruct virtio_net *dev = vdev->dev;\n \tint i, ret;\n \n \t/* Learn MAC address of guest device from packet */\n@@ -1443,7 +911,7 @@ link_vmdq(struct virtio_net *dev, struct rte_mbuf *m)\n \tdev_ll = ll_root_used;\n \n \twhile (dev_ll != NULL) {\n-\t\tif (ether_addr_cmp(&(pkt_hdr->s_addr), &dev_ll->dev->mac_address)) {\n+\t\tif (ether_addr_cmp(&(pkt_hdr->s_addr), &dev_ll->vdev->mac_address)) {\n \t\t\tRTE_LOG(INFO, VHOST_DATA, \"(%\"PRIu64\") WARNING: This device is using an existing MAC address and has not been registered.\\n\", dev->device_fh);\n \t\t\treturn -1;\n \t\t}\n@@ -1451,30 +919,30 @@ link_vmdq(struct virtio_net *dev, struct rte_mbuf *m)\n \t}\n \n \tfor (i = 0; i < ETHER_ADDR_LEN; i++)\n-\t\tdev->mac_address.addr_bytes[i] = pkt_hdr->s_addr.addr_bytes[i];\n+\t\tvdev->mac_address.addr_bytes[i] = pkt_hdr->s_addr.addr_bytes[i];\n \n \t/* vlan_tag currently uses the device_id. 
*/\n-\tdev->vlan_tag = vlan_tags[dev->device_fh];\n+\tvdev->vlan_tag = vlan_tags[dev->device_fh];\n \n \t/* Print out VMDQ registration info. */\n \tRTE_LOG(INFO, VHOST_DATA, \"(%\"PRIu64\") MAC_ADDRESS %02x:%02x:%02x:%02x:%02x:%02x and VLAN_TAG %d registered\\n\",\n \t\tdev->device_fh,\n-\t\tdev->mac_address.addr_bytes[0], dev->mac_address.addr_bytes[1],\n-\t\tdev->mac_address.addr_bytes[2], dev->mac_address.addr_bytes[3],\n-\t\tdev->mac_address.addr_bytes[4], dev->mac_address.addr_bytes[5],\n-\t\tdev->vlan_tag);\n+\t\tvdev->mac_address.addr_bytes[0], vdev->mac_address.addr_bytes[1],\n+\t\tvdev->mac_address.addr_bytes[2], vdev->mac_address.addr_bytes[3],\n+\t\tvdev->mac_address.addr_bytes[4], vdev->mac_address.addr_bytes[5],\n+\t\tvdev->vlan_tag);\n \n \t/* Register the MAC address. */\n-\tret = rte_eth_dev_mac_addr_add(ports[0], &dev->mac_address, (uint32_t)dev->device_fh);\n+\tret = rte_eth_dev_mac_addr_add(ports[0], &vdev->mac_address, (uint32_t)dev->device_fh);\n \tif (ret)\n \t\tRTE_LOG(ERR, VHOST_DATA, \"(%\"PRIu64\") Failed to add device MAC address to VMDQ\\n\",\n \t\t\t\t\tdev->device_fh);\n \n \t/* Enable stripping of the vlan tag as we handle routing. */\n-\trte_eth_dev_set_vlan_strip_on_queue(ports[0], (uint16_t)dev->vmdq_rx_q, 1);\n+\trte_eth_dev_set_vlan_strip_on_queue(ports[0], (uint16_t)vdev->vmdq_rx_q, 1);\n \n \t/* Set device as ready for RX. */\n-\tdev->ready = DEVICE_RX;\n+\tvdev->ready = DEVICE_RX;\n \n \treturn 0;\n }\n@@ -1484,33 +952,33 @@ link_vmdq(struct virtio_net *dev, struct rte_mbuf *m)\n  * queue before disabling RX on the device.\n  */\n static inline void\n-unlink_vmdq(struct virtio_net *dev)\n+unlink_vmdq(struct vhost_dev *vdev)\n {\n \tunsigned i = 0;\n \tunsigned rx_count;\n \tstruct rte_mbuf *pkts_burst[MAX_PKT_BURST];\n \n-\tif (dev->ready == DEVICE_RX) {\n+\tif (vdev->ready == DEVICE_RX) {\n \t\t/*clear MAC and VLAN settings*/\n-\t\trte_eth_dev_mac_addr_remove(ports[0], &dev->mac_address);\n+\t\trte_eth_dev_mac_addr_remove(ports[0], &vdev->mac_address);\n \t\tfor (i = 0; i < 6; i++)\n-\t\t\tdev->mac_address.addr_bytes[i] = 0;\n+\t\t\tvdev->mac_address.addr_bytes[i] = 0;\n \n-\t\tdev->vlan_tag = 0;\n+\t\tvdev->vlan_tag = 0;\n \n \t\t/*Clear out the receive buffers*/\n \t\trx_count = rte_eth_rx_burst(ports[0],\n-\t\t\t\t\t(uint16_t)dev->vmdq_rx_q, pkts_burst, MAX_PKT_BURST);\n+\t\t\t\t\t(uint16_t)vdev->vmdq_rx_q, pkts_burst, MAX_PKT_BURST);\n \n \t\twhile (rx_count) {\n \t\t\tfor (i = 0; i < rx_count; i++)\n \t\t\t\trte_pktmbuf_free(pkts_burst[i]);\n \n \t\t\trx_count = rte_eth_rx_burst(ports[0],\n-\t\t\t\t\t(uint16_t)dev->vmdq_rx_q, pkts_burst, MAX_PKT_BURST);\n+\t\t\t\t\t(uint16_t)vdev->vmdq_rx_q, pkts_burst, MAX_PKT_BURST);\n \t\t}\n \n-\t\tdev->ready = DEVICE_MAC_LEARNING;\n+\t\tvdev->ready = DEVICE_MAC_LEARNING;\n \t}\n }\n \n@@ -1518,12 +986,14 @@ unlink_vmdq(struct virtio_net *dev)\n  * Check if the packet destination MAC address is for a local device. If so then put\n  * the packet on that devices RX queue. 
If not then return.\n  */\n-static inline unsigned __attribute__((always_inline))\n-virtio_tx_local(struct virtio_net *dev, struct rte_mbuf *m)\n+static inline int __attribute__((always_inline))\n+virtio_tx_local(struct vhost_dev *vdev, struct rte_mbuf *m)\n {\n \tstruct virtio_net_data_ll *dev_ll;\n \tstruct ether_hdr *pkt_hdr;\n \tuint64_t ret = 0;\n+\tstruct virtio_net *dev = vdev->dev;\n+\tstruct virtio_net *tdev; /* destination virito device */\n \n \tpkt_hdr = (struct ether_hdr *)m->pkt.data;\n \n@@ -1531,43 +1001,34 @@ virtio_tx_local(struct virtio_net *dev, struct rte_mbuf *m)\n \tdev_ll = ll_root_used;\n \n \twhile (dev_ll != NULL) {\n-\t\tif ((dev_ll->dev->ready == DEVICE_RX) && ether_addr_cmp(&(pkt_hdr->d_addr),\n-\t\t\t\t          &dev_ll->dev->mac_address)) {\n+\t\tif ((dev_ll->vdev->ready == DEVICE_RX) && ether_addr_cmp(&(pkt_hdr->d_addr),\n+\t\t\t&dev_ll->vdev->mac_address)) {\n \n \t\t\t/* Drop the packet if the TX packet is destined for the TX device. */\n-\t\t\tif (dev_ll->dev->device_fh == dev->device_fh) {\n+\t\t\tif (dev_ll->vdev->dev->device_fh == dev->device_fh) {\n \t\t\t\tLOG_DEBUG(VHOST_DATA, \"(%\"PRIu64\") TX: Source and destination MAC addresses are the same. Dropping packet.\\n\",\n-\t\t\t\t\t\t\tdev_ll->dev->device_fh);\n+\t\t\t\t\t\t\tdev->device_fh);\n \t\t\t\treturn 0;\n \t\t\t}\n+\t\t\ttdev = dev_ll->vdev->dev;\n \n+\t\t\tLOG_DEBUG(VHOST_DATA, \"(%\"PRIu64\") TX: MAC address is local\\n\", tdev->device_fh);\n \n-\t\t\tLOG_DEBUG(VHOST_DATA, \"(%\"PRIu64\") TX: MAC address is local\\n\", dev_ll->dev->device_fh);\n-\n-\t\t\tif (dev_ll->dev->remove) {\n+\t\t\tif (unlikely(dev_ll->vdev->remove)) {\n \t\t\t\t/*drop the packet if the device is marked for removal*/\n-\t\t\t\tLOG_DEBUG(VHOST_DATA, \"(%\"PRIu64\") Device is marked for removal\\n\", dev_ll->dev->device_fh);\n+\t\t\t\tLOG_DEBUG(VHOST_DATA, \"(%\"PRIu64\") Device is marked for removal\\n\", tdev->device_fh);\n \t\t\t} else {\n-\t\t\t\tuint32_t mergeable =\n-\t\t\t\t\tdev_ll->dev->features &\n-\t\t\t\t\t(1 << VIRTIO_NET_F_MRG_RXBUF);\n-\n \t\t\t\t/*send the packet to the local virtio device*/\n-\t\t\t\tif (likely(mergeable == 0))\n-\t\t\t\t\tret = virtio_dev_rx(dev_ll->dev, &m, 1);\n-\t\t\t\telse\n-\t\t\t\t\tret = virtio_dev_merge_rx(dev_ll->dev,\n-\t\t\t\t\t\t&m, 1);\n-\n+\t\t\t\tret = rte_vhost_enqueue_burst(tdev, VIRTIO_RXQ, &m, 1);\n \t\t\t\tif (enable_stats) {\n \t\t\t\t\trte_atomic64_add(\n-\t\t\t\t\t&dev_statistics[dev_ll->dev->device_fh].rx_total_atomic,\n+\t\t\t\t\t&dev_statistics[tdev->device_fh].rx_total_atomic,\n \t\t\t\t\t1);\n \t\t\t\t\trte_atomic64_add(\n-\t\t\t\t\t&dev_statistics[dev_ll->dev->device_fh].rx_atomic,\n+\t\t\t\t\t&dev_statistics[tdev->device_fh].rx_atomic,\n \t\t\t\t\tret);\n-\t\t\t\t\tdev_statistics[dev->device_fh].tx_total++;\n-\t\t\t\t\tdev_statistics[dev->device_fh].tx += ret;\n+\t\t\t\t\tdev_statistics[tdev->device_fh].tx_total++;\n+\t\t\t\t\tdev_statistics[tdev->device_fh].tx += ret;\n \t\t\t\t}\n \t\t\t}\n \n@@ -1584,47 +1045,49 @@ virtio_tx_local(struct virtio_net *dev, struct rte_mbuf *m)\n  * or the physical port.\n  */\n static inline void __attribute__((always_inline))\n-virtio_tx_route(struct virtio_net* dev, struct rte_mbuf *m, struct rte_mempool *mbuf_pool, uint16_t vlan_tag)\n+virtio_tx_route(struct vhost_dev *vdev, struct rte_mbuf *m, struct rte_mempool *mbuf_pool, uint16_t vlan_tag)\n {\n \tstruct mbuf_table *tx_q;\n-\tstruct vlan_ethhdr *vlan_hdr;\n \tstruct rte_mbuf **m_table;\n-\tstruct rte_mbuf *mbuf, *prev;\n \tunsigned len, ret, offset = 0;\n \tconst 
uint16_t lcore_id = rte_lcore_id();\n \tstruct virtio_net_data_ll *dev_ll = ll_root_used;\n \tstruct ether_hdr *pkt_hdr = (struct ether_hdr *)m->pkt.data;\n+\tstruct virtio_net *dev = vdev->dev;\n \n-\t/*check if destination is local VM*/\n-\tif ((vm2vm_mode == VM2VM_SOFTWARE) && (virtio_tx_local(dev, m) == 0))\n+\t/*heck if destination is local VM*/\n+\tif (vm2vm_mode == VM2VM_SOFTWARE && (virtio_tx_local(vdev, m) == 0)) {\n+\t\trte_pktmbuf_free(m);\n \t\treturn;\n+\t}\n \n \tif (vm2vm_mode == VM2VM_HARDWARE) {\n \t\twhile (dev_ll != NULL) {\n-\t\t\tif ((dev_ll->dev->ready == DEVICE_RX)\n+\t\t\tif ((dev_ll->vdev->ready == DEVICE_RX)\n \t\t\t\t&& ether_addr_cmp(&(pkt_hdr->d_addr),\n-\t\t\t\t&dev_ll->dev->mac_address)) {\n+\t\t\t\t&dev_ll->vdev->mac_address)) {\n \t\t\t\t/*\n \t\t\t\t * Drop the packet if the TX packet is\n \t\t\t\t * destined for the TX device.\n \t\t\t\t */\n-\t\t\t\tif (dev_ll->dev->device_fh == dev->device_fh) {\n+\t\t\t\tif (dev_ll->vdev->dev->device_fh == dev->device_fh) {\n \t\t\t\t\tLOG_DEBUG(VHOST_DATA,\n \t\t\t\t\t\"(%\"PRIu64\") TX: Source and destination\"\n \t\t\t\t\t\" MAC addresses are the same. Dropping \"\n \t\t\t\t\t\"packet.\\n\",\n-\t\t\t\t\tdev_ll->dev->device_fh);\n+\t\t\t\t\tdev_ll->vdev->device_fh);\n+\t\t\t\t\trte_pktmbuf_free(m);\n \t\t\t\t\treturn;\n \t\t\t\t}\n \t\t\t\toffset = 4;\n \t\t\t\tvlan_tag =\n \t\t\t\t(uint16_t)\n-\t\t\t\tvlan_tags[(uint16_t)dev_ll->dev->device_fh];\n+\t\t\t\tvlan_tags[(uint16_t)dev_ll->vdev->dev->device_fh];\n \n \t\t\t\tLOG_DEBUG(VHOST_DATA,\n \t\t\t\t\"(%\"PRIu64\") TX: pkt to local VM device id:\"\n \t\t\t\t\"(%\"PRIu64\") vlan tag: %d.\\n\",\n-\t\t\t\tdev->device_fh, dev_ll->dev->device_fh,\n+\t\t\t\tdev->device_fh, dev_ll->vdev->dev->device_fh,\n \t\t\t\tvlan_tag);\n \n \t\t\t\tbreak;\n@@ -1639,55 +1102,12 @@ virtio_tx_route(struct virtio_net* dev, struct rte_mbuf *m, struct rte_mempool *\n \ttx_q = &lcore_tx_queue[lcore_id];\n \tlen = tx_q->len;\n \n-\t/* Allocate an mbuf and populate the structure. */\n-\tmbuf = rte_pktmbuf_alloc(mbuf_pool);\n-\tif (unlikely(mbuf == NULL)) {\n-\t\tRTE_LOG(ERR, VHOST_DATA,\n-\t\t\t\"Failed to allocate memory for mbuf.\\n\");\n-\t\treturn;\n-\t}\n-\n-\tmbuf->pkt.data_len = m->pkt.data_len + VLAN_HLEN + offset;\n-\tmbuf->pkt.pkt_len = m->pkt.pkt_len + VLAN_HLEN + offset;\n-\tmbuf->pkt.nb_segs = m->pkt.nb_segs;\n-\n-\t/* Copy ethernet header to mbuf. */\n-\trte_memcpy((void*)mbuf->pkt.data, (const void*)m->pkt.data, ETH_HLEN);\n-\n-\n-\t/* Setup vlan header. Bytes need to be re-ordered for network with htons()*/\n-\tvlan_hdr = (struct vlan_ethhdr *) mbuf->pkt.data;\n-\tvlan_hdr->h_vlan_encapsulated_proto = vlan_hdr->h_vlan_proto;\n-\tvlan_hdr->h_vlan_proto = htons(ETH_P_8021Q);\n-\tvlan_hdr->h_vlan_TCI = htons(vlan_tag);\n-\n-\t/* Copy the remaining packet contents to the mbuf. */\n-\trte_memcpy((void*) ((uint8_t*)mbuf->pkt.data + VLAN_ETH_HLEN),\n-\t\t(const void*) ((uint8_t*)m->pkt.data + ETH_HLEN), (m->pkt.data_len - ETH_HLEN));\n-\n-\t/* Copy the remaining segments for the whole packet. */\n-\tprev = mbuf;\n-\twhile (m->pkt.next) {\n-\t\t/* Allocate an mbuf and populate the structure. 
*/\n-\t\tstruct rte_mbuf *next_mbuf = rte_pktmbuf_alloc(mbuf_pool);\n-\t\tif (unlikely(next_mbuf == NULL)) {\n-\t\t\trte_pktmbuf_free(mbuf);\n-\t\t\tRTE_LOG(ERR, VHOST_DATA,\n-\t\t\t\t\"Failed to allocate memory for mbuf.\\n\");\n-\t\t\treturn;\n-\t\t}\n-\n-\t\tm = m->pkt.next;\n-\t\tprev->pkt.next = next_mbuf;\n-\t\tprev = next_mbuf;\n-\t\tnext_mbuf->pkt.data_len = m->pkt.data_len;\n+\tm->ol_flags = PKT_TX_VLAN_PKT;\n+\tm->pkt.data_len += offset;\n+\tm->pkt.pkt_len = m->pkt.data_len;\n+\tm->pkt.vlan_macip.f.vlan_tci = vlan_tag;\n \n-\t\t/* Copy data to next mbuf. */\n-\t\trte_memcpy(rte_pktmbuf_mtod(next_mbuf, void *),\n-\t\t\trte_pktmbuf_mtod(m, const void *), m->pkt.data_len);\n-\t}\n-\n-\ttx_q->m_table[len] = mbuf;\n+\ttx_q->m_table[len] = m;\n \tlen++;\n \tif (enable_stats) {\n \t\tdev_statistics[dev->device_fh].tx_total++;\n@@ -1710,321 +1130,6 @@ virtio_tx_route(struct virtio_net* dev, struct rte_mbuf *m, struct rte_mempool *\n \ttx_q->len = len;\n \treturn;\n }\n-\n-static inline void __attribute__((always_inline))\n-virtio_dev_tx(struct virtio_net* dev, struct rte_mempool *mbuf_pool)\n-{\n-\tstruct rte_mbuf m;\n-\tstruct vhost_virtqueue *vq;\n-\tstruct vring_desc *desc;\n-\tuint64_t buff_addr = 0;\n-\tuint32_t head[MAX_PKT_BURST];\n-\tuint32_t used_idx;\n-\tuint32_t i;\n-\tuint16_t free_entries, packet_success = 0;\n-\tuint16_t avail_idx;\n-\n-\tvq = dev->virtqueue[VIRTIO_TXQ];\n-\tavail_idx =  *((volatile uint16_t *)&vq->avail->idx);\n-\n-\t/* If there are no available buffers then return. */\n-\tif (vq->last_used_idx == avail_idx)\n-\t\treturn;\n-\n-\tLOG_DEBUG(VHOST_DATA, \"(%\"PRIu64\") virtio_dev_tx()\\n\", dev->device_fh);\n-\n-\t/* Prefetch available ring to retrieve head indexes. */\n-\trte_prefetch0(&vq->avail->ring[vq->last_used_idx & (vq->size - 1)]);\n-\n-\t/*get the number of free entries in the ring*/\n-\tfree_entries = (avail_idx - vq->last_used_idx);\n-\n-\t/* Limit to MAX_PKT_BURST. */\n-\tif (free_entries > MAX_PKT_BURST)\n-\t\tfree_entries = MAX_PKT_BURST;\n-\n-\tLOG_DEBUG(VHOST_DATA, \"(%\"PRIu64\") Buffers available %d\\n\", dev->device_fh, free_entries);\n-\t/* Retrieve all of the head indexes first to avoid caching issues. */\n-\tfor (i = 0; i < free_entries; i++)\n-\t\thead[i] = vq->avail->ring[(vq->last_used_idx + i) & (vq->size - 1)];\n-\n-\t/* Prefetch descriptor index. */\n-\trte_prefetch0(&vq->desc[head[packet_success]]);\n-\trte_prefetch0(&vq->used->ring[vq->last_used_idx & (vq->size - 1)]);\n-\n-\twhile (packet_success < free_entries) {\n-\t\tdesc = &vq->desc[head[packet_success]];\n-\n-\t\t/* Discard first buffer as it is the virtio header */\n-\t\tdesc = &vq->desc[desc->next];\n-\n-\t\t/* Buffer address translation. */\n-\t\tbuff_addr = gpa_to_vva(dev, desc->addr);\n-\t\t/* Prefetch buffer address. */\n-\t\trte_prefetch0((void*)(uintptr_t)buff_addr);\n-\n-\t\tused_idx = vq->last_used_idx & (vq->size - 1);\n-\n-\t\tif (packet_success < (free_entries - 1)) {\n-\t\t\t/* Prefetch descriptor index. */\n-\t\t\trte_prefetch0(&vq->desc[head[packet_success+1]]);\n-\t\t\trte_prefetch0(&vq->used->ring[(used_idx + 1) & (vq->size - 1)]);\n-\t\t}\n-\n-\t\t/* Update used index buffer information. */\n-\t\tvq->used->ring[used_idx].id = head[packet_success];\n-\t\tvq->used->ring[used_idx].len = 0;\n-\n-\t\t/* Setup dummy mbuf. This is copied to a real mbuf if transmitted out the physical port. 
*/\n-\t\tm.pkt.data_len = desc->len;\n-\t\tm.pkt.pkt_len = desc->len;\n-\t\tm.pkt.data = (void*)(uintptr_t)buff_addr;\n-\n-\t\tPRINT_PACKET(dev, (uintptr_t)buff_addr, desc->len, 0);\n-\n-\t\t/* If this is the first received packet we need to learn the MAC and setup VMDQ */\n-\t\tif (dev->ready == DEVICE_MAC_LEARNING) {\n-\t\t\tif (dev->remove || (link_vmdq(dev, &m) == -1)) {\n-\t\t\t\t/*discard frame if device is scheduled for removal or a duplicate MAC address is found. */\n-\t\t\t\tpacket_success += free_entries;\n-\t\t\t\tvq->last_used_idx += packet_success;\n-\t\t\t\tbreak;\n-\t\t\t}\n-\t\t}\n-\t\tvirtio_tx_route(dev, &m, mbuf_pool, (uint16_t)dev->device_fh);\n-\n-\t\tvq->last_used_idx++;\n-\t\tpacket_success++;\n-\t}\n-\n-\trte_compiler_barrier();\n-\tvq->used->idx += packet_success;\n-\t/* Kick guest if required. */\n-\tif (!(vq->avail->flags & VRING_AVAIL_F_NO_INTERRUPT))\n-\t\teventfd_write((int)vq->kickfd, 1);\n-}\n-\n-/* This function works for TX packets with mergeable feature enabled. */\n-static inline void __attribute__((always_inline))\n-virtio_dev_merge_tx(struct virtio_net *dev, struct rte_mempool *mbuf_pool)\n-{\n-\tstruct rte_mbuf *m, *prev;\n-\tstruct vhost_virtqueue *vq;\n-\tstruct vring_desc *desc;\n-\tuint64_t vb_addr = 0;\n-\tuint32_t head[MAX_PKT_BURST];\n-\tuint32_t used_idx;\n-\tuint32_t i;\n-\tuint16_t free_entries, entry_success = 0;\n-\tuint16_t avail_idx;\n-\tuint32_t buf_size = MBUF_SIZE - (sizeof(struct rte_mbuf)\n-\t\t\t+ RTE_PKTMBUF_HEADROOM);\n-\n-\tvq = dev->virtqueue[VIRTIO_TXQ];\n-\tavail_idx =  *((volatile uint16_t *)&vq->avail->idx);\n-\n-\t/* If there are no available buffers then return. */\n-\tif (vq->last_used_idx == avail_idx)\n-\t\treturn;\n-\n-\tLOG_DEBUG(VHOST_DATA, \"(%\"PRIu64\") virtio_dev_merge_tx()\\n\",\n-\t\tdev->device_fh);\n-\n-\t/* Prefetch available ring to retrieve head indexes. */\n-\trte_prefetch0(&vq->avail->ring[vq->last_used_idx & (vq->size - 1)]);\n-\n-\t/*get the number of free entries in the ring*/\n-\tfree_entries = (avail_idx - vq->last_used_idx);\n-\n-\t/* Limit to MAX_PKT_BURST. */\n-\tfree_entries = RTE_MIN(free_entries, MAX_PKT_BURST);\n-\n-\tLOG_DEBUG(VHOST_DATA, \"(%\"PRIu64\") Buffers available %d\\n\",\n-\t\tdev->device_fh, free_entries);\n-\t/* Retrieve all of the head indexes first to avoid caching issues. */\n-\tfor (i = 0; i < free_entries; i++)\n-\t\thead[i] = vq->avail->ring[(vq->last_used_idx + i) & (vq->size - 1)];\n-\n-\t/* Prefetch descriptor index. */\n-\trte_prefetch0(&vq->desc[head[entry_success]]);\n-\trte_prefetch0(&vq->used->ring[vq->last_used_idx & (vq->size - 1)]);\n-\n-\twhile (entry_success < free_entries) {\n-\t\tuint32_t vb_avail, vb_offset;\n-\t\tuint32_t seg_avail, seg_offset;\n-\t\tuint32_t cpy_len;\n-\t\tuint32_t seg_num = 0;\n-\t\tstruct rte_mbuf *cur;\n-\t\tuint8_t alloc_err = 0;\n-\n-\t\tdesc = &vq->desc[head[entry_success]];\n-\n-\t\t/* Discard first buffer as it is the virtio header */\n-\t\tdesc = &vq->desc[desc->next];\n-\n-\t\t/* Buffer address translation. */\n-\t\tvb_addr = gpa_to_vva(dev, desc->addr);\n-\t\t/* Prefetch buffer address. */\n-\t\trte_prefetch0((void *)(uintptr_t)vb_addr);\n-\n-\t\tused_idx = vq->last_used_idx & (vq->size - 1);\n-\n-\t\tif (entry_success < (free_entries - 1)) {\n-\t\t\t/* Prefetch descriptor index. */\n-\t\t\trte_prefetch0(&vq->desc[head[entry_success+1]]);\n-\t\t\trte_prefetch0(&vq->used->ring[(used_idx + 1) & (vq->size - 1)]);\n-\t\t}\n-\n-\t\t/* Update used index buffer information. 
*/\n-\t\tvq->used->ring[used_idx].id = head[entry_success];\n-\t\tvq->used->ring[used_idx].len = 0;\n-\n-\t\tvb_offset = 0;\n-\t\tvb_avail = desc->len;\n-\t\tseg_offset = 0;\n-\t\tseg_avail = buf_size;\n-\t\tcpy_len = RTE_MIN(vb_avail, seg_avail);\n-\n-\t\tPRINT_PACKET(dev, (uintptr_t)vb_addr, desc->len, 0);\n-\n-\t\t/* Allocate an mbuf and populate the structure. */\n-\t\tm = rte_pktmbuf_alloc(mbuf_pool);\n-\t\tif (unlikely(m == NULL)) {\n-\t\t\tRTE_LOG(ERR, VHOST_DATA,\n-\t\t\t\t\"Failed to allocate memory for mbuf.\\n\");\n-\t\t\treturn;\n-\t\t}\n-\n-\t\tseg_num++;\n-\t\tcur = m;\n-\t\tprev = m;\n-\t\twhile (cpy_len != 0) {\n-\t\t\trte_memcpy((void *)(rte_pktmbuf_mtod(cur, char *) + seg_offset),\n-\t\t\t\t(void *)((uintptr_t)(vb_addr + vb_offset)),\n-\t\t\t\tcpy_len);\n-\n-\t\t\tseg_offset += cpy_len;\n-\t\t\tvb_offset += cpy_len;\n-\t\t\tvb_avail -= cpy_len;\n-\t\t\tseg_avail -= cpy_len;\n-\n-\t\t\tif (vb_avail != 0) {\n-\t\t\t\t/*\n-\t\t\t\t * The segment reachs to its end,\n-\t\t\t\t * while the virtio buffer in TX vring has\n-\t\t\t\t * more data to be copied.\n-\t\t\t\t */\n-\t\t\t\tcur->pkt.data_len = seg_offset;\n-\t\t\t\tm->pkt.pkt_len += seg_offset;\n-\t\t\t\t/* Allocate mbuf and populate the structure. */\n-\t\t\t\tcur = rte_pktmbuf_alloc(mbuf_pool);\n-\t\t\t\tif (unlikely(cur == NULL)) {\n-\t\t\t\t\tRTE_LOG(ERR, VHOST_DATA, \"Failed to \"\n-\t\t\t\t\t\t\"allocate memory for mbuf.\\n\");\n-\t\t\t\t\trte_pktmbuf_free(m);\n-\t\t\t\t\talloc_err = 1;\n-\t\t\t\t\tbreak;\n-\t\t\t\t}\n-\n-\t\t\t\tseg_num++;\n-\t\t\t\tprev->pkt.next = cur;\n-\t\t\t\tprev = cur;\n-\t\t\t\tseg_offset = 0;\n-\t\t\t\tseg_avail = buf_size;\n-\t\t\t} else {\n-\t\t\t\tif (desc->flags & VRING_DESC_F_NEXT) {\n-\t\t\t\t\t/*\n-\t\t\t\t\t * There are more virtio buffers in\n-\t\t\t\t\t * same vring entry need to be copied.\n-\t\t\t\t\t */\n-\t\t\t\t\tif (seg_avail == 0) {\n-\t\t\t\t\t\t/*\n-\t\t\t\t\t\t * The current segment hasn't\n-\t\t\t\t\t\t * room to accomodate more\n-\t\t\t\t\t\t * data.\n-\t\t\t\t\t\t */\n-\t\t\t\t\t\tcur->pkt.data_len = seg_offset;\n-\t\t\t\t\t\tm->pkt.pkt_len += seg_offset;\n-\t\t\t\t\t\t/*\n-\t\t\t\t\t\t * Allocate an mbuf and\n-\t\t\t\t\t\t * populate the structure.\n-\t\t\t\t\t\t */\n-\t\t\t\t\t\tcur = rte_pktmbuf_alloc(mbuf_pool);\n-\t\t\t\t\t\tif (unlikely(cur == NULL)) {\n-\t\t\t\t\t\t\tRTE_LOG(ERR,\n-\t\t\t\t\t\t\t\tVHOST_DATA,\n-\t\t\t\t\t\t\t\t\"Failed to \"\n-\t\t\t\t\t\t\t\t\"allocate memory \"\n-\t\t\t\t\t\t\t\t\"for mbuf\\n\");\n-\t\t\t\t\t\t\trte_pktmbuf_free(m);\n-\t\t\t\t\t\t\talloc_err = 1;\n-\t\t\t\t\t\t\tbreak;\n-\t\t\t\t\t\t}\n-\t\t\t\t\t\tseg_num++;\n-\t\t\t\t\t\tprev->pkt.next = cur;\n-\t\t\t\t\t\tprev = cur;\n-\t\t\t\t\t\tseg_offset = 0;\n-\t\t\t\t\t\tseg_avail = buf_size;\n-\t\t\t\t\t}\n-\n-\t\t\t\t\tdesc = &vq->desc[desc->next];\n-\n-\t\t\t\t\t/* Buffer address translation. */\n-\t\t\t\t\tvb_addr = gpa_to_vva(dev, desc->addr);\n-\t\t\t\t\t/* Prefetch buffer address. */\n-\t\t\t\t\trte_prefetch0((void *)(uintptr_t)vb_addr);\n-\t\t\t\t\tvb_offset = 0;\n-\t\t\t\t\tvb_avail = desc->len;\n-\n-\t\t\t\t\tPRINT_PACKET(dev, (uintptr_t)vb_addr,\n-\t\t\t\t\t\tdesc->len, 0);\n-\t\t\t\t} else {\n-\t\t\t\t\t/* The whole packet completes. 
*/\n-\t\t\t\t\tcur->pkt.data_len = seg_offset;\n-\t\t\t\t\tm->pkt.pkt_len += seg_offset;\n-\t\t\t\t\tvb_avail = 0;\n-\t\t\t\t}\n-\t\t\t}\n-\n-\t\t\tcpy_len = RTE_MIN(vb_avail, seg_avail);\n-\t\t}\n-\n-\t\tif (unlikely(alloc_err == 1))\n-\t\t\tbreak;\n-\n-\t\tm->pkt.nb_segs = seg_num;\n-\n-\t\t/*\n-\t\t * If this is the first received packet we need to learn\n-\t\t * the MAC and setup VMDQ\n-\t\t */\n-\t\tif (dev->ready == DEVICE_MAC_LEARNING) {\n-\t\t\tif (dev->remove || (link_vmdq(dev, m) == -1)) {\n-\t\t\t\t/*\n-\t\t\t\t * Discard frame if device is scheduled for\n-\t\t\t\t * removal or a duplicate MAC address is found.\n-\t\t\t\t */\n-\t\t\t\tentry_success = free_entries;\n-\t\t\t\tvq->last_used_idx += entry_success;\n-\t\t\t\trte_pktmbuf_free(m);\n-\t\t\t\tbreak;\n-\t\t\t}\n-\t\t}\n-\n-\t\tvirtio_tx_route(dev, m, mbuf_pool, (uint16_t)dev->device_fh);\n-\t\tvq->last_used_idx++;\n-\t\tentry_success++;\n-\t\trte_pktmbuf_free(m);\n-\t}\n-\n-\trte_compiler_barrier();\n-\tvq->used->idx += entry_success;\n-\t/* Kick guest if required. */\n-\tif (!(vq->avail->flags & VRING_AVAIL_F_NO_INTERRUPT))\n-\t\teventfd_write((int)vq->kickfd, 1);\n-\n-}\n-\n /*\n  * This function is called by each data core. It handles all RX/TX registered with the\n  * core. For TX the specific lcore linked list is used. For RX, MAC addresses are compared\n@@ -2035,6 +1140,7 @@ switch_worker(__attribute__((unused)) void *arg)\n {\n \tstruct rte_mempool *mbuf_pool = arg;\n \tstruct virtio_net *dev = NULL;\n+\tstruct vhost_dev *vdev = NULL;\n \tstruct rte_mbuf *pkts_burst[MAX_PKT_BURST];\n \tstruct virtio_net_data_ll *dev_ll;\n \tstruct mbuf_table *tx_q;\n@@ -2045,7 +1151,8 @@ switch_worker(__attribute__((unused)) void *arg)\n \tconst uint16_t lcore_id = rte_lcore_id();\n \tconst uint16_t num_cores = (uint16_t)rte_lcore_count();\n \tuint16_t rx_count = 0;\n-\tuint32_t mergeable = 0;\n+\tuint16_t tx_count;\n+\tuint32_t retry = 0;\n \n \tRTE_LOG(INFO, VHOST_DATA, \"Procesing on Core %u started\\n\", lcore_id);\n \tlcore_ll = lcore_info[lcore_id].lcore_ll;\n@@ -2102,37 +1209,39 @@ switch_worker(__attribute__((unused)) void *arg)\n \n \t\twhile (dev_ll != NULL) {\n \t\t\t/*get virtio device ID*/\n-\t\t\tdev = dev_ll->dev;\n-\t\t\tmergeable =\n-\t\t\t\tdev->features & (1 << VIRTIO_NET_F_MRG_RXBUF);\n+\t\t\tvdev = dev_ll->vdev;\n+\t\t\tdev = vdev->dev;\n \n-\t\t\tif (dev->remove) {\n+\t\t\tif (vdev->remove) {\n \t\t\t\tdev_ll = dev_ll->next;\n-\t\t\t\tunlink_vmdq(dev);\n-\t\t\t\tdev->ready = DEVICE_SAFE_REMOVE;\n+\t\t\t\tunlink_vmdq(vdev);\n+\t\t\t\tvdev->ready = DEVICE_SAFE_REMOVE;\n \t\t\t\tcontinue;\n \t\t\t}\n-\t\t\tif (likely(dev->ready == DEVICE_RX)) {\n+\t\t\tif (likely(vdev->ready == DEVICE_RX)) {\n \t\t\t\t/*Handle guest RX*/\n \t\t\t\trx_count = rte_eth_rx_burst(ports[0],\n-\t\t\t\t\t(uint16_t)dev->vmdq_rx_q, pkts_burst, MAX_PKT_BURST);\n+\t\t\t\t\tvdev->vmdq_rx_q, pkts_burst, MAX_PKT_BURST);\n \n \t\t\t\tif (rx_count) {\n-\t\t\t\t\tif (likely(mergeable == 0))\n-\t\t\t\t\t\tret_count =\n-\t\t\t\t\t\t\tvirtio_dev_rx(dev,\n-\t\t\t\t\t\t\tpkts_burst, rx_count);\n-\t\t\t\t\telse\n-\t\t\t\t\t\tret_count =\n-\t\t\t\t\t\t\tvirtio_dev_merge_rx(dev,\n-\t\t\t\t\t\t\tpkts_burst, rx_count);\n-\n+\t\t\t\t\t/*\n+\t\t\t\t\t* Retry is enabled and the queue is full then we wait and retry to avoid packet loss\n+\t\t\t\t\t* Here MAX_PKT_BURST must be less than virtio queue size\n+\t\t\t\t\t*/\n+\t\t\t\t\tif (enable_retry && unlikely(rx_count > rte_vring_available_entries(dev, VIRTIO_RXQ))) {\n+\t\t\t\t\t\tfor (retry = 0; retry < 
burst_rx_retry_num; retry++) {\n+\t\t\t\t\t\t\trte_delay_us(burst_rx_delay_time);\n+\t\t\t\t\t\t\tif (rx_count <= rte_vring_available_entries(dev, VIRTIO_RXQ))\n+\t\t\t\t\t\t\t\tbreak;\n+\t\t\t\t\t\t}\n+\t\t\t\t\t}\n+\t\t\t\t\tret_count = rte_vhost_enqueue_burst(dev, VIRTIO_RXQ, pkts_burst, rx_count);\n \t\t\t\t\tif (enable_stats) {\n \t\t\t\t\t\trte_atomic64_add(\n-\t\t\t\t\t\t&dev_statistics[dev_ll->dev->device_fh].rx_total_atomic,\n+\t\t\t\t\t\t&dev_statistics[dev_ll->vdev->dev->device_fh].rx_total_atomic,\n \t\t\t\t\t\trx_count);\n \t\t\t\t\t\trte_atomic64_add(\n-\t\t\t\t\t\t&dev_statistics[dev_ll->dev->device_fh].rx_atomic, ret_count);\n+\t\t\t\t\t\t&dev_statistics[dev_ll->vdev->dev->device_fh].rx_atomic, ret_count);\n \t\t\t\t\t}\n \t\t\t\t\twhile (likely(rx_count)) {\n \t\t\t\t\t\trx_count--;\n@@ -2142,12 +1251,18 @@ switch_worker(__attribute__((unused)) void *arg)\n \t\t\t\t}\n \t\t\t}\n \n-\t\t\tif (!dev->remove) {\n-\t\t\t\t/*Handle guest TX*/\n-\t\t\t\tif (likely(mergeable == 0))\n-\t\t\t\t\tvirtio_dev_tx(dev, mbuf_pool);\n-\t\t\t\telse\n-\t\t\t\t\tvirtio_dev_merge_tx(dev, mbuf_pool);\n+\t\t\tif (!vdev->remove) {\n+\t\t\t\t/* Handle guest TX*/\n+\t\t\t\ttx_count = rte_vhost_dequeue_burst(dev, VIRTIO_TXQ, mbuf_pool, pkts_burst, MAX_PKT_BURST);\n+\t\t\t\t/* If this is the first received packet we need to learn the MAC and setup VMDQ */\n+\t\t\t\tif (unlikely(vdev->ready == DEVICE_MAC_LEARNING) && tx_count) {\n+\t\t\t\t\tif (vdev->remove || (link_vmdq(vdev, pkts_burst[0]) == -1)) {\n+\t\t\t\t\t\twhile (tx_count--)\n+\t\t\t\t\t\t\trte_pktmbuf_free(pkts_burst[tx_count]);\n+\t\t\t\t\t}\n+\t\t\t\t}\n+\t\t\t\twhile (tx_count)\n+\t\t\t\t\tvirtio_tx_route(vdev, pkts_burst[--tx_count], mbuf_pool, (uint16_t)dev->device_fh);\n \t\t\t}\n \n \t\t\t/*move to the next device in the list*/\n@@ -2264,12 +1379,13 @@ attach_rxmbuf_zcp(struct virtio_net *dev)\n \tstruct rte_mbuf *mbuf = NULL;\n \tstruct vpool *vpool;\n \thpa_type addr_type;\n+\tstruct vhost_dev *vdev = (struct vhost_dev *)dev->priv;\n \n-\tvpool = &vpool_array[dev->vmdq_rx_q];\n+\tvpool = &vpool_array[vdev->vmdq_rx_q];\n \tvq = dev->virtqueue[VIRTIO_RXQ];\n \n \tdo {\n-\t\tif (unlikely(get_available_ring_index_zcp(dev, &res_base_idx,\n+\t\tif (unlikely(get_available_ring_index_zcp(vdev->dev, &res_base_idx,\n \t\t\t\t1) != 1))\n \t\t\treturn;\n \t\tdesc_idx = vq->avail->ring[(res_base_idx) & (vq->size - 1)];\n@@ -2278,12 +1394,12 @@ attach_rxmbuf_zcp(struct virtio_net *dev)\n \t\tif (desc->flags & VRING_DESC_F_NEXT) {\n \t\t\tdesc = &vq->desc[desc->next];\n \t\t\tbuff_addr = gpa_to_vva(dev, desc->addr);\n-\t\t\tphys_addr = gpa_to_hpa(dev, desc->addr, desc->len,\n+\t\t\tphys_addr = gpa_to_hpa(vdev, desc->addr, desc->len,\n \t\t\t\t\t&addr_type);\n \t\t} else {\n \t\t\tbuff_addr = gpa_to_vva(dev,\n \t\t\t\t\tdesc->addr + vq->vhost_hlen);\n-\t\t\tphys_addr = gpa_to_hpa(dev,\n+\t\t\tphys_addr = gpa_to_hpa(vdev,\n \t\t\t\t\tdesc->addr + vq->vhost_hlen,\n \t\t\t\t\tdesc->len, &addr_type);\n \t\t}\n@@ -2606,13 +1722,14 @@ virtio_tx_route_zcp(struct virtio_net *dev, struct rte_mbuf *m,\n \tstruct virtio_net_data_ll *dev_ll = ll_root_used;\n \tstruct ether_hdr *pkt_hdr = (struct ether_hdr *)m->pkt.data;\n \tuint16_t vlan_tag = (uint16_t)vlan_tags[(uint16_t)dev->device_fh];\n+\tuint16_t vmdq_rx_q = ((struct vhost_dev *)dev->priv)->vmdq_rx_q;\n \n \t/*Add packet to the port tx queue*/\n-\ttx_q = &tx_queue_zcp[(uint16_t)dev->vmdq_rx_q];\n+\ttx_q = &tx_queue_zcp[vmdq_rx_q];\n \tlen = tx_q->len;\n \n \t/* Allocate an mbuf and populate the 
structure. */\n-\tvpool = &vpool_array[MAX_QUEUES + (uint16_t)dev->vmdq_rx_q];\n+\tvpool = &vpool_array[MAX_QUEUES + vmdq_rx_q];\n \trte_ring_sc_dequeue(vpool->ring, (void **)&mbuf);\n \tif (unlikely(mbuf == NULL)) {\n \t\tstruct vhost_virtqueue *vq = dev->virtqueue[VIRTIO_TXQ];\n@@ -2633,15 +1750,15 @@ virtio_tx_route_zcp(struct virtio_net *dev, struct rte_mbuf *m,\n \t\t */\n \t\tvlan_tag = external_pkt_default_vlan_tag;\n \t\twhile (dev_ll != NULL) {\n-\t\t\tif (likely(dev_ll->dev->ready == DEVICE_RX) &&\n+\t\t\tif (likely(dev_ll->vdev->ready == DEVICE_RX) &&\n \t\t\t\tether_addr_cmp(&(pkt_hdr->d_addr),\n-\t\t\t\t&dev_ll->dev->mac_address)) {\n+\t\t\t\t&dev_ll->vdev->mac_address)) {\n \n \t\t\t\t/*\n \t\t\t\t * Drop the packet if the TX packet is destined\n \t\t\t\t * for the TX device.\n \t\t\t\t */\n-\t\t\t\tif (unlikely(dev_ll->dev->device_fh\n+\t\t\t\tif (unlikely(dev_ll->vdev->dev->device_fh\n \t\t\t\t\t== dev->device_fh)) {\n \t\t\t\t\tLOG_DEBUG(VHOST_DATA,\n \t\t\t\t\t\"(%\"PRIu64\") TX: Source and destination\"\n@@ -2661,7 +1778,7 @@ virtio_tx_route_zcp(struct virtio_net *dev, struct rte_mbuf *m,\n \t\t\t\toffset = 4;\n \t\t\t\tvlan_tag =\n \t\t\t\t(uint16_t)\n-\t\t\t\tvlan_tags[(uint16_t)dev_ll->dev->device_fh];\n+\t\t\t\tvlan_tags[(uint16_t)dev_ll->vdev->dev->device_fh];\n \n \t\t\t\tLOG_DEBUG(VHOST_DATA,\n \t\t\t\t\"(%\"PRIu64\") TX: pkt to local VM device id:\"\n@@ -2751,6 +1868,7 @@ virtio_dev_tx_zcp(struct virtio_net *dev)\n \tuint16_t avail_idx;\n \tuint8_t need_copy = 0;\n \thpa_type addr_type;\n+\tstruct vhost_dev *vdev = (struct vhost_dev *)dev->priv;\n \n \tvq = dev->virtqueue[VIRTIO_TXQ];\n \tavail_idx =  *((volatile uint16_t *)&vq->avail->idx);\n@@ -2794,7 +1912,7 @@ virtio_dev_tx_zcp(struct virtio_net *dev)\n \n \t\t/* Buffer address translation. */\n \t\tbuff_addr = gpa_to_vva(dev, desc->addr);\n-\t\tphys_addr = gpa_to_hpa(dev, desc->addr, desc->len, &addr_type);\n+\t\tphys_addr = gpa_to_hpa(vdev, desc->addr, desc->len, &addr_type);\n \n \t\tif (likely(packet_success < (free_entries - 1)))\n \t\t\t/* Prefetch descriptor index. 
*/\n@@ -2843,8 +1961,8 @@ virtio_dev_tx_zcp(struct virtio_net *dev)\n \t\t * If this is the first received packet we need to learn\n \t\t * the MAC and setup VMDQ\n \t\t */\n-\t\tif (unlikely(dev->ready == DEVICE_MAC_LEARNING)) {\n-\t\t\tif (dev->remove || (link_vmdq(dev, &m) == -1)) {\n+\t\tif (unlikely(vdev->ready == DEVICE_MAC_LEARNING)) {\n+\t\t\tif (vdev->remove || (link_vmdq(vdev, &m) == -1)) {\n \t\t\t\t/*\n \t\t\t\t * Discard frame if device is scheduled for\n \t\t\t\t * removal or a duplicate MAC address is found.\n@@ -2869,6 +1987,7 @@ static int\n switch_worker_zcp(__attribute__((unused)) void *arg)\n {\n \tstruct virtio_net *dev = NULL;\n+\tstruct vhost_dev  *vdev = NULL;\n \tstruct rte_mbuf *pkts_burst[MAX_PKT_BURST];\n \tstruct virtio_net_data_ll *dev_ll;\n \tstruct mbuf_table *tx_q;\n@@ -2897,12 +2016,13 @@ switch_worker_zcp(__attribute__((unused)) void *arg)\n \t\t\t * put back into vpool.ring.\n \t\t\t */\n \t\t\tdev_ll = lcore_ll->ll_root_used;\n-\t\t\twhile ((dev_ll != NULL) && (dev_ll->dev != NULL)) {\n+\t\t\twhile ((dev_ll != NULL) && (dev_ll->vdev != NULL)) {\n \t\t\t\t/* Get virtio device ID */\n-\t\t\t\tdev = dev_ll->dev;\n+\t\t\t\tvdev = dev_ll->vdev;\n+\t\t\t\tdev = vdev->dev;\n \n-\t\t\t\tif (likely(!dev->remove)) {\n-\t\t\t\t\ttx_q = &tx_queue_zcp[(uint16_t)dev->vmdq_rx_q];\n+\t\t\t\tif (likely(!vdev->remove)) {\n+\t\t\t\t\ttx_q = &tx_queue_zcp[(uint16_t)vdev->vmdq_rx_q];\n \t\t\t\t\tif (tx_q->len) {\n \t\t\t\t\t\tLOG_DEBUG(VHOST_DATA,\n \t\t\t\t\t\t\"TX queue drained after timeout\"\n@@ -2927,7 +2047,7 @@ switch_worker_zcp(__attribute__((unused)) void *arg)\n \t\t\t\t\t\ttx_q->len = 0;\n \n \t\t\t\t\t\ttxmbuf_clean_zcp(dev,\n-\t\t\t\t\t\t\t&vpool_array[MAX_QUEUES+dev->vmdq_rx_q]);\n+\t\t\t\t\t\t\t&vpool_array[MAX_QUEUES + vdev->vmdq_rx_q]);\n \t\t\t\t\t}\n \t\t\t\t}\n \t\t\t\tdev_ll = dev_ll->next;\n@@ -2947,17 +2067,18 @@ switch_worker_zcp(__attribute__((unused)) void *arg)\n \t\t/* Process devices */\n \t\tdev_ll = lcore_ll->ll_root_used;\n \n-\t\twhile ((dev_ll != NULL) && (dev_ll->dev != NULL)) {\n-\t\t\tdev = dev_ll->dev;\n-\t\t\tif (unlikely(dev->remove)) {\n+\t\twhile ((dev_ll != NULL) && (dev_ll->vdev != NULL)) {\n+\t\t\tvdev = dev_ll->vdev;\n+\t\t\tdev  = vdev->dev;\n+\t\t\tif (unlikely(vdev->remove)) {\n \t\t\t\tdev_ll = dev_ll->next;\n-\t\t\t\tunlink_vmdq(dev);\n-\t\t\t\tdev->ready = DEVICE_SAFE_REMOVE;\n+\t\t\t\tunlink_vmdq(vdev);\n+\t\t\t\tvdev->ready = DEVICE_SAFE_REMOVE;\n \t\t\t\tcontinue;\n \t\t\t}\n \n-\t\t\tif (likely(dev->ready == DEVICE_RX)) {\n-\t\t\t\tuint32_t index = dev->vmdq_rx_q;\n+\t\t\tif (likely(vdev->ready == DEVICE_RX)) {\n+\t\t\t\tuint32_t index = vdev->vmdq_rx_q;\n \t\t\t\tuint16_t i;\n \t\t\t\tcount_in_ring\n \t\t\t\t= rte_ring_count(vpool_array[index].ring);\n@@ -2976,7 +2097,7 @@ switch_worker_zcp(__attribute__((unused)) void *arg)\n \n \t\t\t\t/* Handle guest RX */\n \t\t\t\trx_count = rte_eth_rx_burst(ports[0],\n-\t\t\t\t\t(uint16_t)dev->vmdq_rx_q, pkts_burst,\n+\t\t\t\t\tvdev->vmdq_rx_q, pkts_burst,\n \t\t\t\t\tMAX_PKT_BURST);\n \n \t\t\t\tif (rx_count) {\n@@ -2999,7 +2120,7 @@ switch_worker_zcp(__attribute__((unused)) void *arg)\n \t\t\t\t}\n \t\t\t}\n \n-\t\t\tif (likely(!dev->remove))\n+\t\t\tif (likely(!vdev->remove))\n \t\t\t\t/* Handle guest TX */\n \t\t\t\tvirtio_dev_tx_zcp(dev);\n \n@@ -3112,7 +2233,7 @@ alloc_data_ll(uint32_t size)\n \t}\n \n \tfor (i = 0; i < size - 1; i++) {\n-\t\tll_new[i].dev = NULL;\n+\t\tll_new[i].vdev = NULL;\n \t\tll_new[i].next = &ll_new[i+1];\n \t}\n \tll_new[i].next = NULL;\n@@ 
-3152,42 +2273,32 @@ init_data_ll (void)\n }\n \n /*\n- * Set virtqueue flags so that we do not receive interrupts.\n- */\n-static void\n-set_irq_status (struct virtio_net *dev)\n-{\n-\tdev->virtqueue[VIRTIO_RXQ]->used->flags = VRING_USED_F_NO_NOTIFY;\n-\tdev->virtqueue[VIRTIO_TXQ]->used->flags = VRING_USED_F_NO_NOTIFY;\n-}\n-\n-/*\n  * Remove a device from the specific data core linked list and from the main linked list. Synchonization\n  * occurs through the use of the lcore dev_removal_flag. Device is made volatile here to avoid re-ordering\n  * of dev->remove=1 which can cause an infinite loop in the rte_pause loop.\n  */\n static void\n-destroy_device (volatile struct virtio_net *dev)\n+destroy_device(struct virtio_net *dev)\n {\n \tstruct virtio_net_data_ll *ll_lcore_dev_cur;\n \tstruct virtio_net_data_ll *ll_main_dev_cur;\n \tstruct virtio_net_data_ll *ll_lcore_dev_last = NULL;\n \tstruct virtio_net_data_ll *ll_main_dev_last = NULL;\n+\tstruct vhost_dev *vdev;\n \tint lcore;\n \n \tdev->flags &= ~VIRTIO_DEV_RUNNING;\n \n+\tvdev = (struct vhost_dev *)dev->priv;\n \t/*set the remove flag. */\n-\tdev->remove = 1;\n-\n-\twhile(dev->ready != DEVICE_SAFE_REMOVE) {\n+\tvdev->remove = 1;\n+\twhile (vdev->ready != DEVICE_SAFE_REMOVE)\n \t\trte_pause();\n-\t}\n \n \t/* Search for entry to be removed from lcore ll */\n-\tll_lcore_dev_cur = lcore_info[dev->coreid].lcore_ll->ll_root_used;\n+\tll_lcore_dev_cur = lcore_info[vdev->coreid].lcore_ll->ll_root_used;\n \twhile (ll_lcore_dev_cur != NULL) {\n-\t\tif (ll_lcore_dev_cur->dev == dev) {\n+\t\tif (ll_lcore_dev_cur->vdev == vdev) {\n \t\t\tbreak;\n \t\t} else {\n \t\t\tll_lcore_dev_last = ll_lcore_dev_cur;\n@@ -3206,7 +2317,7 @@ destroy_device (volatile struct virtio_net *dev)\n \tll_main_dev_cur = ll_root_used;\n \tll_main_dev_last = NULL;\n \twhile (ll_main_dev_cur != NULL) {\n-\t\tif (ll_main_dev_cur->dev == dev) {\n+\t\tif (ll_main_dev_cur->vdev == vdev) {\n \t\t\tbreak;\n \t\t} else {\n \t\t\tll_main_dev_last = ll_main_dev_cur;\n@@ -3215,7 +2326,7 @@ destroy_device (volatile struct virtio_net *dev)\n \t}\n \n \t/* Remove entries from the lcore and main ll. */\n-\trm_data_ll_entry(&lcore_info[ll_lcore_dev_cur->dev->coreid].lcore_ll->ll_root_used, ll_lcore_dev_cur, ll_lcore_dev_last);\n+\trm_data_ll_entry(&lcore_info[vdev->coreid].lcore_ll->ll_root_used, ll_lcore_dev_cur, ll_lcore_dev_last);\n \trm_data_ll_entry(&ll_root_used, ll_main_dev_cur, ll_main_dev_last);\n \n \t/* Set the dev_removal_flag on each lcore. */\n@@ -3235,19 +2346,19 @@ destroy_device (volatile struct virtio_net *dev)\n \t}\n \n \t/* Add the entries back to the lcore and main free ll.*/\n-\tput_data_ll_free_entry(&lcore_info[ll_lcore_dev_cur->dev->coreid].lcore_ll->ll_root_free, ll_lcore_dev_cur);\n+\tput_data_ll_free_entry(&lcore_info[vdev->coreid].lcore_ll->ll_root_free, ll_lcore_dev_cur);\n \tput_data_ll_free_entry(&ll_root_free, ll_main_dev_cur);\n \n \t/* Decrement number of device on the lcore. */\n-\tlcore_info[ll_lcore_dev_cur->dev->coreid].lcore_ll->device_num--;\n+\tlcore_info[vdev->coreid].lcore_ll->device_num--;\n \n \tRTE_LOG(INFO, VHOST_DATA, \"(%\"PRIu64\") Device has been removed from data core\\n\", dev->device_fh);\n \n \tif (zero_copy) {\n-\t\tstruct vpool *vpool = &vpool_array[dev->vmdq_rx_q];\n+\t\tstruct vpool *vpool = &vpool_array[vdev->vmdq_rx_q];\n \n \t\t/* Stop the RX queue. 
*/\n-\t\tif (rte_eth_dev_rx_queue_stop(ports[0], dev->vmdq_rx_q) != 0) {\n+\t\tif (rte_eth_dev_rx_queue_stop(ports[0], vdev->vmdq_rx_q) != 0) {\n \t\t\tLOG_DEBUG(VHOST_CONFIG,\n \t\t\t\t\"(%\"PRIu64\") In destroy_device: Failed to stop \"\n \t\t\t\t\"rx queue:%d\\n\",\n@@ -3263,24 +2374,173 @@ destroy_device (volatile struct virtio_net *dev)\n \t\tmbuf_destroy_zcp(vpool);\n \n \t\t/* Stop the TX queue. */\n-\t\tif (rte_eth_dev_tx_queue_stop(ports[0], dev->vmdq_rx_q) != 0) {\n+\t\tif (rte_eth_dev_tx_queue_stop(ports[0], vdev->vmdq_rx_q) != 0) {\n \t\t\tLOG_DEBUG(VHOST_CONFIG,\n \t\t\t\t\"(%\"PRIu64\") In destroy_device: Failed to \"\n \t\t\t\t\"stop tx queue:%d\\n\",\n \t\t\t\tdev->device_fh, dev->vmdq_rx_q);\n \t\t}\n \n-\t\tvpool = &vpool_array[dev->vmdq_rx_q + MAX_QUEUES];\n+\t\tvpool = &vpool_array[vdev->vmdq_rx_q + MAX_QUEUES];\n \n \t\tLOG_DEBUG(VHOST_CONFIG,\n \t\t\t\"(%\"PRIu64\") destroy_device: Start put mbuf in mempool \"\n \t\t\t\"back to ring for TX queue: %d, dev:(%\"PRIu64\")\\n\",\n-\t\t\tdev->device_fh, (dev->vmdq_rx_q + MAX_QUEUES),\n+\t\t\tdev->device_fh, (vdev->vmdq_rx_q + MAX_QUEUES),\n \t\t\tdev->device_fh);\n \n \t\tmbuf_destroy_zcp(vpool);\n+\t\trte_free(vdev->regions_hpa);\n+\t}\n+\trte_free(vdev);\n+\n+}\n+\n+/*\n+ * Calculate the region count of physical continous regions for one particular\n+ * region of whose vhost virtual address is continous. The particular region\n+ * start from vva_start, with size of 'size' in argument.\n+ */\n+static uint32_t\n+check_hpa_regions(uint64_t vva_start, uint64_t size)\n+{\n+\tuint32_t i, nregions = 0, page_size = getpagesize();\n+\tuint64_t cur_phys_addr = 0, next_phys_addr = 0;\n+\tif (vva_start % page_size) {\n+\t\tLOG_DEBUG(VHOST_CONFIG,\n+\t\t\t\"in check_countinous: vva start(%p) mod page_size(%d) \"\n+\t\t\t\"has remainder\\n\",\n+\t\t\t(void *)(uintptr_t)vva_start, page_size);\n+\t\treturn 0;\n+\t}\n+\tif (size % page_size) {\n+\t\tLOG_DEBUG(VHOST_CONFIG,\n+\t\t\t\"in check_countinous: \"\n+\t\t\t\"size((%\"PRIu64\")) mod page_size(%d) has remainder\\n\",\n+\t\t\tsize, page_size);\n+\t\treturn 0;\n+\t}\n+\tfor (i = 0; i < size - page_size; i = i + page_size) {\n+\t\tcur_phys_addr\n+\t\t\t= rte_mem_virt2phy((void *)(uintptr_t)(vva_start + i));\n+\t\tnext_phys_addr = rte_mem_virt2phy(\n+\t\t\t(void *)(uintptr_t)(vva_start + i + page_size));\n+\t\tif ((cur_phys_addr + page_size) != next_phys_addr) {\n+\t\t\t++nregions;\n+\t\t\tLOG_DEBUG(VHOST_CONFIG,\n+\t\t\t\t\"in check_continuous: hva addr:(%p) is not \"\n+\t\t\t\t\"continuous with hva addr:(%p), diff:%d\\n\",\n+\t\t\t\t(void *)(uintptr_t)(vva_start + (uint64_t)i),\n+\t\t\t\t(void *)(uintptr_t)(vva_start + (uint64_t)i\n+\t\t\t\t+ page_size), page_size);\n+\t\t\tLOG_DEBUG(VHOST_CONFIG,\n+\t\t\t\t\"in check_continuous: hpa addr:(%p) is not \"\n+\t\t\t\t\"continuous with hpa addr:(%p), \"\n+\t\t\t\t\"diff:(%\"PRIu64\")\\n\",\n+\t\t\t\t(void *)(uintptr_t)cur_phys_addr,\n+\t\t\t\t(void *)(uintptr_t)next_phys_addr,\n+\t\t\t\t(next_phys_addr-cur_phys_addr));\n+\t\t}\n \t}\n+\treturn nregions;\n+}\n \n+/*\n+ * Divide each region whose vhost virtual address is continous into a few\n+ * sub-regions, make sure the physical address within each sub-region are\n+ * continous. And fill offset(to GPA) and size etc. 
information of each\n+ * sub-region into regions_hpa.\n+ */\n+static uint32_t\n+fill_hpa_memory_regions(struct virtio_memory_regions_hpa *mem_region_hpa, struct virtio_memory *virtio_memory)\n+{\n+\tuint32_t regionidx, regionidx_hpa = 0, i, k, page_size = getpagesize();\n+\tuint64_t cur_phys_addr = 0, next_phys_addr = 0, vva_start;\n+\n+\tif (mem_region_hpa == NULL)\n+\t\treturn 0;\n+\n+\tfor (regionidx = 0; regionidx < virtio_memory->nregions; regionidx++) {\n+\t\tvva_start = virtio_memory->regions[regionidx].guest_phys_address +\n+\t\t\tvirtio_memory->regions[regionidx].address_offset;\n+\t\tmem_region_hpa[regionidx_hpa].guest_phys_address\n+\t\t\t= virtio_memory->regions[regionidx].guest_phys_address;\n+\t\tmem_region_hpa[regionidx_hpa].host_phys_addr_offset =\n+\t\t\trte_mem_virt2phy((void *)(uintptr_t)(vva_start)) -\n+\t\t\tmem_region_hpa[regionidx_hpa].guest_phys_address;\n+\t\tLOG_DEBUG(VHOST_CONFIG,\n+\t\t\t\"in fill_hpa_regions: guest phys addr start[%d]:(%p)\\n\",\n+\t\t\tregionidx_hpa,\n+\t\t\t(void *)(uintptr_t)\n+\t\t\t(mem_region_hpa[regionidx_hpa].guest_phys_address));\n+\t\tLOG_DEBUG(VHOST_CONFIG,\n+\t\t\t\"in fill_hpa_regions: host  phys addr start[%d]:(%p)\\n\",\n+\t\t\tregionidx_hpa,\n+\t\t\t(void *)(uintptr_t)\n+\t\t\t(mem_region_hpa[regionidx_hpa].host_phys_addr_offset));\n+\t\tfor (i = 0, k = 0;\n+\t\t\ti < virtio_memory->regions[regionidx].memory_size -\n+\t\t\t\tpage_size;\n+\t\t\ti += page_size) {\n+\t\t\tcur_phys_addr = rte_mem_virt2phy(\n+\t\t\t\t\t(void *)(uintptr_t)(vva_start + i));\n+\t\t\tnext_phys_addr = rte_mem_virt2phy(\n+\t\t\t\t\t(void *)(uintptr_t)(vva_start +\n+\t\t\t\t\ti + page_size));\n+\t\t\tif ((cur_phys_addr + page_size) != next_phys_addr) {\n+\t\t\t\tmem_region_hpa[regionidx_hpa].guest_phys_address_end =\n+\t\t\t\t\tmem_region_hpa[regionidx_hpa].guest_phys_address +\n+\t\t\t\t\tk + page_size;\n+\t\t\t\tmem_region_hpa[regionidx_hpa].memory_size\n+\t\t\t\t\t= k + page_size;\n+\t\t\t\tLOG_DEBUG(VHOST_CONFIG, \"in fill_hpa_regions: guest \"\n+\t\t\t\t\t\"phys addr end  [%d]:(%p)\\n\",\n+\t\t\t\t\tregionidx_hpa,\n+\t\t\t\t\t(void *)(uintptr_t)\n+\t\t\t\t\t(mem_region_hpa[regionidx_hpa].guest_phys_address_end));\n+\t\t\t\tLOG_DEBUG(VHOST_CONFIG,\n+\t\t\t\t\t\"in fill_hpa_regions: guest phys addr \"\n+\t\t\t\t\t\"size [%d]:(%p)\\n\",\n+\t\t\t\t\tregionidx_hpa,\n+\t\t\t\t\t(void *)(uintptr_t)\n+\t\t\t\t\t(mem_region_hpa[regionidx_hpa].memory_size));\n+\t\t\t\tmem_region_hpa[regionidx_hpa + 1].guest_phys_address\n+\t\t\t\t\t= mem_region_hpa[regionidx_hpa].guest_phys_address_end;\n+\t\t\t\t++regionidx_hpa;\n+\t\t\t\tmem_region_hpa[regionidx_hpa].host_phys_addr_offset =\n+\t\t\t\t\tnext_phys_addr -\n+\t\t\t\t\tmem_region_hpa[regionidx_hpa].guest_phys_address;\n+\t\t\t\tLOG_DEBUG(VHOST_CONFIG, \"in fill_hpa_regions: guest\"\n+\t\t\t\t\t\" phys addr start[%d]:(%p)\\n\",\n+\t\t\t\t\tregionidx_hpa,\n+\t\t\t\t\t(void *)(uintptr_t)\n+\t\t\t\t\t(mem_region_hpa[regionidx_hpa].guest_phys_address));\n+\t\t\t\tLOG_DEBUG(VHOST_CONFIG,\n+\t\t\t\t\t\"in fill_hpa_regions: host  phys addr \"\n+\t\t\t\t\t\"start[%d]:(%p)\\n\",\n+\t\t\t\t\tregionidx_hpa,\n+\t\t\t\t\t(void *)(uintptr_t)\n+\t\t\t\t\t(mem_region_hpa[regionidx_hpa].host_phys_addr_offset));\n+\t\t\t\tk = 0;\n+\t\t\t} else {\n+\t\t\t\tk += page_size;\n+\t\t\t}\n+\t\t}\n+\t\tmem_region_hpa[regionidx_hpa].guest_phys_address_end\n+\t\t\t= mem_region_hpa[regionidx_hpa].guest_phys_address\n+\t\t\t+ k + page_size;\n+\t\tmem_region_hpa[regionidx_hpa].memory_size = k + page_size;\n+\t\tLOG_DEBUG(VHOST_CONFIG, \"in 
fill_hpa_regions: guest phys addr end  \"\n+\t\t\t\"[%d]:(%p)\\n\", regionidx_hpa,\n+\t\t\t(void *)(uintptr_t)\n+\t\t\t(mem_region_hpa[regionidx_hpa].guest_phys_address_end));\n+\t\tLOG_DEBUG(VHOST_CONFIG, \"in fill_hpa_regions: guest phys addr size \"\n+\t\t\t\"[%d]:(%p)\\n\", regionidx_hpa,\n+\t\t\t(void *)(uintptr_t)\n+\t\t\t(mem_region_hpa[regionidx_hpa].memory_size));\n+\t\t++regionidx_hpa;\n+\t}\n+\treturn regionidx_hpa;\n }\n \n /*\n@@ -3293,6 +2553,52 @@ new_device (struct virtio_net *dev)\n \tstruct virtio_net_data_ll *ll_dev;\n \tint lcore, core_add = 0;\n \tuint32_t device_num_min = num_devices;\n+\tstruct vhost_dev *vdev;\n+\tuint32_t regionidx;\n+\n+\tvdev = rte_zmalloc(\"vhost device\", sizeof(*vdev), CACHE_LINE_SIZE);\n+\tif (vdev == NULL) {\n+\t\tRTE_LOG(INFO, VHOST_DATA, \"(%\"PRIu64\") Couldn't allocate memory for vhost dev\\n\",\n+\t\t\tdev->device_fh);\n+\t\treturn -1;\n+\t}\n+\tvdev->dev = dev;\n+\tdev->priv = vdev;\n+\n+\tif (zero_copy) {\n+\t\tvdev->nregions_hpa = dev->mem->nregions;\n+\t\tfor (regionidx = 0; regionidx < dev->mem->nregions; regionidx++) {\n+\t\t\tvdev->nregions_hpa\n+\t\t\t\t+= check_hpa_regions(\n+\t\t\t\t\tdev->mem->regions[regionidx].guest_phys_address\n+\t\t\t\t\t+ dev->mem->regions[regionidx].address_offset,\n+\t\t\t\t\tdev->mem->regions[regionidx].memory_size);\n+\n+\t\t}\n+\n+\t\tvdev->regions_hpa = (struct virtio_memory_regions_hpa *) rte_zmalloc(\"vhost hpa region\",\n+\t\t\tsizeof(struct virtio_memory_regions_hpa) * vdev->nregions_hpa,\n+\t\t\tCACHE_LINE_SIZE);\n+\t\tif (vdev->regions_hpa == NULL) {\n+\t\t\tRTE_LOG(ERR, VHOST_CONFIG, \"Cannot allocate memory for hpa region\\n\");\n+\t\t\trte_free(vdev);\n+\t\t\treturn -1;\n+\t\t}\n+\n+\n+\t\tif (fill_hpa_memory_regions(\n+\t\t\tvdev->regions_hpa, dev->mem\n+\t\t\t) != vdev->nregions_hpa) {\n+\n+\t\t\tRTE_LOG(ERR, VHOST_CONFIG,\n+\t\t\t\t\"hpa memory regions number mismatch: \"\n+\t\t\t\t\"[%d]\\n\", vdev->nregions_hpa);\n+\t\t\trte_free(vdev->regions_hpa);\n+\t\t\trte_free(vdev);\n+\t\t\treturn -1;\n+\t\t}\n+\t}\n+\n \n \t/* Add device to main ll */\n \tll_dev = get_data_ll_free_entry(&ll_root_free);\n@@ -3300,15 +2606,18 @@ new_device (struct virtio_net *dev)\n \t\tRTE_LOG(INFO, VHOST_DATA, \"(%\"PRIu64\") No free entry found in linked list. 
Device limit \"\n \t\t\t\"of %d devices per core has been reached\\n\",\n \t\t\tdev->device_fh, num_devices);\n+\t\tif (vdev->regions_hpa)\n+\t\t\trte_free(vdev->regions_hpa);\n+\t\trte_free(vdev);\n \t\treturn -1;\n \t}\n-\tll_dev->dev = dev;\n+\tll_dev->vdev = vdev;\n \tadd_data_ll_entry(&ll_root_used, ll_dev);\n-\tll_dev->dev->vmdq_rx_q\n-\t\t= ll_dev->dev->device_fh * (num_queues / num_devices);\n+\tvdev->vmdq_rx_q\n+\t\t= dev->device_fh *  (num_queues / num_devices);\n \n \tif (zero_copy) {\n-\t\tuint32_t index = ll_dev->dev->vmdq_rx_q;\n+\t\tuint32_t index = vdev->vmdq_rx_q;\n \t\tuint32_t count_in_ring, i;\n \t\tstruct mbuf_table *tx_q;\n \n@@ -3339,47 +2648,51 @@ new_device (struct virtio_net *dev)\n \t\t\tdev->device_fh,\n \t\t\trte_ring_count(vpool_array[index].ring));\n \n-\t\ttx_q = &tx_queue_zcp[(uint16_t)dev->vmdq_rx_q];\n-\t\ttx_q->txq_id = dev->vmdq_rx_q;\n+\t\ttx_q = &tx_queue_zcp[(uint16_t)vdev->vmdq_rx_q];\n+\t\ttx_q->txq_id = vdev->vmdq_rx_q;\n \n-\t\tif (rte_eth_dev_tx_queue_start(ports[0], dev->vmdq_rx_q) != 0) {\n-\t\t\tstruct vpool *vpool = &vpool_array[dev->vmdq_rx_q];\n+\t\tif (rte_eth_dev_tx_queue_start(ports[0], vdev->vmdq_rx_q) != 0) {\n+\t\t\tstruct vpool *vpool = &vpool_array[vdev->vmdq_rx_q];\n \n \t\t\tLOG_DEBUG(VHOST_CONFIG,\n \t\t\t\t\"(%\"PRIu64\") In new_device: Failed to start \"\n \t\t\t\t\"tx queue:%d\\n\",\n-\t\t\t\tdev->device_fh, dev->vmdq_rx_q);\n+\t\t\t\tdev->device_fh, vdev->vmdq_rx_q);\n \n \t\t\tmbuf_destroy_zcp(vpool);\n+\t\t\trte_free(vdev->regions_hpa);\n+\t\t\trte_free(vdev);\n \t\t\treturn -1;\n \t\t}\n \n-\t\tif (rte_eth_dev_rx_queue_start(ports[0], dev->vmdq_rx_q) != 0) {\n-\t\t\tstruct vpool *vpool = &vpool_array[dev->vmdq_rx_q];\n+\t\tif (rte_eth_dev_rx_queue_start(ports[0], vdev->vmdq_rx_q) != 0) {\n+\t\t\tstruct vpool *vpool = &vpool_array[vdev->vmdq_rx_q];\n \n \t\t\tLOG_DEBUG(VHOST_CONFIG,\n \t\t\t\t\"(%\"PRIu64\") In new_device: Failed to start \"\n \t\t\t\t\"rx queue:%d\\n\",\n-\t\t\t\tdev->device_fh, dev->vmdq_rx_q);\n+\t\t\t\tdev->device_fh, vdev->vmdq_rx_q);\n \n \t\t\t/* Stop the TX queue. */\n \t\t\tif (rte_eth_dev_tx_queue_stop(ports[0],\n-\t\t\t\tdev->vmdq_rx_q) != 0) {\n+\t\t\t\tvdev->vmdq_rx_q) != 0) {\n \t\t\t\tLOG_DEBUG(VHOST_CONFIG,\n \t\t\t\t\t\"(%\"PRIu64\") In new_device: Failed to \"\n \t\t\t\t\t\"stop tx queue:%d\\n\",\n-\t\t\t\t\tdev->device_fh, dev->vmdq_rx_q);\n+\t\t\t\t\tdev->device_fh, vmdq_rx_q);\n \t\t\t}\n \n \t\t\tmbuf_destroy_zcp(vpool);\n+\t\t\trte_free(vdev->regions_hpa);\n+\t\t\trte_free(vdev);\n \t\t\treturn -1;\n \t\t}\n \n \t}\n \n \t/*reset ready flag*/\n-\tdev->ready = DEVICE_MAC_LEARNING;\n-\tdev->remove = 0;\n+\tvdev->ready = DEVICE_MAC_LEARNING;\n+\tvdev->remove = 0;\n \n \t/* Find a suitable lcore to add the device. 
*/\n \tRTE_LCORE_FOREACH_SLAVE(lcore) {\n@@ -3389,26 +2702,33 @@ new_device (struct virtio_net *dev)\n \t\t}\n \t}\n \t/* Add device to lcore ll */\n-\tll_dev->dev->coreid = core_add;\n-\tll_dev = get_data_ll_free_entry(&lcore_info[ll_dev->dev->coreid].lcore_ll->ll_root_free);\n+\tll_dev = get_data_ll_free_entry(&lcore_info[core_add].lcore_ll->ll_root_free);\n \tif (ll_dev == NULL) {\n \t\tRTE_LOG(INFO, VHOST_DATA, \"(%\"PRIu64\") Failed to add device to data core\\n\", dev->device_fh);\n-\t\tdev->ready = DEVICE_SAFE_REMOVE;\n+\t\tvdev->ready = DEVICE_SAFE_REMOVE;\n \t\tdestroy_device(dev);\n+\t\tif (vdev->regions_hpa)\n+\t\t\trte_free(vdev->regions_hpa);\n+\t\trte_free(vdev);\n \t\treturn -1;\n \t}\n-\tll_dev->dev = dev;\n-\tadd_data_ll_entry(&lcore_info[ll_dev->dev->coreid].lcore_ll->ll_root_used, ll_dev);\n+\tll_dev->vdev = vdev;\n+\tvdev->coreid = core_add;\n+\n+\n+\n+\tadd_data_ll_entry(&lcore_info[vdev->coreid].lcore_ll->ll_root_used, ll_dev);\n \n \t/* Initialize device stats */\n \tmemset(&dev_statistics[dev->device_fh], 0, sizeof(struct device_statistics));\n \n \t/* Disable notifications. */\n-\tset_irq_status(dev);\n-\tlcore_info[ll_dev->dev->coreid].lcore_ll->device_num++;\n+\trte_vhost_enable_guest_notification(dev, VIRTIO_RXQ, 0);\n+\trte_vhost_enable_guest_notification(dev, VIRTIO_TXQ, 0);\n+\tlcore_info[vdev->coreid].lcore_ll->device_num++;\n \tdev->flags |= VIRTIO_DEV_RUNNING;\n \n-\tRTE_LOG(INFO, VHOST_DATA, \"(%\"PRIu64\") Device has been added to data core %d\\n\", dev->device_fh, dev->coreid);\n+\tRTE_LOG(INFO, VHOST_DATA, \"(%\"PRIu64\") Device has been added to data core %d\\n\", dev->device_fh, vdev->coreid);\n \n \treturn 0;\n }\n@@ -3447,7 +2767,7 @@ print_stats(void)\n \n \t\tdev_ll = ll_root_used;\n \t\twhile (dev_ll != NULL) {\n-\t\t\tdevice_fh = (uint32_t)dev_ll->dev->device_fh;\n+\t\t\tdevice_fh = (uint32_t)dev_ll->vdev->dev->device_fh;\n \t\t\ttx_total = dev_statistics[device_fh].tx_total;\n \t\t\ttx = dev_statistics[device_fh].tx;\n \t\t\ttx_dropped = tx_total - tx;\n@@ -3707,15 +3027,18 @@ MAIN(int argc, char *argv[])\n \t\t\t\tlcore_id);\n \t}\n \n+\tif (mergeable == 0)\n+\t\trte_vhost_feature_disable(1ULL << VIRTIO_NET_F_MRG_RXBUF);\n+\n \t/* Register CUSE device to handle IOCTLs. */\n-\tret = register_cuse_device((char*)&dev_basename, dev_index, get_virtio_net_callbacks());\n+\tret = rte_vhost_driver_register((char *)&dev_basename);\n \tif (ret != 0)\n \t\trte_exit(EXIT_FAILURE,\"CUSE device setup failure.\\n\");\n \n-\tinit_virtio_net(&virtio_net_device_ops);\n+\trte_vhost_driver_callback_register(&virtio_net_device_ops);\n \n \t/* Start CUSE session. */\n-\tstart_cuse_session_loop();\n+\trte_vhost_driver_session_start();\n \treturn 0;\n \n }\ndiff --git a/examples/vhost/main.h b/examples/vhost/main.h\nindex c15d938..02e991d 100644\n--- a/examples/vhost/main.h\n+++ b/examples/vhost/main.h\n@@ -57,13 +57,50 @@\n #define RTE_LOGTYPE_VHOST_DATA   RTE_LOGTYPE_USER2\n #define RTE_LOGTYPE_VHOST_PORT   RTE_LOGTYPE_USER3\n \n-/*\n+/**\n+ * Information relating to memory regions including offsets to\n+ * addresses in host physical space.\n+ */\n+struct virtio_memory_regions_hpa {\n+\t/**< Base guest physical address of region. */\n+\tuint64_t    guest_phys_address;\n+\t/**< End guest physical address of region. */\n+\tuint64_t    guest_phys_address_end;\n+\t/**< Size of region. */\n+\tuint64_t    memory_size;\n+\t/**< Offset of region for gpa to hpa translation. 
*/\n+\tuint64_t    host_phys_addr_offset;\n+};\n+\n+/**\n  * Device linked list structure for data path.\n  */\n-struct virtio_net_data_ll\n-{\n-\tstruct virtio_net\t\t\t*dev;\t/* Pointer to device created by configuration core. */\n-\tstruct virtio_net_data_ll\t*next;  /* Pointer to next device in linked list. */\n+struct vhost_dev {\n+\t/**< Pointer to device created by vhost lib. */\n+\tstruct virtio_net      *dev;\n+\t/**< Number of memory regions for gpa to hpa translation. */\n+\tuint32_t nregions_hpa;\n+\t/**< Memory region information for gpa to hpa translation. */\n+\tstruct virtio_memory_regions_hpa *regions_hpa;\n+\t/**< Device MAC address (Obtained on first TX packet). */\n+\tstruct ether_addr mac_address;\n+\t/**< RX VMDQ queue number. */\n+\tuint16_t vmdq_rx_q;\n+\t/**< Vlan tag assigned to the pool */\n+\tuint32_t vlan_tag;\n+\t/**< Data core that the device is added to. */\n+\tuint16_t coreid;\n+\t/**< A device is set as ready if the MAC address has been set. */\n+\tvolatile uint8_t ready;\n+\t/**< Device is marked for removal from the data core. */\n+\tvolatile uint8_t remove;\n+} __rte_cache_aligned;\n+\n+struct virtio_net_data_ll {\n+\t/* Pointer to device created by configuration core. */\n+\tstruct vhost_dev\t\t*vdev;\n+\t/* Pointer to next device in linked list. */\n+\tstruct virtio_net_data_ll\t*next;\n };\n \n /*\n",
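Two data-path changes in the diff above are easy to miss in the escaped text. The first is in switch_worker(): the example's local virtio_dev_rx()/virtio_dev_merge_rx() copy routines are replaced by the vhost library's rte_vhost_enqueue_burst(), guarded by an optional bounded retry loop so that a momentarily full guest RX ring does not immediately cost packets. The following is a minimal standalone sketch of that pattern, not code from the patch: enqueue_with_retry() is a hypothetical wrapper name, while enable_retry, burst_rx_delay_time and burst_rx_retry_num are the example's existing runtime options, assumed here to be uint32_t globals.

    #include <stdint.h>
    #include <rte_branch_prediction.h>  /* unlikely() */
    #include <rte_cycles.h>             /* rte_delay_us() */
    #include <rte_mbuf.h>
    #include <rte_virtio_net.h>         /* vhost lib API used by the patch */

    /* The example's runtime options, set from the command line. */
    extern uint32_t enable_retry;
    extern uint32_t burst_rx_delay_time;
    extern uint32_t burst_rx_retry_num;

    /*
     * Hypothetical helper mirroring the inline logic in switch_worker():
     * if the guest RX ring cannot hold the whole burst, wait briefly and
     * re-check before enqueueing. Assumes the burst size is smaller than
     * the virtio queue size, otherwise the ring can never drain enough
     * to accept it.
     */
    static uint16_t
    enqueue_with_retry(struct virtio_net *dev, struct rte_mbuf **pkts,
            uint16_t rx_count)
    {
        uint32_t retry;

        if (enable_retry && unlikely(rx_count >
                rte_vring_available_entries(dev, VIRTIO_RXQ))) {
            for (retry = 0; retry < burst_rx_retry_num; retry++) {
                rte_delay_us(burst_rx_delay_time);
                if (rx_count <=
                        rte_vring_available_entries(dev, VIRTIO_RXQ))
                    break;
            }
        }
        /* Returns how many mbufs the vhost library actually accepted. */
        return rte_vhost_enqueue_burst(dev, VIRTIO_RXQ, pkts, rx_count);
    }

Bounding the wait at burst_rx_retry_num * burst_rx_delay_time microseconds keeps a stalled guest from blocking the polling lcore indefinitely.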
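The second change is the zero-copy bookkeeping: new_device() now builds a per-device regions_hpa table by splitting each guest memory region wherever the backing host-physical pages stop being contiguous; check_hpa_regions() counts the split points page by page, and fill_hpa_memory_regions() records each sub-region's guest-physical bounds together with its gpa-to-hpa offset. The sketch below shows how such a table would be consumed; gpa_to_hpa_sketch() is a hypothetical simplification that omits the addr_type classification done by the example's real gpa_to_hpa().

    #include <stdint.h>
    #include "main.h"  /* struct vhost_dev, struct virtio_memory_regions_hpa */

    /*
     * Hypothetical lookup over the regions_hpa table: a guest-physical
     * address falls in at most one sub-region, and the stored offset
     * turns it into a host-physical address usable for zero-copy DMA.
     */
    static uint64_t
    gpa_to_hpa_sketch(struct vhost_dev *vdev, uint64_t guest_pa)
    {
        uint32_t i;

        for (i = 0; i < vdev->nregions_hpa; i++) {
            struct virtio_memory_regions_hpa *r = &vdev->regions_hpa[i];

            if (guest_pa >= r->guest_phys_address &&
                guest_pa < r->guest_phys_address_end)
                return guest_pa + r->host_phys_addr_offset;
        }
        return 0; /* not found: address not backed by guest memory */
    }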
    "prefixes": [
        "dpdk-dev",
        "2/2"
    ]
}