get:
Show a patch.

patch:
Partially update a patch (only the fields supplied in the request are changed).

put:
Update a patch.
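
For illustration, a minimal sketch (not part of the API page itself) of driving this endpoint from Python with the requests library. The token header and the choice of writable fields ("state", "archived") are assumptions about how a maintainer would typically update a patch record; adjust them to your deployment.

    import requests

    BASE = "https://patches.dpdk.org/api/patches/104685/"

    # Read the patch record; GET needs no authentication.
    patch = requests.get(BASE).json()
    print(patch["name"], patch["state"])

    # Partially update the patch via PATCH. Authentication scheme and the
    # set of writable fields are assumptions (Token auth, maintainer rights).
    headers = {"Authorization": "Token <your-api-token>"}
    resp = requests.patch(
        BASE,
        json={"state": "accepted", "archived": True},
        headers=headers,
    )
    resp.raise_for_status()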

GET /api/patches/104685/?format=api
HTTP 200 OK
Allow: GET, PUT, PATCH, HEAD, OPTIONS
Content-Type: application/json
Vary: Accept

{
    "id": 104685,
    "url": "http://patches.dpdk.org/api/patches/104685/?format=api",
    "web_url": "http://patches.dpdk.org/project/dts/patch/20211125131311.134679-3-linglix.chen@intel.com/",
    "project": {
        "id": 3,
        "url": "http://patches.dpdk.org/api/projects/3/?format=api",
        "name": "DTS",
        "link_name": "dts",
        "list_id": "dts.dpdk.org",
        "list_email": "dts@dpdk.org",
        "web_url": "",
        "scm_url": "git://dpdk.org/tools/dts",
        "webscm_url": "http://git.dpdk.org/tools/dts/",
        "list_archive_url": "https://inbox.dpdk.org/dts",
        "list_archive_url_format": "https://inbox.dpdk.org/dts/{}",
        "commit_url_format": ""
    },
    "msgid": "<20211125131311.134679-3-linglix.chen@intel.com>",
    "list_archive_url": "https://inbox.dpdk.org/dts/20211125131311.134679-3-linglix.chen@intel.com",
    "date": "2021-11-25T13:13:11",
    "name": "[V1,2/2] test_plans/*: Change igb_uio to vfio-pci",
    "commit_ref": null,
    "pull_url": null,
    "state": "accepted",
    "archived": false,
    "hash": "53cf0695a30f8903f31c8d1b6dbe04a1f9040fec",
    "submitter": {
        "id": 1843,
        "url": "http://patches.dpdk.org/api/people/1843/?format=api",
        "name": "Lingli Chen",
        "email": "linglix.chen@intel.com"
    },
    "delegate": null,
    "mbox": "http://patches.dpdk.org/project/dts/patch/20211125131311.134679-3-linglix.chen@intel.com/mbox/",
    "series": [
        {
            "id": 20763,
            "url": "http://patches.dpdk.org/api/series/20763/?format=api",
            "web_url": "http://patches.dpdk.org/project/dts/list/?series=20763",
            "date": "2021-11-25T13:13:09",
            "name": "move ioat device IDs to DMA class: change misc to dma.",
            "version": 1,
            "mbox": "http://patches.dpdk.org/series/20763/mbox/"
        }
    ],
    "comments": "http://patches.dpdk.org/api/patches/104685/comments/",
    "check": "fail",
    "checks": "http://patches.dpdk.org/api/patches/104685/checks/",
    "tags": {},
    "related": [],
    "headers": {
        "Return-Path": "<dts-bounces@dpdk.org>",
        "X-Original-To": "patchwork@inbox.dpdk.org",
        "Delivered-To": "patchwork@inbox.dpdk.org",
        "Received": [
            "from mails.dpdk.org (mails.dpdk.org [217.70.189.124])\n\tby inbox.dpdk.org (Postfix) with ESMTP id 6DB22A0C4B;\n\tThu, 25 Nov 2021 06:12:40 +0100 (CET)",
            "from [217.70.189.124] (localhost [127.0.0.1])\n\tby mails.dpdk.org (Postfix) with ESMTP id 62C6B40E03;\n\tThu, 25 Nov 2021 06:12:40 +0100 (CET)",
            "from mga05.intel.com (mga05.intel.com [192.55.52.43])\n by mails.dpdk.org (Postfix) with ESMTP id E16CF40140\n for <dts@dpdk.org>; Thu, 25 Nov 2021 06:12:36 +0100 (CET)",
            "from fmsmga005.fm.intel.com ([10.253.24.32])\n by fmsmga105.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;\n 24 Nov 2021 21:12:35 -0800",
            "from unknown (HELO dpdk.lan) ([10.240.183.77])\n by fmsmga005-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;\n 24 Nov 2021 21:12:33 -0800"
        ],
        "X-IronPort-AV": [
            "E=McAfee;i=\"6200,9189,10178\"; a=\"321679159\"",
            "E=Sophos;i=\"5.87,262,1631602800\"; d=\"scan'208\";a=\"321679159\"",
            "E=Sophos;i=\"5.87,262,1631602800\"; d=\"scan'208\";a=\"741061179\""
        ],
        "From": "Lingli Chen <linglix.chen@intel.com>",
        "To": "dts@dpdk.org",
        "Cc": "Lingli Chen <linglix.chen@intel.com>",
        "Subject": "[dts][PATCH V1 2/2] test_plans/*: Change igb_uio to vfio-pci",
        "Date": "Thu, 25 Nov 2021 13:13:11 +0000",
        "Message-Id": "<20211125131311.134679-3-linglix.chen@intel.com>",
        "X-Mailer": "git-send-email 2.33.1",
        "In-Reply-To": "<20211125131311.134679-1-linglix.chen@intel.com>",
        "References": "<20211125131311.134679-1-linglix.chen@intel.com>",
        "MIME-Version": "1.0",
        "Content-Transfer-Encoding": "8bit",
        "X-BeenThere": "dts@dpdk.org",
        "X-Mailman-Version": "2.1.29",
        "Precedence": "list",
        "List-Id": "test suite reviews and discussions <dts.dpdk.org>",
        "List-Unsubscribe": "<https://mails.dpdk.org/options/dts>,\n <mailto:dts-request@dpdk.org?subject=unsubscribe>",
        "List-Archive": "<http://mails.dpdk.org/archives/dts/>",
        "List-Post": "<mailto:dts@dpdk.org>",
        "List-Help": "<mailto:dts-request@dpdk.org?subject=help>",
        "List-Subscribe": "<https://mails.dpdk.org/listinfo/dts>,\n <mailto:dts-request@dpdk.org?subject=subscribe>",
        "Errors-To": "dts-bounces@dpdk.org"
    },
    "content": "Cbdma only tests vfio-pci from 21.11, so remove igb_uio.\n\nSigned-off-by: Lingli Chen <linglix.chen@intel.com>\n---\n test_plans/cbdma_test_plan.rst                | 14 ++--\n test_plans/dpdk_gro_lib_test_plan.rst         | 24 +++---\n test_plans/dpdk_gso_lib_test_plan.rst         | 12 +--\n .../dpdk_hugetlbfs_mount_size_test_plan.rst   | 10 +--\n .../pvp_diff_qemu_version_test_plan.rst       |  8 +-\n .../pvp_multi_paths_performance_test_plan.rst | 20 ++---\n ...host_single_core_performance_test_plan.rst | 20 ++---\n ...rtio_single_core_performance_test_plan.rst | 20 ++---\n ...emu_multi_paths_port_restart_test_plan.rst | 24 +++---\n test_plans/pvp_share_lib_test_plan.rst        |  4 +-\n .../pvp_vhost_user_reconnect_test_plan.rst    | 40 +++++-----\n test_plans/pvp_virtio_bonding_test_plan.rst   | 12 +--\n ...pvp_virtio_user_2M_hugepages_test_plan.rst |  4 +-\n ...er_multi_queues_port_restart_test_plan.rst | 20 ++---\n .../vdev_primary_secondary_test_plan.rst      | 14 ++--\n test_plans/vhost_cbdma_test_plan.rst          | 10 +--\n .../vhost_event_idx_interrupt_test_plan.rst   |  8 +-\n .../vhost_multi_queue_qemu_test_plan.rst      | 12 +--\n test_plans/vhost_user_interrupt_test_plan.rst |  4 +-\n .../vhost_user_live_migration_test_plan.rst   | 80 +++++++++----------\n .../vhost_virtio_pmd_interrupt_test_plan.rst  | 14 ++--\n .../vhost_virtio_user_interrupt_test_plan.rst | 12 +--\n .../virtio_event_idx_interrupt_test_plan.rst  | 20 ++---\n .../virtio_pvp_regression_test_plan.rst       | 32 ++++----\n ...tio_user_as_exceptional_path_test_plan.rst | 12 +--\n ...ser_for_container_networking_test_plan.rst |  4 +-\n test_plans/vm2vm_virtio_pmd_test_plan.rst     | 54 ++++++-------\n test_plans/vswitch_sample_cbdma_test_plan.rst | 12 +--\n 28 files changed, 260 insertions(+), 260 deletions(-)",
    "diff": "diff --git a/test_plans/cbdma_test_plan.rst b/test_plans/cbdma_test_plan.rst\nindex d9dcc193..53fe5eb7 100644\n--- a/test_plans/cbdma_test_plan.rst\n+++ b/test_plans/cbdma_test_plan.rst\n@@ -93,7 +93,7 @@ NIC RX -> copy packet -> free original -> update mac addresses -> NIC TX\n Test Case1: CBDMA basic test with differnet size packets\n ========================================================\n \n-1.Bind one cbdma port and one nic port to igb_uio driver.\n+1.Bind one cbdma port and one nic port to vfio-pci driver.\n \n 2.Launch dma app::\n \n@@ -106,7 +106,7 @@ Test Case1: CBDMA basic test with differnet size packets\n Test Case2: CBDMA test with multi-threads\n =========================================\n \n-1.Bind one cbdma port and one nic port to igb_uio driver.\n+1.Bind one cbdma port and one nic port to vfio-pci driver.\n \n 2.Launch dma app with three cores::\n \n@@ -119,7 +119,7 @@ Test Case2: CBDMA test with multi-threads\n Test Case3: CBDMA test with multi nic ports\n ===========================================\n \n-1.Bind two cbdma ports and two nic ports to igb_uio driver.\n+1.Bind two cbdma ports and two nic ports to vfio-pci driver.\n \n 2.Launch dma app with multi-ports::\n \n@@ -132,7 +132,7 @@ Test Case3: CBDMA test with multi nic ports\n Test Case4: CBDMA test with multi-queues\n ========================================\n \n-1.Bind two cbdma ports and one nic port to igb_uio driver.\n+1.Bind two cbdma ports and one nic port to vfio-pci driver.\n \n 2.Launch dma app with multi-queues::\n \n@@ -148,7 +148,7 @@ Check performance gains status when queue numbers added.\n Test Case5: CBDMA performance cmparison between mac-updating and no-mac-updating\n ================================================================================\n \n-1.Bind one cbdma ports and one nic port to igb_uio driver.\n+1.Bind one cbdma ports and one nic port to vfio-pci driver.\n \n 2.Launch dma app::\n \n@@ -173,7 +173,7 @@ Test Case5: CBDMA performance cmparison between mac-updating and no-mac-updating\n Test Case6: CBDMA performance cmparison between HW copies and SW copies using different packet size\n ===================================================================================================\n \n-1.Bind four cbdma pors and one nic port to igb_uio driver.\n+1.Bind four cbdma pors and one nic port to vfio-pci driver.\n \n 2.Launch dma app with three cores::\n \n@@ -198,7 +198,7 @@ Test Case6: CBDMA performance cmparison between HW copies and SW copies using di\n Test Case7: CBDMA multi application mode test\n =============================================\n \n-1.Bind four cbdma ports to ugb_uio driver.\n+1.Bind four cbdma ports to vfio-pci driver.\n \n 2.Launch test-pmd app with three cores and proc_type primary:\n \ndiff --git a/test_plans/dpdk_gro_lib_test_plan.rst b/test_plans/dpdk_gro_lib_test_plan.rst\nindex ef16d997..9685afc1 100644\n--- a/test_plans/dpdk_gro_lib_test_plan.rst\n+++ b/test_plans/dpdk_gro_lib_test_plan.rst\n@@ -127,9 +127,9 @@ Test Case1: DPDK GRO lightmode test with tcp/ipv4 traffic\n     ip netns exec ns1 ifconfig [enp216s0f0] 1.1.1.8 up\n     ip netns exec ns1 ethtool -K [enp216s0f0] tso on\n \n-2. Bind nic1 to igb_uio, launch vhost-user with testpmd and set flush interval to 1::\n+2. 
Bind nic1 to vfio-pci, launch vhost-user with testpmd and set flush interval to 1::\n \n-    ./dpdk-devbind.py -b igb_uio xx:xx.x\n+    ./dpdk-devbind.py -b vfio-pci xx:xx.x\n     ./testpmd -l 2-4 -n 4 \\\n     --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0' -- -i --txd=1024 --rxd=1024\n     testpmd>set fwd csum\n@@ -179,9 +179,9 @@ Test Case2: DPDK GRO heavymode test with tcp/ipv4 traffic\n     ip netns exec ns1 ifconfig [enp216s0f0] 1.1.1.8 up\n     ip netns exec ns1 ethtool -K [enp216s0f0] tso on\n \n-2. Bind nic1 to igb_uio, launch vhost-user with testpmd and set flush interval to 2::\n+2. Bind nic1 to vfio-pci, launch vhost-user with testpmd and set flush interval to 2::\n \n-    ./dpdk-devbind.py -b igb_uio xx:xx.x\n+    ./dpdk-devbind.py -b vfio-pci xx:xx.x\n     ./testpmd -l 2-4 -n 4 \\\n     --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0' -- -i --txd=1024 --rxd=1024\n     testpmd>set fwd csum\n@@ -231,9 +231,9 @@ Test Case3: DPDK GRO heavymode_flush4 test with tcp/ipv4 traffic\n     ip netns exec ns1 ifconfig [enp216s0f0] 1.1.1.8 up\n     ip netns exec ns1 ethtool -K [enp216s0f0] tso on\n \n-2. Bind nic1 to igb_uio, launch vhost-user with testpmd and set flush interval to 4::\n+2. Bind nic1 to vfio-pci, launch vhost-user with testpmd and set flush interval to 4::\n \n-    ./dpdk-devbind.py -b igb_uio xx:xx.x\n+    ./dpdk-devbind.py -b vfio-pci xx:xx.x\n     ./testpmd -l 2-4 -n 4 \\\n     --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0' -- -i --txd=1024 --rxd=1024\n     testpmd>set fwd csum\n@@ -299,9 +299,9 @@ Vxlan topology\n     ip netns exec t2 ip addr add $VXLAN_IP/24 dev $VXLAN_NAME\n     ip netns exec t2 ip link set up dev $VXLAN_NAME\n \n-2. Bind nic1 to igb_uio, launch vhost-user with testpmd and set flush interval to 4::\n+2. Bind nic1 to vfio-pci, launch vhost-user with testpmd and set flush interval to 4::\n \n-    ./dpdk-devbind.py -b igb_uio xx:xx.x\n+    ./dpdk-devbind.py -b vfio-pci xx:xx.x\n     ./testpmd -l 2-4 -n 4 \\\n     --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0' -- -i --txd=1024 --rxd=1024\n     testpmd>set fwd csum\n@@ -363,9 +363,9 @@ NIC2(In kernel) -> NIC1(DPDK) -> testpmd(csum fwd) -> Vhost -> Virtio-net\n     ip netns exec ns1 ifconfig enp26s0f0 1.1.1.8 up\n     ip netns exec ns1 ethtool -K enp26s0f0 tso on\n \n-2. Bind cbdma port and nic1 to igb_uio, launch vhost-user with testpmd and set flush interval to 1::\n+2. Bind cbdma port and nic1 to vfio-pci, launch vhost-user with testpmd and set flush interval to 1::\n \n-    ./dpdk-devbind.py -b igb_uio xx:xx.x\n+    ./dpdk-devbind.py -b vfio-pci xx:xx.x\n     ./x86_64-native-linuxapp-gcc/app/testpmd -l 29-31 -n 4 \\\n     --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=2' -- -i --txd=1024 --rxd=1024 --txq=2 --rxq=2 --nb-cores=2\n     set fwd csum\n@@ -421,9 +421,9 @@ NIC2(In kernel) -> NIC1(DPDK) -> testpmd(csum fwd) -> Vhost -> Virtio-net\n     ip netns exec ns1 ifconfig enp26s0f0 1.1.1.8 up\n     ip netns exec ns1 ethtool -K enp26s0f0 tso on\n \n-2. Bind cbdma port and nic1 to igb_uio, launch vhost-user with testpmd and set flush interval to 1::\n+2. 
Bind cbdma port and nic1 to vfio-pci, launch vhost-user with testpmd and set flush interval to 1::\n \n-    ./dpdk-devbind.py -b igb_uio xx:xx.x\n+    ./dpdk-devbind.py -b vfio-pci xx:xx.x\n     ./x86_64-native-linuxapp-gcc/app/testpmd -l 29-31 -n 4 \\\n     --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=2,dmas=[txq0@80:04.0;txq1@80:04.1]' -- -i --txd=1024 --rxd=1024 --txq=2 --rxq=2 --nb-cores=2\n     set fwd csum\ndiff --git a/test_plans/dpdk_gso_lib_test_plan.rst b/test_plans/dpdk_gso_lib_test_plan.rst\nindex be1bdd20..2c8b32ad 100644\n--- a/test_plans/dpdk_gso_lib_test_plan.rst\n+++ b/test_plans/dpdk_gso_lib_test_plan.rst\n@@ -96,9 +96,9 @@ Test Case1: DPDK GSO test with tcp traffic\n     ip netns exec ns1 ifconfig [enp216s0f0] 1.1.1.8 up\n     ip netns exec ns1 ethtool -K [enp216s0f0] gro on\n \n-2. Bind nic1 to igb_uio, launch vhost-user with testpmd::\n+2. Bind nic1 to vfio-pci, launch vhost-user with testpmd::\n \n-    ./dpdk-devbind.py -b igb_uio xx:xx.x       # xx:xx.x is the pci addr of nic1\n+    ./dpdk-devbind.py -b vfio-pci xx:xx.x       # xx:xx.x is the pci addr of nic1\n     ./testpmd -l 2-4 -n 4 \\\n     --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0' -- -i --txd=1024 --rxd=1024\n     testpmd>set fwd csum\n@@ -156,9 +156,9 @@ Test Case3: DPDK GSO test with vxlan traffic\n     ip netns exec ns1 ip link add vxlan100 type vxlan id 1000 remote 188.0.0.2 local 188.0.0.1 dstport 4789 dev [enp216s0f0]\n     ip netns exec ns1 ifconfig vxlan100 1.1.1.1/24 up\n \n-2. Bind nic1 to igb_uio, launch vhost-user with testpmd::\n+2. Bind nic1 to vfio-pci, launch vhost-user with testpmd::\n \n-    ./dpdk-devbind.py -b igb_uio xx:xx.x\n+    ./dpdk-devbind.py -b vfio-pci xx:xx.x\n     ./testpmd -l 2-4 -n 4 \\\n     --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0' -- -i --txd=1024 --rxd=1024\n     testpmd>set fwd csum\n@@ -210,9 +210,9 @@ Test Case4: DPDK GSO test with gre traffic\n     ip netns exec ns1 ip tunnel add gre100 mode gre remote 188.0.0.2 local 188.0.0.1\n     ip netns exec ns1 ifconfig gre100 1.1.1.1/24 up\n \n-2. Bind nic1 to igb_uio, launch vhost-user with testpmd::\n+2. Bind nic1 to vfio-pci, launch vhost-user with testpmd::\n \n-    ./dpdk-devbind.py -b igb_uio xx:xx.x\n+    ./dpdk-devbind.py -b vfio-pci xx:xx.x\n     ./testpmd -l 2-4 -n 4 \\\n     --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0' -- -i --txd=1024 --rxd=1024\n     testpmd>set fwd csum\ndiff --git a/test_plans/dpdk_hugetlbfs_mount_size_test_plan.rst b/test_plans/dpdk_hugetlbfs_mount_size_test_plan.rst\nindex 218b9604..bda21d39 100644\n--- a/test_plans/dpdk_hugetlbfs_mount_size_test_plan.rst\n+++ b/test_plans/dpdk_hugetlbfs_mount_size_test_plan.rst\n@@ -49,7 +49,7 @@ Test Case 1: default hugepage size w/ and w/o numa\n \n     mount -t hugetlbfs hugetlbfs /mnt/huge\n \n-2. Bind one nic port to igb_uio driver, launch testpmd::\n+2. Bind one nic port to vfio-pci driver, launch testpmd::\n \n     ./dpdk-testpmd -c 0x3 -n 4 --huge-dir /mnt/huge --file-prefix=abc -- -i\n     testpmd>start\n@@ -71,7 +71,7 @@ Test Case 2: mount size exactly match total hugepage size with two mount points\n     mount -t hugetlbfs -o size=4G hugetlbfs /mnt/huge1\n     mount -t hugetlbfs -o size=4G hugetlbfs /mnt/huge2\n \n-2. Bind two nic ports to igb_uio driver, launch testpmd with numactl::\n+2. 
Bind two nic ports to vfio-pci driver, launch testpmd with numactl::\n \n     numactl --membind=1 ./dpdk-testpmd -l 31-32 -n 4 --legacy-mem --socket-mem 0,2048 --huge-dir /mnt/huge1 --file-prefix=abc -a 82:00.0 -- -i --socket-num=1 --no-numa\n     testpmd>start\n@@ -88,7 +88,7 @@ Test Case 3: mount size greater than total hugepage size with single mount point\n \n     mount -t hugetlbfs -o size=9G hugetlbfs /mnt/huge\n \n-2. Bind one nic port to igb_uio driver, launch testpmd::\n+2. Bind one nic port to vfio-pci driver, launch testpmd::\n \n     ./dpdk-testpmd -c 0x3 -n 4 --legacy-mem --huge-dir /mnt/huge --file-prefix=abc -- -i\n     testpmd>start\n@@ -104,7 +104,7 @@ Test Case 4: mount size greater than total hugepage size with multiple mount poi\n     mount -t hugetlbfs -o size=4G hugetlbfs /mnt/huge2\n     mount -t hugetlbfs -o size=1G hugetlbfs /mnt/huge3\n \n-2. Bind one nic port to igb_uio driver, launch testpmd::\n+2. Bind one nic port to vfio-pci driver, launch testpmd::\n \n     numactl --membind=0 ./dpdk-testpmd -c 0x3 -n 4  --legacy-mem --socket-mem 2048,0 --huge-dir /mnt/huge1 --file-prefix=abc -- -i --socket-num=0 --no-numa\n     testpmd>start\n@@ -120,7 +120,7 @@ Test Case 4: mount size greater than total hugepage size with multiple mount poi\n Test Case 5: run dpdk app in limited hugepages controlled by cgroup\n ===================================================================\n \n-1. Bind one nic port to igb_uio driver, launch testpmd in limited hugepages::\n+1. Bind one nic port to vfio-pci driver, launch testpmd in limited hugepages::\n \n     cgcreate -g hugetlb:/test-subgroup\n     cgset -r hugetlb.1GB.limit_in_bytes=2147483648 test-subgroup\ndiff --git a/test_plans/pvp_diff_qemu_version_test_plan.rst b/test_plans/pvp_diff_qemu_version_test_plan.rst\nindex 612e0e26..125c9a65 100644\n--- a/test_plans/pvp_diff_qemu_version_test_plan.rst\n+++ b/test_plans/pvp_diff_qemu_version_test_plan.rst\n@@ -47,7 +47,7 @@ TG --> NIC --> Vhost --> Virtio--> Vhost --> NIC --> TG\n Test Case 1: PVP multi qemu version test with virtio 0.95 mergeable path\n ========================================================================\n \n-1. Bind one port to igb_uio, then launch testpmd by below command::\n+1. Bind one port to vfio-pci, then launch testpmd by below command::\n \n     rm -rf vhost-net*\n     ./testpmd -c 0xe -n 4 \\\n@@ -72,7 +72,7 @@ Test Case 1: PVP multi qemu version test with virtio 0.95 mergeable path\n     -device virtio-net-pci,netdev=netdev1,mac=52:54:00:00:00:01,mrg_rxbuf=on \\\n     -vnc :10\n \n-4. On VM, bind virtio net to igb_uio and run testpmd ::\n+4. On VM, bind virtio net to vfio-pci and run testpmd ::\n     ./testpmd -c 0x3 -n 3 -- -i \\\n     --nb-cores=1 --txd=1024 --rxd=1024\n     testpmd>set fwd mac\n@@ -85,7 +85,7 @@ Test Case 1: PVP multi qemu version test with virtio 0.95 mergeable path\n Test Case 2: PVP test with virtio 1.0 mergeable path\n ====================================================\n \n-1. Bind one port to igb_uio, then launch testpmd by below command::\n+1. Bind one port to vfio-pci, then launch testpmd by below command::\n \n     rm -rf vhost-net*\n     ./testpmd -c 0xe -n 4 \\\n@@ -110,7 +110,7 @@ Test Case 2: PVP test with virtio 1.0 mergeable path\n     -device virtio-net-pci,netdev=netdev1,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on \\\n     -vnc :10\n \n-3. On VM, bind virtio net to igb_uio and run testpmd::\n+3. 
On VM, bind virtio net to vfio-pci and run testpmd::\n \n     ./testpmd -c 0x3 -n 3 -- -i \\\n     --nb-cores=1 --txd=1024 --rxd=1024\ndiff --git a/test_plans/pvp_multi_paths_performance_test_plan.rst b/test_plans/pvp_multi_paths_performance_test_plan.rst\nindex 11a700f0..3a0a8a72 100644\n--- a/test_plans/pvp_multi_paths_performance_test_plan.rst\n+++ b/test_plans/pvp_multi_paths_performance_test_plan.rst\n@@ -56,7 +56,7 @@ TG --> NIC --> Vhost --> Virtio--> Vhost --> NIC --> TG\n Test Case 1: pvp test with virtio 1.1 mergeable path\n ====================================================\n \n-1. Bind one port to igb_uio, then launch vhost by below command::\n+1. Bind one port to vfio-pci, then launch vhost by below command::\n \n     rm -rf vhost-net*\n     ./testpmd -n 4 -l 2-3 \\\n@@ -80,7 +80,7 @@ Test Case 1: pvp test with virtio 1.1 mergeable path\n Test Case 2: pvp test with virtio 1.1 non-mergeable path\n ========================================================\n \n-1. Bind one port to igb_uio, then launch vhost by below command::\n+1. Bind one port to vfio-pci, then launch vhost by below command::\n \n     rm -rf vhost-net*\n     ./testpmd -n 4 -l 2-3 \\\n@@ -104,7 +104,7 @@ Test Case 2: pvp test with virtio 1.1 non-mergeable path\n Test Case 3: pvp test with inorder mergeable path\n =================================================\n \n-1. Bind one port to igb_uio, then launch vhost by below command::\n+1. Bind one port to vfio-pci, then launch vhost by below command::\n \n     rm -rf vhost-net*\n     ./testpmd -n 4 -l 2-3 \\\n@@ -128,7 +128,7 @@ Test Case 3: pvp test with inorder mergeable path\n Test Case 4: pvp test with inorder non-mergeable path\n =====================================================\n \n-1. Bind one port to igb_uio, then launch vhost by below command::\n+1. Bind one port to vfio-pci, then launch vhost by below command::\n \n     rm -rf vhost-net*\n     ./testpmd -n 4 -l 2-4 \\\n@@ -152,7 +152,7 @@ Test Case 4: pvp test with inorder non-mergeable path\n Test Case 5: pvp test with mergeable path\n =========================================\n \n-1. Bind one port to igb_uio, then launch vhost by below command::\n+1. Bind one port to vfio-pci, then launch vhost by below command::\n \n     rm -rf vhost-net*\n     ./testpmd -n 4 -l 2-4 \\\n@@ -176,7 +176,7 @@ Test Case 5: pvp test with mergeable path\n Test Case 6: pvp test with non-mergeable path\n =============================================\n \n-1. Bind one port to igb_uio, then launch vhost by below command::\n+1. Bind one port to vfio-pci, then launch vhost by below command::\n \n     rm -rf vhost-net*\n     ./testpmd -n 4 -l 2-4 \\\n@@ -200,7 +200,7 @@ Test Case 6: pvp test with non-mergeable path\n Test Case 7: pvp test with vectorized_rx path\n =============================================\n \n-1. Bind one port to igb_uio, then launch vhost by below command::\n+1. Bind one port to vfio-pci, then launch vhost by below command::\n \n     rm -rf vhost-net*\n     ./testpmd -n 4 -l 2-4 \\\n@@ -224,7 +224,7 @@ Test Case 7: pvp test with vectorized_rx path\n Test Case 8: pvp test with virtio 1.1 inorder mergeable path\n ============================================================\n \n-1. Bind one port to igb_uio, then launch vhost by below command::\n+1. 
Bind one port to vfio-pci, then launch vhost by below command::\n \n     rm -rf vhost-net*\n     ./testpmd -n 4 -l 2-3 \\\n@@ -248,7 +248,7 @@ Test Case 8: pvp test with virtio 1.1 inorder mergeable path\n Test Case 9: pvp test with virtio 1.1 inorder non-mergeable path\n ================================================================\n \n-1. Bind one port to igb_uio, then launch vhost by below command::\n+1. Bind one port to vfio-pci, then launch vhost by below command::\n \n     rm -rf vhost-net*\n     ./testpmd -n 4 -l 2-3 \\\n@@ -272,7 +272,7 @@ Test Case 9: pvp test with virtio 1.1 inorder non-mergeable path\n Test Case 10: pvp test with virtio 1.1 vectorized path\n ======================================================\n \n-1. Bind one port to igb_uio, then launch vhost by below command::\n+1. Bind one port to vfio-pci, then launch vhost by below command::\n \n     rm -rf vhost-net*\n     ./testpmd -n 4 -l 2-3 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0' \\\ndiff --git a/test_plans/pvp_multi_paths_vhost_single_core_performance_test_plan.rst b/test_plans/pvp_multi_paths_vhost_single_core_performance_test_plan.rst\nindex 00e6009c..b10c415a 100644\n--- a/test_plans/pvp_multi_paths_vhost_single_core_performance_test_plan.rst\n+++ b/test_plans/pvp_multi_paths_vhost_single_core_performance_test_plan.rst\n@@ -47,7 +47,7 @@ TG --> NIC --> Virtio --> Vhost --> Virtio --> NIC --> TG\n Test Case 1: vhost single core performance test with virtio 1.1 mergeable path\n ==============================================================================\n \n-1. Bind one port to igb_uio, then launch vhost by below command::\n+1. Bind one port to vfio-pci, then launch vhost by below command::\n \n     rm -rf vhost-net*\n     ./testpmd -l 3-4 -n 4 --no-pci --file-prefix=vhost \\\n@@ -68,7 +68,7 @@ Test Case 1: vhost single core performance test with virtio 1.1 mergeable path\n Test Case 2: vhost single core performance test with virtio 1.1 non-mergeable path\n ==================================================================================\n \n-1. Bind one port to igb_uio, then launch vhost by below command::\n+1. Bind one port to vfio-pci, then launch vhost by below command::\n \n     rm -rf vhost-net*\n     ./testpmd -l 3-4 -n 4 --no-pci --file-prefix=vhost \\\n@@ -89,7 +89,7 @@ Test Case 2: vhost single core performance test with virtio 1.1 non-mergeable pa\n Test Case 3: vhost single core performance test with inorder mergeable path\n ===========================================================================\n \n-1. Bind one port to igb_uio, then launch vhost by below command::\n+1. Bind one port to vfio-pci, then launch vhost by below command::\n \n     rm -rf vhost-net*\n     ./testpmd -l 3-4 -n 4 --no-pci --file-prefix=vhost \\\n@@ -110,7 +110,7 @@ Test Case 3: vhost single core performance test with inorder mergeable path\n Test Case 4: vhost single core performance test with inorder non-mergeable path\n ===============================================================================\n \n-1. Bind one port to igb_uio, then launch vhost by below command::\n+1. Bind one port to vfio-pci, then launch vhost by below command::\n \n     rm -rf vhost-net*\n     ./testpmd -l 3-4 -n 4 --no-pci --file-prefix=vhost \\\n@@ -131,7 +131,7 @@ Test Case 4: vhost single core performance test with inorder non-mergeable path\n Test Case 5: vhost single core performance test with mergeable path\n ===================================================================\n \n-1. 
Bind one port to igb_uio, then launch vhost by below command::\n+1. Bind one port to vfio-pci, then launch vhost by below command::\n \n     rm -rf vhost-net*\n     ./testpmd -l 3-4 -n 4 --no-pci --file-prefix=vhost \\\n@@ -152,7 +152,7 @@ Test Case 5: vhost single core performance test with mergeable path\n Test Case 6: vhost single core performance test with non-mergeable path\n =======================================================================\n \n-1. Bind one port to igb_uio, then launch vhost by below command::\n+1. Bind one port to vfio-pci, then launch vhost by below command::\n \n     rm -rf vhost-net*\n     ./testpmd -l 3-4 -n 4 --no-pci --file-prefix=vhost \\\n@@ -173,7 +173,7 @@ Test Case 6: vhost single core performance test with non-mergeable path\n Test Case 7: vhost single core performance test with vectorized_rx path\n =======================================================================\n \n-1. Bind one port to igb_uio, then launch vhost by below command::\n+1. Bind one port to vfio-pci, then launch vhost by below command::\n \n     rm -rf vhost-net*\n     ./testpmd -l 3-4 -n 4 --no-pci --file-prefix=vhost \\\n@@ -194,7 +194,7 @@ Test Case 7: vhost single core performance test with vectorized_rx path\n Test Case 8: vhost single core performance test with virtio 1.1 inorder mergeable path\n ======================================================================================\n \n-1. Bind one port to igb_uio, then launch vhost by below command::\n+1. Bind one port to vfio-pci, then launch vhost by below command::\n \n     rm -rf vhost-net*\n     ./testpmd -l 3-4 -n 4 --no-pci --file-prefix=vhost \\\n@@ -215,7 +215,7 @@ Test Case 8: vhost single core performance test with virtio 1.1 inorder mergeabl\n Test Case 9: vhost single core performance test with virtio 1.1 inorder non-mergeable path\n ==========================================================================================\n \n-1. Bind one port to igb_uio, then launch vhost by below command::\n+1. Bind one port to vfio-pci, then launch vhost by below command::\n \n     rm -rf vhost-net*\n     ./testpmd -l 3-4 -n 4 --no-pci --file-prefix=vhost \\\n@@ -236,7 +236,7 @@ Test Case 9: vhost single core performance test with virtio 1.1 inorder non-merg\n Test Case 10: vhost single core performance test with virtio 1.1 vectorized path\n ================================================================================\n \n-1. Bind one port to igb_uio, then launch vhost by below command::\n+1. Bind one port to vfio-pci, then launch vhost by below command::\n \n     rm -rf vhost-net*\n     ./testpmd -l 3-4 -n 4 --no-pci --file-prefix=vhost \\\ndiff --git a/test_plans/pvp_multi_paths_virtio_single_core_performance_test_plan.rst b/test_plans/pvp_multi_paths_virtio_single_core_performance_test_plan.rst\nindex 3a66cd12..ea7ff698 100644\n--- a/test_plans/pvp_multi_paths_virtio_single_core_performance_test_plan.rst\n+++ b/test_plans/pvp_multi_paths_virtio_single_core_performance_test_plan.rst\n@@ -47,7 +47,7 @@ TG --> NIC --> Vhost --> Virtio--> Vhost --> NIC --> TG\n Test Case 1: virtio single core performance test with virtio 1.1 mergeable path\n ===============================================================================\n \n-1. Bind one port to igb_uio, then launch vhost by below command::\n+1. 
Bind one port to vfio-pci, then launch vhost by below command::\n \n     rm -rf vhost-net*\n     ./testpmd -n 4 -l 2-4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0' -- -i --nb-cores=2 --txd=1024 --rxd=1024\n@@ -67,7 +67,7 @@ Test Case 1: virtio single core performance test with virtio 1.1 mergeable path\n Test Case 2: virtio single core performance test with virtio 1.1 non-mergeable path\n ===================================================================================\n \n-1. Bind one port to igb_uio, then launch vhost by below command::\n+1. Bind one port to vfio-pci, then launch vhost by below command::\n \n     rm -rf vhost-net*\n     ./testpmd -n 4 -l 2-4  \\\n@@ -88,7 +88,7 @@ Test Case 2: virtio single core performance test with virtio 1.1 non-mergeable p\n Test Case 3: virtio single core performance test with inorder mergeable path\n ============================================================================\n \n-1. Bind one port to igb_uio, then launch vhost by below command::\n+1. Bind one port to vfio-pci, then launch vhost by below command::\n \n     rm -rf vhost-net*\n     ./testpmd -n 4 -l 2-4  \\\n@@ -109,7 +109,7 @@ Test Case 3: virtio single core performance test with inorder mergeable path\n Test Case 4: virtio single core performance test with inorder non-mergeable path\n ================================================================================\n \n-1. Bind one port to igb_uio, then launch vhost by below command::\n+1. Bind one port to vfio-pci, then launch vhost by below command::\n \n     rm -rf vhost-net*\n     ./testpmd -n 4 -l 2-4  \\\n@@ -130,7 +130,7 @@ Test Case 4: virtio single core performance test with inorder non-mergeable path\n Test Case 5: virtio single core performance test with mergeable path\n ====================================================================\n \n-1. Bind one port to igb_uio, then launch vhost by below command::\n+1. Bind one port to vfio-pci, then launch vhost by below command::\n \n     rm -rf vhost-net*\n     ./testpmd -n 4 -l 2-4  \\\n@@ -151,7 +151,7 @@ Test Case 5: virtio single core performance test with mergeable path\n Test Case 6: virtio single core performance test with non-mergeable path\n ========================================================================\n \n-1. Bind one port to igb_uio, then launch vhost by below command::\n+1. Bind one port to vfio-pci, then launch vhost by below command::\n \n     rm -rf vhost-net*\n     ./testpmd -n 4 -l 2-4  \\\n@@ -172,7 +172,7 @@ Test Case 6: virtio single core performance test with non-mergeable path\n Test Case 7: virtio single core performance test with vectorized_rx path\n ========================================================================\n \n-1. Bind one port to igb_uio, then launch vhost by below command::\n+1. Bind one port to vfio-pci, then launch vhost by below command::\n \n     rm -rf vhost-net*\n     ./testpmd -n 4 -l 2-4  \\\n@@ -193,7 +193,7 @@ Test Case 7: virtio single core performance test with vectorized_rx path\n Test Case 8: virtio single core performance test with virtio 1.1 inorder mergeable path\n =======================================================================================\n \n-1. Bind one port to igb_uio, then launch vhost by below command::\n+1. 
Bind one port to vfio-pci, then launch vhost by below command::\n \n     rm -rf vhost-net*\n     ./testpmd -n 4 -l 2-4 \\\n@@ -214,7 +214,7 @@ Test Case 8: virtio single core performance test with virtio 1.1 inorder mergeab\n Test Case 9: virtio single core performance test with virtio 1.1 inorder non-mergeable path\n ===========================================================================================\n \n-1. Bind one port to igb_uio, then launch vhost by below command::\n+1. Bind one port to vfio-pci, then launch vhost by below command::\n \n     rm -rf vhost-net*\n     ./testpmd -n 4 -l 2-4  \\\n@@ -235,7 +235,7 @@ Test Case 9: virtio single core performance test with virtio 1.1 inorder non-mer\n Test Case 10: virtio single core performance test with virtio 1.1 vectorized path\n =================================================================================\n \n-1. Bind one port to igb_uio, then launch vhost by below command::\n+1. Bind one port to vfio-pci, then launch vhost by below command::\n \n     rm -rf vhost-net*\n     ./testpmd -n 4 -l 2-4  --no-pci \\\ndiff --git a/test_plans/pvp_qemu_multi_paths_port_restart_test_plan.rst b/test_plans/pvp_qemu_multi_paths_port_restart_test_plan.rst\nindex 9456fdc4..ddf8beca 100644\n--- a/test_plans/pvp_qemu_multi_paths_port_restart_test_plan.rst\n+++ b/test_plans/pvp_qemu_multi_paths_port_restart_test_plan.rst\n@@ -48,7 +48,7 @@ TG --> NIC --> Vhost --> Virtio--> Vhost --> NIC --> TG\n Test Case 1: pvp test with virtio 0.95 mergeable path\n =====================================================\n \n-1. Bind one port to igb_uio, then launch testpmd by below command::\n+1. Bind one port to vfio-pci, then launch testpmd by below command::\n \n     rm -rf vhost-net*\n     ./testpmd -c 0xe -n 4 \\\n@@ -71,7 +71,7 @@ Test Case 1: pvp test with virtio 0.95 mergeable path\n     -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,mrg_rxbuf=on,rx_queue_size=1024,tx_queue_size=1024 \\\n     -vnc :10\n \n-3. On VM, bind virtio net to igb_uio and run testpmd::\n+3. On VM, bind virtio net to vfio-pci and run testpmd::\n \n     ./testpmd -c 0x3 -n 3 -- -i \\\n     --nb-cores=1 --txd=1024 --rxd=1024\n@@ -95,7 +95,7 @@ Test Case 1: pvp test with virtio 0.95 mergeable path\n Test Case 2: pvp test with virtio 0.95 normal path\n ==================================================\n \n-1. Bind one port to igb_uio, then launch testpmd by below command::\n+1. Bind one port to vfio-pci, then launch testpmd by below command::\n \n     rm -rf vhost-net*\n     ./testpmd -c 0xe -n 4 \\\n@@ -117,7 +117,7 @@ Test Case 2: pvp test with virtio 0.95 normal path\n     -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,mrg_rxbuf=off,rx_queue_size=1024,tx_queue_size=1024 \\\n     -vnc :10\n \n-3. On VM, bind virtio net to igb_uio and run testpmd with tx-offloads::\n+3. On VM, bind virtio net to vfio-pci and run testpmd with tx-offloads::\n \n     ./testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x0 --enable-hw-vlan-strip \\\n     --nb-cores=1 --txd=1024 --rxd=1024\n@@ -141,7 +141,7 @@ Test Case 2: pvp test with virtio 0.95 normal path\n Test Case 3: pvp test with virtio 0.95 vrctor_rx path\n =====================================================\n \n-1. Bind one port to igb_uio, then launch testpmd by below command::\n+1. 
Bind one port to vfio-pci, then launch testpmd by below command::\n \n     rm -rf vhost-net*\n     ./testpmd -c 0xe -n 4 \\\n@@ -163,7 +163,7 @@ Test Case 3: pvp test with virtio 0.95 vrctor_rx path\n     -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,mrg_rxbuf=off,rx_queue_size=1024,tx_queue_size=1024 \\\n     -vnc :10\n \n-3. On VM, bind virtio net to igb_uio and run testpmd without ant tx-offloads::\n+3. On VM, bind virtio net to vfio-pci and run testpmd without ant tx-offloads::\n \n     ./testpmd -c 0x3 -n 3 -- -i \\\n     --nb-cores=1 --txd=1024 --rxd=1024\n@@ -187,7 +187,7 @@ Test Case 3: pvp test with virtio 0.95 vrctor_rx path\n Test Case 4: pvp test with virtio 1.0 mergeable path\n ====================================================\n \n-1. Bind one port to igb_uio, then launch testpmd by below command::\n+1. Bind one port to vfio-pci, then launch testpmd by below command::\n \n     rm -rf vhost-net*\n     ./testpmd -c 0xe -n 4 \\\n@@ -209,7 +209,7 @@ Test Case 4: pvp test with virtio 1.0 mergeable path\n     -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,rx_queue_size=1024,tx_queue_size=1024 \\\n     -vnc :10\n \n-3. On VM, bind virtio net to igb_uio and run testpmd::\n+3. On VM, bind virtio net to vfio-pci and run testpmd::\n \n     ./testpmd -c 0x3 -n 3 -- -i \\\n     --nb-cores=1 --txd=1024 --rxd=1024\n@@ -233,7 +233,7 @@ Test Case 4: pvp test with virtio 1.0 mergeable path\n Test Case 5: pvp test with virtio 1.0 normal path\n =================================================\n \n-1. Bind one port to igb_uio, then launch testpmd by below command::\n+1. Bind one port to vfio-pci, then launch testpmd by below command::\n \n     rm -rf vhost-net*\n     ./testpmd -c 0xe -n 4 \\\n@@ -255,7 +255,7 @@ Test Case 5: pvp test with virtio 1.0 normal path\n     -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=off,rx_queue_size=1024,tx_queue_size=1024 \\\n     -vnc :10\n \n-3. On VM, bind virtio net to igb_uio and run testpmd with tx-offloads::\n+3. On VM, bind virtio net to vfio-pci and run testpmd with tx-offloads::\n \n     ./testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x0 --enable-hw-vlan-strip\\\n     --nb-cores=1 --txd=1024 --rxd=1024\n@@ -279,7 +279,7 @@ Test Case 5: pvp test with virtio 1.0 normal path\n Test Case 6: pvp test with virtio 1.0 vrctor_rx path\n ====================================================\n \n-1. Bind one port to igb_uio, then launch testpmd by below command::\n+1. Bind one port to vfio-pci, then launch testpmd by below command::\n \n     rm -rf vhost-net*\n     ./testpmd -c 0xe -n 4 \\\n@@ -301,7 +301,7 @@ Test Case 6: pvp test with virtio 1.0 vrctor_rx path\n     -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=off,rx_queue_size=1024,tx_queue_size=1024 \\\n     -vnc :10\n \n-3. On VM, bind virtio net to igb_uio and run testpmd without tx-offloads::\n+3. On VM, bind virtio net to vfio-pci and run testpmd without tx-offloads::\n \n     ./testpmd -c 0x3 -n 3 -- -i \\\n     --nb-cores=1 --txd=1024 --rxd=1024\ndiff --git a/test_plans/pvp_share_lib_test_plan.rst b/test_plans/pvp_share_lib_test_plan.rst\nindex f0610e90..a1a6c56f 100644\n--- a/test_plans/pvp_share_lib_test_plan.rst\n+++ b/test_plans/pvp_share_lib_test_plan.rst\n@@ -54,7 +54,7 @@ Test Case1: Vhost/virtio-user pvp share lib test with niantic\n \n     export LD_LIBRARY_PATH=/root/dpdk/x86_64-native-linuxapp-gcc/drivers:$LD_LIBRARY_PATH\n \n-4. 
Bind niantic port with igb_uio, use option ``-d`` to load the dynamic pmd when launch vhost::\n+4. Bind niantic port with vfio-pci, use option ``-d`` to load the dynamic pmd when launch vhost::\n \n     ./testpmd  -c 0x03 -n 4 -d librte_net_vhost.so.21.0 -d librte_net_i40e.so.21.0 -d librte_mempool_ring.so.21.0 \\\n     --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1' -- -i\n@@ -75,7 +75,7 @@ Test Case2: Vhost/virtio-user pvp share lib test with fortville\n \n Similar as Test Case1, all steps are similar except step 4:\n \n-4. Bind fortville port with igb_uio, use option ``-d`` to load the dynamic pmd when launch vhost::\n+4. Bind fortville port with vfio-pci, use option ``-d`` to load the dynamic pmd when launch vhost::\n \n     ./testpmd  -c 0x03 -n 4 -d librte_net_vhost.so -d librte_net_i40e.so -d librte_mempool_ring.so \\\n     --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1' -- -i\ndiff --git a/test_plans/pvp_vhost_user_reconnect_test_plan.rst b/test_plans/pvp_vhost_user_reconnect_test_plan.rst\nindex 6641d447..f13bbb0a 100644\n--- a/test_plans/pvp_vhost_user_reconnect_test_plan.rst\n+++ b/test_plans/pvp_vhost_user_reconnect_test_plan.rst\n@@ -59,7 +59,7 @@ Test Case1: vhost-user/virtio-pmd pvp split ring reconnect from vhost-user\n ==========================================================================\n Flow: TG--> NIC --> Vhost --> Virtio --> Vhost--> NIC--> TG\n \n-1. Bind one port to igb_uio, then launch vhost with client mode by below commands::\n+1. Bind one port to vfio-pci, then launch vhost with client mode by below commands::\n \n     ./testpmd -c 0x30 -n 4 --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=1' -- -i --nb-cores=1\n     testpmd>set fwd mac\n@@ -79,7 +79,7 @@ Flow: TG--> NIC --> Vhost --> Virtio --> Vhost--> NIC--> TG\n     -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,mrg_rxbuf=on,rx_queue_size=1024,tx_queue_size=1024 \\\n     -vnc :10\n \n-3. On VM, bind virtio net to igb_uio and run testpmd::\n+3. On VM, bind virtio net to vfio-pci and run testpmd::\n \n     ./testpmd -c 0x3 -n 4 -- -i --nb-cores=1 --txd=1024 --rxd=1024\n     testpmd>set fwd mac\n@@ -104,7 +104,7 @@ Test Case2: vhost-user/virtio-pmd pvp split ring reconnect from VM\n ==================================================================\n Flow: TG--> NIC --> Vhost --> Virtio --> Vhost--> NIC--> TG\n \n-1. Bind one port to igb_uio, then launch vhost with client mode by below commands::\n+1. Bind one port to vfio-pci, then launch vhost with client mode by below commands::\n \n     ./testpmd -c 0x30 -n 4 --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=1' -- -i --nb-cores=1\n     testpmd>set fwd mac\n@@ -124,7 +124,7 @@ Flow: TG--> NIC --> Vhost --> Virtio --> Vhost--> NIC--> TG\n     -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,mrg_rxbuf=on,rx_queue_size=1024,tx_queue_size=1024 \\\n     -vnc :10\n \n-3. On VM, bind virtio net to igb_uio and run testpmd::\n+3. On VM, bind virtio net to vfio-pci and run testpmd::\n \n     ./testpmd -c 0x3 -n 4 -- -i --nb-cores=1 --txd=1024 --rxd=1024\n     testpmd>set fwd mac\n@@ -149,7 +149,7 @@ Similar as Test Case1, all steps are similar except step 5, 6.\n Test Case 4: vhost-user/virtio-pmd pvp split ring with multi VMs reconnect from vhost-user\n ==========================================================================================\n \n-1. Bind one port to igb_uio, launch the vhost by below command::\n+1. 
Bind one port to vfio-pci, launch the vhost by below command::\n \n     ./testpmd -c 0x30 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1'  -- -i --port-topology=chained --nb-cores=1 --txd=1024 --rxd=1024\n     testpmd>set fwd mac\n@@ -181,13 +181,13 @@ Test Case 4: vhost-user/virtio-pmd pvp split ring with multi VMs reconnect from\n     -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,mrg_rxbuf=on,rx_queue_size=1024,tx_queue_size=1024 \\\n     -vnc :11\n \n-3. On VM1, bind virtio1 to igb_uio and run testpmd::\n+3. On VM1, bind virtio1 to vfio-pci and run testpmd::\n \n     ./testpmd -c 0x3 -n 4 -- -i --port-topology=chained --nb-cores=1 --txd=1024 --rxd=1024\n     testpmd>set fwd mac\n     testpmd>start\n \n-4. On VM2, bind virtio2 to igb_uio and run testpmd::\n+4. On VM2, bind virtio2 to vfio-pci and run testpmd::\n \n     ./testpmd -c 0x3 -n 4 -- -i --port-topology=chained --nb-cores=1 --txd=1024 --rxd=1024\n     testpmd>set fwd mac\n@@ -211,7 +211,7 @@ Test Case 4: vhost-user/virtio-pmd pvp split ring with multi VMs reconnect from\n Test Case 5: vhost-user/virtio-pmd pvp split ring with multi VMs reconnect from VMs\n ===================================================================================\n \n-1. Bind one port to igb_uio, launch the vhost by below command::\n+1. Bind one port to vfio-pci, launch the vhost by below command::\n \n     ./testpmd -c 0x30 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1'  -- -i --port-topology=chained --nb-cores=1 --txd=1024 --rxd=1024\n     testpmd>set fwd mac\n@@ -243,13 +243,13 @@ Test Case 5: vhost-user/virtio-pmd pvp split ring with multi VMs reconnect from\n     -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,mrg_rxbuf=on,rx_queue_size=1024,tx_queue_size=1024 \\\n     -vnc :11\n \n-3. On VM1, bind virtio1 to igb_uio and run testpmd::\n+3. On VM1, bind virtio1 to vfio-pci and run testpmd::\n \n     ./testpmd -c 0x3 -n 4 -- -i --port-topology=chained --nb-cores=1 --txd=1024 --rxd=1024\n     testpmd>set fwd mac\n     testpmd>start\n \n-4. On VM2, bind virtio2 to igb_uio and run testpmd::\n+4. On VM2, bind virtio2 to vfio-pci and run testpmd::\n \n     ./testpmd -c 0x3 -n 4 -- -i --port-topology=chained --port-topology=chain --nb-cores=1 --txd=1024 --rxd=1024\n     testpmd>set fwd mac\n@@ -392,7 +392,7 @@ Test Case10: vhost-user/virtio-pmd pvp packed ring reconnect from vhost-user\n ============================================================================\n Flow: TG--> NIC --> Vhost --> Virtio --> Vhost--> NIC--> TG\n \n-1. Bind one port to igb_uio, then launch vhost with client mode by below commands::\n+1. Bind one port to vfio-pci, then launch vhost with client mode by below commands::\n \n     ./testpmd -c 0x30 -n 4 --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=1' -- -i --nb-cores=1\n     testpmd>set fwd mac\n@@ -412,7 +412,7 @@ Flow: TG--> NIC --> Vhost --> Virtio --> Vhost--> NIC--> TG\n     -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,mrg_rxbuf=on,rx_queue_size=1024,tx_queue_size=1024,packed=on \\\n     -vnc :10\n \n-3. On VM, bind virtio net to igb_uio and run testpmd::\n+3. 
On VM, bind virtio net to vfio-pci and run testpmd::\n \n     ./testpmd -c 0x3 -n 4 -- -i --nb-cores=1 --txd=1024 --rxd=1024\n     testpmd>set fwd mac\n@@ -437,7 +437,7 @@ Test Case11: vhost-user/virtio-pmd pvp packed ring reconnect from VM\n ====================================================================\n Flow: TG--> NIC --> Vhost --> Virtio --> Vhost--> NIC--> TG\n \n-1. Bind one port to igb_uio, then launch vhost with client mode by below commands::\n+1. Bind one port to vfio-pci, then launch vhost with client mode by below commands::\n \n     ./testpmd -c 0x30 -n 4 --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=1' -- -i --nb-cores=1\n     testpmd>set fwd mac\n@@ -457,7 +457,7 @@ Flow: TG--> NIC --> Vhost --> Virtio --> Vhost--> NIC--> TG\n     -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,mrg_rxbuf=on,rx_queue_size=1024,tx_queue_size=1024,packed=on \\\n     -vnc :10\n \n-3. On VM, bind virtio net to igb_uio and run testpmd::\n+3. On VM, bind virtio net to vfio-pci and run testpmd::\n \n     ./testpmd -c 0x3 -n 4 -- -i --nb-cores=1 --txd=1024 --rxd=1024\n     testpmd>set fwd mac\n@@ -482,7 +482,7 @@ Similar as Test Case1, all steps are similar except step 5, 6.\n Test Case 13: vhost-user/virtio-pmd pvp packed ring with multi VMs reconnect from vhost-user\n ============================================================================================\n \n-1. Bind one port to igb_uio, launch the vhost by below command::\n+1. Bind one port to vfio-pci, launch the vhost by below command::\n \n     ./testpmd -c 0x30 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1'  -- -i --port-topology=chained --nb-cores=1 --txd=1024 --rxd=1024\n     testpmd>set fwd mac\n@@ -514,13 +514,13 @@ Test Case 13: vhost-user/virtio-pmd pvp packed ring with multi VMs reconnect fro\n     -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,mrg_rxbuf=on,rx_queue_size=1024,tx_queue_size=1024,packed=on \\\n     -vnc :11\n \n-3. On VM1, bind virtio1 to igb_uio and run testpmd::\n+3. On VM1, bind virtio1 to vfio-pci and run testpmd::\n \n     ./testpmd -c 0x3 -n 4 -- -i --port-topology=chained --nb-cores=1 --txd=1024 --rxd=1024\n     testpmd>set fwd mac\n     testpmd>start\n \n-4. On VM2, bind virtio2 to igb_uio and run testpmd::\n+4. On VM2, bind virtio2 to vfio-pci and run testpmd::\n \n     ./testpmd -c 0x3 -n 4 -- -i --port-topology=chained --nb-cores=1 --txd=1024 --rxd=1024\n     testpmd>set fwd mac\n@@ -544,7 +544,7 @@ Test Case 13: vhost-user/virtio-pmd pvp packed ring with multi VMs reconnect fro\n Test Case 14: vhost-user/virtio-pmd pvp packed ring with multi VMs reconnect from VMs\n =====================================================================================\n \n-1. Bind one port to igb_uio, launch the vhost by below command::\n+1. Bind one port to vfio-pci, launch the vhost by below command::\n \n     ./testpmd -c 0x30 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1'  -- -i --port-topology=chained --nb-cores=1 --txd=1024 --rxd=1024\n     testpmd>set fwd mac\n@@ -576,13 +576,13 @@ Test Case 14: vhost-user/virtio-pmd pvp packed ring with multi VMs reconnect fro\n     -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,mrg_rxbuf=on,rx_queue_size=1024,tx_queue_size=1024,packed=on \\\n     -vnc :11\n \n-3. On VM1, bind virtio1 to igb_uio and run testpmd::\n+3. 
On VM1, bind virtio1 to vfio-pci and run testpmd::\n \n     ./testpmd -c 0x3 -n 4 -- -i --port-topology=chained --nb-cores=1 --txd=1024 --rxd=1024\n     testpmd>set fwd mac\n     testpmd>start\n \n-4. On VM2, bind virtio2 to igb_uio and run testpmd::\n+4. On VM2, bind virtio2 to vfio-pci and run testpmd::\n \n     ./testpmd -c 0x3 -n 4 -- -i --port-topology=chained --port-topology=chain --nb-cores=1 --txd=1024 --rxd=1024\n     testpmd>set fwd mac\ndiff --git a/test_plans/pvp_virtio_bonding_test_plan.rst b/test_plans/pvp_virtio_bonding_test_plan.rst\nindex 90438cc9..2434802c 100644\n--- a/test_plans/pvp_virtio_bonding_test_plan.rst\n+++ b/test_plans/pvp_virtio_bonding_test_plan.rst\n@@ -50,7 +50,7 @@ Test case 1: vhost-user/virtio-pmd pvp bonding test with mode 0\n ===============================================================\n Flow: TG--> NIC --> Vhost --> Virtio3 --> Virtio4 --> Vhost--> NIC--> TG\n \n-1. Bind one port to igb_uio,launch vhost by below command::\n+1. Bind one port to vfio-pci,launch vhost by below command::\n \n     ./testpmd -l 1-6 -n 4 --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1' --vdev 'net_vhost2,iface=vhost-net2,client=1,queues=1' --vdev 'net_vhost3,iface=vhost-net3,client=1,queues=1'  -- -i --port-topology=chained --nb-cores=4 --txd=1024 --rxd=1024\n     testpmd>set fwd mac\n@@ -79,9 +79,9 @@ Flow: TG--> NIC --> Vhost --> Virtio3 --> Virtio4 --> Vhost--> NIC--> TG\n     -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \\\n     -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16.img -vnc :10\n \n-3. On vm, bind four virtio-net devices to igb_uio::\n+3. On vm, bind four virtio-net devices to vfio-pci::\n \n-    ./dpdk-devbind.py -b igb_uio xx:xx.x xx:xx.x xx:xx.x xx:xx.x\n+    ./dpdk-devbind.py -b vfio-pci xx:xx.x xx:xx.x xx:xx.x xx:xx.x\n \n 4. Launch testpmd in VM::\n \n@@ -112,7 +112,7 @@ Flow: TG--> NIC --> Vhost --> Virtio3 --> Virtio4 --> Vhost--> NIC--> TG\n Test case 2: vhost-user/virtio-pmd pvp bonding test with different mode from 1 to 6\n ===================================================================================\n \n-1. Bind one port to igb_uio,launch vhost by below command::\n+1. Bind one port to vfio-pci,launch vhost by below command::\n \n     ./testpmd -l 1-6 -n 4 --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1' --vdev 'net_vhost2,iface=vhost-net2,client=1,queues=1' --vdev 'net_vhost3,iface=vhost-net3,client=1,queues=1'  -- -i --port-topology=chained --nb-cores=4 --txd=1024 --rxd=1024\n     testpmd>set fwd mac\n@@ -141,9 +141,9 @@ Test case 2: vhost-user/virtio-pmd pvp bonding test with different mode from 1 t\n     -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \\\n     -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16.img -vnc :10\n \n-3. On vm, bind four virtio-net devices to igb_uio::\n+3. On vm, bind four virtio-net devices to vfio-pci::\n \n-    ./dpdk-devbind.py -b igb_uio xx:xx.x xx:xx.x xx:xx.x xx:xx.x\n+    ./dpdk-devbind.py -b vfio-pci xx:xx.x xx:xx.x xx:xx.x xx:xx.x\n \n 4. 
Launch testpmd in VM::\n \ndiff --git a/test_plans/pvp_virtio_user_2M_hugepages_test_plan.rst b/test_plans/pvp_virtio_user_2M_hugepages_test_plan.rst\nindex 89af30f7..a4ac1f18 100644\n--- a/test_plans/pvp_virtio_user_2M_hugepages_test_plan.rst\n+++ b/test_plans/pvp_virtio_user_2M_hugepages_test_plan.rst\n@@ -44,7 +44,7 @@ Test Case1:  Basic test for virtio-user split ring 2M hugepage\n \n 1. Before the test, plese make sure only 2M hugepage are mounted in host.\n \n-2. Bind one port to igb_uio, launch vhost::\n+2. Bind one port to vfio-pci, launch vhost::\n \n     ./testpmd -l 3-4 -n 4 --file-prefix=vhost \\\n     --vdev 'net_vhost0,iface=/tmp/sock0,queues=1' -- -i\n@@ -64,7 +64,7 @@ Test Case1:  Basic test for virtio-user packed ring 2M hugepage\n \n 1. Before the test, plese make sure only 2M hugepage are mounted in host.\n \n-2. Bind one port to igb_uio, launch vhost::\n+2. Bind one port to vfio-pci, launch vhost::\n \n     ./testpmd -l 3-4 -n 4 --file-prefix=vhost \\\n     --vdev 'net_vhost0,iface=/tmp/sock0,queues=1' -- -i\ndiff --git a/test_plans/pvp_virtio_user_multi_queues_port_restart_test_plan.rst b/test_plans/pvp_virtio_user_multi_queues_port_restart_test_plan.rst\nindex e877a791..629f6f42 100644\n--- a/test_plans/pvp_virtio_user_multi_queues_port_restart_test_plan.rst\n+++ b/test_plans/pvp_virtio_user_multi_queues_port_restart_test_plan.rst\n@@ -51,7 +51,7 @@ TG --> NIC --> Vhost --> Virtio--> Vhost --> NIC --> TG\n Test Case 1: pvp 2 queues test with packed ring mergeable path\n ===============================================================\n \n-1. Bind one port to igb_uio, then launch vhost by below command::\n+1. Bind one port to vfio-pci, then launch vhost by below command::\n \n     rm -rf vhost-net*\n     ./testpmd -n 4 -l 2-4  \\\n@@ -89,7 +89,7 @@ Test Case 1: pvp 2 queues test with packed ring mergeable path\n Test Case 2: pvp 2 queues test with packed ring non-mergeable path\n ==================================================================\n \n-1. Bind one port to igb_uio, then launch vhost by below command::\n+1. Bind one port to vfio-pci, then launch vhost by below command::\n \n     rm -rf vhost-net*\n     ./testpmd -n 4 -l 2-4  \\\n@@ -122,7 +122,7 @@ Test Case 2: pvp 2 queues test with packed ring non-mergeable path\n Test Case 3: pvp 2 queues test with split ring inorder mergeable path\n =====================================================================\n \n-1. Bind one port to igb_uio, then launch vhost by below command::\n+1. Bind one port to vfio-pci, then launch vhost by below command::\n \n     rm -rf vhost-net*\n     ./testpmd -n 4 -l 2-4  \\\n@@ -155,7 +155,7 @@ Test Case 3: pvp 2 queues test with split ring inorder mergeable path\n Test Case 4: pvp 2 queues test with split ring inorder non-mergeable path\n ==========================================================================\n \n-1. Bind one port to igb_uio, then launch vhost by below command::\n+1. Bind one port to vfio-pci, then launch vhost by below command::\n \n     rm -rf vhost-net*\n     ./testpmd -n 4 -l 2-4  \\\n@@ -188,7 +188,7 @@ Test Case 4: pvp 2 queues test with split ring inorder non-mergeable path\n Test Case 5: pvp 2 queues test with split ring mergeable path\n =============================================================\n \n-1. Bind one port to igb_uio, then launch vhost by below command::\n+1. 
Bind one port to vfio-pci, then launch vhost by below command::\n \n     rm -rf vhost-net*\n     ./testpmd -n 4 -l 2-4  \\\n@@ -221,7 +221,7 @@ Test Case 5: pvp 2 queues test with split ring mergeable path\n Test Case 6: pvp 2 queues test with split ring non-mergeable path\n =================================================================\n \n-1. Bind one port to igb_uio, then launch vhost by below command::\n+1. Bind one port to vfio-pci, then launch vhost by below command::\n \n     rm -rf vhost-net*\n     ./testpmd -n 4 -l 2-4  \\\n@@ -254,7 +254,7 @@ Test Case 6: pvp 2 queues test with split ring non-mergeable path\n Test Case 7: pvp 2 queues test with split ring vector_rx path\n =============================================================\n \n-1. Bind one port to igb_uio, then launch vhost by below command::\n+1. Bind one port to vfio-pci, then launch vhost by below command::\n \n     rm -rf vhost-net*\n     ./testpmd -n 4 -l 2-4  \\\n@@ -287,7 +287,7 @@ Test Case 7: pvp 2 queues test with split ring vector_rx path\n Test Case 8: pvp 2 queues test with packed ring inorder mergeable path\n ======================================================================\n \n-1. Bind one port to igb_uio, then launch vhost by below command::\n+1. Bind one port to vfio-pci, then launch vhost by below command::\n \n     rm -rf vhost-net*\n     ./testpmd -n 4 -l 2-4  \\\n@@ -320,7 +320,7 @@ Test Case 8: pvp 2 queues test with packed ring inorder mergeable path\n Test Case 9: pvp 2 queues test with packed ring inorder non-mergeable path\n ===========================================================================\n \n-1. Bind one port to igb_uio, then launch vhost by below command::\n+1. Bind one port to vfio-pci, then launch vhost by below command::\n \n     rm -rf vhost-net*\n     ./testpmd -n 4 -l 2-4  \\\n@@ -353,7 +353,7 @@ Test Case 9: pvp 2 queues test with packed ring inorder non-mergeable path\n Test Case 10: pvp 2 queues test with packed ring vectorized path\n ================================================================\n \n-1. Bind one port to igb_uio, then launch vhost by below command::\n+1. Bind one port to vfio-pci, then launch vhost by below command::\n \n     rm -rf vhost-net*\n     ./testpmd -n 4 -l 2-4  \\\ndiff --git a/test_plans/vdev_primary_secondary_test_plan.rst b/test_plans/vdev_primary_secondary_test_plan.rst\nindex a148fcbe..1e6cd2e0 100644\n--- a/test_plans/vdev_primary_secondary_test_plan.rst\n+++ b/test_plans/vdev_primary_secondary_test_plan.rst\n@@ -141,7 +141,7 @@ SW preparation: Change one line of the symmetric_mp sample and rebuild::\n     vi ./examples/multi_process/symmetric_mp/main.c\n     -.offloads = DEV_RX_OFFLOAD_CHECKSUM,\n \n-1. Bind one port to igb_uio, launch testpmd by below command::\n+1. Bind one port to vfio-pci, launch testpmd by below command::\n \n     ./testpmd -l 1-6 -n 4 --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,queues=2,client=1' --vdev 'net_vhost1,iface=vhost-net1,queues=2,client=1'  -- -i --nb-cores=4 --rxq=2 --txq=2 --txd=1024 --rxd=1024\n     testpmd>set fwd txonly\n@@ -161,10 +161,10 @@ SW preparation: Change one line of the symmetric_mp sample and rebuild::\n     -chardev socket,id=char1,path=./vhost-net1,server -netdev type=vhost-user,id=mynet2,chardev=char1,vhostforce,queues=2 \\\n     -device virtio-net-pci,mac=52:54:00:00:00:03,netdev=mynet2,mrg_rxbuf=on,csum=on,mq=on,vectors=15  -vnc :10 -daemonize\n \n-3.  Bind virtio port to igb_uio::\n+3.  
Bind virtio port to vfio-pci::\n \n-    ./usertools/dpdk-devbind.py --bind=igb_uio xx:xx.x\n-    ./usertools/dpdk-devbind.py --bind=igb_uio xx:xx.x\n+    ./usertools/dpdk-devbind.py --bind=vfio-pci xx:xx.x\n+    ./usertools/dpdk-devbind.py --bind=vfio-pci xx:xx.x\n \n 4. Launch two process by example::\n \n@@ -199,10 +199,10 @@ Test Case 2: Virtio-pmd primary and secondary process hotplug test\n     -chardev socket,id=char1,path=./vhost-net1,server -netdev type=vhost-user,id=mynet2,chardev=char1,vhostforce,queues=2 \\\n     -device virtio-net-pci,mac=52:54:00:00:00:03,netdev=mynet2,mrg_rxbuf=on,csum=on,mq=on,vectors=15  -vnc :10 -daemonize\n \n-3.  Bind virtio port to igb_uio::\n+3.  Bind virtio port to vfio-pci::\n \n-    ./usertools/dpdk-devbind.py --bind=igb_uio xx:xx.x\n-    ./usertools/dpdk-devbind.py --bind=igb_uio xx:xx.x\n+    ./usertools/dpdk-devbind.py --bind=vfio-pci xx:xx.x\n+    ./usertools/dpdk-devbind.py --bind=vfio-pci xx:xx.x\n \n 4. Start sample code as primary process::\n \ndiff --git a/test_plans/vhost_cbdma_test_plan.rst b/test_plans/vhost_cbdma_test_plan.rst\nindex 7fe74f12..3d0e518a 100644\n--- a/test_plans/vhost_cbdma_test_plan.rst\n+++ b/test_plans/vhost_cbdma_test_plan.rst\n@@ -69,7 +69,7 @@ Packet pipeline:\n ================\n TG --> NIC --> Vhost --> Virtio--> Vhost --> NIC --> TG\n \n-1. Bind one cbdma port and one nic port to igb_uio, then launch vhost by below command::\n+1. Bind one cbdma port and one nic port to vfio-pci, then launch vhost by below command::\n \n     ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=1,dmas=[txq0@80:04.0]' \\\n     -- -i --nb-cores=1 --txd=1024 --rxd=1024\n@@ -127,7 +127,7 @@ TG --> NIC --> Vhost --> Virtio--> Vhost --> NIC --> TG\n Test Case 2: Split ring dynamic queue number test for DMA-accelerated vhost Tx operations\n =========================================================================================\n \n-1. Bind 8 cbdma channels and one nic port to igb_uio, then launch vhost by below command::\n+1. Bind 8 cbdma channels and one nic port to vfio-pci, then launch vhost by below command::\n \n     ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 28-29  \\\n      --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1' \\\n@@ -178,7 +178,7 @@ Packet pipeline:\n ================\n TG --> NIC --> Vhost --> Virtio--> Vhost --> NIC --> TG\n \n-1. Bind one cbdma port and one nic port to igb_uio, then launch vhost by below command::\n+1. Bind one cbdma port and one nic port to vfio-pci, then launch vhost by below command::\n \n     ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=1,dmas=[txq0@80:04.0]' \\\n     -- -i --nb-cores=1 --txd=1024 --rxd=1024\n@@ -245,7 +245,7 @@ TG --> NIC --> Vhost --> Virtio--> Vhost --> NIC --> TG\n Test Case 4: Packed ring dynamic queue number test for DMA-accelerated vhost Tx operations\n ==========================================================================================\n \n-1. Bind 8 cbdma channels and one nic port to igb_uio, then launch vhost by below command::\n+1. 
Bind 8 cbdma channels and one nic port to vfio-pci, then launch vhost by below command::\n \n     ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 28-29  \\\n      --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1' \\\n@@ -292,7 +292,7 @@ Test Case 4: Packed ring dynamic queue number test for DMA-accelerated vhost Tx\n Test Case 5: Compare PVP split ring performance between CPU copy, CBDMA copy and Sync copy\n ==========================================================================================\n \n-1. Bind one cbdma port and one nic port which on same numa to igb_uio, then launch vhost by below command::\n+1. Bind one cbdma port and one nic port which on same numa to vfio-pci, then launch vhost by below command::\n \n     ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=1,client=1,dmas=[txq0@00:01.0]' \\\n     -- -i --nb-cores=1 --txd=1024 --rxd=1024\ndiff --git a/test_plans/vhost_event_idx_interrupt_test_plan.rst b/test_plans/vhost_event_idx_interrupt_test_plan.rst\nindex 0cf4834f..111de954 100644\n--- a/test_plans/vhost_event_idx_interrupt_test_plan.rst\n+++ b/test_plans/vhost_event_idx_interrupt_test_plan.rst\n@@ -399,7 +399,7 @@ Test Case 6: wake up packed ring vhost-user cores by multi virtio-net in VMs wit\n Test Case 7: wake up split ring vhost-user cores with event idx interrupt mode and cbdma enabled 16 queues test\n ===============================================================================================================\n \n-1. Bind 16 cbdma ports to igb_uio driver, then launch l3fwd-power example app with client mode::\n+1. Bind 16 cbdma ports to vfio-pci driver, then launch l3fwd-power example app with client mode::\n \n     ./l3fwd-power -l 1-16 -n 4 --log-level=9 \\\n     --vdev 'eth_vhost0,iface=/vhost-net0,queues=16,client=1,dmas=[txq0@80:04.0;txq1@80:04.1;txq2@80:04.2;txq3@80:04.3;txq4@80:04.4;txq5@80:04.5;txq6@80:04.6;txq7@80:04.7;txq8@00:04.0;txq9@00:04.1;txq10@00:04.2;txq11@00:04.3;txq12@00:04.4;txq13@00:04.5;txq14@00:04.6;txq15@00:04.7]' \\\n@@ -460,7 +460,7 @@ Test Case 7: wake up split ring vhost-user cores with event idx interrupt mode a\n Test Case 8: wake up split ring vhost-user cores by multi virtio-net in VMs with event idx interrupt mode and cbdma enabled test\n ================================================================================================================================\n \n-1. Bind two cbdma ports to igb_uio driver, then launch l3fwd-power example app with client mode::\n+1. Bind two cbdma ports to vfio-pci driver, then launch l3fwd-power example app with client mode::\n \n     ./l3fwd-power -l 1-2 -n 4 --log-level=9 \\\n     --vdev 'eth_vhost0,iface=/vhost-net0,queues=1,client=1,dmas=[txq0@00:04.0]' \\\n@@ -515,7 +515,7 @@ Test Case 8: wake up split ring vhost-user cores by multi virtio-net in VMs with\n Test Case 9: wake up packed ring vhost-user cores with event idx interrupt mode and cbdma enabled 16 queues test\n ================================================================================================================\n \n-1. Bind 16 cbdma ports to igb_uio driver, then launch l3fwd-power example app with client mode::\n+1. 
Bind 16 cbdma ports to vfio-pci driver, then launch l3fwd-power example app with client mode::\n \n     ./l3fwd-power -l 1-16 -n 4 --log-level=9 \\\n     --vdev 'eth_vhost0,iface=/vhost-net0,queues=16,client=1,dmas=[txq0@80:04.0;txq1@80:04.1;txq2@80:04.2;txq3@80:04.3;txq4@80:04.4;txq5@80:04.5;txq6@80:04.6;txq7@80:04.7;txq8@00:04.0;txq9@00:04.1;txq10@00:04.2;txq11@00:04.3;txq12@00:04.4;txq13@00:04.5;txq14@00:04.6;txq15@00:04.7]' \\\n@@ -576,7 +576,7 @@ Test Case 9: wake up packed ring vhost-user cores with event idx interrupt mode\n Test Case 10: wake up packed ring vhost-user cores by multi virtio-net in VMs with event idx interrupt mode and cbdma enabled test\n ==================================================================================================================================\n \n-1. Bind two cbdma ports to igb_uio driver, then launch l3fwd-power example app with client mode::\n+1. Bind two cbdma ports to vfio-pci driver, then launch l3fwd-power example app with client mode::\n \n     ./l3fwd-power -l 1-2 -n 4 --log-level=9 \\\n     --vdev 'eth_vhost0,iface=/vhost-net0,queues=1,client=1,dmas=[txq0@00:04.0]' \\\ndiff --git a/test_plans/vhost_multi_queue_qemu_test_plan.rst b/test_plans/vhost_multi_queue_qemu_test_plan.rst\nindex abaf7af6..445848ff 100644\n--- a/test_plans/vhost_multi_queue_qemu_test_plan.rst\n+++ b/test_plans/vhost_multi_queue_qemu_test_plan.rst\n@@ -43,7 +43,7 @@ Test Case: vhost pmd/virtio-pmd PVP 2queues mergeable path performance\n flow: \n TG --> NIC --> Vhost --> Virtio--> Vhost --> NIC --> TG\n \n-1. Bind one port to igb_uio, then launch testpmd by below command: \n+1. Bind one port to vfio-pci, then launch testpmd by below command:\n     rm -rf vhost-net*\n     ./testpmd -c 0xe -n 4 \\\n     --vdev 'eth_vhost0,iface=vhost-net,queues=2' -- \\\n@@ -62,7 +62,7 @@ TG --> NIC --> Vhost --> Virtio--> Vhost --> NIC --> TG\n     -netdev tap,id=ipvm1,ifname=tap3,script=/etc/qemu-ifup -device rtl8139,netdev=ipvm1,id=net0,mac=00:00:00:00:10:01 \\\n     -vnc :2 -daemonize\n \n-3. On VM, bind virtio net to igb_uio and run testpmd ::\n+3. On VM, bind virtio net to vfio-pci and run testpmd ::\n     ./testpmd -c 0x07 -n 3 -- -i \\\n     --rxq=2 --txq=2 --txqflags=0xf01 --rss-ip --nb-cores=2\n     testpmd>set fwd mac\n@@ -84,7 +84,7 @@ to RX/TX packets normally.\n flow: \n TG --> NIC --> Vhost --> Virtio--> Vhost --> NIC --> TG\n \n-1. Bind one port to igb_uio, then launch testpmd by below command, \n+1. Bind one port to vfio-pci, then launch testpmd by below command,\n    ensure the vhost using 2 queues::\n \n     rm -rf vhost-net*\n@@ -106,7 +106,7 @@ TG --> NIC --> Vhost --> Virtio--> Vhost --> NIC --> TG\n     -netdev tap,id=ipvm1,ifname=tap3,script=/etc/qemu-ifup -device rtl8139,netdev=ipvm1,id=net0,mac=00:00:00:00:10:01 \\\n     -vnc :2 -daemonize\n \n-3. On VM, bind virtio net to igb_uio and run testpmd,\n+3. On VM, bind virtio net to vfio-pci and run testpmd,\n    using one queue for testing at first::\n  \n     ./testpmd -c 0x7 -n 3 -- -i --rxq=1 --txq=1 --tx-offloads=0x0 \\\n@@ -160,7 +160,7 @@ packets.\n flow: \n TG --> NIC --> Vhost --> Virtio--> Vhost --> NIC --> TG\n \n-1. Bind one port to igb_uio, then launch testpmd by below command, \n+1. 
Bind one port to vfio-pci, then launch testpmd by below command,\n    ensure the vhost using 2 queues::\n \n     rm -rf vhost-net*\n@@ -182,7 +182,7 @@ TG --> NIC --> Vhost --> Virtio--> Vhost --> NIC --> TG\n     -netdev tap,id=ipvm1,ifname=tap3,script=/etc/qemu-ifup -device rtl8139,netdev=ipvm1,id=net0,mac=00:00:00:00:10:01 \\\n     -vnc :2 -daemonize\n \n-3. On VM, bind virtio net to igb_uio and run testpmd,\n+3. On VM, bind virtio net to vfio-pci and run testpmd,\n    using one queue for testing at first::\n  \n     ./testpmd -c 0x7 -n 4 -- -i --rxq=2 --txq=2 \\\ndiff --git a/test_plans/vhost_user_interrupt_test_plan.rst b/test_plans/vhost_user_interrupt_test_plan.rst\nindex f8d35297..0fb2f3b6 100644\n--- a/test_plans/vhost_user_interrupt_test_plan.rst\n+++ b/test_plans/vhost_user_interrupt_test_plan.rst\n@@ -136,7 +136,7 @@ Test Case5: Wake up split ring vhost-user cores with l3fwd-power sample when mul\n     ./testpmd -l 1-5 -n 4 --no-pci --file-prefix=virtio \\\n     --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/sock0,server=1,queues=4 -- -i --rxq=4 --txq=4 --rss-ip\n \n-2. Bind 4 cbdma ports to igb_uio driver, then launch l3fwd-power with a virtual vhost device::\n+2. Bind 4 cbdma ports to vfio-pci driver, then launch l3fwd-power with a virtual vhost device::\n \n     ./l3fwd-power -l 9-12 -n 4 --log-level=9 \\\n     --vdev 'eth_vhost0,iface=/tmp/sock0,queues=4,client=1,dmas=[txq0@80:04.0;txq1@80:04.1;txq2@80:04.2;txq3@80:04.3]' -- -p 0x1 --parse-ptype 1 \\\n@@ -157,7 +157,7 @@ Test Case6: Wake up packed ring vhost-user cores with l3fwd-power sample when mu\n     ./testpmd -l 1-5 -n 4 --no-pci --file-prefix=virtio \\\n     --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/sock0,server=1,queues=4,packed_vq=1 -- -i --rxq=4 --txq=4 --rss-ip\n \n-2. Bind 4 cbdma ports to igb_uio driver, then launch l3fwd-power with a virtual vhost device::\n+2. Bind 4 cbdma ports to vfio-pci driver, then launch l3fwd-power with a virtual vhost device::\n \n     ./l3fwd-power -l 9-12 -n 4 --log-level=9 \\\n     --vdev 'eth_vhost0,iface=/tmp/sock0,queues=4,client=1,dmas=[txq0@80:04.0;txq1@80:04.1;txq2@80:04.2;txq3@80:04.3]' -- -p 0x1 --parse-ptype 1 \\\ndiff --git a/test_plans/vhost_user_live_migration_test_plan.rst b/test_plans/vhost_user_live_migration_test_plan.rst\nindex 7ee5fa87..276de3b9 100644\n--- a/test_plans/vhost_user_live_migration_test_plan.rst\n+++ b/test_plans/vhost_user_live_migration_test_plan.rst\n@@ -74,9 +74,9 @@ On host server side:\n     host server# mkdir /mnt/huge\n     host server# mount -t hugetlbfs hugetlbfs /mnt/huge\n \n-2. Bind host port to igb_uio and start testpmd with vhost port::\n+2. Bind host port to vfio-pci and start testpmd with vhost port::\n \n-    host server# ./tools/dpdk-devbind.py -b igb_uio 82:00.1\n+    host server# ./tools/dpdk-devbind.py -b vfio-pci 82:00.1\n     host server# ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xc0000 -n 4 --vdev 'eth_vhost0,iface=./vhost-net,queues=1' -- -i\n     host server# testpmd>start\n \n@@ -95,11 +95,11 @@ On host server side:\n \n On the backup server, run the vhost testpmd on the host and launch VM:\n \n-4. Set huge page, bind one port to igb_uio and run testpmd on the backup server, the command is very similar to host::\n+4. 
Set huge page, bind one port to vfio-pci and run testpmd on the backup server, the command is very similar to host::\n \n     backup server # mkdir /mnt/huge\n     backup server # mount -t hugetlbfs hugetlbfs /mnt/huge\n-    backup server # ./tools/dpdk-devbind.py -b igb_uio 82:00.0\n+    backup server # ./tools/dpdk-devbind.py -b vfio-pci 82:00.0\n     backup server # ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xc0000 -n 4 --vdev 'eth_vhost0,iface=./vhost-net,queues=1' -- -i\n     backup server # testpmd>start\n \n@@ -127,8 +127,8 @@ On the backup server, run the vhost testpmd on the host and launch VM:\n     host VM# cd /root/<dpdk_folder>\n     host VM# make -j 110 install T=x86_64-native-linuxapp-gcc\n     host VM# modprobe uio\n-    host VM# insmod ./x86_64-native-linuxapp-gcc/kmod/igb_uio.ko\n-    host VM# ./tools/dpdk_nic_bind.py --bind=igb_uio 00:03.0\n+    host VM# insmod ./x86_64-native-linuxapp-gcc/kmod/vfio-pci.ko\n+    host VM# ./tools/dpdk_nic_bind.py --bind=vfio-pci 00:03.0\n     host VM# echo 1024 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages\n     host VM# screen -S vm\n     host VM# ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x3 -n 4 -- -i\n@@ -174,9 +174,9 @@ On host server side:\n     host server# mkdir /mnt/huge\n     host server# mount -t hugetlbfs hugetlbfs /mnt/huge\n \n-2. Bind host port to igb_uio and start testpmd with vhost port,note not start vhost port before launching qemu::\n+2. Bind host port to vfio-pci and start testpmd with vhost port,note not start vhost port before launching qemu::\n \n-    host server# ./tools/dpdk-devbind.py -b igb_uio 82:00.1\n+    host server# ./tools/dpdk-devbind.py -b vfio-pci 82:00.1\n     host server# ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xc0000 -n 4 --vdev 'eth_vhost0,iface=./vhost-net,queues=1' -- -i\n \n 3. Start VM on host, here we set 5432 as the serial port, 3333 as the qemu monitor port, 5555 as the SSH port::\n@@ -194,11 +194,11 @@ On host server side:\n \n On the backup server, run the vhost testpmd on the host and launch VM:\n \n-4. Set huge page, bind one port to igb_uio and run testpmd on the backup server, the command is very similar to host::\n+4. Set huge page, bind one port to vfio-pci and run testpmd on the backup server, the command is very similar to host::\n \n     backup server # mkdir /mnt/huge\n     backup server # mount -t hugetlbfs hugetlbfs /mnt/huge\n-    backup server # ./tools/dpdk-devbind.py -b igb_uio 82:00.0\n+    backup server # ./tools/dpdk-devbind.py -b vfio-pci 82:00.0\n     backup server # ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xc0000 -n 4 --vdev 'eth_vhost0,iface=./vhost-net,queues=1' -- -i\n \n 5. 
Launch VM on the backup server, the script is similar to host, need add \" -incoming tcp:0:4444 \" for live migration and make sure the VM image is the NFS mounted folder, VM image is the exact one on host server::\n@@ -225,8 +225,8 @@ On the backup server, run the vhost testpmd on the host and launch VM:\n     host VM# cd /root/<dpdk_folder>\n     host VM# make -j 110 install T=x86_64-native-linuxapp-gcc\n     host VM# modprobe uio\n-    host VM# insmod ./x86_64-native-linuxapp-gcc/kmod/igb_uio.ko\n-    host VM# ./tools/dpdk_nic_bind.py --bind=igb_uio 00:03.0\n+    host VM# insmod ./x86_64-native-linuxapp-gcc/kmod/vfio-pci.ko\n+    host VM# ./tools/dpdk_nic_bind.py --bind=vfio-pci 00:03.0\n     host VM# echo 1024 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages\n     host VM# screen -S vm\n     host VM# ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x3 -n 4 -- -i\n@@ -274,9 +274,9 @@ On host server side:\n     host server# mkdir /mnt/huge\n     host server# mount -t hugetlbfs hugetlbfs /mnt/huge\n \n-2. Bind host port to igb_uio and start testpmd with vhost port::\n+2. Bind host port to vfio-pci and start testpmd with vhost port::\n \n-    host server# ./tools/dpdk-devbind.py -b igb_uio 82:00.1\n+    host server# ./tools/dpdk-devbind.py -b vfio-pci 82:00.1\n     host server# ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xc0000 -n 4 --vdev 'eth_vhost0,iface=./vhost-net,queues=1' -- -i\n     host server# testpmd>start\n \n@@ -295,11 +295,11 @@ On host server side:\n \n On the backup server, run the vhost testpmd on the host and launch VM:\n \n-4. Set huge page, bind one port to igb_uio and run testpmd on the backup server, the command is very similar to host::\n+4. Set huge page, bind one port to vfio-pci and run testpmd on the backup server, the command is very similar to host::\n \n     backup server # mkdir /mnt/huge\n     backup server # mount -t hugetlbfs hugetlbfs /mnt/huge\n-    backup server # ./tools/dpdk-devbind.py -b igb_uio 82:00.0\n+    backup server # ./tools/dpdk-devbind.py -b vfio-pci 82:00.0\n     backup server # ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xc0000 -n 4 --vdev 'eth_vhost0,iface=./vhost-net,queues=1' -- -i\n     backup server # testpmd>start\n \n@@ -362,9 +362,9 @@ On host server side:\n     host server# mkdir /mnt/huge\n     host server# mount -t hugetlbfs hugetlbfs /mnt/huge\n \n-2. Bind host port to igb_uio and start testpmd with vhost port::\n+2. Bind host port to vfio-pci and start testpmd with vhost port::\n \n-    host server# ./tools/dpdk-devbind.py -b igb_uio 82:00.1\n+    host server# ./tools/dpdk-devbind.py -b vfio-pci 82:00.1\n     host server# ./x86_64-native-linuxapp-gcc/app/testpmd -l 2-6 -n 4 --vdev 'net_vhost0,iface=./vhost-net,queues=4' -- -i --nb-cores=4 --rxq=4 --txq=4\n     host server# testpmd>start\n \n@@ -383,11 +383,11 @@ On host server side:\n \n On the backup server, run the vhost testpmd on the host and launch VM:\n \n-4. Set huge page, bind one port to igb_uio and run testpmd on the backup server, the command is very similar to host::\n+4. 
Set huge page, bind one port to vfio-pci and run testpmd on the backup server, the command is very similar to host::\n \n     backup server # mkdir /mnt/huge\n     backup server # mount -t hugetlbfs hugetlbfs /mnt/huge\n-    backup server # ./tools/dpdk-devbind.py -b igb_uio 82:00.0\n+    backup server # ./tools/dpdk-devbind.py -b vfio-pci 82:00.0\n     backup server#./x86_64-native-linuxapp-gcc/app/testpmd -l 2-6 -n 4 --vdev 'net_vhost0,iface=./vhost-net,queues=4' -- -i --nb-cores=4 --rxq=4 --txq=4\n     backup server # testpmd>start\n \n@@ -454,9 +454,9 @@ On host server side:\n     host server# mkdir /mnt/huge\n     host server# mount -t hugetlbfs hugetlbfs /mnt/huge\n \n-2. Bind host port to igb_uio and start testpmd with vhost port::\n+2. Bind host port to vfio-pci and start testpmd with vhost port::\n \n-    host server# ./tools/dpdk-devbind.py -b igb_uio 82:00.1\n+    host server# ./tools/dpdk-devbind.py -b vfio-pci 82:00.1\n     host server# ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xc0000 -n 4 --vdev 'eth_vhost0,iface=./vhost-net,queues=1' -- -i\n     host server# testpmd>start\n \n@@ -475,11 +475,11 @@ On host server side:\n \n On the backup server, run the vhost testpmd on the host and launch VM:\n \n-4. Set huge page, bind one port to igb_uio and run testpmd on the backup server, the command is very similar to host::\n+4. Set huge page, bind one port to vfio-pci and run testpmd on the backup server, the command is very similar to host::\n \n     backup server # mkdir /mnt/huge\n     backup server # mount -t hugetlbfs hugetlbfs /mnt/huge\n-    backup server # ./tools/dpdk-devbind.py -b igb_uio 82:00.0\n+    backup server # ./tools/dpdk-devbind.py -b vfio-pci 82:00.0\n     backup server # ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xc0000 -n 4 --vdev 'eth_vhost0,iface=./vhost-net,queues=1' -- -i\n     backup server # testpmd>start\n \n@@ -507,8 +507,8 @@ On the backup server, run the vhost testpmd on the host and launch VM:\n     host VM# cd /root/<dpdk_folder>\n     host VM# make -j 110 install T=x86_64-native-linuxapp-gcc\n     host VM# modprobe uio\n-    host VM# insmod ./x86_64-native-linuxapp-gcc/kmod/igb_uio.ko\n-    host VM# ./tools/dpdk_nic_bind.py --bind=igb_uio 00:03.0\n+    host VM# insmod ./x86_64-native-linuxapp-gcc/kmod/vfio-pci.ko\n+    host VM# ./tools/dpdk_nic_bind.py --bind=vfio-pci 00:03.0\n     host VM# echo 1024 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages\n     host VM# screen -S vm\n     host VM# ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x3 -n 4 -- -i\n@@ -554,9 +554,9 @@ On host server side:\n     host server# mkdir /mnt/huge\n     host server# mount -t hugetlbfs hugetlbfs /mnt/huge\n \n-2. Bind host port to igb_uio and start testpmd with vhost port,note not start vhost port before launching qemu::\n+2. Bind host port to vfio-pci and start testpmd with vhost port,note not start vhost port before launching qemu::\n \n-    host server# ./tools/dpdk-devbind.py -b igb_uio 82:00.1\n+    host server# ./tools/dpdk-devbind.py -b vfio-pci 82:00.1\n     host server# ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xc0000 -n 4 --vdev 'eth_vhost0,iface=./vhost-net,queues=1' -- -i\n \n 3. Start VM on host, here we set 5432 as the serial port, 3333 as the qemu monitor port, 5555 as the SSH port::\n@@ -574,11 +574,11 @@ On host server side:\n \n On the backup server, run the vhost testpmd on the host and launch VM:\n \n-4. Set huge page, bind one port to igb_uio and run testpmd on the backup server, the command is very similar to host::\n+4. 
Set huge page, bind one port to vfio-pci and run testpmd on the backup server, the command is very similar to host::\n \n     backup server # mkdir /mnt/huge\n     backup server # mount -t hugetlbfs hugetlbfs /mnt/huge\n-    backup server # ./tools/dpdk-devbind.py -b igb_uio 82:00.0\n+    backup server # ./tools/dpdk-devbind.py -b vfio-pci 82:00.0\n     backup server # ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xc0000 -n 4 --vdev 'eth_vhost0,iface=./vhost-net,queues=1' -- -i\n \n 5. Launch VM on the backup server, the script is similar to host, need add \" -incoming tcp:0:4444 \" for live migration and make sure the VM image is the NFS mounted folder, VM image is the exact one on host server::\n@@ -605,8 +605,8 @@ On the backup server, run the vhost testpmd on the host and launch VM:\n     host VM# cd /root/<dpdk_folder>\n     host VM# make -j 110 install T=x86_64-native-linuxapp-gcc\n     host VM# modprobe uio\n-    host VM# insmod ./x86_64-native-linuxapp-gcc/kmod/igb_uio.ko\n-    host VM# ./tools/dpdk_nic_bind.py --bind=igb_uio 00:03.0\n+    host VM# insmod ./x86_64-native-linuxapp-gcc/kmod/vfio-pci.ko\n+    host VM# ./tools/dpdk_nic_bind.py --bind=vfio-pci 00:03.0\n     host VM# echo 1024 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages\n     host VM# screen -S vm\n     host VM# ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x3 -n 4 -- -i\n@@ -654,9 +654,9 @@ On host server side:\n     host server# mkdir /mnt/huge\n     host server# mount -t hugetlbfs hugetlbfs /mnt/huge\n \n-2. Bind host port to igb_uio and start testpmd with vhost port::\n+2. Bind host port to vfio-pci and start testpmd with vhost port::\n \n-    host server# ./tools/dpdk-devbind.py -b igb_uio 82:00.1\n+    host server# ./tools/dpdk-devbind.py -b vfio-pci 82:00.1\n     host server# ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xc0000 -n 4 --vdev 'eth_vhost0,iface=./vhost-net,queues=1' -- -i\n     host server# testpmd>start\n \n@@ -675,11 +675,11 @@ On host server side:\n \n On the backup server, run the vhost testpmd on the host and launch VM:\n \n-4. Set huge page, bind one port to igb_uio and run testpmd on the backup server, the command is very similar to host::\n+4. Set huge page, bind one port to vfio-pci and run testpmd on the backup server, the command is very similar to host::\n \n     backup server # mkdir /mnt/huge\n     backup server # mount -t hugetlbfs hugetlbfs /mnt/huge\n-    backup server # ./tools/dpdk-devbind.py -b igb_uio 82:00.0\n+    backup server # ./tools/dpdk-devbind.py -b vfio-pci 82:00.0\n     backup server # ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xc0000 -n 4 --vdev 'eth_vhost0,iface=./vhost-net,queues=1' -- -i\n     backup server # testpmd>start\n \n@@ -742,9 +742,9 @@ On host server side:\n     host server# mkdir /mnt/huge\n     host server# mount -t hugetlbfs hugetlbfs /mnt/huge\n \n-2. Bind host port to igb_uio and start testpmd with vhost port::\n+2. Bind host port to vfio-pci and start testpmd with vhost port::\n \n-    host server# ./tools/dpdk-devbind.py -b igb_uio 82:00.1\n+    host server# ./tools/dpdk-devbind.py -b vfio-pci 82:00.1\n     host server# ./x86_64-native-linuxapp-gcc/app/testpmd -l 2-6 -n 4 --vdev 'net_vhost0,iface=./vhost-net,queues=4' -- -i --nb-cores=4 --rxq=4 --txq=4\n     host server# testpmd>start\n \n@@ -763,11 +763,11 @@ On host server side:\n \n On the backup server, run the vhost testpmd on the host and launch VM:\n \n-4. Set huge page, bind one port to igb_uio and run testpmd on the backup server, the command is very similar to host::\n+4. 
Set huge page, bind one port to vfio-pci and run testpmd on the backup server, the command is very similar to host::\n \n     backup server # mkdir /mnt/huge\n     backup server # mount -t hugetlbfs hugetlbfs /mnt/huge\n-    backup server # ./tools/dpdk-devbind.py -b igb_uio 82:00.0\n+    backup server # ./tools/dpdk-devbind.py -b vfio-pci 82:00.0\n     backup server#./x86_64-native-linuxapp-gcc/app/testpmd -l 2-6 -n 4 --vdev 'net_vhost0,iface=./vhost-net,queues=4' -- -i --nb-cores=4 --rxq=4 --txq=4\n     backup server # testpmd>start\n \ndiff --git a/test_plans/vhost_virtio_pmd_interrupt_test_plan.rst b/test_plans/vhost_virtio_pmd_interrupt_test_plan.rst\nindex 77c1946c..9a108b3d 100644\n--- a/test_plans/vhost_virtio_pmd_interrupt_test_plan.rst\n+++ b/test_plans/vhost_virtio_pmd_interrupt_test_plan.rst\n@@ -52,7 +52,7 @@ TG --> NIC --> Vhost --> Virtio--> Vhost --> NIC --> TG\n Test Case 1: Basic virtio interrupt test with 4 queues\n =======================================================\n \n-1. Bind one NIC port to igb_uio, then launch testpmd by below command::\n+1. Bind one NIC port to vfio-pci, then launch testpmd by below command::\n \n     rm -rf vhost-net*\n     ./testpmd -c 0x7c -n 4 --vdev 'net_vhost0,iface=vhost-net,queues=4' -- -i --nb-cores=4 --rxq=4 --txq=4 --rss-ip\n@@ -88,7 +88,7 @@ Test Case 1: Basic virtio interrupt test with 4 queues\n Test Case 2: Basic virtio interrupt test with 16 queues\n =======================================================\n \n-1. Bind one NIC port to igb_uio, then launch testpmd by below command::\n+1. Bind one NIC port to vfio-pci, then launch testpmd by below command::\n \n     rm -rf vhost-net*\n     ./testpmd -c 0x1ffff -n 4 --vdev 'eth_vhost0,iface=vhost-net,queues=16' -- -i --nb-cores=16 --rxq=16 --txq=16 --rss-ip\n@@ -124,7 +124,7 @@ Test Case 2: Basic virtio interrupt test with 16 queues\n Test Case 3: Basic virtio-1.0 interrupt test with 4 queues\n ==========================================================\n \n-1. Bind one NIC port to igb_uio, then launch testpmd by below command::\n+1. Bind one NIC port to vfio-pci, then launch testpmd by below command::\n \n     rm -rf vhost-net*\n     ./testpmd -c 0x7c -n 4 --vdev 'net_vhost0,iface=vhost-net,queues=4' -- -i --nb-cores=4 --rxq=4 --txq=4 --rss-ip\n@@ -160,7 +160,7 @@ Test Case 3: Basic virtio-1.0 interrupt test with 4 queues\n Test Case 4: Packed ring virtio interrupt test with 16 queues\n =============================================================\n \n-1. Bind one NIC port to igb_uio, then launch testpmd by below command::\n+1. Bind one NIC port to vfio-pci, then launch testpmd by below command::\n \n     rm -rf vhost-net*\n     ./testpmd -c 0x1ffff -n 4 --vdev 'eth_vhost0,iface=vhost-net,queues=16' -- -i --nb-cores=16 --rxq=16 --txq=16 --rss-ip\n@@ -196,7 +196,7 @@ Test Case 4: Packed ring virtio interrupt test with 16 queues\n Test Case 5: Basic virtio interrupt test with 16 queues and cbdma enabled\n =========================================================================\n \n-1. Bind 16 cbdma channels and one NIC port to igb_uio, then launch testpmd by below command::\n+1. 
Bind 16 cbdma channels and one NIC port to vfio-pci, then launch testpmd by below command::\n \n     ./testpmd -c 0x1ffff -n 4 --vdev 'eth_vhost0,iface=vhost-net,queues=16,dmas=[txq0@80:04.0;txq1@80:04.1;txq2@80:04.2;txq3@80:04.3;txq4@80:04.4;txq5@80:04.5;txq6@80:04.6;txq7@80:04.7;txq8@00:04.0;txq9@00:04.1;txq10@00:04.2;txq11@00:04.3;txq12@00:04.4;txq13@00:04.5;txq14@00:04.6;txq15@00:04.7]' -- -i --nb-cores=16 --rxq=16 --txq=16 --rss-ip\n \n@@ -231,7 +231,7 @@ Test Case 5: Basic virtio interrupt test with 16 queues and cbdma enabled\n Test Case 6: Basic virtio-1.0 interrupt test with 4 queues and cbdma enabled\n ============================================================================\n \n-1. Bind four cbdma channels and one NIC port to igb_uio, then launch testpmd by below command::\n+1. Bind four cbdma channels and one NIC port to vfio-pci, then launch testpmd by below command::\n \n     ./testpmd -c 0x7c -n 4 --vdev 'net_vhost0,iface=vhost-net,queues=4,dmas=[txq0@80:04.0;txq1@80:04.1;txq2@80:04.2;txq3@80:04.3]' -- -i --nb-cores=4 --rxq=4 --txq=4 --rss-ip\n \n@@ -266,7 +266,7 @@ Test Case 6: Basic virtio-1.0 interrupt test with 4 queues and cbdma enabled\n Test Case 7: Packed ring virtio interrupt test with 16 queues and cbdma enabled\n ===============================================================================\n \n-1. Bind 16 cbdma channels ports and one NIC port to igb_uio, then launch testpmd by below command::\n+1. Bind 16 cbdma channels ports and one NIC port to vfio-pci, then launch testpmd by below command::\n \n     ./testpmd -c 0x1ffff -n 4 --vdev 'eth_vhost0,iface=vhost-net,queues=16,dmas=[txq0@80:04.0;txq1@80:04.1;txq2@80:04.2;txq3@80:04.3;txq4@80:04.4;txq5@80:04.5;txq6@80:04.6;txq7@80:04.7;txq8@00:04.0;txq9@00:04.1;txq10@00:04.2;txq11@00:04.3;txq12@00:04.4;txq13@00:04.5;txq14@00:04.6;txq15@00:04.7]' -- -i --nb-cores=16 --rxq=16 --txq=16 --rss-ip\n \ndiff --git a/test_plans/vhost_virtio_user_interrupt_test_plan.rst b/test_plans/vhost_virtio_user_interrupt_test_plan.rst\nindex e70ec91f..239b1671 100644\n--- a/test_plans/vhost_virtio_user_interrupt_test_plan.rst\n+++ b/test_plans/vhost_virtio_user_interrupt_test_plan.rst\n@@ -44,7 +44,7 @@ Test Case1: Split ring virtio-user interrupt test with vhost-user as backend\n \n flow: TG --> NIC --> Vhost --> Virtio\n \n-1. Bind one NIC port to igb_uio, launch testpmd with a virtual vhost device as backend::\n+1. Bind one NIC port to vfio-pci, launch testpmd with a virtual vhost device as backend::\n \n     ./testpmd -c 0x7c -n 4 --vdev 'net_vhost0,iface=vhost-net,queues=1' -- -i  --rxq=1 --txq=1\n     testpmd>start\n@@ -114,7 +114,7 @@ Test Case4: Packed ring virtio-user interrupt test with vhost-user as backend\n \n flow: TG --> NIC --> Vhost --> Virtio\n \n-1. Bind one NIC port to igb_uio, launch testpmd with a virtual vhost device as backend::\n+1. Bind one NIC port to vfio-pci, launch testpmd with a virtual vhost device as backend::\n \n     ./testpmd -c 0x7c -n 4 --vdev 'net_vhost0,iface=vhost-net,queues=1' -- -i  --rxq=1 --txq=1\n     testpmd>start\n@@ -184,7 +184,7 @@ Test Case7: LSC event between vhost-user and virtio-user with split ring and cbd\n \n flow: Vhost <--> Virtio\n \n-1. Bind one cbdma port to igb_uio driver, then start vhost-user side::\n+1. 
Bind one cbdma port to vfio-pci driver, then start vhost-user side::\n \n     ./testpmd -c 0x3000 -n 4 -a 00:04.0 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0,dmas=[txq0@00:04.0]' -- -i\n     testpmd>set fwd mac\n@@ -211,7 +211,7 @@ Test Case8: Split ring virtio-user interrupt test with vhost-user as backend and\n \n flow: TG --> NIC --> Vhost --> Virtio\n \n-1. Bind one cbdma port and one NIC port to igb_uio, launch testpmd with a virtual vhost device as backend::\n+1. Bind one cbdma port and one NIC port to vfio-pci, launch testpmd with a virtual vhost device as backend::\n \n     ./testpmd -c 0x7c -n 4 --vdev 'net_vhost0,iface=vhost-net,queues=1,dmas=[txq0@00:04.0]' -- -i  --rxq=1 --txq=1\n     testpmd>start\n@@ -232,7 +232,7 @@ Test Case9: LSC event between vhost-user and virtio-user with packed ring and cb\n \n flow: Vhost <--> Virtio\n \n-1. Bind one cbdma port to igb_uio driver, then start vhost-user side::\n+1. Bind one cbdma port to vfio-pci driver, then start vhost-user side::\n \n     ./testpmd -c 0x3000 -n 4 -a 00:04.0 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0,dmas=[txq0@00:04.0]' -- -i\n     testpmd>set fwd mac\n@@ -259,7 +259,7 @@ Test Case10: Packed ring virtio-user interrupt test with vhost-user as backend a\n \n flow: TG --> NIC --> Vhost --> Virtio\n \n-1. Bind one cbdma port and one NIC port to igb_uio, launch testpmd with a virtual vhost device as backend::\n+1. Bind one cbdma port and one NIC port to vfio-pci, launch testpmd with a virtual vhost device as backend::\n \n     ./testpmd -c 0x7c -n 4 --vdev 'net_vhost0,iface=vhost-net,queues=1,dmas=[txq0@00:04.0]' -- -i  --rxq=1 --txq=1\n     testpmd>start\ndiff --git a/test_plans/virtio_event_idx_interrupt_test_plan.rst b/test_plans/virtio_event_idx_interrupt_test_plan.rst\nindex 064aa10e..4ffb4d20 100644\n--- a/test_plans/virtio_event_idx_interrupt_test_plan.rst\n+++ b/test_plans/virtio_event_idx_interrupt_test_plan.rst\n@@ -49,7 +49,7 @@ TG --> NIC --> Vhost-user --> Virtio-net\n Test Case 1: Compare interrupt times with and without split ring virtio event idx enabled\n =========================================================================================\n \n-1. Bind one nic port to igb_uio, then launch the vhost sample by below commands::\n+1. Bind one nic port to vfio-pci, then launch the vhost sample by below commands::\n \n     rm -rf vhost-net*\n     ./testpmd -c 0xF0000000 -n 4 --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,queues=1' -- -i\n@@ -82,7 +82,7 @@ Test Case 1: Compare interrupt times with and without split ring virtio event id\n Test Case 2: Split ring virtio-pci driver reload test\n =====================================================\n \n-1. Bind one nic port to igb_uio, then launch the vhost sample by below commands::\n+1. Bind one nic port to vfio-pci, then launch the vhost sample by below commands::\n \n     rm -rf vhost-net*\n     ./testpmd -c 0xF0000000 -n 4 --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,queues=1' -- -i --nb-cores=1 --txd=1024 --rxd=1024\n@@ -120,7 +120,7 @@ Test Case 2: Split ring virtio-pci driver reload test\n Test Case 3: Wake up split ring virtio-net cores with event idx interrupt mode 16 queues test\n =============================================================================================\n \n-1. Bind one nic port to igb_uio, then launch the vhost sample by below commands::\n+1. 
Bind one nic port to vfio-pci, then launch the vhost sample by below commands::\n \n     rm -rf vhost-net*\n     ./testpmd -l 1-17 -n 4 --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,queues=16' -- -i --nb-cores=16 --txd=1024 --rxd=1024 --rxq=16 --txq=16\n@@ -155,7 +155,7 @@ Test Case 3: Wake up split ring virtio-net cores with event idx interrupt mode 1\n Test Case 4: Compare interrupt times with and without packed ring virtio event idx enabled\n ==========================================================================================\n \n-1. Bind one nic port to igb_uio, then launch the vhost sample by below commands::\n+1. Bind one nic port to vfio-pci, then launch the vhost sample by below commands::\n \n     rm -rf vhost-net*\n     ./testpmd -c 0xF0000000 -n 4 --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,queues=1' -- -i\n@@ -188,7 +188,7 @@ Test Case 4: Compare interrupt times with and without packed ring virtio event i\n Test Case 5: Packed ring virtio-pci driver reload test\n ======================================================\n \n-1. Bind one nic port to igb_uio, then launch the vhost sample by below commands::\n+1. Bind one nic port to vfio-pci, then launch the vhost sample by below commands::\n \n     rm -rf vhost-net*\n     ./testpmd -c 0xF0000000 -n 4 --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,queues=1' -- -i --nb-cores=1 --txd=1024 --rxd=1024\n@@ -226,7 +226,7 @@ Test Case 5: Packed ring virtio-pci driver reload test\n Test Case 6: Wake up packed ring virtio-net cores with event idx interrupt mode 16 queues test\n ==============================================================================================\n \n-1. Bind one nic port to igb_uio, then launch the vhost sample by below commands::\n+1. Bind one nic port to vfio-pci, then launch the vhost sample by below commands::\n \n     rm -rf vhost-net*\n     ./testpmd -l 1-17 -n 4 --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,queues=16' -- -i --nb-cores=16 --txd=1024 --rxd=1024 --rxq=16 --txq=16\n@@ -261,7 +261,7 @@ Test Case 6: Wake up packed ring virtio-net cores with event idx interrupt mode\n Test Case 7: Split ring virtio-pci driver reload test with CBDMA enabled\n ========================================================================\n \n-1. Bind one nic port and one cbdma channel to igb_uio, then launch the vhost sample by below commands::\n+1. Bind one nic port and one cbdma channel to vfio-pci, then launch the vhost sample by below commands::\n \n     rm -rf vhost-net*\n     ./testpmd -c 0xF0000000 -n 4 --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,queues=1,dmas=[txq0@00:04.0]' -- -i --nb-cores=1 --txd=1024 --rxd=1024\n@@ -299,7 +299,7 @@ Test Case 7: Split ring virtio-pci driver reload test with CBDMA enabled\n Test Case 8: Wake up split ring virtio-net cores with event idx interrupt mode and cbdma enabled 16 queues test\n ================================================================================================================\n \n-1. Bind one nic port and 16 cbdma channels to igb_uio, then launch the vhost sample by below commands::\n+1. 
Bind one nic port and 16 cbdma channels to vfio-pci, then launch the vhost sample by below commands::\n \n     rm -rf vhost-net*\n     ./testpmd -l 1-17 -n 4 --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,queues=16,client=1,dmas=[txq0@80:04.0;txq1@80:04.1;txq2@80:04.2;txq3@80:04.3;txq4@80:04.4;txq5@80:04.5;txq6@80:04.6;txq7@80:04.7;txq8@00:04.0;txq9@00:04.1;txq10@00:04.2;txq11@00:04.3;txq12@00:04.4;txq13@00:04.5;txq14@00:04.6;txq15@00:04.7]' -- -i --nb-cores=16 --txd=1024 --rxd=1024 --rxq=16 --txq=16\n@@ -334,7 +334,7 @@ Test Case 8: Wake up split ring virtio-net cores with event idx interrupt mode a\n Test Case 9: Packed ring virtio-pci driver reload test with CBDMA enabled\n =========================================================================\n \n-1. Bind one nic port and one cbdma channel to igb_uio, then launch the vhost sample by below commands::\n+1. Bind one nic port and one cbdma channel to vfio-pci, then launch the vhost sample by below commands::\n \n     rm -rf vhost-net*\n     ./testpmd -c 0xF0000000 -n 4 --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,queues=1,dmas=[txq0@00:04.0]' -- -i --nb-cores=1 --txd=1024 --rxd=1024\n@@ -372,7 +372,7 @@ Test Case 9: Packed ring virtio-pci driver reload test with CBDMA enabled\n Test Case 10: Wake up packed ring virtio-net cores with event idx interrupt mode and cbdma enabled 16 queues test\n =================================================================================================================\n \n-1. Bind one nic port and 16 cbdma channels to igb_uio, then launch the vhost sample by below commands::\n+1. Bind one nic port and 16 cbdma channels to vfio-pci, then launch the vhost sample by below commands::\n \n     rm -rf vhost-net*\n     ./testpmd -l 1-17 -n 4 --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,queues=16,client=1,dmas=[txq0@80:04.0;txq1@80:04.1;txq2@80:04.2;txq3@80:04.3;txq4@80:04.4;txq5@80:04.5;txq6@80:04.6;txq7@80:04.7;txq8@00:04.0;txq9@00:04.1;txq10@00:04.2;txq11@00:04.3;txq12@00:04.4;txq13@00:04.5;txq14@00:04.6;txq15@00:04.7]' -- -i --nb-cores=16 --txd=1024 --rxd=1024 --rxq=16 --txq=16\ndiff --git a/test_plans/virtio_pvp_regression_test_plan.rst b/test_plans/virtio_pvp_regression_test_plan.rst\nindex fb45c561..476b4274 100644\n--- a/test_plans/virtio_pvp_regression_test_plan.rst\n+++ b/test_plans/virtio_pvp_regression_test_plan.rst\n@@ -49,7 +49,7 @@ TG --> NIC --> Vhost --> Virtio--> Vhost --> NIC --> TG\n Test Case 1: pvp test with virtio 0.95 mergeable path\n =====================================================\n \n-1. Bind one port to igb_uio, then launch testpmd by below command::\n+1. Bind one port to vfio-pci, then launch testpmd by below command::\n \n     rm -rf vhost-net*\n     ./testpmd -l 1-3 -n 4 \\\n@@ -70,7 +70,7 @@ Test Case 1: pvp test with virtio 0.95 mergeable path\n     -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,mrg_rxbuf=on,rx_queue_size=1024,tx_queue_size=1024,mq=on,vectors=15 \\\n     -vnc :10\n \n-3. On VM, bind virtio net to igb_uio and run testpmd without tx-offloads::\n+3. On VM, bind virtio net to vfio-pci and run testpmd without tx-offloads::\n \n     ./testpmd -c 0x7 -n 4 -- -i --enable-hw-vlan-strip\\\n     --nb-cores=2 --rxq=2 --txq=2 --txd=1024 --rxd=1024\n@@ -88,7 +88,7 @@ Test Case 1: pvp test with virtio 0.95 mergeable path\n Test Case 2: pvp test with virtio 0.95 non-mergeable path\n =========================================================\n \n-1. Bind one port to igb_uio, then launch testpmd by below command::\n+1. 
Bind one port to vfio-pci, then launch testpmd by below command::\n \n     rm -rf vhost-net*\n     ./testpmd -l 1-3 -n 4 \\\n@@ -109,7 +109,7 @@ Test Case 2: pvp test with virtio 0.95 non-mergeable path\n     -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,mrg_rxbuf=off,rx_queue_size=1024,tx_queue_size=1024,mq=on,vectors=15 \\\n     -vnc :10\n \n-3. On VM, bind virtio net to igb_uio and run testpmd without tx-offloads::\n+3. On VM, bind virtio net to vfio-pci and run testpmd without tx-offloads::\n \n     ./testpmd -c 0x7 -n 4 -- -i --enable-hw-vlan-strip\\\n     --nb-cores=2 --rxq=2 --txq=2 --txd=1024 --rxd=1024\n@@ -127,7 +127,7 @@ Test Case 2: pvp test with virtio 0.95 non-mergeable path\n Test Case 3: pvp test with virtio 0.95 vrctor_rx path\n =====================================================\n \n-1. Bind one port to igb_uio, then launch testpmd by below command::\n+1. Bind one port to vfio-pci, then launch testpmd by below command::\n \n     rm -rf vhost-net*\n     ./testpmd -l 1-3 -n 4 \\\n@@ -148,7 +148,7 @@ Test Case 3: pvp test with virtio 0.95 vrctor_rx path\n     -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,mrg_rxbuf=off,rx_queue_size=1024,tx_queue_size=1024,mq=on,vectors=15 \\\n     -vnc :10\n \n-3. On VM, bind virtio net to igb_uio and run testpmd without tx-offloads, [0000:xx.00] is [Bus,Device,Function] of virtio-net::\n+3. On VM, bind virtio net to vfio-pci and run testpmd without tx-offloads, [0000:xx.00] is [Bus,Device,Function] of virtio-net::\n \n     ./testpmd -c 0x7 -n 3 -a 0000:xx.00,vectorized -- -i \\\n     --nb-cores=2 --rxq=2 --txq=2 --txd=1024 --rxd=1024\n@@ -166,7 +166,7 @@ Test Case 3: pvp test with virtio 0.95 vrctor_rx path\n Test Case 4: pvp test with virtio 1.0 mergeable path\n ====================================================\n \n-1. Bind one port to igb_uio, then launch testpmd by below command::\n+1. Bind one port to vfio-pci, then launch testpmd by below command::\n \n     rm -rf vhost-net*\n     ./testpmd -l 1-3 -n 4 \\\n@@ -187,7 +187,7 @@ Test Case 4: pvp test with virtio 1.0 mergeable path\n     -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,rx_queue_size=1024,tx_queue_size=1024,mq=on,vectors=15 \\\n     -vnc :10\n \n-3. On VM, bind virtio net to igb_uio and run testpmd without tx-offloads::\n+3. On VM, bind virtio net to vfio-pci and run testpmd without tx-offloads::\n \n     ./testpmd -c 0x7 -n 4 -- -i --enable-hw-vlan-strip\\\n     --nb-cores=2 --rxq=2 --txq=2 --txd=1024 --rxd=1024\n@@ -205,7 +205,7 @@ Test Case 4: pvp test with virtio 1.0 mergeable path\n Test Case 5: pvp test with virtio 1.0 non-mergeable path\n ========================================================\n \n-1. Bind one port to igb_uio, then launch testpmd by below command::\n+1. Bind one port to vfio-pci, then launch testpmd by below command::\n \n     rm -rf vhost-net*\n     ./testpmd -l 1-3 -n 4 \\\n@@ -226,7 +226,7 @@ Test Case 5: pvp test with virtio 1.0 non-mergeable path\n     -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=off,rx_queue_size=1024,tx_queue_size=1024,mq=on,vectors=15 \\\n     -vnc :10\n \n-3. On VM, bind virtio net to igb_uio and run testpmd without tx-offloads::\n+3. 
On VM, bind virtio net to vfio-pci and run testpmd without tx-offloads::\n \n     ./testpmd -c 0x7 -n 4 -- -i --enable-hw-vlan-strip\\\n     --nb-cores=2 --rxq=2 --txq=2 --txd=1024 --rxd=1024\n@@ -244,7 +244,7 @@ Test Case 5: pvp test with virtio 1.0 non-mergeable path\n Test Case 6: pvp test with virtio 1.0 vrctor_rx path\n ====================================================\n \n-1. Bind one port to igb_uio, then launch testpmd by below command::\n+1. Bind one port to vfio-pci, then launch testpmd by below command::\n \n     rm -rf vhost-net*\n     ./testpmd -l 1-3 -n 4 \\\n@@ -265,7 +265,7 @@ Test Case 6: pvp test with virtio 1.0 vrctor_rx path\n     -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=off,rx_queue_size=1024,tx_queue_size=1024,mq=on,vectors=15 \\\n     -vnc :10\n \n-3. On VM, bind virtio net to igb_uio and run testpmd without tx-offloads, [0000:xx.00] is [Bus,Device,Function] of virtio-net::\n+3. On VM, bind virtio net to vfio-pci and run testpmd without tx-offloads, [0000:xx.00] is [Bus,Device,Function] of virtio-net::\n \n     ./testpmd -c 0x7 -n 3 -a 0000:xx.00,vectorized -- -i \\\n     --nb-cores=2 --rxq=2 --txq=2 --txd=1024 --rxd=1024\n@@ -283,7 +283,7 @@ Test Case 6: pvp test with virtio 1.0 vrctor_rx path\n Test Case 7: pvp test with virtio 1.1 mergeable path\n ====================================================\n \n-1. Bind one port to igb_uio, then launch testpmd by below command::\n+1. Bind one port to vfio-pci, then launch testpmd by below command::\n \n     rm -rf vhost-net*\n     ./testpmd -l 1-3 -n 4 \\\n@@ -304,7 +304,7 @@ Test Case 7: pvp test with virtio 1.1 mergeable path\n     -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=2 \\\n     -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,rx_queue_size=1024,tx_queue_size=1024,mq=on,vectors=15,packed=on -vnc :10\n \n-3. On VM, bind virtio net to igb_uio and run testpmd without tx-offloads::\n+3. On VM, bind virtio net to vfio-pci and run testpmd without tx-offloads::\n \n     ./testpmd -c 0x7 -n 4 -- -i --enable-hw-vlan-strip \\\n     --nb-cores=2 --rxq=2 --txq=2 --txd=1024 --rxd=1024\n@@ -322,7 +322,7 @@ Test Case 7: pvp test with virtio 1.1 mergeable path\n Test Case 8: pvp test with virtio 1.1 non-mergeable path\n =========================================================\n \n-1. Bind one port to igb_uio, then launch testpmd by below command::\n+1. Bind one port to vfio-pci, then launch testpmd by below command::\n \n     rm -rf vhost-net*\n     ./testpmd -l 1-3 -n 4 \\\n@@ -343,7 +343,7 @@ Test Case 8: pvp test with virtio 1.1 non-mergeable path\n     -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=2 \\\n     -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=off,rx_queue_size=1024,tx_queue_size=1024,mq=on,vectors=15,packed=on -vnc :10\n \n-3. On VM, bind virtio net to igb_uio and run testpmd without tx-offloads::\n+3. 
On VM, bind virtio net to vfio-pci and run testpmd without tx-offloads::\n \n     ./testpmd -c 0x7 -n 4 -- -i --enable-hw-vlan-strip \\\n     --nb-cores=2 --rxq=2 --txq=2 --txd=1024 --rxd=1024\ndiff --git a/test_plans/virtio_user_as_exceptional_path_test_plan.rst b/test_plans/virtio_user_as_exceptional_path_test_plan.rst\nindex f04271fa..2dffa877 100644\n--- a/test_plans/virtio_user_as_exceptional_path_test_plan.rst\n+++ b/test_plans/virtio_user_as_exceptional_path_test_plan.rst\n@@ -71,9 +71,9 @@ Flow:tap0-->vhost-net-->virtio_user-->nic0-->nic1\n \n     modprobe vhost-net\n \n-3. Bind nic0 to igb_uio and launch the virtio_user with testpmd::\n+3. Bind nic0 to vfio-pci and launch the virtio_user with testpmd::\n \n-    ./dpdk-devbind.py -b igb_uio xx:xx.x        # xx:xx.x is the pci addr of nic0\n+    ./dpdk-devbind.py -b vfio-pci xx:xx.x        # xx:xx.x is the pci addr of nic0\n     ./testpmd -c 0xc0000 -n 4 --file-prefix=test2 \\\n     --vdev=virtio_user0,mac=00:01:02:03:04:05,path=/dev/vhost-net,queue_size=1024 -- -i --rxd=1024 --txd=1024\n     testpmd>set fwd csum\n@@ -123,9 +123,9 @@ Flow: tap0<-->vhost-net<-->virtio_user<-->nic0<-->TG\n \n     ufw disable\n \n-2. Bind the physical port to igb_uio, launch testpmd with one queue for virtio_user::\n+2. Bind the physical port to vfio-pci, launch testpmd with one queue for virtio_user::\n \n-    ./dpdk-devbind.py -b igb_uio xx:xx.x        # xx:xx.x is the pci addr of nic0\n+    ./dpdk-devbind.py -b vfio-pci xx:xx.x        # xx:xx.x is the pci addr of nic0\n     ./testpmd -l 1-2 -n 4  --file-prefix=test2 --vdev=virtio_user0,mac=00:01:02:03:04:05,path=/dev/vhost-net,queue_size=1024,queues=1 -- -i --rxd=1024 --txd=1024\n \n 3. Check if there is a tap device generated::\n@@ -153,9 +153,9 @@ Flow: tap0<-->vhost-net<-->virtio_user<-->nic0<-->TG\n \n     ufw disable\n \n-2. Bind the physical port to igb_uio, launch testpmd with two queues for virtio_user::\n+2. Bind the physical port to vfio-pci, launch testpmd with two queues for virtio_user::\n \n-    ./dpdk-devbind.py -b igb_uio xx:xx.x        # xx:xx.x is the pci addr of nic0\n+    ./dpdk-devbind.py -b vfio-pci xx:xx.x        # xx:xx.x is the pci addr of nic0\n     ./testpmd -l 1-2 -n 4  --file-prefix=test2 --vdev=virtio_user0,mac=00:01:02:03:04:05,path=/dev/vhost-net,queue_size=1024,queues=2 -- -i --rxd=1024 --txd=1024 --txq=2 --rxq=2 --nb-cores=1\n \n 3. Check if there is a tap device generated::\ndiff --git a/test_plans/virtio_user_for_container_networking_test_plan.rst b/test_plans/virtio_user_for_container_networking_test_plan.rst\nindex 15c9c248..d28b30a1 100644\n--- a/test_plans/virtio_user_for_container_networking_test_plan.rst\n+++ b/test_plans/virtio_user_for_container_networking_test_plan.rst\n@@ -70,7 +70,7 @@ Test Case 1: packet forward test for container networking\n     mkdir /mnt/huge\n     mount -t hugetlbfs nodev /mnt/huge\n \n-2. Bind one port to igb_uio, launch vhost::\n+2. Bind one port to vfio-pci, launch vhost::\n \n     ./testpmd -l 1-2 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0' -- -i\n \n@@ -92,7 +92,7 @@ Test Case 2: packet forward with multi-queues for container networking\n     mkdir /mnt/huge\n     mount -t hugetlbfs nodev /mnt/huge\n \n-2. Bind one port to igb_uio, launch vhost::\n+2. 
Bind one port to vfio-pci, launch vhost::\n \n     ./testpmd -l 1-3 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=2,client=0' -- -i --nb-cores=2\n \ndiff --git a/test_plans/vm2vm_virtio_pmd_test_plan.rst b/test_plans/vm2vm_virtio_pmd_test_plan.rst\nindex 7eeaa652..30499bcd 100644\n--- a/test_plans/vm2vm_virtio_pmd_test_plan.rst\n+++ b/test_plans/vm2vm_virtio_pmd_test_plan.rst\n@@ -46,7 +46,7 @@ Virtio-pmd <-> Vhost-user <-> Testpmd <-> Vhost-user <-> Virtio-pmd\n Test Case 1: VM2VM vhost-user/virtio-pmd with vector_rx path\n ============================================================\n \n-1. Bind one physical nic port to igb_uio, then launch the testpmd by below commands::\n+1. Bind one physical nic port to vfio-pci, then launch the testpmd by below commands::\n \n     rm -rf vhost-net*\n     ./dpdk-testpmd -c 0xc0000 -n 4 --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1' --vdev 'net_vhost1,iface=vhost-net1,queues=1'  -- -i --nb-cores=1 --txd=1024 --rxd=1024\n@@ -77,13 +77,13 @@ Test Case 1: VM2VM vhost-user/virtio-pmd with vector_rx path\n     -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \\\n     -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=true,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on -vnc :12\n \n-3. On VM1, bind vdev with igb_uio driver,then run testpmd, set rxonly for virtio1, [0000:xx.00] is [Bus,Device,Function] of virtio-net::\n+3. On VM1, bind vdev with vfio-pci driver,then run testpmd, set rxonly for virtio1, [0000:xx.00] is [Bus,Device,Function] of virtio-net::\n \n     ./dpdk-testpmd -c 0x3 -n 4 -a 0000:xx.00,vectorized=1 -- -i --txd=1024 --rxd=1024\n     testpmd>set fwd rxonly\n     testpmd>start\n \n-4. On VM2, bind vdev with igb_uio driver,then run testpmd, set txonly for virtio2 and send 64B packets, [0000:xx.00] is [Bus,Device,Function] of virtio-net::\n+4. On VM2, bind vdev with vfio-pci driver,then run testpmd, set txonly for virtio2 and send 64B packets, [0000:xx.00] is [Bus,Device,Function] of virtio-net::\n \n     ./dpdk-testpmd -c 0x3 -n 4 -a 0000:xx.00,vectorized=1 -- -i --txd=1024 --rxd=1024\n     testpmd>set fwd txonly\n@@ -101,7 +101,7 @@ Test Case 1: VM2VM vhost-user/virtio-pmd with vector_rx path\n Test Case 2: VM2VM vhost-user/virtio-pmd with normal path\n =========================================================\n \n-1. Bind one physical nic port to igb_uio, then launch the testpmd by below commands::\n+1. Bind one physical nic port to vfio-pci, then launch the testpmd by below commands::\n \n     rm -rf vhost-net*\n     ./dpdk-testpmd -c 0xc0000 -n 4 --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1' --vdev 'net_vhost1,iface=vhost-net1,queues=1'  -- -i --nb-cores=1 --txd=1024 --rxd=1024\n@@ -132,13 +132,13 @@ Test Case 2: VM2VM vhost-user/virtio-pmd with normal path\n     -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \\\n     -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=true,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on -vnc :12\n \n-3. On VM1, bind vdev with igb_uio driver,then run testpmd, set rxonly for virtio1 ::\n+3. On VM1, bind vdev with vfio-pci driver,then run testpmd, set rxonly for virtio1 ::\n \n     ./dpdk-testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x00 --enable-hw-vlan-strip --txd=1024 --rxd=1024\n     testpmd>set fwd rxonly\n     testpmd>start\n \n-4. 
On VM2, bind vdev with igb_uio driver,then run testpmd, set rxonly for virtio2 and send 64B packets ::\n+4. On VM2, bind vdev with vfio-pci driver,then run testpmd, set rxonly for virtio2 and send 64B packets ::\n \n     ./dpdk-testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x00 --enable-hw-vlan-strip --txd=1024 --rxd=1024\n     testpmd>set fwd txonly\n@@ -156,7 +156,7 @@ Test Case 2: VM2VM vhost-user/virtio-pmd with normal path\n Test Case 3: VM2VM vhost-user/virtio1.0-pmd with vector_rx path\n ===============================================================\n \n-1. Bind one physical nic port to igb_uio, then launch the testpmd by below commands::\n+1. Bind one physical nic port to vfio-pci, then launch the testpmd by below commands::\n \n     rm -rf vhost-net*\n     ./dpdk-testpmd -c 0xc0000 -n 4 --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1' --vdev 'net_vhost1,iface=vhost-net1,queues=1'  -- -i --nb-cores=1 --txd=1024 --rxd=1024\n@@ -187,13 +187,13 @@ Test Case 3: VM2VM vhost-user/virtio1.0-pmd with vector_rx path\n     -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \\\n     -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on -vnc :12\n \n-3. On VM1, bind vdev with igb_uio driver,then run testpmd, set rxonly for virtio1, [0000:xx.00] is [Bus,Device,Function] of virtio-net::\n+3. On VM1, bind vdev with vfio-pci driver,then run testpmd, set rxonly for virtio1, [0000:xx.00] is [Bus,Device,Function] of virtio-net::\n \n     ./dpdk-testpmd -c 0x3 -n 4 -a 0000:xx.00,vectorized=1 -- -i --txd=1024 --rxd=1024\n     testpmd>set fwd rxonly\n     testpmd>start\n \n-4. On VM2, bind vdev with igb_uio driver,then run testpmd, set txonly for virtio2, [0000:xx.00] is [Bus,Device,Function] of virtio-net::\n+4. On VM2, bind vdev with vfio-pci driver,then run testpmd, set txonly for virtio2, [0000:xx.00] is [Bus,Device,Function] of virtio-net::\n \n     ./dpdk-testpmd -c 0x3 -n 4 -a 0000:xx.00,vectorized=1 -- -i --txd=1024 --rxd=1024\n     testpmd>set fwd txonly\n@@ -211,7 +211,7 @@ Test Case 3: VM2VM vhost-user/virtio1.0-pmd with vector_rx path\n Test Case 4: VM2VM vhost-user/virtio1.0-pmd with normal path\n ============================================================\n \n-1. Bind one physical nic port to igb_uio, then launch the testpmd by below commands::\n+1. Bind one physical nic port to vfio-pci, then launch the testpmd by below commands::\n \n     rm -rf vhost-net*\n     ./dpdk-testpmd -c 0xc0000 -n 4 --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1' --vdev 'net_vhost1,iface=vhost-net1,queues=1'  -- -i --nb-cores=1 --txd=1024 --rxd=1024\n@@ -242,13 +242,13 @@ Test Case 4: VM2VM vhost-user/virtio1.0-pmd with normal path\n     -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \\\n     -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on -vnc :12\n \n-3. On VM1, bind vdev with igb_uio driver,then run testpmd, set rxonly for virtio1 ::\n+3. On VM1, bind vdev with vfio-pci driver,then run testpmd, set rxonly for virtio1 ::\n \n     ./dpdk-testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x00 --enable-hw-vlan-strip --txd=1024 --rxd=1024\n     testpmd>set fwd rxonly\n     testpmd>start\n \n-4. On VM2, bind vdev with igb_uio driver,then run testpmd, set txonly for virtio2 ::\n+4. 
On VM2, bind vdev with vfio-pci driver,then run testpmd, set txonly for virtio2 ::\n \n     ./dpdk-testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x00 --enable-hw-vlan-strip --txd=1024 --rxd=1024\n     testpmd>set fwd txonly\n@@ -266,7 +266,7 @@ Test Case 4: VM2VM vhost-user/virtio1.0-pmd with normal path\n Test Case 5: VM2VM vhost-user/virtio-pmd mergeable path with payload valid check\n ================================================================================\n \n-1. Bind virtio with igb_uio driver, launch the testpmd by below commands::\n+1. Bind virtio with vfio-pci driver, launch the testpmd by below commands::\n \n     ./dpdk-testpmd -c 0xc0000 -n 4 --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1' --vdev 'net_vhost1,iface=vhost-net1,queues=1'  -- -i --nb-cores=1 --txd=1024 --rxd=1024\n     testpmd>set fwd mac\n@@ -309,7 +309,7 @@ Test Case 5: VM2VM vhost-user/virtio-pmd mergeable path with payload valid check\n     -CONFIG_RTE_LIBRTE_PMD_PCAP=n\n     +CONFIG_RTE_LIBRTE_PMD_PCAP=y\n \n-4. Bind virtio with igb_uio driver,then run testpmd, set rxonly mode for virtio-pmd on VM1::\n+4. Bind virtio with vfio-pci driver,then run testpmd, set rxonly mode for virtio-pmd on VM1::\n \n     ./dpdk-testpmd -c 0x3 -n 4 --file-prefix=test -- -i --txd=1024 --rxd=1024 --max-pkt-len=9600 --rx-offloads=0x00002000\n     testpmd>set fwd rxonly\n@@ -319,7 +319,7 @@ Test Case 5: VM2VM vhost-user/virtio-pmd mergeable path with payload valid check\n \n     ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=test -- --pdump  'port=0,queue=*,rx-dev=/root/pdump-rx.pcap,mbuf-size=8000'\n \n-6. On VM2, bind virtio with igb_uio driver,then run testpmd, config tx_packets to 8k length with chain mode::\n+6. On VM2, bind virtio with vfio-pci driver,then run testpmd, config tx_packets to 8k length with chain mode::\n \n     ./dpdk-testpmd -c 0x3 -n 4 -- -i --txd=1024 --rxd=1024 --max-pkt-len=9600 --rx-offloads=0x00002000\n     testpmd>set fwd mac\n@@ -354,7 +354,7 @@ Test Case 5: VM2VM vhost-user/virtio-pmd mergeable path with payload valid check\n Test Case 6: VM2VM vhost-user/virtio1.0-pmd mergeable path with payload valid check\n ===================================================================================\n \n-1. Bind virtio with igb_uio driver, launch the testpmd by below commands::\n+1. Bind virtio with vfio-pci driver, launch the testpmd by below commands::\n \n     ./dpdk-testpmd -c 0xc0000 -n 4 --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1' --vdev 'net_vhost1,iface=vhost-net1,queues=1'  -- -i --nb-cores=1 --txd=1024 --rxd=1024\n     testpmd>set fwd mac\n@@ -397,7 +397,7 @@ Test Case 6: VM2VM vhost-user/virtio1.0-pmd mergeable path with payload valid ch\n     -CONFIG_RTE_LIBRTE_PMD_PCAP=n\n     +CONFIG_RTE_LIBRTE_PMD_PCAP=y\n \n-4. Bind virtio with igb_uio driver,then run testpmd, set rxonly mode for virtio-pmd on VM1::\n+4. Bind virtio with vfio-pci driver,then run testpmd, set rxonly mode for virtio-pmd on VM1::\n \n     ./dpdk-testpmd -c 0x3 -n 4 --file-prefix=test -- -i --txd=1024 --rxd=1024 --max-pkt-len=9600 --rx-offloads=0x00002000\n     testpmd>set fwd rxonly\n@@ -407,7 +407,7 @@ Test Case 6: VM2VM vhost-user/virtio1.0-pmd mergeable path with payload valid ch\n \n     ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=test -- --pdump  'port=0,queue=*,rx-dev=/root/pdump-rx.pcap,mbuf-size=8000'\n \n-6. On VM2, bind virtio with igb_uio driver,then run testpmd, config tx_packets to 8k length with chain mode::\n+6. 
On VM2, bind virtio with vfio-pci driver,then run testpmd, config tx_packets to 8k length with chain mode::\n \n     ./dpdk-testpmd -c 0x3 -n 4 -- -i --txd=1024 --rxd=1024 --max-pkt-len=9600 --rx-offloads=0x00002000\n     testpmd>set fwd mac\n@@ -442,7 +442,7 @@ Test Case 6: VM2VM vhost-user/virtio1.0-pmd mergeable path with payload valid ch\n Test Case 7: VM2VM vhost-user/virtio1.1-pmd mergeable path with payload valid check\n ===================================================================================\n \n-1. Bind virtio with igb_uio driver, launch the testpmd by below commands::\n+1. Bind virtio with vfio-pci driver, launch the testpmd by below commands::\n \n     ./dpdk-testpmd -c 0xc0000 -n 4 --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1' --vdev 'net_vhost1,iface=vhost-net1,queues=1'  -- -i --nb-cores=1 --txd=1024 --rxd=1024\n     testpmd>set fwd mac\n@@ -485,7 +485,7 @@ Test Case 7: VM2VM vhost-user/virtio1.1-pmd mergeable path with payload valid ch\n     -CONFIG_RTE_LIBRTE_PMD_PCAP=n\n     +CONFIG_RTE_LIBRTE_PMD_PCAP=y\n \n-4. Bind virtio with igb_uio driver,then run testpmd, set rxonly mode for virtio-pmd on VM1::\n+4. Bind virtio with vfio-pci driver,then run testpmd, set rxonly mode for virtio-pmd on VM1::\n \n     ./dpdk-testpmd -c 0x3 -n 4 --file-prefix=test -- -i --txd=1024 --rxd=1024 --max-pkt-len=9600 --rx-offloads=0x00002000\n     testpmd>set fwd rxonly\n@@ -495,7 +495,7 @@ Test Case 7: VM2VM vhost-user/virtio1.1-pmd mergeable path with payload valid ch\n \n     ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=test -- --pdump  'port=0,queue=*,rx-dev=/root/pdump-rx.pcap,mbuf-size=8000'\n \n-6. On VM2, bind virtio with igb_uio driver,then run testpmd, config tx_packets to 8k length with chain mode::\n+6. On VM2, bind virtio with vfio-pci driver,then run testpmd, config tx_packets to 8k length with chain mode::\n \n     ./dpdk-testpmd -c 0x3 -n 4 -- -i --txd=1024 --rxd=1024 --max-pkt-len=9600 --rx-offloads=0x00002000\n     testpmd>set fwd mac\n@@ -530,7 +530,7 @@ Test Case 7: VM2VM vhost-user/virtio1.1-pmd mergeable path with payload valid ch\n Test Case 8: VM2VM vhost-user/virtio1.1-pmd with normal path\n ============================================================\n \n-1. Bind one physical nic port to igb_uio, then launch the testpmd by below commands::\n+1. Bind one physical nic port to vfio-pci, then launch the testpmd by below commands::\n \n     rm -rf vhost-net*\n     ./dpdk-testpmd -c 0xc0000 -n 4 --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1' --vdev 'net_vhost1,iface=vhost-net1,queues=1'  -- -i --nb-cores=1 --txd=1024 --rxd=1024\n@@ -561,13 +561,13 @@ Test Case 8: VM2VM vhost-user/virtio1.1-pmd with normal path\n     -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \\\n     -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on -vnc :12\n \n-3. On VM1, bind vdev with igb_uio driver,then run testpmd, set rxonly for virtio1 ::\n+3. On VM1, bind vdev with vfio-pci driver,then run testpmd, set rxonly for virtio1 ::\n \n     ./dpdk-testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x00 --enable-hw-vlan-strip --txd=1024 --rxd=1024\n     testpmd>set fwd rxonly\n     testpmd>start\n \n-4. On VM2, bind vdev with igb_uio driver,then run testpmd, set txonly for virtio2 ::\n+4. 
On VM2, bind vdev with vfio-pci driver,then run testpmd, set txonly for virtio2 ::\n \n     ./dpdk-testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x00 --enable-hw-vlan-strip --txd=1024 --rxd=1024\n     testpmd>set fwd txonly\n@@ -585,7 +585,7 @@ Test Case 8: VM2VM vhost-user/virtio1.1-pmd with normal path\n Test Case 9: VM2VM virtio-pmd split ring mergeable path 8 queues CBDMA enable with server mode stable test\n ==========================================================================================================\n \n-1. Bind 16 cbdma channels to igb_uio driver, then launch the testpmd with 2 vhost port and 8 queues by below commands::\n+1. Bind 16 cbdma channels to vfio-pci driver, then launch the testpmd with 2 vhost port and 8 queues by below commands::\n \n     ./dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.1;txq2@0000:00:04.2;txq3@0000:00:04.3;txq4@0000:00:04.4;txq5@0000:00:04.5;txq6@0000:00:04.6;txq7@0000:00:04.7]' \\\n     --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0@0000:80:04.0;txq1@0000:80:04.1;txq2@0000:80:04.2;txq3@0000:80:04.3;txq4@0000:80:04.4;txq5@0000:80:04.5;txq6@0000:80:04.6;txq7@0000:80:04.7]'  -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8\n@@ -659,7 +659,7 @@ Test Case 9: VM2VM virtio-pmd split ring mergeable path 8 queues CBDMA enable wi\n Test Case 10: VM2VM virtio-pmd split ring mergeable path dynamic queue size CBDMA enable with server mode test\n ==============================================================================================================\n \n-1. Bind 16 cbdma channels to igb_uio driver, then launch the testpmd with 2 vhost ports below commands::\n+1. Bind 16 cbdma channels to vfio-pci driver, then launch the testpmd with 2 vhost ports below commands::\n \n     ./dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.1;txq2@0000:00:04.2;txq3@0000:00:04.3;txq4@0000:00:04.4;txq5@0000:00:04.5;txq6@0000:00:04.6;txq7@0000:00:04.7]' \\\n     --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0@0000:80:04.0;txq1@0000:80:04.1;txq2@0000:80:04.2;txq3@0000:80:04.3;txq4@0000:80:04.4;txq5@0000:80:04.5;txq6@0000:80:04.6;txq7@0000:80:04.7]'  -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=4 --txq=4\n@@ -730,7 +730,7 @@ Test Case 10: VM2VM virtio-pmd split ring mergeable path dynamic queue size CBDM\n Test Case 11: VM2VM virtio-pmd packed ring mergeable path 8 queues CBDMA enable test\n =====================================================================================\n \n-1. Bind 16 cbdma channels to igb_uio driver, then launch the testpmd with 2 vhost port and 8 queues by below commands::\n+1. 
Bind 16 cbdma channels to vfio-pci driver, then launch the testpmd with 2 vhost port and 8 queues by below commands::\n \n     rm -rf vhost-net*\n     ./dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=8,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.1;txq2@0000:00:04.2;txq3@0000:00:04.3;txq4@0000:00:04.4;txq5@0000:00:04.5;txq6@0000:00:04.6;txq7@0000:00:04.7]' \\\ndiff --git a/test_plans/vswitch_sample_cbdma_test_plan.rst b/test_plans/vswitch_sample_cbdma_test_plan.rst\nindex 44518eec..9abc3a99 100644\n--- a/test_plans/vswitch_sample_cbdma_test_plan.rst\n+++ b/test_plans/vswitch_sample_cbdma_test_plan.rst\n@@ -62,7 +62,7 @@ Modify the testpmd code as following::\n Test Case1: PVP performance check with CBDMA channel using vhost async driver\n =============================================================================\n \n-1. Bind physical port to vfio-pci and CBDMA channel to igb_uio.\n+1. Bind physical port to vfio-pci and CBDMA channel to vfio-pci.\n \n 2. On host, launch dpdk-vhost by below command::\n \n@@ -98,7 +98,7 @@ Test Case1: PVP performance check with CBDMA channel using vhost async driver\n Test Case2: PVP test with two VM and two CBDMA channels using vhost async driver\n =================================================================================\n \n-1. Bind one physical ports to vfio-pci and two CBDMA channels to igb_uio.\n+1. Bind one physical ports to vfio-pci and two CBDMA channels to vfio-pci.\n \n 2. On host, launch dpdk-vhost by below command::\n \n@@ -136,7 +136,7 @@ Test Case2: PVP test with two VM and two CBDMA channels using vhost async driver\n Test Case3: VM2VM forwarding test with two CBDMA channels\n =========================================================\n \n-1.Bind one physical ports to vfio-pci and two CBDMA channels to igb_uio.\n+1.Bind one physical ports to vfio-pci and two CBDMA channels to vfio-pci.\n \n 2. On host, launch dpdk-vhost by below command::\n \n@@ -183,7 +183,7 @@ Test Case3: VM2VM forwarding test with two CBDMA channels\n Test Case4: VM2VM test with cbdma channels register/unregister stable check\n ============================================================================\n \n-1. Bind one physical ports to vfio-pci and two CBDMA channels to igb_uio.\n+1. Bind one physical ports to vfio-pci and two CBDMA channels to vfio-pci.\n \n 2. On host, launch dpdk-vhost by below command::\n \n@@ -263,7 +263,7 @@ Test Case4: VM2VM test with cbdma channels register/unregister stable check\n Test Case5: VM2VM split ring test with iperf and reconnect stable check\n =======================================================================\n \n-1. Bind one physical ports to vfio-pci and two CBDMA channels to igb_uio.\n+1. Bind one physical ports to vfio-pci and two CBDMA channels to vfio-pci.\n \n 2. On host, launch dpdk-vhost by below command::\n \n@@ -322,7 +322,7 @@ Test Case5: VM2VM split ring test with iperf and reconnect stable check\n Test Case6: VM2VM packed ring test with iperf and reconnect stable test\n =======================================================================\n \n-1. Bind one physical ports to vfio-pci and two CBDMA channels to igb_uio.\n+1. Bind one physical ports to vfio-pci and two CBDMA channels to vfio-pci.\n \n 2. On host, launch dpdk-vhost by below command::\n \n",
    "prefixes": [
        "V1",
        "2/2"
    ]
}