get:
Show a patch.

patch:
Partially update a patch; only the fields supplied in the request are changed.

put:
Update a patch.
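
For illustration, a minimal sketch of retrieving this resource programmatically with Python's requests library (an assumption; any HTTP client works). The URL and the field names are taken from the response shown below; error handling is kept to a single check:

import requests

# Fetch the patch resource shown below (GET /api/patches/115153/).
resp = requests.get("http://patches.dpdk.org/api/patches/115153/")
resp.raise_for_status()
patch = resp.json()

# A few of the fields present in the response.
print(patch["name"])    # patch subject line
print(patch["state"])   # e.g. "superseded"
print(patch["mbox"])    # URL of the raw mbox, suitable for `git am`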

GET /api/patches/115153/?format=api
HTTP 200 OK
Allow: GET, PUT, PATCH, HEAD, OPTIONS
Content-Type: application/json
Vary: Accept

{
    "id": 115153,
    "url": "http://patches.dpdk.org/api/patches/115153/?format=api",
    "web_url": "http://patches.dpdk.org/project/dts/patch/20220816075917.3419381-1-weix.ling@intel.com/",
    "project": {
        "id": 3,
        "url": "http://patches.dpdk.org/api/projects/3/?format=api",
        "name": "DTS",
        "link_name": "dts",
        "list_id": "dts.dpdk.org",
        "list_email": "dts@dpdk.org",
        "web_url": "",
        "scm_url": "git://dpdk.org/tools/dts",
        "webscm_url": "http://git.dpdk.org/tools/dts/",
        "list_archive_url": "https://inbox.dpdk.org/dts",
        "list_archive_url_format": "https://inbox.dpdk.org/dts/{}",
        "commit_url_format": ""
    },
    "msgid": "<20220816075917.3419381-1-weix.ling@intel.com>",
    "list_archive_url": "https://inbox.dpdk.org/dts/20220816075917.3419381-1-weix.ling@intel.com",
    "date": "2022-08-16T07:59:17",
    "name": "[V4,1/2] test_plans/vswitch_sample_cbdma_test_plan: modify testplan to test virito dequeue",
    "commit_ref": null,
    "pull_url": null,
    "state": "superseded",
    "archived": false,
    "hash": "422884bbd405464971c43ea412362f56fed318d6",
    "submitter": {
        "id": 1828,
        "url": "http://patches.dpdk.org/api/people/1828/?format=api",
        "name": "Ling, WeiX",
        "email": "weix.ling@intel.com"
    },
    "delegate": null,
    "mbox": "http://patches.dpdk.org/project/dts/patch/20220816075917.3419381-1-weix.ling@intel.com/mbox/",
    "series": [
        {
            "id": 24318,
            "url": "http://patches.dpdk.org/api/series/24318/?format=api",
            "web_url": "http://patches.dpdk.org/project/dts/list/?series=24318",
            "date": "2022-08-16T07:59:04",
            "name": "modify vswitch_sample_cbdma to test virito dequeue",
            "version": 4,
            "mbox": "http://patches.dpdk.org/series/24318/mbox/"
        }
    ],
    "comments": "http://patches.dpdk.org/api/patches/115153/comments/",
    "check": "pending",
    "checks": "http://patches.dpdk.org/api/patches/115153/checks/",
    "tags": {},
    "related": [],
    "headers": {
        "Return-Path": "<dts-bounces@dpdk.org>",
        "X-Original-To": "patchwork@inbox.dpdk.org",
        "Delivered-To": "patchwork@inbox.dpdk.org",
        "Received": [
            "from mails.dpdk.org (mails.dpdk.org [217.70.189.124])\n\tby inbox.dpdk.org (Postfix) with ESMTP id 648FAA00C3;\n\tTue, 16 Aug 2022 10:03:28 +0200 (CEST)",
            "from [217.70.189.124] (localhost [127.0.0.1])\n\tby mails.dpdk.org (Postfix) with ESMTP id 8B8D94068E;\n\tTue, 16 Aug 2022 10:03:28 +0200 (CEST)",
            "from mga01.intel.com (mga01.intel.com [192.55.52.88])\n by mails.dpdk.org (Postfix) with ESMTP id 192314067C\n for <dts@dpdk.org>; Tue, 16 Aug 2022 10:03:26 +0200 (CEST)",
            "from orsmga006.jf.intel.com ([10.7.209.51])\n by fmsmga101.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;\n 16 Aug 2022 01:03:26 -0700",
            "from unknown (HELO localhost.localdomain) ([10.239.252.222])\n by orsmga006-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;\n 16 Aug 2022 01:03:24 -0700"
        ],
        "DKIM-Signature": "v=1; a=rsa-sha256; c=relaxed/simple;\n d=intel.com; i=@intel.com; q=dns/txt; s=Intel;\n t=1660637007; x=1692173007;\n h=from:to:cc:subject:date:message-id:mime-version:\n content-transfer-encoding;\n bh=Ez5dPA+DSFXD5mDpQcR40fLFlGz5CJ68fhKC9qR2UI0=;\n b=YM6Az2tfIsQ5BpS3jkbJxcebLFxqwRmlw8JWwy1QhVQngL3JllKavz36\n 7YwyE28Bkjf7Oybrz4AWanHkWPWoBtW3wUNqAzviB/LaGCUEKqkV4qowA\n nJ9AY7WwfscqzOtI2KbdtFNZXhUnmtdphzFJ8ksN4mtTgHwD9kwNMyyHG\n l0Jki7SlK71RUyjhEpxB8IMzOxGV2bwzLoWUacF42wNkWzljdPx9TQ93M\n zsDefyKBy+OOAMCon4pY+eH0KyCnSxqQ0WubDKRWeGt17JS1kjW9a9JUj\n ea7lVmk9aBiW9zwZ/I+sx0Gk/Htd3aTQR3BXSxLBz7PdF5pLX2nzvOnN+ g==;",
        "X-IronPort-AV": [
            "E=McAfee;i=\"6400,9594,10440\"; a=\"318147711\"",
            "E=Sophos;i=\"5.93,240,1654585200\";\n d=\"scan'208,223\";a=\"318147711\"",
            "E=Sophos;i=\"5.93,240,1654585200\";\n d=\"scan'208,223\";a=\"583206145\""
        ],
        "From": "Wei Ling <weix.ling@intel.com>",
        "To": "dts@dpdk.org",
        "Cc": "Wei Ling <weix.ling@intel.com>",
        "Subject": "[dts][PATCH V4 1/2] test_plans/vswitch_sample_cbdma_test_plan: modify\n testplan to test virito dequeue",
        "Date": "Tue, 16 Aug 2022 03:59:17 -0400",
        "Message-Id": "<20220816075917.3419381-1-weix.ling@intel.com>",
        "X-Mailer": "git-send-email 2.25.1",
        "MIME-Version": "1.0",
        "Content-Type": "text/plain; charset=UTF-8",
        "Content-Transfer-Encoding": "8bit",
        "X-BeenThere": "dts@dpdk.org",
        "X-Mailman-Version": "2.1.29",
        "Precedence": "list",
        "List-Id": "test suite reviews and discussions <dts.dpdk.org>",
        "List-Unsubscribe": "<https://mails.dpdk.org/options/dts>,\n <mailto:dts-request@dpdk.org?subject=unsubscribe>",
        "List-Archive": "<http://mails.dpdk.org/archives/dts/>",
        "List-Post": "<mailto:dts@dpdk.org>",
        "List-Help": "<mailto:dts-request@dpdk.org?subject=help>",
        "List-Subscribe": "<https://mails.dpdk.org/listinfo/dts>,\n <mailto:dts-request@dpdk.org?subject=subscribe>",
        "Errors-To": "dts-bounces@dpdk.org"
    },
    "content": "From DPDK-22.07, virtio support async dequeue for split and packed ring\npath, so modify vswitch_sample_cbdma testplan to test the split and \npacked ring async dequeue feature.\n\nSigned-off-by: Wei Ling <weix.ling@intel.com>\n---\n test_plans/vswitch_sample_cbdma_test_plan.rst | 459 ++++++++++++------\n 1 file changed, 320 insertions(+), 139 deletions(-)",
    "diff": "diff --git a/test_plans/vswitch_sample_cbdma_test_plan.rst b/test_plans/vswitch_sample_cbdma_test_plan.rst\nindex c207d842..a4cbf309 100644\n--- a/test_plans/vswitch_sample_cbdma_test_plan.rst\n+++ b/test_plans/vswitch_sample_cbdma_test_plan.rst\n@@ -8,108 +8,130 @@ Vswitch sample test with vhost async data path test plan\n Description\n ===========\n \n-Vswitch sample can leverage IOAT to accelerate vhost async data-path from dpdk 20.11. This plan test\n+Vswitch sample can leverage DMA to accelerate vhost async data-path from dpdk 20.11. This plan test\n vhost DMA operation callbacks for CBDMA PMD and vhost async data-path in vhost sample.\n-From 20.11 to 21.02, only split ring support cbdma copy with vhost enqueue direction;\n-from 21.05,packed ring also can support cbdma copy with vhost enqueue direction.\n+From 22.07, split and packed ring support cbdma copy with both vhost enqueue and deuque direction.\n+\n+--dmas This parameter is used to specify the assigned DMA device of a vhost device. Async vhost-user\n+net driver will be used if --dmas is set. For example –dmas [txd0@00:04.0,txd1@00:04.1,rxd0@00:04.2,rxd1@00:04.3]\n+means use DMA channel 00:04.0/00:04.2 for vhost device 0 enqueue/dequeue operation and use DMA channel\n+00:04.1/00:04.3 for vhost device 1 enqueue/dequeue operation. The index of the device corresponds to\n+the socket file in order, that means vhost device 0 is created through the first socket file,\n+vhost device 1 is created through the second socket file, and so on.\n+For more about vswitch example, please refer to the DPDK docment:http://doc.dpdk.org/guides/sample_app_ug/vhost.html\n \n Prerequisites\n =============\n \n+Hardware\n+--------\n+Supportted NICs: nic that supports VMDQ\n+\n+Software\n+--------\n+Trex:http://trex-tgn.cisco.com/trex/release/v2.26.tar.gz\n+\n+General set up\n+--------------\n+1. Compile DPDK and vhost example::\n+\n+   # CC=gcc meson --werror -Denable_kmods=True -Dlibdir=lib -Dexamples=all --default-library=static x86_64-native-linuxapp-gcc\n+   # meson configure -Dexamples=vhost x86_64-native-linuxapp-gcc\n+   # ninja -C x86_64-native-linuxapp-gcc -j 110\n \n-Test Case1: PVP performance check with CBDMA channel using vhost async driver\n-=============================================================================\n+Test Case 1: PVP performance check with CBDMA channel using vhost async driver\n+-----------------------------------------------------------------------------\n+This case tests the basic performance of split ring and packed ring with different packet size when using vhost async drvier.\n \n-1. Bind physical port to vfio-pci and CBDMA channel to vfio-pci.\n+1. Bind one physical port and 2 CBDMA devices to vfio-pci.\n \n 2. On host, launch dpdk-vhost by below command::\n \n-\t./x86_64-native-linuxapp-gcc/examples/dpdk-vhost -l 31-32 -n 4 -- \\\n-\t-p 0x1 --mergeable 1 --vm2vm 1 --dma-type ioat --stats 1 --socket-file /tmp/vhost-net --dmas [txd0@0000:00:04.0] --client --total-num-mbufs 600000\n+\t./x86_64-native-linuxapp-gcc/examples/dpdk-vhost -l 31-32 -n 4 -a 0000:af:00.0 -a 0000:00:04.0 -a 0000:00:04.1 -- \\\n+\t-p 0x1 --mergeable 1 --vm2vm 1  --stats 1 --socket-file /tmp/vhost-net --dmas [txd0@0000:00:04.0,rxd0@0000:00:04.1] --client --total-num-mbufs 600000\n \n 3. 
Launch virtio-user with packed ring::\n \n \t./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 29-30 -n 4 --no-pci --file-prefix=testpmd0 --force-max-simd-bitwidth=512 \\\n-\t--vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost-net,queues=1,mrg_rxbuf=0,in_order=1,vectorized=1,packed_vq=1,server=1 -- -i --rxq=1 --txq=1 --txd=1024 --rxd=1024 --nb-cores=1\n+\t--vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost-net,queues=1,mrg_rxbuf=0,in_order=1,vectorized=1,packed_vq=1,server=1 \\\n+\t-- -i --rxq=1 --txq=1 --txd=1024 --rxd=1024 --nb-cores=1\n \n 4. Start pkts from virtio-user side to let vswitch know the mac addr::\n \n \ttestpmd>set fwd mac\n \ttestpmd>start tx_first\n \n-5. Inject pkts (packets length=64...1518) separately with dest_mac=virtio_mac_address (specific in above cmd with 00:11:22:33:44:10) to NIC using packet generator, record pvp (PG>nic>vswitch>virtio-user>vswitch>nic>PG) performance number can get expected.\n+5. Inject packets with different packet size[64, 128, 256, 512, 1024, 1280, 1518] and dest_mac=virtio_mac_address (specific in above cmd with 00:11:22:33:44:10) to NIC using packet generator, record pvp (PG>nic>vswitch>virtio-user>vswitch>nic>PG) performance number can get expected.\n \n-6. Quit and re-launch virtio-user with packed ring size not power of 2::\n-\n-\t./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 29-30 -n 4 --no-pci --file-prefix=testpmd0 --force-max-simd-bitwidth=512 \\\n-\t--vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost-net,queues=1,mrg_rxbuf=0,in_order=1,vectorized=1,packed_vq=1,server=1,queue_size=1025 -- -i --rxq=1 --txq=1 --txd=1025 --rxd=1025 --nb-cores=1\n-\n-7. Re-test step 4-5, record performance of different packet length.\n-\n-8. Quit and re-launch virtio-user with split ring::\n+6. Quit and re-launch virtio-user with split ring::\n \n \t./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 29-30 -n 4 --no-pci --file-prefix=testpmd0 \\\n \t--vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost-net,queues=1,mrg_rxbuf=0,in_order=1,vectorized=1,server=1 -- -i --rxq=1 --txq=1 --txd=1024 --rxd=1024 --nb-cores=1\n \n-9. Re-test step 4-5, record performance of different packet length.\n+7. Re-test step 4-5, record performance of different packet length.\n \n-Test Case2: PVP test with two VM and two CBDMA channels using vhost async driver\n-=================================================================================\n+Test Case 2: PVP test with two VMs using vhost async driver\n+----------------------------------------------------------\n+This case tests that the imix packets can forward normally with two VMs in PVP topology when using vhost async drvier, both split ring and packed ring have been covered.\n \n-1. Bind one physical ports to vfio-pci and two CBDMA channels to vfio-pci.\n+1. Bind one physical ports to vfio-pci and 4 CBDMA devices to vfio-pci.\n \n 2. 
On host, launch dpdk-vhost by below command::\n \n-\t./x86_64-native-linuxapp-gcc/examples/dpdk-vhost -l 26-28 -n 4 -- \\\n-\t-p 0x1 --mergeable 1 --vm2vm 1 --dma-type ioat --stats 1 --socket-file /tmp/vhost-net0 --socket-file /tmp/vhost-net1 --dmas [txd0@0000:00:01.0,txd1@0000:00:01.1] --client--total-num-mbufs 600000\n-\n+\t./x86_64-native-linuxapp-gcc/examples/dpdk-vhost -l 26-28 -n 4 -a 0000:af:00.0 -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -- \\\n+\t-p 0x1 --mergeable 1 --vm2vm 1  --stats 1 --socket-file /root/dpdk/vhost-net0 --socket-file /root/dpdk/vhost-net1 --dmas [txd0@0000:00:04.0,rxd0@0000:00:04.1,txd1@0000:00:04.2,rxd1@0000:00:04.3] --client--total-num-mbufs 600000\n+\t\n 3. launch two virtio-user ports::\n \n \t./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 29-30 -n 4 --no-pci --file-prefix=testpmd0 \\\n-\t--vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost-net0,queues=1,server=1,mrg_rxbuf=1,in_order=0,packed_vq=1 -- -i --rxq=1 --txq=1 --txd=1024 --rxd=1024 --nb-cores=1\n+\t--vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/root/dpdk/vhost-net0,queues=1,server=1,mrg_rxbuf=1,in_order=0,packed_vq=1 -- -i --rxq=1 --txq=1 --txd=1024 --rxd=1024 --nb-cores=1\n \t\n \t./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 31-32 -n 4 --no-pci --file-prefix=testpmd1 \\\n-\t--vdev=net_virtio_user0,mac=00:11:22:33:44:11,path=/tmp/vhost-net1,queues=1,server=1,mrg_rxbuf=1,in_order=1,vectorized=1 -- -i --rxq=1 --txq=1 --txd=1024 --rxd=1024 --nb-cores=1\n+\t--vdev=net_virtio_user0,mac=00:11:22:33:44:11,path=/root/dpdk/vhost-net1,queues=1,server=1,mrg_rxbuf=1,in_order=1,vectorized=1 -- -i --rxq=1 --txq=1 --txd=1024 --rxd=1024 --nb-cores=1\n \n-4. Start pkts from two virtio-user side individually to let vswitch know the mac addr::\n+4. Start packets from two virtio-user side individually to let vswitch know the mac addr::\n \n \ttestpmd0>set fwd mac\n+\ttestpmd0>start tx_first\n \ttestpmd1>set fwd mac\n \ttestpmd1>start tx_first\n-\ttestpmd1>start tx_first\n \n 5. Inject IMIX packets (64b...1518b) with dest_mac=virtio_mac_address (00:11:22:33:44:10 and 00:11:22:33:44:11) to NIC using packet generator,record performance number can get expected from Packet generator rx side.\n \n-6. Stop dpdk-vhost side and relaunch it with same cmd as step2.\n+6. Quit and relaunch dpdk-vhost by below command::\n+\n+\t./x86_64-native-linuxapp-gcc/examples/dpdk-vhost -l 26-28 -n 4 -a 0000:af:00.0 -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3  -- \\\n+\t-p 0x1 --mergeable 1 --vm2vm 1  --stats 1 --socket-file /root/dpdk/vhost-net0 --socket-file /root/dpdk/vhost-net1 --dmas [txd0@0000:00:01.0,rxd1@0000:00:01.1] --client--total-num-mbufs 600000\n \n-7. Start pkts from two virtio-user side individually to let vswitch know the mac addr::\n+7. Start packets from two virtio-user side individually to let vswitch know the mac addr::\n \n-    testpmd0>stop\n-    testpmd0>start tx_first\n-    testpmd1>stop\n-    testpmd1>start tx_first\n+\ttestpmd0>stop\n+\ttestpmd0>start tx_first\n+\ttestpmd1>stop\n+\ttestpmd1>start tx_first\n \n-8. 
Inject IMIX packets (64b...1518b) with dest_mac=virtio_mac_address (00:11:22:33:44:10 and 00:11:22:33:44:11) to NIC using packet generator, ensure get same throughput as step5.\n+8.Inject IMIX packets (64b...1518b) to NIC using packet generator, ensure get same throughput as step5.\n \n-Test Case3: VM2VM forwarding test with two CBDMA channels\n-=========================================================\n+Test Case 3: VM2VM virtio-user forwarding test using vhost async driver\n+----------------------------------------------------------------------\n+This case tests that the imix packets can forward normally in VM2VM topology(virtio-user as front-end) when using vhost async drvier, both split ring and packed ring have been covered.\n \n-1.Bind one physical ports to vfio-pci and two CBDMA channels to vfio-pci.\n+1.Bind one physical port and 4 CBDMA devices to vfio-pci.\n \n 2. On host, launch dpdk-vhost by below command::\n \n-\t./x86_64-native-linuxapp-gcc/examples/dpdk-vhost -l 26-28 -n 4 -- -p 0x1 --mergeable 1 --vm2vm 1 --dma-type ioat \\\n-\t--socket-file /tmp/vhost-net0 --socket-file /tmp/vhost-net1 --dmas [txd0@0000:00:04.0,txd1@0000:00:04.1]  --client --total-num-mbufs 600000\n+\t./x86_64-native-linuxapp-gcc/examples/dpdk-vhost -l 26-28 -n 4 -a 0000:af:00.0 -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -- -p 0x1 --mergeable 1 --vm2vm 1  \\\n+\t--socket-file /root/dpdk/vhost-net0 --socket-file /root/dpdk/vhost-net1 --dmas [txd0@0000:00:04.0,rxd0@0000:00:04.1,txd1@0000:00:04.2,rxd1@0000:00:04.3]  --client --total-num-mbufs 600000\n \n 3. Launch virtio-user::\n \n \t./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 29-30 -n 4 --no-pci --file-prefix=testpmd0 \\\n-\t--vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost-net0,queues=1,server=1,mrg_rxbuf=1,in_order=0,packed_vq=1 -- -i --rxq=1 --txq=1 --txd=1024 --rxd=1024 --nb-cores=1\n+\t--vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/root/dpdk/vhost-net0,queues=1,server=1,mrg_rxbuf=1,in_order=0,packed_vq=1 -- -i --rxq=1 --txq=1 --txd=1024 --rxd=1024 --nb-cores=1\n \n \t./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 31-32 -n 4 --no-pci --file-prefix=testpmd1 \\\n-\t--vdev=net_virtio_user0,mac=00:11:22:33:44:11,path=/tmp/vhost-net1,queues=1,server=1,mrg_rxbuf=1,in_order=1,vectorized=1 -- -i --rxq=1 --txq=1 --txd=1024 --rxd=1024 --nb-cores=1\n+\t--vdev=net_virtio_user0,mac=00:11:22:33:44:11,path=/root/dpdk/vhost-net1,queues=1,server=1,mrg_rxbuf=1,in_order=1,vectorized=1 -- -i --rxq=1 --txq=1 --txd=1024 --rxd=1024 --nb-cores=1\n \n-4. Loop pkts between two virtio-user sides, record performance number with 64b/2000b/8000b/IMIX pkts can get expected::\n+4. Loop packets between two virtio-user sides, record performance number with 64b/2000b/8000b/IMIX pkts can get expected::\n \n \ttestpmd0>set fwd mac\n \ttestpmd0>start tx_first\n@@ -134,45 +156,49 @@ Test Case3: VM2VM forwarding test with two CBDMA channels\n \ttestpmd1>start tx_first\n \ttestpmd1>show port stats all\n \n-5. Stop dpdk-vhost side and relaunch it with same cmd as step2.\n+5. Stop dpdk-vhost side and relaunch it with below command::\n+\n+\t./x86_64-native-linuxapp-gcc/examples/dpdk-vhost -l 26-28 -n 4 -a 0000:af:00.0 -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -- -p 0x1 --mergeable 1 --vm2vm 1  \\\n+\t--socket-file /root/dpdk/vhost-net0 --socket-file /root/dpdk/vhost-net1 --dmas [txd0@0000:00:04.0,rxd1@0000:00:04.1]  --client --total-num-mbufs 600000\n \n 6. 
Rerun step 4.\n \n-Test Case4: VM2VM test with cbdma channels register/unregister stable check\n-============================================================================\n+Test Case 4: VM2VM virtio-pmd split ring test with cbdma channels register/unregister stable check\n+--------------------------------------------------------------------------------------------------\n+This case checks that the split ring with CBDMA channel can work stably when the virtio-pmd port is registed and unregisted for many times.\n \n-1. Bind one physical ports to vfio-pci and two CBDMA channels to vfio-pci.\n+1. Bind one physical port and 4 CBDMA devices to vfio-pci.\n \n 2. On host, launch dpdk-vhost by below command::\n \n-    ./x86_64-native-linuxapp-gcc/examples/dpdk-vhost -l 26-28 -n 4 -- -p 0x1 --mergeable 1 --vm2vm 1 --dma-type ioat \\\n-    --socket-file /tmp/vhost-net0 --socket-file /tmp/vhost-net1 --dmas [txd0@0000:00:04.0,txd1@0000:00:04.1] --client --total-num-mbufs 600000\n+\t./x86_64-native-linuxapp-gcc/examples/dpdk-vhost -l 26-28 -n 4 -a 0000:af:00.0 -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -- -p 0x1 --mergeable 1 --vm2vm 1  \\\n+\t--socket-file /root/dpdk/vhost-net0 --socket-file /root/dpdk/vhost-net1 --dmas [txd0@0000:00:04.0,rxd0@0000:00:04.1,txd1@0000:00:01.2,rxd1@0000:00:01.3] --client --total-num-mbufs 600000\n \n-3. Start VM0 with qemu-5.2.0::\n+3. Start VM0 with qemu::\n \n- \tqemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 4 -m 4096 \\\n-        -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \\\n-        -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04.img  \\\n-        -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \\\n-        -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \\\n-        -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \\\n-        -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \\\n-        -chardev socket,id=char0,path=/tmp/vhost-net0,server \\\n-        -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \\\n-        -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=true,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on -vnc :10\n+\tqemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 4 -m 4096 \\\n+\t-object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \\\n+\t-numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04.img  \\\n+\t-chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \\\n+\t-device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \\\n+\t-monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \\\n+\t-netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \\\n+\t-chardev socket,id=char0,path=/root/dpdk/vhost-net0,server \\\n+\t-netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \\\n+\t-device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=true,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on -vnc :10\n \n-4. Start VM1 with qemu-5.2.0::\n+4. 
Start VM1 with qemu::\n \n \tqemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 4 -m 4096 \\\n-        -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \\\n-        -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04-2.img  \\\n-        -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \\\n-        -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \\\n-        -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \\\n-        -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \\\n-        -chardev socket,id=char0,path=/tmp/vhost-net1,server \\\n-        -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \\\n-        -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=true,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on -vnc :12\n+\t-object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \\\n+\t-numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04-2.img  \\\n+\t-chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \\\n+\t-device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \\\n+\t-monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \\\n+\t-netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \\\n+\t-chardev socket,id=char0,path=/root/dpdk/vhost-net1,server \\\n+\t-netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \\\n+\t-device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=true,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on -vnc :12\n \n 5. Bind virtio port to vfio-pci in both two VMs::\n \n@@ -216,122 +242,277 @@ Test Case4: VM2VM test with cbdma channels register/unregister stable check\n \t./usertools/dpdk-devbind.py --bind=virtio-pci 00:05.0\n \t./usertools/dpdk-devbind.py --bind=vfio-pci 00:05.0\n \n-9. Restart vhost, then rerun step 7,check vhost can stable work and get expected throughput.\n+9. Quit and relaunch dpdk-vhost with below command::\n+\n+\t./x86_64-native-linuxapp-gcc/examples/dpdk-vhost -l 26-28 -n 4 -a 0000:af:00.0 -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -- -p 0x1 --mergeable 1 --vm2vm 1  \\\n+\t--socket-file /root/dpdk/vhost-net0 --socket-file /root/dpdk/vhost-net1 --dmas [txd0@0000:00:04.0,rxd1@0000:00:01.3] --client --total-num-mbufs 600000\n+\n+10. Rerun step 6-7,check vhost can stable work and get expected throughput.\n \n-Test Case5: VM2VM split ring test with iperf and reconnect stable check\n-=======================================================================\n+Test Case 5: VM2VM virtio-pmd packed ring test with cbdma channels register/unregister stable check\n+---------------------------------------------------------------------------------------------------\n+This case checks that the packed ring with CBDMA channel can work stably when the virtio-pmd port is registed and unregisted for many times.\n \n-1. Bind one physical ports to vfio-pci and two CBDMA channels to vfio-pci.\n+1. Bind one physical port and 4 CBDMA devices to vfio-pci.\n \n 2. 
On host, launch dpdk-vhost by below command::\n \n-\t./x86_64-native-linuxapp-gcc/examples/dpdk-vhost -l 26-28 -n 4 -- -p 0x1 --mergeable 1 --vm2vm 1 --dma-type ioat \\\n-\t--socket-file /tmp/vhost-net0 --socket-file /tmp/vhost-net1 --dmas [txd0@0000:00:04.0,txd1@0000:00:04.1] --client --total-num-mbufs 600000\n+\t./x86_64-native-linuxapp-gcc/examples/dpdk-vhost -l 26-28 -n 4 -a 0000:af:00.0 -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -- -p 0x1 --mergeable 1 --vm2vm 1  \\\n+\t--socket-file /root/dpdk/vhost-net0 --socket-file /root/dpdk/vhost-net1 --dmas [txd0@0000:00:04.0,rxd0@0000:00:04.1,txd1@0000:00:04.2,rxd1@0000:00:04.3] --client --total-num-mbufs 600000\n \n-3. Start VM0 with qemu-5.2.0::\n+3. Start VM0 with qemu::\n \n- \tqemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 4 -m 4096 \\\n-        -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \\\n-        -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04.img  \\\n-        -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \\\n-        -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \\\n-        -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \\\n-        -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \\\n-        -chardev socket,id=char0,path=/tmp/vhost-net0,server \\\n-        -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \\\n-        -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=true,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on -vnc :10\n+\tqemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 4 -m 4096 \\\n+\t-object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \\\n+\t-numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04.img  \\\n+\t-chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \\\n+\t-device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \\\n+\t-monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \\\n+\t-netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \\\n+\t-chardev socket,id=char0,path=/root/dpdk/vhost-net0,server \\\n+\t-netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \\\n+\t-device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=true,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on -vnc :10\n \n-4. Start VM1 with qemu-5.2.0::\n+4. 
Start VM1 with qemu::\n \n \tqemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 4 -m 4096 \\\n-        -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \\\n-        -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04-2.img  \\\n-        -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \\\n-        -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \\\n-        -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \\\n-        -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \\\n-        -chardev socket,id=char0,path=/tmp/vhost-net1,server \\\n-        -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \\\n-        -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=true,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on -vnc :12\n+\t-object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \\\n+\t-numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04-2.img  \\\n+\t-chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \\\n+\t-device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \\\n+\t-monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \\\n+\t-netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \\\n+\t-chardev socket,id=char0,path=/root/dpdk/vhost-net1,server \\\n+\t-netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \\\n+\t-device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=true,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on -vnc :12\n+\n+5. Bind virtio port to vfio-pci in both two VMs::\n+\n+\tmodprobe vfio enable_unsafe_noiommu_mode=1\n+\tmodprobe vfio-pci\n+\techo 1 > /sys/module/vfio/parameters/enable_unsafe_noiommu_mode\n+\t./usertools/dpdk-devbind.py --bind=vfio-pci 00:05.0\n+\n+6. Start testpmd in VMs seperately::\n+\n+\t./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 -- -i --rxq=1 --txq=1 --nb-cores=1 --txd=1024 --rxd=1024\n+\n+7. Loop packets between two virtio-user sides, record performance number with 64b/2000b/8000b/IMIX pkts can get expected::\n+\n+\ttestpmd0>set fwd mac\n+\ttestpmd0>start tx_first\n+\ttestpmd0>stop\n+\ttestpmd0>set eth-peer 0 52:54:00:00:00:02\n+\ttestpmd0>start\n+\ttestpmd1>set fwd mac\n+\ttestpmd1>set eth-peer 0 52:54:00:00:00:01\n+\ttestpmd1>set txpkts 64\n+\ttestpmd1>start tx_first\n+\ttestpmd1>show port stats all\n+\ttestpmd1>stop\n+\ttestpmd1>set txpkts 2000\n+\ttestpmd1>start tx_first\n+\ttestpmd1>show port stats all\n+\ttestpmd1>stop\n+\ttestpmd1>set txpkts 2000,2000,2000,2000\n+\ttestpmd1>start tx_first\n+\ttestpmd1>show port stats all\n+\ttestpmd1>stop\n+\ttestpmd1>set txpkts 64,256,2000,64,256,2000\n+\ttestpmd1>start tx_first\n+\ttestpmd1>show port stats all\n+\n+8. Quit two testpmd in two VMs, bind virtio-pmd port to virtio-pci,then bind port back to vfio-pci, rerun below cmd 50 times::\n+\n+\t./usertools/dpdk-devbind.py -u 00:05.0\n+\t./usertools/dpdk-devbind.py --bind=virtio-pci 00:05.0\n+\t./usertools/dpdk-devbind.py --bind=vfio-pci 00:05.0\n+\n+9. 
Rerun step 6-7,check vhost can stable work and get expected throughput.\n+\n+Test Case 6: VM2VM virtio-net split ring test with 4 cbdma channels and iperf stable check\n+------------------------------------------------------------------------------------------\n+This case tests with split ring with cbdma channels in two VMs, check that iperf/scp and reconnection can work stably between two virito-net.\n+\n+1. Bind one physical port and 4 CBDMA devices to vfio-pci.\n+\n+2. On host, launch dpdk-vhost by below command::\n+\n+\t./x86_64-native-linuxapp-gcc/examples/dpdk-vhost -l 2-3 -n 4 -a 0000:18:00.0 -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 \\\n+\t-- -p 0x1 --mergeable 1 --vm2vm 1 --socket-file /root/dpdk/vhost-net0 --socket-file /root/dpdk/vhost-net1 --dmas [txd0@0000:00:04.0,rxd0@0000:00:04.1,txd1@0000:00:04.2,rxd1@0000:00:04.3] --client\n+\n+3. Start VM1 with qemu::\n+\n+\ttaskset -c 5,6 /usr/local/qemu-6.2.0/bin/qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 4 -m 4096 \\\n+\t-object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \\\n+\t-numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04.img  \\\n+\t-chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \\\n+\t-device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \\\n+\t-monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \\\n+\t-netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \\\n+\t-chardev socket,id=char0,path=/root/dpdk/vhost-net0,server \\\n+\t-netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \\\n+\t-device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=true,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on -vnc :10\n+\n+4. Start VM2 with qemu::\n+\n+\ttaskset -c 7,8 /usr/local/qemu-6.2.0/bin/qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 4 -m 4096 \\\n+\t-object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \\\n+\t-numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04-2.img  \\\n+\t-chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \\\n+\t-device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \\\n+\t-monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \\\n+\t-netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \\\n+\t-chardev socket,id=char0,path=/root/dpdk/vhost-net1,server \\\n+\t-netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \\\n+\t-device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=true,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on -vnc :12\n+\n+5. On VM1, set virtio device IP and run arp protocal::\n+\n+\tifconfig ens5 1.1.1.2\n+\tarp -s 1.1.1.8 52:54:00:00:00:02\n+\n+6. On VM2, set virtio device IP and run arp protocal::\n+\n+\tifconfig ens5 1.1.1.8\n+\tarp -s 1.1.1.2 52:54:00:00:00:01\n+\n+7. Check the iperf performance between two VMs by below commands::\n+\n+\tUnder VM1, run: `iperf -s -i 1`\n+\tUnder VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`\n+\n+8. Check iperf throughput can get x Gbits/sec.\n+\n+9. Scp 1MB file form VM0 to VM1, check packets can be forwarding success by scp::\n+\n+\tUnder VM1, run: `scp [xxx] root@1.1.1.8:/`   [xxx] is the file name\n+\n+10. Relaunch dpdk-vhost, then rerun step 7-9 five times.\n+\n+11. 
Relaunch dpdk-vhost by below command::\n+\n+\t./x86_64-native-linuxapp-gcc/examples/dpdk-vhost -l 2-3 -n 4 -a 0000:af:00.0 -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 \\\n+\t-- -p 0x1 --mergeable 1 --vm2vm 1 --socket-file /root/dpdk/vhost-net0 --socket-file /root/dpdk/vhost-net1 --dmas [txd0@0000:00:04.0,rxd1@0000:00:04.1] --client\n+\n+12. rerun step 7-9.\n+\n+Test Case 7: VM2VM virtio-net packed ring test with 4 cbdma channels and iperf stable check\n+-------------------------------------------------------------------------------------------\n+This case tests with packed ring with 4 cbdma channels in two VMs, check that iperf/scp and reconnection can work stably between two virito-net.\n+\n+1. Bind one physical ports to vfio-pci and 4 CBDMA devices to vfio-pci.\n+\n+2. On host, launch dpdk-vhost by below command::\n+\n+\t./x86_64-native-linuxapp-gcc/examples/dpdk-vhost -l 26-28 -n 4 -a 0000:af:00.0 -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -- -p 0x1 --mergeable 1 --vm2vm 1 \\\n+\t--socket-file /root/dpdk/vhost-net0 --socket-file /root/dpdk/vhost-net1 --dmas [txd0@0000:00:04.0,rxd0@0000:00:04.1,txd1@0000:00:04.2,rxd1@0000:00:04.3] --total-num-mbufs 600000\n+\n+3. Start VM1::\n+\n+\tqemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 4 -m 4096 \\\n+\t-object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \\\n+\t-numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04.img  \\\n+\t-chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \\\n+\t-device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \\\n+\t-monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \\\n+\t-netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \\\n+\t-chardev socket,id=char0,path=/root/dpdk/vhost-net0 \\\n+\t-netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \\\n+\t-device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=true,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on -vnc :10\n+\n+4. Start VM2::\n+\n+\tqemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 4 -m 4096 \\\n+\t-object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \\\n+\t-numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04-2.img  \\\n+\t-chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \\\n+\t-device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \\\n+\t-monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \\\n+\t-netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \\\n+\t-chardev socket,id=char0,path=/root/dpdk/vhost-net1 \\\n+\t-netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \\\n+\t-device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=true,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on -vnc :12\n \n 5. On VM1, set virtio device IP and run arp protocal::\n \n-    ifconfig ens5 1.1.1.2\n-    arp -s 1.1.1.8 52:54:00:00:00:02\n+\tifconfig ens5 1.1.1.2\n+\tarp -s 1.1.1.8 52:54:00:00:00:02\n \n 6. On VM2, set virtio device IP and run arp protocal::\n \n-    ifconfig ens5 1.1.1.8\n-    arp -s 1.1.1.2 52:54:00:00:00:01\n+\tifconfig ens5 1.1.1.8\n+\tarp -s 1.1.1.2 52:54:00:00:00:01\n \n 7. 
Check the iperf performance between two VMs by below commands::\n \n-    Under VM1, run: `iperf -s -i 1`\n-    Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`\n+\tUnder VM1, run: `iperf -s -i 1`\n+\tUnder VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`\n \n 8. Check iperf throughput can get x Gbits/sec.\n \n 9. Scp 1MB file form VM1 to VM2M, check packets can be forwarding success by scp::\n \n-     Under VM1, run: `scp [xxx] root@1.1.1.8:/`   [xxx] is the file name\n+\tUnder VM1, run: `scp [xxx] root@1.1.1.8:/`   [xxx] is the file name\n \n-10. Relaunch vhost-dpdk, then rerun step 7-9 five times.\n+10. Rerun step 7-9 five times.\n \n-Test Case6: VM2VM packed ring test with iperf and reconnect stable test\n-=======================================================================\n+Test Case 8: VM2VM virtio-net packed ring test with 2 cbdma channels and iperf stable check\n+-------------------------------------------------------------------------------------------\n+This case tests with packed ring with 2 cbdma channels in two VMs, check that iperf/scp and reconnection can work stably between two virito-net.\n \n-1. Bind one physical ports to vfio-pci and two CBDMA channels to vfio-pci.\n+1. Bind one physical ports to vfio-pci and 2 CBDMA devices to vfio-pci.\n \n 2. On host, launch dpdk-vhost by below command::\n \n-\t./x86_64-native-linuxapp-gcc/examples/dpdk-vhost -l 26-28 -n 4 -- -p 0x1 --mergeable 1 --vm2vm 1 --dma-type ioat \\\n-\t--socket-file /tmp/vhost-net0 --socket-file /tmp/vhost-net1 --dmas [txd0@0000:00:04.0,txd1@0000:00:04.1] --total-num-mbufs 600000\n+\t./x86_64-native-linuxapp-gcc/examples/dpdk-vhost -l 26-28 -n 4 -a 0000:af:00.0 -a 0000:00:04.0 -a 0000:00:04.1 -- -p 0x1 --mergeable 1 --vm2vm 1 \\\n+\t--socket-file /root/dpdk/vhost-net0 --socket-file /root/dpdk/vhost-net1 --dmas [txd0@0000:00:04.0,rxd1@0000:00:04.1] --total-num-mbufs 600000\n \n-3. Start VM0 with qemu-5.2.0::\n+3. 
Start VM1::\n \n- \tqemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 4 -m 4096 \\\n-        -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \\\n-        -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04.img  \\\n-        -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \\\n-        -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \\\n-        -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \\\n-        -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \\\n-        -chardev socket,id=char0,path=/tmp/vhost-net0 \\\n-        -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \\\n-        -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=true,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on -vnc :10\n+\tqemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 4 -m 4096 \\\n+\t-object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \\\n+\t-numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04.img  \\\n+\t-chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \\\n+\t-device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \\\n+\t-monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \\\n+\t-netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \\\n+\t-chardev socket,id=char0,path=/root/dpdk/vhost-net0 \\\n+\t-netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \\\n+\t-device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=true,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on -vnc :10\n \n-4. Start VM1 with qemu-5.2.0::\n+4. 
Start VM2::\n \n \tqemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 4 -m 4096 \\\n-        -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \\\n-        -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04-2.img  \\\n-        -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \\\n-        -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \\\n-        -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \\\n-        -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \\\n-        -chardev socket,id=char0,path=/tmp/vhost-net1 \\\n-        -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \\\n-        -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=true,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on -vnc :12\n+\t-object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \\\n+\t-numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04-2.img  \\\n+\t-chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \\\n+\t-device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \\\n+\t-monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \\\n+\t-netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \\\n+\t-chardev socket,id=char0,path=/root/dpdk/vhost-net1 \\\n+\t-netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \\\n+\t-device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=true,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on -vnc :12\n \n 5. On VM1, set virtio device IP and run arp protocal::\n \n-    ifconfig ens5 1.1.1.2\n-    arp -s 1.1.1.8 52:54:00:00:00:02\n+\tifconfig ens5 1.1.1.2\n+\tarp -s 1.1.1.8 52:54:00:00:00:02\n \n 6. On VM2, set virtio device IP and run arp protocal::\n \n-    ifconfig ens5 1.1.1.8\n-    arp -s 1.1.1.2 52:54:00:00:00:01\n+\tifconfig ens5 1.1.1.8\n+\tarp -s 1.1.1.2 52:54:00:00:00:01\n \n 7. Check the iperf performance between two VMs by below commands::\n \n-    Under VM1, run: `iperf -s -i 1`\n-    Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`\n+\tUnder VM1, run: `iperf -s -i 1`\n+\tUnder VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`\n \n 8. Check iperf throughput can get x Gbits/sec.\n \n 9. Scp 1MB file form VM1 to VM2M, check packets can be forwarding success by scp::\n \n-     Under VM1, run: `scp [xxx] root@1.1.1.8:/`   [xxx] is the file name\n+\tUnder VM1, run: `scp [xxx] root@1.1.1.8:/`   [xxx] is the file name\n \n 10. Rerun step 7-9 five times.\n",
    "prefixes": [
        "V4",
        "1/2"
    ]
}
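
Since the Allow header above lists PUT and PATCH, the resource can also be updated. Below is a hedged sketch of a partial update, assuming you hold a Patchwork API token with sufficient rights on the project; the token header follows the usual Patchwork token scheme, and the writable fields used here ("state" and "archived") are an assumption based on the fields in the response above:

import requests

API_TOKEN = "<your-api-token>"  # placeholder, not a real credential

# PATCH sends only the fields to change; PUT would perform a full update.
resp = requests.patch(
    "http://patches.dpdk.org/api/patches/115153/",
    headers={"Authorization": f"Token {API_TOKEN}"},
    json={"state": "accepted", "archived": True},
)
resp.raise_for_status()
print(resp.json()["state"])  # reflects the new state on success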