
[V4,1/2] test_plans/basic_4k_pages_cbdma_test_plan: modify the dmas parameter by DPDK changed

Message ID 20221128024218.2314773-1-weix.ling@intel.com (mailing list archive)
State Superseded
Series: modify the dmas parameter by DPDK changed

Commit Message

Wei Ling Nov. 28, 2022, 2:42 a.m. UTC
Since DPDK 22.11, the dmas parameter has changed, so modify the dmas
parameter in the test plan accordingly.

Signed-off-by: Wei Ling <weix.ling@intel.com>
---
 test_plans/basic_4k_pages_cbdma_test_plan.rst | 626 +++++++++---------
 1 file changed, 318 insertions(+), 308 deletions(-)

Patch

diff --git a/test_plans/basic_4k_pages_cbdma_test_plan.rst b/test_plans/basic_4k_pages_cbdma_test_plan.rst
index 009a200c..495eb73b 100644
--- a/test_plans/basic_4k_pages_cbdma_test_plan.rst
+++ b/test_plans/basic_4k_pages_cbdma_test_plan.rst
@@ -20,23 +20,35 @@  vhost-user/virtio-net mergeable path.
 3. Check the payload of large packets (larger than 1MB) is valid after forwarding packets with vm2vm split ring and packed ring
 vhost-user/virtio-net mergeable path.
 
-Note:
+.. note::
 
-1. When CBDMA channels are bound to vfio driver, VA mode is the default and recommended.
-For PA mode, page by page mapping may exceed IOMMU's max capability, better to use 1G guest hugepage.
-2. DPDK local patch that about vhost pmd is needed when testing Vhost asynchronous data path with testpmd. And case 4-5 have not yet been automated.
+   1. When CBDMA channels are bound to the vfio driver, VA mode is the default and recommended.
+   For PA mode, page-by-page mapping may exceed the IOMMU's max capability, so it is better to use 1G guest hugepages.
+   2. A DPDK local patch for the vhost PMD is needed when testing the vhost asynchronous data path with testpmd. This
+   patch enables the asynchronous data path in the vhost PMD. The asynchronous data path is enabled per tx/rx queue,
+   and users need to specify the DMA device used by each tx/rx queue. Each tx/rx queue only supports one DMA device
+   (limited by the vhost PMD implementation), but one DMA device can be shared among multiple tx/rx queues of
+   different vhost PMD ports.
 
-For more about dpdk-testpmd sample, please refer to the DPDK docments:
-https://doc.dpdk.org/guides/testpmd_app_ug/run_app.html
-For virtio-user vdev parameter, you can refer to the DPDK docments:
-https://doc.dpdk.org/guides/nics/virtio.html#virtio-paths-selection-and-usage.
+   Two PMD parameters are added:
+   - dmas: specify the DMA device used by a tx/rx queue (default: no queue enables the asynchronous data path).
+   - dma-ring-size: DMA ring size (default: 4096).
+
+   Here is an example:
+   --vdev 'eth_vhost0,iface=./s0,dmas=[txq0@0000:00:01.0;rxq0@0000:00:01.1],dma-ring-size=2048'
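+
+   One DMA device can also be shared by queues of different vhost ports, for example (a sketch
+   following the syntax above, with hypothetical socket paths):
+   --vdev 'eth_vhost0,iface=./s0,dmas=[txq0@0000:00:01.0]' --vdev 'eth_vhost1,iface=./s1,dmas=[txq0@0000:00:01.0]'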
+
+   For more about the dpdk-testpmd sample, please refer to the DPDK documents:
+   https://doc.dpdk.org/guides/testpmd_app_ug/run_app.html
+   For virtio-user vdev parameters, you can refer to the DPDK documents:
+   https://doc.dpdk.org/guides/nics/virtio.html#virtio-paths-selection-and-usage.
 
 Prerequisites
 =============
 
 Software
 --------
-Trex:http://trex-tgn.cisco.com/trex/release/v2.26.tar.gz
+   Trex: http://trex-tgn.cisco.com/trex/release/v2.26.tar.gz
+   iperf
+   qemu: https://download.qemu.org/qemu-7.1.0.tar.xz
 
 General set up
 --------------
@@ -46,9 +58,9 @@  General set up
 
 	# CC=gcc meson --werror -Denable_kmods=True -Dlibdir=lib -Dexamples=all --default-library=static <dpdk build dir>
 	# ninja -C <dpdk build dir> -j 110
-	For example:
-	CC=gcc meson --werror -Denable_kmods=True -Dlibdir=lib -Dexamples=all --default-library=static x86_64-native-linuxapp-gcc
-	ninja -C x86_64-native-linuxapp-gcc -j 110
+    For example:
+    CC=gcc meson --werror -Denable_kmods=True -Dlibdir=lib -Dexamples=all --default-library=static x86_64-native-linuxapp-gcc
+    ninja -C x86_64-native-linuxapp-gcc -j 110
 
 3. Get the PCI device ID and DMA device ID of the DUT, for example, 0000:18:00.0 is the PCI device ID, and 0000:00:04.0, 0000:00:04.1 are DMA device IDs::
 
@@ -65,10 +77,10 @@  General set up
 
 4. Prepare tmpfs with 4K-pages::
 
-	mkdir /mnt/tmpfs_nohuge0
-	mkdir /mnt/tmpfs_nohuge1
-	mount tmpfs /mnt/tmpfs_nohuge0 -t tmpfs -o size=4G
-	mount tmpfs /mnt/tmpfs_nohuge1 -t tmpfs -o size=4G
+    mkdir /mnt/tmpfs_nohuge0
+    mkdir /mnt/tmpfs_nohuge1
+    mount tmpfs /mnt/tmpfs_nohuge0 -t tmpfs -o size=4G
+    mount tmpfs /mnt/tmpfs_nohuge1 -t tmpfs -o size=4G
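+    # Optional sanity check: confirm both 4K-pages tmpfs mounts are present
+    mount | grep tmpfs_nohuge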
 
 Test case
 =========
@@ -85,234 +97,239 @@  Common steps
 
 Test Case 1: Basic test vhost-user/virtio-user split ring vhost async operation using 4K-pages and cbdma enable
 ---------------------------------------------------------------------------------------------------------------
-This case tests basic functions of split ring virtio path when uses the asynchronous operations with CBDMA channels in 4K-pages memory environment and PVP vhost-user/virtio-user topology.
+This case tests basic functions of the split ring virtio path when vhost uses asynchronous operations with CBDMA channels
+in a 4K-pages memory environment and PVP vhost-user/virtio-user topology.
 
-1. Bind one port to vfio-pci, launch vhost::
+1. Bind 1 CBDMA port and 1 NIC port to vfio-pci, launch vhost::
 
-	./usertools/dpdk-devbind.py --bind=vfio-pci xx:xx.x
-	./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 3-4 -n 4 -m 1024 --no-huge --file-prefix=vhost -a 0000:18:00.0 -a 0000:00:04.0 \
-	--vdev 'net_vhost0,iface=/tmp/vhost-net,queues=1,dmas=[txq0;rxq0]' -- -i --no-numa --socket-num=0 --lcore-dma=[lcore4@0000:00:04.0]
-	testpmd>start
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 3-4 -n 4 -m 1024 --no-huge --file-prefix=vhost \
+    -a 0000:18:00.0 -a 0000:00:04.0 \
+    --vdev 'net_vhost0,iface=/tmp/vhost-net,queues=1,dmas=[txq0@0000:00:04.0;rxq0@0000:00:04.0]' \
+    -- -i --no-numa --socket-num=0
+    testpmd>start
 
 2. Launch virtio-user with 4K-pages::
 
-	./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 5-6 -n 4 --no-huge -m 1024 --file-prefix=virtio-user \
-	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/vhost-net,queues=1 -- -i
-	testpmd>start
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 5-6 -n 4 --no-huge -m 1024 --file-prefix=virtio-user \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/vhost-net,queues=1 \
+    -- -i
+    testpmd>start
 
 3. Send packets with the packet generator with different packet sizes, including [64, 128, 256, 512, 1024, 1518], and check the throughput with the below command::
 
-	testpmd>show port stats all
+    testpmd>show port stats all
 
 Test Case 2: Basic test vhost-user/virtio-user packed ring vhost async operation using 4K-pages and cbdma enable
 ----------------------------------------------------------------------------------------------------------------
-This case tests basic functions of packed ring virtio path when uses the asynchronous operations with CBDMA channels in 4K-pages memory environment and PVP vhost-user/virtio-user topology.
+This case tests basic functions of the packed ring virtio path when vhost uses asynchronous operations with CBDMA channels
+in a 4K-pages memory environment and PVP vhost-user/virtio-user topology.
 
-1. Bind one port to vfio-pci, launch vhost::
+1. Bind 1 CBDMA port and 1 NIC port to vfio-pci, launch vhost::
 
-	modprobe vfio-pci
-	./usertools/dpdk-devbind.py --bind=vfio-pci xx:xx.x
-	./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 3-4 -n 4 -m 1024 --no-huge --file-prefix=vhost -a 0000:18:00.0 -a 0000:00:04.0 \
-	--vdev 'net_vhost0,iface=/tmp/vhost-net,queues=1,dmas=[txq0;rxq0]' -- -i --no-numa --socket-num=0 --lcore-dma=[lcore4@0000:00:04.0]
-	testpmd>start
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 3-4 -n 4 -m 1024 --no-huge --file-prefix=vhost \
+    -a 0000:18:00.0 -a 0000:00:04.0 \
+    --vdev 'net_vhost0,iface=/tmp/vhost-net,queues=1,dmas=[txq0@0000:00:04.0;rxq0@0000:00:04.0]' \
+    -- -i --no-numa --socket-num=0
+    testpmd>start
 
 2. Launch virtio-user with 4K-pages::
 
-	./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 5-6 -n 4 --no-huge -m 1024 --file-prefix=virtio-user \
-	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/vhost-net,packed_vq=1,queues=1 -- -i
-	testpmd>start
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 5-6 -n 4 --no-huge -m 1024 --file-prefix=virtio-user \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/vhost-net,packed_vq=1,queues=1 \
+    -- -i
+    testpmd>start
 
 3. Send packets with the packet generator with different packet sizes, including [64, 128, 256, 512, 1024, 1518], and check the throughput with the below command::
 
-	testpmd>show port stats all
+    testpmd>show port stats all
 
 Test Case 3: VM2VM vhost-user/virtio-net split ring vhost async operation test with tcp traffic using 4K-pages and cbdma enable
 -------------------------------------------------------------------------------------------------------------------------------
-This case test the function of Vhost TSO in the topology of vhost-user/virtio-net split ring mergeable path by verifing the TSO/cksum in the TCP/IP stack
-when vhost uses the asynchronous operations with CBDMA channels in 4K-pages memory environment.
+This case tests the function of vhost TSO in the vhost-user/virtio-net split ring mergeable path topology by verifying
+TSO/cksum in the TCP/IP stack when vhost uses asynchronous operations with CBDMA channels in a 4K-pages memory environment.
 
-1. Bind 2 CBDMA channels to vfio-pci, then launch vhost by below command::
+1. Bind 2 CBDMA ports to vfio-pci, then launch vhost with the below command::
 
-	rm -rf vhost-net*
-	./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-4 -n 4 --no-huge -m 1024 --file-prefix=vhost -a 0000:00:04.0 -a 0000:00:04.1 \
-	--vdev 'net_vhost0,iface=vhost-net0,queues=1,tso=1,dmas=[txq0;rxq0]' \
-	--vdev 'net_vhost1,iface=vhost-net1,queues=1,tso=1,dmas=[txq0;rxq0]' \
-	--iova=va -- -i --nb-cores=2 --txd=1024 --rxd=1024 --lcore-dma=[lcore3@0000:00:04.0,lcore4@0000:00:04.1]
-	testpmd>start
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-4 -n 4 --no-huge -m 1024 --file-prefix=vhost \
+    -a 0000:00:04.0 -a 0000:00:04.1 \
+    --vdev 'net_vhost0,iface=vhost-net0,queues=1,tso=1,dmas=[txq0@0000:00:04.0;rxq0@0000:00:04.0],dma-ring-size=2048' \
+    --vdev 'net_vhost1,iface=vhost-net1,queues=1,tso=1,dmas=[txq0@0000:00:04.1;rxq0@0000:00:04.1],dma-ring-size=2048' \
+    --iova=va -- -i --nb-cores=2 --txd=1024 --rxd=1024
+    testpmd>start
 
 2. Launch VM1 and VM2::
 
-	taskset -c 32 qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 1 -m 4096 \
-	-object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_nohuge0,share=on \
-	-numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04.img  \
-	-chardev socket,path=/tmp/vm1_qga0.sock,server,nowait,id=vm1_qga0 -device virtio-serial \
-	-device virtserialport,chardev=vm1_qga0,name=org.qemu.guest_agent.2 -daemonize \
-	-monitor unix:/tmp/vm1_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
-	-netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \
-	-chardev socket,id=char0,path=./vhost-net0 \
-	-netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
-	-device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on -vnc :10
-
-	taskset -c 33 qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 1 -m 4096 \
-	-object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_nohuge1,share=on \
-	-numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04-2.img  \
-	-chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
-	-device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
-	-monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
-	-netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \
-	-chardev socket,id=char0,path=./vhost-net1 \
-	-netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
-	-device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on -vnc :12
+    taskset -c 32 qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 1 -m 4096 \
+    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_nohuge0,share=on \
+    -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu22-04.img  \
+    -chardev socket,path=/tmp/vm1_qga0.sock,server,nowait,id=vm1_qga0 -device virtio-serial \
+    -device virtserialport,chardev=vm1_qga0,name=org.qemu.guest_agent.2 -daemonize \
+    -monitor unix:/tmp/vm1_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \
+    -chardev socket,id=char0,path=./vhost-net0 \
+    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on -vnc :10
+
+    taskset -c 33 qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 1 -m 4096 \
+    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_nohuge1,share=on \
+    -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu22-04-2.img  \
+    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
+    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
+    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \
+    -chardev socket,id=char0,path=./vhost-net1 \
+    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on -vnc :12
 
 3. On VM1, set virtio device IP and run arp protocol::
 
-	ifconfig ens5 1.1.1.2
-	arp -s 1.1.1.8 52:54:00:00:00:02
+    ifconfig ens5 1.1.1.2
+    arp -s 1.1.1.8 52:54:00:00:00:02
 
 4. On VM2, set virtio device IP and run arp protocol::
 
-	ifconfig ens5 1.1.1.8
-	arp -s 1.1.1.2 52:54:00:00:00:01
+    ifconfig ens5 1.1.1.8
+    arp -s 1.1.1.2 52:54:00:00:00:01
 
 5. Check the iperf performance between two VMs by below commands::
 
-	Under VM1, run: `iperf -s -i 1`
-	Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`
+    Under VM1, run: `iperf -s -i 1`
+    Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`
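+
+    (Optional) While iperf runs, TSO can be confirmed inside the guests, assuming the virtio interface is ens5:
+    Under VM1 or VM2, run: `ethtool -k ens5 | grep tcp-segmentation-offload` and expect it to report "on"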
 
 6. Check that the two VMs can receive and send big packets to each other::
 
-	testpmd>show port xstats all
-	Port 0 should have tx packets above 1522
-	Port 1 should have rx packets above 1522
+    testpmd>show port xstats all
+    Port 0 should have tx packets above 1522
+    Port 1 should have rx packets above 1522
 
 Test Case 4: VM2VM vhost-user/virtio-net packed ring vhost async operation test with tcp traffic using 4K-pages and cbdma enable
 --------------------------------------------------------------------------------------------------------------------------------
-This case test the function of Vhost TSO in the topology of vhost-user/virtio-net packed ring mergeable path by verifing the TSO/cksum in the TCP/IP stack
-when vhost uses the asynchronous operations with CBDMA channels in 4K-pages memory environment.
+This case tests the function of vhost TSO in the vhost-user/virtio-net packed ring mergeable path topology by verifying
+TSO/cksum in the TCP/IP stack when vhost uses asynchronous operations with CBDMA channels in a 4K-pages memory environment.
 
-1. Bind 2 CBDMA channels to vfio-pci, then launch vhost by below command::
+1. Bind 2 CBDMA ports to vfio-pci, then launch vhost with the below command::
 
-	rm -rf vhost-net*
-	./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-4 -n 4 --no-huge -m 1024 --file-prefix=vhost -a 0000:00:04.0 -a 0000:00:04.1 \
-	--vdev 'net_vhost0,iface=vhost-net0,queues=1,tso=1,dmas=[txq0;rxq0]' \
-	--vdev 'net_vhost1,iface=vhost-net1,queues=1,tso=1,dmas=[txq0;rxq0]' \
-	--iova=va -- -i --nb-cores=2 --txd=1024 --rxd=1024 --lcore-dma=[lcore3@0000:00:04.0,lcore4@0000:00:04.1]
-	testpmd>start
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-4 -n 4 --no-huge -m 1024 --file-prefix=vhost \
+    -a 0000:00:04.0 -a 0000:00:04.1 \
+    --vdev 'net_vhost0,iface=vhost-net0,queues=1,tso=1,dmas=[txq0@0000:00:04.0;rxq0@0000:00:04.0],dma-ring-size=2048' \
+    --vdev 'net_vhost1,iface=vhost-net1,queues=1,tso=1,dmas=[txq0@0000:00:04.1;rxq0@0000:00:04.1],dma-ring-size=2048' \
+    --iova=va -- -i --nb-cores=2 --txd=1024 --rxd=1024
+    testpmd>start
 
 2. Launch VM1 and VM2::
 
-	taskset -c 32 qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 1 -m 4096 \
-	-object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_nohuge0,share=on \
-	-numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04.img  \
-	-chardev socket,path=/tmp/vm1_qga0.sock,server,nowait,id=vm1_qga0 -device virtio-serial \
-	-device virtserialport,chardev=vm1_qga0,name=org.qemu.guest_agent.2 -daemonize \
-	-monitor unix:/tmp/vm1_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
-	-netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \
-	-chardev socket,id=char0,path=./vhost-net0 \
-	-netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
-	-device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on -vnc :10
-
-	taskset -c 33 qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 1 -m 4096 \
-	-object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_nohuge1,share=on \
-	-numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04-2.img  \
-	-chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
-	-device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
-	-monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
-	-netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \
-	-chardev socket,id=char0,path=./vhost-net1 \
-	-netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
-	-device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on -vnc :12
+    taskset -c 32 qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 1 -m 4096 \
+    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_nohuge0,share=on \
+    -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu22-04.img  \
+    -chardev socket,path=/tmp/vm1_qga0.sock,server,nowait,id=vm1_qga0 -device virtio-serial \
+    -device virtserialport,chardev=vm1_qga0,name=org.qemu.guest_agent.2 -daemonize \
+    -monitor unix:/tmp/vm1_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \
+    -chardev socket,id=char0,path=./vhost-net0 \
+    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on -vnc :10
+
+    taskset -c 33 qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 1 -m 4096 \
+    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_nohuge1,share=on \
+    -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu22-04-2.img  \
+    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
+    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
+    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \
+    -chardev socket,id=char0,path=./vhost-net1 \
+    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on -vnc :12
 
 3. On VM1, set virtio device IP and run arp protocol::
 
-	ifconfig ens5 1.1.1.2
-	arp -s 1.1.1.8 52:54:00:00:00:02
+    ifconfig ens5 1.1.1.2
+    arp -s 1.1.1.8 52:54:00:00:00:02
 
 4. On VM2, set virtio device IP and run arp protocol::
 
-	ifconfig ens5 1.1.1.8
-	arp -s 1.1.1.2 52:54:00:00:00:01
+    ifconfig ens5 1.1.1.8
+    arp -s 1.1.1.2 52:54:00:00:00:01
 
 5. Check the iperf performance between two VMs by below commands::
 
-	Under VM1, run: `iperf -s -i 1`
-	Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`
+    Under VM1, run: `iperf -s -i 1`
+    Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`
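+
+    (Optional) While iperf runs, TSO can be confirmed inside the guests, assuming the virtio interface is ens5:
+    Under VM1 or VM2, run: `ethtool -k ens5 | grep tcp-segmentation-offload` and expect it to report "on"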
 
 6. Check that the two VMs can receive and send big packets to each other::
 
-	testpmd>show port xstats all
-	Port 0 should have tx packets above 1522
-	Port 1 should have rx packets above 1522
+    testpmd>show port xstats all
+    Port 0 should have tx packets above 1522
+    Port 1 should have rx packets above 1522
 
 Test Case 5: vm2vm vhost/virtio-net split ring multi queues using 4K-pages and cbdma enable
 -------------------------------------------------------------------------------------------
-This case uses iperf and scp to test the payload of large packet (larger than 1MB) is valid after packets forwarding in vm2vm vhost-user/virtio-net
-split ring mergeable path when vhost uses the asynchronous operations with CBDMA channels in 4K-pages memory environment. The dynamic change of multi-queues number is also tested.
+This case uses iperf and scp to test that the payload of large packets (larger than 1MB) is valid
+after packet forwarding in the vm2vm vhost-user/virtio-net split ring mergeable path when vhost
+uses asynchronous operations with CBDMA channels in a 4K-pages memory environment.
+Dynamically changing the number of multi-queues is also tested.
 
-1. Bind one port to vfio-pci, launch vhost::
+1. Bind 4 CBDMA ports to vfio-pci, launch vhost::
 
-	./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --no-huge -m 1024 --file-prefix=vhost -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \
-	-a 0000:80:04.0 -a 0000:80:04.1 -a 0000:80:04.2 -a 0000:80:04.3 -a 0000:80:04.4 -a 0000:80:04.5 -a 0000:80:04.6 -a 0000:80:04.7 \
-	--vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7]' \
-	--vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7]' \
-	--iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8 \
-	--lcore-dma=[lcore2@0000:00:04.0,lcore2@0000:00:04.1,lcore2@0000:00:04.2,lcore2@0000:00:04.3,lcore2@0000:00:04.4,lcore2@0000:00:04.5,lcore3@0000:00:04.6,lcore3@0000:00:04.7,lcore4@0000:80:04.0,lcore4@0000:80:04.1,lcore4@0000:80:04.2,lcore4@0000:80:04.3,lcore4@0000:80:04.4,lcore4@0000:80:04.5,lcore4@0000:80:04.6,lcore5@0000:80:04.7]
-	testpmd>start
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --no-huge -m 1024 --file-prefix=vhost \
+    -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 \
+    --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.0;txq2@0000:00:04.0;txq3@0000:00:04.0;txq4@0000:00:04.1;txq5@0000:00:04.1;txq6@0000:00:04.1;txq7@0000:00:04.1]' \
+    --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0@0000:00:04.2;txq1@0000:00:04.2;txq2@0000:00:04.2;txq3@0000:00:04.2;txq4@0000:00:04.3;txq5@0000:00:04.3;txq6@0000:00:04.3;txq7@0000:00:04.3]' \
+    --iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8
+    testpmd>start
 
 2. Launch VM qemu::
 
-	taskset -c 32 qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 8 -m 4096 \
-	-object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_nohuge0,share=on \
-	-numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04.img  \
-	-chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
-	-device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
-	-monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
-	-netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \
-	-chardev socket,id=char0,path=./vhost-net0,server \
-	-netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \
-	-device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :10
-
-	taskset -c 40 qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 8 -m 4096 \
-	-object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_nohuge1,share=on \
-	-numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04-2.img  \
-	-chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
-	-device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
-	-monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
-	-netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \
-	-chardev socket,id=char0,path=./vhost-net1,server \
-	-netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \
-	-device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :12
+    taskset -c 32 qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 8 -m 4096 \
+    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_nohuge0,share=on \
+    -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu22-04.img  \
+    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
+    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
+    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \
+    -chardev socket,id=char0,path=./vhost-net0,server \
+    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :10
+
+    taskset -c 40 qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 8 -m 4096 \
+    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_nohuge1,share=on \
+    -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu22-04-2.img  \
+    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
+    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
+    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \
+    -chardev socket,id=char0,path=./vhost-net1,server \
+    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :12
 
 3. On VM1, set virtio device IP and run arp protocol::
 
-	ethtool -L ens5 combined 8
-	ifconfig ens5 1.1.1.2
-	arp -s 1.1.1.8 52:54:00:00:00:02
+    ethtool -L ens5 combined 8
+    ifconfig ens5 1.1.1.2
+    arp -s 1.1.1.8 52:54:00:00:00:02
 
 4. On VM2, set virtio device IP and run arp protocol::
 
-	ethtool -L ens5 combined 8
-	ifconfig ens5 1.1.1.8
-	arp -s 1.1.1.2 52:54:00:00:00:01
+    ethtool -L ens5 combined 8
+    ifconfig ens5 1.1.1.8
+    arp -s 1.1.1.2 52:54:00:00:00:01
 
 5. Scp 1MB file from VM1 to VM2::
 
-	Under VM1, run: `scp [xxx] root@1.1.1.8:/`   [xxx] is the file name
+    Under VM1, run: `scp [xxx] root@1.1.1.8:/`   [xxx] is the file name
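+    (Optional) Verify the transferred file arrived intact by comparing checksums:
+    Under VM1, run: `md5sum [xxx]`
+    Under VM2, run: `md5sum /[xxx]` and check that both digests match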
 
 6. Check the iperf performance between two VMs by below commands::
 
-	Under VM1, run: `iperf -s -i 1`
-	Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`
+    Under VM1, run: `iperf -s -i 1`
+    Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`
 
 7. Quit and relaunch vhost with different CBDMA channels::
 
-	./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --no-huge -m 1024 --file-prefix=vhost -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \
-	-a 0000:80:04.0 -a 0000:80:04.1 -a 0000:80:04.2 -a 0000:80:04.3 -a 0000:80:04.4 -a 0000:80:04.5 -a 0000:80:04.6 -a 0000:80:04.7 \
-	--vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \
-	--vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \
-	--iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8 \
-	--lcore-dma=[lcore2@0000:00:04.0,lcore2@0000:00:04.1,lcore2@0000:00:04.2,lcore2@0000:00:04.3,lcore3@0000:00:04.0,lcore3@0000:00:04.2,lcore3@0000:00:04.4,lcore3@0000:00:04.5,lcore3@0000:00:04.6,lcore3@0000:00:04.7,lcore4@0000:00:04.1,lcore4@0000:00:04.3,lcore4@0000:80:04.0,lcore4@0000:80:04.1,lcore4@0000:80:04.2,lcore4@0000:80:04.3,lcore4@0000:80:04.4,lcore4@0000:80:04.5,lcore4@0000:80:04.6,lcore5@0000:80:04.7]
-	testpmd>start
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --no-huge -m 1024 --file-prefix=vhost \
+    -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 \
+    --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.0;txq2@0000:00:04.0;txq3@0000:00:04.0;txq4@0000:00:04.0;txq5@0000:00:04.0;rxq2@0000:00:04.1;rxq3@0000:00:04.1;rxq4@0000:00:04.1;rxq5@0000:00:04.1;rxq6@0000:00:04.1;rxq7@0000:00:04.1],dma-ring-size=1024' \
+    --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0@0000:00:04.2;txq1@0000:00:04.2;txq2@0000:00:04.2;txq3@0000:00:04.2;txq4@0000:00:04.2;txq5@0000:00:04.2;rxq2@0000:00:04.3;rxq3@0000:00:04.3;rxq4@0000:00:04.3;rxq5@0000:00:04.3;rxq6@0000:00:04.3;rxq7@0000:00:04.3],dma-ring-size=1024' \
+    --iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8
+    testpmd>start
 
 8. Rerun steps 5-6.
 
@@ -325,11 +342,11 @@  split ring mergeable path when vhost uses the asynchronous operations with CBDMA
 
 10. On VM1, set virtio device::
 
-	ethtool -L ens5 combined 4
+      ethtool -L ens5 combined 4
 
 11. On VM2, set virtio device::
 
-	ethtool -L ens5 combined 4
+      ethtool -L ens5 combined 4
 
 12. Scp 1MB file from VM1 to VM2::
 
@@ -342,227 +359,220 @@  split ring mergeable path when vhost uses the asynchronous operations with CBDMA
 
 14. Quit and relaunch vhost with 1 queue::
 
-	./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --no-huge -m 1024 --file-prefix=vhost \
-	--vdev 'net_vhost0,iface=vhost-net0,client=1,queues=4' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=4' \
-	-- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=1 --txq=1
-	testpmd>start
+     ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --no-huge -m 1024 --file-prefix=vhost \
+     --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=4' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=4' \
+     -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=1 --txq=1
+     testpmd>start
 
 15. On VM1, set virtio device::
 
-	ethtool -L ens5 combined 1
+      ethtool -L ens5 combined 1
 
 16. On VM2, set virtio device::
 
-	ethtool -L ens5 combined 1
+      ethtool -L ens5 combined 1
 
 17. Scp 1MB file from VM1 to VM2, check packets can be forwarded successfully by scp::
 
-	Under VM1, run: `scp [xxx] root@1.1.1.8:/`   [xxx] is the file name
+     Under VM1, run: `scp [xxx] root@1.1.1.8:/`   [xxx] is the file name
 
 18. Check the iperf performance, ensure queue0 can work from vhost side::
 
-	Under VM1, run: `iperf -s -i 1`
-	Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`
+     Under VM1, run: `iperf -s -i 1`
+     Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`
 
 Test Case 6: vm2vm vhost/virtio-net packed ring multi queues using 4K-pages and cbdma enable
 --------------------------------------------------------------------------------------------
-This case uses iperf and scp to test the payload of large packet (larger than 1MB) is valid after packets forwarding in vm2vm vhost-user/virtio-net
-packed ring mergeable path when vhost uses the asynchronous operations with CBDMA channels in 4K-pages memory environment.
+This case uses iperf and scp to test that the payload of large packets (larger than 1MB) is valid
+after packet forwarding in the vm2vm vhost-user/virtio-net packed ring mergeable path when vhost
+uses asynchronous operations with CBDMA channels in a 4K-pages memory environment.
 
-1. Bind one port to vfio-pci, launch vhost::
+1. Bind 2 CBDMA ports to vfio-pci, launch vhost::
 
-	./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --no-huge -m 1024 --file-prefix=vhost -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \
-	-a 0000:80:04.0 -a 0000:80:04.1 -a 0000:80:04.2 -a 0000:80:04.3 -a 0000:80:04.4 -a 0000:80:04.5 -a 0000:80:04.6 -a 0000:80:04.7 \
-	--vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7]' \
-	--vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7]' \
-	--iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8 \
-	--lcore-dma=[lcore2@0000:00:04.0,lcore2@0000:00:04.1,lcore2@0000:00:04.2,lcore2@0000:00:04.3,lcore2@0000:00:04.4,lcore2@0000:00:04.5,lcore3@0000:00:04.6,lcore3@0000:00:04.7,lcore4@0000:80:04.0,lcore4@0000:80:04.1,lcore4@0000:80:04.2,lcore4@0000:80:04.3,lcore4@0000:80:04.4,lcore4@0000:80:04.5,lcore4@0000:80:04.6,lcore5@0000:80:04.7]
-	testpmd>start
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --no-huge -m 1024 --file-prefix=vhost \
+    -a 0000:00:04.0 -a 0000:00:04.1 \
+    --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,tso=1,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.0;txq2@0000:00:04.0;txq3@0000:00:04.0;txq4@0000:00:04.1;txq5@0000:00:04.1;txq6@0000:00:04.1;txq7@0000:00:04.1]' \
+    --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,tso=1,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.0;txq2@0000:00:04.0;txq3@0000:00:04.0;txq4@0000:00:04.1;txq5@0000:00:04.1;txq6@0000:00:04.1;txq7@0000:00:04.1]' \
+    --iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8
+    testpmd>start
 
 2. Launch VM qemu::
 
-	taskset -c 32 qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 8 -m 4096 \
-	-object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_nohuge0,share=on \
-	-numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04.img  \
-	-chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
-	-device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
-	-monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
-	-netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \
-	-chardev socket,id=char0,path=./vhost-net0,server \
-	-netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \
-	-device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on -vnc :10
-
-	taskset -c 40 qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 8 -m 4096 \
-	-object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_nohuge1,share=on \
-	-numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04-2.img  \
-	-chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
-	-device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
-	-monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
-	-netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \
-	-chardev socket,id=char0,path=./vhost-net1,server \
-	-netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \
-	-device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on -vnc :12
+    taskset -c 32 qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 8 -m 4096 \
+    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_nohuge0,share=on \
+    -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu22-04.img  \
+    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
+    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
+    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \
+    -chardev socket,id=char0,path=./vhost-net0,server \
+    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on -vnc :10
+
+    taskset -c 40 qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 8 -m 4096 \
+    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_nohuge1,share=on \
+    -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu22-04-2.img  \
+    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
+    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
+    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \
+    -chardev socket,id=char0,path=./vhost-net1,server \
+    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on -vnc :12
 
 3. On VM1, set virtio device IP and run arp protocol::
 
-	ethtool -L ens5 combined 8
-	ifconfig ens5 1.1.1.2
-	arp -s 1.1.1.8 52:54:00:00:00:02
+    ethtool -L ens5 combined 8
+    ifconfig ens5 1.1.1.2
+    arp -s 1.1.1.8 52:54:00:00:00:02
 
 4. On VM2, set virtio device IP and run arp protocol::
 
-	ethtool -L ens5 combined 8
-	ifconfig ens5 1.1.1.8
-	arp -s 1.1.1.2 52:54:00:00:00:01
+    ethtool -L ens5 combined 8
+    ifconfig ens5 1.1.1.8
+    arp -s 1.1.1.2 52:54:00:00:00:01
 
 5. Scp 1MB file from VM1 to VM2::
 
-	Under VM1, run: `scp [xxx] root@1.1.1.8:/`   [xxx] is the file name
+    Under VM1, run: `scp [xxx] root@1.1.1.8:/`   [xxx] is the file name
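+    (Optional) Verify the transferred file arrived intact by comparing checksums:
+    Under VM1, run: `md5sum [xxx]`
+    Under VM2, run: `md5sum /[xxx]` and check that both digests match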
 
 6. Check the iperf performance between two VMs by below commands::
 
-	Under VM1, run: `iperf -s -i 1`
-	Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`
+    Under VM1, run: `iperf -s -i 1`
+    Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`
 
 Test Case 7: vm2vm vhost/virtio-net split ring multi queues using 1G/4k-pages and cbdma enable
 ----------------------------------------------------------------------------------------------
-This case uses iperf and scp to test the payload of large packet (larger than 1MB) is valid after packets forwarding in vm2vm vhost-user/virtio-net
-split ring mergeable path when vhost uses the asynchronous operations with CBDMA channels,the back-end is in 1G-pages memory environment and the front-end is in 4k-pages memory environment.
+This case uses iperf and scp to test that the payload of large packets (larger than 1MB) is valid
+after packet forwarding in the vm2vm vhost-user/virtio-net split ring mergeable path when vhost
+uses asynchronous operations with CBDMA channels; the back-end is in a 1G-pages memory
+environment and the front-end is in a 4K-pages memory environment.
 
-1. Bind 16 CBDMA channel to vfio-pci, launch vhost::
+1. Bind 4 CBDMA ports to vfio-pci, launch vhost::
 
-	./usertools/dpdk-devbind.py --bind=vfio-pci 0000:80:04.0 0000:80:04.1 0000:80:04.2 0000:80:04.3 0000:80:04.4 0000:80:04.5 0000:80:04.6 0000:80:04.7 \
-	0000:00:04.0 0000:00:04.1 0000:00:04.2 0000:00:04.3 0000:00:04.4 0000:00:04.5 0000:00:04.6 0000:00:04.7
-
-	./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 29-33 -n 4 -m 1024 --file-prefix=vhost \
-	-a 0000:80:04.0 -a 0000:80:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \
-	-a 0000:80:04.0 -a 0000:80:04.1 -a 0000:80:04.2 -a 0000:80:04.3 -a 0000:80:04.4 -a 0000:80:04.5 -a 0000:80:04.6 -a 0000:80:04.7 \
-	--vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \
-	--vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \
-	--iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8 \
-	--lcore-dma=[lcore30@0000:80:04.0,lcore30@0000:80:04.1,lcore30@0000:00:04.2,lcore30@0000:00:04.3,lcore31@0000:00:04.4,lcore31@0000:00:04.5,lcore31@0000:00:04.6,lcore31@0000:00:04.7,lcore32@0000:80:04.0,lcore32@0000:80:04.1,lcore32@0000:80:04.2,lcore32@0000:80:04.3,lcore33@0000:80:04.4,lcore33@0000:80:04.5,lcore33@0000:80:04.6,lcore33@0000:80:04.7]
-	testpmd>start
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 29-33 -n 4 -m 1024 --file-prefix=vhost \
+    -a 0000:00:04.0 -a 0000:00:04.1 \
+    --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,tso=1,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.0;txq2@0000:00:04.0;txq3@0000:00:04.0;txq4@0000:00:04.0;txq5@0000:00:04.0;rxq2@0000:00:04.1;rxq3@0000:00:04.1;rxq4@0000:00:04.1;rxq5@0000:00:04.1;rxq6@0000:00:04.1;rxq7@0000:00:04.1],dma-ring-size=1024' \
+    --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,tso=1,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.0;txq2@0000:00:04.0;txq3@0000:00:04.0;txq4@0000:00:04.0;txq5@0000:00:04.0;rxq2@0000:00:04.1;rxq3@0000:00:04.1;rxq4@0000:00:04.1;rxq5@0000:00:04.1;rxq6@0000:00:04.1;rxq7@0000:00:04.1],dma-ring-size=1024' \
+    --iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8
+    testpmd>start
 
 2. Launch VM qemu::
 
-	taskset -c 20,21,22,23,24,25,26,27 /home/QEMU/qemu-6.2.0/bin/qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 8 -m 4096 \
-	-object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_nohuge0,share=on \
-	-numa node,memdev=mem -mem-prealloc -drive file=/home/image/ubuntu2004.img \
-	-chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
-	-device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
-	-monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
-	-netdev user,id=nttsip1,hostfwd=tcp:10.239.251.220:6000-:22 \
-	-chardev socket,id=char0,path=./vhost-net0,server \
-	-netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \
-	-device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :10
-
-	taskset -c 48,49,50,51,52,53,54,55 /home/QEMU/qemu-6.2.0/bin/qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 8 -m 4096 \
-	-object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_nohuge1,share=on \
-	-numa node,memdev=mem -mem-prealloc -drive file=/home/image/ubuntu2004_2.img \
-	-chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
-	-device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
-	-monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
-	-netdev user,id=nttsip1,hostfwd=tcp:10.239.251.220:6001-:22 \
-	-chardev socket,id=char0,path=./vhost-net1,server \
-	-netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \
-	-device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :12
+    taskset -c 20,21,22,23,24,25,26,27 /home/QEMU/qemu-6.2.0/bin/qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 8 -m 4096 \
+    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_nohuge0,share=on \
+    -numa node,memdev=mem -mem-prealloc -drive file=/home/image/ubuntu2004.img \
+    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
+    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
+    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+    -netdev user,id=nttsip1,hostfwd=tcp:10.239.251.220:6000-:22 \
+    -chardev socket,id=char0,path=./vhost-net0,server \
+    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :10
+
+    taskset -c 48,49,50,51,52,53,54,55 /home/QEMU/qemu-6.2.0/bin/qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 8 -m 4096 \
+    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_nohuge1,share=on \
+    -numa node,memdev=mem -mem-prealloc -drive file=/home/image/ubuntu2004_2.img \
+    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
+    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
+    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+    -netdev user,id=nttsip1,hostfwd=tcp:10.239.251.220:6001-:22 \
+    -chardev socket,id=char0,path=./vhost-net1,server \
+    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :12
 
 3. On VM1, set virtio device IP and run arp protocol::
 
-	ethtool -L ens5 combined 8
-	ifconfig ens5 1.1.1.2
-	arp -s 1.1.1.8 52:54:00:00:00:02
+    ethtool -L ens5 combined 8
+    ifconfig ens5 1.1.1.2
+    arp -s 1.1.1.8 52:54:00:00:00:02
 
 4. On VM2, set virtio device IP and run arp protocol::
 
-	ethtool -L ens5 combined 8
-	ifconfig ens5 1.1.1.8
-	arp -s 1.1.1.2 52:54:00:00:00:01
+    ethtool -L ens5 combined 8
+    ifconfig ens5 1.1.1.8
+    arp -s 1.1.1.2 52:54:00:00:00:01
 
 5. Scp 1MB file from VM1 to VM2::
 
-	Under VM1, run: `scp [xxx] root@1.1.1.8:/` [xxx] is the file name
+    Under VM1, run: `scp [xxx] root@1.1.1.8:/` [xxx] is the file name
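+    (Optional) Verify the transferred file arrived intact by comparing checksums:
+    Under VM1, run: `md5sum [xxx]`
+    Under VM2, run: `md5sum /[xxx]` and check that both digests match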
 
 6. Check the iperf performance between two VMs by below commands::
 
-	Under VM1, run: `iperf -s -i 1`
-	Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`
+    Under VM1, run: `iperf -s -i 1`
+    Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`
 
 7. Quit and relaunch vhost with different CBDMA channels::
 
-	./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --no-huge -m 1024 --file-prefix=vhost -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \
-	-a 0000:80:04.0 -a 0000:80:04.1 -a 0000:80:04.2 -a 0000:80:04.3 -a 0000:80:04.4 -a 0000:80:04.5 -a 0000:80:04.6 -a 0000:80:04.7 \
-	--vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7;rxq0;rxq1;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \
-	--vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7;rxq0;rxq1;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \
-	--iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8 \
-	--lcore-dma=[lcore2@0000:00:04.0,lcore2@0000:00:04.1,lcore2@0000:00:04.2,lcore2@0000:00:04.3,lcore3@0000:00:04.0,lcore3@0000:00:04.2,lcore3@0000:00:04.4,lcore3@0000:00:04.5,lcore3@0000:00:04.6,lcore3@0000:00:04.7,lcore4@0000:00:04.1,lcore4@0000:00:04.3,lcore4@0000:80:04.0,lcore4@0000:80:04.1,lcore4@0000:80:04.2,lcore4@0000:80:04.3,lcore4@0000:80:04.4,lcore4@0000:80:04.5,lcore4@0000:80:04.6,lcore5@0000:80:04.7]
-	testpmd>start
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --no-huge -m 1024 --file-prefix=vhost \
+    -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 \
+    --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.0;txq2@0000:00:04.0;txq3@0000:00:04.0;txq4@0000:00:04.1;txq5@0000:00:04.1;txq6@0000:00:04.1;txq7@0000:00:04.1;rxq0@0000:00:04.2;rxq1@0000:00:04.2;rxq2@0000:00:04.2;rxq3@0000:00:04.2;rxq4@0000:00:04.3;rxq5@0000:00:04.3;rxq6@0000:00:04.3;rxq7@0000:00:04.3]' \
+    --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.0;txq2@0000:00:04.0;txq3@0000:00:04.0;txq4@0000:00:04.1;txq5@0000:00:04.1;txq6@0000:00:04.1;txq7@0000:00:04.1;rxq0@0000:00:04.2;rxq1@0000:00:04.2;rxq2@0000:00:04.2;rxq3@0000:00:04.2;rxq4@0000:00:04.3;rxq5@0000:00:04.3;rxq6@0000:00:04.3;rxq7@0000:00:04.3]' \
+    --iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8
+    testpmd>start
 
 8. Rerun steps 5-6.
 
 Test Case 8: vm2vm vhost/virtio-net split packed ring multi queues with 1G/4k-pages and cbdma enable
 ----------------------------------------------------------------------------------------------------
-This case uses iperf and scp to test the payload of large packet (larger than 1MB) is valid after packets forwarding in vm2vm vhost-user/virtio-net
-split and packed ring mergeable path when vhost uses the asynchronous operations with CBDMA channels,the back-end is in 1G-pages memory environment and the front-end is in 4k-pages memory environment.
+This case uses iperf and scp to test that the payload of large packets (larger than 1MB) is valid after
+packet forwarding in vm2vm vhost-user/virtio-net split and packed ring mergeable paths when vhost
+uses asynchronous operations with CBDMA channels; the back-end is in a 1G-pages memory environment
+and the front-end is in a 4K-pages memory environment.
 
-1. Bind 16 CBDMA channel to vfio-pci, launch vhost::
+1. Bind 8 CBDMA ports to vfio-pci, launch vhost::
 
-	./usertools/dpdk-devbind.py --bind=vfio-pci 0000:80:04.0 0000:80:04.1 0000:80:04.2 0000:80:04.3 0000:80:04.4 0000:80:04.5 0000:80:04.6 0000:80:04.7 \
-	0000:00:04.0 0000:00:04.1 0000:00:04.2 0000:00:04.3 0000:00:04.4 0000:00:04.5 0000:00:04.6 0000:00:04.7
-
-	./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 29-33 -n 4 -m 1024 --file-prefix=vhost \
-	-a 0000:80:04.0 -a 0000:80:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \
-	-a 0000:80:04.0 -a 0000:80:04.1 -a 0000:80:04.2 -a 0000:80:04.3 -a 0000:80:04.4 -a 0000:80:04.5 -a 0000:80:04.6 -a 0000:80:04.7 \
-	--vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \
-	--vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \
-	--iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8 \
-	--lcore-dma=[lcore30@0000:80:04.0,lcore30@0000:80:04.1,lcore30@0000:00:04.2,lcore30@0000:00:04.3,lcore31@0000:00:04.4,lcore31@0000:00:04.5,lcore31@0000:00:04.6,lcore31@0000:00:04.7,lcore32@0000:80:04.0,lcore32@0000:80:04.1,lcore32@0000:80:04.2,lcore32@0000:80:04.3,lcore33@0000:80:04.4,lcore33@0000:80:04.5,lcore33@0000:80:04.6,lcore33@0000:80:04.7]
-	testpmd>start
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 29-33 -n 4 -m 1024 --file-prefix=vhost \
+    -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \
+    --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.0;txq2@0000:00:04.0;txq3@0000:00:04.0;txq4@0000:00:04.1;txq5@0000:00:04.1;rxq2@0000:00:04.2;rxq3@0000:00:04.2;rxq4@0000:00:04.3;rxq5@0000:00:04.3;rxq6@0000:00:04.3;rxq7@0000:00:04.3]' \
+    --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0@0000:00:04.4;txq1@0000:00:04.4;txq2@0000:00:04.4;txq3@0000:00:04.4;txq4@0000:00:04.5;txq5@0000:00:04.5;rxq2@0000:00:04.6;rxq3@0000:00:04.6;rxq4@0000:00:04.6;rxq5@0000:00:04.6;rxq6@0000:00:04.7;rxq7@0000:00:04.7]' \
+    --iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8
+    testpmd>start
 
 2. Launch VM qemu::
 
-	taskset -c 20,21,22,23,24,25,26,27 /home/QEMU/qemu-6.2.0/bin/qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 8 -m 4096 \
-	-object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_nohuge0,share=on \
-	-numa node,memdev=mem -mem-prealloc -drive file=/home/image/ubuntu2004.img \
-	-chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
-	-device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
-	-monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
-	-netdev user,id=nttsip1,hostfwd=tcp:10.239.251.220:6000-:22 \
-	-chardev socket,id=char0,path=./vhost-net0,server \
-	-netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \
-	-device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :10
-
-	taskset -c 48,49,50,51,52,53,54,55 /home/QEMU/qemu-6.2.0/bin/qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 8 -m 4096 \
-	-object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_nohuge1,share=on \
-	-numa node,memdev=mem -mem-prealloc -drive file=/home/image/ubuntu2004_2.img \
-	-chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
-	-device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
-	-monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
-	-netdev user,id=nttsip1,hostfwd=tcp:10.239.251.220:6001-:22 \
-	-chardev socket,id=char0,path=./vhost-net1,server \
-	-netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \
-	-device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on -vnc :12
+    taskset -c 20,21,22,23,24,25,26,27 /home/QEMU/qemu-6.2.0/bin/qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 8 -m 4096 \
+    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_nohuge0,share=on \
+    -numa node,memdev=mem -mem-prealloc -drive file=/home/image/ubuntu2004.img \
+    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
+    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
+    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+    -netdev user,id=nttsip1,hostfwd=tcp:10.239.251.220:6000-:22 \
+    -chardev socket,id=char0,path=./vhost-net0,server \
+    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :10
+
+    taskset -c 48,49,50,51,52,53,54,55 /home/QEMU/qemu-6.2.0/bin/qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 8 -m 4096 \
+    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_nohuge1,share=on \
+    -numa node,memdev=mem -mem-prealloc -drive file=/home/image/ubuntu2004_2.img \
+    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
+    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
+    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+    -netdev user,id=nttsip1,hostfwd=tcp:10.239.251.220:6001-:22 \
+    -chardev socket,id=char0,path=./vhost-net1,server \
+    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on -vnc :12
 
 3. On VM1, set virtio device IP and run arp protocol::
 
-	ethtool -L ens5 combined 8
-	ifconfig ens5 1.1.1.2
-	arp -s 1.1.1.8 52:54:00:00:00:02
+    ethtool -L ens5 combined 8
+    ifconfig ens5 1.1.1.2
+    arp -s 1.1.1.8 52:54:00:00:00:02
 
 4. On VM2, set virtio device IP and run arp protocol::
 
-	ethtool -L ens5 combined 8
-	ifconfig ens5 1.1.1.8
-	arp -s 1.1.1.2 52:54:00:00:00:01
+    ethtool -L ens5 combined 8
+    ifconfig ens5 1.1.1.8
+    arp -s 1.1.1.2 52:54:00:00:00:01
 
 5. Scp 1MB file from VM1 to VM2::
 
-	Under VM1, run: `scp [xxx] root@1.1.1.8:/` [xxx] is the file name
+    Under VM1, run: `scp [xxx] root@1.1.1.8:/` [xxx] is the file name
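+    (Optional) Verify the transferred file arrived intact by comparing checksums:
+    Under VM1, run: `md5sum [xxx]`
+    Under VM2, run: `md5sum /[xxx]` and check that both digests match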
 
 6. Check the iperf performance between two VMs by below commands::
 
-	Under VM1, run: `iperf -s -i 1`
-	Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`
+    Under VM1, run: `iperf -s -i 1`
+    Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`
 
 7. Relaunch VM1, and rerun step 3.