[V3,1/2] test_plans/basic_4k_pages_dsa: modify the dmas parameter

Message ID 20230327055658.3202858-2-weix.ling@intel.com
State Accepted
Series fix basic_4k_pages_dsa testplan and

Commit Message

Ling, WeiX March 27, 2023, 5:56 a.m. UTC
  Modify the dmas parameter in the test plan, as the DPDK local patch has changed.
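
  As an illustration (drawn from the commands in this patch), the separate
  --lcore-dma option is dropped and each virtqueue listed in dmas is now
  bound to a specific DMA channel directly::

	Before:
	--vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0;txq1;rxq2;rxq3]' \
	-- ... --lcore-dma=[lcore11@0000:e7:01.0-q0,lcore11@0000:e7:01.0-q1]

	After (DSA device bound to vfio-pci):
	--vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0@0000:6a:01.0-q0;txq1@0000:6a:01.0-q0;rxq2@0000:6a:01.0-q2;rxq3@0000:6a:01.0-q2]'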

Signed-off-by: Wei Ling <weix.ling@intel.com>
---
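
Note: with the kernel idxd driver, the dmas entries reference the work queues
created by dpdk_idxd_cfg.py (common step 3 below); a minimal sketch::

	./drivers/dma/idxd/dpdk_idxd_cfg.py -q 1 0    # creates /dev/dsa/wq0.0
	./drivers/dma/idxd/dpdk_idxd_cfg.py -q 4 1    # creates /dev/dsa/wq1.0 - wq1.3
	--vdev 'net_vhost0,iface=/tmp/s0,queues=2,dmas=[txq0@wq0.0;rxq0@wq1.0]'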
 test_plans/basic_4k_pages_dsa_test_plan.rst | 376 +++++++++-----------
 1 file changed, 178 insertions(+), 198 deletions(-)
  

Patch

diff --git a/test_plans/basic_4k_pages_dsa_test_plan.rst b/test_plans/basic_4k_pages_dsa_test_plan.rst
index eeea25d8..3e656519 100644
--- a/test_plans/basic_4k_pages_dsa_test_plan.rst
+++ b/test_plans/basic_4k_pages_dsa_test_plan.rst
@@ -23,9 +23,9 @@  and packed ring vhost-user/virtio-net mergeable and non-mergeable path.
 
 .. note::
 
-   1.When DMA devices are bound to vfio driver, VA mode is the default and recommended. For PA mode, page by page mapping may
-   exceed IOMMU's max capability, better to use 1G guest hugepage.
-   2.DPDK local patch that about vhost pmd is needed when testing Vhost asynchronous data path with testpmd.
+	1. When DMA devices are bound to the vfio driver, VA mode is the default and recommended. For PA mode, page-by-page mapping may
+	exceed the IOMMU's max capability, so it is better to use 1G guest hugepages.
+	2. A DPDK local patch for the vhost PMD is needed when testing the vhost asynchronous data path with testpmd.
 
 Prerequisites
 =============
@@ -36,30 +36,30 @@  General set up
 
 2. Compile DPDK::
 
-	# CC=gcc meson --werror -Denable_kmods=True -Dlibdir=lib -Dexamples=all --default-library=static <dpdk build dir>
-	# ninja -C <dpdk build dir> -j 110
+	CC=gcc meson --werror -Denable_kmods=True -Dlibdir=lib -Dexamples=all --default-library=static <dpdk build dir>
+	ninja -C <dpdk build dir> -j 110
 	For example,
 	CC=gcc meson --werror -Denable_kmods=True -Dlibdir=lib -Dexamples=all --default-library=static x86_64-native-linuxapp-gcc
 	ninja -C x86_64-native-linuxapp-gcc -j 110
 
-3. Get the PCI device of DUT, for example, 0000:6a:00.0 is NIC port, 0000:6a:01.0 - 0000:f6:01.0 are DSA devices::
+3. Get the PCI devices of the DUT. For example, 0000:29:00.0 is the NIC port and 0000:6a:01.0 - 0000:f6:01.0 are DSA devices::
 
-	<dpdk dir># ./usertools/dpdk-devbind.py -s
+	./usertools/dpdk-devbind.py -s
 
 	Network devices using kernel driver
 	===================================
-	0000:6a:00.0 'Ethernet Controller E810-C for QSFP 1592' drv=ice unused=vfio-pci
-
-	DMA devices using kernel driver
-	===============================
-	0000:6a:01.0 'Device 0b25' drv=idxd unused=vfio-pci
-	0000:6f:01.0 'Device 0b25' drv=idxd unused=vfio-pci
-	0000:74:01.0 'Device 0b25' drv=idxd unused=vfio-pci
-	0000:79:01.0 'Device 0b25' drv=idxd unused=vfio-pci
-	0000:e7:01.0 'Device 0b25' drv=idxd unused=vfio-pci
-	0000:ec:01.0 'Device 0b25' drv=idxd unused=vfio-pci
-	0000:f1:01.0 'Device 0b25' drv=idxd unused=vfio-pci
-	0000:f6:01.0 'Device 0b25' drv=idxd unused=vfio-pci
+	0000:29:00.0 'Ethernet Controller E810-C for QSFP 1592' drv=ice unused=vfio-pci
+
+	DMA devices using kernel driver
+	===============================
+	0000:6a:01.0 'Device 0b25' drv=idxd unused=vfio-pci
+	0000:6f:01.0 'Device 0b25' drv=idxd unused=vfio-pci
+	0000:74:01.0 'Device 0b25' drv=idxd unused=vfio-pci
+	0000:79:01.0 'Device 0b25' drv=idxd unused=vfio-pci
+	0000:e7:01.0 'Device 0b25' drv=idxd unused=vfio-pci
+	0000:ec:01.0 'Device 0b25' drv=idxd unused=vfio-pci
+	0000:f1:01.0 'Device 0b25' drv=idxd unused=vfio-pci
+	0000:f6:01.0 'Device 0b25' drv=idxd unused=vfio-pci
 
 4. Prepare tmpfs with 4K-pages::
 
@@ -75,41 +75,37 @@  Common steps
 ------------
 1. Bind 1 NIC port to vfio-pci::
 
-	<dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci <nic_pci>
-	For example:
-	<dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci 0000:6a:00.0
+	./usertools/dpdk-devbind.py -b vfio-pci 0000:29:00.0
 
 2. Bind DSA devices to the DPDK vfio-pci driver::
 
-	<dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci <dsa_pci>
-	For example, bind 2 DSA devices to vfio-pci driver:
-	<dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci 0000:e7:01.0 0000:ec:01.0
+	./usertools/dpdk-devbind.py -b vfio-pci 0000:6a:01.0 0000:6f:01.0
 
 .. note::
 
 	One DPDK DSA device can create up to 8 WQs. Below is an example, where the DPDK DSA device will create one and
-	eight WQ for DSA deivce 0000:e7:01.0 and 0000:ec:01.0. The value of “max_queues” is 1~8:
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 4 -n 4 -a 0000:e7:01.0,max_queues=1 -- -i
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 4 -n 4 -a 0000:ec:01.0,max_queues=8 -- -i
+	eight WQs for DSA devices 0000:6a:01.0 and 0000:6f:01.0. The value of “max_queues” is 1~8:
+	./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 4 -n 4 -a 0000:6a:01.0,max_queues=1 -- -i
+	./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 4 -n 4 -a 0000:6f:01.0,max_queues=8 -- -i
 
 3. Bind DSA devices to kernel idxd driver, and configure Work Queue (WQ)::
 
-	<dpdk dir># ./usertools/dpdk-devbind.py -b idxd <dsa_pci>
-	<dpdk dir># ./drivers/dma/idxd/dpdk_idxd_cfg.py -q <wq_num> <dsa_idx>
+	./usertools/dpdk-devbind.py -b idxd <dsa_pci>
+	./drivers/dma/idxd/dpdk_idxd_cfg.py -q <wq_num> <dsa_idx>
 
 .. note::
 
 	It is better to reset the WQ when you need to operate DSA devices that are bound to the idxd driver:
-	<dpdk dir># ./drivers/dma/idxd/dpdk_idxd_cfg.py --reset <dsa_idx>
+	./drivers/dma/idxd/dpdk_idxd_cfg.py --reset <dsa_idx>
 	You can check it by 'ls /dev/dsa'
-	dsa_idx: Index of DSA devices, where 0<=dsa_idx<=7, corresponding to 0000:6a:01.0 - 0000:f6:01.0
+	dsa_idx: Index of DSA devices, where 0<=dsa_idx<=7, corresponding to 0000:6a:01.0 - 0000:f6:01.0
 	wq_num: Number of workqueues per DSA endpoint, where 1<=wq_num<=8
 
 	For example, bind 2 DSA devices to idxd driver and configure WQ:
 
-	<dpdk dir># ./usertools/dpdk-devbind.py -b idxd 6a:01.0 6f:01.0
-	<dpdk dir># ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 1 0
-	<dpdk dir># ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 4 1
+	./usertools/dpdk-devbind.py -b idxd 0000:6a:01.0 0000:6f:01.0
+	./drivers/dma/idxd/dpdk_idxd_cfg.py -q 1 0
+	./drivers/dma/idxd/dpdk_idxd_cfg.py -q 4 1
 	Check the WQs with 'ls /dev/dsa'; you should find "wq0.0 wq1.0 wq1.1 wq1.2 wq1.3"
 
 Test Case 1: PVP split ring multi-queues with 4K-pages and dsa dpdk driver
@@ -118,21 +114,20 @@  This case tests split ring with multi-queues can work normally in 4k-pages envir
 
 1. Bind 2 DSA devices and 1 NIC port to vfio-pci like common steps 1-2::
 
-	<dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci 6a:00.0
-	<dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci e7:01.0 ec:01.0
+	./usertools/dpdk-devbind.py -b vfio-pci 0000:29:00.0
+	./usertools/dpdk-devbind.py -b vfio-pci 0000:6a:01.0 0000:6f:01.0
 
 2. Launch vhost by below command::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --no-huge -m 1024 -a 0000:6a:00.0 -a 0000:e7:01.0 \
-	--file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \
-	--iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --txq=8 --rxq=8 --no-numa --socket-num=0 \
-	--lcore-dma=[lcore11@0000:e7:01.0-q0,lcore11@0000:e7:01.0-q1,lcore12@0000:e7:01.0-q2,lcore12@0000:e7:01.0-q3,lcore13@0000:e7:01.0-q4,lcore13@0000:e7:01.0-q5,lcore14@0000:e7:01.0-q6,lcore14@0000:e7:01.0-q7]
+	./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --no-huge -m 1024 -a 0000:29:00.0 -a 0000:6a:01.0 \
+	--file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0@0000:6a:01.0-q0;txq1@0000:6a:01.0-q0;txq2@0000:6a:01.0-q0;txq3@0000:6a:01.0-q0;txq4@0000:6a:01.0-q1;txq5@0000:6a:01.0-q1;rxq2@0000:6a:01.0-q2;rxq3@0000:6a:01.0-q2;rxq4@0000:6a:01.0-q3;rxq5@0000:6a:01.0-q3;rxq6@0000:6a:01.0-q3;rxq7@0000:6a:01.0-q3]' \
+	--iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --txq=8 --rxq=8 --no-numa --socket-num=0
 	testpmd>set fwd mac
 	testpmd>start
 
 3. Launch virtio-user with inorder mergeable path::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-huge -m 1024 --no-pci --file-prefix=virtio \
+	./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-huge -m 1024 --no-pci --file-prefix=virtio \
 	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=1,queues=8,server=1 \
 	-- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
 	testpmd>set fwd csum
@@ -153,10 +148,9 @@  This case tests split ring with multi-queues can work normally in 4k-pages envir
 
 7. Quit and relaunch vhost with 1G hugepage::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 -a 0000:6a:00.0 -a 0000:e7:01.0,max_queues=4 -a 0000:ec:01.0,max_queues=4 \
-	--file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7;rxq0;rxq1;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \
-	--iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --txq=8 --rxq=8 \
-	--lcore-dma=[lcore11@0000:e7:01.0-q0,lcore11@0000:e7:01.0-q1,lcore12@0000:e7:01.0-q2,lcore12@0000:e7:01.0-q3,lcore13@0000:ec:01.0-q0,lcore13@0000:ec:01.0-q1,lcore14@0000:ec:01.0-q2,lcore14@0000:ec:01.0-q3]
+	./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 -a 0000:29:00.0 -a 0000:6a:01.0,max_queues=4 -a 0000:6f:01.0,max_queues=4 \
+	--file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0@0000:6a:01.0-q0;txq1@0000:6a:01.0-q0;txq2@0000:6a:01.0-q1;txq3@0000:6a:01.0-q1;txq4@0000:6a:01.0-q2;txq5@0000:6a:01.0-q2;txq6@0000:6a:01.0-q3;txq7@0000:6a:01.0-q3;rxq0@0000:6f:01.0-q0;rxq1@0000:6f:01.0-q0;rxq2@0000:6f:01.0-q1;rxq3@0000:6f:01.0-q1;rxq4@0000:6f:01.0-q2;rxq5@0000:6f:01.0-q2;rxq6@0000:6f:01.0-q3;rxq7@0000:6f:01.0-q3]' \
+	--iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --txq=8 --rxq=8
 	testpmd>set fwd mac
 	testpmd>start
 
@@ -164,7 +158,7 @@  This case tests split ring with multi-queues can work normally in 4k-pages envir
 
 9. Quit and relaunch virtio-user with mergeable path::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-huge -m 1024 --no-pci --file-prefix=virtio \
+	./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-huge -m 1024 --no-pci --file-prefix=virtio \
 	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=0,queues=8,server=1 \
 	-- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
 	testpmd>set fwd csum
@@ -178,21 +172,20 @@  This case tests packed ring with multi-queues can work normally in 4k-pages envi
 
 1. Bind 2 DSA devices and 1 NIC port to vfio-pci like common steps 1-2::
 
-	<dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci 6a:00.0
-	<dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci f1:01.0 f6:01.0
+	./usertools/dpdk-devbind.py -b vfio-pci 0000:29:00.0
+	./usertools/dpdk-devbind.py -b vfio-pci 0000:6a:01.0 0000:6f:01.0
 
 2. Launch vhost by below command::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --no-huge -m 1024 -a 0000:6a:00.0 -a 0000:f1:01.0 \
-	--file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \
-	--iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --txq=8 --rxq=8 --no-numa --socket-num=0 \
-	--lcore-dma=[lcore11@0000:f1:01.0-q0,lcore11@0000:f1:01.0-q1,lcore12@0000:f1:01.0-q2,lcore12@0000:f1:01.0-q3,lcore13@0000:f1:01.0-q4,lcore13@0000:f1:01.0-q5,lcore14@0000:f1:01.0-q6,lcore14@0000:f1:01.0-q7]
+	./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --no-huge -m 1024 -a 0000:29:00.0 -a 0000:6a:01.0 \
+	--file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0@0000:6a:01.0-q0;txq1@0000:6a:01.0-q0;txq2@0000:6a:01.0-q0;txq3@0000:6a:01.0-q0;txq4@0000:6a:01.0-q1;txq5@0000:6a:01.0-q1;rxq2@0000:6a:01.0-q2;rxq3@0000:6a:01.0-q2;rxq4@0000:6a:01.0-q3;rxq5@0000:6a:01.0-q3;rxq6@0000:6a:01.0-q3;rxq7@0000:6a:01.0-q3]' \
+	--iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --txq=8 --rxq=8 --no-numa --socket-num=0
 	testpmd>set fwd mac
 	testpmd>start
 
 3. Launch virtio-user with inorder mergeable path::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-huge -m 1024 --no-pci --file-prefix=virtio \
+	./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-huge -m 1024 --no-pci --file-prefix=virtio \
 	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=1,packed_vq=1,queues=8,server=1 \
 	-- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
 	testpmd>set fwd mac
@@ -213,10 +206,9 @@  This case tests packed ring with multi-queues can work normally in 4k-pages envi
 
 7. Quit and relaunch vhost with 1G hugepage::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 -a 0000:6a:00.0 -a 0000:f1:01.0,max_queues=4 -a 0000:f6:01.0,max_queues=4 \
-	--file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7;rxq0;rxq1;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \
-	--iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --txq=8 --rxq=8 \
-	--lcore-dma=[lcore11@0000:f1:01.0-q0,lcore11@0000:f1:01.0-q1,lcore12@0000:f1:01.0-q2,lcore12@0000:f1:01.0-q3,lcore13@0000:f6:01.0-q0,lcore13@0000:f6:01.0-q1,lcore14@0000:f6:01.0-q2,lcore14@0000:f6:01.0-q3]
+	./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 -a 0000:29:00.0 -a 0000:6a:01.0,max_queues=4 -a 0000:6f:01.0,max_queues=4 \
+	--file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0@0000:6a:01.0-q0;txq1@0000:6a:01.0-q0;txq2@0000:6a:01.0-q1;txq3@0000:6a:01.0-q1;txq4@0000:6a:01.0-q2;txq5@0000:6a:01.0-q2;txq6@0000:6a:01.0-q3;txq7@0000:6a:01.0-q3;rxq0@0000:6f:01.0-q0;rxq1@0000:6f:01.0-q0;rxq2@0000:6f:01.0-q1;rxq3@0000:6f:01.0-q1;rxq4@0000:6f:01.0-q2;rxq5@0000:6f:01.0-q2;rxq6@0000:6f:01.0-q3;rxq7@0000:6f:01.0-q3]' \
+	--iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --txq=8 --rxq=8
 	testpmd>set fwd mac
 	testpmd>start
 
@@ -224,7 +216,7 @@  This case tests packed ring with multi-queues can work normally in 4k-pages envi
 
 9. Quit and relaunch virtio-user with mergeable path::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-huge -m 1024 --no-pci --file-prefix=virtio \
+	./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-huge -m 1024 --no-pci --file-prefix=virtio \
 	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=0,packed_vq=1,queues=8,server=1 \
 	-- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
 	testpmd>set fwd mac
@@ -238,14 +230,14 @@  This case test the function of Vhost tx offload in the topology of vhost-user/vi
 
 1. Bind 1 DSA device to vfio-pci like common step 2::
 
-	<dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci f1:01.0
+	./usertools/dpdk-devbind.py -b vfio-pci 0000:6a:01.0
 
 2. Launch vhost by below command::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-4 -n 4 --no-huge -m 1024 --file-prefix=vhost -a 0000:f1:01.0,max_queues=2 \
-	--vdev 'net_vhost0,iface=vhost-net0,queues=1,tso=1,dmas=[txq0;rxq0]' \
-	--vdev 'net_vhost1,iface=vhost-net1,queues=1,tso=1,dmas=[txq0;rxq0]' \
-	--iova=va -- -i --nb-cores=2 --txd=1024 --rxd=1024 --no-numa --socket-num=0 --lcore-dma=[lcore3@0000:f1:01.0-q0,lcore4@0000:f1:01.0-q1]
+	./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-4 -n 4 --no-huge -m 1024 --file-prefix=vhost -a 0000:6a:01.0,max_queues=2 \
+	--vdev 'net_vhost0,iface=vhost-net0,queues=1,tso=1,dmas=[txq0@0000:6a:01.0-q0;rxq0@0000:6a:01.0-q0]' \
+	--vdev 'net_vhost1,iface=vhost-net1,queues=1,tso=1,dmas=[txq0@0000:6a:01.0-q1;rxq0@0000:6a:01.0-q1]' \
+	--iova=va -- -i --nb-cores=2 --txd=1024 --rxd=1024 --no-numa --socket-num=0
 	testpmd>start
 
 3. Launch VM1 and VM2::
@@ -274,18 +266,18 @@  This case test the function of Vhost tx offload in the topology of vhost-user/vi
 
 3. On VM1, set the virtio device IP and add a static ARP entry::
 
-	<VM1># ifconfig ens5 1.1.1.2
-	<VM1># arp -s 1.1.1.8 52:54:00:00:00:02
+	ifconfig ens5 1.1.1.2
+	arp -s 1.1.1.8 52:54:00:00:00:02
 
 4. On VM2, set the virtio device IP and add a static ARP entry::
 
-	<VM2># ifconfig ens5 1.1.1.8
-	<VM2># arp -s 1.1.1.2 52:54:00:00:00:01
+	ifconfig ens5 1.1.1.8
+	arp -s 1.1.1.2 52:54:00:00:00:01
 
 5. Check the iperf performance between two VMs by below commands::
 
-	<VM1># iperf -s -i 1
-	<VM2># iperf -c 1.1.1.2 -i 1 -t 60
+	iperf -s -i 1
+	iperf -c 1.1.1.2 -i 1 -t 60
 
 6. Check that the two VMs can receive and send big packets to each other through the vhost log. Port 0 should have tx packets above 1519, and Port 1 should have rx packets above 1519::
 
@@ -293,19 +285,19 @@  This case test the function of Vhost tx offload in the topology of vhost-user/vi
 
 Test Case 4: VM2VM packed ring vhost-user/virtio-net 4K-pages and dsa dpdk driver test with tcp traffic
 -------------------------------------------------------------------------------------------------------
-This case test the function of Vhost tx offload in the topology of vhost-user/virtio-net packed ring mergeable path 
+This case tests the function of Vhost tx offload in the topology of vhost-user/virtio-net packed ring mergeable path
 by verifying the TSO/cksum in the TCP/IP stack when vhost uses asynchronous operations with the dsa dpdk driver in a 4k-pages environment.
 
 1. Bind 1 DSA device to vfio-pci like common step 2::
 
-	<dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci f1:01.0
- 
+	./usertools/dpdk-devbind.py -b vfio-pci 0000:6a:01.0
+
 2. Launch vhost by below command::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-4 -n 4 --no-huge -m 1024 --file-prefix=vhost -a 0000:f1:01.0,max_queues=2 \
-	--vdev 'net_vhost0,iface=vhost-net0,queues=1,tso=1,dmas=[txq0;rxq0]' \
-	--vdev 'net_vhost1,iface=vhost-net1,queues=1,tso=1,dmas=[txq0;rxq0]' \
-	--iova=va -- -i --nb-cores=2 --txd=1024 --rxd=1024 --no-numa --socket-num=0 --lcore-dma=[lcore3@0000:f1:01.0-q0,lcore4@0000:f1:01.0-q1]
+	./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-4 -n 4 --no-huge -m 1024 --file-prefix=vhost -a 0000:6a:01.0,max_queues=2 \
+	--vdev 'net_vhost0,iface=vhost-net0,queues=1,tso=1,dmas=[txq0@0000:6a:01.0-q0;rxq0@0000:6a:01.0-q1]' \
+	--vdev 'net_vhost1,iface=vhost-net1,queues=1,tso=1,dmas=[txq0@0000:6a:01.0-q0;rxq0@0000:6a:01.0-q1]' \
+	--iova=va -- -i --nb-cores=2 --txd=1024 --rxd=1024 --no-numa --socket-num=0
 	testpmd>start
 
 3. Launch VM1 and VM2::
@@ -334,42 +326,41 @@  by verifing the TSO/cksum in the TCP/IP stack when vhost uses the asynchronous o
 
 3. On VM1, set the virtio device IP and add a static ARP entry::
 
-	<VM1># ifconfig ens5 1.1.1.2
-	<VM1># arp -s 1.1.1.8 52:54:00:00:00:02
+	ifconfig ens5 1.1.1.2
+	arp -s 1.1.1.8 52:54:00:00:00:02
 
 4. On VM2, set the virtio device IP and add a static ARP entry::
 
-	<VM2># ifconfig ens5 1.1.1.8
-	<VM2># arp -s 1.1.1.2 52:54:00:00:00:01
+	ifconfig ens5 1.1.1.8
+	arp -s 1.1.1.2 52:54:00:00:00:01
 
 5. Check the iperf performance between two VMs by below commands::
 
-	<VM1># iperf -s -i 1
-	<VM2># iperf -c 1.1.1.2 -i 1 -t 60
+	iperf -s -i 1
+	iperf -c 1.1.1.2 -i 1 -t 60
 
 6. Check that the two VMs can receive and send big packets to each other through the vhost log::
 
 	testpmd>show port xstats all
-        Port 0 should have tx packets above 1518
-        Port 1 should have rx packets above 1518
+	Port 0 should have tx packets above 1518
+	Port 1 should have rx packets above 1518
 
 Test Case 5: VM2VM vhost/virtio-net split packed ring multi queues with 1G/4k-pages and dsa dpdk driver
 -------------------------------------------------------------------------------------------------------
-This case uses iperf and scp to test the payload of large packet (larger than 1MB) is valid after packets forwarding in 
+This case uses iperf and scp to verify that the payload of a large packet (larger than 1MB) is valid after packet forwarding in
 the vm2vm vhost-user/virtio-net multi-queues mergeable path when vhost uses asynchronous operations with the dsa dpdk driver.
 One virtio-net uses a split ring and the other a packed ring. The vhost runs in 1G hugepages and the virtio-user runs in a 4k-pages environment.
 
 1. Bind 2 DSA channels to vfio-pci like common step 2::
 
-	<dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci f1:01.0 f6:01.0
+	./usertools/dpdk-devbind.py -b vfio-pci 0000:6a:01.0 0000:6f:01.0
 
 2. Launch vhost::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost -a 0000:f1:01.0,max_queues=4 -a 0000:f6:01.0,max_queues=4 \
-	--vdev 'net_vhost0,iface=vhost-net0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \
-	--vdev 'net_vhost1,iface=vhost-net1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \
-	--iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8 \
-	--lcore-dma=[lcore2@0000:f1:01.0-q0,lcore2@0000:f1:01.0-q1,lcore3@0000:f1:01.0-q2,lcore3@0000:f1:01.0-q3,lcore4@0000:f6:01.0-q0,lcore4@0000:f6:01.0-q1,lcore5@0000:f6:01.0-q2,lcore3@0000:f6:01.0-q3]
+	./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost -a 0000:6a:01.0,max_queues=4 -a 0000:6f:01.0,max_queues=4 \
+	--vdev 'net_vhost0,iface=vhost-net0,queues=8,dmas=[txq0@0000:6a:01.0-q0;txq1@0000:6a:01.0-q0;txq2@0000:6a:01.0-q0;txq3@0000:6a:01.0-q0;txq4@0000:6a:01.0-q1;txq5@0000:6a:01.0-q1;rxq2@0000:6a:01.0-q2;rxq3@0000:6a:01.0-q2;rxq4@0000:6a:01.0-q3;rxq5@0000:6a:01.0-q3;rxq6@0000:6a:01.0-q3;rxq7@0000:6a:01.0-q3]' \
+	--vdev 'net_vhost1,iface=vhost-net1,queues=8,dmas=[txq0@0000:6f:01.0-q0;txq1@0000:6f:01.0-q0;txq2@0000:6f:01.0-q0;txq3@0000:6f:01.0-q0;txq4@0000:6f:01.0-q1;txq5@0000:6f:01.0-q1;rxq2@0000:6f:01.0-q2;rxq3@0000:6f:01.0-q2;rxq4@0000:6f:01.0-q3;rxq5@0000:6f:01.0-q3;rxq6@0000:6f:01.0-q3;rxq7@0000:6f:01.0-q3]' \
+	--iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8
 	testpmd>start
 
 3. Launch VM qemu::
@@ -398,24 +389,24 @@  And one virtio-net is split ring, the other is packed ring. The vhost run in 1G
 
 4. On VM1, set the virtio device IP and add a static ARP entry::
 
-	<VM1># ethtool -L ens5 combined 8
-	<VM1># ifconfig ens5 1.1.1.2
-	<VM1># arp -s 1.1.1.8 52:54:00:00:00:02
+	ethtool -L ens5 combined 8
+	ifconfig ens5 1.1.1.2
+	arp -s 1.1.1.8 52:54:00:00:00:02
 
 5. On VM2, set the virtio device IP and add a static ARP entry::
 
-	<VM2># ethtool -L ens5 combined 8
-	<VM2># ifconfig ens5 1.1.1.8
-	<VM2># arp -s 1.1.1.2 52:54:00:00:00:01
+	ethtool -L ens5 combined 8
+	ifconfig ens5 1.1.1.8
+	arp -s 1.1.1.2 52:54:00:00:00:01
 
 6. Scp a 1MB file from VM1 to VM2::
 
-	<VM1># scp <file> root@1.1.1.8:/
+	scp <file> root@1.1.1.8:/
 
 7. Check the iperf performance between two VMs by below commands::
 
-	<VM1># iperf -s -i 1
-	<VM2># iperf -c 1.1.1.2 -i 1 -t 60
+	iperf -s -i 1
+	iperf -c 1.1.1.2 -i 1 -t 60
 
 8. Relaunch vm1 and rerun step 4-7.
 
@@ -428,14 +419,13 @@  dsa dpdk driver. The vhost run in 1G hugepages and the virtio-user run in 4k-pag
 1. Bind 2 DSA channels to vfio-pci, then launch vhost::
 
 	ls /dev/dsa #check wq configure, reset if exist
-	./usertools/dpdk-devbind.py -u f1:01.0 f1:01.0
-	./usertools/dpdk-devbind.py -b vfio-pci f1:01.0 f1:01.0
-
-	./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost -a 0000:f1:01.0,max_queues=4 -a 0000:f6:01.0,max_queues=4 \
-	--vdev 'net_vhost0,iface=vhost-net0,queues=8,client=1,tso=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \
-	--vdev 'net_vhost1,iface=vhost-net1,queues=8,client=1,tso=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \
-	--iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8 \
-	--lcore-dma=[lcore2@0000:f1:01.0-q0,lcore2@0000:f1:01.0-q1,lcore3@0000:f1:01.0-q2,lcore3@0000:f1:01.0-q3,lcore4@0000:f6:01.0-q0,lcore4@0000:f6:01.0-q1,lcore5@0000:f6:01.0-q2,lcore5@0000:f6:01.0-q3]
+	./usertools/dpdk-devbind.py -u 0000:6a:01.0 0000:6f:01.0
+	./usertools/dpdk-devbind.py -b vfio-pci 0000:6a:01.0 0000:6f:01.0
+
+	./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost -a 0000:6a:01.0,max_queues=4 -a 0000:6f:01.0,max_queues=4 \
+	--vdev 'net_vhost0,iface=vhost-net0,queues=8,client=1,tso=1,dmas=[txq0@0000:6a:01.0-q0;txq1@0000:6a:01.0-q0;txq2@0000:6a:01.0-q0;txq3@0000:6a:01.0-q0;txq4@0000:6a:01.0-q1;txq5@0000:6a:01.0-q1;rxq2@0000:6a:01.0-q2;rxq3@0000:6a:01.0-q2;rxq4@0000:6a:01.0-q3;rxq5@0000:6a:01.0-q3;rxq6@0000:6a:01.0-q3;rxq7@0000:6a:01.0-q3]' \
+	--vdev 'net_vhost1,iface=vhost-net1,queues=8,client=1,tso=1,dmas=[txq0@0000:6f:01.0-q0;txq1@0000:6f:01.0-q0;txq2@0000:6f:01.0-q0;txq3@0000:6f:01.0-q0;txq4@0000:6f:01.0-q1;txq5@0000:6f:01.0-q1;rxq2@0000:6f:01.0-q2;rxq3@0000:6f:01.0-q2;rxq4@0000:6f:01.0-q3;rxq5@0000:6f:01.0-q3;rxq6@0000:6f:01.0-q3;rxq7@0000:6f:01.0-q3]' \
+	--iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8
 	testpmd>start
 
 2. Prepare tmpfs with 4K-pages::
@@ -492,20 +482,19 @@  dsa dpdk driver. The vhost run in 1G hugepages and the virtio-user run in 4k-pag
 
 8. Quit and relaunch vhost with different dsa channels::
 
-	./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost -a 0000:f1:01.0,max_queues=2 -a 0000:f6:01.0,max_queues=2 \
-	--vdev 'net_vhost0,iface=vhost-net0,queues=8,client=1,tso=1,dmas=[txq0;txq1;txq2;txq3;rxq0;rxq1;rxq2;rxq3]' \
-	--vdev 'net_vhost1,iface=vhost-net1,queues=8,client=1,tso=1,dmas=[txq0;txq1;txq2;txq3;rxq0;rxq1;rxq2;rxq3]' \
-	--iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=4 --txq=4 \
-	--lcore-dma=[lcore2@0000:f1:01.0-q0,lcore3@0000:f1:01.0-q1,lcore4@0000:f6:01.0-q0,lcore5@0000:f6:01.0-q1]
+	./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost -a 0000:6a:01.0,max_queues=2 -a 0000:6f:01.0,max_queues=2 \
+	--vdev 'net_vhost0,iface=vhost-net0,queues=8,client=1,tso=1,dmas=[txq0@0000:6a:01.0-q0;txq1@0000:6a:01.0-q1;txq2@0000:6f:01.0-q0;txq3@0000:6f:01.0-q1;rxq0@0000:6a:01.0-q0;rxq1@0000:6a:01.0-q1;rxq2@0000:6f:01.0-q0;rxq3@0000:6f:01.0-q1]' \
+	--vdev 'net_vhost1,iface=vhost-net1,queues=8,client=1,tso=1,dmas=[txq0@0000:6a:01.0-q0;txq1@0000:6a:01.0-q1;txq2@0000:6f:01.0-q0;txq3@0000:6f:01.0-q1;rxq0@0000:6a:01.0-q0;rxq1@0000:6a:01.0-q1;rxq2@0000:6f:01.0-q0;rxq3@0000:6f:01.0-q1]' \
+	--iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=4 --txq=4
 	testpmd>start
 
 9. On VM1, set virtio device::
 
-	<VM1># ethtool -L ens5 combined 4
+	ethtool -L ens5 combined 4
 
 10. On VM2, set virtio device::
 
-	<VM2># ethtool -L ens5 combined 4
+	ethtool -L ens5 combined 4
 
 11. Rerun step 6-7.
 
@@ -515,27 +504,25 @@  This case tests split ring with multi-queues can work normally in 4k-pages envir
 
 1. Bind 1 NIC port to vfio-pci and 2 DSA devices to idxd like common steps 1 and 3::
 
-	<dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci 6a:00.0
-
+	./usertools/dpdk-devbind.py -b vfio-pci 0000:29:00.0
 	ls /dev/dsa #check wq configure, reset if exist
-	<dpdk dir># ./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0
-	<dpdk dir># ./usertools/dpdk-devbind.py -b idxd 6a:01.0 6f:01.0
-	<dpdk dir># ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 4 0
-	<dpdk dir># ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 4 1
+	./usertools/dpdk-devbind.py -u 0000:6a:01.0 0000:6f:01.0
+	./usertools/dpdk-devbind.py -b idxd 0000:6a:01.0 0000:6f:01.0
+	./drivers/dma/idxd/dpdk_idxd_cfg.py -q 4 0
+	./drivers/dma/idxd/dpdk_idxd_cfg.py -q 4 1
 	ls /dev/dsa #check wq configure success
 
 2. Launch vhost by below command::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 -m 1024 --no-huge -a 0000:6a:00.0 \
-	--file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \
+	./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 -m 1024 --no-huge -a 0000:29:00.0 \
+	--file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0@wq0.0;txq1@wq0.0;txq2@wq0.0;txq3@wq0.0;txq4@wq0.1;txq5@wq0.1;rxq2@wq1.0;rxq3@wq1.0;rxq4@wq1.1;rxq5@wq1.1;rxq6@wq1.1;rxq7@wq1.1]' \
-	--iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --txq=8 --rxq=8 --no-numa --socket-num=0 \
-	--lcore-dma=[lcore11@wq0.0,lcore12@wq0.1,lcore13@wq0.2,lcore14@wq0.3]
+	--iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --txq=8 --rxq=8 --no-numa --socket-num=0
 	testpmd>set fwd mac
 	testpmd>start
 
 3. Launch virtio-user with inorder mergeable path::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-huge -m 1024 --no-pci --file-prefix=virtio \
+	./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-huge -m 1024 --no-pci --file-prefix=virtio \
 	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=1,queues=8,server=1 \
 	-- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
 	testpmd>set fwd mac
@@ -556,10 +543,9 @@  This case tests split ring with multi-queues can work normally in 4k-pages envir
 
 7. Quit and relaunch vhost with different dsa virtual channels and 1G-page::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 -a 0000:6a:00.0 \
-	--file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7;rxq0;rxq1;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \
-	--iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --txq=8 --rxq=8 \
-	--lcore-dma=[lcore11@wq0.0,lcore12@wq0.0,lcore13@wq0.1,lcore13@wq1.0,lcore14@wq1.1,lcore14@wq1.2]
+	./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 -a 0000:29:00.0 \
+	--file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0@wq0.0;txq1@wq0.0;txq2@wq0.1;txq3@wq0.1;txq4@wq0.2;txq5@wq0.2;txq6@wq0.3;txq7@wq0.3;rxq0@wq0.0;rxq1@wq0.0;rxq2@wq0.1;rxq3@wq0.1;rxq4@wq0.2;rxq5@wq0.2;rxq6@wq0.3;rxq7@wq0.3]' \
+	--iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --txq=8 --rxq=8
 	testpmd>set fwd mac
 	testpmd>start
 
@@ -571,27 +557,25 @@  This case tests split ring with multi-queues can work normally in 4k-pages envir
 
 1. Bind 1 NIC port to vfio-pci and 2 DSA devices to idxd like common steps 1 and 3::
 
-	<dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci 6a:00.0
-
+	./usertools/dpdk-devbind.py -b vfio-pci 0000:29:00.0
 	ls /dev/dsa #check wq configure, reset if exist
-	<dpdk dir># ./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0
-	<dpdk dir># ./usertools/dpdk-devbind.py -b idxd 6a:01.0 6f:01.0
+	./usertools/dpdk-devbind.py -u 0000:6a:01.0 0000:6f:01.0
+	./usertools/dpdk-devbind.py -b idxd 0000:6a:01.0 0000:6f:01.0
 	<dpdk dir># ./<dpdk build dir>/drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 0
 	<dpdk dir># ./<dpdk build dir>/drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 1
 	ls /dev/dsa #check wq configure success
 
 2. Launch vhost by below command::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 -m 1024 --no-huge -a 0000:6a:00.0 \
-	--file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \
-	--iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --txq=8 --rxq=8 --no-numa --socket-num=0 \
-	--lcore-dma=[lcore11@wq0.0,lcore12@wq0.1,lcore13@wq0.2,lcore14@wq0.3]
+	./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 -m 1024 --no-huge -a 0000:29:00.0 \
+	--file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0@wq0.0;txq1@wq0.0;txq2@wq0.0;txq3@wq0.0;txq4@wq0.1;txq5@wq0.1;rxq2@wq1.0;rxq3@wq1.0;rxq4@wq1.1;rxq5@wq1.1;rxq6@wq1.1;rxq7@wq1.1]' \
+	--iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --txq=8 --rxq=8 --no-numa --socket-num=0
 	testpmd>set fwd mac
 	testpmd>start
 
 3. Launch virtio-user with inorder mergeable path::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-huge -m 1024 --no-pci --file-prefix=virtio \
+	./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-huge -m 1024 --no-pci --file-prefix=virtio \
 	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=1,packed_vq=1,queues=8,server=1 \
 	-- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
 	testpmd>set fwd mac
@@ -612,10 +596,9 @@  This case tests split ring with multi-queues can work normally in 4k-pages envir
 
 7. Quit and relaunch vhost::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18  -a 0000:6a:00.0 \
-	--file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7;rxq0;rxq1;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \
-	--iova=va -- -i --nb-cores=2 --txd=1024 --rxd=1024 --txq=8 --rxq=8 \
-	--lcore-dma=[lcore11@wq0.0,lcore11@wq0.1,lcore12@wq1.0,lcore2@wq1.1]
+	./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 -a 0000:29:00.0 \
+	--file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0@wq0.0;txq1@wq0.0;txq2@wq0.0;txq3@wq0.0;txq4@wq0.1;txq5@wq0.1;txq6@wq0.1;txq7@wq0.1;rxq0@wq0.0;rxq1@wq0.0;rxq2@wq0.0;rxq3@wq0.0;rxq4@wq0.1;rxq5@wq0.1;rxq6@wq0.1;rxq7@wq0.1]' \
+	--iova=va -- -i --nb-cores=2 --txd=1024 --rxd=1024 --txq=8 --rxq=8
 	testpmd>set fwd mac
 	testpmd>start
 
@@ -630,17 +613,17 @@  in 4k-pages environment.
 1. Bind 1 DSA device to idxd like common step 3::
 
 	ls /dev/dsa #check wq configure, reset if exist
-	<dpdk dir># ./usertools/dpdk-devbind.py -u 6a:01.0
-	<dpdk dir># ./usertools/dpdk-devbind.py -b idxd 6a:01.0
+	./usertools/dpdk-devbind.py -u 0000:6a:01.0
+	./usertools/dpdk-devbind.py -b idxd 0000:6a:01.0
 	<dpdk dir># ./<dpdk build dir>/drivers/dma/idxd/dpdk_idxd_cfg.py -q 4 0
 	ls /dev/dsa #check wq configure success
 
 2. Launch the Vhost sample by below commands::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --no-huge -m 1024 --file-prefix=vhost \
-	--vdev 'net_vhost0,iface=vhost-net0,queues=1,dmas=[txq0;rxq0]' \
-	--vdev 'net_vhost1,iface=vhost-net1,queues=1,dmas=[txq0;rxq0]' \
-	--iova=va -- -i --nb-cores=2 --txd=1024 --rxd=1024 --rxq=1 --txq=1 --no-numa --socket-num=0 --lcore-dma=[lcore2@wq0.0,lcore2@wq0.1,lcore3@wq0.2,lcore3@wq0.3]
+	./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --no-huge -m 1024 --file-prefix=vhost \
+	--vdev 'net_vhost0,iface=vhost-net0,queues=1,dmas=[txq0@wq0.0;rxq0@wq0.1]' \
+	--vdev 'net_vhost1,iface=vhost-net1,queues=1,dmas=[txq0@wq0.2;rxq0@wq0.3]' \
+	--iova=va -- -i --nb-cores=2 --txd=1024 --rxd=1024 --rxq=1 --txq=1 --no-numa --socket-num=0
 	testpmd>start
 
 3. Launch VM1 and VM2 on socket 1::
@@ -669,24 +652,24 @@  in 4k-pages environment.
 
 4. On VM1, set the virtio device IP and add a static ARP entry::
 
-	<VM1># ifconfig ens5 1.1.1.2
-	<VM1># arp -s 1.1.1.8 52:54:00:00:00:02
+	ifconfig ens5 1.1.1.2
+	arp -s 1.1.1.8 52:54:00:00:00:02
 
 5. On VM2, set the virtio device IP and add a static ARP entry::
 
-	<VM2># ifconfig ens5 1.1.1.8
-	<VM2># arp -s 1.1.1.2 52:54:00:00:00:01
+	ifconfig ens5 1.1.1.8
+	arp -s 1.1.1.2 52:54:00:00:00:01
 
 6. Check the iperf performance between two VMs by below commands::
 
-	<VM1># iperf -s -i 1
-	<VM2># iperf -c 1.1.1.2 -i 1 -t 60
+	iperf -s -i 1
+	iperf -c 1.1.1.2 -i 1 -t 60
 
 7. Check that the two VMs can receive and send big packets to each other through the vhost log::
 
 	testpmd>show port xstats all
-        Port 0 should have tx packets above 1518
-        Port 1 should have rx packets above 151518
+	Port 0 should have tx packets above 1518
+	Port 1 should have rx packets above 1518
 
 Test Case 10: VM2VM packed ring vhost-user/virtio-net 4K-pages and dsa kernel driver test with tcp traffic
 ----------------------------------------------------------------------------------------------------------
@@ -697,8 +680,8 @@  in 4k-pages environment.
 1. Bind 2 DSA devices to idxd like common step 3::
 
 	ls /dev/dsa #check wq configure, reset if exist
-	<dpdk dir># ./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0
-	<dpdk dir># ./usertools/dpdk-devbind.py -b idxd 6a:01.0 6f:01.0
+	./usertools/dpdk-devbind.py -u 0000:6a:01.0 0000:6f:01.0
+	./usertools/dpdk-devbind.py -b idxd 0000:6a:01.0 0000:6f:01.0
 	<dpdk dir># ./<dpdk build dir>/drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 0
 	<dpdk dir># ./<dpdk build dir>/drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 1
 	ls /dev/dsa #check wq configure success
@@ -706,9 +689,9 @@  in 4k-pages environment.
 2. Launch the Vhost sample by below commands::
 
 	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-4 -n 4 --no-huge -m 1024 --file-prefix=vhost \
-	--vdev 'net_vhost0,iface=vhost-net0,queues=1,dmas=[txq0;rxq0]' \
-	--vdev 'net_vhost1,iface=vhost-net1,queues=1,dmas=[txq0;rxq0]' \
-	--iova=va -- -i --nb-cores=2 --txd=1024 --rxd=1024 --no-numa --socket-num=0 --lcore-dma=[lcore3@wq0.0,lcore4@wq1.0]
+	--vdev 'net_vhost0,iface=vhost-net0,queues=1,dmas=[txq0@wq0.0;rxq0@wq0.0]' \
+	--vdev 'net_vhost1,iface=vhost-net1,queues=1,dmas=[txq0@wq0.1;rxq0@wq0.1]' \
+	--iova=va -- -i --nb-cores=2 --txd=1024 --rxd=1024 --no-numa --socket-num=0
 	testpmd>start
 
 3. Launch VM1 and VM2 with qemu::
@@ -737,24 +720,24 @@  in 4k-pages environment.
 
 4. On VM1, set the virtio device IP and add a static ARP entry::
 
-	<VM1># ifconfig ens5 1.1.1.2
-	<VM1># arp -s 1.1.1.8 52:54:00:00:00:02
+	ifconfig ens5 1.1.1.2
+	arp -s 1.1.1.8 52:54:00:00:00:02
 
 5. On VM2, set the virtio device IP and add a static ARP entry::
 
-	<VM2># ifconfig ens5 1.1.1.8
-	<VM2># arp -s 1.1.1.2 52:54:00:00:00:01
+	ifconfig ens5 1.1.1.8
+	arp -s 1.1.1.2 52:54:00:00:00:01
 
 6. Check the iperf performance between two VMs by below commands::
 
-	<VM1># iperf -s -i 1
-	<VM2># iperf -c 1.1.1.2 -i 1 -t 60
+	iperf -s -i 1
+	iperf -c 1.1.1.2 -i 1 -t 60
 
 7. Check that the two VMs can receive and send big packets to each other through the vhost log::
 
 	testpmd>show port xstats all
-        Port 0 should have tx packets above 1518
-        Port 1 should have rx packets above 1518
+	Port 0 should have tx packets above 1518
+	Port 1 should have rx packets above 1518
 
 Test Case 11: VM2VM vhost/virtio-net split packed ring multi queues with 1G/4k-pages and dsa kernel driver
 ----------------------------------------------------------------------------------------------------------
@@ -765,8 +748,8 @@  dsa kernel driver. The vhost run in 1G hugepages and the virtio-user run in 4k-p
 1. Bind 8 DSA devices to idxd like common step 3::
 
 	ls /dev/dsa #check wq configure, reset if exist
-	<dpdk dir># ./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0 74:01.0 79:01.0 e7:01.0 ec:01.0 f1:01.0 f6:01.0
-	<dpdk dir># ./usertools/dpdk-devbind.py -b idxd 6a:01.0 6f:01.0 74:01.0 79:01.0 e7:01.0 ec:01.0 f1:01.0 f6:01.0
+	./usertools/dpdk-devbind.py -u 0000:6a:01.0 0000:6f:01.0 0000:74:01.0 0000:79:01.0 0000:e7:01.0 0000:ec:01.0 0000:f1:01.0 0000:f6:01.0
+	./usertools/dpdk-devbind.py -b idxd 0000:6a:01.0 0000:6f:01.0 0000:74:01.0 0000:79:01.0 0000:e7:01.0 0000:ec:01.0 0000:f1:01.0 0000:f6:01.0
 	<dpdk dir># ./<dpdk build dir>/drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 0
 	<dpdk dir># ./<dpdk build dir>/drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 1
 	<dpdk dir># ./<dpdk build dir>/drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 2
@@ -780,10 +763,9 @@  dsa kernel driver. The vhost run in 1G hugepages and the virtio-user run in 4k-p
 2. Launch vhost::
 
 	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost \
-	--vdev 'net_vhost0,iface=vhost-net0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \
-	--vdev 'net_vhost1,iface=vhost-net1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \
-	--iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8 \
-	--lcore-dma=[lcore2@wq0.0,lcore2@wq1.1,lcore2@wq2.2,lcore2@wq3.3,lcore3@wq0.0,lcore3@wq2.2,lcore3@wq4.4,lcore3@wq5.5,lcore3@wq6.6,lcore3@wq7.7,lcore4@wq1.1,lcore4@wq3.3,lcore4@wq0.1,lcore4@wq1.2,lcore4@wq2.3,lcore4@wq3.4,lcore4@wq4.5,lcore4@wq5.6,lcore4@wq6.7,lcore5@wq7.0]
+	--vdev 'net_vhost0,iface=vhost-net0,queues=8,dmas=[txq0@wq0.0;txq1@wq0.0;txq2@wq0.0;txq3@wq0.0;txq4@wq0.1;txq5@wq0.1;rxq2@wq1.0;rxq3@wq1.0;rxq4@wq1.1;rxq5@wq1.1;rxq6@wq1.1;rxq7@wq1.1]' \
+	--vdev 'net_vhost1,iface=vhost-net1,queues=8,dmas=[txq0@wq0.2;txq1@wq0.2;txq2@wq0.2;txq3@wq0.2;txq4@wq0.3;txq5@wq0.3;rxq2@wq1.2;rxq3@wq1.2;rxq4@wq1.3;rxq5@wq1.3;rxq6@wq1.3;rxq7@wq1.3]' \
+	--iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8
 	testpmd>start
 
 3. Launch VM qemu::
@@ -812,24 +794,24 @@  dsa kernel driver. The vhost run in 1G hugepages and the virtio-user run in 4k-p
 
 4. On VM1, set the virtio device IP and add a static ARP entry::
 
-	<VM1># ethtool -L ens5 combined 8
-	<VM1># ifconfig ens5 1.1.1.2
-	<VM1># arp -s 1.1.1.8 52:54:00:00:00:02
+	ethtool -L ens5 combined 8
+	ifconfig ens5 1.1.1.2
+	arp -s 1.1.1.8 52:54:00:00:00:02
 
 5. On VM2, set the virtio device IP and add a static ARP entry::
 
-	<VM2># ethtool -L ens5 combined 8
-	<VM2># ifconfig ens5 1.1.1.8
-	<VM2># arp -s 1.1.1.2 52:54:00:00:00:01
+	ethtool -L ens5 combined 8
+	ifconfig ens5 1.1.1.8
+	arp -s 1.1.1.2 52:54:00:00:00:01
 
 6. Scp a 1MB file from VM1 to VM2::
 
-	<VM1># scp <file> root@1.1.1.8:/
+	scp <file> root@1.1.1.8:/
 
 7. Check the iperf performance between two VMs by below commands::
 
-	<VM1># iperf -s -i 1
-	<VM2># iperf -c 1.1.1.2 -i 1 -t 60
+	iperf -s -i 1
+	iperf -c 1.1.1.2 -i 1 -t 60
 
 8. Relaunch vm1 and rerun step 4-7.
 
@@ -842,18 +824,17 @@  dsa kernel driver. The vhost run in 1G hugepages and the virtio-user run in 4k-p
 1. Bind 2 DSA channels to idxd, then launch vhost::
 
 	ls /dev/dsa #check wq configure, reset if exist
-	./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0
-	./usertools/dpdk-devbind.py -b idxd 6a:01.0 6f:01.0
+	./usertools/dpdk-devbind.py -u 0000:6a:01.0 0000:6f:01.0
+	./usertools/dpdk-devbind.py -b idxd 0000:6a:01.0 0000:6f:01.0
 	./drivers/dma/idxd/dpdk_idxd_cfg.py -q 4 0
 	./drivers/dma/idxd/dpdk_idxd_cfg.py -q 4 1
 
 2. Launch vhost::
 
 	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost -a 0000:6a:01.0,max_queues=4 -a 0000:6f:01.0,max_queues=4 \
-	--vdev 'net_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \
-	--vdev 'net_vhost1,iface=vhost-net1,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \
-	--iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8 \
-	--lcore-dma=[lcore2@wq0.0,lcore2@wq0.1,lcore3@wq0.2,lcore3@wq0.3,lcore4@wq1.0,lcore4@wq1.1,lcore5@wq1.2,lcore5@wq1.3]
+	--vdev 'net_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0@wq0.0;txq1@wq0.0;txq2@wq0.0;txq3@wq0.0;txq4@wq0.1;txq5@wq0.1;rxq2@wq1.0;rxq3@wq1.0;rxq4@wq1.1;rxq5@wq1.1;rxq6@wq1.1;rxq7@wq1.1]' \
+	--vdev 'net_vhost1,iface=vhost-net1,queues=8,client=1,dmas=[txq0@wq0.0;txq1@wq0.0;txq2@wq0.0;txq3@wq0.0;txq4@wq0.1;txq5@wq0.1;rxq2@wq1.0;rxq3@wq1.0;rxq4@wq1.1;rxq5@wq1.1;rxq6@wq1.1;rxq7@wq1.1]' \
+	--iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8
 	testpmd>start
 
 3. Prepare tmpfs with 4K-pages::
@@ -911,18 +892,17 @@  dsa kernel driver. The vhost run in 1G hugepages and the virtio-user run in 4k-p
 9. Quit and relaunch vhost with different dsa channels::
 
 	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost \
-	--vdev 'net_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;rxq0;rxq1;rxq2;rxq3]' \
-	--vdev 'net_vhost1,iface=vhost-net1,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;rxq0;rxq1;rxq2;rxq3]' \
-	--iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=4 --txq=4 \
-	--lcore-dma=[lcore2@wq0.0,lcore3@wq0.1,lcore4@wq1.0,lcore5@wq1.1]
+	--vdev 'net_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0@wq0.0;txq1@wq0.1;txq2@wq0.2;txq3@wq0.3;rxq0@wq0.0;rxq1@wq0.1;rxq2@wq0.2;rxq3@wq0.3]' \
+	--vdev 'net_vhost1,iface=vhost-net1,queues=8,client=1,dmas=[txq0@wq0.0;txq1@wq0.1;txq2@wq0.2;txq3@wq0.3;rxq0@wq0.0;rxq1@wq0.1;rxq2@wq0.2;rxq3@wq0.3]' \
+	--iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=4 --txq=4
 	testpmd>start
 
 10. On VM1, set virtio device::
 
-	<VM1># ethtool -L ens5 combined 4
+	ethtool -L ens5 combined 4
 
 11. On VM2, set virtio device::
 
-	<VM2># ethtool -L ens5 combined 4
+	ethtool -L ens5 combined 4
 
-12. Rerun step 6-7.
+12. Rerun step 6-7.
\ No newline at end of file