From patchwork Wed Apr 6 09:09:33 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Ling, WeiX" X-Patchwork-Id: 109190 Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id D150FA0509; Wed, 6 Apr 2022 11:09:45 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id C958340E2D; Wed, 6 Apr 2022 11:09:45 +0200 (CEST) Received: from mga06.intel.com (mga06.intel.com [134.134.136.31]) by mails.dpdk.org (Postfix) with ESMTP id EA1F040689 for ; Wed, 6 Apr 2022 11:09:42 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1649236183; x=1680772183; h=from:to:cc:subject:date:message-id:mime-version: content-transfer-encoding; bh=tfIvxs6ji3cvcuxAOs/CEPVNmAlfFZbr4hg9P9BtAHY=; b=RZCU2VTyhJ05Dw57/0nc2M42h89mRU47XHECLthnUrL7wXTVTHgTZmFr zyT0fQNUwRXtQ1+UoQ/GyC5OjoQvbZLYkGHl+5qP8F+Rqgn3wdRbc9+d+ kjHwr+nkcr9vyjt8/fH9sPP1ZDX+wqILdruqeLeq9xYKtdrXCsU91sWPt kCLIXg1vnHbPZKOXHn9BDjP7BHSmhaBJ6LsMSApkSpMXOUUWIRkZjBJ2M 7L4PUn3l1dZl1tVdtcNPa8cCxghLR/0WLmedtscb2eb6jETjZDWKwt0X8 sCV+upTQu8/jkmH6HPIdsjl1xxMu0wFySAAWDJ2KahqMcBRKU1YJAaLu5 w==; X-IronPort-AV: E=McAfee;i="6200,9189,10308"; a="321688770" X-IronPort-AV: E=Sophos;i="5.90,239,1643702400"; d="scan'208";a="321688770" Received: from orsmga008.jf.intel.com ([10.7.209.65]) by orsmga104.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 06 Apr 2022 02:09:42 -0700 X-IronPort-AV: E=Sophos;i="5.90,239,1643702400"; d="scan'208";a="570424788" Received: from unknown (HELO localhost.localdomain) ([10.239.251.222]) by orsmga008-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 06 Apr 2022 02:09:39 -0700 From: Wei Ling To: dts@dpdk.org Cc: Wei Ling Subject: [dts][PATCH V1 1/5] test_plans/vm2vm_virtio_net_perf_test_plan: delete CBDMA test case Date: Wed, 6 Apr 2022 17:09:33 +0800 Message-Id: <20220406090933.28267-1-weix.ling@intel.com> X-Mailer: git-send-email 2.25.1 MIME-Version: 1.0 X-BeenThere: dts@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: test suite reviews and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dts-bounces@dpdk.org As commit 53d3f4778c(vhost: integrate dmadev in asynchronous data-path), delete cbdma related case form test_plan/vm2vm_virtio_net_perf_test_plan. Signed-off-by: Wei Ling --- .../vm2vm_virtio_net_perf_test_plan.rst | 720 ++---------------- 1 file changed, 84 insertions(+), 636 deletions(-) diff --git a/test_plans/vm2vm_virtio_net_perf_test_plan.rst b/test_plans/vm2vm_virtio_net_perf_test_plan.rst index 6e679b5b..9787b658 100644 --- a/test_plans/vm2vm_virtio_net_perf_test_plan.rst +++ b/test_plans/vm2vm_virtio_net_perf_test_plan.rst @@ -44,88 +44,62 @@ in the UDP/IP stack with vm2vm split ring and packed ring vhost-user/virtio-net and packed ring vhost-user/virtio-net mergeable and non-mergeable path. 3. Check Vhost tx offload function by verifying the TSO/cksum in the TCP/IP stack with vm2vm split ring and packed ring vhost-user/virtio-net mergeable path with CBDMA channel. -4. Multi-queues number dynamic change in vm2vm vhost-user/virtio-net with split ring and packed ring when vhost enqueue operation with multi-CBDMA channels. +4. 
Multi-queues number dynamic change in vm2vm vhost-user/virtio-net with split ring and packed ring when vhost +enqueue operation with multi-CBDMA channels. + Note: 1.For packed virtqueue virtio-net test, need qemu version > 4.2.0 and VM kernel version > v5.1. -2.For split virtqueue virtio-net with multi-queues server mode test, need qemu version > LTS 4.2.1, dut to old qemu exist reconnect issue when multi-queues test. +2.For split virtqueue virtio-net with multi-queues server mode test, need qemu version > LTS 4.2.1, +DUT to old qemu exist reconnect issue when multi-queues test. 3.For PA mode, page by page mapping may exceed IOMMU's max capability, better to use 1G guest hugepage. -Test flow -========= - -Virtio-net <-> Vhost <-> Testpmd <-> Vhost <-> Virtio-net - -Test Case 1: VM2VM split ring vhost-user/virtio-net test with tcp traffic -========================================================================= - -1. Launch the Vhost sample on socket 0 by below commands:: - - rm -rf vhost-net* - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-4 -n 4 --no-pci --file-prefix=vhost \ - --vdev 'net_vhost0,iface=vhost-net0,queues=1' \ - --vdev 'net_vhost1,iface=vhost-net1,queues=1' \ - -- -i --nb-cores=2 --txd=1024 --rxd=1024 - testpmd>start - -2. Launch VM1 and VM2 on socket 1:: - - taskset -c 32 qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 1 -m 4096 \ - -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \ - -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04.img \ - -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \ - -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \ - -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ - -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \ - -chardev socket,id=char0,path=./vhost-net0 \ - -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \ - -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on -vnc :10 +For more about dpdk-testpmd sample, please refer to the DPDK docments: +https://doc.dpdk.org/guides/testpmd_app_ug/run_app.html - taskset -c 33 qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 1 -m 4096 \ - -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \ - -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04-2.img \ - -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \ - -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \ - -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ - -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \ - -chardev socket,id=char0,path=./vhost-net1 \ - -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \ - -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on -vnc :12 +For virtio-user vdev parameter, you can refer to the DPDK docments: +https://doc.dpdk.org/guides/nics/virtio.html#virtio-paths-selection-and-usage. -3. On VM1, set virtio device IP and run arp protocol:: +Prerequisites +============= - ifconfig ens5 1.1.1.2 - arp -s 1.1.1.8 52:54:00:00:00:02 +Topology +-------- + Test flow: Virtio-net-->Vhost-->Testpmd-->Vhost-->Virtio-net -4. 
On VM2, set virtio device IP and run arp protocol:: +Hardware +-------- + Supportted NICs: ALL - ifconfig ens5 1.1.1.8 - arp -s 1.1.1.2 52:54:00:00:00:01 +Software +-------- + Trex:http://trex-tgn.cisco.com/trex/release/v2.26.tar.gz -5. Check the iperf performance with different packet size between two VMs by below commands:: +General set up +-------------- +1. Compile DPDK:: - Under VM1, run: `iperf -s -i 1` - Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60` + # CC=gcc meson --werror -Denable_kmods=True -Dlibdir=lib -Dexamples=all --default-library= + # ninja -C -j 110 -6. Check 2VMs can receive and send big packets to each other:: +Test case +========= - testpmd>show port xstats all - Port 0 should have tx packets above 1522 - Port 1 should have rx packets above 1522 +Common steps +------------ -Test Case 2: VM2VM split ring vhost-user/virtio-net CBDMA enable test with tcp traffic -====================================================================================== +Test Case 1: VM2VM split ring vhost-user/virtio-net test with tcp traffic +------------------------------------------------------------------------- +This case uses testpmd and QEMU and iperf to test split ring to get tcp traffic throughput between 2 VMs. -1. Bind 2 CBDMA channels to vfio-pci, then launch vhost by below command:: +1. Launch the Vhost sample on socket 0 by below commands:: - rm -rf vhost-net* - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-4 -n 4 --file-prefix=vhost \ - --vdev 'net_vhost0,iface=vhost-net0,queues=1,dmas=[txq0@0000:80:04.0]' \ - --vdev 'net_vhost1,iface=vhost-net1,queues=1,dmas=[txq0@0000:80:04.1]' \ - --iova=va -- -i --nb-cores=2 --txd=1024 --rxd=1024 - testpmd>vhost enable tx all + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-4 -n 4 --no-pci --file-prefix=vhost \ + --vdev 'net_vhost0,iface=vhost-net0,queues=1' \ + --vdev 'net_vhost1,iface=vhost-net1,queues=1' \ + -- -i --nb-cores=2 --txd=1024 --rxd=1024 testpmd>start -2. Launch VM1 and VM2:: +2. Launch VM1 and VM2 on socket 1:: taskset -c 32 qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 1 -m 4096 \ -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \ @@ -136,7 +110,8 @@ Test Case 2: VM2VM split ring vhost-user/virtio-net CBDMA enable test with tcp t -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \ -chardev socket,id=char0,path=./vhost-net0 \ -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \ - -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on -vnc :10 + -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=off,\ + csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on -vnc :10 taskset -c 33 qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 1 -m 4096 \ -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \ @@ -147,7 +122,8 @@ Test Case 2: VM2VM split ring vhost-user/virtio-net CBDMA enable test with tcp t -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \ -chardev socket,id=char0,path=./vhost-net1 \ -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \ - -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on -vnc :12 + -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=off,\ + csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on -vnc :12 3. 
On VM1, set virtio device IP and run arp protocol:: @@ -159,7 +135,7 @@ Test Case 2: VM2VM split ring vhost-user/virtio-net CBDMA enable test with tcp t ifconfig ens5 1.1.1.8 arp -s 1.1.1.2 52:54:00:00:00:01 -5. Check the iperf performance between two VMs by below commands:: +5. Check the iperf performance with different packet size between two VMs by below commands:: Under VM1, run: `iperf -s -i 1` Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60` @@ -170,18 +146,16 @@ Test Case 2: VM2VM split ring vhost-user/virtio-net CBDMA enable test with tcp t Port 0 should have tx packets above 1522 Port 1 should have rx packets above 1522 -7. Check throughput and compare with case1, CBDMA enable performance should larger than w/o CBDMA performance when cross socket. - -Test Case 3: VM2VM split ring vhost-user/virtio-net test with udp traffic -========================================================================= +Test Case 2: VM2VM split ring vhost-user/virtio-net test with udp traffic +------------------------------------------------------------------------- +This case uses testpmd and QEMU and iperf to test split ring to get udp traffic throughput between 2 VMs. 1. Launch the Vhost sample by below commands:: - rm -rf vhost-net* - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xF0000000 -n 4 --no-pci --file-prefix=vhost \ + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xF0000000 -n 4 --no-pci --file-prefix=vhost \ --vdev 'net_vhost0,iface=vhost-net0,queues=1' \ --vdev 'net_vhost1,iface=vhost-net1,queues=1' \ - -- -i --nb-cores=1 --txd=1024 --rxd=1024 + -- -i --nb-cores=2 --txd=1024 --rxd=1024 testpmd>start 2. Launch VM1 and VM2:: @@ -195,7 +169,8 @@ Test Case 3: VM2VM split ring vhost-user/virtio-net test with udp traffic -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \ -chardev socket,id=char0,path=./vhost-net0 \ -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \ - -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :10 + -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,\ + csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :10 qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 1 -m 4096 \ -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \ @@ -206,7 +181,8 @@ Test Case 3: VM2VM split ring vhost-user/virtio-net test with udp traffic -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \ -chardev socket,id=char0,path=./vhost-net1 \ -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \ - -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :12 + -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,\ + csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :12 3. 
On VM1, set virtio device IP and run arp protocol:: @@ -229,13 +205,13 @@ Test Case 3: VM2VM split ring vhost-user/virtio-net test with udp traffic Port 0 should have tx packets above 1522 Port 1 should have rx packets above 1522 -Test Case 4: Check split ring virtio-net device capability -========================================================== +Test Case 3: Check split ring virtio-net device capability +---------------------------------------------------------- +This case uses testpmd and QEMU to test split ring device capability in 2 VMs. 1. Launch the Vhost sample by below commands:: - rm -rf vhost-net* - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xF0000000 -n 4 --no-pci --file-prefix=vhost \ + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xF0000000 -n 4 --no-pci --file-prefix=vhost \ --vdev 'net_vhost0,iface=vhost-net0,queues=1' \ --vdev 'net_vhost1,iface=vhost-net1,queues=1' \ -- -i --nb-cores=2 --txd=1024 --rxd=1024 @@ -252,7 +228,8 @@ Test Case 4: Check split ring virtio-net device capability -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \ -chardev socket,id=char0,path=./vhost-net0 \ -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \ - -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :10 + -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,\ + csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :10 qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 1 -m 4096 \ -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \ @@ -263,7 +240,8 @@ Test Case 4: Check split ring virtio-net device capability -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \ -chardev socket,id=char0,path=./vhost-net1 \ -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \ - -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :12 + -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,\ + csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :12 3. Check UFO and TSO offload status on for the Virtio-net driver on VM1 and VM2:: @@ -279,247 +257,13 @@ Test Case 4: Check split ring virtio-net device capability tx-tcp-ecn-segmentation: on tx-tcp6-segmentation: on -Test Case 5: VM2VM split ring vhost-user/virtio-net mergeable 8 queues CBDMA enable test with large packet payload valid check -============================================================================================================================== - -1. Bind 16 CBDMA channels to vfio-pci, then launch vhost by below command:: - - rm -rf vhost-net* - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost \ - --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.1;txq2@0000:00:04.2;txq3@0000:00:04.3;txq4@0000:00:04.4;txq5@0000:00:04.5;txq6@0000:00:04.6;txq7@0000:00:04.7]' \ - --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0@0000:80:04.0;txq1@0000:80:04.1;txq2@0000:80:04.2;txq3@0000:80:04.3;txq4@0000:80:04.4;txq5@0000:80:04.5;txq6@0000:80:04.6;txq7@0000:80:04.7]' \ - --iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8 - testpmd>vhost enable tx all - testpmd>start - -2. 
Launch VM1 and VM2 using qemu:: - - taskset -c 32 qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 8 -m 4096 \ - -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \ - -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04.img \ - -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \ - -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \ - -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ - -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \ - -chardev socket,id=char0,path=./vhost-net0,server \ - -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \ - -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :10 - - taskset -c 40 qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 8 -m 4096 \ - -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \ - -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04-2.img \ - -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \ - -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \ - -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ - -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \ - -chardev socket,id=char0,path=./vhost-net1,server \ - -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \ - -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :12 - -3. On VM1, set virtio device IP and run arp protocol:: - - ethtool -L ens5 combined 8 - ifconfig ens5 1.1.1.2 - arp -s 1.1.1.8 52:54:00:00:00:02 - -4. On VM2, set virtio device IP and run arp protocol:: - - ethtool -L ens5 combined 8 - ifconfig ens5 1.1.1.8 - arp -s 1.1.1.2 52:54:00:00:00:01 - -5. Scp 1MB file form VM1 to VM2:: - - Under VM1, run: `scp [xxx] root@1.1.1.8:/` [xxx] is the file name - -6. Check the iperf performance between two VMs by below commands:: - - Under VM1, run: `iperf -s -i 1` - Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60` - -7. Quit and relaunch vhost w/ diff CBDMA channels:: - - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost \ - --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0@0000:80:04.0;txq1@0000:80:04.1;txq2@0000:80:04.2;txq3@0000:80:04.3;txq4@0000:80:04.4;txq5@0000:80:04.5;txq6@0000:80:04.6;txq7@0000:80:04.7]' \ - --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.1;txq2@0000:00:04.2;txq3@0000:00:04.3;txq4@0000:00:04.4;txq5@0000:00:04.5;txq6@0000:00:04.6;txq7@0000:00:04.7]' \ - --iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8 - testpmd>vhost enable tx all - testpmd>start - -8. Rerun step 5-6. - -9. 
Quit and relaunch vhost w/ iova=pa:: - - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost \ - --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0@0000:80:04.0;txq1@0000:80:04.1;txq2@0000:80:04.2;txq3@0000:80:04.3;txq4@0000:80:04.4;txq5@0000:80:04.5;txq6@0000:80:04.6;txq7@0000:80:04.7]' \ - --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.1;txq2@0000:00:04.2;txq3@0000:00:04.3;txq4@0000:00:04.4;txq5@0000:00:04.5;txq6@0000:00:04.6;txq7@0000:00:04.7]' \ - --iova=pa -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8 - testpmd>vhost enable tx all - testpmd>start - -10. Rerun step 5-6. - -11. Quit and relaunch vhost w/o CBDMA channels:: - - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost \ - --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=4' \ - --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=4' \ - -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=4 --txq=4 - testpmd>vhost enable tx all - testpmd>start - -12. On VM1, set virtio device:: - - ethtool -L ens5 combined 4 - -13. On VM2, set virtio device:: - - ethtool -L ens5 combined 4 - -14. Scp 1MB file form VM1 to VM2:: - - Under VM1, run: `scp [xxx] root@1.1.1.8:/` [xxx] is the file name - -15. Check the iperf performance and compare with CBDMA enable performance, ensure CMDMA enable performance is higher:: - - Under VM1, run: `iperf -s -i 1` - Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60` - -16. Quit and relaunch vhost with 1 queues:: - - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost \ - --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=4' \ - --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=4' \ - -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=1 --txq=1 - testpmd>vhost enable tx all - testpmd>start - -17. On VM1, set virtio device:: - - ethtool -L ens5 combined 1 - -18. On VM2, set virtio device:: - - ethtool -L ens5 combined 1 - -19. Scp 1MB file form VM1 to VM2M, check packets can be forwarding success by scp:: - - Under VM1, run: `scp [xxx] root@1.1.1.8:/` [xxx] is the file name - -20. Check the iperf performance, ensure queue0 can work from vhost side:: - - Under VM1, run: `iperf -s -i 1` - Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60` - -Test Case 6: VM2VM split ring vhost-user/virtio-net non-mergeable 8 queues CBDMA enable test with large packet payload valid check -================================================================================================================================== - -1. Bind 16 CBDMA channels to vfio-pci, then launch vhost by below command:: - - rm -rf vhost-net* - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost \ - --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.1;txq2@0000:00:04.2;txq3@0000:00:04.3;txq4@0000:00:04.4;txq5@0000:00:04.5;txq6@0000:00:04.6;txq7@0000:00:04.7]' \ - --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0@0000:80:04.0;txq1@0000:80:04.1;txq2@0000:80:04.2;txq3@0000:80:04.3;txq4@0000:80:04.4;txq5@0000:80:04.5;txq6@0000:80:04.6;txq7@0000:80:04.7]' \ - -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8 - testpmd>vhost enable tx all - testpmd>start - -2. 
Launch VM1 and VM2 using qemu:: - - taskset -c 32 qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 8 -m 4096 \ - -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \ - -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04.img \ - -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \ - -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \ - -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ - -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \ - -chardev socket,id=char0,path=./vhost-net0,server \ - -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \ - -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=off,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :10 - - taskset -c 40 qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 8 -m 4096 \ - -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \ - -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04-2.img \ - -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \ - -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \ - -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ - -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \ - -chardev socket,id=char0,path=./vhost-net1,server \ - -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \ - -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=off,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :12 - -3. On VM1, set virtio device IP and run arp protocol:: - - ethtool -L ens5 combined 8 - ifconfig ens5 1.1.1.2 - arp -s 1.1.1.8 52:54:00:00:00:02 - -4. On VM2, set virtio device IP and run arp protocol:: - - ethtool -L ens5 combined 8 - ifconfig ens5 1.1.1.8 - arp -s 1.1.1.2 52:54:00:00:00:01 - -5. Scp 1MB file form VM1 to VM2:: - - Under VM1, run: `scp [xxx] root@1.1.1.8:/` [xxx] is the file name - -6. Check the iperf performance between two VMs by below commands:: - - Under VM1, run: `iperf -s -i 1` - Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60` - -7. Quit and relaunch vhost ports w/o CBDMA channels:: - - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost \ - --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8' \ - --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8' \ - -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8 - testpmd>vhost enable tx all - testpmd>start - -8. Scp 1MB file form VM1 to VM2:: - - Under VM1, run: `scp [xxx] root@1.1.1.8:/` [xxx] is the file name - -9. Check the iperf performance and compare with CBDMA enable performance, ensure CMDMA enable performance is higher:: - - Under VM1, run: `iperf -s -i 1` - Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60` - -10. Quit and relaunch vhost ports with 1 queues:: - - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost \ - --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8' \ - --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8' \ - -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=1 --txq=1 - testpmd>vhost enable tx all - testpmd>start - -11. On VM1, set virtio device:: - - ethtool -L ens5 combined 1 - -12. 
On VM2, set virtio device:: - - ethtool -L ens5 combined 1 - -13. Scp 1MB file form VM1 to VM2M, check packets can be forwarding success by scp:: - - Under VM1, run: `scp [xxx] root@1.1.1.8:/` [xxx] is the file name - -14. Check the iperf performance, ensure queue0 can work from vhost side:: - - Under VM1, run: `iperf -s -i 1` - Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60` - -Test Case 7: VM2VM packed ring vhost-user/virtio-net test with tcp traffic -========================================================================== +Test Case 4: VM2VM packed ring vhost-user/virtio-net test with tcp traffic +-------------------------------------------------------------------------- +This case uses testpmd and QEMU and iperf to test split ring to get tcp traffic throughput between 2 VMs. 1. Launch the Vhost sample by below commands:: - rm -rf vhost-net* - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-4 -n 4 --no-pci --file-prefix=vhost \ + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-4 -n 4 --no-pci --file-prefix=vhost \ --vdev 'net_vhost0,iface=vhost-net0,queues=1' \ --vdev 'net_vhost1,iface=vhost-net1,queues=1' \ -- -i --nb-cores=2 --txd=1024 --rxd=1024 @@ -536,7 +280,8 @@ Test Case 7: VM2VM packed ring vhost-user/virtio-net test with tcp traffic -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \ -chardev socket,id=char0,path=./vhost-net0 \ -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \ - -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on -vnc :10 + -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,\ + mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on -vnc :10 qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 1 -m 4096 \ -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \ @@ -547,7 +292,8 @@ Test Case 7: VM2VM packed ring vhost-user/virtio-net test with tcp traffic -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \ -chardev socket,id=char0,path=./vhost-net1 \ -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \ - -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on -vnc :12 + -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,\ + mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on -vnc :12 3. On VM1, set virtio device IP and run arp protocol:: @@ -570,73 +316,13 @@ Test Case 7: VM2VM packed ring vhost-user/virtio-net test with tcp traffic Port 0 should have tx packets above 1522 Port 1 should have rx packets above 1522 -Test Case 8: VM2VM packed ring vhost-user/virtio-net CBDMA enable test with tcp traffic -======================================================================================= - -1. Bind 2 CBDMA channels to vfio-pci, then launch vhost by below command:: - - rm -rf vhost-net* - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-4 -n 4 --file-prefix=vhost \ - --vdev 'net_vhost0,iface=vhost-net0,queues=1,dmas=[txq0@0000:00:04.0]' \ - --vdev 'net_vhost1,iface=vhost-net1,queues=1,dmas=[txq0@0000:00:04.1]' \ - --iova=va -- -i --nb-cores=2 --txd=1024 --rxd=1024 - testpmd>vhost enable tx all - testpmd>start - -2. 
Launch VM1 and VM2 on socket 1 with qemu:: - - taskset -c 32 qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 1 -m 4096 \ - -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \ - -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04.img \ - -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \ - -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \ - -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ - -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \ - -chardev socket,id=char0,path=./vhost-net0 \ - -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \ - -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on -vnc :10 - - taskset -c 33 qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 1 -m 4096 \ - -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \ - -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04-2.img \ - -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \ - -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \ - -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ - -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \ - -chardev socket,id=char0,path=./vhost-net1 \ - -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \ - -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on -vnc :12 - -3. On VM1, set virtio device IP and run arp protocol:: - - ifconfig ens5 1.1.1.2 - arp -s 1.1.1.8 52:54:00:00:00:02 - -4. On VM2, set virtio device IP and run arp protocol:: - - ifconfig ens5 1.1.1.8 - arp -s 1.1.1.2 52:54:00:00:00:01 - -5. Check the iperf performance between two VMs by below commands:: - - Under VM1, run: `iperf -s -i 1` - Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60` - -6. Check 2VMs can receive and send big packets to each other:: - - testpmd>show port xstats all - Port 0 should have tx packets above 1522 - Port 1 should have rx packets above 1522 - -7. Check throughput and compare with case6, CBDMA enable performance should larger than w/o CBDMA performance when cross socket. - -Test Case 9: VM2VM packed ring vhost-user/virtio-net test with udp traffic -========================================================================== +Test Case 5: VM2VM packed ring vhost-user/virtio-net test with udp traffic +-------------------------------------------------------------------------- +This case uses testpmd and QEMU and iperf to test split ring to get udp traffic throughput between 2 VMs. 1. 
Launch the Vhost sample by below commands:: - rm -rf vhost-net* - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xF0000000 -n 4 --no-pci --file-prefix=vhost \ + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xF0000000 -n 4 --no-pci --file-prefix=vhost \ --vdev 'net_vhost0,iface=vhost-net0,queues=1' \ --vdev 'net_vhost1,iface=vhost-net1,queues=1' \ -- -i --nb-cores=2 --txd=1024 --rxd=1024 @@ -653,7 +339,8 @@ Test Case 9: VM2VM packed ring vhost-user/virtio-net test with udp traffic -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \ -chardev socket,id=char0,path=./vhost-net0 \ -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \ - -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on -vnc :10 + -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,\ + csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on -vnc :10 qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 1 -m 4096 \ -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \ @@ -664,7 +351,8 @@ Test Case 9: VM2VM packed ring vhost-user/virtio-net test with udp traffic -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \ -chardev socket,id=char0,path=./vhost-net1 \ -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \ - -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on -vnc :12 + -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,\ + csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on -vnc :12 3. On VM1, set virtio device IP and run arp protocol:: @@ -687,13 +375,13 @@ Test Case 9: VM2VM packed ring vhost-user/virtio-net test with udp traffic Port 0 should have tx packets above 1522 Port 1 should have rx packets above 1522 -Test Case 10: Check packed ring virtio-net device capability -============================================================ +Test Case 6: Check packed ring virtio-net device capability +----------------------------------------------------------- +This case uses testpmd and QEMU to test split ring device capability in 2 VMs. 1. 
Launch the Vhost sample by below commands:: - rm -rf vhost-net* - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xF0000000 -n 4 --no-pci --file-prefix=vhost \ + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xF0000000 -n 4 --no-pci --file-prefix=vhost \ --vdev 'net_vhost0,iface=vhost-net0,queues=1' \ --vdev 'net_vhost1,iface=vhost-net1,queues=1' \ -- -i --nb-cores=2 --txd=1024 --rxd=1024 @@ -710,7 +398,8 @@ Test Case 10: Check packed ring virtio-net device capability -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \ -chardev socket,id=char0,path=./vhost-net0 \ -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \ - -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on -vnc :10 + -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,\ + csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on -vnc :10 qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 1 -m 4096 \ -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \ @@ -721,7 +410,8 @@ Test Case 10: Check packed ring virtio-net device capability -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \ -chardev socket,id=char0,path=./vhost-net1 \ -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \ - -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,host_ufo=on,guest_ufo=on,guest_ecn=on,packed=on -vnc :12 + -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,\ + csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,host_ufo=on,guest_ufo=on,guest_ecn=on,packed=on -vnc :12 3. Check UFO and TSO offload status on for the Virtio-net driver on VM1 and VM2:: @@ -736,245 +426,3 @@ Test Case 10: Check packed ring virtio-net device capability tx-tcp-segmentation: on tx-tcp-ecn-segmentation: on tx-tcp6-segmentation: on - -Test Case 11: VM2VM virtio-net packed ring mergeable 8 queues CBDMA enable test with large packet payload valid check -===================================================================================================================== - -1. Bind 16 CBDMA channels to vfio-pci, then launch vhost by below command:: - - rm -rf vhost-net* - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost \ - --vdev 'net_vhost0,iface=vhost-net0,queues=8,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.1;txq2@0000:00:04.2;txq3@0000:00:04.3;txq4@0000:00:04.4;txq5@0000:00:04.5;txq6@0000:00:04.6;txq7@0000:00:04.7]' \ - --vdev 'net_vhost1,iface=vhost-net1,queues=8,dmas=[txq0@0000:80:04.0;txq1@0000:80:04.1;txq2@0000:80:04.2;txq3@0000:80:04.3;txq4@0000:80:04.4;txq5@0000:80:04.5;txq6@0000:80:04.6;txq7@0000:80:04.7]' \ - --iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8 - testpmd>vhost enable tx all - testpmd>start - -2. 
Launch VM1 and VM2 with qemu:: - - taskset -c 32 qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 8 -m 4096 \ - -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \ - -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04.img \ - -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \ - -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \ - -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ - -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \ - -chardev socket,id=char0,path=./vhost-net0 \ - -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \ - -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on -vnc :10 - - taskset -c 40 qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 8 -m 4096 \ - -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \ - -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04-2.img \ - -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \ - -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \ - -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ - -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \ - -chardev socket,id=char0,path=./vhost-net1 \ - -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \ - -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on -vnc :12 - -3. On VM1, set virtio device IP and run arp protocol:: - - ethtool -L ens5 combined 8 - ifconfig ens5 1.1.1.2 - arp -s 1.1.1.8 52:54:00:00:00:02 - -4. On VM2, set virtio device IP and run arp protocol:: - - ethtool -L ens5 combined 8 - ifconfig ens5 1.1.1.8 - arp -s 1.1.1.2 52:54:00:00:00:01 - -5. Scp 1MB file form VM1 to VM2:: - - Under VM1, run: `scp [xxx] root@1.1.1.8:/` [xxx] is the file name - -6. Check the iperf performance between two VMs by below commands:: - - Under VM1, run: `iperf -s -i 1` - Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60` - -7. Rerun step 5-6 five times. - -Test Case 12: VM2VM virtio-net packed ring non-mergeable 8 queues CBDMA enable test with large packet payload valid check -========================================================================================================================= - -1. Bind 16 CBDMA channels to vfio-pci, then launch vhost by below command:: - - rm -rf vhost-net* - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost \ - --vdev 'net_vhost0,iface=vhost-net0,queues=8,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.1;txq2@0000:00:04.2;txq3@0000:00:04.3;txq4@0000:00:04.4;txq5@0000:00:04.5;txq6@0000:00:04.6;txq7@0000:00:04.7]' \ - --vdev 'net_vhost1,iface=vhost-net1,queues=8,dmas=[txq0@0000:80:04.0;txq1@0000:80:04.1;txq2@0000:80:04.2;txq3@0000:80:04.3;txq4@0000:80:04.4;txq5@0000:80:04.5;txq6@0000:80:04.6;txq7@0000:80:04.7]' \ - --iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8 - testpmd>vhost enable tx all - testpmd>start - -2. 
Launch VM1 and VM2:: - - taskset -c 32 qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 8 -m 4096 \ - -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \ - -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04.img \ - -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \ - -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \ - -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ - -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \ - -chardev socket,id=char0,path=./vhost-net0 \ - -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \ - -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=off,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on -vnc :10 - - taskset -c 40 qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 8 -m 4096 \ - -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \ - -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04-2.img \ - -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \ - -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \ - -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ - -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \ - -chardev socket,id=char0,path=./vhost-net1 \ - -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \ - -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=off,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on -vnc :12 - -3. On VM1, set virtio device IP and run arp protocol:: - - ethtool -L ens5 combined 8 - ifconfig ens5 1.1.1.2 - arp -s 1.1.1.8 52:54:00:00:00:02 - -4. On VM2, set virtio device IP and run arp protocol:: - - ethtool -L ens5 combined 8 - ifconfig ens5 1.1.1.8 - arp -s 1.1.1.2 52:54:00:00:00:01 - -5. Scp 1MB file form VM1 to VM2:: - - Under VM1, run: `scp [xxx] root@1.1.1.8:/` [xxx] is the file name - -6. Check the iperf performance between two VMs by below commands:: - - Under VM1, run: `iperf -s -i 1` - Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60` - -7. Rerun step 5-6 five times. - -Test Case 13: VM2VM packed ring vhost-user/virtio-net CBDMA enable test with tcp traffic when set iova=pa -========================================================================================================= - -1. Bind 2 CBDMA channels to vfio-pci, then launch vhost by below command:: - - rm -rf vhost-net* - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-4 -n 4 --file-prefix=vhost \ - --vdev 'net_vhost0,iface=vhost-net0,queues=1,dmas=[txq0@0000:00:04.0]' \ - --vdev 'net_vhost1,iface=vhost-net1,queues=1,dmas=[txq0@0000:00:04.1]' \ - --iova=pa -- -i --nb-cores=2 --txd=1024 --rxd=1024 - testpmd>vhost enable tx all - testpmd>start - -2. 
Launch VM1 and VM2 on socket 1 with qemu:: - - taskset -c 32 qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 1 -m 4096 \ - -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \ - -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04.img \ - -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \ - -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \ - -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ - -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \ - -chardev socket,id=char0,path=./vhost-net0 \ - -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \ - -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on -vnc :10 - - taskset -c 33 qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 1 -m 4096 \ - -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \ - -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04-2.img \ - -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \ - -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \ - -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ - -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \ - -chardev socket,id=char0,path=./vhost-net1 \ - -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \ - -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on -vnc :12 - -3. On VM1, set virtio device IP and run arp protocol:: - - ifconfig ens5 1.1.1.2 - arp -s 1.1.1.8 52:54:00:00:00:02 - -4. On VM2, set virtio device IP and run arp protocol:: - - ifconfig ens5 1.1.1.8 - arp -s 1.1.1.2 52:54:00:00:00:01 - -5. Scp 1MB file form VM1 to VM2:: - - Under VM1, run: `scp [xxx] root@1.1.1.8:/` [xxx] is the file name - -6. Check the iperf performance between two VMs by below commands:: - - Under VM1, run: `iperf -s -i 1` - Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60` - -7. Check 2VMs can receive and send big packets to each other:: - - testpmd>show port xstats all - Port 0 should have tx packets above 1522 - Port 1 should have rx packets above 1522 - -Test Case 14: VM2VM virtio-net packed ring mergeable 8 queues CBDMA enable and PA mode test with large packet payload valid check -================================================================================================================================= - -1. Bind 16 CBDMA channels to vfio-pci, then launch vhost by below command:: - - rm -rf vhost-net* - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost \ - --vdev 'net_vhost0,iface=vhost-net0,queues=8,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.1;txq2@0000:00:04.2;txq3@0000:00:04.3;txq4@0000:00:04.4;txq5@0000:00:04.5;txq6@0000:00:04.6;txq7@0000:00:04.7]' \ - --vdev 'net_vhost1,iface=vhost-net1,queues=8,dmas=[txq0@0000:80:04.0;txq1@0000:80:04.1;txq2@0000:80:04.2;txq3@0000:80:04.3;txq4@0000:80:04.4;txq5@0000:80:04.5;txq6@0000:80:04.6;txq7@0000:80:04.7]' \ - --iova=pa -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8 - testpmd>vhost enable tx all - testpmd>start - -2. 
Launch VM1 and VM2 with qemu:: - - taskset -c 32 qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 8 -m 4096 \ - -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \ - -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04.img \ - -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \ - -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \ - -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ - -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \ - -chardev socket,id=char0,path=./vhost-net0 \ - -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \ - -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on -vnc :10 - - taskset -c 40 qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 8 -m 4096 \ - -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \ - -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04-2.img \ - -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \ - -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \ - -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ - -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \ - -chardev socket,id=char0,path=./vhost-net1 \ - -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \ - -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on -vnc :12 - -3. On VM1, set virtio device IP and run arp protocol:: - - ethtool -L ens5 combined 8 - ifconfig ens5 1.1.1.2 - arp -s 1.1.1.8 52:54:00:00:00:02 - -4. On VM2, set virtio device IP and run arp protocol:: - - ethtool -L ens5 combined 8 - ifconfig ens5 1.1.1.8 - arp -s 1.1.1.2 52:54:00:00:00:01 - -5. Scp 1MB file form VM1 to VM2:: - - Under VM1, run: `scp [xxx] root@1.1.1.8:/` [xxx] is the file name - -6. Check the iperf performance between two VMs by below commands:: - - Under VM1, run: `iperf -s -i 1` - Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60` - -7. Rerun step 5-6 five times. 
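
Note: every test case above ends with the same in-guest verification sequence: configure the virtio-net
interfaces and static ARP entries, optionally scp a large file between the guests, run iperf for 60 seconds,
and confirm on the vhost side that packets larger than 1522 bytes were exchanged. The lines below are a
minimal, illustrative sketch of that sequence, not an additional test step: the interface name ens5, the
IP/MAC pairs and the iperf options are taken from the steps above, while the payload file name and the way
the 1MB file is created are only placeholders::

    # Inside VM1 (iperf server, IP 1.1.1.2)
    ifconfig ens5 1.1.1.2
    arp -s 1.1.1.8 52:54:00:00:00:02
    dd if=/dev/urandom of=/root/payload bs=1M count=1    # placeholder 1MB test file
    iperf -s -i 1 > iperf_server.log &

    # Inside VM2 (iperf client, IP 1.1.1.8)
    ifconfig ens5 1.1.1.8
    arp -s 1.1.1.2 52:54:00:00:00:01
    iperf -c 1.1.1.2 -i 1 -t 60 > iperf_client.log

    # From VM1, copy the payload to VM2 and check the transfer succeeds
    scp /root/payload root@1.1.1.8:/root/

    # On the host testpmd session, counters for packets above 1522 bytes should increase
    testpmd> show port xstats all
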
From patchwork Wed Apr 6 09:09:58 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Ling, WeiX" X-Patchwork-Id: 109191 Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 124EAA0509; Wed, 6 Apr 2022 11:10:22 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 0592E40DF6; Wed, 6 Apr 2022 11:10:22 +0200 (CEST) Received: from mga03.intel.com (mga03.intel.com [134.134.136.65]) by mails.dpdk.org (Postfix) with ESMTP id D3DC840689 for ; Wed, 6 Apr 2022 11:10:18 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1649236220; x=1680772220; h=from:to:cc:subject:date:message-id:mime-version: content-transfer-encoding; bh=pKlsJEGktI3u+xVnxt2BNiwtAzhITJR5BF641VSJJq8=; b=DsMPpy07Sra14WYqeN3QK6ARHO3dFAv/wyIqO7lIwgTp5+9efnbGLHWL We13yv5w0wQJq+Wx/qB33kLMg/Bk9s030Wqx+uyY7XK+FoEz08RoNPbJm 0l3Ykgy8phi5Cu4UkOqC1qIddLxwDoqmzGybZkf951X8DB3bq8yV2+36o EI+UNA56g5ZoVJ4yDXeFdbX+YTNRRrQ4sPf5SLEzmv+L7A5c2Dk5OTe/I BVNtt4T16XToTMCAY3z+1SYKZTIU40EcNYNeN3RuHlm6+KKpcLvNC6xlr 0mKP9jGCW72Az0BtJrOPR7gHKGzHkZ9bksKbyhePTQ/3Uejnoz7ivjfbv w==; X-IronPort-AV: E=McAfee;i="6200,9189,10308"; a="260987063" X-IronPort-AV: E=Sophos;i="5.90,239,1643702400"; d="scan'208";a="260987063" Received: from orsmga008.jf.intel.com ([10.7.209.65]) by orsmga103.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 06 Apr 2022 02:10:08 -0700 X-IronPort-AV: E=Sophos;i="5.90,239,1643702400"; d="scan'208";a="570424900" Received: from unknown (HELO localhost.localdomain) ([10.239.251.222]) by orsmga008-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 06 Apr 2022 02:10:05 -0700 From: Wei Ling To: dts@dpdk.org Cc: Wei Ling Subject: [dts][PATCH V1 2/5] tests/vm2vm_virtio_net_perf: delete CBDMA test case Date: Wed, 6 Apr 2022 17:09:58 +0800 Message-Id: <20220406090958.28325-1-weix.ling@intel.com> X-Mailer: git-send-email 2.25.1 MIME-Version: 1.0 X-BeenThere: dts@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: test suite reviews and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dts-bounces@dpdk.org As commit 53d3f4778c(vhost: integrate dmadev in asynchronous data-path), delete cbdma related cases form tests/vm2vm_virtio_net_perf. Signed-off-by: Wei Ling --- tests/TestSuite_vm2vm_virtio_net_perf.py | 793 ++--------------------- 1 file changed, 44 insertions(+), 749 deletions(-) diff --git a/tests/TestSuite_vm2vm_virtio_net_perf.py b/tests/TestSuite_vm2vm_virtio_net_perf.py index 486f1acf..8c234c24 100644 --- a/tests/TestSuite_vm2vm_virtio_net_perf.py +++ b/tests/TestSuite_vm2vm_virtio_net_perf.py @@ -38,7 +38,6 @@ vm2vm split ring and packed ring vhost-user/virtio-net check the payload of larg mergeable and non-mergeable dequeue zero copy. please use qemu version greater 4.1.94 which support packed feathur to test this suite. 
""" -import random import re import string import time @@ -71,12 +70,6 @@ class TestVM2VMVirtioNetPerf(TestCase): self.vhost = self.dut.new_session(suite="vhost") self.pmd_vhost = PmdOutput(self.dut, self.vhost) self.app_testpmd_path = self.dut.apps_name["test-pmd"] - # get cbdma device - self.cbdma_dev_infos = [] - self.dmas_info = None - self.device_str = None - self.checked_vm = False - self.dut.restore_interfaces() def set_up(self): """ @@ -86,158 +79,29 @@ class TestVM2VMVirtioNetPerf(TestCase): self.vm_dut = [] self.vm = [] - def get_cbdma_ports_info_and_bind_to_dpdk( - self, cbdma_num=2, allow_diff_socket=False - ): - """ - get all cbdma ports - """ - out = self.dut.send_expect( - "./usertools/dpdk-devbind.py --status-dev dma", "# ", 30 - ) - device_info = out.split("\n") - for device in device_info: - pci_info = re.search("\s*(0000:\S*:\d*.\d*)", device) - if pci_info is not None: - dev_info = pci_info.group(1) - # the numa id of ioat dev, only add the device which on same socket with nic dev - bus = int(dev_info[5:7], base=16) - if bus >= 128: - cur_socket = 1 - else: - cur_socket = 0 - if allow_diff_socket: - self.cbdma_dev_infos.append(pci_info.group(1)) - else: - if self.ports_socket == cur_socket: - self.cbdma_dev_infos.append(pci_info.group(1)) - self.verify( - len(self.cbdma_dev_infos) >= cbdma_num, - "There no enough cbdma device to run this suite", - ) - used_cbdma = self.cbdma_dev_infos[0:cbdma_num] - dmas_info = "" - for dmas in used_cbdma[0 : int(cbdma_num / 2)]: - number = used_cbdma[0 : int(cbdma_num / 2)].index(dmas) - dmas = "txq{}@{},".format(number, dmas) - dmas_info += dmas - for dmas in used_cbdma[int(cbdma_num / 2) :]: - number = used_cbdma[int(cbdma_num / 2) :].index(dmas) - dmas = "txq{}@{},".format(number, dmas) - dmas_info += dmas - self.dmas_info = dmas_info[:-1] - self.device_str = " ".join(used_cbdma) - self.dut.send_expect( - "./usertools/dpdk-devbind.py --force --bind=%s %s" - % (self.drivername, self.device_str), - "# ", - 60, - ) - - def bind_cbdma_device_to_kernel(self): - if self.device_str is not None: - self.dut.send_expect("modprobe ioatdma", "# ") - self.dut.send_expect( - "./usertools/dpdk-devbind.py -u %s" % self.device_str, "# ", 30 - ) - self.dut.send_expect( - "./usertools/dpdk-devbind.py --force --bind=ioatdma %s" - % self.device_str, - "# ", - 60, - ) - - @property - def check_2m_env(self): - out = self.dut.send_expect( - "cat /proc/meminfo |grep Hugepagesize|awk '{print($2)}'", "# " - ) - return True if out == "2048" else False - def start_vhost_testpmd( self, - cbdma=False, no_pci=True, - client_mode=False, - enable_queues=1, - nb_cores=2, - rxq_txq=None, - exchange_cbdma=False, - iova_mode="", ): """ launch the testpmd with different parameters """ - if cbdma is True: - dmas_info_list = self.dmas_info.split(",") - cbdma_arg_0_list = [] - cbdma_arg_1_list = [] - for item in dmas_info_list: - if dmas_info_list.index(item) < int(len(dmas_info_list) / 2): - cbdma_arg_0_list.append(item) - else: - cbdma_arg_1_list.append(item) - cbdma_arg_0 = ",dmas=[{}]".format(";".join(cbdma_arg_0_list)) - cbdma_arg_1 = ",dmas=[{}]".format(";".join(cbdma_arg_1_list)) - else: - cbdma_arg_0 = "" - cbdma_arg_1 = "" testcmd = self.app_testpmd_path + " " - if not client_mode: - vdev1 = "--vdev 'net_vhost0,iface=%s/vhost-net0,queues=%d%s' " % ( - self.base_dir, - enable_queues, - cbdma_arg_0, - ) - vdev2 = "--vdev 'net_vhost1,iface=%s/vhost-net1,queues=%d%s' " % ( - self.base_dir, - enable_queues, - cbdma_arg_1, - ) - else: - vdev1 = "--vdev 
'net_vhost0,iface=%s/vhost-net0,client=1,queues=%d%s' " % ( - self.base_dir, - enable_queues, - cbdma_arg_0, - ) - vdev2 = "--vdev 'net_vhost1,iface=%s/vhost-net1,client=1,queues=%d%s' " % ( - self.base_dir, - enable_queues, - cbdma_arg_1, - ) - if exchange_cbdma: - vdev1 = "--vdev 'net_vhost0,iface=%s/vhost-net0,client=1,queues=%d%s' " % ( - self.base_dir, - enable_queues, - cbdma_arg_1, - ) - vdev2 = "--vdev 'net_vhost1,iface=%s/vhost-net1,client=1,queues=%d%s' " % ( - self.base_dir, - enable_queues, - cbdma_arg_0, - ) - + vdev1 = "--vdev 'net_vhost0,iface=%s/vhost-net0,queues=1' " % ( + self.base_dir + ) + vdev2 = "--vdev 'net_vhost1,iface=%s/vhost-net1,queues=1' " % ( + self.base_dir + ) eal_params = self.dut.create_eal_parameters( cores=self.cores_list, prefix="vhost", no_pci=no_pci ) - if rxq_txq is None: - params = " -- -i --nb-cores=%d --txd=1024 --rxd=1024" % nb_cores - else: - params = " -- -i --nb-cores=%d --txd=1024 --rxd=1024 --rxq=%d --txq=%d" % ( - nb_cores, - rxq_txq, - rxq_txq, - ) - if iova_mode: - iova_parm = " --iova=" + iova_mode - else: - iova_parm = "" - self.command_line = testcmd + eal_params + vdev1 + vdev2 + iova_parm + params + params = " -- -i --nb-cores=2 --txd=1024 --rxd=1024" + self.command_line = testcmd + eal_params + vdev1 + vdev2 + params self.pmd_vhost.execute_cmd(self.command_line, timeout=30) - self.pmd_vhost.execute_cmd("vhost enable tx all", timeout=30) self.pmd_vhost.execute_cmd("start", timeout=30) - def start_vms(self, server_mode=False, opt_queue=None, vm_config="vhost_sample"): + def start_vms(self, vm_config="vhost_sample"): """ start two VM, each VM has one virtio device """ @@ -246,12 +110,7 @@ class TestVM2VMVirtioNetPerf(TestCase): vm_info = VM(self.dut, "vm%d" % i, vm_config) vm_params = {} vm_params["driver"] = "vhost-user" - if not server_mode: - vm_params["opt_path"] = self.base_dir + "/vhost-net%d" % i - else: - vm_params["opt_path"] = self.base_dir + "/vhost-net%d" % i + ",server" - if opt_queue is not None: - vm_params["opt_queue"] = opt_queue + vm_params["opt_path"] = self.base_dir + "/vhost-net%d" % i vm_params["opt_mac"] = "52:54:00:00:00:0%d" % (i + 1) vm_params["opt_settings"] = self.vm_args vm_info.set_vm_device(**vm_params) @@ -265,23 +124,15 @@ class TestVM2VMVirtioNetPerf(TestCase): self.vm_dut.append(vm_dut) self.vm.append(vm_info) - def config_vm_env(self, combined=False, rxq_txq=1): + def config_vm_env(self): """ set virtio device IP and run arp protocal """ vm1_intf = self.vm_dut[0].ports_info[0]["intf"] vm2_intf = self.vm_dut[1].ports_info[0]["intf"] - if combined: - self.vm_dut[0].send_expect( - "ethtool -L %s combined %d" % (vm1_intf, rxq_txq), "#", 10 - ) self.vm_dut[0].send_expect( "ifconfig %s %s" % (vm1_intf, self.virtio_ip1), "#", 10 ) - if combined: - self.vm_dut[1].send_expect( - "ethtool -L %s combined %d" % (vm2_intf, rxq_txq), "#", 10 - ) self.vm_dut[1].send_expect( "ifconfig %s %s" % (vm2_intf, self.virtio_ip2), "#", 10 ) @@ -292,87 +143,22 @@ class TestVM2VMVirtioNetPerf(TestCase): "arp -s %s %s" % (self.virtio_ip1, self.virtio_mac1), "#", 10 ) - def prepare_test_env( - self, - cbdma=False, - no_pci=True, - client_mode=False, - enable_queues=1, - nb_cores=2, - server_mode=False, - opt_queue=None, - combined=False, - rxq_txq=None, - iova_mode="", - ): - """ - start vhost testpmd and qemu, and config the vm env - """ - self.start_vhost_testpmd( - cbdma=cbdma, - no_pci=no_pci, - client_mode=client_mode, - enable_queues=enable_queues, - nb_cores=nb_cores, - rxq_txq=rxq_txq, - iova_mode=iova_mode, - ) - 
self.start_vms(server_mode=server_mode, opt_queue=opt_queue) - self.config_vm_env(combined=combined, rxq_txq=rxq_txq) - def start_iperf(self, iperf_mode="tso"): """ run perf command between to vms """ # clear the port xstats before iperf self.vhost.send_expect("clear port xstats all", "testpmd> ", 10) - - # add -f g param, use Gbits/sec report teste result if iperf_mode == "tso": - iperf_server = "iperf -s -i 1" - iperf_client = "iperf -c 1.1.1.2 -i 1 -t 60" + server = "iperf -s -i 1" + client = "iperf -c 1.1.1.2 -i 1 -t 60" elif iperf_mode == "ufo": - iperf_server = "iperf -s -u -i 1" - iperf_client = "iperf -c 1.1.1.2 -i 1 -t 30 -P 4 -u -b 1G -l 9000" - self.vm_dut[0].send_expect("%s > iperf_server.log &" % iperf_server, "", 10) - self.vm_dut[1].send_expect("%s > iperf_client.log &" % iperf_client, "", 60) + server = "iperf -s -u -i 1" + client = "iperf -c 1.1.1.2 -i 1 -t 60 -P 4 -u -b 1G -l 9000" + self.vm_dut[0].send_expect("%s > iperf_server.log &" % server, "", 10) + self.vm_dut[1].send_expect("%s > iperf_client.log &" % client, "", 10) time.sleep(90) - def get_perf_result(self): - """ - get the iperf test result - """ - self.table_header = ["Mode", "[M|G]bits/sec"] - self.result_table_create(self.table_header) - self.vm_dut[0].send_expect("pkill iperf", "# ") - self.vm_dut[1].session.copy_file_from("%s/iperf_client.log" % self.dut.base_dir) - fp = open("./iperf_client.log") - fmsg = fp.read() - fp.close() - # remove the server report info from msg - index = fmsg.find("Server Report") - if index != -1: - fmsg = fmsg[:index] - iperfdata = re.compile("\S*\s*[M|G]bits/sec").findall(fmsg) - # the last data of iperf is the ave data from 0-30 sec - self.verify(len(iperfdata) != 0, "The iperf data between to vms is 0") - self.verify( - (iperfdata[-1]).split()[1] == "Gbits/sec", - "The iperf data is %s,Can't reach Gbits/sec" % iperfdata[-1], - ) - self.logger.info("The iperf data between vms is %s" % iperfdata[-1]) - - # put the result to table - results_row = ["vm2vm", iperfdata[-1]] - self.result_table_add(results_row) - - # print iperf resut - self.result_table_print() - # rm the iperf log file in vm - self.vm_dut[0].send_expect("rm iperf_server.log", "#", 10) - self.vm_dut[1].send_expect("rm iperf_client.log", "#", 10) - return float(iperfdata[-1].split()[0]) - def verify_xstats_info_on_vhost(self): """ check both 2VMs can receive and send big packets to each other @@ -390,16 +176,6 @@ class TestVM2VMVirtioNetPerf(TestCase): int(tx_info.group(1)) > 0, "Port 0 not forward packet greater than 1522" ) - def start_iperf_and_verify_vhost_xstats_info(self, iperf_mode="tso"): - """ - start to send packets and verify vm can received data of iperf - and verify the vhost can received big pkts in testpmd - """ - self.start_iperf(iperf_mode) - iperfdata = self.get_perf_result() - self.verify_xstats_info_on_vhost() - return iperfdata - def stop_all_apps(self): for i in range(len(self.vm)): self.vm[i].stop() @@ -434,557 +210,76 @@ class TestVM2VMVirtioNetPerf(TestCase): "tx-tcp6-segmentation in vm not right", ) - def check_scp_file_valid_between_vms(self, file_size=1024): - """ - scp file form VM1 to VM2, check the data is valid - """ - # default file_size=1024K - data = "" - for char in range(file_size * 1024): - data += random.choice(self.random_string) - self.vm_dut[0].send_expect('echo "%s" > /tmp/payload' % data, "# ") - # scp this file to vm1 - out = self.vm_dut[1].send_command( - "scp root@%s:/tmp/payload /root" % self.virtio_ip1, timeout=5 - ) - if "Are you sure you want to continue 
connecting" in out: - self.vm_dut[1].send_command("yes", timeout=3) - self.vm_dut[1].send_command(self.vm[0].password, timeout=3) - # get the file info in vm1, and check it valid - md5_send = self.vm_dut[0].send_expect("md5sum /tmp/payload", "# ") - md5_revd = self.vm_dut[1].send_expect("md5sum /root/payload", "# ") - md5_send = md5_send[: md5_send.find(" ")] - md5_revd = md5_revd[: md5_revd.find(" ")] - self.verify( - md5_send == md5_revd, "the received file is different with send file" - ) - def test_vm2vm_split_ring_iperf_with_tso(self): """ TestCase1: VM2VM split ring vhost-user/virtio-net test with tcp traffic """ self.vm_args = "disable-modern=false,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on" - self.prepare_test_env( - cbdma=False, - no_pci=True, - client_mode=False, - enable_queues=1, - nb_cores=2, - server_mode=False, - opt_queue=1, - combined=False, - rxq_txq=None, - ) - self.start_iperf_and_verify_vhost_xstats_info(iperf_mode="tso") - - def test_vm2vm_split_ring_with_tso_and_cbdma_enable(self): - """ - TestCase2: VM2VM split ring vhost-user/virtio-net CBDMA enable test with tcp traffic - """ - self.vm_args = "disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on" - self.get_cbdma_ports_info_and_bind_to_dpdk(cbdma_num=2) - self.prepare_test_env( - cbdma=True, - no_pci=False, - client_mode=False, - enable_queues=1, - nb_cores=2, - server_mode=False, - opt_queue=1, - combined=False, - rxq_txq=None, - ) - cbdma_value = self.start_iperf_and_verify_vhost_xstats_info(iperf_mode="tso") - expect_value = self.get_suite_cfg()["expected_throughput"][ - "test_vm2vm_split_ring_iperf_with_tso" - ] - self.verify( - cbdma_value > expect_value, - "CBDMA enable performance: %s is lower than CBDMA disable: %s." 
- % (cbdma_value, expect_value), - ) + self.start_vhost_testpmd() + self.start_vms() + self.config_vm_env() + self.start_iperf(iperf_mode='tso') + self.verify_xstats_info_on_vhost() def test_vm2vm_split_ring_iperf_with_ufo(self): """ - TestCase3: VM2VM split ring vhost-user/virtio-net test with udp traffic + TestCase2: VM2VM split ring vhost-user/virtio-net test with udp traffic """ self.vm_args = "disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on" - self.prepare_test_env( - cbdma=False, - no_pci=True, - client_mode=False, - enable_queues=1, - nb_cores=1, - server_mode=False, - opt_queue=1, - combined=False, - rxq_txq=None, - ) - self.start_iperf_and_verify_vhost_xstats_info(iperf_mode="ufo") + self.start_vhost_testpmd() + self.start_vms() + self.config_vm_env() + self.start_iperf(iperf_mode='ufo') + self.verify_xstats_info_on_vhost() def test_vm2vm_split_ring_device_capbility(self): """ - TestCase4: Check split ring virtio-net device capability + TestCase3: Check split ring virtio-net device capability """ self.vm_args = "disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on" - self.start_vhost_testpmd( - cbdma=False, - no_pci=True, - client_mode=False, - enable_queues=1, - nb_cores=2, - rxq_txq=None, - ) + self.start_vhost_testpmd() self.start_vms() self.offload_capbility_check(self.vm_dut[0]) self.offload_capbility_check(self.vm_dut[1]) - def test_vm2vm_split_ring_with_mergeable_path_check_large_packet_and_cbdma_enable_8queue( - self, - ): - """ - TestCase5: VM2VM virtio-net split ring mergeable CBDMA enable test with large packet payload valid check - """ - ipef_result = [] - self.get_cbdma_ports_info_and_bind_to_dpdk(cbdma_num=16, allow_diff_socket=True) - - self.logger.info("Launch vhost with CBDMA and with 8 queue with VA mode") - self.vm_args = "disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on" - self.prepare_test_env( - cbdma=True, - no_pci=False, - client_mode=True, - enable_queues=8, - nb_cores=4, - server_mode=True, - opt_queue=8, - combined=True, - rxq_txq=8, - iova_mode="va", - ) - self.check_scp_file_valid_between_vms() - iperf_data_cbdma_enable_8_queue = self.start_iperf_and_verify_vhost_xstats_info( - iperf_mode="tso" - ) - ipef_result.append( - [ - "Enable", - "mergeable path with VA mode", - 8, - iperf_data_cbdma_enable_8_queue, - ] - ) - - self.logger.info("Re-launch and exchange CBDMA and with 8 queue with VA mode") - self.vhost.send_expect("quit", "# ", 30) - self.start_vhost_testpmd( - cbdma=True, - no_pci=False, - client_mode=True, - enable_queues=8, - nb_cores=4, - rxq_txq=8, - exchange_cbdma=True, - iova_mode="va", - ) - self.check_scp_file_valid_between_vms() - iperf_data_cbdma_enable_8_queue_exchange = ( - self.start_iperf_and_verify_vhost_xstats_info(iperf_mode="tso") - ) - ipef_result.append( - [ - "Disable", - "mergeable path exchange CBDMA with VA mode", - 8, - iperf_data_cbdma_enable_8_queue_exchange, - ] - ) - - # This test step need to test on 1G guest hugepage ENV. 
- if not self.check_2m_env: - self.logger.info( - "Re-launch and exchange CBDMA and with 8 queue with PA mode" - ) - self.vhost.send_expect("quit", "# ", 30) - self.start_vhost_testpmd( - cbdma=True, - no_pci=False, - client_mode=True, - enable_queues=8, - nb_cores=4, - rxq_txq=8, - exchange_cbdma=True, - iova_mode="pa", - ) - self.check_scp_file_valid_between_vms() - iperf_data_cbdma_enable_8_queue_exchange_pa = ( - self.start_iperf_and_verify_vhost_xstats_info(iperf_mode="tso") - ) - ipef_result.append( - [ - "Disable", - "mergeable path exchange CBDMA with PA mode", - 8, - iperf_data_cbdma_enable_8_queue_exchange_pa, - ] - ) - - self.logger.info("Re-launch without CBDMA and with 4 queue") - self.vhost.send_expect("quit", "# ", 30) - self.start_vhost_testpmd( - cbdma=False, - no_pci=False, - client_mode=True, - enable_queues=4, - nb_cores=4, - rxq_txq=4, - ) - self.config_vm_env(combined=True, rxq_txq=4) - self.check_scp_file_valid_between_vms() - iperf_data_cbdma_disable_4_queue = ( - self.start_iperf_and_verify_vhost_xstats_info(iperf_mode="tso") - ) - ipef_result.append( - [ - "Disable", - "mergeable path without CBDMA with 4 queue", - 4, - iperf_data_cbdma_disable_4_queue, - ] - ) - - self.logger.info("Re-launch without CBDMA and with 1 queue") - self.vhost.send_expect("quit", "# ", 30) - self.start_vhost_testpmd( - cbdma=False, - no_pci=False, - client_mode=True, - enable_queues=4, - nb_cores=4, - rxq_txq=1, - ) - self.config_vm_env(combined=True, rxq_txq=1) - self.check_scp_file_valid_between_vms() - iperf_data_cbdma_disable_1_queue = ( - self.start_iperf_and_verify_vhost_xstats_info(iperf_mode="tso") - ) - ipef_result.append( - [ - "Disable", - "mergeable path without CBDMA with 1 queue", - 1, - iperf_data_cbdma_disable_1_queue, - ] - ) - - self.table_header = ["CBDMA Enable/Disable", "Mode", "rxq/txq", "Gbits/sec"] - self.result_table_create(self.table_header) - for table_row in ipef_result: - self.result_table_add(table_row) - self.result_table_print() - self.verify( - iperf_data_cbdma_enable_8_queue > iperf_data_cbdma_disable_4_queue, - "CMDMA enable: %s is lower than CBDMA disable: %s" - % (iperf_data_cbdma_enable_8_queue, iperf_data_cbdma_disable_4_queue), - ) - - def test_vm2vm_split_ring_with_no_mergeable_path_check_large_packet_and_cbdma_enable_8queue( - self, - ): - """ - TestCase6: VM2VM virtio-net split ring non-mergeable CBDMA enable test with large packet payload valid check - """ - ipef_result = [] - self.get_cbdma_ports_info_and_bind_to_dpdk(cbdma_num=16, allow_diff_socket=True) - - self.logger.info("Launch vhost-testpmd with CBDMA and used 8 queue") - self.vm_args = "disable-modern=false,mrg_rxbuf=off,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on" - self.prepare_test_env( - cbdma=True, - no_pci=False, - client_mode=True, - enable_queues=8, - nb_cores=4, - server_mode=True, - opt_queue=8, - combined=True, - rxq_txq=8, - ) - self.check_scp_file_valid_between_vms() - iperf_data_cbdma_enable_8_queue = self.start_iperf_and_verify_vhost_xstats_info( - iperf_mode="tso" - ) - ipef_result.append( - ["Enable", "no-mergeable path", 8, iperf_data_cbdma_enable_8_queue] - ) - - self.logger.info("Re-launch without CBDMA and used 8 queue") - self.vhost.send_expect("quit", "# ", 30) - self.start_vhost_testpmd( - cbdma=False, - no_pci=False, - client_mode=True, - enable_queues=8, - nb_cores=4, - rxq_txq=8, - ) - self.check_scp_file_valid_between_vms() - iperf_data_cbdma_disable_8_queue = ( - 
self.start_iperf_and_verify_vhost_xstats_info(iperf_mode="tso") - ) - ipef_result.append( - ["Disable", "no-mergeable path", 8, iperf_data_cbdma_disable_8_queue] - ) - - self.logger.info("Re-launch without CBDMA and used 1 queue") - self.vhost.send_expect("quit", "# ", 30) - self.start_vhost_testpmd( - cbdma=False, - no_pci=False, - client_mode=True, - enable_queues=8, - nb_cores=4, - rxq_txq=1, - ) - self.config_vm_env(combined=True, rxq_txq=1) - self.check_scp_file_valid_between_vms() - iperf_data_cbdma_disable_1_queue = ( - self.start_iperf_and_verify_vhost_xstats_info(iperf_mode="tso") - ) - ipef_result.append( - ["Disable", "no-mergeable path", 1, iperf_data_cbdma_disable_1_queue] - ) - - self.table_header = ["CBDMA Enable/Disable", "Mode", "rxq/txq", "Gbits/sec"] - self.result_table_create(self.table_header) - for table_row in ipef_result: - self.result_table_add(table_row) - self.result_table_print() - self.verify( - iperf_data_cbdma_enable_8_queue > iperf_data_cbdma_disable_8_queue, - "CMDMA enable: %s is lower than CBDMA disable: %s" - % (iperf_data_cbdma_enable_8_queue, iperf_data_cbdma_disable_8_queue), - ) - def test_vm2vm_packed_ring_iperf_with_tso(self): """ - TestCase7: VM2VM packed ring vhost-user/virtio-net test with tcp traffic - """ - self.vm_args = "disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on" - self.prepare_test_env( - cbdma=False, - no_pci=True, - client_mode=False, - enable_queues=1, - nb_cores=2, - server_mode=False, - opt_queue=1, - combined=False, - rxq_txq=None, - ) - self.start_iperf_and_verify_vhost_xstats_info(iperf_mode="tso") - - def test_vm2vm_packed_ring_iperf_with_tso_and_cbdma_enable(self): - """ - TestCase8: VM2VM packed ring vhost-user/virtio-net CBDMA enable test with tcp traffic + TestCase4: VM2VM packed ring vhost-user/virtio-net test with tcp traffic """ - self.get_cbdma_ports_info_and_bind_to_dpdk(cbdma_num=2) self.vm_args = "disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on" - self.prepare_test_env( - cbdma=True, - no_pci=False, - client_mode=False, - enable_queues=1, - nb_cores=2, - server_mode=False, - opt_queue=None, - combined=False, - rxq_txq=None, - ) - self.start_iperf_and_verify_vhost_xstats_info(iperf_mode="tso") + self.start_vhost_testpmd() + self.start_vms() + self.config_vm_env() + self.start_iperf(iperf_mode='tso') + self.verify_xstats_info_on_vhost() def test_vm2vm_packed_ring_iperf_with_ufo(self): """ - Test Case 9: VM2VM packed ring vhost-user/virtio-net test with udp trafficc + Test Case 5: VM2VM packed ring vhost-user/virtio-net test with udp trafficc """ self.vm_args = "disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on" - self.prepare_test_env( - cbdma=False, - no_pci=True, - client_mode=False, - enable_queues=1, - nb_cores=2, - server_mode=False, - opt_queue=None, - combined=False, - rxq_txq=None, - ) - self.start_iperf_and_verify_vhost_xstats_info(iperf_mode="ufo") + self.start_vhost_testpmd() + self.start_vms() + self.config_vm_env() + self.start_iperf(iperf_mode='ufo') + self.verify_xstats_info_on_vhost() def test_vm2vm_packed_ring_device_capbility(self): """ - Test Case 10: Check packed ring virtio-net device capability + Test Case 6: Check packed ring virtio-net device capability """ self.vm_args = "disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on" - self.start_vhost_testpmd( - cbdma=False, - 
no_pci=True, - client_mode=False, - enable_queues=1, - nb_cores=2, - rxq_txq=None, - ) + self.start_vhost_testpmd() self.start_vms() self.offload_capbility_check(self.vm_dut[0]) self.offload_capbility_check(self.vm_dut[1]) - def test_vm2vm_packed_ring_with_mergeable_path_check_large_packet_and_cbdma_enable_8queue( - self, - ): - """ - Test Case 11: VM2VM virtio-net packed ring mergeable 8 queues CBDMA enable test with large packet payload valid check - """ - ipef_result = [] - self.get_cbdma_ports_info_and_bind_to_dpdk(cbdma_num=16, allow_diff_socket=True) - - self.logger.info("Launch vhost-testpmd with CBDMA and used 8 queue") - self.vm_args = "disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on" - self.prepare_test_env( - cbdma=True, - no_pci=False, - client_mode=False, - enable_queues=8, - nb_cores=4, - server_mode=False, - opt_queue=8, - combined=True, - rxq_txq=8, - ) - for i in range(0, 5): - self.check_scp_file_valid_between_vms() - iperf_data_cbdma_enable_8_queue = ( - self.start_iperf_and_verify_vhost_xstats_info(iperf_mode="tso") - ) - ipef_result.append( - ["Enable_%d" % i, "mergeable path", 8, iperf_data_cbdma_enable_8_queue] - ) - self.table_header = ["CBDMA Enable/Disable", "Mode", "rxq/txq", "Gbits/sec"] - self.result_table_create(self.table_header) - for table_row in ipef_result: - self.result_table_add(table_row) - self.result_table_print() - - def test_vm2vm_packed_ring_with_no_mergeable_path_check_large_packet_and_cbdma_enable_8queue( - self, - ): - """ - Test Case 12: VM2VM virtio-net packed ring non-mergeable 8 queues CBDMA enable test with large packet payload valid check - """ - ipef_result = [] - self.get_cbdma_ports_info_and_bind_to_dpdk(cbdma_num=16, allow_diff_socket=True) - - self.logger.info("Launch vhost-testpmd with CBDMA and used 8 queue") - self.vm_args = "disable-modern=false,mrg_rxbuf=off,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on" - self.prepare_test_env( - cbdma=True, - no_pci=False, - client_mode=False, - enable_queues=8, - nb_cores=4, - server_mode=False, - opt_queue=8, - combined=True, - rxq_txq=8, - ) - for i in range(0, 5): - self.check_scp_file_valid_between_vms() - iperf_data_cbdma_enable_8_queue = ( - self.start_iperf_and_verify_vhost_xstats_info(iperf_mode="tso") - ) - ipef_result.append( - ["Enable", "mergeable path", 8, iperf_data_cbdma_enable_8_queue] - ) - self.table_header = ["CBDMA Enable/Disable", "Mode", "rxq/txq", "Gbits/sec"] - self.result_table_create(self.table_header) - for table_row in ipef_result: - self.result_table_add(table_row) - self.result_table_print() - - def test_vm2vm_packed_ring_with_tso_and_cbdma_enable_iova_pa(self): - """ - Test Case 13: VM2VM packed ring vhost-user/virtio-net CBDMA enable test with tcp traffic when set iova=pa - """ - # This test case need to test on 1G guest hugepage ENV. 
- self.vm_args = "disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on" - self.get_cbdma_ports_info_and_bind_to_dpdk(cbdma_num=2) - self.prepare_test_env( - cbdma=True, - no_pci=False, - client_mode=False, - enable_queues=1, - nb_cores=2, - server_mode=False, - opt_queue=1, - combined=False, - rxq_txq=None, - iova_mode="pa", - ) - self.check_scp_file_valid_between_vms() - cbdma_value = self.start_iperf_and_verify_vhost_xstats_info(iperf_mode="tso") - expect_value = self.get_suite_cfg()["expected_throughput"][ - "test_vm2vm_split_ring_iperf_with_tso" - ] - self.verify( - cbdma_value > expect_value, - "CBDMA enable performance: %s is lower than CBDMA disable: %s." - % (cbdma_value, expect_value), - ) - - def test_vm2vm_packed_ring_with_mergeable_path_check_large_packet_and_cbdma_enable_8queue_iova_pa( - self, - ): - """ - Test Case 14: VM2VM virtio-net packed ring mergeable 8 queues CBDMA enable and PA mode test with large packet payload valid check - """ - # This test case need to test on 1G guest hugepage ENV. - ipef_result = [] - self.get_cbdma_ports_info_and_bind_to_dpdk(cbdma_num=16, allow_diff_socket=True) - - self.logger.info("Launch vhost-testpmd with CBDMA and used 8 queue") - self.vm_args = "disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on" - self.prepare_test_env( - cbdma=True, - no_pci=False, - client_mode=False, - enable_queues=8, - nb_cores=4, - server_mode=False, - opt_queue=8, - combined=True, - rxq_txq=8, - iova_mode="pa", - ) - for i in range(0, 5): - self.check_scp_file_valid_between_vms() - iperf_data_cbdma_enable_8_queue = ( - self.start_iperf_and_verify_vhost_xstats_info(iperf_mode="tso") - ) - ipef_result.append( - ["Enable_%d" % i, "mergeable path", 8, iperf_data_cbdma_enable_8_queue] - ) - self.table_header = ["CBDMA Enable/Disable", "Mode", "rxq/txq", "Gbits/sec"] - self.result_table_create(self.table_header) - for table_row in ipef_result: - self.result_table_add(table_row) - self.result_table_print() - def tear_down(self): """ run after each test case. 
""" self.stop_all_apps() self.dut.kill_all() - self.bind_cbdma_device_to_kernel() def tear_down_all(self): """ From patchwork Wed Apr 6 09:10:14 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Ling, WeiX" X-Patchwork-Id: 109192 Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 424B2A0509; Wed, 6 Apr 2022 11:10:27 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 3A6CD40E2D; Wed, 6 Apr 2022 11:10:27 +0200 (CEST) Received: from mga02.intel.com (mga02.intel.com [134.134.136.20]) by mails.dpdk.org (Postfix) with ESMTP id 0B40040689 for ; Wed, 6 Apr 2022 11:10:24 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1649236225; x=1680772225; h=from:to:cc:subject:date:message-id:mime-version: content-transfer-encoding; bh=MjsJrMyhEVlmF2X7WeiaW8l5CkFXdyJmzQ4p7n3XvTg=; b=lxXxoz2312/I5aHszZcilCwjs2traxzgBWc6W9+jD8pyHzDTNs9ZxwJP /SAJcDcmAcaP0MH7dsOqaQNypnKEb+0wt+9zEJ1LoC81F2eSSSjdDcY9y FQWhihSrd85nqDgyIq1R4c8q6QyZGntRoX7Fdxkrv91Vo/9DfR0+8ut6R aPeo5NZA5IwpYg5Nyu5gkt2IR56wcGFBkza1lu9ci5znE467V5aSBCF9G thkDMyQX9jvflNafqo3Bt6j6fmIWPzTyn5il0uPkf5T9aWVvmbu8tq5PZ 1XOTS/RbPlwERJuXLs2vuY6iRiNBY3Fx4ylAttcTcwegaZTWrYXpvFIGV w==; X-IronPort-AV: E=McAfee;i="6200,9189,10308"; a="248515531" X-IronPort-AV: E=Sophos;i="5.90,239,1643702400"; d="scan'208";a="248515531" Received: from orsmga008.jf.intel.com ([10.7.209.65]) by orsmga101.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 06 Apr 2022 02:10:24 -0700 X-IronPort-AV: E=Sophos;i="5.90,239,1643702400"; d="scan'208";a="570425003" Received: from unknown (HELO localhost.localdomain) ([10.239.251.222]) by orsmga008-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 06 Apr 2022 02:10:21 -0700 From: Wei Ling To: dts@dpdk.org Cc: Wei Ling Subject: [dts][PATCH V1 3/5] test_plans/vm2vm_virtio_net_perf_cbdma_test_plan: add DPDK22.03 new feature Date: Wed, 6 Apr 2022 17:10:14 +0800 Message-Id: <20220406091014.28383-1-weix.ling@intel.com> X-Mailer: git-send-email 2.25.1 MIME-Version: 1.0 X-BeenThere: dts@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: test suite reviews and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dts-bounces@dpdk.org As commit 53d3f4778c(vhost: integrate dmadev in asynchronous data-path), add new test_plan/vm2vm_virtio_net_perf_cbdma_test_plan. Signed-off-by: Wei Ling --- .../vm2vm_virtio_net_perf_cbdma_test_plan.rst | 876 ++++++++++++++++++ 1 file changed, 876 insertions(+) create mode 100644 test_plans/vm2vm_virtio_net_perf_cbdma_test_plan.rst diff --git a/test_plans/vm2vm_virtio_net_perf_cbdma_test_plan.rst b/test_plans/vm2vm_virtio_net_perf_cbdma_test_plan.rst new file mode 100644 index 00000000..823d09fc --- /dev/null +++ b/test_plans/vm2vm_virtio_net_perf_cbdma_test_plan.rst @@ -0,0 +1,876 @@ +.. Copyright (c) <2022>, Intel Corporation + All rights reserved. + + Redistribution and use in source and binary forms, with or without + modification, are permitted provided that the following conditions + are met: + + - Redistributions of source code must retain the above copyright + notice, this list of conditions and the following disclaimer. 
+
+   - Redistributions in binary form must reproduce the above copyright
+     notice, this list of conditions and the following disclaimer in
+     the documentation and/or other materials provided with the
+     distribution.
+
+   - Neither the name of Intel Corporation nor the names of its
+     contributors may be used to endorse or promote products derived
+     from this software without specific prior written permission.
+
+   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
+   FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
+   COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
+   INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+   (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+   SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+   HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
+   STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+   ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
+   OF THE POSSIBILITY OF SUCH DAMAGE.
+
+=====================================
+vm2vm vhost-user/virtio-net test plan
+=====================================
+
+Description
+===========
+
+This test plan tests several features in a VM2VM topology:
+1. Check the Vhost tx offload function by verifying the TSO/cksum in the TCP/IP stack with vm2vm split ring and
+packed ring vhost-user/virtio-net mergeable path with CBDMA channel.
+2. Multi-queues number dynamic change in vm2vm vhost-user/virtio-net with split ring and packed ring when
+vhost enqueue operation works with multi-CBDMA channels.
+Note:
+1. For packed virtqueue virtio-net tests, the QEMU version must be > 4.2.0 and the VM kernel version must be > v5.1.
+2. For split virtqueue virtio-net with multi-queues server mode tests, the QEMU version must be > LTS 4.2.1,
+due to a reconnect issue in older QEMU when testing with multi-queues.
+3. For PA mode, page by page mapping may exceed the IOMMU's max capability, so it is better to use 1G guest hugepages.
+
+For more about the dpdk-testpmd sample, please refer to the DPDK documents:
+https://doc.dpdk.org/guides/testpmd_app_ug/run_app.html
+
+For the virtio-user vdev parameters, you can refer to the DPDK documents:
+https://doc.dpdk.org/guides/nics/virtio.html#virtio-paths-selection-and-usage.
+
+Prerequisites
+=============
+
+Topology
+--------
+   Test flow: Virtio-net-->Vhost-->Testpmd-->Vhost-->Virtio-net
+
+Hardware
+--------
+   Supported NICs: ALL
+
+Software
+--------
+   Trex: http://trex-tgn.cisco.com/trex/release/v2.26.tar.gz
+
+General set up
+--------------
+1. Compile DPDK::
+
+   # CC=gcc meson --werror -Denable_kmods=True -Dlibdir=lib -Dexamples=all --default-library=
+   # ninja -C -j 110
+
+2. Get the PCI device ID and DMA device ID of the DUT, for example, 0000:18:00.0 is a PCI device ID and 0000:00:04.0, 0000:00:04.1 are DMA device IDs::
+
+   # ./usertools/dpdk-devbind.py -s
+
+   Network devices using kernel driver
+   ===================================
+   0000:18:00.0 'Device 159b' if=ens785f0 drv=ice unused=vfio-pci
+
+   DMA devices using kernel driver
+   ===============================
+   0000:00:04.0 'Sky Lake-E CBDMA Registers 2021' drv=ioatdma unused=vfio-pci
+   0000:00:04.1 'Sky Lake-E CBDMA Registers 2021' drv=ioatdma unused=vfio-pci
+
+Test case
+=========
+
+Common steps
+------------
+1. Bind 2 CBDMA channels to vfio-pci::
+
+   # ./usertools/dpdk-devbind.py -b vfio-pci
+
+   For example, to bind 2 CBDMA channels::
+   # ./usertools/dpdk-devbind.py -b vfio-pci 0000:00:04.0,0000:00:04.1
+
+Test Case 1: VM2VM split ring vhost-user/virtio-net CBDMA enable test with tcp traffic
+----------------------------------------------------------------------------------------
+This case uses testpmd, QEMU and iperf to test the split ring path with CBDMA enabled and get the throughput between 2 VMs.
+
+1. Bind 2 CBDMA channels to vfio-pci, as common step 1.
+
+2. Launch vhost by below command::
+
+   # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-4 -n 4 --file-prefix=vhost \
+   -a 0000:00:04.0 -a 0000:00:04.1 \
+   --vdev 'net_vhost0,iface=vhost-net0,queues=1,dmas=[txq0],dma_ring_size=2048' \
+   --vdev 'net_vhost1,iface=vhost-net1,queues=1,dmas=[txq0],dma_ring_size=2048' \
+   --iova=va -- -i --nb-cores=2 --txd=1024 --rxd=1024 --txq=1 --rxq=1 --lcore-dma=[lcore3@0000:00:04.0,lcore4@0000:00:04.1]
+   testpmd>start
+
+3. Launch VM1 and VM2::
+
+   taskset -c 32 qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 1 -m 4096 \
+   -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge1G0,share=on \
+   -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04.img \
+   -chardev socket,path=/tmp/vm1_qga0.sock,server,nowait,id=vm1_qga0 -device virtio-serial \
+   -device virtserialport,chardev=vm1_qga0,name=org.qemu.guest_agent.2 -daemonize \
+   -monitor unix:/tmp/vm1_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+   -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \
+   -chardev socket,id=char0,path=./vhost-net0 \
+   -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
+   -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,\
+   csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on -vnc :10
+
+   taskset -c 33 qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 1 -m 4096 \
+   -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge1G1,share=on \
+   -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04-2.img \
+   -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
+   -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
+   -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+   -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \
+   -chardev socket,id=char0,path=./vhost-net1 \
+   -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
+   -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,\
+   csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on -vnc :12
+
+4. On VM1, set virtio device IP and run arp protocol::
+
+   ifconfig ens5 1.1.1.2
+   arp -s 1.1.1.8 52:54:00:00:00:02
+
+5. On VM2, set virtio device IP and run arp protocol::
+
+   ifconfig ens5 1.1.1.8
+   arp -s 1.1.1.2 52:54:00:00:00:01
+
+6. Check the iperf performance between two VMs by below commands::
+
+   Under VM1, run: `iperf -s -i 1`
+   Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`
+
+7. 
Check 2VMs can receive and send big packets to each other:: + + testpmd>show port xstats all + Port 0 should have tx packets above 1522 + Port 1 should have rx packets above 1522 + +Test Case 2: VM2VM split ring vhost-user/virtio-net mergeable 8 queues CBDMA enable test with large packet payload valid check +------------------------------------------------------------------------------------------------------------------------------ +This case uses testpmd and QEMU and iperf to test split ring mergeable path with 8 queues and CBDMA enable to get throughput between 2 VMs. + +1. Bind 16 CBDMA channels to vfio-pci, as common step 1. + +2. Launch vhost by below command:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost \ + -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \ + -a 0000:80:04.0 -a 0000:80:04.1 -a 0000:80:04.2 -a 0000:80:04.3 -a 0000:80:04.4 -a 0000:80:04.5 -a 0000:80:04.6 -a 0000:80:04.7 \ + --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7]' \ + --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7]' \ + --iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --txq=8 --rxq=8 \ + --lcore-dma=[lcore2@0000:00:04.0,lcore2@0000:00:04.1,lcore2@0000:00:04.2,lcore2@0000:00:04.3,lcore2@0000:00:04.4,lcore2@0000:00:04.5,lcore3@0000:00:04.6,lcore3@0000:00:04.7,lcore4@0000:80:04.0,lcore4@0000:80:04.1,lcore4@0000:80:04.2,lcore4@0000:80:04.3,lcore4@0000:80:04.4,lcore4@0000:80:04.5,lcore4@0000:80:04.6,lcore5@0000:80:04.7] + testpmd>start + +3. Launch VM1 and VM2 using qemu:: + + taskset -c 32 qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 8 -m 4096 \ + -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \ + -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04.img \ + -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \ + -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \ + -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ + -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \ + -chardev socket,id=char0,path=./vhost-net0,server \ + -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \ + -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,\ + mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :10 + + taskset -c 40 qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 8 -m 4096 \ + -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \ + -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04-2.img \ + -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \ + -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \ + -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ + -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \ + -chardev socket,id=char0,path=./vhost-net1,server \ + -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \ + -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,\ + mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :12 + +4. 
On VM1, set virtio device IP and run arp protocal:: + + ethtool -L ens5 combined 8 + ifconfig ens5 1.1.1.2 + arp -s 1.1.1.8 52:54:00:00:00:02 + +5. On VM2, set virtio device IP and run arp protocal:: + + ethtool -L ens5 combined 8 + ifconfig ens5 1.1.1.8 + arp -s 1.1.1.2 52:54:00:00:00:01 + +6. Scp 1MB file form VM1 to VM2:: + + Under VM1, run: `scp [xxx] root@1.1.1.8:/` [xxx] is the file name + +7. Check the iperf performance between two VMs by below commands:: + + Under VM1, run: `iperf -s -i 1` + Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60` + +8. Quit and relaunch vhost w/ diff CBDMA channels:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost \ + -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \ + -a 0000:80:04.0 -a 0000:80:04.1 -a 0000:80:04.2 -a 0000:80:04.3 -a 0000:80:04.4 -a 0000:80:04.5 -a 0000:80:04.6 -a 0000:80:04.7 \ + --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6]' \ + --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq1;txq2;txq3;txq4;txq5;txq6;txq7]' \ + --iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --txq=8 --rxq=8 \ + --lcore-dma=[lcore2@0000:00:04.0,lcore2@0000:00:04.1,lcore2@0000:00:04.2,lcore2@0000:00:04.3,lcore3@0000:00:04.0,lcore3@0000:00:04.2,lcore3@0000:00:04.4,lcore3@0000:00:04.5,lcore3@0000:00:04.6,lcore3@0000:00:04.7,lcore4@0000:00:04.1,lcore4@0000:00:04.3,lcore4@0000:80:04.0,lcore4@0000:80:04.1,lcore4@0000:80:04.2,lcore4@0000:80:04.3,lcore4@0000:80:04.4,lcore4@0000:80:04.5,lcore4@0000:80:04.6,lcore5@0000:80:04.7] + testpmd>start + +9. Rerun step 5-6. + +10. Quit and relaunch vhost w/ iova=pa:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost \ + -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \ + -a 0000:80:04.0 -a 0000:80:04.1 -a 0000:80:04.2 -a 0000:80:04.3 -a 0000:80:04.4 -a 0000:80:04.5 -a 0000:80:04.6 -a 0000:80:04.7 \ + --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6]' \ + --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6]' \ + --iova=pa -- -i --nb-cores=4 --txd=1024 --rxd=1024 --txq=8 --rxq=8 \ + --lcore-dma=[lcore2@0000:00:04.0,lcore2@0000:00:04.1,lcore2@0000:00:04.2,lcore2@0000:00:04.3,lcore3@0000:00:04.0,lcore3@0000:00:04.2,lcore3@0000:00:04.4,lcore3@0000:00:04.5,lcore3@0000:00:04.6,lcore3@0000:00:04.7,lcore4@0000:00:04.1,lcore4@0000:00:04.3,lcore4@0000:80:04.0,lcore4@0000:80:04.1,lcore4@0000:80:04.2,lcore4@0000:80:04.3,lcore4@0000:80:04.4,lcore4@0000:80:04.5,lcore4@0000:80:04.6,lcore5@0000:80:04.7] + testpmd>start + +11. Rerun step 5-6. + +12. Quit and relaunch vhost w/o CBDMA channels:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost \ + --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=4' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=4' \ + -- -i --nb-cores=4 --txd=1024 --rxd=1024 --txq=4 --rxq=4 + testpmd>start + +13. On VM1, set virtio device:: + + ethtool -L ens5 combined 4 + +14. On VM2, set virtio device:: + + ethtool -L ens5 combined 4 + +15. Scp 1MB file form VM1 to VM2:: + + Under VM1, run: `scp [xxx] root@1.1.1.8:/` [xxx] is the file name + +16. 
Check the iperf performance and compare with CBDMA enable performance, ensure CMDMA enable performance is higher:: + + Under VM1, run: `iperf -s -i 1` + Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60` + +17. Quit and relaunch vhost with 1 queues:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost \ + --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=4' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=4' \ + -- -i --nb-cores=4 --txd=1024 --rxd=1024 --txq=1 --rxq=1 + testpmd>start + +18. On VM1, set virtio device:: + + ethtool -L ens5 combined 1 + +19. On VM2, set virtio device:: + + ethtool -L ens5 combined 1 + +20. Scp 1MB file form VM1 to VM2M, check packets can be forwarding success by scp:: + + Under VM1, run: `scp [xxx] root@1.1.1.8:/` [xxx] is the file name + +21. Check the iperf performance, ensure queue0 can work from vhost side:: + + Under VM1, run: `iperf -s -i 1` + Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60` + +Test Case 3: VM2VM split ring vhost-user/virtio-net non-mergeable 8 queues CBDMA enable test with large packet payload valid check +---------------------------------------------------------------------------------------------------------------------------------- +This case uses testpmd and QEMU and iperf to test split ring non-mergeable path with 8 queues and CBDMA enable to get throughput between 2 VMs. + +1. Bind 16 CBDMA channels to vfio-pci, as common step 1. + +2. Launch vhost by below command:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost \ + -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \ + -a 0000:80:04.0 -a 0000:80:04.1 -a 0000:80:04.2 -a 0000:80:04.3 -a 0000:80:04.4 -a 0000:80:04.5 -a 0000:80:04.6 -a 0000:80:04.7 \ + --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7]' \ + --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7]' \ + --iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --txq=8 --rxq=8 \ + --lcore-dma=[lcore2@0000:00:04.0,lcore2@0000:00:04.1,lcore2@0000:00:04.2,lcore2@0000:00:04.3,lcore2@0000:00:04.4,lcore2@0000:00:04.5,lcore3@0000:00:04.6,lcore3@0000:00:04.7,lcore4@0000:80:04.0,lcore4@0000:80:04.1,lcore4@0000:80:04.2,lcore4@0000:80:04.3,lcore4@0000:80:04.4,lcore4@0000:80:04.5,lcore4@0000:80:04.6,lcore5@0000:80:04.7] + testpmd>start + +3. 
Launch VM1 and VM2:: + + taskset -c 32 qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 8 -m 4096 \ + -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \ + -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04.img \ + -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \ + -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \ + -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ + -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \ + -chardev socket,id=char0,path=./vhost-net0,server \ + -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \ + -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=off,\ + mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :10 + + taskset -c 40 qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 8 -m 4096 \ + -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \ + -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04-2.img \ + -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \ + -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \ + -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ + -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \ + -chardev socket,id=char0,path=./vhost-net1,server \ + -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \ + -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=off,\ + mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :12 + +4. On VM1, set virtio device IP and run arp protocal:: + + ethtool -L ens5 combined 8 + ifconfig ens5 1.1.1.2 + arp -s 1.1.1.8 52:54:00:00:00:02 + +5. On VM2, set virtio device IP and run arp protocal:: + + ethtool -L ens5 combined 8 + ifconfig ens5 1.1.1.8 + arp -s 1.1.1.2 52:54:00:00:00:01 + +6. Scp 1MB file form VM1 to VM2:: + + Under VM1, run: `scp [xxx] root@1.1.1.8:/` [xxx] is the file name + +7. Check the iperf performance between two VMs by below commands:: + + Under VM1, run: `iperf -s -i 1` + Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60` + +8. Quit and relaunch vhost w/ diff CBDMA channels:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost \ + -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \ + -a 0000:80:04.0 -a 0000:80:04.1 -a 0000:80:04.2 -a 0000:80:04.3 -a 0000:80:04.4 -a 0000:80:04.5 -a 0000:80:04.6 -a 0000:80:04.7 \ + --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6]' \ + --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq1;txq2;txq3;txq4;txq5;txq6]' \ + --iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --txq=8 --rxq=8 \ + --lcore-dma=[lcore2@0000:00:04.0,lcore2@0000:00:04.1,lcore2@0000:00:04.2,lcore2@0000:00:04.3,lcore3@0000:00:04.0,lcore3@0000:00:04.2,lcore3@0000:00:04.4,lcore3@0000:00:04.5,lcore3@0000:00:04.6,lcore3@0000:00:04.7,lcore4@0000:00:04.1,lcore4@0000:00:04.3,lcore4@0000:80:04.0,lcore4@0000:80:04.1,lcore4@0000:80:04.2,lcore4@0000:80:04.3,lcore4@0000:80:04.4,lcore4@0000:80:04.5,lcore4@0000:80:04.6,lcore5@0000:80:04.7] + testpmd>start + +9. Rerun step 5-6. + +10. 
Quit and relaunch vhost ports w/o CBDMA channels:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost \ + --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8' \ + -- -i --nb-cores=4 --txd=1024 --rxd=1024 --txq=8 --rxq=8 + testpmd>start + +11. Scp 1MB file form VM1 to VM2:: + + Under VM1, run: `scp [xxx] root@1.1.1.8:/` [xxx] is the file name + +12. Check the iperf performance and compare with CBDMA enable performance, ensure CMDMA enable performance is higher:: + + Under VM1, run: `iperf -s -i 1` + Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60` + +13. Quit and relaunch vhost ports with 1 queues:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost \ + --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8' \ + -- -i --nb-cores=4 --txd=1024 --rxd=1024 --txq=1 --rxq=1 + testpmd>start + +14. On VM1, set virtio device:: + + ethtool -L ens5 combined 1 + +15. On VM2, set virtio device:: + + ethtool -L ens5 combined 1 + +16. Scp 1MB file form VM1 to VM2M, check packets can be forwarding success by scp:: + + Under VM1, run: `scp [xxx] root@1.1.1.8:/` [xxx] is the file name + +17. Check the iperf performance, ensure queue0 can work from vhost side:: + + Under VM1, run: `iperf -s -i 1` + Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60` + +Test Case 4: VM2VM split ring vhost-user/virtio-net mergeable 16 queues CBDMA enable test with large packet payload valid check +------------------------------------------------------------------------------------------------------------------------------- +This case uses testpmd and QEMU and iperf to test split ring mergeable path with 16 queues and CBDMA enable to get throughput between 2 VMs. + +1. Bind 16 CBDMA channels to vfio-pci, as common step 1. + +2. Launch vhost by below command:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-9 -n 4 --file-prefix=vhost \ + -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \ + -a 0000:80:04.0 -a 0000:80:04.1 -a 0000:80:04.2 -a 0000:80:04.3 -a 0000:80:04.4 -a 0000:80:04.5 -a 0000:80:04.6 -a 0000:80:04.7 \ + --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=16,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7;txq8;txq9;txq10;txq11;txq12;txq13;txq14;txq15]' \ + --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=16,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7;txq8;txq9;txq10;txq11;txq12;txq13;txq14;txq15]' \ + --iova=va -- -i --nb-cores=8 --txd=1024 --rxd=1024 --txq=16 --rxq=16 \ + --lcore-dma=[lcore2@0000:00:04.0,lcore2@0000:00:04.1,lcore3@0000:00:04.2,lcore3@0000:00:04.3,lcore4@0000:00:04.4,lcore4@0000:00:04.5,lcore5@0000:00:04.6,lcore5@0000:00:04.7,lcore6@0000:80:04.0,lcore6@0000:80:04.1,lcore7@0000:80:04.2,lcore7@0000:80:04.3,lcore8@0000:80:04.4,lcore8@0000:80:04.5,lcore9@0000:80:04.6,lcore9@0000:80:04.7] + testpmd>start + +3. 
Launch VM1 and VM2 using qemu:: + + taskset -c 32 qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 8 -m 4096 \ + -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \ + -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04.img \ + -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \ + -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \ + -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ + -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \ + -chardev socket,id=char0,path=./vhost-net0,server \ + -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=16 \ + -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,\ + mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :10 + + taskset -c 40 qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 8 -m 4096 \ + -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \ + -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04-2.img \ + -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \ + -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \ + -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ + -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \ + -chardev socket,id=char0,path=./vhost-net1,server \ + -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=16 \ + -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,\ + mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :12 + +4. On VM1, set virtio device IP and run arp protocal:: + + ethtool -L ens5 combined 16 + ifconfig ens5 1.1.1.2 + arp -s 1.1.1.8 52:54:00:00:00:02 + +5. On VM2, set virtio device IP and run arp protocal:: + + ethtool -L ens5 combined 16 + ifconfig ens5 1.1.1.8 + arp -s 1.1.1.2 52:54:00:00:00:01 + +6. Scp 1MB file form VM1 to VM2:: + + Under VM1, run: `scp [xxx] root@1.1.1.8:/` [xxx] is the file name + +7. Check the iperf performance between two VMs by below commands:: + + Under VM1, run: `iperf -s -i 1` + Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60` + +Test Case 5: VM2VM packed ring vhost-user/virtio-net CBDMA enable test with tcp traffic +--------------------------------------------------------------------------------------- +This case uses testpmd and QEMU and iperf to test packed ring path and CBDMA enable to get throughput between 2 VMs. + +1. Bind 2 CBDMA channels to vfio-pci, as common step 1. + +2. Launch vhost by below command:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-4 -n 4 --file-prefix=vhost \ + -a 0000:00:04.0 -a 0000:00:04.1 \ + --vdev 'net_vhost0,iface=vhost-net0,queues=1,dmas=[txq0]' \ + --vdev 'net_vhost1,iface=vhost-net1,queues=1,dmas=[txq0]' \ + --iova=va -- -i --nb-cores=2 --txd=1024 --rxd=1024 --txq=1 --rxq=1 --lcore-dma=[lcore3@0000:00:04.0,lcore4@0000:00:04.1] + testpmd>start + +3. 
Launch VM1 and VM2 on socket 1 with qemu:: + + taskset -c 32 qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 1 -m 4096 \ + -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \ + -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04.img \ + -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \ + -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \ + -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ + -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \ + -chardev socket,id=char0,path=./vhost-net0 \ + -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \ + -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,\ + csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on -vnc :10 + + taskset -c 33 qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 1 -m 4096 \ + -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \ + -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04-2.img \ + -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \ + -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \ + -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ + -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \ + -chardev socket,id=char0,path=./vhost-net1 \ + -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \ + -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,\ + csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on -vnc :12 + +4. On VM1, set virtio device IP and run arp protocal:: + + ifconfig ens5 1.1.1.2 + arp -s 1.1.1.8 52:54:00:00:00:02 + +5. On VM2, set virtio device IP and run arp protocal:: + + ifconfig ens5 1.1.1.8 + arp -s 1.1.1.2 52:54:00:00:00:01 + +6. Check the iperf performance between two VMs by below commands:: + + Under VM1, run: `iperf -s -i 1` + Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60` + +7. Check 2VMs can receive and send big packets to each other:: + + testpmd>show port xstats all + Port 0 should have tx packets above 1522 + Port 1 should have rx packets above 1522 + +Test Case 6: VM2VM virtio-net packed ring mergeable 8 queues CBDMA enable test with large packet payload valid check +-------------------------------------------------------------------------------------------------------------------- +This case uses testpmd and QEMU and iperf to test packed ring mergeable path with 8 queues and CBDMA enable to get throughput between 2 VMs. + +1. Bind 16 CBDMA channels to vfio-pci, as common step 1. + +2. 
Launch vhost by below command:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost \ + -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \ + -a 0000:80:04.0 -a 0000:80:04.1 -a 0000:80:04.2 -a 0000:80:04.3 -a 0000:80:04.4 -a 0000:80:04.5 -a 0000:80:04.6 -a 0000:80:04.7 \ + --vdev 'net_vhost0,iface=vhost-net0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6]' \ + --vdev 'net_vhost1,iface=vhost-net1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6]' \ + --iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --txq=8 --rxq=8 \ + --lcore-dma=[lcore2@0000:00:04.0,lcore2@0000:00:04.1,lcore2@0000:00:04.2,lcore2@0000:00:04.3,lcore3@0000:00:04.0,lcore3@0000:00:04.2,lcore3@0000:00:04.4,lcore3@0000:00:04.5,lcore3@0000:00:04.6,lcore3@0000:00:04.7,lcore4@0000:00:04.1,lcore4@0000:00:04.3,lcore4@0000:80:04.0,lcore4@0000:80:04.1,lcore4@0000:80:04.2,lcore4@0000:80:04.3,lcore4@0000:80:04.4,lcore4@0000:80:04.5,lcore4@0000:80:04.6,lcore5@0000:80:04.7] + testpmd>start + +3. Launch VM1 and VM2 with qemu:: + + taskset -c 32 qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 8 -m 4096 \ + -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \ + -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04.img \ + -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \ + -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \ + -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ + -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \ + -chardev socket,id=char0,path=./vhost-net0 \ + -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \ + -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,\ + mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on -vnc :10 + + taskset -c 40 qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 8 -m 4096 \ + -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \ + -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04-2.img \ + -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \ + -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \ + -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ + -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \ + -chardev socket,id=char0,path=./vhost-net1 \ + -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \ + -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,\ + mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on -vnc :12 + +4. On VM1, set virtio device IP and run arp protocal:: + + ethtool -L ens5 combined 8 + ifconfig ens5 1.1.1.2 + arp -s 1.1.1.8 52:54:00:00:00:02 + +5. On VM2, set virtio device IP and run arp protocal:: + + ethtool -L ens5 combined 8 + ifconfig ens5 1.1.1.8 + arp -s 1.1.1.2 52:54:00:00:00:01 + +6. Scp 1MB file form VM1 to VM2:: + + Under VM1, run: `scp [xxx] root@1.1.1.8:/` [xxx] is the file name + +7. Check the iperf performance between two VMs by below commands:: + + Under VM1, run: `iperf -s -i 1` + Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60` + +8. Rerun step 5-6 five times. 
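+
+Note: the scp in step 6 and the iperf run in step 7 can also be validated explicitly inside the guests.
+The DTS suite code shown earlier in this series performs the same validation by comparing md5sums of the
+transferred file and by parsing the iperf client log for a Gbits/sec figure; a minimal manual sketch of
+those checks is below ([xxx] is the file name used in step 6, and the log name is only illustrative)::
+
+   Under VM1, run: `md5sum [xxx]`                # checksum of the source file
+   Under VM2, run: `md5sum /[xxx]`               # must match the checksum reported on VM1
+   Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60 | tee iperf_client.log`
+   Under VM2, run: `grep -Eo "[0-9.]+ [MG]bits/sec" iperf_client.log | tail -1`   # a Gbits/sec value is expected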
+ +Test Case 7: VM2VM virtio-net packed ring non-mergeable 8 queues CBDMA enable test with large packet payload valid check +------------------------------------------------------------------------------------------------------------------------ +This case uses testpmd and QEMU and iperf to test packed ring non-mergeable path with 8 queues and CBDMA enable to get throughput between 2 VMs. + +1. Bind 16 CBDMA channels to vfio-pci, as common step 1. + +2. Launch vhost by below command:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost \ + -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \ + -a 0000:80:04.0 -a 0000:80:04.1 -a 0000:80:04.2 -a 0000:80:04.3 -a 0000:80:04.4 -a 0000:80:04.5 -a 0000:80:04.6 -a 0000:80:04.7 \ + --vdev 'net_vhost0,iface=vhost-net0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7]' \ + --vdev 'net_vhost1,iface=vhost-net1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7]' \ + --iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --txq=8 --rxq=8 \ + --lcore-dma=[lcore2@0000:00:04.0,lcore2@0000:00:04.1,lcore2@0000:00:04.2,lcore2@0000:00:04.3,lcore2@0000:00:04.4,lcore2@0000:00:04.5,lcore3@0000:00:04.6,lcore3@0000:00:04.7,lcore4@0000:80:04.0,lcore4@0000:80:04.1,lcore4@0000:80:04.2,lcore4@0000:80:04.3,lcore4@0000:80:04.4,lcore4@0000:80:04.5,lcore4@0000:80:04.6,lcore5@0000:80:04.7] + testpmd>start + +3. Launch VM1 and VM2:: + + taskset -c 32 qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 8 -m 4096 \ + -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \ + -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04.img \ + -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \ + -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \ + -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ + -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \ + -chardev socket,id=char0,path=./vhost-net0 \ + -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \ + -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=off,\ + mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on -vnc :10 + + taskset -c 40 qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 8 -m 4096 \ + -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \ + -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04-2.img \ + -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \ + -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \ + -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ + -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \ + -chardev socket,id=char0,path=./vhost-net1 \ + -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \ + -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=off,\ + mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on -vnc :12 + +4. On VM1, set virtio device IP and run arp protocal:: + + ethtool -L ens5 combined 8 + ifconfig ens5 1.1.1.2 + arp -s 1.1.1.8 52:54:00:00:00:02 + +5. 
On VM2, set virtio device IP and run arp protocol::
+
+    ethtool -L ens5 combined 8
+    ifconfig ens5 1.1.1.8
+    arp -s 1.1.1.2 52:54:00:00:00:01
+
+6. Scp 1MB file from VM1 to VM2::
+
+    Under VM1, run: `scp [xxx] root@1.1.1.8:/`, where [xxx] is the file name
+
+7. Check the iperf performance between two VMs by below commands::
+
+    Under VM1, run: `iperf -s -i 1`
+    Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`
+
+8. Rerun step 5-6 five times.
+
+Test Case 8: VM2VM virtio-net packed ring mergeable 16 queues CBDMA enabled test with large packet payload valid check
+--------------------------------------------------------------------------------------------------------------------------
+This case uses testpmd, QEMU and iperf to test the packed ring mergeable path with 16 queues and CBDMA enabled, and checks the throughput between 2 VMs.
+
+1. Bind 16 CBDMA channels to vfio-pci, as common step 1.
+
+2. Launch vhost by below command::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-9 -n 4 --file-prefix=vhost \
+    -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \
+    -a 0000:80:04.0 -a 0000:80:04.1 -a 0000:80:04.2 -a 0000:80:04.3 -a 0000:80:04.4 -a 0000:80:04.5 -a 0000:80:04.6 -a 0000:80:04.7 \
+    --vdev 'net_vhost0,iface=vhost-net0,queues=16,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7;txq8;txq9;txq10;txq11;txq12;txq13;txq14;txq15]' \
+    --vdev 'net_vhost1,iface=vhost-net1,queues=16,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7;txq8;txq9;txq10;txq11;txq12;txq13;txq14;txq15]' \
+    --iova=pa -- -i --nb-cores=8 --txd=1024 --rxd=1024 --txq=16 --rxq=16 \
+    --lcore-dma=[lcore2@0000:00:04.0,lcore2@0000:00:04.1,lcore3@0000:00:04.2,lcore3@0000:00:04.3,lcore4@0000:00:04.4,lcore4@0000:00:04.5,lcore5@0000:00:04.6,lcore5@0000:00:04.7,lcore6@0000:80:04.0,lcore6@0000:80:04.1,lcore7@0000:80:04.2,lcore7@0000:80:04.3,lcore8@0000:80:04.4,lcore8@0000:80:04.5,lcore9@0000:80:04.6,lcore9@0000:80:04.7]
+    testpmd>start
+
+3. 
Launch VM1 and VM2 with qemu:: + + taskset -c 32 qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 8 -m 4096 \ + -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge1G0,share=on \ + -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04.img \ + -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \ + -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \ + -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ + -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \ + -chardev socket,id=char0,path=./vhost-net0 \ + -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=16 \ + -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,\ + mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on -vnc :10 + + taskset -c 40 qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 8 -m 4096 \ + -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge1G1,share=on \ + -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04-2.img \ + -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \ + -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \ + -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ + -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \ + -chardev socket,id=char0,path=./vhost-net1 \ + -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=16 \ + -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,\ + mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on -vnc :12 + +4. On VM1, set virtio device IP and run arp protocal:: + + ethtool -L ens5 combined 16 + ifconfig ens5 1.1.1.2 + arp -s 1.1.1.8 52:54:00:00:00:02 + +5. On VM2, set virtio device IP and run arp protocal:: + + ethtool -L ens5 combined 16 + ifconfig ens5 1.1.1.8 + arp -s 1.1.1.2 52:54:00:00:00:01 + +6. Scp 1MB file form VM1 to VM2:: + + Under VM1, run: `scp [xxx] root@1.1.1.8:/` [xxx] is the file name + +7. Check the iperf performance between two VMs by below commands:: + + Under VM1, run: `iperf -s -i 1` + Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60` + +8. Rerun step 5-6 five times. + +Test Case 9: VM2VM packed ring vhost-user/virtio-net CBDMA enable test with tcp traffic when set iova=pa +-------------------------------------------------------------------------------------------------------- +This case uses testpmd and QEMU and iperf to test packed ring and CBDMA enable when set iova=pa mode to get throughput between 2 VMs. + +1. Bind 2 CBDMA channels to vfio-pci, as common step 1. + +2. Launch vhost by below command:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-4 -n 4 --file-prefix=vhost \ + -a 0000:00:04.0 -a 0000:00:04.1 \ + --vdev 'net_vhost0,iface=vhost-net0,queues=1,dmas=[txq0]' \ + --vdev 'net_vhost1,iface=vhost-net1,queues=1,dmas=[txq0]' \ + --iova=pa -- -i --nb-cores=2 --txd=1024 --rxd=1024 --txq=1 --rxq=1 --lcore-dma=[lcore3@0000:00:04.0,lcore4@0000:00:04.1] + testpmd>start + +3. 
Launch VM1 and VM2 on socket 1 with qemu:: + + taskset -c 32 qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 1 -m 4096 \ + -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \ + -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04.img \ + -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \ + -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \ + -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ + -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \ + -chardev socket,id=char0,path=./vhost-net0 \ + -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \ + -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,\ + csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on -vnc :10 + + taskset -c 33 qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 1 -m 4096 \ + -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \ + -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04-2.img \ + -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \ + -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \ + -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ + -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \ + -chardev socket,id=char0,path=./vhost-net1 \ + -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \ + -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,\ + csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on -vnc :12 + +4. On VM1, set virtio device IP and run arp protocal:: + + ifconfig ens5 1.1.1.2 + arp -s 1.1.1.8 52:54:00:00:00:02 + +5. On VM2, set virtio device IP and run arp protocal:: + + ifconfig ens5 1.1.1.8 + arp -s 1.1.1.2 52:54:00:00:00:01 + +6. Scp 1MB file form VM1 to VM2:: + + Under VM1, run: `scp [xxx] root@1.1.1.8:/` [xxx] is the file name + +7. Check the iperf performance between two VMs by below commands:: + + Under VM1, run: `iperf -s -i 1` + Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60` + +8. Check 2VMs can receive and send big packets to each other:: + + testpmd>show port xstats all + Port 0 should have tx packets above 1522 + Port 1 should have rx packets above 1522 + +Test Case 10: VM2VM virtio-net packed ring mergeable 8 queues CBDMA enable and PA mode test with large packet payload valid check +--------------------------------------------------------------------------------------------------------------------------------- +This case uses testpmd and QEMU and iperf to test packed ring mergeable path with 8 queues and CBDMA enable when set iova=pa mode to get throughput between 2 VMs. + +1. Bind 16 CBDMA channels to vfio-pci, as common step 1. + +2. 
Launch vhost by below command::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost \
+    -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \
+    -a 0000:80:04.0 -a 0000:80:04.1 -a 0000:80:04.2 -a 0000:80:04.3 -a 0000:80:04.4 -a 0000:80:04.5 -a 0000:80:04.6 -a 0000:80:04.7 \
+    --vdev 'net_vhost0,iface=vhost-net0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6]' \
+    --vdev 'net_vhost1,iface=vhost-net1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6]' \
+    --iova=pa -- -i --nb-cores=4 --txd=1024 --rxd=1024 --txq=8 --rxq=8 \
+    --lcore-dma=[lcore2@0000:00:04.0,lcore2@0000:00:04.1,lcore2@0000:00:04.2,lcore2@0000:00:04.3,lcore2@0000:00:04.4,lcore2@0000:00:04.5,lcore3@0000:00:04.6,lcore3@0000:00:04.7,lcore4@0000:80:04.0,lcore4@0000:80:04.1,lcore4@0000:80:04.2,lcore4@0000:80:04.3,lcore4@0000:80:04.4,lcore4@0000:80:04.5,lcore4@0000:80:04.6,lcore5@0000:80:04.7]
+    testpmd>start
+
+3. Launch VM1 and VM2 with qemu::
+
+    taskset -c 32 qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 8 -m 4096 \
+    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
+    -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04.img \
+    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
+    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
+    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \
+    -chardev socket,id=char0,path=./vhost-net0 \
+    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,\
+    mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on -vnc :10
+
+    taskset -c 40 qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 8 -m 4096 \
+    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
+    -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04-2.img \
+    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
+    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
+    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \
+    -chardev socket,id=char0,path=./vhost-net1 \
+    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,\
+    mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on -vnc :12
+
+4. On VM1, set virtio device IP and run arp protocol::
+
+    ethtool -L ens5 combined 8
+    ifconfig ens5 1.1.1.2
+    arp -s 1.1.1.8 52:54:00:00:00:02
+
+5. On VM2, set virtio device IP and run arp protocol::
+
+    ethtool -L ens5 combined 8
+    ifconfig ens5 1.1.1.8
+    arp -s 1.1.1.2 52:54:00:00:00:01
+
+6. Scp 1MB file from VM1 to VM2 (a scripted sketch of this check follows this test case)::
+
+    Under VM1, run: `scp [xxx] root@1.1.1.8:/`, where [xxx] is the file name
+
+7. Check the iperf performance between two VMs by below commands::
+
+    Under VM1, run: `iperf -s -i 1`
+    Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`
+
+8. Rerun step 5-6 five times.
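+
+Note: the "Scp 1MB file from VM1 to VM2" steps in the cases above are a payload
+integrity check rather than a throughput test: the file received in VM2 must be
+byte-identical to the one generated in VM1. The snippet below is a rough host-side
+sketch of that check only; the forwarded SSH ports (6002 for VM1, 6003 for VM2) and
+password-less root login between host and guests and between the two guests are
+assumptions for illustration and not part of this test plan::
+
+    # Generate a ~1MB payload in VM1, push it to VM2 over the virtio-net
+    # link (1.1.1.x), then compare md5 checksums on both sides.
+    import subprocess
+
+    def ssh(port, cmd):
+        out = subprocess.run(["ssh", "-p", str(port), "root@127.0.0.1", cmd],
+                             capture_output=True, text=True, timeout=120)
+        return out.stdout.strip()
+
+    ssh(6002, "dd if=/dev/urandom of=/tmp/payload bs=1K count=1024")
+    # scp runs inside VM1 and pushes the file through the vhost/virtio path
+    ssh(6002, "scp -o StrictHostKeyChecking=no /tmp/payload root@1.1.1.8:/root/")
+    md5_sent = ssh(6002, "md5sum /tmp/payload").split()[0]
+    md5_recv = ssh(6003, "md5sum /root/payload").split()[0]
+    assert md5_sent == md5_recv, "received file differs from the sent file"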
\ No newline at end of file From patchwork Wed Apr 6 09:10:29 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Ling, WeiX" X-Patchwork-Id: 109194 Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id AED72A0509; Wed, 6 Apr 2022 11:10:53 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id A7BAB410EF; Wed, 6 Apr 2022 11:10:53 +0200 (CEST) Received: from mga14.intel.com (mga14.intel.com [192.55.52.115]) by mails.dpdk.org (Postfix) with ESMTP id 3CEC040689 for ; Wed, 6 Apr 2022 11:10:51 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1649236251; x=1680772251; h=from:to:cc:subject:date:message-id:mime-version: content-transfer-encoding; bh=QxBv3hdQxRmGBrjS5AdHWuyh5M1YGss5wyYK+eVQ6OQ=; b=ENpPHXUBxdJci0o3OFM1P+qncF8CxSPx9MLzXKnrlhSfdn9OvMCOQZVo PCHyeg5+O8geE+HOwwMZoaXuioEI6AYaeoKCw/hPQA+IaPZP5wsX1iQ8u gxNAw0O5P84BTUKeJ/7vC0HyQypnqAQ2zG4mz4vdQbcDLVChjpTdHDpHa ya1O8ef9nrWjTPd0B34Uk7XNMqUUeyBiCHw6wr7TsEAzo5/LagQtmigTh vZkHbUGTeciwwHa/nJlvEnCjco8SI+ckUSGteyWGsJVCZRiqblS6AeDIN 8uuMd/htjWHz5mgad595YdwfgTttmoacbtdKNgRt85nWX/dtqw/F3meJQ g==; X-IronPort-AV: E=McAfee;i="6200,9189,10308"; a="261169604" X-IronPort-AV: E=Sophos;i="5.90,239,1643702400"; d="scan'208";a="261169604" Received: from orsmga008.jf.intel.com ([10.7.209.65]) by fmsmga103.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 06 Apr 2022 02:10:37 -0700 X-IronPort-AV: E=Sophos;i="5.90,239,1643702400"; d="scan'208";a="570425084" Received: from unknown (HELO localhost.localdomain) ([10.239.251.222]) by orsmga008-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 06 Apr 2022 02:10:35 -0700 From: Wei Ling To: dts@dpdk.org Cc: Wei Ling Subject: [dts][PATCH V1 4/5] tests/vm2vm_virtio_net_perf_cbdma: add DPDK22.03 new feature Date: Wed, 6 Apr 2022 17:10:29 +0800 Message-Id: <20220406091029.28441-1-weix.ling@intel.com> X-Mailer: git-send-email 2.25.1 MIME-Version: 1.0 X-BeenThere: dts@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: test suite reviews and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dts-bounces@dpdk.org As commit 53d3f4778c(vhost: integrate dmadev in asynchronous data-path), add new tests/vm2vm_virtio_net_perf_cbdma. Signed-off-by: Wei Ling --- .../TestSuite_vm2vm_virtio_net_perf_cbdma.py | 744 ++++++++++++++++++ 1 file changed, 744 insertions(+) create mode 100644 tests/TestSuite_vm2vm_virtio_net_perf_cbdma.py diff --git a/tests/TestSuite_vm2vm_virtio_net_perf_cbdma.py b/tests/TestSuite_vm2vm_virtio_net_perf_cbdma.py new file mode 100644 index 00000000..a5994005 --- /dev/null +++ b/tests/TestSuite_vm2vm_virtio_net_perf_cbdma.py @@ -0,0 +1,744 @@ +# BSD LICENSE +# +# Copyright(c) <2022> Intel Corporation. +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# * Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. 
+# * Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# * Neither the name of Intel Corporation nor the names of its +# contributors may be used to endorse or promote products derived +# from this software without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT +# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + +""" +DPDK Test suite. + +vm2vm split ring and packed ring with tx offload (TSO and UFO) with non-mergeable path. +vm2vm split ring and packed ring with UFO about virtio-net device capability with non-mergeable path. +vm2vm split ring and packed ring vhost-user/virtio-net check the payload of large packet is valid with +mergeable and non-mergeable dequeue zero copy. +please use qemu version greater 4.1.94 which support packed feathur to test this suite. +""" +import random +import re +import string +import time + +import framework.utils as utils +from framework.pmd_output import PmdOutput +from framework.test_case import TestCase +from framework.virt_common import VM + + +class TestVM2VMVirtioNetPerfCbdma(TestCase): + def set_up_all(self): + self.dut_ports = self.dut.get_ports() + self.ports_socket = self.dut.get_numa_id(self.dut_ports[0]) + self.cores_list = self.dut.get_core_list(config="all", socket=self.ports_socket) + self.vhost_core_list = self.cores_list[0:9] + self.vm_num = 2 + self.virtio_ip1 = "1.1.1.1" + self.virtio_ip2 = "1.1.1.2" + self.virtio_mac1 = "52:54:00:00:00:01" + self.virtio_mac2 = "52:54:00:00:00:02" + self.base_dir = self.dut.base_dir.replace('~', '/root') + self.random_string = string.ascii_letters + string.digits + socket_num = len(set([int(core['socket']) for core in self.dut.cores])) + self.socket_mem = ','.join(['2048']*socket_num) + self.vhost = self.dut.new_session(suite="vhost") + self.pmdout_vhost_user = PmdOutput(self.dut, self.vhost) + self.app_testpmd_path = self.dut.apps_name['test-pmd'] + + def set_up(self): + """ + run before each test case. 
+ """ + self.dut.send_expect("rm -rf %s/vhost-net*" % self.base_dir, "#") + self.vm_dut = [] + self.vm = [] + + def get_cbdma_ports_info_and_bind_to_dpdk(self, cbdma_num, allow_diff_socket=False): + """ + get and bind cbdma ports into DPDK driver + """ + self.all_cbdma_list = [] + self.cbdma_list = [] + self.cbdma_str = "" + out = self.dut.send_expect('./usertools/dpdk-devbind.py --status-dev dma', '# ', 30) + device_info = out.split('\n') + for device in device_info: + pci_info = re.search('\s*(0000:\S*:\d*.\d*)', device) + if pci_info is not None: + dev_info = pci_info.group(1) + # the numa id of ioat dev, only add the device which on same socket with nic dev + bus = int(dev_info[5:7], base=16) + if bus >= 128: + cur_socket = 1 + else: + cur_socket = 0 + if allow_diff_socket: + self.all_cbdma_list.append(pci_info.group(1)) + else: + if self.ports_socket == cur_socket: + self.all_cbdma_list.append(pci_info.group(1)) + self.verify(len(self.all_cbdma_list) >= cbdma_num, 'There no enough cbdma device') + self.cbdma_list = self.all_cbdma_list[0:cbdma_num] + self.cbdma_str = ' '.join(self.cbdma_list) + self.dut.send_expect('./usertools/dpdk-devbind.py --force --bind=%s %s' % (self.drivername, self.cbdma_str), '# ', 60) + + @staticmethod + def generate_dms_param(queues): + das_list = [] + for i in range(queues): + das_list.append("txq{}".format(i)) + das_param = "[{}]".format(";".join(das_list)) + return das_param + + @staticmethod + def generate_lcore_dma_param(cbdma_list, core_list): + group_num = int(len(cbdma_list) / len(core_list)) + lcore_dma_list = [] + if len(cbdma_list) == 1: + for core in core_list: + lcore_dma_list.append("lcore{}@{}".format(core, cbdma_list[0])) + elif len(core_list) == 1: + for cbdma in cbdma_list: + lcore_dma_list.append("lcore{}@{}".format(core_list[0], cbdma)) + else: + for cbdma in cbdma_list: + core_list_index = int(cbdma_list.index(cbdma) / group_num) + lcore_dma_list.append("lcore{}@{}".format(core_list[core_list_index], cbdma)) + lcore_dma_param = "[{}]".format(",".join(lcore_dma_list)) + return lcore_dma_param + + def bind_cbdma_device_to_kernel(self): + self.dut.send_expect('modprobe ioatdma', '# ') + self.dut.send_expect('./usertools/dpdk-devbind.py -u %s' % self.cbdma_str, '# ', 30) + self.dut.send_expect('./usertools/dpdk-devbind.py --force --bind=ioatdma %s' % self.cbdma_str, '# ', 60) + + @property + def check_2M_env(self): + out = self.dut.send_expect("cat /proc/meminfo |grep Hugepagesize|awk '{print($2)}'", "# ") + return True if out == '2048' else False + + def start_vhost_testpmd(self, cores, param="", eal_param="", ports = "", iova_mode=''): + if iova_mode: + eal_param += " --iova=" + iova_mode + self.pmdout_vhost_user.start_testpmd(cores=cores, param=param, eal_param=eal_param, ports=ports, prefix="vhost") + self.pmdout_vhost_user.execute_cmd('start') + + def start_vms(self, server_mode=False, vm_queue=1, vm_config='vhost_sample'): + """ + start two VM, each VM has one virtio device + """ + for i in range(self.vm_num): + vm_dut = None + vm_info = VM(self.dut, 'vm%d' % i, vm_config) + vm_params = {} + vm_params['driver'] = 'vhost-user' + if not server_mode: + vm_params['opt_path'] = self.base_dir + '/vhost-net%d' % i + else: + vm_params['opt_path'] = self.base_dir + '/vhost-net%d' % i + ',server' + vm_params['opt_queue'] = vm_queue + vm_params['opt_mac'] = "52:54:00:00:00:0%d" % (i+1) + vm_params['opt_settings'] = self.vm_args + vm_info.set_vm_device(**vm_params) + try: + vm_dut = vm_info.start(set_target=False) + if vm_dut is None: + raise 
Exception("Set up VM ENV failed") + except Exception as e: + print(utils.RED("Failure for %s" % str(e))) + self.verify(vm_dut is not None, "start vm failed") + self.vm_dut.append(vm_dut) + self.vm.append(vm_info) + + def config_vm_ip(self): + """ + set virtio device IP and run arp protocal + """ + vm1_intf = self.vm_dut[0].ports_info[0]['intf'] + vm2_intf = self.vm_dut[1].ports_info[0]['intf'] + self.vm_dut[0].send_expect("ifconfig %s %s" % (vm1_intf, self.virtio_ip1), "#", 10) + self.vm_dut[1].send_expect("ifconfig %s %s" % (vm2_intf, self.virtio_ip2), "#", 10) + self.vm_dut[0].send_expect("arp -s %s %s" % (self.virtio_ip2, self.virtio_mac2), "#", 10) + self.vm_dut[1].send_expect("arp -s %s %s" % (self.virtio_ip1, self.virtio_mac1), "#", 10) + + def config_vm_combined(self, combined=1): + """ + set virtio device combined + """ + vm1_intf = self.vm_dut[0].ports_info[0]['intf'] + vm2_intf = self.vm_dut[1].ports_info[0]['intf'] + self.vm_dut[0].send_expect("ethtool -L %s combined %d" % (vm1_intf, combined), "#", 10) + self.vm_dut[1].send_expect("ethtool -L %s combined %d" % (vm2_intf, combined), "#", 10) + + def check_ping_between_vms(self): + ping_out = self.vm_dut[0].send_expect("ping {} -c 4".format(self.virtio_ip2), "#" , 20) + self.logger.info(ping_out) + + def start_iperf(self): + """ + run perf command between to vms + """ + self.vhost.send_expect("clear port xstats all", "testpmd> ", 10) + + server = "iperf -s -i 1" + client = "iperf -c {} -i 1 -t 60".format(self.virtio_ip1) + self.vm_dut[0].send_expect("{} > iperf_server.log &".format(server), "", 10) + self.vm_dut[1].send_expect("{} > iperf_client.log &".format(client), "", 10) + time.sleep(60) + + def get_perf_result(self): + """ + get the iperf test result + """ + self.table_header = ['Mode', '[M|G]bits/sec'] + self.result_table_create(self.table_header) + self.vm_dut[0].send_expect('pkill iperf', '# ') + self.vm_dut[1].session.copy_file_from("%s/iperf_client.log" % self.dut.base_dir) + fp = open("./iperf_client.log") + fmsg = fp.read() + fp.close() + # remove the server report info from msg + index = fmsg.find("Server Report") + if index != -1: + fmsg = fmsg[:index] + iperfdata = re.compile('\S*\s*[M|G]bits/sec').findall(fmsg) + # the last data of iperf is the ave data from 0-30 sec + self.verify(len(iperfdata) != 0, "The iperf data between to vms is 0") + self.logger.info("The iperf data between vms is %s" % iperfdata[-1]) + + # put the result to table + results_row = ["vm2vm", iperfdata[-1]] + self.result_table_add(results_row) + + # print iperf resut + self.result_table_print() + # rm the iperf log file in vm + self.vm_dut[0].send_expect('rm iperf_server.log', '#', 10) + self.vm_dut[1].send_expect('rm iperf_client.log', '#', 10) + + def verify_xstats_info_on_vhost(self): + """ + check both 2VMs can receive and send big packets to each other + """ + self.vhost.send_expect("show port stats all", "testpmd> ", 20) + out_tx = self.vhost.send_expect("show port xstats 0", "testpmd> ", 20) + out_rx = self.vhost.send_expect("show port xstats 1", "testpmd> ", 20) + + tx_info = re.search("tx_size_1523_to_max_packets:\s*(\d*)", out_tx) + rx_info = re.search("rx_size_1523_to_max_packets:\s*(\d*)", out_rx) + + self.verify(int(rx_info.group(1)) > 0, + "Port 1 not receive packet greater than 1522") + self.verify(int(tx_info.group(1)) > 0, + "Port 0 not forward packet greater than 1522") + + def offload_capbility_check(self, vm_client): + """ + check UFO and TSO offload status on for the Virtio-net driver in VM + """ + vm_intf = 
vm_client.ports_info[0]['intf'] + vm_client.send_expect('ethtool -k %s > offload.log' % vm_intf, '#', 10) + fmsg = vm_client.send_expect("cat ./offload.log", "#") + udp_info = re.search("udp-fragmentation-offload:\s*(\S*)", fmsg) + tcp_info = re.search("tx-tcp-segmentation:\s*(\S*)", fmsg) + tcp_enc_info = re.search("tx-tcp-ecn-segmentation:\s*(\S*)", fmsg) + tcp6_info = re.search("tx-tcp6-segmentation:\s*(\S*)", fmsg) + + self.verify(udp_info is not None and udp_info.group(1) == "on", + "the udp-fragmentation-offload in vm not right") + self.verify(tcp_info is not None and tcp_info.group(1) == "on", + "tx-tcp-segmentation in vm not right") + self.verify(tcp_enc_info is not None and tcp_enc_info.group(1) == "on", + "tx-tcp-ecn-segmentation in vm not right") + self.verify(tcp6_info is not None and tcp6_info.group(1) == "on", + "tx-tcp6-segmentation in vm not right") + + def check_scp_file_valid_between_vms(self, file_size=1024): + """ + scp file form VM1 to VM2, check the data is valid + """ + # default file_size=1024K + data = '' + for char in range(file_size * 1024): + data += random.choice(self.random_string) + self.vm_dut[0].send_expect('echo "%s" > /tmp/payload' % data, '# ') + # scp this file to vm1 + out = self.vm_dut[1].send_command('scp root@%s:/tmp/payload /root' % self.virtio_ip1, timeout=5) + if 'Are you sure you want to continue connecting' in out: + self.vm_dut[1].send_command('yes', timeout=3) + self.vm_dut[1].send_command(self.vm[0].password, timeout=3) + # get the file info in vm1, and check it valid + md5_send = self.vm_dut[0].send_expect('md5sum /tmp/payload', '# ') + md5_revd = self.vm_dut[1].send_expect('md5sum /root/payload', '# ') + md5_send = md5_send[: md5_send.find(' ')] + md5_revd = md5_revd[: md5_revd.find(' ')] + self.verify(md5_send == md5_revd, 'the received file is different with send file') + + def test_vm2vm_split_ring_iperf_with_tso_and_cbdma_enable(self): + """ + Test Case 1: VM2VM split ring vhost-user/virtio-net CBDMA enable test with tcp traffic + """ + self.get_cbdma_ports_info_and_bind_to_dpdk(2) + dmas = self.generate_dms_param(1) + lcore_dma = self.generate_lcore_dma_param(cbdma_list=self.cbdma_list, core_list=self.vhost_core_list[1:3]) + eal_param = "--vdev 'net_vhost0,iface=vhost-net0,queues=1,dmas={},dma_ring_size=2048'".format(dmas) + \ + " --vdev 'net_vhost1,iface=vhost-net1,queues=1,dmas={},dma_ring_size=2048'".format(dmas) + param = " --nb-cores=2 --txd=1024 --rxd=1024 --txq=1 --rxq=1" + " --lcore-dma={}".format(lcore_dma) + self.start_vhost_testpmd(cores=self.vhost_core_list, ports=self.cbdma_list, eal_param=eal_param, param=param, iova_mode='va') + self.vm_args = "disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on" + self.start_vms(server_mode=False, vm_queue=1) + self.config_vm_ip() + self.check_ping_between_vms() + self.start_iperf() + self.get_perf_result() + self.verify_xstats_info_on_vhost() + + def test_vm2vm_split_ring_with_mergeable_path_8queue_check_large_packet_and_cbdma_enable(self): + """ + Test Case 2: VM2VM split ring vhost-user/virtio-net mergeable 8 queues CBDMA enable test with large packet payload valid check + """ + self.get_cbdma_ports_info_and_bind_to_dpdk(cbdma_num=16, allow_diff_socket=True) + dmas = self.generate_dms_param(8) + core1 = self.vhost_core_list[1] + core2 = self.vhost_core_list[2] + core3 = self.vhost_core_list[3] + core4 = self.vhost_core_list[4] + cbdma1 = self.cbdma_list[0] + cbdma2 = self.cbdma_list[1] + cbdma3 = self.cbdma_list[2] + cbdma4 = 
self.cbdma_list[3] + cbdma5 = self.cbdma_list[4] + cbdma6 = self.cbdma_list[5] + cbdma7 = self.cbdma_list[6] + cbdma8 = self.cbdma_list[7] + cbdma9 = self.cbdma_list[8] + cbdma10 = self.cbdma_list[9] + cbdma11 = self.cbdma_list[10] + cbdma12 = self.cbdma_list[11] + cbdma13 = self.cbdma_list[12] + cbdma14 = self.cbdma_list[13] + cbdma15 = self.cbdma_list[14] + cbdma16 = self.cbdma_list[15] + lcore_dma = f"[lcore{core1}@{cbdma1},lcore{core1}@{cbdma2},lcore{core1}@{cbdma3}," \ + f"lcore{core1}@{cbdma4},lcore{core1}@{cbdma5},lcore{core1}@{cbdma6}," \ + f"lcore{core2}@{cbdma7},lcore{core2}@{cbdma8}," \ + f"lcore{core3}@{cbdma9},lcore{core3}@{cbdma10},lcore{core3}@{cbdma11},lcore{core3}@{cbdma12}," \ + f"lcore{core3}@{cbdma13},lcore{core3}@{cbdma14},lcore{core3}@{cbdma15}," \ + f"lcore{core4}@{cbdma16}]" + eal_param = "--vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas={}'".format(dmas) + \ + " --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas={}'".format(dmas) + param = " --nb-cores=4 --txd=1024 --rxd=1024 --txq=8 --rxq=8" + " --lcore-dma={}".format(lcore_dma) + self.start_vhost_testpmd(cores=self.vhost_core_list, ports=self.cbdma_list, eal_param=eal_param, param=param, iova_mode='va') + self.vm_args = "disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on" + self.start_vms(server_mode=True, vm_queue=8) + self.config_vm_ip() + self.config_vm_combined(combined=8) + self.check_ping_between_vms() + self.check_scp_file_valid_between_vms() + self.start_iperf() + self.get_perf_result() + + self.logger.info("Quit and relaunch vhost w/ diff CBDMA channels") + self.pmdout_vhost_user.execute_cmd("quit", "#") + lcore_dma = f"[lcore{core1}@{cbdma1},lcore{core1}@{cbdma2}," \ + f"lcore{core1}@{cbdma3},lcore{core1}@{cbdma4}," \ + f"lcore{core2}@{cbdma1},lcore{core2}@{cbdma3},lcore{core2}@{cbdma5}," \ + f"lcore{core2}@{cbdma6},lcore{core2}@{cbdma7},lcore{core2}@{cbdma8}," \ + f"lcore{core3}@{cbdma2},lcore{core3}@{cbdma4},lcore{core3}@{cbdma9}," \ + f"lcore{core3}@{cbdma10},lcore{core3}@{cbdma11},lcore{core3}@{cbdma12}," \ + f"lcore{core3}@{cbdma13},lcore{core3}@{cbdma14},lcore{core3}@{cbdma15}," \ + f"lcore{core4}@{cbdma16}]" + eal_param = "--vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6]'" + \ + " --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq1;txq2;txq3;txq4;txq5;txq6;txq7]'" + param = " --nb-cores=4 --txd=1024 --rxd=1024 --txq=8 --rxq=8" + " --lcore-dma={}".format(lcore_dma) + self.start_vhost_testpmd(cores=self.vhost_core_list, ports=self.cbdma_list, eal_param=eal_param, param=param, iova_mode='va') + self.check_ping_between_vms() + self.check_scp_file_valid_between_vms() + self.start_iperf() + self.get_perf_result() + + if not self.check_2M_env: + self.logger.info("Quit and relaunch vhost w/ iova=pa") + eal_param = "--vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6]'" + \ + " --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6]'" + param = " --nb-cores=4 --txd=1024 --rxd=1024 --txq=8 --rxq=8" + " --lcore-dma={}".format(lcore_dma) + self.pmdout_vhost_user.execute_cmd("quit", "#") + self.start_vhost_testpmd(cores=self.vhost_core_list, ports=self.cbdma_list, eal_param=eal_param, param=param, iova_mode="pa") + self.check_ping_between_vms() + self.check_scp_file_valid_between_vms() + self.start_iperf() + self.get_perf_result() + + self.logger.info("Quit and 
relaunch vhost w/o CBDMA channels") + self.pmdout_vhost_user.execute_cmd("quit", "#") + eal_param = "--vdev 'net_vhost0,iface=vhost-net0,client=1,queues=4'" + \ + " --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=4'" + param = " --nb-cores=4 --txd=1024 --rxd=1024 --txq=4 --rxq=4" + self.start_vhost_testpmd(cores=self.vhost_core_list, ports=self.cbdma_list, eal_param=eal_param, param=param) + self.config_vm_combined(combined=4) + self.check_ping_between_vms() + self.check_scp_file_valid_between_vms() + self.start_iperf() + self.get_perf_result() + + self.logger.info("Quit and relaunch vhost w/o CBDMA channels with 1 queue") + self.pmdout_vhost_user.execute_cmd("quit", "#") + eal_param = "--vdev 'net_vhost0,iface=vhost-net0,client=1,queues=4'" + \ + " --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=4'" + param = " --nb-cores=4 --txd=1024 --rxd=1024 --rxq=1 --txq=1" + self.start_vhost_testpmd(cores=self.vhost_core_list, ports=self.cbdma_list, eal_param=eal_param, param=param) + self.config_vm_combined(combined=1) + self.check_scp_file_valid_between_vms() + self.start_iperf() + self.get_perf_result() + + def test_vm2vm_split_ring_with_non_mergeable_path_8queue_check_large_packet_and_cbdma_enable(self): + """ + Test Case 3: VM2VM split ring vhost-user/virtio-net non-mergeable 8 queues CBDMA enable test with large packet payload valid check + """ + self.get_cbdma_ports_info_and_bind_to_dpdk(cbdma_num=16, allow_diff_socket=True) + dmas = self.generate_dms_param(8) + core1 = self.vhost_core_list[1] + core2 = self.vhost_core_list[2] + core3 = self.vhost_core_list[3] + core4 = self.vhost_core_list[4] + cbdma1 = self.cbdma_list[0] + cbdma2 = self.cbdma_list[1] + cbdma3 = self.cbdma_list[2] + cbdma4 = self.cbdma_list[3] + cbdma5 = self.cbdma_list[4] + cbdma6 = self.cbdma_list[5] + cbdma7 = self.cbdma_list[6] + cbdma8 = self.cbdma_list[7] + cbdma9 = self.cbdma_list[8] + cbdma10 = self.cbdma_list[9] + cbdma11 = self.cbdma_list[10] + cbdma12 = self.cbdma_list[11] + cbdma13 = self.cbdma_list[12] + cbdma14 = self.cbdma_list[13] + cbdma15 = self.cbdma_list[14] + cbdma16 = self.cbdma_list[15] + lcore_dma = f"[lcore{core1}@{cbdma1},lcore{core1}@{cbdma2},lcore{core1}@{cbdma3}," \ + f"lcore{core1}@{cbdma4},lcore{core1}@{cbdma5},lcore{core1}@{cbdma6}," \ + f"lcore{core2}@{cbdma7},lcore{core2}@{cbdma8}," \ + f"lcore{core3}@{cbdma9},lcore{core3}@{cbdma10},lcore{core3}@{cbdma11},lcore{core3}@{cbdma12}," \ + f"lcore{core3}@{cbdma13},lcore{core3}@{cbdma14},lcore{core3}@{cbdma15}," \ + f"lcore{core4}@{cbdma16}]" + + eal_param = "--vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas={}'".format(dmas) + \ + " --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas={}'".format(dmas) + param = " --nb-cores=4 --txd=1024 --rxd=1024 --txq=8 --rxq=8" + " --lcore-dma={}".format(lcore_dma) + self.start_vhost_testpmd(cores=self.vhost_core_list, ports=self.cbdma_list, eal_param=eal_param, param=param, iova_mode='va') + self.vm_args = "disable-modern=false,mrg_rxbuf=off,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on" + self.start_vms(server_mode=True, vm_queue=8) + self.config_vm_ip() + self.config_vm_combined(combined=8) + self.check_ping_between_vms() + self.check_scp_file_valid_between_vms() + self.start_iperf() + self.get_perf_result() + + self.logger.info("Quit and relaunch vhost w/ diff CBDMA channels") + self.pmdout_vhost_user.execute_cmd("quit", "#") + lcore_dma = f"[lcore{core1}@{cbdma1},lcore{core1}@{cbdma2}," \ + 
f"lcore{core1}@{cbdma3},lcore{core1}@{cbdma4}," \ + f"lcore{core2}@{cbdma1},lcore{core2}@{cbdma3},lcore{core2}@{cbdma5}," \ + f"lcore{core2}@{cbdma6},lcore{core2}@{cbdma7},lcore{core2}@{cbdma8}," \ + f"lcore{core3}@{cbdma2},lcore{core3}@{cbdma4},lcore{core3}@{cbdma9}," \ + f"lcore{core3}@{cbdma10},lcore{core3}@{cbdma11},lcore{core3}@{cbdma12}," \ + f"lcore{core3}@{cbdma13},lcore{core3}@{cbdma14},lcore{core3}@{cbdma15}," \ + f"lcore{core4}@{cbdma16}]" + eal_param = "--vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6]'" + \ + " --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq1;txq2;txq3;txq4;txq5;txq6]'" + param = " --nb-cores=4 --txd=1024 --rxd=1024 --txq=8 --rxq=8" + " --lcore-dma={}".format(lcore_dma) + self.start_vhost_testpmd(cores=self.vhost_core_list, ports=self.cbdma_list, eal_param=eal_param, param=param, iova_mode='va') + self.check_scp_file_valid_between_vms() + self.check_ping_between_vms() + self.start_iperf() + self.get_perf_result() + + self.logger.info("Quit and relaunch vhost w/o CBDMA channels") + self.pmdout_vhost_user.execute_cmd("quit", "#") + eal_param = "--vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8'" + \ + " --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8'" + param = " --nb-cores=4 --txd=1024 --rxd=1024 --txq=8 --rxq=8" + self.start_vhost_testpmd(cores=self.vhost_core_list, ports=self.cbdma_list, eal_param=eal_param, param=param, iova_mode='va') + self.config_vm_combined(combined=4) + self.check_ping_between_vms() + self.check_scp_file_valid_between_vms() + self.start_iperf() + self.get_perf_result() + + self.logger.info("Quit and relaunch vhost w/o CBDMA channels with 1 queue") + self.pmdout_vhost_user.execute_cmd("quit", "#") + eal_param = "--vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8'" + \ + " --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8'" + param = " --nb-cores=4 --txd=1024 --rxd=1024 --txq=1 --rxq=1" + self.start_vhost_testpmd(cores=self.vhost_core_list, ports=self.cbdma_list, eal_param=eal_param, param=param) + self.config_vm_combined(combined=1) + self.check_ping_between_vms() + self.check_scp_file_valid_between_vms() + self.start_iperf() + self.get_perf_result() + + def test_vm2vm_split_ring_with_mergeable_path_16queue_check_large_packet_and_cbdma_enable(self): + """ + Test Case 4: VM2VM split ring vhost-user/virtio-net mergeable 16 queues CBDMA enable test with large packet payload valid check + """ + self.get_cbdma_ports_info_and_bind_to_dpdk(cbdma_num=16, allow_diff_socket=True) + dmas = self.generate_dms_param(16) + lcore_dma = self.generate_lcore_dma_param(cbdma_list=self.cbdma_list, core_list=self.vhost_core_list[1:9]) + eal_param = "--vdev 'net_vhost0,iface=vhost-net0,client=1,queues=16,dmas={}'".format(dmas) + \ + " --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=16,dmas={}'".format(dmas) + param = " --nb-cores=8 --txd=1024 --rxd=1024 --txq=16 --rxq=16" + " --lcore-dma={}".format(lcore_dma) + self.start_vhost_testpmd(cores=self.vhost_core_list, ports=self.cbdma_list, eal_param=eal_param, param=param, iova_mode='va') + self.vm_args = "disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on" + self.start_vms(server_mode=True, vm_queue=16) + self.config_vm_ip() + self.config_vm_combined(combined=16) + self.check_ping_between_vms() + self.check_scp_file_valid_between_vms() + self.start_iperf() + self.get_perf_result() + + def 
test_vm2vm_packed_ring_iperf_with_tso_and_cbdma_enable(self): + """ + Test Case 5: VM2VM packed ring vhost-user/virtio-net CBDMA enable test with tcp traffic + """ + self.get_cbdma_ports_info_and_bind_to_dpdk(2) + dmas = self.generate_dms_param(1) + lcore_dma = self.generate_lcore_dma_param(cbdma_list=self.cbdma_list, core_list=self.vhost_core_list[1:3]) + eal_param = "--vdev 'net_vhost0,iface=vhost-net0,queues=1,dmas={}'".format(dmas) + \ + " --vdev 'net_vhost1,iface=vhost-net1,queues=1,dmas={}'".format(dmas) + param = " --nb-cores=2 --txd=1024 --rxd=1024 --txq=1 --rxq=1" + " --lcore-dma={}".format(lcore_dma) + self.start_vhost_testpmd(cores=self.vhost_core_list, ports=self.cbdma_list, eal_param=eal_param, param=param, iova_mode='va') + self.vm_args = "disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on" + self.start_vms(server_mode=False, vm_queue=1) + self.config_vm_ip() + self.check_ping_between_vms() + self.start_iperf() + self.get_perf_result() + self.verify_xstats_info_on_vhost() + + def test_vm2vm_packed_ring_with_mergeable_path_8queue_check_large_packet_and_cbdma_enable(self): + """ + Test Case 6: VM2VM virtio-net packed ring mergeable 8 queues CBDMA enable test with large packet payload valid check + """ + self.get_cbdma_ports_info_and_bind_to_dpdk(16, allow_diff_socket=True) + dmas = self.generate_dms_param(7) + core1 = self.vhost_core_list[1] + core2 = self.vhost_core_list[2] + core3 = self.vhost_core_list[3] + core4 = self.vhost_core_list[4] + cbdma1 = self.cbdma_list[0] + cbdma2 = self.cbdma_list[1] + cbdma3 = self.cbdma_list[2] + cbdma4 = self.cbdma_list[3] + cbdma5 = self.cbdma_list[4] + cbdma6 = self.cbdma_list[5] + cbdma7 = self.cbdma_list[6] + cbdma8 = self.cbdma_list[7] + cbdma9 = self.cbdma_list[8] + cbdma10 = self.cbdma_list[9] + cbdma11 = self.cbdma_list[10] + cbdma12 = self.cbdma_list[11] + cbdma13 = self.cbdma_list[12] + cbdma14 = self.cbdma_list[13] + cbdma15 = self.cbdma_list[14] + cbdma16 = self.cbdma_list[15] + lcore_dma = f"[lcore{core1}@{cbdma1},lcore{core1}@{cbdma2},lcore{core1}@{cbdma3},lcore{core1}@{cbdma4}," \ + f"lcore{core2}@{cbdma1},lcore{core2}@{cbdma3},lcore{core2}@{cbdma5},lcore{core2}@{cbdma6},lcore{core2}@{cbdma7},lcore{core2}@{cbdma8}," \ + f"lcore{core3}@{cbdma2},lcore{core3}@{cbdma4},lcore{core3}@{cbdma9},lcore{core3}@{cbdma10},lcore{core3}@{cbdma11},lcore{core3}@{cbdma12},lcore{core3}@{cbdma13},lcore{core3}@{cbdma14},lcore{core3}@{cbdma15}," \ + f"lcore{core4}@{cbdma16}]" + eal_param = "--vdev 'net_vhost0,iface=vhost-net0,queues=8,dmas={}'".format(dmas) + \ + " --vdev 'net_vhost1,iface=vhost-net1,queues=8,dmas={}'".format(dmas) + param = " --nb-cores=4 --txd=1024 --rxd=1024 --txq=8 --rxq=8" + " --lcore-dma={}".format(lcore_dma) + self.start_vhost_testpmd(cores=self.vhost_core_list, ports=self.cbdma_list, eal_param=eal_param, param=param, iova_mode='va') + # self.vm_args = "disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on" + self.vm_args = "disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on" + self.start_vms(server_mode=False, vm_queue=8) + self.config_vm_ip() + self.check_ping_between_vms() + self.config_vm_combined(combined=8) + for _ in range(6): + self.check_scp_file_valid_between_vms() + self.start_iperf() + self.get_perf_result() + + def test_vm2vm_packed_ring_with_non_mergeable_path_8queue_check_large_packet_and_cbdma_enable(self): + 
""" + Test Case 7: VM2VM virtio-net packed ring non-mergeable 8 queues CBDMA enable test with large packet payload valid check + """ + self.get_cbdma_ports_info_and_bind_to_dpdk(16, allow_diff_socket=True) + dmas = self.generate_dms_param(8) + core1 = self.vhost_core_list[1] + core2 = self.vhost_core_list[2] + core3 = self.vhost_core_list[3] + core4 = self.vhost_core_list[4] + cbdma1 = self.cbdma_list[0] + cbdma2 = self.cbdma_list[1] + cbdma3 = self.cbdma_list[2] + cbdma4 = self.cbdma_list[3] + cbdma5 = self.cbdma_list[4] + cbdma6 = self.cbdma_list[5] + cbdma7 = self.cbdma_list[6] + cbdma8 = self.cbdma_list[7] + cbdma9 = self.cbdma_list[8] + cbdma10 = self.cbdma_list[9] + cbdma11 = self.cbdma_list[10] + cbdma12 = self.cbdma_list[11] + cbdma13 = self.cbdma_list[12] + cbdma14 = self.cbdma_list[13] + cbdma15 = self.cbdma_list[14] + cbdma16 = self.cbdma_list[15] + lcore_dma = f"[lcore{core1}@{cbdma1},lcore{core1}@{cbdma2},lcore{core1}@{cbdma3}," \ + f"lcore{core1}@{cbdma4},lcore{core1}@{cbdma5},lcore{core1}@{cbdma6}," \ + f"lcore{core2}@{cbdma7},lcore{core2}@{cbdma8}," \ + f"lcore{core3}@{cbdma9},lcore{core3}@{cbdma10},lcore{core3}@{cbdma11},lcore{core3}@{cbdma12}," \ + f"lcore{core3}@{cbdma13},lcore{core3}@{cbdma14},lcore{core3}@{cbdma15}," \ + f"lcore{core4}@{cbdma16}]" + eal_param = "--vdev 'net_vhost0,iface=vhost-net0,queues=8,dmas={}'".format(dmas) + \ + " --vdev 'net_vhost1,iface=vhost-net1,queues=8,dmas={}'".format(dmas) + param = " --nb-cores=4 --txd=1024 --rxd=1024 --txq=8 --rxq=8" + " --lcore-dma={}".format(lcore_dma) + self.start_vhost_testpmd(cores=self.vhost_core_list, ports=self.cbdma_list, eal_param=eal_param, param=param, + iova_mode='va') + self.vm_args = "disable-modern=false,mrg_rxbuf=off,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on" + self.start_vms(server_mode=False, vm_queue=8) + self.config_vm_ip() + self.config_vm_combined(combined=8) + self.check_ping_between_vms() + for _ in range(6): + self.check_scp_file_valid_between_vms() + self.start_iperf() + self.get_perf_result() + + def test_vm2vm_packed_ring_with_mergeable_path_16queue_check_large_packet_and_cbdma_enable(self): + """ + Test Case 8: VM2VM virtio-net packed ring mergeable 16 queues CBDMA enabled test with large packet payload valid check + """ + self.get_cbdma_ports_info_and_bind_to_dpdk(16, allow_diff_socket=True) + dmas = self.generate_dms_param(16) + lcore_dma = self.generate_lcore_dma_param(cbdma_list=self.cbdma_list, core_list=self.vhost_core_list[1:9]) + eal_param = "--vdev 'net_vhost0,iface=vhost-net0,queues=16,dmas={}'".format(dmas) + \ + " --vdev 'net_vhost1,iface=vhost-net1,queues=16,dmas={}'".format(dmas) + param = " --nb-cores=8 --txd=1024 --rxd=1024 --txq=16 --rxq=16" + " --lcore-dma={}".format(lcore_dma) + self.start_vhost_testpmd(cores=self.vhost_core_list, ports=self.cbdma_list, eal_param=eal_param, param=param, + iova_mode='va') + self.vm_args = "disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on" + self.start_vms(server_mode=False, vm_queue=16) + self.config_vm_ip() + self.config_vm_combined(combined=16) + self.check_ping_between_vms() + for _ in range(6): + self.check_scp_file_valid_between_vms() + self.start_iperf() + self.get_perf_result() + + def test_vm2vm_packed_ring_iperf_with_tso_when_set_ivoa_pa_and_cbdma_enable(self): + """ + Test Case 9: VM2VM packed ring vhost-user/virtio-net CBDMA enable test with tcp traffic when 
set iova=pa + """ + self.get_cbdma_ports_info_and_bind_to_dpdk(2) + dmas = self.generate_dms_param(1) + lcore_dma = self.generate_lcore_dma_param(cbdma_list=self.cbdma_list, core_list=self.vhost_core_list[1:3]) + eal_param = "--vdev 'net_vhost0,iface=vhost-net0,queues=1,dmas={}'".format(dmas) + \ + " --vdev 'net_vhost1,iface=vhost-net1,queues=1,dmas={}'".format(dmas) + param = " --nb-cores=2 --txd=1024 --rxd=1024 --txq=1 --rxq=1" + " --lcore-dma={}".format(lcore_dma) + self.start_vhost_testpmd(cores=self.vhost_core_list, ports=self.cbdma_list, eal_param=eal_param, param=param, + iova_mode='pa') + self.vm_args = "disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on" + self.start_vms(server_mode=False, vm_queue=1) + self.config_vm_ip() + self.check_ping_between_vms() + self.check_scp_file_valid_between_vms() + self.start_iperf() + self.get_perf_result() + self.verify_xstats_info_on_vhost() + + def test_vm2vm_packed_ring_with_mergeable_path_8queue_check_large_packet_when_set_ivoa_pa_and_cbdma_enable(self): + """ + Test Case 10: VM2VM virtio-net packed ring mergeable 8 queues CBDMA enable and PA mode test with large packet payload valid check + """ + self.get_cbdma_ports_info_and_bind_to_dpdk(cbdma_num=16, allow_diff_socket=True) + dmas = self.generate_dms_param(7) + core1 = self.vhost_core_list[1] + core2 = self.vhost_core_list[2] + core3 = self.vhost_core_list[3] + core4 = self.vhost_core_list[4] + cbdma1 = self.cbdma_list[0] + cbdma2 = self.cbdma_list[1] + cbdma3 = self.cbdma_list[2] + cbdma4 = self.cbdma_list[3] + cbdma5 = self.cbdma_list[4] + cbdma6 = self.cbdma_list[5] + cbdma7 = self.cbdma_list[6] + cbdma8 = self.cbdma_list[7] + cbdma9 = self.cbdma_list[8] + cbdma10 = self.cbdma_list[9] + cbdma11 = self.cbdma_list[10] + cbdma12 = self.cbdma_list[11] + cbdma13 = self.cbdma_list[12] + cbdma14 = self.cbdma_list[13] + cbdma15 = self.cbdma_list[14] + cbdma16 = self.cbdma_list[15] + lcore_dma = f"[lcore{core1}@{cbdma1},lcore{core1}@{cbdma2},lcore{core1}@{cbdma3}," \ + f"lcore{core1}@{cbdma4},lcore{core1}@{cbdma5},lcore{core1}@{cbdma6}," \ + f"lcore{core2}@{cbdma7},lcore{core2}@{cbdma8}," \ + f"lcore{core3}@{cbdma9},lcore{core3}@{cbdma10},lcore{core3}@{cbdma11},lcore{core3}@{cbdma12}," \ + f"lcore{core3}@{cbdma13},lcore{core3}@{cbdma14},lcore{core3}@{cbdma15}," \ + f"lcore{core4}@{cbdma16}]" + eal_param = "--vdev 'net_vhost0,iface=vhost-net0,queues=8,dmas={}'".format(dmas) + \ + " --vdev 'net_vhost1,iface=vhost-net1,queues=8,dmas={}'".format(dmas) + param = " --nb-cores=4 --txd=1024 --rxd=1024 --txq=8 --rxq=8" + " --lcore-dma={}".format(lcore_dma) + self.start_vhost_testpmd(cores=self.vhost_core_list, ports=self.cbdma_list, eal_param=eal_param, param=param, + iova_mode='pa') + self.vm_args = "disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on" + self.start_vms(server_mode=False, vm_queue=8) + self.config_vm_ip() + self.check_ping_between_vms() + for _ in range(1): + self.check_scp_file_valid_between_vms() + self.start_iperf() + self.get_perf_result() + + def stop_all_apps(self): + for i in range(len(self.vm)): + self.vm[i].stop() + self.pmdout_vhost_user.quit() + + def tear_down(self): + """ + run after each test case. + """ + self.stop_all_apps() + self.dut.kill_all() + self.bind_cbdma_device_to_kernel() + + def tear_down_all(self): + """ + Run after each test suite. 
+ """ + self.dut.close_session(self.vhost) From patchwork Wed Apr 6 09:10:43 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Ling, WeiX" X-Patchwork-Id: 109193 Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 7D8B3A0509; Wed, 6 Apr 2022 11:10:51 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 72D81427EB; Wed, 6 Apr 2022 11:10:51 +0200 (CEST) Received: from mga09.intel.com (mga09.intel.com [134.134.136.24]) by mails.dpdk.org (Postfix) with ESMTP id 68EFF40689 for ; Wed, 6 Apr 2022 11:10:50 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1649236250; x=1680772250; h=from:to:cc:subject:date:message-id:mime-version: content-transfer-encoding; bh=6TlYj70NveUuRwtfKJDhC+AoEOaLqyfe0w0Vu3EHrBA=; b=jVA9zkQ+1Hl3WRdCKncrIeHkEXsOhPI8WO2RuADYogg2pxTSgsbIUsW5 cApZUrAWR4K4L94Sgj8/0G7Xo+RkKEsBDdLVg7nOTgtNkVqYVfLWe7wxS +XdEkGe1+4IwOWHcUPyFxiv5ZrP1ZN/tY6DSYeFwHCl/TPpNEHJs+NPrR aZZ5iSffnzdMGxbxHJfQL1GtpwnyQUW7ricw/ZqSCTRiFu2ybLRynZ9xH CAhcpF8HOw/tzgGAb32ga4pX2oOv8WqQxwQLKCSTh+LoXHj0/ewYM5qyy HNnS6ac0Svk2cU3ayxFjl1jCzB9qa+UecFEWTMFTHK3dFvZ7fCOLWa/AJ w==; X-IronPort-AV: E=McAfee;i="6200,9189,10308"; a="260690854" X-IronPort-AV: E=Sophos;i="5.90,239,1643702400"; d="scan'208";a="260690854" Received: from orsmga008.jf.intel.com ([10.7.209.65]) by orsmga102.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 06 Apr 2022 02:10:49 -0700 X-IronPort-AV: E=Sophos;i="5.90,239,1643702400"; d="scan'208";a="570425149" Received: from unknown (HELO localhost.localdomain) ([10.239.251.222]) by orsmga008-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 06 Apr 2022 02:10:48 -0700 From: Wei Ling To: dts@dpdk.org Cc: Wei Ling Subject: [dts][PATCH V1 5/5] test_plans/index: add new testsuite Date: Wed, 6 Apr 2022 17:10:43 +0800 Message-Id: <20220406091043.28499-1-weix.ling@intel.com> X-Mailer: git-send-email 2.25.1 MIME-Version: 1.0 X-BeenThere: dts@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: test suite reviews and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dts-bounces@dpdk.org As commit 53d3f4778c(vhost: integrate dmadev in asynchronous data-path), add vm2vm_virtio_net_perf_cbdma new testplan into index.rst. Signed-off-by: Wei Ling Tested-by: Chenyu Huang --- test_plans/index.rst | 1 + 1 file changed, 1 insertion(+) diff --git a/test_plans/index.rst b/test_plans/index.rst index f8118d14..ff8cba5d 100644 --- a/test_plans/index.rst +++ b/test_plans/index.rst @@ -229,6 +229,7 @@ The following are the test plans for the DPDK DTS automated test system. virtio_perf_cryptodev_func_test_plan virtio_smoke_test_plan vm2vm_virtio_net_perf_test_plan + vm2vm_virtio_net_perf_cbdma_test_plan vm2vm_virtio_pmd_test_plan dpdk_gro_lib_test_plan dpdk_gso_lib_test_plan