From patchwork Fri Jul 2 17:28:01 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Wang, Yinan" X-Patchwork-Id: 95205 Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 1C87CA0A0C; Fri, 2 Jul 2021 10:44:26 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 1383B41354; Fri, 2 Jul 2021 10:44:26 +0200 (CEST) Received: from mga07.intel.com (mga07.intel.com [134.134.136.100]) by mails.dpdk.org (Postfix) with ESMTP id EDDF540686 for ; Fri, 2 Jul 2021 10:44:23 +0200 (CEST) X-IronPort-AV: E=McAfee;i="6200,9189,10032"; a="272552553" X-IronPort-AV: E=Sophos;i="5.83,316,1616482800"; d="scan'208";a="272552553" Received: from orsmga002.jf.intel.com ([10.7.209.21]) by orsmga105.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 02 Jul 2021 01:44:22 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.83,316,1616482800"; d="scan'208";a="426440381" Received: from dpdk-yinan-ntb1.sh.intel.com ([10.67.119.39]) by orsmga002.jf.intel.com with ESMTP; 02 Jul 2021 01:44:21 -0700 From: Yinan Wang To: dts@dpdk.org Cc: Yinan Wang Date: Fri, 2 Jul 2021 13:28:01 -0400 Message-Id: <20210702172801.839789-1-yinan.wang@intel.com> X-Mailer: git-send-email 2.25.1 MIME-Version: 1.0 Subject: [dts] [PATCH v2] test_plans/vm2vm_virtio_pmd: add three cbdma cases X-BeenThere: dts@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: test suite reviews and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dts-bounces@dpdk.org Sender: "dts" Signed-off-by: Yinan Wang --- test_plans/vm2vm_virtio_pmd_test_plan.rst | 246 ++++++++++++++++++++-- 1 file changed, 230 insertions(+), 16 deletions(-) diff --git a/test_plans/vm2vm_virtio_pmd_test_plan.rst b/test_plans/vm2vm_virtio_pmd_test_plan.rst index db410e48..0b1d4a7f 100644 --- a/test_plans/vm2vm_virtio_pmd_test_plan.rst +++ b/test_plans/vm2vm_virtio_pmd_test_plan.rst @@ -1,4 +1,4 @@ -.. Copyright (c) <2019>, Intel Corporation +.. Copyright (c) <2021>, Intel Corporation All rights reserved. Redistribution and use in source and binary forms, with or without @@ -38,20 +38,6 @@ This test plan includes vm2vm mergeable, normal and vector_rx path test with vir also add mergeable and normal path test with virtio 1.1. Specially, three mergeable path cases check the payload of each packets are valid by using pdump. -Prerequisites -============= - -Enable pcap lib in dpdk code and recompile:: - - --- a/config/common_base - +++ b/config/common_base - @@ -492,7 +492,7 @@ CONFIG_RTE_LIBRTE_PMD_NULL=y - # - # Compile software PMD backed by PCAP files - # - -CONFIG_RTE_LIBRTE_PMD_PCAP=n - +CONFIG_RTE_LIBRTE_PMD_PCAP=y - Test flow ========= Virtio-pmd <-> Vhost-user <-> Testpmd <-> Vhost-user <-> Virtio-pmd @@ -593,4 +579,232 @@ Test Case 8: VM2VM vhost-user/virtio1.1-pmd with normal path xxxxx Throughput (since last show) RX-pps: xxx - TX-pps: xxx \ No newline at end of file + TX-pps: xxx + +Test Case 9: VM2VM virtio-pmd split ring mergeable path 8 queues CBDMA enable with server mode stable test +========================================================================================================== + +1. 
Bind 16 cbdma channels to igb_uio driver, then launch the testpmd with 2 vhost port and 8 queues by below commands:: + + ./dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0@00:04.0;txq1@00:04.1;txq2@00:04.2;txq3@00:04.3;txq4@00:04.4;txq5@00:04.5;txq6@00:04.6;txq7@00:04.7],dmathr=512' \ + --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0@80:04.0;txq1@80:04.1;txq2@80:04.2;txq3@80:04.3;txq4@80:04.4;txq5@80:04.5;txq6@80:04.6;txq7@80:04.7],dmathr=512' -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8 + testpmd>vhost enable tx all + testpmd>start + +2. Launch VM1 and VM2 using qemu5.2.0:: + + taskset -c 6-16 /home/qemu-install/qemu-5.2/bin/qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 9 -m 4096 \ + -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \ + -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04.img \ + -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \ + -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \ + -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ + -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \ + -chardev socket,id=char0,path=./vhost-net0,server \ + -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \ + -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :10 + + taskset -c 17-27 /home/qemu-install/qemu-5.2/bin/qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 9 -m 4096 \ + -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \ + -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04-2.img \ + -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \ + -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \ + -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ + -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \ + -chardev socket,id=char0,path=./vhost-net1,server \ + -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \ + -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :12 + +3. On VM1 and VM2, bind virtio device with vfio-pci driver:: + + modprobe vfio + modprobe vfio-pci + echo 1 > /sys/module/vfio/parameters/enable_unsafe_noiommu_mode + ./usertools/dpdk-devbind.py --force --bind=vfio-pci 0000:00:05.0 + +4. Launch testpmd in VM1:: + + ./dpdk-testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x00 --enable-hw-vlan-strip --txq=8 --rxq=8 --txd=1024 --rxd=1024 --max-pkt-len=9600 + testpmd>set mac fwd + testpmd>start + +5. Launch testpmd in VM2, sent imix pkts from VM2:: + + ./testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x00 --enable-hw-vlan-strip --txq=8 --rxq=8 --txd=1024 --rxd=1024 --max-pkt-len=9600 + testpmd>set mac fwd + testpmd>set txpkts 64,256,512,1024,2000,64,256,512,1024,2000 + testpmd>start tx_first 1 + +6. Check imix packets can looped between two VMs for 1 mins and 8 queues all have packets rx/tx:: + + testpmd>show port stats all + testpmd>stop + +7. 
Relaunch and start vhost side testpmd with below cmd, change cbdma threshold for one vhost port's cbdma channels:: + + ./dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0@00:04.0;txq1@00:04.1;txq2@00:04.2;txq3@00:04.3;txq4@00:04.4;txq5@00:04.5;txq6@00:04.6;txq7@00:04.7],dmathr=512' \ + --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0@80:04.0;txq1@80:04.1;txq2@80:04.2;txq3@80:04.3;txq4@80:04.4;txq5@80:04.5;txq6@80:04.6;txq7@80:04.7],dmathr=64' -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8 + testpmd>start + +8. Send pkts by testpmd in VM2, check imix packets can looped between two VMs for 1 mins and 8 queues all have packets rx/tx:: + + testpmd>stop + testpmd>start tx_first 1 + testpmd>show port stats all + testpmd>stop + +9. Rerun step 7-8 for 10 times. + +Test Case 10: VM2VM virtio-pmd split ring mergeable path dynamic queue size CBDMA enable with server mode test +============================================================================================================== + +1. Bind 16 cbdma channels to igb_uio driver, then launch the testpmd with 2 vhost ports below commands:: + + ./dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0@00:04.0;txq1@00:04.1;txq2@00:04.2;txq3@00:04.3;txq4@00:04.4;txq5@00:04.5;txq6@00:04.6;txq7@00:04.7],dmathr=512' \ + --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0@80:04.0;txq1@80:04.1;txq2@80:04.2;txq3@80:04.3;txq4@80:04.4;txq5@80:04.5;txq6@80:04.6;txq7@80:04.7],dmathr=512' -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=4 --txq=4 + testpmd>vhost enable tx all + testpmd>start + +2. Launch VM1 and VM2 using qemu5.2.0:: + + taskset -c 6-16 /home/qemu-install/qemu-5.2/bin/qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 9 -m 4096 \ + -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \ + -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04.img \ + -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \ + -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \ + -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ + -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \ + -chardev socket,id=char0,path=./vhost-net0,server \ + -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \ + -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :10 + + taskset -c 17-27 /home/qemu-install/qemu-5.2/bin/qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 9 -m 4096 \ + -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \ + -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04-2.img \ + -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \ + -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \ + -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ + -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \ + -chardev socket,id=char0,path=./vhost-net1,server \ + -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \ + -device 
virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :12 + +3. On VM1 and VM2, bind virtio device with vfio-pci driver:: + + modprobe vfio + modprobe vfio-pci + echo 1 > /sys/module/vfio/parameters/enable_unsafe_noiommu_mode + ./usertools/dpdk-devbind.py --force --bind=vfio-pci 0000:00:05.0 + +4. Launch testpmd in VM1:: + + ./testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x00 --enable-hw-vlan-strip --txq=8 --rxq=8 --txd=1024 --rxd=1024 --max-pkt-len=9600 + testpmd>set mac fwd + testpmd>start + +5. Launch testpmd in VM2 and send imix pkts, check imix packets can looped between two VMs for 1 mins and 4 queues (queue0 to queue3) have packets rx/tx:: + + ./testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x00 --enable-hw-vlan-strip --txq=8 --rxq=8 --txd=1024 --rxd=1024 --max-pkt-len=9600 + testpmd>set mac fwd + testpmd>set txpkts 64,256,512,1024,2000,64,256,512,1024,2000 + testpmd>start tx_first 32 + testpmd>show port stats all + testpmd>stop + +6. Relaunch and start vhost side testpmd with eight queues, change cbdma threshold for one vhost port's cbdma channels:: + + ./dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0@00:04.0;txq1@00:04.1;txq2@00:04.2;txq3@00:04.3;txq4@00:04.4;txq5@00:04.5;txq6@00:04.6;txq7@00:04.7],dmathr=512' \ + --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0@80:04.0;txq1@80:04.1;txq2@80:04.2;txq3@80:04.3;txq4@80:04.4;txq5@80:04.5;txq6@80:04.6;txq7@80:04.7],dmathr=64' -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8 + testpmd>start + +7. Send pkts by testpmd in VM2, check imix packets can looped between two VMs for 1 mins and 8 queues all have packets rx/tx:: + + testpmd>stop + testpmd>start tx_first 32 + testpmd>show port stats all + testpmd>stop + +8. Rerun step 6-7 for 10 times. + +Test Case 11: VM2VM virtio-pmd packed ring mergeable path 8 queues CBDMA enable test +===================================================================================== + +1. Bind 16 cbdma channels to igb_uio driver, then launch the testpmd with 2 vhost port and 8 queues by below commands:: + + rm -rf vhost-net* + ./dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=8,dmas=[txq0@00:04.0;txq1@00:04.1;txq2@00:04.2;txq3@00:04.3;txq4@00:04.4;txq5@00:04.5;txq6@00:04.6;txq7@00:04.7],dmathr=512' \ + --vdev 'net_vhost1,iface=vhost-net1,queues=8,dmas=[txq0@80:04.0;txq1@80:04.1;txq2@80:04.2;txq3@80:04.3;txq4@80:04.4;txq5@80:04.5;txq6@80:04.6;txq7@80:04.7],dmathr=512' -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8 + testpmd>vhost enable tx all + testpmd>start + +2. 
Launch VM1 and VM2 with qemu 5.2.0:: + + taskset -c 6-16 qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 9 -m 4096 \ + -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \ + -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04.img \ + -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \ + -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \ + -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ + -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \ + -chardev socket,id=char0,path=./vhost-net0 \ + -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \ + -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on -vnc :10 + + taskset -c 17-27 qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 9 -m 4096 \ + -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \ + -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04-2.img \ + -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \ + -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \ + -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ + -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \ + -chardev socket,id=char0,path=./vhost-net1 \ + -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \ + -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on -vnc :12 + +3. On VM1 and VM2, bind virtio device with vfio-pci driver:: + + modprobe vfio + modprobe vfio-pci + echo 1 > /sys/module/vfio/parameters/enable_unsafe_noiommu_mode + ./usertools/dpdk-devbind.py --force --bind=vfio-pci 0000:00:05.0 + +4. Launch testpmd in VM1:: + + ./testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x00 --enable-hw-vlan-strip --txq=8 --rxq=8 --txd=1024 --rxd=1024 --max-pkt-len=9600 + testpmd>set mac fwd + testpmd>start + +5. Launch testpmd in VM2 and send imix pkts, check imix packets can looped between two VMs for 1 mins and 8 queues all have packets rx/tx:: + + ./testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x00 --enable-hw-vlan-strip --txq=8 --rxq=8 --txd=1024 --rxd=1024 --max-pkt-len=9600 + testpmd>set mac fwd + testpmd>set txpkts 64,256,512,1024,20000,64,256,512,1024,20000 + testpmd>start tx_first 32 + testpmd>show port stats all + testpmd>stop + +6. 
Quit VM2 and relaunch VM2 with split ring::
+
+    taskset -c 6-16 qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 9 -m 4096 \
+    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
+    -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04.img \
+    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
+    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
+    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \
+    -chardev socket,id=char0,path=./vhost-net0 \
+    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :10
+
+7. Bind the virtio device to the vfio-pci driver, launch testpmd in VM2 and send imix pkts, then check that imix packets can be looped between the two VMs for 1 minute and that all 8 queues have rx/tx packets::
+
+    modprobe vfio
+    modprobe vfio-pci
+    echo 1 > /sys/module/vfio/parameters/enable_unsafe_noiommu_mode
+    ./usertools/dpdk-devbind.py --force --bind=vfio-pci 0000:00:05.0
+    ./testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x00 --enable-hw-vlan-strip --txq=8 --rxq=8 --txd=1024 --rxd=1024 --max-pkt-len=9600
+    testpmd>set mac fwd
+    testpmd>set txpkts 64,256,512,1024,20000,64,256,512,1024,20000
+    testpmd>start tx_first 32
+    testpmd>show port stats all
+    testpmd>stop
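+
+The "Bind 16 cbdma channels to igb_uio driver" step used by the three cases above is not spelled
+out in the steps, so a minimal sketch with dpdk-devbind.py is added here. It assumes igb_uio is
+already built and loaded, and that the CBDMA channels enumerate at the same bus addresses that
+appear in the dmas lists (00:04.0-00:04.7 and 80:04.0-80:04.7); check the real addresses with
+"./usertools/dpdk-devbind.py -s" and adjust as needed::
+
+    # Bind all 16 CBDMA channels referenced by the dmas lists to igb_uio.
+    ./usertools/dpdk-devbind.py --force --bind=igb_uio \
+    0000:00:04.0 0000:00:04.1 0000:00:04.2 0000:00:04.3 0000:00:04.4 0000:00:04.5 0000:00:04.6 0000:00:04.7 \
+    0000:80:04.0 0000:80:04.1 0000:80:04.2 0000:80:04.3 0000:80:04.4 0000:80:04.5 0000:80:04.6 0000:80:04.7
+
+For the "check ... queues all have packets rx/tx" steps, the per-queue check relies on the
+"testpmd>stop" command, which prints per-stream (per-queue) RX/TX forwarding statistics in
+addition to the aggregate "show port stats all" counters.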