From patchwork Tue Aug 16 03:00:25 2022
X-Patchwork-Submitter: "Ling, WeiX"
X-Patchwork-Id: 115143
From: Wei Ling
To: dts@dpdk.org
Cc: Wei Ling
Subject: [dts][PATCH V2 1/3] test_plans/basic_4k_pages_cbdma_test_plan: modify testplan to test virtio dequeue
Date: Mon, 15 Aug 2022 23:00:25 -0400
Message-Id: <20220816030025.3416146-1-weix.ling@intel.com>
X-Mailer: git-send-email 2.25.1

From DPDK-22.07, virtio supports async dequeue for the split and packed ring paths, so modify the basic_4k_pages_cbdma test plan to test the split and packed ring async dequeue feature.

Signed-off-by: Wei Ling
---
 test_plans/basic_4k_pages_cbdma_test_plan.rst | 624 ++++++++++++------
 1 file changed, 426 insertions(+), 198 deletions(-)

diff --git a/test_plans/basic_4k_pages_cbdma_test_plan.rst b/test_plans/basic_4k_pages_cbdma_test_plan.rst index 5bbd19b4..e5763998 100644 --- a/test_plans/basic_4k_pages_cbdma_test_plan.rst +++ b/test_plans/basic_4k_pages_cbdma_test_plan.rst @@ -5,27 +5,26 @@ Basic test with CBDMA in 4K-pages test plan =========================================== -DPDK 19.02 add support for using virtio-user without hugepages. The --no-huge mode was augmented to use memfd-backed memory +DPDK 19.02 add support for using virtio-user without hugepages. The --no-huge mode was augmented to use memfd-backed memory (on systems that support memfd), to allow using virtio-user-based NICs without hugepages.
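For reference, a minimal virtio-user launch without hugepages looks like the sketch below. It only illustrates the --no-huge/memfd mechanism described above; the core list, build directory and socket path are illustrative assumptions, not values taken from this test plan::

    # Launch virtio-user testpmd backed by 4K pages only (no hugetlbfs needed).
    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 5-6 -n 4 --no-huge -m 1024 --no-pci \
      --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/vhost-net,queues=1 -- -i
    # On systems with memfd support, the EAL memory of the process should show up
    # as memfd-backed file descriptors instead of hugepage mappings.
    ls -l /proc/$(pidof dpdk-testpmd)/fd | grep memfd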
Vhost asynchronous data path leverages DMA devices to offload memory copies from the CPU and it is implemented in an asynchronous way. -In addition, vhost supports M:N mapping between vrings and DMA virtual channels. Specifically, one vring can use multiple different DMA -channels and one DMA channel can be shared by multiple vrings at the same time. Vhost enqueue operation with CBDMA channels is supported -in both split and packed ring. +In addition, vhost supports M:N mapping between vrings and DMA virtual channels. Specifically, one vring can use multiple different DMA +channels and one DMA channel can be shared by multiple vrings at the same time. From DPDK22.07, Vhost enqueue and dequeue operation with +CBDMA channels is supported in both split and packed ring. This document provides the test plan for testing some basic functions with CBDMA device in 4K-pages memory environment. 1. Test split and packed ring virtio path in the PVP topology environmet. 2. Check Vhost tx offload function by verifing the TSO/cksum in the TCP/IP stack with vm2vm split ring and packed ring vhost-user/virtio-net mergeable path. -3.Check the payload of large packet (larger than 1MB) is valid after forwarding packets with vm2vm split ring and packed ring +3.Check the payload of large packet (larger than 1MB) is valid after forwarding packets with vm2vm split ring and packed ring vhost-user/virtio-net mergeable path. -4. Vhost-user using 1G hugepges and virtio-user using 4k-pages. -.. note: +Note: - 1. When CBDMA channels are bound to vfio driver, VA mode is the default and recommended. - For PA mode, page by page mapping may exceed IOMMU's max capability, better to use 1G guest hugepage. - 2. DPDK local patch that about vhost pmd is needed when testing Vhost asynchronous data path with testpmd. And case 4-5 have not yet been automated. +1. When CBDMA channels are bound to vfio driver, VA mode is the default and recommended. +For PA mode, page by page mapping may exceed IOMMU's max capability, better to use 1G guest hugepage. +2. DPDK local patch that about vhost pmd is needed when testing Vhost asynchronous data path with testpmd. And case 4-5 have not yet been automated. For more about dpdk-testpmd sample, please refer to the DPDK docments: https://doc.dpdk.org/guides/testpmd_app_ug/run_app.html @@ -45,10 +44,10 @@ General set up 2. Compile DPDK:: - # CC=gcc meson --werror -Denable_kmods=True -Dlibdir=lib -Dexamples=all --default-library= + # CC=gcc meson --werror -Denable_kmods=True -Dlibdir=lib -Dexamples=all --default-library=static # ninja -C -j 110 For example: - CC=gcc meson --werror -Denable_kmods=True -Dlibdir=lib -Dexamples=all --default-library=x86_64-native-linuxapp-gcc + CC=gcc meson --werror -Denable_kmods=True -Dlibdir=lib -Dexamples=all --default-library=static x86_64-native-linuxapp-gcc ninja -C x86_64-native-linuxapp-gcc -j 110 3. Get the PCI device ID and DMA device ID of DUT, for example, 0000:18:00.0 is PCI device ID, 0000:00:04.0, 0000:00:04.1 is DMA device ID:: @@ -84,213 +83,284 @@ Common steps For example, bind 1 NIC port and 1 CBDMA channels: # ./usertools/dpdk-devbind.py -b vfio-pci 0000:af:00.0,0000:80:04.0 -Test Case 1: Basic test vhost/virtio-user split ring with 4K-pages and cbdma enable ------------------------------------------------------------------------------------ -This case uses testpmd Traffic Generator(For example, Trex) to test split ring when vhost uses the asynchronous enqueue operations with CBDMA channels -in 4k-pages environment. 
And the mapping between vrings and dsa virtual channels is 1:1. +Test Case 1: Basic test vhost-user/virtio-user split ring vhost async operation using 4K-pages and cbdma enable +--------------------------------------------------------------------------------------------------------------- +This case tests basic functions of split ring virtio path when uses the asynchronous operations with CBDMA channels in 4K-pages memory environment and PVP vhost-user/virtio-user topology. -1. Bind 1 NIC port and 1 CBDMA channel to vfio-pci, as common steps 1. +1. Bind one port to vfio-pci, launch vhost:: -2. Launch vhost by below command:: + ./usertools/dpdk-devbind.py --bind=vfio-pci xx:xx.x + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 3-4 -n 4 -m 1024 --no-huge --file-prefix=vhost -a 0000:18:00.0 -a 0000:00:04.0 \ + --vdev 'net_vhost0,iface=/tmp/vhost-net,queues=1,dmas=[txq0;rxq0]' -- -i --no-numa --socket-num=0 --lcore-dma=[lcore4@0000:00:04.0] + testpmd>start - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 31-32 -n 4 --no-huge -m 1024 --file-prefix=vhost -a 0000:af:00.0 -a 0000:80:04.0 \ - --vdev 'net_vhost0,iface=/tmp/vhost-net,queues=1,dmas=[txq0]' -- -i --no-numa --socket-num=1 --lcore-dma=[lcore32@0000:80:04.0] - testpmd> start +2. Prepare tmpfs with 4K-pages:: + + mkdir /mnt/tmpfs_yinan + mount tmpfs /mnt/tmpfs_yinan -t tmpfs -o size=4G 3. Launch virtio-user with 4K-pages:: - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 33-34 -n 4 --no-huge -m 1024 --file-prefix=virtio-user --no-pci \ - --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost-net,queues=1 -- -i - testpmd> set fwd mac - testpmd> start + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 5-6 -n 4 --no-huge -m 1024 --file-prefix=virtio-user \ + --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/vhost-net,queues=1 -- -i + testpmd>start 4. Send packet with packet generator with different packet size,includes [64, 128, 256, 512, 1024, 1518], check the throughput with below command:: - testpmd> show port stats all + testpmd>show port stats all + +Test Case 2: Basic test vhost-user/virtio-user packed ring vhost async operation using 4K-pages and cbdma enable +---------------------------------------------------------------------------------------------------------------- +This case tests basic functions of packed ring virtio path when uses the asynchronous operations with CBDMA channels in 4K-pages memory environment and PVP vhost-user/virtio-user topology. -Test Case 2: Basic test vhost/virtio-user packed ring with 4K-pages and cbdma enable ------------------------------------------------------------------------------------- -This case uses testpmd Traffic Generator(For example, Trex) to test packed ring when vhost uses the asynchronous enqueue operations with CBDMA channels -in 4k-pages environment. And the mapping between vrings and dsa virtual channels is 1:1. +1. Bind one port to vfio-pci, launch vhost:: -1. Bind 1 NIC port and 1 CBDMA channel to vfio-pci, as common steps 1. + modprobe vfio-pci + ./usertools/dpdk-devbind.py --bind=vfio-pci xx:xx.x + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 3-4 -n 4 -m 1024 --no-huge --file-prefix=vhost -a 0000:18:00.0 -a 0000:00:04.0 \ + --vdev 'net_vhost0,iface=/tmp/vhost-net,queues=1,dmas=[txq0;rxq0]' -- -i --no-numa --socket-num=0 --lcore-dma=[lcore4@0000:00:04.0] + testpmd>start -2. Launch vhost by below command:: +2. 
Prepare tmpfs with 4K-pages:: - ./usertools/dpdk-devbind.py --bind=vfio-pci 0000:af:00.0 0000:80:04.0 - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 31-32 -n 4 --no-huge -m 1024 --file-prefix=vhost -a 0000:af:00.0 -a 0000:80:04.0 \ - --vdev 'net_vhost0,iface=/tmp/vhost-net,queues=1,dmas=[txq0]' -- -i --no-numa --socket-num=1 --lcore-dma=[lcore32@0000:80:04.0] - testpmd> start + mkdir /mnt/tmpfs_yinan + mount tmpfs /mnt/tmpfs_yinan -t tmpfs -o size=4G 3. Launch virtio-user with 4K-pages:: - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 33-34 -n 4 --no-huge -m 1024 --file-prefix=virtio-user --no-pci \ - --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost-net,packed_vq=1,queues=1 -- -i - testpmd> set fwd mac - testpmd> start + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 5-6 -n 4 --no-huge -m 1024 --file-prefix=virtio-user \ + --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/vhost-net,packed_vq=1,queues=1 -- -i + testpmd>start 4. Send packet with packet generator with different packet size,includes [64, 128, 256, 512, 1024, 1518], check the throughput with below command:: - testpmd> show port stats all + testpmd>show port stats all + +Test Case 3: VM2VM vhost-user/virtio-net split ring vhost async operation test with tcp traffic using 4K-pages and cbdma enable +------------------------------------------------------------------------------------------------------------------------------- +This case test the function of Vhost TSO in the topology of vhost-user/virtio-net split ring mergeable path by verifing the TSO/cksum in the TCP/IP stack +when vhost uses the asynchronous operations with CBDMA channels in 4K-pages memory environment. -Test Case 3: VM2VM split ring vhost-user/virtio-net 4K-pages and CBDMA enable test with tcp traffic ---------------------------------------------------------------------------------------------------- -This case test the function of Vhost tx offload in the topology of vhost-user/virtio-net split ring mergeable path -by verifing the TSO/cksum in the TCP/IP stack when vhost uses the asynchronous enqueue operations with CBDMA channels -in 4k-pages environment. +1. Bind 2 CBDMA channels to vfio-pci, then launch vhost by below command:: -1. Bind 2 CBDMA channels to vfio-pci, as common steps 1. + rm -rf vhost-net* + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-4 -n 4 --no-huge -m 1024 --file-prefix=vhost -a 0000:00:04.0 -a 0000:00:04.1 \ + --vdev 'net_vhost0,iface=vhost-net0,queues=1,tso=1,dmas=[txq0;rxq0],dma_ring_size=2048' \ + --vdev 'net_vhost1,iface=vhost-net1,queues=1,tso=1,dmas=[txq0;rxq0],dma_ring_size=2048' \ + --iova=va -- -i --nb-cores=2 --txd=1024 --rxd=1024 --lcore-dma=[lcore3@0000:00:04.0,lcore4@0000:00:04.1] + testpmd>start -2. Launch vhost by below command:: +2. Prepare tmpfs with 4K-pages:: - ./usertools/dpdk-devbind.py --bind=vfio-pci 0000:80:04.0 0000:80:04.1 - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 30-32 -n 4 --no-huge -m 1024 --file-prefix=vhost -a 0000:80:04.0 -a 0000:80:04.1 \ - --vdev 'net_vhost0,iface=vhost-net0,queues=1,dmas=[txq0],dma_ring_size=2048' \ - --vdev 'net_vhost1,iface=vhost-net1,queues=1,dmas=[txq0],dma_ring_size=2048' \ - --iova=va -- -i --nb-cores=2 --txd=1024 --rxd=1024 --lcore-dma=[lcore31@0000:80:04.0,lcore32@0000:80:04.1] - testpmd> start + mkdir /mnt/tmpfs_yinan0 + mkdir /mnt/tmpfs_yinan1 + mount tmpfs /mnt/tmpfs_yinan0 -t tmpfs -o size=4G + mount tmpfs /mnt/tmpfs_yinan1 -t tmpfs -o size=4G 3. 
Launch VM1 and VM2:: - taskset -c 20,21,22,23,24,25,26,27 /home/QEMU/qemu-6.2.0/bin/qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 1 -m 4096 \ - -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_nohuge0,share=on \ - -numa node,memdev=mem -mem-prealloc -drive file=/home/image/ubuntu2004.img \ - -chardev socket,path=/tmp/vm1_qga0.sock,server,nowait,id=vm1_qga0 -device virtio-serial \ - -device virtserialport,chardev=vm1_qga0,name=org.qemu.guest_agent.2 -daemonize \ - -monitor unix:/tmp/vm1_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ - -netdev user,id=nttsip1,hostfwd=tcp:10.239.251.220:6000-:22 \ - -chardev socket,id=char0,path=./vhost-net0 \ - -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \ - -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on -vnc :10 - - taskset -c 48,49,50,51,52,53,54,55 /home/QEMU/qemu-6.2.0/bin/qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 1 -m 4096 \ - -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_nohuge1,share=on \ - -numa node,memdev=mem -mem-prealloc -drive file=/home/image/ubuntu2004_2.img \ - -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \ - -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \ - -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ - -netdev user,id=nttsip1,hostfwd=tcp:10.239.251.220:6001-:22 \ - -chardev socket,id=char0,path=./vhost-net1 \ - -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \ - -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on -vnc :12 + taskset -c 32 qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 1 -m 4096 \ + -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_yinan0,share=on \ + -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04.img \ + -chardev socket,path=/tmp/vm1_qga0.sock,server,nowait,id=vm1_qga0 -device virtio-serial \ + -device virtserialport,chardev=vm1_qga0,name=org.qemu.guest_agent.2 -daemonize \ + -monitor unix:/tmp/vm1_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ + -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \ + -chardev socket,id=char0,path=./vhost-net0 \ + -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \ + -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on -vnc :10 + + taskset -c 33 qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 1 -m 4096 \ + -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_yinan1,share=on \ + -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04-2.img \ + -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \ + -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \ + -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ + -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \ + -chardev socket,id=char0,path=./vhost-net1 \ + -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \ + -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on -vnc :12 4. 
On VM1, set virtio device IP and run arp protocal:: - ifconfig ens5 1.1.1.2 - arp -s 1.1.1.8 52:54:00:00:00:02 + ifconfig ens5 1.1.1.2 + arp -s 1.1.1.8 52:54:00:00:00:02 5. On VM2, set virtio device IP and run arp protocal:: - ifconfig ens5 1.1.1.8 - arp -s 1.1.1.2 52:54:00:00:00:01 + ifconfig ens5 1.1.1.8 + arp -s 1.1.1.2 52:54:00:00:00:01 6. Check the iperf performance between two VMs by below commands:: - Under VM1, run: `iperf -s -i 1` - Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60` + Under VM1, run: `iperf -s -i 1` + Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60` 7. Check 2VMs can receive and send big packets to each other:: - testpmd> show port xstats all - Port 0 should have tx packets above 1522 - Port 1 should have rx packets above 1522 + testpmd>show port xstats all + Port 0 should have tx packets above 1522 + Port 1 should have rx packets above 1522 -Test Case 4: vm2vm vhost/virtio-net split ring multi queues with 4K-pages and cbdma enable -------------------------------------------------------------------------------------------- -This case uses iperf and scp to test the payload of large packet (larger than 1MB) is valid after packets forwarding in -vm2vm vhost-user/virtio-net split ring mergeable path when vhost uses the asynchronous enqueue operations with CBDMA channel. -The dynamic change of multi-queues number also test. +Test Case 4: VM2VM vhost-user/virtio-net packed ring vhost async operation test with tcp traffic using 4K-pages and cbdma enable +-------------------------------------------------------------------------------------------------------------------------------- +This case test the function of Vhost TSO in the topology of vhost-user/virtio-net packed ring mergeable path by verifing the TSO/cksum in the TCP/IP stack +when vhost uses the asynchronous operations with CBDMA channels in 4K-pages memory environment. + +1. Bind 2 CBDMA channels to vfio-pci, then launch vhost by below command:: + + rm -rf vhost-net* + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-4 -n 4 --no-huge -m 1024 --file-prefix=vhost -a 0000:00:04.0 -a 0000:00:04.1 \ + --vdev 'net_vhost0,iface=vhost-net0,queues=1,tso=1,dmas=[txq0;rxq0],dma_ring_size=2048' \ + --vdev 'net_vhost1,iface=vhost-net1,queues=1,tso=1,dmas=[txq0;rxq0],dma_ring_size=2048' \ + --iova=va -- -i --nb-cores=2 --txd=1024 --rxd=1024 --lcore-dma=[lcore3@0000:00:04.0,lcore4@0000:00:04.1] + testpmd>start + +2. Prepare tmpfs with 4K-pages:: + + mkdir /mnt/tmpfs_yinan0 + mkdir /mnt/tmpfs_yinan1 + mount tmpfs /mnt/tmpfs_yinan0 -t tmpfs -o size=4G + mount tmpfs /mnt/tmpfs_yinan1 -t tmpfs -o size=4G + +3. 
Launch VM1 and VM2:: + + taskset -c 32 qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 1 -m 4096 \ + -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_yinan0,share=on \ + -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04.img \ + -chardev socket,path=/tmp/vm1_qga0.sock,server,nowait,id=vm1_qga0 -device virtio-serial \ + -device virtserialport,chardev=vm1_qga0,name=org.qemu.guest_agent.2 -daemonize \ + -monitor unix:/tmp/vm1_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ + -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \ + -chardev socket,id=char0,path=./vhost-net0 \ + -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \ + -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on -vnc :10 + + taskset -c 33 qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 1 -m 4096 \ + -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_yinan1,share=on \ + -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04-2.img \ + -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \ + -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \ + -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ + -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \ + -chardev socket,id=char0,path=./vhost-net1 \ + -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \ + -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on -vnc :12 + +4. On VM1, set virtio device IP and run arp protocal:: -1. Bind 16 CBDMA channels to vfio-pci, as common steps 1. + ifconfig ens5 1.1.1.2 + arp -s 1.1.1.8 52:54:00:00:00:02 -2. Launch vhost by below command:: +5. On VM2, set virtio device IP and run arp protocal:: - ./usertools/dpdk-devbind.py --bind=vfio-pci 0000:80:04.0 0000:80:04.1 0000:80:04.2 0000:80:04.3 0000:80:04.4 0000:80:04.5 0000:80:04.6 0000:80:04.7 \ - 0000:00:04.0 0000:00:04.1 0000:00:04.2 0000:00:04.3 0000:00:04.4 0000:00:04.5 0000:00:04.6 0000:00:04.7 + ifconfig ens5 1.1.1.8 + arp -s 1.1.1.2 52:54:00:00:00:01 - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 29-33 -n 4 --no-huge -m 1024 --file-prefix=vhost \ - -a 0000:80:04.0 -a 0000:80:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \ - -a 0000:80:04.0 -a 0000:80:04.1 -a 0000:80:04.2 -a 0000:80:04.3 -a 0000:80:04.4 -a 0000:80:04.5 -a 0000:80:04.6 -a 0000:80:04.7 \ - --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7]' \ - --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7]' \ - --iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8 \ - --lcore-dma=[lcore30@0000:80:04.0,lcore30@0000:80:04.1,lcore30@0000:00:04.2,lcore30@0000:00:04.3,lcore30@0000:00:04.4,lcore30@0000:00:04.5,lcore31@0000:00:04.6,lcore31@0000:00:04.7,lcore32@0000:80:04.0,lcore32@0000:80:04.1,lcore32@0000:80:04.2,lcore32@0000:80:04.3,lcore32@0000:80:04.4,lcore32@0000:80:04.5,lcore32@0000:80:04.6,lcore33@0000:80:04.7] +6. Check the iperf performance between two VMs by below commands:: - testpmd> start + Under VM1, run: `iperf -s -i 1` + Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60` + +7. 
Check 2VMs can receive and send big packets to each other:: + + testpmd>show port xstats all + Port 0 should have tx packets above 1522 + Port 1 should have rx packets above 1522 + +Test Case 5: vm2vm vhost/virtio-net split ring multi queues using 4K-pages and cbdma enable +------------------------------------------------------------------------------------------- +This case uses iperf and scp to test the payload of large packet (larger than 1MB) is valid after packets forwarding in vm2vm vhost-user/virtio-net +split ring mergeable path when vhost uses the asynchronous operations with CBDMA channels in 4K-pages memory environment. The dynamic change of multi-queues number is also tested. +1. Bind one port to vfio-pci, launch vhost:: + + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --no-huge -m 1024 --file-prefix=vhost -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \ + -a 0000:80:04.0 -a 0000:80:04.1 -a 0000:80:04.2 -a 0000:80:04.3 -a 0000:80:04.4 -a 0000:80:04.5 -a 0000:80:04.6 -a 0000:80:04.7 \ + --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7]' \ + --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7]' \ + --iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8 \ + --lcore-dma=[lcore2@0000:00:04.0,lcore2@0000:00:04.1,lcore2@0000:00:04.2,lcore2@0000:00:04.3,lcore2@0000:00:04.4,lcore2@0000:00:04.5,lcore3@0000:00:04.6,lcore3@0000:00:04.7,lcore4@0000:80:04.0,lcore4@0000:80:04.1,lcore4@0000:80:04.2,lcore4@0000:80:04.3,lcore4@0000:80:04.4,lcore4@0000:80:04.5,lcore4@0000:80:04.6,lcore5@0000:80:04.7] + testpmd>start + +2. Prepare tmpfs with 4K-pages:: + + mkdir /mnt/tmpfs_yinan0 + mkdir /mnt/tmpfs_yinan1 + mount tmpfs /mnt/tmpfs_yinan0 -t tmpfs -o size=4G + mount tmpfs /mnt/tmpfs_yinan1 -t tmpfs -o size=4G 3. 
Launch VM qemu:: - taskset -c 20,21,22,23,24,25,26,27 /home/QEMU/qemu-6.2.0/bin/qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 8 -m 4096 \ - -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \ - -numa node,memdev=mem -mem-prealloc -drive file=/home/image/ubuntu2004.img \ - -chardev socket,path=/tmp/vm1_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \ - -device virtserialport,chardev=vm1_qga0,name=org.qemu.guest_agent.2 -daemonize \ - -monitor unix:/tmp/vm1_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ - -netdev user,id=nttsip1,hostfwd=tcp:10.239.251.220:6000-:22 \ - -chardev socket,id=char0,path=./vhost-net0,server \ - -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \ - -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :10 - - taskset -c 48,49,50,51,52,53,54,55 /home/QEMU/qemu-6.2.0/bin/qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 8 -m 4096 \ - -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_yinan1,share=on \ - -numa node,memdev=mem -mem-prealloc -drive file=/home/image/ubuntu2004_2.img \ - -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \ - -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \ - -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ - -netdev user,id=nttsip1,hostfwd=tcp:10.239.251.220:6001-:22 \ - -chardev socket,id=char0,path=./vhost-net1,server \ - -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \ - -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :12 + taskset -c 32 qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 8 -m 4096 \ + -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_yinan0,share=on \ + -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04.img \ + -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \ + -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \ + -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ + -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \ + -chardev socket,id=char0,path=./vhost-net0,server \ + -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \ + -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :10 + + taskset -c 40 qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 8 -m 4096 \ + -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_yinan1,share=on \ + -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04-2.img \ + -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \ + -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \ + -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ + -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \ + -chardev socket,id=char0,path=./vhost-net1,server \ + -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \ + -device 
virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :12 4. On VM1, set virtio device IP and run arp protocal:: - ethtool -L ens5 combined 8 - ifconfig ens5 1.1.1.2 - arp -s 1.1.1.8 52:54:00:00:00:02 + ethtool -L ens5 combined 8 + ifconfig ens5 1.1.1.2 + arp -s 1.1.1.8 52:54:00:00:00:02 5. On VM2, set virtio device IP and run arp protocal:: - ethtool -L ens5 combined 8 - ifconfig ens5 1.1.1.8 - arp -s 1.1.1.2 52:54:00:00:00:01 + ethtool -L ens5 combined 8 + ifconfig ens5 1.1.1.8 + arp -s 1.1.1.2 52:54:00:00:00:01 6. Scp 1MB file form VM1 to VM2:: - Under VM1, run: `scp [xxx] root@1.1.1.8:/` [xxx] is the file name + Under VM1, run: `scp [xxx] root@1.1.1.8:/` [xxx] is the file name 7. Check the iperf performance between two VMs by below commands:: - Under VM1, run: `iperf -s -i 1` - Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60` + Under VM1, run: `iperf -s -i 1` + Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60` 8. Quit and relaunch vhost w/ diff CBDMA channels:: - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 29-33 -n 4 --no-huge -m 1024 --file-prefix=vhost \ - -a 0000:80:04.0 -a 0000:80:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \ - -a 0000:80:04.0 -a 0000:80:04.1 -a 0000:80:04.2 -a 0000:80:04.3 -a 0000:80:04.4 -a 0000:80:04.5 -a 0000:80:04.6 -a 0000:80:04.7 \ - --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6]' \ - --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq1;txq2;txq3;txq4;txq5;txq6;txq7]' \ - --iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8 \ - --lcore-dma=[lcore30@0000:80:04.0,lcore30@0000:80:04.1,lcore30@0000:00:04.2,lcore30@0000:00:04.3,lcore31@0000:80:04.0,lcore31@0000:00:04.2,lcore31@0000:00:04.4,lcore31@0000:00:04.5,lcore31@0000:00:04.6,lcore31@0000:00:04.7,lcore32@0000:80:04.1,lcore32@0000:00:04.3,lcore32@0000:80:04.0,lcore32@0000:80:04.1,lcore32@0000:80:04.2,lcore32@0000:80:04.3,lcore32@0000:80:04.4,lcore32@0000:80:04.5,lcore32@0000:80:04.6,lcore33@0000:80:04.7] - testpmd> start + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --no-huge -m 1024 --file-prefix=vhost -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \ + -a 0000:80:04.0 -a 0000:80:04.1 -a 0000:80:04.2 -a 0000:80:04.3 -a 0000:80:04.4 -a 0000:80:04.5 -a 0000:80:04.6 -a 0000:80:04.7 \ + --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \ + --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \ + --iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8 \ + --lcore-dma=[lcore2@0000:00:04.0,lcore2@0000:00:04.1,lcore2@0000:00:04.2,lcore2@0000:00:04.3,lcore3@0000:00:04.0,lcore3@0000:00:04.2,lcore3@0000:00:04.4,lcore3@0000:00:04.5,lcore3@0000:00:04.6,lcore3@0000:00:04.7,lcore4@0000:00:04.1,lcore4@0000:00:04.3,lcore4@0000:80:04.0,lcore4@0000:80:04.1,lcore4@0000:80:04.2,lcore4@0000:80:04.3,lcore4@0000:80:04.4,lcore4@0000:80:04.5,lcore4@0000:80:04.6,lcore5@0000:80:04.7] + testpmd>start 9. Rerun step 6-7. 10. 
Quit and relaunch vhost w/o CBDMA channels:: - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 29-33 -n 4 --no-huge -m 1024 --file-prefix=vhost \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost \ --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=4' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=4' \ -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=4 --txq=4 - testpmd> start + testpmd>start 11. On VM1, set virtio device:: - ethtool -L ens5 combined 4 + ethtool -L ens5 combined 4 12. On VM2, set virtio device:: - ethtool -L ens5 combined 4 + ethtool -L ens5 combined 4 13. Scp 1MB file form VM1 to VM2:: @@ -303,91 +373,249 @@ The dynamic change of multi-queues number also test. 15. Quit and relaunch vhost with 1 queues:: - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 29-33 -n 4 --no-huge -m 1024 --file-prefix=vhost \ - --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=4' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=4' \ - -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=1 --txq=1 - testpmd> start + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --no-huge -m 1024 --file-prefix=vhost \ + --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=4' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=4' \ + -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=1 --txq=1 + testpmd>start 16. On VM1, set virtio device:: - ethtool -L ens5 combined 1 + ethtool -L ens5 combined 1 17. On VM2, set virtio device:: - ethtool -L ens5 combined 1 + ethtool -L ens5 combined 1 18. Scp 1MB file form VM1 to VM2M, check packets can be forwarding success by scp:: - Under VM1, run: `scp [xxx] root@1.1.1.8:/` [xxx] is the file name + Under VM1, run: `scp [xxx] root@1.1.1.8:/` [xxx] is the file name 19. Check the iperf performance, ensure queue0 can work from vhost side:: - Under VM1, run: `iperf -s -i 1` - Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60` + Under VM1, run: `iperf -s -i 1` + Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60` + +Test Case 6: vm2vm vhost/virtio-net packed ring multi queues using 4K-pages and cbdma enable +-------------------------------------------------------------------------------------------- +This case uses iperf and scp to test the payload of large packet (larger than 1MB) is valid after packets forwarding in vm2vm vhost-user/virtio-net +packed ring mergeable path when vhost uses the asynchronous operations with CBDMA channels in 4K-pages memory environment. + +1. Bind one port to vfio-pci, launch vhost:: + + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --no-huge -m 1024 --file-prefix=vhost -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \ + -a 0000:80:04.0 -a 0000:80:04.1 -a 0000:80:04.2 -a 0000:80:04.3 -a 0000:80:04.4 -a 0000:80:04.5 -a 0000:80:04.6 -a 0000:80:04.7 \ + --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7]' \ + --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7]' \ + --iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8 \ + --lcore-dma=[lcore2@0000:00:04.0,lcore2@0000:00:04.1,lcore2@0000:00:04.2,lcore2@0000:00:04.3,lcore2@0000:00:04.4,lcore2@0000:00:04.5,lcore3@0000:00:04.6,lcore3@0000:00:04.7,lcore4@0000:80:04.0,lcore4@0000:80:04.1,lcore4@0000:80:04.2,lcore4@0000:80:04.3,lcore4@0000:80:04.4,lcore4@0000:80:04.5,lcore4@0000:80:04.6,lcore5@0000:80:04.7] + testpmd>start + +2. 
Prepare tmpfs with 4K-pages:: + + mkdir /mnt/tmpfs_yinan0 + mkdir /mnt/tmpfs_yinan1 + mount tmpfs /mnt/tmpfs_yinan0 -t tmpfs -o size=4G + mount tmpfs /mnt/tmpfs_yinan1 -t tmpfs -o size=4G + +3. Launch VM qemu:: + + taskset -c 32 qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 8 -m 4096 \ + -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_yinan0,share=on \ + -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04.img \ + -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \ + -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \ + -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ + -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \ + -chardev socket,id=char0,path=./vhost-net0,server \ + -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \ + -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on -vnc :10 + + taskset -c 40 qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 8 -m 4096 \ + -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_yinan1,share=on \ + -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04-2.img \ + -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \ + -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \ + -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ + -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \ + -chardev socket,id=char0,path=./vhost-net1,server \ + -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \ + -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on -vnc :12 + +4. On VM1, set virtio device IP and run arp protocal:: + + ethtool -L ens5 combined 8 + ifconfig ens5 1.1.1.2 + arp -s 1.1.1.8 52:54:00:00:00:02 -Test Case 5: vm2vm vhost/virtio-net split packed ring multi queues with 1G/4k-pages and cbdma enable +5. On VM2, set virtio device IP and run arp protocal:: + + ethtool -L ens5 combined 8 + ifconfig ens5 1.1.1.8 + arp -s 1.1.1.2 52:54:00:00:00:01 + +6. Scp 1MB file form VM1 to VM2:: + + Under VM1, run: `scp [xxx] root@1.1.1.8:/` [xxx] is the file name + +7. Check the iperf performance between two VMs by below commands:: + + Under VM1, run: `iperf -s -i 1` + Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60` + +Test Case 7: vm2vm vhost/virtio-net split ring multi queues using 1G/4k-pages and cbdma enable +---------------------------------------------------------------------------------------------- +This case uses iperf and scp to test the payload of large packet (larger than 1MB) is valid after packets forwarding in vm2vm vhost-user/virtio-net +split ring mergeable path when vhost uses the asynchronous operations with CBDMA channels,the back-end is in 1G-pages memory environment and the front-end is in 4k-pages memory environment. + +1. 
Bind 16 CBDMA channel to vfio-pci, launch vhost:: + + ./usertools/dpdk-devbind.py --bind=vfio-pci 0000:80:04.0 0000:80:04.1 0000:80:04.2 0000:80:04.3 0000:80:04.4 0000:80:04.5 0000:80:04.6 0000:80:04.7 \ + 0000:00:04.0 0000:00:04.1 0000:00:04.2 0000:00:04.3 0000:00:04.4 0000:00:04.5 0000:00:04.6 0000:00:04.7 + + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 29-33 -n 4 -m 1024 --file-prefix=vhost \ + -a 0000:80:04.0 -a 0000:80:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \ + -a 0000:80:04.0 -a 0000:80:04.1 -a 0000:80:04.2 -a 0000:80:04.3 -a 0000:80:04.4 -a 0000:80:04.5 -a 0000:80:04.6 -a 0000:80:04.7 \ + --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \ + --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \ + --iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8 \ + --lcore-dma=[lcore30@0000:80:04.0,lcore30@0000:80:04.1,lcore30@0000:00:04.2,lcore30@0000:00:04.3,lcore31@0000:00:04.4,lcore31@0000:00:04.5,lcore31@0000:00:04.6,lcore31@0000:00:04.7,lcore32@0000:80:04.0,lcore32@0000:80:04.1,lcore32@0000:80:04.2,lcore32@0000:80:04.3,lcore33@0000:80:04.4,lcore33@0000:80:04.5,lcore33@0000:80:04.6,lcore33@0000:80:04.7] + testpmd>start + +2. Prepare tmpfs with 4K-pages:: + + mkdir /mnt/tmpfs_yinan0 + mkdir /mnt/tmpfs_yinan1 + mount tmpfs /mnt/tmpfs_yinan0 -t tmpfs -o size=4G + mount tmpfs /mnt/tmpfs_yinan1 -t tmpfs -o size=4G + +3. Launch VM qemu:: + + taskset -c 20,21,22,23,24,25,26,27 /home/QEMU/qemu-6.2.0/bin/qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 8 -m 4096 \ + -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_yinan0,share=on \ + -numa node,memdev=mem -mem-prealloc -drive file=/home/image/ubuntu2004.img \ + -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \ + -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \ + -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ + -netdev user,id=nttsip1,hostfwd=tcp:10.239.251.220:6000-:22 \ + -chardev socket,id=char0,path=./vhost-net0,server \ + -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \ + -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :10 + + taskset -c 48,49,50,51,52,53,54,55 /home/QEMU/qemu-6.2.0/bin/qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 8 -m 4096 \ + -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_yinan1,share=on \ + -numa node,memdev=mem -mem-prealloc -drive file=/home/image/ubuntu2004_2.img \ + -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \ + -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \ + -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ + -netdev user,id=nttsip1,hostfwd=tcp:10.239.251.220:6001-:22 \ + -chardev socket,id=char0,path=./vhost-net1,server \ + -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \ + -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :12 + +4. 
On VM1, set virtio device IP and run arp protocal:: + + ethtool -L ens5 combined 8 + ifconfig ens5 1.1.1.2 + arp -s 1.1.1.8 52:54:00:00:00:02 + +5. On VM2, set virtio device IP and run arp protocal:: + + ethtool -L ens5 combined 8 + ifconfig ens5 1.1.1.8 + arp -s 1.1.1.2 52:54:00:00:00:01 + +6. Scp 1MB file form VM1 to VM2:: + + Under VM1, run: `scp [xxx] root@1.1.1.8:/` [xxx] is the file name + +7. Check the iperf performance between two VMs by below commands:: + + Under VM1, run: `iperf -s -i 1` + Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60` + +8. Quit and relaunch vhost w/ diff CBDMA channels:: + + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --no-huge -m 1024 --file-prefix=vhost -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \ + -a 0000:80:04.0 -a 0000:80:04.1 -a 0000:80:04.2 -a 0000:80:04.3 -a 0000:80:04.4 -a 0000:80:04.5 -a 0000:80:04.6 -a 0000:80:04.7 \ + --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7;rxq0;rxq1;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \ + --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7;rxq0;rxq1;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \ + --iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8 \ + --lcore-dma=[lcore2@0000:00:04.0,lcore2@0000:00:04.1,lcore2@0000:00:04.2,lcore2@0000:00:04.3,lcore3@0000:00:04.0,lcore3@0000:00:04.2,lcore3@0000:00:04.4,lcore3@0000:00:04.5,lcore3@0000:00:04.6,lcore3@0000:00:04.7,lcore4@0000:00:04.1,lcore4@0000:00:04.3,lcore4@0000:80:04.0,lcore4@0000:80:04.1,lcore4@0000:80:04.2,lcore4@0000:80:04.3,lcore4@0000:80:04.4,lcore4@0000:80:04.5,lcore4@0000:80:04.6,lcore5@0000:80:04.7] + testpmd>start + +9. Rerun step 6-7. + +Test Case 8: vm2vm vhost/virtio-net split packed ring multi queues with 1G/4k-pages and cbdma enable ---------------------------------------------------------------------------------------------------- -This case uses iperf and scp to test the payload of large packet (larger than 1MB) is valid after packets forwarding in -vm2vm vhost-user/virtio-net multi-queues mergeable path when vhost uses the asynchronous enqueue operations with CBDMA -channels. And one virtio-net is split ring, the other is packed ring. The vhost run in 1G hugepages and the virtio-user run in 4k-pages environment. +This case uses iperf and scp to test the payload of large packet (larger than 1MB) is valid after packets forwarding in vm2vm vhost-user/virtio-net +split and packed ring mergeable path when vhost uses the asynchronous operations with CBDMA channels,the back-end is in 1G-pages memory environment and the front-end is in 4k-pages memory environment. -1. Bind 16 CBDMA channel to vfio-pci, as common steps 1. +1. Bind 16 CBDMA channel to vfio-pci, launch vhost:: -2. 
Launch vhost by below command:: + ./usertools/dpdk-devbind.py --bind=vfio-pci 0000:80:04.0 0000:80:04.1 0000:80:04.2 0000:80:04.3 0000:80:04.4 0000:80:04.5 0000:80:04.6 0000:80:04.7 \ + 0000:00:04.0 0000:00:04.1 0000:00:04.2 0000:00:04.3 0000:00:04.4 0000:00:04.5 0000:00:04.6 0000:00:04.7 - ./usertools/dpdk-devbind.py --bind=vfio-pci 0000:80:04.0 0000:80:04.1 0000:80:04.2 0000:80:04.3 0000:80:04.4 0000:80:04.5 0000:80:04.6 0000:80:04.7 \ - 0000:00:04.0 0000:00:04.1 0000:00:04.2 0000:00:04.3 0000:00:04.4 0000:00:04.5 0000:00:04.6 0000:00:04.7 + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 29-33 -n 4 -m 1024 --file-prefix=vhost \ + -a 0000:80:04.0 -a 0000:80:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \ + -a 0000:80:04.0 -a 0000:80:04.1 -a 0000:80:04.2 -a 0000:80:04.3 -a 0000:80:04.4 -a 0000:80:04.5 -a 0000:80:04.6 -a 0000:80:04.7 \ + --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \ + --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \ + --iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8 \ + --lcore-dma=[lcore30@0000:80:04.0,lcore30@0000:80:04.1,lcore30@0000:00:04.2,lcore30@0000:00:04.3,lcore31@0000:00:04.4,lcore31@0000:00:04.5,lcore31@0000:00:04.6,lcore31@0000:00:04.7,lcore32@0000:80:04.0,lcore32@0000:80:04.1,lcore32@0000:80:04.2,lcore32@0000:80:04.3,lcore33@0000:80:04.4,lcore33@0000:80:04.5,lcore33@0000:80:04.6,lcore33@0000:80:04.7] + testpmd>start - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 29-33 -n 4 -m 1024 --file-prefix=vhost \ - -a 0000:80:04.0 -a 0000:80:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \ - -a 0000:80:04.0 -a 0000:80:04.1 -a 0000:80:04.2 -a 0000:80:04.3 -a 0000:80:04.4 -a 0000:80:04.5 -a 0000:80:04.6 -a 0000:80:04.7 \ - --vdev 'net_vhost0,iface=vhost-net0,queues=8,dmas=[txq0;txq1;txq2;txq3;]' \ - --vdev 'net_vhost1,iface=vhost-net1,queues=8,dmas=[txq0;txq1;txq2;txq3;]' \ - --iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8 \ - --lcore-dma=[lcore30@0000:80:04.0,lcore30@0000:80:04.1,lcore30@0000:00:04.2,lcore30@0000:00:04.3,lcore31@0000:00:04.4,lcore31@0000:00:04.5,lcore31@0000:00:04.6,lcore31@0000:00:04.7,lcore32@0000:80:04.0,lcore32@0000:80:04.1,lcore32@0000:80:04.2,lcore32@0000:80:04.3,lcore33@0000:80:04.4,lcore33@0000:80:04.5,lcore33@0000:80:04.6,lcore33@0000:80:04.7] - testpmd> start +2. Prepare tmpfs with 4K-pages:: + + mkdir /mnt/tmpfs_yinan0 + mkdir /mnt/tmpfs_yinan1 + mount tmpfs /mnt/tmpfs_yinan0 -t tmpfs -o size=4G + mount tmpfs /mnt/tmpfs_yinan1 -t tmpfs -o size=4G 3. 
Launch VM qemu:: - taskset -c 20,21,22,23,24,25,26,27 /home/QEMU/qemu-6.2.0/bin/qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 8 -m 4096 \ - -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_nohuge0,share=on \ - -numa node,memdev=mem -mem-prealloc -drive file=/home/image/ubuntu2004.img \ - -chardev socket,path=/tmp/vm1_qga0.sock,server,nowait,id=vm1_qga0 -device virtio-serial \ - -device virtserialport,chardev=vm1_qga0,name=org.qemu.guest_agent.2 -daemonize \ - -monitor unix:/tmp/vm1_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ - -netdev user,id=nttsip1,hostfwd=tcp:10.239.251.220:6000-:22 \ - -chardev socket,id=char0,path=./vhost-net0 \ - -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \ - -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :10 - - taskset -c 48,49,50,51,52,53,54,55 /home/QEMU/qemu-6.2.0/bin/qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 8 -m 4096 \ - -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_nohuge1,share=on \ - -numa node,memdev=mem -mem-prealloc -drive file=/home/image/ubuntu2004_2.img \ - -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \ - -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \ - -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ - -netdev user,id=nttsip1,hostfwd=tcp:10.239.251.220:6001-:22 \ - -chardev socket,id=char0,path=./vhost-net1 \ - -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \ - -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,packed=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :12 + taskset -c 20,21,22,23,24,25,26,27 /home/QEMU/qemu-6.2.0/bin/qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 8 -m 4096 \ + -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_yinan0,share=on \ + -numa node,memdev=mem -mem-prealloc -drive file=/home/image/ubuntu2004.img \ + -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \ + -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \ + -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ + -netdev user,id=nttsip1,hostfwd=tcp:10.239.251.220:6000-:22 \ + -chardev socket,id=char0,path=./vhost-net0,server \ + -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \ + -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :10 + + taskset -c 48,49,50,51,52,53,54,55 /home/QEMU/qemu-6.2.0/bin/qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 8 -m 4096 \ + -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_yinan1,share=on \ + -numa node,memdev=mem -mem-prealloc -drive file=/home/image/ubuntu2004_2.img \ + -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \ + -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \ + -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ + -netdev user,id=nttsip1,hostfwd=tcp:10.239.251.220:6001-:22 \ + -chardev socket,id=char0,path=./vhost-net1,server 
\ + -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \ + -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on -vnc :12 4. On VM1, set virtio device IP and run arp protocol:: - ethtool -L ens5 combined 8 - ifconfig ens5 1.1.1.2 - arp -s 1.1.1.8 52:54:00:00:00:02 + ethtool -L ens5 combined 8 + ifconfig ens5 1.1.1.2 + arp -s 1.1.1.8 52:54:00:00:00:02 5. On VM2, set virtio device IP and run arp protocol:: - ethtool -L ens5 combined 8 - ifconfig ens5 1.1.1.8 - arp -s 1.1.1.2 52:54:00:00:00:01 + ethtool -L ens5 combined 8 + ifconfig ens5 1.1.1.8 + arp -s 1.1.1.2 52:54:00:00:00:01 6. Scp 1MB file from VM1 to VM2:: - Under VM1, run: `scp [xxx] root@1.1.1.8:/` [xxx] is the file name + Under VM1, run: `scp [xxx] root@1.1.1.8:/` [xxx] is the file name 7. Check the iperf performance between two VMs by below commands:: - Under VM1, run: `iperf -s -i 1` - Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60` + Under VM1, run: `iperf -s -i 1` + Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60` + +8. Relaunch VM1, and rerun step 4. + +9. Rerun steps 6-7.

From patchwork Tue Aug 16 03:00:44 2022
X-Patchwork-Submitter: "Ling, WeiX"
X-Patchwork-Id: 115144
From: Wei Ling
To: dts@dpdk.org
Cc: Wei Ling
Subject: [dts][PATCH V2 2/3] tests/basic_4k_pages_cbdma: modify testsuite to test virtio dequeue
Date: Mon, 15 Aug 2022 23:00:44 -0400
Message-Id: <20220816030044.3416210-1-weix.ling@intel.com>
X-Mailer: git-send-email 2.25.1 MIME-Version: 1.0 X-BeenThere: dts@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: test suite reviews and discussions List-Unsubscribe: ,
List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dts-bounces@dpdk.org From DPDK-22.07, virtio support async dequeue for split and packed ring path, so modify basic_4k_pages_cbdma testsuite to test the split and packed ring async dequeue feature. Signed-off-by: Wei Ling --- tests/TestSuite_basic_4k_pages_cbdma.py | 792 +++++++++++++++++++++--- 1 file changed, 705 insertions(+), 87 deletions(-) diff --git a/tests/TestSuite_basic_4k_pages_cbdma.py b/tests/TestSuite_basic_4k_pages_cbdma.py index 45e78f1e..164f30c7 100644 --- a/tests/TestSuite_basic_4k_pages_cbdma.py +++ b/tests/TestSuite_basic_4k_pages_cbdma.py @@ -7,18 +7,28 @@ DPDK Test suite. vhost/virtio-user pvp with 4K pages. """ +import os import re import time +import random +import string -import framework.utils as utils from framework.packet import Packet from framework.pktgen import PacketGeneratorHelper from framework.pmd_output import PmdOutput from framework.test_case import TestCase -from framework.virt_common import VM +from framework.qemu_kvm import QEMUKvm +from framework.settings import CONFIG_ROOT_PATH +from framework.config import VirtConf class TestBasic4kPagesCbdma(TestCase): + def get_virt_config(self, vm_name): + conf = VirtConf(CONFIG_ROOT_PATH + os.sep + self.suite_name + ".cfg") + conf.load_virt_config(vm_name) + virt_conf = conf.get_virt_config() + return virt_conf + def set_up_all(self): """ Run at the start of each test suite. @@ -56,13 +66,51 @@ class TestBasic4kPagesCbdma(TestCase): self.virtio_mac1 = "52:54:00:00:00:01" self.virtio_mac2 = "52:54:00:00:00:02" self.base_dir = self.dut.base_dir.replace("~", "/root") + self.random_string = string.ascii_letters + string.digits + + self.vm0_virt_conf = self.get_virt_config(vm_name='vm0') + for param in self.vm0_virt_conf: + if 'cpu' in param.keys(): + self.vm0_cpupin = param['cpu'][0]['cpupin'] + self.vm0_lcore = ",".join(list(self.vm0_cpupin.split())) + self.vm0_lcore_smp = len(list(self.vm0_cpupin.split())) + if 'qemu' in param.keys(): + self.vm0_qemu_path = param['qemu'][0]['path'] + if 'mem' in param.keys(): + self.vm0_mem_size = param['mem'][0]['size'] + if "disk" in param.keys(): + self.vm0_image_path = param['disk'][0]['file'] + if 'vnc' in param.keys(): + self.vm0_vnc = param['vnc'][0]['displayNum'] + if 'login' in param.keys(): + self.vm0_user = param['login'][0]['user'] + self.vm0_passwd = param['login'][0]['password'] + + self.vm1_virt_conf = self.get_virt_config(vm_name='vm1') + for param in self.vm1_virt_conf: + if 'cpu' in param.keys(): + self.vm1_cpupin = param['cpu'][0]['cpupin'] + self.vm1_lcore = ",".join(list(self.vm1_cpupin.split())) + self.vm1_lcore_smp = len(list(self.vm1_cpupin.split())) + if 'qemu' in param.keys(): + self.vm1_qemu_path = param['qemu'][0]['path'] + if 'mem' in param.keys(): + self.vm1_mem_size = param['mem'][0]['size'] + if "disk" in param.keys(): + self.vm1_image_path = param['disk'][0]['file'] + if 'vnc' in param.keys(): + self.vm1_vnc = param['vnc'][0]['displayNum'] + if 'login' in param.keys(): + self.vm1_user = param['login'][0]['user'] + self.vm1_passwd = param['login'][0]['password'] def set_up(self): """ Run before each test case. 
""" - self.dut.send_expect("rm -rf /tmp/vhost-net*", "# ") self.dut.send_expect("killall -s INT %s" % self.testpmd_name, "# ") + self.dut.send_expect("killall -s INT qemu-system-x86_64", "#") + self.dut.send_expect("rm -rf /tmp/vhost-net*", "# ") self.umount_tmpfs_for_4k() # Prepare the result table self.table_header = ["Frame"] @@ -73,6 +121,118 @@ class TestBasic4kPagesCbdma(TestCase): self.result_table_create(self.table_header) self.vm_dut = [] self.vm = [] + self.packed = False + + def start_vm(self, packed=False, queues=1, server=False): + if packed: + packed_param = ',packed=on' + else: + packed_param = '' + + if server: + server = ',server' + else: + server = '' + + self.qemu_cmd0 = f"taskset -c {self.vm0_lcore} {self.vm0_qemu_path} -name vm0 -enable-kvm " \ + f"-pidfile /tmp/.vm0.pid -daemonize -monitor unix:/tmp/vm0_monitor.sock,server,nowait " \ + f"-netdev user,id=nttsip1,hostfwd=tcp:%s:6000-:22 -device e1000,netdev=nttsip1 " \ + f"-chardev socket,id=char0,path=/root/dpdk/vhost-net0{server} " \ + f"-netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues={queues} " \ + f"-device virtio-net-pci,netdev=netdev0,mac=%s," \ + f"disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on{packed_param} " \ + f"-cpu host -smp {self.vm0_lcore_smp} -m {self.vm0_mem_size} -object memory-backend-file,id=mem,size={self.vm0_mem_size}M,mem-path=/mnt/tmpfs_nohuge0,share=on " \ + f"-numa node,memdev=mem -mem-prealloc -drive file={self.vm0_image_path} " \ + f"-chardev socket,path=/tmp/vm0_qga0.sock,server,nowait,id=vm0_qga0 -device virtio-serial " \ + f"-device virtserialport,chardev=vm0_qga0,name=org.qemu.guest_agent.0 -vnc :{self.vm0_vnc} " + + self.qemu_cmd1 = f"taskset -c {self.vm1_lcore} {self.vm1_qemu_path} -name vm1 -enable-kvm " \ + f"-pidfile /tmp/.vm1.pid -daemonize -monitor unix:/tmp/vm1_monitor.sock,server,nowait " \ + f"-netdev user,id=nttsip1,hostfwd=tcp:%s:6001-:22 -device e1000,netdev=nttsip1 " \ + f"-chardev socket,id=char0,path=/root/dpdk/vhost-net1{server} " \ + f"-netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues={queues} " \ + f"-device virtio-net-pci,netdev=netdev0,mac=%s," \ + f"disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on{packed_param} " \ + f"-cpu host -smp {self.vm1_lcore_smp} -m {self.vm1_mem_size} -object memory-backend-file,id=mem,size={self.vm1_mem_size}M,mem-path=/mnt/tmpfs_nohuge1,share=on " \ + f"-numa node,memdev=mem -mem-prealloc -drive file={self.vm1_image_path} " \ + f"-chardev socket,path=/tmp/vm1_qga0.sock,server,nowait,id=vm1_qga0 -device virtio-serial " \ + f"-device virtserialport,chardev=vm1_qga0,name=org.qemu.guest_agent.0 -vnc :{self.vm1_vnc} " + + self.vm0_session = self.dut.new_session(suite="vm0_session") + cmd0 = self.qemu_cmd0 % ( + self.dut.get_ip_address(), + self.virtio_mac1, + ) + self.vm0_session.send_expect(cmd0, "# ") + time.sleep(10) + self.vm0_dut = self.connect_vm0() + self.verify(self.vm0_dut is not None, "vm start fail") + self.vm_session = self.vm0_dut.new_session(suite="vm_session") + + self.vm1_session = self.dut.new_session(suite="vm1_session") + cmd1 = self.qemu_cmd1 % ( + self.dut.get_ip_address(), + self.virtio_mac2, + ) + self.vm1_session.send_expect(cmd1, "# ") + time.sleep(10) + self.vm1_dut = self.connect_vm1() + self.verify(self.vm1_dut is not None, "vm start fail") + self.vm_session = self.vm1_dut.new_session(suite="vm_session") + + def connect_vm0(self): + self.vm0 = QEMUKvm(self.dut, "vm0", self.suite_name) + 
self.vm0.net_type = "hostfwd" + self.vm0.hostfwd_addr = "%s:6000" % self.dut.get_ip_address() + self.vm0.def_driver = "vfio-pci" + self.vm0.driver_mode = "noiommu" + self.wait_vm_net_ready(vm_index=0) + vm_dut = self.vm0.instantiate_vm_dut(autodetect_topo=False, bind_dev=False) + if vm_dut: + return vm_dut + else: + return None + + + def connect_vm1(self): + self.vm1 = QEMUKvm(self.dut, "vm1", "vm_hotplug") + self.vm1.net_type = "hostfwd" + self.vm1.hostfwd_addr = "%s:6001" % self.dut.get_ip_address() + self.vm1.def_driver = "vfio-pci" + self.vm1.driver_mode = "noiommu" + self.wait_vm_net_ready(vm_index=1) + vm_dut = self.vm1.instantiate_vm_dut(autodetect_topo=False, bind_dev=False) + if vm_dut: + return vm_dut + else: + return None + + def wait_vm_net_ready(self, vm_index=0): + self.vm_net_session = self.dut.new_session(suite="vm_net_session") + self.start_time = time.time() + cur_time = time.time() + time_diff = cur_time - self.start_time + while time_diff < 120: + try: + out = self.vm_net_session.send_expect( + "~/QMP/qemu-ga-client --address=/tmp/vm%s_qga0.sock ifconfig" % vm_index, "#" + ) + except Exception as EnvironmentError: + pass + if "10.0.2" in out: + pos = self.vm0.hostfwd_addr.find(":") + ssh_key = ( + "[" + + self.vm0.hostfwd_addr[:pos] + + "]" + + self.vm0.hostfwd_addr[pos:] + ) + os.system("ssh-keygen -R %s" % ssh_key) + break + time.sleep(1) + cur_time = time.time() + time_diff = cur_time - self.start_time + self.dut.close_session(self.vm_net_session) def get_cbdma_ports_info_and_bind_to_dpdk(self, cbdma_num, allow_diff_socket=False): """ @@ -179,59 +339,23 @@ class TestBasic4kPagesCbdma(TestCase): fixed_prefix=True, ) - def start_vms( - self, - setting_args="", - server_mode=False, - opt_queue=None, - vm_config="vhost_sample", - ): - """ - start one VM, each VM has one virtio device - """ - vm_params = {} - if opt_queue is not None: - vm_params["opt_queue"] = opt_queue - - for i in range(self.vm_num): - vm_dut = None - vm_info = VM(self.dut, "vm%d" % i, vm_config) - - vm_params["driver"] = "vhost-user" - if not server_mode: - vm_params["opt_path"] = self.base_dir + "/vhost-net%d" % i - else: - vm_params["opt_path"] = self.base_dir + "/vhost-net%d" % i + ",server" - vm_params["opt_mac"] = "52:54:00:00:00:0%d" % (i + 1) - vm_params["opt_settings"] = setting_args - vm_info.set_vm_device(**vm_params) - time.sleep(3) - try: - vm_dut = vm_info.start(set_target=False) - if vm_dut is None: - raise Exception("Set up VM ENV failed") - except Exception as e: - print((utils.RED("Failure for %s" % str(e)))) - raise e - self.vm_dut.append(vm_dut) - self.vm.append(vm_info) def config_vm_ip(self): """ set virtio device IP and run arp protocal """ - vm1_intf = self.vm_dut[0].ports_info[0]["intf"] - vm2_intf = self.vm_dut[1].ports_info[0]["intf"] - self.vm_dut[0].send_expect( + vm1_intf = self.vm0_dut.ports_info[0]["intf"] + vm2_intf = self.vm1_dut.ports_info[0]["intf"] + self.vm0_dut.send_expect( "ifconfig %s %s" % (vm1_intf, self.virtio_ip1), "#", 10 ) - self.vm_dut[1].send_expect( + self.vm1_dut.send_expect( "ifconfig %s %s" % (vm2_intf, self.virtio_ip2), "#", 10 ) - self.vm_dut[0].send_expect( + self.vm0_dut.send_expect( "arp -s %s %s" % (self.virtio_ip2, self.virtio_mac2), "#", 10 ) - self.vm_dut[1].send_expect( + self.vm1_dut.send_expect( "arp -s %s %s" % (self.virtio_ip1, self.virtio_mac1), "#", 10 ) @@ -239,25 +363,56 @@ class TestBasic4kPagesCbdma(TestCase): """ set virtio device combined """ - vm1_intf = self.vm_dut[0].ports_info[0]["intf"] - vm2_intf = 
self.vm_dut[1].ports_info[0]["intf"] - self.vm_dut[0].send_expect( + vm1_intf = self.vm0_dut.ports_info[0]["intf"] + vm2_intf = self.vm1_dut.ports_info[0]["intf"] + self.vm0_dut.send_expect( "ethtool -L %s combined %d" % (vm1_intf, combined), "#", 10 ) - self.vm_dut[1].send_expect( + self.vm1_dut.send_expect( "ethtool -L %s combined %d" % (vm2_intf, combined), "#", 10 ) + def check_ping_between_vms(self): + ping_out = self.vm0_dut.send_expect( + "ping {} -c 4".format(self.virtio_ip2), "#", 20 + ) + self.logger.info(ping_out) + + def check_scp_file_valid_between_vms(self, file_size=1024): + """ + scp file form VM1 to VM2, check the data is valid + """ + # default file_size=1024K + data = "" + for _ in range(file_size * 1024): + data += random.choice(self.random_string) + self.vm0_dut.send_expect('echo "%s" > /tmp/payload' % data, "# ") + # scp this file to vm1 + out = self.vm1_dut.send_command( + "scp root@%s:/tmp/payload /root" % self.virtio_ip1, timeout=5 + ) + if "Are you sure you want to continue connecting" in out: + self.vm1_dut.send_command("yes", timeout=3) + self.vm1_dut.send_command(self.vm0_passwd, timeout=3) + # get the file info in vm1, and check it valid + md5_send = self.vm0_dut.send_expect("md5sum /tmp/payload", "# ") + md5_revd = self.vm1_dut.send_expect("md5sum /root/payload", "# ") + md5_send = md5_send[: md5_send.find(" ")] + md5_revd = md5_revd[: md5_revd.find(" ")] + self.verify( + md5_send == md5_revd, "the received file is different with send file" + ) + def start_iperf(self): """ run perf command between to vms """ iperf_server = "iperf -s -i 1" iperf_client = "iperf -c {} -i 1 -t 60".format(self.virtio_ip1) - self.vm_dut[0].send_expect( + self.vm0_dut.send_expect( "{} > iperf_server.log &".format(iperf_server), "", 10 ) - self.vm_dut[1].send_expect( + self.vm1_dut.send_expect( "{} > iperf_client.log &".format(iperf_client), "", 60 ) time.sleep(60) @@ -268,8 +423,8 @@ class TestBasic4kPagesCbdma(TestCase): """ self.table_header = ["Mode", "[M|G]bits/sec"] self.result_table_create(self.table_header) - self.vm_dut[0].send_expect("pkill iperf", "# ") - self.vm_dut[1].session.copy_file_from("%s/iperf_client.log" % self.dut.base_dir) + self.vm0_dut.send_expect("pkill iperf", "# ") + self.vm1_dut.session.copy_file_from("%s/iperf_client.log" % self.dut.base_dir) fp = open("./iperf_client.log") fmsg = fp.read() fp.close() @@ -289,19 +444,18 @@ class TestBasic4kPagesCbdma(TestCase): # print iperf resut self.result_table_print() # rm the iperf log file in vm - self.vm_dut[0].send_expect("rm iperf_server.log", "#", 10) - self.vm_dut[1].send_expect("rm iperf_client.log", "#", 10) + self.vm0_dut.send_expect("rm iperf_server.log", "#", 10) + self.vm1_dut.send_expect("rm iperf_client.log", "#", 10) def verify_xstats_info_on_vhost(self): """ check both 2VMs can receive and send big packets to each other """ - self.vhost_user_pmd.execute_cmd("show port stats all") out_tx = self.vhost_user_pmd.execute_cmd("show port xstats 0") out_rx = self.vhost_user_pmd.execute_cmd("show port xstats 1") - tx_info = re.search("tx_size_1523_to_max_packets:\s*(\d*)", out_tx) - rx_info = re.search("rx_size_1523_to_max_packets:\s*(\d*)", out_rx) + tx_info = re.search("tx_q0_size_1519_max_packets:\s*(\d*)", out_tx) + rx_info = re.search("rx_q0_size_1519_max_packets:\s*(\d*)", out_rx) self.verify( int(rx_info.group(1)) > 0, "Port 1 not receive packet greater than 1522" @@ -327,34 +481,27 @@ class TestBasic4kPagesCbdma(TestCase): out = self.dut.send_expect( "mount |grep 'mnt/tmpfs' |awk -F ' ' {'print 
$3'}", "#" ) - mount_infos = out.replace("\r", "").split("\n") - if len(mount_infos) != 0: - for mount_info in mount_infos: + if out != "": + mount_points = out.replace("\r", "").split("\n") + else: + mount_points = [] + if len(mount_points) != 0: + for mount_info in mount_points: self.dut.send_expect("umount {}".format(mount_info), "# ") - def umount_huge_pages(self): - self.dut.send_expect("mount |grep '/mnt/huge' |awk -F ' ' {'print $3'}", "#") - self.dut.send_expect("umount /mnt/huge", "# ") - - def mount_huge_pages(self): - self.dut.send_expect("mkdir -p /mnt/huge", "# ") - self.dut.send_expect("mount -t hugetlbfs nodev /mnt/huge", "# ") - - def test_perf_pvp_virtio_user_split_ring_with_4K_pages_and_cbdma_enable(self): + def test_perf_pvp_split_ring_vhost_async_operation_using_4K_pages_and_cbdma_enable(self): """ - Test Case 1: Basic test vhost/virtio-user split ring with 4K-pages and cbdma enable + Test Case 1: Basic test vhost-user/virtio-user split ring vhost async operation using 4K-pages and cbdma enable """ - self.get_cbdma_ports_info_and_bind_to_dpdk(1) - lcore_dma = f"lcore{self.vhost_core_list[1]}@{self.cbdma_list[0]}" - vhost_eal_param = "--no-huge -m 1024 --vdev 'net_vhost0,iface=./vhost-net,queues=1,dmas=[txq0]'" - vhost_param = " --no-numa --socket-num={} --lcore-dma=[{}]".format( - self.ports_socket, lcore_dma - ) + self.get_cbdma_ports_info_and_bind_to_dpdk(cbdma_num=1) + lcore_dma = "lcore%s@%s,"% (self.vhost_core_list[1],self.cbdma_list[0]) + vhost_eal_param = "--no-huge -m 1024 --vdev 'net_vhost0,iface=./vhost-net,queues=1,dmas=[txq0;rxq0]'" + vhost_param = " --no-numa --socket-num=%s --lcore-dma=[%s]" % (self.ports_socket, lcore_dma) ports = [self.dut.ports_info[0]["pci"]] for i in self.cbdma_list: ports.append(i) self.start_vhost_user_testpmd( - cores=self.vhost_core_list[0:2], + cores=self.vhost_core_list, eal_param=vhost_eal_param, param=vhost_param, ports=ports, @@ -370,21 +517,19 @@ class TestBasic4kPagesCbdma(TestCase): self.send_and_verify() self.result_table_print() - def test_perf_pvp_virtio_user_packed_ring_with_4K_pages_and_cbdma_enable(self): + def test_perf_pvp_packed_ring_vhost_async_operation_using_4K_pages_and_cbdma_enable(self): """ - Test Case 2: Basic test vhost/virtio-user packed ring with 4K-pages and cbdma enable + Test Case 2: Basic test vhost-user/virtio-user packed ring vhost async operation using 4K-pages and cbdma enable """ - self.get_cbdma_ports_info_and_bind_to_dpdk(1) - lcore_dma = f"lcore{self.vhost_core_list[1]}@{self.cbdma_list[0]}" - vhost_eal_param = "--no-huge -m 1024 --vdev 'net_vhost0,iface=./vhost-net,queues=1,dmas=[txq0]'" - vhost_param = " --no-numa --socket-num={} --lcore-dma=[{}]".format( - self.ports_socket, lcore_dma - ) + self.get_cbdma_ports_info_and_bind_to_dpdk(cbdma_num=1) + lcore_dma = "lcore%s@%s," % (self.vhost_core_list[1], self.cbdma_list[0]) + vhost_eal_param = "--no-huge -m 1024 --vdev 'net_vhost0,iface=./vhost-net,queues=1,dmas=[txq0;rxq0]'" + vhost_param = " --no-numa --socket-num=%s --lcore-dma=[%s]" % (self.ports_socket, lcore_dma) ports = [self.dut.ports_info[0]["pci"]] for i in self.cbdma_list: ports.append(i) self.start_vhost_user_testpmd( - cores=self.vhost_core_list[0:2], + cores=self.vhost_core_list, eal_param=vhost_eal_param, param=vhost_param, ports=ports, @@ -400,6 +545,477 @@ class TestBasic4kPagesCbdma(TestCase): self.send_and_verify() self.result_table_print() + def test_vm2vm_split_ring_vhost_async_operaiton_test_with_tcp_traffic_using_4k_pages_and_cbdma_enable(self): + """ + Test Case 3: 
VM2VM vhost-user/virtio-net split ring vhost async operation test with tcp traffic using 4K-pages and cbdma enable + """ + self.get_cbdma_ports_info_and_bind_to_dpdk(cbdma_num=2) + lcore_dma = "lcore%s@%s," \ + "lcore%s@%s" % ( + self.vhost_core_list[1], self.cbdma_list[0], + self.vhost_core_list[2], self.cbdma_list[1] + ) + vhost_eal_param = "--no-huge -m 1024 " + \ + "--vdev 'net_vhost0,iface=./vhost-net0,queues=1,tso=1,dmas=[txq0;rxq0],dma_ring_size=2048'" + \ + " --vdev 'net_vhost1,iface=./vhost-net1,queues=1,tso=1,dmas=[txq0;rxq0],dma_ring_size=2048'" + vhost_param = " --nb-cores=2 --txd=1024 --rxd=1024 --lcore-dma=[%s]" % lcore_dma + self.start_vhost_user_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + ports=self.cbdma_list, + ) + self.vhost_user_pmd.execute_cmd("start") + self.mount_tmpfs_for_4k(number=2) + + self.start_vm(packed=False,queues=1, server=False) + self.config_vm_ip() + self.check_ping_between_vms() + self.start_iperf() + self.get_iperf_result() + self.verify_xstats_info_on_vhost() + + self.vm0.stop() + self.vm1.stop() + self.vhost_user_pmd.quit() + + def test_vm2vm_packed_ring_vhost_async_operaiton_test_with_tcp_traffic_using_4k_pages_and_cbdma_enable(self): + """ + Test Case 4: VM2VM vhost-user/virtio-net packed ring vhost async operation test with tcp traffic using 4K-pages and cbdma enable + """ + self.get_cbdma_ports_info_and_bind_to_dpdk(cbdma_num=2) + lcore_dma = "lcore%s@%s," \ + "lcore%s@%s" % ( + self.vhost_core_list[1], self.cbdma_list[0], + self.vhost_core_list[2], self.cbdma_list[1] + ) + vhost_eal_param = "--no-huge -m 1024 " + \ + "--vdev 'net_vhost0,iface=./vhost-net0,queues=1,tso=1,dmas=[txq0;rxq0],dma_ring_size=2048'" + \ + " --vdev 'net_vhost1,iface=./vhost-net1,queues=1,tso=1,dmas=[txq0;rxq0],dma_ring_size=2048'" + vhost_param = " --nb-cores=2 --txd=1024 --rxd=1024 --lcore-dma=[%s]" % lcore_dma + self.start_vhost_user_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + ports=self.cbdma_list, + ) + self.vhost_user_pmd.execute_cmd("start") + self.mount_tmpfs_for_4k(number=2) + + self.start_vm(packed=True, queues=1, server=False) + self.config_vm_ip() + self.check_ping_between_vms() + self.start_iperf() + self.get_iperf_result() + self.verify_xstats_info_on_vhost() + + self.vm0.stop() + self.vm1.stop() + self.vhost_user_pmd.quit() + + def test_vm2vm_split_ring_multi_queues_using_4k_pages_and_cbdma_enable(self): + """ + Test Case 5: vm2vm vhost/virtio-net split ring multi queues using 4K-pages and cbdma enable + """ + self.get_cbdma_ports_info_and_bind_to_dpdk(cbdma_num=16, allow_diff_socket=True) + lcore_dma = "lcore%s@%s," \ + "lcore%s@%s," \ + "lcore%s@%s," \ + "lcore%s@%s," \ + "lcore%s@%s," \ + "lcore%s@%s," \ + "lcore%s@%s," \ + "lcore%s@%s," \ + "lcore%s@%s," \ + "lcore%s@%s," \ + "lcore%s@%s," \ + "lcore%s@%s," \ + "lcore%s@%s," \ + "lcore%s@%s," \ + "lcore%s@%s," \ + "lcore%s@%s" % ( + self.vhost_core_list[1], self.cbdma_list[0], + self.vhost_core_list[1], self.cbdma_list[1], + self.vhost_core_list[1], self.cbdma_list[2], + self.vhost_core_list[1], self.cbdma_list[3], + self.vhost_core_list[1], self.cbdma_list[4], + self.vhost_core_list[1], self.cbdma_list[5], + self.vhost_core_list[2], self.cbdma_list[6], + self.vhost_core_list[2], self.cbdma_list[7], + self.vhost_core_list[3], self.cbdma_list[8], + self.vhost_core_list[3], self.cbdma_list[9], + self.vhost_core_list[3], self.cbdma_list[10], + self.vhost_core_list[3], self.cbdma_list[11], + self.vhost_core_list[3], 
self.cbdma_list[12], + self.vhost_core_list[3], self.cbdma_list[13], + self.vhost_core_list[3], self.cbdma_list[14], + self.vhost_core_list[3], self.cbdma_list[15], + ) + vhost_eal_param = "--no-huge -m 1024 " + \ + "--vdev 'net_vhost0,iface=./vhost-net0,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7]'" + \ + " --vdev 'net_vhost1,iface=./vhost-net1,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7]'" + vhost_param = " --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8 --lcore-dma=[%s]" % lcore_dma + self.start_vhost_user_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + ports=self.cbdma_list, + ) + self.vhost_user_pmd.execute_cmd("start") + self.mount_tmpfs_for_4k(number=2) + + self.start_vm(packed=False, queues=8, server=True) + self.config_vm_ip() + self.config_vm_combined(combined=8) + self.check_scp_file_valid_between_vms() + self.start_iperf() + self.get_iperf_result() + + self.vhost_user_pmd.quit() + lcore_dma = "lcore%s@%s," \ + "lcore%s@%s," \ + "lcore%s@%s," \ + "lcore%s@%s," \ + "lcore%s@%s," \ + "lcore%s@%s," \ + "lcore%s@%s," \ + "lcore%s@%s," \ + "lcore%s@%s," \ + "lcore%s@%s," \ + "lcore%s@%s," \ + "lcore%s@%s," \ + "lcore%s@%s," \ + "lcore%s@%s," \ + "lcore%s@%s," \ + "lcore%s@%s," \ + "lcore%s@%s," \ + "lcore%s@%s," \ + "lcore%s@%s," \ + "lcore%s@%s" % ( + self.vhost_core_list[1], self.cbdma_list[0], + self.vhost_core_list[1], self.cbdma_list[1], + self.vhost_core_list[1], self.cbdma_list[2], + self.vhost_core_list[1], self.cbdma_list[3], + self.vhost_core_list[2], self.cbdma_list[0], + self.vhost_core_list[2], self.cbdma_list[2], + self.vhost_core_list[2], self.cbdma_list[4], + self.vhost_core_list[2], self.cbdma_list[5], + self.vhost_core_list[2], self.cbdma_list[6], + self.vhost_core_list[2], self.cbdma_list[7], + self.vhost_core_list[3], self.cbdma_list[1], + self.vhost_core_list[3], self.cbdma_list[3], + self.vhost_core_list[3], self.cbdma_list[8], + self.vhost_core_list[3], self.cbdma_list[9], + self.vhost_core_list[3], self.cbdma_list[10], + self.vhost_core_list[3], self.cbdma_list[11], + self.vhost_core_list[3], self.cbdma_list[12], + self.vhost_core_list[3], self.cbdma_list[13], + self.vhost_core_list[3], self.cbdma_list[14], + self.vhost_core_list[4], self.cbdma_list[15], + ) + vhost_eal_param = "--no-huge -m 1024 " + \ + "--vdev 'net_vhost0,iface=./vhost-net0,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7]'" + \ + " --vdev 'net_vhost1,iface=./vhost-net1,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7]'" + vhost_param = " --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8 --lcore-dma=[%s]" % lcore_dma + self.start_vhost_user_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + ports=self.cbdma_list, + ) + self.vhost_user_pmd.execute_cmd("start") + self.check_ping_between_vms() + self.check_scp_file_valid_between_vms() + self.start_iperf() + self.get_iperf_result() + + self.vhost_user_pmd.quit() + vhost_eal_param = "--no-huge -m 1024 " + \ + "--vdev 'net_vhost0,iface=./vhost-net0,client=1,queues=4'" + \ + " --vdev 'net_vhost1,iface=./vhost-net1,client=1,queues=4'" + vhost_param = " --nb-cores=4 --txd=1024 --rxd=1024 --rxq=4 --txq=4" + self.start_vhost_user_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + ports=self.cbdma_list, + ) + self.vhost_user_pmd.execute_cmd("start") + self.config_vm_combined(combined=4) + self.check_ping_between_vms() + self.check_scp_file_valid_between_vms() 
+ self.start_iperf() + self.get_iperf_result() + + self.vhost_user_pmd.quit() + vhost_eal_param = "--no-huge -m 1024 " + \ + "--vdev 'net_vhost0,iface=./vhost-net0,client=1,queues=4'" + \ + " --vdev 'net_vhost1,iface=./vhost-net1,client=1,queues=4'" + vhost_param = " --nb-cores=4 --txd=1024 --rxd=1024 --rxq=1 --txq=1" + self.start_vhost_user_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + ports=self.cbdma_list, + ) + self.vhost_user_pmd.execute_cmd("start") + self.config_vm_combined(combined=1) + self.check_ping_between_vms() + self.check_scp_file_valid_between_vms() + self.start_iperf() + self.get_iperf_result() + + self.vm0.stop() + self.vm1.stop() + self.vhost_user_pmd.quit() + + def test_vm2vm_packed_ring_multi_queues_using_4k_pages_and_cbdma_enable(self): + """ + Test Case 6: vm2vm vhost/virtio-net packed ring multi queues using 4K-pages and cbdma enable + """ + self.get_cbdma_ports_info_and_bind_to_dpdk(cbdma_num=16, allow_diff_socket=True) + lcore_dma = "lcore%s@%s," \ + "lcore%s@%s," \ + "lcore%s@%s," \ + "lcore%s@%s," \ + "lcore%s@%s," \ + "lcore%s@%s," \ + "lcore%s@%s," \ + "lcore%s@%s," \ + "lcore%s@%s," \ + "lcore%s@%s," \ + "lcore%s@%s," \ + "lcore%s@%s," \ + "lcore%s@%s," \ + "lcore%s@%s," \ + "lcore%s@%s," \ + "lcore%s@%s" % ( + self.vhost_core_list[1], self.cbdma_list[0], + self.vhost_core_list[1], self.cbdma_list[1], + self.vhost_core_list[1], self.cbdma_list[2], + self.vhost_core_list[1], self.cbdma_list[3], + self.vhost_core_list[1], self.cbdma_list[4], + self.vhost_core_list[1], self.cbdma_list[5], + self.vhost_core_list[2], self.cbdma_list[6], + self.vhost_core_list[2], self.cbdma_list[7], + self.vhost_core_list[3], self.cbdma_list[8], + self.vhost_core_list[3], self.cbdma_list[9], + self.vhost_core_list[3], self.cbdma_list[10], + self.vhost_core_list[3], self.cbdma_list[11], + self.vhost_core_list[3], self.cbdma_list[12], + self.vhost_core_list[3], self.cbdma_list[13], + self.vhost_core_list[3], self.cbdma_list[14], + self.vhost_core_list[3], self.cbdma_list[15], + ) + vhost_eal_param = "--no-huge -m 1024 " + \ + "--vdev 'net_vhost0,iface=./vhost-net0,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7]'" + \ + " --vdev 'net_vhost1,iface=./vhost-net1,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7]'" + vhost_param = " --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8 --lcore-dma=[%s]" % lcore_dma + self.start_vhost_user_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + ports=self.cbdma_list, + ) + self.vhost_user_pmd.execute_cmd("start") + self.mount_tmpfs_for_4k(number=2) + + self.start_vm(packed=True, queues=8, server=True) + self.config_vm_ip() + self.config_vm_combined(combined=8) + self.check_ping_between_vms() + self.check_scp_file_valid_between_vms() + self.start_iperf() + self.get_iperf_result() + + self.vm0.stop() + self.vm1.stop() + self.vhost_user_pmd.quit() + + def test_vm2vm_split_ring_multi_queues_using_1G_and_4k_pages_and_cbdma_enable(self): + """ + Test Case 7: vm2vm vhost/virtio-net split ring multi queues using 1G/4k-pages and cbdma enable + """ + self.get_cbdma_ports_info_and_bind_to_dpdk(cbdma_num=16, allow_diff_socket=True) + lcore_dma = "lcore%s@%s," \ + "lcore%s@%s," \ + "lcore%s@%s," \ + "lcore%s@%s," \ + "lcore%s@%s," \ + "lcore%s@%s," \ + "lcore%s@%s," \ + "lcore%s@%s," \ + "lcore%s@%s," \ + "lcore%s@%s," \ + "lcore%s@%s," \ + "lcore%s@%s," \ + "lcore%s@%s," \ + "lcore%s@%s," \ + "lcore%s@%s," \ + "lcore%s@%s" % ( + 
self.vhost_core_list[1], self.cbdma_list[0], + self.vhost_core_list[1], self.cbdma_list[1], + self.vhost_core_list[1], self.cbdma_list[2], + self.vhost_core_list[1], self.cbdma_list[3], + self.vhost_core_list[2], self.cbdma_list[4], + self.vhost_core_list[2], self.cbdma_list[5], + self.vhost_core_list[2], self.cbdma_list[6], + self.vhost_core_list[2], self.cbdma_list[7], + self.vhost_core_list[3], self.cbdma_list[8], + self.vhost_core_list[3], self.cbdma_list[9], + self.vhost_core_list[3], self.cbdma_list[10], + self.vhost_core_list[3], self.cbdma_list[11], + self.vhost_core_list[4], self.cbdma_list[12], + self.vhost_core_list[4], self.cbdma_list[13], + self.vhost_core_list[4], self.cbdma_list[14], + self.vhost_core_list[4], self.cbdma_list[15], + ) + vhost_eal_param = "-m 1024 " + \ + "--vdev 'net_vhost0,iface=./vhost-net0,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]'" + \ + " --vdev 'net_vhost1,iface=./vhost-net1,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]'" + vhost_param = " --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8 --lcore-dma=[%s]" % lcore_dma + self.start_vhost_user_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + ports=self.cbdma_list, + ) + self.vhost_user_pmd.execute_cmd("start") + self.mount_tmpfs_for_4k(number=2) + + self.start_vm(packed=False, queues=8, server=True) + self.config_vm_ip() + self.config_vm_combined(combined=8) + self.check_ping_between_vms() + self.check_scp_file_valid_between_vms() + self.start_iperf() + self.get_iperf_result() + + self.vhost_user_pmd.quit() + lcore_dma = "lcore%s@%s," \ + "lcore%s@%s," \ + "lcore%s@%s," \ + "lcore%s@%s," \ + "lcore%s@%s," \ + "lcore%s@%s," \ + "lcore%s@%s," \ + "lcore%s@%s," \ + "lcore%s@%s," \ + "lcore%s@%s," \ + "lcore%s@%s," \ + "lcore%s@%s," \ + "lcore%s@%s," \ + "lcore%s@%s," \ + "lcore%s@%s," \ + "lcore%s@%s," \ + "lcore%s@%s," \ + "lcore%s@%s," \ + "lcore%s@%s," \ + "lcore%s@%s" % ( + self.vhost_core_list[1], self.cbdma_list[0], + self.vhost_core_list[1], self.cbdma_list[1], + self.vhost_core_list[1], self.cbdma_list[2], + self.vhost_core_list[1], self.cbdma_list[3], + self.vhost_core_list[2], self.cbdma_list[0], + self.vhost_core_list[2], self.cbdma_list[2], + self.vhost_core_list[2], self.cbdma_list[4], + self.vhost_core_list[2], self.cbdma_list[5], + self.vhost_core_list[2], self.cbdma_list[6], + self.vhost_core_list[2], self.cbdma_list[7], + self.vhost_core_list[3], self.cbdma_list[1], + self.vhost_core_list[3], self.cbdma_list[3], + self.vhost_core_list[3], self.cbdma_list[8], + self.vhost_core_list[3], self.cbdma_list[9], + self.vhost_core_list[3], self.cbdma_list[10], + self.vhost_core_list[3], self.cbdma_list[11], + self.vhost_core_list[3], self.cbdma_list[12], + self.vhost_core_list[3], self.cbdma_list[13], + self.vhost_core_list[3], self.cbdma_list[14], + self.vhost_core_list[4], self.cbdma_list[15], + ) + vhost_eal_param = "--no-huge -m 1024 " + \ + "--vdev 'net_vhost0,iface=./vhost-net0,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7;rxq0;rxq1;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]'" + \ + " --vdev 'net_vhost1,iface=./vhost-net1,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7;rxq0;rxq1;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]'" + vhost_param = " --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8 --lcore-dma=[%s]" % lcore_dma + self.start_vhost_user_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + ports=self.cbdma_list, + ) + 
self.vhost_user_pmd.execute_cmd("start") + self.check_ping_between_vms() + self.check_scp_file_valid_between_vms() + self.start_iperf() + self.get_iperf_result() + + self.vm0.stop() + self.vm1.stop() + self.vhost_user_pmd.quit() + + def test_vm2vm_packed_ring_multi_queues_using_1G_and_4k_pages_and_cbdma_enable(self): + """ + Test Case 8: vm2vm vhost/virtio-net split packed ring multi queues with 1G/4k-pages and cbdma enable + """ + self.get_cbdma_ports_info_and_bind_to_dpdk(cbdma_num=16, allow_diff_socket=True) + lcore_dma = "lcore%s@%s," \ + "lcore%s@%s," \ + "lcore%s@%s," \ + "lcore%s@%s," \ + "lcore%s@%s," \ + "lcore%s@%s," \ + "lcore%s@%s," \ + "lcore%s@%s," \ + "lcore%s@%s," \ + "lcore%s@%s," \ + "lcore%s@%s," \ + "lcore%s@%s," \ + "lcore%s@%s," \ + "lcore%s@%s," \ + "lcore%s@%s," \ + "lcore%s@%s" % ( + self.vhost_core_list[1], self.cbdma_list[0], + self.vhost_core_list[1], self.cbdma_list[1], + self.vhost_core_list[1], self.cbdma_list[2], + self.vhost_core_list[1], self.cbdma_list[3], + self.vhost_core_list[2], self.cbdma_list[4], + self.vhost_core_list[2], self.cbdma_list[5], + self.vhost_core_list[2], self.cbdma_list[6], + self.vhost_core_list[2], self.cbdma_list[7], + self.vhost_core_list[3], self.cbdma_list[8], + self.vhost_core_list[3], self.cbdma_list[9], + self.vhost_core_list[3], self.cbdma_list[10], + self.vhost_core_list[3], self.cbdma_list[11], + self.vhost_core_list[4], self.cbdma_list[12], + self.vhost_core_list[4], self.cbdma_list[13], + self.vhost_core_list[4], self.cbdma_list[14], + self.vhost_core_list[4], self.cbdma_list[15], + ) + vhost_eal_param = "-m 1024 " + \ + "--vdev 'net_vhost0,iface=./vhost-net0,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]'" + \ + " --vdev 'net_vhost1,iface=./vhost-net1,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]'" + vhost_param = " --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8 --lcore-dma=[%s]" % lcore_dma + self.start_vhost_user_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + ports=self.cbdma_list, + ) + self.vhost_user_pmd.execute_cmd("start") + self.mount_tmpfs_for_4k(number=2) + + self.start_vm(packed=True, queues=8, server=True) + self.config_vm_ip() + self.config_vm_combined(combined=8) + self.check_ping_between_vms() + self.check_scp_file_valid_between_vms() + self.start_iperf() + self.get_iperf_result() + + self.vm0.stop() + self.vm1.stop() + self.vhost_user_pmd.quit() + def tear_down(self): """ Run after each test case. 
@@ -407,6 +1023,8 @@ class TestBasic4kPagesCbdma(TestCase): self.virtio_user0_pmd.quit() self.vhost_user_pmd.quit() self.dut.send_expect("killall -s INT %s" % self.testpmd_name, "# ") + self.dut.send_expect("killall -s INT qemu-system-x86_64", "#") + self.dut.send_expect("rm -rf /tmp/vhost-net*", "# ") self.bind_cbdma_device_to_kernel() self.umount_tmpfs_for_4k() From patchwork Tue Aug 16 03:00:55 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Ling, WeiX" X-Patchwork-Id: 115145 Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 1EF8DA00C3; Tue, 16 Aug 2022 05:05:04 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 1863E40F19; Tue, 16 Aug 2022 05:05:04 +0200 (CEST) Received: from mga12.intel.com (mga12.intel.com [192.55.52.136]) by mails.dpdk.org (Postfix) with ESMTP id 2AF334067C for ; Tue, 16 Aug 2022 05:05:02 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1660619103; x=1692155103; h=from:to:cc:subject:date:message-id:mime-version: content-transfer-encoding; bh=F396fMvdx6MJv+schDmgU5PQhcG3OMyls/4Zpxx6N4k=; b=L/7tLPs/lfrahZCjRcuMQXLBKJBuQW38OX1yRinOdMwsXfBLA40/L1At zSIXwVIarWpsFhmbhDmtgMIP8viCPwFHAiVy/ZSITl89fVzznnH/36kgu 1BCTOmym8ZdZrE2THbos/mpFSRun+GHfRGOPWb3QajPj24CpfIGTaORES fkJAaKSBJtDqV3AZ5rxsNw20OiY1j73RmmAqHiKWdB73HLGTqRbMH1wku Fzcj8VkvdqI8wzE6c8XU5qPBw6IQJy5qoI2s1qY3EN6c2NCE9a8CRDmAJ n/lB86kxeStKkhjg/39pnRo3Mn+8NRfQ3BoCtD4z09Q5iP4eMTohhCLgq Q==; X-IronPort-AV: E=McAfee;i="6400,9594,10440"; a="271881729" X-IronPort-AV: E=Sophos;i="5.93,240,1654585200"; d="scan'208";a="271881729" Received: from orsmga007.jf.intel.com ([10.7.209.58]) by fmsmga106.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 15 Aug 2022 20:05:02 -0700 X-IronPort-AV: E=Sophos;i="5.93,240,1654585200"; d="scan'208";a="603376229" Received: from unknown (HELO localhost.localdomain) ([10.239.252.222]) by orsmga007-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 15 Aug 2022 20:05:00 -0700 From: Wei Ling To: dts@dpdk.org Cc: Wei Ling Subject: [dts][PATCH V2 3/3] conf/basic_4k_pages_cbdma: add testsuite config file Date: Mon, 15 Aug 2022 23:00:55 -0400 Message-Id: <20220816030055.3416270-1-weix.ling@intel.com> X-Mailer: git-send-email 2.25.1 MIME-Version: 1.0 X-BeenThere: dts@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: test suite reviews and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dts-bounces@dpdk.org Add testsuite config file basic_4k_pages_cbdma.cfg. 
Signed-off-by: Wei Ling --- conf/basic_4k_pages_cbdma.cfg | 36 +++++++++++++++++++++++++++++++++++ 1 file changed, 36 insertions(+) create mode 100644 conf/basic_4k_pages_cbdma.cfg diff --git a/conf/basic_4k_pages_cbdma.cfg b/conf/basic_4k_pages_cbdma.cfg new file mode 100644 index 00000000..cdac3af6 --- /dev/null +++ b/conf/basic_4k_pages_cbdma.cfg @@ -0,0 +1,36 @@ +[vm0] +cpu = + model=host,number=8,cpupin=20 21 22 23 24 25 26 27; +mem = + size=4096,hugepage=yes; +disk = + file=/home/image/ubuntu2004.img; +login = + user=root,password=tester; +vnc = + displayNum=4; +net = + type=user,opt_vlan=2; + type=nic,opt_vlan=2; +daemon = + enable=yes; +qemu = + path=/home/QEMU/qemu-7.0.0/bin/qemu-system-x86_64; +[vm1] +cpu = + model=host,number=8,cpupin=48 49 50 51 52 53 54 55; +mem = + size=4096,hugepage=yes; +disk = + file=/home/image/ubuntu2004_2.img; +login = + user=root,password=tester; +net = + type=nic,opt_vlan=3; + type=user,opt_vlan=3; +vnc = + displayNum=5; +daemon = + enable=yes; +qemu = + path=/home/QEMU/qemu-7.0.0/bin/qemu-system-x86_64;
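For reference, the fields above (cpu/mem/disk/login/vnc/qemu) are the ones that get_virt_config() extracts in the test suite and that start_vm() substitutes into the qemu-system-x86_64 command line. Below is a minimal, self-contained sketch of that flow; parse_cfg_section() and build_qemu_cmd() are illustrative stand-ins (not DTS framework APIs), and the example values are copied from the [vm0] section::

    # Simplified sketch of how the [vm0]/[vm1] sections are consumed.
    # parse_cfg_section() is a hypothetical stand-in for framework.config.VirtConf;
    # build_qemu_cmd() mirrors the f-string assembled in start_vm().
    from typing import Dict


    def parse_cfg_section(section: Dict[str, str]) -> Dict[str, str]:
        """Flatten one [vmX] section into the fields start_vm() cares about."""
        return {
            "cpupin": section["cpu"],      # e.g. "20 21 22 23 24 25 26 27"
            "mem_size": section["mem"],    # e.g. "4096"
            "image": section["disk"],      # e.g. "/home/image/ubuntu2004.img"
            "vnc": section["vnc"],         # e.g. "4"
            "qemu": section["qemu"],       # path to qemu-system-x86_64
        }


    def build_qemu_cmd(vm_name: str, cfg: Dict[str, str], vhost_sock: str, mac: str) -> str:
        """Pin the guest to the cpupin cores, back its RAM with a 4K-page tmpfs
        file and attach one vhost-user netdev, as start_vm() does."""
        lcores = ",".join(cfg["cpupin"].split())
        smp = len(cfg["cpupin"].split())
        return (
            f"taskset -c {lcores} {cfg['qemu']} -name {vm_name} -enable-kvm "
            f"-cpu host -smp {smp} -m {cfg['mem_size']} "
            f"-object memory-backend-file,id=mem,size={cfg['mem_size']}M,"
            f"mem-path=/mnt/tmpfs_nohuge0,share=on -numa node,memdev=mem -mem-prealloc "
            f"-chardev socket,id=char0,path={vhost_sock} "
            f"-netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce "
            f"-device virtio-net-pci,netdev=netdev0,mac={mac} "
            f"-drive file={cfg['image']} -vnc :{cfg['vnc']}"
        )


    if __name__ == "__main__":
        # Example values taken from the [vm0] section of basic_4k_pages_cbdma.cfg.
        vm0 = parse_cfg_section(
            {
                "cpu": "20 21 22 23 24 25 26 27",
                "mem": "4096",
                "disk": "/home/image/ubuntu2004.img",
                "vnc": "4",
                "qemu": "/home/QEMU/qemu-7.0.0/bin/qemu-system-x86_64",
            }
        )
        print(build_qemu_cmd("vm0", vm0, "/root/dpdk/vhost-net0", "52:54:00:00:00:01"))

Note that the mem-path argument points at the per-VM tmpfs mounted by mount_tmpfs_for_4k(), which is what keeps the guest memory on 4K pages instead of hugepages while the vhost side runs with --no-huge.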