From patchwork Thu Dec 22 02:29:36 2022
X-Patchwork-Submitter: "Ling, WeiX"
X-Patchwork-Id: 121247
From: Wei Ling
To: dts@dpdk.org
Cc: Wei Ling
Subject: [dts][PATCH V4 1/2] test_plans/basic_4k_pages_cbdma_test_plan: modify dmas parameter by DPDK changed
Date: Thu, 22 Dec 2022 10:29:36 +0800
Message-Id: <20221222022936.174229-1-weix.ling@intel.com>

The dmas parameter has been changed by the local patch, so modify the dmas parameter in the test plan.

Signed-off-by: Wei Ling
--- test_plans/basic_4k_pages_cbdma_test_plan.rst | 626 +++++++++--------- 1 file changed, 318 insertions(+), 308 deletions(-) diff --git a/test_plans/basic_4k_pages_cbdma_test_plan.rst b/test_plans/basic_4k_pages_cbdma_test_plan.rst index 009a200c..6c7d8398 100644 --- a/test_plans/basic_4k_pages_cbdma_test_plan.rst +++ b/test_plans/basic_4k_pages_cbdma_test_plan.rst @@ -20,23 +20,35 @@ vhost-user/virtio-net mergeable path. 3.Check the payload of large packet (larger than 1MB) is valid after forwarding packets with vm2vm split ring and packed ring vhost-user/virtio-net mergeable path. -Note: +.. note:: -1. When CBDMA channels are bound to vfio driver, VA mode is the default and recommended. -For PA mode, page by page mapping may exceed IOMMU's max capability, better to use 1G guest hugepage. -2. DPDK local patch that about vhost pmd is needed when testing Vhost asynchronous data path with testpmd. And case 4-5 have not yet been automated. + 1. When CBDMA channels are bound to the vfio driver, VA mode is the default and recommended. + For PA mode, page-by-page mapping may exceed the IOMMU's max capability; it is better to use 1G guest hugepages. + 2. A DPDK local patch for the vhost PMD is needed when testing the vhost asynchronous data path with testpmd. In this patch, + we enable the asynchronous data path for the vhost PMD.
Asynchronous data path is enabled per tx/rx queue, and users need to specify + the DMA device used by the tx/rx queue. Each tx/rx queue only supports using one DMA device (this is limited by the + implementation of the vhost PMD), but one DMA device can be shared among multiple tx/rx queues of different vhost PMD ports. -For more about dpdk-testpmd sample, please refer to the DPDK docments: -https://doc.dpdk.org/guides/testpmd_app_ug/run_app.html -For virtio-user vdev parameter, you can refer to the DPDK docments: -https://doc.dpdk.org/guides/nics/virtio.html#virtio-paths-selection-and-usage. + Two PMD parameters are added: + - dmas: specify the DMA device used by a tx/rx queue. (Default: no queues enable the asynchronous data path) + - dma-ring-size: DMA ring size. (Default: 4096) + + Here is an example: + --vdev 'eth_vhost0,iface=./s0,dmas=[txq0@0000:00.01.0;rxq0@0000:00.01.1],dma-ring-size=2048' + + For more about the dpdk-testpmd sample, please refer to the DPDK documents: + https://doc.dpdk.org/guides/testpmd_app_ug/run_app.html + For the virtio-user vdev parameter, you can refer to the DPDK documents: + https://doc.dpdk.org/guides/nics/virtio.html#virtio-paths-selection-and-usage. Prerequisites ============= Software -------- -Trex:http://trex-tgn.cisco.com/trex/release/v2.26.tar.gz + Trex: http://trex-tgn.cisco.com/trex/release/v2.26.tar.gz + iperf + qemu: https://download.qemu.org/qemu-7.1.0.tar.xz General set up -------------- @@ -46,9 +58,9 @@ General set up # CC=gcc meson --werror -Denable_kmods=True -Dlibdir=lib -Dexamples=all --default-library=static # ninja -C -j 110 - For example: - CC=gcc meson --werror -Denable_kmods=True -Dlibdir=lib -Dexamples=all --default-library=static x86_64-native-linuxapp-gcc - ninja -C x86_64-native-linuxapp-gcc -j 110 + For example: + CC=gcc meson --werror -Denable_kmods=True -Dlibdir=lib -Dexamples=all --default-library=static x86_64-native-linuxapp-gcc + ninja -C x86_64-native-linuxapp-gcc -j 110 3. 
Get the PCI device ID and DMA device ID of DUT, for example, 0000:18:00.0 is PCI device ID, 0000:00:04.0, 0000:00:04.1 is DMA device ID:: @@ -65,10 +77,10 @@ General set up 4. Prepare tmpfs with 4K-pages:: - mkdir /mnt/tmpfs_nohuge0 - mkdir /mnt/tmpfs_nohuge1 - mount tmpfs /mnt/tmpfs_nohuge0 -t tmpfs -o size=4G - mount tmpfs /mnt/tmpfs_nohuge1 -t tmpfs -o size=4G + mkdir /mnt/tmpfs_nohuge0 + mkdir /mnt/tmpfs_nohuge1 + mount tmpfs /mnt/tmpfs_nohuge0 -t tmpfs -o size=4G + mount tmpfs /mnt/tmpfs_nohuge1 -t tmpfs -o size=4G Test case ========= @@ -85,234 +97,239 @@ Common steps Test Case 1: Basic test vhost-user/virtio-user split ring vhost async operation using 4K-pages and cbdma enable --------------------------------------------------------------------------------------------------------------- -This case tests basic functions of split ring virtio path when uses the asynchronous operations with CBDMA channels in 4K-pages memory environment and PVP vhost-user/virtio-user topology. +This case tests basic functions of the split ring virtio path when using asynchronous operations with CBDMA channels +in a 4K-pages memory environment and PVP vhost-user/virtio-user topology. -1. Bind one port to vfio-pci, launch vhost:: +1. Bind 1 CBDMA port and 1 NIC port to vfio-pci, launch vhost:: - ./usertools/dpdk-devbind.py --bind=vfio-pci xx:xx.x - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 3-4 -n 4 -m 1024 --no-huge --file-prefix=vhost -a 0000:18:00.0 -a 0000:00:04.0 \ - --vdev 'net_vhost0,iface=/tmp/vhost-net,queues=1,dmas=[txq0;rxq0]' -- -i --no-numa --socket-num=0 --lcore-dma=[lcore4@0000:00:04.0] - testpmd>start + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 3-4 -n 4 -m 1024 --no-huge --file-prefix=vhost \ + -a 0000:18:00.0 -a 0000:00:04.0 \ + --vdev 'net_vhost0,iface=/tmp/vhost-net,queues=1,dmas=[txq0@0000:00:04.0;rxq0@0000:00:04.0]' \ + -- -i --no-numa --socket-num=0 + testpmd>start 2. 
Launch virtio-user with 4K-pages:: - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 5-6 -n 4 --no-huge -m 1024 --file-prefix=virtio-user \ - --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/vhost-net,queues=1 -- -i - testpmd>start + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 5-6 -n 4 --no-pci --no-huge -m 1024 --file-prefix=virtio-user \ + --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/vhost-net,queues=1 \ + -- -i + testpmd>start 3. Send packet with packet generator with different packet size,includes [64, 128, 256, 512, 1024, 1518], check the throughput with below command:: - testpmd>show port stats all + testpmd>show port stats all Test Case 2: Basic test vhost-user/virtio-user packed ring vhost async operation using 4K-pages and cbdma enable ---------------------------------------------------------------------------------------------------------------- -This case tests basic functions of packed ring virtio path when uses the asynchronous operations with CBDMA channels in 4K-pages memory environment and PVP vhost-user/virtio-user topology. +This case tests basic functions of the packed ring virtio path when using asynchronous operations with CBDMA channels +in a 4K-pages memory environment and PVP vhost-user/virtio-user topology. -1. Bind one port to vfio-pci, launch vhost:: +1. 
Bind 1 CBDMA port and 1 NIC port to vfio-pci, launch vhost:: - modprobe vfio-pci - ./usertools/dpdk-devbind.py --bind=vfio-pci xx:xx.x - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 3-4 -n 4 -m 1024 --no-huge --file-prefix=vhost -a 0000:18:00.0 -a 0000:00:04.0 \ - --vdev 'net_vhost0,iface=/tmp/vhost-net,queues=1,dmas=[txq0;rxq0]' -- -i --no-numa --socket-num=0 --lcore-dma=[lcore4@0000:00:04.0] - testpmd>start + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 3-4 -n 4 -m 1024 --no-huge --file-prefix=vhost \ + -a 0000:18:00.0 -a 0000:00:04.0 \ + --vdev 'net_vhost0,iface=/tmp/vhost-net,queues=1,dmas=[txq0@0000:00:04.0;rxq0@0000:00:04.0]' \ + -- -i --no-numa --socket-num=0 + testpmd>start 2. Launch virtio-user with 4K-pages:: - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 5-6 -n 4 --no-huge -m 1024 --file-prefix=virtio-user \ - --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/vhost-net,packed_vq=1,queues=1 -- -i - testpmd>start + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 5-6 -n 4 --no-pci --no-huge -m 1024 --file-prefix=virtio-user \ + --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/vhost-net,packed_vq=1,queues=1 \ + -- -i + testpmd>start 3. Send packet with packet generator with different packet size,includes [64, 128, 256, 512, 1024, 1518], check the throughput with below command:: - testpmd>show port stats all + testpmd>show port stats all Test Case 3: VM2VM vhost-user/virtio-net split ring vhost async operation test with tcp traffic using 4K-pages and cbdma enable ------------------------------------------------------------------------------------------------------------------------------- -This case test the function of Vhost TSO in the topology of vhost-user/virtio-net split ring mergeable path by verifing the TSO/cksum in the TCP/IP stack -when vhost uses the asynchronous operations with CBDMA channels in 4K-pages memory environment. 
+This case tests the function of Vhost TSO in the topology of the vhost-user/virtio-net split ring mergeable path by verifying the +TSO/cksum in the TCP/IP stack when vhost uses asynchronous operations with CBDMA channels in a 4K-pages memory environment. -1. Bind 2 CBDMA channels to vfio-pci, then launch vhost by below command:: +1. Bind 2 CBDMA ports to vfio-pci, then launch vhost by below command:: - rm -rf vhost-net* - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-4 -n 4 --no-huge -m 1024 --file-prefix=vhost -a 0000:00:04.0 -a 0000:00:04.1 \ - --vdev 'net_vhost0,iface=vhost-net0,queues=1,tso=1,dmas=[txq0;rxq0]' \ - --vdev 'net_vhost1,iface=vhost-net1,queues=1,tso=1,dmas=[txq0;rxq0]' \ - --iova=va -- -i --nb-cores=2 --txd=1024 --rxd=1024 --lcore-dma=[lcore3@0000:00:04.0,lcore4@0000:00:04.1] - testpmd>start + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-4 -n 4 --no-huge -m 1024 --file-prefix=vhost \ + -a 0000:00:04.0 -a 0000:00:04.1 \ + --vdev 'net_vhost0,iface=vhost-net0,queues=1,tso=1,dmas=[txq0@0000:00:04.0;rxq0@0000:00:04.0],dma-ring-size=2048' \ + --vdev 'net_vhost1,iface=vhost-net1,queues=1,tso=1,dmas=[txq0@0000:00:04.1;rxq0@0000:00:04.1],dma-ring-size=2048' \ + --iova=va -- -i --nb-cores=2 --txd=1024 --rxd=1024 + testpmd>start 2. 
Launch VM1 and VM2:: - taskset -c 32 qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 1 -m 4096 \ - -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_nohuge0,share=on \ - -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04.img \ - -chardev socket,path=/tmp/vm1_qga0.sock,server,nowait,id=vm1_qga0 -device virtio-serial \ - -device virtserialport,chardev=vm1_qga0,name=org.qemu.guest_agent.2 -daemonize \ - -monitor unix:/tmp/vm1_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ - -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \ - -chardev socket,id=char0,path=./vhost-net0 \ - -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \ - -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on -vnc :10 - - taskset -c 33 qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 1 -m 4096 \ - -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_nohuge1,share=on \ - -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04-2.img \ - -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \ - -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \ - -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ - -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \ - -chardev socket,id=char0,path=./vhost-net1 \ - -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \ - -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on -vnc :12 + taskset -c 32 qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 1 -m 4096 \ + -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_nohuge0,share=on \ + -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu22-04.img \ + -chardev 
socket,path=/tmp/vm1_qga0.sock,server,nowait,id=vm1_qga0 -device virtio-serial \ + -device virtserialport,chardev=vm1_qga0,name=org.qemu.guest_agent.2 -daemonize \ + -monitor unix:/tmp/vm1_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ + -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \ + -chardev socket,id=char0,path=./vhost-net0 \ + -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \ + -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on -vnc :10 + + taskset -c 33 qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 1 -m 4096 \ + -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_nohuge1,share=on \ + -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu22-04-2.img \ + -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \ + -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \ + -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ + -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \ + -chardev socket,id=char0,path=./vhost-net1 \ + -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \ + -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on -vnc :12 3. On VM1, set virtio device IP and run arp protocal:: - ifconfig ens5 1.1.1.2 - arp -s 1.1.1.8 52:54:00:00:00:02 + ifconfig ens5 1.1.1.2 + arp -s 1.1.1.8 52:54:00:00:00:02 4. On VM2, set virtio device IP and run arp protocal:: - ifconfig ens5 1.1.1.8 - arp -s 1.1.1.2 52:54:00:00:00:01 + ifconfig ens5 1.1.1.8 + arp -s 1.1.1.2 52:54:00:00:00:01 5. 
Check the iperf performance between two VMs by below commands:: - Under VM1, run: `iperf -s -i 1` - Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60` + Under VM1, run: `iperf -s -i 1` + Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60` 6. Check 2VMs can receive and send big packets to each other:: - testpmd>show port xstats all - Port 0 should have tx packets above 1522 - Port 1 should have rx packets above 1522 + testpmd>show port xstats all + Port 0 should have tx packets above 1518 + Port 1 should have rx packets above 1518 Test Case 4: VM2VM vhost-user/virtio-net packed ring vhost async operation test with tcp traffic using 4K-pages and cbdma enable -------------------------------------------------------------------------------------------------------------------------------- -This case test the function of Vhost TSO in the topology of vhost-user/virtio-net packed ring mergeable path by verifing the TSO/cksum in the TCP/IP stack -when vhost uses the asynchronous operations with CBDMA channels in 4K-pages memory environment. +This case tests the function of Vhost TSO in the topology of the vhost-user/virtio-net packed ring mergeable path by verifying the +TSO/cksum in the TCP/IP stack when vhost uses asynchronous operations with CBDMA channels in a 4K-pages memory environment. -1. Bind 2 CBDMA channels to vfio-pci, then launch vhost by below command:: +1. 
Bind 2 CBDMA port to vfio-pci, then launch vhost by below command:: - rm -rf vhost-net* - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-4 -n 4 --no-huge -m 1024 --file-prefix=vhost -a 0000:00:04.0 -a 0000:00:04.1 \ - --vdev 'net_vhost0,iface=vhost-net0,queues=1,tso=1,dmas=[txq0;rxq0]' \ - --vdev 'net_vhost1,iface=vhost-net1,queues=1,tso=1,dmas=[txq0;rxq0]' \ - --iova=va -- -i --nb-cores=2 --txd=1024 --rxd=1024 --lcore-dma=[lcore3@0000:00:04.0,lcore4@0000:00:04.1] - testpmd>start + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-4 -n 4 --no-huge -m 1024 --file-prefix=vhost \ + -a 0000:00:04.0 -a 0000:00:04.1 \ + --vdev 'net_vhost0,iface=vhost-net0,queues=1,tso=1,dmas=[txq0@0000:00:04.0;rxq0@0000:00:04.0],dma-ring-size=2048' \ + --vdev 'net_vhost1,iface=vhost-net1,queues=1,tso=1,dmas=[txq0@0000:00:04.1;rxq0@0000:00:04.1],dma-ring-size=2048' \ + --iova=va -- -i --nb-cores=2 --txd=1024 --rxd=1024 + testpmd>start 2. Launch VM1 and VM2:: - taskset -c 32 qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 1 -m 4096 \ - -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_nohuge0,share=on \ - -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04.img \ - -chardev socket,path=/tmp/vm1_qga0.sock,server,nowait,id=vm1_qga0 -device virtio-serial \ - -device virtserialport,chardev=vm1_qga0,name=org.qemu.guest_agent.2 -daemonize \ - -monitor unix:/tmp/vm1_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ - -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \ - -chardev socket,id=char0,path=./vhost-net0 \ - -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \ - -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on -vnc :10 - - taskset -c 33 qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 1 -m 4096 \ - -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_nohuge1,share=on \ - -numa 
node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04-2.img \ - -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \ - -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \ - -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ - -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \ - -chardev socket,id=char0,path=./vhost-net1 \ - -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \ - -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on -vnc :12 + taskset -c 32 qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 1 -m 4096 \ + -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_nohuge0,share=on \ + -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu22-04.img \ + -chardev socket,path=/tmp/vm1_qga0.sock,server,nowait,id=vm1_qga0 -device virtio-serial \ + -device virtserialport,chardev=vm1_qga0,name=org.qemu.guest_agent.2 -daemonize \ + -monitor unix:/tmp/vm1_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ + -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \ + -chardev socket,id=char0,path=./vhost-net0 \ + -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \ + -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on -vnc :10 + + taskset -c 33 qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 1 -m 4096 \ + -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_nohuge1,share=on \ + -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu22-04-2.img \ + -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \ + -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \ + -monitor 
unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ + -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \ + -chardev socket,id=char0,path=./vhost-net1 \ + -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \ + -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on -vnc :12 3. On VM1, set virtio device IP and run arp protocal:: - ifconfig ens5 1.1.1.2 - arp -s 1.1.1.8 52:54:00:00:00:02 + ifconfig ens5 1.1.1.2 + arp -s 1.1.1.8 52:54:00:00:00:02 4. On VM2, set virtio device IP and run arp protocal:: - ifconfig ens5 1.1.1.8 - arp -s 1.1.1.2 52:54:00:00:00:01 + ifconfig ens5 1.1.1.8 + arp -s 1.1.1.2 52:54:00:00:00:01 5. Check the iperf performance between two VMs by below commands:: - Under VM1, run: `iperf -s -i 1` - Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60` + Under VM1, run: `iperf -s -i 1` + Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60` 6. Check 2VMs can receive and send big packets to each other:: - testpmd>show port xstats all - Port 0 should have tx packets above 1522 - Port 1 should have rx packets above 1522 + testpmd>show port xstats all + Port 0 should have tx packets above 1518 + Port 1 should have rx packets above 1518 Test Case 5: vm2vm vhost/virtio-net split ring multi queues using 4K-pages and cbdma enable ------------------------------------------------------------------------------------------- -This case uses iperf and scp to test the payload of large packet (larger than 1MB) is valid after packets forwarding in vm2vm vhost-user/virtio-net -split ring mergeable path when vhost uses the asynchronous operations with CBDMA channels in 4K-pages memory environment. The dynamic change of multi-queues number is also tested. 
+This case uses iperf and scp to test the payload of large packet (larger than 1MB) is valid +after packets forwarding in vm2vm vhost-user/virtio-net split ring mergeable path when vhost +uses the asynchronous operations with CBDMA channels in 4K-pages memory environment. +The dynamic change of multi-queues number is also tested. -1. Bind one port to vfio-pci, launch vhost:: +1. Bind 4 CBDMA port to vfio-pci, launch vhost:: - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --no-huge -m 1024 --file-prefix=vhost -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \ - -a 0000:80:04.0 -a 0000:80:04.1 -a 0000:80:04.2 -a 0000:80:04.3 -a 0000:80:04.4 -a 0000:80:04.5 -a 0000:80:04.6 -a 0000:80:04.7 \ - --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7]' \ - --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7]' \ - --iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8 \ - --lcore-dma=[lcore2@0000:00:04.0,lcore2@0000:00:04.1,lcore2@0000:00:04.2,lcore2@0000:00:04.3,lcore2@0000:00:04.4,lcore2@0000:00:04.5,lcore3@0000:00:04.6,lcore3@0000:00:04.7,lcore4@0000:80:04.0,lcore4@0000:80:04.1,lcore4@0000:80:04.2,lcore4@0000:80:04.3,lcore4@0000:80:04.4,lcore4@0000:80:04.5,lcore4@0000:80:04.6,lcore5@0000:80:04.7] - testpmd>start + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --no-huge -m 1024 --file-prefix=vhost \ + -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 \ + --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.0;txq2@0000:00:04.0;txq3@0000:00:04.0;txq4@0000:00:04.1;txq5@0000:00:04.1;txq6@0000:00:04.1;txq7@0000:00:04.1]' \ + --vdev 
'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0@0000:00:04.2;txq1@0000:00:04.2;txq2@0000:00:04.2;txq3@0000:00:04.2;txq4@0000:00:04.3;txq5@0000:00:04.3;txq6@0000:00:04.3;txq7@0000:00:04.3]' \ + --iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8 + testpmd>start 2. Launch VM qemu:: - taskset -c 32 qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 8 -m 4096 \ - -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_nohuge0,share=on \ - -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04.img \ - -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \ - -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \ - -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ - -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \ - -chardev socket,id=char0,path=./vhost-net0,server \ - -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \ - -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :10 - - taskset -c 40 qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 8 -m 4096 \ - -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_nohuge1,share=on \ - -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04-2.img \ - -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \ - -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \ - -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ - -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \ - -chardev socket,id=char0,path=./vhost-net1,server \ - -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \ - -device 
virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :12 + taskset -c 32 qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 8 -m 4096 \ + -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_nohuge0,share=on \ + -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu22-04.img \ + -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \ + -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \ + -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ + -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \ + -chardev socket,id=char0,path=./vhost-net0,server \ + -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \ + -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :10 + + taskset -c 40 qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 8 -m 4096 \ + -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_nohuge1,share=on \ + -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu22-04-2.img \ + -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \ + -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \ + -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ + -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \ + -chardev socket,id=char0,path=./vhost-net1,server \ + -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \ + -device 
virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :12 3. On VM1, set virtio device IP and run arp protocal:: - ethtool -L ens5 combined 8 - ifconfig ens5 1.1.1.2 - arp -s 1.1.1.8 52:54:00:00:00:02 + ethtool -L ens5 combined 8 + ifconfig ens5 1.1.1.2 + arp -s 1.1.1.8 52:54:00:00:00:02 4. On VM2, set virtio device IP and run arp protocal:: - ethtool -L ens5 combined 8 - ifconfig ens5 1.1.1.8 - arp -s 1.1.1.2 52:54:00:00:00:01 + ethtool -L ens5 combined 8 + ifconfig ens5 1.1.1.8 + arp -s 1.1.1.2 52:54:00:00:00:01 5. Scp 1MB file form VM1 to VM2:: - Under VM1, run: `scp [xxx] root@1.1.1.8:/` [xxx] is the file name + Under VM1, run: `scp [xxx] root@1.1.1.8:/` [xxx] is the file name 6. Check the iperf performance between two VMs by below commands:: - Under VM1, run: `iperf -s -i 1` - Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60` + Under VM1, run: `iperf -s -i 1` + Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60` 7. 
Quit and relaunch vhost w/ diff CBDMA channels:: - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --no-huge -m 1024 --file-prefix=vhost -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \ - -a 0000:80:04.0 -a 0000:80:04.1 -a 0000:80:04.2 -a 0000:80:04.3 -a 0000:80:04.4 -a 0000:80:04.5 -a 0000:80:04.6 -a 0000:80:04.7 \ - --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \ - --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \ - --iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8 \ - --lcore-dma=[lcore2@0000:00:04.0,lcore2@0000:00:04.1,lcore2@0000:00:04.2,lcore2@0000:00:04.3,lcore3@0000:00:04.0,lcore3@0000:00:04.2,lcore3@0000:00:04.4,lcore3@0000:00:04.5,lcore3@0000:00:04.6,lcore3@0000:00:04.7,lcore4@0000:00:04.1,lcore4@0000:00:04.3,lcore4@0000:80:04.0,lcore4@0000:80:04.1,lcore4@0000:80:04.2,lcore4@0000:80:04.3,lcore4@0000:80:04.4,lcore4@0000:80:04.5,lcore4@0000:80:04.6,lcore5@0000:80:04.7] - testpmd>start + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --no-huge -m 1024 --file-prefix=vhost \ + -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 \ + --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.0;txq2@0000:00:04.0;txq3@0000:00:04.0;txq4@0000:00:04.0;txq5@0000:00:04.0;rxq2@0000:00:04.1;rxq3@0000:00:04.1;rxq4@0000:00:04.1;rxq5@0000:00:04.1;rxq6@0000:00:04.1;rxq7@0000:00:04.1],dma-ring-size=1024' \ + --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0@0000:00:04.2;txq1@0000:00:04.2;txq2@0000:00:04.2;txq3@0000:00:04.2;txq4@0000:00:04.2;txq5@0000:00:04.2;rxq2@0000:00:04.3;rxq3@0000:00:04.3;rxq4@0000:00:04.3;rxq5@0000:00:04.3;rxq6@0000:00:04.3;rxq7@0000:00:04.3],dma-ring-size=1024' \ + --iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 
--txq=8 + testpmd>start 8. Rerun step 5-6. @@ -325,11 +342,11 @@ split ring mergeable path when vhost uses the asynchronous operations with CBDMA 10. On VM1, set virtio device:: - ethtool -L ens5 combined 4 + ethtool -L ens5 combined 4 11. On VM2, set virtio device:: - ethtool -L ens5 combined 4 + ethtool -L ens5 combined 4 12. Scp 1MB file form VM1 to VM2:: @@ -342,227 +359,220 @@ split ring mergeable path when vhost uses the asynchronous operations with CBDMA 14. Quit and relaunch vhost with 1 queues:: - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --no-huge -m 1024 --file-prefix=vhost \ - --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=4' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=4' \ - -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=1 --txq=1 - testpmd>start + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --no-huge -m 1024 --file-prefix=vhost \ + --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=4' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=4' \ + -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=1 --txq=1 + testpmd>start 15. On VM1, set virtio device:: - ethtool -L ens5 combined 1 + ethtool -L ens5 combined 1 16. On VM2, set virtio device:: - ethtool -L ens5 combined 1 + ethtool -L ens5 combined 1 17. Scp 1MB file form VM1 to VM2M, check packets can be forwarding success by scp:: - Under VM1, run: `scp [xxx] root@1.1.1.8:/` [xxx] is the file name + Under VM1, run: `scp [xxx] root@1.1.1.8:/` [xxx] is the file name 18. 
Check the iperf performance, ensure queue0 can work from vhost side:: - Under VM1, run: `iperf -s -i 1` - Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60` + Under VM1, run: `iperf -s -i 1` + Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60` Test Case 6: vm2vm vhost/virtio-net packed ring multi queues using 4K-pages and cbdma enable -------------------------------------------------------------------------------------------- -This case uses iperf and scp to test the payload of large packet (larger than 1MB) is valid after packets forwarding in vm2vm vhost-user/virtio-net -packed ring mergeable path when vhost uses the asynchronous operations with CBDMA channels in 4K-pages memory environment. +This case uses iperf and scp to test the payload of large packet (larger than 1MB) is valid +after packets forwarding in vm2vm vhost-user/virtio-net packed ring mergeable path when vhost +uses the asynchronous operations with CBDMA channels in 4K-pages memory environment. -1. Bind one port to vfio-pci, launch vhost:: +1. 
Bind 2 CBDMA ports to vfio-pci, launch vhost:: - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --no-huge -m 1024 --file-prefix=vhost -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \ - -a 0000:80:04.0 -a 0000:80:04.1 -a 0000:80:04.2 -a 0000:80:04.3 -a 0000:80:04.4 -a 0000:80:04.5 -a 0000:80:04.6 -a 0000:80:04.7 \ - --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7]' \ - --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7]' \ - --iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8 \ - --lcore-dma=[lcore2@0000:00:04.0,lcore2@0000:00:04.1,lcore2@0000:00:04.2,lcore2@0000:00:04.3,lcore2@0000:00:04.4,lcore2@0000:00:04.5,lcore3@0000:00:04.6,lcore3@0000:00:04.7,lcore4@0000:80:04.0,lcore4@0000:80:04.1,lcore4@0000:80:04.2,lcore4@0000:80:04.3,lcore4@0000:80:04.4,lcore4@0000:80:04.5,lcore4@0000:80:04.6,lcore5@0000:80:04.7] - testpmd>start + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --no-huge -m 1024 --file-prefix=vhost \ + -a 0000:00:04.0 -a 0000:00:04.1 \ + --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,tso=1,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.0;txq2@0000:00:04.0;txq3@0000:00:04.0;txq4@0000:00:04.1;txq5@0000:00:04.1;txq6@0000:00:04.1;txq7@0000:00:04.1]' \ + --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,tso=1,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.0;txq2@0000:00:04.0;txq3@0000:00:04.0;txq4@0000:00:04.1;txq5@0000:00:04.1;txq6@0000:00:04.1;txq7@0000:00:04.1]' \ + --iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8 + testpmd>start 2. 
Launch VM qemu:: - taskset -c 32 qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 8 -m 4096 \ - -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_nohuge0,share=on \ - -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04.img \ - -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \ - -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \ - -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ - -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \ - -chardev socket,id=char0,path=./vhost-net0,server \ - -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \ - -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on -vnc :10 - - taskset -c 40 qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 8 -m 4096 \ - -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_nohuge1,share=on \ - -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04-2.img \ - -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \ - -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \ - -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ - -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \ - -chardev socket,id=char0,path=./vhost-net1,server \ - -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \ - -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on -vnc :12 + taskset -c 32 qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 8 -m 4096 \ + -object 
memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_nohuge0,share=on \ + -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu22-04.img \ + -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \ + -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \ + -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ + -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \ + -chardev socket,id=char0,path=./vhost-net0,server \ + -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \ + -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on -vnc :10 + + taskset -c 40 qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 8 -m 4096 \ + -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_nohuge1,share=on \ + -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu22-04-2.img \ + -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \ + -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \ + -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ + -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \ + -chardev socket,id=char0,path=./vhost-net1,server \ + -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \ + -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on -vnc :12 3. On VM1, set virtio device IP and run arp protocal:: - ethtool -L ens5 combined 8 - ifconfig ens5 1.1.1.2 - arp -s 1.1.1.8 52:54:00:00:00:02 + ethtool -L ens5 combined 8 + ifconfig ens5 1.1.1.2 + arp -s 1.1.1.8 52:54:00:00:00:02 4. 
On VM2, set virtio device IP and run arp protocal:: - ethtool -L ens5 combined 8 - ifconfig ens5 1.1.1.8 - arp -s 1.1.1.2 52:54:00:00:00:01 + ethtool -L ens5 combined 8 + ifconfig ens5 1.1.1.8 + arp -s 1.1.1.2 52:54:00:00:00:01 5. Scp 1MB file form VM1 to VM2:: - Under VM1, run: `scp [xxx] root@1.1.1.8:/` [xxx] is the file name + Under VM1, run: `scp [xxx] root@1.1.1.8:/` [xxx] is the file name 6. Check the iperf performance between two VMs by below commands:: - Under VM1, run: `iperf -s -i 1` - Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60` + Under VM1, run: `iperf -s -i 1` + Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60` Test Case 7: vm2vm vhost/virtio-net split ring multi queues using 1G/4k-pages and cbdma enable ---------------------------------------------------------------------------------------------- -This case uses iperf and scp to test the payload of large packet (larger than 1MB) is valid after packets forwarding in vm2vm vhost-user/virtio-net -split ring mergeable path when vhost uses the asynchronous operations with CBDMA channels,the back-end is in 1G-pages memory environment and the front-end is in 4k-pages memory environment. +This case uses iperf and scp to test the payload of large packet (larger than 1MB) is valid +after packets forwarding in vm2vm vhost-user/virtio-net split ring mergeable path when vhost +uses the asynchronous operations with CBDMA channels,the back-end is in 1G-pages memory +environment and the front-end is in 4k-pages memory environment. -1. Bind 16 CBDMA channel to vfio-pci, launch vhost:: +1. 
Bind 4 CBDMA ports to vfio-pci, launch vhost:: - ./usertools/dpdk-devbind.py --bind=vfio-pci 0000:80:04.0 0000:80:04.1 0000:80:04.2 0000:80:04.3 0000:80:04.4 0000:80:04.5 0000:80:04.6 0000:80:04.7 \ - 0000:00:04.0 0000:00:04.1 0000:00:04.2 0000:00:04.3 0000:00:04.4 0000:00:04.5 0000:00:04.6 0000:00:04.7 - - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 29-33 -n 4 -m 1024 --file-prefix=vhost \ - -a 0000:80:04.0 -a 0000:80:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \ - -a 0000:80:04.0 -a 0000:80:04.1 -a 0000:80:04.2 -a 0000:80:04.3 -a 0000:80:04.4 -a 0000:80:04.5 -a 0000:80:04.6 -a 0000:80:04.7 \ - --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \ - --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \ - --iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8 \ - --lcore-dma=[lcore30@0000:80:04.0,lcore30@0000:80:04.1,lcore30@0000:00:04.2,lcore30@0000:00:04.3,lcore31@0000:00:04.4,lcore31@0000:00:04.5,lcore31@0000:00:04.6,lcore31@0000:00:04.7,lcore32@0000:80:04.0,lcore32@0000:80:04.1,lcore32@0000:80:04.2,lcore32@0000:80:04.3,lcore33@0000:80:04.4,lcore33@0000:80:04.5,lcore33@0000:80:04.6,lcore33@0000:80:04.7] - testpmd>start + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 29-33 -n 4 -m 1024 --file-prefix=vhost \ + -a 0000:00:04.0 -a 0000:00:04.1 \ + --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,tso=1,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.0;txq2@0000:00:04.0;txq3@0000:00:04.0;txq4@0000:00:04.0;txq5@0000:00:04.0;rxq2@0000:00:04.1;rxq3@0000:00:04.1;rxq4@0000:00:04.1;rxq5@0000:00:04.1;rxq6@0000:00:04.1;rxq7@0000:00:04.1],dma-ring-size=1024' \ + --vdev 
'net_vhost1,iface=vhost-net1,client=1,queues=8,tso=1,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.0;txq2@0000:00:04.0;txq3@0000:00:04.0;txq4@0000:00:04.0;txq5@0000:00:04.0;rxq2@0000:00:04.1;rxq3@0000:00:04.1;rxq4@0000:00:04.1;rxq5@0000:00:04.1;rxq6@0000:00:04.1;rxq7@0000:00:04.1],dma-ring-size=1024' \ + --iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8 + testpmd>start 2. Launch VM qemu:: - taskset -c 20,21,22,23,24,25,26,27 /home/QEMU/qemu-6.2.0/bin/qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 8 -m 4096 \ - -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_nohuge0,share=on \ - -numa node,memdev=mem -mem-prealloc -drive file=/home/image/ubuntu2004.img \ - -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \ - -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \ - -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ - -netdev user,id=nttsip1,hostfwd=tcp:10.239.251.220:6000-:22 \ - -chardev socket,id=char0,path=./vhost-net0,server \ - -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \ - -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :10 - - taskset -c 48,49,50,51,52,53,54,55 /home/QEMU/qemu-6.2.0/bin/qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 8 -m 4096 \ - -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_nohuge1,share=on \ - -numa node,memdev=mem -mem-prealloc -drive file=/home/image/ubuntu2004_2.img \ - -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \ - -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \ - -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ - -netdev user,id=nttsip1,hostfwd=tcp:10.239.251.220:6001-:22 \ - -chardev 
socket,id=char0,path=./vhost-net1,server \ - -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \ - -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :12 + taskset -c 20,21,22,23,24,25,26,27 /home/QEMU/qemu-6.2.0/bin/qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 8 -m 4096 \ + -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_nohuge0,share=on \ + -numa node,memdev=mem -mem-prealloc -drive file=/home/image/ubuntu2004.img \ + -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \ + -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \ + -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ + -netdev user,id=nttsip1,hostfwd=tcp:10.239.251.220:6000-:22 \ + -chardev socket,id=char0,path=./vhost-net0,server \ + -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \ + -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :10 + + taskset -c 48,49,50,51,52,53,54,55 /home/QEMU/qemu-6.2.0/bin/qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 8 -m 4096 \ + -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_nohuge1,share=on \ + -numa node,memdev=mem -mem-prealloc -drive file=/home/image/ubuntu2004_2.img \ + -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \ + -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \ + -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ + -netdev user,id=nttsip1,hostfwd=tcp:10.239.251.220:6001-:22 \ + -chardev socket,id=char0,path=./vhost-net1,server \ + -netdev 
type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \ + -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :12 3. On VM1, set virtio device IP and run arp protocal:: - ethtool -L ens5 combined 8 - ifconfig ens5 1.1.1.2 - arp -s 1.1.1.8 52:54:00:00:00:02 + ethtool -L ens5 combined 8 + ifconfig ens5 1.1.1.2 + arp -s 1.1.1.8 52:54:00:00:00:02 4. On VM2, set virtio device IP and run arp protocal:: - ethtool -L ens5 combined 8 - ifconfig ens5 1.1.1.8 - arp -s 1.1.1.2 52:54:00:00:00:01 + ethtool -L ens5 combined 8 + ifconfig ens5 1.1.1.8 + arp -s 1.1.1.2 52:54:00:00:00:01 5. Scp 1MB file form VM1 to VM2:: - Under VM1, run: `scp [xxx] root@1.1.1.8:/` [xxx] is the file name + Under VM1, run: `scp [xxx] root@1.1.1.8:/` [xxx] is the file name 6. Check the iperf performance between two VMs by below commands:: - Under VM1, run: `iperf -s -i 1` - Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60` + Under VM1, run: `iperf -s -i 1` + Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60` 7. 
Quit and relaunch vhost w/ diff CBDMA channels:: - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --no-huge -m 1024 --file-prefix=vhost -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \ - -a 0000:80:04.0 -a 0000:80:04.1 -a 0000:80:04.2 -a 0000:80:04.3 -a 0000:80:04.4 -a 0000:80:04.5 -a 0000:80:04.6 -a 0000:80:04.7 \ - --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7;rxq0;rxq1;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \ - --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7;rxq0;rxq1;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \ - --iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8 \ - --lcore-dma=[lcore2@0000:00:04.0,lcore2@0000:00:04.1,lcore2@0000:00:04.2,lcore2@0000:00:04.3,lcore3@0000:00:04.0,lcore3@0000:00:04.2,lcore3@0000:00:04.4,lcore3@0000:00:04.5,lcore3@0000:00:04.6,lcore3@0000:00:04.7,lcore4@0000:00:04.1,lcore4@0000:00:04.3,lcore4@0000:80:04.0,lcore4@0000:80:04.1,lcore4@0000:80:04.2,lcore4@0000:80:04.3,lcore4@0000:80:04.4,lcore4@0000:80:04.5,lcore4@0000:80:04.6,lcore5@0000:80:04.7] - testpmd>start + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --no-huge -m 1024 --file-prefix=vhost \ + -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 \ + --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,tso=1,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.0;txq2@0000:00:04.0;txq3@0000:00:04.0;txq4@0000:00:04.1;txq5@0000:00:04.1;txq6@0000:00:04.1;txq7@0000:00:04.1;rxq0@0000:00:04.2;rxq1@0000:00:04.2;rxq2@0000:00:04.2;rxq3@0000:00:04.2;rxq4@0000:00:04.3;rxq5@0000:00:04.3;rxq6@0000:00:04.3;rxq7@0000:00:04.3]' \ + --vdev 
'net_vhost1,iface=vhost-net1,client=1,queues=8,tso=1,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.0;txq2@0000:00:04.0;txq3@0000:00:04.0;txq4@0000:00:04.1;txq5@0000:00:04.1;txq6@0000:00:04.1;txq7@0000:00:04.1;rxq0@0000:00:04.2;rxq1@0000:00:04.2;rxq2@0000:00:04.2;rxq3@0000:00:04.2;rxq4@0000:00:04.3;rxq5@0000:00:04.3;rxq6@0000:00:04.3;rxq7@0000:00:04.3]' \ + --iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8 + testpmd>start 8. Rerun step 5-6. Test Case 8: vm2vm vhost/virtio-net split packed ring multi queues with 1G/4k-pages and cbdma enable ---------------------------------------------------------------------------------------------------- -This case uses iperf and scp to test the payload of large packet (larger than 1MB) is valid after packets forwarding in vm2vm vhost-user/virtio-net -split and packed ring mergeable path when vhost uses the asynchronous operations with CBDMA channels,the back-end is in 1G-pages memory environment and the front-end is in 4k-pages memory environment. +This case uses iperf and scp to test the payload of large packet (larger than 1MB) is valid after +packets forwarding in vm2vm vhost-user/virtio-net split and packed ring mergeable path when vhost +uses the asynchronous operations with CBDMA channels, the back-end is in 1G-pages memory environment +and the front-end is in 4k-pages memory environment. -1. Bind 16 CBDMA channel to vfio-pci, launch vhost:: +1. 
Bind 8 CBDMA ports to vfio-pci, launch vhost:: - ./usertools/dpdk-devbind.py --bind=vfio-pci 0000:80:04.0 0000:80:04.1 0000:80:04.2 0000:80:04.3 0000:80:04.4 0000:80:04.5 0000:80:04.6 0000:80:04.7 \ - 0000:00:04.0 0000:00:04.1 0000:00:04.2 0000:00:04.3 0000:00:04.4 0000:00:04.5 0000:00:04.6 0000:00:04.7 - - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 29-33 -n 4 -m 1024 --file-prefix=vhost \ - -a 0000:80:04.0 -a 0000:80:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \ - -a 0000:80:04.0 -a 0000:80:04.1 -a 0000:80:04.2 -a 0000:80:04.3 -a 0000:80:04.4 -a 0000:80:04.5 -a 0000:80:04.6 -a 0000:80:04.7 \ - --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \ - --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \ - --iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8 \ - --lcore-dma=[lcore30@0000:80:04.0,lcore30@0000:80:04.1,lcore30@0000:00:04.2,lcore30@0000:00:04.3,lcore31@0000:00:04.4,lcore31@0000:00:04.5,lcore31@0000:00:04.6,lcore31@0000:00:04.7,lcore32@0000:80:04.0,lcore32@0000:80:04.1,lcore32@0000:80:04.2,lcore32@0000:80:04.3,lcore33@0000:80:04.4,lcore33@0000:80:04.5,lcore33@0000:80:04.6,lcore33@0000:80:04.7] - testpmd>start + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 29-33 -n 4 -m 1024 --file-prefix=vhost \ + -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \ + --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.0;txq2@0000:00:04.0;txq3@0000:00:04.0;txq4@0000:00:04.1;txq5@0000:00:04.1;rxq2@0000:00:04.2;rxq3@0000:00:04.2;rxq4@0000:00:04.3;rxq5@0000:00:04.3;rxq6@0000:00:04.3;rxq7@0000:00:04.3]' \ + --vdev 
'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0@0000:00:04.4;txq1@0000:00:04.4;txq2@0000:00:04.4;txq3@0000:00:04.4;txq4@0000:00:04.5;txq5@0000:00:04.5;rxq2@0000:00:04.6;rxq3@0000:00:04.6;rxq4@0000:00:04.6;rxq5@0000:00:04.6;rxq6@0000:00:04.7;rxq7@0000:00:04.7]' \ + --iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8 + testpmd>start 2. Launch VM qemu:: - taskset -c 20,21,22,23,24,25,26,27 /home/QEMU/qemu-6.2.0/bin/qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 8 -m 4096 \ - -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_nohuge0,share=on \ - -numa node,memdev=mem -mem-prealloc -drive file=/home/image/ubuntu2004.img \ - -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \ - -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \ - -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ - -netdev user,id=nttsip1,hostfwd=tcp:10.239.251.220:6000-:22 \ - -chardev socket,id=char0,path=./vhost-net0,server \ - -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \ - -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :10 - - taskset -c 48,49,50,51,52,53,54,55 /home/QEMU/qemu-6.2.0/bin/qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 8 -m 4096 \ - -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_nohuge1,share=on \ - -numa node,memdev=mem -mem-prealloc -drive file=/home/image/ubuntu2004_2.img \ - -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \ - -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \ - -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ - -netdev user,id=nttsip1,hostfwd=tcp:10.239.251.220:6001-:22 \ - -chardev 
socket,id=char0,path=./vhost-net1,server \ - -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \ - -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on -vnc :12 + taskset -c 20,21,22,23,24,25,26,27 /home/QEMU/qemu-6.2.0/bin/qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 8 -m 4096 \ + -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_nohuge0,share=on \ + -numa node,memdev=mem -mem-prealloc -drive file=/home/image/ubuntu2004.img \ + -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \ + -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \ + -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ + -netdev user,id=nttsip1,hostfwd=tcp:10.239.251.220:6000-:22 \ + -chardev socket,id=char0,path=./vhost-net0,server \ + -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \ + -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :10 + + taskset -c 48,49,50,51,52,53,54,55 /home/QEMU/qemu-6.2.0/bin/qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 8 -m 4096 \ + -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_nohuge1,share=on \ + -numa node,memdev=mem -mem-prealloc -drive file=/home/image/ubuntu2004_2.img \ + -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \ + -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \ + -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ + -netdev user,id=nttsip1,hostfwd=tcp:10.239.251.220:6001-:22 \ + -chardev socket,id=char0,path=./vhost-net1,server \ + -netdev 
type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \ + -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on -vnc :12 3. On VM1, set virtio device IP and run arp protocal:: - ethtool -L ens5 combined 8 - ifconfig ens5 1.1.1.2 - arp -s 1.1.1.8 52:54:00:00:00:02 + ethtool -L ens5 combined 8 + ifconfig ens5 1.1.1.2 + arp -s 1.1.1.8 52:54:00:00:00:02 4. On VM2, set virtio device IP and run arp protocal:: - ethtool -L ens5 combined 8 - ifconfig ens5 1.1.1.8 - arp -s 1.1.1.2 52:54:00:00:00:01 + ethtool -L ens5 combined 8 + ifconfig ens5 1.1.1.8 + arp -s 1.1.1.2 52:54:00:00:00:01 5. Scp 1MB file form VM1 to VM2:: - Under VM1, run: `scp [xxx] root@1.1.1.8:/` [xxx] is the file name + Under VM1, run: `scp [xxx] root@1.1.1.8:/` [xxx] is the file name 6. Check the iperf performance between two VMs by below commands:: - Under VM1, run: `iperf -s -i 1` - Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60` + Under VM1, run: `iperf -s -i 1` + Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60` 7. Relaunch VM1, and rerun step 3. 
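For reviewers unfamiliar with the new syntax introduced by this series: each `dmas` entry now pins one virtqueue to a DMA device as `txqN@<PCI BDF>` or `rxqN@<PCI BDF>`, replacing the old bare `txqN;rxqN` list plus the separate `--lcore-dma` mapping. A minimal sketch of how such a bracket list can be assembled — the `build_dmas` helper is hypothetical (not part of DTS or DPDK), and the BDFs are the example CBDMA addresses used in the commands above:

```python
def build_dmas(tx_map, rx_map):
    # Join per-queue assignments into the bracket list the vhost vdev expects,
    # e.g. [txq0@0000:00:04.0;...;rxq7@0000:00:04.2]
    parts = ["txq%d@%s" % (q, bdf) for q, bdf in tx_map]
    parts += ["rxq%d@%s" % (q, bdf) for q, bdf in rx_map]
    return "[" + ";".join(parts) + "]"

# txq0-3 on 0000:00:04.0, txq4-7 on 0000:00:04.1, all rxq on 0000:00:04.2
tx = [(q, "0000:00:04.0") for q in range(4)] + [(q, "0000:00:04.1") for q in range(4, 8)]
rx = [(q, "0000:00:04.2") for q in range(8)]
print(build_dmas(tx, rx))
```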
From patchwork Thu Dec 22 02:29:46 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Ling, WeiX" X-Patchwork-Id: 121248 Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 5F559A034C; Thu, 22 Dec 2022 03:38:26 +0100 (CET) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 599BE427EB; Thu, 22 Dec 2022 03:38:26 +0100 (CET) Received: from mga14.intel.com (mga14.intel.com [192.55.52.115]) by mails.dpdk.org (Postfix) with ESMTP id 46269400D7 for ; Thu, 22 Dec 2022 03:38:25 +0100 (CET) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1671676705; x=1703212705; h=from:to:cc:subject:date:message-id:mime-version: content-transfer-encoding; bh=XlPbxJiejGVr1jiQhJr60Riib94oVZHiHANcMHfd8Bw=; b=jDg22nThT9DpM79brNYSgK3XoJ+TMFwy6s/dzsZP9CZ5RJSUZsmrfkcy ufYhbBmhod0P1wNOiPg2bTXxsGOvVrbgsZJi6emuid03d6MGFKBwlgiyk BDOOrUny109yg8oggO7WxpgviKr0u7bXb8/ykSCqECxozyGl36J0U3dXI 7TIKlomQQZL6DC1vZvrk9Zuj5Rfgvm38sY+gzpmb10prfO3dId0UhS3aD dC3CAURcl0XvGON9909rX50q9kK8GkWE788XxOpSYgZqFGZ+mjQ0laQ7U f8tI5KNKMWQibc+soleqW7SRMzzdeFfy0Hkz5oQs3vBp3ikt10sTCCFA/ g==; X-IronPort-AV: E=McAfee;i="6500,9779,10568"; a="320081086" X-IronPort-AV: E=Sophos;i="5.96,264,1665471600"; d="scan'208";a="320081086" Received: from orsmga005.jf.intel.com ([10.7.209.41]) by fmsmga103.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 21 Dec 2022 18:38:24 -0800 X-IronPort-AV: E=McAfee;i="6500,9779,10568"; a="825819887" X-IronPort-AV: E=Sophos;i="5.96,264,1665471600"; d="scan'208";a="825819887" Received: from unknown (HELO localhost.localdomain) ([10.239.252.222]) by orsmga005-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 21 Dec 2022 18:38:22 -0800 From: Wei Ling To: dts@dpdk.org Cc: Wei Ling 
Subject: [dts][PATCH V4 2/2] tests/basic_4k_pages_cbdma: modify dmas parameter by DPDK changed Date: Thu, 22 Dec 2022 10:29:46 +0800 Message-Id: <20221222022946.174292-1-weix.ling@intel.com> X-Mailer: git-send-email 2.25.1 MIME-Version: 1.0 X-BeenThere: dts@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: test suite reviews and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dts-bounces@dpdk.org The dmas parameter have been changed by the local patch, so modify the dmas parameter in the testsuite. Signed-off-by: Wei Ling Acked-by: Lijuan Tu --- tests/TestSuite_basic_4k_pages_cbdma.py | 685 ++++++++++-------------- 1 file changed, 285 insertions(+), 400 deletions(-) diff --git a/tests/TestSuite_basic_4k_pages_cbdma.py b/tests/TestSuite_basic_4k_pages_cbdma.py index 2c316a4f..02969937 100644 --- a/tests/TestSuite_basic_4k_pages_cbdma.py +++ b/tests/TestSuite_basic_4k_pages_cbdma.py @@ -2,11 +2,6 @@ # Copyright(c) 2022 Intel Corporation # -""" -DPDK Test suite. -vhost/virtio-user pvp with 4K pages. 
-""" - import os import random import re @@ -111,8 +106,7 @@ class TestBasic4kPagesCbdma(TestCase): """ self.dut.send_expect("killall -s INT %s" % self.testpmd_name, "# ") self.dut.send_expect("killall -s INT qemu-system-x86_64", "#") - self.dut.send_expect("rm -rf /tmp/vhost-net*", "# ") - self.umount_tmpfs_for_4k() + self.dut.send_expect("rm -rf /root/dpdk/vhost-net*", "# ") # Prepare the result table self.table_header = ["Frame"] self.table_header.append("Mode") @@ -124,17 +118,9 @@ class TestBasic4kPagesCbdma(TestCase): self.vm = [] self.packed = False - def start_vm(self, packed=False, queues=1, server=False): - if packed: - packed_param = ",packed=on" - else: - packed_param = "" - - if server: - server = ",server" - else: - server = "" - + def start_vm0(self, packed=False, queues=1, server=False): + packed_param = ",packed=on" if packed else "" + server = ",server" if server else "" self.qemu_cmd0 = ( f"taskset -c {self.vm0_lcore} {self.vm0_qemu_path} -name vm0 -enable-kvm " f"-pidfile /tmp/.vm0.pid -daemonize -monitor unix:/tmp/vm0_monitor.sock,server,nowait " @@ -149,6 +135,20 @@ class TestBasic4kPagesCbdma(TestCase): f"-device virtserialport,chardev=vm0_qga0,name=org.qemu.guest_agent.0 -vnc :{self.vm0_vnc} " ) + self.vm0_session = self.dut.new_session(suite="vm0_session") + cmd0 = self.qemu_cmd0 % ( + self.dut.get_ip_address(), + self.virtio_mac1, + ) + self.vm0_session.send_expect(cmd0, "# ") + time.sleep(10) + self.vm0_dut = self.connect_vm0() + self.verify(self.vm0_dut is not None, "vm start fail") + self.vm_session = self.vm0_dut.new_session(suite="vm_session") + + def start_vm1(self, packed=False, queues=1, server=False): + packed_param = ",packed=on" if packed else "" + server = ",server" if server else "" self.qemu_cmd1 = ( f"taskset -c {self.vm1_lcore} {self.vm1_qemu_path} -name vm1 -enable-kvm " f"-pidfile /tmp/.vm1.pid -daemonize -monitor unix:/tmp/vm1_monitor.sock,server,nowait " @@ -163,17 +163,6 @@ class TestBasic4kPagesCbdma(TestCase): 
f"-device virtserialport,chardev=vm1_qga0,name=org.qemu.guest_agent.0 -vnc :{self.vm1_vnc} " ) - self.vm0_session = self.dut.new_session(suite="vm0_session") - cmd0 = self.qemu_cmd0 % ( - self.dut.get_ip_address(), - self.virtio_mac1, - ) - self.vm0_session.send_expect(cmd0, "# ") - time.sleep(10) - self.vm0_dut = self.connect_vm0() - self.verify(self.vm0_dut is not None, "vm start fail") - self.vm_session = self.vm0_dut.new_session(suite="vm_session") - self.vm1_session = self.dut.new_session(suite="vm1_session") cmd1 = self.qemu_cmd1 % ( self.dut.get_ip_address(), @@ -379,7 +368,7 @@ class TestBasic4kPagesCbdma(TestCase): def check_ping_between_vms(self): ping_out = self.vm0_dut.send_expect( - "ping {} -c 4".format(self.virtio_ip2), "#", 20 + "ping {} -c 4".format(self.virtio_ip2), "#", 60 ) self.logger.info(ping_out) @@ -459,10 +448,10 @@ class TestBasic4kPagesCbdma(TestCase): rx_info = re.search("rx_q0_size_1519_max_packets:\s*(\d*)", out_rx) self.verify( - int(rx_info.group(1)) > 0, "Port 1 not receive packet greater than 1522" + int(rx_info.group(1)) > 0, "Port 1 not receive packet greater than 1518" ) self.verify( - int(tx_info.group(1)) > 0, "Port 0 not forward packet greater than 1522" + int(tx_info.group(1)) > 0, "Port 0 not forward packet greater than 1518" ) def mount_tmpfs_for_4k(self, number=1): @@ -497,12 +486,12 @@ class TestBasic4kPagesCbdma(TestCase): Test Case 1: Basic test vhost-user/virtio-user split ring vhost async operation using 4K-pages and cbdma enable """ self.get_cbdma_ports_info_and_bind_to_dpdk(cbdma_num=1) - lcore_dma = "lcore%s@%s," % (self.vhost_core_list[1], self.cbdma_list[0]) - vhost_eal_param = "--no-huge -m 1024 --vdev 'net_vhost0,iface=./vhost-net,queues=1,dmas=[txq0;rxq0]'" - vhost_param = " --no-numa --socket-num=%s --lcore-dma=[%s]" % ( - self.ports_socket, - lcore_dma, + dmas = "txq0@%s;rxq0@%s" % (self.cbdma_list[0], self.cbdma_list[0]) + vhost_eal_param = ( + "--no-huge -m 1024 --vdev 
'net_vhost0,iface=./vhost-net,queues=1,dmas=[%s]'" + % dmas ) + vhost_param = "--no-numa --socket-num=%s " % self.ports_socket ports = [self.dut.ports_info[0]["pci"]] for i in self.cbdma_list: ports.append(i) @@ -529,12 +518,12 @@ class TestBasic4kPagesCbdma(TestCase): Test Case 2: Basic test vhost-user/virtio-user packed ring vhost async operation using 4K-pages and cbdma enable """ self.get_cbdma_ports_info_and_bind_to_dpdk(cbdma_num=1) - lcore_dma = "lcore%s@%s," % (self.vhost_core_list[1], self.cbdma_list[0]) - vhost_eal_param = "--no-huge -m 1024 --vdev 'net_vhost0,iface=./vhost-net,queues=1,dmas=[txq0;rxq0]'" - vhost_param = " --no-numa --socket-num=%s --lcore-dma=[%s]" % ( - self.ports_socket, - lcore_dma, + dmas = "txq0@%s;rxq0@%s" % (self.cbdma_list[0], self.cbdma_list[0]) + vhost_eal_param = ( + "--no-huge -m 1024 --vdev 'net_vhost0,iface=./vhost-net,queues=1,dmas=[%s]'" + % dmas ) + vhost_param = "--no-numa --socket-num=%s " % self.ports_socket ports = [self.dut.ports_info[0]["pci"]] for i in self.cbdma_list: ports.append(i) @@ -561,18 +550,16 @@ class TestBasic4kPagesCbdma(TestCase): Test Case 3: VM2VM vhost-user/virtio-net split ring vhost async operation test with tcp traffic using 4K-pages and cbdma enable """ self.get_cbdma_ports_info_and_bind_to_dpdk(cbdma_num=2) - lcore_dma = "lcore%s@%s," "lcore%s@%s" % ( - self.vhost_core_list[1], - self.cbdma_list[0], - self.vhost_core_list[2], - self.cbdma_list[1], - ) + dmas1 = "txq0@%s;rxq0@%s" % (self.cbdma_list[0], self.cbdma_list[0]) + dmas2 = "txq0@%s;rxq0@%s" % (self.cbdma_list[1], self.cbdma_list[1]) vhost_eal_param = ( "--no-huge -m 1024 " - + "--vdev 'net_vhost0,iface=./vhost-net0,queues=1,tso=1,dmas=[txq0;rxq0]'" - + " --vdev 'net_vhost1,iface=./vhost-net1,queues=1,tso=1,dmas=[txq0;rxq0]'" + + "--vdev 'net_vhost0,iface=./vhost-net0,queues=1,tso=1,dmas=[%s],dma-ring-size=2048'" + % dmas1 + + " --vdev 'net_vhost1,iface=./vhost-net1,queues=1,tso=1,dmas=[%s],dma-ring-size=2048'" + % dmas2 ) - 
vhost_param = " --nb-cores=2 --txd=1024 --rxd=1024 --lcore-dma=[%s]" % lcore_dma + vhost_param = "--nb-cores=2 --txd=1024 --rxd=1024" self.start_vhost_user_testpmd( cores=self.vhost_core_list, eal_param=vhost_eal_param, @@ -581,7 +568,8 @@ class TestBasic4kPagesCbdma(TestCase): ) self.vhost_user_pmd.execute_cmd("start") - self.start_vm(packed=False, queues=1, server=False) + self.start_vm0(packed=False, queues=1, server=False) + self.start_vm1(packed=False, queues=1, server=False) self.config_vm_ip() self.check_ping_between_vms() self.start_iperf() @@ -599,18 +587,16 @@ class TestBasic4kPagesCbdma(TestCase): Test Case 4: VM2VM vhost-user/virtio-net packed ring vhost async operation test with tcp traffic using 4K-pages and cbdma enable """ self.get_cbdma_ports_info_and_bind_to_dpdk(cbdma_num=2) - lcore_dma = "lcore%s@%s," "lcore%s@%s" % ( - self.vhost_core_list[1], - self.cbdma_list[0], - self.vhost_core_list[2], - self.cbdma_list[1], - ) + dmas1 = "txq0@%s;rxq0@%s" % (self.cbdma_list[0], self.cbdma_list[0]) + dmas2 = "txq0@%s;rxq0@%s" % (self.cbdma_list[1], self.cbdma_list[1]) vhost_eal_param = ( "--no-huge -m 1024 " - + "--vdev 'net_vhost0,iface=./vhost-net0,queues=1,tso=1,dmas=[txq0;rxq0]'" - + " --vdev 'net_vhost1,iface=./vhost-net1,queues=1,tso=1,dmas=[txq0;rxq0]'" + + "--vdev 'net_vhost0,iface=./vhost-net0,queues=1,tso=1,dmas=[%s],dma-ring-size=2048'" + % dmas1 + + " --vdev 'net_vhost1,iface=./vhost-net1,queues=1,tso=1,dmas=[%s],dma-ring-size=2048'" + % dmas2 ) - vhost_param = " --nb-cores=2 --txd=1024 --rxd=1024 --lcore-dma=[%s]" % lcore_dma + vhost_param = "--nb-cores=2 --txd=1024 --rxd=1024" self.start_vhost_user_testpmd( cores=self.vhost_core_list, eal_param=vhost_eal_param, @@ -619,7 +605,8 @@ class TestBasic4kPagesCbdma(TestCase): ) self.vhost_user_pmd.execute_cmd("start") - self.start_vm(packed=True, queues=1, server=False) + self.start_vm0(packed=True, queues=1, server=False) + self.start_vm1(packed=True, queues=1, server=False) self.config_vm_ip() 
self.check_ping_between_vms() self.start_iperf() @@ -634,68 +621,55 @@ class TestBasic4kPagesCbdma(TestCase): """ Test Case 5: vm2vm vhost/virtio-net split ring multi queues using 4K-pages and cbdma enable """ - self.get_cbdma_ports_info_and_bind_to_dpdk(cbdma_num=16, allow_diff_socket=True) - lcore_dma = ( - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s" + self.get_cbdma_ports_info_and_bind_to_dpdk(cbdma_num=4, allow_diff_socket=True) + dmas1 = ( + "txq0@%s;" + "txq1@%s;" + "txq2@%s;" + "txq3@%s;" + "txq4@%s;" + "txq5@%s;" + "txq6@%s;" + "txq7@%s" % ( - self.vhost_core_list[1], self.cbdma_list[0], - self.vhost_core_list[1], + self.cbdma_list[0], + self.cbdma_list[0], + self.cbdma_list[0], + self.cbdma_list[1], + self.cbdma_list[1], self.cbdma_list[1], - self.vhost_core_list[1], + self.cbdma_list[1], + ) + ) + dmas2 = ( + "txq0@%s;" + "txq1@%s;" + "txq2@%s;" + "txq3@%s;" + "txq4@%s;" + "txq5@%s;" + "txq6@%s;" + "txq7@%s" + % ( + self.cbdma_list[2], + self.cbdma_list[2], + self.cbdma_list[2], self.cbdma_list[2], - self.vhost_core_list[1], self.cbdma_list[3], - self.vhost_core_list[1], - self.cbdma_list[4], - self.vhost_core_list[1], - self.cbdma_list[5], - self.vhost_core_list[2], - self.cbdma_list[6], - self.vhost_core_list[2], - self.cbdma_list[7], - self.vhost_core_list[3], - self.cbdma_list[8], - self.vhost_core_list[3], - self.cbdma_list[9], - self.vhost_core_list[3], - self.cbdma_list[10], - self.vhost_core_list[3], - self.cbdma_list[11], - self.vhost_core_list[3], - self.cbdma_list[12], - self.vhost_core_list[3], - self.cbdma_list[13], - self.vhost_core_list[3], - self.cbdma_list[14], - self.vhost_core_list[3], - self.cbdma_list[15], + self.cbdma_list[3], + self.cbdma_list[3], + self.cbdma_list[3], ) ) vhost_eal_param = ( "--no-huge -m 1024 " - + "--vdev 
'net_vhost0,iface=./vhost-net0,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7]'" - + " --vdev 'net_vhost1,iface=./vhost-net1,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7]'" - ) - vhost_param = ( - " --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8 --lcore-dma=[%s]" - % lcore_dma + + "--vdev 'net_vhost0,iface=./vhost-net0,client=1,queues=8,dmas=[%s]'" + % dmas1 + + " --vdev 'net_vhost1,iface=./vhost-net1,client=1,queues=8,dmas=[%s]'" + % dmas2 ) + vhost_param = "--nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8" self.start_vhost_user_testpmd( cores=self.vhost_core_list, eal_param=vhost_eal_param, @@ -704,7 +678,8 @@ class TestBasic4kPagesCbdma(TestCase): ) self.vhost_user_pmd.execute_cmd("start") - self.start_vm(packed=False, queues=8, server=True) + self.start_vm0(packed=False, queues=8, server=True) + self.start_vm1(packed=False, queues=8, server=True) self.config_vm_ip() self.config_vm_combined(combined=8) self.check_scp_file_valid_between_vms() @@ -712,79 +687,70 @@ class TestBasic4kPagesCbdma(TestCase): self.get_iperf_result() self.vhost_user_pmd.quit() - lcore_dma = ( - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s" + dmas1 = ( + "txq0@%s;" + "txq1@%s;" + "txq2@%s;" + "txq3@%s;" + "txq4@%s;" + "txq5@%s;" + "rxq2@%s;" + "rxq3@%s;" + "rxq4@%s;" + "rxq5@%s;" + "rxq6@%s;" + "rxq7@%s" % ( - self.vhost_core_list[1], self.cbdma_list[0], - self.vhost_core_list[1], + self.cbdma_list[0], + self.cbdma_list[0], + self.cbdma_list[0], + self.cbdma_list[0], + self.cbdma_list[0], + self.cbdma_list[1], + self.cbdma_list[1], + self.cbdma_list[1], + self.cbdma_list[1], self.cbdma_list[1], - self.vhost_core_list[1], + self.cbdma_list[1], + ) + ) + dmas2 = ( + "txq0@%s;" + 
"txq1@%s;" + "txq2@%s;" + "txq3@%s;" + "txq4@%s;" + "txq5@%s;" + "rxq2@%s;" + "rxq3@%s;" + "rxq4@%s;" + "rxq5@%s;" + "rxq6@%s;" + "rxq7@%s" + % ( self.cbdma_list[2], - self.vhost_core_list[1], - self.cbdma_list[3], - self.vhost_core_list[2], - self.cbdma_list[0], - self.vhost_core_list[2], self.cbdma_list[2], - self.vhost_core_list[2], - self.cbdma_list[4], - self.vhost_core_list[2], - self.cbdma_list[5], - self.vhost_core_list[2], - self.cbdma_list[6], - self.vhost_core_list[2], - self.cbdma_list[7], - self.vhost_core_list[3], - self.cbdma_list[1], - self.vhost_core_list[3], + self.cbdma_list[2], + self.cbdma_list[2], + self.cbdma_list[2], + self.cbdma_list[2], + self.cbdma_list[3], + self.cbdma_list[3], + self.cbdma_list[3], + self.cbdma_list[3], + self.cbdma_list[3], self.cbdma_list[3], - self.vhost_core_list[3], - self.cbdma_list[8], - self.vhost_core_list[3], - self.cbdma_list[9], - self.vhost_core_list[3], - self.cbdma_list[10], - self.vhost_core_list[3], - self.cbdma_list[11], - self.vhost_core_list[3], - self.cbdma_list[12], - self.vhost_core_list[3], - self.cbdma_list[13], - self.vhost_core_list[3], - self.cbdma_list[14], - self.vhost_core_list[4], - self.cbdma_list[15], ) ) vhost_eal_param = ( "--no-huge -m 1024 " - + "--vdev 'net_vhost0,iface=./vhost-net0,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7]'" - + " --vdev 'net_vhost1,iface=./vhost-net1,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7]'" - ) - vhost_param = ( - " --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8 --lcore-dma=[%s]" - % lcore_dma + + "--vdev 'net_vhost0,iface=./vhost-net0,client=1,queues=8,dmas=[%s],dma-ring-size=1024'" + % dmas1 + + " --vdev 'net_vhost1,iface=./vhost-net1,client=1,queues=8,dmas=[%s],dma-ring-size=1024'" + % dmas2 ) + vhost_param = "--nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8" self.start_vhost_user_testpmd( cores=self.vhost_core_list, eal_param=vhost_eal_param, @@ -803,7 +769,7 @@ class TestBasic4kPagesCbdma(TestCase): + 
"--vdev 'net_vhost0,iface=./vhost-net0,client=1,queues=4'" + " --vdev 'net_vhost1,iface=./vhost-net1,client=1,queues=4'" ) - vhost_param = " --nb-cores=4 --txd=1024 --rxd=1024 --rxq=4 --txq=4" + vhost_param = "--nb-cores=4 --txd=1024 --rxd=1024 --rxq=4 --txq=4" self.start_vhost_user_testpmd( cores=self.vhost_core_list, eal_param=vhost_eal_param, @@ -845,68 +811,35 @@ class TestBasic4kPagesCbdma(TestCase): """ Test Case 6: vm2vm vhost/virtio-net packed ring multi queues using 4K-pages and cbdma enable """ - self.get_cbdma_ports_info_and_bind_to_dpdk(cbdma_num=16, allow_diff_socket=True) - lcore_dma = ( - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s" + self.get_cbdma_ports_info_and_bind_to_dpdk(cbdma_num=2, allow_diff_socket=True) + dmas = ( + "txq0@%s;" + "txq1@%s;" + "txq2@%s;" + "txq3@%s;" + "txq4@%s;" + "txq5@%s;" + "txq6@%s;" + "txq7@%s" % ( - self.vhost_core_list[1], self.cbdma_list[0], - self.vhost_core_list[1], + self.cbdma_list[0], + self.cbdma_list[0], + self.cbdma_list[0], + self.cbdma_list[1], + self.cbdma_list[1], + self.cbdma_list[1], self.cbdma_list[1], - self.vhost_core_list[1], - self.cbdma_list[2], - self.vhost_core_list[1], - self.cbdma_list[3], - self.vhost_core_list[1], - self.cbdma_list[4], - self.vhost_core_list[1], - self.cbdma_list[5], - self.vhost_core_list[2], - self.cbdma_list[6], - self.vhost_core_list[2], - self.cbdma_list[7], - self.vhost_core_list[3], - self.cbdma_list[8], - self.vhost_core_list[3], - self.cbdma_list[9], - self.vhost_core_list[3], - self.cbdma_list[10], - self.vhost_core_list[3], - self.cbdma_list[11], - self.vhost_core_list[3], - self.cbdma_list[12], - self.vhost_core_list[3], - self.cbdma_list[13], - self.vhost_core_list[3], - self.cbdma_list[14], - self.vhost_core_list[3], - self.cbdma_list[15], ) ) 
vhost_eal_param = ( "--no-huge -m 1024 " - + "--vdev 'net_vhost0,iface=./vhost-net0,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7]'" - + " --vdev 'net_vhost1,iface=./vhost-net1,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7]'" - ) - vhost_param = ( - " --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8 --lcore-dma=[%s]" - % lcore_dma + + "--vdev 'net_vhost0,iface=./vhost-net0,client=1,queues=8,tso=1,dmas=[%s]'" + % dmas + + " --vdev 'net_vhost1,iface=./vhost-net1,client=1,queues=8,tso=1,dmas=[%s]'" + % dmas ) + vhost_param = "--nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8" self.start_vhost_user_testpmd( cores=self.vhost_core_list, eal_param=vhost_eal_param, @@ -915,7 +848,8 @@ class TestBasic4kPagesCbdma(TestCase): ) self.vhost_user_pmd.execute_cmd("start") - self.start_vm(packed=True, queues=8, server=True) + self.start_vm0(packed=True, queues=8, server=True) + self.start_vm1(packed=True, queues=8, server=True) self.config_vm_ip() self.config_vm_combined(combined=8) self.check_ping_between_vms() @@ -931,68 +865,43 @@ class TestBasic4kPagesCbdma(TestCase): """ Test Case 7: vm2vm vhost/virtio-net split ring multi queues using 1G/4k-pages and cbdma enable """ - self.get_cbdma_ports_info_and_bind_to_dpdk(cbdma_num=16, allow_diff_socket=True) - lcore_dma = ( - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s" + self.get_cbdma_ports_info_and_bind_to_dpdk(cbdma_num=4, allow_diff_socket=True) + dmas = ( + "txq0@%s;" + "txq1@%s;" + "txq2@%s;" + "txq3@%s;" + "txq4@%s;" + "txq5@%s;" + "rxq2@%s;" + "rxq3@%s;" + "rxq4@%s;" + "rxq5@%s;" + "rxq6@%s;" + "rxq7@%s" % ( - self.vhost_core_list[1], self.cbdma_list[0], - self.vhost_core_list[1], + self.cbdma_list[0], + self.cbdma_list[0], + self.cbdma_list[0], + self.cbdma_list[0], + 
self.cbdma_list[0], + self.cbdma_list[1], + self.cbdma_list[1], + self.cbdma_list[1], + self.cbdma_list[1], + self.cbdma_list[1], self.cbdma_list[1], - self.vhost_core_list[1], - self.cbdma_list[2], - self.vhost_core_list[1], - self.cbdma_list[3], - self.vhost_core_list[2], - self.cbdma_list[4], - self.vhost_core_list[2], - self.cbdma_list[5], - self.vhost_core_list[2], - self.cbdma_list[6], - self.vhost_core_list[2], - self.cbdma_list[7], - self.vhost_core_list[3], - self.cbdma_list[8], - self.vhost_core_list[3], - self.cbdma_list[9], - self.vhost_core_list[3], - self.cbdma_list[10], - self.vhost_core_list[3], - self.cbdma_list[11], - self.vhost_core_list[4], - self.cbdma_list[12], - self.vhost_core_list[4], - self.cbdma_list[13], - self.vhost_core_list[4], - self.cbdma_list[14], - self.vhost_core_list[4], - self.cbdma_list[15], ) ) vhost_eal_param = ( "-m 1024 " - + "--vdev 'net_vhost0,iface=./vhost-net0,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]'" - + " --vdev 'net_vhost1,iface=./vhost-net1,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]'" - ) - vhost_param = ( - " --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8 --lcore-dma=[%s]" - % lcore_dma + + "--vdev 'net_vhost0,iface=./vhost-net0,client=1,queues=8,tso=1,dmas=[%s],dma-ring-size=1024'" + % dmas + + " --vdev 'net_vhost1,iface=./vhost-net1,client=1,queues=8,tso=1,dmas=[%s],dma-ring-size=1024'" + % dmas ) + vhost_param = "--nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8" self.start_vhost_user_testpmd( cores=self.vhost_core_list, eal_param=vhost_eal_param, @@ -1001,7 +910,8 @@ class TestBasic4kPagesCbdma(TestCase): ) self.vhost_user_pmd.execute_cmd("start") - self.start_vm(packed=False, queues=8, server=True) + self.start_vm0(packed=False, queues=8, server=True) + self.start_vm1(packed=False, queues=8, server=True) self.config_vm_ip() self.config_vm_combined(combined=8) self.check_ping_between_vms() @@ -1010,79 +920,50 @@ class 
TestBasic4kPagesCbdma(TestCase): self.get_iperf_result() self.vhost_user_pmd.quit() - lcore_dma = ( - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s" + dmas = ( + "txq0@%s;" + "txq1@%s;" + "txq2@%s;" + "txq3@%s;" + "txq4@%s;" + "txq5@%s;" + "txq6@%s;" + "txq7@%s;" + "rxq0@%s;" + "rxq1@%s;" + "rxq2@%s;" + "rxq3@%s;" + "rxq4@%s;" + "rxq5@%s;" + "rxq6@%s;" + "rxq7@%s" % ( - self.vhost_core_list[1], self.cbdma_list[0], - self.vhost_core_list[1], + self.cbdma_list[0], + self.cbdma_list[0], + self.cbdma_list[0], + self.cbdma_list[1], + self.cbdma_list[1], + self.cbdma_list[1], self.cbdma_list[1], - self.vhost_core_list[1], self.cbdma_list[2], - self.vhost_core_list[1], - self.cbdma_list[3], - self.vhost_core_list[2], - self.cbdma_list[0], - self.vhost_core_list[2], self.cbdma_list[2], - self.vhost_core_list[2], - self.cbdma_list[4], - self.vhost_core_list[2], - self.cbdma_list[5], - self.vhost_core_list[2], - self.cbdma_list[6], - self.vhost_core_list[2], - self.cbdma_list[7], - self.vhost_core_list[3], - self.cbdma_list[1], - self.vhost_core_list[3], + self.cbdma_list[2], + self.cbdma_list[2], + self.cbdma_list[3], + self.cbdma_list[3], + self.cbdma_list[3], self.cbdma_list[3], - self.vhost_core_list[3], - self.cbdma_list[8], - self.vhost_core_list[3], - self.cbdma_list[9], - self.vhost_core_list[3], - self.cbdma_list[10], - self.vhost_core_list[3], - self.cbdma_list[11], - self.vhost_core_list[3], - self.cbdma_list[12], - self.vhost_core_list[3], - self.cbdma_list[13], - self.vhost_core_list[3], - self.cbdma_list[14], - self.vhost_core_list[4], - self.cbdma_list[15], ) ) vhost_eal_param = ( "--no-huge -m 1024 " - + "--vdev 
'net_vhost0,iface=./vhost-net0,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7;rxq0;rxq1;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]'" - + " --vdev 'net_vhost1,iface=./vhost-net1,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7;rxq0;rxq1;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]'" - ) - vhost_param = ( - " --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8 --lcore-dma=[%s]" - % lcore_dma + + "--vdev 'net_vhost0,iface=./vhost-net0,client=1,queues=8,tso=1,dmas=[%s]'" + % dmas + + " --vdev 'net_vhost1,iface=./vhost-net1,client=1,queues=8,tso=1,dmas=[%s]'" + % dmas ) + vhost_param = "--nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8" self.start_vhost_user_testpmd( cores=self.vhost_core_list, eal_param=vhost_eal_param, @@ -1099,74 +980,77 @@ class TestBasic4kPagesCbdma(TestCase): self.vm1.stop() self.vhost_user_pmd.quit() - def test_vm2vm_packed_ring_multi_queues_using_1G_and_4k_pages_and_cbdma_enable( + def test_vm2vm_split_packed_ring_multi_queues_using_1G_and_4k_pages_and_cbdma_enable( self, ): """ Test Case 8: vm2vm vhost/virtio-net split packed ring multi queues with 1G/4k-pages and cbdma enable """ - self.get_cbdma_ports_info_and_bind_to_dpdk(cbdma_num=16, allow_diff_socket=True) - lcore_dma = ( - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s" + self.get_cbdma_ports_info_and_bind_to_dpdk(cbdma_num=8, allow_diff_socket=True) + dmas1 = ( + "txq0@%s;" + "txq1@%s;" + "txq2@%s;" + "txq3@%s;" + "txq4@%s;" + "txq5@%s;" + "rxq2@%s;" + "rxq3@%s;" + "rxq4@%s;" + "rxq5@%s;" + "rxq6@%s;" + "rxq7@%s" % ( - self.vhost_core_list[1], self.cbdma_list[0], - self.vhost_core_list[1], + self.cbdma_list[0], + self.cbdma_list[0], + self.cbdma_list[0], + self.cbdma_list[1], self.cbdma_list[1], - self.vhost_core_list[1], self.cbdma_list[2], - self.vhost_core_list[1], + 
self.cbdma_list[2], + self.cbdma_list[3], self.cbdma_list[3], - self.vhost_core_list[2], + self.cbdma_list[3], + self.cbdma_list[3], + ) + ) + dmas2 = ( + "txq0@%s;" + "txq1@%s;" + "txq2@%s;" + "txq3@%s;" + "txq4@%s;" + "txq5@%s;" + "rxq2@%s;" + "rxq3@%s;" + "rxq4@%s;" + "rxq5@%s;" + "rxq6@%s;" + "rxq7@%s" + % ( + self.cbdma_list[4], + self.cbdma_list[4], + self.cbdma_list[4], self.cbdma_list[4], - self.vhost_core_list[2], self.cbdma_list[5], - self.vhost_core_list[2], + self.cbdma_list[5], + self.cbdma_list[6], self.cbdma_list[6], - self.vhost_core_list[2], self.cbdma_list[7], - self.vhost_core_list[3], - self.cbdma_list[8], - self.vhost_core_list[3], - self.cbdma_list[9], - self.vhost_core_list[3], - self.cbdma_list[10], - self.vhost_core_list[3], - self.cbdma_list[11], - self.vhost_core_list[4], - self.cbdma_list[12], - self.vhost_core_list[4], - self.cbdma_list[13], - self.vhost_core_list[4], - self.cbdma_list[14], - self.vhost_core_list[4], - self.cbdma_list[15], + self.cbdma_list[7], + self.cbdma_list[7], + self.cbdma_list[7], ) ) vhost_eal_param = ( "-m 1024 " - + "--vdev 'net_vhost0,iface=./vhost-net0,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]'" - + " --vdev 'net_vhost1,iface=./vhost-net1,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]'" - ) - vhost_param = ( - " --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8 --lcore-dma=[%s]" - % lcore_dma + + "--vdev 'net_vhost0,iface=./vhost-net0,client=1,queues=8,dmas=[%s]'" + % dmas1 + + " --vdev 'net_vhost1,iface=./vhost-net1,client=1,queues=8,dmas=[%s]'" + % dmas2 ) + vhost_param = "--nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8" self.start_vhost_user_testpmd( cores=self.vhost_core_list, eal_param=vhost_eal_param, @@ -1175,7 +1059,8 @@ class TestBasic4kPagesCbdma(TestCase): ) self.vhost_user_pmd.execute_cmd("start") - self.start_vm(packed=True, queues=8, server=True) + self.start_vm0(packed=False, queues=8, server=True) + 
self.start_vm1(packed=True, queues=8, server=True) self.config_vm_ip() self.config_vm_combined(combined=8) self.check_ping_between_vms()
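
Throughout the test cases above, the patch replaces the old `--lcore-dma=[lcoreX@dma,...]` testpmd option with per-virtqueue `dmas=[txq0@dma;rxq0@dma;...]` strings in the vhost vdev arguments, assembled by hand from `self.cbdma_list` with queues spread evenly across the bound DMA devices. The mapping pattern can be sketched with a small helper; note this `build_dmas` function is illustrative only (it is not part of the suite), and the PCI addresses in the example are placeholders:

```python
def build_dmas(queue_names, dma_devices):
    """Build a per-virtqueue DMA mapping string in the new vdev syntax,
    e.g. 'txq0@0000:00:04.0;rxq0@0000:00:04.0'.

    Queues are assigned to DMA devices in contiguous blocks, mirroring the
    hand-written dmas strings in the test cases above (e.g. txq0-txq3 on the
    first device, txq4-txq7 on the second when two devices are available).
    """
    # Ceiling division: how many queues each DMA device serves.
    per_dev = -(-len(queue_names) // len(dma_devices))
    return ";".join(
        "%s@%s" % (q, dma_devices[i // per_dev])
        for i, q in enumerate(queue_names)
    )


# Hypothetical DMA device addresses, standing in for self.cbdma_list.
dmas = build_dmas(
    ["txq%d" % i for i in range(8)],
    ["0000:00:04.0", "0000:00:04.1"],
)
# dmas can then be spliced into the vdev string, as the test cases do:
# "--vdev 'net_vhost0,iface=./vhost-net0,client=1,queues=8,dmas=[%s]'" % dmas
```

The same helper reproduces the two-queue single-device mapping used in Test Cases 1 and 2 (`txq0` and `rxq0` both on the one bound CBDMA channel).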