From patchwork Fri Mar 24 07:42:57 2023
X-Patchwork-Submitter: "Ling, WeiX"
X-Patchwork-Id: 125504
From: Wei Ling
To: dts@dpdk.org
Cc: Wei Ling
Subject: [dts][PATCH V1 1/2] test_plans/vm2vm_virtio_net_perf: delete UFO related testcases
Date: Fri, 24 Mar 2023 15:42:57 +0800
Message-Id: <20230324074257.1435279-1-weix.ling@intel.com>
X-Mailer: git-send-email 2.25.1

The Linux kernel has removed the UFO (UDP fragmentation offload) feature,
so delete the UFO-related testcases and check points from the test plan.

Signed-off-by: Wei Ling
---
 .../vm2vm_virtio_net_perf_test_plan.rst | 146 ++----------------
 1 file changed, 14 insertions(+), 132 deletions(-)

diff --git a/test_plans/vm2vm_virtio_net_perf_test_plan.rst b/test_plans/vm2vm_virtio_net_perf_test_plan.rst
index 21661b4b..a8eb487f 100644
--- a/test_plans/vm2vm_virtio_net_perf_test_plan.rst
+++ b/test_plans/vm2vm_virtio_net_perf_test_plan.rst
@@ -9,8 +9,8 @@ Description
 ===========
 This test plan test several features in VM2VM topo:
-1. Check Vhost tx offload (TSO and UFO) function by verifying the TSO/cksum in the TCP/IP stack and UFO/cksum
-in the UDP/IP stack with vm2vm split ring and packed ring vhost-user/virtio-net mergeable path.
+1. Check Vhost tx offload (TSO) function by verifying the TSO/cksum in the TCP/IP stack
+with vm2vm split ring and packed ring vhost-user/virtio-net mergeable path.
 2.
Check the payload of large packet (larger than 1MB) is valid after forwarding packets with vm2vm split ring and packed ring vhost-user/virtio-net mergeable and non-mergeable path. Note: @@ -80,76 +80,19 @@ Test Case 1: VM2VM split ring vhost-user/virtio-net test with tcp traffic Port 0 should have tx packets above 1522 Port 1 should have rx packets above 1522 -Test Case 2: VM2VM split ring vhost-user/virtio-net test with udp traffic -========================================================================= - -1. Launch the Vhost sample by below commands:: - - rm -rf vhost-net* - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xF0000000 -n 4 --no-pci --file-prefix=vhost \ - --vdev 'net_vhost0,iface=vhost-net0,queues=1,tso=1' \ - --vdev 'net_vhost1,iface=vhost-net1,queues=1,tso=1' \ - -- -i --nb-cores=1 --txd=1024 --rxd=1024 - testpmd>start - -2. Launch VM1 and VM2:: - - qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 1 -m 4096 \ - -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \ - -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04.img \ - -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \ - -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \ - -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ - -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \ - -chardev socket,id=char0,path=./vhost-net0 \ - -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \ - -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :10 - - qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 1 -m 4096 \ - -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \ - -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04-2.img \ - -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \ - -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \ - -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ - -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \ - -chardev socket,id=char0,path=./vhost-net1 \ - -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \ - -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :12 - -3. On VM1, set virtio device IP and run arp protocol:: - - ifconfig ens3 1.1.1.2 - arp -s 1.1.1.8 52:54:00:00:00:02 - -4. On VM2, set virtio device IP and run arp protocol:: - - ifconfig ens3 1.1.1.8 - arp -s 1.1.1.2 52:54:00:00:00:01 - -5. Check the iperf performance between two VMs by below commands:: - - Under VM1, run: `iperf -s -u -i 1` - Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 30 -P 4 -u -b 1G -l 9000` - -6. Check 2VMs can receive and send big packets to each other:: - - testpmd>show port xstats all - Port 0 should have tx packets above 1522 - Port 1 should have rx packets above 1522 - -Test Case 3: Check split ring virtio-net device capability +Test Case 2: Check split ring virtio-net device capability ========================================================== 1. 
Launch the Vhost sample by below commands:: rm -rf vhost-net* - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xF0000000 -n 4 --no-pci --file-prefix=vhost \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-4 -n 4 --no-pci --file-prefix=vhost \ --vdev 'net_vhost0,iface=vhost-net0,queues=1,tso=1' \ --vdev 'net_vhost1,iface=vhost-net1,queues=1,tso=1' \ -- -i --nb-cores=2 --txd=1024 --rxd=1024 testpmd>start -2. Launch VM1 and VM2,set TSO and UFO on in qemu command:: +2. Launch VM1 and VM2,set TSO on in qemu command:: qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 1 -m 4096 \ -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \ @@ -160,7 +103,7 @@ Test Case 3: Check split ring virtio-net device capability -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \ -chardev socket,id=char0,path=./vhost-net0 \ -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \ - -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :10 + -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on -vnc :10 qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 1 -m 4096 \ -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \ @@ -171,23 +114,21 @@ Test Case 3: Check split ring virtio-net device capability -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \ -chardev socket,id=char0,path=./vhost-net1 \ -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \ - -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :12 + -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on -vnc :12 -3. Check UFO and TSO offload status on for the Virtio-net driver on VM1 and VM2:: +3. Check TSO offload status on for the Virtio-net driver on VM1 and VM2:: Under VM1, run: `run ethtool -k ens3` - udp-fragmentation-offload: on tx-tcp-segmentation: on tx-tcp-ecn-segmentation: on tx-tcp6-segmentation: on Under VM2, run: `run ethtool -k ens3` - udp-fragmentation-offload: on tx-tcp-segmentation: on tx-tcp-ecn-segmentation: on tx-tcp6-segmentation: on -Test Case 4: VM2VM packed ring vhost-user/virtio-net test with tcp traffic +Test Case 3: VM2VM packed ring vhost-user/virtio-net test with tcp traffic ========================================================================== 1. Launch the Vhost sample by below commands:: @@ -244,76 +185,19 @@ Test Case 4: VM2VM packed ring vhost-user/virtio-net test with tcp traffic Port 0 should have tx packets above 1522 Port 1 should have rx packets above 1522 -Test Case 5: VM2VM packed ring vhost-user/virtio-net test with udp traffic -========================================================================== - -1. Launch the Vhost sample by below commands:: - - rm -rf vhost-net* - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xF0000000 -n 4 --no-pci --file-prefix=vhost \ - --vdev 'net_vhost0,iface=vhost-net0,queues=1,tso=1' \ - --vdev 'net_vhost1,iface=vhost-net1,queues=1,tso=1' \ - -- -i --nb-cores=2 --txd=1024 --rxd=1024 - testpmd>start - -2. 
Launch VM1 and VM2 with qemu:: - - qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 40 -m 4096 \ - -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \ - -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu1910.img \ - -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \ - -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \ - -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ - -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \ - -chardev socket,id=char0,path=./vhost-net0 \ - -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \ - -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on -vnc :10 - - qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 1 -m 4096 \ - -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \ - -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04-2.img \ - -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \ - -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \ - -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ - -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \ - -chardev socket,id=char0,path=./vhost-net1 \ - -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \ - -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on -vnc :12 - -3. On VM1, set virtio device IP and run arp protocol:: - - ifconfig ens3 1.1.1.2 - arp -s 1.1.1.8 52:54:00:00:00:02 - -4. On VM2, set virtio device IP and run arp protocol:: - - ifconfig ens3 1.1.1.8 - arp -s 1.1.1.2 52:54:00:00:00:01 - -5. Check the iperf performance between two VMs by below commands:: - - Under VM1, run: `iperf -s -u -i 1` - Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 30 -P 4 -u -b 1G -l 9000` - -6. Check 2VMs can receive and send big packets to each other:: - - testpmd>show port xstats all - Port 0 should have tx packets above 1522 - Port 1 should have rx packets above 1522 - -Test Case 6: Check packed ring virtio-net device capability -============================================================ +Test Case 4: Check packed ring virtio-net device capability +=========================================================== 1. Launch the Vhost sample by below commands:: rm -rf vhost-net* - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xF0000000 -n 4 --no-pci --file-prefix=vhost \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-4 -n 4 --no-pci --file-prefix=vhost \ --vdev 'net_vhost0,iface=vhost-net0,queues=1,tso=1' \ --vdev 'net_vhost1,iface=vhost-net1,queues=1,tso=1' \ -- -i --nb-cores=2 --txd=1024 --rxd=1024 testpmd>start -2. Launch VM1 and VM2 with qemu,set TSO and UFO on in qemu command:: +2. 
Launch VM1 and VM2 with qemu,set TSO on in qemu command:: qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 1 -m 4096 \ -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \ @@ -335,18 +219,16 @@ Test Case 6: Check packed ring virtio-net device capability -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \ -chardev socket,id=char0,path=./vhost-net1 \ -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \ - -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,host_ufo=on,guest_ufo=on,guest_ecn=on,packed=on -vnc :12 + -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on -vnc :12 3. Check UFO and TSO offload status on for the Virtio-net driver on VM1 and VM2:: Under VM1, run: `run ethtool -k ens3` - udp-fragmentation-offload: on tx-tcp-segmentation: on tx-tcp-ecn-segmentation: on tx-tcp6-segmentation: on Under VM2, run: `run ethtool -k ens3` - udp-fragmentation-offload: on tx-tcp-segmentation: on tx-tcp-ecn-segmentation: on tx-tcp6-segmentation: on From patchwork Fri Mar 24 07:43:06 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Ling, WeiX" X-Patchwork-Id: 125503 Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id EFC0D4282B; Fri, 24 Mar 2023 08:46:33 +0100 (CET) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id E89D1406B8; Fri, 24 Mar 2023 08:46:33 +0100 (CET) Received: from mga04.intel.com (mga04.intel.com [192.55.52.120]) by mails.dpdk.org (Postfix) with ESMTP id 4C5EC4068E for ; Fri, 24 Mar 2023 08:46:32 +0100 (CET) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1679643992; x=1711179992; h=from:to:cc:subject:date:message-id:mime-version: content-transfer-encoding; bh=lCukpLKHq2DOumbCKy65DDLJPEmaA4t2VkgcELD8HZE=; b=fVfhhIaXE0yHaWkLGg8zKy1nJOreDvSrD4zZIdRr1kspXVUepiRbZmhK +j7W7bP4JWY0G2pDfkmdoj1nTV5VSG2FewN4CMn8bTUEsrYedtvLJ2rOy wV2ZD4da8+MERztupruCifrPqGMobKKFdhN63CZXCtWN8TqLkX/UdWBME r8chl7/oIuc0vg3aDx24OkLJ7UnuPFTVg8Pitgh6mIcE5Ip6fMHRO7V+w wxQeKwn+/GjAp/x1GlGODYPaU8Qz/wMch5QvpPQowD92YtJGZUCIWcE0r UnvF4/pmi0qGgiaCYBt36/W6K6XObvwdW6qsfj0OAczfhQgtHAvzSXjYc Q==; X-IronPort-AV: E=McAfee;i="6600,9927,10658"; a="338441689" X-IronPort-AV: E=Sophos;i="5.98,287,1673942400"; d="scan'208";a="338441689" Received: from fmsmga005.fm.intel.com ([10.253.24.32]) by fmsmga104.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 24 Mar 2023 00:45:41 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10658"; a="1012157325" X-IronPort-AV: E=Sophos;i="5.98,287,1673942400"; d="scan'208";a="1012157325" Received: from unknown (HELO localhost.localdomain) ([10.239.252.222]) by fmsmga005-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 24 Mar 2023 00:45:31 -0700 From: Wei Ling To: dts@dpdk.org Cc: Wei Ling Subject: [dts][PATCH V1 2/2] tests/vm2vm_virtio_net_perf: delete UFO related testcases Date: Fri, 24 Mar 2023 15:43:06 +0800 Message-Id: <20230324074306.1435339-1-weix.ling@intel.com> X-Mailer: git-send-email 2.25.1 MIME-Version: 1.0 X-BeenThere: dts@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: test suite reviews and 
discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dts-bounces@dpdk.org

1. The Linux kernel has removed the UFO (UDP fragmentation offload) feature,
   so delete the UFO-related testcases and check points from the testsuite.
2. Use the virtio_common API to reduce duplicate code.
3. Use the pmd_output start_testpmd API to start dpdk-testpmd instead of the
   send_expect API.

Signed-off-by: Wei Ling
---
 tests/TestSuite_vm2vm_virtio_net_perf.py | 398 ++++-------------------
 1 file changed, 65 insertions(+), 333 deletions(-)

diff --git a/tests/TestSuite_vm2vm_virtio_net_perf.py b/tests/TestSuite_vm2vm_virtio_net_perf.py
index aef00404..32d917aa 100644
--- a/tests/TestSuite_vm2vm_virtio_net_perf.py
+++ b/tests/TestSuite_vm2vm_virtio_net_perf.py
@@ -2,133 +2,76 @@
 # Copyright(c) 2019 Intel Corporation
 #
-"""
-DPDK Test suite.
-
-vm2vm split ring and packed ring with tx offload (TSO and UFO) with non-mergeable path.
-vm2vm split ring and packed ring with UFO about virtio-net device capability with non-mergeable path.
-vm2vm split ring and packed ring vhost-user/virtio-net check the payload of large packet is valid with
-mergeable and non-mergeable dequeue zero copy.
-please use qemu version greater 4.1.94 which support packed feathur to test this suite.
-"""
-import random
 import re
-import string
-import time

 import framework.utils as utils
 from framework.pmd_output import PmdOutput
 from framework.test_case import TestCase
 from framework.virt_common import VM

+from .virtio_common import basic_common as BC
+

 class TestVM2VMVirtioNetPerf(TestCase):
     def set_up_all(self):
         self.dut_ports = self.dut.get_ports()
         self.ports_socket = self.dut.get_numa_id(self.dut_ports[0])
-        core_config = "1S/5C/1T"
-        self.cores_list = self.dut.get_core_list(core_config, socket=self.ports_socket)
-        self.verify(
-            len(self.cores_list) >= 4,
-            "There has not enough cores to test this suite %s" % self.suite_name,
-        )
+        self.cores_list = self.dut.get_core_list("all", socket=self.ports_socket)
+        self.vhost_user_cores = self.cores_list[0:3]
         self.vm_num = 2
-        self.virtio_ip1 = "1.1.1.2"
-        self.virtio_ip2 = "1.1.1.3"
-        self.virtio_mac1 = "52:54:00:00:00:01"
-        self.virtio_mac2 = "52:54:00:00:00:02"
         self.base_dir = self.dut.base_dir.replace("~", "/root")
-        self.random_string = string.ascii_letters + string.digits
-        socket_num = len(set([int(core["socket"]) for core in self.dut.cores]))
-        self.socket_mem = ",".join(["2048"] * socket_num)
-        self.vhost = self.dut.new_session(suite="vhost")
-        self.pmd_vhost = PmdOutput(self.dut, self.vhost)
+        self.vhost_user = self.dut.new_session(suite="vhost-user")
+        self.vhost_user_pmd = PmdOutput(self.dut, self.vhost_user)
         self.app_testpmd_path = self.dut.apps_name["test-pmd"]
-        self.checked_vm = False
+        self.testpmd_name = self.app_testpmd_path.split("/")[-1]
+        self.BC = BC(self)

     def set_up(self):
         """
         run before each test case.
""" + self.dut.send_expect("killall -s INT %s" % self.testpmd_name, "#") + self.dut.send_expect("killall -I qemu-system-x86_64", "#") self.dut.send_expect("rm -rf %s/vhost-net*" % self.base_dir, "#") self.vm_dut = [] self.vm = [] - @property - def check_2m_env(self): - out = self.dut.send_expect( - "cat /proc/meminfo |grep Hugepagesize|awk '{print($2)}'", "# " - ) - return True if out == "2048" else False - - def start_vhost_testpmd( - self, - no_pci=True, - client_mode=False, - enable_queues=1, - nb_cores=2, - rxq_txq=None, - ): + def start_vhost_testpmd(self): """ - launch the testpmd with different parameters + start vhost-user testpmd """ - testcmd = self.app_testpmd_path + " " - if not client_mode: - vdev1 = "--vdev 'net_vhost0,iface=%s/vhost-net0,queues=%d,tso=1' " % ( - self.base_dir, - enable_queues, - ) - vdev2 = "--vdev 'net_vhost1,iface=%s/vhost-net1,queues=%d,tso=1' " % ( - self.base_dir, - enable_queues, - ) - else: - vdev1 = ( - "--vdev 'net_vhost0,iface=%s/vhost-net0,client=1,queues=%d,tso=1' " - % ( - self.base_dir, - enable_queues, - ) - ) - vdev2 = ( - "--vdev 'net_vhost1,iface=%s/vhost-net1,client=1,queues=%d,tso=1' " - % ( - self.base_dir, - enable_queues, - ) - ) - eal_params = self.dut.create_eal_parameters( - cores=self.cores_list, prefix="vhost", no_pci=no_pci + eal_param = ( + "--vdev 'net_vhost0,iface=vhost-net0,queues=1,tso=1' " + "--vdev 'net_vhost1,iface=vhost-net1,queues=1,tso=1'" ) - if rxq_txq is None: - params = " -- -i --nb-cores=%d --txd=1024 --rxd=1024" % nb_cores - else: - params = " -- -i --nb-cores=%d --txd=1024 --rxd=1024 --rxq=%d --txq=%d" % ( - nb_cores, - rxq_txq, - rxq_txq, - ) - self.command_line = testcmd + eal_params + vdev1 + vdev2 + params - self.pmd_vhost.execute_cmd(self.command_line, timeout=30) - self.pmd_vhost.execute_cmd("start", timeout=30) + param = "--nb-cores=2 --txd=1024 --rxd=1024" + self.vhost_user_pmd.start_testpmd( + cores=self.vhost_user_cores, + eal_param=eal_param, + param=param, + no_pci=True, + prefix="vhost-user", + fixed_prefix=True, + ) + self.vhost_user_pmd.execute_cmd("start") - def start_vms(self, server_mode=False, opt_queue=None, vm_config="vhost_sample"): + def start_vms(self, mrg_rxbuf=True, packed=False): """ start two VM, each VM has one virtio device """ + mrg_rxbuf_param = "on" if mrg_rxbuf else "off" + packed_param = ",packed=on" if packed else "" for i in range(self.vm_num): vm_dut = None - vm_info = VM(self.dut, "vm%d" % i, vm_config) + vm_info = VM(self.dut, "vm%d" % i, "vhost_sample") vm_params = {} vm_params["driver"] = "vhost-user" - if not server_mode: - vm_params["opt_path"] = self.base_dir + "/vhost-net%d" % i - else: - vm_params["opt_path"] = self.base_dir + "/vhost-net%d" % i + ",server" - if opt_queue is not None: - vm_params["opt_queue"] = opt_queue + vm_params["opt_path"] = self.base_dir + "/vhost-net%d" % i vm_params["opt_mac"] = "52:54:00:00:00:0%d" % (i + 1) - vm_params["opt_settings"] = self.vm_args + vm_params["opt_settings"] = ( + "disable-modern=false,mrg_rxbuf=%s,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on%s" + % (mrg_rxbuf_param, packed_param) + ) vm_info.set_vm_device(**vm_params) try: vm_dut = vm_info.start(set_target=False) @@ -140,122 +83,14 @@ class TestVM2VMVirtioNetPerf(TestCase): self.vm_dut.append(vm_dut) self.vm.append(vm_info) - def config_vm_env(self, combined=False, rxq_txq=1): - """ - set virtio device IP and run arp protocal - """ - vm1_intf = self.vm_dut[0].ports_info[0]["intf"] - vm2_intf = self.vm_dut[1].ports_info[0]["intf"] - if combined: - 
self.vm_dut[0].send_expect( - "ethtool -L %s combined %d" % (vm1_intf, rxq_txq), "#", 10 - ) - self.vm_dut[0].send_expect( - "ifconfig %s %s" % (vm1_intf, self.virtio_ip1), "#", 10 - ) - if combined: - self.vm_dut[1].send_expect( - "ethtool -L %s combined %d" % (vm2_intf, rxq_txq), "#", 10 - ) - self.vm_dut[1].send_expect( - "ifconfig %s %s" % (vm2_intf, self.virtio_ip2), "#", 10 - ) - self.vm_dut[0].send_expect( - "arp -s %s %s" % (self.virtio_ip2, self.virtio_mac2), "#", 10 - ) - self.vm_dut[1].send_expect( - "arp -s %s %s" % (self.virtio_ip1, self.virtio_mac1), "#", 10 - ) - - def prepare_test_env( - self, - no_pci=True, - client_mode=False, - enable_queues=1, - nb_cores=2, - server_mode=False, - opt_queue=None, - combined=False, - rxq_txq=None, - ): - """ - start vhost testpmd and qemu, and config the vm env - """ - self.start_vhost_testpmd( - no_pci=no_pci, - client_mode=client_mode, - enable_queues=enable_queues, - nb_cores=nb_cores, - rxq_txq=rxq_txq, - ) - self.start_vms(server_mode=server_mode, opt_queue=opt_queue) - self.config_vm_env(combined=combined, rxq_txq=rxq_txq) - - def start_iperf(self, iperf_mode="tso"): - """ - run perf command between to vms - """ - # clear the port xstats before iperf - self.vhost.send_expect("clear port xstats all", "testpmd> ", 10) - - # add -f g param, use Gbits/sec report teste result - if iperf_mode == "tso": - iperf_server = "iperf -s -i 1" - iperf_client = "iperf -c 1.1.1.2 -i 1 -t 60" - elif iperf_mode == "ufo": - iperf_server = "iperf -s -u -i 1" - iperf_client = "iperf -c 1.1.1.2 -i 1 -t 30 -P 4 -u -b 1G -l 9000" - self.vm_dut[0].send_expect("%s > iperf_server.log &" % iperf_server, "", 10) - self.vm_dut[1].send_expect("%s > iperf_client.log &" % iperf_client, "", 60) - time.sleep(90) - - def get_perf_result(self): - """ - get the iperf test result - """ - self.table_header = ["Mode", "[M|G]bits/sec"] - self.result_table_create(self.table_header) - self.vm_dut[0].send_expect("pkill iperf", "# ") - self.vm_dut[1].session.copy_file_from("%s/iperf_client.log" % self.dut.base_dir) - fp = open("./iperf_client.log") - fmsg = fp.read() - fp.close() - # remove the server report info from msg - index = fmsg.find("Server Report") - if index != -1: - fmsg = fmsg[:index] - iperfdata = re.compile("\S*\s*[M|G]bits/sec").findall(fmsg) - # the last data of iperf is the ave data from 0-30 sec - self.verify(len(iperfdata) != 0, "The iperf data between to vms is 0") - self.verify( - (iperfdata[-1]).split()[1] == "Gbits/sec", - "The iperf data is %s,Can't reach Gbits/sec" % iperfdata[-1], - ) - self.logger.info("The iperf data between vms is %s" % iperfdata[-1]) - - # put the result to table - results_row = ["vm2vm", iperfdata[-1]] - self.result_table_add(results_row) - - # print iperf resut - self.result_table_print() - # rm the iperf log file in vm - self.vm_dut[0].send_expect("rm iperf_server.log", "#", 10) - self.vm_dut[1].send_expect("rm iperf_client.log", "#", 10) - return float(iperfdata[-1].split()[0]) - def verify_xstats_info_on_vhost(self): """ check both 2VMs can receive and send big packets to each other """ - out_tx = self.vhost.send_expect("show port xstats 0", "testpmd> ", 20) - out_rx = self.vhost.send_expect("show port xstats 1", "testpmd> ", 20) - - # rx_info = re.search("rx_size_1523_to_max_packets:\s*(\d*)", out_rx) + out_tx = self.vhost_user_pmd.execute_cmd("show port xstats 0") + out_rx = self.vhost_user_pmd.execute_cmd("show port xstats 1") rx_info = re.search("rx_q0_size_1519_max_packets:\s*(\d*)", out_rx) - # tx_info = 
re.search("tx_size_1523_to_max_packets:\s*(\d*)", out_tx) tx_info = re.search("tx_q0_size_1519_max_packets:\s*(\d*)", out_tx) - self.verify( int(rx_info.group(1)) > 0, "Port 1 not receive packet greater than 1522" ) @@ -263,37 +98,16 @@ class TestVM2VMVirtioNetPerf(TestCase): int(tx_info.group(1)) > 0, "Port 0 not forward packet greater than 1522" ) - def start_iperf_and_verify_vhost_xstats_info(self, iperf_mode="tso"): - """ - start to send packets and verify vm can received data of iperf - and verify the vhost can received big pkts in testpmd - """ - self.start_iperf(iperf_mode) - iperfdata = self.get_perf_result() - self.verify_xstats_info_on_vhost() - return iperfdata - - def stop_all_apps(self): - for i in range(len(self.vm)): - self.vm[i].stop() - self.pmd_vhost.quit() - - def offload_capbility_check(self, vm_client): + def offload_capbility_check(self, vm_session): """ check UFO and TSO offload status on for the Virtio-net driver in VM """ - vm_intf = vm_client.ports_info[0]["intf"] - vm_client.send_expect("ethtool -k %s > offload.log" % vm_intf, "#", 10) - fmsg = vm_client.send_expect("cat ./offload.log", "#") - udp_info = re.search("udp-fragmentation-offload:\s*(\S*)", fmsg) + vm_intf = vm_session.ports_info[0]["intf"] + vm_session.send_expect("ethtool -k %s > offload.log" % vm_intf, "#", 10) + fmsg = vm_session.send_expect("cat ./offload.log", "#") tcp_info = re.search("tx-tcp-segmentation:\s*(\S*)", fmsg) tcp_enc_info = re.search("tx-tcp-ecn-segmentation:\s*(\S*)", fmsg) tcp6_info = re.search("tx-tcp6-segmentation:\s*(\S*)", fmsg) - - self.verify( - udp_info is not None and udp_info.group(1) == "on", - "the udp-fragmentation-offload in vm not right", - ) self.verify( tcp_info is not None and tcp_info.group(1) == "on", "tx-tcp-segmentation in vm not right", @@ -307,131 +121,51 @@ class TestVM2VMVirtioNetPerf(TestCase): "tx-tcp6-segmentation in vm not right", ) - def check_scp_file_valid_between_vms(self, file_size=1024): - """ - scp file form VM1 to VM2, check the data is valid - """ - # default file_size=1024K - data = "" - for char in range(file_size * 1024): - data += random.choice(self.random_string) - self.vm_dut[0].send_expect('echo "%s" > /tmp/payload' % data, "# ") - # scp this file to vm1 - out = self.vm_dut[1].send_command( - "scp root@%s:/tmp/payload /root" % self.virtio_ip1, timeout=5 - ) - if "Are you sure you want to continue connecting" in out: - self.vm_dut[1].send_command("yes", timeout=3) - self.vm_dut[1].send_command(self.vm[0].password, timeout=3) - # get the file info in vm1, and check it valid - md5_send = self.vm_dut[0].send_expect("md5sum /tmp/payload", "# ") - md5_revd = self.vm_dut[1].send_expect("md5sum /root/payload", "# ") - md5_send = md5_send[: md5_send.find(" ")] - md5_revd = md5_revd[: md5_revd.find(" ")] - self.verify( - md5_send == md5_revd, "the received file is different with send file" - ) - def test_vm2vm_split_ring_iperf_with_tso(self): """ Test Case 1: VM2VM split ring vhost-user/virtio-net test with tcp traffic """ - self.vm_args = "disable-modern=false,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on" - self.prepare_test_env( - no_pci=True, - client_mode=False, - enable_queues=1, - nb_cores=2, - server_mode=False, - opt_queue=1, - combined=False, - rxq_txq=None, - ) - self.start_iperf_and_verify_vhost_xstats_info(iperf_mode="tso") - - def test_vm2vm_split_ring_iperf_with_ufo(self): - """ - Test Case 2: VM2VM split ring vhost-user/virtio-net test with udp traffic - """ - self.vm_args = 
"disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on" - self.prepare_test_env( - no_pci=True, - client_mode=False, - enable_queues=1, - nb_cores=1, - server_mode=False, - opt_queue=1, - combined=False, - rxq_txq=None, - ) - self.start_iperf_and_verify_vhost_xstats_info(iperf_mode="ufo") + self.start_vhost_testpmd() + self.start_vms(mrg_rxbuf=False, packed=False) + self.BC.config_2_vms_ip() + self.BC.run_iperf_test_between_2_vms() + self.BC.check_iperf_result_between_2_vms() + self.verify_xstats_info_on_vhost() def test_vm2vm_split_ring_device_capbility(self): """ - Test Case 3: Check split ring virtio-net device capability + Test Case 2: Check split ring virtio-net device capability """ - self.vm_args = "disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on" - self.start_vhost_testpmd( - no_pci=True, - client_mode=False, - enable_queues=1, - nb_cores=2, - rxq_txq=None, - ) - self.start_vms() + self.start_vhost_testpmd() + self.start_vms(mrg_rxbuf=True, packed=False) self.offload_capbility_check(self.vm_dut[0]) self.offload_capbility_check(self.vm_dut[1]) def test_vm2vm_packed_ring_iperf_with_tso(self): """ - Test Case 4: VM2VM packed ring vhost-user/virtio-net test with tcp traffic - """ - self.vm_args = "disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on" - self.prepare_test_env( - no_pci=True, - client_mode=False, - enable_queues=1, - nb_cores=2, - server_mode=False, - opt_queue=1, - combined=False, - rxq_txq=None, - ) - self.start_iperf_and_verify_vhost_xstats_info(iperf_mode="tso") - - def test_vm2vm_packed_ring_iperf_with_ufo(self): - """ - Test Case 5: VM2VM packed ring vhost-user/virtio-net test with udp traffic + Test Case 3: VM2VM packed ring vhost-user/virtio-net test with tcp traffic """ - self.vm_args = "disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on" - self.prepare_test_env( - no_pci=True, - client_mode=False, - enable_queues=1, - nb_cores=2, - server_mode=False, - opt_queue=None, - combined=False, - rxq_txq=None, - ) - self.start_iperf_and_verify_vhost_xstats_info(iperf_mode="ufo") + self.start_vhost_testpmd() + self.start_vms(mrg_rxbuf=True, packed=True) + self.BC.config_2_vms_ip() + self.BC.run_iperf_test_between_2_vms() + self.BC.check_iperf_result_between_2_vms() + self.verify_xstats_info_on_vhost() def test_vm2vm_packed_ring_device_capbility(self): """ - Test Case 6: Check packed ring virtio-net device capability + Test Case 4: Check packed ring virtio-net device capability """ - self.vm_args = "disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on" - self.start_vhost_testpmd( - no_pci=True, - client_mode=False, - enable_queues=1, - nb_cores=2, - rxq_txq=None, - ) - self.start_vms() + self.start_vhost_testpmd() + self.start_vms(mrg_rxbuf=True, packed=True) self.offload_capbility_check(self.vm_dut[0]) self.offload_capbility_check(self.vm_dut[1]) + def stop_all_apps(self): + for i in range(len(self.vm)): + self.vm[i].stop() + self.vhost_user_pmd.quit() + def tear_down(self): """ run after each test case. @@ -443,6 +177,4 @@ class TestVM2VMVirtioNetPerf(TestCase): """ Run after each test suite. """ - self.bind_nic_driver(self.dut_ports, self.drivername) - if getattr(self, "vhost", None): - self.dut.close_session(self.vhost) + self.dut.close_session(self.vhost_user)