From patchwork Mon Dec 26 02:03:28 2022
X-Patchwork-Submitter: "Ling, WeiX"
X-Patchwork-Id: 121370
From: Wei Ling
To: dts@dpdk.org
Cc: Wei Ling
Subject: [dts][PATCH V2] optimize pvp_qemu_multi_paths_port_restart testplan and testsuite
Date: Mon, 26 Dec 2022 10:03:28 +0800
Message-Id: <20221226020328.2469844-1-weix.ling@intel.com>
X-Mailer: git-send-email 2.25.1

1. Add the `disable-modern=true` parameter in the virtio 0.95 testcases.
2. Add `-a 0000:af:00.0` when starting the vhost-user testpmd.
3. Add `-a 0000:04:00.0,vectorized=1` in the virtio 0.95 and virtio 1.0 vector_rx path cases.

Signed-off-by: Wei Ling
---
 ...emu_multi_paths_port_restart_test_plan.rst | 108 +++++++++---------
 ...Suite_pvp_qemu_multi_paths_port_restart.py |   6 +-
 2 files changed, 58 insertions(+), 56 deletions(-)

diff --git a/test_plans/pvp_qemu_multi_paths_port_restart_test_plan.rst b/test_plans/pvp_qemu_multi_paths_port_restart_test_plan.rst
index 017ea5f0..a621738d 100644
--- a/test_plans/pvp_qemu_multi_paths_port_restart_test_plan.rst
+++ b/test_plans/pvp_qemu_multi_paths_port_restart_test_plan.rst
@@ -19,27 +19,27 @@ TG --> NIC --> Vhost --> Virtio--> Vhost --> NIC --> TG
 Test Case 1: pvp test with virtio 0.95 mergeable path
 =====================================================

-1. Bind one port to vfio-pci, then launch testpmd by below command::
+1. Bind 1 NIC port to vfio-pci, then launch testpmd by below command::

     rm -rf vhost-net*
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xe -n 4 \
-    --vdev 'eth_vhost0,iface=vhost-net,queues=1' -- \
-    -i --nb-cores=1 --txd=1024 --rxd=1024
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xe -n 4 -a 0000:af:00.0 \
+    --vdev 'eth_vhost0,iface=vhost-net,queues=1' \
+    -- -i --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>set fwd mac
     testpmd>start

 2. Launch VM with mrg_rxbuf feature on::

-    qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 2 -m 4096 \
+    qemu-system-x86_64 -name vm0 -enable-kvm -cpu host -smp 2 -m 4096 \
     -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
     -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16.img \
-    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
-    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
-    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,macaddr=00:00:00:08:e8:aa,addr=1f \
-    -net user,hostfwd=tcp:127.0.0.1:6002-:22 \
+    -chardev socket,path=/tmp/vm0_qga0.sock,server,nowait,id=vm0_qga0 -device virtio-serial \
+    -device virtserialport,chardev=vm0_qga0,name=org.qemu.guest_agent.2 -daemonize \
+    -monitor unix:/tmp/vm0_monitor.sock,server,nowait -net nic,macaddr=00:00:00:08:e8:aa,addr=1f \
+    -net user,hostfwd=tcp:127.0.0.1:6000-:22 \
     -chardev socket,id=char0,path=./vhost-net \
     -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
-    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,mrg_rxbuf=on,rx_queue_size=1024,tx_queue_size=1024 \
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=true,mrg_rxbuf=on,rx_queue_size=1024,tx_queue_size=1024 \
     -vnc :10

 3. On VM, bind virtio net to vfio-pci and run testpmd::

@@ -66,26 +66,26 @@ Test Case 1: pvp test with virtio 0.95 mergeable path
 Test Case 2: pvp test with virtio 0.95 normal path
 ==================================================

-1. Bind one port to vfio-pci, then launch testpmd by below command::
+1. Bind 1 NIC port to vfio-pci, then launch testpmd by below command::

     rm -rf vhost-net*
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xe -n 4 \
-    --vdev 'eth_vhost0,iface=vhost-net,queues=1' -- \
-    -i --nb-cores=1 --txd=1024 --rxd=1024
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xe -n 4 -a 0000:af:00.0 \
+    --vdev 'eth_vhost0,iface=vhost-net,queues=1' \
+    -- -i --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>set fwd mac
     testpmd>start

 2. Launch VM with mrg_rxbuf feature off::

-    qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 2 -m 4096 \
+    qemu-system-x86_64 -name vm0 -enable-kvm -cpu host -smp 2 -m 4096 \
     -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
     -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16.img \
-    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
-    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
-    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,macaddr=00:00:00:08:e8:aa,addr=1f -net user,hostfwd=tcp:127.0.0.1:6002-:22 \
+    -chardev socket,path=/tmp/vm0_qga0.sock,server,nowait,id=vm0_qga0 -device virtio-serial \
+    -device virtserialport,chardev=vm0_qga0,name=org.qemu.guest_agent.2 -daemonize \
+    -monitor unix:/tmp/vm0_monitor.sock,server,nowait -net nic,macaddr=00:00:00:08:e8:aa,addr=1f -net user,hostfwd=tcp:127.0.0.1:6000-:22 \
     -chardev socket,id=char0,path=./vhost-net \
     -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
-    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,mrg_rxbuf=off,rx_queue_size=1024,tx_queue_size=1024 \
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=true,mrg_rxbuf=off,rx_queue_size=1024,tx_queue_size=1024 \
     -vnc :10

 3. On VM, bind virtio net to vfio-pci and run testpmd with tx-offloads::

@@ -112,31 +112,31 @@ Test Case 2: pvp test with virtio 0.95 normal path
 Test Case 3: pvp test with virtio 0.95 vrctor_rx path
 =====================================================

-1. Bind one port to vfio-pci, then launch testpmd by below command::
+1. Bind 1 NIC port to vfio-pci, then launch testpmd by below command::

     rm -rf vhost-net*
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xe -n 4 \
-    --vdev 'eth_vhost0,iface=vhost-net,queues=1' -- \
-    -i --nb-cores=1 --txd=1024 --rxd=1024
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xe -n 4 -a 0000:af:00.0 \
+    --vdev 'eth_vhost0,iface=vhost-net,queues=1' \
+    -- -i --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>set fwd mac
     testpmd>start

 2. Launch VM with mrg_rxbuf feature off::

-    qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 2 -m 4096 \
+    qemu-system-x86_64 -name vm0 -enable-kvm -cpu host -smp 2 -m 4096 \
     -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
     -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16.img \
-    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
-    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
-    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,macaddr=00:00:00:08:e8:aa,addr=1f -net user,hostfwd=tcp:127.0.0.1:6002-:22 \
+    -chardev socket,path=/tmp/vm0_qga0.sock,server,nowait,id=vm0_qga0 -device virtio-serial \
+    -device virtserialport,chardev=vm0_qga0,name=org.qemu.guest_agent.2 -daemonize \
+    -monitor unix:/tmp/vm0_monitor.sock,server,nowait -net nic,macaddr=00:00:00:08:e8:aa,addr=1f -net user,hostfwd=tcp:127.0.0.1:6000-:22 \
     -chardev socket,id=char0,path=./vhost-net \
     -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
-    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,mrg_rxbuf=off,rx_queue_size=1024,tx_queue_size=1024 \
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=true,mrg_rxbuf=off,rx_queue_size=1024,tx_queue_size=1024 \
     -vnc :10

 3. On VM, bind virtio net to vfio-pci and run testpmd without ant tx-offloads::

-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 3 -- -i \
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 3 -a 0000:04:00.0,vectorized=1 -- -i \
     --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>set fwd mac
     testpmd>start

@@ -158,23 +158,23 @@ Test Case 3: pvp test with virtio 0.95 vrctor_rx path
 Test Case 4: pvp test with virtio 1.0 mergeable path
 ====================================================

-1. Bind one port to vfio-pci, then launch testpmd by below command::
+1. Bind 1 NIC port to vfio-pci, then launch testpmd by below command::

     rm -rf vhost-net*
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xe -n 4 \
-    --vdev 'eth_vhost0,iface=vhost-net,queues=1' -- \
-    -i --nb-cores=1 --txd=1024 --rxd=1024
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xe -n 4 -a 0000:af:00.0 \
+    --vdev 'eth_vhost0,iface=vhost-net,queues=1' \
+    -- -i --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>set fwd mac
     testpmd>start

 2. Launch VM with 1 virtio, note: we need add "disable-modern=false" to enable virtio 1.0::

-    qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 2 -m 4096 \
+    qemu-system-x86_64 -name vm0 -enable-kvm -cpu host -smp 2 -m 4096 \
     -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
     -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16.img \
-    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
-    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
-    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,macaddr=00:00:00:08:e8:aa,addr=1f -net user,hostfwd=tcp:127.0.0.1:6002-:22 \
+    -chardev socket,path=/tmp/vm0_qga0.sock,server,nowait,id=vm0_qga0 -device virtio-serial \
+    -device virtserialport,chardev=vm0_qga0,name=org.qemu.guest_agent.2 -daemonize \
+    -monitor unix:/tmp/vm0_monitor.sock,server,nowait -net nic,macaddr=00:00:00:08:e8:aa,addr=1f -net user,hostfwd=tcp:127.0.0.1:6000-:22 \
     -chardev socket,id=char0,path=./vhost-net \
     -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
     -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,rx_queue_size=1024,tx_queue_size=1024 \
@@ -204,23 +204,23 @@ Test Case 4: pvp test with virtio 1.0 mergeable path
 Test Case 5: pvp test with virtio 1.0 normal path
 =================================================

-1. Bind one port to vfio-pci, then launch testpmd by below command::
+1. Bind 1 NIC port to vfio-pci, then launch testpmd by below command::

     rm -rf vhost-net*
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xe -n 4 \
-    --vdev 'eth_vhost0,iface=vhost-net,queues=1' -- \
-    -i --nb-cores=1 --txd=1024 --rxd=1024
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xe -n 4 -a 0000:af:00.0 \
+    --vdev 'eth_vhost0,iface=vhost-net,queues=1' \
+    -- -i --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>set fwd mac
     testpmd>start

 2. Launch VM with 1 virtio, note: we need add "disable-modern=false" to enable virtio 1.0::

-    qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 2 -m 4096 \
+    qemu-system-x86_64 -name vm0 -enable-kvm -cpu host -smp 2 -m 4096 \
     -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
     -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16.img \
-    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
-    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
-    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,macaddr=00:00:00:08:e8:aa,addr=1f -net user,hostfwd=tcp:127.0.0.1:6002-:22 \
+    -chardev socket,path=/tmp/vm0_qga0.sock,server,nowait,id=vm0_qga0 -device virtio-serial \
+    -device virtserialport,chardev=vm0_qga0,name=org.qemu.guest_agent.2 -daemonize \
+    -monitor unix:/tmp/vm0_monitor.sock,server,nowait -net nic,macaddr=00:00:00:08:e8:aa,addr=1f -net user,hostfwd=tcp:127.0.0.1:6000-:22 \
     -chardev socket,id=char0,path=./vhost-net \
     -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
     -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=off,rx_queue_size=1024,tx_queue_size=1024 \
@@ -250,23 +250,23 @@ Test Case 5: pvp test with virtio 1.0 normal path
 Test Case 6: pvp test with virtio 1.0 vrctor_rx path
 ====================================================

-1. Bind one port to vfio-pci, then launch testpmd by below command::
+1. Bind 1 NIC port to vfio-pci, then launch testpmd by below command::

     rm -rf vhost-net*
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xe -n 4 \
-    --vdev 'eth_vhost0,iface=vhost-net,queues=1' -- \
-    -i --nb-cores=1 --txd=1024 --rxd=1024
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xe -n 4 -a 0000:af:00.0 \
+    --vdev 'eth_vhost0,iface=vhost-net,queues=1' \
+    -- -i --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>set fwd mac
     testpmd>start

 2. Launch VM with 1 virtio, note: we need add "disable-modern=false" to enable virtio 1.0::

-    qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 2 -m 4096 \
+    qemu-system-x86_64 -name vm0 -enable-kvm -cpu host -smp 2 -m 4096 \
     -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
     -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16.img \
-    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
-    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
-    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,macaddr=00:00:00:08:e8:aa,addr=1f -net user,hostfwd=tcp:127.0.0.1:6002-:22 \
+    -chardev socket,path=/tmp/vm0_qga0.sock,server,nowait,id=vm0_qga0 -device virtio-serial \
+    -device virtserialport,chardev=vm0_qga0,name=org.qemu.guest_agent.2 -daemonize \
+    -monitor unix:/tmp/vm0_monitor.sock,server,nowait -net nic,macaddr=00:00:00:08:e8:aa,addr=1f -net user,hostfwd=tcp:127.0.0.1:6000-:22 \
     -chardev socket,id=char0,path=./vhost-net \
     -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
     -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=off,rx_queue_size=1024,tx_queue_size=1024 \
@@ -274,7 +274,7 @@ Test Case 6: pvp test with virtio 1.0 vrctor_rx path

 3. On VM, bind virtio net to vfio-pci and run testpmd without tx-offloads::

-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 3 -- -i \
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 3 -a 0000:04:00.0,vectorized=1 -- -i \
     --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>set fwd mac
     testpmd>start
diff --git a/tests/TestSuite_pvp_qemu_multi_paths_port_restart.py b/tests/TestSuite_pvp_qemu_multi_paths_port_restart.py
index 2b753eb1..f07b698f 100644
--- a/tests/TestSuite_pvp_qemu_multi_paths_port_restart.py
+++ b/tests/TestSuite_pvp_qemu_multi_paths_port_restart.py
@@ -101,8 +101,10 @@ class TestPVPQemuMultiPathPortRestart(TestCase):
             )
         elif path == "vector_rx":
             command = (
-                self.path + "-c 0x3 -n 3 -- -i " + "--nb-cores=1 --txd=1024 --rxd=1024"
-            )
+                self.path
+                + "-c 0x3 -n 3 -a %s,vectorized=1 -- -i "
+                + "--nb-cores=1 --txd=1024 --rxd=1024"
+            ) % self.vm_dut.get_port_pci(0)
             self.vm_dut.send_expect(command, "testpmd> ", 30)
             self.vm_dut.send_expect("set fwd mac", "testpmd> ", 30)
             self.vm_dut.send_expect("start", "testpmd> ", 30)
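The testsuite hunk above builds the guest testpmd command by concatenating strings and then applying the `%` operator to the whole parenthesized expression, so the PCI address lands in the `%s,vectorized=1` placeholder. A minimal standalone sketch of that string construction (the path and PCI address below are stand-in values; in DTS they come from `self.path` and `self.vm_dut.get_port_pci(0)`):

```python
# Stand-ins for the DTS framework values (hypothetical for illustration):
# self.path would be the testpmd binary path with a trailing space, and
# self.vm_dut.get_port_pci(0) would return the guest virtio port's PCI address.
testpmd_path = "./x86_64-native-linuxapp-gcc/app/dpdk-testpmd "
port_pci = "0000:04:00.0"

# The %-operator applies to the fully concatenated string, so the PCI
# address is substituted into the "-a %s,vectorized=1" allow-list option.
command = (
    testpmd_path
    + "-c 0x3 -n 3 -a %s,vectorized=1 -- -i "
    + "--nb-cores=1 --txd=1024 --rxd=1024"
) % port_pci

print(command)
```

This mirrors the patch's fix: the `%` formatting must sit outside the closing parenthesis so it formats the complete command string, not just the last fragment.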