From patchwork Mon Dec 26 01:33:53 2022
X-Patchwork-Submitter: "Ling, WeiX"
X-Patchwork-Id: 121368
From: Wei Ling
To: dts@dpdk.org
Cc: Wei Ling
Subject: [dts][PATCH V2 1/2] test_plans/pvp_vhost_user_reconnect_test_plan: adjust testplan's format
Date: Mon, 26 Dec 2022 09:33:53 +0800
Message-Id: <20221226013353.2469371-1-weix.ling@intel.com>
X-Mailer: git-send-email 2.25.1

Adjust testplan's format.

Signed-off-by: Wei Ling
---
 .../pvp_vhost_user_reconnect_test_plan.rst    | 178 +++++++++++-------
 1 file changed, 112 insertions(+), 66 deletions(-)

diff --git a/test_plans/pvp_vhost_user_reconnect_test_plan.rst b/test_plans/pvp_vhost_user_reconnect_test_plan.rst
index 6877aec4..ee9d136a 100644
--- a/test_plans/pvp_vhost_user_reconnect_test_plan.rst
+++ b/test_plans/pvp_vhost_user_reconnect_test_plan.rst
@@ -26,13 +26,15 @@ Vhost-user uses Unix domain sockets for passing messages. This means the DPDK vh
 Note that QEMU version v2.7 or above is required for split ring cases, and QEMU version v4.2.0 or above is required for packed ring cases.

-Test Case1: vhost-user/virtio-pmd pvp split ring reconnect from vhost-user
-==========================================================================
+Test Case 1: vhost-user/virtio-pmd pvp split ring reconnect from vhost-user
+===========================================================================
 Flow: TG--> NIC --> Vhost --> Virtio --> Vhost--> NIC--> TG

-1. 
Bind one port to vfio-pci, then launch vhost with client mode by below commands:: +1. Bind 1 NIC port to vfio-pci, then launch vhost with client mode by below commands:: - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x30 -n 4 --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=1' -- -i --nb-cores=1 + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x30 -n 4 \ + --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=1' \ + -- -i --nb-cores=1 testpmd>set fwd mac testpmd>start @@ -52,7 +54,8 @@ Flow: TG--> NIC --> Vhost --> Virtio --> Vhost--> NIC--> TG 3. On VM, bind virtio net to vfio-pci and run testpmd:: - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 -- -i --nb-cores=1 --txd=1024 --rxd=1024 + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 \ + -- -i --nb-cores=1 --txd=1024 --rxd=1024 testpmd>set fwd mac testpmd>start @@ -63,7 +66,9 @@ Flow: TG--> NIC --> Vhost --> Virtio --> Vhost--> NIC--> TG 5. On host, quit vhost-user, then re-launch the vhost-user with below command:: testpmd>quit - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x30 -n 4 --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=1' -- -i --nb-cores=1 + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x30 -n 4 \ + --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=1' \ + -- -i --nb-cores=1 testpmd>set fwd mac testpmd>start @@ -71,13 +76,15 @@ Flow: TG--> NIC --> Vhost --> Virtio --> Vhost--> NIC--> TG testpmd>show port stats all -Test Case2: vhost-user/virtio-pmd pvp split ring reconnect from VM -================================================================== +Test Case 2: vhost-user/virtio-pmd pvp split ring reconnect from VM +=================================================================== Flow: TG--> NIC --> Vhost --> Virtio --> Vhost--> NIC--> TG -1. Bind one port to vfio-pci, then launch vhost with client mode by below commands:: +1. Bind 1 NIC port to vfio-pci, then launch vhost with client mode by below commands:: - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x30 -n 4 --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=1' -- -i --nb-cores=1 + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x30 -n 4 \ + --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=1' \ + -- -i --nb-cores=1 testpmd>set fwd mac testpmd>start @@ -107,22 +114,24 @@ Flow: TG--> NIC --> Vhost --> Virtio --> Vhost--> NIC--> TG 5. Reboot the VM, rerun step2-step4, check the reconnection can be established. -Test Case3: vhost-user/virtio-pmd pvp split ring reconnect stability test -========================================================================= +Test Case 3: vhost-user/virtio-pmd pvp split ring reconnect stability test +========================================================================== Flow: TG--> NIC --> Vhost --> Virtio --> Vhost--> NIC--> TG -Similar as Test Case1, all steps are similar except step 5, 6. +Similar as Test Case 1, all steps are similar except step 5, 6. -5. Quit vhost-user, then re-launch, repeat it 5-8 times, check if the reconnect can work and ensure the traffic can continue. +5. Quit vhost-user, then re-launch, repeat it 5 times, check if the reconnect can work and ensure the traffic can continue. -6. Reboot VM, then re-launch VM, repeat it 3-5 times, check if the reconnect can work and ensure the traffic can continue. +6. Reboot VM, then re-launch VM, repeat it 5 times, check if the reconnect can work and ensure the traffic can continue. 
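Note that the "Bind 1 NIC port to vfio-pci" steps in these cases do not spell out the bind command itself. As a minimal sketch, the port can be bound with DPDK's devbind tool before launching vhost; the PCI address 0000:18:00.0 below is only a placeholder and must be replaced with the address of the port under test::

    modprobe vfio-pci
    ./usertools/dpdk-devbind.py --status
    ./usertools/dpdk-devbind.py --bind=vfio-pci 0000:18:00.0

Re-running ./usertools/dpdk-devbind.py --status afterwards should list the port under the DPDK-compatible driver section before testpmd is started.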
Test Case 4: vhost-user/virtio-pmd pvp split ring with multi VMs reconnect from vhost-user ========================================================================================== -1. Bind one port to vfio-pci, launch the vhost by below command:: +1. Bind 1 NIC port to vfio-pci, launch the vhost by below command:: - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x30 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1' -- -i --port-topology=chained --nb-cores=1 --txd=1024 --rxd=1024 + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x30 -n 4 --file-prefix=vhost \ + --vdev 'net_vhost0,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1' \ + -- -i --port-topology=chained --nb-cores=1 --txd=1024 --rxd=1024 testpmd>set fwd mac testpmd>start @@ -154,13 +163,15 @@ Test Case 4: vhost-user/virtio-pmd pvp split ring with multi VMs reconnect from 3. On VM1, bind virtio1 to vfio-pci and run testpmd:: - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 -- -i --port-topology=chained --nb-cores=1 --txd=1024 --rxd=1024 + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 \ + -- -i --port-topology=chained --nb-cores=1 --txd=1024 --rxd=1024 testpmd>set fwd mac testpmd>start 4. On VM2, bind virtio2 to vfio-pci and run testpmd:: - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 -- -i --port-topology=chained --nb-cores=1 --txd=1024 --rxd=1024 + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 \ + -- -i --port-topology=chained --nb-cores=1 --txd=1024 --rxd=1024 testpmd>set fwd mac testpmd>start @@ -171,7 +182,9 @@ Test Case 4: vhost-user/virtio-pmd pvp split ring with multi VMs reconnect from 6. On host, quit vhost-user, then re-launch the vhost-user with below command:: testpmd>quit - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x30 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1' -- -i --port-topology=chained --nb-cores=1 --txd=1024 --rxd=1024 + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x30 -n 4 --file-prefix=vhost \ + --vdev 'net_vhost0,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1' \ + -- -i --port-topology=chained --nb-cores=1 --txd=1024 --rxd=1024 testpmd>set fwd mac testpmd>start @@ -182,9 +195,11 @@ Test Case 4: vhost-user/virtio-pmd pvp split ring with multi VMs reconnect from Test Case 5: vhost-user/virtio-pmd pvp split ring with multi VMs reconnect from VMs =================================================================================== -1. Bind one port to vfio-pci, launch the vhost by below command:: +1. Bind 1 NIC port to vfio-pci, launch the vhost by below command:: - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x30 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1' -- -i --port-topology=chained --nb-cores=1 --txd=1024 --rxd=1024 + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x30 -n 4 --file-prefix=vhost \ + --vdev 'net_vhost0,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1' \ + -- -i --port-topology=chained --nb-cores=1 --txd=1024 --rxd=1024 testpmd>set fwd mac testpmd>start @@ -216,13 +231,15 @@ Test Case 5: vhost-user/virtio-pmd pvp split ring with multi VMs reconnect from 3. 
On VM1, bind virtio1 to vfio-pci and run testpmd:: - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 -- -i --port-topology=chained --nb-cores=1 --txd=1024 --rxd=1024 + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 \ + -- -i --port-topology=chained --nb-cores=1 --txd=1024 --rxd=1024 testpmd>set fwd mac testpmd>start 4. On VM2, bind virtio2 to vfio-pci and run testpmd:: - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 -- -i --port-topology=chained --port-topology=chain --nb-cores=1 --txd=1024 --rxd=1024 + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 \ + -- -i --port-topology=chained --port-topology=chain --nb-cores=1 --txd=1024 --rxd=1024 testpmd>set fwd mac testpmd>start @@ -239,11 +256,11 @@ Test Case 5: vhost-user/virtio-pmd pvp split ring with multi VMs reconnect from Test Case 6: vhost-user/virtio-pmd pvp split ring with multi VMs reconnect stability test ========================================================================================= -Similar as Test Case 4, all steps are similar except step 6, 7. +Similar as Test Case 4, all steps are similar except step 6, 7. -6. Quit vhost-user, then re-launch, repeat it 5-8 times, check if the reconnect can work and ensure the traffic can continue. +6. Quit vhost-user, then re-launch, repeat it 5 times, check if the reconnect can work and ensure the traffic can continue. -7. Reboot VMs, then re-launch VMs, repeat it 3-5 times, check if the reconnect can work and ensure the traffic can continue. +7. Reboot VMs, then re-launch VMs, repeat it 5 times, check if the reconnect can work and ensure the traffic can continue. Test Case 7: vhost-user/virtio-net VM2VM split ring reconnect from vhost-user ============================================================================= @@ -251,7 +268,9 @@ Flow: Virtio-net1 --> Vhost-user --> Virtio-net2 1. Launch the vhost by below commands, enable the client mode and tso:: - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x30 -n 4 --no-pci --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1' -- -i --nb-cores=1 --txd=1024 --rxd=1024 + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x30 -n 4 --no-pci --file-prefix=vhost \ + --vdev 'net_vhost,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1' \ + -- -i --nb-cores=1 --txd=1024 --rxd=1024 testpmd>start 3. Launch VM1 and VM2:: @@ -295,7 +314,9 @@ Flow: Virtio-net1 --> Vhost-user --> Virtio-net2 6. Kill the vhost-user, then re-launch the vhost-user:: testpmd>quit - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x30 -n 4 --no-pci --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1' -- -i --nb-cores=1 --txd=1024 --rxd=1024 + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x30 -n 4 --no-pci --file-prefix=vhost \ + --vdev 'net_vhost,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1' \ + -- -i --nb-cores=1 --txd=1024 --rxd=1024 testpmd>start 7. Rerun step5, ensure the vhost-user can reconnect to VM again, and the iperf traffic can be continue. @@ -306,7 +327,9 @@ Flow: Virtio-net1 --> Vhost-user --> Virtio-net2 1. 
Launch the vhost by below commands, enable the client mode and tso:: - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x30 -n 4 --no-pci --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1' -- -i --nb-cores=1 --txd=1024 --rxd=1024 + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x30 -n 4 --no-pci --file-prefix=vhost \ + --vdev 'net_vhost,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1' \ + -- -i --nb-cores=1 --txd=1024 --rxd=1024 testpmd>start 3. Launch VM1 and VM2:: @@ -353,19 +376,21 @@ Test Case 9: vhost-user/virtio-net VM2VM split ring reconnect stability test ============================================================================ Flow: Virtio-net1 --> Vhost-user --> Virtio-net2 -Similar as Test Case 7, all steps are similar except step 6. +Similar as Test Case 7, all steps are similar except step 6. -6. Quit vhost-user, then re-launch, repeat it 5-8 times, check if the reconnect can work and ensure the traffic can continue. +6. Quit vhost-user, then re-launch, repeat it 5 times, check if the reconnect can work and ensure the traffic can continue. -7. Reboot two VMs, then re-launch VMs, repeat it 3-5 times, check if the reconnect can work and ensure the traffic can continue. +7. Reboot two VMs, then re-launch VMs, repeat it 5 times, check if the reconnect can work and ensure the traffic can continue. -Test Case10: vhost-user/virtio-pmd pvp packed ring reconnect from vhost-user -============================================================================ +Test Case 10: vhost-user/virtio-pmd pvp packed ring reconnect from vhost-user +============================================================================= Flow: TG--> NIC --> Vhost --> Virtio --> Vhost--> NIC--> TG -1. Bind one port to vfio-pci, then launch vhost with client mode by below commands:: +1. Bind 1 NIC port to vfio-pci, then launch vhost with client mode by below commands:: - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x30 -n 4 --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=1' -- -i --nb-cores=1 + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x30 -n 4 \ + --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=1' \ + -- -i --nb-cores=1 testpmd>set fwd mac testpmd>start @@ -385,7 +410,8 @@ Flow: TG--> NIC --> Vhost --> Virtio --> Vhost--> NIC--> TG 3. On VM, bind virtio net to vfio-pci and run testpmd:: - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 -- -i --nb-cores=1 --txd=1024 --rxd=1024 + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 \ + -- -i --nb-cores=1 --txd=1024 --rxd=1024 testpmd>set fwd mac testpmd>start @@ -396,7 +422,9 @@ Flow: TG--> NIC --> Vhost --> Virtio --> Vhost--> NIC--> TG 5. 
On host, quit vhost-user, then re-launch the vhost-user with below command:: testpmd>quit - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x30 -n 4 --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=1' -- -i --nb-cores=1 + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x30 -n 4 \ + --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=1' \ + -- -i --nb-cores=1 testpmd>set fwd mac testpmd>start @@ -404,13 +432,15 @@ Flow: TG--> NIC --> Vhost --> Virtio --> Vhost--> NIC--> TG testpmd>show port stats all -Test Case11: vhost-user/virtio-pmd pvp packed ring reconnect from VM -==================================================================== +Test Case 11: vhost-user/virtio-pmd pvp packed ring reconnect from VM +===================================================================== Flow: TG--> NIC --> Vhost --> Virtio --> Vhost--> NIC--> TG -1. Bind one port to vfio-pci, then launch vhost with client mode by below commands:: +1. Bind 1 NIC port to vfio-pci, then launch vhost with client mode by below commands:: - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x30 -n 4 --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=1' -- -i --nb-cores=1 + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x30 -n 4 \ + --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=1' \ + -- -i --nb-cores=1 testpmd>set fwd mac testpmd>start @@ -440,22 +470,24 @@ Flow: TG--> NIC --> Vhost --> Virtio --> Vhost--> NIC--> TG 5. Reboot the VM, rerun step2-step4, check the reconnection can be established. -Test Case12: vhost-user/virtio-pmd pvp packed ring reconnect stability test -=========================================================================== +Test Case 12: vhost-user/virtio-pmd pvp packed ring reconnect stability test +============================================================================ Flow: TG--> NIC --> Vhost --> Virtio --> Vhost--> NIC--> TG -Similar as Test Case1, all steps are similar except step 5, 6. +Similar as Test Case 1, all steps are similar except step 5, 6. -5. Quit vhost-user, then re-launch, repeat it 5-8 times, check if the reconnect can work and ensure the traffic can continue. +5. Quit vhost-user, then re-launch, repeat it 5 times, check if the reconnect can work and ensure the traffic can continue. -6. Reboot VM, then re-launch VM, repeat it 3-5 times, check if the reconnect can work and ensure the traffic can continue. +6. Reboot VM, then re-launch VM, repeat it 5 times, check if the reconnect can work and ensure the traffic can continue. Test Case 13: vhost-user/virtio-pmd pvp packed ring with multi VMs reconnect from vhost-user ============================================================================================ -1. Bind one port to vfio-pci, launch the vhost by below command:: +1. Bind 1 NIC port to vfio-pci, launch the vhost by below command:: - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x30 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1' -- -i --port-topology=chained --nb-cores=1 --txd=1024 --rxd=1024 + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x30 -n 4 --file-prefix=vhost \ + --vdev 'net_vhost0,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1' \ + -- -i --port-topology=chained --nb-cores=1 --txd=1024 --rxd=1024 testpmd>set fwd mac testpmd>start @@ -487,13 +519,15 @@ Test Case 13: vhost-user/virtio-pmd pvp packed ring with multi VMs reconnect fro 3. 
On VM1, bind virtio1 to vfio-pci and run testpmd:: - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 -- -i --port-topology=chained --nb-cores=1 --txd=1024 --rxd=1024 + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 \ + -- -i --port-topology=chained --nb-cores=1 --txd=1024 --rxd=1024 testpmd>set fwd mac testpmd>start 4. On VM2, bind virtio2 to vfio-pci and run testpmd:: - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 -- -i --port-topology=chained --nb-cores=1 --txd=1024 --rxd=1024 + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 \ + -- -i --port-topology=chained --nb-cores=1 --txd=1024 --rxd=1024 testpmd>set fwd mac testpmd>start @@ -504,7 +538,9 @@ Test Case 13: vhost-user/virtio-pmd pvp packed ring with multi VMs reconnect fro 6. On host, quit vhost-user, then re-launch the vhost-user with below command:: testpmd>quit - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x30 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1' -- -i --port-topology=chained --nb-cores=1 --txd=1024 --rxd=1024 + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x30 -n 4 --file-prefix=vhost \ + --vdev 'net_vhost0,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1' \ + -- -i --port-topology=chained --nb-cores=1 --txd=1024 --rxd=1024 testpmd>set fwd mac testpmd>start @@ -515,9 +551,11 @@ Test Case 13: vhost-user/virtio-pmd pvp packed ring with multi VMs reconnect fro Test Case 14: vhost-user/virtio-pmd pvp packed ring with multi VMs reconnect from VMs ===================================================================================== -1. Bind one port to vfio-pci, launch the vhost by below command:: +1. Bind 1 NIC port to vfio-pci, launch the vhost by below command:: - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x30 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1' -- -i --port-topology=chained --nb-cores=1 --txd=1024 --rxd=1024 + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x30 -n 4 --file-prefix=vhost \ + --vdev 'net_vhost0,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1' \ + -- -i --port-topology=chained --nb-cores=1 --txd=1024 --rxd=1024 testpmd>set fwd mac testpmd>start @@ -549,13 +587,15 @@ Test Case 14: vhost-user/virtio-pmd pvp packed ring with multi VMs reconnect fro 3. On VM1, bind virtio1 to vfio-pci and run testpmd:: - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 -- -i --port-topology=chained --nb-cores=1 --txd=1024 --rxd=1024 + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 \ + -- -i --port-topology=chained --nb-cores=1 --txd=1024 --rxd=1024 testpmd>set fwd mac testpmd>start 4. 
On VM2, bind virtio2 to vfio-pci and run testpmd:: - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 -- -i --port-topology=chained --port-topology=chain --nb-cores=1 --txd=1024 --rxd=1024 + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 \ + -- -i --port-topology=chained --port-topology=chain --nb-cores=1 --txd=1024 --rxd=1024 testpmd>set fwd mac testpmd>start @@ -572,11 +612,11 @@ Test Case 14: vhost-user/virtio-pmd pvp packed ring with multi VMs reconnect fro Test Case 15: vhost-user/virtio-pmd pvp packed ring with multi VMs reconnect stability test =========================================================================================== -Similar as Test Case 4, all steps are similar except step 6, 7. +Similar as Test Case 4, all steps are similar except step 6, 7. -6. Quit vhost-user, then re-launch, repeat it 5-8 times, check if the reconnect can work and ensure the traffic can continue. +6. Quit vhost-user, then re-launch, repeat it 5 times, check if the reconnect can work and ensure the traffic can continue. -7. Reboot VMs, then re-launch VMs, repeat it 3-5 times, check if the reconnect can work and ensure the traffic can continue. +7. Reboot VMs, then re-launch VMs, repeat it 5 times, check if the reconnect can work and ensure the traffic can continue. Test Case 16: vhost-user/virtio-net VM2VM packed ring reconnect from vhost-user =============================================================================== @@ -584,7 +624,9 @@ Flow: Virtio-net1 --> Vhost-user --> Virtio-net2 1. Launch the vhost by below commands, enable the client mode and tso:: - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x30 -n 4 --no-pci --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1' -- -i --nb-cores=1 --txd=1024 --rxd=1024 + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x30 -n 4 --no-pci --file-prefix=vhost \ + --vdev 'net_vhost,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1' \ + -- -i --nb-cores=1 --txd=1024 --rxd=1024 testpmd>start 3. Launch VM1 and VM2:: @@ -628,7 +670,9 @@ Flow: Virtio-net1 --> Vhost-user --> Virtio-net2 6. Kill the vhost-user, then re-launch the vhost-user:: testpmd>quit - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x30 -n 4 --no-pci --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1' -- -i --nb-cores=1 --txd=1024 --rxd=1024 + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x30 -n 4 --no-pci --file-prefix=vhost \ + --vdev 'net_vhost,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1' \ + -- -i --nb-cores=1 --txd=1024 --rxd=1024 testpmd>start 7. Rerun step5, ensure the vhost-user can reconnect to VM again, and the iperf traffic can be continue. @@ -639,7 +683,9 @@ Flow: Virtio-net1 --> Vhost-user --> Virtio-net2 1. Launch the vhost by below commands, enable the client mode and tso:: - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x30 -n 4 --no-pci --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1' -- -i --nb-cores=1 --txd=1024 --rxd=1024 + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x30 -n 4 --no-pci --file-prefix=vhost \ + --vdev 'net_vhost,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1' \ + -- -i --nb-cores=1 --txd=1024 --rxd=1024 testpmd>start 3. 
Launch VM1 and VM2::

@@ -686,8 +732,8 @@ Test Case 18: vhost-user/virtio-net VM2VM packed ring reconnect stability test
 ==============================================================================
 Flow: Virtio-net1 --> Vhost-user --> Virtio-net2

-Similar as Test Case 7, all steps are similar except step 6.
+Similar as Test Case 7, all steps are similar except step 6.

-6. Quit vhost-user, then re-launch, repeat it 5-8 times, check if the reconnect can work and ensure the traffic can continue.
+6. Quit vhost-user, then re-launch, repeat it 5 times, check if the reconnect can work and ensure the traffic can continue.

-7. Reboot two VMs, then re-launch VMs, repeat it 3-5 times, check if the reconnect can work and ensure the traffic can continue.
\ No newline at end of file
+7. Reboot two VMs, then re-launch VMs, repeat it 5 times, check if the reconnect can work and ensure the traffic can continue.
\ No newline at end of file

From patchwork Mon Dec 26 01:34:04 2022
X-Patchwork-Submitter: "Ling, WeiX"
X-Patchwork-Id: 121369
From: Wei Ling
To: dts@dpdk.org
Cc: Wei Ling
Subject: [dts][PATCH V2 2/2] tests/pvp_vhost_user_reconnect: split the packed ring cases
Date: Mon, 26 Dec 2022 09:34:04 +0800
Message-Id: <20221226013404.2469431-1-weix.ling@intel.com>
X-Mailer: git-send-email 2.25.1

The packed ring does not support reconnect from the back-end side, but does
support reconnect from the front-end side, so split each packed ring case into two cases:
reconnect from back-end and front-end. Signed-off-by: Wei Ling --- tests/TestSuite_pvp_vhost_user_reconnect.py | 177 +++++++++++++------- 1 file changed, 114 insertions(+), 63 deletions(-) diff --git a/tests/TestSuite_pvp_vhost_user_reconnect.py b/tests/TestSuite_pvp_vhost_user_reconnect.py index 93006413..377be1d9 100644 --- a/tests/TestSuite_pvp_vhost_user_reconnect.py +++ b/tests/TestSuite_pvp_vhost_user_reconnect.py @@ -2,16 +2,8 @@ # Copyright(c) 2019 Intel Corporation # -""" -DPDK Test suite. - -Vhost reconnect two VM test suite. -Becase this suite will use the reconnet feature, the VM will start as -server mode, so the qemu version should greater than 2.7 -""" import re import time - import framework.utils as utils from framework.packet import Packet from framework.pktgen import PacketGeneratorHelper @@ -21,11 +13,9 @@ from framework.virt_common import VM class TestPVPVhostUserReconnect(TestCase): def set_up_all(self): - # Get and verify the ports self.dut_ports = self.dut.get_ports() self.verify(len(self.dut_ports) >= 1, "Insufficient ports for testing") - # Get the port's socket self.pf = self.dut_ports[0] netdev = self.dut.ports_info[self.pf]["port"] @@ -38,7 +28,6 @@ class TestPVPVhostUserReconnect(TestCase): self.socket_mem = "1024" else: self.socket_mem = "1024,1024" - self.reconnect_times = 5 self.vm_num = 1 self.frame_sizes = [64, 1518] @@ -67,7 +56,7 @@ class TestPVPVhostUserReconnect(TestCase): self.dut.send_expect("rm -rf ./vhost-net*", "# ") self.vhost_user = self.dut.new_session(suite="vhost-user") - def launch_testpmd_as_vhost_user(self): + def launch_testpmd_as_vhost_user(self, no_pci=False): """ launch the testpmd as vhost user """ @@ -78,32 +67,20 @@ class TestPVPVhostUserReconnect(TestCase): i, ) testcmd = self.dut.base_dir + "/%s" % self.path - eal_params = self.dut.create_eal_parameters( - cores=self.cores, prefix="vhost", ports=[self.pci_info] - ) - para = " -- -i --port-topology=chained --nb-cores=1 --txd=1024 --rxd=1024" - self.vhostapp_testcmd = testcmd + eal_params + vdev_info + para - self.vhost_user.send_expect(self.vhostapp_testcmd, "testpmd> ", 40) - self.vhost_user.send_expect("set fwd mac", "testpmd> ", 40) - self.vhost_user.send_expect("start", "testpmd> ", 40) - def launch_testpmd_as_vhost_user_with_no_pci(self): - """ - launch the testpmd as vhost user - """ - vdev_info = "" - for i in range(self.vm_num): - vdev_info += "--vdev 'net_vhost%d,iface=vhost-net%d,client=1,queues=1' " % ( - i, - i, + if not no_pci: + eal_params = self.dut.create_eal_parameters( + cores=self.cores, prefix="vhost", ports=[self.pci_info] ) - testcmd = self.dut.base_dir + "/%s" % self.path - eal_params = self.dut.create_eal_parameters( - cores=self.cores, no_pci=True, prefix="vhost" - ) - para = " -- -i --nb-cores=1 --txd=1024 --rxd=1024" + else: + eal_params = self.dut.create_eal_parameters( + cores=self.cores, no_pci=True, prefix="vhost" + ) + para = " -- -i --port-topology=chained --nb-cores=1 --txd=1024 --rxd=1024" self.vhostapp_testcmd = testcmd + eal_params + vdev_info + para self.vhost_user.send_expect(self.vhostapp_testcmd, "testpmd> ", 40) + if not no_pci: + self.vhost_user.send_expect("set fwd mac", "testpmd> ", 40) self.vhost_user.send_expect("start", "testpmd> ", 40) def check_link_status_after_testpmd_start(self, dut_info): @@ -288,7 +265,8 @@ class TestPVPVhostUserReconnect(TestCase): ) Mpps = pps / 1000000.0 if self.running_case in [ - "test_perf_packed_ring_reconnet_two_vms", + "test_perf_packed_ring_reconnet_two_vms_from_vms", + 
"test_perf_packed_ring_reconnet_two_vms_from_vhost_user", "test_perf_split_ring_reconnet_two_vms", ]: check_speed = 2 if frame_size == 64 else 0.5 @@ -337,7 +315,9 @@ class TestPVPVhostUserReconnect(TestCase): def test_perf_split_ring_reconnet_one_vm(self): """ - test reconnect stability test of one vm + Test Case 1: vhost-user/virtio-pmd pvp split ring reconnect from vhost-user + Test Case 2: vhost-user/virtio-pmd pvp split ring reconnect from VM + Test Case 3: vhost-user/virtio-pmd pvp split ring reconnect stability test """ self.header_row = [ "Mode", @@ -358,7 +338,7 @@ class TestPVPVhostUserReconnect(TestCase): vm_cycle = 1 # reconnet from vhost self.logger.info("now reconnect from vhost") - for i in range(self.reconnect_times): + for _ in range(self.reconnect_times): self.dut.send_expect("killall -s INT %s" % self.testpmd_name, "#") self.launch_testpmd_as_vhost_user() self.reconnect_data = self.send_and_verify(vm_cycle, "reconnet from vhost") @@ -366,7 +346,7 @@ class TestPVPVhostUserReconnect(TestCase): # reconnet from qemu self.logger.info("now reconnect from vm") - for i in range(self.reconnect_times): + for _ in range(self.reconnect_times): self.dut.send_expect("killall -s INT qemu-system-x86_64", "# ") self.start_vms() self.vm_testpmd_start() @@ -376,7 +356,9 @@ class TestPVPVhostUserReconnect(TestCase): def test_perf_split_ring_reconnet_two_vms(self): """ - test reconnect stability test of two vms + Test Case 4: vhost-user/virtio-pmd pvp split ring with multi VMs reconnect from vhost-user + Test Case 5: vhost-user/virtio-pmd pvp split ring with multi VMs reconnect from VMs + Test Case 6: vhost-user/virtio-pmd pvp split ring with multi VMs reconnect stability test """ self.header_row = [ "Mode", @@ -397,7 +379,7 @@ class TestPVPVhostUserReconnect(TestCase): vm_cycle = 1 # reconnet from vhost self.logger.info("now reconnect from vhost") - for i in range(self.reconnect_times): + for _ in range(self.reconnect_times): self.dut.send_expect("killall -s INT %s" % self.testpmd_name, "#") self.launch_testpmd_as_vhost_user() self.reconnect_data = self.send_and_verify(vm_cycle, "reconnet from vhost") @@ -405,7 +387,7 @@ class TestPVPVhostUserReconnect(TestCase): # reconnet from qemu self.logger.info("now reconnect from vm") - for i in range(self.reconnect_times): + for _ in range(self.reconnect_times): self.dut.send_expect("killall -s INT qemu-system-x86_64", "# ") self.start_vms() self.vm_testpmd_start() @@ -415,13 +397,15 @@ class TestPVPVhostUserReconnect(TestCase): def test_perf_split_ring_vm2vm_virtio_net_reconnet_two_vms(self): """ - test the iperf traffice can resume after reconnet + Test Case 7: vhost-user/virtio-net VM2VM split ring reconnect from vhost-user + Test Case 8: vhost-user/virtio-net VM2VM split ring reconnect from VMs + Test Case 9: vhost-user/virtio-net VM2VM split ring reconnect stability test """ self.header_row = ["Mode", "[M|G]bits/sec", "Cycle"] self.result_table_create(self.header_row) self.vm_num = 2 vm_cycle = 0 - self.launch_testpmd_as_vhost_user_with_no_pci() + self.launch_testpmd_as_vhost_user(no_pci=True) self.start_vms(bind_dev=False) self.config_vm_intf() self.start_iperf() @@ -430,9 +414,9 @@ class TestPVPVhostUserReconnect(TestCase): vm_cycle = 1 # reconnet from vhost self.logger.info("now reconnect from vhost") - for i in range(self.reconnect_times): + for _ in range(self.reconnect_times): self.dut.send_expect("killall -s INT %s" % self.testpmd_name, "#") - self.launch_testpmd_as_vhost_user_with_no_pci() + 
self.launch_testpmd_as_vhost_user(no_pci=True) self.start_iperf() self.reconnect_data = self.iperf_result_verify( vm_cycle, "reconnet from vhost" @@ -442,7 +426,7 @@ class TestPVPVhostUserReconnect(TestCase): # reconnet from VM self.logger.info("now reconnect from vm") vm_tmp = list() - for i in range(self.reconnect_times): + for _ in range(self.reconnect_times): self.vm_dut[0].send_expect("rm iperf_server.log", "# ", 10) self.vm_dut[1].send_expect("rm iperf_client.log", "# ", 10) self.dut.send_expect("killall -s INT qemu-system-x86_64", "# ") @@ -453,9 +437,10 @@ class TestPVPVhostUserReconnect(TestCase): self.check_reconnect_perf() self.result_table_print() - def test_perf_packed_ring_reconnet_one_vm(self): + def test_perf_packed_ring_reconnet_one_vm_from_vhost_user(self): """ - test reconnect stability test of one vm + Test Case 10: vhost-user/virtio-pmd pvp packed ring reconnect from vhost-user + Test Case 12: vhost-user/virtio-pmd pvp packed ring reconnect stability test """ self.header_row = [ "Mode", @@ -476,15 +461,38 @@ class TestPVPVhostUserReconnect(TestCase): vm_cycle = 1 # reconnet from vhost self.logger.info("now reconnect from vhost") - for i in range(self.reconnect_times): + for _ in range(self.reconnect_times): self.dut.send_expect("killall -s INT %s" % self.testpmd_name, "#") self.launch_testpmd_as_vhost_user() self.reconnect_data = self.send_and_verify(vm_cycle, "reconnet from vhost") self.check_reconnect_perf() + self.result_table_print() + def test_perf_packed_ring_reconnet_one_vm_from_vm(self): + """ + Test Case 11: vhost-user/virtio-pmd pvp packed ring reconnect from VM + Test Case 12: vhost-user/virtio-pmd pvp packed ring reconnect stability test + """ + self.header_row = [ + "Mode", + "FrameSize(B)", + "Throughput(Mpps)", + "LineRate(%)", + "Cycle", + "Queue Number", + ] + self.result_table_create(self.header_row) + vm_cycle = 0 + self.vm_num = 1 + self.launch_testpmd_as_vhost_user() + self.start_vms(packed=True) + self.vm_testpmd_start() + self.before_data = self.send_and_verify(vm_cycle, "reconnet one vm") + + vm_cycle = 1 # reconnet from qemu self.logger.info("now reconnect from vm") - for i in range(self.reconnect_times): + for _ in range(self.reconnect_times): self.dut.send_expect("killall -s INT qemu-system-x86_64", "# ") self.start_vms(packed=True) self.vm_testpmd_start() @@ -492,9 +500,10 @@ class TestPVPVhostUserReconnect(TestCase): self.check_reconnect_perf() self.result_table_print() - def test_perf_packed_ring_reconnet_two_vms(self): + def test_perf_packed_ring_reconnet_two_vms_from_vhost_user(self): """ - test reconnect stability test of two vms + Test Case 13: vhost-user/virtio-pmd pvp packed ring with multi VMs reconnect from vhost-user + Test Case 15: vhost-user/virtio-pmd pvp packed ring with multi VMs reconnect stability test """ self.header_row = [ "Mode", @@ -515,14 +524,38 @@ class TestPVPVhostUserReconnect(TestCase): vm_cycle = 1 # reconnet from vhost self.logger.info("now reconnect from vhost") - for i in range(self.reconnect_times): + for _ in range(self.reconnect_times): self.dut.send_expect("killall -s INT %s" % self.testpmd_name, "#") self.launch_testpmd_as_vhost_user() self.reconnect_data = self.send_and_verify(vm_cycle, "reconnet from vhost") self.check_reconnect_perf() + self.result_table_print() + + def test_perf_packed_ring_reconnet_two_vms_from_vms(self): + """ + Test Case 14: vhost-user/virtio-pmd pvp packed ring with multi VMs reconnect from VMs + Test Case 15: vhost-user/virtio-pmd pvp packed ring with multi VMs reconnect 
stability test + """ + self.header_row = [ + "Mode", + "FrameSize(B)", + "Throughput(Mpps)", + "LineRate(%)", + "Cycle", + "Queue Number", + ] + self.result_table_create(self.header_row) + vm_cycle = 0 + self.vm_num = 2 + self.launch_testpmd_as_vhost_user() + self.start_vms(packed=True) + self.vm_testpmd_start() + self.before_data = self.send_and_verify(vm_cycle, "reconnet two vm") + + vm_cycle = 1 # reconnet from qemu self.logger.info("now reconnect from vm") - for i in range(self.reconnect_times): + for _ in range(self.reconnect_times): self.dut.send_expect("killall -s INT qemu-system-x86_64", "# ") self.start_vms(packed=True) self.vm_testpmd_start() @@ -530,15 +563,16 @@ class TestPVPVhostUserReconnect(TestCase): self.check_reconnect_perf() self.result_table_print() - def test_perf_packed_ring_virtio_net_reconnet_two_vms(self): + def test_perf_packed_ring_virtio_net_reconnet_two_vms_from_vhost_user(self): """ - test the iperf traffice can resume after reconnet + Test Case 16: vhost-user/virtio-net VM2VM packed ring reconnect from vhost-user + Test Case 18: vhost-user/virtio-net VM2VM packed ring reconnect stability test """ self.header_row = ["Mode", "[M|G]bits/sec", "Cycle"] self.result_table_create(self.header_row) self.vm_num = 2 vm_cycle = 0 - self.launch_testpmd_as_vhost_user_with_no_pci() + self.launch_testpmd_as_vhost_user(no_pci=True) self.start_vms(packed=True, bind_dev=False) self.config_vm_intf() self.start_iperf() @@ -547,18 +581,35 @@ class TestPVPVhostUserReconnect(TestCase): vm_cycle = 1 # reconnet from vhost self.logger.info("now reconnect from vhost") - for i in range(self.reconnect_times): + for _ in range(self.reconnect_times): self.dut.send_expect("killall -s INT %s" % self.testpmd_name, "#") - self.launch_testpmd_as_vhost_user_with_no_pci() + self.launch_testpmd_as_vhost_user(no_pci=True) self.start_iperf() self.reconnect_data = self.iperf_result_verify( vm_cycle, "reconnet from vhost" ) self.check_reconnect_perf() + self.result_table_print() + def test_perf_packed_ring_virtio_net_reconnet_two_vms_from_vms(self): + """ + Test Case 17: vhost-user/virtio-net VM2VM packed ring reconnect from VMs + Test Case 18: vhost-user/virtio-net VM2VM packed ring reconnect stability test + """ + self.header_row = ["Mode", "[M|G]bits/sec", "Cycle"] + self.result_table_create(self.header_row) + self.vm_num = 2 + vm_cycle = 0 + self.launch_testpmd_as_vhost_user(no_pci=True) + self.start_vms(packed=True, bind_dev=False) + self.config_vm_intf() + self.start_iperf() + self.before_data = self.iperf_result_verify(vm_cycle, "before reconnet") + + vm_cycle = 1 # reconnet from VM self.logger.info("now reconnect from vm") - for i in range(self.reconnect_times): + for _ in range(self.reconnect_times): self.vm_dut[0].send_expect("rm iperf_server.log", "# ", 10) self.vm_dut[1].send_expect("rm iperf_client.log", "# ", 10) self.dut.send_expect("killall -s INT qemu-system-x86_64", "# ") @@ -570,9 +621,9 @@ class TestPVPVhostUserReconnect(TestCase): self.result_table_print() def tear_down(self): - # - # Run after each test case. - # + """ + Run after each test case. + """ try: self.stop_all_apps() except Exception as e: