From patchwork Wed Nov 30 06:17:28 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Ling, WeiX" X-Patchwork-Id: 120325 Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id D812AA00C2; Wed, 30 Nov 2022 07:23:07 +0100 (CET) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id D27C740A79; Wed, 30 Nov 2022 07:23:07 +0100 (CET) Received: from mga03.intel.com (mga03.intel.com [134.134.136.65]) by mails.dpdk.org (Postfix) with ESMTP id 53BE44014F for ; Wed, 30 Nov 2022 07:23:06 +0100 (CET) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1669789386; x=1701325386; h=from:to:cc:subject:date:message-id:mime-version: content-transfer-encoding; bh=vGIY4gIIC28RHjsjuZmbilKVW0Op6lYnm0dnljlegbI=; b=i8pvJHGJyp6YOpSFzB4ra6ARwskf/J5gZIKECnW6PYtRwo5tT8VEuqP8 t0kbp9l+jJGGEQCTwtx5iGArkGBILqgzjmY5DPq6KZAQnbgF0OU2eouZC iF7jkRko/DJh5n9uaURIjqVMAso09e7prE+rIgWY0POjv5EeMV3kZdh/k y5VBubaJ8/lXjpZO4ug0DE31PfcQLNF+AWznWnixOWO+ClNfTHBlAG+0G XzlJps355ZUWmeB4Fr9K0A+IDTpuNII+JdKZHAdTXEmfNf3DEVYVWhZ7F 4KS/z0RZ3yd3qLUP3I9Xvq+O4yg2mpuviwBrw/5fpU2ZlFgaPKkjRydE3 Q==; X-IronPort-AV: E=McAfee;i="6500,9779,10546"; a="317167335" X-IronPort-AV: E=Sophos;i="5.96,205,1665471600"; d="scan'208";a="317167335" Received: from fmsmga003.fm.intel.com ([10.253.24.29]) by orsmga103.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 29 Nov 2022 22:23:05 -0800 X-IronPort-AV: E=McAfee;i="6500,9779,10546"; a="732861339" X-IronPort-AV: E=Sophos;i="5.96,205,1665471600"; d="scan'208";a="732861339" Received: from unknown (HELO localhost.localdomain) ([10.239.252.222]) by fmsmga003-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 29 Nov 2022 22:23:04 -0800 From: Wei Ling To: dts@dpdk.org Cc: Wei Ling Subject: [dts][PATCH V2 1/3] test_plans/basic_4k_pages_dsa_test_plan: modify testplan description Date: Wed, 30 Nov 2022 14:17:28 +0800 Message-Id: <20221130061728.1163892-1-weix.ling@intel.com> X-Mailer: git-send-email 2.25.1 MIME-Version: 1.0 X-BeenThere: dts@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: test suite reviews and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dts-bounces@dpdk.org Modify the description in testplan. Signed-off-by: Wei Ling --- test_plans/basic_4k_pages_dsa_test_plan.rst | 75 +++++++++++---------- 1 file changed, 38 insertions(+), 37 deletions(-) diff --git a/test_plans/basic_4k_pages_dsa_test_plan.rst b/test_plans/basic_4k_pages_dsa_test_plan.rst index 4a67dfa0..7d0f9d77 100644 --- a/test_plans/basic_4k_pages_dsa_test_plan.rst +++ b/test_plans/basic_4k_pages_dsa_test_plan.rst @@ -1,9 +1,9 @@ .. SPDX-License-Identifier: BSD-3-Clause Copyright(c) 2022 Intel Corporation -============================================== +============================================= Basic 4k-pages test with DSA driver test plan -============================================== +============================================= Description =========== @@ -21,10 +21,11 @@ and packed ring vhost-user/virtio-net mergeable and non-mergeable path. 4. Multi-queues number dynamic change in vm2vm vhost-user/virtio-net with split ring and packed ring. 5. Vhost-user using 1G hugepges and virtio-user using 4k-pages. -Note: -1. 
When DMA devices are bound to vfio driver, VA mode is the default and recommended. For PA mode, page by page mapping may -exceed IOMMU's max capability, better to use 1G guest hugepage. -2. DPDK local patch that about vhost pmd is needed when testing Vhost asynchronous data path with testpmd. +.. note:: + + 1.When DMA devices are bound to vfio driver, VA mode is the default and recommended. For PA mode, page by page mapping may + exceed IOMMU's max capability, better to use 1G guest hugepage. + 2.DPDK local patch that about vhost pmd is needed when testing Vhost asynchronous data path with testpmd. Prerequisites ============= @@ -41,7 +42,7 @@ General set up CC=gcc meson --werror -Denable_kmods=True -Dlibdir=lib -Dexamples=all --default-library=static x86_64-native-linuxapp-gcc ninja -C x86_64-native-linuxapp-gcc -j 110 -3. Get the PCI device ID and DSA device ID of DUT, for example, 0000:6a:00.0 is PCI device ID, 0000:6a:01.0 - 0000:f6:01.0 are DSA device IDs:: +3. Get the PCI device of DUT, for example, 0000:6a:00.0 is NIC port, 0000:6a:01.0 - 0000:f6:01.0 are DSA devices:: # ./usertools/dpdk-devbind.py -s @@ -74,14 +75,14 @@ Common steps ------------ 1. Bind 1 NIC port to vfio-pci:: - # ./usertools/dpdk-devbind.py -b vfio-pci + # ./usertools/dpdk-devbind.py -b vfio-pci For example: - # ./usertools/dpdk-devbind.py -b vfio-pci 0000:00:4f.1 + # ./usertools/dpdk-devbind.py -b vfio-pci 0000:6a:00.0 2.Bind DSA devices to DPDK vfio-pci driver:: - # ./usertools/dpdk-devbind.py -b vfio-pci - For example, bind 2 DMA devices to vfio-pci driver: + # ./usertools/dpdk-devbind.py -b vfio-pci + For example, bind 2 DSA devices to vfio-pci driver: # ./usertools/dpdk-devbind.py -b vfio-pci 0000:e7:01.0 0000:ec:01.0 .. note:: @@ -93,18 +94,18 @@ Common steps 3. Bind DSA devices to kernel idxd driver, and configure Work Queue (WQ):: - # ./usertools/dpdk-devbind.py -b idxd - # ./drivers/dma/dma/idxd/dpdk_idxd_cfg.py -q + # ./usertools/dpdk-devbind.py -b idxd + # ./drivers/dma/dma/idxd/dpdk_idxd_cfg.py -q .. note:: Better to reset WQ when need operate DSA devices that bound to idxd drvier: # ./drivers/dma/idxd/dpdk_idxd_cfg.py --reset You can check it by 'ls /dev/dsa' - numDevices: number of devices, where 0<=numDevices<=7, corresponding to 0000:6a:01.0 - 0000:f6:01.0 - numWq: Number of workqueues per DSA endpoint, where 1<=numWq<=8 + dsa_idx: Index of DSA devices, where 0<=dsa_idx<=7, corresponding to 0000:6a:01.0 - 0000:f6:01.0 + wq_num: Number of workqueues per DSA endpoint, where 1<=numWq<=8 - For example, bind 2 DMA devices to idxd driver and configure WQ: + For example, bind 2 DSA devices to idxd driver and configure WQ: # ./usertools/dpdk-devbind.py -b idxd 6a:01.0 6f:01.0 # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 1 0 @@ -112,10 +113,10 @@ Common steps Check WQ by 'ls /dev/dsa' and can find "wq0.0 wq1.0 wq1.1 wq1.2 wq1.3" Test Case 1: PVP split ring multi-queues with 4K-pages and dsa dpdk driver ------------------------------------------------------------------------------- +-------------------------------------------------------------------------- This case tests split ring with multi-queues can work normally in 4k-pages environment when vhost uses the asynchronous operations with dsa dpdk driver. -1. Bind 2 dsa device and one nic port to vfio-pci like common step 1-2:: +1. 
Bind 2 DSA device and 1 NIC port to vfio-pci like common step 1-2:: # ./usertools/dpdk-devbind.py -b vfio-pci 6a:00.0 # ./usertools/dpdk-devbind.py -b vfio-pci e7:01.0 ec:01.0 @@ -172,10 +173,10 @@ This case tests split ring with multi-queues can work normally in 4k-pages envir 10. Rerun step 4-6. Test Case 2: PVP packed ring multi-queues with 4K-pages and dsa dpdk driver ------------------------------------------------------------------------------- +--------------------------------------------------------------------------- This case tests packed ring with multi-queues can work normally in 4k-pages environment when vhost uses the asynchronous operations with dsa dpdk driver. -1. Bind 2 dsa device and one nic port to vfio-pci like common step 1-2:: +1. Bind 2 DSA device and 1 NIC port to vfio-pci like common step 1-2:: # ./usertools/dpdk-devbind.py -b vfio-pci 6a:00.0 # ./usertools/dpdk-devbind.py -b vfio-pci f1:01.0 f6:01.0 @@ -232,10 +233,10 @@ This case tests packed ring with multi-queues can work normally in 4k-pages envi 10.Rerun step 4-6. Test Case 3: VM2VM split ring vhost-user/virtio-net 4K-pages and dsa dpdk driver test with tcp traffic --------------------------------------------------------------------------------------------------------- +------------------------------------------------------------------------------------------------------ This case test the function of Vhost tx offload in the topology of vhost-user/virtio-net split ring mergeable path by verifing the TSO/cksum in the TCP/IP stack when vhost uses the asynchronous operations with dsa dpdk driver in 4k-pages environment. -1. Bind 1 dsa device to vfio-pci like common step 2:: +1. Bind 1 DSA device to vfio-pci like common step 2:: # ./usertools/dpdk-devbind.py -b vfio-pci f1:01.0 @@ -291,11 +292,11 @@ This case test the function of Vhost tx offload in the topology of vhost-user/vi testpmd>show port xstats all Test Case 4: VM2VM packed ring vhost-user/virtio-net 4K-pages and dsa dpdk driver test with tcp traffic ---------------------------------------------------------------------------------------------------------- +------------------------------------------------------------------------------------------------------- This case test the function of Vhost tx offload in the topology of vhost-user/virtio-net packed ring mergeable path by verifing the TSO/cksum in the TCP/IP stack when vhost uses the asynchronous operations with dsa dpdk driver in 4k-pages environment. -1. Bind 1 dsa device to vfio-pci like common step 2:: +1. Bind 1 DSA device to vfio-pci like common step 2:: # ./usertools/dpdk-devbind.py -b vfio-pci f1:01.0 @@ -351,7 +352,7 @@ by verifing the TSO/cksum in the TCP/IP stack when vhost uses the asynchronous o testpmd>show port xstats all Test Case 5: VM2VM vhost/virtio-net split packed ring multi queues with 1G/4k-pages and dsa dpdk driver ---------------------------------------------------------------------------------------------------------- +------------------------------------------------------------------------------------------------------- This case uses iperf and scp to test the payload of large packet (larger than 1MB) is valid after packets forwarding in vm2vm vhost-user/virtio-net multi-queues mergeable path when vhost uses the asynchronous operations with dsa dpdk driver. And one virtio-net is split ring, the other is packed ring. The vhost run in 1G hugepages and the virtio-user run in 4k-pages environment. @@ -417,7 +418,7 @@ And one virtio-net is split ring, the other is packed ring. 
The vhost run in 1G 8. Relaunch vm1 and rerun step 4-7. Test Case 6: VM2VM vhost/virtio-net split ring multi queues with 1G/4k-pages and dsa dpdk driver ---------------------------------------------------------------------------------------------------- +------------------------------------------------------------------------------------------------ This case uses iperf and scp to test the payload of large packet (larger than 1MB) is valid after packets forwarding in vm2vm vhost-user/virtio-net split ring mergeable path when vhost uses the asynchronous operations with dsa dpdk driver. The vhost run in 1G hugepages and the virtio-user run in 4k-pages environment. @@ -507,10 +508,10 @@ dsa dpdk driver. The vhost run in 1G hugepages and the virtio-user run in 4k-pag 11. Rerun step 6-7. Test Case 7: PVP split ring multi-queues with 4K-pages and dsa kernel driver --------------------------------------------------------------------------------- +---------------------------------------------------------------------------- This case tests split ring with multi-queues can work normally in 4k-pages environment when vhost uses the asynchronous operations with dsa kernel driver. -1. Bind one nic port to vfio-pci and 2 dsa device to idxd like common step 1 and 3:: +1. Bind 1 NIC port to vfio-pci and 2 DSA device to idxd like common step 1 and 3:: # ./usertools/dpdk-devbind.py -b vfio-pci 6a:00.0 @@ -563,10 +564,10 @@ This case tests split ring with multi-queues can work normally in 4k-pages envir 8. Rerun step 4-6. Test Case 8: PVP packed ring multi-queues with 4K-pages and dsa kernel driver ---------------------------------------------------------------------------------- +----------------------------------------------------------------------------- This case tests split ring with multi-queues can work normally in 4k-pages environment when vhost uses the asynchronous operations with dsa kernel driver. -1. Bind one nic port to vfio-pci and 2 dsa device to idxd like common step 1 and 3:: +1. Bind 1 NIC port to vfio-pci and 2 DSA device to idxd like common step 1 and 3:: # ./usertools/dpdk-devbind.py -b vfio-pci 6a:00.0 @@ -619,12 +620,12 @@ This case tests split ring with multi-queues can work normally in 4k-pages envir 8. Rerun step 4-6. Test Case 9: VM2VM split ring vhost-user/virtio-net 4K-pages and dsa kernel driver test with tcp traffic ---------------------------------------------------------------------------------------------------------- +-------------------------------------------------------------------------------------------------------- This case test the function of Vhost tx offload in the topology of vhost-user/virtio-net split ring mergeable path by verifing the TSO/cksum in the TCP/IP stack when vhost uses the asynchronous operations with dsa dpdk driver in 4k-pages environment. -1. Bind 1 dsa device to idxd like common step 2:: +1. Bind 1 DSA device to idxd like common step 2:: ls /dev/dsa #check wq configure, reset if exist # ./usertools/dpdk-devbind.py -u 6a:01.0 @@ -684,12 +685,12 @@ in 4k-pages environment. 
testpmd>show port xstats all Test Case 10: VM2VM packed ring vhost-user/virtio-net 4K-pages and dsa kernel driver test with tcp traffic ------------------------------------------------------------------------------------------------------------ +---------------------------------------------------------------------------------------------------------- This case test the function of Vhost tx offload in the topology of vhost-user/virtio-net packed ring mergeable path by verifing the TSO/cksum in the TCP/IP stack when vhost uses the asynchronous operations with dsa dpdk driver in 4k-pages environment. -1. Bind 2 dsa device to idxd like common step 2:: +1. Bind 2 DSA device to idxd like common step 2:: ls /dev/dsa #check wq configure, reset if exist # ./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0 @@ -750,12 +751,12 @@ in 4k-pages environment. testpmd>show port xstats all Test Case 11: VM2VM vhost/virtio-net split packed ring multi queues with 1G/4k-pages and dsa kernel driver ------------------------------------------------------------------------------------------------------------ +---------------------------------------------------------------------------------------------------------- This case uses iperf and scp to test the payload of large packet (larger than 1MB) is valid after packets forwarding in vm2vm vhost-user/virtio-net split and packed ring mergeable path when vhost uses the asynchronous operations with dsa kernel driver. The vhost run in 1G hugepages and the virtio-user run in 4k-pages environment. -1. Bind 8 dsa device to idxd like common step 3:: +1. Bind 8 DSA device to idxd like common step 3:: ls /dev/dsa #check wq configure, reset if exist # ./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0 74:01.0 79:01.0 e7:01.0 ec:01.0 f1:01.0 f6:01.0 @@ -827,7 +828,7 @@ dsa kernel driver. The vhost run in 1G hugepages and the virtio-user run in 4k-p 8. Relaunch vm1 and rerun step 4-7. Test Case 12: VM2VM vhost/virtio-net split ring multi queues with 1G/4k-pages and dsa kernel driver ------------------------------------------------------------------------------------------------------ +--------------------------------------------------------------------------------------------------- This case uses iperf and scp to test the payload of large packet (larger than 1MB) is valid after packets forwarding in vm2vm vhost-user/virtio-net split ring mergeable path when vhost uses the asynchronous operations with dsa kernel driver. The vhost run in 1G hugepages and the virtio-user run in 4k-pages environment. 
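The ``dmas`` parameter that appears throughout these cases maps individual virtqueues to DSA work queues or DSA device queues (``txqN@<wq>`` / ``rxqN@<wq>``). As a minimal sketch of how such a mapping string and the matching vhost ``--vdev`` argument can be assembled, mirroring what the companion test suite in the next patch builds (the helper name and the work-queue names ``wq0.0``/``wq0.1`` here are only illustrative)::

    # Illustrative helper: compose a vhost-user --vdev argument whose dmas list
    # maps virtqueues to DSA work queues, in the same form the test suite uses.
    def build_vhost_vdev(iface, queues, wq_map):
        # wq_map example: {"txq0": "wq0.0", "rxq0": "wq0.1"}
        dmas = ";".join("%s@%s" % (q, wq) for q, wq in wq_map.items())
        return "--vdev 'net_vhost0,iface=%s,queues=%d,client=1,dmas=[%s]'" % (
            iface,
            queues,
            dmas,
        )

    print(build_vhost_vdev("vhost-net0", 8, {"txq0": "wq0.0", "rxq0": "wq0.1"}))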
From patchwork Wed Nov 30 06:17:40 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Ling, WeiX" X-Patchwork-Id: 120326 Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 0CFA7A00C2; Wed, 30 Nov 2022 07:23:25 +0100 (CET) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 07B4D40395; Wed, 30 Nov 2022 07:23:25 +0100 (CET) Received: from mga03.intel.com (mga03.intel.com [134.134.136.65]) by mails.dpdk.org (Postfix) with ESMTP id 304E74014F for ; Wed, 30 Nov 2022 07:23:23 +0100 (CET) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1669789403; x=1701325403; h=from:to:cc:subject:date:message-id:mime-version: content-transfer-encoding; bh=WGXEwdm9QCNx2iHHsraPdz9IR6UyPWuUa1OexQ3TI28=; b=GsPuaTXur4sd8KRTdr4rpCGSyZWAKvb9Ex9ctM6CwVRZyvcOYmKeHnKe ApMvBndk2g6YFVT26a2wg9+dUkjFkKUOKwzTTrVr2CWKlgE6c8GhN37u8 /sFRSjXkDO87TekRSgbCvOi9ZLRHrgah8b7ACetktGWYrjS7v2bE4ZgnL uOefxQDHWrhiXRQxz1IST1CeKQ3IY94Fy3qPfQ1Tskwalnw/ElPh0C3RZ +A6zZNnhkKy3yJgOp9FKQgr0L0xMG0AjAZkPgoGqmqbBXfqrxdApK85ox 8dLZ17VHtWkUVfgOagnpf/OPmQIT9CU3lGLo9Fr69EPrK0LcRqba+EWdf w==; X-IronPort-AV: E=McAfee;i="6500,9779,10546"; a="317167355" X-IronPort-AV: E=Sophos;i="5.96,205,1665471600"; d="scan'208";a="317167355" Received: from fmsmga003.fm.intel.com ([10.253.24.29]) by orsmga103.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 29 Nov 2022 22:23:18 -0800 X-IronPort-AV: E=McAfee;i="6500,9779,10546"; a="732861396" X-IronPort-AV: E=Sophos;i="5.96,205,1665471600"; d="scan'208";a="732861396" Received: from unknown (HELO localhost.localdomain) ([10.239.252.222]) by fmsmga003-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 29 Nov 2022 22:23:16 -0800 From: Wei Ling To: dts@dpdk.org Cc: Wei Ling Subject: [dts][PATCH V2 2/3] tests/basic_4k_pages_dsa: add new testsuite Date: Wed, 30 Nov 2022 14:17:40 +0800 Message-Id: <20221130061740.1163952-1-weix.ling@intel.com> X-Mailer: git-send-email 2.25.1 MIME-Version: 1.0 X-BeenThere: dts@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: test suite reviews and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dts-bounces@dpdk.org Add the new testsuite TestSuite_basic_4k_pages_dsa.py. Signed-off-by: Wei Ling --- tests/TestSuite_basic_4k_pages_dsa.py | 1677 +++++++++++++++++++++++++ 1 file changed, 1677 insertions(+) create mode 100644 tests/TestSuite_basic_4k_pages_dsa.py diff --git a/tests/TestSuite_basic_4k_pages_dsa.py b/tests/TestSuite_basic_4k_pages_dsa.py new file mode 100644 index 00000000..778b17b7 --- /dev/null +++ b/tests/TestSuite_basic_4k_pages_dsa.py @@ -0,0 +1,1677 @@ +# SPDX-License-Identifier: BSD-3-Clause +# Copyright(c) 2022 Intel Corporation +# + +""" +DPDK Test suite. +vhost/virtio-user pvp with 4K pages. 
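+Cases cover PVP split and packed ring multi-queues, VM2VM vhost-user/virtio-net
+with TCP traffic, and VM2VM iperf/scp file checks, with DSA devices driven by
+either the DPDK vfio-pci driver or the kernel idxd driver.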
+""" + +import os +import random +import re +import string +import time + +from framework.config import VirtConf +from framework.packet import Packet +from framework.pktgen import PacketGeneratorHelper +from framework.pmd_output import PmdOutput +from framework.qemu_kvm import QEMUKvm +from framework.settings import CONFIG_ROOT_PATH, HEADER_SIZE +from framework.test_case import TestCase + +from .virtio_common import dsa_common as DC + + +class TestBasic4kPagesDsa(TestCase): + def get_virt_config(self, vm_name): + conf = VirtConf(CONFIG_ROOT_PATH + os.sep + self.suite_name + ".cfg") + conf.load_virt_config(vm_name) + virt_conf = conf.get_virt_config() + return virt_conf + + def set_up_all(self): + """ + Run at the start of each test suite. + """ + self.dut_ports = self.dut.get_ports() + self.ports_socket = self.dut.get_numa_id(self.dut_ports[0]) + self.cores_num = len([n for n in self.dut.cores if int(n["socket"]) == 0]) + self.verify(len(self.dut_ports) >= 1, "Insufficient ports for testing") + self.verify( + self.cores_num >= 4, + "There has not enought cores to test this suite %s" % self.suite_name, + ) + self.cores_list = self.dut.get_core_list(config="all", socket=self.ports_socket) + self.vhost_core_list = self.cores_list[0:9] + self.virtio0_core_list = self.cores_list[9:14] + self.vhost_user = self.dut.new_session(suite="vhost-user") + self.virtio_user0 = self.dut.new_session(suite="virtio-user") + self.vhost_user_pmd = PmdOutput(self.dut, self.vhost_user) + self.virtio_user0_pmd = PmdOutput(self.dut, self.virtio_user0) + self.frame_sizes = [64, 128, 256, 512, 1024, 1518] + self.out_path = "/tmp/%s" % self.suite_name + out = self.tester.send_expect("ls -d %s" % self.out_path, "# ") + if "No such file or directory" in out: + self.tester.send_expect("mkdir -p %s" % self.out_path, "# ") + # create an instance to set stream field setting + self.pktgen_helper = PacketGeneratorHelper() + self.number_of_ports = 1 + self.app_testpmd_path = self.dut.apps_name["test-pmd"] + self.testpmd_name = self.app_testpmd_path.split("/")[-1] + self.virtio_mac = "00:01:02:03:04:05" + self.vm_num = 2 + self.virtio_ip1 = "1.1.1.1" + self.virtio_ip2 = "1.1.1.2" + self.virtio_mac1 = "52:54:00:00:00:01" + self.virtio_mac2 = "52:54:00:00:00:02" + self.base_dir = self.dut.base_dir.replace("~", "/root") + self.random_string = string.ascii_letters + string.digits + self.headers_size = HEADER_SIZE["eth"] + HEADER_SIZE["ip"] + HEADER_SIZE["tcp"] + self.DC = DC(self) + + self.mount_tmpfs_for_4k(number=2) + self.vm0_virt_conf = self.get_virt_config(vm_name="vm0") + for param in self.vm0_virt_conf: + if "cpu" in param.keys(): + self.vm0_cpupin = param["cpu"][0]["cpupin"] + self.vm0_lcore = ",".join(list(self.vm0_cpupin.split())) + self.vm0_lcore_smp = len(list(self.vm0_cpupin.split())) + if "qemu" in param.keys(): + self.vm0_qemu_path = param["qemu"][0]["path"] + if "mem" in param.keys(): + self.vm0_mem_size = param["mem"][0]["size"] + if "disk" in param.keys(): + self.vm0_image_path = param["disk"][0]["file"] + if "vnc" in param.keys(): + self.vm0_vnc = param["vnc"][0]["displayNum"] + if "login" in param.keys(): + self.vm0_user = param["login"][0]["user"] + self.vm0_passwd = param["login"][0]["password"] + + self.vm1_virt_conf = self.get_virt_config(vm_name="vm1") + for param in self.vm1_virt_conf: + if "cpu" in param.keys(): + self.vm1_cpupin = param["cpu"][0]["cpupin"] + self.vm1_lcore = ",".join(list(self.vm1_cpupin.split())) + self.vm1_lcore_smp = len(list(self.vm1_cpupin.split())) + if "qemu" in param.keys(): + 
self.vm1_qemu_path = param["qemu"][0]["path"] + if "mem" in param.keys(): + self.vm1_mem_size = param["mem"][0]["size"] + if "disk" in param.keys(): + self.vm1_image_path = param["disk"][0]["file"] + if "vnc" in param.keys(): + self.vm1_vnc = param["vnc"][0]["displayNum"] + if "login" in param.keys(): + self.vm1_user = param["login"][0]["user"] + self.vm1_passwd = param["login"][0]["password"] + + def set_up(self): + """ + Run before each test case. + """ + self.dut.send_expect("killall -s INT %s" % self.testpmd_name, "# ") + self.dut.send_expect("killall -s INT qemu-system-x86_64", "#") + self.dut.send_expect("rm -rf /root/dpdk/vhost-net*", "# ") + # Prepare the result table + self.table_header = ["Frame"] + self.table_header.append("Mode") + self.table_header.append("Mpps") + self.table_header.append("% linerate") + self.result_table_create(self.table_header) + self.vm_dut = [] + self.vm = [] + self.use_dsa_list = [] + self.DC.reset_all_work_queue() + self.DC.bind_all_dsa_to_kernel() + self.packed = False + + def start_vm0(self, packed=False, queues=1, server=False): + packed_param = ",packed=on" if packed else "" + server = ",server" if server else "" + self.qemu_cmd0 = ( + f"taskset -c {self.vm0_lcore} {self.vm0_qemu_path} -name vm0 -enable-kvm " + f"-pidfile /tmp/.vm0.pid -daemonize -monitor unix:/tmp/vm0_monitor.sock,server,nowait " + f"-netdev user,id=nttsip1,hostfwd=tcp:%s:6000-:22 -device e1000,netdev=nttsip1 " + f"-chardev socket,id=char0,path=/root/dpdk/vhost-net0{server} " + f"-netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues={queues} " + f"-device virtio-net-pci,netdev=netdev0,mac=%s," + f"disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on{packed_param} " + f"-cpu host -smp {self.vm0_lcore_smp} -m {self.vm0_mem_size} -object memory-backend-file,id=mem,size={self.vm0_mem_size}M,mem-path=/mnt/tmpfs_nohuge0,share=on " + f"-numa node,memdev=mem -mem-prealloc -drive file={self.vm0_image_path} " + f"-chardev socket,path=/tmp/vm0_qga0.sock,server,nowait,id=vm0_qga0 -device virtio-serial " + f"-device virtserialport,chardev=vm0_qga0,name=org.qemu.guest_agent.0 -vnc :{self.vm0_vnc} " + ) + + self.vm0_session = self.dut.new_session(suite="vm0_session") + cmd0 = self.qemu_cmd0 % ( + self.dut.get_ip_address(), + self.virtio_mac1, + ) + self.vm0_session.send_expect(cmd0, "# ") + time.sleep(10) + self.vm0_dut = self.connect_vm0() + self.verify(self.vm0_dut is not None, "vm start fail") + self.vm_session = self.vm0_dut.new_session(suite="vm_session") + + def start_vm1(self, packed=False, queues=1, server=False): + packed_param = ",packed=on" if packed else "" + server = ",server" if server else "" + self.qemu_cmd1 = ( + f"taskset -c {self.vm1_lcore} {self.vm1_qemu_path} -name vm1 -enable-kvm " + f"-pidfile /tmp/.vm1.pid -daemonize -monitor unix:/tmp/vm1_monitor.sock,server,nowait " + f"-netdev user,id=nttsip1,hostfwd=tcp:%s:6001-:22 -device e1000,netdev=nttsip1 " + f"-chardev socket,id=char0,path=/root/dpdk/vhost-net1{server} " + f"-netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues={queues} " + f"-device virtio-net-pci,netdev=netdev0,mac=%s," + f"disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on{packed_param} " + f"-cpu host -smp {self.vm1_lcore_smp} -m {self.vm1_mem_size} -object memory-backend-file,id=mem,size={self.vm1_mem_size}M,mem-path=/mnt/tmpfs_nohuge1,share=on " + f"-numa node,memdev=mem -mem-prealloc -drive file={self.vm1_image_path} " + f"-chardev 
socket,path=/tmp/vm1_qga0.sock,server,nowait,id=vm1_qga0 -device virtio-serial " + f"-device virtserialport,chardev=vm1_qga0,name=org.qemu.guest_agent.0 -vnc :{self.vm1_vnc} " + ) + self.vm1_session = self.dut.new_session(suite="vm1_session") + cmd1 = self.qemu_cmd1 % ( + self.dut.get_ip_address(), + self.virtio_mac2, + ) + self.vm1_session.send_expect(cmd1, "# ") + time.sleep(10) + self.vm1_dut = self.connect_vm1() + self.verify(self.vm1_dut is not None, "vm start fail") + self.vm_session = self.vm1_dut.new_session(suite="vm_session") + + def connect_vm0(self): + self.vm0 = QEMUKvm(self.dut, "vm0", self.suite_name) + self.vm0.net_type = "hostfwd" + self.vm0.hostfwd_addr = "%s:6000" % self.dut.get_ip_address() + self.vm0.def_driver = "vfio-pci" + self.vm0.driver_mode = "noiommu" + self.wait_vm_net_ready(vm_index=0) + vm_dut = self.vm0.instantiate_vm_dut(autodetect_topo=False, bind_dev=False) + if vm_dut: + return vm_dut + else: + return None + + def connect_vm1(self): + self.vm1 = QEMUKvm(self.dut, "vm1", "vm_hotplug") + self.vm1.net_type = "hostfwd" + self.vm1.hostfwd_addr = "%s:6001" % self.dut.get_ip_address() + self.vm1.def_driver = "vfio-pci" + self.vm1.driver_mode = "noiommu" + self.wait_vm_net_ready(vm_index=1) + vm_dut = self.vm1.instantiate_vm_dut(autodetect_topo=False, bind_dev=False) + if vm_dut: + return vm_dut + else: + return None + + def wait_vm_net_ready(self, vm_index=0): + self.vm_net_session = self.dut.new_session(suite="vm_net_session") + self.start_time = time.time() + cur_time = time.time() + time_diff = cur_time - self.start_time + while time_diff < 120: + try: + out = self.vm_net_session.send_expect( + "~/QMP/qemu-ga-client --address=/tmp/vm%s_qga0.sock ifconfig" + % vm_index, + "#", + ) + except Exception as EnvironmentError: + pass + if "10.0.2" in out: + pos = self.vm0.hostfwd_addr.find(":") + ssh_key = ( + "[" + + self.vm0.hostfwd_addr[:pos] + + "]" + + self.vm0.hostfwd_addr[pos:] + ) + os.system("ssh-keygen -R %s" % ssh_key) + break + time.sleep(1) + cur_time = time.time() + time_diff = cur_time - self.start_time + self.dut.close_session(self.vm_net_session) + + def send_imix_and_verify(self, mode): + """ + Send imix packet with packet generator and verify + """ + frame_sizes = [64, 128, 256, 512, 1024, 1518] + tgenInput = [] + for frame_size in frame_sizes: + payload_size = frame_size - self.headers_size + port = self.tester.get_local_port(self.dut_ports[0]) + fields_config = { + "ip": { + "src": {"action": "random"}, + }, + } + pkt = Packet() + pkt.assign_layers(["ether", "ipv4", "tcp", "raw"]) + pkt.config_layers( + [ + ("ether", {"dst": "%s" % self.virtio_mac}), + ("ipv4", {"src": "1.1.1.1"}), + ("raw", {"payload": ["01"] * int("%d" % payload_size)}), + ] + ) + pkt.save_pcapfile( + self.tester, + "%s/%s_%s.pcap" % (self.out_path, self.suite_name, frame_size), + ) + tgenInput.append( + ( + port, + port, + "%s/%s_%s.pcap" % (self.out_path, self.suite_name, frame_size), + ) + ) + + self.tester.pktgen.clear_streams() + streams = self.pktgen_helper.prepare_stream_from_tginput( + tgenInput, 100, fields_config, self.tester.pktgen + ) + bps, pps = self.tester.pktgen.measure_throughput(stream_ids=streams) + Mpps = pps / 1000000.0 + Mbps = bps / 1000000.0 + self.verify( + Mbps > 0, + f"{self.running_case} can not receive packets of frame size {frame_sizes}", + ) + bps_linerate = self.wirespeed(self.nic, 64, 1) * 8 * (64 + 20) + throughput = Mbps * 100 / float(bps_linerate) + results_row = ["imix"] + results_row.append(mode) + results_row.append(Mpps) + 
results_row.append(throughput) + self.result_table_add(results_row) + + def start_vhost_user_testpmd( + self, + cores, + eal_param="", + param="", + no_pci=False, + ports="", + port_options="", + ): + """ + launch the testpmd as virtio with vhost_user + """ + if not no_pci and port_options != "": + self.vhost_user_pmd.start_testpmd( + cores=cores, + eal_param=eal_param, + param=param, + port_options=port_options, + ports=ports, + prefix="vhost", + fixed_prefix=True, + ) + elif not no_pci and port_options == "": + self.vhost_user_pmd.start_testpmd( + cores=cores, + eal_param=eal_param, + param=param, + ports=ports, + prefix="vhost", + fixed_prefix=True, + ) + else: + self.vhost_user_pmd.start_testpmd( + cores=cores, + eal_param=eal_param, + param=param, + no_pci=no_pci, + prefix="vhost", + fixed_prefix=True, + ) + + def start_virtio_user0_testpmd(self, cores, eal_param="", param=""): + """ + launch the testpmd as virtio with vhost_net0 + """ + self.virtio_user0_pmd.start_testpmd( + cores=cores, + eal_param=eal_param, + param=param, + no_pci=True, + prefix="virtio-user0", + fixed_prefix=True, + ) + + def config_vm_ip(self): + """ + set virtio device IP and run arp protocal + """ + vm1_intf = self.vm0_dut.ports_info[0]["intf"] + vm2_intf = self.vm1_dut.ports_info[0]["intf"] + self.vm0_dut.send_expect( + "ifconfig %s %s" % (vm1_intf, self.virtio_ip1), "#", 10 + ) + self.vm1_dut.send_expect( + "ifconfig %s %s" % (vm2_intf, self.virtio_ip2), "#", 10 + ) + self.vm0_dut.send_expect( + "arp -s %s %s" % (self.virtio_ip2, self.virtio_mac2), "#", 10 + ) + self.vm1_dut.send_expect( + "arp -s %s %s" % (self.virtio_ip1, self.virtio_mac1), "#", 10 + ) + + def config_vm_combined(self, combined=1): + """ + set virtio device combined + """ + vm1_intf = self.vm0_dut.ports_info[0]["intf"] + vm2_intf = self.vm1_dut.ports_info[0]["intf"] + self.vm0_dut.send_expect( + "ethtool -L %s combined %d" % (vm1_intf, combined), "#", 10 + ) + self.vm1_dut.send_expect( + "ethtool -L %s combined %d" % (vm2_intf, combined), "#", 10 + ) + + def check_ping_between_vms(self): + ping_out = self.vm0_dut.send_expect( + "ping {} -c 4".format(self.virtio_ip2), "#", 20 + ) + self.logger.info(ping_out) + + def check_scp_file_valid_between_vms(self, file_size=1024): + """ + scp file form VM1 to VM2, check the data is valid + """ + # default file_size=1024K + data = "" + for _ in range(file_size * 1024): + data += random.choice(self.random_string) + self.vm0_dut.send_expect('echo "%s" > /tmp/payload' % data, "# ") + # scp this file to vm1 + out = self.vm1_dut.send_command( + "scp root@%s:/tmp/payload /root" % self.virtio_ip1, timeout=5 + ) + if "Are you sure you want to continue connecting" in out: + self.vm1_dut.send_command("yes", timeout=3) + self.vm1_dut.send_command(self.vm0_passwd, timeout=3) + # get the file info in vm1, and check it valid + md5_send = self.vm0_dut.send_expect("md5sum /tmp/payload", "# ") + md5_revd = self.vm1_dut.send_expect("md5sum /root/payload", "# ") + md5_send = md5_send[: md5_send.find(" ")] + md5_revd = md5_revd[: md5_revd.find(" ")] + self.verify( + md5_send == md5_revd, "the received file is different with send file" + ) + + def start_iperf(self): + """ + run perf command between to vms + """ + iperf_server = "iperf -s -i 1" + iperf_client = "iperf -c {} -i 1 -t 60".format(self.virtio_ip1) + self.vm0_dut.send_expect("{} > iperf_server.log &".format(iperf_server), "", 10) + self.vm1_dut.send_expect("{} > iperf_client.log &".format(iperf_client), "", 60) + time.sleep(60) + + def get_iperf_result(self): + 
""" + get the iperf test result + """ + self.table_header = ["Mode", "[M|G]bits/sec"] + self.result_table_create(self.table_header) + self.vm0_dut.send_expect("pkill iperf", "# ") + self.vm1_dut.session.copy_file_from("%s/iperf_client.log" % self.dut.base_dir) + fp = open("./iperf_client.log") + fmsg = fp.read() + fp.close() + # remove the server report info from msg + index = fmsg.find("Server Report") + if index != -1: + fmsg = fmsg[:index] + iperfdata = re.compile("\S*\s*[M|G]bits/sec").findall(fmsg) + # the last data of iperf is the ave data from 0-30 sec + self.verify(len(iperfdata) != 0, "The iperf data between to vms is 0") + self.logger.info("The iperf data between vms is %s" % iperfdata[-1]) + + # put the result to table + results_row = ["vm2vm", iperfdata[-1]] + self.result_table_add(results_row) + + # print iperf resut + self.result_table_print() + # rm the iperf log file in vm + self.vm0_dut.send_expect("rm iperf_server.log", "#", 10) + self.vm1_dut.send_expect("rm iperf_client.log", "#", 10) + + def verify_xstats_info_on_vhost(self): + """ + check both 2VMs can receive and send big packets to each other + """ + out_tx = self.vhost_user_pmd.execute_cmd("show port xstats 0") + out_rx = self.vhost_user_pmd.execute_cmd("show port xstats 1") + + tx_info = re.search("tx_q0_size_1519_max_packets:\s*(\d*)", out_tx) + rx_info = re.search("rx_q0_size_1519_max_packets:\s*(\d*)", out_rx) + + self.verify( + int(rx_info.group(1)) > 0, "Port 1 not receive packet greater than 1522" + ) + self.verify( + int(tx_info.group(1)) > 0, "Port 0 not forward packet greater than 1522" + ) + + def mount_tmpfs_for_4k(self, number=1): + """ + Prepare tmpfs with 4K-pages + """ + for num in range(number): + self.dut.send_expect("mkdir /mnt/tmpfs_nohuge{}".format(num), "# ") + self.dut.send_expect( + "mount tmpfs /mnt/tmpfs_nohuge{} -t tmpfs -o size=4G".format(num), "# " + ) + + def umount_tmpfs_for_4k(self): + """ + Prepare tmpfs with 4K-pages + """ + out = self.dut.send_expect( + "mount |grep 'mnt/tmpfs' |awk -F ' ' {'print $3'}", "#" + ) + if out != "": + mount_points = out.replace("\r", "").split("\n") + else: + mount_points = [] + if len(mount_points) != 0: + for mount_info in mount_points: + self.dut.send_expect("umount {}".format(mount_info), "# ") + + def check_packets_of_vhost_each_queue(self, queues): + self.vhost_user_pmd.execute_cmd("show port stats all") + out = self.vhost_user_pmd.execute_cmd("stop") + self.logger.info(out) + for queue in range(queues): + reg = "Queue= %d" % queue + index = out.find(reg) + rx = re.search("RX-packets:\s*(\d*)", out[index:]) + tx = re.search("TX-packets:\s*(\d*)", out[index:]) + rx_packets = int(rx.group(1)) + tx_packets = int(tx.group(1)) + self.verify( + rx_packets > 0 and tx_packets > 0, + "The queue {} rx-packets or tx-packets is 0 about ".format(queue) + + "rx-packets: {}, tx-packets: {}".format(rx_packets, tx_packets), + ) + + def test_perf_pvp_split_ring_multi_queues_with_4k_pages_and_dsa_dpdk_driver( + self, + ): + """ + Test Case 1: PVP split ring multi-queues with 4K-pages and dsa dpdk driver + """ + self.use_dsa_list = self.DC.bind_dsa_to_dpdk( + dsa_number=2, driver_name="vfio-pci", socket=self.ports_socket + ) + dmas = ( + "txq0@%s-q0;" + "txq1@%s-q0;" + "txq2@%s-q0;" + "txq3@%s-q0;" + "txq4@%s-q1;" + "txq5@%s-q1;" + "rxq2@%s-q2;" + "rxq3@%s-q2;" + "rxq4@%s-q3;" + "rxq5@%s-q3;" + "rxq6@%s-q3;" + "rxq7@%s-q3" + % ( + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + 
self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + ) + ) + vhost_eal_param = ( + "--no-huge -m 1024 --vdev 'net_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[%s]'" + % dmas + ) + vhost_param = ( + "--nb-cores=4 --txd=1024 --rxd=1024 --txq=8 --rxq=8 --no-numa --socket-num=%d" + % self.ports_socket + ) + ports = [self.dut.ports_info[0]["pci"]] + for i in self.use_dsa_list: + ports.append(i) + self.start_vhost_user_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + ports=ports, + ) + self.vhost_user_pmd.execute_cmd("set fwd mac") + self.vhost_user_pmd.execute_cmd("start") + + virtio_eal_param = "--no-huge -m 1024 --vdev net_virtio_user0,mac=00:01:02:03:04:05,path=vhost-net0,mrg_rxbuf=1,in_order=1,queues=8,server=1" + virtio_param = "--nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024" + self.start_virtio_user0_testpmd( + cores=self.virtio0_core_list, + eal_param=virtio_eal_param, + param=virtio_param, + ) + self.virtio_user0_pmd.execute_cmd("set fwd csum") + self.virtio_user0_pmd.execute_cmd("start") + self.send_imix_and_verify(mode="split ring inorder mergeable with 4k page") + self.check_packets_of_vhost_each_queue(queues=8) + + self.vhost_user_pmd.execute_cmd("start") + self.send_imix_and_verify( + mode="split ring inorder mergeable with 4k page restart vhost" + ) + + self.vhost_user_pmd.quit() + dmas = ( + "txq0@%s-q0;" + "txq1@%s-q0;" + "txq2@%s-q1;" + "txq3@%s-q1;" + "txq4@%s-q2;" + "txq5@%s-q2;" + "txq6@%s-q3;" + "txq7@%s-q3;" + "rxq0@%s-q0;" + "rxq1@%s-q0;" + "rxq2@%s-q1;" + "rxq3@%s-q1;" + "rxq4@%s-q2;" + "rxq5@%s-q2;" + "rxq6@%s-q3;" + "rxq7@%s-q3" + % ( + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[1], + self.use_dsa_list[1], + self.use_dsa_list[1], + self.use_dsa_list[1], + self.use_dsa_list[1], + self.use_dsa_list[1], + self.use_dsa_list[1], + self.use_dsa_list[1], + ) + ) + + vhost_eal_param = ( + "--vdev 'net_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[%s]'" % dmas + ) + vhost_param = "--nb-cores=4 --txd=1024 --rxd=1024 --txq=8 --rxq=8" + ports = [self.dut.ports_info[0]["pci"]] + for i in self.use_dsa_list: + ports.append(i) + port_options = { + self.use_dsa_list[0]: "max_queues=4", + self.use_dsa_list[1]: "max_queues=4", + } + self.start_vhost_user_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + ports=ports, + port_options=port_options, + ) + self.vhost_user_pmd.execute_cmd("set fwd mac") + self.vhost_user_pmd.execute_cmd("start") + self.send_imix_and_verify(mode="split ring inorder mergeable with 1G page") + self.check_packets_of_vhost_each_queue(queues=8) + + self.vhost_user_pmd.execute_cmd("start") + self.send_imix_and_verify( + mode="split ring inorder mergeable with 1G page restart vhost" + ) + + self.virtio_user0_pmd.quit() + virtio_eal_param = "--no-huge -m 1024 --vdev net_virtio_user0,mac=00:01:02:03:04:05,path=vhost-net0,mrg_rxbuf=1,in_order=0,queues=8,server=1" + self.start_virtio_user0_testpmd( + cores=self.virtio0_core_list, + eal_param=virtio_eal_param, + param=virtio_param, + ) + self.virtio_user0_pmd.execute_cmd("set fwd csum") + self.virtio_user0_pmd.execute_cmd("start") + self.vhost_user_pmd.execute_cmd("start") + self.send_imix_and_verify(mode="split ring mergeable with 1G page") + 
self.check_packets_of_vhost_each_queue(queues=8) + + self.vhost_user_pmd.execute_cmd("start") + self.send_imix_and_verify( + mode="split ring mergeable with 1G page restart vhost" + ) + self.result_table_print() + + def test_perf_pvp_packed_ring_multi_queues_with_4k_pages_and_dsa_dpdk_driver( + self, + ): + """ + Test Case 2: PVP packed ring multi-queues with 4K-pages and dsa dpdk driver + """ + self.use_dsa_list = self.DC.bind_dsa_to_dpdk( + dsa_number=2, driver_name="vfio-pci", socket=self.ports_socket + ) + dmas = ( + "txq0@%s-q0;" + "txq1@%s-q0;" + "txq2@%s-q0;" + "txq3@%s-q0;" + "txq4@%s-q1;" + "txq5@%s-q1;" + "rxq2@%s-q2;" + "rxq3@%s-q2;" + "rxq4@%s-q3;" + "rxq5@%s-q3;" + "rxq6@%s-q3;" + "rxq7@%s-q3" + % ( + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + ) + ) + vhost_eal_param = ( + "--no-huge -m 1024 --vdev 'net_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[%s]'" + % dmas + ) + vhost_param = ( + "--nb-cores=4 --txd=1024 --rxd=1024 --txq=8 --rxq=8 --no-numa --socket-num=%d" + % self.ports_socket + ) + ports = [self.dut.ports_info[0]["pci"]] + for i in self.use_dsa_list: + ports.append(i) + self.start_vhost_user_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + ports=ports, + ) + self.vhost_user_pmd.execute_cmd("set fwd mac") + self.vhost_user_pmd.execute_cmd("start") + + virtio_eal_param = "--no-huge -m 1024 --vdev net_virtio_user0,mac=00:01:02:03:04:05,path=vhost-net0,mrg_rxbuf=1,in_order=1,packed_vq=1,queues=8,server=1" + virtio_param = "--nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024" + self.start_virtio_user0_testpmd( + cores=self.virtio0_core_list, + eal_param=virtio_eal_param, + param=virtio_param, + ) + self.virtio_user0_pmd.execute_cmd("set fwd csum") + self.virtio_user0_pmd.execute_cmd("start") + self.send_imix_and_verify(mode="packed ring inorder mergeable with 4k page") + self.check_packets_of_vhost_each_queue(queues=8) + + self.vhost_user_pmd.execute_cmd("start") + self.send_imix_and_verify( + mode="packed ring inorder mergeable with 4k page restart vhost" + ) + + self.vhost_user_pmd.quit() + dmas = ( + "txq0@%s-q0;" + "txq1@%s-q0;" + "txq2@%s-q1;" + "txq3@%s-q1;" + "txq4@%s-q2;" + "txq5@%s-q2;" + "txq6@%s-q3;" + "txq7@%s-q3;" + "rxq0@%s-q0;" + "rxq1@%s-q0;" + "rxq2@%s-q1;" + "rxq3@%s-q1;" + "rxq4@%s-q2;" + "rxq5@%s-q2;" + "rxq6@%s-q3;" + "rxq7@%s-q3" + % ( + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[1], + self.use_dsa_list[1], + self.use_dsa_list[1], + self.use_dsa_list[1], + self.use_dsa_list[1], + self.use_dsa_list[1], + self.use_dsa_list[1], + self.use_dsa_list[1], + ) + ) + vhost_eal_param = ( + "--vdev 'net_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[%s]'" % dmas + ) + vhost_param = "--nb-cores=4 --txd=1024 --rxd=1024 --txq=8 --rxq=8" + ports = [self.dut.ports_info[0]["pci"]] + for i in self.use_dsa_list: + ports.append(i) + port_options = { + self.use_dsa_list[0]: "max_queues=4", + self.use_dsa_list[1]: "max_queues=4", + } + self.start_vhost_user_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + ports=ports, + port_options=port_options, + ) + 
self.vhost_user_pmd.execute_cmd("set fwd mac") + self.vhost_user_pmd.execute_cmd("start") + self.send_imix_and_verify(mode="packed ring inorder mergeable with 1G page") + self.check_packets_of_vhost_each_queue(queues=8) + + self.vhost_user_pmd.execute_cmd("start") + self.send_imix_and_verify( + mode="packed ring inorder mergeable with 1G page restart vhost" + ) + + self.virtio_user0_pmd.quit() + virtio_eal_param = "--no-huge -m 1024 --vdev net_virtio_user0,mac=00:01:02:03:04:05,path=vhost-net0,mrg_rxbuf=1,in_order=0,packed_vq=1,queues=8,server=1" + self.start_virtio_user0_testpmd( + cores=self.virtio0_core_list, + eal_param=virtio_eal_param, + param=virtio_param, + ) + self.virtio_user0_pmd.execute_cmd("set fwd csum") + self.virtio_user0_pmd.execute_cmd("start") + self.vhost_user_pmd.execute_cmd("start") + self.send_imix_and_verify(mode="packed ring mergeable with 1G page") + self.check_packets_of_vhost_each_queue(queues=8) + + self.vhost_user_pmd.execute_cmd("start") + self.send_imix_and_verify( + mode="packed ring mergeable with 1G page restart vhost" + ) + self.result_table_print() + + def test_vm2vm_split_ring_vhost_user_virtio_net_4k_pages_and_dsa_dpdk_driver_test_with_tcp_traffic( + self, + ): + """ + Test Case 3: VM2VM split ring vhost-user/virtio-net 4K-pages and dsa dpdk driver test with tcp traffic + """ + self.use_dsa_list = self.DC.bind_dsa_to_dpdk( + dsa_number=1, driver_name="vfio-pci", socket=self.ports_socket + ) + dmas1 = "txq0@%s-q0;" "rxq0@%s-q0" % ( + self.use_dsa_list[0], + self.use_dsa_list[0], + ) + dmas2 = "txq0@%s-q1;" "rxq0@%s-q1" % ( + self.use_dsa_list[0], + self.use_dsa_list[0], + ) + vhost_eal_param = ( + "--no-huge -m 1024 " + + "--vdev 'net_vhost0,iface=vhost-net0,queues=1,tso=1,dmas=[%s]'" % dmas1 + + " --vdev 'net_vhost1,iface=vhost-net1,queues=1,tso=1,dmas=[%s]'" % dmas2 + ) + vhost_param = ( + " --nb-cores=2 --txd=1024 --rxd=1024 --no-numa --socket-num=%d" + % self.ports_socket + ) + port_options = {self.use_dsa_list[0]: "max_queues=2"} + self.start_vhost_user_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + ports=self.use_dsa_list, + port_options=port_options, + ) + self.vhost_user_pmd.execute_cmd("start") + + self.start_vm0(packed=False, queues=1, server=False) + self.start_vm1(packed=False, queues=1, server=False) + self.config_vm_ip() + self.check_ping_between_vms() + self.start_iperf() + self.get_iperf_result() + self.verify_xstats_info_on_vhost() + + self.vm0.stop() + self.vm1.stop() + self.vhost_user_pmd.quit() + + def test_vm2vm_packed_ring_vhost_user_virtio_net_4k_pages_and_dsa_dpdk_driver_test_with_tcp_traffic( + self, + ): + """ + Test Case 4: VM2VM packed ring vhost-user/virtio-net 4K-pages and dsa dpdk driver test with tcp traffic + """ + self.use_dsa_list = self.DC.bind_dsa_to_dpdk( + dsa_number=1, driver_name="vfio-pci", socket=self.ports_socket + ) + dmas1 = "txq0@%s-q0;" "rxq0@%s-q1" % ( + self.use_dsa_list[0], + self.use_dsa_list[0], + ) + dmas2 = "txq0@%s-q0;" "rxq0@%s-q1" % ( + self.use_dsa_list[0], + self.use_dsa_list[0], + ) + vhost_eal_param = ( + "--no-huge -m 1024 " + + "--vdev 'net_vhost0,iface=vhost-net0,queues=1,tso=1,dmas=[%s]'" % dmas1 + + " --vdev 'net_vhost1,iface=vhost-net1,queues=1,tso=1,dmas=[%s]'" % dmas2 + ) + vhost_param = ( + " --nb-cores=2 --txd=1024 --rxd=1024 --no-numa --socket-num=%d" + % self.ports_socket + ) + port_options = {self.use_dsa_list[0]: "max_queues=2"} + self.start_vhost_user_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + 
param=vhost_param, + ports=self.use_dsa_list, + port_options=port_options, + ) + self.vhost_user_pmd.execute_cmd("start") + + self.start_vm0(packed=True, queues=1, server=False) + self.start_vm1(packed=True, queues=1, server=False) + self.config_vm_ip() + self.check_ping_between_vms() + self.start_iperf() + self.get_iperf_result() + self.verify_xstats_info_on_vhost() + + self.vm0.stop() + self.vm1.stop() + self.vhost_user_pmd.quit() + + def test_vm2vm_vhost_virtio_net_split_packed_ring_multi_queues_with_1G_4k_pages_and_dsa_dpdk_driver( + self, + ): + """ + Test Case 5: VM2VM vhost/virtio-net split packed ring multi queues with 1G/4k-pages and dsa dpdk driver + """ + self.use_dsa_list = self.DC.bind_dsa_to_dpdk( + dsa_number=2, driver_name="vfio-pci", socket=self.ports_socket + ) + dmas1 = ( + "txq0@%s-q0;" + "txq1@%s-q0;" + "txq2@%s-q0;" + "txq3@%s-q0;" + "txq4@%s-q1;" + "txq5@%s-q1;" + "rxq2@%s-q2;" + "rxq3@%s-q2;" + "rxq4@%s-q3;" + "rxq5@%s-q3;" + "rxq6@%s-q3;" + "rxq7@%s-q3" + % ( + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + ) + ) + dmas2 = ( + "txq0@%s-q0;" + "txq1@%s-q0;" + "txq2@%s-q0;" + "txq3@%s-q0;" + "txq4@%s-q1;" + "txq5@%s-q1;" + "rxq2@%s-q2;" + "rxq3@%s-q2;" + "rxq4@%s-q3;" + "rxq5@%s-q3;" + "rxq6@%s-q3;" + "rxq7@%s-q3" + % ( + self.use_dsa_list[1], + self.use_dsa_list[1], + self.use_dsa_list[1], + self.use_dsa_list[1], + self.use_dsa_list[1], + self.use_dsa_list[1], + self.use_dsa_list[1], + self.use_dsa_list[1], + self.use_dsa_list[1], + self.use_dsa_list[1], + self.use_dsa_list[1], + self.use_dsa_list[1], + ) + ) + vhost_eal_param = ( + "--vdev 'net_vhost0,iface=vhost-net0,queues=8,dmas=[%s]'" % dmas1 + + " --vdev 'net_vhost1,iface=vhost-net1,queues=8,dmas=[%s]'" % dmas2 + ) + vhost_param = " --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8" + port_options = { + self.use_dsa_list[0]: "max_queues=4", + self.use_dsa_list[1]: "max_queues=4", + } + self.start_vhost_user_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + ports=self.use_dsa_list, + port_options=port_options, + ) + self.vhost_user_pmd.execute_cmd("start") + + self.start_vm0(packed=False, queues=8, server=False) + self.start_vm1(packed=True, queues=8, server=False) + self.config_vm_ip() + self.config_vm_combined(combined=8) + self.check_ping_between_vms() + self.check_scp_file_valid_between_vms() + self.start_iperf() + self.get_iperf_result() + + self.vm0.stop() + self.vm1.stop() + self.vhost_user_pmd.quit() + + def test_vm2vm_vhost_virtio_net_split_ring_multi_queues_with_1G_4k_pages_and_dsa_dpdk_driver( + self, + ): + """ + Test Case 6: VM2VM vhost/virtio-net split ring multi queues with 1G/4k-pages and dsa dpdk driver + """ + self.use_dsa_list = self.DC.bind_dsa_to_dpdk( + dsa_number=2, driver_name="vfio-pci", socket=self.ports_socket + ) + dmas1 = ( + "txq0@%s-q0;" + "txq1@%s-q0;" + "txq2@%s-q0;" + "txq3@%s-q0;" + "txq4@%s-q1;" + "txq5@%s-q1;" + "rxq2@%s-q2;" + "rxq3@%s-q2;" + "rxq4@%s-q3;" + "rxq5@%s-q3;" + "rxq6@%s-q3;" + "rxq7@%s-q3" + % ( + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + ) + 
) + dmas2 = ( + "txq0@%s-q0;" + "txq1@%s-q0;" + "txq2@%s-q0;" + "txq3@%s-q0;" + "txq4@%s-q1;" + "txq5@%s-q1;" + "rxq2@%s-q2;" + "rxq3@%s-q2;" + "rxq4@%s-q3;" + "rxq5@%s-q3;" + "rxq6@%s-q3;" + "rxq7@%s-q3" + % ( + self.use_dsa_list[1], + self.use_dsa_list[1], + self.use_dsa_list[1], + self.use_dsa_list[1], + self.use_dsa_list[1], + self.use_dsa_list[1], + self.use_dsa_list[1], + self.use_dsa_list[1], + self.use_dsa_list[1], + self.use_dsa_list[1], + self.use_dsa_list[1], + self.use_dsa_list[1], + ) + ) + vhost_eal_param = ( + "--vdev 'net_vhost0,iface=vhost-net0,queues=8,client=1,tso=1,dmas=[%s]'" + % dmas1 + + " --vdev 'net_vhost1,iface=vhost-net1,queues=8,client=1,tso=1,dmas=[%s]'" + % dmas2 + ) + vhost_param = " --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8" + port_options = { + self.use_dsa_list[0]: "max_queues=4", + self.use_dsa_list[1]: "max_queues=4", + } + self.start_vhost_user_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + ports=self.use_dsa_list, + port_options=port_options, + ) + self.vhost_user_pmd.execute_cmd("start") + + self.start_vm0(packed=False, queues=8, server=True) + self.start_vm1(packed=False, queues=8, server=True) + self.config_vm_ip() + self.config_vm_combined(combined=8) + self.check_ping_between_vms() + self.check_scp_file_valid_between_vms() + self.start_iperf() + self.get_iperf_result() + + self.vhost_user_pmd.quit() + dmas1 = ( + "txq0@%s-q0;" + "txq1@%s-q1;" + "txq2@%s-q0;" + "txq3@%s-q1;" + "rxq0@%s-q0;" + "rxq1@%s-q1;" + "rxq2@%s-q0;" + "rxq3@%s-q1" + % ( + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[1], + self.use_dsa_list[1], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[1], + self.use_dsa_list[1], + ) + ) + dmas2 = ( + "txq0@%s-q0;" + "txq1@%s-q1;" + "txq2@%s-q0;" + "txq3@%s-q1;" + "rxq0@%s-q0;" + "rxq1@%s-q1;" + "rxq2@%s-q0;" + "rxq3@%s-q1" + % ( + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[1], + self.use_dsa_list[1], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[1], + self.use_dsa_list[1], + ) + ) + vhost_eal_param = ( + "--vdev 'net_vhost0,iface=vhost-net0,queues=8,client=1,tso=1,dmas=[%s]'" + % dmas1 + + " --vdev 'net_vhost1,iface=vhost-net1,queues=8,client=1,tso=1,dmas=[%s]'" + % dmas2 + ) + vhost_param = " --nb-cores=4 --txd=1024 --rxd=1024 --rxq=4 --txq=4" + port_options = { + self.use_dsa_list[0]: "max_queues=2", + self.use_dsa_list[1]: "max_queues=2", + } + self.start_vhost_user_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + ports=self.use_dsa_list, + port_options=port_options, + ) + self.vhost_user_pmd.execute_cmd("start") + self.config_vm_combined(combined=4) + self.check_ping_between_vms() + self.check_scp_file_valid_between_vms() + self.start_iperf() + self.get_iperf_result() + + self.vm0.stop() + self.vm1.stop() + self.vhost_user_pmd.quit() + + def test_perf_pvp_split_ring_multi_queues_with_4k_pages_and_dsa_kernel_driver(self): + """ + Test Case 7: PVP split ring multi-queues with 4K-pages and dsa kernel driver + """ + self.DC.create_work_queue(work_queue_number=4, dsa_index=0) + self.DC.create_work_queue(work_queue_number=4, dsa_index=1) + dmas = ( + "txq0@wq0.0;" + "txq1@wq0.0;" + "txq2@wq0.0;" + "txq3@wq0.0;" + "txq4@wq0.1;" + "txq5@wq0.1;" + "rxq2@wq1.0;" + "rxq3@wq1.0;" + "rxq4@wq1.1;" + "rxq5@wq1.1;" + "rxq6@wq1.1;" + "rxq7@wq1.1" + ) + vhost_eal_param = ( + "--no-huge -m 1024 --vdev 'net_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[%s]'" + % dmas + ) + 
+        vhost_param = (
+            "--nb-cores=4 --txd=1024 --rxd=1024 --txq=8 --rxq=8 --no-numa --socket-num=%d"
+            % self.ports_socket
+        )
+        ports = [self.dut.ports_info[0]["pci"]]
+        self.start_vhost_user_testpmd(
+            cores=self.vhost_core_list,
+            eal_param=vhost_eal_param,
+            param=vhost_param,
+            ports=ports,
+            port_options="",
+        )
+        self.vhost_user_pmd.execute_cmd("set fwd mac")
+        self.vhost_user_pmd.execute_cmd("start")
+
+        virtio_eal_param = "--no-huge -m 1024 --vdev net_virtio_user0,mac=00:01:02:03:04:05,path=vhost-net0,mrg_rxbuf=1,in_order=1,queues=8,server=1"
+        virtio_param = "--nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024"
+        self.start_virtio_user0_testpmd(
+            cores=self.virtio0_core_list,
+            eal_param=virtio_eal_param,
+            param=virtio_param,
+        )
+        self.virtio_user0_pmd.execute_cmd("set fwd csum")
+        self.virtio_user0_pmd.execute_cmd("start")
+        self.send_imix_and_verify(mode="split ring inorder mergeable with 4k page")
+        self.check_packets_of_vhost_each_queue(queues=8)
+
+        self.vhost_user_pmd.execute_cmd("start")
+        self.send_imix_and_verify(
+            mode="split ring inorder mergeable with 4k page restart vhost"
+        )
+
+        self.vhost_user_pmd.quit()
+        dmas = (
+            "txq0@wq0.0;"
+            "txq1@wq0.0;"
+            "txq2@wq0.1;"
+            "txq3@wq0.1;"
+            "txq4@wq0.2;"
+            "txq5@wq0.2;"
+            "txq6@wq0.3;"
+            "txq7@wq0.3;"
+            "rxq0@wq0.0;"
+            "rxq1@wq0.0;"
+            "rxq2@wq0.1;"
+            "rxq3@wq0.1;"
+            "rxq4@wq0.2;"
+            "rxq5@wq0.2;"
+            "rxq6@wq0.3;"
+            "rxq7@wq0.3"
+        )
+        vhost_eal_param = (
+            "--vdev 'net_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[%s]'" % dmas
+        )
+        vhost_param = "--nb-cores=4 --txd=1024 --rxd=1024 --txq=8 --rxq=8"
+        self.start_vhost_user_testpmd(
+            cores=self.vhost_core_list,
+            eal_param=vhost_eal_param,
+            param=vhost_param,
+            ports=ports,
+            port_options="",
+        )
+        self.vhost_user_pmd.execute_cmd("set fwd mac")
+        self.vhost_user_pmd.execute_cmd("start")
+        self.send_imix_and_verify(mode="split ring inorder mergeable with 1G page")
+        self.check_packets_of_vhost_each_queue(queues=8)
+
+        self.vhost_user_pmd.execute_cmd("start")
+        self.send_imix_and_verify(
+            mode="split ring inorder mergeable with 1G page restart vhost"
+        )
+
+        self.virtio_user0_pmd.quit()
+        virtio_eal_param = "--no-huge -m 1024 --vdev net_virtio_user0,mac=00:01:02:03:04:05,path=vhost-net0,mrg_rxbuf=1,in_order=0,queues=8,server=1"
+        self.start_virtio_user0_testpmd(
+            cores=self.virtio0_core_list,
+            eal_param=virtio_eal_param,
+            param=virtio_param,
+        )
+        self.virtio_user0_pmd.execute_cmd("set fwd csum")
+        self.virtio_user0_pmd.execute_cmd("start")
+        self.vhost_user_pmd.execute_cmd("start")
+        self.send_imix_and_verify(mode="split ring mergeable with 1G page")
+        self.check_packets_of_vhost_each_queue(queues=8)
+
+        self.vhost_user_pmd.execute_cmd("start")
+        self.send_imix_and_verify(
+            mode="split ring mergeable with 1G page restart vhost"
+        )
+        self.result_table_print()
+
+    def test_perf_pvp_packed_ring_multi_queues_with_4k_pages_and_dsa_kernel_driver(
+        self,
+    ):
+        """
+        Test Case 8: PVP packed ring multi-queues with 4K-pages and dsa kernel driver
+        """
+        self.DC.create_work_queue(work_queue_number=4, dsa_index=0)
+        self.DC.create_work_queue(work_queue_number=4, dsa_index=1)
+        dmas = (
+            "txq0@wq0.0;"
+            "txq1@wq0.0;"
+            "txq2@wq0.0;"
+            "txq3@wq0.0;"
+            "txq4@wq0.1;"
+            "txq5@wq0.1;"
+            "rxq2@wq1.0;"
+            "rxq3@wq1.0;"
+            "rxq4@wq1.1;"
+            "rxq5@wq1.1;"
+            "rxq6@wq1.1;"
+            "rxq7@wq1.1"
+        )
+        vhost_eal_param = (
+            "--no-huge -m 1024 --vdev 'net_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[%s]'"
+            % dmas
+        )
+        vhost_param = (
+            "--nb-cores=4 --txd=1024 --rxd=1024 --txq=8 --rxq=8 --no-numa --socket-num=%d"
+            % self.ports_socket
+        )
+        ports = [self.dut.ports_info[0]["pci"]]
+        self.start_vhost_user_testpmd(
+            cores=self.vhost_core_list,
+            eal_param=vhost_eal_param,
+            param=vhost_param,
+            ports=ports,
+            port_options="",
+        )
+        self.vhost_user_pmd.execute_cmd("set fwd mac")
+        self.vhost_user_pmd.execute_cmd("start")
+
+        virtio_eal_param = "--no-huge -m 1024 --vdev net_virtio_user0,mac=00:01:02:03:04:05,path=vhost-net0,mrg_rxbuf=1,in_order=1,packed_vq=1,queues=8,server=1"
+        virtio_param = "--nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024"
+        self.start_virtio_user0_testpmd(
+            cores=self.virtio0_core_list,
+            eal_param=virtio_eal_param,
+            param=virtio_param,
+        )
+        self.virtio_user0_pmd.execute_cmd("set fwd csum")
+        self.virtio_user0_pmd.execute_cmd("start")
+        self.send_imix_and_verify(mode="packed ring inorder mergeable with 4k page")
+        self.check_packets_of_vhost_each_queue(queues=8)
+
+        self.vhost_user_pmd.execute_cmd("start")
+        self.send_imix_and_verify(
+            mode="packed ring inorder mergeable with 4k page restart vhost"
+        )
+
+        self.vhost_user_pmd.quit()
+        dmas = (
+            "txq0@wq0.0;"
+            "txq1@wq0.0;"
+            "txq2@wq0.0;"
+            "txq3@wq0.0;"
+            "txq4@wq0.1;"
+            "txq5@wq0.1;"
+            "txq6@wq0.1;"
+            "txq7@wq0.1;"
+            "rxq0@wq0.0;"
+            "rxq1@wq0.0;"
+            "rxq2@wq0.0;"
+            "rxq3@wq0.0;"
+            "rxq4@wq0.1;"
+            "rxq5@wq0.1;"
+            "rxq6@wq0.1;"
+            "rxq7@wq0.1"
+        )
+        vhost_eal_param = (
+            "--vdev 'net_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[%s]'" % dmas
+        )
+        vhost_param = "--nb-cores=4 --txd=1024 --rxd=1024 --txq=8 --rxq=8"
+        self.start_vhost_user_testpmd(
+            cores=self.vhost_core_list,
+            eal_param=vhost_eal_param,
+            param=vhost_param,
+            ports=ports,
+            port_options="",
+        )
+        self.vhost_user_pmd.execute_cmd("set fwd mac")
+        self.vhost_user_pmd.execute_cmd("start")
+        self.send_imix_and_verify(mode="packed ring inorder mergeable with 1G page")
+        self.check_packets_of_vhost_each_queue(queues=8)
+
+        self.vhost_user_pmd.execute_cmd("start")
+        self.send_imix_and_verify(
+            mode="packed ring inorder mergeable with 1G page restart vhost"
+        )
+
+        self.virtio_user0_pmd.quit()
+        virtio_eal_param = "--no-huge -m 1024 --vdev net_virtio_user0,mac=00:01:02:03:04:05,path=vhost-net0,mrg_rxbuf=1,in_order=0,packed_vq=1,queues=8,server=1"
+        self.start_virtio_user0_testpmd(
+            cores=self.virtio0_core_list,
+            eal_param=virtio_eal_param,
+            param=virtio_param,
+        )
+        self.virtio_user0_pmd.execute_cmd("set fwd csum")
+        self.virtio_user0_pmd.execute_cmd("start")
+        self.vhost_user_pmd.execute_cmd("start")
+        self.send_imix_and_verify(mode="packed ring mergeable with 1G page")
+        self.check_packets_of_vhost_each_queue(queues=8)
+
+        self.vhost_user_pmd.execute_cmd("start")
+        self.send_imix_and_verify(
+            mode="packed ring mergeable with 1G page restart vhost"
+        )
+        self.result_table_print()
+
+    def test_vm2vm_split_ring_vhost_user_virtio_net_4k_pages_and_dsa_kernel_driver_test_with_tcp_traffic(
+        self,
+    ):
+        """
+        Test Case 9: VM2VM split ring vhost-user/virtio-net 4K-pages and dsa kernel driver test with tcp traffic
+        """
+        self.DC.create_work_queue(work_queue_number=4, dsa_index=0)
+        dmas1 = "txq0@wq0.0;rxq0@wq0.1"
+        dmas2 = "txq0@wq0.2;rxq0@wq0.3"
+        vhost_eal_param = (
+            "--no-huge -m 1024 "
+            + "--vdev 'net_vhost0,iface=vhost-net0,queues=1,tso=1,dmas=[%s]'" % dmas1
+            + " --vdev 'net_vhost1,iface=vhost-net1,queues=1,tso=1,dmas=[%s]'" % dmas2
+        )
+        vhost_param = (
+            " --nb-cores=2 --txd=1024 --rxd=1024 --no-numa --socket-num=%d"
+            % self.ports_socket
+        )
+        self.start_vhost_user_testpmd(
+            cores=self.vhost_core_list,
+            eal_param=vhost_eal_param,
+            param=vhost_param,
+            no_pci=True,
+        )
+        self.vhost_user_pmd.execute_cmd("start")
+
+        self.start_vm0(packed=False, queues=1, server=False)
+        self.start_vm1(packed=False, queues=1, server=False)
+        self.config_vm_ip()
+        self.check_ping_between_vms()
+        self.start_iperf()
+        self.get_iperf_result()
+        self.verify_xstats_info_on_vhost()
+
+        self.vm0.stop()
+        self.vm1.stop()
+        self.vhost_user_pmd.quit()
+
+    def test_vm2vm_packed_ring_vhost_user_virtio_net_4k_pages_and_dsa_kernel_driver_test_with_tcp_traffic(
+        self,
+    ):
+        """
+        Test Case 10: VM2VM packed ring vhost-user/virtio-net 4K-pages and dsa kernel driver test with tcp traffic
+        """
+        self.DC.create_work_queue(work_queue_number=2, dsa_index=0)
+        dmas1 = "txq0@wq0.0;rxq0@wq0.0"
+        dmas2 = "txq0@wq0.1;rxq0@wq0.1"
+        vhost_eal_param = (
+            "--no-huge -m 1024 "
+            + "--vdev 'net_vhost0,iface=vhost-net0,queues=1,tso=1,dmas=[%s]'" % dmas1
+            + " --vdev 'net_vhost1,iface=vhost-net1,queues=1,tso=1,dmas=[%s]'" % dmas2
+        )
+        vhost_param = (
+            " --nb-cores=2 --txd=1024 --rxd=1024 --no-numa --socket-num=%d"
+            % self.ports_socket
+        )
+        self.start_vhost_user_testpmd(
+            cores=self.vhost_core_list,
+            eal_param=vhost_eal_param,
+            param=vhost_param,
+            no_pci=True,
+        )
+        self.vhost_user_pmd.execute_cmd("start")
+
+        self.start_vm0(packed=True, queues=1, server=False)
+        self.start_vm1(packed=True, queues=1, server=False)
+        self.config_vm_ip()
+        self.check_ping_between_vms()
+        self.start_iperf()
+        self.get_iperf_result()
+        self.verify_xstats_info_on_vhost()
+
+        self.vm0.stop()
+        self.vm1.stop()
+        self.vhost_user_pmd.quit()
+
+    def test_vm2vm_vhost_virtio_net_split_packed_ring_multi_queues_with_1G_4k_pages_and_dsa_kernel_driver(
+        self,
+    ):
+        """
+        Test Case 11: VM2VM vhost/virtio-net split packed ring multi queues with 1G/4k-pages and dsa kernel driver
+        """
+        self.DC.create_work_queue(work_queue_number=4, dsa_index=0)
+        self.DC.create_work_queue(work_queue_number=4, dsa_index=1)
+        dmas1 = (
+            "txq0@wq0.0;"
+            "txq1@wq0.0;"
+            "txq2@wq0.0;"
+            "txq3@wq0.0;"
+            "txq4@wq0.1;"
+            "txq5@wq0.1;"
+            "rxq2@wq1.0;"
+            "rxq3@wq1.0;"
+            "rxq4@wq1.1;"
+            "rxq5@wq1.1;"
+            "rxq6@wq1.1;"
+            "rxq7@wq1.1"
+        )
+        dmas2 = (
+            "txq0@wq0.2;"
+            "txq1@wq0.2;"
+            "txq2@wq0.2;"
+            "txq3@wq0.2;"
+            "txq4@wq0.3;"
+            "txq5@wq0.3;"
+            "rxq2@wq1.2;"
+            "rxq3@wq1.2;"
+            "rxq4@wq1.3;"
+            "rxq5@wq1.3;"
+            "rxq6@wq1.3;"
+            "rxq7@wq1.3"
+        )
+        vhost_eal_param = (
+            "--vdev 'net_vhost0,iface=vhost-net0,queues=8,dmas=[%s]'" % dmas1
+            + " --vdev 'net_vhost1,iface=vhost-net1,queues=8,dmas=[%s]'" % dmas2
+        )
+        vhost_param = " --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8"
+        self.start_vhost_user_testpmd(
+            cores=self.vhost_core_list,
+            eal_param=vhost_eal_param,
+            param=vhost_param,
+            no_pci=True,
+        )
+        self.vhost_user_pmd.execute_cmd("start")
+
+        self.start_vm0(packed=False, queues=8, server=False)
+        self.start_vm1(packed=True, queues=8, server=False)
+        self.config_vm_ip()
+        self.config_vm_combined(combined=8)
+        self.check_ping_between_vms()
+        self.check_scp_file_valid_between_vms()
+        self.start_iperf()
+        self.get_iperf_result()
+
+        self.vm0.stop()
+        self.vm1.stop()
+        self.vhost_user_pmd.quit()
+
+    def test_vm2vm_vhost_virtio_net_split_ring_multi_queues_with_1G_4k_pages_and_dsa_kernel_driver(
+        self,
+    ):
+        """
+        Test Case 12: VM2VM vhost/virtio-net split ring multi queues with 1G/4k-pages and dsa kernel driver
+        """
+        self.DC.create_work_queue(work_queue_number=4, dsa_index=0)
+        self.DC.create_work_queue(work_queue_number=4, dsa_index=1)
+        dmas1 = (
+            "txq0@wq0.0;"
+            "txq1@wq0.0;"
+            "txq2@wq0.0;"
+            "txq3@wq0.0;"
+            "txq4@wq0.1;"
+            "txq5@wq0.1;"
+            "rxq2@wq1.0;"
+            "rxq3@wq1.0;"
+            "rxq4@wq1.1;"
+            "rxq5@wq1.1;"
+            "rxq6@wq1.1;"
+            "rxq7@wq1.1"
+        )
+        dmas2 = (
+            "txq0@wq0.0;"
+            "txq1@wq0.0;"
+            "txq2@wq0.0;"
+            "txq3@wq0.0;"
+            "txq4@wq0.1;"
+            "txq5@wq0.1;"
+            "rxq2@wq1.0;"
+            "rxq3@wq1.0;"
+            "rxq4@wq1.1;"
+            "rxq5@wq1.1;"
+            "rxq6@wq1.1;"
+            "rxq7@wq1.1"
+        )
+        vhost_eal_param = (
+            "--vdev 'net_vhost0,iface=vhost-net0,queues=8,client=1,tso=1,dmas=[%s]'"
+            % dmas1
+            + " --vdev 'net_vhost1,iface=vhost-net1,queues=8,client=1,tso=1,dmas=[%s]'"
+            % dmas2
+        )
+        vhost_param = " --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8"
+        self.start_vhost_user_testpmd(
+            cores=self.vhost_core_list,
+            eal_param=vhost_eal_param,
+            param=vhost_param,
+            no_pci=True,
+        )
+        self.vhost_user_pmd.execute_cmd("start")
+
+        self.start_vm0(packed=False, queues=8, server=True)
+        self.start_vm1(packed=False, queues=8, server=True)
+        self.config_vm_ip()
+        self.config_vm_combined(combined=8)
+        self.check_ping_between_vms()
+        self.check_scp_file_valid_between_vms()
+        self.start_iperf()
+        self.get_iperf_result()
+
+        self.vhost_user_pmd.quit()
+        dmas1 = (
+            "txq0@wq0.0;"
+            "txq1@wq0.1;"
+            "txq2@wq0.2;"
+            "txq3@wq0.3;"
+            "rxq0@wq0.0;"
+            "rxq1@wq0.1;"
+            "rxq2@wq0.2;"
+            "rxq3@wq0.3"
+        )
+        dmas2 = (
+            "txq0@wq0.0;"
+            "txq1@wq0.1;"
+            "txq2@wq0.2;"
+            "txq3@wq0.3;"
+            "rxq0@wq0.0;"
+            "rxq1@wq0.1;"
+            "rxq2@wq0.2;"
+            "rxq3@wq0.3"
+        )
+        vhost_eal_param = (
+            "--vdev 'net_vhost0,iface=vhost-net0,queues=8,client=1,tso=1,dmas=[%s]'"
+            % dmas1
+            + " --vdev 'net_vhost1,iface=vhost-net1,queues=8,client=1,tso=1,dmas=[%s]'"
+            % dmas2
+        )
+        vhost_param = " --nb-cores=4 --txd=1024 --rxd=1024 --rxq=4 --txq=4"
+        self.start_vhost_user_testpmd(
+            cores=self.vhost_core_list,
+            eal_param=vhost_eal_param,
+            param=vhost_param,
+            no_pci=True,
+        )
+        self.vhost_user_pmd.execute_cmd("start")
+        self.config_vm_combined(combined=4)
+        self.check_ping_between_vms()
+        self.check_scp_file_valid_between_vms()
+        self.start_iperf()
+        self.get_iperf_result()
+
+        self.vm0.stop()
+        self.vm1.stop()
+        self.vhost_user_pmd.quit()
+
+    def tear_down(self):
+        """
+        Run after each test case.
+        """
+        self.virtio_user0_pmd.quit()
+        self.vhost_user_pmd.quit()
+        self.dut.send_expect("killall -s INT %s" % self.testpmd_name, "# ")
+        self.dut.send_expect("killall -s INT qemu-system-x86_64", "#")
+        self.dut.send_expect("rm -rf /tmp/vhost-net*", "# ")
+        self.DC.reset_all_work_queue()
+        self.DC.bind_all_dsa_to_kernel()
+
+    def tear_down_all(self):
+        """
+        Run after each test suite.
+        """
+        self.umount_tmpfs_for_4k()
+        self.dut.close_session(self.vhost_user)
+        self.dut.close_session(self.virtio_user0)

From patchwork Wed Nov 30 06:17:49 2022
X-Patchwork-Submitter: "Ling, WeiX"
X-Patchwork-Id: 120327
From: Wei Ling
To: dts@dpdk.org
Cc: Wei Ling
Subject: [dts][PATCH V2 3/3] conf/basic_4k_pages_dsa: add testsuite config file
Date: Wed, 30 Nov 2022 14:17:49 +0800
X-Mailer: git-send-email 2.25.1
List-Id: test suite reviews and discussions

Add the testsuite config file basic_4k_pages_dsa.cfg.
Signed-off-by: Wei Ling
---
 conf/basic_4k_pages_dsa.cfg | 36 ++++++++++++++++++++++++++++++++++++
 1 file changed, 36 insertions(+)
 create mode 100644 conf/basic_4k_pages_dsa.cfg

diff --git a/conf/basic_4k_pages_dsa.cfg b/conf/basic_4k_pages_dsa.cfg
new file mode 100644
index 00000000..ed905c2c
--- /dev/null
+++ b/conf/basic_4k_pages_dsa.cfg
@@ -0,0 +1,36 @@
+[vm0]
+cpu =
+    model=host,number=8,cpupin=20 21 22 23 24 25 26 27;
+mem =
+    size=4096,hugepage=yes;
+disk =
+    file=/home/image/ubuntu2004.img;
+login =
+    user=root,password=tester;
+vnc =
+    displayNum=4;
+net =
+    type=user,opt_vlan=2;
+    type=nic,opt_vlan=2;
+daemon =
+    enable=yes;
+qemu =
+    path=/home/QEMU/qemu-7.1.0/bin/qemu-system-x86_64;
+[vm1]
+cpu =
+    model=host,number=8,cpupin=48 49 50 51 52 53 54 55;
+mem =
+    size=4096,hugepage=yes;
+disk =
+    file=/home/image/ubuntu2004_2.img;
+login =
+    user=root,password=tester;
+net =
+    type=nic,opt_vlan=3;
+    type=user,opt_vlan=3;
+vnc =
+    displayNum=5;
+daemon =
+    enable=yes;
+qemu =
+    path=/home/QEMU/qemu-7.1.0/bin/qemu-system-x86_64;