From patchwork Fri Dec 23 03:11:49 2022
X-Patchwork-Submitter: "Ling, WeiX"
X-Patchwork-Id: 121335
From: Wei Ling
To: dts@dpdk.org
Cc: Wei Ling
Subject: [dts][PATCH V1 1/4] test_plans/index: add vhost_dsa_test_plan
Date: Fri, 23 Dec 2022 11:11:49 +0800
Message-Id: <20221223031149.753360-1-weix.ling@intel.com>
X-Mailer: git-send-email 2.25.1
List-Id: test suite reviews and discussions

Add vhost_dsa_test_plan in test_plans/index.rst.

Signed-off-by: Wei Ling
---
 test_plans/index.rst | 1 +
 1 file changed, 1 insertion(+)

diff --git a/test_plans/index.rst b/test_plans/index.rst
index 914a5aa6..66123e71 100644
--- a/test_plans/index.rst
+++ b/test_plans/index.rst
@@ -148,6 +148,7 @@ The following are the test plans for the DPDK DTS automated test system.
     shutdown_api_test_plan
     speed_capabilities_test_plan
     vhost_cbdma_test_plan
+    vhost_dsa_test_plan
     vhost_user_interrupt_test_plan
     vhost_user_interrupt_cbdma_test_plan
     sriov_kvm_test_plan

From patchwork Fri Dec 23 03:11:57 2022
X-Patchwork-Submitter: "Ling, WeiX"
X-Patchwork-Id: 121336
From: Wei Ling
To: dts@dpdk.org
Cc: Wei Ling
Subject: [dts][PATCH V1 2/4] test_plans/vhost_dsa_test_plan: add new testplan
Date: Fri, 23 Dec 2022 11:11:57 +0800
Message-Id: <20221223031157.753420-1-weix.ling@intel.com>
X-Mailer: git-send-email 2.25.1
List-Id: test suite reviews and discussions

Add the vhost_dsa test plan to test the PVP topology with split ring and
packed ring in all virtio paths with the DSA driver.

Signed-off-by: Wei Ling
---
 test_plans/vhost_dsa_test_plan.rst | 1259 ++++++++++++++++++++++++++++
 1 file changed, 1259 insertions(+)
 create mode 100644 test_plans/vhost_dsa_test_plan.rst

diff --git a/test_plans/vhost_dsa_test_plan.rst b/test_plans/vhost_dsa_test_plan.rst
new file mode 100644
index 00000000..d773016b
--- /dev/null
+++ b/test_plans/vhost_dsa_test_plan.rst
@@ -0,0 +1,1259 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+   Copyright(c) 2022 Intel Corporation
+
+===================================================
+PVP vhost async operation with DSA driver test plan
+===================================================
+
+Description
+===========
+
+This document provides the test plan for testing the vhost asynchronous
+data path with the DSA driver (kernel idxd driver and DPDK vfio-pci driver)
+in the PVP topology environment with testpmd.
+
+DSA is a kind of DMA engine. The vhost asynchronous data path leverages DMA devices
+to offload memory copies from the CPU, and it is implemented in an asynchronous way.
+Both the Linux kernel and DPDK provide a DSA driver (the kernel idxd driver and the DPDK
+vfio-pci driver). No matter which driver is used, the DPDK DMA library is used in the
+data path to offload copies to DSA; the only difference is which driver configures DSA.
+
+The asynchronous data path is enabled per tx/rx queue, and users need
+to specify the DMA device used by each tx/rx queue. Each tx/rx queue
+can only use one DMA device, but one DMA device can be shared
+among multiple tx/rx queues of different vhost PMD ports.
+
+.. note::
+
+   1. When DMA devices are bound to the vfio driver, VA mode is the default and recommended mode. For PA mode,
+      page-by-page mapping may exceed the IOMMU's max capability, so it is better to use 1G guest hugepages.
+   2. A DPDK local patch for the vhost PMD is needed when testing the vhost asynchronous data path with testpmd.
+      This patch enables the asynchronous data path for the vhost PMD. The asynchronous data path is enabled per
+      tx/rx queue, and users need to specify the DMA device used by each tx/rx queue. Each tx/rx queue can only
+      use one DMA device (this is limited by the implementation of the vhost PMD), but one DMA device can be
+      shared among multiple tx/rx queues of different vhost PMD ports.
+
+Two PMD parameters are added:
+
+- dmas: specify the DMA device used by a tx/rx queue
+  (default: no queue enables the asynchronous data path)
+- dma-ring-size: DMA ring size
+  (default: 4096)
+
+Here is an example:
+--vdev 'eth_vhost0,iface=./s0,dmas=[txq0@0000:00.01.0;rxq0@0000:00.01.1],dma-ring-size=4096'
+
+IOMMU impact:
+If the IOMMU is off, idxd can work with iova=pa.
+If the IOMMU is on, the kernel DSA driver can only work with iova=va (it programs the IOMMU); iova=pa cannot be used
+(forwarding does not work because the packet payload is wrong).
+
+Prerequisites
+=============
+Topology
+--------
+   Test flow: TG-->NIC-->Vhost-user-->Virtio-user-->Vhost-user-->NIC-->TG
+
+Hardware
+--------
+   Supported NICs: all
+
+Software
+--------
+   Trex: http://trex-tgn.cisco.com/trex/release/v2.26.tar.gz
+
+General set up
+--------------
+1. Compile DPDK::
+
+    # CC=gcc meson --werror -Denable_kmods=True -Dlibdir=lib -Dexamples=all --default-library=static <dpdk build dir>
+    # ninja -C <dpdk build dir> -j 110
+
+    For example:
+    CC=gcc meson --werror -Denable_kmods=True -Dlibdir=lib -Dexamples=all --default-library=static x86_64-native-linuxapp-gcc
+    ninja -C x86_64-native-linuxapp-gcc -j 110
+
+2. Get the PCI device of the DUT, for example, 0000:6a:00.0 is the NIC port and 0000:6a:01.0 - 0000:f6:01.0 are DSA devices::
+
+    # ./usertools/dpdk-devbind.py -s
+
+    Network devices using kernel driver
+    ===================================
+    0000:6a:00.0 'Ethernet Controller E810-C for QSFP 1592' drv=ice unused=vfio-pci
+
+    DMA devices using kernel driver
+    ===============================
+    0000:6a:01.0 'Device 0b25' drv=idxd unused=vfio-pci
+    0000:6f:01.0 'Device 0b25' drv=idxd unused=vfio-pci
+    0000:74:01.0 'Device 0b25' drv=idxd unused=vfio-pci
+    0000:79:01.0 'Device 0b25' drv=idxd unused=vfio-pci
+    0000:e7:01.0 'Device 0b25' drv=idxd unused=vfio-pci
+    0000:ec:01.0 'Device 0b25' drv=idxd unused=vfio-pci
+    0000:f1:01.0 'Device 0b25' drv=idxd unused=vfio-pci
+    0000:f6:01.0 'Device 0b25' drv=idxd unused=vfio-pci
+
+Test case
+=========
+
+Common steps
+------------
+1. Bind 1 NIC port to vfio-pci::
+
+    # ./usertools/dpdk-devbind.py -b vfio-pci <NIC port pci device id>
+
+    For example:
+    # ./usertools/dpdk-devbind.py -b vfio-pci 0000:6a:00.0
+
+2. Bind DSA devices to the DPDK vfio-pci driver::
+
+    # ./usertools/dpdk-devbind.py -b vfio-pci <DSA device pci device id>
+
+    For example, bind 2 DSA devices to the vfio-pci driver:
+    # ./usertools/dpdk-devbind.py -b vfio-pci 0000:e7:01.0 0000:ec:01.0
+
+.. note::
+
+   One DPDK DSA device can create at most 8 WQs. Below is an example where one and
+   eight WQs are created for the DSA devices 0000:e7:01.0 and 0000:ec:01.0 respectively. The value of "max_queues" can be 1~8:
+   # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 4 -n 4 -a 0000:e7:01.0,max_queues=1 -- -i
+   # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 4 -n 4 -a 0000:ec:01.0,max_queues=8 -- -i
+
+3. Bind DSA devices to the kernel idxd driver, and configure Work Queues (WQs)::
+
+    # ./usertools/dpdk-devbind.py -b idxd <DSA device pci device id>
+    # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q <wq_num> <dsa_idx>
+
+.. note::
+
+   It is better to reset the WQs before operating DSA devices that are bound to the idxd driver:
+   # ./drivers/dma/idxd/dpdk_idxd_cfg.py --reset <dsa_idx>
+   You can check the result with 'ls /dev/dsa'.
+   dsa_idx: Index of the DSA device, where 0<=dsa_idx<=7, corresponding to 0000:6a:01.0 - 0000:f6:01.0
+   wq_num: Number of workqueues per DSA endpoint, where 1<=wq_num<=8
+
+   For example, bind 2 DSA devices to the idxd driver and configure WQs:
+
+   # ./usertools/dpdk-devbind.py -b idxd 6a:01.0 6f:01.0
+   # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 1 0
+   # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 4 1
+   Check the WQs with 'ls /dev/dsa'; you can find "wq0.0 wq1.0 wq1.1 wq1.2 wq1.3"
+
+4. Send tcp imix packets [64,1518] to the NIC by traffic generator::
+
+    The imix packets include the packet sizes [64, 128, 256, 512, 1024, 1518], and the format of the packets is as follows.
+    +-------------+-------------+-------------+-------------+
+    | MAC         | MAC         | IPV4        | IPV4        |
+    | Src address | Dst address | Src address | Dst address |
+    |-------------|-------------|-------------|-------------|
+    | Random MAC  | Virtio mac  | Random IP   | Random IP   |
+    +-------------+-------------+-------------+-------------+
+    All the packets in this test plan use the Virtio mac: 00:11:22:33:44:10.
+
+Test Case 1: PVP split ring vhost async operation test with each tx/rx queue using one DSA dpdk driver channel
+--------------------------------------------------------------------------------------------------------------
+This case tests that split ring in each virtio path with multi-queues works normally when vhost uses the asynchronous
+enqueue and dequeue operations with each tx/rx queue using one DSA dpdk driver channel. Both iova=va and iova=pa modes are tested.
+
+1. Bind 2 DSA devices (f1:01.0, f6:01.0) and one NIC port (6a:00.0) to vfio-pci like common step 1-2::
+
+    # ./usertools/dpdk-devbind.py -b vfio-pci f1:01.0 f6:01.0 6a:00.0
+
+2. Launch vhost by below command::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-6 --file-prefix=vhost -a 0000:6a:00.0 -a 0000:f1:01.0,max_queues=4 \
+    --vdev 'net_vhost0,iface=/tmp/s0,queues=2,dmas=[txq0@0000:f1:01.0-q0;txq1@0000:f1:01.0-q1;rxq0@0000:f1:01.0-q2;rxq1@0000:f1:01.0-q3]' \
+    --iova=va -- -i --nb-cores=1 --txq=2 --rxq=2 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+3. Launch virtio-user with inorder mergeable path::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-14 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=1,queues=2 \
+    -- -i --nb-cores=1 --txq=2 --rxq=2 --txd=1024 --rxd=1024
+    testpmd>set fwd csum
+    testpmd>start
+
+4. Send tcp imix packets [64,1518] from packet generator, check the throughput can get expected data::
+
+    testpmd>show port stats all
+
+5. Stop vhost port, check that there are packets in both directions of RX and TX in each queue from vhost log::
+
+    testpmd>stop
+
+6. Restart vhost port and send imix packets again, then check the throughput can get expected data::
+
+    testpmd>start
+    testpmd>show port stats all
+
+7. Relaunch virtio-user with mergeable path, then repeat step 4-6::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-14 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=0,queues=2 \
+    -- -i --nb-cores=1 --txq=2 --rxq=2 --txd=1024 --rxd=1024
+    testpmd>set fwd csum
+    testpmd>start
+
+8. Relaunch virtio-user with inorder non-mergeable path, then repeat step 4-6::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-14 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=1,queues=2 \
+    -- -i --nb-cores=1 --txq=2 --rxq=2 --txd=1024 --rxd=1024
+    testpmd>set fwd csum
+    testpmd>start
+
+9. Relaunch virtio-user with non-mergeable path, then repeat step 4-6::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-14 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=0,queues=2 \
+    -- -i --enable-hw-vlan-strip --nb-cores=1 --txq=2 --rxq=2 --txd=1024 --rxd=1024
+    testpmd>set fwd csum
+    testpmd>start
+
+10. Relaunch virtio-user with vector_rx path, then repeat step 4-6::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-14 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=0,queues=2,vectorized=1 \
+    -- -i --nb-cores=1 --txq=2 --rxq=2 --txd=1024 --rxd=1024
+    testpmd>set fwd csum
+    testpmd>start
+
+11. Quit all testpmd and relaunch vhost with pa mode by below command::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-6 --file-prefix=vhost -a 0000:6a:00.0 -a 0000:f1:01.0,max_queues=2 -a 0000:f6:01.0,max_queues=4 \
+    --vdev 'net_vhost0,iface=/tmp/s0,queues=2,dmas=[txq0@0000:f1:01.0-q0;txq1@0000:f1:01.0-q1;rxq0@0000:f6:01.0-q0;rxq1@0000:f6:01.0-q1]' \
+    --iova=pa -- -i --nb-cores=2 --txq=2 --rxq=2 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+12. Rerun step 3-6.
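+
+.. note::
+
+   The TCP imix flows described in common step 4 (random source MAC/IP, fixed virtio destination MAC
+   00:11:22:33:44:10, frame sizes 64~1518 bytes) are produced by the external traffic generator (TRex,
+   per the Software section). The Scapy sketch below only illustrates that packet format; it is not part
+   of the DTS suite, and the pcap file name is an assumption::
+
+      from scapy.all import Ether, IP, TCP, Raw, RandMAC, RandIP, wrpcap
+
+      VIRTIO_MAC = "00:11:22:33:44:10"              # fixed virtio destination MAC from common step 4
+      IMIX_SIZES = [64, 128, 256, 512, 1024, 1518]  # imix frame sizes from common step 4
+
+      pkts = []
+      for size in IMIX_SIZES:
+          base = Ether(src=RandMAC(), dst=VIRTIO_MAC) / IP(src=RandIP(), dst=RandIP()) / TCP()
+          pad = max(0, size - len(base))            # pad each frame up to its target size
+          pkts.append(base / Raw(b"\x00" * pad))
+
+      wrpcap("imix_tcp.pcap", pkts)                 # e.g. replay this pcap from the traffic generator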
+ +Test Case 2: PVP split ring vhost async operations test with one DSA dpdk driver channel being shared among multiple tx/rx queues +--------------------------------------------------------------------------------------------------------------------------------- +This case tests split ring in each virtio path with multi-queues can work normally when vhost uses the asynchronous enqueue and dequeue operations with +one DSA dpdk driver channel being shared among multiple tx/rx queues. Both iova as VA and PA mode have been tested. + +1. Bind 2 DSA device and one nic port to vfio-pci like comon step 1-2:: + + # ./usertools/dpdk-devbind.py -b vfio-pci 6a:00.0 f1:01.0 f6:01.0 + +2. Launch vhost by below command:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:6a:00.0 -a 0000:f1:01.0,max_queues=2 \ + --vdev 'net_vhost0,iface=/tmp/s0,queues=8,dmas=[txq0@0000:f1:01.0-q0;txq1@0000:f1:01.0-q0;txq2@0000:f1:01.0-q0;txq3@0000:f1:01.0-q0;txq4@0000:f1:01.0-q1;txq5@0000:f1:01.0-q1;txq6@0000:f1:01.0-q1;txq7@0000:f1:01.0-q1;rxq0@0000:f1:01.0-q0;rxq1@0000:f1:01.0-q0;rxq2@0000:f1:01.0-q0;rxq3@0000:f1:01.0-q0;rxq4@0000:f1:01.0-q1;rxq5@0000:f1:01.0-q1;rxq6@0000:f1:01.0-q1;rxq7@0000:f1:01.0-q1]' \ + --iova=va -- -i --nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024 + testpmd>set fwd mac + testpmd>start + +3. Launch virtio-user with inorder mergeable path:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-6 --no-pci --file-prefix=virtio \ + --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=1,queues=8 \ + -- -i --nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024 + testpmd>set fwd csum + testpmd>start + +4. Send tcp imix packets [64,1518] from packet generator, check the throughput can get expected data:: + + testpmd>show port stats all + +5. Stop vhost port, check that there are packets in both directions of RX and TX in each queue from vhost log:: + + testpmd>stop + +6. Restart vhost port and send imix packets again, then check the throuhput can get expected data:: + + testpmd>start + testpmd>show port stats all + +7. Relaunch virtio-user with mergeable path, then repeat step 4-6:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-6 --no-pci --file-prefix=virtio \ + --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=0,queues=8 \ + -- -i --nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024 + testpmd>set fwd csum + testpmd>start + +8. Relaunch virtio-user with inorder non-mergeable path, then repeat step 4-6:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-6 --no-pci --file-prefix=virtio \ + --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=1,queues=8 \ + -- -i --nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024 + testpmd>set fwd csum + testpmd>start + +9. Relaunch virtio-user with non-mergeable path, then repeat step 4-6:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-6 --no-pci --file-prefix=virtio \ + --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=0,queues=8 \ + -- -i --enable-hw-vlan-strip --nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024 + testpmd>set fwd csum + testpmd>start + +10. 
Relaunch virtio-user with vector_rx path, then repeat step 4-6:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-6 --no-pci --file-prefix=virtio \ + --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=0,queues=8,vectorized=1 \ + -- -i --nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024 + testpmd>set fwd csum + testpmd>start + +11. Quit all testpmd and relaunch vhost by below command:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:6a:00.0 -a 0000:f1:01.0,max_queues=8 \ + --vdev 'net_vhost0,iface=/tmp/s0,queues=8,dmas=[txq0@0000:f1:01.0-q0;txq1@0000:f1:01.0-q1;txq2@0000:f1:01.0-q2;txq3@0000:f1:01.0-q3;txq4@0000:f1:01.0-q4;txq5@0000:f1:01.0-q5;txq6@0000:f1:01.0-q6;txq7@0000:f1:01.0-q7;rxq0@0000:f1:01.0-q0;rxq1@0000:f1:01.0-q1;rxq2@0000:f1:01.0-q2;rxq3@0000:f1:01.0-q3;rxq4@0000:f1:01.0-q4;rxq5@0000:f1:01.0-q5;rxq6@0000:f1:01.0-q6;rxq7@0000:f1:01.0-q7]' \ + --iova=pa -- -i --nb-cores=6 --txq=8 --rxq=8 --txd=1024 --rxd=1024 + testpmd>set fwd mac + testpmd>start + +12. Rerun step 7. + +13. Quit all testpmd and relaunch vhost with pa mode by below command:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:6a:00.0 -a 0000:f1:01.0,max_queues=1 \ + --vdev 'net_vhost0,iface=/tmp/s0,queues=8,dmas=[txq0@0000:f1:01.0-q0;txq1@0000:f1:01.0-q0;txq2@0000:f1:01.0-q0;txq3@0000:f1:01.0-q0;txq4@0000:f1:01.0-q0;txq5@0000:f1:01.0-q0;txq6@0000:f1:01.0-q0;txq7@0000:f1:01.0-q0;rxq0@0000:f1:01.0-q0;rxq1@0000:f1:01.0-q0;rxq2@0000:f1:01.0-q0;rxq3@0000:f1:01.0-q0;rxq4@0000:f1:01.0-q0;rxq5@0000:f1:01.0-q0;rxq6@0000:f1:01.0-q0;rxq7@0000:f1:01.0-q0]' \ + --iova=pa -- -i --nb-cores=6 --txq=8 --rxq=8 --txd=1024 --rxd=1024 + testpmd>set fwd mac + testpmd>start + +14. Rerun step 8. + +Test Case 3: PVP split ring dynamic queues vhost async operation with dsa dpdk driver channels +---------------------------------------------------------------------------------------------- +This case tests if the vhost-user async operation with dsa dpdk driver can work normally when the queue number of split ring dynamic change. Both iova as VA and PA mode have been tested. + +1. Bind 2 DSA devices and 1 NIC port to vfio-pci like common step 1-2:: + + # ./usertools/dpdk-devbind.py -b vfio-pci 0000:6a:00.0 + # ./usertools/dpdk-devbind.py -b vfio-pci f1:01.0 f6:01.0 + +2. Launch vhost by below command:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 --file-prefix=vhost -a 0000:6a:00.0 -a 0000:f1:01.0,max_queues=4 \ + --vdev 'net_vhost0,iface=/tmp/vhost-net0,queues=8,client=1,dmas=[txq0@0000:f1:01.0-q0;txq1@0000:f1:01.0-q1;txq2@0000:f1:01.0-q2;txq3@0000:f1:01.0-q2]' \ + --iova=va -- -i --nb-cores=4 --txq=4 --rxq=4 --txd=1024 --rxd=1024 + testpmd>set fwd mac + testpmd>start + +3. Launch virtio-user by below command:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 1-9 --no-pci --file-prefix=virtio \ + --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/vhost-net0,mrg_rxbuf=1,in_order=1,queues=8,server=1 \ + -- -i --nb-cores=4 --txd=1024 --rxd=1024 --txq=8 --rxq=8 + testpmd>set fwd csum + testpmd>start + +4. Send tcp imix packets[64,1518] from packet generator with random ip, check perforamnce can get target. + +5. Stop vhost port, check that there are packets in both directions of RX and TX in each queue from vhost log:: + + testpmd>stop + +6. 
Quit and relaunch vhost without dsa::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 -a 0000:18:00.0 \
+    --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1' \
+    --iova=va -- -i --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+7. Rerun step 4-5.
+
+8. Quit and relaunch vhost by below command::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 --file-prefix=vhost -a 0000:6a:00.0 -a 0000:f1:01.0,max_queues=2 \
+    --vdev 'net_vhost0,iface=/tmp/vhost-net0,queues=8,client=1,dmas=[rxq0@0000:f1:01.0-q0;rxq1@0000:f1:01.0-q1;rxq2@0000:f1:01.0-q1;rxq3@0000:f1:01.0-q0]' \
+    --iova=va -- -i --nb-cores=4 --txq=4 --rxq=4 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+9. Rerun step 4-5.
+
+10. Quit and relaunch vhost with diff channel by below command::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 -a 0000:6a:00.0 -a 0000:f1:01.0,max_queues=4 -a 0000:f6:01.0,max_queues=4 \
+    --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0@0000:f1:01.0-q0;txq1@0000:f1:01.0-q0;txq2@0000:f1:01.0-q0;txq3@0000:f1:01.0-q0;txq4@0000:f1:01.0-q1;txq5@0000:f1:01.0-q2;rxq2@0000:f6:01.0-q0;rxq3@0000:f6:01.0-q1;rxq4@0000:f6:01.0-q2;rxq5@0000:f6:01.0-q2;rxq6@0000:f6:01.0-q2;rxq7@0000:f6:01.0-q2]' \
+    --iova=va -- -i --nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+11. Rerun step 4-5.
+
+12. Relaunch virtio-user by below command::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 1-9 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/vhost-net0,mrg_rxbuf=0,in_order=0,queues=8,server=1 \
+    -- -i --nb-cores=4 --txd=1024 --rxd=1024 --txq=8 --rxq=8
+    testpmd>set fwd csum
+    testpmd>start
+
+13. Rerun step 4-5.
+
+14. Quit and relaunch vhost with iova=pa by below command::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 -a 0000:6a:00.0 -a 0000:f1:01.0,max_queues=2 \
+    --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/vhost-net0,queues=8,client=1,dmas=[txq0@0000:f1:01.0-q0;txq1@0000:f1:01.0-q0;txq2@0000:f1:01.0-q0;rxq1@0000:f1:01.0-q1;rxq2@0000:f1:01.0-q1;rxq3@0000:f1:01.0-q1]' \
+    --iova=pa -- -i --nb-cores=4 --txd=1024 --rxd=1024 --txq=4 --rxq=4
+    testpmd>set fwd mac
+    testpmd>start
+
+15. Rerun step 4-5.
+
+Test Case 4: PVP packed ring vhost async operation test with each tx/rx queue using one DSA dpdk driver channel
+---------------------------------------------------------------------------------------------------------------
+This case tests that packed ring in each virtio path with multi-queues works normally when vhost uses the asynchronous
+enqueue and dequeue operations with each tx/rx queue using one DSA dpdk driver channel. Both iova=va and iova=pa modes are tested.
+
+1. Bind 2 DSA devices (f1:01.0, f6:01.0) and one NIC port (6a:00.0) to vfio-pci like common step 1-2::
+
+    # ./usertools/dpdk-devbind.py -b vfio-pci f1:01.0 f6:01.0 6a:00.0
+
+2. Launch vhost by below command::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-14 --file-prefix=vhost -a 0000:6a:00.0 -a 0000:f1:01.0,max_queues=4 \
+    --vdev 'net_vhost0,iface=/tmp/s0,queues=2,dmas=[txq0@0000:f1:01.0-q0;txq1@0000:f1:01.0-q1;rxq0@0000:f1:01.0-q2;rxq1@0000:f1:01.0-q3]' \
+    --iova=va -- -i --nb-cores=1 --txq=2 --rxq=2 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+3. Launch virtio-user with inorder mergeable path::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 1-5 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=1,packed_vq=1,queues=4 \
+    -- -i --nb-cores=1 --txq=2 --rxq=2 --txd=1024 --rxd=1024
+    testpmd>set fwd csum
+    testpmd>start
+
+4. Send tcp imix packets [64,1518] from packet generator, check the throughput can get expected data::
+
+    testpmd>show port stats all
+
+5. Stop vhost port, check that there are packets in both directions of RX and TX in each queue from vhost log::
+
+    testpmd>stop
+
+6. Restart vhost port and send imix packets again, then check the throughput can get expected data::
+
+    testpmd>start
+    testpmd>show port stats all
+
+7. Relaunch virtio-user with mergeable path, then repeat step 4-6::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 1-5 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=0,packed_vq=1,queues=4 \
+    -- -i --nb-cores=1 --txq=2 --rxq=2 --txd=1024 --rxd=1024
+    testpmd>set fwd csum
+    testpmd>start
+
+8. Relaunch virtio-user with inorder non-mergeable path, then repeat step 4-6::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 1-5 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=1,packed_vq=1,queues=4 \
+    -- -i --nb-cores=1 --txq=2 --rxq=2 --txd=1024 --rxd=1024
+    testpmd>set fwd csum
+    testpmd>start
+
+9. Relaunch virtio-user with non-mergeable path, then repeat step 4-6::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 1-5 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=0,packed_vq=1,queues=4 \
+    -- -i --nb-cores=1 --txq=2 --rxq=2 --txd=1024 --rxd=1024
+    testpmd>set fwd csum
+    testpmd>start
+
+10. Relaunch virtio-user with vector_rx path, then repeat step 4-6::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 1-5 --no-pci --file-prefix=virtio --force-max-simd-bitwidth=512 \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=1,packed_vq=1,queues=4,vectorized=1 \
+    -- -i --nb-cores=1 --txq=2 --rxq=2 --txd=1024 --rxd=1024
+    testpmd>set fwd csum
+    testpmd>start
+
+11. Relaunch virtio-user with vector_rx path and a ring size that is not a power of 2, then repeat step 4-6::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 1-5 --no-pci --file-prefix=virtio --force-max-simd-bitwidth=512 \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=1,packed_vq=1,queues=4,vectorized=1,queue_size=1025 \
+    -- -i --nb-cores=1 --txq=2 --rxq=2 --txd=1025 --rxd=1025
+    testpmd>set fwd csum
+    testpmd>start
+
+12. Quit all testpmd and relaunch vhost with pa mode by below command::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-14 --file-prefix=vhost -a 0000:6a:00.0 -a 0000:f1:01.0,max_queues=2 -a 0000:f6:01.0,max_queues=2 \
+    --vdev 'net_vhost0,iface=/tmp/s0,queues=2,dmas=[txq0@0000:f1:01.0-q0;txq1@0000:f1:01.0-q1;rxq0@0000:f6:01.0-q0;rxq1@0000:f6:01.0-q1]' \
+    --iova=pa -- -i --nb-cores=2 --txq=2 --rxq=2 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+13. Rerun step 3-6.
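+
+.. note::
+
+   The dmas lists used in the vhost commands above follow the pattern <queue>@<DMA channel>, with entries
+   joined by ';'. The Python helper below is only an illustrative sketch (it is not part of DTS or testpmd,
+   and the function name is an assumption) showing how such a --vdev argument could be composed; the example
+   mapping mirrors Test Case 4 step 2::
+
+      # Hypothetical helper: compose a vhost --vdev argument with a dmas list.
+      def build_vhost_vdev(iface, queues, dma_map, dma_ring_size=None):
+          # dma_map maps virtqueue names (txq0, rxq1, ...) to DSA channels such as
+          # "0000:f1:01.0-q0" (vfio-pci) or "wq0.0" (kernel idxd).
+          dmas = ";".join(f"{q}@{ch}" for q, ch in dma_map.items())
+          vdev = f"net_vhost0,iface={iface},queues={queues},dmas=[{dmas}]"
+          if dma_ring_size is not None:
+              vdev += f",dma-ring-size={dma_ring_size}"
+          return vdev
+
+      # Mapping from Test Case 4 step 2: each tx/rx queue uses its own DSA channel.
+      print(build_vhost_vdev("/tmp/s0", 2, {
+          "txq0": "0000:f1:01.0-q0", "txq1": "0000:f1:01.0-q1",
+          "rxq0": "0000:f1:01.0-q2", "rxq1": "0000:f1:01.0-q3",
+      }))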
+ +Test Case 5: PVP packed ring vhost async operation test with one DSA dpdk driver channel being shared among multiple tx/rx queues +--------------------------------------------------------------------------------------------------------------------------------- +This case tests packed ring in each virtio path with multi-queues can work normally when vhost uses the asynchronous enqueue and dequeue operations with +one DSA dpdk driver channel being shared among multiple tx/rx queues. Both iova as VA and PA mode have been tested. + +1. Bind 2 DSA device and one nic port to vfio-pci like comon step 1-2:: + + # ./usertools/dpdk-devbind.py -b vfio-pci 6a:00.0 f1:01.0 f6:01.0 + +2. Launch vhost by below command:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-12 --file-prefix=vhost -a 0000:6a:00.0 -a 0000:f1:01.0,max_queues=2 \ + --vdev 'net_vhost0,iface=/tmp/s0,queues=8,dmas=[txq0@0000:f1:01.0-q0;txq1@0000:f1:01.0-q0;txq2@0000:f1:01.0-q0;txq3@0000:f1:01.0-q0;txq4@0000:f1:01.0-q1;txq5@0000:f1:01.0-q1;txq6@0000:f1:01.0-q1;txq7@0000:f1:01.0-q1;rxq0@0000:f1:01.0-q0;rxq1@0000:f1:01.0-q0;rxq2@0000:f1:01.0-q0;rxq3@0000:f1:01.0-q0;rxq4@0000:f1:01.0-q1;rxq5@0000:f1:01.0-q1;rxq6@0000:f1:01.0-q1;rxq7@0000:f1:01.0-q1]' \ + --iova=va -- -i --nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024 + testpmd>set fwd mac + testpmd>start + +3. Launch virtio-user with inorder mergeable path:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-6 --no-pci --file-prefix=virtio \ + --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=1,packed_vq=1,queues=8 \ + -- -i --nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024 + testpmd>set fwd csum + testpmd>start + +4. Send tcp imix packets [64,1518] from packet generator, check the throughput can get expected data:: + + testpmd>show port stats all + +5. Stop vhost port, check that there are packets in both directions of RX and TX in each queue from vhost log:: + + testpmd>stop + +6. Restart vhost port and send imix packets again, then check the throuhput can get expected data:: + + testpmd>start + testpmd>show port stats all + +7. Relaunch virtio-user with mergeable path, then repeat step 4-6:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-6 --no-pci --file-prefix=virtio \ + --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=0,packed_vq=1,queues=8 \ + -- -i --nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024 + testpmd>set fwd csum + testpmd>start + +8. Relaunch virtio-user with inorder non-mergeable path, then repeat step 4-6:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-6 --no-pci --file-prefix=virtio \ + --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=1,packed_vq=1,queues=8 \ + -- -i --nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024 + testpmd>set fwd csum + testpmd>start + +9. Relaunch virtio-user with non-mergeable path, then repeat step 4-6:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-6 --no-pci --file-prefix=virtio \ + --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=0,packed_vq=1,queues=8 \ + -- -i --nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024 + testpmd>set fwd csum + testpmd>start + +10. 
Relaunch virtio-user with vector_rx path, then repeat step 4-6::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-6 --no-pci --file-prefix=virtio --force-max-simd-bitwidth=512 \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=1,packed_vq=1,queues=8,vectorized=1 \
+    -- -i --nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024
+    testpmd>set fwd csum
+    testpmd>start
+
+11. Relaunch virtio-user with vector_rx path and a ring size that is not a power of 2, then repeat step 4-6::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-6 --no-pci --file-prefix=virtio --force-max-simd-bitwidth=512 \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=1,packed_vq=1,queues=8,vectorized=1,queue_size=1025 \
+    -- -i --nb-cores=4 --txq=8 --rxq=8 --txd=1025 --rxd=1025
+    testpmd>set fwd csum
+    testpmd>start
+
+12. Quit all testpmd and relaunch vhost by below command::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:6a:00.0 -a 0000:f1:01.0,max_queues=8 \
+    --vdev 'net_vhost0,iface=/tmp/s0,queues=8,dmas=[txq0@0000:f1:01.0-q0;txq1@0000:f1:01.0-q1;txq2@0000:f1:01.0-q2;txq3@0000:f1:01.0-q3;txq4@0000:f1:01.0-q4;txq5@0000:f1:01.0-q5;txq6@0000:f1:01.0-q6;txq7@0000:f1:01.0-q7;rxq0@0000:f1:01.0-q0;rxq1@0000:f1:01.0-q1;rxq2@0000:f1:01.0-q2;rxq3@0000:f1:01.0-q3;rxq4@0000:f1:01.0-q4;rxq5@0000:f1:01.0-q5;rxq6@0000:f1:01.0-q6;rxq7@0000:f1:01.0-q7]' \
+    --iova=pa -- -i --nb-cores=6 --txq=8 --rxq=8 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+13. Rerun step 7.
+
+14. Quit all testpmd and relaunch vhost with pa mode by below command::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:6a:00.0 -a 0000:f1:01.0,max_queues=1 \
+    --vdev 'net_vhost0,iface=/tmp/s0,queues=8,dmas=[txq0@0000:f1:01.0-q0;txq1@0000:f1:01.0-q0;txq2@0000:f1:01.0-q0;txq3@0000:f1:01.0-q0;txq4@0000:f1:01.0-q0;txq5@0000:f1:01.0-q0;txq6@0000:f1:01.0-q0;txq7@0000:f1:01.0-q0;rxq0@0000:f1:01.0-q0;rxq1@0000:f1:01.0-q0;rxq2@0000:f1:01.0-q0;rxq3@0000:f1:01.0-q0;rxq4@0000:f1:01.0-q0;rxq5@0000:f1:01.0-q0;rxq6@0000:f1:01.0-q0;rxq7@0000:f1:01.0-q0]' \
+    --iova=pa -- -i --nb-cores=2 --txq=8 --rxq=8 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+15. Rerun step 8.
+
+Test Case 6: PVP packed ring dynamic queues vhost async operation with dsa dpdk driver channels
+-----------------------------------------------------------------------------------------------
+This case tests if the vhost-user async operation with dsa dpdk driver can work normally when the queue number of the packed ring changes dynamically. Both iova=va and iova=pa modes are tested.
+
+1. Bind 2 DSA devices and 1 NIC port to vfio-pci like common step 1-2::
+
+    # ./usertools/dpdk-devbind.py -b vfio-pci 0000:6a:00.0
+    # ./usertools/dpdk-devbind.py -b vfio-pci f1:01.0 f6:01.0
+
+2. Launch vhost by below command::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-14 -n 4 --file-prefix=vhost -a 0000:6a:00.0 -a 0000:f1:01.0,max_queues=4 \
+    --vdev 'net_vhost0,iface=/tmp/vhost-net0,queues=8,client=1,dmas=[txq0@0000:f1:01.0-q0;txq1@0000:f1:01.0-q0;txq2@0000:f1:01.0-q1]' \
+    --iova=va -- -i --nb-cores=4 --txq=4 --rxq=4 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+3. Launch virtio-user by below command::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 1-5 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=1,packed_vq=1,queues=8,server=1 \
+    -- -i --nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024
+    testpmd>set fwd csum
+    testpmd>start
+
+4. Send tcp imix packets [64,1518] from packet generator with random ip, check the performance can get the expected target.
+
+5. Stop vhost port, check that there are packets in both directions of RX and TX in each queue from vhost log::
+
+    testpmd>stop
+
+6. Quit and relaunch vhost without dsa::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 -a 0000:18:00.0 \
+    --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1' \
+    --iova=va -- -i --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+7. Rerun step 4-5.
+
+8. Quit and relaunch vhost by below command::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 --file-prefix=vhost -a 0000:6a:00.0 -a 0000:f1:01.0,max_queues=2 \
+    --vdev 'net_vhost0,iface=/tmp/vhost-net0,queues=8,client=1,dmas=[rxq0@0000:f1:01.0-q0;rxq1@0000:f1:01.0-q1;rxq2@0000:f1:01.0-q1]' \
+    --iova=va -- -i --nb-cores=4 --txq=4 --rxq=4 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+9. Rerun step 4-5.
+
+10. Quit and relaunch vhost with diff channel by below command::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 -a 0000:6a:00.0 -a 0000:f1:01.0,max_queues=4 -a 0000:f6:01.0,max_queues=4 \
+    --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0@0000:f1:01.0-q0;txq1@0000:f1:01.0-q0;txq2@0000:f1:01.0-q0;txq3@0000:f1:01.0-q0;txq4@0000:f1:01.0-q1;txq5@0000:f1:01.0-q2;rxq2@0000:f6:01.0-q0;rxq3@0000:f6:01.0-q1;rxq4@0000:f6:01.0-q2;rxq5@0000:f6:01.0-q2;rxq6@0000:f6:01.0-q2;rxq7@0000:f6:01.0-q2]' \
+    --iova=va -- -i --nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+11. Rerun step 4-5.
+
+12. Relaunch virtio-user by below command::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 1-9 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/vhost-net0,mrg_rxbuf=0,in_order=0,packed_vq=1,queues=8,server=1 \
+    -- -i --nb-cores=4 --txd=1024 --rxd=1024 --txq=8 --rxq=8
+    testpmd>set fwd csum
+    testpmd>start
+
+13. Rerun step 4-5.
+
+14. Quit and relaunch vhost with iova=pa by below command::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 -a 0000:6a:00.0 -a 0000:f1:01.0,max_queues=2 \
+    --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/vhost-net0,queues=8,client=1,dmas=[txq0@0000:f1:01.0-q0;txq1@0000:f1:01.0-q0;txq2@0000:f1:01.0-q0;rxq1@0000:f1:01.0-q1;rxq2@0000:f1:01.0-q1;rxq3@0000:f1:01.0-q1]' \
+    --iova=pa -- -i --nb-cores=4 --txd=1024 --rxd=1024 --txq=4 --rxq=4
+    testpmd>set fwd mac
+    testpmd>start
+
+15. Rerun step 4-5.
+
+Test Case 7: PVP split ring vhost async operation test with each tx/rx queue using one DSA kernel driver channel
+----------------------------------------------------------------------------------------------------------------
+This case tests that split ring in each virtio path with multi-queues works normally when vhost uses the asynchronous
+enqueue and dequeue operations with each tx/rx queue using one DSA kernel driver channel.
+
+1. Bind 1 DSA device to idxd driver and one NIC port to vfio-pci like common step 1 and 3::
+
+    # ./usertools/dpdk-devbind.py -b vfio-pci 6a:00.0
+
+    ls /dev/dsa #check wq configure, reset if exist
+    # ./usertools/dpdk-devbind.py -u 6a:01.0
+    # ./usertools/dpdk-devbind.py -b idxd 6a:01.0
+    # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 4 0
+    ls /dev/dsa #check wq configure success
+
+2. Launch vhost by below command::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-14 --file-prefix=vhost -a 0000:6a:00.0 \
+    --vdev 'net_vhost0,iface=/tmp/s0,queues=2,dmas=[txq0@wq0.0;txq1@wq0.1;rxq0@wq0.2;rxq1@wq0.3]' \
+    --iova=va -- -i --nb-cores=1 --txq=2 --rxq=2 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+3. Launch virtio-user with inorder mergeable path::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 1-5 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=1,queues=2 \
+    -- -i --nb-cores=1 --txq=2 --rxq=2 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+4. Send tcp imix packets [64,1518] from packet generator, check the throughput can get expected data::
+
+    testpmd>show port stats all
+
+5. Stop vhost port, check that there are packets in both directions of RX and TX in each queue from vhost log::
+
+    testpmd>stop
+
+6. Restart vhost port and send imix packets again, then check the throughput can get expected data::
+
+    testpmd>start
+    testpmd>show port stats all
+
+7. Relaunch virtio-user with mergeable path, then repeat step 4-6::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 1-5 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=0,queues=2 \
+    -- -i --nb-cores=1 --txq=2 --rxq=2 --txd=1024 --rxd=1024
+    testpmd>set fwd csum
+    testpmd>start
+
+8. Relaunch virtio-user with inorder non-mergeable path, then repeat step 4-6::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 1-5 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=1,queues=2 \
+    -- -i --nb-cores=1 --txq=2 --rxq=2 --txd=1024 --rxd=1024
+    testpmd>set fwd csum
+    testpmd>start
+
+9. Relaunch virtio-user with non-mergeable path, then repeat step 4-6::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 1-5 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=0,queues=2 \
+    -- -i --enable-hw-vlan-strip --nb-cores=1 --txq=2 --rxq=2 --txd=1024 --rxd=1024
+    testpmd>set fwd csum
+    testpmd>start
+
+10. Relaunch virtio-user with vector_rx path, then repeat step 4-6::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 1-5 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=0,vectorized=1,queues=2 \
+    -- -i --nb-cores=1 --txq=2 --rxq=2 --txd=1024 --rxd=1024
+    testpmd>set fwd csum
+    testpmd>start
+
+Test Case 8: PVP split ring all path multi-queues vhost async operation test with one DSA kernel driver channel being shared among multiple tx/rx queues
+--------------------------------------------------------------------------------------------------------------------------------------------------------
+This case tests that split ring in each virtio path with multi-queues works normally when vhost uses the asynchronous
+enqueue and dequeue operations with one DSA kernel driver channel being shared among multiple tx/rx queues.
+
+1.
Bind 1 DSA device to idxd driver and one nic port to vfio-pci like common step 1 and 3:: + + # ./usertools/dpdk-devbind.py -b vfio-pci 6a:00.0 + + ls /dev/dsa #check wq configure, reset if exist + # ./usertools/dpdk-devbind.py -u 6a:01.0 + # ./usertools/dpdk-devbind.py -b idxd 6a:01.0 + ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 4 0 + ls /dev/dsa #check wq configure success + +2. Launch vhost by below command:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:6a:00.0 \ + --vdev 'net_vhost0,iface=/tmp/s0,queues=8,dmas=[txq0@wq0.0;txq1@wq0.0;txq2@wq0.0;txq3@wq0.0;txq4@wq0.1;txq5@wq0.1;txq6@wq0.1;txq7@wq0.1;rxq0@wq0.0;rxq1@wq0.0;rxq2@wq0.0;rxq3@wq0.0;rxq4@wq0.1;rxq5@wq0.1;rxq6@wq0.1;rxq7@wq0.1]' \ + --iova=va -- -i --nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024 + testpmd>set fwd mac + testpmd>start + +3. Launch virtio-user with inorder mergeable path:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-6 --no-pci --file-prefix=virtio \ + --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=1,queues=8 \ + -- -i --nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024 + testpmd>set fwd csum + testpmd>start + +4. Send imix packets [64,1518] from packet generator, check the throughput can get expected data:: + + testpmd>show port stats all + +5. Stop vhost port, check that there are packets in both directions of RX and TX in each queue from vhost log:: + + testpmd>stop + +6. Restart vhost port and send imix packets again, then check the throuhput can get expected data:: + + testpmd>start + testpmd>show port stats all + +7. Relaunch virtio-user with mergeable path, then repeat step 4-6:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-6 --no-pci --file-prefix=virtio \ + --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=0,queues=8 \ + -- -i --nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024 + testpmd>set fwd csum + testpmd>start + +8. Relaunch virtio-user with inorder non-mergeable path, then repeat step 4-6:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-6 --no-pci --file-prefix=virtio \ + --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=1,queues=8 \ + -- -i --nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024 + testpmd>set fwd csum + testpmd>start + +9. Relaunch virtio-user with non-mergeable path, then repeat step 4-6:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-6 --no-pci --file-prefix=virtio \ + --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=0,queues=8 \ + -- -i --enable-hw-vlan-strip --nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024 + testpmd>set fwd csum + testpmd>start + +10. Relaunch virtio-user with vector_rx path, then repeat step 4-6:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-6 --no-pci --file-prefix=virtio \ + --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=0,vectorized=1,queues=8 \ + -- -i --nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024 + testpmd>set fwd csum + testpmd>start + +11. 
Quit all testpmd and relaunch vhost by below command:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:6a:00.0 \ + --vdev 'net_vhost0,iface=/tmp/s0,queues=8,dmas=[txq0@wq0.0;txq1@wq0.0;txq2@wq0.1;txq3@wq0.1;txq4@wq0.2;txq5@wq0.2;txq6@wq0.3;txq7@wq0.3;rxq0@wq0.0;rxq1@wq0.0;rxq2@wq0.1;rxq3@wq0.1;rxq4@wq0.2;rxq5@wq0.2;rxq6@wq0.3;rxq7@wq0.3]' \ + --iova=pa -- -i --nb-cores=6 --txq=8 --rxq=8 --txd=1024 --rxd=1024 + testpmd>set fwd mac + testpmd>start + +12. Rerun step 7. + +Test Case 9: PVP split ring dynamic queues vhost async operation with dsa kernel driver channels +------------------------------------------------------------------------------------------------ +This case tests if the vhost-user async operation with dsa kernel driver can work normally when the queue number of split ring dynamic change. + +1. Bind 2 DSA device to idxd driver and 1 NIC port to vfio-pci like common step 1 and 3:: + + # ./usertools/dpdk-devbind.py -b vfio-pci 6a:00.0 + + ls /dev/dsa #check wq configure, reset if exist + # ./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0 + # ./usertools/dpdk-devbind.py -b idxd 6a:01.0 6f:01.0 + ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 0 + ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 1 + ls /dev/dsa #check wq configure success + +2. Launch vhost by below command:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 --file-prefix=vhost -a 0000:6a:00.0 \ + --vdev 'net_vhost0,iface=/tmp/vhost-net0,queues=8,client=1,dmas=[txq0@wq0.0;txq1@wq0.1;txq2@wq0.2;txq3@wq0.2]' \ + --iova=va -- -i --nb-cores=4 --txq=4 --rxq=4 --txd=1024 --rxd=1024 + testpmd>set fwd mac + testpmd>start + +3. Launch virtio-user by below command:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 1-9 --no-pci --file-prefix=virtio \ + --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/vhost-net0,mrg_rxbuf=1,in_order=1,queues=8,server=1 \ + -- -i --nb-cores=4 --txd=1024 --rxd=1024 --txq=8 --rxq=8 + testpmd>set fwd mac + testpmd>start + +4. Send tcp imix packets[64,1518] from packet generator with random ip, check perforamnce can get target. + +5. Stop vhost port, check that there are packets in both directions of RX and TX in each queue from vhost log:: + + testpmd>stop + +6. Quit and relaunch vhost without dsa:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 -a 0000:18:00.0 \ + --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1' \ + --iova=va -- -i --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024 + testpmd>set fwd mac + testpmd>start + +7. Rerun step 4-5. + +8. Quit and relaunch vhost with diff channel by below command:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 -a 0000:6a:00.0 \ + --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/vhost-net0,queues=8,client=1,dmas=[rxq0@wq0.0;rxq1@wq0.1;rxq2@wq0.1;rxq3@wq0.0]' \ + --iova=va -- -i --nb-cores=4 --txq=4 --rxq=4 --txd=1024 --rxd=1024 + testpmd>set fwd mac + testpmd>start + +9. Rerun step 4-5. + +10. Quit and relaunch vhost with with diff channel by below command:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 -a 0000:6a:00.0 \ + --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0@wq0.0;txq1@wq0.0;txq2@wq0.0;txq3@wq0.0;txq4@wq0.1;txq5@wq0.2;rxq2@wq1.0;rxq3@wq1.1;rxq4@wq1.2;rxq5@wq1.2;rxq6@wq1.2;rxq7@wq1.2]' \ + --iova=va -- -i --nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024 + testpmd>set fwd mac + testpmd>start + +11. Rerun step 4-5. + +12. 
Quit and relaunch vhost with with diff channel by below command:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:6a:00.0 \ + --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0@wq0.0;txq1@wq0.1;txq2@wq0.2;txq3@wq0.3;txq4@wq0.4;txq5@wq0.5;rxq2@wq1.2;rxq3@wq1.3;rxq4@wq1.4;rxq5@wq1.5;rxq6@wq1.6;rxq7@wq1.7]' \ + --iova=va -- -i --nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024 + testpmd>set fwd mac + testpmd>start + +13. Rerun step 4-5. + +14. Relaunch virtio-user by below command:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 1-5 --no-pci --file-prefix=virtio \ + --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/vhost-net0,mrg_rxbuf=0,in_order=0,queues=8,server=1 \ + -- -i --nb-cores=4 --txd=1024 --rxd=1024 --txq=8 --rxq=8 + testpmd>set fwd mac + testpmd>start + +15. Rerun step 4-5. + +Test Case 10: PVP packed ring all path multi-queues vhost async operation test with each tx/rx queue using one DSA kernel driver channel +---------------------------------------------------------------------------------------------------------------------------------------- +This case tests packed ring in each virtio path with multi-queues can work normally when vhost uses the asynchronous enqueue and dequeue operations +with each tx/rx queue using one DSA kernel driver channel. + +1. Bind 2 DSA device to idxd driver and one nic port to vfio-pci like common step 1 and 3:: + + # ./usertools/dpdk-devbind.py -b vfio-pci 6a:00.0 + + #ls /dev/dsa,check wq configure, reset if exist + # ./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0 + # ./usertools/dpdk-devbind.py -b idxd 6a:01.0 6f:01.0 + # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 2 0 + # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 2 1 + ls /dev/dsa #check wq configure success + +2. Launch vhost by below command:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-14 --file-prefix=vhost -a 0000:6a:00.0 \ + --vdev 'net_vhost0,iface=/tmp/s0,queues=2,dmas=[txq0@wq0.0;txq1@wq0.1;rxq0@wq1.0;rxq1@wq1.1]' \ + --iova=va -- -i --nb-cores=1 --txq=2 --rxq=2 --txd=1024 --rxd=1024 + testpmd>set fwd mac + testpmd>start + +3. Launch virtio-user with inorder mergeable path:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 1-5 --no-pci --file-prefix=virtio \ + --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=1,packed_vq=1,queues=2 \ + -- -i --nb-cores=1 --txq=2 --rxq=2 --txd=1024 --rxd=1024 + testpmd>set fwd csum + testpmd>start + +4. Send tcp imix packets [64,1518] from packet generator, check the throughput can get expected data:: + + testpmd>show port stats all + +5. Stop vhost port, check that there are packets in both directions of RX and TX in each queue from vhost log:: + + testpmd>stop + +6. Restart vhost port and send imix packets again, then check the throuhput can get expected data:: + + testpmd>start + testpmd>show port stats all + +7. Relaunch virtio-user with mergeable path, then repeat step 4-6:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 1-5 --no-pci --file-prefix=virtio \ + --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=0,packed_vq=1,queues=2 \ + -- -i --nb-cores=1 --txq=2 --rxq=2 --txd=1024 --rxd=1024 + testpmd>set fwd csum + testpmd>start + +8. 
Relaunch virtio-user with inorder non-mergeable path, then repeat step 4-6::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 1-5 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=1,packed_vq=1,queues=2 \
+    -- -i --nb-cores=1 --txq=2 --rxq=2 --txd=1024 --rxd=1024
+    testpmd>set fwd csum
+    testpmd>start
+
+9. Relaunch virtio-user with non-mergeable path, then repeat step 4-6::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 1-5 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=0,packed_vq=1,queues=2 \
+    -- -i --nb-cores=1 --txq=2 --rxq=2 --txd=1024 --rxd=1024
+    testpmd>set fwd csum
+    testpmd>start
+
+10. Relaunch virtio-user with vector_rx path, then repeat step 4-6::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 1-5 --no-pci --file-prefix=virtio --force-max-simd-bitwidth=512 \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=1,queues=2,packed_vq=1,vectorized=1 \
+    -- -i --nb-cores=1 --txq=2 --rxq=2 --txd=1024 --rxd=1024
+    testpmd>set fwd csum
+    testpmd>start
+
+11. Relaunch virtio-user with vector_rx path and a ring size that is not a power of 2, then repeat step 4-6::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 1-5 --no-pci --file-prefix=virtio --force-max-simd-bitwidth=512 \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=1,queues=2,packed_vq=1,vectorized=1,queue_size=1025 \
+    -- -i --nb-cores=1 --txq=2 --rxq=2 --txd=1025 --rxd=1025
+    testpmd>set fwd csum
+    testpmd>start
+
+Test Case 11: PVP packed ring all path multi-queues vhost async operation test with one DSA kernel driver channel being shared among multiple tx/rx queues
+----------------------------------------------------------------------------------------------------------------------------------------------------------
+This case tests that packed ring in each virtio path with multi-queues works normally when vhost uses the asynchronous
+enqueue and dequeue operations with one DSA kernel driver channel being shared among multiple tx/rx queues.
+
+1. Bind 1 DSA device to idxd driver and one NIC port to vfio-pci like common step 1 and 3::
+
+    # ./usertools/dpdk-devbind.py -b vfio-pci 6a:00.0
+
+    ls /dev/dsa #check wq configure, reset if exist
+    # ./usertools/dpdk-devbind.py -u 6a:01.0
+    # ./usertools/dpdk-devbind.py -b idxd 6a:01.0
+    ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 2 0
+    ls /dev/dsa #check wq configure success
+
+2. Launch vhost by below command::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:6a:00.0 \
+    --vdev 'net_vhost0,iface=/tmp/s0,queues=8,dmas=[txq0@wq0.0;txq1@wq0.0;txq2@wq0.0;txq3@wq0.0;txq4@wq0.1;txq5@wq0.1;txq6@wq0.1;txq7@wq0.1;rxq0@wq0.0;rxq1@wq0.0;rxq2@wq0.0;rxq3@wq0.0;rxq4@wq0.1;rxq5@wq0.1;rxq6@wq0.1;rxq7@wq0.1]' \
+    --iova=va -- -i --nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+3. Launch virtio-user with inorder mergeable path::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-6 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=1,packed_vq=1,queues=8 \
+    -- -i --nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024
+    testpmd>set fwd csum
+    testpmd>start
+
+4. Send tcp imix packets [64,1518] from packet generator, check the throughput can get expected data::
+
+    testpmd>show port stats all
+
+5. Stop vhost port, check that there are packets in both directions of RX and TX in each queue from vhost log::
+
+    testpmd>stop
+
+6. Restart vhost port and send imix packets again, then check the throughput can get expected data::
+
+    testpmd>start
+    testpmd>show port stats all
+
+7. Relaunch virtio-user with mergeable path, then repeat step 4-6::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-6 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=0,packed_vq=1,queues=8 \
+    -- -i --nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024
+    testpmd>set fwd csum
+    testpmd>start
+
+8. Relaunch virtio-user with inorder non-mergeable path, then repeat step 4-6::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-6 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=1,packed_vq=1,queues=8 \
+    -- -i --nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024
+    testpmd>set fwd csum
+    testpmd>start
+
+9. Relaunch virtio-user with non-mergeable path, then repeat step 4-6::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-6 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=0,packed_vq=1,queues=8 \
+    -- -i --nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024
+    testpmd>set fwd csum
+    testpmd>start
+
+10. Relaunch virtio-user with vector_rx path, then repeat step 4-6::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-6 --no-pci --file-prefix=virtio --force-max-simd-bitwidth=512 \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=1,queues=8,packed_vq=1,vectorized=1 \
+    -- -i --nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024
+    testpmd>set fwd csum
+    testpmd>start
+
+11. Relaunch virtio-user with vector_rx path and a ring size that is not a power of 2, then repeat step 4-6::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-6 --no-pci --file-prefix=virtio --force-max-simd-bitwidth=512 \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=1,queues=8,packed_vq=1,vectorized=1,queue_size=1025 \
+    -- -i --nb-cores=4 --txq=8 --rxq=8 --txd=1025 --rxd=1025
+    testpmd>set fwd csum
+    testpmd>start
+
+12. Quit all testpmd and relaunch vhost by below command::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:6a:00.0 \
+    --vdev 'net_vhost0,iface=/tmp/s0,queues=8,dmas=[txq0@wq0.0;txq1@wq0.0;txq2@wq0.0;txq3@wq0.0;txq4@wq0.1;txq5@wq0.1;txq6@wq0.1;txq7@wq0.1;rxq0@wq0.0;rxq1@wq0.0;rxq2@wq0.0;rxq3@wq0.0;rxq4@wq0.1;rxq5@wq0.1;rxq6@wq0.1;rxq7@wq0.1]' \
+    --iova=pa -- -i --nb-cores=6 --txq=8 --rxq=8 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+13. Rerun step 7.
+
+Test Case 12: PVP packed ring dynamic queues vhost async operation with dsa kernel driver channels
+--------------------------------------------------------------------------------------------------
+This case tests if the vhost-user async operation with dsa kernel driver can work normally when the queue number of the packed ring changes dynamically.
+
+1.
+1. Bind 2 DSA devices to idxd driver and 1 NIC port to vfio-pci as in common steps 1 and 3:: + + # ./usertools/dpdk-devbind.py -b vfio-pci 6a:00.0 + + ls /dev/dsa #check wq configuration, reset if any exist + # ./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0 + # ./usertools/dpdk-devbind.py -b idxd 6a:01.0 6f:01.0 + ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 0 + ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 1 + ls /dev/dsa #check wq configuration success + +2. Launch vhost by below command:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 --file-prefix=vhost -a 0000:6a:00.0 \ + --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0@wq0.0;txq1@wq0.1;txq2@wq0.2;txq3@wq0.2]' \ + --iova=va -- -i --nb-cores=4 --txq=4 --rxq=4 --txd=1024 --rxd=1024 + testpmd>set fwd mac + testpmd>start + +3. Launch virtio-user by below command:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 1-5 --no-pci --file-prefix=virtio \ + --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=1,packed_vq=1,queues=8,server=1 \ + -- -i --nb-cores=4 --txd=1024 --rxd=1024 --txq=8 --rxq=8 + testpmd>set fwd mac + testpmd>start + +4. Send TCP imix packets [64,1518] with random IPs from the packet generator and check that the performance reaches the target. + +5. Stop vhost port and check from the vhost log that each queue has packets in both the RX and TX directions:: + + testpmd>stop + +6. Quit and relaunch vhost without dsa:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 -a 0000:18:00.0 \ + --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1' \ + --iova=va -- -i --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024 + testpmd>set fwd mac + testpmd>start + +7. Rerun steps 4-5. + +8. Quit and relaunch vhost with different channels by below command:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 -a 0000:6a:00.0 \ + --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[rxq0@wq0.0;rxq1@wq0.1;rxq2@wq0.1;rxq3@wq0.0]' \ + --iova=va -- -i --nb-cores=4 --txq=4 --rxq=4 --txd=1024 --rxd=1024 + testpmd>set fwd mac + testpmd>start + +9. Rerun steps 4-5. + +10. Quit and relaunch vhost with different channels by below command:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 -a 0000:6a:00.0 \ + --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0@wq0.0;txq1@wq0.0;txq2@wq0.0;txq3@wq0.0;txq4@wq0.1;txq5@wq0.2;rxq2@wq1.0;rxq3@wq1.1;rxq4@wq1.2;rxq5@wq1.2;rxq6@wq1.2;rxq7@wq1.2]' \ + --iova=va -- -i --nb-cores=2 --txq=8 --rxq=8 --txd=1024 --rxd=1024 + testpmd>set fwd mac + testpmd>start + +11. Rerun steps 4-5. + +12. Quit and relaunch vhost with different channels by below command:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:6a:00.0 \ + --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0@wq0.0;txq1@wq0.1;txq2@wq0.2;txq3@wq0.3;txq4@wq0.4;txq5@wq0.5;rxq2@wq1.2;rxq3@wq1.3;rxq4@wq1.4;rxq5@wq1.5;rxq6@wq1.6;rxq7@wq1.7]' \ + --iova=va -- -i --nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024 + testpmd>set fwd mac + testpmd>start + +13. Rerun steps 4-5. + +
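The per-queue check repeated in steps 5, 7, 9, 11 and 13 can be scripted by parsing the output of ``testpmd>stop`` on the vhost side; the sketch below uses the same counters the accompanying test suite greps for, and assumes the output string has been captured from the vhost testpmd session::

    # Illustrative check: every queue should report non-zero RX and TX counters after "stop".
    import re

    def queues_have_traffic(stop_output, queues, port=0):
        for queue in range(queues):
            marker = "Port= %d/Queue= %d" % (port, queue)   # same marker the test suite searches for
            index = stop_output.find(marker)
            if index < 0:
                return False
            section = stop_output[index:]
            rx = re.search(r"RX-packets:\s*(\d+)", section)
            tx = re.search(r"TX-packets:\s*(\d+)", section)
            if not rx or not tx or int(rx.group(1)) == 0 or int(tx.group(1)) == 0:
                return False
        return True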
+14. Quit and relaunch virtio-user:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-6 --no-pci --file-prefix=virtio --force-max-simd-bitwidth=512 \ + --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=1,packed_vq=1,vectorized=1,queues=8,server=1,queue_size=1025 \ + -- -i --nb-cores=4 --txd=1025 --rxd=1025 --txq=8 --rxq=8 + testpmd>set fwd csum + testpmd>start + +15. Rerun steps 4-5. + +Test Case 13: PVP split and packed ring dynamic queues vhost async operation with dsa dpdk and kernel driver channels +--------------------------------------------------------------------------------------------------------------------- +This case tests whether vhost-user async operation with dsa kernel driver and dsa dpdk driver channels works normally when the queue number of the split ring and packed ring changes dynamically. + +1. Bind 2 DSA devices to idxd driver, 2 DSA devices and 1 NIC port to vfio-pci as in common steps 1-3:: + + # ./usertools/dpdk-devbind.py -b vfio-pci 6a:00.0 + + ls /dev/dsa #check wq configuration, reset if any exist + # ./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0 f1:01.0 f6:01.0 + # ./usertools/dpdk-devbind.py -b idxd 6a:01.0 6f:01.0 + # ./usertools/dpdk-devbind.py -b vfio-pci f1:01.0 f6:01.0 + ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 0 + ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 1 + ls /dev/dsa #check wq configuration success + +2. Launch vhost:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:6a:00.0 \ + --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0@wq0.0;txq1@wq0.0]' \ + --iova=va -- -i --nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024 + testpmd>set fwd mac + testpmd>start + +3. Launch virtio-user with split ring mergeable in-order path by below command:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \ + --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=1,queues=8,server=1 \ + -- -i --nb-cores=1 --txd=1024 --rxd=1024 --txq=8 --rxq=8 + testpmd>set fwd csum + testpmd>start + +4. Send TCP imix packets with random IPs from the packet generator and check that the performance reaches the target. + +5. Stop vhost port and check from the vhost log that packets exist in both the RX and TX directions of 2 queues. + +6. Quit and relaunch vhost by below command:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:6a:00.0 -a 0000:f1:01.0,max_queues=2 -a 0000:f6:01.0,max_queues=4 \ + --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0@0000:f1:01.0-q0;txq1@0000:f1:01.0-q0;txq2@0000:f1:01.0-q1;txq3@0000:f1:01.0-q1;rxq0@0000:f6:01.0-q0;rxq1@0000:f6:01.0-q0;rxq2@0000:f6:01.0-q1;rxq3@0000:f6:01.0-q2]' \ + --iova=va -- -i --nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024 + testpmd>set fwd mac + testpmd>start + +7. Send imix packets with random IPs from the packet generator and check that the performance reaches the target. + +8. Stop vhost port and check from the vhost log that packets exist in both the RX and TX directions of 4 queues. + +9. Quit and relaunch vhost by below command:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:6a:00.0 -a 0000:f1:01.0,max_queues=2 -a 0000:f6:01.0,max_queues=4 \ + --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0@wq0.0;txq1@wq0.0;txq2@wq1.0;txq3@wq1.0;txq4@0000:f1:01.0-q0;txq5@0000:f1:01.0-q0;rxq2@wq1.1;rxq3@wq1.1;rxq4@0000:f1:01.0-q1;rxq5@0000:f1:01.0-q1;rxq6@0000:f6:01.0-q0;rxq7@0000:f6:01.0-q1]' \ + --iova=va -- -i --nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024 + testpmd>set fwd mac + testpmd>start + +
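Step 9 above mixes the two DMA channel notations in a single ``dmas`` list: kernel idxd work queues are written ``wq<device>.<wq>``, while dpdk-driver channels are written ``<DSA BDF>-q<n>``. A purely illustrative snippet, with values taken from step 9, that assembles such a mixed binding::

    # Illustrative: combine kernel-driver work queues and dpdk-driver DMA channels in one dmas list.
    kernel_channels = {"txq0": "wq0.0", "txq1": "wq0.0", "txq2": "wq1.0", "txq3": "wq1.0"}
    dpdk_channels = {"txq4": "0000:f1:01.0-q0", "txq5": "0000:f1:01.0-q0",
                     "rxq6": "0000:f6:01.0-q0", "rxq7": "0000:f6:01.0-q1"}
    bindings = {**kernel_channels, **dpdk_channels}      # insertion order is preserved
    dmas = "dmas=[%s]" % ";".join("%s@%s" % (q, ch) for q, ch in bindings.items())
    # -> dmas=[txq0@wq0.0;txq1@wq0.0;txq2@wq1.0;txq3@wq1.0;txq4@0000:f1:01.0-q0;...;rxq7@0000:f6:01.0-q1]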
+10. Send imix packets with random IPs from the packet generator and check that the performance reaches the target. + +11. Stop vhost port and check from the vhost log that packets exist in both the RX and TX directions of 8 queues. + +12. Quit and relaunch virtio-user with packed ring mergeable in-order path by below command:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-6 --no-pci --file-prefix=virtio \ + --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=1,packed_vq=1,queues=8,server=1 \ + -- -i --nb-cores=4 --txd=1024 --rxd=1024 --txq=8 --rxq=8 + testpmd>set fwd mac + testpmd>start + +13. Start vhost port and rerun steps 10-11. + +14. Quit and relaunch vhost with different channels by below command:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:6a:00.0 -a 0000:f1:01.0,max_queues=2 -a 0000:f6:01.0,max_queues=4 \ + --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0@wq0.0;txq1@wq0.1;txq2@wq0.2;txq3@wq0.3;txq4@wq0.4;txq5@wq0.5;txq6@wq0.6;rxq2@0000:f1:01.0-q0;rxq3@0000:f1:01.0-q1;rxq4@0000:f6:01.0-q0;rxq5@0000:f6:01.0-q1;rxq6@0000:f6:01.0-q2;rxq7@0000:f6:01.0-q3]' \ + --iova=va -- -i --nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024 + testpmd>set fwd mac + testpmd>start + +15. Send imix packets with random IPs from the packet generator and check that the performance reaches the target. + +16. Stop vhost port and check from the vhost log that packets exist in both the RX and TX directions of 8 queues. \ No newline at end of file From patchwork Fri Dec 23 03:12:07 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Ling, WeiX" X-Patchwork-Id: 121337 Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 06A24A0093; Fri, 23 Dec 2022 04:20:57 +0100 (CET) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 00E35427EB; Fri, 23 Dec 2022 04:20:57 +0100 (CET) Received: from mga12.intel.com (mga12.intel.com [192.55.52.136]) by mails.dpdk.org (Postfix) with ESMTP id 0369540685 for ; Fri, 23 Dec 2022 04:20:54 +0100 (CET) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1671765655; x=1703301655; h=from:to:cc:subject:date:message-id:mime-version: content-transfer-encoding; bh=kLLUjydC00bQa5SunQelKrHctYNwCJt0xPCWrGpUGYQ=; b=Sx70iTixwLJPNdGhHZY2mcCYlOos8KjqWZzDy9pWSOen9wY2/GfTxCNI spfhr5AvXAdhCkKT8dR7rM672GetkrOK7206/GVskdvXerOUXw5+90h52 eAS6Ni4P+sWBPkeSN5W7dZYZpcEeNMom+nUQSwrh06B9ws50Mmrf5DLBK LqQyU0h0oT4azuNdS1Abe8668+Ol36i5DcJjDmAVzoIhA5UGVvd0OdWz6 WuFcSI42TUXC9k8vUh7d9MiULdDzasML2qkLB3+R+WCA6qnOzrGL1S3bH fhoghz0o0R+WCnzcPdajXK2I4sK66/iC+qGw5Sv7+o0rGED+vLRUKYC+g g==; X-IronPort-AV: E=McAfee;i="6500,9779,10569"; a="299937595" X-IronPort-AV: E=Sophos;i="5.96,267,1665471600"; d="scan'208";a="299937595" Received: from orsmga001.jf.intel.com ([10.7.209.18]) by fmsmga106.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 22 Dec 2022 19:20:53 -0800 X-IronPort-AV: E=McAfee;i="6500,9779,10569"; a="684404562" X-IronPort-AV: E=Sophos;i="5.96,267,1665471600"; d="scan'208";a="684404562" Received: from unknown (HELO localhost.localdomain) ([10.239.252.222]) by orsmga001-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 22 Dec 2022 19:20:51 -0800 From: Wei Ling To: dts@dpdk.org Cc: Wei Ling Subject: [dts][PATCH V1 3/4] tests/vhost_dsa: add new testsuite Date: Fri, 23 Dec 2022 11:12:07 +0800 Message-Id: 
<20221223031207.753486-1-weix.ling@intel.com> X-Mailer: git-send-email 2.25.1 MIME-Version: 1.0 X-BeenThere: dts@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: test suite reviews and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dts-bounces@dpdk.org Add vhost_dsa testsuite to test the PVP topo with split ring and packed ring all path with DSA dirver. Signed-off-by: Wei Ling --- tests/TestSuite_vhost_dsa.py | 2326 ++++++++++++++++++++++++++++++++++ 1 file changed, 2326 insertions(+) create mode 100644 tests/TestSuite_vhost_dsa.py diff --git a/tests/TestSuite_vhost_dsa.py b/tests/TestSuite_vhost_dsa.py new file mode 100644 index 00000000..1477eb6c --- /dev/null +++ b/tests/TestSuite_vhost_dsa.py @@ -0,0 +1,2326 @@ +# SPDX-License-Identifier: BSD-3-Clause +# Copyright(c) 2022 Intel Corporation +# + +import json +import os +import re +from copy import deepcopy + +import framework.rst as rst +from framework.packet import Packet +from framework.pktgen import PacketGeneratorHelper +from framework.pmd_output import PmdOutput +from framework.settings import HEADER_SIZE, UPDATE_EXPECTED, load_global_setting +from framework.test_case import TestCase + +from .virtio_common import dsa_common as DC + +SPLIT_RING_PATH = { + "inorder_mergeable": "mrg_rxbuf=1,in_order=1", + "mergeable": "mrg_rxbuf=1,in_order=0", + "inorder_non_mergeable": "mrg_rxbuf=0,in_order=1", + "non_mergeable": "mrg_rxbuf=0,in_order=0", + "vectorized": "mrg_rxbuf=0,in_order=0,vectorized=1", +} + +PACKED_RING_PATH = { + "inorder_mergeable": "mrg_rxbuf=1,in_order=1,packed_vq=1", + "mergeable": "mrg_rxbuf=1,in_order=0,packed_vq=1", + "inorder_non_mergeable": "mrg_rxbuf=0,in_order=1,packed_vq=1", + "non_mergeable": "mrg_rxbuf=0,in_order=0,packed_vq=1", + "vectorized": "mrg_rxbuf=0,in_order=0,vectorized=1,packed_vq=1", + "vectorized_path_not_power_of_2": "mrg_rxbuf=0,in_order=0,vectorized=1,packed_vq=1,queue_size=1025", +} + + +class TestVhostDsa(TestCase): + def set_up_all(self): + self.dut_ports = self.dut.get_ports() + self.ports_socket = self.dut.get_numa_id(self.dut_ports[0]) + self.number_of_ports = 1 + self.vhost_user = self.dut.new_session(suite="vhost-user") + self.virtio_user = self.dut.new_session(suite="virtio-user") + self.vhost_user_pmd = PmdOutput(self.dut, self.vhost_user) + self.virtio_user_pmd = PmdOutput(self.dut, self.virtio_user) + self.virtio_mac = "00:01:02:03:04:05" + self.headers_size = HEADER_SIZE["eth"] + HEADER_SIZE["ip"] + HEADER_SIZE["tcp"] + self.pci_info = self.dut.ports_info[0]["pci"] + self.ports_socket = self.dut.get_numa_id(self.dut_ports[0]) + self.cores_list = self.dut.get_core_list(config="all", socket=self.ports_socket) + self.vhost_core_list = self.cores_list[0:9] + self.virtio_core_list = self.cores_list[10:15] + self.out_path = "/tmp/%s" % self.suite_name + out = self.tester.send_expect("ls -d %s" % self.out_path, "# ") + if "No such file or directory" in out: + self.tester.send_expect("mkdir -p %s" % self.out_path, "# ") + self.pktgen_helper = PacketGeneratorHelper() + self.base_dir = self.dut.base_dir.replace("~", "/root") + self.testpmd_name = self.dut.apps_name["test-pmd"].split("/")[-1] + self.save_result_flag = True + self.json_obj = {} + self.DC = DC(self) + + def set_up(self): + """ + Run before each test case. 
+ """ + self.table_header = ["Frame", "Mode/RXD-TXD", "Mpps", "% linerate"] + self.result_table_create(self.table_header) + self.test_parameters = self.get_suite_cfg()["test_parameters"] + self.test_duration = self.get_suite_cfg()["test_duration"] + self.throughput = {} + self.gap = self.get_suite_cfg()["accepted_tolerance"] + self.test_result = {} + self.nb_desc = self.test_parameters.get(list(self.test_parameters.keys())[0])[0] + self.dut.send_expect("killall -I %s" % self.testpmd_name, "#", 20) + self.dut.send_expect("rm -rf %s/vhost-net*" % self.base_dir, "#") + self.mode_list = [] + self.DC.reset_all_work_queue() + self.DC.bind_all_dsa_to_kernel() + + def get_vhost_port_num(self): + out = self.vhost_user.send_expect("show port summary all", "testpmd> ", 60) + port_num = re.search("Number of available ports:\s*(\d*)", out) + return int(port_num.group(1)) + + def check_each_queue_of_port_packets(self, queues): + """ + check each queue of each port has receive packets + """ + self.logger.info(self.vhost_user_pmd.execute_cmd("show port stats all")) + out = self.vhost_user_pmd.execute_cmd("stop") + self.logger.info(out) + port_num = self.get_vhost_port_num() + for port in range(port_num): + for queue in range(queues): + if queues == 1: + reg = "Forward statistics for port %d" % port + else: + reg = "Port= %d/Queue= %d" % (port, queue) + index = out.find(reg) + rx = re.search("RX-packets:\s*(\d*)", out[index:]) + tx = re.search("TX-packets:\s*(\d*)", out[index:]) + rx_packets = int(rx.group(1)) + tx_packets = int(tx.group(1)) + self.verify( + rx_packets > 0 and tx_packets > 0, + "The port %d/queue %d rx-packets or tx-packets is 0 about " + % (port, queue) + + "rx-packets: %d, tx-packets: %d" % (rx_packets, tx_packets), + ) + self.vhost_user_pmd.execute_cmd("start") + + @property + def check_2M_env(self): + out = self.dut.send_expect( + "cat /proc/meminfo |grep Hugepagesize|awk '{print($2)}'", "# " + ) + return True if out == "2048" else False + + def start_vhost_testpmd( + self, + cores="Default", + param="", + eal_param="", + ports="", + port_options="", + iova_mode="va", + ): + eal_param += " --iova=" + iova_mode + if port_options != "": + self.vhost_user_pmd.start_testpmd( + cores=cores, + param=param, + eal_param=eal_param, + ports=ports, + port_options=port_options, + prefix="vhost", + ) + else: + self.vhost_user_pmd.start_testpmd( + cores=cores, + param=param, + eal_param=eal_param, + ports=ports, + prefix="vhost", + ) + if self.nic == "I40E_40G-QSFP_A": + self.vhost_user_pmd.execute_cmd("port config all rss ipv4-tcp") + self.vhost_user_pmd.execute_cmd("set fwd mac") + self.vhost_user_pmd.execute_cmd("start") + + def start_virtio_testpmd(self, cores="Default", param="", eal_param=""): + if self.check_2M_env: + eal_param += " --single-file-segments" + self.virtio_user_pmd.start_testpmd( + cores=cores, param=param, eal_param=eal_param, no_pci=True, prefix="virtio" + ) + # self.virtio_user_pmd.execute_cmd("set fwd csum") + self.virtio_user_pmd.execute_cmd("set fwd mac") + self.virtio_user_pmd.execute_cmd("start") + + def test_perf_pvp_split_ring_vhost_async_operation_test_with_each_tx_rx_queue_using_1_dpdk_driver( + self, + ): + """ + Test Case 1: PVP split ring vhost async operation test with each tx/rx queue using one DSA dpdk driver channel + """ + self.use_dsa_list = self.DC.bind_dsa_to_dpdk( + dsa_number=2, driver_name="vfio-pci", socket=self.ports_socket + ) + dmas = ( + "txq0@%s-q0;" + "txq1@%s-q1;" + "rxq0@%s-q2;" + "rxq1@%s-q3" + % ( + self.use_dsa_list[0], + 
self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + ) + ) + vhost_eal_param = ( + "--vdev 'net_vhost0,iface=vhost-net0,queues=2,dmas=[%s]'" % dmas + ) + vhost_param = "--nb-cores=1 --txq=2 --rxq=2 --txd=1024 --rxd=1024" + + ports = [self.dut.ports_info[0]["pci"]] + for i in self.use_dsa_list: + ports.append(i) + port_options = {self.use_dsa_list[0]: "max_queues=4"} + self.start_vhost_testpmd( + cores=self.vhost_core_list, + param=vhost_param, + eal_param=vhost_eal_param, + ports=ports, + port_options=port_options, + iova_mode="va", + ) + virtio_param = "--nb-cores=1 --txq=2 --rxq=2 --txd=1024 --rxd=1024" + for key, path in SPLIT_RING_PATH.items(): + virtio_eal_param = ( + "--vdev 'net_virtio_user0,mac=%s,path=vhost-net0,%s,queues=2'" + % (self.virtio_mac, path) + ) + if key == "non_mergeable": + new_virtio_param = "--enable-hw-vlan-strip " + virtio_param + else: + new_virtio_param = virtio_param + + mode = key + "_VA" + self.mode_list.append(mode) + self.start_virtio_testpmd( + cores=self.virtio_core_list, + param=new_virtio_param, + eal_param=virtio_eal_param, + ) + self.send_imix_packets(mode=mode) + self.check_each_queue_of_port_packets(queues=2) + + mode += "_RestartVhost" + self.mode_list.append(mode) + self.send_imix_packets(mode=mode) + self.check_each_queue_of_port_packets(queues=2) + self.virtio_user_pmd.quit() + + if not self.check_2M_env: + self.vhost_user_pmd.quit() + dmas = ( + "txq0@%s-q0;" + "txq1@%s-q1;" + "rxq0@%s-q0;" + "rxq1@%s-q1" + % ( + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[1], + self.use_dsa_list[1], + ) + ) + vhost_param = "--nb-cores=2 --txq=2 --rxq=2 --txd=1024 --rxd=1024" + vhost_eal_param = ( + "--vdev 'net_vhost0,iface=vhost-net0,queues=2,dmas=[%s]'" % dmas + ) + port_options = { + self.use_dsa_list[0]: "max_queues=2", + self.use_dsa_list[1]: "max_queues=2", + } + self.start_vhost_testpmd( + cores=self.vhost_core_list, + param=vhost_param, + eal_param=vhost_eal_param, + ports=ports, + port_options=port_options, + iova_mode="pa", + ) + for key, path in SPLIT_RING_PATH.items(): + if key == "inorder_mergeable": + virtio_eal_param = ( + "--vdev 'net_virtio_user0,mac=%s,path=./vhost-net0,%s,queues=2'" + % (self.virtio_mac, path) + ) + + mode = key + "_PA" + self.mode_list.append(mode) + self.start_virtio_testpmd( + cores=self.virtio_core_list, + param=virtio_param, + eal_param=virtio_eal_param, + ) + self.send_imix_packets(mode=mode) + self.check_each_queue_of_port_packets(queues=2) + + mode += "_RestartVhost" + self.vhost_user_pmd.execute_cmd("start") + self.mode_list.append(mode) + self.send_imix_packets(mode=mode) + self.check_each_queue_of_port_packets(queues=2) + self.virtio_user_pmd.quit() + + self.test_target = self.running_case + self.expected_throughput = self.get_suite_cfg()["expected_throughput"][ + self.test_target + ] + self.handle_expected(mode_list=self.mode_list) + self.handle_results(mode_list=self.mode_list) + self.vhost_user_pmd.quit() + + def test_perf_pvp_split_ring_vhost_async_operation_test_with_dpdk_driver_being_shared_among_multi_tx_rx_queue( + self, + ): + """ + Test Case 2: PVP split ring vhost async operations test with one DSA dpdk driver channel being shared among multiple tx/rx queues + """ + self.use_dsa_list = self.DC.bind_dsa_to_dpdk( + dsa_number=1, driver_name="vfio-pci", socket=self.ports_socket + ) + dmas = ( + "txq0@%s-q0;" + "txq1@%s-q0;" + "txq2@%s-q0;" + "txq3@%s-q0;" + "txq4@%s-q1;" + "txq5@%s-q1;" + "txq6@%s-q1;" + "txq7@%s-q1;" + "rxq0@%s-q0;" + "rxq1@%s-q0;" + "rxq2@%s-q0;" 
+ "rxq3@%s-q0;" + "rxq4@%s-q1;" + "rxq5@%s-q1;" + "rxq6@%s-q1;" + "rxq7@%s-q1" + % ( + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + ) + ) + vhost_eal_param = ( + "--vdev 'net_vhost0,iface=vhost-net0,queues=8,dmas=[%s]'" % dmas + ) + vhost_param = "--nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024" + ports = [self.dut.ports_info[0]["pci"]] + for i in self.use_dsa_list: + ports.append(i) + port_options = {self.use_dsa_list[0]: "max_queues=2"} + self.start_vhost_testpmd( + cores=self.vhost_core_list, + param=vhost_param, + eal_param=vhost_eal_param, + ports=ports, + port_options=port_options, + iova_mode="va", + ) + virtio_param = "--nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024" + for key, path in SPLIT_RING_PATH.items(): + virtio_eal_param = ( + "--vdev 'net_virtio_user0,mac=%s,path=vhost-net0,%s,queues=8'" + % (self.virtio_mac, path) + ) + if key == "non_mergeable": + new_virtio_param = "--enable-hw-vlan-strip " + virtio_param + else: + new_virtio_param = virtio_param + + mode = key + "_VA" + self.mode_list.append(mode) + self.start_virtio_testpmd( + cores=self.virtio_core_list, + param=new_virtio_param, + eal_param=virtio_eal_param, + ) + self.send_imix_packets(mode=mode) + self.check_each_queue_of_port_packets(queues=8) + + mode += "_RestartVhost" + self.mode_list.append(mode) + self.send_imix_packets(mode=mode) + self.check_each_queue_of_port_packets(queues=8) + self.virtio_user_pmd.quit() + + if not self.check_2M_env: + self.vhost_user_pmd.quit() + dmas = ( + "txq0@%s-q0;" + "txq1@%s-q1;" + "txq2@%s-q2;" + "txq3@%s-q3;" + "txq4@%s-q4;" + "txq5@%s-q5;" + "txq6@%s-q6;" + "txq7@%s-q7;" + "rxq0@%s-q0;" + "rxq1@%s-q1;" + "rxq2@%s-q2;" + "rxq3@%s-q3;" + "rxq4@%s-q4;" + "rxq5@%s-q5;" + "rxq6@%s-q6;" + "rxq7@%s-q7" + % ( + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + ) + ) + vhost_eal_param = ( + "--vdev 'net_vhost0,iface=vhost-net0,queues=8,dmas=[%s]'" % dmas + ) + vhost_param = "--nb-cores=6 --txq=8 --rxq=8 --txd=1024 --rxd=1024" + port_options = {self.use_dsa_list[0]: "max_queues=8"} + self.start_vhost_testpmd( + cores=self.vhost_core_list, + param=vhost_param, + eal_param=vhost_eal_param, + ports=ports, + port_options=port_options, + iova_mode="pa", + ) + for key, path in SPLIT_RING_PATH.items(): + if key == "mergeable": + virtio_eal_param = ( + "--vdev 'net_virtio_user0,mac=%s,path=./vhost-net0,%s,queues=8'" + % (self.virtio_mac, path) + ) + + mode = key + "_PA" + self.mode_list.append(mode) + self.start_virtio_testpmd( + cores=self.virtio_core_list, + param=virtio_param, + eal_param=virtio_eal_param, + ) + self.send_imix_packets(mode=mode) + self.check_each_queue_of_port_packets(queues=8) + + mode += "_RestartVhost" + self.vhost_user_pmd.execute_cmd("start") + self.mode_list.append(mode) + self.send_imix_packets(mode=mode) + self.check_each_queue_of_port_packets(queues=8) + self.virtio_user_pmd.quit() + + if not 
self.check_2M_env: + self.vhost_user_pmd.quit() + dmas = ( + "txq0@%s-q0;" + "txq1@%s-q0;" + "txq2@%s-q0;" + "txq3@%s-q0;" + "txq4@%s-q0;" + "txq5@%s-q0;" + "txq6@%s-q0;" + "txq7@%s-q0;" + "rxq0@%s-q0;" + "rxq1@%s-q0;" + "rxq2@%s-q0;" + "rxq3@%s-q0;" + "rxq4@%s-q0;" + "rxq5@%s-q0;" + "rxq6@%s-q0;" + "rxq7@%s-q0" + % ( + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + ) + ) + vhost_eal_param = ( + "--vdev 'net_vhost0,iface=vhost-net0,queues=8,dmas=[%s]'" % dmas + ) + vhost_param = "--nb-cores=6 --txq=8 --rxq=8 --txd=1024 --rxd=1024" + port_options = {self.use_dsa_list[0]: "max_queues=1"} + self.start_vhost_testpmd( + cores=self.vhost_core_list, + param=vhost_param, + eal_param=vhost_eal_param, + ports=ports, + port_options=port_options, + iova_mode="pa", + ) + for key, path in SPLIT_RING_PATH.items(): + if key == "inorder_non_mergeable": + virtio_eal_param = ( + "--vdev 'net_virtio_user0,mac=%s,path=./vhost-net0,%s,queues=8'" + % (self.virtio_mac, path) + ) + + mode = key + "_PA" + self.mode_list.append(mode) + self.start_virtio_testpmd( + cores=self.virtio_core_list, + param=virtio_param, + eal_param=virtio_eal_param, + ) + self.send_imix_packets(mode=mode) + self.check_each_queue_of_port_packets(queues=8) + + mode += "_RestartVhost" + self.vhost_user_pmd.execute_cmd("start") + self.mode_list.append(mode) + self.send_imix_packets(mode=mode) + self.check_each_queue_of_port_packets(queues=8) + self.virtio_user_pmd.quit() + + self.test_target = self.running_case + self.expected_throughput = self.get_suite_cfg()["expected_throughput"][ + self.test_target + ] + self.handle_expected(mode_list=self.mode_list) + self.handle_results(mode_list=self.mode_list) + self.vhost_user_pmd.quit() + + def test_perf_pvp_split_ring_dynamic_queues_vhost_async_operation_with_dpdk_driver( + self, + ): + """ + Test Case 3: PVP split ring dynamic queues vhost async operation with dsa dpdk driver channels + """ + self.use_dsa_list = self.DC.bind_dsa_to_dpdk( + dsa_number=2, driver_name="vfio-pci", socket=self.ports_socket + ) + dmas = ( + "txq0@%s-q0;" + "txq1@%s-q1;" + "txq2@%s-q2;" + "txq3@%s-q2" + % ( + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + ) + ) + vhost_eal_param = ( + "--vdev 'net_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[%s]'" % dmas + ) + vhost_param = "--nb-cores=4 --txq=4 --rxq=4 --txd=1024 --rxd=1024" + ports = [self.dut.ports_info[0]["pci"]] + for i in self.use_dsa_list: + ports.append(i) + port_options = {self.use_dsa_list[0]: "max_queues=4"} + self.start_vhost_testpmd( + cores=self.vhost_core_list, + param=vhost_param, + eal_param=vhost_eal_param, + ports=ports, + port_options=port_options, + iova_mode="va", + ) + virtio_param = "--nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024" + for key, path in SPLIT_RING_PATH.items(): + virtio_eal_param = ( + "--vdev 'net_virtio_user0,mac=%s,path=vhost-net0,%s,queues=8,server=1'" + % (self.virtio_mac, path) + ) + if key == "inorder_mergeable": + mode = key + "_VA_4_queue" + self.mode_list.append(mode) + self.start_virtio_testpmd( + cores=self.virtio_core_list, + param=virtio_param, + eal_param=virtio_eal_param, + ) + self.send_imix_packets(mode=mode) + 
self.check_each_queue_of_port_packets(queues=2) + + self.vhost_user_pmd.quit() + vhost_eal_param = "--vdev 'net_vhost0,iface=vhost-net0,queues=8,client=1,'" + vhost_param = "--nb-cores=2 --txq=1 --rxq=1 --txd=1024 --rxd=1024" + self.start_vhost_testpmd( + cores=self.vhost_core_list, + param=vhost_param, + eal_param=vhost_eal_param, + ports=[self.dut.ports_info[0]["pci"]], + port_options="", + iova_mode="va", + ) + mode = "inorder_mergeable" + "_VA_1_queue_wo_dsa" + self.mode_list.append(mode) + self.send_imix_packets(mode=mode) + self.check_each_queue_of_port_packets(queues=1) + + self.vhost_user_pmd.quit() + dmas = ( + "rxq0@%s-q0;" + "rxq1@%s-q1;" + "rxq2@%s-q1;" + "rxq3@%s-q0" + % ( + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + ) + ) + vhost_eal_param = ( + "--vdev 'net_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[%s]'" % dmas + ) + vhost_param = "--nb-cores=4 --txq=4 --rxq=4 --txd=1024 --rxd=1024" + port_options = {self.use_dsa_list[0]: "max_queues=4"} + self.start_vhost_testpmd( + cores=self.vhost_core_list, + param=vhost_param, + eal_param=vhost_eal_param, + ports=ports, + port_options=port_options, + iova_mode="va", + ) + mode = "inorder_mergeable" + "_VA_4_queue_diff" + self.mode_list.append(mode) + self.send_imix_packets(mode=mode) + self.check_each_queue_of_port_packets(queues=4) + + self.vhost_user_pmd.quit() + dmas = ( + "txq0@%s-q0;" + "txq1@%s-q0;" + "txq2@%s-q0;" + "txq3@%s-q0;" + "txq4@%s-q1;" + "txq5@%s-q2;" + "rxq2@%s-q0;" + "rxq3@%s-q1;" + "rxq4@%s-q2;" + "rxq5@%s-q2;" + "rxq6@%s-q2;" + "rxq7@%s-q2" + % ( + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[1], + self.use_dsa_list[1], + self.use_dsa_list[1], + self.use_dsa_list[1], + self.use_dsa_list[1], + self.use_dsa_list[1], + ) + ) + vhost_eal_param = ( + "--vdev 'net_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[%s]'" % dmas + ) + vhost_param = "--nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024" + port_options = { + self.use_dsa_list[0]: "max_queues=4", + self.use_dsa_list[1]: "max_queues=4", + } + self.start_vhost_testpmd( + cores=self.vhost_core_list, + param=vhost_param, + eal_param=vhost_eal_param, + ports=ports, + port_options=port_options, + iova_mode="va", + ) + mode = "inorder_mergeable" + "_VA_8_queue_diff" + self.mode_list.append(mode) + self.send_imix_packets(mode=mode) + self.check_each_queue_of_port_packets(queues=8) + + self.virtio_user_pmd.quit() + virtio_param = "--nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024" + for key, path in SPLIT_RING_PATH.items(): + if key == "non_mergeable": + virtio_eal_param = ( + "--vdev 'net_virtio_user0,mac=%s,path=vhost-net0,%s,queues=8,server=1'" + % (self.virtio_mac, path) + ) + mode = key + "_VA_8_queue_diff" + self.mode_list.append(mode) + self.start_virtio_testpmd( + cores=self.virtio_core_list, + param=virtio_param, + eal_param=virtio_eal_param, + ) + self.send_imix_packets(mode=mode) + self.check_each_queue_of_port_packets(queues=8) + + self.vhost_user_pmd.quit() + dmas = ( + "txq0@%s-q0;" + "txq1@%s-q0;" + "txq2@%s-q0;" + "rxq1@%s-q1;" + "rxq2@%s-q1;" + "rxq3@%s-q1" + % ( + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + ) + ) + vhost_eal_param = ( + "--vdev 'net_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[%s]'" % dmas + ) + vhost_param = "--nb-cores=5 --txq=8 --rxq=8 --txd=1024 --rxd=1024" + 
port_options = {self.use_dsa_list[0]: "max_queues=2"} + self.start_vhost_testpmd( + cores=self.vhost_core_list, + param=vhost_param, + eal_param=vhost_eal_param, + ports=ports, + port_options=port_options, + iova_mode="pa", + ) + mode = "non_mergeable" + "_PA_4_queue_diff" + self.mode_list.append(mode) + self.send_imix_packets(mode=mode) + self.check_each_queue_of_port_packets(queues=8) + + self.test_target = self.running_case + self.expected_throughput = self.get_suite_cfg()["expected_throughput"][ + self.test_target + ] + self.handle_expected(mode_list=self.mode_list) + self.handle_results(mode_list=self.mode_list) + self.vhost_user_pmd.quit() + + def test_perf_pvp_packed_ring_vhost_async_operation_test_with_each_tx_rx_queue_using_1_dpdk_driver( + self, + ): + """ + Test Case 4: PVP packed ring vhost async operation test with each tx/rx queue using one DSA dpdk driver channel + """ + self.use_dsa_list = self.DC.bind_dsa_to_dpdk( + dsa_number=2, driver_name="vfio-pci", socket=self.ports_socket + ) + dmas = ( + "txq0@%s-q0;" + "txq1@%s-q1;" + "rxq0@%s-q2;" + "rxq1@%s-q2" + % ( + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + ) + ) + vhost_eal_param = ( + "--vdev 'net_vhost0,iface=vhost-net0,queues=2,dmas=[%s]'" % dmas + ) + vhost_param = "--nb-cores=1 --txq=2 --rxq=2 --txd=1024 --rxd=1024" + ports = [self.dut.ports_info[0]["pci"]] + for i in self.use_dsa_list: + ports.append(i) + port_options = {self.use_dsa_list[0]: "max_queues=4"} + self.start_vhost_testpmd( + cores=self.vhost_core_list, + param=vhost_param, + eal_param=vhost_eal_param, + ports=ports, + port_options=port_options, + iova_mode="va", + ) + for key, path in PACKED_RING_PATH.items(): + virtio_eal_param = ( + "--vdev 'net_virtio_user0,mac=%s,path=vhost-net0,%s,queues=2'" + % (self.virtio_mac, path) + ) + if "vectorized" in key: + virtio_eal_param = "--force-max-simd-bitwidth=512 " + virtio_eal_param + if key == "vectorized_path_not_power_of_2": + virtio_param = "--nb-cores=1 --txq=2 --rxq=2 --txd=1025 --rxd=1025" + else: + virtio_param = "--nb-cores=1 --txq=2 --rxq=2 --txd=1024 --rxd=1024" + + mode = key + "_VA" + self.mode_list.append(mode) + self.start_virtio_testpmd( + cores=self.virtio_core_list, + param=virtio_param, + eal_param=virtio_eal_param, + ) + self.send_imix_packets(mode=mode) + self.check_each_queue_of_port_packets(queues=2) + + mode += "_RestartVhost" + self.mode_list.append(mode) + self.send_imix_packets(mode=mode) + self.check_each_queue_of_port_packets(queues=2) + self.virtio_user_pmd.quit() + + if not self.check_2M_env: + self.vhost_user_pmd.quit() + dmas = ( + "txq0@%s-q0;" + "txq1@%s-q1;" + "rxq0@%s-q0;" + "rxq1@%s-q1" + % ( + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[1], + self.use_dsa_list[1], + ) + ) + vhost_eal_param = ( + "--vdev 'net_vhost0,iface=vhost-net0,queues=2,dmas=[%s]'" % dmas + ) + vhost_param = "--nb-cores=2 --txq=2 --rxq=2 --txd=1024 --rxd=1024" + self.start_vhost_testpmd( + cores=self.vhost_core_list, + param=vhost_param, + eal_param=vhost_eal_param, + ports=ports, + port_options=port_options, + iova_mode="pa", + ) + virtio_param = "--nb-cores=1 --txq=2 --rxq=2 --txd=1024 --rxd=1024" + for key, path in PACKED_RING_PATH.items(): + if key == "inorder_mergeable": + virtio_eal_param = ( + "--vdev 'net_virtio_user0,mac=%s,path=./vhost-net0,%s,queues=2'" + % (self.virtio_mac, path) + ) + + mode = key + "_PA" + self.mode_list.append(mode) + self.start_virtio_testpmd( + cores=self.virtio_core_list, + param=virtio_param, + 
eal_param=virtio_eal_param, + ) + self.send_imix_packets(mode=mode) + self.check_each_queue_of_port_packets(queues=2) + + mode += "_RestartVhost" + self.vhost_user_pmd.execute_cmd("start") + self.mode_list.append(mode) + self.send_imix_packets(mode=mode) + self.check_each_queue_of_port_packets(queues=2) + self.virtio_user_pmd.quit() + + self.test_target = self.running_case + self.expected_throughput = self.get_suite_cfg()["expected_throughput"][ + self.test_target + ] + self.handle_expected(mode_list=self.mode_list) + self.handle_results(mode_list=self.mode_list) + self.vhost_user_pmd.quit() + + def test_perf_pvp_packed_ring_vhost_async_operation_test_with_1_dpdk_driver_being_shared_among_multi_tx_rx_queue( + self, + ): + """ + Test Case 5: PVP packed ring vhost async operation test with one DSA dpdk driver channel being shared among multiple tx/rx queues + """ + self.use_dsa_list = self.DC.bind_dsa_to_dpdk( + dsa_number=1, driver_name="vfio-pci", socket=self.ports_socket + ) + dmas = ( + "txq0@%s-q0;" + "txq1@%s-q0;" + "txq2@%s-q0;" + "txq3@%s-q0;" + "txq4@%s-q1;" + "txq5@%s-q1;" + "txq6@%s-q1;" + "txq7@%s-q1;" + "rxq0@%s-q0;" + "rxq1@%s-q0;" + "rxq2@%s-q0;" + "rxq3@%s-q0;" + "rxq4@%s-q1;" + "rxq5@%s-q1;" + "rxq6@%s-q1;" + "rxq7@%s-q1" + % ( + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + ) + ) + vhost_eal_param = ( + "--vdev 'net_vhost0,iface=vhost-net0,queues=8,dmas=[%s]'" % dmas + ) + vhost_param = "--nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024" + ports = [self.dut.ports_info[0]["pci"]] + for i in self.use_dsa_list: + ports.append(i) + port_options = {self.use_dsa_list[0]: "max_queues=4"} + self.start_vhost_testpmd( + cores=self.vhost_core_list, + param=vhost_param, + eal_param=vhost_eal_param, + ports=ports, + port_options=port_options, + iova_mode="va", + ) + for key, path in PACKED_RING_PATH.items(): + virtio_eal_param = ( + "--vdev 'net_virtio_user0,mac=%s,path=vhost-net0,%s,queues=8'" + % (self.virtio_mac, path) + ) + if "vectorized" in key: + virtio_eal_param = "--force-max-simd-bitwidth=512 " + virtio_eal_param + if key == "vectorized_path_not_power_of_2": + virtio_param = "--nb-cores=4 --txq=8 --rxq=8 --txd=1025 --rxd=1025" + else: + virtio_param = "--nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024" + + mode = key + "_VA" + self.mode_list.append(mode) + self.start_virtio_testpmd( + cores=self.virtio_core_list, + param=virtio_param, + eal_param=virtio_eal_param, + ) + self.send_imix_packets(mode=mode) + self.check_each_queue_of_port_packets(queues=8) + + mode += "_RestartVhost" + self.mode_list.append(mode) + self.send_imix_packets(mode=mode) + self.check_each_queue_of_port_packets(queues=8) + self.virtio_user_pmd.quit() + + if not self.check_2M_env: + self.vhost_user_pmd.quit() + dmas = ( + "txq0@%s-q0;" + "txq1@%s-q0;" + "txq2@%s-q0;" + "txq3@%s-q0;" + "txq4@%s-q0;" + "txq5@%s-q0;" + "txq6@%s-q0;" + "txq7@%s-q0;" + "rxq0@%s-q0;" + "rxq1@%s-q0;" + "rxq2@%s-q0;" + "rxq3@%s-q0;" + "rxq4@%s-q0;" + "rxq5@%s-q0;" + "rxq6@%s-q0;" + "rxq7@%s-q0" + % ( + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + 
self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + ) + ) + vhost_eal_param = ( + "--vdev 'net_vhost0,iface=vhost-net0,queues=8,dmas=[%s]'" % dmas + ) + vhost_param = "--nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024" + self.start_vhost_testpmd( + cores=self.vhost_core_list, + param=vhost_param, + eal_param=vhost_eal_param, + ports=ports, + port_options=port_options, + iova_mode="pa", + ) + virtio_param = "--nb-cores=2 --txq=8 --rxq=8 --txd=1024 --rxd=1024" + for key, path in PACKED_RING_PATH.items(): + if key == "inorder_non_mergeable": + virtio_eal_param = ( + "--vdev 'net_virtio_user0,mac=%s,path=./vhost-net0,%s,queues=8'" + % (self.virtio_mac, path) + ) + + mode = key + "_PA" + self.mode_list.append(mode) + self.start_virtio_testpmd( + cores=self.virtio_core_list, + param=virtio_param, + eal_param=virtio_eal_param, + ) + self.send_imix_packets(mode=mode) + self.check_each_queue_of_port_packets(queues=8) + + mode += "_RestartVhost" + self.vhost_user_pmd.execute_cmd("start") + self.mode_list.append(mode) + self.send_imix_packets(mode=mode) + self.check_each_queue_of_port_packets(queues=8) + self.virtio_user_pmd.quit() + + self.test_target = self.running_case + self.expected_throughput = self.get_suite_cfg()["expected_throughput"][ + self.test_target + ] + self.handle_expected(mode_list=self.mode_list) + self.handle_results(mode_list=self.mode_list) + self.vhost_user_pmd.quit() + + def test_perf_pvp_packed_ring_dynamic_queues_vhost_async_operation_with_dpdk_driver( + self, + ): + """ + Test Case 6: PVP packed ring dynamic queues vhost async operation with dsa dpdk driver channels + """ + self.use_dsa_list = self.DC.bind_dsa_to_dpdk( + dsa_number=2, driver_name="vfio-pci", socket=self.ports_socket + ) + dmas = ( + "txq0@%s-q0;" + "txq1@%s-q0;" + "txq2@%s-q1" + % ( + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + ) + ) + vhost_eal_param = ( + "--vdev 'net_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[%s]'" % dmas + ) + vhost_param = "--nb-cores=4 --txq=4 --rxq=4 --txd=1024 --rxd=1024" + ports = [self.dut.ports_info[0]["pci"]] + for i in self.use_dsa_list: + ports.append(i) + port_options = { + self.use_dsa_list[0]: "max_queues=4", + self.use_dsa_list[1]: "max_queues=4", + } + self.start_vhost_testpmd( + cores=self.vhost_core_list, + param=vhost_param, + eal_param=vhost_eal_param, + ports=ports, + port_options=port_options, + iova_mode="va", + ) + virtio_param = "--nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024" + for key, path in PACKED_RING_PATH.items(): + virtio_eal_param = ( + "--vdev 'net_virtio_user0,mac=%s,path=vhost-net0,%s,queues=8,server=1'" + % (self.virtio_mac, path) + ) + if key == "inorder_mergeable": + mode = key + "_VA_4_queue" + self.mode_list.append(mode) + self.start_virtio_testpmd( + cores=self.virtio_core_list, + param=virtio_param, + eal_param=virtio_eal_param, + ) + self.send_imix_packets(mode=mode) + self.check_each_queue_of_port_packets(queues=2) + + self.vhost_user_pmd.quit() + vhost_eal_param = "--vdev 'net_vhost0,iface=vhost-net0,queues=8,client=1,'" + vhost_param = "--nb-cores=2 --txq=1 --rxq=1 --txd=1024 --rxd=1024" + self.start_vhost_testpmd( + cores=self.vhost_core_list, + param=vhost_param, + eal_param=vhost_eal_param, + ports=[self.dut.ports_info[0]["pci"]], + port_options="", + iova_mode="va", + ) + mode = "inorder_mergeable" + "_VA_1_queue_wo_dsa" + self.mode_list.append(mode) + 
self.send_imix_packets(mode=mode) + self.check_each_queue_of_port_packets(queues=1) + + self.vhost_user_pmd.quit() + dmas = ( + "rxq0@%s-q0;" + "rxq1@%s-q1;" + "rxq2@%s-q1" + % ( + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + ) + ) + vhost_eal_param = ( + "--vdev 'net_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[%s]'" % dmas + ) + vhost_param = "--nb-cores=4 --txq=4 --rxq=4 --txd=1024 --rxd=1024" + self.start_vhost_testpmd( + cores=self.vhost_core_list, + param=vhost_param, + eal_param=vhost_eal_param, + ports=ports, + port_options=port_options, + iova_mode="va", + ) + mode = "inorder_mergeable" + "_VA_4_queue_diff" + self.mode_list.append(mode) + self.send_imix_packets(mode=mode) + self.check_each_queue_of_port_packets(queues=4) + + self.vhost_user_pmd.quit() + dmas = ( + "txq0@%s-q0;" + "txq1@%s-q0;" + "txq2@%s-q0;" + "txq3@%s-q0;" + "txq4@%s-q1;" + "txq5@%s-q2;" + "rxq2@%s-q0;" + "rxq3@%s-q1;" + "rxq4@%s-q2;" + "rxq5@%s-q2;" + "rxq6@%s-q2;" + "rxq7@%s-q2" + % ( + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[1], + self.use_dsa_list[1], + self.use_dsa_list[1], + self.use_dsa_list[1], + self.use_dsa_list[1], + self.use_dsa_list[1], + ) + ) + vhost_eal_param = ( + "--vdev 'net_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[%s]'" % dmas + ) + vhost_param = "--nb-cores=2 --txq=8 --rxq=8 --txd=1024 --rxd=1024" + self.start_vhost_testpmd( + cores=self.vhost_core_list, + param=vhost_param, + eal_param=vhost_eal_param, + ports=ports, + port_options=port_options, + iova_mode="va", + ) + mode = "inorder_mergeable" + "_VA_8_queue_diff" + self.mode_list.append(mode) + self.send_imix_packets(mode=mode) + self.check_each_queue_of_port_packets(queues=8) + + self.virtio_user_pmd.quit() + virtio_param = "--nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024" + for key, path in PACKED_RING_PATH.items(): + if key == "non_mergeable": + virtio_eal_param = ( + "--vdev 'net_virtio_user0,mac=%s,path=vhost-net0,%s,queues=8,server=1'" + % (self.virtio_mac, path) + ) + mode = key + "_VA_8_queue_diff" + self.mode_list.append(mode) + self.start_virtio_testpmd( + cores=self.virtio_core_list, + param=virtio_param, + eal_param=virtio_eal_param, + ) + self.send_imix_packets(mode=mode) + self.check_each_queue_of_port_packets(queues=8) + + self.vhost_user_pmd.quit() + dmas = ( + "txq0@%s-q0;" + "txq1@%s-q0;" + "txq2@%s-q0;" + "rxq1@%s-q1;" + "rxq2@%s-q1;" + "rxq3@%s-q1" + ) + vhost_eal_param = ( + "--vdev 'net_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[%s]'" % dmas + ) + vhost_param = "--nb-cores=5 --txq=8 --rxq=8 --txd=1024 --rxd=1024" + self.start_vhost_testpmd( + cores=self.vhost_core_list, + param=vhost_param, + eal_param=vhost_eal_param, + ports=ports, + port_options=port_options, + iova_mode="pa", + ) + mode = "non_mergeable" + "_PA_4_queue_diff" + self.mode_list.append(mode) + self.send_imix_packets(mode=mode) + self.check_each_queue_of_port_packets(queues=8) + + self.test_target = self.running_case + self.expected_throughput = self.get_suite_cfg()["expected_throughput"][ + self.test_target + ] + self.handle_expected(mode_list=self.mode_list) + self.handle_results(mode_list=self.mode_list) + self.vhost_user_pmd.quit() + + def test_perf_pvp_split_ring_vhost_async_operation_test_with_each_tx_rx_queue_using_1_kernel_driver( + self, + ): + """ + Test Case 7: PVP split ring vhost async operation test with each tx/rx queue using one DSA kernel driver channel + """ + 
self.DC.create_work_queue(work_queue_number=4, dsa_index=0) + dmas = "txq0@wq0.0;" "txq1@wq0.1;" "rxq0@wq0.2;" "rxq1@wq0.3" + vhost_eal_param = ( + "--vdev 'net_vhost0,iface=vhost-net0,queues=2,dmas=[%s]'" % dmas + ) + vhost_param = "--nb-cores=1 --txq=2 --rxq=2 --txd=1024 --rxd=1024" + ports = [self.dut.ports_info[0]["pci"]] + self.start_vhost_testpmd( + cores=self.vhost_core_list, + param=vhost_param, + eal_param=vhost_eal_param, + ports=ports, + port_options="", + iova_mode="va", + ) + virtio_param = "--nb-cores=1 --txq=2 --rxq=2 --txd=1024 --rxd=1024" + for key, path in SPLIT_RING_PATH.items(): + virtio_eal_param = ( + "--vdev 'net_virtio_user0,mac=%s,path=vhost-net0,%s,queues=2'" + % (self.virtio_mac, path) + ) + if key == "non_mergeable": + new_virtio_param = "--enable-hw-vlan-strip " + virtio_param + else: + new_virtio_param = virtio_param + + mode = key + "_VA" + self.mode_list.append(mode) + self.start_virtio_testpmd( + cores=self.virtio_core_list, + param=new_virtio_param, + eal_param=virtio_eal_param, + ) + self.send_imix_packets(mode=mode) + self.check_each_queue_of_port_packets(queues=2) + + mode += "_RestartVhost" + self.mode_list.append(mode) + self.send_imix_packets(mode=mode) + self.check_each_queue_of_port_packets(queues=2) + self.virtio_user_pmd.quit() + + self.test_target = self.running_case + self.expected_throughput = self.get_suite_cfg()["expected_throughput"][ + self.test_target + ] + self.handle_expected(mode_list=self.mode_list) + self.handle_results(mode_list=self.mode_list) + self.vhost_user_pmd.quit() + + def test_perf_pvp_split_ring_vhost_async_operation_test_with_1_kernel_driver_being_shared_among_multi_tx_rx_queue( + self, + ): + """ + Test Case 8: PVP split ring all path multi-queues vhost async operation test with one DSA kernel driver channel being shared among multiple tx/rx queues + """ + self.DC.create_work_queue(work_queue_number=4, dsa_index=0) + dmas = ( + "txq0@wq0.0;" + "txq1@wq0.0;" + "txq2@wq0.0;" + "txq3@wq0.0;" + "txq4@wq0.1;" + "txq5@wq0.1;" + "txq6@wq0.1;" + "txq7@wq0.1;" + "rxq0@wq0.0;" + "rxq1@wq0.0;" + "rxq2@wq0.0;" + "rxq3@wq0.0;" + "rxq4@wq0.1;" + "rxq5@wq0.1;" + "rxq6@wq0.1;" + "rxq7@wq0.1" + ) + vhost_eal_param = ( + "--vdev 'net_vhost0,iface=vhost-net0,queues=8,dmas=[%s]'" % dmas + ) + vhost_param = "--nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024" + ports = [self.dut.ports_info[0]["pci"]] + self.start_vhost_testpmd( + cores=self.vhost_core_list, + param=vhost_param, + eal_param=vhost_eal_param, + ports=ports, + port_options="", + iova_mode="va", + ) + virtio_param = "--nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024" + for key, path in SPLIT_RING_PATH.items(): + virtio_eal_param = ( + "--vdev 'net_virtio_user0,mac=%s,path=vhost-net0,%s,queues=8'" + % (self.virtio_mac, path) + ) + if key == "non_mergeable": + new_virtio_param = "--enable-hw-vlan-strip " + virtio_param + else: + new_virtio_param = virtio_param + + mode = key + "_VA" + self.mode_list.append(mode) + self.start_virtio_testpmd( + cores=self.virtio_core_list, + param=new_virtio_param, + eal_param=virtio_eal_param, + ) + self.send_imix_packets(mode=mode) + self.check_each_queue_of_port_packets(queues=2) + + mode += "_RestartVhost" + self.mode_list.append(mode) + self.send_imix_packets(mode=mode) + self.check_each_queue_of_port_packets(queues=2) + self.virtio_user_pmd.quit() + + if not self.check_2M_env: + self.vhost_user_pmd.quit() + dmas = ( + "txq0@wq0.0;" + "txq1@wq0.0;" + "txq2@wq0.1;" + "txq3@wq0.1;" + "txq4@wq0.2;" + "txq5@wq0.2;" + "txq6@wq0.3;" + "txq7@wq0.3;" + 
"rxq0@wq0.0;" + "rxq1@wq0.0;" + "rxq2@wq0.1;" + "rxq3@wq0.1;" + "rxq4@wq0.2;" + "rxq5@wq0.2;" + "rxq6@wq0.3;" + "rxq7@wq0.3" + ) + vhost_eal_param = ( + "--vdev 'net_vhost0,iface=vhost-net0,queues=8,dmas=[%s]'" % dmas + ) + vhost_param = "--nb-cores=6 --txq=8 --rxq=8 --txd=1024 --rxd=1024" + ports = [self.dut.ports_info[0]["pci"]] + self.start_vhost_testpmd( + cores=self.vhost_core_list, + param=vhost_param, + eal_param=vhost_eal_param, + ports=ports, + port_options="", + iova_mode="pa", + ) + self.test_target = self.running_case + self.expected_throughput = self.get_suite_cfg()["expected_throughput"][ + self.test_target + ] + self.handle_expected(mode_list=self.mode_list) + self.handle_results(mode_list=self.mode_list) + self.vhost_user_pmd.quit() + + def test_perf_pvp_split_ring_dynamic_queue_vhost_async_operation_with_dsa_kernel_driver( + self, + ): + """ + Test Case 9: PVP split ring dynamic queues vhost async operation with dsa kernel driver channels + """ + self.DC.create_work_queue(work_queue_number=8, dsa_index=0) + self.DC.create_work_queue(work_queue_number=8, dsa_index=1) + dmas = "txq0@wq0.0;" "txq1@wq0.1;" "txq2@wq0.2;" "txq3@wq0.2" + vhost_eal_param = ( + "--vdev 'net_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[%s]'" % dmas + ) + vhost_param = "--nb-cores=4 --txq=4 --rxq=4 --txd=1024 --rxd=1024" + ports = [self.dut.ports_info[0]["pci"]] + self.start_vhost_testpmd( + cores=self.vhost_core_list, + param=vhost_param, + eal_param=vhost_eal_param, + ports=ports, + port_options="", + iova_mode="va", + ) + virtio_param = "--nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024" + for key, path in SPLIT_RING_PATH.items(): + virtio_eal_param = ( + "--vdev 'net_virtio_user0,mac=%s,path=vhost-net0,%s,queues=8,server=1'" + % (self.virtio_mac, path) + ) + if key == "inorder_mergeable": + mode = key + "_VA_4_queue" + self.mode_list.append(mode) + self.start_virtio_testpmd( + cores=self.virtio_core_list, + param=virtio_param, + eal_param=virtio_eal_param, + ) + self.send_imix_packets(mode=mode) + self.check_each_queue_of_port_packets(queues=2) + + self.vhost_user_pmd.quit() + vhost_eal_param = "--vdev 'net_vhost0,iface=vhost-net0,queues=8,client=1,'" + vhost_param = "--nb-cores=2 --txq=1 --rxq=1 --txd=1024 --rxd=1024" + self.start_vhost_testpmd( + cores=self.vhost_core_list, + param=vhost_param, + eal_param=vhost_eal_param, + ports=[self.dut.ports_info[0]["pci"]], + port_options="", + iova_mode="va", + ) + mode = "inorder_mergeable" + "_VA_1_queue_wo_dsa" + self.mode_list.append(mode) + self.send_imix_packets(mode=mode) + self.check_each_queue_of_port_packets(queues=1) + + self.vhost_user_pmd.quit() + dmas = "rxq0@wq0.0;" "rxq1@wq0.1;" "rxq2@wq0.1;" "rxq3@wq0.0" + vhost_eal_param = ( + "--vdev 'net_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[%s]'" % dmas + ) + vhost_param = "--nb-cores=4 --txq=4 --rxq=4 --txd=1024 --rxd=1024" + self.start_vhost_testpmd( + cores=self.vhost_core_list, + param=vhost_param, + eal_param=vhost_eal_param, + ports=ports, + port_options="", + iova_mode="va", + ) + mode = "inorder_mergeable" + "_VA_4_queue_diff" + self.mode_list.append(mode) + self.send_imix_packets(mode=mode) + self.check_each_queue_of_port_packets(queues=4) + + self.vhost_user_pmd.quit() + dmas = ( + "txq0@wq0.0;" + "txq1@wq0.0;" + "txq2@wq0.0;" + "txq3@wq0.0;" + "txq4@wq0.1;" + "txq5@wq0.2;" + "rxq2@wq1.0;" + "rxq3@wq1.1;" + "rxq4@wq1.2;" + "rxq5@wq1.2;" + "rxq6@wq1.2;" + "rxq7@wq1.2" + ) + vhost_eal_param = ( + "--vdev 'net_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[%s]'" % dmas + ) + 
vhost_param = "--nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024" + self.start_vhost_testpmd( + cores=self.vhost_core_list, + param=vhost_param, + eal_param=vhost_eal_param, + ports=ports, + port_options="", + iova_mode="va", + ) + mode = "inorder_mergeable" + "_VA_8_queue_diff" + self.mode_list.append(mode) + self.send_imix_packets(mode=mode) + self.check_each_queue_of_port_packets(queues=8) + + self.vhost_user_pmd.quit() + dmas = ( + "txq0@wq0.0;" + "txq1@wq0.1;" + "txq2@wq0.2;" + "txq3@wq0.3;" + "txq4@wq0.4;" + "txq5@wq0.5;" + "rxq2@wq1.2;" + "rxq3@wq1.3;" + "rxq4@wq1.4;" + "rxq5@wq1.5;" + "rxq6@wq1.6;" + "rxq7@wq1.7" + ) + vhost_eal_param = ( + "--vdev 'net_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[%s]'" % dmas + ) + vhost_param = "--nb-cores=5 --txq=8 --rxq=8 --txd=1024 --rxd=1024" + self.start_vhost_testpmd( + cores=self.vhost_core_list, + param=vhost_param, + eal_param=vhost_eal_param, + ports=ports, + port_options="", + iova_mode="va", + ) + mode = "inorder_mergeable" + "_VA_8_queue_diff_1" + self.mode_list.append(mode) + self.send_imix_packets(mode=mode) + self.check_each_queue_of_port_packets(queues=8) + + self.virtio_user_pmd.quit() + virtio_param = "--nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024" + for key, path in SPLIT_RING_PATH.items(): + if key == "non_mergeable": + virtio_eal_param = ( + "--vdev 'net_virtio_user0,mac=%s,path=vhost-net0,%s,queues=8,server=1'" + % (self.virtio_mac, path) + ) + mode = key + "_VA_8_queue_diff" + self.mode_list.append(mode) + self.start_virtio_testpmd( + cores=self.virtio_core_list, + param=virtio_param, + eal_param=virtio_eal_param, + ) + self.send_imix_packets(mode=mode) + self.check_each_queue_of_port_packets(queues=8) + + self.test_target = self.running_case + self.expected_throughput = self.get_suite_cfg()["expected_throughput"][ + self.test_target + ] + self.handle_expected(mode_list=self.mode_list) + self.handle_results(mode_list=self.mode_list) + self.vhost_user_pmd.quit() + + def test_perf_pvp_packed_ring_vhost_async_operation_test_with_each_tx_rx_queue_using_1_dsa_kernel_driver( + self, + ): + """ + Test Case 10: PVP packed ring all path multi-queues vhost async operation test with each tx/rx queue using one DSA kernel driver channel + + """ + self.DC.create_work_queue(work_queue_number=2, dsa_index=0) + self.DC.create_work_queue(work_queue_number=2, dsa_index=1) + dmas = "txq0@wq0.0;" "txq1@wq0.1;" "rxq0@wq1.0;" "rxq1@wq1.1" + vhost_eal_param = ( + "--vdev 'net_vhost0,iface=vhost-net0,queues=2,dmas=[%s]'" % dmas + ) + vhost_param = "--nb-cores=1 --txq=2 --rxq=2 --txd=1024 --rxd=1024" + ports = [self.dut.ports_info[0]["pci"]] + self.start_vhost_testpmd( + cores=self.vhost_core_list, + param=vhost_param, + eal_param=vhost_eal_param, + ports=ports, + port_options="", + iova_mode="va", + ) + for key, path in PACKED_RING_PATH.items(): + virtio_eal_param = ( + "--vdev 'net_virtio_user0,mac=%s,path=vhost-net0,%s,queues=2'" + % (self.virtio_mac, path) + ) + if "vectorized" in key: + virtio_eal_param = "--force-max-simd-bitwidth=512 " + virtio_eal_param + if key == "vectorized_path_not_power_of_2": + virtio_param = "--nb-cores=1 --txq=2 --rxq=2 --txd=1025 --rxd=1025" + else: + virtio_param = "--nb-cores=1 --txq=2 --rxq=2 --txd=1024 --rxd=1024" + + mode = key + "_VA" + self.mode_list.append(mode) + self.start_virtio_testpmd( + cores=self.virtio_core_list, + param=virtio_param, + eal_param=virtio_eal_param, + ) + self.send_imix_packets(mode=mode) + self.check_each_queue_of_port_packets(queues=2) + + mode += "_RestartVhost" + 
self.mode_list.append(mode) + self.send_imix_packets(mode=mode) + self.check_each_queue_of_port_packets(queues=2) + self.virtio_user_pmd.quit() + + self.test_target = self.running_case + self.expected_throughput = self.get_suite_cfg()["expected_throughput"][ + self.test_target + ] + self.handle_expected(mode_list=self.mode_list) + self.handle_results(mode_list=self.mode_list) + self.vhost_user_pmd.quit() + + def test_perf_pvp_packed_ring_vhost_async_operation_test_with_1_kernel_driver_being_shared_among_multi_tx_rx_queue( + self, + ): + """ + Test Case 11: PVP packed ring all path multi-queues vhost async operation test with one DSA kernel driver channel being shared among multiple tx/rx queues + """ + self.DC.create_work_queue(work_queue_number=2, dsa_index=0) + dmas = ( + "txq0@wq0.0;" + "txq1@wq0.0;" + "txq2@wq0.0;" + "txq3@wq0.0;" + "txq4@wq0.1;" + "txq5@wq0.1;" + "txq6@wq0.1;" + "txq7@wq0.1;" + "rxq0@wq0.0;" + "rxq1@wq0.0;" + "rxq2@wq0.0;" + "rxq3@wq0.0;" + "rxq4@wq0.1;" + "rxq5@wq0.1;" + "rxq6@wq0.1;" + "rxq7@wq0.1" + ) + vhost_eal_param = ( + "--vdev 'net_vhost0,iface=vhost-net0,queues=8,dmas=[%s]'" % dmas + ) + vhost_param = "--nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024" + ports = [self.dut.ports_info[0]["pci"]] + self.start_vhost_testpmd( + cores=self.vhost_core_list, + param=vhost_param, + eal_param=vhost_eal_param, + ports=ports, + port_options="", + iova_mode="va", + ) + for key, path in PACKED_RING_PATH.items(): + virtio_eal_param = ( + "--vdev 'net_virtio_user0,mac=%s,path=vhost-net0,%s,queues=8'" + % (self.virtio_mac, path) + ) + if "vectorized" in key: + virtio_eal_param = "--force-max-simd-bitwidth=512 " + virtio_eal_param + if key == "vectorized_path_not_power_of_2": + virtio_param = "--nb-cores=4 --txq=8 --rxq=8 --txd=1025 --rxd=1025" + else: + virtio_param = "--nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024" + + mode = key + "_VA" + self.mode_list.append(mode) + self.start_virtio_testpmd( + cores=self.virtio_core_list, + param=virtio_param, + eal_param=virtio_eal_param, + ) + self.send_imix_packets(mode=mode) + self.check_each_queue_of_port_packets(queues=2) + + mode += "_RestartVhost" + self.mode_list.append(mode) + self.send_imix_packets(mode=mode) + self.check_each_queue_of_port_packets(queues=2) + self.virtio_user_pmd.quit() + + if not self.check_2M_env: + self.vhost_user_pmd.quit() + dmas = ( + "txq0@wq0.0;" + "txq1@wq0.0;" + "txq2@wq0.0;" + "txq3@wq0.0;" + "txq4@wq0.1;" + "txq5@wq0.1;" + "txq6@wq0.1;" + "txq7@wq0.1;" + "rxq0@wq0.0;" + "rxq1@wq0.0;" + "rxq2@wq0.0;" + "rxq3@wq0.0;" + "rxq4@wq0.1;" + "rxq5@wq0.1;" + "rxq6@wq0.1;" + "rxq7@wq0.1" + ) + vhost_eal_param = ( + "--vdev 'net_vhost0,iface=vhost-net0,queues=8,dmas=[%s]'" % dmas + ) + vhost_param = "--nb-cores=6 --txq=8 --rxq=8 --txd=1024 --rxd=1024" + ports = [self.dut.ports_info[0]["pci"]] + self.start_vhost_testpmd( + cores=self.vhost_core_list, + param=vhost_param, + eal_param=vhost_eal_param, + ports=ports, + port_options="", + iova_mode="pa", + ) + self.test_target = self.running_case + self.expected_throughput = self.get_suite_cfg()["expected_throughput"][ + self.test_target + ] + self.handle_expected(mode_list=self.mode_list) + self.handle_results(mode_list=self.mode_list) + self.vhost_user_pmd.quit() + + def test_perf_pvp_packed_ring_dynamic_queues_vhost_async_operation_with_dsa_kernel_driver( + self, + ): + """ + Test Case 12: PVP packed ring dynamic queues vhost async operation with dsa kernel driver channels + """ + self.DC.create_work_queue(work_queue_number=8, dsa_index=0) + 
self.DC.create_work_queue(work_queue_number=8, dsa_index=1) + dmas = "txq0@wq0.0;" "txq1@wq0.1;" "txq2@wq0.2;" "txq3@wq0.2" + vhost_eal_param = ( + "--vdev 'net_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[%s]'" % dmas + ) + vhost_param = "--nb-cores=4 --txq=4 --rxq=4 --txd=1024 --rxd=1024" + ports = [self.dut.ports_info[0]["pci"]] + self.start_vhost_testpmd( + cores=self.vhost_core_list, + param=vhost_param, + eal_param=vhost_eal_param, + ports=ports, + port_options="", + iova_mode="va", + ) + virtio_param = "--nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024" + for key, path in PACKED_RING_PATH.items(): + virtio_eal_param = ( + "--vdev 'net_virtio_user0,mac=%s,path=vhost-net0,%s,queues=8,server=1'" + % (self.virtio_mac, path) + ) + if key == "inorder_mergeable": + mode = key + "_VA_4_queue" + self.mode_list.append(mode) + self.start_virtio_testpmd( + cores=self.virtio_core_list, + param=virtio_param, + eal_param=virtio_eal_param, + ) + self.send_imix_packets(mode=mode) + self.check_each_queue_of_port_packets(queues=2) + + self.vhost_user_pmd.quit() + vhost_eal_param = "--vdev 'net_vhost0,iface=vhost-net0,queues=8,client=1,'" + vhost_param = "--nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024" + self.start_vhost_testpmd( + cores=self.vhost_core_list, + param=vhost_param, + eal_param=vhost_eal_param, + ports=[self.dut.ports_info[0]["pci"]], + port_options="", + iova_mode="va", + ) + mode = "inorder_mergeable" + "_VA_1_queue_wo_dsa" + self.mode_list.append(mode) + self.send_imix_packets(mode=mode) + self.check_each_queue_of_port_packets(queues=1) + + self.vhost_user_pmd.quit() + dmas = "rxq0@wq0.0;" "rxq1@wq0.1;" "rxq2@wq0.1;" "rxq3@wq0.0" + vhost_eal_param = ( + "--vdev 'net_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[%s]'" % dmas + ) + vhost_param = "--nb-cores=4 --txq=4 --rxq=4 --txd=1024 --rxd=1024" + self.start_vhost_testpmd( + cores=self.vhost_core_list, + param=vhost_param, + eal_param=vhost_eal_param, + ports=ports, + port_options="", + iova_mode="va", + ) + mode = "inorder_mergeable" + "_VA_4_queue_diff" + self.mode_list.append(mode) + self.send_imix_packets(mode=mode) + self.check_each_queue_of_port_packets(queues=4) + + self.vhost_user_pmd.quit() + dmas = ( + "txq0@wq0.0;" + "txq1@wq0.0;" + "txq2@wq0.0;" + "txq3@wq0.0;" + "txq4@wq0.1;" + "txq5@wq0.2;" + "rxq2@wq1.0;" + "rxq3@wq1.1;" + "rxq4@wq1.2;" + "rxq5@wq1.2;" + "rxq6@wq1.2;" + "rxq7@wq1.2" + ) + vhost_eal_param = ( + "--vdev 'net_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[%s]'" % dmas + ) + vhost_param = "--nb-cores=2 --txq=8 --rxq=8 --txd=1024 --rxd=1024" + self.start_vhost_testpmd( + cores=self.vhost_core_list, + param=vhost_param, + eal_param=vhost_eal_param, + ports=ports, + port_options="", + iova_mode="va", + ) + mode = "inorder_mergeable" + "_VA_8_queue_diff" + self.mode_list.append(mode) + self.send_imix_packets(mode=mode) + self.check_each_queue_of_port_packets(queues=8) + + self.vhost_user_pmd.quit() + dmas = ( + "txq0@wq0.0;" + "txq1@wq0.1;" + "txq2@wq0.2;" + "txq3@wq0.3;" + "txq4@wq0.4;" + "txq5@wq0.5;" + "rxq2@wq1.2;" + "rxq3@wq1.3;" + "rxq4@wq1.4;" + "rxq5@wq1.5;" + "rxq6@wq1.6;" + "rxq7@wq1.7" + ) + vhost_eal_param = ( + "--vdev 'net_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[%s]'" % dmas + ) + vhost_param = "--nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024" + self.start_vhost_testpmd( + cores=self.vhost_core_list, + param=vhost_param, + eal_param=vhost_eal_param, + ports=ports, + port_options="", + iova_mode="va", + ) + mode = "inorder_mergeable" + "_VA_8_queue_diff_1" + 
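+        # With every mapped virtqueue now on its own dedicated work queue
+        # (txq0-txq5 on wq0.0-wq0.5, rxq2-rxq7 on wq1.2-wq1.7), this pass is
+        # recorded under the separate "_VA_8_queue_diff_1" label.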
self.mode_list.append(mode) + self.send_imix_packets(mode=mode) + self.check_each_queue_of_port_packets(queues=8) + + self.virtio_user_pmd.quit() + virtio_eal_param = "--force-max-simd-bitwidth=512 --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=vhost-net0,mrg_rxbuf=0,in_order=1,packed_vq=1,vectorized=1,queues=8,server=1,queue_size=1025" + virtio_param = "--nb-cores=4 --txq=8 --rxq=8 --txd=1025 --rxd=1025" + mode = "vectorized_path_not_power_of_2" + "_VA_8_queue_diff" + self.start_virtio_testpmd( + cores=self.virtio_core_list, + param=virtio_param, + eal_param=virtio_eal_param, + ) + self.send_imix_packets(mode=mode) + self.check_each_queue_of_port_packets(queues=8) + + self.test_target = self.running_case + self.expected_throughput = self.get_suite_cfg()["expected_throughput"][ + self.test_target + ] + self.handle_expected(mode_list=self.mode_list) + self.handle_results(mode_list=self.mode_list) + self.vhost_user_pmd.quit() + + def test_perf_pvp_split_and_packed_ring_dynamic_queues_vhost_async_operation_with_dsa_dpdk_and_kernel_driver( + self, + ): + """ + Test Case 13: PVP split and packed ring dynamic queues vhost async operation with dsa dpdk and kernel driver channels + """ + self.DC.create_work_queue(work_queue_number=8, dsa_index=0) + self.DC.create_work_queue(work_queue_number=8, dsa_index=1) + self.use_dsa_list = self.DC.bind_dsa_to_dpdk( + dsa_number=2, + driver_name="vfio-pci", + dsa_index_list=[2, 3], + socket=self.ports_socket, + ) + dmas = "txq0@wq0.0;" "txq1@wq0.0" + vhost_eal_param = ( + "--vdev 'net_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[%s]'" % dmas + ) + vhost_param = "--nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024" + ports = [self.dut.ports_info[0]["pci"]] + self.start_vhost_testpmd( + cores=self.vhost_core_list, + param=vhost_param, + eal_param=vhost_eal_param, + ports=ports, + port_options="", + iova_mode="va", + ) + virtio_param = "--nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024" + for key, path in SPLIT_RING_PATH.items(): + virtio_eal_param = ( + "--vdev 'net_virtio_user0,mac=%s,path=vhost-net0,%s,queues=8,server=1'" + % (self.virtio_mac, path) + ) + if key == "inorder_mergeable": + mode = key + "_VA_kernel_driver" + self.mode_list.append(mode) + self.start_virtio_testpmd( + cores=self.virtio_core_list, + param=virtio_param, + eal_param=virtio_eal_param, + ) + self.send_imix_packets(mode=mode) + self.check_each_queue_of_port_packets(queues=8) + + self.vhost_user_pmd.quit() + dmas = ( + "txq0@%s-q0;" + "txq1@%s-q0;" + "txq2@%s-q1;" + "txq3@%s-q1;" + "rxq0@%s-q0;" + "rxq1@%s-q0;" + "rxq2@%s-q1;" + "rxq3@%s-q2" + % ( + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[1], + self.use_dsa_list[1], + self.use_dsa_list[1], + self.use_dsa_list[1], + ) + ) + vhost_eal_param = ( + "--vdev 'net_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[%s]'" % dmas + ) + vhost_param = "--nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024" + ports = [self.dut.ports_info[0]["pci"]] + for i in self.use_dsa_list: + ports.append(i) + port_options = { + self.use_dsa_list[0]: "max_queues=2", + self.use_dsa_list[1]: "max_queues=4", + } + self.start_vhost_testpmd( + cores=self.vhost_core_list, + param=vhost_param, + eal_param=vhost_eal_param, + ports=ports, + port_options=port_options, + iova_mode="va", + ) + mode = "inorder_mergeable" + "_VA_dpdk_driver" + self.mode_list.append(mode) + self.send_imix_packets(mode=mode) + self.check_each_queue_of_port_packets(queues=8) + + self.vhost_user_pmd.quit() + dmas = ( + 
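+            # Mixed mapping for this relaunch: tx queues stay on kernel idxd work
+            # queues (wq0.0-wq0.6) while rx queues use the DSA devices bound to
+            # vfio-pci earlier in this case, addressed as "<PCI address>-q<n>".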
"txq0@wq0.0;" + "txq1@wq0.1;" + "txq2@wq0.2;" + "txq3@wq0.3;" + "txq4@wq0.4;" + "txq5@wq0.5;" + "txq6@wq0.6;" + "rxq2@%s-q0;" + "rxq3@%s-q1;" + "rxq4@%s-q0;" + "rxq5@%s-q1;" + "rxq6@%s-q2;" + "rxq7@%s-q3" + % ( + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[1], + self.use_dsa_list[1], + self.use_dsa_list[1], + self.use_dsa_list[1], + ) + ) + vhost_eal_param = ( + "--vdev 'net_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[%s]'" % dmas + ) + vhost_param = "--nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024" + ports = [self.dut.ports_info[0]["pci"]] + for i in self.use_dsa_list: + ports.append(i) + port_options = { + self.use_dsa_list[0]: "max_queues=2", + self.use_dsa_list[1]: "max_queues=4", + } + self.start_vhost_testpmd( + cores=self.vhost_core_list, + param=vhost_param, + eal_param=vhost_eal_param, + ports=ports, + port_options=port_options, + iova_mode="va", + ) + mode = "inorder_mergeable" + "_VA_kernel_dpdk_driver" + self.mode_list.append(mode) + self.send_imix_packets(mode=mode) + self.check_each_queue_of_port_packets(queues=8) + + self.virtio_user_pmd.quit() + virtio_param = "--nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024" + for key, path in PACKED_RING_PATH.items(): + virtio_eal_param = ( + "--vdev 'net_virtio_user0,mac=%s,path=vhost-net0,%s,queues=8,server=1'" + % (self.virtio_mac, path) + ) + if key == "inorder_mergeable": + mode = key + "_VA_kernel_dpdk_driver_packed" + self.mode_list.append(mode) + self.start_virtio_testpmd( + cores=self.virtio_core_list, + param=virtio_param, + eal_param=virtio_eal_param, + ) + self.send_imix_packets(mode=mode) + self.check_each_queue_of_port_packets(queues=8) + + self.vhost_user_pmd.quit() + dmas = ( + "txq0@wq0.0;" + "txq1@wq0.1;" + "txq2@wq0.2;" + "txq3@wq0.3;" + "txq4@wq0.4;" + "txq5@wq0.5;" + "txq6@wq0.6;" + "rxq2@%s-q0;" + "rxq3@%s-q1;" + "rxq4@%s-q0;" + "rxq5@%s-q1;" + "rxq6@%s-q2;" + "rxq7@%s-q3" + % ( + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[1], + self.use_dsa_list[1], + self.use_dsa_list[1], + self.use_dsa_list[1], + ) + ) + vhost_eal_param = ( + "--vdev 'net_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[%s]'" % dmas + ) + vhost_param = "--nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024" + ports = [self.dut.ports_info[0]["pci"]] + for i in self.use_dsa_list: + ports.append(i) + port_options = { + self.use_dsa_list[0]: "max_queues=2", + self.use_dsa_list[1]: "max_queues=4", + } + self.start_vhost_testpmd( + cores=self.vhost_core_list, + param=vhost_param, + eal_param=vhost_eal_param, + ports=ports, + port_options=port_options, + iova_mode="va", + ) + mode = "inorder_mergeable" + "_VA_kernel_dpdk_driver_packed_diff" + self.mode_list.append(mode) + self.send_imix_packets(mode=mode) + self.check_each_queue_of_port_packets(queues=8) + + self.test_target = self.running_case + self.expected_throughput = self.get_suite_cfg()["expected_throughput"][ + self.test_target + ] + self.handle_expected(mode_list=self.mode_list) + self.handle_results(mode_list=self.mode_list) + self.vhost_user_pmd.quit() + + def send_imix_packets(self, mode): + """ + Send imix packet with packet generator and verify + """ + frame_sizes = [64, 128, 256, 512, 1024, 1518] + tgenInput = [] + for frame_size in frame_sizes: + payload_size = frame_size - self.headers_size + port = self.tester.get_local_port(self.dut_ports[0]) + fields_config = { + "ip": { + "src": {"action": "random"}, + }, + } + pkt = Packet() + pkt.assign_layers(["ether", "ipv4", "tcp", "raw"]) + pkt.config_layers( + [ + ("ether", {"dst": "%s" % 
self.virtio_mac}), + ("ipv4", {"src": "1.1.1.1"}), + ("raw", {"payload": ["01"] * int("%d" % payload_size)}), + ] + ) + pkt.save_pcapfile( + self.tester, + "%s/%s_%s.pcap" % (self.out_path, self.suite_name, frame_size), + ) + tgenInput.append( + ( + port, + port, + "%s/%s_%s.pcap" % (self.out_path, self.suite_name, frame_size), + ) + ) + + self.tester.pktgen.clear_streams() + streams = self.pktgen_helper.prepare_stream_from_tginput( + tgenInput, 100, fields_config, self.tester.pktgen + ) + trans_options = {"delay": 5, "duration": self.test_duration} + bps, pps = self.tester.pktgen.measure_throughput( + stream_ids=streams, options=trans_options + ) + Mpps = pps / 1000000.0 + Mbps = bps / 1000000.0 + self.verify( + Mbps > 0, + f"{self.running_case} can not receive packets of frame size {frame_sizes}", + ) + bps_linerate = self.wirespeed(self.nic, 64, 1) * 8 * (64 + 20) + throughput = Mbps * 100 / float(bps_linerate) + self.throughput[mode] = { + "imix": { + self.nb_desc: [Mbps, Mpps], + } + } + results_row = ["imix"] + results_row.append(mode) + results_row.append(Mpps) + results_row.append(throughput) + self.result_table_add(results_row) + + def handle_expected(self, mode_list): + """ + Update expected numbers to configurate file: conf/$suite_name.cfg + """ + if load_global_setting(UPDATE_EXPECTED) == "yes": + for mode in mode_list: + for frame_size in self.test_parameters.keys(): + for nb_desc in self.test_parameters[frame_size]: + if frame_size == "imix": + self.expected_throughput[mode][frame_size][nb_desc] = round( + self.throughput[mode][frame_size][nb_desc][1], 3 + ) + else: + self.expected_throughput[mode][frame_size][nb_desc] = round( + self.throughput[mode][frame_size][nb_desc], 3 + ) + + def handle_results(self, mode_list): + """ + results handled process: + """ + header = self.table_header + header.append("nb_desc") + header.append("Expected Throughput") + header.append("Throughput Difference") + for mode in mode_list: + self.test_result[mode] = dict() + for frame_size in self.test_parameters.keys(): + ret_datas = {} + if frame_size == "imix": + bps_linerate = self.wirespeed(self.nic, 64, 1) * 8 * (64 + 20) + ret_datas = {} + for nb_desc in self.test_parameters[frame_size]: + ret_data = {} + ret_data[header[0]] = frame_size + ret_data[header[1]] = mode + ret_data[header[2]] = "{:.3f} Mpps".format( + self.throughput[mode][frame_size][nb_desc][1] + ) + ret_data[header[3]] = "{:.3f}%".format( + self.throughput[mode][frame_size][nb_desc][0] + * 100 + / bps_linerate + ) + ret_data[header[4]] = nb_desc + ret_data[header[5]] = "{:.3f} Mpps".format( + self.expected_throughput[mode][frame_size][nb_desc] + ) + ret_data[header[6]] = "{:.3f} Mpps".format( + self.throughput[mode][frame_size][nb_desc][1] + - self.expected_throughput[mode][frame_size][nb_desc] + ) + ret_datas[nb_desc] = deepcopy(ret_data) + else: + wirespeed = self.wirespeed( + self.nic, frame_size, self.number_of_ports + ) + for nb_desc in self.test_parameters[frame_size]: + ret_data = {} + ret_data[header[0]] = frame_size + ret_data[header[1]] = mode + ret_data[header[2]] = "{:.3f} Mpps".format( + self.throughput[mode][frame_size][nb_desc] + ) + ret_data[header[3]] = "{:.3f}%".format( + self.throughput[mode][frame_size][nb_desc] * 100 / wirespeed + ) + ret_data[header[4]] = nb_desc + ret_data[header[5]] = "{:.3f} Mpps".format( + self.expected_throughput[mode][frame_size][nb_desc] + ) + ret_data[header[6]] = "{:.3f} Mpps".format( + self.throughput[mode][frame_size][nb_desc] + - 
self.expected_throughput[mode][frame_size][nb_desc] + ) + ret_datas[nb_desc] = deepcopy(ret_data) + self.test_result[mode][frame_size] = deepcopy(ret_datas) + self.result_table_create(header) + for mode in mode_list: + for frame_size in self.test_parameters.keys(): + for nb_desc in self.test_parameters[frame_size]: + table_row = list() + for i in range(len(header)): + table_row.append( + self.test_result[mode][frame_size][nb_desc][header[i]] + ) + self.result_table_add(table_row) + self.result_table_print() + if self.save_result_flag: + self.save_result(self.test_result, mode_list) + + def save_result(self, data, mode_list): + """ + Saves the test results as a separated file named with + self.nic+_perf_virtio_user_pvp.json in output folder + if self.save_result_flag is True + """ + case_name = self.running_case + self.json_obj[case_name] = list() + status_result = [] + for mode in mode_list: + for frame_size in self.test_parameters.keys(): + for nb_desc in self.test_parameters[frame_size]: + row_in = self.test_result[mode][frame_size][nb_desc] + row_dict0 = dict() + row_dict0["performance"] = list() + row_dict0["parameters"] = list() + row_dict0["parameters"] = list() + result_throughput = float(row_in["Mpps"].split()[0]) + expected_throughput = float( + row_in["Expected Throughput"].split()[0] + ) + # delta value and accepted tolerance in percentage + delta = result_throughput - expected_throughput + gap = expected_throughput * -self.gap * 0.01 + delta = float(delta) + gap = float(gap) + self.logger.info("Accept tolerance are (Mpps) %f" % gap) + self.logger.info("Throughput Difference are (Mpps) %f" % delta) + if result_throughput > expected_throughput + gap: + row_dict0["status"] = "PASS" + else: + row_dict0["status"] = "FAIL" + row_dict1 = dict( + name="Throughput", + value=result_throughput, + unit="Mpps", + delta=delta, + ) + row_dict2 = dict( + name="Txd/Rxd", value=row_in["Mode/RXD-TXD"], unit="descriptor" + ) + row_dict3 = dict( + name="frame_size", value=row_in["Frame"], unit="bytes" + ) + row_dict0["performance"].append(row_dict1) + row_dict0["parameters"].append(row_dict2) + row_dict0["parameters"].append(row_dict3) + self.json_obj[case_name].append(row_dict0) + status_result.append(row_dict0["status"]) + with open( + os.path.join( + rst.path2Result, "{0:s}_{1}.json".format(self.nic, self.suite_name) + ), + "w", + ) as fp: + json.dump(self.json_obj, fp) + self.verify("FAIL" not in status_result, "Exceeded Gap") + + def tear_down(self): + """ + Run after each test case. + """ + self.dut.send_expect("killall -I %s" % self.testpmd_name, "#", 20) + + def tear_down_all(self): + """ + Run after each test suite. 
+ """ + self.dut.close_session(self.vhost_user) + self.dut.close_session(self.virtio_user) + self.dut.kill_all() From patchwork Fri Dec 23 03:12:38 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Ling, WeiX" X-Patchwork-Id: 121338 Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 494F8A0093; Fri, 23 Dec 2022 04:21:27 +0100 (CET) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 4523240697; Fri, 23 Dec 2022 04:21:27 +0100 (CET) Received: from mga11.intel.com (mga11.intel.com [192.55.52.93]) by mails.dpdk.org (Postfix) with ESMTP id 1C55840685 for ; Fri, 23 Dec 2022 04:21:24 +0100 (CET) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1671765685; x=1703301685; h=from:to:cc:subject:date:message-id:mime-version: content-transfer-encoding; bh=zTyHGVOUMwPsNhln8bc+TQnTzgRJ0gS6Kwyb4Tg+ESU=; b=kXp066crZJN9SgMEbDlSehhZPv4+S3/C/tYUGoo7mBqICjEMCEJBw2jB WOcU4atVzw2WJTJtfRKd0Acas2NG1Vj8/suZtDMNe81syZmkDnIoInSuT Q4UNzNszSMPquyTuR5RWYcQYBAQLpScZWexcXm77uw902TAA8eFJCNkMH CqWaDOAuNSe59cMDQLnhpayTOBVqlnKhNtqZyOODPfRPnNSNGUWwVIj/z oRfTWYf8GTbO2t5BkA/C9ooKNFCqDl+t9SCoot1s4//iNEpiv81j45WWY Rsjvi8mCpPv/XBhe9jk7AMJTzpqlwfV4xDLP17IkOpRKIdt2R21GJCTuB Q==; X-IronPort-AV: E=McAfee;i="6500,9779,10569"; a="317909591" X-IronPort-AV: E=Sophos;i="5.96,267,1665471600"; d="scan'208";a="317909591" Received: from orsmga008.jf.intel.com ([10.7.209.65]) by fmsmga102.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 22 Dec 2022 19:21:24 -0800 X-IronPort-AV: E=McAfee;i="6500,9779,10569"; a="682600721" X-IronPort-AV: E=Sophos;i="5.96,267,1665471600"; d="scan'208";a="682600721" Received: from unknown (HELO localhost.localdomain) ([10.239.252.222]) by orsmga008-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 22 Dec 2022 19:21:22 -0800 From: Wei Ling To: dts@dpdk.org Cc: Wei Ling Subject: [dts][PATCH V1 4/4] conf/vhost_dsa: add vhost_dsa testsuite config file Date: Fri, 23 Dec 2022 11:12:38 +0800 Message-Id: <20221223031238.753547-1-weix.ling@intel.com> X-Mailer: git-send-email 2.25.1 MIME-Version: 1.0 X-BeenThere: dts@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: test suite reviews and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dts-bounces@dpdk.org Add vhost_dsa config file in conf. 
Signed-off-by: Wei Ling Acked-by: Lijuan Tu --- conf/vhost_dsa.cfg | 150 +++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 150 insertions(+) create mode 100644 conf/vhost_dsa.cfg diff --git a/conf/vhost_dsa.cfg b/conf/vhost_dsa.cfg new file mode 100644 index 00000000..4fcd0c07 --- /dev/null +++ b/conf/vhost_dsa.cfg @@ -0,0 +1,150 @@ +[suite] +update_expected = True +test_parameters = {'imix': [1024]} +test_duration = 20 +accepted_tolerance = 2 +expected_throughput = { + 'test_perf_pvp_split_ring_vhost_async_operation_test_with_each_tx_rx_queue_using_1_dpdk_driver': { + 'inorder_mergeable_VA': {'imix': {1024: 0.000}}, + 'inorder_mergeable_VA_RestartVhost': {'imix': {1024: 0.000}}, + 'inorder_non_mergeable_VA': {'imix': {1024: 0.000}}, + 'inorder_non_mergeable_VA_RestartVhost': {'imix': {1024: 0.000}}, + 'mergeable_VA': {'imix': {1024: 0.000}}, + 'mergeable_VA_RestartVhost': {'imix': {1024: 0.000}}, + 'non_mergeable_VA': {'imix': {1024: 0.000}}, + 'non_mergeable_VA_RestartVhost': {'imix': {1024: 0.000}}, + 'vectorized_VA': {'imix': {1024: 0.000}}, + 'vectorized_VA_RestartVhost': {'imix': {1024: 0.000}}, + 'inorder_mergeable_PA': {'imix': {1024: 0.000}}, + 'inorder_mergeable_PA_RestartVhost': {'imix': {1024: 0.000}}}, + 'test_perf_pvp_split_ring_vhost_async_operation_test_with_dpdk_driver_being_shared_among_multi_tx_rx_queue': { + 'inorder_mergeable_VA': {'imix': {1024: 0.000}}, + 'inorder_mergeable_VA_RestartVhost': {'imix': {1024: 0.000}}, + 'inorder_non_mergeable_VA': {'imix': {1024: 0.000}}, + 'inorder_non_mergeable_VA_RestartVhost': {'imix': {1024: 0.000}}, + 'mergeable_VA': {'imix': {1024: 0.000}}, + 'mergeable_VA_RestartVhost': {'imix': {1024: 0.000}}, + 'non_mergeable_VA': {'imix': {1024: 0.000}}, + 'non_mergeable_VA_RestartVhost': {'imix': {1024: 0.000}}, + 'vectorized_VA': {'imix': {1024: 0.000}}, + 'vectorized_VA_RestartVhost': {'imix': {1024: 0.000}}, + 'mergeable_PA': {'imix': {1024: 0.000}}, + 'mergeable_PA_RestartVhost': {'imix': {1024: 0.000}}, + 'inorder_non_mergeable_PA': {'imix': {1024: 0.000}}, + 'inorder_non_mergeable_PA_RestartVhost': {'imix': {1024: 0.000}}}, + 'test_perf_pvp_split_ring_dynamic_queues_vhost_async_operation_with_dpdk_driver': { + 'inorder_mergeable_VA_4_queue': {'imix': {1024: 0.000}}, + 'inorder_mergeable_VA_1_queue_wo_dsa': {'imix': {1024: 0.000}}, + 'inorder_mergeable_VA_4_queue_diff': {'imix': {1024: 0.000}}, + 'inorder_mergeable_VA_8_queue_diff': {'imix': {1024: 0.000}}, + 'non_mergeable_VA_8_queue_diff': {'imix': {1024: 0.000}}, + 'non_mergeable_PA_4_queue_diff': {'imix': {1024: 0.000}}}, + 'test_perf_pvp_packed_ring_vhost_async_operation_test_with_each_tx_rx_queue_using_1_dpdk_driver': { + 'inorder_mergeable_VA': {'imix': {1024: 0.000}}, + 'inorder_mergeable_VA_RestartVhost': {'imix': {1024: 0.000}}, + 'inorder_non_mergeable_VA': {'imix': {1024: 0.000}}, + 'inorder_non_mergeable_VA_RestartVhost': {'imix': {1024: 0.000}}, + 'mergeable_VA': {'imix': {1024: 0.000}}, + 'mergeable_VA_RestartVhost': {'imix': {1024: 0.000}}, + 'non_mergeable_VA': {'imix': {1024: 0.000}}, + 'non_mergeable_VA_RestartVhost': {'imix': {1024: 0.000}}, + 'vectorized_VA': {'imix': {1024: 0.000}}, + 'vectorized_VA_RestartVhost': {'imix': {1024: 0.000}}, + 'vectorized_path_not_power_of_2_VA': {'imix': {1024: 0.000}}, + 'vectorized_path_not_power_of_2_VA_RestartVhost': {'imix': {1024: 0.000}}, + 'inorder_mergeable_PA': {'imix': {1024: 0.000}}, + 'inorder_mergeable_PA_RestartVhost': {'imix': {1024: 0.000}}}, + 
'test_perf_pvp_packed_ring_vhost_async_operation_test_with_1_dpdk_driver_being_shared_among_multi_tx_rx_queue': { + 'inorder_mergeable_VA': {'imix': {1024: 0.000}}, + 'inorder_mergeable_VA_RestartVhost': {'imix': {1024: 0.000}}, + 'inorder_non_mergeable_VA': {'imix': {1024: 0.000}}, + 'inorder_non_mergeable_VA_RestartVhost': {'imix': {1024: 0.000}}, + 'mergeable_VA': {'imix': {1024: 0.000}}, + 'mergeable_VA_RestartVhost': {'imix': {1024: 0.000}}, + 'non_mergeable_VA': {'imix': {1024: 0.000}}, + 'non_mergeable_VA_RestartVhost': {'imix': {1024: 0.000}}, + 'vectorized_VA': {'imix': {1024: 0.000}}, + 'vectorized_VA_RestartVhost': {'imix': {1024: 0.000}}, + 'vectorized_path_not_power_of_2_VA': {'imix': {1024: 0.000}}, + 'vectorized_path_not_power_of_2_VA_RestartVhost': {'imix': {1024: 0.000}}, + 'inorder_non_mergeable_PA': {'imix': {1024: 0.000}}, + 'inorder_non_mergeable_PA_RestartVhost': {'imix': {1024: 0.000}}}, + 'test_perf_pvp_packed_ring_dynamic_queues_vhost_async_operation_with_dpdk_driver': { + 'inorder_mergeable_VA_4_queue': {'imix': {1024: 0.000}}, + 'inorder_mergeable_VA_1_queue_wo_dsa': {'imix': {1024: 0.000}}, + 'inorder_mergeable_VA_4_queue_diff': {'imix': {1024: 0.000}}, + 'inorder_mergeable_VA_8_queue_diff': {'imix': {1024: 0.000}}, + 'non_mergeable_VA_8_queue_diff': {'imix': {1024: 0.000}}, + 'non_mergeable_PA_4_queue_diff': {'imix': {1024: 0.000}}}, + 'test_perf_pvp_split_ring_vhost_async_operation_test_with_each_tx_rx_queue_using_1_kernel_driver': { + 'inorder_mergeable_VA': {'imix': {1024: 0.000}}, + 'inorder_mergeable_VA_RestartVhost': {'imix': {1024: 0.000}}, + 'inorder_non_mergeable_VA': {'imix': {1024: 0.000}}, + 'inorder_non_mergeable_VA_RestartVhost': {'imix': {1024: 0.000}}, + 'mergeable_VA': {'imix': {1024: 0.000}}, + 'mergeable_VA_RestartVhost': {'imix': {1024: 0.000}}, + 'non_mergeable_VA': {'imix': {1024: 0.000}}, + 'non_mergeable_VA_RestartVhost': {'imix': {1024: 0.000}}, + 'vectorized_VA': {'imix': {1024: 0.000}}, + 'vectorized_VA_RestartVhost': {'imix': {1024: 0.000}}}, + 'test_perf_pvp_split_ring_vhost_async_operation_test_with_1_kernel_driver_being_shared_among_multi_tx_rx_queue': { + 'inorder_mergeable_VA': {'imix': {1024: 0.000}}, + 'inorder_mergeable_VA_RestartVhost': {'imix': {1024: 0.000}}, + 'inorder_non_mergeable_VA': {'imix': {1024: 0.000}}, + 'inorder_non_mergeable_VA_RestartVhost': {'imix': {1024: 0.000}}, + 'mergeable_VA': {'imix': {1024: 0.000}}, + 'mergeable_VA_RestartVhost': {'imix': {1024: 0.000}}, + 'non_mergeable_VA': {'imix': {1024: 0.000}}, + 'non_mergeable_VA_RestartVhost': {'imix': {1024: 0.000}}, + 'vectorized_VA': {'imix': {1024: 0.000}}, + 'vectorized_VA_RestartVhost': {'imix': {1024: 0.000}}, + 'inorder_mergeable_PA': {'imix': {1024: 0.000}}, + 'inorder_mergeable_PA_RestartVhost': {'imix': {1024: 0.000}}}, + 'test_perf_pvp_split_ring_dynamic_queue_vhost_async_operation_with_dsa_kernel_driver': { + 'inorder_mergeable_VA_4_queue': {'imix': {1024: 0.000}}, + 'inorder_mergeable_VA_1_queue_wo_dsa': {'imix': {1024: 0.000}}, + 'inorder_mergeable_VA_4_queue_diff': {'imix': {1024: 0.000}}, + 'inorder_mergeable_VA_8_queue_diff': {'imix': {1024: 0.000}}, + 'inorder_mergeable_VA_8_queue_diff_1': {'imix': {1024: 0.000}}, + 'non_mergeable_VA_8_queue_diff': {'imix': {1024: 0.000}}}, + 'test_perf_pvp_packed_ring_vhost_async_operation_test_with_each_tx_rx_queue_using_1_dsa_kernel_driver': { + 'inorder_mergeable_VA': {'imix': {1024: 0.000}}, + 'inorder_mergeable_VA_RestartVhost': {'imix': {1024: 0.000}}, + 'inorder_non_mergeable_VA': {'imix': {1024: 
0.000}}, + 'inorder_non_mergeable_VA_RestartVhost': {'imix': {1024: 0.000}}, + 'mergeable_VA': {'imix': {1024: 0.000}}, + 'mergeable_VA_RestartVhost': {'imix': {1024: 0.000}}, + 'non_mergeable_VA': {'imix': {1024: 0.000}}, + 'non_mergeable_VA_RestartVhost': {'imix': {1024: 0.000}}, + 'vectorized_VA': {'imix': {1024: 0.000}}, + 'vectorized_VA_RestartVhost': {'imix': {1024: 0.000}}, + 'vectorized_path_not_power_of_2_VA': {'imix': {1024: 0.000}}, + 'vectorized_path_not_power_of_2_VA_RestartVhost': {'imix': {1024: 0.000}}}, + 'test_perf_pvp_packed_ring_vhost_async_operation_test_with_1_kernel_driver_being_shared_among_multi_tx_rx_queue': { + 'inorder_mergeable_VA': {'imix': {1024: 0.000}}, + 'inorder_mergeable_VA_RestartVhost': {'imix': {1024: 0.000}}, + 'inorder_non_mergeable_VA': {'imix': {1024: 0.000}}, + 'inorder_non_mergeable_VA_RestartVhost': {'imix': {1024: 0.000}}, + 'mergeable_VA': {'imix': {1024: 0.000}}, + 'mergeable_VA_RestartVhost': {'imix': {1024: 0.000}}, + 'non_mergeable_VA': {'imix': {1024: 0.000}}, + 'non_mergeable_VA_RestartVhost': {'imix': {1024: 0.000}}, + 'vectorized_VA': {'imix': {1024: 0.000}}, + 'vectorized_VA_RestartVhost': {'imix': {1024: 0.000}}, + 'vectorized_path_not_power_of_2_VA': {'imix': {1024: 0.000}}, + 'vectorized_path_not_power_of_2_VA_RestartVhost': {'imix': {1024: 0.000}}, + 'vectorized_path_not_power_of_2_PA': {'imix': {1024: 0.000}}, + 'vectorized_path_not_power_of_2_PA_RestartVhost': {'imix': {1024: 0.000}}}, + 'test_perf_pvp_packed_ring_dynamic_queues_vhost_async_operation_with_dsa_kernel_driver': { + 'inorder_mergeable_VA_4_queue': {'imix': {1024: 0.000}}, + 'inorder_mergeable_VA_1_queue_wo_dsa': {'imix': {1024: 0.000}}, + 'inorder_mergeable_VA_4_queue_diff': {'imix': {1024: 0.000}}, + 'inorder_mergeable_VA_8_queue_diff': {'imix': {1024: 0.000}}, + 'inorder_mergeable_VA_8_queue_diff_1': {'imix': {1024: 0.000}}, + 'vectorized_path_not_power_of_2_VA_8_queue_diff': {'imix': {1024: 0.000}}}, + 'test_perf_pvp_split_and_packed_ring_dynamic_queues_vhost_async_operation_with_dsa_dpdk_and_kernel_driver': { + 'inorder_mergeable_VA_kernel_driver': {'imix': {1024: 0.000}}, + 'inorder_mergeable_VA_dpdk_driver': {'imix': {1024: 0.000}}, + 'inorder_mergeable_VA_kernel_dpdk_driver': {'imix': {1024: 0.000}}, + 'inorder_mergeable_VA_kernel_dpdk_driver_packed': {'imix': {1024: 0.000}}, + 'inorder_mergeable_VA_kernel_dpdk_driver_packed_diff': {'imix': {1024: 0.000}}}}
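For reference, a minimal standalone sketch (not part of the suite or of this
patch) of how a measured result is judged against these expected_throughput
entries, mirroring the check in save_result(); it assumes the suite's
self.gap is loaded from accepted_tolerance above, and the expected and
measured Mpps values below are hypothetical:

    # Accepted tolerance, in percent, from the [suite] section above.
    accepted_tolerance = 2
    expected = 10.000   # hypothetical expected Mpps for one mode/imix/1024 entry
    measured = 9.850    # hypothetical measured Mpps

    # save_result() lets the result fall below the expectation by at most
    # accepted_tolerance percent: gap is a negative allowance in Mpps.
    gap = expected * -accepted_tolerance * 0.01   # -0.200 Mpps here
    status = "PASS" if measured > expected + gap else "FAIL"
    print(status)                                 # PASS, since 9.850 > 9.800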