From patchwork Thu Dec 22 03:35:21 2022 X-Patchwork-Submitter: "Ling, WeiX" X-Patchwork-Id: 121252 From: Wei Ling To: dts@dpdk.org Cc: Wei Ling Subject: [dts][PATCH V2 1/2] test_plans/vm2vm_virtio_pmd_dsa_test_plan: modify dmas parameter by DPDK changed Date: Thu, 22 Dec 2022 11:35:21 +0800 Message-Id: <20221222033521.175636-1-weix.ling@intel.com> X-Mailer: git-send-email 2.25.1 List-Id: test suite reviews and discussions The dmas parameter has been changed by the local vhost PMD patch, so modify the dmas parameter in the test plan accordingly. Signed-off-by: Wei Ling --- test_plans/vm2vm_virtio_pmd_dsa_test_plan.rst | 298 ++++++++---------- 1 file changed, 127 insertions(+), 171 deletions(-)
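With the DPDK change this plan tracks, each tx/rx virtqueue in the dmas list is bound inline to one DSA device queue (for example txq0@0000:f1:01.0-q0), and the former --lcore-dma option is dropped from the vhost testpmd command lines. Below is a minimal sketch of how such a string can be assembled for the command lines in this plan; the helper name build_dmas and the DSA address are illustrative placeholders, not part of DTS or DPDK:

    # Sketch: build a dmas list in the new inline format, mapping each of N
    # tx/rx queue pairs to one queue of a single DSA device (placeholder BDF).
    def build_dmas(dsa_bdf, queue_pairs):
        tx = ["txq%d@%s-q%d" % (i, dsa_bdf, i) for i in range(queue_pairs)]
        rx = ["rxq%d@%s-q%d" % (i, dsa_bdf, i) for i in range(queue_pairs)]
        return "[" + ";".join(tx + rx) + "]"

    # Yields dmas=[txq0@0000:f1:01.0-q0;txq1@0000:f1:01.0-q1;...;rxq3@0000:f1:01.0-q3]
    print("dmas=" + build_dmas("0000:f1:01.0", 4))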
diff --git a/test_plans/vm2vm_virtio_pmd_dsa_test_plan.rst b/test_plans/vm2vm_virtio_pmd_dsa_test_plan.rst index afad2a1b..cc6ac7f3 100644 --- a/test_plans/vm2vm_virtio_pmd_dsa_test_plan.rst +++ b/test_plans/vm2vm_virtio_pmd_dsa_test_plan.rst @@ -1,36 +1,36 @@ .. SPDX-License-Identifier: BSD-3-Clause Copyright(c) 2022 Intel Corporation -======================================================= -vm2vm vhost-user/virtio-pmd with dsa driver test plan -======================================================= +===================================================== +vm2vm vhost-user/virtio-pmd with DSA driver test plan +===================================================== -This document provides the test plan for testing some basic functions with DSA driver(kernel IDXD driver and DPDK vfio-pci driver) +This document provides the test plan for testing some basic functions with the DSA driver (kernel IDXD driver and DPDK vfio-pci driver) in vm2vm vhost-user/virtio-pmd topology environment. 1. vm2vm mergeable, non-mergeable path test with virtio 1.0 and virtio 1.1, and check virtio-pmd tx chain packets in mergeable path. 2. dynamic queue number change. -..Note: -1.For packed virtqueue virtio-net test, need qemu version > 4.2.0 and VM kernel version > 5.1, and packed ring multi-queues not support reconnect in qemu yet. -2.For split virtqueue virtio-net with multi-queues server mode test, better to use qemu version >= 5.2.0, dut to qemu(v4.2.0~v5.1.0) exist split ring multi-queues reconnection issue. -3.When DMA devices are bound to vfio driver, VA mode is the default and recommended. For PA mode, page by page mapping may -exceed IOMMU's max capability, better to use 1G guest hugepage. -4.DPDK local patch that about vhost pmd is needed when testing Vhost asynchronous data path with testpmd. +.. note:: -For more about dpdk-testpmd sample, please refer to the DPDK docments: -https://doc.dpdk.org/guides/testpmd_app_ug/run_app.html -For more about qemu, you can refer to the qemu doc: -https://qemu-project.gitlab.io/qemu/system/invocation.html + 1. For packed virtqueue virtio-net tests, qemu version > 4.2.0 and VM kernel version > 5.1 are needed, and packed ring multi-queues do not support reconnect in qemu yet. + 2. For split virtqueue virtio-net with multi-queues server mode tests, it is better to use qemu version >= 5.2.0, due to a split ring multi-queues reconnection issue in qemu (v4.2.0~v5.1.0). + 3. When DMA devices are bound to the vfio driver, VA mode is the default and recommended. For PA mode, page by page mapping may + exceed the IOMMU's max capability, so it is better to use 1G guest hugepages. + 4. A local DPDK patch for the vhost PMD is needed when testing the vhost asynchronous data path with testpmd. In this patch, + we enable the asynchronous data path for the vhost PMD. The asynchronous data path is enabled per tx/rx queue, and users need to specify + the DMA device used by each tx/rx queue. Each tx/rx queue only supports one DMA device (this is limited by the + implementation of the vhost PMD), but one DMA device can be shared among multiple tx/rx queues of different vhost PMD ports. Prerequisites ============= Topology -------- - Test flow: Virtio-pmd-->Vhost-user-->Testpmd-->Vhost-user-->Virtio-pmd + Test flow: Virtio-pmd-->Vhost-user-->Testpmd-->Vhost-user-->Virtio-pmd Software -------- - iperf + iperf General set up -------------- @@ -42,7 +42,7 @@ General set up CC=gcc meson --werror -Denable_kmods=True -Dlibdir=lib -Dexamples=all --default-library=static x86_64-native-linuxapp-gcc ninja -C x86_64-native-linuxapp-gcc -j 110 -2. Get the DMA device ID of DUT, for example, 0000:6a:01.0 is DMA device ID:: +2. 
Get the PCI device of DUT, for example, 0000:6a:01.0 - 0000:f6:01.0 are DSA devices:: # ./usertools/dpdk-devbind.py -s @@ -64,8 +64,8 @@ Common steps ------------ 1. Bind DSA devices to DPDK vfio-pci driver:: - # ./usertools/dpdk-devbind.py -b vfio-pci - For example, bind 2 DMA devices to vfio-pci driver: + # ./usertools/dpdk-devbind.py -b vfio-pci + For example, bind 2 DSA devices to vfio-pci driver: # ./usertools/dpdk-devbind.py -b vfio-pci 0000:e7:01.0 0000:ec:01.0 .. note:: @@ -77,18 +77,18 @@ Common steps 2. Bind DSA devices to kernel idxd driver, and configure Work Queue (WQ):: - # ./usertools/dpdk-devbind.py -b idxd - # ./drivers/dma/dma/idxd/dpdk_idxd_cfg.py -q + # ./usertools/dpdk-devbind.py -b idxd + # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q .. note:: Better to reset the WQ when you need to operate DSA devices bound to the idxd driver: - # ./drivers/dma/idxd/dpdk_idxd_cfg.py --reset + # ./drivers/dma/idxd/dpdk_idxd_cfg.py --reset You can check it by 'ls /dev/dsa' - numDevices: number of devices, where 0<=numDevices<=7, corresponding to 0000:6a:01.0 - 0000:f6:01.0 - numWq: Number of workqueues per DSA endpoint, where 1<=numWq<=8 + dsa_idx: Index of DSA devices, where 0<=dsa_idx<=7, corresponding to 0000:6a:01.0 - 0000:f6:01.0 + wq_num: Number of workqueues per DSA endpoint, where 1<=wq_num<=8 - For example, bind 2 DMA devices to idxd driver and configure WQ: + For example, bind 2 DSA devices to idxd driver and configure WQ: # ./usertools/dpdk-devbind.py -b idxd 6a:01.0 6f:01.0 # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 1 0 @@ -96,24 +96,23 @@ Common steps Check WQ by 'ls /dev/dsa' and can find "wq0.0 wq1.0 wq1.1 wq1.2 wq1.3" Test Case 1: VM2VM virtio-pmd split ring mergeable path dynamic queue size with dsa dpdk driver and server mode ------------------------------------------------------------------------------------------------------------------ -This case tests split ring mergeable path in VM2VM vhost-user/virtio-pmd topology when vhost uses the asynchronous operations with dsa dpdk driver, +--------------------------------------------------------------------------------------------------------------- +This case tests split ring mergeable path in VM2VM vhost-user/virtio-pmd topology when vhost uses the asynchronous operations with dsa dpdk driver, check that it can tx chain packets normally after dynamically changing queue number from vhost, reconnection has also been tested. -1. Bind 2 dsa device to vfio-pci, then launch the testpmd with 2 vhost ports below commands:: +1. 
Bind 2 DSA device to vfio-pci, then launch the testpmd with 2 vhost ports below commands:: - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost -a 0000:f1:01.0,max_queues=4 -a 0000:f6:01.0,max_queues=4 \ - --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;rxq0;rxq1;rxq2;rxq3]' \ - --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;rxq0;rxq1;rxq2;rxq3]' \ - -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=4 --txq=4 \ - --lcore-dma=[lcore2@0000:f1:01.0-q0,lcore2@0000:f1:01.0-q1,lcore3@0000:f1:01.0-q2,lcore3@0000:f1:01.0-q3,lcore4@0000:f6:01.0-q0,lcore4@0000:f6:01.0-q1,lcore5@0000:f6:01.0-q2,lcore5@0000:f6:01.0-q3] + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost -a 0000:f1:01.0,max_queues=4 \ + --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0@0000:f1:01.0-q0;txq1@0000:f1:01.0-q0;txq2@0000:f1:01.0-q0;txq3@0000:f1:01.0-q0;rxq0@0000:f1:01.0-q1;rxq1@0000:f1:01.0-q1;rxq2@0000:f1:01.0-q1;rxq3@0000:f1:01.0-q1]' \ + --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0@0000:f1:01.0-q2;txq1@0000:f1:01.0-q2;txq2@0000:f1:01.0-q2;txq3@0000:f1:01.0-q2;rxq0@0000:f1:01.0-q3;rxq1@0000:f1:01.0-q3;rxq2@0000:f1:01.0-q3;rxq3@0000:f1:01.0-q3]' \ + -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=4 --txq=4 testpmd>start 2. Launch VM1 and VM2 using qemu:: - taskset -c 6-16 /usr/local/qemu-7.0.0/bin/qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 9 -m 4096 \ + taskset -c 6-16 /usr/local/qemu-7.1.0/bin/qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 9 -m 4096 \ -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \ - -numa node,memdev=mem -mem-prealloc -drive file=/home/xingguang/osimg/ubuntu20-04.img \ + -numa node,memdev=mem -mem-prealloc -drive file=/home/xingguang/osimg/ubuntu22-04.img \ -chardev socket,path=/tmp/vm1_qga0.sock,server,nowait,id=vm1_qga0 -device virtio-serial \ -device virtserialport,chardev=vm1_qga0,name=org.qemu.guest_agent.2 -daemonize \ -monitor unix:/tmp/vm1_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ @@ -122,9 +121,9 @@ check that it can tx chain packets normally after dynamically changing queue num -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \ -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :10 - taskset -c 17-27 /usr/local/qemu-7.0.0/bin/qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 9 -m 4096 \ + taskset -c 17-27 /usr/local/qemu-7.1.0/bin/qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 9 -m 4096 \ -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \ - -numa node,memdev=mem -mem-prealloc -drive file=/home/xingguang/osimg/ubuntu20-04-2.img \ + -numa node,memdev=mem -mem-prealloc -drive file=/home/xingguang/osimg/ubuntu22-04-2.img \ -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \ -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \ -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ @@ -177,47 +176,28 @@ check that it can tx chain packets normally after dynamically changing queue num testpmd>start tx_first 32 testpmd>show port stats all -10. Check vhost testpmd RX/TX can work normally, packets can looped between two VMs and both 8 queues can RX/TX traffic. +10. 
Check vhost testpmd RX/TX can work normally, packets can be looped between two VMs and all 8 queues can RX/TX traffic. 11. Rerun step 6. -12. Relaunch and start vhost side testpmd with 8 queues:: - - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost -a 0000:f1:01.0,max_queues=8 \ - --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \ - --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \ - -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8 \ - --lcore-dma=[lcore2@0000:f1:01.0-q0,lcore2@0000:f1:01.0-q1,lcore3@0000:f1:01.0-q2,lcore3@0000:f1:01.0-q3,lcore4@0000:f1:01.0-q4,lcore4@0000:f1:01.0-q5,lcore5@0000:f1:01.0-q6,lcore5@0000:f1:01.0-q7] - testpmd>start -13. Send packets by testpmd in VM2, check imix packets can looped between two VMs for 1 mins and 8 queues all have packets rx/tx:: - testpmd>stop - testpmd>start tx_first 32 - testpmd>show port stats all - testpmd>stop -14. Rerun step 12-13 for 3 times. Test Case 2: VM2VM virtio-pmd split ring non-mergeable path dynamic queue size with dsa dpdk driver and server mode ----------------------------------------------------------------------------------------------------------------------- -This case tests split ring non-mergeable path in VM2VM vhost-user/virtio-pmd topology when vhost uses the asynchronous operations with dsa dpdk driver, +------------------------------------------------------------------------------------------------------------------- +This case tests split ring non-mergeable path in VM2VM vhost-user/virtio-pmd topology when vhost uses the asynchronous operations with dsa dpdk driver, check that it can work normally after dynamically changing queue number at virtio-pmd side, reconnection has also been tested. -1. Bind 2 dsa device to vfio-pci, then launch the testpmd with 2 vhost ports below commands:: +1. Bind 2 DSA devices to vfio-pci, then launch the testpmd with 2 vhost ports with the below commands:: ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost -a 0000:f1:01.0,max_queues=4 -a 0000:f6:01.0,max_queues=4 \ - --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;rxq0;rxq1;rxq2;rxq3]' \ - --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;rxq0;rxq1;rxq2;rxq3]' \ - -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8 \ - --lcore-dma=[lcore2@0000:f1:01.0-q0,lcore2@0000:f1:01.0-q1,lcore3@0000:f1:01.0-q2,lcore3@0000:f1:01.0-q3,lcore4@0000:f6:01.0-q0,lcore4@0000:f6:01.0-q1,lcore5@0000:f6:01.0-q2,lcore5@0000:f6:01.0-q3] + --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0@0000:f1:01.0-q0;txq1@0000:f1:01.0-q0;txq2@0000:f1:01.0-q1;txq3@0000:f1:01.0-q1;rxq0@0000:f1:01.0-q2;rxq1@0000:f1:01.0-q2;rxq2@0000:f1:01.0-q3;rxq3@0000:f1:01.0-q3]' \ + --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0@0000:f6:01.0-q0;txq1@0000:f6:01.0-q0;txq2@0000:f6:01.0-q1;txq3@0000:f6:01.0-q1;rxq0@0000:f6:01.0-q2;rxq1@0000:f6:01.0-q2;rxq2@0000:f6:01.0-q3;rxq3@0000:f6:01.0-q3]' \ + -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8 testpmd>start 2. 
Launch VM1 and VM2 using qemu:: - taskset -c 6-16 /usr/local/qemu-7.0.0/bin/qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 9 -m 4096 \ + taskset -c 6-16 /usr/local/qemu-7.1.0/bin/qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 9 -m 4096 \ -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \ - -numa node,memdev=mem -mem-prealloc -drive file=/home/xingguang/osimg/ubuntu20-04.img \ + -numa node,memdev=mem -mem-prealloc -drive file=/home/xingguang/osimg/ubuntu22-04.img \ -chardev socket,path=/tmp/vm1_qga0.sock,server,nowait,id=vm1_qga0 -device virtio-serial \ -device virtserialport,chardev=vm1_qga0,name=org.qemu.guest_agent.2 -daemonize \ -monitor unix:/tmp/vm1_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ @@ -226,9 +206,9 @@ check that it can work normally after dynamically changing queue number at virti -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \ -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=off,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :10 - taskset -c 17-27 /usr/local/qemu-7.0.0/bin/qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 9 -m 4096 \ + taskset -c 17-27 /usr/local/qemu-7.1.0/bin/qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 9 -m 4096 \ -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \ - -numa node,memdev=mem -mem-prealloc -drive file=/home/xingguang/osimg/ubuntu20-04-2.img \ + -numa node,memdev=mem -mem-prealloc -drive file=/home/xingguang/osimg/ubuntu22-04-2.img \ -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \ -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \ -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ @@ -241,18 +221,18 @@ check that it can work normally after dynamically changing queue number at virti modprobe vfio modprobe vfio-pci - echo 1 > /sys/module/vfio/parameters/enable_unsafe_noiommu_mode + echo 1 > /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ./usertools/dpdk-devbind.py --force --bind=vfio-pci 0000:00:05.0 4. Launch testpmd in VM1:: - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x00 --enable-hw-vlan-strip --txq=8 --rxq=8 --txd=1024 --rxd=1024 + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x00 --enable-hw-vlan-strip --txq=8 --rxq=8 --txd=1024 --rxd=1024 testpmd>set fwd mac testpmd>start 5. Launch testpmd in VM2 and send imix pkts, check imix packets can be looped between two VMs for 1 min and 4 queues (queue0 to queue3) have packets rx/tx:: - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x00 --enable-hw-vlan-strip --txq=8 --rxq=8 --txd=1024 --rxd=1024 + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x00 --enable-hw-vlan-strip --txq=8 --rxq=8 --txd=1024 --rxd=1024 testpmd>set fwd mac testpmd>set txpkts 64,256,512 testpmd>start tx_first 32 @@ -287,24 +267,23 @@ check that it can work normally after dynamically changing queue number at virti 11. Stop testpmd in VM2, and check that 4 queues can RX/TX traffic. 
Test Case 3: VM2VM virtio-pmd packed ring mergeable path dynamic queue size with dsa dpdk driver and server mode ------------------------------------------------------------------------------------------------------------------ +---------------------------------------------------------------------------------------------------------------- This case tests packed ring mergeable path in VM2VM vhost-user/virtio-pmd topology when vhost uses the asynchronous operations with dsa dpdk driver, check that it can tx chain packets normally after dynamically changing queue number. -1. Bind 2 dsa device to vfio-pci, then launch the testpmd with 2 vhost ports below commands:: +1. Bind 2 DSA device to vfio-pci, then launch the testpmd with 2 vhost ports below commands:: - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost -a 0000:f1:01.0,max_queues=4 -a 0000:f6:01.0,max_queues=4 \ - --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;rxq0;rxq1;rxq2;rxq3]' \ - --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;rxq0;rxq1;rxq2;rxq3]' \ - -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=4 --txq=4 \ - --lcore-dma=[lcore2@0000:f1:01.0-q0,lcore2@0000:f1:01.0-q1,lcore3@0000:f1:01.0-q2,lcore3@0000:f1:01.0-q3,lcore4@0000:f6:01.0-q0,lcore4@0000:f6:01.0-q1,lcore5@0000:f6:01.0-q2,lcore5@0000:f6:01.0-q3] + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost -a 0000:f1:01.0,max_queues=4 \ + --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0@0000:f1:01.0-q0;txq1@0000:f1:01.0-q1;txq2@0000:f1:01.0-q2;txq3@0000:f1:01.0-q3;rxq0@0000:f1:01.0-q0;rxq1@0000:f1:01.0-q1;rxq2@0000:f1:01.0-q2;rxq3@0000:f1:01.0-q3]' \ + --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0@0000:f1:01.0-q0;txq1@0000:f1:01.0-q1;txq2@0000:f1:01.0-q2;txq3@0000:f1:01.0-q3;rxq0@0000:f1:01.0-q0;rxq1@0000:f1:01.0-q1;rxq2@0000:f1:01.0-q2;rxq3@0000:f1:01.0-q3]' \ + -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=4 --txq=4 testpmd>start 2. 
Launch VM1 and VM2 using qemu:: - taskset -c 6-16 /usr/local/qemu-7.0.0/bin/qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 9 -m 4096 \ + taskset -c 6-16 /usr/local/qemu-7.1.0/bin/qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 9 -m 4096 \ -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \ - -numa node,memdev=mem -mem-prealloc -drive file=/home/xingguang/osimg/ubuntu20-04.img \ + -numa node,memdev=mem -mem-prealloc -drive file=/home/xingguang/osimg/ubuntu22-04.img \ -chardev socket,path=/tmp/vm1_qga0.sock,server,nowait,id=vm1_qga0 -device virtio-serial \ -device virtserialport,chardev=vm1_qga0,name=org.qemu.guest_agent.2 -daemonize \ -monitor unix:/tmp/vm1_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ @@ -313,9 +292,9 @@ check that it can tx chain packets normally after dynamically changing queue num -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \ -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on -vnc :10 - taskset -c 17-27 /usr/local/qemu-7.0.0/bin/qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 9 -m 4096 \ + taskset -c 17-27 /usr/local/qemu-7.1.0/bin/qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 9 -m 4096 \ -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \ - -numa node,memdev=mem -mem-prealloc -drive file=/home/xingguang/osimg/ubuntu20-04-2.img \ + -numa node,memdev=mem -mem-prealloc -drive file=/home/xingguang/osimg/ubuntu22-04-2.img \ -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \ -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \ -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ @@ -348,9 +327,9 @@ check that it can tx chain packets normally after dynamically changing queue num 6. Quit VM2 and relaunch VM2 with split ring:: - taskset -c 17-27 /usr/local/qemu-7.0.0/bin/qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 9 -m 4096 \ + taskset -c 17-27 /usr/local/qemu-7.1.0/bin/qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 9 -m 4096 \ -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \ - -numa node,memdev=mem -mem-prealloc -drive file=/home/xingguang/osimg/ubuntu20-04-2.img \ + -numa node,memdev=mem -mem-prealloc -drive file=/home/xingguang/osimg/ubuntu22-04-2.img \ -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \ -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \ -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ @@ -372,7 +351,7 @@ check that it can tx chain packets normally after dynamically changing queue num testpmd>set fwd mac testpmd>set txpkts 64,256,512,1024,2000,64,256,512,1024,2000 testpmd>start tx_first 32 - + 9. On host, Check imix packets can looped between two VMs and 4 queues all have packets rx/tx:: testpmd>show port stats all @@ -403,24 +382,23 @@ check that it can tx chain packets normally after dynamically changing queue num 14. Rerun step 10. 
Test Case 4: VM2VM virtio-pmd packed ring non-mergeable path dynamic queue size with dsa dpdk driver and server mode ------------------------------------------------------------------------------------------------------------------------ -This case tests packed ring non-mergeable path in VM2VM vhost-user/virtio-pmd topology when vhost uses the asynchronous operations with dsa dpdk driver, +-------------------------------------------------------------------------------------------------------------------- +This case tests packed ring non-mergeable path in VM2VM vhost-user/virtio-pmd topology when vhost uses the asynchronous operations with dsa dpdk driver, check that it can work normally after dynamically changing queue number at virtio-pmd side. -1. Bind 2 dsa device to vfio-pci, then launch the testpmd with 2 vhost ports below commands:: +1. Bind 2 DSA device to vfio-pci, then launch the testpmd with 2 vhost ports below commands:: ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost -a 0000:f1:01.0,max_queues=4 -a 0000:f6:01.0,max_queues=4 \ - --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4]' \ - --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4]' \ - -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8 \ - --lcore-dma=[lcore2@0000:f1:01.0-q0,lcore2@0000:f1:01.0-q1,lcore3@0000:f1:01.0-q2,lcore3@0000:f1:01.0-q3,lcore4@0000:f6:01.0-q0,lcore4@0000:f6:01.0-q1,lcore5@0000:f6:01.0-q2,lcore5@0000:f6:01.0-q3] + --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0@0000:f1:01.0-q0;txq1@0000:f1:01.0-q1;txq2@0000:f1:01.0-q2;txq3@0000:f1:01.0-q3;rxq0@0000:f1:01.0-q0;rxq1@0000:f1:01.0-q1;rxq2@0000:f1:01.0-q2;rxq3@0000:f1:01.0-q3]' \ + --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0@0000:f1:01.0-q0;txq1@0000:f1:01.0-q1;txq2@0000:f1:01.0-q2;txq3@0000:f1:01.0-q3;rxq0@0000:f1:01.0-q0;rxq1@0000:f1:01.0-q1;rxq2@0000:f1:01.0-q2;rxq3@0000:f1:01.0-q3]' \ + -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8 testpmd>start 2. 
Launch VM1 and VM2 using qemu:: - taskset -c 6-16 /usr/local/qemu-7.0.0/bin/qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 9 -m 4096 \ + taskset -c 6-16 /usr/local/qemu-7.1.0/bin/qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 9 -m 4096 \ -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \ - -numa node,memdev=mem -mem-prealloc -drive file=/home/xingguang/osimg/ubuntu20-04.img \ + -numa node,memdev=mem -mem-prealloc -drive file=/home/xingguang/osimg/ubuntu22-04.img \ -chardev socket,path=/tmp/vm1_qga0.sock,server,nowait,id=vm1_qga0 -device virtio-serial \ -device virtserialport,chardev=vm1_qga0,name=org.qemu.guest_agent.2 -daemonize \ -monitor unix:/tmp/vm1_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ @@ -429,9 +407,9 @@ check that it can work normally after dynamically changing queue number at virti -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \ -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=off,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on -vnc :10 - taskset -c 17-27 /usr/local/qemu-7.0.0/bin/qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 9 -m 4096 \ + taskset -c 17-27 /usr/local/qemu-7.1.0/bin/qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 9 -m 4096 \ -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \ - -numa node,memdev=mem -mem-prealloc -drive file=/home/xingguang/osimg/ubuntu20-04-2.img \ + -numa node,memdev=mem -mem-prealloc -drive file=/home/xingguang/osimg/ubuntu22-04-2.img \ -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \ -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \ -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ @@ -449,13 +427,13 @@ check that it can work normally after dynamically changing queue number at virti 4. Launch testpmd in VM1:: - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x00 --enable-hw-vlan-strip --txq=8 --rxq=8 --txd=1024 --rxd=1024 + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x00 --enable-hw-vlan-strip --txq=8 --rxq=8 --txd=1024 --rxd=1024 testpmd>set fwd mac testpmd>start 5. Launch testpmd in VM2 and send imix pkts, check imix packets can be looped between two VMs for 1 min:: - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x00 --enable-hw-vlan-strip --txq=8 --rxq=8 --txd=1024 --rxd=1024 + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x00 --enable-hw-vlan-strip --txq=8 --rxq=8 --txd=1024 --rxd=1024 testpmd>set fwd mac testpmd>set txpkts 64,256,512 testpmd>start tx_first 32 @@ -474,7 +452,7 @@ check that it can work normally after dynamically changing queue number at virti testpmd>stop testpmd>port stop all testpmd>port config all rxq 4 - testpmd>port config all txq 4 + testpmd>port config all txq 4 testpmd>port start all testpmd>start @@ -489,11 +467,11 @@ check that it can work normally after dynamically changing queue number at virti 11. Stop testpmd in VM2, and check that 4 queues can RX/TX traffic. 
Test Case 5: VM2VM virtio-pmd split ring mergeable path dynamic queue size with dsa kernel driver and server mode -------------------------------------------------------------------------------------------------------------------- -This case tests split ring mergeable path in VM2VM vhost-user/virtio-pmd topology when vhost uses the asynchronous operations with dsa kernel driver, +----------------------------------------------------------------------------------------------------------------- +This case tests split ring mergeable path in VM2VM vhost-user/virtio-pmd topology when vhost uses the asynchronous operations with dsa kernel driver, check that it can tx chain packets normally after dynamically changing queue number at vhost side, reconnection has also been tested. -1. Bind 2 dsa device to idxd:: +1. Bind 2 DSA device to idxd:: ls /dev/dsa #check wq configure, reset if exist ./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0 @@ -505,17 +483,16 @@ check that it can tx chain packets normally after dynamically changing queue num 2. Launch the testpmd with 2 vhost ports below commands:: ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost \ - --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;rxq0;rxq1;rxq2;rxq3]' \ - --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;rxq0;rxq1;rxq2;rxq3]' \ - -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=4 --txq=4 \ - --lcore-dma=[lcore2@wq0.0,lcore2@wq0.1,lcore3@wq0.2,lcore3@wq0.3,lcore4@wq0.4,lcore4@wq0.5,lcore5@wq0.6,lcore5@wq0.7] + --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0@wq0.0;txq1@wq0.0;txq2@wq0.0;txq3@wq0.0;rxq0@wq0.1;rxq1@wq0.1;rxq2@wq0.1;rxq3@wq0.1]' \ + --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0@wq0.1;txq1@wq0.1;txq2@wq0.1;txq3@wq0.1;rxq0@wq0.0;rxq1@wq0.0;rxq2@wq0.0;rxq3@wq0.0]' \ + -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=4 --txq=4 testpmd>start 3. 
Launch VM1 and VM2 using qemu:: - taskset -c 6-16 /usr/local/qemu-7.0.0/bin/qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 9 -m 4096 \ + taskset -c 6-16 /usr/local/qemu-7.1.0/bin/qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 9 -m 4096 \ -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \ - -numa node,memdev=mem -mem-prealloc -drive file=/home/xingguang/osimg/ubuntu20-04.img \ + -numa node,memdev=mem -mem-prealloc -drive file=/home/xingguang/osimg/ubuntu22-04.img \ -chardev socket,path=/tmp/vm1_qga0.sock,server,nowait,id=vm1_qga0 -device virtio-serial \ -device virtserialport,chardev=vm1_qga0,name=org.qemu.guest_agent.2 -daemonize \ -monitor unix:/tmp/vm1_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ @@ -524,9 +501,9 @@ check that it can tx chain packets normally after dynamically changing queue num -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \ -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :10 - taskset -c 17-27 /usr/local/qemu-7.0.0/bin/qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 9 -m 4096 \ + taskset -c 17-27 /usr/local/qemu-7.1.0/bin/qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 9 -m 4096 \ -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \ - -numa node,memdev=mem -mem-prealloc -drive file=/home/xingguang/osimg/ubuntu20-04-2.img \ + -numa node,memdev=mem -mem-prealloc -drive file=/home/xingguang/osimg/ubuntu22-04-2.img \ -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \ -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \ -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ @@ -560,7 +537,7 @@ check that it can tx chain packets normally after dynamically changing queue num perf top -8. Stop vhost, check that both 4 queues can rx/tx queues. +8. Stop vhost, check that both 4 queues can rx/tx queues. 9. On host, dynamic change queue numbers:: @@ -581,30 +558,12 @@ check that it can tx chain packets normally after dynamically changing queue num 12. Rerun step 7. -13. Relaunch and start vhost side testpmd with 8 queues:: - - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost \ - --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \ - --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \ - -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8 \ - --lcore-dma=[lcore2@wq0.0,lcore2@wq0.1,lcore3@wq0.2,lcore3@wq0.3,lcore4@wq1.0,lcore4@wq1.1,lcore5@wq1.2,lcore5@wq1.3] - testpmd>start - -14. Send packets by testpmd in VM2, check imix packets can looped between two VMs for 1 mins and 8 queues all have packets rx/tx:: - - testpmd>stop - testpmd>start tx_first 32 - testpmd>show port stats all - testpmd>stop - -15. Rerun step 13-14 for 3 times. 
- Test Case 6: VM2VM virtio-pmd split ring non-mergeable path dynamic queue size with dsa kernel driver and server mode ------------------------------------------------------------------------------------------------------------------------ -This case tests split ring non-mergeable path in VM2VM vhost-user/virtio-pmd topology when vhost uses the asynchronous operations with dsa kernel driver, +--------------------------------------------------------------------------------------------------------------------- +This case tests split ring non-mergeable path in VM2VM vhost-user/virtio-pmd topology when vhost uses the asynchronous operations with dsa kernel driver, check that it can work normally after dynamically changing queue number at virtio-pmd side, reconnection has also been tested. -1. Bind 2 dsa device to idxd:: +1. Bind 2 DSA device to idxd:: ls /dev/dsa #check wq configure, reset if exist ./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0 @@ -616,17 +575,16 @@ check that it can work normally after dynamically changing queue number at virti 2. Launch the testpmd with 2 vhost ports below commands:: ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost \ - --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;rxq0;rxq1;rxq2;rxq3]' \ - --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;rxq0;rxq1;rxq2;rxq3]' \ - -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8 \ - --lcore-dma=[lcore2@wq0.0,lcore2@wq0.1,lcore3@wq0.2,lcore3@wq0.3,lcore4@wq0.4,lcore4@wq0.5,lcore5@wq0.6,lcore5@wq0.7] + --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0@wq0.0;txq1@wq0.0;txq2@wq0.0;txq3@wq0.0;txq4@wq0.1;txq5@wq0.1;rxq2@wq1.0;rxq3@wq1.0;rxq4@wq1.1;rxq5@wq1.1;rxq6@wq1.1;rxq7@wq1.1]' \ + --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0@wq0.0;txq1@wq0.0;txq2@wq0.0;txq3@wq0.0;txq4@wq0.1;txq5@wq0.1;rxq2@wq1.0;rxq3@wq1.0;rxq4@wq1.1;rxq5@wq1.1;rxq6@wq1.1;rxq7@wq1.1]' \ + -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8 testpmd>start 3. 
Launch VM1 and VM2 using qemu:: - taskset -c 6-16 /usr/local/qemu-7.0.0/bin/qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 9 -m 4096 \ + taskset -c 6-16 /usr/local/qemu-7.1.0/bin/qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 9 -m 4096 \ -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \ - -numa node,memdev=mem -mem-prealloc -drive file=/home/xingguang/osimg/ubuntu20-04.img \ + -numa node,memdev=mem -mem-prealloc -drive file=/home/xingguang/osimg/ubuntu22-04.img \ -chardev socket,path=/tmp/vm1_qga0.sock,server,nowait,id=vm1_qga0 -device virtio-serial \ -device virtserialport,chardev=vm1_qga0,name=org.qemu.guest_agent.2 -daemonize \ -monitor unix:/tmp/vm1_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ @@ -635,9 +593,9 @@ check that it can work normally after dynamically changing queue number at virti -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \ -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=off,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :10 - taskset -c 17-27 /usr/local/qemu-7.0.0/bin/qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 9 -m 4096 \ + taskset -c 17-27 /usr/local/qemu-7.1.0/bin/qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 9 -m 4096 \ -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \ - -numa node,memdev=mem -mem-prealloc -drive file=/home/xingguang/osimg/ubuntu20-04-2.img \ + -numa node,memdev=mem -mem-prealloc -drive file=/home/xingguang/osimg/ubuntu22-04-2.img \ -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \ -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \ -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ @@ -655,13 +613,13 @@ check that it can work normally after dynamically changing queue number at virti 5. Launch testpmd in VM1:: - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x00 --enable-hw-vlan-strip --txq=8 --rxq=8 --txd=1024 --rxd=1024 + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x00 --enable-hw-vlan-strip --txq=8 --rxq=8 --txd=1024 --rxd=1024 testpmd>set fwd mac testpmd>start 6. Launch testpmd in VM2 and send imix pkts, check imix packets can looped between two VMs for 1 mins and 4 queues (queue0 to queue3) have packets rx/tx:: - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x00 --enable-hw-vlan-strip --txq=8 --rxq=8 --txd=1024 --rxd=1024 + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x00 --enable-hw-vlan-strip --txq=8 --rxq=8 --txd=1024 --rxd=1024 testpmd>set fwd mac testpmd>set txpkts 64,256,512 testpmd>start tx_first 32 @@ -693,32 +651,31 @@ check that it can work normally after dynamically changing queue number at virti 11. Stop testpmd in host, and check that 4 queues can RX/TX traffic. 
Test Case 7: VM2VM virtio-pmd packed ring mergeable path dynamic queue size with dsa kernel driver and server mode -------------------------------------------------------------------------------------------------------------------- -This case tests packed ring mergeable path in VM2VM vhost-user/virtio-pmd topology when vhost uses the asynchronous operations with dsa kernel driver, +------------------------------------------------------------------------------------------------------------------ +This case tests packed ring mergeable path in VM2VM vhost-user/virtio-pmd topology when vhost uses the asynchronous operations with dsa kernel driver, check that it can tx chain packets normally after dynamically changing queue number. -1. Bind 1 dsa device to idxd:: +1. Bind 1 DSA device to idxd:: ls /dev/dsa #check wq configure, reset if exist ./usertools/dpdk-devbind.py -u 6a:01.0 ./usertools/dpdk-devbind.py -b idxd 6a:01.0 - ./drivers/raw/ioat/dpdk_idxd_cfg.py -q 8 0 + ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 4 0 ls /dev/dsa #check wq configure success 2. Launch the testpmd with 2 vhost ports below commands:: ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost \ - --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3]' \ - --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3]' \ - -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=4 --txq=4 \ - --lcore-dma=[lcore2@wq0.0,lcore2@wq0.1,lcore3@wq0.2,lcore3@wq0.3,lcore4@wq0.4,lcore4@wq0.5,lcore5@wq0.6,lcore5@wq0.7] + --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0@wq0.0;txq1@wq0.1;txq2@wq0.2;txq3@wq0.3;rxq0@wq0.0;rxq1@wq0.1;rxq2@wq0.2;rxq3@wq0.3]' \ + --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0@wq0.0;txq1@wq0.1;txq2@wq0.2;txq3@wq0.3;rxq0@wq0.0;rxq1@wq0.1;rxq2@wq0.2;rxq3@wq0.3]' \ + -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=4 --txq=4 testpmd>start 3. 
Launch VM1 and VM2 using qemu:: - taskset -c 6-16 /usr/local/qemu-7.0.0/bin/qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 9 -m 4096 \ + taskset -c 6-16 /usr/local/qemu-7.1.0/bin/qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 9 -m 4096 \ -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \ - -numa node,memdev=mem -mem-prealloc -drive file=/home/xingguang/osimg/ubuntu20-04.img \ + -numa node,memdev=mem -mem-prealloc -drive file=/home/xingguang/osimg/ubuntu22-04.img \ -chardev socket,path=/tmp/vm1_qga0.sock,server,nowait,id=vm1_qga0 -device virtio-serial \ -device virtserialport,chardev=vm1_qga0,name=org.qemu.guest_agent.2 -daemonize \ -monitor unix:/tmp/vm1_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ @@ -727,9 +684,9 @@ check that it can tx chain packets normally after dynamically changing queue num -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \ -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on -vnc :10 - taskset -c 17-27 /usr/local/qemu-7.0.0/bin/qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 9 -m 4096 \ + taskset -c 17-27 /usr/local/qemu-7.1.0/bin/qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 9 -m 4096 \ -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \ - -numa node,memdev=mem -mem-prealloc -drive file=/home/xingguang/osimg/ubuntu20-04-2.img \ + -numa node,memdev=mem -mem-prealloc -drive file=/home/xingguang/osimg/ubuntu22-04-2.img \ -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \ -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \ -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ @@ -762,9 +719,9 @@ check that it can tx chain packets normally after dynamically changing queue num 7. Quit VM2 and relaunch VM2 with split ring:: - taskset -c 17-27 /usr/local/qemu-7.0.0/bin/qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 9 -m 4096 \ + taskset -c 17-27 /usr/local/qemu-7.1.0/bin/qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 9 -m 4096 \ -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \ - -numa node,memdev=mem -mem-prealloc -drive file=/home/xingguang/osimg/ubuntu20-04-2.img \ + -numa node,memdev=mem -mem-prealloc -drive file=/home/xingguang/osimg/ubuntu22-04-2.img \ -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \ -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \ -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ @@ -816,33 +773,32 @@ check that it can tx chain packets normally after dynamically changing queue num 15. Rerun step 10. 
Test Case 8: VM2VM virtio-pmd packed ring non-mergeable path dynamic queue size with dsa kernel driver and server mode ------------------------------------------------------------------------------------------------------------------------- -This case tests packed ring non-mergeable path in VM2VM vhost-user/virtio-pmd topology when vhost uses the asynchronous operations with dsa kernel driver, +---------------------------------------------------------------------------------------------------------------------- +This case tests packed ring non-mergeable path in VM2VM vhost-user/virtio-pmd topology when vhost uses the asynchronous operations with dsa kernel driver, check that it can work normally after dynamically changing queue number at virtio-pmd side. -1. Bind 2 dsa device to idxd:: +1. Bind 2 DSA devices to idxd:: ls /dev/dsa #check wq configure, reset if exist ./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0 ./usertools/dpdk-devbind.py -b idxd 6a:01.0 6f:01.0 - ./drivers/raw/ioat/dpdk_idxd_cfg.py -q 8 0 - ./drivers/raw/ioat/dpdk_idxd_cfg.py -q 8 1 + ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 4 0 + ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 4 1 ls /dev/dsa #check wq configure success 1. Launch the testpmd with 2 vhost ports below commands:: ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost \ - --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \ - --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \ - -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8 \ - --lcore-dma=[lcore2@wq0.0,lcore2@wq0.1,lcore3@wq0.2,lcore3@wq0.3,lcore4@wq1.0,lcore4@wq1.1,lcore5@wq1.2,lcore5@wq1.3] + --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0@wq0.0;txq1@wq0.0;txq2@wq0.0;txq3@wq0.0;txq4@wq0.1;txq5@wq0.1;rxq2@wq1.0;rxq3@wq1.0;rxq4@wq1.1;rxq5@wq1.1;rxq6@wq1.1;rxq7@wq1.1]' \ + --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0@wq0.2;txq1@wq0.2;txq2@wq0.2;txq3@wq0.2;txq4@wq0.3;txq5@wq0.3;rxq2@wq1.2;rxq3@wq1.2;rxq4@wq1.3;rxq5@wq1.3;rxq6@wq1.3;rxq7@wq1.3]' \ + -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8 testpmd>start 2. 
Launch VM1 and VM2 using qemu:: - taskset -c 6-16 /usr/local/qemu-7.0.0/bin/qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 9 -m 4096 \ + taskset -c 6-16 /usr/local/qemu-7.1.0/bin/qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 9 -m 4096 \ -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \ - -numa node,memdev=mem -mem-prealloc -drive file=/home/xingguang/osimg/ubuntu20-04.img \ + -numa node,memdev=mem -mem-prealloc -drive file=/home/xingguang/osimg/ubuntu22-04.img \ -chardev socket,path=/tmp/vm1_qga0.sock,server,nowait,id=vm1_qga0 -device virtio-serial \ -device virtserialport,chardev=vm1_qga0,name=org.qemu.guest_agent.2 -daemonize \ -monitor unix:/tmp/vm1_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ @@ -851,9 +807,9 @@ check that it can work normally after dynamically changing queue number at virti -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \ -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=off,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on -vnc :10 - taskset -c 17-27 /usr/local/qemu-7.0.0/bin/qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 9 -m 4096 \ + taskset -c 17-27 /usr/local/qemu-7.1.0/bin/qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 9 -m 4096 \ -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \ - -numa node,memdev=mem -mem-prealloc -drive file=/home/xingguang/osimg/ubuntu20-04-2.img \ + -numa node,memdev=mem -mem-prealloc -drive file=/home/xingguang/osimg/ubuntu22-04-2.img \ -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \ -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \ -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ @@ -871,13 +827,13 @@ check that it can work normally after dynamically changing queue number at virti 4. Launch testpmd in VM1:: - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x00 --enable-hw-vlan-strip --txq=8 --rxq=8 --txd=1024 --rxd=1024 + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x00 --enable-hw-vlan-strip --txq=8 --rxq=8 --txd=1024 --rxd=1024 testpmd>set fwd mac testpmd>start 5. Launch testpmd in VM2 and send imix pkts, check imix packets can be looped between two VMs for 1 min:: - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x00 --enable-hw-vlan-strip --txq=8 --rxq=8 --txd=1024 --rxd=1024 + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x00 --enable-hw-vlan-strip --txq=8 --rxq=8 --txd=1024 --rxd=1024 testpmd>set fwd mac testpmd>set txpkts 64,256,512 testpmd>start tx_first 32 @@ -918,7 +874,7 @@ check that it can work normally after dynamically changing queue number at virti testpmd>port config all txq 8 testpmd>port start all testpmd>start - + 13. Send packets by testpmd in VM2, check virtio-pmd RX/TX can work normally and imix packets can be looped between two VMs for 1 min:: testpmd>stop testpmd>start tx_first 32 @@ -927,4 +883,4 @@ check that it can work normally after dynamically changing queue number at virti 14. Rerun step 6. -15. Stop testpmd in VM2, and check that 8 queues can RX/TX traffic. +15. Stop testpmd in VM2, and check that 8 queues can RX/TX traffic. \ No newline at end of file
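For the kernel driver cases above, each './drivers/dma/idxd/dpdk_idxd_cfg.py -q <wq_num> <dsa_idx>' invocation exposes work queues named wq<dsa_idx>.<n> under /dev/dsa, and dmas entries such as txq0@wq0.0 refer to those nodes. The following is a minimal sketch, assuming only that naming convention, of predicting and checking the expected entries before launching the vhost testpmd:

    import os

    # Sketch: expected /dev/dsa entries after configuring wq_num work queues
    # per DSA index, e.g. {0: 4, 1: 4} for 'dpdk_idxd_cfg.py -q 4 0' and '-q 4 1'.
    def expected_wqs(cfg):
        return {"wq%d.%d" % (idx, n) for idx, num in cfg.items() for n in range(num)}

    want = expected_wqs({0: 4, 1: 4})
    have = set(os.listdir("/dev/dsa")) if os.path.isdir("/dev/dsa") else set()
    print("missing:", sorted(want - have) if not want <= have else "none")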
From patchwork Thu Dec 22 03:35:31 2022 X-Patchwork-Submitter: "Ling, WeiX" X-Patchwork-Id: 121253 From: Wei Ling To: dts@dpdk.org Cc: Wei Ling Subject: [dts][PATCH V2 2/2] tests/vm2vm_virtio_pmd_dsa: add new testsuite Date: Thu, 22 Dec 2022 11:35:31 +0800 Message-Id: <20221222033531.175696-1-weix.ling@intel.com> X-Mailer: git-send-email 2.25.1 List-Id: test suite reviews and discussions Add the new test suite TestSuite_vm2vm_virtio_pmd_dsa.py, kept in sync with the test plan, to cover the vm2vm virtio-pmd topology with the DSA driver. Signed-off-by: Wei Ling Acked-by: Lijuan Tu --- tests/TestSuite_vm2vm_virtio_pmd_dsa.py | 780 ++++++++++++++++++++++++ 1 file changed, 780 insertions(+) create mode 100644 tests/TestSuite_vm2vm_virtio_pmd_dsa.py diff --git a/tests/TestSuite_vm2vm_virtio_pmd_dsa.py b/tests/TestSuite_vm2vm_virtio_pmd_dsa.py new file mode 100644 index 00000000..c4353661 --- /dev/null +++ b/tests/TestSuite_vm2vm_virtio_pmd_dsa.py @@ -0,0 +1,780 @@ +# SPDX-License-Identifier: BSD-3-Clause +# Copyright(c) 2022 Intel Corporation +# + +import re +import time + +import framework.utils as utils +from framework.pmd_output import PmdOutput +from framework.test_case import TestCase +from framework.virt_common import VM + +from .virtio_common import dsa_common as DC + + +class TestVM2VMVirtioPmdDsa(TestCase): + def set_up_all(self): + """ + Run at the start of each test suite. 
+ """ + self.dut_ports = self.dut.get_ports() + self.ports_socket = self.dut.get_numa_id(self.dut_ports[0]) + self.cores_list = self.dut.get_core_list(config="all", socket=self.ports_socket) + self.vhost_core_list = self.cores_list[0:5] + self.memory_channel = self.dut.get_memory_channels() + self.base_dir = self.dut.base_dir.replace("~", "/root") + self.pci_info = self.dut.ports_info[0]["pci"] + self.app_testpmd_path = self.dut.apps_name["test-pmd"] + self.testpmd_name = self.app_testpmd_path.split("/")[-1] + self.vhost_user = self.dut.new_session(suite="vhost") + self.vhost_user_pmd = PmdOutput(self.dut, self.vhost_user) + self.DC = DC(self) + + def set_up(self): + """ + Run before each test case. + """ + self.dut.send_expect("killall -s INT %s" % self.testpmd_name, "#") + self.dut.send_expect("killall -s INT qemu-system-x86_64", "#") + self.dut.send_expect("killall -s INT perf", "#") + self.dut.send_expect("rm -rf ./vhost-net*", "#") + self.vm_num = 2 + self.vm_dut = [] + self.vm = [] + self.use_dsa_list = [] + self.DC.reset_all_work_queue() + self.DC.bind_all_dsa_to_kernel() + + @property + def check_2M_env(self): + out = self.dut.send_expect( + "cat /proc/meminfo |grep Hugepagesize|awk '{print($2)}'", "# " + ) + return True if out == "2048" else False + + def start_vhost_testpmd( + self, + cores, + eal_param="", + param="", + no_pci=False, + ports="", + port_options="", + ): + if not no_pci and port_options != "": + self.vhost_user_pmd.start_testpmd( + cores=cores, + eal_param=eal_param, + param=param, + port_options=port_options, + ports=ports, + prefix="vhost", + fixed_prefix=True, + ) + elif not no_pci and port_options == "": + self.vhost_user_pmd.start_testpmd( + cores=cores, + eal_param=eal_param, + param=param, + ports=ports, + prefix="vhost", + fixed_prefix=True, + ) + else: + self.vhost_user_pmd.start_testpmd( + cores=cores, + eal_param=eal_param, + param=param, + no_pci=no_pci, + prefix="vhost", + fixed_prefix=True, + ) + self.vhost_user_pmd.execute_cmd("start") + + def start_vms( + self, + vm_queue, + mergeable=True, + packed=False, + server_mode=True, + restart_vm1=False, + vm_config="vhost_sample", + ): + """ + start two VM, each VM has one virtio device + """ + vm_params = {} + vm_params["opt_queue"] = vm_queue + if restart_vm1: + self.vm_num = 1 + for i in range(self.vm_num): + if restart_vm1: + i = i + 1 + vm_info = VM(self.dut, "vm%d" % i, vm_config) + vm_params["driver"] = "vhost-user" + if not server_mode: + vm_params["opt_path"] = self.base_dir + "/vhost-net%d" % i + else: + vm_params["opt_path"] = self.base_dir + "/vhost-net%d" % i + ",server" + vm_params["opt_mac"] = "52:54:00:00:00:0%d" % (i + 1) + if mergeable: + mrg_rxbuf = "on" + else: + mrg_rxbuf = "off" + if packed: + vm_params[ + "opt_settings" + ] = "disable-modern=false,mrg_rxbuf={},mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on".format( + mrg_rxbuf + ) + else: + vm_params[ + "opt_settings" + ] = "disable-modern=false,mrg_rxbuf={},mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on".format( + mrg_rxbuf + ) + vm_info.set_vm_device(**vm_params) + time.sleep(3) + try: + vm_dut = vm_info.start() + if vm_dut is None: + raise Exception("Set up VM ENV failed") + except Exception as e: + print((utils.RED("Failure for %s" % str(e)))) + raise e + self.vm_dut.append(vm_dut) + self.vm.append(vm_info) + + def start_vm_testpmd(self, vm_pmd, queues, mergeable=True): + if mergeable: + param = 
"--enable-hw-vlan-strip --txq={} --rxq={} --txd=1024 --rxd=1024 --max-pkt-len=9600 --tx-offloads=0x00 --rx-offloads=0x00002000".format( + queues, queues + ) + else: + param = "--enable-hw-vlan-strip --txq={} --rxq={} --txd=1024 --rxd=1024 --tx-offloads=0x00".format( + queues, queues + ) + vm_pmd.start_testpmd(cores="default", param=param) + vm_pmd.execute_cmd("set fwd mac") + + def send_big_imix_packets_from_vm1(self): + self.vm1_pmd.execute_cmd("set txpkts 64,256,512,1024,2000,64,256,512,1024,2000") + self.vm1_pmd.execute_cmd("start tx_first 32") + self.vm1_pmd.execute_cmd("show port stats all") + + def send_small_imix_packets_from_vm1(self): + self.vm1_pmd.execute_cmd("set txpkts 64,256,512") + self.vm1_pmd.execute_cmd("start tx_first 32") + self.vm1_pmd.execute_cmd("show port stats all") + + def send_64b_packets_from_vm1(self): + self.vm1_pmd.execute_cmd("stop") + self.vm1_pmd.execute_cmd("start tx_first 32") + self.vm1_pmd.execute_cmd("show port stats all") + + def check_packets_looped_in_2vms(self, vm_pmd): + results = 0.0 + for _ in range(10): + out = vm_pmd.execute_cmd("show port stats 0") + lines = re.search("Rx-pps:\s*(\d*)", out) + result = lines.group(1) + results += float(result) + Mpps = results / (1000000 * 10) + self.logger.info(Mpps) + self.verify(Mpps > 0, "virtio-pmd can not looped packets") + + def check_packets_of_each_queue(self, vm_pmd, queues): + vm_pmd.execute_cmd("show port stats all") + out = vm_pmd.execute_cmd("stop") + self.logger.info(out) + for queue in range(queues): + reg = "Queue= %d" % queue + index = out.find(reg) + rx = re.search("RX-packets:\s*(\d*)", out[index:]) + tx = re.search("TX-packets:\s*(\d*)", out[index:]) + rx_packets = int(rx.group(1)) + tx_packets = int(tx.group(1)) + self.verify( + rx_packets > 0 and tx_packets > 0, + "The queue {} rx-packets or tx-packets is 0 about ".format(queue) + + "rx-packets: {}, tx-packets: {}".format(rx_packets, tx_packets), + ) + + def dynamic_change_queue_size(self, dut_pmd, queues): + dut_pmd.execute_cmd("stop") + dut_pmd.execute_cmd("port stop all") + dut_pmd.execute_cmd("port config all rxq {}".format(queues)) + dut_pmd.execute_cmd("port config all txq {}".format(queues)) + dut_pmd.execute_cmd("port start all") + dut_pmd.execute_cmd("start") + + def get_and_verify_func_name_of_perf_top(self, func_name_list): + self.dut.send_expect("rm -fr perf_top.log", "# ", 120) + self.dut.send_expect("perf top > perf_top.log", "", 120) + time.sleep(30) + self.dut.send_expect("^C", "#") + out = self.dut.send_expect("cat perf_top.log", "# ", 120) + self.logger.info(out) + for func_name in func_name_list: + self.verify( + func_name in out, + "the func_name {} is not in the perf top output".format(func_name), + ) + + def test_vm2vm_virtio_pmd_split_ring_mergeable_path_dynamic_queue_size_with_dsa_dpdk_driver_and_server_mode( + self, + ): + """ + Test Case 1: VM2VM virtio-pmd split ring mergeable path dynamic queue size with dsa dpdk driver and server mode + """ + self.check_path = ["virtio_dev_rx_async", "virtio_dev_tx_async"] + self.use_dsa_list = self.DC.bind_dsa_to_dpdk( + dsa_number=1, driver_name="vfio-pci", socket=self.ports_socket + ) + dmas = ( + "txq0@%s-q0;" + "txq1@%s-q1;" + "txq2@%s-q2;" + "txq3@%s-q3;" + "rxq0@%s-q0;" + "rxq1@%s-q1;" + "rxq2@%s-q2;" + "rxq3@%s-q3" + % ( + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + ) + ) + port_options = {self.use_dsa_list[0]: 
"max_queues=4"} + vhost_eal_param = ( + "--vdev 'eth_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[%s]' " + "--vdev 'eth_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[%s]'" + % (dmas, dmas) + ) + vhost_param = "--nb-cores=4 --txd=1024 --rxd=1024 --rxq=4 --txq=4" + self.start_vhost_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + ports=self.use_dsa_list, + port_options=port_options, + ) + self.start_vms(vm_queue=8, mergeable=True, packed=False, server_mode=True) + self.vm0_pmd = PmdOutput(self.vm_dut[0]) + self.vm1_pmd = PmdOutput(self.vm_dut[1]) + self.start_vm_testpmd(vm_pmd=self.vm0_pmd, queues=8, mergeable=True) + self.vm0_pmd.execute_cmd("start") + self.start_vm_testpmd(vm_pmd=self.vm1_pmd, queues=8, mergeable=True) + self.send_big_imix_packets_from_vm1() + self.get_and_verify_func_name_of_perf_top(self.check_path) + self.check_packets_looped_in_2vms(vm_pmd=self.vm0_pmd) + self.check_packets_looped_in_2vms(vm_pmd=self.vm1_pmd) + self.check_packets_of_each_queue(vm_pmd=self.vm0_pmd, queues=4) + self.check_packets_of_each_queue(vm_pmd=self.vm1_pmd, queues=4) + + self.dynamic_change_queue_size(dut_pmd=self.vhost_user_pmd, queues=8) + self.vm0_pmd.execute_cmd("start") + self.send_64b_packets_from_vm1() + self.get_and_verify_func_name_of_perf_top(self.check_path) + self.check_packets_of_each_queue(vm_pmd=self.vm0_pmd, queues=8) + self.check_packets_of_each_queue(vm_pmd=self.vm1_pmd, queues=8) + + def test_vm2vm_virtio_pmd_split_ring_non_mergeable_path_dynamic_queue_size_with_dsa_dpdk_driver_and_server_mode( + self, + ): + """ + Test Case 2: VM2VM virtio-pmd split ring non-mergeable path dynamic queue size with dsa dpdk driver and server mode + """ + self.check_path = ["virtio_dev_rx_async", "virtio_dev_tx_async"] + self.use_dsa_list = self.DC.bind_dsa_to_dpdk( + dsa_number=1, driver_name="vfio-pci", socket=self.ports_socket + ) + dmas = ( + "txq0@%s-q0;" + "txq1@%s-q0;" + "txq2@%s-q1;" + "txq3@%s-q1;" + "rxq0@%s-q2;" + "rxq1@%s-q2;" + "rxq2@%s-q3;" + "rxq3@%s-q3" + % ( + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + ) + ) + port_options = {self.use_dsa_list[0]: "max_queues=4"} + vhost_eal_param = ( + "--vdev 'eth_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[%s]' " + "--vdev 'eth_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[%s]'" + % (dmas, dmas) + ) + vhost_param = "--nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8" + self.start_vhost_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + ports=self.use_dsa_list, + port_options=port_options, + ) + self.start_vms(vm_queue=8, mergeable=False, packed=False, server_mode=True) + self.vm0_pmd = PmdOutput(self.vm_dut[0]) + self.vm1_pmd = PmdOutput(self.vm_dut[1]) + self.start_vm_testpmd(vm_pmd=self.vm0_pmd, queues=8, mergeable=False) + self.vm0_pmd.execute_cmd("start") + self.start_vm_testpmd(vm_pmd=self.vm1_pmd, queues=8, mergeable=False) + self.send_small_imix_packets_from_vm1() + self.get_and_verify_func_name_of_perf_top(self.check_path) + self.check_packets_looped_in_2vms(vm_pmd=self.vm0_pmd) + self.check_packets_looped_in_2vms(vm_pmd=self.vm1_pmd) + self.check_packets_of_each_queue(vm_pmd=self.vm0_pmd, queues=8) + self.check_packets_of_each_queue(vm_pmd=self.vm1_pmd, queues=8) + + self.dynamic_change_queue_size(dut_pmd=self.vhost_user_pmd, queues=4) + self.vm0_pmd.execute_cmd("start") + 
self.send_small_imix_packets_from_vm1() + self.get_and_verify_func_name_of_perf_top(self.check_path) + self.check_packets_looped_in_2vms(vm_pmd=self.vm0_pmd) + self.check_packets_looped_in_2vms(vm_pmd=self.vm1_pmd) + self.check_packets_of_each_queue(vm_pmd=self.vm0_pmd, queues=4) + self.check_packets_of_each_queue(vm_pmd=self.vm1_pmd, queues=4) + + def test_vm2vm_virtio_pmd_packed_ring_mergeable_path_dynamic_queue_size_with_dsa_dpdk_driver_and_server_mode( + self, + ): + """ + Test Case 3: VM2VM virtio-pmd packed ring mergeable path dynamic queue size with dsa dpdk driver and server mode + """ + self.check_path = ["virtio_dev_rx_async", "virtio_dev_tx_async"] + self.use_dsa_list = self.DC.bind_dsa_to_dpdk( + dsa_number=1, driver_name="vfio-pci", socket=self.ports_socket + ) + dmas = ( + "txq0@%s-q0;" + "txq1@%s-q1;" + "txq2@%s-q2;" + "txq3@%s-q3;" + "rxq0@%s-q0;" + "rxq1@%s-q1;" + "rxq2@%s-q2;" + "rxq3@%s-q3" + % ( + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + ) + ) + port_options = {self.use_dsa_list[0]: "max_queues=4"} + vhost_eal_param = ( + "--vdev 'eth_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[%s]' " + "--vdev 'eth_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[%s]'" + % (dmas, dmas) + ) + vhost_param = "--nb-cores=4 --txd=1024 --rxd=1024 --rxq=4 --txq=4" + self.start_vhost_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + ports=self.use_dsa_list, + port_options=port_options, + ) + self.start_vms(vm_queue=8, mergeable=True, packed=True, server_mode=True) + self.vm0_pmd = PmdOutput(self.vm_dut[0]) + self.vm1_pmd = PmdOutput(self.vm_dut[1]) + self.start_vm_testpmd(vm_pmd=self.vm0_pmd, queues=8, mergeable=True) + self.vm0_pmd.execute_cmd("start") + self.start_vm_testpmd(vm_pmd=self.vm1_pmd, queues=8, mergeable=True) + self.send_big_imix_packets_from_vm1() + self.check_packets_looped_in_2vms(vm_pmd=self.vm0_pmd) + self.check_packets_looped_in_2vms(vm_pmd=self.vm1_pmd) + self.check_packets_of_each_queue(vm_pmd=self.vm0_pmd, queues=4) + self.check_packets_of_each_queue(vm_pmd=self.vm1_pmd, queues=4) + + self.logger.info("Quit and relaunch VM2 with split ring") + self.vm1_pmd.execute_cmd("quit", "#") + self.vm[1].stop() + self.vm_dut.remove(self.vm_dut[1]) + self.vm.remove(self.vm[1]) + self.start_vms( + vm_queue=8, mergeable=True, packed=False, restart_vm1=True, server_mode=True + ) + self.vm1_pmd = PmdOutput(self.vm_dut[1]) + self.vm0_pmd.execute_cmd("start") + self.start_vm_testpmd(vm_pmd=self.vm1_pmd, queues=8, mergeable=True) + self.send_big_imix_packets_from_vm1() + self.get_and_verify_func_name_of_perf_top(self.check_path) + self.check_packets_looped_in_2vms(vm_pmd=self.vm0_pmd) + self.check_packets_looped_in_2vms(vm_pmd=self.vm1_pmd) + self.check_packets_of_each_queue(vm_pmd=self.vm0_pmd, queues=4) + self.check_packets_of_each_queue(vm_pmd=self.vm1_pmd, queues=4) + + self.dynamic_change_queue_size(dut_pmd=self.vhost_user_pmd, queues=8) + self.vm0_pmd.execute_cmd("start") + self.send_64b_packets_from_vm1() + self.get_and_verify_func_name_of_perf_top(self.check_path) + self.check_packets_looped_in_2vms(vm_pmd=self.vm0_pmd) + self.check_packets_looped_in_2vms(vm_pmd=self.vm1_pmd) + self.check_packets_of_each_queue(vm_pmd=self.vm0_pmd, queues=8) + self.check_packets_of_each_queue(vm_pmd=self.vm1_pmd, queues=8) + + def 
test_vm2vm_virtio_pmd_packed_ring_non_mergeable_path_dynamic_queue_size_with_dsa_dpdk_driver_and_server_mode( + self, + ): + """ + Test Case 4: VM2VM virtio-pmd packed ring non-mergeable path dynamic queue size with dsa dpdk driver and server mode + """ + self.check_path = ["virtio_dev_rx_async", "virtio_dev_tx_async"] + self.use_dsa_list = self.DC.bind_dsa_to_dpdk( + dsa_number=1, driver_name="vfio-pci", socket=self.ports_socket + ) + dmas = ( + "txq0@%s-q0;" + "txq1@%s-q1;" + "txq2@%s-q2;" + "txq3@%s-q3;" + "rxq0@%s-q0;" + "rxq1@%s-q1;" + "rxq2@%s-q2;" + "rxq3@%s-q3" + % ( + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + ) + ) + port_options = {self.use_dsa_list[0]: "max_queues=4"} + vhost_eal_param = ( + "--vdev 'eth_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[%s]' " + "--vdev 'eth_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[%s]'" + % (dmas, dmas) + ) + vhost_param = "--nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8" + self.start_vhost_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + ports=self.use_dsa_list, + port_options=port_options, + ) + self.start_vms(vm_queue=8, mergeable=False, packed=True, server_mode=True) + self.vm0_pmd = PmdOutput(self.vm_dut[0]) + self.vm1_pmd = PmdOutput(self.vm_dut[1]) + self.start_vm_testpmd(vm_pmd=self.vm0_pmd, queues=8, mergeable=False) + self.vm0_pmd.execute_cmd("start") + self.start_vm_testpmd(vm_pmd=self.vm1_pmd, queues=8, mergeable=False) + self.send_small_imix_packets_from_vm1() + self.get_and_verify_func_name_of_perf_top(self.check_path) + self.check_packets_looped_in_2vms(vm_pmd=self.vm0_pmd) + self.check_packets_looped_in_2vms(vm_pmd=self.vm1_pmd) + self.check_packets_of_each_queue(vm_pmd=self.vm0_pmd, queues=8) + self.check_packets_of_each_queue(vm_pmd=self.vm1_pmd, queues=8) + + self.dynamic_change_queue_size(dut_pmd=self.vhost_user_pmd, queues=4) + self.vm0_pmd.execute_cmd("start") + self.send_64b_packets_from_vm1() + self.get_and_verify_func_name_of_perf_top(self.check_path) + self.check_packets_looped_in_2vms(vm_pmd=self.vm0_pmd) + self.check_packets_looped_in_2vms(vm_pmd=self.vm1_pmd) + self.check_packets_of_each_queue(vm_pmd=self.vm0_pmd, queues=4) + self.check_packets_of_each_queue(vm_pmd=self.vm1_pmd, queues=4) + + def test_vm2vm_virtio_pmd_split_ring_mergeable_path_dynamic_queue_size_with_dsa_kernel_driver_and_server_mode( + self, + ): + """ + Test Case 5: VM2VM virtio-pmd split ring mergeable path dynamic queue size with dsa kernel driver and server mode + """ + self.check_path = ["virtio_dev_rx_async", "virtio_dev_tx_async"] + self.DC.create_work_queue(work_queue_number=8, dsa_index=0) + self.DC.create_work_queue(work_queue_number=8, dsa_index=1) + dmas1 = ( + "txq0@wq0.0;" + "txq1@wq0.0;" + "txq2@wq0.0;" + "txq3@wq0.0;" + "rxq0@wq0.1;" + "rxq1@wq0.1;" + "rxq2@wq0.1;" + "rxq3@wq0.1" + ) + + dmas2 = ( + "txq0@wq0.1;" + "txq1@wq0.1;" + "txq2@wq0.1;" + "txq3@wq0.1;" + "rxq0@wq0.0;" + "rxq1@wq0.0;" + "rxq2@wq0.0;" + "rxq3@wq0.0" + ) + vhost_eal_param = ( + "--vdev 'eth_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[%s]' " + "--vdev 'eth_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[%s]'" + % (dmas1, dmas2) + ) + vhost_param = "--nb-cores=4 --txd=1024 --rxd=1024 --rxq=4 --txq=4" + self.start_vhost_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + no_pci=True, + ) + self.start_vms(vm_queue=8, 
mergeable=True, packed=False, server_mode=True) + self.vm0_pmd = PmdOutput(self.vm_dut[0]) + self.vm1_pmd = PmdOutput(self.vm_dut[1]) + self.start_vm_testpmd(vm_pmd=self.vm0_pmd, queues=8, mergeable=True) + self.vm0_pmd.execute_cmd("start") + self.start_vm_testpmd(vm_pmd=self.vm1_pmd, queues=8, mergeable=True) + self.send_big_imix_packets_from_vm1() + self.get_and_verify_func_name_of_perf_top(self.check_path) + self.check_packets_looped_in_2vms(vm_pmd=self.vm0_pmd) + self.check_packets_looped_in_2vms(vm_pmd=self.vm1_pmd) + self.check_packets_of_each_queue(vm_pmd=self.vm0_pmd, queues=4) + self.check_packets_of_each_queue(vm_pmd=self.vm1_pmd, queues=4) + + self.dynamic_change_queue_size(dut_pmd=self.vhost_user_pmd, queues=8) + self.vm0_pmd.execute_cmd("start") + self.send_64b_packets_from_vm1() + self.get_and_verify_func_name_of_perf_top(self.check_path) + self.check_packets_looped_in_2vms(vm_pmd=self.vm0_pmd) + self.check_packets_looped_in_2vms(vm_pmd=self.vm1_pmd) + self.check_packets_of_each_queue(vm_pmd=self.vm0_pmd, queues=8) + self.check_packets_of_each_queue(vm_pmd=self.vm1_pmd, queues=8) + + def test_vm2vm_virtio_pmd_split_ring_non_mergeable_path_dynamic_queue_size_with_dsa_kernel_driver_and_server_mode( + self, + ): + """ + Test Case 6: VM2VM virtio-pmd split ring non-mergeable path dynamic queue size with dsa kernel driver and server mode + """ + self.check_path = ["virtio_dev_rx_async", "virtio_dev_tx_async"] + self.DC.create_work_queue(work_queue_number=8, dsa_index=0) + self.DC.create_work_queue(work_queue_number=8, dsa_index=1) + dmas = ( + "txq0@wq0.0;" + "txq1@wq0.0;" + "txq2@wq0.0;" + "txq3@wq0.0;" + "txq4@wq0.1;" + "txq5@wq0.1;" + "rxq2@wq1.0;" + "rxq3@wq1.0;" + "rxq4@wq1.1;" + "rxq5@wq1.1;" + "rxq6@wq1.1;" + "rxq7@wq1.1" + ) + vhost_eal_param = ( + "--vdev 'eth_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[%s]' " + "--vdev 'eth_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[%s]'" + % (dmas, dmas) + ) + vhost_param = "--nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8" + self.start_vhost_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + no_pci=True, + ) + self.start_vms(vm_queue=8, mergeable=False, packed=False, server_mode=True) + self.vm0_pmd = PmdOutput(self.vm_dut[0]) + self.vm1_pmd = PmdOutput(self.vm_dut[1]) + self.start_vm_testpmd(vm_pmd=self.vm0_pmd, queues=8, mergeable=False) + self.vm0_pmd.execute_cmd("start") + self.start_vm_testpmd(vm_pmd=self.vm1_pmd, queues=8, mergeable=False) + self.send_small_imix_packets_from_vm1() + self.get_and_verify_func_name_of_perf_top(self.check_path) + self.check_packets_looped_in_2vms(vm_pmd=self.vm0_pmd) + self.check_packets_looped_in_2vms(vm_pmd=self.vm1_pmd) + self.check_packets_of_each_queue(vm_pmd=self.vm0_pmd, queues=8) + self.check_packets_of_each_queue(vm_pmd=self.vm1_pmd, queues=8) + + self.dynamic_change_queue_size(dut_pmd=self.vhost_user_pmd, queues=4) + self.vm0_pmd.execute_cmd("start") + self.send_64b_packets_from_vm1() + self.get_and_verify_func_name_of_perf_top(self.check_path) + self.check_packets_looped_in_2vms(vm_pmd=self.vm0_pmd) + self.check_packets_looped_in_2vms(vm_pmd=self.vm1_pmd) + self.check_packets_of_each_queue(vm_pmd=self.vm0_pmd, queues=4) + self.check_packets_of_each_queue(vm_pmd=self.vm1_pmd, queues=4) + + def test_vm2vm_virtio_pmd_packed_ring_mergeable_path_dynamic_queue_size_with_dsa_kernel_driver_and_server_mode( + self, + ): + """ + Test Case 7: VM2VM virtio-pmd packed ring mergeable path dynamic queue size with dsa kernel driver and server mode + """ + 
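+        # Map each of the 4 tx/rx queue pairs to its own kernel idxd work
+        # queue (wq0.0-wq0.3); txqN and rxqN can share a channel, since one
+        # DMA device may be shared among multiple tx/rx queues.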
self.check_path = ["virtio_dev_rx_async", "virtio_dev_tx_async"] + self.DC.create_work_queue(work_queue_number=8, dsa_index=0) + self.DC.create_work_queue(work_queue_number=8, dsa_index=1) + dmas = ( + "txq0@wq0.0;" + "txq1@wq0.1;" + "txq2@wq0.2;" + "txq3@wq0.3;" + "rxq0@wq0.0;" + "rxq1@wq0.1;" + "rxq2@wq0.2;" + "rxq3@wq0.3" + ) + vhost_eal_param = ( + "--vdev 'eth_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[%s]' " + "--vdev 'eth_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[%s]'" + % (dmas, dmas) + ) + vhost_param = "--nb-cores=4 --txd=1024 --rxd=1024 --rxq=4 --txq=4" + self.start_vhost_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + no_pci=True, + ) + self.start_vms(vm_queue=8, mergeable=True, packed=True, server_mode=True) + self.vm0_pmd = PmdOutput(self.vm_dut[0]) + self.vm1_pmd = PmdOutput(self.vm_dut[1]) + self.start_vm_testpmd(vm_pmd=self.vm0_pmd, queues=8, mergeable=True) + self.vm0_pmd.execute_cmd("start") + self.start_vm_testpmd(vm_pmd=self.vm1_pmd, queues=8, mergeable=True) + self.send_big_imix_packets_from_vm1() + self.check_packets_looped_in_2vms(vm_pmd=self.vm0_pmd) + self.check_packets_looped_in_2vms(vm_pmd=self.vm1_pmd) + self.check_packets_of_each_queue(vm_pmd=self.vm0_pmd, queues=4) + self.check_packets_of_each_queue(vm_pmd=self.vm1_pmd, queues=4) + + self.logger.info("Quit and relaunch VM2 with split ring") + self.vm1_pmd.execute_cmd("quit", "#") + self.vm[1].stop() + self.vm_dut.remove(self.vm_dut[1]) + self.vm.remove(self.vm[1]) + self.start_vms( + vm_queue=8, mergeable=True, packed=False, restart_vm1=True, server_mode=True + ) + self.vm1_pmd = PmdOutput(self.vm_dut[1]) + self.vm0_pmd.execute_cmd("start") + self.start_vm_testpmd(vm_pmd=self.vm1_pmd, queues=8, mergeable=True) + self.send_small_imix_packets_from_vm1() + self.get_and_verify_func_name_of_perf_top(self.check_path) + self.check_packets_looped_in_2vms(vm_pmd=self.vm0_pmd) + self.check_packets_looped_in_2vms(vm_pmd=self.vm1_pmd) + self.check_packets_of_each_queue(vm_pmd=self.vm0_pmd, queues=4) + self.check_packets_of_each_queue(vm_pmd=self.vm1_pmd, queues=4) + + self.dynamic_change_queue_size(dut_pmd=self.vhost_user_pmd, queues=8) + self.vm0_pmd.execute_cmd("start") + self.send_64b_packets_from_vm1() + self.get_and_verify_func_name_of_perf_top(self.check_path) + self.check_packets_looped_in_2vms(vm_pmd=self.vm0_pmd) + self.check_packets_looped_in_2vms(vm_pmd=self.vm1_pmd) + self.check_packets_of_each_queue(vm_pmd=self.vm0_pmd, queues=8) + self.check_packets_of_each_queue(vm_pmd=self.vm1_pmd, queues=8) + + def test_vm2vm_virtio_pmd_packed_ring_non_mergeable_path_dynamic_queue_size_with_dsa_kernel_driver_and_server_mode( + self, + ): + """ + Test Case 8: VM2VM virtio-pmd packed ring non-mergeable path dynamic queue size with dsa kernel driver and server mode + """ + self.check_path = ["virtio_dev_rx_async", "virtio_dev_tx_async"] + self.DC.create_work_queue(work_queue_number=8, dsa_index=0) + self.DC.create_work_queue(work_queue_number=8, dsa_index=1) + dmas1 = ( + "txq0@wq0.0;" + "txq1@wq0.0;" + "txq2@wq0.0;" + "txq3@wq0.0;" + "txq4@wq0.1;" + "txq5@wq0.1;" + "rxq2@wq1.0;" + "rxq3@wq1.0;" + "rxq4@wq1.1;" + "rxq5@wq1.1;" + "rxq6@wq1.1;" + "rxq7@wq1.1" + ) + + dmas2 = ( + "txq0@wq0.2;" + "txq1@wq0.2;" + "txq2@wq0.2;" + "txq3@wq0.2;" + "txq4@wq0.3;" + "txq5@wq0.3;" + "rxq2@wq1.2;" + "rxq3@wq1.2;" + "rxq4@wq1.3;" + "rxq5@wq1.3;" + "rxq6@wq1.3;" + "rxq7@wq1.3" + ) + vhost_eal_param = ( + "--vdev 'eth_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[%s]' " + "--vdev 
'eth_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[%s]'" + % (dmas1, dmas2) + ) + vhost_param = "--nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8" + self.start_vhost_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + no_pci=True, + ) + self.start_vms(vm_queue=8, mergeable=False, packed=True, server_mode=True) + self.vm0_pmd = PmdOutput(self.vm_dut[0]) + self.vm1_pmd = PmdOutput(self.vm_dut[1]) + self.start_vm_testpmd(vm_pmd=self.vm0_pmd, queues=8, mergeable=False) + self.vm0_pmd.execute_cmd("start") + self.start_vm_testpmd(vm_pmd=self.vm1_pmd, queues=8, mergeable=False) + self.send_small_imix_packets_from_vm1() + self.get_and_verify_func_name_of_perf_top(self.check_path) + self.check_packets_looped_in_2vms(vm_pmd=self.vm0_pmd) + self.check_packets_looped_in_2vms(vm_pmd=self.vm1_pmd) + self.check_packets_of_each_queue(vm_pmd=self.vm0_pmd, queues=8) + self.check_packets_of_each_queue(vm_pmd=self.vm1_pmd, queues=8) + + self.dynamic_change_queue_size(dut_pmd=self.vhost_user_pmd, queues=4) + self.vm0_pmd.execute_cmd("start") + self.send_64b_packets_from_vm1() + self.get_and_verify_func_name_of_perf_top(self.check_path) + self.check_packets_looped_in_2vms(vm_pmd=self.vm0_pmd) + self.check_packets_looped_in_2vms(vm_pmd=self.vm1_pmd) + self.check_packets_of_each_queue(vm_pmd=self.vm0_pmd, queues=4) + self.check_packets_of_each_queue(vm_pmd=self.vm1_pmd, queues=4) + + def stop_all_apps(self): + for i in range(len(self.vm)): + self.vm[i].stop() + self.vhost_user_pmd.quit() + + def tear_down(self): + self.stop_all_apps() + self.dut.kill_all() + self.dut.send_expect("killall -s INT %s" % self.testpmd_name, "#") + self.DC.reset_all_work_queue() + self.DC.bind_all_dsa_to_kernel() + + def tear_down_all(self): + self.dut.close_session(self.vhost_user)
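A maintainability note on the dmas strings above: every test case assembles
its "txqN@channel;rxqN@channel" list by hand from repeated "%s" fragments. A
helper along the lines of the sketch below (hypothetical, not part of this
patch, and assuming only the queue@channel syntax used in this suite) would
build identical strings from a queue-to-channel mapping:

    # Hypothetical sketch, not part of this patch: build a vhost dmas string
    # such as "txq0@wq0.0;txq1@wq0.1;rxq0@wq0.0" from a queue->channel dict.
    def build_dmas(queue_to_channel):
        # dict preserves insertion order (Python 3.7+), matching the literals.
        return ";".join("%s@%s" % (q, ch) for q, ch in queue_to_channel.items())

    # Example: the Test Case 7 mapping.
    dmas = build_dmas(
        {
            "txq0": "wq0.0", "txq1": "wq0.1", "txq2": "wq0.2", "txq3": "wq0.3",
            "rxq0": "wq0.0", "rxq1": "wq0.1", "rxq2": "wq0.2", "rxq3": "wq0.3",
        }
    )
    assert dmas.startswith("txq0@wq0.0;txq1@wq0.1;txq2@wq0.2")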