From patchwork Wed Aug 3 02:03:52 2022
X-Patchwork-Submitter: "Ling, WeiX"
X-Patchwork-Id: 114557
From: Wei Ling
To: dts@dpdk.org
Cc: Wei Ling
Subject: [dts][PATCH V2 1/2] test_plans/vhost_virtio_pmd_interrupt_cbdma_test_plan: modify testplan to test virtio dequeue
Date: Tue, 2 Aug 2022 22:03:52 -0400
Message-Id: <20220803020352.1123844-1-weix.ling@intel.com>
List-Id: test suite reviews and discussions

From DPDK-22.07, virtio supports async dequeue for the split and packed ring
paths, so modify the vhost_virtio_pmd_interrupt_cbdma test plan to test the
split and packed ring async dequeue feature.

Signed-off-by: Wei Ling
---
 ...t_virtio_pmd_interrupt_cbdma_test_plan.rst | 131 +++++++++++-------
 1 file changed, 82 insertions(+), 49 deletions(-)

diff --git a/test_plans/vhost_virtio_pmd_interrupt_cbdma_test_plan.rst b/test_plans/vhost_virtio_pmd_interrupt_cbdma_test_plan.rst
index fc203020..8b82cca6 100644
--- a/test_plans/vhost_virtio_pmd_interrupt_cbdma_test_plan.rst
+++ b/test_plans/vhost_virtio_pmd_interrupt_cbdma_test_plan.rst
@@ -10,45 +10,75 @@ Description

 The virtio-pmd interrupt needs to be tested with the l3fwd-power sample: small
 packets are sent from the traffic generator to the virtio-pmd side; check that
 the virtio-pmd cores are in wakeup status, and that the virtio-pmd cores go back
-to sleep status after the traffic generator stops sending packets when cbdma enabled. This test plan
+to sleep status after the traffic generator stops sending packets when cbdma is enabled. This test plan
 covers virtio 0.95, virtio 1.0 and virtio 1.1.
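As an illustration (not part of this patch), the wake/sleep checks described
above are done by inspecting l3fwd-power output. A minimal Python sketch,
assuming l3fwd-power's usual log wording ("lcore X is waked up from rx
interrupt ..." and "lcore X sleeps until interrupt triggers"); verify the
exact strings against the output of your DPDK build::

    import re

    def waked_lcores(log_text):
        """lcore ids that l3fwd-power reported as woken by an rx interrupt."""
        return set(re.findall(r"lcore (\d+) is waked up from rx interrupt", log_text))

    def sleeping_lcores(log_text):
        """lcore ids that l3fwd-power reported as back in sleep state."""
        return set(re.findall(r"lcore (\d+) sleeps until interrupt triggers", log_text))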
 .. note::

 1. For the packed virtqueue virtio-net test, qemu version > 4.2.0 and a VM kernel version > 5.1 are needed, and packed ring multi-queues does not support reconnect in qemu yet.
-2.For split virtqueue virtio-net with multi-queues server mode test, need qemu version >= 5.2.0, dut to old qemu exist reconnect issue when multi-queues test.
-3.DPDK local patch that about vhost pmd is needed when testing Vhost asynchronous data path with testpmd.
+2. For the split virtqueue virtio-net with multi-queues server mode test, it is better to use qemu version >= 5.2.0, due to a split ring multi-queues reconnection issue in qemu (v4.2.0~v5.1.0).
+3. Kernel version > 4.8.0 is needed; most Linux distributions don't support vfio-noiommu mode by default, so testing this case requires rebuilding the kernel with vfio-noiommu enabled.
+4. When DMA devices are bound to the vfio driver, VA mode is the default and recommended mode. In PA mode, page-by-page mapping may exceed the IOMMU's max capability, so it is better to use 1G guest hugepages.
+5. A local DPDK patch for the vhost pmd is needed when testing the vhost asynchronous data path with testpmd.

 Prerequisites
 =============

+Topology
+--------
+Test flow: TG --> NIC --> Vhost --> Virtio --> Vhost --> NIC --> TG

-Test ENV preparation: Kernel version > 4.8.0, mostly linux distribution don't support vfio-noiommu mode by default,
-so testing this case need rebuild kernel to enable vfio-noiommu.
+Software
+--------
+TRex: http://trex-tgn.cisco.com/trex/release/v2.26.tar.gz

-Test flow
-=========
+General set up
+--------------
+1. Compile DPDK::
+
+    # CC=gcc meson --werror -Denable_kmods=True -Dlibdir=lib -Dexamples=all --default-library=static <dpdk build dir>
+    # ninja -C <dpdk build dir> -j 110
+
+   For example::
+
+    CC=gcc meson --werror -Denable_kmods=True -Dlibdir=lib -Dexamples=all --default-library=static x86_64-native-linuxapp-gcc
+    ninja -C x86_64-native-linuxapp-gcc -j 110
+
+2. Get the PCI device ID and DMA device IDs of the DUT. For example, 0000:18:00.0 is a PCI device ID, and 0000:00:04.0, 0000:00:04.1 are DMA device IDs::
+
+    # ./usertools/dpdk-devbind.py -s

-TG --> NIC --> Vhost --> Virtio--> Vhost --> NIC --> TG
+    Network devices using kernel driver
+    ===================================
+    0000:18:00.0 'Device 159b' if=ens785f0 drv=ice unused=vfio-pci

-Test Case1: Basic virtio interrupt test with 16 queues and cbdma enabled
-=========================================================================
+    DMA devices using kernel driver
+    ===============================
+    0000:00:04.0 'Sky Lake-E CBDMA Registers 2021' drv=ioatdma unused=vfio-pci
+    0000:00:04.1 'Sky Lake-E CBDMA Registers 2021' drv=ioatdma unused=vfio-pci
+
+Test case
+=========
+
+Test Case1: Basic virtio0.95 interrupt test with 16 queues and cbdma enable
+---------------------------------------------------------------------------
+This case tests the virtio0.95 pmd interrupt with the l3fwd-power sample when vhost uses asynchronous operations with CBDMA channels.
 1. Bind 16 cbdma channels and one NIC port to vfio-pci, then launch testpmd with the below command::

     ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x1ffff -n 4 \
-    --vdev 'eth_vhost0,iface=vhost-net,queues=16,dmas=[txq0@80:04.0;txq1@80:04.1;txq2@80:04.2;txq3@80:04.3;txq4@80:04.4;txq5@80:04.5;txq6@80:04.6;txq7@80:04.7;txq8@00:04.0;txq9@00:04.1;txq10@00:04.2;txq11@00:04.3;txq12@00:04.4;txq13@00:04.5;txq14@00:04.6;txq15@00:04.7]' \
-    -- -i --nb-cores=16 --rxq=16 --txq=16 --rss-ip
+    --vdev 'eth_vhost0,iface=vhost-net,queues=16,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7;txq8;txq9;txq10;txq11;txq12;txq13;txq14;txq15;rxq0;rxq1;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7;rxq8;rxq9;rxq10;rxq11;rxq12;rxq13;rxq14;rxq15]' \
+    -- -i --nb-cores=16 --rxq=16 --txq=16 --rss-ip --lcore-dma=[lcore1@0000:00:04.0,lcore2@0000:00:04.0,lcore3@0000:00:04.1,lcore3@0000:00:04.2,lcore4@0000:00:04.3,lcore5@0000:00:04.4,lcore6@0000:00:04.5,lcore7@0000:00:04.6,lcore8@0000:00:04.7,\
+    lcore9@0000:80:04.0,lcore10@0000:80:04.1,lcore11@0000:80:04.2,lcore12@0000:80:04.3,lcore13@0000:80:04.4,lcore14@0000:80:04.5,lcore15@0000:80:04.6,lcore16@0000:80:04.7]

 2. Launch VM1, set queues=16, vectors>=2xqueues+2, mq=on::

-    taskset -c 34-35 \
-    qemu-system-x86_64 -name us-vhost-vm2 \
-    -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
-    -smp cores=16,sockets=1 -drive file=/home/osimg/ubuntu1910.img \
-    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
-    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \
-    -chardev socket,id=char1,path=./vhost-net -netdev type=vhost-user,id=mynet2,chardev=char1,vhostforce,queues=16 \
-    -device virtio-net-pci,mac=52:54:00:00:00:02,netdev=mynet2,mrg_rxbuf=on,csum=on,mq=on,vectors=40 \
-    -vnc :11 -daemonize
+    taskset -c 34-35 qemu-system-x86_64 -name us-vhost-vm2 \
+    -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
+    -smp cores=16,sockets=1 -drive file=/home/osimg/ubuntu1910.img \
+    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \
+    -chardev socket,id=char1,path=./vhost-net -netdev type=vhost-user,id=mynet2,chardev=char1,vhostforce,queues=16 \
+    -device virtio-net-pci,mac=52:54:00:00:00:02,netdev=mynet2,mrg_rxbuf=on,csum=on,mq=on,vectors=40 \
+    -vnc :11 -daemonize

 3. Bind virtio port to vfio-pci::

@@ -62,32 +92,33 @@ Test Case1: Basic virtio interrupt test with 16 queues and cbdma enabled
     -- -p 1 -P --config '(0,0,0),(0,1,1),(0,2,2),(0,3,3)(0,4,4),(0,5,5),(0,6,6),(0,7,7)(0,8,8),(0,9,9),(0,10,10),(0,11,11)(0,12,12),(0,13,13),(0,14,14),(0,15,15)' \
     --no-numa --parse-ptype

-5. Send random dest ip address packets to host nic with packet generator, packets will distribute to all queues, check l3fwd-power log that all related cores are waked up.
+5. Send random dest IP packets to the host NIC with a packet generator; packets will be distributed to all queues. Check the l3fwd-power log that all related cores are waked up.

-6. Change dest IP address to fixed ip, packets will distribute to 1 queue, check l3fwd-power log that only one related core is waked up.
+6. Change the dest IP to a fixed IP; packets will be distributed to 1 queue. Check the l3fwd-power log that only one related core is waked up.

 7. Stop the data transmitter and check that all related cores go back to sleep status.
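As an illustration (not part of this patch), the long --lcore-dma list in step 1
simply pairs each vhost worker lcore with a CBDMA channel PCI address; an lcore
may appear more than once to use several channels. A minimal Python sketch that
generates such a string from placeholder lcore numbers and channel addresses
(a simplified one-to-one pairing, unlike step 1, which doubles up some lcores)::

    # Placeholder lcores 1-16 and the 16 CBDMA channels from the setup above.
    lcores = list(range(1, 17))
    cbdmas = ["0000:00:04.%d" % i for i in range(8)] + \
             ["0000:80:04.%d" % i for i in range(8)]
    # One "lcoreN@PCI" entry per (lcore, channel) pair, joined with commas.
    pairs = ["lcore%d@%s" % (lc, dma) for lc, dma in zip(lcores, cbdmas)]
    print("--lcore-dma=[%s]" % ",".join(pairs))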
-Test Case2: Basic virtio-1.0 interrupt test with 4 queues and cbdma enabled
-============================================================================
+Test Case2: Basic virtio-1.0 interrupt test with 4 queues and cbdma enable
+--------------------------------------------------------------------------
+This case tests the virtio1.0 pmd interrupt with the l3fwd-power sample when vhost uses asynchronous operations with CBDMA channels.

 1. Bind four cbdma channels and one NIC port to vfio-pci, then launch testpmd with the below command::

     ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x7c -n 4 \
-    --vdev 'net_vhost0,iface=vhost-net,queues=4,dmas=[txq0@80:04.0;txq1@80:04.1;txq2@80:04.2;txq3@80:04.3]' \
-    -- -i --nb-cores=4 --rxq=4 --txq=4 --rss-ip
+    --vdev 'net_vhost0,iface=vhost-net,queues=4,dmas=[txq0;txq1;txq2;txq3;rxq0;rxq1;rxq2;rxq3]' \
+    -- -i --nb-cores=4 --rxq=4 --txq=4 --rss-ip --lcore-dma=[lcore3@0000:00:04.0,lcore4@0000:00:04.0,lcore5@0000:00:04.0,lcore5@0000:00:04.1,lcore6@0000:00:04.1]

 2. Launch VM1, set queues=4, vectors>=2xqueues+2, mq=on::

     taskset -c 34-35 \
     qemu-system-x86_64 -name us-vhost-vm2 \
-    -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
-    -smp cores=4,sockets=1 -drive file=/home/osimg/ubuntu1910.img \
-    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
-    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \
-    -chardev socket,id=char1,path=./vhost-net -netdev type=vhost-user,id=mynet2,chardev=char1,vhostforce,queues=4 \
-    -device virtio-net-pci,mac=52:54:00:00:00:02,netdev=mynet2,disable-modern=false,mrg_rxbuf=on,csum=on,mq=on,vectors=15 \
-    -vnc :11 -daemonize
+    -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
+    -smp cores=4,sockets=1 -drive file=/home/osimg/ubuntu1910.img \
+    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \
+    -chardev socket,id=char1,path=./vhost-net -netdev type=vhost-user,id=mynet2,chardev=char1,vhostforce,queues=4 \
+    -device virtio-net-pci,mac=52:54:00:00:00:02,netdev=mynet2,disable-modern=false,mrg_rxbuf=on,csum=on,mq=on,vectors=15 \
+    -vnc :11 -daemonize

 3. Bind virtio port to vfio-pci::

@@ -99,32 +130,34 @@ Test Case2: Basic virtio-1.0 interrupt test with 4 queues and cbdma enabled
     ./x86_64-native-linuxapp-gcc/examples/dpdk-l3fwd-power -c 0xf -n 4 --log-level='user1,7' -- -p 1 -P --config="(0,0,0),(0,1,1),(0,2,2),(0,3,3)" --no-numa --parse-ptype

-5. Send random dest ip address packets to host nic with packet generator, packets will distribute to all queues, check l3fwd-power log that all related cores are waked up.
+5. Send random dest IP packets to the host NIC with a packet generator; packets will be distributed to all queues. Check the l3fwd-power log that all related cores are waked up.

-6. Change dest IP address to fixed ip, packets will distribute to 1 queue, check l3fwd-power log that only one related core is waked up.
+6. Change the dest IP to a fixed IP; packets will be distributed to 1 queue. Check the l3fwd-power log that only one related core is waked up.

 7. Stop the data transmitter and check that all related cores go back to sleep status.
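As an illustration (not part of this patch), steps 5 and 6 rely on RSS: random
destination IPs hash across all queues, while one fixed destination IP hashes
to a single queue. The plan drives traffic with TRex; the same two patterns
could be produced with Scapy, where the interface name and destination MAC
below are placeholders::

    from random import randint
    from scapy.all import IP, UDP, Ether, sendp

    IFACE = "ens785f0"              # placeholder: tester port towards the DUT
    DST_MAC = "52:54:00:00:00:02"   # placeholder: virtio-net MAC in the VM

    # Step 5: random dest IPs, spread over all queues by RSS.
    pkts = [Ether(dst=DST_MAC) / IP(dst="10.0.%d.%d" % (randint(0, 255), randint(1, 254))) / UDP(dport=1024)
            for _ in range(1000)]
    sendp(pkts, iface=IFACE, verbose=False)

    # Step 6: one fixed dest IP, so all packets land on a single queue.
    sendp(Ether(dst=DST_MAC) / IP(dst="10.0.0.1") / UDP(dport=1024),
          iface=IFACE, count=1000, verbose=False)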
-Test Case3: Packed ring virtio interrupt test with 16 queues and cbdma enabled
-===============================================================================
+Test Case3: Packed ring virtio interrupt test with 16 queues and cbdma enable
+-----------------------------------------------------------------------------
+This case tests the packed ring virtio-pmd interrupt with the l3fwd-power sample when vhost uses asynchronous operations with CBDMA channels.

 1. Bind 16 cbdma channels and one NIC port to vfio-pci, then launch testpmd with the below command::

     ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x1ffff -n 4 \
-    --vdev 'eth_vhost0,iface=vhost-net,queues=16,dmas=[txq0@80:04.0;txq1@80:04.1;txq2@80:04.2;txq3@80:04.3;txq4@80:04.4;txq5@80:04.5;txq6@80:04.6;txq7@80:04.7;txq8@00:04.0;txq9@00:04.1;txq10@00:04.2;txq11@00:04.3;txq12@00:04.4;txq13@00:04.5;txq14@00:04.6;txq15@00:04.7]' \
-    -- -i --nb-cores=16 --rxq=16 --txq=16 --rss-ip
+    --vdev 'eth_vhost0,iface=vhost-net,queues=16,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7;txq8;txq9;txq10;txq11;txq12;txq13;txq14;txq15;rxq0;rxq1;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7;rxq8;rxq9;rxq10;rxq11;rxq12;rxq13;rxq14;rxq15]' \
+    -- -i --nb-cores=16 --rxq=16 --txq=16 --rss-ip --lcore-dma=[lcore1@0000:00:04.0,lcore2@0000:00:04.0,lcore3@0000:00:04.1,lcore3@0000:00:04.2,lcore4@0000:00:04.3,lcore5@0000:00:04.4,lcore6@0000:00:04.5,lcore7@0000:00:04.6,lcore8@0000:00:04.7,\
+    lcore9@0000:80:04.0,lcore10@0000:80:04.1,lcore11@0000:80:04.2,lcore12@0000:80:04.3,lcore13@0000:80:04.4,lcore14@0000:80:04.5,lcore15@0000:80:04.6,lcore16@0000:80:04.7]

 2. Launch VM1, set queues=16, vectors>=2xqueues+2, mq=on::

     taskset -c 34-35 \
     qemu-system-x86_64 -name us-vhost-vm2 \
-    -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
-    -smp cores=16,sockets=1 -drive file=/home/osimg/ubuntu1910.img \
-    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
-    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \
-    -chardev socket,id=char1,path=./vhost-net -netdev type=vhost-user,id=mynet2,chardev=char1,vhostforce,queues=16 \
-    -device virtio-net-pci,mac=52:54:00:00:00:02,netdev=mynet2,mrg_rxbuf=on,csum=on,mq=on,vectors=40,packed=on \
-    -vnc :11 -daemonize
+    -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
+    -smp cores=16,sockets=1 -drive file=/home/osimg/ubuntu1910.img \
+    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \
+    -chardev socket,id=char1,path=./vhost-net -netdev type=vhost-user,id=mynet2,chardev=char1,vhostforce,queues=16 \
+    -device virtio-net-pci,mac=52:54:00:00:00:02,netdev=mynet2,mrg_rxbuf=on,csum=on,mq=on,vectors=40,packed=on \
+    -vnc :11 -daemonize

 3. Bind virtio port to vfio-pci::

@@ -136,8 +169,8 @@ Test Case3: Packed ring virtio interrupt test with 16 queues and cbdma enabled
     ./x86_64-native-linuxapp-gcc/examples/dpdk-l3fwd-power -c 0x0ffff -n 4 --log-level='user1,7' -- -p 1 -P --config '(0,0,0),(0,1,1),(0,2,2),(0,3,3)(0,4,4),(0,5,5),(0,6,6),(0,7,7)(0,8,8),(0,9,9),(0,10,10),(0,11,11)(0,12,12),(0,13,13),(0,14,14),(0,15,15)' --no-numa --parse-ptype

-5. Send random dest ip address packets to host nic with packet generator, packets will distribute to all queues, check l3fwd-power log that all related cores are waked up.
+5. Send random dest IP packets to the host NIC with a packet generator; packets will be distributed to all queues. Check the l3fwd-power log that all related cores are waked up.

-6. Change dest IP address to fixed ip, packets will distribute to 1 queue, check l3fwd-power log that only one related core is waked up.
+6. Change the dest IP to a fixed IP; packets will be distributed to 1 queue. Check the l3fwd-power log that only one related core is waked up.

 7. Stop the data transmitter and check that all related cores go back to sleep status.

From patchwork Wed Aug 3 02:04:02 2022
X-Patchwork-Submitter: "Ling, WeiX"
X-Patchwork-Id: 114558
From: Wei Ling
To: dts@dpdk.org
Cc: Wei Ling
Subject: [dts][PATCH V2 2/2] tests/vhost_virtio_pmd_interrupt_cbdma: modify testsuite to test virtio dequeue
Date: Tue, 2 Aug 2022 22:04:02 -0400
Message-Id: <20220803020402.1123908-1-weix.ling@intel.com>
List-Id: test suite reviews and discussions

From DPDK-22.07, virtio supports async dequeue for the split and packed ring
paths, so modify the vhost_virtio_pmd_interrupt_cbdma test suite to test the
split and packed ring async dequeue feature.
Signed-off-by: Wei Ling
---
 ...tSuite_vhost_virtio_pmd_interrupt_cbdma.py | 209 ++++++++++++------
 1 file changed, 145 insertions(+), 64 deletions(-)

diff --git a/tests/TestSuite_vhost_virtio_pmd_interrupt_cbdma.py b/tests/TestSuite_vhost_virtio_pmd_interrupt_cbdma.py
index 779010ba..e8132a8a 100644
--- a/tests/TestSuite_vhost_virtio_pmd_interrupt_cbdma.py
+++ b/tests/TestSuite_vhost_virtio_pmd_interrupt_cbdma.py
@@ -33,7 +33,7 @@ class TestVhostVirtioPmdInterruptCbdma(TestCase):
             [n for n in self.dut.cores if int(n["socket"]) == self.ports_socket]
         )
         self.core_list = self.dut.get_core_list("all", socket=self.ports_socket)
-        self.core_list_vhost = self.core_list[0:17]
+        self.vhost_core_list = self.core_list[0:17]
         self.tx_port = self.tester.get_local_port(self.dut_ports[0])
         self.dst_mac = self.dut.get_mac_address(self.dut_ports[0])
         self.logger.info(
@@ -138,7 +138,7 @@ class TestVhostVirtioPmdInterruptCbdma(TestCase):
             2 * self.queues + 2
         )
         if mode == 0:
-            vm_params["opt_settings"] = "disable-modern=true," + opt_param
+            vm_params["opt_settings"] = opt_param
         elif mode == 1:
             vm_params["opt_settings"] = "disable-modern=false," + opt_param
         self.vm.set_vm_device(**vm_params)
@@ -301,38 +301,74 @@ class TestVhostVirtioPmdInterruptCbdma(TestCase):
         self.dut.close_session(vm_dut2)
         self.vhost_pmd.quit()

-    def test_perf_virtio_interrupt_with_16_queues_and_cbdma_enabled(self):
+    def test_perf_virtio95_interrupt_test_with_16_queues_and_cbdma_enable(self):
         """
-        Test Case1: Basic virtio interrupt test with 16 queues and cbdma enabled
+        Test Case1: Basic virtio0.95 interrupt test with 16 queues and cbdma enable
         """
-        self.get_cbdma_ports_info_and_bind_to_dpdk(16, allow_diff_socket=True)
+        self.get_cbdma_ports_info_and_bind_to_dpdk(cbdma_num=16, allow_diff_socket=True)
         lcore_dma = (
-            f"[lcore{self.core_list_vhost[1]}@{self.cbdma_list[0]},"
-            f"lcore{self.core_list[2]}@{self.cbdma_list[0]},"
-            f"lcore{self.core_list[3]}@{self.cbdma_list[1]},"
-            f"lcore{self.core_list[3]}@{self.cbdma_list[2]},"
-            f"lcore{self.core_list[4]}@{self.cbdma_list[3]},"
-            f"lcore{self.core_list[5]}@{self.cbdma_list[4]},"
-            f"lcore{self.core_list[6]}@{self.cbdma_list[5]},"
-            f"lcore{self.core_list[7]}@{self.cbdma_list[6]},"
-            f"lcore{self.core_list[8]}@{self.cbdma_list[7]},"
-            f"lcore{self.core_list[9]}@{self.cbdma_list[8]},"
-            f"lcore{self.core_list[10]}@{self.cbdma_list[9]},"
-            f"lcore{self.core_list[11]}@{self.cbdma_list[10]},"
-            f"lcore{self.core_list[12]}@{self.cbdma_list[11]},"
-            f"lcore{self.core_list[13]}@{self.cbdma_list[12]},"
-            f"lcore{self.core_list[14]}@{self.cbdma_list[13]},"
-            f"lcore{self.core_list[15]}@{self.cbdma_list[14]},"
-            f"lcore{self.core_list[16]}@{self.cbdma_list[15]}]"
+            "lcore%s@%s,"
+            "lcore%s@%s,"
+            "lcore%s@%s,"
+            "lcore%s@%s,"
+            "lcore%s@%s,"
+            "lcore%s@%s,"
+            "lcore%s@%s,"
+            "lcore%s@%s,"
+            "lcore%s@%s,"
+            "lcore%s@%s,"
+            "lcore%s@%s,"
+            "lcore%s@%s,"
+            "lcore%s@%s,"
+            "lcore%s@%s,"
+            "lcore%s@%s,"
+            "lcore%s@%s,"
+            "lcore%s@%s"
+            % (
+                self.vhost_core_list[1],
+                self.cbdma_list[0],
+                self.vhost_core_list[2],
+                self.cbdma_list[0],
+                self.vhost_core_list[3],
+                self.cbdma_list[1],
+                self.vhost_core_list[3],
+                self.cbdma_list[2],
+                self.vhost_core_list[4],
+                self.cbdma_list[3],
+                self.vhost_core_list[5],
+                self.cbdma_list[4],
+                self.vhost_core_list[6],
+                self.cbdma_list[5],
+                self.vhost_core_list[7],
+                self.cbdma_list[6],
+                self.vhost_core_list[8],
+                self.cbdma_list[7],
+                self.vhost_core_list[9],
+                self.cbdma_list[8],
+                self.vhost_core_list[10],
+                self.cbdma_list[9],
+                self.vhost_core_list[11],
+                self.cbdma_list[10],
+                self.vhost_core_list[12],
+                self.cbdma_list[11],
+                self.vhost_core_list[13],
+                self.cbdma_list[12],
+                self.vhost_core_list[14],
+                self.cbdma_list[13],
+                self.vhost_core_list[15],
+                self.cbdma_list[14],
+                self.vhost_core_list[16],
+                self.cbdma_list[15],
+            )
         )
-        vhost_param = "--nb-cores=16 --rxq=16 --txq=16 --rss-ip --lcore-dma={}".format(
-            lcore_dma
+        vhost_param = (
+            "--nb-cores=16 --rxq=16 --txq=16 --rss-ip --lcore-dma=[%s]" % lcore_dma
         )
-        vhost_eal_param = "--vdev 'eth_vhost0,iface=vhost-net,queues=16,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7;txq8;txq9;txq10;txq11;txq12;txq13;txq14;txq15]'"
+        vhost_eal_param = "--vdev 'eth_vhost0,iface=vhost-net,queues=16,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7;txq8;txq9;txq10;txq11;txq12;txq13;txq14;txq15;rxq0;rxq1;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7;rxq8;rxq9;rxq10;rxq11;rxq12;rxq13;rxq14;rxq15]'"
         ports = self.cbdma_list
         ports.append(self.dut.ports_info[0]["pci"])
         self.vhost_pmd.start_testpmd(
-            cores=self.core_list_vhost,
+            cores=self.vhost_core_list,
             ports=ports,
             prefix="vhost",
             eal_param=vhost_eal_param,
@@ -346,28 +382,35 @@ class TestVhostVirtioPmdInterruptCbdma(TestCase):
         self.launch_l3fwd_power_in_vm()
         self.send_and_verify()

-    def test_perf_virtio10_interrupt_with_4_queues_and_cbdma_enabled(self):
+    def test_perf_virtio10_interrupt_test_with_4_queues_and_cbdma_enable(self):
         """
-        Test Case2: Basic virtio-1.0 interrupt test with 4 queues and cbdma enabled
+        Test Case2: Basic virtio-1.0 interrupt test with 4 queues and cbdma enable
         """
-        self.get_cbdma_ports_info_and_bind_to_dpdk(4)
+        self.get_cbdma_ports_info_and_bind_to_dpdk(cbdma_num=4)
         lcore_dma = (
-            f"[lcore{self.core_list_vhost[1]}@{self.cbdma_list[0]},"
-            f"lcore{self.core_list_vhost[2]}@{self.cbdma_list[0]},"
-            f"lcore{self.core_list_vhost[3]}@{self.cbdma_list[0]},"
-            f"lcore{self.core_list_vhost[3]}@{self.cbdma_list[1]},"
-            f"lcore{self.core_list_vhost[4]}@{self.cbdma_list[1]}]"
-        )
-        vhost_param = "--nb-cores=4 --rxq=4 --txq=4 --rss-ip --lcore-dma={}".format(
-            lcore_dma
+            "lcore%s@%s,"
+            "lcore%s@%s,"
+            "lcore%s@%s,"
+            "lcore%s@%s"
+            % (
+                self.vhost_core_list[1],
+                self.cbdma_list[0],
+                self.vhost_core_list[2],
+                self.cbdma_list[1],
+                self.vhost_core_list[3],
+                self.cbdma_list[2],
+                self.vhost_core_list[4],
+                self.cbdma_list[3],
+            )
         )
-        vhost_eal_param = (
-            "--vdev 'net_vhost0,iface=vhost-net,queues=4,dmas=[txq0;txq1;txq2;txq3]'"
+        vhost_param = (
+            "--nb-cores=4 --rxq=4 --txq=4 --rss-ip --lcore-dma=[%s]" % lcore_dma
         )
+        vhost_eal_param = "--vdev 'net_vhost0,iface=vhost-net,queues=4,dmas=[txq0;txq1;txq2;txq3;rxq0;rxq1;rxq2;rxq3]'"
         ports = self.cbdma_list
         ports.append(self.dut.ports_info[0]["pci"])
         self.vhost_pmd.start_testpmd(
-            cores=self.core_list_vhost,
+            cores=self.vhost_core_list,
             ports=ports,
             prefix="vhost",
             eal_param=vhost_eal_param,
@@ -381,38 +424,76 @@ class TestVhostVirtioPmdInterruptCbdma(TestCase):
         self.launch_l3fwd_power_in_vm()
         self.send_and_verify()

-    def test_perf_packed_ring_virtio_interrupt_with_16_queues_and_cbdma_enabled(self):
+    def test_perf_packed_ring_virtio_interrupt_test_with_16_queues_and_cbdma_enable(
+        self,
+    ):
         """
-        Test Case3: Packed ring virtio interrupt test with 16 queues and cbdma enabled
+        Test Case3: Packed ring virtio interrupt test with 16 queues and cbdma enable
         """
-        self.get_cbdma_ports_info_and_bind_to_dpdk(16, allow_diff_socket=True)
+        self.get_cbdma_ports_info_and_bind_to_dpdk(cbdma_num=16, allow_diff_socket=True)
         lcore_dma = (
-            f"[lcore{self.core_list_vhost[1]}@{self.cbdma_list[0]},"
f"lcore{self.core_list[2]}@{self.cbdma_list[0]}," - f"lcore{self.core_list[3]}@{self.cbdma_list[1]}," - f"lcore{self.core_list[3]}@{self.cbdma_list[2]}," - f"lcore{self.core_list[4]}@{self.cbdma_list[3]}," - f"lcore{self.core_list[5]}@{self.cbdma_list[4]}," - f"lcore{self.core_list[6]}@{self.cbdma_list[5]}," - f"lcore{self.core_list[7]}@{self.cbdma_list[6]}," - f"lcore{self.core_list[8]}@{self.cbdma_list[7]}," - f"lcore{self.core_list[9]}@{self.cbdma_list[8]}," - f"lcore{self.core_list[10]}@{self.cbdma_list[9]}," - f"lcore{self.core_list[11]}@{self.cbdma_list[10]}," - f"lcore{self.core_list[12]}@{self.cbdma_list[11]}," - f"lcore{self.core_list[13]}@{self.cbdma_list[12]}," - f"lcore{self.core_list[14]}@{self.cbdma_list[13]}," - f"lcore{self.core_list[15]}@{self.cbdma_list[14]}," - f"lcore{self.core_list[16]}@{self.cbdma_list[15]}]" + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s" + % ( + self.vhost_core_list[1], + self.cbdma_list[0], + self.vhost_core_list[2], + self.cbdma_list[0], + self.vhost_core_list[3], + self.cbdma_list[1], + self.vhost_core_list[3], + self.cbdma_list[2], + self.vhost_core_list[4], + self.cbdma_list[3], + self.vhost_core_list[5], + self.cbdma_list[4], + self.vhost_core_list[6], + self.cbdma_list[5], + self.vhost_core_list[7], + self.cbdma_list[6], + self.vhost_core_list[8], + self.cbdma_list[7], + self.vhost_core_list[9], + self.cbdma_list[8], + self.vhost_core_list[10], + self.cbdma_list[9], + self.vhost_core_list[11], + self.cbdma_list[10], + self.vhost_core_list[12], + self.cbdma_list[11], + self.vhost_core_list[13], + self.cbdma_list[12], + self.vhost_core_list[14], + self.cbdma_list[13], + self.vhost_core_list[15], + self.cbdma_list[14], + self.vhost_core_list[16], + self.cbdma_list[15], + ) ) - vhost_param = "--nb-cores=16 --rxq=16 --txq=16 --rss-ip --lcore-dma={}".format( - lcore_dma + vhost_param = ( + "--nb-cores=16 --rxq=16 --txq=16 --rss-ip --lcore-dma=[%s]" % lcore_dma ) - vhost_eal_param = "--vdev 'eth_vhost0,iface=vhost-net,queues=16,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7;txq8;txq9;txq10;txq11;txq12;txq13;txq14;txq15]'" + vhost_eal_param = "--vdev 'eth_vhost0,iface=vhost-net,queues=16,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7;txq8;txq9;txq10;txq11;txq12;txq13;txq14;txq15;rxq0;rxq1;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7;rxq8;rxq9;rxq10;rxq11;rxq12;rxq13;rxq14;rxq15]'" ports = self.cbdma_list ports.append(self.dut.ports_info[0]["pci"]) self.vhost_pmd.start_testpmd( - cores=self.core_list_vhost, + cores=self.vhost_core_list, ports=ports, prefix="vhost", eal_param=vhost_eal_param,