From patchwork Tue Nov 22 07:13:17 2022
X-Patchwork-Submitter: "Ling, WeiX"
X-Patchwork-Id: 120052
From: Wei Ling
To: dts@dpdk.org
Cc: Wei Ling
Subject: [dts][PATCH V3 1/2] test_plans/virtio_event_idx_interrupt_cbdma_test_plan: modify the dmas parameter by DPDK changed
Date: Tue, 22 Nov 2022 15:13:17 +0800
Message-Id: <20221122071317.2895261-1-weix.ling@intel.com>

Since DPDK 22.11, the dmas parameter has changed, so modify the dmas
parameter in the test plan.

Signed-off-by: Wei Ling
---
 ...io_event_idx_interrupt_cbdma_test_plan.rst | 118 +++++++++---------
 1 file changed, 60 insertions(+), 58 deletions(-)

diff --git a/test_plans/virtio_event_idx_interrupt_cbdma_test_plan.rst b/test_plans/virtio_event_idx_interrupt_cbdma_test_plan.rst
index 0926e052..d8694ad5 100644
--- a/test_plans/virtio_event_idx_interrupt_cbdma_test_plan.rst
+++ b/test_plans/virtio_event_idx_interrupt_cbdma_test_plan.rst
@@ -12,10 +12,11 @@ This feature is to suppress interrupts for performance improvement, need compare
 interrupt times with and without virtio event idx enabled. This test plan test
 virtio event idx interrupt with cbdma enable. Also need cover driver reload test.
 
-..Note:
-1.For packed virtqueue virtio-net test, need qemu version > 4.2.0 and VM kernel version > 5.1, and packed ring multi-queues not support reconnect in qemu yet.
-2.For split virtqueue virtio-net with multi-queues server mode test, need qemu version >= 5.2.0, dut to old qemu exist reconnect issue when multi-queues test. -3.DPDK local patch that about vhost pmd is needed when testing Vhost asynchronous data path with testpmd. +.. note:: + + 1.For packed virtqueue virtio-net test, need qemu version > 4.2.0 and VM kernel version > 5.1, and packed ring multi-queues not support reconnect in qemu yet. + 2.For split virtqueue virtio-net with multi-queues server mode test, need qemu version >= 5.2.0, dut to old qemu exist reconnect issue when multi-queues test. + 3.DPDK local patch that about vhost pmd is needed when testing Vhost asynchronous data path with testpmd. Prerequisites ============= @@ -53,27 +54,29 @@ General set up Test case ========= -Test Case1: Split ring virtio-pci driver reload test with CBDMA enable ----------------------------------------------------------------------- -This case tests split ring event idx interrupt mode workable after reload virtio-pci driver several times when vhost uses the asynchronous operations with CBDMA channels. +Test Case 1: Split ring virtio-pci driver reload test with CBDMA enable +----------------------------------------------------------------------- +This case tests split ring event idx interrupt mode workable after reload +virtio-pci driver several times when vhost uses the asynchronous +operations with CBDMA channels. 1. Bind one nic port and one cbdma channel to vfio-pci, then launch the vhost sample by below commands:: rm -rf vhost-net* ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xF0000000 -n 4 --file-prefix=vhost \ - --vdev 'net_vhost,iface=vhost-net,queues=1,dmas=[txq0;rxq0]' \ - -- -i --nb-cores=1 --txd=1024 --rxd=1024 --lcore-dma=[lcore29@0000:00:04.0] + --vdev 'net_vhost,iface=vhost-net,queues=1,dmas=[txq0@0000:00:04.0;rxq0@0000:00:04.0]' \ + -- -i --nb-cores=1 --txd=1024 --rxd=1024 testpmd> start 2. Launch VM:: - taskset -c 32-33 \ - qemu-system-x86_64 -name us-vhost-vm1 \ + taskset -c 32-33 qemu-system-x86_64 -name vm1 \ -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \ - -smp cores=1,sockets=1 -drive file=/home/osimg/ubuntu2004_1.img \ - -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ - -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6000-:22 \ - -chardev socket,id=char1,path=./vhost-net -netdev type=vhost-user,id=mynet1,chardev=char1,vhostforce \ + -smp cores=1,sockets=1 -drive file=/home/osimg/ubuntu2004.img \ + -monitor unix:/tmp/vm2_monitor.sock,server,nowait \ + -device e1000,netdev=nttsip1 -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6000-:22 \ + -chardev socket,id=char1,path=./vhost-net \ + -netdev type=vhost-user,id=mynet1,chardev=char1,vhostforce \ -device virtio-net-pci,mac=52:54:00:00:00:02,netdev=mynet1,mrg_rxbuf=on,csum=on,gso=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on \ -vnc :11 -daemonize @@ -95,30 +98,29 @@ This case tests split ring event idx interrupt mode workable after reload virtio 6. Rerun step4 and step5 10 times to check event idx workable after driver reload. -Test Case2: Split ring 16 queues virtio-net event idx interrupt mode test with cbdma enable -------------------------------------------------------------------------------------------- -This case tests the split ring virtio-net event idx interrupt with 16 queues and when vhost uses the asynchronous operations with CBDMA channels. 
+Test Case 2: Split ring 16 queues virtio-net event idx interrupt mode test with cbdma enable +-------------------------------------------------------------------------------------------- +This case tests the split ring virtio-net event idx interrupt with 16 queues and when +vhost uses the asynchronous operations with CBDMA channels. -1. Bind one nic port and 16 cbdma channels to vfio-pci, then launch the vhost sample by below commands:: +1. Bind one nic port and 4 cbdma channels to vfio-pci, then launch the vhost sample by below commands:: rm -rf vhost-net* ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-17 -n 4 --file-prefix=vhost \ - --vdev 'net_vhost,iface=vhost-net,queues=16,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7;txq8;txq9;txq10;txq11;txq12;txq13;txq14;txq15;rxq0;rxq1;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7;rxq8;rxq9;rxq10;rxq11;rxq12;rxq13;rxq14;rxq15]' \ - -- -i --nb-cores=16 --txd=1024 --rxd=1024 --rxq=16 --txq=16 \ - --lcore-dma=[lcore2@0000:00:04.0,lcore3@0000:00:04.1,lcore4@0000:00:04.2,lcore5@0000:00:04.3,lcore6@0000:00:04.4,lcore7@0000:00:04.5,lcore8@0000:00:04.6,lcore9@0000:00:04.7,\ - lcore10@0000:80:04.0,lcore11@0000:80:04.1,lcore12@0000:80:04.2,lcore13@0000:80:04.3,lcore14@0000:80:04.4,lcore15@0000:80:04.5,lcore16@0000:80:04.6,lcore17@0000:80:04.7] + --vdev 'net_vhost,iface=vhost-net,queues=16,client=1,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.0;txq2@0000:00:04.0;txq3@0000:00:04.0;txq4@0000:00:04.0;txq5@0000:00:04.0;txq6@0000:00:04.0;txq7@0000:00:04.0;txq8@0000:00:04.1;txq9@0000:00:04.1;txq10@0000:00:04.1;txq11@0000:00:04.1;txq12@0000:00:04.1;txq13@0000:00:04.1;txq14@0000:00:04.1;txq15@0000:00:04.1;rxq0@0000:00:04.2;rxq1@0000:00:04.2;rxq2@0000:00:04.2;rxq3@0000:00:04.2;rxq4@0000:00:04.2;rxq5@0000:00:04.2;rxq6@0000:00:04.2;rxq7@0000:00:04.2;rxq8@0000:00:04.3;rxq9@0000:00:04.3;rxq10@0000:00:04.3;rxq11@0000:00:04.3;rxq12@0000:00:04.3;rxq13@0000:00:04.3;rxq14@0000:00:04.3;rxq15@0000:00:04.3]' \ + -- -i --nb-cores=16 --txd=1024 --rxd=1024 --rxq=16 --txq=16 testpmd> start 2. Launch VM:: - taskset -c 32-33 \ - qemu-system-x86_64 -name us-vhost-vm1 \ + taskset -c 32-33 qemu-system-x86_64 -name us-vhost-vm1 \ -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \ - -smp cores=1,sockets=1 -drive file=/home/osimg/ubuntu2004_1.img \ - -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ - -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6000-:22 \ - -chardev socket,id=char1,path=./vhost-net -netdev type=vhost-user,id=mynet1,chardev=char1,vhostforce \ - -device virtio-net-pci,mac=52:54:00:00:00:02,netdev=mynet1,mrg_rxbuf=on,csum=on,gso=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on \ + -smp cores=1,sockets=1 -drive file=/home/osimg/ubuntu2004.img \ + -monitor unix:/tmp/vm2_monitor.sock,server,nowait \ + -device e1000,netdev=nttsip1 -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6000-:22 \ + -chardev socket,id=char1,path=./vhost-net \ + -netdev type=vhost-user,id=mynet1,chardev=char1,vhostforce \ + -device virtio-net-pci,mac=52:54:00:00:00:02,netdev=mynet1,mrg_rxbuf=on,csum=on,gso=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on \ -vnc :11 -daemonize 3. 
On VM1, give virtio device IP and enable vitio-net with 16 quques:: @@ -136,28 +138,30 @@ This case tests the split ring virtio-net event idx interrupt with 16 queues and testpmd> start testpmd> stop -Test Case3: Packed ring virtio-pci driver reload test with CBDMA enable ------------------------------------------------------------------------ -This case tests packed ring event idx interrupt mode workable after reload virtio-pci driver several times when uses the asynchronous operations with CBDMA channels. +Test Case 3: Packed ring virtio-pci driver reload test with CBDMA enable +------------------------------------------------------------------------ +This case tests packed ring event idx interrupt mode workable after reload +virtio-pci driver several times when uses the asynchronous operations +with CBDMA channels. 1. Bind one nic port and one cbdma channel to vfio-pci, then launch the vhost sample by below commands:: rm -rf vhost-net* ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xF0000000 -n 4 --file-prefix=vhost \ - --vdev 'net_vhost,iface=vhost-net,queues=1,dmas=[txq0;rxq0]' \ - -- -i --nb-cores=1 --txd=1024 --rxd=1024 --lcore-dma=[lcore29@0000:00:04.0] + --vdev 'net_vhost,iface=vhost-net,queues=1,dmas=[txq0@0000:00:04.0;rxq0@0000:00:04.0]' \ + -- -i --nb-cores=1 --txd=1024 --rxd=1024 testpmd> start 2. Launch VM:: - taskset -c 32-33 \ - qemu-system-x86_64 -name us-vhost-vm1 \ + taskset -c 32-33 qemu-system-x86_64 -name vm1 \ -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \ - -smp cores=1,sockets=1 -drive file=/home/osimg/ubuntu2004_1.img \ - -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ - -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6000-:22 \ - -chardev socket,id=char1,path=./vhost-net -netdev type=vhost-user,id=mynet1,chardev=char1,vhostforce \ - -device virtio-net-pci,mac=52:54:00:00:00:02,netdev=mynet1,mrg_rxbuf=on,csum=on,gso=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on \ + -smp cores=1,sockets=1 -drive file=/home/osimg/ubuntu2004.img \ + -monitor unix:/tmp/vm2_monitor.sock,server,nowait \ + -device e1000,netdev=nttsip1 -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6000-:22 \ + -chardev socket,id=char1,path=./vhost-net \ + -netdev type=vhost-user,id=mynet1,chardev=char1,vhostforce \ + -device virtio-net-pci,mac=52:54:00:00:00:02,netdev=mynet1,mrg_rxbuf=on,csum=on,gso=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on \ -vnc :11 -daemonize 3. On VM1, set virtio device IP, send 10M packets from packet generator to nic then check virtio device can receive packets:: @@ -178,30 +182,29 @@ This case tests packed ring event idx interrupt mode workable after reload virti 6. Rerun step4 and step5 10 times to check event idx workable after driver reload. -Test Case4: Packed ring 16 queues virtio-net event idx interrupt mode test with cbdma enable --------------------------------------------------------------------------------------------- -This case tests the packed ring virtio-net event idx interrupt with 16 queues and when vhost uses the asynchronous operations with CBDMA channels. +Test Case 4: Packed ring 16 queues virtio-net event idx interrupt mode test with cbdma enable +--------------------------------------------------------------------------------------------- +This case tests the packed ring virtio-net event idx interrupt with 16 queues and when vhost +uses the asynchronous operations with CBDMA channels. -1. 
Bind one nic port and 16 cbdma channels to vfio-pci, then launch the vhost sample by below commands:: +1. Bind one nic port and 4 cbdma channels to vfio-pci, then launch the vhost sample by below commands:: rm -rf vhost-net* ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-17 -n 4 --file-prefix=vhost \ - --vdev 'net_vhost,iface=vhost-net,queues=16,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7;txq8;txq9;txq10;txq11;txq12;txq13;txq14;txq15;rxq0;rxq1;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7;rxq8;rxq9;rxq10;rxq11;rxq12;rxq13;rxq14;rxq15]' \ - -- -i --nb-cores=16 --txd=1024 --rxd=1024 --rxq=16 --txq=16 \ - --lcore-dma=[lcore2@0000:00:04.0,lcore3@0000:00:04.1,lcore4@0000:00:04.2,lcore5@0000:00:04.3,lcore6@0000:00:04.4,lcore7@0000:00:04.5,lcore8@0000:00:04.6,lcore9@0000:00:04.7,\ - lcore10@0000:80:04.0,lcore11@0000:80:04.1,lcore12@0000:80:04.2,lcore13@0000:80:04.3,lcore14@0000:80:04.4,lcore15@0000:80:04.5,lcore15@0000:80:04.6,lcore15@0000:80:04.7] + --vdev 'net_vhost,iface=vhost-net,queues=16,client=1,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.0;txq2@0000:00:04.0;txq3@0000:00:04.0;txq4@0000:00:04.0;txq5@0000:00:04.0;txq6@0000:00:04.0;txq7@0000:00:04.0;txq8@0000:00:04.1;txq9@0000:00:04.1;txq10@0000:00:04.1;txq11@0000:00:04.1;txq12@0000:00:04.1;txq13@0000:00:04.1;txq14@0000:00:04.1;txq15@0000:00:04.1;rxq0@0000:00:04.2;rxq1@0000:00:04.2;rxq2@0000:00:04.2;rxq3@0000:00:04.2;rxq4@0000:00:04.2;rxq5@0000:00:04.2;rxq6@0000:00:04.2;rxq7@0000:00:04.2;rxq8@0000:00:04.3;rxq9@0000:00:04.3;rxq10@0000:00:04.3;rxq11@0000:00:04.3;rxq12@0000:00:04.3;rxq13@0000:00:04.3;rxq14@0000:00:04.3;rxq15@0000:00:04.3]' \ + -- -i --nb-cores=16 --txd=1024 --rxd=1024 --rxq=16 --txq=16 testpmd> start 2. Launch VM:: - taskset -c 32-33 \ - qemu-system-x86_64 -name us-vhost-vm1 \ + taskset -c 32-33 qemu-system-x86_64 -name vm1 \ -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \ - -smp cores=1,sockets=1 -drive file=/home/osimg/ubuntu2004_1.img \ - -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ - -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6000-:22 \ - -chardev socket,id=char1,path=./vhost-net -netdev type=vhost-user,id=mynet1,chardev=char1,vhostforce \ - -device virtio-net-pci,mac=52:54:00:00:00:02,netdev=mynet1,mrg_rxbuf=on,csum=on,gso=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on \ + -smp cores=1,sockets=1 -drive file=/home/osimg/ubuntu2004.img \ + -monitor unix:/tmp/vm2_monitor.sock,server,nowait \ + -device e1000,netdev=nttsip1 -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6000-:22 \ + -chardev socket,id=char1,path=./vhost-net \ + -netdev type=vhost-user,id=mynet1,chardev=char1,vhostforce \ + -device virtio-net-pci,mac=52:54:00:00:00:02,netdev=mynet1,mrg_rxbuf=on,csum=on,gso=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on \ -vnc :11 -daemonize 3. 
On VM1, configure virtio device IP and enable vitio-net with 16 quques::
@@ -218,4 +221,3 @@ This case tests the packed ring virtio-net event idx interrupt with 16 queues an
     testpmd> stop
     testpmd> start
     testpmd> stop
-

From patchwork Tue Nov 22 07:13:31 2022
X-Patchwork-Submitter: "Ling, WeiX"
X-Patchwork-Id: 120053
From: Wei Ling
To: dts@dpdk.org
Cc: Wei Ling
Subject: [dts][PATCH V3 2/2] tests/virtio_event_idx_interrupt_cbdma: modify the dmas parameter by DPDK changed
Date: Tue, 22 Nov 2022 15:13:31 +0800
Message-Id: <20221122071331.2895321-1-weix.ling@intel.com>

Since DPDK 22.11, the dmas parameter has changed, so modify the dmas
parameter in the test suite.
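For reference, a minimal before/after sketch of the vhost testpmd --vdev
arguments affected by this change, taken from the single-queue case in the
test plan of this series; the CBDMA device address 0000:00:04.0 and lcore29
are only the example values used there and will differ per platform::

    # before DPDK 22.11: queues are listed in dmas=[...] and the DMA devices
    # are bound to worker lcores through the separate --lcore-dma option
    --vdev 'net_vhost,iface=vhost-net,queues=1,dmas=[txq0;rxq0]' \
    -- -i --nb-cores=1 --txd=1024 --rxd=1024 --lcore-dma=[lcore29@0000:00:04.0]

    # since DPDK 22.11: each queue names its DMA device directly in dmas=[...]
    # and --lcore-dma is no longer used
    --vdev 'net_vhost,iface=vhost-net,queues=1,dmas=[txq0@0000:00:04.0;rxq0@0000:00:04.0]' \
    -- -i --nb-cores=1 --txd=1024 --rxd=1024

The 16-queue cases follow the same pattern, with txq0..txq15 and rxq0..rxq15
each mapped to one of four CBDMA devices in the dmas list.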
Signed-off-by: Wei Ling --- ...tSuite_virtio_event_idx_interrupt_cbdma.py | 244 ++++++++++-------- 1 file changed, 140 insertions(+), 104 deletions(-) diff --git a/tests/TestSuite_virtio_event_idx_interrupt_cbdma.py b/tests/TestSuite_virtio_event_idx_interrupt_cbdma.py index c5d7af18..20919131 100644 --- a/tests/TestSuite_virtio_event_idx_interrupt_cbdma.py +++ b/tests/TestSuite_virtio_event_idx_interrupt_cbdma.py @@ -260,9 +260,12 @@ class TestVirtioIdxInterruptCbdma(TestCase): Test Case1: Split ring virtio-pci driver reload test with CBDMA enable """ self.get_cbdma_ports_info_and_bind_to_dpdk(cbdma_num=1) - lcore_dma = "lcore%s@%s" % (self.vhost_core_list[1], self.cbdma_list[0]) - vhost_param = "--nb-cores=1 --txd=1024 --rxd=1024 --lcore-dma=[%s]" % lcore_dma - vhost_eal_param = "--vdev 'net_vhost,iface=vhost-net,queues=1,dmas=[txq0;rxq0]'" + dmas = "txq0@%s;rxq0@%s" % ( + self.cbdma_list[0], + self.cbdma_list[0], + ) + vhost_param = "--nb-cores=1 --txd=1024 --rxd=1024" + vhost_eal_param = "--vdev 'net_vhost,iface=vhost-net,queues=1,dmas=[%s]'" % dmas ports = self.cbdma_list ports.append(self.dut.ports_info[0]["pci"]) self.vhost_pmd.start_testpmd( @@ -287,63 +290,78 @@ class TestVirtioIdxInterruptCbdma(TestCase): Test Case2: Split ring 16 queues virtio-net event idx interrupt mode test with cbdma enable """ self.get_cbdma_ports_info_and_bind_to_dpdk(cbdma_num=16, allow_diff_socket=True) - lcore_dma = ( - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s" + dmas = ( + "txq0@%s;" + "txq1@%s;" + "txq2@%s;" + "txq3@%s;" + "txq4@%s;" + "txq5@%s;" + "txq6@%s;" + "txq7@%s;" + "txq8@%s;" + "txq9@%s;" + "txq10@%s;" + "txq11@%s;" + "txq12@%s;" + "txq13@%s;" + "txq14@%s;" + "txq15@%s;" + "rxq0@%s;" + "rxq1@%s;" + "rxq2@%s;" + "rxq3@%s;" + "rxq4@%s;" + "rxq5@%s;" + "rxq6@%s;" + "rxq7@%s;" + "rxq8@%s;" + "rxq9@%s;" + "rxq10@%s;" + "rxq11@%s;" + "rxq12@%s;" + "rxq13@%s;" + "rxq14@%s;" + "rxq15@%s" % ( - self.vhost_core_list[1], self.cbdma_list[0], - self.vhost_core_list[2], + self.cbdma_list[0], + self.cbdma_list[0], + self.cbdma_list[0], + self.cbdma_list[0], + self.cbdma_list[0], + self.cbdma_list[0], + self.cbdma_list[0], + self.cbdma_list[1], + self.cbdma_list[1], + self.cbdma_list[1], + self.cbdma_list[1], + self.cbdma_list[1], + self.cbdma_list[1], + self.cbdma_list[1], self.cbdma_list[1], - self.vhost_core_list[3], self.cbdma_list[2], - self.vhost_core_list[4], + self.cbdma_list[2], + self.cbdma_list[2], + self.cbdma_list[2], + self.cbdma_list[2], + self.cbdma_list[2], + self.cbdma_list[2], + self.cbdma_list[2], + self.cbdma_list[3], + self.cbdma_list[3], + self.cbdma_list[3], + self.cbdma_list[3], + self.cbdma_list[3], + self.cbdma_list[3], + self.cbdma_list[3], self.cbdma_list[3], - self.vhost_core_list[5], - self.cbdma_list[4], - self.vhost_core_list[6], - self.cbdma_list[5], - self.vhost_core_list[7], - self.cbdma_list[6], - self.vhost_core_list[8], - self.cbdma_list[7], - self.vhost_core_list[9], - self.cbdma_list[8], - self.vhost_core_list[10], - self.cbdma_list[9], - self.vhost_core_list[11], - self.cbdma_list[10], - self.vhost_core_list[12], - self.cbdma_list[11], - self.vhost_core_list[13], - self.cbdma_list[12], - self.vhost_core_list[14], - self.cbdma_list[13], - self.vhost_core_list[15], - self.cbdma_list[14], - self.vhost_core_list[16], - self.cbdma_list[15], ) ) - vhost_param = ( - 
"--nb-cores=16 --txd=1024 --rxd=1024 --rxq=16 --txq=16 --lcore-dma=[%s]" - % lcore_dma + vhost_param = "--nb-cores=16 --txd=1024 --rxd=1024 --rxq=16 --txq=16" + vhost_eal_param = ( + "--vdev 'net_vhost,iface=vhost-net,queues=16,client=1,dmas=[%s]'" % dmas ) - vhost_eal_param = "--vdev 'net_vhost,iface=vhost-net,queues=16,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7;txq8;txq9;txq10;txq11;txq12;txq13;txq14;txq15;rxq0;rxq1;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7;rxq8;rxq9;rxq10;rxq11;rxq12;rxq13;rxq14;rxq15]'" ports = self.cbdma_list ports.append(self.dut.ports_info[0]["pci"]) self.vhost_pmd.start_testpmd( @@ -369,9 +387,12 @@ class TestVirtioIdxInterruptCbdma(TestCase): Test Case3: Packed ring virtio-pci driver reload test with CBDMA enable """ self.get_cbdma_ports_info_and_bind_to_dpdk(cbdma_num=1) - lcore_dma = "lcore%s@%s" % (self.vhost_core_list[1], self.cbdma_list[0]) - vhost_param = "--nb-cores=1 --txd=1024 --rxd=1024 --lcore-dma=[%s]" % lcore_dma - vhost_eal_param = "--vdev 'net_vhost,iface=vhost-net,queues=1,dmas=[txq0;rxq0]'" + dmas = "txq0@%s;rxq0@%s" % ( + self.cbdma_list[0], + self.cbdma_list[0], + ) + vhost_param = "--nb-cores=1 --txd=1024 --rxd=1024" + vhost_eal_param = "--vdev 'net_vhost,iface=vhost-net,queues=1,dmas=[%s]'" % dmas ports = self.cbdma_list ports.append(self.dut.ports_info[0]["pci"]) self.vhost_pmd.start_testpmd( @@ -396,63 +417,78 @@ class TestVirtioIdxInterruptCbdma(TestCase): Test Case4: Packed ring 16 queues virtio-net event idx interrupt mode test with cbdma enable """ self.get_cbdma_ports_info_and_bind_to_dpdk(16, allow_diff_socket=True) - lcore_dma = ( - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s" + dmas = ( + "txq0@%s;" + "txq1@%s;" + "txq2@%s;" + "txq3@%s;" + "txq4@%s;" + "txq5@%s;" + "txq6@%s;" + "txq7@%s;" + "txq8@%s;" + "txq9@%s;" + "txq10@%s;" + "txq11@%s;" + "txq12@%s;" + "txq13@%s;" + "txq14@%s;" + "txq15@%s;" + "rxq0@%s;" + "rxq1@%s;" + "rxq2@%s;" + "rxq3@%s;" + "rxq4@%s;" + "rxq5@%s;" + "rxq6@%s;" + "rxq7@%s;" + "rxq8@%s;" + "rxq9@%s;" + "rxq10@%s;" + "rxq11@%s;" + "rxq12@%s;" + "rxq13@%s;" + "rxq14@%s;" + "rxq15@%s" % ( - self.vhost_core_list[1], self.cbdma_list[0], - self.vhost_core_list[2], + self.cbdma_list[0], + self.cbdma_list[0], + self.cbdma_list[0], + self.cbdma_list[0], + self.cbdma_list[0], + self.cbdma_list[0], + self.cbdma_list[0], + self.cbdma_list[1], + self.cbdma_list[1], + self.cbdma_list[1], + self.cbdma_list[1], + self.cbdma_list[1], + self.cbdma_list[1], + self.cbdma_list[1], self.cbdma_list[1], - self.vhost_core_list[3], self.cbdma_list[2], - self.vhost_core_list[4], + self.cbdma_list[2], + self.cbdma_list[2], + self.cbdma_list[2], + self.cbdma_list[2], + self.cbdma_list[2], + self.cbdma_list[2], + self.cbdma_list[2], + self.cbdma_list[3], + self.cbdma_list[3], + self.cbdma_list[3], + self.cbdma_list[3], + self.cbdma_list[3], + self.cbdma_list[3], + self.cbdma_list[3], self.cbdma_list[3], - self.vhost_core_list[5], - self.cbdma_list[4], - self.vhost_core_list[6], - self.cbdma_list[5], - self.vhost_core_list[7], - self.cbdma_list[6], - self.vhost_core_list[8], - self.cbdma_list[7], - self.vhost_core_list[9], - self.cbdma_list[8], - self.vhost_core_list[10], - self.cbdma_list[9], - self.vhost_core_list[11], - self.cbdma_list[10], - self.vhost_core_list[12], - self.cbdma_list[11], - self.vhost_core_list[13], - 
self.cbdma_list[12], - self.vhost_core_list[14], - self.cbdma_list[13], - self.vhost_core_list[15], - self.cbdma_list[14], - self.vhost_core_list[16], - self.cbdma_list[15], ) ) - vhost_param = ( - "--nb-cores=16 --txd=1024 --rxd=1024 --rxq=16 --txq=16 --lcore-dma=[%s]" - % lcore_dma + vhost_param = "--nb-cores=16 --txd=1024 --rxd=1024 --rxq=16 --txq=16" + vhost_eal_param = ( + "--vdev 'net_vhost,iface=vhost-net,queues=16,client=1,dmas=[%s]'" % dmas ) - vhost_eal_param = "--vdev 'net_vhost,iface=vhost-net,queues=16,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7;txq8;txq9;txq10;txq11;txq12;txq13;txq14;txq15;rxq0;rxq1;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7;rxq8;rxq9;rxq10;rxq11;rxq12;rxq13;rxq14;rxq15]'" ports = self.cbdma_list ports.append(self.dut.ports_info[0]["pci"]) self.vhost_pmd.start_testpmd(