From patchwork Tue May 10 03:22:11 2022
X-Patchwork-Submitter: "Ling, WeiX"
X-Patchwork-Id: 110959
From: Wei Ling
To: dts@dpdk.org
Cc: Wei Ling
Subject: [dts][PATCH V2 1/3] test_plans/index: add virtio_event_idx_interrupt_cbdma
Date: Mon, 9 May 2022 23:22:11 -0400
Message-Id: <20220510032211.343602-1-weix.ling@intel.com>

Add the new test plan virtio_event_idx_interrupt_cbdma_test_plan.

Signed-off-by: Wei Ling
---
 test_plans/index.rst | 1 +
 1 file changed, 1 insertion(+)

diff --git a/test_plans/index.rst b/test_plans/index.rst
index d0f73d23..9a349a53 100644
--- a/test_plans/index.rst
+++ b/test_plans/index.rst
@@ -228,6 +228,7 @@ The following are the test plans for the DPDK DTS automated test system.
     vhost_virtio_pmd_interrupt_test_plan
     vhost_virtio_user_interrupt_test_plan
     virtio_event_idx_interrupt_test_plan
+    virtio_event_idx_interrupt_cbdma_test_plan
     virtio_ipsec_cryptodev_func_test_plan
     virtio_perf_cryptodev_func_test_plan
     virtio_smoke_test_plan

From patchwork Tue May 10 03:22:20 2022
X-Patchwork-Submitter: "Ling, WeiX"
X-Patchwork-Id: 110960
From: Wei Ling
To: dts@dpdk.org
Cc: Wei Ling
Subject: [dts][PATCH V2 2/3] test_plans/virtio_event_idx_interrupt_cbdma_test_plan: add virtio_event_idx_interrupt_cbdma testplan
Date: Mon, 9 May 2022 23:22:20 -0400
Message-Id: <20220510032220.343716-1-weix.ling@intel.com>

Add the new test plan test_plans/virtio_event_idx_interrupt_cbdma_test_plan.

Signed-off-by: Wei Ling
---
 ...io_event_idx_interrupt_cbdma_test_plan.rst | 207 ++++++++++++++++++
 1 file changed, 207 insertions(+)
 create mode 100644 test_plans/virtio_event_idx_interrupt_cbdma_test_plan.rst

diff --git a/test_plans/virtio_event_idx_interrupt_cbdma_test_plan.rst b/test_plans/virtio_event_idx_interrupt_cbdma_test_plan.rst
new file mode 100644
index 00000000..7c470953
--- /dev/null
+++ b/test_plans/virtio_event_idx_interrupt_cbdma_test_plan.rst
@@ -0,0 +1,207 @@
+.. Copyright (c) <2022>, Intel Corporation
+   All rights reserved.
+
+   Redistribution and use in source and binary forms, with or without
+   modification, are permitted provided that the following conditions
+   are met:
+
+   - Redistributions of source code must retain the above copyright
+     notice, this list of conditions and the following disclaimer.
+
+   - Redistributions in binary form must reproduce the above copyright
+     notice, this list of conditions and the following disclaimer in
+     the documentation and/or other materials provided with the
+     distribution.
+
+   - Neither the name of Intel Corporation nor the names of its
+     contributors may be used to endorse or promote products derived
+     from this software without specific prior written permission.
+
+   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
+   FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
+   COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
+   INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+   (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+   SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+   HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
+   STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+   ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
+   OF THE POSSIBILITY OF SUCH DAMAGE.
+
+====================================================
+virtio event idx interrupt mode with cbdma test plan
+====================================================
+
+Description
+===========
+
+The virtio event idx feature suppresses interrupts to improve performance,
+so interrupt counts need to be compared with and without event idx enabled.
+This test plan tests the virtio event idx interrupt with CBDMA enabled, and
+also covers the driver reload test.
+
+.. note::
+
+   1. For packed virtqueue virtio-net tests, QEMU version > 4.2.0 and a VM
+      kernel version > 5.1 are required; packed ring multi-queue does not
+      yet support reconnect in QEMU.
+   2. For split virtqueue virtio-net multi-queue server mode tests, QEMU
+      version >= 5.2.0 is required, due to a reconnect issue with
+      multi-queue tests in older QEMU versions.
+   3. A local DPDK patch for the vhost PMD is needed when testing the
+      vhost asynchronous data path with testpmd.
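+
+As background for what is being measured: with event idx negotiated, the
+driver publishes the used-ring index it has processed up to, and the device
+only raises an interrupt when its new used index passes that published
+value. A minimal illustrative sketch of the device-side check, modeled on
+the virtio specification's ``vring_need_event()`` (Python is used here only
+for illustration; 16-bit index wrap-around is handled by masking)::
+
+    def vring_need_event(event_idx, new_idx, old_idx):
+        # Ring indices are 16-bit and wrap, so do the math modulo 2**16.
+        mask = 0xFFFF
+        return ((new_idx - event_idx - 1) & mask) < ((new_idx - old_idx) & mask)
+
+Fewer cases where this check is true means fewer interrupts, which is what
+the interrupt counters checked in the test cases below should reflect.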
+
+Test flow
+=========
+
+TG --> NIC --> Vhost-user --> Virtio-net
+
+Test Case1: Split ring virtio-pci driver reload test with CBDMA enabled
+=======================================================================
+
+1. Bind one NIC port and one CBDMA channel to vfio-pci (see the binding
+   example after this step), then launch the vhost sample with the
+   commands below::
+
+    rm -rf vhost-net*
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xF0000000 -n 4 --file-prefix=vhost \
+    --vdev 'net_vhost,iface=vhost-net,queues=1,dmas=[txq0@00:04.0]' \
+    -- -i --nb-cores=1 --txd=1024 --rxd=1024
+    testpmd> start
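+
+   A typical binding sequence looks like the following (the PCI addresses
+   are placeholders; use the NIC port and CBDMA channel addresses reported
+   by ``./usertools/dpdk-devbind.py --status`` on your system)::
+
+    modprobe vfio-pci
+    ./usertools/dpdk-devbind.py --force --bind=vfio-pci 0000:18:00.0 0000:00:04.0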
+
+2. Launch the VM::
+
+    taskset -c 32-33 \
+    qemu-system-x86_64 -name us-vhost-vm1 \
+    -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
+    -smp cores=2,sockets=1 -drive file=/home/osimg/ubuntu2004_2.img \
+    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,macaddr=00:00:00:08:e8:aa,addr=1f -net user,hostfwd=tcp:127.0.0.1:6004-:22 \
+    -chardev socket,id=char0,path=./vhost-net -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
+    -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=on,csum=on,gso=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on \
+    -vnc :12 -daemonize
+
+3. On VM1, set the virtio device IP, send 10M packets from the packet
+   generator to the NIC, then check that the virtio device can receive
+   packets::
+
+    ifconfig [ens3] 1.1.1.2    # [ens3] is the name of the virtio-net device
+    tcpdump -i [ens3]
+
+4. Reload the virtio-net driver with the commands below::
+
+    ifconfig [ens3] down
+    ./usertools/dpdk-devbind.py -u [00:03.0]             # [00:03.0] is the PCI address of the virtio-net device
+    ./usertools/dpdk-devbind.py -b virtio-pci [00:03.0]
+
+5. Check that the virtio device can receive packets again::
+
+    ifconfig [ens3] 1.1.1.2
+    tcpdump -i [ens3]
+
+6. Rerun steps 4 and 5 100 times to verify that event idx still works
+   after driver reload.
+
+Test Case2: Wake up split ring virtio-net cores with event idx interrupt mode and cbdma enabled 16 queues test
+==============================================================================================================
+
+1. Bind one NIC port and 16 CBDMA channels to vfio-pci, then launch the
+   vhost sample with the commands below::
+
+    rm -rf vhost-net*
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-17 -n 4 --file-prefix=vhost \
+    --vdev 'net_vhost,iface=vhost-net,queues=16,client=1,dmas=[txq0@80:04.0;txq1@80:04.1;txq2@80:04.2;txq3@80:04.3;txq4@80:04.4;txq5@80:04.5;txq6@80:04.6;txq7@80:04.7;txq8@00:04.0;txq9@00:04.1;txq10@00:04.2;txq11@00:04.3;txq12@00:04.4;txq13@00:04.5;txq14@00:04.6;txq15@00:04.7]' \
+    -- -i --nb-cores=16 --txd=1024 --rxd=1024 --rxq=16 --txq=16
+    testpmd> start
+
+2. Launch the VM::
+
+    taskset -c 32-33 \
+    qemu-system-x86_64 -name us-vhost-vm1 \
+    -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
+    -smp cores=16,sockets=1 -drive file=/home/osimg/ubuntu2004_2.img \
+    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,macaddr=00:00:00:08:e8:aa,addr=1f -net user,hostfwd=tcp:127.0.0.1:6004-:22 \
+    -chardev socket,id=char0,path=./vhost-net,server -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce,queues=16 \
+    -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=on,mq=on,vectors=40,csum=on,gso=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on \
+    -vnc :12 -daemonize
+
+3. On VM1, set the virtio device IP address and enable virtio-net with 16
+   queues::
+
+    ifconfig [ens3] 1.1.1.2    # [ens3] is the name of the virtio-net device
+    ethtool -L [ens3] combined 16
+
+4. Send 10M packets with different IP addresses from the packet generator
+   to the NIC, then check the virtio-net interrupt counts with the command
+   below in the VM (see also the note after this step)::
+
+    cat /proc/interrupts
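+
+   Assuming standard Linux virtio-net interrupt naming (MSI-X vector names
+   like ``virtio0-input.N``/``virtio0-output.N``, one pair per queue), the
+   per-queue counters can be watched with, for example::
+
+    watch -n 1 "grep virtio /proc/interrupts"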
+
+5. After a two-hour stress test, stop and restart testpmd, then check that
+   each queue has new packets coming in::
+
+    testpmd> stop
+    testpmd> start
+    testpmd> stop
+
+Test Case3: Packed ring virtio-pci driver reload test with CBDMA enabled
+========================================================================
+
+1. Bind one NIC port and one CBDMA channel to vfio-pci, then launch the
+   vhost sample with the commands below::
+
+    rm -rf vhost-net*
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xF0000000 -n 4 --file-prefix=vhost \
+    --vdev 'net_vhost,iface=vhost-net,queues=1,dmas=[txq0@00:04.0]' \
+    -- -i --nb-cores=1 --txd=1024 --rxd=1024
+    testpmd> start
+
+2. Launch the VM::
+
+    taskset -c 32-33 \
+    qemu-system-x86_64 -name us-vhost-vm1 \
+    -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
+    -smp cores=2,sockets=1 -drive file=/home/osimg/ubuntu2004_2.img \
+    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,macaddr=00:00:00:08:e8:aa,addr=1f -net user,hostfwd=tcp:127.0.0.1:6004-:22 \
+    -chardev socket,id=char0,path=./vhost-net -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
+    -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=on,csum=on,gso=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on \
+    -vnc :12 -daemonize
+
+3. On VM1, set the virtio device IP, send 10M packets from the packet
+   generator to the NIC, then check that the virtio device can receive
+   packets::
+
+    ifconfig [ens3] 1.1.1.2    # [ens3] is the name of the virtio-net device
+    tcpdump -i [ens3]
+
+4. Reload the virtio-net driver with the commands below::
+
+    ifconfig [ens3] down
+    ./usertools/dpdk-devbind.py -u [00:03.0]             # [00:03.0] is the PCI address of the virtio-net device
+    ./usertools/dpdk-devbind.py -b virtio-pci [00:03.0]
+
+5. Check that the virtio device can receive packets again::
+
+    ifconfig [ens3] 1.1.1.2
+    tcpdump -i [ens3]
+
+6. Rerun steps 4 and 5 100 times to verify that event idx still works
+   after driver reload.
+
+Test Case4: Wake up packed ring virtio-net cores with event idx interrupt mode and cbdma enabled 16 queues test
+===============================================================================================================
+
+1. Bind one NIC port and 16 CBDMA channels to vfio-pci, then launch the
+   vhost sample with the commands below::
+
+    rm -rf vhost-net*
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-17 -n 4 --file-prefix=vhost \
+    --vdev 'net_vhost,iface=vhost-net,queues=16,client=1,dmas=[txq0@80:04.0;txq1@80:04.1;txq2@80:04.2;txq3@80:04.3;txq4@80:04.4;txq5@80:04.5;txq6@80:04.6;txq7@80:04.7;txq8@00:04.0;txq9@00:04.1;txq10@00:04.2;txq11@00:04.3;txq12@00:04.4;txq13@00:04.5;txq14@00:04.6;txq15@00:04.7]' \
+    -- -i --nb-cores=16 --txd=1024 --rxd=1024 --rxq=16 --txq=16
+    testpmd> start
+
+2. Launch the VM::
+
+    taskset -c 32-33 \
+    qemu-system-x86_64 -name us-vhost-vm1 \
+    -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
+    -smp cores=16,sockets=1 -drive file=/home/osimg/ubuntu2004_2.img \
+    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,macaddr=00:00:00:08:e8:aa,addr=1f -net user,hostfwd=tcp:127.0.0.1:6004-:22 \
+    -chardev socket,id=char0,path=./vhost-net,server -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce,queues=16 \
+    -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=on,mq=on,vectors=40,csum=on,gso=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on \
+    -vnc :12 -daemonize
+
+3. On VM1, set the virtio device IP address and enable virtio-net with 16
+   queues::
+
+    ifconfig [ens3] 1.1.1.2    # [ens3] is the name of the virtio-net device
+    ethtool -L [ens3] combined 16
+
+4. Send 10M packets with different IP addresses from the packet generator
+   to the NIC, then check the virtio-net interrupt counts with the command
+   below in the VM (as in Test Case2, step 4)::
+
+    cat /proc/interrupts
+
+5. After a two-hour stress test, stop and restart testpmd, then check that
+   each queue has new packets coming in::
+
+    testpmd> stop
+    testpmd> start
+    testpmd> stop

From patchwork Tue May 10 03:22:30 2022
X-Patchwork-Submitter: "Ling, WeiX"
X-Patchwork-Id: 110961
From: Wei Ling
To: dts@dpdk.org
Cc: Wei Ling
Subject: [dts][PATCH V2 3/3] tests/virtio_event_idx_interrupt_cbdma: add virtio_event_idx_interrupt_cbdma testsuite
Date: Mon, 9 May 2022 23:22:30 -0400
Message-Id: <20220510032230.343837-1-weix.ling@intel.com>

Add the new test suite tests/TestSuite_virtio_event_idx_interrupt_cbdma.py.

Signed-off-by: Wei Ling
Acked-by: Xingguang He
Tested-by: Chenyu Huang
---
 ...tSuite_virtio_event_idx_interrupt_cbdma.py | 446 ++++++++++++++++++
 1 file changed, 446 insertions(+)
 create mode 100644 tests/TestSuite_virtio_event_idx_interrupt_cbdma.py

diff --git a/tests/TestSuite_virtio_event_idx_interrupt_cbdma.py b/tests/TestSuite_virtio_event_idx_interrupt_cbdma.py
new file mode 100644
index 00000000..3f8008bd
--- /dev/null
+++ b/tests/TestSuite_virtio_event_idx_interrupt_cbdma.py
@@ -0,0 +1,446 @@
+# BSD LICENSE
+#
+# Copyright(c) <2022> Intel Corporation.
+# All rights reserved.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions
+# are met:
+#
+#   * Redistributions of source code must retain the above copyright
+#     notice, this list of conditions and the following disclaimer.
+#   * Redistributions in binary form must reproduce the above copyright
+#     notice, this list of conditions and the following disclaimer in
+#     the documentation and/or other materials provided with the
+#     distribution.
+#   * Neither the name of Intel Corporation nor the names of its
+#     contributors may be used to endorse or promote products derived
+#     from this software without specific prior written permission.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+"""
+DPDK Test suite.
+Test virtio event idx interrupt with CBDMA enabled.
+"""
+
+import _thread
+import re
+import time
+
+from framework.pktgen import PacketGeneratorHelper
+from framework.pmd_output import PmdOutput
+from framework.test_case import TestCase
+from framework.virt_common import VM
+
+
+class TestVirtioIdxInterruptCbdma(TestCase):
+    def set_up_all(self):
+        """
+        Run at the start of each test suite.
+        """
+        self.queues = 1
+        self.nb_cores = 1
+        self.dut_ports = self.dut.get_ports()
+        self.verify(len(self.dut_ports) >= 1, "Insufficient ports for testing")
+        self.ports_socket = self.dut.get_numa_id(self.dut_ports[0])
+        self.core_list = self.dut.get_core_list("all", socket=self.ports_socket)
+        self.core_list_vhost = self.core_list[0:17]
+        self.cores_num = len(
+            [n for n in self.dut.cores if int(n["socket"]) == self.ports_socket]
+        )
+        self.dst_mac = self.dut.get_mac_address(self.dut_ports[0])
+        self.base_dir = self.dut.base_dir.replace("~", "/root")
+        self.pf_pci = self.dut.ports_info[0]["pci"]
+        self.out_path = "/tmp"
+        out = self.tester.send_expect("ls -d %s" % self.out_path, "# ")
+        if "No such file or directory" in out:
+            self.tester.send_expect("mkdir -p %s" % self.out_path, "# ")
+        # create an instance to set stream field setting
+        self.pktgen_helper = PacketGeneratorHelper()
+        self.app_testpmd_path = self.dut.apps_name["test-pmd"]
+        self.testpmd_name = self.app_testpmd_path.split("/")[-1]
+        self.vhost_user = self.dut.new_session(suite="vhost-user")
+        self.vhost_pmd = PmdOutput(self.dut, self.vhost_user)
+
+    def set_up(self):
+        """
+        Run before each test case.
+        """
+        # Clean the execution environment
+        self.flag = None
+        self.dut.send_expect("killall -s INT %s" % self.testpmd_name, "#")
+        self.dut.send_expect("killall -s INT qemu-system-x86_64", "#")
+        self.dut.send_expect("rm -rf %s/vhost-net*" % self.base_dir, "#")
+        self.vhost = self.dut.new_session(suite="vhost")
+
+    def get_core_mask(self):
+        self.core_config = "1S/%dC/1T" % (self.nb_cores + 1)
+        self.verify(
+            self.cores_num >= (self.nb_cores + 1),
+            "There are not enough cores to test this case %s" % self.running_case,
+        )
+        self.core_list = self.dut.get_core_list(self.core_config)
+
+    def get_cbdma_ports_info_and_bind_to_dpdk(self, cbdma_num, allow_diff_socket=False):
+        """
+        Get all CBDMA ports and bind them to DPDK.
+        """
+        self.all_cbdma_list = []
+        self.cbdma_list = []
+        self.cbdma_str = ""
+        out = self.dut.send_expect(
+            "./usertools/dpdk-devbind.py --status-dev dma", "# ", 30
+        )
+        device_info = out.split("\n")
+        for device in device_info:
+            pci_info = re.search("\s*(0000:\S*:\d*.\d*)", device)
+            if pci_info is not None:
+                dev_info = pci_info.group(1)
+                # Derive the NUMA socket of the ioat/CBDMA device from its PCI
+                # bus number; by default, only add devices that are on the
+                # same socket as the NIC.
+                bus = int(dev_info[5:7], base=16)
+                if bus >= 128:
+                    cur_socket = 1
+                else:
+                    cur_socket = 0
+                if allow_diff_socket:
+                    self.all_cbdma_list.append(pci_info.group(1))
+                else:
+                    if self.ports_socket == cur_socket:
+                        self.all_cbdma_list.append(pci_info.group(1))
+        self.verify(
+            len(self.all_cbdma_list) >= cbdma_num, "There are not enough CBDMA devices"
+        )
+        self.cbdma_list = self.all_cbdma_list[0:cbdma_num]
+        self.cbdma_str = " ".join(self.cbdma_list)
+        self.dut.send_expect(
+            "./usertools/dpdk-devbind.py --force --bind=%s %s"
+            % (self.drivername, self.cbdma_str),
+            "# ",
+            60,
+        )
+
+    def bind_cbdma_device_to_kernel(self):
+        self.dut.send_expect("modprobe ioatdma", "# ")
+        self.dut.send_expect(
+            "./usertools/dpdk-devbind.py -u %s" % self.cbdma_str, "# ", 30
+        )
+        self.dut.send_expect(
+            "./usertools/dpdk-devbind.py --force --bind=ioatdma %s" % self.cbdma_str,
+            "# ",
+            60,
+        )
+
+    def start_vms(self, packed=False, mode=False, set_target=False, bind_dev=False):
+        """
+        Start the QEMU VM.
+        """
+        self.vm = VM(self.dut, "vm0", "vhost_sample")
+        vm_params = {}
+        vm_params["driver"] = "vhost-user"
+        if mode:
+            vm_params["opt_path"] = "%s/vhost-net,%s" % (self.base_dir, mode)
+        else:
+            vm_params["opt_path"] = "%s/vhost-net" % self.base_dir
+        vm_params["opt_mac"] = "00:11:22:33:44:55"
+        opt_args = (
+            "mrg_rxbuf=on,csum=on,gso=on,guest_csum=on,host_tso4=on,guest_tso4=on"
+        )
+        if self.queues > 1:
+            vm_params["opt_queue"] = self.queues
+            # MSI-X vectors: one per Rx queue and one per Tx queue, plus two
+            # for config and control.
+            opt_args = opt_args + ",mq=on,vectors=%d" % (2 * self.queues + 2)
+        if packed:
+            opt_args = opt_args + ",packed=on"
+        vm_params["opt_settings"] = opt_args
+        self.vm.set_vm_device(**vm_params)
+        try:
+            self.vm_dut = self.vm.start(set_target=set_target, bind_dev=bind_dev)
+            if self.vm_dut is None:
+                raise Exception("Set up VM ENV failed")
+        except Exception as e:
+            self.logger.error("ERROR: Failure for %s" % str(e))
+
+    def config_virito_net_in_vm(self):
+        """
+        Configure the IP for the virtio-net device in the VM and enable
+        multi-queue if needed.
+        """
+        self.vm_intf = self.vm_dut.ports_info[0]["intf"]
+        self.vm_dut.send_expect("ifconfig %s down" % self.vm_intf, "#")
+        out = self.vm_dut.send_expect("ifconfig", "#")
+        self.verify(self.vm_intf not in out, "failed to bring the virtio-net interface down")
+        self.vm_dut.send_expect("ifconfig %s up" % self.vm_intf, "#")
+        if self.queues > 1:
+            self.vm_dut.send_expect(
+                "ethtool -L %s combined %d" % (self.vm_intf, self.queues), "#", 20
+            )
+
+    def start_to_send_packets(self, delay):
+        """
+        Start sending packets from the traffic generator.
+        """
+        tgen_input = []
+        port = self.tester.get_local_port(self.dut_ports[0])
+        self.tester.scapy_append(
+            'a=[Ether(dst="%s")/IP(src="0.240.74.101",proto=255)/UDP()/("X"*18)]'
+            % (self.dst_mac)
+        )
+        self.tester.scapy_append('wrpcap("%s/interrupt.pcap", a)' % self.out_path)
+        self.tester.scapy_execute()
+        tgen_input.append((port, port, "%s/interrupt.pcap" % self.out_path))
+        self.tester.pktgen.clear_streams()
+        fields_config = {
+            "ip": {
+                "dst": {"action": "random"},
+            },
+        }
+        streams = self.pktgen_helper.prepare_stream_from_tginput(
+            tgen_input, 1, fields_config, self.tester.pktgen
+        )
+        traffic_opt = {"delay": 5, "duration": delay, "rate": 1}
+        _, self.flag = self.tester.pktgen.measure_throughput(
+            stream_ids=streams, options=traffic_opt
+        )
+
+    def check_packets_after_reload_virtio_device(self, reload_times):
+        """
+        Start sending packets and check that the virtio-net device still
+        receives them after each driver reload.
+        """
+        # The IXIA sending duration equals reload_times * 20 seconds
+        start_time = time.time()
+        _thread.start_new_thread(self.start_to_send_packets, (reload_times * 20,))
+        # wait for IXIA to begin sending packets
+        time.sleep(10)
+        self.vm_pci = self.vm_dut.ports_info[0]["pci"]
+        # reload the virtio device and check that virtio-net can still receive packets
+        for i in range(reload_times + 1):
+            if time.time() - start_time > reload_times * 30:
+                self.logger.error(
+                    "IXIA has stopped sending packets; please increase the IXIA send duration"
+                )
+                self.logger.info("The virtio device has reloaded %d times" % i)
+                return False
+            self.logger.info("The virtio net device reloaded %d times" % i)
+            self.vm_dut.send_expect(
+                "tcpdump -n -vv -i %s" % self.vm_intf, "tcpdump", 30
+            )
+            time.sleep(5)
+            out = self.vm_dut.get_session_output(timeout=3)
+            self.vm_dut.send_expect("^c", "#", 30)
+            self.verify(
+                "ip-proto-255" in out,
+                "The virtio device can not receive packets after reloading %d times" % i,
+            )
+            time.sleep(2)
+            # reload the virtio device
+            self.vm_dut.restore_interfaces()
+            time.sleep(3)
+            self.vm_dut.send_expect("ifconfig %s down" % self.vm_intf, "#")
+            self.vm_dut.send_expect("ifconfig %s up" % self.vm_intf, "#")
+        # wait for the IXIA thread to exit
+        self.logger.info("wait for the thread of ixia to exit")
+        while 1:
+            if self.flag is not None:
+                break
+            time.sleep(5)
+        return True
+
+    def check_each_queue_has_packets_info_on_vhost(self):
+        """
+        Check that each queue has received packets on the vhost side.
+        """
+        out = self.vhost_pmd.execute_cmd("stop")
+        print(out)
+        for queue_index in range(0, self.queues):
+            queue = re.search("Port= 0/Queue=\s*%d" % queue_index, out)
+            queue = queue.group()
+            index = out.find(queue)
+            rx = re.search("RX-packets:\s*(\d*)", out[index:])
+            tx = re.search("TX-packets:\s*(\d*)", out[index:])
+            rx_packets = int(rx.group(1))
+            tx_packets = int(tx.group(1))
+            self.verify(
+                rx_packets > 0 and tx_packets > 0,
+                "The queue %d rx-packets or tx-packets is 0: " % queue_index
+                + "rx-packets: %d, tx-packets: %d" % (rx_packets, tx_packets),
+            )
+        self.vhost_pmd.execute_cmd("clear port stats all")
+
+    def stop_all_apps(self):
+        """
+        Close the VM and quit vhost testpmd.
+        """
+        self.vm.stop()
+        self.vhost.send_expect("quit", "#", 20)
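+
+    # Note on the vhost testpmd parameters built in the test cases below:
+    # ``--lcore-dma`` maps forwarding lcores to CBDMA devices, e.g.
+    # ``--lcore-dma=[lcore2@0000:00:04.0]``, and the ``dmas=[txq0;...]``
+    # vdev argument selects which virtqueues use the asynchronous DMA
+    # data path.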
+
+    def test_perf_split_ring_virito_pci_driver_reload_with_cbdma_enabled(self):
+        """
+        Test Case1: Split ring virtio-pci driver reload test with CBDMA enabled
+        """
+        self.get_cbdma_ports_info_and_bind_to_dpdk(1)
+        lcore_dma = "[lcore{}@{}]".format(self.core_list_vhost[1], self.cbdma_list[0])
+        vhost_param = "--nb-cores=1 --txd=1024 --rxd=1024 --lcore-dma={}".format(
+            lcore_dma
+        )
+        vhost_eal_param = "--vdev 'net_vhost,iface=vhost-net,queues=1,dmas=[txq0]'"
+        ports = self.cbdma_list
+        ports.append(self.dut.ports_info[0]["pci"])
+        self.vhost_pmd.start_testpmd(
+            cores=self.core_list_vhost,
+            ports=ports,
+            prefix="vhost",
+            eal_param=vhost_eal_param,
+            param=vhost_param,
+        )
+        self.vhost_pmd.execute_cmd("start")
+        self.queues = 1
+        self.start_vms(packed=False)
+        self.config_virito_net_in_vm()
+        res = self.check_packets_after_reload_virtio_device(reload_times=100)
+        self.verify(res is True, "Should increase the IXIA send duration")
+        self.stop_all_apps()
+
+    def test_perf_wake_up_split_ring_virtio_net_cores_with_event_idx_interrupt_mode_and_cbdma_enabled_16queue(
+        self,
+    ):
+        """
+        Test Case2: Wake up split ring virtio-net cores with event idx interrupt mode and cbdma enabled 16 queues test
+        """
+        self.get_cbdma_ports_info_and_bind_to_dpdk(16, allow_diff_socket=True)
+        lcore_dma = (
+            f"[lcore{self.core_list_vhost[1]}@{self.cbdma_list[0]},"
+            f"lcore{self.core_list[2]}@{self.cbdma_list[0]},"
+            f"lcore{self.core_list[3]}@{self.cbdma_list[1]},"
+            f"lcore{self.core_list[4]}@{self.cbdma_list[2]},"
+            f"lcore{self.core_list[5]}@{self.cbdma_list[3]},"
+            f"lcore{self.core_list[6]}@{self.cbdma_list[4]},"
+            f"lcore{self.core_list[7]}@{self.cbdma_list[5]},"
+            f"lcore{self.core_list[8]}@{self.cbdma_list[6]},"
+            f"lcore{self.core_list[9]}@{self.cbdma_list[7]},"
+            f"lcore{self.core_list[10]}@{self.cbdma_list[8]},"
+            f"lcore{self.core_list[11]}@{self.cbdma_list[9]},"
+            f"lcore{self.core_list[12]}@{self.cbdma_list[10]},"
+            f"lcore{self.core_list[13]}@{self.cbdma_list[11]},"
+            f"lcore{self.core_list[14]}@{self.cbdma_list[12]},"
+            f"lcore{self.core_list[15]}@{self.cbdma_list[13]},"
+            f"lcore{self.core_list[16]}@{self.cbdma_list[14]},"
+            f"lcore{self.core_list[17]}@{self.cbdma_list[15]}]"
+        )
+        vhost_param = "--nb-cores=16 --txd=1024 --rxd=1024 --rxq=16 --txq=16 --lcore-dma={}".format(
+            lcore_dma
+        )
+        vhost_eal_param = "--vdev 'net_vhost,iface=vhost-net,queues=16,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7;txq8;txq9;txq10;txq11;txq12;txq13;txq14;txq15]'"
+        ports = self.cbdma_list
+        ports.append(self.dut.ports_info[0]["pci"])
+        self.vhost_pmd.start_testpmd(
+            cores=self.core_list_vhost,
+            ports=ports,
+            prefix="vhost",
+            eal_param=vhost_eal_param,
+            param=vhost_param,
+        )
+        self.vhost_pmd.execute_cmd("start")
+        self.queues = 16
+        self.start_vms(packed=False, mode="server")
+        self.config_virito_net_in_vm()
+        self.start_to_send_packets(delay=15)
+        self.check_each_queue_has_packets_info_on_vhost()
+        self.stop_all_apps()
+
+    def test_perf_packed_ring_virito_pci_driver_reload_with_cbdma_enabled(self):
+        """
+        Test Case3: Packed ring virtio-pci driver reload test with CBDMA enabled
+        """
+        self.get_cbdma_ports_info_and_bind_to_dpdk(1)
+        lcore_dma = "[lcore{}@{}]".format(self.core_list_vhost[1], self.cbdma_list[0])
+        vhost_param = "--nb-cores=1 --txd=1024 --rxd=1024 --lcore-dma={}".format(
+            lcore_dma
+        )
+        vhost_eal_param = "--vdev 'net_vhost,iface=vhost-net,queues=1,dmas=[txq0]'"
+        ports = self.cbdma_list
+        ports.append(self.dut.ports_info[0]["pci"])
+        self.vhost_pmd.start_testpmd(
+            cores=self.core_list_vhost,
+            ports=ports,
+            prefix="vhost",
+            eal_param=vhost_eal_param,
+            param=vhost_param,
+        )
+        self.vhost_pmd.execute_cmd("start")
+        self.queues = 1
+        self.start_vms(packed=True)
+        self.config_virito_net_in_vm()
+        res = self.check_packets_after_reload_virtio_device(reload_times=100)
+        self.verify(res is True, "Should increase the IXIA send duration")
+        self.stop_all_apps()
+
+    def test_perf_wake_up_packed_ring_virtio_net_cores_with_event_idx_interrupt_mode_and_cbdma_enabled_16queue(
+        self,
+    ):
+        """
+        Test Case4: Wake up packed ring virtio-net cores with event idx interrupt mode and cbdma enabled 16 queues test
+        """
+        self.get_cbdma_ports_info_and_bind_to_dpdk(16, allow_diff_socket=True)
+        lcore_dma = (
+            f"[lcore{self.core_list_vhost[1]}@{self.cbdma_list[0]},"
+            f"lcore{self.core_list[2]}@{self.cbdma_list[0]},"
+            f"lcore{self.core_list[3]}@{self.cbdma_list[1]},"
+            f"lcore{self.core_list[4]}@{self.cbdma_list[2]},"
+            f"lcore{self.core_list[5]}@{self.cbdma_list[3]},"
+            f"lcore{self.core_list[6]}@{self.cbdma_list[4]},"
+            f"lcore{self.core_list[7]}@{self.cbdma_list[5]},"
+            f"lcore{self.core_list[8]}@{self.cbdma_list[6]},"
+            f"lcore{self.core_list[9]}@{self.cbdma_list[7]},"
+            f"lcore{self.core_list[10]}@{self.cbdma_list[8]},"
+            f"lcore{self.core_list[11]}@{self.cbdma_list[9]},"
+            f"lcore{self.core_list[12]}@{self.cbdma_list[10]},"
+            f"lcore{self.core_list[13]}@{self.cbdma_list[11]},"
+            f"lcore{self.core_list[14]}@{self.cbdma_list[12]},"
+            f"lcore{self.core_list[15]}@{self.cbdma_list[13]},"
+            f"lcore{self.core_list[16]}@{self.cbdma_list[14]},"
+            f"lcore{self.core_list[17]}@{self.cbdma_list[15]}]"
+        )
+        vhost_param = "--nb-cores=16 --txd=1024 --rxd=1024 --rxq=16 --txq=16 --lcore-dma={}".format(
+            lcore_dma
+        )
+        vhost_eal_param = "--vdev 'net_vhost,iface=vhost-net,queues=16,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7;txq8;txq9;txq10;txq11;txq12;txq13;txq14;txq15]'"
+        ports = self.cbdma_list
+        ports.append(self.dut.ports_info[0]["pci"])
+        self.vhost_pmd.start_testpmd(
+            cores=self.core_list_vhost,
+            ports=ports,
+            prefix="vhost",
+            eal_param=vhost_eal_param,
+            param=vhost_param,
+        )
+        self.vhost_pmd.execute_cmd("start")
+        self.queues = 16
+        self.start_vms(packed=True, mode="server")
+        self.config_virito_net_in_vm()
+        self.start_to_send_packets(delay=15)
+        self.check_each_queue_has_packets_info_on_vhost()
+        self.stop_all_apps()
+
+    def tear_down(self):
+        """
+        Run after each test case.
+        """
+        self.dut.send_expect("killall -s INT %s" % self.testpmd_name, "#")
+        self.dut.send_expect("killall -s INT qemu-system-x86_64", "#")
+        self.bind_cbdma_device_to_kernel()
+
+    def tear_down_all(self):
+        """
+        Run after each test suite.
+        """
+        self.dut.close_session(self.vhost)