From patchwork Tue Nov 22 09:11:28 2022
X-Patchwork-Submitter: "Ling, WeiX"
X-Patchwork-Id: 120084
From: Wei Ling
To: dts@dpdk.org
Cc: Wei Ling
Subject: [dts][PATCH V4 1/3] test_plans/index: add vhost_event_idx_interrupt_cbdma_test_plan
Date: Tue, 22 Nov 2022 17:11:28 +0800
Message-Id: <20221122091128.2899305-1-weix.ling@intel.com>

Add vhost_event_idx_interrupt_cbdma_test_plan to index.rst.

Signed-off-by: Wei Ling
---
 test_plans/index.rst | 1 +
 1 file changed, 1 insertion(+)

diff --git a/test_plans/index.rst b/test_plans/index.rst
index 68739dd8..857e60cb 100644
--- a/test_plans/index.rst
+++ b/test_plans/index.rst
@@ -207,6 +207,7 @@ The following are the test plans for the DPDK DTS automated test system.
    malicious_driver_event_indication_test_plan
    vhost_event_idx_interrupt_test_plan
+   vhost_event_idx_interrupt_cbdma_test_plan
    vhost_virtio_pmd_interrupt_test_plan
    vhost_virtio_pmd_interrupt_cbdma_test_plan
    vhost_virtio_user_interrupt_test_plan

From patchwork Tue Nov 22 09:11:39 2022
X-Patchwork-Submitter: "Ling, WeiX"
X-Patchwork-Id: 120086
From: Wei Ling
To: dts@dpdk.org
Cc: Wei Ling
Subject: [dts][PATCH V4 2/3] test_plans/vhost_event_idx_interrupt_cbdma_test_plan: add new testplan
Date: Tue, 22 Nov 2022 17:11:39 +0800
Message-Id: <20221122091139.2899365-1-weix.ling@intel.com>

Add vhost_event_idx_interrupt_cbdma_test_plan in test_plans to test
virtio enqueue with the split ring and packed ring paths and CBDMA.

Signed-off-by: Wei Ling
---
 ...st_event_idx_interrupt_cbdma_test_plan.rst | 274 ++++++++++++++++++
 1 file changed, 274 insertions(+)
 create mode 100644 test_plans/vhost_event_idx_interrupt_cbdma_test_plan.rst

diff --git a/test_plans/vhost_event_idx_interrupt_cbdma_test_plan.rst b/test_plans/vhost_event_idx_interrupt_cbdma_test_plan.rst
new file mode 100644
index 00000000..5b20f393
--- /dev/null
+++ b/test_plans/vhost_event_idx_interrupt_cbdma_test_plan.rst
@@ -0,0 +1,274 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+   Copyright(c) 2022 Intel Corporation
+
+====================================================
+vhost event idx interrupt mode with CBDMA test plan
+====================================================
+
+Description
+===========
+
+Vhost event idx interrupt mode is tested with the l3fwd-power sample and CBDMA
+channels. Send small packets from virtio-net to the vhost side and check that
+the vhost-user cores are waked up; after packet sending from the virtio side
+stops, the vhost-user cores should go back to sleep status.
+
+.. note::
+
+   1. For packed virtqueue virtio-net tests, qemu version > 4.2.0 and VM kernel version > 5.1 are required, and packed ring multi-queues do not support reconnect in qemu yet.
+   2. For split virtqueue virtio-net with multi-queues server mode tests, qemu version >= 5.2.0 is recommended, because qemu v4.2.0~v5.1.0 has a split ring multi-queues reconnection issue.
+   3. Kernel version > 4.8.0 is required. Most Linux distributions don't support vfio-noiommu mode by default, so testing this case needs a kernel rebuilt with vfio-noiommu enabled.
+   4. When DMA devices are bound to the vfio driver, VA mode is the default and recommended mode. For PA mode, page-by-page mapping may exceed the IOMMU's max capability, so it is better to use 1G guest hugepages.
+   5. A DPDK local patch for the vhost pmd is needed when testing the vhost asynchronous data path with testpmd.
+
+Prerequisites
+=============
+
+Topology
+--------
+Test flow: Virtio-net --> Vhost-user
+
+General set up
+--------------
+1. Compile DPDK::
+
+    # CC=gcc meson --werror -Denable_kmods=True -Dlibdir=lib -Dexamples=all --default-library=static <dpdk build dir>
+    # ninja -C <dpdk build dir> -j 110
+
+   For example::
+
+    CC=gcc meson --werror -Denable_kmods=True -Dlibdir=lib -Dexamples=all --default-library=static x86_64-native-linuxapp-gcc
+    ninja -C x86_64-native-linuxapp-gcc -j 110
+
+Test case
+=========
+
+Test Case 1: wake up split ring vhost-user cores with event idx interrupt mode and cbdma enabled 16 queues test
+---------------------------------------------------------------------------------------------------------------
+
+1. Bind 8 cbdma ports to the vfio-pci driver (a binding sketch follows this step), then launch the l3fwd-power example app in client mode::
+
+    ./x86_64-native-linuxapp-gcc/examples/dpdk-l3fwd-power -l 1-16 -n 4 --log-level=9 \
+    --vdev 'eth_vhost0,iface=./vhost-net0,queues=16,client=1,dmas=[rxq0@0000:80:04.0;rxq1@0000:80:04.1;rxq2@0000:80:04.2;rxq3@0000:80:04.3;rxq4@0000:80:04.4;rxq5@0000:80:04.5;rxq6@0000:80:04.6;rxq7@0000:80:04.7;rxq8@0000:80:04.0;rxq9@0000:80:04.1;rxq10@0000:80:04.2]' \
+    -- -p 0x1 --parse-ptype 1 \
+    --config "(0,0,1),(0,1,2),(0,2,3),(0,3,4),(0,4,5),(0,5,6),(0,6,7),(0,7,8),(0,8,9),(0,9,10),(0,10,11),(0,11,12),(0,12,13),(0,13,14),(0,14,15),(0,15,16)"
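+
+   The binding command itself is not shown in this plan; a minimal sketch
+   using DPDK's ``dpdk-devbind.py`` is given below. The PCI addresses are
+   copied from the ``dmas`` list above and will differ on other machines::
+
+    ./usertools/dpdk-devbind.py --force --bind=vfio-pci \
+    0000:80:04.0 0000:80:04.1 0000:80:04.2 0000:80:04.3 \
+    0000:80:04.4 0000:80:04.5 0000:80:04.6 0000:80:04.7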
+
+2. Launch VM1 with server mode::
+
+    taskset -c 17-25 qemu-system-x86_64 -name us-vhost-vm1 -enable-kvm -cpu host -smp 16 -m 4096 \
+    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
+    -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu22-04.img \
+    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
+    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
+    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \
+    -chardev socket,id=char0,path=./vhost-net0,server \
+    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=16 \
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,mq=on,vectors=40 -vnc :12
+
+3. Relaunch the l3fwd-power sample to bring the port up::
+
+    ./x86_64-native-linuxapp-gcc/examples/dpdk-l3fwd-power -l 1-16 -n 4 --log-level=9 \
+    --vdev 'eth_vhost0,iface=./vhost-net0,queues=16,client=1,dmas=[rxq0@0000:80:04.0;rxq1@0000:80:04.1;rxq2@0000:80:04.2;rxq3@0000:80:04.3;rxq4@0000:80:04.4;rxq5@0000:80:04.5;rxq6@0000:80:04.6;rxq7@0000:80:04.7;rxq8@0000:80:04.0;rxq9@0000:80:04.1;rxq10@0000:80:04.2]' \
+    -- -p 0x1 --parse-ptype 1 \
+    --config "(0,0,1),(0,1,2),(0,2,3),(0,3,4),(0,4,5),(0,5,6),(0,6,7),(0,7,8),(0,8,9),(0,9,10),(0,10,11),(0,11,12),(0,12,13),(0,13,14),(0,14,15),(0,15,16)"
+
+4. Set virtio-net with 16 queues and give virtio-net an IP address::
+
+    ethtool -L [ens3] combined 16 # [ens3] is the name of virtio-net
+    ifconfig [ens3] 1.1.1.1
+
+5. Send packets with different IPs from virtio-net; note that each vcpu must be bound to a different packet-sending process::
+
+    taskset -c 0 ping 1.1.1.2
+    taskset -c 1 ping 1.1.1.3
+    taskset -c 2 ping 1.1.1.4
+    taskset -c 3 ping 1.1.1.5
+    taskset -c 4 ping 1.1.1.6
+    taskset -c 5 ping 1.1.1.7
+    taskset -c 6 ping 1.1.1.8
+    taskset -c 7 ping 1.1.1.9
+    taskset -c 8 ping 1.1.1.10
+    taskset -c 9 ping 1.1.1.11
+    taskset -c 10 ping 1.1.1.12
+    taskset -c 11 ping 1.1.1.13
+    taskset -c 12 ping 1.1.1.14
+    taskset -c 13 ping 1.1.1.15
+    taskset -c 14 ping 1.1.1.16
+    taskset -c 15 ping 1.1.1.17
+
+6. Check in the l3fwd-power log that the vhost-related cores are waked up (per the --config above, queues 0-15 map to lcores 1-16), such as the following::
+
+    L3FWD_POWER: lcore 1 is waked up from rx interrupt on port 0 queue 0
+    ...
+    ...
+    L3FWD_POWER: lcore 16 is waked up from rx interrupt on port 0 queue 15
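+
+   After the pings are stopped, the vhost-user cores should go back to sleep
+   status, as stated in the Description. The sleep log wording below is taken
+   from the matching check in the companion test suite and may vary across
+   DPDK versions::
+
+    lcore 1 sleeps until interrupt triggers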
+
+Test Case 2: wake up split ring vhost-user cores by multi virtio-net in VMs with event idx interrupt mode and cbdma enabled test
+--------------------------------------------------------------------------------------------------------------------------------
+
+1. Bind 2 cbdma ports to the vfio-pci driver, then launch the l3fwd-power example app in client mode::
+
+    ./x86_64-native-linuxapp-gcc/examples/dpdk-l3fwd-power -l 1-2 -n 4 --log-level=9 \
+    --vdev 'eth_vhost0,iface=./vhost-net0,queues=1,client=1,dmas=[rxq0@0000:00:04.0]' \
+    --vdev 'eth_vhost1,iface=./vhost-net1,queues=1,client=1,dmas=[rxq0@0000:80:04.0]' \
+    -- -p 0x3 --parse-ptype 1 --config "(0,0,1),(1,0,2)"
+
+2. Launch VM1 and VM2 with server mode::
+
+    taskset -c 33 \
+    qemu-system-x86_64 -name us-vhost-vm1 \
+    -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
+    -smp cores=1,sockets=1 -drive file=/home/osimg/ubuntu22-04.img \
+    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \
+    -chardev socket,id=char0,path=./vhost-net0,server \
+    -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
+    -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,csum=on -vnc :10 -daemonize
+
+    taskset -c 34 \
+    qemu-system-x86_64 -name us-vhost-vm2 \
+    -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
+    -smp cores=1,sockets=1 -drive file=/home/osimg/ubuntu22-04-2.img \
+    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6004-:22 \
+    -chardev socket,id=char0,path=./vhost-net1,server \
+    -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
+    -device virtio-net-pci,mac=52:54:00:00:00:02,netdev=mynet1,csum=on -vnc :11 -daemonize
+
+3. Relaunch the l3fwd-power sample to bring the ports up::
+
+    ./x86_64-native-linuxapp-gcc/examples/dpdk-l3fwd-power -l 1-2 -n 4 --log-level=9 \
+    --vdev 'eth_vhost0,iface=./vhost-net0,queues=1,client=1,dmas=[rxq0@0000:00:04.0]' \
+    --vdev 'eth_vhost1,iface=./vhost-net1,queues=1,client=1,dmas=[rxq0@0000:80:04.0]' \
+    -- -p 0x3 --parse-ptype 1 --config "(0,0,1),(1,0,2)"
+
+4. On VM1, set an IP for the virtio device and send packets to vhost::
+
+    ifconfig [ens3] 1.1.1.2
+    #[ens3] is the virtual device name
+    ping 1.1.1.3
+    #send packets to vhost
+
+5. On VM2, also set an IP for the virtio device and send packets to vhost::
+
+    ifconfig [ens3] 1.1.1.4
+    #[ens3] is the virtual device name
+    ping 1.1.1.5
+    #send packets to vhost
+
+6. Check in the l3fwd-power log that the vhost-related cores are waked up.
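+
+   With the ``--config "(0,0,1),(1,0,2)"`` mapping used above, the expected
+   lines should look similar to the following (derived from the log format
+   shown in Test Case 1)::
+
+    L3FWD_POWER: lcore 1 is waked up from rx interrupt on port 0 queue 0
+    L3FWD_POWER: lcore 2 is waked up from rx interrupt on port 1 queue 0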
+
+Test Case 3: wake up packed ring vhost-user cores with event idx interrupt mode and cbdma enabled 16 queues test
+----------------------------------------------------------------------------------------------------------------
+
+1. Bind 8 cbdma ports to the vfio-pci driver, then launch the l3fwd-power example app in client mode::
+
+    ./x86_64-native-linuxapp-gcc/examples/dpdk-l3fwd-power -l 1-16 -n 4 --log-level=9 \
+    --vdev 'eth_vhost0,iface=./vhost-net0,queues=16,client=1,dmas=[rxq0@0000:80:04.0;rxq1@0000:80:04.1;rxq2@0000:80:04.2;rxq3@0000:80:04.3;rxq4@0000:80:04.4;rxq5@0000:80:04.5;rxq6@0000:80:04.6;rxq7@0000:80:04.7;rxq8@0000:80:04.0;rxq9@0000:80:04.1;rxq10@0000:80:04.2]' \
+    -- -p 0x1 --parse-ptype 1 \
+    --config "(0,0,1),(0,1,2),(0,2,3),(0,3,4),(0,4,5),(0,5,6),(0,6,7),(0,7,8),(0,8,9),(0,9,10),(0,10,11),(0,11,12),(0,12,13),(0,13,14),(0,14,15),(0,15,16)"
+
+2. Launch VM1 with server mode::
+
+    taskset -c 17-25 qemu-system-x86_64 -name us-vhost-vm1 -enable-kvm -cpu host -smp 16 -m 4096 \
+    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
+    -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu22-04.img \
+    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
+    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
+    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \
+    -chardev socket,id=char0,path=./vhost-net0,server \
+    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=16 \
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,mq=on,vectors=40,packed=on -vnc :12
+
+3. Relaunch the l3fwd-power sample to bring the port up::
+
+    ./x86_64-native-linuxapp-gcc/examples/dpdk-l3fwd-power -l 1-16 -n 4 --log-level=9 \
+    --vdev 'eth_vhost0,iface=./vhost-net0,queues=16,client=1,dmas=[rxq0@0000:80:04.0;rxq1@0000:80:04.1;rxq2@0000:80:04.2;rxq3@0000:80:04.3;rxq4@0000:80:04.4;rxq5@0000:80:04.5;rxq6@0000:80:04.6;rxq7@0000:80:04.7;rxq8@0000:80:04.0;rxq9@0000:80:04.1;rxq10@0000:80:04.2]' \
+    -- -p 0x1 --parse-ptype 1 \
+    --config "(0,0,1),(0,1,2),(0,2,3),(0,3,4),(0,4,5),(0,5,6),(0,6,7),(0,7,8),(0,8,9),(0,9,10),(0,10,11),(0,11,12),(0,12,13),(0,13,14),(0,14,15),(0,15,16)"
+
+4. Set virtio-net with 16 queues and give virtio-net an IP address::
+
+    ethtool -L [ens3] combined 16 # [ens3] is the name of virtio-net
+    ifconfig [ens3] 1.1.1.1
+
+5. Send packets with different IPs from virtio-net; note that each vcpu must be bound to a different packet-sending process::
+
+    taskset -c 0 ping 1.1.1.2
+    taskset -c 1 ping 1.1.1.3
+    taskset -c 2 ping 1.1.1.4
+    taskset -c 3 ping 1.1.1.5
+    taskset -c 4 ping 1.1.1.6
+    taskset -c 5 ping 1.1.1.7
+    taskset -c 6 ping 1.1.1.8
+    taskset -c 7 ping 1.1.1.9
+    taskset -c 8 ping 1.1.1.10
+    taskset -c 9 ping 1.1.1.11
+    taskset -c 10 ping 1.1.1.12
+    taskset -c 11 ping 1.1.1.13
+    taskset -c 12 ping 1.1.1.14
+    taskset -c 13 ping 1.1.1.15
+    taskset -c 14 ping 1.1.1.16
+    taskset -c 15 ping 1.1.1.17
+
+6. Check in the l3fwd-power log that the vhost-related cores are waked up (per the --config above, queues 0-15 map to lcores 1-16), such as the following::
+
+    L3FWD_POWER: lcore 1 is waked up from rx interrupt on port 0 queue 0
+    ...
+    ...
+    L3FWD_POWER: lcore 16 is waked up from rx interrupt on port 0 queue 15
+
+Test Case 4: wake up packed ring vhost-user cores by multi virtio-net in VMs with event idx interrupt mode and cbdma enabled test
+---------------------------------------------------------------------------------------------------------------------------------
+
+1. Bind 2 cbdma ports to the vfio-pci driver, then launch the l3fwd-power example app in client mode::
+
+    ./x86_64-native-linuxapp-gcc/examples/dpdk-l3fwd-power -l 1-2 -n 4 --log-level=9 \
+    --vdev 'eth_vhost0,iface=./vhost-net0,queues=1,client=1,dmas=[rxq0@0000:00:04.0]' \
+    --vdev 'eth_vhost1,iface=./vhost-net1,queues=1,client=1,dmas=[rxq0@0000:80:04.0]' \
+    -- -p 0x3 --parse-ptype 1 --config "(0,0,1),(1,0,2)"
+
+2. Launch VM1 and VM2 with server mode::
+
+    taskset -c 33 \
+    qemu-system-x86_64 -name us-vhost-vm1 \
+    -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
+    -smp cores=1,sockets=1 -drive file=/home/osimg/ubuntu22-04.img \
+    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \
+    -chardev socket,id=char0,path=./vhost-net0,server \
+    -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
+    -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,csum=on,packed=on -vnc :10 -daemonize
+
+    taskset -c 34 \
+    qemu-system-x86_64 -name us-vhost-vm2 \
+    -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
+    -smp cores=1,sockets=1 -drive file=/home/osimg/ubuntu22-04-2.img \
+    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6004-:22 \
+    -chardev socket,id=char0,path=./vhost-net1,server \
+    -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
+    -device virtio-net-pci,mac=52:54:00:00:00:02,netdev=mynet1,csum=on,packed=on -vnc :11 -daemonize
+
+3. Relaunch the l3fwd-power sample to bring the ports up::
+
+    ./x86_64-native-linuxapp-gcc/examples/dpdk-l3fwd-power -l 1-2 -n 4 --log-level=9 \
+    --vdev 'eth_vhost0,iface=./vhost-net0,queues=1,client=1,dmas=[rxq0@0000:00:04.0]' \
+    --vdev 'eth_vhost1,iface=./vhost-net1,queues=1,client=1,dmas=[rxq0@0000:80:04.0]' \
+    -- -p 0x3 --parse-ptype 1 --config "(0,0,1),(1,0,2)"
+
+4. On VM1, set an IP for the virtio device and send packets to vhost::
+
+    ifconfig [ens3] 1.1.1.2
+    #[ens3] is the virtual device name
+    ping 1.1.1.3
+    #send packets to vhost
+
+5. On VM2, also set an IP for the virtio device and send packets to vhost::
+
+    ifconfig [ens3] 1.1.1.4
+    #[ens3] is the virtual device name
+    ping 1.1.1.5
+    #send packets to vhost
+
+6. Check in the l3fwd-power log that the vhost-related cores are waked up.
From patchwork Tue Nov 22 09:11:48 2022
X-Patchwork-Submitter: "Ling, WeiX"
X-Patchwork-Id: 120085
From: Wei Ling
To: dts@dpdk.org
Cc: Wei Ling
Subject: [dts][PATCH V4 3/3] tests/vhost_event_idx_interrupt_cbdma: add new testsuite
Date: Tue, 22 Nov 2022 17:11:48 +0800
Message-Id: <20221122091148.2899425-1-weix.ling@intel.com>

Add TestSuite_vhost_event_idx_interrupt_cbdma.py in tests to test
virtio enqueue with the split ring and packed ring paths and CBDMA.

Signed-off-by: Wei Ling
---
 ...stSuite_vhost_event_idx_interrupt_cbdma.py | 435 ++++++++++++++++++
 1 file changed, 435 insertions(+)
 create mode 100644 tests/TestSuite_vhost_event_idx_interrupt_cbdma.py

diff --git a/tests/TestSuite_vhost_event_idx_interrupt_cbdma.py b/tests/TestSuite_vhost_event_idx_interrupt_cbdma.py
new file mode 100644
index 00000000..614aca56
--- /dev/null
+++ b/tests/TestSuite_vhost_event_idx_interrupt_cbdma.py
@@ -0,0 +1,435 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2022 Intel Corporation
+#
+
+"""
+DPDK Test suite.
+Vhost event idx interrupt tested with the l3fwd-power sample.
+"""
+
+import re
+import time
+
+from framework.test_case import TestCase
+from framework.virt_common import VM
+
+
+class TestVhostEventIdxInterruptCbdma(TestCase):
+    def set_up_all(self):
+        """
+        Run at the start of each test suite.
+        """
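+        # Defaults below describe the single-VM, single-queue layout;
+        # individual test cases override vm_num and queues as needed.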
+        self.vm_num = 1
+        self.queues = 1
+        self.cores_num = len([n for n in self.dut.cores if int(n["socket"]) == 0])
+        self.prepare_l3fwd_power()
+        self.pci_info = self.dut.ports_info[0]["pci"]
+        self.base_dir = self.dut.base_dir.replace("~", "/root")
+        self.app_l3fwd_power_path = self.dut.apps_name["l3fwd-power"]
+        self.l3fwdpower_name = self.app_l3fwd_power_path.split("/")[-1]
+        self.dut_ports = self.dut.get_ports()
+        self.ports_socket = self.dut.get_numa_id(self.dut_ports[0])
+        self.cbdma_dev_infos = []
+        self.device_str = None
+
+    def set_up(self):
+        """
+        Run before each test case.
+        """
+        # Clean the execution ENV
+        self.verify_info = []
+        self.dut.send_expect(f"killall {self.l3fwdpower_name}", "#")
+        self.dut.send_expect("killall -s INT qemu-system-x86_64", "#")
+        self.dut.send_expect("rm -rf %s/vhost-net*" % self.base_dir, "#")
+        self.vhost = self.dut.new_session(suite="vhost-l3fwd")
+        self.vm_dut = []
+        self.vm = []
+        self.nopci = True
+
+    def get_core_mask(self):
+        self.core_config = "1S/%dC/1T" % (self.vm_num * self.queues)
+        self.verify(
+            self.cores_num >= self.queues * self.vm_num,
+            "There are not enough cores to test this case %s" % self.running_case,
+        )
+        self.core_list_l3fwd = self.dut.get_core_list(self.core_config)
+
+    def prepare_l3fwd_power(self):
+        out = self.dut.build_dpdk_apps("examples/l3fwd-power")
+        self.verify("Error" not in out, "compilation of l3fwd-power error")
+
+    def list_split(self, items, n):
+        return [items[i : i + n] for i in range(0, len(items), n)]
+
+    @property
+    def check_2M_env(self):
+        out = self.dut.send_expect(
+            "cat /proc/meminfo |grep Hugepagesize|awk '{print($2)}'", "# "
+        )
+        return True if out == "2048" else False
+
+    def launch_l3fwd_power(self):
+        """
+        launch l3fwd-power with a virtual vhost device
+        """
+        res = True
+        self.logger.info("Launch l3fwd-power sample:")
+        config_info = ""
+        core_index = 0
+        # config the interrupt cores info
+        for port in range(self.vm_num):
+            for queue in range(self.queues):
+                if config_info != "":
+                    config_info += ","
+                config_info += "(%d,%d,%s)" % (
+                    port,
+                    queue,
+                    self.core_list_l3fwd[core_index],
+                )
+                info = {
+                    "core": self.core_list_l3fwd[core_index],
+                    "port": port,
+                    "queue": queue,
+                }
+                self.verify_info.append(info)
+                core_index = core_index + 1
+        # config the vdev info; if there are 2 VMs, there should be 2 vdev entries
+        vdev_info = ""
+        self.cbdma_dev_infos_list = []
+        if self.vm_num >= 2:
+            self.cbdma_dev_infos_list = self.list_split(
+                self.cbdma_dev_infos, int(len(self.cbdma_dev_infos) / self.vm_num)
+            )
+        for i in range(self.vm_num):
+            dmas = ""
+            if self.vm_num == 1:
+                for queue in range(self.queues):
+                    dmas += f"txq{queue}@{self.cbdma_dev_infos[queue]};"
+            else:
+                cbdma_dev_infos = self.cbdma_dev_infos_list[i]
+                for index, q in enumerate(cbdma_dev_infos):
+                    dmas += f"txq{index}@{q};"
+            vdev_info += (
+                f"--vdev 'net_vhost%d,iface=%s/vhost-net%d,dmas=[{dmas}],queues=%d,client=1' "
+                % (i, self.base_dir, i, self.queues)
+            )
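+        # For example, with vm_num=1 and queues=16 the loop above yields a
+        # vdev string of the form (PCI addresses are machine-specific and
+        # shown only as an illustration):
+        #   --vdev 'net_vhost0,iface=<base_dir>/vhost-net0,dmas=[txq0@0000:80:04.0;txq1@0000:80:04.1;...],queues=16,client=1'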
+        port_info = "0x1" if self.vm_num == 1 else "0x3"
+
+        example_para = self.app_l3fwd_power_path + " "
+        para = (
+            " --log-level=9 %s -- -p %s --parse-ptype 1 --config '%s' --interrupt-only"
+            % (vdev_info, port_info, config_info)
+        )
+        eal_params = self.dut.create_eal_parameters(
+            cores=self.core_list_l3fwd,
+            no_pci=self.nopci,
+            ports=self.used_cbdma,
+        )
+        command_line_client = example_para + eal_params + para
+        self.vhost.get_session_before(timeout=2)
+        self.vhost.send_expect(command_line_client, "POWER", 40)
+        time.sleep(10)
+        out = self.vhost.get_session_before()
+        if "Error" in out and "Error opening" not in out:
+            self.logger.error("Launch l3fwd-power sample error")
+            res = False
+        else:
+            self.logger.info("Launch l3fwd-power sample finished")
+        self.verify(res is True, "Launch l3fwd failed")
+
+    def relaunch_l3fwd_power(self):
+        """
+        relaunch the l3fwd-power sample to bring the port up
+        """
+        self.dut.send_expect("killall -s INT %s" % self.l3fwdpower_name, "#")
+        # make sure l3fwd-power is killed
+        pid = self.dut.send_expect(
+            "ps -ef |grep l3|grep -v grep |awk '{print $2}'", "#"
+        )
+        if pid:
+            self.dut.send_expect("kill -9 %s" % pid, "#")
+        self.launch_l3fwd_power()
+
+    def set_vm_cpu_number(self, vm_config):
+        # config the vcpu numbers when the queue number is greater than 1
+        if self.queues == 1:
+            return
+        params_number = len(vm_config.params)
+        for i in range(params_number):
+            if list(vm_config.params[i].keys())[0] == "cpu":
+                vm_config.params[i]["cpu"][0]["number"] = self.queues
+
+    def check_qemu_version(self, vm_config):
+        """
+        in this suite, the qemu version should be greater than 2.7
+        """
+        self.vm_qemu_version = vm_config.qemu_emulator
+        params_number = len(vm_config.params)
+        for i in range(params_number):
+            if list(vm_config.params[i].keys())[0] == "qemu":
+                self.vm_qemu_version = vm_config.params[i]["qemu"][0]["path"]
+
+        out = self.dut.send_expect("%s --version" % self.vm_qemu_version, "#")
+        result = re.search("QEMU\s*emulator\s*version\s*(\d*.\d*)", out)
+        self.verify(
+            result is not None,
+            "the qemu path may not be right: %s" % self.vm_qemu_version,
+        )
+        version = result.group(1)
+        index = version.find(".")
+        self.verify(
+            int(version[:index]) > 2
+            or (int(version[:index]) == 2 and int(version[index + 1 :]) >= 7),
+            "The qemu version should be greater than 2.7 "
+            + "in this suite, please config it in the vhost_sample.cfg file",
+        )
+
+    def start_vms(self, vm_num=1, packed=False):
+        """
+        start qemus
+        """
+        for i in range(vm_num):
+            vm_info = VM(self.dut, "vm%d" % i, "vhost_sample_copy")
+            vm_info.load_config()
+            vm_params = {}
+            vm_params["driver"] = "vhost-user"
+            vm_params["opt_path"] = self.base_dir + "/vhost-net%d" % i
+            vm_params["opt_mac"] = "00:11:22:33:44:5%d" % i
+            vm_params["opt_server"] = "server"
+            if self.queues > 1:
+                vm_params["opt_queue"] = self.queues
+                opt_args = "csum=on,mq=on,vectors=%d" % (2 * self.queues + 2)
+            else:
+                opt_args = "csum=on"
+            if packed:
+                opt_args = opt_args + ",packed=on"
+            vm_params["opt_settings"] = opt_args
+            vm_info.set_vm_device(**vm_params)
+            self.set_vm_cpu_number(vm_info)
+            self.check_qemu_version(vm_info)
+            vm_dut = None
+            try:
+                vm_dut = vm_info.start(load_config=False, set_target=False)
+                if vm_dut is None:
+                    raise Exception("Set up VM ENV failed")
+            except Exception as e:
+                self.logger.error("ERROR: Failure for %s" % str(e))
+            self.vm_dut.append(vm_dut)
+            self.vm.append(vm_info)
+
+    def config_virtio_net_in_vm(self):
+        """
+        enable the configured number of queues on each VM's virtio-net device
+        """
+        for i in range(len(self.vm_dut)):
+            vm_intf = self.vm_dut[i].ports_info[0]["intf"]
+            self.vm_dut[i].send_expect(
+                "ethtool -L %s combined %d" % (vm_intf, self.queues), "#", 20
+            )
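+    # The status strings matched below mirror l3fwd-power's own log output,
+    # e.g. "lcore 1 is waked up from rx interrupt on port 0 queue 0" and
+    # "lcore 1 sleeps until interrupt triggers".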
+    def check_vhost_core_status(self, vm_index, status):
+        """
+        check the cpu status
+        """
+        out = self.vhost.get_session_before()
+        for i in range(self.queues):
+            # verify_info holds the config of all VMs (vm0 and vm1),
+            # so the current index should be vm_index + queue_index
+            verify_index = i + vm_index
+            if status == "waked up":
+                info = "lcore %s is waked up from rx interrupt on port %d queue %d"
+                info = info % (
+                    self.verify_info[verify_index]["core"],
+                    self.verify_info[verify_index]["port"],
+                    self.verify_info[verify_index]["queue"],
+                )
+            elif status == "sleeps":
+                info = (
+                    "lcore %s sleeps until interrupt triggers"
+                    % self.verify_info[verify_index]["core"]
+                )
+            self.logger.info(info)
+            self.verify(info in out, "The CPU status is not right for %s" % info)
+
+    def send_and_verify(self):
+        """
+        start to send packets and check the cpu status;
+        stop sending packets and check the cpu status again
+        """
+        ping_ip = 3
+        for vm_index in range(self.vm_num):
+            session_info = []
+            vm_intf = self.vm_dut[vm_index].ports_info[0]["intf"]
+            self.vm_dut[vm_index].send_expect(
+                "ifconfig %s 1.1.1.%d" % (vm_intf, ping_ip), "#"
+            )
+            ping_ip = ping_ip + 1
+            self.vm_dut[vm_index].send_expect("ifconfig %s up" % vm_intf, "#")
+            for queue in range(self.queues):
+                session = self.vm_dut[vm_index].new_session(
+                    suite="ping_info_%d" % queue
+                )
+                session.send_expect(
+                    "taskset -c %d ping 1.1.1.%d" % (queue, ping_ip), "PING", 30
+                )
+                session_info.append(session)
+                ping_ip = ping_ip + 1
+            time.sleep(3)
+            self.check_vhost_core_status(vm_index=vm_index, status="waked up")
+            # close all ping sessions in the vm
+            for sess_index in range(len(session_info)):
+                session_info[sess_index].send_expect("^c", "#")
+                self.vm_dut[vm_index].close_session(session_info[sess_index])
+
+    def get_cbdma_ports_info_and_bind_to_dpdk(self):
+        """
+        get all cbdma ports
+        """
+        self.cbdma_dev_infos = []
+        self.used_cbdma = []
+        out = self.dut.send_expect(
+            "./usertools/dpdk-devbind.py --status-dev dma", "# ", 30
+        )
+        device_info = out.split("\n")
+        for device in device_info:
+            pci_info = re.search("\s*(0000:\S*:\d*.\d*)", device)
+            if pci_info is not None:
+                # dev_info = pci_info.group(1)
+                # the numa id of the ioat dev; only add devices which are
+                # on the same socket as the nic dev
+                self.cbdma_dev_infos.append(pci_info.group(1))
+        self.verify(
+            len(self.cbdma_dev_infos) >= self.queues,
+            "There are not enough cbdma devices to run this suite",
+        )
+        if self.queues == 1:
+            self.cbdma_dev_infos = [self.cbdma_dev_infos[0], self.cbdma_dev_infos[-1]]
+        self.used_cbdma = self.cbdma_dev_infos[0 : self.queues * self.vm_num]
+        self.device_str = " ".join(self.used_cbdma)
+        self.dut.send_expect(
+            "./usertools/dpdk-devbind.py --force --bind=%s %s"
+            % (self.drivername, self.device_str),
+            "# ",
+            60,
+        )
+
+    def bind_cbdma_device_to_kernel(self):
+        if self.device_str is not None:
+            self.dut.send_expect("modprobe ioatdma", "# ")
+            self.dut.send_expect(
+                "./usertools/dpdk-devbind.py -u %s" % self.device_str, "# ", 30
+            )
+            self.dut.send_expect(
+                "./usertools/dpdk-devbind.py --force --bind=ioatdma %s"
+                % self.device_str,
+                "# ",
+                60,
+            )
+
+    def stop_all_apps(self):
+        """
+        close all vms
+        """
+        for i in range(len(self.vm)):
+            self.vm[i].stop()
+        self.dut.send_expect("killall %s" % self.l3fwdpower_name, "#", timeout=2)
+
+    def test_wake_up_split_ring_vhost_user_cores_with_event_idx_interrupt_mode_16_queues_with_cbdma(
+        self,
+    ):
+        """
+        Test Case 1: wake up split ring vhost-user cores with event idx interrupt mode and cbdma enabled 16 queues test
+        """
+        self.vm_num = 1
+        self.bind_nic_driver(self.dut_ports)
+        self.queues = 16
+        self.get_core_mask()
+        self.nopci = False
+        self.get_cbdma_ports_info_and_bind_to_dpdk()
+        self.launch_l3fwd_power()
+        self.start_vms(
+            vm_num=self.vm_num,
+        )
+        self.relaunch_l3fwd_power()
+        self.config_virtio_net_in_vm()
+        self.send_and_verify()
+        self.stop_all_apps()
+    def test_wake_up_split_ring_vhost_user_cores_by_multi_virtio_net_in_vms_with_event_idx_interrupt_with_cbdma(
+        self,
+    ):
+        """
+        Test Case 2: wake up split ring vhost-user cores by multi virtio-net in VMs with event idx interrupt mode and cbdma enabled test
+        """
+        self.vm_num = 2
+        self.bind_nic_driver(self.dut_ports)
+        self.queues = 1
+        self.get_core_mask()
+        self.nopci = False
+        self.get_cbdma_ports_info_and_bind_to_dpdk()
+        self.launch_l3fwd_power()
+        self.start_vms(
+            vm_num=self.vm_num,
+        )
+        self.relaunch_l3fwd_power()
+        self.config_virtio_net_in_vm()
+        self.send_and_verify()
+        self.stop_all_apps()
+
+    def test_wake_up_packed_ring_vhost_user_cores_with_event_idx_interrupt_mode_16_queues_with_cbdma(
+        self,
+    ):
+        """
+        Test Case 3: wake up packed ring vhost-user cores with event idx interrupt mode and cbdma enabled 16 queues test
+        """
+        self.vm_num = 1
+        self.bind_nic_driver(self.dut_ports)
+        self.queues = 16
+        self.get_core_mask()
+        self.nopci = False
+        self.get_cbdma_ports_info_and_bind_to_dpdk()
+        self.launch_l3fwd_power()
+        self.start_vms(vm_num=self.vm_num, packed=True)
+        self.relaunch_l3fwd_power()
+        self.config_virtio_net_in_vm()
+        self.send_and_verify()
+        self.stop_all_apps()
+
+    def test_wake_up_packed_ring_vhost_user_cores_by_multi_virtio_net_in_vms_with_event_idx_interrupt_with_cbdma(
+        self,
+    ):
+        """
+        Test Case 4: wake up packed ring vhost-user cores by multi virtio-net in VMs with event idx interrupt mode and cbdma enabled test
+        """
+        self.vm_num = 2
+        self.bind_nic_driver(self.dut_ports)
+        self.queues = 1
+        self.get_core_mask()
+        self.nopci = False
+        self.get_cbdma_ports_info_and_bind_to_dpdk()
+        self.launch_l3fwd_power()
+        self.start_vms(vm_num=self.vm_num, packed=True)
+        self.relaunch_l3fwd_power()
+        self.config_virtio_net_in_vm()
+        self.send_and_verify()
+        self.stop_all_apps()
+
+    def tear_down(self):
+        """
+        Run after each test case.
+        """
+        self.dut.close_session(self.vhost)
+        self.dut.send_expect(f"killall {self.l3fwdpower_name}", "#")
+        self.dut.send_expect("killall -s INT qemu-system-x86_64", "#")
+        self.bind_cbdma_device_to_kernel()
+        if "cbdma" in self.running_case:
+            self.bind_nic_driver(self.dut_ports, self.drivername)
+
+    def tear_down_all(self):
+        """
+        Run after each test suite.
+        """
+        pass
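
DTS picks up the new suite from execution.cfg; a minimal entry sketch is
below, assuming the standard DTS layout (the CRB address, target, and
parameters are placeholders to adapt to the local setup):

    [Execution1]
    crbs=<dut ip>
    drivername=vfio-pci
    test_suites=
        vhost_event_idx_interrupt_cbdma
    targets=
        x86_64-native-linuxapp-gcc
    parameters=nic_type=cfg:func=true

The run is then started with ./dts from the DTS root directory.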