From patchwork Thu Dec 22 05:24:16 2022
X-Patchwork-Submitter: "Ling, WeiX"
X-Patchwork-Id: 121259
From: Wei Ling
To: dts@dpdk.org
Cc: Wei Ling
Subject: [dts][PATCH V5 1/2] test_plans/basic_4k_pages_dsa_test_plan: modify
 dmas parameter by DPDK changed
Date: Thu, 22 Dec 2022 13:24:16 +0800
Message-Id: <20221222052416.177403-1-weix.ling@intel.com>
X-Mailer: git-send-email 2.25.1

The dmas parameter has been changed by the DPDK local patch, so modify the
dmas parameter in the test plan accordingly.

Signed-off-by: Wei Ling
---
 test_plans/basic_4k_pages_dsa_test_plan.rst | 89 +++++++++++----------
 1 file changed, 48 insertions(+), 41 deletions(-)

diff --git a/test_plans/basic_4k_pages_dsa_test_plan.rst b/test_plans/basic_4k_pages_dsa_test_plan.rst
index 4a67dfa0..eeea25d8 100644
--- a/test_plans/basic_4k_pages_dsa_test_plan.rst
+++ b/test_plans/basic_4k_pages_dsa_test_plan.rst
@@ -1,9 +1,9 @@
 .. SPDX-License-Identifier: BSD-3-Clause
    Copyright(c) 2022 Intel Corporation
 
-==============================================
+=============================================
 Basic 4k-pages test with DSA driver test plan
-==============================================
+=============================================
 
 Description
 ===========
@@ -21,10 +21,11 @@ and packed ring vhost-user/virtio-net mergeable and non-mergeable path.
 4. Multi-queues number dynamic change in vm2vm vhost-user/virtio-net with split ring and packed ring.
 5. Vhost-user using 1G hugepages and virtio-user using 4k-pages.
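 
Since every case below mixes 1G-hugepage-backed vhost with 4k-page virtio-user or guests, it can help to confirm the host hugepage state before starting. A minimal sketch of that check (the mount point and page count are illustrative assumptions, not values from this test plan)::

	# grep -i huge /proc/meminfo
	# mkdir -p /mnt/huge
	# mount -t hugetlbfs -o pagesize=1G nodev /mnt/huge
	# echo 8 > /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages   # reserve 8 x 1G pages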
 
-Note:
-1. When DMA devices are bound to vfio driver, VA mode is the default and recommended. For PA mode, page by page mapping may
-exceed IOMMU's max capability, better to use 1G guest hugepage.
-2. DPDK local patch that about vhost pmd is needed when testing Vhost asynchronous data path with testpmd.
+.. note::
+
+   1. When DMA devices are bound to the vfio driver, VA mode is the default and recommended. For PA mode, page by page mapping may
+      exceed the IOMMU's max capability, so it is better to use 1G guest hugepages.
+   2. A DPDK local patch for the vhost PMD is needed when testing the vhost asynchronous data path with testpmd.
 
 Prerequisites
 =============
@@ -41,7 +42,7 @@ General set up
 	CC=gcc meson --werror -Denable_kmods=True -Dlibdir=lib -Dexamples=all --default-library=static x86_64-native-linuxapp-gcc
 	ninja -C x86_64-native-linuxapp-gcc -j 110
 
-3. Get the PCI device ID and DSA device ID of DUT, for example, 0000:6a:00.0 is PCI device ID, 0000:6a:01.0 - 0000:f6:01.0 are DSA device IDs::
+3. Get the PCI devices of the DUT, for example, 0000:6a:00.0 is the NIC port and 0000:6a:01.0 - 0000:f6:01.0 are DSA devices::
 
 	# ./usertools/dpdk-devbind.py -s
 
@@ -74,14 +75,14 @@ Common steps
 ------------
 1. Bind 1 NIC port to vfio-pci::
 
-	# ./usertools/dpdk-devbind.py -b vfio-pci
+	# ./usertools/dpdk-devbind.py -b vfio-pci
 	For example:
-	# ./usertools/dpdk-devbind.py -b vfio-pci 0000:00:4f.1
+	# ./usertools/dpdk-devbind.py -b vfio-pci 0000:6a:00.0
 
 2. Bind DSA devices to DPDK vfio-pci driver::
 
-	# ./usertools/dpdk-devbind.py -b vfio-pci
-	For example, bind 2 DMA devices to vfio-pci driver:
+	# ./usertools/dpdk-devbind.py -b vfio-pci
+	For example, bind 2 DSA devices to vfio-pci driver:
 	# ./usertools/dpdk-devbind.py -b vfio-pci 0000:e7:01.0 0000:ec:01.0
 
 .. note::
 
@@ -93,18 +94,18 @@ Common steps
 3. Bind DSA devices to kernel idxd driver, and configure Work Queue (WQ)::
 
-	# ./usertools/dpdk-devbind.py -b idxd
-	# ./drivers/dma/dma/idxd/dpdk_idxd_cfg.py -q
+	# ./usertools/dpdk-devbind.py -b idxd
+	# ./drivers/dma/idxd/dpdk_idxd_cfg.py -q
 
 .. note::
 
 	Better to reset the WQ when you need to operate DSA devices that are bound to the idxd driver:
 	# ./drivers/dma/idxd/dpdk_idxd_cfg.py --reset
 	You can check it by 'ls /dev/dsa'
-	numDevices: number of devices, where 0<=numDevices<=7, corresponding to 0000:6a:01.0 - 0000:f6:01.0
-	numWq: Number of workqueues per DSA endpoint, where 1<=numWq<=8
+	dsa_idx: Index of DSA devices, where 0<=dsa_idx<=7, corresponding to 0000:6a:01.0 - 0000:f6:01.0
+	wq_num: Number of workqueues per DSA endpoint, where 1<=wq_num<=8
 
-	For example, bind 2 DMA devices to idxd driver and configure WQ:
+	For example, bind 2 DSA devices to idxd driver and configure WQ:
 
 	# ./usertools/dpdk-devbind.py -b idxd 6a:01.0 6f:01.0
 	# ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 1 0
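 
After step 3 it is worth confirming that the kernel actually exposed the configured WQs before vhost is launched. A small sketch; 'accel-config' is optional extra tooling from the idxd-config package, an assumption here rather than something this test plan requires::

	# ls /dev/dsa                 # expect entries such as wq0.0 if configuration succeeded
	# accel-config list           # optional: dump the full device/WQ configuration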
@@ -112,10 +113,10 @@ Common steps
 	Check WQ by 'ls /dev/dsa'; you should find "wq0.0 wq1.0 wq1.1 wq1.2 wq1.3"
 
 Test Case 1: PVP split ring multi-queues with 4K-pages and dsa dpdk driver
-------------------------------------------------------------------------------
+--------------------------------------------------------------------------
 This case tests that split ring with multi-queues works normally in a 4k-pages environment when vhost uses asynchronous operations with the dsa dpdk driver.
 
-1. Bind 2 dsa device and one nic port to vfio-pci like common step 1-2::
+1. Bind 2 DSA devices and 1 NIC port to vfio-pci like common step 1-2::
 
 	# ./usertools/dpdk-devbind.py -b vfio-pci 6a:00.0
 	# ./usertools/dpdk-devbind.py -b vfio-pci e7:01.0 ec:01.0
 
@@ -172,10 +173,10 @@ This case tests split ring with multi-queues can work normally in 4k-pages envir
 10. Rerun step 4-6.
 
 Test Case 2: PVP packed ring multi-queues with 4K-pages and dsa dpdk driver
-------------------------------------------------------------------------------
+---------------------------------------------------------------------------
 This case tests that packed ring with multi-queues works normally in a 4k-pages environment when vhost uses asynchronous operations with the dsa dpdk driver.
 
-1. Bind 2 dsa device and one nic port to vfio-pci like common step 1-2::
+1. Bind 2 DSA devices and 1 NIC port to vfio-pci like common step 1-2::
 
 	# ./usertools/dpdk-devbind.py -b vfio-pci 6a:00.0
 	# ./usertools/dpdk-devbind.py -b vfio-pci f1:01.0 f6:01.0
 
@@ -232,10 +233,10 @@ This case tests packed ring with multi-queues can work normally in 4k-pages envi
 10. Rerun step 4-6.
 
 Test Case 3: VM2VM split ring vhost-user/virtio-net 4K-pages and dsa dpdk driver test with tcp traffic
--------------------------------------------------------------------------------------------------------
+------------------------------------------------------------------------------------------------------
 This case tests the function of vhost TX offload in the topology of vhost-user/virtio-net split ring mergeable path by verifying the TSO/cksum in the TCP/IP stack when vhost uses asynchronous operations with the dsa dpdk driver in a 4k-pages environment.
 
-1. Bind 1 dsa device to vfio-pci like common step 2::
+1. Bind 1 DSA device to vfio-pci like common step 2::
 
 	# ./usertools/dpdk-devbind.py -b vfio-pci f1:01.0
 
@@ -291,11 +292,11 @@ This case test the function of Vhost tx offload in the topology of vhost-user/vi
 	testpmd>show port xstats all
 
 Test Case 4: VM2VM packed ring vhost-user/virtio-net 4K-pages and dsa dpdk driver test with tcp traffic
---------------------------------------------------------------------------------------------------------
+-------------------------------------------------------------------------------------------------------
 This case tests the function of vhost TX offload in the topology of vhost-user/virtio-net packed ring mergeable path by verifying the TSO/cksum in the TCP/IP stack when vhost uses asynchronous operations with the dsa dpdk driver in a 4k-pages environment.
 
-1. Bind 1 dsa device to vfio-pci like common step 2::
+1. Bind 1 DSA device to vfio-pci like common step 2::
 
 	# ./usertools/dpdk-devbind.py -b vfio-pci f1:01.0
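 
Test cases 3 and 4 depend on TSO and checksum offload being active on the guest virtio-net interfaces. A quick sanity check before running iperf, sketched here with an assumed guest interface name of ens5 (adjust to the actual VM device)::

	# ethtool -k ens5 | grep -E 'tcp-segmentation-offload|tx-checksumming'
	# ethtool -K ens5 tso on      # enable TSO if it is reported off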
@@ -346,12 +347,14 @@ by verifing the TSO/cksum in the TCP/IP stack when vhost uses the asynchronous o
 	# iperf -s -i 1
 	# iperf -c 1.1.1.2 -i 1 -t 60
 
-6. Check that 2VMs can receive and send big packets to each other through vhost log. Port 0 should have tx packets above 1522, Port 1 should have rx packets above 1522::
+6. Check that 2 VMs can receive and send big packets to each other through vhost log::
 
 	testpmd>show port xstats all
+	Port 0 should have tx packets above 1518
+	Port 1 should have rx packets above 1518
 
 Test Case 5: VM2VM vhost/virtio-net split packed ring multi queues with 1G/4k-pages and dsa dpdk driver
----------------------------------------------------------------------------------------------------------
+-------------------------------------------------------------------------------------------------------
 This case uses iperf and scp to test that the payload of large packets (larger than 1MB) is valid after packet forwarding in vm2vm vhost-user/virtio-net multi-queues mergeable path when vhost uses asynchronous operations with the dsa dpdk driver. One virtio-net is split ring, the other is packed ring. The vhost runs in 1G hugepages and the virtio-user runs in a 4k-pages environment.
 
@@ -417,7 +420,7 @@ And one virtio-net is split ring, the other is packed ring. The vhost run in 1G
 8. Relaunch vm1 and rerun step 4-7.
 
 Test Case 6: VM2VM vhost/virtio-net split ring multi queues with 1G/4k-pages and dsa dpdk driver
----------------------------------------------------------------------------------------------------
+------------------------------------------------------------------------------------------------
 This case uses iperf and scp to test that the payload of large packets (larger than 1MB) is valid after packet forwarding in vm2vm vhost-user/virtio-net split ring mergeable path when vhost uses asynchronous operations with the dsa dpdk driver. The vhost runs in 1G hugepages and the virtio-user runs in a 4k-pages environment.
 
@@ -507,10 +510,10 @@ dsa dpdk driver. The vhost run in 1G hugepages and the virtio-user run in 4k-pag
 11. Rerun step 6-7.
 
 Test Case 7: PVP split ring multi-queues with 4K-pages and dsa kernel driver
--------------------------------------------------------------------------------
+----------------------------------------------------------------------------
 This case tests that split ring with multi-queues works normally in a 4k-pages environment when vhost uses asynchronous operations with the dsa kernel driver.
 
-1. Bind one nic port to vfio-pci and 2 dsa device to idxd like common step 1 and 3::
+1. Bind 1 NIC port to vfio-pci and 2 DSA devices to idxd like common step 1 and 3::
 
 	# ./usertools/dpdk-devbind.py -b vfio-pci 6a:00.0
 
@@ -563,10 +566,10 @@ This case tests split ring with multi-queues can work normally in 4k-pages envir
 8. Rerun step 4-6.
 
 Test Case 8: PVP packed ring multi-queues with 4K-pages and dsa kernel driver
---------------------------------------------------------------------------------
+-----------------------------------------------------------------------------
 This case tests that packed ring with multi-queues works normally in a 4k-pages environment when vhost uses asynchronous operations with the dsa kernel driver.
 
-1. Bind one nic port to vfio-pci and 2 dsa device to idxd like common step 1 and 3::
+1. Bind 1 NIC port to vfio-pci and 2 DSA devices to idxd like common step 1 and 3::
 
 	# ./usertools/dpdk-devbind.py -b vfio-pci 6a:00.0
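 
The 1G/4k-pages VM2VM cases above start one guest backed by hugepages and one backed by ordinary 4k pages; the full qemu command lines sit outside the hunks shown here. As a rough sketch of the distinction only (ids, size and path are illustrative assumptions, not values from this test plan), the hugepage-backed guest memory would use something like::

	-object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
	-numa node,memdev=mem -mem-prealloc

while the 4k-pages guest simply omits the hugetlbfs-backed memory object.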
@@ -619,12 +622,12 @@ This case tests split ring with multi-queues can work normally in 4k-pages envir
 8. Rerun step 4-6.
 
 Test Case 9: VM2VM split ring vhost-user/virtio-net 4K-pages and dsa kernel driver test with tcp traffic
----------------------------------------------------------------------------------------------------------
+--------------------------------------------------------------------------------------------------------
 This case tests the function of vhost TX offload in the topology of vhost-user/virtio-net split ring mergeable path
 by verifying the TSO/cksum in the TCP/IP stack when vhost uses asynchronous operations with the dsa kernel driver
 in a 4k-pages environment.
 
-1. Bind 1 dsa device to idxd like common step 2::
+1. Bind 1 DSA device to idxd like common step 3::
 
 	ls /dev/dsa #check wq configure, reset if exist
 	# ./usertools/dpdk-devbind.py -u 6a:01.0
 	# ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 1 0
 	ls /dev/dsa #check wq configure success
 
@@ -679,17 +682,19 @@ in 4k-pages environment.
 	# iperf -s -i 1
 	# iperf -c 1.1.1.2 -i 1 -t 60
 
-7. Check that 2VMs can receive and send big packets to each other through vhost log. Port 0 should have tx packets above 1522, Port 1 should have rx packets above 1522::
+7. Check that 2 VMs can receive and send big packets to each other through vhost log::
 
 	testpmd>show port xstats all
+	Port 0 should have tx packets above 1518
+	Port 1 should have rx packets above 1518
 
 Test Case 10: VM2VM packed ring vhost-user/virtio-net 4K-pages and dsa kernel driver test with tcp traffic
------------------------------------------------------------------------------------------------------------
+----------------------------------------------------------------------------------------------------------
 This case tests the function of vhost TX offload in the topology of vhost-user/virtio-net packed ring mergeable path
 by verifying the TSO/cksum in the TCP/IP stack when vhost uses asynchronous operations with the dsa kernel driver
 in a 4k-pages environment.
 
-1. Bind 2 dsa device to idxd like common step 2::
+1. Bind 2 DSA devices to idxd like common step 3::
 
 	ls /dev/dsa #check wq configure, reset if exist
 	# ./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0
 
@@ -745,17 +750,19 @@ in 4k-pages environment.
 	# iperf -s -i 1
 	# iperf -c 1.1.1.2 -i 1 -t 60
 
-7. Check that 2VMs can receive and send big packets to each other through vhost log. Port 0 should have tx packets above 1522, Port 1 should have rx packets above 1522::
+7. Check that 2 VMs can receive and send big packets to each other through vhost log::
 
 	testpmd>show port xstats all
+	Port 0 should have tx packets above 1518
+	Port 1 should have rx packets above 1518
 
 Test Case 11: VM2VM vhost/virtio-net split packed ring multi queues with 1G/4k-pages and dsa kernel driver
------------------------------------------------------------------------------------------------------------
+----------------------------------------------------------------------------------------------------------
 This case uses iperf and scp to test that the payload of large packets (larger than 1MB) is valid after packet forwarding in vm2vm vhost-user/virtio-net split and packed ring mergeable path when vhost uses asynchronous operations with the dsa kernel driver. The vhost runs in 1G hugepages and the virtio-user runs in a 4k-pages environment.
 
-1. Bind 8 dsa device to idxd like common step 3::
+1. Bind 8 DSA devices to idxd like common step 3::
 
 	ls /dev/dsa #check wq configure, reset if exist
 	# ./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0 74:01.0 79:01.0 e7:01.0 ec:01.0 f1:01.0 f6:01.0
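 
Unbinding, rebinding and configuring eight DSA devices by hand is repetitive; the same documented tools can be driven from a small loop. A sketch (the device list mirrors step 1 of Test Case 11, and one WQ per device follows the common-steps example; the actual case may configure more WQs per device)::

	# ./usertools/dpdk-devbind.py -b idxd 6a:01.0 6f:01.0 74:01.0 79:01.0 e7:01.0 ec:01.0 f1:01.0 f6:01.0
	# for i in $(seq 0 7); do ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 1 $i; done
	# ls /dev/dsa                 # verify one wq per device appeared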
@@ -827,7 +834,7 @@ dsa kernel driver. The vhost run in 1G hugepages and the virtio-user run in 4k-p
 8. Relaunch vm1 and rerun step 4-7.
 
 Test Case 12: VM2VM vhost/virtio-net split ring multi queues with 1G/4k-pages and dsa kernel driver
------------------------------------------------------------------------------------------------------
+---------------------------------------------------------------------------------------------------
 This case uses iperf and scp to test that the payload of large packets (larger than 1MB) is valid after packet forwarding in vm2vm vhost-user/virtio-net split ring mergeable path when vhost uses asynchronous operations with the dsa kernel driver. The vhost runs in 1G hugepages and the virtio-user runs in a 4k-pages environment.
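 
The scp half of these VM2VM cases is what proves the large payload survives forwarding intact; iperf alone only measures throughput. A minimal sketch of that verification (file name, size and the VM2 address are illustrative assumptions)::

	# dd if=/dev/urandom of=/tmp/payload bs=1M count=10   # create a >1MB random payload in VM1
	# md5sum /tmp/payload                                 # record the checksum in VM1
	# scp /tmp/payload root@1.1.1.8:/tmp/payload          # copy to VM2 (assumed address)
	# md5sum /tmp/payload                                 # rerun in VM2; sums must match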