From patchwork Wed Nov 30 06:33:18 2022
X-Patchwork-Submitter: "Ling, WeiX"
X-Patchwork-Id: 120336
From: Wei Ling
To: dts@dpdk.org
Cc: Wei Ling
Subject: [dts][PATCH V2 1/2] test_plans/vm2vm_virtio_user_dsa_test_plan: modify testplan description
Date: Wed, 30 Nov 2022 14:33:18 +0800
Message-Id: <20221130063318.1165259-1-weix.ling@intel.com>

Modify the description in the test plan.

Signed-off-by: Wei Ling
---
 .../vm2vm_virtio_user_dsa_test_plan.rst | 166 +++++++++---------
 1 file changed, 85 insertions(+), 81 deletions(-)

diff --git a/test_plans/vm2vm_virtio_user_dsa_test_plan.rst b/test_plans/vm2vm_virtio_user_dsa_test_plan.rst index 369fe076..75f94a4e 100644 --- a/test_plans/vm2vm_virtio_user_dsa_test_plan.rst +++ b/test_plans/vm2vm_virtio_user_dsa_test_plan.rst @@ -7,26 +7,34 @@ VM2VM vhost-user/virtio-user with DSA driver test plan Description =========== -Vhost asynchronous data path leverages DMA devices to offload memory copies from the CPU and it is implemented in an asynchronous way. -In addition, vhost supports M:N mapping between vrings and DMA virtual channels. Specifically, one vring can use multiple different DMA -channels and one DMA channel can be shared by multiple vrings at the same time. From DPDK22.07, Vhost enqueue and dequeue operation with -DSA driver is supported in both split and packed ring. +Vhost asynchronous data path leverages DMA devices to offload memory copies from the CPU and it is implemented in an +asynchronous way. DPDK Vhost with DSA acceleration supports M:N mapping between virtqueues and DSA WQs. Specifically, +one DSA WQ can be used by multiple virtqueues and one virtqueue can offload copies to multiple DSA WQs at the same time. +Vhost async enqueue and async dequeue operation is supported in both split and packed ring.
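The M:N mapping described above can be made concrete with the `dmas` syntax used later in this plan. A minimal sketch, with purely illustrative device addresses (not taken from the test cases):

```shell
# Illustrative only: one DSA WQ (0000:00.01.0) shared by txq0 and rxq0,
# while rxq1 offloads its copies to a second WQ (0000:00.01.1).
DMAS='txq0@0000:00.01.0;rxq0@0000:00.01.0;rxq1@0000:00.01.1'

# Each entry is <queue>@<DMA device>; split on ';' to list the mapping.
echo "$DMAS" | tr ';' '\n'
```

Here one WQ serves two virtqueues at once, and the vhost vdev would receive the whole string as `dmas=[$DMAS]`, in the same form as the example in the description.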
This document provides the test plan for testing the following features when vhost-user uses the asynchronous data path with the DSA driver (kernel IDXD driver and DPDK vfio-pci driver) in VM2VM virtio-user topology. -1. Split virtqueue vm2vm in-order mergeable, in-order non-mergeable, mergeable, non-mergeable, vector_rx path test and payload check. -2. Packed virtqueue vm2vm in-order mergeable, in-order non-mergeable, mergeable, non-mergeable, vectorized path (ringsize not powerof 2) test and payload check. -3. Test indirect descriptor feature. For example, the split ring mergeable inorder path use non-indirect descriptor, the 2000,2000,2000,2000 chain packets will need 4 consequent ring, still need one ring put header. -the split ring mergeable path use indirect descriptor, the 2000,2000,2000,2000 chain packets will only occupy one ring. +1. Split virtqueue vm2vm in-order mergeable, in-order non-mergeable, mergeable, non-mergeable, vector_rx path test and payload check. +2. Packed virtqueue vm2vm in-order mergeable, in-order non-mergeable, mergeable, non-mergeable, vectorized path (ring size not a power of 2) test and payload check. +3. Test the indirect descriptor feature. For example, the split ring mergeable in-order path uses non-indirect descriptors, +so the 2000,2000,2000,2000 chained packets need 4 consecutive ring entries plus one more entry for the header; the split ring mergeable path +uses indirect descriptors, so the same chained packets occupy only one ring entry. -IOMMU impact: -If iommu off, idxd can work with iova=pa -If iommu on, kernel dsa driver only can work with iova=va by program IOMMU, can't use iova=pa(fwd not work due to pkts payload wrong). +.. note:: + + 1. When DMA devices are bound to the vfio driver, VA mode is the default and recommended. For PA mode, page-by-page mapping may + exceed the IOMMU's max capability, so it is better to use 1G guest hugepages. + 2. A local DPDK patch for the vhost PMD is needed when testing the Vhost asynchronous data path with testpmd.
In this patch, + we enable the asynchronous data path for the vhost PMD. The asynchronous data path is enabled per tx/rx queue, and users need to specify + the DMA device used by the tx/rx queue. Each tx/rx queue can use only one DMA device (this is limited by the + implementation of the vhost PMD), but one DMA device can be shared among multiple tx/rx queues of different vhost PMD ports. -Note: -1.When DMA devices are bound to vfio driver, VA mode is the default and recommended. For PA mode, page by page mapping may -exceed IOMMU's max capability, better to use 1G guest hugepage. -2.DPDK local patch that about vhost pmd is needed when testing Vhost asynchronous data path with testpmd. +Two PMD parameters are added: +- dmas: specify the DMA device used by a tx/rx queue (default: no queue enables the asynchronous data path). +- dma-ring-size: DMA ring size (default: 4096). + +Here is an example: +--vdev 'eth_vhost0,iface=./s0,dmas=[txq0@0000:00.01.0;rxq0@0000:00.01.1],dma-ring-size=2048' Prerequisites ============= @@ -45,14 +53,10 @@ General set up CC=gcc meson --werror -Denable_kmods=True -Dlibdir=lib -Dexamples=all --default-library=static x86_64-native-linuxapp-gcc ninja -C x86_64-native-linuxapp-gcc -j 110 -2. Get the PCI device ID and DSA device ID of DUT, for example, 0000:4f:00.1 is PCI device ID, 0000:6a:01.0 - 0000:f6:01.0 are DSA device IDs:: +2. Get the DSA devices of the DUT, for example, 0000:6a:01.0 - 0000:f6:01.0 are DSA devices:: # ./usertools/dpdk-devbind.py -s - Network devices using kernel driver - =================================== - 0000:4f:00.1 'Ethernet Controller E810-C for QSFP 1592' drv=ice unused=vfio-pci - DMA devices using kernel driver =============================== 0000:6a:01.0 'Device 0b25' drv=idxd unused=vfio-pci @@ -71,44 +75,44 @@ Common steps ------------ 1.
Bind DSA devices to DPDK vfio-pci driver:: - # ./usertools/dpdk-devbind.py -b vfio-pci + # ./usertools/dpdk-devbind.py -b vfio-pci - For example, bind 2 DMA devices to vfio-pci driver: + For example, bind 2 DSA devices to vfio-pci driver: # ./usertools/dpdk-devbind.py -b vfio-pci 0000:e7:01.0 0000:ec:01.0 .. note:: - One DPDK DSA device can create 8 WQ at most. Below is an example, where DPDK DSA device will create one and - eight WQ for DSA deivce 0000:e7:01.0 and 0000:ec:01.0. The value of “max_queues” is 1~8: + One DPDK DSA device can create 8 WQs at most. Below is an example, where the DPDK DSA driver will create one WQ for device + 0000:e7:01.0 and eight WQs for 0000:ec:01.0. The value range of “max_queues” is 1~8 and the default value is 8: # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 4 -n 4 -a 0000:e7:01.0,max_queues=1 -- -i # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 4 -n 4 -a 0000:ec:01.0,max_queues=8 -- -i 2. Bind DSA devices to kernel idxd driver, and configure Work Queue (WQ):: - # ./usertools/dpdk-devbind.py -b idxd - # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q + # ./usertools/dpdk-devbind.py -b idxd + # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q ..
note:: + dsa_idx: Index of DSA devices, where 0<=dsa_idx<=7, corresponding to 0000:6a:01.0 - 0000:f6:01.0 + wq_num: Number of work queues configured per DSA instance, where 1<=wq_num<=8 + It is better to reset the WQs when you need to operate DSA devices that are bound to the idxd driver: - # ./drivers/dma/idxd/dpdk_idxd_cfg.py --reset + # ./drivers/dma/idxd/dpdk_idxd_cfg.py --reset You can check it by 'ls /dev/dsa' - numDevices: number of devices, where 0<=numDevices<=7, corresponding to 0000:6a:01.0 - 0000:f6:01.0 - numWq: Number of workqueues per DSA endpoint, where 1<=numWq<=8 - - For example, bind 2 DMA devices to idxd driver and configure WQ: + For example, bind 2 DSA devices to idxd driver and configure WQ: # ./usertools/dpdk-devbind.py -b idxd 6a:01.0 6f:01.0 # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 1 0 # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 4 1 Check WQ by 'ls /dev/dsa' and can find "wq0.0 wq1.0 wq1.1 wq1.2 wq1.3" Test Case 1: VM2VM split ring non-mergeable path and multi-queues payload check with dsa dpdk driver ------------------------------------------------------------------------------------------------------- +---------------------------------------------------------------------------------------------------- This case uses testpmd to test the payload is valid after packets forwarding in vhost-user/virtio-user split ring non-mergeable path and multi-queues when vhost uses the asynchronous operations with dsa dpdk driver. Both iova as VA and PA mode test. -1. bind 2 dsa device to vfio-pci like common step 1:: +1. Bind 2 DSA devices to vfio-pci like common step 1:: # ./usertools/dpdk-devbind.py -b vfio-pci e7:01.0 ec:01.0 @@ -180,11 +184,11 @@ and multi-queues when vhost uses the asynchronous operations with dsa dpdk drive 11. Rerun step 6.
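The `/dev/dsa` names quoted in common step 2 follow directly from the `dpdk_idxd_cfg.py -q <wq_num> <dsa_idx>` arguments: 1 WQ on device 0 and 4 WQs on device 1 yield `wq0.0 wq1.0 wq1.1 wq1.2 wq1.3`. A small sketch that derives the expected names (the `expected_wqs` helper is ours, not part of the DTS tooling):

```shell
# expected_wqs <dsa_idx> <wq_num>: print the wq<dsa_idx>.<n> names that
# "dpdk_idxd_cfg.py -q <wq_num> <dsa_idx>" should create under /dev/dsa.
expected_wqs() {
    dsa_idx=$1
    wq_num=$2
    i=0
    while [ "$i" -lt "$wq_num" ]; do
        printf 'wq%s.%s\n' "$dsa_idx" "$i"
        i=$((i + 1))
    done
}

expected_wqs 0 1   # wq0.0
expected_wqs 1 4   # wq1.0 wq1.1 wq1.2 wq1.3
```

Comparing this output against `ls /dev/dsa` is a quick way to confirm the WQ configuration took effect before launching testpmd.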
Test Case 2: VM2VM split ring inorder non-mergeable path and multi-queues payload check with dsa dpdk driver ---------------------------------------------------------------------------------------------------------------- +------------------------------------------------------------------------------------------------------------ This case uses testpmd to test the payload is valid after packets forwarding in vhost-user/virtio-user split ring inorder non-mergeable path and multi-queues when vhost uses the asynchronous operations with dsa dpdk driver. Both iova as VA and PA mode test. -1. bind 3 dsa device to vfio-pci like common step 1:: +1. Bind 3 DSA devices to vfio-pci like common step 1:: # ./usertools/dpdk-devbind.py -b vfio-pci e7:01.0 ec:01.0 f1:01.0 @@ -256,11 +260,11 @@ non-mergeable path and multi-queues when vhost uses the asynchronous operations 11. Rerun step 6. Test Case 3: VM2VM split ring inorder mergeable path and multi-queues test non-indirect descriptor and payload check with dsa dpdk driver -------------------------------------------------------------------------------------------------------------------------------------------- +----------------------------------------------------------------------------------------------------------------------------------------- This case uses testpmd to test the payload is valid and non-indirect descriptor after packets forwarding in vhost-user/virtio-user split ring inorder mergeable path and multi-queues when vhost uses the asynchronous operations with dsa dpdk driver. Both iova as VA and PA mode test. -1. bind 4 dsa device to vfio-pci like common step 1:: +1. Bind 4 DSA devices to vfio-pci like common step 1:: # ./usertools/dpdk-devbind.py -b vfio-pci e7:01.0 ec:01.0 f1:01.0 f6:01.0 @@ -311,11 +315,11 @@ still need one ring put header. So check 504 packets and 48128 bytes received by 8. Rerun step 3-6.
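The totals asserted in Test Case 3 above can be sanity-checked arithmetically: taking "8K length" as the 8000-byte 2000,2000,2000,2000 chain, 502 packets of 64 bytes plus 2 packets of 8000 bytes give exactly the quoted 504 packets and 48128 bytes:

```shell
# Expected pdump-virtio-rx.pcap totals for Test Case 3:
# 502 packets of 64 bytes plus 2 packets of 8000 bytes.
echo $((502 + 2))              # 504 packets
echo $((502 * 64 + 2 * 8000))  # 48128 bytes
```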
Test Case 4: VM2VM split ring mergeable path and multi-queues test indirect descriptor and payload check with dsa dpdk driver ------------------------------------------------------------------------------------------------------------------------------- +----------------------------------------------------------------------------------------------------------------------------- This case uses testpmd to test the payload is valid and indirect descriptor after packets forwarding in vhost-user/virtio-user split ring mergeable path and multi-queues when vhost uses the asynchronous operations with dsa dpdk driver. Both iova as VA and PA mode test. -1. bind 4 dsa device to vfio-pci like common step 1:: +1. Bind 4 DSA devices to vfio-pci like common step 1:: # ./usertools/dpdk-devbind.py -b vfio-pci e7:01.0 ec:01.0 f1:01.0 f6:01.0 @@ -366,11 +370,11 @@ So check 512 packets and 112128 bytes received by virtio-user1 and 502 packets w 8. Rerun step 3-6. Test Case 5: VM2VM split ring vectorized path and multi-queues payload check with vhost async operation and dsa dpdk driver ------------------------------------------------------------------------------------------------------------------------------- +--------------------------------------------------------------------------------------------------------------------------- This case uses testpmd to test the payload is valid after packets forwarding in vhost-user/virtio-user split ring vectorized path and multi-queues when vhost uses the asynchronous operations with dsa dpdk driver. Both iova as VA and PA mode test. -1. bind 3 dsa ports to vfio-pci:: +1. Bind 3 DSA ports to vfio-pci:: ls /dev/dsa #check wq configure, reset if exist # ./usertools/dpdk-devbind.py -u e7:01.0 ec:01.0 f1:01.0 @@ -444,11 +448,11 @@ and multi-queues when vhost uses the asynchronous operations with dsa dpdk drive 11. Rerun step 6.
Test Case 6: VM2VM packed ring non-mergeable path and multi-queues payload check with dsa dpdk driver ------------------------------------------------------------------------------------------------------- +----------------------------------------------------------------------------------------------------- This case uses testpmd to test the payload is valid after packets forwarding in vhost-user/virtio-user packed ring non-mergeable path and multi-queues when vhost uses the asynchronous operations with dsa dpdk driver. Both iova as VA and PA mode test. -1. bind 3 dsa device to vfio-pci like common step 1:: +1. Bind 3 DSA devices to vfio-pci like common step 1:: # ./usertools/dpdk-devbind.py -b vfio-pci e7:01.0 ec:01.0 f1:01.0 @@ -520,11 +524,11 @@ non-mergeable path and multi-queues when vhost uses the asynchronous operations 11. Rerun step 6. Test Case 7: VM2VM packed ring inorder non-mergeable path and multi-queues payload check with dsa dpdk driver ---------------------------------------------------------------------------------------------------------------- +------------------------------------------------------------------------------------------------------------- This case uses testpmd to test the payload is valid after packets forwarding in vhost-user/virtio-user packed ring inorder non-mergeable path and multi-queues when vhost uses the asynchronous operations with dsa dpdk driver. Both iova as VA and PA mode test. -1. bind 4 dsa device to vfio-pci like common step 1:: +1. Bind 4 DSA devices to vfio-pci like common step 1:: # ./usertools/dpdk-devbind.py -b vfio-pci e7:01.0 f1:01.0 f6:01.0 @@ -596,11 +600,11 @@ non-mergeable path and multi-queues when vhost uses the asynchronous operations 11. Rerun step 6.
Test Case 8: VM2VM packed ring mergeable path and multi-queues payload check with dsa dpdk driver --------------------------------------------------------------------------------------------------- +------------------------------------------------------------------------------------------------- This case uses testpmd to test the payload is valid after packets forwarding in vhost-user/virtio-user packed ring mergeable path and multi-queues when vhost uses the asynchronous operations with dsa dpdk driver. Both iova as VA and PA mode test. -1. bind 2 dsa device to vfio-pci like common step 1:: +1. Bind 2 DSA devices to vfio-pci like common step 1:: # ./usertools/dpdk-devbind.py -b vfio-pci e7:01.0 ec:01.0 @@ -669,11 +673,11 @@ mergeable path and multi-queues when vhost uses the asynchronous operations with 11. Rerun step 6. Test Case 9: VM2VM packed ring inorder mergeable path and multi-queues payload check with dsa dpdk driver ------------------------------------------------------------------------------------------------------------ +--------------------------------------------------------------------------------------------------------- This case uses testpmd to test the payload is valid after packets forwarding in vhost-user/virtio-user packed ring inorder mergeable path and multi-queues when vhost uses the asynchronous operations with dsa dpdk driver. Both iova as VA and PA mode test. -1. bind 4 dsa device to vfio-pci like common step 1:: +1. Bind 4 DSA devices to vfio-pci like common step 1:: ls /dev/dsa #check wq configure, reset if exist # ./usertools/dpdk-devbind.py -u e7:01.0 ec:01.0 f1:01.0 f6:01.0 @@ -743,12 +747,12 @@ mergeable path and multi-queues when vhost uses the asynchronous operations with 11. Rerun step 6.
Test Case 10: VM2VM packed ring vectorized-tx path and multi-queues test indirect descriptor and payload check with dsa dpdk driver ------------------------------------------------------------------------------------------------------------------------------------- +----------------------------------------------------------------------------------------------------------------------------------- This case uses testpmd to test the payload is valid and indirect descriptor after packets forwarding in vhost-user/virtio-user packed ring vectorized-tx path and multi-queues when vhost uses the asynchronous operations with dsa dpdk driver. Both iova as VA and PA mode test. -1. bind 2 dsa device to vfio-pci like common step 1:: +1. Bind 2 DSA devices to vfio-pci like common step 1:: # ./usertools/dpdk-devbind.py -b vfio-pci e7:01.0 ec:01.0 @@ -799,11 +803,11 @@ So check 512 packets and 112128 bytes received by virtio-user1 and 502 packets w 8. Rerun step 3-6. Test Case 11: VM2VM split ring non-mergeable path and multi-queues payload check with dsa kernel driver ---------------------------------------------------------------------------------------------------------- +------------------------------------------------------------------------------------------------------- This case uses testpmd to test the payload is valid after packets forwarding in vhost-user/virtio-user split ring non-mergeable path and multi-queues when vhost uses the asynchronous operations with dsa kernel driver. -1. bind 1 dsa device to idxd like common step 2:: +1. Bind 1 DSA device to idxd like common step 2:: ls /dev/dsa #check wq configure, reset if exist # ./drivers/dma/idxd/dpdk_idxd_cfg.py --reset xx @@ -880,11 +884,11 @@ non-mergeable path and multi-queues when vhost uses the asynchronous operations 11. Rerun step 6.
Test Case 12: VM2VM split ring inorder non-mergeable path and multi-queues payload check with dsa kernel driver ----------------------------------------------------------------------------------------------------------------- +--------------------------------------------------------------------------------------------------------------- This case uses testpmd to test the payload is valid after packets forwarding in vhost-user/virtio-user split ring inorder non-mergeable path and multi-queues when vhost uses the asynchronous operations with dsa kernel driver. -1. bind 2 dsa device to idxd like common step 2:: +1. Bind 2 DSA devices to idxd like common step 2:: ls /dev/dsa #check wq configure, reset if exist # ./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0 @@ -961,11 +965,11 @@ non-mergeable path and multi-queues when vhost uses the asynchronous operations 11. Rerun step 6. Test Case 13: VM2VM split ring inorder mergeable path and multi-queues test non-indirect descriptor and payload check with dsa kernel driver ---------------------------------------------------------------------------------------------------------------------------------------------- +-------------------------------------------------------------------------------------------------------------------------------------------- This case uses testpmd to test the payload is valid and non-indirect descriptor after packets forwarding in vhost-user/virtio-user split ring inorder mergeable path and multi-queues when vhost uses the asynchronous operations with dsa kernel driver. -1. bind 1 dsa device to idxd like common step 2:: +1. Bind 1 DSA device to idxd like common step 2:: ls /dev/dsa #check wq configure, reset if exist # ./usertools/dpdk-devbind.py -u 6a:01.0 @@ -1011,11 +1015,11 @@ split ring inorder mergeable path and multi-queues when vhost uses the asynchron still need one ring put header.
So check 504 packets and 48128 bytes received by virtio-user1 and 502 packets with 64 length and 2 packets with 8K length in pdump-virtio-rx.pcap. Test Case 14: VM2VM split ring mergeable path and multi-queues test indirect descriptor and payload check with dsa kernel driver ---------------------------------------------------------------------------------------------------------------------------------- +-------------------------------------------------------------------------------------------------------------------------------- This case uses testpmd to test the payload is valid and indirect descriptor after packets forwarding in vhost-user/virtio-user split ring mergeable path and multi-queues when vhost uses the asynchronous operations with dsa kernel driver. -1. bind 2 dsa device to idxd like common step 2:: +1. Bind 2 DSA devices to idxd like common step 2:: ls /dev/dsa #check wq configure, reset if exist # ./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0 @@ -1061,18 +1065,18 @@ split ring mergeable path and multi-queues when vhost uses the asynchronous oper So check 512 packets and 112128 bytes received by virtio-user1 and 502 packets with 64 length and 10 packets with 8K length in pdump-virtio-rx.pcap. Test Case 15: VM2VM split ring vectorized path and multi-queues payload check with vhost async operation and dsa kernel driver -------------------------------------------------------------------------------------------------------------------------------- +------------------------------------------------------------------------------------------------------------------------------ This case uses testpmd to test the payload is valid after packets forwarding in vhost-user/virtio-user split ring vectorized path and multi-queues when vhost uses the asynchronous operations with dsa kernel driver. -1. bind 2 dsa ports to idxd:: +1.
Bind 2 DSA ports to idxd:: - ls /dev/dsa #check wq configure, reset if exist - # ./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0 - # ./usertools/dpdk-devbind.py -b idxd 6a:01.0 6f:01.0 - # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 0 - # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 1 - ls /dev/dsa #check wq configure success + ls /dev/dsa #check wq configure, reset if exist + # ./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0 + # ./usertools/dpdk-devbind.py -b idxd 6a:01.0 6f:01.0 + # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 0 + # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 1 + ls /dev/dsa #check wq configure success 2. Launch vhost by below command:: @@ -1142,11 +1146,11 @@ vectorized path and multi-queues when vhost uses the asynchronous operations wit 11. Rerun step 6. Test Case 16: VM2VM packed ring non-mergeable path and multi-queues payload check with dsa kernel driver ---------------------------------------------------------------------------------------------------------- +-------------------------------------------------------------------------------------------------------- This case uses testpmd to test the payload is valid after packets forwarding in vhost-user/virtio-user packed ring non-mergeable path and multi-queues when vhost uses the asynchronous operations with dsa kernel driver. -1. bind 2 dsa device to idxd like common step 2:: +1. Bind 2 DSA devices to idxd like common step 2:: ls /dev/dsa #check wq configure, reset if exist # ./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0 @@ -1224,11 +1228,11 @@ non-mergeable path and multi-queues when vhost uses the asynchronous operations 11. Rerun step 6.
Test Case 17: VM2VM packed ring inorder non-mergeable path and multi-queues payload check with dsa kernel driver ------------------------------------------------------------------------------------------------------------------- +---------------------------------------------------------------------------------------------------------------- This case uses testpmd to test the payload is valid after packets forwarding in vhost-user/virtio-user packed ring inorder non-mergeable path and multi-queues when vhost uses the asynchronous operations with dsa kernel driver. -1. bind 2 dsa device to idxd like common step 2:: +1. Bind 2 DSA devices to idxd like common step 2:: ls /dev/dsa #check wq configure, reset if exist # ./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0 @@ -1304,11 +1308,11 @@ non-mergeable path and multi-queues when vhost uses the asynchronous operations 11. Rerun step 6. Test Case 18: VM2VM packed ring mergeable path and multi-queues payload check with dsa kernel driver ------------------------------------------------------------------------------------------------------ +---------------------------------------------------------------------------------------------------- This case uses testpmd to test the payload is valid after packets forwarding in vhost-user/virtio-user packed ring mergeable path and multi-queues when vhost uses the asynchronous operations with dsa kernel driver. -1. bind 2 dsa device to idxd:: +1. Bind 2 DSA devices to idxd:: ls /dev/dsa #check wq configure, reset if exist # ./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0 @@ -1381,11 +1385,11 @@ mergeable path and multi-queues when vhost uses the asynchronous operations with 11. Rerun step 6.
Test Case 19: VM2VM packed ring inorder mergeable path and multi-queues payload check with dsa kernel driver -------------------------------------------------------------------------------------------------------------- +------------------------------------------------------------------------------------------------------------ This case uses testpmd to test the payload is valid after packets forwarding in vhost-user/virtio-user packed ring inorder mergeable path and multi-queues when vhost uses the asynchronous operations with dsa kernel driver. -1. bind 2 dsa device to idxd like common step 2:: +1. Bind 2 DSA devices to idxd like common step 2:: ls /dev/dsa #check wq configure, reset if exist # ./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0 @@ -1458,11 +1462,11 @@ mergeable path and multi-queues when vhost uses the asynchronous operations with 11. Rerun step 6. Test Case 20: VM2VM packed ring vectorized-tx path and multi-queues test indirect descriptor and payload check with dsa kernel driver --------------------------------------------------------------------------------------------------------------------------------------- +------------------------------------------------------------------------------------------------------------------------------------- This case uses testpmd to test the payload is valid and indirect descriptor after packets forwarding in vhost-user/virtio-user packed ring vectorized-tx path and multi-queues when vhost uses the asynchronous operations with dsa kernel driver. -1. Bind 2 dsa device to idxd like common step 2:: +1. Bind 2 DSA devices to idxd like common step 2:: ls /dev/dsa #check wq configure, reset if exist # ./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0 @@ -1511,11 +1515,11 @@ packed ring vectorized-tx path and multi-queues when vhost uses the asynchronous So check 512 packets and 112128 bytes received by virtio-user1 and 502 packets with 64 length and 10 packets with 8K length in pdump-virtio-rx.pcap.
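The byte count quoted at the end of Test Case 20 is consistent with the packet mix: 502 packets of 64 bytes plus 10 packets of 8000 bytes ("8K length") total 512 packets and 112128 bytes:

```shell
# Expected pdump-virtio-rx.pcap totals for Test Case 20:
# 502 packets of 64 bytes plus 10 packets of 8000 bytes.
echo $((502 + 10))              # 512 packets
echo $((502 * 64 + 10 * 8000))  # 112128 bytes
```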
Test Case 21: VM2VM split ring mergeable path and multi-queues test indirect descriptor with dsa dpdk and kernel driver -------------------------------------------------------------------------------------------------------------------------- +----------------------------------------------------------------------------------------------------------------------- This case uses testpmd to test the payload is valid and indirect descriptor after packets forwarding in vhost-user/virtio-user split ring mergeable path and multi-queues when vhost uses the asynchronous operations with dsa dpdk and kernel driver. -1. bind 2 dsa ports to idxd and 2 dsa ports to vfio-pci:: +1. Bind 2 DSA ports to idxd and 2 DSA ports to vfio-pci:: ls /dev/dsa #check wq configure, reset if exist # ./usertools/dpdk-devbind.py -u e7:01.0 ec:01.0 f1:01.0 f6:01.0 @@ -1563,11 +1567,11 @@ split ring mergeable path and multi-queues when vhost uses the asynchronous oper So check 512 packets and 112128 bytes received by virtio-user1 and 502 packets with 64 length and 10 packets with 8K length in pdump-virtio-rx.pcap. Test Case 22: VM2VM packed ring inorder mergeable path and multi-queues payload check with dsa dpdk and kernel driver ------------------------------------------------------------------------------------------------------------------------ +--------------------------------------------------------------------------------------------------------------------- This case uses testpmd to test the payload is valid after packets forwarding in vhost-user/virtio-user packed ring inorder mergeable path and multi-queues when vhost uses the asynchronous operations with dsa dpdk and kernel driver. -1. bind 2 dsa device to vfio-pci and 2 dsa port to idxd like common step 1-2:: +1. Bind 2 DSA devices to vfio-pci and 2 DSA ports to idxd like common step 1-2:: ls /dev/dsa #check wq configure, reset if exist # ./usertools/dpdk-devbind.py -u e7:01.0 ec:01.0 f1:01.0 f6:01.0