From patchwork Wed Nov 30 06:33:18 2022 X-Patchwork-Submitter: "Ling, WeiX" X-Patchwork-Id: 120336 From: Wei Ling To: dts@dpdk.org Cc: Wei Ling 
Subject: [dts][PATCH V2 1/2] test_plans/vm2vm_virtio_user_dsa_test_plan: modify testplan description Date: Wed, 30 Nov 2022 14:33:18 +0800 Message-Id: <20221130063318.1165259-1-weix.ling@intel.com> Modify the description in the testplan. Signed-off-by: Wei Ling --- .../vm2vm_virtio_user_dsa_test_plan.rst | 166 +++++++++--------- 1 file changed, 85 insertions(+), 81 deletions(-) diff --git a/test_plans/vm2vm_virtio_user_dsa_test_plan.rst b/test_plans/vm2vm_virtio_user_dsa_test_plan.rst index 369fe076..75f94a4e 100644 --- a/test_plans/vm2vm_virtio_user_dsa_test_plan.rst +++ b/test_plans/vm2vm_virtio_user_dsa_test_plan.rst @@ -7,26 +7,34 @@ VM2VM vhost-user/virtio-user with DSA driver test plan Description =========== -Vhost asynchronous data path leverages DMA devices to offload memory copies from the CPU and it is implemented in an asynchronous way. -In addition, vhost supports M:N mapping between vrings and DMA virtual channels. Specifically, one vring can use multiple different DMA -channels and one DMA channel can be shared by multiple vrings at the same time. From DPDK22.07, Vhost enqueue and dequeue operation with -DSA driver is supported in both split and packed ring. +Vhost asynchronous data path leverages DMA devices to offload memory copies from the CPU and it is implemented in an +asynchronous way. DPDK Vhost with DSA acceleration supports M:N mapping between virtqueues and DSA WQs. Specifically, +one DSA WQ can be used by multiple virtqueues and one virtqueue can offload copies to multiple DSA WQs at the same time. +Vhost async enqueue and async dequeue operations are supported in both split and packed rings. 
This document provides the test plan for testing the following features when Vhost-user using asynchronous data path with DSA driver (kernel IDXD driver and DPDK vfio-pci driver) in VM2VM virtio-user topology. -1. Split virtqueue vm2vm in-order mergeable, in-order non-mergeable, mergeable, non-mergeable, vector_rx path test and payload check. -2. Packed virtqueue vm2vm in-order mergeable, in-order non-mergeable, mergeable, non-mergeable, vectorized path (ringsize not powerof 2) test and payload check. -3. Test indirect descriptor feature. For example, the split ring mergeable inorder path use non-indirect descriptor, the 2000,2000,2000,2000 chain packets will need 4 consequent ring, still need one ring put header. -the split ring mergeable path use indirect descriptor, the 2000,2000,2000,2000 chain packets will only occupy one ring. +1. Split virtqueue vm2vm in-order mergeable, in-order non-mergeable, mergeable, non-mergeable, vector_rx path test and payload check. +2. Packed virtqueue vm2vm in-order mergeable, in-order non-mergeable, mergeable, non-mergeable, vectorized path (ring size not power of 2) test and payload check. +3. Test indirect descriptor feature. For example, the split ring mergeable inorder path uses non-indirect descriptors, +so the 2000,2000,2000,2000 chain packets need 4 consequent rings plus one ring for the header; the split ring mergeable path +uses indirect descriptors, so the 2000,2000,2000,2000 chain packets occupy only one ring. -IOMMU impact: -If iommu off, idxd can work with iova=pa -If iommu on, kernel dsa driver only can work with iova=va by program IOMMU, can't use iova=pa(fwd not work due to pkts payload wrong). +.. note:: + + 1. When DMA devices are bound to the vfio driver, VA mode is the default and recommended. For PA mode, page by page mapping may + exceed the IOMMU's max capability, so it is better to use 1G guest hugepages. + 2. A DPDK local patch for the vhost PMD is needed when testing the Vhost asynchronous data path with testpmd. 
In this patch, + we enable the asynchronous data path for the vhost PMD. The asynchronous data path is enabled per tx/rx queue, and users need to specify + the DMA device used by the tx/rx queue. Each tx/rx queue can use only one DMA device (this is limited by the + implementation of the vhost PMD), but one DMA device can be shared among multiple tx/rx queues of different vhost PMD ports. -Note: -1.When DMA devices are bound to vfio driver, VA mode is the default and recommended. For PA mode, page by page mapping may -exceed IOMMU's max capability, better to use 1G guest hugepage. -2.DPDK local patch that about vhost pmd is needed when testing Vhost asynchronous data path with testpmd. +Two PMD parameters are added: +- dmas: specify the DMA device used by a tx/rx queue. (Default: no queues enable the asynchronous data path) +- dma-ring-size: DMA ring size. (Default: 4096) + +Here is an example: +--vdev 'eth_vhost0,iface=./s0,dmas=[txq0@0000:00.01.0;rxq0@0000:00.01.1],dma-ring-size=2048' Prerequisites ============= @@ -45,14 +53,10 @@ General set up CC=gcc meson --werror -Denable_kmods=True -Dlibdir=lib -Dexamples=all --default-library=static x86_64-native-linuxapp-gcc ninja -C x86_64-native-linuxapp-gcc -j 110 -2. Get the PCI device ID and DSA device ID of DUT, for example, 0000:4f:00.1 is PCI device ID, 0000:6a:01.0 - 0000:f6:01.0 are DSA device IDs:: +2. Get the PCI devices of the DUT, for example, 0000:6a:01.0 - 0000:f6:01.0 are DSA devices:: # ./usertools/dpdk-devbind.py -s - Network devices using kernel driver - =================================== - 0000:4f:00.1 'Ethernet Controller E810-C for QSFP 1592' drv=ice unused=vfio-pci - DMA devices using kernel driver =============================== 0000:6a:01.0 'Device 0b25' drv=idxd unused=vfio-pci @@ -71,44 +75,44 @@ Common steps ------------ 1. 
Bind DSA devices to DPDK vfio-pci driver:: - # ./usertools/dpdk-devbind.py -b vfio-pci + # ./usertools/dpdk-devbind.py -b vfio-pci - For example, bind 2 DMA devices to vfio-pci driver: + For example, bind 2 DSA devices to vfio-pci driver: # ./usertools/dpdk-devbind.py -b vfio-pci 0000:e7:01.0 0000:ec:01.0 .. note:: - One DPDK DSA device can create 8 WQ at most. Below is an example, where DPDK DSA device will create one and - eight WQ for DSA deivce 0000:e7:01.0 and 0000:ec:01.0. The value of “max_queues” is 1~8: + One DPDK DSA device can create at most 8 WQs. Below is an example, where the DPDK DSA device will create one WQ for device + 0000:e7:01.0 and eight WQs for 0000:ec:01.0. The value range of “max_queues” is 1~8 and the default value is 8: # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 4 -n 4 -a 0000:e7:01.0,max_queues=1 -- -i # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 4 -n 4 -a 0000:ec:01.0,max_queues=8 -- -i 2. Bind DSA devices to kernel idxd driver, and configure Work Queue (WQ):: - # ./usertools/dpdk-devbind.py -b idxd - # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q + # ./usertools/dpdk-devbind.py -b idxd + # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q .. 
note:: + dsa_idx: Index of DSA devices, where 0<=dsa_idx<=7, corresponding to 0000:6a:01.0 - 0000:f6:01.0 + wq_num: Number of work queues configured per DSA instance, where 1<=num_wq<=8 + It is better to reset the WQ before operating DSA devices bound to the idxd driver: - # ./drivers/dma/idxd/dpdk_idxd_cfg.py --reset + # ./drivers/dma/idxd/dpdk_idxd_cfg.py --reset You can check it by 'ls /dev/dsa' - numDevices: number of devices, where 0<=numDevices<=7, corresponding to 0000:6a:01.0 - 0000:f6:01.0 - numWq: Number of workqueues per DSA endpoint, where 1<=numWq<=8 - - For example, bind 2 DMA devices to idxd driver and configure WQ: + For example, bind 2 DSA devices to idxd driver and configure WQ: # ./usertools/dpdk-devbind.py -b idxd 6a:01.0 6f:01.0 # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 1 0 # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 4 1 Check WQ by 'ls /dev/dsa' and can find "wq0.0 wq1.0 wq1.1 wq1.2 wq1.3" Test Case 1: VM2VM split ring non-mergeable path and multi-queues payload check with dsa dpdk driver ------------------------------------------------------------------------------------------------------- +---------------------------------------------------------------------------------------------------- This case uses testpmd to test the payload is valid after packets forwarding in vhost-user/virtio-user split ring non-mergeable path and multi-queues when vhost uses the asynchronous operations with dsa dpdk driver. Both iova as VA and PA mode test. -1. bind 2 dsa device to vfio-pci like common step 1:: +1. Bind 2 DSA devices to vfio-pci like common step 1:: # ./usertools/dpdk-devbind.py -b vfio-pci e7:01.0 ec:01.0 @@ -180,11 +184,11 @@ and multi-queues when vhost uses the asynchronous operations with dsa dpdk drive 11. Rerun step 6. 
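The vhost launch in the test case above relies on the `dmas` parameter introduced in the Description, which maps each tx/rx queue to a DSA WQ. The following is a small sketch (a hypothetical helper, not part of the test suite) of how such a `--vdev` argument is assembled:

```python
def build_vhost_vdev(iface, dma_map, dma_ring_size=None):
    """Assemble an eth_vhost --vdev argument string.

    dma_map: list of (queue, dsa_wq_address) pairs, e.g. ("txq0", "0000:00.01.0").
    dma_ring_size: optional DMA ring size (the PMD default is 4096).
    """
    # Queue-to-WQ entries are ';'-separated inside the dmas=[...] list.
    dmas = ";".join(f"{q}@{addr}" for q, addr in dma_map)
    vdev = f"eth_vhost0,iface={iface},dmas=[{dmas}]"
    if dma_ring_size is not None:
        vdev += f",dma-ring-size={dma_ring_size}"
    return f"--vdev '{vdev}'"
```

With the queues and WQ addresses from the Description's example, this reproduces the documented `--vdev 'eth_vhost0,iface=./s0,dmas=[txq0@0000:00.01.0;rxq0@0000:00.01.1],dma-ring-size=2048'` string.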
Test Case 2: VM2VM split ring inorder non-mergeable path and multi-queues payload check with dsa dpdk driver ---------------------------------------------------------------------------------------------------------------- +------------------------------------------------------------------------------------------------------------ This case uses testpmd to test the payload is valid after packets forwarding in vhost-user/virtio-user split ring inorder non-mergeable path and multi-queues when vhost uses the asynchronous operations with dsa dpdk driver. Both iova as VA and PA mode test. -1. bind 3 dsa device to vfio-pci like common step 1:: +1. Bind 3 DSA device to vfio-pci like common step 1:: # ./usertools/dpdk-devbind.py -b vfio-pci e7:01.0 ec:01.0 f1:01.0 @@ -256,11 +260,11 @@ non-mergeable path and multi-queues when vhost uses the asynchronous operations 11. Rerun step 6. Test Case 3: VM2VM split ring inorder mergeable path and multi-queues test non-indirect descriptor and payload check with dsa dpdk driver -------------------------------------------------------------------------------------------------------------------------------------------- +----------------------------------------------------------------------------------------------------------------------------------------- This case uses testpmd to test the payload is valid and non-indirect descriptor after packets forwarding in vhost-user/virtio-user split ring inorder mergeable path and multi-queues when vhost uses the asynchronous operations with dsa dpdk driver. Both iova as VA and PA mode test. -1. bind 4 dsa device to vfio-pci like common step 1:: +1. Bind 4 DSA device to vfio-pci like common step 1:: # ./usertools/dpdk-devbind.py -b vfio-pci e7:01.0 ec:01.0 f1:01.0 f6:01.0 @@ -311,11 +315,11 @@ still need one ring put header. So check 504 packets and 48128 bytes received by 8. Rerun step 3-6. 
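The non-indirect descriptor check in Test Case 3 above expects 504 packets and 48128 bytes at virtio-user1. That total can be cross-checked by arithmetic, assuming an "8K" packet here means a 2000,2000,2000,2000 byte chain, i.e. 8000 bytes of payload:

```python
# Expected capture for the split ring mergeable inorder (non-indirect) case:
# 502 packets of 64 bytes plus 2 chained "8K" (4 x 2000 = 8000 byte) packets.
pkt_64, pkt_8k = 502, 2
total_packets = pkt_64 + pkt_8k            # 504 packets in pdump-virtio-rx.pcap
total_bytes = pkt_64 * 64 + pkt_8k * 8000  # 48128 bytes
print(total_packets, total_bytes)          # 504 48128
```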
Test Case 4: VM2VM split ring mergeable path and multi-queues test indirect descriptor and payload check with dsa dpdk driver ------------------------------------------------------------------------------------------------------------------------------- +----------------------------------------------------------------------------------------------------------------------------- This case uses testpmd to test the payload is valid and indirect descriptor after packets forwarding in vhost-user/virtio-user split ring mergeable path and multi-queues when vhost uses the asynchronous operations with dsa dpdk driver. Both iova as VA and PA mode test. -1. bind 4 dsa device to vfio-pci like common step 1:: +1. Bind 4 DSA device to vfio-pci like common step 1:: # ./usertools/dpdk-devbind.py -b vfio-pci e7:01.0 ec:01.0 f1:01.0 f6:01.0 @@ -366,11 +370,11 @@ So check 512 packets and 112128 bytes received by virtio-user1 and 502 packets w 8. Rerun step 3-6. Test Case 5: VM2VM split ring vectorized path and multi-queues payload check with vhost async operation and dsa dpdk driver ------------------------------------------------------------------------------------------------------------------------------- +--------------------------------------------------------------------------------------------------------------------------- This case uses testpmd to test the payload is valid after packets forwarding in vhost-user/virtio-user split ring vectorized path and multi-queues when vhost uses the asynchronous operations with dsa dpdk driver. Both iova as VA and PA mode test. -1. bind 3 dsa ports to vfio-pci:: +1. Bind 3 dsa ports to vfio-pci:: ls /dev/dsa #check wq configure, reset if exist # ./usertools/dpdk-devbind.py -u e7:01.0 ec:01.0 f1:01.0 @@ -444,11 +448,11 @@ and multi-queues when vhost uses the asynchronous operations with dsa dpdk drive 11. Rerun step 6. 
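Test Case 4 above makes the corresponding indirect-descriptor check: 512 packets and 112128 bytes. Under the same assumption that an "8K" packet is an 8000-byte chain, the arithmetic is:

```python
# Expected capture for the split ring mergeable (indirect) case:
# with indirect descriptors all 10 chained 8K packets fit, so 502 + 10 packets arrive.
pkt_64, pkt_8k = 502, 10
total_packets = pkt_64 + pkt_8k            # 512 packets in pdump-virtio-rx.pcap
total_bytes = pkt_64 * 64 + pkt_8k * 8000  # 112128 bytes
print(total_packets, total_bytes)          # 512 112128
```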
Test Case 6: VM2VM packed ring non-mergeable path and multi-queues payload check with dsa dpdk driver ------------------------------------------------------------------------------------------------------- +----------------------------------------------------------------------------------------------------- This case uses testpmd to test the payload is valid after packets forwarding in vhost-user/virtio-user packed ring non-mergeable path and multi-queues when vhost uses the asynchronous operations with dsa dpdk driver. Both iova as VA and PA mode test. -1. bind 3 dsa device to vfio-pci like common step 1:: +1. Bind 3 DSA device to vfio-pci like common step 1:: # ./usertools/dpdk-devbind.py -b vfio-pci e7:01.0 ec:01.0 f1:01.0 @@ -520,11 +524,11 @@ non-mergeable path and multi-queues when vhost uses the asynchronous operations 11. Rerun step 6. Test Case 7: VM2VM packed ring inorder non-mergeable path and multi-queues payload check with dsa dpdk driver ---------------------------------------------------------------------------------------------------------------- +------------------------------------------------------------------------------------------------------------- This case uses testpmd to test the payload is valid after packets forwarding in vhost-user/virtio-user packed ring inorder non-mergeable path and multi-queues when vhost uses the asynchronous operations with dsa dpdk driver. Both iova as VA and PA mode test. -1. bind 4 dsa device to vfio-pci like common step 1:: +1. Bind 4 DSA device to vfio-pci like common step 1:: # ./usertools/dpdk-devbind.py -b vfio-pci e7:01.0 f1:01.0 f6:01.0 @@ -596,11 +600,11 @@ non-mergeable path and multi-queues when vhost uses the asynchronous operations 11. Rerun step 6. 
Test Case 8: VM2VM packed ring mergeable path and multi-queues payload check with dsa dpdk driver --------------------------------------------------------------------------------------------------- +------------------------------------------------------------------------------------------------- This case uses testpmd to test the payload is valid after packets forwarding in vhost-user/virtio-user packed ring mergeable path and multi-queues when vhost uses the asynchronous operations with dsa dpdk driver. Both iova as VA and PA mode test. -1. bind 2 dsa device to vfio-pci like common step 1:: +1. Bind 2 DSA device to vfio-pci like common step 1:: # ./usertools/dpdk-devbind.py -b vfio-pci e7:01.0 ec:01.0 @@ -669,11 +673,11 @@ mergeable path and multi-queues when vhost uses the asynchronous operations with 11. Rerun step 6. Test Case 9: VM2VM packed ring inorder mergeable path and multi-queues payload check with dsa dpdk driver ------------------------------------------------------------------------------------------------------------ +--------------------------------------------------------------------------------------------------------- This case uses testpmd to test the payload is valid after packets forwarding in vhost-user/virtio-user packed ring inorder mergeable path and multi-queues when vhost uses the asynchronous operations with dsa dpdk driver. Both iova as VA and PA mode test. -1. bind 4 dsa device to vfio-pci like common step 1:: +1. Bind 4 DSA device to vfio-pci like common step 1:: ls /dev/dsa #check wq configure, reset if exist # ./usertools/dpdk-devbind.py -u e7:01.0 ec:01.0 f1:01.0 f6:01.0 @@ -743,12 +747,12 @@ mergeable path and multi-queues when vhost uses the asynchronous operations with 11. Rerun step 6. 
Test Case 10: VM2VM packed ring vectorized-tx path and multi-queues test indirect descriptor and payload check with dsa dpdk driver ------------------------------------------------------------------------------------------------------------------------------------- +----------------------------------------------------------------------------------------------------------------------------------- This case uses testpmd to test the payload is valid and indirect descriptor after packets forwarding in vhost-user/virtio-user packed ring vectorized-tx path and multi-queues when vhost uses the asynchronous operations with dsa dpdk driver. Both iova as VA and PA mode test. -1. bind 2 dsa device to vfio-pci like common step 1:: +1. Bind 2 DSA device to vfio-pci like common step 1:: # ./usertools/dpdk-devbind.py -b vfio-pci e7:01.0 ec:01.0 @@ -799,11 +803,11 @@ So check 512 packets and 112128 bytes received by virtio-user1 and 502 packets w 8. Rerun step 3-6. Test Case 11: VM2VM split ring non-mergeable path and multi-queues payload check with dsa kernel driver ---------------------------------------------------------------------------------------------------------- +------------------------------------------------------------------------------------------------------- This case uses testpmd to test the payload is valid after packets forwarding in vhost-user/virtio-user split ring non-mergeable path and multi-queues when vhost uses the asynchronous operations with dsa kernel driver. -1. bind 1 dsa device to idxd like common step 2:: +1. Bind 1 DSA device to idxd like common step 2:: ls /dev/dsa #check wq configure, reset if exist # ./drivers/dma/idxd/dpdk_idxd_cfg.py --reset xx @@ -880,11 +884,11 @@ non-mergeable path and multi-queues when vhost uses the asynchronous operations 11. Rerun step 6. 
Test Case 12: VM2VM split ring inorder non-mergeable path and multi-queues payload check with dsa kernel driver ----------------------------------------------------------------------------------------------------------------- +--------------------------------------------------------------------------------------------------------------- This case uses testpmd to test the payload is valid after packets forwarding in vhost-user/virtio-user split ring inorder non-mergeable path and multi-queues when vhost uses the asynchronous operations with dsa kernel driver. -1. bind 2 dsa device to idxd like common step 2:: +1. Bind 2 DSA device to idxd like common step 2:: ls /dev/dsa #check wq configure, reset if exist # ./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0 @@ -961,11 +965,11 @@ non-mergeable path and multi-queues when vhost uses the asynchronous operations 11. Rerun step 6. Test Case 13: VM2VM split ring inorder mergeable path and multi-queues test non-indirect descriptor and payload check with dsa kernel driver ---------------------------------------------------------------------------------------------------------------------------------------------- +-------------------------------------------------------------------------------------------------------------------------------------------- This case uses testpmd to test the payload is valid and non-indirect descriptor after packets forwarding in vhost-user/virtio-user split ring inorder mergeable path and multi-queues when vhost uses the asynchronous operations with dsa kernel driver. -1. bind 1 dsa device to idxd like common step 2:: +1. Bind 1 DSA device to idxd like common step 2:: ls /dev/dsa #check wq configure, reset if exist # ./usertools/dpdk-devbind.py -u 6a:01.0 @@ -1011,11 +1015,11 @@ split ring inorder mergeable path and multi-queues when vhost uses the asynchron still need one ring put header. 
So check 504 packets and 48128 bytes received by virtio-user1 and 502 packets with 64 length and 2 packets with 8K length in pdump-virtio-rx.pcap. Test Case 14: VM2VM split ring mergeable path and multi-queues test indirect descriptor and payload check with dsa kernel driver ---------------------------------------------------------------------------------------------------------------------------------- +-------------------------------------------------------------------------------------------------------------------------------- This case uses testpmd to test the payload is valid and indirect descriptor after packets forwarding in vhost-user/virtio-user split ring mergeable path and multi-queues when vhost uses the asynchronous operations with dsa kernel driver. -1. bind 2 dsa device to idxd like common step 2:: +1. Bind 2 DSA device to idxd like common step 2:: ls /dev/dsa #check wq configure, reset if exist # ./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0 @@ -1061,18 +1065,18 @@ split ring mergeable path and multi-queues when vhost uses the asynchronous oper So check 512 packets and 112128 bytes received by virtio-user1 and 502 packets with 64 length and 10 packets with 8K length in pdump-virtio-rx.pcap. Test Case 15: VM2VM split ring vectorized path and multi-queues payload check with vhost async operation and dsa kernel driver -------------------------------------------------------------------------------------------------------------------------------- +------------------------------------------------------------------------------------------------------------------------------ This case uses testpmd to test the payload is valid after packets forwarding in vhost-user/virtio-user split ring vectorized path and multi-queues when vhost uses the asynchronous operations with dsa kernel driver. -1. bind 2 dsa ports to idxd:: +1. 
Bind 2 DSA ports to idxd:: - ls /dev/dsa #check wq configure, reset if exist - # ./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0 - # ./usertools/dpdk-devbind.py -b idxd 6a:01.0 6f:01.0 - # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 0 - # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 1 - ls /dev/dsa #check wq configure success + ls /dev/dsa #check wq configure, reset if exist + # ./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0 + # ./usertools/dpdk-devbind.py -b idxd 6a:01.0 6f:01.0 + # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 0 + # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 1 + ls /dev/dsa #check wq configure success 2. Launch vhost by below command:: @@ -1142,11 +1146,11 @@ vectorized path and multi-queues when vhost uses the asynchronous operations wit 11. Rerun step 6. Test Case 16: VM2VM packed ring non-mergeable path and multi-queues payload check with dsa kernel driver ---------------------------------------------------------------------------------------------------------- +-------------------------------------------------------------------------------------------------------- This case uses testpmd to test the payload is valid after packets forwarding in vhost-user/virtio-user packed ring non-mergeable path and multi-queues when vhost uses the asynchronous operations with dsa kernel driver. -1. bind 2 dsa device to idxd like common step 2:: +1. Bind 2 DSA devices to idxd like common step 2:: ls /dev/dsa #check wq configure, reset if exist # ./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0 @@ -1224,11 +1228,11 @@ non-mergeable path and multi-queues when vhost uses the asynchronous operations 11. Rerun step 6. 
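The idxd configuration steps above create kernel WQ device nodes under /dev/dsa named `wq<dsa_idx>.<n>`, one per configured work queue. A hypothetical helper (for illustration only) that predicts the expected node names from the `dpdk_idxd_cfg.py -q <wq_num> <dsa_idx>` invocations:

```python
def expected_wq_nodes(configs):
    """Predict /dev/dsa WQ node names.

    configs: list of (wq_num, dsa_idx) pairs, mirroring the arguments
    passed to 'dpdk_idxd_cfg.py -q wq_num dsa_idx'.
    """
    return [f"wq{idx}.{q}" for num, idx in configs for q in range(num)]
```

For the common-steps example (`-q 1 0` and `-q 4 1`) this yields exactly the "wq0.0 wq1.0 wq1.1 wq1.2 wq1.3" listing the test plan checks with `ls /dev/dsa`; for this case's `-q 8 0` and `-q 8 1`, sixteen nodes are expected.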
Test Case 17: VM2VM packed ring inorder non-mergeable path and multi-queues payload check with dsa kernel driver ------------------------------------------------------------------------------------------------------------------- +---------------------------------------------------------------------------------------------------------------- This case uses testpmd to test the payload is valid after packets forwarding in vhost-user/virtio-user packed ring inorder non-mergeable path and multi-queues when vhost uses the asynchronous operations with dsa kernel driver. -1. bind 2 dsa device to idxd like common step 2:: +1. Bind 2 DSA device to idxd like common step 2:: ls /dev/dsa #check wq configure, reset if exist # ./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0 @@ -1304,11 +1308,11 @@ non-mergeable path and multi-queues when vhost uses the asynchronous operations 11. Rerun step 6. Test Case 18: VM2VM packed ring mergeable path and multi-queues payload check with dsa kernel driver ------------------------------------------------------------------------------------------------------ +---------------------------------------------------------------------------------------------------- This case uses testpmd to test the payload is valid after packets forwarding in vhost-user/virtio-user packed ring mergeable path and multi-queues when vhost uses the asynchronous operations with dsa kernel driver. -1. bind 2 dsa device to idxd:: +1. Bind 2 DSA device to idxd:: ls /dev/dsa #check wq configure, reset if exist # ./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0 @@ -1381,11 +1385,11 @@ mergeable path and multi-queues when vhost uses the asynchronous operations with 11. Rerun step 6. 
Test Case 19: VM2VM packed ring inorder mergeable path and multi-queues payload check with dsa kernel driver -------------------------------------------------------------------------------------------------------------- +------------------------------------------------------------------------------------------------------------ This case uses testpmd to test the payload is valid after packets forwarding in vhost-user/virtio-user packed ring inorder mergeable path and multi-queues when vhost uses the asynchronous operations with dsa kernel driver. -1. bind 2 dsa device to idxd like common step 2:: +1. Bind 2 DSA device to idxd like common step 2:: ls /dev/dsa #check wq configure, reset if exist # ./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0 @@ -1458,11 +1462,11 @@ mergeable path and multi-queues when vhost uses the asynchronous operations with 11. Rerun step 6. Test Case 20: VM2VM packed ring vectorized-tx path and multi-queues test indirect descriptor and payload check with dsa kernel driver --------------------------------------------------------------------------------------------------------------------------------------- +------------------------------------------------------------------------------------------------------------------------------------- This case uses testpmd to test the payload is valid and indirect descriptor after packets forwarding in vhost-user/virtio-user packed ring vectorized-tx path and multi-queues when vhost uses the asynchronous operations with dsa kernel driver. -1. Bind 2 dsa device to idxd like common step 2:: +1. Bind 2 DSA device to idxd like common step 2:: ls /dev/dsa #check wq configure, reset if exist # ./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0 @@ -1511,11 +1515,11 @@ packed ring vectorized-tx path and multi-queues when vhost uses the asynchronous So check 512 packets and 112128 bytes received by virtio-user1 and 502 packets with 64 length and 10 packets with 8K length in pdump-virtio-rx.pcap. 
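The pdump payload checks that recur through these cases classify the captured packets into 64-byte packets and chained "8K" packets before comparing counts. A minimal sketch of that classification (a hypothetical helper, assuming an 8K chain arrives as one 8000-byte packet):

```python
def classify_capture(lengths, big=8000):
    """Count 64-byte and chained-8K packets in a list of captured packet lengths."""
    small = sum(1 for length in lengths if length == 64)
    large = sum(1 for length in lengths if length >= big)
    return small, large
```

For example, the indirect-descriptor cases expect a capture that classifies as 502 small and 10 large packets.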
Test Case 21: VM2VM split ring mergeable path and multi-queues test indirect descriptor with dsa dpdk and kernel driver -------------------------------------------------------------------------------------------------------------------------- +----------------------------------------------------------------------------------------------------------------------- This case uses testpmd to test the payload is valid and indirect descriptor after packets forwarding in vhost-user/virtio-user split ring mergeable path and multi-queues when vhost uses the asynchronous operations with dsa dpdk and kernel driver. -1. bind 2 dsa ports to idxd and 2 dsa ports to vfio-pci:: +1. Bind 2 dsa ports to idxd and 2 dsa ports to vfio-pci:: ls /dev/dsa #check wq configure, reset if exist # ./usertools/dpdk-devbind.py -u e7:01.0 ec:01.0 f1:01.0 f6:01.0 @@ -1563,11 +1567,11 @@ split ring mergeable path and multi-queues when vhost uses the asynchronous oper So check 512 packets and 112128 bytes received by virtio-user1 and 502 packets with 64 length and 10 packets with 8K length in pdump-virtio-rx.pcap. Test Case 22: VM2VM packed ring inorder mergeable path and multi-queues payload check with dsa dpdk and kernel driver ------------------------------------------------------------------------------------------------------------------------ +--------------------------------------------------------------------------------------------------------------------- This case uses testpmd to test the payload is valid after packets forwarding in vhost-user/virtio-user packed ring inorder mergeable path and multi-queues when vhost uses the asynchronous operations with dsa dpdk and kernel driver. -1. bind 2 dsa device to vfio-pci and 2 dsa port to idxd like common step 1-2:: +1. 
Bind 2 DSA device to vfio-pci and 2 dsa port to idxd like common step 1-2:: ls /dev/dsa #check wq configure, reset if exist # ./usertools/dpdk-devbind.py -u e7:01.0 ec:01.0 f1:01.0 f6:01.0 From patchwork Wed Nov 30 06:33:28 2022 X-Patchwork-Submitter: "Ling, WeiX" X-Patchwork-Id: 120337 Received: from unknown (HELO 
localhost.localdomain) ([10.239.252.222]) by fmsmga003-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 29 Nov 2022 22:39:04 -0800 From: Wei Ling To: dts@dpdk.org Cc: Wei Ling Subject: [dts][PATCH V2 2/2] tests/vm2vm_virtio_user_dsa: add new testsuite Date: Wed, 30 Nov 2022 14:33:28 +0800 Message-Id: <20221130063328.1165319-1-weix.ling@intel.com> X-Mailer: git-send-email 2.25.1 MIME-Version: 1.0 X-BeenThere: dts@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: test suite reviews and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dts-bounces@dpdk.org Add the TestSuite_vm2vm_virtio_user_dsa.py testsuite. Signed-off-by: Wei Ling --- tests/TestSuite_vm2vm_virtio_user_dsa.py | 2079 ++++++++++++++++++++++ 1 file changed, 2079 insertions(+) create mode 100644 tests/TestSuite_vm2vm_virtio_user_dsa.py diff --git a/tests/TestSuite_vm2vm_virtio_user_dsa.py b/tests/TestSuite_vm2vm_virtio_user_dsa.py new file mode 100644 index 00000000..7e359f3b --- /dev/null +++ b/tests/TestSuite_vm2vm_virtio_user_dsa.py @@ -0,0 +1,2079 @@ +# SPDX-License-Identifier: BSD-3-Clause +# Copyright(c) 2022 Intel Corporation +# + +import re + +from framework.packet import Packet +from framework.pmd_output import PmdOutput +from framework.test_case import TestCase + +from .virtio_common import dsa_common as DC + + +class TestVM2VMVirtioUserDsa(TestCase): + def set_up_all(self): + self.dut_ports = self.dut.get_ports() + self.port_socket = self.dut.get_numa_id(self.dut_ports[0]) + self.cores_list = self.dut.get_core_list(config="all", socket=self.port_socket) + self.vhost_core_list = self.cores_list[0:2] + self.virtio0_core_list = self.cores_list[2:4] + self.virtio1_core_list = self.cores_list[4:6] + self.vhost_user = self.dut.new_session(suite="vhost-user") + self.virtio_user0 = self.dut.new_session(suite="virtio-user0") + self.virtio_user1 = self.dut.new_session(suite="virtio-user1") + self.pdump_user = 
self.dut.new_session(suite="pdump-user") + self.vhost_user_pmd = PmdOutput(self.dut, self.vhost_user) + self.virtio_user0_pmd = PmdOutput(self.dut, self.virtio_user0) + self.virtio_user1_pmd = PmdOutput(self.dut, self.virtio_user1) + self.path = self.dut.apps_name["test-pmd"] + self.testpmd_name = self.path.split("/")[-1] + self.app_pdump = self.dut.apps_name["pdump"] + self.pdump_name = self.app_pdump.split("/")[-1] + self.dump_virtio_pcap = "/tmp/pdump-virtio-rx.pcap" + self.dump_vhost_pcap = "/tmp/pdump-vhost-rx.pcap" + + self.DC = DC(self) + + def set_up(self): + self.path = self.dut.apps_name["test-pmd"] + self.testpmd_name = self.path.split("/")[-1] + self.app_pdump = self.dut.apps_name["pdump"] + self.pdump_name = self.app_pdump.split("/")[-1] + + self.dut.send_expect("rm -rf ./vhost-net*", "#") + self.dut.send_expect("rm -rf %s" % self.dump_virtio_pcap, "#") + self.dut.send_expect("rm -rf %s" % self.dump_vhost_pcap, "#") + self.dut.send_expect("killall -s INT %s" % self.testpmd_name, "#") + self.dut.send_expect("killall -s INT %s" % self.pdump_name, "#") + self.use_dsa_list = [] + + self.DC.reset_all_work_queue() + self.DC.bind_all_dsa_to_kernel() + + @property + def check_2M_env(self): + out = self.dut.send_expect( + "cat /proc/meminfo |grep Hugepagesize|awk '{print($2)}'", "# " + ) + return True if out == "2048" else False + + def start_vhost_testpmd( + self, + cores, + eal_param="", + param="", + no_pci=False, + ports="", + port_options="", + iova_mode="va", + ): + if iova_mode: + eal_param += " --iova=" + iova_mode + if not no_pci and port_options != "": + self.vhost_user_pmd.start_testpmd( + cores=cores, + eal_param=eal_param, + param=param, + port_options=port_options, + ports=ports, + prefix="vhost", + fixed_prefix=True, + ) + elif not no_pci and port_options == "": + self.vhost_user_pmd.start_testpmd( + cores=cores, + eal_param=eal_param, + param=param, + ports=ports, + prefix="vhost", + fixed_prefix=True, + ) + else: + 
self.vhost_user_pmd.start_testpmd( + cores=cores, + eal_param=eal_param, + param=param, + no_pci=no_pci, + prefix="vhost", + fixed_prefix=True, + ) + + def start_virtio_testpmd_with_vhost_net1(self, cores, eal_param="", param=""): + """ + launch the testpmd as virtio-user connected to vhost-net1 + """ + if self.check_2M_env: + eal_param += " --single-file-segments" + self.virtio_user1_pmd.start_testpmd( + cores=cores, + eal_param=eal_param, + param=param, + no_pci=True, + prefix="virtio-user1", + fixed_prefix=True, + ) + self.virtio_user1_pmd.execute_cmd("set fwd rxonly") + self.virtio_user1_pmd.execute_cmd("start") + + def start_virtio_testpmd_with_vhost_net0(self, cores, eal_param="", param=""): + """ + launch the testpmd as virtio-user connected to vhost-net0 + """ + if self.check_2M_env: + eal_param += " --single-file-segments" + self.virtio_user0_pmd.start_testpmd( + cores=cores, + eal_param=eal_param, + param=param, + no_pci=True, + prefix="virtio-user0", + fixed_prefix=True, + ) + + def start_pdump_to_capture_pkt(self): + """ + launch the pdump app with dump_port and file_prefix + the pdump app must be started after testpmd: + to dump vhost-testpmd packets, start vhost-testpmd before launching pdump + to dump virtio-testpmd packets, start virtio-testpmd before launching pdump + """ + command_line = ( + self.app_pdump + + " -l 1-2 -n 4 --file-prefix=virtio-user1 -v -- " + + "--pdump 'device_id=net_virtio_user1,queue=*,rx-dev=%s,mbuf-size=8000'" + ) + self.pdump_user.send_expect(command_line % (self.dump_virtio_pcap), "Port") + + def check_virtio_user1_stats(self, check_dict): + """ + check the virtio-user1 show port stats + """ + out = self.virtio_user1_pmd.execute_cmd("show port stats all") + self.logger.info(out) + rx_packets = re.search(r"RX-packets:\s*(\d*)", out) + rx_bytes = re.search(r"RX-bytes:\s*(\d*)", out) + rx_num = int(rx_packets.group(1)) + byte_num = int(rx_bytes.group(1)) + packet_count = 0 + byte_count = 0 + for key, value in check_dict.items(): + 
packet_count += value + byte_count += key * value + self.verify( + rx_num == packet_count, + "receive packet number: {} is not equal to send: {}".format( + rx_num, packet_count + ), + ) + self.verify( + byte_num == byte_count, + "receive packet bytes: {} is not equal to send: {}".format( + byte_num, byte_count + ), + ) + + def check_packet_payload_valid(self, check_dict): + """ + check the payload is valid + """ + self.pdump_user.send_expect("^c", "# ", 60) + self.dut.session.copy_file_from( + src=self.dump_virtio_pcap, dst=self.dump_virtio_pcap + ) + pkt = Packet() + pkts = pkt.read_pcapfile(self.dump_virtio_pcap) + for key, value in check_dict.items(): + count = 0 + for i in range(len(pkts)): + if len(pkts[i]) == key: + count += 1 + self.verify( + value == count, + "pdump file contains {} packets of length {}, not the expected count".format(count, key), + ) + + def clear_virtio_user1_stats(self): + self.virtio_user1_pmd.execute_cmd("stop") + self.virtio_user1_pmd.execute_cmd("clear port stats all") + self.virtio_user1_pmd.execute_cmd("start") + out = self.virtio_user1_pmd.execute_cmd("show port stats all") + self.logger.info(out) + + def send_502_960byte_and_64_64byte_pkts(self): + """ + send 502 960byte and 64 64byte length packets from virtio_user0 testpmd + """ + self.virtio_user0_pmd.execute_cmd("set burst 1") + self.virtio_user0_pmd.execute_cmd("set txpkts 64,128,256,512") + self.virtio_user0_pmd.execute_cmd("start tx_first 27") + self.virtio_user0_pmd.execute_cmd("stop") + self.virtio_user0_pmd.execute_cmd("set burst 32") + self.virtio_user0_pmd.execute_cmd("start tx_first 7") + self.virtio_user0_pmd.execute_cmd("stop") + self.virtio_user0_pmd.execute_cmd("set txpkts 64") + self.virtio_user0_pmd.execute_cmd("start tx_first 1") + self.virtio_user0_pmd.execute_cmd("stop") + self.vhost_user_pmd.execute_cmd("start") + self.vhost_user_pmd.execute_cmd("show port stats all") + + def send_502_64byte_and_64_8000byte_pkts(self): + """ + send 502 64byte and 64 8000byte length packets from virtio_user0 testpmd + """ + self.virtio_user0_pmd.execute_cmd("set burst 1") + self.virtio_user0_pmd.execute_cmd("start tx_first 27") + self.virtio_user0_pmd.execute_cmd("stop") + self.virtio_user0_pmd.execute_cmd("set burst 32") + self.virtio_user0_pmd.execute_cmd("start tx_first 7") + self.virtio_user0_pmd.execute_cmd("stop") + self.virtio_user0_pmd.execute_cmd("set txpkts 2000,2000,2000,2000") + self.virtio_user0_pmd.execute_cmd("start tx_first 1") + self.virtio_user0_pmd.execute_cmd("stop") + self.vhost_user_pmd.execute_cmd("start") + self.vhost_user_pmd.execute_cmd("show port stats all") + + def send_54_4640byte_and_448_64byte_pkts(self): + """ + send 54 4640byte and 448 64byte length packets from virtio_user0 testpmd + """ + self.virtio_user0_pmd.execute_cmd("set burst 1") + self.virtio_user0_pmd.execute_cmd("set txpkts 64,256,2000,64,256,2000") + self.virtio_user0_pmd.execute_cmd("start tx_first 27") + self.virtio_user0_pmd.execute_cmd("stop") + self.virtio_user0_pmd.execute_cmd("set burst 32") + self.virtio_user0_pmd.execute_cmd("set txpkts 64") + self.virtio_user0_pmd.execute_cmd("start tx_first 7") + self.virtio_user0_pmd.execute_cmd("stop") + self.vhost_user_pmd.execute_cmd("start") + self.vhost_user_pmd.execute_cmd("show port stats all") + + def send_448_64byte_and_54_4640byte_pkts(self): + """ + send 448 64byte and 54 4640byte length packets from virtio_user0 testpmd + """ + self.virtio_user0_pmd.execute_cmd("set burst 32") + self.virtio_user0_pmd.execute_cmd("set txpkts 64") + self.virtio_user0_pmd.execute_cmd("start tx_first 7") + self.virtio_user0_pmd.execute_cmd("stop") + self.virtio_user0_pmd.execute_cmd("set burst 1") + self.virtio_user0_pmd.execute_cmd("set txpkts 64,256,2000,64,256,2000") + self.virtio_user0_pmd.execute_cmd("start tx_first 27") + self.virtio_user0_pmd.execute_cmd("stop") + self.vhost_user_pmd.execute_cmd("start") + self.vhost_user_pmd.execute_cmd("show port stats all") + + def send_1_64byte_pkts(self): + """ + send 1 64byte 
length packets from virtio_user0 testpmd + """ + self.virtio_user0_pmd.execute_cmd("set burst 1") + self.virtio_user0_pmd.execute_cmd("start tx_first 1") + self.virtio_user0_pmd.execute_cmd("stop") + self.vhost_user_pmd.execute_cmd("start") + self.vhost_user_pmd.execute_cmd("show port stats all") + + def test_split_non_mergeable_multi_queues_payload_check_with_dpdk_driver( + self, + ): + """ + Test Case 1: VM2VM vhost-user/virtio-user split ring non-mergeable path and multi-queues payload check with dsa dpdk driver + """ + self.use_dsa_list = self.DC.bind_dsa_to_dpdk( + dsa_number=2, driver_name="vfio-pci", socket=self.port_socket + ) + dmas1 = "txq0@%s-q0;rxq1@%s-q0" % ( + self.use_dsa_list[0], + self.use_dsa_list[0], + ) + dmas2 = "txq0@%s-q1;rxq1@%s-q1" % ( + self.use_dsa_list[0], + self.use_dsa_list[0], + ) + vhost_eal_param = ( + "--vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[%s]' " + "--vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[%s]'" + % (dmas1, dmas2) + ) + vhost_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx" + port_options = {self.use_dsa_list[0]: "max_queues=2"} + self.start_vhost_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + no_pci=False, + port_options=port_options, + ports=self.use_dsa_list[0:1], + iova_mode="va", + ) + virtio1_eal_param = "--vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,mrg_rxbuf=0,in_order=0,queue_size=4096" + virtio1_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096" + self.start_virtio_testpmd_with_vhost_net1( + cores=self.virtio1_core_list, + eal_param=virtio1_eal_param, + param=virtio1_param, + ) + self.start_pdump_to_capture_pkt() + virtio0_eal_param = "--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net0,queues=2,server=1,mrg_rxbuf=0,in_order=0,queue_size=4096" + virtio0_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096" + 
self.start_virtio_testpmd_with_vhost_net0( + cores=self.virtio0_core_list, + eal_param=virtio0_eal_param, + param=virtio0_param, + ) + self.send_502_960byte_and_64_64byte_pkts() + check_dict = {960: 502, 64: 64} + self.check_virtio_user1_stats(check_dict) + self.check_packet_payload_valid(check_dict) + + self.vhost_user_pmd.execute_cmd("quit", "#", 60) + self.clear_virtio_user1_stats() + dmas1 = "txq0@%s-q1;txq1@%s-q1;rxq0@%s-q1;rxq1@%s-q1" % ( + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + ) + dmas2 = "txq0@%s-q1;txq1@%s-q1;rxq1@%s-q1;rxq1@%s-q1" % ( + self.use_dsa_list[1], + self.use_dsa_list[1], + self.use_dsa_list[1], + self.use_dsa_list[1], + ) + vhost_eal_param = ( + "--vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[%s]' " + "--vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[%s]'" + % (dmas1, dmas2) + ) + port_options = { + self.use_dsa_list[0]: "max_queues=4", + self.use_dsa_list[1]: "max_queues=4", + } + self.start_vhost_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + no_pci=False, + port_options=port_options, + ports=self.use_dsa_list, + iova_mode="va", + ) + self.start_pdump_to_capture_pkt() + self.send_502_960byte_and_64_64byte_pkts() + check_dict = {960: 502, 64: 64} + self.check_virtio_user1_stats(check_dict) + self.check_packet_payload_valid(check_dict) + + def test_split_inorder_non_mergeable_multi_queues_payload_check_with_dpdk_driver( + self, + ): + """ + Test Case 2: VM2VM split ring inorder non-mergeable path and multi-queues payload check with dsa dpdk driver + """ + self.use_dsa_list = self.DC.bind_dsa_to_dpdk( + dsa_number=2, driver_name="vfio-pci", socket=self.port_socket + ) + dmas1 = "txq0@%s-q0;rxq1@%s-q1" % ( + self.use_dsa_list[0], + self.use_dsa_list[0], + ) + dmas2 = "txq0@%s-q2;rxq1@%s-q3" % ( + self.use_dsa_list[0], + self.use_dsa_list[0], + ) + vhost_eal_param = ( + "--vdev 
'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[%s]' " + "--vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[%s]'" + % (dmas1, dmas2) + ) + vhost_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx" + port_options = {self.use_dsa_list[0]: "max_queues=4"} + self.start_vhost_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + no_pci=False, + port_options=port_options, + ports=self.use_dsa_list[0:1], + iova_mode="va", + ) + virtio1_eal_param = "--vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,mrg_rxbuf=0,in_order=1,queue_size=4096" + virtio1_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096" + self.start_virtio_testpmd_with_vhost_net1( + cores=self.virtio1_core_list, + eal_param=virtio1_eal_param, + param=virtio1_param, + ) + self.start_pdump_to_capture_pkt() + virtio0_eal_param = "--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net0,queues=2,server=1,mrg_rxbuf=0,in_order=1,queue_size=4096" + virtio0_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096" + self.start_virtio_testpmd_with_vhost_net0( + cores=self.virtio0_core_list, + eal_param=virtio0_eal_param, + param=virtio0_param, + ) + self.send_502_960byte_and_64_64byte_pkts() + check_dict = {960: 502, 64: 64} + self.check_virtio_user1_stats(check_dict) + self.check_packet_payload_valid(check_dict) + + self.vhost_user_pmd.execute_cmd("quit", "#", 60) + self.clear_virtio_user1_stats() + dmas1 = "txq0@%s-q1;txq1@%s-q1;rxq0@%s-q1;rxq1@%s-q1" % ( + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + ) + dmas2 = "txq0@%s-q1;txq1@%s-q1;rxq1@%s-q1;rxq1@%s-q1" % ( + self.use_dsa_list[1], + self.use_dsa_list[1], + self.use_dsa_list[1], + self.use_dsa_list[1], + ) + vhost_eal_param = ( + "--vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[%s]' " + "--vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[%s]'" + % (dmas1, dmas2) + ) + 
port_options = { + self.use_dsa_list[0]: "max_queues=2", + self.use_dsa_list[1]: "max_queues=2", + } + self.start_vhost_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + no_pci=False, + port_options=port_options, + ports=self.use_dsa_list, + iova_mode="va", + ) + self.start_pdump_to_capture_pkt() + self.send_502_960byte_and_64_64byte_pkts() + check_dict = {960: 502, 64: 64} + self.check_virtio_user1_stats(check_dict) + self.check_packet_payload_valid(check_dict) + + def test_split_inorder_mergeable_multi_queues_non_indirect_descriptor_payload_check_with_dpdk_driver( + self, + ): + """ + Test Case 3: VM2VM split ring inorder mergeable path and multi-queues test non-indirect descriptor and payload check with dsa dpdk driver + """ + self.use_dsa_list = self.DC.bind_dsa_to_dpdk( + dsa_number=2, driver_name="vfio-pci", socket=self.port_socket + ) + dmas1 = "txq0@%s-q0;rxq1@%s-q1" % ( + self.use_dsa_list[0], + self.use_dsa_list[0], + ) + dmas2 = "txq0@%s-q0;rxq1@%s-q1" % ( + self.use_dsa_list[0], + self.use_dsa_list[0], + ) + vhost_eal_param = ( + "--vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[%s]' " + "--vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[%s]'" + % (dmas1, dmas2) + ) + + vhost_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256 --no-flush-rx" + port_options = {self.use_dsa_list[0]: "max_queues=2"} + self.start_vhost_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + no_pci=False, + port_options=port_options, + ports=self.use_dsa_list[0:1], + iova_mode="va", + ) + + virtio1_eal_param = "--vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,mrg_rxbuf=1,in_order=1,queue_size=256" + virtio1_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256" + self.start_virtio_testpmd_with_vhost_net1( + cores=self.virtio1_core_list, + eal_param=virtio1_eal_param, + param=virtio1_param, + ) + self.start_pdump_to_capture_pkt() + 
virtio0_eal_param = "--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net0,queues=2,server=1,mrg_rxbuf=1,in_order=1,queue_size=256" + virtio0_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256" + self.start_virtio_testpmd_with_vhost_net0( + cores=self.virtio0_core_list, + eal_param=virtio0_eal_param, + param=virtio0_param, + ) + self.send_502_64byte_and_64_8000byte_pkts() + check_dict = {64: 502, 8000: 2} + self.check_virtio_user1_stats(check_dict) + self.check_packet_payload_valid(check_dict) + + self.virtio_user0_pmd.execute_cmd("quit", "#", 60) + self.virtio_user1_pmd.execute_cmd("quit", "#", 60) + self.vhost_user_pmd.execute_cmd("quit", "#", 60) + dmas1 = "txq0@%s-q0;txq1@%s-q1;rxq0@%s-q2;rxq1@%s-q3" % ( + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + ) + dmas2 = "txq0@%s-q0;txq1@%s-q1;rxq1@%s-q2;rxq1@%s-q3" % ( + self.use_dsa_list[1], + self.use_dsa_list[1], + self.use_dsa_list[1], + self.use_dsa_list[1], + ) + vhost_eal_param = ( + "--vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[%s]' " + "--vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[%s]'" + % (dmas1, dmas2) + ) + port_options = { + self.use_dsa_list[0]: "max_queues=4", + self.use_dsa_list[1]: "max_queues=4", + } + self.start_vhost_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + no_pci=False, + port_options=port_options, + ports=self.use_dsa_list, + iova_mode="va", + ) + self.start_virtio_testpmd_with_vhost_net1( + cores=self.virtio1_core_list, + eal_param=virtio1_eal_param, + param=virtio1_param, + ) + self.start_pdump_to_capture_pkt() + self.start_virtio_testpmd_with_vhost_net0( + cores=self.virtio0_core_list, + eal_param=virtio0_eal_param, + param=virtio0_param, + ) + self.send_502_64byte_and_64_8000byte_pkts() + check_dict = {64: 502, 8000: 2} + self.check_virtio_user1_stats(check_dict) + self.check_packet_payload_valid(check_dict) + + def 
test_split_mergeable_multi_queues_indirect_descriptor_payload_check_with_dpdk_driver( + self, + ): + """ + Test Case 4: VM2VM split ring mergeable path and multi-queues test indirect descriptor and payload check with dsa dpdk driver + """ + self.use_dsa_list = self.DC.bind_dsa_to_dpdk( + dsa_number=2, driver_name="vfio-pci", socket=self.port_socket + ) + dmas1 = "txq0@%s-q0;rxq1@%s-q0" % ( + self.use_dsa_list[0], + self.use_dsa_list[0], + ) + dmas2 = "txq0@%s-q0;rxq1@%s-q0" % ( + self.use_dsa_list[0], + self.use_dsa_list[0], + ) + vhost_eal_param = ( + "--vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[%s]' " + "--vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[%s]'" + % (dmas1, dmas2) + ) + vhost_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256 --no-flush-rx" + port_options = {self.use_dsa_list[0]: "max_queues=1"} + self.start_vhost_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + no_pci=False, + port_options=port_options, + ports=self.use_dsa_list[0:1], + iova_mode="va", + ) + virtio1_eal_param = "--vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,mrg_rxbuf=1,in_order=0,queue_size=256" + virtio1_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256" + self.start_virtio_testpmd_with_vhost_net1( + cores=self.virtio1_core_list, + eal_param=virtio1_eal_param, + param=virtio1_param, + ) + self.start_pdump_to_capture_pkt() + virtio0_eal_param = "--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net0,queues=2,server=1,mrg_rxbuf=1,in_order=0,queue_size=256" + virtio0_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256" + self.start_virtio_testpmd_with_vhost_net0( + cores=self.virtio0_core_list, + eal_param=virtio0_eal_param, + param=virtio0_param, + ) + self.send_502_64byte_and_64_8000byte_pkts() + check_dict = {64: 502, 8000: 10} + self.check_virtio_user1_stats(check_dict) + self.check_packet_payload_valid(check_dict) + + 
self.virtio_user0_pmd.execute_cmd("quit", "#", 60) + self.virtio_user1_pmd.execute_cmd("quit", "#", 60) + self.vhost_user_pmd.execute_cmd("quit", "#", 60) + dmas1 = "txq0@%s-q0;txq1@%s-q1;rxq0@%s-q2;rxq1@%s-q3" % ( + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + ) + dmas2 = "txq0@%s-q0;txq1@%s-q1;rxq1@%s-q2;rxq1@%s-q3" % ( + self.use_dsa_list[1], + self.use_dsa_list[1], + self.use_dsa_list[1], + self.use_dsa_list[1], + ) + vhost_eal_param = ( + "--vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[%s]' " + "--vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[%s]'" + % (dmas1, dmas2) + ) + port_options = { + self.use_dsa_list[0]: "max_queues=4", + self.use_dsa_list[1]: "max_queues=4", + } + self.start_vhost_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + no_pci=False, + port_options=port_options, + ports=self.use_dsa_list, + iova_mode="va", + ) + self.start_virtio_testpmd_with_vhost_net1( + cores=self.virtio1_core_list, + eal_param=virtio1_eal_param, + param=virtio1_param, + ) + self.start_pdump_to_capture_pkt() + self.start_virtio_testpmd_with_vhost_net0( + cores=self.virtio0_core_list, + eal_param=virtio0_eal_param, + param=virtio0_param, + ) + self.send_502_64byte_and_64_8000byte_pkts() + check_dict = {64: 502, 8000: 10} + self.check_virtio_user1_stats(check_dict) + self.check_packet_payload_valid(check_dict) + + def test_split_vectorized_multi_queues_payload_check_with_vhost_async_dpdk_driver( + self, + ): + """ + Test Case 5: VM2VM split ring vectorized path and multi-queues payload check with vhost async operation and dsa dpdk driver + """ + self.use_dsa_list = self.DC.bind_dsa_to_dpdk( + dsa_number=2, driver_name="vfio-pci", socket=self.port_socket + ) + dmas1 = "txq0@%s-q0;rxq0@%s-q0;rxq1@%s-q0" % ( + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + ) + dmas2 = "txq0@%s-q1;rxq0@%s-q1;rxq1@%s-q0" % ( + self.use_dsa_list[0], + 
self.use_dsa_list[0], + self.use_dsa_list[0], + ) + vhost_eal_param = ( + "--vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[%s]' " + "--vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[%s]'" + % (dmas1, dmas2) + ) + + vhost_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx" + self.start_vhost_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + no_pci=False, + ports=self.use_dsa_list, + iova_mode="va", + ) + + virtio1_eal_param = "--vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,mrg_rxbuf=0,in_order=0,vectorized=1,queue_size=4096" + virtio1_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096" + self.start_virtio_testpmd_with_vhost_net1( + cores=self.virtio1_core_list, + eal_param=virtio1_eal_param, + param=virtio1_param, + ) + + self.start_pdump_to_capture_pkt() + + virtio0_eal_param = "--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net0,queues=2,server=1,mrg_rxbuf=0,in_order=0,vectorized=1,queue_size=4096" + virtio0_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096" + self.start_virtio_testpmd_with_vhost_net0( + cores=self.virtio0_core_list, + eal_param=virtio0_eal_param, + param=virtio0_param, + ) + + self.send_502_960byte_and_64_64byte_pkts() + check_dict = {960: 502, 64: 64} + self.check_virtio_user1_stats(check_dict) + self.check_packet_payload_valid(check_dict) + + self.vhost_user_pmd.execute_cmd("quit", "#", 60) + self.clear_virtio_user1_stats() + dmas1 = "txq0@%s-q3;txq1@%s-q3;rxq0@%s-q1;rxq1@%s-q1" % ( + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[1], + self.use_dsa_list[1], + ) + dmas2 = "txq0@%s-q1;txq1@%s-q1;rxq1@%s-q3;rxq1@%s-q3" % ( + self.use_dsa_list[1], + self.use_dsa_list[1], + self.use_dsa_list[0], + self.use_dsa_list[0], + ) + vhost_eal_param = ( + "--vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[%s]' " + "--vdev 
'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[%s]'" + % (dmas1, dmas2) + ) + port_options = { + self.use_dsa_list[0]: "max_queues=4", + self.use_dsa_list[1]: "max_queues=4", + } + self.start_vhost_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + no_pci=False, + port_options=port_options, + ports=self.use_dsa_list, + iova_mode="va", + ) + self.start_pdump_to_capture_pkt() + self.send_502_960byte_and_64_64byte_pkts() + check_dict = {960: 502, 64: 64} + self.check_virtio_user1_stats(check_dict) + self.check_packet_payload_valid(check_dict) + + def test_packed_non_mergeable_multi_queues_payload_check_with_dpdk_driver( + self, + ): + """ + Test Case 6: VM2VM packed ring non-mergeable path and multi-queues payload check with dsa dpdk driver + """ + self.use_dsa_list = self.DC.bind_dsa_to_dpdk( + dsa_number=2, driver_name="vfio-pci", socket=self.port_socket + ) + dmas1 = "txq0@%s-q1;rxq1@%s-q1" % ( + self.use_dsa_list[0], + self.use_dsa_list[0], + ) + dmas2 = "txq0@%s-q1;rxq1@%s-q1" % ( + self.use_dsa_list[1], + self.use_dsa_list[1], + ) + vhost_eal_param = ( + "--vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[%s]' " + "--vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[%s]'" + % (dmas1, dmas2) + ) + + vhost_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx" + self.start_vhost_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + no_pci=False, + ports=self.use_dsa_list, + iova_mode="va", + ) + + virtio1_eal_param = "--vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,packed_vq=1,mrg_rxbuf=0,in_order=0,queue_size=4096" + virtio1_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096" + self.start_virtio_testpmd_with_vhost_net1( + cores=self.virtio1_core_list, + eal_param=virtio1_eal_param, + param=virtio1_param, + ) + + self.start_pdump_to_capture_pkt() + + virtio0_eal_param = 
"--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net0,queues=2,server=1,packed_vq=1,mrg_rxbuf=0,in_order=0,queue_size=4096" + virtio0_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096" + self.start_virtio_testpmd_with_vhost_net0( + cores=self.virtio0_core_list, + eal_param=virtio0_eal_param, + param=virtio0_param, + ) + + self.send_502_960byte_and_64_64byte_pkts() + check_dict = {960: 502, 64: 64} + self.check_virtio_user1_stats(check_dict) + self.check_packet_payload_valid(check_dict) + + self.vhost_user_pmd.execute_cmd("quit", "#", 60) + self.clear_virtio_user1_stats() + dmas1 = "txq0@%s-q0;txq1@%s-q1;rxq0@%s-q0;rxq1@%s-q1" % ( + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + ) + dmas2 = "txq0@%s-q0;txq1@%s-q1;rxq1@%s-q0;rxq1@%s-q1" % ( + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + ) + vhost_eal_param = ( + "--vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[%s]' " + "--vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[%s]'" + % (dmas1, dmas2) + ) + port_options = { + self.use_dsa_list[0]: "max_queues=4", + self.use_dsa_list[1]: "max_queues=4", + } + self.start_vhost_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + no_pci=False, + port_options=port_options, + ports=self.use_dsa_list[0:1], + iova_mode="va", + ) + self.start_pdump_to_capture_pkt() + self.send_502_960byte_and_64_64byte_pkts() + check_dict = {960: 502, 64: 64} + self.check_virtio_user1_stats(check_dict) + self.check_packet_payload_valid(check_dict) + + def test_packed_inorder_non_mergeable_multi_queues_payload_check_with_dpdk_driver( + self, + ): + """ + Test Case 7: VM2VM packed ring inorder non-mergeable path and multi-queues payload check with dsa dpdk driver + """ + self.use_dsa_list = self.DC.bind_dsa_to_dpdk( + dsa_number=2, driver_name="vfio-pci", socket=self.port_socket + ) + dmas1 = "txq0@%s-q0;rxq1@%s-q1" % ( + 
self.use_dsa_list[0], + self.use_dsa_list[0], + ) + dmas2 = "txq0@%s-q0;rxq1@%s-q1" % ( + self.use_dsa_list[1], + self.use_dsa_list[1], + ) + vhost_eal_param = ( + "--vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[%s]' " + "--vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[%s]'" + % (dmas1, dmas2) + ) + + vhost_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx" + port_options = { + self.use_dsa_list[0]: "max_queues=2", + self.use_dsa_list[1]: "max_queues=2", + } + self.start_vhost_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + no_pci=False, + port_options=port_options, + ports=self.use_dsa_list, + iova_mode="va", + ) + + virtio1_eal_param = "--vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,packed_vq=1,mrg_rxbuf=0,in_order=1,queue_size=4096" + virtio1_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096" + self.start_virtio_testpmd_with_vhost_net1( + cores=self.virtio1_core_list, + eal_param=virtio1_eal_param, + param=virtio1_param, + ) + + self.start_pdump_to_capture_pkt() + + virtio0_eal_param = "--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net0,queues=2,server=1,packed_vq=1,mrg_rxbuf=0,in_order=1,queue_size=4096" + virtio0_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096" + self.start_virtio_testpmd_with_vhost_net0( + cores=self.virtio0_core_list, + eal_param=virtio0_eal_param, + param=virtio0_param, + ) + + self.send_502_960byte_and_64_64byte_pkts() + check_dict = {960: 502, 64: 64} + self.check_virtio_user1_stats(check_dict) + self.check_packet_payload_valid(check_dict) + + self.vhost_user_pmd.execute_cmd("quit", "#", 60) + self.clear_virtio_user1_stats() + dmas1 = "txq0@%s-q5;txq1@%s-q6;rxq0@%s-q5;rxq1@%s-q6" % ( + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + ) + dmas2 = "txq0@%s-q5;txq1@%s-q6;rxq1@%s-q5;rxq1@%s-q6" % ( + self.use_dsa_list[1], + 
self.use_dsa_list[1], + self.use_dsa_list[1], + self.use_dsa_list[1], + ) + vhost_eal_param = ( + "--vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[%s]' " + "--vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[%s]'" + % (dmas1, dmas2) + ) + self.start_vhost_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + no_pci=False, + ports=self.use_dsa_list, + iova_mode="va", + ) + self.start_pdump_to_capture_pkt() + self.send_502_960byte_and_64_64byte_pkts() + check_dict = {960: 502, 64: 64} + self.check_virtio_user1_stats(check_dict) + self.check_packet_payload_valid(check_dict) + + def test_packed_mergeable_multi_queues_payload_check_with_dpdk_driver( + self, + ): + """ + Test Case 8: VM2VM packed ring mergeable path and multi-queues payload check with dsa dpdk driver + """ + self.use_dsa_list = self.DC.bind_dsa_to_dpdk( + dsa_number=2, driver_name="vfio-pci", socket=self.port_socket + ) + dmas1 = "txq0@%s-q0;rxq1@%s-q0" % ( + self.use_dsa_list[0], + self.use_dsa_list[0], + ) + dmas2 = "txq0@%s-q0;rxq1@%s-q0" % ( + self.use_dsa_list[0], + self.use_dsa_list[0], + ) + vhost_eal_param = ( + "--vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[%s]' " + "--vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[%s]'" + % (dmas1, dmas2) + ) + + vhost_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx" + port_options = {self.use_dsa_list[0]: "max_queues=1"} + self.start_vhost_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + no_pci=False, + port_options=port_options, + ports=self.use_dsa_list[0:1], + iova_mode="va", + ) + + virtio1_eal_param = "--vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,packed_vq=1,mrg_rxbuf=1,in_order=0,queue_size=4096" + virtio1_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096" + self.start_virtio_testpmd_with_vhost_net1( + cores=self.virtio1_core_list, + 
+            eal_param=virtio1_eal_param,
+            param=virtio1_param,
+        )
+
+        self.start_pdump_to_capture_pkt()
+
+        virtio0_eal_param = "--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net0,queues=2,server=1,packed_vq=1,mrg_rxbuf=1,in_order=0,queue_size=4096"
+        virtio0_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096"
+        self.start_virtio_testpmd_with_vhost_net0(
+            cores=self.virtio0_core_list,
+            eal_param=virtio0_eal_param,
+            param=virtio0_param,
+        )
+
+        self.send_54_4640byte_and_448_64byte_pkts()
+        check_dict = {4640: 54, 64: 448}
+        self.check_virtio_user1_stats(check_dict)
+        self.check_packet_payload_valid(check_dict)
+
+        self.vhost_user_pmd.execute_cmd("quit", "#", 60)
+        self.clear_virtio_user1_stats()
+        dmas1 = "txq0@%s-q0;txq1@%s-q1;rxq0@%s-q2;rxq1@%s-q3" % (
+            self.use_dsa_list[0],
+            self.use_dsa_list[0],
+            self.use_dsa_list[0],
+            self.use_dsa_list[0],
+        )
+        dmas2 = "txq0@%s-q4;txq1@%s-q5;rxq1@%s-q6;rxq1@%s-q7" % (
+            self.use_dsa_list[0],
+            self.use_dsa_list[0],
+            self.use_dsa_list[0],
+            self.use_dsa_list[0],
+        )
+        vhost_eal_param = (
+            "--vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[%s]' "
+            "--vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[%s]'"
+            % (dmas1, dmas2)
+        )
+        self.start_vhost_testpmd(
+            cores=self.vhost_core_list,
+            eal_param=vhost_eal_param,
+            param=vhost_param,
+            no_pci=False,
+            ports=self.use_dsa_list[0:1],
+            iova_mode="va",
+        )
+        self.start_pdump_to_capture_pkt()
+        self.send_54_4640byte_and_448_64byte_pkts()
+        check_dict = {4640: 54, 64: 448}
+        self.check_virtio_user1_stats(check_dict)
+        self.check_packet_payload_valid(check_dict)
+
+    def test_packed_inorder_mergeable_multi_queues_payload_check_with_dpdk_driver(
+        self,
+    ):
+        """
+        Test Case 9: VM2VM packed ring inorder mergeable path and multi-queues payload check with dsa dpdk driver
+        """
+        self.use_dsa_list = self.DC.bind_dsa_to_dpdk(
+            dsa_number=2, driver_name="vfio-pci", socket=self.port_socket
+        )
+        dmas1 = "txq0@%s-q0;rxq0@%s-q0;rxq1@%s-q0" % (
+            self.use_dsa_list[0],
+            self.use_dsa_list[0],
+            self.use_dsa_list[0],
+        )
+        dmas2 = "txq0@%s-q0;txq1@%s-q0;rxq1@%s-q0" % (
+            self.use_dsa_list[0],
+            self.use_dsa_list[0],
+            self.use_dsa_list[0],
+        )
+        vhost_eal_param = (
+            "--vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[%s]' "
+            "--vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[%s]'"
+            % (dmas1, dmas2)
+        )
+
+        vhost_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx"
+        self.start_vhost_testpmd(
+            cores=self.vhost_core_list,
+            eal_param=vhost_eal_param,
+            param=vhost_param,
+            no_pci=False,
+            ports=self.use_dsa_list,
+            iova_mode="va",
+        )
+
+        virtio1_eal_param = "--vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,packed_vq=1,mrg_rxbuf=1,in_order=1,queue_size=4096"
+        virtio1_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096"
+        self.start_virtio_testpmd_with_vhost_net1(
+            cores=self.virtio1_core_list,
+            eal_param=virtio1_eal_param,
+            param=virtio1_param,
+        )
+
+        self.start_pdump_to_capture_pkt()
+
+        virtio0_eal_param = "--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net0,queues=2,server=1,packed_vq=1,mrg_rxbuf=1,in_order=1,queue_size=4096"
+        virtio0_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096"
+        self.start_virtio_testpmd_with_vhost_net0(
+            cores=self.virtio0_core_list,
+            eal_param=virtio0_eal_param,
+            param=virtio0_param,
+        )
+
+        self.send_54_4640byte_and_448_64byte_pkts()
+        check_dict = {4640: 54, 64: 448}
+        self.check_virtio_user1_stats(check_dict)
+        self.check_packet_payload_valid(check_dict)
+
+        self.vhost_user_pmd.execute_cmd("quit", "#", 60)
+        self.clear_virtio_user1_stats()
+        dmas1 = "txq0@%s-q0;txq1@%s-q1;rxq0@%s-q0;rxq1@%s-q1" % (
+            self.use_dsa_list[0],
+            self.use_dsa_list[0],
+            self.use_dsa_list[0],
+            self.use_dsa_list[0],
+        )
+        dmas2 = "txq0@%s-q0;txq1@%s-q1;rxq1@%s-q0;rxq1@%s-q1" % (
+            self.use_dsa_list[1],
+            self.use_dsa_list[1],
+            self.use_dsa_list[1],
+            self.use_dsa_list[1],
+        )
+        vhost_eal_param = (
+            "--vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[%s]' "
+            "--vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[%s]'"
+            % (dmas1, dmas2)
+        )
+        self.start_vhost_testpmd(
+            cores=self.vhost_core_list,
+            eal_param=vhost_eal_param,
+            param=vhost_param,
+            no_pci=False,
+            ports=self.use_dsa_list,
+            iova_mode="va",
+        )
+        self.start_pdump_to_capture_pkt()
+        self.send_54_4640byte_and_448_64byte_pkts()
+        check_dict = {4640: 54, 64: 448}
+        self.check_virtio_user1_stats(check_dict)
+        self.check_packet_payload_valid(check_dict)
+
+    def test_packed_vectorized_tx_multi_queues_indirect_descriptor_payload_check_with_dpdk_driver(
+        self,
+    ):
+        """
+        Test Case 10: VM2VM packed ring vectorized-tx path and multi-queues test indirect descriptor and payload check with dsa dpdk driver
+        """
+        self.use_dsa_list = self.DC.bind_dsa_to_dpdk(
+            dsa_number=2, driver_name="vfio-pci", socket=self.port_socket
+        )
+        dmas1 = "txq0@%s-q0;txq1@%s-q0;rxq0@%s-q0;rxq1@%s-q0" % (
+            self.use_dsa_list[0],
+            self.use_dsa_list[0],
+            self.use_dsa_list[0],
+            self.use_dsa_list[0],
+        )
+        dmas2 = "txq0@%s-q1;txq1@%s-q1;rxq0@%s-q1;rxq1@%s-q1" % (
+            self.use_dsa_list[0],
+            self.use_dsa_list[0],
+            self.use_dsa_list[0],
+            self.use_dsa_list[0],
+        )
+        vhost_eal_param = (
+            "--vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[%s]' "
+            "--vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[%s]'"
+            % (dmas1, dmas2)
+        )
+
+        vhost_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256 --no-flush-rx"
+        port_options = {self.use_dsa_list[0]: "max_queues=2"}
+        self.start_vhost_testpmd(
+            cores=self.vhost_core_list,
+            eal_param=vhost_eal_param,
+            param=vhost_param,
+            no_pci=False,
+            ports=self.use_dsa_list[0:1],
+            port_options=port_options,
+            iova_mode="va",
+        )
+
+        virtio1_eal_param = "--force-max-simd-bitwidth=512 --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,packed_vq=1,mrg_rxbuf=1,in_order=1,vectorized=1,queue_size=256"
+        virtio1_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256"
+        self.start_virtio_testpmd_with_vhost_net1(
+            cores=self.virtio1_core_list,
+            eal_param=virtio1_eal_param,
+            param=virtio1_param,
+        )
+
+        self.start_pdump_to_capture_pkt()
+
+        virtio0_eal_param = "--force-max-simd-bitwidth=512 --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net0,queues=2,server=1,packed_vq=1,mrg_rxbuf=1,in_order=1,vectorized=1,queue_size=256"
+        virtio0_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256"
+        self.start_virtio_testpmd_with_vhost_net0(
+            cores=self.virtio0_core_list,
+            eal_param=virtio0_eal_param,
+            param=virtio0_param,
+        )
+
+        self.send_502_64byte_and_64_8000byte_pkts()
+        check_dict = {64: 502, 8000: 10}
+        self.check_virtio_user1_stats(check_dict)
+        self.check_packet_payload_valid(check_dict)
+
+        self.virtio_user0_pmd.execute_cmd("quit", "#", 60)
+        self.virtio_user1_pmd.execute_cmd("quit", "#", 60)
+        self.vhost_user_pmd.execute_cmd("quit", "#", 60)
+        dmas1 = "txq0@%s-q0;txq1@%s-q1;rxq0@%s-q2;rxq1@%s-q3" % (
+            self.use_dsa_list[0],
+            self.use_dsa_list[0],
+            self.use_dsa_list[0],
+            self.use_dsa_list[0],
+        )
+        dmas2 = "txq0@%s-q0;txq1@%s-q1" % (
+            self.use_dsa_list[1],
+            self.use_dsa_list[1],
+        )
+        vhost_eal_param = (
+            "--vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[%s]' "
+            "--vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[%s]'"
+            % (dmas1, dmas2)
+        )
+        port_options = {
+            self.use_dsa_list[0]: "max_queues=4",
+            self.use_dsa_list[1]: "max_queues=2",
+        }
+        self.start_vhost_testpmd(
+            cores=self.vhost_core_list,
+            eal_param=vhost_eal_param,
+            param=vhost_param,
+            no_pci=False,
+            ports=self.use_dsa_list,
+            port_options=port_options,
+            iova_mode="va",
+        )
+        self.start_virtio_testpmd_with_vhost_net1(
+            cores=self.virtio1_core_list,
+            eal_param=virtio1_eal_param,
+            param=virtio1_param,
+        )
+        self.start_pdump_to_capture_pkt()
+        self.start_virtio_testpmd_with_vhost_net0(
+            cores=self.virtio0_core_list,
+            eal_param=virtio0_eal_param,
+            param=virtio0_param,
+        )
+        self.send_502_64byte_and_64_8000byte_pkts()
+        check_dict = {64: 502, 8000: 10}
+        self.check_virtio_user1_stats(check_dict)
+        self.check_packet_payload_valid(check_dict)
+
+    def test_packed_vectorized_multi_queues_payload_check_with_dpdk_driver(
+        self,
+    ):
+        """
+        Test Case 11: VM2VM packed ring vectorized path and payload check test with dsa dpdk driver
+        """
+        self.use_dsa_list = self.DC.bind_dsa_to_dpdk(
+            dsa_number=2, driver_name="vfio-pci", socket=self.port_socket
+        )
+        dmas1 = "txq0@%s-q0;txq1@%s-q0;rxq0@%s-q0;rxq1@%s-q0" % (
+            self.use_dsa_list[0],
+            self.use_dsa_list[0],
+            self.use_dsa_list[0],
+            self.use_dsa_list[0],
+        )
+        dmas2 = "txq0@%s-q1;txq1@%s-q1;rxq0@%s-q1;rxq1@%s-q1" % (
+            self.use_dsa_list[0],
+            self.use_dsa_list[0],
+            self.use_dsa_list[0],
+            self.use_dsa_list[0],
+        )
+        vhost_eal_param = (
+            "--vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[%s]' "
+            "--vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[%s]'"
+            % (dmas1, dmas2)
+        )
+
+        vhost_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx"
+        port_options = {self.use_dsa_list[0]: "max_queues=2"}
+        self.start_vhost_testpmd(
+            cores=self.vhost_core_list,
+            eal_param=vhost_eal_param,
+            param=vhost_param,
+            no_pci=False,
+            ports=self.use_dsa_list[0:1],
+            port_options=port_options,
+            iova_mode="va",
+        )
+
+        virtio1_eal_param = "--force-max-simd-bitwidth=512 --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,packed_vq=1,mrg_rxbuf=0,in_order=1,vectorized=1,queue_size=4096"
+        virtio1_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096"
+        self.start_virtio_testpmd_with_vhost_net1(
+            cores=self.virtio1_core_list,
+            eal_param=virtio1_eal_param,
+            param=virtio1_param,
+        )
+
+        self.start_pdump_to_capture_pkt()
+
+        virtio0_eal_param = "--force-max-simd-bitwidth=512 --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net0,queues=2,server=1,packed_vq=1,mrg_rxbuf=0,in_order=1,vectorized=1,queue_size=4096"
+        virtio0_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096"
+        self.start_virtio_testpmd_with_vhost_net0(
+            cores=self.virtio0_core_list,
+            eal_param=virtio0_eal_param,
+            param=virtio0_param,
+        )
+
+        self.send_448_64byte_and_54_4640byte_pkts()
+        check_dict = {64: 448, 4640: 0}
+        self.check_virtio_user1_stats(check_dict)
+        self.check_packet_payload_valid(check_dict)
+
+    def test_packed_vectorized_ringsize_not_powerof_2_multi_queues_payload_check_with_dpdk_driver(
+        self,
+    ):
+        """
+        Test Case 12: VM2VM packed ring vectorized path payload check test with ring size is not power of 2 with dsa dpdk driver
+        """
+        self.use_dsa_list = self.DC.bind_dsa_to_dpdk(
+            dsa_number=2, driver_name="vfio-pci", socket=self.port_socket
+        )
+        dmas1 = "txq0@%s-q0;txq1@%s-q0;rxq0@%s-q1;rxq1@%s-q1" % (
+            self.use_dsa_list[0],
+            self.use_dsa_list[0],
+            self.use_dsa_list[0],
+            self.use_dsa_list[0],
+        )
+        dmas2 = "txq0@%s-q0;txq1@%s-q0;rxq0@%s-q1;rxq1@%s-q1" % (
+            self.use_dsa_list[0],
+            self.use_dsa_list[0],
+            self.use_dsa_list[0],
+            self.use_dsa_list[0],
+        )
+        vhost_eal_param = (
+            "--vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[%s]' "
+            "--vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[%s]'"
+            % (dmas1, dmas2)
+        )
+
+        vhost_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx"
+        port_options = {self.use_dsa_list[0]: "max_queues=2"}
+        self.start_vhost_testpmd(
+            cores=self.vhost_core_list,
+            eal_param=vhost_eal_param,
+            param=vhost_param,
+            no_pci=False,
+            ports=self.use_dsa_list[0:1],
+            port_options=port_options,
+            iova_mode="va",
+        )
+
+        virtio1_eal_param = "--force-max-simd-bitwidth=512 --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,packed_vq=1,mrg_rxbuf=0,in_order=1,vectorized=1,queue_size=4097"
+        virtio1_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=4097 --rxd=4097"
+        self.start_virtio_testpmd_with_vhost_net1(
+            cores=self.virtio1_core_list,
+            eal_param=virtio1_eal_param,
+            param=virtio1_param,
+        )
+
+        self.start_pdump_to_capture_pkt()
+
+        virtio0_eal_param = "--force-max-simd-bitwidth=512 --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net0,queues=2,server=1,packed_vq=1,mrg_rxbuf=0,in_order=1,vectorized=1,queue_size=4097"
+        virtio0_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=4097 --rxd=4097"
+        self.start_virtio_testpmd_with_vhost_net0(
+            cores=self.virtio0_core_list,
+            eal_param=virtio0_eal_param,
+            param=virtio0_param,
+        )
+
+        self.send_448_64byte_and_54_4640byte_pkts()
+        check_dict = {64: 448, 4640: 0}
+        self.check_virtio_user1_stats(check_dict)
+        self.check_packet_payload_valid(check_dict)
+
+    def test_split_non_mergeable_multi_queues_payload_check_with_kernel_driver(
+        self,
+    ):
+        """
+        Test Case 13: VM2VM split ring non-mergeable path and multi-queues payload check with dsa kernel driver
+        """
+        self.DC.create_work_queue(work_queue_number=2, dsa_index=0)
+        vhost_eal_param = (
+            "--vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0@wq0.0;rxq1@wq0.0]' "
+            "--vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0@wq0.1;rxq1@wq0.1]'"
+        )
+        vhost_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx"
+        self.start_vhost_testpmd(
+            cores=self.vhost_core_list,
+            eal_param=vhost_eal_param,
+            param=vhost_param,
+            no_pci=True,
+            iova_mode="va",
+        )
+
+        virtio1_eal_param = "--vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,mrg_rxbuf=0,in_order=0,queue_size=4096"
+        virtio1_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096"
+        self.start_virtio_testpmd_with_vhost_net1(
+            cores=self.virtio1_core_list,
+            eal_param=virtio1_eal_param,
+            param=virtio1_param,
+        )
+
+        self.start_pdump_to_capture_pkt()
+
+        virtio0_eal_param = "--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net0,queues=2,server=1,mrg_rxbuf=0,in_order=0,queue_size=4096"
+        virtio0_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096"
+        self.start_virtio_testpmd_with_vhost_net0(
+            cores=self.virtio0_core_list,
+            eal_param=virtio0_eal_param,
+            param=virtio0_param,
+        )
+
+        self.send_502_960byte_and_64_64byte_pkts()
+        check_dict = {960: 502, 64: 64}
+        self.check_virtio_user1_stats(check_dict)
+        self.check_packet_payload_valid(check_dict)
+
+    def test_split_inorder_non_mergeable_multi_queues_payload_check_with_kernel_driver(
+        self,
+    ):
+        """
+        Test Case 14: VM2VM split ring inorder non-mergeable path and multi-queues payload check with dsa kernel driver
+        """
+        self.DC.create_work_queue(work_queue_number=4, dsa_index=0)
+        vhost_eal_param = (
+            "--vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0@wq0.0;rxq1@wq0.1]' "
+            "--vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0@wq0.2;rxq1@wq0.3]'"
+        )
+        vhost_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx"
+        self.start_vhost_testpmd(
+            cores=self.vhost_core_list,
+            eal_param=vhost_eal_param,
+            param=vhost_param,
+            no_pci=True,
+            iova_mode="va",
+        )
+
+        virtio1_eal_param = "--vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,mrg_rxbuf=0,in_order=1,queue_size=4096"
+        virtio1_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096"
+        self.start_virtio_testpmd_with_vhost_net1(
+            cores=self.virtio1_core_list,
+            eal_param=virtio1_eal_param,
+            param=virtio1_param,
+        )
+
+        self.start_pdump_to_capture_pkt()
+
+        virtio0_eal_param = "--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net0,queues=2,server=1,mrg_rxbuf=0,in_order=1,queue_size=4096"
+        virtio0_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096"
+        self.start_virtio_testpmd_with_vhost_net0(
+            cores=self.virtio0_core_list,
+            eal_param=virtio0_eal_param,
+            param=virtio0_param,
+        )
+
+        self.send_502_960byte_and_64_64byte_pkts()
+        check_dict = {960: 502, 64: 64}
+        self.check_virtio_user1_stats(check_dict)
+        self.check_packet_payload_valid(check_dict)
+
+    def test_split_inorder_mergeable_multi_queues_non_indirect_descriptor_payload_check_with_kernel_driver(
+        self,
+    ):
+        """
+        Test Case 15: VM2VM split ring inorder mergeable path and multi-queues test non-indirect descriptor and payload check with dsa kernel driver
+        """
+        self.DC.create_work_queue(work_queue_number=4, dsa_index=0)
+        vhost_eal_param = (
+            "--vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0@wq0.0;rxq1@wq0.1]' "
+            "--vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0@wq0.2;rxq1@wq0.3]'"
+        )
+        vhost_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256 --no-flush-rx"
+        self.start_vhost_testpmd(
+            cores=self.vhost_core_list,
+            eal_param=vhost_eal_param,
+            param=vhost_param,
+            no_pci=True,
+            iova_mode="va",
+        )
+
+        virtio1_eal_param = "--vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,mrg_rxbuf=1,in_order=1,queue_size=256"
+        virtio1_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256"
+        self.start_virtio_testpmd_with_vhost_net1(
+            cores=self.virtio1_core_list,
+            eal_param=virtio1_eal_param,
+            param=virtio1_param,
+        )
+
+        self.start_pdump_to_capture_pkt()
+
+        virtio0_eal_param = "--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net0,queues=2,server=1,mrg_rxbuf=1,in_order=1,queue_size=256"
+        virtio0_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256"
+        self.start_virtio_testpmd_with_vhost_net0(
+            cores=self.virtio0_core_list,
+            eal_param=virtio0_eal_param,
+            param=virtio0_param,
+        )
+
+        self.send_502_64byte_and_64_8000byte_pkts()
+        check_dict = {64: 502, 8000: 2}
+        self.check_virtio_user1_stats(check_dict)
+        self.check_packet_payload_valid(check_dict)
+
+    def test_split_mergeable_multi_queues_indirect_descriptor_payload_check_with_kernel_driver(
+        self,
+    ):
+        """
+        Test Case 16: VM2VM split ring mergeable path and multi-queues test indirect descriptor and payload check with dsa kernel driver
+        """
+        self.DC.create_work_queue(work_queue_number=2, dsa_index=0)
+        vhost_eal_param = (
+            "--vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0@wq0.0;txq1@wq0.0;rxq0@wq0.0;rxq0@wq0.0]' "
+            "--vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0@wq0.1;txq1@wq0.1;rxq0@wq0.1;rxq0@wq0.1]'"
+        )
+        vhost_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256 --no-flush-rx"
+        self.start_vhost_testpmd(
+            cores=self.vhost_core_list,
+            eal_param=vhost_eal_param,
+            param=vhost_param,
+            no_pci=True,
+            iova_mode="va",
+        )
+
+        virtio1_eal_param = "--vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,mrg_rxbuf=1,in_order=0,queue_size=256"
+        virtio1_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256"
+        self.start_virtio_testpmd_with_vhost_net1(
+            cores=self.virtio1_core_list,
+            eal_param=virtio1_eal_param,
+            param=virtio1_param,
+        )
+
+        self.start_pdump_to_capture_pkt()
+
+        virtio0_eal_param = "--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net0,queues=2,server=1,mrg_rxbuf=1,in_order=0,queue_size=256"
+        virtio0_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256"
+        self.start_virtio_testpmd_with_vhost_net0(
+            cores=self.virtio0_core_list,
+            eal_param=virtio0_eal_param,
+            param=virtio0_param,
+        )
+
+        self.send_502_64byte_and_64_8000byte_pkts()
+        check_dict = {64: 502, 8000: 10}
+        self.check_virtio_user1_stats(check_dict)
+        self.check_packet_payload_valid(check_dict)
+
+    def test_split_vectorized_multi_queues_payload_check_with_vhost_async_operation_with_kernel_driver(
+        self,
+    ):
+        """
+        Test Case 17: VM2VM split ring vectorized path and multi-queues payload check with vhost async operation and dsa kernel driver
+        """
+        self.DC.create_work_queue(work_queue_number=4, dsa_index=0)
+        vhost_eal_param = (
+            "--vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0@wq0.0;txq1@wq0.0;rxq0@wq0.0;rxq1@wq0.0]' "
+            "--vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0@wq0.0;txq1@wq0.0;rxq0@wq0.0;rxq1@wq0.0]'"
+        )
+        vhost_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx"
+        self.start_vhost_testpmd(
+            cores=self.vhost_core_list,
+            eal_param=vhost_eal_param,
+            param=vhost_param,
+            no_pci=True,
+            iova_mode="va",
+        )
+
+        virtio1_eal_param = "--vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,mrg_rxbuf=0,in_order=0,vectorized=1,queue_size=4096"
+        virtio1_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096"
+        self.start_virtio_testpmd_with_vhost_net1(
+            cores=self.virtio1_core_list,
+            eal_param=virtio1_eal_param,
+            param=virtio1_param,
+        )
+
+        self.start_pdump_to_capture_pkt()
+
+        virtio0_eal_param = "--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net0,queues=2,server=1,mrg_rxbuf=0,in_order=0,vectorized=1,queue_size=4096"
+        virtio0_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096"
+        self.start_virtio_testpmd_with_vhost_net0(
+            cores=self.virtio0_core_list,
+            eal_param=virtio0_eal_param,
+            param=virtio0_param,
+        )
+
+        self.send_502_960byte_and_64_64byte_pkts()
+        check_dict = {960: 502, 64: 64}
+        self.check_virtio_user1_stats(check_dict)
+        self.check_packet_payload_valid(check_dict)
+
+    def test_packed_non_mergeable_multi_queues_payload_check_with_kernel_driver(
+        self,
+    ):
+        """
+        Test Case 18: VM2VM packed ring non-mergeable path and multi-queues payload check with dsa kernel driver
+        """
+        self.DC.create_work_queue(work_queue_number=2, dsa_index=0)
+        vhost_eal_param = (
+            "--vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0@wq0.0;rxq1@wq0.0]' "
+            "--vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0@wq0.1;rxq1@wq0.1]'"
+        )
+        vhost_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx"
+        self.start_vhost_testpmd(
+            cores=self.vhost_core_list,
+            eal_param=vhost_eal_param,
+            param=vhost_param,
+            no_pci=True,
+            iova_mode="va",
+        )
+
+        virtio1_eal_param = "--vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,packed_vq=1,mrg_rxbuf=0,in_order=0,queue_size=4096"
+        virtio1_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096"
+        self.start_virtio_testpmd_with_vhost_net1(
+            cores=self.virtio1_core_list,
+            eal_param=virtio1_eal_param,
+            param=virtio1_param,
+        )
+
+        self.start_pdump_to_capture_pkt()
+
+        virtio0_eal_param = "--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net0,queues=2,server=1,packed_vq=1,mrg_rxbuf=0,in_order=0,queue_size=4096"
+        virtio0_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096"
+        self.start_virtio_testpmd_with_vhost_net0(
+            cores=self.virtio0_core_list,
+            eal_param=virtio0_eal_param,
+            param=virtio0_param,
+        )
+
+        self.send_502_960byte_and_64_64byte_pkts()
+        check_dict = {960: 502, 64: 64}
+        self.check_virtio_user1_stats(check_dict)
+        self.check_packet_payload_valid(check_dict)
+
+    def test_packed_inorder_non_mergeable_multi_queues_payload_check_with_kernel_driver(
+        self,
+    ):
+        """
+        Test Case 19: VM2VM packed ring inorder non-mergeable path and multi-queues payload check with dsa kernel driver
+        """
+        self.DC.create_work_queue(work_queue_number=2, dsa_index=0)
+        self.DC.create_work_queue(work_queue_number=2, dsa_index=1)
+        vhost_eal_param = (
+            "--vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0@wq0.0;rxq1@wq0.1]' "
+            "--vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0@wq1.0;rxq1@wq1.1]'"
+        )
+        vhost_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx"
+        self.start_vhost_testpmd(
+            cores=self.vhost_core_list,
+            eal_param=vhost_eal_param,
+            param=vhost_param,
+            no_pci=True,
+            iova_mode="va",
+        )
+
+        virtio1_eal_param = "--vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,packed_vq=1,mrg_rxbuf=0,in_order=1,queue_size=4096"
+        virtio1_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096"
+        self.start_virtio_testpmd_with_vhost_net1(
+            cores=self.virtio1_core_list,
+            eal_param=virtio1_eal_param,
+            param=virtio1_param,
+        )
+
+        self.start_pdump_to_capture_pkt()
+
+        virtio0_eal_param = "--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net0,queues=2,server=1,packed_vq=1,mrg_rxbuf=0,in_order=1,queue_size=4096"
+        virtio0_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096"
+        self.start_virtio_testpmd_with_vhost_net0(
+            cores=self.virtio0_core_list,
+            eal_param=virtio0_eal_param,
+            param=virtio0_param,
+        )
+
+        self.send_502_960byte_and_64_64byte_pkts()
+        check_dict = {960: 502, 64: 64}
+        self.check_virtio_user1_stats(check_dict)
+        self.check_packet_payload_valid(check_dict)
+
+    def test_packed_mergeable_multi_queues_payload_check_with_kernel_driver(
+        self,
+    ):
+        """
+        Test Case 20: VM2VM packed ring mergeable path and multi-queues payload check with dsa kernel driver
+        """
+        self.DC.create_work_queue(work_queue_number=2, dsa_index=0)
+        vhost_eal_param = (
+            "--vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0@wq0.0;txq1@wq0.0;rxq0@wq0.1;rxq1@wq0.1]' "
+            "--vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0@wq0.1;txq1@wq0.1;rxq0@wq0.0;rxq1@wq0.0]'"
+        )
+        vhost_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx"
+        self.start_vhost_testpmd(
+            cores=self.vhost_core_list,
+            eal_param=vhost_eal_param,
+            param=vhost_param,
+            no_pci=True,
+            iova_mode="va",
+        )
+
+        virtio1_eal_param = "--vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,packed_vq=1,mrg_rxbuf=1,in_order=0,queue_size=4096"
+        virtio1_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096"
+        self.start_virtio_testpmd_with_vhost_net1(
+            cores=self.virtio1_core_list,
+            eal_param=virtio1_eal_param,
+            param=virtio1_param,
+        )
+
+        self.start_pdump_to_capture_pkt()
+
+        virtio0_eal_param = "--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net0,queues=2,server=1,packed_vq=1,mrg_rxbuf=1,in_order=0,queue_size=4096"
+        virtio0_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096"
+        self.start_virtio_testpmd_with_vhost_net0(
+            cores=self.virtio0_core_list,
+            eal_param=virtio0_eal_param,
+            param=virtio0_param,
+        )
+
+        self.send_502_960byte_and_64_64byte_pkts()
+        check_dict = {960: 502, 64: 64}
+        self.check_virtio_user1_stats(check_dict)
+        self.check_packet_payload_valid(check_dict)
+
+    def test_packed_inorder_mergeable_multi_queues_payload_check_with_kernel_driver(
+        self,
+    ):
+        """
+        Test Case 21: VM2VM packed ring inorder mergeable path and multi-queues payload check with dsa kernel driver
+        """
+        self.DC.create_work_queue(work_queue_number=1, dsa_index=0)
+        self.DC.create_work_queue(work_queue_number=1, dsa_index=1)
+        vhost_eal_param = (
+            "--vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0@wq0.0;rxq0@wq0.0;rxq1@wq0.0]' "
+            "--vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0@wq1.0;txq1@wq1.0;rxq1@wq1.0]'"
+        )
+        vhost_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx"
+        self.start_vhost_testpmd(
+            cores=self.vhost_core_list,
+            eal_param=vhost_eal_param,
+            param=vhost_param,
+            no_pci=True,
+            iova_mode="va",
+        )
+
+        virtio1_eal_param = "--vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,packed_vq=1,mrg_rxbuf=1,in_order=1,queue_size=4096"
+        virtio1_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096"
+        self.start_virtio_testpmd_with_vhost_net1(
+            cores=self.virtio1_core_list,
+            eal_param=virtio1_eal_param,
+            param=virtio1_param,
+        )
+
+        self.start_pdump_to_capture_pkt()
+
+        virtio0_eal_param = "--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net0,queues=2,server=1,packed_vq=1,mrg_rxbuf=1,in_order=1,queue_size=4096"
+        virtio0_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096"
+        self.start_virtio_testpmd_with_vhost_net0(
+            cores=self.virtio0_core_list,
+            eal_param=virtio0_eal_param,
+            param=virtio0_param,
+        )
+
+        self.send_502_960byte_and_64_64byte_pkts()
+        check_dict = {960: 502, 64: 64}
+        self.check_virtio_user1_stats(check_dict)
+        self.check_packet_payload_valid(check_dict)
+
+    def test_packed_vectorized_tx_multi_queues_indirect_descriptor_payload_check_with_kernel_driver(
+        self,
+    ):
+        """
+        Test Case 22: VM2VM packed ring vectorized-tx path and multi-queues test indirect descriptor and payload check with dsa kernel driver
+        """
+        self.DC.create_work_queue(work_queue_number=2, dsa_index=0)
+        self.DC.create_work_queue(work_queue_number=2, dsa_index=1)
+        vhost_eal_param = (
+            "--vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0@wq0.0;txq1@wq0.0;rxq0@wq0.1;rxq1@wq0.1]' "
+            "--vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0@wq1.0;txq1@wq1.0;rxq0@wq1.1;rxq1@wq1.1]'"
+        )
+        vhost_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx"
+        self.start_vhost_testpmd(
+            cores=self.vhost_core_list,
+            eal_param=vhost_eal_param,
+            param=vhost_param,
+            no_pci=True,
+            iova_mode="va",
+        )
+
+        virtio1_eal_param = "--vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,packed_vq=1,mrg_rxbuf=1,in_order=1,vectorized=1,queue_size=256"
+        virtio1_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256"
+        self.start_virtio_testpmd_with_vhost_net1(
+            cores=self.virtio1_core_list,
+            eal_param=virtio1_eal_param,
+            param=virtio1_param,
+        )
+
+        self.start_pdump_to_capture_pkt()
+
+        virtio0_eal_param = "--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net0,queues=2,server=1,packed_vq=1,mrg_rxbuf=1,in_order=1,vectorized=1,queue_size=256"
+        virtio0_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256"
+        self.start_virtio_testpmd_with_vhost_net0(
+            cores=self.virtio0_core_list,
+            eal_param=virtio0_eal_param,
+            param=virtio0_param,
+        )
+
+        self.send_502_64byte_and_64_8000byte_pkts()
+        check_dict = {64: 502, 8000: 10}
+        self.check_virtio_user1_stats(check_dict)
+        self.check_packet_payload_valid(check_dict)
+
+    def test_packed_vectorized_multi_queues_indirect_descriptor_payload_check_with_kernel_driver(
+        self,
+    ):
+        """
+        Test Case 23: VM2VM packed ring vectorized path and multi-queues test indirect descriptor and payload check with dsa kernel driver
+        """
+        self.DC.create_work_queue(work_queue_number=2, dsa_index=0)
+        self.DC.create_work_queue(work_queue_number=2, dsa_index=1)
+        vhost_eal_param = (
+            "--vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0@wq0.0;txq1@wq0.0;rxq0@wq0.1;rxq1@wq0.1]' "
+            "--vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0@wq1.0;txq1@wq1.0;rxq0@wq1.1;rxq1@wq1.1]'"
+        )
+        vhost_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx"
+        self.start_vhost_testpmd(
+            cores=self.vhost_core_list,
+            eal_param=vhost_eal_param,
+            param=vhost_param,
+            no_pci=True,
+            iova_mode="va",
+        )
+
+        virtio1_eal_param = "--vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,packed_vq=1,mrg_rxbuf=0,in_order=1,vectorized=1,queue_size=4096"
+        virtio1_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096"
+        self.start_virtio_testpmd_with_vhost_net1(
+            cores=self.virtio1_core_list,
+            eal_param=virtio1_eal_param,
+            param=virtio1_param,
+        )
+
+        self.start_pdump_to_capture_pkt()
+
+        virtio0_eal_param = "--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net0,queues=2,server=1,packed_vq=1,mrg_rxbuf=0,in_order=1,vectorized=1,queue_size=4096"
+        virtio0_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096"
+        self.start_virtio_testpmd_with_vhost_net0(
+            cores=self.virtio0_core_list,
+            eal_param=virtio0_eal_param,
+            param=virtio0_param,
+        )
+
+        self.send_502_64byte_and_64_8000byte_pkts()
+        check_dict = {64: 502, 8000: 0}
+        self.check_virtio_user1_stats(check_dict)
+        self.check_packet_payload_valid(check_dict)
+
+    def test_packed_vectorized_ringsize_not_powerof_2_multi_queues_payload_check_with_kernel_driver(
+        self,
+    ):
+        """
+        Test Case 24: VM2VM packed ring vectorized path payload check test with ring size is not power of 2 with dsa kernel driver
+        """
+        self.DC.create_work_queue(work_queue_number=2, dsa_index=0)
+        self.DC.create_work_queue(work_queue_number=2, dsa_index=1)
+        vhost_eal_param = (
+            "--vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0@wq0.0;txq1@wq0.0;rxq0@wq0.1;rxq1@wq0.1]' "
+            "--vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0@wq1.0;txq1@wq1.0;rxq0@wq1.1;rxq1@wq1.1]'"
+        )
+        vhost_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx"
+        self.start_vhost_testpmd(
+            cores=self.vhost_core_list,
+            eal_param=vhost_eal_param,
+            param=vhost_param,
+            no_pci=True,
+            iova_mode="va",
+        )
+
+        virtio1_eal_param = "--vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,packed_vq=1,mrg_rxbuf=0,in_order=1,vectorized=1,queue_size=4097"
+        virtio1_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=4097 --rxd=4097"
+        self.start_virtio_testpmd_with_vhost_net1(
+            cores=self.virtio1_core_list,
+            eal_param=virtio1_eal_param,
+            param=virtio1_param,
+        )
+
+        self.start_pdump_to_capture_pkt()
+
+        virtio0_eal_param = "--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net0,queues=2,server=1,packed_vq=1,mrg_rxbuf=0,in_order=1,vectorized=1,queue_size=4097"
+        virtio0_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=4097 --rxd=4097"
+        self.start_virtio_testpmd_with_vhost_net0(
+            cores=self.virtio0_core_list,
+            eal_param=virtio0_eal_param,
+            param=virtio0_param,
+        )
+
+        self.send_502_64byte_and_64_8000byte_pkts()
+        check_dict = {64: 502, 8000: 0}
+        self.check_virtio_user1_stats(check_dict)
+        self.check_packet_payload_valid(check_dict)
+
+    def test_split_mergeable_multi_queues_indirect_descriptor_payload_check_with_dpdk_and_kernel_driver(
+        self,
+    ):
+        """
+        Test Case 25: VM2VM split ring mergeable path and multi-queues test indirect descriptor with dsa dpdk and kernel driver
+        """
+        self.DC.create_work_queue(work_queue_number=2, dsa_index=0)
+        self.use_dsa_list = self.DC.bind_dsa_to_dpdk(
+            dsa_number=1,
+            driver_name="vfio-pci",
+            dsa_index_list=[1],
+            socket=self.port_socket,
+        )
+        dmas1 = "txq0@wq0.0;rxq0@wq0.0;rxq1@wq0.0"
+        dmas2 = "txq0@%s-q0;txq1@%s-q0;rxq1@%s-q0" % (
+            self.use_dsa_list[0],
+            self.use_dsa_list[0],
+            self.use_dsa_list[0],
+        )
+        vhost_eal_param = (
+            "--vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[%s]' "
+            "--vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[%s]'"
+            % (dmas1, dmas2)
+        )
+        vhost_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256 --no-flush-rx"
+        port_options = {self.use_dsa_list[0]: "max_queues=1"}
+        self.start_vhost_testpmd(
+            cores=self.vhost_core_list,
+            eal_param=vhost_eal_param,
+            param=vhost_param,
+            no_pci=False,
+            ports=self.use_dsa_list,
+            port_options=port_options,
+            iova_mode="va",
+        )
+
+        virtio1_eal_param = "--vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,mrg_rxbuf=1,in_order=0,queue_size=256"
+        virtio1_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256"
+        self.start_virtio_testpmd_with_vhost_net1(
+            cores=self.virtio1_core_list,
+            eal_param=virtio1_eal_param,
+            param=virtio1_param,
+        )
+
+        self.start_pdump_to_capture_pkt()
+
+        virtio0_eal_param = "--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net0,queues=2,server=1,mrg_rxbuf=1,in_order=0,queue_size=256"
+        virtio0_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256"
+        self.start_virtio_testpmd_with_vhost_net0(
+            cores=self.virtio0_core_list,
+            eal_param=virtio0_eal_param,
+            param=virtio0_param,
+        )
+
+        self.send_502_64byte_and_64_8000byte_pkts()
+        check_dict = {64: 502, 8000: 10}
+        self.check_virtio_user1_stats(check_dict)
+        self.check_packet_payload_valid(check_dict)
+
+    def test_packed_inorder_mergeable_multi_queues_payload_check_with_dpdk_and_kernel_driver(
+        self,
+    ):
+        """
+        Test Case 26: VM2VM packed ring inorder mergeable path and multi-queues payload check with dsa dpdk and kernel driver
+        """
+        self.DC.create_work_queue(work_queue_number=2, dsa_index=0)
+        self.use_dsa_list = self.DC.bind_dsa_to_dpdk(
+            dsa_number=1,
+            driver_name="vfio-pci",
+            dsa_index_list=[1],
+            socket=self.port_socket,
+        )
+        dmas1 = "txq0@%s-q0;txq1@wq0.0;rxq0@%s-q0;rxq1@wq0.0" % (
+            self.use_dsa_list[0],
+            self.use_dsa_list[0],
+        )
+        dmas2 = "txq0@wq0.0;txq1@%s-q0;rxq0@wq0.0;rxq1@%s-q0" % (
+            self.use_dsa_list[0],
+            self.use_dsa_list[0],
+        )
+        vhost_eal_param = (
+            "--vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[%s]' "
+            "--vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[%s]'"
+            % (dmas1, dmas2)
+        )
+        vhost_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx"
+        port_options = {self.use_dsa_list[0]: "max_queues=1"}
+        self.start_vhost_testpmd(
+            cores=self.vhost_core_list,
+            eal_param=vhost_eal_param,
+            param=vhost_param,
+            no_pci=False,
+            ports=self.use_dsa_list,
+            port_options=port_options,
+            iova_mode="va",
+        )
+
+        virtio1_eal_param = "--vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,packed_vq=1,mrg_rxbuf=1,in_order=1,queue_size=4096"
+        virtio1_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096"
+        self.start_virtio_testpmd_with_vhost_net1(
+            cores=self.virtio1_core_list,
+            eal_param=virtio1_eal_param,
+            param=virtio1_param,
+        )
+
+        self.start_pdump_to_capture_pkt()
+
+        virtio0_eal_param = "--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net0,queues=2,server=1,packed_vq=1,mrg_rxbuf=1,in_order=1,queue_size=4096"
+        virtio0_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096"
+        self.start_virtio_testpmd_with_vhost_net0(
+            cores=self.virtio0_core_list,
+            eal_param=virtio0_eal_param,
+            param=virtio0_param,
+        )
+
+        self.send_54_4640byte_and_448_64byte_pkts()
+        check_dict = {4640: 54, 64: 448}
+        self.check_virtio_user1_stats(check_dict)
self.check_packet_payload_valid(check_dict) + + def test_packed_vectorized_tx_batch_processing_with_dpdk_and_kernel_driver( + self, + ): + """ + Test Case 27: VM2VM packed ring vectorized-tx path test batch processing with dsa dpdk and kernel driver + """ + self.DC.create_work_queue(work_queue_number=2, dsa_index=0) + self.use_dsa_list = self.DC.bind_dsa_to_dpdk( + dsa_number=1, + driver_name="vfio-pci", + dsa_index_list=[1], + socket=self.port_socket, + ) + dmas1 = "txq0@%s-q0;txq1@wq0.0;rxq0@%s-q0;rxq1@wq0.0" % ( + self.use_dsa_list[0], + self.use_dsa_list[0], + ) + dmas2 = "txq0@wq0.0;txq1@%s-q0;rxq0@wq0.0;rxq1@%s-q0" % ( + self.use_dsa_list[0], + self.use_dsa_list[0], + ) + vhost_eal_param = ( + "--vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[%s]' " + "--vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[%s]'" + % (dmas1, dmas2) + ) + vhost_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx" + port_options = {self.use_dsa_list[0]: "max_queues=1"} + self.start_vhost_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + no_pci=False, + ports=self.use_dsa_list, + port_options=port_options, + iova_mode="va", + ) + + virtio1_eal_param = "--vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,packed_vq=1,mrg_rxbuf=1,in_order=1,vectorized=1,queue_size=256" + virtio1_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256" + self.start_virtio_testpmd_with_vhost_net1( + cores=self.virtio1_core_list, + eal_param=virtio1_eal_param, + param=virtio1_param, + ) + + self.start_pdump_to_capture_pkt() + + virtio0_eal_param = "--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net0,queues=2,server=1,packed_vq=1,mrg_rxbuf=1,in_order=1,vectorized=1,queue_size=256" + virtio0_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256" + self.start_virtio_testpmd_with_vhost_net0( + cores=self.virtio0_core_list, + eal_param=virtio0_eal_param, + param=virtio0_param, + ) + 
+ self.send_1_64byte_pkts() + check_dict = {64: 2} + self.check_virtio_user1_stats(check_dict) + self.check_packet_payload_valid(check_dict) + + def quit_all_testpmd(self): + self.vhost_user_pmd.execute_cmd("quit", "#", 60) + self.virtio_user0_pmd.execute_cmd("quit", "#", 60) + self.virtio_user1_pmd.execute_cmd("quit", "#", 60) + self.pdump_user.send_expect("^c", "# ", 60) + + def tear_down(self): + self.quit_all_testpmd() + self.dut.send_expect("killall -s INT %s" % self.testpmd_name, "#") + self.dut.send_expect("killall -s INT %s" % self.pdump_name, "#") + self.DC.reset_all_work_queue() + self.DC.bind_all_dsa_to_kernel() + + def tear_down_all(self): + pass