From patchwork Fri Apr 15 03:25:16 2022 X-Patchwork-Submitter: "Ling, WeiX" X-Patchwork-Id: 109734 From: Wei Ling To: dts@dpdk.org Cc: Wei Ling Subject: [dts][PATCH V1 1/5] test_plans/loopback_virtio_user_server_mode_test_plan: delete CBDMA test case Date: Fri, 15 Apr 2022 11:25:16 +0800 Message-Id: <20220415032516.251365-1-weix.ling@intel.com> X-Mailer: git-send-email 2.25.1 Following commit 53d3f4778c (vhost: integrate dmadev in asynchronous data-path), delete the CBDMA related cases from test_plans/loopback_virtio_user_server_mode_test_plan. Signed-off-by: Wei Ling --- ...back_virtio_user_server_mode_test_plan.rst | 922 ++++++++---------- 1 file changed, 387 insertions(+), 535 deletions(-) diff --git a/test_plans/loopback_virtio_user_server_mode_test_plan.rst b/test_plans/loopback_virtio_user_server_mode_test_plan.rst index 092eb5e8..b372f885 100644 --- a/test_plans/loopback_virtio_user_server_mode_test_plan.rst +++ b/test_plans/loopback_virtio_user_server_mode_test_plan.rst @@ -34,75 +34,113 @@ vhost/virtio-user loopback server mode test plan ================================================ -Virtio-user server mode is a feature to enable virtio-user as the server, vhost as the client, thus after vhost-user is killed then relaunched, -virtio-user can reconnect back to vhost-user again; at another hand, virtio-user also can reconnect back to vhost-user after virtio-user is killed. 
-This feature test need cover different rx/tx paths with virtio 1.0 and virtio 1.1, includes split ring mergeable, non-mergeable, inorder mergeable, -inorder non-mergeable, vector_rx path and packed ring mergeable, non-mergeable, inorder non-mergeable, inorder mergeable, vectorized path. -Split ring and packed ring test when vhost enqueue operation with multi-CBDMA channels. When DMA devices are bound to vfio driver, -VA mode is the default and recommended. For PA mode, page by page mapping may exceed IOMMU's max capability, better to use 1G guest hugepage. +Virtio-user server mode is a feature to enable virtio-user as the server, +vhost as the client, so that after vhost-user is killed and then relaunched, +virtio-user can reconnect back to vhost-user again; on the other hand, +virtio-user can also reconnect back to vhost-user after virtio-user is killed. +This feature test needs to cover different rx/tx paths with virtio 1.0 and virtio 1.1, +including split ring mergeable, non-mergeable, inorder mergeable, +inorder non-mergeable, vector_rx path and packed ring mergeable, non-mergeable, +inorder non-mergeable, inorder mergeable, vectorized path. + +For more about the dpdk-testpmd sample application, please refer to the DPDK documentation: +https://doc.dpdk.org/guides/testpmd_app_ug/run_app.html + +For the virtio-user vdev parameters, you can refer to the DPDK documentation: +https://doc.dpdk.org/guides/nics/virtio.html#virtio-paths-selection-and-usage. + +Prerequisites +============= + +Topology +-------- +Test flow: Virtio-user-->Vhost-user-->Testpmd-->Vhost-user-->Virtio-user + +Hardware +-------- +Supported NICs: ALL + +Software +-------- +Trex: http://trex-tgn.cisco.com/trex/release/v2.26.tar.gz + +General set up +-------------- +1. Compile DPDK:: + + # CC=gcc meson --werror -Denable_kmods=True -Dlibdir=lib -Dexamples=all --default-library= + # ninja -C -j 110 + +Test case +========= + +Common steps +------------ Test Case 1: Basic test for packed ring server mode -=================================================== +--------------------------------------------------- +This case uses testpmd to test packed ring with loopback virtio-user server mode. 1. Launch virtio-user as server mode:: - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 --no-pci --file-prefix=virtio \ + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 --no-pci --file-prefix=virtio \ --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/sock0,server=1,queues=1,packed_vq=1 -- -i --rxq=1 --txq=1 --no-numa - >set fwd mac - >start + testpmd> set fwd mac + testpmd> start 2. Launch vhost as client mode:: - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 3-4 -n 4 --no-pci --file-prefix=vhost \ + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 3-4 -n 4 --no-pci --file-prefix=vhost \ --vdev 'net_vhost0,iface=/tmp/sock0,client=1,queues=1' -- -i --rxq=1 --txq=1 --nb-cores=1 - >set fwd mac - >start tx_first 32 + testpmd> set fwd mac + testpmd> start tx_first 32 3. Run below command to get throughput,verify the loopback throughput is not zero:: - testpmd>show port stats all + testpmd> show port stats all Test Case 2: Basic test for split ring server mode -=================================================== +--------------------------------------------------- +This case uses testpmd to test split ring with loopback virtio-user server mode. 1. 
Launch virtio-user as server mode:: - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 --no-pci --file-prefix=virtio \ + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 --no-pci --file-prefix=virtio \ --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/sock0,server=1,queues=1 -- -i --rxq=1 --txq=1 --no-numa >set fwd mac >start 2. Launch vhost as client mode:: - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 3-4 -n 4 --no-pci --file-prefix=vhost \ + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 3-4 -n 4 --no-pci --file-prefix=vhost \ --vdev 'net_vhost0,iface=/tmp/sock0,client=1,queues=1' -- -i --rxq=1 --txq=1 --nb-cores=1 - >set fwd mac - >start tx_first 32 + testpmd> set fwd mac + testpmd> start tx_first 32 3. Run below command to get throughput,verify the loopback throughput is not zero:: - testpmd>show port stats all + testpmd> show port stats all Test Case 3: loopback reconnect test with split ring mergeable path and server mode -=================================================================================== +----------------------------------------------------------------------------------- +This case uses testpmd to test split ring mergeable path with loopback virtio-user server mode and reconnect vhost-user and virtio-user. 1. launch vhost as client mode with 8 queues:: - rm -rf vhost-net* - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xe -n 4 --no-pci --file-prefix=vhost \ + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xe -n 4 --no-pci --file-prefix=vhost \ --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=8' -- -i --nb-cores=2 --rxq=8 --txq=8 - >set fwd mac - >start + testpmd> set fwd mac + testpmd> start 2. Launch virtio-user as server mode with 8 queues and check throughput can get expected:: - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-7 --no-pci --file-prefix=virtio \ + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-7 --no-pci --file-prefix=virtio \ --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,server=1,queues=8,mrg_rxbuf=1,in_order=0 \ -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --rxq=8 --txq=8 - >set fwd mac - >set txpkts 2000,2000,2000,2000 - >start tx_first 32 - >show port stats all + testpmd> set fwd mac + testpmd> set txpkts 2000,2000,2000,2000 + testpmd> start tx_first 32 + testpmd> show port stats all 3. Quit vhost side testpmd, check the virtio-user side link status:: @@ -111,21 +149,21 @@ Test Case 3: loopback reconnect test with split ring mergeable path and server m 4. Relaunch vhost and send chain packets:: - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xe -n 4 --no-pci --file-prefix=vhost \ + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xe -n 4 --no-pci --file-prefix=vhost \ --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=8' -- -i --nb-cores=2 --rxq=8 --txq=8 - >set fwd mac - >set txpkts 2000,2000,2000,2000 - >start tx_first 32 + testpmd> set fwd mac + testpmd> set txpkts 2000,2000,2000,2000 + testpmd> start tx_first 32 5. Check the virtio-user side link status and run below command to get throughput, check throughput can get expected:: testpmd> show port info 0 #it should show up" - testpmd>show port stats all + testpmd> show port stats all 6. Check each RX/TX queue has packets:: - testpmd>stop + testpmd> stop 7. Quit virtio-user side testpmd, check the vhost side link status:: @@ -134,52 +172,52 @@ Test Case 3: loopback reconnect test with split ring mergeable path and server m 8. 
Relaunch virtio-user and send packets:: - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-7 --no-pci --file-prefix=virtio \ + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-7 --no-pci --file-prefix=virtio \ --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,server=1,queues=8,mrg_rxbuf=1,in_order=0 \ -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --rxq=8 --txq=8 - >set fwd mac - >set txpkts 2000,2000,2000,2000 - >start tx_first 32 + testpmd> set fwd mac + testpmd> set txpkts 2000,2000,2000,2000 + testpmd> start tx_first 32 9. Check the vhost side link status and run below command to get throughput, check throughput can get expected:: testpmd> show port info 0 #it should show up" - testpmd>show port stats all + testpmd> show port stats all 10. Port restart at vhost side by below command and check throughput can get expected:: - testpmd>stop - testpmd>port stop 0 - testpmd>port start 0 - testpmd>set txpkts 2000,2000,2000,2000 - testpmd>start tx_first 32 - testpmd>show port stats all + testpmd> stop + testpmd> port stop 0 + testpmd> port start 0 + testpmd> set txpkts 2000,2000,2000,2000 + testpmd> start tx_first 32 + testpmd> show port stats all 11. Check each RX/TX queue has packets:: - testpmd>stop + testpmd>stop Test Case 4: loopback reconnect test with split ring inorder mergeable path and server mode -=========================================================================================== +------------------------------------------------------------------------------------------- +This case uses testpmd to test split ring inorder mergeable path with loopback virtio-user server mode and reconnect vhost-user and virtio-user. 1. launch vhost as client mode with 2 queues:: - rm -rf vhost-net* - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xe -n 4 --no-pci --file-prefix=vhost \ + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xe -n 4 --no-pci --file-prefix=vhost \ --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=2' -- -i --nb-cores=2 --rxq=2 --txq=2 - >set fwd mac - >start + testpmd> set fwd mac + testpmd> start 2. Launch virtio-user as server mode with 2 queues, check throughput can get expected:: - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-7 --no-pci --file-prefix=virtio \ + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-7 --no-pci --file-prefix=virtio \ --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,server=1,queues=2,mrg_rxbuf=1,in_order=1 \ -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --rxq=2 --txq=2 - >set fwd mac - >set txpkts 2000,2000,2000,2000 - >start tx_first 32 - >show port stats all + testpmd> set fwd mac + testpmd> set txpkts 2000,2000,2000,2000 + testpmd> start tx_first 32 + testpmd> show port stats all 3. Quit vhost side testpmd, check the virtio-user side link status:: @@ -188,21 +226,21 @@ Test Case 4: loopback reconnect test with split ring inorder mergeable path and 4. Relaunch vhost and send packets:: - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xe -n 4 --no-pci --file-prefix=vhost \ + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xe -n 4 --no-pci --file-prefix=vhost \ --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=2' -- -i --nb-cores=2 --rxq=2 --txq=2 - >set fwd mac - >set txpkts 2000,2000,2000,2000 - >start tx_first 32 + testpmd> set fwd mac + testpmd> set txpkts 2000,2000,2000,2000 + testpmd> start tx_first 32 5. 
Check the virtio-user side link status and run below command to get throughput, check throughput can get expected:: testpmd> show port info 0 #it should show up" - testpmd>show port stats all + testpmd> show port stats all 6. Check each RX/TX queue has packets:: - testpmd>stop + testpmd> stop 7. Quit virtio-user side testpmd, check the vhost side link status:: @@ -211,51 +249,51 @@ Test Case 4: loopback reconnect test with split ring inorder mergeable path and 8. Relaunch virtio-user and send packets:: - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-7 --no-pci --file-prefix=virtio \ + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-7 --no-pci --file-prefix=virtio \ --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,server=1,queues=2,mrg_rxbuf=1,in_order=1\ -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --rxq=2 --txq=2 - >set fwd mac - >set txpkts 2000,2000,2000,2000 - >start tx_first 32 + testpmd> set fwd mac + testpmd> set txpkts 2000,2000,2000,2000 + testpmd> start tx_first 32 9. Check the vhost side link status and run below command to get throughput, check throughput can get expected:: testpmd> show port info 0 #it should show up" - testpmd>show port stats all + testpmd> show port stats all 10. Port restart at vhost side by below command and check throughput can get expected:: - testpmd>stop - testpmd>port stop 0 - testpmd>port start 0 - testpmd>set txpkts 2000,2000,2000,2000 - testpmd>start tx_first 32 - testpmd>show port stats all + testpmd> stop + testpmd> port stop 0 + testpmd> port start 0 + testpmd> set txpkts 2000,2000,2000,2000 + testpmd> start tx_first 32 + testpmd> show port stats all 11. Check each RX/TX queue has packets:: - testpmd>stop + testpmd> stop Test Case 5: loopback reconnect test with split ring inorder non-mergeable path and server mode -=============================================================================================== +----------------------------------------------------------------------------------------------- +This case uses testpmd to test split ring inorder non-mergeable path with loopback virtio-user server mode and reconnect vhost-user and virtio-user. 1. launch vhost as client mode with 2 queues:: - rm -rf vhost-net* - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xe -n 4 --no-pci --file-prefix=vhost \ + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xe -n 4 --no-pci --file-prefix=vhost \ --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=2' -- -i --nb-cores=2 --rxq=2 --txq=2 - >set fwd mac - >start + testpmd> set fwd mac + testpmd> start 2. Launch virtio-user as server mode with 2 queues check throughput can get expected:: - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-7 --no-pci --file-prefix=virtio \ + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-7 --no-pci --file-prefix=virtio \ --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,server=1,queues=2,mrg_rxbuf=0,in_order=1 \ -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --rxq=2 --txq=2 - >set fwd mac - >start tx_first 32 - >show port stats all + testpmd> set fwd mac + testpmd> start tx_first 32 + testpmd> show port stats all 3. Quit vhost side testpmd, check the virtio-user side link status:: @@ -264,20 +302,20 @@ Test Case 5: loopback reconnect test with split ring inorder non-mergeable path 4. 
Relaunch vhost and send packets:: - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xe -n 4 --no-pci --file-prefix=vhost \ + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xe -n 4 --no-pci --file-prefix=vhost \ --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=2' -- -i --nb-cores=2 --rxq=2 --txq=2 - >set fwd mac - >start tx_first 32 + testpmd> set fwd mac + testpmd> start tx_first 32 5. Check the virtio-user side link status and run below command to get throughput, check throughput can get expected:: testpmd> show port info 0 #it should show up" - testpmd>show port stats all + testpmd> show port stats all 6. Check each RX/TX queue has packets:: - testpmd>stop + testpmd> stop 7. Quit virtio-user side testpmd, check the vhost side link status:: @@ -286,49 +324,49 @@ Test Case 5: loopback reconnect test with split ring inorder non-mergeable path 8. Relaunch virtio-user and send packets:: - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-7 --no-pci --file-prefix=virtio \ + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-7 --no-pci --file-prefix=virtio \ --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,server=1,queues=2,mrg_rxbuf=0,in_order=1 \ -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --rxq=2 --txq=2 - >set fwd mac - >start tx_first 32 + testpmd> set fwd mac + testpmd> start tx_first 32 9. Check the vhost side link status and run below command to get throughput, check throughput can get expected:: testpmd> show port info 0 #it should show up" - testpmd>show port stats all + testpmd> show port stats all 10. Port restart at vhost side by below command and check throughput can get expected:: - testpmd>stop - testpmd>port stop 0 - testpmd>port start 0 - testpmd>start tx_first 32 - testpmd>show port stats all + testpmd> stop + testpmd> port stop 0 + testpmd> port start 0 + testpmd> start tx_first 32 + testpmd> show port stats all 11. Check each RX/TX queue has packets:: - testpmd>stop + testpmd> stop Test Case 6: loopback reconnect test with split ring non-mergeable path and server mode -======================================================================================= +--------------------------------------------------------------------------------------- +This case uses testpmd to test split ring non-mergeable path with loopback virtio-user server mode and reconnect vhost-user and virtio-user. 1. launch vhost as client mode with 2 queues:: - rm -rf vhost-net* - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xe -n 4 --no-pci --file-prefix=vhost \ + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xe -n 4 --no-pci --file-prefix=vhost \ --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=2' -- -i --nb-cores=2 --rxq=2 --txq=2 - >set fwd mac - >start + testpmd> set fwd mac + testpmd> start 2. Launch virtio-user as server mode with 2 queues and check throughput can get expected:: - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-7 --no-pci --file-prefix=virtio \ + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-7 --no-pci --file-prefix=virtio \ --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,server=1,queues=2,mrg_rxbuf=0,in_order=0,vectorized=1 \ -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --rxq=2 --txq=2 - >set fwd mac - >start tx_first 32 - >show port stats all + testpmd> set fwd mac + testpmd> start tx_first 32 + testpmd> show port stats all 3. 
Quit vhost side testpmd, check the virtio-user side link status:: @@ -337,20 +375,20 @@ Test Case 6: loopback reconnect test with split ring non-mergeable path and serv 4. Relaunch vhost and send packets:: - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xe -n 4 --no-pci --file-prefix=vhost \ + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xe -n 4 --no-pci --file-prefix=vhost \ --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=2' -- -i --nb-cores=2 --rxq=2 --txq=2 - >set fwd mac - >start tx_first 32 + testpmd> set fwd mac + testpmd> start tx_first 32 5. Check the virtio-user side link status and run below command to get throughput, check throughput can get expected:: testpmd> show port info 0 #it should show up" - testpmd>show port stats all + testpmd> show port stats all 6. Check each RX/TX queue has packets:: - testpmd>stop + testpmd> stop 7. Quit virtio-user side testpmd, check the vhost side link status:: @@ -359,49 +397,49 @@ Test Case 6: loopback reconnect test with split ring non-mergeable path and serv 8. Relaunch virtio-user and send packets:: - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-7 --no-pci --file-prefix=virtio \ + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-7 --no-pci --file-prefix=virtio \ --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,server=1,queues=2,mrg_rxbuf=0,in_order=0,vectorized=1 \ -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --rxq=2 --txq=2 - >set fwd mac - >start tx_first 32 + testpmd> set fwd mac + testpmd> start tx_first 32 9. Check the vhost side link status and run below command to get throughput, check throughput can get expected:: testpmd> show port info 0 #it should show up" - testpmd>show port stats all + testpmd> show port stats all 10. Port restart at vhost side by below command and check throughput can get expected:: - testpmd>stop - testpmd>port stop 0 - testpmd>port start 0 - testpmd>start tx_first 32 - testpmd>show port stats all + testpmd> stop + testpmd> port stop 0 + testpmd> port start 0 + testpmd> start tx_first 32 + testpmd> show port stats all 11. Check each RX/TX queue has packets:: - testpmd>stop + testpmd> stop Test Case 7: loopback reconnect test with split ring vector_rx path and server mode -=================================================================================== +----------------------------------------------------------------------------------- +This case uses testpmd to test split ring vector_rx path with loopback virtio-user server mode and reconnect vhost-user and virtio-user. 1. launch vhost as client mode with 2 queues:: - rm -rf vhost-net* - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xe -n 4 --no-pci --file-prefix=vhost \ + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xe -n 4 --no-pci --file-prefix=vhost \ --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=2' -- -i --nb-cores=2 --rxq=2 --txq=2 - >set fwd mac - >start + testpmd> set fwd mac + testpmd> start 2. Launch virtio-user as server mode with 2 queues and check throughput can get expected:: - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-7 --no-pci --file-prefix=virtio \ + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-7 --no-pci --file-prefix=virtio \ --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,server=1,queues=2,mrg_rxbuf=0,in_order=0,vectorized=1 \ -- -i --nb-cores=2 --rxq=2 --txq=2 - >set fwd mac - >start tx_first 32 - >show port stats all + testpmd> set fwd mac + testpmd> start tx_first 32 + testpmd> show port stats all 3. 
Quit vhost side testpmd, check the virtio-user side link status:: @@ -410,20 +448,20 @@ Test Case 7: loopback reconnect test with split ring vector_rx path and server m 4. Relaunch vhost and send packets:: - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xe -n 4 --no-pci --file-prefix=vhost \ + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xe -n 4 --no-pci --file-prefix=vhost \ --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=2' -- -i --nb-cores=2 --rxq=2 --txq=2 - >set fwd mac - >start tx_first 32 + testpmd> set fwd mac + testpmd> start tx_first 32 5. Check the virtio-user side link status and run below command to get throughput, check throughput can get expected:: testpmd> show port info 0 #it should show up" - testpmd>show port stats all + testpmd> show port stats all 6. Check each RX/TX queue has packets:: - testpmd>stop + testpmd> stop 7. Quit virtio-user side testpmd, check the vhost side link status:: @@ -432,50 +470,50 @@ Test Case 7: loopback reconnect test with split ring vector_rx path and server m 8. Relaunch virtio-user and send packets:: - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-7 --no-pci --file-prefix=virtio \ + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-7 --no-pci --file-prefix=virtio \ --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,server=1,queues=2,mrg_rxbuf=0,in_order=0,vectorized=1 \ -- -i --nb-cores=2 --rxq=2 --txq=2 - >set fwd mac - >start tx_first 32 + testpmd> set fwd mac + testpmd> start tx_first 32 9. Check the vhost side link status and run below command to get throughput, check throughput can get expected:: testpmd> show port info 0 #it should show up" - testpmd>show port stats all + testpmd> show port stats all 10. Port restart at vhost side by below command and check throughput can get expected:: - testpmd>stop - testpmd>port stop 0 - testpmd>port start 0 - testpmd>start tx_first 32 - testpmd>show port stats all + testpmd> stop + testpmd> port stop 0 + testpmd> port start 0 + testpmd> start tx_first 32 + testpmd> show port stats all 11. Check each RX/TX queue has packets:: - testpmd>stop + testpmd> stop Test Case 8: loopback reconnect test with packed ring mergeable path and server mode -==================================================================================== +------------------------------------------------------------------------------------ +This case uses testpmd to test packed ring mergeable path with loopback virtio-user server mode and reconnect vhost-user and virtio-user. 1. launch vhost as client mode with 2 queues:: - rm -rf vhost-net* - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xe -n 4 --no-pci --file-prefix=vhost \ + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xe -n 4 --no-pci --file-prefix=vhost \ --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=2' -- -i --nb-cores=2 --rxq=2 --txq=2 - >set fwd mac - >start + testpmd> set fwd mac + testpmd> start 2. 
Launch virtio-user as server mode with 2 queues and check throughput can get expected:: - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-7 --no-pci --file-prefix=virtio \ + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-7 --no-pci --file-prefix=virtio \ --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,server=1,queues=2,packed_vq=1,mrg_rxbuf=1,in_order=0 \ -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --rxq=2 --txq=2 - >set fwd mac - >set txpkts 2000,2000,2000,2000 - >start tx_first 32 - >show port stats all + testpmd> set fwd mac + testpmd> set txpkts 2000,2000,2000,2000 + testpmd> start tx_first 32 + testpmd> show port stats all 3. Quit vhost side testpmd, check the virtio-user side link status:: @@ -484,21 +522,21 @@ Test Case 8: loopback reconnect test with packed ring mergeable path and server 4. Relaunch vhost and send packets:: - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xe -n 4 --no-pci --file-prefix=vhost \ + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xe -n 4 --no-pci --file-prefix=vhost \ --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=2' -- -i --nb-cores=2 --rxq=2 --txq=2 - >set fwd mac - >set txpkts 2000,2000,2000,2000 - >start tx_first 32 + testpmd> set fwd mac + testpmd> set txpkts 2000,2000,2000,2000 + testpmd> start tx_first 32 5. Check the virtio-user side link status and run below command to get throughput, check throughput can get expected:: testpmd> show port info 0 #it should show up" - testpmd>show port stats all + testpmd> show port stats all 6. Check each RX/TX queue has packets:: - testpmd>stop + testpmd> stop 7. Quit virtio-user side testpmd, check the vhost side link status:: @@ -507,51 +545,51 @@ Test Case 8: loopback reconnect test with packed ring mergeable path and server 8. Relaunch virtio-user and send packets:: - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-7 --no-pci --file-prefix=virtio \ + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-7 --no-pci --file-prefix=virtio \ --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,server=1,queues=2,packed_vq=1,mrg_rxbuf=1,in_order=0 \ -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --rxq=2 --txq=2 - >set fwd mac - >set txpkts 2000,2000,2000,2000 - >start tx_first 32 + testpmd> set fwd mac + testpmd> set txpkts 2000,2000,2000,2000 + testpmd> start tx_first 32 9. Check the vhost side link status and run below command to get throughput, check throughput can get expected:: testpmd> show port info 0 #it should show up" - testpmd>show port stats all + testpmd> show port stats all 10. Port restart at vhost side by below command and check throughput can get expected:: - testpmd>stop - testpmd>port stop 0 - testpmd>port start 0 - testpmd>set txpkts 2000,2000,2000,2000 - testpmd>start tx_first 32 - testpmd>show port stats all + testpmd> stop + testpmd> port stop 0 + testpmd> port start 0 + testpmd> set txpkts 2000,2000,2000,2000 + testpmd> start tx_first 32 + testpmd> show port stats all 11. Check each RX/TX queue has packets:: - testpmd>stop + testpmd>stop Test Case 9: loopback reconnect test with packed ring non-mergeable path and server mode -======================================================================================== +---------------------------------------------------------------------------------------- +This case uses testpmd to test packed ring non-mergeable path with loopback virtio-user server mode and reconnect vhost-user and virtio-user. 1. 
launch vhost as client mode with 2 queues:: - rm -rf vhost-net* - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xe -n 4 --no-pci --file-prefix=vhost \ + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xe -n 4 --no-pci --file-prefix=vhost \ --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=2' -- -i --nb-cores=2 --rxq=2 --txq=2 - >set fwd mac - >start + testpmd> set fwd mac + testpmd> start 2. Launch virtio-user as server mode with 2 queues and check throughput can get expected:: - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-7 --no-pci --file-prefix=virtio \ + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-7 --no-pci --file-prefix=virtio \ --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,server=1,queues=2,packed_vq=1,mrg_rxbuf=0,in_order=0 \ -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --rxq=2 --txq=2 - >set fwd mac - >start tx_first 32 - >show port stats all + testpmd> set fwd mac + testpmd> start tx_first 32 + testpmd> show port stats all 3. Quit vhost side testpmd, check the virtio-user side link status:: @@ -560,16 +598,16 @@ Test Case 9: loopback reconnect test with packed ring non-mergeable path and ser 4. Relaunch vhost and send packets:: - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xe -n 4 --no-pci --file-prefix=vhost \ + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xe -n 4 --no-pci --file-prefix=vhost \ --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=2' -- -i --nb-cores=2 --rxq=2 --txq=2 - >set fwd mac - >start tx_first 32 + testpmd> set fwd mac + testpmd> start tx_first 32 5. Check the virtio-user side link status and run below command to get throughput, check throughput can get expected:: testpmd> show port info 0 #it should show up" - testpmd>show port stats all + testpmd> show port stats all 6. Check each RX/TX queue has packets:: @@ -582,50 +620,50 @@ Test Case 9: loopback reconnect test with packed ring non-mergeable path and ser 8. Relaunch virtio-user and send packets:: - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-7 --no-pci --file-prefix=virtio \ + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-7 --no-pci --file-prefix=virtio \ --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,server=1,queues=2,packed_vq=1,mrg_rxbuf=0,in_order=0 \ -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --rxq=2 --txq=2 - >set fwd mac - >start tx_first 32 + testpmd> set fwd mac + testpmd> start tx_first 32 9. Check the vhost side link status and run below command to get throughput, check throughput can get expected:: testpmd> show port info 0 #it should show up" - testpmd>show port stats all + testpmd> show port stats all 10. Port restart at vhost side by below command and check throughput can get expected:: - testpmd>stop - testpmd>port stop 0 - testpmd>port start 0 - testpmd>start tx_first 32 - testpmd>show port stats all + testpmd> stop + testpmd> port stop 0 + testpmd> port start 0 + testpmd> start tx_first 32 + testpmd> show port stats all 11. Check each RX/TX queue has packets:: - testpmd>stop + testpmd> stop Test Case 10: loopback reconnect test with packed ring inorder mergeable path and server mode -============================================================================================= +--------------------------------------------------------------------------------------------- +This case uses testpmd to test packed ring inorder mergeable path with loopback virtio-user server mode and reconnect vhost-user and virtio-user. 1. 
launch vhost as client mode with 8 queues:: - rm -rf vhost-net* - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xe -n 4 --no-pci --file-prefix=vhost \ + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xe -n 4 --no-pci --file-prefix=vhost \ --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=8' -- -i --nb-cores=2 --rxq=8 --txq=8 - >set fwd mac - >start + testpmd> set fwd mac + testpmd> start 2. Launch virtio-user as server mode with 8 queues and check throughput can get expected:: - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-7 --no-pci --file-prefix=virtio \ + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-7 --no-pci --file-prefix=virtio \ --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,server=1,queues=8,packed_vq=1,mrg_rxbuf=1,in_order=1 \ -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --rxq=8 --txq=8 - >set fwd mac - >set txpkts 2000,2000,2000,2000 - >start tx_first 32 - >show port stats all + testpmd> set fwd mac + testpmd> set txpkts 2000,2000,2000,2000 + testpmd> start tx_first 32 + testpmd> show port stats all 3. Quit vhost side testpmd, check the virtio-user side link status:: @@ -634,21 +672,21 @@ Test Case 10: loopback reconnect test with packed ring inorder mergeable path an 4. Relaunch vhost and send packets:: - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xe -n 4 --no-pci --file-prefix=vhost \ + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xe -n 4 --no-pci --file-prefix=vhost \ --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=8' -- -i --nb-cores=2 --rxq=8 --txq=8 - >set fwd mac - >set txpkts 2000,2000,2000,2000 - >start tx_first 32 + testpmd> set fwd mac + testpmd> set txpkts 2000,2000,2000,2000 + testpmd> start tx_first 32 5. Check the virtio-user side link status and run below command to get throughput, check throughput can get expected:: testpmd> show port info 0 #it should show up" - testpmd>show port stats all + testpmd> show port stats all 6. Check each RX/TX queue has packets:: - testpmd>stop + testpmd> stop 7. Quit virtio-user side testpmd, check the vhost side link status:: @@ -657,51 +695,51 @@ Test Case 10: loopback reconnect test with packed ring inorder mergeable path an 8. Relaunch virtio-user and send packets:: - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-7 --no-pci --file-prefix=virtio \ + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-7 --no-pci --file-prefix=virtio \ --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,server=1,queues=8,packed_vq=1,mrg_rxbuf=1,in_order=1 \ -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --rxq=8 --txq=8 - >set fwd mac - >set txpkts 2000,2000,2000,2000 - >start tx_first 32 + testpmd> set fwd mac + testpmd> set txpkts 2000,2000,2000,2000 + testpmd> start tx_first 32 9. Check the vhost side link status and run below command to get throughput, check throughput can get expected:: testpmd> show port info 0 #it should show up" - testpmd>show port stats all + testpmd> show port stats all 10. Port restart at vhost side by below command and check throughput can get expected:: - testpmd>stop - testpmd>port stop 0 - testpmd>port start 0 - testpmd>set txpkts 2000,2000,2000,2000 - testpmd>start tx_first 32 - testpmd>show port stats all + testpmd> stop + testpmd> port stop 0 + testpmd> port start 0 + testpmd> set txpkts 2000,2000,2000,2000 + testpmd> start tx_first 32 + testpmd> show port stats all 11. 
Check each RX/TX queue has packets:: - testpmd>stop + testpmd> stop Test Case 11: loopback reconnect test with packed ring inorder non-mergeable path and server mode -================================================================================================= +------------------------------------------------------------------------------------------------- +This case uses testpmd to test packed ring inorder non-mergeable path with loopback virtio-user server mode and reconnect vhost-user and virtio-user. 1. launch vhost as client mode with 2 queues:: - rm -rf vhost-net* - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xe -n 4 --no-pci --file-prefix=vhost \ + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xe -n 4 --no-pci --file-prefix=vhost \ --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=2' -- -i --nb-cores=2 --rxq=2 --txq=2 - >set fwd mac - >start + testpmd> set fwd mac + testpmd> start 2. Launch virtio-user as server mode with 2 queues and check throughput can get expected:: - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-7 --no-pci --file-prefix=virtio \ + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-7 --no-pci --file-prefix=virtio \ --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,server=1,queues=2,packed_vq=1,mrg_rxbuf=0,in_order=1,vectorized=1 \ -- -i --rx-offloads=0x10 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --rxq=2 --txq=2 - >set fwd mac - >start tx_first 32 - >show port stats all + testpmd> set fwd mac + testpmd> start tx_first 32 + testpmd> show port stats all 3. Quit vhost side testpmd, check the virtio-user side link status:: @@ -710,20 +748,20 @@ Test Case 11: loopback reconnect test with packed ring inorder non-mergeable pat 4. Relaunch vhost and send packets:: - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xe -n 4 --no-pci --file-prefix=vhost \ + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xe -n 4 --no-pci --file-prefix=vhost \ --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=2' -- -i --nb-cores=2 --rxq=2 --txq=2 - >set fwd mac - >start tx_first 32 + testpmd> set fwd mac + testpmd> start tx_first 32 5. Check the virtio-user side link status and run below command to get throughput, check throughput can get expected:: testpmd> show port info 0 #it should show up" - testpmd>show port stats all + testpmd> show port stats all 6. Check each RX/TX queue has packets:: - testpmd>stop + testpmd> stop 7. Quit virtio-user side testpmd, check the vhost side link status:: @@ -732,49 +770,49 @@ Test Case 11: loopback reconnect test with packed ring inorder non-mergeable pat 8. Relaunch virtio-user and send packets:: - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-7 --no-pci --file-prefix=virtio \ + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-7 --no-pci --file-prefix=virtio \ --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,server=1,queues=2,packed_vq=1,mrg_rxbuf=0,in_order=1,vectorized=1 \ -- -i --rx-offloads=0x10 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --rxq=2 --txq=2 - >set fwd mac - >start tx_first 32 + testpmd> set fwd mac + testpmd> start tx_first 32 9. Check the vhost side link status and run below command to get throughput, check throughput can get expected:: testpmd> show port info 0 #it should show up" - testpmd>show port stats all + testpmd> show port stats all 10. 
Port restart at vhost side by below command and check throughput can get expected:: - testpmd>stop - testpmd>port stop 0 - testpmd>port start 0 - testpmd>start tx_first 32 - testpmd>show port stats all + testpmd> stop + testpmd> port stop 0 + testpmd> port start 0 + testpmd> start tx_first 32 + testpmd> show port stats all 11. Check each RX/TX queue has packets:: - testpmd>stop + testpmd> stop Test Case 12: loopback reconnect test with packed ring vectorized path and server mode -======================================================================================= +-------------------------------------------------------------------------------------- +This case uses testpmd to test packed ring vectorized path with loopback virtio-user server mode and reconnect vhost-user and virtio-user. 1. launch vhost as client mode with 2 queues:: - rm -rf vhost-net* - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xe -n 4 --log-level=pmd.net.vhost.driver,8 --no-pci --file-prefix=vhost \ + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xe -n 4 --log-level=pmd.net.vhost.driver,8 --no-pci --file-prefix=vhost \ --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=2' -- -i --nb-cores=2 --rxq=2 --txq=2 - >set fwd mac - >start + testpmd> set fwd mac + testpmd> start 2. Launch virtio-user as server mode with 2 queues and check throughput can get expected:: - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-7 --log-level=pmd.net.virtio.driver,8 --no-pci --file-prefix=virtio --force-max-simd-bitwidth=512 \ + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-7 --log-level=pmd.net.virtio.driver,8 --no-pci --file-prefix=virtio --force-max-simd-bitwidth=512 \ --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,server=1,queues=2,packed_vq=1,mrg_rxbuf=0,in_order=1,vectorized=1 \ -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --rxq=2 --txq=2 - >set fwd mac - >start tx_first 32 - >show port stats all + testpmd> set fwd mac + testpmd> start tx_first 32 + testpmd> show port stats all 3. Quit vhost side testpmd, check the virtio-user side link status:: @@ -783,20 +821,20 @@ Test Case 12: loopback reconnect test with packed ring vectorized path and serve 4. Relaunch vhost and send packets:: - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xe -n 4 --no-pci --file-prefix=vhost \ + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xe -n 4 --no-pci --file-prefix=vhost \ --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=2' -- -i --nb-cores=2 --rxq=2 --txq=2 - >set fwd mac - >start tx_first 32 + testpmd> set fwd mac + testpmd> start tx_first 32 5. Check the virtio-user side link status and run below command to get throughput, check throughput can get expected:: testpmd> show port info 0 #it should show up" - testpmd>show port stats all + testpmd> show port stats all 6. Check each RX/TX queue has packets:: - testpmd>stop + testpmd> stop 7. Quit virtio-user side testpmd, check the vhost side link status:: @@ -805,46 +843,52 @@ Test Case 12: loopback reconnect test with packed ring vectorized path and serve 8. 
Relaunch virtio-user and send packets:: - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-7 --no-pci --file-prefix=virtio --force-max-simd-bitwidth=512 \ + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-7 --no-pci --file-prefix=virtio --force-max-simd-bitwidth=512 \ --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,server=1,queues=2,packed_vq=1,mrg_rxbuf=0,in_order=1,vectorized=1 \ -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --rxq=2 --txq=2 - >set fwd mac - >start tx_first 32 + testpmd> set fwd mac + testpmd> start tx_first 32 9. Check the vhost side link status and run below command to get throughput, check throughput can get expected:: testpmd> show port info 0 #it should show up" - testpmd>show port stats all + testpmd> show port stats all 10. Port restart at vhost side by below command and check throughput can get expected:: - testpmd>stop - testpmd>port stop 0 - testpmd>port start 0 - testpmd>start tx_first 32 - testpmd>show port stats all + testpmd> stop + testpmd> port stop 0 + testpmd> port start 0 + testpmd> start tx_first 32 + testpmd> show port stats all 11. Check each RX/TX queue has packets:: - testpmd>stop + testpmd>stop Test Case 13: loopback packed ring all path payload check test using server mode and multi-queues -================================================================================================= +------------------------------------------------------------------------------------------------- +This case uses testpmd to test packed ring all path with loopback virtio-user server mode and multi-queues to check payload. 1. launch vhost:: - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 32-33 --no-pci --file-prefix=vhost -n 4 --vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1' -- -i --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024 + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 32-33 --no-pci --file-prefix=vhost -n 4 \ + --vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1' -- -i --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024 2. Launch virtio-user with packed ring mergeable inorder path:: - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 30,31 -n 4 --file-prefix=virtio --no-pci --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=1,in_order=1,packed_vq=1,server=1 -- -i --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024 - testpmd> set fwd csum - testpmd> start + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 30,31 -n 4 --file-prefix=virtio --no-pci \ + --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=1,in_order=1,packed_vq=1,server=1 \ + -- -i --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024 + testpmd> set fwd csum + testpmd> start 3. Attach pdump secondary process to primary process by same file-prefix:: - ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio -- --pdump 'device_id=net_virtio_user0,queue=*,rx-dev=./pdump-virtio-rx.pcap,mbuf-size=8000' + ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio -- \ + --pdump 'device_id=net_virtio_user0,queue=0,rx-dev=./pdump-virtio-rx-q0.pcap,mbuf-size=8000' + --pdump 'device_id=net_virtio_user0,queue=1,rx-dev=./pdump-virtio-rx-q1.pcap,mbuf-size=8000' 4. Send large pkts from vhost:: @@ -860,29 +904,31 @@ Test Case 13: loopback packed ring all path payload check test using server mode 7. 
Quit and relaunch virtio with packed ring mergeable path as below:: - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 30,31 -n 4 --file-prefix=virtio-user0 --no-pci \ - --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=1,in_order=0,packed_vq=1,server=1 -- -i --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024 - testpmd> set fwd csum - testpmd> start + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 30,31 -n 4 --file-prefix=virtio-user0 --no-pci \ + --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=1,in_order=0,packed_vq=1,server=1 \ + -- -i --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024 + testpmd> set fwd csum + testpmd> start 8. Rerun step 3-6. 9. Quit and relaunch virtio with packed ring non-mergeable path as below:: - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 30,31 -n 4 --file-prefix=virtio-user0 --no-pci \ - --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=0,in_order=0,packed_vq=1,server=1 -- -i --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024 - testpmd> set fwd csum - testpmd> start + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 30,31 -n 4 --file-prefix=virtio-user0 --no-pci \ + --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=0,in_order=0,packed_vq=1,server=1 \ + -- -i --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024 + testpmd> set fwd csum + testpmd> start 10. Rerun step 3. 11. Send pkts from vhost:: - testpmd> set fwd csum - testpmd> set txpkts 64,128,256,512 - testpmd> set burst 1 - testpmd> start tx_first 1 - testpmd> stop + testpmd> set fwd csum + testpmd> set txpkts 64,128,256,512 + testpmd> set burst 1 + testpmd> start tx_first 1 + testpmd> stop 12. Quit pdump, check all the packets length are 960 Byte in the pcap file and the payload in receive packets are same. @@ -890,50 +936,56 @@ Test Case 13: loopback packed ring all path payload check test using server mode 14. Quit and relaunch virtio with packed ring inorder non-mergeable path as below:: - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 30,31 -n 4 --file-prefix=virtio-user0 --no-pci \ - --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=0,in_order=1,packed_vq=1,server=1 -- -i --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024 - testpmd> set fwd csum - testpmd> start + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 30,31 -n 4 --file-prefix=virtio-user0 --no-pci \ + --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=0,in_order=1,packed_vq=1,server=1 \ + -- -i --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024 + testpmd> set fwd csum + testpmd> start 15. Rerun step 10-13. 16. Quit and relaunch virtio with packed ring vectorized path as below:: - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 30,31 -n 4 --file-prefix=virtio-user0 --no-pci --force-max-simd-bitwidth=512 \ - --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=0,in_order=1,vectorized=1,packed_vq=1,server=1 -- -i --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024 - testpmd> set fwd csum - testpmd> start + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 30,31 -n 4 --file-prefix=virtio-user0 --no-pci --force-max-simd-bitwidth=512 \ + --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=0,in_order=1,vectorized=1,packed_vq=1,server=1 \ + -- -i --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024 + testpmd> set fwd csum + testpmd> start 17 Rerun step 10-13. 18. 
Quit and relaunch virtio with packed ring vectorized path and ring size is not power of 2 as below:: - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 30,31 -n 4 --file-prefix=virtio-user0 --no-pci --force-max-simd-bitwidth=512 \ - --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=0,in_order=1,vectorized=1,packed_vq=1,queue_size=1025,server=1 \ - -- -i --nb-cores=1 --rxq=8 --txq=8 --txd=1025 --rxd=1025 - testpmd> set fwd csum - testpmd> start + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 30,31 -n 4 --file-prefix=virtio-user0 --no-pci --force-max-simd-bitwidth=512 \ + --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=0,in_order=1,vectorized=1,packed_vq=1,queue_size=1025,server=1 \ + -- -i --nb-cores=1 --rxq=8 --txq=8 --txd=1025 --rxd=1025 + testpmd> set fwd csum + testpmd> start 19. Rerun step 10-13. Test Case 14: loopback split ring all path payload check test using server mode and multi-queues -================================================================================================ +------------------------------------------------------------------------------------------------ +This case uses testpmd to test split ring all path with loopback virtio-user server mode and multi-queues to check payload. 1. Launch vhost:: - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 32-33 --no-pci --file-prefix=vhost -n 4 --vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1' -- -i --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024 + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 32-33 --no-pci --file-prefix=vhost -n 4 \ + --vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1' -- -i --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024 2. Launch virtio-user with split ring mergeable inorder path:: - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 30,31 -n 4 --file-prefix=virtio-user0 --no-pci \ + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 30,31 -n 4 --file-prefix=virtio-user0 --no-pci \ --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=1,in_order=1,server=1 \ -- -i --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024 - testpmd>set fwd csum - testpmd>start + testpmd>set fwd csum + testpmd>start 3. Attach pdump secondary process to primary process by same file-prefix:: - ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio-user0 -- --pdump 'device_id=net_virtio_user0,queue=*,rx-dev=./pdump-virtio-rx.pcap,mbuf-size=8000' + ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio -- \ + --pdump 'device_id=net_virtio_user0,queue=0,rx-dev=./pdump-virtio-rx-q0.pcap,mbuf-size=8000' + --pdump 'device_id=net_virtio_user0,queue=1,rx-dev=./pdump-virtio-rx-q1.pcap,mbuf-size=8000' 4. Send large pkts from vhost:: @@ -949,31 +1001,31 @@ Test Case 14: loopback split ring all path payload check test using server mode 7. Quit and relaunch virtio with split ring mergeable path as below:: - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 30,31 -n 4 --file-prefix=virtio-user0 --no-pci \ + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 30,31 -n 4 --file-prefix=virtio-user0 --no-pci \ --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=1,in_order=0,server=1 \ -- -i --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024 - testpmd>set fwd csum - testpmd>start + testpmd>set fwd csum + testpmd>start 8. Rerun steps 3-6. 9. 
Quit and relaunch virtio with split ring non-mergeable path as below:: - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 30,31 -n 4 --file-prefix=virtio-user0 --no-pci \ + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 30,31 -n 4 --file-prefix=virtio-user0 --no-pci \ --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=0,in_order=0,server=1 \ -- -i --enable-hw-vlan-strip --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024 - testpmd>set fwd csum - testpmd>start + testpmd>set fwd csum + testpmd>start 10. Rerun step 3. 11. Send pkts from vhost:: - testpmd> set fwd csum - testpmd> set txpkts 64,128,256,512 - testpmd> set burst 1 - testpmd> start tx_first 1 - testpmd> stop + testpmd> set fwd csum + testpmd> set txpkts 64,128,256,512 + testpmd> set burst 1 + testpmd> start tx_first 1 + testpmd> stop 12. Quit pdump, check all the packets length are 960 Byte in the pcap file and the payload in receive packets are same. @@ -981,220 +1033,20 @@ Test Case 14: loopback split ring all path payload check test using server mode 14. Quit and relaunch virtio with split ring inorder non-mergeable path as below:: - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 30,31 -n 4 --file-prefix=virtio-user0 --no-pci \ - --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=0,in_order=1,server=1 \ - -- -i --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024 - testpmd>set fwd csum - testpmd>start + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 30,31 -n 4 --file-prefix=virtio-user0 --no-pci \ + --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=0,in_order=1,server=1 \ + -- -i --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024 + testpmd>set fwd csum + testpmd>start 15. Rerun step 10-13. 16. Quit and relaunch virtio with split ring vectorized path as below:: - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 30,31 -n 4 --file-prefix=virtio-user0 --no-pci \ - --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=0,in_order=0,vectorized=1,server=1 \ - -- -i --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024 - testpmd>set fwd csum - testpmd>start - -17. Rerun step 10-13. - -Test Case 15: loopback packed ring all path cbdma test payload check with server mode and multi-queues -====================================================================================================== - -1. bind 8 cbdma port to vfio-pci and launch vhost:: - - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 32-33 -n 4 \ - --vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0@0000:80:04.0;txq1@0000:80:04.1;txq2@0000:80:04.2;txq3@0000:80:04.3;txq4@0000:80:04.4;txq5@0000:80:04.5;txq6@0000:80:04.6;txq7@0000:80:04.7]' \ - --iova=va -- -i --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024 - -2. Launch virtio-user with packed ring mergeable inorder path:: - - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 30,31 -n 4 --file-prefix=virtio-user0 --no-pci \ - --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=1,in_order=1,packed_vq=1,server=1 - -- -i --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024 - testpmd>set fwd csum - testpmd>start - -3. Attach pdump secondary process to primary process by same file-prefix:: - - ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio-user0 -- --pdump 'device_id=net_virtio_user0,queue=*,rx-dev=./pdump-virtio-rx.pcap,mbuf-size=8000' - -4. 
Send large pkts from vhost, check loopback performance can get expected and each queue can receive packets:: - - testpmd> vhost enable tx all - testpmd> set fwd csum - testpmd> set txpkts 64,64,64,2000,2000,2000 - testpmd> set burst 1 - testpmd> start tx_first 1 - testpmd> stop - -5. Quit pdump, check all the packets length are 6192 Byte in the pcap file, and the payload in receive packets are same. - -6. Quit and relaunch vhost and rerun step3-5. - -7. Quit and relaunch virtio with packed ring mergeable path as below:: - - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 30,31 -n 4 --file-prefix=virtio-user0 --no-pci \ - --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=1,in_order=0,packed_vq=1,server=1 \ - -- -i --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024 - testpmd>set fwd csum - testpmd>start - -8. Rerun steps 3-6. - -9. Quit and relaunch virtio with packed ring non-mergeable path as below:: - - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 30,31 -n 4 --file-prefix=virtio-user0 --no-pci \ - --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=0,in_order=0,packed_vq=1,server=1 \ - -- -i --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024 - testpmd>set fwd csum - testpmd>start - -10. Rerun step 3. - -11. Send pkts from vhost:: - - testpmd> vhost enable tx all - testpmd> set fwd csum - testpmd> set txpkts 64,128,256,512 - testpmd> set burst 1 - testpmd> start tx_first 1 - testpmd> stop - -12. Quit pdump, check all the packets length are 960 Byte in the pcap file and the payload in receive packets are same. - -13. Quit and relaunch vhost and rerun step 10-12. - -14. Quit and relaunch virtio with packed ring inorder non-mergeable path as below:: - - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 30,31 -n 4 --file-prefix=virtio-user0 --no-pci \ - --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=0,in_order=1,packed_vq=1,server=1 \ - -- -i --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024 - testpmd>set fwd csum - testpmd>start - -15. Rerun step 10-13. - -16. Quit and relaunch virtio with packed ring vectorized path as below:: - - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 30,31 -n 4 --file-prefix=virtio-user0 --no-pci --force-max-simd-bitwidth=512 \ - --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=0,in_order=1,packed_vq=1,vectorized=1,server=1 \ - -- -i --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024 - testpmd>set fwd csum - testpmd>start - -17. Rerun step 10-13. - -18. Quit and relaunch virtio with packed ring vectorized path and ring size is not power of 2 as below:: - - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 30,31 -n 4 --file-prefix=virtio-user0 --no-pci --force-max-simd-bitwidth=512 \ - --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=0,in_order=1,packed_vq=1,vectorized=1,queue_size=1025,server=1 \ - -- -i --nb-cores=1 --rxq=8 --txq=8 --txd=1025 --rxd=1025 - testpmd>set fwd csum - testpmd>start - -19. Rerun step 10-13. - -20. Quit and relaunch vhost w/ iova=pa:: - - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-3 --file-prefix=vhost -n 4 \ - --vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.1;txq2@0000:00:04.2;txq3@0000:00:04.3;txq4@0000:00:04.4;txq5@0000:00:04.5;txq6@0000:00:04.6;txq7@0000:00:04.7]' \ - --iova=pa -- -i --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024 - -21. Quit virtio and rerun steps 2-19. 
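The "Quit pdump, check all the packets length ... and the payload in receive packets are same" steps above can also be scripted rather than checked by hand. The following is only an illustrative sketch -- it assumes scapy is installed on the DUT and that the pcap path matches the one passed to dpdk-pdump (the file name and helper below are hypothetical; the DTS suite itself performs this check with its own Packet helper)::

    # payload_check.py - hypothetical helper, not part of the DTS suite
    from scapy.all import rdpcap, Raw

    def check_pcap(pcap_file, expect_len):
        pkts = rdpcap(pcap_file)                  # read the capture written by dpdk-pdump
        assert len(pkts) > 0, "no packets captured"
        expect_payload = bytes(pkts[0][Raw])      # payload of the first packet is the reference
        for i, pkt in enumerate(pkts):
            # every captured packet must have the expected total length
            assert len(pkt) == expect_len, "pkt %d length %d != %d" % (i, len(pkt), expect_len)
            # and its payload must be identical to the reference payload
            assert bytes(pkt[Raw]) == expect_payload, "payload changed in pkt %d" % i

    # e.g. 6192 Byte packets are expected after "set txpkts 64,64,64,2000,2000,2000"
    check_pcap("./pdump-virtio-rx.pcap", 6192)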
- -Test Case 16: loopback split ring all path cbdma test payload check with server mode and multi-queues -===================================================================================================== - -1. bind 8 cbdma port to vfio-pci and launch vhost:: - - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 32-33 -n 4 \ - --vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0@0000:80:04.0;txq1@0000:80:04.1;txq2@0000:80:04.2;txq3@0000:80:04.3;txq4@0000:80:04.4;txq5@0000:80:04.5;txq6@0000:80:04.6;txq7@0000:80:04.7]' \ - --iova=va -- -i --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024 - -2. Launch virtio-user with split ring mergeable inorder path:: - - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 30,31 -n 4 --file-prefix=virtio-user0 --no-pci \ - --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=1,in_order=1,server=1 \ - -- -i --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024 - testpmd>set fwd csum - testpmd>start - -3. Attach pdump secondary process to primary process by same file-prefix:: - - ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio-user0 -- --pdump 'device_id=net_virtio_user0,queue=*,rx-dev=./pdump-virtio-rx.pcap,mbuf-size=8000' - -4. Send large pkts from vhost:: - - testpmd> vhost enable tx all - testpmd> set fwd csum - testpmd> set txpkts 64,64,64,2000,2000,2000 - testpmd> set burst 1 - testpmd> start tx_first 1 - testpmd> stop - -5. Quit pdump, check all the packets length are 6192 Byte in the pcap file and the payload in receive packets are same. - -6. Quit and relaunch vhost and rerun step3-5. - -7. Quit and relaunch virtio with split ring mergeable path as below:: - - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 30,31 -n 4 --file-prefix=virtio-user0 --no-pci \ - --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=1,in_order=0,server=1 \ - -- -i --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024 - testpmd>set fwd csum - testpmd>start - -8. Rerun steps 3-6. - -9. Quit and relaunch virtio with split ring non-mergeable path as below:: - - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 30,31 -n 4 --file-prefix=virtio-user0 --no-pci \ - --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=0,in_order=0,server=1 \ - -- -i --enable-hw-vlan-strip --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024 - testpmd>set fwd csum - testpmd>start - -10. Rerun step 3. - -11. Send pkts from vhost:: - - testpmd> vhost enable tx all - testpmd> set fwd csum - testpmd> set txpkts 64,128,256,512 - testpmd> set burst 1 - testpmd> start tx_first 1 - testpmd> stop - -12. Quit pdump, check all the packets length are 960 Byte in the pcap file and the payload in receive packets are same. - -13. Quit and relaunch vhost and rerun step 10-12. - -14. Quit and relaunch virtio with split ring inorder non-mergeable path as below:: - - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 30,31 -n 4 --file-prefix=virtio-user0 --no-pci \ - --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=0,in_order=1,server=1 \ - -- -i --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024 - testpmd>set fwd csum - testpmd>start - -15. Rerun step 10-13. - -16. 
Quit and relaunch virtio with split ring vectorized path as below:: - - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 30,31 -n 4 --file-prefix=virtio-user0 --no-pci \ - --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=0,in_order=0,vectorized=1,server=1 \ - -- -i --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024 - testpmd>set fwd csum - testpmd>start - -17. Rerun step 10-13. - -18. Quit and relaunch vhost w/ iova=pa:: - - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-3 --file-prefix=vhost -n 4 \ - --vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.1;txq2@0000:00:04.2;txq3@0000:00:04.3;txq4@0000:00:04.4;txq5@0000:00:04.5;txq6@0000:00:04.6;txq7@0000:00:04.7]' \ - --iova=pa -- -i --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024 + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 30,31 -n 4 --file-prefix=virtio-user0 --no-pci \ + --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=0,in_order=0,vectorized=1,server=1 \ + -- -i --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024 + testpmd>set fwd csum + testpmd>start -19. Quit virtio and rerun steps 2-17. +17. Rerun step 10-13. \ No newline at end of file From patchwork Fri Apr 15 03:25:51 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Ling, WeiX" X-Patchwork-Id: 109735 Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 003F9A050B; Fri, 15 Apr 2022 05:25:59 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id EE6F240140; Fri, 15 Apr 2022 05:25:59 +0200 (CEST) Received: from mga04.intel.com (mga04.intel.com [192.55.52.120]) by mails.dpdk.org (Postfix) with ESMTP id CB0634003C for ; Fri, 15 Apr 2022 05:25:57 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1649993158; x=1681529158; h=from:to:cc:subject:date:message-id:mime-version: content-transfer-encoding; bh=cutA0oaBd6R6GmmrsNHwYXywv/eQWX3nCq0+7Yb6SjI=; b=H3t/S8QaTxUWXj/MNFnfupjOdAhbSuUTA64jr0BPv31WoDFXms5RFly3 EI1gJ0HYouBoFdWpl3XRnaxRt9BzDqfQrK8gR/NCJX82Ju2UPI88BPY4p JLPg0WdtGm+9QE1gm4nsivYrEt8nfvr+v80Vd97XRDGKuGuZCDRzwWXoD VFxgnzv6++u4EUZsJ+XGiBti0vIb6SKcGwLypsWGL8NTycmTV6YSwijsQ DRezAULQbMRnkkHN2tsLHj6dZMfkETS3+gnynkcpvY0uLoTo4aJLVE8+1 yIpBt5cYK1GinxkLcy8M73oEktlbBKvTRFxM5L0Nu/uXq9s6dtCVELM5M A==; X-IronPort-AV: E=McAfee;i="6400,9594,10317"; a="261929505" X-IronPort-AV: E=Sophos;i="5.90,261,1643702400"; d="scan'208";a="261929505" Received: from fmsmga001.fm.intel.com ([10.253.24.23]) by fmsmga104.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 14 Apr 2022 20:25:56 -0700 X-IronPort-AV: E=Sophos;i="5.90,261,1643702400"; d="scan'208";a="700907828" Received: from unknown (HELO localhost.localdomain) ([10.239.251.222]) by fmsmga001-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 14 Apr 2022 20:25:55 -0700 From: Wei Ling To: dts@dpdk.org Cc: Wei Ling Subject: [dts][PATCH V1 2/5] tests/loopback_virtio_user_server_mode: delete CBDMA test case and useless code Date: Fri, 15 Apr 2022 11:25:51 +0800 Message-Id: <20220415032551.251430-1-weix.ling@intel.com> X-Mailer: git-send-email 2.25.1 MIME-Version: 1.0 X-BeenThere: dts@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: test suite reviews and discussions 
List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dts-bounces@dpdk.org As commit 53d3f4778c(vhost: integrate dmadev in asynchronous data-path), delete cbdma related cases form tests/loopback_virtio_user_server_mode. Signed-off-by: Wei Ling --- ...tSuite_loopback_virtio_user_server_mode.py | 758 ++++++------------ 1 file changed, 236 insertions(+), 522 deletions(-) diff --git a/tests/TestSuite_loopback_virtio_user_server_mode.py b/tests/TestSuite_loopback_virtio_user_server_mode.py index a59d34b1..cea5dc82 100644 --- a/tests/TestSuite_loopback_virtio_user_server_mode.py +++ b/tests/TestSuite_loopback_virtio_user_server_mode.py @@ -39,7 +39,6 @@ import copy import re import time -import framework.utils as utils from framework.packet import Packet from framework.pmd_output import PmdOutput from framework.test_case import TestCase @@ -70,8 +69,13 @@ class TestLoopbackVirtioUserServerMode(TestCase): self.testpmd_name = self.path.split("/")[-1] self.app_pdump = self.dut.apps_name["pdump"] self.dump_pcap = "/root/pdump-rx.pcap" - self.device_str = "" - self.cbdma_dev_infos = [] + self.dump_pcap_q0 = "/root/pdump-rx-q0.pcap" + self.dump_pcap_q1 = "/root/pdump-rx-q1.pcap" + + self.vhost_user = self.dut.new_session(suite="vhost_user") + self.virtio_user = self.dut.new_session(suite="virtio-user") + self.vhost_user_pmd = PmdOutput(self.dut, self.vhost_user) + self.virtio_user_pmd = PmdOutput(self.dut, self.virtio_user) def set_up(self): """ @@ -90,11 +94,6 @@ class TestLoopbackVirtioUserServerMode(TestCase): ] self.result_table_create(self.table_header) - self.vhost = self.dut.new_session(suite="vhost") - self.virtio_user = self.dut.new_session(suite="virtio-user") - self.vhost_pmd = PmdOutput(self.dut, self.vhost) - self.virtio_user_pmd = PmdOutput(self.dut, self.virtio_user) - def lanuch_vhost_testpmd(self, queue_number=1, nb_cores=1, extern_params=""): """ start testpmd on vhost @@ -105,7 +104,7 @@ class TestLoopbackVirtioUserServerMode(TestCase): param = "--rxq={} --txq={} --nb-cores={} {}".format( queue_number, queue_number, nb_cores, extern_params ) - self.vhost_pmd.start_testpmd( + self.vhost_user_pmd.start_testpmd( self.core_list_host, param=param, no_pci=True, @@ -114,7 +113,7 @@ class TestLoopbackVirtioUserServerMode(TestCase): prefix="vhost", fixed_prefix=True, ) - self.vhost_pmd.execute_cmd("set fwd mac", "testpmd> ", 120) + self.vhost_user_pmd.execute_cmd("set fwd mac", "testpmd> ", 120) @property def check_2M_env(self): @@ -158,7 +157,7 @@ class TestLoopbackVirtioUserServerMode(TestCase): param = "--rxq={} --txq={} --nb-cores={} {}".format( self.queue_number, self.queue_number, self.nb_cores, extern_params ) - self.vhost_pmd.start_testpmd( + self.vhost_user_pmd.start_testpmd( self.core_list_host, param=param, no_pci=True, @@ -168,7 +167,7 @@ class TestLoopbackVirtioUserServerMode(TestCase): fixed_prefix=True, ) if set_fwd_mac: - self.vhost_pmd.execute_cmd("set fwd mac", "testpmd> ", 120) + self.vhost_user_pmd.execute_cmd("set fwd mac", "testpmd> ", 120) def lanuch_virtio_user_testpmd_with_multi_queue( self, mode, extern_params="", set_fwd_mac=True, vectorized_path=False @@ -227,36 +226,23 @@ class TestLoopbackVirtioUserServerMode(TestCase): session_tx.send_expect("start tx_first 1", "testpmd> ", 10) session_tx.send_expect("stop", "testpmd> ", 10) - def start_to_send_960_packets_csum(self, session_tx, cbdma=False): + def start_to_send_960_packets_csum(self, session_tx): """ start the testpmd of vhost-user, start to send 8k packets """ - if cbdma: - 
session_tx.send_expect("vhost enable tx all", "testpmd> ", 10) session_tx.send_expect("set fwd csum", "testpmd> ", 10) session_tx.send_expect("set txpkts 64,128,256,512", "testpmd> ", 10) session_tx.send_expect("set burst 1", "testpmd> ", 10) session_tx.send_expect("start tx_first 1", "testpmd> ", 3) session_tx.send_expect("stop", "testpmd> ", 10) - def start_to_send_6192_packets_csum_cbdma(self, session_tx): - """ - start the testpmd of vhost-user, start to send 8k packets - """ - session_tx.send_expect("vhost enable tx all", "testpmd> ", 30) - session_tx.send_expect("set fwd csum", "testpmd> ", 30) - session_tx.send_expect("set txpkts 64,64,64,2000,2000,2000", "testpmd> ", 30) - session_tx.send_expect("set burst 1", "testpmd> ", 30) - session_tx.send_expect("start tx_first 1", "testpmd> ", 5) - session_tx.send_expect("stop", "testpmd> ", 30) - def check_port_throughput_after_port_stop(self): """ check the throughput after port stop """ loop = 1 while loop <= 5: - out = self.vhost_pmd.execute_cmd("show port stats 0", "testpmd>", 60) + out = self.vhost_user_pmd.execute_cmd("show port stats 0", "testpmd>", 60) lines = re.search("Rx-pps:\s*(\d*)", out) result = lines.group(1) if result == "0": @@ -273,7 +259,7 @@ class TestLoopbackVirtioUserServerMode(TestCase): """ loop = 1 while loop <= 5: - out = self.vhost_pmd.execute_cmd("show port info all", "testpmd> ", 120) + out = self.vhost_user_pmd.execute_cmd("show port info all", "testpmd> ", 120) port_status = re.findall("Link\s*status:\s*([a-z]*)", out) if "down" not in port_status: break @@ -292,99 +278,120 @@ class TestLoopbackVirtioUserServerMode(TestCase): ) def port_restart(self): - self.vhost_pmd.execute_cmd("stop", "testpmd> ", 120) - self.vhost_pmd.execute_cmd("port stop 0", "testpmd> ", 120) + self.vhost_user_pmd.execute_cmd("stop", "testpmd> ", 120) + self.vhost_user_pmd.execute_cmd("port stop 0", "testpmd> ", 120) self.check_port_throughput_after_port_stop() - self.vhost_pmd.execute_cmd("clear port stats all", "testpmd> ", 120) - self.vhost_pmd.execute_cmd("port start 0", "testpmd> ", 120) + self.vhost_user_pmd.execute_cmd("clear port stats all", "testpmd> ", 120) + self.vhost_user_pmd.execute_cmd("port start 0", "testpmd> ", 120) self.check_port_link_status_after_port_restart() - self.vhost_pmd.execute_cmd("start tx_first 32", "testpmd> ", 120) + self.vhost_user_pmd.execute_cmd("start tx_first 32", "testpmd> ", 120) def port_restart_send_8k_packets(self): - self.vhost_pmd.execute_cmd("stop", "testpmd> ", 120) - self.vhost_pmd.execute_cmd("port stop 0", "testpmd> ", 120) + self.vhost_user_pmd.execute_cmd("stop", "testpmd> ", 120) + self.vhost_user_pmd.execute_cmd("port stop 0", "testpmd> ", 120) self.check_port_throughput_after_port_stop() - self.vhost_pmd.execute_cmd("clear port stats all", "testpmd> ", 120) - self.vhost_pmd.execute_cmd("port start 0", "testpmd> ", 120) + self.vhost_user_pmd.execute_cmd("clear port stats all", "testpmd> ", 120) + self.vhost_user_pmd.execute_cmd("port start 0", "testpmd> ", 120) self.check_port_link_status_after_port_restart() - self.vhost_pmd.execute_cmd("set txpkts 2000,2000,2000,2000", "testpmd> ", 120) - self.vhost_pmd.execute_cmd("start tx_first 32", "testpmd> ", 120) + self.vhost_user_pmd.execute_cmd("set txpkts 2000,2000,2000,2000", "testpmd> ", 120) + self.vhost_user_pmd.execute_cmd("start tx_first 32", "testpmd> ", 120) - def launch_pdump_to_capture_pkt(self, dump_port): + def launch_pdump_to_capture_pkt(self, multi_queue=False): """ bootup pdump in dut """ self.pdump_session = 
self.dut.new_session(suite="pdump") - cmd = ( - self.app_pdump - + " " - + "-v --file-prefix=virtio -- " - + "--pdump 'device_id=%s,queue=*,rx-dev=%s,mbuf-size=8000'" - ) - self.pdump_session.send_expect(cmd % (dump_port, self.dump_pcap), "Port") + if not multi_queue: + cmd = ( + self.app_pdump + + " " + + "-v --file-prefix=virtio -- " + + "--pdump 'device_id=net_virtio_user0,queue=*,rx-dev=%s,mbuf-size=8000'" + ) + self.pdump_session.send_expect(cmd % (self.dump_pcap), "Port") + else: + cmd = ( + self.app_pdump + + " " + + "-v --file-prefix=virtio -- " + + "--pdump 'device_id=net_virtio_user0,queue=0,rx-dev=%s,mbuf-size=8000' " + + "--pdump 'device_id=net_virtio_user0,queue=1,rx-dev=%s,mbuf-size=8000'" + ) + self.pdump_session.send_expect(cmd % (self.dump_pcap_q0, self.dump_pcap_q1), "Port") - def check_packet_payload_valid(self, pkt_len): + def check_packet_payload_valid(self, pkt_len, multi_queue=False): """ check the payload is valid """ self.pdump_session.send_expect("^c", "# ", 60) time.sleep(3) - self.dut.session.copy_file_from( - src="%s" % self.dump_pcap, dst="%s" % self.dump_pcap - ) - pkt = Packet() - pkts = pkt.read_pcapfile(self.dump_pcap) - expect_data = str(pkts[0]["Raw"]) - - for i in range(len(pkts)): - self.verify( - len(pkts[i]) == pkt_len, - "virtio-user0 receive packet's length not equal %s Byte" % pkt_len, + if not multi_queue: + self.dut.session.copy_file_from( + src="%s" % self.dump_pcap, dst="%s" % self.dump_pcap ) - check_data = str(pkts[i]["Raw"]) - self.verify( - check_data == expect_data, - "the payload in receive packets has been changed from %s" % i, - ) - self.dut.send_expect("rm -rf %s" % self.dump_pcap, "#") + pkt = Packet() + pkts = pkt.read_pcapfile(self.dump_pcap) + expect_data = str(pkts[0]["Raw"]) + + for i in range(len(pkts)): + self.verify( + len(pkts[i]) == pkt_len, + "virtio-user0 receive packet's length not equal %s Byte" % pkt_len, + ) + check_data = str(pkts[i]["Raw"]) + self.verify( + check_data == expect_data, + "the payload in receive packets has been changed from %s" % i, + ) + + else: + for pacp in [self.dump_pcap_q0, self.dump_pcap_q1]: + self.dut.session.copy_file_from( + src="%s" % pacp, dst="%s" % pacp + ) + pkt = Packet() + pkts = pkt.read_pcapfile(pacp) + expect_data = str(pkts[0]["Raw"]) + + for i in range(len(pkts)): + self.verify( + len(pkts[i]) == pkt_len, + "virtio-user0 receive packet's length not equal %s Byte" % pkt_len, + ) + check_data = str(pkts[i]["Raw"]) + self.verify( + check_data == expect_data, + "the payload in receive packets has been changed from %s" % i, + ) def relanuch_vhost_testpmd_send_packets( - self, extern_params, cbdma=False, iova="va" + self, extern_params ): - self.vhost_pmd.execute_cmd("quit", "#", 60) + self.vhost_user_pmd.execute_cmd("quit", "#", 60) self.logger.info("now reconnet from vhost") - if cbdma: - self.lanuch_vhost_testpmd_with_cbdma(extern_params=extern_params, iova=iova) - else: - self.lanuch_vhost_testpmd_with_multi_queue( - extern_params=extern_params, set_fwd_mac=False - ) - self.launch_pdump_to_capture_pkt(self.vuser0_port) - if cbdma: - self.start_to_send_6192_packets_csum_cbdma(self.vhost) - else: - self.start_to_send_8k_packets_csum(self.vhost) - self.check_packet_payload_valid(self.pkt_len) + self.lanuch_vhost_testpmd_with_multi_queue( + extern_params=extern_params, set_fwd_mac=False + ) + self.launch_pdump_to_capture_pkt(multi_queue=True) + self.start_to_send_8k_packets_csum(self.vhost_user) + self.check_packet_payload_valid(self.pkt_len, multi_queue=True) def 
relanuch_vhost_testpmd_send_960_packets( - self, extern_params, cbdma=False, iova="va" + self, extern_params ): - self.vhost_pmd.execute_cmd("quit", "#", 60) + self.vhost_user_pmd.execute_cmd("quit", "#", 60) self.logger.info("now reconnet from vhost") - if cbdma: - self.lanuch_vhost_testpmd_with_cbdma(extern_params=extern_params, iova=iova) - else: - self.lanuch_vhost_testpmd_with_multi_queue( - extern_params=extern_params, set_fwd_mac=False - ) - self.launch_pdump_to_capture_pkt(self.vuser0_port) - self.start_to_send_960_packets_csum(self.vhost, cbdma=cbdma) - self.check_packet_payload_valid(pkt_len=960) + self.lanuch_vhost_testpmd_with_multi_queue( + extern_params=extern_params, set_fwd_mac=False + ) + self.launch_pdump_to_capture_pkt(multi_queue=True) + self.start_to_send_960_packets_csum(self.vhost_user) + self.check_packet_payload_valid(pkt_len=960, multi_queue=True) def relanuch_virtio_testpmd_with_multi_path( - self, mode, case_info, extern_params, cbdma=False, iova="va" + self, mode, case_info, extern_params ): self.virtio_user_pmd.execute_cmd("quit", "#", 60) @@ -394,22 +401,17 @@ class TestLoopbackVirtioUserServerMode(TestCase): ) self.virtio_user_pmd.execute_cmd("set fwd csum") self.virtio_user_pmd.execute_cmd("start") - self.launch_pdump_to_capture_pkt(self.vuser0_port) - if cbdma: - self.start_to_send_6192_packets_csum_cbdma(self.vhost) - else: - self.start_to_send_8k_packets_csum(self.vhost) - self.check_packet_payload_valid(self.pkt_len) + self.launch_pdump_to_capture_pkt(multi_queue=True) + self.start_to_send_8k_packets_csum(self.vhost_user) + self.check_packet_payload_valid(self.pkt_len, multi_queue=True) - self.relanuch_vhost_testpmd_send_packets(extern_params, cbdma, iova=iova) + self.relanuch_vhost_testpmd_send_packets(extern_params) def relanuch_virtio_testpmd_with_non_mergeable_path( self, mode, case_info, extern_params, - cbdma=False, - iova="va", vectorized_path=False, ): @@ -423,21 +425,21 @@ class TestLoopbackVirtioUserServerMode(TestCase): ) self.virtio_user_pmd.execute_cmd("set fwd csum") self.virtio_user_pmd.execute_cmd("start") - self.launch_pdump_to_capture_pkt(self.vuser0_port) + self.launch_pdump_to_capture_pkt(multi_queue=True) - self.start_to_send_960_packets_csum(self.vhost, cbdma=cbdma) - self.check_packet_payload_valid(pkt_len=960) + self.start_to_send_960_packets_csum(self.vhost_user,) + self.check_packet_payload_valid(pkt_len=960, multi_queue=True) - self.relanuch_vhost_testpmd_send_960_packets(extern_params, cbdma, iova=iova) + self.relanuch_vhost_testpmd_send_960_packets(extern_params) def relanuch_vhost_testpmd_with_multi_queue(self): - self.vhost_pmd.execute_cmd("quit", "#", 60) + self.vhost_user_pmd.execute_cmd("quit", "#", 60) self.check_link_status(self.virtio_user, "down") self.lanuch_vhost_testpmd_with_multi_queue() def relanuch_virtio_testpmd_with_multi_queue(self, mode, extern_params=""): self.virtio_user_pmd.execute_cmd("quit", "#", 60) - self.check_link_status(self.vhost, "down") + self.check_link_status(self.vhost_user, "down") self.lanuch_virtio_user_testpmd_with_multi_queue(mode, extern_params) def calculate_avg_throughput(self, case_info, cycle, Pkt_size=True): @@ -446,9 +448,9 @@ class TestLoopbackVirtioUserServerMode(TestCase): """ results = 0.0 results_row = [] - self.vhost_pmd.execute_cmd("show port stats all", "testpmd>", 60) + self.vhost_user_pmd.execute_cmd("show port stats all", "testpmd>", 60) for i in range(10): - out = self.vhost_pmd.execute_cmd("show port stats all", "testpmd>", 60) + out = 
self.vhost_user_pmd.execute_cmd("show port stats all", "testpmd>", 60) time.sleep(1) lines = re.search("Rx-pps:\s*(\d*)", out) result = lines.group(1) @@ -472,7 +474,7 @@ class TestLoopbackVirtioUserServerMode(TestCase): """ check all the queue has receive packets """ - out = self.vhost_pmd.execute_cmd("stop", "testpmd> ", 60) + out = self.vhost_user_pmd.execute_cmd("stop", "testpmd> ", 60) for queue_index in range(0, self.queue_number): queue = "Queue= %d" % queue_index index = out.find(queue) @@ -490,24 +492,24 @@ class TestLoopbackVirtioUserServerMode(TestCase): """ close testpmd about vhost-user and virtio-user """ - self.vhost_pmd.execute_cmd("quit", "#", 60) self.virtio_user_pmd.execute_cmd("quit", "#", 60) + self.vhost_user_pmd.execute_cmd("quit", "#", 60) def close_all_session(self): """ close session of vhost-user and virtio-user """ - self.dut.close_session(self.vhost) self.dut.close_session(self.virtio_user) + self.dut.close_session(self.vhost_user) - def test_server_mode_launch_virtio_first(self): + def test_server_mode_launch_virtio11_first(self): """ - Test Case 2: basic test for split ring server mode, launch virtio-user first + Test Case 1: basic test for packed ring server mode, launch virtio-user first """ self.queue_number = 1 self.nb_cores = 1 virtio_pmd_arg = { - "version": "packed_vq=0,in_order=0,mrg_rxbuf=1", + "version": "packed_vq=1,in_order=0,mrg_rxbuf=1", "path": "--tx-offloads=0x0 --enable-hw-vlan-strip", } self.lanuch_virtio_user_testpmd( @@ -517,19 +519,19 @@ class TestLoopbackVirtioUserServerMode(TestCase): ) self.lanuch_vhost_testpmd() self.virtio_user_pmd.execute_cmd("set fwd mac", "testpmd> ", 120) - self.start_to_send_packets(self.virtio_user, self.vhost) + self.start_to_send_packets(self.virtio_user, self.vhost_user) self.calculate_avg_throughput("lanuch virtio first", "") self.result_table_print() self.close_all_testpmd() - def test_server_mode_launch_virtio11_first(self): + def test_server_mode_launch_virtio_first(self): """ - Test Case 1: basic test for packed ring server mode, launch virtio-user first + Test Case 2: basic test for split ring server mode, launch virtio-user first """ self.queue_number = 1 self.nb_cores = 1 virtio_pmd_arg = { - "version": "packed_vq=1,in_order=0,mrg_rxbuf=1", + "version": "packed_vq=0,in_order=0,mrg_rxbuf=1", "path": "--tx-offloads=0x0 --enable-hw-vlan-strip", } self.lanuch_virtio_user_testpmd( @@ -539,31 +541,31 @@ class TestLoopbackVirtioUserServerMode(TestCase): ) self.lanuch_vhost_testpmd() self.virtio_user_pmd.execute_cmd("set fwd mac", "testpmd> ", 120) - self.start_to_send_packets(self.virtio_user, self.vhost) + self.start_to_send_packets(self.virtio_user, self.vhost_user) self.calculate_avg_throughput("lanuch virtio first", "") self.result_table_print() self.close_all_testpmd() - def test_server_mode_reconnect_with_virtio11_mergeable_path(self): + def test_server_mode_reconnect_with_virtio10_mergeable_path(self): """ - Test Case 8: reconnect test with virtio 1.1 mergeable path and server mode + Test Case 3: reconnect test with virtio 1.0 mergeable path and server mode """ - self.queue_number = 2 + self.queue_number = 8 self.nb_cores = 2 - case_info = "virtio1.1 mergeable path" - mode = "packed_vq=1,in_order=0,mrg_rxbuf=1" + case_info = "virtio1.0 mergeable path" + mode = "in_order=0,mrg_rxbuf=1" extern_params = "--tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip" self.lanuch_vhost_testpmd_with_multi_queue() self.lanuch_virtio_user_testpmd_with_multi_queue( mode=mode, extern_params=extern_params ) - 
self.start_to_send_8k_packets(self.vhost, self.virtio_user) + self.start_to_send_8k_packets(self.vhost_user, self.virtio_user) self.calculate_avg_throughput(case_info, "before reconnet", Pkt_size=False) - # reconnect from vhost + # reconnet from vhost self.logger.info("now reconnet from vhost") self.relanuch_vhost_testpmd_with_multi_queue() - self.start_to_send_8k_packets(self.virtio_user, self.vhost) + self.start_to_send_8k_packets(self.virtio_user, self.vhost_user) self.calculate_avg_throughput(case_info, "reconnet from vhost", Pkt_size=False) # reconnet from virtio @@ -571,9 +573,9 @@ class TestLoopbackVirtioUserServerMode(TestCase): self.relanuch_virtio_testpmd_with_multi_queue( mode=mode, extern_params=extern_params ) - self.start_to_send_8k_packets(self.vhost, self.virtio_user) + self.start_to_send_8k_packets(self.vhost_user, self.virtio_user) self.calculate_avg_throughput( - case_info, "reconnet from virtio user", Pkt_size=False + case_info, "reconnet from virtio_user", Pkt_size=False ) # port restart @@ -585,106 +587,106 @@ class TestLoopbackVirtioUserServerMode(TestCase): self.check_packets_of_each_queue() self.close_all_testpmd() - def test_server_mode_reconnect_with_virtio11_non_mergeable_path(self): + def test_server_mode_reconnect_with_virtio10_inorder_mergeable_path(self): """ - Test Case 9: reconnect test with virtio 1.1 non_mergeable path and server mode + Test Case 4: reconnect test with virtio 1.0 inorder mergeable path and server mode """ self.queue_number = 2 self.nb_cores = 2 - case_info = "virtio1.1 non_mergeable path" - mode = "packed_vq=1,in_order=0,mrg_rxbuf=0" + case_info = "virtio1.0 inorder mergeable path" + mode = "in_order=1,mrg_rxbuf=1" extern_params = "--tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip" self.lanuch_vhost_testpmd_with_multi_queue() self.lanuch_virtio_user_testpmd_with_multi_queue( mode=mode, extern_params=extern_params ) - self.start_to_send_packets(self.vhost, self.virtio_user) - self.calculate_avg_throughput(case_info, "before reconnet") + self.start_to_send_8k_packets(self.vhost_user, self.virtio_user) + self.calculate_avg_throughput(case_info, "before reconnet", Pkt_size=False) - # reconnect from vhost + # reconnet from vhost self.logger.info("now reconnet from vhost") self.relanuch_vhost_testpmd_with_multi_queue() - self.start_to_send_packets(self.virtio_user, self.vhost) - self.calculate_avg_throughput(case_info, "reconnet from vhost") + self.start_to_send_8k_packets(self.virtio_user, self.vhost_user) + self.calculate_avg_throughput(case_info, "reconnet from vhost", Pkt_size=False) # reconnet from virtio self.logger.info("now reconnet from virtio_user") self.relanuch_virtio_testpmd_with_multi_queue( mode=mode, extern_params=extern_params ) - self.start_to_send_packets(self.vhost, self.virtio_user) - self.calculate_avg_throughput(case_info, "reconnet from virtio_user") + self.start_to_send_8k_packets(self.vhost_user, self.virtio_user) + self.calculate_avg_throughput( + case_info, "reconnet from virtio_user", Pkt_size=False + ) # port restart self.logger.info("now vhost port restart") - self.port_restart() - self.calculate_avg_throughput(case_info, "after port restart") + self.port_restart_send_8k_packets() + self.calculate_avg_throughput(case_info, "after port restart", Pkt_size=False) self.result_table_print() self.check_packets_of_each_queue() self.close_all_testpmd() - def test_server_mode_reconnect_with_virtio11_inorder_mergeable_path(self): + def test_server_mode_reconnect_with_virtio10_inorder_non_mergeable_path(self): """ - Test 
Case 10: reconnect test with virtio 1.1 inorder mergeable path and server mode + Test Case 5: reconnect test with virtio 1.0 inorder non_mergeable path and server mode """ - self.queue_number = 8 + self.queue_number = 2 self.nb_cores = 2 - case_info = "virtio1.1 inorder mergeable path" - mode = "packed_vq=1,in_order=1,mrg_rxbuf=1" + case_info = "virtio1.0 inorder non_mergeable path" + mode = "in_order=1,mrg_rxbuf=0" extern_params = "--tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip" self.lanuch_vhost_testpmd_with_multi_queue() self.lanuch_virtio_user_testpmd_with_multi_queue( mode=mode, extern_params=extern_params ) - self.start_to_send_8k_packets(self.vhost, self.virtio_user) - self.calculate_avg_throughput(case_info, "before reconnet", Pkt_size=False) + self.start_to_send_packets(self.vhost_user, self.virtio_user) + self.calculate_avg_throughput(case_info, "before reconnet") - # reconnect from vhost + # reconnet from vhost self.logger.info("now reconnet from vhost") self.relanuch_vhost_testpmd_with_multi_queue() - self.start_to_send_8k_packets(self.virtio_user, self.vhost) - self.calculate_avg_throughput(case_info, "reconnet from vhost", Pkt_size=False) + self.start_to_send_packets(self.virtio_user, self.vhost_user) + self.calculate_avg_throughput(case_info, "reconnet from vhost") # reconnet from virtio self.logger.info("now reconnet from virtio_user") self.relanuch_virtio_testpmd_with_multi_queue( mode=mode, extern_params=extern_params ) - self.start_to_send_8k_packets(self.vhost, self.virtio_user) - self.calculate_avg_throughput( - case_info, "reconnet from virtio user", Pkt_size=False - ) + self.start_to_send_packets(self.vhost_user, self.virtio_user) + self.calculate_avg_throughput(case_info, "reconnet from virtio_user") # port restart self.logger.info("now vhost port restart") - self.port_restart_send_8k_packets() - self.calculate_avg_throughput(case_info, "after port restart", Pkt_size=False) + self.port_restart() + self.calculate_avg_throughput(case_info, "after port restart") self.result_table_print() self.check_packets_of_each_queue() self.close_all_testpmd() - def test_server_mode_reconnect_with_virtio11_inorder_non_mergeable_path(self): + def test_server_mode_reconnect_with_virtio10_non_mergeable_path(self): """ - Test Case 11: reconnect test with virtio 1.1 inorder non_mergeable path and server mode + Test Case 6: reconnect test with virtio 1.0 non_mergeable path and server mode """ self.queue_number = 2 self.nb_cores = 2 - case_info = "virtio1.1 inorder non_mergeable path" - mode = "packed_vq=1,in_order=1,mrg_rxbuf=0,vectorized=1" - extern_params = "--rx-offloads=0x10 --enable-hw-vlan-strip --rss-ip" + case_info = "virtio1.0 non_mergeable path" + mode = "in_order=0,mrg_rxbuf=0,vectorized=1" + extern_params = "--tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip" self.lanuch_vhost_testpmd_with_multi_queue() self.lanuch_virtio_user_testpmd_with_multi_queue( mode=mode, extern_params=extern_params ) - self.start_to_send_packets(self.vhost, self.virtio_user) + self.start_to_send_packets(self.vhost_user, self.virtio_user) self.calculate_avg_throughput(case_info, "before reconnet") - # reconnect from vhost + # reconnet from vhost self.logger.info("now reconnet from vhost") self.relanuch_vhost_testpmd_with_multi_queue() - self.start_to_send_packets(self.virtio_user, self.vhost) + self.start_to_send_packets(self.virtio_user, self.vhost_user) self.calculate_avg_throughput(case_info, "reconnet from vhost") # reconnet from virtio @@ -692,7 +694,7 @@ class 
TestLoopbackVirtioUserServerMode(TestCase): self.relanuch_virtio_testpmd_with_multi_queue( mode=mode, extern_params=extern_params ) - self.start_to_send_packets(self.vhost, self.virtio_user) + self.start_to_send_packets(self.vhost_user, self.virtio_user) self.calculate_avg_throughput(case_info, "reconnet from virtio_user") # port restart @@ -704,34 +706,29 @@ class TestLoopbackVirtioUserServerMode(TestCase): self.check_packets_of_each_queue() self.close_all_testpmd() - def test_server_mode_reconnect_with_virtio11_inorder_vectorized_path(self): + def test_server_mode_reconnect_with_virtio10_vector_rx_path(self): """ - Test Case 12: reconnect test with virtio 1.1 inorder vectorized path and server mode + Test Case 7: reconnect test with virtio 1.0 vector_rx path and server mode """ self.queue_number = 2 self.nb_cores = 2 - case_info = "virtio1.1 inorder vectorized path" - mode = "packed_vq=1,in_order=1,mrg_rxbuf=0,vectorized=1" - extern_params = "--tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip" + case_info = "virtio1.0 vector_rx path" + mode = "in_order=0,mrg_rxbuf=0,vectorized=1" self.lanuch_vhost_testpmd_with_multi_queue() - self.lanuch_virtio_user_testpmd_with_multi_queue( - mode=mode, extern_params=extern_params - ) - self.start_to_send_packets(self.vhost, self.virtio_user) + self.lanuch_virtio_user_testpmd_with_multi_queue(mode=mode) + self.start_to_send_packets(self.vhost_user, self.virtio_user) self.calculate_avg_throughput(case_info, "before reconnet") - # reconnect from vhost + # reconnet from vhost self.logger.info("now reconnet from vhost") self.relanuch_vhost_testpmd_with_multi_queue() - self.start_to_send_packets(self.virtio_user, self.vhost) + self.start_to_send_packets(self.virtio_user, self.vhost_user) self.calculate_avg_throughput(case_info, "reconnet from vhost") # reconnet from virtio self.logger.info("now reconnet from virtio_user") - self.relanuch_virtio_testpmd_with_multi_queue( - mode=mode, extern_params=extern_params - ) - self.start_to_send_packets(self.vhost, self.virtio_user) + self.relanuch_virtio_testpmd_with_multi_queue(mode=mode) + self.start_to_send_packets(self.vhost_user, self.virtio_user) self.calculate_avg_throughput(case_info, "reconnet from virtio_user") # port restart @@ -743,26 +740,26 @@ class TestLoopbackVirtioUserServerMode(TestCase): self.check_packets_of_each_queue() self.close_all_testpmd() - def test_server_mode_reconnect_with_virtio10_inorder_mergeable_path(self): + def test_server_mode_reconnect_with_virtio11_mergeable_path(self): """ - Test Case 4: reconnect test with virtio 1.0 inorder mergeable path and server mode + Test Case 8: reconnect test with virtio 1.1 mergeable path and server mode """ self.queue_number = 2 self.nb_cores = 2 - case_info = "virtio1.0 inorder mergeable path" - mode = "in_order=1,mrg_rxbuf=1" + case_info = "virtio1.1 mergeable path" + mode = "packed_vq=1,in_order=0,mrg_rxbuf=1" extern_params = "--tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip" self.lanuch_vhost_testpmd_with_multi_queue() self.lanuch_virtio_user_testpmd_with_multi_queue( mode=mode, extern_params=extern_params ) - self.start_to_send_8k_packets(self.vhost, self.virtio_user) + self.start_to_send_8k_packets(self.vhost_user, self.virtio_user) self.calculate_avg_throughput(case_info, "before reconnet", Pkt_size=False) - # reconnet from vhost + # reconnect from vhost self.logger.info("now reconnet from vhost") self.relanuch_vhost_testpmd_with_multi_queue() - self.start_to_send_8k_packets(self.virtio_user, self.vhost) + 
self.start_to_send_8k_packets(self.virtio_user, self.vhost_user) self.calculate_avg_throughput(case_info, "reconnet from vhost", Pkt_size=False) # reconnet from virtio @@ -770,9 +767,9 @@ class TestLoopbackVirtioUserServerMode(TestCase): self.relanuch_virtio_testpmd_with_multi_queue( mode=mode, extern_params=extern_params ) - self.start_to_send_8k_packets(self.vhost, self.virtio_user) + self.start_to_send_8k_packets(self.vhost_user, self.virtio_user) self.calculate_avg_throughput( - case_info, "reconnet from virtio_user", Pkt_size=False + case_info, "reconnet from virtio user", Pkt_size=False ) # port restart @@ -784,26 +781,26 @@ class TestLoopbackVirtioUserServerMode(TestCase): self.check_packets_of_each_queue() self.close_all_testpmd() - def test_server_mode_reconnect_with_virtio10_inorder_non_mergeable_path(self): + def test_server_mode_reconnect_with_virtio11_non_mergeable_path(self): """ - Test Case 5: reconnect test with virtio 1.0 inorder non_mergeable path and server mode + Test Case 9: reconnect test with virtio 1.1 non_mergeable path and server mode """ self.queue_number = 2 self.nb_cores = 2 - case_info = "virtio1.0 inorder non_mergeable path" - mode = "in_order=1,mrg_rxbuf=0" + case_info = "virtio1.1 non_mergeable path" + mode = "packed_vq=1,in_order=0,mrg_rxbuf=0" extern_params = "--tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip" self.lanuch_vhost_testpmd_with_multi_queue() self.lanuch_virtio_user_testpmd_with_multi_queue( mode=mode, extern_params=extern_params ) - self.start_to_send_packets(self.vhost, self.virtio_user) + self.start_to_send_packets(self.vhost_user, self.virtio_user) self.calculate_avg_throughput(case_info, "before reconnet") - # reconnet from vhost + # reconnect from vhost self.logger.info("now reconnet from vhost") self.relanuch_vhost_testpmd_with_multi_queue() - self.start_to_send_packets(self.virtio_user, self.vhost) + self.start_to_send_packets(self.virtio_user, self.vhost_user) self.calculate_avg_throughput(case_info, "reconnet from vhost") # reconnet from virtio @@ -811,7 +808,7 @@ class TestLoopbackVirtioUserServerMode(TestCase): self.relanuch_virtio_testpmd_with_multi_queue( mode=mode, extern_params=extern_params ) - self.start_to_send_packets(self.vhost, self.virtio_user) + self.start_to_send_packets(self.vhost_user, self.virtio_user) self.calculate_avg_throughput(case_info, "reconnet from virtio_user") # port restart @@ -823,26 +820,26 @@ class TestLoopbackVirtioUserServerMode(TestCase): self.check_packets_of_each_queue() self.close_all_testpmd() - def test_server_mode_reconnect_with_virtio10_mergeable_path(self): + def test_server_mode_reconnect_with_virtio11_inorder_mergeable_path(self): """ - Test Case 3: reconnect test with virtio 1.0 mergeable path and server mode + Test Case 10: reconnect test with virtio 1.1 inorder mergeable path and server mode """ self.queue_number = 8 self.nb_cores = 2 - case_info = "virtio1.0 mergeable path" - mode = "in_order=0,mrg_rxbuf=1" + case_info = "virtio1.1 inorder mergeable path" + mode = "packed_vq=1,in_order=1,mrg_rxbuf=1" extern_params = "--tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip" self.lanuch_vhost_testpmd_with_multi_queue() self.lanuch_virtio_user_testpmd_with_multi_queue( mode=mode, extern_params=extern_params ) - self.start_to_send_8k_packets(self.vhost, self.virtio_user) + self.start_to_send_8k_packets(self.vhost_user, self.virtio_user) self.calculate_avg_throughput(case_info, "before reconnet", Pkt_size=False) - # reconnet from vhost + # reconnect from vhost self.logger.info("now reconnet from 
vhost") self.relanuch_vhost_testpmd_with_multi_queue() - self.start_to_send_8k_packets(self.virtio_user, self.vhost) + self.start_to_send_8k_packets(self.virtio_user, self.vhost_user) self.calculate_avg_throughput(case_info, "reconnet from vhost", Pkt_size=False) # reconnet from virtio @@ -850,9 +847,9 @@ class TestLoopbackVirtioUserServerMode(TestCase): self.relanuch_virtio_testpmd_with_multi_queue( mode=mode, extern_params=extern_params ) - self.start_to_send_8k_packets(self.vhost, self.virtio_user) + self.start_to_send_8k_packets(self.vhost_user, self.virtio_user) self.calculate_avg_throughput( - case_info, "reconnet from virtio_user", Pkt_size=False + case_info, "reconnet from virtio user", Pkt_size=False ) # port restart @@ -864,26 +861,26 @@ class TestLoopbackVirtioUserServerMode(TestCase): self.check_packets_of_each_queue() self.close_all_testpmd() - def test_server_mode_reconnect_with_virtio10_non_mergeable_path(self): + def test_server_mode_reconnect_with_virtio11_inorder_non_mergeable_path(self): """ - Test Case 6: reconnect test with virtio 1.0 non_mergeable path and server mode + Test Case 11: reconnect test with virtio 1.1 inorder non_mergeable path and server mode """ self.queue_number = 2 self.nb_cores = 2 - case_info = "virtio1.0 non_mergeable path" - mode = "in_order=0,mrg_rxbuf=0,vectorized=1" - extern_params = "--tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip" + case_info = "virtio1.1 inorder non_mergeable path" + mode = "packed_vq=1,in_order=1,mrg_rxbuf=0,vectorized=1" + extern_params = "--rx-offloads=0x10 --enable-hw-vlan-strip --rss-ip" self.lanuch_vhost_testpmd_with_multi_queue() self.lanuch_virtio_user_testpmd_with_multi_queue( mode=mode, extern_params=extern_params ) - self.start_to_send_packets(self.vhost, self.virtio_user) + self.start_to_send_packets(self.vhost_user, self.virtio_user) self.calculate_avg_throughput(case_info, "before reconnet") - # reconnet from vhost + # reconnect from vhost self.logger.info("now reconnet from vhost") self.relanuch_vhost_testpmd_with_multi_queue() - self.start_to_send_packets(self.virtio_user, self.vhost) + self.start_to_send_packets(self.virtio_user, self.vhost_user) self.calculate_avg_throughput(case_info, "reconnet from vhost") # reconnet from virtio @@ -891,7 +888,7 @@ class TestLoopbackVirtioUserServerMode(TestCase): self.relanuch_virtio_testpmd_with_multi_queue( mode=mode, extern_params=extern_params ) - self.start_to_send_packets(self.vhost, self.virtio_user) + self.start_to_send_packets(self.vhost_user, self.virtio_user) self.calculate_avg_throughput(case_info, "reconnet from virtio_user") # port restart @@ -903,29 +900,34 @@ class TestLoopbackVirtioUserServerMode(TestCase): self.check_packets_of_each_queue() self.close_all_testpmd() - def test_server_mode_reconnect_with_virtio10_vector_rx_path(self): + def test_server_mode_reconnect_with_virtio11_inorder_vectorized_path(self): """ - Test Case 7: reconnect test with virtio 1.0 vector_rx path and server mode + Test Case 12: reconnect test with virtio 1.1 inorder vectorized path and server mode """ self.queue_number = 2 self.nb_cores = 2 - case_info = "virtio1.0 vector_rx path" - mode = "in_order=0,mrg_rxbuf=0,vectorized=1" + case_info = "virtio1.1 inorder vectorized path" + mode = "packed_vq=1,in_order=1,mrg_rxbuf=0,vectorized=1" + extern_params = "--tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip" self.lanuch_vhost_testpmd_with_multi_queue() - self.lanuch_virtio_user_testpmd_with_multi_queue(mode=mode) - self.start_to_send_packets(self.vhost, self.virtio_user) + 
self.lanuch_virtio_user_testpmd_with_multi_queue( + mode=mode, extern_params=extern_params + ) + self.start_to_send_packets(self.vhost_user, self.virtio_user) self.calculate_avg_throughput(case_info, "before reconnet") - # reconnet from vhost + # reconnect from vhost self.logger.info("now reconnet from vhost") self.relanuch_vhost_testpmd_with_multi_queue() - self.start_to_send_packets(self.virtio_user, self.vhost) + self.start_to_send_packets(self.virtio_user, self.vhost_user) self.calculate_avg_throughput(case_info, "reconnet from vhost") # reconnet from virtio self.logger.info("now reconnet from virtio_user") - self.relanuch_virtio_testpmd_with_multi_queue(mode=mode) - self.start_to_send_packets(self.vhost, self.virtio_user) + self.relanuch_virtio_testpmd_with_multi_queue( + mode=mode, extern_params=extern_params + ) + self.start_to_send_packets(self.vhost_user, self.virtio_user) self.calculate_avg_throughput(case_info, "reconnet from virtio_user") # port restart @@ -956,14 +958,12 @@ class TestLoopbackVirtioUserServerMode(TestCase): ) self.virtio_user_pmd.execute_cmd("set fwd csum") self.virtio_user_pmd.execute_cmd("start") - # 3. Attach pdump secondary process to primary process by same file-prefix:: - self.vuser0_port = "net_virtio_user0" - self.launch_pdump_to_capture_pkt(self.vuser0_port) - self.start_to_send_8k_packets_csum(self.vhost) + self.launch_pdump_to_capture_pkt(multi_queue=True) + self.start_to_send_8k_packets_csum(self.vhost_user) # 5. Check all the packets length is 8000 Byte in the pcap file self.pkt_len = 8000 - self.check_packet_payload_valid(self.pkt_len) + self.check_packet_payload_valid(self.pkt_len, multi_queue=True) # reconnet from vhost self.relanuch_vhost_testpmd_send_packets(extern_params) @@ -1020,14 +1020,12 @@ class TestLoopbackVirtioUserServerMode(TestCase): ) self.virtio_user_pmd.execute_cmd("set fwd csum") self.virtio_user_pmd.execute_cmd("start") - # 3. Attach pdump secondary process to primary process by same file-prefix:: - self.vuser0_port = "net_virtio_user0" - self.launch_pdump_to_capture_pkt(self.vuser0_port) - self.start_to_send_8k_packets_csum(self.vhost) + self.launch_pdump_to_capture_pkt(multi_queue=True) + self.start_to_send_8k_packets_csum(self.vhost_user) # 5. 
Check all the packets length is 8000 Byte in the pcap file self.pkt_len = 8000 - self.check_packet_payload_valid(self.pkt_len) + self.check_packet_payload_valid(self.pkt_len, multi_queue=True) # reconnet from vhost self.relanuch_vhost_testpmd_send_packets(extern_params) @@ -1059,298 +1057,14 @@ class TestLoopbackVirtioUserServerMode(TestCase): self.close_all_testpmd() - def test_server_mode_reconnect_with_packed_all_path_cbdma_payload_check(self): - """ - Test Case 15: loopback packed ring all path cbdma test payload check with server mode and multi-queues - """ - self.cbdma_nic_dev_num = 8 - self.get_cbdma_ports_info_and_bind_to_dpdk() - self.queue_number = 8 - self.vdev = f"--vdev 'eth_vhost0,iface=vhost-net,queues={self.queue_number},client=1,dmas=[txq0@{self.cbdma_dev_infos[0]};txq1@{self.cbdma_dev_infos[1]};txq2@{self.cbdma_dev_infos[2]};txq3@{self.cbdma_dev_infos[3]};txq4@{self.cbdma_dev_infos[4]};txq5@{self.cbdma_dev_infos[5]};txq6@{self.cbdma_dev_infos[6]};txq7@{self.cbdma_dev_infos[7]}]' " - - self.nb_cores = 1 - extern_params = "--txd=1024 --rxd=1024" - case_info = "packed ring mergeable inorder path" - mode = "mrg_rxbuf=1,in_order=1,packed_vq=1" - - self.lanuch_vhost_testpmd_with_cbdma(extern_params=extern_params) - self.logger.info(case_info) - self.lanuch_virtio_user_testpmd_with_multi_queue( - mode=mode, extern_params=extern_params, set_fwd_mac=False - ) - self.virtio_user_pmd.execute_cmd("set fwd csum") - self.virtio_user_pmd.execute_cmd("start") - # 3. Attach pdump secondary process to primary process by same file-prefix:: - self.vuser0_port = "net_virtio_user0" - self.launch_pdump_to_capture_pkt(self.vuser0_port) - self.start_to_send_6192_packets_csum_cbdma(self.vhost) - - # 5. Check all the packets length is 6192 Byte in the pcap file - self.pkt_len = 6192 - self.check_packet_payload_valid(self.pkt_len) - # reconnet from vhost - self.relanuch_vhost_testpmd_send_packets(extern_params, cbdma=True) - - # reconnet from virtio - self.logger.info("now reconnet from virtio_user with other path") - case_info = "packed ring mergeable path" - mode = "mrg_rxbuf=1,in_order=0,packed_vq=1" - self.relanuch_virtio_testpmd_with_multi_path( - mode, case_info, extern_params, cbdma=True - ) - - case_info = "packed ring non-mergeable path" - mode = "mrg_rxbuf=0,in_order=0,packed_vq=1" - self.relanuch_virtio_testpmd_with_non_mergeable_path( - mode, case_info, extern_params, cbdma=True - ) - - case_info = "packed ring inorder non-mergeable path" - mode = "mrg_rxbuf=0,in_order=1,packed_vq=1" - self.relanuch_virtio_testpmd_with_non_mergeable_path( - mode, case_info, extern_params, cbdma=True - ) - - case_info = "packed ring vectorized path" - mode = "mrg_rxbuf=0,in_order=1,packed_vq=1,vectorized=1" - self.relanuch_virtio_testpmd_with_non_mergeable_path( - mode, case_info, extern_params, cbdma=True, vectorized_path=True - ) - - case_info = "packed ring vectorized path and ring size is not power of 2" - mode = "mrg_rxbuf=0,in_order=1,packed_vq=1,vectorized=1,queue_size=1025" - extern_param = "--txd=1025 --rxd=1025" - self.relanuch_virtio_testpmd_with_non_mergeable_path( - mode, case_info, extern_param, cbdma=True, vectorized_path=True - ) - - if not self.check_2M_env: - self.relanuch_vhost_testpmd_iova_pa(extern_params=extern_params) - - self.close_all_testpmd() - - def test_server_mode_reconnect_with_split_all_path_cbdma_payload_check(self): - """ - Test Case 16: loopback split ring all path cbdma test payload check with server mode and multi-queues - """ - self.cbdma_nic_dev_num = 8 - 
self.get_cbdma_ports_info_and_bind_to_dpdk() - self.queue_number = 8 - self.vdev = f"--vdev 'eth_vhost0,iface=vhost-net,queues={self.queue_number},client=1,dmas=[txq0@{self.cbdma_dev_infos[0]};txq1@{self.cbdma_dev_infos[1]};txq2@{self.cbdma_dev_infos[2]};txq3@{self.cbdma_dev_infos[3]};txq4@{self.cbdma_dev_infos[4]};txq5@{self.cbdma_dev_infos[5]};txq6@{self.cbdma_dev_infos[6]};txq7@{self.cbdma_dev_infos[7]}]' " - - self.nb_cores = 1 - extern_params = "--txd=1024 --rxd=1024" - case_info = "split ring mergeable inorder path" - mode = "mrg_rxbuf=1,in_order=1" - - self.lanuch_vhost_testpmd_with_cbdma(extern_params=extern_params) - self.logger.info(case_info) - self.lanuch_virtio_user_testpmd_with_multi_queue( - mode=mode, extern_params=extern_params, set_fwd_mac=False - ) - self.virtio_user_pmd.execute_cmd("set fwd csum") - self.virtio_user_pmd.execute_cmd("start") - # 3. Attach pdump secondary process to primary process by same file-prefix:: - self.vuser0_port = "net_virtio_user0" - self.launch_pdump_to_capture_pkt(self.vuser0_port) - self.start_to_send_6192_packets_csum_cbdma(self.vhost) - - # 5. Check all the packets length is 6192 Byte in the pcap file - self.pkt_len = 6192 - self.check_packet_payload_valid(self.pkt_len) - # reconnet from vhost - self.relanuch_vhost_testpmd_send_packets(extern_params, cbdma=True) - - # reconnet from virtio - self.logger.info("now reconnet from virtio_user with other path") - case_info = "split ring mergeable path" - mode = "mrg_rxbuf=1,in_order=0" - self.relanuch_virtio_testpmd_with_multi_path( - mode, case_info, extern_params, cbdma=True - ) - - case_info = "split ring non-mergeable path" - mode = "mrg_rxbuf=0,in_order=0" - extern_param = extern_params + " --enable-hw-vlan-strip" - self.relanuch_virtio_testpmd_with_non_mergeable_path( - mode, case_info, extern_param, cbdma=True - ) - - case_info = "split ring inorder non-mergeable path" - mode = "mrg_rxbuf=0,in_order=1" - self.relanuch_virtio_testpmd_with_non_mergeable_path( - mode, case_info, extern_params, cbdma=True - ) - - case_info = "split ring vectorized path" - mode = "mrg_rxbuf=0,in_order=0,vectorized=1" - self.relanuch_virtio_testpmd_with_non_mergeable_path( - mode, case_info, extern_params, cbdma=True - ) - - if not self.check_2M_env: - self.relanuch_vhost_testpmd_iova_pa(extern_params=extern_params) - - self.close_all_testpmd() - - def relanuch_vhost_testpmd_iova_pa(self, extern_params=""): - self.vhost_pmd.execute_cmd("quit", "#", 60) - self.logger.info("now relaunch vhost iova=pa") - self.lanuch_vhost_testpmd_with_cbdma(extern_params=extern_params, iova="pa") - - if "packed" in self.running_case: - case_info = "packed ring mergeable inorder path" - mode = "mrg_rxbuf=1,in_order=1,packed_vq=1" - self.relanuch_virtio_testpmd_with_multi_path( - mode, case_info, extern_params, cbdma=True, iova="pa" - ) - - case_info = "packed ring mergeable path" - mode = "mrg_rxbuf=1,in_order=0,packed_vq=1" - self.relanuch_virtio_testpmd_with_multi_path( - mode, case_info, extern_params, cbdma=True, iova="pa" - ) - - case_info = "packed ring non-mergeable path" - mode = "mrg_rxbuf=0,in_order=0,packed_vq=1" - self.relanuch_virtio_testpmd_with_non_mergeable_path( - mode, case_info, extern_params, cbdma=True, iova="pa" - ) - - case_info = "packed ring inorder non-mergeable path" - mode = "mrg_rxbuf=0,in_order=1,packed_vq=1" - self.relanuch_virtio_testpmd_with_non_mergeable_path( - mode, case_info, extern_params, cbdma=True, iova="pa" - ) - - case_info = "packed ring vectorized path" - mode = 
"mrg_rxbuf=0,in_order=1,packed_vq=1,vectorized=1" - self.relanuch_virtio_testpmd_with_non_mergeable_path( - mode, - case_info, - extern_params, - cbdma=True, - vectorized_path=True, - iova="pa", - ) - - case_info = "packed ring vectorized path and ring size is not power of 2" - mode = "mrg_rxbuf=0,in_order=1,packed_vq=1,vectorized=1,queue_size=1025" - extern_param = "--txd=1025 --rxd=1025" - self.relanuch_virtio_testpmd_with_non_mergeable_path( - mode, - case_info, - extern_param, - cbdma=True, - vectorized_path=True, - iova="pa", - ) - - if "split" in self.running_case: - case_info = "split ring mergeable inorder path" - mode = "mrg_rxbuf=1,in_order=1" - self.relanuch_virtio_testpmd_with_multi_path( - mode, case_info, extern_params, cbdma=True, iova="pa" - ) - - case_info = "split ring mergeable path" - mode = "mrg_rxbuf=1,in_order=0" - self.relanuch_virtio_testpmd_with_multi_path( - mode, case_info, extern_params, cbdma=True, iova="pa" - ) - - case_info = "split ring non-mergeable path" - mode = "mrg_rxbuf=0,in_order=0" - extern_param = extern_params + " --enable-hw-vlan-strip" - self.relanuch_virtio_testpmd_with_non_mergeable_path( - mode, case_info, extern_param, cbdma=True, iova="pa" - ) - - case_info = "split ring inorder non-mergeable path" - mode = "mrg_rxbuf=0,in_order=1" - self.relanuch_virtio_testpmd_with_non_mergeable_path( - mode, case_info, extern_params, cbdma=True, iova="pa" - ) - - case_info = "split ring vectorized path" - mode = "mrg_rxbuf=0,in_order=0,vectorized=1" - self.relanuch_virtio_testpmd_with_non_mergeable_path( - mode, case_info, extern_params, cbdma=True, iova="pa" - ) - - def lanuch_vhost_testpmd_with_cbdma(self, extern_params="", iova="va"): - """ - start testpmd with cbdma - """ - eal_params = self.vdev + " --iova={}".format(iova) - param = "--rxq={} --txq={} --nb-cores={} {}".format( - self.queue_number, self.queue_number, self.nb_cores, extern_params - ) - self.vhost_pmd.start_testpmd( - self.core_list_host, - param=param, - no_pci=False, - ports=[], - eal_param=eal_params, - prefix="vhost", - fixed_prefix=True, - ) - - def get_cbdma_ports_info_and_bind_to_dpdk(self): - """ - get all cbdma ports - """ - out = self.dut.send_expect( - "./usertools/dpdk-devbind.py --status-dev dma", "# ", 30 - ) - device_info = out.split("\n") - for device in device_info: - pci_info = re.search("\s*(0000:\S*:\d*.\d*)", device) - if pci_info is not None: - dev_info = pci_info.group(1) - # the numa id of ioat dev, only add the device which on same socket with nic dev - bus = int(dev_info[5:7], base=16) - if bus >= 128: - cur_socket = 1 - else: - cur_socket = 0 - if self.ports_socket == cur_socket: - self.cbdma_dev_infos.append(pci_info.group(1)) - self.verify( - len(self.cbdma_dev_infos) >= 8, - "There no enough cbdma device to run this suite", - ) - self.device_str = " ".join(self.cbdma_dev_infos[0 : self.cbdma_nic_dev_num]) - self.dut.send_expect( - "./usertools/dpdk-devbind.py --force --bind=%s %s" - % (self.drivername, self.device_str), - "# ", - 60, - ) - - def bind_cbdma_device_to_kernel(self): - if self.device_str is not None: - self.dut.send_expect("modprobe ioatdma", "# ") - self.dut.send_expect( - "./usertools/dpdk-devbind.py -u %s" % self.device_str, "# ", 30 - ) - self.dut.send_expect( - "./usertools/dpdk-devbind.py --force --bind=ioatdma %s" - % self.device_str, - "# ", - 60, - ) - def tear_down(self): """ Run after each test case. 
""" self.dut.send_expect("killall -s INT %s" % self.testpmd_name, "#") - self.close_all_session() - time.sleep(2) def tear_down_all(self): """ Run after each test suite. """ - self.bind_cbdma_device_to_kernel() + self.close_all_session() From patchwork Fri Apr 15 03:26:02 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Ling, WeiX" X-Patchwork-Id: 109736 Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 2E8CDA050B; Fri, 15 Apr 2022 05:26:11 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 25E9F40687; Fri, 15 Apr 2022 05:26:11 +0200 (CEST) Received: from mga01.intel.com (mga01.intel.com [192.55.52.88]) by mails.dpdk.org (Postfix) with ESMTP id B6C2B4003C for ; Fri, 15 Apr 2022 05:26:08 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1649993168; x=1681529168; h=from:to:cc:subject:date:message-id:mime-version: content-transfer-encoding; bh=AvH+22jmsqQVXLj5ZOqfN2RFNAW2nxjP+6th+W//Dfc=; b=LQK4FtJMw+KvHbrpX+uHQg9kiVhwL6wd2MqwRk+V6IMeaClufFg3Cwvj f4oKgtQV1mr1pRmHw9G0PeILXOjBOeckycGllJoacGcN0PwV5HvK9xtjp 6CZQuefuTPlYjuzFYiVzdlgDD+/s9pP6YAY01YcFppKQVDdfqV02UuPJ8 kLp1S/fmDxPjDrQBefYH7oHl/97etW77J638MunNWf/UeBuZbBeM+xWq9 1RfUPCI6Q3/6bQ5rBCfj2OYd8GU8QGHsMIW1oimI7nW4szqetZc2ZC/XL HL57T5x0EGx+syE2vYDrD6+gAGBL5S3BCwA0nJW6DCwoILx5OtapIVe3r Q==; X-IronPort-AV: E=McAfee;i="6400,9594,10317"; a="288146519" X-IronPort-AV: E=Sophos;i="5.90,261,1643702400"; d="scan'208";a="288146519" Received: from fmsmga001.fm.intel.com ([10.253.24.23]) by fmsmga101.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 14 Apr 2022 20:26:07 -0700 X-IronPort-AV: E=Sophos;i="5.90,261,1643702400"; d="scan'208";a="700907853" Received: from unknown (HELO localhost.localdomain) ([10.239.251.222]) by fmsmga001-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 14 Apr 2022 20:26:06 -0700 From: Wei Ling To: dts@dpdk.org Cc: Wei Ling Subject: [dts][PATCH V1 3/5] test_plans/loopback_virtio_user_server_mode_cbdma_test_plan: add DPDK22.03 new feature Date: Fri, 15 Apr 2022 11:26:02 +0800 Message-Id: <20220415032602.251488-1-weix.ling@intel.com> X-Mailer: git-send-email 2.25.1 MIME-Version: 1.0 X-BeenThere: dts@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: test suite reviews and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dts-bounces@dpdk.org As commit 53d3f4778c(vhost: integrate dmadev in asynchronous data-path), add new test_plans/loopback_virtio_user_server_mode_cbdma_test_plan. Signed-off-by: Wei Ling --- ...irtio_user_server_mode_cbdma_test_plan.rst | 372 ++++++++++++++++++ 1 file changed, 372 insertions(+) create mode 100644 test_plans/loopback_virtio_user_server_mode_cbdma_test_plan.rst diff --git a/test_plans/loopback_virtio_user_server_mode_cbdma_test_plan.rst b/test_plans/loopback_virtio_user_server_mode_cbdma_test_plan.rst new file mode 100644 index 00000000..761397df --- /dev/null +++ b/test_plans/loopback_virtio_user_server_mode_cbdma_test_plan.rst @@ -0,0 +1,372 @@ +.. Copyright (c) <2022>, Intel Corporation + All rights reserved. 
+
+   Redistribution and use in source and binary forms, with or without
+   modification, are permitted provided that the following conditions
+   are met:
+
+   - Redistributions of source code must retain the above copyright
+     notice, this list of conditions and the following disclaimer.
+
+   - Redistributions in binary form must reproduce the above copyright
+     notice, this list of conditions and the following disclaimer in
+     the documentation and/or other materials provided with the
+     distribution.
+
+   - Neither the name of Intel Corporation nor the names of its
+     contributors may be used to endorse or promote products derived
+     from this software without specific prior written permission.
+
+   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
+   FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
+   COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
+   INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+   (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+   SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+   HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
+   STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+   ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
+   OF THE POSSIBILITY OF SUCH DAMAGE.
+
+================================================
+vhost/virtio-user loopback server mode test plan
+================================================
+
+Description
+===========
+
+1. Virtio-user server mode is a feature to enable virtio-user as the server and vhost as the client,
+so that after vhost-user is killed and then relaunched,
+virtio-user can reconnect to vhost-user again; on the other hand,
+virtio-user can also reconnect to vhost-user after virtio-user is killed.
+2. This feature test needs to cover different rx/tx paths with virtio 1.0 and virtio 1.1,
+including split ring mergeable, non-mergeable, inorder mergeable, inorder non-mergeable, vector_rx path
+and packed ring mergeable, non-mergeable, inorder non-mergeable, inorder mergeable, vectorized path.
+3. Split ring and packed ring are tested with the vhost enqueue operation using multi-CBDMA channels.
+When DMA devices are bound to the vfio driver, VA mode is the default and recommended. For PA mode,
+page by page mapping may exceed the IOMMU's max capability, so it is better to use 1G guest hugepages.
+
+For more about the dpdk-testpmd sample application, please refer to the DPDK documents:
+https://doc.dpdk.org/guides/testpmd_app_ug/run_app.html
+
+For the virtio-user vdev parameters, you can refer to the DPDK documents:
+https://doc.dpdk.org/guides/nics/virtio.html#virtio-paths-selection-and-usage.
+
+Prerequisites
+=============
+
+Topology
+--------
+Test flow: Virtio-user-->Vhost-user-->Testpmd-->Vhost-user-->Virtio-user
+
+Hardware
+--------
+Supported NICs: ALL
+
+Software
+--------
+Trex: http://trex-tgn.cisco.com/trex/release/v2.26.tar.gz
+
+General set up
+--------------
+1. Compile DPDK::
+
+    # CC=gcc meson --werror -Denable_kmods=True -Dlibdir=lib -Dexamples=all --default-library=
+    # ninja -C -j 110
+
+2. Get the PCI device ID and DMA device ID of the DUT, for example, 0000:18:00.0 is a PCI device ID and 0000:00:04.0, 0000:00:04.1 are DMA device IDs::
+
+    # ./usertools/dpdk-devbind.py -s
+
+    Network devices using kernel driver
+    ===================================
+    0000:18:00.0 'Device 159b' if=ens785f0 drv=ice unused=vfio-pci
+
+    DMA devices using kernel driver
+    ===============================
+    0000:00:04.0 'Sky Lake-E CBDMA Registers 2021' drv=ioatdma unused=vfio-pci
+    0000:00:04.1 'Sky Lake-E CBDMA Registers 2021' drv=ioatdma unused=vfio-pci
+
+Test case
+=========
+
+Common steps
+------------
+1. Bind 1 NIC port and CBDMA channels to vfio-pci::
+
+    # ./usertools/dpdk-devbind.py -b vfio-pci
+    # ./usertools/dpdk-devbind.py -b vfio-pci
+
+   For example, bind 2 CBDMA channels::
+
+    # ./usertools/dpdk-devbind.py -b vfio-pci 0000:00:04.0 0000:00:04.1
+
+2. Attach the pdump secondary process to the primary process with the same --file-prefix::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio-user0 -- \
+    --pdump 'device_id=net_virtio_user0,queue=0,rx-dev=./pdump-virtio-rx-q0.pcap,mbuf-size=8000' \
+    --pdump 'device_id=net_virtio_user0,queue=1,rx-dev=./pdump-virtio-rx-q1.pcap,mbuf-size=8000'
+
+Test Case 1: loopback packed ring all path cbdma test payload check with server mode and multi-queues
+------------------------------------------------------------------------------------------------------
+This case uses testpmd to test the payload check of all packed ring paths with multi-queues in server mode, including relaunching vhost.
+
+1. Bind 8 CBDMA channels to vfio-pci, as common step 1.
+
+2. Launch vhost by below command::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 \
+    -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \
+    --vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq6;txq7]' \
+    --iova=va -- -i --nb-cores=5 --rxq=8 --txq=8 --txd=1024 --rxd=1024 \
+    --lcore-dma=[lcore11@0000:00:04.0,lcore11@0000:00:04.7,lcore12@0000:00:04.1,lcore12@0000:00:04.2,lcore12@0000:00:04.3,lcore13@0000:00:04.2,lcore13@0000:00:04.3,lcore13@0000:00:04.4,lcore14@0000:00:04.2,lcore14@0000:00:04.3,lcore14@0000:00:04.4,lcore14@0000:00:04.5,lcore15@0000:00:04.0,lcore15@0000:00:04.1,lcore15@0000:00:04.2,lcore15@0000:00:04.3,lcore15@0000:00:04.4,lcore15@0000:00:04.5,lcore15@0000:00:04.6,lcore15@0000:00:04.7]
+
+3. Launch virtio-user with packed ring mergeable inorder path::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 30,31 -n 4 --file-prefix=virtio-user0 --no-pci \
+    --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=1,in_order=1,packed_vq=1,server=1 \
+    -- -i --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024
+    testpmd> set fwd csum
+    testpmd> start
+
+4. Start pdump to capture virtio-user packets, as common step 2.
+
+5. Send large packets from vhost and check that the loopback performance is as expected and that each queue can receive packets::
+
+    testpmd> set fwd csum
+    testpmd> set txpkts 64,64,64,2000,2000,2000
+    testpmd> set burst 1
+    testpmd> start tx_first 1
+    testpmd> stop
+
+6. Quit pdump, then check that all packets in the pcap file are 6192 bytes long and that the payload of the received packets is the same (a scripted version of this check is sketched in the note at the end of this test case).
+
+7. Quit and relaunch vhost and rerun steps 4-6.
+
+8. Quit and relaunch virtio with packed ring mergeable path as below::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 30,31 -n 4 --file-prefix=virtio-user0 --no-pci \
+    --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=1,in_order=0,packed_vq=1,server=1 \
+    -- -i --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024
+    testpmd> set fwd csum
+    testpmd> start
+
+9. Rerun steps 4-7.
+
+10. Quit and relaunch virtio with packed ring non-mergeable path as below::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 30,31 -n 4 --file-prefix=virtio-user0 --no-pci \
+    --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=0,in_order=0,packed_vq=1,server=1 \
+    -- -i --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024
+    testpmd> set fwd csum
+    testpmd> start
+
+11. Send packets from vhost and check that the loopback performance is as expected and that each queue can receive packets::
+
+    testpmd> set fwd csum
+    testpmd> set txpkts 64,128,256,512
+    testpmd> set burst 1
+    testpmd> start tx_first 1
+    testpmd> stop
+
+12. Quit pdump, then check that all packets in the pcap file are 960 bytes long and that the payload of the received packets is the same.
+
+13. Quit and relaunch vhost and rerun steps 11-12.
+
+14. Quit and relaunch virtio with packed ring inorder non-mergeable path as below::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 30,31 -n 4 --file-prefix=virtio-user0 --no-pci \
+    --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=0,in_order=1,packed_vq=1,server=1 \
+    -- -i --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024
+    testpmd> set fwd csum
+    testpmd> start
+
+15. Rerun steps 11-13.
+
+16. Quit and relaunch virtio with packed ring vectorized path as below::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 30,31 -n 4 --file-prefix=virtio-user0 --no-pci --force-max-simd-bitwidth=512 \
+    --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=0,in_order=1,packed_vq=1,vectorized=1,server=1 \
+    -- -i --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024
+    testpmd> set fwd csum
+    testpmd> start
+
+17. Rerun steps 11-13.
+
+18. Quit and relaunch virtio with packed ring vectorized path and a ring size that is not a power of 2, as below::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 30,31 -n 4 --file-prefix=virtio-user0 --no-pci --force-max-simd-bitwidth=512 \
+    --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=0,in_order=1,packed_vq=1,vectorized=1,queue_size=1025,server=1 \
+    -- -i --nb-cores=1 --rxq=8 --txq=8 --txd=1025 --rxd=1025
+    testpmd> set fwd csum
+    testpmd> start
+
+19. Rerun steps 11-13.
+
+20. Quit and relaunch vhost with iova=pa::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 \
+    -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \
+    --vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq6;txq7]' \
+    --iova=pa -- -i --nb-cores=5 --rxq=8 --txq=8 --txd=1024 --rxd=1024 \
+    --lcore-dma=[lcore11@0000:00:04.0,lcore11@0000:00:04.7,lcore12@0000:00:04.1,lcore12@0000:00:04.2,lcore12@0000:00:04.3,lcore13@0000:00:04.2,lcore13@0000:00:04.3,lcore13@0000:00:04.4,lcore14@0000:00:04.2,lcore14@0000:00:04.3,lcore14@0000:00:04.4,lcore14@0000:00:04.5,lcore15@0000:00:04.0,lcore15@0000:00:04.1,lcore15@0000:00:04.2,lcore15@0000:00:04.3,lcore15@0000:00:04.4,lcore15@0000:00:04.5,lcore15@0000:00:04.6,lcore15@0000:00:04.7]
+
+21. Rerun steps 2-19.
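+
+.. note::
+
+   The length and payload checks in steps 6 and 12 can also be scripted rather
+   than inspected by hand. The snippet below is only an illustrative sketch
+   using scapy; the pcap path and the expected length are assumptions taken
+   from the steps above, and the automated suite performs the same check with
+   its own packet framework::
+
+       # illustrative sketch only: verify length and payload consistency of the capture
+       from scapy.all import rdpcap, Raw
+
+       def check_pcap(pcap_path="./pdump-virtio-rx-q0.pcap", expected_len=6192):
+           pkts = rdpcap(pcap_path)              # packets captured by dpdk-pdump
+           assert len(pkts) > 0, "no packets captured"
+           reference = bytes(pkts[0][Raw].load)  # payload of the first packet
+           for i, pkt in enumerate(pkts):
+               # every packet must have the expected total length
+               assert len(pkt) == expected_len, "packet %d is %d bytes" % (i, len(pkt))
+               # every payload must match the payload of the first packet
+               assert bytes(pkt[Raw].load) == reference, "payload of packet %d differs" % i
+
+       check_pcap()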
+
+Test Case 2: loopback split ring all path cbdma test payload check with server mode and multi-queues
+-----------------------------------------------------------------------------------------------------
+This case uses testpmd to test the payload check of all split ring paths with multi-queues in server mode, including relaunching vhost.
+
+1. Bind 3 CBDMA channels to vfio-pci, as common step 1.
+
+2. Launch vhost by below command::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 \
+    --vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6]' \
+    --iova=va -- -i --nb-cores=5 --rxq=8 --txq=8 --txd=1024 --rxd=1024 \
+    --lcore-dma=[lcore11@0000:00:04.0,lcore12@0000:00:04.0,lcore13@0000:00:04.1,lcore13@0000:00:04.2,lcore14@0000:00:04.1,lcore14@0000:00:04.2,lcore15@0000:00:04.1,lcore15@0000:00:04.2]
+
+3. Launch virtio-user with split ring mergeable inorder path::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 30,31 -n 4 --file-prefix=virtio-user0 --no-pci \
+    --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=1,in_order=1,server=1 \
+    -- -i --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024
+    testpmd> set fwd csum
+    testpmd> start
+
+4. Start pdump to capture virtio-user packets, as common step 2.
+
+5. Send large packets from vhost and check that the loopback performance is as expected and that each queue can receive packets::
+
+    testpmd> set fwd csum
+    testpmd> set txpkts 64,64,64,2000,2000,2000
+    testpmd> set burst 1
+    testpmd> start tx_first 1
+    testpmd> stop
+
+6. Quit pdump, then check that all packets in the pcap file are 6192 bytes long and that the payload of the received packets is the same.
+
+7. Quit and relaunch vhost and rerun steps 4-6.
+
+8. Quit and relaunch virtio with split ring mergeable path as below::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 30,31 -n 4 --file-prefix=virtio-user0 --no-pci \
+    --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=1,in_order=0,server=1 \
+    -- -i --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024
+    testpmd> set fwd csum
+    testpmd> start
+
+9. Rerun steps 4-7.
+
+10. Quit and relaunch virtio with split ring non-mergeable path as below::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 30,31 -n 4 --file-prefix=virtio-user0 --no-pci \
+    --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=0,in_order=0,server=1 \
+    -- -i --enable-hw-vlan-strip --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024
+    testpmd> set fwd csum
+    testpmd> start
+
+11. Send packets from vhost and check that the loopback performance is as expected and that each queue can receive packets::
+
+    testpmd> set fwd csum
+    testpmd> set txpkts 64,128,256,512
+    testpmd> set burst 1
+    testpmd> start tx_first 1
+    testpmd> stop
+
+12. Quit pdump, then check that all packets in the pcap file are 960 bytes long and that the payload of the received packets is the same.
+
+13. Quit and relaunch vhost and rerun steps 11-12.
+
+14. Quit and relaunch virtio with split ring inorder non-mergeable path as below::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 30,31 -n 4 --file-prefix=virtio-user0 --no-pci \
+    --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=0,in_order=1,server=1 \
+    -- -i --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024
+    testpmd> set fwd csum
+    testpmd> start
+
+15. Rerun steps 11-13.
+
+16.
Quit and relaunch virtio with split ring vectorized path as below:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 30,31 -n 4 --file-prefix=virtio-user0 --no-pci \ + --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=0,in_order=0,vectorized=1,server=1 \ + -- -i --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024 + testpmd>set fwd csum + testpmd>start + +17. Rerun step 11-13. + +18. Quit and relaunch vhost w/ iova=pa:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 \ + --vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6]' \ + --iova=pa -- -i --nb-cores=5 --rxq=8 --txq=8 --txd=1024 --rxd=1024 \ + --lcore-dma=[lcore11@0000:00:04.0,lcore12@0000:00:04.0,lcore13@0000:00:04.1,lcore13@0000:00:04.2,lcore14@0000:00:04.1,lcore14@0000:00:04.2,lcore15@0000:00:04.1,lcore15@0000:00:04.2] + +19. Rerun steps 2-18. + +Test Case 3: loopback split ring large chain packets stress test with server mode and cbdma enqueue +--------------------------------------------------------------------------------------------------- +This case uses testpmd to test split ring large chain packets stress test with server mode. + +1. Bind 1 CBDMA channel to vfio-pci, as common step 1. + +2. Launch vhost by below command:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-3 -n 4 -a 0000:00:04.0 \ + --vdev 'eth_vhost0,iface=vhost-net0,queues=1,client=1,dmas=[txq0]' --iova=va -- -i --nb-cores=1 --mbuf-size=65535 --lcore-dma=[lcore3@0000:00:04.0] + +3. Launch virtio and start testpmd:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 30,31 -n 4 --file-prefix=testpmd0 --no-pci \ + --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=1,server=1,mrg_rxbuf=1,in_order=0,vectorized=1,queue_size=2048 \ + -- -i --rxq=1 --txq=1 --txd=2048 --rxd=2048 --nb-cores=1 + testpmd>start + +4. Send large packets from vhost, check virtio can receive packets:: + + testpmd> set txpkts 65535,65535,65535,65535,65535 + testpmd> start tx_first 32 + testpmd> show port stats all + +5. Stop and quit vhost testpmd and relaunch vhost with iova=pa:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-3 -n 4 -a 0000:00:04.0 \ + --vdev 'eth_vhost0,iface=vhost-net0,queues=1,client=1,dmas=[txq0]' --iova=pa -- -i --nb-cores=1 --mbuf-size=65535 --lcore-dma=[lcore3@0000:00:04.0] + +6. Rerun steps 4. + +Test Case 4: loopback packed ring large chain packets stress test with server mode and cbdma enqueue +---------------------------------------------------------------------------------------------------- +This case uses testpmd to test packed ring large chain packets stress test with server mode. + +1. Bind 1 CBDMA channel to vfio-pci, as common step 1. + +2. Launch vhost by below command:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-3 -n 4 -a 0000:00:04.0 \ + --vdev 'eth_vhost0,iface=vhost-net0,queues=1,dmas=[txq0],client=1' --iova=va -- -i --nb-cores=1 --mbuf-size=65535 --lcore-dma=[lcore3@0000:00:04.0] + +3. Launch virtio and start testpmd:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 30,31 -n 4 --file-prefix=testpmd0 --no-pci \ + --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=1,mrg_rxbuf=1,in_order=0,vectorized=1,packed_vq=1,queue_size=2048,server=1 \ + -- -i --rxq=1 --txq=1 --txd=2048 --rxd=2048 --nb-cores=1 + testpmd>start + +4. 
Send large packets from vhost, check virtio can receive packets:: + + testpmd> set txpkts 65535,65535,65535,65535,65535 + testpmd> start tx_first 32 + testpmd> show port stats all + +5. Stop and quit vhost testpmd and relaunch vhost with iova=pa:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-3 -n 4 -a 0000:00:04.0 \ + --vdev 'eth_vhost0,iface=vhost-net0,queues=1,dmas=[txq0],client=1' --iova=pa -- -i --nb-cores=1 --mbuf-size=65535 --lcore-dma=[lcore3@0000:00:04.0] + +6. Rerun steps 4. \ No newline at end of file From patchwork Fri Apr 15 03:26:13 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Ling, WeiX" X-Patchwork-Id: 109737 Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 57D4DA050B; Fri, 15 Apr 2022 05:26:22 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 4E62A4067C; Fri, 15 Apr 2022 05:26:22 +0200 (CEST) Received: from mga02.intel.com (mga02.intel.com [134.134.136.20]) by mails.dpdk.org (Postfix) with ESMTP id 2301E4003C for ; Fri, 15 Apr 2022 05:26:19 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1649993180; x=1681529180; h=from:to:cc:subject:date:message-id:mime-version: content-transfer-encoding; bh=x/OVkf3TpdAZDzxNP0JtkeAajMWzXtaaVPMK04+WuCQ=; b=GnuvsudbExiXS/r+jtChCi3yU2YlTI4RAAjZjoN/7MyMSOBQ48ZPYOwS nhKDXm30umHygCpgwI/I1GZMJSCk9P27Ay11OhZwW/mbf4OL3jBCTkUSk TIYtazFIJObSUcdG5/HgI+a6d6bDvbxFf2x4ctFgpynIg1ahIW4nLAg3U +JEgEbyZ3ukrO2elFVSSJN1YItxbG4YS4y4e6Hn/INfKRy4ws3H3Ec3P8 H7q0Bzb8QdR7K7Zh2VRENGhVL+HVQ2PsR91WuxN0U958LcVB7DgEg+set 8w405gTwM7ek8rtfKYCAPNNkyUPy46C+TRXZ/roudFaCVbMb2Iy7SZSn5 A==; X-IronPort-AV: E=McAfee;i="6400,9594,10317"; a="250383524" X-IronPort-AV: E=Sophos;i="5.90,261,1643702400"; d="scan'208";a="250383524" Received: from fmsmga001.fm.intel.com ([10.253.24.23]) by orsmga101.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 14 Apr 2022 20:26:19 -0700 X-IronPort-AV: E=Sophos;i="5.90,261,1643702400"; d="scan'208";a="700907904" Received: from unknown (HELO localhost.localdomain) ([10.239.251.222]) by fmsmga001-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 14 Apr 2022 20:26:17 -0700 From: Wei Ling To: dts@dpdk.org Cc: Wei Ling Subject: [dts][PATCH V1 4/5] tests/loopback_virtio_user_server_mode_cbdma: add DPDK22.03 new feature Date: Fri, 15 Apr 2022 11:26:13 +0800 Message-Id: <20220415032613.251548-1-weix.ling@intel.com> X-Mailer: git-send-email 2.25.1 MIME-Version: 1.0 X-BeenThere: dts@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: test suite reviews and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dts-bounces@dpdk.org As commit 53d3f4778c(vhost: integrate dmadev in asynchronous data-path), add new tests/loopback_virtio_user_server_mode_cbdma. 
Signed-off-by: Wei Ling --- ..._loopback_virtio_user_server_mode_cbdma.py | 983 ++++++++++++++++++ 1 file changed, 983 insertions(+) create mode 100644 tests/TestSuite_loopback_virtio_user_server_mode_cbdma.py diff --git a/tests/TestSuite_loopback_virtio_user_server_mode_cbdma.py b/tests/TestSuite_loopback_virtio_user_server_mode_cbdma.py new file mode 100644 index 00000000..93e8575c --- /dev/null +++ b/tests/TestSuite_loopback_virtio_user_server_mode_cbdma.py @@ -0,0 +1,983 @@ +# +# BSD LICENSE +# +# Copyright(c) <2022> Intel Corporation. All rights reserved. +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# * Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# * Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# * Neither the name of Intel Corporation nor the names of its +# contributors may be used to endorse or promote products derived +# from this software without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT +# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + + +""" +DPDK Test suite. +Test loopback virtio-user server mode +""" +import re +import time + +from framework.packet import Packet +from framework.pmd_output import PmdOutput +from framework.test_case import TestCase + + +class TestLoopbackVirtioUserServerModeCbama(TestCase): + def set_up_all(self): + """ + Run at the start of each test suite. + """ + self.dut_ports = self.dut.get_ports() + self.ports_socket = self.dut.get_numa_id(self.dut_ports[0]) + self.core_list = self.dut.get_core_list(config="all", socket=self.ports_socket) + self.vhost_core_list = self.core_list[0:9] + self.virtio0_core_list = self.core_list[10:12] + self.path = self.dut.apps_name["test-pmd"] + self.testpmd_name = self.path.split("/")[-1] + self.app_pdump = self.dut.apps_name["pdump"] + self.dump_pcap_q0 = "/root/pdump-rx-q0.pcap" + self.dump_pcap_q1 = "/root/pdump-rx-q1.pcap" + self.device_str = None + self.cbdma_dev_infos = [] + self.vhost_user = self.dut.new_session(suite="vhost_user") + self.virtio_user = self.dut.new_session(suite="virtio-user") + self.pdump_session = self.dut.new_session(suite="pdump") + self.vhost_user_pmd = PmdOutput(self.dut, self.vhost_user) + self.virtio_user_pmd = PmdOutput(self.dut, self.virtio_user) + + def set_up(self): + """ + Run before each test case. 
+ """ + self.dut.send_expect("rm -rf ./vhost-net*", "#") + self.dut.send_expect("killall -s INT %s" % self.testpmd_name, "#") + self.table_header = [ + "Mode", + "Pkt_size", + "Throughput(Mpps)", + "Queue Number", + "Cycle", + ] + self.result_table_create(self.table_header) + + @property + def check_2M_env(self): + out = self.dut.send_expect( + "cat /proc/meminfo |grep Hugepagesize|awk '{print($2)}'", "# " + ) + return True if out == "2048" else False + + def send_6192_packets_from_vhost(self): + """ + start the testpmd of vhost-user, start to send 8k packets + """ + time.sleep(3) + self.vhost_user_pmd.execute_cmd("set fwd csum") + self.vhost_user_pmd.execute_cmd("set txpkts 64,64,64,2000,2000,2000") + self.vhost_user_pmd.execute_cmd("set burst 1") + self.vhost_user_pmd.execute_cmd("start tx_first 1") + self.vhost_user_pmd.execute_cmd("stop") + + def send_960_packets_from_vhost(self): + """ + start the testpmd of vhost-user, start to send 8k packets + """ + time.sleep(3) + self.vhost_user_pmd.execute_cmd("set fwd csum") + self.vhost_user_pmd.execute_cmd("set txpkts 64,128,256,512") + self.vhost_user_pmd.execute_cmd("set burst 1") + self.vhost_user_pmd.execute_cmd("start tx_first 1") + self.vhost_user_pmd.execute_cmd("stop") + + def send_chain_packets_from_vhost(self): + time.sleep(3) + self.vhost_user_pmd.execute_cmd("set txpkts 65535,65535,65535,65535,65535") + self.vhost_user_pmd.execute_cmd("start tx_first 32") + + def verify_virtio_user_receive_packets(self): + out = self.virtio_user_pmd.execute_cmd("show port stats all") + self.logger.info(out) + rx_pkts = int(re.search("RX-packets: (\d+)", out).group(1)) + self.verify(rx_pkts > 0, "virtio-user can not received packets") + + def launch_pdump_to_capture_pkt(self, capture_all_queue=True): + command = ( + self.app_pdump + + " " + + "-v --file-prefix=virtio-user0 -- " + + "--pdump 'device_id=net_virtio_user0,queue=0,rx-dev=%s,mbuf-size=8000' " + + "--pdump 'device_id=net_virtio_user0,queue=1,rx-dev=%s,mbuf-size=8000'" + ) + self.pdump_session.send_expect( + command % (self.dump_pcap_q0, self.dump_pcap_q1), "Port" + ) + + def check_packet_payload_valid(self, pkt_len): + self.pdump_session.send_expect("^c", "# ", 60) + dump_file_list = [self.dump_pcap_q0, self.dump_pcap_q1] + for pcap in dump_file_list: + self.dut.session.copy_file_from(src="%s" % pcap, dst="%s" % pcap) + pkt = Packet() + pkts = pkt.read_pcapfile(pcap) + expect_data = str(pkts[0]["Raw"]) + for i in range(len(pkts)): + self.verify( + len(pkts[i]) == pkt_len, + "virtio-user0 receive packet's length not equal %s Byte" % pkt_len, + ) + check_data = str(pkts[i]["Raw"]) + self.verify( + check_data == expect_data, + "the payload in receive packets has been changed from %s" % i, + ) + + def start_vhost_testpmd(self, cores, eal_param, param, ports, iova_mode="va"): + eal_param += " --iova=" + iova_mode + self.vhost_user_pmd.start_testpmd( + cores=cores, + eal_param=eal_param, + param=param, + ports=ports, + prefix="vhost", + fixed_prefix=True, + ) + + def start_virtio_testpmd_with_vhost_net0(self, cores, eal_param, param): + """ + launch the testpmd as virtio with vhost_net0 + """ + if self.check_2M_env: + eal_param += " --single-file-segments" + self.virtio_user_pmd.start_testpmd( + cores=cores, + eal_param=eal_param, + param=param, + no_pci=True, + prefix="virtio-user0", + fixed_prefix=True, + ) + self.virtio_user_pmd.execute_cmd("set fwd csum") + self.virtio_user_pmd.execute_cmd("start") + + @staticmethod + def generate_dms_param(queues): + das_list = [] + for i in range(queues): 
+ das_list.append("txq{}".format(i)) + das_param = "[{}]".format(";".join(das_list)) + return das_param + + def get_cbdma_ports_info_and_bind_to_dpdk(self, cbdma_num, allow_diff_socket=False): + """ + get and bind cbdma ports into DPDK driver + """ + self.all_cbdma_list = [] + self.cbdma_list = [] + self.cbdma_str = "" + out = self.dut.send_expect( + "./usertools/dpdk-devbind.py --status-dev dma", "# ", 30 + ) + device_info = out.split("\n") + for device in device_info: + pci_info = re.search("\s*(0000:\S*:\d*.\d*)", device) + if pci_info is not None: + dev_info = pci_info.group(1) + # the numa id of ioat dev, only add the device which on same socket with nic dev + bus = int(dev_info[5:7], base=16) + if bus >= 128: + cur_socket = 1 + else: + cur_socket = 0 + if allow_diff_socket: + self.all_cbdma_list.append(pci_info.group(1)) + else: + if self.ports_socket == cur_socket: + self.all_cbdma_list.append(pci_info.group(1)) + self.verify( + len(self.all_cbdma_list) >= cbdma_num, "There no enough cbdma device" + ) + self.cbdma_list = self.all_cbdma_list[0:cbdma_num] + self.cbdma_str = " ".join(self.cbdma_list) + self.dut.send_expect( + "./usertools/dpdk-devbind.py --force --bind=%s %s" + % (self.drivername, self.cbdma_str), + "# ", + 60, + ) + + def bind_cbdma_device_to_kernel(self): + self.dut.send_expect("modprobe ioatdma", "# ") + self.dut.send_expect( + "./usertools/dpdk-devbind.py -u %s" % self.cbdma_str, "# ", 30 + ) + self.dut.send_expect( + "./usertools/dpdk-devbind.py --force --bind=ioatdma %s" % self.cbdma_str, + "# ", + 60, + ) + + def close_all_session(self): + """ + close session of vhost-user and virtio-user + """ + self.dut.close_session(self.vhost_user) + self.dut.close_session(self.virtio_user) + + def test_server_mode_packed_ring_all_path_multi_queues_payload_check_with_cbdma( + self, + ): + """ + Test Case 1: loopback packed ring all path cbdma test payload check with server mode and multi-queues + """ + self.get_cbdma_ports_info_and_bind_to_dpdk(8) + vhost_eal_param = "--vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq6;txq7]'" + core1 = self.vhost_core_list[1] + core2 = self.vhost_core_list[2] + core3 = self.vhost_core_list[3] + core4 = self.vhost_core_list[4] + core5 = self.vhost_core_list[5] + cbdma1 = self.cbdma_list[0] + cbdma2 = self.cbdma_list[1] + cbdma3 = self.cbdma_list[2] + cbdma4 = self.cbdma_list[3] + cbdma5 = self.cbdma_list[4] + cbdma6 = self.cbdma_list[5] + cbdma7 = self.cbdma_list[6] + cbdma8 = self.cbdma_list[7] + lcore_dma = ( + f"[lcore{core1}@{cbdma1},lcore{core1}@{cbdma8}," + f"lcore{core2}@{cbdma2},lcore{core2}@{cbdma3},lcore{core2}@{cbdma4}," + f"lcore{core3}@{cbdma3},lcore{core3}@{cbdma4},lcore{core3}@{cbdma5}," + f"lcore{core4}@{cbdma3},lcore{core4}@{cbdma4},lcore{core4}@{cbdma5},lcore{core4}@{cbdma6}," + f"lcore{core5}@{cbdma1},lcore{core5}@{cbdma2},lcore{core5}@{cbdma3},lcore{core5}@{cbdma4},lcore{core5}@{cbdma5},lcore{core5}@{cbdma6},lcore{core5}@{cbdma7},lcore{core5}@{cbdma8}]" + ) + vhost_param = ( + " --nb-cores=5 --rxq=8 --txq=8 --txd=1024 --rxd=1024" + + " --lcore-dma={}".format(lcore_dma) + ) + self.start_vhost_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + ports=self.cbdma_list, + iova_mode="va", + ) + + virtio_eal_param = "--vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=1,in_order=1,packed_vq=1,server=1" + virtio_param = " --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024" + self.logger.info("Launch virtio with packed ring 
mergeable inorder path") + self.start_virtio_testpmd_with_vhost_net0( + cores=self.virtio0_core_list, eal_param=virtio_eal_param, param=virtio_param + ) + + self.launch_pdump_to_capture_pkt() + self.send_6192_packets_from_vhost() + self.check_packet_payload_valid(pkt_len=6192) + + self.logger.info("Quit and relaunch vhost and rerun step 4-6") + self.vhost_user_pmd.quit() + self.start_vhost_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + ports=self.cbdma_list, + iova_mode="va", + ) + self.launch_pdump_to_capture_pkt() + self.send_6192_packets_from_vhost() + self.check_packet_payload_valid(pkt_len=6192) + + self.logger.info("Quit and relaunch virtio with packed ring mergeable path") + self.virtio_user_pmd.quit() + virtio_eal_param = "--vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=1,in_order=0,packed_vq=1,server=1" + virtio_param = " --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024" + self.start_virtio_testpmd_with_vhost_net0( + cores=self.virtio0_core_list, eal_param=virtio_eal_param, param=virtio_param + ) + self.launch_pdump_to_capture_pkt() + self.send_6192_packets_from_vhost() + self.check_packet_payload_valid(pkt_len=6192) + + self.logger.info("Quit and relaunch vhost and rerun step 4-6") + self.vhost_user_pmd.quit() + self.start_vhost_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + ports=self.cbdma_list, + iova_mode="va", + ) + self.launch_pdump_to_capture_pkt() + self.send_6192_packets_from_vhost() + self.check_packet_payload_valid(pkt_len=6192) + + self.logger.info("Quit and relaunch virtio with packed ring non-mergeable path") + self.virtio_user_pmd.quit() + virtio_eal_param = "--vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=0,in_order=0,packed_vq=1,server=1" + virtio_param = " --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024" + self.start_virtio_testpmd_with_vhost_net0( + cores=self.virtio0_core_list, eal_param=virtio_eal_param, param=virtio_param + ) + self.launch_pdump_to_capture_pkt() + self.send_960_packets_from_vhost() + self.check_packet_payload_valid(pkt_len=960) + + self.logger.info("Quit and relaunch vhost and rerun step 10-12") + self.vhost_user_pmd.quit() + self.start_vhost_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + ports=self.cbdma_list, + iova_mode="va", + ) + self.launch_pdump_to_capture_pkt() + self.send_960_packets_from_vhost() + self.check_packet_payload_valid(pkt_len=960) + + self.logger.info( + "Quit and relaunch virtio with packed ring inorder non-mergeable path" + ) + self.virtio_user_pmd.quit() + virtio_eal_param = "--vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=0,in_order=1,packed_vq=1,server=1" + virtio_param = " --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024" + self.start_virtio_testpmd_with_vhost_net0( + cores=self.virtio0_core_list, eal_param=virtio_eal_param, param=virtio_param + ) + self.launch_pdump_to_capture_pkt() + self.send_960_packets_from_vhost() + self.check_packet_payload_valid(pkt_len=960) + + self.logger.info("Quit and relaunch vhost and rerun step 10-12") + self.vhost_user_pmd.quit() + self.start_vhost_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + ports=self.cbdma_list, + iova_mode="va", + ) + self.launch_pdump_to_capture_pkt() + self.send_960_packets_from_vhost() + self.check_packet_payload_valid(pkt_len=960) + + self.logger.info( + "Quit and relaunch 
virtio with packed ring vectorized path and ring size is not power of 2 " + ) + self.virtio_user_pmd.quit() + virtio_eal_param = "--vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=0,in_order=1,packed_vq=1,vectorized=1,queue_size=1025,server=1" + virtio_param = " --nb-cores=1 --rxq=8 --txq=8 --txd=1025 --rxd=1025" + self.start_virtio_testpmd_with_vhost_net0( + cores=self.virtio0_core_list, eal_param=virtio_eal_param, param=virtio_param + ) + self.launch_pdump_to_capture_pkt() + self.send_960_packets_from_vhost() + self.check_packet_payload_valid(pkt_len=960) + + self.logger.info("Quit and relaunch vhost and rerun step 10-12") + self.vhost_user_pmd.quit() + self.start_vhost_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + ports=self.cbdma_list, + iova_mode="va", + ) + self.launch_pdump_to_capture_pkt() + self.send_960_packets_from_vhost() + self.check_packet_payload_valid(pkt_len=960) + + self.logger.info("Quit and relaunch vhost w/ iova=pa, Rerun steps 2-19") + if not self.check_2M_env: + self.virtio_user_pmd.quit() + self.vhost_user_pmd.quit() + vhost_eal_param = "--vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq6;txq7]'" + vhost_param = ( + " --nb-cores=5 --rxq=8 --txq=8 --txd=1024 --rxd=1024" + + " --lcore-dma={}".format(lcore_dma) + ) + self.start_vhost_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + ports=self.cbdma_list, + iova_mode="pa", + ) + + virtio_eal_param = "--vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=1,in_order=1,packed_vq=1,server=1" + virtio_param = " --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024" + self.logger.info("Launch virtio with packed ring mergeable inorder path") + self.start_virtio_testpmd_with_vhost_net0( + cores=self.virtio0_core_list, + eal_param=virtio_eal_param, + param=virtio_param, + ) + + self.launch_pdump_to_capture_pkt() + self.send_6192_packets_from_vhost() + self.check_packet_payload_valid(pkt_len=6192) + + self.logger.info("Quit and relaunch vhost and rerun step 4-6") + self.vhost_user_pmd.quit() + self.start_vhost_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + ports=self.cbdma_list, + iova_mode="pa", + ) + self.launch_pdump_to_capture_pkt() + self.send_6192_packets_from_vhost() + self.check_packet_payload_valid(pkt_len=6192) + + self.logger.info("Quit and relaunch virtio with packed ring mergeable path") + self.virtio_user_pmd.quit() + virtio_eal_param = "--vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=1,in_order=0,packed_vq=1,server=1" + virtio_param = " --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024" + self.start_virtio_testpmd_with_vhost_net0( + cores=self.virtio0_core_list, + eal_param=virtio_eal_param, + param=virtio_param, + ) + self.launch_pdump_to_capture_pkt() + self.send_6192_packets_from_vhost() + self.check_packet_payload_valid(pkt_len=6192) + + self.logger.info("Quit and relaunch vhost and rerun step 4-6") + self.vhost_user_pmd.quit() + self.start_vhost_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + ports=self.cbdma_list, + iova_mode="pa", + ) + self.launch_pdump_to_capture_pkt() + self.send_6192_packets_from_vhost() + self.check_packet_payload_valid(pkt_len=6192) + + self.logger.info( + "Quit and relaunch virtio with packed ring non-mergeable path" + ) + self.virtio_user_pmd.quit() + virtio_eal_param = 
"--vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=0,in_order=0,packed_vq=1,server=1" + virtio_param = " --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024" + self.start_virtio_testpmd_with_vhost_net0( + cores=self.virtio0_core_list, + eal_param=virtio_eal_param, + param=virtio_param, + ) + self.launch_pdump_to_capture_pkt() + self.send_960_packets_from_vhost() + self.check_packet_payload_valid(pkt_len=960) + + self.logger.info("Quit and relaunch vhost and rerun step 10-12") + self.vhost_user_pmd.quit() + self.start_vhost_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + ports=self.cbdma_list, + iova_mode="pa", + ) + self.launch_pdump_to_capture_pkt() + self.send_960_packets_from_vhost() + self.check_packet_payload_valid(pkt_len=960) + + self.logger.info( + "Quit and relaunch virtio with packed ring inorder non-mergeable path" + ) + self.virtio_user_pmd.quit() + virtio_eal_param = "--vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=0,in_order=1,packed_vq=1,server=1" + virtio_param = " --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024" + self.start_virtio_testpmd_with_vhost_net0( + cores=self.virtio0_core_list, + eal_param=virtio_eal_param, + param=virtio_param, + ) + self.launch_pdump_to_capture_pkt() + self.send_960_packets_from_vhost() + self.check_packet_payload_valid(pkt_len=960) + + self.logger.info("Quit and relaunch vhost and rerun step 10-12") + self.vhost_user_pmd.quit() + self.start_vhost_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + ports=self.cbdma_list, + iova_mode="pa", + ) + self.launch_pdump_to_capture_pkt() + self.send_960_packets_from_vhost() + self.check_packet_payload_valid(pkt_len=960) + + self.logger.info( + "Quit and relaunch virtio with packed ring vectorized path and ring size is not power of 2 " + ) + self.virtio_user_pmd.quit() + virtio_eal_param = "--vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=0,in_order=1,packed_vq=1,vectorized=1,queue_size=1025,server=1" + virtio_param = " --nb-cores=1 --rxq=8 --txq=8 --txd=1025 --rxd=1025" + self.start_virtio_testpmd_with_vhost_net0( + cores=self.virtio0_core_list, + eal_param=virtio_eal_param, + param=virtio_param, + ) + self.launch_pdump_to_capture_pkt() + self.send_960_packets_from_vhost() + self.check_packet_payload_valid(pkt_len=960) + + self.logger.info("Quit and relaunch vhost and rerun step 10-12") + self.vhost_user_pmd.quit() + self.start_vhost_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + ports=self.cbdma_list, + iova_mode="pa", + ) + self.launch_pdump_to_capture_pkt() + self.send_960_packets_from_vhost() + self.check_packet_payload_valid(pkt_len=960) + + def test_server_mode_split_ring_all_path_multi_queues_payload_check_with_cbdma( + self, + ): + """ + Test Case 2: loopback split ring all path cbdma test payload check with server mode and multi-queues + """ + self.get_cbdma_ports_info_and_bind_to_dpdk(3) + vhost_eal_param = "--vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6]'" + core1 = self.vhost_core_list[1] + core2 = self.vhost_core_list[2] + core3 = self.vhost_core_list[3] + core4 = self.vhost_core_list[4] + core5 = self.vhost_core_list[5] + cbdma1 = self.cbdma_list[0] + cbdma2 = self.cbdma_list[1] + cbdma3 = self.cbdma_list[2] + lcore_dma = ( + f"[lcore{core1}@{cbdma1}," + f"lcore{core2}@{cbdma1}," + 
f"lcore{core3}@{cbdma2},lcore{core3}@{cbdma3}," + f"lcore{core4}@{cbdma2},lcore{core4}@{cbdma3}," + f"lcore{core5}@{cbdma2},lcore{core5}@{cbdma3}]" + ) + vhost_param = ( + " --nb-cores=5 --rxq=8 --txq=8 --txd=1024 --rxd=1024" + + " --lcore-dma={}".format(lcore_dma) + ) + self.start_vhost_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + ports=self.cbdma_list, + iova_mode="va", + ) + + virtio_eal_param = "--vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=1,in_order=1,server=1" + virtio_param = " --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024" + self.logger.info("Launch virtio with split ring mergeable inorder path") + self.start_virtio_testpmd_with_vhost_net0( + cores=self.virtio0_core_list, eal_param=virtio_eal_param, param=virtio_param + ) + self.launch_pdump_to_capture_pkt() + self.send_6192_packets_from_vhost() + self.check_packet_payload_valid(pkt_len=6192) + + self.logger.info("Quit and relaunch vhost and rerun step 4-6") + self.vhost_user_pmd.quit() + self.start_vhost_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + ports=self.cbdma_list, + iova_mode="va", + ) + self.launch_pdump_to_capture_pkt() + self.send_6192_packets_from_vhost() + self.check_packet_payload_valid(pkt_len=6192) + + self.logger.info("Quit and relaunch virtio with split ring mergeable path") + self.virtio_user_pmd.quit() + virtio_eal_param = "--vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=1,in_order=0,server=1" + virtio_param = " --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024" + self.start_virtio_testpmd_with_vhost_net0( + cores=self.virtio0_core_list, eal_param=virtio_eal_param, param=virtio_param + ) + self.launch_pdump_to_capture_pkt() + self.send_6192_packets_from_vhost() + self.check_packet_payload_valid(pkt_len=6192) + + self.logger.info("Quit and relaunch vhost and rerun step 4-6") + self.vhost_user_pmd.quit() + self.start_vhost_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + ports=self.cbdma_list, + iova_mode="va", + ) + self.launch_pdump_to_capture_pkt() + self.send_6192_packets_from_vhost() + self.check_packet_payload_valid(pkt_len=6192) + + self.logger.info("Quit and relaunch virtio with split ring non-mergeable path") + self.virtio_user_pmd.quit() + virtio_eal_param = "--vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=0,in_order=0,server=1" + virtio_param = ( + " --enable-hw-vlan-strip --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024" + ) + self.start_virtio_testpmd_with_vhost_net0( + cores=self.virtio0_core_list, eal_param=virtio_eal_param, param=virtio_param + ) + self.launch_pdump_to_capture_pkt() + self.send_960_packets_from_vhost() + self.check_packet_payload_valid(pkt_len=960) + + self.logger.info("Quit and relaunch vhost and rerun step 11-12") + self.vhost_user_pmd.quit() + self.start_vhost_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + ports=self.cbdma_list, + iova_mode="va", + ) + self.launch_pdump_to_capture_pkt() + self.send_960_packets_from_vhost() + self.check_packet_payload_valid(pkt_len=960) + + self.logger.info( + "Quit and relaunch virtio with split ring inorder non-mergeable path" + ) + self.virtio_user_pmd.quit() + virtio_eal_param = "--vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=0,in_order=1,server=1" + virtio_param = " --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024" 
+ self.start_virtio_testpmd_with_vhost_net0( + cores=self.virtio0_core_list, eal_param=virtio_eal_param, param=virtio_param + ) + self.launch_pdump_to_capture_pkt() + self.send_960_packets_from_vhost() + self.check_packet_payload_valid(pkt_len=960) + + self.logger.info("Quit and relaunch vhost and rerun step 11-12") + self.vhost_user_pmd.quit() + self.start_vhost_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + ports=self.cbdma_list, + iova_mode="va", + ) + self.launch_pdump_to_capture_pkt() + self.send_960_packets_from_vhost() + self.check_packet_payload_valid(pkt_len=960) + + self.logger.info("Quit and relaunch virtio with split ring vectorized path") + self.virtio_user_pmd.quit() + virtio_eal_param = "--vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=0,in_order=0,vectorized=1,server=1" + virtio_param = " --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024" + self.start_virtio_testpmd_with_vhost_net0( + cores=self.virtio0_core_list, eal_param=virtio_eal_param, param=virtio_param + ) + self.launch_pdump_to_capture_pkt() + self.send_960_packets_from_vhost() + self.check_packet_payload_valid(pkt_len=960) + + self.logger.info("Quit and relaunch vhost and rerun step 11-12") + self.vhost_user_pmd.quit() + self.start_vhost_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + ports=self.cbdma_list, + iova_mode="va", + ) + self.launch_pdump_to_capture_pkt() + self.send_960_packets_from_vhost() + self.check_packet_payload_valid(pkt_len=960) + + self.logger.info("Quit and relaunch vhost w/ iova=pa, Rerun steps 2-19") + if not self.check_2M_env: + self.virtio_user_pmd.quit() + self.vhost_user_pmd.quit() + vhost_eal_param = "--vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6]'" + vhost_param = ( + " --nb-cores=5 --rxq=8 --txq=8 --txd=1024 --rxd=1024" + + " --lcore-dma={}".format(lcore_dma) + ) + self.start_vhost_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + ports=self.cbdma_list, + iova_mode="pa", + ) + + virtio_eal_param = "--vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=1,in_order=1,server=1" + virtio_param = " --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024" + self.logger.info("Launch virtio with split ring mergeable inorder path") + self.start_virtio_testpmd_with_vhost_net0( + cores=self.virtio0_core_list, + eal_param=virtio_eal_param, + param=virtio_param, + ) + self.launch_pdump_to_capture_pkt() + self.send_6192_packets_from_vhost() + self.check_packet_payload_valid(pkt_len=6192) + + self.logger.info("Quit and relaunch vhost and rerun step 4-6") + self.vhost_user_pmd.quit() + self.start_vhost_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + ports=self.cbdma_list, + iova_mode="pa", + ) + self.launch_pdump_to_capture_pkt() + self.send_6192_packets_from_vhost() + self.check_packet_payload_valid(pkt_len=6192) + + self.logger.info("Quit and relaunch virtio with split ring mergeable path") + self.virtio_user_pmd.quit() + virtio_eal_param = "--vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=1,in_order=0,server=1" + virtio_param = " --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024" + self.start_virtio_testpmd_with_vhost_net0( + cores=self.virtio0_core_list, + eal_param=virtio_eal_param, + param=virtio_param, + ) + self.launch_pdump_to_capture_pkt() + self.send_6192_packets_from_vhost() + 
self.check_packet_payload_valid(pkt_len=6192) + + self.logger.info("Quit and relaunch vhost and rerun step 4-6") + self.vhost_user_pmd.quit() + self.start_vhost_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + ports=self.cbdma_list, + iova_mode="pa", + ) + self.launch_pdump_to_capture_pkt() + self.send_6192_packets_from_vhost() + self.check_packet_payload_valid(pkt_len=6192) + + self.logger.info( + "Quit and relaunch virtio with split ring non-mergeable path" + ) + self.virtio_user_pmd.quit() + virtio_eal_param = "--vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=0,in_order=0,server=1" + virtio_param = " --enable-hw-vlan-strip --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024" + self.start_virtio_testpmd_with_vhost_net0( + cores=self.virtio0_core_list, + eal_param=virtio_eal_param, + param=virtio_param, + ) + self.launch_pdump_to_capture_pkt() + self.send_960_packets_from_vhost() + self.check_packet_payload_valid(pkt_len=960) + + self.logger.info("Quit and relaunch vhost and rerun step 11-12") + self.vhost_user_pmd.quit() + self.start_vhost_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + ports=self.cbdma_list, + iova_mode="pa", + ) + self.launch_pdump_to_capture_pkt() + self.send_960_packets_from_vhost() + self.check_packet_payload_valid(pkt_len=960) + + self.logger.info( + "Quit and relaunch virtio with split ring inorder non-mergeable path" + ) + self.virtio_user_pmd.quit() + virtio_eal_param = "--vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=0,in_order=1,server=1" + virtio_param = " --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024" + self.start_virtio_testpmd_with_vhost_net0( + cores=self.virtio0_core_list, + eal_param=virtio_eal_param, + param=virtio_param, + ) + self.launch_pdump_to_capture_pkt() + self.send_960_packets_from_vhost() + self.check_packet_payload_valid(pkt_len=960) + + self.logger.info("Quit and relaunch vhost and rerun step 11-12") + self.vhost_user_pmd.quit() + self.start_vhost_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + ports=self.cbdma_list, + iova_mode="pa", + ) + self.launch_pdump_to_capture_pkt() + self.send_960_packets_from_vhost() + self.check_packet_payload_valid(pkt_len=960) + + self.logger.info("Quit and relaunch virtio with split ring vectorized path") + self.virtio_user_pmd.quit() + virtio_eal_param = "--vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=0,in_order=0,vectorized=1,server=1" + virtio_param = " --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024" + self.start_virtio_testpmd_with_vhost_net0( + cores=self.virtio0_core_list, + eal_param=virtio_eal_param, + param=virtio_param, + ) + self.launch_pdump_to_capture_pkt() + self.send_960_packets_from_vhost() + self.check_packet_payload_valid(pkt_len=960) + + self.logger.info("Quit and relaunch vhost and rerun step 11-12") + self.vhost_user_pmd.quit() + self.start_vhost_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + ports=self.cbdma_list, + iova_mode="pa", + ) + self.launch_pdump_to_capture_pkt() + self.send_960_packets_from_vhost() + self.check_packet_payload_valid(pkt_len=960) + + def test_server_mode_split_ring_large_chain_packets_stress_test_with_cbdma(self): + """ + Test Case 3: loopback split ring large chain packets stress test with server mode and cbdma enqueue + """ + if not self.check_2M_env: + 
self.get_cbdma_ports_info_and_bind_to_dpdk(1) + vhost_eal_param = ( + "--vdev 'eth_vhost0,iface=vhost-net0,queues=1,client=1,dmas=[txq0]'" + ) + core1 = self.vhost_core_list[1] + cbdma1 = self.cbdma_list[0] + lcore_dma = f"[lcore{core1}@{cbdma1}]" + vhost_param = " --nb-cores=1 --mbuf-size=65535" + " --lcore-dma={}".format( + lcore_dma + ) + self.start_vhost_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + ports=self.cbdma_list, + iova_mode="va", + ) + + virtio_eal_param = "--vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=1,server=1,mrg_rxbuf=1,in_order=0,vectorized=1,queue_size=2048" + virtio_param = " --nb-cores=1 --rxq=1 --txq=1 --txd=2048 --rxd=2048" + self.logger.info("Launch virtio with split ring vectorized path") + self.start_virtio_testpmd_with_vhost_net0( + cores=self.virtio0_core_list, + eal_param=virtio_eal_param, + param=virtio_param, + ) + + self.send_chain_packets_from_vhost() + self.verify_virtio_user_receive_packets() + + self.logger.info( + "Stop and quit vhost testpmd and relaunch vhost with iova=pa" + ) + self.vhost_user_pmd.quit() + self.start_vhost_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + ports=self.cbdma_list, + iova_mode="pa", + ) + self.send_chain_packets_from_vhost() + self.verify_virtio_user_receive_packets() + + def test_server_mode_packed_ring_large_chain_packets_stress_test_with_cbdma(self): + """ + Test Case 4: loopback split packed large chain packets stress test with server mode and cbdma enqueue + """ + if not self.check_2M_env: + self.get_cbdma_ports_info_and_bind_to_dpdk(1) + vhost_eal_param = ( + "--vdev 'eth_vhost0,iface=vhost-net0,queues=1,client=1,dmas=[txq0]'" + ) + core1 = self.vhost_core_list[1] + cbdma1 = self.cbdma_list[0] + lcore_dma = f"[lcore{core1}@{cbdma1}]" + vhost_param = " --nb-cores=1 --mbuf-size=65535" + " --lcore-dma={}".format( + lcore_dma + ) + self.start_vhost_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + ports=self.cbdma_list, + iova_mode="va", + ) + + virtio_eal_param = "--vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=1,mrg_rxbuf=1,in_order=0,vectorized=1,packed_vq=1,queue_size=2048,server=1" + virtio_param = " --nb-cores=1 --rxq=1 --txq=1 --txd=2048 --rxd=2048" + self.logger.info("Launch virtio with split ring vectorized path") + self.start_virtio_testpmd_with_vhost_net0( + cores=self.virtio0_core_list, + eal_param=virtio_eal_param, + param=virtio_param, + ) + + self.send_chain_packets_from_vhost() + self.verify_virtio_user_receive_packets() + + self.logger.info( + "Stop and quit vhost testpmd and relaunch vhost with iova=pa" + ) + self.vhost_user_pmd.quit() + self.start_vhost_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + ports=self.cbdma_list, + iova_mode="pa", + ) + self.send_chain_packets_from_vhost() + self.verify_virtio_user_receive_packets() + + def tear_down(self): + """ + Run after each test case. + """ + self.virtio_user_pmd.quit() + self.vhost_user_pmd.quit() + self.dut.send_expect("killall -s INT %s" % self.testpmd_name, "#") + self.bind_cbdma_device_to_kernel() + + def tear_down_all(self): + """ + Run after each test suite. 
+ """ + self.close_all_session() From patchwork Fri Apr 15 03:26:24 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Ling, WeiX" X-Patchwork-Id: 109738 Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 8A363A050B; Fri, 15 Apr 2022 05:26:32 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 806C6407FF; Fri, 15 Apr 2022 05:26:32 +0200 (CEST) Received: from mga12.intel.com (mga12.intel.com [192.55.52.136]) by mails.dpdk.org (Postfix) with ESMTP id 271D14003C for ; Fri, 15 Apr 2022 05:26:30 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1649993190; x=1681529190; h=from:to:cc:subject:date:message-id:mime-version: content-transfer-encoding; bh=AsVMU8BBWN3lY5VuoVwg2DSlOlf80WMWOj4dG4u1PpI=; b=XeotabpHFeZtxC+i4X4fYOVaaxxSpmlB71quNbu37w16y8TkuByT/Yfs 2iy8OfkRndaz+KfczivkytJE2kGSLPuf7xG1Cin90/sd8qBCA3PNh5BDt 86nSSPPLBxqlsTa2a67OCakmKOURkYux/CIhtFw/Y8jYajb3cTpOGZR8K Z5OSOKlpVLaeFg5S6UUvrm7RUZVYymsEPPBywGd42AOrL02qtCm1fkyMw 9EWdvPFuSJnD/xzZnNwRCUdLsN+sDdi2YhEj17k8ICJl9YkJIny83Ld7y htyYD8uruCXw361keoUqNBM2BR0SXGE2QsspqC7sjw7s72SXkyirFzwsc A==; X-IronPort-AV: E=McAfee;i="6400,9594,10317"; a="243017068" X-IronPort-AV: E=Sophos;i="5.90,261,1643702400"; d="scan'208";a="243017068" Received: from fmsmga001.fm.intel.com ([10.253.24.23]) by fmsmga106.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 14 Apr 2022 20:26:29 -0700 X-IronPort-AV: E=Sophos;i="5.90,261,1643702400"; d="scan'208";a="700907925" Received: from unknown (HELO localhost.localdomain) ([10.239.251.222]) by fmsmga001-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 14 Apr 2022 20:26:28 -0700 From: Wei Ling To: dts@dpdk.org Cc: Wei Ling Subject: [dts][PATCH V1 5/5] test_plans/index: add new testsuite Date: Fri, 15 Apr 2022 11:26:24 +0800 Message-Id: <20220415032624.251607-1-weix.ling@intel.com> X-Mailer: git-send-email 2.25.1 MIME-Version: 1.0 X-BeenThere: dts@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: test suite reviews and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dts-bounces@dpdk.org As commit 53d3f4778c(vhost: integrate dmadev in asynchronous data-path), add loopback_virtio_user_server_mode_cbdma_test_plan new testplan into index.rst. Signed-off-by: Wei Ling Tested-by: Chenyu Huang --- test_plans/index.rst | 1 + 1 file changed, 1 insertion(+) diff --git a/test_plans/index.rst b/test_plans/index.rst index f8118d14..a6a396c7 100644 --- a/test_plans/index.rst +++ b/test_plans/index.rst @@ -121,6 +121,7 @@ The following are the test plans for the DPDK DTS automated test system. linux_modules_test_plan loopback_multi_paths_port_restart_test_plan loopback_virtio_user_server_mode_test_plan + loopback_virtio_user_server_mode_cbdma_test_plan mac_filter_test_plan macsec_for_ixgbe_test_plan metering_and_policing_test_plan