Patch Detail
GET /api/patches/120076/?format=api
Web URL:      https://patches.dpdk.org/project/dts/patch/20221122085233.2898065-1-weix.ling@intel.com/
Project:      DTS (list: dts@dpdk.org, scm: git://dpdk.org/tools/dts)
Message ID:   <20221122085233.2898065-1-weix.ling@intel.com>
List archive: https://inbox.dpdk.org/dts/20221122085233.2898065-1-weix.ling@intel.com
Date:         2022-11-22T08:52:33
Name:         [V3,1/2] test_plans/basic_4k_pages_cbdma_test_plan: modify the dmas parameter by DPDK changed
State:        superseded
Hash:         ba631a71b5fe0823d9889a12d7682a73a678a155
Submitter:    Ling, WeiX <weix.ling@intel.com>
Mbox:         https://patches.dpdk.org/project/dts/patch/20221122085233.2898065-1-weix.ling@intel.com/mbox/
Series:       "modify the dmas parameter by DPDK changed", version 3 (https://patches.dpdk.org/project/dts/list/?series=25860)
Comments:     https://patches.dpdk.org/api/patches/120076/comments/
Checks:       pending (https://patches.dpdk.org/api/patches/120076/checks/)
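To inspect or apply this patch locally, the REST and mbox endpoints above can be used directly. A minimal sketch, assuming curl and git are installed and a DTS checkout is the current directory::

    # Fetch the patch metadata shown on this page (standard Patchwork REST API).
    curl -s https://patches.dpdk.org/api/patches/120076/ | python3 -m json.tool

    # Download the raw patch as an mbox and apply it to the DTS tree.
    curl -s 'https://patches.dpdk.org/project/dts/patch/20221122085233.2898065-1-weix.ling@intel.com/mbox/' | git am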
"E=Sophos;i=\"5.96,183,1665471600\";\n d=\"scan'208,223\";a=\"710125128\"" ], "From": "Wei Ling <weix.ling@intel.com>", "To": "dts@dpdk.org", "Cc": "Wei Ling <weix.ling@intel.com>", "Subject": "[dts][PATCH V3 1/2] test_plans/basic_4k_pages_cbdma_test_plan: modify\n the dmas parameter by DPDK changed", "Date": "Tue, 22 Nov 2022 16:52:33 +0800", "Message-Id": "<20221122085233.2898065-1-weix.ling@intel.com>", "X-Mailer": "git-send-email 2.25.1", "MIME-Version": "1.0", "Content-Transfer-Encoding": "8bit", "X-BeenThere": "dts@dpdk.org", "X-Mailman-Version": "2.1.29", "Precedence": "list", "List-Id": "test suite reviews and discussions <dts.dpdk.org>", "List-Unsubscribe": "<https://mails.dpdk.org/options/dts>,\n <mailto:dts-request@dpdk.org?subject=unsubscribe>", "List-Archive": "<http://mails.dpdk.org/archives/dts/>", "List-Post": "<mailto:dts@dpdk.org>", "List-Help": "<mailto:dts-request@dpdk.org?subject=help>", "List-Subscribe": "<https://mails.dpdk.org/listinfo/dts>,\n <mailto:dts-request@dpdk.org?subject=subscribe>", "Errors-To": "dts-bounces@dpdk.org" }, "content": "From DPDK-22.11, the dmas parameter have changed, so modify the dmas\nparameter in the testplan.\n\nSigned-off-by: Wei Ling <weix.ling@intel.com>\n---\n test_plans/basic_4k_pages_cbdma_test_plan.rst | 626 +++++++++---------\n 1 file changed, 318 insertions(+), 308 deletions(-)", "diff": "diff --git a/test_plans/basic_4k_pages_cbdma_test_plan.rst b/test_plans/basic_4k_pages_cbdma_test_plan.rst\nindex 009a200c..495eb73b 100644\n--- a/test_plans/basic_4k_pages_cbdma_test_plan.rst\n+++ b/test_plans/basic_4k_pages_cbdma_test_plan.rst\n@@ -20,23 +20,35 @@ vhost-user/virtio-net mergeable path.\n 3.Check the payload of large packet (larger than 1MB) is valid after forwarding packets with vm2vm split ring and packed ring\n vhost-user/virtio-net mergeable path.\n \n-Note:\n+.. note::\n \n-1. When CBDMA channels are bound to vfio driver, VA mode is the default and recommended.\n-For PA mode, page by page mapping may exceed IOMMU's max capability, better to use 1G guest hugepage.\n-2. DPDK local patch that about vhost pmd is needed when testing Vhost asynchronous data path with testpmd. And case 4-5 have not yet been automated.\n+ 1. When CBDMA channels are bound to vfio driver, VA mode is the default and recommended.\n+ For PA mode, page by page mapping may exceed IOMMU's max capability, better to use 1G guest hugepage.\n+ 2. DPDK local patch that about vhost pmd is needed when testing Vhost asynchronous data path with testpmd. In this patch,\n+ we enable asynchronous data path for vhostpmd. Asynchronous data path is enabled per tx/rx queue, and users need to specify\n+ the DMA device used by the tx/rx queue. 
+ the DMA device used by the tx/rx queue. Each tx/rx queue only supports to use one DMA device (This is limited by the
+ implementation of vhostpmd), but one DMA device can be shared among multiple tx/rx queues of different vhost PMD ports.
 
-For more about dpdk-testpmd sample, please refer to the DPDK docments:
-https://doc.dpdk.org/guides/testpmd_app_ug/run_app.html
-For virtio-user vdev parameter, you can refer to the DPDK docments:
-https://doc.dpdk.org/guides/nics/virtio.html#virtio-paths-selection-and-usage.
+
+ Two PMD parameters are added:
+ - dmas:	specify the used DMA device for a tx/rx queue.(Default: no queues enable asynchronous data path)
+ - dma-ring-size: DMA ring size.(Default: 4096).
+
+ Here is an example:
+ --vdev 'eth_vhost0,iface=./s0,dmas=[txq0@0000:00.01.0;rxq0@0000:00.01.1],dma-ring-size=2048'
+
+ For more about dpdk-testpmd sample, please refer to the DPDK documents:
+ https://doc.dpdk.org/guides/testpmd_app_ug/run_app.html
+ For virtio-user vdev parameter, you can refer to the DPDK documents:
+ https://doc.dpdk.org/guides/nics/virtio.html#virtio-paths-selection-and-usage.
 
 Prerequisites
 =============
 
 Software
 --------
-Trex:http://trex-tgn.cisco.com/trex/release/v2.26.tar.gz
+ Trex: http://trex-tgn.cisco.com/trex/release/v2.26.tar.gz
+ iperf
+ qemu: https://download.qemu.org/qemu-7.1.0.tar.xz
 
 General set up
 --------------
@@ -46,9 +58,9 @@ General set up
 
 	# CC=gcc meson --werror -Denable_kmods=True -Dlibdir=lib -Dexamples=all --default-library=static <dpdk build dir>
 	# ninja -C <dpdk build dir> -j 110
-	For example:
-	CC=gcc meson --werror -Denable_kmods=True -Dlibdir=lib -Dexamples=all --default-library=static x86_64-native-linuxapp-gcc
-	ninja -C x86_64-native-linuxapp-gcc -j 110
+ For example:
+ CC=gcc meson --werror -Denable_kmods=True -Dlibdir=lib -Dexamples=all --default-library=static x86_64-native-linuxapp-gcc
+ ninja -C x86_64-native-linuxapp-gcc -j 110
 
 3. Get the PCI device ID and DMA device ID of DUT, for example, 0000:18:00.0 is PCI device ID, 0000:00:04.0, 0000:00:04.1 is DMA device ID::
 
@@ -65,10 +77,10 @@ General set up
 
 4. Prepare tmpfs with 4K-pages::
 
-	mkdir /mnt/tmpfs_nohuge0
-	mkdir /mnt/tmpfs_nohuge1
-	mount tmpfs /mnt/tmpfs_nohuge0 -t tmpfs -o size=4G
-	mount tmpfs /mnt/tmpfs_nohuge1 -t tmpfs -o size=4G
+ mkdir /mnt/tmpfs_nohuge0
+ mkdir /mnt/tmpfs_nohuge1
+ mount tmpfs /mnt/tmpfs_nohuge0 -t tmpfs -o size=4G
+ mount tmpfs /mnt/tmpfs_nohuge1 -t tmpfs -o size=4G
 
 Test case
 =========
@@ -85,234 +97,239 @@ Common steps
 
 Test Case 1: Basic test vhost-user/virtio-user split ring vhost async operation using 4K-pages and cbdma enable
 ---------------------------------------------------------------------------------------------------------------
-This case tests basic functions of split ring virtio path when uses the asynchronous operations with CBDMA channels in 4K-pages memory environment and PVP vhost-user/virtio-user topology.
+This case tests basic functions of split ring virtio path when uses the asynchronous operations with CBDMA channels
+in 4K-pages memory environment and PVP vhost-user/virtio-user topology.
 
-1. Bind one port to vfio-pci, launch vhost::
 
-	./usertools/dpdk-devbind.py --bind=vfio-pci xx:xx.x
-	./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 3-4 -n 4 -m 1024 --no-huge --file-prefix=vhost -a 0000:18:00.0 -a 0000:00:04.0 \
-	--vdev 'net_vhost0,iface=/tmp/vhost-net,queues=1,dmas=[txq0;rxq0]' -- -i --no-numa --socket-num=0 --lcore-dma=[lcore4@0000:00:04.0]
-	testpmd>start
+1. Bind 1 CBDMA port and 1 NIC port to vfio-pci, launch vhost::
 
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 3-4 -n 4 -m 1024 --no-huge --file-prefix=vhost \
+ -a 0000:18:00.0 -a 0000:00:04.0 \
+ --vdev 'net_vhost0,iface=/tmp/vhost-net,queues=1,dmas=[txq0@0000:00:04.0;rxq0@0000:00:04.0]' \
+ -- -i --no-numa --socket-num=0
+ testpmd>start
 
 2. Launch virtio-user with 4K-pages::
 
-	./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 5-6 -n 4 --no-huge -m 1024 --file-prefix=virtio-user \
-	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/vhost-net,queues=1 -- -i
-	testpmd>start
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 5-6 -n 4 --no-huge -m 1024 --file-prefix=virtio-user \
+ --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/vhost-net,queues=1 \
+ -- -i
+ testpmd>start
 
 3. Send packet with packet generator with different packet size, includes [64, 128, 256, 512, 1024, 1518], check the throughput with below command::
 
-	testpmd>show port stats all
+ testpmd>show port stats all
 
 Test Case 2: Basic test vhost-user/virtio-user packed ring vhost async operation using 4K-pages and cbdma enable
 ----------------------------------------------------------------------------------------------------------------
-This case tests basic functions of packed ring virtio path when uses the asynchronous operations with CBDMA channels in 4K-pages memory environment and PVP vhost-user/virtio-user topology.
+This case tests basic functions of packed ring virtio path when uses the asynchronous operations with CBDMA channels
+in 4K-pages memory environment and PVP vhost-user/virtio-user topology.
 
-1. Bind one port to vfio-pci, launch vhost::
+1. Bind 1 CBDMA port and 1 NIC port to vfio-pci, launch vhost::
 
-	modprobe vfio-pci
-	./usertools/dpdk-devbind.py --bind=vfio-pci xx:xx.x
-	./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 3-4 -n 4 -m 1024 --no-huge --file-prefix=vhost -a 0000:18:00.0 -a 0000:00:04.0 \
-	--vdev 'net_vhost0,iface=/tmp/vhost-net,queues=1,dmas=[txq0;rxq0]' -- -i --no-numa --socket-num=0 --lcore-dma=[lcore4@0000:00:04.0]
-	testpmd>start
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 3-4 -n 4 -m 1024 --no-huge --file-prefix=vhost \
+ -a 0000:18:00.0 -a 0000:00:04.0 \
+ --vdev 'net_vhost0,iface=/tmp/vhost-net,queues=1,dmas=[txq0@0000:00:04.0;rxq0@0000:00:04.0]' \
+ -- -i --no-numa --socket-num=0
+ testpmd>start
 
 2. Launch virtio-user with 4K-pages::
 
-	./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 5-6 -n 4 --no-huge -m 1024 --file-prefix=virtio-user \
-	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/vhost-net,packed_vq=1,queues=1 -- -i
-	testpmd>start
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 5-6 -n 4 --no-huge -m 1024 --file-prefix=virtio-user \
+ --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/vhost-net,packed_vq=1,queues=1 \
+ -- -i
+ testpmd>start
 
 3. Send packet with packet generator with different packet size, includes [64, 128, 256, 512, 1024, 1518], check the throughput with below command::
 
-	testpmd>show port stats all
+ testpmd>show port stats all
 
 Test Case 3: VM2VM vhost-user/virtio-net split ring vhost async operation test with tcp traffic using 4K-pages and cbdma enable
 -------------------------------------------------------------------------------------------------------------------------------
-This case test the function of Vhost TSO in the topology of vhost-user/virtio-net split ring mergeable path by verifing the TSO/cksum in the TCP/IP stack
-when vhost uses the asynchronous operations with CBDMA channels in 4K-pages memory environment.
+This case tests the function of Vhost TSO in the topology of vhost-user/virtio-net split ring mergeable path by verifying the
+TSO/cksum in the TCP/IP stack when vhost uses the asynchronous operations with CBDMA channels in 4K-pages memory environment.
 
-1. Bind 2 CBDMA channels to vfio-pci, then launch vhost by below command::
+1. Bind 2 CBDMA ports to vfio-pci, then launch vhost by below command::
 
-	rm -rf vhost-net*
-	./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-4 -n 4 --no-huge -m 1024 --file-prefix=vhost -a 0000:00:04.0 -a 0000:00:04.1 \
-	--vdev 'net_vhost0,iface=vhost-net0,queues=1,tso=1,dmas=[txq0;rxq0]' \
-	--vdev 'net_vhost1,iface=vhost-net1,queues=1,tso=1,dmas=[txq0;rxq0]' \
-	--iova=va -- -i --nb-cores=2 --txd=1024 --rxd=1024 --lcore-dma=[lcore3@0000:00:04.0,lcore4@0000:00:04.1]
-	testpmd>start
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-4 -n 4 --no-huge -m 1024 --file-prefix=vhost \
+ -a 0000:00:04.0 -a 0000:00:04.1 \
+ --vdev 'net_vhost0,iface=vhost-net0,queues=1,tso=1,dmas=[txq0@0000:00:04.0;rxq0@0000:00:04.0],dma-ring-size=2048' \
+ --vdev 'net_vhost1,iface=vhost-net1,queues=1,tso=1,dmas=[txq0@0000:00:04.1;rxq0@0000:00:04.1],dma-ring-size=2048' \
+ --iova=va -- -i --nb-cores=2 --txd=1024 --rxd=1024
+ testpmd>start
 
 2. Launch VM1 and VM2::
 
-	taskset -c 32 qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 1 -m 4096 \
-	-object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_nohuge0,share=on \
-	-numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04.img \
-	-chardev socket,path=/tmp/vm1_qga0.sock,server,nowait,id=vm1_qga0 -device virtio-serial \
-	-device virtserialport,chardev=vm1_qga0,name=org.qemu.guest_agent.2 -daemonize \
-	-monitor unix:/tmp/vm1_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
-	-netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \
-	-chardev socket,id=char0,path=./vhost-net0 \
-	-netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
-	-device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on -vnc :10
-
-	taskset -c 33 qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 1 -m 4096 \
-	-object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_nohuge1,share=on \
-	-numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04-2.img \
-	-chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
-	-device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
-	-monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
-	-netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \
-	-chardev socket,id=char0,path=./vhost-net1 \
-	-netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
-	-device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on -vnc :12
+ taskset -c 32 qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 1 -m 4096 \
+ -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_nohuge0,share=on \
+ -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu22-04.img \
+ -chardev socket,path=/tmp/vm1_qga0.sock,server,nowait,id=vm1_qga0 -device virtio-serial \
+ -device virtserialport,chardev=vm1_qga0,name=org.qemu.guest_agent.2 -daemonize \
+ -monitor unix:/tmp/vm1_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+ -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \
+ -chardev socket,id=char0,path=./vhost-net0 \
+ -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
+ -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on -vnc :10
+
+ taskset -c 33 qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 1 -m 4096 \
+ -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_nohuge1,share=on \
+ -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu22-04-2.img \
+ -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
+ -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
+ -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+ -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \
+ -chardev socket,id=char0,path=./vhost-net1 \
+ -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
+ -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on -vnc :12
 
 3. On VM1, set virtio device IP and run arp protocol::
 
-	ifconfig ens5 1.1.1.2
-	arp -s 1.1.1.8 52:54:00:00:00:02
+ ifconfig ens5 1.1.1.2
+ arp -s 1.1.1.8 52:54:00:00:00:02
 
 4. On VM2, set virtio device IP and run arp protocol::
 
-	ifconfig ens5 1.1.1.8
-	arp -s 1.1.1.2 52:54:00:00:00:01
+ ifconfig ens5 1.1.1.8
+ arp -s 1.1.1.2 52:54:00:00:00:01
 
 5. Check the iperf performance between two VMs by below commands::
 
-	Under VM1, run: `iperf -s -i 1`
-	Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`
+ Under VM1, run: `iperf -s -i 1`
+ Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`
 
 6. Check 2VMs can receive and send big packets to each other::
 
-	testpmd>show port xstats all
-	Port 0 should have tx packets above 1522
-	Port 1 should have rx packets above 1522
+ testpmd>show port xstats all
+ Port 0 should have tx packets above 1522
+ Port 1 should have rx packets above 1522
 
 Test Case 4: VM2VM vhost-user/virtio-net packed ring vhost async operation test with tcp traffic using 4K-pages and cbdma enable
 --------------------------------------------------------------------------------------------------------------------------------
-This case test the function of Vhost TSO in the topology of vhost-user/virtio-net packed ring mergeable path by verifing the TSO/cksum in the TCP/IP stack
-when vhost uses the asynchronous operations with CBDMA channels in 4K-pages memory environment.
+This case tests the function of Vhost TSO in the topology of vhost-user/virtio-net packed ring mergeable path by verifying the
+TSO/cksum in the TCP/IP stack when vhost uses the asynchronous operations with CBDMA channels in 4K-pages memory environment.
 
-1. Bind 2 CBDMA channels to vfio-pci, then launch vhost by below command::
+1. Bind 2 CBDMA ports to vfio-pci, then launch vhost by below command::
 
-	rm -rf vhost-net*
-	./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-4 -n 4 --no-huge -m 1024 --file-prefix=vhost -a 0000:00:04.0 -a 0000:00:04.1 \
-	--vdev 'net_vhost0,iface=vhost-net0,queues=1,tso=1,dmas=[txq0;rxq0]' \
-	--vdev 'net_vhost1,iface=vhost-net1,queues=1,tso=1,dmas=[txq0;rxq0]' \
-	--iova=va -- -i --nb-cores=2 --txd=1024 --rxd=1024 --lcore-dma=[lcore3@0000:00:04.0,lcore4@0000:00:04.1]
-	testpmd>start
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-4 -n 4 --no-huge -m 1024 --file-prefix=vhost \
+ -a 0000:00:04.0 -a 0000:00:04.1 \
+ --vdev 'net_vhost0,iface=vhost-net0,queues=1,tso=1,dmas=[txq0@0000:00:04.0;rxq0@0000:00:04.0],dma-ring-size=2048' \
+ --vdev 'net_vhost1,iface=vhost-net1,queues=1,tso=1,dmas=[txq0@0000:00:04.1;rxq0@0000:00:04.1],dma-ring-size=2048' \
+ --iova=va -- -i --nb-cores=2 --txd=1024 --rxd=1024
+ testpmd>start
 
 2. Launch VM1 and VM2::
 
-	taskset -c 32 qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 1 -m 4096 \
-	-object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_nohuge0,share=on \
-	-numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04.img \
-	-chardev socket,path=/tmp/vm1_qga0.sock,server,nowait,id=vm1_qga0 -device virtio-serial \
-	-device virtserialport,chardev=vm1_qga0,name=org.qemu.guest_agent.2 -daemonize \
-	-monitor unix:/tmp/vm1_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
-	-netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \
-	-chardev socket,id=char0,path=./vhost-net0 \
-	-netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
-	-device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on -vnc :10
-
-	taskset -c 33 qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 1 -m 4096 \
-	-object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_nohuge1,share=on \
-	-numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04-2.img \
-	-chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
-	-device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
-	-monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
-	-netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \
-	-chardev socket,id=char0,path=./vhost-net1 \
-	-netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
-	-device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on -vnc :12
+ taskset -c 32 qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 1 -m 4096 \
+ -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_nohuge0,share=on \
+ -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu22-04.img \
+ -chardev socket,path=/tmp/vm1_qga0.sock,server,nowait,id=vm1_qga0 -device virtio-serial \
+ -device virtserialport,chardev=vm1_qga0,name=org.qemu.guest_agent.2 -daemonize \
+ -monitor unix:/tmp/vm1_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+ -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \
+ -chardev socket,id=char0,path=./vhost-net0 \
+ -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
+ -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on -vnc :10
+
+ taskset -c 33 qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 1 -m 4096 \
+ -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_nohuge1,share=on \
+ -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu22-04-2.img \
+ -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
+ -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
+ -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+ -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \
+ -chardev socket,id=char0,path=./vhost-net1 \
+ -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
+ -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on -vnc :12
 
 3. On VM1, set virtio device IP and run arp protocol::
 
-	ifconfig ens5 1.1.1.2
-	arp -s 1.1.1.8 52:54:00:00:00:02
+ ifconfig ens5 1.1.1.2
+ arp -s 1.1.1.8 52:54:00:00:00:02
 
 4. On VM2, set virtio device IP and run arp protocol::
 
-	ifconfig ens5 1.1.1.8
-	arp -s 1.1.1.2 52:54:00:00:00:01
+ ifconfig ens5 1.1.1.8
+ arp -s 1.1.1.2 52:54:00:00:00:01
 
 5. Check the iperf performance between two VMs by below commands::
 
-	Under VM1, run: `iperf -s -i 1`
-	Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`
+ Under VM1, run: `iperf -s -i 1`
+ Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`
 
 6. Check 2VMs can receive and send big packets to each other::
 
-	testpmd>show port xstats all
-	Port 0 should have tx packets above 1522
-	Port 1 should have rx packets above 1522
+ testpmd>show port xstats all
+ Port 0 should have tx packets above 1522
+ Port 1 should have rx packets above 1522
 
 Test Case 5: vm2vm vhost/virtio-net split ring multi queues using 4K-pages and cbdma enable
 -------------------------------------------------------------------------------------------
-This case uses iperf and scp to test the payload of large packet (larger than 1MB) is valid after packets forwarding in vm2vm vhost-user/virtio-net
-split ring mergeable path when vhost uses the asynchronous operations with CBDMA channels in 4K-pages memory environment. The dynamic change of multi-queues number is also tested.
+This case uses iperf and scp to test the payload of large packet (larger than 1MB) is valid
+after packets forwarding in vm2vm vhost-user/virtio-net split ring mergeable path when vhost
+uses the asynchronous operations with CBDMA channels in 4K-pages memory environment.
+The dynamic change of multi-queues number is also tested.
 
-1. Bind one port to vfio-pci, launch vhost::
+1. Bind 4 CBDMA ports to vfio-pci, launch vhost::
 
-	./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --no-huge -m 1024 --file-prefix=vhost -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \
-	-a 0000:80:04.0 -a 0000:80:04.1 -a 0000:80:04.2 -a 0000:80:04.3 -a 0000:80:04.4 -a 0000:80:04.5 -a 0000:80:04.6 -a 0000:80:04.7 \
-	--vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7]' \
-	--vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7]' \
-	--iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8 \
-	--lcore-dma=[lcore2@0000:00:04.0,lcore2@0000:00:04.1,lcore2@0000:00:04.2,lcore2@0000:00:04.3,lcore2@0000:00:04.4,lcore2@0000:00:04.5,lcore3@0000:00:04.6,lcore3@0000:00:04.7,lcore4@0000:80:04.0,lcore4@0000:80:04.1,lcore4@0000:80:04.2,lcore4@0000:80:04.3,lcore4@0000:80:04.4,lcore4@0000:80:04.5,lcore4@0000:80:04.6,lcore5@0000:80:04.7]
-	testpmd>start
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --no-huge -m 1024 --file-prefix=vhost \
+ -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 \
+ --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.0;txq2@0000:00:04.0;txq3@0000:00:04.0;txq4@0000:00:04.1;txq5@0000:00:04.1;txq6@0000:00:04.1;txq7@0000:00:04.1]' \
+ --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0@0000:00:04.2;txq1@0000:00:04.2;txq2@0000:00:04.2;txq3@0000:00:04.2;txq4@0000:00:04.3;txq5@0000:00:04.3;txq6@0000:00:04.3;txq7@0000:00:04.3]' \
+ --iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8
+ testpmd>start
 
 2. Launch VM qemu::
 
-	taskset -c 32 qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 8 -m 4096 \
-	-object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_nohuge0,share=on \
-	-numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04.img \
-	-chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
-	-device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
-	-monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
-	-netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \
-	-chardev socket,id=char0,path=./vhost-net0,server \
-	-netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \
-	-device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :10
-
-	taskset -c 40 qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 8 -m 4096 \
-	-object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_nohuge1,share=on \
-	-numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04-2.img \
-	-chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
-	-device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
-	-monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
-	-netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \
-	-chardev socket,id=char0,path=./vhost-net1,server \
-	-netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \
-	-device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :12
+ taskset -c 32 qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 8 -m 4096 \
+ -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_nohuge0,share=on \
+ -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu22-04.img \
+ -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
+ -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
+ -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+ -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \
+ -chardev socket,id=char0,path=./vhost-net0,server \
+ -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \
+ -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :10
+
+ taskset -c 40 qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 8 -m 4096 \
+ -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_nohuge1,share=on \
+ -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu22-04-2.img \
+ -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
+ -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
+ -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+ -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \
+ -chardev socket,id=char0,path=./vhost-net1,server \
+ -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \
+ -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :12
 
 3. On VM1, set virtio device IP and run arp protocol::
 
-	ethtool -L ens5 combined 8
-	ifconfig ens5 1.1.1.2
-	arp -s 1.1.1.8 52:54:00:00:00:02
+ ethtool -L ens5 combined 8
+ ifconfig ens5 1.1.1.2
+ arp -s 1.1.1.8 52:54:00:00:00:02
 
 4. On VM2, set virtio device IP and run arp protocol::
 
-	ethtool -L ens5 combined 8
-	ifconfig ens5 1.1.1.8
-	arp -s 1.1.1.2 52:54:00:00:00:01
+ ethtool -L ens5 combined 8
+ ifconfig ens5 1.1.1.8
+ arp -s 1.1.1.2 52:54:00:00:00:01
 
 5. Scp 1MB file from VM1 to VM2::
 
-	Under VM1, run: `scp [xxx] root@1.1.1.8:/` [xxx] is the file name
+ Under VM1, run: `scp [xxx] root@1.1.1.8:/` [xxx] is the file name
 
 6. Check the iperf performance between two VMs by below commands::
 
-	Under VM1, run: `iperf -s -i 1`
-	Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`
+ Under VM1, run: `iperf -s -i 1`
+ Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`
 
 7. Quit and relaunch vhost w/ diff CBDMA channels::
 
-	./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --no-huge -m 1024 --file-prefix=vhost -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \
-	-a 0000:80:04.0 -a 0000:80:04.1 -a 0000:80:04.2 -a 0000:80:04.3 -a 0000:80:04.4 -a 0000:80:04.5 -a 0000:80:04.6 -a 0000:80:04.7 \
-	--vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \
-	--vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \
-	--iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8 \
-	--lcore-dma=[lcore2@0000:00:04.0,lcore2@0000:00:04.1,lcore2@0000:00:04.2,lcore2@0000:00:04.3,lcore3@0000:00:04.0,lcore3@0000:00:04.2,lcore3@0000:00:04.4,lcore3@0000:00:04.5,lcore3@0000:00:04.6,lcore3@0000:00:04.7,lcore4@0000:00:04.1,lcore4@0000:00:04.3,lcore4@0000:80:04.0,lcore4@0000:80:04.1,lcore4@0000:80:04.2,lcore4@0000:80:04.3,lcore4@0000:80:04.4,lcore4@0000:80:04.5,lcore4@0000:80:04.6,lcore5@0000:80:04.7]
-	testpmd>start
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --no-huge -m 1024 --file-prefix=vhost \
+ -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 \
+ --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.0;txq2@0000:00:04.0;txq3@0000:00:04.0;txq4@0000:00:04.0;txq5@0000:00:04.0;rxq2@0000:00:04.1;rxq3@0000:00:04.1;rxq4@0000:00:04.1;rxq5@0000:00:04.1;rxq6@0000:00:04.1;rxq7@0000:00:04.1],dma-ring-size=1024' \
+ --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0@0000:00:04.2;txq1@0000:00:04.2;txq2@0000:00:04.2;txq3@0000:00:04.2;txq4@0000:00:04.2;txq5@0000:00:04.2;rxq2@0000:00:04.3;rxq3@0000:00:04.3;rxq4@0000:00:04.3;rxq5@0000:00:04.3;rxq6@0000:00:04.3;rxq7@0000:00:04.3],dma-ring-size=1024' \
+ --iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8
+ testpmd>start
 
 8. Rerun step 5-6.
 
@@ -325,11 +342,11 @@ split ring mergeable path when vhost uses the asynchronous operations with CBDMA
 
 10. On VM1, set virtio device::
 
-	ethtool -L ens5 combined 4
+ ethtool -L ens5 combined 4
 
 11. On VM2, set virtio device::
 
-	ethtool -L ens5 combined 4
+ ethtool -L ens5 combined 4
 
 12. Scp 1MB file from VM1 to VM2::
 
@@ -342,227 +359,220 @@ split ring mergeable path when vhost uses the asynchronous operations with CBDMA
 
 14. Quit and relaunch vhost with 1 queue::
 
-	./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --no-huge -m 1024 --file-prefix=vhost \
-	--vdev 'net_vhost0,iface=vhost-net0,client=1,queues=4' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=4' \
-	-- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=1 --txq=1
-	testpmd>start
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --no-huge -m 1024 --file-prefix=vhost \
+ --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=4' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=4' \
+ -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=1 --txq=1
+ testpmd>start
 
 15. On VM1, set virtio device::
 
-	ethtool -L ens5 combined 1
+ ethtool -L ens5 combined 1
 
 16. On VM2, set virtio device::
 
-	ethtool -L ens5 combined 1
+ ethtool -L ens5 combined 1
 
 17. Scp 1MB file from VM1 to VM2, check packets can be forwarded successfully by scp::
 
-	Under VM1, run: `scp [xxx] root@1.1.1.8:/` [xxx] is the file name
+ Under VM1, run: `scp [xxx] root@1.1.1.8:/` [xxx] is the file name
 
 18. Check the iperf performance, ensure queue0 can work from vhost side::
 
-	Under VM1, run: `iperf -s -i 1`
-	Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`
+ Under VM1, run: `iperf -s -i 1`
+ Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`
 
 Test Case 6: vm2vm vhost/virtio-net packed ring multi queues using 4K-pages and cbdma enable
 --------------------------------------------------------------------------------------------
-This case uses iperf and scp to test the payload of large packet (larger than 1MB) is valid after packets forwarding in vm2vm vhost-user/virtio-net
-packed ring mergeable path when vhost uses the asynchronous operations with CBDMA channels in 4K-pages memory environment.
+This case uses iperf and scp to test the payload of large packet (larger than 1MB) is valid
+after packets forwarding in vm2vm vhost-user/virtio-net packed ring mergeable path when vhost
+uses the asynchronous operations with CBDMA channels in 4K-pages memory environment.
 
-1. Bind one port to vfio-pci, launch vhost::
+1. Bind 2 CBDMA ports to vfio-pci, launch vhost::
 
-	./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --no-huge -m 1024 --file-prefix=vhost -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \
-	-a 0000:80:04.0 -a 0000:80:04.1 -a 0000:80:04.2 -a 0000:80:04.3 -a 0000:80:04.4 -a 0000:80:04.5 -a 0000:80:04.6 -a 0000:80:04.7 \
-	--vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7]' \
-	--vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7]' \
-	--iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8 \
-	--lcore-dma=[lcore2@0000:00:04.0,lcore2@0000:00:04.1,lcore2@0000:00:04.2,lcore2@0000:00:04.3,lcore2@0000:00:04.4,lcore2@0000:00:04.5,lcore3@0000:00:04.6,lcore3@0000:00:04.7,lcore4@0000:80:04.0,lcore4@0000:80:04.1,lcore4@0000:80:04.2,lcore4@0000:80:04.3,lcore4@0000:80:04.4,lcore4@0000:80:04.5,lcore4@0000:80:04.6,lcore5@0000:80:04.7]
-	testpmd>start
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --no-huge -m 1024 --file-prefix=vhost \
+ -a 0000:00:04.0 -a 0000:00:04.1 \
+ --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,tso=1,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.0;txq2@0000:00:04.0;txq3@0000:00:04.0;txq4@0000:00:04.1;txq5@0000:00:04.1;txq6@0000:00:04.1;txq7@0000:00:04.1]' \
+ --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,tso=1,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.0;txq2@0000:00:04.0;txq3@0000:00:04.0;txq4@0000:00:04.1;txq5@0000:00:04.1;txq6@0000:00:04.1;txq7@0000:00:04.1]' \
+ --iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8
+ testpmd>start
 
 2. Launch VM qemu::
 
-	taskset -c 32 qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 8 -m 4096 \
-	-object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_nohuge0,share=on \
-	-numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04.img \
-	-chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
-	-device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
-	-monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
-	-netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \
-	-chardev socket,id=char0,path=./vhost-net0,server \
-	-netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \
-	-device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on -vnc :10
-
-	taskset -c 40 qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 8 -m 4096 \
-	-object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_nohuge1,share=on \
-	-numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04-2.img \
-	-chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
-	-device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
-	-monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
-	-netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \
-	-chardev socket,id=char0,path=./vhost-net1,server \
-	-netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \
-	-device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on -vnc :12
+ taskset -c 32 qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 8 -m 4096 \
+ -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_nohuge0,share=on \
+ -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu22-04.img \
+ -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
+ -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
+ -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+ -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \
+ -chardev socket,id=char0,path=./vhost-net0,server \
+ -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \
+ -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on -vnc :10
+
+ taskset -c 40 qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 8 -m 4096 \
+ -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_nohuge1,share=on \
+ -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu22-04-2.img \
+ -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
+ -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
+ -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+ -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \
+ -chardev socket,id=char0,path=./vhost-net1,server \
+ -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \
+ -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on -vnc :12
 
 3. On VM1, set virtio device IP and run arp protocol::
 
-	ethtool -L ens5 combined 8
-	ifconfig ens5 1.1.1.2
-	arp -s 1.1.1.8 52:54:00:00:00:02
+ ethtool -L ens5 combined 8
+ ifconfig ens5 1.1.1.2
+ arp -s 1.1.1.8 52:54:00:00:00:02
 
 4. On VM2, set virtio device IP and run arp protocol::
 
-	ethtool -L ens5 combined 8
-	ifconfig ens5 1.1.1.8
-	arp -s 1.1.1.2 52:54:00:00:00:01
+ ethtool -L ens5 combined 8
+ ifconfig ens5 1.1.1.8
+ arp -s 1.1.1.2 52:54:00:00:00:01
 
 5. Scp 1MB file from VM1 to VM2::
 
-	Under VM1, run: `scp [xxx] root@1.1.1.8:/` [xxx] is the file name
+ Under VM1, run: `scp [xxx] root@1.1.1.8:/` [xxx] is the file name
 
 6. Check the iperf performance between two VMs by below commands::
 
-	Under VM1, run: `iperf -s -i 1`
-	Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`
+ Under VM1, run: `iperf -s -i 1`
+ Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`
 
 Test Case 7: vm2vm vhost/virtio-net split ring multi queues using 1G/4k-pages and cbdma enable
 ----------------------------------------------------------------------------------------------
-This case uses iperf and scp to test the payload of large packet (larger than 1MB) is valid after packets forwarding in vm2vm vhost-user/virtio-net
-split ring mergeable path when vhost uses the asynchronous operations with CBDMA channels,the back-end is in 1G-pages memory environment and the front-end is in 4k-pages memory environment.
+This case uses iperf and scp to test the payload of large packet (larger than 1MB) is valid
+after packets forwarding in vm2vm vhost-user/virtio-net split ring mergeable path when vhost
+uses the asynchronous operations with CBDMA channels, the back-end is in 1G-pages memory
+environment and the front-end is in 4k-pages memory environment.
 
-1. Bind 16 CBDMA channel to vfio-pci, launch vhost::
+1. Bind 4 CBDMA ports to vfio-pci, launch vhost::
 
-	./usertools/dpdk-devbind.py --bind=vfio-pci 0000:80:04.0 0000:80:04.1 0000:80:04.2 0000:80:04.3 0000:80:04.4 0000:80:04.5 0000:80:04.6 0000:80:04.7 \
-	0000:00:04.0 0000:00:04.1 0000:00:04.2 0000:00:04.3 0000:00:04.4 0000:00:04.5 0000:00:04.6 0000:00:04.7
-
-	./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 29-33 -n 4 -m 1024 --file-prefix=vhost \
-	-a 0000:80:04.0 -a 0000:80:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \
-	-a 0000:80:04.0 -a 0000:80:04.1 -a 0000:80:04.2 -a 0000:80:04.3 -a 0000:80:04.4 -a 0000:80:04.5 -a 0000:80:04.6 -a 0000:80:04.7 \
-	--vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \
-	--vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \
-	--iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8 \
-	--lcore-dma=[lcore30@0000:80:04.0,lcore30@0000:80:04.1,lcore30@0000:00:04.2,lcore30@0000:00:04.3,lcore31@0000:00:04.4,lcore31@0000:00:04.5,lcore31@0000:00:04.6,lcore31@0000:00:04.7,lcore32@0000:80:04.0,lcore32@0000:80:04.1,lcore32@0000:80:04.2,lcore32@0000:80:04.3,lcore33@0000:80:04.4,lcore33@0000:80:04.5,lcore33@0000:80:04.6,lcore33@0000:80:04.7]
-	testpmd>start
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 29-33 -n 4 -m 1024 --file-prefix=vhost \
+ -a 0000:00:04.0 -a 0000:00:04.1 \
+ --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,tso=1,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.0;txq2@0000:00:04.0;txq3@0000:00:04.0;txq4@0000:00:04.0;txq5@0000:00:04.0;rxq2@0000:00:04.1;rxq3@0000:00:04.1;rxq4@0000:00:04.1;rxq5@0000:00:04.1;rxq6@0000:00:04.1;rxq7@0000:00:04.1],dma-ring-size=1024' \
+ --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,tso=1,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.0;txq2@0000:00:04.0;txq3@0000:00:04.0;txq4@0000:00:04.0;txq5@0000:00:04.0;rxq2@0000:00:04.1;rxq3@0000:00:04.1;rxq4@0000:00:04.1;rxq5@0000:00:04.1;rxq6@0000:00:04.1;rxq7@0000:00:04.1],dma-ring-size=1024' \
+ --iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8
+ testpmd>start
 
 2. Launch VM qemu::
 
-	taskset -c 20,21,22,23,24,25,26,27 /home/QEMU/qemu-6.2.0/bin/qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 8 -m 4096 \
-	-object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_nohuge0,share=on \
-	-numa node,memdev=mem -mem-prealloc -drive file=/home/image/ubuntu2004.img \
-	-chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
-	-device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
-	-monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
-	-netdev user,id=nttsip1,hostfwd=tcp:10.239.251.220:6000-:22 \
-	-chardev socket,id=char0,path=./vhost-net0,server \
-	-netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \
-	-device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :10
-
-	taskset -c 48,49,50,51,52,53,54,55 /home/QEMU/qemu-6.2.0/bin/qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 8 -m 4096 \
-	-object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_nohuge1,share=on \
-	-numa node,memdev=mem -mem-prealloc -drive file=/home/image/ubuntu2004_2.img \
-	-chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
-	-device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
-	-monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
-	-netdev user,id=nttsip1,hostfwd=tcp:10.239.251.220:6001-:22 \
-	-chardev socket,id=char0,path=./vhost-net1,server \
-	-netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \
-	-device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :12
+ taskset -c 20,21,22,23,24,25,26,27 /home/QEMU/qemu-6.2.0/bin/qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 8 -m 4096 \
+ -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_nohuge0,share=on \
+ -numa node,memdev=mem -mem-prealloc -drive file=/home/image/ubuntu2004.img \
+ -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
+ -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
+ -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+ -netdev user,id=nttsip1,hostfwd=tcp:10.239.251.220:6000-:22 \
+ -chardev socket,id=char0,path=./vhost-net0,server \
+ -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \
+ -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :10
+
+ taskset -c 48,49,50,51,52,53,54,55 /home/QEMU/qemu-6.2.0/bin/qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 8 -m 4096 \
+ -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_nohuge1,share=on \
+ -numa node,memdev=mem -mem-prealloc -drive file=/home/image/ubuntu2004_2.img \
+ -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
+ -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
+ -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+ -netdev user,id=nttsip1,hostfwd=tcp:10.239.251.220:6001-:22 \
+ -chardev socket,id=char0,path=./vhost-net1,server \
+ -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \
+ -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :12
 
 3. On VM1, set virtio device IP and run arp protocol::
 
-	ethtool -L ens5 combined 8
-	ifconfig ens5 1.1.1.2
-	arp -s 1.1.1.8 52:54:00:00:00:02
+ ethtool -L ens5 combined 8
+ ifconfig ens5 1.1.1.2
+ arp -s 1.1.1.8 52:54:00:00:00:02
 
 4. On VM2, set virtio device IP and run arp protocol::
 
-	ethtool -L ens5 combined 8
-	ifconfig ens5 1.1.1.8
-	arp -s 1.1.1.2 52:54:00:00:00:01
+ ethtool -L ens5 combined 8
+ ifconfig ens5 1.1.1.8
+ arp -s 1.1.1.2 52:54:00:00:00:01
 
 5. Scp 1MB file from VM1 to VM2::
 
-	Under VM1, run: `scp [xxx] root@1.1.1.8:/` [xxx] is the file name
+ Under VM1, run: `scp [xxx] root@1.1.1.8:/` [xxx] is the file name
 
 6. Check the iperf performance between two VMs by below commands::
 
-	Under VM1, run: `iperf -s -i 1`
-	Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`
+ Under VM1, run: `iperf -s -i 1`
+ Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`
 
 7. Quit and relaunch vhost w/ diff CBDMA channels::
 
-	./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --no-huge -m 1024 --file-prefix=vhost -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \
-	-a 0000:80:04.0 -a 0000:80:04.1 -a 0000:80:04.2 -a 0000:80:04.3 -a 0000:80:04.4 -a 0000:80:04.5 -a 0000:80:04.6 -a 0000:80:04.7 \
-	--vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7;rxq0;rxq1;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \
-	--vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7;rxq0;rxq1;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \
-	--iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8 \
-	--lcore-dma=[lcore2@0000:00:04.0,lcore2@0000:00:04.1,lcore2@0000:00:04.2,lcore2@0000:00:04.3,lcore3@0000:00:04.0,lcore3@0000:00:04.2,lcore3@0000:00:04.4,lcore3@0000:00:04.5,lcore3@0000:00:04.6,lcore3@0000:00:04.7,lcore4@0000:00:04.1,lcore4@0000:00:04.3,lcore4@0000:80:04.0,lcore4@0000:80:04.1,lcore4@0000:80:04.2,lcore4@0000:80:04.3,lcore4@0000:80:04.4,lcore4@0000:80:04.5,lcore4@0000:80:04.6,lcore5@0000:80:04.7]
-	testpmd>start
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --no-huge -m 1024 --file-prefix=vhost \
+ -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 \
+ --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.0;txq2@0000:00:04.0;txq3@0000:00:04.0;txq4@0000:00:04.1;txq5@0000:00:04.1;txq6@0000:00:04.1;txq7@0000:00:04.1;rxq0@0000:00:04.2;rxq1@0000:00:04.2;rxq2@0000:00:04.2;rxq3@0000:00:04.2;rxq4@0000:00:04.3;rxq5@0000:00:04.3;rxq6@0000:00:04.3;rxq7@0000:00:04.3]' \
+ --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.0;txq2@0000:00:04.0;txq3@0000:00:04.0;txq4@0000:00:04.1;txq5@0000:00:04.1;txq6@0000:00:04.1;txq7@0000:00:04.1;rxq0@0000:00:04.2;rxq1@0000:00:04.2;rxq2@0000:00:04.2;rxq3@0000:00:04.2;rxq4@0000:00:04.3;rxq5@0000:00:04.3;rxq6@0000:00:04.3;rxq7@0000:00:04.3]' \
+ --iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8
+ testpmd>start
 
 8. Rerun step 5-6.
 
 Test Case 8: vm2vm vhost/virtio-net split packed ring multi queues with 1G/4k-pages and cbdma enable
 ----------------------------------------------------------------------------------------------------
-This case uses iperf and scp to test the payload of large packet (larger than 1MB) is valid after packets forwarding in vm2vm vhost-user/virtio-net
-split and packed ring mergeable path when vhost uses the asynchronous operations with CBDMA channels,the back-end is in 1G-pages memory environment and the front-end is in 4k-pages memory environment.
+This case uses iperf and scp to test the payload of large packet (larger than 1MB) is valid after
+packets forwarding in vm2vm vhost-user/virtio-net split and packed ring mergeable path when vhost
+uses the asynchronous operations with CBDMA channels, the back-end is in 1G-pages memory environment
+and the front-end is in 4k-pages memory environment.
 
-1. Bind 16 CBDMA channel to vfio-pci, launch vhost::
+1. Bind 8 CBDMA ports to vfio-pci, launch vhost::
 
-	./usertools/dpdk-devbind.py --bind=vfio-pci 0000:80:04.0 0000:80:04.1 0000:80:04.2 0000:80:04.3 0000:80:04.4 0000:80:04.5 0000:80:04.6 0000:80:04.7 \
-	0000:00:04.0 0000:00:04.1 0000:00:04.2 0000:00:04.3 0000:00:04.4 0000:00:04.5 0000:00:04.6 0000:00:04.7
-
-	./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 29-33 -n 4 -m 1024 --file-prefix=vhost \
-	-a 0000:80:04.0 -a 0000:80:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \
-	-a 0000:80:04.0 -a 0000:80:04.1 -a 0000:80:04.2 -a 0000:80:04.3 -a 0000:80:04.4 -a 0000:80:04.5 -a 0000:80:04.6 -a 0000:80:04.7 \
-	--vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \
-	--vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \
-	--iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8 \
-	--lcore-dma=[lcore30@0000:80:04.0,lcore30@0000:80:04.1,lcore30@0000:00:04.2,lcore30@0000:00:04.3,lcore31@0000:00:04.4,lcore31@0000:00:04.5,lcore31@0000:00:04.6,lcore31@0000:00:04.7,lcore32@0000:80:04.0,lcore32@0000:80:04.1,lcore32@0000:80:04.2,lcore32@0000:80:04.3,lcore33@0000:80:04.4,lcore33@0000:80:04.5,lcore33@0000:80:04.6,lcore33@0000:80:04.7]
-	testpmd>start
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 29-33 -n 4 -m 1024 --file-prefix=vhost \
+ -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \
+ --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.0;txq2@0000:00:04.0;txq3@0000:00:04.0;txq4@0000:00:04.1;txq5@0000:00:04.1;rxq2@0000:00:04.2;rxq3@0000:00:04.2;rxq4@0000:00:04.3;rxq5@0000:00:04.3;rxq6@0000:00:04.3;rxq7@0000:00:04.3]' \
+ --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0@0000:00:04.4;txq1@0000:00:04.4;txq2@0000:00:04.4;txq3@0000:00:04.4;txq4@0000:00:04.5;txq5@0000:00:04.5;rxq2@0000:00:04.6;rxq3@0000:00:04.6;rxq4@0000:00:04.6;rxq5@0000:00:04.6;rxq6@0000:00:04.7;rxq7@0000:00:04.7]' \
+ --iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8
+ testpmd>start
 
 2. Launch VM qemu::
 
-	taskset -c 20,21,22,23,24,25,26,27 /home/QEMU/qemu-6.2.0/bin/qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 8 -m 4096 \
-	-object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_nohuge0,share=on \
-	-numa node,memdev=mem -mem-prealloc -drive file=/home/image/ubuntu2004.img \
-	-chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
-	-device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
-	-monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
-	-netdev user,id=nttsip1,hostfwd=tcp:10.239.251.220:6000-:22 \
-	-chardev socket,id=char0,path=./vhost-net0,server \
-	-netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \
-	-device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :10
-
-	taskset -c 48,49,50,51,52,53,54,55 /home/QEMU/qemu-6.2.0/bin/qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 8 -m 4096 \
-	-object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_nohuge1,share=on \
-	-numa node,memdev=mem -mem-prealloc -drive file=/home/image/ubuntu2004_2.img \
-	-chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
-	-device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
-	-monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
-	-netdev user,id=nttsip1,hostfwd=tcp:10.239.251.220:6001-:22 \
-	-chardev socket,id=char0,path=./vhost-net1,server \
-	-netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \
-	-device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on -vnc :12
+ taskset -c 20,21,22,23,24,25,26,27 /home/QEMU/qemu-6.2.0/bin/qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 8 -m 4096 \
+ -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_nohuge0,share=on \
+ -numa node,memdev=mem -mem-prealloc -drive file=/home/image/ubuntu2004.img \
+ -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
+ -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
+ -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+ -netdev user,id=nttsip1,hostfwd=tcp:10.239.251.220:6000-:22 \
+ -chardev socket,id=char0,path=./vhost-net0,server \
+ -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \
+ -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :10
+
+ taskset -c 48,49,50,51,52,53,54,55 /home/QEMU/qemu-6.2.0/bin/qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 8 -m 4096 \
+ -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_nohuge1,share=on \
+ -numa node,memdev=mem -mem-prealloc -drive file=/home/image/ubuntu2004_2.img \
+ -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
+ -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
+ -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+ -netdev user,id=nttsip1,hostfwd=tcp:10.239.251.220:6001-:22 \
+ -chardev socket,id=char0,path=./vhost-net1,server \
+ -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \
+ -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on -vnc :12
 
 3. On VM1, set virtio device IP and run arp protocol::
 
-	ethtool -L ens5 combined 8
-	ifconfig ens5 1.1.1.2
-	arp -s 1.1.1.8 52:54:00:00:00:02
+ ethtool -L ens5 combined 8
+ ifconfig ens5 1.1.1.2
+ arp -s 1.1.1.8 52:54:00:00:00:02
 
 4. On VM2, set virtio device IP and run arp protocol::
 
-	ethtool -L ens5 combined 8
-	ifconfig ens5 1.1.1.8
-	arp -s 1.1.1.2 52:54:00:00:00:01
+ ethtool -L ens5 combined 8
+ ifconfig ens5 1.1.1.8
+ arp -s 1.1.1.2 52:54:00:00:00:01
 
 5. Scp 1MB file from VM1 to VM2::
 
-	Under VM1, run: `scp [xxx] root@1.1.1.8:/` [xxx] is the file name
+ Under VM1, run: `scp [xxx] root@1.1.1.8:/` [xxx] is the file name
 
 6. Check the iperf performance between two VMs by below commands::
 
-	Under VM1, run: `iperf -s -i 1`
-	Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`
+ Under VM1, run: `iperf -s -i 1`
+ Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`
 
 7. Relaunch VM1, and rerun step 3.
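The gist of the change this patch tracks can be seen by comparing one vhost launch before and after DPDK 22.11. This is a sketch for orientation, not part of the patch; it reuses the example CBDMA device 0000:00:04.0 and the testpmd invocation style from the test plan above::

    # Before DPDK 22.11: dmas names only the queues, and DMA devices are
    # attached to lcores with the separate --lcore-dma option.
    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 3-4 -n 4 --file-prefix=vhost -a 0000:00:04.0 \
    --vdev 'net_vhost0,iface=/tmp/vhost-net,queues=1,dmas=[txq0;rxq0]' \
    -- -i --lcore-dma=[lcore4@0000:00:04.0]

    # From DPDK 22.11: each queue carries its DMA device inline as a
    # queue@PCI-address pair, --lcore-dma is gone, and the optional
    # dma-ring-size (default 4096) sizes the per-queue DMA ring.
    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 3-4 -n 4 --file-prefix=vhost -a 0000:00:04.0 \
    --vdev 'net_vhost0,iface=/tmp/vhost-net,queues=1,dmas=[txq0@0000:00:04.0;rxq0@0000:00:04.0],dma-ring-size=2048' \
    -- -i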