Patch Detail

GET /api/patches/110960/?format=api
url: http://patches.dpdk.org/api/patches/110960/?format=api
web_url: http://patches.dpdk.org/project/dts/patch/20220510032220.343716-1-weix.ling@intel.com/
project: DTS (id 3, list dts@dpdk.org, scm git://dpdk.org/tools/dts)
msgid: <20220510032220.343716-1-weix.ling@intel.com>
list_archive_url: https://inbox.dpdk.org/dts/20220510032220.343716-1-weix.ling@intel.com
date: 2022-05-10T03:22:20
name: [V2,2/3] test_plans/virtio_event_idx_interrupt_cbdma_test_plan: add virtio_event_idx_interrupt_cbdma testplan
state: accepted
hash: 22ef00e45665b174fef65aefb6e3795ba59007bf
submitter: Ling, WeiX <weix.ling@intel.com>
series: 22861, v2, "add virtio_event_idx_interrupt_cbdma" (http://patches.dpdk.org/project/dts/list/?series=22861)
mbox: http://patches.dpdk.org/project/dts/patch/20220510032220.343716-1-weix.ling@intel.com/mbox/
comments: http://patches.dpdk.org/api/patches/110960/comments/
check: pending

From: Wei Ling <weix.ling@intel.com>
To: dts@dpdk.org
Cc: Wei Ling <weix.ling@intel.com>
Subject: [dts][PATCH V2 2/3] test_plans/virtio_event_idx_interrupt_cbdma_test_plan: add virtio_event_idx_interrupt_cbdma testplan
Date: Mon, 9 May 2022 23:22:20 -0400
Message-Id: <20220510032220.343716-1-weix.ling@intel.com>

Add new testplan test_plans/virtio_event_idx_interrupt_cbdma_test_plan.

Signed-off-by: Wei Ling <weix.ling@intel.com>
---
 ...io_event_idx_interrupt_cbdma_test_plan.rst | 207 ++++++++++++++++++
 1 file changed, 207 insertions(+)
 create mode 100644 test_plans/virtio_event_idx_interrupt_cbdma_test_plan.rst

diff --git a/test_plans/virtio_event_idx_interrupt_cbdma_test_plan.rst b/test_plans/virtio_event_idx_interrupt_cbdma_test_plan.rst
new file mode 100644
index 00000000..7c470953
--- /dev/null
+++ b/test_plans/virtio_event_idx_interrupt_cbdma_test_plan.rst
@@ -0,0 +1,207 @@
..
   Copyright (c) <2022>, Intel Corporation
   All rights reserved.

   Redistribution and use in source and binary forms, with or without
   modification, are permitted provided that the following conditions
   are met:

   - Redistributions of source code must retain the above copyright
     notice, this list of conditions and the following disclaimer.

   - Redistributions in binary form must reproduce the above copyright
     notice, this list of conditions and the following disclaimer in
     the documentation and/or other materials provided with the
     distribution.

   - Neither the name of Intel Corporation nor the names of its
     contributors may be used to endorse or promote products derived
     from this software without specific prior written permission.

   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
   FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
   COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
   INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
   (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
   SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
   HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
   STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
   ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
   OF THE POSSIBILITY OF SUCH DAMAGE.

====================================================
virtio event idx interrupt mode with cbdma test plan
====================================================

Description
===========

The virtio event idx feature suppresses interrupts to improve performance,
so interrupt counts need to be compared with and without virtio event idx
enabled. This test plan tests the virtio event idx interrupt path with
CBDMA enabled.
It also covers the driver reload test.

.. note::

   1. For the packed virtqueue virtio-net test, QEMU version > 4.2.0 and a VM
      kernel version > 5.1 are required; packed ring multi-queue does not yet
      support reconnect in QEMU.
   2. For the split virtqueue virtio-net multi-queue server mode test, QEMU
      version >= 5.2.0 is required, due to a reconnect issue in older QEMU
      with multi-queue tests.
   3. A local DPDK patch for the vhost PMD is needed when testing the vhost
      asynchronous data path with testpmd.

Test flow
=========

TG --> NIC --> Vhost-user --> Virtio-net

Test Case1: Split ring virtio-pci driver reload test with CBDMA enabled
=======================================================================

1. Bind one NIC port and one CBDMA channel to vfio-pci, then launch the vhost
   sample with the commands below::

    rm -rf vhost-net*
    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xF0000000 -n 4 --file-prefix=vhost \
    --vdev 'net_vhost,iface=vhost-net,queues=1,dmas=[txq0@00:04.0]' \
    -- -i --nb-cores=1 --txd=1024 --rxd=1024
    testpmd> start

2. Launch the VM::

    taskset -c 32-33 \
    qemu-system-x86_64 -name us-vhost-vm1 \
    -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
    -smp cores=2,sockets=1 -drive file=/home/osimg/ubuntu2004_2.img \
    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,macaddr=00:00:00:08:e8:aa,addr=1f -net user,hostfwd=tcp:127.0.0.1:6004-:22 \
    -chardev socket,id=char0,path=./vhost-net -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
    -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=on,csum=on,gso=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on \
    -vnc :12 -daemonize

3.
   On VM1, set the virtio device IP and send 10M packets from the packet
   generator to the NIC, then check that the virtio device can receive
   packets::

    ifconfig [ens3] 1.1.1.2  # [ens3] is the name of the virtio-net device
    tcpdump -i [ens3]

4. Reload the virtio-net driver with the commands below::

    ifconfig [ens3] down
    ./usertools/dpdk-devbind.py -u [00:03.0]  # [00:03.0] is the PCI address of the virtio-net device
    ./usertools/dpdk-devbind.py -b virtio-pci [00:03.0]

5. Check that the virtio device can receive packets again::

    ifconfig [ens3] 1.1.1.2
    tcpdump -i [ens3]

6. Rerun step 4 and step 5 100 times to check that event idx still works after
   the driver reload.

Test Case2: Wake up split ring virtio-net cores with event idx interrupt mode and cbdma enabled 16 queues test
==============================================================================================================

1. Bind one NIC port and 16 CBDMA channels to vfio-pci, then launch the vhost
   sample with the commands below::

    rm -rf vhost-net*
    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-17 -n 4 --file-prefix=vhost \
    --vdev 'net_vhost,iface=vhost-net,queues=16,client=1,dmas=[txq0@80:04.0;txq1@80:04.1;txq2@80:04.2;txq3@80:04.3;txq4@80:04.4;txq5@80:04.5;txq6@80:04.6;txq7@80:04.7;txq8@00:04.0;txq9@00:04.1;txq10@00:04.2;txq11@00:04.3;txq12@00:04.4;txq13@00:04.5;txq14@00:04.6;txq15@00:04.7]' \
    -- -i --nb-cores=16 --txd=1024 --rxd=1024 --rxq=16 --txq=16
    testpmd> start

2.
   Launch the VM::

    taskset -c 32-33 \
    qemu-system-x86_64 -name us-vhost-vm1 \
    -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
    -smp cores=16,sockets=1 -drive file=/home/osimg/ubuntu2004_2.img \
    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,macaddr=00:00:00:08:e8:aa,addr=1f -net user,hostfwd=tcp:127.0.0.1:6004-:22 \
    -chardev socket,id=char0,path=./vhost-net,server -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce,queues=16 \
    -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=on,mq=on,vectors=40,csum=on,gso=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on \
    -vnc :12 -daemonize

3. On VM1, assign the virtio device an IP address and enable virtio-net with
   16 queues::

    ifconfig [ens3] 1.1.1.2  # [ens3] is the name of the virtio-net device
    ethtool -L [ens3] combined 16

4. Send 10M packets with different IP addresses from the packet generator to
   the NIC, then check the virtio-net interrupt counts in the VM with::

    cat /proc/interrupts

5. After the two-hour stress test, stop and restart testpmd, then check that
   each queue receives new packets::

    testpmd> stop
    testpmd> start
    testpmd> stop

Test Case3: Packed ring virtio-pci driver reload test with CBDMA enabled
========================================================================

1. Bind one NIC port and one CBDMA channel to vfio-pci, then launch the vhost
   sample with the commands below::

    rm -rf vhost-net*
    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xF0000000 -n 4 --file-prefix=vhost \
    --vdev 'net_vhost,iface=vhost-net,queues=1,dmas=[txq0@00:04.0]' \
    -- -i --nb-cores=1 --txd=1024 --rxd=1024
    testpmd> start

2.
   Launch the VM::

    taskset -c 32-33 \
    qemu-system-x86_64 -name us-vhost-vm1 \
    -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
    -smp cores=2,sockets=1 -drive file=/home/osimg/ubuntu2004_2.img \
    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,macaddr=00:00:00:08:e8:aa,addr=1f -net user,hostfwd=tcp:127.0.0.1:6004-:22 \
    -chardev socket,id=char0,path=./vhost-net -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
    -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=on,csum=on,gso=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on \
    -vnc :12 -daemonize

3. On VM1, set the virtio device IP and send 10M packets from the packet
   generator to the NIC, then check that the virtio device can receive
   packets::

    ifconfig [ens3] 1.1.1.2  # [ens3] is the name of the virtio-net device
    tcpdump -i [ens3]

4. Reload the virtio-net driver with the commands below::

    ifconfig [ens3] down
    ./usertools/dpdk-devbind.py -u [00:03.0]  # [00:03.0] is the PCI address of the virtio-net device
    ./usertools/dpdk-devbind.py -b virtio-pci [00:03.0]

5. Check that the virtio device can receive packets again::

    ifconfig [ens3] 1.1.1.2
    tcpdump -i [ens3]

6. Rerun step 4 and step 5 100 times to check that event idx still works after
   the driver reload.

Test Case4: Wake up packed ring virtio-net cores with event idx interrupt mode and cbdma enabled 16 queues test
===============================================================================================================

1.
   Bind one NIC port and 16 CBDMA channels to vfio-pci, then launch the vhost
   sample with the commands below::

    rm -rf vhost-net*
    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-17 -n 4 --file-prefix=vhost \
    --vdev 'net_vhost,iface=vhost-net,queues=16,client=1,dmas=[txq0@80:04.0;txq1@80:04.1;txq2@80:04.2;txq3@80:04.3;txq4@80:04.4;txq5@80:04.5;txq6@80:04.6;txq7@80:04.7;txq8@00:04.0;txq9@00:04.1;txq10@00:04.2;txq11@00:04.3;txq12@00:04.4;txq13@00:04.5;txq14@00:04.6;txq15@00:04.7]' \
    -- -i --nb-cores=16 --txd=1024 --rxd=1024 --rxq=16 --txq=16
    testpmd> start

2. Launch the VM::

    taskset -c 32-33 \
    qemu-system-x86_64 -name us-vhost-vm1 \
    -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
    -smp cores=16,sockets=1 -drive file=/home/osimg/ubuntu2004_2.img \
    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,macaddr=00:00:00:08:e8:aa,addr=1f -net user,hostfwd=tcp:127.0.0.1:6004-:22 \
    -chardev socket,id=char0,path=./vhost-net,server -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce,queues=16 \
    -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=on,mq=on,vectors=40,csum=on,gso=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on \
    -vnc :12 -daemonize

3. On VM1, assign the virtio device an IP address and enable virtio-net with
   16 queues::

    ifconfig [ens3] 1.1.1.2  # [ens3] is the name of the virtio-net device
    ethtool -L [ens3] combined 16

4. Send 10M packets with different IP addresses from the packet generator to
   the NIC, then check the virtio-net interrupt counts in the VM with::

    cat /proc/interrupts

5. After the two-hour stress test, stop and restart testpmd, then check that
   each queue receives new packets::

    testpmd> stop
    testpmd> start
    testpmd> stop
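The vhost testpmd commands in this plan pin cores either with a hex mask (`-c 0xF0000000`) or a core list (`-l 1-17`). As a quick sanity check of which cores a mask actually selects, a small helper can expand it; the function name `mask_to_cores` is a hypothetical convenience, not part of DTS or DPDK:

```shell
# Expand a hex coremask (as passed to testpmd -c) into the list of
# core numbers it selects.
mask_to_cores() {
    local mask=$(( $1 )) i=0 out=""
    while [ "$mask" -ne 0 ]; do
        if [ $(( mask & 1 )) -ne 0 ]; then
            out="${out:+$out,}$i"      # append this core number
        fi
        mask=$(( mask >> 1 ))
        i=$(( i + 1 ))
    done
    echo "$out"
}

mask_to_cores 0xF0000000   # -> 28,29,30,31
```

So `-c 0xF0000000` pins testpmd to cores 28-31, which should be kept disjoint from the cores given to `taskset` for the VM.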
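The 16-queue test cases pass a long `dmas=[txqN@BDF;...]` mapping to the `net_vhost` vdev. Rather than typing it by hand, the string can be generated; the helper name `build_dmas` is hypothetical, and the fixed 80:04.x/00:04.x CBDMA addresses are taken from this plan and must be replaced with the channels actually bound to vfio-pci on the system under test:

```shell
# Build the dmas=[...] vdev argument for 16 queues, mapping the first
# 8 tx queues to CBDMA channels 80:04.0-7 and the rest to 00:04.0-7
# (the layout used in this test plan).
build_dmas() {
    local i bdf dmas=""
    for i in $(seq 0 15); do
        if [ "$i" -lt 8 ]; then
            bdf="80:04.$i"            # first 8 queues
        else
            bdf="00:04.$((i - 8))"    # last 8 queues
        fi
        dmas="${dmas:+$dmas;}txq${i}@${bdf}"
    done
    printf 'dmas=[%s]\n' "$dmas"
}
```

It could then be spliced into the launch line as `--vdev "net_vhost,iface=vhost-net,queues=16,client=1,$(build_dmas)"`.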
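Step 6 of the driver reload test cases ("rerun step 4 and step 5 100 times") can be scripted. This is a hedged sketch, not part of the DTS suite: the interface name and PCI address are placeholders mirroring the plan's `[ens3]`/`[00:03.0]` examples, `ip` is used in place of the legacy `ifconfig`, and `timeout`+`tcpdump` stand in for the manual packet-reception check:

```shell
# Placeholder default mirroring the plan's tree layout; override as needed.
DEVBIND=${DEVBIND:-./usertools/dpdk-devbind.py}

# One unbind/rebind cycle (steps 4-5): take the interface down, move the
# device off and back onto virtio-pci, re-add the IP, then capture one
# packet to confirm the device still receives traffic.
reload_once() {
    local iface=$1 bdf=$2
    ip link set "$iface" down &&
        "$DEVBIND" -u "$bdf" &&
        "$DEVBIND" -b virtio-pci "$bdf" &&
        ip link set "$iface" up &&
        ip addr replace 1.1.1.2/24 dev "$iface" &&
        timeout 5 tcpdump -i "$iface" -c 1 >/dev/null 2>&1
}

# Step 6: repeat the cycle (100 times by default), stopping at the
# first iteration where the device no longer receives packets.
reload_loop() {
    local iface=$1 bdf=$2 n=${3:-100} i
    for i in $(seq 1 "$n"); do
        reload_once "$iface" "$bdf" || { echo "failed at iteration $i"; return 1; }
    done
    echo "$n/$n reload iterations passed"
}

# reload_loop ens3 00:03.0     # run the full 100-iteration check
```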
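For step 4 of the 16-queue cases, comparing raw `cat /proc/interrupts` output before and after the traffic burst is tedious; summing the virtio queue counters makes the comparison a single number. A sketch only: the helper name `virtio_irq_total` and the `virtio` match pattern are assumptions, and the optional file argument exists so the parsing can be exercised on a saved copy:

```shell
# Sum the per-CPU interrupt counters of every virtio queue line in
# /proc/interrupts (or a file passed as $1).  Run before and after
# sending traffic and compare the two totals.
virtio_irq_total() {
    local file=${1:-/proc/interrupts}
    awk '/virtio/ {
             for (i = 2; i <= NF; i++)     # $1 is the IRQ number column
                 if ($i ~ /^[0-9]+$/)      # per-CPU count columns only
                     s += $i
         }
         END { print s + 0 }' "$file"
}

# before=$(virtio_irq_total); <send traffic>; after=$(virtio_irq_total)
# "$after" being only slightly above "$before" is what event idx
# suppression is expected to show.
```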