Patch Detail
GET /api/patches/125543/?format=api
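The response below is a single JSON object describing one patch. As a minimal sketch of consuming it — the field names (`name`, `state`, `mbox`, `series`) are taken from the response shown here, while `patch_url` and `summarize` are illustrative helpers, not part of any Patchwork client library — extracting the fields a CI script typically needs could look like:

```python
import json

API_BASE = "https://patches.dpdk.org/api"

def patch_url(patch_id: int) -> str:
    # GET this URL (e.g. with urllib.request) to receive JSON like the body below.
    return f"{API_BASE}/patches/{patch_id}/"

def summarize(patch: dict) -> dict:
    # Pick out the fields usually consulted before applying/testing a patch.
    return {
        "name": patch["name"],
        "state": patch["state"],
        "mbox": patch["mbox"],
        "series": [s["id"] for s in patch.get("series", [])],
    }

# Tiny excerpt of the real response for patch 125543:
sample = json.loads("""{
  "id": 125543,
  "name": "[V1,1/2] test_plans/pvp_qemu_multi_paths_port_restart: completion testplan",
  "state": "accepted",
  "mbox": "https://patches.dpdk.org/project/dts/patch/20230328072043.3795609-2-weix.ling@intel.com/mbox/",
  "series": [{"id": 27557}]
}""")
print(summarize(sample)["state"])  # prints: accepted
```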
{ "id": 125543, "url": "https://patches.dpdk.org/api/patches/125543/?format=api", "web_url": "https://patches.dpdk.org/project/dts/patch/20230328072043.3795609-2-weix.ling@intel.com/", "project": { "id": 3, "url": "https://patches.dpdk.org/api/projects/3/?format=api", "name": "DTS", "link_name": "dts", "list_id": "dts.dpdk.org", "list_email": "dts@dpdk.org", "web_url": "", "scm_url": "git://dpdk.org/tools/dts", "webscm_url": "http://git.dpdk.org/tools/dts/", "list_archive_url": "https://inbox.dpdk.org/dts", "list_archive_url_format": "https://inbox.dpdk.org/dts/{}", "commit_url_format": "" }, "msgid": "<20230328072043.3795609-2-weix.ling@intel.com>", "list_archive_url": "https://inbox.dpdk.org/dts/20230328072043.3795609-2-weix.ling@intel.com", "date": "2023-03-28T07:20:42", "name": "[V1,1/2] test_plans/pvp_qemu_multi_paths_port_restart: completion testplan", "commit_ref": null, "pull_url": null, "state": "accepted", "archived": false, "hash": "e8985d1ea19ec00b6efc99617fb7421d31b08d20", "submitter": { "id": 1828, "url": "https://patches.dpdk.org/api/people/1828/?format=api", "name": "Ling, WeiX", "email": "weix.ling@intel.com" }, "delegate": null, "mbox": "https://patches.dpdk.org/project/dts/patch/20230328072043.3795609-2-weix.ling@intel.com/mbox/", "series": [ { "id": 27557, "url": "https://patches.dpdk.org/api/series/27557/?format=api", "web_url": "https://patches.dpdk.org/project/dts/list/?series=27557", "date": "2023-03-28T07:20:41", "name": "completion testplan and optimize testsuite script", "version": 1, "mbox": "https://patches.dpdk.org/series/27557/mbox/" } ], "comments": "https://patches.dpdk.org/api/patches/125543/comments/", "check": "pending", "checks": "https://patches.dpdk.org/api/patches/125543/checks/", "tags": {}, "related": [], "headers": { "Return-Path": "<dts-bounces@dpdk.org>", "X-Original-To": "patchwork@inbox.dpdk.org", "Delivered-To": "patchwork@inbox.dpdk.org", "Received": [ "from mails.dpdk.org (mails.dpdk.org [217.70.189.124])\n\tby inbox.dpdk.org (Postfix) 
with ESMTP id 31A1742854;\n\tTue, 28 Mar 2023 09:24:34 +0200 (CEST)", "from mails.dpdk.org (localhost [127.0.0.1])\n\tby mails.dpdk.org (Postfix) with ESMTP id 2DF7E410EE;\n\tTue, 28 Mar 2023 09:24:34 +0200 (CEST)", "from mga18.intel.com (mga18.intel.com [134.134.136.126])\n by mails.dpdk.org (Postfix) with ESMTP id D3E3440156\n for <dts@dpdk.org>; Tue, 28 Mar 2023 09:24:32 +0200 (CEST)", "from fmsmga002.fm.intel.com ([10.253.24.26])\n by orsmga106.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;\n 28 Mar 2023 00:24:31 -0700", "from unknown (HELO localhost.localdomain) ([10.239.252.222])\n by fmsmga002-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;\n 28 Mar 2023 00:24:30 -0700" ], "DKIM-Signature": "v=1; a=rsa-sha256; c=relaxed/simple;\n d=intel.com; i=@intel.com; q=dns/txt; s=Intel;\n t=1679988273; x=1711524273;\n h=from:to:cc:subject:date:message-id:in-reply-to:\n references:mime-version:content-transfer-encoding;\n bh=oJVs6nC8Q1SaJC70V4Gi16tWjwTj6yf9TnQUjaS7U7g=;\n b=N38Xq26ZBtKyEQ7ec8MZfxGySU1UDITjFjFBmFrW4326OSqUvT3pz1+b\n p4hxHWHF60L4hxzqB8REfPNc9TiTby/0DNVH490+TRXgHSw+hKbMzwDur\n 8riWILrVoAUvQw2I+iiBZwLw4UuBHnzcPRBuMwL0MZ67LzxJ3U6FOEsGa\n BzBdetK4g29bsbgkAdeYKxucVtmySJ0FE5ZkzZI8rAW+wjy0ZIlF9og02\n HYhaT2j5qqHOlDRRDwj/5D7vQco79Gwx7v9bnZ8QRDkpVfsxuGv0vYNQO\n l+D1Ni/aRP/I8zQxt899+KBmdSld1Zwu+uGLEtXq4WDWyJT2fEskvnOXu g==;", "X-IronPort-AV": [ "E=McAfee;i=\"6600,9927,10662\"; a=\"324389274\"", "E=Sophos;i=\"5.98,296,1673942400\"; d=\"scan'208\";a=\"324389274\"", "E=McAfee;i=\"6600,9927,10662\"; a=\"794675208\"", "E=Sophos;i=\"5.98,296,1673942400\"; d=\"scan'208\";a=\"794675208\"" ], "X-ExtLoop1": "1", "From": "Wei Ling <weix.ling@intel.com>", "To": "dts@dpdk.org", "Cc": "Wei Ling <weix.ling@intel.com>", "Subject": "[dts][PATCH V1 1/2] test_plans/pvp_qemu_multi_paths_port_restart:\n completion testplan", "Date": "Tue, 28 Mar 2023 15:20:42 +0800", "Message-Id": "<20230328072043.3795609-2-weix.ling@intel.com>", "X-Mailer": "git-send-email 
2.25.1", "In-Reply-To": "<20230328072043.3795609-1-weix.ling@intel.com>", "References": "<20230328072043.3795609-1-weix.ling@intel.com>", "MIME-Version": "1.0", "Content-Transfer-Encoding": "8bit", "X-BeenThere": "dts@dpdk.org", "X-Mailman-Version": "2.1.29", "Precedence": "list", "List-Id": "test suite reviews and discussions <dts.dpdk.org>", "List-Unsubscribe": "<https://mails.dpdk.org/options/dts>,\n <mailto:dts-request@dpdk.org?subject=unsubscribe>", "List-Archive": "<http://mails.dpdk.org/archives/dts/>", "List-Post": "<mailto:dts@dpdk.org>", "List-Help": "<mailto:dts-request@dpdk.org?subject=help>", "List-Subscribe": "<https://mails.dpdk.org/listinfo/dts>,\n <mailto:dts-request@dpdk.org?subject=subscribe>", "Errors-To": "dts-bounces@dpdk.org" }, "content": "Completion `-a 0000:04:00.0` parameter when start testpmd in VM and\nmodify testcase 10 re-run time from 100 to 10 to reduce run time.\n\nSigned-off-by: Wei Ling <weix.ling@intel.com>\n---\n ...emu_multi_paths_port_restart_test_plan.rst | 130 +++++++++++-------\n 1 file changed, 80 insertions(+), 50 deletions(-)", "diff": "diff --git a/test_plans/pvp_qemu_multi_paths_port_restart_test_plan.rst b/test_plans/pvp_qemu_multi_paths_port_restart_test_plan.rst\nindex 7e24290a..84ee68de 100644\n--- a/test_plans/pvp_qemu_multi_paths_port_restart_test_plan.rst\n+++ b/test_plans/pvp_qemu_multi_paths_port_restart_test_plan.rst\n@@ -44,8 +44,8 @@ Test Case 1: pvp test with virtio 0.95 mergeable path\n \n 3. On VM, bind virtio net to vfio-pci and run testpmd::\n \n- ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 3 -- -i \\\n- --nb-cores=1 --txd=1024 --rxd=1024\n+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 1 -a 0000:04:00.0 \\\n+ -- -i --nb-cores=1 --txd=1024 --rxd=1024\n testpmd>set fwd mac\n testpmd>start\n \n@@ -53,15 +53,18 @@ Test Case 1: pvp test with virtio 0.95 mergeable path\n \n testpmd>show port stats all\n \n-5. 
Port restart 100 times by below command and re-calculate the average througnput,verify the throughput is not zero after port restart::\n+5. Stop port at vhost side by below command and re-calculate the average throughput,verify the throughput is zero after port stop::\n \n testpmd>stop\n+ testpmd>port stop 0\n+ testpmd>show port stats 0\n+\n+6. Restart port at vhost side by below command and re-calculate the average throughput,verify the throughput is not zero after port restart::\n+\n+ testpmd>clear port stats all\n+ testpmd>port start all\n testpmd>start\n- ...\n- testpmd>stop\n- testpmd>show port stats all\n- testpmd>start\n- testpmd>show port stats all\n+ testpmd>show port stats 0\n \n Test Case 2: pvp test with virtio 0.95 normal path\n ==================================================\n@@ -90,8 +93,8 @@ Test Case 2: pvp test with virtio 0.95 normal path\n \n 3. On VM, bind virtio net to vfio-pci and run testpmd with tx-offloads::\n \n- ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x0 --enable-hw-vlan-strip \\\n- --nb-cores=1 --txd=1024 --rxd=1024\n+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 1 -a 0000:04:00.0 \\\n+ -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --nb-cores=1 --txd=1024 --rxd=1024\n testpmd>set fwd mac\n testpmd>start\n \n@@ -102,14 +105,17 @@ Test Case 2: pvp test with virtio 0.95 normal path\n 5. Stop port at vhost side by below command and re-calculate the average throughput,verify the throughput is zero after port stop::\n \n testpmd>stop\n- testpmd>show port stats all\n+ testpmd>port stop 0\n+ testpmd>show port stats 0\n \n 6. 
Restart port at vhost side by below command and re-calculate the average throughput,verify the throughput is not zero after port restart::\n \n+ testpmd>clear port stats all\n+ testpmd>port start all\n testpmd>start\n- testpmd>show port stats all\n+ testpmd>show port stats 0\n \n-Test Case 3: pvp test with virtio 0.95 vrctor_rx path\n+Test Case 3: pvp test with virtio 0.95 vector_rx path\n =====================================================\n \n 1. Bind 1 NIC port to vfio-pci, then launch testpmd by below command::\n@@ -136,8 +142,8 @@ Test Case 3: pvp test with virtio 0.95 vrctor_rx path\n \n 3. On VM, bind virtio net to vfio-pci and run testpmd without ant tx-offloads::\n \n- ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 3 -a 0000:04:00.0,vectorized=1 -- -i \\\n- --nb-cores=1 --txd=1024 --rxd=1024\n+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 1 -a 0000:04:00.0,vectorized=1 \\\n+ -- -i --nb-cores=1 --txd=1024 --rxd=1024\n testpmd>set fwd mac\n testpmd>start\n \n@@ -148,12 +154,15 @@ Test Case 3: pvp test with virtio 0.95 vrctor_rx path\n 5. Stop port at vhost side by below command and re-calculate the average throughput,verify the throughput is zero after port stop::\n \n testpmd>stop\n- testpmd>show port stats all\n+ testpmd>port stop 0\n+ testpmd>show port stats 0\n \n 6. Restart port at vhost side by below command and re-calculate the average throughput,verify the throughput is not zero after port restart::\n \n+ testpmd>clear port stats all\n+ testpmd>port start all\n testpmd>start\n- testpmd>show port stats all\n+ testpmd>show port stats 0\n \n Test Case 4: pvp test with virtio 1.0 mergeable path\n ====================================================\n@@ -182,8 +191,8 @@ Test Case 4: pvp test with virtio 1.0 mergeable path\n \n 3. 
On VM, bind virtio net to vfio-pci and run testpmd::\n \n- ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 3 -- -i \\\n- --nb-cores=1 --txd=1024 --rxd=1024\n+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 1 -a 0000:04:00.0 \\\n+ -- -i --nb-cores=1 --txd=1024 --rxd=1024\n testpmd>set fwd mac\n testpmd>start\n \n@@ -194,12 +203,15 @@ Test Case 4: pvp test with virtio 1.0 mergeable path\n 5. Stop port at vhost side by below command and re-calculate the average throughput,verify the throughput is zero after port stop::\n \n testpmd>stop\n- testpmd>show port stats all\n+ testpmd>port stop 0\n+ testpmd>show port stats 0\n \n 6. Restart port at vhost side by below command and re-calculate the average throughput,verify the throughput is not zero after port restart::\n \n+ testpmd>clear port stats all\n+ testpmd>port start all\n testpmd>start\n- testpmd>show port stats all\n+ testpmd>show port stats 0\n \n Test Case 5: pvp test with virtio 1.0 normal path\n =================================================\n@@ -228,8 +240,8 @@ Test Case 5: pvp test with virtio 1.0 normal path\n \n 3. On VM, bind virtio net to vfio-pci and run testpmd with tx-offloads::\n \n- ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x0 --enable-hw-vlan-strip\\\n- --nb-cores=1 --txd=1024 --rxd=1024\n+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 1 -a 0000:04:00.0 \\\n+ -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --nb-cores=1 --txd=1024 --rxd=1024\n testpmd>set fwd mac\n testpmd>start\n \n@@ -240,14 +252,17 @@ Test Case 5: pvp test with virtio 1.0 normal path\n 5. Stop port at vhost side by below command and re-calculate the average throughput,verify the throughput is zero after port stop::\n \n testpmd>stop\n- testpmd>show port stats all\n+ testpmd>port stop 0\n+ testpmd>show port stats 0\n \n 6. 
Restart port at vhost side by below command and re-calculate the average throughput,verify the throughput is not zero after port restart::\n \n+ testpmd>clear port stats all\n+ testpmd>port start all\n testpmd>start\n- testpmd>show port stats all\n+ testpmd>show port stats 0\n \n-Test Case 6: pvp test with virtio 1.0 vrctor_rx path\n+Test Case 6: pvp test with virtio 1.0 vector_rx path\n ====================================================\n \n 1. Bind 1 NIC port to vfio-pci, then launch testpmd by below command::\n@@ -274,8 +289,8 @@ Test Case 6: pvp test with virtio 1.0 vrctor_rx path\n \n 3. On VM, bind virtio net to vfio-pci and run testpmd without tx-offloads::\n \n- ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 3 -a 0000:04:00.0,vectorized=1 -- -i \\\n- --nb-cores=1 --txd=1024 --rxd=1024\n+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 1 -a 0000:04:00.0,vectorized=1 \\\n+ -- -i --nb-cores=1 --txd=1024 --rxd=1024\n testpmd>set fwd mac\n testpmd>start\n \n@@ -286,12 +301,15 @@ Test Case 6: pvp test with virtio 1.0 vrctor_rx path\n 5. Stop port at vhost side by below command and re-calculate the average throughput,verify the throughput is zero after port stop::\n \n testpmd>stop\n- testpmd>show port stats all\n+ testpmd>port stop 0\n+ testpmd>show port stats 0\n \n 6. Restart port at vhost side by below command and re-calculate the average throughput,verify the throughput is not zero after port restart::\n \n+ testpmd>clear port stats all\n+ testpmd>port start all\n testpmd>start\n- testpmd>show port stats all\n+ testpmd>show port stats 0\n \n Test Case 7: pvp test with virtio 1.1 mergeable path\n ====================================================\n@@ -320,8 +338,8 @@ Test Case 7: pvp test with virtio 1.1 mergeable path\n \n 3. 
On VM, bind virtio net to vfio-pci and run testpmd::\n \n- ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 3 -- -i \\\n- --nb-cores=1 --txd=1024 --rxd=1024\n+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 1 -a 0000:04:00.0 \\\n+ -- -i --nb-cores=1 --txd=1024 --rxd=1024\n testpmd>set fwd mac\n testpmd>start\n \n@@ -332,12 +350,15 @@ Test Case 7: pvp test with virtio 1.1 mergeable path\n 5. Stop port at vhost side by below command and re-calculate the average throughput,verify the throughput is zero after port stop::\n \n testpmd>stop\n- testpmd>show port stats all\n+ testpmd>port stop 0\n+ testpmd>show port stats 0\n \n 6. Restart port at vhost side by below command and re-calculate the average throughput,verify the throughput is not zero after port restart::\n \n+ testpmd>clear port stats all\n+ testpmd>port start all\n testpmd>start\n- testpmd>show port stats all\n+ testpmd>show port stats 0\n \n Test Case 8: pvp test with virtio 1.1 normal path\n =================================================\n@@ -366,8 +387,8 @@ Test Case 8: pvp test with virtio 1.1 normal path\n \n 3. On VM, bind virtio net to vfio-pci and run testpmd with tx-offloads::\n \n- ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x0 --enable-hw-vlan-strip\\\n- --nb-cores=1 --txd=1024 --rxd=1024\n+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 1 -a 0000:04:00.0 \\\n+ -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --nb-cores=1 --txd=1024 --rxd=1024\n testpmd>set fwd mac\n testpmd>start\n \n@@ -378,14 +399,17 @@ Test Case 8: pvp test with virtio 1.1 normal path\n 5. Stop port at vhost side by below command and re-calculate the average throughput,verify the throughput is zero after port stop::\n \n testpmd>stop\n- testpmd>show port stats all\n+ testpmd>port stop 0\n+ testpmd>show port stats 0\n \n 6. 
Restart port at vhost side by below command and re-calculate the average throughput,verify the throughput is not zero after port restart::\n \n+ testpmd>clear port stats all\n+ testpmd>port start all\n testpmd>start\n- testpmd>show port stats all\n+ testpmd>show port stats 0\n \n-Test Case 9: pvp test with virtio 1.1 vrctor_rx path\n+Test Case 9: pvp test with virtio 1.1 vector_rx path\n ====================================================\n \n 1. Bind 1 NIC port to vfio-pci, then launch testpmd by below command::\n@@ -412,8 +436,8 @@ Test Case 9: pvp test with virtio 1.1 vrctor_rx path\n \n 3. On VM, bind virtio net to vfio-pci and run testpmd without tx-offloads::\n \n- ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 3 -a 0000:04:00.0,vectorized=1 -- -i \\\n- --nb-cores=1 --txd=1024 --rxd=1024\n+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 1 -a 0000:04:00.0,vectorized=1 \\\n+ -- -i --nb-cores=1 --txd=1024 --rxd=1024\n testpmd>set fwd mac\n testpmd>start\n \n@@ -424,15 +448,18 @@ Test Case 9: pvp test with virtio 1.1 vrctor_rx path\n 5. Stop port at vhost side by below command and re-calculate the average throughput,verify the throughput is zero after port stop::\n \n testpmd>stop\n- testpmd>show port stats all\n+ testpmd>port stop 0\n+ testpmd>show port stats 0\n \n 6. Restart port at vhost side by below command and re-calculate the average throughput,verify the throughput is not zero after port restart::\n \n+ testpmd>clear port stats all\n+ testpmd>port start all\n testpmd>start\n- testpmd>show port stats all\n+ testpmd>show port stats 0\n \n-Test Case 10: pvp test with virtio 1.0 mergeable path restart 100 times\n-=======================================================================\n+Test Case 10: pvp test with virtio 1.0 mergeable path restart 10 times\n+======================================================================\n \n 1. 
Bind 1 NIC port to vfio-pci, then launch testpmd by below command::\n \n@@ -458,8 +485,8 @@ Test Case 10: pvp test with virtio 1.0 mergeable path restart 100 times\n \n 3. On VM, bind virtio net to vfio-pci and run testpmd::\n \n- ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 3 -- -i \\\n- --nb-cores=1 --txd=1024 --rxd=1024\n+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 1 -a 0000:04:00.0 \\\n+ -- -i --nb-cores=1 --txd=1024 --rxd=1024\n testpmd>set fwd mac\n testpmd>start\n \n@@ -470,11 +497,14 @@ Test Case 10: pvp test with virtio 1.0 mergeable path restart 100 times\n \n 5. Stop port at vhost side by below command and re-calculate the average throughput,verify the throughput is zero after port stop::\n \n testpmd>stop\n- testpmd>show port stats all\n+ testpmd>port stop 0\n+ testpmd>show port stats 0\n \n 6. Restart port at vhost side by below command and re-calculate the average throughput,verify the throughput is not zero after port restart::\n \n+ testpmd>clear port stats all\n+ testpmd>port start all\n testpmd>start\n- testpmd>show port stats all\n+ testpmd>show port stats 0\n \n-7. Rerun steps 4-6 100 times to check stability.\n+7. Rerun steps 4-6 10 times to check stability.\n", "prefixes": [ "V1", "1/2" ] }
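The `mbox` field in the response above gives a raw-email download URL for this patch, and `scm_url` names the repository it targets (`git://dpdk.org/tools/dts`). A hedged sketch of applying the patch to a local checkout — `mbox_filename` is an illustrative naming convention, and the network/`git am` call is left behind a guard because it needs network access and a DTS clone:

```python
import subprocess
import urllib.request

# "mbox" URL taken verbatim from the JSON response above.
MBOX_URL = ("https://patches.dpdk.org/project/dts/patch/"
            "20230328072043.3795609-2-weix.ling@intel.com/mbox/")

def mbox_filename(patch_id: int) -> str:
    # Local name for the downloaded mbox file (illustrative convention).
    return f"patch-{patch_id}.mbox"

def apply_patch(patch_id: int, url: str = MBOX_URL) -> None:
    # Download the raw email and feed it to `git am` in the current repo.
    path = mbox_filename(patch_id)
    urllib.request.urlretrieve(url, path)
    subprocess.run(["git", "am", path], check=True)

if __name__ == "__main__":
    # apply_patch(125543)  # uncomment inside a clone of git://dpdk.org/tools/dts
    print(mbox_filename(125543))
```

`git am` preserves the author and commit message from the mail headers, which is why Patchwork exposes patches in mbox form rather than as bare diffs.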