Patch Detail
GET /api/patches/110909/?format=api
https://patches.dpdk.org/project/dts/patch/20220509110344.17047-1-linglix.chen@intel.com/

[dts][PATCH V2] test_plans/*: remove common_base info in test plan

Project:    DTS <dts@dpdk.org> (scm: git://dpdk.org/tools/dts)
Message-ID: <20220509110344.17047-1-linglix.chen@intel.com>
Submitter:  Lingli Chen <linglix.chen@intel.com>
Date:       2022-05-09 11:03:44
Series:     [V2] test_plans/*: remove common_base info in test plan (version 2, id 22843)
State:      accepted
Checks:     warning
Hash:       34d04922f8bfd54a281ade877aa8b20b269b3e15
Archive:    https://inbox.dpdk.org/dts/20220509110344.17047-1-linglix.chen@intel.com

From: Lingli Chen <linglix.chen@intel.com>
To: dts@dpdk.org
Cc: Lingli Chen <linglix.chen@intel.com>
Subject: [dts][PATCH V2] test_plans/*: remove common_base info in test plan
Date: Mon, 9 May 2022 11:03:44 +0000
Message-Id: <20220509110344.17047-1-linglix.chen@intel.com>
X-Mailer: git-send-email 2.17.1

Makefiles were removed from the test suites long ago, but the test plans still carry common_base build information; the two need to be kept in sync.

Signed-off-by: Lingli Chen <linglix.chen@intel.com>
---
v2: add ptpclient_test_plan modification

v1: modify 12 test plans

 test_plans/bbdev_test_plan.rst                |  9 --
 test_plans/compressdev_zlib_pmd_test_plan.rst |  8 --
 test_plans/iavf_test_plan.rst                 | 11 +--
 test_plans/ipv4_reassembly_test_plan.rst      |  2 +-
 test_plans/kni_test_plan.rst                  |  2 +-
 test_plans/nvgre_test_plan.rst                |  2 -
 test_plans/packet_capture_test_plan.rst       | 10 --
 test_plans/ptpclient_test_plan.rst            |  5 +-
 test_plans/qinq_filter_test_plan.rst          |  3 -
 test_plans/vhost_1024_ethports_test_plan.rst  | 14 +--
 test_plans/vm2vm_virtio_pmd_test_plan.rst     | 93 ++++++-------------
 test_plans/vmdq_dcb_test_plan.rst             |  4 +-
 test_plans/vxlan_test_plan.rst                |  3 -
 13 files changed, 40 insertions(+), 126 deletions(-)

diff --git a/test_plans/bbdev_test_plan.rst b/test_plans/bbdev_test_plan.rst
index 2e2a64e5..2ac4cbe9 100644
--- a/test_plans/bbdev_test_plan.rst
+++ b/test_plans/bbdev_test_plan.rst
@@ -93,15 +93,6 @@ Prerequisites
 measure the overhead added by the framework.
 2) Turbo_sw is a sw-only driver wrapper for FlexRAN SDK optimized Turbo
 coding libraries.
- It can be enabled by setting
-
- ``CONFIG_RTE_LIBRTE_PMD_BBDEV_TURBO_SW=y``
-
- The offload cases can be enabled by setting
-
- ``CONFIG_RTE_BBDEV_OFFLOAD_COST=y``
-
- They are both located in the build configuration file ``common_base``.
 
 4. Test tool
 
diff --git a/test_plans/compressdev_zlib_pmd_test_plan.rst b/test_plans/compressdev_zlib_pmd_test_plan.rst
index 5d20dc61..678ea714 100644
--- a/test_plans/compressdev_zlib_pmd_test_plan.rst
+++ b/test_plans/compressdev_zlib_pmd_test_plan.rst
@@ -49,14 +49,6 @@ http://doc.dpdk.org/guides/compressdevs/zlib.html
 
 Prerequisites
 ----------------------
-In order to enable this virtual compression PMD, user must:
-
- Set CONFIG_RTE_LIBRTE_PMD_ZLIB=y in config/common_base.
-
-and enable compressdev unit test:
-
- Set CONFIG_RTE_COMPRESSDEV_TEST=y in config/common_base.
-
 A compress performance test app is added into DPDK to test CompressDev.
 RTE_COMPRESS_ZLIB and RTE_LIB_COMPRESSDEV is enabled by default in meson build.
 Calgary corpus is a collection of text and binary data files, commonly used
diff --git a/test_plans/iavf_test_plan.rst b/test_plans/iavf_test_plan.rst
index ddd7fbb9..9f657905 100644
--- a/test_plans/iavf_test_plan.rst
+++ b/test_plans/iavf_test_plan.rst
@@ -457,9 +457,7 @@ Test Case: VF performance
 Test Case: vector vf performance
 ---------------------------------
 
-1. config vector=y in config/common_base, and rebuild dpdk
-
-2. start testpmd for PF::
+1. start testpmd for PF::
 
 ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x6 -n 4 --socket-mem=1024,1024 --file-prefix=pf \
 -a 08:00.0 -a 08:00.1 -- -i
@@ -467,7 +465,7 @@ Test Case: vector vf performance
 testpmd>set vf mac addr 0 0 00:12:34:56:78:01
 testpmd>set vf mac addr 1 0 00:12:34:56:78:02
 
-3. start testpmd for VF::
+2. start testpmd for VF::
 
 ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x0f8 -n 4 --master-lcore=3 --socket-mem=1024,1024 --file-prefix=vf \
 -a 09:0a.0 -a 09:02.0 -- -i --txq=2 --rxq=2 --rxd=512 --txd=512 --nb-cores=4 --rss-ip
@@ -476,10 +474,9 @@ Test Case: vector vf performance
 testpmd>set fwd mac
 testpmd>start
 
-4. send traffic and verify throughput
+3. send traffic and verify throughput
 
 Test Case: scalar/bulk vf performance
 -------------------------------------
 
-1. change CONFIG_RTE_LIBRTE_IAVF_INC_VECTOR=n in config/common_base, and rebuild dpdk.
-2. repeat test steps 2-4 in above test case: vector vf performance.
+1. repeat above test case: vector vf performance, by launch dpdk-testpmd with '--force-max-simd-bitwidth=64'.
diff --git a/test_plans/ipv4_reassembly_test_plan.rst b/test_plans/ipv4_reassembly_test_plan.rst
index 354dae51..2f6de54e 100644
--- a/test_plans/ipv4_reassembly_test_plan.rst
+++ b/test_plans/ipv4_reassembly_test_plan.rst
@@ -106,7 +106,7 @@ Sample command::
 -P -p 0x2 --config "(1,0,1)" --maxflows=4096 --flowttl=10s
 
 Modifies the sample app source code to enable up to 7 fragments per packet,
-and it need set the "CONFIG_RTE_LIBRTE_IP_FRAG_MAX_FRAG=7" in ./config/common_base and re-build DPDK.
+and it need set the "RTE_LIBRTE_IP_FRAG_MAX_FRAG=7" in ./config/rte_config.h and re-build DPDK.
 
 Sends 4K packets split in 7 fragments each with a ``maxflows`` of 4K.
 
diff --git a/test_plans/kni_test_plan.rst b/test_plans/kni_test_plan.rst
index 1802f6ab..ee6c977a 100644
--- a/test_plans/kni_test_plan.rst
+++ b/test_plans/kni_test_plan.rst
@@ -121,7 +121,7 @@ system to another)::
 
 Case config::
 
- For enable KNI features, need to set the "CONFIG_RTE_KNI_KMOD=y" in ./config/common_base and re-build DPDK.
+ For enable KNI features, need build DPDK with '-Denable_kmods=True'.
 
 Test Case: ifconfig testing
 ===========================
diff --git a/test_plans/nvgre_test_plan.rst b/test_plans/nvgre_test_plan.rst
index c05292ee..71a406fd 100644
--- a/test_plans/nvgre_test_plan.rst
+++ b/test_plans/nvgre_test_plan.rst
@@ -55,8 +55,6 @@ plugged into the available PCIe Gen3 8-lane slot.
 
 DUT board must be two sockets system and each cpu have more than 8 lcores.
 
-For fortville NICs need change the value of CONFIG_RTE_LIBRTE_I40E_INC_VECTOR
-in dpdk/config/common_base file to n.
 
 Test Case: NVGRE ipv4 packet detect
 ===================================
diff --git a/test_plans/packet_capture_test_plan.rst b/test_plans/packet_capture_test_plan.rst
index e2be1430..7e7f8768 100644
--- a/test_plans/packet_capture_test_plan.rst
+++ b/test_plans/packet_capture_test_plan.rst
@@ -86,16 +86,6 @@ note: portB0/portB1 are the binded ports.
 Prerequisites
 =============
 
-Enable pcap lib in dpdk code and recompile::
-
- --- a/config/common_base
- +++ b/config/common_base
- @@ -492,7 +492,7 @@ CONFIG_RTE_LIBRTE_PMD_NULL=y
- #
- # Compile software PMD backed by PCAP files
- #
- -CONFIG_RTE_LIBRTE_PMD_PCAP=n
- +CONFIG_RTE_LIBRTE_PMD_PCAP=y
 
 Test cases
 ==========
diff --git a/test_plans/ptpclient_test_plan.rst b/test_plans/ptpclient_test_plan.rst
index 31ba2b15..3ae8b68b 100644
--- a/test_plans/ptpclient_test_plan.rst
+++ b/test_plans/ptpclient_test_plan.rst
@@ -46,10 +46,7 @@ has been installed on the tester.
 
 Case Config::
 
- Meson: For support IEEE1588, need to execute "sed -i '$a\#define RTE_LIBRTE_IEEE1588 1' config/rte_config.h",
- and re-build DPDK.
- $ CC=gcc meson -Denable_kmods=True -Dlibdir=lib --default-library=static <build_target>
- $ ninja -C <build_target>
+ Meson: For support IEEE1588, build DPDK with '-Dc_args=-DRTE_LIBRTE_IEEE1588'
 
 The sample should be validated on Forville, Niantic and i350 Nics.
 
diff --git a/test_plans/qinq_filter_test_plan.rst b/test_plans/qinq_filter_test_plan.rst
index 488596ed..6781c517 100644
--- a/test_plans/qinq_filter_test_plan.rst
+++ b/test_plans/qinq_filter_test_plan.rst
@@ -53,9 +53,6 @@ Test Case 1: test qinq packet type
 
 Testpmd configuration - 4 RX/TX queues per port
 ------------------------------------------------
-#. For fortville NICs need change the value of
- CONFIG_RTE_LIBRTE_I40E_INC_VECTOR in dpdk/config/common_base file to n.
-
 #. set up testpmd with fortville NICs::
 
 ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x1f -n 4 -- -i --rxq=4 --txq=4 --disable-rss
diff --git a/test_plans/vhost_1024_ethports_test_plan.rst b/test_plans/vhost_1024_ethports_test_plan.rst
index a95042f2..1c69c123 100644
--- a/test_plans/vhost_1024_ethports_test_plan.rst
+++ b/test_plans/vhost_1024_ethports_test_plan.rst
@@ -41,19 +41,13 @@ So when vhost-user ports number > 1023, it will report an error "failed to add l
 Test Case1: Basic test for launch vhost with 1023 ethports
 ===========================================================
 
-1. SW preparation: change "CONFIG_RTE_MAX_ETHPORTS" to 1023 in DPDK configure file::
-
- vi ./config/common_base
- -CONFIG_RTE_MAX_ETHPORTS=32
- +CONFIG_RTE_MAX_ETHPORTS=1023
+1. SW preparation::
+ build dpdk with '-Dmax_ethports=1024'
 
 2. Launch vhost with 1023 vdev::
 
 ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3000 -n 4 --file-prefix=vhost --vdev 'eth_vhost0,iface=vhost-net,queues=1' \
 --vdev 'eth_vhost1,iface=vhost-net1,queues=1' ... -- -i # only list two vdev, here ommit other 1021 vdevs, from eth_vhost2 to eth_vhost1022
 
-3. Change "CONFIG_RTE_MAX_ETHPORTS" back to 32 in DPDK configure file::
-
- vi ./config/common_base
- +CONFIG_RTE_MAX_ETHPORTS=32
- -CONFIG_RTE_MAX_ETHPORTS=1023
+3. restore dpdk::
+ build dpdk with '-Dmax_ethports=32'
diff --git a/test_plans/vm2vm_virtio_pmd_test_plan.rst b/test_plans/vm2vm_virtio_pmd_test_plan.rst
index 903695ff..6afb8d6f 100644
--- a/test_plans/vm2vm_virtio_pmd_test_plan.rst
+++ b/test_plans/vm2vm_virtio_pmd_test_plan.rst
@@ -296,60 +296,47 @@ Test Case 5: VM2VM vhost-user/virtio-pmd mergeable path with payload valid check
 -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
 -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=true,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on -vnc :12
 
-3. On VM1, enable pcap lib in dpdk code and recompile::
-
- diff --git a/config/common_base b/config/common_base
- index 6b96e0e80..0f7d22f22 100644
- --- a/config/common_base
- +++ b/config/common_base
- @@ -492,7 +492,7 @@ CONFIG_RTE_LIBRTE_PMD_NULL=y
- #
- # Compile software PMD backed by PCAP files
- #
- -CONFIG_RTE_LIBRTE_PMD_PCAP=n
- +CONFIG_RTE_LIBRTE_PMD_PCAP=y
-
-4. Bind virtio with vfio-pci driver,then run testpmd, set rxonly mode for virtio-pmd on VM1::
+3. Bind virtio with vfio-pci driver,then run testpmd, set rxonly mode for virtio-pmd on VM1::
 
 ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 --file-prefix=test -- -i --txd=1024 --rxd=1024 --max-pkt-len=9600 --rx-offloads=0x00002000
 testpmd>set fwd rxonly
 testpmd>start
 
-5. Bootup pdump in VM1::
+4. Bootup pdump in VM1::
 
 ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=test -- --pdump 'port=0,queue=*,rx-dev=/root/pdump-rx.pcap,mbuf-size=8000'
 
-6. On VM2, bind virtio with vfio-pci driver,then run testpmd, config tx_packets to 8k length with chain mode::
+5. On VM2, bind virtio with vfio-pci driver,then run testpmd, config tx_packets to 8k length with chain mode::
 
 ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 -- -i --txd=1024 --rxd=1024 --max-pkt-len=9600 --rx-offloads=0x00002000
 testpmd>set fwd mac
 testpmd>set txpkts 2000,2000,2000,2000
 
-7. Send ten packets with 8k length from virtio-pmd on VM2::
+6. Send ten packets with 8k length from virtio-pmd on VM2::
 
 testpmd>set burst 1
 testpmd>start tx_first 10
 
-8. Check payload is correct in each dumped packets.
+7. Check payload is correct in each dumped packets.
 
-9. Relaunch testpmd in VM1::
+8. Relaunch testpmd in VM1::
 
 ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 --file-prefix=test -- -i --txd=1024 --rxd=1024
 testpmd>set fwd rxonly
 testpmd>start
 
-10. Bootup pdump in VM1::
+9. Bootup pdump in VM1::
 
 ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=test -- --pdump 'port=0,queue=*,rx-dev=/root/pdump-rx-small.pcap,mbuf-size=8000'
 
-11. Relaunch testpmd on VM2, send ten 64B packets from virtio-pmd on VM2::
+10. Relaunch testpmd on VM2, send ten 64B packets from virtio-pmd on VM2::
 
 ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 -- -i --txd=1024 --rxd=1024
 testpmd>set fwd mac
 testpmd>set burst 1
 testpmd>start tx_first 10
 
-12. Check payload is correct in each dumped packets.
+11. Check payload is correct in each dumped packets.
 
 Test Case 6: VM2VM vhost-user/virtio1.0-pmd mergeable path with payload valid check
 ===================================================================================
@@ -384,60 +371,47 @@ Test Case 6: VM2VM vhost-user/virtio1.0-pmd mergeable path with payload valid ch
 -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
 -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on -vnc :12
 
-3. On VM1, enable pcap lib in dpdk code and recompile::
-
- diff --git a/config/common_base b/config/common_base
- index 6b96e0e80..0f7d22f22 100644
- --- a/config/common_base
- +++ b/config/common_base
- @@ -492,7 +492,7 @@ CONFIG_RTE_LIBRTE_PMD_NULL=y
- #
- # Compile software PMD backed by PCAP files
- #
- -CONFIG_RTE_LIBRTE_PMD_PCAP=n
- +CONFIG_RTE_LIBRTE_PMD_PCAP=y
-
-4. Bind virtio with vfio-pci driver,then run testpmd, set rxonly mode for virtio-pmd on VM1::
+3. Bind virtio with vfio-pci driver,then run testpmd, set rxonly mode for virtio-pmd on VM1::
 
 ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 --file-prefix=test -- -i --txd=1024 --rxd=1024 --max-pkt-len=9600 --rx-offloads=0x00002000
 testpmd>set fwd rxonly
 testpmd>start
 
-5. Bootup pdump in VM1::
+4. Bootup pdump in VM1::
 
 ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=test -- --pdump 'port=0,queue=*,rx-dev=/root/pdump-rx.pcap,mbuf-size=8000'
 
-6. On VM2, bind virtio with vfio-pci driver,then run testpmd, config tx_packets to 8k length with chain mode::
+5. On VM2, bind virtio with vfio-pci driver,then run testpmd, config tx_packets to 8k length with chain mode::
 
 ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 -- -i --txd=1024 --rxd=1024 --max-pkt-len=9600 --rx-offloads=0x00002000
 testpmd>set fwd mac
 testpmd>set txpkts 2000,2000,2000,2000
 
-7. Send ten packets from virtio-pmd on VM2::
+6. Send ten packets from virtio-pmd on VM2::
 
 testpmd>set burst 1
 testpmd>start tx_first 10
 
-8. Check payload is correct in each dumped packets.
+7. Check payload is correct in each dumped packets.
 
-9. Relaunch testpmd in VM1::
+8. Relaunch testpmd in VM1::
 
 ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 --file-prefix=test -- -i --txd=1024 --rxd=1024
 testpmd>set fwd rxonly
 testpmd>start
 
-10. Bootup pdump in VM1::
+9. Bootup pdump in VM1::
 
 ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=test -- --pdump 'port=0,queue=*,rx-dev=/root/pdump-rx-small.pcap'
 
-11. Relaunch testpmd On VM2, send ten 64B packets from virtio-pmd on VM2::
+10. Relaunch testpmd On VM2, send ten 64B packets from virtio-pmd on VM2::
 
 ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 -- -i --txd=1024 --rxd=1024 --max-pkt-len=9600 --rx-offloads=0x00002000
 testpmd>set fwd mac
 testpmd>set burst 1
 testpmd>start tx_first 10
 
-12. Check payload is correct in each dumped packets.
+11. Check payload is correct in each dumped packets.
 
 Test Case 7: VM2VM vhost-user/virtio1.1-pmd mergeable path with payload valid check
 ===================================================================================
@@ -472,60 +446,47 @@ Test Case 7: VM2VM vhost-user/virtio1.1-pmd mergeable path with payload valid ch
 -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
 -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on -vnc :12
 
-3. On VM1, enable pcap lib in dpdk code and recompile::
-
- diff --git a/config/common_base b/config/common_base
- index 6b96e0e80..0f7d22f22 100644
- --- a/config/common_base
- +++ b/config/common_base
- @@ -492,7 +492,7 @@ CONFIG_RTE_LIBRTE_PMD_NULL=y
- #
- # Compile software PMD backed by PCAP files
- #
- -CONFIG_RTE_LIBRTE_PMD_PCAP=n
- +CONFIG_RTE_LIBRTE_PMD_PCAP=y
-
-4. Bind virtio with vfio-pci driver,then run testpmd, set rxonly mode for virtio-pmd on VM1::
+3. Bind virtio with vfio-pci driver,then run testpmd, set rxonly mode for virtio-pmd on VM1::
 
 ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 --file-prefix=test -- -i --txd=1024 --rxd=1024 --max-pkt-len=9600 --rx-offloads=0x00002000
 testpmd>set fwd rxonly
 testpmd>start
 
-5. Bootup pdump in VM1::
+4. Bootup pdump in VM1::
 
 ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=test -- --pdump 'port=0,queue=*,rx-dev=/root/pdump-rx.pcap,mbuf-size=8000'
 
-6. On VM2, bind virtio with vfio-pci driver,then run testpmd, config tx_packets to 8k length with chain mode::
+5. On VM2, bind virtio with vfio-pci driver,then run testpmd, config tx_packets to 8k length with chain mode::
 
 ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 -- -i --txd=1024 --rxd=1024 --max-pkt-len=9600 --rx-offloads=0x00002000
 testpmd>set fwd mac
 testpmd>set txpkts 2000,2000,2000,2000
 
-7. Send ten packets from virtio-pmd on VM2::
+6. Send ten packets from virtio-pmd on VM2::
 
 testpmd>set burst 1
 testpmd>start tx_first 10
 
-8. Check payload is correct in each dumped packets.
+7. Check payload is correct in each dumped packets.
 
-9. Relaunch testpmd in VM1::
+8. Relaunch testpmd in VM1::
 
 ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 --file-prefix=test -- -i --txd=1024 --rxd=1024
 testpmd>set fwd rxonly
 testpmd>start
 
-10. Bootup pdump in VM1::
+9. Bootup pdump in VM1::
 
 ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=test -- --pdump 'port=0,queue=*,rx-dev=/root/pdump-rx-small.pcap'
 
-11. Relaunch testpmd On VM2, send ten 64B packets from virtio-pmd on VM2::
+10. Relaunch testpmd On VM2, send ten 64B packets from virtio-pmd on VM2::
 
 ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 -- -i --txd=1024 --rxd=1024 --max-pkt-len=9600 --rx-offloads=0x00002000
 testpmd>set fwd mac
 testpmd>set burst 1
 testpmd>start tx_first 10
 
-12. Check payload is correct in each dumped packets.
+11. Check payload is correct in each dumped packets.
 
 Test Case 8: VM2VM vhost-user/virtio1.1-pmd with normal path
 ============================================================
diff --git a/test_plans/vmdq_dcb_test_plan.rst b/test_plans/vmdq_dcb_test_plan.rst
index a4beaa93..1d20a9bc 100644
--- a/test_plans/vmdq_dcb_test_plan.rst
+++ b/test_plans/vmdq_dcb_test_plan.rst
@@ -91,7 +91,7 @@ Expected Result:
 Test Case 2: Verify VMDQ & DCB with 16 Pools and 8 TCs
 ======================================================
 
-1. change CONFIG_RTE_LIBRTE_I40E_QUEUE_NUM_PER_VM to 8 in "./config/common_linuxapp", rebuild DPDK.
+1. change RTE_LIBRTE_I40E_QUEUE_NUM_PER_VM to 8 in "./config/rte_config.h", rebuild DPDK.
 meson: change "#define RTE_LIBRTE_I40E_QUEUE_NUM_PER_VM 4" to 8 in config/rte_config.h, rebuild DPDK.
 
 2. Repeat Test Case 1, with `--nb-pools 16` and `--nb-tcs 8` of the sample application::
@@ -115,4 +115,4 @@ Expected result:
 - Every RX queue should have received approximately (+/-15%) the same number of incoming packets
 - verify queue id should be in [vlan user priority value * 2, vlan user priority value * 2 + 1]
 
-(NOTE: SIGHUP output will obviously change to show 8 columns per row, with only 16 rows)
\ No newline at end of file
+(NOTE: SIGHUP output will obviously change to show 8 columns per row, with only 16 rows)
diff --git a/test_plans/vxlan_test_plan.rst b/test_plans/vxlan_test_plan.rst
index f7bdeca3..12e35bee 100644
--- a/test_plans/vxlan_test_plan.rst
+++ b/test_plans/vxlan_test_plan.rst
@@ -53,9 +53,6 @@ plugged into the available PCIe Gen3 8-lane slot.
 
 DUT board must be two sockets system and each cpu have more than 8 lcores.
 
-For fortville NICs need change the value of CONFIG_RTE_LIBRTE_I40E_INC_VECTOR
-in dpdk/config/common_base file to n.
-
 Test Case: Vxlan ipv4 packet detect
 ===================================
 Start testpmd with tunneling packet type to vxlan::