[V1] test_plans: fix build warning and errors

Message ID 20201221095344.7564-1-haiyangx.zhao@intel.com (mailing list archive)
State Accepted
Series: [V1] test_plans: fix build warning and errors

Commit Message

Zhao, HaiyangX Dec. 21, 2020, 9:53 a.m. UTC
  Fix test plan build warnings and errors for the DTS 20.11 formal release.

Signed-off-by: Haiyang Zhao <haiyangx.zhao@intel.com>
---
 test_plans/cvl_dcf_acl_filter_test_plan.rst   |  13 +-
 .../cvl_dcf_switch_filter_test_plan.rst       |  20 +-
 test_plans/dcf_lifecycle_test_plan.rst        |  10 +-
 test_plans/dpdk_gro_lib_test_plan.rst         |  59 +++---
 test_plans/iavf_fdir_test_plan.rst            |   8 +
 test_plans/index.rst                          |  15 +-
 test_plans/large_vf_test_plan.rst             |   4 +-
 test_plans/pipeline_test_plan.rst             |   4 +-
 test_plans/stats_checks_test_plan.rst         | 198 +++++++++---------
 test_plans/vhost_cbdma_test_plan.rst          |  10 +-
 test_plans/virtio_smoke_test_plan.rst         |   2 +-
 .../vm2vm_virtio_net_perf_test_plan.rst       |  26 +--
 12 files changed, 194 insertions(+), 175 deletions(-)
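
Most hunks in this patch fall into a few recurring docutils fixes, the most common being section underlines that are shorter than their titles, which make the Sphinx build emit ``WARNING: Title underline too short.`` As a minimal illustration (a generic sketch, not copied from any one file below), the fix is simply to pad the underline to at least the title's length:

```rst
.. broken: underline shorter than the title triggers the build warning

Test case: Drop action test
======================

.. fixed: underline is at least as long as the title text

Test case: Drop action test
===========================
```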
  

Comments

Tu, Lijuan Dec. 22, 2020, 2:56 a.m. UTC | #1
> Fix test plan build warnings and errors for the DTS 20.11 formal release.
> 
> Signed-off-by: Haiyang Zhao <haiyangx.zhao@intel.com>

Applied
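
The other recurring class of fixes in the patch below concerns literal blocks attached to numbered steps: the ``::`` marker must be followed by a blank line, and the block's body must be indented past the start of the list item's text, otherwise docutils reports ``ERROR: Unexpected indentation.`` or ``Literal block expected; none found.`` A hedged sketch of the corrected shape (modeled on the first hunk, not an exact excerpt):

```rst
10. Launch dpdk on VF1::

     ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xf0 -n 4 -w 86:01.1 --file-prefix=vf1 -- -i
     testpmd> start

.. note the blank line after "::" and the body indented beyond the "10. " marker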
  

Patch

diff --git a/test_plans/cvl_dcf_acl_filter_test_plan.rst b/test_plans/cvl_dcf_acl_filter_test_plan.rst
index 086f2722..e9267a59 100644
--- a/test_plans/cvl_dcf_acl_filter_test_plan.rst
+++ b/test_plans/cvl_dcf_acl_filter_test_plan.rst
@@ -108,11 +108,12 @@  Prerequisites
 
 10. Launch dpdk on VF1::
 
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xf0 -n 4 -w 86:01.1 --file-prefix=vf1 -- -i
-    testpmd> set fwd rxonly
-    testpmd> set verbose 1
-    testpmd> start
-    testpmd> show port info all
+     ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xf0 -n 4 -w 86:01.1 --file-prefix=vf1 -- -i
+     testpmd> set fwd rxonly
+     testpmd> set verbose 1
+     testpmd> start
+     testpmd> show port info all
+
 
    check the VF1 driver is net_iavf.
    the mac address is 00:01:23:45:67:89
@@ -806,7 +807,7 @@  Test Case 9: multirules with different pattern or input set
     flow create 0 ingress pattern eth / ipv4 src spec 192.168.2.3 src mask 255.255.0.255 / udp / end actions drop / end
 
 10. send same packets, check packet 1 is dropped by rule 0, packet 2 is dropped by rule 1.
-   packet 3 is dropped by rule 2, packet 4 is dropped by rule 3, packet 5 is dropped by rule4.
+    packet 3 is dropped by rule 2, packet 4 is dropped by rule 3, packet 5 is dropped by rule4.
 
 Test Case 10: multirules with all patterns
 ==========================================
diff --git a/test_plans/cvl_dcf_switch_filter_test_plan.rst b/test_plans/cvl_dcf_switch_filter_test_plan.rst
index c90f439e..3b63c041 100644
--- a/test_plans/cvl_dcf_switch_filter_test_plan.rst
+++ b/test_plans/cvl_dcf_switch_filter_test_plan.rst
@@ -4558,10 +4558,10 @@  Subcase 1: DCF stop/DCF start
 5. send matched packets, port 1 can still receive the packets.
 
 Test case: Drop action test
-======================
+===========================
 
 Subcase 1: DCF DROP IPV4 SRC PACKAGES
------------------------------
+-------------------------------------
 
 1. validate a rule::
 
@@ -4605,7 +4605,7 @@  Subcase 1: DCF DROP IPV4 SRC PACKAGES
    send matched packets, check port can receive the packet.
 
 Subcase 2: DCF DROP IPV4 SRC SPEC MASK PACKAGES
------------------------------
+-----------------------------------------------
 
 1. validate a rule::
 
@@ -4649,7 +4649,7 @@  Subcase 2: DCF DROP IPV4 SRC SPEC MASK PACKAGES
    send matched packets, check port can receive the packet.
  
 Subcase 3: DCF DROP NVGRE PACKAGES
------------------------------
+----------------------------------
 
 1. validate a rule::
 
@@ -4693,7 +4693,7 @@  Subcase 3: DCF DROP NVGRE PACKAGES
    send matched packets, check port can receive the packet.
 
 Subcase 4: DCF DROP PPOES PACKAGES
------------------------------
+----------------------------------
 
 1. validate a rule::
 
@@ -4737,7 +4737,7 @@  Subcase 4: DCF DROP PPOES PACKAGES
    send matched packets, check port can receive the packet.
  
 Subcase 5:  DCF DROP PFCP PACKAGES
------------------------------
+----------------------------------
 
 1. validate a rule::
 
@@ -4781,7 +4781,7 @@  Subcase 5:  DCF DROP PFCP PACKAGES
    send matched packets, check port can receive the packet.
 
 Subcase 6:  DCF DROP VLAN PACKAGES
------------------------------
+----------------------------------
 
 1. validate a rule::
 
@@ -4825,7 +4825,7 @@  Subcase 6:  DCF DROP VLAN PACKAGES
    send matched packets, check port can receive the packet.
 
 Subcase 7:  DCF DROP L2TP PACKAGES
------------------------------
+----------------------------------
 
 1. validate a rule::
 
@@ -4869,7 +4869,7 @@  Subcase 7:  DCF DROP L2TP PACKAGES
    send matched packets, check port can receive the packet.
 
 Subcase 8:  DCF DROP ESP PACKAGES
------------------------------
+---------------------------------
 
 1. validate a rule::
 
@@ -4913,7 +4913,7 @@  Subcase 8:  DCF DROP ESP PACKAGES
    send matched packets, check port can receive the packet.
 
 Subcase 8:  DCF DROP blend PACKAGES
------------------------------
+-----------------------------------
 
 1. validate a rule::
 
diff --git a/test_plans/dcf_lifecycle_test_plan.rst b/test_plans/dcf_lifecycle_test_plan.rst
index c3e039f6..1e5fcecd 100644
--- a/test_plans/dcf_lifecycle_test_plan.rst
+++ b/test_plans/dcf_lifecycle_test_plan.rst
@@ -812,7 +812,7 @@  If kill DCF process, when DCF launched. The DCF rules should be removed.
 
     sendp([Ether(src="00:11:22:33:44:55", dst="5E:8E:8B:4D:89:05")/IP()/TCP(sport=8012)/Raw(load='X'*30)], iface="testeri0")
 
-   check the packet is dropped by VF1::
+   check the packet is dropped by VF1.
 
 3. kill DCF process ::
 
@@ -843,7 +843,7 @@  Kill DCF process, then fail to launch avf on the previous DCF VF.
 
     sendp([Ether(src="00:11:22:33:44:55", dst="5E:8E:8B:4D:89:05")/IP()/TCP(sport=8012)/Raw(load='X'*30)], iface="testeri0")
 
-   check the packet is dropped by VF1::
+   check the packet is dropped by VF1.
 
 3. kill DCF process ::
 
@@ -878,7 +878,7 @@  TC28: DCF graceful exit
 
     sendp([Ether(src="00:11:22:33:44:55", dst="5E:8E:8B:4D:89:05")/IP()/TCP(sport=8012)/Raw(load='X'*30)], iface="testeri0")
 
-   check the packet is dropped by VF1::
+   check the packet is dropped by VF1.
 
 3. Exit the DCF in DCF testpmd ::
 
@@ -899,7 +899,7 @@  TC29: DCF enabled, AVF VF reset
 
     sendp([Ether(src="00:11:22:33:44:55", dst="5E:8E:8B:4D:89:05")/IP()/TCP(sport=8012)/Raw(load='X'*30)], iface="testeri0")
 
-   check the packet is dropped by VF1::
+   check the packet is dropped by VF1.
 
 3. reset VF1 in testpmd::
 
@@ -940,7 +940,7 @@  TC30: DCF enabled, DCF VF reset
 
     sendp([Ether(src="00:11:22:33:44:55", dst="5E:8E:8B:4D:89:05")/IP()/TCP(sport=8012)/Raw(load='X'*30)], iface="testeri0")
 
-   check the packet is dropped by VF1::
+   check the packet is dropped by VF1.
 
 3. reset VF0 in testpmd::
 
diff --git a/test_plans/dpdk_gro_lib_test_plan.rst b/test_plans/dpdk_gro_lib_test_plan.rst
index fac61aa8..bdbcdf62 100644
--- a/test_plans/dpdk_gro_lib_test_plan.rst
+++ b/test_plans/dpdk_gro_lib_test_plan.rst
@@ -89,27 +89,27 @@  Modify the testpmd code as following::
 
 Modify the dpdk code as following::
 
-diff --git a/drivers/net/vhost/rte_eth_vhost.c b/drivers/net/vhost/rte_eth_vhost.c
-index b38a4b6b1..573250dbe 100644
---- a/drivers/net/vhost/rte_eth_vhost.c
-+++ b/drivers/net/vhost/rte_eth_vhost.c
-@@ -1071,8 +1071,14 @@ eth_dev_info(struct rte_eth_dev *dev,
-  dev_info->min_rx_bufsize = 0;
- 
-  dev_info->tx_offload_capa = DEV_TX_OFFLOAD_MULTI_SEGS |
--       DEV_TX_OFFLOAD_VLAN_INSERT;
-- dev_info->rx_offload_capa = DEV_RX_OFFLOAD_VLAN_STRIP;
-+       DEV_TX_OFFLOAD_VLAN_INSERT |
-+       DEV_TX_OFFLOAD_UDP_CKSUM |
-+       DEV_TX_OFFLOAD_TCP_CKSUM |
-+       DEV_TX_OFFLOAD_IPV4_CKSUM |
-+       DEV_TX_OFFLOAD_TCP_TSO;
-+ dev_info->rx_offload_capa = DEV_RX_OFFLOAD_VLAN_STRIP |
-+       DEV_RX_OFFLOAD_TCP_CKSUM |
-+       DEV_RX_OFFLOAD_UDP_CKSUM |
-+       DEV_RX_OFFLOAD_IPV4_CKSUM |
-+       DEV_RX_OFFLOAD_TCP_LRO;
- }
+   diff --git a/drivers/net/vhost/rte_eth_vhost.c b/drivers/net/vhost/rte_eth_vhost.c
+   index b38a4b6b1..573250dbe 100644
+   --- a/drivers/net/vhost/rte_eth_vhost.c
+   +++ b/drivers/net/vhost/rte_eth_vhost.c
+   @@ -1071,8 +1071,14 @@ eth_dev_info(struct rte_eth_dev *dev,
+     dev_info->min_rx_bufsize = 0;
+
+     dev_info->tx_offload_capa = DEV_TX_OFFLOAD_MULTI_SEGS |
+   -       DEV_TX_OFFLOAD_VLAN_INSERT;
+   - dev_info->rx_offload_capa = DEV_RX_OFFLOAD_VLAN_STRIP;
+   +       DEV_TX_OFFLOAD_VLAN_INSERT |
+   +       DEV_TX_OFFLOAD_UDP_CKSUM |
+   +       DEV_TX_OFFLOAD_TCP_CKSUM |
+   +       DEV_TX_OFFLOAD_IPV4_CKSUM |
+   +       DEV_TX_OFFLOAD_TCP_TSO;
+   + dev_info->rx_offload_capa = DEV_RX_OFFLOAD_VLAN_STRIP |
+   +       DEV_RX_OFFLOAD_TCP_CKSUM |
+   +       DEV_RX_OFFLOAD_UDP_CKSUM |
+   +       DEV_RX_OFFLOAD_IPV4_CKSUM |
+   +       DEV_RX_OFFLOAD_TCP_LRO;
+    }
 
 Test flow
 =========
@@ -148,7 +148,7 @@  Test Case1: DPDK GRO lightmode test with tcp/ipv4 traffic
 
 3.  Set up vm with virto device and using kernel virtio-net driver::
 
-    taskset -c 13 qemu-system-x86_64 -name us-vhost-vm1 \
+     taskset -c 13 qemu-system-x86_64 -name us-vhost-vm1 \
        -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on \
        -numa node,memdev=mem \
        -mem-prealloc -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6001-:22 \
@@ -200,7 +200,7 @@  Test Case2: DPDK GRO heavymode test with tcp/ipv4 traffic
 
 3.  Set up vm with virto device and using kernel virtio-net driver::
 
-    taskset -c 13 qemu-system-x86_64 -name us-vhost-vm1 \
+     taskset -c 13 qemu-system-x86_64 -name us-vhost-vm1 \
        -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on \
        -numa node,memdev=mem \
        -mem-prealloc -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6001-:22 \
@@ -252,7 +252,7 @@  Test Case3: DPDK GRO heavymode_flush4 test with tcp/ipv4 traffic
 
 3.  Set up vm with virto device and using kernel virtio-net driver::
 
-    taskset -c 13 qemu-system-x86_64 -name us-vhost-vm1 \
+     taskset -c 13 qemu-system-x86_64 -name us-vhost-vm1 \
        -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on \
        -numa node,memdev=mem \
        -mem-prealloc -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6001-:22 \
@@ -278,10 +278,11 @@  Test Case4: DPDK GRO test with vxlan traffic
 Vxlan topology
 --------------
   VM          Host
+
 50.1.1.2      50.1.1.1
-   |           |
+   \|           \|
 1.1.2.3       1.1.2.4
-   |------------Testpmd------------|
+   \|------------Testpmd------------|
 
 1. Connect two nic port directly, put nic2 into another namesapce and create Host VxLAN port::
 
@@ -322,7 +323,7 @@  Vxlan topology
 
 3.  Set up vm with virto device and using kernel virtio-net driver::
 
-    taskset -c 13 qemu-system-x86_64 -name us-vhost-vm1 \
+     taskset -c 13 qemu-system-x86_64 -name us-vhost-vm1 \
        -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on \
        -numa node,memdev=mem \
        -mem-prealloc -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6001-:22 \
@@ -383,7 +384,7 @@  NIC2(In kernel) -> NIC1(DPDK) -> testpmd(csum fwd) -> Vhost -> Virtio-net
 
 3.  Set up vm with virto device and using kernel virtio-net driver::
 
-    taskset -c 31 /home/qemu-install/qemu-4.2.1/bin/qemu-system-x86_64 -name us-vhost-vm1 \
+     taskset -c 31 /home/qemu-install/qemu-4.2.1/bin/qemu-system-x86_64 -name us-vhost-vm1 \
        -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on \
        -numa node,memdev=mem \
        -mem-prealloc -monitor unix:/tmp/vm2_monitor.sock,server,nowait -netdev user,id=yinan,hostfwd=tcp:127.0.0.1:6005-:22 -device e1000,netdev=yinan \
@@ -441,7 +442,7 @@  NIC2(In kernel) -> NIC1(DPDK) -> testpmd(csum fwd) -> Vhost -> Virtio-net
 
 3.  Set up vm with virto device and using kernel virtio-net driver::
 
-    taskset -c 31 /home/qemu-install/qemu-4.2.1/bin/qemu-system-x86_64 -name us-vhost-vm1 \
+     taskset -c 31 /home/qemu-install/qemu-4.2.1/bin/qemu-system-x86_64 -name us-vhost-vm1 \
        -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on \
        -numa node,memdev=mem \
        -mem-prealloc -monitor unix:/tmp/vm2_monitor.sock,server,nowait -netdev user,id=yinan,hostfwd=tcp:127.0.0.1:6005-:22 -device e1000,netdev=yinan \
diff --git a/test_plans/iavf_fdir_test_plan.rst b/test_plans/iavf_fdir_test_plan.rst
index e0c9d603..6de28d2c 100644
--- a/test_plans/iavf_fdir_test_plan.rst
+++ b/test_plans/iavf_fdir_test_plan.rst
@@ -2431,6 +2431,7 @@  Subcase 1: Layer3 co-exist GTP EH fdir + dst
 --------------------------------------------
 
 Rules::
+
     #1  flow create 0 ingress pattern eth / ipv4 dst is 192.168.0.31 / udp / gtpu / gtp_psc / end actions rss queues 1 2 end / mark id 1 / end
     #2  flow create 0 ingress pattern eth / ipv6 dst is ::32 / udp / gtpu / gtp_psc / end actions rss queues 3 4 5 6 end / mark id 2 / end
     #3  flow create 0 ingress pattern eth / ipv4 dst is 192.168.0.33 / udp / gtpu / gtp_psc / end actions queue index 7 / mark id 3 / end
@@ -2441,6 +2442,7 @@  Rules::
     #8  flow create 0 ingress pattern eth / ipv6 dst is ::38 / udp / gtpu / gtp_psc / end actions drop / end
 
 Matched packets::
+
     p_gtpu1 = Ether(src="a4:bf:01:51:27:ca", dst="00:11:22:33:44:55")/IP(src="192.168.0.21", dst="192.168.0.31")/UDP(dport=2152)/GTP_U_Header(gtp_type=255, teid=0x12)/GTP_PDUSession_ExtensionHeader(pdu_type=0, qos_flow=0x33)/IPv6()/UDP()/Raw('x'*20)
     p_gtpu2 = Ether(src="a4:bf:01:51:27:ca", dst="00:11:22:33:44:55")/IPv6(src="::12", dst="::32")/UDP(dport=2152)/GTP_U_Header(gtp_type=255, teid=0x12)/GTP_PDUSession_ExtensionHeader(pdu_type=1, qos_flow=0x33)/IPv6()/TCP()/Raw('x'*20)
     p_gtpu3 = Ether(src="a4:bf:01:51:27:ca", dst="00:11:22:33:44:55")/IP(src="192.168.0.23", dst="192.168.0.33")/UDP(dport=2152)/GTP_U_Header(gtp_type=255, teid=0x12)/GTP_PDUSession_ExtensionHeader(pdu_type=1, qos_flow=0x33)/IPv6()/Raw('x'*20)
@@ -2451,6 +2453,7 @@  Matched packets::
     p_gtpu8 = Ether(src="a4:bf:01:51:27:ca", dst="00:11:22:33:44:55")/IPv6(src="2001::8", dst="::38")/UDP(dport=2152)/GTP_U_Header(gtp_type=255, teid=0x12)/GTP_PDUSession_ExtensionHeader(pdu_type=0, qos_flow=0x33)/IPv6()/IPv6ExtHdrFragment()/Raw('x'*20)
 
 Mismatched packets::
+
     p_gtpu1 = Ether(src="a4:bf:01:51:27:ca", dst="00:11:22:33:44:55")/IP(src="192.168.0.21", dst="192.168.0.32")/UDP(dport=2152)/GTP_U_Header(gtp_type=255, teid=0x12)/GTP_PDUSession_ExtensionHeader(pdu_type=0, qos_flow=0x33)/IPv6()/UDP()/Raw('x'*20)
     p_gtpu2 = Ether(src="a4:bf:01:51:27:ca", dst="00:11:22:33:44:55")/IPv6(src="::12", dst="::33")/UDP(dport=2152)/GTP_U_Header(gtp_type=255, teid=0x12)/GTP_PDUSession_ExtensionHeader(pdu_type=1, qos_flow=0x33)/IPv6()/TCP()/Raw('x'*20)
     p_gtpu3 = Ether(src="a4:bf:01:51:27:ca", dst="00:11:22:33:44:55")/IP(src="192.168.0.23", dst="192.168.0.34")/UDP(dport=2152)/GTP_U_Header(gtp_type=255, teid=0x12)/GTP_PDUSession_ExtensionHeader(pdu_type=1, qos_flow=0x33)/IPv6()/Raw('x'*20)
@@ -2506,6 +2509,7 @@  Subcase 2: Layer3 co-exist GTP fdir + dst
 -----------------------------------------
 
 Rules::
+
     #1  flow create 0 ingress pattern eth / ipv4 dst is 192.168.0.31 / udp / gtpu / end actions rss queues 1 2 end / mark id 1 / end
     #2	flow create 0 ingress pattern eth / ipv6 dst is ::32 / udp / gtpu / end actions rss queues 3 4 5 6 end / mark id 2 / end
     #3	flow create 0 ingress pattern eth / ipv4 dst is 192.168.0.33 / udp / gtpu / end actions queue index 7 / mark id 3 / end
@@ -2516,6 +2520,7 @@  Rules::
     #8	flow create 0 ingress pattern eth / ipv6 dst is ::38 / udp / gtpu / end actions drop / end
 
 Matched packets::
+
     p_gtpu1 = Ether(src="a4:bf:01:51:27:ca", dst="00:11:22:33:44:55")/IP(src="192.168.0.21", dst="192.168.0.31")/UDP(dport=2152)/GTP_U_Header(gtp_type=255, teid=0x12)/IPv6()/UDP()/Raw('x'*20)
     p_gtpu2 = Ether(src="a4:bf:01:51:27:ca", dst="00:11:22:33:44:55")/IPv6(src="::12", dst="::32")/UDP(dport=2152)/GTP_U_Header(gtp_type=255, teid=0x12)/IPv6()/TCP()/Raw('x'*20)
     p_gtpu3 = Ether(src="a4:bf:01:51:27:ca", dst="00:11:22:33:44:55")/IP(src="192.168.0.23", dst="192.168.0.33")/UDP(dport=2152)/GTP_U_Header(gtp_type=255, teid=0x12)/IPv6()/Raw('x'*20)
@@ -2526,6 +2531,7 @@  Matched packets::
     p_gtpu8 = Ether(src="a4:bf:01:51:27:ca", dst="00:11:22:33:44:55")/IPv6(src="2001::8", dst="::38")/UDP(dport=2152)/GTP_U_Header(gtp_type=255, teid=0x12)/IPv6()/IPv6ExtHdrFragment()/Raw('x'*20)
 
 Mismatched packets::
+
     p_gtpu1 = Ether(src="a4:bf:01:51:27:ca", dst="00:11:22:33:44:55")/IP(src="192.168.0.21", dst="192.168.0.32")/UDP(dport=2152)/GTP_U_Header(gtp_type=255, teid=0x12)/IPv6()/UDP()/Raw('x'*20)
     p_gtpu2 = Ether(src="a4:bf:01:51:27:ca", dst="00:11:22:33:44:55")/IPv6(src="::12", dst="::33")/UDP(dport=2152)/GTP_U_Header(gtp_type=255, teid=0x12)/IPv6()/TCP()/Raw('x'*20)
     p_gtpu3 = Ether(src="a4:bf:01:51:27:ca", dst="00:11:22:33:44:55")/IP(src="192.168.0.23", dst="192.168.0.34")/UDP(dport=2152)/GTP_U_Header(gtp_type=255, teid=0x12)/IPv6()/Raw('x'*20)
@@ -2541,6 +2547,7 @@  Subcase 3: Layer3 co-exist GTP EH fdir + src
 --------------------------------------------
 
 Rules::
+
     #1  flow create 0 ingress pattern eth / ipv4 src is 192.168.0.21 / udp / gtpu / gtp_psc / end actions rss queues 1 2 end / mark id 1 / end
     #2	flow create 0 ingress pattern eth / ipv6 src is ::12 / udp / gtpu / gtp_psc / end actions rss queues 3 4 5 6 end / mark id 2 / end
     #3	flow create 0 ingress pattern eth / ipv4 src is 192.168.0.23 / udp / gtpu / gtp_psc / end actions queue index 7 / mark id 3 / end
@@ -2587,6 +2594,7 @@  Rules::
     #6	flow create 0 ingress pattern eth / ipv6 src is ::16 / udp / gtpu / end actions passthru / mark id 6 / end
     #7	flow create 0 ingress pattern eth / ipv4 src is 192.168.0.27 / udp / gtpu / end actions drop / end
     #8	flow create 0 ingress pattern eth / ipv6 src is 2001::8 / udp / gtpu / end actions drop / end
+
 Matched packets::
 
     p_gtpu1 = Ether(src="a4:bf:01:51:27:ca", dst="00:11:22:33:44:55")/IP(src="192.168.0.21", dst="192.168.0.31")/UDP(dport=2152)/GTP_U_Header(gtp_type=255, teid=0x12)/IPv6()/UDP()/Raw('x'*20)
diff --git a/test_plans/index.rst b/test_plans/index.rst
index 6a0750d1..1a3f8383 100644
--- a/test_plans/index.rst
+++ b/test_plans/index.rst
@@ -45,14 +45,17 @@  The following are the test plans for the DPDK DTS automated test system.
     cvl_advanced_rss_test_plan
     cvl_advanced_rss_gtpu_test_plan
     cvl_advanced_iavf_rss_test_plan
+    cvl_advanced_iavf_rss_gtpu_test_plan
+    cvl_advanced_iavf_rss_vlan_esp_ah_l2tp_pfcp_test_plan
     cvl_advanced_rss_pppoe_vlan_esp_ah_l2tp_pfcp_test_plan
+    cvl_dcf_acl_filter_test_plan
     cvl_dcf_date_path_test_plan
-    cvl_dcf_dp_test_plan
     cvl_dcf_switch_filter_test_plan
     cvl_fdir_test_plan
-    cvl_iavf_rss_gtpu_test_plan
+    cvl_limit_value_test_test_plan
     cvl_rss_configure_test_plan
     cvl_switch_filter_test_plan
+    cvl_vf_support_multicast_address_test_plan
     cloud_filter_with_l4_port_test_plan
     dcf_lifecycle_test_plan
     crypto_perf_cryptodev_perf_test_plan
@@ -65,6 +68,7 @@  The following are the test plans for the DPDK DTS automated test system.
     dynamic_config_test_plan
     dynamic_flowtype_test_plan
     dynamic_queue_test_plan
+    eeprom_dump_test_plan
     etag_test_plan
     external_memory_test_plan
     external_mempool_handler_test_plan
@@ -98,6 +102,7 @@  The following are the test plans for the DPDK DTS automated test system.
     l3fwd_em_test_plan
     l3fwd_test_plan
     l3fwdacl_test_plan
+    large_vf_test_plan
     link_flowctrl_test_plan
     link_status_interrupt_test_plan
     linux_modules_test_plan
@@ -114,6 +119,7 @@  The following are the test plans for the DPDK DTS automated test system.
     perf_virtio_user_loopback_test_plan
     perf_virtio_user_pvp_test_plan
     perf_vm2vm_virtio_net_perf_test_plan
+    pipeline_test_plan
     pvp_virtio_user_multi_queues_port_restart_test_plan
     pmd_bonded_8023ad_test_plan
     pmd_bonded_test_plan
@@ -136,10 +142,10 @@  The following are the test plans for the DPDK DTS automated test system.
     rss_key_update_test_plan
     rxtx_offload_test_plan
     rteflow_priority_test_plan
+    rte_flow_test_plan
     runtime_vf_queue_number_kernel_test_plan
     runtime_vf_queue_number_maxinum_test_plan
     runtime_vf_queue_number_test_plan
-    rxtx_offload_test_plan
     scatter_test_plan
     short_live_test_plan
     shutdown_api_test_plan
@@ -182,6 +188,7 @@  The following are the test plans for the DPDK DTS automated test system.
     vf_pf_reset_test_plan
     vf_port_start_stop_test_plan
     vf_rss_test_plan
+    vf_single_core_perf_test_plan
     vf_to_vf_nic_bridge_test_plan
     vf_vlan_test_plan
     kernelpf_iavf_test_plan
@@ -211,11 +218,13 @@  The following are the test plans for the DPDK DTS automated test system.
     virtio_event_idx_interrupt_test_plan
     virtio_ipsec_cryptodev_func_test_plan
     virtio_perf_cryptodev_func_test_plan
+    virtio_smoke_test_plan
     vm2vm_virtio_net_perf_test_plan
     vm2vm_virtio_pmd_test_plan
     dpdk_gro_lib_test_plan
     dpdk_gso_lib_test_plan
     vhost_dequeue_zero_copy_test_plan
+    vswitch_sample_cbdma_test_plan
     vxlan_gpe_support_in_i40e_test_plan
     pvp_diff_qemu_version_test_plan
     pvp_share_lib_test_plan
diff --git a/test_plans/large_vf_test_plan.rst b/test_plans/large_vf_test_plan.rst
index 6399ed12..71e66bf9 100644
--- a/test_plans/large_vf_test_plan.rst
+++ b/test_plans/large_vf_test_plan.rst
@@ -45,7 +45,7 @@  Prerequisites
    scapy: http://www.secdev.org/projects/scapy/
 
 3. Copy specific ice package to /lib/firmware/updates/intel/ice/ddp/ice.pkg
-  Then reinstall kernel driver.
+   Then reinstall kernel driver.
 
 4. Generate 3 VFs on each PF and set mac address for VF0::
 
@@ -345,7 +345,7 @@  Fail to start testpmd with "--txq=256 --rxq=256".
 
 
 Test case: 128 Max VFs + 4 queues (default)
-==========================================
+===========================================
 
 Subcase 1: multi fdir among 4 queues for 128 VFs
 ------------------------------------------------
diff --git a/test_plans/pipeline_test_plan.rst b/test_plans/pipeline_test_plan.rst
index c492fd62..2b15ab76 100644
--- a/test_plans/pipeline_test_plan.rst
+++ b/test_plans/pipeline_test_plan.rst
@@ -68,7 +68,7 @@  present in the {DTS_SRC_DIR}/dep directory.
 Directory Structure of Each Test Case
 =====================================
 Within {DTS_SRC_DIR}/dep/pipeline.tar.gz, all files related to a particular test case are maintained
-in a separate directory of which the directory structure is shown below:
+in a separate directory of which the directory structure is shown below::
 
     test_case_name [directory]
         test_case_name.spec
@@ -79,7 +79,7 @@  in a separate directory of which the directory structure is shown below:
             in_x.txt [x: 1-4; depending on test case]
             out_x.txt [x: 1-4; depending on test case]
 
-For an example, files related to mov_001 test case are maintained as shown below:
+For an example, files related to mov_001 test case are maintained as shown below::
 
     mov_001 [directory]
         mov_001.spec
diff --git a/test_plans/stats_checks_test_plan.rst b/test_plans/stats_checks_test_plan.rst
index f314ba88..51ec36d7 100644
--- a/test_plans/stats_checks_test_plan.rst
+++ b/test_plans/stats_checks_test_plan.rst
@@ -149,114 +149,114 @@  Test Case: PF xstatus Checks
 
 5. Check stats and xstats::
 
-  testpmd> stop
-  Telling cores to stop...
-  Waiting for lcores to finish...
+    testpmd> stop
+    Telling cores to stop...
+    Waiting for lcores to finish...
 
-  ------- Forward Stats for RX Port= 0/Queue= 0 -> TX Port= 1/Queue= 0 -------
-  RX-packets: 29             TX-packets: 29             TX-dropped: 0
+    ------- Forward Stats for RX Port= 0/Queue= 0 -> TX Port= 1/Queue= 0 -------
+    RX-packets: 29             TX-packets: 29             TX-dropped: 0
 
-  ------- Forward Stats for RX Port= 0/Queue= 1 -> TX Port= 1/Queue= 1 -------
-  RX-packets: 21             TX-packets: 21             TX-dropped: 0
+    ------- Forward Stats for RX Port= 0/Queue= 1 -> TX Port= 1/Queue= 1 -------
+    RX-packets: 21             TX-packets: 21             TX-dropped: 0
 
-  ------- Forward Stats for RX Port= 0/Queue= 2 -> TX Port= 1/Queue= 2 -------
-  RX-packets: 24             TX-packets: 24             TX-dropped: 0
+    ------- Forward Stats for RX Port= 0/Queue= 2 -> TX Port= 1/Queue= 2 -------
+    RX-packets: 24             TX-packets: 24             TX-dropped: 0
 
-  ------- Forward Stats for RX Port= 0/Queue= 3 -> TX Port= 1/Queue= 3 -------
-  RX-packets: 26             TX-packets: 26             TX-dropped: 0
+    ------- Forward Stats for RX Port= 0/Queue= 3 -> TX Port= 1/Queue= 3 -------
+    RX-packets: 26             TX-packets: 26             TX-dropped: 0
 
-  ---------------------- Forward statistics for port 0  ----------------------
-  RX-packets: 100            RX-dropped: 0             RX-total: 100
-  TX-packets: 0              TX-dropped: 0             TX-total: 0
-  ----------------------------------------------------------------------------
+    ---------------------- Forward statistics for port 0  ----------------------
+    RX-packets: 100            RX-dropped: 0             RX-total: 100
+    TX-packets: 0              TX-dropped: 0             TX-total: 0
+    ----------------------------------------------------------------------------
 
-  ---------------------- Forward statistics for port 1  ----------------------
-  RX-packets: 0              RX-dropped: 0             RX-total: 0
-  TX-packets: 100            TX-dropped: 0             TX-total: 100
-  ----------------------------------------------------------------------------
+    ---------------------- Forward statistics for port 1  ----------------------
+    RX-packets: 0              RX-dropped: 0             RX-total: 0
+    TX-packets: 100            TX-dropped: 0             TX-total: 100
+    ----------------------------------------------------------------------------
 
-  +++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
-  RX-packets: 100            RX-dropped: 0             RX-total: 100
-  TX-packets: 100            TX-dropped: 0             TX-total: 100
-  ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
+    +++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
+    RX-packets: 100            RX-dropped: 0             RX-total: 100
+    TX-packets: 100            TX-dropped: 0             TX-total: 100
+    ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 
-  testpmd> show port stats all
+    testpmd> show port stats all
+
+    ######################## NIC statistics for port 0  ########################
+    RX-packets: 100        RX-missed: 0          RX-bytes:  6000
+    RX-errors: 0
+    RX-nombuf:  0
+    TX-packets: 0          TX-errors: 0          TX-bytes:  0
+
+    Throughput (since last show)
+    Rx-pps:            0          Rx-bps:            0
+    Tx-pps:            0          Tx-bps:            0
+    ############################################################################
+
+    ######################## NIC statistics for port 1  ########################
+    RX-packets: 0          RX-missed: 0          RX-bytes:  0
+    RX-errors: 0
+    RX-nombuf:  0
+    TX-packets: 100        TX-errors: 0          TX-bytes:  6000
+
+    Throughput (since last show)
+    Rx-pps:            0          Rx-bps:            0
+    Tx-pps:            0          Tx-bps:            0
+    ############################################################################
 
-  ######################## NIC statistics for port 0  ########################
-  RX-packets: 100        RX-missed: 0          RX-bytes:  6000
-  RX-errors: 0
-  RX-nombuf:  0
-  TX-packets: 0          TX-errors: 0          TX-bytes:  0
-
-  Throughput (since last show)
-  Rx-pps:            0          Rx-bps:            0
-  Tx-pps:            0          Tx-bps:            0
-  ############################################################################
-
-  ######################## NIC statistics for port 1  ########################
-  RX-packets: 0          RX-missed: 0          RX-bytes:  0
-  RX-errors: 0
-  RX-nombuf:  0
-  TX-packets: 100        TX-errors: 0          TX-bytes:  6000
-
-  Throughput (since last show)
-  Rx-pps:            0          Rx-bps:            0
-  Tx-pps:            0          Tx-bps:            0
-  ############################################################################
-
-  testpmd> show port xstats all
-  ###### NIC extended statistics for port 0
-  rx_good_packets: 100
-  tx_good_packets: 0
-  rx_good_bytes: 6000
-  tx_good_bytes: 0
-  ......
-  rx_q0_packets: 0
-  rx_q0_bytes: 0
-  rx_q0_errors: 0
-  rx_q1_packets: 0
-  rx_q1_bytes: 0
-  rx_q1_errors: 0
-  rx_q2_packets: 0
-  rx_q2_bytes: 0
-  rx_q2_errors: 0
-  rx_q3_packets: 0
-  rx_q3_bytes: 0
-  rx_q3_errors: 0
-  tx_q0_packets: 0
-  tx_q0_bytes: 0
-  tx_q1_packets: 0
-  tx_q1_bytes: 0
-  tx_q2_packets: 0
-  tx_q2_bytes: 0
-  tx_q3_packets: 0
-  tx_q3_bytes: 0
-  ......
-  ###### NIC extended statistics for port 1
-  rx_good_packets: 0
-  tx_good_packets: 100
-  rx_good_bytes: 0
-  tx_good_bytes: 6000
-  rx_q0_packets: 0
-  rx_q0_bytes: 0
-  rx_q0_errors: 0
-  rx_q1_packets: 0
-  rx_q1_bytes: 0
-  rx_q1_errors: 0
-  rx_q2_packets: 0
-  rx_q2_bytes: 0
-  rx_q2_errors: 0
-  rx_q3_packets: 0
-  rx_q3_bytes: 0
-  rx_q3_errors: 0
-  tx_q0_packets: 0
-  tx_q0_bytes: 0
-  tx_q1_packets: 0
-  tx_q1_bytes: 0
-  tx_q2_packets: 0
-  tx_q2_bytes: 0
-  tx_q3_packets: 0
-  tx_q3_bytes: 0
+    testpmd> show port xstats all
+    ###### NIC extended statistics for port 0
+    rx_good_packets: 100
+    tx_good_packets: 0
+    rx_good_bytes: 6000
+    tx_good_bytes: 0
+    ......
+    rx_q0_packets: 0
+    rx_q0_bytes: 0
+    rx_q0_errors: 0
+    rx_q1_packets: 0
+    rx_q1_bytes: 0
+    rx_q1_errors: 0
+    rx_q2_packets: 0
+    rx_q2_bytes: 0
+    rx_q2_errors: 0
+    rx_q3_packets: 0
+    rx_q3_bytes: 0
+    rx_q3_errors: 0
+    tx_q0_packets: 0
+    tx_q0_bytes: 0
+    tx_q1_packets: 0
+    tx_q1_bytes: 0
+    tx_q2_packets: 0
+    tx_q2_bytes: 0
+    tx_q3_packets: 0
+    tx_q3_bytes: 0
+    ......
+    ###### NIC extended statistics for port 1
+    rx_good_packets: 0
+    tx_good_packets: 100
+    rx_good_bytes: 0
+    tx_good_bytes: 6000
+    rx_q0_packets: 0
+    rx_q0_bytes: 0
+    rx_q0_errors: 0
+    rx_q1_packets: 0
+    rx_q1_bytes: 0
+    rx_q1_errors: 0
+    rx_q2_packets: 0
+    rx_q2_bytes: 0
+    rx_q2_errors: 0
+    rx_q3_packets: 0
+    rx_q3_bytes: 0
+    rx_q3_errors: 0
+    tx_q0_packets: 0
+    tx_q0_bytes: 0
+    tx_q1_packets: 0
+    tx_q1_bytes: 0
+    tx_q2_packets: 0
+    tx_q2_bytes: 0
+    tx_q3_packets: 0
+    tx_q3_bytes: 0
 
 verify rx_good_packets, RX-packets of port 0 and tx_good_packets, TX-packets of port 1 are both 100.
 rx_good_bytes, RX-bytes of port 0 and tx_good_bytes, TX-bytes of port 1 are the same.
diff --git a/test_plans/vhost_cbdma_test_plan.rst b/test_plans/vhost_cbdma_test_plan.rst
index 504b9aa0..bbfa22c1 100644
--- a/test_plans/vhost_cbdma_test_plan.rst
+++ b/test_plans/vhost_cbdma_test_plan.rst
@@ -181,11 +181,11 @@  Test Case2: Dynamic queue number test for DMA-accelerated vhost Tx operations
 
 10. Relaunch vhost with another two cbdma channels and 2 queues, check performance can reach the target::
 
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 28-29  \
-    --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=2,client=1,dmas=[txq0@00:04.5;txq1@00:04.6],dmathr=512' \
-    -- -i --nb-cores=1 --txd=1024 --rxd=1024 --txq=2 --rxq=2
-    >set fwd mac
-    >start
+     ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 28-29  \
+     --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=2,client=1,dmas=[txq0@00:04.5;txq1@00:04.6],dmathr=512' \
+     -- -i --nb-cores=1 --txd=1024 --rxd=1024 --txq=2 --rxq=2
+     >set fwd mac
+     >start
 
 11. Stop the vhost port, check from the vhost log that packets exist in both RX and TX directions on both queues.
 
diff --git a/test_plans/virtio_smoke_test_plan.rst b/test_plans/virtio_smoke_test_plan.rst
index 66f10805..cc184bf5 100644
--- a/test_plans/virtio_smoke_test_plan.rst
+++ b/test_plans/virtio_smoke_test_plan.rst
@@ -74,7 +74,7 @@  Test Case 1: loopback reconnect test with split ring mergeable path and server m
     testpmd>stop
 
 Test Case 2: pvp test with virtio packed ring vectorized path
-============================================================
+=============================================================
 
 1. Bind one port to vfio-pci, then launch vhost by below command::
 
diff --git a/test_plans/vm2vm_virtio_net_perf_test_plan.rst b/test_plans/vm2vm_virtio_net_perf_test_plan.rst
index 824cadde..f0107746 100644
--- a/test_plans/vm2vm_virtio_net_perf_test_plan.rst
+++ b/test_plans/vm2vm_virtio_net_perf_test_plan.rst
@@ -224,7 +224,7 @@  Test Case 4: Check split ring virtio-net device capability
 1. Launch the Vhost sample by below commands::
 
     rm -rf vhost-net*
-   ./dpdk-testpmd -c 0xF0000000 -n 4 --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1' \
+    ./dpdk-testpmd -c 0xF0000000 -n 4 --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1' \
     --vdev 'net_vhost1,iface=vhost-net1,queues=1'  -- -i --nb-cores=2 --txd=1024 --rxd=1024
     testpmd>start
 
@@ -338,18 +338,18 @@  Test Case 5: VM2VM virtio-net split ring mergeable 8 queues CBDMA enable test wi
 
 10. Quit vhost ports and relaunch vhost ports with 1 queue::
 
-    ./dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8' \
-    --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8'  -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=1 --txq=1
-    testpmd>start
+     ./dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8' \
+     --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8'  -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=1 --txq=1
+     testpmd>start
 
 11. Scp a 1MB file from VM1 to VM2, check packets can be forwarded successfully by scp::
 
-    Under VM1, run: `scp [xxx] root@1.1.1.8:/`   [xxx] is the file name
+     Under VM1, run: `scp [xxx] root@1.1.1.8:/`   [xxx] is the file name
 
 12. Check the iperf performance, ensure queue0 can work from vhost side::
 
-    Under VM1, run: `taskset -c 0 iperf -s -i 1`
-    Under VM2, run: `taskset -c 0 iperf -c 1.1.1.2 -i 1 -t 60`
+     Under VM1, run: `taskset -c 0 iperf -s -i 1`
+     Under VM2, run: `taskset -c 0 iperf -c 1.1.1.2 -i 1 -t 60`
 
 Test Case 6: VM2VM virtio-net split ring non-mergeable 8 queues CBDMA enable test with large packet payload valid check
 ========================================================================================================================
@@ -423,18 +423,18 @@  Test Case 6: VM2VM virtio-net split ring non-mergeable 8 queues CBDMA enable tes
 
 10. Quit vhost ports and relaunch vhost ports with 1 queue::
 
-    ./dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8' \
-    --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8'  -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=1 --txq=1
-    testpmd>start
+     ./dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8' \
+     --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8'  -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=1 --txq=1
+     testpmd>start
 
 11. Scp a 1MB file from VM1 to VM2, check packets can be forwarded successfully by scp::
 
-    Under VM1, run: `scp [xxx] root@1.1.1.8:/`   [xxx] is the file name
+     Under VM1, run: `scp [xxx] root@1.1.1.8:/`   [xxx] is the file name
 
 12. Check the iperf performance, ensure queue0 can work from vhost side::
 
-    Under VM1, run: `taskset -c 0 iperf -s -i 1`
-    Under VM2, run: `taskset -c 0 iperf -c 1.1.1.2 -i 1 -t 60`
+     Under VM1, run: `taskset -c 0 iperf -s -i 1`
+     Under VM2, run: `taskset -c 0 iperf -c 1.1.1.2 -i 1 -t 60`
 
 Test Case 7: VM2VM packed ring vhost-user/virtio-net test with tcp traffic
 ==========================================================================