[V3,4/5] rename base classes 4

Message ID: 20220610050810.1531-5-junx.dong@intel.com (mailing list archive)
State: Accepted
Series: rename base classes

Commit Message

Jun Dong June 10, 2022, 5:08 a.m. UTC
  From: Juraj Linkeš <juraj.linkes@pantheon.tech>

the rest of test_plans/*: rename DUT to SUT and Tester to TG

Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
Signed-off-by: Jun Dong <junx.dong@intel.com>
---
 test_plans/ice_rss_configure_test_plan.rst    |  20 +-
 test_plans/ice_switch_filter_test_plan.rst    | 180 ++++----
 test_plans/inline_ipsec_test_plan.rst         |  72 ++--
 test_plans/interrupt_pmd_test_plan.rst        |   2 +-
 test_plans/ip_pipeline_test_plan.rst          | 144 +++----
 test_plans/ipgre_test_plan.rst                |   2 +-
 test_plans/ipsec_gw_and_library_test_plan.rst |  18 +-
 .../ipsec_gw_cryptodev_func_test_plan.rst     |   6 +-
 test_plans/ipv4_reassembly_test_plan.rst      |  48 +--
 ..._get_extra_queue_information_test_plan.rst |   6 +-
 test_plans/kernelpf_iavf_test_plan.rst        |  28 +-
 test_plans/kni_test_plan.rst                  |  18 +-
 test_plans/l2fwd_cryptodev_func_test_plan.rst |   8 +-
 test_plans/l2fwd_test_plan.rst                |  12 +-
 test_plans/l2tp_esp_coverage_test_plan.rst    | 404 +++++++++---------
 test_plans/l3fwd_func_test_plan.rst           |  42 +-
 test_plans/l3fwdacl_test_plan.rst             |   4 +-
 test_plans/link_flowctrl_test_plan.rst        |  16 +-
 .../link_status_interrupt_test_plan.rst       |   4 +-
 test_plans/linux_modules_test_plan.rst        |  20 +-
 ...irtio_user_server_mode_cbdma_test_plan.rst |  14 +-
 ..._virtio_user_server_mode_dsa_test_plan.rst |   6 +-
 test_plans/mdd_test_plan.rst                  |  16 +-
 .../metering_and_policing_test_plan.rst       |  22 +-
 test_plans/metrics_test_plan.rst              |  22 +-
 test_plans/multiple_pthread_test_plan.rst     |  24 +-
 test_plans/nic_single_core_perf_test_plan.rst |  24 +-
 test_plans/nvgre_test_plan.rst                |  30 +-
 test_plans/packet_capture_test_plan.rst       |  84 ++--
 test_plans/pf_smoke_test_plan.rst             |   4 +-
 test_plans/pipeline_test_plan.rst             |  30 +-
 test_plans/pmd_bonded_8023ad_test_plan.rst    |  12 +-
 test_plans/pmd_bonded_test_plan.rst           |  22 +-
 test_plans/pmd_stacked_bonded_test_plan.rst   |  18 +-
 test_plans/pmd_test_plan.rst                  |  22 +-
 test_plans/pmdpcap_test_plan.rst              |   2 +-
 test_plans/pmdrss_hash_test_plan.rst          |   4 +-
 test_plans/pmdrssreta_test_plan.rst           |   6 +-
 test_plans/port_control_test_plan.rst         |   6 +-
 test_plans/port_representor_test_plan.rst     |   4 +-
 test_plans/power_branch_ratio_test_plan.rst   |   6 +-
 ...power_managerment_throughput_test_plan.rst |   4 +-
 test_plans/power_pbf_test_plan.rst            |   4 +-
 test_plans/ptpclient_test_plan.rst            |  28 +-
 .../pvp_diff_qemu_version_test_plan.rst       |   8 +-
 .../pvp_multi_paths_performance_test_plan.rst |  20 +-
 ...host_single_core_performance_test_plan.rst |  20 +-
 ...rtio_single_core_performance_test_plan.rst |  20 +-
 ...emu_multi_paths_port_restart_test_plan.rst |  12 +-
 test_plans/pvp_share_lib_test_plan.rst        |   2 +-
 test_plans/pvp_vhost_dsa_test_plan.rst        |  84 ++--
 .../pvp_vhost_user_reconnect_test_plan.rst    |  32 +-
 test_plans/pvp_virtio_bonding_test_plan.rst   |   2 +-
 ...pvp_virtio_user_2M_hugepages_test_plan.rst |   4 +-
 .../pvp_virtio_user_4k_pages_test_plan.rst    |   4 +-
 ...er_multi_queues_port_restart_test_plan.rst |  20 +-
 test_plans/qinq_filter_test_plan.rst          |  78 ++--
 test_plans/qos_api_test_plan.rst              |  12 +-
 test_plans/qos_meter_test_plan.rst            |   8 +-
 test_plans/qos_sched_test_plan.rst            |  14 +-
 test_plans/queue_start_stop_test_plan.rst     |  10 +-
 test_plans/rte_flow_test_plan.rst             |  20 +-
 test_plans/rteflow_priority_test_plan.rst     |  28 +-
 ...ntime_vf_queue_number_kernel_test_plan.rst |   8 +-
 .../runtime_vf_queue_number_test_plan.rst     |   8 +-
 test_plans/rxtx_offload_test_plan.rst         |  16 +-
 test_plans/shutdown_api_test_plan.rst         |  38 +-
 test_plans/softnic_test_plan.rst              |  10 +-
 test_plans/tso_test_plan.rst                  |  30 +-
 test_plans/tx_preparation_test_plan.rst       |  10 +-
 test_plans/uni_pkt_test_plan.rst              |   6 +-
 test_plans/unit_tests_loopback_test_plan.rst  |   2 +-
 test_plans/unit_tests_pmd_perf_test_plan.rst  |   2 +-
 test_plans/userspace_ethtool_test_plan.rst    |  10 +-
 test_plans/veb_switch_test_plan.rst           |   8 +-
 test_plans/vf_daemon_test_plan.rst            |  64 +--
 test_plans/vf_interrupt_pmd_test_plan.rst     |  26 +-
 test_plans/vf_kernel_test_plan.rst            | 142 +++---
 test_plans/vf_l3fwd_test_plan.rst             |   6 +-
 test_plans/vf_macfilter_test_plan.rst         |   8 +-
 test_plans/vf_offload_test_plan.rst           |  22 +-
 test_plans/vf_packet_rxtx_test_plan.rst       |  20 +-
 test_plans/vf_pf_reset_test_plan.rst          |  56 +--
 test_plans/vf_port_start_stop_test_plan.rst   |   4 +-
 test_plans/vf_rss_test_plan.rst               |   4 +-
 test_plans/vf_single_core_perf_test_plan.rst  |  20 +-
 test_plans/vf_smoke_test_plan.rst             |   4 +-
 test_plans/vf_vlan_test_plan.rst              |  14 +-
 test_plans/vhost_cbdma_test_plan.rst          |  30 +-
 .../vhost_user_live_migration_test_plan.rst   |  66 +--
 ...t_virtio_pmd_interrupt_cbdma_test_plan.rst |  10 +-
 .../vhost_virtio_pmd_interrupt_test_plan.rst  |  14 +-
 ..._virtio_user_interrupt_cbdma_test_plan.rst |  14 +-
 .../vhost_virtio_user_interrupt_test_plan.rst |  24 +-
 ...io_event_idx_interrupt_cbdma_test_plan.rst |  12 +-
 .../virtio_event_idx_interrupt_test_plan.rst  |  20 +-
 .../virtio_ipsec_cryptodev_func_test_plan.rst |   6 +-
 .../virtio_pvp_regression_test_plan.rst       |  32 +-
 test_plans/virtio_smoke_test_plan.rst         |   2 +-
 ...ser_for_container_networking_test_plan.rst |   4 +-
 .../vlan_ethertype_config_test_plan.rst       |   2 +-
 test_plans/vm2vm_virtio_net_dsa_test_plan.rst |  30 +-
 .../vm2vm_virtio_net_perf_cbdma_test_plan.rst |  14 +-
 .../vm2vm_virtio_net_perf_test_plan.rst       |   4 +-
 .../vm2vm_virtio_pmd_cbdma_test_plan.rst      |   8 +-
 .../vm2vm_virtio_user_cbdma_test_plan.rst     |  18 +-
 .../vm2vm_virtio_user_dsa_test_plan.rst       |  28 +-
 test_plans/vm_hotplug_test_plan.rst           |   8 +-
 test_plans/vm_pw_mgmt_policy_test_plan.rst    |  30 +-
 ...paths_performance_with_cbdma_test_plan.rst |  58 +--
 test_plans/vswitch_sample_cbdma_test_plan.rst |   8 +-
 test_plans/vswitch_sample_dsa_test_plan.rst   |  44 +-
 .../vxlan_gpe_support_in_i40e_test_plan.rst   |   4 +-
 test_plans/vxlan_test_plan.rst                |   2 +-
 114 files changed, 1447 insertions(+), 1449 deletions(-)
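
The change itself is purely mechanical: across the listed test plans, "DUT" (device under test) becomes "SUT" (system under test), "Tester" becomes "TG" (traffic generator), and trailing whitespace is stripped from the touched lines. The patch does not state how it was generated; as a rough illustration only (a hypothetical sketch, not the authors' actual tooling), a bulk substitution along these lines would produce a similar result::

    # Hypothetical sketch; the real patch may have been produced
    # differently (e.g. with sed or by hand). Word-boundary, case-sensitive
    # matching avoids rewriting words that merely contain the old terms.
    import re
    from pathlib import Path

    RENAMES = [
        (re.compile(r"\bDUT\b"), "SUT"),
        (re.compile(r"\bTester\b"), "TG"),
        (re.compile(r"\btester\b"), "TG"),
    ]

    for rst in Path("test_plans").glob("*.rst"):
        text = rst.read_text(encoding="utf-8")
        for pattern, replacement in RENAMES:
            text = pattern.sub(replacement, text)
        # The diff also drops trailing whitespace, approximated here file-wide.
        text = "\n".join(line.rstrip() for line in text.splitlines()) + "\n"
        rst.write_text(text, encoding="utf-8")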
  

Comments

Juraj Linkeš June 10, 2022, 12:42 p.m. UTC | #1
Reviewed-by: Juraj Linkeš <juraj.linkes@pantheon.tech>

> -----Original Message-----
> From: Jun Dong <junx.dong@intel.com>
> Sent: Friday, June 10, 2022 7:08 AM
> To: dts@dpdk.org
> Cc: lijuan.tu@intel.com; qingx.sun@intel.com; junx.dong@intel.com; Juraj
> Linkeš <juraj.linkes@pantheon.tech>
> Subject: [V3 4/5] rename base classes 4
> 
> From: Juraj Linkeš <juraj.linkes@pantheon.tech>
> 
> the rest of test_plans/*
> 
> Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
> Signed-off-by: Jun Dong <junx.dong@intel.com>
> ---
> [diffstat snipped; identical to the diffstat above]
  

Patch
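
Every hunk below is one of two things: the DUT -> SUT / Tester -> TG substitution, or removal of trailing whitespace. After applying the patch, a reviewer could scan for stale occurrences of the old terms with a quick check like the following (a hypothetical helper, not part of the series; any hits would still need manual review, since some uses of "tester" may be legitimate)::

    # Hypothetical post-rename check: list test plan lines that still
    # contain the old terminology.
    import re
    from pathlib import Path

    STALE = re.compile(r"\b(DUT|[Tt]ester)\b")

    for rst in sorted(Path("test_plans").glob("*.rst")):
        for lineno, line in enumerate(rst.read_text(encoding="utf-8").splitlines(), start=1):
            if STALE.search(line):
                print(f"{rst}:{lineno}: {line.strip()}")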

diff --git a/test_plans/ice_rss_configure_test_plan.rst b/test_plans/ice_rss_configure_test_plan.rst
index 34cfa73a..4ea63e84 100644
--- a/test_plans/ice_rss_configure_test_plan.rst
+++ b/test_plans/ice_rss_configure_test_plan.rst
@@ -42,7 +42,7 @@  Prerequisites
    - dpdk: http://dpdk.org/git/dpdk
    - scapy: http://www.secdev.org/projects/scapy/
 
-3. bind the Intel® Ethernet 800 Series port to dpdk driver in DUT::
+3. bind the Intel® Ethernet 800 Series port to dpdk driver in SUT::
 
     modprobe vfio-pci
     usertools/dpdk-devbind.py --force --bind=vfio-pci 0000:18:00.0
@@ -50,7 +50,7 @@  Prerequisites
 Test Case: test_command_line_option_rss_ip
 ==========================================
 
-1. Launch the testpmd in DUT::
+1. Launch the testpmd in SUT::
 
     testpmd>./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xff -n 4 -- -i --rxq=10 --txq=10 --rss-ip
     testpmd>set fwd rxonly
@@ -170,7 +170,7 @@  Test Case: test_command_line_option_rss_ip
 Test Case: test_command_line_option_rss_udp
 ===========================================
 
-1. Launch the testpmd in DUT::
+1. Launch the testpmd in SUT::
 
     testpmd>./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xff -n 4 -- -i --rxq=10 --txq=10 --rss-udp
     testpmd>set fwd rxonly
@@ -234,7 +234,7 @@  Test Case: test_command_line_option_rss_udp
 Test Case: test_command_line_option_disable-rss
 ===============================================
 
-1. Launch the testpmd in DUT::
+1. Launch the testpmd in SUT::
 
     testpmd>./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xff -n 4 -- -i --rxq=10 --txq=10 --disable-rss
     testpmd>set fwd rxonly
@@ -256,7 +256,7 @@  Test Case: test_command_line_option_disable-rss
 Test Case: test_RSS_configure_to_ip
 ===================================
 
-1. Launch the testpmd in DUT::
+1. Launch the testpmd in SUT::
 
     testpmd>./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xff -n 4 -- -i --rxq=10 --txq=10
     testpmd>set fwd rxonly
@@ -380,7 +380,7 @@  Test Case: test_RSS_configure_to_ip
 Test Case: test_RSS_configure_to_udp
 ====================================
 
-1. Launch the testpmd in DUT::
+1. Launch the testpmd in SUT::
 
     testpmd>./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xff -n 4 -- -i --rxq=10 --txq=10
     testpmd>set fwd rxonly
@@ -448,7 +448,7 @@  Test Case: test_RSS_configure_to_udp
 Test Case: test_RSS_configure_to_tcp
 ====================================
 
-1. Launch the testpmd in DUT::
+1. Launch the testpmd in SUT::
 
     testpmd>./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xff -n 4 -- -i --rxq=10 --txq=10
     testpmd>set fwd rxonly
@@ -516,7 +516,7 @@  Test Case: test_RSS_configure_to_tcp
 Test Case: test_RSS_configure_to_sctp
 =====================================
 
-1. Launch the testpmd in DUT::
+1. Launch the testpmd in SUT::
 
     testpmd>./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xff -n 4 -- -i --rxq=10 --txq=10
     testpmd>set fwd rxonly
@@ -584,7 +584,7 @@  Test Case: test_RSS_configure_to_sctp
 Test Case: test_RSS_configure_to_all
 ====================================
 
-1. Launch the testpmd in DUT::
+1. Launch the testpmd in SUT::
 
     testpmd>./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xff -n 4 -- -i --rxq=10 --txq=10
     testpmd>set fwd rxonly
@@ -690,7 +690,7 @@  Test Case: test_RSS_configure_to_all
 Test Case: test_RSS_configure_to_default
 ========================================
 
-1. Launch the testpmd in DUT::
+1. Launch the testpmd in SUT::
 
     testpmd>./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xff -n 4 -- -i --rxq=10 --txq=10
     testpmd>set fwd rxonly
diff --git a/test_plans/ice_switch_filter_test_plan.rst b/test_plans/ice_switch_filter_test_plan.rst
index cf6e4f6f..a7534c44 100644
--- a/test_plans/ice_switch_filter_test_plan.rst
+++ b/test_plans/ice_switch_filter_test_plan.rst
@@ -4739,12 +4739,12 @@  Test steps for supported pattern
 1. validate rules.
 2. create rules and list rules.
 3. send matched packets, check the action is correct::
-    queue index: to correct queue 
-    rss queues: to correct queue group 
+    queue index: to correct queue
+    rss queues: to correct queue group
     drop: not receive pkt
 4. send mismatched packets, check the action is not correct::
-    queue index: not to correct queue 
-    rss queues: not to correctt queue group 
+    queue index: not to correct queue
+    rss queues: not to correct queue group
     drop: receive pkt
 5. destroy rule, list rules, check no rules.
 6. send matched packets, check the action is not correct.
@@ -4754,7 +4754,7 @@  Test case: IPv4/IPv6 + TCP/UDP pipeline mode
 MAC_IPV4_UDP + L4 MASK
 ----------------------
 matched packets::
-  
+
   sendp((Ether()/IP(src="192.168.0.2",dst="192.168.0.3",tos=4)/UDP(sport=2048,dport=1)/Raw("x"*80)),iface="ens260f0",count=1)
   sendp([Ether()/IP(src="192.168.0.2",dst="192.168.0.3",tos=4)/UDP(sport=2303,dport=3841)/Raw("x"*80)],iface="ens260f0",count=1)
 
@@ -4780,10 +4780,10 @@  mismatched packets::
   sendp([Ether()/IP(src="192.168.0.2",dst="192.168.0.3",tos=4)/TCP(sport=2601,dport=23)/Raw("x"*80)],iface="ens260f0",count=1)
 
 rss queues
-.......... 
+..........
 flow create 0 priority 0 ingress pattern eth / ipv4 / tcp src is 2345 src mask 0x0f0f / end actions rss queues 4 5 end / end
 
-MAC_IPV6_UDP + L4 MASK 
+MAC_IPV6_UDP + L4 MASK
 ----------------------
 matched packets::
 
@@ -4799,7 +4799,7 @@  queue index
 ...........
 flow create 0 priority 0 ingress pattern eth / ipv6 / udp dst is 3333 dst mask 0x0ff0 / end actions queue index 5 / end
 
-MAC_IPV6_TCP + L4 MASK 
+MAC_IPV6_TCP + L4 MASK
 ----------------------
 matched packets::
 
@@ -4810,7 +4810,7 @@  mismatched packets::
 
   sendp([Ether()/IPv6(src="CDCD:910A:2222:5498:8475:1111:3900:1515",dst="CDCD:910A:2222:5498:8475:1111:3900:2020",tc=3)/TCP(sport=50,dport=3077)/Raw("x"*80)],iface="ens260f0",count=1)
   sendp([Ether()/IPv6(src="CDCD:910A:2222:5498:8475:1111:3900:1515",dst="CDCD:910A:2222:5498:8475:1111:3900:2020",tc=3)/TCP(sport=50,dport=3349)/Raw("x"*80)],iface="ens260f0",count=1)
-   
+
 drop
 ....
 flow create 0 priority 0 ingress pattern eth / ipv6 / tcp dst is 3333 dst mask 0x0ff0 / end actions drop / end
@@ -4831,7 +4831,7 @@  mismatched packets::
 queue index
 ...........
 flow create 0 priority 0 ingress pattern eth / ipv4 dst is 192.168.0.1 / udp / vxlan vni is 2 / eth dst is 68:05:CA:C1:B8:F6 / ipv4 / udp src is 32 src mask 0x0f / end actions queue index 2 / end
- 
+
 MAC_IPV4_UDP_VXLAN_ETH_IPV4_TCP + L4 MASK
 -----------------------------------------
 matched packets::
@@ -4848,7 +4848,7 @@  queue index
 flow create 0 priority 0 ingress pattern eth / ipv4 dst is 192.168.0.1 / udp / vxlan vni is 2 / eth dst is 68:05:CA:C1:B8:F6 / ipv4 / tcp src is 32 src mask 0x0f / end actions queue index 3 / end
 
 Test case: NVGRE non-pipeline mode
-==================================  
+==================================
 MAC_IPV4_NVGRE_ETH_IPV4_UDP + L4 MASK
 -------------------------------------
 matched packets::
@@ -4893,7 +4893,7 @@  mismatched packets::
 rss queues
 ..........
 flow create 0 priority 0 ingress pattern eth / ipv4 / udp / gtpu / ipv4 / udp src is 1280 src mask 0xf00 / end actions rss queues 4 5 end / end
-         
+
 MAC_IPV4_GTPU_IPV4_TCP + L4 MASK
 --------------------------------
 matched packets::
@@ -4952,7 +4952,7 @@  mismatched packets::
 
 rss queues
 ..........
-flow create 0 priority 0 ingress pattern eth / ipv4 / udp / gtpu / ipv6 dst is CDCD:910A:2222:5498:8475:1111:3900:2022 / udp src is 1280 src mask 0xf00 / end actions rss queues 4 5 end / end  
+flow create 0 priority 0 ingress pattern eth / ipv4 / udp / gtpu / ipv6 dst is CDCD:910A:2222:5498:8475:1111:3900:2022 / udp src is 1280 src mask 0xf00 / end actions rss queues 4 5 end / end
 
 MAC_IPV4_GTPU_IPV6_TCP + L4 MASK
 --------------------------------
@@ -4967,8 +4967,8 @@  mismatched packets::
 
 queue index
 ...........
-flow create 0 priority 0 ingress pattern eth / ipv4 / udp / gtpu / ipv6 dst is CDCD:910A:2222:5498:8475:1111:3900:2022 / tcp src is 1280 src mask 0xf00 / end actions queue index 7 / end  
- 
+flow create 0 priority 0 ingress pattern eth / ipv4 / udp / gtpu / ipv6 dst is CDCD:910A:2222:5498:8475:1111:3900:2022 / tcp src is 1280 src mask 0xf00 / end actions queue index 7 / end
+
 MAC_IPV4_GTPU_EH_IPV6_UDP + L4 MASK
 -----------------------------------
 matched packets::
@@ -4977,12 +4977,12 @@  matched packets::
   sendp([Ether()/IP(src="192.168.0.20", dst="192.168.0.21")/UDP(dport=2152)/GTP_U_Header(gtp_type=255, teid=0x12345678)/GTPPDUSessionContainer()/IPv6(src="CDCD:910A:2222:5498:8475:1111:3900:1536", dst="CDCD:910A:2222:5498:8475:1111:3900:2022")/UDP(sport=239)/Raw("x"*80)],iface="ens260f0",count=1)
 
 mismatched packets::
- 
+
   sendp([Ether()/IP(src="192.168.0.20", dst="192.168.0.21")/UDP(dport=2152)/GTP_U_Header(gtp_type=255, teid=0x12345678)/GTPPDUSessionContainer()/IPv6(src="CDCD:910A:2222:5498:8475:1111:3900:1536", dst="CDCD:910A:2222:5498:8475:1111:3900:2022")/UDP(sport=245)/Raw("x"*80)],iface="ens260f0",count=1)
 
 queue index
 ...........
-flow create 0 priority 0 ingress pattern eth / ipv4 / udp / gtpu / gtp_psc / ipv6 dst is CDCD:910A:2222:5498:8475:1111:3900:2022 / udp src is 230 src mask 0x0f0 / end actions queue index 5 / end  
+flow create 0 priority 0 ingress pattern eth / ipv4 / udp / gtpu / gtp_psc / ipv6 dst is CDCD:910A:2222:5498:8475:1111:3900:2022 / udp src is 230 src mask 0x0f0 / end actions queue index 5 / end
 
 MAC_IPV4_GTPU_EH_IPV6_TCP + L4 MASK
 -----------------------------------
@@ -4997,7 +4997,7 @@  mismatched packets::
 
 drop
 ....
-flow create 0 priority 0 ingress pattern eth / ipv4 / udp / gtpu / gtp_psc / ipv6 dst is CDCD:910A:2222:5498:8475:1111:3900:2022 / tcp dst is 230 dst mask 0x0f0 / end actions drop / end 
+flow create 0 priority 0 ingress pattern eth / ipv4 / udp / gtpu / gtp_psc / ipv6 dst is CDCD:910A:2222:5498:8475:1111:3900:2022 / tcp dst is 230 dst mask 0x0f0 / end actions drop / end
 
 Test case: GTPU non-pipeline mode
 =================================
@@ -5014,7 +5014,7 @@  mismatched packets::
 
 queue index
 ...........
-flow create 0 ingress pattern eth / ipv6 / udp / gtpu / ipv4 / udp src is 1280 src mask 0xf00 / end actions queue index 8 / end  
+flow create 0 ingress pattern eth / ipv6 / udp / gtpu / ipv4 / udp src is 1280 src mask 0xf00 / end actions queue index 8 / end
 
 MAC_IPV6_GTPU_IPV4_TCP + L4 MASK
 --------------------------------
@@ -5029,7 +5029,7 @@  mismatched packets::
 
 drop
 ....
-flow create 0 ingress pattern eth / ipv6 / udp / gtpu / ipv4 / tcp dst is 1280 dst mask 0xf00 / end actions drop / end  
+flow create 0 ingress pattern eth / ipv6 / udp / gtpu / ipv4 / tcp dst is 1280 dst mask 0xf00 / end actions drop / end
 
 MAC_IPV6_GTPU_EH_IPV4_UDP + L4 MASK
 -----------------------------------
@@ -5044,7 +5044,7 @@  mismatched packets::
 
 queue index
 ...........
-flow create 0 ingress pattern eth / ipv6 / udp / gtpu / gtp_psc / ipv4 / udp src is 230 src mask 0x0f0 / end actions queue index 5 / end  
+flow create 0 ingress pattern eth / ipv6 / udp / gtpu / gtp_psc / ipv4 / udp src is 230 src mask 0x0f0 / end actions queue index 5 / end
 
 MAC_IPV6_GTPU_EH_IPV4_TCP + L4 MASK
 -----------------------------------
@@ -5059,7 +5059,7 @@  mismatched packets::
 
 drop
 ....
-flow create 0 ingress pattern eth / ipv6 / udp / gtpu / gtp_psc / ipv4 / tcp dst is 230 dst mask 0x0f0 / end actions drop / end  
+flow create 0 ingress pattern eth / ipv6 / udp / gtpu / gtp_psc / ipv4 / tcp dst is 230 dst mask 0x0f0 / end actions drop / end
 
 MAC_IPV6_GTPU_IPV6_UDP + L4 MASK
 --------------------------------
@@ -5074,7 +5074,7 @@  mismatched packets::
 
 queue index
 ...........
-flow create 0 ingress pattern eth / ipv6 / udp / gtpu / ipv6 dst is CDCD:910A:2222:5498:8475:1111:3900:2022 / udp src is 1280 src mask 0xf00 / end actions queue index 3 / end 
+flow create 0 ingress pattern eth / ipv6 / udp / gtpu / ipv6 dst is CDCD:910A:2222:5498:8475:1111:3900:2022 / udp src is 1280 src mask 0xf00 / end actions queue index 3 / end
 
 MAC_IPV6_GTPU_IPV6_TCP + L4 MASK
 --------------------------------
@@ -5089,7 +5089,7 @@  mismatched packets::
 
 rss queues
 ..........
-flow create 0 ingress pattern eth / ipv6 / udp / gtpu / ipv6 dst is CDCD:910A:2222:5498:8475:1111:3900:2022 / tcp dst is 230 dst mask 0x0f0 / end actions rss queues 2 3 end / end  
+flow create 0 ingress pattern eth / ipv6 / udp / gtpu / ipv6 dst is CDCD:910A:2222:5498:8475:1111:3900:2022 / tcp dst is 230 dst mask 0x0f0 / end actions rss queues 2 3 end / end
 
 MAC_IPV6_GTPU_EH_IPV6_UDP + L4 MASK
 -----------------------------------
@@ -5104,7 +5104,7 @@  mismatched packets::
 
 drop
 ....
-flow create 0 ingress pattern eth / ipv6 / udp / gtpu / gtp_psc / ipv6 dst is CDCD:910A:2222:5498:8475:1111:3900:2022 / udp dst is 32 dst mask 0x0f / end actions drop / end  
+flow create 0 ingress pattern eth / ipv6 / udp / gtpu / gtp_psc / ipv6 dst is CDCD:910A:2222:5498:8475:1111:3900:2022 / udp dst is 32 dst mask 0x0f / end actions drop / end
 
 MAC_IPV6_GTPU_EH_IPV6_TCP + L4 MASK
 -----------------------------------
@@ -5119,7 +5119,7 @@  mismatched packets::
 
 queue index
 ...........
-flow create 0 ingress pattern eth / ipv6 / udp / gtpu / gtp_psc / ipv6 dst is CDCD:910A:2222:5498:8475:1111:3900:2022 / tcp src is 32 src mask 0x0f / end actions queue index 7 / end 
+flow create 0 ingress pattern eth / ipv6 / udp / gtpu / gtp_psc / ipv6 dst is CDCD:910A:2222:5498:8475:1111:3900:2022 / tcp src is 32 src mask 0x0f / end actions queue index 7 / end
 
 #l4 qinq switch filter
 
@@ -5185,31 +5185,31 @@  Test Steps
    ID      Group   Prio    Attr    Rule
    0       0       0       i--     ETH VLAN VLAN IPV4 => QUEUE
 
-4. Send matched packet in scapy on tester, check the DUT received this packet and the action is right.
+4. Send matched packet in scapy on TG, check the SUT received this packet and the action is right.
 
-Tester::
+TG::
 
-    >>> sendp([Ether(dst="<dst mac>",type=0x8100)/Dot1Q(vlan=<outer vlan tci>,type=0x8100)/Dot1Q(vlan=0x<inner vlan tci>,type=0x0800)/IP(src="<ipv4 src>",dst="<ipv4 dst>")/("X"*80)],iface="<tester interface>")
+    >>> sendp([Ether(dst="<dst mac>",type=0x8100)/Dot1Q(vlan=<outer vlan tci>,type=0x8100)/Dot1Q(vlan=0x<inner vlan tci>,type=0x0800)/IP(src="<ipv4 src>",dst="<ipv4 dst>")/("X"*80)],iface="<TG interface>")
 
-DUT::
+SUT::
 
     testpmd> port 0/queue 2: received 1 packets
   src=A4:BF:01:4D:6F:32 - dst=00:11:22:33:44:55 - type=0x8100 - length=122 - nb_segs=1 - RSS hash=0x26878aad - RSS queue=0x2 - hw ptype: L2_ETHER L3_IPV4_EXT_UNKNOWN L4_NONFRAG  - sw ptype: L2_ETHER_VLAN INNER_L2_ETHER_VLAN INNER_L3_IPV4  - l2_len=18 - inner_l2_len=4 - inner_l3_len=20 - Receive queue=0x2
   ol_flags: RTE_MBUF_F_RX_RSS_HASH RTE_MBUF_F_RX_L4_CKSUM_GOOD RTE_MBUF_F_RX_IP_CKSUM_GOOD RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD
 
-5. Send mismatched packet in scapy on tester, check the DUT received this packet and the action is not right.
+5. Send mismatched packet in scapy on TG, check the SUT received this packet and the action is not right.
 
-Tester::
+TG::
 
-    >>> sendp([Ether(dst="<dst mac change inputset>",type=0x8100)/Dot1Q(vlan=<outer vlan tci>,type=0x8100)/Dot1Q(vlan=0x<inner vlan tci>,type=0x0800)/IP(src="<ipv4 src>",dst="<ipv4 dst>")/("X"*80)],iface="<tester interface>")
+    >>> sendp([Ether(dst="<dst mac change inputset>",type=0x8100)/Dot1Q(vlan=<outer vlan tci>,type=0x8100)/Dot1Q(vlan=0x<inner vlan tci>,type=0x0800)/IP(src="<ipv4 src>",dst="<ipv4 dst>")/("X"*80)],iface="<TG interface>")
 
-    >>> sendp([Ether(dst="<dst mac>",type=0x8100)/Dot1Q(vlan=<outer vlan tci change inputset>,type=0x8100)/Dot1Q(vlan=0x<inner vlan tci>,type=0x0800)/IP(src="<ipv4 src>",dst="<ipv4 dst>")/("X"*80)],iface="<tester interface>")
+    >>> sendp([Ether(dst="<dst mac>",type=0x8100)/Dot1Q(vlan=<outer vlan tci change inputset>,type=0x8100)/Dot1Q(vlan=0x<inner vlan tci>,type=0x0800)/IP(src="<ipv4 src>",dst="<ipv4 dst>")/("X"*80)],iface="<TG interface>")
 
-    >>> sendp([Ether(dst="<dst mac>",type=0x8100)/Dot1Q(vlan=<outer vlan tci>,type=0x8100)/Dot1Q(vlan=0x<inner vlan tci change inputset>,type=0x0800)/IP(src="<ipv4 src>",dst="<ipv4 dst>")/("X"*80)],iface="<tester interface>")
+    >>> sendp([Ether(dst="<dst mac>",type=0x8100)/Dot1Q(vlan=<outer vlan tci>,type=0x8100)/Dot1Q(vlan=0x<inner vlan tci change inputset>,type=0x0800)/IP(src="<ipv4 src>",dst="<ipv4 dst>")/("X"*80)],iface="<TG interface>")
 
-    >>> sendp([Ether(dst="<dst mac>",type=0x8100)/Dot1Q(vlan=<outer vlan tci>,type=0x8100)/Dot1Q(vlan=0x<inner vlan tci>,type=0x0800)/IP(src="<ipv4 src change inputset>",dst="<ipv4 dst>")/("X"*80)],iface="<tester interface>")
+    >>> sendp([Ether(dst="<dst mac>",type=0x8100)/Dot1Q(vlan=<outer vlan tci>,type=0x8100)/Dot1Q(vlan=0x<inner vlan tci>,type=0x0800)/IP(src="<ipv4 src change inputset>",dst="<ipv4 dst>")/("X"*80)],iface="<TG interface>")
 
-    >>> sendp([Ether(dst="<dst mac>",type=0x8100)/Dot1Q(vlan=<outer vlan tci>,type=0x8100)/Dot1Q(vlan=0x<inner vlan tci>,type=0x0800)/IP(src="<ipv4 src>",dst="<ipv4 dst change inputset>")/("X"*80)],iface="<tester interface>")
+    >>> sendp([Ether(dst="<dst mac>",type=0x8100)/Dot1Q(vlan=<outer vlan tci>,type=0x8100)/Dot1Q(vlan=0x<inner vlan tci>,type=0x0800)/IP(src="<ipv4 src>",dst="<ipv4 dst change inputset>")/("X"*80)],iface="<TG interface>")
 
 6. Destroy a rule and list rules::
 
@@ -5255,29 +5255,29 @@  Test Steps
    ID      Group   Prio    Attr    Rule
    0       0       0       i--     ETH VLAN VLAN IPV6 => RSS
 
-4. Send matched packet in scapy on tester, check the DUT received this packet and the action is right.
+4. Send matched packet in scapy on TG, check the SUT received this packet and the action is right.
 
-Tester::
+TG::
 
-    >>> sendp([Ether(dst="<dst mac>",type=0x8100)/Dot1Q(vlan=<outer vlan tci>,type=0x8100)/Dot1Q(vlan=0x<inner vlan tci>,type=0x0800)/IPv6(dst="<ipv6 dst>")/("X"*80)],iface="<tester interface>")
+    >>> sendp([Ether(dst="<dst mac>",type=0x8100)/Dot1Q(vlan=<outer vlan tci>,type=0x8100)/Dot1Q(vlan=0x<inner vlan tci>,type=0x0800)/IPv6(dst="<ipv6 dst>")/("X"*80)],iface="<TG interface>")
 
-DUT::
+SUT::
 
     testpmd> port 0/queue 2: received 1 packets
   src=00:00:00:00:00:00 - dst=00:11:22:33:44:55 - type=0x8100 - length=142 - nb_segs=1 - RSS hash=0xb0c13d2c - RSS queue=0x2 - hw ptype: L2_ETHER L3_IPV6_EXT_UNKNOWN L4_NONFRAG  - sw ptype: L2_ETHER_VLAN INNER_L2_ETHER_VLAN INNER_L3_IPV6  - l2_len=18 - inner_l2_len=4 - inner_l3_len=40 - Receive queue=0x2
   ol_flags: RTE_MBUF_F_RX_RSS_HASH RTE_MBUF_F_RX_L4_CKSUM_GOOD RTE_MBUF_F_RX_IP_CKSUM_GOOD RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD
 
-5. Send mismatched packet in scapy on tester, check the DUT received this packet and the action is not right.
+5. Send mismatched packet in scapy on TG, check the SUT received this packet and the action is not right.
 
-Tester::
+TG::
 
-    >>> sendp([Ether(dst="<dst mac change inputset>",type=0x8100)/Dot1Q(vlan=<outer vlan tci>,type=0x8100)/Dot1Q(vlan=0x<inner vlan tci>,type=0x0800)/IPv6(dst="<ipv6 dst>")/("X"*80)],iface="<tester interface>")
+    >>> sendp([Ether(dst="<dst mac change inputset>",type=0x8100)/Dot1Q(vlan=<outer vlan tci>,type=0x8100)/Dot1Q(vlan=0x<inner vlan tci>,type=0x0800)/IPv6(dst="<ipv6 dst>")/("X"*80)],iface="<TG interface>")
 
-    >>> sendp([Ether(dst="<dst mac>",type=0x8100)/Dot1Q(vlan=<outer vlan tci change inputset>,type=0x8100)/Dot1Q(vlan=0x<inner vlan tci>,type=0x0800)/IPv6(dst="<ipv6 dst>")/("X"*80)],iface="<tester interface>")
+    >>> sendp([Ether(dst="<dst mac>",type=0x8100)/Dot1Q(vlan=<outer vlan tci change inputset>,type=0x8100)/Dot1Q(vlan=0x<inner vlan tci>,type=0x0800)/IPv6(dst="<ipv6 dst>")/("X"*80)],iface="<TG interface>")
 
-    >>> sendp([Ether(dst="<dst mac>",type=0x8100)/Dot1Q(vlan=<outer vlan tci>,type=0x8100)/Dot1Q(vlan=0x<inner vlan tci change inputset>,type=0x0800)/IPv6(dst="<ipv6 dst>")/("X"*80)],iface="<tester interface>")
+    >>> sendp([Ether(dst="<dst mac>",type=0x8100)/Dot1Q(vlan=<outer vlan tci>,type=0x8100)/Dot1Q(vlan=0x<inner vlan tci change inputset>,type=0x0800)/IPv6(dst="<ipv6 dst>")/("X"*80)],iface="<TG interface>")
 
-    >>> sendp([Ether(dst="<dst mac>",type=0x8100)/Dot1Q(vlan=<outer vlan tci>,type=0x8100)/Dot1Q(vlan=0x<inner vlan tci>,type=0x0800)/IPv6(dst="<ipv6 dst change inputset>")/("X"*80)],iface="<tester interface>")
+    >>> sendp([Ether(dst="<dst mac>",type=0x8100)/Dot1Q(vlan=<outer vlan tci>,type=0x8100)/Dot1Q(vlan=0x<inner vlan tci>,type=0x0800)/IPv6(dst="<ipv6 dst change inputset>")/("X"*80)],iface="<TG interface>")
 
 6. Destroy a rule and list rules::
 
@@ -5323,31 +5323,31 @@  Test Steps
    ID      Group   Prio    Attr    Rule
    0       0       0       i--     ETH VLAN VLAN IPV4 UDP => QUEUE
 
-4. Send matched packet in scapy on tester, check the DUT received this packet and the action is right.
+4. Send matched packet in scapy on TG, check the SUT received this packet and the action is right.
 
-Tester::
+TG::
 
-    >>> sendp([Ether(dst="<dst mac>",type=0x8100)/Dot1Q(vlan=<outer vlan tci>,type=0x8100)/Dot1Q(vlan=0x<inner vlan tci>,type=0x0800)/IP()/UDP(sport=<sport>,dport=<dport>)/("X"*80)],iface="<tester interface>")
+    >>> sendp([Ether(dst="<dst mac>",type=0x8100)/Dot1Q(vlan=<outer vlan tci>,type=0x8100)/Dot1Q(vlan=0x<inner vlan tci>,type=0x0800)/IP()/UDP(sport=<sport>,dport=<dport>)/("X"*80)],iface="<TG interface>")
 
-DUT::
+SUT::
 
     testpmd> port 0/queue 2: received 1 packets
   src=00:00:00:00:00:00 - dst=00:11:22:33:44:55 - type=0x8100 - length=130 - nb_segs=1 - RSS hash=0xddc4fdb3 - RSS queue=0x2 - hw ptype: L2_ETHER L3_IPV4_EXT_UNKNOWN L4_UDP  - sw ptype: L2_ETHER_VLAN INNER_L2_ETHER_VLAN INNER_L3_IPV4 INNER_L4_UDP  - l2_len=18 - inner_l2_len=4 - inner_l3_len=20 - inner_l4_len=8 - Receive queue=0x2
   ol_flags: RTE_MBUF_F_RX_RSS_HASH RTE_MBUF_F_RX_L4_CKSUM_GOOD RTE_MBUF_F_RX_IP_CKSUM_GOOD RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD
 
-5. Send mismatched packet in scapy on tester, check the DUT received this packet and the action is not right.
+5. Send mismatched packet in scapy on TG, check the SUT received this packet and the action is not right.
 
-Tester::
+TG::
 
-    >>> sendp([Ether(dst="<dst mac change inputset>",type=0x8100)/Dot1Q(vlan=<outer vlan tci>,type=0x8100)/Dot1Q(vlan=0x<inner vlan tci>,type=0x0800)/IP()/UDP(sport=<sport>,dport=<dport>)/("X"*80)],iface="<tester interface>")
+    >>> sendp([Ether(dst="<dst mac change inputset>",type=0x8100)/Dot1Q(vlan=<outer vlan tci>,type=0x8100)/Dot1Q(vlan=0x<inner vlan tci>,type=0x0800)/IP()/UDP(sport=<sport>,dport=<dport>)/("X"*80)],iface="<TG interface>")
 
-    >>> sendp([Ether(dst="<dst mac>",type=0x8100)/Dot1Q(vlan=<outer vlan tci change inputset>,type=0x8100)/Dot1Q(vlan=0x<inner vlan tci>,type=0x0800)/IP()/UDP(sport=<sport>,dport=<dport>)/("X"*80)],iface="<tester interface>")
+    >>> sendp([Ether(dst="<dst mac>",type=0x8100)/Dot1Q(vlan=<outer vlan tci change inputset>,type=0x8100)/Dot1Q(vlan=0x<inner vlan tci>,type=0x0800)/IP()/UDP(sport=<sport>,dport=<dport>)/("X"*80)],iface="<TG interface>")
 
-    >>> sendp([Ether(dst="<dst mac>",type=0x8100)/Dot1Q(vlan=<outer vlan tci>,type=0x8100)/Dot1Q(vlan=0x<inner vlan tci change inputset>,type=0x0800)/IP()/UDP(sport=<sport>,dport=<dport>)/("X"*80)],iface="<tester interface>")
+    >>> sendp([Ether(dst="<dst mac>",type=0x8100)/Dot1Q(vlan=<outer vlan tci>,type=0x8100)/Dot1Q(vlan=0x<inner vlan tci change inputset>,type=0x0800)/IP()/UDP(sport=<sport>,dport=<dport>)/("X"*80)],iface="<TG interface>")
 
-    >>> sendp([Ether(dst="<dst mac>",type=0x8100)/Dot1Q(vlan=<outer vlan tci>,type=0x8100)/Dot1Q(vlan=0x<inner vlan tci>,type=0x0800)/IP()/UDP(sport=<sport change inputset>,dport=<dport>)/("X"*80)],iface="<tester interface>")
+    >>> sendp([Ether(dst="<dst mac>",type=0x8100)/Dot1Q(vlan=<outer vlan tci>,type=0x8100)/Dot1Q(vlan=0x<inner vlan tci>,type=0x0800)/IP()/UDP(sport=<sport change inputset>,dport=<dport>)/("X"*80)],iface="<TG interface>")
 
-    >>> sendp([Ether(dst="<dst mac>",type=0x8100)/Dot1Q(vlan=<outer vlan tci>,type=0x8100)/Dot1Q(vlan=0x<inner vlan tci>,type=0x0800)/IP()/UDP(sport=<sport>,dport=<dport change inputset>)/("X"*80)],iface="<tester interface>")
+    >>> sendp([Ether(dst="<dst mac>",type=0x8100)/Dot1Q(vlan=<outer vlan tci>,type=0x8100)/Dot1Q(vlan=0x<inner vlan tci>,type=0x0800)/IP()/UDP(sport=<sport>,dport=<dport change inputset>)/("X"*80)],iface="<TG interface>")
 
 6. Destroy a rule and list rules::
 
@@ -5393,31 +5393,31 @@  Test Steps
    ID      Group   Prio    Attr    Rule
    0       0       0       i--     ETH VLAN VLAN IPV4 TCP => RSS
 
-4. Send matched packet in scapy on tester, check the DUT received this packet and the action is right.
+4. Send matched packet in scapy on TG, check the SUT received this packet and the action is right.
 
-Tester::
+TG::
 
-    >>> sendp([Ether(dst="<dst mac>",type=0x8100)/Dot1Q(vlan=<outer vlan tci>,type=0x8100)/Dot1Q(vlan=0x<inner vlan tci>,type=0x0800)/IP()/TCP(sport=<sport>,dport=<dport>)/("X"*80)],iface="<tester interface>")
+    >>> sendp([Ether(dst="<dst mac>",type=0x8100)/Dot1Q(vlan=<outer vlan tci>,type=0x8100)/Dot1Q(vlan=0x<inner vlan tci>,type=0x0800)/IP()/TCP(sport=<sport>,dport=<dport>)/("X"*80)],iface="<TG interface>")
 
-DUT::
+SUT::
 
     testpmd> port 0/queue 5: received 1 packets
   src=00:00:00:00:00:00 - dst=00:11:22:33:44:55 - type=0x8100 - length=142 - nb_segs=1 - RSS hash=0xddc4fdb3 - RSS queue=0x5 - hw ptype: L2_ETHER L3_IPV4_EXT_UNKNOWN L4_TCP  - sw ptype: L2_ETHER_VLAN INNER_L2_ETHER_VLAN INNER_L3_IPV4 INNER_L4_TCP  - l2_len=18 - inner_l2_len=4 - inner_l3_len=20 - inner_l4_len=20 - Receive queue=0x5
   ol_flags: RTE_MBUF_F_RX_RSS_HASH RTE_MBUF_F_RX_L4_CKSUM_GOOD RTE_MBUF_F_RX_IP_CKSUM_GOOD RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD
 
-5. Send mismatched packet in scapy on tester, check the DUT received this packet and the action is not right.
+5. Send mismatched packet in scapy on TG, check the SUT received this packet and the action is not right.
 
-Tester::
+TG::
 
-    >>> sendp([Ether(dst="<dst mac change inputset>",type=0x8100)/Dot1Q(vlan=<outer vlan tci>,type=0x8100)/Dot1Q(vlan=0x<inner vlan tci>,type=0x0800)/IP()/TCP(sport=<sport>,dport=<dport>)/("X"*80)],iface="<tester interface>")
+    >>> sendp([Ether(dst="<dst mac change inputset>",type=0x8100)/Dot1Q(vlan=<outer vlan tci>,type=0x8100)/Dot1Q(vlan=0x<inner vlan tci>,type=0x0800)/IP()/TCP(sport=<sport>,dport=<dport>)/("X"*80)],iface="<TG interface>")
 
-    >>> sendp([Ether(dst="<dst mac>",type=0x8100)/Dot1Q(vlan=<outer vlan tci change inputset>,type=0x8100)/Dot1Q(vlan=0x<inner vlan tci>,type=0x0800)/IP()/TCP(sport=<sport>,dport=<dport>)/("X"*80)],iface="<tester interface>")
+    >>> sendp([Ether(dst="<dst mac>",type=0x8100)/Dot1Q(vlan=<outer vlan tci change inputset>,type=0x8100)/Dot1Q(vlan=0x<inner vlan tci>,type=0x0800)/IP()/TCP(sport=<sport>,dport=<dport>)/("X"*80)],iface="<TG interface>")
 
-    >>> sendp([Ether(dst="<dst mac>",type=0x8100)/Dot1Q(vlan=<outer vlan tci>,type=0x8100)/Dot1Q(vlan=0x<inner vlan tci change inputset>,type=0x0800)/IP()/TCP(sport=<sport>,dport=<dport>)/("X"*80)],iface="<tester interface>")
+    >>> sendp([Ether(dst="<dst mac>",type=0x8100)/Dot1Q(vlan=<outer vlan tci>,type=0x8100)/Dot1Q(vlan=0x<inner vlan tci change inputset>,type=0x0800)/IP()/TCP(sport=<sport>,dport=<dport>)/("X"*80)],iface="<TG interface>")
 
-    >>> sendp([Ether(dst="<dst mac>",type=0x8100)/Dot1Q(vlan=<outer vlan tci>,type=0x8100)/Dot1Q(vlan=0x<inner vlan tci>,type=0x0800)/IP()/TCP(sport=<sport change inputset>,dport=<dport>)/("X"*80)],iface="<tester interface>")
+    >>> sendp([Ether(dst="<dst mac>",type=0x8100)/Dot1Q(vlan=<outer vlan tci>,type=0x8100)/Dot1Q(vlan=0x<inner vlan tci>,type=0x0800)/IP()/TCP(sport=<sport change inputset>,dport=<dport>)/("X"*80)],iface="<TG interface>")
 
-    >>> sendp([Ether(dst="<dst mac>",type=0x8100)/Dot1Q(vlan=<outer vlan tci>,type=0x8100)/Dot1Q(vlan=0x<inner vlan tci>,type=0x0800)/IP()/TCP(sport=<sport>,dport=<dport change inputset>)/("X"*80)],iface="<tester interface>")
+    >>> sendp([Ether(dst="<dst mac>",type=0x8100)/Dot1Q(vlan=<outer vlan tci>,type=0x8100)/Dot1Q(vlan=0x<inner vlan tci>,type=0x0800)/IP()/TCP(sport=<sport>,dport=<dport change inputset>)/("X"*80)],iface="<TG interface>")
 
 6. Destroy a rule and list rules::
 
@@ -5465,25 +5465,25 @@  Test Steps
    ID      Group   Prio    Attr    Rule
    0       0       0       i--     ETH VLAN VLAN IPV6 UDP => DROP
 
-4. Send matched packet in scapy on tester, check the DUT received this packet and the action is right.
+4. Send matched packet in scapy on TG, check the SUT received this packet and the action is right.
 
-Tester::
+TG::
 
-    >>> sendp([Ether(dst="<dst mac>",type=0x8100)/Dot1Q(vlan=<outer vlan tci>,type=0x8100)/Dot1Q(vlan=0x<inner vlan tci>,type=0x0800)/IPv6(dst="<ipv6 dst>")/UDP(sport=<sport>,dport=<dport>)/("X"*80)],iface="<tester interface>")
+    >>> sendp([Ether(dst="<dst mac>",type=0x8100)/Dot1Q(vlan=<outer vlan tci>,type=0x8100)/Dot1Q(vlan=0x<inner vlan tci>,type=0x0800)/IPv6(dst="<ipv6 dst>")/UDP(sport=<sport>,dport=<dport>)/("X"*80)],iface="<TG interface>")
 
-5. Send mismatched packet in scapy on tester, check the DUT received this packet and the action is not right.
+5. Send mismatched packet in scapy on TG, check the SUT received this packet and the action is not right.
 
-Tester::
+TG::
 
-    >>> sendp([Ether(dst="<dst mac change inputset>",type=0x8100)/Dot1Q(vlan=<outer vlan tci>,type=0x8100)/Dot1Q(vlan=0x<inner vlan tci>,type=0x0800)/IPv6(dst="<ipv6 dst>")/UDP(sport=<sport>,dport=<dport>)/("X"*80)],iface="<tester interface>")
+    >>> sendp([Ether(dst="<dst mac change inputset>",type=0x8100)/Dot1Q(vlan=<outer vlan tci>,type=0x8100)/Dot1Q(vlan=0x<inner vlan tci>,type=0x0800)/IPv6(dst="<ipv6 dst>")/UDP(sport=<sport>,dport=<dport>)/("X"*80)],iface="<TG interface>")
 
-    >>> sendp([Ether(dst="<dst mac>",type=0x8100)/Dot1Q(vlan=<outer vlan tci change inputset>,type=0x8100)/Dot1Q(vlan=0x<inner vlan tci>,type=0x0800)/IPv6(dst="<ipv6 dst>")/UDP(sport=<sport>,dport=<dport>)/("X"*80)],iface="<tester interface>")
+    >>> sendp([Ether(dst="<dst mac>",type=0x8100)/Dot1Q(vlan=<outer vlan tci change inputset>,type=0x8100)/Dot1Q(vlan=0x<inner vlan tci>,type=0x0800)/IPv6(dst="<ipv6 dst>")/UDP(sport=<sport>,dport=<dport>)/("X"*80)],iface="<TG interface>")
 
-    >>> sendp([Ether(dst="<dst mac>",type=0x8100)/Dot1Q(vlan=<outer vlan tci>,type=0x8100)/Dot1Q(vlan=0x<inner vlan tci change inputset>,type=0x0800)/IPv6(dst="<ipv6 dst>")/UDP(sport=<sport>,dport=<dport>)/("X"*80)],iface="<tester interface>")
+    >>> sendp([Ether(dst="<dst mac>",type=0x8100)/Dot1Q(vlan=<outer vlan tci>,type=0x8100)/Dot1Q(vlan=0x<inner vlan tci change inputset>,type=0x0800)/IPv6(dst="<ipv6 dst>")/UDP(sport=<sport>,dport=<dport>)/("X"*80)],iface="<TG interface>")
 
-    >>> sendp([Ether(dst="<dst mac>",type=0x8100)/Dot1Q(vlan=<outer vlan tci>,type=0x8100)/Dot1Q(vlan=0x<inner vlan tci>,type=0x0800)/IPv6(dst="<ipv6 dst change inputset>")/UDP(sport=<sport>,dport=<dport>)/("X"*80)],iface="<tester interface>")
+    >>> sendp([Ether(dst="<dst mac>",type=0x8100)/Dot1Q(vlan=<outer vlan tci>,type=0x8100)/Dot1Q(vlan=0x<inner vlan tci>,type=0x0800)/IPv6(dst="<ipv6 dst change inputset>")/UDP(sport=<sport>,dport=<dport>)/("X"*80)],iface="<TG interface>")
 
-    >>> sendp([Ether(dst="<dst mac>",type=0x8100)/Dot1Q(vlan=<outer vlan tci>,type=0x8100)/Dot1Q(vlan=0x<inner vlan tci>,type=0x0800)/IPv6(dst="<ipv6 dst>")/UDP(sport=<sport>,dport=<dport change inputset>)/("X"*80)],iface="<tester interface>")
+    >>> sendp([Ether(dst="<dst mac>",type=0x8100)/Dot1Q(vlan=<outer vlan tci>,type=0x8100)/Dot1Q(vlan=0x<inner vlan tci>,type=0x0800)/IPv6(dst="<ipv6 dst>")/UDP(sport=<sport>,dport=<dport change inputset>)/("X"*80)],iface="<TG interface>")
 
 6. Destroy a rule and list rules::
 
@@ -5529,31 +5529,31 @@  Test Steps
    ID      Group   Prio    Attr    Rule
    0       0       0       i--     ETH VLAN VLAN IPV6 TCP => QUEUE
 
-4. Send matched packet in scapy on tester, check the DUT received this packet and the action is right.
+4. Send matched packet in scapy on TG, check the SUT received this packet and the action is right.
 
-Tester::
+TG::
 
-    >>> sendp([Ether(dst="<dst mac>",type=0x8100)/Dot1Q(vlan=<outer vlan tci>,type=0x8100)/Dot1Q(vlan=0x<inner vlan tci>,type=0x0800)/IPv6(dst="<ipv6 dst>")/TCP(sport=<sport>,dport=<dport>)/("X"*80)],iface="<tester interface>")
+    >>> sendp([Ether(dst="<dst mac>",type=0x8100)/Dot1Q(vlan=<outer vlan tci>,type=0x8100)/Dot1Q(vlan=0x<inner vlan tci>,type=0x0800)/IPv6(dst="<ipv6 dst>")/TCP(sport=<sport>,dport=<dport>)/("X"*80)],iface="<TG interface>")
 
-DUT::
+SUT::
 
     testpmd> port 0/queue 7: received 1 packets
   src=00:00:00:00:00:00 - dst=00:11:22:33:44:55 - type=0x8100 - length=162 - nb_segs=1 - RSS hash=0xc5dfbe3f - RSS queue=0x7 - hw ptype: L2_ETHER L3_IPV6_EXT_UNKNOWN L4_TCP  - sw ptype: L2_ETHER_VLAN INNER_L2_ETHER_VLAN INNER_L3_IPV6 INNER_L4_TCP  - l2_len=18 - inner_l2_len=4 - inner_l3_len=40 - inner_l4_len=20 - Receive queue=0x7
   ol_flags: RTE_MBUF_F_RX_RSS_HASH RTE_MBUF_F_RX_L4_CKSUM_GOOD RTE_MBUF_F_RX_IP_CKSUM_GOOD RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD
 
-5. Send mismatched packet in scapy on tester, check the DUT received this packet and the action is not right.
+5. Send mismatched packet in scapy on TG, check the SUT received this packet and the action is not right.
 
-Tester::
+TG::
 
-    >>> sendp([Ether(dst="<dst mac change inputset>",type=0x8100)/Dot1Q(vlan=<outer vlan tci>,type=0x8100)/Dot1Q(vlan=0x<inner vlan tci>,type=0x0800)/IPv6(dst="<ipv6 dst>")/TCP(sport=<sport>,dport=<dport>)/("X"*80)],iface="<tester interface>")
+    >>> sendp([Ether(dst="<dst mac change inputset>",type=0x8100)/Dot1Q(vlan=<outer vlan tci>,type=0x8100)/Dot1Q(vlan=0x<inner vlan tci>,type=0x0800)/IPv6(dst="<ipv6 dst>")/TCP(sport=<sport>,dport=<dport>)/("X"*80)],iface="<TG interface>")
 
-    >>> sendp([Ether(dst="<dst mac>",type=0x8100)/Dot1Q(vlan=<outer vlan tci change inputset>,type=0x8100)/Dot1Q(vlan=0x<inner vlan tci>,type=0x0800)/IPv6(dst="<ipv6 dst>")/TCP(sport=<sport>,dport=<dport>)/("X"*80)],iface="<tester interface>")
+    >>> sendp([Ether(dst="<dst mac>",type=0x8100)/Dot1Q(vlan=<outer vlan tci change inputset>,type=0x8100)/Dot1Q(vlan=0x<inner vlan tci>,type=0x0800)/IPv6(dst="<ipv6 dst>")/TCP(sport=<sport>,dport=<dport>)/("X"*80)],iface="<TG interface>")
 
-    >>> sendp([Ether(dst="<dst mac>",type=0x8100)/Dot1Q(vlan=<outer vlan tci>,type=0x8100)/Dot1Q(vlan=0x<inner vlan tci change inputset>,type=0x0800)/IPv6(dst="<ipv6 dst>")/TCP(sport=<sport>,dport=<dport>)/("X"*80)],iface="<tester interface>")
+    >>> sendp([Ether(dst="<dst mac>",type=0x8100)/Dot1Q(vlan=<outer vlan tci>,type=0x8100)/Dot1Q(vlan=0x<inner vlan tci change inputset>,type=0x0800)/IPv6(dst="<ipv6 dst>")/TCP(sport=<sport>,dport=<dport>)/("X"*80)],iface="<TG interface>")
 
-    >>> sendp([Ether(dst="<dst mac>",type=0x8100)/Dot1Q(vlan=<outer vlan tci>,type=0x8100)/Dot1Q(vlan=0x<inner vlan tci>,type=0x0800)/IPv6(dst="<ipv6 dst change inputset>")/TCP(sport=<sport>,dport=<dport>)/("X"*80)],iface="<tester interface>")
+    >>> sendp([Ether(dst="<dst mac>",type=0x8100)/Dot1Q(vlan=<outer vlan tci>,type=0x8100)/Dot1Q(vlan=0x<inner vlan tci>,type=0x0800)/IPv6(dst="<ipv6 dst change inputset>")/TCP(sport=<sport>,dport=<dport>)/("X"*80)],iface="<TG interface>")
 
-    >>> sendp([Ether(dst="<dst mac>",type=0x8100)/Dot1Q(vlan=<outer vlan tci>,type=0x8100)/Dot1Q(vlan=0x<inner vlan tci>,type=0x0800)/IPv6(dst="<ipv6 dst>")/TCP(sport=<sport>,dport=<dport change inputset>)/("X"*80)],iface="<tester interface>")
+    >>> sendp([Ether(dst="<dst mac>",type=0x8100)/Dot1Q(vlan=<outer vlan tci>,type=0x8100)/Dot1Q(vlan=0x<inner vlan tci>,type=0x0800)/IPv6(dst="<ipv6 dst>")/TCP(sport=<sport>,dport=<dport change inputset>)/("X"*80)],iface="<TG interface>")
 
 6. Destroy a rule and list rules::
 
diff --git a/test_plans/inline_ipsec_test_plan.rst b/test_plans/inline_ipsec_test_plan.rst
index bb5c659f..f725dc43 100644
--- a/test_plans/inline_ipsec_test_plan.rst
+++ b/test_plans/inline_ipsec_test_plan.rst
@@ -62,7 +62,7 @@  GCM: Galois Counter Mode
 
 Prerequisites
 =============
-2 *  10Gb Ethernet ports of the DUT are directly connected in full-duplex to
+2 *  10Gb Ethernet ports of the SUT are directly connected in full-duplex to
 different ports of the peer traffic generator.
 
 Bind two ports to vfio-pci.
@@ -116,14 +116,14 @@  Test Case: IPSec Encryption
 ===========================
 Start ipsec-secgw with two 82599 ports and assign port 1 to unprotected mode::
 
-	sudo ./x86_64-default-linuxapp-gcc/examples/dpdk-ipsec-secgw -l 20,21 -a 83:00.0 -a 83:00.1 --vdev 
-	"crypto_null" --log-level 8 --socket-mem 1024,1 -- -p 0xf -P -u 
+	sudo ./x86_64-default-linuxapp-gcc/examples/dpdk-ipsec-secgw -l 20,21 -a 83:00.0 -a 83:00.1 --vdev
+	"crypto_null" --log-level 8 --socket-mem 1024,1 -- -p 0xf -P -u
 	0x2 --config="(0,0,20),(1,0,21)" -f ./enc.cfg
 
 Use scapy to listen on unprotected port::
 
     sniff(iface='%s',count=1,timeout=10)
-	
+
 Use scapy send burst(32) normal packets with dst ip (192.168.105.0) to protected port.
 
 Check burst esp packets received from unprotected port::
@@ -166,11 +166,11 @@  Test Case: IPSec Encryption with Jumboframe
 ===========================================
 Start ipsec-secgw with two 82599 ports and assign port 1 to unprotected mode::
 
-	sudo ./x86_64-default-linuxapp-gcc/examples/dpdk-ipsec-secgw -l 20,21 -a 83:00.0 -a 83:00.1 --vdev 
-	"crypto_null" --log-level 8 --socket-mem 1024,1 -- -p 0xf -P -u 
+	sudo ./x86_64-default-linuxapp-gcc/examples/dpdk-ipsec-secgw -l 20,21 -a 83:00.0 -a 83:00.1 --vdev
+	"crypto_null" --log-level 8 --socket-mem 1024,1 -- -p 0xf -P -u
 	0x2 --config="(0,0,20),(1,0,21)" -f ./enc.cfg
 
-Use scapy to listen on unprotected port 
+Use scapy to listen on unprotected port
 
 Default frame size is 1518, send burst(1000) packets with dst ip (192.168.105.0) to protected port.
 
@@ -186,12 +186,12 @@  Check burst esp packets can't be received from unprotected port.
 
 Set jumbo frames size as 9000, start it with port 1 assigned to unprotected mode::
 
-	sudo ./x86_64-default-linuxapp-gcc/examples/dpdk-ipsec-secgw -l 20,21 -a 83:00.0 -a 83:00.1 --vdev 
-	"crypto_null" --log-level 8 --socket-mem 1024,1 -- -p 0xf -P -u 
+	sudo ./x86_64-default-linuxapp-gcc/examples/dpdk-ipsec-secgw -l 20,21 -a 83:00.0 -a 83:00.1 --vdev
+	"crypto_null" --log-level 8 --socket-mem 1024,1 -- -p 0xf -P -u
 	0x2 -j 9000 --config="(0,0,20),(1,0,21)" -f ./enc.cfg
 
-Use scapy to listen on unprotected port 
-	
+Use scapy to listen on unprotected port
+
 Send burst(8192) jumbo packets with dst ip (192.168.105.0) to protected port.
 
 Check burst jumbo packets received from unprotected port.
@@ -211,12 +211,12 @@  Create configuration file with multiple SP/SA/RT rules for different ip address.
 
 Start ipsec-secgw with two queues enabled on each port and port 1 assigned to unprotected mode::
 
-	sudo ./x86_64-default-linuxapp-gcc/examples/dpdk-ipsec-secgw -l 20,21 -a 83:00.0 -a 83:00.1 --vdev 
-	"crypto_null" --log-level 8 --socket-mem 1024,1 -- -p 0xf -P -u 
+	sudo ./x86_64-default-linuxapp-gcc/examples/dpdk-ipsec-secgw -l 20,21 -a 83:00.0 -a 83:00.1 --vdev
+	"crypto_null" --log-level 8 --socket-mem 1024,1 -- -p 0xf -P -u
 	0x2 --config="(0,0,20),(0,1,20),(1,0,21),(1,1,21)" -f ./enc_rss.cfg
 
-Use scapy to listen on unprotected port 
-	
+Use scapy to listen on unprotected port
+
 Send burst(32) packets with different dst ip to protected port.
 
 Check burst esp packets received from queue 0 and queue 1 on unprotected port.
@@ -231,13 +231,13 @@  Test Case: IPSec Decryption
 ===========================
 Start ipsec-secgw with two 82599 ports and assign port 1 to unprotected mode::
 
-	sudo ./x86_64-default-linuxapp-gcc/examples/dpdk-ipsec-secgw -l 20,21 -a 83:00.0 -a 83:00.1 --vdev 
-	"crypto_null" --log-level 8 --socket-mem 1024,1 -- -p 0xf -P -u 
+	sudo ./x86_64-default-linuxapp-gcc/examples/dpdk-ipsec-secgw -l 20,21 -a 83:00.0 -a 83:00.1 --vdev
+	"crypto_null" --log-level 8 --socket-mem 1024,1 -- -p 0xf -P -u
 	0x2 --config="(0,0,20),(1,0,21)" -f ./dec.cfg
 
 Send two burst(32) esp packets to unprotected port.
 
-First one will produce an error "IPSEC_ESP: failed crypto op" in the IPsec application, 
+First one will produce an error "IPSEC_ESP: failed crypto op" in the IPsec application,
 but it will setup the SA. Second one will decrypt and send back the decrypted packet.
 
 Check burst packets which have been decapsulated received from protected port
@@ -247,16 +247,16 @@  Test Case: IPSec Decryption with wrong key
 ==========================================
 Start ipsec-secgw with two 82599 ports and assign port 1 to unprotected mode::
 
-	sudo ./x86_64-default-linuxapp-gcc/examples/dpdk-ipsec-secgw -l 20,21 -a 83:00.0 -a 83:00.1 --vdev 
-	"crypto_null" --log-level 8 --socket-mem 1024,1 -- -p 0xf -P -u 
+	sudo ./x86_64-default-linuxapp-gcc/examples/dpdk-ipsec-secgw -l 20,21 -a 83:00.0 -a 83:00.1 --vdev
+	"crypto_null" --log-level 8 --socket-mem 1024,1 -- -p 0xf -P -u
 	0x2 --config="(0,0,20),(1,0,21)" -f ./dec.cfg
 
 Change dec.cfg key is not same with send packet encrypted key
-	
+
 Send one burst(32) esp packets to unprotected port.
 
-IPsec application will produce an error "IPSEC_ESP: failed crypto op" , 
-but it will setup the SA. 
+IPsec application will produce an error "IPSEC_ESP: failed crypto op",
+but it will setup the SA.
 
 Send one burst(32) esp packets to unprotected port.
 
@@ -267,13 +267,13 @@  IPsec application will produce error "IPSEC_ESP: failed crypto op".
 Test Case: IPSec Decryption with Jumboframe
 ===========================================
 Start ipsec-secgw with two 82599 ports and assign port 1 to unprotected mode::
-	sudo ./x86_64-default-linuxapp-gcc/examples/dpdk-ipsec-secgw -l 20,21 -a 83:00.0 -a 83:00.1 --vdev 
-	"crypto_null" --log-level 8 --socket-mem 1024,1 -- -p 0xf -P -u 
+	sudo ./x86_64-default-linuxapp-gcc/examples/dpdk-ipsec-secgw -l 20,21 -a 83:00.0 -a 83:00.1 --vdev
+	"crypto_null" --log-level 8 --socket-mem 1024,1 -- -p 0xf -P -u
 	0x2 --config="(0,0,20),(1,0,21)" -f ./dec.cfg
 
 Default frame size is 1518, Send two burst(1000) esp packets to unprotected port.
 
-First one will produce an error "IPSEC_ESP: failed crypto op" in the IPsec application, 
+First one will produce an error "IPSEC_ESP: failed crypto op" in the IPsec application,
 but it will setup the SA. Second one will decrypt and send back the decrypted packet.
 
 Check burst(1000) packets which have been decapsulated received from protected port.
@@ -284,13 +284,13 @@  Check burst(8192) packets which have been decapsulated can't be received from pr
 
 Set jumbo frames size as 9000, start it with port 1 assigned to unprotected mode::
 
-	sudo ./x86_64-default-linuxapp-gcc/examples/dpdk-ipsec-secgw -l 20,21 -a 83:00.0 -a 83:00.1 --vdev 
-	"crypto_null" --log-level 8 --socket-mem 1024,1 -- -p 0xf -P -u 
+	sudo ./x86_64-default-linuxapp-gcc/examples/dpdk-ipsec-secgw -l 20,21 -a 83:00.0 -a 83:00.1 --vdev
+	"crypto_null" --log-level 8 --socket-mem 1024,1 -- -p 0xf -P -u
 	0x2 -j 9000 --config="(0,0,20),(1,0,21)" -f ./dec.cfg
 
 Send two burst(8192) esp packets to unprotected port.
 
-First one will produce an error "IPSEC_ESP: failed crypto op" in the IPsec application, 
+The first one will produce an error "IPSEC_ESP: failed crypto op" in the IPsec application,
 but it will set up the SA. The second one will decrypt and send back the decrypted packet.
 
 Check that the decapsulated burst(8192) packets are received from the protected port.
@@ -306,13 +306,13 @@  Create configuration file with multiple SA rule for different ip address.
 
 Start ipsec-secgw with two 82599 ports and assign port 1 to unprotected mode::
 
-	sudo ./x86_64-default-linuxapp-gcc/examples/dpdk-ipsec-secgw -l 20,21 -a 83:00.0 -a 83:00.1 --vdev 
-        "crypto_null" --log-level 8 --socket-mem 1024,1 -- -p 0xf -P -u 
+	sudo ./x86_64-default-linuxapp-gcc/examples/dpdk-ipsec-secgw -l 20,21 -a 83:00.0 -a 83:00.1 --vdev
+        "crypto_null" --log-level 8 --socket-mem 1024,1 -- -p 0xf -P -u
 	0x2 --config="(0,0,20),(0,1,20),(1,0,21),(1,1,21)" -f ./dec_rss.cfg
 
 Send two burst(32) esp packets with different ip to unprotected port.
 
-First one will produce an error "IPSEC_ESP: failed crypto op" in the IPsec application, 
+The first one will produce an error "IPSEC_ESP: failed crypto op" in the IPsec application,
 but it will set up the SA. The second one will decrypt and send back the decrypted packet.
 
 Check that the decapsulated burst(32) packets are received from queue 0 and
@@ -324,13 +324,13 @@  Test Case: IPSec Encryption/Decryption simultaneously
 Start ipsec-secgw with two 82599 ports and assign port 1 to unprotected mode::
 
 	sudo ./x86_64-default-linuxapp-gcc/examples/dpdk-ipsec-secgw -l 20,21 -a 83:00.0 -a 83:00.1
-        --vdev "crypto_null" --log-level 8 --socket-mem 1024,1 
+        --vdev "crypto_null" --log-level 8 --socket-mem 1024,1
         -- -p 0xf -P -u 0x2 --config="(0,0,20),(1,0,21)" -f ./enc_dec.cfg
-	
+
 Send normal and esp packets to protected and unprotected ports simultaneously.
 
-Note when testing inbound IPSec, first one will produce an error "IPSEC_ESP: 
-invalid padding" in the IPsec application, but it will setup the SA. Second 
+Note that when testing inbound IPSec, the first one will produce an error "IPSEC_ESP:
+invalid padding" in the IPsec application, but it will set up the SA. The second
 one will decrypt and send back the decrypted packet.
 
 Check esp and normal packets received from unprotected and protected ports.
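
The outbound half can be checked quickly on the TG by confirming that every
frame leaving the unprotected port carries an ESP header (a sketch; the capture
interface name is an assumption)::

    from scapy.all import sniff
    from scapy.layers.ipsec import ESP

    # capture what ipsec-secgw emits on the unprotected port
    pkts = sniff(iface="enp134s0f1", filter="ip proto 50", count=32, timeout=10)
    assert pkts and all(ESP in p for p in pkts)
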
diff --git a/test_plans/interrupt_pmd_test_plan.rst b/test_plans/interrupt_pmd_test_plan.rst
index fd15d334..80f3eced 100644
--- a/test_plans/interrupt_pmd_test_plan.rst
+++ b/test_plans/interrupt_pmd_test_plan.rst
@@ -18,7 +18,7 @@  interrupt.
 Prerequisites
 =============
 
-Each of the 10Gb Ethernet* ports of the DUT is directly connected in
+Each of the 10Gb Ethernet* ports of the SUT is directly connected in
 full-duplex to a different port of the peer traffic generator.
 
 Assume PF port PCI addresses are 0000:08:00.0 and 0000:08:00.1,
diff --git a/test_plans/ip_pipeline_test_plan.rst b/test_plans/ip_pipeline_test_plan.rst
index c504efa4..e2b647e8 100644
--- a/test_plans/ip_pipeline_test_plan.rst
+++ b/test_plans/ip_pipeline_test_plan.rst
@@ -12,38 +12,38 @@  application.
 
 Prerequisites
 ==============
-The DUT must have four 10G Ethernet ports connected to four ports on
-Tester that are controlled by the Scapy packet generator::
+The SUT must have four 10G Ethernet ports connected to four ports on
+TG that are controlled by the Scapy traffic generator::
 
-    dut_port_0 <---> tester_port_0
-    dut_port_1 <---> tester_port_1
-    dut_port_2 <---> tester_port_2
-    dut_port_3 <---> tester_port_3
+    SUT_port_0 <---> TG_port_0
+    SUT_port_1 <---> TG_port_1
+    SUT_port_2 <---> TG_port_2
+    SUT_port_3 <---> TG_port_3
 
-Assume four DUT 10G Ethernet ports' pci device id is as the following::
+Assume the four SUT 10G Ethernet ports' pci device ids are as follows::
 
-    dut_port_0 : "0000:05:00.0"
-    dut_port_1 : "0000:05:00.1"
-    dut_port_2 : "0000:05:00.2"
-    dut_port_3 : "0000:05:00.3"
+    SUT_port_0 : "0000:05:00.0"
+    SUT_port_1 : "0000:05:00.1"
+    SUT_port_2 : "0000:05:00.2"
+    SUT_port_3 : "0000:05:00.3"
 
 Bind them to dpdk igb_uio driver::
 
     ./usertools/dpdk-devbind.py -b igb_uio 05:00.0 05:00.1 05:00.2 05:00.3
 
 Notes:
->>> if using trex as packet generator::
+>>> if using trex as traffic generator::
 
     trex>
     portattr --prom on -a
     service --port 1
     capture monitor start --rx 1 -v
 
-The crypto cases need an IXIA as packet generator::
+The crypto cases need an IXIA as the traffic generator::
 
-    dut_port_0 <---> IXIA_port_0
+    SUT_port_0 <---> IXIA_port_0
 
-Change pci device id of LINK0 to pci device id of dut_port_0.
+Change pci device id of LINK0 to pci device id of SUT_port_0.
 There are two drivers supported now: aesni_gcm and aesni_mb.
 Different drivers support different Algorithms.
 
@@ -58,22 +58,22 @@  Test Case: l2fwd pipeline
 ===========================
 1. Edit examples/ip_pipeline/examples/l2fwd.cli,
    change pci device id of LINK0, LINK1, LINK2, LINK3 to pci device id of
-   dut_port_0, dut_port_1, dut_port_2, dut_port_3
+   SUT_port_0, SUT_port_1, SUT_port_2, SUT_port_3
 
 2. Run ip_pipeline app as the following::
 
     ./<build_target>/examples/dpdk-ip_pipeline -c 0x3 -n 4 -- -s examples/l2fwd.cli
 
-3. Send packets at tester side with scapy, verify:
+3. Send packets at TG side with scapy, verify:
 
-   packets sent from tester_port_0 can be received at tester_port_1, and vice versa.
-   packets sent from tester_port_2 can be received at tester_port_3, and vice versa.
+   packets sent from TG_port_0 can be received at TG_port_1, and vice versa.
+   packets sent from TG_port_2 can be received at TG_port_3, and vice versa.
 
 Test Case: flow classification pipeline
 =========================================
 1. Edit examples/ip_pipeline/examples/flow.cli,
    change pci device id of LINK0, LINK1, LINK2, LINK3 to pci device id of
-   dut_port_0, dut_port_1, dut_port_2, dut_port_3
+   SUT_port_0, SUT_port_1, SUT_port_2, SUT_port_3
 
 2. Run ip_pipeline app as the following::
 
@@ -86,16 +86,16 @@  Test Case: flow classification pipeline
     packet_3:Ether(dst="00:11:22:33:44:55")/IP(src="100.0.0.12",dst="200.0.0.12")/TCP(sport=102,dport=202)/Raw(load="X"*6)
     packet_4:Ether(dst="00:11:22:33:44:55")/IP(src="100.0.0.13",dst="200.0.0.13")/TCP(sport=103,dport=203)/Raw(load="X"*6)
 
-   Verify packet_1 was received by tester_port_0.
-   Verify packet_2 was received by tester_port_1.
-   Verify packet_3 was received by tester_port_2.
-   Verify packet_4 was received by tester_port_3.
+   Verify packet_1 was received by TG_port_0.
+   Verify packet_2 was received by TG_port_1.
+   Verify packet_3 was received by TG_port_2.
+   Verify packet_4 was received by TG_port_3.
 
 Test Case: routing pipeline
 =============================
 1. Edit examples/ip_pipeline/examples/route.cli,
    change pci device id of LINK0, LINK1, LINK2, LINK3 to pci device id of
-   dut_port_0, dut_port_1, dut_port_2, dut_port_3.
+   SUT_port_0, SUT_port_1, SUT_port_2, SUT_port_3.
 
 2. Run ip_pipeline app as the following::
 
@@ -108,16 +108,16 @@  Test Case: routing pipeline
     packet_3:Ether(dst="00:11:22:33:44:55")/IP(dst="100.128.0.1")/Raw(load="X"*26)
     packet_4:Ether(dst="00:11:22:33:44:55")/IP(dst="100.192.0.1")/Raw(load="X"*26)
 
-   Verify packet_1 was received by tester_port_0 and src_mac="a0:a1:a2:a3:a4:a5" dst_mac="00:01:02:03:04:05".
-   Verify packet_2 was received by tester_port_1 and src_mac="b0:b1:b2:b3:b4:b5" dst_mac="10:11:12:13:14:15".
-   Verify packet_3 was received by tester_port_2 and src_mac="c0:c1:c2:c3:c4:c5" dst_mac="20:21:22:23:24:25".
-   Verify packet_4 was received by tester_port_3 and src_mac="d0:d1:d2:d3:d4:d5" dst_mac="30:31:32:33:34:35".
+   Verify packet_1 was received by TG_port_0 and src_mac="a0:a1:a2:a3:a4:a5" dst_mac="00:01:02:03:04:05".
+   Verify packet_2 was received by TG_port_1 and src_mac="b0:b1:b2:b3:b4:b5" dst_mac="10:11:12:13:14:15".
+   Verify packet_3 was received by TG_port_2 and src_mac="c0:c1:c2:c3:c4:c5" dst_mac="20:21:22:23:24:25".
+   Verify packet_4 was received by TG_port_3 and src_mac="d0:d1:d2:d3:d4:d5" dst_mac="30:31:32:33:34:35".
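
The packet_1 check can be scripted on the TG, for example (a sketch; the
interface name and the assumption that the routed packet returns on the same
link are illustrative)::

    from scapy.all import Ether, IP, Raw, sendp, AsyncSniffer

    sniffer = AsyncSniffer(iface="tg0", count=1, timeout=5,
                           filter="ether src a0:a1:a2:a3:a4:a5")
    sniffer.start()
    sendp(Ether(dst="00:11:22:33:44:55") / IP(dst="100.0.0.1") /
          Raw(load="X" * 26), iface="tg0")
    sniffer.join()
    # the routing pipeline must have rewritten both mac addresses
    assert sniffer.results and \
        sniffer.results[0][Ether].dst == "00:01:02:03:04:05"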
 
 Test Case: firewall pipeline
 ==============================
 1. Edit examples/ip_pipeline/examples/firewall.cli,
    change pci device id of LINK0, LINK1, LINK2, LINK3 to pci device id of
-   dut_port_0, dut_port_1, dut_port_2, dut_port_3.
+   SUT_port_0, SUT_port_1, SUT_port_2, SUT_port_3.
 
 2. Run ip_pipeline app as the following::
 
@@ -130,29 +130,29 @@  Test Case: firewall pipeline
     packet_3:Ether(dst="00:11:22:33:44:55")/IP(dst="100.128.0.1")/TCP(sport=100,dport=200)/Raw(load="X"*6)
     packet_4:Ether(dst="00:11:22:33:44:55")/IP(dst="100.192.0.1")/TCP(sport=100,dport=200)/Raw(load="X"*6)
 
-   Verify packet_1 was received by tester_port_0.
-   Verify packet_2 was received by tester_port_1.
-   Verify packet_3 was received by tester_port_2.
-   Verify packet_4 was received by tester_port_3.
+   Verify packet_1 was received by TG_port_0.
+   Verify packet_2 was received by TG_port_1.
+   Verify packet_3 was received by TG_port_2.
+   Verify packet_4 was received by TG_port_3.
 
 Test Case: pipeline with tap
 ==============================
 1. Edit examples/ip_pipeline/examples/tap.cli,
-   change pci device id of LINK0, LINK1 to pci device id of dut_port_0, dut_port_1.
+   change pci device id of LINK0, LINK1 to pci device id of SUT_port_0, SUT_port_1.
 
 2. Run ip_pipeline app as the following::
 
     ./<build_target>/examples/dpdk-ip_pipeline -c 0x3 -n 4 -- -s examples/tap.cli
 
-3. Send packets at tester side with scapy, verify
-   packets sent from tester_port_0 can be received at tester_port_1, and vice versa.
+3. Send packets at TG side with scapy, verify
+   packets sent from TG_port_0 can be received at TG_port_1, and vice versa.
 
 Test Case: traffic management pipeline
 ========================================
-1. Connect dut_port_0 to one port of ixia network traffic generator.
+1. Connect SUT_port_0 to one port of the IXIA network traffic generator.
 
 2. Edit examples/ip_pipeline/examples/traffic_manager.cli,
-   change pci device id of LINK0 to pci device id of dut_port_0.
+   change pci device id of LINK0 to pci device id of SUT_port_0.
 
 3. Run ip_pipeline app as the following::
 
@@ -165,7 +165,7 @@  Test Case: RSS pipeline
 =========================
 1. Edit examples/ip_pipeline/examples/rss.cli,
    change pci device id of LINK0, LINK1, LINK2, LINK3 to pci device id of
-   dut_port_0, dut_port_1, dut_port_2, dut_port_3.
+   SUT_port_0, SUT_port_1, SUT_port_2, SUT_port_3.
 
 2. Run ip_pipeline app as the following::
 
@@ -178,10 +178,10 @@  Test Case: RSS pipeline
     packet_3:Ether(dst="00:11:22:33:44:55")/IP(src="100.0.10.1",dst="100.0.0.2")/Raw(load="X"*6)
     packet_4:Ether(dst="00:11:22:33:44:55")/IP(src="100.0.0.1",dst="100.0.10.2")/Raw(load="X"*6)
 
-   Verify packet_1 was received by tester_port_0.
-   Verify packet_2 was received by tester_port_1.
-   Verify packet_3 was received by tester_port_2.
-   Verify packet_4 was received by tester_port_3.
+   Verify packet_1 was received by TG_port_0.
+   Verify packet_2 was received by TG_port_1.
+   Verify packet_3 was received by TG_port_2.
+   Verify packet_4 was received by TG_port_3.
 
 Test Case: vf l2fwd pipeline(pf bound to dpdk driver)
 ======================================================
@@ -209,7 +209,7 @@  Test Case: vf l2fwd pipeline(pf bound to dpdk driver)
 
 3. Edit examples/ip_pipeline/examples/vf.cli,
    change pci device id of LINK0, LINK1, LINK2, LINK3 to pci device id of
-   dut_vf_port_0, dut_vf_port_1, dut_vf_port_2, dut_vf_port_3.
+   SUT_vf_port_0, SUT_vf_port_1, SUT_vf_port_2, SUT_vf_port_3.
 
 4. Run ip_pipeline app as the following::
 
@@ -218,7 +218,7 @@  Test Case: vf l2fwd pipeline(pf bound to dpdk driver)
 
    The exact format of port allowlist: domain:bus:devid:func
 
-5. Send packets at tester side with scapy::
+5. Send packets at TG side with scapy::
 
     packet_1:Ether(dst="00:11:22:33:44:55")/IP(src="100.0.0.1",dst="100.0.0.2")/Raw(load="X"*6)
     packet_2:Ether(dst="00:11:22:33:44:56")/IP(src="100.0.0.1",dst="100.0.0.2")/Raw(load="X"*6)
@@ -226,14 +226,14 @@  Test Case: vf l2fwd pipeline(pf bound to dpdk driver)
     packet_4:Ether(dst="00:11:22:33:44:58")/IP(src="100.0.0.1",dst="100.0.0.2")/Raw(load="X"*6)
 
    Verify:
-   Only packet_1 sent from tester_port_0 can be received at tester_port_1,
-   other packets sent from tester_port_0 cannot be received by any port.
-   Only packet_2 sent from tester_port_1 can be received at tester_port_0,
-   other packets sent from tester_port_1 cannot be received by any port.
-   Only packet_3 sent from tester_port_2 can be received at tester_port_3,
-   other packets sent from tester_port_2 cannot be received by any port.
-   Only packet_4 sent from tester_port_3 can be received at tester_port_2,
-   other packets sent from tester_port_3 cannot be received by any port.
+   Only packet_1 sent from TG_port_0 can be received at TG_port_1,
+   other packets sent from TG_port_0 cannot be received by any port.
+   Only packet_2 sent from TG_port_1 can be received at TG_port_0,
+   other packets sent from TG_port_1 cannot be received by any port.
+   Only packet_3 sent from TG_port_2 can be received at TG_port_3,
+   other packets sent from TG_port_2 cannot be received by any port.
+   Only packet_4 sent from TG_port_3 can be received at TG_port_2,
+   other packets sent from TG_port_3 cannot be received by any port.
 
 Test Case: vf l2fwd pipeline(pf bound to kernel driver)
 =========================================================
@@ -246,17 +246,17 @@  Test Case: vf l2fwd pipeline(pf bound to kernel driver)
 
 2. Set vf mac address::
 
-    ip link set dut_port_0 vf 0 mac 00:11:22:33:44:55
-    ip link set dut_port_1 vf 0 mac 00:11:22:33:44:56
-    ip link set dut_port_2 vf 0 mac 00:11:22:33:44:57
-    ip link set dut_port_3 vf 0 mac 00:11:22:33:44:58
+    ip link set SUT_port_0 vf 0 mac 00:11:22:33:44:55
+    ip link set SUT_port_1 vf 0 mac 00:11:22:33:44:56
+    ip link set SUT_port_2 vf 0 mac 00:11:22:33:44:57
+    ip link set SUT_port_3 vf 0 mac 00:11:22:33:44:58
 
    Disable spoof checking on vfs::
 
-    ip link set dut_port_0 vf 0 spoofchk off
-    ip link set dut_port_1 vf 0 spoofchk off
-    ip link set dut_port_2 vf 0 spoofchk off
-    ip link set dut_port_3 vf 0 spoofchk off
+    ip link set SUT_port_0 vf 0 spoofchk off
+    ip link set SUT_port_1 vf 0 spoofchk off
+    ip link set SUT_port_2 vf 0 spoofchk off
+    ip link set SUT_port_3 vf 0 spoofchk off
 
    Then bind the four vfs to dpdk vfio_pci driver::
 
@@ -264,13 +264,13 @@  Test Case: vf l2fwd pipeline(pf bound to kernel driver)
 
 3. Edit examples/ip_pipeline/examples/vf.cli,
    change pci device id of LINK0, LINK1, LINK2, LINK3 to pci device id of
-   dut_vf_port_0, dut_vf_port_1, dut_vf_port_2, dut_vf_port_3.
+   SUT_vf_port_0, SUT_vf_port_1, SUT_vf_port_2, SUT_vf_port_3.
 
 4. Run ip_pipeline app as the following::
 
     ./<build_target>/examples/dpdk-ip_pipeline -c 0x3 -n 4 -- -s examples/vf.cli
 
-5. Send packets at tester side with scapy::
+5. Send packets at TG side with scapy::
 
     packet_1:Ether(dst="00:11:22:33:44:55")/IP(src="100.0.0.1",dst="100.0.0.2")/Raw(load="X"*6)
     packet_2:Ether(dst="00:11:22:33:44:56")/IP(src="100.0.0.1",dst="100.0.0.2")/Raw(load="X"*6)
@@ -278,14 +278,14 @@  Test Case: vf l2fwd pipeline(pf bound to kernel driver)
     packet_4:Ether(dst="00:11:22:33:44:58")/IP(src="100.0.0.1",dst="100.0.0.2")/Raw(load="X"*6)
 
    Verify:
-   Only packet_1 sent from tester_port_0 can be received at tester_port_1,
-   other packets sent from tester_port_0 cannot be received by any port.
-   Only packet_2 sent from tester_port_1 can be received at tester_port_0,
-   other packets sent from tester_port_1 cannot be received by any port.
-   Only packet_3 sent from tester_port_2 can be received at tester_port_3,
-   other packets sent from tester_port_2 cannot be received by any port.
-   Only packet_4 sent from tester_port_3 can be received at tester_port_2,
-   other packets sent from tester_port_3 cannot be received by any port.
+   Only packet_1 sent from TG_port_0 can be received at TG_port_1,
+   other packets sent from TG_port_0 cannot be received by any port.
+   Only packet_2 sent from TG_port_1 can be received at TG_port_0,
+   other packets sent from TG_port_1 cannot be received by any port.
+   Only packet_3 sent from TG_port_2 can be received at TG_port_3,
+   other packets sent from TG_port_2 cannot be received by any port.
+   Only packet_4 sent from TG_port_3 can be received at TG_port_2,
+   other packets sent from TG_port_3 cannot be received by any port.
 
 Test Case: crypto pipeline - AEAD algorithm in aesni_gcm
 ===========================================================
diff --git a/test_plans/ipgre_test_plan.rst b/test_plans/ipgre_test_plan.rst
index 693432ef..2757f389 100644
--- a/test_plans/ipgre_test_plan.rst
+++ b/test_plans/ipgre_test_plan.rst
@@ -17,7 +17,7 @@  Prerequisites
 Intel® Ethernet 700 Series/
 Intel® Ethernet Network Adapter X710-T4L/
 Intel® Ethernet Network Adapter X710-T2L/
-Intel® Ethernet 800 Series nic should be on the DUT.
+Intel® Ethernet 800 Series NIC should be on the SUT.
 
 Test Case 1: GRE ipv4 packet detect
 ===================================
diff --git a/test_plans/ipsec_gw_and_library_test_plan.rst b/test_plans/ipsec_gw_and_library_test_plan.rst
index f023ed23..e44f4f1d 100644
--- a/test_plans/ipsec_gw_and_library_test_plan.rst
+++ b/test_plans/ipsec_gw_and_library_test_plan.rst
@@ -124,13 +124,13 @@  dpdk/doc/guides/cryptodevs/aesni_mb.rst.
 Test cases: IPSec Function test
 ==================================
 Description:
-The SUT and DUT are connected through at least 2 NIC ports.
+The SUT and TG are connected through at least 2 NIC ports.
 
 One NIC port is expected to be managed by linux on both machines and will be
 used as a control path.
 
 The second NIC port (test-port) should be bound to DPDK on the SUT, and should
-be managed by linux on the DUT.
+be managed by linux on the TG.
 
 The script starts ``ipsec-secgw`` with 2 NIC devices: ``test-port`` and
 ``tap vdev``.
@@ -145,13 +145,13 @@  Traffic going over the TAP port in both directions does not have to be protected
 Test Topology:
 ---------------
 
-Two servers are connected with one cable, Tester run DPDK ipsec-secgw sample
-which includes 1 hardware NIC bind and a virtual device, DUT run linux kernal ipsec stack,
+Two servers are connected with one cable. The SUT runs the DPDK ipsec-secgw sample,
+which includes one bound hardware NIC and a virtual device; the TG runs the Linux kernel IPsec stack.
 This test uses the Linux kernel IPsec stack to verify the DPDK IPsec stack::
 
                         +----------+                 +----------+
                         |          |                 |          |
-        11.11.11.1/24   |   Tester | 11.11.11.2/24   |   DUT    |
+        11.11.11.1/24   |   TG     | 11.11.11.2/24   |   SUT    |
     dtap0 ------------> |          | --------------> |          |
                         |          |                 |          |
                         +----------+                 +----------+
@@ -224,8 +224,8 @@  AESNI_GCM device start cmd::
 Steps::
 
     1. start ipsec-secgw sample;
-    2. config tester kernal IPSec;
-    3. ping from DUT
+    2. configure TG kernel IPSec;
+    3. ping from SUT
     # ping 11.11.11.1
 
 Expected result::
@@ -241,6 +241,6 @@  Description::
 Steps::
 
     1. start ipsec-secgw sample;
-    2. config tester kernal IPSec;
-    3. ping from DUT with a packets exceeds MTU
+    2. configure TG kernel IPSec;
+    3. ping from SUT with packets exceeding the MTU
     # ping 11.11.11.1 -s 3000
diff --git a/test_plans/ipsec_gw_cryptodev_func_test_plan.rst b/test_plans/ipsec_gw_cryptodev_func_test_plan.rst
index 7239f46f..89394537 100644
--- a/test_plans/ipsec_gw_cryptodev_func_test_plan.rst
+++ b/test_plans/ipsec_gw_cryptodev_func_test_plan.rst
@@ -194,16 +194,16 @@  dpdk/doc/guides/cryptodevs/aesni_mb.rst.
 Test case: CryptoDev Function test
 ==================================
 
-For function test, the DUT forward UDP packets generated by scapy.
+For the function test, the SUT forwards UDP packets generated by scapy.
 
 After sending a single packet from Scapy, the CryptoDev function encrypts/decrypts the
 payload in the packet using the algorithm set on the command line. The ipsec-secgw sends the
-packet back to tester.
+packet back to the TG.
 
    +----------+                 +----------+
    |          |                 |          |
    |          | --------------> |          |
-   |  Tester  |                 |   DUT    |
+   |  TG      |                 |   SUT    |
    |          |                 |          |
    |          | <-------------> |          |
    +----------+                 +----------+
diff --git a/test_plans/ipv4_reassembly_test_plan.rst b/test_plans/ipv4_reassembly_test_plan.rst
index 7f721123..f8916498 100644
--- a/test_plans/ipv4_reassembly_test_plan.rst
+++ b/test_plans/ipv4_reassembly_test_plan.rst
@@ -44,8 +44,8 @@  Sends 1K packets split in 4 fragments each with a ``maxflows`` of 1K.
 
 It expects:
 
-  - 4K IP packets to be sent to the DUT.
-  - 1K TCP packets being forwarded back to the TESTER.
+  - 4K IP packets to be sent to the SUT.
+  - 1K TCP packets being forwarded back to the TG.
   - 1K packets with a valid TCP checksum.
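
Each 4-fragment flow can be generated on the TG with scapy's ``fragment()``
helper; the addresses, payload size and interface below are assumptions::

    from scapy.all import Ether, IP, TCP, fragment, sendp

    # one flow: a 2820-byte IP payload split into fragments of at most
    # 744 bytes (fragment size must be a multiple of 8), giving 4 fragments
    base = IP(src="10.0.0.1", dst="10.0.0.2", id=1) / TCP() / ("x" * 2800)
    frags = fragment(base, fragsize=744)
    sendp([Ether() / f for f in frags], iface="eth1")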
 
 
@@ -61,8 +61,8 @@  Sends 2K packets split in 4 fragments each with a ``maxflows`` of 1K.
 
 It expects:
 
-  - 8K IP packets to be sent to the DUT.
-  - 1K TCP packets being forwarded back to the TESTER.
+  - 8K IP packets to be sent to the SUT.
+  - 1K TCP packets being forwarded back to the TG.
   - 1K packets with a valid TCP checksum.
 
 
@@ -81,8 +81,8 @@  Sends 4K packets split in 7 fragments each with a ``maxflows`` of 4K.
 
 It expects:
 
-  - 28K IP packets to be sent to the DUT.
-  - 4K TCP packets being forwarded back to the TESTER.
+  - 28K IP packets to be sent to the SUT.
+  - 4K TCP packets being forwarded back to the TG.
   - 4K packets with a valid TCP checksum.
 
 
@@ -98,8 +98,8 @@  Sends 1100 packets split in 4 fragments each.
 
 It expects:
 
-  - 4400 IP packets to be sent to the DUT.
-  - 1K TCP packets being forwarded back to the TESTER.
+  - 4400 IP packets to be sent to the SUT.
+  - 1K TCP packets being forwarded back to the TG.
   - 1K packets with a valid TCP checksum.
 
 
@@ -107,8 +107,8 @@  Then waits until the ``flowttl`` timeout and sends 1K packets.
 
 It expects:
 
-  - 4K IP packets to be sent to the DUT.
-  - 1K TCP packets being forwarded back to the TESTER.
+  - 4K IP packets to be sent to the SUT.
+  - 1K TCP packets being forwarded back to the TG.
   - 1K packets with a valid TCP checksum.
 
 
@@ -124,24 +124,24 @@  Sends 1K packets with ``maxflows`` equal to 1023.
 
 It expects:
 
-  - 4092 IP packets to be sent to the DUT.
-  - 1023 TCP packets being forwarded back to the TESTER.
+  - 4092 IP packets to be sent to the SUT.
+  - 1023 TCP packets being forwarded back to the TG.
   - 1023 packets with a valid TCP checksum.
 
 Then sends 1023 packets.
 
 It expects:
 
-  - 4092 IP packets to be sent to the DUT.
-  - 1023 TCP packets being forwarded back to the TESTER.
+  - 4092 IP packets to be sent to the SUT.
+  - 1023 TCP packets being forwarded back to the TG.
   - 1023 packets with a valid TCP checksum.
 
 Finally waits until the ``flowttl`` timeout and re-send 1K packets.
 
 It expects:
 
-  - 4092 IP packets to be sent to the DUT.
-  - 1023 TCP packets being forwarded back to the TESTER.
+  - 4092 IP packets to be sent to the SUT.
+  - 1023 TCP packets being forwarded back to the TG.
   - 1023 packets with a valid TCP checksum.
 
 
@@ -158,8 +158,8 @@  fragments per packet is 4.
 
 It expects:
 
-  - 5 IP packets to be sent to the DUT.
-  - 0 TCP packets being forwarded back to the TESTER.
+  - 5 IP packets to be sent to the SUT.
+  - 0 TCP packets being forwarded back to the TG.
   - 0 packets with a valid TCP checksum.
 
 
@@ -177,8 +177,8 @@  until the ``flowttl`` timeout. Then sends the 4th fragment.
 
 It expects:
 
-  - 4 IP packets to be sent to the DUT.
-  - 0 TCP packets being forwarded back to the TESTER.
+  - 4 IP packets to be sent to the SUT.
+  - 0 TCP packets being forwarded back to the TG.
   - 0 packets with a valid TCP checksum.
 
 
@@ -197,8 +197,8 @@  MTU previously defined.
 
 It expects:
 
-  - 4K IP packets to be sent to the DUT.
-  - 1K TCP packets being forwarded back to the TESTER.
+  - 4K IP packets to be sent to the SUT.
+  - 1K TCP packets being forwarded back to the TG.
   - 1K packets with a valid TCP checksum.
 
 
@@ -215,6 +215,6 @@  enabling support within the sample app.
 
 It expects:
 
-  - 4K IP packets to be sent to the DUT.
-  - 0 TCP packets being forwarded back to the TESTER.
+  - 4K IP packets to be sent to the SUT.
+  - 0 TCP packets being forwarded back to the TG.
   - 0 packets with a valid TCP checksum.
diff --git a/test_plans/ixgbe_vf_get_extra_queue_information_test_plan.rst b/test_plans/ixgbe_vf_get_extra_queue_information_test_plan.rst
index 9e21e6df..4252edac 100644
--- a/test_plans/ixgbe_vf_get_extra_queue_information_test_plan.rst
+++ b/test_plans/ixgbe_vf_get_extra_queue_information_test_plan.rst
@@ -18,7 +18,7 @@  Prerequisites
 
 1. Hardware:
    Ixgbe
-   connect tester to pf with cable.
+   connect the TG to the PF with a cable.
 
 2. software:
    dpdk: http://dpdk.org/git/dpdk
@@ -80,7 +80,7 @@  Test case 1: DPDK PF, kernel VF, enable DCB mode with TC=4
 
   there is 1 tx queue and there are 4 rx queues, matching the TC number.
 
-4. send packet from tester to VF::
+4. send packet from TG to VF::
 
     pkt1 = Ether(dst="2e:ae:7f:16:6f:e7", src="00:02:00:00:00:01")/Dot1Q(prio=0, vlan=0)/IP()/Raw('x'*20)
     pkt2 = Ether(dst="2e:ae:7f:16:6f:e7", src="00:02:00:00:00:01")/Dot1Q(prio=1, vlan=0)/IP()/Raw('x'*20)
@@ -132,7 +132,7 @@  Test case 2: DPDK PF, kernel VF, disable DCB mode
 
   there are 2 tx queues and 2 rx queues by default.
 
-4. send packet from tester to VF::
+4. send packet from TG to VF::
 
     pkt1 = Ether(dst="2e:ae:7f:16:6f:e7", src="00:02:00:00:00:01")/IP()/Raw('x'*20)
     pkt2 = Ether(dst="2e:ae:7f:16:6f:e7", src="00:02:00:00:00:01")/IP(src="192.168.0.1", dst="192.168.0.3")/UDP(sport=23,dport=24)/Raw('x'*20)
diff --git a/test_plans/kernelpf_iavf_test_plan.rst b/test_plans/kernelpf_iavf_test_plan.rst
index 6f65a529..f85204db 100644
--- a/test_plans/kernelpf_iavf_test_plan.rst
+++ b/test_plans/kernelpf_iavf_test_plan.rst
@@ -21,7 +21,7 @@  ICE driver NIC.
 
 Prerequisites
 =============
-Get the pci device id of DUT, for example::
+Get the pci device id of NIC ports, for example::
 
     ./usertools/dpdk-devbind.py -s
 
@@ -48,11 +48,11 @@  Test case: VF basic RX/TX
 =========================
 Set rxonly forward, start testpmd
 
-Send 100 random packets from tester, check packets can be received
+Send 100 random packets from TG, check packets can be received
 
 Set txonly forward, start testpmd
 
-Check tester could receive the packets from application generated
+Check the TG could receive the packets generated by the application
 
 
 Test case: VF MAC filter
@@ -309,9 +309,9 @@  Check that out from VF contains the vlan tag.
 Test case: VF without jumboframe
 ================================
 
-Ensure tester's port supports sending jumboframe::
+Ensure TG's port supports sending jumboframe::
 
-    ifconfig 'tester interface' mtu 9000
+    ifconfig 'TG interface' mtu 9000
 
 Launch testpmd for VF port without enabling jumboframe option::
 
@@ -327,9 +327,9 @@  Check packet more than the standard maximum frame (1518) can not be received.
 Test case: VF with jumboframe
 =============================
 
-Ensure tester's port supports sending jumboframe::
+Ensure TG's port supports sending jumboframe::
 
-    ifconfig 'tester interface' mtu 9000
+    ifconfig 'TG interface' mtu 9000
 
 Launch testpmd for VF port with jumboframe option::
 
@@ -477,13 +477,13 @@  forwarded by VF port have the correct checksum value::
 
 Test case: VF TSO
 =================
-Turn off all hardware offloads on tester machine::
+Turn off all hardware offloads on TG machine::
 
    ethtool -K eth1 rx off tx off tso off gso off gro off lro off
 
 Change mtu for large packet::
 
-   ifconfig 'tester interface' mtu 9000
+   ifconfig 'TG interface' mtu 9000
 
 Launch the ``testpmd`` with the following arguments, add "--max-pkt-len"
 for large packet::
@@ -509,7 +509,7 @@  Set TSO turned on, set TSO size as 1460::
     testpmd> port start all
     testpmd> start
 
-Send few IP/TCP packets from tester machine to DUT. Check IP/TCP checksum
+Send a few IP/TCP packets from the TG machine to the SUT. Check IP/TCP checksum
 correctness in captured packet and verify correctness of HW TSO offload
 for large packets. One large TCP packet (5214 bytes + headers) segmented
 to four fragments (1460 bytes+header, 1460 bytes+header, 1460 bytes+header
@@ -547,7 +547,7 @@  Set TSO turned off::
 
     testpmd> tso set 0 0
 
-Send few IP/TCP packets from tester machine to DUT. Check IP/TCP checksum
+Send a few IP/TCP packets from the TG machine to the SUT. Check IP/TCP checksum
 correctness in captured packet and verify correctness of HW TSO offload
 for large packets, but don't do packet segmentation.
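
A TG-side sketch of the large-packet send used in both TSO subcases (the mac
and ip addresses and the interface name are assumptions)::

    from scapy.all import Ether, IP, TCP, sendp

    # one 5214-byte TCP payload: with TSO on, the SUT should emit four
    # segments (3 x 1460 bytes + 834 bytes); with TSO off, one
    # unsegmented frame
    pkt = (Ether(dst="00:11:22:33:44:55") /
           IP(src="192.168.0.2", dst="192.168.0.3") /
           TCP(sport=1021, dport=1021) / ("x" * 5214))
    sendp(pkt, iface="eth1")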
 
@@ -564,7 +564,7 @@  Start VF port::
 
 Repeat above stop and start port for 10 times
 
-Send packets from tester
+Send packets from TG
 
 Check VF could receive packets
 
@@ -584,7 +584,7 @@  Check VF port stats, RX-packets and TX-packets are 0
 
 Set mac forward, enable print out
 
-Send 100 packets from tester
+Send 100 packets from TG
 
 Check VF port stats, RX-packets and TX-packets are 100
 
@@ -603,7 +603,7 @@  Show VF port information, check link, speed...information correctness::
 
 Set mac forward, enable print out
 
-Send 100 packets from tester
+Send 100 packets from TG
 
 Check VF port stats, RX-packets and TX-packets are 100
 
diff --git a/test_plans/kni_test_plan.rst b/test_plans/kni_test_plan.rst
index 3e21bbb3..1d148e74 100644
--- a/test_plans/kni_test_plan.rst
+++ b/test_plans/kni_test_plan.rst
@@ -78,9 +78,9 @@  to the device under test::
    modprobe vfio-pci
    usertools/dpdk-devbind.py --bind=vfio-pci device_bus_id
 
-The DUT has at least 2 DPDK supported IXGBE NIC ports.
+The SUT has at least 2 DPDK supported IXGBE NIC ports.
 
-The DUT has to be able to install rte_kni kernel module and launch kni
+The SUT has to be able to install rte_kni kernel module and launch kni
 application with a default configuration (this configuration may change from one
 system to another)::
 
@@ -230,8 +230,8 @@  it can receive all packets and no packet loss::
     1 packets transmitted, 1 received, 0% packet loss, time 0ms
     rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms
 
-Assume ``port A`` on tester is linked with ``port 2`` on DUT. Verify the
-command ping from tester::
+Assume ``port A`` on TG is linked with ``port 2`` on SUT. Verify the
+command ping from TG::
 
     ping -w 1 -I "port A" 192.168.2.1
 
@@ -261,7 +261,7 @@  it can receive all packets and no packet loss::
     1 packets transmitted, 1 received, 0% packet loss, time 0ms
     rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms
 
-Verify the command ping6 from tester::
+Verify the command ping6 from TG::
 
     ping6 -w 1 -I "port A" "Eth2_0's ipv6 address"
 
@@ -283,8 +283,8 @@  Repeat all the steps for interface ``vEth3_0``
 Test Case: Tcpdump testing
 ==========================
 
-Assume ``port A and B`` on packet generator connects to NIC ``port 2 and 3``.
-Trigger the packet generator of bursting packets from ``port A and B`, then
+Assume ``port A and B`` on the traffic generator connect to NIC ``port 2 and 3``.
+Trigger the traffic generator to burst packets from ``port A and B``, then
 check if tcpdump can capture all packets. The packets should include
 ``tcp`` packets, ``udp`` packets, ``icmp`` packets, ``ip`` packets,
 ``ether+vlan tag+ip`` packets, ``ether`` packets.
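
A scapy sketch of that packet mix (the TG interface name is an assumption)::

    from scapy.all import Ether, Dot1Q, IP, TCP, UDP, ICMP, sendp

    pkts = [Ether() / IP() / TCP(),           # tcp
            Ether() / IP() / UDP(),           # udp
            Ether() / IP() / ICMP(),          # icmp
            Ether() / IP(),                   # plain ip
            Ether() / Dot1Q(vlan=1) / IP(),   # ether + vlan tag + ip
            Ether()]                          # bare ether
    sendp(pkts, iface="eth1")
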
@@ -352,13 +352,13 @@  Assume that ``port 2 and 3`` are used by this application::
     insmod ./kmod/rte_kni.ko "lo_mode=lo_mode_ring_skb"
     ./<build_target>/examples/dpdk-kni -c 0xff -n 3 -- -p 0xf -i 0xf -o 0xf0
 
-Assume ``port A and B`` on tester connects to NIC ``port 2 and 3``.
+Assume ``port A and B`` on TG connect to NIC ``port 2 and 3``.
 
 Get the RX packets count and TX packets count::
 
     ifconfig vEth2_0
 
-Send 5 packets from tester. And check whether both RX and TX packets of
+Send 5 packets from the TG, and check whether both RX and TX packets of
 ``vEth2_0`` have increased by 5.
 
 Repeat for interface ``vEth3_0``
diff --git a/test_plans/l2fwd_cryptodev_func_test_plan.rst b/test_plans/l2fwd_cryptodev_func_test_plan.rst
index a4a7f3a1..c32db38a 100644
--- a/test_plans/l2fwd_cryptodev_func_test_plan.rst
+++ b/test_plans/l2fwd_cryptodev_func_test_plan.rst
@@ -312,18 +312,18 @@  dpdk/doc/guides/cryptodevs/zuc.rst.
 Test case: Cryptodev l2fwd test
 ===============================
 
-For function test, the DUT forward UDP packets generated by scapy.
+For the function test, the SUT forwards UDP packets generated by scapy.
 
 After sending a single packet from Scapy, the CryptoDev function encrypts/decrypts the
 payload in the packet using the algorithm set on the command line. The l2fwd-crypto
-forward the packet back to tester.
-Use TCPDump to capture the received packet on tester. Then tester parses the payload
+forwards the packet back to the TG.
+Use TCPDump to capture the received packet on the TG. Then the TG parses the payload
 and compares the payload with the correct answer pre-stored in scripts::
 
     +----------+                 +----------+
     |          |                 |          |
     |          | --------------> |          |
-    |  Tester  |                 |   DUT    |
+    |  TG      |                 |   SUT    |
     |          |                 |          |
     |          | <-------------> |          |
     +----------+                 +----------+
diff --git a/test_plans/l2fwd_test_plan.rst b/test_plans/l2fwd_test_plan.rst
index d8803cb5..adbc6e91 100644
--- a/test_plans/l2fwd_test_plan.rst
+++ b/test_plans/l2fwd_test_plan.rst
@@ -56,8 +56,8 @@  for the application itself for different test cases should be the same.
 Test Case: Port testing
 =======================
 
-Assume ``port A`` on packet generator connects to NIC ``port 0``, while ``port B``
-on packet generator connects to NIC ``port 1``. Set the destination mac address
+Assume ``port A`` on the traffic generator connects to NIC ``port 0``, while ``port B``
+on the traffic generator connects to NIC ``port 1``. Set the destination mac address
 of the packet stream to be sent out from ``port A`` to the mac address of
 ``port 0``, while the destination mac address of the packet stream to be sent out
 from ``port B`` to the mac address of ``port 1``. Other parameters of the packet
@@ -65,15 +65,15 @@  stream could be anything valid. Then run the test application as below::
 
     $ ./x86_64-native-linuxapp-gcc/examples/dpdk-l2fwd -n 1 -c f -- -q 8 -p 0x3
 
-Trigger the packet generator of bursting packets from ``port A``, then check if
+Trigger the traffic generator to burst packets from ``port A``, then check if
 ``port 0`` could receive them and ``port 1`` could forward them back. Stop it
-and then trigger the packet generator of bursting packets from ``port B``, then
+and then trigger the traffic generator to burst packets from ``port B``, then
 check if ``port 1`` could receive them and ``port 0`` could forward them back.
 
 Test Case: ``64/128/256/512/1024/1500`` bytes packet forwarding test
 ====================================================================
 
-Set the packet stream to be sent out from packet generator before testing as below.
+Set the packet stream to be sent out from the traffic generator before testing as below.
 
 +-------+---------+---------+---------+-----------+
 | Frame |    1q   |    2q   |   4q    |    8 q    |
@@ -102,6 +102,6 @@  Then run the test application as below::
 
 The -n command is used to select the number of memory channels. It should match the number of memory channels on that setup.
 
-Trigger the packet generator of bursting packets to the port 0 and 1 on the onboard
+Trigger the traffic generator of bursting packets to the port 0 and 1 on the onboard
 NIC to be tested. Then measure the forwarding throughput for different packet sizes
 and different number of queues.
diff --git a/test_plans/l2tp_esp_coverage_test_plan.rst b/test_plans/l2tp_esp_coverage_test_plan.rst
index d4f83d2b..3a789533 100644
--- a/test_plans/l2tp_esp_coverage_test_plan.rst
+++ b/test_plans/l2tp_esp_coverage_test_plan.rst
@@ -2,40 +2,40 @@ 
    Copyright(c) 2020 Intel Corporation
 
 ================================
-test coverage for L2TPv3 and ESP 
+test coverage for L2TPv3 and ESP
 ================================
 
 Description
 ===========
 For each protocol, below is a list of standard features supported by the
-Intel® Ethernet 800 Series hardware and the impact on the feature for
-each protocol.Some features are supported in a limited manner as stated below.
- 
+Intel® Ethernet 800 Series hardware and the impact on the feature for each protocol.
+Some features are supported in a limited manner as stated below.
+
 IPSec(ESP):
-L2 Tag offloads 
+L2 Tag offloads
 ----L2 Tag Stripping - Yes
 ----L2 Tag insertion - Yes
-Checksum offloads - Yes 
+Checksum offloads - Yes
 ----Only outer layer 3 checksum for IP+ESP and IP+AH packets
 ----Outer layer 3 and 4 checksum for ESP over UDP packets
-Manageability - No 
+Manageability - No
 ----Packets must be excluded
-RDMA - No 
+RDMA - No
 ----Packets must be excluded
-DCB 
+DCB
 ----Priority Flow Control - No
- 
+
 L2TPv3:
-L2 Tag offloads 
+L2 Tag offloads
 ----L2 Tag Stripping - Yes
 ----L2 Tag insertion - Yes
-Checksum offloads - Yes 
+Checksum offloads - Yes
 ----Only outer layer 3
-Manageability - No 
+Manageability - No
 ----Packets must be excluded
-RDMA - No 
+RDMA - No
 ----Packets must be excluded
-DCB 
+DCB
 ----Priority Flow Control - No
 
 This test plan is designed to check the above offloads for L2TPv3 and ESP.
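
All of the checksum cases below follow one pattern: scapy computes a correct
checksum when the field is left unset and carries a deliberately broken one
when it is forced, for example (the interface name is an assumption)::

    from scapy.all import Ether, IP, L2TP, Raw, sendp

    good = (Ether(dst="00:11:22:33:44:55") /
            IP(src="192.168.0.3", proto=115) /
            L2TP(b'\x00\x00\x00\x11') / Raw('x' * 480))
    bad = (Ether(dst="00:11:22:33:44:55") /
           IP(src="192.168.0.3", proto=115, chksum=0x123) /
           L2TP(b'\x00\x00\x00\x11') / Raw('x' * 480))
    sendp([good, bad], iface="enp134s0f0")
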
@@ -59,11 +59,11 @@  Prerequisites
 Test Case 1: test MAC_IPV4_L2TPv3 HW checksum offload
 =====================================================
 
-1. DUT enable rx checksum with "--enable-rx-cksum" when start testpmd::
+1. SUT enable rx checksum with "--enable-rx-cksum" when starting testpmd::
 
     ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -a af:01.0 -- -i --enable-rx-cksum
 
-2. DUT setup csum forwarding mode::
+2. SUT setup csum forwarding mode::
 
     testpmd> port stop all
     testpmd> csum set ip hw 0
@@ -72,11 +72,11 @@  Test Case 1: test MAC_IPV4_L2TPv3 HW checksum offload
     testpmd> set verbose 1
     testpmd> start
 
-3. Tester send MAC_IPV4_L2TPv3 packets with correct checksum::
+3. TG send MAC_IPV4_L2TPv3 packets with correct checksum::
 
     sendp([Ether(dst="00:11:22:33:44:55")/IP(src="192.168.0.3", proto=115)/L2TP('\x00\x00\x00\x11')/Raw('x'*480)], iface="enp134s0f0")
-    
-4. DUT check the packets are correctly received with "PKT_RX_IP_CKSUM_GOOD" by DUT::
+
+4. SUT check the packets are correctly received with "PKT_RX_IP_CKSUM_GOOD" by SUT::
 
     testpmd> port 0/queue 0: received 1 packets
     src=00:00:00:00:00:00 - dst=00:11:22:33:44:55 - type=0x0800 - length=518 - nb_segs=1 - RSS hash=0x0 - RSS queue=0x0 - sw ptype: L2_ETHER L3_IPV4  - l2_len=14 - l3_len=20 - Receive queue=0x0
@@ -100,12 +100,12 @@  Test Case 1: test MAC_IPV4_L2TPv3 HW checksum offload
     RX-packets: 1              RX-dropped: 0             RX-total: 1
     TX-packets: 1              TX-dropped: 0             TX-total: 1
     ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
-   
-5. Tester send MAC_IPV4_L2TPv3 packets with incorrect checksum::
+
+5. TG send MAC_IPV4_L2TPv3 packets with incorrect checksum::
 
     sendp([Ether(dst="00:11:22:33:44:55")/IP(src="192.168.0.3", proto=115,chksum=0x123)/L2TP('\x00\x00\x00\x11')/Raw('x'*480)], iface="enp134s0f0")
-    
-6. DUT check the packets are correctly received by DUT and report the checksum error::
+
+6. SUT check the packets are correctly received by SUT and report the checksum error::
 
     testpmd> port 0/queue 0: received 1 packets
     src=00:00:00:00:00:00 - dst=00:11:22:33:44:55 - type=0x0800 - length=518 - nb_segs=1 - RSS hash=0x0 - RSS queue=0x0 - sw ptype: L2_ETHER L3_IPV4  - l2_len=14 - l3_len=20 - Receive queue=0x0
@@ -134,11 +134,11 @@  Test Case 1: test MAC_IPV4_L2TPv3 HW checksum offload
 Test Case 2: test MAC_IPV4_ESP HW checksum offload
 ==================================================
 
-1. DUT enable rx checksum with "--enable-rx-cksum" when start testpmd, setup csum forwarding mode::
- 
+1. SUT enable rx checksum with "--enable-rx-cksum" when starting testpmd, then set up csum forwarding mode::
+
     ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -a af:01.0 -- -i --enable-rx-cksum
 
-2. DUT setup csum forwarding mode::
+2. SUT setup csum forwarding mode::
 
     testpmd> port stop all
     testpmd> csum set ip hw 0
@@ -147,11 +147,11 @@  Test Case 2: test MAC_IPV4_ESP HW checksum offload
     testpmd> set verbose 1
     testpmd> start
 
-3. Tester send MAC_IPV4_ESP packets with correct checksum::
+3. TG send MAC_IPV4_ESP packets with correct checksum::
 
     sendp([Ether(dst="00:11:22:33:44:55")/IP(proto=50)/ESP(spi=11)/Raw('x'*480)], iface="enp134s0f0")
-    
-4. DUT check the packets are correctly received with "PKT_RX_IP_CKSUM_GOOD" by DUT::
+
+4. SUT check the packets are correctly received with "PKT_RX_IP_CKSUM_GOOD" by SUT::
 
     testpmd> port 0/queue 0: received 1 packets
     src=00:00:00:00:00:00 - dst=00:11:22:33:44:55 - type=0x0800 - length=522 - nb_segs=1 - RSS hash=0x0 - RSS queue=0x0 - sw ptype: L2_ETHER L3_IPV4  - l2_len=14 - l3_len=20 - Receive queue=0x0
@@ -175,12 +175,12 @@  Test Case 2: test MAC_IPV4_ESP HW checksum offload
     RX-packets: 1              RX-dropped: 0             RX-total: 1
     TX-packets: 1              TX-dropped: 0             TX-total: 1
     ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
-   
-5. Tester send MAC_IPV4_ESP packets with incorrect checksum::
+
+5. TG send MAC_IPV4_ESP packets with incorrect checksum::
 
     sendp([Ether(dst="00:11:22:33:44:55")/IP(proto=50,chksum=0x123)/ESP(spi=11)/Raw('x'*480)], iface="enp134s0f0")
-    
-6. DUT check the packets are correctly received by DUT and report the checksum error::
+
+6. SUT check the packets are correctly received by SUT and report the checksum error::
 
     testpmd> port 0/queue 0: received 1 packets
     src=00:00:00:00:00:00 - dst=00:11:22:33:44:55 - type=0x0800 - length=522 - nb_segs=1 - RSS hash=0x0 - RSS queue=0x0 - sw ptype: L2_ETHER L3_IPV4  - l2_len=14 - l3_len=20 - Receive queue=0x0
@@ -209,9 +209,9 @@  Test Case 2: test MAC_IPV4_ESP HW checksum offload
 Test Case 3: test MAC_IPV4_AH HW checksum offload
 =================================================
 
-1. DUT enable rx checksum with "--enable-rx-cksum" when start testpmd, setup csum forwarding mode:
+1. SUT enable rx checksum with "--enable-rx-cksum" when starting testpmd, then set up csum forwarding mode:
 
-2. DUT setup csum forwarding mode::
+2. SUT setup csum forwarding mode::
 
     testpmd> port stop all
     testpmd> csum set ip hw 0
@@ -220,11 +220,11 @@  Test Case 3: test MAC_IPV4_AH HW checksum offload
     testpmd> set verbose 1
     testpmd> start
 
-3. Tester send MAC_IPV4_AH packets with correct checksum::
+3. TG send MAC_IPV4_AH packets with correct checksum::
 
     sendp([Ether(dst="00:11:22:33:44:55")/IP(proto=51)/AH(spi=11)/Raw('x'*480)], iface="enp134s0f0")
-    
-4. DUT check the packets are correctly received with "PKT_RX_IP_CKSUM_GOOD" by DUT::
+
+4. SUT check the packets are correctly received with "PKT_RX_IP_CKSUM_GOOD" by SUT::
 
     testpmd> port 0/queue 0: received 1 packets
     src=00:00:00:00:00:00 - dst=00:11:22:33:44:55 - type=0x0800 - length=526 - nb_segs=1 - RSS hash=0x0 - RSS queue=0x0 - sw ptype: L2_ETHER L3_IPV4  - l2_len=14 - l3_len=20 - Receive queue=0x0
@@ -249,11 +249,11 @@  Test Case 3: test MAC_IPV4_AH HW checksum offload
     TX-packets: 1              TX-dropped: 0             TX-total: 1
     ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 
-5. Tester send MAC_IPV4_AH packets with incorrect checksum::
+5. TG send MAC_IPV4_AH packets with incorrect checksum::
 
     sendp([Ether(dst="00:11:22:33:44:55")/IP(proto=51,chksum=0x123)/AH(spi=11)/Raw('x'*480)], iface="enp134s0f0")
-    
-6. DUT check the packets are correctly received by DUT and report the checksum error::
+
+6. SUT check the packets are correctly received by SUT and report the checksum error::
 
     testpmd> port 0/queue 0: received 1 packets
     src=00:00:00:00:00:00 - dst=00:11:22:33:44:55 - type=0x0800 - length=526 - nb_segs=1 - RSS hash=0x0 - RSS queue=0x0 - sw ptype: L2_ETHER L3_IPV4  - l2_len=14 - l3_len=20 - Receive queue=0x0
@@ -282,9 +282,9 @@  Test Case 3: test MAC_IPV4_AH HW checksum offload
 Test Case 4: test MAC_IPV4_NAT-T-ESP HW checksum offload
 ========================================================
 
-1. DUT enable rx checksum with "--enable-rx-cksum" when start testpmd, setup csum forwarding mode:
+1. SUT enable rx checksum with "--enable-rx-cksum" when starting testpmd, then set up csum forwarding mode:
 
-2. DUT setup csum forwarding mode::
+2. SUT setup csum forwarding mode::
 
     testpmd> port stop all
     testpmd> csum set ip hw 0
@@ -294,11 +294,11 @@  Test Case 4: test MAC_IPV4_NAT-T-ESP HW checksum offload
     testpmd> set verbose 1
     testpmd> start
 
-3. Tester send MAC_IPV4_NAT-T-ESP pkt with correct IPv4 checksum and correct UDP checksum::
+3. TG send MAC_IPV4_NAT-T-ESP pkt with correct IPv4 checksum and correct UDP checksum::
 
     sendp([Ether(dst="00:11:22:33:44:55")/IP()/UDP(dport=4500)/ESP(spi=11)/Raw('x'*480)], iface="enp134s0f0")
 
-4. DUT check the packets are correctly received with "PKT_RX_L4_CKSUM_GOOD" and "PKT_RX_IP_CKSUM_GOOD" by DUT::
+4. SUT check the packets are correctly received with "PKT_RX_L4_CKSUM_GOOD" and "PKT_RX_IP_CKSUM_GOOD" by SUT::
 
     testpmd> port 0/queue 0: received 1 packets
     src=00:00:00:00:00:00 - dst=00:11:22:33:44:55 - type=0x0800 - length=530 - nb_segs=1 - RSS hash=0x0 - RSS queue=0x0 - sw ptype: L2_ETHER L3_IPV4 L4_UDP  - l2_len=14 - l3_len=20 - l4_len=8 - Receive queue=0x0
@@ -323,11 +323,11 @@  Test Case 4: test MAC_IPV4_NAT-T-ESP HW checksum offload
     TX-packets: 1              TX-dropped: 0             TX-total: 1
     ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 
-5. Tester send MAC_IPV4_NAT-T-ESP pkt with correct IPv4 checksum and incorrect UDP checksum::
+5. TG send MAC_IPV4_NAT-T-ESP pkt with correct IPv4 checksum and incorrect UDP checksum::
 
     sendp([Ether(dst="00:11:22:33:44:55")/IP()/UDP(dport=4500,chksum=0x123)/ESP(spi=11)/Raw('x'*480)], iface="enp134s0f0")
 
-6. DUT check the packets are correctly received with "PKT_RX_IP_CKSUM_GOOD" and report UDP checksum error by DUT::
+6. SUT check the packets are correctly received with "PKT_RX_IP_CKSUM_GOOD" and report UDP checksum error by SUT::
 
     testpmd> port 0/queue 0: received 1 packets
     src=00:00:00:00:00:00 - dst=00:11:22:33:44:55 - type=0x0800 - length=530 - nb_segs=1 - RSS hash=0x0 - RSS queue=0x0 - sw ptype: L2_ETHER L3_IPV4 L4_UDP  - l2_len=14 - l3_len=20 - l4_len=8 - Receive queue=0x0
@@ -352,11 +352,11 @@  Test Case 4: test MAC_IPV4_NAT-T-ESP HW checksum offload
     TX-packets: 1              TX-dropped: 0             TX-total: 1
     ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 
-7. Tester send MAC_IPV4_NAT-T-ESP pkt with incorrect IPv4 checksum and correct UDP checksum::
+7. TG send MAC_IPV4_NAT-T-ESP pkt with incorrect IPv4 checksum and correct UDP checksum::
 
     sendp([Ether(dst="00:11:22:33:44:55")/IP(chksum=0x123)/UDP(dport=4500)/ESP(spi=11)/Raw('x'*480)], iface="enp134s0f0")
 
-8. DUT check the packets are correctly received with "PKT_RX_L4_CKSUM_GOOD" and report IP checksum error by DUT::
+8. SUT check the packets are correctly received with "PKT_RX_L4_CKSUM_GOOD" and report IP checksum error by SUT::
 
     testpmd> port 0/queue 0: received 1 packets
     src=00:00:00:00:00:00 - dst=00:11:22:33:44:55 - type=0x0800 - length=530 - nb_segs=1 - RSS hash=0x0 - RSS queue=0x0 - sw ptype: L2_ETHER L3_IPV4 L4_UDP  - l2_len=14 - l3_len=20 - l4_len=8 - Receive queue=0x0
@@ -381,11 +381,11 @@  Test Case 4: test MAC_IPV4_NAT-T-ESP HW checksum offload
     TX-packets: 1              TX-dropped: 0             TX-total: 1
     ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 
-9. Tester send MAC_IPV4_NAT-T-ESP pkt with incorrect IPv4 checksum and incorrect UDP checksum::
+9. TG send MAC_IPV4_NAT-T-ESP pkt with incorrect IPv4 checksum and incorrect UDP checksum::
 
     sendp([Ether(dst="00:11:22:33:44:55")/IP(chksum=0x123)/UDP(dport=4500,chksum=0x123)/ESP(spi=11)/Raw('x'*480)], iface="enp134s0f0")
 
-10. DUT check the packets are correctly received by DUT and report the checksum error::
+10. SUT check the packets are correctly received by SUT and report the checksum error::
 
      testpmd> port 0/queue 0: received 1 packets
      src=00:00:00:00:00:00 - dst=00:11:22:33:44:55 - type=0x0800 - length=530 - nb_segs=1 - RSS hash=0x0 - RSS queue=0x0 - sw ptype: L2_ETHER L3_IPV4 L4_UDP  - l2_len=14 - l3_len=20 - l4_len=8 - Receive queue=0x0
@@ -414,9 +414,9 @@  Test Case 4: test MAC_IPV4_NAT-T-ESP HW checksum offload
 Test Case 5: test MAC_IPV6_NAT-T-ESP HW checksum offload
 ========================================================
 
-1. DUT enable rx checksum with "--enable-rx-cksum" when start testpmd, setup csum forwarding mode:
+1. SUT enable rx checksum with "--enable-rx-cksum" when starting testpmd, then set up csum forwarding mode:
 
-2. DUT setup csum forwarding mode::
+2. SUT setup csum forwarding mode::
 
     testpmd> port stop all
     testpmd> csum set udp hw 0
@@ -425,11 +425,11 @@  Test Case 5: test MAC_IPV6_NAT-T-ESP HW checksum offload
     testpmd> set verbose 1
     testpmd> start
 
-3. Tester send MAC_IPV6_NAT-T-ESP packets with correct checksum::
+3. TG send MAC_IPV6_NAT-T-ESP packets with correct checksum::
 
     sendp([Ether(dst="00:11:22:33:44:55")/IPv6()/UDP(dport=4500)/ESP(spi=11)/Raw('x'*480)], iface="enp134s0f0")
-    
-4. DUT check the packets are correctly received with "PKT_RX_L4_CKSUM_GOOD" by DUT::
+
+4. SUT check the packets are correctly received with "PKT_RX_L4_CKSUM_GOOD" by SUT::
 
     testpmd> port 0/queue 0: received 1 packets
     src=00:00:00:00:00:00 - dst=00:11:22:33:44:55 - type=0x86dd - length=550 - nb_segs=1 - RSS hash=0x0 - RSS queue=0x0 - sw ptype: L2_ETHER L3_IPV6 L4_UDP  - l2_len=14 - l3_len=40 - l4_len=8 - Receive queue=0x0
@@ -453,12 +453,12 @@  Test Case 5: test MAC_IPV6_NAT-T-ESP HW checksum offload
     RX-packets: 1              RX-dropped: 0             RX-total: 1
     TX-packets: 1              TX-dropped: 0             TX-total: 1
     ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
-     
-5. Tester send MAC_IPV6_NAT-T-ESP packets with incorrect checksum::
+
+5. TG send MAC_IPV6_NAT-T-ESP packets with incorrect checksum::
 
     sendp([Ether(dst="00:11:22:33:44:55")/IPv6()/UDP(dport=4500,chksum=0x123)/ESP(spi=11)/Raw('x'*480)], iface="enp134s0f0")
-    
-6. DUT check the packets are correctly received by DUT and report the checksum error::
+
+6. SUT check the packets are correctly received by SUT and report the checksum error::
 
     testpmd> port 0/queue 0: received 1 packets
     src=00:00:00:00:00:00 - dst=00:11:22:33:44:55 - type=0x86dd - length=550 - nb_segs=1 - RSS hash=0x0 - RSS queue=0x0 - sw ptype: L2_ETHER L3_IPV6 L4_UDP  - l2_len=14 - l3_len=40 - l4_len=8 - Receive queue=0x0
@@ -489,23 +489,23 @@  Test Case 6: test MAC_IPV4_L2TPv3 l2 tag
 
 subcase 1: vlan stripping
 -------------------------
-1. DUT set vlan filter on and enable the vlan receipt::
+1. SUT set vlan filter on and enable the vlan receipt::
 
     testpmd > vlan set filter on 0
     testpmd > set fwd mac
     testpmd > set verbose 1
     testpmd > rx_vlan add 1 0
 
-2. DUT enable the vlan header stripping with vlan tag identifier 1::
-    
+2. SUT disable the vlan header stripping with vlan tag identifier 1::
+
     testpmd > vlan set strip off 0
     testpmd > start
 
-3. Tester send MAC_IPV4_L2TPv3 pkt with vlan tag identifier 1(ether/vlan/ip/l2tp):: 
+3. TG send MAC_IPV4_L2TPv3 pkt with vlan tag identifier 1 (ether/vlan/ip/l2tp)::
 
     sendp([Ether(dst="00:11:22:33:44:55")/Dot1Q(vlan=1)/IP(proto=115)/L2TP('\x00\x00\x00\x11')/Raw('x'*480)], iface="enp134s0f0")
 
-4. DUT check the pkt is recieved and fwd with vlan tag 1::
+4. SUT check the pkt is received and fwd with vlan tag 1::
 
     testpmd> port 0/queue 0: received 1 packets
     src=A4:BF:01:6A:62:58 - dst=00:11:22:33:44:55 - type=0x8100 - length=522 - nb_segs=1 - RSS hash=0x0 - RSS queue=0x0 - sw ptype: L2_ETHER_VLAN L3_IPV4  - l2_len=18 - l3_len=20 - Receive queue=0x0
@@ -515,11 +515,11 @@  subcase 1: vlan stripping
     15:19:26.315127 00:11:22:33:44:55 > 02:00:00:00:00:00, ethertype 802.1Q (0x8100), length 522: vlan 1, p 0, ethertype IPv4, (tos 0x0, ttl 64, id 1, offset 0, flags [none], proto unknown (115), length 504)
     127.0.0.1 > 127.0.0.1:  ip-proto-115 484
 
-5. Tester send MAC_IPV4_L2TPv3 pkt with vlan tag identifier 2::
+5. TG send MAC_IPV4_L2TPv3 pkt with vlan tag identifier 2::
 
     sendp([Ether(dst="00:11:22:33:44:55")/Dot1Q(vlan=2)/IP(proto=115)/L2TP('\x00\x00\x00\x11')/Raw('x'*480)], iface="enp134s0f0")
 
-6. DUT check the pkt is not recieved::
+6. SUT check the pkt is not received::
 
     testpmd> stop
     Telling cores to stop...
@@ -535,16 +535,16 @@  subcase 1: vlan stripping
     TX-packets: 0              TX-dropped: 0             TX-total: 0
     ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 
-7. DUT disable the vlan header stripping with vlan tag identifier 1::
+7. SUT enable the vlan header stripping with vlan tag identifier 1::
 
     testpmd > vlan set strip on 0
     testpmd > start
 
-8. Tester send MAC_IPV4_L2TPv3 pkt with vlan tag identifier 1::
+8. TG send MAC_IPV4_L2TPv3 pkt with vlan tag identifier 1::
 
     sendp([Ether(dst="00:11:22:33:44:55")/Dot1Q(vlan=1)/IP(proto=115)/L2TP('\x00\x00\x00\x11')/Raw('x'*480)], iface="enp134s0f0")
 
-9. DUT check the pkt is recieved and fwd without vlan tag identifier 1::
+9. SUT check the pkt is received and fwd without vlan tag identifier 1::
 
     testpmd> port 0/queue 0: received 1 packets
     src=A4:BF:01:6A:62:58 - dst=00:11:22:33:44:55 - type=0x0800 - length=518 - nb_segs=1 - RSS hash=0x0 - RSS queue=0x0 - VLAN tci=0x1 - sw ptype: L2_ETHER L3_IPV4  - l2_len=14 - l3_len=20 - Receive queue=0x0
@@ -566,11 +566,11 @@  subcase 2: vlan insertion
     testpmd> port start all
     testpmd> start
 
-2. Tester send MAC_IPV4_L2TPv3 packets without vlan to port 0::
+2. TG send MAC_IPV4_L2TPv3 packets without vlan to port 0::
 
     sendp([Ether(dst="00:11:22:33:44:55")/IP(proto=115)/L2TP('\x00\x00\x00\x11')/Raw('x'*480)], iface="enp134s0f0")
 
-3. Tester check recieved the pkt with vlan tag identifier 1::
+3. TG check the pkt is received with vlan tag identifier 1::
 
     16:08:17.119129 00:11:22:33:44:55 > 02:00:00:00:00:00, ethertype 802.1Q (0x8100), length 526: vlan 1, p 0, ethertype 802.1Q, vlan 1, p 0, ethertype IPv4, (tos 0x0, ttl 64, id 1, offset 0, flags [none], proto unknown (115), length 504)
     127.0.0.1 > 127.0.0.1:  ip-proto-115 484
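
The same check can be scripted on the TG (a sketch; the capture interface name
is an assumption)::

    from scapy.all import Dot1Q, sniff

    pkts = sniff(iface="enp134s0f0", count=1, timeout=10)
    # testpmd's tx_vlan setting must have inserted tag 1
    assert Dot1Q in pkts[0] and pkts[0][Dot1Q].vlan == 1
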
@@ -581,31 +581,31 @@  Test Case 7: test MAC_IPV6_L2TPv3 l2 tag
 
 subcase 1: vlan stripping
 -------------------------
-1. DUT set vlan filter on and enable the vlan receipt::
+1. SUT set vlan filter on and enable the vlan receipt::
 
     testpmd > vlan set filter on 0
     testpmd > set fwd mac
     testpmd > set verbose 1
     testpmd > rx_vlan add 1 0
 
-2. DUT enable the vlan header stripping with vlan tag identifier 1::
-    
+2. SUT disable the vlan header stripping on port 0::
+
     testpmd > vlan set strip off 0
     testpmd > start
 
-3. Tester send MAC_IPV6_L2TPv3 pkt with vlan tag identifier 1(ether/vlan/ip/l2tp):: 
+3. TG send MAC_IPV6_L2TPv3 pkt with vlan tag identifier 1 (ether/vlan/ip/l2tp)::
 
     sendp([Ether(dst="00:11:22:33:44:55")/Dot1Q(vlan=1)/IPv6(nh=115)/L2TP('\x00\x00\x00\x11')/Raw('x'*480)], iface="enp134s0f0")
 
-4. DUT check the pkt is fwd with vlan tag 1::
+4. SUT check the pkt is fwd with vlan tag 1::
 
     16:10:25.899116 00:11:22:33:44:55 > 02:00:00:00:00:00, ethertype 802.1Q (0x8100), length 542: vlan 1, p 0, ethertype IPv6, (hlim 64, next-header unknown (115) payload length: 484) ::1 > ::1: ip-proto-115 484
 
-5. Tester send MAC_IPV6_L2TPv3 pkt with vlan tag identifier 2::
+5. TG send MAC_IPV6_L2TPv3 pkt with vlan tag identifier 2::
 
     sendp([Ether(dst="00:11:22:33:44:55")/Dot1Q(vlan=2)/IPv6(nh=115)/L2TP('\x00\x00\x00\x11')/Raw('x'*480)], iface="enp134s0f0")
 
-6. DUT check the pkt is not recieved::
+6. SUT check the pkt is not received::
 
     testpmd> stop
     Telling cores to stop...
@@ -621,16 +621,16 @@  subcase 1: vlan stripping
     TX-packets: 0              TX-dropped: 0             TX-total: 0
     ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 
-7. DUT disable the vlan header stripping with vlan tag identifier 1::
+7. SUT enable the vlan header stripping on port 0::
 
     testpmd > vlan set strip on 0
     testpmd > start
 
-8. Tester send MAC_IPV6_L2TPv3 pkt with vlan tag identifier 1::
+8. TG send MAC_IPV6_L2TPv3 pkt with vlan tag identifier 1::
 
     sendp([Ether(dst="00:11:22:33:44:55")/Dot1Q(vlan=1)/IPv6(nh=115)/L2TP('\x00\x00\x00\x11')/Raw('x'*480)], iface="enp134s0f0")
 
-9. DUT check the pkt is fwd without vlan tag identifier 1::
+9. SUT check the pkt is fwd without vlan tag identifier 1::
 
     16:13:20.231049 00:11:22:33:44:55 > 02:00:00:00:00:00, ethertype IPv6 (0x86dd), length 538: (hlim 64, next-header unknown (115) payload length: 484) ::1 > ::1: ip-proto-115 484
 
@@ -647,11 +647,11 @@  subcase 2: vlan insertion
     testpmd> port start all
     testpmd> start
 
-2. Tester send MAC_IPV6_L2TPv3 packets without vlan to port 0::
+2. TG send MAC_IPV6_L2TPv3 packets without vlan to port 0::
 
     sendp([Ether(dst="00:11:22:33:44:55")/IPv6(nh=115)/L2TP('\x00\x00\x00\x11')/Raw('x'*480)], iface="enp134s0f0")
 
-3. Tester check recieved the pkt with vlan tag identifier 1::
+3. TG check the pkt is received with vlan tag identifier 1::
 
     16:15:35.311109 00:11:22:33:44:55 > 02:00:00:00:00:00, ethertype 802.1Q (0x8100), length 546: vlan 1, p 0, ethertype 802.1Q, vlan 1, p 0, ethertype IPv6, (hlim 64, next-header unknown (115) payload length: 484) ::1 > ::1: ip-proto-115 484
 
@@ -661,43 +661,43 @@  Test Case 8: test MAC_IPV4_ESP l2 tag
 
 subcase 1: vlan stripping
 -------------------------
-1. DUT set vlan filter on and enable the vlan receipt::
+1. SUT set vlan filter on and enable vlan reception::
 
     testpmd > vlan set filter on 0
     testpmd > set fwd mac
     testpmd > set verbose 1
     testpmd > rx_vlan add 1 0
 
-2. DUT enable the vlan header stripping with vlan tag identifier 1::
-    
+2. SUT disable the vlan header stripping on port 0::
+
     testpmd > vlan set strip off 0
     testpmd > start
 
-3. Tester send MAC_IPV4_ESP pkt with vlan tag identifier 1(ether/vlan/ip/esp):: 
+3. TG send MAC_IPV4_ESP pkt with vlan tag identifier 1 (ether/vlan/ip/esp)::
 
     sendp([Ether(dst="00:11:22:33:44:55")/Dot1Q(vlan=1)/IP(proto=50)/ESP(spi=1)/Raw('x'*480)], iface="enp134s0f0")
 
-4. DUT check the pkt is fwd with vlan tag 1::
+4. SUT check the pkt is fwd with vlan tag 1::
 
     16:19:22.039132 00:11:22:33:44:55 > 02:00:00:00:00:00, ethertype 802.1Q (0x8100), length 526: vlan 1, p 0, ethertype IPv4, (tos 0x0, ttl 64, id 1, offset 0, flags [none], proto ESP (50), length 508)
     127.0.0.1 > 127.0.0.1: ESP(spi=0x00000001,seq=0x0), length 488
 
-5. Tester send MAC_IPV4_ESP pkt with vlan tag identifier 2::
+5. TG send MAC_IPV4_ESP pkt with vlan tag identifier 2::
 
     sendp([Ether(dst="00:11:22:33:44:55")/Dot1Q(vlan=2)/IP(proto=50)/ESP(spi=1)/Raw('x'*480)], iface="enp134s0f0")
 
-6. DUT check the pkt is not recieved:
+6. SUT check the pkt is not received:
 
-7. DUT disable the vlan header stripping with vlan tag identifier 1::
+7. SUT enable the vlan header stripping on port 0::
 
     testpmd > vlan set strip on 0
     testpmd > start
 
-8. Tester send MAC_IPV4_ESP pkt with vlan tag identifier 1::
+8. TG send MAC_IPV4_ESP pkt with vlan tag identifier 1::
 
     sendp([Ether(dst="00:11:22:33:44:55")/Dot1Q(vlan=1)/IP(proto=50)/ESP(spi=1)/Raw('x'*480)], iface="enp134s0f0")
 
-9. DUT check the pkt is fwd without vlan tag identifier 1::
+9. SUT check the pkt is fwd without vlan tag identifier 1::
 
     16:20:49.995057 00:11:22:33:44:55 > 02:00:00:00:00:00, ethertype IPv4 (0x0800), length 522: (tos 0x0, ttl 64, id 1, offset 0, flags [none], proto ESP (50), length 508)
     127.0.0.1 > 127.0.0.1: ESP(spi=0x00000001,seq=0x0), length 488
@@ -715,11 +715,11 @@  subcase 2: vlan insertion
     testpmd> port start all
     testpmd> start
 
-2. Tester send MAC_IPV4_ESP packets without vlan to port 0::
+2. TG send MAC_IPV4_ESP packets without vlan to port 0::
 
     sendp([Ether(dst="00:11:22:33:44:55")/IP(proto=50)/ESP(spi=1)/Raw('x'*480)], iface="enp134s0f0")
 
-3. Tester check recieved the pkt with vlan tag identifier 1::
+3. TG check the pkt is received with vlan tag identifier 1::
 
     16:23:08.631125 00:11:22:33:44:55 > 02:00:00:00:00:00, ethertype 802.1Q (0x8100), length 530: vlan 1, p 0, ethertype 802.1Q, vlan 1, p 0, ethertype IPv4, (tos 0x0, ttl 64, id 1, offset 0, flags [none], proto ESP (50), length 508)
     127.0.0.1 > 127.0.0.1: ESP(spi=0x00000001,seq=0x0), length 488
@@ -730,42 +730,42 @@  Test Case 9: test MAC_IPV6_ESP l2 tag
 
 subcase 1: vlan stripping
 -------------------------
-1. DUT set vlan filter on and enable the vlan receipt::
+1. SUT set vlan filter on and enable vlan reception::
 
     testpmd > vlan set filter on 0
     testpmd > set fwd mac
     testpmd > set verbose 1
     testpmd > rx_vlan add 1 0
 
-2. DUT enable the vlan header stripping with vlan tag identifier 1::
-    
+2. SUT disable the vlan header stripping on port 0::
+
     testpmd > vlan set strip off 0
     testpmd > start
 
-3. Tester send MAC_IPV6_ESP pkt with vlan tag identifier 1(ether/vlan/ip/esp):: 
+3. TG send MAC_IPV6_ESP pkt with vlan tag identifier 1 (ether/vlan/ip/esp)::
 
     sendp([Ether(dst="00:11:22:33:44:55")/Dot1Q(vlan=1)/IPv6(nh=50)/ESP(spi=1)/Raw('x'*480)], iface="enp134s0f0")
 
-4. DUT check the pkt is fwd with vlan tag 1::
+4. SUT check the pkt is fwd with vlan tag 1::
 
     16:25:49.075114 00:11:22:33:44:55 > 02:00:00:00:00:00, ethertype 802.1Q (0x8100), length 546: vlan 1, p 0, ethertype IPv6, (hlim 64, next-header ESP (50) payload length: 488) ::1 > ::1: ESP(spi=0x00000001,seq=0x0), length 488
 
-5. Tester send MAC_IPV6_ESP pkt with vlan tag identifier 2::
+5. TG send MAC_IPV6_ESP pkt with vlan tag identifier 2::
 
     sendp([Ether(dst="00:11:22:33:44:55")/Dot1Q(vlan=2)/IPv6(nh=50)/ESP(spi=1)/Raw('x'*480)], iface="enp134s0f0")
 
-6. DUT check the pkt is not recieved:
+6. SUT check the pkt is not received:
 
-7. DUT disable the vlan header stripping with vlan tag identifier 1::
+7. SUT enable the vlan header stripping on port 0::
 
     testpmd > vlan set strip on 0
     testpmd > start
 
-8. Tester send MAC_IPV6_ESP pkt with vlan tag identifier 1::
+8. TG send MAC_IPV6_ESP pkt with vlan tag identifier 1::
 
     sendp([Ether(dst="00:11:22:33:44:55")/Dot1Q(vlan=1)/IPv6(nh=50)/ESP(spi=1)/Raw('x'*480)], iface="enp134s0f0")
 
-9. DUT check the pkt is fwd without vlan tag identifier 1::
+9. SUT check the pkt is fwd without vlan tag identifier 1::
 
     16:26:40.279043 00:11:22:33:44:55 > 02:00:00:00:00:00, ethertype IPv6 (0x86dd), length 542: (hlim 64, next-header ESP (50) payload length: 488) ::1 > ::1: ESP(spi=0x00000001,seq=0x0), length 488
 
@@ -782,11 +782,11 @@  subcase 2: vlan insertion
     testpmd> port start all
     testpmd> start
 
-2. Tester send MAC_IPV6_ESP packets without vlan to port 0::
+2. TG send MAC_IPV6_ESP packets without vlan to port 0::
 
     sendp([Ether(dst="00:11:22:33:44:55")/IPv6(nh=50)/ESP(spi=1)/Raw('x'*480)], iface="enp134s0f0")
 
-3. Tester check recieved the pkt with vlan tag identifier 1::
+3. TG check the pkt is received with vlan tag identifier 1::
 
     16:28:30.323047 00:11:22:33:44:55 > 02:00:00:00:00:00, ethertype 802.1Q (0x8100), length 550: vlan 1, p 0, ethertype 802.1Q, vlan 1, p 0, ethertype IPv6, (hlim 64, next-header ESP (50) payload length: 488) ::1 > ::1: ESP(spi=0x00000001,seq=0x0), length 488
 
@@ -796,43 +796,43 @@  Test Case 10: test MAC_IPV4_AH l2 tag
 
 subcase 1: vlan stripping
 -------------------------
-1. DUT set vlan filter on and enable the vlan receipt::
+1. SUT set vlan filter on and enable vlan reception::
 
     testpmd > vlan set filter on 0
     testpmd > set fwd mac
     testpmd > set verbose 1
     testpmd > rx_vlan add 1 0
 
-2. DUT enable the vlan header stripping with vlan tag identifier 1::
-    
+2. SUT disable the vlan header stripping on port 0::
+
     testpmd > vlan set strip off 0
     testpmd > start
 
-3. Tester send MAC_IPV4_AH pkt with vlan tag identifier 1(ether/vlan/ip/ahA):: 
+3. TG send MAC_IPV4_AH pkt with vlan tag identifier 1 (ether/vlan/ip/ah)::
 
     sendp([Ether(dst="00:11:22:33:44:55")/Dot1Q(vlan=1)/IP(proto=51)/AH(spi=1)/Raw('x'*480)], iface="enp134s0f0")
 
-4. DUT check the pkt is fwd with vlan tag 1::
+4. SUT check the pkt is fwd with vlan tag 1::
 
     16:30:56.899138 00:11:22:33:44:55 > 02:00:00:00:00:00, ethertype 802.1Q (0x8100), length 530: vlan 1, p 0, ethertype IPv4, (tos 0x0, ttl 64, id 1, offset 0, flags [none], proto AH (51), length 512)
     127.0.0.1 > 127.0.0.1: AH(spi=0x00000001,sumlen=0,seq=0x0):  ip-proto-0 484
 
-5. Tester send MAC_IPV4_AH pkt with vlan tag identifier 2::
+5. TG send MAC_IPV4_AH pkt with vlan tag identifier 2::
 
     sendp([Ether(dst="00:11:22:33:44:55")/Dot1Q(vlan=2)/IP(proto=51)/AH(spi=1)/Raw('x'*480)], iface="enp134s0f0")
 
-6. DUT check the pkt is not recieved:
+6. SUT check the pkt is not received:
 
-7. DUT disable the vlan header stripping with vlan tag identifier 1::
+7. SUT enable the vlan header stripping on port 0::
 
     testpmd > vlan set strip on 0
     testpmd > start
 
-8. Tester send MAC_IPV4_AH pkt with vlan tag identifier 1::
+8. TG send MAC_IPV4_AH pkt with vlan tag identifier 1::
 
     sendp([Ether(dst="00:11:22:33:44:55")/Dot1Q(vlan=1)/IP(proto=51)/AH(spi=1)/Raw('x'*480)], iface="enp134s0f0")
 
-9. DUT check the pkt is fwd without vlan tag identifier 1::
+9. SUT check the pkt is fwd without vlan tag identifier 1::
 
     16:34:32.599097 00:11:22:33:44:55 > 02:00:00:00:00:00, ethertype IPv4 (0x0800), length 526: (tos 0x0, ttl 64, id 1, offset 0, flags [none], proto AH (51), length 512)
     127.0.0.1 > 127.0.0.1: AH(spi=0x00000001,sumlen=0,seq=0x0):  ip-proto-0 484
@@ -851,11 +851,11 @@  subcase 2: vlan insertion
     testpmd> port start all
     testpmd> start
 
-2. Tester send MAC_IPV4_AH packets without vlan to port 0::
+2. TG send MAC_IPV4_AH packets without vlan to port 0::
 
     sendp([Ether(dst="00:11:22:33:44:55")/IP(proto=51)/AH(spi=1)/Raw('x'*480)], iface="enp134s0f0")
 
-3. Tester check recieved the pkt with vlan tag identifier 1::
+3. TG check the pkt is received with vlan tag identifier 1::
 
     16:37:21.783066 00:11:22:33:44:55 > 02:00:00:00:00:00, ethertype 802.1Q (0x8100), length 534: vlan 1, p 0, ethertype 802.1Q, vlan 1, p 0, ethertype IPv4, (tos 0x0, ttl 64, id 1, offset 0, flags [none], proto AH (51), length 512)
     127.0.0.1 > 127.0.0.1: AH(spi=0x00000001,sumlen=0,seq=0x0):  ip-proto-0 484
@@ -866,42 +866,42 @@  Test Case 11: test MAC_IPV6_AH l2 tag
 
 subcase 1: vlan stripping
 -------------------------
-1. DUT set vlan filter on and enable the vlan receipt::
+1. SUT set vlan filter on and enable vlan reception::
 
     testpmd > vlan set filter on 0
     testpmd > set fwd mac
     testpmd > set verbose 1
     testpmd > rx_vlan add 1 0
 
-2. DUT enable the vlan header stripping with vlan tag identifier 1::
-    
+2. SUT disable the vlan header stripping on port 0::
+
     testpmd > vlan set strip off 0
     testpmd > start
 
-3. Tester send MAC_IPV6_AH pkt with vlan tag identifier 1(ether/vlan/ip/ah):: 
+3. TG send MAC_IPV6_AH pkt with vlan tag identifier 1 (ether/vlan/ip/ah)::
 
     sendp([Ether(dst="00:11:22:33:44:55")/Dot1Q(vlan=1)/IPv6(nh=51)/AH(spi=1)/Raw('x'*480)], iface="enp134s0f0")
 
-4. DUT check the pkt is fwd with vlan tag 1::
+4. SUT check the pkt is fwd with vlan tag 1::
 
     16:32:11.519239 00:11:22:33:44:55 > 02:00:00:00:00:00, ethertype 802.1Q (0x8100), length 550: vlan 1, p 0, ethertype IPv6, (hlim 64, next-header AH (51) payload length: 492) ::1 > ::1: AH(spi=0x00000001,sumlen=0,seq=0x0): HBH (pad1)(pad1)[trunc] [|HBH]
 
-5. Tester send MAC_IPV6_AH pkt with vlan tag identifier 2::
+5. TG send MAC_IPV6_AH pkt with vlan tag identifier 2::
 
     sendp([Ether(dst="00:11:22:33:44:55")/Dot1Q(vlan=2)/IPv6(nh=51)/AH(spi=1)/Raw('x'*480)], iface="enp134s0f0")
 
-6. DUT check the pkt is not recieved:
+6. SUT check the pkt is not received:
 
-7. DUT disable the vlan header stripping with vlan tag identifier 1::
+7. SUT enable the vlan header stripping on port 0::
 
     testpmd > vlan set strip on 0
     testpmd > start
 
-8. Tester send MAC_IPV6_AH pkt with vlan tag identifier 1::
+8. TG send MAC_IPV6_AH pkt with vlan tag identifier 1::
 
     sendp([Ether(dst="00:11:22:33:44:55")/Dot1Q(vlan=1)/IPv6(nh=51)/AH(spi=1)/Raw('x'*480)], iface="enp134s0f0")
 
-9. DUT check the pkt is fwd without vlan tag identifier 1::
+9. SUT check the pkt is fwd without vlan tag identifier 1::
 
     16:35:27.395058 00:11:22:33:44:55 > 02:00:00:00:00:00, ethertype IPv6 (0x86dd), length 546: (hlim 64, next-header AH (51) payload length: 492) ::1 > ::1: AH(spi=0x00000001,sumlen=0,seq=0x0): HBH (pad1)(pad1)[trunc] [|HBH]
 
@@ -919,11 +919,11 @@  subcase 2: vlan insertion
     testpmd> port start all
     testpmd> start
 
-2. Tester send MAC_IPV6_AH packets without vlan to port 0::
+2. TG send MAC_IPV6_AH packets without vlan to port 0::
 
     sendp([Ether(dst="00:11:22:33:44:55")/IPv6(nh=51)/AH(spi=1)/Raw('x'*480)], iface="enp134s0f0")
 
-3. Tester check recieved the pkt with vlan tag identifier 1::
+3. TG check the pkt is received with vlan tag identifier 1::
 
     16:38:02.311042 00:11:22:33:44:55 > 02:00:00:00:00:00, ethertype 802.1Q (0x8100), length 554: vlan 1, p 0, ethertype 802.1Q, vlan 1, p 0, ethertype IPv6, (hlim 64, next-header AH (51) payload length: 492) ::1 > ::1: AH(spi=0x00000001,sumlen=0,seq=0x0): HBH (pad1)(pad1)[trunc] [|HBH]
 
@@ -933,43 +933,43 @@  Test Case 12: test MAC_IPV4_NAT-T-ESP l2 tag
 
 subcase 1: vlan stripping
 -------------------------
-1. DUT set vlan filter on and enable the vlan receipt::
+1. SUT set vlan filter on and enable vlan reception::
 
     testpmd > vlan set filter on 0
     testpmd > set fwd mac
     testpmd > set verbose 1
     testpmd > rx_vlan add 1 0
 
-2. DUT enable the vlan header stripping with vlan tag identifier 1::
-    
+2. SUT disable the vlan header stripping on port 0::
+
     testpmd > vlan set strip off 0
     testpmd > start
 
-3. Tester send MAC_IPV4_NAT-T-ESP pkt with vlan tag identifier 1(ether/vlan/ip/udp/esp):: 
+3. TG send MAC_IPV4_NAT-T-ESP pkt with vlan tag identifier 1 (ether/vlan/ip/udp/esp)::
 
     sendp([Ether(dst="00:11:22:33:44:55")/Dot1Q(vlan=1)/IP()/UDP(dport=4500)/ESP(spi=1)/Raw('x'*480)], iface="enp134s0f0")
 
-4. DUT check the pkt is fwd with vlan tag 1::
+4. SUT check the pkt is fwd with vlan tag 1::
 
     16:43:18.351118 00:11:22:33:44:55 > 02:00:00:00:00:00, ethertype 802.1Q (0x8100), length 534: vlan 1, p 0, ethertype IPv4, (tos 0x0, ttl 64, id 1, offset 0, flags [none], proto UDP (17), length 516)
     127.0.0.1.4500 > 127.0.0.1.4500: UDP-encap: ESP(spi=0x00000001,seq=0x0), length 488
 
-5. Tester send MAC_IPV4_NAT-T-ESP pkt with vlan tag identifier 2::
+5. TG send MAC_IPV4_NAT-T-ESP pkt with vlan tag identifier 2::
 
     sendp([Ether(dst="00:11:22:33:44:55")/Dot1Q(vlan=2)/IP()/UDP(dport=4500)/ESP(spi=1)/Raw('x'*480)], iface="enp134s0f0")
 
-6. DUT check the pkt is not recieved:
+6. SUT check the pkt is not received:
 
-7. DUT disable the vlan header stripping with vlan tag identifier 1::
+7. SUT enable the vlan header stripping on port 0::
 
     testpmd > vlan set strip on 0
     testpmd > start
 
-8. Tester send MAC_IPV4_NAT-T-ESP pkt with vlan tag identifier 1::
+8. TG send MAC_IPV4_NAT-T-ESP pkt with vlan tag identifier 1::
 
     sendp([Ether(dst="00:11:22:33:44:55")/Dot1Q(vlan=1)/IP()/UDP(dport=4500)/ESP(spi=1)/Raw('x'*480)], iface="enp134s0f0")
 
-9. DUT check the pkt is recieved without vlan tag identifier 1::
+9. SUT check the pkt is received without vlan tag identifier 1::
 
     16:46:50.015123 00:11:22:33:44:55 > 02:00:00:00:00:00, ethertype IPv4 (0x0800), length 530: (tos 0x0, ttl 64, id 1, offset 0, flags [none], proto UDP (17), length 516)
     127.0.0.1.4500 > 127.0.0.1.4500: UDP-encap: ESP(spi=0x00000001,seq=0x0), length 488
@@ -987,11 +987,11 @@  subcase 2: vlan insertion
     testpmd> port start all
     testpmd> start
 
-2. Tester send MAC_IPV4_NAT-T-ESP packets without vlan to port 0::
+2. TG send MAC_IPV4_NAT-T-ESP packets without vlan to port 0::
 
     sendp([Ether(dst="00:11:22:33:44:55")/IP()/UDP(dport=4500)/ESP(spi=1)/Raw('x'*480)], iface="enp134s0f0")
 
-3. Tester check recieved the pkt with vlan tag identifier 1::
+3. TG check the pkt is received with vlan tag identifier 1::
 
     16:49:41.875196 00:11:22:33:44:55 > 02:00:00:00:00:00, ethertype 802.1Q (0x8100), length 538: vlan 1, p 0, ethertype 802.1Q, vlan 1, p 0, ethertype IPv4, (tos 0x0, ttl 64, id 1, offset 0, flags [none], proto UDP (17), length 516)
     127.0.0.1.4500 > 127.0.0.1.4500: UDP-encap: ESP(spi=0x00000001,seq=0x0), length 488
@@ -1002,42 +1002,42 @@  Test Case 13: test MAC_IPV6_NAT-T-ESP l2 tag
 
 subcase 1: vlan stripping
 -------------------------
-1. DUT set vlan filter on and enable the vlan receipt::
+1. SUT set vlan filter on and enable vlan reception::
 
     testpmd > vlan set filter on 0
     testpmd > set fwd mac
     testpmd > set verbose 1
     testpmd > rx_vlan add 1 0
 
-2. DUT enable the vlan header stripping with vlan tag identifier 1::
-    
+2. SUT disable the vlan header stripping on port 0::
+
     testpmd > vlan set strip off 0
     testpmd > start
 
-3. Tester send MAC_IPV6_NAT-T-ESP pkt with vlan tag identifier 1(ether/vlan/ip/udp/esp):: 
+3. TG send MAC_IPV6_NAT-T-ESP pkt with vlan tag identifier 1 (ether/vlan/ip/udp/esp)::
 
     sendp([Ether(dst="00:11:22:33:44:55")/Dot1Q(vlan=1)/IPv6()/UDP(dport=4500)/ESP(spi=1)/Raw('x'*480)], iface="enp134s0f0")
 
-4. DUT check the pkt is fwd with vlan tag 1::
+4. SUT check the pkt is fwd with vlan tag 1::
 
     16:44:13.959467 00:11:22:33:44:55 > 02:00:00:00:00:00, ethertype 802.1Q (0x8100), length 554: vlan 1, p 0, ethertype IPv6, (hlim 64, next-header UDP (17) payload length: 496) ::1.4500 > ::1.4500: [udp sum ok] UDP-encap: ESP(spi=0x00000001,seq=0x0), length 488
 
-5. Tester send MAC_IPV6_NAT-T-ESP pkt with vlan tag identifier 2::
+5. TG send MAC_IPV6_NAT-T-ESP pkt with vlan tag identifier 2::
 
     sendp([Ether(dst="00:11:22:33:44:55")/Dot1Q(vlan=2)/IPv6()/UDP(dport=4500)/ESP(spi=1)/Raw('x'*480)], iface="enp134s0f0")
 
-6. DUT check the pkt is not recieved:
+6. SUT check the pkt is not received:
 
-7. DUT disable the vlan header stripping with vlan tag identifier 1::
+7. SUT enable the vlan header stripping on port 0::
 
     testpmd > vlan set strip on 0
     testpmd > start
 
-8. Tester send MAC_IPV6_NAT-T-ESP pkt with vlan tag identifier 1::
+8. TG send MAC_IPV6_NAT-T-ESP pkt with vlan tag identifier 1::
 
     sendp([Ether(dst="00:11:22:33:44:55")/Dot1Q(vlan=1)/IPv6()/UDP(dport=4500)/ESP(spi=1)/Raw('x'*480)], iface="enp134s0f0")
 
-9. DUT check the pkt is recieved without vlan tag identifier 1::
+9. SUT check the pkt is received without vlan tag identifier 1::
 
     16:47:30.747658 00:11:22:33:44:55 > 02:00:00:00:00:00, ethertype IPv6 (0x86dd), length 550: (hlim 64, next-header UDP (17) payload length: 496) ::1.4500 > ::1.4500: [udp sum ok] UDP-encap: ESP(spi=0x00000001,seq=0x0), length 488
 
@@ -1054,11 +1054,11 @@  subcase 2: vlan insertion
     testpmd> port start all
     testpmd> start
 
-2. Tester send MAC_IPV4_NAT-T-ESP packets without vlan to port 0::
+2. TG send MAC_IPV6_NAT-T-ESP packets without vlan to port 0::
 
     sendp([Ether(dst="00:11:22:33:44:55")/IPv6()/UDP(dport=4500)/ESP(spi=1)/Raw('x'*480)], iface="enp134s0f0")
 
-3. Tester check recieved the pkt with vlan tag identifier 1::
+3. TG check the pkt is received with vlan tag identifier 1::
 
     16:50:29.791349 00:11:22:33:44:55 > 02:00:00:00:00:00, ethertype 802.1Q (0x8100), length 558: vlan 1, p 0, ethertype 802.1Q, vlan 1, p 0, ethertype IPv6, (hlim 64, next-header UDP (17) payload length: 496) ::1.4500 > ::1.4500: [udp sum ok] UDP-encap: ESP(spi=0x00000001,seq=0x0), length 488
 
@@ -1070,7 +1070,7 @@  The pre-steps are as l2tp_esp_iavf_test_plan.
 
 1. ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 6-9 -n 4 -a af:01.0 -- -i --rxq=16 --txq=16 --portmask=0x1 --nb-cores=2 --enable-rx-cksum
 
-2. DUT create fdir rules for MAC_IPV4_L2TPv3 with queue index and mark::
+2. SUT create fdir rules for MAC_IPV4_L2TPv3 with queue index and mark::
 
     flow create 0 ingress pattern eth / ipv4 / l2tpv3oip session_id is 1 / end actions queue index 1 / mark id 4 / end
     flow create 0 ingress pattern eth / ipv4 / l2tpv3oip session_id is 2 / end actions queue index 2 / mark id 3 / end
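+
+Note that ``L2TP('\x00\x00\x00\x01')`` in the packets below encodes the
+L2TPv3-over-IP session ID as the first four big-endian bytes of the payload,
+which is what the ``l2tpv3oip session_id is 1`` pattern matches; a one-line
+Python sanity check of that assumption::
+
+    import struct
+
+    # session_id 1 -> the 4-byte value passed to scapy's L2TP() in this plan
+    assert struct.pack(">I", 1) == b'\x00\x00\x00\x01'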
@@ -1083,9 +1083,9 @@  The pre-steps are as l2tp_esp_iavf_test_plan.
     testpmd> rx_vlan add 1 0
     testpmd> vlan set strip on 0
     testpmd> set verbose 1
-     
+
 4. enable hw checksum::
-   
+
     testpmd> set fwd csum
     Set csum packet forwarding mode
     testpmd> port stop all
@@ -1094,11 +1094,11 @@  The pre-steps are as l2tp_esp_iavf_test_plan.
     testpmd> port start all
     testpmd> start
 
-5. Tester send matched packets with VLAN tag "1" and incorrect checksum::
+5. TG send matched packets with VLAN tag "1" and incorrect checksum::
 
     sendp([Ether(dst="00:11:22:33:44:55")/Dot1Q(vlan=1)/IP(proto=115,chksum=0x123)/L2TP('\x00\x00\x00\x01')/Raw('x'*480)], iface="enp134s0f0")
-    
-6. DUT check the packets are distributed to expected queue with mark id and fwd without VLAN tag "1", and report the checksum error::
+
+6. SUT check the packets are distributed to the expected queue with mark id and fwd without VLAN tag "1", and report the checksum error::
 
     testpmd> port 0/queue 1: received 1 packets
     src=A4:BF:01:6A:62:58 - dst=00:11:22:33:44:55 - type=0x0800 - length=518 - nb_segs=1 - RSS hash=0x828dafbf - RSS queue=0x1 - VLAN tci=0x1 - sw ptype: L2_ETHER L3_IPV4  - l2_len=14 - l3_len=20 - Receive queue=0x1
@@ -1112,12 +1112,12 @@  The pre-steps are as l2tp_esp_iavf_test_plan.
     15:20:43.803087 00:11:22:33:44:55 > 02:00:00:00:00:00, ethertype IPv4 (0x0800), length 518: (tos 0x0, ttl 64, id 1, offset 0, flags [none], proto unknown (115), length 504)
     127.0.0.1 > 127.0.0.1:  ip-proto-115 484
 
-7. Tester send mismatched packets with VLAN tag "1" and incorrect checksum::
+7. TG send mismatched packets with VLAN tag "1" and incorrect checksum::
 
     sendp([Ether(dst="00:11:22:33:44:55")/Dot1Q(vlan=1)/IP(proto=115,chksum=0x123)/L2TP('\x00\x00\x00\x11')/Raw('x'*480)], iface="enp134s0f0")
 
-8. DUT check the packets are not distributed to expected queue without mark id and fwd without VLAN tag "1", and report the checksum error::
-   
+8. SUT check the packets are not distributed to the expected queue without mark id and fwd without VLAN tag "1", and report the checksum error::
+
     port 0/queue 15: received 1 packets
     src=A4:BF:01:6A:62:58 - dst=00:11:22:33:44:55 - type=0x0800 - length=518 - nb_segs=1 - RSS hash=0x828dafbf - RSS queue=0xf - VLAN tci=0x1 - sw ptype: L2_ETHER L3_IPV4  - l2_len=14 - l3_len=20 - Receive queue=0xf
     ol_flags: PKT_RX_VLAN PKT_RX_RSS_HASH PKT_RX_L4_CKSUM_GOOD PKT_RX_IP_CKSUM_BAD PKT_RX_VLAN_STRIPPED PKT_RX_OUTER_L4_CKSUM_UNKNOWN
@@ -1130,7 +1130,7 @@  The pre-steps are as l2tp_esp_iavf_test_plan.
     15:20:43.803087 00:11:22:33:44:55 > 02:00:00:00:00:00, ethertype IPv4 (0x0800), length 518: (tos 0x0, ttl 64, id 1, offset 0, flags [none], proto unknown (115), length 504)
     127.0.0.1 > 127.0.0.1:  ip-proto-115 484
 
-9. DUT verify rule can be listed and destroyed::
+9. SUT verify rule can be listed and destroyed::
 
     testpmd> flow list 0
     ID      Group   Prio    Attr    Rule
@@ -1140,11 +1140,11 @@  The pre-steps are as l2tp_esp_iavf_test_plan.
     3       0       0       i--     ETH IPV4 L2TPV3OIP => QUEUE MARK
     testpmd> flow destroy 0 rule 0
 
-10. Tester send matched packets with VLAN tag "1" and incorrect checksum::
+10. TG send matched packets with VLAN tag "1" and incorrect checksum::
 
      sendp([Ether(dst="00:11:22:33:44:55")/Dot1Q(vlan=1)/IP(proto=115,chksum=0x123)/L2TP('\x00\x00\x00\x01')/Raw('x'*480)], iface="enp134s0f0")
 
-11.DUT check the packets are not distributed to expected queue without mark id and and without VLAN tag "1", and report the checksum error::
+11. SUT check the packets are not distributed to the expected queue without mark id and without VLAN tag "1", and report the checksum error::
 
     testpmd> port 0/queue 15: received 1 packets
     src=A4:BF:01:6A:62:58 - dst=00:11:22:33:44:55 - type=0x0800 - length=518 - nb_segs=1 - RSS hash=0x828dafbf - RSS queue=0xf - VLAN tci=0x1 - sw ptype: L2_ETHER L3_IPV4  - l2_len=14 - l3_len=20 - Receive queue=0xf
@@ -1164,7 +1164,7 @@  Test Case 15: MAC_IPV4_L2TPv3 vlan insert on + SW checksum offload check
 
 1. ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 6-9 -n 4 -a af:01.0 -- -i --rxq=16 --txq=16 --portmask=0x1 --nb-cores=2 --enable-rx-cksum
 
-2. DUT create fdir rules for MAC_IPV4_L2TPv3 with queue index and mark::
+2. SUT create fdir rules for MAC_IPV4_L2TPv3 with queue index and mark::
 
     flow create 0 ingress pattern eth / ipv4 / l2tpv3oip session_id is 1 / end actions queue index 1 / mark id 4 / end
     flow create 0 ingress pattern eth / ipv4 / l2tpv3oip session_id is 2 / end actions queue index 2 / mark id 3 / end
@@ -1181,11 +1181,11 @@  Test Case 15: MAC_IPV4_L2TPv3 vlan insert on + SW checksum offload check
     testpmd> set fwd mac
     testpmd> set verbose 1
 
-4. Tester send matched packets without vlan::
+4. TG send matched packets without vlan::
 
     sendp([Ether(dst="00:11:22:33:44:55")/IP(proto=115)/L2TP('\x00\x00\x00\x02')/Raw('x'*480)], iface="enp134s0f0")
-    
-5. DUT check the packets are distributed to expected queue with mark id and fwd with VLAN tag "1" to tester::
+
+5. SUT check the packets are distributed to the expected queue with mark id and fwd with VLAN tag "1" to TG::
 
     testpmd> port 0/queue 2: received 1 packets
     src=A4:BF:01:6A:62:58 - dst=00:11:22:33:44:55 - type=0x8100 - length=522 - nb_segs=1 - RSS hash=0xf20d0ef3 - RSS queue=0x2 - sw ptype: L2_ETHER_VLAN L3_IPV4  - l2_len=18 - l3_len=20 - Receive queue=0x2
@@ -1194,12 +1194,12 @@  Test Case 15: MAC_IPV4_L2TPv3 vlan insert on + SW checksum offload check
     port=0, mbuf=0x2268d26880, pkt_len=522, nb_segs=1:
     rx: l2_len=18 ethertype=800 l3_len=20 l4_proto=115 l4_len=0 flags=PKT_RX_RSS_HASH PKT_RX_L4_CKSUM_GOOD PKT_RX_IP_CKSUM_BAD PKT_RX_OUTER_L4_CKSUM_UNKNOWN
     tx: flags=PKT_TX_L4_NO_CKSUM PKT_TX_IPV4
-    
+
     17:25:40.615279 a4:bf:01:6a:62:58 > 00:11:22:33:44:55, ethertype 802.1Q (0x8100), length 522: vlan 1, p 0, ethertype IPv4, (tos 0x0, ttl 64, id 1, offset 0, flags [none], proto unknown (115), length 504, bad cksum 123 (->7a90)!)
     127.0.0.1 > 127.0.0.1:  ip-proto-115 484
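+
+The ``bad cksum 123`` annotation above can be reproduced offline; a small
+scapy sketch (the header fields mirror the packet sent in step 4, the rest is
+an assumption)::
+
+    from scapy.layers.inet import IP
+
+    hdr = IP(proto=115, len=504, id=1)
+    good = IP(bytes(hdr))  # serializing lets scapy fill in the real checksum
+    # the test deliberately sends chksum=0x123, which cannot be correct
+    assert good.chksum != 0x123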
 
 6. enable sw checksum::
-    
+
     testpmd> set fwd csum
     Set csum packet forwarding mode
     testpmd> port stop all
@@ -1208,11 +1208,11 @@  Test Case 15: MAC_IPV4_L2TPv3 vlan insert on + SW checksum offload check
     testpmd> port start all
     testpmd> start
 
-7. Tester send mismatched packets with incorrect checksum::
+7. TG send mismatched packets with incorrect checksum::
 
     sendp([Ether(dst="00:11:22:33:44:55")/IP(proto=115,chksum=0x123)/L2TP('\x00\x00\x00\x22')/Raw('x'*480)], iface="enp134s0f0")
 
-8. DUT check the packets are not distributed to expected queue without mark id and report the checksum error::
+8. SUT check the packets are not distributed to the expected queue without mark id and report the checksum error::
 
     port 0/queue 3: received 1 packets
     src=A4:BF:01:6A:62:58 - dst=00:11:22:33:44:55 - type=0x8100 - length=522 - nb_segs=1 - RSS hash=0xf20d0ef3 - RSS queue=0x3 - sw ptype: L2_ETHER_VLAN L3_IPV4  - l2_len=18 - l3_len=20 - Receive queue=0x3
@@ -1222,7 +1222,7 @@  Test Case 15: MAC_IPV4_L2TPv3 vlan insert on + SW checksum offload check
     rx: l2_len=18 ethertype=800 l3_len=20 l4_proto=115 l4_len=0 flags=PKT_RX_RSS_HASH PKT_RX_L4_CKSUM_GOOD PKT_RX_IP_CKSUM_BAD PKT_RX_OUTER_L4_CKSUM_UNKNOWN
     tx: flags=PKT_TX_L4_NO_CKSUM PKT_TX_IPV4
 
-9. DUT verify rule can be listed and destroyed::
+9. SUT verify rule can be listed and destroyed::
 
     testpmd> flow list 0
     ID      Group   Prio    Attr    Rule
@@ -1232,11 +1232,11 @@  Test Case 15: MAC_IPV4_L2TPv3 vlan insert on + SW checksum offload check
     3       0       0       i--     ETH IPV4 L2TPV3OIP => QUEUE MARK
     testpmd> flow destroy 0 rule 1
 
-10. Tester send matched packets with incorrect checksum::
+10. TG send matched packets with incorrect checksum::
 
      sendp([Ether(dst="00:11:22:33:44:55")/IP(proto=115,chksum=0x123)/L2TP('\x00\x00\x00\x02')/Raw('x'*480)], iface="enp134s0f0")
 
-11.DUT check the packets are not distributed to expected queue without mark id and report the checksum error::
+11. SUT check the packets are not distributed to the expected queue without mark id and report the checksum error::
 
     testpmd> port 0/queue 3: received 1 packets
     src=A4:BF:01:6A:62:58 - dst=00:11:22:33:44:55 - type=0x8100 - length=522 - nb_segs=1 - RSS hash=0xf20d0ef3 - RSS queue=0x3 - sw ptype: L2_ETHER_VLAN L3_IPV4  - l2_len=18 - l3_len=20 - Receive queue=0x3
@@ -1254,7 +1254,7 @@  The pre-steps are as l2tp_esp_iavf_test_plan.
 
 1. ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 6-9 -n 4 -a af:01.0 -- -i --rxq=16 --txq=16 --portmask=0x1 --nb-cores=2 --enable-rx-cksum
 
-2. DUT create fdir rules for MAC_IPV4_ESP with queue index and mark::
+2. SUT create fdir rules for MAC_IPV4_ESP with queue index and mark::
 
     flow create 0 ingress pattern eth / ipv4 / esp spi is 1 / end actions queue index 1 / mark id 4 / end
     flow create 0 ingress pattern eth / ipv4 / esp spi is 2 / end actions queue index 2 / mark id 3 / end
@@ -1266,9 +1266,9 @@  The pre-steps are as l2tp_esp_iavf_test_plan.
     testpmd> vlan set filter on 0
     testpmd> rx_vlan add 1 0
     testpmd> vlan set strip on 0
-     
+
 4. enable hw checksum::
-   
+
     testpmd> set fwd csum
     Set csum packet forwarding mode
     testpmd> set verbose 1
@@ -1278,11 +1278,11 @@  The pre-steps are as l2tp_esp_iavf_test_plan.
     testpmd> port start all
     testpmd> start
 
-5. Tester send matched packets with VLAN tag "1" and incorrect checksum::
+5. TG send matched packets with VLAN tag "1" and incorrect checksum::
 
     sendp([Ether(dst="00:11:22:33:44:55")/Dot1Q(vlan=1)/IP(proto=50,chksum=0x123)/ESP(spi=1)/Raw('x'*480)], iface="enp134s0f0")
-    
-6. DUT check the packets are distributed to expected queue with mark id and fwd without VLAN tag "1", and report the checksum error::
+
+6. SUT check the packets are distributed to the expected queue with mark id and fwd without VLAN tag "1", and report the checksum error::
 
     testpmd> port 0/queue 1: received 1 packets
     src=A4:BF:01:6A:62:58 - dst=00:11:22:33:44:55 - type=0x0800 - length=522 - nb_segs=1 - RSS hash=0xeb9be2c9 - RSS queue=0x1 - VLAN tci=0x1 - sw ptype: L2_ETHER L3_IPV4  - l2_len=14 - l3_len=20 - Receive queue=0x1
@@ -1296,11 +1296,11 @@  The pre-steps are as l2tp_esp_iavf_test_plan.
     17:39:12.063112 00:11:22:33:44:55 > 02:00:00:00:00:00, ethertype IPv4 (0x0800), length 522: (tos 0x0, ttl 64, id 1, offset 0, flags [none], proto ESP (50), length 508)
     127.0.0.1 > 127.0.0.1: ESP(spi=0x00000001,seq=0x0), length 488
 
-7. Tester send mismatched packets with VLAN tag "1" and incorrect checksum::
+7. TG send mismatched packets with VLAN tag "1" and incorrect checksum::
 
     sendp([Ether(dst="00:11:22:33:44:55")/Dot1Q(vlan=1)/IP(proto=50,chksum=0x123)/ESP(spi=11)/Raw('x'*480)], iface="enp134s0f0")
 
-8. DUT check the packets are not distributed to expected queue without mark id and fwd without VLAN tag "1", and report the checksum error::
+8. SUT check the packets are not distributed to the expected queue without mark id and fwd without VLAN tag "1", and report the checksum error::
 
     port 0/queue 9: received 1 packets
     src=A4:BF:01:6A:62:58 - dst=00:11:22:33:44:55 - type=0x0800 - length=522 - nb_segs=1 - RSS hash=0xeb9be2c9 - RSS queue=0x9 - VLAN tci=0x1 - sw ptype: L2_ETHER L3_IPV4  - l2_len=14 - l3_len=20 - Receive queue=0x9
@@ -1314,7 +1314,7 @@  The pre-steps are as l2tp_esp_iavf_test_plan.
     17:40:33.967072 00:11:22:33:44:55 > 02:00:00:00:00:00, ethertype IPv4 (0x0800), length 522: (tos 0x0, ttl 64, id 1, offset 0, flags [none], proto ESP (50), length 508)
     127.0.0.1 > 127.0.0.1: ESP(spi=0x0000000b,seq=0x0), length 488
 
-9. DUT verify rule can be listed and destroyed::
+9. SUT verify rule can be listed and destroyed::
 
     testpmd> flow list 0
     0       0       0       i--     ETH IPV4 ESP => QUEUE MARK
@@ -1323,11 +1323,11 @@  The pre-steps are as l2tp_esp_iavf_test_plan.
     3       0       0       i--     ETH IPV4 ESP => QUEUE MARK
     testpmd> flow destroy 0 rule 0
 
-10. Tester send matched packets with VLAN tag "1" and incorrect checksum::
+10. TG send matched packets with VLAN tag "1" and incorrect checksum::
 
      sendp([Ether(dst="00:11:22:33:44:55")/Dot1Q(vlan=1)/IP(proto=50,chksum=0x123)/ESP(spi=1)/Raw('x'*480)], iface="enp134s0f0")
-    
-11.DUT check the packets are not distributed to expected queue without mark id and and fwd without VLAN tag "1", and report the checksum error::
+
+11. SUT check the packets are not distributed to the expected queue without mark id and fwd without VLAN tag "1", and report the checksum error::
 
     testpmd> port 0/queue 9: received 1 packets
     src=A4:BF:01:6A:62:58 - dst=00:11:22:33:44:55 - type=0x0800 - length=522 - nb_segs=1 - RSS hash=0xeb9be2c9 - RSS queue=0x9 - VLAN tci=0x1 - sw ptype: L2_ETHER L3_IPV4  - l2_len=14 - l3_len=20 - Receive queue=0x9
@@ -1347,7 +1347,7 @@  Test Case 17: MAC_IPV6_NAT-T-ESP vlan insert on + SW checksum offload check
 
 1. ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 6-9 -n 4 -a af:01.0 -- -i --rxq=16 --txq=16 --portmask=0x1 --nb-cores=2 --enable-rx-cksum
 
-2. DUT create fdir rules for MAC_IPV6_NAT-T-ESP with queue index and mark::
+2. SUT create fdir rules for MAC_IPV6_NAT-T-ESP with queue index and mark::
 
     flow create 0 ingress pattern eth / ipv4 / udp / esp spi is 1 / end actions queue index 1 / mark id 4 / end
     flow create 0 ingress pattern eth / ipv4 / udp / esp spi is 2 / end actions queue index 2 / mark id 3 / end
@@ -1364,11 +1364,11 @@  Test Case 17: MAC_IPV6_NAT-T-ESP vlan insert on + SW checksum offload check
     testpmd> set fwd mac
     testpmd> set verbose 1
 
-4. Tester send matched packets without vlan::
+4. TG send matched packets without vlan::
 
     sendp([Ether(dst="00:11:22:33:44:55")/IP(chksum=0x123)/UDP(dport=4500)/ESP(spi=1)/Raw('x'*480)], iface="enp134s0f0")
-    
-5. DUT check the packets are distributed to expected queue with mark id and fwd with VLAN tag "1" to tester::
+
+5. SUT check the packets are distributed to the expected queue with mark id and fwd with VLAN tag "1" to TG::
 
     testpmd> port 0/queue 1: received 1 packets
     src=A4:BF:01:6A:62:58 - dst=00:11:22:33:44:55 - type=0x8100 - length=534 - nb_segs=1 - RSS hash=0x89b546af - RSS queue=0x1 - sw ptype: L2_ETHER_VLAN L3_IPV4 L4_UDP  - l2_len=18 - l3_len=20 - l4_len=8 - Receive queue=0x1
@@ -1382,7 +1382,7 @@  Test Case 17: MAC_IPV6_NAT-T-ESP vlan insert on + SW checksum offload check
     127.0.0.1.4500 > 127.0.0.1.4500: UDP-encap: ESP(spi=0x00000001,seq=0x0), length 488
 
 6. enable sw checksum::
-    
+
     testpmd> set fwd csum
     Set csum packet forwarding mode
     testpmd> port stop all
@@ -1391,11 +1391,11 @@  Test Case 17: MAC_IPV6_NAT-T-ESP vlan insert on + SW checksum offload check
     testpmd> port start all
     testpmd> start
 
-7. Tester send mismatched packets with incorrect checksum::
+7. TG send mismatched packets with incorrect checksum::
 
     sendp([Ether(dst="00:11:22:33:44:55")/IP(chksum=0x123)/UDP(dport=4500)/ESP(spi=11)/Raw('x'*480)], iface="enp134s0f0")
 
-8. DUT check the packets are not distributed to expected queue without mark id and report the checksum error::
+8. SUT check the packets are not distributed to the expected queue without mark id and report the checksum error::
 
     port 0/queue 15: received 1 packets
     src=A4:BF:01:6A:62:58 - dst=00:11:22:33:44:55 - type=0x8100 - length=534 - nb_segs=1 - RSS hash=0x89b546af - RSS queue=0xf - sw ptype: L2_ETHER_VLAN L3_IPV4 L4_UDP  - l2_len=18 - l3_len=20 - l4_len=8 - Receive queue=0xf
@@ -1405,7 +1405,7 @@  Test Case 17: MAC_IPV6_NAT-T-ESP vlan insert on + SW checksum offload check
     rx: l2_len=18 ethertype=800 l3_len=20 l4_proto=17 l4_len=8 flags=PKT_RX_RSS_HASH PKT_RX_L4_CKSUM_GOOD PKT_RX_IP_CKSUM_BAD PKT_RX_OUTER_L4_CKSUM_UNKNOWN
     tx: flags=PKT_TX_L4_NO_CKSUM PKT_TX_IPV4
 
-9. DUT verify rule can be listed and destroyed::
+9. SUT verify rule can be listed and destroyed::
 
     testpmd> flow list 0
     ID      Group   Prio    Attr    Rule
@@ -1415,11 +1415,11 @@  Test Case 17: MAC_IPV6_NAT-T-ESP vlan insert on + SW checksum offload check
     3       0       0       i--     ETH IPV4 UDP ESP => QUEUE MARK
     testpmd> flow destroy 0 rule 0
 
-10. Tester send matched packets with incorrect checksum::
+10. TG send matched packets with incorrect checksum::
 
      sendp([Ether(dst="00:11:22:33:44:55")/IP(chksum=0x123)/UDP(dport=4500)/ESP(spi=1)/Raw('x'*480)], iface="enp134s0f0")
 
-11.DUT check the packets are not distributed to expected queue without mark id and report the checksum error::
+11. SUT check the packets are not distributed to the expected queue without mark id and report the checksum error::
 
     testpmd> port 0/queue 15: received 1 packets
     src=A4:BF:01:6A:62:58 - dst=00:11:22:33:44:55 - type=0x8100 - length=534 - nb_segs=1 - RSS hash=0x89b546af - RSS queue=0xf - sw ptype: L2_ETHER_VLAN L3_IPV4 L4_UDP  - l2_len=18 - l3_len=20 - l4_len=8 - Receive queue=0xf
diff --git a/test_plans/l3fwd_func_test_plan.rst b/test_plans/l3fwd_func_test_plan.rst
index 2922797a..bcd26bd0 100644
--- a/test_plans/l3fwd_func_test_plan.rst
+++ b/test_plans/l3fwd_func_test_plan.rst
@@ -39,20 +39,20 @@  Software
 
 General Set Up
 --------------
-Here assume that 0000:18:00.0 and 0000:18:00.1 are DUT ports, and ens785f0 and ens785f1 are tester interfaces.
+Here we assume that 0000:18:00.0 and 0000:18:00.1 are the SUT ports, and ens785f0 and ens785f1 are the TG interfaces.
 
 #. Build DPDK and l3fwd application::
 
    <dpdk dir># meson -Dexamples=l3fwd <dpdk build dir>
    <dpdk dir># ninja -C <dpdk build dir>
 
-#. Get the DUT ports and tester interfaces::
+#. Get the SUT ports and TG interfaces::
 
     <dpdk dir># ./usertools/dpdk-devbind.py -s
     0000:18:00.0 'Ethernet Controller E810-C for QSFP 1592' if=ens785f0 drv=ice unused=vfio-pci
     0000:18:00.1 'Ethernet Controller E810-C for QSFP 1592' if=ens785f1 drv=ice unused=vfio-pci
 
-#. Bind the DUT ports to vfio-pci::
+#. Bind the SUT ports to vfio-pci::
 
     <dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci 0000:18:00.0 0000:18:00.1
     0000:18:00.0 'Ethernet Controller E810-C for QSFP 1592' drv=vfio-pci unused=ice
@@ -124,11 +124,11 @@  Test Case 1: 1 port 1 queue with default setting
 
    Here list some output logs which helps you understand l3fwd.
 
-   *  The DUT port is 0000:18:00.0::
+   *  The SUT port is 0000:18:00.0::
 
          EAL: Probe PCI driver: net_ice (8086:1592) device: 0000:18:00.0 (socket 0)
 
-   *  The lookup method is lpm and use default table. DUT mac address is 40:A6:B7:7B:3F:00, the egress packets dst mac is 02:00:00:00:00:00::
+   *  The lookup method is lpm and the default table is used. The SUT mac address is 40:A6:B7:7B:3F:00, and the egress packets' dst mac is 02:00:00:00:00:00::
 
          Neither LPM, EM, or FIB selected, defaulting to LPM
          L3FWD: Missing 1 or more rule files, using default instead
@@ -151,19 +151,19 @@  Test Case 1: 1 port 1 queue with default setting
 
          L3FWD:  -- lcoreid=1 portid=0 rxqueueid=0
 
-   *  Link status, Packets sending to DUT have to wait port `link up`::
+   *  Link status. Packets sent to the SUT have to wait for the port `link up`::
 
          Port 0 Link up at 100 Gbps FDX Autoneg
 
-#. run tcpdump to capture packets on tester interface::
+#. Run tcpdump to capture packets on the TG interface::
 
     tcpdump -i <TG interface> -vvv -Q in -e
     tcpdump -i ens2f0 -vvv -Q in -e
 
#. TG send 20 ipv4 and 20 ipv6 packets which match the route table::
 
-   >>> sendp([Ether(dst="<matched mac>", src="<src mac>")/IP(src="<src ip>",dst="<198.168.0.x>")/Raw("x"*80)], iface="<tester tx port interface>")
-   >>> sendp([Ether(dst="<matched mac>", src="<src mac>")/IPv6(src="<src ip>",dst="<2001:200::x>")/Raw("x"*80)], iface="<tester tx port interface>")
+   >>> sendp([Ether(dst="<matched mac>", src="<src mac>")/IP(src="<src ip>",dst="<198.168.0.x>")/Raw("x"*80)], iface="<tg tx port interface>")
+   >>> sendp([Ether(dst="<matched mac>", src="<src mac>")/IPv6(src="<src ip>",dst="<2001:200::x>")/Raw("x"*80)], iface="<tg tx port interface>")
 
    >>> sendp([Ether(dst="40:A6:B7:7B:3F:00", src="b4:96:91:9f:64:b9")/IP(src="1.2.3.4",dst="198.168.0.1")/Raw("x"*80)], iface="ens2f0")
    >>> sendp([Ether(dst="40:A6:B7:7B:3F:00", src="b4:96:91:9f:64:b9")/IPv6(src="fe80::b696:91ff:fe9f:64b9",dst="2001:200::")/Raw("x"*80)], iface="ens2f0")
@@ -184,21 +184,21 @@  Test Case 2: 1 port 4 queue with non-default setting
    ./<build_dir>/examples/dpdk-l3fwd -l <lcore0,lcore1> -n 4 -- -p 0x1 --config="(0,0,<lcore0>),(0,1,<lcore0>),(0,2,<lcore1>),(0,3,<lcore1>)" -P --rule_ipv4="./examples/l3fwd/em_default_v4.cfg" --rule_ipv6="./examples/l3fwd/em_default_v6.cfg" --lookup=em --rx-queue-size=2048 --tx-queue-size=2048
    ./build/examples/dpdk-l3fwd -l 1,2 -n 4 -- -p 0x1 --config="(0,0,1),(0,1,1),(0,2,2),(0,3,2)" -P --rule_ipv4="./examples/l3fwd/em_default_v4.cfg" --rule_ipv6="./examples/l3fwd/em_default_v6.cfg" --lookup=em --rx-queue-size=2048 --tx-queue-size=2048 --parse-ptype
 
-   "--parse-ptype" is optional, add it if DUT do not support to parse RTE_PTYPE_L3_IPV4_EXT and RTE_PTYPE_L3_IPV6_EXT.
+   "--parse-ptype" is optional, add it if SUT do not support to parse RTE_PTYPE_L3_IPV4_EXT and RTE_PTYPE_L3_IPV6_EXT.
 
    *  Route rules::
 
          EM: Adding route 198.18.0.0, 198.18.0.1, 9, 9, 17 (0) [0000:18:00.0]
 
-#. run tcpdump to capture packets on tester interface::
+#. Run tcpdump to capture packets on the TG interface::
 
     tcpdump -i <TG interface> -vvv -Q in -e
     tcpdump -i ens2f0 -vvv -Q in -e
 
#. TG send both ipv4 and ipv6 packets which match the route table and are distributed to all queues::
 
-   >>> sendp([Ether(dst="<unmatched mac>", src="<src mac>")/IP(src="<src ip>",dst="<198.168.0.x>")/Raw("x"*80)], iface="<tester tx port interface>")
-   >>> sendp([Ether(dst="<unmatched mac>", src="<src mac>")/IPv6(src="<src ip>",dst="<match table>")/Raw("x"*80)], iface="<tester tx port interface>")
+   >>> sendp([Ether(dst="<unmatched mac>", src="<src mac>")/IP(src="<src ip>",dst="<198.168.0.x>")/Raw("x"*80)], iface="<tg tx port interface>")
+   >>> sendp([Ether(dst="<unmatched mac>", src="<src mac>")/IPv6(src="<src ip>",dst="<match table>")/Raw("x"*80)], iface="<tg tx port interface>")
 
#. Check if the packets are forwarded to the TG; get the packet information from the tcpdump output::
 
@@ -216,19 +216,19 @@  Test Case 3: 2 ports 4 queues with non-default setting
 
    ./<build_dir>/examples/dpdk-l3fwd -l <lcore0,lcore1> -n 4 -- -p 0x3 --config="(0,0,<lcore0>),(0,1,<lcore0>),(0,2,<lcore1>),(0,3,<lcore1>),(1,0,<lcore0>),(1,1,<lcore0>),(1,2,<lcore1>),(1,3,<lcore1>)" -P --rule_ipv4="rule_ipv4.cfg" --rule_ipv6="rule_ipv6.cfg" --lookup=em --rx-queue-size=2048 --tx-queue-size=2048
 
-#. run tcpdump to capture packets on tester interfaces::
+#. Run tcpdump to capture packets on the TG interfaces::
 
-    tcpdump -i <tester tx Port0 interface> -vvv -Q in -e
-    tcpdump -i <tester tx Port1 interface> -vvv -Q in -e
+    tcpdump -i <tg tx Port0 interface> -vvv -Q in -e
+    tcpdump -i <tg tx Port1 interface> -vvv -Q in -e
 
#. Both TG ports send ipv4 and ipv6 packets which match the route table and are distributed to all queues::
 
-   >>> sendp([Ether(dst="<unmatched mac>", src="<src mac>")/IP(src="<src ip>",dst="<198.168.0.x>")/Raw("x"*80)], iface="<tester tx Port0 interface>")
-   >>> sendp([Ether(dst="<unmatched mac>", src="<src mac>")/IPv6(src="<src ip>",dst="<match table>")/Raw("x"*80)], iface="<tester tx port0 interface>")
-   >>> sendp([Ether(dst="<unmatched mac>", src="<src mac>")/IP(src="<src ip>",dst="<198.168.0.x>")/Raw("x"*80)], iface="<tester tx Port1 interface>")
-   >>> sendp([Ether(dst="<unmatched mac>", src="<src mac>")/IPv6(src="<src ip>",dst="<match table>")/Raw("x"*80)], iface="<tester tx port1 interface>")
+   >>> sendp([Ether(dst="<unmatched mac>", src="<src mac>")/IP(src="<src ip>",dst="<198.168.0.x>")/Raw("x"*80)], iface="<tg tx Port0 interface>")
+   >>> sendp([Ether(dst="<unmatched mac>", src="<src mac>")/IPv6(src="<src ip>",dst="<match table>")/Raw("x"*80)], iface="<tg tx port0 interface>")
+   >>> sendp([Ether(dst="<unmatched mac>", src="<src mac>")/IP(src="<src ip>",dst="<198.168.0.x>")/Raw("x"*80)], iface="<tg tx Port1 interface>")
+   >>> sendp([Ether(dst="<unmatched mac>", src="<src mac>")/IPv6(src="<src ip>",dst="<match table>")/Raw("x"*80)], iface="<tg tx port1 interface>")
 
-#. Check if the packets forwarded to TG, run tcpdump to capture packets on tester interface::
+#. Check if the packets are forwarded to the TG; run tcpdump to capture packets on the TG interface::
 
     07:44:32.770005 40:a6:b7:7b:3f:00 (oui Unknown) > 02:00:00:00:00:00 (oui Unknown), ethertype IPv4 (0x0800), length 114: (tos 0x0, ttl 63, id 1, offset 0, flags [none], proto Options (0), length 100)
         1.2.3.4 > 198.168.0.1:  hopopt 80
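+
+The same check can be scripted on the TG; a scapy sketch using the example
+addresses above (the interface name and the single-packet capture are
+assumptions)::
+
+    from scapy.all import sniff
+    from scapy.layers.inet import IP
+
+    pkts = sniff(iface="ens2f0", count=1, timeout=10)
+    p = pkts[0]
+    # l3fwd rewrites the destination mac per egress port and, for IPv4,
+    # decrements the TTL (64 on send, 63 in the capture above)
+    assert p.dst == "02:00:00:00:00:00"
+    assert p[IP].ttl == 63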
diff --git a/test_plans/l3fwdacl_test_plan.rst b/test_plans/l3fwdacl_test_plan.rst
index cf838ac0..23d0b96c 100644
--- a/test_plans/l3fwdacl_test_plan.rst
+++ b/test_plans/l3fwdacl_test_plan.rst
@@ -32,9 +32,9 @@  in a rule database to figure out whether the packets should be dropped
 Prerequisites
 =============
 
-1. The DUT has at least 2 DPDK supported IXGBE/I40E NIC ports::
+1. The SUT has at least 2 DPDK supported IXGBE/I40E NIC ports::
 
-    Tester      DUT
+    TG          SUT
     eth1  <---> PORT 0
     eth2  <---> PORT 1
 
diff --git a/test_plans/link_flowctrl_test_plan.rst b/test_plans/link_flowctrl_test_plan.rst
index 5a7d8c58..6ae0c9b6 100644
--- a/test_plans/link_flowctrl_test_plan.rst
+++ b/test_plans/link_flowctrl_test_plan.rst
@@ -79,7 +79,7 @@  to the device under test::
 
 Test Case: Ethernet link flow control
 =====================================
-This case series are focus on ``Ethernet link flow control features``, requires a high-speed packet generator, such as ixia.
+This case series focuses on ``Ethernet link flow control features`` and requires a high-speed traffic generator, such as IXIA.
 
 Subcase: test_perf_flowctrl_on_pause_fwd_on
 -------------------------------------------
@@ -210,7 +210,7 @@  Link flow control setting still working after port stop/start.
 Test Case: MAC Control Frame Forwarding
 =======================================
This case series focuses on ``MAC Control Frame Forwarding``, with no requirement for
-high-speed packets, it's very friendship to use scapy as packet generator.
+high-speed packets, so it is convenient to use scapy as the traffic generator.
 
 Subcase: test_flowctrl_off_pause_fwd_off
 ----------------------------------------
@@ -219,7 +219,7 @@  MAC Control Frame Forwarding disabled::
 
   testpmd> set flow_ctrl rx off tx off 300 50 10 1 mac_ctrl_frame_fwd off autoneg off 0
 
-Send PAUSE packets to DUT with below options:
+Send PAUSE packets to the SUT with the below options (see the construction sketch after this list):
 
 * Regular frame (correct src and dst mac addresses and opcode)
* Wrong source frame (wrong src, correct dst mac address and correct opcode)
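+
+A minimal scapy sketch for building and sending the regular PAUSE frame (the
+src mac, pause time and interface name are assumptions)::
+
+    from scapy.all import sendp
+    from scapy.layers.l2 import Ether
+    from scapy.packet import Raw
+
+    # IEEE 802.3x PAUSE: reserved multicast dst, ethertype 0x8808, opcode
+    # 0x0001 followed by a 2-byte pause time, zero-padded to a 60-byte frame
+    pause = Ether(dst="01:80:c2:00:00:01", src="00:11:22:33:44:55", type=0x8808) \
+            / Raw(b"\x00\x01" + b"\xff\xff" + b"\x00" * 42)
+    sendp(pause, iface="enp134s0f0")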
@@ -235,7 +235,7 @@  MAC Control Frame Forwarding enabled::
 
   testpmd> set flow_ctrl rx off tx off 300 50 10 1 mac_ctrl_frame_fwd on autoneg off 0
 
-Send PAUSE packets to DUT with same options as ``test_flowctrl_off_pause_fwd_off``
+Send PAUSE packets to the SUT with the same options as ``test_flowctrl_off_pause_fwd_off``
 
Validate that the port statistics match the below table
 
@@ -261,7 +261,7 @@  MAC Control Frame Forwarding setting still working after port stop/start.
 
     testpmd> set flow_ctrl mac_ctrl_frame_fwd on 0
 
-  Send regular PAUSE packets to DUT, and validate packets are received.
+  Send regular PAUSE packets to the SUT, and validate that packets are received.
 
   Stop and start port::
 
@@ -270,14 +270,14 @@  MAC Control Frame Forwarding setting still working after port stop/start.
     testpmd> port start 0
     testpmd> start
 
-  Send regular PAUSE packets to DUT, and validate packets are received.
+  Send regular PAUSE packets to the SUT, and validate that packets are received.
 
 
 * ``disable`` MAC Control Frame Forwarding, and validate ``no`` packets are received::
 
     testpmd> set flow_ctrl mac_ctrl_frame_fwd off 0
 
-  Send regular PAUSE packets to DUT, and validate ``no`` packets are received.
+  Send regular PAUSE packets to the SUT, and validate that ``no`` packets are received.
 
   Stop and start port::
 
@@ -286,4 +286,4 @@  MAC Control Frame Forwarding setting still working after port stop/start.
     testpmd> port start 0
     testpmd> start
 
-  Send regular PAUSE packets to DUT, and validate ``no`` packets are received.
+  Send regular PAUSE packets to the SUT, and validate that ``no`` packets are received.
diff --git a/test_plans/link_status_interrupt_test_plan.rst b/test_plans/link_status_interrupt_test_plan.rst
index ea34daf6..378654e4 100644
--- a/test_plans/link_status_interrupt_test_plan.rst
+++ b/test_plans/link_status_interrupt_test_plan.rst
@@ -51,7 +51,7 @@  Build dpdk and examples=link_status_interrupt:
    meson configure -Dexamples=link_status_interrupt <build_target>
    ninja -C <build_target>
 
-Assume port 0 and 1 are connected to the remote ports, e.g. packet generator.
+Assume ports 0 and 1 are connected to remote ports, e.g. a traffic generator.
 To run the test application in linuxapp environment with 4 lcores, 2 ports and
 2 RX queues per lcore::
 
@@ -74,5 +74,5 @@  Test Case: Port available
 
Run the test application with the above command with the cable/fiber unplugged from both
port 0 and 1, then plug them back in. After several seconds the link of all the ports
-is up. Together with packet generator, do layer 2 forwarding, and check if the
+is up. Together with the traffic generator, do layer 2 forwarding, and check that the
 packets can be received on port 0/1 and sent out on port 1/0.
diff --git a/test_plans/linux_modules_test_plan.rst b/test_plans/linux_modules_test_plan.rst
index 8479dc45..71ada001 100644
--- a/test_plans/linux_modules_test_plan.rst
+++ b/test_plans/linux_modules_test_plan.rst
@@ -19,7 +19,7 @@  Prerequisites
 
 There are two prerequisites. First, all of the drivers that you wish
 to test must be compiled and installed so that they are available through
-modprobe. Secondly, there should be a user on the dut which has the same
+modprobe. Secondly, there should be a user on the SUT which has the same
 password as the primary account for dts. This account will be used as the
 unprivileged user, but it still should have permission to lock at least
 1 GiB of memory to ensure that it can lock all of the process memory.
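+
+A quick way to confirm the unprivileged account can lock enough memory is to
+query its memlock limit; a Python sketch (run it as that user; the 1 GiB
+threshold follows the text above)::
+
+    import resource
+
+    soft, _hard = resource.getrlimit(resource.RLIMIT_MEMLOCK)
+    # unlimited, or at least 1 GiB, satisfies the prerequisite
+    print("memlock ok:", soft == resource.RLIM_INFINITY or soft >= 1 << 30)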
@@ -59,12 +59,12 @@  Start packet forwarding ::
 
     testpmd> start
 
-Start a packet capture on the tester::
+Start a packet capture on the TG::
 
-    # tcpdump -i (interface) ether src (tester mac address)
+    # tcpdump -i (interface) ether src (TG mac address)
 
-Send some packets to the dut and check that they are properly sent back into
-the packet capture on the tester.
+Send some packets to the SUT and check that they are properly sent back and
+show up in the packet capture on the TG.
 
 Test Case: TX RX Userspace
 ==========================
@@ -90,7 +90,7 @@  Bind the interface to the driver ::
 
 Grant permissions for all users to access the new character device ::
 
-    # setfacl -m u:dtsunprivilegedtester:rwx <DEV INTERFACE>
+    # setfacl -m u:dtsunprivilegedTG:rwx <DEV INTERFACE>
 
 Start testpmd in a loop configuration ::
 
@@ -101,12 +101,12 @@  Start packet forwarding ::
 
     testpmd> start
 
-Start a packet capture on the tester::
+Start a packet capture on the TG::
 
-    # tcpdump -i (interface) ether src (tester mac address)
+    # tcpdump -i (interface) ether src (TG mac address)
 
-Send some packets to the dut and check that they are properly sent back into
-the packet capture on the tester.
+Send some packets to the SUT and check that they are properly sent back and
+show up in the packet capture on the TG.
 
 Test Case: Hello World
 ======================
diff --git a/test_plans/loopback_virtio_user_server_mode_cbdma_test_plan.rst b/test_plans/loopback_virtio_user_server_mode_cbdma_test_plan.rst
index 8e083bfe..ae908c95 100644
--- a/test_plans/loopback_virtio_user_server_mode_cbdma_test_plan.rst
+++ b/test_plans/loopback_virtio_user_server_mode_cbdma_test_plan.rst
@@ -10,7 +10,7 @@  Description
 
 Vhost asynchronous data path leverages DMA devices to offload memory copies from the CPU and it is implemented in an asynchronous way.
 In addition, vhost supports M:N mapping between vrings and DMA virtual channels. Specifically, one vring can use multiple different DMA
-channels and one DMA channel can be shared by multiple vrings at the same time. Vhost enqueue operation with CBDMA channels is supported 
+channels and one DMA channel can be shared by multiple vrings at the same time. Vhost enqueue operation with CBDMA channels is supported
 in both split and packed ring.
 
 This document provides the test plan for testing the following features when Vhost-user using asynchronous data path with
@@ -49,7 +49,7 @@  General set up
       CC=gcc meson --werror -Denable_kmods=True -Dlibdir=lib -Dexamples=all --default-library=x86_64-native-linuxapp-gcc
       ninja -C x86_64-native-linuxapp-gcc -j 110
 
-2. Get the PCI device ID and DMA device ID of DUT, for example, 0000:18:00.0 is PCI device ID, 0000:00:04.0, 0000:00:04.1 is DMA device ID::
+2. Get the PCI device ID and DMA device ID of SUT, for example, 0000:18:00.0 is PCI device ID, 0000:00:04.0, 0000:00:04.1 is DMA device ID::
 
       <dpdk dir># ./usertools/dpdk-devbind.py -s
 
@@ -69,8 +69,8 @@  Common steps
 ------------
 1. Bind 1 NIC port and CBDMA channels to vfio-pci::
 
-      <dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci <DUT port pci device id>
-      <dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci <DUT port DMA device id>
+      <dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci <SUT port pci device id>
+      <dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci <SUT port DMA device id>
 
       For example, bind 2 CBDMA channels:
       <dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci 0000:00:04.0,0000:00:04.1
@@ -78,7 +78,7 @@  Common steps
 Test Case 1: loopback packed ring all path cbdma test payload check with server mode and multi-queues
 -----------------------------------------------------------------------------------------------------
This case tests that the payload is valid after forwarding large chain packets in loopback vhost-user/virtio-user packed ring
-all path multi-queues with server mode when vhost uses the asynchronous enqueue operations with CBDMA channels. Both 
+all path multi-queues with server mode when vhost uses the asynchronous enqueue operations with CBDMA channels. Both
 iova as VA and PA mode test.
 
 1. Bind 8 CBDMA channel to vfio-pci, as common step 1.
@@ -293,7 +293,7 @@  iova as VA and PA mode test.
 
 Test Case 3: loopback split ring large chain packets stress test with server mode and cbdma enqueue
 ---------------------------------------------------------------------------------------------------
-This is a stress test case about forwading large chain packets in loopback vhost-user/virtio-user split ring with server mode 
+This is a stress test case about forwarding large chain packets in loopback vhost-user/virtio-user split ring with server mode
 when vhost uses the asynchronous enqueue operations with CBDMA channels. Both iova as VA and PA mode test.
 
 1. Bind 1 CBDMA channel to vfio-pci, as common step 1.
@@ -325,7 +325,7 @@  when vhost uses the asynchronous enqueue operations with CBDMA channels. Both io
 
 Test Case 4: loopback packed ring large chain packets stress test with server mode and cbdma enqueue
 ----------------------------------------------------------------------------------------------------
-This is a stress test case about forwading large chain packets in loopback vhost-user/virtio-user packed ring with server mode 
+This is a stress test case about forwarding large chain packets in loopback vhost-user/virtio-user packed ring with server mode
 when vhost uses the asynchronous enqueue operations with CBDMA channels. Both iova as VA and PA mode test.
 
 1. Bind 1 CBDMA channel to vfio-pci, as common step 1.
diff --git a/test_plans/loopback_virtio_user_server_mode_dsa_test_plan.rst b/test_plans/loopback_virtio_user_server_mode_dsa_test_plan.rst
index 8e5bdf3a..fc762fec 100644
--- a/test_plans/loopback_virtio_user_server_mode_dsa_test_plan.rst
+++ b/test_plans/loopback_virtio_user_server_mode_dsa_test_plan.rst
@@ -42,7 +42,7 @@  General set up
 	# CC=gcc meson --werror -Denable_kmods=True -Dlibdir=lib -Dexamples=all --default-library=<dpdk build dir>
 	# ninja -C <dpdk build dir> -j 110
 
-2. Get the PCI device ID and DSA device ID of DUT, for example, 0000:4f:00.1 is PCI device ID, 0000:6a:01.0 - 0000:f6:01.0 are DSA device IDs::
+2. Get the PCI device ID and DSA device ID of SUT, for example, 0000:4f:00.1 is PCI device ID, 0000:6a:01.0 - 0000:f6:01.0 are DSA device IDs::
 
 	<dpdk dir># ./usertools/dpdk-devbind.py -s
 
@@ -68,7 +68,7 @@  Common steps
 ------------
 1. Bind DSA devices to DPDK vfio-pci driver::
 
-	<dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci <DUT port DSA device id>
+	<dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci <SUT port DSA device id>
 
 	For example, bind 2 DMA devices to vfio-pci driver:
 	<dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci 0000:e7:01.0 0000:ec:01.0
@@ -102,7 +102,7 @@  Common steps
 
 Test Case 1: loopback split ring server mode large chain packets stress test with dsa dpdk driver
 ---------------------------------------------------------------------------------------------------
-This is a stress test case about forwading large chain packets in loopback vhost-user/virtio-user split ring with server mode 
+This is a stress test case about forwarding large chain packets in loopback vhost-user/virtio-user split ring with server mode
 when vhost uses the asynchronous enqueue operations with dsa dpdk driver. Both iova as VA and PA mode test.
 
 1. Bind 1 dsa device to vfio-pci like common step 1::
diff --git a/test_plans/mdd_test_plan.rst b/test_plans/mdd_test_plan.rst
index 4c0f9d39..ae21cc04 100644
--- a/test_plans/mdd_test_plan.rst
+++ b/test_plans/mdd_test_plan.rst
@@ -51,9 +51,9 @@  Test Case 1: enable_mdd_dpdk_disable
     testpmd> set fwd mac
     testpmd> start
 
-6. get mac address of VF0 and use it as dest mac, using scapy to send 2000 packets from tester::
+6. get mac address of VF0 and use it as dest mac, using scapy to send 2000 packets from TG::
 
-    sendp(Ether(src='tester_mac', dst='vm_port0_mac')/IP()/UDP()/Raw(load='XXXXXXXXXXXXXXXXXX'), iface="tester_nic")
+    sendp(Ether(src='TG_mac', dst='vm_port0_mac')/IP()/UDP()/Raw(load='XXXXXXXXXXXXXXXXXX'), iface="TG_nic")
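
   The 2000 packets can be sent in a single call instead of a loop; a
   minimal sketch with placeholder names (substitute the real TG MAC,
   VF0 MAC and TG NIC)::

      from scapy.all import Ether, IP, UDP, Raw, sendp

      tg_mac = "00:11:22:33:44:55"      # TG port MAC (assumption)
      vf0_mac = "00:01:23:45:67:89"     # dest MAC from testpmd (assumption)
      pkt = Ether(src=tg_mac, dst=vf0_mac)/IP()/UDP()/Raw(load="X" * 18)
      sendp(pkt, iface="ens786f0", count=2000)   # 2000 packets at once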
 
 7. verify the packets can't be received by VF1, as follows::
 
@@ -119,9 +119,9 @@  Test Case 2: enable_mdd_dpdk_enable
     testpmd> set fwd mac
     testpmd> start
 
-6. get mac address of VF0 and use it as dest mac, using scapy to send 2000 packets from tester::
+6. get mac address of VF0 and use it as dest mac, using scapy to send 2000 packets from TG::
 
-    sendp(Ether(src='tester_mac', dst='vm_port0_mac')/IP()/UDP()/Raw(load='XXXXXXXXXXXXXXXXXX'), iface="tester_nic")
+    sendp(Ether(src='TG_mac', dst='vm_port0_mac')/IP()/UDP()/Raw(load='XXXXXXXXXXXXXXXXXX'), iface="TG_nic")
 
 7. verify the packets can't be received by VF1, as follows::
 
@@ -187,9 +187,9 @@  Test Case 3: disable_mdd_dpdk_disable
     testpmd> set fwd mac
     testpmd> start
 
-6. get mac address of VF0 and use it as dest mac, using scapy to send 2000 packets from tester::
+6. get mac address of VF0 and use it as dest mac, using scapy to send 2000 packets from TG::
 
-    sendp(Ether(src='tester_mac', dst='vm_port0_mac')/IP()/UDP()/Raw(load='XXXXXXXXXXXXXXXXXX'), iface="tester_nic")
+    sendp(Ether(src='TG_mac', dst='vm_port0_mac')/IP()/UDP()/Raw(load='XXXXXXXXXXXXXXXXXX'), iface="TG_nic")
 
 7. verify the packets can be received by VF1, as follows::
 
@@ -255,9 +255,9 @@  Test Case 4: disable_mdd_dpdk_enable
     testpmd> set fwd mac
     testpmd> start
 
-6. get mac address of VF0 and use it as dest mac, using scapy to send 2000 packets from tester::
+6. get mac address of VF0 and use it as dest mac, using scapy to send 2000 packets from TG::
 
-    sendp(Ether(src='tester_mac', dst='vm_port0_mac')/IP()/UDP()/Raw(load='XXXXXXXXXXXXXXXXXX'), iface="tester_nic")
+    sendp(Ether(src='TG_mac', dst='vm_port0_mac')/IP()/UDP()/Raw(load='XXXXXXXXXXXXXXXXXX'), iface="TG_nic")
 
 7. verify the packets can be received by VF1, as follows::
 
diff --git a/test_plans/metering_and_policing_test_plan.rst b/test_plans/metering_and_policing_test_plan.rst
index 67fb7f0f..d17e798f 100644
--- a/test_plans/metering_and_policing_test_plan.rst
+++ b/test_plans/metering_and_policing_test_plan.rst
@@ -32,24 +32,24 @@  always color aware mode API is invoked regardless of dscp table.
 
 Prerequisites
 -------------
-The DUT must have four 10G Ethernet ports connected to four ports on
-Tester that are controlled by the Scapy packet generator,
+The SUT must have four 10G Ethernet ports connected to four ports on
+the TG that are controlled by the Scapy traffic generator,
 
   ::
 
-    dut_port_0 <---> tester_port_0
-    dut_port_1 <---> tester_port_1
-    dut_port_2 <---> tester_port_2
-    dut_port_3 <---> tester_port_3
+    SUT_port_0 <---> TG_port_0
+    SUT_port_1 <---> TG_port_1
+    SUT_port_2 <---> TG_port_2
+    SUT_port_3 <---> TG_port_3
 
-Assume four DUT 10G Ethernet ports' pci device id is as the following,
+Assume the four SUT 10G Ethernet ports' pci device ids are as the following,
 
   ::
 
-    dut_port_0 : "0000:05:00.0"
-    dut_port_1 : "0000:05:00.1"
-    dut_port_2 : "0000:05:00.2"
-    dut_port_3 : "0000:05:00.3"
+    SUT_port_0 : "0000:05:00.0"
+    SUT_port_1 : "0000:05:00.1"
+    SUT_port_2 : "0000:05:00.2"
+    SUT_port_3 : "0000:05:00.3"
 
 Bind them to dpdk igb_uio driver,
 
diff --git a/test_plans/metrics_test_plan.rst b/test_plans/metrics_test_plan.rst
index 7980b581..7181ecd4 100644
--- a/test_plans/metrics_test_plan.rst
+++ b/test_plans/metrics_test_plan.rst
@@ -48,7 +48,7 @@  plugged into the available PCIe Gen2 8-lane slots in two different configuration
 
 port topology diagram::
 
-       packet generator                         DUT
+       traffic generator                         SUT
         .-----------.                      .-----------.
         | .-------. |                      | .-------. |
         | | portA | | <------------------> | | port0 | |
@@ -64,7 +64,7 @@  latency stats
 -------------
 
 The idea behind the testing process is to send different numbers of frames of
-different packets from packet generator to the DUT while these are being
+different packets from the traffic generator to the SUT while these are being
 forwarded back by the app, and to measure some statistics. These data are queried
 by the dpdk-proc app.
 
@@ -88,7 +88,7 @@  bit rate
 --------
 
 The idea behind the testing process is to send different numbers of frames of
-different packets from packet generator to the DUT while these are being
+different packets from the traffic generator to the SUT while these are being
 forwarded back by the app, and to measure some statistics. These data are queried
 by the dpdk-proc app.
 
@@ -142,9 +142,9 @@  Test Case : test latency stats
     testpmd> set fwd io
     testpmd> start
 
-#. Configure packet flow in packet generator.
+#. Configure packet flow in traffic generator.
 
-#. Use packet generator to send packets, continue traffic lasting several minitues.
+#. Use the traffic generator to send packets; keep the traffic running for several minutes.
 
 #. run dpdk-proc to get latency stats data, query data at an average interval and
    get 5 times data::
@@ -167,16 +167,16 @@  Test Case : test bit rate
     testpmd> set fwd io
     testpmd> start
 
-#. Configure packet flow in packet generator.
+#. Configure packet flow in traffic generator.
 
-#. Use packet generator to send packets, continue traffic lasting several minitues.
+#. Use the traffic generator to send packets; keep the traffic running for several minutes.
 
 #. run dpdk-proc to get latency stats data, query data at an average interval and
    get 5 times data::
 
    ./x86_64-native-linuxapp-gcc/app/dpdk-proc-info -- --metrics
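
A small sketch of the repeated polling (the interval below is an
assumption; the binary path is the one above)::

    import subprocess
    import time

    CMD = ["./x86_64-native-linuxapp-gcc/app/dpdk-proc-info",
           "--", "--metrics"]
    samples = []
    for _ in range(5):                       # "get 5 times data"
        out = subprocess.run(CMD, capture_output=True, text=True, check=True)
        samples.append(out.stdout)
        time.sleep(2)                        # query interval (assumed)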
 
-#. Compare dpdk statistics data with packet generator statistics data.
+#. Compare dpdk statistics data with traffic generator statistics data.
 
 Test Case : test bit rate peak value
 ====================================
@@ -192,16 +192,16 @@  Test Case : test bit rate peak value
     testpmd> set fwd io
     testpmd> start
 
-#. Configure packet flow in packet generator.
+#. Configure packet flow in traffic generator.
 
-#. Use packet generator to send packets, continue traffic lasting several minitues.
+#. Use the traffic generator to send packets; keep the traffic running for several minutes.
 
 #. run dpdk-proc to get latency stats data, query data at an average interval and
    get 5 times data::
 
    ./x86_64-native-linuxapp-gcc/app/dpdk-proc-info -- --metrics
 
-#. decline packet generator rate percent from 100%/80%/60%/20%, loop step 5/6.
+#. decrease the traffic generator rate from 100% through 80%/60%/20%, repeating steps 5/6.
 
 #. check that peak_bits_out/peak_bits_in keep the first max value while the traffic
    generator works with decreasing traffic rate percent.
diff --git a/test_plans/multiple_pthread_test_plan.rst b/test_plans/multiple_pthread_test_plan.rst
index b145b3e4..def58a0c 100644
--- a/test_plans/multiple_pthread_test_plan.rst
+++ b/test_plans/multiple_pthread_test_plan.rst
@@ -9,14 +9,14 @@  Multiple Pthread Test
 Description
 -----------
 
-This test is a basic multiple pthread test which demonstrates the basics 
-of control group. Cgroup is a Linux kernel feature that limits, accounts 
-for and isolates the resource usage, like CPU, memory, disk I/O, network, 
-etc of a collection of processes. Now, it's focus on the CPU usage. 
+This test is a basic multiple pthread test which demonstrates the basics
+of control group. Cgroup is a Linux kernel feature that limits, accounts
+for and isolates the resource usage (CPU, memory, disk I/O, network,
+etc.) of a collection of processes. Here, the focus is on CPU usage.
 
 Prerequisites
 -------------
-Support igb_uio driver, kernel is 3.11+. 
+Support igb_uio driver, kernel is 3.11+.
 Use "modprobe uio" "modprobe igb_uio" and then
 use "./tools/dpdk_nic_bind.py --bind=igb_uio device_bus_id" to bind the ports.
 
@@ -30,12 +30,12 @@  The format pattern::
 
     –lcores=’<lcore_set>[@cpu_set][,<lcore_set>[@cpu_set],...]’
 
-‘lcore_set’ and ‘cpu_set’ can be a single number, range or a group. 
+‘lcore_set’ and ‘cpu_set’ can be a single number, range or a group.
 A number is a “digit([0-9]+)”; a range is “<number>-<number>”;
 a group is “(<number|range>[,<number|range>,...])”.
-If a ‘@cpu_set’ value is not supplied, 
+If a ‘@cpu_set’ value is not supplied,
 the value of ‘cpu_set’ will default to the value of ‘lcore_set’.
-For example, "--lcores='1,2@(5-7),(3-5)@(0,2),(0,6),7-8'" 
+For example, "--lcores='1,2@(5-7),(3-5)@(0,2),(0,6),7-8'"
 which means starting 9 EAL threads::
 
     lcore 0 runs on cpuset 0x41 (cpu 0,6);
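
A sketch of how such a mapping can be expanded programmatically, mirroring
the defaulting rule above (this is an illustration, not the EAL parser)::

    import re

    def expand(spec):                 # "2", "3-5" or "(0,6)" -> set of ints
        out = set()
        for part in spec.strip("()").split(","):
            if "-" in part:
                lo, hi = part.split("-")
                out.update(range(int(lo), int(hi) + 1))
            else:
                out.add(int(part))
        return out

    def parse_lcores(arg):
        mapping = {}
        for item in re.split(r",(?![^(]*\))", arg):   # commas outside ()
            if "@" in item:
                lset, cset = item.split("@")
                for lc in expand(lset):
                    mapping[lc] = expand(cset)
            elif item.startswith("("):    # grouped lcores share the cpuset
                for lc in expand(item):
                    mapping[lc] = expand(item)
            else:                         # bare lcores pin one cpu each
                for lc in expand(item):
                    mapping[lc] = {lc}
        return mapping

    print(parse_lcores("1,2@(5-7),(3-5)@(0,2),(0,6),7-8"))   # 9 lcores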
@@ -83,13 +83,13 @@  Their TIDs are for these threads as below::
     | 31042 | Pdump-thread   |
     +-------+----------------+
 
-Before running the test, make sure the core is a unique one otherwise, 
+Before running the test, make sure the core is a unique one; otherwise,
 the throughput will be floating on different cores,
 configure lcore 4&5 used for packet forwarding, command as follows::
 
     testpmd>set corelist 4,5
 
-Pay attention that set corelist need to be configured before start, 
+Pay attention that set corelist needs to be configured before start,
 otherwise, it will not work::
 
     testpmd>start
@@ -117,7 +117,7 @@  You can see TID 31040(Lcore 4), 31041(Lore 5) are running.
 Test Case 2: Positive Test
 --------------------------
 Input random valid commands to make sure the commands can work,
-Give examples, suppose DUT have 128 cpu core.
+For example, suppose the SUT has 128 cpu cores.
 
 Case 1::
 
@@ -172,7 +172,7 @@  It means start 8 EAL thread::
     lcore 8 runs on cpuset 0x100 (cpu 8);
     lcore 9 runs on cpuset 0x200 (cpu 9).
 
-Case 6::    
+Case 6::
 
     ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd --lcores='2,(3-5)@3' -n 4 -- -i
 
diff --git a/test_plans/nic_single_core_perf_test_plan.rst b/test_plans/nic_single_core_perf_test_plan.rst
index 6d49de3f..3adb988e 100644
--- a/test_plans/nic_single_core_perf_test_plan.rst
+++ b/test_plans/nic_single_core_perf_test_plan.rst
@@ -15,22 +15,21 @@  Prerequisites
         on the same socket, pick one port per nic
     1.2) nic_single_core_perf test for 82599/500 Series 10G: four 82599 nics, all
         installed on the same socket, pick one port per nic
-  
 2. Software::
 
     dpdk: git clone http://dpdk.org/git/dpdk
     scapy: http://www.secdev.org/projects/scapy/
-    dts (next branch): git clone http://dpdk.org/git/tools/dts, 
-                       then "git checkout next" 
-    Trex code: http://trex-tgn.cisco.com/trex/release/v2.26.tar.gz 
+    dts (next branch): git clone http://dpdk.org/git/tools/dts,
+                       then "git checkout next"
+    Trex code: http://trex-tgn.cisco.com/trex/release/v2.26.tar.gz
                (to be run in stateless Layer 2 mode, see section in
                 Getting Started Guide for more details)
     python-prettytable:
-        apt install python-prettytable (for ubuntu os) 
-        or dnf install python-prettytable (for fedora os). 
+        apt install python-prettytable (for ubuntu os)
+        or dnf install python-prettytable (for fedora os).
 
 3. Connect all the selected nic ports to traffic generator(IXIA,TREX,
-   PKTGEN) ports(TG ports)::
+   Scapy) ports(TG ports)::
 
     2 TG 25g ports for Intel® Ethernet Network Adapter XXV710-DA2 ports
     4 TG 10g ports for 4 82599/500 Series 10G ports
@@ -44,7 +43,6 @@  Prerequisites
     For Intel® Ethernet Network Adapter E810-XXVDA4, if test 16 Byte Descriptor,
     need to be configured with the
     "-Dc_args=-DRTE_LIBRTE_ICE_16BYTE_RX_DESC" option at compile time.
-    
 
 Test Case : Single Core Performance Measurement
 ===============================================
@@ -55,19 +53,19 @@  Test Case : Single Core Performance Measurement
      ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x6 -n 4 -- -i \
          --portmask=0xf  --txd=128 --rxd=128
         testpmd> start
-        
+
 3) Configure traffic generator to send traffic
     configure one below stream for each TG port
         dst mac: peer nic port mac
         src ip : random
         dst ip : random
         packet length : 64 byte
-        
+
 4)  check throughput and compare it with the expected value.
 
 5)  for 82599/500 Series 10G, repeat above step 1-4 for txd=rxd=512,2048 separately.
-    for Intel® Ethernet Network Adapter XXV710-DA2  nic, just test
-    txd=rxd=512,2048 following above steps 1-4.
+    for Intel® Ethernet Network Adapter XXV710-DA2 nic, just test txd=rxd=512,2048 following above steps
+    1-4.
 
 6) Result tables for different NICs:
 
@@ -100,4 +98,4 @@  Note : The values for the expected throughput may vary due to different
        Ubuntu 16.04, and traffic generator IXIA)
 
 Case will raise failure if actual throughputs have more than 1Mpps gap
-from expected ones. 
+from expected ones.
diff --git a/test_plans/nvgre_test_plan.rst b/test_plans/nvgre_test_plan.rst
index 58e47153..59b0d362 100644
--- a/test_plans/nvgre_test_plan.rst
+++ b/test_plans/nvgre_test_plan.rst
@@ -24,7 +24,7 @@  plugged into the available PCIe Gen3 8-lane slot.
 1x XL710-DA4 (1x 10GbE full duplex optical ports per NIC)
 plugged into the available PCIe Gen3 8-lane slot.
 
-DUT board must be two sockets system and each cpu have more than 8 lcores.
+The SUT board must be a two-socket system and each cpu must have more than 8 lcores.
 
 Test Case: NVGRE ipv4 packet detect
 ===================================
@@ -229,7 +229,7 @@  Test Case: NVGRE ipv4 checksum offload
 
 This test validates NVGRE IPv4 checksum by the hardware. In order to do this, the packet should first
 be sent from ``Scapy`` with a wrong checksum (0x00) value. Then the pmd forwards the packet while the checksum
-is modified on DUT tx port by hardware. To verify it, tcpdump captures the
+is modified on the SUT tx port by hardware. To verify it, tcpdump captures the
 forwarded packet and checks whether the forwarded packet checksum is correct.
 
 Start testpmd with tunneling packet type to NVGRE::
@@ -239,12 +239,12 @@  Start testpmd with tunneling packet type to NVGRE::
 Set csum packet forwarding mode and enable verbose log::
 
     set fwd csum
-    csum set ip hw <dut tx_port>
-    csum set udp hw <dut tx_port>
-    csum set tcp hw <dut tx_port>
-    csum set sctp hw <dut tx_port>
-    csum set nvgre hw <dut tx_port>
-    csum parse-tunnel on <dut tx_port>
+    csum set ip hw <SUT tx_port>
+    csum set udp hw <SUT tx_port>
+    csum set tcp hw <SUT tx_port>
+    csum set sctp hw <SUT tx_port>
+    csum set nvgre hw <SUT tx_port>
+    csum parse-tunnel on <SUT tx_port>
     set verbose 1
 
 Send packet with invalid checksum first. Then check forwarded packet checksum
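
A scapy sketch of such a packet (scapy models NVGRE as GRE with the key
present and protocol 0x6558; the addresses, key and interface below are
placeholder assumptions)::

    from scapy.all import Ether, IP, UDP, GRE, sendp

    outer = Ether()/IP(dst="192.168.0.2")
    nvgre = GRE(key_present=1, proto=0x6558, key=0x00001000)
    # inner IPv4 checksum forced to 0x00 so the SUT must fix it on tx
    inner = Ether()/IP(src="10.0.0.1", dst="10.0.0.2", chksum=0x00)/UDP()
    sendp(outer/nvgre/inner, iface="ens786f0")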
@@ -272,7 +272,7 @@  Test Case: NVGRE ipv6 checksum offload
 
 This test validates NVGRE IPv6 checksum by the hardware. In order to do this, the packet should first
 be sent from ``Scapy`` with a wrong checksum (0x00) value. Then the pmd forwards the packet while the checksum
-is modified on DUT tx port by hardware. To verify it, tcpdump captures the
+is modified on the SUT tx port by hardware. To verify it, tcpdump captures the
 forwarded packet and checks whether the forwarded packet checksum is correct.
 
 Start testpmd with tunneling packet type::
@@ -282,12 +282,12 @@  Start testpmd with tunneling packet type::
 Set csum packet forwarding mode and enable verbose log::
 
     set fwd csum
-    csum set ip hw <dut tx_port>
-    csum set udp hw <dut tx_port>
-    csum set tcp hw <dut tx_port>
-    csum set sctp hw <dut tx_port>
-    csum set nvgre hw <dut tx_port>
-    csum parse-tunnel on <dut tx_port>
+    csum set ip hw <SUT tx_port>
+    csum set udp hw <SUT tx_port>
+    csum set tcp hw <SUT tx_port>
+    csum set sctp hw <SUT tx_port>
+    csum set nvgre hw <SUT tx_port>
+    csum parse-tunnel on <SUT tx_port>
     set verbose 1
 
 Send packet with invalid checksum first. Then check forwarded packet checksum
diff --git a/test_plans/packet_capture_test_plan.rst b/test_plans/packet_capture_test_plan.rst
index 274e95ca..4166d0fa 100644
--- a/test_plans/packet_capture_test_plan.rst
+++ b/test_plans/packet_capture_test_plan.rst
@@ -36,9 +36,9 @@  Test configuration
 2x NICs (2 full duplex ports per NIC) plugged into the available slots on a
 platform, another two nic ports are linked with cables.
 
-Connections ports between TESTER and DUT::
+Connection ports between TG and SUT::
 
-       TESTER                                DUT
+        TG                                  SUT
                     physical link
      .--------.                          .--------.
      | portA0 | <----------------------> | portB0 |
@@ -52,7 +52,7 @@  Connections ports between TESTER and DUT::
 
 note: portB0/portB1 are the bound ports.
       portB2/portB3 keep link up status and don't bind to dpdk driver.
-      Except portB0/portB1, DUT should have other two ports on link up status
+      Except portB0/portB1, the SUT should have two other ports in link up status
 
 Prerequisites
 =============
@@ -63,7 +63,7 @@  Test cases
 
 The testpmd application acts as a server process with port-topology chained mode,
 the dpdk-pdump acts as a client process to dump captured packets with different
-options setting. Select one port of tester as tx port, another port of tester
+options setting. Select one port of TG as tx port, another port of TG
 as rx port, send different type packets from two ports, check pcap file
 content dumped by scapy and tcpdump to confirm testpmd working correctly,
 check pcap file content dumped by tcpdump and dpdk-pdump to confirm
@@ -138,19 +138,19 @@  port configuration
 
 #. confirm two NICs physical link on a platform::
 
-    dut port 0 <---> tester port 0
-    dut port 1 <---> tester port 1
+    SUT port 0 <---> TG port 0
+    SUT port 1 <---> TG port 1
 
-#. Bind two port on DUT::
+#. Bind two ports on SUT::
 
-    ./usertools/dpdk_nic_bind.py --bind=igb_uio <dut port 0 pci address> <dut port 1 pci address>
+    ./usertools/dpdk_nic_bind.py --bind=igb_uio <SUT port 0 pci address> <SUT port 1 pci address>
 
-#. On dut, use port 0 as rx/tx port. If dut port 0 rx dump is set, scapy send
-   packet from tester port 0 and tcpdump dumps tester port 1's packet. If dut
-   port 0 tx dump is set, scapy send packet from tester port 1 and tcpdump dumps
-   tester port 0's packet.
+#. On the SUT, use port 0 as the rx/tx port. If SUT port 0 rx dump is set, scapy sends
+   packets from TG port 0 and tcpdump dumps TG port 1's packets. If SUT
+   port 0 tx dump is set, scapy sends packets from TG port 1 and tcpdump dumps
+   TG port 0's packets.
 
-#. If using interfaces as dpdk-pdump vdev, prepare two ports on DUT, which
+#. If using interfaces as dpdk-pdump vdev, prepare two ports on the SUT which
    haven't been bound to dpdk and are in linked status
 
 Test Case: test pdump port
@@ -158,8 +158,8 @@  Test Case: test pdump port
 
 Test different port type definition options::
 
-    port=<dut port id>
-    device_id=<dut pci address>
+    port=<SUT port id>
+    device_id=<SUT pci address>
 
 steps:
 
@@ -179,19 +179,19 @@  steps:
     ./x86_64-native-linuxapp-gcc/examples/dpdk-pdump -- --pdump  '<port option>,queue=*,\
     tx-dev=/tmp/pdump-tx.pcap,rx-dev=/tmp/pdump-rx.pcap'
 
-#. Set up linux's tcpdump to receiver packet on tester::
+#. Set up linux's tcpdump to receive packets on the TG::
 
     tcpdump -i <rx port name> -w /tmp/sniff-<rx port name>.pcap
     tcpdump -i <tx port name> -w /tmp/sniff-<tx port name>.pcap
 
-#. Send packet on tester by port 0::
+#. Send packet on TG by port 0::
 
     sendp(<packet format>, iface=<port 0 name>)
 
 #. Compare pcap file of scapy with the pcap file dumped by tcpdump. Compare pcap
    file dumped by dpdk-pdump with pcap files dumped by tcpdump.
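
   A sketch of that comparison with scapy (the sniff file name is a
   placeholder; the pdump file name is the plan's example)::

      from scapy.all import rdpcap

      ref = [bytes(p) for p in rdpcap("/tmp/sniff-ens786f0.pcap")]
      got = [bytes(p) for p in rdpcap("/tmp/pdump-rx.pcap")]
      assert sorted(ref) == sorted(got), "pcap contents differ"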
 
-#. Send packet on tester by port 1::
+#. Send packet on TG by port 1::
 
     sendp(<packet format>, iface=<port 1 name>)
 
@@ -232,19 +232,19 @@  steps:
     ./x86_64-native-linuxapp-gcc/examples/dpdk-pdump -- --pdump  'port=0,<queue option>,\
     tx-dev=/tmp/pdump-tx.pcap,rx-dev=/tmp/pdump-rx.pcap'
 
-#. Set up linux's tcpdump to receiver packet on tester::
+#. Set up linux's tcpdump to receive packets on the TG::
 
     tcpdump -i <rx port name> -w /tmp/sniff-<rx port name>.pcap
     tcpdump -i <tx port name> -w /tmp/sniff-<tx port name>.pcap
 
-#. Send packet on tester by port 0::
+#. Send packet on TG by port 0::
 
     sendp(<packet format>, iface=<port 0 name>)
 
 #. Compare pcap file of scapy with the pcap file dumped by tcpdump. Compare pcap
    file dumped by dpdk-pdump with pcap files dumped by tcpdump.
 
-#. Send packet on tester by port 1::
+#. Send packet on TG by port 1::
 
     sendp(<packet format>, iface=<port 1 name>)
 
@@ -285,12 +285,12 @@  steps:
 
     ./x86_64-native-linuxapp-gcc/examples/dpdk-pdump -- --pdump  'port=0,queue=*,<dump object>'
 
-#. Set up linux's tcpdump to receiver packet on tester::
+#. Set up linux's tcpdump to receive packets on the TG::
 
     tcpdump -i <rx port name> -w /tmp/sniff-<rx port name>.pcap
     tcpdump -i <tx port name> -w /tmp/sniff-<tx port name>.pcap
 
-#. Send packet on tester by port 0::
+#. Send packet on TG by port 0::
 
     sendp(<packet format>, iface=<port 0 name>)
 
@@ -298,7 +298,7 @@  steps:
    file dumped by dpdk-pdump with pcap files dumped by tcpdump(ignore when only
    set tx-dev).
 
-#. Send packet on tester by port 1::
+#. Send packet on TG by port 1::
 
     sendp(<packet format>, iface=<port 1 name>)
 
@@ -319,9 +319,9 @@  Dump rx/tx transmission packets to a specified port, which is on link status.
 
 test different dump options::
 
-    tx-dev=<dut tx port name>,rx-dev=<dut rx port name>
-    rx-dev=<dut rx port name>
-    tx-dev=<dut tx port name>
+    tx-dev=<SUT tx port name>,rx-dev=<SUT rx port name>
+    rx-dev=<SUT rx port name>
+    tx-dev=<SUT tx port name>
 
 steps:
 
@@ -340,17 +340,17 @@  steps:
 
     ./x86_64-native-linuxapp-gcc/examples/dpdk-pdump -- --pdump  'port=0,queue=*,<dump object>'
 
-#. Set up linux's tcpdump to receiver packet on tester::
+#. Set up linux's tcpdump to receive packets on the TG::
 
     tcpdump -i <rx port name> -w /tmp/sniff-<rx port name>.pcap
     tcpdump -i <tx port name> -w /tmp/sniff-<tx port name>.pcap
 
-#. Set up linux's tcpdump to receiver packet of dpdk-pdump on Dut::
+#. Set up linux's tcpdump to receive dpdk-pdump's packets on the SUT::
 
-    when rx-dev is set, use 'tcpdump -i <dut rx port name> -w /tmp/pdump-rx.pcap'
-    when tx-dev is set, use 'tcpdump -i <dut tx port name> -w /tmp/pdump-tx.pcap'
+    when rx-dev is set, use 'tcpdump -i <SUT rx port name> -w /tmp/pdump-rx.pcap'
+    when tx-dev is set, use 'tcpdump -i <SUT tx port name> -w /tmp/pdump-tx.pcap'
 
-#. Send packet on tester by port 0::
+#. Send packet on TG by port 0::
 
     sendp(<packet format>, iface=<port 0 name>)
 
@@ -358,7 +358,7 @@  steps:
    file dumped by dpdk-pdump with pcap files dumped by tcpdump(ignore when only
    set tx-dev).
 
-#. Send packet on tester by port 1::
+#. Send packet on TG by port 1::
 
     sendp(<packet format>, iface=<port 1 name>)
 
@@ -395,19 +395,19 @@  steps:
     ./x86_64-native-linuxapp-gcc/examples/dpdk-pdump -- --pdump  'port=0,queue=*,\
     tx-dev=/tmp/pdump-tx.pcap,rx-dev=/tmp/pdump-rx.pcap,ring-size=1024'
 
-#. Set up linux's tcpdump to receiver packet on tester::
+#. Set up linux's tcpdump to receive packets on the TG::
 
     tcpdump -i <rx port name> -w /tmp/sniff-<rx port name>.pcap
     tcpdump -i <tx port name> -w /tmp/sniff-<tx port name>.pcap
 
-#. Send packet on tester by port 0::
+#. Send packet on TG by port 0::
 
     sendp(<packet format>, iface=<port 0 name>)
 
 #. Compare pcap file of scapy with the pcap file dumped by tcpdump. Compare pcap
    file dumped by dpdk-pdump with pcap files dumped by tcpdump.
 
-#. Send packet on tester by port 1::
+#. Send packet on TG by port 1::
 
     sendp(<packet format>, iface=<port 1 name>)
 
@@ -444,19 +444,19 @@  steps:
     ./x86_64-native-linuxapp-gcc/examples/dpdk-pdump -- --pdump  'port=0,queue=*,\
     tx-dev=/tmp/pdump-tx.pcap,rx-dev=/tmp/pdump-rx.pcap,mbuf-size=2048'
 
-#. Set up linux's tcpdump to receiver packet on tester::
+#. Set up linux's tcpdump to receive packets on the TG::
 
     tcpdump -i <rx port name> -w /tmp/sniff-<rx port name>.pcap
     tcpdump -i <tx port name> -w /tmp/sniff-<tx port name>.pcap
 
-#. Send packet on tester by port 0::
+#. Send packet on TG by port 0::
 
     sendp(<packet format>, iface=<port 0 name>)
 
 #. Compare pcap file of scapy with the pcap file dumped by tcpdump. Compare pcap
    file dumped by dpdk-pdump with pcap files dumped by tcpdump.
 
-#. Send packet on tester by port 1::
+#. Send packet on TG by port 1::
 
     sendp(<packet format>, iface=<port 1 name>)
 
@@ -492,19 +492,19 @@  steps:
     ./x86_64-native-linuxapp-gcc/examples/dpdk-pdump -- --pdump  'port=0,queue=*,\
     tx-dev=/tmp/pdump-tx.pcap,rx-dev=/tmp/pdump-rx.pcap,total-num-mbufs=8191'
 
-#. Set up linux's tcpdump to receiver packet on tester::
+#. Set up linux's tcpdump to receive packets on the TG::
 
     tcpdump -i <rx port name> -w /tmp/sniff-<rx port name>.pcap
     tcpdump -i <tx port name> -w /tmp/sniff-<tx port name>.pcap
 
-#. Send packet on tester by port 0::
+#. Send packet on TG by port 0::
 
     sendp(<packet format>, iface=<port 0 name>)
 
 #. Compare pcap file of scapy with the pcap file dumped by tcpdump. Compare pcap
    file dumped by dpdk-pdump with pcap files dumped by tcpdump.
 
-#. Send packet on tester by port 1::
+#. Send packet on TG by port 1::
 
     sendp(<packet format>, iface=<port 1 name>)
 
diff --git a/test_plans/pf_smoke_test_plan.rst b/test_plans/pf_smoke_test_plan.rst
index 938f99e4..b0a2d17b 100644
--- a/test_plans/pf_smoke_test_plan.rst
+++ b/test_plans/pf_smoke_test_plan.rst
@@ -29,7 +29,7 @@  Prerequisites
     CC=gcc meson -Denable_kmods=True -Dlibdir=lib --default-library=static x86_64-native-linuxapp-gcc
     ninja -C x86_64-native-linuxapp-gcc
 
-4. Get the pci device id of DUT, for example::
+4. Get the pci device id of NIC ports, for example::
 
     ./usertools/dpdk-devbind.py -s
 
@@ -131,6 +131,6 @@  Test Case 3: test reset RX/TX queues
 
 5. Check with ``show config rxtx`` that the configuration for these parameters changed.
 
-6. Run ``start`` again to restart the forwarding, then start packet generator to transmit
+6. Run ``start`` again to restart the forwarding, then start traffic generator to transmit
    and receive packets, and check if testpmd is able to receive and forward packets
    successfully.
diff --git a/test_plans/pipeline_test_plan.rst b/test_plans/pipeline_test_plan.rst
index 416e92a0..10b3e8cb 100644
--- a/test_plans/pipeline_test_plan.rst
+++ b/test_plans/pipeline_test_plan.rst
@@ -12,20 +12,20 @@  application.
 
 Prerequisites
 ==============
-The DUT must have four 10G Ethernet ports connected to four ports on
-Tester that are controlled by the Scapy packet generator::
+The SUT must have four 10G Ethernet ports connected to four ports on
+the TG that are controlled by the Scapy traffic generator::
 
-    dut_port_0 <---> tester_port_0
-    dut_port_1 <---> tester_port_1
-    dut_port_2 <---> tester_port_2
-    dut_port_3 <---> tester_port_3
+    SUT_port_0 <---> TG_port_0
+    SUT_port_1 <---> TG_port_1
+    SUT_port_2 <---> TG_port_2
+    SUT_port_3 <---> TG_port_3
 
-Assume four DUT 10G Ethernet ports' pci device id is as the following::
+Assume the four SUT 10G Ethernet ports' pci device ids are as the following::
 
-    dut_port_0 : "0000:00:04.0"
-    dut_port_1 : "0000:00:05.0"
-    dut_port_2 : "0000:00:06.0"
-    dut_port_3 : "0000:00:07.0"
+    SUT_port_0 : "0000:00:04.0"
+    SUT_port_1 : "0000:00:05.0"
+    SUT_port_2 : "0000:00:06.0"
+    SUT_port_3 : "0000:00:07.0"
 
 Bind them to dpdk igb_uio driver::
 
@@ -64,13 +64,13 @@  Template of each Test Case
 ===========================
 1. Edit test_case_name/test_case_name.cli:
    change pci device id of LINK0, LINK1, LINK2, LINK3 to pci device id of
-   dut_port_0, dut_port_1, dut_port_2, dut_port_3
+   SUT_port_0, SUT_port_1, SUT_port_2, SUT_port_3
 
 2. Run pipeline app as the following::
 
     x86_64-native-linuxapp-gcc/examples/dpdk-pipeline  -c 0x3 -n 4 -- -s /tmp/pipeline/test_case_name/test_case_name.cli
 
-3. Send packets at tester side using scapy. The packets to be sent are maintained in pipeline/test_case_name/pcap_files/in_x.txt
+3. Send packets at TG side using scapy. The packets to be sent are maintained in pipeline/test_case_name/pcap_files/in_x.txt
 
 4. Verify the packets received using tcpdump. The expected packets are maintained in pipeline/test_case_name/pcap_files/out_x.txt
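
A sketch of step 3, assuming each line of in_x.txt holds one hex-encoded
packet (adjust the decoding to the real file format; the interface name is
a placeholder)::

    from scapy.all import Ether, sendp

    with open("pipeline/test_case_name/pcap_files/in_1.txt") as f:
        pkts = [Ether(bytes.fromhex(line.strip()))
                for line in f if line.strip()]
    sendp(pkts, iface="ens786f0")    # TG port wired to LINK0 (assumption)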
 
@@ -80,13 +80,13 @@  Example Test Case: test_mov_001
 =========================================
 1. Edit mov_001/mov_001.cli:
    change pci device id of LINK0, LINK1, LINK2, LINK3 to pci device id of
-   dut_port_0, dut_port_1, dut_port_2, dut_port_3
+   SUT_port_0, SUT_port_1, SUT_port_2, SUT_port_3
 
 2. Run pipeline app as the following::
 
     x86_64-native-linuxapp-gcc/examples/dpdk-pipeline  -c 0x3 -n 4 -- -s /tmp/pipeline/mov_001/mov_001.cli
 
-3. Send packets at tester side using scapy. The packets to be sent are maintained in pipeline/mov_001/pcap_files/in_1.txt
+3. Send packets at TG side using scapy. The packets to be sent are maintained in pipeline/mov_001/pcap_files/in_1.txt
 
 4. Verify the packets received using tcpdump. The expected packets are maintained in pipeline/mov_001/pcap_files/out_1.txt
 
diff --git a/test_plans/pmd_bonded_8023ad_test_plan.rst b/test_plans/pmd_bonded_8023ad_test_plan.rst
index f54c51d2..d69fe170 100644
--- a/test_plans/pmd_bonded_8023ad_test_plan.rst
+++ b/test_plans/pmd_bonded_8023ad_test_plan.rst
@@ -15,7 +15,7 @@  realize it based on 802.1AX specification, it includes LACP protocol and Marker
 protocol. This mode requires a switch that supports IEEE 802.3ad Dynamic link
 aggregation.
 
-note: Slave selection for outgoing traffic is done according to the transmit
+note: Slave selection for outgoing traffic is done according to the transmit
 hash policy, which may be changed from the default simple XOR layer2 policy.
 
 Requirements
@@ -56,18 +56,18 @@  Requirements
 
 Prerequisites for Bonding
 =========================
-all link ports of switch/dut should be the same data rate and support full-duplex.
+all link ports of switch/SUT should run at the same data rate and support full-duplex.
 
 Functional testing hardware configuration
 -----------------------------------------
-NIC and DUT ports requirements:
+NIC and SUT ports requirements:
 
-- Tester: 2 ports of nic
-- DUT:    2 ports of nic
+- TG:  2 ports of nic
+- SUT: 2 ports of nic
 
 port topology diagram::
 
-     Tester                           DUT
+       TG                             SUT
     .-------.                      .-------.
     | port0 | <------------------> | port0 |
     | port1 | <------------------> | port1 |
diff --git a/test_plans/pmd_bonded_test_plan.rst b/test_plans/pmd_bonded_test_plan.rst
index a76ac6b8..4c2eb0a4 100644
--- a/test_plans/pmd_bonded_test_plan.rst
+++ b/test_plans/pmd_bonded_test_plan.rst
@@ -25,9 +25,9 @@  Requirements
 
   - Mode = 3 (broadcast) Broadcast policy: Transmit network packets on all slave network interfaces. This mode provides fault tolerance but is only suitable for special cases.
 
-  - Mode = 4 (802.3ad) IEEE 802.3ad Dynamic link aggregation. Creates aggregation groups that share the same speed and duplex settings. Utilizes all slaves in the active aggregator according to the 802.3ad specification. This mode requires a switch that supports IEEE 802.3ad Dynamic link aggregation. Slave selection for outgoing traffic is done according to the transmit hash policy, which may be changed from the default simple XOR layer2 policy.
+  - Mode = 4 (802.3ad) IEEE 802.3ad Dynamic link aggregation. Creates aggregation groups that share the same speed and duplex settings. Utilizes all slaves in the active aggregator according to the 802.3ad specification. This mode requires a switch that supports IEEE 802.3ad Dynamic link aggregation. Slave selection for outgoing traffic is done according to the transmit hash policy, which may be changed from the default simple XOR layer2 policy.
 
-  - Mode = 5 (balance-tlb) Adaptive transmit load balancing. Linux bonding driver mode that does not require any special network switch support. The outgoing network packet traffic is distributed according to the current load (computed relative to the speed) on each network interface slave. Incoming traffic is received by one currently designated slave network interface. If this receiving slave fails, another slave takes over the MAC address of the failed receiving slave.
+  - Mode = 5 (balance-tlb) Adaptive transmit load balancing. Linux bonding driver mode that does not require any special network switch support. The outgoing network packet traffic is distributed according to the current load (computed relative to the speed) on each network interface slave. Incoming traffic is received by one currently designated slave network interface. If this receiving slave fails, another slave takes over the MAC address of the failed receiving slave.
 
   - Mode = 6 (balance-alb) Adaptive load balancing. Includes balance-tlb plus receive load balancing (rlb) for IPV4 traffic, and does not require any special network switch support. The receive load balancing is achieved by ARP negotiation. The bonding driver intercepts the ARP Replies sent by the local system on their way out and overwrites the source hardware address with the unique hardware address of one of the NIC slaves in the single logical bonded interface such that different network-peers use different MAC addresses for their network packet traffic.
 * The available transmit policies SHALL be as follows;
@@ -51,11 +51,11 @@  Prerequisites for Bonding
 
 * NIC and IXIA ports requirements.
 
-  - Tester: have 4 10Gb (82599) ports and 4 1Gb ports.
-  - DUT: have 4 10Gb (82599) ports and 4 1Gb ports. All functional tests should be done on both 10G and 1G port.
+  - TG: have 4 10Gb (82599) ports and 4 1Gb ports.
+  - SUT: have 4 10Gb (82599) ports and 4 1Gb ports. All functional tests should be done on both 10G and 1G port.
   - IXIA: have 4 10G ports and 4 1G ports. IXIA is used for performance test.
 
-* BIOS settings on DUT:
+* BIOS settings on SUT:
 
   - Enhanced Intel Speedstep----DISABLED
   - Processor C3--------------------DISABLED
@@ -71,9 +71,9 @@  Prerequisites for Bonding
   - Memory Power Optimization---------------------Performance Optimized
   - Memory RAS and Performance Configuration-->NUMA Optimized----ENABLED
 
-* Connections ports between tester/ixia and DUT
+* Connection ports between TG/ixia and SUT
 
-  - TESTER(Or IXIA)-------DUT
+  - TG(Or IXIA)------------SUT
   - portA------------------port0
   - portB------------------port1
   - portC------------------port2
@@ -83,7 +83,7 @@  Prerequisites for Bonding
 Test Setup#1 for Functional test
 ================================
 
-Tester has 4 ports(portA--portD), and DUT has 4 ports(port0-port3), then connect portA to port0, portB to port1, portC to port2, portD to port3.
+TG has 4 ports(portA--portD), and SUT has 4 ports(port0-port3), then connect portA to port0, portB to port1, portC to port2, portD to port3.
 
 
 Test Case1: Basic bonding--Create bonded devices and slaves
@@ -298,7 +298,7 @@  TX:
 
 Add ports 1-3 as slave devices to the bonded port 5.
 Send a packet stream from port D on the traffic generator to be forwarded through the bonded port.
-Verify that traffic is distributed equally in a round robin manner through ports 1-3 on the DUT back to the traffic generator.
+Verify that traffic is distributed equally in a round robin manner through ports 1-3 of the SUT back to the traffic generator.
 The sum of the packets received on ports A-C should equal the total packets sent from port D.
 The sum of the packets transmitted on ports 1-3 should equal the total packets transmitted from port 5 and received on port 4::
 
@@ -337,7 +337,7 @@  Test Case5: Mode 0(Round Robin) Bring one slave link down
 Add ports 1-3 as slave devices to the bonded port 5.
 Bring the link on either port 1, 2 or 3 down.
 Send a packet stream from port D on the traffic generator to be forwarded through the bonded port.
-Verify that forwarded traffic is distributed equally in a round robin manner through the active bonded ports on the DUT back to the traffic generator.
+Verify that forwarded traffic is distributed equally in a round robin manner through the active bonded ports of the SUT back to the traffic generator.
 The sum of the packets received on ports A-C should equal the total packets sent from port D.
 The sum of the packets transmitted on the active bonded ports should equal the total packets transmitted from port 5 and received on port 4.
 No traffic should be sent on the bonded port which was brought down.
@@ -398,7 +398,7 @@  Repeat the transmission and reception(TX/RX) test verify that data is now transm
 Test Case10: Mode 1(Active Backup) Link up/down active eth dev
 ==============================================================
 
-Bring link between port A and port0 down. If tester is ixia, can use IxExplorer to set the "Simulate Cable Disconnect" at the port property.
+Bring link between port A and port0 down. If the TG is ixia, IxExplorer can be used to set the "Simulate Cable Disconnect" at the port property.
 Verify that the active slave has been changed from port0.
 Repeat the transmission and reception test verify that data is now transmitted and received through the new active slave and no longer through port0
 
diff --git a/test_plans/pmd_stacked_bonded_test_plan.rst b/test_plans/pmd_stacked_bonded_test_plan.rst
index ca46c18d..cd589e45 100644
--- a/test_plans/pmd_stacked_bonded_test_plan.rst
+++ b/test_plans/pmd_stacked_bonded_test_plan.rst
@@ -26,18 +26,18 @@  Prerequisites
 hardware configuration
 ----------------------
 
-all link ports of tester/dut should be the same data rate and support full-duplex.
+all link ports of TG/SUT should run at the same data rate and support full-duplex.
 Slave down test cases need four ports at least, other test cases can run with
 two ports.
 
-NIC/DUT/TESTER ports requirements::
+NIC/SUT/TG ports requirements::
 
-     DUT:     2/4 ports.
-     TESTER:  2/4 ports.
+     SUT: 2/4 ports.
+     TG:  2/4 ports.
 
 port topology diagram(4 peer links)::
 
-       TESTER                                   DUT
+         TG                                     SUT
                   physical link             logical link
      .---------.                .-------------------------------------------.
      | portA 0 | <------------> | portB 0 <---> .--------.                  |
@@ -132,7 +132,7 @@  steps
 
 Test Case: active-backup stacked bonded rx traffic
 ==================================================
-setup dut/testpmd stacked bonded ports, send tcp packet by scapy and check
+setup SUT/testpmd stacked bonded ports, send tcp packet by scapy and check
 testpmd packet statistics.
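
A scapy sketch of the tcp packet used in these rx cases (MACs, IPs and
interface are placeholders)::

    from scapy.all import Ether, IP, TCP, sendp

    pkt = (Ether(dst="aa:bb:cc:dd:ee:ff") /
           IP(src="10.0.0.1", dst="10.0.0.2") /
           TCP(sport=1024, dport=1024))
    sendp(pkt, iface="ens786f0", count=100)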
 
 steps
@@ -187,7 +187,7 @@  steps
 
 Test Case: active-backup stacked bonded rx traffic with slave down
 ==================================================================
-setup dut/testpmd stacked bonded ports, set one slave of 1st level bonded port
+setup SUT/testpmd stacked bonded ports, set one slave of 1st level bonded port
 to down status, send tcp packet by scapy and check testpmd packet statistics.
 
 steps
@@ -255,7 +255,7 @@  steps
 
 Test Case: balance-xor stacked bonded rx traffic
 ================================================
-setup dut/testpmd stacked bonded ports, send tcp packet by scapy and check
+setup SUT/testpmd stacked bonded ports, send tcp packet by scapy and check
 packet statistics.
 
 steps
@@ -310,7 +310,7 @@  steps
 
 Test Case: balance-xor stacked bonded rx traffic with slave down
 ================================================================
-setup dut/testpmd stacked bonded ports, set one slave of 1st level bonded
+setup SUT/testpmd stacked bonded ports, set one slave of 1st level bonded
 device to down status, send tcp packet by scapy and check packet statistics.
 
 steps
diff --git a/test_plans/pmd_test_plan.rst b/test_plans/pmd_test_plan.rst
index c371edb1..f6bc4c1f 100644
--- a/test_plans/pmd_test_plan.rst
+++ b/test_plans/pmd_test_plan.rst
@@ -24,7 +24,7 @@  The core configuration description is:
 Prerequisites
 =============
 
-Each of the 10Gb/25Gb/40Gb/100Gb Ethernet* ports of the DUT is directly connected in
+Each of the 10Gb/25Gb/40Gb/100Gb Ethernet* ports of the SUT is directly connected in
 full-duplex to a different port of the peer traffic generator.
 
 Using interactive commands, the traffic generator can be configured to
@@ -69,8 +69,8 @@  Test Case: Packet Checking
     ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xf0 -n 4 -- -i
     testpmd> start
 
-#. The tester sends packets with different sizes (64, 65, 128, 256, 512, 1024, 1280 and 1518 bytes)
-   which will be forwarded by the DUT. The test checks if the packets are correctly forwarded and
+#. The TG sends packets with different sizes (64, 65, 128, 256, 512, 1024, 1280 and 1518 bytes)
+   which will be forwarded by the SUT. The test checks if the packets are correctly forwarded and
    if both RX and TX packet sizes match by `show port all stats`
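
   A scapy sketch of the size sweep (dst MAC and interface are placeholders;
   the 4 bytes subtracted account for the Ethernet FCS)::

      from scapy.all import Ether, IP, UDP, Raw, sendp

      SIZES = [64, 65, 128, 256, 512, 1024, 1280, 1518]
      base = Ether(dst="aa:bb:cc:dd:ee:ff")/IP()/UDP()
      hdr = len(base)                       # 42 bytes of headers
      pkts = [base/Raw(load="\x00" * (s - hdr - 4)) for s in SIZES]
      sendp(pkts, iface="ens786f0")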
 
 Test Case: Packet Checking in scalar mode
@@ -82,9 +82,9 @@  The linuxapp is started with the following parameters:
   -c 0x6 -n 4 -a <devid>,scalar_enable=1  -- -i --portmask=<portmask>
 
 
-This test is applicable for Marvell devices. The tester sends 1 packet at a
+This test is applicable for Marvell devices. The TG sends 1 packet at a
 time with different sizes (64, 65, 128, 256, 512, 1024, 1280 and 1518 bytes),
-using scapy, which will be forwarded by the DUT. The test checks if the packets
+using scapy, which will be forwarded by the SUT. The test checks if the packets
 are correctly forwarded and if both RX and TX packet sizes match.
 
 
@@ -95,9 +95,9 @@  Test Case: Descriptors Checking
 
    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xf0 -n 4 -- -i --rxd={rxd} --txd={txd}
 
-#. The tester sends packets with different sizes (64, 65, 128, 256, 512, 1024, 1280 and 1518 bytes)
+#. The TG sends packets with different sizes (64, 65, 128, 256, 512, 1024, 1280 and 1518 bytes)
    for different values of rxd and txd (128, 256, 512, 1024, 2048 and 4096)
-   The packets will be forwarded by the DUT. The test checks if the packets are correctly forwarded.
+   The packets will be forwarded by the SUT. The test checks if the packets are correctly forwarded.
 
 Test Case: Single Core Performance Benchmarking
 ===============================================
@@ -107,10 +107,10 @@  must grater than single core performance, then the bottleneck will be the core.
 Below is an example setup topology for performance test, NIC (one or more) ports connect to
 Traffic Generator ports directly::
 
-    Dut Card 0 port 0 ---- Traffic Generator port 0
-    Dut Card 1 port 0 ---- Traffic Generator port 1
+    SUT Card 0 port 0 ---- Traffic Generator port 0
+    SUT Card 1 port 0 ---- Traffic Generator port 1
      ...
-    DUT Card n port 0 ---- Traffic Generator port n
+    SUT Card n port 0 ---- Traffic Generator port n
 
 In order to trigger the best performance of NIC, there will be specific setting, and the setting vary
 from NIC to NIC.
@@ -140,7 +140,7 @@  Test steps:
 
    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x1800000000 -n 4 -- -i --portmask=0x3 --txd=2048 --rxd=2048 --txq=2 --rxq=2
 
-#. The tester send packets which will be forwarded by the DUT, record the perfromance numbers.
+#. The TG sends packets which will be forwarded by the SUT; record the performance numbers.
 
 The throughput is measured for each of these combinations of different packet size
 (64, 65, 128, 256, 512, 1024, 1280 and 1518 bytes) and different values of rxd and txd (128, 256, 512, 1024, 2048 and 4096)
diff --git a/test_plans/pmdpcap_test_plan.rst b/test_plans/pmdpcap_test_plan.rst
index cc27abe8..b3359c1a 100644
--- a/test_plans/pmdpcap_test_plan.rst
+++ b/test_plans/pmdpcap_test_plan.rst
@@ -17,7 +17,7 @@  The core configurations description is:
 Prerequisites
 =============
 
-This test does not requires connections between DUT and tester as it is focused
+This test does not require connections between SUT and TG as it is focused
 on PCAP devices created by Test PMD.
 
 It is the Test PMD application itself which sends and receives traffic from and to
diff --git a/test_plans/pmdrss_hash_test_plan.rst b/test_plans/pmdrss_hash_test_plan.rst
index 30bf3f99..f622bc0a 100644
--- a/test_plans/pmdrss_hash_test_plan.rst
+++ b/test_plans/pmdrss_hash_test_plan.rst
@@ -86,8 +86,8 @@  Testpmd configuration - 16 RX/TX queues per port
 
        testpmd command: start
 
-tester Configuration
---------------------
+TG Configuration
+----------------
 
 #. set up scapy
 
diff --git a/test_plans/pmdrssreta_test_plan.rst b/test_plans/pmdrssreta_test_plan.rst
index c0913cd2..0bea7437 100644
--- a/test_plans/pmdrssreta_test_plan.rst
+++ b/test_plans/pmdrssreta_test_plan.rst
@@ -116,8 +116,8 @@  interactive commands of the ``testpmd`` application.
 
      testpmd command: start
 
-tester Configuration
---------------------
+TG Configuration
+----------------
 
 #. In order to make most entries of the reta to be tested, the traffic
    generator has to be configured to randomize the value of the 5-tuple fields
@@ -127,7 +127,7 @@  tester Configuration
 #. Set the packet count of one burst to a certain value.
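
A sketch of the randomized 5-tuple traffic with scapy's volatile fields
(burst size and interface are placeholder assumptions)::

    from scapy.all import Ether, IP, UDP, RandIP, RandShort, sendp

    pkts = [Ether()/IP(src=RandIP(), dst=RandIP()) /
            UDP(sport=RandShort(), dport=RandShort())
            for _ in range(128)]          # one burst of 128 packets
    sendp(pkts, iface="ens786f0")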
 
 
-Example output (1P/2Q)  received by the dut):::
+Example output (1P/2Q) received by the SUT::
 -----------------------------------------------
 
 +--------------+-------------+------------+-----------------+------+
diff --git a/test_plans/port_control_test_plan.rst b/test_plans/port_control_test_plan.rst
index 07c3b948..5b6abda0 100644
--- a/test_plans/port_control_test_plan.rst
+++ b/test_plans/port_control_test_plan.rst
@@ -42,7 +42,7 @@  Test Case: pf start/stop/reset/close
 
    verify that the link status is up.
 
-   Using scapy to send 1000 random packets from tester,
+   Using scapy to send 1000 random packets from TG,
    verify the packets can be received and can be forwarded::
 
      scapy
@@ -76,7 +76,7 @@  Test Case: pf start/stop/reset/close
 
   verify that the link status is up.
 
-  Send the same 1000 packets with scapy from tester,
+  Send the same 1000 packets with scapy from TG,
   verify the packets can be received and forwarded.
 
 4. Reset the port, run the commands::
@@ -106,7 +106,7 @@  Test Case: pf start/stop/reset/close
      Link speed: 10000 Mbps
 
    verify that the link status is up.
-   Send the same 1000 packets with scapy from tester,
+   Send the same 1000 packets with scapy from TG,
    verify the packets can be received and forwarded.
 
 5. Close the port, run the commands::
diff --git a/test_plans/port_representor_test_plan.rst b/test_plans/port_representor_test_plan.rst
index 47dd0f96..00fa0c7f 100644
--- a/test_plans/port_representor_test_plan.rst
+++ b/test_plans/port_representor_test_plan.rst
@@ -107,7 +107,7 @@  Description: use control testpmd to enable/disable dataplane testpmd ports promi
     scapy> pkts=[pkt1, pkt2, pkt3, pkt4]*10
     scapy> sendp(pkts, iface="ens785f0")
 
-3. check port stats in DUT::
+3. check port stats in SUT::
 
     PF testpmd> show port stats all
 
@@ -135,7 +135,7 @@  Description: use control testpmd to set vf mac address
 
 3. use test case 2 step 2 to send packets from traffic generator
 
-4. check port stats in DUT::
+4. check port stats in SUT::
 
     PF testpmd> show port stats all
 
diff --git a/test_plans/power_branch_ratio_test_plan.rst b/test_plans/power_branch_ratio_test_plan.rst
index fe23d4d5..b509bddc 100644
--- a/test_plans/power_branch_ratio_test_plan.rst
+++ b/test_plans/power_branch_ratio_test_plan.rst
@@ -35,7 +35,7 @@  Test Case 1 : Set Branch-Ratio Test Rate by User ===============================
     ./<build_target>/app/dpdk-testpmd -v -c 0x6 -n 1 -m 1024 --file-prefix=vmpower2 -- -i
     > start
 
-3. Inject packet with packet generator to the NIC, with line rate,
+3. Inject packets with the traffic generator to the NIC at line rate,
 check the branch ratio and the related CPU frequency, in this case, the
 core 2 will be used by testpmd as worker core, branch ratio will be shown as
 following in dpdk-vm_power_manager's log output::
@@ -47,13 +47,13 @@  following in dpdk-vm_power_manager's log output::
 
 The above values in order are core number, ratio measured, # of branches, number of polls.
 
-4. [Check Point]Inject packets with packet generator with Line Rate(10G), check
+4. [Check Point] Inject packets with the traffic generator at Line Rate(10G), check
 the core 2 frequency using the following cmd; the frequency reported should be at the
 highest frequency::

    cat /sys/devices/system/cpu/cpu2/cpufreq/scaling_cur_freq
    [no_turbo_max]: cur_freq >= no_turbo_max(P1)
 
-5. [Check Point]Stopped the traffic from packet generator. Check the core 2
+5. [Check Point] Stop the traffic from the traffic generator. Check the core 2
 frequency again, the Frequency reported should be::
 
     [sys_min]:cur_freq <= sys_min
diff --git a/test_plans/power_managerment_throughput_test_plan.rst b/test_plans/power_managerment_throughput_test_plan.rst
index 4005831d..896b5d5c 100644
--- a/test_plans/power_managerment_throughput_test_plan.rst
+++ b/test_plans/power_managerment_throughput_test_plan.rst
@@ -36,10 +36,10 @@  Test Case1: Check the CPU frequency can change according differernt packet speed
 
     ./<build_target>/examples/dpdk-l3fwd-power -c 0xc000000 -n 4 -- -P -p 0x01  --config '(0,0,27)'
 
-4. Send packets by packet generator with high speed, check the used cpu frequency is almost 100%::
+4. Send packets by traffic generator at high speed; check that the cpu frequency in use is almost 100%::
 
     cat /sys/devices/system/cpu/cpu27/cpufreq/cpuinfo_cur_freq
 
-5. Send packets by packet generator with low speed, the CPU frequency will reduce about 50%::
+5. Send packets by traffic generator with low speed, the CPU frequency will reduce about 50%::
 
     cat /sys/devices/system/cpu/cpu27/cpufreq/cpuinfo_cur_freq
\ No newline at end of file
diff --git a/test_plans/power_pbf_test_plan.rst b/test_plans/power_pbf_test_plan.rst
index f2b3b40b..5560db11 100644
--- a/test_plans/power_pbf_test_plan.rst
+++ b/test_plans/power_pbf_test_plan.rst
@@ -123,7 +123,7 @@  Step 4. Check the CPU frequency will be set to No turbo max frequency when turbo
 
 Test Case4:  Check Distributor Sample Use High Priority Core as Distribute Core
 ===============================================================================
-Step 1. Get the Priority core list on DUT in test case 1::
+Step 1. Get the Priority core list on SUT in test case 1::
 
     For example:
     6,7,13,14,15,16,21,26,27,29,36,38
@@ -140,7 +140,7 @@  Step 2. Launch distributor with 1 priority core, check the high priority core wi
 
 Test Case5:  Check Distributor Sample Will use High priority core for distribute core and rx/tx core
 ====================================================================================================
-Step 1. Get the Priority core list on DUT in test case 1::
+Step 1. Get the Priority core list on SUT in test case 1::
 
     Using pbf.py to check, or check from kernel
     For example:
diff --git a/test_plans/ptpclient_test_plan.rst b/test_plans/ptpclient_test_plan.rst
index 42c67b27..95ef3513 100644
--- a/test_plans/ptpclient_test_plan.rst
+++ b/test_plans/ptpclient_test_plan.rst
@@ -5,15 +5,15 @@ 
 Sample Application Tests: IEEE1588
 ==================================
 
-The PTP (Precision Time Protocol) client sample application is a simple 
-example of using the DPDK IEEE1588 API to communicate with a PTP master 
-clock to synchronize the time on the NIC and, optionally, on the Linux 
+The PTP (Precision Time Protocol) client sample application is a simple
+example of using the DPDK IEEE1588 API to communicate with a PTP master
+clock to synchronize the time on the NIC and, optionally, on the Linux
 system.
 
 Prerequisites
 =============
-Assume one port is connected to the tester and "linuxptp.x86_64"
-has been installed on the tester.
+Assume one port is connected to the TG and "linuxptp.x86_64"
+has been installed on the TG.
 
 Case Config::
 
@@ -23,37 +23,37 @@  The sample should be validated on Intel® Ethernet 700 Series, 82599 and i350 Ni
 
 Test case: ptp client
 ======================
-Start ptp server on tester with IEEE 802.3 network transport::
+Start ptp server on TG with IEEE 802.3 network transport::
 
     ptp4l -i p785p1 -2 -m
 
-Start ptp client on DUT and wait few seconds::
+Start ptp client on the SUT and wait a few seconds::
 
     ./<build_target>/examples/dpdk-ptpclient -c f -n 3 -- -T 0 -p 0x1
 
 Check that the output message contains the T1,T2,T3,T4 clock values and the time difference
 between master and slave is about 10us on 82599, 20us on Intel® Ethernet 700 Series,
 8us on i350.
-   
+
 Test case: update system
 ========================
-Reset DUT clock to initial time and make sure system time has been changed::
+Reset SUT clock to initial time and make sure system time has been changed::
 
-    date -s "1970-01-01 00:00:00"    
+    date -s "1970-01-01 00:00:00"
 
-Strip DUT and tester board system time::
+Record the SUT and TG board system time::
 
     date +"%s.%N"
 
-Start ptp server on tester with IEEE 802.3 network transport::
+Start ptp server on TG with IEEE 802.3 network transport::
 
     ptp4l -i p785p1 -2 -m -S
 
-Start ptp client on DUT and wait few seconds::
+Start ptp client on SUT and wait a few seconds::
 
     ./<build_target>/examples/dpdk-ptpclient -c f -n 3 -- -T 1 -p 0x1
 
-Make sure DUT system time has been changed to same as tester.
+Make sure the SUT system time has been changed to the same as the TG.
 Check that the output message contains the T1,T2,T3,T4 clock values and the time difference
 between master and slave is about 10us on 82599, 20us on Intel® Ethernet 700 Series,
 8us on i350.
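
For reference, the master/slave time difference checked above follows the standard IEEE 1588 offset computation; a sketch with hypothetical nanosecond timestamps::

    # offset = ((T2 - T1) - (T4 - T3)) / 2
    T1=1000; T2=1800; T3=2000; T4=2600
    echo $(( ((T2 - T1) - (T4 - T3)) / 2 ))    # prints 100: slave is 100 ns ahead
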
diff --git a/test_plans/pvp_diff_qemu_version_test_plan.rst b/test_plans/pvp_diff_qemu_version_test_plan.rst
index 83a7b269..90cde274 100644
--- a/test_plans/pvp_diff_qemu_version_test_plan.rst
+++ b/test_plans/pvp_diff_qemu_version_test_plan.rst
@@ -27,7 +27,7 @@  Test Case 1: PVP multi qemu version test with virtio 0.95 mergeable path
     testpmd>set fwd mac
     testpmd>start
 
-2. Check dut machine already has installed different version qemu, includes [qemu_2.5, qemu_2.6, qemu_2.7, qemu_2.8, qemu_2.9, qemu_2.10, qemu_2.11, qemu_2.12, qemu_3.0].
+2. Check that the SUT machine already has different QEMU versions installed, including [qemu_2.5, qemu_2.6, qemu_2.7, qemu_2.8, qemu_2.9, qemu_2.10, qemu_2.11, qemu_2.12, qemu_3.0].
 
 3. Go to the absolute_path of different version qemu, then launch VM with different version qemu::
 
@@ -49,7 +49,7 @@  Test Case 1: PVP multi qemu version test with virtio 0.95 mergeable path
     testpmd>set fwd mac
     testpmd>start
 
-5. Send packet by packet generator with different packet sizes(68,128,256,512,1024,1280,1518),repeat below command to get throughput 10 times,then calculate the average throughput::
+5. Send packets by traffic generator with different packet sizes (68,128,256,512,1024,1280,1518), repeat below command 10 times to get throughput, then calculate the average throughput::
 
     testpmd>show port stats all
 
@@ -65,7 +65,7 @@  Test Case 2: PVP test with virtio 1.0 mergeable path
     testpmd>set fwd mac
     testpmd>start
 
-2. Check dut machine already has installed different version qemu, includes [qemu_2.5, qemu_2.6, qemu_2.7, qemu_2.8, qemu_2.9, qemu_2.10, qemu_2.11, qemu_2.12, qemu_3.0].
+2. Check that the SUT machine already has different QEMU versions installed, including [qemu_2.5, qemu_2.6, qemu_2.7, qemu_2.8, qemu_2.9, qemu_2.10, qemu_2.11, qemu_2.12, qemu_3.0].
 
 3. Go to the absolute_path of different version qemu, then launch VM with different version qemu, note: we need to add "disable-modern=false" to enable virtio 1.0::
 
@@ -88,6 +88,6 @@  Test Case 2: PVP test with virtio 1.0 mergeable path
     testpmd>set fwd mac
     testpmd>start
 
-4. Send packet by packet generator with different packet sizes(68,128,256,512,1024,1280,1518),repeat below command to get throughput 10 times,then calculate the average throughput::
+4. Send packets by traffic generator with different packet sizes (68,128,256,512,1024,1280,1518), repeat below command 10 times to get throughput, then calculate the average throughput::
 
     testpmd>show port stats all
\ No newline at end of file
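
A hedged helper for the "repeat 10 times, then average" steps above, assuming the Rx-pps figures from ten consecutive "show port stats all" outputs were copied into rx_pps.txt, one value per line::

    awk '{ sum += $1 } END { printf "average: %.1f pps\n", sum / NR }' rx_pps.txt
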
diff --git a/test_plans/pvp_multi_paths_performance_test_plan.rst b/test_plans/pvp_multi_paths_performance_test_plan.rst
index 0929e5ef..7217d75b 100644
--- a/test_plans/pvp_multi_paths_performance_test_plan.rst
+++ b/test_plans/pvp_multi_paths_performance_test_plan.rst
@@ -44,7 +44,7 @@  Test Case 1: pvp test with virtio 1.1 mergeable path
     >set fwd mac
     >start
 
-3. Send packet with packet generator with different packet size,includes [64, 128, 256, 512, 1024, 1518], check the throughput with below command::
+3. Send packets with traffic generator with different packet sizes, including [64, 128, 256, 512, 1024, 1518], check the throughput with below command::
 
     testpmd>show port stats all
 
@@ -68,7 +68,7 @@  Test Case 2: pvp test with virtio 1.1 non-mergeable path
     >set fwd mac
     >start
 
-3. Send packet with packet generator with different packet size,includes [64, 128, 256, 512, 1024, 1518], check the throughput with below command::
+3. Send packets with traffic generator with different packet sizes, including [64, 128, 256, 512, 1024, 1518], check the throughput with below command::
 
     testpmd>show port stats all
 
@@ -92,7 +92,7 @@  Test Case 3: pvp test with inorder mergeable path
     >set fwd mac
     >start
 
-3. Send packet with packet generator with different packet size,includes [64, 128, 256, 512, 1024, 1518], check the throughput with below command::
+3. Send packets with traffic generator with different packet sizes, including [64, 128, 256, 512, 1024, 1518], check the throughput with below command::
 
     testpmd>show port stats all
 
@@ -116,7 +116,7 @@  Test Case 4: pvp test with inorder non-mergeable path
     >set fwd mac
     >start
 
-3. Send packet with packet generator with different packet size,includes [64, 128, 256, 512, 1024, 1518], check the throughput with below command::
+3. Send packets with traffic generator with different packet sizes, including [64, 128, 256, 512, 1024, 1518], check the throughput with below command::
 
     testpmd>show port stats all
 
@@ -140,7 +140,7 @@  Test Case 5: pvp test with mergeable path
     >set fwd mac
     >start
 
-3. Send packet with packet generator with different packet size,includes [64, 128, 256, 512, 1024, 1518], check the throughput with below command::
+3. Send packets with traffic generator with different packet sizes, including [64, 128, 256, 512, 1024, 1518], check the throughput with below command::
 
     testpmd>show port stats all
 
@@ -164,7 +164,7 @@  Test Case 6: pvp test with non-mergeable path
     >set fwd mac
     >start
 
-3. Send packet with packet generator with different packet size,includes [64, 128, 256, 512, 1024, 1518], check the throughput with below command::
+3. Send packets with traffic generator with different packet sizes, including [64, 128, 256, 512, 1024, 1518], check the throughput with below command::
 
     testpmd>show port stats all
 
@@ -188,7 +188,7 @@  Test Case 7: pvp test with vectorized_rx path
     >set fwd mac
     >start
 
-3. Send packet with packet generator with different packet size,includes [64, 128, 256, 512, 1024, 1518], check the throughput with below command::
+3. Send packets with traffic generator with different packet sizes, including [64, 128, 256, 512, 1024, 1518], check the throughput with below command::
 
     testpmd>show port stats all
 
@@ -212,7 +212,7 @@  Test Case 8: pvp test with virtio 1.1 inorder mergeable path
     >set fwd mac
     >start
 
-3. Send packet with packet generator with different packet size,includes [64, 128, 256, 512, 1024, 1518], check the throughput with below command::
+3. Send packets with traffic generator with different packet sizes, including [64, 128, 256, 512, 1024, 1518], check the throughput with below command::
 
     testpmd>show port stats all
 
@@ -236,7 +236,7 @@  Test Case 9: pvp test with virtio 1.1 inorder non-mergeable path
     >set fwd mac
     >start
 
-3. Send packet with packet generator with different packet size,includes [64, 128, 256, 512, 1024, 1518], check the throughput with below command::
+3. Send packets with traffic generator with different packet sizes, including [64, 128, 256, 512, 1024, 1518], check the throughput with below command::
 
     testpmd>show port stats all
 
@@ -259,6 +259,6 @@  Test Case 10: pvp test with virtio 1.1 vectorized path
     >set fwd mac
     >start
 
-3. Send packet with packet generator with different packet size,includes [64, 128, 256, 512, 1024, 1518], check the throughput with below command::
+3. Send packets with traffic generator with different packet sizes, including [64, 128, 256, 512, 1024, 1518], check the throughput with below command::
 
     testpmd>show port stats all
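
The ten cases above differ only in the virtio-user vdev flags that select the path; a hedged sketch of the launch line (mac, socket path and cores are illustrative), here with mrg_rxbuf=1,in_order=1 selecting the split-ring inorder mergeable path::

    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 5-6 -n 4 --no-pci --file-prefix=virtio \
        --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/sock0,mrg_rxbuf=1,in_order=1,queues=1 \
        -- -i --nb-cores=1
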
diff --git a/test_plans/pvp_multi_paths_vhost_single_core_performance_test_plan.rst b/test_plans/pvp_multi_paths_vhost_single_core_performance_test_plan.rst
index 7ba4a470..af65ada7 100644
--- a/test_plans/pvp_multi_paths_vhost_single_core_performance_test_plan.rst
+++ b/test_plans/pvp_multi_paths_vhost_single_core_performance_test_plan.rst
@@ -34,7 +34,7 @@  Test Case 1: vhost single core performance test with virtio 1.1 mergeable path
     >set fwd io
     >start
 
-3. Send packet with packet generator with different packet size, check the throughput.
+3. Send packets with traffic generator with different packet sizes, check the throughput.
 
 Test Case 2: vhost single core performance test with virtio 1.1 non-mergeable path
 ==================================================================================
@@ -55,7 +55,7 @@  Test Case 2: vhost single core performance test with virtio 1.1 non-mergeable pa
     >set fwd io
     >start
 
-3. Send packet with packet generator with different packet size, check the throughput.
+3. Send packets with traffic generator with different packet sizes, check the throughput.
 
 Test Case 3: vhost single core performance test with inorder mergeable path
 ===========================================================================
@@ -76,7 +76,7 @@  Test Case 3: vhost single core performance test with inorder mergeable path
     >set fwd io
     >start
 
-3. Send packet with packet generator with different packet size, check the throughput.
+3. Send packets with traffic generator with different packet sizes, check the throughput.
 
 Test Case 4: vhost single core performance test with inorder non-mergeable path
 ===============================================================================
@@ -97,7 +97,7 @@  Test Case 4: vhost single core performance test with inorder non-mergeable path
     >set fwd io
     >start
 
-3. Send packet with packet generator with different packet size, check the throughput.
+3. Send packets with traffic generator with different packet sizes, check the throughput.
 
 Test Case 5: vhost single core performance test with mergeable path
 ===================================================================
@@ -118,7 +118,7 @@  Test Case 5: vhost single core performance test with mergeable path
     >set fwd io
     >start
 
-3. Send packet with packet generator with different packet size, check the throughput.
+3. Send packets with traffic generator with different packet sizes, check the throughput.
 
 Test Case 6: vhost single core performance test with non-mergeable path
 =======================================================================
@@ -139,7 +139,7 @@  Test Case 6: vhost single core performance test with non-mergeable path
     >set fwd io
     >start
 
-3. Send packet with packet generator with different packet size, check the throughput.
+3. Send packets with traffic generator with different packet sizes, check the throughput.
 
 Test Case 7: vhost single core performance test with vectorized_rx path
 =======================================================================
@@ -160,7 +160,7 @@  Test Case 7: vhost single core performance test with vectorized_rx path
     >set fwd io
     >start
 
-3. Send packet with packet generator with different packet size, check the throughput.
+3. Send packets with traffic generator with different packet sizes, check the throughput.
 
 Test Case 8: vhost single core performance test with virtio 1.1 inorder mergeable path
 ======================================================================================
@@ -181,7 +181,7 @@  Test Case 8: vhost single core performance test with virtio 1.1 inorder mergeabl
     >set fwd io
     >start
 
-3. Send packet with packet generator with different packet size, check the throughput.
+3. Send packets with traffic generator with different packet sizes, check the throughput.
 
 Test Case 9: vhost single core performance test with virtio 1.1 inorder non-mergeable path
 ==========================================================================================
@@ -202,7 +202,7 @@  Test Case 9: vhost single core performance test with virtio 1.1 inorder non-merg
     >set fwd io
     >start
 
-3. Send packet with packet generator with different packet size, check the throughput.
+3. Send packets with traffic generator with different packet sizes, check the throughput.
 
 Test Case 10: vhost single core performance test with virtio 1.1 vectorized path
 ================================================================================
@@ -223,4 +223,4 @@  Test Case 10: vhost single core performance test with virtio 1.1 vectorized path
     >set fwd io
     >start
 
-3. Send packet with packet generator with different packet size, check the throughput.
\ No newline at end of file
+3. Send packets with traffic generator with different packet sizes, check the throughput.
\ No newline at end of file
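
For context, the vhost side driving these single-core measurements is a vhost-user vdev with io forwarding; a minimal hedged sketch (socket path and cores are illustrative)::

    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-3 -n 4 --no-pci --file-prefix=vhost \
        --vdev 'net_vhost0,iface=/tmp/sock0,queues=1' -- -i --nb-cores=1
    testpmd>set fwd io
    testpmd>start
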
diff --git a/test_plans/pvp_multi_paths_virtio_single_core_performance_test_plan.rst b/test_plans/pvp_multi_paths_virtio_single_core_performance_test_plan.rst
index 8cb36668..61ee9483 100644
--- a/test_plans/pvp_multi_paths_virtio_single_core_performance_test_plan.rst
+++ b/test_plans/pvp_multi_paths_virtio_single_core_performance_test_plan.rst
@@ -33,7 +33,7 @@  Test Case 1: virtio single core performance test with virtio 1.1 mergeable path
     >set fwd mac
     >start
 
-3. Send packet with packet generator with different packet size, check the throughput.
+3. Send packets with traffic generator with different packet sizes, check the throughput.
 
 Test Case 2: virtio single core performance test with virtio 1.1 non-mergeable path
 ===================================================================================
@@ -54,7 +54,7 @@  Test Case 2: virtio single core performance test with virtio 1.1 non-mergeable p
     >set fwd mac
     >start
 
-3. Send packet with packet generator with different packet size, check the throughput.
+3. Send packets with traffic generator with different packet sizes, check the throughput.
 
 Test Case 3: virtio single core performance test with inorder mergeable path
 ============================================================================
@@ -75,7 +75,7 @@  Test Case 3: virtio single core performance test with inorder mergeable path
     >set fwd mac
     >start
 
-3. Send packet with packet generator with different packet size, check the throughput.
+3. Send packets with traffic generator with different packet sizes, check the throughput.
 
 Test Case 4: virtio single core performance test with inorder non-mergeable path
 ================================================================================
@@ -96,7 +96,7 @@  Test Case 4: virtio single core performance test with inorder non-mergeable path
     >set fwd mac
     >start
 
-3. Send packet with packet generator with different packet size, check the throughput.
+3. Send packets with traffic generator with different packet sizes, check the throughput.
 
 Test Case 5: virtio single core performance test with mergeable path
 ====================================================================
@@ -117,7 +117,7 @@  Test Case 5: virtio single core performance test with mergeable path
     >set fwd mac
     >start
 
-3. Send packet with packet generator with different packet size, check the throughput.
+3. Send packets with traffic generator with different packet sizes, check the throughput.
 
 Test Case 6: virtio single core performance test with non-mergeable path
 ========================================================================
@@ -138,7 +138,7 @@  Test Case 6: virtio single core performance test with non-mergeable path
     >set fwd mac
     >start
 
-3. Send packet with packet generator with different packet size, check the throughput.
+3. Send packets with traffic generator with different packet sizes, check the throughput.
 
 Test Case 7: virtio single core performance test with vectorized_rx path
 ========================================================================
@@ -159,7 +159,7 @@  Test Case 7: virtio single core performance test with vectorized_rx path
     >set fwd mac
     >start
 
-3. Send packet with packet generator with different packet size, check the throughput.
+3. Send packets with traffic generator with different packet sizes, check the throughput.
 
 Test Case 8: virtio single core performance test with virtio 1.1 inorder mergeable path
 =======================================================================================
@@ -180,7 +180,7 @@  Test Case 8: virtio single core performance test with virtio 1.1 inorder mergeab
     >set fwd mac
     >start
 
-3. Send packet with packet generator with different packet size, check the throughput.
+3. Send packets with traffic generator with different packet sizes, check the throughput.
 
 Test Case 9: virtio single core performance test with virtio 1.1 inorder non-mergeable path
 ===========================================================================================
@@ -201,7 +201,7 @@  Test Case 9: virtio single core performance test with virtio 1.1 inorder non-mer
     >set fwd mac
     >start
 
-3. Send packet with packet generator with different packet size, check the throughput.
+3. Send packets with traffic generator with different packet sizes, check the throughput.
 
 Test Case 10: virtio single core performance test with virtio 1.1 vectorized path
 =================================================================================
@@ -222,4 +222,4 @@  Test Case 10: virtio single core performance test with virtio 1.1 vectorized pat
     >set fwd mac
     >start
 
-3. Send packet with packet generator with different packet size, check the throughput.
\ No newline at end of file
+3. Send packets with traffic generator with different packet sizes, check the throughput.
\ No newline at end of file
diff --git a/test_plans/pvp_qemu_multi_paths_port_restart_test_plan.rst b/test_plans/pvp_qemu_multi_paths_port_restart_test_plan.rst
index 017ea5f0..553a0504 100644
--- a/test_plans/pvp_qemu_multi_paths_port_restart_test_plan.rst
+++ b/test_plans/pvp_qemu_multi_paths_port_restart_test_plan.rst
@@ -49,7 +49,7 @@  Test Case 1: pvp test with virtio 0.95 mergeable path
     testpmd>set fwd mac
     testpmd>start
 
-4. Send packets by packet generator with different packet sizes(64,128,256,512,1024,1280,1518), show throughput with below command::
+4. Send packets by traffic generator with different packet sizes (64,128,256,512,1024,1280,1518), show throughput with below command::
 
     testpmd>show port stats all
 
@@ -95,7 +95,7 @@  Test Case 2: pvp test with virtio 0.95 normal path
     testpmd>set fwd mac
     testpmd>start
 
-4. Send packets by packet generator with different packet sizes(64,128,256,512,1024,1280,1518), show throughput with below command::
+4. Send packets by traffic generator with different packet sizes (64,128,256,512,1024,1280,1518), show throughput with below command::
 
     testpmd>show port stats all
 
@@ -141,7 +141,7 @@  Test Case 3: pvp test with virtio 0.95 vrctor_rx path
     testpmd>set fwd mac
     testpmd>start
 
-4. Send packets by packet generator with different packet sizes(64,128,256,512,1024,1280,1518), show throughput with below command::
+4. Send packets by traffic generator with different packet sizes (64,128,256,512,1024,1280,1518), show throughput with below command::
 
     testpmd>show port stats all
 
@@ -187,7 +187,7 @@  Test Case 4: pvp test with virtio 1.0 mergeable path
     testpmd>set fwd mac
     testpmd>start
 
-4. Send packets by packet generator with different packet sizes(64,128,256,512,1024,1280,1518), show throughput with below command::
+4. Send packets by traffic generator with different packet sizes (64,128,256,512,1024,1280,1518), show throughput with below command::
 
     testpmd>show port stats all
 
@@ -233,7 +233,7 @@  Test Case 5: pvp test with virtio 1.0 normal path
     testpmd>set fwd mac
     testpmd>start
 
-4. Send packets by packet generator with different packet sizes(64,128,256,512,1024,1280,1518), show throughput with below command::
+4. Send packets by traffic generator with different packet sizes (64,128,256,512,1024,1280,1518), show throughput with below command::
 
     testpmd>show port stats all
 
@@ -279,7 +279,7 @@  Test Case 6: pvp test with virtio 1.0 vrctor_rx path
     testpmd>set fwd mac
     testpmd>start
 
-4. Send packets by packet generator with different packet sizes(64,128,256,512,1024,1280,1518), show throughput with below command::
+4. Send packets by traffic generator with different packet sizes (64,128,256,512,1024,1280,1518), show throughput with below command::
 
     testpmd>show port stats all
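
A hedged fragment of the QEMU arguments these cases vary (values are illustrative): disable-modern=true keeps the device on virtio 0.95, disable-modern=false enables virtio 1.0, and mrg_rxbuf toggles the mergeable path::

    -chardev socket,id=char0,path=/tmp/vhost-net \
    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on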
 
diff --git a/test_plans/pvp_share_lib_test_plan.rst b/test_plans/pvp_share_lib_test_plan.rst
index 936480bf..2af4ee76 100644
--- a/test_plans/pvp_share_lib_test_plan.rst
+++ b/test_plans/pvp_share_lib_test_plan.rst
@@ -37,7 +37,7 @@  Test Case1: Vhost/virtio-user pvp share lib test with 82599
     --no-pci --file-prefix=virtio  --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net -- -i
     testpmd>start
 
-6. Send traffic by packet generator, check the throughput with below command::
+6. Send traffic by traffic generator, check the throughput with below command::
 
     testpmd>show port stats all
 
diff --git a/test_plans/pvp_vhost_dsa_test_plan.rst b/test_plans/pvp_vhost_dsa_test_plan.rst
index 71fb3a29..ab114d35 100644
--- a/test_plans/pvp_vhost_dsa_test_plan.rst
+++ b/test_plans/pvp_vhost_dsa_test_plan.rst
@@ -56,10 +56,10 @@  General set up
 	CC=gcc meson --werror -Denable_kmods=True -Dlibdir=lib -Dexamples=all --default-library=static x86_64-native-linuxapp-gcc
 	ninja -C x86_64-native-linuxapp-gcc -j 110
 
-2. Get the PCI device ID and DSA device ID of DUT, for example, 0000:4f:00.1 is PCI device ID, 0000:6a:01.0 - 0000:f6:01.0 are DSA device IDs::
+2. Get the PCI device ID and DSA device ID of SUT, for example, 0000:4f:00.1 is PCI device ID, 0000:6a:01.0 - 0000:f6:01.0 are DSA device IDs::
 
 	<dpdk dir># ./usertools/dpdk-devbind.py -s
-	
+
 	Network devices using kernel driver
 	===================================
 	0000:4f:00.1 'Ethernet Controller E810-C for QSFP 1592' drv=ice unused=vfio-pci
@@ -82,14 +82,14 @@  Common steps
 ------------
 1. Bind 1 NIC port to vfio-pci::
 
-	<dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci <DUT port pci device id>
+	<dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci <SUT port pci device id>
 
 	For example:
 	<dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci 0000:00:4f.1
 
 2. Bind DSA devices to DPDK vfio-pci driver::
 
-	<dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci <DUT port DSA device id>
+	<dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci <SUT port DSA device id>
 
 	For example, bind 2 DMA devices to vfio-pci driver:
 	<dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci 0000:e7:01.0 0000:ec:01.0
@@ -108,7 +108,7 @@  Common steps
 
 .. note::
 
-	Better to reset WQ when need operate DSA devices that bound to idxd drvier: 
+	It is better to reset the WQ when operating DSA devices that are bound to the idxd driver:
 	<dpdk dir># ./drivers/dma/idxd/dpdk_idxd_cfg.py --reset <numDevices * 2>
 	You can check it by 'ls /dev/dsa'
 	numDevices: number of devices, where 0<=numDevices<=7, corresponding to 0000:6a:01.0 - 0000:f6:01.0
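
Putting the note above into one concrete sequence (device number 0 and a queue count of 8 are illustrative)::

	<dpdk dir># ./drivers/dma/idxd/dpdk_idxd_cfg.py --reset 0
	<dpdk dir># ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 0
	<dpdk dir># ls /dev/dsa    # the configured work queues should now be listed
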
@@ -157,7 +157,7 @@  Both iova as VA and PA mode, 'mac fwd' and 'csum fwd' have been tested.
 	testpmd>set fwd mac
 	testpmd>start
 
-4. Send imix packets [64,1518] from packet generator, check the throughput can get expected data::
+4. Send imix packets [64,1518] from traffic generator, check the throughput can reach the expected value::
 
 	testpmd>show port stats all
 
@@ -228,7 +228,7 @@  Test Case 2: PVP split ring all path multi-queues vhost async enqueue with 1:1 m
 -----------------------------------------------------------------------------------------------------------------------------------
 This case uses testpmd and Traffic Generator(For example, Trex) to test performance of split ring in each virtio path with multi-queues
 when vhost uses the asynchronous enqueue operations with dsa dpdk driver and the mapping between vrings and dsa virtual channels is 1:1.
-Both iova as VA and PA mode, 'mac fwd' and 'csum fwd' have been tested. 
+Both iova as VA and PA mode, 'mac fwd' and 'csum fwd' have been tested.
 
 1. Bind 8 dsa device(6a:01.0-f6:01.0) and one nic port(4f:00.1) to vfio-pci like common step 1-2::
 
@@ -252,7 +252,7 @@  Both iova as VA and PA mode, 'mac fwd' and 'csum fwd' have been tested.
 	testpmd>set fwd mac
 	testpmd>start
 
-4. Send imix packets [64,1518] from packet generator, check the throughput can get expected data::
+4. Send imix packets [64,1518] from traffic generator, check the throughput can reach the expected value::
 
 	testpmd>show port stats all
 
@@ -324,7 +324,7 @@  Test Case 3: PVP split ring all path multi-queues vhost enqueue operations with
 -----------------------------------------------------------------------------------------------------------------------------------------
 This case uses testpmd and Traffic Generator(For example, Trex) to test performance of split ring in each virtio path with multi-queues
 when vhost uses the asynchronous enqueue operations with dsa dpdk driver and the mapping between vrings and dsa virtual channels is M:1.
-Both iova as VA and PA mode, 'mac fwd' and 'csum fwd' have been tested. 
+Both iova as VA and PA mode, 'mac fwd' and 'csum fwd' have been tested.
 
 1. Bind 1 dsa device and one nic port to vfio-pci like common step 1-2::
 
@@ -347,7 +347,7 @@  Both iova as VA and PA mode, 'mac fwd' and 'csum fwd' have been tested.
 	testpmd>set fwd mac
 	testpmd>start
 
-4. Send imix packets [64,1518] from packet generator, check the throughput can get expected data::
+4. Send imix packets [64,1518] from traffic generator, check the throughput can reach the expected value::
 
 	testpmd>show port stats all
 
@@ -431,7 +431,7 @@  Both iova as VA and PA mode, 'mac fwd' and 'csum fwd' have been tested.
 	testpmd>set fwd mac
 	testpmd>start
 
-4. Send imix packets [64,1518] from packet generator, check the throughput can get expected data::
+4. Send imix packets [64,1518] from traffic generator, check the throughput can reach the expected value::
 
 	testpmd>show port stats all
 
@@ -507,7 +507,7 @@  Both iova as VA and PA mode, 'mac fwd' and 'csum fwd' have been tested.
 1. Bind 8 dsa device and one nic port to vfio-pci like common step 1-2::
 
 	<dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci 4f:00.1
-	<dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci 6a:01.0 6f:01.0 74:01.0 79:01.0 e7:01.0 ec:01.0 f1:01.0 f6:01.0	
+	<dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci 6a:01.0 6f:01.0 74:01.0 79:01.0 e7:01.0 ec:01.0 f1:01.0 f6:01.0
 
 2. Launch vhost by below command::
 
@@ -526,7 +526,7 @@  Both iova as VA and PA mode, 'mac fwd' and 'csum fwd' have been tested.
 	testpmd>set fwd mac
 	testpmd>start
 
-4. Send imix packets [64,1518] from packet generator, check the throughput can get expected data::
+4. Send imix packets [64,1518] from traffic generator, check the throughput can reach the expected value::
 
 	testpmd>show port stats all
 
@@ -630,7 +630,7 @@  with dsa dpdk driver and if the vhost-user can work well when the queue number d
 	testpmd>set fwd mac
 	testpmd>start
 
-4. Send imix packets[64,1518] from packet generator with random ip, check perforamnce can get target.
+4. Send imix packets [64,1518] from traffic generator with random ip, check the performance can reach the target.
 
 5. Stop vhost port, check vhost RX and TX directions both have packets in 2 queues from the vhost log.
 
@@ -706,8 +706,8 @@  with dsa dpdk driver and if the vhost-user can work well when the queue number d
 
 Test Case 7: PVP packed ring all path vhost enqueue operations with 1:1 mapping between vrings and dsa dpdk driver channels
 ----------------------------------------------------------------------------------------------------------------------------
-This case uses testpmd and Traffic Generator(For example, Trex) to test performance of packed ring in each virtio path with 1 core and 
-1 queue when vhost uses the asynchronous enqueue operations with dsa dpdk driver and the mapping between vrings and dsa virtual channels 
+This case uses testpmd and Traffic Generator (for example, Trex) to test performance of packed ring in each virtio path with 1 core and
+1 queue when vhost uses the asynchronous enqueue operations with dsa dpdk driver and the mapping between vrings and dsa virtual channels
 is 1:1. Both iova as VA and PA mode, 'mac fwd' and 'csum fwd' have been tested.
 
 1. Bind one dsa device and one nic port to vfio-pci like common step 1-2::
@@ -731,7 +731,7 @@  is 1:1. Both iova as VA and PA mode, 'mac fwd' and 'csum fwd' have been tested.
 	testpmd>set fwd mac
 	testpmd>start
 
-4. Send imix packets [64,1518] from packet generator, check the throughput can get expected data::
+4. Send imix packets [64,1518] from traffic generator, check the throughput can reach the expected value::
 
 	testpmd>show port stats all
 
@@ -823,7 +823,7 @@  Both iova as VA and PA mode, 'mac fwd' and 'csum fwd' have been tested.
 	testpmd>set fwd mac
 	testpmd>start
 
-4. Send imix packets [64,1518] from packet generator, check the throughput can get expected data::
+4. Send imix packets [64,1518] from traffic generator, check the throughput can reach the expected value::
 
 	testpmd>show port stats all
 
@@ -925,7 +925,7 @@  Both iova as VA and PA mode, 'mac fwd' and 'csum fwd' have been tested.
 	testpmd>set fwd mac
 	testpmd>start
 
-4. Send imix packets [64,1518] from packet generator, check the throughput can get expected data::
+4. Send imix packets [64,1518] from traffic generator, check the throughput can reach the expected value::
 
 	testpmd>show port stats all
 
@@ -1017,7 +1017,7 @@  Both iova as VA and PA mode, 'mac fwd' and 'csum fwd' have been tested.
 	testpmd>set fwd mac
 	testpmd>start
 
-4. Send imix packets [64,1518] from packet generator, check the throughput can get expected data::
+4. Send imix packets [64,1518] from traffic generator, check the throughput can reach the expected value::
 
 	testpmd>show port stats all
 
@@ -1120,7 +1120,7 @@  Both iova as VA and PA mode, 'mac fwd' and 'csum fwd' have been tested.
 	testpmd>set fwd mac
 	testpmd>start
 
-4. Send imix packets [64,1518] from packet generator, check the throughput can get expected data::
+4. Send imix packets [64,1518] from traffic generator, check the throughput can reach the expected value::
 
 	testpmd>show port stats all
 
@@ -1232,7 +1232,7 @@  with dsa dpdk driver and if the vhost-user can work well when the queue number d
 	testpmd>set fwd mac
 	testpmd>start
 
+4. Send imix packets [64,1518] from traffic generator with random ip, check the performance can reach the target.
+4. Send imix packets[64,1518] from traffic generator with random ip, check perforamnce can get target.
 
 5. Stop vhost port, check vhost RX and TX directions both have packets in 2 queues from the vhost log.
 
@@ -1245,7 +1245,7 @@  with dsa dpdk driver and if the vhost-user can work well when the queue number d
 	testpmd>set fwd mac
 	testpmd>start
 
+7. Send imix packets [64,1518] from traffic generator with random ip, check the performance can reach the target.
+7. Send imix packets[64,1518] from traffic generator with random ip, check perforamnce can get target.
 
 8. Stop vhost port, check vhost RX and TX directions both have packets in 4 queues from the vhost log.
 
@@ -1258,7 +1258,7 @@  with dsa dpdk driver and if the vhost-user can work well when the queue number d
 	testpmd>set fwd mac
 	testpmd>start
 
+10. Send imix packets [64,1518] from traffic generator with random ip, check the performance can reach the target.
+10. Send imix packets[64,1518] from traffic generator with random ip, check perforamnce can get target.
 
 11. Stop vhost port, check vhost RX and TX directions both have packets in 8 queues from the vhost log.
 
@@ -1341,7 +1341,7 @@  Both iova as VA mode, 'mac fwd' and 'csum fwd' have been tested.
 	testpmd>set fwd mac
 	testpmd>start
 
-4. Send imix packets [64,1518] from packet generator, check the throughput can get expected data::
+4. Send imix packets [64,1518] from traffic generator, check the throughput can reach the expected value::
 
 	testpmd>show port stats all
 
@@ -1408,8 +1408,8 @@  Both iova as VA mode, 'mac fwd' and 'csum fwd' have been tested.
 	<dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci 4f:00.1
 
 	#ls /dev/dsa, check the WQ configuration, reset if present
-	<dpdk dir># ./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0 
-	<dpdk dir># ./usertools/dpdk-devbind.py -b idxd 6a:01.0 6f:01.0 
+	<dpdk dir># ./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0
+	<dpdk dir># ./usertools/dpdk-devbind.py -b idxd 6a:01.0 6f:01.0
 	<dpdk dir># ./<dpdk build dir>/drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 0
 	<dpdk dir># ./<dpdk build dir>/drivers/dma/idxd/dpdk_idxd_cfg.py -q 4 2
 	<dpdk dir># ./<dpdk build dir>/drivers/dma/idxd/dpdk_idxd_cfg.py -q 4 4
@@ -1432,7 +1432,7 @@  Both iova as VA mode, 'mac fwd' and 'csum fwd' have been tested.
 	testpmd>set fwd mac
 	testpmd>start
 
-4. Send imix packets [64,1518] from packet generator, check the throughput can get expected data::
+4. Send imix packets [64,1518] from traffic generator, check the throughput can reach the expected value::
 
 	testpmd>show port stats all
 
@@ -1521,7 +1521,7 @@  Both iova as VA mode, 'mac fwd' and 'csum fwd' have been tested.
 	testpmd>set fwd mac
 	testpmd>start
 
-4. Send imix packets [64,1518] from packet generator, check the throughput can get expected data::
+4. Send imix packets [64,1518] from traffic generator, check the throughput can reach the expected value::
 
 	testpmd>show port stats all
 
@@ -1612,7 +1612,7 @@  Both iova as VA mode, 'mac fwd' and 'csum fwd' have been tested.
 	testpmd>set fwd mac
 	testpmd>start
 
-4. Send imix packets [64,1518] from packet generator, check the throughput can get expected data::
+4. Send imix packets [64,1518] from traffic generator, check the throughput can reach the expected value::
 
 	testpmd>show port stats all
 
@@ -1708,7 +1708,7 @@  Both iova as VA mode, 'mac fwd' and 'csum fwd' have been tested.
 	testpmd>set fwd mac
 	testpmd>start
 
-4. Send imix packets [64,1518] from packet generator, check the throughput can get expected data::
+4. Send imix packets [64,1518] from traffic generator, check the throughput can reach the expected value::
 
 	testpmd>show port stats all
 
@@ -1813,7 +1813,7 @@  with dsa kernel driver and if the vhost-user can work well when the queue number
 	testpmd>set fwd mac
 	testpmd>start
 
+4. Send imix packets [64,1518] from traffic generator with random ip, check the performance can reach the target.
+4. Send imix packets[64,1518] from traffic generator with random ip, check perforamnce can get target.
 
 5. Stop vhost port, check vhost RX and TX directions both have packets in 2 queues from the vhost log.
 
@@ -1911,7 +1911,7 @@  Both iova as VA mode, 'mac fwd' and 'csum fwd' have been tested.
 	testpmd>set fwd mac
 	testpmd>start
 
-4. Send imix packets [64,1518] from packet generator, check the throughput can get expected data::
+4. Send imix packets [64,1518] from traffic generator, check the throughput can reach the expected value::
 
 	testpmd>show port stats all
 
@@ -2010,7 +2010,7 @@  Both iova as VA mode, 'mac fwd' and 'csum fwd' have been tested.
 	testpmd>set fwd mac
 	testpmd>start
 
-4. Send imix packets [64,1518] from packet generator, check the throughput can get expected data::
+4. Send imix packets [64,1518] from traffic generator, check the throughput can reach the expected value::
 
 	testpmd>show port stats all
 
@@ -2107,7 +2107,7 @@  Both iova as VA mode, 'mac fwd' and 'csum fwd' have been tested.
 	testpmd>set fwd mac
 	testpmd>start
 
-4. Send imix packets [64,1518] from packet generator, check the throughput can get expected data::
+4. Send imix packets [64,1518] from traffic generator, check the throughput can reach the expected value::
 
 	testpmd>show port stats all
 
@@ -2206,7 +2206,7 @@  Both iova as VA mode, 'mac fwd' and 'csum fwd' have been tested.
 	testpmd>set fwd mac
 	testpmd>start
 
-4. Send imix packets [64,1518] from packet generator, check the throughput can get expected data::
+4. Send imix packets [64,1518] from traffic generator, check the throughput can reach the expected value::
 
 	testpmd>show port stats all
 
@@ -2310,7 +2310,7 @@  Both iova as VA mode, 'mac fwd' and 'csum fwd' have been tested.
 	testpmd>set fwd mac
 	testpmd>start
 
-4. Send imix packets [64,1518] from packet generator, check the throughput can get expected data::
+4. Send imix packets [64,1518] from traffic generator, check the throughput can reach the expected value::
 
 	testpmd>show port stats all
 
@@ -2423,7 +2423,7 @@  operations with dsa kernel driver and if the vhost-user can work well when the q
 	testpmd>set fwd mac
 	testpmd>start
 
+4. Send imix packets [64,1518] from traffic generator with random ip, check the performance can reach the target.
+4. Send imix packets[64,1518] from traffic generator with random ip, check perforamnce can get target.
 
 5. Stop vhost port, check vhost RX and TX directions both have packets in 2 queues from the vhost log.
 
@@ -2520,7 +2520,7 @@  Both iova as VA mode have beed tested.
 	testpmd>set fwd mac
 	testpmd>start
 
+4. Send imix packets from traffic generator with random ip, check the performance can reach the target.
+4. Send imix packets from traffic generator with random ip, check perforamnce can get target.
 
 5. Stop vhost port, check vhost RX and TX directions both have packets in 2 queues from the vhost log.
 
@@ -2532,7 +2532,7 @@  Both iova as VA mode have beed tested.
 	testpmd>set fwd mac
 	testpmd>start
 
+7. Send imix packets from traffic generator with random ip, check the performance can reach the target.
+7. Send imix packets from traffic generator with random ip, check perforamnce can get target.
 
 8. Stop vhost port, check vhost RX and TX directions both have packets in 4 queues from the vhost log.
 
@@ -2544,7 +2544,7 @@  Both iova as VA mode have beed tested.
 	testpmd>set fwd mac
 	testpmd>start
 
+10. Send imix packets from traffic generator with random ip, check the performance can reach the target.
+10. Send imix packets from traffic generator with random ip, check perforamnce can get target.
 
 11. Stop vhost port, check vhost RX and TX directions both have packets in 8 queues from the vhost log.
 
@@ -2556,7 +2556,7 @@  Both iova as VA mode have beed tested.
 	testpmd>set fwd mac
 	testpmd>start
 
+13. Send imix packets from traffic generator with random ip, check the performance can reach the target.
+13. Send imix packets from traffic generator with random ip, check perforamnce can get target.
 
 14. Stop vhost port, check vhost RX and TX directions both have packets in 8 queues from the vhost log.
 
diff --git a/test_plans/pvp_vhost_user_reconnect_test_plan.rst b/test_plans/pvp_vhost_user_reconnect_test_plan.rst
index 6877aec4..f214cf7e 100644
--- a/test_plans/pvp_vhost_user_reconnect_test_plan.rst
+++ b/test_plans/pvp_vhost_user_reconnect_test_plan.rst
@@ -17,8 +17,8 @@  Vhost-user uses Unix domain sockets for passing messages. This means the DPDK vh
 * DPDK vhost-user acts as the client:
   Unlike the server mode, this mode doesn't create the socket file; it just tries to connect to the server (which is responsible for creating the file instead).
   When the DPDK vhost-user application restarts, DPDK vhost-user will try to connect to the server again. This is how the "reconnect" feature works.
-  When DPDK vhost-user restarts from an normal or abnormal exit (such as a crash), the client mode allows DPDK to establish the connection again. 
-  Also, when DPDK vhost-user acts as the client, it will keep trying to reconnect to the server (QEMU) until it succeeds. 
+  When DPDK vhost-user restarts from a normal or abnormal exit (such as a crash), the client mode allows DPDK to establish the connection again.
+  Also, when DPDK vhost-user acts as the client, it will keep trying to reconnect to the server (QEMU) until it succeeds.
   This is useful in two cases:
 
     * When QEMU is not started yet.
@@ -56,7 +56,7 @@  Flow: TG--> NIC --> Vhost --> Virtio --> Vhost--> NIC--> TG
     testpmd>set fwd mac
     testpmd>start
 
-4. Send packets by packet generator, check if packets can be RX/TX with virtio-pmd::
+4. Send packets by traffic generator, check if packets can be RX/TX with virtio-pmd::
 
     testpmd>show port stats all
 
@@ -67,7 +67,7 @@  Flow: TG--> NIC --> Vhost --> Virtio --> Vhost--> NIC--> TG
     testpmd>set fwd mac
     testpmd>start
 
-6. Check if the reconnection can work, still send packets by packet generator, check if packets can be RX/TX with virtio-pmd::
+6. Check if the reconnection can work, still send packets by traffic generator, check if packets can be RX/TX with virtio-pmd::
 
     testpmd>show port stats all
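
A hedged reminder of what enables the reconnect being tested: client=1 on the vhost vdev puts DPDK in vhost-user client mode, so QEMU owns the socket and DPDK keeps retrying the connection after either side restarts (paths and cores are illustrative)::

    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 --no-pci --file-prefix=vhost \
        --vdev 'net_vhost0,iface=/tmp/vhost-net,client=1,queues=1' -- -i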
 
@@ -101,7 +101,7 @@  Flow: TG--> NIC --> Vhost --> Virtio --> Vhost--> NIC--> TG
     testpmd>set fwd mac
     testpmd>start
 
-4. Send packets by packet generator, check if packets can be RX/TX with virtio-pmd::
+4. Send packets by traffic generator, check if packets can be RX/TX with virtio-pmd::
 
     testpmd>show port stats all
 
@@ -164,7 +164,7 @@  Test Case 4: vhost-user/virtio-pmd pvp split ring with multi VMs reconnect from
     testpmd>set fwd mac
     testpmd>start
 
-5. Send packets by packet generator, check if packets can be RX/TX with two virtio-pmds in two VMs::
+5. Send packets by traffic generator, check if packets can be RX/TX with two virtio-pmds in two VMs::
 
     testpmd>show port stats all
 
@@ -175,7 +175,7 @@  Test Case 4: vhost-user/virtio-pmd pvp split ring with multi VMs reconnect from
     testpmd>set fwd mac
     testpmd>start
 
-7. Check if the reconnection can work, still send packets by packet generator, check if packets can be RX/TX with two virtio-pmds in two VMs::
+7. Check if the reconnection can work, still send packets by traffic generator, check if packets can be RX/TX with two virtio-pmds in two VMs::
 
     testpmd>show port stats all
 
@@ -226,13 +226,13 @@  Test Case 5: vhost-user/virtio-pmd pvp split ring with multi VMs reconnect from
     testpmd>set fwd mac
     testpmd>start
 
-5. Send packets by packet generator, check if packets can be RX/TX with two virtio-pmds in two VMs::
+5. Send packets by traffic generator, check if packets can be RX/TX with two virtio-pmds in two VMs::
 
     testpmd>show port stats all
 
 6. Reboot the two VMs, rerun step2-step5.
 
-7. Check if the reconnection can work, still send packets by packet generator, check if packets can be RX/TX with two virtio-pmds in two VMs::
+7. Check if the reconnection can work, still send packets by traffic generator, check if packets can be RX/TX with two virtio-pmds in two VMs::
 
     testpmd>show port stats all
 
@@ -389,7 +389,7 @@  Flow: TG--> NIC --> Vhost --> Virtio --> Vhost--> NIC--> TG
     testpmd>set fwd mac
     testpmd>start
 
-4. Send packets by packet generator, check if packets can be RX/TX with virtio-pmd::
+4. Send packets by traffic generator, check if packets can be RX/TX with virtio-pmd::
 
     testpmd>show port stats all
 
@@ -400,7 +400,7 @@  Flow: TG--> NIC --> Vhost --> Virtio --> Vhost--> NIC--> TG
     testpmd>set fwd mac
     testpmd>start
 
-6. Check if the reconnection can work, still send packets by packet generator, check if packets can be RX/TX with virtio-pmd::
+6. Check if the reconnection can work, still send packets by traffic generator, check if packets can be RX/TX with virtio-pmd::
 
     testpmd>show port stats all
 
@@ -434,7 +434,7 @@  Flow: TG--> NIC --> Vhost --> Virtio --> Vhost--> NIC--> TG
     testpmd>set fwd mac
     testpmd>start
 
-4. Send packets by packet generator, check if packets can be RX/TX with virtio-pmd::
+4. Send packets by traffic generator, check if packets can be RX/TX with virtio-pmd::
 
     testpmd>show port stats all
 
@@ -497,7 +497,7 @@  Test Case 13: vhost-user/virtio-pmd pvp packed ring with multi VMs reconnect fro
     testpmd>set fwd mac
     testpmd>start
 
-5. Send packets by packet generator, check if packets can be RX/TX with two virtio-pmds in two VMs::
+5. Send packets by traffic generator, check if packets can be RX/TX with two virtio-pmds in two VMs::
 
     testpmd>show port stats all
 
@@ -508,7 +508,7 @@  Test Case 13: vhost-user/virtio-pmd pvp packed ring with multi VMs reconnect fro
     testpmd>set fwd mac
     testpmd>start
 
-7. Check if the reconnection can work, still send packets by packet generator, check if packets can be RX/TX with two virtio-pmds in two VMs::
+7. Check if the reconnection can work, still send packets by traffic generator, check if packets can be RX/TX with two virtio-pmds in two VMs::
 
     testpmd>show port stats all
 
@@ -559,13 +559,13 @@  Test Case 14: vhost-user/virtio-pmd pvp packed ring with multi VMs reconnect fro
     testpmd>set fwd mac
     testpmd>start
 
-5. Send packets by packet generator, check if packets can be RX/TX with two virtio-pmds in two VMs::
+5. Send packets by traffic generator, check if packets can be RX/TX with two virtio-pmds in two VMs::
 
     testpmd>show port stats all
 
 6. Reboot the two VMs, rerun step2-step5.
 
-7. Check if the reconnection can work, still send packets by packet generator, check if packets can be RX/TX with two virtio-pmds in two VMs::
+7. Check if the reconnection can work, still send packets by traffic generator, check if packets can be RX/TX with two virtio-pmds in two VMs::
 
     testpmd>show port stats all
 
diff --git a/test_plans/pvp_virtio_bonding_test_plan.rst b/test_plans/pvp_virtio_bonding_test_plan.rst
index 226a6dca..1c3b8434 100644
--- a/test_plans/pvp_virtio_bonding_test_plan.rst
+++ b/test_plans/pvp_virtio_bonding_test_plan.rst
@@ -74,7 +74,7 @@  Flow: TG--> NIC --> Vhost --> Virtio3 --> Virtio4 --> Vhost--> NIC--> TG
     testpmd>set fwd mac
     testpmd>start
 
-7. Send packets to nic port by packet generator.
+7. Send packets to nic port by traffic generator.
 
 8. Check port stats at VM side, there are throughput between port 3 and port 4::
 
diff --git a/test_plans/pvp_virtio_user_2M_hugepages_test_plan.rst b/test_plans/pvp_virtio_user_2M_hugepages_test_plan.rst
index 09d4aff7..9aeb0172 100644
--- a/test_plans/pvp_virtio_user_2M_hugepages_test_plan.rst
+++ b/test_plans/pvp_virtio_user_2M_hugepages_test_plan.rst
@@ -26,7 +26,7 @@  Test Case1:  Basic test for virtio-user split ring 2M hugepage
     --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/sock0,queues=1 -- -i
 
 
-3. Send packet with packet generator with different packet size,includes [64, 128, 256, 512, 1024, 1518], check the throughput with below command::
+3. Send packets with traffic generator with different packet sizes, including [64, 128, 256, 512, 1024, 1518], check the throughput with below command::
 
     testpmd>show port stats all
 
@@ -46,6 +46,6 @@  Test Case1:  Basic test for virtio-user packed ring 2M hugepage
     --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/sock0,packed_vq=1,queues=1 -- -i
 
 
-3. Send packet with packet generator with different packet size,includes [64, 128, 256, 512, 1024, 1518], check the throughput with below command::
+3. Send packets with traffic generator with different packet sizes, including [64, 128, 256, 512, 1024, 1518], check the throughput with below command::
 
     testpmd>show port stats all
\ No newline at end of file
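
A hedged setup sketch for the 2M-hugepage cases: reserve and mount 2 MB pages before launching vhost/virtio-user (page count and mount point are illustrative)::

    echo 2048 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
    mkdir -p /mnt/huge2M
    mount -t hugetlbfs -o pagesize=2M nodev /mnt/huge2M
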
diff --git a/test_plans/pvp_virtio_user_4k_pages_test_plan.rst b/test_plans/pvp_virtio_user_4k_pages_test_plan.rst
index 1e8da05e..b53d4bbe 100644
--- a/test_plans/pvp_virtio_user_4k_pages_test_plan.rst
+++ b/test_plans/pvp_virtio_user_4k_pages_test_plan.rst
@@ -33,7 +33,7 @@  Test Case1: Basic test vhost/virtio-user split ring with 4K-pages
     --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost-net,queues=1 -- -i
     testpmd>start
 
-4. Send packet with packet generator with different packet size,includes [64, 128, 256, 512, 1024, 1518], check the throughput with below command::
+4. Send packets with traffic generator with different packet sizes, including [64, 128, 256, 512, 1024, 1518], check the throughput with below command::
 
     testpmd>show port stats all
 
@@ -59,6 +59,6 @@  Test Case2: Basic test vhost/virtio-user packed ring with 4K-pages
     --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost-net,packed_vq=1,queues=1 -- -i
     testpmd>start
 
-4. Send packet with packet generator with different packet size,includes [64, 128, 256, 512, 1024, 1518], check the throughput with below command::
+4. Send packets with traffic generator with different packet sizes, including [64, 128, 256, 512, 1024, 1518], check the throughput with below command::
 
     testpmd>show port stats all
\ No newline at end of file
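
A hedged reminder for the 4K-pages cases: the point of the test is running the EAL without hugepages, e.g. by adding --no-huge and a plain memory size to the vhost side shown above::

    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 3-4 -n 4 --no-huge -m 1024 \
        --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/vhost-net,queues=1' -- -i
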
diff --git a/test_plans/pvp_virtio_user_multi_queues_port_restart_test_plan.rst b/test_plans/pvp_virtio_user_multi_queues_port_restart_test_plan.rst
index cc22fce7..0a7eab1a 100644
--- a/test_plans/pvp_virtio_user_multi_queues_port_restart_test_plan.rst
+++ b/test_plans/pvp_virtio_user_multi_queues_port_restart_test_plan.rst
@@ -39,7 +39,7 @@  Test Case 1: pvp 2 queues test with packed ring mergeable path
     >set fwd mac
     >start
 
-3. Send different ip packets with packet generator, check the throughput with below command::
+3. Send different ip packets with traffic generator, check the throughput with below command::
 
     testpmd>show port stats all
 
@@ -77,7 +77,7 @@  Test Case 2: pvp 2 queues test with packed ring non-mergeable path
     >set fwd mac
     >start
 
-3. Send different ip packets with packet generator, check the throughput with below command::
+3. Send different ip packets with traffic generator, check the throughput with below command::
 
     testpmd>show port stats all
 
@@ -110,7 +110,7 @@  Test Case 3: pvp 2 queues test with split ring inorder mergeable path
     >set fwd mac
     >start
 
-3. Send different ip packets with packet generator, check the throughput with below command::
+3. Send different ip packets with traffic generator, check the throughput with below command::
 
     testpmd>show port stats all
 
@@ -143,7 +143,7 @@  Test Case 4: pvp 2 queues test with split ring inorder non-mergeable path
     >set fwd mac
     >start
 
-3. Send different ip packets with packet generator, check the throughput with below command::
+3. Send different ip packets with traffic generator, check the throughput with below command::
 
     testpmd>show port stats all
 
@@ -176,7 +176,7 @@  Test Case 5: pvp 2 queues test with split ring mergeable path
     >set fwd mac
     >start
 
-3. Send different ip packets with packet generator, check the throughput with below command::
+3. Send different ip packets with traffic generator, check the throughput with below command::
 
     testpmd>show port stats all
 
@@ -209,7 +209,7 @@  Test Case 6: pvp 2 queues test with split ring non-mergeable path
     >set fwd mac
     >start
 
-3. Send different ip packets with packet generator, check the throughput with below command::
+3. Send different ip packets with traffic generator, check the throughput with below command::
 
     testpmd>show port stats all
 
@@ -242,7 +242,7 @@  Test Case 7: pvp 2 queues test with split ring vector_rx path
     >set fwd mac
     >start
 
-3. Send different ip packets with packet generator, check the throughput with below command::
+3. Send different ip packets with traffic generator, check the throughput with below command::
 
     testpmd>show port stats all
 
@@ -275,7 +275,7 @@  Test Case 8: pvp 2 queues test with packed ring inorder mergeable path
     >set fwd mac
     >start
 
-3. Send different ip packets with packet generator, check the throughput with below command::
+3. Send different ip packets with traffic generator, check the throughput with below command::
 
     testpmd>show port stats all
 
@@ -308,7 +308,7 @@  Test Case 9: pvp 2 queues test with packed ring inorder non-mergeable path
     >set fwd mac
     >start
 
-3. Send different ip packets with packet generator, check the throughput with below command::
+3. Send different ip packets with traffic generator, check the throughput with below command::
 
     testpmd>show port stats all
 
@@ -341,7 +341,7 @@  Test Case 10: pvp 2 queues test with packed ring vectorized path
     >set fwd mac
     >start
 
-3. Send different ip packets with packet generator, check the throughput with below command::
+3. Send different ip packets with traffic generator, check the throughput with below command::
 
     testpmd>show port stats all
 
diff --git a/test_plans/qinq_filter_test_plan.rst b/test_plans/qinq_filter_test_plan.rst
index 007f921d..e6c24bb6 100644
--- a/test_plans/qinq_filter_test_plan.rst
+++ b/test_plans/qinq_filter_test_plan.rst
@@ -11,10 +11,10 @@  Prerequisites
 =============
 1.Hardware:
    Intel® Ethernet 700 Series
-   HarborChannel_DP_OEMGEN_8MB_J24798-001_0.65_80002DA4 
+   HarborChannel_DP_OEMGEN_8MB_J24798-001_0.65_80002DA4
    firmware-version: 5.70 0x80002da4 1.3908.0(Intel® Ethernet Network Adapter XXV710-DA2) or 6.0.0+
-   
-2.Software: 
+
+2.Software:
   dpdk: http://dpdk.org/git/dpdk
   scapy: http://www.secdev.org/projects/scapy/
   disable vector mode when building dpdk
@@ -31,21 +31,21 @@  Testpmd configuration - 4 RX/TX queues per port
 #. enable qinq::
 
     testpmd command: vlan set qinq on 0
-      
+
 #. PMD fwd only receive the packets::
 
     testpmd command: set fwd rxonly
-      
+
 #. verbose configuration::
 
     testpmd command: set verbose 1
-      
+
 #. start packet receive::
 
     testpmd command: start
 
-tester Configuration
--------------------- 
+TG Configuration
+--------------------
 
 #. send dual vlan packet with scapy, verify it can be recognized as qinq packet::
 
@@ -64,15 +64,15 @@  Testpmd configuration - 4 RX/TX queues per port
 #. enable qinq::
 
     testpmd command: vlan set qinq on 0
-      
+
 #. PMD fwd only receive the packets::
 
     testpmd command: set fwd rxonly
-      
+
 #. verbose configuration::
 
     testpmd command: set verbose 1
-      
+
 #. start packet receive::
 
     testpmd command: start
@@ -82,8 +82,8 @@  Testpmd configuration - 4 RX/TX queues per port
     testpmd command: flow create 0 ingress pattern eth / vlan tci is 1 / vlan tci is 4093 / end actions pf / queue index 1 / end
     testpmd command: flow create 0 ingress pattern eth / vlan tci is 2 / vlan tci is 4094 / end actions pf / queue index 2 / end
 
-tester Configuration
--------------------- 
+TG Configuration
+--------------------
 
 #. send dual vlan packet with scapy, verify packets can filter to queues::
 
@@ -92,14 +92,14 @@  tester Configuration
 
 Test Case 3: qinq packet filter to VF queues
 ============================================
-#. create VF on dut::
+#. create VF on SUT::
 
     linux cmdline: echo 2 > /sys/bus/pci/devices/0000:81:00.0/max_vfs
 
 #. bind igb_uio to vfs::
 
     linux cmdline: ./usertools/dpdk-devbind.py -b igb_uio 81:02.0 81:02.1
- 
+
 #. set up testpmd with Intel® Ethernet 700 Series PF NICs::
 
     ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x1f -n 4 --socket-mem=1024,1024 --file-prefix=pf -a 81:00.0 -- -i --rxq=4 --txq=4
@@ -107,21 +107,21 @@  Test Case 3: qinq packet filter to VF queues
 #. enable qinq::
 
     testpmd command: vlan set qinq on 0
-      
+
 #. PMD fwd only receive the packets::
 
     testpmd command: set fwd rxonly
-      
+
 #. verbose configuration::
 
     testpmd command: set verbose 1
-      
+
 #. start packet receive::
 
     testpmd command: start
-       
+
 #. create filter rules::
- 
+
     testpmd command: flow create 0 ingress pattern eth / vlan tci is 1 / vlan tci is 4093 / end actions vf id 0 / queue index 2 / end
     testpmd command: flow create 0 ingress pattern eth / vlan tci is 2 / vlan tci is 4094 / end actions vf id 1 / queue index 3 / end
     testpmd command: flow create 0 ingress pattern eth / vlan tci is 3 / vlan tci is 4094 / end actions pf / queue index 1 / end
@@ -133,11 +133,11 @@  Test Case 3: qinq packet filter to VF queues
 #. PMD fwd only receive the packets::
 
     testpmd command: set fwd rxonly
-      
+
 #. verbose configuration::
 
     testpmd command: set verbose 1
-      
+
 #. start packet receive::
 
     testpmd command: start
@@ -149,17 +149,17 @@  Test Case 3: qinq packet filter to VF queues
 #. PMD fwd only receive the packets::
 
     testpmd command: set fwd rxonly
-      
+
 #. verbose configuration::
 
     testpmd command: set verbose 1
-      
+
 #. start packet receive::
 
     testpmd command: start
-    
-tester Configuration
--------------------- 
+
+TG Configuration
+--------------------
 
 #. send dual vlan packet with scapy, verify packets can filter to the corresponding PF and VF queues::
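
   As a sketch, the three rule-matching packets could be generated with scapy as below; the vlan pairs come from the rules above, the other names are placeholders::

    # (outer, inner): (1, 4093) -> VF0 queue 2, (2, 4094) -> VF1 queue 3, (3, 4094) -> PF queue 1
    from scapy.all import Ether, Dot1Q, IP, Raw, sendp

    for outer, inner in [(1, 4093), (2, 4094), (3, 4094)]:
        sendp(Ether(dst="$pf_mac")/Dot1Q(vlan=outer)/Dot1Q(vlan=inner)/IP()/Raw("x"*40),
              iface="TG_intf")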
 
@@ -169,14 +169,14 @@  tester Configuration
 
 Test Case 4: qinq packet filter with different tpid
 ====================================================
-#. create VF on dut::
+#. create VF on SUT::
 
     linux cmdline: echo 2 > /sys/bus/pci/devices/0000:81:00.0/max_vfs
 
 #. bind igb_uio to vfs::
 
     linux cmdline: ./usertools/dpdk-devbind.py -b igb_uio 81:02.0 81:02.1
- 
+
 #. set up testpmd with Intel® Ethernet 700 Series PF NICs::
 
     ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x1f -n 4 --socket-mem=1024,1024 --file-prefix=pf -a 81:00.0 -- -i --rxq=4 --txq=4
@@ -184,15 +184,15 @@  Test Case 4: qinq packet filter with different tpid
 #. enable qinq::
 
     testpmd command: vlan set qinq on 0
-      
+
 #. PMD fwd only receive the packets::
 
     testpmd command: set fwd rxonly
-      
+
 #. verbose configuration::
 
     testpmd command: set verbose 1
-      
+
 #. start packet receive::
 
     testpmd command: start
@@ -202,7 +202,7 @@  Test Case 4: qinq packet filter with different tpid
     testpmd command: vlan set outer tpid 0x88a8 0
 
 #. create filter rules::
- 
+
     testpmd command: flow create 0 ingress pattern eth / vlan tci is 1 / vlan tci is 4093 / end actions vf id 0 / queue index 2 / end
     testpmd command: flow create 0 ingress pattern eth / vlan tci is 2 / vlan tci is 4094 / end actions vf id 1 / queue index 3 / end
     testpmd command: flow create 0 ingress pattern eth / vlan tci is 3 / vlan tci is 4094 / end actions pf / queue index 1 / end
@@ -214,11 +214,11 @@  Test Case 4: qinq packet filter with different tpid
 #. PMD fwd only receive the packets::
 
     testpmd command: set fwd rxonly
-      
+
 #. verbose configuration::
 
     testpmd command: set verbose 1
-      
+
 #. start packet receive::
 
     testpmd command: start
@@ -230,17 +230,17 @@  Test Case 4: qinq packet filter with different tpid
 #. PMD fwd only receive the packets::
 
     testpmd command: set fwd rxonly
-      
+
 #. verbose configuration::
 
     testpmd command: set verbose 1
-      
+
 #. start packet receive::
 
     testpmd command: start
 
-tester Configuration
--------------------- 
+TG Configuration
+--------------------
 
 #. send dual vlan packet with scapy, verify packets can filter to the corresponding VF queues.
 #. send qinq packet with traffic generator, verify packets can filter to the corresponding VF queues.
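
Since a plain scapy ``Dot1Q`` stack emits TPID 0x8100, a frame carrying the 0x88a8 outer tag configured in this case can be sketched by forcing the Ether type; the MAC and interface are placeholders::

    # force the outer TPID to 0x88a8 to match "vlan set outer tpid 0x88a8 0"
    from scapy.all import Ether, Dot1Q, IP, Raw, sendp

    pkt = Ether(dst="$pf_mac", type=0x88a8)/Dot1Q(vlan=1)/Dot1Q(vlan=4093)/IP()/Raw("x"*40)
    sendp(pkt, iface="TG_intf")
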
diff --git a/test_plans/qos_api_test_plan.rst b/test_plans/qos_api_test_plan.rst
index 141820f4..a1270515 100644
--- a/test_plans/qos_api_test_plan.rst
+++ b/test_plans/qos_api_test_plan.rst
@@ -40,15 +40,15 @@  Prerequisites
 =============
 For i40e, need enable rss
 For ixgbe, need disable rss.
-The DUT must have two 10G Ethernet ports connected to two ports on tester::
+The SUT must have two 10G Ethernet ports connected to two ports on TG::
 
-    dut_port_0 <---> tester_port_0
-    dut_port_1 <---> tester_port_1
+    SUT_port_0 <---> TG_port_0
+    SUT_port_1 <---> TG_port_1
 
-Assume two DUT 10G Ethernet ports' pci device id is as the following::
+Assume the two SUT 10G Ethernet ports' pci device ids are as follows::
 
-    dut_port_0 : "0000:05:00.0"
-    dut_port_1 : "0000:05:00.1"
+    SUT_port_0 : "0000:05:00.0"
+    SUT_port_1 : "0000:05:00.1"
 
 Bind two ports to dpdk driver::
 
diff --git a/test_plans/qos_meter_test_plan.rst b/test_plans/qos_meter_test_plan.rst
index a7977d47..605df17f 100644
--- a/test_plans/qos_meter_test_plan.rst
+++ b/test_plans/qos_meter_test_plan.rst
@@ -14,12 +14,12 @@  https://doc.dpdk.org/guides/sample_app_ug/qos_metering.html
 
 Prerequisites
 =============
-The DUT must have two 10G Ethernet ports connected to two ports of IXIA.
+The SUT must have two 10G Ethernet ports connected to two ports of IXIA.
 
-Assume two DUT 10G Ethernet ports' pci device id is as the following,
+Assume the two SUT 10G Ethernet ports' pci device ids are as follows,
 
-dut_port_0 : "0000:05:00.0"
-dut_port_1 : "0000:05:00.1"
+SUT_port_0 : "0000:05:00.0"
+SUT_port_1 : "0000:05:00.1"
 
 1. Compile DPDK and sample
 
diff --git a/test_plans/qos_sched_test_plan.rst b/test_plans/qos_sched_test_plan.rst
index 456777de..67e5a866 100644
--- a/test_plans/qos_sched_test_plan.rst
+++ b/test_plans/qos_sched_test_plan.rst
@@ -38,16 +38,16 @@  https://doc.dpdk.org/guides/sample_app_ug/qos_scheduler.html
 
 Prerequisites
 =============
-The DUT must have four 10G Ethernet ports connected to two ports on
-Tester that are controlled by packet generator::
+The SUT must have four 10G Ethernet ports connected to two ports on
+the TG that are controlled by the traffic generator::
 
-    dut_port_0 <---> tester_port_0
-    dut_port_1 <---> tester_port_1
+    SUT_port_0 <---> TG_port_0
+    SUT_port_1 <---> TG_port_1
 
-Assume two DUT 10G Ethernet ports' pci device id is as the following::
+Assume the two SUT 10G Ethernet ports' pci device ids are as follows::
 
-    dut_port_0 : "0000:05:00.0"
-    dut_port_1 : "0000:05:00.1"
+    SUT_port_0 : "0000:05:00.0"
+    SUT_port_1 : "0000:05:00.1"
 
 1. Compile DPDK and sample with defining::
 
diff --git a/test_plans/queue_start_stop_test_plan.rst b/test_plans/queue_start_stop_test_plan.rst
index cf660710..e51de95c 100644
--- a/test_plans/queue_start_stop_test_plan.rst
+++ b/test_plans/queue_start_stop_test_plan.rst
@@ -20,7 +20,7 @@  order to be compatible with previous test framework.
 Prerequisites
 -------------
 
-Assume port A and B are connected to the remote ports, e.g. packet generator.
+Assume ports A and B are connected to the remote ports, e.g. a traffic generator.
 To run the testpmd application in linuxapp environment with 4 lcores,
 4 channels with other default parameters in interactive mode::
 
@@ -38,12 +38,12 @@  This case support PF (Intel® Ethernet 700 Series), VF (Intel® Ethernet 700 Ser
 #. Compile testpmd again, then run testpmd.
 #. Run "set fwd mac" to set fwd type
 #. Run "start" to start fwd package
-#. Start packet generator to transmit and receive packets
+#. Start traffic generator to transmit and receive packets
 #. Run "port 0 rxq 0 stop" to stop rxq 0 in port 0
-#. Start packet generator to transmit and not receive packets
+#. Start traffic generator to transmit and verify no packets are received
 #. Run "port 0 rxq 0 start" to start rxq 0 in port 0
 #. Run "port 1 txq 1 stop" to start txq 0 in port 1
-#. Start packet generator to transmit and not receive packets but in testpmd it is a "ports 0 queue 0 received 1 packages" print
+#. Start traffic generator to transmit; no packets should be received, but testpmd prints "ports 0 queue 0 received 1 packages"
 #. Run "port 1 txq 1 start" to start txq 0 in port 1
-#. Start packet generator to transmit and receive packets
+#. Start traffic generator to transmit and receive packets
 #. Test it again with VF
diff --git a/test_plans/rte_flow_test_plan.rst b/test_plans/rte_flow_test_plan.rst
index a64db026..0de68d30 100644
--- a/test_plans/rte_flow_test_plan.rst
+++ b/test_plans/rte_flow_test_plan.rst
@@ -8,14 +8,14 @@  This document contains the test plan for the rte_flow API.
 
 Prerequisites
 =============
-The DUT must have one 10G Ethernet ports connected to one port on
-Tester that are controlled by packet generator::
+The SUT must have one 10G Ethernet port connected to one port on
+the TG that is controlled by the traffic generator::
 
-    dut_port_0 <---> tester_port_0
+    SUT_port_0 <---> TG_port_0
 
-Assume the DUT 10G Ethernet ports' pci device id is as the following::
+Assume the SUT 10G Ethernet port's pci device id is as follows::
 
-    dut_port_0 : "0000:05:00.0"
+    SUT_port_0 : "0000:05:00.0"
     mac_address: "00:00:00:00:01:00"
 
 Bind the port to dpdk igb_uio driver::
@@ -50,7 +50,7 @@  Test Case: dst (destination MAC) rule
 
     ./<build_target>/app/dpdk-testpmd -c 3 -- -i
 
-.. 
+..
 
 2. Set the test flow rule (If the Ethernet destination MAC is equal to 90:61:ae:fd:41:43, send the packet to queue 1):
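
For instance, a matching packet could be crafted with scapy; the interface name is a placeholder::

    # destination MAC equals the value in the rule, so it should be steered to queue 1
    from scapy.all import Ether, IP, Raw, sendp

    sendp(Ether(dst="90:61:ae:fd:41:43")/IP()/Raw("x"*20), iface="TG_intf")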
 
@@ -141,7 +141,7 @@  Test Case: type (EtherType or TPID) rule
 
 ..
 
-3. Send a packet that matches the rule: 
+3. Send a packet that matches the rule:
 
 ::
 
@@ -186,7 +186,7 @@  Test Case: protocol (protocol type) rule
 
 ..
 
-3. Send a packet that matches the rule: 
+3. Send a packet that matches the rule:
 
 ::
 
@@ -231,7 +231,7 @@  Test Case: icmp_type (ICMP message type) rule
 
 ..
 
-3. Send a packet that matches the rule: 
+3. Send a packet that matches the rule:
 
 ::
 
@@ -275,7 +275,7 @@  We tested type 3, code 3.
 
 ..
 
-3. Send a packet that matches the rule: 
+3. Send a packet that matches the rule:
 
 ::
 
diff --git a/test_plans/rteflow_priority_test_plan.rst b/test_plans/rteflow_priority_test_plan.rst
index 111383ba..b8736b39 100644
--- a/test_plans/rteflow_priority_test_plan.rst
+++ b/test_plans/rteflow_priority_test_plan.rst
@@ -14,7 +14,7 @@  this feature uses devargs as a hint to active flow priority or not.
 
 This test plan is based on Intel E810 series ethernet cards.
 when priority is not active, flows are created on fdir then switch/ACL.
-when priority is active, flows are identified into 2 category: 
+when priority is active, flows are identified into 2 categories:
 High priority as permission stage that maps to switch/ACL,
 Low priority as distribution stage that maps to fdir,
 a no destination high priority rule is not acceptable, since it may be overwritten
@@ -29,7 +29,7 @@  Prerequisites
 Bind the pf to dpdk driver::
 
    ./usertools/dpdk-devbind.py -b vfio-pci af:00.0
-   
+
 Note: The kernel must be >= 3.6+ and VT-d must be enabled in bios.
 
 Test Case: Setting Priority in Non-pipeline Mode
@@ -81,7 +81,7 @@  Patterns in this case:
 Test Case: Create Flow Rules with Priority in Pipeline Mode
 ============================================================
 
-Priority is active in pipeline mode. 
+Priority is active in pipeline mode.
 Creating flow rules and setting priority 0/1 will map switch/fdir filter separately.
 
 Patterns in this case:
@@ -108,7 +108,7 @@  Patterns in this case:
     flow create 0 priority 1 ingress pattern eth / ipv4 / udp / vxlan / eth / ipv4 src is 192.168.0.4 dst is 192.168.0.7 / udp src is 25 dst is 23 / end actions queue index 4 / end
 
 #. Check flow list with commands "flow list 0", all flows are created correctly::
-   
+
     +-----+--------+--------+--------+-----------------------+
     |ID	 | Group  | Prio   | Attr   | Rul                   |
     +=====+========+========+========+=======================+
@@ -121,14 +121,14 @@  Patterns in this case:
     | 3       ...			                    |
     +-----+--------+--------+--------+-----------------------+
 
-#. Send packets according to the created rules in tester::
+#. Send packets according to the created rules in TG::
 
     sendp([Ether(dst="00:00:00:00:11:00",src="11:22:33:44:55:66")/IP(src="192.168.0.2",dst="192.168.0.3",tos=4)/TCP(sport=25,dport=23)/Raw('x'*80)],iface="enp134s0f0")
     sendp([Ether(dst="00:00:00:00:11:00",src="11:22:33:44:55:66")/IP()/UDP()/VXLAN()/Ether()/IP(src="192.168.0.2",dst="192.168.0.3",tos=4)/UDP(sport=25,dport=23)/Raw('x'*80)],iface="enp134s0f0")
     sendp([Ether(dst="00:00:00:00:11:00",src="11:22:33:44:55:66")/IP(src="192.168.0.4",dst="192.168.0.7",tos=4,ttl=20)/TCP(sport=25,dport=23)/Raw('x'*80)],iface="enp134s0f0")
     sendp([Ether(dst="00:00:00:00:11:00",src="11:22:33:44:55:66")/IP()/UDP()/VXLAN()/Ether()/IP(src="192.168.0.4 ",dst="192.168.0.7")/UDP(sport=25,dport=23)/Raw('x'*80)],iface="enp134s0f0")
 
-#. Check the packets are recieved in right queues by dut::
+#. Check the packets are received in the right queues by the SUT::
 
     testpmd> port 0/queue 1: received 1 packets
      src=11:22:33:44:55:66 - dst=00:00:00:00:11:00 - type=0x0800 - length=134 - nb_segs=1 - RSS hash=0x96803f93 - RSS queue=0x1 - hw ptype: L2_ETHER L3_IPV4_EXT_UNKNOWN L4_TCP  - sw ptype: L2_ETHER L3_IPV4 L4_TCP  - l2_len=14 - l3_len=20 - l4_len=20 - Receive queue=0x1
@@ -217,11 +217,11 @@  Patterns in this case:
     ice_flow_create(): Failed to create flow
     Caught error type 13 (specific pattern item): cause: 0x7fffe65b8128, Unsupported pattern: Invalid argument
 
-Test case: Create flow rules with same parameter but differenet actions 
+Test case: Create flow rules with same parameter but different actions
 ==========================================================================
 
 It is acceptable to create same rules with differenet filter in pipeline mode.
-When fdir filter and switch filter has the same parameter rules, the flow will map to switch then fdir. 
+When the fdir filter and the switch filter have rules with the same parameters, the flow will map to switch then fdir.
 
 Patterns in this case:
 	MAC_IPV4_TCP
@@ -243,11 +243,11 @@  Patterns in this case:
     ice_flow_create(): Succeeded to create (1) flow
     Flow rule #1 created
 
-#. Tester send a pkt to dut::
+#. TG sends a pkt to the SUT::
 
     sendp([Ether(dst="00:00:00:00:11:00",src="11:22:33:44:55:66")/IP(src="192.168.0.2",dst="192.168.0.3",tos=4)/TCP(sport=25,dport=23)/Raw('x'*80)],iface="enp134s0f0")
 
-#. Check the packets are recieved by dut in queue 1::
+#. Check the packets are received by the SUT in queue 1::
 
     testpmd> port 0/queue 1: received 1 packets
     src=11:22:33:44:55:66 - dst=00:00:00:00:11:00 - type=0x0800 - length=134 - nb_segs=1 - RSS hash=0xf12811f1 - RSS queue=0x1 - hw ptype: L2_ETHER L3_IPV4_EXT_UNKNOWN L4_TCP  - sw ptype: L2_ETHER L3_IPV4 L4_TCP  - l2_len=14 - l3_len=20 - l4_len=20 - Receive queue=0x1
@@ -257,7 +257,7 @@  Patterns in this case:
 
     testpmd>flow destroy 0 rule 0
 
-#. Tester send a pkt to dut::
+#. TG sends a pkt to the SUT::
 
     sendp([Ether(dst="00:00:00:00:11:00",src="11:22:33:44:55:66")/IP(src="192.168.0.2",dst="192.168.0.3",tos=4)/TCP(sport=25,dport=23)/Raw('x'*80)],iface="enp134s0f0")
 
@@ -284,11 +284,11 @@  Patterns in this case:
    ice_flow_create(): Succeeded to create (2) flow
    Flow rule #1 created
 
-#. Tester send a pkt to dut::
+#. TG sends a pkt to the SUT::
 
     sendp([Ether(dst="00:00:00:00:11:00",src="11:22:33:44:55:66")/IP(src="192.168.0.2",dst="192.168.0.3",tos=4)/TCP(sport=25,dport=23)/Raw('x'*80)],iface="enp134s0f0")
 
-#. Check the packets are recieved by dut in queue 1::
+#. Check the packets are received by the SUT in queue 1::
 
     testpmd> port 0/queue 1: received 1 packets
      src=11:22:33:44:55:66 - dst=00:00:00:00:11:00 - type=0x0800 - length=134 - nb_segs=1 - RSS hash=0xf12811f1 - RSS queue=0x1 - hw ptype: L2_ETHER L3_IPV4_EXT_UNKNOWN L4_TCP  - sw ptype: L2_ETHER L3_IPV4 L4_TCP  - l2_len=14 - l3_len=20 - l4_len=20 - Receive queue=0x1
@@ -298,7 +298,7 @@  Patterns in this case:
 
     testpmd>flow destroy 0 rule 1
 
-#. Tester send a pkt to dut::
+#. TG sends a pkt to the SUT::
 
     sendp([Ether(dst="00:00:00:00:11:00",src="11:22:33:44:55:66")/IP(src="192.168.0.2",dst="192.168.0.3",tos=4)/TCP(sport=25,dport=23)/Raw('x'*80)],iface="enp134s0f0")
 
diff --git a/test_plans/runtime_vf_queue_number_kernel_test_plan.rst b/test_plans/runtime_vf_queue_number_kernel_test_plan.rst
index 99a43b21..85d3bddb 100644
--- a/test_plans/runtime_vf_queue_number_kernel_test_plan.rst
+++ b/test_plans/runtime_vf_queue_number_kernel_test_plan.rst
@@ -111,11 +111,11 @@  Test Case 1: set valid VF queue number in testpmd command-line options
 
      port 0: RX queue number: 3 Tx queue number: 3
 
-4. Send packets to VF from tester, and make sure they match the default RSS rules, IPV4_UNKNOW, and will be distributed to all the queues that you configured, Here is 3::
+4. Send packets to VF from TG, and make sure they match the default RSS rules, IPV4_UNKNOW, and will be distributed to all the queues that you configured (3 here)::
 
-     pkt1 = Ether(dst="$vf_mac", src="$tester_mac")/IP(src="10.0.0.1",dst="192.168.0.1")/("X"*48)
-     pkt2 = Ether(dst="$vf_mac", src="$tester_mac")/IP(src="10.0.0.1",dst="192.168.0.2")/("X"*48)
-     pkt3 = Ether(dst="$vf_mac", src="$tester_mac")/IP(src="10.0.0.1",dst="192.168.0.3")/("X"*48)
+     pkt1 = Ether(dst="$vf_mac", src="$TG_mac")/IP(src="10.0.0.1",dst="192.168.0.1")/("X"*48)
+     pkt2 = Ether(dst="$vf_mac", src="$TG_mac")/IP(src="10.0.0.1",dst="192.168.0.2")/("X"*48)
+     pkt3 = Ether(dst="$vf_mac", src="$TG_mac")/IP(src="10.0.0.1",dst="192.168.0.3")/("X"*48)
 
 5. Stop forwarding, and check the queues statistics, every RX/TX queue must has 1 packet go through, and total 3 packets in uni-direction as well as 6 packets in bi-direction::
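
For reference, the three packets defined in step 4 could be sent with a loop like this sketch; ``$vf_mac``, ``$TG_mac`` and ``TG_intf`` are placeholders::

    # one packet per destination IP; default RSS should spread them over the 3 queues
    from scapy.all import Ether, IP, Raw, sendp

    for dst in ["192.168.0.1", "192.168.0.2", "192.168.0.3"]:
        sendp(Ether(dst="$vf_mac", src="$TG_mac")/IP(src="10.0.0.1", dst=dst)/Raw("X"*48),
              iface="TG_intf")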
 
diff --git a/test_plans/runtime_vf_queue_number_test_plan.rst b/test_plans/runtime_vf_queue_number_test_plan.rst
index 1d8dde99..2910e306 100644
--- a/test_plans/runtime_vf_queue_number_test_plan.rst
+++ b/test_plans/runtime_vf_queue_number_test_plan.rst
@@ -197,11 +197,11 @@  Test case 3: set valid VF queue number in testpmd command-line options
 
      port 0: RX queue number: 3 Tx queue number: 3
 
-5. Send packets to VF from tester, and make sure they match the default RSS rules, IPV4_UNKNOW, and will be distributed to all the queues that you configured, Here is 3::
+5. Send packets to VF from TG, and make sure they match the default RSS rules, IPV4_UNKNOW, and will be distributed to all the queues that you configured (3 here)::
 
-     pkt1 = Ether(dst="$vf_mac", src="$tester_mac")/IP(src="10.0.0.1",dst="192.168.0.1")/("X"*48)
-     pkt2 = Ether(dst="$vf_mac", src="$tester_mac")/IP(src="10.0.0.1",dst="192.168.0.2")/("X"*48)
-     pkt3 = Ether(dst="$vf_mac", src="$tester_mac")/IP(src="10.0.0.1",dst="192.168.0.3")/("X"*48)
+     pkt1 = Ether(dst="$vf_mac", src="$TG_mac")/IP(src="10.0.0.1",dst="192.168.0.1")/("X"*48)
+     pkt2 = Ether(dst="$vf_mac", src="$TG_mac")/IP(src="10.0.0.1",dst="192.168.0.2")/("X"*48)
+     pkt3 = Ether(dst="$vf_mac", src="$TG_mac")/IP(src="10.0.0.1",dst="192.168.0.3")/("X"*48)
 
 6. Stop forwarding, and check the queues statistics, every RX/TX queue must has 1 packet go through, and total 3 packets in uni-direction as well as 6 packets in bi-direction::
 
diff --git a/test_plans/rxtx_offload_test_plan.rst b/test_plans/rxtx_offload_test_plan.rst
index ea3ae6fe..f4e8198c 100644
--- a/test_plans/rxtx_offload_test_plan.rst
+++ b/test_plans/rxtx_offload_test_plan.rst
@@ -188,9 +188,9 @@  Test case: 82599/500 Series Rx offload per-queue setting
    queue3 doesn't support vlan strip.
 
    If set "set fwd mac",
-   Check the tester port connected to port1 which receive the forwarded packet
+   Check the TG port connected to port1, which receives the forwarded packet.
    So you can check that there is vlan id in pkt1, while there is not vlan id in pkt2.
-   The result is consistent to the DUT port receive packets.
+   The result is consistent with the packets received on the SUT port.
 
 5. Disable vlan_strip per_queue::
 
@@ -270,7 +270,7 @@  Test case: Tx offload per-port setting
       Queue[ 3] :
     testpmd> start
 
-   Tester port0 received the packet.
+   TG port0 received the packet.
    There is no vlan infomation in the received packet.
 
 2. Enable vlan_insert per_port::
@@ -292,7 +292,7 @@  Test case: Tx offload per-port setting
       Queue[ 3] : VLAN_INSERT
     testpmd> start
 
-   Tester port0 receive the packet.
+   TG port0 received the packet.
    There is vlan ID in the received packet.
 
 3. Disable vlan_insert per_port::
@@ -334,7 +334,7 @@  Test case: Tx offload per-port setting in command-line
     testpmd> set fwd txonly
     testpmd> start
 
-   Tester port0 can receive the packets with vlan ID.
+   TG port0 can receive the packets with vlan ID.
 
 2. Disable vlan_insert per_queue::
 
@@ -353,7 +353,7 @@  Test case: Tx offload per-port setting in command-line
       Queue[ 3] :
     testpmd> start
 
-   The tester port0 still receive packets with vlan ID.
+   The TG port0 still receives packets with vlan ID.
    The per_port capability can't be disabled by per_queue command.
 
 3. Disable vlan_insert per_port::
@@ -370,7 +370,7 @@  Test case: Tx offload per-port setting in command-line
       Queue[ 3] :
     testpmd> start
 
-   The tester port receive packets without vlan ID.
+   The TG port receives packets without vlan ID.
    The per_port capability can be disabled by per_port command.
 
 4. Enable vlan_insert per_queue::
@@ -409,7 +409,7 @@  Test case: Tx offload per-port setting in command-line
     testpmd> port start 0
     testpmd> start
 
-   The tester port received packets with vlan ID.
+   The TG port received packets with vlan ID.
    The per_port capability can be enabled by per_port command.
 
 Test case: Tx offload checksum
diff --git a/test_plans/shutdown_api_test_plan.rst b/test_plans/shutdown_api_test_plan.rst
index df366872..00045bac 100644
--- a/test_plans/shutdown_api_test_plan.rst
+++ b/test_plans/shutdown_api_test_plan.rst
@@ -28,7 +28,7 @@  to the device under test::
    modprobe vfio-pci
    usertools/dpdk-devbind.py --bind=vfio-pci device_bus_id
 
-Assume port A and B are connected to the remote ports, e.g. packet generator.
+Assume ports A and B are connected to the remote ports, e.g. a traffic generator.
 To run the testpmd application in linuxapp environment with 4 lcores,
 4 channels with other default parameters in interactive mode::
 
@@ -44,10 +44,10 @@  Test Case: Stop and Restart
 3. Check that testpmd is able to forward traffic.
 4. Run ``stop`` to stop forwarding packets.
 5. Run ``port stop all`` to stop all ports.
-6. Check on the tester side that the ports are down using ethtool.
+6. Check on the TG side that the ports are down using ethtool.
 7. Run ``port start all`` to restart all ports.
-8. Check on the tester side that the ports are up using ethtool
-9. Run ``start`` again to restart the forwarding, then start packet generator to
+8. Check on the TG side that the ports are up using ethtool.
+9. Run ``start`` again to restart the forwarding, then start traffic generator to
    transmit and receive packets, and check if testpmd is able to receive and
    forward packets successfully.
 
@@ -62,7 +62,7 @@  Test Case: Reset RX/TX Queues
 4. Run ``port config all txq 2`` to change the number of transmitting queues to two.
 5. Run ``port start all`` to restart all ports.
 6. Check with ``show config rxtx`` that the configuration for these parameters changed.
-7. Run ``start`` again to restart the forwarding, then start packet generator to transmit
+7. Run ``start`` again to restart the forwarding, then start traffic generator to transmit
    and receive packets, and check if testpmd is able to receive and forward packets
    successfully.
 
@@ -75,13 +75,13 @@  Test Case: Set promiscuous mode
 2. Run ``port stop all`` to stop all ports.
 3. Run ``set promisc all off`` to disable promiscuous mode on all ports.
 4. Run ``port start all`` to restart all ports.
-5. Run ``start`` again to restart the forwarding, then start packet generator to transmit
+5. Run ``start`` again to restart the forwarding, then start traffic generator to transmit
    and receive packets, and check that testpmd is NOT able to receive and forward packets
    successfully.
 6. Run ``port stop all`` to stop all ports.
 7. Run ``set promisc all on`` to enable promiscuous mode on all ports.
 8. Run ``port start all`` to restart all ports.
-9. Run ``start`` again to restart the forwarding, then start packet generator to transmit
+9. Run ``start`` again to restart the forwarding, then start traffic generator to transmit
    and receive packets, and check that testpmd is able to receive and forward packets
    successfully.
 
@@ -119,7 +119,7 @@  Test Case: Reconfigure All Ports With The Same Configurations (CRC)
 3. Run ``port config all crc-strip on`` to enable the CRC stripping mode.
 4. Run ``port start all`` to restart all ports.
 5. Check with ``show config rxtx`` that the configuration for these parameters changed.
-6. Run ``start`` again to restart the forwarding, then start packet generator to
+6. Run ``start`` again to restart the forwarding, then start traffic generator to
    transmit and receive packets, and check if testpmd is able to receive and
    forward packets successfully. Check that the packet received is 4 bytes
    smaller than the packet sent.
@@ -133,8 +133,8 @@  Test Case: Change Link Speed
 2. Run ``port stop all`` to stop all ports.
 3. Run ``port config all speed SPEED duplex HALF/FULL`` to select the new config for the link.
 4. Run ``port start all`` to restart all ports.
-5. Check on the tester side that the configuration actually changed using ethtool.
-6. Run ``start`` again to restart the forwarding, then start packet generator to transmit
+5. Check on the TG side that the configuration actually changed using ethtool.
+6. Run ``start`` again to restart the forwarding, then start traffic generator to transmit
    and receive packets, and check if testpmd is able to receive and forward packets
    successfully.
 7. Repeat this process for every compatible speed depending on the NIC driver.
@@ -154,8 +154,8 @@  This case support all the nic with driver i40e and ixgbe.
 4. Run ``port stop all`` to stop all ports.
 5. Run ``port config all speed SPEED duplex HALF/FULL`` to select the new config for the link.
 6. Run ``port start all`` to restart all ports.
-   show port info all Check on the tester side that the VF configuration actually changed using ethtool.
-7. Run ``start`` again to restart the forwarding, then start packet generator to transmit
+   Run ``show port info all``, then check on the TG side that the VF configuration actually changed using ethtool.
+7. Run ``start`` again to restart the forwarding, then start traffic generator to transmit
    and receive packets, and check if testpmd is able to receive and forward packets
    successfully.
 8. Repeat this process for every compatible speed depending on the NIC driver.
@@ -169,7 +169,7 @@  Test Case: Enable/Disable Jumbo Frame
 2. Run ``port stop all`` to stop all ports.
 3. Run ``port config all max-pkt-len 2048`` to set the maximum packet length.
 4. Run ``port start all`` to restart all ports.
-5. Run ``start`` again to restart the forwarding, then start packet generator to transmit
+5. Run ``start`` again to restart the forwarding, then start traffic generator to transmit
    and receive packets, and check if testpmd is able to receive and forward packets
    successfully. Check this with the following packet sizes: 2047, 2048 & 2049. Only the third one should fail.
 
@@ -182,7 +182,7 @@  Test Case: Enable/Disable RSS
 2. Run ``port stop all`` to stop all ports.
 3. Run ``port config rss ip`` to enable RSS.
 4. Run ``port start all`` to restart all ports.
-5. Run ``start`` again to restart the forwarding, then start packet generator to transmit
+5. Run ``start`` again to restart the forwarding, then start traffic generator to transmit
    and receive packets, and check if testpmd is able to receive and forward packets
    successfully.
 
@@ -197,7 +197,7 @@  Test Case: Change the Number of rxd/txd
 4. Run ``port config all txd 1024`` to change the tx descriptors.
 5. Run ``port start all`` to restart all ports.
 6. Check with ``show config rxtx`` that the descriptors were actually changed.
-7. Run ``start`` again to restart the forwarding, then start packet generator to transmit
+7. Run ``start`` again to restart the forwarding, then start traffic generator to transmit
    and receive packets, and check if testpmd is able to receive and forward packets
    successfully.
 
@@ -208,13 +208,13 @@  Test Case: link stats
    below steps to check if it works well after reconfiguring all ports without
    changing any configurations.
 2. Run ``set fwd mac`` to set fwd type.
-3. Run ``start`` to start the forwarding, then start packet generator to transmit
+3. Run ``start`` to start the forwarding, then start traffic generator to transmit
    and receive packets
 4. Run ``set link-down port X`` to set all port link down
-5. Check on the tester side that the configuration actually changed using ethtool.
-6. Start packet generator to transmit and not receive packets
+5. Check on the TG side that the configuration actually changed using ethtool.
+6. Start traffic generator to transmit and verify no packets are received
 7. Run ``set link-up port X`` to set all port link up
-8. Start packet generator to transmit and receive packets
+8. Start traffic generator to transmit and receive packets
    successfully.
 
 Test Case: RX/TX descriptor status
diff --git a/test_plans/softnic_test_plan.rst b/test_plans/softnic_test_plan.rst
index 84d34f63..c6f062fb 100644
--- a/test_plans/softnic_test_plan.rst
+++ b/test_plans/softnic_test_plan.rst
@@ -12,21 +12,21 @@  is configurable through firmware (DPDK Packet Framework script).
 
 Prerequisites
 =============
-1. The DUT must have one 10G Ethernet port connected to a port on tester
+1. The SUT must have one 10G Ethernet port connected to a port on TG
    that are controlled by the traffic generator::
 
-    dut_port_0 <---> tester_port_0
+    SUT_port_0 <---> TG_port_0
 
-   Assume the DUT 10G Ethernet port's pci device id is as the following::
+   Assume the SUT 10G Ethernet port's pci device id is as follows::
 
-    dut_port_0 : "0000:05:00.0"
+    SUT_port_0 : "0000:05:00.0"
 
    Bind it to dpdk igb_uio driver::
 
     ./usertools/dpdk-devbind.py -b igb_uio 05:00.0
 
 2. Change ./drivers/net/softnic/firmware.cli to meet the specific test environment.
-   Change the DUT port info to the actual port info in your test environment::
+   Change the SUT port info to the actual port info in your test environment::
 
     link LINK dev 0000:05:00.0
 
diff --git a/test_plans/tso_test_plan.rst b/test_plans/tso_test_plan.rst
index d0f96b2b..ab3a4589 100644
--- a/test_plans/tso_test_plan.rst
+++ b/test_plans/tso_test_plan.rst
@@ -20,18 +20,18 @@  according to the MTU size.
 Prerequisites
 =============
 
-The DUT must take one of the Ethernet controller ports connected to a port on another
-device that is controlled by the Scapy packet generator.
+The SUT must have one of the Ethernet controller ports connected to a port on another
+device that is controlled by the Scapy traffic generator.
 
 The Ethernet interface identifier of the port that Scapy will use must be known.
-On tester, all offload feature should be disabled on tx port, and start rx port capture::
+On the TG, all offload features should be disabled on the tx port, and capture should be started on the rx port::
 
   ethtool -K <tx port> rx off tx off tso off gso off gro off lro off
   ip l set <tx port> up
   tcpdump -n -e -i <rx port> -s 0 -w /tmp/cap
 
 
-On DUT, run pmd with parameter "--enable-rx-cksum". Then enable TSO on tx port
+On the SUT, run pmd with parameter "--enable-rx-cksum". Then enable TSO on the tx port
 and checksum on rx port. The test commands is below::
 
   #enable hw checksum on rx port
@@ -48,20 +48,20 @@  and checksum on rx port. The test commands is below::
 Test case: csum fwd engine, use TSO
 ===================================
 
-This test uses ``Scapy`` to send out one large TCP package. The dut forwards package
+This test uses ``Scapy`` to send out one large TCP packet. The SUT forwards the packet
 with TSO enable on tx port while rx port turns checksum on. After package send out
-by TSO on tx port, the tester receives multiple small TCP package.
+by TSO on the tx port, the TG receives multiple small TCP packets.
 
-Turn off tx port by ethtool on tester::
+Turn off tx port offloads by ethtool on the TG::
 
   ethtool -K <tx port> rx off tx off tso off gso off gro off lro off
   ip l set <tx port> up
 
-capture package rx port on tester::
+Capture packets on the rx port of the TG::
 
   tcpdump -n -e -i <rx port> -s 0 -w /tmp/cap
 
-Launch the userland ``testpmd`` application on DUT as follows::
+Launch the userland ``testpmd`` application on SUT as follows::
 
    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xffffffff -n 2 -- -i --rxd=512 --txd=512
    --burst=32 --rxfreet=64 --mbcache=128 --portmask=0x3 --txpt=36 --txht=0 --txwt=0
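
As a hedged illustration, the oversized TCP packet for this case might be sent from the TG like below; the addresses, payload size and interface name are assumptions::

    # payload far larger than the MSS, so the SUT's TSO engine must split it;
    # tcpdump on the TG rx port should then show several MSS-sized segments
    from scapy.all import Ether, IP, TCP, Raw, sendp

    sendp(Ether(dst="$sut_mac")/IP(src="10.0.0.1", dst="10.0.0.2")/
          TCP(sport=1021, dport=1021)/Raw("x"*5000), iface="TG_tx_intf")
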
@@ -90,20 +90,20 @@  Test IPv6() in scapy::
 Test case: csum fwd engine, use TSO tunneling
 =============================================
 
-This test uses ``Scapy`` to send out one large TCP package. The dut forwards package
+This test uses ``Scapy`` to send out one large TCP packet. The SUT forwards the packet
 with TSO enable on tx port while rx port turns checksum on. After package send out
-by TSO on tx port, the tester receives multiple small TCP package.
+by TSO on the tx port, the TG receives multiple small TCP packets.
 
-Turn off tx port by ethtool on tester::
+Turn off tx port offloads by ethtool on the TG::
 
   ethtool -K <tx port> rx off tx off tso off gso off gro off lro off
   ip l set <tx port> up
 
-capture package rx port on tester::
+Capture packets on the rx port of the TG::
 
   tcpdump -n -e -i <rx port> -s 0 -w /tmp/cap
 
-Launch the userland ``testpmd`` application on DUT as follows::
+Launch the userland ``testpmd`` application on SUT as follows::
 
    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xffffffff -n 2 -- -i --rxd=512 --txd=512
    --burst=32 --rxfreet=64 --mbcache=128 --portmask=0x3 --txpt=36 --txht=0 --txwt=0
@@ -134,7 +134,7 @@  Test nvgre() in scapy::
 Test case: TSO performance
 ==========================
 
-Set the packet stream to be sent out from packet generator before testing as
+Set the packet stream to be sent out from the traffic generator before testing as
 below.
 
 +-------+---------+---------+---------+----------+----------+
diff --git a/test_plans/tx_preparation_test_plan.rst b/test_plans/tx_preparation_test_plan.rst
index 7bf94f0b..e2c55152 100644
--- a/test_plans/tx_preparation_test_plan.rst
+++ b/test_plans/tx_preparation_test_plan.rst
@@ -28,14 +28,14 @@  Prerequisites
 =============
 
 Support igb_uio, test txprep forwarding features on e1000, i40e and ixgbe
-drivers.Send packets from tester platform through the interface eth1 to
+drivers. Send packets from the TG platform through the interface eth1 to
 the tested port 0, then testpmd sends back packet using same port and uses
 tcpdump to capture packet information::
 
-   Tester      DUT
+   TG          SUT
    eth1  <---> port 0
 
-Turn off all hardware offloads on tester machine::
+Turn off all hardware offloads on the TG machine::
 
    ethtool -K eth1 rx off tx off tso off gso off gro off lro off
 
@@ -76,7 +76,7 @@  Start the packet forwarding::
 
     testpmd> start
 
-Send few IP/TCP/UDP packets from tester machine to DUT. Check IP/TCP/UDP
+Send a few IP/TCP/UDP packets from the TG machine to the SUT. Check IP/TCP/UDP
 checksum correctness in captured packet, such as correct as below:
 
 Transmitted packet::
@@ -108,7 +108,7 @@  Start the packet forwarding::
 
    testpmd> start
 
-Send few IP/TCP packets from tester machine to DUT. Check IP/TCP checksum
+Send a few IP/TCP packets from the TG machine to the SUT. Check IP/TCP checksum
 correctness in captured packet and verify correctness of HW TSO offload
 for large packets. One large TCP packet (5214 bytes + headers) segmented
 to four fragments (1460 bytes+header,1460 bytes+header,1460 bytes+header
diff --git a/test_plans/uni_pkt_test_plan.rst b/test_plans/uni_pkt_test_plan.rst
index 39c9aa87..b8b3795a 100644
--- a/test_plans/uni_pkt_test_plan.rst
+++ b/test_plans/uni_pkt_test_plan.rst
@@ -37,7 +37,7 @@  Test Case: L2 Packet detect
 This case checked that whether Timesync, ARP, LLDP detection supported by
 Intel® Ethernet 700 Series.
 
-Send time sync packet from tester::
+Send time sync packet from TG::
 
     sendp([Ether(dst='FF:FF:FF:FF:FF:FF',type=0x88f7)/"\\x00\\x02"], iface=txItf)
 
@@ -45,7 +45,7 @@  Check below message dumped by testpmd::
 
     (outer) L2 type: ETHER_Timesync
 
-Send ARP packet from tester::
+Send ARP packet from TG::
 
     sendp([Ether(dst='FF:FF:FF:FF:FF:FF')/ARP()], iface=txItf)
 
@@ -53,7 +53,7 @@  Check below message dumped by testpmd::
 
     (outer) L2 type: ETHER_ARP
 
-Send LLDP packet from tester::
+Send LLDP packet from TG::
 
     sendp([Ether()/LLDP()/LLDPManagementAddress()], iface=txItf)
 
diff --git a/test_plans/unit_tests_loopback_test_plan.rst b/test_plans/unit_tests_loopback_test_plan.rst
index ed9351db..a9077cc6 100644
--- a/test_plans/unit_tests_loopback_test_plan.rst
+++ b/test_plans/unit_tests_loopback_test_plan.rst
@@ -14,7 +14,7 @@  Loopback mode can be used to support testing task.
 Prerequisites
 =============
 
-Two 10Gb/25Gb/40Gb Ethernet ports of the DUT are directly connected and link is up.
+Two 10Gb/25Gb/40Gb Ethernet ports of the SUT are directly connected and link is up.
 
 
 single port MAC loopback
diff --git a/test_plans/unit_tests_pmd_perf_test_plan.rst b/test_plans/unit_tests_pmd_perf_test_plan.rst
index 43a62294..6e98d0e0 100644
--- a/test_plans/unit_tests_pmd_perf_test_plan.rst
+++ b/test_plans/unit_tests_pmd_perf_test_plan.rst
@@ -8,7 +8,7 @@  Unit Tests: PMD Performance
 
 Prerequisites
 =============
-One 10Gb Ethernet port of the DUT is directly connected and link is up.
+One 10Gb Ethernet port of the SUT is directly connected and link is up.
 
 
 Continuous Mode Performance
diff --git a/test_plans/userspace_ethtool_test_plan.rst b/test_plans/userspace_ethtool_test_plan.rst
index 7a146e3e..978875dd 100644
--- a/test_plans/userspace_ethtool_test_plan.rst
+++ b/test_plans/userspace_ethtool_test_plan.rst
@@ -53,7 +53,7 @@  Notice:: On Intel® Ethernet 700 Series, link detect need a physical link discon
     Port 0: Up
     Port 1: Up
 
-Change tester port link status to down and re-check link status::
+Change TG port link status to down and re-check link status::
 
     EthApp> link
     Port 0: Down
@@ -125,7 +125,7 @@  Recheck ring size by ringparam command::
      Rx Pending: 256 (256 max)
      Tx Pending: 2048 (4096 max)
 
-send packet by scapy on Tester::
+send packets by scapy on the TG::
 
    check tx/rx packets
    EthApp>  portstats 0
@@ -179,7 +179,7 @@  packets received and forwarded::
 
 Test case: Mtu config test
 ==========================
-Use "mtu" command to change port 0 mtu from default 1519 to 9000 on Tester's port.
+Use "mtu" command to change port 0 mtu from default 1519 to 9000 on TG's port.
 
 Send packet size over 1519 and check that packet will be detected as error::
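
Such an oversized frame could be sketched in scapy as follows; the MAC and interface are placeholders, and 2000 is just any length above 1519::

    # about 2000 bytes on the wire, above the default 1519-byte mtu
    from scapy.all import Ether, IP, Raw, sendp

    sendp(Ether(dst="$sut_mac")/IP()/Raw("x"*2000), iface="TG_intf")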
 
@@ -197,12 +197,12 @@  Test Case: Pause tx/rx test(performance test)
 
 Enable port 0 Rx pause frame and then create two packets flows on IXIA port.
 One flow is 100000 normally packet and the second flow is pause frame.
-Check that dut's port 0 Rx speed dropped status. For example, 82599 will drop
+Check that the NIC's port 0 Rx speed has dropped. For example, 82599 will drop
 from 14.8Mpps to 7.49Mpps::
 
     EthApp> pause 0 rx
 
-Use "pause" command to print dut's port pause status, check that dut's port 0 rx
+Use "pause" command to print NIC's port pause status, check that NIC's port 0 rx
 has been paused::
 
     EthApp> pause 0
diff --git a/test_plans/veb_switch_test_plan.rst b/test_plans/veb_switch_test_plan.rst
index c7f6e2cc..288f54fd 100644
--- a/test_plans/veb_switch_test_plan.rst
+++ b/test_plans/veb_switch_test_plan.rst
@@ -35,7 +35,7 @@  switch. It's similar as 82599's SRIOV switch.
 Prerequisites for VEB testing
 =============================
 
-1. Get the pci device id of DUT, for example::
+1. Get the pci device id of NIC ports, for example::
 
       ./dpdk-devbind.py --st
       0000:05:00.0 'Ethernet Controller X710 for 10GbE SFP+' if=ens785f0 drv=i40e
@@ -222,7 +222,7 @@  Details:
    Check if VF1 can get the packets, so PF->VF1 is working.
    Check the packet content is not corrupted.
 
-3. tester->vf
+3. TG->vf
    PF, launch testpmd::
 
     ./<build_target>/app/dpdk-testpmd -c 0xf -n 4 --socket-mem 1024,1024 -a 0000:05:00.0 --file-prefix=test1 -- -i
@@ -238,8 +238,8 @@  Details:
     testpmd>set promisc all off
     testpmd>start
 
-   Send 100 packets with VF's MAC address from tester, check if VF1 can get
-   100 packets, so tester->VF1 is working. Check the packet content is not
+   Send 100 packets with VF's MAC address from TG, check if VF1 can get
+   100 packets, so TG->VF1 is working. Check the packet content is not
    corrupted.
 
 4. vf1->vf2
diff --git a/test_plans/vf_daemon_test_plan.rst b/test_plans/vf_daemon_test_plan.rst
index dcbe6dd0..e0c17edc 100644
--- a/test_plans/vf_daemon_test_plan.rst
+++ b/test_plans/vf_daemon_test_plan.rst
@@ -70,7 +70,7 @@  Test Case 1: Set VLAN insert for VF from PF
 
 2. Start VF0 testpmd, set it in mac forwarding mode and enable verbose output
 
-3. Send packet from tester to VF0 without vlan id
+3. Send packet from TG to VF0 without vlan id
 
 4. Stop VF0 testpmd and check VF0 can receive packet without any vlan id
 
@@ -80,7 +80,7 @@  Test Case 1: Set VLAN insert for VF from PF
 
 6. Start VF0 testpmd
 
-7. Send packet from tester to VF0 without vlan id
+7. Send packet from TG to VF0 without vlan id
 
 8. Stop VF0 testpmd and check VF0 can receive packet with configured vlan id
 
@@ -96,7 +96,7 @@  Test Case 2: Set VLAN strip for VF from PF
 
       testpmd> rx_vlan add id 0
 
-3. Send packet from tester to VF0 with configured vlan id
+3. Send packet from TG to VF0 with configured vlan id
 
 4. Stop VF0 testpmd and check VF0 can receive packet with configured vlan id
 
@@ -106,7 +106,7 @@  Test Case 2: Set VLAN strip for VF from PF
 
 6. Start VF0 testpmd
 
-7. Send packet from tester to VF0 with configured vlan id
+7. Send packet from TG to VF0 with configured vlan id
 
 8. Stop VF0 testpmd and check VF0 can receive packet without any vlan id
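
A minimal scapy sketch for steps 3 and 7, assuming vlan id 0 from the ``rx_vlan add id 0`` command above and placeholder names::

    # send to VF0 with the configured vlan id; with strip off the tag stays,
    # with strip on VF0 should see the packet without the tag
    from scapy.all import Ether, Dot1Q, IP, Raw, sendp

    sendp(Ether(dst="$vf0_mac")/Dot1Q(vlan=0)/IP()/Raw("x"*40), iface="TG_intf")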
 
@@ -135,7 +135,7 @@  Test Case 3: Set VLAN antispoof for VF from PF
 
      testpmd> set verbose 1
 
-6. Send packets with matching/non-matching/no vlan id on tester port
+6. Send packets with matching/non-matching/no vlan id on TG port
 
 7. Stop VF0 testpmd and check VF0 can receive and transmit packets with
    matching/non-matching/no vlan id
@@ -147,7 +147,7 @@  Test Case 3: Set VLAN antispoof for VF from PF
 
 9. Start VF0 testpmd
 
-10. Send packets with matching/non-matching/no vlan id on tester port
+10. Send packets with matching/non-matching/no vlan id on TG port
 
 11. Stop VF0 testpmd and check VF0 can receive all but only transmit
     packet with matching vlan id
@@ -165,7 +165,7 @@  Test Case 5: Set the MAC address for VF from PF
 
 3. Set testpmd in mac forwarding mode and enable verbose output
 
-4. Send packet from tester to VF0 configured address
+4. Send packet from TG to VF0 configured address
 
 5. Stop VF0 testpmd and check VF0 can receive packet
 
@@ -178,11 +178,11 @@  Test Case 6: Enable/disable tx loopback
 
 2. Set VF0 in rxonly forwarding mode and start testpmd
 
-3. Tcpdump on the tester port
+3. Tcpdump on the TG port
 
 4. Send 10 packets from VF1 to VF0
 
-5. Stop VF0 testpmd, check VF0 can't receive any packet but tester port
+5. Stop VF0 testpmd, check VF0 can't receive any packet but TG port
    could capture packet
 
 6. Enable tx loopback for VF0 from PF::
@@ -193,7 +193,7 @@  Test Case 6: Enable/disable tx loopback
 
 8. Send packet from VF1 to VF0
 
-9. Stop VF0 testpmd, check VF0 can receive packet, but tester port can't
+9. Stop VF0 testpmd, check VF0 can receive packet, but TG port can't
    capture packet
 
 
@@ -281,7 +281,7 @@  Test Case 10: enhancement to identify VF MTU change
 2. Set VF0 in mac forwarding mode and start testpmd
 
 3. Default mtu size is 1500, send one packet with length bigger than default
-   mtu size, such as 2000 from tester, check VF0 can receive but can't transmit
+   mtu size, such as 2000 from TG, check VF0 can receive but can't transmit
    packet
 
 4. Set VF0 mtu size as 3000, but need to stop then restart port to active mtu::
@@ -291,11 +291,11 @@  Test Case 10: enhancement to identify VF MTU change
       testpmd> port start all
       testpmd> start
 
-5. Send one packet with length 2000 from tester, check VF0 can receive and
+5. Send one packet with length 2000 from TG, check VF0 can receive and
    transmit packet
 
 6. Send one packet with length bigger than configured mtu size, such as 5000
-   from tester, check VF0 can receive but can't transmit packet
+   from TG, check VF0 can receive but can't transmit packet
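
The three sends in steps 3, 5 and 6 could be sketched like this; the sizes come from the text, the names are placeholders::

    # minimal helper; on-wire length is approximated by the raw payload size
    from scapy.all import Ether, IP, Raw, sendp

    def send_len(size):
        sendp(Ether(dst="$vf0_mac")/IP()/Raw("x"*size), iface="TG_intf")

    send_len(2000)  # step 3: above the default mtu 1500
    send_len(2000)  # step 5: below the configured mtu 3000
    send_len(5000)  # step 6: above the configured mtu 3000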
 
 
 Test Case 11: Enable/disable vlan tag forwarding to VSIs
@@ -307,7 +307,7 @@  Test Case 11: Enable/disable vlan tag forwarding to VSIs
 2. Start VF0 testpmd, add rx vlan id as random 1~4095, set it in mac forwarding
    mode and enable verbose output
 
-3. Send packet from tester to VF0 with vlan tag(vlan id should same as rx_vlan)
+3. Send packet from TG to VF0 with vlan tag(vlan id should same as rx_vlan)
 
 4. Stop VF0 testpmd and check VF0 can't receive vlan tag packet
 
@@ -317,7 +317,7 @@  Test Case 11: Enable/disable vlan tag forwarding to VSIs
 
 6. Start VF0 testpmd
 
-7. Send packet from tester to VF0 with vlan tag(vlan id should same as rx_vlan)
+7. Send packet from TG to VF0 with vlan tag(vlan id should same as rx_vlan)
 
 8. Stop VF0 testpmd and check VF0 can receive vlan tag packet
 
@@ -332,14 +332,14 @@  Test Case 12: Broadcast mode
 
        testpmd> set vf broadcast 0 0 off
 
-3. Send packets from tester with broadcast address, ff:ff:ff:ff:ff:ff, and check
+3. Send packets from TG with broadcast address, ff:ff:ff:ff:ff:ff, and check
    VF0 can not receive the packet
 
 4. Enable broadcast mode for VF0 from PF::
 
        testpmd> set vf broadcast 0 0 on
 
-5. Send packets from tester with broadcast address, ff:ff:ff:ff:ff:ff, and check
+5. Send packets from TG with broadcast address, ff:ff:ff:ff:ff:ff, and check
    VF0 can receive the packet
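
A one-line scapy sketch for the broadcast packets in steps 3 and 5; the interface name is a placeholder::

    # broadcast destination: dropped with "set vf broadcast 0 0 off", accepted with it on
    from scapy.all import Ether, IP, Raw, sendp

    sendp(Ether(dst="ff:ff:ff:ff:ff:ff")/IP()/Raw("x"*40), iface="TG_intf")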
 
 
@@ -352,14 +352,14 @@  Test Case 13: Multicast mode
        testpmd> set vf promisc 0 0 off
        testpmd> set vf allmulti 0 0 off
 
-3. Send packet from tester to VF0 with multicast MAC, and check VF0 can not
+3. Send packet from TG to VF0 with multicast MAC, and check VF0 can not
    receive the packet
 
 4. Enable multicast mode for VF0 from PF::
 
        testpmd> set vf allmulti 0 0 on
 
-5. Send packet from tester to VF0 with multicast MAC, and check VF0 can receive
+5. Send packet from TG to VF0 with multicast MAC, and check VF0 can receive
    the packet
 
 
@@ -372,20 +372,20 @@  Test Case 14: Promisc mode
 
        testpmd>set vf promisc 0 0 off
 
-3. Send packet from tester to VF0 with random MAC, and check VF0 can not
+3. Send packet from TG to VF0 with random MAC, and check VF0 can not
    receive the packet
 
-4. Send packet from tester to VF0 with correct MAC, and check VF0 can receive
+4. Send packet from TG to VF0 with correct MAC, and check VF0 can receive
    the packet
 
 5. Enable promisc mode for VF from PF::
 
        testpmd>set vf promisc 0 0 on
 
-6. Send packet from tester to VF0 with random MAC, and the packet can be
+6. Send packet from TG to VF0 with random MAC, and the packet can be
    received by VF0
 
-7. Send packet from tester to VF0 with correct MAC, and the packet can be
+7. Send packet from TG to VF0 with correct MAC, and the packet can be
    received by VF0
 
 
@@ -399,20 +399,20 @@  Test Case 14: Set Vlan filter for VF from PF
 
        testpmd> rx_vlan add id port 0 vf 1
 
-4. Send packet from tester to VF0 with wrong vlan id to random MAC, check VF0
+4. Send packet from TG to VF0 with wrong vlan id to random MAC, check VF0
    can't receive packet
 
-5. Send packet from tester to VF0 with configured vlan id to random MAC, check
+5. Send packet from TG to VF0 with configured vlan id to random MAC, check
    VF0 can receive packet
 
 6. Remove vlan filter id for VF0 from PF::
 
        testpmd> rx_vlan rm id port 0 vf 1
 
-7. Send packet from tester to VF0 with wrong vlan id to random MAC, check VF0
+7. Send packet from TG to VF0 with wrong vlan id to random MAC, check VF0
    can receive packet
 
-8. Send packet from tester to VF0 with configured vlan id to random MAC, check
+8. Send packet from TG to VF0 with configured vlan id to random MAC, check
    VF0 can receive packet
 
 9. Send packet without vlan id to random MAC, check VF0 can receive packet
@@ -420,7 +420,7 @@  Test Case 14: Set Vlan filter for VF from PF
 Test Case 15: Ixgbe vf jumbo frame test
 =======================================
 1. Default mtu size is 1500, send one packet with length bigger than default
-   mtu size to VF0, such as 2000 from tester, check VF0 can't receive packet
+   mtu size to VF0, such as 2000 from TG, check VF0 can't receive packet
 
 2. Set VF0 mtu size as 3000, but need to stop then restart port to active mtu::
 
@@ -429,15 +429,15 @@  Test Case 15: Ixgbe vf jumbo frame test
       testpmd> port start all
       testpmd> start
 
-3. Send one packet with length 2000 from tester to VF0, check VF0 can receive packet
+3. Send one packet with length 2000 from TG to VF0, check VF0 can receive packet
 
 4. Send one packet with length bigger than configured mtu size to VF0,such as 4000
-   from tester, check VF0 can't receive packet
+   from TG, check VF0 can't receive packet
 
 5. Quit VF0 testpmd, restart VF0 testpmd, send one packet with length 2000 from
-   tester to VF0, check VF0 can receive packet
+   TG to VF0, check VF0 can receive packet
 
 6. send one packet with length bigger than configured mtu size to VF0, such as
-   5000 from tester, check VF0 can't receive packet
+   5000 from TG, check VF0 can't receive packet
 
 notes: only x550 and x540 support jumbo frames.
diff --git a/test_plans/vf_interrupt_pmd_test_plan.rst b/test_plans/vf_interrupt_pmd_test_plan.rst
index f468b491..ff99f803 100644
--- a/test_plans/vf_interrupt_pmd_test_plan.rst
+++ b/test_plans/vf_interrupt_pmd_test_plan.rst
@@ -18,7 +18,7 @@  interrupt.
 Prerequisites
 =============
 
-Each of the 10Gb Ethernet* ports of the DUT is directly connected in
+Each of the 10Gb Ethernet* ports of the SUT is directly connected in
 full-duplex to a different port of the peer traffic generator.
 
 Assume PF port PCI addresses is 0000:04:00.0, their
@@ -50,9 +50,9 @@  Test Case1: Check Interrupt for PF with vfio driver on ixgbe and i40e
 
     ./x86_64-native-linuxapp-gcc/examples/dpdk-l3fwd-power -l 1-3 -n 4 -- -P -p 0x01  --config '(0,0,2)'
 
-3. Send packet with packet generator to the pf NIC, check that thread core2 waked up::
+3. Send packet with traffic generator to the pf NIC, check that the thread on core2 is woken up::
 
-    sendp([Ether(dst='pf_mac')/IP()/UDP()/Raw(load='XXXXXXXXXXXXXXXXXX')], iface="tester_intf")
+    sendp([Ether(dst='pf_mac')/IP()/UDP()/Raw(load='XXXXXXXXXXXXXXXXXX')], iface="TG_intf")
 
     L3FWD_POWER: lcore 2 is waked up from rx interrupt on port 0 queue 0
 
@@ -74,9 +74,9 @@  Test Case2: Check Interrupt for PF with igb_uio driver on ixgbe and i40e
 
     ./x86_64-native-linuxapp-gcc/examples/dpdk-l3fwd-power -l 1-3 -n 4 -- -P -p 0x01  --config '(0,0,2)'
 
-3. Send packet with packet generator to the pf NIC, check that thread core2 waked up::
+3. Send packet with traffic generator to the pf NIC, check that the thread on core2 is woken up::
 
-    sendp([Ether(dst='pf_mac')/IP()/UDP()/Raw(load='XXXXXXXXXXXXXXXXXX')], iface="tester_intf")
+    sendp([Ether(dst='pf_mac')/IP()/UDP()/Raw(load='XXXXXXXXXXXXXXXXXX')], iface="TG_intf")
 
     L3FWD_POWER: lcore 2 is waked up from rx interrupt on port 0 queue 0
 
@@ -100,9 +100,9 @@  Test Case3: Check Interrupt for VF with vfio driver on ixgbe and i40e
 
     ./x86_64-native-linuxapp-gcc/examples/dpdk-l3fwd-power -l 1-3 -n 4 -- -P -p 0x01  --config '(0,0,2)'
 
-3. Send packet with packet generator to the pf NIC, check that thread core2 waked up::
+3. Send packet with traffic generator to the pf NIC, check that the thread on core2 is woken up::
 
-    sendp([Ether(dst='vf_mac')/IP()/UDP()/Raw(load='XXXXXXXXXXXXXXXXXX')], iface="tester_intf")
+    sendp([Ether(dst='vf_mac')/IP()/UDP()/Raw(load='XXXXXXXXXXXXXXXXXX')], iface="TG_intf")
 
     L3FWD_POWER: lcore 2 is waked up from rx interrupt on port 0 queue 0
 
@@ -148,9 +148,9 @@  Test Case4: VF interrupt pmd in VM with vfio-pci
 
     ./x86_64-native-linuxapp-gcc/examples/dpdk-l3fwd-power -l 1-3 -n 4 -- -P -p 0x01  --config '(0,0,2)'
 
-6. Send packet with packet generator to the VM, check that thread core2 waked up::
+6. Send packet with traffic generator to the VM, check that the thread on core2 is woken up::
 
-    sendp([Ether(dst='vf_mac')/IP()/UDP()/Raw(load='XXXXXXXXXXXXXXXXXX')], iface="tester_intf")
+    sendp([Ether(dst='vf_mac')/IP()/UDP()/Raw(load='XXXXXXXXXXXXXXXXXX')], iface="TG_intf")
 
     L3FWD_POWER: lcore 2 is waked up from rx interrupt on port 0 queue 0
 
@@ -158,7 +158,7 @@  Test Case4: VF interrupt pmd in VM with vfio-pci
 
     L3FWD_POWER: lcore 2 sleeps until interrupt triggers
 
-Test Case5: vf multi-queue interrupt with vfio-pci on i40e 
+Test Case5: vf multi-queue interrupt with vfio-pci on i40e
 ==========================================================
 
 1. Generate NIC VF, then bind it to vfio drvier::
@@ -176,7 +176,7 @@  Test Case5: vf multi-queue interrupt with vfio-pci on i40e
 3. Send UDP packets with random ip and dest mac = vf mac addr::
 
       for x in range(0,10):
-       sendp(Ether(src="00:00:00:00:01:00",dst="vf_mac")/IP(src='2.1.1.' + str(x),dst='2.1.1.5')/UDP()/"Hello!0",iface="tester_intf")
+       sendp(Ether(src="00:00:00:00:01:00",dst="vf_mac")/IP(src='2.1.1.' + str(x),dst='2.1.1.5')/UDP()/"Hello!0",iface="TG_intf")
 
 4. Check if threads on all cores have waked up::
 
@@ -187,7 +187,7 @@  Test Case5: vf multi-queue interrupt with vfio-pci on i40e
 
 Test Case6: VF multi-queue interrupt in VM with vfio-pci on i40e
 ================================================================
-    
+
 1. Generate NIC VF, then bind it to vfio drvier::
 
     echo 1 > /sys/bus/pci/devices/0000\:88:00.1/sriov_numvfs
@@ -223,7 +223,7 @@  Test Case6: VF multi-queue interrupt in VM with vfio-pci on i40e
 5. Send UDP packets with random ip and dest mac = vf mac addr::
 
     for x in range(0,10):
-     sendp(Ether(src="00:00:00:00:01:00",dst="vf_mac")/IP(src='2.1.1.' + str(x),dst='2.1.1.5')/UDP()/"Hello!0",iface="tester_intf")
+     sendp(Ether(src="00:00:00:00:01:00",dst="vf_mac")/IP(src='2.1.1.' + str(x),dst='2.1.1.5')/UDP()/"Hello!0",iface="TG_intf")
 
 6. Check if threads on core 0 to core 3 can be waked up in VM::
 
diff --git a/test_plans/vf_kernel_test_plan.rst b/test_plans/vf_kernel_test_plan.rst
index 67986463..322eb61a 100644
--- a/test_plans/vf_kernel_test_plan.rst
+++ b/test_plans/vf_kernel_test_plan.rst
@@ -9,16 +9,16 @@  VFD is SRIOV Policy Manager (daemon) running on the host allowing
 configuration not supported by kernel NIC driver, supports ixgbe and
 i40e NIC. Run on the host for policy decisions w.r.t. what a VF can and
 can not do to the PF. Only the DPDK PF would provide a callback to implement
-these features, the normal kernel drivers would not have the callback so 
-would not support the features. Allow passing information to application 
-controlling PF when VF message box event received such as those listed below, 
-so action could be taken based on host policy. Stop VM1 from asking for 
-something that compromises VM2. Use DPDK DPDK PF + kernel VF mode to verify 
-below features. 
+these features, the normal kernel drivers would not have the callback so
+would not support the features. Allow passing information to application
+controlling PF when VF message box event received such as those listed below,
+so action could be taken based on host policy. Stop VM1 from asking for
+something that compromises VM2. Use DPDK PF + kernel VF mode to verify
+below features.
 
 Test Case 1: Set up environment and load driver
 ===============================================
-1. Get the pci device id of DUT, load ixgbe driver to required version, 
+1. Get the pci device id of NIC ports, load ixgbe driver to the required version,
    take 82599 for example::
 
     rmmod ixgbe
@@ -27,10 +27,10 @@  Test Case 1: Set up environment and load driver
 2. Host PF in DPDK driver. Create VFs from PF with dpdk driver::
 
 	./usertools/dpdk-devbind.py -b igb_uio 05:00.0
-	echo 2 >/sys/bus/pci/devices/0000\:05\:00.0/max_vfs 
-	
+	echo 2 >/sys/bus/pci/devices/0000\:05\:00.0/max_vfs
+
 3. Check ixgbevf version and update ixgbevf to required version
-	
+
 4. Detach VFs from the host::
 
     rmmod ixgbevf
@@ -45,10 +45,10 @@  Test Case 2: Link
 Pre-environment::
 
   (1)Host one DPDK PF and create two VFs, pass through VF0 and VF1 to VM0,
-     start VM0 
+     start VM0
   (2)Load host DPDK driver and VM0 kernel driver
 
-Steps:  
+Steps:
 
 1. Enable multi-queues to start DPDK PF::
 
@@ -58,31 +58,31 @@  Steps:
 
 3. Link down kernel VF and expect VF link down
 
-4. Repeat above 2~3 for 100 times, expect no crash or core dump issues. 
+4. Repeat steps 2~3 above 100 times, expect no crash or core dump issues.
 
 
-Test Case 3: ping 
+Test Case 3: ping
 ==================
-Pre-environment:: 
+Pre-environment::
 
   (1)Establish link with link partner.
   (2)Host one DPDK PF and create two VFs, pass through VF0 and VF1 to VM0,
      start VM0
   (3)Load host DPDK driver and VM0 kernel driver
 
-Steps: 
+Steps:
 
 1. Ifconfig IP on VF0 and VF1
 
-2. Ifconfig IP on link partner PF, name as tester PF
+2. Ifconfig IP on link partner PF, name as TG PF
 
 3. Start inbound and outbound pings, check ping successfully.
 
-4. Link down the devx, stop the pings, link up the devx then restart the 
-   pings, check port could ping successfully. 
+4. Link down the devx, stop the pings, link up the devx then restart the
+   pings, check port could ping successfully.
 
 5. Repeat step 3~4 for 5 times
-   
+
 
 Test Case 4: reset
 ==================
@@ -93,22 +93,22 @@  Pre-environment::
      VM1, start VM0 and VM1
   (3)Load host DPDK driver and VM kernel driver
 
-Steps: 
+Steps:
 
 1. Check host testpmd and PF at link up status
 
-2. Link up VF0 in VM0 and VF1 in VM1 
+2. Link up VF0 in VM0 and VF1 in VM1
 
 3. Link down VF1 in VM1 and check no impact on VF0 status
 
-4. Unload VF1 kernel driver and expect no impact on VF0 
+4. Unload VF1 kernel driver and expect no impact on VF0
 
 5. Use tcpdump to dump packet on VF0
 
 6. Send packets to VF0 using IXIA or scapy tool, expect RX successfully
 
-7. Link down and up DPDK PF, ensure that the VF recovers and continues to 
-   receive packet. 
+7. Link down and up DPDK PF, ensure that the VF recovers and continues to
+   receive packet.
 
 8. Load VF1 kernel driver and expect no impact on VF0
 
@@ -123,13 +123,13 @@  Pre-environment::
     (2)Host one DPDK PF and create one VF, pass through VF0 to VM0, start VM0
    (3)Load host DPDK driver and VM0 kernel driver
 
-Steps: 
+Steps:
 
-1. Ifconfig IP on kernel VF0 
+1. Ifconfig IP on kernel VF0
 
-2. Ifconfig IP on link partner PF, name as tester PF
+2. Ifconfig IP on link partner PF, name as TG PF
 
-3. Kernel VF0 ping tester PF, tester PF ping kernel VF0
+3. Kernel VF0 ping TG PF, TG PF ping kernel VF0
 
 4. Add IPv6 on kernel VF0(e.g: ens3)::
 
@@ -154,9 +154,9 @@  Pre-environment::
     (2)Host one DPDK PF and create one VF, pass through VF0 to VM0, start VM0
     (3)Load host DPDK driver and VM0 kernel driver
 
-Steps: 
+Steps:
 
-1. Add random vlan id(0~4095) on kernel VF0(e.g: ens3), take vlan id 51 
+1. Add random vlan id(0~4095) on kernel VF0(e.g: ens3), take vlan id 51
    for example::
 
     modprobe 8021q
@@ -166,10 +166,10 @@  Steps:
 
     ls /proc/net/vlan
 
-3. Send packet from tester to VF MAC with not-matching vlan id, check the 
+3. Send packet from TG to VF MAC with not-matching vlan id, check the
    packet can't be received at the vlan device
 
-4. Send packet from tester to VF MAC with matching vlan id, check the 
+4. Send packet from TG to VF MAC with matching vlan id, check the
    packet can be received at the vlan device.
 
 5. Delete configured vlan device::
@@ -178,8 +178,8 @@  Steps:
 
 6. Check delete vlan id 51 successfully
 
-7. Send packet from tester to VF MAC with vlan id(51), check that the 
-   packet can’t be received at the VF. 
+7. Send packet from TG to VF MAC with vlan id(51), check that the
+   packet can’t be received at the VF.
 
 
 Test Case 7: Get packet statistic
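For reference, the matching and non-matching vlan packets used in steps 3, 4
and 7 of Test Case 6 above can be generated with scapy; a minimal sketch,
assuming a TG interface named ``tg_intf`` and a VF MAC of 00:11:22:33:44:55
(both hypothetical placeholders)::

    from scapy.all import Ether, Dot1Q, IP, Raw, sendp

    vf_mac = "00:11:22:33:44:55"   # replace with the real VF MAC
    # vlan 51 matches the configured vlan device, vlan 52 does not;
    # only the matching packets should show up on the vlan device.
    for vid in (51, 52):
        pkt = Ether(dst=vf_mac) / Dot1Q(vlan=vid) / IP() / Raw('x' * 64)
        sendp(pkt, iface="tg_intf", count=10, verbose=False)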
@@ -190,7 +190,7 @@  Pre-environment::
     (2)Host one DPDK PF and create one VF, pass through VF0 to VM0, start VM0
     (3)Load host DPDK driver and VM0 kernel driver
 
-Steps: 
+Steps:
 
 1. Send packet to kernel VF0 mac
 
@@ -207,14 +207,14 @@  Pre-environment::
     (2)Host one DPDK PF and create one VF, pass through VF0 to VM0, start VM0
     (3)Load host DPDK driver and VM0 kernel driver
 
-Steps: 
+Steps:
 
 1. Check DPDK PF and kernel VF mtu, normal is 1500
 
-2. Use scapy to send one packet with length as 2000 with DPDK PF MAC as 
+2. Use scapy to send one packet with length as 2000 with DPDK PF MAC as
    DST MAC, check that DPDK PF can't receive packet
 
-3. Use scapy to send one packet with length as 2000 with kernel VF MAC as 
+3. Use scapy to send one packet with length as 2000 with kernel VF MAC as
    DST MAC, check that Kernel VF can't receive packet
 
 4. Change DPDK PF mtu as 3000, check no confusion/crash on kernel VF::
@@ -223,19 +223,19 @@  Steps:
     Testpmd > port config mtu 0 3000
     Testpmd > port start all
 
-5. Use scapy to send one packet with length as 2000 with DPDK PF MAC as 
+5. Use scapy to send one packet with length as 2000 with DPDK PF MAC as
    DST MAC, check that DPDK PF can receive packet
 
 6. Change kernel VF mtu as 3000, check no confusion/crash on DPDK PF::
 
     ifconfig eth0 mtu 3000
 
-7. Use scapy to send one packet with length as 2000 with kernel VF MAC 
+7. Use scapy to send one packet with length as 2000 with kernel VF MAC
    as DST MAC, check kernel VF can receive packet
 
 Note:
-HW limitation on 82599, need add “--max-pkt-len=<length>” on testpmd to 
-set mtu value, all the VFs and PF share same MTU, the largest one takes 
+HW limitation on 82599, need add “--max-pkt-len=<length>” on testpmd to
+set mtu value, all the VFs and PF share same MTU, the largest one takes
 effect.
 
 
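For reference, the oversized packets in steps 2, 3, 5 and 7 can be built with
scapy; a minimal sketch, assuming a TG interface named ``tg_intf``
(hypothetical) and ``dst_mac`` replaced with the DPDK PF or kernel VF MAC
under test::

    from scapy.all import Ether, IP, Raw, sendp

    dst_mac = "00:11:22:33:44:55"  # replace with the PF or VF MAC
    # Pad the payload so the frame is 2000 bytes on the wire (FCS not
    # counted): above the default 1500 MTU, below the later 3000 MTU.
    payload = 'x' * (2000 - 14 - 20)   # frame minus Ether and IP headers
    sendp(Ether(dst=dst_mac) / IP() / Raw(payload), iface="tg_intf", count=1)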
@@ -248,34 +248,34 @@  Pre-environment::
     (3)Load host DPDK driver and VM0 kernel driver
 
 Steps:
- 
+
 1. Start DPDK PF, enable promisc mode, set rxonly forwarding
 
-2. Set up kernel VF tcpdump without -p parameter, without/with -p parameter 
+2. Set up kernel VF tcpdump without -p parameter, without/with -p parameter
    could enable/disable promisc mode::
 
     sudo tcpdump -i ens3 -n -e -vv
 
-3. Send packet from tester with random DST MAC, check the packet can be 
+3. Send packet from TG with random DST MAC, check the packet can be
    received by DPDK PF and kernel VF
 
 4. Disable DPDK PF promisc mode
 
-5. Set up kernel VF tcpdump with -p parameter, which means disable promisc 
+5. Set up kernel VF tcpdump with -p parameter, which means disable promisc
    mode::
 
    sudo tcpdump -i ens3 -n -e -vv -p
 
-6. Send packet from tester with random DST MAC, check the packet can't be 
+6. Send packet from TG with random DST MAC, check the packet can't be
    received by DPDK PF and kernel VF
 
-7. Send packet from tester to VF with correct DST MAC, check the packet 
+7. Send packet from TG to VF with correct DST MAC, check the packet
    can be received by kernel VF
 
-8. Send packet from tester to PF with correct DST MAC, check the packet 
+8. Send packet from TG to PF with correct DST MAC, check the packet
    can be received by DPDK PF
 
-Note: 
+Note:
 82599 NIC un-supports this case.
 
 
@@ -287,20 +287,20 @@  Pre-environment::
     (2)Host one DPDK PF and create one VF, pass through VF0 to VM0, start VM0
     (3)Load host DPDK driver and VM0 kernel driver
 
-Steps: 
+Steps:
 
-1. Verify kernel VF RSS using ethtool -"l" (lower case L) <devx> that the 
-   default RSS setting is equal to the number of CPUs in the system and 
-   that the maximum number of RSS queues displayed is correct for the DUT
+1. Verify kernel VF RSS using "ethtool -l <devx>" (lower case L) to check that the
+   default RSS setting is equal to the number of CPUs in the system and
+   that the maximum number of RSS queues displayed is correct for the NIC ports
 
-2. Run "ethtool -S <devx> | grep rx_bytes | column" to see the current 
+2. Run "ethtool -S <devx> | grep rx_bytes | column" to see the current
    queue count and verify that it is correct to step 1
 
-3. Send multi-threaded traffics to the DUT with a number of threads  
+3. Send multi-threaded traffic to the SUT with a number of threads
 
 4. Check kernel VF each queue can receive packets
 
-Note: 
+Note:
 82599 NIC un-supports this case.
 
 
@@ -311,10 +311,10 @@  Pre-environment::
     (1)Establish link with IXIA.
     (2)Host one DPDK PF and create two VFs, pass through VF0 and VF1 to VM0,
        start VM0
-    (3)Load host DPDK driver, VM0 DPDK driver and kernel driver 
+    (3)Load host DPDK driver, VM0 DPDK driver and kernel driver
 
 Steps:
- 
+
 1. Check DPDK testpmd and PF at link up status
 
 2. Bind kernel VF0 to igb_uio
@@ -327,11 +327,11 @@  Steps:
 
 6. Set up kernel VF1 tcpdump without -p parameter on promisc mode
 
-7. Create 2 streams on IXIA, set DST MAC as each VF MAC, transmit these 2 
-   streams at the same time, check DPDK VF0 and kernel VF1 can receive packet 
-   successfully 
+7. Create 2 streams on IXIA, set DST MAC as each VF MAC, transmit these 2
+   streams at the same time, check DPDK VF0 and kernel VF1 can receive packets
+   successfully
 
-8. Check DPDK VF0 and kernel VF1 don't impact each other and no performance 
+8. Check DPDK VF0 and kernel VF1 don't impact each other and no performance
    drop for 10 minutes
 
 
@@ -345,7 +345,7 @@  Pre-environment::
     (3)Load host DPDK driver, VM DPDK driver and kernel driver
 
 Steps:
- 
+
 1. Check DPDK testpmd and PF at link up status
 
 2. Bind kernel VF0, VF1 to igb_uio in VM0, bind kernel VF4 to igb_uio in VM1
@@ -354,13 +354,13 @@  Steps:
 
 4. Link up kernel VF2, VF3 in VM0, link up kernel VF5 in VM1
 
-5. Start DPDK VF0, VF1 in VM0 and VF4 in VM1, enable promisc mode and set 
+5. Start DPDK VF0, VF1 in VM0 and VF4 in VM1, enable promisc mode and set
    rxonly forwarding
 
-6. Set up kernel VF2, VF3 in VM0 and VF5 in VM1 tcpdump without -p parameter 
+6. Set up kernel VF2, VF3 in VM0 and VF5 in VM1 tcpdump without -p parameter
    on promisc mode
 
-7. Create 6 streams on IXIA, set DST MAC as each VF MAC, transmit 6 streams 
+7. Create 6 streams on IXIA, set DST MAC as each VF MAC, transmit 6 streams
    at the same time, expect RX successfully
 
 8. Link down DPDK VF0 and expect no impact on other VFs
@@ -371,7 +371,7 @@  Steps:
 
 11. Unload VF5 kernel driver and expect no impact on other VFs
 
-12. Reboot VM1 and expect no impact on VM0’s VFs 
+12. Reboot VM1 and expect no impact on VM0’s VFs
 
 
 Test Case 13: Load kernel driver stress
@@ -382,7 +382,7 @@  Pre-environment::
     (2)Load host DPDK driver and VM0 kernel driver
 
 Steps:
- 
+
 1. Check DPDK testpmd and PF at link up status
 
 2. Unload kernel VF0 driver
@@ -392,4 +392,4 @@  Steps:
 4. Write script to repeat step 2 and step 3 for 100 times stress test
 
 4. Check no error/crash and system work normally
-  
+
diff --git a/test_plans/vf_l3fwd_test_plan.rst b/test_plans/vf_l3fwd_test_plan.rst
index 1e3ef663..47f6054d 100644
--- a/test_plans/vf_l3fwd_test_plan.rst
+++ b/test_plans/vf_l3fwd_test_plan.rst
@@ -21,7 +21,7 @@  Prerequisites
     ::
 
       +------------------------------+
-      |  DUT           |  TESTER     |
+      |  SUT               |  TG     |
       +==============================+
       | NIC-1,Port-1  ---  TG,Port-1 |
       | NIC-2,Port-1  ---  TG,Port-2 |
@@ -33,7 +33,7 @@  Prerequisites
     ::
 
       +------------------------------+
-      |  DUT           |  TESTER     |
+      |  SUT               |  TG     |
       +==============================+
       | NIC-1,Port-1  ---  TG,Port-1 |
       | NIC-1,Port-2  ---  TG,Port-2 |
@@ -46,7 +46,7 @@  Prerequisites
     ::
 
       + -----------------------------+
-      |  DUT           |  TESTER     |
+      |  SUT               |  TG     |
       +==============================+
       | NIC-1,Port-1  ---  TG,Port-1 |
       | NIC-2,Port-1  ---  TG,Port-2 |
diff --git a/test_plans/vf_macfilter_test_plan.rst b/test_plans/vf_macfilter_test_plan.rst
index d623cf04..4a0c54cd 100644
--- a/test_plans/vf_macfilter_test_plan.rst
+++ b/test_plans/vf_macfilter_test_plan.rst
@@ -8,7 +8,7 @@  VF MAC Filter Tests
 Test Case 1: test_kernel_2pf_2vf_1vm_iplink_macfilter
 =====================================================
 
-1. Get the pci device id of DUT, for example::
+1. Get the pci device id of NIC ports, for example::
 
       ./usertools/dpdk-devbind.py -s
 
@@ -88,7 +88,7 @@  Test Case 1: test_kernel_2pf_2vf_1vm_iplink_macfilter
 Test Case 2: test_kernel_2pf_2vf_1vm_mac_add_filter
 ===================================================
 
-1. Get the pci device id of DUT, for example::
+1. Get the pci device id of NIC ports, for example::
 
       ./usertools/dpdk-devbind.py -s
 
@@ -178,7 +178,7 @@  Test Case 2: test_kernel_2pf_2vf_1vm_mac_add_filter
 Test Case 3: test_dpdk_2pf_2vf_1vm_mac_add_filter
 ===================================================
 
-1. Get the pci device id of DUT, bind them to igb_uio, for example::
+1. Get the pci device id of NIC ports, bind them to igb_uio, for example::
 
       ./usertools/dpdk-devbind.py -s
 
@@ -272,7 +272,7 @@  Test Case 3: test_dpdk_2pf_2vf_1vm_mac_add_filter
 Test Case 4: test_dpdk_2pf_2vf_1vm_iplink_macfilter
 ===================================================
 
-1. Get the pci device id of DUT, bind them to igb_uio, for example::
+1. Get the pci device id of NIC ports, bind them to igb_uio, for example::
 
       ./usertools/dpdk-devbind.py -s
 
diff --git a/test_plans/vf_offload_test_plan.rst b/test_plans/vf_offload_test_plan.rst
index 522fc017..e224e78c 100644
--- a/test_plans/vf_offload_test_plan.rst
+++ b/test_plans/vf_offload_test_plan.rst
@@ -86,7 +86,7 @@  Send packets with incorrect checksum,
 verify dpdk can rx it and report the checksum error,
 verify that the same number of packet are correctly received on the traffic
 generator side. And IPv4 checksum, TCP checksum, UDP checksum, SCTP checksum need
-be validated as pass by the tester.
+be validated as pass by the TG.
 
 The IPv4 source address will not be changed by testpmd.
 
@@ -125,7 +125,7 @@  Send packets with incorrect checksum,
 verify dpdk can rx it and report the checksum error,
 verify that the same number of packet are correctly received on the traffic
 generator side. And IPv4 checksum, TCP checksum, UDP checksum need
-be validated as pass by the tester.
+be validated as pass by the TG.
 
 The first byte of source IPv4 address will be increased by testpmd. The checksum
 is indeed recalculated by software algorithms.
@@ -133,18 +133,18 @@  is indeed recalculated by software algorithms.
 Prerequisites for TSO
 =====================
 
-The DUT must take one of the Ethernet controller ports connected to a port on another
-device that is controlled by the Scapy packet generator.
+The SUT must have one of the Ethernet controller ports connected to a port on another
+device that is controlled by the Scapy traffic generator.
 
 The Ethernet interface identifier of the port that Scapy will use must be known.
-On tester, all offload feature should be disabled on tx port, and start rx port capture::
+On TG, all offload features should be disabled on the tx port, and rx port capture started::
 
   ethtool -K <tx port> rx off tx off tso off gso off gro off lro off
   ip l set <tx port> up
   tcpdump -n -e -i <rx port> -s 0 -w /tmp/cap
 
 
-On DUT, run pmd with parameter "--enable-rx-cksum". Then enable TSO on tx port
+On SUT, run pmd with parameter "--enable-rx-cksum". Then enable TSO on tx port
 and checksum on rx port. The test commands is below::
 
   # Enable hw checksum on rx port
@@ -163,20 +163,20 @@  and checksum on rx port. The test commands is below::
 Test case: csum fwd engine, use TSO
 ===================================
 
-This test uses ``Scapy`` to send out one large TCP package. The dut forwards package
+This test uses ``Scapy`` to send out one large TCP packet. The SUT forwards the packet
 with TSO enable on tx port while rx port turns checksum on. After package send out
-by TSO on tx port, the tester receives multiple small TCP package.
+by TSO on tx port, the TG receives multiple small TCP packets.
 
-Turn off tx port by ethtool on tester::
+Turn off tx port by ethtool on TG::
 
   ethtool -K <tx port> rx off tx off tso off gso off gro off lro off
   ip l set <tx port> up
 
-Capture package rx port on tester::
+Capture package rx port on TG::
 
   tcpdump -n -e -i <rx port> -s 0 -w /tmp/cap
 
-Launch the userland ``testpmd`` application on DUT as follows::
+Launch the userland ``testpmd`` application on SUT as follows::
 
   testpmd> set verbose 1
   # Enable hw checksum on rx port
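For reference, the single large TCP packet this case relies on can be
generated with scapy; a minimal sketch, assuming a TG tx interface named
``tx_intf`` (hypothetical), ``sut_mac`` replaced with the SUT rx port MAC,
and a TG link MTU large enough to carry the frame::

    from scapy.all import Ether, IP, TCP, Raw, sendp

    sut_mac = "00:11:22:33:44:55"  # replace with the SUT rx port MAC
    # One TCP packet with a payload far above the MSS; with TSO enabled
    # on the SUT tx port it should arrive at the TG rx port split into
    # multiple MSS-sized segments.
    pkt = Ether(dst=sut_mac) / IP() / TCP() / Raw('x' * 5000)
    sendp(pkt, iface="tx_intf", count=1)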
diff --git a/test_plans/vf_packet_rxtx_test_plan.rst b/test_plans/vf_packet_rxtx_test_plan.rst
index a3f979ed..b1679157 100644
--- a/test_plans/vf_packet_rxtx_test_plan.rst
+++ b/test_plans/vf_packet_rxtx_test_plan.rst
@@ -10,7 +10,7 @@  VF Packet RxTX Tests
 Test Case 1: VF_packet_IO_kernel_PF_dpdk_VF
 ===========================================
 
-1. Got the pci device id of DUT, for example::
+1. Got the pci device id of NIC ports, for example::
 
       ./usertools/dpdk-devbind.py -s
 
@@ -73,7 +73,7 @@  Test Case 1: VF_packet_IO_kernel_PF_dpdk_VF
       testpmd> set fwd mac
       testpmd> start
 
-6. Get mac address of one VF and use it as dest mac, using scapy to send 2000 random packets from tester,
+6. Get mac address of one VF and use it as dest mac, using scapy to send 2000 random packets from TG,
    verify the packets can be received by one VF and can be forward to another VF correctly.
 
 
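For reference, the 2000 random packets in step 6 can be generated with scapy;
a minimal sketch, assuming a TG interface named ``tg_intf`` (hypothetical) and
``vf_mac`` replaced with the VF MAC obtained in this step::

    from scapy.all import Ether, IP, UDP, Raw, RandIP, sendp

    vf_mac = "00:11:22:33:44:55"   # replace with the VF MAC
    # RandIP() is re-sampled for every packet sent, so the 2000 packets
    # carry different source addresses.
    pkt = Ether(dst=vf_mac) / IP(src=RandIP()) / UDP() / Raw('x' * 60)
    sendp(pkt, iface="tg_intf", count=2000, verbose=False)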
@@ -81,7 +81,7 @@  Test Case 1: VF_packet_IO_kernel_PF_dpdk_VF
 Test Case 2: VF_packet_IO_dpdk_PF_dpdk_VF
 ===========================================
 
-1. Got the pci device id of DUT, for example::
+1. Got the pci device id of NIC ports, for example::
 
       ./usertools/dpdk-devbind.py -s
 
@@ -142,7 +142,7 @@  Test Case 2: VF_packet_IO_dpdk_PF_dpdk_VF
       testpmd> set fwd mac
       testpmd> start
 
-7. Get mac address of one VF and use it as dest mac, using scapy to send 2000 random packets from tester,
+7. Get mac address of one VF and use it as dest mac, using scapy to send 2000 random packets from TG,
    verify the packets can be received by one VF and can be forward to another VF correctly.
 
 Test Case 3: pf dpdk vf reset
@@ -150,7 +150,7 @@  Test Case 3: pf dpdk vf reset
 this case pf in dpdk
 ===========================================
 
-1. Got the pci device id of DUT, for example::
+1. Got the pci device id of NIC ports, for example::
 
       ./usertools/dpdk-devbind.py -s
 
@@ -213,14 +213,14 @@  this case pf in dpdk
 
       testpmd>clear port stats all
 
-10. Tester loop send packet to vf0 on vm0
+10. TG loops sending packets to vf0 on vm0
 
 11. On vm1 loop start stop port 1000 times::
 
       testpmd>port stop all
       testpmd>port start all
 
-12. Tester stop send packet
+12. TG stops sending packets
 
 13. On vm0,check port stats,verify vf0 vf1 can receive packet ,no error
 
@@ -230,7 +230,7 @@  Test Case 4: pf kernel vf reset
 this case pf in kernel
 ===========================================
 
-1. Got the pci device id of DUT, for example::
+1. Got the pci device id of NIC ports, for example::
 
       ./usertools/dpdk-devbind.py -s
 
@@ -288,13 +288,13 @@  this case pf in kernel
 
         testpmd>clear port stats all
 
-9. Tester loop send packet to vf0 on vm0
+9. TG loops sending packets to vf0 on vm0
 
 10. On vm1 loop start stop port 1000 times::
 
       testpmd>port stop all
       testpmd>port start all
 
-11. Tester stop send packet
+11. TG stops sending packets
 
 12. On vm0,check port stats,verify vf0 vf1 can receive packet ,no error
diff --git a/test_plans/vf_pf_reset_test_plan.rst b/test_plans/vf_pf_reset_test_plan.rst
index 31ca7238..b626738d 100644
--- a/test_plans/vf_pf_reset_test_plan.rst
+++ b/test_plans/vf_pf_reset_test_plan.rst
@@ -13,8 +13,8 @@  Prerequisites
 1. Hardware:
 
    * Intel® Ethernet 700 Series 4*10G NIC (driver: i40e)
-   * tester: ens3f0
-   * dut: ens5f0(pf0), ens5f1(pf1)
+   * TG: ens3f0
+   * SUT: ens5f0(pf0), ens5f1(pf1)
    * ens3f0 connect with ens5f0 by cable
    * the status of ens5f1 is linked
 
@@ -32,7 +32,7 @@  Prerequisites
 Test Case 1: vf reset -- create two vfs on one pf
 =================================================
 
-1. Get the pci device id of DUT, for example::
+1. Get the pci device id of NIC ports, for example::
 
      ./usertools/dpdk-devbind.py -s
 
@@ -75,7 +75,7 @@  Test Case 1: vf reset -- create two vfs on one pf
    The status are not different from the default value.
 
 6. Get mac address of one VF and use it as dest mac, using scapy to
-   send 1000 random packets from tester, verify the packets can be received
+   send 1000 random packets from TG, verify the packets can be received
    by one VF and can be forward to another VF correctly::
 
      scapy
@@ -86,14 +86,14 @@  Test Case 1: vf reset -- create two vfs on one pf
 
      ifconfig ens5f0 down
 
-   Send the same 1000 packets with scapy from tester,
+   Send the same 1000 packets with scapy from TG,
    the vf cannot receive any packets, including vlan=0 and vlan=1
 
 8. Set pf up::
 
      ifconfig ens5f0 up
 
-   Send the same 1000 packets with scapy from tester, verify the packets can be
+   Send the same 1000 packets with scapy from TG, verify the packets can be
    received by one VF and can be forward to another VF correctly.
 
 9. Reset the vfs, run the command::
@@ -104,7 +104,7 @@  Test Case 1: vf reset -- create two vfs on one pf
      testpmd> port start all
      testpmd> start
 
-   Send the same 1000 packets with scapy from tester, verify the packets can be
+   Send the same 1000 packets with scapy from TG, verify the packets can be
    received by one VF and can be forward to another VF correctly,
    check the port info::
 
@@ -148,7 +148,7 @@  Test Case 2: vf reset -- create two vfs on one pf, run testpmd separately
      testpmd> set fwd rxonly
      testpmd> start
 
-5. Send packets with scapy from tester::
+5. Send packets with scapy from TG::
 
      sendp([Ether(dst="00:11:22:33:44:11")/IP()/Raw('x'*1000)], \
      iface="ens3f0",count=1000)
@@ -186,7 +186,7 @@  Test Case 3: vf reset -- create one vf on each pf
      testpmd> set fwd mac
      testpmd> start
 
-5. Send packets with scapy from tester::
+5. Send packets with scapy from TG::
 
      sendp([Ether(dst="00:11:22:33:44:11")/IP()/Raw('x'*1000)], \
      iface="ens3f0",count=1000)
@@ -215,7 +215,7 @@  Test Case 4: vlan rx restore -- vf reset all ports
      testpmd> rx_vlan add 1 1
      testpmd> start
 
-   Send packets with scapy from tester::
+   Send packets with scapy from TG::
 
      sendp([Ether(dst="00:11:22:33:44:11")/IP()/Raw('x'*1000)], \
      iface="ens3f0",count=1000)
@@ -227,14 +227,14 @@  Test Case 4: vlan rx restore -- vf reset all ports
      iface="ens3f0",count=1000)
 
    vfs can receive the packets and forward it.
-   Send packets with scapy from tester::
+   Send packets with scapy from TG::
 
      sendp([Ether(dst="00:11:22:33:44:11")/Dot1Q(vlan=2)/IP()/Raw('x'*1000)], \
      iface="ens3f0",count=1000)
 
    vf0 cannot receive any packets.
 
-3. Reset pf, don't reset vf, send the packets in step2 from tester,
+3. Reset pf, don't reset vf, send the packets in step2 from TG,
    vfs can receive the packets and forward it.
 
 4. Reset both vfs::
@@ -245,9 +245,9 @@  Test Case 4: vlan rx restore -- vf reset all ports
      testpmd> port start all
      testpmd> start
 
-   Send the packets in step2 from tester,
+   Send the packets in step2 from TG,
    vfs can receive the packets and forward it.
-   Send packets with scapy from tester::
+   Send packets with scapy from TG::
 
      sendp([Ether(dst="00:11:22:33:44:11")/Dot1Q(vlan=2)/IP()/Raw('x'*1000)], \
      iface="ens3f0",count=1000)
@@ -270,7 +270,7 @@  test Case 5: vlan rx restore -- vf reset one port
      testpmd> rx_vlan add 1 1
      testpmd> start
 
-   Send packets with scapy from tester::
+   Send packets with scapy from TG::
 
      sendp([Ether(dst="00:11:22:33:44:11")/IP()/Raw('x'*1000)], \
      iface="ens3f0",count=1000)
@@ -283,7 +283,7 @@  test Case 5: vlan rx restore -- vf reset one port
 
    vfs can receive the packets and forward it.
 
-3. Pf reset, then reset vf0, send packets from tester::
+3. Pf reset, then reset vf0, send packets from TG::
 
      testpmd> stop
      testpmd> port stop 0
@@ -296,7 +296,7 @@  test Case 5: vlan rx restore -- vf reset one port
      iface="ens3f0",count=1000)
 
    vfs can receive and forward the packets.
-   Send packets from tester::
+   Send packets from TG::
 
      sendp([Ether(dst="00:11:22:33:44:12")/IP()/Raw('x'*1000)], \
      iface="ens3f0",count=1000)
@@ -339,7 +339,7 @@  Test Case 6: vlan rx restore -- create one vf on each pf
      testpmd> set fwd mac
      testpmd> start
 
-4. Send packets with scapy from tester::
+4. Send packets with scapy from TG::
 
      sendp([Ether(dst="00:11:22:33:44:11")/IP()/Raw('x'*1000)], \
      iface="ens3f0",count=1000)
@@ -347,7 +347,7 @@  Test Case 6: vlan rx restore -- create one vf on each pf
      iface="ens3f0",count=1000)
 
    vfs can forward the packets normally.
-   Send packets with scapy from tester::
+   Send packets with scapy from TG::
 
      sendp([Ether(dst="00:11:22:33:44:11")/Dot1Q(vlan=2)/IP()/Raw('x'*1000)], \
      iface="ens3f0",count=1000)
@@ -361,20 +361,20 @@  Test Case 6: vlan rx restore -- create one vf on each pf
 
    vf0 can receive the packets, but vf1 can't transmit the packets.
 
-5. Reset pf, don't reset vf, send packets from tester::
+5. Reset pf, don't reset vf, send packets from TG::
 
      sendp([Ether(dst="00:11:22:33:44:11")/IP()/Raw('x'*1000)], \
      iface="ens3f0",count=1000)
 
    vf0 can receive the packets, but vf1 can't transmit the packets.
-   Send packets from tester::
+   Send packets from TG::
 
      sendp([Ether(dst="00:11:22:33:44:11")/Dot1Q(vlan=1)/IP()/Raw('x'*1000)], \
      iface="ens3f0",count=1000)
 
    vfs can forward the packets normally.
 
-4. Reset both vfs, send packets from tester::
+4. Reset both vfs, send packets from TG::
 
      testpmd> stop
      testpmd> port stop all
@@ -385,7 +385,7 @@  Test Case 6: vlan rx restore -- create one vf on each pf
      iface="ens3f0",count=1000)
 
    vf0 can receive the packets, but vf1 can't transmit the packets.
-   Send packets from tester::
+   Send packets from TG::
 
      sendp([Ether(dst="00:11:22:33:44:11")/Dot1Q(vlan=1)/IP()/Raw('x'*1000)], \
      iface="ens3f0",count=1000)
@@ -414,7 +414,7 @@  Test Case 7: vlan tx restore
      testpmd> tx_vlan set 1 51
      testpmd> start
 
-4. Send packets with scapy from tester::
+4. Send packets with scapy from TG::
 
      sendp([Ether(dst="00:11:22:33:44:11")/IP()/Raw('x'*18)], \
      iface="ens3f0",count=1)
@@ -427,7 +427,7 @@  Test Case 7: vlan tx restore
 
 6. Reset the pf, then reset the two vfs,
    send the same packet with no vlan tag,
-   check packets received by tester, the packet is configured with vlan 51.
+   check packets received by TG, the packet is configured with vlan 51.
 
 
 test Case 8: MAC address restore
@@ -458,7 +458,7 @@  test Case 8: MAC address restore
      testpmd> set fwd mac
      testpmd> start
 
-6. Send packets with scapy from tester::
+6. Send packets with scapy from TG::
 
      sendp([Ether(dst="00:11:22:33:44:11")/IP()/Raw('x'*1000)], \
      iface="ens3f0",count=1000)
@@ -525,7 +525,7 @@  test Case 9: vf reset (two vfs passed through to one VM)
      testpmd> set fwd mac
      testpmd> start
 
-6. Send packets with scapy from tester::
+6. Send packets with scapy from TG::
 
      sendp([Ether(dst="00:11:22:33:44:11")/IP()/Raw('x'*1000)], \
      iface="ens3f0",count=1000)
@@ -611,7 +611,7 @@  test Case 10: vf reset (two vfs passed through to two VM)
     testpmd> set fwd rxonly
     testpmd> start
 
-6. Send packets with scapy from tester::
+6. Send packets with scapy from TG::
 
      sendp([Ether(dst="00:11:22:33:44:11")/IP()/Raw('x'*1000)], \
      iface="ens3f0",count=1000)
diff --git a/test_plans/vf_port_start_stop_test_plan.rst b/test_plans/vf_port_start_stop_test_plan.rst
index ce42008c..62f0a308 100644
--- a/test_plans/vf_port_start_stop_test_plan.rst
+++ b/test_plans/vf_port_start_stop_test_plan.rst
@@ -11,7 +11,7 @@  Prerequisites
 
 Create Two VF interfaces from two kernel PF interfaces, and then attach them to VM. Suppose PF is 0000:04:00.0. Generate 2VFs using commands below and make them in pci-stub mods.
 
-1. Get the pci device id of DUT::
+1. Get the pci device id of NIC ports::
 
     ./dpdk_nic_bind.py --st
     0000:04:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection' if=ens261f0 drv=ixgbe unused=igb_uio
@@ -111,7 +111,7 @@  Create Two VF interfaces from two kernel PF interfaces, and then attach them to
 Test Case: port start/stop
 ==========================
 
-Start send packets from tester , then start/stop ports several times ,verify if it running right.
+Start sending packets from TG, then start/stop ports several times, and verify that forwarding keeps running correctly.
 
 Commands could be used to start/stop ports refer to below:
 
diff --git a/test_plans/vf_rss_test_plan.rst b/test_plans/vf_rss_test_plan.rst
index 846de2d7..d4894445 100644
--- a/test_plans/vf_rss_test_plan.rst
+++ b/test_plans/vf_rss_test_plan.rst
@@ -11,7 +11,7 @@  Support configuring hash functions.
 Prerequisites
 -------------
 
-Each of the Ethernet ports of the DUT is directly connected in full-duplex
+Each of the Ethernet ports of the SUT is directly connected in full-duplex
 to a different port of the peer traffic generator.
 
 Network Traffic
@@ -62,7 +62,7 @@  The CPU IDs and the number of logical cores running the test in parallel can
 be manually set with the ``set corelist X,Y`` and the ``set nbcore N``
 interactive commands of the ``testpmd`` application.
 
-1. Got the pci device id of DUT, for example::
+1. Got the pci device id of NIC ports, for example::
 
      ./usertools/dpdk-devbind.py -s
 
diff --git a/test_plans/vf_single_core_perf_test_plan.rst b/test_plans/vf_single_core_perf_test_plan.rst
index 7b19eed0..4c2a6847 100644
--- a/test_plans/vf_single_core_perf_test_plan.rst
+++ b/test_plans/vf_single_core_perf_test_plan.rst
@@ -23,17 +23,17 @@  Prerequisites
 
     dpdk: git clone http://dpdk.org/git/dpdk
     scapy: http://www.secdev.org/projects/scapy/
-    dts (next branch): git clone http://dpdk.org/git/tools/dts, 
-                       then "git checkout next" 
-    Trex code: http://trex-tgn.cisco.com/trex/release/v2.84.tar.gz 
+    dts (next branch): git clone http://dpdk.org/git/tools/dts,
+                       then "git checkout next"
+    Trex code: http://trex-tgn.cisco.com/trex/release/v2.84.tar.gz
                (to be run in stateless Layer 2 mode, see section in
                 Getting Started Guide for more details)
     python-prettytable:
-        apt install python-prettytable (for ubuntu os) 
-        or dnf install python-prettytable (for fedora os). 
+        apt install python-prettytable (for ubuntu os)
+        or dnf install python-prettytable (for fedora os).
 
 3. Connect all the selected nic ports to traffic generator(IXIA,TREX,
-   PKTGEN) ports(TG ports)::
+   Scapy) ports(TG ports)::
 
     2 TG 25g  ports for Intel® Ethernet Network Adapter XXV710-DA2 ports
     4 TG 10g  ports for 4 82599/500 Series 10G ports
@@ -87,9 +87,9 @@  Test Case : Vf Single Core Performance Measurement
    |  1C/2T    |    64      |   2048  |  xxxxx Mpps |   xxx % |  xxxxxxx   Mpps     |
    +-----------+------------+---------+-------------+---------+---------------------+
 
-  Check throughput and compare it with the expected value. Case will raise failure 
+  Check throughput and compare it with the expected value. Case will raise failure
   if actual throughputs have more than 1Mpps gap from expected ones.
 
-Note : 
-   The values for the expected throughput may vary due to different platform and OS, 
-   and traffic generator, please correct threshold values accordingly. 
+Note :
+   The values for the expected throughput may vary due to different platform and OS,
+   and traffic generator, please correct threshold values accordingly.
diff --git a/test_plans/vf_smoke_test_plan.rst b/test_plans/vf_smoke_test_plan.rst
index 33a3273c..913d550f 100644
--- a/test_plans/vf_smoke_test_plan.rst
+++ b/test_plans/vf_smoke_test_plan.rst
@@ -29,7 +29,7 @@  Prerequisites
     CC=gcc meson -Denable_kmods=True -Dlibdir=lib --default-library=static x86_64-native-linuxapp-gcc
     ninja -C x86_64-native-linuxapp-gcc
 
-4. Get the pci device id of DUT, for example::
+4. Get the pci device id of NIC ports, for example::
 
     ./usertools/dpdk-devbind.py -s
 
@@ -143,6 +143,6 @@  Test Case 3: test reset RX/TX queues
 
 5. Check with ``show config rxtx`` that the configuration for these parameters changed.
 
-6. Run ``start`` again to restart the forwarding, then start packet generator to transmit
+6. Run ``start`` again to restart the forwarding, then start traffic generator to transmit
    and receive packets, and check if testpmd is able to receive and forward packets
    successfully.
diff --git a/test_plans/vf_vlan_test_plan.rst b/test_plans/vf_vlan_test_plan.rst
index c183b3d6..a097f09d 100644
--- a/test_plans/vf_vlan_test_plan.rst
+++ b/test_plans/vf_vlan_test_plan.rst
@@ -109,7 +109,7 @@  Test Case 3: VF port based vlan tx
      testpmd> set fwd mac
      testpmd> start
 
-3. Send packet from tester port1 and check packet received by tester port0::
+3. Send packet from TG port1 and check packet received by TG port0::
 
      Check port1 received packet with configured vlan 2
 
@@ -126,7 +126,7 @@  Test Case 3: VF tagged vlan tx
 
      testpmd> tx_vlan set 0 1
 
-3. Send packet from tester port1 and check packet received by tester port0::
+3. Send packet from TG port1 and check packet received by TG port0::
 
      Check port- received packet with configured vlan 1
 
@@ -178,18 +178,18 @@  Test case5: VF Vlan strip test
 
      testpmd> rx_vlan add 1 0
 
-3. Disable VF0 vlan strip and sniff packet on tester port1::
+3. Disable VF0 vlan strip and sniff packet on TG port1::
 
      testpmd> vlan set strip off 0
 
-4. Set packet from tester port0 with vlan 1 and check sniffed packet has vlan
+4. Set packet from TG port0 with vlan 1 and check sniffed packet has vlan
 
-5. Enable vlan strip on VF0 and sniff packet on tester port1::
+5. Enable vlan strip on VF0 and sniff packet on TG port1::
 
      testpmd> vlan set strip on 0
 
-6. Send packet from tester port0 with vlan 1 and check sniffed packet without vlan
+6. Send packet from TG port0 with vlan 1 and check sniffed packet without vlan
 
-7. Send packet from tester port0 with vlan 0 and check sniffed packet without vlan
+7. Send packet from TG port0 with vlan 0 and check sniffed packet without vlan
 
 8. Rerun with step 2-8 with random vlan and max vlan 4095
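For reference, the strip on/off check in steps 4, 6 and 7 can be scripted on
the TG with scapy; a minimal sketch, assuming TG port0 is ``tg_p0``, TG port1
is ``tg_p1`` (both hypothetical names) and ``vf_mac`` replaced with the VF0
MAC::

    import time
    from scapy.all import Ether, Dot1Q, IP, Raw, sendp, AsyncSniffer

    vf_mac = "00:11:22:33:44:55"   # replace with the VF0 MAC
    sniffer = AsyncSniffer(iface="tg_p1")
    sniffer.start()
    time.sleep(1)                  # let the capture come up first
    pkt = Ether(dst=vf_mac) / Dot1Q(vlan=1) / IP() / Raw('x' * 64)
    sendp(pkt, iface="tg_p0", count=1, verbose=False)
    time.sleep(2)
    cap = sniffer.stop()
    # With strip off the forwarded copy keeps its Dot1Q header; with
    # strip on the sniffed packet should carry no vlan layer.
    print([p.haslayer(Dot1Q) for p in cap])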
diff --git a/test_plans/vhost_cbdma_test_plan.rst b/test_plans/vhost_cbdma_test_plan.rst
index 08820a9c..f83c8be0 100644
--- a/test_plans/vhost_cbdma_test_plan.rst
+++ b/test_plans/vhost_cbdma_test_plan.rst
@@ -58,7 +58,7 @@  General set up
       CC=gcc meson --werror -Denable_kmods=True -Dlibdir=lib -Dexamples=all --default-library=x86_64-native-linuxapp-gcc
       ninja -C x86_64-native-linuxapp-gcc -j 110
 
-2. Get the PCI device ID and DMA device ID of DUT, for example, 0000:18:00.0 is PCI device ID, 0000:00:04.0, 0000:00:04.1 are DMA device IDs::
+2. Get the PCI device ID and DMA device ID of SUT, for example, 0000:18:00.0 is PCI device ID, 0000:00:04.0, 0000:00:04.1 are DMA device IDs::
 
       <dpdk dir># ./usertools/dpdk-devbind.py -s
 
@@ -78,8 +78,8 @@  Common steps
 ------------
 1. Bind 1 NIC port and CBDMA devices to vfio-pci::
 
-      <dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci <DUT port pci device id>
-      <dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci <DUT port DMA device id>
+      <dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci <SUT port pci device id>
+      <dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci <SUT port DMA device id>
 
       For example, Bind 1 NIC port and 2 CBDMA devices::
       ./usertools/dpdk-devbind.py -b vfio-pci 0000:00:18.0
@@ -121,7 +121,7 @@  Both iova as VA and PA mode have been tested.
       testpmd> set fwd mac
       testpmd> start
 
-4. Send imix packets [64,1518] from packet generator as common step2, and then check the throughput can get expected data::
+4. Send imix packets [64,1518] from traffic generator as common step2, and then check the throughput can get expected data::
 
       testpmd> show port stats all
 
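For reference, the imix traffic referenced throughout these cases mixes frame
sizes across the [64, 1518] range; real throughput runs would normally use a
hardware TG or TRex, but a minimal scapy sketch, assuming a TG interface named
``tg_intf`` (hypothetical) and ``nic_mac`` replaced with the NIC port MAC,
looks like::

    from scapy.all import Ether, IP, Raw, sendp

    nic_mac = "00:11:22:33:44:55"  # replace with the NIC port MAC
    # Frame sizes spanning the [64, 1518] imix range; payload length is
    # the frame size minus the 34 bytes of Ether and IP headers.
    pkts = [Ether(dst=nic_mac) / IP() / Raw('x' * (size - 34))
            for size in (64, 128, 256, 512, 1024, 1518)]
    sendp(pkts, iface="tg_intf", loop=1)   # loop until interrupted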
@@ -207,7 +207,7 @@  Both iova as VA and PA mode have been tested.
        testpmd> set fwd mac
        testpmd> start
 
-4. Send imix packets [64,1518] from packet generator as common step2, and check the throughput can get expected data::
+4. Send imix packets [64,1518] from traffic generator as common step2, and check the throughput can get expected data::
 
        testpmd> show port stats all
 
@@ -289,7 +289,7 @@  Both iova as VA and PA mode have been tested.
        testpmd> set fwd mac
        testpmd> start
 
-3. Send imix packets [64,1518] from packet generator as common step2, and check the throughput can get expected data::
+3. Send imix packets [64,1518] from traffic generator as common step2, and check the throughput can get expected data::
 
        testpmd> show port stats all
 
@@ -393,7 +393,7 @@  Both iova as VA and PA mode have been tested.
        testpmd> set fwd mac
        testpmd> start
 
-4. Send imix packets [64,1518] from packet generator as common step2, and check the throughput can get expected data::
+4. Send imix packets [64,1518] from traffic generator as common step2, and check the throughput can get expected data::
 
        testpmd> show port stats all
 
@@ -476,7 +476,7 @@  Both iova as VA and PA mode have been tested.
        testpmd> set fwd mac
        testpmd> start
 
-4. Send imix packets [64,1518] from packet generator as common step2, and check the throughput can get expected data::
+4. Send imix packets [64,1518] from traffic generator as common step2, and check the throughput can get expected data::
 
        testpmd> show port stats all
 
@@ -569,7 +569,7 @@  Both iova as VA and PA mode have been tested.
        testpmd> set fwd mac
        testpmd> start
 
-4. Send imix packets [64,1518] from packet generator as common step2, and check the throughput can get expected data::
+4. Send imix packets [64,1518] from traffic generator as common step2, and check the throughput can get expected data::
 
        testpmd> show port stats all
 
@@ -647,7 +647,7 @@  Both iova as VA and PA mode have been tested.
        testpmd> set fwd mac
        testpmd> start
 
-4. Send imix packets [64,1518] from packet generator as common step2, and check the throughput can get expected data::
+4. Send imix packets [64,1518] from traffic generator as common step2, and check the throughput can get expected data::
 
        testpmd> show port stats all
 
@@ -742,7 +742,7 @@  Both iova as VA and PA mode have been tested.
        testpmd> set fwd mac
        testpmd> start
 
-4. Send imix packets [64,1518] from packet generator as common step2, and check the throughput can get expected data::
+4. Send imix packets [64,1518] from traffic generator as common step2, and check the throughput can get expected data::
 
        testpmd> show port stats all
 
@@ -832,7 +832,7 @@  Both iova as VA and PA mode have been tested.
        testpmd> set fwd mac
        testpmd> start
 
-4. Send imix packets [64,1518] from packet generator as common step2, and check the throughput can get expected data::
+4. Send imix packets [64,1518] from traffic generator as common step2, and check the throughput can get expected data::
 
        testpmd> show port stats all
 
@@ -944,7 +944,7 @@  Both iova as VA and PA mode have been tested.
        testpmd> set fwd mac
        testpmd> start
 
-4. Send imix packets [64,1518] from packet generator as common step2, and check the throughput can get expected data::
+4. Send imix packets [64,1518] from traffic generator as common step2, and check the throughput can get expected data::
 
        testpmd> show port stats all
 
@@ -1035,7 +1035,7 @@  Both iova as VA and PA mode have been tested.
        testpmd> set fwd mac
        testpmd> start
 
-4. Send imix packets [64,1518] from packet generator as common step2, and check the throughput can get expected data::
+4. Send imix packets [64,1518] from traffic generator as common step2, and check the throughput can get expected data::
 
        testpmd> show port stats all
 
@@ -1136,7 +1136,7 @@  Both iova as VA and PA mode have been tested.
        testpmd> set fwd mac
        testpmd> start
 
-4. Send imix packets [64,1518] from packet generator as common step2, and check the throughput can get expected data::
+4. Send imix packets [64,1518] from traffic generator as common step2, and check the throughput can get expected data::
 
        testpmd> show port stats all
 
diff --git a/test_plans/vhost_user_live_migration_test_plan.rst b/test_plans/vhost_user_live_migration_test_plan.rst
index f39b52aa..08d2fbdd 100644
--- a/test_plans/vhost_user_live_migration_test_plan.rst
+++ b/test_plans/vhost_user_live_migration_test_plan.rst
@@ -14,7 +14,7 @@  Prerequisites
 HW setup
 
 1. Connect three ports to one switch, these three ports are from Host, Backup
-   host and tester. Ensure the tester can send packets out, then host/backup server ports
+   host and TG. Ensure the TG can send packets out, then host/backup server ports
    can receive these packets.
 2. Better to have 2 similar machine with the same CPU and OS.
 
@@ -108,11 +108,11 @@  On the backup server, run the vhost testpmd on the host and launch VM:
     host VM# testpmd>set verbose 1
     host VM# testpmd>start
 
-8. Send continuous packets with the physical port's mac(e.g: 90:E2:BA:69:C9:C9) from tester port::
+8. Send continuous packets with the physical port's mac(e.g: 90:E2:BA:69:C9:C9) from TG port::
 
-    tester# scapy
-    tester# p = Ether(dst="90:E2:BA:69:C9:C9")/IP()/UDP()/Raw('x'*20)
-    tester# sendp(p, iface="p5p1", inter=1, loop=1)
+    TG# scapy
+    TG# p = Ether(dst="90:E2:BA:69:C9:C9")/IP()/UDP()/Raw('x'*20)
+    TG# sendp(p, iface="p5p1", inter=1, loop=1)
 
 9. Check the virtio-pmd can receive the packet, then detach the session for retach on backup server::
 
@@ -207,12 +207,12 @@  On the backup server, run the vhost testpmd on the host and launch VM:
     host VM# testpmd>set verbose 1
     host VM# testpmd>start
 
-8. Start vhost testpmd on host and send continuous packets with the physical port's mac(e.g: 90:E2:BA:69:C9:C9) from tester port::
+8. Start vhost testpmd on host and send continuous packets with the physical port's mac(e.g: 90:E2:BA:69:C9:C9) from TG port::
 
     host# testpmd>start
-    tester# scapy
-    tester# p = Ether(dst="90:E2:BA:69:C9:C9")/IP()/UDP()/Raw('x'*20)
-    tester# sendp(p, iface="p5p1", inter=1, loop=1)
+    TG# scapy
+    TG# p = Ether(dst="90:E2:BA:69:C9:C9")/IP()/UDP()/Raw('x'*20)
+    TG# sendp(p, iface="p5p1", inter=1, loop=1)
 
 9. Check the virtio-pmd can receive packets, then detach the session for retach on backup server::
 
@@ -297,11 +297,11 @@  On the backup server, run the vhost testpmd on the host and launch VM:
     host VM# screen -S vm
     host VM# tcpdump -i eth0
 
-7. Send continuous packets with the physical port's mac(e.g: 90:E2:BA:69:C9:C9) from tester port::
+7. Send continuous packets with the physical port's mac(e.g: 90:E2:BA:69:C9:C9) from TG port::
 
-    tester# scapy
-    tester# p = Ether(dst="90:E2:BA:69:C9:C9")/IP()/UDP()/Raw('x'*20)
-    tester# sendp(p, iface="p5p1", inter=1, loop=1)
+    TG# scapy
+    TG# p = Ether(dst="90:E2:BA:69:C9:C9")/IP()/UDP()/Raw('x'*20)
+    TG# sendp(p, iface="p5p1", inter=1, loop=1)
 
 8. Check the virtio-net can receive the packet, then detach the session for retach on backup server::
 
@@ -385,11 +385,11 @@  On the backup server, run the vhost testpmd on the host and launch VM:
     host VM# screen -S vm
     host VM# tcpdump -i eth0
 
-7. Send continuous packets with the physical port's mac(e.g: 90:E2:BA:69:C9:C9) from tester port::
+7. Send continuous packets with the physical port's mac(e.g: 90:E2:BA:69:C9:C9) from TG port::
 
-    tester# scapy
-    tester# p = Ether(dst="90:E2:BA:69:C9:C9")/IP()/UDP()/Raw('x'*20)
-    tester# sendp(p, iface="p5p1", inter=1, loop=1)
+    TG# scapy
+    TG# p = Ether(dst="90:E2:BA:69:C9:C9")/IP()/UDP()/Raw('x'*20)
+    TG# sendp(p, iface="p5p1", inter=1, loop=1)
 
 8. Check the virtio-net can receive the packet, then detach the session for retach on backup server::
 
@@ -490,11 +490,11 @@  On the backup server, run the vhost testpmd on the host and launch VM:
     host VM# testpmd>set verbose 1
     host VM# testpmd>start
 
-8. Send continuous packets with the physical port's mac(e.g: 90:E2:BA:69:C9:C9) from tester port::
+8. Send continuous packets with the physical port's mac(e.g: 90:E2:BA:69:C9:C9) from TG port::
 
-    tester# scapy
-    tester# p = Ether(dst="90:E2:BA:69:C9:C9")/IP()/UDP()/Raw('x'*20)
-    tester# sendp(p, iface="p5p1", inter=1, loop=1)
+    TG# scapy
+    TG# p = Ether(dst="90:E2:BA:69:C9:C9")/IP()/UDP()/Raw('x'*20)
+    TG# sendp(p, iface="p5p1", inter=1, loop=1)
 
 9. Check the virtio-pmd can receive the packet, then detach the session for retach on backup server::
 
@@ -589,12 +589,12 @@  On the backup server, run the vhost testpmd on the host and launch VM:
     host VM# testpmd>set verbose 1
     host VM# testpmd>start
 
-8. Start vhost testpmd on host and send continuous packets with the physical port's mac(e.g: 90:E2:BA:69:C9:C9) from tester port::
+8. Start vhost testpmd on host and send continuous packets with the physical port's mac(e.g: 90:E2:BA:69:C9:C9) from TG port::
 
     host# testpmd>start
-    tester# scapy
-    tester# p = Ether(dst="90:E2:BA:69:C9:C9")/IP()/UDP()/Raw('x'*20)
-    tester# sendp(p, iface="p5p1", inter=1, loop=1)
+    TG# scapy
+    TG# p = Ether(dst="90:E2:BA:69:C9:C9")/IP()/UDP()/Raw('x'*20)
+    TG# sendp(p, iface="p5p1", inter=1, loop=1)
 
 9. Check the virtio-pmd can receive packets, then detach the session for retach on backup server::
 
@@ -679,11 +679,11 @@  On the backup server, run the vhost testpmd on the host and launch VM:
     host VM# screen -S vm
     host VM# tcpdump -i eth0
 
-7. Send continuous packets with the physical port's mac(e.g: 90:E2:BA:69:C9:C9) from tester port::
+7. Send continuous packets with the physical port's mac(e.g: 90:E2:BA:69:C9:C9) from TG port::
 
-    tester# scapy
-    tester# p = Ether(dst="90:E2:BA:69:C9:C9")/IP()/UDP()/Raw('x'*20)
-    tester# sendp(p, iface="p5p1", inter=1, loop=1)
+    TG# scapy
+    TG# p = Ether(dst="90:E2:BA:69:C9:C9")/IP()/UDP()/Raw('x'*20)
+    TG# sendp(p, iface="p5p1", inter=1, loop=1)
 
 8. Check the virtio-net can receive the packet, then detach the session for retach on backup server::
 
@@ -767,11 +767,11 @@  On the backup server, run the vhost testpmd on the host and launch VM:
     host VM# screen -S vm
     host VM# tcpdump -i eth0
 
-7. Send continuous packets with the physical port's mac(e.g: 90:E2:BA:69:C9:C9) from tester port::
+7. Send continuous packets with the physical port's mac(e.g: 90:E2:BA:69:C9:C9) from TG port::
 
-    tester# scapy
-    tester# p = Ether(dst="90:E2:BA:69:C9:C9")/IP()/UDP()/Raw('x'*20)
-    tester# sendp(p, iface="p5p1", inter=1, loop=1)
+    TG# scapy
+    TG# p = Ether(dst="90:E2:BA:69:C9:C9")/IP()/UDP()/Raw('x'*20)
+    TG# sendp(p, iface="p5p1", inter=1, loop=1)
 
 8. Check the virtio-net can receive the packet, then detach the session for retach on backup server::
 
diff --git a/test_plans/vhost_virtio_pmd_interrupt_cbdma_test_plan.rst b/test_plans/vhost_virtio_pmd_interrupt_cbdma_test_plan.rst
index fc203020..4b0a4d53 100644
--- a/test_plans/vhost_virtio_pmd_interrupt_cbdma_test_plan.rst
+++ b/test_plans/vhost_virtio_pmd_interrupt_cbdma_test_plan.rst
@@ -10,12 +10,12 @@  Description
 
 Virtio-pmd interrupt need test with l3fwd-power sample, small packets send from traffic generator
 to virtio-pmd side,check virtio-pmd cores can be wakeup status,and virtio-pmd cores should be
-sleep status after stop sending packets from traffic generator when cbdma enabled.This test plan 
+sleep status after stop sending packets from traffic generator when cbdma enabled. This test plan
 cover virtio 0.95, virtio 1.0 and virtio 1.1.
 
 ..Note:
 1.For packed virtqueue virtio-net test, need qemu version > 4.2.0 and VM kernel version > 5.1, and packed ring multi-queues not support reconnect in qemu yet.
-2.For split virtqueue virtio-net with multi-queues server mode test, need qemu version >= 5.2.0, dut to old qemu exist reconnect issue when multi-queues test.
+2.For split virtqueue virtio-net with multi-queues server mode test, need qemu version >= 5.2.0, due to a reconnect issue in old qemu with multi-queues test.
 3.DPDK local patch that about vhost pmd is needed when testing Vhost asynchronous data path with testpmd.
 
 Prerequisites
@@ -62,7 +62,7 @@  Test Case1: Basic virtio interrupt test with 16 queues and cbdma enabled
     -- -p 1 -P  --config '(0,0,0),(0,1,1),(0,2,2),(0,3,3)(0,4,4),(0,5,5),(0,6,6),(0,7,7)(0,8,8),(0,9,9),(0,10,10),(0,11,11)(0,12,12),(0,13,13),(0,14,14),(0,15,15)' \
     --no-numa  --parse-ptype
 
-5. Send random dest ip address packets to host nic with packet generator, packets will distribute to all queues, check l3fwd-power log that all related cores are waked up.
+5. Send random dest ip address packets to host nic with traffic generator, packets will distribute to all queues, check l3fwd-power log that all related cores are waked up.
 
 6. Change dest IP address to fixed ip, packets will distribute to 1 queue, check l3fwd-power log that only one related core is waked up.
 
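For reference, the random-destination traffic in step 5 (and the
fixed-destination traffic in step 6) can be generated with scapy; a minimal
sketch, assuming a TG interface named ``tg_intf`` (hypothetical) and
``nic_mac`` replaced with the host NIC MAC::

    from scapy.all import Ether, IP, UDP, RandIP, sendp

    nic_mac = "00:11:22:33:44:55"  # replace with the host NIC MAC
    # Random destination IPs let RSS spread packets over all 16 queues
    # (step 5); a fixed destination IP keeps them on one queue (step 6).
    pkt = Ether(dst=nic_mac) / IP(dst=RandIP()) / UDP()
    sendp(pkt, iface="tg_intf", count=1000, verbose=False)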
@@ -99,7 +99,7 @@  Test Case2: Basic virtio-1.0 interrupt test with 4 queues and cbdma enabled
 
     ./x86_64-native-linuxapp-gcc/examples/dpdk-l3fwd-power -c 0xf -n 4 --log-level='user1,7' -- -p 1 -P --config="(0,0,0),(0,1,1),(0,2,2),(0,3,3)" --no-numa --parse-ptype
 
-5. Send random dest ip address packets to host nic with packet generator, packets will distribute to all queues, check l3fwd-power log that all related cores are waked up.
+5. Send random dest ip address packets to host nic with traffic generator, packets will distribute to all queues, check l3fwd-power log that all related cores are waked up.
 
 6. Change dest IP address to fixed ip, packets will distribute to 1 queue, check l3fwd-power log that only one related core is waked up.
 
@@ -136,7 +136,7 @@  Test Case3: Packed ring virtio interrupt test with 16 queues and cbdma enabled
 
     ./x86_64-native-linuxapp-gcc/examples/dpdk-l3fwd-power -c 0x0ffff -n 4 --log-level='user1,7' -- -p 1 -P  --config '(0,0,0),(0,1,1),(0,2,2),(0,3,3)(0,4,4),(0,5,5),(0,6,6),(0,7,7)(0,8,8),(0,9,9),(0,10,10),(0,11,11)(0,12,12),(0,13,13),(0,14,14),(0,15,15)' --no-numa  --parse-ptype
 
-5. Send random dest ip address packets to host nic with packet generator, packets will distribute to all queues, check l3fwd-power log that all related cores are waked up.
+5. Send random dest ip address packets to host nic with traffic generator, packets will distribute to all queues, check l3fwd-power log that all related cores are waked up.
 
 6. Change dest IP address to fixed ip, packets will distribute to 1 queue, check l3fwd-power log that only one related core is waked up.
 
diff --git a/test_plans/vhost_virtio_pmd_interrupt_test_plan.rst b/test_plans/vhost_virtio_pmd_interrupt_test_plan.rst
index 17565504..c2b90512 100644
--- a/test_plans/vhost_virtio_pmd_interrupt_test_plan.rst
+++ b/test_plans/vhost_virtio_pmd_interrupt_test_plan.rst
@@ -50,7 +50,7 @@  Test Case 1: Basic virtio interrupt test with 4 queues
 
     ./x86_64-native-linuxapp-gcc/examples/dpdk-l3fwd-power -c 0xf -n 4 --log-level='user1,7' -- -p 1 -P --config="(0,0,0),(0,1,1),(0,2,2),(0,3,3)" --no-numa --parse-ptype
 
-5. Send random dest ip address packets to host nic with packet generator, packets will distribute to all queues, check l3fwd-power log that all related cores are waked up.
+5. Send random dest ip address packets to host nic with traffic generator, packets will distribute to all queues, check l3fwd-power log that all related cores are waked up.
 
 6. Change dest IP address to fixed ip, packets will distribute to 1 queue, check l3fwd-power log that only one related core is waked up.
 
@@ -86,7 +86,7 @@  Test Case 2: Basic virtio interrupt test with 16 queues
 
     ./x86_64-native-linuxapp-gcc/examples/dpdk-l3fwd-power -c 0x0ffff -n 4 --log-level='user1,7' -- -p 1 -P  --config '(0,0,0),(0,1,1),(0,2,2),(0,3,3)(0,4,4),(0,5,5),(0,6,6),(0,7,7)(0,8,8),(0,9,9),(0,10,10),(0,11,11)(0,12,12),(0,13,13),(0,14,14),(0,15,15)' --no-numa  --parse-ptype
 
-5. Send random dest ip address packets to host nic with packet generator, packets will distribute to all queues, check l3fwd-power log that all related cores are waked up.
+5. Send random dest ip address packets to host nic with traffic generator, packets will distribute to all queues, check l3fwd-power log that all related cores are waked up.
 
 6. Change dest IP address to fixed ip, packets will distribute to 1 queue, check l3fwd-power log that only one related core is waked up.
 
@@ -122,7 +122,7 @@  Test Case 3: Basic virtio-1.0 interrupt test with 4 queues
 
     ./x86_64-native-linuxapp-gcc/examples/dpdk-l3fwd-power -c 0xf -n 4 --log-level='user1,7' -- -p 1 -P --config="(0,0,0),(0,1,1),(0,2,2),(0,3,3)" --no-numa --parse-ptype
 
-5. Send random dest ip address packets to host nic with packet generator, packets will distribute to all queues, check l3fwd-power log that all related cores are waked up.
+5. Send random dest ip address packets to host nic with traffic generator, packets will distribute to all queues, check l3fwd-power log that all related cores are waked up.
 
 6. Change dest IP address to fixed ip, packets will distribute to 1 queue, check l3fwd-power log that only one related core is waked up.
 
@@ -158,7 +158,7 @@  Test Case 4: Packed ring virtio interrupt test with 16 queues
 
     ./x86_64-native-linuxapp-gcc/examples/dpdk-l3fwd-power -c 0x0ffff -n 4 --log-level='user1,7' -- -p 1 -P  --config '(0,0,0),(0,1,1),(0,2,2),(0,3,3)(0,4,4),(0,5,5),(0,6,6),(0,7,7)(0,8,8),(0,9,9),(0,10,10),(0,11,11)(0,12,12),(0,13,13),(0,14,14),(0,15,15)' --no-numa  --parse-ptype
 
-5. Send random dest ip address packets to host nic with packet generator, packets will distribute to all queues, check l3fwd-power log that all related cores are waked up.
+5. Send random dest ip address packets to host nic with traffic generator, packets will distribute to all queues, check l3fwd-power log that all related cores are waked up.
 
 6. Change dest IP address to fixed ip, packets will distribute to 1 queue, check l3fwd-power log that only one related core is waked up.
 
@@ -193,7 +193,7 @@  Test Case 5: Basic virtio interrupt test with 16 queues and cbdma enabled
 
     ./x86_64-native-linuxapp-gcc/examples/dpdk-l3fwd-power -c 0x0ffff -n 4 --log-level='user1,7' -- -p 1 -P  --config '(0,0,0),(0,1,1),(0,2,2),(0,3,3)(0,4,4),(0,5,5),(0,6,6),(0,7,7)(0,8,8),(0,9,9),(0,10,10),(0,11,11)(0,12,12),(0,13,13),(0,14,14),(0,15,15)' --no-numa  --parse-ptype
 
-5. Send random dest ip address packets to host nic with packet generator, packets will distribute to all queues, check l3fwd-power log that all related cores are waked up.
+5. Send random dest ip address packets to host nic with traffic generator, packets will distribute to all queues, check l3fwd-power log that all related cores are waked up.
 
 6. Change dest IP address to fixed ip, packets will distribute to 1 queue, check l3fwd-power log that only one related core is waked up.
 
@@ -228,7 +228,7 @@  Test Case 6: Basic virtio-1.0 interrupt test with 4 queues and cbdma enabled
 
     ./x86_64-native-linuxapp-gcc/examples/dpdk-l3fwd-power -c 0xf -n 4 --log-level='user1,7' -- -p 1 -P --config="(0,0,0),(0,1,1),(0,2,2),(0,3,3)" --no-numa --parse-ptype
 
-5. Send random dest ip address packets to host nic with packet generator, packets will distribute to all queues, check l3fwd-power log that all related cores are waked up.
+5. Send random dest ip address packets to host nic with traffic generator, packets will distribute to all queues, check l3fwd-power log that all related cores are waked up.
 
 6. Change dest IP address to fixed ip, packets will distribute to 1 queue, check l3fwd-power log that only one related core is waked up.
 
@@ -263,7 +263,7 @@  Test Case 7: Packed ring virtio interrupt test with 16 queues and cbdma enabled
 
     ./x86_64-native-linuxapp-gcc/examples/dpdk-l3fwd-power -c 0x0ffff -n 4 --log-level='user1,7' -- -p 1 -P  --config '(0,0,0),(0,1,1),(0,2,2),(0,3,3)(0,4,4),(0,5,5),(0,6,6),(0,7,7)(0,8,8),(0,9,9),(0,10,10),(0,11,11)(0,12,12),(0,13,13),(0,14,14),(0,15,15)' --no-numa  --parse-ptype
 
-5. Send random dest ip address packets to host nic with packet generator, packets will distribute to all queues, check l3fwd-power log that all related cores are waked up.
+5. Send random dest ip address packets to host nic with traffic generator, packets will distribute to all queues, check l3fwd-power log that all related cores are waked up.
 
 6. Change dest IP address to fixed ip, packets will distribute to 1 queue, check l3fwd-power log that only one related core is waked up.
 
diff --git a/test_plans/vhost_virtio_user_interrupt_cbdma_test_plan.rst b/test_plans/vhost_virtio_user_interrupt_cbdma_test_plan.rst
index ccb74438..bde9467d 100644
--- a/test_plans/vhost_virtio_user_interrupt_cbdma_test_plan.rst
+++ b/test_plans/vhost_virtio_user_interrupt_cbdma_test_plan.rst
@@ -7,7 +7,7 @@  vhost/virtio-user interrupt mode with cbdma test plan
 
 Virtio-user interrupt need test with l3fwd-power sample, small packets send from traffic generator
 to virtio side, check virtio-user cores can be wakeup status, and virtio-user cores should be sleep
-status after stop sending packets from traffic generator when CBDMA enabled.This test plan cover 
+status after stop sending packets from traffic generator when CBDMA is enabled. This test plan covers
 vhost-user as the backend.
 
 Test Case1: LSC event between vhost-user and virtio-user with split ring and cbdma enabled
@@ -52,11 +52,11 @@  flow: TG --> NIC --> Vhost --> Virtio
     ./x86_64-native-linuxapp-gcc/examples/dpdk-l3fwd-power -c 0xc000 -n 4 --log-level='user1,7' --no-pci --file-prefix=l3fwd-pwd \
     --vdev=virtio_user0,path=./vhost-net -- -p 1 -P --config="(0,0,14)" --parse-ptype
 
-3. Send packets with packet generator, check the virtio-user related core can be wakeup status.
+3. Send packets with traffic generator, check that the virtio-user related core is in wakeup status.

-4. Stop sending packets with packet generator, check virtio-user related core change to sleep status.
+4. Stop sending packets with traffic generator, check that the virtio-user related core changes to sleep status.

-5. Restart sending packets with packet generator, check virtio-user related core change to wakeup status again.
+5. Restart sending packets with traffic generator, check that the virtio-user related core changes to wakeup status again.
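One hedged way to script the wakeup/sleep check in steps 3-5 is to sample the utilization
of the lcore given in --config (14 here); psutil is an assumption of this sketch, not a
requirement of the plan::

    import psutil  # assumed available on the SUT; not mandated by this plan

    CORE = 14  # lcore taken from --config="(0,0,14)" in the command above

    def core_awake(threshold=90.0):
        """A polling (woken) l3fwd-power core sits near 100% utilization."""
        return psutil.cpu_percent(interval=1.0, percpu=True)[CORE] >= threshold

    print("wakeup" if core_awake() else "sleep")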
 
 Test Case3: LSC event between vhost-user and virtio-user with packed ring and cbdma enabled
 ===========================================================================================
@@ -100,9 +100,9 @@  flow: TG --> NIC --> Vhost --> Virtio
     ./x86_64-native-linuxapp-gcc/examples/dpdk-l3fwd-power -c 0xc000 -n 4 --log-level='user1,7' --no-pci --file-prefix=l3fwd-pwd \
     --vdev=virtio_user0,path=./vhost-net,packed_vq=1 -- -p 1 -P --config="(0,0,14)" --parse-ptype
 
-3. Send packets with packet generator, check the virtio-user related core can be wakeup status.
+3. Send packets with traffic generator, check that the virtio-user related core is in wakeup status.

-4. Stop sending packets with packet generator, check virtio-user related core change to sleep status.
+4. Stop sending packets with traffic generator, check that the virtio-user related core changes to sleep status.

-5. Restart sending packets with packet generator, check virtio-user related core change to wakeup status again.
+5. Restart sending packets with traffic generator, check that the virtio-user related core changes to wakeup status again.
 
diff --git a/test_plans/vhost_virtio_user_interrupt_test_plan.rst b/test_plans/vhost_virtio_user_interrupt_test_plan.rst
index d517893e..684dfc08 100644
--- a/test_plans/vhost_virtio_user_interrupt_test_plan.rst
+++ b/test_plans/vhost_virtio_user_interrupt_test_plan.rst
@@ -25,11 +25,11 @@  flow: TG --> NIC --> Vhost --> Virtio
     ./x86_64-native-linuxapp-gcc/examples/dpdk-l3fwd-power -c 0xc000 -n 4 --log-level='user1,7' --no-pci --file-prefix=l3fwd-pwd \
     --vdev=virtio_user0,path=./vhost-net -- -p 1 -P --config="(0,0,14)" --parse-ptype
 
-3. Send packets with packet generator, check the virtio-user related core can be wakeup status.
+3. Send packets with traffic generator, check that the virtio-user related core is in wakeup status.

-4. Stop sending packets with packet generator, check virtio-user related core change to sleep status.
+4. Stop sending packets with traffic generator, check that the virtio-user related core changes to sleep status.

-5. Restart sending packets with packet generator, check virtio-user related core change to wakeup status again.
+5. Restart sending packets with traffic generator, check that the virtio-user related core changes to wakeup status again.
 
 Test Case2: Split ring virtio-user interrupt test with vhost-net as backend
 ===========================================================================
@@ -95,11 +95,11 @@  flow: TG --> NIC --> Vhost --> Virtio
     ./x86_64-native-linuxapp-gcc/examples/dpdk-l3fwd-power -c 0xc000 -n 4 --log-level='user1,7' --no-pci --file-prefix=l3fwd-pwd \
     --vdev=virtio_user0,path=./vhost-net,packed_vq=1 -- -p 1 -P --config="(0,0,14)" --parse-ptype
 
-3. Send packets with packet generator, check the virtio-user related core can be wakeup status.
+3. Send packets with traffic generator, check that the virtio-user related core is in wakeup status.

-4. Stop sending packets with packet generator, check virtio-user related core change to sleep status.
+4. Stop sending packets with traffic generator, check that the virtio-user related core changes to sleep status.

-5. Restart sending packets with packet generator, check virtio-user related core change to wakeup status again.
+5. Restart sending packets with traffic generator, check that the virtio-user related core changes to wakeup status again.
 
 Test Case5: Packed ring virtio-user interrupt test with vhost-net as backend with
 =================================================================================
@@ -192,11 +192,11 @@  flow: TG --> NIC --> Vhost --> Virtio
     ./x86_64-native-linuxapp-gcc/examples/dpdk-l3fwd-power -c 0xc000 -n 4 --log-level='user1,7' --no-pci --file-prefix=l3fwd-pwd \
     --vdev=virtio_user0,path=./vhost-net -- -p 1 -P --config="(0,0,14)" --parse-ptype
 
-3. Send packets with packet generator, check the virtio-user related core can be wakeup status.
+3. Send packets with traffic generator, check that the virtio-user related core is in wakeup status.

-4. Stop sending packets with packet generator, check virtio-user related core change to sleep status.
+4. Stop sending packets with traffic generator, check that the virtio-user related core changes to sleep status.

-5. Restart sending packets with packet generator, check virtio-user related core change to wakeup status again.
+5. Restart sending packets with traffic generator, check that the virtio-user related core changes to wakeup status again.
 
 Test Case9: LSC event between vhost-user and virtio-user with packed ring and cbdma enabled
 ===========================================================================================
@@ -240,8 +240,8 @@  flow: TG --> NIC --> Vhost --> Virtio
     ./x86_64-native-linuxapp-gcc/examples/dpdk-l3fwd-power -c 0xc000 -n 4 --log-level='user1,7' --no-pci --file-prefix=l3fwd-pwd \
     --vdev=virtio_user0,path=./vhost-net,packed_vq=1 -- -p 1 -P --config="(0,0,14)" --parse-ptype
 
-3. Send packets with packet generator, check the virtio-user related core can be wakeup status.
+3. Send packets with traffic generator, check that the virtio-user related core is in wakeup status.

-4. Stop sending packets with packet generator, check virtio-user related core change to sleep status.
+4. Stop sending packets with traffic generator, check that the virtio-user related core changes to sleep status.

-5. Restart sending packets with packet generator, check virtio-user related core change to wakeup status again.
+5. Restart sending packets with traffic generator, check that the virtio-user related core changes to wakeup status again.
diff --git a/test_plans/virtio_event_idx_interrupt_cbdma_test_plan.rst b/test_plans/virtio_event_idx_interrupt_cbdma_test_plan.rst
index 93cdb350..e1574048 100644
--- a/test_plans/virtio_event_idx_interrupt_cbdma_test_plan.rst
+++ b/test_plans/virtio_event_idx_interrupt_cbdma_test_plan.rst
@@ -9,12 +9,12 @@  Description
 ===========
 
 This feature is to suppress interrupts for performance improvement, need compare
-interrupt times with and without virtio event idx enabled. This test plan test 
+interrupt times with and without virtio event idx enabled. This test plan tests
 virtio event idx interrupt with cbdma enabled. Also need cover driver reload test.
 
 ..Note:
 1.For packed virtqueue virtio-net test, need qemu version > 4.2.0 and VM kernel version > 5.1, and packed ring multi-queues not support reconnect in qemu yet.
-2.For split virtqueue virtio-net with multi-queues server mode test, need qemu version >= 5.2.0, dut to old qemu exist reconnect issue when multi-queues test.
+2.For split virtqueue virtio-net with multi-queues server mode test, need qemu version >= 5.2.0, due to old qemu having a reconnect issue in multi-queues test.
 3.DPDK local patch that about vhost pmd is needed when testing Vhost asynchronous data path with testpmd.
 
 Test flow
@@ -44,7 +44,7 @@  Test Case1: Split ring virtio-pci driver reload test with CBDMA enabled
      -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=on,csum=on,gso=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on \
      -vnc :12 -daemonize
 
-3. On VM1, set virtio device IP, send 10M packets from packet generator to nic then check virtio device can receive packets::
+3. On VM1, set virtio device IP, send 10M packets from traffic generator to nic then check virtio device can receive packets::
 
     ifconfig [ens3] 1.1.1.2      # [ens3] is the name of virtio-net
     tcpdump -i [ens3]
@@ -89,7 +89,7 @@  Test Case2: Wake up split ring virtio-net cores with event idx interrupt mode an
     ifconfig [ens3] 1.1.1.2           # [ens3] is the name of virtio-net
     ethtool -L [ens3] combined 16
 
-4. Send 10M different ip addr packets from packet generator to nic, check virtio-net interrupt times by below cmd in VM::
+4. Send 10M different ip addr packets from traffic generator to nic, check virtio-net interrupt times by below cmd in VM::
 
     cat /proc/interrupts
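A hedged sketch of the interrupt-count comparison: sum the virtio rows of /proc/interrupts
before and after the send window (the "virtio" match string is an assumption about the IRQ
names inside the VM)::

    import time

    def virtio_irq_total(pattern="virtio"):
        """Sum per-CPU interrupt counts for rows naming the virtio device."""
        total = 0
        with open("/proc/interrupts") as f:
            ncpu = len(f.readline().split())   # header row: one column per CPU
            for line in f:
                if pattern in line:
                    cols = line.split()[1:1 + ncpu]
                    total += sum(int(c) for c in cols if c.isdigit())
        return total

    before = virtio_irq_total()
    time.sleep(10)                 # traffic generator sending during this window
    delta = virtio_irq_total() - before
    print("interrupts during traffic:", delta)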
 
@@ -121,7 +121,7 @@  Test Case3: Packed ring virtio-pci driver reload test with CBDMA enabled
      -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=on,csum=on,gso=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on \
      -vnc :12 -daemonize
 
-3. On VM1, set virtio device IP, send 10M packets from packet generator to nic then check virtio device can receive packets::
+3. On VM1, set virtio device IP, send 10M packets from traffic generator to nic then check virtio device can receive packets::
 
     ifconfig [ens3] 1.1.1.2      # [ens3] is the name of virtio-net
     tcpdump -i [ens3]
@@ -166,7 +166,7 @@  Test Case4: Wake up packed ring virtio-net cores with event idx interrupt mode a
     ifconfig [ens3] 1.1.1.2           # [ens3] is the name of virtio-net
     ethtool -L [ens3] combined 16
 
-4. Send 10M different ip addr packets from packet generator to nic, check virtio-net interrupt times by below cmd in VM::
+4. Send 10M different ip addr packets from traffic generator to nic, check virtio-net interrupt times by below cmd in VM::
 
     cat /proc/interrupts
 
diff --git a/test_plans/virtio_event_idx_interrupt_test_plan.rst b/test_plans/virtio_event_idx_interrupt_test_plan.rst
index 55edb4a9..96441d86 100644
--- a/test_plans/virtio_event_idx_interrupt_test_plan.rst
+++ b/test_plans/virtio_event_idx_interrupt_test_plan.rst
@@ -42,7 +42,7 @@  Test Case 1: Compare interrupt times with and without split ring virtio event id
 
     ifconfig [ens3] 1.1.1.2  # [ens3] is the name of virtio-net
 
-4. Send 10M packets from packet generator to nic, check virtio-net interrupt times by below cmd in VM::
+4. Send 10M packets from traffic generator to nic, check virtio-net interrupt times by below cmd in VM::
 
     cat /proc/interrupts
 
@@ -70,7 +70,7 @@  Test Case 2: Split ring virtio-pci driver reload test
      -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=on,csum=on,gso=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on \
      -vnc :12 -daemonize
 
-3. On VM1, set virtio device IP, send 10M packets from packet generator to nic then check virtio device can receive packets::
+3. On VM1, set virtio device IP, send 10M packets from traffic generator to nic then check virtio device can receive packets::
 
     ifconfig [ens3] 1.1.1.2      # [ens3] is the name of virtio-net
     tcpdump -i [ens3]
@@ -113,7 +113,7 @@  Test Case 3: Wake up split ring virtio-net cores with event idx interrupt mode 1
     ifconfig [ens3] 1.1.1.2           # [ens3] is the name of virtio-net
     ethtool -L [ens3] combined 16
 
-4. Send 10M different ip addr packets from packet generator to nic, check virtio-net interrupt times by below cmd in VM::
+4. Send 10M different ip addr packets from traffic generator to nic, check virtio-net interrupt times by below cmd in VM::
 
     cat /proc/interrupts
 
@@ -148,7 +148,7 @@  Test Case 4: Compare interrupt times with and without packed ring virtio event i
 
     ifconfig [ens3] 1.1.1.2  # [ens3] is the name of virtio-net
 
-4. Send 10M packets from packet generator to nic, check virtio-net interrupt times by below cmd in VM::
+4. Send 10M packets from traffic generator to nic, check virtio-net interrupt times by below cmd in VM::
 
     cat /proc/interrupts
 
@@ -176,7 +176,7 @@  Test Case 5: Packed ring virtio-pci driver reload test
      -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=on,csum=on,gso=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on \
      -vnc :12 -daemonize
 
-3. On VM1, set virtio device IP, send 10M packets from packet generator to nic then check virtio device can receive packets::
+3. On VM1, set virtio device IP, send 10M packets from traffic generator to nic then check virtio device can receive packets::
 
     ifconfig [ens3] 1.1.1.2      # [ens3] is the name of virtio-net
     tcpdump -i [ens3]
@@ -219,7 +219,7 @@  Test Case 6: Wake up packed ring virtio-net cores with event idx interrupt mode
     ifconfig [ens3] 1.1.1.2           # [ens3] is the name of virtio-net
     ethtool -L [ens3] combined 16
 
-4. Send 10M different ip addr packets from packet generator to nic, check virtio-net interrupt times by below cmd in VM::
+4. Send 10M different ip addr packets from traffic generator to nic, check virtio-net interrupt times by below cmd in VM::
 
     cat /proc/interrupts
 
@@ -249,7 +249,7 @@  Test Case 7: Split ring virtio-pci driver reload test with CBDMA enabled
      -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=on,csum=on,gso=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on \
      -vnc :12 -daemonize
 
-3. On VM1, set virtio device IP, send 10M packets from packet generator to nic then check virtio device can receive packets::
+3. On VM1, set virtio device IP, send 10M packets from traffic generator to nic then check virtio device can receive packets::
 
     ifconfig [ens3] 1.1.1.2      # [ens3] is the name of virtio-net
     tcpdump -i [ens3]
@@ -292,7 +292,7 @@  Test Case 8: Wake up split ring virtio-net cores with event idx interrupt mode a
     ifconfig [ens3] 1.1.1.2           # [ens3] is the name of virtio-net
     ethtool -L [ens3] combined 16
 
-4. Send 10M different ip addr packets from packet generator to nic, check virtio-net interrupt times by below cmd in VM::
+4. Send 10M different ip addr packets from traffic generator to nic, check virtio-net interrupt times by below cmd in VM::
 
     cat /proc/interrupts
 
@@ -322,7 +322,7 @@  Test Case 9: Packed ring virtio-pci driver reload test with CBDMA enabled
      -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=on,csum=on,gso=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on \
      -vnc :12 -daemonize
 
-3. On VM1, set virtio device IP, send 10M packets from packet generator to nic then check virtio device can receive packets::
+3. On VM1, set virtio device IP, send 10M packets from traffic generator to nic then check virtio device can receive packets::
 
     ifconfig [ens3] 1.1.1.2      # [ens3] is the name of virtio-net
     tcpdump -i [ens3]
@@ -365,7 +365,7 @@  Test Case 10: Wake up packed ring virtio-net cores with event idx interrupt mode
     ifconfig [ens3] 1.1.1.2           # [ens3] is the name of virtio-net
     ethtool -L [ens3] combined 16
 
-4. Send 10M different ip addr packets from packet generator to nic, check virtio-net interrupt times by below cmd in VM::
+4. Send 10M different ip addr packets from traffic generator to nic, check virtio-net interrupt times by below cmd in VM::
 
     cat /proc/interrupts
 
diff --git a/test_plans/virtio_ipsec_cryptodev_func_test_plan.rst b/test_plans/virtio_ipsec_cryptodev_func_test_plan.rst
index cb390181..20312b5c 100644
--- a/test_plans/virtio_ipsec_cryptodev_func_test_plan.rst
+++ b/test_plans/virtio_ipsec_cryptodev_func_test_plan.rst
@@ -75,11 +75,11 @@  The options of ipsec-secgw is below::
 Test case setup:
 ================
 
-For function test, the DUT forward UDP packets generated by scapy.
+For function test, the SUT forwards UDP packets generated by scapy.
 After sending single packet from Scapy, crytpoDev function encrypt/decrypt the
-payload in packet by using algorithm setting in VM. the packet back to tester.
+payload in packet by using algorithm setting in VM. The packet is then sent back to the TG.
 
-Use TCPDump to capture the received packet on tester. Then tester parses the payload
+Use TCPDump to capture the received packet on the TG. Then the TG parses the payload
 and compare the payload with correct answer pre-stored in scripts:
 
 .. figure:: image/virtio_ipsec_cryptodev_func_test_plan.png
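A hedged sketch of that capture-and-compare step on the TG; the capture interface and the
expected payload bytes are placeholders for values kept in the actual scripts::

    from scapy.all import Raw, sniff

    IFACE = "ens786f1"          # hypothetical TG capture port
    EXPECTED = b"\x00" * 64     # placeholder for the pre-stored answer

    caps = sniff(iface=IFACE, filter="udp", count=1, timeout=10)
    if caps and caps[0].haslayer(Raw):
        payload = bytes(caps[0][Raw].load)
        print("match" if payload == EXPECTED else "mismatch")
    else:
        print("no UDP packet captured")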
diff --git a/test_plans/virtio_pvp_regression_test_plan.rst b/test_plans/virtio_pvp_regression_test_plan.rst
index a953e2bb..ec758d17 100644
--- a/test_plans/virtio_pvp_regression_test_plan.rst
+++ b/test_plans/virtio_pvp_regression_test_plan.rst
@@ -28,7 +28,7 @@  Test Case 1: pvp test with virtio 0.95 mergeable path
     testpmd>set fwd mac
     testpmd>start
 
-2. Check dut machine already has installed different version qemu, includes [qemu_2.7, qemu_2.8, qemu_2.9, qemu_2.10, qemu_2.11, qemu_2.12, qemu_3.0], launch VM with different qemu version separately::
+2. Check that the SUT machine already has different qemu versions installed, including [qemu_2.7, qemu_2.8, qemu_2.9, qemu_2.10, qemu_2.11, qemu_2.12, qemu_3.0], then launch the VM with each qemu version separately::
 
     qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 3 -m 4096 \
     -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
@@ -48,7 +48,7 @@  Test Case 1: pvp test with virtio 0.95 mergeable path
     testpmd>set fwd mac
     testpmd>start
 
-4. Send 64B and 1518B packets by packet generator separately, show throughput with below command::
+4. Send 64B and 1518B packets by traffic generator separately, show throughput with below command::
 
     testpmd>show port stats all
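A minimal scapy sketch for building the two frame sizes, assuming the usual convention that
the quoted size includes the 4-byte FCS appended by the NIC; the interface and addresses
are placeholders::

    from scapy.all import Ether, IP, UDP, sendp

    IFACE = "ens786f0"  # hypothetical TG port

    def frame(size):
        """Pad a UDP frame so its on-wire size, FCS included, equals `size`."""
        base = Ether(dst="52:54:00:00:00:01") / IP(src="1.1.1.1", dst="2.2.2.2") / UDP()
        return base / ("X" * (size - 4 - len(base)))

    for size in (64, 1518):
        sendp(frame(size), iface=IFACE, count=10000)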
 
@@ -67,7 +67,7 @@  Test Case 2: pvp test with virtio 0.95 non-mergeable path
     testpmd>set fwd mac
     testpmd>start
 
-2. Check dut machine already has installed different version qemu, includes [qemu_2.7, qemu_2.8, qemu_2.9, qemu_2.10, qemu_2.11, qemu_2.12, qemu_3.0], launch VM with different qemu version separately::
+2. Check that the SUT machine already has different qemu versions installed, including [qemu_2.7, qemu_2.8, qemu_2.9, qemu_2.10, qemu_2.11, qemu_2.12, qemu_3.0], then launch the VM with each qemu version separately::
 
     qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 3 -m 4096 \
     -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
@@ -87,7 +87,7 @@  Test Case 2: pvp test with virtio 0.95 non-mergeable path
     testpmd>set fwd mac
     testpmd>start
 
-4. Send 64B and 1518B packets by packet generator separately, show throughput with below command::
+4. Send 64B and 1518B packets by traffic generator separately, show throughput with below command::
 
     testpmd>show port stats all
 
@@ -106,7 +106,7 @@  Test Case 3: pvp test with virtio 0.95 vrctor_rx path
     testpmd>set fwd mac
     testpmd>start
 
-2. Check dut machine already has installed different version qemu, includes [qemu_2.7, qemu_2.8, qemu_2.9, qemu_2.10, qemu_2.11, qemu_2.12, qemu_3.0], launch VM with different qemu version separately::
+2. Check that the SUT machine already has different qemu versions installed, including [qemu_2.7, qemu_2.8, qemu_2.9, qemu_2.10, qemu_2.11, qemu_2.12, qemu_3.0], then launch the VM with each qemu version separately::
 
     qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 3 -m 4096 \
     -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
@@ -126,7 +126,7 @@  Test Case 3: pvp test with virtio 0.95 vrctor_rx path
     testpmd>set fwd mac
     testpmd>start
 
-4. Send 64B and 1518B packets by packet generator separately, show throughput with below command::
+4. Send 64B and 1518B packets by traffic generator separately, show throughput with below command::
 
     testpmd>show port stats all
 
@@ -145,7 +145,7 @@  Test Case 4: pvp test with virtio 1.0 mergeable path
     testpmd>set fwd mac
     testpmd>start
 
-2. Check dut machine already has installed different version qemu, includes [qemu_2.7, qemu_2.8, qemu_2.9, qemu_2.10, qemu_2.11, qemu_2.12, qemu_3.0], launch VM with different qemu version separately, note: we need add "disable-modern=false" to enable virtio 1.0::
+2. Check that the SUT machine already has different qemu versions installed, including [qemu_2.7, qemu_2.8, qemu_2.9, qemu_2.10, qemu_2.11, qemu_2.12, qemu_3.0], then launch the VM with each qemu version separately. Note: we need to add "disable-modern=false" to enable virtio 1.0::
 
     qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 3 -m 4096 \
     -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
@@ -165,7 +165,7 @@  Test Case 4: pvp test with virtio 1.0 mergeable path
     testpmd>set fwd mac
     testpmd>start
 
-4. Send 64B and 1518B packets by packet generator separately, show throughput with below command::
+4. Send 64B and 1518B packets by traffic generator separately, show throughput with below command::
 
     testpmd>show port stats all
 
@@ -184,7 +184,7 @@  Test Case 5: pvp test with virtio 1.0 non-mergeable path
     testpmd>set fwd mac
     testpmd>start
 
-2. Check dut machine already has installed different version qemu, includes [qemu_2.7, qemu_2.8, qemu_2.9, qemu_2.10, qemu_2.11, qemu_2.12, qemu_3.0], launch VM with different qemu version separately, note: we need add "disable-modern=false" to enable virtio 1.0::
+2. Check that the SUT machine already has different qemu versions installed, including [qemu_2.7, qemu_2.8, qemu_2.9, qemu_2.10, qemu_2.11, qemu_2.12, qemu_3.0], then launch the VM with each qemu version separately. Note: we need to add "disable-modern=false" to enable virtio 1.0::
 
     qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 3 -m 4096 \
     -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
@@ -204,7 +204,7 @@  Test Case 5: pvp test with virtio 1.0 non-mergeable path
     testpmd>set fwd mac
     testpmd>start
 
-4. Send 64B and 1518B packets by packet generator separately, show throughput with below command::
+4. Send 64B and 1518B packets by traffic generator separately, show throughput with below command::
 
     testpmd>show port stats all
 
@@ -223,7 +223,7 @@  Test Case 6: pvp test with virtio 1.0 vrctor_rx path
     testpmd>set fwd mac
     testpmd>start
 
-2. Check dut machine already has installed different version qemu, includes [qemu_2.7, qemu_2.8, qemu_2.9, qemu_2.10, qemu_2.11, qemu_2.12, qemu_3.0], launch VM with different qemu version separately, note: we need add "disable-modern=false" to enable virtio 1.0::
+2. Check that the SUT machine already has different qemu versions installed, including [qemu_2.7, qemu_2.8, qemu_2.9, qemu_2.10, qemu_2.11, qemu_2.12, qemu_3.0], then launch the VM with each qemu version separately. Note: we need to add "disable-modern=false" to enable virtio 1.0::
 
     qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 3 -m 4096 \
     -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
@@ -243,7 +243,7 @@  Test Case 6: pvp test with virtio 1.0 vrctor_rx path
     testpmd>set fwd mac
     testpmd>start
 
-4. Send 64B and 1518B packets by packet generator separately, show throughput with below command::
+4. Send 64B and 1518B packets by traffic generator separately, show throughput with below command::
 
     testpmd>show port stats all
 
@@ -262,7 +262,7 @@  Test Case 7: pvp test with virtio 1.1 mergeable path
     testpmd>set fwd mac
     testpmd>start
 
-2. Check dut machine already has installed qemu 4.2.0, then launch VM::
+2. Check that the SUT machine already has qemu 4.2.0 installed, then launch the VM::
 
     qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 3 -m 4096 \
     -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
@@ -282,7 +282,7 @@  Test Case 7: pvp test with virtio 1.1 mergeable path
     testpmd>set fwd mac
     testpmd>start
 
-4. Send 64B and 1518B packets by packet generator separately, show throughput with below command::
+4. Send 64B and 1518B packets by traffic generator separately, show throughput with below command::
 
     testpmd>show port stats all
 
@@ -301,7 +301,7 @@  Test Case 8: pvp test with virtio 1.1 non-mergeable path
     testpmd>set fwd mac
     testpmd>start
 
-2. Check dut machine already has installed qemu 4.2.0, then launch VM::
+2. Check that the SUT machine already has qemu 4.2.0 installed, then launch the VM::
 
     qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 3 -m 4096 \
     -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
@@ -321,7 +321,7 @@  Test Case 8: pvp test with virtio 1.1 non-mergeable path
     testpmd>set fwd mac
     testpmd>start
 
-4. Send 64B and 1518B packets by packet generator separately, show throughput with below command::
+4. Send 64B and 1518B packets by traffic generator separately, show throughput with below command::
 
     testpmd>show port stats all
 
diff --git a/test_plans/virtio_smoke_test_plan.rst b/test_plans/virtio_smoke_test_plan.rst
index cc184bf5..ef19d8ab 100644
--- a/test_plans/virtio_smoke_test_plan.rst
+++ b/test_plans/virtio_smoke_test_plan.rst
@@ -92,7 +92,7 @@  Test Case 2: pvp test with virtio packed ring vectorized path
     testpmd>set fwd mac
     testpmd>start
 
-3. Send 64B and 1518B packets with packet generator, check the throughput with below command::
+3. Send 64B and 1518B packets with traffic generator, check the throughput with below command::
 
     testpmd>show port stats all
 
diff --git a/test_plans/virtio_user_for_container_networking_test_plan.rst b/test_plans/virtio_user_for_container_networking_test_plan.rst
index afe60de1..ebd25829 100644
--- a/test_plans/virtio_user_for_container_networking_test_plan.rst
+++ b/test_plans/virtio_user_for_container_networking_test_plan.rst
@@ -51,7 +51,7 @@  Test Case 1: packet forward test for container networking
     -v /root/dpdk:/root/dpdk dpdk_image ./root/dpdk/x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 3-4 -n 4 -m 1024 --no-pci --file-prefix=container \
     --vdev=virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost-net -- -i
 
-3. Send packet with packet generator with different packet size,includes [64, 128, 256, 512, 1024, 1518], check virtio could receive and fwd packets correctly in container::
+3. Send packets with traffic generator with different packet sizes, including [64, 128, 256, 512, 1024, 1518], check virtio could receive and fwd packets correctly in container::
 
     testpmd>show port stats all
 
@@ -73,7 +73,7 @@  Test Case 2: packet forward with multi-queues for container networking
     -v /root/dpdk:/root/dpdk dpdk_image ./root/dpdk/x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 4-6 -n 4 -m 1024 --no-pci --file-prefix=container \
     --vdev=virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost-net,queues=2 -- -i --rxq=2 --txq=2 --nb-cores=2
 
-3. Send packet with packet generator with different packet size,includes [64, 128, 256, 512, 1024, 1518], check virtio could receive and fwd packets in container with two queues::
+3. Send packets with traffic generator with different packet sizes, including [64, 128, 256, 512, 1024, 1518], check virtio could receive and fwd packets in container with two queues::
 
     testpmd>show port stats all
     testpmd>stop
diff --git a/test_plans/vlan_ethertype_config_test_plan.rst b/test_plans/vlan_ethertype_config_test_plan.rst
index 836bfa3f..992b5db7 100644
--- a/test_plans/vlan_ethertype_config_test_plan.rst
+++ b/test_plans/vlan_ethertype_config_test_plan.rst
@@ -24,7 +24,7 @@  Prerequisites
    * DPDK: http://dpdk.org/git/dpdk
    * Scapy: http://www.secdev.org/projects/scapy/
 
-3. Assuming that DUT ports ``0`` and ``1`` are connected to the tester's port ``A`` and ``B``.
+3. Assuming that SUT ports ``0`` and ``1`` are connected to the TG's ports ``A`` and ``B``.
 
 Test Case 1: change VLAN TPID
 =============================
diff --git a/test_plans/vm2vm_virtio_net_dsa_test_plan.rst b/test_plans/vm2vm_virtio_net_dsa_test_plan.rst
index d910cb5a..cf82d718 100644
--- a/test_plans/vm2vm_virtio_net_dsa_test_plan.rst
+++ b/test_plans/vm2vm_virtio_net_dsa_test_plan.rst
@@ -15,7 +15,7 @@  in both split and packed ring.
 
 This document provides the test plan for testing the following features when Vhost-user using asynchronous data path with
 DSA driver (kernel IDXD driver and DPDK vfio-pci driver) in VM2VM virtio-net topology.
-1. check Vhost tx offload function by verifing the TSO/cksum in the TCP/IP stack with vm2vm split ring and packed ring 
+1. Check Vhost tx offload function by verifying the TSO/cksum in the TCP/IP stack with vm2vm split ring and packed ring
 vhost-user/virtio-net mergeable path.
 2.Check the payload of large packet (larger than 1MB) is valid after forwarding packets with vm2vm split ring
 and packed ring vhost-user/virtio-net mergeable and non-mergeable path.
@@ -25,9 +25,9 @@  IOMMU impact:
 If iommu off, idxd can work with iova=pa
 If iommu on, kernel dsa driver only can work with iova=va by program IOMMU, can't use iova=pa(fwd not work due to pkts payload wrong).
 
-Note: 
+Note:
 1.For packed virtqueue virtio-net test, need qemu version > 4.2.0 and VM kernel version > v5.1, and packed ring multi-queues not support reconnect in qemu yet.
-2.For split virtqueue virtio-net with multi-queues server mode test, need qemu version >= 5.2.0, dut to old qemu exist reconnect issue when multi-queues test.
+2.For split virtqueue virtio-net with multi-queues server mode test, need qemu version >= 5.2.0, due to old qemu having a reconnect issue in multi-queues test.
 3.When DMA devices are bound to vfio driver, VA mode is the default and recommended. For PA mode, page by page mapping may
 exceed IOMMU's max capability, better to use 1G guest hugepage.
 4.DPDK local patch that about vhost pmd is needed when testing Vhost asynchronous data path with testpmd, and the suite has not yet been automated.
@@ -54,7 +54,7 @@  General set up
 	CC=gcc meson --werror -Denable_kmods=True -Dlibdir=lib -Dexamples=all --default-library=x86_64-native-linuxapp-gcc
 	ninja -C x86_64-native-linuxapp-gcc -j 110
 
-2. Get the PCI device ID and DSA device ID of DUT, for example, 0000:4f:00.1 is PCI device ID, 0000:6a:01.0 - 0000:f6:01.0 are DSA device IDs::
+2. Get the PCI device ID and DSA device ID of SUT, for example, 0000:4f:00.1 is PCI device ID, 0000:6a:01.0 - 0000:f6:01.0 are DSA device IDs::
 
 	<dpdk dir># ./usertools/dpdk-devbind.py -s
 
@@ -80,7 +80,7 @@  Common steps
 ------------
 1. Bind DSA devices to DPDK vfio-pci driver::
 
-	<dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci <DUT port DSA device id>
+	<dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci <SUT port DSA device id>
 
 	For example, bind 2 DMA devices to vfio-pci driver:
 	<dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci 0000:e7:01.0 0000:ec:01.0
@@ -114,7 +114,7 @@  Common steps
 
 Test Case 1: VM2VM vhost-user/virtio-net split ring test TSO with dsa dpdk driver
 -----------------------------------------------------------------------------------
-This case test the function of Vhost tx offload in the topology of vhost-user/virtio-net split ring mergeable path 
+This case tests the function of Vhost tx offload in the topology of vhost-user/virtio-net split ring mergeable path
 by verifing the TSO/cksum in the TCP/IP stack when vhost uses the asynchronous enqueue operations with dsa dpdk driver.
 
 1. Bind 1 dsa device to vfio-pci like common step 1::
@@ -174,7 +174,7 @@  by verifing the TSO/cksum in the TCP/IP stack when vhost uses the asynchronous e
 
 Test Case 2: VM2VM vhost-user/virtio-net split ring mergeable path 8 queues test with large packet payload with dsa dpdk driver
 ---------------------------------------------------------------------------------------------------------------------------------
-This case uses iperf and scp to test the payload of large packet (larger than 1MB) is valid after packets forwarding in 
+This case uses iperf and scp to test that the payload of large packets (larger than 1MB) is valid after packets forwarding in
 vm2vm vhost-user/virtio-net split ring mergeable path when vhost uses the asynchronous enqueue operations with dsa dpdk driver.
 The dynamic change of multi-queues number, iova as VA and PA mode also test.
 
@@ -376,7 +376,7 @@  The dynamic change of multi-queues number also test.
 
 Test Case 4: VM2VM vhost-user/virtio-net packed ring test TSO with dsa dpdk driver
 -----------------------------------------------------------------------------------
-This case test the function of Vhost tx offload in the topology of vhost-user/virtio-net packed ring mergeable path 
+This case tests the function of Vhost tx offload in the topology of vhost-user/virtio-net packed ring mergeable path
 by verifing the TSO/cksum in the TCP/IP stack when vhost uses the asynchronous enqueue operations with dsa dpdk driver.
 
 1. Bind 2 dsa device to vfio-pci like common step 1::
@@ -436,7 +436,7 @@  by verifing the TSO/cksum in the TCP/IP stack when vhost uses the asynchronous e
 
 Test Case 5: VM2VM vhost-user/virtio-net packed ring mergeable path 8 queues test with large packet payload with dsa dpdk driver
 ---------------------------------------------------------------------------------------------------------------------------------
-This case uses iperf and scp to test the payload of large packet (larger than 1MB) is valid after packets forwarding in 
+This case uses iperf and scp to test that the payload of large packets (larger than 1MB) is valid after packets forwarding in
 vm2vm vhost-user/virtio-net packed ring mergeable path when vhost uses the asynchronous enqueue operations with dsa dpdk driver.
 The dynamic change of multi-queues number also test.
 
@@ -502,7 +502,7 @@  The dynamic change of multi-queues number also test.
 
 Test Case 6: VM2VM vhost-user/virtio-net packed ring non-mergeable path 8 queues test with large packet payload with dsa dpdk driver
 -------------------------------------------------------------------------------------------------------------------------------------
-This case uses iperf and scp to test the payload of large packet (larger than 1MB) is valid after packets forwarding in 
+This case uses iperf and scp to test that the payload of large packets (larger than 1MB) is valid after packets forwarding in
 vm2vm vhost-user/virtio-net packed ring non-mergeable path when vhost uses the asynchronous enqueue operations with dsa dpdk driver.
 The dynamic change of multi-queues number also test.
 
@@ -567,7 +567,7 @@  The dynamic change of multi-queues number also test.
 
 Test Case 7: VM2VM vhost-user/virtio-net packed ring test TSO with dsa dpdk driver and pa mode
 -----------------------------------------------------------------------------------------------
-This case test the function of Vhost tx offload in the topology of vhost-user/virtio-net packed ring mergeable path 
+This case tests the function of Vhost tx offload in the topology of vhost-user/virtio-net packed ring mergeable path
 by verifing the TSO/cksum in the TCP/IP stack when vhost uses the asynchronous enqueue operations with dsa dpdk driver and iova as PA mode.
 
 1. Bind 2  dsa device to vfio-pci like common step 1::
@@ -627,7 +627,7 @@  by verifing the TSO/cksum in the TCP/IP stack when vhost uses the asynchronous e
 
 Test Case 8: VM2VM vhost-user/virtio-net packed ring mergeable path 8 queues test with large packet payload with dsa dpdk driver and pa mode
 ---------------------------------------------------------------------------------------------------------------------------------------------
-This case uses iperf and scp to test the payload of large packet (larger than 1MB) is valid after packets forwarding in 
+This case uses iperf and scp to test that the payload of large packets (larger than 1MB) is valid after packets forwarding in
 vm2vm vhost-user/virtio-net packed ring mergeable path when vhost uses the asynchronous enqueue operations with dsa dpdk driver
 and iova as PA mode. The dynamic change of multi-queues number also test.
 
@@ -692,7 +692,7 @@  and iova as PA mode. The dynamic change of multi-queues number also test.
 
 Test Case 9: VM2VM vhost-user/virtio-net split ring test TSO with dsa kernel driver
 ------------------------------------------------------------------------------------
-This case test the function of Vhost tx offload in the topology of vhost-user/virtio-net split ring mergeable path 
+This case tests the function of Vhost tx offload in the topology of vhost-user/virtio-net split ring mergeable path
 by verifing the TSO/cksum in the TCP/IP stack when vhost uses the asynchronous enqueue operations with dsa kernel driver.
 
 1. Bind 1 dsa device to idxd like common step 2::
@@ -756,7 +756,7 @@  by verifing the TSO/cksum in the TCP/IP stack when vhost uses the asynchronous e
 
 Test Case 10: VM2VM vhost-user/virtio-net split ring mergeable path 8 queues test with large packet payload with dsa kernel driver
 -----------------------------------------------------------------------------------------------------------------------------------
-This case uses iperf and scp to test the payload of large packet (larger than 1MB) is valid after packets forwarding in 
+This case uses iperf and scp to test that the payload of large packets (larger than 1MB) is valid after packets forwarding in
 vm2vm vhost-user/virtio-net split ring mergeable path when vhost uses the asynchronous enqueue operations with dsa kernel driver.
 The dynamic change of multi-queues number also test.
 
@@ -867,7 +867,7 @@  The dynamic change of multi-queues number also test.
 
 Test Case 11: VM2VM vhost-user/virtio-net split ring non-mergeable path 8 queues test with large packet payload with dsa kernel driver
 ---------------------------------------------------------------------------------------------------------------------------------------
-This case uses iperf and scp to test the payload of large packet (larger than 1MB) is valid after packets forwarding in 
+This case uses iperf and scp to test that the payload of large packets (larger than 1MB) is valid after packets forwarding in
 vm2vm vhost-user/virtio-net split ring non-mergeable path when vhost uses the asynchronous enqueue operations with dsa kernel driver.
 The dynamic change of multi-queues number also test.
 
diff --git a/test_plans/vm2vm_virtio_net_perf_cbdma_test_plan.rst b/test_plans/vm2vm_virtio_net_perf_cbdma_test_plan.rst
index 8433b3d4..bc7b9a09 100644
--- a/test_plans/vm2vm_virtio_net_perf_cbdma_test_plan.rst
+++ b/test_plans/vm2vm_virtio_net_perf_cbdma_test_plan.rst
@@ -14,7 +14,7 @@  channels and one DMA channel can be shared by multiple vrings at the same time.V
 in both split and packed ring.
 This document provides the test plan for testing the following features when Vhost-user using asynchronous data path with
 CBDMA in VM2VM virtio-net topology.
-1. check Vhost tx offload function by verifing the TSO/cksum in the TCP/IP stack with vm2vm split ring and packed ring 
+1. Check Vhost tx offload function by verifying the TSO/cksum in the TCP/IP stack with vm2vm split ring and packed ring
 vhost-user/virtio-net mergeable path.
 2.Check the payload of large packet (larger than 1MB) is valid after forwarding packets with vm2vm split ring
 and packed ring vhost-user/virtio-net mergeable and non-mergeable path.
@@ -22,7 +22,7 @@  and packed ring vhost-user/virtio-net mergeable and non-mergeable path.
 
 Note:
 1.For packed virtqueue virtio-net test, need qemu version > 4.2.0 and VM kernel version > 5.1, and packed ring multi-queues not support reconnect in qemu yet.
-2.For split virtqueue virtio-net with multi-queues server mode test, need qemu version >= 5.2.0, dut to old qemu exist reconnect issue when multi-queues test.
+2.For split virtqueue virtio-net with multi-queues server mode test, need qemu version >= 5.2.0, due to old qemu having a reconnect issue in multi-queues test.
 3.When DMA devices are bound to vfio driver, VA mode is the default and recommended. For PA mode, page by page mapping may
 exceed IOMMU's max capability, better to use 1G guest hugepage.
 4.DPDK local patch that about vhost pmd is needed when testing Vhost asynchronous data path with testpmd.
@@ -53,7 +53,7 @@  General set up
       CC=gcc meson --werror -Denable_kmods=True -Dlibdir=lib -Dexamples=all --default-library=x86_64-native-linuxapp-gcc
       ninja -C x86_64-native-linuxapp-gcc -j 110
 
-2. Get the PCI device ID and DMA device ID of DUT, for example, 0000:18:00.0 is PCI device ID, 0000:00:04.0, 0000:00:04.1 is DMA device ID::
+2. Get the PCI device ID and DMA device ID of SUT, for example, 0000:18:00.0 is the PCI device ID, 0000:00:04.0 and 0000:00:04.1 are DMA device IDs::
 
       <dpdk dir># ./usertools/dpdk-devbind.py -s
 
@@ -73,14 +73,14 @@  Common steps
 ------------
 1. Bind 2 CBDMA channels to vfio-pci::
 
-      <dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci <DUT port DMA device id>
+      <dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci <SUT port DMA device id>
 
       For example, Bind 1 NIC port and 2 CBDMA channels:
       <dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci 0000:00:04.0,0000:00:04.1
 
 Test Case 1: VM2VM split ring vhost-user/virtio-net CBDMA enable test with tcp traffic
 --------------------------------------------------------------------------------------
-This case test the function of Vhost tx offload in the topology of vhost-user/virtio-net split ring mergeable path 
+This case tests the function of Vhost tx offload in the topology of vhost-user/virtio-net split ring mergeable path
 by verifing the TSO/cksum in the TCP/IP stack when vhost uses the asynchronous enqueue operations with CBDMA channels.
 
 1. Bind 2 CBDMA channels to vfio-pci, as common step 1.
@@ -141,7 +141,7 @@  by verifing the TSO/cksum in the TCP/IP stack when vhost uses the asynchronous e
 
 Test Case 2: VM2VM split ring vhost-user/virtio-net mergeable 8 queues CBDMA enable test with large packet payload valid check
 ------------------------------------------------------------------------------------------------------------------------------
-This case uses iperf and scp to test the payload of large packet (larger than 1MB) is valid after packets forwarding in 
+This case uses iperf and scp to test that the payload of large packets (larger than 1MB) is valid after packets forwarding in
 vm2vm vhost-user/virtio-net split ring mergeable path when vhost uses the asynchronous enqueue operations with CBDMA channels.
 The dynamic change of multi-queues number, iova as VA and PA mode also test.
 
@@ -281,7 +281,7 @@  The dynamic change of multi-queues number, iova as VA and PA mode also test.
 
 Test Case 3: VM2VM split ring vhost-user/virtio-net non-mergeable 8 queues CBDMA enable test with large packet payload valid check
 ----------------------------------------------------------------------------------------------------------------------------------
-This case uses iperf and scp to test the payload of large packet (larger than 1MB) is valid after packets forwarding in 
+This case uses iperf and scp to test that the payload of large packets (larger than 1MB) is valid after packets forwarding in
 vm2vm vhost-user/virtio-net split ring non-mergeable path when vhost uses the asynchronous enqueue operations with dsa dpdk driver.
 The dynamic change of multi-queues number also test.
 
diff --git a/test_plans/vm2vm_virtio_net_perf_test_plan.rst b/test_plans/vm2vm_virtio_net_perf_test_plan.rst
index c04e9849..b25f4a6e 100644
--- a/test_plans/vm2vm_virtio_net_perf_test_plan.rst
+++ b/test_plans/vm2vm_virtio_net_perf_test_plan.rst
@@ -16,9 +16,9 @@  and packed ring vhost-user/virtio-net mergeable and non-mergeable path.
 3. Check Vhost tx offload function by verifying the TSO/cksum in the TCP/IP stack with vm2vm split ring and
 packed ring vhost-user/virtio-net mergeable path with CBDMA channel.
 4. Multi-queues number dynamic change in vm2vm vhost-user/virtio-net with split ring and packed ring when vhost enqueue operation with multi-CBDMA channels.
-Note: 
+Note:
 1.For packed virtqueue virtio-net test, need qemu version > 4.2.0 and VM kernel version > v5.1.
-2.For split virtqueue virtio-net with multi-queues server mode test, need qemu version > LTS 4.2.1, dut to old qemu exist reconnect issue when multi-queues test.
+2.For split virtqueue virtio-net with multi-queues server mode test, need qemu version > LTS 4.2.1, due to old qemu having a reconnect issue in multi-queues test.
 3.For PA mode, page by page mapping may exceed IOMMU's max capability, better to use 1G guest hugepage.
 
 Test flow
diff --git a/test_plans/vm2vm_virtio_pmd_cbdma_test_plan.rst b/test_plans/vm2vm_virtio_pmd_cbdma_test_plan.rst
index a491bd40..501ce5e7 100644
--- a/test_plans/vm2vm_virtio_pmd_cbdma_test_plan.rst
+++ b/test_plans/vm2vm_virtio_pmd_cbdma_test_plan.rst
@@ -19,7 +19,7 @@  This document provides the test plan for testing some basic functions with CBDMA
 
 Note:
 1.For packed virtqueue virtio-net test, need qemu version > 4.2.0 and VM kernel version > 5.1, and packed ring multi-queues not support reconnect in qemu yet.
-2.For split virtqueue virtio-net with multi-queues server mode test, need qemu version >= 5.2.0, dut to old qemu exist reconnect issue when multi-queues test.
+2.For split virtqueue virtio-net with multi-queues server mode test, need qemu version >= 5.2.0, due to old qemu having a reconnect issue in multi-queues test.
 3.When DMA devices are bound to vfio driver, VA mode is the default and recommended. For PA mode, page by page mapping may
 exceed IOMMU's max capability, better to use 1G guest hugepage.
 4.DPDK local patch that about vhost pmd is needed when testing Vhost asynchronous data path with testpmd.
@@ -49,7 +49,7 @@  General set up
       CC=gcc meson --werror -Denable_kmods=True -Dlibdir=lib -Dexamples=all --default-library=x86_64-native-linuxapp-gcc
       ninja -C x86_64-native-linuxapp-gcc -j 110
 
-2. Get the PCI device ID and DMA device ID of DUT, for example, 0000:18:00.0 is PCI device ID, 0000:00:04.0, 0000:00:04.1 is DMA device ID::
+2. Get the PCI device ID and DMA device ID of SUT, for example, 0000:18:00.0 is the PCI device ID, 0000:00:04.0 and 0000:00:04.1 are DMA device IDs::
 
       <dpdk dir># ./usertools/dpdk-devbind.py -s
 
@@ -69,8 +69,8 @@  Common steps
 ------------
 1. Bind 1 NIC port and CBDMA channels to vfio-pci::
 
-      <dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci <DUT port pci device id>
-      <dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci <DUT port DMA device id>
+      <dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci <SUT port pci device id>
+      <dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci <SUT port DMA device id>
 
       For example, Bind 1 NIC port and 2 CBDMA channels::
       <dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci 0000:00:18.0
diff --git a/test_plans/vm2vm_virtio_user_cbdma_test_plan.rst b/test_plans/vm2vm_virtio_user_cbdma_test_plan.rst
index 3fd04592..3479e045 100644
--- a/test_plans/vm2vm_virtio_user_cbdma_test_plan.rst
+++ b/test_plans/vm2vm_virtio_user_cbdma_test_plan.rst
@@ -17,9 +17,9 @@  This document provides the test plan for testing the following features when Vho
 CBDMA channels in VM2VM virtio-user topology.
 1. Split virtqueue vm2vm in-order mergeable, in-order non-mergeable, mergeable, non-mergeable, vector_rx path test and payload check.
 2. Packed virtqueue vm2vm in-order mergeable, in-order non-mergeable, mergeable, non-mergeable, vectorized path (ringsize not powerof 2) test and payload check.
-3. Test indirect descriptor feature. 
-For example, the split ring mergeable inorder path use non-indirect descriptor, the 2000,2000,2000,2000 chain packets will need 4 consequent ring, 
-still need one ring put header. 
+3. Test indirect descriptor feature.
+For example, the split ring mergeable inorder path uses non-indirect descriptors, so the 2000,2000,2000,2000 chain packets will need 4 consecutive ring entries,
+plus one more ring entry for the header.
 The split ring mergeable path use indirect descriptor, the 2000,2000,2000,2000 chain packets will only occupy one ring.
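The descriptor arithmetic stated above, written out as a small illustration::

    SEGS = [2000, 2000, 2000, 2000]   # the chained packet from the example

    # Non-indirect (split ring inorder mergeable path): one ring entry per
    # segment plus one more entry for the virtio-net header.
    non_indirect_entries = len(SEGS) + 1      # -> 5

    # Indirect (split ring mergeable path): the chain lives in an indirect
    # table, so it occupies a single entry in the ring itself.
    indirect_entries = 1

    print(non_indirect_entries, indirect_entries)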
 
 Note:
@@ -49,7 +49,7 @@  General set up
       CC=gcc meson --werror -Denable_kmods=True -Dlibdir=lib -Dexamples=all --default-library=x86_64-native-linuxapp-gcc
       ninja -C x86_64-native-linuxapp-gcc -j 110
 
-2. Get the PCI device ID and DMA device ID of DUT, for example, 0000:18:00.0 is PCI device ID, 0000:00:04.0, 0000:00:04.1 is DMA device ID::
+2. Get the PCI device ID and DMA device ID of SUT, for example, 0000:18:00.0 is the PCI device ID, 0000:00:04.0 and 0000:00:04.1 are DMA device IDs::
 
       <dpdk dir># ./usertools/dpdk-devbind.py -s
 
@@ -69,15 +69,15 @@  Common steps
 ------------
 1. Bind 1 NIC port and CBDMA channels to vfio-pci::
 
-      <dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci <DUT port pci device id>
-      <dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci <DUT port DMA device id>
+      <dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci <SUT port pci device id>
+      <dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci <SUT port DMA device id>
 
       For example, 2 CBDMA channels:
       <dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci 0000:00:04.0,0000:00:04.1
 
 Test Case 1: split virtqueue vm2vm non-mergeable path multi-queues payload check with cbdma enable
 --------------------------------------------------------------------------------------------------
-This case uses testpmd to test the payload is valid after packets forwarding in vhost-user/virtio-user split ring non-mergeable path 
+This case uses testpmd to test that the payload is valid after packets forwarding in vhost-user/virtio-user split ring non-mergeable path
 and multi-queues when vhost uses the asynchronous enqueue operations with CBDMA channels. Both iova as VA and PA mode test.
 
 1. Bind 2 CBDMA channel to vfio-pci, as common step 1.
@@ -396,7 +396,7 @@  and multi-queues when vhost uses the asynchronous enqueue operations with CBDMA
 Test Case 5: Split virtqueue vm2vm inorder mergeable path test non-indirect descriptor with cbdma enable
 --------------------------------------------------------------------------------------------------------
 This case uses testpmd to test the payload is valid and non-indirect descriptor after packets forwarding in vhost-user/virtio-user
-split ring inorder mergeable path and multi-queues when vhost uses the asynchronous enqueue operations with CBDMA channels. Both 
+split ring inorder mergeable path and multi-queues when vhost uses the asynchronous enqueue operations with CBDMA channels. Both
 iova as VA and PA mode test.
 
 1. Bind 4 CBDMA channel to vfio-pci, as common step 1.
@@ -1052,7 +1052,7 @@  So check 512 packets and 112128 bytes received by virtio-user1 and 502 packets w
 
 Test Case 14: packed virtqueue vm2vm vectorized-tx path test batch processing with cbdma enable
 -----------------------------------------------------------------------------------------------
-This case uses testpmd to test that one packet can forwarding in vhost-user/virtio-user packed ring vectorized-tx path 
+This case uses testpmd to test that one packet can be forwarded in vhost-user/virtio-user packed ring vectorized-tx path
 when vhost uses the asynchronous enqueue operations with CBDMA channels.
 
 1. Bind 8 CBDMA channel to vfio-pci, as common step 1.
diff --git a/test_plans/vm2vm_virtio_user_dsa_test_plan.rst b/test_plans/vm2vm_virtio_user_dsa_test_plan.rst
index 240a2a27..39d0d7fc 100644
--- a/test_plans/vm2vm_virtio_user_dsa_test_plan.rst
+++ b/test_plans/vm2vm_virtio_user_dsa_test_plan.rst
@@ -45,7 +45,7 @@  General set up
 	CC=gcc meson --werror -Denable_kmods=True -Dlibdir=lib -Dexamples=all --default-library=x86_64-native-linuxapp-gcc
 	ninja -C x86_64-native-linuxapp-gcc -j 110
 
-2. Get the PCI device ID and DSA device ID of DUT, for example, 0000:4f:00.1 is PCI device ID, 0000:6a:01.0 - 0000:f6:01.0 are DSA device IDs::
+2. Get the PCI device ID and DSA device ID of SUT, for example, 0000:4f:00.1 is PCI device ID, 0000:6a:01.0 - 0000:f6:01.0 are DSA device IDs::
 
 	<dpdk dir># ./usertools/dpdk-devbind.py -s
 
@@ -71,7 +71,7 @@  Common steps
 ------------
 1. Bind DSA devices to DPDK vfio-pci driver::
 
-	<dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci <DUT port DSA device id>
+	<dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci <SUT port DSA device id>
 
 	For example, bind 2 DMA devices to vfio-pci driver:
 	<dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci 0000:e7:01.0 0000:ec:01.0
@@ -105,7 +105,7 @@  Common steps
 
 Test Case 1: VM2VM vhost-user/virtio-user split ring non-mergeable path and multi-queues payload check with dsa dpdk driver
 ----------------------------------------------------------------------------------------------------------------------------
-This case uses testpmd to test the payload is valid after packets forwarding in vhost-user/virtio-user split ring non-mergeable path 
+This case uses testpmd to test that the payload is valid after packets forwarding in vhost-user/virtio-user split ring non-mergeable path
 and multi-queues when vhost uses the asynchronous enqueue operations with dsa dpdk driver. Both iova as VA and PA mode test.
 
 1. bind 2 dsa device to vfio-pci like common step 1::
@@ -366,7 +366,7 @@  So check 512 packets and 112128 bytes received by virtio-user1 and 502 packets w
 
 Test Case 5: VM2VM vhost-user/virtio-user packed ring non-mergeable path and multi-queues payload check with dsa dpdk driver
 --------------------------------------------------------------------------------------------------------------------------------
-This case uses testpmd to test the payload is valid after packets forwarding in vhost-user/virtio-user packed ring 
+This case uses testpmd to test that the payload is valid after packets forwarding in vhost-user/virtio-user packed ring
 non-mergeable path and multi-queues when vhost uses the asynchronous enqueue operations with dsa dpdk driver. Both iova as VA and PA mode test.
 
 1. bind 3 dsa device to vfio-pci like common step 1::
@@ -518,12 +518,12 @@  non-mergeable path and multi-queues when vhost uses the asynchronous enqueue ope
 
 Test Case 7: VM2VM vhost-user/virtio-user packed ring mergeable path and multi-queues payload check with dsa dpdk driver
 --------------------------------------------------------------------------------------------------------------------------
-This case uses testpmd to test the payload is valid after packets forwarding in vhost-user/virtio-user packed ring 
+This case uses testpmd to test that the payload is valid after packets forwarding in vhost-user/virtio-user packed ring
 mergeable path and multi-queues when vhost uses the asynchronous enqueue operations with dsa dpdk driver. Both iova as VA and PA mode test.
 
 1. bind 2 dsa device to vfio-pci like common step 1::
 
-	<dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci e7:01.0 ec:01.0 
+	<dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci e7:01.0 ec:01.0
 
 2. Launch vhost by below command::
 
@@ -666,7 +666,7 @@  mergeable path and multi-queues when vhost uses the asynchronous enqueue operati
 Test Case 9: VM2VM vhost-user/virtio-user packed ring vectorized-tx path and multi-queues indirect descriptor with dsa dpdk driver
 -----------------------------------------------------------------------------------------------------------------------------------
 This case uses testpmd to test the payload is valid and indirect descriptor after packets forwarding in vhost-user/virtio-user
-packed ring vectorized-tx path and multi-queues when vhost uses the asynchronous enqueue operations with dsa dpdk driver. 
+packed ring vectorized-tx path and multi-queues when vhost uses the asynchronous enqueue operations with dsa dpdk driver.
 Both iova as VA and PA mode test.
 
 1. bind 4 dsa device to vfio-pci like common step 1::
@@ -723,8 +723,8 @@  So check 512 packets and 112128 bytes received by virtio-user1 and 502 packets w
 
 Test Case 10: VM2VM vhost-user/virtio-user split ring non-mergeable path and multi-queues payload check with dsa kernel driver
 --------------------------------------------------------------------------------------------------------------------------------
-This case uses testpmd to test the payload is valid after packets forwarding in vhost-user/virtio-user split ring 
-non-mergeable path and multi-queues when vhost uses the asynchronous enqueue operations with dsa kernel driver. 
+This case uses testpmd to verify that the payload is valid after packet forwarding in the vhost-user/virtio-user split ring
+non-mergeable path and multi-queues when vhost uses asynchronous enqueue operations with the dsa kernel driver.
 
 1. bind 1 dsa device to idxd like common step 2::
 
@@ -805,7 +805,7 @@  non-mergeable path and multi-queues when vhost uses the asynchronous enqueue ope
 Test Case 11: VM2VM vhost-user/virtio-user split ring inorder non-mergeable path and multi-queues payload check with dsa kernel driver
 ----------------------------------------------------------------------------------------------------------------------------------------
 This case uses testpmd to verify that the payload is valid after packet forwarding in the vhost-user/virtio-user split ring inorder
-non-mergeable path and multi-queues when vhost uses the asynchronous enqueue operations with dsa kernel driver. 
+non-mergeable path and multi-queues when vhost uses asynchronous enqueue operations with the dsa kernel driver.
 
 1. bind 3 dsa devices to idxd like common step 2::
 
@@ -887,7 +887,7 @@  non-mergeable path and multi-queues when vhost uses the asynchronous enqueue ope
 Test Case 12: VM2VM vhost-user/virtio-user split ring inorder mergeable path and multi-queues non-indirect descriptor with dsa kernel driver
 ---------------------------------------------------------------------------------------------------------------------------------------------
 This case uses testpmd to verify that the payload is valid and non-indirect descriptors are used after packet forwarding in the vhost-user/virtio-user
-split ring inorder mergeable path and multi-queues when vhost uses the asynchronous enqueue operations with dsa kernel driver. 
+split ring inorder mergeable path and multi-queues when vhost uses asynchronous enqueue operations with the dsa kernel driver.
 
 1. bind 4 dsa devices to idxd like common step 2::
 
@@ -969,7 +969,7 @@  still need one ring put header. So check 504 packets and 48128 bytes received by
 Test Case 13: VM2VM vhost-user/virtio-user split ring mergeable path and multi-queues indirect descriptor with dsa kernel driver
 ----------------------------------------------------------------------------------------------------------------------------------
 This case uses testpmd to verify that the payload is valid and indirect descriptors are used after packet forwarding in the vhost-user/virtio-user
-split ring mergeable path and multi-queues when vhost uses the asynchronous enqueue operations with dsa kernel driver. 
+split ring mergeable path and multi-queues when vhost uses asynchronous enqueue operations with the dsa kernel driver.
 
 1. bind 4 dsa devices to idxd like common step 2::
 
@@ -1050,7 +1050,7 @@  So check 512 packets and 112128 bytes received by virtio-user1 and 502 packets w
 
 Test Case 14: VM2VM vhost-user/virtio-user packed ring non-mergeable path and multi-queues payload check with dsa kernel driver
 ----------------------------------------------------------------------------------------------------------------------------------
-This case uses testpmd to test the payload is valid after packets forwarding in vhost-user/virtio-user packed ring 
+This case uses testpmd to verify that the payload is valid after packet forwarding in the vhost-user/virtio-user packed ring
 non-mergeable path and multi-queues when vhost uses asynchronous enqueue operations with the dsa kernel driver.
 
 1. bind 3 dsa devices to idxd like common step 2::
@@ -1215,7 +1215,7 @@  non-mergeable path and multi-queues when vhost uses the asynchronous enqueue ope
 
 Test Case 16: VM2VM vhost-user/virtio-user packed ring mergeable path and multi-queues payload check with dsa kernel driver
 -----------------------------------------------------------------------------------------------------------------------------
-This case uses testpmd to test the payload is valid after packets forwarding in vhost-user/virtio-user packed ring 
+This case uses testpmd to verify that the payload is valid after packet forwarding in the vhost-user/virtio-user packed ring
 mergeable path and multi-queues when vhost uses asynchronous enqueue operations with the dsa kernel driver.
 
 1. bind 2 dsa devices to idxd::
diff --git a/test_plans/vm_hotplug_test_plan.rst b/test_plans/vm_hotplug_test_plan.rst
index 941ace7b..780806d3 100644
--- a/test_plans/vm_hotplug_test_plan.rst
+++ b/test_plans/vm_hotplug_test_plan.rst
@@ -81,7 +81,7 @@  and enable verbose output::
     testpmd> set verbose 1
     testpmd> start
 
-Send packets from tester, check RX could work successfully
+Send packets from the TG and check that RX works successfully
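 
 For example, packets can be sent from the TG with scapy (a minimal sketch; the
 destination MAC is an assumption and ens785f0 is an example TG interface name)::
 
     >>> sendp(Ether(dst="00:11:22:33:44:55")/IP()/UDP()/Raw('x'*64), iface="ens785f0", count=32)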
 
 Set txonly forward mode, send packet from testpmd, check TX could
 work successfully::
@@ -139,7 +139,7 @@  and enable verbose output::
     testpmd> set verbose 1
     testpmd> start
 
-Send packets from tester, check RX could work successfully
+Send packets from the TG and check that RX works successfully
 
 Set txonly forward mode, send packet from testpmd, check TX could
 work successfully::
@@ -208,7 +208,7 @@  and enable verbose output::
     testpmd> set verbose 1
     testpmd> start
 
-Send packets from tester, check RX could work successfully
+Send packets from the TG and check that RX works successfully
 Set txonly forward mode, send packet from testpmd, check TX could
 work successfully::
 
@@ -269,7 +269,7 @@  and enable verbose output::
     testpmd> set verbose 1
     testpmd> start
 
-Send packets from tester, check RX could work successfully
+Send packets from the TG and check that RX works successfully
 
 Set txonly forward mode, send packets from testpmd, check TX could
 work successfully::
diff --git a/test_plans/vm_pw_mgmt_policy_test_plan.rst b/test_plans/vm_pw_mgmt_policy_test_plan.rst
index 67fab261..9dc8402d 100644
--- a/test_plans/vm_pw_mgmt_policy_test_plan.rst
+++ b/test_plans/vm_pw_mgmt_policy_test_plan.rst
@@ -72,7 +72,7 @@  Prerequisites
 
 #. port topology diagram::
 
-       packet generator                         DUT
+       traffic generator                         SUT
         .-----------.                      .-----------.
         | .-------. |                      | .-------. |
         | | portA | | <------------------> | | port0 | |
@@ -199,8 +199,8 @@  Set up testing environment
 Test Case : time policy
 =======================
 Check the following:
-#. when dut clock is set to a desired busy hours, put core to max freq.
-#. when dut clock is set to a desired quiet hours, put core to min freq.
+#. when the SUT clock is set to the desired busy hours, put the core to max freq.
+#. when the SUT clock is set to the desired quiet hours, put the core to min freq.
 
 This case tests multiple dpdk-guest_cli options, which are composed
 of the items below::
@@ -223,25 +223,25 @@  example::
 
 steps:
 
-#. set DUT system time to desired time.
+#. set the SUT system time to the desired time.
 
 #. set up the testing environment referring to the ``Set up testing environment`` steps.
 
-#. trigger policy on vm DUT from dpdk-guest_cli console::
+#. trigger the policy on the vm SUT from the dpdk-guest_cli console::
 
     vmpower(guest)> send_policy now
 
-#. check DUT platform cores frequency, which are in vcpu-list.
+#. check the frequency of the SUT platform cores that are in the vcpu-list.
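 
    For example, the current frequency of the cores in the vcpu-list can be read
    from sysfs (a minimal sketch; the core numbers are assumptions)::
 
        cat /sys/devices/system/cpu/cpu{2,3}/cpufreq/scaling_cur_freq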
 
 
 Test Case : traffic policy
 ==========================
 Check the following:
-#. use packet generator to send a stream with a pps rate bigger 2000000,
+#. use the traffic generator to send a stream with a pps rate greater than 2000000,
 vcpu frequency will run at max frequency.
-#. use packet generator to send a stream with a pps rate between 1800000 and 2000000,
+#. use the traffic generator to send a stream with a pps rate between 1800000 and 2000000,
 vcpu frequency will run at med frequency.
-#. use packet generator to send a stream with a pps rate less than 1800000,
+#. use the traffic generator to send a stream with a pps rate less than 1800000,
 vcpu frequency will run at min frequency.
 
 This case tests multiple dpdk-guest_cli options, which are composited
@@ -263,14 +263,14 @@  steps:
 
 #. set up the testing environment referring to the ``Set up testing environment`` steps.
 
-#. trigger policy on vm DUT from dpdk-guest_cli console::
+#. trigger the policy on the vm SUT from the dpdk-guest_cli console::
 
     vmpower(guest)> send_policy now
 
 #. configure a stream in the traffic generator, set the traffic generator line rate
    to the desired pps and send packets continuously.
 
-#. check DUT platform cores frequency, which are in vcpu-list.
+#. check the frequency of the SUT platform cores that are in the vcpu-list.
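 
    For example, the frequency can be monitored while the stream is running
    (a sketch; the core number is an assumption)::
 
        watch -n 1 cat /sys/devices/system/cpu/cpu2/cpufreq/scaling_cur_freq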
 
 
 Test Case : disable CPU turbo
@@ -281,11 +281,11 @@  steps:
 
 #. set up the testing environment referring to the ``Set up testing environment`` steps.
 
-#. set cpu turbo disable on vm DUT from dpdk-guest_cli console::
+#. disable cpu turbo on the vm SUT from the dpdk-guest_cli console::
 
     vmpower(guest)> set_cpu_freq <core_num> disable_turbo
 
-#. verify the DUT physical CPU's turbo has been disable correctly, core frequency
+#. verify the SUT physical CPU's turbo has been disabled correctly; the core frequency
    should be the second highest value in scaling_available_frequencies::
 
     cat /sys/devices/system/cpu/cpu1/cpufreq/cpuinfo_cur_freq
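 
    The second highest value can be read from the available-frequencies list for
    comparison (a sketch, using the same example core)::
 
     cat /sys/devices/system/cpu/cpu1/cpufreq/scaling_available_frequencies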
@@ -299,11 +299,11 @@  steps:
 
 #. set up the testing environment referring to the ``Set up testing environment`` steps.
 
-#. set cpu turbo enable on vm DUT from dpdk-guest_cli console::
+#. enable cpu turbo on the vm SUT from the dpdk-guest_cli console::
 
     vmpower(guest)> set_cpu_freq <vm_core_num> enable_turbo
 
-#. Verify the DUT physical CPU's turbo has been enable correctly, core frequency
+#. Verify the SUT physical CPU's turbo has been enabled correctly; the core frequency
    should be the max value in scaling_available_frequencies::
 
     cat /sys/devices/system/cpu/cpu1/cpufreq/cpuinfo_cur_freq
diff --git a/test_plans/vswitch_pvp_multi_paths_performance_with_cbdma_test_plan.rst b/test_plans/vswitch_pvp_multi_paths_performance_with_cbdma_test_plan.rst
index 98f4dcea..11dce973 100644
--- a/test_plans/vswitch_pvp_multi_paths_performance_with_cbdma_test_plan.rst
+++ b/test_plans/vswitch_pvp_multi_paths_performance_with_cbdma_test_plan.rst
@@ -41,11 +41,11 @@  General set up
 --------------
 1. Compile DPDK and vhost example::
 
-	# meson <dpdk build dir>  
-	# meson configure -Dexamples=vhost <dpdk build dir> 
+	# meson <dpdk build dir>
+	# meson configure -Dexamples=vhost <dpdk build dir>
 	# ninja -C <dpdk build dir> -j 110
 
-2. Get the pci device id and DMA device id of DUT.
+2. Get the pci device id and DMA device id of the SUT.
 
 For example, 0000:18:00.0 is the pci device id and 0000:00:04.0 is the DMA device id::
 
@@ -54,21 +54,21 @@  For example, 0000:18:00.0 is pci device id, 0000:00:04.0 is DMA device id::
 	Network devices using kernel driver
 	===================================
 	0000:18:00.0 'Device 159b' if=ens785f0 drv=ice unused=vfio-pci
-	
+
 	DMA devices using kernel driver
 	===============================
 	0000:00:04.0 'Sky Lake-E CBDMA Registers 2021' drv=ioatdma unused=vfio-pci
 
 Test case
-=========	
+=========
 
 Common steps
 ------------
 1. Bind one physical port and one CBDMA port to vfio-pci::
 
-	<dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci <DUT port pci device id>
-	<dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci <DUT port DMA device id>
-	
+	<dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci <SUT port pci device id>
+	<dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci <SUT port DMA device id>
+
 	For example::
 	./usertools/dpdk-devbind.py -b vfio-pci 0000:00:18.0
 	./usertools/dpdk-devbind.py -b vfio-pci 0000:00:04.0
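 
 	To confirm the binding took effect, list the device status again (a sketch):
 	./usertools/dpdk-devbind.py -s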
@@ -86,7 +86,7 @@  Common steps
 
 Test Case 1: Vswitch PVP split ring inorder mergeable path performance with CBDMA
 ---------------------------------------------------------------------------------
-This case uses Vswitch and Traffic generator(For example, Trex) to test performance of split ring inorder mergeable path with CBDMA. 
+This case uses Vswitch and a traffic generator (for example, Trex) to test the performance of the split ring inorder mergeable path with CBDMA.
 
 1. Bind one physical port and one CBDMA port to vfio-pci as common step 1.
 
@@ -99,7 +99,7 @@  This case uses Vswitch and Traffic generator(For example, Trex) to test performa
 
 	<dpdk dir>#./<dpdk build dir>/app/dpdk-testpmd -l 5-6 -n 4 --no-pci --file-prefix=testpmd0  \
 	--vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost-net,queues=1,packed_vq=0,mrg_rxbuf=1,in_order=1 \
-	-- -i --rxq=1 --txq=1 --txd=1024 --rxd=1024 --nb-cores=1 
+	-- -i --rxq=1 --txq=1 --txd=1024 --rxd=1024 --nb-cores=1
 
 4. Send packets from virtio-user to let vswitch know the mac addr::
 
@@ -127,7 +127,7 @@  This case uses Vswitch and Traffic generator(For example, Trex) to test performa
 
 	<dpdk dir>#./<dpdk build dir>/app/dpdk-testpmd -l 5-6 -n 4 --no-pci --file-prefix=testpmd0  \
 	--vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost-net,queues=1,packed_vq=0,mrg_rxbuf=0,in_order=1 \
-	-- -i --rxq=1 --txq=1 --txd=1024 --rxd=1024 --nb-cores=1 
+	-- -i --rxq=1 --txq=1 --txd=1024 --rxd=1024 --nb-cores=1
 
 4. Send packets from virtio-user to let vswitch know the mac addr::
 
@@ -139,7 +139,7 @@  This case uses Vswitch and Traffic generator(For example, Trex) to test performa
 5. Send packets by traffic generator as common step 2, and check the throughput with below command::
 
 	testpmd> show port stats all
-	
+
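 Optionally, the port counters can be reset before each measurement so the reading
 covers only the current run (a sketch using testpmd's built-in stats commands)::
 
 	testpmd> clear port stats all
 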
 Test Case 3: Vswitch PVP split ring mergeable path performance with CBDMA
 -------------------------------------------------------------------------
 This case uses Vswitch and a traffic generator (for example, Trex) to test the performance of the split ring mergeable path with CBDMA.
@@ -155,7 +155,7 @@  This case uses Vswitch and Traffic generator(For example, Trex) to test performa
 
 	<dpdk dir>#./<dpdk build dir>/app/dpdk-testpmd -l 5-6 -n 4 --no-pci --file-prefix=testpmd0  \
 	--vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost-net,queues=1,packed_vq=0,mrg_rxbuf=1,in_order=0 \
-	-- -i --rxq=1 --txq=1 --txd=1024 --rxd=1024 --nb-cores=1 
+	-- -i --rxq=1 --txq=1 --txd=1024 --rxd=1024 --nb-cores=1
 
 4. Send packets from virtio-user to let vswitch know the mac addr::
 
@@ -166,8 +166,8 @@  This case uses Vswitch and Traffic generator(For example, Trex) to test performa
 
 5. Send packets by traffic generator as common step 2, and check the throughput with below command::
 
-	testpmd> show port stats all	
-	
+	testpmd> show port stats all
+
 Test Case 4: Vswitch PVP split ring non-mergeable path performance with CBDMA
 -----------------------------------------------------------------------------
 This case uses Vswitch and a traffic generator (for example, Trex) to test the performance of the split ring non-mergeable path with CBDMA.
@@ -183,7 +183,7 @@  This case uses Vswitch and Traffic generator(For example, Trex) to test performa
 
 	<dpdk dir>#./<dpdk build dir>/app/dpdk-testpmd -l 5-6 -n 4 --no-pci --file-prefix=testpmd0  \
 	--vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost-net,queues=1,packed_vq=0,mrg_rxbuf=0,in_order=0 \
-	-- -i --enable-hw-vlan-strip --rxq=1 --txq=1 --txd=1024 --rxd=1024 --nb-cores=1 
+	-- -i --enable-hw-vlan-strip --rxq=1 --txq=1 --txd=1024 --rxd=1024 --nb-cores=1
 
 4. Send packets from virtio-user to let vswitch know the mac addr::
 
@@ -194,7 +194,7 @@  This case uses Vswitch and Traffic generator(For example, Trex) to test performa
 
 5. Send packets by traffic generator as common step 2, and check the throughput with below command::
 
-	testpmd> show port stats all		
+	testpmd> show port stats all
 
 Test Case 5: Vswitch PVP split ring vectorized path performance with CBDMA
 --------------------------------------------------------------------------
@@ -211,7 +211,7 @@  This case uses Vswitch and Traffic generator(For example, Trex) to test performa
 
 	<dpdk dir>#./<dpdk build dir>/app/dpdk-testpmd -l 5-6 -n 4 --no-pci --file-prefix=testpmd0 \
 	--vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost-net,queues=1,packed_vq=0,mrg_rxbuf=0,in_order=1,vectorized=1 \
-	-- -i --rxq=1 --txq=1 --txd=1024 --rxd=1024 --nb-cores=1 
+	-- -i --rxq=1 --txq=1 --txd=1024 --rxd=1024 --nb-cores=1
 
 4. Send packets from virtio-user to let vswitch know the mac addr::
 
@@ -222,9 +222,9 @@  This case uses Vswitch and Traffic generator(For example, Trex) to test performa
 
 5. Send packets by traffic generator as common step 2, and check the throughput with below command::
 
-	testpmd> show port stats all		
-	
-	
+	testpmd> show port stats all
+
+
 Test Case 6: Vswitch PVP packed ring inorder mergeable path performance with CBDMA
 ----------------------------------------------------------------------------------
 This case uses Vswitch and a traffic generator (for example, Trex) to test the performance of the packed ring inorder mergeable path with CBDMA.
@@ -240,7 +240,7 @@  This case uses Vswitch and Traffic generator(For example, Trex) to test performa
 
 	<dpdk dir>#./<dpdk build dir>/app/dpdk-testpmd -l 5-6 -n 4 --no-pci --file-prefix=testpmd0  \
 	--vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost-net,queues=1,packed_vq=1,mrg_rxbuf=1,in_order=1 \
-	-- -i --rxq=1 --txq=1 --txd=1024 --rxd=1024 --nb-cores=1 
+	-- -i --rxq=1 --txq=1 --txd=1024 --rxd=1024 --nb-cores=1
 
 4. Send packets from virtio-user to let vswitch know the mac addr::
 
@@ -268,7 +268,7 @@  This case uses Vswitch and Traffic generator(For example, Trex) to test performa
 
 	<dpdk dir>#./<dpdk build dir>/app/dpdk-testpmd -l 5-6 -n 4 --no-pci --file-prefix=testpmd0  \
 	--vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost-net,queues=1,packed_vq=1,mrg_rxbuf=0,in_order=1 \
-	-- -i --rxq=1 --txq=1 --txd=1024 --rxd=1024 --nb-cores=1 
+	-- -i --rxq=1 --txq=1 --txd=1024 --rxd=1024 --nb-cores=1
 
 4. Send packets from virtio-user to let vswitch know the mac addr::
 
@@ -280,7 +280,7 @@  This case uses Vswitch and Traffic generator(For example, Trex) to test performa
 5. Send packets by traffic generator as common step 2, and check the throughput with below command::
 
 	testpmd> show port stats all
-	
+
 Test Case 8: Vswitch PVP packed ring mergeable path performance with CBDMA
 --------------------------------------------------------------------------
 This case uses Vswitch and a traffic generator (for example, Trex) to test the performance of the packed ring mergeable path with CBDMA.
@@ -307,8 +307,8 @@  This case uses Vswitch and Traffic generator(For example, Trex) to test performa
 
 5. Send packets by traffic generator as common step 2, and check the throughput with below command::
 
-	testpmd> show port stats all	
-	
+	testpmd> show port stats all
+
 Test Case 9: Vswitch PVP packed ring non-mergeable path performance with CBDMA
 ------------------------------------------------------------------------------
 This case uses Vswitch and a traffic generator (for example, Trex) to test the performance of the packed ring non-mergeable path with CBDMA.
@@ -324,7 +324,7 @@  This case uses Vswitch and Traffic generator(For example, Trex) to test performa
 
 	<dpdk dir>#./<dpdk build dir>/app/dpdk-testpmd -l 5-6 -n 4 --no-pci --file-prefix=testpmd0  \
 	--vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost-net,queues=1,packed_vq=1,mrg_rxbuf=0,in_order=0 \
-	-- -i --rxq=1 --txq=1 --txd=1024 --rxd=1024 --nb-cores=1 
+	-- -i --rxq=1 --txq=1 --txd=1024 --rxd=1024 --nb-cores=1
 
 4. Send packets from virtio-user to let vswitch know the mac addr::
 
@@ -335,7 +335,7 @@  This case uses Vswitch and Traffic generator(For example, Trex) to test performa
 
 5. Send packets by traffic generator as common step 2, and check the throughput with below command::
 
-	testpmd> show port stats all		
+	testpmd> show port stats all
 
 Test Case 10: Vswitch PVP packed ring vectorized path performance with CBDMA
 ----------------------------------------------------------------------------
@@ -363,4 +363,4 @@  This case uses Vswitch and Traffic generator(For example, Trex) to test performa
 
 5. Send packets by traffic generator as common step 2, and check the throughput with below command::
 
-	testpmd> show port stats all		
+	testpmd> show port stats all
diff --git a/test_plans/vswitch_sample_cbdma_test_plan.rst b/test_plans/vswitch_sample_cbdma_test_plan.rst
index c207d842..2ad7ce73 100644
--- a/test_plans/vswitch_sample_cbdma_test_plan.rst
+++ b/test_plans/vswitch_sample_cbdma_test_plan.rst
@@ -37,7 +37,7 @@  Test Case1: PVP performance check with CBDMA channel using vhost async driver
 	testpmd>set fwd mac
 	testpmd>start tx_first
 
-5. Inject pkts (packets length=64...1518) separately with dest_mac=virtio_mac_address (specific in above cmd with 00:11:22:33:44:10) to NIC using packet generator, record pvp (PG>nic>vswitch>virtio-user>vswitch>nic>PG) performance number can get expected.
+5. Inject pkts (packet lengths=64...1518) separately with dest_mac=virtio_mac_address (specified in the above cmd as 00:11:22:33:44:10) to the NIC using the traffic generator, and record the pvp (TG>nic>vswitch>virtio-user>vswitch>nic>TG) performance numbers, checking that the expected values are reached.
 
 6. Quit and re-launch virtio-user with a packed ring size that is not a power of 2::
 
@@ -67,7 +67,7 @@  Test Case2: PVP test with two VM and two CBDMA channels using vhost async driver
 
 	./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 29-30 -n 4 --no-pci --file-prefix=testpmd0 \
 	--vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost-net0,queues=1,server=1,mrg_rxbuf=1,in_order=0,packed_vq=1 -- -i --rxq=1 --txq=1 --txd=1024 --rxd=1024 --nb-cores=1
-	
+
 	./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 31-32 -n 4 --no-pci --file-prefix=testpmd1 \
 	--vdev=net_virtio_user0,mac=00:11:22:33:44:11,path=/tmp/vhost-net1,queues=1,server=1,mrg_rxbuf=1,in_order=1,vectorized=1 -- -i --rxq=1 --txq=1 --txd=1024 --rxd=1024 --nb-cores=1
 
@@ -78,7 +78,7 @@  Test Case2: PVP test with two VM and two CBDMA channels using vhost async driver
 	testpmd1>start tx_first
 	testpmd1>start tx_first
 
-5. Inject IMIX packets (64b...1518b) with dest_mac=virtio_mac_address (00:11:22:33:44:10 and 00:11:22:33:44:11) to NIC using packet generator,record performance number can get expected from Packet generator rx side.
+5. Inject IMIX packets (64b...1518b) with dest_mac=virtio_mac_address (00:11:22:33:44:10 and 00:11:22:33:44:11) to the NIC using the traffic generator, and record the performance numbers, checking that the expected values are reached on the traffic generator rx side.
 
 6. Stop dpdk-vhost side and relaunch it with same cmd as step2.
 
@@ -89,7 +89,7 @@  Test Case2: PVP test with two VM and two CBDMA channels using vhost async driver
     testpmd1>stop
     testpmd1>start tx_first
 
-8. Inject IMIX packets (64b...1518b) with dest_mac=virtio_mac_address (00:11:22:33:44:10 and 00:11:22:33:44:11) to NIC using packet generator, ensure get same throughput as step5.
+8. Inject IMIX packets (64b...1518b) with dest_mac=virtio_mac_address (00:11:22:33:44:10 and 00:11:22:33:44:11) to the NIC using the traffic generator, and ensure the same throughput as in step 5 is reached.
 
 Test Case3: VM2VM forwarding test with two CBDMA channels
 =========================================================
diff --git a/test_plans/vswitch_sample_dsa_test_plan.rst b/test_plans/vswitch_sample_dsa_test_plan.rst
index e9d74d33..9add1b98 100644
--- a/test_plans/vswitch_sample_dsa_test_plan.rst
+++ b/test_plans/vswitch_sample_dsa_test_plan.rst
@@ -11,7 +11,7 @@  For more about vswitch example, please refer to the DPDK docment:http://doc.dpdk
 
 Note:
 1. For packed virtqueue virtio-net tests, qemu version > 4.2.0 and VM kernel version > v5.1 are needed, and packed ring multi-queues do not support reconnect in qemu yet.
-2.For split virtqueue virtio-net with multi-queues server mode test, need qemu version >= 5.2.0, dut to old qemu exist reconnect issue when multi-queues test.
+2. For split virtqueue virtio-net with multi-queues server mode tests, qemu version >= 5.2.0 is needed, due to old qemu having a reconnect issue in multi-queues tests.
 3. The suite has not yet been automated.
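 
 The qemu version in use can be confirmed before testing (a minimal sketch)::
 
 	qemu-system-x86_64 --version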
 
 Prerequisites
@@ -34,7 +34,7 @@  General set up
 	CC=gcc meson --werror -Denable_kmods=True -Dlibdir=lib -Dexamples=all --default-library=static x86_64-native-linuxapp-gcc
 	ninja -C x86_64-native-linuxapp-gcc -j 110
 
-2. Get the PCI device ID and DSA device ID of DUT, for example, 0000:4f:00.1 is PCI device ID, 0000:6a:01.0 - 0000:f6:01.0 are DSA device IDs::
+2. Get the PCI device ID and DSA device ID of the SUT, for example, 0000:4f:00.1 is the PCI device ID and 0000:6a:01.0 - 0000:f6:01.0 are the DSA device IDs::
 
 	<dpdk dir># ./usertools/dpdk-devbind.py -s
 
@@ -60,13 +60,13 @@  Common steps
 ------------
 1. Bind 1 NIC port to vfio-pci::
 
-	<dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci <DUT port pci device id>
+	<dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci <SUT port pci device id>
 	For example:
 	<dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci 0000:00:4f.1
 
 2. Bind DSA devices to the DPDK vfio-pci driver::
 
-	<dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci <DUT port DSA device id>
+	<dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci <SUT port DSA device id>
 	For example, bind 2 DMA devices to vfio-pci driver:
 	<dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci 0000:e7:01.0 0000:ec:01.0
 
@@ -99,7 +99,7 @@  Common steps
 
 Test Case 1: PVP performance vhost async enqueue operation with dsa dpdk channel
 ---------------------------------------------------------------------------------
-This case uses vhost example to test performance of split and packed ring when vhost uses the asynchronous enqueue operations 
+This case uses the vhost example to test the performance of split and packed rings when vhost uses asynchronous enqueue operations
 with the dsa dpdk driver in a PVP topology environment.
 
 1. Bind one physical port (4f:00.1) and one dsa device (6a:01.0) to vfio-pci like common steps 1-2.
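 
    A sketch, using the example addresses above::
 
 	<dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci 4f:00.1
 	<dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci 6a:01.0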
@@ -122,7 +122,7 @@  with dsa dpdk driver in PVP topology environment.
 	testpmd>stop
 	testpmd>start
 
-5. Inject pkts (packets length=64...1518) separately with dest_mac=virtio_mac_addresss (specific in above cmd with 00:11:22:33:44:10) to NIC using packet generator, record pvp (PG>nic>vswitch>virtio-user>vswitch>nic>PG) performance number can get expected.
+5. Inject pkts (packet lengths=64...1518) separately with dest_mac=virtio_mac_address (specified in the above cmd as 00:11:22:33:44:10) to the NIC using the traffic generator, and record the pvp (TG>nic>vswitch>virtio-user>vswitch>nic>TG) performance numbers, checking that the expected values are reached.
 
 6. Quit and re-launch virtio-user with a packed ring size that is not a power of 2::
 
@@ -140,7 +140,7 @@  with dsa dpdk driver in PVP topology environment.
 
 Test Case 2: PVP vhost async enqueue operation with two VM and 2 dsa channels
 ------------------------------------------------------------------------------
-This case uses vhost example to test split and packed ring when vhost uses the asynchronous enqueue operations 
+This case uses the vhost example to test split and packed rings when vhost uses asynchronous enqueue operations
 with the dsa dpdk driver in a PVP topology environment with 2 VMs and 2 queues.
 
 1. Bind one physical port and 2 dsa devices to vfio-pci like common step 1-2.
@@ -169,7 +169,7 @@  with dsa dpdk driver in PVP topology environment with 2 VM and 2 queues.
 	testpmd1>stop
 	testpmd1>start
 
-5. Inject IMIX packets (64b...1518b) with dest_mac=virtio_mac_addresss (00:11:22:33:44:10 and 00:11:22:33:44:11) to NIC using packet generator,record performance number can get expected from Packet generator rx side.
+5. Inject IMIX packets (64b...1518b) with dest_mac=virtio_mac_address (00:11:22:33:44:10 and 00:11:22:33:44:11) to the NIC using the traffic generator, and record the performance numbers, checking that the expected values are reached on the traffic generator rx side.
 
 6. Stop dpdk-vhost side and relaunch it with same cmd as step2.
 
@@ -184,7 +184,7 @@  with dsa dpdk driver in PVP topology environment with 2 VM and 2 queues.
 	testpmd1>stop
 	testpmd1>start
 
-8. Inject IMIX packets (64b...1518b) with dest_mac=virtio_mac_addresss (00:11:22:33:44:10 and 00:11:22:33:44:11) to NIC using packet generator, ensure get same throughput as step5.
+8. Inject IMIX packets (64b...1518b) with dest_mac=virtio_mac_address (00:11:22:33:44:10 and 00:11:22:33:44:11) to the NIC using the traffic generator, and ensure the same throughput as in step 5 is reached.
 
 Test Case 3: VM2VM virtio-user forwarding test with 2 dsa dpdk channels
 -------------------------------------------------------------------------
@@ -238,12 +238,12 @@  asynchronous enqueue operations with dsa dpdk driver in VM2VM virtio-user topolo
 
 Test Case 4: VM2VM virtio-pmd test with 2 dsa channels register/unregister stable check
 -------------------------------------------------------------------------------------------------
-This case checks vhost can work stably after registering and unregistering the virtio port many times when vhost uses 
+This case checks that vhost can work stably after registering and unregistering the virtio port many times when vhost uses
 asynchronous enqueue operations with the dsa dpdk driver in a VM2VM topology environment with 2 queues.
 
 1. Bind one physical port and one dsa device to vfio-pci like common step 1-2::
 
-	<dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci 4f:00.1 
+	<dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci 4f:00.1
 	<dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci 6a:01.0
 
 2. On host, launch dpdk-vhost by below command::
@@ -329,7 +329,7 @@  dsa dpdk driver in VM2VM topology environment with 2 queues.
 
 1. Bind one physical port and 1 dsa device to vfio-pci like common step 1-2::
 
-	<dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci 4f:00.1 
+	<dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci 4f:00.1
 	<dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci 6a:01.0
 
 2. On host, launch dpdk-vhost by below command::
@@ -403,7 +403,7 @@  VM2VM topology environment with 2 queues.
 1. Bind one physical port and 1 dsa device to vfio-pci like common step 1-2::
 
 	<dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci 4f:00.1
-	<dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci 6a:01.0 
+	<dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci 6a:01.0
 
 2. Launch dpdk-vhost by below command::
 
@@ -468,7 +468,7 @@  with dsa kernel driver in PVP topology environment.
 1. Bind one physical port (4f:00.1) to vfio-pci and one dsa device (6a:01.0) to idxd like common steps 1 and 3::
 
 	<dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci 4f:00.1
-	
+
 	ls /dev/dsa #check wq configuration; reset if it exists
 	<dpdk dir># ./usertools/dpdk-devbind.py -u 6a:01.0
 	<dpdk dir># ./usertools/dpdk-devbind.py -b idxd 6a:01.0
@@ -493,7 +493,7 @@  with dsa kernel driver in PVP topology environment.
 	testpmd>stop
 	testpmd>start
 
-5. Inject pkts (packets length=64...1518) separately with dest_mac=virtio_mac_addresss (specific in above cmd with 00:11:22:33:44:10) to NIC using packet generator, record pvp (PG>nic>vswitch>virtio-user>vswitch>nic>PG) performance number can get expected.
+5. Inject pkts (packet lengths=64...1518) separately with dest_mac=virtio_mac_address (specified in the above cmd as 00:11:22:33:44:10) to the NIC using the traffic generator, and record the pvp (TG>nic>vswitch>virtio-user>vswitch>nic>TG) performance numbers, checking that the expected values are reached.
 
 6. Quit and re-launch virtio-user with a packed ring size that is not a power of 2::
 
@@ -549,7 +549,7 @@  with dsa kernel driver in PVP topology environment with 2 VM and 2 queues.
 	testpmd1>stop
 	testpmd1>start
 
-5. Inject IMIX packets (64b...1518b) with dest_mac=virtio_mac_addresss (00:11:22:33:44:10 and 00:11:22:33:44:11) to NIC using packet generator,record performance number can get expected from Packet generator rx side.
+5. Inject IMIX packets (64b...1518b) with dest_mac=virtio_mac_address (00:11:22:33:44:10 and 00:11:22:33:44:11) to the NIC using the traffic generator, and record the performance numbers, checking that the expected values are reached on the traffic generator rx side.
 
 6. Stop dpdk-vhost side and relaunch it with same cmd as step2.
 
@@ -564,7 +564,7 @@  with dsa kernel driver in PVP topology environment with 2 VM and 2 queues.
 	testpmd1>stop
 	testpmd1>start
 
-8. Inject IMIX packets (64b...1518b) with dest_mac=virtio_mac_addresss (00:11:22:33:44:10 and 00:11:22:33:44:11) to NIC using packet generator, ensure get same throughput as step5.
+8. Inject IMIX packets (64b...1518b) with dest_mac=virtio_mac_address (00:11:22:33:44:10 and 00:11:22:33:44:11) to the NIC using the traffic generator, and ensure the same throughput as in step 5 is reached.
 
 Test Case 9: VM2VM virtio-user forwarding test with 2 dsa kernel channels
 ---------------------------------------------------------------------------------
@@ -724,7 +724,7 @@  dsa kernel driver in VM2VM topology environment with 2 queues.
 1. Bind one physical port to vfio-pci and 2 dsa devices to idxd like common steps 1 and 3::
 
 	<dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci 4f:00.1
-	
+
 	ls /dev/dsa #check wq configuration; reset if it exists
 	<dpdk dir># ./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0
 	<dpdk dir># ./usertools/dpdk-devbind.py -b idxd 6a:01.0 6f:01.0
@@ -792,8 +792,8 @@  dsa kernel driver in VM2VM topology environment with 2 queues.
 	<dpdk dir># ./<dpdk build dir>/examples/dpdk-vhost -l 2-3 -n 4 -a 0000:4f:00.1 \
 	-- -p 0x1 --mergeable 1 --vm2vm 1 --socket-file /tmp/vhost-net0 --socket-file /tmp/vhost-net1 \
 	--dmas [txd0@wq0.0,txd1@wq2.1] --client
-	
-12. Rerun step 7-9 five times.	
+
+12. Rerun steps 7-9 five times.
 
 Test Case 12: VM2VM packed ring with 2 dsa kernel channels stable test with iperf
 ----------------------------------------------------------------------------------
@@ -803,10 +803,10 @@  VM2VM topology environment with 2 queues.
 1. Bind one physical port to vfio-pci and 1 dsa device to idxd like common steps 1 and 3::
 
 	<dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci 4f:00.1
-	
+
 	ls /dev/dsa #check wq configuration; reset if it exists
 	<dpdk dir># ./usertools/dpdk-devbind.py -u 6a:01.0
-	<dpdk dir># ./usertools/dpdk-devbind.py -b idxd 6a:01.0 
+	<dpdk dir># ./usertools/dpdk-devbind.py -b idxd 6a:01.0
 	<dpdk dir># ./<dpdk build dir>/drivers/dma/idxd/dpdk_idxd_cfg.py -q 2 0
 	ls /dev/dsa #check wq configured successfully
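 	# with 2 queues configured on device 0 as above, an illustrative (assumed) listing is: wq0.0  wq0.1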
 
diff --git a/test_plans/vxlan_gpe_support_in_i40e_test_plan.rst b/test_plans/vxlan_gpe_support_in_i40e_test_plan.rst
index 2f4ea23f..3ec9d815 100644
--- a/test_plans/vxlan_gpe_support_in_i40e_test_plan.rst
+++ b/test_plans/vxlan_gpe_support_in_i40e_test_plan.rst
@@ -8,9 +8,9 @@  I40E VXLAN-GPE Support Tests
 Prerequisites
 =============
 
-1. The DUT has at least 2 DPDK supported I40E NIC ports::
+1. The SUT has at least 2 DPDK-supported I40E NIC ports::
 
-    Tester      DUT
+    TG          SUT
     eth1  <---> PORT 0
     eth2  <---> PORT 1
 
diff --git a/test_plans/vxlan_test_plan.rst b/test_plans/vxlan_test_plan.rst
index d59cee31..c9bab8f8 100644
--- a/test_plans/vxlan_test_plan.rst
+++ b/test_plans/vxlan_test_plan.rst
@@ -22,7 +22,7 @@  optical ports per NIC) plugged into the available PCIe Gen3 8-lane slot.
 1x Intel® XL710-DA4 (Eagle Fountain) (1x 10GbE full duplex optical ports per NIC)
 plugged into the available PCIe Gen3 8-lane slot.
 
-DUT board must be two sockets system and each cpu have more than 8 lcores.
+The SUT board must be a two-socket system and each CPU must have more than 8 lcores.
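 
 The socket and lcore counts can be verified with lscpu (a minimal sketch)::
 
     lscpu | grep -E "Socket\(s\)|^CPU\(s\)"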
 
 Test Case: Vxlan ipv4 packet detect
 ===================================