From patchwork Tue Mar 21 17:40:12 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: "Jiale, SongX"
X-Patchwork-Id: 125365
From: Song Jiale
To: dts@dpdk.org
Cc: Song Jiale
Subject: [dts] [PATCH V3 6/7] test_plans/vf_pmd_stacked_bonded: add cases to test vf bonded
Date: Tue, 21 Mar 2023 17:40:12 +0000
Message-Id: <20230321174013.3479335-7-songx.jiale@intel.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230321174013.3479335-1-songx.jiale@intel.com>
References: <20230321174013.3479335-1-songx.jiale@intel.com>

add cases to test vf bonded.

Signed-off-by: Song Jiale
---
 .../vf_pmd_stacked_bonded_test_plan.rst       | 416 ++++++++++++++++++
 1 file changed, 416 insertions(+)
 create mode 100644 test_plans/vf_pmd_stacked_bonded_test_plan.rst

diff --git a/test_plans/vf_pmd_stacked_bonded_test_plan.rst b/test_plans/vf_pmd_stacked_bonded_test_plan.rst
new file mode 100644
index 00000000..9c9d9d2b
--- /dev/null
+++ b/test_plans/vf_pmd_stacked_bonded_test_plan.rst
@@ -0,0 +1,416 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+   Copyright(c) 2023 Intel Corporation
+
+=================
+VF Stacked Bonded
+=================
+
+The stacked bonding mechanism allows a bonded port to be added as a slave of
+another bonded port.
+
+The demand arises from a discussion with a prospective customer for a 100G NIC
+based on RRC. The customer already uses Mellanox 100G NICs. Mellanox 100G NICs
+support a proper x16 PCIe interface, so the host sees a single netdev and that
+netdev corresponds directly to the 100G Ethernet port. They indicated that in
+their current system they bond multiple 100G NICs together using the DPDK
+bonding API in their application. They are interested in an alternative source
+for the 100G NIC and are in conversation with Silicom, who are shipping a 100G
+RRC-based NIC (something like Boulder Rapids). The issue they have with the RRC
+NIC is that it presents as two PCIe interfaces (netdevs) instead of one. If
+DPDK bonding could operate at the first level on the two RRC netdevs to present
+a single netdev, the application could then bond multiple of these bonded
+interfaces to implement NIC bonding.
+
+Prerequisites
+=============
+
+hardware configuration
+----------------------
+
+All linked ports of the tester/DUT should run at the same data rate and support
+full-duplex.
+
+NIC/DUT/TESTER port requirements:
+
+- Tester: 2/4 ports of NIC
+- DUT: 2/4 ports of NIC
+
+enable ``link-down-on-close`` on the tester::
+
+    ethtool --set-priv-flags {tport_iface0} link-down-on-close on
+    ethtool --set-priv-flags {tport_iface1} link-down-on-close on
+    ethtool --set-priv-flags {tport_iface2} link-down-on-close on
+    ethtool --set-priv-flags {tport_iface3} link-down-on-close on
+
+create 1 VF on each of the 4 DUT ports::
+
+    echo 1 > /sys/bus/pci/devices/0000\:31\:00.0/sriov_numvfs
+    echo 1 > /sys/bus/pci/devices/0000\:31\:00.1/sriov_numvfs
+    echo 1 > /sys/bus/pci/devices/0000\:31\:00.2/sriov_numvfs
+    echo 1 > /sys/bus/pci/devices/0000\:31\:00.3/sriov_numvfs
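+
+The VF PCI addresses that ``dpdk-devbind.py`` binds in the test steps below can
+be read back from each PF's ``virtfn`` link once the VFs exist. A minimal
+sketch, assuming the example PF addresses used above::
+
+    for pf in 0000:31:00.0 0000:31:00.1 0000:31:00.2 0000:31:00.3; do
+        # virtfn0 is a symlink to the PCI device of VF 0 on this PF
+        basename $(readlink /sys/bus/pci/devices/$pf/virtfn0)
+    done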
+
+disable spoofchk for each VF::
+
+    ip link set dev {pf0_iface} vf 0 spoofchk off
+    ip link set dev {pf1_iface} vf 0 spoofchk off
+    ip link set dev {pf2_iface} vf 0 spoofchk off
+    ip link set dev {pf3_iface} vf 0 spoofchk off
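+
+Whether spoofchk is really off can be confirmed from the PF side. A minimal
+sketch, reusing the ``{pf0_iface}`` placeholder above (the exact fields shown
+depend on the driver and iproute2 version)::
+
+    # the "vf 0" line should report "spoof checking off"
+    ip link show {pf0_iface}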
+
+port topology diagram (4 peer links)::
+
+    TESTER                                      DUT
+                 physical link              logical link
+    .---------.                .------------------------------------------------.
+    | portA 0 | <------------> | portB pf0vf0 <---> .--------.                  |
+    |         |                |                    | bond 0 | <-----> .------. |
+    | portA 1 | <------------> | portB pf1vf0 <---> '--------'         |      | |
+    |         |                |                                       |bond2 | |
+    | portA 2 | <------------> | portB pf2vf0 <---> .--------.         |      | |
+    |         |                |                    | bond 1 | <-----> '------' |
+    | portA 3 | <------------> | portB pf3vf0 <---> '--------'                  |
+    '---------'                '------------------------------------------------'
+
+Test cases
+==========
+The ``tx-offloads`` value is set based on the NIC type. The steps of the test
+cases that cover slave-down testing are based on 4 ports; the steps of the
+other test cases are based on 2 ports.
+
+Test Case: basic behavior
+=========================
+Adding a bonded port as a slave of another bonded port is supported by the
+following modes::
+
+    balance-rr    0
+    active-backup 1
+    balance-xor   2
+    broadcast     3
+    balance-tlb   5
+    balance-alb   6
+
+#. 802.3ad mode is not supported if one or more slaves is a bond device.
+#. add the same device twice to check that the exception handling is correct.
+#. the master bonded port and each of its slaves have the same queue configuration.
+
+steps
+-----
+
+#. bind two ports::
+
+    ./usertools/dpdk-devbind.py --bind=vfio-pci {vf0_pci} {vf1_pci}
+
+#. boot up testpmd, stop all ports::
+
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x6 -n 4 -- -i --tx-offloads=<0xXXXX>
+    testpmd> port stop all
+
+#. create the first bonded port and add one slave, check the bond 2 config status::
+
+    testpmd> create bonded device 0 0
+    testpmd> add bonding slave 0 2
+    testpmd> show bonding config 2
+
+#. create the second bonded port and add one slave, check the bond 3 config status::
+
+    testpmd> create bonded device 0 0
+    testpmd> add bonding slave 1 3
+    testpmd> show bonding config 3
+
+#. create the third bonded port and add the first/second bonded ports as its slaves,
+   then check that the slaves were added successfully. Stacked bonding is forbidden
+   by mode 4; mode 4 will fail to add a bonded port as its slave::
+
+    testpmd> create bonded device 0 0
+    testpmd> add bonding slave 2 4
+    testpmd> add bonding slave 3 4
+    testpmd> show bonding config 4
+
+#. check that the master bonded port and the slave ports have the same queue
+   configuration::
+
+    testpmd> show bonding config 0
+    testpmd> show bonding config 1
+    testpmd> show bonding config 2
+    testpmd> show bonding config 3
+    testpmd> show bonding config 4
+
+#. start the top-level bonded port to check the port start action::
+
+    testpmd> port start 4
+    testpmd> start
+
+#. close testpmd::
+
+    testpmd> stop
+    testpmd> quit
+
+#. repeat the above steps with the following mode numbers::
+
+    balance-rr    0
+    active-backup 1
+    balance-xor   2
+    broadcast     3
+    802.3ad       4
+    balance-tlb   5
+
+Test Case: active-backup stacked bonded rx traffic
+==================================================
+Set up DUT/testpmd stacked bonded ports, send TCP packets with scapy and check
+the testpmd packet statistics.
+
+steps
+-----
+
+#. bind two ports::
+
+    ./usertools/dpdk-devbind.py --bind=vfio-pci {vf0_pci} {vf1_pci}
+
+#. boot up testpmd, stop all ports::
+
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x6 -n 4 -- -i --tx-offloads=<0xXXXX>
+    testpmd> port stop all
+
+#. create the first bonded port and add one port as a slave::
+
+    testpmd> create bonded device 1 0
+    testpmd> add bonding slave 0 2
+
+#. create the second bonded port and add one port as a slave::
+
+    testpmd> create bonded device 1 0
+    testpmd> add bonding slave 1 3
+
+#. create the third bonded port and add the first/second bonded ports as its slaves,
+   check that the slaves were added successfully::
+
+    testpmd> create bonded device 1 0
+    testpmd> add bonding slave 2 4
+    testpmd> add bonding slave 3 4
+    testpmd> show bonding config 4
+
+#. start the top-level bonded port::
+
+    testpmd> port start 4
+    testpmd> start
+
+#. send 100 TCP packets to portA 0 and portA 1::
+
+    sendp([Ether()/IP()/TCP()/Raw('\0'*60)], iface="{tport_iface0}", count=100)
+    sendp([Ether()/IP()/TCP()/Raw('\0'*60)], iface="{tport_iface1}", count=100)
+
+#. the first/second bonded ports should each receive 100 packets and the third
+   bonded port should receive 200 packets::
+
+    testpmd> show port stats all
+
+#. close testpmd::
+
+    testpmd> stop
+    testpmd> quit
+
+Test Case: active-backup stacked bonded rx traffic with slave down
+==================================================================
+Set up DUT/testpmd stacked bonded ports, set one slave of each 1st-level bonded
+port to down status, send TCP packets with scapy and check the testpmd packet
+statistics.
+
+steps
+-----
+
+#. bind four ports::
+
+    ./usertools/dpdk-devbind.py --bind=vfio-pci {vf0_pci} {vf1_pci} {vf2_pci} {vf3_pci}
+
+#. boot up testpmd, stop all ports::
+
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x6 -n 4 -- -i --tx-offloads=<0xXXXX>
+    testpmd> port stop all
+
+#. create the first bonded port and add two ports as slaves::
+
+    testpmd> create bonded device 1 0
+    testpmd> add bonding slave 0 4
+    testpmd> add bonding slave 1 4
+
+#. set portB 0 down::
+
+    ethtool --set-priv-flags {portA 0} link-down-on-close on
+    ifconfig {portA 0} down
+
+.. note::
+
+   The VF port link status cannot be changed directly; bring down the peer
+   tester port to take the VF port link down. A sketch for confirming the link
+   state from the tester side follows this test case.
+
+#. create the second bonded port and add two ports as slaves::
+
+    testpmd> create bonded device 1 0
+    testpmd> add bonding slave 2 5
+    testpmd> add bonding slave 3 5
+
+#. set portB 2 down::
+
+    ethtool --set-priv-flags {portA 2} link-down-on-close on
+    ifconfig {portA 2} down
+
+.. note::
+
+   The VF port link status cannot be changed directly; bring down the peer
+   tester port to take the VF port link down.
+
+#. create the third bonded port and add the first/second bonded ports as its slaves,
+   check that the slaves were added successfully::
+
+    testpmd> create bonded device 1 0
+    testpmd> add bonding slave 4 6
+    testpmd> add bonding slave 5 6
+    testpmd> show bonding config 6
+
+#. start the top-level bonded port::
+
+    testpmd> port start 6
+    testpmd> start
+
+#. send 100 packets to portB pf0vf0, portB pf1vf0, portB pf2vf0 and portB pf3vf0
+   separately::
+
+    sendp([Ether()/IP()/TCP()/Raw('\0'*60)], iface="{tport_iface0}", count=100)
+    sendp([Ether()/IP()/TCP()/Raw('\0'*60)], iface="{tport_iface1}", count=100)
+    sendp([Ether()/IP()/TCP()/Raw('\0'*60)], iface="{tport_iface2}", count=100)
+    sendp([Ether()/IP()/TCP()/Raw('\0'*60)], iface="{tport_iface3}", count=100)
+
+#. check that the first/second bonded ports each receive 100 packets and the
+   third bonded device receives 200 packets::
+
+    testpmd> show port stats all
+
+#. close testpmd::
+
+    testpmd> stop
+    testpmd> quit
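+
+The two "set portB ... down" steps above only act on the tester side, so it is
+worth confirming that the links really went down before sending traffic. A
+minimal sketch, reusing the ``{portA x}`` placeholders above
+(``Link detected: no`` is expected for both ports)::
+
+    ethtool {portA 0} | grep "Link detected"
+    ethtool {portA 2} | grep "Link detected"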
+
+Test Case: balance-xor stacked bonded rx traffic
+================================================
+Set up DUT/testpmd stacked bonded ports, send TCP packets with scapy and check
+the packet statistics.
+
+steps
+-----
+
+#. bind two ports::
+
+    ./usertools/dpdk-devbind.py --bind=vfio-pci {vf0_pci} {vf1_pci}
+
+#. boot up testpmd, stop all ports::
+
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x6 -n 4 -- -i --tx-offloads=<0xXXXX>
+    testpmd> port stop all
+
+#. create the first bonded port and add one port as a slave::
+
+    testpmd> create bonded device 2 0
+    testpmd> add bonding slave 0 2
+
+#. create the second bonded port and add one port as a slave::
+
+    testpmd> create bonded device 2 0
+    testpmd> add bonding slave 1 3
+
+#. create the third bonded port and add the first/second bonded ports as its slaves,
+   check that the slaves were added successfully::
+
+    testpmd> create bonded device 2 0
+    testpmd> add bonding slave 2 4
+    testpmd> add bonding slave 3 4
+    testpmd> show bonding config 4
+
+#. start the top-level bonded port::
+
+    testpmd> port start 4
+    testpmd> start
+
+#. send 100 packets to portB pf0vf0 and portB pf1vf0 separately::
+
+    sendp([Ether()/IP()/TCP()/Raw('\0'*60)], iface="{tport_iface0}", count=100)
+    sendp([Ether()/IP()/TCP()/Raw('\0'*60)], iface="{tport_iface1}", count=100)
+
+#. check that the first/second bonded ports each receive 100 packets and the
+   third bonded device receives 200 packets::
+
+    testpmd> show port stats all
+
+#. close testpmd::
+
+    testpmd> stop
+    testpmd> quit
+
+Test Case: balance-xor stacked bonded rx traffic with slave down
+================================================================
+Set up DUT/testpmd stacked bonded ports, set one slave of each 1st-level bonded
+device to down status, send TCP packets with scapy and check the packet
+statistics.
+
+steps
+-----
+
+#. bind four ports::
+
+    ./usertools/dpdk-devbind.py --bind=vfio-pci {vf0_pci} {vf1_pci} {vf2_pci} {vf3_pci}
+
+#. boot up testpmd, stop all ports::
+
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x6 -n 4 -- -i --tx-offloads=<0xXXXX>
+    testpmd> port stop all
+
+#. create the first bonded port and add two ports as slaves::
+
+    testpmd> create bonded device 2 0
+    testpmd> add bonding slave 0 4
+    testpmd> add bonding slave 1 4
+
+#. set portB 0 down::
+
+    ethtool --set-priv-flags {portA 0} link-down-on-close on
+    ifconfig {portA 0} down
+
+.. note::
+
+   The VF port link status cannot be changed directly; bring down the peer
+   tester port to take the VF port link down.
+
+#. create the second bonded port and add two ports as slaves::
+
+    testpmd> create bonded device 2 0
+    testpmd> add bonding slave 2 5
+    testpmd> add bonding slave 3 5
+
+#. set portB 2 down::
+
+    ethtool --set-priv-flags {portA 2} link-down-on-close on
+    ifconfig {portA 2} down
+
+.. note::
+
+   The VF port link status cannot be changed directly; bring down the peer
+   tester port to take the VF port link down.
+
+#. create the third bonded port and add the first/second bonded ports as its slaves,
+   check that the slaves were added successfully::
+
+    testpmd> create bonded device 2 0
+    testpmd> add bonding slave 4 6
+    testpmd> add bonding slave 5 6
+    testpmd> show bonding config 6
+
+#. start the top-level bonded port::
+
+    testpmd> port start 6
+    testpmd> start
+
+#. send 100 packets to portB pf0vf0, portB pf1vf0, portB pf2vf0 and portB pf3vf0
+   separately::
+
+    sendp([Ether()/IP()/TCP()/Raw('\0'*60)], iface="{tport_iface0}", count=100)
+    sendp([Ether()/IP()/TCP()/Raw('\0'*60)], iface="{tport_iface1}", count=100)
+    sendp([Ether()/IP()/TCP()/Raw('\0'*60)], iface="{tport_iface2}", count=100)
+    sendp([Ether()/IP()/TCP()/Raw('\0'*60)], iface="{tport_iface3}", count=100)
+
+#. check that the first/second bonded ports each receive 100 packets and the
+   third bonded device receives 200 packets::
+
+    testpmd> show port stats all
+
+#. close testpmd::
+
+    testpmd> stop
+    testpmd> quit
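+
+.. note::
+
+   The two slave-down test cases above leave two tester ports down. If the
+   links are not restored by other means, a minimal clean-up sketch, reusing
+   the ``{portA x}`` placeholders above, is::
+
+      ifconfig {portA 0} up
+      ifconfig {portA 2} up
+      ethtool --set-priv-flags {portA 0} link-down-on-close off
+      ethtool --set-priv-flags {portA 2} link-down-on-close off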