[dts] [PATCH V1 6/7] test_plans/vf_pmd_stacked_bonded: add cases to test vf bonding

From: Song Jiale <songx.jiale@intel.com>
To: dts@dpdk.org
Date: Thu, 5 Jan 2023 11:07:51 +0000
Message-Id: <20230105110752.235201-7-songx.jiale@intel.com>
Series: add cases to test vf bonding (v1)
State: superseded
add testplan to test vf bonding.

Signed-off-by: Song Jiale <songx.jiale@intel.com>
---
 test_plans/vf_pmd_stacked_bonded_test_plan.rst | 406 ++++++++++++++++++
 1 file changed, 406 insertions(+)
 create mode 100644 test_plans/vf_pmd_stacked_bonded_test_plan.rst

.. SPDX-License-Identifier: BSD-3-Clause
   Copyright(c) 2023 Intel Corporation

==============
Stacked Bonded
==============

The stacked bonded mechanism allows a bonded port to be added as a slave of
another bonded port.

The demand arises from a discussion with a prospective customer for a 100G NIC
based on RRC. The customer already uses Mellanox 100G NICs. Mellanox 100G NICs
support a proper x16 PCIe interface, so the host sees a single netdev and that
netdev corresponds directly to the 100G Ethernet port. They indicated that in
their current system they bond multiple 100G NICs together, using the DPDK
bonding API in their application.
They are interested in looking at an alternative source for the 100G NIC and
are in conversation with Silicom, who are shipping a 100G RRC based NIC
(something like Boulder Rapids). The issue they have with the RRC NIC is that
it presents as two PCIe interfaces (netdevs) instead of one. If DPDK bonding
could operate at the first level on the two RRC netdevs to present a single
netdev, the application could then bond multiple of these bonded interfaces
to implement NIC bonding.

Prerequisites
=============

hardware configuration
----------------------

All link ports of the tester/DUT should run at the same data rate and support
full-duplex. Slave-down test cases need at least four ports; the other test
cases can run with two ports.

NIC/DUT/TESTER ports requirements:

- Tester: 2 NIC ports
- DUT: 2 NIC ports

enable ``link-down-on-close`` on the tester::

    ethtool --set-priv-flags {tport_iface0} link-down-on-close on
    ethtool --set-priv-flags {tport_iface1} link-down-on-close on

create 2 VFs on each of the two DUT ports::

    echo 2 > /sys/bus/pci/devices/0000\:31\:00.0/sriov_numvfs
    echo 2 > /sys/bus/pci/devices/0000\:31\:00.1/sriov_numvfs

port topology diagram (2 peer links)::

    TESTER                         DUT
         physical link                 logical link
    .---------.                .------------------------------------------------.
    | portA 0 | <------------> | portB pf0vf0 <---> .--------.                  |
    |         |                |                    | bond 0 | <-----> .------. |
    | portA 1 | <------------> | portB pf1vf0 <---> '--------'         |      | |
    |         |                |                                       |bond2 | |
    | portA 0 | <------------> | portB pf0vf1 <---> .--------.         |      | |
    |         |                |                    | bond 1 | <-----> '------' |
    | portA 1 | <------------> | portB pf1vf1 <---> '--------'                  |
    '---------'                '------------------------------------------------'

Test cases
==========
The ``tx-offloads`` value is set based on the NIC type. The steps of the test
cases that exercise slave down are based on 4 ports.
The other test cases' steps are based on 2 ports.

Test Case: basic behavior
=========================
Allow a bonded port to be added to another bonded port, which is supported
by::

    balance-rr    0
    active-backup 1
    balance-xor   2
    broadcast     3
    balance-tlb   5
    balance-alb   6

#. 802.3ad mode is not supported if one or more slaves is a bond device.
#. add the same device twice to check that the exceptional case is handled.
#. the master bonded port and each slave have the same queue configuration.

steps
-----

#. bind the four VF ports::

    ./usertools/dpdk-devbind.py --bind=vfio-pci <pci address 1> <pci address 2> <pci address 3> <pci address 4>

#. boot up testpmd, stop all ports::

    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x6 -n 4 -- -i
    testpmd> port stop all

#. create the first bonded port and add two slaves, check bond 4 config status::

    testpmd> create bonded device <mode> 0
    testpmd> add bonding slave 0 4
    testpmd> add bonding slave 2 4
    testpmd> show bonding config 4

#. create the second bonded port and add two slaves, check bond 5 config status::

    testpmd> create bonded device <mode> 0
    testpmd> add bonding slave 1 5
    testpmd> add bonding slave 3 5
    testpmd> show bonding config 5

#. create the third bonded port and add the first/second bonded ports as its
   slaves, check that the slaves are added successfully. stacked bonding is
   forbidden by mode 4: mode 4 will fail to add a bonded port as its slave::

    testpmd> create bonded device <mode> 0
    testpmd> add bonding slave 4 6
    testpmd> add bonding slave 5 6
    testpmd> show bonding config 6

#. check that the master bonded port and the slave ports have the same queue
   configuration::

    testpmd> show bonding config 0
    testpmd> show bonding config 1
    testpmd> show bonding config 2
    testpmd> show bonding config 3
    testpmd> show bonding config 4
    testpmd> show bonding config 5
    testpmd> show bonding config 6
#. start the top level bond port to check the ports' start action::

    testpmd> port start 6
    testpmd> start

#. close testpmd::

    testpmd> stop
    testpmd> quit

#. repeat the steps above with each of the following mode numbers::

    balance-rr    0
    active-backup 1
    balance-xor   2
    broadcast     3
    802.3ad       4
    balance-tlb   5

Test Case: active-backup stacked bonded rx traffic
==================================================
Set up the dut/testpmd stacked bonded ports, send tcp packets with scapy and
check the testpmd packet statistics.

steps
-----

#. bind the four VF ports::

    ./usertools/dpdk-devbind.py --bind=vfio-pci <pci address 1> <pci address 2> <pci address 3> <pci address 4>

#. boot up testpmd, stop all ports::

    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x6 -n 4 -- -i
    testpmd> port stop all

#. create the first bonded port and add two ports as slaves::

    testpmd> create bonded device 1 0
    testpmd> add bonding slave 0 4
    testpmd> add bonding slave 2 4

#. create the second bonded port and add two ports as slaves::

    testpmd> create bonded device 1 0
    testpmd> add bonding slave 1 5
    testpmd> add bonding slave 3 5

#. create the third bonded port and add the first/second bonded ports as its
   slaves, check that the slaves are added successfully::

    testpmd> create bonded device 1 0
    testpmd> add bonding slave 4 6
    testpmd> add bonding slave 5 6
    testpmd> show bonding config 6

#. start the top level bond port::

    testpmd> port start 6
    testpmd> start

#. send 100 tcp packets to portA 0 and portA 1::

    sendp([Ether({pf0_vf0_mac})/IP()/TCP()/Raw('\0'*60)], iface=<portA 0>)
    sendp([Ether({pf0_vf1_mac})/IP()/TCP()/Raw('\0'*60)], iface=<portA 0>)
    sendp([Ether({pf1_vf0_mac})/IP()/TCP()/Raw('\0'*60)], iface=<portA 1>)
    sendp([Ether({pf1_vf1_mac})/IP()/TCP()/Raw('\0'*60)], iface=<portA 1>)
#. the first/second bonded ports should receive 400 packets and the third
   bonded port should receive 800 packets::

    testpmd> show port stats all

#. close testpmd::

    testpmd> stop
    testpmd> quit

Test Case: active-backup stacked bonded rx traffic with slave down
==================================================================
Set up the dut/testpmd stacked bonded ports, set one slave of each 1st-level
bonded port to down status, then send tcp packets with scapy and check the
testpmd packet statistics.

steps
-----

#. bind the four VF ports::

    ./usertools/dpdk-devbind.py --bind=vfio-pci <pci address 1> <pci address 2> <pci address 3> <pci address 4>

#. boot up testpmd, stop all ports::

    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x6 -n 4 -- -i
    testpmd> port stop all

#. create the first bonded port and add two ports as slaves::

    testpmd> create bonded device 1 0
    testpmd> add bonding slave 0 4
    testpmd> add bonding slave 2 4

#. set portB pf0vf0 and pf0vf1 down by taking the peer tester port down::

    ethtool --set-priv-flags {portA 0} link-down-on-close on
    ifconfig {portA 0} down

.. note::

    The vf port link status cannot be changed directly. Change the link status
    of the peer tester port to take the vf port link down.

#. create the second bonded port and add two ports as slaves::

    testpmd> create bonded device 1 0
    testpmd> add bonding slave 1 5
    testpmd> add bonding slave 3 5

#. create the third bonded port and add the first/second bonded ports as its
   slaves, check that the slaves are added successfully::

    testpmd> create bonded device 1 0
    testpmd> add bonding slave 4 6
    testpmd> add bonding slave 5 6
    testpmd> show bonding config 6

#. start the top level bond port::

    testpmd> port start 6
    testpmd> start
#. send 100 packets to portB pf0vf0/portB pf0vf1/portB pf1vf0/portB pf1vf1
   separately::

    sendp([Ether({pf0_vf0_mac})/IP()/TCP()/Raw('\0'*60)], iface=<portA 0>)
    sendp([Ether({pf0_vf1_mac})/IP()/TCP()/Raw('\0'*60)], iface=<portA 0>)
    sendp([Ether({pf1_vf0_mac})/IP()/TCP()/Raw('\0'*60)], iface=<portA 1>)
    sendp([Ether({pf1_vf1_mac})/IP()/TCP()/Raw('\0'*60)], iface=<portA 1>)

#. check that the first/second bonded ports receive 100 packets and the third
   bonded device receives 200 packets::

    testpmd> show port stats all

#. close testpmd::

    testpmd> stop
    testpmd> quit

Test Case: balance-xor stacked bonded rx traffic
================================================
Set up the dut/testpmd stacked bonded ports, send tcp packets with scapy and
check the packet statistics.

steps
-----

#. bind the four VF ports::

    ./usertools/dpdk-devbind.py --bind=vfio-pci <pci address 1> <pci address 2> <pci address 3> <pci address 4>

#. boot up testpmd, stop all ports::

    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x6 -n 4 -- -i
    testpmd> port stop all

#. create the first bonded port and add two ports as slaves::

    testpmd> create bonded device 2 0
    testpmd> add bonding slave 0 4
    testpmd> add bonding slave 2 4

#. create the second bonded port and add two ports as slaves::

    testpmd> create bonded device 2 0
    testpmd> add bonding slave 1 5
    testpmd> add bonding slave 3 5

#. create the third bonded port and add the first/second bonded ports as its
   slaves, check that the slaves are added successfully::

    testpmd> create bonded device 2 0
    testpmd> add bonding slave 4 6
    testpmd> add bonding slave 5 6
    testpmd> show bonding config 6

#. start the top level bond port::

    testpmd> port start 6
    testpmd> start
#. send 100 packets to portA 0 and portA 1::

    sendp([Ether({pf0_vf0_mac})/IP()/TCP()/Raw('\0'*60)], iface=<portA 0>)
    sendp([Ether({pf0_vf1_mac})/IP()/TCP()/Raw('\0'*60)], iface=<portA 0>)
    sendp([Ether({pf1_vf0_mac})/IP()/TCP()/Raw('\0'*60)], iface=<portA 1>)
    sendp([Ether({pf1_vf1_mac})/IP()/TCP()/Raw('\0'*60)], iface=<portA 1>)

#. check that the first/second bonded ports receive 200 packets and the third
   bonded device receives 400 packets::

    testpmd> show port stats all

#. close testpmd::

    testpmd> stop
    testpmd> quit

Test Case: balance-xor stacked bonded rx traffic with slave down
================================================================
Set up the dut/testpmd stacked bonded ports, set one slave of each 1st-level
bonded device to down status, then send tcp packets with scapy and check the
packet statistics.

steps
-----

#. bind the four VF ports::

    ./usertools/dpdk-devbind.py --bind=vfio-pci <pci address 1> <pci address 2> <pci address 3> <pci address 4>

#. boot up testpmd, stop all ports::

    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x6 -n 4 -- -i
    testpmd> port stop all

#. create the first bonded port and add two ports as slaves, stop one port::

    testpmd> create bonded device 2 0
    testpmd> add bonding slave 0 4
    testpmd> add bonding slave 2 4
    testpmd> port stop 1

#. create the second bonded port and add two ports as slaves, stop one port::

    testpmd> create bonded device 2 0
    testpmd> add bonding slave 1 5
    testpmd> add bonding slave 3 5
    testpmd> port stop 3

#. set portB pf0vf0 and pf0vf1 down by taking the peer tester port down::

    ethtool --set-priv-flags {portA 0} link-down-on-close on
    ifconfig {portA 0} down

.. note::

    The vf port link status cannot be changed directly. Change the link status
    of the peer tester port to take the vf port link down.
#. create the third bonded port and add the first/second bonded ports as its
   slaves, check that the slaves are added successfully::

    testpmd> create bonded device 2 0
    testpmd> add bonding slave 4 6
    testpmd> add bonding slave 5 6
    testpmd> show bonding config 6

#. start the top level bond port::

    testpmd> port start 6
    testpmd> start

#. send 100 packets to portB pf0vf0/portB pf0vf1/portB pf1vf0/portB pf1vf1
   separately::

    sendp([Ether({pf0_vf0_mac})/IP()/TCP()/Raw('\0'*60)], iface=<portA 0>)
    sendp([Ether({pf0_vf1_mac})/IP()/TCP()/Raw('\0'*60)], iface=<portA 0>)
    sendp([Ether({pf1_vf0_mac})/IP()/TCP()/Raw('\0'*60)], iface=<portA 1>)
    sendp([Ether({pf1_vf1_mac})/IP()/TCP()/Raw('\0'*60)], iface=<portA 1>)

#. check that the first/second bonded ports receive 100 packets and the third
   bonded device receives 200 packets::

    testpmd> show port stats all

#. close testpmd::

    testpmd> stop
    testpmd> quit
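Every test case above builds the same three-bond topology (ports 4 and 5 over VF
ports 0-3, then port 6 over 4 and 5), varying only the mode number. A test
script can generate that testpmd command sequence instead of hard-coding it.
This is an illustrative sketch only, not part of DTS; the helper name
``stacked_bond_cmds`` is hypothetical, and the port numbering follows the steps
above:

```python
def stacked_bond_cmds(mode: int) -> list:
    """Build the testpmd command sequence used by the test cases above:
    two first-level bonds (ports 4 and 5) over VF ports 0-3, then a
    top-level bond (port 6) over ports 4 and 5, started at the end."""
    cmds = ["port stop all"]
    # (bond port id, its slave port ids), in the order the plan creates them
    for bond, slaves in ((4, (0, 2)), (5, (1, 3)), (6, (4, 5))):
        cmds.append("create bonded device {} 0".format(mode))
        cmds += ["add bonding slave {} {}".format(s, bond) for s in slaves]
        cmds.append("show bonding config {}".format(bond))
    cmds += ["port start 6", "start"]
    return cmds
```

For example, ``stacked_bond_cmds(1)`` yields the active-backup sequence and
``stacked_bond_cmds(2)`` the balance-xor one; per the basic behavior case, mode
4 (802.3ad) is expected to fail at the ``add bonding slave 4 6`` step rather
than complete.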
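The ``sendp`` lines in the traffic cases are templates: ``{pf0_vf0_mac}`` and
``<portA 0>`` stand for the real VF MAC addresses and tester interface names.
A helper that expands them might look like the sketch below. The function name
and the ``count=100`` argument (scapy's way of sending the 100 packets each
case calls for) are assumptions for illustration, not part of the plan; the
helper only builds command strings, it does not run scapy:

```python
def scapy_send_cmds(vf_macs: dict, ifaces: dict) -> list:
    """Expand the sendp templates from the traffic cases.

    vf_macs: VF name -> MAC, e.g. {"pf0_vf0": "aa:bb:cc:dd:ee:01", ...}
    ifaces:  PF name -> tester iface carrying that PF's link,
             e.g. {"pf0": <portA 0 iface>, "pf1": <portA 1 iface>}
    """
    cmds = []
    for name, mac in vf_macs.items():
        pf = name.split("_")[0]  # "pf0_vf0" -> "pf0"
        cmds.append(
            "sendp([Ether(dst='{}')/IP()/TCP()/Raw('\\0'*60)], "
            "count=100, iface='{}')".format(mac, ifaces[pf])
        )
    return cmds
```

Each returned string is one line a scapy session on the tester could execute;
pf0 VFs are reached through the tester port wired to DUT pf0, pf1 VFs through
the port wired to pf1, matching the topology diagram.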