From patchwork Tue Feb 7 03:23:09 2023
X-Patchwork-Submitter: Ke Xu
X-Patchwork-Id: 123191
From: Ke Xu To: dts@dpdk.org Cc: ke1.xu@intel.com, yux.jiang@intel.com, lijuan.tu@intel.com, qi.fu@intel.com Subject: [DTS][PATCH V1 1/5] tests/vf_offload: add VLAN packets to test scope. Date: Tue, 7 Feb 2023 11:23:09 +0800 Message-Id: <20230207032313.404935-2-ke1.xu@intel.com> In-Reply-To: <20230207032313.404935-1-ke1.xu@intel.com> References: <20230207032313.404935-1-ke1.xu@intel.com>
Add VLAN packets to ensure checksum offload works well on packets with a Dot1Q part. Add VLAN to the packet filter. Update the expected checksum error counts.
Signed-off-by: Ke Xu --- tests/TestSuite_vf_offload.py | 252 +++++++++++++++++++++++++++++++--- 1 file changed, 233 insertions(+), 19 deletions(-) diff --git a/tests/TestSuite_vf_offload.py b/tests/TestSuite_vf_offload.py index 5def34c1..ce1a6f13 100644 --- a/tests/TestSuite_vf_offload.py +++ b/tests/TestSuite_vf_offload.py @@ -258,8 +258,8 @@ class TestVfOffload(TestCase): p for p in packets if len(p.layers()) >= 3 - and p.layers()[1] in {IP, IPv6} - and p.layers()[2] in {IP, IPv6, UDP, TCP, SCTP, GRE, MPLS} + and p.layers()[1] in {IP, IPv6, Dot1Q} + and p.layers()[2] in {IP, IPv6, Dot1Q, UDP, TCP, SCTP, GRE, MPLS} and Raw in p ] @@ -400,6 +400,70 @@ class TestVfOffload(TestCase): self.verify(len(result) == 0, ",".join(list(result.values()))) + def test_checksum_offload_vlan_enable(self): + """ + Enable HW checksum offload. 
+ Send packet with incorrect checksum, + can rx it and report the checksum error, + verify forwarded packets have correct checksum. + """ + self.launch_testpmd( + dcf_flag=self.dcf_mode, + param="--portmask=%s " % (self.portMask) + "--enable-rx-cksum " + "", + ) + self.vm0_testpmd.execute_cmd("set fwd csum") + self.vm0_testpmd.execute_cmd("csum mac-swap off 0", "testpmd>") + self.vm0_testpmd.execute_cmd("csum mac-swap off 1", "testpmd>") + self.vm0_testpmd.execute_cmd("set promisc 1 on") + self.vm0_testpmd.execute_cmd("set promisc 0 on") + + time.sleep(2) + mac = self.vm0_testpmd.get_port_mac(0) + sndIP = "10.0.0.1" + sndIPv6 = "::1" + pkts = { + "IP/UDP": 'Ether(dst="%s", src="52:00:00:00:00:00")/Dot1Q(vlan=100)/IP(src="%s", chksum=0xf)/UDP(chksum=0xf)/("X"*46)' + % (mac, sndIP), + "IP/TCP": 'Ether(dst="%s", src="52:00:00:00:00:00")/Dot1Q(vlan=100)/IP(src="%s", chksum=0xf)/TCP(chksum=0xf)/("X"*46)' + % (mac, sndIP), + "IP/SCTP": 'Ether(dst="%s", src="52:00:00:00:00:00")/Dot1Q(vlan=100)/IP(src="%s", chksum=0xf)/SCTP(chksum=0x0)/("X"*48)' + % (mac, sndIP), + "IPv6/UDP": 'Ether(dst="%s", src="52:00:00:00:00:00")/Dot1Q(vlan=100)/IPv6(src="%s")/UDP(chksum=0xf)/("X"*46)' + % (mac, sndIPv6), + "IPv6/TCP": 'Ether(dst="%s", src="52:00:00:00:00:00")/Dot1Q(vlan=100)/IPv6(src="%s")/TCP(chksum=0xf)/("X"*46)' + % (mac, sndIPv6), + } + + expIP = sndIP + expIPv6 = sndIPv6 + pkts_ref = { + "IP/UDP": 'Ether(dst="%s", src="52:00:00:00:00:00")/Dot1Q(vlan=100)/IP(src="%s")/UDP()/("X"*46)' + % (mac, expIP), + "IP/TCP": 'Ether(dst="%s", src="52:00:00:00:00:00")/Dot1Q(vlan=100)/IP(src="%s")/TCP()/("X"*46)' + % (mac, expIP), + "IP/SCTP": 'Ether(dst="%s", src="52:00:00:00:00:00")/Dot1Q(vlan=100)/IP(src="%s")/SCTP()/("X"*48)' + % (mac, expIP), + "IPv6/UDP": 'Ether(dst="%s", src="52:00:00:00:00:00")/Dot1Q(vlan=100)/IPv6(src="%s")/UDP()/("X"*46)' + % (mac, expIPv6), + "IPv6/TCP": 'Ether(dst="%s", src="52:00:00:00:00:00")/Dot1Q(vlan=100)/IPv6(src="%s")/TCP()/("X"*46)' + % (mac, expIPv6), + } + + self.checksum_enablehw(0, self.vm_dut_0) + self.checksum_enablehw(1, self.vm_dut_0) + + self.vm0_testpmd.execute_cmd("start") + result = self.checksum_validate(pkts, pkts_ref) + + # Validate checksum on the receive packet + out = self.vm0_testpmd.execute_cmd("stop") + bad_ipcsum = self.vm0_testpmd.get_pmd_value("Bad-ipcsum:", out) + bad_l4csum = self.vm0_testpmd.get_pmd_value("Bad-l4csum:", out) + self.verify(bad_ipcsum == 3, "Bad-ipcsum check error") + self.verify(bad_l4csum == 5, "Bad-l4csum check error") + + self.verify(len(result) == 0, ",".join(list(result.values()))) + @check_supported_nic( ["ICE_100G-E810C_QSFP", "ICE_25G-E810C_SFP", "ICE_25G-E810_XXV_SFP"] ) @@ -429,8 +493,8 @@ class TestVfOffload(TestCase): expIPv6 = sndIPv6 pkts_outer = { - "IP/UDP/VXLAN-GPE": f'IP(src = "{sndIP}") / UDP(sport = 4790, dport = 4790, chksum = 0xff) / VXLAN()', - "IP/UDP/VXLAN-GPE/ETH": f'IP(src = "{sndIP}") / UDP(sport = 4790, dport = 4790, chksum = 0xff) / VXLAN() / Ether()', + "IP/UDP/VXLAN-GPE": f'IP(src = "{sndIP}", chksum = 0xff) / UDP(sport = 4790, dport = 4790, chksum = 0xff) / VXLAN()', + "IP/UDP/VXLAN-GPE/ETH": f'IP(src = "{sndIP}", chksum = 0xff) / UDP(sport = 4790, dport = 4790, chksum = 0xff) / VXLAN() / Ether()', "IPv6/UDP/VXLAN-GPE": f'IPv6(src = "{sndIPv6}") / UDP(sport = 4790, dport = 4790, chksum = 0xff) / VXLAN()', "IPv6/UDP/VXLAN-GPE/ETH": f'IPv6(src = "{sndIPv6}") / UDP(sport = 4790, dport = 4790, chksum = 0xff) / VXLAN() / Ether()', "IP/GRE": f'IP(src = "{sndIP}", proto = 47, chksum = 0xff) / GRE()', @@ -452,13 
+516,12 @@ class TestVfOffload(TestCase): } if self.dcf_mode == "enable": - pkts_outer[ - "IP/UDP/VXLAN/ETH" - ] = f'IP(src = "{sndIP}") / UDP(sport = 4789, dport = 4789, chksum = 0xff) / VXLAN() / Ether()' - pkts_outer[ - "IPv6/UDP/VXLAN/ETH" - ] = f'IPv6(src = "{sndIPv6}") / UDP(sport = 4789, dport = 4789, chksum = 0xff) / VXLAN() / Ether()' - + pkts_outer.update( + { + "IP/UDP/VXLAN/ETH": f'IP(src = "{sndIP}") / UDP(sport = 4789, dport = 4789, chksum = 0xff) / VXLAN() / Ether()', + "IPv6/UDP/VXLAN/ETH": f'IPv6(src = "{sndIPv6}") / UDP(sport = 4789, dport = 4789, chksum = 0xff) / VXLAN() / Ether()', + } + ) pkts = { key_outer + "/" @@ -494,13 +557,156 @@ class TestVfOffload(TestCase): } if self.dcf_mode == "enable": - pkts_outer_ref[ - "IP/UDP/VXLAN/ETH" - ] = f'IP(src = "{sndIP}") / UDP(sport = 4789, dport = 4789) / VXLAN() / Ether()' - pkts_outer_ref[ - "IPv6/UDP/VXLAN/ETH" - ] = f'IPv6(src = "{sndIPv6}") / UDP(sport = 4789, dport = 4789) / VXLAN() / Ether()' + pkts_outer.update( + { + "IP/UDP/VXLAN/ETH": f'IP(src = "{sndIP}", chksum = 0xff) / UDP(sport = 4789, dport = 4789) / VXLAN() / Ether()', + "IPv6/UDP/VXLAN/ETH": f'IPv6(src = "{sndIPv6}") / UDP(sport = 4789, dport = 4789) / VXLAN() / Ether()', + } + ) + pkts_ref = { + key_outer + + "/" + + key_inner: f'Ether(dst="{mac}", src="52:00:00:00:00:00") / ' + + p_outer + + " / " + + p_inner + for key_outer, p_outer in pkts_outer_ref.items() + for key_inner, p_inner in pkts_inner_ref.items() + } + + self.checksum_enablehw_tunnel(0, self.vm_dut_0) + self.checksum_enablehw_tunnel(1, self.vm_dut_0) + + self.vm0_testpmd.execute_cmd("start") + self.vm0_testpmd.wait_link_status_up(0) + self.vm0_testpmd.wait_link_status_up(1) + result = self.checksum_validate(pkts, pkts_ref) + # Validate checksum on the receive packet + out = self.vm0_testpmd.execute_cmd("stop") + bad_outer_ipcsum = self.vm0_testpmd.get_pmd_value("Bad-outer-ipcsum:", out) + bad_outer_l4csum = self.vm0_testpmd.get_pmd_value("Bad-outer-l4csum:", out) + bad_inner_ipcsum = self.vm0_testpmd.get_pmd_value("Bad-ipcsum:", out) + bad_inner_l4csum = self.vm0_testpmd.get_pmd_value("Bad-l4csum:", out) + if self.dcf_mode == "enable": + # Outer IP checksum error = 7 (outer-ip) * 6 (inner packet) + self.verify(bad_outer_ipcsum == 42, "Bad-outer-ipcsum check error") + # Outer IP checksum error = 8 (outer-UDP) * 6 (inner packet) + self.verify(bad_outer_l4csum == 48, "Bad-outer-l4csum check error") + # Outer L4 checksum error = 14 (outer packets) * 3 (inner-IP) + self.verify(bad_inner_ipcsum == 42, "Bad-ipcsum check error") + # Outer L4 checksum error = 14 (outer packets) * 6 (inner-L4) + self.verify(bad_inner_l4csum == 84, "Bad-l4csum check error") + else: + # Outer IP checksum error = 6 (outer-ip) * 6 (inner packet) + self.verify(bad_outer_ipcsum == 36, "Bad-outer-ipcsum check error") + # Outer IP checksum error = 6 (outer-UDP) * 6 (inner packet) + self.verify(bad_outer_l4csum == 36, "Bad-outer-l4csum check error") + # Outer L4 checksum error = 12 (outer packets) * 3 (inner-IP) + self.verify(bad_inner_ipcsum == 36, "Bad-ipcsum check error") + # Outer L4 checksum error = 12 (outer packets) * 6 (inner-L4) + self.verify(bad_inner_l4csum == 72, "Bad-l4csum check error") + + self.verify(len(result) == 0, ",".join(list(result.values()))) + + @check_supported_nic( + ["ICE_100G-E810C_QSFP", "ICE_25G-E810C_SFP", "ICE_25G-E810_XXV_SFP"] + ) + @skip_unsupported_pkg(["os default"]) + def test_checksum_offload_vlan_tunnel_enable(self): + """ + Enable HW checksum offload. 
+ Send packet with inner and outer incorrect checksum, + can rx it and report the checksum error, + verify forwarded packets have correct checksum. + """ + self.launch_testpmd( + dcf_flag=self.dcf_mode, + param="--portmask=%s " % (self.portMask) + "--enable-rx-cksum " + "", + ) + self.vm0_testpmd.execute_cmd("set fwd csum") + self.vm0_testpmd.execute_cmd("set promisc 1 on") + self.vm0_testpmd.execute_cmd("set promisc 0 on") + self.vm0_testpmd.execute_cmd("csum mac-swap off 0", "testpmd>") + self.vm0_testpmd.execute_cmd("csum mac-swap off 1", "testpmd>") + time.sleep(2) + port_id_0 = 0 + mac = self.vm0_testpmd.get_port_mac(0) + sndIP = "10.0.0.1" + sndIPv6 = "::1" + expIP = sndIP + expIPv6 = sndIPv6 + + pkts_outer = { + "VLAN/IP/UDP/VXLAN-GPE": f'Dot1Q(vlan=100) / IP(src = "{sndIP}", chksum = 0xff) / UDP(sport = 4790, dport = 4790, chksum = 0xff) / VXLAN()', + "VLAN/IP/UDP/VXLAN-GPE/ETH": f'Dot1Q(vlan=100) / IP(src = "{sndIP}", chksum = 0xff) / UDP(sport = 4790, dport = 4790, chksum = 0xff) / VXLAN() / Ether()', + "VLAN/IPv6/UDP/VXLAN-GPE": f'Dot1Q(vlan=100) / IPv6(src = "{sndIPv6}") / UDP(sport = 4790, dport = 4790, chksum = 0xff) / VXLAN()', + "VLAN/IPv6/UDP/VXLAN-GPE/ETH": f'Dot1Q(vlan=100) / IPv6(src = "{sndIPv6}") / UDP(sport = 4790, dport = 4790, chksum = 0xff) / VXLAN() / Ether()', + "VLAN/IP/GRE": f'Dot1Q(vlan=100) / IP(src = "{sndIP}", proto = 47, chksum = 0xff) / GRE()', + "VLAN/IP/GRE/ETH": f'Dot1Q(vlan=100) / IP(src = "{sndIP}", proto = 47, chksum = 0xff) / GRE() / Ether()', + "VLAN/IP/NVGRE/ETH": f'Dot1Q(vlan=100) / IP(src = "{sndIP}", proto = 47, chksum = 0xff) / GRE(key_present=1, proto=0x6558, key=0x00000100) / Ether()', + "VLAN/IPv6/GRE": f'Dot1Q(vlan=100) / IPv6(src = "{sndIPv6}", nh = 47) / GRE()', + "VLAN/IPv6/GRE/ETH": f'Dot1Q(vlan=100) / IPv6(src = "{sndIPv6}", nh = 47) / GRE() / Ether()', + "VLAN/IPv6/NVGRE/ETH": f'Dot1Q(vlan=100) / IPv6(src = "{sndIPv6}", nh = 47) / GRE(key_present=1, proto=0x6558, key=0x00000100) / Ether()', + "VLAN/IP/UDP/GTPU": f'Dot1Q(vlan=100) / IP(src = "{sndIP}", chksum = 0xff) / UDP(dport = 2152, chksum = 0xff) / GTP_U_Header(gtp_type=255, teid=0x123456)', + "VLAN/IPv6/UDP/GTPU": f'Dot1Q(vlan=100) / IPv6(src = "{sndIPv6}") / UDP(dport = 2152, chksum = 0xff) / GTP_U_Header(gtp_type=255, teid=0x123456)', + } + pkts_inner = { + "IP/UDP": f'IP(src = "{sndIP}", chksum = 0xff) / UDP(sport = 29999, dport = 30000, chksum = 0xff) / Raw("x" * 100)', + "IP/TCP": f'IP(src = "{sndIP}", chksum = 0xff) / TCP(sport = 29999, dport = 30000, chksum = 0xff) / Raw("x" * 100)', + "IP/SCTP": f'IP(src = "{sndIP}", chksum = 0xff) / SCTP(sport = 29999, dport = 30000, chksum = 0x0) / Raw("x" * 128)', + "IPv6/UDP": f'IPv6(src = "{sndIPv6}") / UDP(sport = 29999, dport = 30000, chksum = 0xff) / Raw("x" * 100)', + "IPv6/TCP": f'IPv6(src = "{sndIPv6}") / TCP(sport = 29999, dport = 30000, chksum = 0xff) / Raw("x" * 100)', + "IPv6/SCTP": f'IPv6(src = "{sndIPv6}") / SCTP(sport = 29999, dport = 30000, chksum = 0x0) / Raw("x" * 128)', + } + + if self.dcf_mode == "enable": + pkts_outer.update( + { + "VLAN/IP/UDP/VXLAN/ETH": f'Dot1Q(vlan=100) / IP(src = "{sndIP}", chksum = 0xff) / UDP(sport = 4789, dport = 4789, chksum = 0xff) / VXLAN() / Ether()', + "VLAN/IPv6/UDP/VXLAN/ETH": f'Dot1Q(vlan=100) / IPv6(src = "{sndIPv6}") / UDP(sport = 4789, dport = 4789, chksum = 0xff) / VXLAN() / Ether()', + } + ) + pkts = { + key_outer + + "/" + + key_inner: f'Ether(dst="{mac}", src="52:00:00:00:00:00") / ' + + p_outer + + " / " + + p_inner + for key_outer, p_outer in pkts_outer.items() + for 
key_inner, p_inner in pkts_inner.items() + } + pkts_outer_ref = { + "VLAN/IP/UDP/VXLAN-GPE": f'Dot1Q(vlan=100) / IP(src = "{expIP}") / UDP(sport = 4790, dport = 4790) / VXLAN()', + "VLAN/IP/UDP/VXLAN-GPE/ETH": f'Dot1Q(vlan=100) / IP(src = "{expIP}") / UDP(sport = 4790, dport = 4790) / VXLAN() / Ether()', + "VLAN/IPv6/UDP/VXLAN-GPE": f'Dot1Q(vlan=100) / IPv6(src = "{expIPv6}") / UDP(sport = 4790, dport = 4790) / VXLAN()', + "VLAN/IPv6/UDP/VXLAN-GPE/ETH": f'Dot1Q(vlan=100) / IPv6(src = "{expIPv6}") / UDP(sport = 4790, dport = 4790) / VXLAN() / Ether()', + "VLAN/IP/GRE": f'Dot1Q(vlan=100) / IP(src = "{expIP}", proto = 47) / GRE()', + "VLAN/IP/GRE/ETH": f'Dot1Q(vlan=100) / IP(src = "{expIP}", proto = 47) / GRE() / Ether()', + "VLAN/IP/NVGRE/ETH": f'Dot1Q(vlan=100) / IP(src = "{expIP}", proto = 47) / GRE(key_present=1, proto=0x6558, key=0x00000100) / Ether()', + "VLAN/IPv6/GRE": f'Dot1Q(vlan=100) / IPv6(src = "{expIPv6}", nh = 47) / GRE()', + "VLAN/IPv6/GRE/ETH": f'Dot1Q(vlan=100) / IPv6(src = "{expIPv6}", nh = 47) / GRE() / Ether()', + "VLAN/IPv6/NVGRE/ETH": f'Dot1Q(vlan=100) / IPv6(src = "{expIPv6}", nh = 47) / GRE(key_present=1, proto=0x6558, key=0x00000100) / Ether()', + "VLAN/IP/UDP/GTPU": f'Dot1Q(vlan=100) / IP(src = "{expIP}") / UDP(dport = 2152) / GTP_U_Header(gtp_type=255, teid=0x123456)', + "VLAN/IPv6/UDP/GTPU": f'Dot1Q(vlan=100) / IPv6(src = "{expIPv6}") / UDP(dport = 2152) / GTP_U_Header(gtp_type=255, teid=0x123456)', + } + pkts_inner_ref = { + "IP/UDP": f'IP(src = "{expIP}") / UDP(sport = 29999, dport = 30000) / Raw("x" * 100)', + "IP/TCP": f'IP(src = "{expIP}") / TCP(sport = 29999, dport = 30000) / Raw("x" * 100)', + "IP/SCTP": f'IP(src = "{expIP}") / SCTP(sport = 29999, dport = 30000) / Raw("x" * 128)', + "IPv6/UDP": f'IPv6(src = "{expIPv6}") / UDP(sport = 29999, dport = 30000) / Raw("x" * 100)', + "IPv6/TCP": f'IPv6(src = "{expIPv6}") / TCP(sport = 29999, dport = 30000) / Raw("x" * 100)', + "IPv6/SCTP": f'IPv6(src = "{expIPv6}") / SCTP(sport = 29999, dport = 30000) / Raw("x" * 128)', + } + + if self.dcf_mode == "enable": + pkts_outer.update( + { + "VLAN/IP/UDP/VXLAN/ETH": f'Dot1Q(vlan=100) / IP(src = "{sndIP}", chksum = 0xff) / UDP(sport = 4789, dport = 4789) / VXLAN() / Ether()', + "VLAN/IPv6/UDP/VXLAN/ETH": f'Dot1Q(vlan=100) / IPv6(src = "{sndIPv6}") / UDP(sport = 4789, dport = 4789) / VXLAN() / Ether()', + } + ) pkts_ref = { key_outer + "/" @@ -526,14 +732,22 @@ class TestVfOffload(TestCase): bad_inner_ipcsum = self.vm0_testpmd.get_pmd_value("Bad-ipcsum:", out) bad_inner_l4csum = self.vm0_testpmd.get_pmd_value("Bad-l4csum:", out) if self.dcf_mode == "enable": - self.verify(bad_outer_ipcsum == 24, "Bad-outer-ipcsum check error") + # Outer IP checksum error = 7 (outer-ip) * 6 (inner packet) + self.verify(bad_outer_ipcsum == 42, "Bad-outer-ipcsum check error") + # Outer IP checksum error = 8 (outer-UDP) * 6 (inner packet) self.verify(bad_outer_l4csum == 48, "Bad-outer-l4csum check error") + # Outer L4 checksum error = 14 (outer packets) * 3 (inner-IP) self.verify(bad_inner_ipcsum == 42, "Bad-ipcsum check error") + # Outer L4 checksum error = 14 (outer packets) * 6 (inner-L4) self.verify(bad_inner_l4csum == 84, "Bad-l4csum check error") else: - self.verify(bad_outer_ipcsum == 24, "Bad-outer-ipcsum check error") + # Outer IP checksum error = 6 (outer-ip) * 6 (inner packet) + self.verify(bad_outer_ipcsum == 36, "Bad-outer-ipcsum check error") + # Outer IP checksum error = 6 (outer-UDP) * 6 (inner packet) self.verify(bad_outer_l4csum == 36, "Bad-outer-l4csum check error") + # Outer 
L4 checksum error = 12 (outer packets) * 3 (inner-IP) self.verify(bad_inner_ipcsum == 36, "Bad-ipcsum check error") + # Outer L4 checksum error = 12 (outer packets) * 6 (inner-L4) self.verify(bad_inner_l4csum == 72, "Bad-l4csum check error") self.verify(len(result) == 0, ",".join(list(result.values())))
From patchwork Tue Feb 7 03:23:10 2023
X-Patchwork-Submitter: Ke Xu
X-Patchwork-Id: 123192
From: Ke Xu To: dts@dpdk.org Cc: ke1.xu@intel.com, yux.jiang@intel.com, lijuan.tu@intel.com, qi.fu@intel.com Subject: [DTS][PATCH V1 2/5] tests/vf_offload: improve TSO validating. Date: Tue, 7 Feb 2023 11:23:10 +0800 Message-Id: <20230207032313.404935-3-ke1.xu@intel.com> In-Reply-To: <20230207032313.404935-1-ke1.xu@intel.com> References: <20230207032313.404935-1-ke1.xu@intel.com>
Use the segment_validate and tso_validate methods to perform the validation, providing a scalable interface for further cases.
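As a rough illustration of the arithmetic the new helpers encode (the function below is only a sketch written for this description, not code taken from the suite):

    def expected_tso_segments(loading_size, segment_size=800):
        # Each full segment carries segment_size payload bytes; any
        # remainder becomes one final short segment.
        sizes = [segment_size] * (loading_size // segment_size)
        if loading_size % segment_size:
            sizes.append(loading_size % segment_size)
        return sizes

    # e.g. expected_tso_segments(801)  == [800, 1]
    #      expected_tso_segments(2048) == [800, 800, 448]

segment_validate() then only has to compare the sniffed payload sizes and packet counts against such expectations.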
Signed-off-by: Ke Xu --- tests/TestSuite_vf_offload.py | 465 +++++++++++++--------------------- 1 file changed, 180 insertions(+), 285 deletions(-) diff --git a/tests/TestSuite_vf_offload.py b/tests/TestSuite_vf_offload.py index ce1a6f13..bd412100 100644 --- a/tests/TestSuite_vf_offload.py +++ b/tests/TestSuite_vf_offload.py @@ -849,6 +849,148 @@ class TestVfOffload(TestCase): rx_packet_size = [len(p[Raw].load) for p in pkts] return rx_packet_count, rx_packet_size + def segment_validate( + self, + segment_size, + loading_size, + packet_count, + tx_stats, + rx_stats, + payload_size_list, + ): + """ + Validate the segmentation, checking if the result is segmented + as expected. + segment_size: segment size, + loading_size: tx payload size, + packet_count: tx packet count, + tx_stats: tx packets count sniffed, + rx_stats: rx packets count, + payload_size_list: rx packets payload size list, + Return a message of validate result. + """ + num_segs = (loading_size + segment_size - 1) // segment_size + num_segs_full = loading_size // segment_size + if not packet_count == tx_stats: + return "Failed: TX packet count is of inconsitent with sniffed TX packet count." + elif loading_size <= segment_size and not rx_stats == packet_count: + return "Failed: RX packet count is of inconsitent with TX packet count." + elif loading_size <= segment_size and not all( + [int(payload_size_list[j]) == loading_size for j in range(packet_count)] + ): + return "Failed: RX packet size is of inconsitent with TX packet size." + elif rx_stats != num_segs: + return "Failed: RX packet count is of inconsitent with segmented TX packet count." + elif not ( + all( + [ + # i * packet_count + j is the i-th segmentation for j-th packet. + payload_size_list[i * packet_count + j] == 800 + for j in range(packet_count) + for i in range(num_segs_full) + ] + ) + and all( + [ + # num_segs_full * packet_count + j is the last segmentation for j-th packet. + payload_size_list[num_segs_full * packet_count + j] + == (loading_size % 800) + for j in range(packet_count) + ] + ) + ): + return ( + "Failed: RX packet segmentation size incorrect, %s." 
% payload_size_list + ) + return None + + def tso_validate( + self, + tx_interface, + rx_interface, + mac, + inet_type, + size_and_count, + outer_pkts=None, + ): + + validate_result = [] + + self.tester.scapy_foreground() + self.tester.scapy_append("from scapy.contrib.gtp import GTP_U_Header") + time.sleep(5) + + if not outer_pkts is None: + for key_outer in outer_pkts: + for loading_size, packet_count in size_and_count: + out = self.vm0_testpmd.execute_cmd( + "clear port info all", "testpmd> ", 120 + ) + self.tcpdump_start_sniffing([tx_interface, rx_interface]) + self.tester.scapy_append( + ( + 'sendp([Ether(dst="%s",src="52:00:00:00:00:00")/' + + outer_pkts[key_outer] + + '/%s(src="192.168.1.1",dst="192.168.1.2")/TCP(sport=1021,dport=1021)/Raw(RandString(size=%s))], iface="%s", count=%s)' + ) + % (mac, inet_type, loading_size, tx_interface, packet_count) + ) + out = self.tester.scapy_execute() + out = self.vm0_testpmd.execute_cmd("show port stats all") + print(out) + self.tcpdump_stop_sniff() + rx_stats, payload_size_list = self.tcpdump_analyse_sniff( + rx_interface + ) + tx_stats, _ = self.tcpdump_analyse_sniff(tx_interface) + payload_size_list.sort(reverse=True) + self.logger.info(payload_size_list) + segment_result = self.segment_validate( + 800, + loading_size, + packet_count, + tx_stats, + rx_stats, + payload_size_list, + ) + if segment_result: + validate_result.append( + f"Packet: {key_outer}, inet type: {inet_type}, loading size: {loading_size} packet count: {packet_count}: " + + segment_result + ) + else: + for loading_size, packet_count in size_and_count: + out = self.vm0_testpmd.execute_cmd( + "clear port info all", "testpmd> ", 120 + ) + self.tcpdump_start_sniffing([tx_interface, rx_interface]) + self.tester.scapy_append( + 'sendp([Ether(dst="%s",src="52:00:00:00:00:00")/%s(src="192.168.1.1",dst="192.168.1.2")/TCP(sport=1021,dport=1021)/Raw(RandString(size=%s))], iface="%s", count=%s)' + % (mac, inet_type, loading_size, tx_interface, packet_count) + ) + out = self.tester.scapy_execute() + out = self.vm0_testpmd.execute_cmd("show port stats all") + print(out) + self.tcpdump_stop_sniff() + rx_stats, payload_size_list = self.tcpdump_analyse_sniff(rx_interface) + tx_stats, _ = self.tcpdump_analyse_sniff(tx_interface) + payload_size_list.sort(reverse=True) + self.logger.info(payload_size_list) + segment_result = self.segment_validate( + 800, + loading_size, + packet_count, + tx_stats, + rx_stats, + payload_size_list, + ) + if segment_result: + validate_result.append( + f"Inet type: {inet_type}, loading size: {loading_size} packet count: {packet_count}: " + + segment_result + ) + return validate_result + def test_tso(self): """ TSO IPv4 TCP, IPv6 TCP testing. @@ -863,7 +1005,7 @@ class TestVfOffload(TestCase): # Here size_and_count is a list of tuples for the test scopes that # in a tuple (size, count) means, sending packets for count times # for TSO with a payload size of size. 
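        # For example, (801, 10) sends 10 packets each carrying an 801-byte
        # payload; with the TSO segment size set to 800, every packet is
        # expected to be captured as two segments with 800 and 1 payload bytes.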
- self.size_and_count = [ + size_and_count = [ (128, 10), (800, 10), (801, 10), @@ -895,162 +1037,30 @@ class TestVfOffload(TestCase): mac = self.vm0_testpmd.get_port_mac(0) self.vm0_testpmd.execute_cmd("set verbose 1", "testpmd> ", 120) - self.vm0_testpmd.execute_cmd("port stop all", "testpmd> ", 120) - self.vm0_testpmd.execute_cmd( - "csum set ip hw %d" % self.dut_ports[0], "testpmd> ", 120 - ) - self.vm0_testpmd.execute_cmd( - "csum set udp hw %d" % self.dut_ports[0], "testpmd> ", 120 - ) - self.vm0_testpmd.execute_cmd( - "csum set tcp hw %d" % self.dut_ports[0], "testpmd> ", 120 - ) - self.vm0_testpmd.execute_cmd( - "csum set sctp hw %d" % self.dut_ports[0], "testpmd> ", 120 - ) - self.vm0_testpmd.execute_cmd( - "csum set outer-ip hw %d" % self.dut_ports[0], "testpmd> ", 120 - ) - self.vm0_testpmd.execute_cmd( - "csum parse-tunnel on %d" % self.dut_ports[0], "testpmd> ", 120 - ) - - self.vm0_testpmd.execute_cmd( - "csum set ip hw %d" % self.dut_ports[1], "testpmd> ", 120 - ) - self.vm0_testpmd.execute_cmd( - "csum set udp hw %d" % self.dut_ports[1], "testpmd> ", 120 - ) - self.vm0_testpmd.execute_cmd( - "csum set tcp hw %d" % self.dut_ports[1], "testpmd> ", 120 - ) - self.vm0_testpmd.execute_cmd( - "csum set sctp hw %d" % self.dut_ports[1], "testpmd> ", 120 - ) - self.vm0_testpmd.execute_cmd( - "csum set outer-ip hw %d" % self.dut_ports[1], "testpmd> ", 120 - ) - self.vm0_testpmd.execute_cmd( - "csum parse-tunnel on %d" % self.dut_ports[1], "testpmd> ", 120 - ) - - self.vm0_testpmd.execute_cmd("tso set 800 %d" % self.vm0_dut_ports[1]) self.vm0_testpmd.execute_cmd("set fwd csum") - self.vm0_testpmd.execute_cmd("port start all", "testpmd> ", 120) + self.tso_enable(self.vm0_dut_ports[0], self.vm_dut_0) + self.tso_enable(self.vm0_dut_ports[1], self.vm_dut_0) self.vm0_testpmd.execute_cmd("set promisc 0 on", "testpmd> ", 120) self.vm0_testpmd.execute_cmd("set promisc 1 on", "testpmd> ", 120) self.vm0_testpmd.execute_cmd("start") + self.vm0_testpmd.wait_link_status_up(self.vm0_dut_ports[0]) + self.vm0_testpmd.wait_link_status_up(self.vm0_dut_ports[1]) - self.tester.scapy_foreground() - time.sleep(5) - - for loading_size, packet_count in self.size_and_count: - # IPv4 tcp test - out = self.vm0_testpmd.execute_cmd("clear port info all", "testpmd> ", 120) - self.tcpdump_start_sniffing([tx_interface, rx_interface]) - self.tester.scapy_append( - 'sendp([Ether(dst="%s",src="52:00:00:00:00:00")/IP(src="192.168.1.1",dst="192.168.1.2")/TCP(sport=1021,dport=1021)/Raw(RandString(size=%s))], iface="%s", count=%s)' - % (mac, loading_size, tx_interface, packet_count) - ) - out = self.tester.scapy_execute() - out = self.vm0_testpmd.execute_cmd("show port stats all") - print(out) - self.tcpdump_stop_sniff() - rx_stats, payload_size_list = self.tcpdump_analyse_sniff(rx_interface) - tx_stats, _ = self.tcpdump_analyse_sniff(tx_interface) - payload_size_list.sort(reverse=True) - self.logger.info(payload_size_list) - if loading_size <= 800: - self.verify( - # for all packet_count packets, verifying the packet size equals the size we sent. - rx_stats == tx_stats - and all( - [ - int(payload_size_list[j]) == loading_size - for j in range(packet_count) - ] - ), - "IPV4 RX or TX packet number not correct", - ) - else: - num = loading_size // 800 - for i in range(num): - self.verify( - # i * packet_count + j is the i-th segmentation for j-th packet. 
- all( - [ - int(payload_size_list[i * packet_count + j]) == 800 - for j in range(packet_count) - ] - ), - "the packet segmentation incorrect, %s" % payload_size_list, - ) - if loading_size % 800 != 0: - self.verify( - # num * packet_count + j is the last segmentation for j-th packet. - all( - [ - int(payload_size_list[num * packet_count + j]) - == loading_size % 800 - for j in range(packet_count) - ] - ), - "the packet segmentation incorrect, %s" % payload_size_list, - ) - - for loading_size, packet_count in self.size_and_count: - # IPv6 tcp test - out = self.vm0_testpmd.execute_cmd("clear port info all", "testpmd> ", 120) - self.tcpdump_start_sniffing([tx_interface, rx_interface]) - self.tester.scapy_append( - 'sendp([Ether(dst="%s", src="52:00:00:00:00:00")/IPv6(src="FE80:0:0:0:200:1FF:FE00:200", dst="3555:5555:6666:6666:7777:7777:8888:8888")/TCP(sport=1021,dport=1021)/Raw(RandString(size=%s))], iface="%s", count=%s)' - % (mac, loading_size, tx_interface, packet_count) - ) - out = self.tester.scapy_execute() - out = self.vm0_testpmd.execute_cmd("show port stats all") - print(out) - self.tcpdump_stop_sniff() - rx_stats, payload_size_list = self.tcpdump_analyse_sniff(rx_interface) - tx_stats, _ = self.tcpdump_analyse_sniff(tx_interface) - payload_size_list.sort(reverse=True) - self.logger.info(payload_size_list) - if loading_size <= 800: - self.verify( - # for all packet_count packets, verifying the packet size equals the size we sent. - rx_stats == tx_stats - and all( - [ - int(payload_size_list[j]) == loading_size - for j in range(packet_count) - ] - ), - "IPV6 RX or TX packet number not correct", - ) - else: - num = loading_size // 800 - for i in range(num): - self.verify( - # i * packet_count + j is the i-th segmentation for j-th packet. - all( - [ - int(payload_size_list[i * packet_count + j]) == 800 - for j in range(packet_count) - ] - ), - "the packet segmentation incorrect, %s" % payload_size_list, - ) - if loading_size % 800 != 0: - self.verify( - # num * packet_count + j is the last segmentation for j-th packet. - all( - [ - int(payload_size_list[num * packet_count + j]) - == loading_size % 800 - for j in range(packet_count) - ] - ), - "the packet segmentation incorrect, %s" % payload_size_list, - ) + validate_result = [] + validate_result += self.tso_validate( + tx_interface=tx_interface, + rx_interface=rx_interface, + mac=mac, + inet_type="IP", + size_and_count=size_and_count, + ) + validate_result += self.tso_validate( + tx_interface=tx_interface, + rx_interface=rx_interface, + mac=mac, + inet_type="IPv6", + size_and_count=size_and_count, + ) @check_supported_nic( ["ICE_100G-E810C_QSFP", "ICE_25G-E810C_SFP", "ICE_25G-E810_XXV_SFP"] @@ -1070,7 +1080,7 @@ class TestVfOffload(TestCase): # Here size_and_count is a list of tuples for the test scopes that # in a tuple (size, count) means, sending packets for count times # for TSO with a payload size of size. 
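        # The same (size, count) pairs are used for the tunnel case; the
        # segmentation check still runs on the inner TCP payload lengths
        # carried inside the tunnel headers.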
- self.size_and_count = [ + size_and_count = [ (128, 10), (800, 10), (801, 10), @@ -1130,138 +1140,23 @@ class TestVfOffload(TestCase): "IPv6/UDP/GTPU": 'IPv6(src = "FE80:0:0:0:200:1FF:FE00:200", dst = "3555:5555:6666:6666:7777:7777:8888:8888") / UDP(dport = 2152) / GTP_U_Header(gtp_type=255, teid=0x123456)', } - self.tester.scapy_foreground() - time.sleep(5) - - for key_outer in pkts_outer: - for loading_size, packet_count in self.size_and_count: - # IPv4 tcp test - out = self.vm0_testpmd.execute_cmd( - "clear port info all", "testpmd> ", 120 - ) - self.tcpdump_start_sniffing([tx_interface, rx_interface]) - if "GTPU" in key_outer: - self.tester.scapy_append( - "from scapy.contrib.gtp import GTP_U_Header" - ) - self.tester.scapy_append( - ( - 'sendp([Ether(dst="%s",src="52:00:00:00:00:00")/' - + pkts_outer[key_outer] - + '/IP(src="192.168.1.1",dst="192.168.1.2")/TCP(sport=1021,dport=1021)/Raw(RandString(size=%s))], iface="%s", count=%s)' - ) - % (mac, loading_size, tx_interface, packet_count) - ) - out = self.tester.scapy_execute() - out = self.vm0_testpmd.execute_cmd("show port stats all") - print(out) - self.tcpdump_stop_sniff() - rx_stats, payload_size_list = self.tcpdump_analyse_sniff(rx_interface) - tx_stats, _ = self.tcpdump_analyse_sniff(tx_interface) - payload_size_list.sort(reverse=True) - self.logger.info(payload_size_list) - if loading_size <= 800: - self.verify( - # for all packet_count packets, verifying the packet size equals the size we sent. - rx_stats == tx_stats - and all( - [ - int(payload_size_list[j]) == loading_size - for j in range(packet_count) - ] - ), - f"{key_outer} tunnel IPV4 RX or TX packet number not correct", - ) - else: - num = loading_size // 800 - for i in range(num): - self.verify( - # i * packet_count + j is the i-th segmentation for j-th packet. - all( - [ - payload_size_list[i * packet_count + j] == 800 - for j in range(packet_count) - ] - ), - "the packet segmentation incorrect, %s" % payload_size_list, - ) - if loading_size % 800 != 0: - self.verify( - # num * packet_count + j is the last segmentation for j-th packet. - all( - [ - payload_size_list[num * packet_count + j] - == loading_size % 800 - for j in range(packet_count) - ] - ), - "the packet segmentation incorrect, %s" % payload_size_list, - ) - - for loading_size, packet_count in self.size_and_count: - # IPv6 tcp test - out = self.vm0_testpmd.execute_cmd( - "clear port info all", "testpmd> ", 120 - ) - self.tcpdump_start_sniffing([tx_interface, rx_interface]) - if "GTPU" in key_outer: - self.tester.scapy_append( - "from scapy.contrib.gtp import GTP_U_Header" - ) - self.logger.info([mac, loading_size, tx_interface, packet_count]) - self.tester.scapy_append( - ( - 'sendp([Ether(dst="%s", src="52:00:00:00:00:00")/' - + pkts_outer[key_outer] - + '/IPv6(src="FE80:0:0:0:200:1FF:FE00:200", dst="3555:5555:6666:6666:7777:7777:8888:8888")/TCP(sport=1021,dport=1021)/("X"*%s)], iface="%s", count=%s)' - ) - % (mac, loading_size, tx_interface, packet_count) - ) - out = self.tester.scapy_execute() - out = self.vm0_testpmd.execute_cmd("show port stats all") - print(out) - self.tcpdump_stop_sniff() - rx_stats, payload_size_list = self.tcpdump_analyse_sniff(rx_interface) - tx_stats, _ = self.tcpdump_analyse_sniff(tx_interface) - payload_size_list.sort(reverse=True) - self.logger.info(payload_size_list) - if loading_size <= 800: - self.verify( - # for all packet_count packets, verifying the packet size equals the size we sent. 
- rx_stats == tx_stats - and all( - [ - payload_size_list[j] == loading_size - for j in range(packet_count) - ] - ), - f"{key_outer} tunnel IPV6 RX or TX packet number not correct", - ) - else: - num = loading_size // 800 - for i in range(num): - self.verify( - # i * packet_count + j is the i-th segmentation for j-th packet. - all( - [ - payload_size_list[i * packet_count + j] == 800 - for j in range(packet_count) - ] - ), - "the packet segmentation incorrect, %s" % payload_size_list, - ) - if loading_size % 800 != 0: - self.verify( - # num * packet_count + j is the last segmentation for j-th packet. - all( - [ - payload_size_list[num * packet_count + j] - == loading_size % 800 - for j in range(packet_count) - ] - ), - "the packet segmentation incorrect, %s" % payload_size_list, - ) + validate_result = [] + validate_result += self.tso_validate( + tx_interface=tx_interface, + rx_interface=rx_interface, + mac=mac, + inet_type="IP", + size_and_count=size_and_count, + outer_pkts=pkts_outer, + ) + validate_result += self.tso_validate( + tx_interface=tx_interface, + rx_interface=rx_interface, + mac=mac, + inet_type="IPv6", + size_and_count=size_and_count, + outer_pkts=pkts_outer, + ) def tear_down(self): self.vm0_testpmd.execute_cmd("quit", "# ") From patchwork Tue Feb 7 03:23:11 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ke Xu X-Patchwork-Id: 123193 Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id D28C341C28; Tue, 7 Feb 2023 04:25:26 +0100 (CET) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id CB59C427E9; Tue, 7 Feb 2023 04:25:26 +0100 (CET) Received: from mga01.intel.com (mga01.intel.com [192.55.52.88]) by mails.dpdk.org (Postfix) with ESMTP id 6363240E6E for ; Tue, 7 Feb 2023 04:25:24 +0100 (CET) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1675740324; x=1707276324; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=KKod1xNTy9VgcR7GEXT5aqem2BDS4Q3+DNeKIEaweeM=; b=k4inFzyQEnhl8AkZt+tuqw1R0RvzK8V8VMROJg7tMiwl2rbShrLJlAQq dBRGh6tfB/HS3dmSUjlFCv7eSdIO/tw1mSupIEcGnluJ8kvY231um/Oxr XkLUkNchhq6PL1RU3KJi2m1MpfgPQbgxaLy4a33Bs0oixJEPudUtfnvps 0BIG5E35RF4xi2SI7ovhKo33KSaxys2GPxsKCpux7grDeTSe6cumOPb73 HEr/xpYahgvWXLZ+ncr13eIMpM63/i/OervNN9BGfdqadb1MgOZnVVbHd aldtlw1kd0ubgn4ebgQhf7LoyinqTZTEicnuNDh2tw+GB8nsubC0jnIjE g==; X-IronPort-AV: E=McAfee;i="6500,9779,10613"; a="356758003" X-IronPort-AV: E=Sophos;i="5.97,278,1669104000"; d="scan'208";a="356758003" Received: from fmsmga007.fm.intel.com ([10.253.24.52]) by fmsmga101.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 06 Feb 2023 19:25:23 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6500,9779,10613"; a="668636726" X-IronPort-AV: E=Sophos;i="5.97,278,1669104000"; d="scan'208";a="668636726" Received: from dpdk-xuke-lab.sh.intel.com ([10.67.119.8]) by fmsmga007.fm.intel.com with ESMTP; 06 Feb 2023 19:25:22 -0800 From: Ke Xu To: dts@dpdk.org Cc: ke1.xu@intel.com, yux.jiang@intel.com, lijuan.tu@intel.com, qi.fu@intel.com Subject: [DTS][PATCH V1 3/5] tests/vf_offload: improve vector path validating. 
Date: Tue, 7 Feb 2023 11:23:11 +0800 Message-Id: <20230207032313.404935-4-ke1.xu@intel.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230207032313.404935-1-ke1.xu@intel.com> References: <20230207032313.404935-1-ke1.xu@intel.com> MIME-Version: 1.0 X-BeenThere: dts@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: test suite reviews and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dts-bounces@dpdk.org For better deployment for daily regression, we introduce wrapped cases for each path. Signed-off-by: Ke Xu --- tests/TestSuite_vf_offload.py | 152 +++++++++++++++++++++++++++++++++- 1 file changed, 151 insertions(+), 1 deletion(-) diff --git a/tests/TestSuite_vf_offload.py b/tests/TestSuite_vf_offload.py index bd412100..93b28afd 100644 --- a/tests/TestSuite_vf_offload.py +++ b/tests/TestSuite_vf_offload.py @@ -185,10 +185,18 @@ class TestVfOffload(TestCase): def launch_testpmd(self, **kwargs): dcf_flag = kwargs.get("dcf_flag") + eal_param = self.eal_para if hasattr(self, "eal_para") else "" + eal_param += ( + " --force-max-simd-bitwidth=%d " % self.specific_bitwidth + if hasattr(self, "specific_bitwidth") + and not "force-max-simd-bitwidth" in eal_param + else "" + ) param = kwargs.get("param") if kwargs.get("param") else "" if dcf_flag == "enable": self.vm0_testpmd.start_testpmd( VM_CORES_MASK, + eal_param=eal_param, param=param, ports=[self.vf0_guest_pci, self.vf1_guest_pci], port_options={ @@ -197,7 +205,9 @@ class TestVfOffload(TestCase): }, ) else: - self.vm0_testpmd.start_testpmd(VM_CORES_MASK, param=param) + self.vm0_testpmd.start_testpmd( + VM_CORES_MASK, eal_param=eal_param, param=param + ) def checksum_enablehw(self, port, dut): dut.send_expect("port stop all", "testpmd>") @@ -812,6 +822,106 @@ class TestVfOffload(TestCase): self.verify(len(result) == 0, ",".join(list(result.values()))) + def test_checksum_offload_enable_scalar(self): + self.specific_bitwidth = 64 + self.test_checksum_offload_enable() + del self.specific_bitwidth + + def test_checksum_offload_enable_sse(self): + self.specific_bitwidth = 128 + self.test_checksum_offload_enable() + del self.specific_bitwidth + + def test_checksum_offload_enable_avx2(self): + self.specific_bitwidth = 256 + self.test_checksum_offload_enable() + del self.specific_bitwidth + + def test_checksum_offload_enable_avx512(self): + self.specific_bitwidth = 512 + self.test_checksum_offload_enable() + del self.specific_bitwidth + + def test_checksum_offload_vlan_enable_scalar(self): + self.specific_bitwidth = 64 + self.test_checksum_offload_vlan_enable() + del self.specific_bitwidth + + def test_checksum_offload_vlan_enable_sse(self): + self.specific_bitwidth = 128 + self.test_checksum_offload_vlan_enable() + del self.specific_bitwidth + + def test_checksum_offload_vlan_enable_avx2(self): + self.specific_bitwidth = 256 + self.test_checksum_offload_vlan_enable() + del self.specific_bitwidth + + def test_checksum_offload_vlan_enable_avx512(self): + self.specific_bitwidth = 512 + self.test_checksum_offload_vlan_enable() + del self.specific_bitwidth + + def test_checksum_offload_tunnel_enable_scalar(self): + self.specific_bitwidth = 64 + self.test_checksum_offload_tunnel_enable() + del self.specific_bitwidth + + def test_checksum_offload_tunnel_enable_sse(self): + self.specific_bitwidth = 128 + self.test_checksum_offload_tunnel_enable() + del self.specific_bitwidth + + def test_checksum_offload_tunnel_enable_avx2(self): + self.specific_bitwidth = 256 + 
self.test_checksum_offload_tunnel_enable() + del self.specific_bitwidth + + def test_checksum_offload_tunnel_enable_avx512(self): + self.specific_bitwidth = 512 + self.test_checksum_offload_tunnel_enable() + del self.specific_bitwidth + + def test_checksum_offload_vlan_tunnel_enable_scalar(self): + self.specific_bitwidth = 64 + self.test_checksum_offload_vlan_tunnel_enable() + del self.specific_bitwidth + + def test_checksum_offload_vlan_tunnel_enable_sse(self): + self.specific_bitwidth = 128 + self.test_checksum_offload_vlan_tunnel_enable() + del self.specific_bitwidth + + def test_checksum_offload_vlan_tunnel_enable_avx2(self): + self.specific_bitwidth = 256 + self.test_checksum_offload_vlan_tunnel_enable() + del self.specific_bitwidth + + def test_checksum_offload_vlan_tunnel_enable_avx512(self): + self.specific_bitwidth = 512 + self.test_checksum_offload_vlan_tunnel_enable() + del self.specific_bitwidth + + def test_checksum_offload_disable_scalar(self): + self.specific_bitwidth = 64 + self.test_checksum_offload_disable() + del self.specific_bitwidth + + def test_checksum_offload_disable_sse(self): + self.specific_bitwidth = 128 + self.test_checksum_offload_disable() + del self.specific_bitwidth + + def test_checksum_offload_disable_avx2(self): + self.specific_bitwidth = 256 + self.test_checksum_offload_disable() + del self.specific_bitwidth + + def test_checksum_offload_disable_avx512(self): + self.specific_bitwidth = 512 + self.test_checksum_offload_disable() + del self.specific_bitwidth + def tcpdump_start_sniffing(self, ifaces=[]): """ Start tcpdump in the background to sniff the tester interface where @@ -1158,6 +1268,46 @@ class TestVfOffload(TestCase): outer_pkts=pkts_outer, ) + def test_tso_scalar(self): + self.specific_bitwidth = 64 + self.test_tso() + del self.specific_bitwidth + + def test_tso_sse(self): + self.specific_bitwidth = 128 + self.test_tso() + del self.specific_bitwidth + + def test_tso_avx2(self): + self.specific_bitwidth = 256 + self.test_tso() + del self.specific_bitwidth + + def test_tso_avx512(self): + self.specific_bitwidth = 512 + self.test_tso() + del self.specific_bitwidth + + def test_tso_tunnel_scalar(self): + self.specific_bitwidth = 64 + self.test_tso_tunnel() + del self.specific_bitwidth + + def test_tso_sse(self): + self.specific_bitwidth = 128 + self.test_tso_tunnel() + del self.specific_bitwidth + + def test_tso_avx2(self): + self.specific_bitwidth = 256 + self.test_tso_tunnel() + del self.specific_bitwidth + + def test_tso_avx512(self): + self.specific_bitwidth = 512 + self.test_tso_tunnel() + del self.specific_bitwidth + def tear_down(self): self.vm0_testpmd.execute_cmd("quit", "# ") self.dut.send_expect( From patchwork Tue Feb 7 03:23:12 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ke Xu X-Patchwork-Id: 123194 Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 1DFBB41C2A; Tue, 7 Feb 2023 04:25:27 +0100 (CET) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 00FAC427F2; Tue, 7 Feb 2023 04:25:27 +0100 (CET) Received: from mga01.intel.com (mga01.intel.com [192.55.52.88]) by mails.dpdk.org (Postfix) with ESMTP id A04A340E6E for ; Tue, 7 Feb 2023 04:25:25 +0100 (CET) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1675740325; 
x=1707276325; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=ACh/Nb6UNMhoHqa9bTBeYkQalujTaTcTh1ZHPsNgyok=; b=JO1R02xxxyadmGsqEdoiGyyGgntBPLEzP6cfGzuraf2l596IlKorX7rW ff37U0ARLIZ8G3zUUOO3SAJzUTQ2p2BLIUuolGmqKMWFrWJeulWhjUelU sKYrA8vE4uxL3sWN/ZTan73atu9YeH4u/zvjaP2nRl8GR9S06hQlmbrNw ZLHy4nnTZn3Bs/ChN8hGZTOsq8HATrDEy9gkPdPRDGMfGEii4wUJUpWlB pEEuINmrewMxtuB4T8eATzp9Yiy7rFpnsAFXIXIdQWEgeDO6F1tWPBNQK 4docA9EDAO0zt/sT8g2Jzl6806q6Mqi/Ux+pn1phnLNG83DrWSPXFEoVf Q==; X-IronPort-AV: E=McAfee;i="6500,9779,10613"; a="356758006" X-IronPort-AV: E=Sophos;i="5.97,278,1669104000"; d="scan'208";a="356758006" Received: from fmsmga007.fm.intel.com ([10.253.24.52]) by fmsmga101.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 06 Feb 2023 19:25:25 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6500,9779,10613"; a="668636733" X-IronPort-AV: E=Sophos;i="5.97,278,1669104000"; d="scan'208";a="668636733" Received: from dpdk-xuke-lab.sh.intel.com ([10.67.119.8]) by fmsmga007.fm.intel.com with ESMTP; 06 Feb 2023 19:25:23 -0800 From: Ke Xu To: dts@dpdk.org Cc: ke1.xu@intel.com, yux.jiang@intel.com, lijuan.tu@intel.com, qi.fu@intel.com Subject: [DTS][PATCH V1 4/5] tests/vf_offload: fix error when no packet captured. Date: Tue, 7 Feb 2023 11:23:12 +0800 Message-Id: <20230207032313.404935-5-ke1.xu@intel.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230207032313.404935-1-ke1.xu@intel.com> References: <20230207032313.404935-1-ke1.xu@intel.com> MIME-Version: 1.0 X-BeenThere: dts@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: test suite reviews and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dts-bounces@dpdk.org Signed-off-by: Ke Xu --- tests/TestSuite_vf_offload.py | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/tests/TestSuite_vf_offload.py b/tests/TestSuite_vf_offload.py index 93b28afd..e9f70562 100644 --- a/tests/TestSuite_vf_offload.py +++ b/tests/TestSuite_vf_offload.py @@ -266,7 +266,7 @@ class TestVfOffload(TestCase): def filter_packets(self, packets): return [ p - for p in packets + for p in (packets if packets else []) if len(p.layers()) >= 3 and p.layers()[1] in {IP, IPv6, Dot1Q} and p.layers()[2] in {IP, IPv6, Dot1Q, UDP, TCP, SCTP, GRE, MPLS} From patchwork Tue Feb 7 03:23:13 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ke Xu X-Patchwork-Id: 123195 Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 33CE141C28; Tue, 7 Feb 2023 04:25:29 +0100 (CET) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 2BF3B42B8C; Tue, 7 Feb 2023 04:25:29 +0100 (CET) Received: from mga01.intel.com (mga01.intel.com [192.55.52.88]) by mails.dpdk.org (Postfix) with ESMTP id 7D02940E6E for ; Tue, 7 Feb 2023 04:25:27 +0100 (CET) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1675740327; x=1707276327; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=SUQem2bphwZFoUjKjsYUqz+eObP41CvnVDrLqUtGLf0=; b=f4McS7PTB8B7bNqyguWGMlUhQr8IYWp+cDZVNyN3MRxNTgU999g051KW 4lyNpUHWWV5RiFn/oCt99tMuXjVMzzetHgkSM9K69mDdoZmEdRubBSBRh n050j8Wc4iM0DME0o+5QlufT3e/4CZ27iaxstQG7cN+ErU9COqV+Q+oTV 
auJzu/xnHq1zbeHT/0z3/XHJZQdaI1Ta5XI+CkGEOX7ThLiG+SU7QTOpH XY+oxhQLsd0qFbOFF06petF5Oy0H5V7qX78bNcwnJGv0ghTkY0tU6+2dp SzQ1LhGXOb8fKAoj8xZmVW+eOEVjHhUYVFvngSo7WiJNdlm2bX9OZJsVF w==; X-IronPort-AV: E=McAfee;i="6500,9779,10613"; a="356758017" X-IronPort-AV: E=Sophos;i="5.97,278,1669104000"; d="scan'208";a="356758017" Received: from fmsmga007.fm.intel.com ([10.253.24.52]) by fmsmga101.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 06 Feb 2023 19:25:27 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6500,9779,10613"; a="668636753" X-IronPort-AV: E=Sophos;i="5.97,278,1669104000"; d="scan'208";a="668636753" Received: from dpdk-xuke-lab.sh.intel.com ([10.67.119.8]) by fmsmga007.fm.intel.com with ESMTP; 06 Feb 2023 19:25:25 -0800 From: Ke Xu To: dts@dpdk.org Cc: ke1.xu@intel.com, yux.jiang@intel.com, lijuan.tu@intel.com, qi.fu@intel.com Subject: [DTS][PATCH V1 5/5] test_plans/vf_offload: add VLAN packets to test scope and improve vector path validating. Date: Tue, 7 Feb 2023 11:23:13 +0800 Message-Id: <20230207032313.404935-6-ke1.xu@intel.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230207032313.404935-1-ke1.xu@intel.com> References: <20230207032313.404935-1-ke1.xu@intel.com> MIME-Version: 1.0 X-BeenThere: dts@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: test suite reviews and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dts-bounces@dpdk.org Add VLAN Packets to ensure checksum offload works well on packets with Dot1Q part. For daily regression, we introduce wrapped cases for each path. Signed-off-by: Ke Xu --- test_plans/vf_offload_test_plan.rst | 158 ++++++++++++++++++++++++++++ 1 file changed, 158 insertions(+) diff --git a/test_plans/vf_offload_test_plan.rst b/test_plans/vf_offload_test_plan.rst index 3e9b658b..1c929371 100644 --- a/test_plans/vf_offload_test_plan.rst +++ b/test_plans/vf_offload_test_plan.rst @@ -286,6 +286,135 @@ be validated as pass by the tester. The first byte of source IPv4 address will be increased by testpmd. The checksum is indeed recalculated by software algorithms. + +Test Case: HW checksum offload check with vlan +============================================== +Start testpmd and enable checksum offload on rx port. 
Based on test steps of +HW checksum offload check, configure the traffic generator to send the multiple +packets for the following combination: + + +----------------+----------------------------------------+ + | packet type | packet organization | + +================+========================================+ + | | Ether / VLAN / IPv4 / UDP / payload | + | +----------------------------------------+ + | | Ether / VLAN / IPv4 / TCP / payload | + | +----------------------------------------+ + | packets | Ether / VLAN / IPv4 / SCTP / payload | + | for checksum +----------------------------------------+ + | offload test | Ether / VLAN / IPv6 / UDP / payload | + | +----------------------------------------+ + | | Ether / VLAN / IPv6 / TCP / payload | + +----------------+----------------------------------------+ + + +Test Case: HW tunneling checksum offload check with vlan +======================================================== +Based on test steps of HW tunneling checksum offload check, configure the +traffic generator to send the multiple packets combination with outer or +tunneling package of: + + +----------------+--------------------------------------------+ + | packet type | packet organization | + +================+============================================+ + | | Ether / VLAN / IPv4 / UDP / VXLAN / Ether | + | +--------------------------------------------+ + | | Ether / VLAN / IPv6 / UDP / VXLAN / Ether | + | +--------------------------------------------+ + | | Ether / VLAN / IPv4 / GRE | + | outer and +--------------------------------------------+ + | tunneling | Ether / VLAN / IPv4 / GRE / Ether | + | packets +--------------------------------------------+ + | for checksum | Ether / VLAN / IPv6 / GRE | + | offload test +--------------------------------------------+ + | | Ether / VLAN / IPv6 / GRE / Ether | + | +--------------------------------------------+ + | | Ether / VLAN / IPv4 / NVGRE | + | +--------------------------------------------+ + | | Ether / VLAN / IPv4 / NVGRE / Ether | + | +--------------------------------------------+ + | | Ether / VLAN / IPv6 / NVGRE | + | +--------------------------------------------+ + | | Ether / VLAN / IPv6 / NVGRE / Ether | + | +--------------------------------------------+ + | | Ether / VLAN / IPv4 / UDP / GTPU | + | +--------------------------------------------+ + | | Ether / VLAN / IPv6 / UDP / GTPU | + +----------------+--------------------------------------------+ + + +Test Case: HW checksum offload check on scalar path +=================================================== +These set of cases based on existing cases are designed for better case managment for +regression test. + +Start testpmd with eal parameter --force-max-simd-bitwidth=64. Based on test steps of +'HW checksum offload check'. + +Test Case: HW checksum offload check on sse path +================================================ +Start testpmd with eal parameter --force-max-simd-bitwidth=128. Based on test steps of +'HW checksum offload check'. + +Test Case: HW checksum offload check on avx2 path +================================================= +Start testpmd with eal parameter --force-max-simd-bitwidth=256. Based on test steps of +'HW checksum offload check'. + +Test Case: HW checksum offload check on avx512 path +=================================================== +Start testpmd with eal parameter --force-max-simd-bitwidth=512. Based on test steps of +'HW checksum offload check'. 
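For the 'HW checksum offload check with vlan' cases, the corrupted-checksum packets can be generated with scapy along these lines (the destination MAC and interface come from the VF under test and are shown as %s here; the VLAN id, payload and count are examples)::

    sendp([Ether(dst="%s", src="52:00:00:00:00:00")/Dot1Q(vlan=100)/IP(src="10.0.0.1", chksum=0xf)/UDP(chksum=0xf)/("X"*46)], iface="%s", count=4)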
+ +Test Case: HW checksum offload check with vlan on scalar path +============================================================= + +Test Case: HW checksum offload check with vlan on sse path +============================================================= + +Test Case: HW checksum offload check with vlan on avx2 path +============================================================= + +Test Case: HW checksum offload check with vlan on avx512 path +============================================================= + +Test Case: HW tunneling checksum offload check on scalar path +============================================================= + +Test Case: HW tunneling checksum offload check on sse path +========================================================== + +Test Case: HW tunneling checksum offload check on avx2 path +=========================================================== + +Test Case: HW tunneling checksum offload check on avx512 path +============================================================= + +Test Case: HW tunneling checksum offload check with vlan on scalar path +======================================================================= + +Test Case: HW tunneling checksum offload check with vlan on sse path +==================================================================== + +Test Case: HW tunneling checksum offload check with vlan on avx2 path +===================================================================== + +Test Case: HW tunneling checksum offload check with vlan on avx512 path +======================================================================= + +Test Case: SW checksum offload check on scalar path +=================================================== + +Test Case: SW checksum offload check on sse path +================================================ + +Test Case: SW checksum offload check on avx2 path +================================================= + +Test Case: SW checksum offload check on avx512 path +=================================================== + + Prerequisites for TSO ===================== @@ -510,3 +639,32 @@ Test IPv6() in scapy:: for one_outer_packet in outer_packet_list: sendp([Ether(dst="%s", src="52:00:00:00:00:00")/one_outer_packet/IPv6(src="FE80:0:0:0:200:1FF:FE00:200", dst="3555:5555:6666:6666:7777:7777:8888:8888")/TCP(sport=1021,dport=1021)/Raw(load=RandString(size=%s))], iface="%s", count = %s) + +Test case: csum fwd engine, use TSO, on scalar path +=================================================== +These set of cases based on existing cases are designed for better case managment for +regression test. + +Start testpmd with eal parameter --force-max-simd-bitwidth=64. Based on test steps of +'csum fwd engine, use TSO'. + +Test case: csum fwd engine, use TSO, on sse path +================================================ + +Test case: csum fwd engine, use TSO, on avx2 path +================================================= + +Test case: csum fwd engine, use TSO, on avx512 path +=================================================== + +Test case: csum fwd engine, use tunnel TSO, on scalar path +========================================================== + +Test case: csum fwd engine, use tunnel TSO, on sse path +======================================================= + +Test case: csum fwd engine, use tunnel TSO, on avx2 path +======================================================== + +Test case: csum fwd engine, use tunnel TSO, on avx512 path +==========================================================
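The tunnel TSO path cases reuse the traffic described in 'csum fwd engine, use tunnel TSO'. For example, with a VXLAN-GPE style outer header on UDP port 4790 (destination MAC and interface are shown as %s; payload size and count follow the size_and_count list used by the suite)::

    sendp([Ether(dst="%s", src="52:00:00:00:00:00")/IP(src="10.0.0.1")/UDP(sport=4790, dport=4790)/VXLAN()/Ether()/IP(src="192.168.1.1", dst="192.168.1.2")/TCP(sport=1021, dport=1021)/Raw(RandString(size=801))], iface="%s", count=10)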