From patchwork Tue Aug 16 09:05:03 2022
X-Patchwork-Submitter: Yingya Han <yingyax.han@intel.com>
X-Patchwork-Id: 115163
From: Yingya Han <yingyax.han@intel.com>
To: dts@dpdk.org
Cc: Yingya Han <yingyax.han@intel.com>
Subject: [dts][PATCH V1 1/3] test_plans: add rx_timestamp_perf test plan
Date: Tue, 16 Aug 2022 17:05:03 +0800
Message-Id: <20220816090503.1752363-1-yingyax.han@intel.com>

Signed-off-by: Yingya Han <yingyax.han@intel.com>
---
 test_plans/rx_timestamp_perf_test_plan.rst | 180 +++++++++++++++++++++
 1 file changed, 180 insertions(+)
 create mode 100644 test_plans/rx_timestamp_perf_test_plan.rst

diff --git a/test_plans/rx_timestamp_perf_test_plan.rst b/test_plans/rx_timestamp_perf_test_plan.rst
new file mode 100644
index 00000000..0fe97b08
--- /dev/null
+++ b/test_plans/rx_timestamp_perf_test_plan.rst
@@ -0,0 +1,180 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+   Copyright(c) 2022 Intel Corporation
+
+==============================================================
+Benchmark the performance of rx timestamp forwarding with E810
+==============================================================
+
+This document describes the plan for benchmarking the forwarding performance of
+Intel® Ethernet Controller E810 adapters when the rx timestamp offload is used.
+It covers throughput test cases with the rx timestamp offload enabled and disabled.
+The performance results are produced with the ``dpdk-testpmd`` application.
+
+Prerequisites
+=============
+
+1. Hardware:
+
+   1.1) rx timestamp perf test for Intel® Ethernet Network Adapter E810-CQDA2:
+        1 NIC card, or 2 NIC cards attached to the same processor, with 1 port used on each NIC.
+   1.2) rx timestamp perf test for Intel® Ethernet Network Adapter E810-XXVDA4:
+        1 NIC card attached to the processor, with 4 ports used.
+
+2. Software::
+
+      dpdk: git clone http://dpdk.org/git/dpdk
+      trex: wget http://trex-tgn.cisco.com/trex/release/v2.93.tar.gz
+
+Test Case
+=========
+The test cases check the IPv4 throughput: bi-directional flows are sent at line
+rate, and the pass-through (forwarding) rate is measured.
+
+Common Steps
+------------
+1. Bind the tested ports to vfio-pci::
+
+    #./usertools/dpdk-devbind.py -s
+    0000:17:00.0 'Device 1592' if=ens5f0 drv=ice unused=vfio-pci
+    0000:4b:00.1 'Device 1592' if=ens6f0 drv=ice unused=vfio-pci
+    #./usertools/dpdk-devbind.py -b vfio-pci
+    #./usertools/dpdk-devbind.py -b vfio-pci 0000:17:00.0
+    #./usertools/dpdk-devbind.py -b vfio-pci 0000:4b:00.1
+
+2. Configure the traffic generator to send traffic.
+
+   Test flow MAC table.
+
+   +------+---------+------------+---------------+
+   | Flow | Traffic | MAC        | MAC           |
+   |      | Gen.    | Src.       | Dst.          |
+   |      | Port    | Address    | Address       |
+   +======+=========+============+===============+
+   | 1    | TG0     | Random MAC | DUT port0 MAC |
+   +------+---------+------------+---------------+
+   | 2    | TG1     | Random MAC | DUT port1 MAC |
+   +------+---------+------------+---------------+
+   | 3    | TG2     | Random MAC | DUT port2 MAC |
+   +------+---------+------------+---------------+
+   | 4    | TG3     | Random MAC | DUT port3 MAC |
+   +------+---------+------------+---------------+
+
+   The flow IP table.
+
+   +------+---------+------------+---------+
+   | Flow | Traffic | IPv4       | IPv4    |
+   |      | Gen.    | Src.       | Dest.   |
+   |      | Port    | Address    | Address |
+   +======+=========+============+=========+
+   | 1    | TG0     | Any IP     | 2.1.1.1 |
+   +------+---------+------------+---------+
+   | 2    | TG1     | Any IP     | 1.1.1.1 |
+   +------+---------+------------+---------+
+   | 3    | TG2     | Any IP     | 4.1.1.1 |
+   +------+---------+------------+---------+
+   | 4    | TG3     | Any IP     | 3.1.1.1 |
+   +------+---------+------------+---------+
+
+   Set the packet length from 64 bytes to 1518 bytes.
+   The IPv4 destination address is incremented across 1024 addresses.
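+
+   As an illustration only (this is not part of the DTS suite), one direction of
+   flow 1 can be sketched with Scapy as below; the destination MAC is a
+   placeholder for the real DUT port0 MAC::
+
+      from scapy.all import IP, Ether, RandIP, RandMAC, Raw, wrpcap
+
+      frame_size = 64                          # any size from 64 to 1518 bytes
+      payload = frame_size - 14 - 20 - 4       # Ether(14) + IPv4(20) + FCS(4)
+      pkt = (
+          Ether(src=RandMAC(), dst="52:00:00:00:00:00")  # random src, DUT port0 MAC
+          / IP(src=RandIP(), dst="2.1.1.1")              # dst from the flow IP table
+          / Raw(load=b"X" * payload)
+      )
+      wrpcap("flow1.pcap", [pkt])              # replay the pcap from the generator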
+
+3. Test results table.
+
+   +-----------+------------+-------------+---------+
+   | Fwd_core  | Frame Size | Throughput  | Rate    |
+   +===========+============+=============+=========+
+   | 1C/1T     | 64         | xxxxx Mpps  | xxx %   |
+   +-----------+------------+-------------+---------+
+   | 1C/1T     | ...        | xxxxx Mpps  | xxx %   |
+   +-----------+------------+-------------+---------+
+   | 2C/2T     | 64         | xxxxx Mpps  | xxx %   |
+   +-----------+------------+-------------+---------+
+   | 2C/2T     | ...        | xxxxx Mpps  | xxx %   |
+   +-----------+------------+-------------+---------+
+   | 4C/4T     | 64         | xxxxx Mpps  | xxx %   |
+   +-----------+------------+-------------+---------+
+   | 4C/4T     | ...        | xxxxx Mpps  | xxx %   |
+   +-----------+------------+-------------+---------+
+   | 8C/8T     | 64         | xxxxx Mpps  | xxx %   |
+   +-----------+------------+-------------+---------+
+   | 8C/8T     | ...        | xxxxx Mpps  | xxx %   |
+   +-----------+------------+-------------+---------+
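+
+   The ``Rate`` column is the measured throughput expressed as a percentage of
+   the theoretical line rate for the tested frame size. A minimal sketch of the
+   calculation, assuming the standard 20 bytes of per-frame Ethernet overhead
+   (preamble + inter-frame gap) and the port speed in Gbit/s::
+
+      def line_rate_mpps(link_speed_gbps, frame_size):
+          """Theoretical packet rate (Mpps) of one port at the given frame size."""
+          return link_speed_gbps * 1000.0 / (8 * (frame_size + 20))
+
+      measured_mpps = 42.0                                    # example value
+      rate = measured_mpps * 100 / line_rate_mpps(100, 64)    # 100G port, 64B frames
+      print("{:.3f} Mpps, {:.3f} %".format(measured_mpps, rate))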
+
+Test Case 1: iavf_throughput_enable_ptp_scalar
+----------------------------------------------
+
+1. Bind the PF ports to the kernel driver (ice), then create 1 VF from each PF,
+   taking E810-CQDA2 as an example::
+
+    echo 1 > /sys/bus/pci/devices/0000\:17\:00.0/sriov_numvfs
+    echo 1 > /sys/bus/pci/devices/0000\:4b\:00.1/sriov_numvfs
+
+2. Set the VF MAC addresses::
+
+    ip link set ens5f0 vf 0 mac 00:12:34:56:78:01
+    ip link set ens6f0 vf 0 mac 00:12:34:56:78:02
+
+3. Bind all the created VFs to the dpdk driver as in common step 1::
+
+    ./usertools/dpdk-devbind.py -b vfio-pci 17:01.0 4b:01.0
+
+4. Start dpdk-testpmd::
+
+    /app/dpdk-testpmd -l 5,6 -n 8 --force-max-simd-bitwidth=64 \
+    -- -i --portmask=0x3 --rxq=1 --txq=1 --txd=1024 --rxd=1024 --forward=mac \
+    --nb-cores=1 --enable-rx-timestamp
+
+   Note:
+
+   --force-max-simd-bitwidth: set to 64; the rx timestamp feature is only
+     supported with a SIMD bitwidth of 64 (the scalar data path).
+   --enable-rx-timestamp: enable the rx timestamp offload.
+
+5. Configure the traffic generator to send traffic as in common step 2.
+
+6. Record the test results as in common step 3.
+
+Test Case 2: iavf_throughput_disable_ptp_scalar
+-----------------------------------------------
+
+1. Execute steps 1-3 of iavf_throughput_enable_ptp_scalar.
+
+2. Start dpdk-testpmd with rx-timestamp disabled::
+
+    /app/dpdk-testpmd -l 5,6 -n 8 --force-max-simd-bitwidth=64 \
+    -- -i --portmask=0x3 --rxq=1 --txq=1 --txd=1024 --rxd=1024 --forward=mac \
+    --nb-cores=1
+
+3. Execute steps 5-6 of iavf_throughput_enable_ptp_scalar.
+
+Test Case 3: pf_throughput_enable_ptp_scalar
+--------------------------------------------
+
+1. Bind the PF ports to the dpdk driver as in common step 1::
+
+    ./usertools/dpdk-devbind.py -b vfio-pci 17:00.0 4b:00.0
+
+2. Start dpdk-testpmd::
+
+    /app/dpdk-testpmd -l 5,6 -n 8 --force-max-simd-bitwidth=64 \
+    -- -i --portmask=0x3 --rxq=1 --txq=1 --txd=1024 --rxd=1024 --forward=io \
+    --nb-cores=1 --enable-rx-timestamp
+
+3. Configure the traffic generator to send traffic as in common step 2.
+
+4. Record the test results as in common step 3.
+
+Test Case 4: pf_throughput_disable_ptp_scalar
+---------------------------------------------
+
+1. Bind the PF ports to the dpdk driver as in common step 1::
+
+    ./usertools/dpdk-devbind.py -b vfio-pci 17:00.0 4b:00.0
+
+2. Start dpdk-testpmd with rx-timestamp disabled::
+
+    /app/dpdk-testpmd -l 5,6 -n 8 --force-max-simd-bitwidth=64 \
+    -- -i --portmask=0x3 --rxq=1 --txq=1 --txd=1024 --rxd=1024 --forward=io \
+    --nb-cores=1
+
+3. Configure the traffic generator to send traffic as in common step 2.
+
+4. Record the test results as in common step 3.
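+
+Note: the throughput in the results table is reported by the traffic generator.
+As an optional DUT-side cross-check, the forwarded rate can also be read from
+testpmd's own statistics; the exact format of the ``show port stats all`` output
+may vary between DPDK versions, so the sketch below is illustrative only::
+
+    import re
+
+    # output captured from the testpmd prompt after "show port stats all"
+    stats_output = """
+      Rx-pps:     14880000
+      Tx-pps:     14880000
+    """
+    rx_pps = [int(v) for v in re.findall(r"Rx-pps:\s+(\d+)", stats_output)]
+    print("DUT reports {:.3f} Mpps received".format(sum(rx_pps) / 1e6))
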
From patchwork Tue Aug 16 09:05:13 2022
X-Patchwork-Submitter: Yingya Han <yingyax.han@intel.com>
X-Patchwork-Id: 115162
From: Yingya Han <yingyax.han@intel.com>
To: dts@dpdk.org
Cc: Yingya Han <yingyax.han@intel.com>
Subject: [dts][PATCH V1 2/3] conf: add rx_timestamp_perf configuration file
Date: Tue, 16 Aug 2022 17:05:13 +0800
Message-Id: <20220816090513.1752423-1-yingyax.han@intel.com>

Signed-off-by: Yingya Han <yingyax.han@intel.com>
---
 conf/rx_timestamp_perf.cfg | 11 +++++++++++
 1 file changed, 11 insertions(+)
 create mode 100644 conf/rx_timestamp_perf.cfg

diff --git a/conf/rx_timestamp_perf.cfg b/conf/rx_timestamp_perf.cfg
new file mode 100644
index 00000000..e76c805c
--- /dev/null
+++ b/conf/rx_timestamp_perf.cfg
@@ -0,0 +1,11 @@
+[suite]
+test_duration = 15
+traffic_stop_wait_time = 2
+test_parameters = {
+    '1C/1T-1Q': ['64', '128', '256', '512', '1024', '1280', '1518',],
+    '1C/2T-2Q': ['64', '128', '256', '512', '1024', '1280', '1518',],
+    '2C/2T-2Q': ['64', '128', '256', '512', '1024', '1280', '1518',],
+    '2C/4T-4Q': ['64', '128', '256', '512', '1024', '1280', '1518',],
+    '4C/4T-4Q': ['64', '128', '256', '512', '1024', '1280', '1518',],
+    '4C/8T-8Q': ['64', '128', '256', '512', '1024', '1280', '1518',],
+    '8C/8T-8Q': ['64', '128', '256', '512', '1024', '1280', '1518',],}
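
The [n]C/[m]T-[i]Q keys above select the forwarding core/thread/queue layout for
each run and the frame sizes swept for it; test_duration is the number of seconds
each trial transmits and traffic_stop_wait_time the settle time after traffic stops.
The suite (patch 3/3) turns each key into testpmd arguments in parse_test_config();
a rough stand-alone sketch of that mapping, assuming the script's core_offset of 3::

    import re

    def parse_config(config, core_offset=3):
        """Map e.g. '2C/4T-4Q' to (DTS core spec, threads per core, queues per port)."""
        cores, threads, queues = re.findall(r"(.*)/(.*)-(.*)", config)[0]
        threads_per_core = int(threads[:-1]) // int(cores[:-1])
        core_spec = "1S/{}C/{}T".format(core_offset + int(cores[:-1]), threads_per_core)
        return core_spec, threads_per_core, int(queues[:-1])

    print(parse_config("1C/1T-1Q"))   # ('1S/4C/1T', 1, 1) -> --rxq=1 --txq=1
    print(parse_config("2C/4T-4Q"))   # ('1S/5C/2T', 2, 4) -> --rxq=4 --txq=4
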
From patchwork Tue Aug 16 09:05:25 2022
X-Patchwork-Submitter: Yingya Han <yingyax.han@intel.com>
X-Patchwork-Id: 115164
From: Yingya Han <yingyax.han@intel.com>
To: dts@dpdk.org
Cc: Yingya Han <yingyax.han@intel.com>
Subject: [dts][PATCH V1 3/3] tests: add rx_timestamp_perf test script
Date: Tue, 16 Aug 2022 17:05:25 +0800
Message-Id: <20220816090525.1752488-1-yingyax.han@intel.com>

Signed-off-by: Yingya Han <yingyax.han@intel.com>
---
 tests/TestSuite_rx_timestamp_perf.py | 416 +++++++++++++++++++++++++++
 1 file changed, 416 insertions(+)
 create mode 100644 tests/TestSuite_rx_timestamp_perf.py

diff --git a/tests/TestSuite_rx_timestamp_perf.py b/tests/TestSuite_rx_timestamp_perf.py
new file mode 100644
index 00000000..ce1c8d16
--- /dev/null
+++ b/tests/TestSuite_rx_timestamp_perf.py
@@ -0,0 +1,416 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2022 Intel Corporation
+#
+
+"""
+DPDK Test suite.
+Rx timestamp performance test suite.
+"""
+
+import os
+import re
+import time
+from copy import deepcopy
+
+import framework.utils as utils
+from framework.exception import VerifyFailure
+from framework.packet import Packet
+from framework.pktgen import TRANSMIT_CONT
+from framework.settings import HEADER_SIZE
+from framework.test_case import TestCase
+from framework.utils import convert_int2ip, convert_ip2int
+
+
+class TestRxTimestampPerf(TestCase):
+    #
+    # Test cases.
+    #
+    def set_up_all(self):
+        """
+        Run at the start of each test suite.
+        """
+        self.verify(
+            self.nic
+            in ["ICE_100G-E810C_QSFP", "ICE_25G-E810C_SFP", "ICE_25G-E810_XXV_SFP"],
+            "NIC Unsupported: " + str(self.nic),
+        )
+        self.dut_ports = self.dut.get_ports(self.nic)
+        self.verify(len(self.dut_ports) >= 1, "At least 1 port is required to test")
+        # get socket and cores
+        self.socket = self.dut.get_numa_id(self.dut_ports[0])
+        cores = self.dut.get_core_list("1S/6C/1T", socket=self.socket)
+        self.verify(cores, "Requested 6 cores failed")
+        self.core_offset = 3
+        self.test_content = self.get_test_content_from_cfg(self.get_suite_cfg())
+
+    def set_up(self):
+        """
+        Run before each test case.
+        """
+        self.test_result = {"header": [], "data": []}
+        self.vf_port_info = {}
+
+    def flows(self):
+        """
+        Return the list of IPv4 source subnets used to build the test flows.
+        """
+        return [
+            "198.18.0.0/24",
+            "198.18.1.0/24",
+            "198.18.2.0/24",
+            "198.18.3.0/24",
+            "198.18.4.0/24",
+            "198.18.5.0/24",
+            "198.18.6.0/24",
+            "198.18.7.0/24",
+        ]
+
+    def parse_test_config(self, config):
+        """
+        Parse a test config of the form [n]C/[m]T-[i]Q.
+        n: number of physical cores used for polling.
+        m: number of CPU threads used for polling; if Hyper-Threading is
+           disabled in the BIOS, m equals n, if enabled, m is 2 times n.
+        i: number of queues used per port, so total queues = i x nb_port.
+        """
+        pat = "(.*)/(.*)-(.*)"
+        result = re.findall(pat, config)
+        if not result:
+            msg = f"{config} is wrong format, please check"
+            raise VerifyFailure(msg)
+        cores, threads, queue = result[0]
+        _thread_num = int(int(threads[:-1]) // int(cores[:-1]))
+        _thread = str(_thread_num) + "T"
+        _cores = str(self.core_offset + int(cores[:-1])) + "C"
+        cores_config = "/".join(["1S", _cores, _thread])
+        queues_per_port = int(queue[:-1])
+        return cores_config, _thread_num, queues_per_port
+
+    def get_test_configs(self, test_parameters):
+        configs = []
+        frame_sizes_grp = []
+        for test_item, frame_sizes in sorted(test_parameters.items()):
+            _frame_sizes = [int(frame_size) for frame_size in frame_sizes]
+            frame_sizes_grp.extend([int(item) for item in _frame_sizes])
+            cores, thread_num, queues = self.parse_test_config(test_item)
+            corelist = self.dut.get_core_list(cores, self.socket)
+            core_list = corelist[(self.core_offset - 1) * thread_num:]
+            if "2T" in cores:
+                core_list = core_list[1:2] + core_list[0::2] + core_list[1::2][1:]
+            _core_list = core_list[thread_num - 1:]
+            configs.append(
+                [
+                    test_item,
+                    _core_list,
+                    [
+                        " --txd=1024 --rxd=1024"
+                        + " --rxq={0} --txq={0}".format(queues)
+                        + " --nb-cores={}".format(len(core_list) - thread_num)
+                    ],
+                ]
+            )
+        return configs, sorted(set(frame_sizes_grp))
+
+    def get_test_content_from_cfg(self, test_content):
+        test_content["flows"] = self.flows()
+        configs, frame_sizes = self.get_test_configs(test_content["test_parameters"])
+        test_content["configs"] = configs
+        test_content["frame_sizes"] = frame_sizes
+        return test_content
+
+    def get_mac_layer(self, port_id, mode):
+        smac = "02:00:00:00:00:0%d" % port_id
+        dmac = "52:00:00:00:00:0%d" % port_id
+        if mode == "vf":
+            dmac = self.vf_port_info[port_id]["vf_mac"]
+        layer = {
+            "ether": {
+                "dst": dmac,
+                "src": smac,
+            },
+        }
+        return layer
+
+    def get_ipv4_config(self, config):
+        netaddr, mask = config.split("/")
+        ip_range = int("1" * (32 - int(mask)), 2)
+        start_ip = convert_int2ip(convert_ip2int(netaddr) + 1)
+        end_ip = convert_int2ip(convert_ip2int(start_ip) + ip_range - 1)
+        layers = {
+            "ipv4": {
+                "src": start_ip,
+            },
+        }
+        fields_config = {
+            "ip": {
+                "src": {
+                    "start": start_ip,
+                    "end": end_ip,
+                    "step": 1,
+                    "action": "random",
+                },
+            },
+        }
+        return layers, fields_config
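+
+    # Illustrative note: for the default flows above, e.g. "198.18.0.0/24",
+    # get_ipv4_config() makes the generator randomize the source IP over
+    # 198.18.0.1 - 198.18.0.255, while get_mac_layer() supplies the destination
+    # MAC (the fixed DUT port MAC for PF runs, the configured VF MAC for VF runs).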
+
+    def preset_flows_configs(self, mode):
+        flows = self.test_content.get("flows")
+        flows_configs = []
+        for index, config in enumerate(flows):
+            if index >= len(self.dut_ports):
+                break
+            port_id = self.dut_ports[index]
+            _layer = self.get_mac_layer(port_id, mode)
+            _layer2, fields_config = self.get_ipv4_config(config)
+            _layer.update(_layer2)
+            flows_configs.append([_layer, fields_config])
+        return flows_configs
+
+    def preset_streams(self, mode):
+        frame_sizes = self.test_content.get("frame_sizes")
+        test_streams = {}
+        flows_configs = self.preset_flows_configs(mode)
+        for frame_size in frame_sizes:
+            for flow_config in flows_configs:
+                _layers, fields_config = flow_config
+                pkt = self.config_stream(_layers, frame_size)
+                test_streams.setdefault(frame_size, []).append([pkt, fields_config])
+        return test_streams
+
+    def vf_create(self):
+        """
+        Create 1 VF from each PF port; the PFs stay on the kernel or dpdk driver.
+        """
+        # set vf assign method and vf driver
+        vf_driver = self.test_content.get("vf_driver")
+        if vf_driver is None:
+            vf_driver = self.drivername
+        for port_id in self.dut_ports:
+            pf_driver = self.dut.ports_info[port_id]["port"].default_driver
+            self.dut.generate_sriov_vfs_by_port(port_id, 1, driver=pf_driver)
+            pf_pci = self.dut.ports_info[port_id]["port"].pci
+            sriov_vfs_port = self.dut.ports_info[port_id].get("vfs_port")
+            if not sriov_vfs_port:
+                msg = f"failed to create vf on dut port {pf_pci}"
+                self.logger.error(msg)
+                continue
+            # set vf mac address.
+            vf_mac = "00:12:34:56:78:0%d" % (port_id + 1)
+            self.vf_port_info[port_id] = {
+                "pf_pci": pf_pci,
+                "vf_pci": self.dut.ports_info[port_id]["port"].get_sriov_vfs_pci(),
+                "vf_mac": vf_mac,
+            }
+            self.dut.ports_info[port_id]["port"].set_vf_mac_addr(mac=vf_mac)
+            # bind vf to vf driver
+            try:
+                for port in sriov_vfs_port:
+                    port.bind_driver(driver=vf_driver)
+            except Exception as e:
+                self.vf_destroy()
+                raise Exception(e)
+
+    def vf_destroy(self):
+        if not self.vf_port_info:
+            return
+        for port_id, _ in self.vf_port_info.items():
+            self.dut.destroy_sriov_vfs_by_port(port_id)
+            self.dut.ports_info[port_id]["port"].bind_driver(self.drivername)
+        self.vf_port_info = None
+
+    def config_stream(self, layers, frame_size):
+        """
+        Prepare one traffic stream of the given frame size.
+        """
+        headers_size = sum([HEADER_SIZE[x] for x in ["eth", "ip"]])
+        payload_size = frame_size - headers_size
+        # set streams for traffic
+        pkt_config = {
+            "type": "IP_RAW",
+            "pkt_layers": {"raw": {"payload": ["58"] * payload_size}},
+        }
+        pkt_config["pkt_layers"].update(layers)
+        pkt_type = pkt_config.get("type")
+        pkt_layers = pkt_config.get("pkt_layers")
+        pkt = Packet(pkt_type=pkt_type)
+        for layer in list(pkt_layers.keys()):
+            pkt.config_layer(layer, pkt_layers[layer])
+        return pkt.pktgen.pkt
+
+    def add_stream_to_pktgen(self, streams, option):
+        # pair tester ports so that each stream is sent to its peer port
+        def port(index):
+            p = self.tester.get_local_port(self.dut_ports[index])
+            return p
+
+        topos = (
+            [
+                [port(index), port(index - 1)]
+                if index % 2
+                else [port(index), port(index + 1)]
+                for index, _ in enumerate(self.dut_ports)
+            ]
+            if len(self.dut_ports) > 1
+            else [[port(0), port(0)]]
+        )
+        stream_ids = []
+        step = int(len(streams) / len(self.dut_ports))
+        for cnt, stream in enumerate(streams):
+            pkt, fields_config = stream
+            index = cnt // step
+            txport, rxport = topos[index]
+            _option = deepcopy(option)
+            _option["pcap"] = pkt
+            if fields_config:
+                _option["fields_config"] = fields_config
+            stream_id = self.tester.pktgen.add_stream(txport, rxport, pkt)
+            self.tester.pktgen.config_stream(stream_id, _option)
+            stream_ids.append(stream_id)
+        return stream_ids
+
+    def start_testpmd(self, eal_para, eal):
+        bin = os.path.join(self.dut.base_dir, self.dut.apps_name["test-pmd"])
+        command_line = (
+            "{bin} "
+            "{eal_para}"
+            " --force-max-simd-bitwidth=64 "
+            "-- -i "
+            "--portmask {port_mask} "
+            "{config} "
+            ""
+        ).format(
+            **{
+                "bin": bin,
+                "eal_para": eal_para,
+                "port_mask": utils.create_mask(self.dut_ports),
+                "config": eal,
+            }
+        )
+        self.dut.send_expect(command_line, "testpmd>", 60)
+        self.dut.send_expect("start", "testpmd> ", 15)
+
+    def throughput(self, frame_size):
+        streams = self.stream.get(frame_size)
+        # set traffic option
+        duration = self.test_content.get("test_duration")
+        traffic_stop_wait_time = self.test_content.get("traffic_stop_wait_time", 0)
+        # clear streams before adding new streams
+        self.tester.pktgen.clear_streams()
+        # set the streams into pktgen
+        stream_option = {
+            "stream_config": {
+                "txmode": {},
+                "transmit_mode": TRANSMIT_CONT,
+                "rate": 100,
+            }
+        }
+        traffic_option = {
+            "method": "throughput",
+            "duration": duration,
+        }
+        stream_ids = self.add_stream_to_pktgen(streams, stream_option)
+        # run packet generator
+        result = self.tester.pktgen.measure(stream_ids, traffic_option)
+        time.sleep(traffic_stop_wait_time)
+        # statistics result
+        _, pps = result
+        self.verify(pps > 0, "No traffic detected")
+        self.logger.info(
+            "Throughput of "
+            + "framesize: {}, is: {} Mpps".format(frame_size, pps / 1000000)
+        )
+        return result
+
+    def get_port_allowlist(self, port_list=None):
+        allowlist = []
+        if port_list:
+            for port_id in port_list:
+                pci = self.dut.ports_info[port_id]["port"].pci
+                allowlist.append(pci)
+        else:
+            for port_id in self.dut_ports:
+                allowlist.append(self.vf_port_info[port_id]["vf_pci"][0])
+        return allowlist
+
+    def display_result(self, datas):
+        # display result table
+        header_row = ["Fwd Core", "Frame Size", "Throughput", "Rate"]
+        self.test_result["header"] = header_row
+        self.result_table_create(header_row)
+        self.test_result["data"] = []
+        for data in datas:
+            config, frame_size, result = data
+            _, pps = result
+            pps /= 1000000.0
+            linerate = self.wirespeed(self.nic, frame_size, len(self.dut_ports))
+            percentage = pps * 100 / linerate
+            data_row = [
+                config,
+                frame_size,
+                "{:.3f} Mpps".format(pps),
+                "{:.3f}%".format(percentage),
+            ]
+            self.result_table_add(data_row)
+            self.test_result["data"].append(data_row)
+        self.result_table_print()
+
+    def perf_test(self, mode="", param="disable"):
+        """
+        Benchmarking test.
+        """
+        self.stream = self.preset_streams(mode)
+        # ports allow list
+        if mode == "vf":
+            port_allowlist = self.get_port_allowlist()
+            fwd_mode = " --forward=mac"
+        else:
+            port_allowlist = self.get_port_allowlist(self.dut_ports)
+            fwd_mode = " --forward=io"
+        results = []
+        for config, core_list, eal in self.test_content["configs"]:
+            self.logger.info(
+                "Executing test using cores: {0} of config {1}".format(
+                    core_list, config
+                )
+            )
+            eal_para = self.dut.create_eal_parameters(
+                cores=core_list, ports=port_allowlist, socket=self.socket
+            )
+            eal = eal[0] + fwd_mode
+            if "enable" == param:
+                eal += " --enable-rx-timestamp"
+            self.start_testpmd(eal_para, eal)
+            for frame_size in self.test_content["frame_sizes"]:
+                self.logger.info("Test running at framesize: {}".format(frame_size))
+                result = self.throughput(frame_size)
+                if result:
+                    results.append([config, frame_size, result])
+            self.dut.send_expect("stop", "testpmd> ", 15)
+            self.dut.send_expect("quit", "# ", 15)
+        self.display_result(results)
+
+    def test_perf_iavf_throughput_enable_ptp_scalar(self):
+        self.vf_create()
+        self.perf_test(mode="vf", param="enable")
+
+    def test_perf_iavf_throughput_disable_ptp_scalar(self):
+        self.vf_create()
+        self.perf_test(mode="vf", param="disable")
+
+    def test_perf_pf_throughput_enable_ptp_scalar(self):
+        self.perf_test(mode="pf", param="enable")
+
+    def test_perf_pf_throughput_disable_ptp_scalar(self):
+        self.perf_test(mode="pf", param="disable")
+
+    def tear_down(self):
+        """
+        Run after each test case.
+        """
+        self.vf_destroy()
+
+    def tear_down_all(self):
+        """
+        Run after each test suite.
+        """
+        self.dut.kill_all()