From patchwork Fri Apr 19 08:51:08 2024
X-Patchwork-Submitter: Juraj Linkeš
X-Patchwork-Id: 139541
X-Patchwork-Delegate: thomas@monjalon.net
From: Juraj Linkeš
To: thomas@monjalon.net, Honnappa.Nagarahalli@arm.com, jspewock@iol.unh.edu, probb@iol.unh.edu, paul.szczepanek@arm.com, Luca.Vizzarro@arm.com, npratte@iol.unh.edu
Cc: dev@dpdk.org, Juraj Linkeš
Subject: [PATCH v1] dts: rename execution to test run
Date: Fri, 19 Apr 2024 10:51:08 +0200
Message-Id: <20240419085108.97519-1-juraj.linkes@pantheon.tech>
X-Mailer: git-send-email 2.34.1

The configuration containing the combination of:

1. what testbed to use,
2. which tests to run,
3. and what build targets to test

is called an execution. This is confusing, since we're already using the exact
same term to describe other things, and "execution" does not capture the three
items listed above well. A new term is thus needed to describe the configuration.
"Test run" is much less confusing and better captures what the configuration
contains.

Signed-off-by: Juraj Linkeš
Reviewed-by: Jeremy Spewock
Reviewed-by: Luca Vizzarro
---
 doc/guides/tools/dts.rst                   |  10 +-
 dts/conf.yaml                              |   8 +-
 dts/framework/config/__init__.py           |  46 +++---
 dts/framework/config/conf_yaml_schema.json |   6 +-
 dts/framework/config/types.py              |   8 +-
 dts/framework/logger.py                    |  16 +-
 dts/framework/runner.py                    | 170 +++++++++++----------
 dts/framework/test_result.py               |  70 ++++-----
 dts/framework/testbed_model/node.py        |  38 ++---
 dts/tests/TestSuite_pmd_buffer_scatter.py  |   2 +-
 10 files changed, 193 insertions(+), 181 deletions(-)

diff --git a/doc/guides/tools/dts.rst b/doc/guides/tools/dts.rst
index d1c3c2af7a..c34f750c96 100644
--- a/doc/guides/tools/dts.rst
+++ b/doc/guides/tools/dts.rst
@@ -198,7 +198,7 @@ and then run the tests with the newly built binaries.
 Configuring DTS
 ~~~~~~~~~~~~~~~
 
-DTS configuration is split into nodes and executions and build targets within executions,
+DTS configuration is split into nodes and test runs and build targets within test runs,
 and follows a defined schema as described in `Configuration Schema`_.
 By default, DTS will try to use the ``dts/conf.yaml`` :ref:`config file `,
 which is a template that illustrates what can be configured in DTS.
@@ -537,12 +537,12 @@ _`Test target`
 Properties
 ~~~~~~~~~~
 
-The configuration requires listing all the execution environments and nodes
+The configuration requires listing all the test run environments and nodes
 involved in the testing. These can be defined with the following mappings:
 
-``executions``
+``test runs``
    `sequence `_ listing
-   the execution environments. Each entry is described as per the following
+   the test run environments. Each entry is described as per the following
    `mapping `_:
 
   +----------------------------+-------------------------------------------------------------------+
@@ -637,4 +637,4 @@ And they both have two network ports which are physically connected to each othe
 ..
literalinclude:: ../../../dts/conf.yaml :language: yaml - :start-at: executions: + :start-at: test_runs: diff --git a/dts/conf.yaml b/dts/conf.yaml index 8068345dd5..7d9173fe7c 100644 --- a/dts/conf.yaml +++ b/dts/conf.yaml @@ -2,8 +2,8 @@ # Copyright 2022-2023 The DPDK contributors # Copyright 2023 Arm Limited -executions: - # define one execution environment +test_runs: + # define one test run environment - build_targets: - arch: x86_64 os: linux @@ -20,9 +20,9 @@ executions: # The machine running the DPDK test executable system_under_test_node: node_name: "SUT 1" - vdevs: # optional; if removed, vdevs won't be used in the execution + vdevs: # optional; if removed, vdevs won't be used in the test run - "crypto_openssl" - # Traffic generator node to use for this execution environment + # Traffic generator node to use for this test run traffic_generator_node: "TG 1" nodes: # Define a system under test node, having two network ports physically diff --git a/dts/framework/config/__init__.py b/dts/framework/config/__init__.py index 4cb5c74059..5faad1bf50 100644 --- a/dts/framework/config/__init__.py +++ b/dts/framework/config/__init__.py @@ -16,7 +16,7 @@ The test run configuration has two main sections: - * The :class:`ExecutionConfiguration` which defines what tests are going to be run + * The :class:`TestRunConfiguration` which defines what tests are going to be run and how DPDK will be built. It also references the testbed where these tests and DPDK are going to be run, * The nodes of the testbed are defined in the other section, @@ -46,9 +46,9 @@ from framework.config.types import ( BuildTargetConfigDict, ConfigurationDict, - ExecutionConfigDict, NodeConfigDict, PortConfigDict, + TestRunConfigDict, TestSuiteConfigDict, TrafficGeneratorConfigDict, ) @@ -428,8 +428,8 @@ def from_dict( @dataclass(slots=True, frozen=True) -class ExecutionConfiguration: - """The configuration of an execution. +class TestRunConfiguration: + """The configuration of a test run. The configuration contains testbed information, what tests to execute and with what DPDK build. @@ -440,8 +440,8 @@ class ExecutionConfiguration: func: Whether to run functional tests. skip_smoke_tests: Whether to skip smoke tests. test_suites: The names of test suites and/or test cases to execute. - system_under_test_node: The SUT node to use in this execution. - traffic_generator_node: The TG node to use in this execution. + system_under_test_node: The SUT node to use in this test run. + traffic_generator_node: The TG node to use in this test run. vdevs: The names of virtual devices to test. """ @@ -456,9 +456,9 @@ class ExecutionConfiguration: @staticmethod def from_dict( - d: ExecutionConfigDict, + d: TestRunConfigDict, node_map: dict[str, Union[SutNodeConfiguration | TGNodeConfiguration]], - ) -> "ExecutionConfiguration": + ) -> "TestRunConfiguration": """A convenience method that processes the inputs before creating an instance. The build target and the test suite config are transformed into their respective objects. @@ -466,11 +466,11 @@ def from_dict( are just stored. Args: - d: The configuration dictionary. + d: The test run configuration dictionary. node_map: A dictionary mapping node names to their config objects. Returns: - The execution configuration instance. + The test run configuration instance. 
""" build_targets: list[BuildTargetConfiguration] = list( map(BuildTargetConfiguration.from_dict, d["build_targets"]) @@ -478,14 +478,14 @@ def from_dict( test_suites: list[TestSuiteConfig] = list(map(TestSuiteConfig.from_dict, d["test_suites"])) sut_name = d["system_under_test_node"]["node_name"] skip_smoke_tests = d.get("skip_smoke_tests", False) - assert sut_name in node_map, f"Unknown SUT {sut_name} in execution {d}" + assert sut_name in node_map, f"Unknown SUT {sut_name} in test run {d}" system_under_test_node = node_map[sut_name] assert isinstance( system_under_test_node, SutNodeConfiguration ), f"Invalid SUT configuration {system_under_test_node}" tg_name = d["traffic_generator_node"] - assert tg_name in node_map, f"Unknown TG {tg_name} in execution {d}" + assert tg_name in node_map, f"Unknown TG {tg_name} in test run {d}" traffic_generator_node = node_map[tg_name] assert isinstance( traffic_generator_node, TGNodeConfiguration @@ -494,7 +494,7 @@ def from_dict( vdevs = ( d["system_under_test_node"]["vdevs"] if "vdevs" in d["system_under_test_node"] else [] ) - return ExecutionConfiguration( + return TestRunConfiguration( build_targets=build_targets, perf=d["perf"], func=d["func"], @@ -505,7 +505,7 @@ def from_dict( vdevs=vdevs, ) - def copy_and_modify(self, **kwargs) -> "ExecutionConfiguration": + def copy_and_modify(self, **kwargs) -> "TestRunConfiguration": """Create a shallow copy with any of the fields modified. The only new data are those passed to this method. @@ -513,10 +513,10 @@ def copy_and_modify(self, **kwargs) -> "ExecutionConfiguration": Args: **kwargs: The names and types of keyword arguments are defined - by the fields of the :class:`ExecutionConfiguration` class. + by the fields of the :class:`TestRunConfiguration` class. Returns: - The copied and modified execution configuration. + The copied and modified test run configuration. """ new_config = {} for field in fields(self): @@ -525,7 +525,7 @@ def copy_and_modify(self, **kwargs) -> "ExecutionConfiguration": else: new_config[field.name] = getattr(self, field.name) - return ExecutionConfiguration(**new_config) + return TestRunConfiguration(**new_config) @dataclass(slots=True, frozen=True) @@ -533,13 +533,13 @@ class Configuration: """DTS testbed and test configuration. The node configuration is not stored in this object. Rather, all used node configurations - are stored inside the execution configuration where the nodes are actually used. + are stored inside the test run configuration where the nodes are actually used. Attributes: - executions: Execution configurations. + test_runs: Test run configurations. 
""" - executions: list[ExecutionConfiguration] + test_runs: list[TestRunConfiguration] @staticmethod def from_dict(d: ConfigurationDict) -> "Configuration": @@ -563,11 +563,11 @@ def from_dict(d: ConfigurationDict) -> "Configuration": node_map = {node.name: node for node in nodes} assert len(nodes) == len(node_map), "Duplicate node names are not allowed" - executions: list[ExecutionConfiguration] = list( - map(ExecutionConfiguration.from_dict, d["executions"], [node_map for _ in d]) + test_runs: list[TestRunConfiguration] = list( + map(TestRunConfiguration.from_dict, d["test_runs"], [node_map for _ in d]) ) - return Configuration(executions=executions) + return Configuration(test_runs=test_runs) def load_config(config_file_path: Path) -> Configuration: diff --git a/dts/framework/config/conf_yaml_schema.json b/dts/framework/config/conf_yaml_schema.json index 4731f4511d..e616a9a0eb 100644 --- a/dts/framework/config/conf_yaml_schema.json +++ b/dts/framework/config/conf_yaml_schema.json @@ -322,7 +322,7 @@ }, "minimum": 1 }, - "executions": { + "test_runs": { "type": "array", "items": { "type": "object", @@ -366,7 +366,7 @@ "$ref": "#/definitions/node_name" }, "vdevs": { - "description": "Optional list of names of vdevs to be used in execution", + "description": "Optional list of names of vdevs to be used in the test run", "type": "array", "items": { "type": "string" @@ -395,7 +395,7 @@ } }, "required": [ - "executions", + "test_runs", "nodes" ], "additionalProperties": false diff --git a/dts/framework/config/types.py b/dts/framework/config/types.py index 1927910d88..eb3e6bb1d9 100644 --- a/dts/framework/config/types.py +++ b/dts/framework/config/types.py @@ -95,7 +95,7 @@ class TestSuiteConfigDict(TypedDict): cases: list[str] -class ExecutionSUTConfigDict(TypedDict): +class TestRunSUTConfigDict(TypedDict): """Allowed keys and values.""" #: @@ -104,7 +104,7 @@ class ExecutionSUTConfigDict(TypedDict): vdevs: list[str] -class ExecutionConfigDict(TypedDict): +class TestRunConfigDict(TypedDict): """Allowed keys and values.""" #: @@ -118,7 +118,7 @@ class ExecutionConfigDict(TypedDict): #: test_suites: TestSuiteConfigDict #: - system_under_test_node: ExecutionSUTConfigDict + system_under_test_node: TestRunSUTConfigDict #: traffic_generator_node: str @@ -129,4 +129,4 @@ class ConfigurationDict(TypedDict): #: nodes: list[NodeConfigDict] #: - executions: list[ExecutionConfigDict] + test_runs: list[TestRunConfigDict] diff --git a/dts/framework/logger.py b/dts/framework/logger.py index fc6c50c983..a219de70e3 100644 --- a/dts/framework/logger.py +++ b/dts/framework/logger.py @@ -29,23 +29,23 @@ class DtsStage(StrEnum): """The DTS execution stage.""" #: - pre_execution = auto() + pre_run = auto() #: - execution_setup = auto() - #: - execution_teardown = auto() + test_run_setup = auto() #: build_target_setup = auto() #: - build_target_teardown = auto() - #: test_suite_setup = auto() #: test_suite = auto() #: test_suite_teardown = auto() #: - post_execution = auto() + build_target_teardown = auto() + #: + test_run_teardown = auto() + #: + post_run = auto() class DTSLogger(logging.Logger): @@ -59,7 +59,7 @@ class DTSLogger(logging.Logger): a new stage switch occurs. This is useful mainly for logging per test suite. 
""" - _stage: ClassVar[DtsStage] = DtsStage.pre_execution + _stage: ClassVar[DtsStage] = DtsStage.pre_run _extra_file_handlers: list[FileHandler] = [] def __init__(self, *args, **kwargs): diff --git a/dts/framework/runner.py b/dts/framework/runner.py index db8e3ba96b..ba19977d67 100644 --- a/dts/framework/runner.py +++ b/dts/framework/runner.py @@ -7,12 +7,12 @@ The module is responsible for running DTS in a series of stages: - #. Execution stage, + #. Test run stage, #. Build target stage, #. Test suite stage, #. Test case stage. -The execution and build target stages set up the environment before running test suites. +The test run and build target stages set up the environment before running test suites. The test suite stage sets up steps common to all test cases and the test case stage runs test cases individually. """ @@ -29,7 +29,7 @@ from .config import ( BuildTargetConfiguration, Configuration, - ExecutionConfiguration, + TestRunConfiguration, TestSuiteConfig, load_config, ) @@ -44,9 +44,9 @@ from .test_result import ( BuildTargetResult, DTSResult, - ExecutionResult, Result, TestCaseResult, + TestRunResult, TestSuiteResult, TestSuiteWithCases, ) @@ -70,7 +70,7 @@ class DTSRunner: An error occurs in a build target setup. The current build target is aborted, all test suites and their test cases are marked as blocked and the run continues with the next build target. If the errored build target was the last one in the - given execution, the next execution begins. + given test run, the next test run begins. """ _configuration: Configuration @@ -95,24 +95,24 @@ def __init__(self): self._perf_test_case_regex = r"test_perf_" def run(self): - """Run all build targets in all executions from the test run configuration. + """Run all build targets in all test runs from the test run configuration. - Before running test suites, executions and build targets are first set up. - The executions and build targets defined in the test run configuration are iterated over. - The executions define which tests to run and where to run them and build targets define + Before running test suites, test runs and build targets are first set up. + The test runs and build targets defined in the test run configuration are iterated over. + The test runs define which tests to run and where to run them and build targets define the DPDK build setup. - The tests suites are set up for each execution/build target tuple and each discovered + The tests suites are set up for each test run/build target tuple and each discovered test case within the test suite is set up, executed and torn down. After all test cases have been executed, the test suite is torn down and the next build target will be tested. In order to properly mark test suites and test cases as blocked in case of a failure, we need to have discovered which test suites and test cases to run before any failures - happen. The discovery happens at the earliest point at the start of each execution. + happen. The discovery happens at the earliest point at the start of each test run. All the nested steps look like this: - #. Execution setup + #. Test run setup #. Build target setup @@ -126,7 +126,7 @@ def run(self): #. Build target teardown - #. Execution teardown + #. 
Test run teardown The test cases are filtered according to the specification in the test run configuration and the :option:`--test-suite` command line argument or @@ -139,33 +139,37 @@ def run(self): self._check_dts_python_version() self._result.update_setup(Result.PASS) - # for all Execution sections - for execution in self._configuration.executions: - self._logger.set_stage(DtsStage.execution_setup) + # for all test run sections + for test_run_config in self._configuration.test_runs: + self._logger.set_stage(DtsStage.test_run_setup) self._logger.info( - f"Running execution with SUT '{execution.system_under_test_node.name}'." + f"Running test run with SUT '{test_run_config.system_under_test_node.name}'." ) - execution_result = self._result.add_execution(execution) + test_run_result = self._result.add_test_run(test_run_config) # we don't want to modify the original config, so create a copy - execution_test_suites = list( - SETTINGS.test_suites if SETTINGS.test_suites else execution.test_suites + test_run_test_suites = list( + SETTINGS.test_suites if SETTINGS.test_suites else test_run_config.test_suites ) - if not execution.skip_smoke_tests: - execution_test_suites[:0] = [TestSuiteConfig.from_dict("smoke_tests")] + if not test_run_config.skip_smoke_tests: + test_run_test_suites[:0] = [TestSuiteConfig.from_dict("smoke_tests")] try: test_suites_with_cases = self._get_test_suites_with_cases( - execution_test_suites, execution.func, execution.perf + test_run_test_suites, test_run_config.func, test_run_config.perf ) - execution_result.test_suites_with_cases = test_suites_with_cases + test_run_result.test_suites_with_cases = test_suites_with_cases except Exception as e: self._logger.exception( - f"Invalid test suite configuration found: " f"{execution_test_suites}." + f"Invalid test suite configuration found: " f"{test_run_test_suites}." ) - execution_result.update_setup(Result.FAIL, e) + test_run_result.update_setup(Result.FAIL, e) else: - self._connect_nodes_and_run_execution( - sut_nodes, tg_nodes, execution, execution_result, test_suites_with_cases + self._connect_nodes_and_run_test_run( + sut_nodes, + tg_nodes, + test_run_config, + test_run_result, + test_suites_with_cases, ) except Exception as e: @@ -175,7 +179,7 @@ def run(self): finally: try: - self._logger.set_stage(DtsStage.post_execution) + self._logger.set_stage(DtsStage.post_run) for node in (sut_nodes | tg_nodes).values(): node.close() self._result.update_teardown(Result.PASS) @@ -354,17 +358,17 @@ def _filter_test_cases( return func_test_cases, perf_test_cases - def _connect_nodes_and_run_execution( + def _connect_nodes_and_run_test_run( self, sut_nodes: dict[str, SutNode], tg_nodes: dict[str, TGNode], - execution: ExecutionConfiguration, - execution_result: ExecutionResult, + test_run_config: TestRunConfiguration, + test_run_result: TestRunResult, test_suites_with_cases: Iterable[TestSuiteWithCases], ) -> None: - """Connect nodes, then continue to run the given execution. + """Connect nodes, then continue to run the given test run. - Connect the :class:`SutNode` and the :class:`TGNode` of this `execution`. + Connect the :class:`SutNode` and the :class:`TGNode` of this `test_run_config`. If either has already been connected, it's going to be in either `sut_nodes` or `tg_nodes`, respectively. If not, connect and add the node to the respective `sut_nodes` or `tg_nodes` :class:`dict`. @@ -372,104 +376,110 @@ def _connect_nodes_and_run_execution( Args: sut_nodes: A dictionary storing connected/to be connected SUT nodes. 
tg_nodes: A dictionary storing connected/to be connected TG nodes. - execution: An execution's test run configuration. - execution_result: The execution's result. + test_run_config: A test run configuration. + test_run_result: The test run's result. test_suites_with_cases: The test suites with test cases to run. """ - sut_node = sut_nodes.get(execution.system_under_test_node.name) - tg_node = tg_nodes.get(execution.traffic_generator_node.name) + sut_node = sut_nodes.get(test_run_config.system_under_test_node.name) + tg_node = tg_nodes.get(test_run_config.traffic_generator_node.name) try: if not sut_node: - sut_node = SutNode(execution.system_under_test_node) + sut_node = SutNode(test_run_config.system_under_test_node) sut_nodes[sut_node.name] = sut_node if not tg_node: - tg_node = TGNode(execution.traffic_generator_node) + tg_node = TGNode(test_run_config.traffic_generator_node) tg_nodes[tg_node.name] = tg_node except Exception as e: - failed_node = execution.system_under_test_node.name + failed_node = test_run_config.system_under_test_node.name if sut_node: - failed_node = execution.traffic_generator_node.name + failed_node = test_run_config.traffic_generator_node.name self._logger.exception(f"The Creation of node {failed_node} failed.") - execution_result.update_setup(Result.FAIL, e) + test_run_result.update_setup(Result.FAIL, e) else: - self._run_execution( - sut_node, tg_node, execution, execution_result, test_suites_with_cases + self._run_test_run( + sut_node, tg_node, test_run_config, test_run_result, test_suites_with_cases ) - def _run_execution( + def _run_test_run( self, sut_node: SutNode, tg_node: TGNode, - execution: ExecutionConfiguration, - execution_result: ExecutionResult, + test_run_config: TestRunConfiguration, + test_run_result: TestRunResult, test_suites_with_cases: Iterable[TestSuiteWithCases], ) -> None: - """Run the given execution. + """Run the given test run. - This involves running the execution setup as well as running all build targets - in the given execution. After that, execution teardown is run. + This involves running the test run setup as well as running all build targets + in the given test run. After that, the test run teardown is run. Args: - sut_node: The execution's SUT node. - tg_node: The execution's TG node. - execution: An execution's test run configuration. - execution_result: The execution's result. + sut_node: The test run's SUT node. + tg_node: The test run's TG node. + test_run_config: A test run configuration. + test_run_result: The test run's result. test_suites_with_cases: The test suites with test cases to run. """ - self._logger.info(f"Running execution with SUT '{execution.system_under_test_node.name}'.") - execution_result.add_sut_info(sut_node.node_info) + self._logger.info( + f"Running test run with SUT '{test_run_config.system_under_test_node.name}'." 
+ ) + test_run_result.add_sut_info(sut_node.node_info) try: - sut_node.set_up_execution(execution) - execution_result.update_setup(Result.PASS) + sut_node.set_up_test_run(test_run_config) + test_run_result.update_setup(Result.PASS) except Exception as e: - self._logger.exception("Execution setup failed.") - execution_result.update_setup(Result.FAIL, e) + self._logger.exception("Test run setup failed.") + test_run_result.update_setup(Result.FAIL, e) else: - for build_target in execution.build_targets: - build_target_result = execution_result.add_build_target(build_target) + for build_target_config in test_run_config.build_targets: + build_target_result = test_run_result.add_build_target(build_target_config) self._run_build_target( - sut_node, tg_node, build_target, build_target_result, test_suites_with_cases + sut_node, + tg_node, + build_target_config, + build_target_result, + test_suites_with_cases, ) finally: try: - self._logger.set_stage(DtsStage.execution_teardown) - sut_node.tear_down_execution() - execution_result.update_teardown(Result.PASS) + self._logger.set_stage(DtsStage.test_run_teardown) + sut_node.tear_down_test_run() + test_run_result.update_teardown(Result.PASS) except Exception as e: - self._logger.exception("Execution teardown failed.") - execution_result.update_teardown(Result.FAIL, e) + self._logger.exception("Test run teardown failed.") + test_run_result.update_teardown(Result.FAIL, e) def _run_build_target( self, sut_node: SutNode, tg_node: TGNode, - build_target: BuildTargetConfiguration, + build_target_config: BuildTargetConfiguration, build_target_result: BuildTargetResult, test_suites_with_cases: Iterable[TestSuiteWithCases], ) -> None: """Run the given build target. This involves running the build target setup as well as running all test suites - of the build target's execution. + of the build target's test run. After that, build target teardown is run. Args: - sut_node: The execution's sut node. - tg_node: The execution's tg node. - build_target: A build target's test run configuration. + sut_node: The test run's sut node. + tg_node: The test run's tg node. + build_target_config: A build target's test run configuration. build_target_result: The build target level result object associated with the current build target. test_suites_with_cases: The test suites with test cases to run. """ self._logger.set_stage(DtsStage.build_target_setup) - self._logger.info(f"Running build target '{build_target.name}'.") + self._logger.info(f"Running build target '{build_target_config.name}'.") try: - sut_node.set_up_build_target(build_target) + sut_node.set_up_build_target(build_target_config) self._result.dpdk_version = sut_node.dpdk_version build_target_result.add_build_target_info(sut_node.get_build_target_info()) build_target_result.update_setup(Result.PASS) @@ -505,8 +515,8 @@ def _run_test_suites( in the current build target won't be executed. Args: - sut_node: The execution's SUT node. - tg_node: The execution's TG node. + sut_node: The test run's SUT node. + tg_node: The test run's TG node. build_target_result: The build target level result object associated with the current build target. test_suites_with_cases: The test suites with test cases to run. @@ -545,8 +555,8 @@ def _run_test_suite( Record the setup and the teardown and handle failures. Args: - sut_node: The execution's SUT node. - tg_node: The execution's TG node. + sut_node: The test run's SUT node. + tg_node: The test run's TG node. 
test_suite_result: The test suite level result object associated with the current test suite. test_suite_with_cases: The test suite with test cases to run. diff --git a/dts/framework/test_result.py b/dts/framework/test_result.py index 28f84fd793..dec14108bd 100644 --- a/dts/framework/test_result.py +++ b/dts/framework/test_result.py @@ -7,7 +7,7 @@ The results are recorded in a hierarchical manner: * :class:`DTSResult` contains - * :class:`ExecutionResult` contains + * :class:`TestRunResult` contains * :class:`BuildTargetResult` contains * :class:`TestSuiteResult` contains * :class:`TestCaseResult` @@ -37,8 +37,8 @@ BuildTargetInfo, Compiler, CPUType, - ExecutionConfiguration, NodeInfo, + TestRunConfiguration, TestSuiteConfig, ) from .exception import DTSError, ErrorSeverity @@ -137,8 +137,8 @@ class BaseResult(object): Stores the results of the setup and teardown portions of the corresponding stage. The hierarchical nature of DTS results is captured recursively in an internal list. - A stage is each level in this particular hierarchy (pre-execution or the top-most level, - execution, build target, test suite and test case.) + A stage is each level in this particular hierarchy (pre-run or the top-most level, + test run, build target, test suite and test case.) Attributes: setup_result: The result of the setup of the particular stage. @@ -222,7 +222,7 @@ def add_stats(self, statistics: "Statistics") -> None: class DTSResult(BaseResult): """Stores environment information and test results from a DTS run. - * Execution level information, such as testbed and the test suite list, + * Test run level information, such as testbed and the test suite list, * Build target level information, such as compiler, target OS and cpu, * Test suite and test case results, * All errors that are caught and recorded during DTS execution. @@ -230,7 +230,7 @@ class DTSResult(BaseResult): The information is stored hierarchically. This is the first level of the hierarchy and as such is where the data form the whole hierarchy is collated or processed. - The internal list stores the results of all executions. + The internal list stores the results of all test runs. Attributes: dpdk_version: The DPDK version to record. @@ -257,21 +257,21 @@ def __init__(self, logger: DTSLogger): self._stats_result = None self._stats_filename = os.path.join(SETTINGS.output_dir, "statistics.txt") - def add_execution(self, execution: ExecutionConfiguration) -> "ExecutionResult": - """Add and return the child result (execution). + def add_test_run(self, test_run_config: TestRunConfiguration) -> "TestRunResult": + """Add and return the child result (test run). Args: - execution: The execution's test run configuration. + test_run_config: A test run configuration. Returns: - The execution's result. + The test run's result. """ - result = ExecutionResult(execution) + result = TestRunResult(test_run_config) self.child_results.append(result) return result def add_error(self, error: Exception) -> None: - """Record an error that occurred outside any execution. + """Record an error that occurred outside any test run. Args: error: The exception to record. @@ -314,10 +314,10 @@ def get_return_code(self) -> int: return int(self._return_code) -class ExecutionResult(BaseResult): - """The execution specific result. +class TestRunResult(BaseResult): + """The test run specific result. - The internal list stores the results of all build targets in a given execution. + The internal list stores the results of all build targets in a given test run. 
Attributes: sut_os_name: The operating system of the SUT node. @@ -328,45 +328,47 @@ class ExecutionResult(BaseResult): sut_os_name: str sut_os_version: str sut_kernel_version: str - _config: ExecutionConfiguration + _config: TestRunConfiguration _parent_result: DTSResult _test_suites_with_cases: list[TestSuiteWithCases] - def __init__(self, execution: ExecutionConfiguration): - """Extend the constructor with the execution's config and DTSResult. + def __init__(self, test_run_config: TestRunConfiguration): + """Extend the constructor with the test run's config and DTSResult. Args: - execution: The execution's test run configuration. + test_run_config: A test run configuration. """ - super(ExecutionResult, self).__init__() - self._config = execution + super(TestRunResult, self).__init__() + self._config = test_run_config self._test_suites_with_cases = [] - def add_build_target(self, build_target: BuildTargetConfiguration) -> "BuildTargetResult": + def add_build_target( + self, build_target_config: BuildTargetConfiguration + ) -> "BuildTargetResult": """Add and return the child result (build target). Args: - build_target: The build target's test run configuration. + build_target_config: The build target's test run configuration. Returns: The build target's result. """ result = BuildTargetResult( self._test_suites_with_cases, - build_target, + build_target_config, ) self.child_results.append(result) return result @property def test_suites_with_cases(self) -> list[TestSuiteWithCases]: - """The test suites with test cases to be executed in this execution. + """The test suites with test cases to be executed in this test run. The test suites can only be assigned once. Returns: The list of test suites with test cases. If an error occurs between - the initialization of :class:`ExecutionResult` and assigning test cases to the instance, + the initialization of :class:`TestRunResult` and assigning test cases to the instance, return an empty list, representing that we don't know what to execute. """ return self._test_suites_with_cases @@ -375,7 +377,7 @@ def test_suites_with_cases(self) -> list[TestSuiteWithCases]: def test_suites_with_cases(self, test_suites_with_cases: list[TestSuiteWithCases]) -> None: if self._test_suites_with_cases: raise ValueError( - "Attempted to assign test suites to an execution result " + "Attempted to assign test suites to a test run result " "which already has test suites." ) self._test_suites_with_cases = test_suites_with_cases @@ -422,19 +424,19 @@ class BuildTargetResult(BaseResult): def __init__( self, test_suites_with_cases: list[TestSuiteWithCases], - build_target: BuildTargetConfiguration, + build_target_config: BuildTargetConfiguration, ): - """Extend the constructor with the build target's config and ExecutionResult. + """Extend the constructor with the build target's config and test suites with cases. Args: test_suites_with_cases: The test suites with test cases to be run in this build target. - build_target: The build target's test run configuration. + build_target_config: The build target's test run configuration. 
""" super(BuildTargetResult, self).__init__() - self.arch = build_target.arch - self.os = build_target.os - self.cpu = build_target.cpu - self.compiler = build_target.compiler + self.arch = build_target_config.arch + self.os = build_target_config.os + self.cpu = build_target_config.cpu + self.compiler = build_target_config.compiler self.compiler_version = None self.dpdk_version = None self._test_suites_with_cases = test_suites_with_cases diff --git a/dts/framework/testbed_model/node.py b/dts/framework/testbed_model/node.py index 74061f6262..e2216ba6cb 100644 --- a/dts/framework/testbed_model/node.py +++ b/dts/framework/testbed_model/node.py @@ -19,8 +19,8 @@ from framework.config import ( OS, BuildTargetConfiguration, - ExecutionConfiguration, NodeConfiguration, + TestRunConfiguration, ) from framework.exception import ConfigurationError from framework.logger import DTSLogger, get_dts_logger @@ -65,7 +65,7 @@ class Node(ABC): ports: list[Port] _logger: DTSLogger _other_sessions: list[OSSession] - _execution_config: ExecutionConfiguration + _test_run_config: TestRunConfiguration virtual_devices: list[VirtualDevice] def __init__(self, node_config: NodeConfiguration): @@ -103,40 +103,40 @@ def _init_ports(self) -> None: for port in self.ports: self.configure_port_state(port) - def set_up_execution(self, execution_config: ExecutionConfiguration) -> None: - """Execution setup steps. + def set_up_test_run(self, test_run_config: TestRunConfiguration) -> None: + """Test run setup steps. - Configure hugepages and call :meth:`_set_up_execution` where + Configure hugepages and call :meth:`_set_up_test_run` where the rest of the configuration steps (if any) are implemented. Args: - execution_config: The execution test run configuration according to which + test_run_config: A test run configuration according to which the setup steps will be taken. """ self._setup_hugepages() - self._set_up_execution(execution_config) - self._execution_config = execution_config - for vdev in execution_config.vdevs: + self._set_up_test_run(test_run_config) + self._test_run_config = test_run_config + for vdev in test_run_config.vdevs: self.virtual_devices.append(VirtualDevice(vdev)) - def _set_up_execution(self, execution_config: ExecutionConfiguration) -> None: - """Optional additional execution setup steps for subclasses. + def _set_up_test_run(self, test_run_config: TestRunConfiguration) -> None: + """Optional additional test run setup steps for subclasses. - Subclasses should override this if they need to add additional execution setup steps. + Subclasses should override this if they need to add additional test run setup steps. """ - def tear_down_execution(self) -> None: - """Execution teardown steps. + def tear_down_test_run(self) -> None: + """Test run teardown steps. - There are currently no common execution teardown steps common to all DTS node types. + There are currently no common test run teardown steps common to all DTS node types. """ self.virtual_devices = [] - self._tear_down_execution() + self._tear_down_test_run() - def _tear_down_execution(self) -> None: - """Optional additional execution teardown steps for subclasses. + def _tear_down_test_run(self) -> None: + """Optional additional test run teardown steps for subclasses. - Subclasses should override this if they need to add additional execution teardown steps. + Subclasses should override this if they need to add additional test run teardown steps. 
""" def set_up_build_target(self, build_target_config: BuildTargetConfiguration) -> None: diff --git a/dts/tests/TestSuite_pmd_buffer_scatter.py b/dts/tests/TestSuite_pmd_buffer_scatter.py index 3701c47408..c6d49189fc 100644 --- a/dts/tests/TestSuite_pmd_buffer_scatter.py +++ b/dts/tests/TestSuite_pmd_buffer_scatter.py @@ -52,7 +52,7 @@ def set_up_suite(self) -> None: """Set up the test suite. Setup: - Verify that we have at least 2 port links in the current execution + Verify that we have at least 2 port links in the current test run and increase the MTU of both ports on the traffic generator to 9000 to support larger packet sizes. """