From patchwork Fri Feb 23 07:54:56 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Patchwork-Submitter: =?utf-8?q?Juraj_Linke=C5=A1?=
X-Patchwork-Id: 137085
X-Patchwork-Delegate: thomas@monjalon.net
From: =?utf-8?q?Juraj_Linke=C5=A1?=
To: thomas@monjalon.net, Honnappa.Nagarahalli@arm.com, jspewock@iol.unh.edu, probb@iol.unh.edu, paul.szczepanek@arm.com, Luca.Vizzarro@arm.com
Cc: dev@dpdk.org, =?utf-8?q?Juraj_Linke=C5=A1?=
Subject: [PATCH v3 1/7] dts: convert dts.py methods to class
Date: Fri, 23 Feb 2024 08:54:56 +0100
Message-Id: <20240223075502.60485-2-juraj.linkes@pantheon.tech>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20240223075502.60485-1-juraj.linkes@pantheon.tech>
References: <20231220103331.60888-1-juraj.linkes@pantheon.tech>
<20240223075502.60485-1-juraj.linkes@pantheon.tech> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org The dts.py module deviates from the rest of the code without a clear reason. Converting it into a class and using better naming will improve organization and code readability. Signed-off-by: Juraj Linkeš --- dts/framework/dts.py | 338 ---------------------------------------- dts/framework/runner.py | 333 +++++++++++++++++++++++++++++++++++++++ dts/main.py | 6 +- 3 files changed, 337 insertions(+), 340 deletions(-) delete mode 100644 dts/framework/dts.py create mode 100644 dts/framework/runner.py diff --git a/dts/framework/dts.py b/dts/framework/dts.py deleted file mode 100644 index e16d4578a0..0000000000 --- a/dts/framework/dts.py +++ /dev/null @@ -1,338 +0,0 @@ -# SPDX-License-Identifier: BSD-3-Clause -# Copyright(c) 2010-2019 Intel Corporation -# Copyright(c) 2022-2023 PANTHEON.tech s.r.o. -# Copyright(c) 2022-2023 University of New Hampshire - -r"""Test suite runner module. - -A DTS run is split into stages: - - #. Execution stage, - #. Build target stage, - #. Test suite stage, - #. Test case stage. - -The module is responsible for running tests on testbeds defined in the test run configuration. -Each setup or teardown of each stage is recorded in a :class:`~.test_result.DTSResult` or -one of its subclasses. The test case results are also recorded. - -If an error occurs, the current stage is aborted, the error is recorded and the run continues in -the next iteration of the same stage. The return code is the highest `severity` of all -:class:`~.exception.DTSError`\s. - -Example: - An error occurs in a build target setup. The current build target is aborted and the run - continues with the next build target. If the errored build target was the last one in the given - execution, the next execution begins. - -Attributes: - dts_logger: The logger instance used in this module. - result: The top level result used in the module. -""" - -import sys - -from .config import ( - BuildTargetConfiguration, - ExecutionConfiguration, - TestSuiteConfig, - load_config, -) -from .exception import BlockingTestSuiteError -from .logger import DTSLOG, getLogger -from .test_result import BuildTargetResult, DTSResult, ExecutionResult, Result -from .test_suite import get_test_suites -from .testbed_model import SutNode, TGNode - -# dummy defaults to satisfy linters -dts_logger: DTSLOG = None # type: ignore[assignment] -result: DTSResult = DTSResult(dts_logger) - - -def run_all() -> None: - """Run all build targets in all executions from the test run configuration. - - Before running test suites, executions and build targets are first set up. - The executions and build targets defined in the test run configuration are iterated over. - The executions define which tests to run and where to run them and build targets define - the DPDK build setup. - - The tests suites are set up for each execution/build target tuple and each scheduled - test case within the test suite is set up, executed and torn down. After all test cases - have been executed, the test suite is torn down and the next build target will be tested. - - All the nested steps look like this: - - #. Execution setup - - #. Build target setup - - #. Test suite setup - - #. Test case setup - #. Test case logic - #. Test case teardown - - #. Test suite teardown - - #. 
Build target teardown - - #. Execution teardown - - The test cases are filtered according to the specification in the test run configuration and - the :option:`--test-cases` command line argument or - the :envvar:`DTS_TESTCASES` environment variable. - """ - global dts_logger - global result - - # create a regular DTS logger and create a new result with it - dts_logger = getLogger("DTSRunner") - result = DTSResult(dts_logger) - - # check the python version of the server that run dts - _check_dts_python_version() - - sut_nodes: dict[str, SutNode] = {} - tg_nodes: dict[str, TGNode] = {} - try: - # for all Execution sections - for execution in load_config().executions: - sut_node = sut_nodes.get(execution.system_under_test_node.name) - tg_node = tg_nodes.get(execution.traffic_generator_node.name) - - try: - if not sut_node: - sut_node = SutNode(execution.system_under_test_node) - sut_nodes[sut_node.name] = sut_node - if not tg_node: - tg_node = TGNode(execution.traffic_generator_node) - tg_nodes[tg_node.name] = tg_node - result.update_setup(Result.PASS) - except Exception as e: - failed_node = execution.system_under_test_node.name - if sut_node: - failed_node = execution.traffic_generator_node.name - dts_logger.exception(f"Creation of node {failed_node} failed.") - result.update_setup(Result.FAIL, e) - - else: - _run_execution(sut_node, tg_node, execution, result) - - except Exception as e: - dts_logger.exception("An unexpected error has occurred.") - result.add_error(e) - raise - - finally: - try: - for node in (sut_nodes | tg_nodes).values(): - node.close() - result.update_teardown(Result.PASS) - except Exception as e: - dts_logger.exception("Final cleanup of nodes failed.") - result.update_teardown(Result.ERROR, e) - - # we need to put the sys.exit call outside the finally clause to make sure - # that unexpected exceptions will propagate - # in that case, the error that should be reported is the uncaught exception as - # that is a severe error originating from the framework - # at that point, we'll only have partial results which could be impacted by the - # error causing the uncaught exception, making them uninterpretable - _exit_dts() - - -def _check_dts_python_version() -> None: - """Check the required Python version - v3.10.""" - - def RED(text: str) -> str: - return f"\u001B[31;1m{str(text)}\u001B[0m" - - if sys.version_info.major < 3 or (sys.version_info.major == 3 and sys.version_info.minor < 10): - print( - RED( - ( - "WARNING: DTS execution node's python version is lower than" - "python 3.10, is deprecated and will not work in future releases." - ) - ), - file=sys.stderr, - ) - print(RED("Please use Python >= 3.10 instead"), file=sys.stderr) - - -def _run_execution( - sut_node: SutNode, - tg_node: TGNode, - execution: ExecutionConfiguration, - result: DTSResult, -) -> None: - """Run the given execution. - - This involves running the execution setup as well as running all build targets - in the given execution. After that, execution teardown is run. - - Args: - sut_node: The execution's SUT node. - tg_node: The execution's TG node. - execution: An execution's test run configuration. - result: The top level result object. 
- """ - dts_logger.info(f"Running execution with SUT '{execution.system_under_test_node.name}'.") - execution_result = result.add_execution(sut_node.config) - execution_result.add_sut_info(sut_node.node_info) - - try: - sut_node.set_up_execution(execution) - execution_result.update_setup(Result.PASS) - except Exception as e: - dts_logger.exception("Execution setup failed.") - execution_result.update_setup(Result.FAIL, e) - - else: - for build_target in execution.build_targets: - _run_build_target(sut_node, tg_node, build_target, execution, execution_result) - - finally: - try: - sut_node.tear_down_execution() - execution_result.update_teardown(Result.PASS) - except Exception as e: - dts_logger.exception("Execution teardown failed.") - execution_result.update_teardown(Result.FAIL, e) - - -def _run_build_target( - sut_node: SutNode, - tg_node: TGNode, - build_target: BuildTargetConfiguration, - execution: ExecutionConfiguration, - execution_result: ExecutionResult, -) -> None: - """Run the given build target. - - This involves running the build target setup as well as running all test suites - in the given execution the build target is defined in. - After that, build target teardown is run. - - Args: - sut_node: The execution's SUT node. - tg_node: The execution's TG node. - build_target: A build target's test run configuration. - execution: The build target's execution's test run configuration. - execution_result: The execution level result object associated with the execution. - """ - dts_logger.info(f"Running build target '{build_target.name}'.") - build_target_result = execution_result.add_build_target(build_target) - - try: - sut_node.set_up_build_target(build_target) - result.dpdk_version = sut_node.dpdk_version - build_target_result.add_build_target_info(sut_node.get_build_target_info()) - build_target_result.update_setup(Result.PASS) - except Exception as e: - dts_logger.exception("Build target setup failed.") - build_target_result.update_setup(Result.FAIL, e) - - else: - _run_all_suites(sut_node, tg_node, execution, build_target_result) - - finally: - try: - sut_node.tear_down_build_target() - build_target_result.update_teardown(Result.PASS) - except Exception as e: - dts_logger.exception("Build target teardown failed.") - build_target_result.update_teardown(Result.FAIL, e) - - -def _run_all_suites( - sut_node: SutNode, - tg_node: TGNode, - execution: ExecutionConfiguration, - build_target_result: BuildTargetResult, -) -> None: - """Run the execution's (possibly a subset) test suites using the current build target. - - The function assumes the build target we're testing has already been built on the SUT node. - The current build target thus corresponds to the current DPDK build present on the SUT node. - - If a blocking test suite (such as the smoke test suite) fails, the rest of the test suites - in the current build target won't be executed. - - Args: - sut_node: The execution's SUT node. - tg_node: The execution's TG node. - execution: The execution's test run configuration associated with the current build target. - build_target_result: The build target level result object associated - with the current build target. 
- """ - end_build_target = False - if not execution.skip_smoke_tests: - execution.test_suites[:0] = [TestSuiteConfig.from_dict("smoke_tests")] - for test_suite_config in execution.test_suites: - try: - _run_single_suite(sut_node, tg_node, execution, build_target_result, test_suite_config) - except BlockingTestSuiteError as e: - dts_logger.exception( - f"An error occurred within {test_suite_config.test_suite}. Skipping build target." - ) - result.add_error(e) - end_build_target = True - # if a blocking test failed and we need to bail out of suite executions - if end_build_target: - break - - -def _run_single_suite( - sut_node: SutNode, - tg_node: TGNode, - execution: ExecutionConfiguration, - build_target_result: BuildTargetResult, - test_suite_config: TestSuiteConfig, -) -> None: - """Run all test suite in a single test suite module. - - The function assumes the build target we're testing has already been built on the SUT node. - The current build target thus corresponds to the current DPDK build present on the SUT node. - - Args: - sut_node: The execution's SUT node. - tg_node: The execution's TG node. - execution: The execution's test run configuration associated with the current build target. - build_target_result: The build target level result object associated - with the current build target. - test_suite_config: Test suite test run configuration specifying the test suite module - and possibly a subset of test cases of test suites in that module. - - Raises: - BlockingTestSuiteError: If a blocking test suite fails. - """ - try: - full_suite_path = f"tests.TestSuite_{test_suite_config.test_suite}" - test_suite_classes = get_test_suites(full_suite_path) - suites_str = ", ".join((x.__name__ for x in test_suite_classes)) - dts_logger.debug(f"Found test suites '{suites_str}' in '{full_suite_path}'.") - except Exception as e: - dts_logger.exception("An error occurred when searching for test suites.") - result.update_setup(Result.ERROR, e) - - else: - for test_suite_class in test_suite_classes: - test_suite = test_suite_class( - sut_node, - tg_node, - test_suite_config.test_cases, - execution.func, - build_target_result, - ) - test_suite.run() - - -def _exit_dts() -> None: - """Process all errors and exit with the proper exit code.""" - result.process() - - if dts_logger: - dts_logger.info("DTS execution has ended.") - sys.exit(result.get_return_code()) diff --git a/dts/framework/runner.py b/dts/framework/runner.py new file mode 100644 index 0000000000..acc1c4d6db --- /dev/null +++ b/dts/framework/runner.py @@ -0,0 +1,333 @@ +# SPDX-License-Identifier: BSD-3-Clause +# Copyright(c) 2010-2019 Intel Corporation +# Copyright(c) 2022-2023 PANTHEON.tech s.r.o. +# Copyright(c) 2022-2023 University of New Hampshire + +"""Test suite runner module. + +The module is responsible for running DTS in a series of stages: + + #. Execution stage, + #. Build target stage, + #. Test suite stage, + #. Test case stage. + +The execution and build target stages set up the environment before running test suites. +The test suite stage sets up steps common to all test cases +and the test case stage runs test cases individually. 
+""" + +import logging +import sys + +from .config import ( + BuildTargetConfiguration, + ExecutionConfiguration, + TestSuiteConfig, + load_config, +) +from .exception import BlockingTestSuiteError +from .logger import DTSLOG, getLogger +from .test_result import BuildTargetResult, DTSResult, ExecutionResult, Result +from .test_suite import get_test_suites +from .testbed_model import SutNode, TGNode + + +class DTSRunner: + r"""Test suite runner class. + + The class is responsible for running tests on testbeds defined in the test run configuration. + Each setup or teardown of each stage is recorded in a :class:`~framework.test_result.DTSResult` + or one of its subclasses. The test case results are also recorded. + + If an error occurs, the current stage is aborted, the error is recorded and the run continues in + the next iteration of the same stage. The return code is the highest `severity` of all + :class:`~.framework.exception.DTSError`\s. + + Example: + An error occurs in a build target setup. The current build target is aborted and the run + continues with the next build target. If the errored build target was the last one in the + given execution, the next execution begins. + """ + + _logger: DTSLOG + _result: DTSResult + + def __init__(self): + """Initialize the instance with logger and result.""" + self._logger = getLogger("DTSRunner") + self._result = DTSResult(self._logger) + + def run(self): + """Run all build targets in all executions from the test run configuration. + + Before running test suites, executions and build targets are first set up. + The executions and build targets defined in the test run configuration are iterated over. + The executions define which tests to run and where to run them and build targets define + the DPDK build setup. + + The tests suites are set up for each execution/build target tuple and each discovered + test case within the test suite is set up, executed and torn down. After all test cases + have been executed, the test suite is torn down and the next build target will be tested. + + All the nested steps look like this: + + #. Execution setup + + #. Build target setup + + #. Test suite setup + + #. Test case setup + #. Test case logic + #. Test case teardown + + #. Test suite teardown + + #. Build target teardown + + #. Execution teardown + + The test cases are filtered according to the specification in the test run configuration and + the :option:`--test-cases` command line argument or + the :envvar:`DTS_TESTCASES` environment variable. 
+ """ + sut_nodes: dict[str, SutNode] = {} + tg_nodes: dict[str, TGNode] = {} + try: + # check the python version of the server that runs dts + self._check_dts_python_version() + + # for all Execution sections + for execution in load_config().executions: + sut_node = sut_nodes.get(execution.system_under_test_node.name) + tg_node = tg_nodes.get(execution.traffic_generator_node.name) + + try: + if not sut_node: + sut_node = SutNode(execution.system_under_test_node) + sut_nodes[sut_node.name] = sut_node + if not tg_node: + tg_node = TGNode(execution.traffic_generator_node) + tg_nodes[tg_node.name] = tg_node + self._result.update_setup(Result.PASS) + except Exception as e: + failed_node = execution.system_under_test_node.name + if sut_node: + failed_node = execution.traffic_generator_node.name + self._logger.exception(f"The Creation of node {failed_node} failed.") + self._result.update_setup(Result.FAIL, e) + + else: + self._run_execution(sut_node, tg_node, execution) + + except Exception as e: + self._logger.exception("An unexpected error has occurred.") + self._result.add_error(e) + raise + + finally: + try: + for node in (sut_nodes | tg_nodes).values(): + node.close() + self._result.update_teardown(Result.PASS) + except Exception as e: + self._logger.exception("The final cleanup of nodes failed.") + self._result.update_teardown(Result.ERROR, e) + + # we need to put the sys.exit call outside the finally clause to make sure + # that unexpected exceptions will propagate + # in that case, the error that should be reported is the uncaught exception as + # that is a severe error originating from the framework + # at that point, we'll only have partial results which could be impacted by the + # error causing the uncaught exception, making them uninterpretable + self._exit_dts() + + def _check_dts_python_version(self) -> None: + """Check the required Python version - v3.10.""" + if sys.version_info.major < 3 or ( + sys.version_info.major == 3 and sys.version_info.minor < 10 + ): + self._logger.warning( + "DTS execution node's python version is lower than Python 3.10, " + "is deprecated and will not work in future releases." + ) + self._logger.warning("Please use Python >= 3.10 instead.") + + def _run_execution( + self, + sut_node: SutNode, + tg_node: TGNode, + execution: ExecutionConfiguration, + ) -> None: + """Run the given execution. + + This involves running the execution setup as well as running all build targets + in the given execution. After that, execution teardown is run. + + Args: + sut_node: The execution's SUT node. + tg_node: The execution's TG node. + execution: An execution's test run configuration. 
+ """ + self._logger.info(f"Running execution with SUT '{execution.system_under_test_node.name}'.") + execution_result = self._result.add_execution(sut_node.config) + execution_result.add_sut_info(sut_node.node_info) + + try: + sut_node.set_up_execution(execution) + execution_result.update_setup(Result.PASS) + except Exception as e: + self._logger.exception("Execution setup failed.") + execution_result.update_setup(Result.FAIL, e) + + else: + for build_target in execution.build_targets: + self._run_build_target(sut_node, tg_node, build_target, execution, execution_result) + + finally: + try: + sut_node.tear_down_execution() + execution_result.update_teardown(Result.PASS) + except Exception as e: + self._logger.exception("Execution teardown failed.") + execution_result.update_teardown(Result.FAIL, e) + + def _run_build_target( + self, + sut_node: SutNode, + tg_node: TGNode, + build_target: BuildTargetConfiguration, + execution: ExecutionConfiguration, + execution_result: ExecutionResult, + ) -> None: + """Run the given build target. + + This involves running the build target setup as well as running all test suites + of the build target's execution. + After that, build target teardown is run. + + Args: + sut_node: The execution's sut node. + tg_node: The execution's tg node. + build_target: A build target's test run configuration. + execution: The build target's execution's test run configuration. + execution_result: The execution level result object associated with the execution. + """ + self._logger.info(f"Running build target '{build_target.name}'.") + build_target_result = execution_result.add_build_target(build_target) + + try: + sut_node.set_up_build_target(build_target) + self._result.dpdk_version = sut_node.dpdk_version + build_target_result.add_build_target_info(sut_node.get_build_target_info()) + build_target_result.update_setup(Result.PASS) + except Exception as e: + self._logger.exception("Build target setup failed.") + build_target_result.update_setup(Result.FAIL, e) + + else: + self._run_all_suites(sut_node, tg_node, execution, build_target_result) + + finally: + try: + sut_node.tear_down_build_target() + build_target_result.update_teardown(Result.PASS) + except Exception as e: + self._logger.exception("Build target teardown failed.") + build_target_result.update_teardown(Result.FAIL, e) + + def _run_all_suites( + self, + sut_node: SutNode, + tg_node: TGNode, + execution: ExecutionConfiguration, + build_target_result: BuildTargetResult, + ) -> None: + """Run the execution's (possibly a subset of) test suites using the current build target. + + The method assumes the build target we're testing has already been built on the SUT node. + The current build target thus corresponds to the current DPDK build present on the SUT node. + + Args: + sut_node: The execution's SUT node. + tg_node: The execution's TG node. + execution: The execution's test run configuration associated + with the current build target. + build_target_result: The build target level result object associated + with the current build target. + """ + end_build_target = False + if not execution.skip_smoke_tests: + execution.test_suites[:0] = [TestSuiteConfig.from_dict("smoke_tests")] + for test_suite_config in execution.test_suites: + try: + self._run_single_suite( + sut_node, tg_node, execution, build_target_result, test_suite_config + ) + except BlockingTestSuiteError as e: + self._logger.exception( + f"An error occurred within {test_suite_config.test_suite}. " + "Skipping build target..." 
+ ) + self._result.add_error(e) + end_build_target = True + # if a blocking test failed and we need to bail out of suite executions + if end_build_target: + break + + def _run_single_suite( + self, + sut_node: SutNode, + tg_node: TGNode, + execution: ExecutionConfiguration, + build_target_result: BuildTargetResult, + test_suite_config: TestSuiteConfig, + ) -> None: + """Run all test suites in a single test suite module. + + The method assumes the build target we're testing has already been built on the SUT node. + The current build target thus corresponds to the current DPDK build present on the SUT node. + + Args: + sut_node: The execution's SUT node. + tg_node: The execution's TG node. + execution: The execution's test run configuration associated + with the current build target. + build_target_result: The build target level result object associated + with the current build target. + test_suite_config: Test suite test run configuration specifying the test suite module + and possibly a subset of test cases of test suites in that module. + + Raises: + BlockingTestSuiteError: If a blocking test suite fails. + """ + try: + full_suite_path = f"tests.TestSuite_{test_suite_config.test_suite}" + test_suite_classes = get_test_suites(full_suite_path) + suites_str = ", ".join((x.__name__ for x in test_suite_classes)) + self._logger.debug(f"Found test suites '{suites_str}' in '{full_suite_path}'.") + except Exception as e: + self._logger.exception("An error occurred when searching for test suites.") + self._result.update_setup(Result.ERROR, e) + + else: + for test_suite_class in test_suite_classes: + test_suite = test_suite_class( + sut_node, + tg_node, + test_suite_config.test_cases, + execution.func, + build_target_result, + ) + test_suite.run() + + def _exit_dts(self) -> None: + """Process all errors and exit with the proper exit code.""" + self._result.process() + + if self._logger: + self._logger.info("DTS execution has ended.") + + logging.shutdown() + sys.exit(self._result.get_return_code()) diff --git a/dts/main.py b/dts/main.py index f703615d11..1ffe8ff81f 100755 --- a/dts/main.py +++ b/dts/main.py @@ -21,9 +21,11 @@ def main() -> None: be modified before the settings module is imported anywhere else in the framework. 
""" settings.SETTINGS = settings.get_settings() - from framework import dts - dts.run_all() + from framework.runner import DTSRunner + + dts = DTSRunner() + dts.run() # Main program begins here From patchwork Fri Feb 23 07:54:57 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Juraj_Linke=C5=A1?= X-Patchwork-Id: 137086 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 2957443B9B; Fri, 23 Feb 2024 08:55:22 +0100 (CET) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id A5B3541109; Fri, 23 Feb 2024 08:55:10 +0100 (CET) Received: from mail-ej1-f45.google.com (mail-ej1-f45.google.com [209.85.218.45]) by mails.dpdk.org (Postfix) with ESMTP id 9E92240E25 for ; Fri, 23 Feb 2024 08:55:07 +0100 (CET) Received: by mail-ej1-f45.google.com with SMTP id a640c23a62f3a-a3e75e30d36so108434066b.1 for ; Thu, 22 Feb 2024 23:55:07 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=pantheon.tech; s=google; t=1708674907; x=1709279707; darn=dpdk.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=hy2D+aQY0wT5R5bQd1L65md2e7K9b0WwIpuk8bYCJUo=; b=t/81o33TV2tygvg++Z2eI3m5FJPHh6QTSMgHqXGf2Ws3B12Oyh6iDpXT1aPUfXb72c VX8mu7jEDBIXbxk2N1AL3X9ctRICpLLeL6EpE1pwibO9nfPxDo2RMdiZVODqnjxznt3+ ytnQBYxtm+vGbHDheTNiSnvB1KS3sk0M0ZyNDIolEF6+0pHBvCoPrQh6CIgaAEAv1c/4 pwK3ZYylh3C2TQ0aopfjS1kUPtGZuVcemZunvXvwQWID0mRUYREajWxFLax4urzab2gW yXJyO77ncRvV4nYovKAMCtAyhxnYzE4trqDmbKBcbH6MK/LylfzR8zXKFqQpyXkUosKA 6htA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1708674907; x=1709279707; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=hy2D+aQY0wT5R5bQd1L65md2e7K9b0WwIpuk8bYCJUo=; b=e59E/l5klnGJazNjVR1DaHso3Q74/QbA6onb0abBfjdpTiWUohL6KcGrxVTjvqpL+b Wv9A3n6u+eRtyloxI0yiLyUKOa+OXHOuAIcS09I4SU3jGZ4z+bk4QQtcLkdO94AIu1nx vYQA9rSVgrN5RwmSdEIZB8QtgkiAkH/BtSYpRuLwEx8bW7ueMtoERZLxi6PeN+i3x9EN +OFWa8eHAGIbjLbFG1R9Pl3gyvLTmZC9Iow3Y9fFmqj6JoLM5jsMhGiKkjCQhHxEh3EF 2mKanl9odRWOJHdr7rui13Z1sZn2ZOgWyiqQqao4Z3X4exrkhHbgEcVwqtayddvu5bMn tdaA== X-Gm-Message-State: AOJu0Yw7Wr1M1gD6PTptpw2cEh8zNwu3K0T9GFmsw8MPYifQgb9JZN85 OYgQcfM8dat0kac+OnU+fjGZBVjjyvbDFuNZSdEeZywpVsfKWESskA4anDj3grM= X-Google-Smtp-Source: AGHT+IF+RVf0qpCAkaEEcrL3namqVuLK+J4eoQQBCsmK63jlbX9Cfj/f4obxYaO79Rh8gNDeK0lCwA== X-Received: by 2002:a17:906:1dcc:b0:a3f:47ff:47d with SMTP id v12-20020a1709061dcc00b00a3f47ff047dmr1013357ejh.26.1708674907141; Thu, 22 Feb 2024 23:55:07 -0800 (PST) Received: from localhost.localdomain ([84.245.120.62]) by smtp.gmail.com with ESMTPSA id th7-20020a1709078e0700b00a3e059c5c5fsm6660235ejc.188.2024.02.22.23.55.05 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Thu, 22 Feb 2024 23:55:06 -0800 (PST) From: =?utf-8?q?Juraj_Linke=C5=A1?= To: thomas@monjalon.net, Honnappa.Nagarahalli@arm.com, jspewock@iol.unh.edu, probb@iol.unh.edu, paul.szczepanek@arm.com, Luca.Vizzarro@arm.com Cc: dev@dpdk.org, =?utf-8?q?Juraj_Linke=C5=A1?= Subject: [PATCH v3 2/7] dts: move test suite execution logic to DTSRunner Date: Fri, 23 Feb 2024 08:54:57 +0100 Message-Id: 
<20240223075502.60485-3-juraj.linkes@pantheon.tech> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20240223075502.60485-1-juraj.linkes@pantheon.tech> References: <20231220103331.60888-1-juraj.linkes@pantheon.tech> <20240223075502.60485-1-juraj.linkes@pantheon.tech> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Move the code responsible for running the test suite from the TestSuite class to the DTSRunner class. This restructuring decision was made to consolidate and unify the related logic into a single unit. Signed-off-by: Juraj Linkeš --- dts/framework/runner.py | 175 ++++++++++++++++++++++++++++++++---- dts/framework/test_suite.py | 152 ++----------------------------- 2 files changed, 169 insertions(+), 158 deletions(-) diff --git a/dts/framework/runner.py b/dts/framework/runner.py index acc1c4d6db..933685d638 100644 --- a/dts/framework/runner.py +++ b/dts/framework/runner.py @@ -19,6 +19,7 @@ import logging import sys +from types import MethodType from .config import ( BuildTargetConfiguration, @@ -26,10 +27,18 @@ TestSuiteConfig, load_config, ) -from .exception import BlockingTestSuiteError +from .exception import BlockingTestSuiteError, SSHTimeoutError, TestCaseVerifyError from .logger import DTSLOG, getLogger -from .test_result import BuildTargetResult, DTSResult, ExecutionResult, Result -from .test_suite import get_test_suites +from .settings import SETTINGS +from .test_result import ( + BuildTargetResult, + DTSResult, + ExecutionResult, + Result, + TestCaseResult, + TestSuiteResult, +) +from .test_suite import TestSuite, get_test_suites from .testbed_model import SutNode, TGNode @@ -227,7 +236,7 @@ def _run_build_target( build_target_result.update_setup(Result.FAIL, e) else: - self._run_all_suites(sut_node, tg_node, execution, build_target_result) + self._run_test_suites(sut_node, tg_node, execution, build_target_result) finally: try: @@ -237,7 +246,7 @@ def _run_build_target( self._logger.exception("Build target teardown failed.") build_target_result.update_teardown(Result.FAIL, e) - def _run_all_suites( + def _run_test_suites( self, sut_node: SutNode, tg_node: TGNode, @@ -249,6 +258,9 @@ def _run_all_suites( The method assumes the build target we're testing has already been built on the SUT node. The current build target thus corresponds to the current DPDK build present on the SUT node. + If a blocking test suite (such as the smoke test suite) fails, the rest of the test suites + in the current build target won't be executed. + Args: sut_node: The execution's SUT node. tg_node: The execution's TG node. @@ -262,7 +274,7 @@ def _run_all_suites( execution.test_suites[:0] = [TestSuiteConfig.from_dict("smoke_tests")] for test_suite_config in execution.test_suites: try: - self._run_single_suite( + self._run_test_suite_module( sut_node, tg_node, execution, build_target_result, test_suite_config ) except BlockingTestSuiteError as e: @@ -276,7 +288,7 @@ def _run_all_suites( if end_build_target: break - def _run_single_suite( + def _run_test_suite_module( self, sut_node: SutNode, tg_node: TGNode, @@ -284,11 +296,18 @@ def _run_single_suite( build_target_result: BuildTargetResult, test_suite_config: TestSuiteConfig, ) -> None: - """Run all test suites in a single test suite module. + """Set up, execute and tear down all test suites in a single test suite module. 
The method assumes the build target we're testing has already been built on the SUT node. The current build target thus corresponds to the current DPDK build present on the SUT node. + Test suite execution consists of running the discovered test cases. + A test case run consists of setup, execution and teardown of said test case. + + Record the setup and the teardown and handle failures. + + The test cases to execute are discovered when creating the :class:`TestSuite` object. + Args: sut_node: The execution's SUT node. tg_node: The execution's TG node. @@ -313,14 +332,140 @@ def _run_single_suite( else: for test_suite_class in test_suite_classes: - test_suite = test_suite_class( - sut_node, - tg_node, - test_suite_config.test_cases, - execution.func, - build_target_result, + test_suite = test_suite_class(sut_node, tg_node, test_suite_config.test_cases) + + test_suite_name = test_suite.__class__.__name__ + test_suite_result = build_target_result.add_test_suite(test_suite_name) + try: + self._logger.info(f"Starting test suite setup: {test_suite_name}") + test_suite.set_up_suite() + test_suite_result.update_setup(Result.PASS) + self._logger.info(f"Test suite setup successful: {test_suite_name}") + except Exception as e: + self._logger.exception(f"Test suite setup ERROR: {test_suite_name}") + test_suite_result.update_setup(Result.ERROR, e) + + else: + self._execute_test_suite(execution.func, test_suite, test_suite_result) + + finally: + try: + test_suite.tear_down_suite() + sut_node.kill_cleanup_dpdk_apps() + test_suite_result.update_teardown(Result.PASS) + except Exception as e: + self._logger.exception(f"Test suite teardown ERROR: {test_suite_name}") + self._logger.warning( + f"Test suite '{test_suite_name}' teardown failed, " + f"the next test suite may be affected." + ) + test_suite_result.update_setup(Result.ERROR, e) + if len(test_suite_result.get_errors()) > 0 and test_suite.is_blocking: + raise BlockingTestSuiteError(test_suite_name) + + def _execute_test_suite( + self, func: bool, test_suite: TestSuite, test_suite_result: TestSuiteResult + ) -> None: + """Execute all discovered test cases in `test_suite`. + + If the :option:`--re-run` command line argument or the :envvar:`DTS_RERUN` environment + variable is set, in case of a test case failure, the test case will be executed again + until it passes or it fails that many times in addition of the first failure. + + Args: + func: Whether to execute functional test cases. + test_suite: The test suite object. + test_suite_result: The test suite level result object associated + with the current test suite. + """ + if func: + for test_case_method in test_suite._get_functional_test_cases(): + test_case_name = test_case_method.__name__ + test_case_result = test_suite_result.add_test_case(test_case_name) + all_attempts = SETTINGS.re_run + 1 + attempt_nr = 1 + self._run_test_case(test_suite, test_case_method, test_case_result) + while not test_case_result and attempt_nr < all_attempts: + attempt_nr += 1 + self._logger.info( + f"Re-running FAILED test case '{test_case_name}'. " + f"Attempt number {attempt_nr} out of {all_attempts}." + ) + self._run_test_case(test_suite, test_case_method, test_case_result) + + def _run_test_case( + self, + test_suite: TestSuite, + test_case_method: MethodType, + test_case_result: TestCaseResult, + ) -> None: + """Setup, execute and teardown a test case in `test_suite`. + + Record the result of the setup and the teardown and handle failures. + + Args: + test_suite: The test suite object. 
+ test_case_method: The test case method. + test_case_result: The test case level result object associated + with the current test case. + """ + test_case_name = test_case_method.__name__ + + try: + # run set_up function for each case + test_suite.set_up_test_case() + test_case_result.update_setup(Result.PASS) + except SSHTimeoutError as e: + self._logger.exception(f"Test case setup FAILED: {test_case_name}") + test_case_result.update_setup(Result.FAIL, e) + except Exception as e: + self._logger.exception(f"Test case setup ERROR: {test_case_name}") + test_case_result.update_setup(Result.ERROR, e) + + else: + # run test case if setup was successful + self._execute_test_case(test_case_method, test_case_result) + + finally: + try: + test_suite.tear_down_test_case() + test_case_result.update_teardown(Result.PASS) + except Exception as e: + self._logger.exception(f"Test case teardown ERROR: {test_case_name}") + self._logger.warning( + f"Test case '{test_case_name}' teardown failed, " + f"the next test case may be affected." ) - test_suite.run() + test_case_result.update_teardown(Result.ERROR, e) + test_case_result.update(Result.ERROR) + + def _execute_test_case( + self, test_case_method: MethodType, test_case_result: TestCaseResult + ) -> None: + """Execute one test case, record the result and handle failures. + + Args: + test_case_method: The test case method. + test_case_result: The test case level result object associated + with the current test case. + """ + test_case_name = test_case_method.__name__ + try: + self._logger.info(f"Starting test case execution: {test_case_name}") + test_case_method() + test_case_result.update(Result.PASS) + self._logger.info(f"Test case execution PASSED: {test_case_name}") + + except TestCaseVerifyError as e: + self._logger.exception(f"Test case execution FAILED: {test_case_name}") + test_case_result.update(Result.FAIL, e) + except Exception as e: + self._logger.exception(f"Test case execution ERROR: {test_case_name}") + test_case_result.update(Result.ERROR, e) + except KeyboardInterrupt: + self._logger.error(f"Test case execution INTERRUPTED by user: {test_case_name}") + test_case_result.update(Result.SKIP) + raise KeyboardInterrupt("Stop DTS") def _exit_dts(self) -> None: """Process all errors and exit with the proper exit code.""" diff --git a/dts/framework/test_suite.py b/dts/framework/test_suite.py index dfb391ffbd..b02fd36147 100644 --- a/dts/framework/test_suite.py +++ b/dts/framework/test_suite.py @@ -8,7 +8,6 @@ must be extended by subclasses which add test cases. The :class:`TestSuite` contains the basics needed by subclasses: - * Test suite and test case execution flow, * Testbed (SUT, TG) configuration, * Packet sending and verification, * Test case verification. @@ -28,27 +27,22 @@ from scapy.layers.l2 import Ether # type: ignore[import] from scapy.packet import Packet, Padding # type: ignore[import] -from .exception import ( - BlockingTestSuiteError, - ConfigurationError, - SSHTimeoutError, - TestCaseVerifyError, -) +from .exception import ConfigurationError, TestCaseVerifyError from .logger import DTSLOG, getLogger from .settings import SETTINGS -from .test_result import BuildTargetResult, Result, TestCaseResult, TestSuiteResult from .testbed_model import Port, PortLink, SutNode, TGNode from .utils import get_packet_summaries class TestSuite(object): - """The base class with methods for handling the basic flow of a test suite. + """The base class with building blocks needed by most test cases. 
* Test case filtering and collection, - * Test suite setup/cleanup, - * Test setup/cleanup, - * Test case execution, - * Error handling and results storage. + * Test suite setup/cleanup methods to override, + * Test case setup/cleanup methods to override, + * Test case verification, + * Testbed configuration, + * Traffic sending and verification. Test cases are implemented by subclasses. Test cases are all methods starting with ``test_``, further divided into performance test cases (starting with ``test_perf_``) @@ -60,10 +54,6 @@ class TestSuite(object): The union of both lists will be used. Any unknown test cases from the latter lists will be silently ignored. - If the :option:`--re-run` command line argument or the :envvar:`DTS_RERUN` environment variable - is set, in case of a test case failure, the test case will be executed again until it passes - or it fails that many times in addition of the first failure. - The methods named ``[set_up|tear_down]_[suite|test_case]`` should be overridden in subclasses if the appropriate test suite/test case fixtures are needed. @@ -82,8 +72,6 @@ class TestSuite(object): is_blocking: ClassVar[bool] = False _logger: DTSLOG _test_cases_to_run: list[str] - _func: bool - _result: TestSuiteResult _port_links: list[PortLink] _sut_port_ingress: Port _sut_port_egress: Port @@ -99,30 +87,23 @@ def __init__( sut_node: SutNode, tg_node: TGNode, test_cases: list[str], - func: bool, - build_target_result: BuildTargetResult, ): """Initialize the test suite testbed information and basic configuration. - Process what test cases to run, create the associated - :class:`~.test_result.TestSuiteResult`, find links between ports - and set up default IP addresses to be used when configuring them. + Process what test cases to run, find links between ports and set up + default IP addresses to be used when configuring them. Args: sut_node: The SUT node where the test suite will run. tg_node: The TG node where the test suite will run. test_cases: The list of test cases to execute. If empty, all test cases will be executed. - func: Whether to run functional tests. - build_target_result: The build target result this test suite is run in. """ self.sut_node = sut_node self.tg_node = tg_node self._logger = getLogger(self.__class__.__name__) self._test_cases_to_run = test_cases self._test_cases_to_run.extend(SETTINGS.test_cases) - self._func = func - self._result = build_target_result.add_test_suite(self.__class__.__name__) self._port_links = [] self._process_links() self._sut_port_ingress, self._tg_port_egress = ( @@ -384,62 +365,6 @@ def _verify_l3_packet(self, received_packet: IP, expected_packet: IP) -> bool: return False return True - def run(self) -> None: - """Set up, execute and tear down the whole suite. - - Test suite execution consists of running all test cases scheduled to be executed. - A test case run consists of setup, execution and teardown of said test case. - - Record the setup and the teardown and handle failures. - - The list of scheduled test cases is constructed when creating the :class:`TestSuite` object. 
- """ - test_suite_name = self.__class__.__name__ - - try: - self._logger.info(f"Starting test suite setup: {test_suite_name}") - self.set_up_suite() - self._result.update_setup(Result.PASS) - self._logger.info(f"Test suite setup successful: {test_suite_name}") - except Exception as e: - self._logger.exception(f"Test suite setup ERROR: {test_suite_name}") - self._result.update_setup(Result.ERROR, e) - - else: - self._execute_test_suite() - - finally: - try: - self.tear_down_suite() - self.sut_node.kill_cleanup_dpdk_apps() - self._result.update_teardown(Result.PASS) - except Exception as e: - self._logger.exception(f"Test suite teardown ERROR: {test_suite_name}") - self._logger.warning( - f"Test suite '{test_suite_name}' teardown failed, " - f"the next test suite may be affected." - ) - self._result.update_setup(Result.ERROR, e) - if len(self._result.get_errors()) > 0 and self.is_blocking: - raise BlockingTestSuiteError(test_suite_name) - - def _execute_test_suite(self) -> None: - """Execute all test cases scheduled to be executed in this suite.""" - if self._func: - for test_case_method in self._get_functional_test_cases(): - test_case_name = test_case_method.__name__ - test_case_result = self._result.add_test_case(test_case_name) - all_attempts = SETTINGS.re_run + 1 - attempt_nr = 1 - self._run_test_case(test_case_method, test_case_result) - while not test_case_result and attempt_nr < all_attempts: - attempt_nr += 1 - self._logger.info( - f"Re-running FAILED test case '{test_case_name}'. " - f"Attempt number {attempt_nr} out of {all_attempts}." - ) - self._run_test_case(test_case_method, test_case_result) - def _get_functional_test_cases(self) -> list[MethodType]: """Get all functional test cases defined in this TestSuite. @@ -471,65 +396,6 @@ def _should_be_executed(self, test_case_name: str, test_case_regex: str) -> bool return match - def _run_test_case( - self, test_case_method: MethodType, test_case_result: TestCaseResult - ) -> None: - """Setup, execute and teardown a test case in this suite. - - Record the result of the setup and the teardown and handle failures. - """ - test_case_name = test_case_method.__name__ - - try: - # run set_up function for each case - self.set_up_test_case() - test_case_result.update_setup(Result.PASS) - except SSHTimeoutError as e: - self._logger.exception(f"Test case setup FAILED: {test_case_name}") - test_case_result.update_setup(Result.FAIL, e) - except Exception as e: - self._logger.exception(f"Test case setup ERROR: {test_case_name}") - test_case_result.update_setup(Result.ERROR, e) - - else: - # run test case if setup was successful - self._execute_test_case(test_case_method, test_case_result) - - finally: - try: - self.tear_down_test_case() - test_case_result.update_teardown(Result.PASS) - except Exception as e: - self._logger.exception(f"Test case teardown ERROR: {test_case_name}") - self._logger.warning( - f"Test case '{test_case_name}' teardown failed, " - f"the next test case may be affected." 
- ) - test_case_result.update_teardown(Result.ERROR, e) - test_case_result.update(Result.ERROR) - - def _execute_test_case( - self, test_case_method: MethodType, test_case_result: TestCaseResult - ) -> None: - """Execute one test case, record the result and handle failures.""" - test_case_name = test_case_method.__name__ - try: - self._logger.info(f"Starting test case execution: {test_case_name}") - test_case_method() - test_case_result.update(Result.PASS) - self._logger.info(f"Test case execution PASSED: {test_case_name}") - - except TestCaseVerifyError as e: - self._logger.exception(f"Test case execution FAILED: {test_case_name}") - test_case_result.update(Result.FAIL, e) - except Exception as e: - self._logger.exception(f"Test case execution ERROR: {test_case_name}") - test_case_result.update(Result.ERROR, e) - except KeyboardInterrupt: - self._logger.error(f"Test case execution INTERRUPTED by user: {test_case_name}") - test_case_result.update(Result.SKIP) - raise KeyboardInterrupt("Stop DTS") - def get_test_suites(testsuite_module_path: str) -> list[type[TestSuite]]: r"""Find all :class:`TestSuite`\s in a Python module. From patchwork Fri Feb 23 07:54:58 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Juraj_Linke=C5=A1?= X-Patchwork-Id: 137087 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 4A0CB43B9B; Fri, 23 Feb 2024 08:55:29 +0100 (CET) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id BB34D40EE5; Fri, 23 Feb 2024 08:55:11 +0100 (CET) Received: from mail-ej1-f45.google.com (mail-ej1-f45.google.com [209.85.218.45]) by mails.dpdk.org (Postfix) with ESMTP id E4398410E3 for ; Fri, 23 Feb 2024 08:55:08 +0100 (CET) Received: by mail-ej1-f45.google.com with SMTP id a640c23a62f3a-a28a6cef709so78853266b.1 for ; Thu, 22 Feb 2024 23:55:08 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=pantheon.tech; s=google; t=1708674908; x=1709279708; darn=dpdk.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=72yc44tlXFhYUSExrjebuQ9Vgr/FP9SQhm4CE/2KR3Q=; b=K9hIbsi9oKwhajyVjljMbt/9qKIhr/hDVTZqWBsBgnr/tJaII4sDWuo21LOqxAVubh ipV7t/JBHqPMZQd0QIMaEyu3P7wQAtsMV7ro2y5TZeRDdeOcFfHDT1we1oTcIUP/Cr8B n5jZWSRDqC3tIZgwWCU+9hk4hnRGdceDPZJ0OdlP+SjjiKnOPD6DP27CoSaVIpG6v+tS rzHLbsVV+dmv1qwwf3fKnmZXqTJjZaVZUmwawxA4ttJqfHpd5F4HRr2k+NVhFKCI5wjc NCKx7EwnyCYoFn0NBvk71OsA2kLLQpacR9KmWC1Gnnh6gG25/Be1j7EqJWNPYXHWeHLM aLYg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1708674908; x=1709279708; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=72yc44tlXFhYUSExrjebuQ9Vgr/FP9SQhm4CE/2KR3Q=; b=Fc8gin31ClzCwrZHK9sP+5cbsmUd15lLr5JzT5ri40Un+oE9eTc/Q43UVBIyL3/eqC DTkDiUs9PihVQiSaIghjBWcodjzs0S5n+5xsDVvJtAF+k9L1gVpHtrH3eVk/I/DOJyg1 hgu/BxozUxjrNM2Q442vsfCcxqJQt2IcywJDujE6Jn27Jfu3gvQWPUdqc0Hro+3zaEgM UhKvTEgoWTelnaDL1HkIjqe7PQ3e/TKNzzyOhqRcvBDMMyiept81KJBnNATUulS4x5jq KG7Rlib+UhWTCBqQiDjft3CAsIviVEjQOooh6PgRHCBBVZfJxSV4ZM1FE/nsrbqTcSWU ypnA== X-Gm-Message-State: AOJu0YzzU0hNl3Aeq9FP96/xDzqxy0f7zHWYwZdjnBzCKqHn/GdUvWe0 
From: =?utf-8?q?Juraj_Linke=C5=A1?=
To: thomas@monjalon.net, Honnappa.Nagarahalli@arm.com, jspewock@iol.unh.edu, probb@iol.unh.edu, paul.szczepanek@arm.com, Luca.Vizzarro@arm.com
Cc: dev@dpdk.org, =?utf-8?q?Juraj_Linke=C5=A1?=
Subject: [PATCH v3 3/7] dts: filter test suites in executions
Date: Fri, 23 Feb 2024 08:54:58 +0100
Message-Id: <20240223075502.60485-4-juraj.linkes@pantheon.tech>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20240223075502.60485-1-juraj.linkes@pantheon.tech>
References: <20231220103331.60888-1-juraj.linkes@pantheon.tech> <20240223075502.60485-1-juraj.linkes@pantheon.tech>

We're currently filtering which test cases to run only after some setup steps, such as the DPDK build, have already been taken. This prevents us from marking the test suites and cases that were supposed to run as blocked when an earlier setup fails, because that information is not available at that point.

To remedy this, move the filtering to the beginning of each execution. It is the first action taken in each execution, and if we can't filter the test cases, for example due to invalid inputs, we abort the whole execution. No test suites or cases will be marked as blocked, since we don't know which were supposed to run.

On top of that, the filtering currently takes place in the TestSuite class, which should only concern itself with test suite and test case logic, not the processing behind the scenes. That logic has been moved to DTSRunner, which should do all the processing needed to run test suites.

The filtering itself introduces a few changes/assumptions which are more sensible than before:

1. Assumption: there is just one TestSuite subclass in each test suite module. This was an implicit assumption before, as we couldn't specify the TestSuite classes in the test run configuration, just the modules. The name of the TestSuite subclass starts with "Test" followed by the module name in CamelCase (see the sketch below).

2. Unknown test cases specified either in the test run configuration or via the environment variable/command line argument are no longer silently ignored. This is a quality-of-life improvement, as users could easily be unaware of the silent ignoring.

Also, a change in the code triggers a pycodestyle warning and an error:

[E] E203 whitespace before ':'
[W] W503 line break before binary operator

Neither is PEP 8 compliant, so both checks are disabled.
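As a minimal sketch of this convention (the module, class and test case names below are hypothetical examples, not part of this series), the runner imports tests.TestSuite_hello_world, picks the single TestSuite subclass whose name is "Test" plus the CamelCase module name, and then splits its methods into functional and performance test cases by name prefix:

    # tests/TestSuite_hello_world.py -- hypothetical example module
    from framework.test_suite import TestSuite

    class TestHelloWorld(TestSuite):
        """Discovered because the class name is "Test" + CamelCase of "hello_world"."""

        def test_basic_sanity(self) -> None:
            # Matches the functional test case prefix regex r"test_(?!perf_)".
            self.verify(1 + 1 == 2, "Basic sanity check failed.")

        def test_perf_dummy_throughput(self) -> None:
            # Matches the performance test case prefix regex r"test_perf_".
            pass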
Signed-off-by: Juraj Linkeš --- dts/framework/config/__init__.py | 24 +- dts/framework/config/conf_yaml_schema.json | 2 +- dts/framework/runner.py | 432 +++++++++++++++------ dts/framework/settings.py | 3 +- dts/framework/test_result.py | 34 ++ dts/framework/test_suite.py | 85 +--- dts/pyproject.toml | 3 + dts/tests/TestSuite_smoke_tests.py | 2 +- 8 files changed, 388 insertions(+), 197 deletions(-) diff --git a/dts/framework/config/__init__.py b/dts/framework/config/__init__.py index 62eded7f04..c6a93b3b89 100644 --- a/dts/framework/config/__init__.py +++ b/dts/framework/config/__init__.py @@ -36,7 +36,7 @@ import json import os.path import pathlib -from dataclasses import dataclass +from dataclasses import dataclass, fields from enum import auto, unique from typing import Union @@ -506,6 +506,28 @@ def from_dict( vdevs=vdevs, ) + def copy_and_modify(self, **kwargs) -> "ExecutionConfiguration": + """Create a shallow copy with any of the fields modified. + + The only new data are those passed to this method. + The rest are copied from the object's fields calling the method. + + Args: + **kwargs: The names and types of keyword arguments are defined + by the fields of the :class:`ExecutionConfiguration` class. + + Returns: + The copied and modified execution configuration. + """ + new_config = {} + for field in fields(self): + if field.name in kwargs: + new_config[field.name] = kwargs[field.name] + else: + new_config[field.name] = getattr(self, field.name) + + return ExecutionConfiguration(**new_config) + @dataclass(slots=True, frozen=True) class Configuration: diff --git a/dts/framework/config/conf_yaml_schema.json b/dts/framework/config/conf_yaml_schema.json index 84e45fe3c2..051b079fe4 100644 --- a/dts/framework/config/conf_yaml_schema.json +++ b/dts/framework/config/conf_yaml_schema.json @@ -197,7 +197,7 @@ }, "cases": { "type": "array", - "description": "If specified, only this subset of test suite's test cases will be run. Unknown test cases will be silently ignored.", + "description": "If specified, only this subset of test suite's test cases will be run.", "items": { "type": "string" }, diff --git a/dts/framework/runner.py b/dts/framework/runner.py index 933685d638..e8030365ac 100644 --- a/dts/framework/runner.py +++ b/dts/framework/runner.py @@ -17,17 +17,27 @@ and the test case stage runs test cases individually. """ +import importlib +import inspect import logging +import re import sys from types import MethodType +from typing import Iterable from .config import ( BuildTargetConfiguration, + Configuration, ExecutionConfiguration, TestSuiteConfig, load_config, ) -from .exception import BlockingTestSuiteError, SSHTimeoutError, TestCaseVerifyError +from .exception import ( + BlockingTestSuiteError, + ConfigurationError, + SSHTimeoutError, + TestCaseVerifyError, +) from .logger import DTSLOG, getLogger from .settings import SETTINGS from .test_result import ( @@ -37,8 +47,9 @@ Result, TestCaseResult, TestSuiteResult, + TestSuiteWithCases, ) -from .test_suite import TestSuite, get_test_suites +from .test_suite import TestSuite from .testbed_model import SutNode, TGNode @@ -59,13 +70,23 @@ class DTSRunner: given execution, the next execution begins. 
""" + _configuration: Configuration _logger: DTSLOG _result: DTSResult + _test_suite_class_prefix: str + _test_suite_module_prefix: str + _func_test_case_regex: str + _perf_test_case_regex: str def __init__(self): - """Initialize the instance with logger and result.""" + """Initialize the instance with configuration, logger, result and string constants.""" + self._configuration = load_config() self._logger = getLogger("DTSRunner") self._result = DTSResult(self._logger) + self._test_suite_class_prefix = "Test" + self._test_suite_module_prefix = "tests.TestSuite_" + self._func_test_case_regex = r"test_(?!perf_)" + self._perf_test_case_regex = r"test_perf_" def run(self): """Run all build targets in all executions from the test run configuration. @@ -106,29 +127,32 @@ def run(self): try: # check the python version of the server that runs dts self._check_dts_python_version() + self._result.update_setup(Result.PASS) # for all Execution sections - for execution in load_config().executions: - sut_node = sut_nodes.get(execution.system_under_test_node.name) - tg_node = tg_nodes.get(execution.traffic_generator_node.name) - + for execution in self._configuration.executions: + self._logger.info( + f"Running execution with SUT '{execution.system_under_test_node.name}'." + ) + execution_result = self._result.add_execution(execution.system_under_test_node) + # we don't want to modify the original config, so create a copy + execution_test_suites = list(execution.test_suites) + if not execution.skip_smoke_tests: + execution_test_suites[:0] = [TestSuiteConfig.from_dict("smoke_tests")] try: - if not sut_node: - sut_node = SutNode(execution.system_under_test_node) - sut_nodes[sut_node.name] = sut_node - if not tg_node: - tg_node = TGNode(execution.traffic_generator_node) - tg_nodes[tg_node.name] = tg_node - self._result.update_setup(Result.PASS) + test_suites_with_cases = self._get_test_suites_with_cases( + execution_test_suites, execution.func, execution.perf + ) except Exception as e: - failed_node = execution.system_under_test_node.name - if sut_node: - failed_node = execution.traffic_generator_node.name - self._logger.exception(f"The Creation of node {failed_node} failed.") - self._result.update_setup(Result.FAIL, e) + self._logger.exception( + f"Invalid test suite configuration found: " f"{execution_test_suites}." + ) + execution_result.update_setup(Result.FAIL, e) else: - self._run_execution(sut_node, tg_node, execution) + self._connect_nodes_and_run_execution( + sut_nodes, tg_nodes, execution, execution_result, test_suites_with_cases + ) except Exception as e: self._logger.exception("An unexpected error has occurred.") @@ -163,11 +187,206 @@ def _check_dts_python_version(self) -> None: ) self._logger.warning("Please use Python >= 3.10 instead.") + def _get_test_suites_with_cases( + self, + test_suite_configs: list[TestSuiteConfig], + func: bool, + perf: bool, + ) -> list[TestSuiteWithCases]: + """Test suites with test cases discovery. + + The test suites with test cases defined in the user configuration are discovered + and stored for future use so that we don't import the modules twice and so that + the list of test suites with test cases is available for recording right away. + + Args: + test_suite_configs: Test suite configurations. + func: Whether to include functional test cases in the final list. + perf: Whether to include performance test cases in the final list. + + Returns: + The discovered test suites, each with test cases. 
+ """ + test_suites_with_cases = [] + + for test_suite_config in test_suite_configs: + test_suite_class = self._get_test_suite_class(test_suite_config.test_suite) + test_cases = [] + func_test_cases, perf_test_cases = self._filter_test_cases( + test_suite_class, set(test_suite_config.test_cases + SETTINGS.test_cases) + ) + if func: + test_cases.extend(func_test_cases) + if perf: + test_cases.extend(perf_test_cases) + + test_suites_with_cases.append( + TestSuiteWithCases(test_suite_class=test_suite_class, test_cases=test_cases) + ) + + return test_suites_with_cases + + def _get_test_suite_class(self, module_name: str) -> type[TestSuite]: + """Find the :class:`TestSuite` class in `module_name`. + + The full module name is `module_name` prefixed with `self._test_suite_module_prefix`. + The module name is a standard filename with words separated with underscores. + Search the `module_name` for a :class:`TestSuite` class which starts + with `self._test_suite_class_prefix`, continuing with CamelCase `module_name`. + The first matching class is returned. + + The CamelCase convention is not tested, only lowercase strings are compared. + + Args: + module_name: The module name without prefix where to search for the test suite. + + Returns: + The found test suite class. + + Raises: + ConfigurationError: If the corresponding module is not found or + a valid :class:`TestSuite` is not found in the module. + """ + + def is_test_suite(object) -> bool: + """Check whether `object` is a :class:`TestSuite`. + + The `object` is a subclass of :class:`TestSuite`, but not :class:`TestSuite` itself. + + Args: + object: The object to be checked. + + Returns: + :data:`True` if `object` is a subclass of `TestSuite`. + """ + try: + if issubclass(object, TestSuite) and object is not TestSuite: + return True + except TypeError: + return False + return False + + testsuite_module_path = f"{self._test_suite_module_prefix}{module_name}" + try: + test_suite_module = importlib.import_module(testsuite_module_path) + except ModuleNotFoundError as e: + raise ConfigurationError( + f"Test suite module '{testsuite_module_path}' not found." + ) from e + + lowercase_suite_to_find = ( + f"{self._test_suite_class_prefix}{module_name.replace('_', '')}".lower() + ) + for class_name, class_obj in inspect.getmembers(test_suite_module, is_test_suite): + if ( + class_name.startswith(self._test_suite_class_prefix) + and lowercase_suite_to_find == class_name.lower() + ): + return class_obj + raise ConfigurationError( + f"Couldn't find any valid test suites in {test_suite_module.__name__}." + ) + + def _filter_test_cases( + self, test_suite_class: type[TestSuite], test_cases_to_run: set[str] + ) -> tuple[list[MethodType], list[MethodType]]: + """Filter `test_cases_to_run` from `test_suite_class`. + + There are two rounds of filtering if `test_cases_to_run` is not empty. + The first filters `test_cases_to_run` from all methods of `test_suite_class`. + Then the methods are separated into functional and performance test cases. + If a method matches neither the functional nor performance name prefix, it's an error. + + Args: + test_suite_class: The class of the test suite. + test_cases_to_run: Test case names to filter from `test_suite_class`. + If empty, return all matching test cases. + + Returns: + A list of test case methods that should be executed. + + Raises: + ConfigurationError: If a test case from `test_cases_to_run` is not found + or it doesn't match either the functional nor performance name prefix. 
+ """ + func_test_cases = [] + perf_test_cases = [] + name_method_tuples = inspect.getmembers(test_suite_class, inspect.isfunction) + if test_cases_to_run: + name_method_tuples = [ + (name, method) for name, method in name_method_tuples if name in test_cases_to_run + ] + if len(name_method_tuples) < len(test_cases_to_run): + missing_test_cases = test_cases_to_run - {name for name, _ in name_method_tuples} + raise ConfigurationError( + f"Test cases {missing_test_cases} not found among methods " + f"of {test_suite_class.__name__}." + ) + + for test_case_name, test_case_method in name_method_tuples: + if re.match(self._func_test_case_regex, test_case_name): + func_test_cases.append(test_case_method) + elif re.match(self._perf_test_case_regex, test_case_name): + perf_test_cases.append(test_case_method) + elif test_cases_to_run: + raise ConfigurationError( + f"Method '{test_case_name}' matches neither " + f"a functional nor a performance test case name." + ) + + return func_test_cases, perf_test_cases + + def _connect_nodes_and_run_execution( + self, + sut_nodes: dict[str, SutNode], + tg_nodes: dict[str, TGNode], + execution: ExecutionConfiguration, + execution_result: ExecutionResult, + test_suites_with_cases: Iterable[TestSuiteWithCases], + ) -> None: + """Connect nodes, then continue to run the given execution. + + Connect the :class:`SutNode` and the :class:`TGNode` of this `execution`. + If either has already been connected, it's going to be in either `sut_nodes` or `tg_nodes`, + respectively. + If not, connect and add the node to the respective `sut_nodes` or `tg_nodes` :class:`dict`. + + Args: + sut_nodes: A dictionary storing connected/to be connected SUT nodes. + tg_nodes: A dictionary storing connected/to be connected TG nodes. + execution: An execution's test run configuration. + execution_result: The execution's result. + test_suites_with_cases: The test suites with test cases to run. + """ + sut_node = sut_nodes.get(execution.system_under_test_node.name) + tg_node = tg_nodes.get(execution.traffic_generator_node.name) + + try: + if not sut_node: + sut_node = SutNode(execution.system_under_test_node) + sut_nodes[sut_node.name] = sut_node + if not tg_node: + tg_node = TGNode(execution.traffic_generator_node) + tg_nodes[tg_node.name] = tg_node + except Exception as e: + failed_node = execution.system_under_test_node.name + if sut_node: + failed_node = execution.traffic_generator_node.name + self._logger.exception(f"The Creation of node {failed_node} failed.") + execution_result.update_setup(Result.FAIL, e) + + else: + self._run_execution( + sut_node, tg_node, execution, execution_result, test_suites_with_cases + ) + def _run_execution( self, sut_node: SutNode, tg_node: TGNode, execution: ExecutionConfiguration, + execution_result: ExecutionResult, + test_suites_with_cases: Iterable[TestSuiteWithCases], ) -> None: """Run the given execution. @@ -178,11 +397,11 @@ def _run_execution( sut_node: The execution's SUT node. tg_node: The execution's TG node. execution: An execution's test run configuration. + execution_result: The execution's result. + test_suites_with_cases: The test suites with test cases to run. 
""" self._logger.info(f"Running execution with SUT '{execution.system_under_test_node.name}'.") - execution_result = self._result.add_execution(sut_node.config) execution_result.add_sut_info(sut_node.node_info) - try: sut_node.set_up_execution(execution) execution_result.update_setup(Result.PASS) @@ -192,7 +411,10 @@ def _run_execution( else: for build_target in execution.build_targets: - self._run_build_target(sut_node, tg_node, build_target, execution, execution_result) + build_target_result = execution_result.add_build_target(build_target) + self._run_build_target( + sut_node, tg_node, build_target, build_target_result, test_suites_with_cases + ) finally: try: @@ -207,8 +429,8 @@ def _run_build_target( sut_node: SutNode, tg_node: TGNode, build_target: BuildTargetConfiguration, - execution: ExecutionConfiguration, - execution_result: ExecutionResult, + build_target_result: BuildTargetResult, + test_suites_with_cases: Iterable[TestSuiteWithCases], ) -> None: """Run the given build target. @@ -220,11 +442,11 @@ def _run_build_target( sut_node: The execution's sut node. tg_node: The execution's tg node. build_target: A build target's test run configuration. - execution: The build target's execution's test run configuration. - execution_result: The execution level result object associated with the execution. + build_target_result: The build target level result object associated + with the current build target. + test_suites_with_cases: The test suites with test cases to run. """ self._logger.info(f"Running build target '{build_target.name}'.") - build_target_result = execution_result.add_build_target(build_target) try: sut_node.set_up_build_target(build_target) @@ -236,7 +458,7 @@ def _run_build_target( build_target_result.update_setup(Result.FAIL, e) else: - self._run_test_suites(sut_node, tg_node, execution, build_target_result) + self._run_test_suites(sut_node, tg_node, build_target_result, test_suites_with_cases) finally: try: @@ -250,10 +472,10 @@ def _run_test_suites( self, sut_node: SutNode, tg_node: TGNode, - execution: ExecutionConfiguration, build_target_result: BuildTargetResult, + test_suites_with_cases: Iterable[TestSuiteWithCases], ) -> None: - """Run the execution's (possibly a subset of) test suites using the current build target. + """Run `test_suites_with_cases` with the current build target. The method assumes the build target we're testing has already been built on the SUT node. The current build target thus corresponds to the current DPDK build present on the SUT node. @@ -264,22 +486,20 @@ def _run_test_suites( Args: sut_node: The execution's SUT node. tg_node: The execution's TG node. - execution: The execution's test run configuration associated - with the current build target. build_target_result: The build target level result object associated with the current build target. + test_suites_with_cases: The test suites with test cases to run. 
""" end_build_target = False - if not execution.skip_smoke_tests: - execution.test_suites[:0] = [TestSuiteConfig.from_dict("smoke_tests")] - for test_suite_config in execution.test_suites: + for test_suite_with_cases in test_suites_with_cases: + test_suite_result = build_target_result.add_test_suite( + test_suite_with_cases.test_suite_class.__name__ + ) try: - self._run_test_suite_module( - sut_node, tg_node, execution, build_target_result, test_suite_config - ) + self._run_test_suite(sut_node, tg_node, test_suite_result, test_suite_with_cases) except BlockingTestSuiteError as e: self._logger.exception( - f"An error occurred within {test_suite_config.test_suite}. " + f"An error occurred within {test_suite_with_cases.test_suite_class.__name__}. " "Skipping build target..." ) self._result.add_error(e) @@ -288,15 +508,14 @@ def _run_test_suites( if end_build_target: break - def _run_test_suite_module( + def _run_test_suite( self, sut_node: SutNode, tg_node: TGNode, - execution: ExecutionConfiguration, - build_target_result: BuildTargetResult, - test_suite_config: TestSuiteConfig, + test_suite_result: TestSuiteResult, + test_suite_with_cases: TestSuiteWithCases, ) -> None: - """Set up, execute and tear down all test suites in a single test suite module. + """Set up, execute and tear down `test_suite_with_cases`. The method assumes the build target we're testing has already been built on the SUT node. The current build target thus corresponds to the current DPDK build present on the SUT node. @@ -306,92 +525,79 @@ def _run_test_suite_module( Record the setup and the teardown and handle failures. - The test cases to execute are discovered when creating the :class:`TestSuite` object. - Args: sut_node: The execution's SUT node. tg_node: The execution's TG node. - execution: The execution's test run configuration associated - with the current build target. - build_target_result: The build target level result object associated - with the current build target. - test_suite_config: Test suite test run configuration specifying the test suite module - and possibly a subset of test cases of test suites in that module. + test_suite_result: The test suite level result object associated + with the current test suite. + test_suite_with_cases: The test suite with test cases to run. Raises: BlockingTestSuiteError: If a blocking test suite fails. 
""" + test_suite_name = test_suite_with_cases.test_suite_class.__name__ + test_suite = test_suite_with_cases.test_suite_class(sut_node, tg_node) try: - full_suite_path = f"tests.TestSuite_{test_suite_config.test_suite}" - test_suite_classes = get_test_suites(full_suite_path) - suites_str = ", ".join((x.__name__ for x in test_suite_classes)) - self._logger.debug(f"Found test suites '{suites_str}' in '{full_suite_path}'.") + self._logger.info(f"Starting test suite setup: {test_suite_name}") + test_suite.set_up_suite() + test_suite_result.update_setup(Result.PASS) + self._logger.info(f"Test suite setup successful: {test_suite_name}") except Exception as e: - self._logger.exception("An error occurred when searching for test suites.") - self._result.update_setup(Result.ERROR, e) + self._logger.exception(f"Test suite setup ERROR: {test_suite_name}") + test_suite_result.update_setup(Result.ERROR, e) else: - for test_suite_class in test_suite_classes: - test_suite = test_suite_class(sut_node, tg_node, test_suite_config.test_cases) - - test_suite_name = test_suite.__class__.__name__ - test_suite_result = build_target_result.add_test_suite(test_suite_name) - try: - self._logger.info(f"Starting test suite setup: {test_suite_name}") - test_suite.set_up_suite() - test_suite_result.update_setup(Result.PASS) - self._logger.info(f"Test suite setup successful: {test_suite_name}") - except Exception as e: - self._logger.exception(f"Test suite setup ERROR: {test_suite_name}") - test_suite_result.update_setup(Result.ERROR, e) - - else: - self._execute_test_suite(execution.func, test_suite, test_suite_result) - - finally: - try: - test_suite.tear_down_suite() - sut_node.kill_cleanup_dpdk_apps() - test_suite_result.update_teardown(Result.PASS) - except Exception as e: - self._logger.exception(f"Test suite teardown ERROR: {test_suite_name}") - self._logger.warning( - f"Test suite '{test_suite_name}' teardown failed, " - f"the next test suite may be affected." - ) - test_suite_result.update_setup(Result.ERROR, e) - if len(test_suite_result.get_errors()) > 0 and test_suite.is_blocking: - raise BlockingTestSuiteError(test_suite_name) + self._execute_test_suite( + test_suite, + test_suite_with_cases.test_cases, + test_suite_result, + ) + finally: + try: + test_suite.tear_down_suite() + sut_node.kill_cleanup_dpdk_apps() + test_suite_result.update_teardown(Result.PASS) + except Exception as e: + self._logger.exception(f"Test suite teardown ERROR: {test_suite_name}") + self._logger.warning( + f"Test suite '{test_suite_name}' teardown failed, " + "the next test suite may be affected." + ) + test_suite_result.update_setup(Result.ERROR, e) + if len(test_suite_result.get_errors()) > 0 and test_suite.is_blocking: + raise BlockingTestSuiteError(test_suite_name) def _execute_test_suite( - self, func: bool, test_suite: TestSuite, test_suite_result: TestSuiteResult + self, + test_suite: TestSuite, + test_cases: Iterable[MethodType], + test_suite_result: TestSuiteResult, ) -> None: - """Execute all discovered test cases in `test_suite`. + """Execute all `test_cases` in `test_suite`. If the :option:`--re-run` command line argument or the :envvar:`DTS_RERUN` environment variable is set, in case of a test case failure, the test case will be executed again until it passes or it fails that many times in addition of the first failure. Args: - func: Whether to execute functional test cases. test_suite: The test suite object. + test_cases: The list of test case methods. 
test_suite_result: The test suite level result object associated with the current test suite. """ - if func: - for test_case_method in test_suite._get_functional_test_cases(): - test_case_name = test_case_method.__name__ - test_case_result = test_suite_result.add_test_case(test_case_name) - all_attempts = SETTINGS.re_run + 1 - attempt_nr = 1 + for test_case_method in test_cases: + test_case_name = test_case_method.__name__ + test_case_result = test_suite_result.add_test_case(test_case_name) + all_attempts = SETTINGS.re_run + 1 + attempt_nr = 1 + self._run_test_case(test_suite, test_case_method, test_case_result) + while not test_case_result and attempt_nr < all_attempts: + attempt_nr += 1 + self._logger.info( + f"Re-running FAILED test case '{test_case_name}'. " + f"Attempt number {attempt_nr} out of {all_attempts}." + ) self._run_test_case(test_suite, test_case_method, test_case_result) - while not test_case_result and attempt_nr < all_attempts: - attempt_nr += 1 - self._logger.info( - f"Re-running FAILED test case '{test_case_name}'. " - f"Attempt number {attempt_nr} out of {all_attempts}." - ) - self._run_test_case(test_suite, test_case_method, test_case_result) def _run_test_case( self, @@ -399,7 +605,7 @@ def _run_test_case( test_case_method: MethodType, test_case_result: TestCaseResult, ) -> None: - """Setup, execute and teardown a test case in `test_suite`. + """Setup, execute and teardown `test_case_method` from `test_suite`. Record the result of the setup and the teardown and handle failures. @@ -424,7 +630,7 @@ def _run_test_case( else: # run test case if setup was successful - self._execute_test_case(test_case_method, test_case_result) + self._execute_test_case(test_suite, test_case_method, test_case_result) finally: try: @@ -440,11 +646,15 @@ def _run_test_case( test_case_result.update(Result.ERROR) def _execute_test_case( - self, test_case_method: MethodType, test_case_result: TestCaseResult + self, + test_suite: TestSuite, + test_case_method: MethodType, + test_case_result: TestCaseResult, ) -> None: - """Execute one test case, record the result and handle failures. + """Execute `test_case_method` from `test_suite`, record the result and handle failures. Args: + test_suite: The test suite object. test_case_method: The test case method. test_case_result: The test case level result object associated with the current test case. @@ -452,7 +662,7 @@ def _execute_test_case( test_case_name = test_case_method.__name__ try: self._logger.info(f"Starting test case execution: {test_case_name}") - test_case_method() + test_case_method(test_suite) test_case_result.update(Result.PASS) self._logger.info(f"Test case execution PASSED: {test_case_name}") diff --git a/dts/framework/settings.py b/dts/framework/settings.py index 609c8d0e62..2b8bfbe0ed 100644 --- a/dts/framework/settings.py +++ b/dts/framework/settings.py @@ -253,8 +253,7 @@ def _get_parser() -> argparse.ArgumentParser: "--test-cases", action=_env_arg("DTS_TESTCASES"), default="", - help="[DTS_TESTCASES] Comma-separated list of test cases to execute. 
" - "Unknown test cases will be silently ignored.", + help="[DTS_TESTCASES] Comma-separated list of test cases to execute.", ) parser.add_argument( diff --git a/dts/framework/test_result.py b/dts/framework/test_result.py index 4467749a9d..075195fd5b 100644 --- a/dts/framework/test_result.py +++ b/dts/framework/test_result.py @@ -25,7 +25,9 @@ import os.path from collections.abc import MutableSequence +from dataclasses import dataclass from enum import Enum, auto +from types import MethodType from .config import ( OS, @@ -36,10 +38,42 @@ CPUType, NodeConfiguration, NodeInfo, + TestSuiteConfig, ) from .exception import DTSError, ErrorSeverity from .logger import DTSLOG from .settings import SETTINGS +from .test_suite import TestSuite + + +@dataclass(slots=True, frozen=True) +class TestSuiteWithCases: + """A test suite class with test case methods. + + An auxiliary class holding a test case class with test case methods. The intended use of this + class is to hold a subset of test cases (which could be all test cases) because we don't have + all the data to instantiate the class at the point of inspection. The knowledge of this subset + is needed in case an error occurs before the class is instantiated and we need to record + which test cases were blocked by the error. + + Attributes: + test_suite_class: The test suite class. + test_cases: The test case methods. + """ + + test_suite_class: type[TestSuite] + test_cases: list[MethodType] + + def create_config(self) -> TestSuiteConfig: + """Generate a :class:`TestSuiteConfig` from the stored test suite with test cases. + + Returns: + The :class:`TestSuiteConfig` representation. + """ + return TestSuiteConfig( + test_suite=self.test_suite_class.__name__, + test_cases=[test_case.__name__ for test_case in self.test_cases], + ) class Result(Enum): diff --git a/dts/framework/test_suite.py b/dts/framework/test_suite.py index b02fd36147..f9fe88093e 100644 --- a/dts/framework/test_suite.py +++ b/dts/framework/test_suite.py @@ -11,25 +11,17 @@ * Testbed (SUT, TG) configuration, * Packet sending and verification, * Test case verification. - -The module also defines a function, :func:`get_test_suites`, -for gathering test suites from a Python module. """ -import importlib -import inspect -import re from ipaddress import IPv4Interface, IPv6Interface, ip_interface -from types import MethodType -from typing import Any, ClassVar, Union +from typing import ClassVar, Union from scapy.layers.inet import IP # type: ignore[import] from scapy.layers.l2 import Ether # type: ignore[import] from scapy.packet import Packet, Padding # type: ignore[import] -from .exception import ConfigurationError, TestCaseVerifyError +from .exception import TestCaseVerifyError from .logger import DTSLOG, getLogger -from .settings import SETTINGS from .testbed_model import Port, PortLink, SutNode, TGNode from .utils import get_packet_summaries @@ -37,7 +29,6 @@ class TestSuite(object): """The base class with building blocks needed by most test cases. - * Test case filtering and collection, * Test suite setup/cleanup methods to override, * Test case setup/cleanup methods to override, * Test case verification, @@ -71,7 +62,6 @@ class TestSuite(object): #: will block the execution of all subsequent test suites in the current build target. 
is_blocking: ClassVar[bool] = False _logger: DTSLOG - _test_cases_to_run: list[str] _port_links: list[PortLink] _sut_port_ingress: Port _sut_port_egress: Port @@ -86,24 +76,19 @@ def __init__( self, sut_node: SutNode, tg_node: TGNode, - test_cases: list[str], ): """Initialize the test suite testbed information and basic configuration. - Process what test cases to run, find links between ports and set up - default IP addresses to be used when configuring them. + Find links between ports and set up default IP addresses to be used when + configuring them. Args: sut_node: The SUT node where the test suite will run. tg_node: The TG node where the test suite will run. - test_cases: The list of test cases to execute. - If empty, all test cases will be executed. """ self.sut_node = sut_node self.tg_node = tg_node self._logger = getLogger(self.__class__.__name__) - self._test_cases_to_run = test_cases - self._test_cases_to_run.extend(SETTINGS.test_cases) self._port_links = [] self._process_links() self._sut_port_ingress, self._tg_port_egress = ( @@ -364,65 +349,3 @@ def _verify_l3_packet(self, received_packet: IP, expected_packet: IP) -> bool: if received_packet.src != expected_packet.src or received_packet.dst != expected_packet.dst: return False return True - - def _get_functional_test_cases(self) -> list[MethodType]: - """Get all functional test cases defined in this TestSuite. - - Returns: - The list of functional test cases of this TestSuite. - """ - return self._get_test_cases(r"test_(?!perf_)") - - def _get_test_cases(self, test_case_regex: str) -> list[MethodType]: - """Return a list of test cases matching test_case_regex. - - Returns: - The list of test cases matching test_case_regex of this TestSuite. - """ - self._logger.debug(f"Searching for test cases in {self.__class__.__name__}.") - filtered_test_cases = [] - for test_case_name, test_case in inspect.getmembers(self, inspect.ismethod): - if self._should_be_executed(test_case_name, test_case_regex): - filtered_test_cases.append(test_case) - cases_str = ", ".join((x.__name__ for x in filtered_test_cases)) - self._logger.debug(f"Found test cases '{cases_str}' in {self.__class__.__name__}.") - return filtered_test_cases - - def _should_be_executed(self, test_case_name: str, test_case_regex: str) -> bool: - """Check whether the test case should be scheduled to be executed.""" - match = bool(re.match(test_case_regex, test_case_name)) - if self._test_cases_to_run: - return match and test_case_name in self._test_cases_to_run - - return match - - -def get_test_suites(testsuite_module_path: str) -> list[type[TestSuite]]: - r"""Find all :class:`TestSuite`\s in a Python module. - - Args: - testsuite_module_path: The path to the Python module. - - Returns: - The list of :class:`TestSuite`\s found within the Python module. - - Raises: - ConfigurationError: The test suite module was not found. 
- """ - - def is_test_suite(object: Any) -> bool: - try: - if issubclass(object, TestSuite) and object is not TestSuite: - return True - except TypeError: - return False - return False - - try: - testcase_module = importlib.import_module(testsuite_module_path) - except ModuleNotFoundError as e: - raise ConfigurationError(f"Test suite '{testsuite_module_path}' not found.") from e - return [ - test_suite_class - for _, test_suite_class in inspect.getmembers(testcase_module, is_test_suite) - ] diff --git a/dts/pyproject.toml b/dts/pyproject.toml index 28bd970ae4..8eb92b4f11 100644 --- a/dts/pyproject.toml +++ b/dts/pyproject.toml @@ -51,6 +51,9 @@ linters = "mccabe,pycodestyle,pydocstyle,pyflakes" format = "pylint" max_line_length = 100 +[tool.pylama.linter.pycodestyle] +ignore = "E203,W503" + [tool.pylama.linter.pydocstyle] convention = "google" diff --git a/dts/tests/TestSuite_smoke_tests.py b/dts/tests/TestSuite_smoke_tests.py index 5e2bac14bd..7b2a0e97f8 100644 --- a/dts/tests/TestSuite_smoke_tests.py +++ b/dts/tests/TestSuite_smoke_tests.py @@ -21,7 +21,7 @@ from framework.utils import REGEX_FOR_PCI_ADDRESS -class SmokeTests(TestSuite): +class TestSmokeTests(TestSuite): """DPDK and infrastructure smoke test suite. The test cases validate the most basic DPDK functionality needed for all other test suites. From patchwork Fri Feb 23 07:54:59 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Juraj_Linke=C5=A1?= X-Patchwork-Id: 137088 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 9AE7F43B9B; Fri, 23 Feb 2024 08:55:39 +0100 (CET) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 59C37427D7; Fri, 23 Feb 2024 08:55:13 +0100 (CET) Received: from mail-ej1-f49.google.com (mail-ej1-f49.google.com [209.85.218.49]) by mails.dpdk.org (Postfix) with ESMTP id 4D08C402ED for ; Fri, 23 Feb 2024 08:55:10 +0100 (CET) Received: by mail-ej1-f49.google.com with SMTP id a640c23a62f3a-a3e4765c86eso11962966b.0 for ; Thu, 22 Feb 2024 23:55:10 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=pantheon.tech; s=google; t=1708674910; x=1709279710; darn=dpdk.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=Xl86siJVlDGlX5Wkzr/K/eUz6vp6W3gCEYpA70LynMk=; b=EDRE8VlDHdaC+oyuAutZfQckWkAJqpJDRJuW9RwOl1Gg9NjZcPOUm6VMzdet1aexIy hXCldlIE4OUkOovWciv0xmFz5RWZtHdacXbDKDo+Mmg0htr4Z/DnNCFYT0OzuGAGBRy+ pqZ9H/NxDJPr5EoXRTKIABKhVvn/H9y0ThMKJmkT4pcFSimUcxDZ80SvqrBtuq/U6SKc whoCGpPYa9okPFCjQOFUpZE3vu476Xt7eNFOEfVPM0StpBTeW46OKGxrVaigi0Th5iCe pFcvIw7/puq1Xq4TSOB5FCeTMwlay4wif9FJOtZ6gHKHmE4c5KWT9rFIPR+gKkixVWbY Fd1g== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1708674910; x=1709279710; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=Xl86siJVlDGlX5Wkzr/K/eUz6vp6W3gCEYpA70LynMk=; b=NYjo7XhXMwcsyv6jMKuZIXDV8h9rTIdnGUjQzSZlx5AvQWvXE9v5SvD0o4ReEePC/V 3SHP8MbB3EAdgNiC53jE1LYswFTCYiu6EH3IKwonyRVZa3coGrqSJiRhqSj5tFSyi2ir QVasY8nM8GoCjIZpfhsFQTbZ1gS7Nw++IkUHq664FD8aoUAZpMz0sGXxcQkTh/+fbQ2S 
n0YBzx90LsalOG+OL9Ois1+tVewGb4U7uIjGhfLQFfJe1Zy31jQSfcUQpCqjOawWM4uC LTS80OAMwOFGqfZCbrTRzQpa7888qBKcZ/OhmkkKc9eTLsWO90ocJVUjt7tQ9iJJPaNQ Rg6w== X-Gm-Message-State: AOJu0Yz1bjUh8NkiXm9vnMfu2w20qmekqHr3SK03jC5oIKcIfz3dztk7 H94MzM050c7xHFI9VvQBpLTAJkB75qUVF1Pt3z+FpFTQT4GN4hik90XkjfXazGg= X-Google-Smtp-Source: AGHT+IGisrO0vvldoyz+vLzCnUgvmgqOrTptfjavxokFBxNn76x9Ql0V1xppM/yYc98JVS3cqucnfA== X-Received: by 2002:a17:907:1008:b0:a3e:c738:884b with SMTP id ox8-20020a170907100800b00a3ec738884bmr626373ejb.69.1708674909634; Thu, 22 Feb 2024 23:55:09 -0800 (PST) Received: from localhost.localdomain ([84.245.120.62]) by smtp.gmail.com with ESMTPSA id th7-20020a1709078e0700b00a3e059c5c5fsm6660235ejc.188.2024.02.22.23.55.08 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Thu, 22 Feb 2024 23:55:09 -0800 (PST) From: =?utf-8?q?Juraj_Linke=C5=A1?= To: thomas@monjalon.net, Honnappa.Nagarahalli@arm.com, jspewock@iol.unh.edu, probb@iol.unh.edu, paul.szczepanek@arm.com, Luca.Vizzarro@arm.com Cc: dev@dpdk.org, =?utf-8?q?Juraj_Linke=C5=A1?= Subject: [PATCH v3 4/7] dts: reorganize test result Date: Fri, 23 Feb 2024 08:54:59 +0100 Message-Id: <20240223075502.60485-5-juraj.linkes@pantheon.tech> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20240223075502.60485-1-juraj.linkes@pantheon.tech> References: <20231220103331.60888-1-juraj.linkes@pantheon.tech> <20240223075502.60485-1-juraj.linkes@pantheon.tech> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org The current order of Result classes in the test_suite.py module is guided by the needs of type hints, which is not as intuitively readable as ordering them by the occurrences in code. The order goes from the topmost level to lowermost: BaseResult DTSResult ExecutionResult BuildTargetResult TestSuiteResult TestCaseResult This is the same order as they're used in the runner module and they're also used in the same order between themselves in the test_result module. Signed-off-by: Juraj Linkeš --- dts/framework/test_result.py | 411 ++++++++++++++++++----------------- 1 file changed, 206 insertions(+), 205 deletions(-) diff --git a/dts/framework/test_result.py b/dts/framework/test_result.py index 075195fd5b..abdbafab10 100644 --- a/dts/framework/test_result.py +++ b/dts/framework/test_result.py @@ -28,6 +28,7 @@ from dataclasses import dataclass from enum import Enum, auto from types import MethodType +from typing import Union from .config import ( OS, @@ -129,58 +130,6 @@ def __bool__(self) -> bool: return bool(self.result) -class Statistics(dict): - """How many test cases ended in which result state along some other basic information. - - Subclassing :class:`dict` provides a convenient way to format the data. - - The data are stored in the following keys: - - * **PASS RATE** (:class:`int`) -- The FAIL/PASS ratio of all test cases. - * **DPDK VERSION** (:class:`str`) -- The tested DPDK version. - """ - - def __init__(self, dpdk_version: str | None): - """Extend the constructor with keys in which the data are stored. - - Args: - dpdk_version: The version of tested DPDK. - """ - super(Statistics, self).__init__() - for result in Result: - self[result.name] = 0 - self["PASS RATE"] = 0.0 - self["DPDK VERSION"] = dpdk_version - - def __iadd__(self, other: Result) -> "Statistics": - """Add a Result to the final count. 
- - Example: - stats: Statistics = Statistics() # empty Statistics - stats += Result.PASS # add a Result to `stats` - - Args: - other: The Result to add to this statistics object. - - Returns: - The modified statistics object. - """ - self[other.name] += 1 - self["PASS RATE"] = ( - float(self[Result.PASS.name]) * 100 / sum(self[result.name] for result in Result) - ) - return self - - def __str__(self) -> str: - """Each line contains the formatted key = value pair.""" - stats_str = "" - for key, value in self.items(): - stats_str += f"{key:<12} = {value}\n" - # according to docs, we should use \n when writing to text files - # on all platforms - return stats_str - - class BaseResult(object): """Common data and behavior of DTS results. @@ -245,7 +194,7 @@ def get_errors(self) -> list[Exception]: """ return self._get_setup_teardown_errors() + self._get_inner_errors() - def add_stats(self, statistics: Statistics) -> None: + def add_stats(self, statistics: "Statistics") -> None: """Collate stats from the whole result hierarchy. Args: @@ -255,91 +204,149 @@ def add_stats(self, statistics: Statistics) -> None: inner_result.add_stats(statistics) -class TestCaseResult(BaseResult, FixtureResult): - r"""The test case specific result. +class DTSResult(BaseResult): + """Stores environment information and test results from a DTS run. - Stores the result of the actual test case. This is done by adding an extra superclass - in :class:`FixtureResult`. The setup and teardown results are :class:`FixtureResult`\s and - the class is itself a record of the test case. + * Execution level information, such as testbed and the test suite list, + * Build target level information, such as compiler, target OS and cpu, + * Test suite and test case results, + * All errors that are caught and recorded during DTS execution. + + The information is stored hierarchically. This is the first level of the hierarchy + and as such is where the data form the whole hierarchy is collated or processed. + + The internal list stores the results of all executions. Attributes: - test_case_name: The test case name. + dpdk_version: The DPDK version to record. """ - test_case_name: str + dpdk_version: str | None + _logger: DTSLOG + _errors: list[Exception] + _return_code: ErrorSeverity + _stats_result: Union["Statistics", None] + _stats_filename: str - def __init__(self, test_case_name: str): - """Extend the constructor with `test_case_name`. + def __init__(self, logger: DTSLOG): + """Extend the constructor with top-level specifics. Args: - test_case_name: The test case's name. + logger: The logger instance the whole result will use. """ - super(TestCaseResult, self).__init__() - self.test_case_name = test_case_name + super(DTSResult, self).__init__() + self.dpdk_version = None + self._logger = logger + self._errors = [] + self._return_code = ErrorSeverity.NO_ERR + self._stats_result = None + self._stats_filename = os.path.join(SETTINGS.output_dir, "statistics.txt") - def update(self, result: Result, error: Exception | None = None) -> None: - """Update the test case result. + def add_execution(self, sut_node: NodeConfiguration) -> "ExecutionResult": + """Add and return the inner result (execution). - This updates the result of the test case itself and doesn't affect - the results of the setup and teardown steps in any way. + Args: + sut_node: The SUT node's test run configuration. + + Returns: + The execution's result. 
+ """ + execution_result = ExecutionResult(sut_node) + self._inner_results.append(execution_result) + return execution_result + + def add_error(self, error: Exception) -> None: + """Record an error that occurred outside any execution. Args: - result: The result of the test case. - error: The error that occurred in case of a failure. + error: The exception to record. """ - self.result = result - self.error = error + self._errors.append(error) - def _get_inner_errors(self) -> list[Exception]: - if self.error: - return [self.error] - return [] + def process(self) -> None: + """Process the data after a whole DTS run. - def add_stats(self, statistics: Statistics) -> None: - r"""Add the test case result to statistics. + The data is added to inner objects during runtime and this object is not updated + at that time. This requires us to process the inner data after it's all been gathered. - The base method goes through the hierarchy recursively and this method is here to stop - the recursion, as the :class:`TestCaseResult`\s are the leaves of the hierarchy tree. + The processing gathers all errors and the statistics of test case results. + """ + self._errors += self.get_errors() + if self._errors and self._logger: + self._logger.debug("Summary of errors:") + for error in self._errors: + self._logger.debug(repr(error)) - Args: - statistics: The :class:`Statistics` object where the stats will be added. + self._stats_result = Statistics(self.dpdk_version) + self.add_stats(self._stats_result) + with open(self._stats_filename, "w+") as stats_file: + stats_file.write(str(self._stats_result)) + + def get_return_code(self) -> int: + """Go through all stored Exceptions and return the final DTS error code. + + Returns: + The highest error code found. """ - statistics += self.result + for error in self._errors: + error_return_code = ErrorSeverity.GENERIC_ERR + if isinstance(error, DTSError): + error_return_code = error.severity - def __bool__(self) -> bool: - """The test case passed only if setup, teardown and the test case itself passed.""" - return bool(self.setup_result) and bool(self.teardown_result) and bool(self.result) + if error_return_code > self._return_code: + self._return_code = error_return_code + return int(self._return_code) -class TestSuiteResult(BaseResult): - """The test suite specific result. - The internal list stores the results of all test cases in a given test suite. +class ExecutionResult(BaseResult): + """The execution specific result. + + The internal list stores the results of all build targets in a given execution. Attributes: - suite_name: The test suite name. + sut_node: The SUT node used in the execution. + sut_os_name: The operating system of the SUT node. + sut_os_version: The operating system version of the SUT node. + sut_kernel_version: The operating system kernel version of the SUT node. """ - suite_name: str + sut_node: NodeConfiguration + sut_os_name: str + sut_os_version: str + sut_kernel_version: str - def __init__(self, suite_name: str): - """Extend the constructor with `suite_name`. + def __init__(self, sut_node: NodeConfiguration): + """Extend the constructor with the `sut_node`'s config. Args: - suite_name: The test suite's name. + sut_node: The SUT node's test run configuration used in the execution. """ - super(TestSuiteResult, self).__init__() - self.suite_name = suite_name + super(ExecutionResult, self).__init__() + self.sut_node = sut_node - def add_test_case(self, test_case_name: str) -> TestCaseResult: - """Add and return the inner result (test case). 
+ def add_build_target(self, build_target: BuildTargetConfiguration) -> "BuildTargetResult": + """Add and return the inner result (build target). + + Args: + build_target: The build target's test run configuration. Returns: - The test case's result. + The build target's result. """ - test_case_result = TestCaseResult(test_case_name) - self._inner_results.append(test_case_result) - return test_case_result + build_target_result = BuildTargetResult(build_target) + self._inner_results.append(build_target_result) + return build_target_result + + def add_sut_info(self, sut_info: NodeInfo) -> None: + """Add SUT information gathered at runtime. + + Args: + sut_info: The additional SUT node information. + """ + self.sut_os_name = sut_info.os_name + self.sut_os_version = sut_info.os_version + self.sut_kernel_version = sut_info.kernel_version class BuildTargetResult(BaseResult): @@ -386,7 +393,7 @@ def add_build_target_info(self, versions: BuildTargetInfo) -> None: self.compiler_version = versions.compiler_version self.dpdk_version = versions.dpdk_version - def add_test_suite(self, test_suite_name: str) -> TestSuiteResult: + def add_test_suite(self, test_suite_name: str) -> "TestSuiteResult": """Add and return the inner result (test suite). Returns: @@ -397,146 +404,140 @@ def add_test_suite(self, test_suite_name: str) -> TestSuiteResult: return test_suite_result -class ExecutionResult(BaseResult): - """The execution specific result. +class TestSuiteResult(BaseResult): + """The test suite specific result. - The internal list stores the results of all build targets in a given execution. + The internal list stores the results of all test cases in a given test suite. Attributes: - sut_node: The SUT node used in the execution. - sut_os_name: The operating system of the SUT node. - sut_os_version: The operating system version of the SUT node. - sut_kernel_version: The operating system kernel version of the SUT node. + suite_name: The test suite name. """ - sut_node: NodeConfiguration - sut_os_name: str - sut_os_version: str - sut_kernel_version: str + suite_name: str - def __init__(self, sut_node: NodeConfiguration): - """Extend the constructor with the `sut_node`'s config. + def __init__(self, suite_name: str): + """Extend the constructor with `suite_name`. Args: - sut_node: The SUT node's test run configuration used in the execution. + suite_name: The test suite's name. """ - super(ExecutionResult, self).__init__() - self.sut_node = sut_node - - def add_build_target(self, build_target: BuildTargetConfiguration) -> BuildTargetResult: - """Add and return the inner result (build target). + super(TestSuiteResult, self).__init__() + self.suite_name = suite_name - Args: - build_target: The build target's test run configuration. + def add_test_case(self, test_case_name: str) -> "TestCaseResult": + """Add and return the inner result (test case). Returns: - The build target's result. - """ - build_target_result = BuildTargetResult(build_target) - self._inner_results.append(build_target_result) - return build_target_result - - def add_sut_info(self, sut_info: NodeInfo) -> None: - """Add SUT information gathered at runtime. - - Args: - sut_info: The additional SUT node information. + The test case's result. 
""" - self.sut_os_name = sut_info.os_name - self.sut_os_version = sut_info.os_version - self.sut_kernel_version = sut_info.kernel_version + test_case_result = TestCaseResult(test_case_name) + self._inner_results.append(test_case_result) + return test_case_result -class DTSResult(BaseResult): - """Stores environment information and test results from a DTS run. - - * Execution level information, such as testbed and the test suite list, - * Build target level information, such as compiler, target OS and cpu, - * Test suite and test case results, - * All errors that are caught and recorded during DTS execution. - - The information is stored hierarchically. This is the first level of the hierarchy - and as such is where the data form the whole hierarchy is collated or processed. +class TestCaseResult(BaseResult, FixtureResult): + r"""The test case specific result. - The internal list stores the results of all executions. + Stores the result of the actual test case. This is done by adding an extra superclass + in :class:`FixtureResult`. The setup and teardown results are :class:`FixtureResult`\s and + the class is itself a record of the test case. Attributes: - dpdk_version: The DPDK version to record. + test_case_name: The test case name. """ - dpdk_version: str | None - _logger: DTSLOG - _errors: list[Exception] - _return_code: ErrorSeverity - _stats_result: Statistics | None - _stats_filename: str + test_case_name: str - def __init__(self, logger: DTSLOG): - """Extend the constructor with top-level specifics. + def __init__(self, test_case_name: str): + """Extend the constructor with `test_case_name`. Args: - logger: The logger instance the whole result will use. + test_case_name: The test case's name. """ - super(DTSResult, self).__init__() - self.dpdk_version = None - self._logger = logger - self._errors = [] - self._return_code = ErrorSeverity.NO_ERR - self._stats_result = None - self._stats_filename = os.path.join(SETTINGS.output_dir, "statistics.txt") + super(TestCaseResult, self).__init__() + self.test_case_name = test_case_name - def add_execution(self, sut_node: NodeConfiguration) -> ExecutionResult: - """Add and return the inner result (execution). + def update(self, result: Result, error: Exception | None = None) -> None: + """Update the test case result. - Args: - sut_node: The SUT node's test run configuration. + This updates the result of the test case itself and doesn't affect + the results of the setup and teardown steps in any way. - Returns: - The execution's result. + Args: + result: The result of the test case. + error: The error that occurred in case of a failure. """ - execution_result = ExecutionResult(sut_node) - self._inner_results.append(execution_result) - return execution_result + self.result = result + self.error = error - def add_error(self, error: Exception) -> None: - """Record an error that occurred outside any execution. + def _get_inner_errors(self) -> list[Exception]: + if self.error: + return [self.error] + return [] + + def add_stats(self, statistics: "Statistics") -> None: + r"""Add the test case result to statistics. + + The base method goes through the hierarchy recursively and this method is here to stop + the recursion, as the :class:`TestCaseResult`\s are the leaves of the hierarchy tree. Args: - error: The exception to record. + statistics: The :class:`Statistics` object where the stats will be added. """ - self._errors.append(error) + statistics += self.result - def process(self) -> None: - """Process the data after a whole DTS run. 
+ def __bool__(self) -> bool: + """The test case passed only if setup, teardown and the test case itself passed.""" + return bool(self.setup_result) and bool(self.teardown_result) and bool(self.result) - The data is added to inner objects during runtime and this object is not updated - at that time. This requires us to process the inner data after it's all been gathered. - The processing gathers all errors and the statistics of test case results. +class Statistics(dict): + """How many test cases ended in which result state along some other basic information. + + Subclassing :class:`dict` provides a convenient way to format the data. + + The data are stored in the following keys: + + * **PASS RATE** (:class:`int`) -- The FAIL/PASS ratio of all test cases. + * **DPDK VERSION** (:class:`str`) -- The tested DPDK version. + """ + + def __init__(self, dpdk_version: str | None): + """Extend the constructor with keys in which the data are stored. + + Args: + dpdk_version: The version of tested DPDK. """ - self._errors += self.get_errors() - if self._errors and self._logger: - self._logger.debug("Summary of errors:") - for error in self._errors: - self._logger.debug(repr(error)) + super(Statistics, self).__init__() + for result in Result: + self[result.name] = 0 + self["PASS RATE"] = 0.0 + self["DPDK VERSION"] = dpdk_version - self._stats_result = Statistics(self.dpdk_version) - self.add_stats(self._stats_result) - with open(self._stats_filename, "w+") as stats_file: - stats_file.write(str(self._stats_result)) + def __iadd__(self, other: Result) -> "Statistics": + """Add a Result to the final count. - def get_return_code(self) -> int: - """Go through all stored Exceptions and return the final DTS error code. + Example: + stats: Statistics = Statistics() # empty Statistics + stats += Result.PASS # add a Result to `stats` + + Args: + other: The Result to add to this statistics object. Returns: - The highest error code found. + The modified statistics object. 
""" - for error in self._errors: - error_return_code = ErrorSeverity.GENERIC_ERR - if isinstance(error, DTSError): - error_return_code = error.severity - - if error_return_code > self._return_code: - self._return_code = error_return_code + self[other.name] += 1 + self["PASS RATE"] = ( + float(self[Result.PASS.name]) * 100 / sum(self[result.name] for result in Result) + ) + return self - return int(self._return_code) + def __str__(self) -> str: + """Each line contains the formatted key = value pair.""" + stats_str = "" + for key, value in self.items(): + stats_str += f"{key:<12} = {value}\n" + # according to docs, we should use \n when writing to text files + # on all platforms + return stats_str From patchwork Fri Feb 23 07:55:00 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Juraj_Linke=C5=A1?= X-Patchwork-Id: 137089 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 32E0543B9D; Fri, 23 Feb 2024 08:55:47 +0100 (CET) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 84462427DD; Fri, 23 Feb 2024 08:55:14 +0100 (CET) Received: from mail-ej1-f52.google.com (mail-ej1-f52.google.com [209.85.218.52]) by mails.dpdk.org (Postfix) with ESMTP id 7B5EE40EE5 for ; Fri, 23 Feb 2024 08:55:11 +0100 (CET) Received: by mail-ej1-f52.google.com with SMTP id a640c23a62f3a-a3e7f7b3d95so56961666b.3 for ; Thu, 22 Feb 2024 23:55:11 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=pantheon.tech; s=google; t=1708674911; x=1709279711; darn=dpdk.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=dI+ZHLbHHy/sO5OXKGpzrSsMvXGhpMV1sYPyysMEqW8=; b=LdDUPx95SMQzdaKP4Szbn/Y7dnzcUax+fENbEBU6x50u7TQ2x2EQ43U0z2ewjX/3BU zUexY5FZiWiDVHvU/M4S6Cxnb6hz7tmgRJRmdyEUBneZE9Nymolb6D6jSrTUmBNtPYA3 MQ9Xd+SrDci8KoRQwKG5eAqtcuC2/OY3IagGm6PHvm4dpBXy+/22F5dPiYU1n+WKYgdK y7E8YQwylxK/z+PPX1tq7sXQHHjK0PPdpLcWof729YGMDQB1OFlPvt3NYn1jCS+Ep/um Ey92sym2zm3ir+PZUfuuMIiDvI4oKPrLVVGZInf/EmaidA0Z12whtvzlcAOe0WAMMa2Z T7aA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1708674911; x=1709279711; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=dI+ZHLbHHy/sO5OXKGpzrSsMvXGhpMV1sYPyysMEqW8=; b=EXnGiy0KxuNppXK0n7fZ5gb2uWTP1d1sZAW3PxB7WU8rTgpOTiFuOictCuwgzMhK0N Fc6GwPiM/n5ke04XwghqkTUNVpgPMi/arpYrb0iFCFPlD3LfQSYMNJx68xWGXzv/MH76 hzNj7yHusHk+3HqtHZfNypqm8YG+bPkjMQRx4xP2Mes69kMl4Cha4xkppo7Gy8eG75cK bcLXa1aMcU6RBpG+rtyqFHptBGyAQCGd81TH9JbnbhzdKj0Lwm2I2SDiiuOSSyyaGpR/ Ctxmn0Xs9RWXwXD/pgAGX1SpBHNiVBVcBFOMLl7UWWuK5lCGgzYRqSsSq+fU0NXb1DQG aLpA== X-Gm-Message-State: AOJu0Yz23Sehem65OiRuFYIwQjy62VnOaXprrvGNtOcau3MwOpQOUq2x AkbbP8XFSenaTum2ntgschsIKDJ8fQKeP4dSZB+zri1Pp7GWSWy+FeMln0JPh8Td2Yp7zMc/0QD D0AM= X-Google-Smtp-Source: AGHT+IE1D8rZzaxS/3NW+HNpZFxvEKtxnnKkVeq/PiN01DJ9pUVfNaAYOL4J9jEEIydg0j7Kvhzqug== X-Received: by 2002:a17:906:3413:b0:a3f:9971:bf73 with SMTP id c19-20020a170906341300b00a3f9971bf73mr78360ejb.55.1708674911064; Thu, 22 Feb 2024 23:55:11 -0800 (PST) Received: from localhost.localdomain ([84.245.120.62]) by smtp.gmail.com with ESMTPSA id 
th7-20020a1709078e0700b00a3e059c5c5fsm6660235ejc.188.2024.02.22.23.55.09 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Thu, 22 Feb 2024 23:55:10 -0800 (PST) From: =?utf-8?q?Juraj_Linke=C5=A1?= To: thomas@monjalon.net, Honnappa.Nagarahalli@arm.com, jspewock@iol.unh.edu, probb@iol.unh.edu, paul.szczepanek@arm.com, Luca.Vizzarro@arm.com Cc: dev@dpdk.org, =?utf-8?q?Juraj_Linke=C5=A1?= Subject: [PATCH v3 5/7] dts: block all test cases when earlier setup fails Date: Fri, 23 Feb 2024 08:55:00 +0100 Message-Id: <20240223075502.60485-6-juraj.linkes@pantheon.tech> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20240223075502.60485-1-juraj.linkes@pantheon.tech> References: <20231220103331.60888-1-juraj.linkes@pantheon.tech> <20240223075502.60485-1-juraj.linkes@pantheon.tech> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org In case of a failure before a test suite, the child results will be recursively recorded as blocked, giving us a full report which was missing previously. Signed-off-by: Juraj Linkeš --- dts/framework/runner.py | 21 ++-- dts/framework/test_result.py | 186 +++++++++++++++++++++++++---------- 2 files changed, 148 insertions(+), 59 deletions(-) diff --git a/dts/framework/runner.py b/dts/framework/runner.py index e8030365ac..511f8be064 100644 --- a/dts/framework/runner.py +++ b/dts/framework/runner.py @@ -60,13 +60,15 @@ class DTSRunner: Each setup or teardown of each stage is recorded in a :class:`~framework.test_result.DTSResult` or one of its subclasses. The test case results are also recorded. - If an error occurs, the current stage is aborted, the error is recorded and the run continues in - the next iteration of the same stage. The return code is the highest `severity` of all + If an error occurs, the current stage is aborted, the error is recorded, everything in + the inner stages is marked as blocked and the run continues in the next iteration + of the same stage. The return code is the highest `severity` of all :class:`~.framework.exception.DTSError`\s. Example: - An error occurs in a build target setup. The current build target is aborted and the run - continues with the next build target. If the errored build target was the last one in the + An error occurs in a build target setup. The current build target is aborted, + all test suites and their test cases are marked as blocked and the run continues + with the next build target. If the errored build target was the last one in the given execution, the next execution begins. """ @@ -100,6 +102,10 @@ def run(self): test case within the test suite is set up, executed and torn down. After all test cases have been executed, the test suite is torn down and the next build target will be tested. + In order to properly mark test suites and test cases as blocked in case of a failure, + we need to have discovered which test suites and test cases to run before any failures + happen. The discovery happens at the earliest point at the start of each execution. + All the nested steps look like this: #. Execution setup @@ -134,7 +140,7 @@ def run(self): self._logger.info( f"Running execution with SUT '{execution.system_under_test_node.name}'." 
) - execution_result = self._result.add_execution(execution.system_under_test_node) + execution_result = self._result.add_execution(execution) # we don't want to modify the original config, so create a copy execution_test_suites = list(execution.test_suites) if not execution.skip_smoke_tests: @@ -143,6 +149,7 @@ def run(self): test_suites_with_cases = self._get_test_suites_with_cases( execution_test_suites, execution.func, execution.perf ) + execution_result.test_suites_with_cases = test_suites_with_cases except Exception as e: self._logger.exception( f"Invalid test suite configuration found: " f"{execution_test_suites}." @@ -492,9 +499,7 @@ def _run_test_suites( """ end_build_target = False for test_suite_with_cases in test_suites_with_cases: - test_suite_result = build_target_result.add_test_suite( - test_suite_with_cases.test_suite_class.__name__ - ) + test_suite_result = build_target_result.add_test_suite(test_suite_with_cases) try: self._run_test_suite(sut_node, tg_node, test_suite_result, test_suite_with_cases) except BlockingTestSuiteError as e: diff --git a/dts/framework/test_result.py b/dts/framework/test_result.py index abdbafab10..eedb2d20ee 100644 --- a/dts/framework/test_result.py +++ b/dts/framework/test_result.py @@ -37,7 +37,7 @@ BuildTargetInfo, Compiler, CPUType, - NodeConfiguration, + ExecutionConfiguration, NodeInfo, TestSuiteConfig, ) @@ -88,6 +88,8 @@ class Result(Enum): ERROR = auto() #: SKIP = auto() + #: + BLOCK = auto() def __bool__(self) -> bool: """Only PASS is True.""" @@ -141,21 +143,26 @@ class BaseResult(object): Attributes: setup_result: The result of the setup of the particular stage. teardown_result: The results of the teardown of the particular stage. + child_results: The results of the descendants in the results hierarchy. """ setup_result: FixtureResult teardown_result: FixtureResult - _inner_results: MutableSequence["BaseResult"] + child_results: MutableSequence["BaseResult"] def __init__(self): """Initialize the constructor.""" self.setup_result = FixtureResult() self.teardown_result = FixtureResult() - self._inner_results = [] + self.child_results = [] def update_setup(self, result: Result, error: Exception | None = None) -> None: """Store the setup result. + If the result is :attr:`~Result.BLOCK`, :attr:`~Result.ERROR` or :attr:`~Result.FAIL`, + then the corresponding child results in result hierarchy + are also marked with :attr:`~Result.BLOCK`. + Args: result: The result of the setup. error: The error that occurred in case of a failure. @@ -163,6 +170,16 @@ def update_setup(self, result: Result, error: Exception | None = None) -> None: self.setup_result.result = result self.setup_result.error = error + if result in [Result.BLOCK, Result.ERROR, Result.FAIL]: + self.update_teardown(Result.BLOCK) + self._block_result() + + def _block_result(self) -> None: + r"""Mark the result as :attr:`~Result.BLOCK`\ed. + + The blocking of child results should be done in overloaded methods. + """ + def update_teardown(self, result: Result, error: Exception | None = None) -> None: """Store the teardown result. 
@@ -181,10 +198,8 @@ def _get_setup_teardown_errors(self) -> list[Exception]: errors.append(self.teardown_result.error) return errors - def _get_inner_errors(self) -> list[Exception]: - return [ - error for inner_result in self._inner_results for error in inner_result.get_errors() - ] + def _get_child_errors(self) -> list[Exception]: + return [error for child_result in self.child_results for error in child_result.get_errors()] def get_errors(self) -> list[Exception]: """Compile errors from the whole result hierarchy. @@ -192,7 +207,7 @@ def get_errors(self) -> list[Exception]: Returns: The errors from setup, teardown and all errors found in the whole result hierarchy. """ - return self._get_setup_teardown_errors() + self._get_inner_errors() + return self._get_setup_teardown_errors() + self._get_child_errors() def add_stats(self, statistics: "Statistics") -> None: """Collate stats from the whole result hierarchy. @@ -200,8 +215,8 @@ def add_stats(self, statistics: "Statistics") -> None: Args: statistics: The :class:`Statistics` object where the stats will be collated. """ - for inner_result in self._inner_results: - inner_result.add_stats(statistics) + for child_result in self.child_results: + child_result.add_stats(statistics) class DTSResult(BaseResult): @@ -242,18 +257,18 @@ def __init__(self, logger: DTSLOG): self._stats_result = None self._stats_filename = os.path.join(SETTINGS.output_dir, "statistics.txt") - def add_execution(self, sut_node: NodeConfiguration) -> "ExecutionResult": - """Add and return the inner result (execution). + def add_execution(self, execution: ExecutionConfiguration) -> "ExecutionResult": + """Add and return the child result (execution). Args: - sut_node: The SUT node's test run configuration. + execution: The execution's test run configuration. Returns: The execution's result. """ - execution_result = ExecutionResult(sut_node) - self._inner_results.append(execution_result) - return execution_result + result = ExecutionResult(execution) + self.child_results.append(result) + return result def add_error(self, error: Exception) -> None: """Record an error that occurred outside any execution. @@ -266,8 +281,8 @@ def add_error(self, error: Exception) -> None: def process(self) -> None: """Process the data after a whole DTS run. - The data is added to inner objects during runtime and this object is not updated - at that time. This requires us to process the inner data after it's all been gathered. + The data is added to child objects during runtime and this object is not updated + at that time. This requires us to process the child data after it's all been gathered. The processing gathers all errors and the statistics of test case results. """ @@ -305,28 +320,30 @@ class ExecutionResult(BaseResult): The internal list stores the results of all build targets in a given execution. Attributes: - sut_node: The SUT node used in the execution. sut_os_name: The operating system of the SUT node. sut_os_version: The operating system version of the SUT node. sut_kernel_version: The operating system kernel version of the SUT node. """ - sut_node: NodeConfiguration sut_os_name: str sut_os_version: str sut_kernel_version: str + _config: ExecutionConfiguration + _parent_result: DTSResult + _test_suites_with_cases: list[TestSuiteWithCases] - def __init__(self, sut_node: NodeConfiguration): - """Extend the constructor with the `sut_node`'s config. + def __init__(self, execution: ExecutionConfiguration): + """Extend the constructor with the execution's config and DTSResult. 
Args: - sut_node: The SUT node's test run configuration used in the execution. + execution: The execution's test run configuration. """ super(ExecutionResult, self).__init__() - self.sut_node = sut_node + self._config = execution + self._test_suites_with_cases = [] def add_build_target(self, build_target: BuildTargetConfiguration) -> "BuildTargetResult": - """Add and return the inner result (build target). + """Add and return the child result (build target). Args: build_target: The build target's test run configuration. @@ -334,9 +351,34 @@ def add_build_target(self, build_target: BuildTargetConfiguration) -> "BuildTarg Returns: The build target's result. """ - build_target_result = BuildTargetResult(build_target) - self._inner_results.append(build_target_result) - return build_target_result + result = BuildTargetResult( + self._test_suites_with_cases, + build_target, + ) + self.child_results.append(result) + return result + + @property + def test_suites_with_cases(self) -> list[TestSuiteWithCases]: + """The test suites with test cases to be executed in this execution. + + The test suites can only be assigned once. + + Returns: + The list of test suites with test cases. If an error occurs between + the initialization of :class:`ExecutionResult` and assigning test cases to the instance, + return an empty list, representing that we don't know what to execute. + """ + return self._test_suites_with_cases + + @test_suites_with_cases.setter + def test_suites_with_cases(self, test_suites_with_cases: list[TestSuiteWithCases]) -> None: + if self._test_suites_with_cases: + raise ValueError( + "Attempted to assign test suites to an execution result " + "which already has test suites." + ) + self._test_suites_with_cases = test_suites_with_cases def add_sut_info(self, sut_info: NodeInfo) -> None: """Add SUT information gathered at runtime. @@ -348,6 +390,12 @@ def add_sut_info(self, sut_info: NodeInfo) -> None: self.sut_os_version = sut_info.os_version self.sut_kernel_version = sut_info.kernel_version + def _block_result(self) -> None: + r"""Mark the result as :attr:`~Result.BLOCK`\ed.""" + for build_target in self._config.build_targets: + child_result = self.add_build_target(build_target) + child_result.update_setup(Result.BLOCK) + class BuildTargetResult(BaseResult): """The build target specific result. @@ -369,11 +417,17 @@ class BuildTargetResult(BaseResult): compiler: Compiler compiler_version: str | None dpdk_version: str | None + _test_suites_with_cases: list[TestSuiteWithCases] - def __init__(self, build_target: BuildTargetConfiguration): - """Extend the constructor with the `build_target`'s build target config. + def __init__( + self, + test_suites_with_cases: list[TestSuiteWithCases], + build_target: BuildTargetConfiguration, + ): + """Extend the constructor with the build target's config and ExecutionResult. Args: + test_suites_with_cases: The test suites with test cases to be run in this build target. build_target: The build target's test run configuration. """ super(BuildTargetResult, self).__init__() @@ -383,6 +437,23 @@ def __init__(self, build_target: BuildTargetConfiguration): self.compiler = build_target.compiler self.compiler_version = None self.dpdk_version = None + self._test_suites_with_cases = test_suites_with_cases + + def add_test_suite( + self, + test_suite_with_cases: TestSuiteWithCases, + ) -> "TestSuiteResult": + """Add and return the child result (test suite). + + Args: + test_suite_with_cases: The test suite with test cases. + + Returns: + The test suite's result. 
+ """ + result = TestSuiteResult(test_suite_with_cases) + self.child_results.append(result) + return result def add_build_target_info(self, versions: BuildTargetInfo) -> None: """Add information about the build target gathered at runtime. @@ -393,15 +464,11 @@ def add_build_target_info(self, versions: BuildTargetInfo) -> None: self.compiler_version = versions.compiler_version self.dpdk_version = versions.dpdk_version - def add_test_suite(self, test_suite_name: str) -> "TestSuiteResult": - """Add and return the inner result (test suite). - - Returns: - The test suite's result. - """ - test_suite_result = TestSuiteResult(test_suite_name) - self._inner_results.append(test_suite_result) - return test_suite_result + def _block_result(self) -> None: + r"""Mark the result as :attr:`~Result.BLOCK`\ed.""" + for test_suite_with_cases in self._test_suites_with_cases: + child_result = self.add_test_suite(test_suite_with_cases) + child_result.update_setup(Result.BLOCK) class TestSuiteResult(BaseResult): @@ -410,29 +477,42 @@ class TestSuiteResult(BaseResult): The internal list stores the results of all test cases in a given test suite. Attributes: - suite_name: The test suite name. + test_suite_name: The test suite name. """ - suite_name: str + test_suite_name: str + _test_suite_with_cases: TestSuiteWithCases + _parent_result: BuildTargetResult + _child_configs: list[str] - def __init__(self, suite_name: str): - """Extend the constructor with `suite_name`. + def __init__(self, test_suite_with_cases: TestSuiteWithCases): + """Extend the constructor with test suite's config and BuildTargetResult. Args: - suite_name: The test suite's name. + test_suite_with_cases: The test suite with test cases. """ super(TestSuiteResult, self).__init__() - self.suite_name = suite_name + self.test_suite_name = test_suite_with_cases.test_suite_class.__name__ + self._test_suite_with_cases = test_suite_with_cases def add_test_case(self, test_case_name: str) -> "TestCaseResult": - """Add and return the inner result (test case). + """Add and return the child result (test case). + + Args: + test_case_name: The name of the test case. Returns: The test case's result. """ - test_case_result = TestCaseResult(test_case_name) - self._inner_results.append(test_case_result) - return test_case_result + result = TestCaseResult(test_case_name) + self.child_results.append(result) + return result + + def _block_result(self) -> None: + r"""Mark the result as :attr:`~Result.BLOCK`\ed.""" + for test_case_method in self._test_suite_with_cases.test_cases: + child_result = self.add_test_case(test_case_method.__name__) + child_result.update_setup(Result.BLOCK) class TestCaseResult(BaseResult, FixtureResult): @@ -449,7 +529,7 @@ class TestCaseResult(BaseResult, FixtureResult): test_case_name: str def __init__(self, test_case_name: str): - """Extend the constructor with `test_case_name`. + """Extend the constructor with test case's name and TestSuiteResult. Args: test_case_name: The test case's name. 
@@ -470,7 +550,7 @@ def update(self, result: Result, error: Exception | None = None) -> None: self.result = result self.error = error - def _get_inner_errors(self) -> list[Exception]: + def _get_child_errors(self) -> list[Exception]: if self.error: return [self.error] return [] @@ -486,6 +566,10 @@ def add_stats(self, statistics: "Statistics") -> None: """ statistics += self.result + def _block_result(self) -> None: + r"""Mark the result as :attr:`~Result.BLOCK`\ed.""" + self.update(Result.BLOCK) + def __bool__(self) -> bool: """The test case passed only if setup, teardown and the test case itself passed.""" return bool(self.setup_result) and bool(self.teardown_result) and bool(self.result) From patchwork Fri Feb 23 07:55:01 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Juraj_Linke=C5=A1?= X-Patchwork-Id: 137090 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 4623843B90; Fri, 23 Feb 2024 08:55:54 +0100 (CET) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id AA79B4114B; Fri, 23 Feb 2024 08:55:15 +0100 (CET) Received: from mail-ej1-f41.google.com (mail-ej1-f41.google.com [209.85.218.41]) by mails.dpdk.org (Postfix) with ESMTP id DBB4E41143 for ; Fri, 23 Feb 2024 08:55:12 +0100 (CET) Received: by mail-ej1-f41.google.com with SMTP id a640c23a62f3a-a3e72ec566aso65321666b.2 for ; Thu, 22 Feb 2024 23:55:12 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=pantheon.tech; s=google; t=1708674912; x=1709279712; darn=dpdk.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=xYO9SkfVXps/O6GJ4ZH48V5K8mEL6IBzmtVCuWrlI88=; b=HvZfHBz3nL4TaFd/IhTCgXJKvt/X/wYXShxCPcXEmVcH93Wt5pq/7g8W8NlX00BgWX 9Nal1SGuaDbFJ9y38FGZgETpOtJwGsDIV33XYUflQHeqlvEQWt3emNFlXe7ObgNkTJTA qUo8HgK2/ZlkPp7GJ5pUGxxJrgJJxDg1d8/mThCpSGcH3r6xyTskxW4Ci1wde+Yvtbp3 YlWJZX82pXR17pYn5pjb3y06/sgiGmxTG11DDDrIo63xycUKfYHg4p49VE0EARqxxpVR HpkvcK1VS/zCCy+9IUqovQVxAbp8FXkXXpP8+szYjQx0aR4vlK6WMJnbJkm4Rav4n4TL RbWg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1708674912; x=1709279712; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=xYO9SkfVXps/O6GJ4ZH48V5K8mEL6IBzmtVCuWrlI88=; b=aawgM2TEGJxMAY98wtogI5uH1xGL1BakRWQGk49OF0Ueb5B+z/umnHESX9t5BT9Ztu oyrkJ/A0g3REGtcrPd/BoEsJLr7lcCNVXb3PdYROTg9xD9yppFWuckeptZ34DSjn5o1A CfeueNMhtdISYZknDamhXJLQ31XRVV+3tWfvabuw5uOdDKoLD4TWi+Hga40G+kZStZ/Q n7w82LrXDrvg2xbJAtazFViIwGFzK0+lAXHo9AjsdurRNgu6uFy6dHRVpUkVNKq05Rdq 5HHd3yB4AIS6vMusSS6Nv9EEmnAzg1yNwkAI0u+676tWS9ZP0+hPeeUoXAgwye5hbCGN 8EcA== X-Gm-Message-State: AOJu0YyF/sTrOa7Uujntebs880WIqHelGT0SLUYFkuR3tta/Wy3VgilW 5FlODakLTdkDpMVs3JC4RlHSJmwO49Jq4Ag0ajmecCNTnaPwM0T4gdzb9FHaoUs= X-Google-Smtp-Source: AGHT+IEdmoHwINvE/0B62XNnYVfI6d4YYQTzP1QYBZdyD3dFIWGXDuC7cN/6Q6b4h8bQuCgFnEfG2g== X-Received: by 2002:a17:906:f253:b0:a3f:86f0:cf6a with SMTP id gy19-20020a170906f25300b00a3f86f0cf6amr700224ejb.16.1708674912390; Thu, 22 Feb 2024 23:55:12 -0800 (PST) Received: from localhost.localdomain ([84.245.120.62]) by smtp.gmail.com with ESMTPSA id 
th7-20020a1709078e0700b00a3e059c5c5fsm6660235ejc.188.2024.02.22.23.55.11 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Thu, 22 Feb 2024 23:55:11 -0800 (PST) From: =?utf-8?q?Juraj_Linke=C5=A1?= To: thomas@monjalon.net, Honnappa.Nagarahalli@arm.com, jspewock@iol.unh.edu, probb@iol.unh.edu, paul.szczepanek@arm.com, Luca.Vizzarro@arm.com Cc: dev@dpdk.org, =?utf-8?q?Juraj_Linke=C5=A1?= Subject: [PATCH v3 6/7] dts: refactor logging configuration Date: Fri, 23 Feb 2024 08:55:01 +0100 Message-Id: <20240223075502.60485-7-juraj.linkes@pantheon.tech> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20240223075502.60485-1-juraj.linkes@pantheon.tech> References: <20231220103331.60888-1-juraj.linkes@pantheon.tech> <20240223075502.60485-1-juraj.linkes@pantheon.tech> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Remove unused parts of the code and add useful features: 1. Add DTS execution stages such as execution and test suite to better identify where in the DTS lifecycle we are when investigating logs, 2. Logging to separate files in specific stages, which is mainly useful for having test suite logs in additional separate files. 3. Remove the dependence on the settings module which enhances the usefulness of the logger module, as it can now be imported in more modules. The execution stages and the files to log to are the same for all DTS loggers. To achieve this, we have one DTS root logger which should be used for handling stage switching and all other loggers are children of this DTS root logger. The DTS root logger is the one where we change the behavior of all loggers (the stage and which files to log to) and the child loggers just log messages under a different name. Signed-off-by: Juraj Linkeš --- dts/framework/logger.py | 236 +++++++++++------- dts/framework/remote_session/__init__.py | 6 +- .../interactive_remote_session.py | 6 +- .../remote_session/interactive_shell.py | 6 +- .../remote_session/remote_session.py | 8 +- dts/framework/runner.py | 19 +- dts/framework/test_result.py | 6 +- dts/framework/test_suite.py | 6 +- dts/framework/testbed_model/node.py | 11 +- dts/framework/testbed_model/os_session.py | 7 +- .../traffic_generator/traffic_generator.py | 6 +- dts/main.py | 3 - 12 files changed, 184 insertions(+), 136 deletions(-) diff --git a/dts/framework/logger.py b/dts/framework/logger.py index cfa6e8cd72..db4a7698e0 100644 --- a/dts/framework/logger.py +++ b/dts/framework/logger.py @@ -5,141 +5,187 @@ """DTS logger module. -DTS framework and TestSuite logs are saved in different log files. +The module provides several additional features: + + * The storage of DTS execution stages, + * Logging to console, a human-readable log file and a machine-readable log file, + * Optional log files for specific stages. 
""" import logging -import os.path -from typing import TypedDict +from enum import auto +from logging import FileHandler, StreamHandler +from pathlib import Path +from typing import ClassVar -from .settings import SETTINGS +from .utils import StrEnum date_fmt = "%Y/%m/%d %H:%M:%S" -stream_fmt = "%(asctime)s - %(name)s - %(levelname)s - %(message)s" +stream_fmt = "%(asctime)s - %(stage)s - %(name)s - %(levelname)s - %(message)s" +dts_root_logger_name = "dts" + + +class DtsStage(StrEnum): + """The DTS execution stage.""" + #: + pre_execution = auto() + #: + execution = auto() + #: + build_target = auto() + #: + suite = auto() + #: + post_execution = auto() -class DTSLOG(logging.LoggerAdapter): - """DTS logger adapter class for framework and testsuites. - The :option:`--verbose` command line argument and the :envvar:`DTS_VERBOSE` environment - variable control the verbosity of output. If enabled, all messages will be emitted to the - console. +class DTSLogger(logging.Logger): + """The DTS logger class. - The :option:`--output` command line argument and the :envvar:`DTS_OUTPUT_DIR` environment - variable modify the directory where the logs will be stored. + The class extends the :class:`~logging.Logger` class to add the DTS execution stage information + to log records. The stage is common to all loggers, so it's stored in a class variable. - Attributes: - node: The additional identifier. Currently unused. - sh: The handler which emits logs to console. - fh: The handler which emits logs to a file. - verbose_fh: Just as fh, but logs with a different, more verbose, format. + Any time we switch to a new stage, we have the ability to log to an additional log file along + with a supplementary log file with machine-readable format. These two log files are used until + a new stage switch occurs. This is useful mainly for logging per test suite. """ - _logger: logging.Logger - node: str - sh: logging.StreamHandler - fh: logging.FileHandler - verbose_fh: logging.FileHandler + _stage: ClassVar[DtsStage] = DtsStage.pre_execution + _extra_file_handlers: list[FileHandler] = [] - def __init__(self, logger: logging.Logger, node: str = "suite"): - """Extend the constructor with additional handlers. + def __init__(self, *args, **kwargs): + """Extend the constructor with extra file handlers.""" + self._extra_file_handlers = [] + super().__init__(*args, **kwargs) - One handler logs to the console, the other one to a file, with either a regular or verbose - format. + def makeRecord(self, *args, **kwargs) -> logging.LogRecord: + """Generates a record with additional stage information. - Args: - logger: The logger from which to create the logger adapter. - node: An additional identifier. Currently unused. + This is the default method for the :class:`~logging.Logger` class. We extend it + to add stage information to the record. + + :meta private: + + Returns: + record: The generated record with the stage information. """ - self._logger = logger - # 1 means log everything, this will be used by file handlers if their level - # is not set - self._logger.setLevel(1) + record = super().makeRecord(*args, **kwargs) + record.stage = DTSLogger._stage # type: ignore[attr-defined] + return record + + def add_dts_root_logger_handlers(self, verbose: bool, output_dir: str) -> None: + """Add logger handlers to the DTS root logger. + + This method should be called only on the DTS root logger. + The log records from child loggers will propagate to these handlers. 
+ + Three handlers are added: - self.node = node + * A console handler, + * A file handler, + * A supplementary file handler with machine-readable logs + containing more debug information. - # add handler to emit to stdout - sh = logging.StreamHandler() + All log messages will be logged to files. The log level of the console handler + is configurable with `verbose`. + + Args: + verbose: If :data:`True`, log all messages to the console. + If :data:`False`, log to console with the :data:`logging.INFO` level. + output_dir: The directory where the log files will be located. + The names of the log files correspond to the name of the logger instance. + """ + self.setLevel(1) + + sh = StreamHandler() sh.setFormatter(logging.Formatter(stream_fmt, date_fmt)) - sh.setLevel(logging.INFO) # console handler default level + if not verbose: + sh.setLevel(logging.INFO) + self.addHandler(sh) - if SETTINGS.verbose is True: - sh.setLevel(logging.DEBUG) + self._add_file_handlers(Path(output_dir, self.name)) - self._logger.addHandler(sh) - self.sh = sh + def set_stage(self, stage: DtsStage, log_file_path: Path | None = None) -> None: + """Set the DTS execution stage and optionally log to files. - # prepare the output folder - if not os.path.exists(SETTINGS.output_dir): - os.mkdir(SETTINGS.output_dir) + Set the DTS execution stage of the DTSLog class and optionally add + file handlers to the instance if the log file name is provided. - logging_path_prefix = os.path.join(SETTINGS.output_dir, node) + The file handlers log all messages. One is a regular human-readable log file and + the other one is a machine-readable log file with extra debug information. - fh = logging.FileHandler(f"{logging_path_prefix}.log") - fh.setFormatter( - logging.Formatter( - fmt="%(asctime)s - %(name)s - %(levelname)s - %(message)s", - datefmt=date_fmt, - ) - ) + Args: + stage: The DTS stage to set. + log_file_path: An optional path of the log file to use. This should be a full path + (either relative or absolute) without suffix (which will be appended). + """ + self._remove_extra_file_handlers() + + if DTSLogger._stage != stage: + self.info(f"Moving from stage '{DTSLogger._stage}' to stage '{stage}'.") + DTSLogger._stage = stage + + if log_file_path: + self._extra_file_handlers.extend(self._add_file_handlers(log_file_path)) + + def _add_file_handlers(self, log_file_path: Path) -> list[FileHandler]: + """Add file handlers to the DTS root logger. + + Add two type of file handlers: + + * A regular file handler with suffix ".log", + * A machine-readable file handler with suffix ".verbose.log". + This format provides extensive information for debugging and detailed analysis. + + Args: + log_file_path: The full path to the log file without suffix. + + Returns: + The newly created file handlers. - self._logger.addHandler(fh) - self.fh = fh + """ + fh = FileHandler(f"{log_file_path}.log") + fh.setFormatter(logging.Formatter(stream_fmt, date_fmt)) + self.addHandler(fh) - # This outputs EVERYTHING, intended for post-mortem debugging - # Also optimized for processing via AWK (awk -F '|' ...) 
- verbose_fh = logging.FileHandler(f"{logging_path_prefix}.verbose.log") + verbose_fh = FileHandler(f"{log_file_path}.verbose.log") verbose_fh.setFormatter( logging.Formatter( - fmt="%(asctime)s|%(name)s|%(levelname)s|%(pathname)s|%(lineno)d|" + "%(asctime)s|%(stage)s|%(name)s|%(levelname)s|%(pathname)s|%(lineno)d|" "%(funcName)s|%(process)d|%(thread)d|%(threadName)s|%(message)s", datefmt=date_fmt, ) ) + self.addHandler(verbose_fh) - self._logger.addHandler(verbose_fh) - self.verbose_fh = verbose_fh - - super(DTSLOG, self).__init__(self._logger, dict(node=self.node)) - - def logger_exit(self) -> None: - """Remove the stream handler and the logfile handler.""" - for handler in (self.sh, self.fh, self.verbose_fh): - handler.flush() - self._logger.removeHandler(handler) - - -class _LoggerDictType(TypedDict): - logger: DTSLOG - name: str - node: str - + return [fh, verbose_fh] -# List for saving all loggers in use -_Loggers: list[_LoggerDictType] = [] + def _remove_extra_file_handlers(self) -> None: + """Remove any extra file handlers that have been added to the logger.""" + if self._extra_file_handlers: + for extra_file_handler in self._extra_file_handlers: + self.removeHandler(extra_file_handler) + self._extra_file_handlers = [] -def getLogger(name: str, node: str = "suite") -> DTSLOG: - """Get DTS logger adapter identified by name and node. - An existing logger will be returned if one with the exact name and node already exists. - A new one will be created and stored otherwise. +def get_dts_logger(name: str = None) -> DTSLogger: + """Return a DTS logger instance identified by `name`. Args: - name: The name of the logger. - node: An additional identifier for the logger. + name: If :data:`None`, return the DTS root logger. + If specified, return a child of the DTS root logger. Returns: - A logger uniquely identified by both name and node. + The DTS root logger or a child logger identified by `name`. """ - global _Loggers - # return saved logger - logger: _LoggerDictType - for logger in _Loggers: - if logger["name"] == name and logger["node"] == node: - return logger["logger"] - - # return new logger - dts_logger: DTSLOG = DTSLOG(logging.getLogger(name), node) - _Loggers.append({"logger": dts_logger, "name": name, "node": node}) - return dts_logger + original_logger_class = logging.getLoggerClass() + logging.setLoggerClass(DTSLogger) + if name: + name = f"{dts_root_logger_name}.{name}" + else: + name = dts_root_logger_name + logger = logging.getLogger(name) + logging.setLoggerClass(original_logger_class) + return logger # type: ignore[return-value] diff --git a/dts/framework/remote_session/__init__.py b/dts/framework/remote_session/__init__.py index 51a01d6b5e..1910c81c3c 100644 --- a/dts/framework/remote_session/__init__.py +++ b/dts/framework/remote_session/__init__.py @@ -15,7 +15,7 @@ # pylama:ignore=W0611 from framework.config import NodeConfiguration -from framework.logger import DTSLOG +from framework.logger import DTSLogger from .interactive_remote_session import InteractiveRemoteSession from .interactive_shell import InteractiveShell @@ -26,7 +26,7 @@ def create_remote_session( - node_config: NodeConfiguration, name: str, logger: DTSLOG + node_config: NodeConfiguration, name: str, logger: DTSLogger ) -> RemoteSession: """Factory for non-interactive remote sessions. 
@@ -45,7 +45,7 @@ def create_remote_session( def create_interactive_session( - node_config: NodeConfiguration, logger: DTSLOG + node_config: NodeConfiguration, logger: DTSLogger ) -> InteractiveRemoteSession: """Factory for interactive remote sessions. diff --git a/dts/framework/remote_session/interactive_remote_session.py b/dts/framework/remote_session/interactive_remote_session.py index 1cc82e3377..c50790db79 100644 --- a/dts/framework/remote_session/interactive_remote_session.py +++ b/dts/framework/remote_session/interactive_remote_session.py @@ -16,7 +16,7 @@ from framework.config import NodeConfiguration from framework.exception import SSHConnectionError -from framework.logger import DTSLOG +from framework.logger import DTSLogger class InteractiveRemoteSession: @@ -50,11 +50,11 @@ class InteractiveRemoteSession: username: str password: str session: SSHClient - _logger: DTSLOG + _logger: DTSLogger _node_config: NodeConfiguration _transport: Transport | None - def __init__(self, node_config: NodeConfiguration, logger: DTSLOG) -> None: + def __init__(self, node_config: NodeConfiguration, logger: DTSLogger) -> None: """Connect to the node during initialization. Args: diff --git a/dts/framework/remote_session/interactive_shell.py b/dts/framework/remote_session/interactive_shell.py index b158f963b6..5cfe202e15 100644 --- a/dts/framework/remote_session/interactive_shell.py +++ b/dts/framework/remote_session/interactive_shell.py @@ -20,7 +20,7 @@ from paramiko import Channel, SSHClient, channel # type: ignore[import] -from framework.logger import DTSLOG +from framework.logger import DTSLogger from framework.settings import SETTINGS @@ -38,7 +38,7 @@ class InteractiveShell(ABC): _stdin: channel.ChannelStdinFile _stdout: channel.ChannelFile _ssh_channel: Channel - _logger: DTSLOG + _logger: DTSLogger _timeout: float _app_args: str @@ -61,7 +61,7 @@ class InteractiveShell(ABC): def __init__( self, interactive_session: SSHClient, - logger: DTSLOG, + logger: DTSLogger, get_privileged_command: Callable[[str], str] | None, app_args: str = "", timeout: float = SETTINGS.timeout, diff --git a/dts/framework/remote_session/remote_session.py b/dts/framework/remote_session/remote_session.py index 2059f9a981..a69dc99400 100644 --- a/dts/framework/remote_session/remote_session.py +++ b/dts/framework/remote_session/remote_session.py @@ -9,14 +9,13 @@ the structure of the result of a command execution. """ - import dataclasses from abc import ABC, abstractmethod from pathlib import PurePath from framework.config import NodeConfiguration from framework.exception import RemoteCommandExecutionError -from framework.logger import DTSLOG +from framework.logger import DTSLogger from framework.settings import SETTINGS @@ -75,14 +74,14 @@ class RemoteSession(ABC): username: str password: str history: list[CommandResult] - _logger: DTSLOG + _logger: DTSLogger _node_config: NodeConfiguration def __init__( self, node_config: NodeConfiguration, session_name: str, - logger: DTSLOG, + logger: DTSLogger, ): """Connect to the node during initialization. @@ -181,7 +180,6 @@ def close(self, force: bool = False) -> None: Args: force: Force the closure of the connection. This may not clean up all resources. 
""" - self._logger.logger_exit() self._close(force) @abstractmethod diff --git a/dts/framework/runner.py b/dts/framework/runner.py index 511f8be064..af927b11a9 100644 --- a/dts/framework/runner.py +++ b/dts/framework/runner.py @@ -19,9 +19,10 @@ import importlib import inspect -import logging +import os import re import sys +from pathlib import Path from types import MethodType from typing import Iterable @@ -38,7 +39,7 @@ SSHTimeoutError, TestCaseVerifyError, ) -from .logger import DTSLOG, getLogger +from .logger import DTSLogger, DtsStage, get_dts_logger from .settings import SETTINGS from .test_result import ( BuildTargetResult, @@ -73,7 +74,7 @@ class DTSRunner: """ _configuration: Configuration - _logger: DTSLOG + _logger: DTSLogger _result: DTSResult _test_suite_class_prefix: str _test_suite_module_prefix: str @@ -83,7 +84,10 @@ class DTSRunner: def __init__(self): """Initialize the instance with configuration, logger, result and string constants.""" self._configuration = load_config() - self._logger = getLogger("DTSRunner") + self._logger = get_dts_logger() + if not os.path.exists(SETTINGS.output_dir): + os.makedirs(SETTINGS.output_dir) + self._logger.add_dts_root_logger_handlers(SETTINGS.verbose, SETTINGS.output_dir) self._result = DTSResult(self._logger) self._test_suite_class_prefix = "Test" self._test_suite_module_prefix = "tests.TestSuite_" @@ -137,6 +141,7 @@ def run(self): # for all Execution sections for execution in self._configuration.executions: + self._logger.set_stage(DtsStage.execution) self._logger.info( f"Running execution with SUT '{execution.system_under_test_node.name}'." ) @@ -168,6 +173,7 @@ def run(self): finally: try: + self._logger.set_stage(DtsStage.post_execution) for node in (sut_nodes | tg_nodes).values(): node.close() self._result.update_teardown(Result.PASS) @@ -425,6 +431,7 @@ def _run_execution( finally: try: + self._logger.set_stage(DtsStage.execution) sut_node.tear_down_execution() execution_result.update_teardown(Result.PASS) except Exception as e: @@ -453,6 +460,7 @@ def _run_build_target( with the current build target. test_suites_with_cases: The test suites with test cases to run. """ + self._logger.set_stage(DtsStage.build_target) self._logger.info(f"Running build target '{build_target.name}'.") try: @@ -469,6 +477,7 @@ def _run_build_target( finally: try: + self._logger.set_stage(DtsStage.build_target) sut_node.tear_down_build_target() build_target_result.update_teardown(Result.PASS) except Exception as e: @@ -541,6 +550,7 @@ def _run_test_suite( BlockingTestSuiteError: If a blocking test suite fails. 
""" test_suite_name = test_suite_with_cases.test_suite_class.__name__ + self._logger.set_stage(DtsStage.suite, Path(SETTINGS.output_dir, test_suite_name)) test_suite = test_suite_with_cases.test_suite_class(sut_node, tg_node) try: self._logger.info(f"Starting test suite setup: {test_suite_name}") @@ -689,5 +699,4 @@ def _exit_dts(self) -> None: if self._logger: self._logger.info("DTS execution has ended.") - logging.shutdown() sys.exit(self._result.get_return_code()) diff --git a/dts/framework/test_result.py b/dts/framework/test_result.py index eedb2d20ee..28f84fd793 100644 --- a/dts/framework/test_result.py +++ b/dts/framework/test_result.py @@ -42,7 +42,7 @@ TestSuiteConfig, ) from .exception import DTSError, ErrorSeverity -from .logger import DTSLOG +from .logger import DTSLogger from .settings import SETTINGS from .test_suite import TestSuite @@ -237,13 +237,13 @@ class DTSResult(BaseResult): """ dpdk_version: str | None - _logger: DTSLOG + _logger: DTSLogger _errors: list[Exception] _return_code: ErrorSeverity _stats_result: Union["Statistics", None] _stats_filename: str - def __init__(self, logger: DTSLOG): + def __init__(self, logger: DTSLogger): """Extend the constructor with top-level specifics. Args: diff --git a/dts/framework/test_suite.py b/dts/framework/test_suite.py index f9fe88093e..365f80e21a 100644 --- a/dts/framework/test_suite.py +++ b/dts/framework/test_suite.py @@ -21,7 +21,7 @@ from scapy.packet import Packet, Padding # type: ignore[import] from .exception import TestCaseVerifyError -from .logger import DTSLOG, getLogger +from .logger import DTSLogger, get_dts_logger from .testbed_model import Port, PortLink, SutNode, TGNode from .utils import get_packet_summaries @@ -61,7 +61,7 @@ class TestSuite(object): #: Whether the test suite is blocking. A failure of a blocking test suite #: will block the execution of all subsequent test suites in the current build target. 
is_blocking: ClassVar[bool] = False - _logger: DTSLOG + _logger: DTSLogger _port_links: list[PortLink] _sut_port_ingress: Port _sut_port_egress: Port @@ -88,7 +88,7 @@ def __init__( """ self.sut_node = sut_node self.tg_node = tg_node - self._logger = getLogger(self.__class__.__name__) + self._logger = get_dts_logger(self.__class__.__name__) self._port_links = [] self._process_links() self._sut_port_ingress, self._tg_port_egress = ( diff --git a/dts/framework/testbed_model/node.py b/dts/framework/testbed_model/node.py index 1a55fadf78..74061f6262 100644 --- a/dts/framework/testbed_model/node.py +++ b/dts/framework/testbed_model/node.py @@ -23,7 +23,7 @@ NodeConfiguration, ) from framework.exception import ConfigurationError -from framework.logger import DTSLOG, getLogger +from framework.logger import DTSLogger, get_dts_logger from framework.settings import SETTINGS from .cpu import ( @@ -63,7 +63,7 @@ class Node(ABC): name: str lcores: list[LogicalCore] ports: list[Port] - _logger: DTSLOG + _logger: DTSLogger _other_sessions: list[OSSession] _execution_config: ExecutionConfiguration virtual_devices: list[VirtualDevice] @@ -82,7 +82,7 @@ def __init__(self, node_config: NodeConfiguration): """ self.config = node_config self.name = node_config.name - self._logger = getLogger(self.name) + self._logger = get_dts_logger(self.name) self.main_session = create_session(self.config, self.name, self._logger) self._logger.info(f"Connected to node: {self.name}") @@ -189,7 +189,7 @@ def create_session(self, name: str) -> OSSession: connection = create_session( self.config, session_name, - getLogger(session_name, node=self.name), + get_dts_logger(session_name), ) self._other_sessions.append(connection) return connection @@ -299,7 +299,6 @@ def close(self) -> None: self.main_session.close() for session in self._other_sessions: session.close() - self._logger.logger_exit() @staticmethod def skip_setup(func: Callable[..., Any]) -> Callable[..., Any]: @@ -314,7 +313,7 @@ def skip_setup(func: Callable[..., Any]) -> Callable[..., Any]: return func -def create_session(node_config: NodeConfiguration, name: str, logger: DTSLOG) -> OSSession: +def create_session(node_config: NodeConfiguration, name: str, logger: DTSLogger) -> OSSession: """Factory for OS-aware sessions. Args: diff --git a/dts/framework/testbed_model/os_session.py b/dts/framework/testbed_model/os_session.py index ac6bb5e112..6983aa4a77 100644 --- a/dts/framework/testbed_model/os_session.py +++ b/dts/framework/testbed_model/os_session.py @@ -21,7 +21,6 @@ the :attr:`~.node.Node.main_session` translates that to ``rm -rf`` if the node's OS is Linux and other commands for other OSs. It also translates the path to match the underlying OS. """ - from abc import ABC, abstractmethod from collections.abc import Iterable from ipaddress import IPv4Interface, IPv6Interface @@ -29,7 +28,7 @@ from typing import Type, TypeVar, Union from framework.config import Architecture, NodeConfiguration, NodeInfo -from framework.logger import DTSLOG +from framework.logger import DTSLogger from framework.remote_session import ( CommandResult, InteractiveRemoteSession, @@ -62,7 +61,7 @@ class OSSession(ABC): _config: NodeConfiguration name: str - _logger: DTSLOG + _logger: DTSLogger remote_session: RemoteSession interactive_session: InteractiveRemoteSession @@ -70,7 +69,7 @@ def __init__( self, node_config: NodeConfiguration, name: str, - logger: DTSLOG, + logger: DTSLogger, ): """Initialize the OS-aware session. 
diff --git a/dts/framework/testbed_model/traffic_generator/traffic_generator.py b/dts/framework/testbed_model/traffic_generator/traffic_generator.py index c49fbff488..d86d7fb532 100644 --- a/dts/framework/testbed_model/traffic_generator/traffic_generator.py +++ b/dts/framework/testbed_model/traffic_generator/traffic_generator.py @@ -13,7 +13,7 @@ from scapy.packet import Packet # type: ignore[import] from framework.config import TrafficGeneratorConfig -from framework.logger import DTSLOG, getLogger +from framework.logger import DTSLogger, get_dts_logger from framework.testbed_model.node import Node from framework.testbed_model.port import Port from framework.utils import get_packet_summaries @@ -28,7 +28,7 @@ class TrafficGenerator(ABC): _config: TrafficGeneratorConfig _tg_node: Node - _logger: DTSLOG + _logger: DTSLogger def __init__(self, tg_node: Node, config: TrafficGeneratorConfig): """Initialize the traffic generator. @@ -39,7 +39,7 @@ def __init__(self, tg_node: Node, config: TrafficGeneratorConfig): """ self._config = config self._tg_node = tg_node - self._logger = getLogger(f"{self._tg_node.name} {self._config.traffic_generator_type}") + self._logger = get_dts_logger(f"{self._tg_node.name} {self._config.traffic_generator_type}") def send_packet(self, packet: Packet, port: Port) -> None: """Send `packet` and block until it is fully sent. diff --git a/dts/main.py b/dts/main.py index 1ffe8ff81f..fa878cc16e 100755 --- a/dts/main.py +++ b/dts/main.py @@ -6,8 +6,6 @@ """The DTS executable.""" -import logging - from framework import settings @@ -30,5 +28,4 @@ def main() -> None: # Main program begins here if __name__ == "__main__": - logging.raiseExceptions = True main() From patchwork Fri Feb 23 07:55:02 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Juraj_Linke=C5=A1?= X-Patchwork-Id: 137091 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 36B9243B90; Fri, 23 Feb 2024 08:56:04 +0100 (CET) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id F3C9942D66; Fri, 23 Feb 2024 08:55:17 +0100 (CET) Received: from mail-ej1-f47.google.com (mail-ej1-f47.google.com [209.85.218.47]) by mails.dpdk.org (Postfix) with ESMTP id 19AB5427DB for ; Fri, 23 Feb 2024 08:55:14 +0100 (CET) Received: by mail-ej1-f47.google.com with SMTP id a640c23a62f3a-a3e7ce7dac9so62353566b.1 for ; Thu, 22 Feb 2024 23:55:14 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=pantheon.tech; s=google; t=1708674914; x=1709279714; darn=dpdk.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=4V96kwVvAVxOmMRPK5ekSKc5kDGiytDsrQB0QNl24rI=; b=Pgw1jnlAdC3qLtuiz8wH+/ItlbIPygOZgpGOi5Qn+UJZ8MHaLobeI09HuNVDe4doa2 5CZVrmjelJXg3iQ/KBvuIuwFLX2C4VGETk+uC9PUjtAO8hS1Juso0r/+STfsdKHT8g3G FWsqlEU1WcRY3SJucoFOIVo0XDoUATvgfL+CxhDRZ/nGTOvD0nxM3G/lbuK4Pzi0Dw0L kUm3BX57vDLothKCq5JZNdhRLwsJBs4Tqr97gqHwhYQDCxrW1ayrU4TYiJkDPLsx0ZBo Wrvd8b6lDiqJ6oEksmiAVaOn+1LTow8aTmdwU+hwVLWUGnez77oF9C2MpNGwDZb4pAYL Nq/Q== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1708674914; x=1709279714; h=content-transfer-encoding:mime-version:references:in-reply-to 
:message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=4V96kwVvAVxOmMRPK5ekSKc5kDGiytDsrQB0QNl24rI=; b=lQBANp3bGeLg7xXmZkYxJlfol0kd57o95usP5F0fB5u9H86pm/4IT+GHagNx97PaE6 rSHvvpLYQFGEogl0yfBMwzPwUfLKMfK3ZTKPflcVfnoPstcG0xwuQo7nlniDCj5m63dC KAhvhQrZm4mjzJu/WDIWPOu1ds4l310mx+oEuJNLbVgDBP6N9+pZ0y3mYCj650QzTvh9 9JvauqunGP8pZKCeDg0mGGrprnHP3P3BG81EU8dJHM5UXDyFvyBe0Q0OvIyynqmojPW1 4wi5Fk2SMiDSbfPtMmL69mt+jTXCrl+2AHbvF7ul5Wq8ELULY7tQMCgsd5LEP6wDAxlL R1nQ==
X-Gm-Message-State: AOJu0YxHmmGmfcLzumdee8qyR8557XR1iTuVWUFpLRIyrs8uDSOhIgof YSmRmM9a+1tTV2r8HdYlTvc1BkjmVdz3pR+/yO3faatHt35l+SHceKFVhXXTUew=
X-Google-Smtp-Source: AGHT+IELxriK9uRO7ww7hYmnmnDQtLXnU4Vz35XH5hX0yX0hrzOm8tcMY+pOcthdOF2Nd9Za3NXGLg==
X-Received: by 2002:a17:906:b088:b0:a3e:7f54:12c1 with SMTP id x8-20020a170906b08800b00a3e7f5412c1mr768950ejy.72.1708674913528; Thu, 22 Feb 2024 23:55:13 -0800 (PST)
Received: from localhost.localdomain ([84.245.120.62]) by smtp.gmail.com with ESMTPSA id th7-20020a1709078e0700b00a3e059c5c5fsm6660235ejc.188.2024.02.22.23.55.12 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Thu, 22 Feb 2024 23:55:13 -0800 (PST)
From: =?utf-8?q?Juraj_Linke=C5=A1?=
To: thomas@monjalon.net, Honnappa.Nagarahalli@arm.com, jspewock@iol.unh.edu, probb@iol.unh.edu, paul.szczepanek@arm.com, Luca.Vizzarro@arm.com
Cc: dev@dpdk.org, =?utf-8?q?Juraj_Linke=C5=A1?=
Subject: [PATCH v3 7/7] dts: improve test suite and case filtering
Date: Fri, 23 Feb 2024 08:55:02 +0100
Message-Id: <20240223075502.60485-8-juraj.linkes@pantheon.tech>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20240223075502.60485-1-juraj.linkes@pantheon.tech>
References: <20231220103331.60888-1-juraj.linkes@pantheon.tech> <20240223075502.60485-1-juraj.linkes@pantheon.tech>
MIME-Version: 1.0
X-BeenThere: dev@dpdk.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: DPDK patches and discussions
List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org

The two places where we specify which test suites and test cases to run are complementary and not that intuitive to use. A unified way provides a better user experience. The syntax in the test run configuration file has not changed, but the environment variable and the command line arguments were changed to match the config file syntax. This required changes in the settings module, which greatly simplified the parsing of the environment variables while retaining the same functionality.

Signed-off-by: Juraj Linkeš
---
doc/guides/tools/dts.rst | 14 ++-
dts/framework/config/__init__.py | 12 +-
dts/framework/runner.py | 18 +--
dts/framework/settings.py | 187 ++++++++++++++-----------------
dts/framework/test_suite.py | 2 +-
5 files changed, 114 insertions(+), 119 deletions(-)

diff --git a/doc/guides/tools/dts.rst b/doc/guides/tools/dts.rst index f686ca487c..d1c3c2af7a 100644 --- a/doc/guides/tools/dts.rst +++ b/doc/guides/tools/dts.rst @@ -215,28 +215,30 @@ DTS is run with ``main.py`` located in the ``dts`` directory after entering Poet .. code-block:: console (dts-py3.10) $ ./main.py --help - usage: main.py [-h] [--config-file CONFIG_FILE] [--output-dir OUTPUT_DIR] [-t TIMEOUT] [-v] [-s] [--tarball TARBALL] [--compile-timeout COMPILE_TIMEOUT] [--test-cases TEST_CASES] [--re-run RE_RUN] + usage: main.py [-h] [--config-file CONFIG_FILE] [--output-dir OUTPUT_DIR] [-t TIMEOUT] [-v] [-s] [--tarball TARBALL] [--compile-timeout COMPILE_TIMEOUT] [--test-suite TEST_SUITE [TEST_CASES ...]] [--re-run RE_RUN] Run DPDK test suites.
All options may be specified with the environment variables provided in brackets. Command line arguments have higher priority. options: -h, --help show this help message and exit --config-file CONFIG_FILE - [DTS_CFG_FILE] configuration file that describes the test cases, SUTs and targets. (default: ./conf.yaml) + [DTS_CFG_FILE] The configuration file that describes the test cases, SUTs and targets. (default: ./conf.yaml) --output-dir OUTPUT_DIR, --output OUTPUT_DIR [DTS_OUTPUT_DIR] Output directory where DTS logs and results are saved. (default: output) -t TIMEOUT, --timeout TIMEOUT [DTS_TIMEOUT] The default timeout for all DTS operations except for compiling DPDK. (default: 15) -v, --verbose [DTS_VERBOSE] Specify to enable verbose output, logging all messages to the console. (default: False) - -s, --skip-setup [DTS_SKIP_SETUP] Specify to skip all setup steps on SUT and TG nodes. (default: None) + -s, --skip-setup [DTS_SKIP_SETUP] Specify to skip all setup steps on SUT and TG nodes. (default: False) --tarball TARBALL, --snapshot TARBALL, --git-ref TARBALL [DTS_DPDK_TARBALL] Path to DPDK source code tarball or a git commit ID, tag ID or tree ID to test. To test local changes, first commit them, then use the commit ID with this option. (default: dpdk.tar.xz) --compile-timeout COMPILE_TIMEOUT [DTS_COMPILE_TIMEOUT] The timeout for compiling DPDK. (default: 1200) - --test-cases TEST_CASES - [DTS_TESTCASES] Comma-separated list of test cases to execute. Unknown test cases will be silently ignored. (default: ) + --test-suite TEST_SUITE [TEST_CASES ...] + [DTS_TEST_SUITES] A list containing a test suite with test cases. The first parameter is the test suite name, and the rest are test case names, which are optional. May be specified multiple times. To specify multiple test suites in the environment + variable, join the lists with a comma. Examples: --test-suite suite case case --test-suite suite case ... | DTS_TEST_SUITES='suite case case, suite case, ...' | --test-suite suite --test-suite suite case ... | DTS_TEST_SUITES='suite, suite case, ...' + (default: []) --re-run RE_RUN, --re_run RE_RUN - [DTS_RERUN] Re-run each test case the specified number of times if a test failure occurs (default: 0) + [DTS_RERUN] Re-run each test case the specified number of times if a test failure occurs. (default: 0) The brackets contain the names of environment variables that set the same thing. diff --git a/dts/framework/config/__init__.py b/dts/framework/config/__init__.py index c6a93b3b89..4cb5c74059 100644 --- a/dts/framework/config/__init__.py +++ b/dts/framework/config/__init__.py @@ -35,9 +35,9 @@ import json import os.path -import pathlib from dataclasses import dataclass, fields from enum import auto, unique +from pathlib import Path from typing import Union import warlock # type: ignore[import] @@ -53,7 +53,6 @@ TrafficGeneratorConfigDict, ) from framework.exception import ConfigurationError -from framework.settings import SETTINGS from framework.utils import StrEnum @@ -571,7 +570,7 @@ def from_dict(d: ConfigurationDict) -> "Configuration": return Configuration(executions=executions) -def load_config() -> Configuration: +def load_config(config_file_path: Path) -> Configuration: """Load DTS test run configuration from a file. Load the YAML test run configuration file @@ -581,13 +580,16 @@ def load_config() -> Configuration: The YAML test run configuration file is specified in the :option:`--config-file` command line argument or the :envvar:`DTS_CFG_FILE` environment variable. 
+ Args: + config_file_path: The path to the YAML test run configuration file. + Returns: The parsed test run configuration. """ - with open(SETTINGS.config_file_path, "r") as f: + with open(config_file_path, "r") as f: config_data = yaml.safe_load(f) - schema_path = os.path.join(pathlib.Path(__file__).parent.resolve(), "conf_yaml_schema.json") + schema_path = os.path.join(Path(__file__).parent.resolve(), "conf_yaml_schema.json") with open(schema_path, "r") as f: schema = json.load(f) diff --git a/dts/framework/runner.py b/dts/framework/runner.py index af927b11a9..cabff0a7b2 100644 --- a/dts/framework/runner.py +++ b/dts/framework/runner.py @@ -24,7 +24,7 @@ import sys from pathlib import Path from types import MethodType -from typing import Iterable +from typing import Iterable, Sequence from .config import ( BuildTargetConfiguration, @@ -83,7 +83,7 @@ class DTSRunner: def __init__(self): """Initialize the instance with configuration, logger, result and string constants.""" - self._configuration = load_config() + self._configuration = load_config(SETTINGS.config_file_path) self._logger = get_dts_logger() if not os.path.exists(SETTINGS.output_dir): os.makedirs(SETTINGS.output_dir) @@ -129,7 +129,7 @@ def run(self): #. Execution teardown The test cases are filtered according to the specification in the test run configuration and - the :option:`--test-cases` command line argument or + the :option:`--test-suite` command line argument or the :envvar:`DTS_TESTCASES` environment variable. """ sut_nodes: dict[str, SutNode] = {} @@ -147,7 +147,9 @@ def run(self): ) execution_result = self._result.add_execution(execution) # we don't want to modify the original config, so create a copy - execution_test_suites = list(execution.test_suites) + execution_test_suites = list( + SETTINGS.test_suites if SETTINGS.test_suites else execution.test_suites + ) if not execution.skip_smoke_tests: execution_test_suites[:0] = [TestSuiteConfig.from_dict("smoke_tests")] try: @@ -226,7 +228,7 @@ def _get_test_suites_with_cases( test_suite_class = self._get_test_suite_class(test_suite_config.test_suite) test_cases = [] func_test_cases, perf_test_cases = self._filter_test_cases( - test_suite_class, set(test_suite_config.test_cases + SETTINGS.test_cases) + test_suite_class, test_suite_config.test_cases ) if func: test_cases.extend(func_test_cases) @@ -301,7 +303,7 @@ def is_test_suite(object) -> bool: ) def _filter_test_cases( - self, test_suite_class: type[TestSuite], test_cases_to_run: set[str] + self, test_suite_class: type[TestSuite], test_cases_to_run: Sequence[str] ) -> tuple[list[MethodType], list[MethodType]]: """Filter `test_cases_to_run` from `test_suite_class`. @@ -330,7 +332,9 @@ def _filter_test_cases( (name, method) for name, method in name_method_tuples if name in test_cases_to_run ] if len(name_method_tuples) < len(test_cases_to_run): - missing_test_cases = test_cases_to_run - {name for name, _ in name_method_tuples} + missing_test_cases = set(test_cases_to_run) - { + name for name, _ in name_method_tuples + } raise ConfigurationError( f"Test cases {missing_test_cases} not found among methods " f"of {test_suite_class.__name__}." diff --git a/dts/framework/settings.py b/dts/framework/settings.py index 2b8bfbe0ed..688e8679a7 100644 --- a/dts/framework/settings.py +++ b/dts/framework/settings.py @@ -48,10 +48,11 @@ The path to a DPDK tarball, git commit ID, tag ID or tree ID to test. -.. option:: --test-cases -.. envvar:: DTS_TESTCASES +.. option:: --test-suite +.. 
envvar:: DTS_TEST_SUITES - A comma-separated list of test cases to execute. Unknown test cases will be silently ignored. + A test suite with test cases which may be specified multiple times. + In the environment variable, the suites are joined with a comma. .. option:: --re-run, --re_run .. envvar:: DTS_RERUN @@ -71,83 +72,13 @@ import argparse import os -from collections.abc import Callable, Iterable, Sequence from dataclasses import dataclass, field from pathlib import Path -from typing import Any, TypeVar +from typing import Any +from .config import TestSuiteConfig from .utils import DPDKGitTarball -_T = TypeVar("_T") - - -def _env_arg(env_var: str) -> Any: - """A helper method augmenting the argparse Action with environment variables. - - If the supplied environment variable is defined, then the default value - of the argument is modified. This satisfies the priority order of - command line argument > environment variable > default value. - - Arguments with no values (flags) should be defined using the const keyword argument - (True or False). When the argument is specified, it will be set to const, if not specified, - the default will be stored (possibly modified by the corresponding environment variable). - - Other arguments work the same as default argparse arguments, that is using - the default 'store' action. - - Returns: - The modified argparse.Action. - """ - - class _EnvironmentArgument(argparse.Action): - def __init__( - self, - option_strings: Sequence[str], - dest: str, - nargs: str | int | None = None, - const: bool | None = None, - default: Any = None, - type: Callable[[str], _T | argparse.FileType | None] = None, - choices: Iterable[_T] | None = None, - required: bool = False, - help: str | None = None, - metavar: str | tuple[str, ...] | None = None, - ) -> None: - env_var_value = os.environ.get(env_var) - default = env_var_value or default - if const is not None: - nargs = 0 - default = const if env_var_value else default - type = None - choices = None - metavar = None - super(_EnvironmentArgument, self).__init__( - option_strings, - dest, - nargs=nargs, - const=const, - default=default, - type=type, - choices=choices, - required=required, - help=help, - metavar=metavar, - ) - - def __call__( - self, - parser: argparse.ArgumentParser, - namespace: argparse.Namespace, - values: Any, - option_string: str = None, - ) -> None: - if self.const is not None: - setattr(namespace, self.dest, self.const) - else: - setattr(namespace, self.dest, values) - - return _EnvironmentArgument - @dataclass(slots=True) class Settings: @@ -171,7 +102,7 @@ class Settings: #: compile_timeout: float = 1200 #: - test_cases: list[str] = field(default_factory=list) + test_suites: list[TestSuiteConfig] = field(default_factory=list) #: re_run: int = 0 @@ -180,6 +111,31 @@ class Settings: def _get_parser() -> argparse.ArgumentParser: + """Create the argument parser for DTS. + + Command line options take precedence over environment variables, which in turn take precedence + over default values. + + Returns: + argparse.ArgumentParser: The configured argument parser with defined options. + """ + + def env_arg(env_var: str, default: Any) -> Any: + """A helper function augmenting the argparse with environment variables. + + If the supplied environment variable is defined, then the default value + of the argument is modified. This satisfies the priority order of + command line argument > environment variable > default value. + + Args: + env_var: Environment variable name. + default: Default value. 
@@ -188,25 +144,23 @@ def _get_parser() -> argparse.ArgumentParser:
 
     parser.add_argument(
         "--config-file",
-        action=_env_arg("DTS_CFG_FILE"),
-        default=SETTINGS.config_file_path,
+        default=env_arg("DTS_CFG_FILE", SETTINGS.config_file_path),
         type=Path,
-        help="[DTS_CFG_FILE] configuration file that describes the test cases, SUTs and targets.",
+        help="[DTS_CFG_FILE] The configuration file that describes the test cases, "
+        "SUTs and targets.",
     )
 
     parser.add_argument(
         "--output-dir",
         "--output",
-        action=_env_arg("DTS_OUTPUT_DIR"),
-        default=SETTINGS.output_dir,
+        default=env_arg("DTS_OUTPUT_DIR", SETTINGS.output_dir),
         help="[DTS_OUTPUT_DIR] Output directory where DTS logs and results are saved.",
     )
 
     parser.add_argument(
         "-t",
         "--timeout",
-        action=_env_arg("DTS_TIMEOUT"),
-        default=SETTINGS.timeout,
+        default=env_arg("DTS_TIMEOUT", SETTINGS.timeout),
         type=float,
         help="[DTS_TIMEOUT] The default timeout for all DTS operations except for compiling DPDK.",
     )
@@ -214,9 +168,8 @@ def _get_parser() -> argparse.ArgumentParser:
     parser.add_argument(
         "-v",
         "--verbose",
-        action=_env_arg("DTS_VERBOSE"),
-        default=SETTINGS.verbose,
-        const=True,
+        action="store_true",
+        default=env_arg("DTS_VERBOSE", SETTINGS.verbose),
         help="[DTS_VERBOSE] Specify to enable verbose output, logging all messages "
         "to the console.",
     )
@@ -224,8 +177,8 @@ def _get_parser() -> argparse.ArgumentParser:
     parser.add_argument(
         "-s",
         "--skip-setup",
-        action=_env_arg("DTS_SKIP_SETUP"),
-        const=True,
+        action="store_true",
+        default=env_arg("DTS_SKIP_SETUP", SETTINGS.skip_setup),
         help="[DTS_SKIP_SETUP] Specify to skip all setup steps on SUT and TG nodes.",
     )
 
@@ -233,8 +186,7 @@ def _get_parser() -> argparse.ArgumentParser:
         "--tarball",
         "--snapshot",
         "--git-ref",
-        action=_env_arg("DTS_DPDK_TARBALL"),
-        default=SETTINGS.dpdk_tarball_path,
+        default=env_arg("DTS_DPDK_TARBALL", SETTINGS.dpdk_tarball_path),
         type=Path,
         help="[DTS_DPDK_TARBALL] Path to DPDK source code tarball or a git commit ID, "
         "tag ID or tree ID to test. To test local changes, first commit them, "
@@ -243,36 +195,71 @@ def _get_parser() -> argparse.ArgumentParser:
 
     parser.add_argument(
         "--compile-timeout",
-        action=_env_arg("DTS_COMPILE_TIMEOUT"),
-        default=SETTINGS.compile_timeout,
+        default=env_arg("DTS_COMPILE_TIMEOUT", SETTINGS.compile_timeout),
         type=float,
         help="[DTS_COMPILE_TIMEOUT] The timeout for compiling DPDK.",
     )
 
     parser.add_argument(
-        "--test-cases",
-        action=_env_arg("DTS_TESTCASES"),
-        default="",
-        help="[DTS_TESTCASES] Comma-separated list of test cases to execute.",
+        "--test-suite",
+        action="append",
+        nargs="+",
+        metavar=("TEST_SUITE", "TEST_CASES"),
+        default=env_arg("DTS_TEST_SUITES", SETTINGS.test_suites),
+        help="[DTS_TEST_SUITES] A list containing a test suite with test cases. "
+        "The first parameter is the test suite name, and the rest are test case names, "
+        "which are optional. May be specified multiple times. To specify multiple test suites in "
+        "the environment variable, join the lists with a comma. "
+        "Examples: "
+        "--test-suite suite case case --test-suite suite case ... | "
+        "DTS_TEST_SUITES='suite case case, suite case, ...' | "
+        "--test-suite suite --test-suite suite case ... | "
+        "DTS_TEST_SUITES='suite, suite case, ...'",
     )
 
     parser.add_argument(
         "--re-run",
         "--re_run",
-        action=_env_arg("DTS_RERUN"),
-        default=SETTINGS.re_run,
+        default=env_arg("DTS_RERUN", SETTINGS.re_run),
         type=int,
         help="[DTS_RERUN] Re-run each test case the specified number of times "
-        "if a test failure occurs",
+        "if a test failure occurs.",
     )
 
     return parser
 
 
+def _process_test_suites(args: str | list[list[str]]) -> list[TestSuiteConfig]:
+    """Process the given argument to a list of :class:`TestSuiteConfig` to execute.
+
+    Args:
+        args: The arguments to process. The argument is either a string from an environment
+            variable or a list from the user input containing test suites with test cases,
+            each of which is a list of [test_suite, test_case, test_case, ...].
+
+    Returns:
+        A list of test suite configurations to execute.
+    """
+    if isinstance(args, str):
+        # Environment variable in the form of "suite case case, suite case, suite, ..."
+        args = [suite_with_cases.split() for suite_with_cases in args.split(",")]
+
+    test_suites_to_run = []
+    for suite_with_cases in args:
+        test_suites_to_run.append(
+            TestSuiteConfig(test_suite=suite_with_cases[0], test_cases=suite_with_cases[1:])
+        )
+
+    return test_suites_to_run
+
+
 def get_settings() -> Settings:
     """Create new settings with inputs from the user.
 
     The inputs are taken from the command line and from environment variables.
+
+    Returns:
+        The new settings object.
     """
     parsed_args = _get_parser().parse_args()
     return Settings(
@@ -287,6 +274,6 @@ def get_settings() -> Settings:
             else Path(parsed_args.tarball)
         ),
         compile_timeout=parsed_args.compile_timeout,
-        test_cases=(parsed_args.test_cases.split(",") if parsed_args.test_cases else []),
+        test_suites=_process_test_suites(parsed_args.test_suite),
         re_run=parsed_args.re_run,
     )
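Both input forms accepted by --test-suite and DTS_TEST_SUITES end up as the same list of test
suite configurations. An illustrative, self-contained sketch of that mapping; TestSuiteConfig is
reduced to a simple stand-in for the class in dts/framework/config, and the suite and case names
are placeholders:

from dataclasses import dataclass, field


# Simplified stand-in for the TestSuiteConfig class from dts/framework/config.
@dataclass
class TestSuiteConfig:
    test_suite: str
    test_cases: list[str] = field(default_factory=list)


def process_test_suites(args: str | list[list[str]]) -> list[TestSuiteConfig]:
    # The environment variable form is split into the same nested list shape
    # that argparse produces for repeated --test-suite options.
    if isinstance(args, str):
        args = [suite_with_cases.split() for suite_with_cases in args.split(",")]
    return [TestSuiteConfig(s[0], s[1:]) for s in args]


# Command line form: --test-suite hello_world test_hello --test-suite smoke_tests
cli_form = process_test_suites([["hello_world", "test_hello"], ["smoke_tests"]])
# Environment variable form: DTS_TEST_SUITES='hello_world test_hello, smoke_tests'
env_form = process_test_suites("hello_world test_hello, smoke_tests")
assert cli_form == env_form
print(env_form)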
| " + "DTS_TEST_SUITES='suite, suite case, ...'", ) parser.add_argument( "--re-run", "--re_run", - action=_env_arg("DTS_RERUN"), - default=SETTINGS.re_run, + default=env_arg("DTS_RERUN", SETTINGS.re_run), type=int, help="[DTS_RERUN] Re-run each test case the specified number of times " - "if a test failure occurs", + "if a test failure occurs.", ) return parser +def _process_test_suites(args: str | list[list[str]]) -> list[TestSuiteConfig]: + """Process the given argument to a list of :class:`TestSuiteConfig` to execute. + + Args: + args: The arguments to process. The args is a string from an environment variable + or a list of from the user input containing tests suites with tests cases, + each of which is a list of [test_suite, test_case, test_case, ...]. + + Returns: + A list of test suite configurations to execute. + """ + if isinstance(args, str): + # Environment variable in the form of "suite case case, suite case, suite, ..." + args = [suite_with_cases.split() for suite_with_cases in args.split(",")] + + test_suites_to_run = [] + for suite_with_cases in args: + test_suites_to_run.append( + TestSuiteConfig(test_suite=suite_with_cases[0], test_cases=suite_with_cases[1:]) + ) + + return test_suites_to_run + + def get_settings() -> Settings: """Create new settings with inputs from the user. The inputs are taken from the command line and from environment variables. + + Returns: + The new settings object. """ parsed_args = _get_parser().parse_args() return Settings( @@ -287,6 +274,6 @@ def get_settings() -> Settings: else Path(parsed_args.tarball) ), compile_timeout=parsed_args.compile_timeout, - test_cases=(parsed_args.test_cases.split(",") if parsed_args.test_cases else []), + test_suites=_process_test_suites(parsed_args.test_suite), re_run=parsed_args.re_run, ) diff --git a/dts/framework/test_suite.py b/dts/framework/test_suite.py index 365f80e21a..1957ea7328 100644 --- a/dts/framework/test_suite.py +++ b/dts/framework/test_suite.py @@ -40,7 +40,7 @@ class TestSuite(object): and functional test cases (all other test cases). By default, all test cases will be executed. A list of testcase names may be specified - in the YAML test run configuration file and in the :option:`--test-cases` command line argument + in the YAML test run configuration file and in the :option:`--test-suite` command line argument or in the :envvar:`DTS_TESTCASES` environment variable to filter which test cases to run. The union of both lists will be used. Any unknown test cases from the latter lists will be silently ignored.