From patchwork Fri Mar 1 10:55:16 2024 X-Patchwork-Submitter: =?utf-8?q?Juraj_Linke=C5=A1?= X-Patchwork-Id: 137666 X-Patchwork-Delegate: thomas@monjalon.net From: =?utf-8?q?Juraj_Linke=C5=A1?= To: thomas@monjalon.net, Honnappa.Nagarahalli@arm.com, jspewock@iol.unh.edu, probb@iol.unh.edu, paul.szczepanek@arm.com, Luca.Vizzarro@arm.com, npratte@iol.unh.edu Cc: dev@dpdk.org, =?utf-8?q?Juraj_Linke=C5=A1?= Subject: [PATCH v4 1/7] dts: convert dts.py methods to class Date: Fri, 1 Mar 2024 11:55:16 +0100 Message-Id: <20240301105522.79870-2-juraj.linkes@pantheon.tech> In-Reply-To: <20240301105522.79870-1-juraj.linkes@pantheon.tech> References:
<20231220103331.60888-1-juraj.linkes@pantheon.tech> <20240301105522.79870-1-juraj.linkes@pantheon.tech> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org The dts.py module deviates from the rest of the code without a clear reason. Converting it into a class and using better naming will improve organization and code readability. Signed-off-by: Juraj Linkeš --- dts/framework/dts.py | 338 ---------------------------------------- dts/framework/runner.py | 333 +++++++++++++++++++++++++++++++++++++++ dts/main.py | 6 +- 3 files changed, 337 insertions(+), 340 deletions(-) delete mode 100644 dts/framework/dts.py create mode 100644 dts/framework/runner.py diff --git a/dts/framework/dts.py b/dts/framework/dts.py deleted file mode 100644 index e16d4578a0..0000000000 --- a/dts/framework/dts.py +++ /dev/null @@ -1,338 +0,0 @@ -# SPDX-License-Identifier: BSD-3-Clause -# Copyright(c) 2010-2019 Intel Corporation -# Copyright(c) 2022-2023 PANTHEON.tech s.r.o. -# Copyright(c) 2022-2023 University of New Hampshire - -r"""Test suite runner module. - -A DTS run is split into stages: - - #. Execution stage, - #. Build target stage, - #. Test suite stage, - #. Test case stage. - -The module is responsible for running tests on testbeds defined in the test run configuration. -Each setup or teardown of each stage is recorded in a :class:`~.test_result.DTSResult` or -one of its subclasses. The test case results are also recorded. - -If an error occurs, the current stage is aborted, the error is recorded and the run continues in -the next iteration of the same stage. The return code is the highest `severity` of all -:class:`~.exception.DTSError`\s. - -Example: - An error occurs in a build target setup. The current build target is aborted and the run - continues with the next build target. If the errored build target was the last one in the given - execution, the next execution begins. - -Attributes: - dts_logger: The logger instance used in this module. - result: The top level result used in the module. -""" - -import sys - -from .config import ( - BuildTargetConfiguration, - ExecutionConfiguration, - TestSuiteConfig, - load_config, -) -from .exception import BlockingTestSuiteError -from .logger import DTSLOG, getLogger -from .test_result import BuildTargetResult, DTSResult, ExecutionResult, Result -from .test_suite import get_test_suites -from .testbed_model import SutNode, TGNode - -# dummy defaults to satisfy linters -dts_logger: DTSLOG = None # type: ignore[assignment] -result: DTSResult = DTSResult(dts_logger) - - -def run_all() -> None: - """Run all build targets in all executions from the test run configuration. - - Before running test suites, executions and build targets are first set up. - The executions and build targets defined in the test run configuration are iterated over. - The executions define which tests to run and where to run them and build targets define - the DPDK build setup. - - The tests suites are set up for each execution/build target tuple and each scheduled - test case within the test suite is set up, executed and torn down. After all test cases - have been executed, the test suite is torn down and the next build target will be tested. - - All the nested steps look like this: - - #. Execution setup - - #. Build target setup - - #. Test suite setup - - #. Test case setup - #. Test case logic - #. 
Test case teardown - - #. Test suite teardown - - #. Build target teardown - - #. Execution teardown - - The test cases are filtered according to the specification in the test run configuration and - the :option:`--test-cases` command line argument or - the :envvar:`DTS_TESTCASES` environment variable. - """ - global dts_logger - global result - - # create a regular DTS logger and create a new result with it - dts_logger = getLogger("DTSRunner") - result = DTSResult(dts_logger) - - # check the python version of the server that run dts - _check_dts_python_version() - - sut_nodes: dict[str, SutNode] = {} - tg_nodes: dict[str, TGNode] = {} - try: - # for all Execution sections - for execution in load_config().executions: - sut_node = sut_nodes.get(execution.system_under_test_node.name) - tg_node = tg_nodes.get(execution.traffic_generator_node.name) - - try: - if not sut_node: - sut_node = SutNode(execution.system_under_test_node) - sut_nodes[sut_node.name] = sut_node - if not tg_node: - tg_node = TGNode(execution.traffic_generator_node) - tg_nodes[tg_node.name] = tg_node - result.update_setup(Result.PASS) - except Exception as e: - failed_node = execution.system_under_test_node.name - if sut_node: - failed_node = execution.traffic_generator_node.name - dts_logger.exception(f"Creation of node {failed_node} failed.") - result.update_setup(Result.FAIL, e) - - else: - _run_execution(sut_node, tg_node, execution, result) - - except Exception as e: - dts_logger.exception("An unexpected error has occurred.") - result.add_error(e) - raise - - finally: - try: - for node in (sut_nodes | tg_nodes).values(): - node.close() - result.update_teardown(Result.PASS) - except Exception as e: - dts_logger.exception("Final cleanup of nodes failed.") - result.update_teardown(Result.ERROR, e) - - # we need to put the sys.exit call outside the finally clause to make sure - # that unexpected exceptions will propagate - # in that case, the error that should be reported is the uncaught exception as - # that is a severe error originating from the framework - # at that point, we'll only have partial results which could be impacted by the - # error causing the uncaught exception, making them uninterpretable - _exit_dts() - - -def _check_dts_python_version() -> None: - """Check the required Python version - v3.10.""" - - def RED(text: str) -> str: - return f"\u001B[31;1m{str(text)}\u001B[0m" - - if sys.version_info.major < 3 or (sys.version_info.major == 3 and sys.version_info.minor < 10): - print( - RED( - ( - "WARNING: DTS execution node's python version is lower than" - "python 3.10, is deprecated and will not work in future releases." - ) - ), - file=sys.stderr, - ) - print(RED("Please use Python >= 3.10 instead"), file=sys.stderr) - - -def _run_execution( - sut_node: SutNode, - tg_node: TGNode, - execution: ExecutionConfiguration, - result: DTSResult, -) -> None: - """Run the given execution. - - This involves running the execution setup as well as running all build targets - in the given execution. After that, execution teardown is run. - - Args: - sut_node: The execution's SUT node. - tg_node: The execution's TG node. - execution: An execution's test run configuration. - result: The top level result object. 
- """ - dts_logger.info(f"Running execution with SUT '{execution.system_under_test_node.name}'.") - execution_result = result.add_execution(sut_node.config) - execution_result.add_sut_info(sut_node.node_info) - - try: - sut_node.set_up_execution(execution) - execution_result.update_setup(Result.PASS) - except Exception as e: - dts_logger.exception("Execution setup failed.") - execution_result.update_setup(Result.FAIL, e) - - else: - for build_target in execution.build_targets: - _run_build_target(sut_node, tg_node, build_target, execution, execution_result) - - finally: - try: - sut_node.tear_down_execution() - execution_result.update_teardown(Result.PASS) - except Exception as e: - dts_logger.exception("Execution teardown failed.") - execution_result.update_teardown(Result.FAIL, e) - - -def _run_build_target( - sut_node: SutNode, - tg_node: TGNode, - build_target: BuildTargetConfiguration, - execution: ExecutionConfiguration, - execution_result: ExecutionResult, -) -> None: - """Run the given build target. - - This involves running the build target setup as well as running all test suites - in the given execution the build target is defined in. - After that, build target teardown is run. - - Args: - sut_node: The execution's SUT node. - tg_node: The execution's TG node. - build_target: A build target's test run configuration. - execution: The build target's execution's test run configuration. - execution_result: The execution level result object associated with the execution. - """ - dts_logger.info(f"Running build target '{build_target.name}'.") - build_target_result = execution_result.add_build_target(build_target) - - try: - sut_node.set_up_build_target(build_target) - result.dpdk_version = sut_node.dpdk_version - build_target_result.add_build_target_info(sut_node.get_build_target_info()) - build_target_result.update_setup(Result.PASS) - except Exception as e: - dts_logger.exception("Build target setup failed.") - build_target_result.update_setup(Result.FAIL, e) - - else: - _run_all_suites(sut_node, tg_node, execution, build_target_result) - - finally: - try: - sut_node.tear_down_build_target() - build_target_result.update_teardown(Result.PASS) - except Exception as e: - dts_logger.exception("Build target teardown failed.") - build_target_result.update_teardown(Result.FAIL, e) - - -def _run_all_suites( - sut_node: SutNode, - tg_node: TGNode, - execution: ExecutionConfiguration, - build_target_result: BuildTargetResult, -) -> None: - """Run the execution's (possibly a subset) test suites using the current build target. - - The function assumes the build target we're testing has already been built on the SUT node. - The current build target thus corresponds to the current DPDK build present on the SUT node. - - If a blocking test suite (such as the smoke test suite) fails, the rest of the test suites - in the current build target won't be executed. - - Args: - sut_node: The execution's SUT node. - tg_node: The execution's TG node. - execution: The execution's test run configuration associated with the current build target. - build_target_result: The build target level result object associated - with the current build target. 
- """ - end_build_target = False - if not execution.skip_smoke_tests: - execution.test_suites[:0] = [TestSuiteConfig.from_dict("smoke_tests")] - for test_suite_config in execution.test_suites: - try: - _run_single_suite(sut_node, tg_node, execution, build_target_result, test_suite_config) - except BlockingTestSuiteError as e: - dts_logger.exception( - f"An error occurred within {test_suite_config.test_suite}. Skipping build target." - ) - result.add_error(e) - end_build_target = True - # if a blocking test failed and we need to bail out of suite executions - if end_build_target: - break - - -def _run_single_suite( - sut_node: SutNode, - tg_node: TGNode, - execution: ExecutionConfiguration, - build_target_result: BuildTargetResult, - test_suite_config: TestSuiteConfig, -) -> None: - """Run all test suite in a single test suite module. - - The function assumes the build target we're testing has already been built on the SUT node. - The current build target thus corresponds to the current DPDK build present on the SUT node. - - Args: - sut_node: The execution's SUT node. - tg_node: The execution's TG node. - execution: The execution's test run configuration associated with the current build target. - build_target_result: The build target level result object associated - with the current build target. - test_suite_config: Test suite test run configuration specifying the test suite module - and possibly a subset of test cases of test suites in that module. - - Raises: - BlockingTestSuiteError: If a blocking test suite fails. - """ - try: - full_suite_path = f"tests.TestSuite_{test_suite_config.test_suite}" - test_suite_classes = get_test_suites(full_suite_path) - suites_str = ", ".join((x.__name__ for x in test_suite_classes)) - dts_logger.debug(f"Found test suites '{suites_str}' in '{full_suite_path}'.") - except Exception as e: - dts_logger.exception("An error occurred when searching for test suites.") - result.update_setup(Result.ERROR, e) - - else: - for test_suite_class in test_suite_classes: - test_suite = test_suite_class( - sut_node, - tg_node, - test_suite_config.test_cases, - execution.func, - build_target_result, - ) - test_suite.run() - - -def _exit_dts() -> None: - """Process all errors and exit with the proper exit code.""" - result.process() - - if dts_logger: - dts_logger.info("DTS execution has ended.") - sys.exit(result.get_return_code()) diff --git a/dts/framework/runner.py b/dts/framework/runner.py new file mode 100644 index 0000000000..acc1c4d6db --- /dev/null +++ b/dts/framework/runner.py @@ -0,0 +1,333 @@ +# SPDX-License-Identifier: BSD-3-Clause +# Copyright(c) 2010-2019 Intel Corporation +# Copyright(c) 2022-2023 PANTHEON.tech s.r.o. +# Copyright(c) 2022-2023 University of New Hampshire + +"""Test suite runner module. + +The module is responsible for running DTS in a series of stages: + + #. Execution stage, + #. Build target stage, + #. Test suite stage, + #. Test case stage. + +The execution and build target stages set up the environment before running test suites. +The test suite stage sets up steps common to all test cases +and the test case stage runs test cases individually. 
+""" + +import logging +import sys + +from .config import ( + BuildTargetConfiguration, + ExecutionConfiguration, + TestSuiteConfig, + load_config, +) +from .exception import BlockingTestSuiteError +from .logger import DTSLOG, getLogger +from .test_result import BuildTargetResult, DTSResult, ExecutionResult, Result +from .test_suite import get_test_suites +from .testbed_model import SutNode, TGNode + + +class DTSRunner: + r"""Test suite runner class. + + The class is responsible for running tests on testbeds defined in the test run configuration. + Each setup or teardown of each stage is recorded in a :class:`~framework.test_result.DTSResult` + or one of its subclasses. The test case results are also recorded. + + If an error occurs, the current stage is aborted, the error is recorded and the run continues in + the next iteration of the same stage. The return code is the highest `severity` of all + :class:`~.framework.exception.DTSError`\s. + + Example: + An error occurs in a build target setup. The current build target is aborted and the run + continues with the next build target. If the errored build target was the last one in the + given execution, the next execution begins. + """ + + _logger: DTSLOG + _result: DTSResult + + def __init__(self): + """Initialize the instance with logger and result.""" + self._logger = getLogger("DTSRunner") + self._result = DTSResult(self._logger) + + def run(self): + """Run all build targets in all executions from the test run configuration. + + Before running test suites, executions and build targets are first set up. + The executions and build targets defined in the test run configuration are iterated over. + The executions define which tests to run and where to run them and build targets define + the DPDK build setup. + + The tests suites are set up for each execution/build target tuple and each discovered + test case within the test suite is set up, executed and torn down. After all test cases + have been executed, the test suite is torn down and the next build target will be tested. + + All the nested steps look like this: + + #. Execution setup + + #. Build target setup + + #. Test suite setup + + #. Test case setup + #. Test case logic + #. Test case teardown + + #. Test suite teardown + + #. Build target teardown + + #. Execution teardown + + The test cases are filtered according to the specification in the test run configuration and + the :option:`--test-cases` command line argument or + the :envvar:`DTS_TESTCASES` environment variable. 
+ """ + sut_nodes: dict[str, SutNode] = {} + tg_nodes: dict[str, TGNode] = {} + try: + # check the python version of the server that runs dts + self._check_dts_python_version() + + # for all Execution sections + for execution in load_config().executions: + sut_node = sut_nodes.get(execution.system_under_test_node.name) + tg_node = tg_nodes.get(execution.traffic_generator_node.name) + + try: + if not sut_node: + sut_node = SutNode(execution.system_under_test_node) + sut_nodes[sut_node.name] = sut_node + if not tg_node: + tg_node = TGNode(execution.traffic_generator_node) + tg_nodes[tg_node.name] = tg_node + self._result.update_setup(Result.PASS) + except Exception as e: + failed_node = execution.system_under_test_node.name + if sut_node: + failed_node = execution.traffic_generator_node.name + self._logger.exception(f"The Creation of node {failed_node} failed.") + self._result.update_setup(Result.FAIL, e) + + else: + self._run_execution(sut_node, tg_node, execution) + + except Exception as e: + self._logger.exception("An unexpected error has occurred.") + self._result.add_error(e) + raise + + finally: + try: + for node in (sut_nodes | tg_nodes).values(): + node.close() + self._result.update_teardown(Result.PASS) + except Exception as e: + self._logger.exception("The final cleanup of nodes failed.") + self._result.update_teardown(Result.ERROR, e) + + # we need to put the sys.exit call outside the finally clause to make sure + # that unexpected exceptions will propagate + # in that case, the error that should be reported is the uncaught exception as + # that is a severe error originating from the framework + # at that point, we'll only have partial results which could be impacted by the + # error causing the uncaught exception, making them uninterpretable + self._exit_dts() + + def _check_dts_python_version(self) -> None: + """Check the required Python version - v3.10.""" + if sys.version_info.major < 3 or ( + sys.version_info.major == 3 and sys.version_info.minor < 10 + ): + self._logger.warning( + "DTS execution node's python version is lower than Python 3.10, " + "is deprecated and will not work in future releases." + ) + self._logger.warning("Please use Python >= 3.10 instead.") + + def _run_execution( + self, + sut_node: SutNode, + tg_node: TGNode, + execution: ExecutionConfiguration, + ) -> None: + """Run the given execution. + + This involves running the execution setup as well as running all build targets + in the given execution. After that, execution teardown is run. + + Args: + sut_node: The execution's SUT node. + tg_node: The execution's TG node. + execution: An execution's test run configuration. 
+ """ + self._logger.info(f"Running execution with SUT '{execution.system_under_test_node.name}'.") + execution_result = self._result.add_execution(sut_node.config) + execution_result.add_sut_info(sut_node.node_info) + + try: + sut_node.set_up_execution(execution) + execution_result.update_setup(Result.PASS) + except Exception as e: + self._logger.exception("Execution setup failed.") + execution_result.update_setup(Result.FAIL, e) + + else: + for build_target in execution.build_targets: + self._run_build_target(sut_node, tg_node, build_target, execution, execution_result) + + finally: + try: + sut_node.tear_down_execution() + execution_result.update_teardown(Result.PASS) + except Exception as e: + self._logger.exception("Execution teardown failed.") + execution_result.update_teardown(Result.FAIL, e) + + def _run_build_target( + self, + sut_node: SutNode, + tg_node: TGNode, + build_target: BuildTargetConfiguration, + execution: ExecutionConfiguration, + execution_result: ExecutionResult, + ) -> None: + """Run the given build target. + + This involves running the build target setup as well as running all test suites + of the build target's execution. + After that, build target teardown is run. + + Args: + sut_node: The execution's sut node. + tg_node: The execution's tg node. + build_target: A build target's test run configuration. + execution: The build target's execution's test run configuration. + execution_result: The execution level result object associated with the execution. + """ + self._logger.info(f"Running build target '{build_target.name}'.") + build_target_result = execution_result.add_build_target(build_target) + + try: + sut_node.set_up_build_target(build_target) + self._result.dpdk_version = sut_node.dpdk_version + build_target_result.add_build_target_info(sut_node.get_build_target_info()) + build_target_result.update_setup(Result.PASS) + except Exception as e: + self._logger.exception("Build target setup failed.") + build_target_result.update_setup(Result.FAIL, e) + + else: + self._run_all_suites(sut_node, tg_node, execution, build_target_result) + + finally: + try: + sut_node.tear_down_build_target() + build_target_result.update_teardown(Result.PASS) + except Exception as e: + self._logger.exception("Build target teardown failed.") + build_target_result.update_teardown(Result.FAIL, e) + + def _run_all_suites( + self, + sut_node: SutNode, + tg_node: TGNode, + execution: ExecutionConfiguration, + build_target_result: BuildTargetResult, + ) -> None: + """Run the execution's (possibly a subset of) test suites using the current build target. + + The method assumes the build target we're testing has already been built on the SUT node. + The current build target thus corresponds to the current DPDK build present on the SUT node. + + Args: + sut_node: The execution's SUT node. + tg_node: The execution's TG node. + execution: The execution's test run configuration associated + with the current build target. + build_target_result: The build target level result object associated + with the current build target. + """ + end_build_target = False + if not execution.skip_smoke_tests: + execution.test_suites[:0] = [TestSuiteConfig.from_dict("smoke_tests")] + for test_suite_config in execution.test_suites: + try: + self._run_single_suite( + sut_node, tg_node, execution, build_target_result, test_suite_config + ) + except BlockingTestSuiteError as e: + self._logger.exception( + f"An error occurred within {test_suite_config.test_suite}. " + "Skipping build target..." 
+ ) + self._result.add_error(e) + end_build_target = True + # if a blocking test failed and we need to bail out of suite executions + if end_build_target: + break + + def _run_single_suite( + self, + sut_node: SutNode, + tg_node: TGNode, + execution: ExecutionConfiguration, + build_target_result: BuildTargetResult, + test_suite_config: TestSuiteConfig, + ) -> None: + """Run all test suites in a single test suite module. + + The method assumes the build target we're testing has already been built on the SUT node. + The current build target thus corresponds to the current DPDK build present on the SUT node. + + Args: + sut_node: The execution's SUT node. + tg_node: The execution's TG node. + execution: The execution's test run configuration associated + with the current build target. + build_target_result: The build target level result object associated + with the current build target. + test_suite_config: Test suite test run configuration specifying the test suite module + and possibly a subset of test cases of test suites in that module. + + Raises: + BlockingTestSuiteError: If a blocking test suite fails. + """ + try: + full_suite_path = f"tests.TestSuite_{test_suite_config.test_suite}" + test_suite_classes = get_test_suites(full_suite_path) + suites_str = ", ".join((x.__name__ for x in test_suite_classes)) + self._logger.debug(f"Found test suites '{suites_str}' in '{full_suite_path}'.") + except Exception as e: + self._logger.exception("An error occurred when searching for test suites.") + self._result.update_setup(Result.ERROR, e) + + else: + for test_suite_class in test_suite_classes: + test_suite = test_suite_class( + sut_node, + tg_node, + test_suite_config.test_cases, + execution.func, + build_target_result, + ) + test_suite.run() + + def _exit_dts(self) -> None: + """Process all errors and exit with the proper exit code.""" + self._result.process() + + if self._logger: + self._logger.info("DTS execution has ended.") + + logging.shutdown() + sys.exit(self._result.get_return_code()) diff --git a/dts/main.py b/dts/main.py index f703615d11..1ffe8ff81f 100755 --- a/dts/main.py +++ b/dts/main.py @@ -21,9 +21,11 @@ def main() -> None: be modified before the settings module is imported anywhere else in the framework. 
""" settings.SETTINGS = settings.get_settings() - from framework import dts - dts.run_all() + from framework.runner import DTSRunner + + dts = DTSRunner() + dts.run() # Main program begins here From patchwork Fri Mar 1 10:55:17 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Juraj_Linke=C5=A1?= X-Patchwork-Id: 137667 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 8858A43BB1; Fri, 1 Mar 2024 11:55:40 +0100 (CET) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id B638E433CA; Fri, 1 Mar 2024 11:55:29 +0100 (CET) Received: from mail-ed1-f46.google.com (mail-ed1-f46.google.com [209.85.208.46]) by mails.dpdk.org (Postfix) with ESMTP id 5E257433AF for ; Fri, 1 Mar 2024 11:55:27 +0100 (CET) Received: by mail-ed1-f46.google.com with SMTP id 4fb4d7f45d1cf-563c403719cso2937476a12.2 for ; Fri, 01 Mar 2024 02:55:27 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=pantheon.tech; s=google; t=1709290527; x=1709895327; darn=dpdk.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=hy2D+aQY0wT5R5bQd1L65md2e7K9b0WwIpuk8bYCJUo=; b=XutN37Cu1R5YrbhZdxkI1oOPrfB/RUmHNBTXNnhQuEP9mYUG56HzTIUAUFas7CLtiY 8kD+nmmADhgPZHHfUJQJZDqPqawxS0OdyqIG+niKwy2oILCQJbpL4WlqUUCdoohq6uHh EpIIOpIfKRRVGfiBJVg0q/nbePVPx3os86kxthAg/1hifAlV9fPFea5eu7WiipLmFLDK vgL5ZG1ZskuI9UNumzn0iTLEsiDOEct96DF2D8yMmJD/QgLZ6D3O01spKXw6e6pizw07 7lPkQyP/KrAxhCxUJJDfLPTLy+BJO3fzZB07ovpHSUCt0sUJL7epGzuHcsGf6TVPUmAL 0TAg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1709290527; x=1709895327; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=hy2D+aQY0wT5R5bQd1L65md2e7K9b0WwIpuk8bYCJUo=; b=H0qovnDou01VnZaGYYU1C14V4dUn2yGSYyXY8FiLZKEsLbt4NH89AaGDnSghVKTXoQ V6MLFcCtGoaFoofTSqkK8U4MF9Cir2H4xV7unfTOMwAXqRCLUsnlN0M9xafyDx6VWDw7 FtWROaTSynjs4P0qqZPxcxIbuxjznXpaJL24gWyGYilYTx8xMuP7AWubpn9LZm7EkU+D vCSLK0OP2sHDS20g84V7gNOmkfXBCHz7K7CCoJCoo7WmEZQ+eKhrELVILvaezl/8SDLZ cUTqJBfS7mLb0AaKNCrryruf4RmVHmqqA0QZtarKn8XH9P33mVxQnNQCQ8Tk6x5/lkGA BVEg== X-Gm-Message-State: AOJu0Yywu15cBx1St/XD1DifzM4QYCnFgD1htKDSkgz+mCt4+pG9wzl8 Cqx88AIRHN7+S3VP++IBqK5qALcSlHfgUqPp3dxGRO49HikQGXxRbfVq3p+FR6sKkHM50itPCzF FZbw= X-Google-Smtp-Source: AGHT+IHpNisciMjmjNim8nQwEwujWcnUejGpGiT3L72Ah2TGSPzzUBVrdRc5iyMWC4T3bf7beeT1qg== X-Received: by 2002:a05:6402:390c:b0:566:db27:837b with SMTP id fe12-20020a056402390c00b00566db27837bmr708709edb.40.1709290526819; Fri, 01 Mar 2024 02:55:26 -0800 (PST) Received: from jlinkes-PT-Latitude-5530.pantheon.local ([84.245.120.62]) by smtp.gmail.com with ESMTPSA id f12-20020a056402194c00b0056661ec3f24sm1461734edz.81.2024.03.01.02.55.25 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 01 Mar 2024 02:55:26 -0800 (PST) From: =?utf-8?q?Juraj_Linke=C5=A1?= To: thomas@monjalon.net, Honnappa.Nagarahalli@arm.com, jspewock@iol.unh.edu, probb@iol.unh.edu, paul.szczepanek@arm.com, Luca.Vizzarro@arm.com, npratte@iol.unh.edu Cc: dev@dpdk.org, =?utf-8?q?Juraj_Linke=C5=A1?= Subject: [PATCH v4 2/7] dts: move test suite execution logic to DTSRunner Date: Fri, 1 Mar 2024 
11:55:17 +0100 Message-Id: <20240301105522.79870-3-juraj.linkes@pantheon.tech> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20240301105522.79870-1-juraj.linkes@pantheon.tech> References: <20231220103331.60888-1-juraj.linkes@pantheon.tech> <20240301105522.79870-1-juraj.linkes@pantheon.tech> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Move the code responsible for running the test suite from the TestSuite class to the DTSRunner class. This restructuring decision was made to consolidate and unify the related logic into a single unit. Signed-off-by: Juraj Linkeš --- dts/framework/runner.py | 175 ++++++++++++++++++++++++++++++++---- dts/framework/test_suite.py | 152 ++----------------------------- 2 files changed, 169 insertions(+), 158 deletions(-) diff --git a/dts/framework/runner.py b/dts/framework/runner.py index acc1c4d6db..933685d638 100644 --- a/dts/framework/runner.py +++ b/dts/framework/runner.py @@ -19,6 +19,7 @@ import logging import sys +from types import MethodType from .config import ( BuildTargetConfiguration, @@ -26,10 +27,18 @@ TestSuiteConfig, load_config, ) -from .exception import BlockingTestSuiteError +from .exception import BlockingTestSuiteError, SSHTimeoutError, TestCaseVerifyError from .logger import DTSLOG, getLogger -from .test_result import BuildTargetResult, DTSResult, ExecutionResult, Result -from .test_suite import get_test_suites +from .settings import SETTINGS +from .test_result import ( + BuildTargetResult, + DTSResult, + ExecutionResult, + Result, + TestCaseResult, + TestSuiteResult, +) +from .test_suite import TestSuite, get_test_suites from .testbed_model import SutNode, TGNode @@ -227,7 +236,7 @@ def _run_build_target( build_target_result.update_setup(Result.FAIL, e) else: - self._run_all_suites(sut_node, tg_node, execution, build_target_result) + self._run_test_suites(sut_node, tg_node, execution, build_target_result) finally: try: @@ -237,7 +246,7 @@ def _run_build_target( self._logger.exception("Build target teardown failed.") build_target_result.update_teardown(Result.FAIL, e) - def _run_all_suites( + def _run_test_suites( self, sut_node: SutNode, tg_node: TGNode, @@ -249,6 +258,9 @@ def _run_all_suites( The method assumes the build target we're testing has already been built on the SUT node. The current build target thus corresponds to the current DPDK build present on the SUT node. + If a blocking test suite (such as the smoke test suite) fails, the rest of the test suites + in the current build target won't be executed. + Args: sut_node: The execution's SUT node. tg_node: The execution's TG node. @@ -262,7 +274,7 @@ def _run_all_suites( execution.test_suites[:0] = [TestSuiteConfig.from_dict("smoke_tests")] for test_suite_config in execution.test_suites: try: - self._run_single_suite( + self._run_test_suite_module( sut_node, tg_node, execution, build_target_result, test_suite_config ) except BlockingTestSuiteError as e: @@ -276,7 +288,7 @@ def _run_all_suites( if end_build_target: break - def _run_single_suite( + def _run_test_suite_module( self, sut_node: SutNode, tg_node: TGNode, @@ -284,11 +296,18 @@ def _run_single_suite( build_target_result: BuildTargetResult, test_suite_config: TestSuiteConfig, ) -> None: - """Run all test suites in a single test suite module. + """Set up, execute and tear down all test suites in a single test suite module. 
The method assumes the build target we're testing has already been built on the SUT node. The current build target thus corresponds to the current DPDK build present on the SUT node. + Test suite execution consists of running the discovered test cases. + A test case run consists of setup, execution and teardown of said test case. + + Record the setup and the teardown and handle failures. + + The test cases to execute are discovered when creating the :class:`TestSuite` object. + Args: sut_node: The execution's SUT node. tg_node: The execution's TG node. @@ -313,14 +332,140 @@ def _run_single_suite( else: for test_suite_class in test_suite_classes: - test_suite = test_suite_class( - sut_node, - tg_node, - test_suite_config.test_cases, - execution.func, - build_target_result, + test_suite = test_suite_class(sut_node, tg_node, test_suite_config.test_cases) + + test_suite_name = test_suite.__class__.__name__ + test_suite_result = build_target_result.add_test_suite(test_suite_name) + try: + self._logger.info(f"Starting test suite setup: {test_suite_name}") + test_suite.set_up_suite() + test_suite_result.update_setup(Result.PASS) + self._logger.info(f"Test suite setup successful: {test_suite_name}") + except Exception as e: + self._logger.exception(f"Test suite setup ERROR: {test_suite_name}") + test_suite_result.update_setup(Result.ERROR, e) + + else: + self._execute_test_suite(execution.func, test_suite, test_suite_result) + + finally: + try: + test_suite.tear_down_suite() + sut_node.kill_cleanup_dpdk_apps() + test_suite_result.update_teardown(Result.PASS) + except Exception as e: + self._logger.exception(f"Test suite teardown ERROR: {test_suite_name}") + self._logger.warning( + f"Test suite '{test_suite_name}' teardown failed, " + f"the next test suite may be affected." + ) + test_suite_result.update_setup(Result.ERROR, e) + if len(test_suite_result.get_errors()) > 0 and test_suite.is_blocking: + raise BlockingTestSuiteError(test_suite_name) + + def _execute_test_suite( + self, func: bool, test_suite: TestSuite, test_suite_result: TestSuiteResult + ) -> None: + """Execute all discovered test cases in `test_suite`. + + If the :option:`--re-run` command line argument or the :envvar:`DTS_RERUN` environment + variable is set, in case of a test case failure, the test case will be executed again + until it passes or it fails that many times in addition of the first failure. + + Args: + func: Whether to execute functional test cases. + test_suite: The test suite object. + test_suite_result: The test suite level result object associated + with the current test suite. + """ + if func: + for test_case_method in test_suite._get_functional_test_cases(): + test_case_name = test_case_method.__name__ + test_case_result = test_suite_result.add_test_case(test_case_name) + all_attempts = SETTINGS.re_run + 1 + attempt_nr = 1 + self._run_test_case(test_suite, test_case_method, test_case_result) + while not test_case_result and attempt_nr < all_attempts: + attempt_nr += 1 + self._logger.info( + f"Re-running FAILED test case '{test_case_name}'. " + f"Attempt number {attempt_nr} out of {all_attempts}." + ) + self._run_test_case(test_suite, test_case_method, test_case_result) + + def _run_test_case( + self, + test_suite: TestSuite, + test_case_method: MethodType, + test_case_result: TestCaseResult, + ) -> None: + """Setup, execute and teardown a test case in `test_suite`. + + Record the result of the setup and the teardown and handle failures. + + Args: + test_suite: The test suite object. 
+ test_case_method: The test case method. + test_case_result: The test case level result object associated + with the current test case. + """ + test_case_name = test_case_method.__name__ + + try: + # run set_up function for each case + test_suite.set_up_test_case() + test_case_result.update_setup(Result.PASS) + except SSHTimeoutError as e: + self._logger.exception(f"Test case setup FAILED: {test_case_name}") + test_case_result.update_setup(Result.FAIL, e) + except Exception as e: + self._logger.exception(f"Test case setup ERROR: {test_case_name}") + test_case_result.update_setup(Result.ERROR, e) + + else: + # run test case if setup was successful + self._execute_test_case(test_case_method, test_case_result) + + finally: + try: + test_suite.tear_down_test_case() + test_case_result.update_teardown(Result.PASS) + except Exception as e: + self._logger.exception(f"Test case teardown ERROR: {test_case_name}") + self._logger.warning( + f"Test case '{test_case_name}' teardown failed, " + f"the next test case may be affected." ) - test_suite.run() + test_case_result.update_teardown(Result.ERROR, e) + test_case_result.update(Result.ERROR) + + def _execute_test_case( + self, test_case_method: MethodType, test_case_result: TestCaseResult + ) -> None: + """Execute one test case, record the result and handle failures. + + Args: + test_case_method: The test case method. + test_case_result: The test case level result object associated + with the current test case. + """ + test_case_name = test_case_method.__name__ + try: + self._logger.info(f"Starting test case execution: {test_case_name}") + test_case_method() + test_case_result.update(Result.PASS) + self._logger.info(f"Test case execution PASSED: {test_case_name}") + + except TestCaseVerifyError as e: + self._logger.exception(f"Test case execution FAILED: {test_case_name}") + test_case_result.update(Result.FAIL, e) + except Exception as e: + self._logger.exception(f"Test case execution ERROR: {test_case_name}") + test_case_result.update(Result.ERROR, e) + except KeyboardInterrupt: + self._logger.error(f"Test case execution INTERRUPTED by user: {test_case_name}") + test_case_result.update(Result.SKIP) + raise KeyboardInterrupt("Stop DTS") def _exit_dts(self) -> None: """Process all errors and exit with the proper exit code.""" diff --git a/dts/framework/test_suite.py b/dts/framework/test_suite.py index dfb391ffbd..b02fd36147 100644 --- a/dts/framework/test_suite.py +++ b/dts/framework/test_suite.py @@ -8,7 +8,6 @@ must be extended by subclasses which add test cases. The :class:`TestSuite` contains the basics needed by subclasses: - * Test suite and test case execution flow, * Testbed (SUT, TG) configuration, * Packet sending and verification, * Test case verification. @@ -28,27 +27,22 @@ from scapy.layers.l2 import Ether # type: ignore[import] from scapy.packet import Packet, Padding # type: ignore[import] -from .exception import ( - BlockingTestSuiteError, - ConfigurationError, - SSHTimeoutError, - TestCaseVerifyError, -) +from .exception import ConfigurationError, TestCaseVerifyError from .logger import DTSLOG, getLogger from .settings import SETTINGS -from .test_result import BuildTargetResult, Result, TestCaseResult, TestSuiteResult from .testbed_model import Port, PortLink, SutNode, TGNode from .utils import get_packet_summaries class TestSuite(object): - """The base class with methods for handling the basic flow of a test suite. + """The base class with building blocks needed by most test cases. 
* Test case filtering and collection, - * Test suite setup/cleanup, - * Test setup/cleanup, - * Test case execution, - * Error handling and results storage. + * Test suite setup/cleanup methods to override, + * Test case setup/cleanup methods to override, + * Test case verification, + * Testbed configuration, + * Traffic sending and verification. Test cases are implemented by subclasses. Test cases are all methods starting with ``test_``, further divided into performance test cases (starting with ``test_perf_``) @@ -60,10 +54,6 @@ class TestSuite(object): The union of both lists will be used. Any unknown test cases from the latter lists will be silently ignored. - If the :option:`--re-run` command line argument or the :envvar:`DTS_RERUN` environment variable - is set, in case of a test case failure, the test case will be executed again until it passes - or it fails that many times in addition of the first failure. - The methods named ``[set_up|tear_down]_[suite|test_case]`` should be overridden in subclasses if the appropriate test suite/test case fixtures are needed. @@ -82,8 +72,6 @@ class TestSuite(object): is_blocking: ClassVar[bool] = False _logger: DTSLOG _test_cases_to_run: list[str] - _func: bool - _result: TestSuiteResult _port_links: list[PortLink] _sut_port_ingress: Port _sut_port_egress: Port @@ -99,30 +87,23 @@ def __init__( sut_node: SutNode, tg_node: TGNode, test_cases: list[str], - func: bool, - build_target_result: BuildTargetResult, ): """Initialize the test suite testbed information and basic configuration. - Process what test cases to run, create the associated - :class:`~.test_result.TestSuiteResult`, find links between ports - and set up default IP addresses to be used when configuring them. + Process what test cases to run, find links between ports and set up + default IP addresses to be used when configuring them. Args: sut_node: The SUT node where the test suite will run. tg_node: The TG node where the test suite will run. test_cases: The list of test cases to execute. If empty, all test cases will be executed. - func: Whether to run functional tests. - build_target_result: The build target result this test suite is run in. """ self.sut_node = sut_node self.tg_node = tg_node self._logger = getLogger(self.__class__.__name__) self._test_cases_to_run = test_cases self._test_cases_to_run.extend(SETTINGS.test_cases) - self._func = func - self._result = build_target_result.add_test_suite(self.__class__.__name__) self._port_links = [] self._process_links() self._sut_port_ingress, self._tg_port_egress = ( @@ -384,62 +365,6 @@ def _verify_l3_packet(self, received_packet: IP, expected_packet: IP) -> bool: return False return True - def run(self) -> None: - """Set up, execute and tear down the whole suite. - - Test suite execution consists of running all test cases scheduled to be executed. - A test case run consists of setup, execution and teardown of said test case. - - Record the setup and the teardown and handle failures. - - The list of scheduled test cases is constructed when creating the :class:`TestSuite` object. 
- """ - test_suite_name = self.__class__.__name__ - - try: - self._logger.info(f"Starting test suite setup: {test_suite_name}") - self.set_up_suite() - self._result.update_setup(Result.PASS) - self._logger.info(f"Test suite setup successful: {test_suite_name}") - except Exception as e: - self._logger.exception(f"Test suite setup ERROR: {test_suite_name}") - self._result.update_setup(Result.ERROR, e) - - else: - self._execute_test_suite() - - finally: - try: - self.tear_down_suite() - self.sut_node.kill_cleanup_dpdk_apps() - self._result.update_teardown(Result.PASS) - except Exception as e: - self._logger.exception(f"Test suite teardown ERROR: {test_suite_name}") - self._logger.warning( - f"Test suite '{test_suite_name}' teardown failed, " - f"the next test suite may be affected." - ) - self._result.update_setup(Result.ERROR, e) - if len(self._result.get_errors()) > 0 and self.is_blocking: - raise BlockingTestSuiteError(test_suite_name) - - def _execute_test_suite(self) -> None: - """Execute all test cases scheduled to be executed in this suite.""" - if self._func: - for test_case_method in self._get_functional_test_cases(): - test_case_name = test_case_method.__name__ - test_case_result = self._result.add_test_case(test_case_name) - all_attempts = SETTINGS.re_run + 1 - attempt_nr = 1 - self._run_test_case(test_case_method, test_case_result) - while not test_case_result and attempt_nr < all_attempts: - attempt_nr += 1 - self._logger.info( - f"Re-running FAILED test case '{test_case_name}'. " - f"Attempt number {attempt_nr} out of {all_attempts}." - ) - self._run_test_case(test_case_method, test_case_result) - def _get_functional_test_cases(self) -> list[MethodType]: """Get all functional test cases defined in this TestSuite. @@ -471,65 +396,6 @@ def _should_be_executed(self, test_case_name: str, test_case_regex: str) -> bool return match - def _run_test_case( - self, test_case_method: MethodType, test_case_result: TestCaseResult - ) -> None: - """Setup, execute and teardown a test case in this suite. - - Record the result of the setup and the teardown and handle failures. - """ - test_case_name = test_case_method.__name__ - - try: - # run set_up function for each case - self.set_up_test_case() - test_case_result.update_setup(Result.PASS) - except SSHTimeoutError as e: - self._logger.exception(f"Test case setup FAILED: {test_case_name}") - test_case_result.update_setup(Result.FAIL, e) - except Exception as e: - self._logger.exception(f"Test case setup ERROR: {test_case_name}") - test_case_result.update_setup(Result.ERROR, e) - - else: - # run test case if setup was successful - self._execute_test_case(test_case_method, test_case_result) - - finally: - try: - self.tear_down_test_case() - test_case_result.update_teardown(Result.PASS) - except Exception as e: - self._logger.exception(f"Test case teardown ERROR: {test_case_name}") - self._logger.warning( - f"Test case '{test_case_name}' teardown failed, " - f"the next test case may be affected." 
- ) - test_case_result.update_teardown(Result.ERROR, e) - test_case_result.update(Result.ERROR) - - def _execute_test_case( - self, test_case_method: MethodType, test_case_result: TestCaseResult - ) -> None: - """Execute one test case, record the result and handle failures.""" - test_case_name = test_case_method.__name__ - try: - self._logger.info(f"Starting test case execution: {test_case_name}") - test_case_method() - test_case_result.update(Result.PASS) - self._logger.info(f"Test case execution PASSED: {test_case_name}") - - except TestCaseVerifyError as e: - self._logger.exception(f"Test case execution FAILED: {test_case_name}") - test_case_result.update(Result.FAIL, e) - except Exception as e: - self._logger.exception(f"Test case execution ERROR: {test_case_name}") - test_case_result.update(Result.ERROR, e) - except KeyboardInterrupt: - self._logger.error(f"Test case execution INTERRUPTED by user: {test_case_name}") - test_case_result.update(Result.SKIP) - raise KeyboardInterrupt("Stop DTS") - def get_test_suites(testsuite_module_path: str) -> list[type[TestSuite]]: r"""Find all :class:`TestSuite`\s in a Python module. From patchwork Fri Mar 1 10:55:18 2024 X-Patchwork-Submitter: =?utf-8?q?Juraj_Linke=C5=A1?= X-Patchwork-Id: 137668 X-Patchwork-Delegate: thomas@monjalon.net
From: =?utf-8?q?Juraj_Linke=C5=A1?= To: thomas@monjalon.net, Honnappa.Nagarahalli@arm.com, jspewock@iol.unh.edu, probb@iol.unh.edu, paul.szczepanek@arm.com, Luca.Vizzarro@arm.com, npratte@iol.unh.edu Cc: dev@dpdk.org, =?utf-8?q?Juraj_Linke=C5=A1?= Subject: [PATCH v4 3/7] dts: filter test suites in executions Date: Fri, 1 Mar 2024 11:55:18 +0100 Message-Id: <20240301105522.79870-4-juraj.linkes@pantheon.tech> In-Reply-To: <20240301105522.79870-1-juraj.linkes@pantheon.tech> References: <20231220103331.60888-1-juraj.linkes@pantheon.tech> <20240301105522.79870-1-juraj.linkes@pantheon.tech> We're currently filtering which test cases to run after some setup steps, such as the DPDK build, have already been taken. This prevents us from marking the test suites and cases that were supposed to run as blocked when an earlier setup fails, as that information is not available at that time. To remedy this, move the filtering to the beginning of each execution. This is the first action taken in each execution, and if we can't filter the test cases, such as due to invalid inputs, we abort the whole execution. No test suites or cases will be marked as blocked, as we don't know which were supposed to run. On top of that, the filtering has so far taken place in the TestSuite class, which should only concern itself with test suite and test case logic, not the processing behind the scenes. The logic has been moved to DTSRunner, which should do all the processing needed to run test suites. The filtering itself introduces a few changes/assumptions which are more sensible than before: 1. Assumption: There is just one TestSuite child class in each test suite module. This was an implicit assumption before, as we couldn't specify the TestSuite classes in the test run configuration, just the modules. The name of the TestSuite child class starts with "Test" followed by the module name converted to CamelCase. 2. Unknown test cases specified both in the test run configuration and in the environment variable/command line argument are no longer silently ignored. This is a quality of life improvement for users, as they could easily be unaware that some of their test cases were being silently skipped. Also, a change in the code triggers a pycodestyle warning and an error: [E] E203 whitespace before ':' [W] W503 line break before binary operator These two checks are not PEP8 compliant, so they're disabled.
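[Editorial illustration, not part of the patch: a minimal sketch of the module-name-to-class-name mapping described in assumption 1, mirroring the lookup implemented in DTSRunner._get_test_suite_class in the diff below. The helper name is illustrative only; the example module names come from the existing TestSuite_os_udp.py and TestSuite_smoke_tests.py suites.]

    def expected_suite_class_name(module_name: str) -> str:
        # "os_udp" -> "TestOsUdp", "smoke_tests" -> "TestSmokeTests"
        return "Test" + "".join(word.capitalize() for word in module_name.split("_"))

If no class with the expected name is found in the module, the runner raises a ConfigurationError instead of silently skipping the suite.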
Signed-off-by: Juraj Linkeš --- dts/framework/config/__init__.py | 24 +- dts/framework/config/conf_yaml_schema.json | 2 +- dts/framework/runner.py | 433 +++++++++++++++------ dts/framework/settings.py | 3 +- dts/framework/test_result.py | 34 ++ dts/framework/test_suite.py | 85 +--- dts/pyproject.toml | 3 + dts/tests/TestSuite_os_udp.py | 2 +- dts/tests/TestSuite_smoke_tests.py | 2 +- 9 files changed, 390 insertions(+), 198 deletions(-) diff --git a/dts/framework/config/__init__.py b/dts/framework/config/__init__.py index 62eded7f04..c6a93b3b89 100644 --- a/dts/framework/config/__init__.py +++ b/dts/framework/config/__init__.py @@ -36,7 +36,7 @@ import json import os.path import pathlib -from dataclasses import dataclass +from dataclasses import dataclass, fields from enum import auto, unique from typing import Union @@ -506,6 +506,28 @@ def from_dict( vdevs=vdevs, ) + def copy_and_modify(self, **kwargs) -> "ExecutionConfiguration": + """Create a shallow copy with any of the fields modified. + + The only new data are those passed to this method. + The rest are copied from the object's fields calling the method. + + Args: + **kwargs: The names and types of keyword arguments are defined + by the fields of the :class:`ExecutionConfiguration` class. + + Returns: + The copied and modified execution configuration. + """ + new_config = {} + for field in fields(self): + if field.name in kwargs: + new_config[field.name] = kwargs[field.name] + else: + new_config[field.name] = getattr(self, field.name) + + return ExecutionConfiguration(**new_config) + @dataclass(slots=True, frozen=True) class Configuration: diff --git a/dts/framework/config/conf_yaml_schema.json b/dts/framework/config/conf_yaml_schema.json index 84e45fe3c2..051b079fe4 100644 --- a/dts/framework/config/conf_yaml_schema.json +++ b/dts/framework/config/conf_yaml_schema.json @@ -197,7 +197,7 @@ }, "cases": { "type": "array", - "description": "If specified, only this subset of test suite's test cases will be run. Unknown test cases will be silently ignored.", + "description": "If specified, only this subset of test suite's test cases will be run.", "items": { "type": "string" }, diff --git a/dts/framework/runner.py b/dts/framework/runner.py index 933685d638..5f6bcbbb86 100644 --- a/dts/framework/runner.py +++ b/dts/framework/runner.py @@ -17,17 +17,27 @@ and the test case stage runs test cases individually. """ +import importlib +import inspect import logging +import re import sys from types import MethodType +from typing import Iterable from .config import ( BuildTargetConfiguration, + Configuration, ExecutionConfiguration, TestSuiteConfig, load_config, ) -from .exception import BlockingTestSuiteError, SSHTimeoutError, TestCaseVerifyError +from .exception import ( + BlockingTestSuiteError, + ConfigurationError, + SSHTimeoutError, + TestCaseVerifyError, +) from .logger import DTSLOG, getLogger from .settings import SETTINGS from .test_result import ( @@ -37,8 +47,9 @@ Result, TestCaseResult, TestSuiteResult, + TestSuiteWithCases, ) -from .test_suite import TestSuite, get_test_suites +from .test_suite import TestSuite from .testbed_model import SutNode, TGNode @@ -59,13 +70,23 @@ class DTSRunner: given execution, the next execution begins. 
""" + _configuration: Configuration _logger: DTSLOG _result: DTSResult + _test_suite_class_prefix: str + _test_suite_module_prefix: str + _func_test_case_regex: str + _perf_test_case_regex: str def __init__(self): - """Initialize the instance with logger and result.""" + """Initialize the instance with configuration, logger, result and string constants.""" + self._configuration = load_config() self._logger = getLogger("DTSRunner") self._result = DTSResult(self._logger) + self._test_suite_class_prefix = "Test" + self._test_suite_module_prefix = "tests.TestSuite_" + self._func_test_case_regex = r"test_(?!perf_)" + self._perf_test_case_regex = r"test_perf_" def run(self): """Run all build targets in all executions from the test run configuration. @@ -106,29 +127,32 @@ def run(self): try: # check the python version of the server that runs dts self._check_dts_python_version() + self._result.update_setup(Result.PASS) # for all Execution sections - for execution in load_config().executions: - sut_node = sut_nodes.get(execution.system_under_test_node.name) - tg_node = tg_nodes.get(execution.traffic_generator_node.name) - + for execution in self._configuration.executions: + self._logger.info( + f"Running execution with SUT '{execution.system_under_test_node.name}'." + ) + execution_result = self._result.add_execution(execution.system_under_test_node) + # we don't want to modify the original config, so create a copy + execution_test_suites = list(execution.test_suites) + if not execution.skip_smoke_tests: + execution_test_suites[:0] = [TestSuiteConfig.from_dict("smoke_tests")] try: - if not sut_node: - sut_node = SutNode(execution.system_under_test_node) - sut_nodes[sut_node.name] = sut_node - if not tg_node: - tg_node = TGNode(execution.traffic_generator_node) - tg_nodes[tg_node.name] = tg_node - self._result.update_setup(Result.PASS) + test_suites_with_cases = self._get_test_suites_with_cases( + execution_test_suites, execution.func, execution.perf + ) except Exception as e: - failed_node = execution.system_under_test_node.name - if sut_node: - failed_node = execution.traffic_generator_node.name - self._logger.exception(f"The Creation of node {failed_node} failed.") - self._result.update_setup(Result.FAIL, e) + self._logger.exception( + f"Invalid test suite configuration found: " f"{execution_test_suites}." + ) + execution_result.update_setup(Result.FAIL, e) else: - self._run_execution(sut_node, tg_node, execution) + self._connect_nodes_and_run_execution( + sut_nodes, tg_nodes, execution, execution_result, test_suites_with_cases + ) except Exception as e: self._logger.exception("An unexpected error has occurred.") @@ -163,11 +187,207 @@ def _check_dts_python_version(self) -> None: ) self._logger.warning("Please use Python >= 3.10 instead.") + def _get_test_suites_with_cases( + self, + test_suite_configs: list[TestSuiteConfig], + func: bool, + perf: bool, + ) -> list[TestSuiteWithCases]: + """Test suites with test cases discovery. + + The test suites with test cases defined in the user configuration are discovered + and stored for future use so that we don't import the modules twice and so that + the list of test suites with test cases is available for recording right away. + + Args: + test_suite_configs: Test suite configurations. + func: Whether to include functional test cases in the final list. + perf: Whether to include performance test cases in the final list. + + Returns: + The discovered test suites, each with test cases. 
+ """ + test_suites_with_cases = [] + + for test_suite_config in test_suite_configs: + test_suite_class = self._get_test_suite_class(test_suite_config.test_suite) + test_cases = [] + func_test_cases, perf_test_cases = self._filter_test_cases( + test_suite_class, set(test_suite_config.test_cases + SETTINGS.test_cases) + ) + if func: + test_cases.extend(func_test_cases) + if perf: + test_cases.extend(perf_test_cases) + + test_suites_with_cases.append( + TestSuiteWithCases(test_suite_class=test_suite_class, test_cases=test_cases) + ) + + return test_suites_with_cases + + def _get_test_suite_class(self, module_name: str) -> type[TestSuite]: + """Find the :class:`TestSuite` class in `module_name`. + + The full module name is `module_name` prefixed with `self._test_suite_module_prefix`. + The module name is a standard filename with words separated with underscores. + Search the `module_name` for a :class:`TestSuite` class which starts + with `self._test_suite_class_prefix`, continuing with CamelCase `module_name`. + The first matching class is returned. + + The CamelCase convention applies to abbreviations, acronyms, initialisms and so on:: + + OS -> Os + TCP -> Tcp + + Args: + module_name: The module name without prefix where to search for the test suite. + + Returns: + The found test suite class. + + Raises: + ConfigurationError: If the corresponding module is not found or + a valid :class:`TestSuite` is not found in the module. + """ + + def is_test_suite(object) -> bool: + """Check whether `object` is a :class:`TestSuite`. + + The `object` is a subclass of :class:`TestSuite`, but not :class:`TestSuite` itself. + + Args: + object: The object to be checked. + + Returns: + :data:`True` if `object` is a subclass of `TestSuite`. + """ + try: + if issubclass(object, TestSuite) and object is not TestSuite: + return True + except TypeError: + return False + return False + + testsuite_module_path = f"{self._test_suite_module_prefix}{module_name}" + try: + test_suite_module = importlib.import_module(testsuite_module_path) + except ModuleNotFoundError as e: + raise ConfigurationError( + f"Test suite module '{testsuite_module_path}' not found." + ) from e + + camel_case_suite_name = "".join( + [suite_word.capitalize() for suite_word in module_name.split("_")] + ) + full_suite_name_to_find = f"{self._test_suite_class_prefix}{camel_case_suite_name}" + for class_name, class_obj in inspect.getmembers(test_suite_module, is_test_suite): + if class_name == full_suite_name_to_find: + return class_obj + raise ConfigurationError( + f"Couldn't find any valid test suites in {test_suite_module.__name__}." + ) + + def _filter_test_cases( + self, test_suite_class: type[TestSuite], test_cases_to_run: set[str] + ) -> tuple[list[MethodType], list[MethodType]]: + """Filter `test_cases_to_run` from `test_suite_class`. + + There are two rounds of filtering if `test_cases_to_run` is not empty. + The first filters `test_cases_to_run` from all methods of `test_suite_class`. + Then the methods are separated into functional and performance test cases. + If a method matches neither the functional nor performance name prefix, it's an error. + + Args: + test_suite_class: The class of the test suite. + test_cases_to_run: Test case names to filter from `test_suite_class`. + If empty, return all matching test cases. + + Returns: + A list of test case methods that should be executed. 
+ + Raises: + ConfigurationError: If a test case from `test_cases_to_run` is not found + or it doesn't match either the functional nor performance name prefix. + """ + func_test_cases = [] + perf_test_cases = [] + name_method_tuples = inspect.getmembers(test_suite_class, inspect.isfunction) + if test_cases_to_run: + name_method_tuples = [ + (name, method) for name, method in name_method_tuples if name in test_cases_to_run + ] + if len(name_method_tuples) < len(test_cases_to_run): + missing_test_cases = test_cases_to_run - {name for name, _ in name_method_tuples} + raise ConfigurationError( + f"Test cases {missing_test_cases} not found among methods " + f"of {test_suite_class.__name__}." + ) + + for test_case_name, test_case_method in name_method_tuples: + if re.match(self._func_test_case_regex, test_case_name): + func_test_cases.append(test_case_method) + elif re.match(self._perf_test_case_regex, test_case_name): + perf_test_cases.append(test_case_method) + elif test_cases_to_run: + raise ConfigurationError( + f"Method '{test_case_name}' matches neither " + f"a functional nor a performance test case name." + ) + + return func_test_cases, perf_test_cases + + def _connect_nodes_and_run_execution( + self, + sut_nodes: dict[str, SutNode], + tg_nodes: dict[str, TGNode], + execution: ExecutionConfiguration, + execution_result: ExecutionResult, + test_suites_with_cases: Iterable[TestSuiteWithCases], + ) -> None: + """Connect nodes, then continue to run the given execution. + + Connect the :class:`SutNode` and the :class:`TGNode` of this `execution`. + If either has already been connected, it's going to be in either `sut_nodes` or `tg_nodes`, + respectively. + If not, connect and add the node to the respective `sut_nodes` or `tg_nodes` :class:`dict`. + + Args: + sut_nodes: A dictionary storing connected/to be connected SUT nodes. + tg_nodes: A dictionary storing connected/to be connected TG nodes. + execution: An execution's test run configuration. + execution_result: The execution's result. + test_suites_with_cases: The test suites with test cases to run. + """ + sut_node = sut_nodes.get(execution.system_under_test_node.name) + tg_node = tg_nodes.get(execution.traffic_generator_node.name) + + try: + if not sut_node: + sut_node = SutNode(execution.system_under_test_node) + sut_nodes[sut_node.name] = sut_node + if not tg_node: + tg_node = TGNode(execution.traffic_generator_node) + tg_nodes[tg_node.name] = tg_node + except Exception as e: + failed_node = execution.system_under_test_node.name + if sut_node: + failed_node = execution.traffic_generator_node.name + self._logger.exception(f"The Creation of node {failed_node} failed.") + execution_result.update_setup(Result.FAIL, e) + + else: + self._run_execution( + sut_node, tg_node, execution, execution_result, test_suites_with_cases + ) + def _run_execution( self, sut_node: SutNode, tg_node: TGNode, execution: ExecutionConfiguration, + execution_result: ExecutionResult, + test_suites_with_cases: Iterable[TestSuiteWithCases], ) -> None: """Run the given execution. @@ -178,11 +398,11 @@ def _run_execution( sut_node: The execution's SUT node. tg_node: The execution's TG node. execution: An execution's test run configuration. + execution_result: The execution's result. + test_suites_with_cases: The test suites with test cases to run. 
""" self._logger.info(f"Running execution with SUT '{execution.system_under_test_node.name}'.") - execution_result = self._result.add_execution(sut_node.config) execution_result.add_sut_info(sut_node.node_info) - try: sut_node.set_up_execution(execution) execution_result.update_setup(Result.PASS) @@ -192,7 +412,10 @@ def _run_execution( else: for build_target in execution.build_targets: - self._run_build_target(sut_node, tg_node, build_target, execution, execution_result) + build_target_result = execution_result.add_build_target(build_target) + self._run_build_target( + sut_node, tg_node, build_target, build_target_result, test_suites_with_cases + ) finally: try: @@ -207,8 +430,8 @@ def _run_build_target( sut_node: SutNode, tg_node: TGNode, build_target: BuildTargetConfiguration, - execution: ExecutionConfiguration, - execution_result: ExecutionResult, + build_target_result: BuildTargetResult, + test_suites_with_cases: Iterable[TestSuiteWithCases], ) -> None: """Run the given build target. @@ -220,11 +443,11 @@ def _run_build_target( sut_node: The execution's sut node. tg_node: The execution's tg node. build_target: A build target's test run configuration. - execution: The build target's execution's test run configuration. - execution_result: The execution level result object associated with the execution. + build_target_result: The build target level result object associated + with the current build target. + test_suites_with_cases: The test suites with test cases to run. """ self._logger.info(f"Running build target '{build_target.name}'.") - build_target_result = execution_result.add_build_target(build_target) try: sut_node.set_up_build_target(build_target) @@ -236,7 +459,7 @@ def _run_build_target( build_target_result.update_setup(Result.FAIL, e) else: - self._run_test_suites(sut_node, tg_node, execution, build_target_result) + self._run_test_suites(sut_node, tg_node, build_target_result, test_suites_with_cases) finally: try: @@ -250,10 +473,10 @@ def _run_test_suites( self, sut_node: SutNode, tg_node: TGNode, - execution: ExecutionConfiguration, build_target_result: BuildTargetResult, + test_suites_with_cases: Iterable[TestSuiteWithCases], ) -> None: - """Run the execution's (possibly a subset of) test suites using the current build target. + """Run `test_suites_with_cases` with the current build target. The method assumes the build target we're testing has already been built on the SUT node. The current build target thus corresponds to the current DPDK build present on the SUT node. @@ -264,22 +487,20 @@ def _run_test_suites( Args: sut_node: The execution's SUT node. tg_node: The execution's TG node. - execution: The execution's test run configuration associated - with the current build target. build_target_result: The build target level result object associated with the current build target. + test_suites_with_cases: The test suites with test cases to run. 
""" end_build_target = False - if not execution.skip_smoke_tests: - execution.test_suites[:0] = [TestSuiteConfig.from_dict("smoke_tests")] - for test_suite_config in execution.test_suites: + for test_suite_with_cases in test_suites_with_cases: + test_suite_result = build_target_result.add_test_suite( + test_suite_with_cases.test_suite_class.__name__ + ) try: - self._run_test_suite_module( - sut_node, tg_node, execution, build_target_result, test_suite_config - ) + self._run_test_suite(sut_node, tg_node, test_suite_result, test_suite_with_cases) except BlockingTestSuiteError as e: self._logger.exception( - f"An error occurred within {test_suite_config.test_suite}. " + f"An error occurred within {test_suite_with_cases.test_suite_class.__name__}. " "Skipping build target..." ) self._result.add_error(e) @@ -288,15 +509,14 @@ def _run_test_suites( if end_build_target: break - def _run_test_suite_module( + def _run_test_suite( self, sut_node: SutNode, tg_node: TGNode, - execution: ExecutionConfiguration, - build_target_result: BuildTargetResult, - test_suite_config: TestSuiteConfig, + test_suite_result: TestSuiteResult, + test_suite_with_cases: TestSuiteWithCases, ) -> None: - """Set up, execute and tear down all test suites in a single test suite module. + """Set up, execute and tear down `test_suite_with_cases`. The method assumes the build target we're testing has already been built on the SUT node. The current build target thus corresponds to the current DPDK build present on the SUT node. @@ -306,92 +526,79 @@ def _run_test_suite_module( Record the setup and the teardown and handle failures. - The test cases to execute are discovered when creating the :class:`TestSuite` object. - Args: sut_node: The execution's SUT node. tg_node: The execution's TG node. - execution: The execution's test run configuration associated - with the current build target. - build_target_result: The build target level result object associated - with the current build target. - test_suite_config: Test suite test run configuration specifying the test suite module - and possibly a subset of test cases of test suites in that module. + test_suite_result: The test suite level result object associated + with the current test suite. + test_suite_with_cases: The test suite with test cases to run. Raises: BlockingTestSuiteError: If a blocking test suite fails. 
""" + test_suite_name = test_suite_with_cases.test_suite_class.__name__ + test_suite = test_suite_with_cases.test_suite_class(sut_node, tg_node) try: - full_suite_path = f"tests.TestSuite_{test_suite_config.test_suite}" - test_suite_classes = get_test_suites(full_suite_path) - suites_str = ", ".join((x.__name__ for x in test_suite_classes)) - self._logger.debug(f"Found test suites '{suites_str}' in '{full_suite_path}'.") + self._logger.info(f"Starting test suite setup: {test_suite_name}") + test_suite.set_up_suite() + test_suite_result.update_setup(Result.PASS) + self._logger.info(f"Test suite setup successful: {test_suite_name}") except Exception as e: - self._logger.exception("An error occurred when searching for test suites.") - self._result.update_setup(Result.ERROR, e) + self._logger.exception(f"Test suite setup ERROR: {test_suite_name}") + test_suite_result.update_setup(Result.ERROR, e) else: - for test_suite_class in test_suite_classes: - test_suite = test_suite_class(sut_node, tg_node, test_suite_config.test_cases) - - test_suite_name = test_suite.__class__.__name__ - test_suite_result = build_target_result.add_test_suite(test_suite_name) - try: - self._logger.info(f"Starting test suite setup: {test_suite_name}") - test_suite.set_up_suite() - test_suite_result.update_setup(Result.PASS) - self._logger.info(f"Test suite setup successful: {test_suite_name}") - except Exception as e: - self._logger.exception(f"Test suite setup ERROR: {test_suite_name}") - test_suite_result.update_setup(Result.ERROR, e) - - else: - self._execute_test_suite(execution.func, test_suite, test_suite_result) - - finally: - try: - test_suite.tear_down_suite() - sut_node.kill_cleanup_dpdk_apps() - test_suite_result.update_teardown(Result.PASS) - except Exception as e: - self._logger.exception(f"Test suite teardown ERROR: {test_suite_name}") - self._logger.warning( - f"Test suite '{test_suite_name}' teardown failed, " - f"the next test suite may be affected." - ) - test_suite_result.update_setup(Result.ERROR, e) - if len(test_suite_result.get_errors()) > 0 and test_suite.is_blocking: - raise BlockingTestSuiteError(test_suite_name) + self._execute_test_suite( + test_suite, + test_suite_with_cases.test_cases, + test_suite_result, + ) + finally: + try: + test_suite.tear_down_suite() + sut_node.kill_cleanup_dpdk_apps() + test_suite_result.update_teardown(Result.PASS) + except Exception as e: + self._logger.exception(f"Test suite teardown ERROR: {test_suite_name}") + self._logger.warning( + f"Test suite '{test_suite_name}' teardown failed, " + "the next test suite may be affected." + ) + test_suite_result.update_setup(Result.ERROR, e) + if len(test_suite_result.get_errors()) > 0 and test_suite.is_blocking: + raise BlockingTestSuiteError(test_suite_name) def _execute_test_suite( - self, func: bool, test_suite: TestSuite, test_suite_result: TestSuiteResult + self, + test_suite: TestSuite, + test_cases: Iterable[MethodType], + test_suite_result: TestSuiteResult, ) -> None: - """Execute all discovered test cases in `test_suite`. + """Execute all `test_cases` in `test_suite`. If the :option:`--re-run` command line argument or the :envvar:`DTS_RERUN` environment variable is set, in case of a test case failure, the test case will be executed again until it passes or it fails that many times in addition of the first failure. Args: - func: Whether to execute functional test cases. test_suite: The test suite object. + test_cases: The list of test case methods. 
test_suite_result: The test suite level result object associated with the current test suite. """ - if func: - for test_case_method in test_suite._get_functional_test_cases(): - test_case_name = test_case_method.__name__ - test_case_result = test_suite_result.add_test_case(test_case_name) - all_attempts = SETTINGS.re_run + 1 - attempt_nr = 1 + for test_case_method in test_cases: + test_case_name = test_case_method.__name__ + test_case_result = test_suite_result.add_test_case(test_case_name) + all_attempts = SETTINGS.re_run + 1 + attempt_nr = 1 + self._run_test_case(test_suite, test_case_method, test_case_result) + while not test_case_result and attempt_nr < all_attempts: + attempt_nr += 1 + self._logger.info( + f"Re-running FAILED test case '{test_case_name}'. " + f"Attempt number {attempt_nr} out of {all_attempts}." + ) self._run_test_case(test_suite, test_case_method, test_case_result) - while not test_case_result and attempt_nr < all_attempts: - attempt_nr += 1 - self._logger.info( - f"Re-running FAILED test case '{test_case_name}'. " - f"Attempt number {attempt_nr} out of {all_attempts}." - ) - self._run_test_case(test_suite, test_case_method, test_case_result) def _run_test_case( self, @@ -399,7 +606,7 @@ def _run_test_case( test_case_method: MethodType, test_case_result: TestCaseResult, ) -> None: - """Setup, execute and teardown a test case in `test_suite`. + """Setup, execute and teardown `test_case_method` from `test_suite`. Record the result of the setup and the teardown and handle failures. @@ -424,7 +631,7 @@ def _run_test_case( else: # run test case if setup was successful - self._execute_test_case(test_case_method, test_case_result) + self._execute_test_case(test_suite, test_case_method, test_case_result) finally: try: @@ -440,11 +647,15 @@ def _run_test_case( test_case_result.update(Result.ERROR) def _execute_test_case( - self, test_case_method: MethodType, test_case_result: TestCaseResult + self, + test_suite: TestSuite, + test_case_method: MethodType, + test_case_result: TestCaseResult, ) -> None: - """Execute one test case, record the result and handle failures. + """Execute `test_case_method` from `test_suite`, record the result and handle failures. Args: + test_suite: The test suite object. test_case_method: The test case method. test_case_result: The test case level result object associated with the current test case. @@ -452,7 +663,7 @@ def _execute_test_case( test_case_name = test_case_method.__name__ try: self._logger.info(f"Starting test case execution: {test_case_name}") - test_case_method() + test_case_method(test_suite) test_case_result.update(Result.PASS) self._logger.info(f"Test case execution PASSED: {test_case_name}") diff --git a/dts/framework/settings.py b/dts/framework/settings.py index 609c8d0e62..2b8bfbe0ed 100644 --- a/dts/framework/settings.py +++ b/dts/framework/settings.py @@ -253,8 +253,7 @@ def _get_parser() -> argparse.ArgumentParser: "--test-cases", action=_env_arg("DTS_TESTCASES"), default="", - help="[DTS_TESTCASES] Comma-separated list of test cases to execute. 
" - "Unknown test cases will be silently ignored.", + help="[DTS_TESTCASES] Comma-separated list of test cases to execute.", ) parser.add_argument( diff --git a/dts/framework/test_result.py b/dts/framework/test_result.py index 4467749a9d..075195fd5b 100644 --- a/dts/framework/test_result.py +++ b/dts/framework/test_result.py @@ -25,7 +25,9 @@ import os.path from collections.abc import MutableSequence +from dataclasses import dataclass from enum import Enum, auto +from types import MethodType from .config import ( OS, @@ -36,10 +38,42 @@ CPUType, NodeConfiguration, NodeInfo, + TestSuiteConfig, ) from .exception import DTSError, ErrorSeverity from .logger import DTSLOG from .settings import SETTINGS +from .test_suite import TestSuite + + +@dataclass(slots=True, frozen=True) +class TestSuiteWithCases: + """A test suite class with test case methods. + + An auxiliary class holding a test case class with test case methods. The intended use of this + class is to hold a subset of test cases (which could be all test cases) because we don't have + all the data to instantiate the class at the point of inspection. The knowledge of this subset + is needed in case an error occurs before the class is instantiated and we need to record + which test cases were blocked by the error. + + Attributes: + test_suite_class: The test suite class. + test_cases: The test case methods. + """ + + test_suite_class: type[TestSuite] + test_cases: list[MethodType] + + def create_config(self) -> TestSuiteConfig: + """Generate a :class:`TestSuiteConfig` from the stored test suite with test cases. + + Returns: + The :class:`TestSuiteConfig` representation. + """ + return TestSuiteConfig( + test_suite=self.test_suite_class.__name__, + test_cases=[test_case.__name__ for test_case in self.test_cases], + ) class Result(Enum): diff --git a/dts/framework/test_suite.py b/dts/framework/test_suite.py index b02fd36147..f9fe88093e 100644 --- a/dts/framework/test_suite.py +++ b/dts/framework/test_suite.py @@ -11,25 +11,17 @@ * Testbed (SUT, TG) configuration, * Packet sending and verification, * Test case verification. - -The module also defines a function, :func:`get_test_suites`, -for gathering test suites from a Python module. """ -import importlib -import inspect -import re from ipaddress import IPv4Interface, IPv6Interface, ip_interface -from types import MethodType -from typing import Any, ClassVar, Union +from typing import ClassVar, Union from scapy.layers.inet import IP # type: ignore[import] from scapy.layers.l2 import Ether # type: ignore[import] from scapy.packet import Packet, Padding # type: ignore[import] -from .exception import ConfigurationError, TestCaseVerifyError +from .exception import TestCaseVerifyError from .logger import DTSLOG, getLogger -from .settings import SETTINGS from .testbed_model import Port, PortLink, SutNode, TGNode from .utils import get_packet_summaries @@ -37,7 +29,6 @@ class TestSuite(object): """The base class with building blocks needed by most test cases. - * Test case filtering and collection, * Test suite setup/cleanup methods to override, * Test case setup/cleanup methods to override, * Test case verification, @@ -71,7 +62,6 @@ class TestSuite(object): #: will block the execution of all subsequent test suites in the current build target. 
is_blocking: ClassVar[bool] = False _logger: DTSLOG - _test_cases_to_run: list[str] _port_links: list[PortLink] _sut_port_ingress: Port _sut_port_egress: Port @@ -86,24 +76,19 @@ def __init__( self, sut_node: SutNode, tg_node: TGNode, - test_cases: list[str], ): """Initialize the test suite testbed information and basic configuration. - Process what test cases to run, find links between ports and set up - default IP addresses to be used when configuring them. + Find links between ports and set up default IP addresses to be used when + configuring them. Args: sut_node: The SUT node where the test suite will run. tg_node: The TG node where the test suite will run. - test_cases: The list of test cases to execute. - If empty, all test cases will be executed. """ self.sut_node = sut_node self.tg_node = tg_node self._logger = getLogger(self.__class__.__name__) - self._test_cases_to_run = test_cases - self._test_cases_to_run.extend(SETTINGS.test_cases) self._port_links = [] self._process_links() self._sut_port_ingress, self._tg_port_egress = ( @@ -364,65 +349,3 @@ def _verify_l3_packet(self, received_packet: IP, expected_packet: IP) -> bool: if received_packet.src != expected_packet.src or received_packet.dst != expected_packet.dst: return False return True - - def _get_functional_test_cases(self) -> list[MethodType]: - """Get all functional test cases defined in this TestSuite. - - Returns: - The list of functional test cases of this TestSuite. - """ - return self._get_test_cases(r"test_(?!perf_)") - - def _get_test_cases(self, test_case_regex: str) -> list[MethodType]: - """Return a list of test cases matching test_case_regex. - - Returns: - The list of test cases matching test_case_regex of this TestSuite. - """ - self._logger.debug(f"Searching for test cases in {self.__class__.__name__}.") - filtered_test_cases = [] - for test_case_name, test_case in inspect.getmembers(self, inspect.ismethod): - if self._should_be_executed(test_case_name, test_case_regex): - filtered_test_cases.append(test_case) - cases_str = ", ".join((x.__name__ for x in filtered_test_cases)) - self._logger.debug(f"Found test cases '{cases_str}' in {self.__class__.__name__}.") - return filtered_test_cases - - def _should_be_executed(self, test_case_name: str, test_case_regex: str) -> bool: - """Check whether the test case should be scheduled to be executed.""" - match = bool(re.match(test_case_regex, test_case_name)) - if self._test_cases_to_run: - return match and test_case_name in self._test_cases_to_run - - return match - - -def get_test_suites(testsuite_module_path: str) -> list[type[TestSuite]]: - r"""Find all :class:`TestSuite`\s in a Python module. - - Args: - testsuite_module_path: The path to the Python module. - - Returns: - The list of :class:`TestSuite`\s found within the Python module. - - Raises: - ConfigurationError: The test suite module was not found. 
- """ - - def is_test_suite(object: Any) -> bool: - try: - if issubclass(object, TestSuite) and object is not TestSuite: - return True - except TypeError: - return False - return False - - try: - testcase_module = importlib.import_module(testsuite_module_path) - except ModuleNotFoundError as e: - raise ConfigurationError(f"Test suite '{testsuite_module_path}' not found.") from e - return [ - test_suite_class - for _, test_suite_class in inspect.getmembers(testcase_module, is_test_suite) - ] diff --git a/dts/pyproject.toml b/dts/pyproject.toml index 28bd970ae4..8eb92b4f11 100644 --- a/dts/pyproject.toml +++ b/dts/pyproject.toml @@ -51,6 +51,9 @@ linters = "mccabe,pycodestyle,pydocstyle,pyflakes" format = "pylint" max_line_length = 100 +[tool.pylama.linter.pycodestyle] +ignore = "E203,W503" + [tool.pylama.linter.pydocstyle] convention = "google" diff --git a/dts/tests/TestSuite_os_udp.py b/dts/tests/TestSuite_os_udp.py index 2cf29d37bb..b4784dd95e 100644 --- a/dts/tests/TestSuite_os_udp.py +++ b/dts/tests/TestSuite_os_udp.py @@ -13,7 +13,7 @@ from framework.test_suite import TestSuite -class TestOSUdp(TestSuite): +class TestOsUdp(TestSuite): """IPv4 UDP OS routing test suite.""" def set_up_suite(self) -> None: diff --git a/dts/tests/TestSuite_smoke_tests.py b/dts/tests/TestSuite_smoke_tests.py index 5e2bac14bd..7b2a0e97f8 100644 --- a/dts/tests/TestSuite_smoke_tests.py +++ b/dts/tests/TestSuite_smoke_tests.py @@ -21,7 +21,7 @@ from framework.utils import REGEX_FOR_PCI_ADDRESS -class SmokeTests(TestSuite): +class TestSmokeTests(TestSuite): """DPDK and infrastructure smoke test suite. The test cases validate the most basic DPDK functionality needed for all other test suites. From patchwork Fri Mar 1 10:55:19 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Juraj_Linke=C5=A1?= X-Patchwork-Id: 137669 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 1D03C43BB1; Fri, 1 Mar 2024 11:55:59 +0100 (CET) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id E09A9433DC; Fri, 1 Mar 2024 11:55:32 +0100 (CET) Received: from mail-ed1-f54.google.com (mail-ed1-f54.google.com [209.85.208.54]) by mails.dpdk.org (Postfix) with ESMTP id 2C85C433CE for ; Fri, 1 Mar 2024 11:55:30 +0100 (CET) Received: by mail-ed1-f54.google.com with SMTP id 4fb4d7f45d1cf-5656e5754ccso2643493a12.0 for ; Fri, 01 Mar 2024 02:55:30 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=pantheon.tech; s=google; t=1709290530; x=1709895330; darn=dpdk.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=Xl86siJVlDGlX5Wkzr/K/eUz6vp6W3gCEYpA70LynMk=; b=NQH26d9nANOMoFkfPp2dj+9ILBhYnOqeS3oHO4UIDMYsMAKxOpF9E/PqqnviC2yAy+ MQzxvQYyE5FrXGQ1vE36pn9aOEShRwcT0hfQeLO7cD1qTmWUuByP1CPifbD9p4U1z1du l/KYpcE3uktTmWpKRJDBXavPuLrckfvrmAP0GDn/MiTWdthiwt8/ER3WyCMnO/kpd2Qh bCfUWtxVqCVzXGNMnlCnQhi1UXXN/sdhmLzZ7DOujjwowUfkmAz5hZNYnU4S3rNf3i+z 8KlguDC8XgvgQHSzJsobGAondryQiPXjg5P9fQ9sJJKAuzjqxUAB8kiXCGh/vzEgDR9r BlAg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1709290530; x=1709895330; h=content-transfer-encoding:mime-version:references:in-reply-to 
:message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=Xl86siJVlDGlX5Wkzr/K/eUz6vp6W3gCEYpA70LynMk=; b=ISlivYuP3T5D3FGdC9wkKz7ylFzsCtGNsPYZuEFpmTUBOXE3PE6ignA6gZYh9/CN9v ErPu8LNPSqGt8xS+oww5j7oshf48NuMGQBPa7YpSb5z2d+gAPNACHDTLBHV7qjxtk/ZA B+l8v34/n5+J9PxMQFpu7uKQI14T0pBq/8Ey2BI8J174eiNFbRT5jUanyW962+gzuivH 2iNFwG7oPDWSNvDrY6i2mBF/jgiRj9/Ic2Sr6gnSVLFiOglpwgyc0MKEuQaSaLvBDzq0 VWvOBpO3Jp2P5KHqmoWZa6gkC2XnBGlN7ZDOq8wSEPB3k2kqA+nk9H42MULYGL1tEQCD WnRQ== X-Gm-Message-State: AOJu0YyfL2bcX5ciqV3td9Y5WUIl2e7bDBz3lAiRgw6KIDxYA6y2YKlh ZVpo3eDrcoRGdFnRCv6nvYnUA+1rSd38XEOdjq/UUq/DelyepJjvhTkp6TtHXJKjBKeDYBknMLN fe6Y= X-Google-Smtp-Source: AGHT+IEOZ3POxRf8++2Vhsr3lZCupdpgba85rdDG6nKUbGfQX2IwhH1PNcBRlbHE7VGHvMr0tH2CvA== X-Received: by 2002:a50:d6d7:0:b0:566:1952:694c with SMTP id l23-20020a50d6d7000000b005661952694cmr1120236edj.20.1709290529483; Fri, 01 Mar 2024 02:55:29 -0800 (PST) Received: from jlinkes-PT-Latitude-5530.pantheon.local ([84.245.120.62]) by smtp.gmail.com with ESMTPSA id f12-20020a056402194c00b0056661ec3f24sm1461734edz.81.2024.03.01.02.55.28 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 01 Mar 2024 02:55:28 -0800 (PST) From: =?utf-8?q?Juraj_Linke=C5=A1?= To: thomas@monjalon.net, Honnappa.Nagarahalli@arm.com, jspewock@iol.unh.edu, probb@iol.unh.edu, paul.szczepanek@arm.com, Luca.Vizzarro@arm.com, npratte@iol.unh.edu Cc: dev@dpdk.org, =?utf-8?q?Juraj_Linke=C5=A1?= Subject: [PATCH v4 4/7] dts: reorganize test result Date: Fri, 1 Mar 2024 11:55:19 +0100 Message-Id: <20240301105522.79870-5-juraj.linkes@pantheon.tech> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20240301105522.79870-1-juraj.linkes@pantheon.tech> References: <20231220103331.60888-1-juraj.linkes@pantheon.tech> <20240301105522.79870-1-juraj.linkes@pantheon.tech> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org The current order of Result classes in the test_result.py module is guided by the needs of type hints, which is not as intuitively readable as ordering them by their occurrence in the code. The new order goes from the topmost level to the lowermost: BaseResult DTSResult ExecutionResult BuildTargetResult TestSuiteResult TestCaseResult This is the same order in which they're used in the runner module, and they're also used in this order between themselves in the test_result module. Signed-off-by: Juraj Linkeš --- dts/framework/test_result.py | 411 ++++++++++++++++++----------------- 1 file changed, 206 insertions(+), 205 deletions(-) diff --git a/dts/framework/test_result.py b/dts/framework/test_result.py index 075195fd5b..abdbafab10 100644 --- a/dts/framework/test_result.py +++ b/dts/framework/test_result.py @@ -28,6 +28,7 @@ from dataclasses import dataclass from enum import Enum, auto from types import MethodType +from typing import Union from .config import ( OS, @@ -129,58 +130,6 @@ def __bool__(self) -> bool: return bool(self.result) -class Statistics(dict):
- """ - - def __init__(self, dpdk_version: str | None): - """Extend the constructor with keys in which the data are stored. - - Args: - dpdk_version: The version of tested DPDK. - """ - super(Statistics, self).__init__() - for result in Result: - self[result.name] = 0 - self["PASS RATE"] = 0.0 - self["DPDK VERSION"] = dpdk_version - - def __iadd__(self, other: Result) -> "Statistics": - """Add a Result to the final count. - - Example: - stats: Statistics = Statistics() # empty Statistics - stats += Result.PASS # add a Result to `stats` - - Args: - other: The Result to add to this statistics object. - - Returns: - The modified statistics object. - """ - self[other.name] += 1 - self["PASS RATE"] = ( - float(self[Result.PASS.name]) * 100 / sum(self[result.name] for result in Result) - ) - return self - - def __str__(self) -> str: - """Each line contains the formatted key = value pair.""" - stats_str = "" - for key, value in self.items(): - stats_str += f"{key:<12} = {value}\n" - # according to docs, we should use \n when writing to text files - # on all platforms - return stats_str - - class BaseResult(object): """Common data and behavior of DTS results. @@ -245,7 +194,7 @@ def get_errors(self) -> list[Exception]: """ return self._get_setup_teardown_errors() + self._get_inner_errors() - def add_stats(self, statistics: Statistics) -> None: + def add_stats(self, statistics: "Statistics") -> None: """Collate stats from the whole result hierarchy. Args: @@ -255,91 +204,149 @@ def add_stats(self, statistics: Statistics) -> None: inner_result.add_stats(statistics) -class TestCaseResult(BaseResult, FixtureResult): - r"""The test case specific result. +class DTSResult(BaseResult): + """Stores environment information and test results from a DTS run. - Stores the result of the actual test case. This is done by adding an extra superclass - in :class:`FixtureResult`. The setup and teardown results are :class:`FixtureResult`\s and - the class is itself a record of the test case. + * Execution level information, such as testbed and the test suite list, + * Build target level information, such as compiler, target OS and cpu, + * Test suite and test case results, + * All errors that are caught and recorded during DTS execution. + + The information is stored hierarchically. This is the first level of the hierarchy + and as such is where the data form the whole hierarchy is collated or processed. + + The internal list stores the results of all executions. Attributes: - test_case_name: The test case name. + dpdk_version: The DPDK version to record. """ - test_case_name: str + dpdk_version: str | None + _logger: DTSLOG + _errors: list[Exception] + _return_code: ErrorSeverity + _stats_result: Union["Statistics", None] + _stats_filename: str - def __init__(self, test_case_name: str): - """Extend the constructor with `test_case_name`. + def __init__(self, logger: DTSLOG): + """Extend the constructor with top-level specifics. Args: - test_case_name: The test case's name. + logger: The logger instance the whole result will use. """ - super(TestCaseResult, self).__init__() - self.test_case_name = test_case_name + super(DTSResult, self).__init__() + self.dpdk_version = None + self._logger = logger + self._errors = [] + self._return_code = ErrorSeverity.NO_ERR + self._stats_result = None + self._stats_filename = os.path.join(SETTINGS.output_dir, "statistics.txt") - def update(self, result: Result, error: Exception | None = None) -> None: - """Update the test case result. 
+ def add_execution(self, sut_node: NodeConfiguration) -> "ExecutionResult": + """Add and return the inner result (execution). - This updates the result of the test case itself and doesn't affect - the results of the setup and teardown steps in any way. + Args: + sut_node: The SUT node's test run configuration. + + Returns: + The execution's result. + """ + execution_result = ExecutionResult(sut_node) + self._inner_results.append(execution_result) + return execution_result + + def add_error(self, error: Exception) -> None: + """Record an error that occurred outside any execution. Args: - result: The result of the test case. - error: The error that occurred in case of a failure. + error: The exception to record. """ - self.result = result - self.error = error + self._errors.append(error) - def _get_inner_errors(self) -> list[Exception]: - if self.error: - return [self.error] - return [] + def process(self) -> None: + """Process the data after a whole DTS run. - def add_stats(self, statistics: Statistics) -> None: - r"""Add the test case result to statistics. + The data is added to inner objects during runtime and this object is not updated + at that time. This requires us to process the inner data after it's all been gathered. - The base method goes through the hierarchy recursively and this method is here to stop - the recursion, as the :class:`TestCaseResult`\s are the leaves of the hierarchy tree. + The processing gathers all errors and the statistics of test case results. + """ + self._errors += self.get_errors() + if self._errors and self._logger: + self._logger.debug("Summary of errors:") + for error in self._errors: + self._logger.debug(repr(error)) - Args: - statistics: The :class:`Statistics` object where the stats will be added. + self._stats_result = Statistics(self.dpdk_version) + self.add_stats(self._stats_result) + with open(self._stats_filename, "w+") as stats_file: + stats_file.write(str(self._stats_result)) + + def get_return_code(self) -> int: + """Go through all stored Exceptions and return the final DTS error code. + + Returns: + The highest error code found. """ - statistics += self.result + for error in self._errors: + error_return_code = ErrorSeverity.GENERIC_ERR + if isinstance(error, DTSError): + error_return_code = error.severity - def __bool__(self) -> bool: - """The test case passed only if setup, teardown and the test case itself passed.""" - return bool(self.setup_result) and bool(self.teardown_result) and bool(self.result) + if error_return_code > self._return_code: + self._return_code = error_return_code + return int(self._return_code) -class TestSuiteResult(BaseResult): - """The test suite specific result. - The internal list stores the results of all test cases in a given test suite. +class ExecutionResult(BaseResult): + """The execution specific result. + + The internal list stores the results of all build targets in a given execution. Attributes: - suite_name: The test suite name. + sut_node: The SUT node used in the execution. + sut_os_name: The operating system of the SUT node. + sut_os_version: The operating system version of the SUT node. + sut_kernel_version: The operating system kernel version of the SUT node. """ - suite_name: str + sut_node: NodeConfiguration + sut_os_name: str + sut_os_version: str + sut_kernel_version: str - def __init__(self, suite_name: str): - """Extend the constructor with `suite_name`. + def __init__(self, sut_node: NodeConfiguration): + """Extend the constructor with the `sut_node`'s config. 
Args: - suite_name: The test suite's name. + sut_node: The SUT node's test run configuration used in the execution. """ - super(TestSuiteResult, self).__init__() - self.suite_name = suite_name + super(ExecutionResult, self).__init__() + self.sut_node = sut_node - def add_test_case(self, test_case_name: str) -> TestCaseResult: - """Add and return the inner result (test case). + def add_build_target(self, build_target: BuildTargetConfiguration) -> "BuildTargetResult": + """Add and return the inner result (build target). + + Args: + build_target: The build target's test run configuration. Returns: - The test case's result. + The build target's result. """ - test_case_result = TestCaseResult(test_case_name) - self._inner_results.append(test_case_result) - return test_case_result + build_target_result = BuildTargetResult(build_target) + self._inner_results.append(build_target_result) + return build_target_result + + def add_sut_info(self, sut_info: NodeInfo) -> None: + """Add SUT information gathered at runtime. + + Args: + sut_info: The additional SUT node information. + """ + self.sut_os_name = sut_info.os_name + self.sut_os_version = sut_info.os_version + self.sut_kernel_version = sut_info.kernel_version class BuildTargetResult(BaseResult): @@ -386,7 +393,7 @@ def add_build_target_info(self, versions: BuildTargetInfo) -> None: self.compiler_version = versions.compiler_version self.dpdk_version = versions.dpdk_version - def add_test_suite(self, test_suite_name: str) -> TestSuiteResult: + def add_test_suite(self, test_suite_name: str) -> "TestSuiteResult": """Add and return the inner result (test suite). Returns: @@ -397,146 +404,140 @@ def add_test_suite(self, test_suite_name: str) -> TestSuiteResult: return test_suite_result -class ExecutionResult(BaseResult): - """The execution specific result. +class TestSuiteResult(BaseResult): + """The test suite specific result. - The internal list stores the results of all build targets in a given execution. + The internal list stores the results of all test cases in a given test suite. Attributes: - sut_node: The SUT node used in the execution. - sut_os_name: The operating system of the SUT node. - sut_os_version: The operating system version of the SUT node. - sut_kernel_version: The operating system kernel version of the SUT node. + suite_name: The test suite name. """ - sut_node: NodeConfiguration - sut_os_name: str - sut_os_version: str - sut_kernel_version: str + suite_name: str - def __init__(self, sut_node: NodeConfiguration): - """Extend the constructor with the `sut_node`'s config. + def __init__(self, suite_name: str): + """Extend the constructor with `suite_name`. Args: - sut_node: The SUT node's test run configuration used in the execution. + suite_name: The test suite's name. """ - super(ExecutionResult, self).__init__() - self.sut_node = sut_node - - def add_build_target(self, build_target: BuildTargetConfiguration) -> BuildTargetResult: - """Add and return the inner result (build target). + super(TestSuiteResult, self).__init__() + self.suite_name = suite_name - Args: - build_target: The build target's test run configuration. + def add_test_case(self, test_case_name: str) -> "TestCaseResult": + """Add and return the inner result (test case). Returns: - The build target's result. - """ - build_target_result = BuildTargetResult(build_target) - self._inner_results.append(build_target_result) - return build_target_result - - def add_sut_info(self, sut_info: NodeInfo) -> None: - """Add SUT information gathered at runtime. 
- - Args: - sut_info: The additional SUT node information. + The test case's result. """ - self.sut_os_name = sut_info.os_name - self.sut_os_version = sut_info.os_version - self.sut_kernel_version = sut_info.kernel_version + test_case_result = TestCaseResult(test_case_name) + self._inner_results.append(test_case_result) + return test_case_result -class DTSResult(BaseResult): - """Stores environment information and test results from a DTS run. - - * Execution level information, such as testbed and the test suite list, - * Build target level information, such as compiler, target OS and cpu, - * Test suite and test case results, - * All errors that are caught and recorded during DTS execution. - - The information is stored hierarchically. This is the first level of the hierarchy - and as such is where the data form the whole hierarchy is collated or processed. +class TestCaseResult(BaseResult, FixtureResult): + r"""The test case specific result. - The internal list stores the results of all executions. + Stores the result of the actual test case. This is done by adding an extra superclass + in :class:`FixtureResult`. The setup and teardown results are :class:`FixtureResult`\s and + the class is itself a record of the test case. Attributes: - dpdk_version: The DPDK version to record. + test_case_name: The test case name. """ - dpdk_version: str | None - _logger: DTSLOG - _errors: list[Exception] - _return_code: ErrorSeverity - _stats_result: Statistics | None - _stats_filename: str + test_case_name: str - def __init__(self, logger: DTSLOG): - """Extend the constructor with top-level specifics. + def __init__(self, test_case_name: str): + """Extend the constructor with `test_case_name`. Args: - logger: The logger instance the whole result will use. + test_case_name: The test case's name. """ - super(DTSResult, self).__init__() - self.dpdk_version = None - self._logger = logger - self._errors = [] - self._return_code = ErrorSeverity.NO_ERR - self._stats_result = None - self._stats_filename = os.path.join(SETTINGS.output_dir, "statistics.txt") + super(TestCaseResult, self).__init__() + self.test_case_name = test_case_name - def add_execution(self, sut_node: NodeConfiguration) -> ExecutionResult: - """Add and return the inner result (execution). + def update(self, result: Result, error: Exception | None = None) -> None: + """Update the test case result. - Args: - sut_node: The SUT node's test run configuration. + This updates the result of the test case itself and doesn't affect + the results of the setup and teardown steps in any way. - Returns: - The execution's result. + Args: + result: The result of the test case. + error: The error that occurred in case of a failure. """ - execution_result = ExecutionResult(sut_node) - self._inner_results.append(execution_result) - return execution_result + self.result = result + self.error = error - def add_error(self, error: Exception) -> None: - """Record an error that occurred outside any execution. + def _get_inner_errors(self) -> list[Exception]: + if self.error: + return [self.error] + return [] + + def add_stats(self, statistics: "Statistics") -> None: + r"""Add the test case result to statistics. + + The base method goes through the hierarchy recursively and this method is here to stop + the recursion, as the :class:`TestCaseResult`\s are the leaves of the hierarchy tree. Args: - error: The exception to record. + statistics: The :class:`Statistics` object where the stats will be added. 
""" - self._errors.append(error) + statistics += self.result - def process(self) -> None: - """Process the data after a whole DTS run. + def __bool__(self) -> bool: + """The test case passed only if setup, teardown and the test case itself passed.""" + return bool(self.setup_result) and bool(self.teardown_result) and bool(self.result) - The data is added to inner objects during runtime and this object is not updated - at that time. This requires us to process the inner data after it's all been gathered. - The processing gathers all errors and the statistics of test case results. +class Statistics(dict): + """How many test cases ended in which result state along some other basic information. + + Subclassing :class:`dict` provides a convenient way to format the data. + + The data are stored in the following keys: + + * **PASS RATE** (:class:`int`) -- The FAIL/PASS ratio of all test cases. + * **DPDK VERSION** (:class:`str`) -- The tested DPDK version. + """ + + def __init__(self, dpdk_version: str | None): + """Extend the constructor with keys in which the data are stored. + + Args: + dpdk_version: The version of tested DPDK. """ - self._errors += self.get_errors() - if self._errors and self._logger: - self._logger.debug("Summary of errors:") - for error in self._errors: - self._logger.debug(repr(error)) + super(Statistics, self).__init__() + for result in Result: + self[result.name] = 0 + self["PASS RATE"] = 0.0 + self["DPDK VERSION"] = dpdk_version - self._stats_result = Statistics(self.dpdk_version) - self.add_stats(self._stats_result) - with open(self._stats_filename, "w+") as stats_file: - stats_file.write(str(self._stats_result)) + def __iadd__(self, other: Result) -> "Statistics": + """Add a Result to the final count. - def get_return_code(self) -> int: - """Go through all stored Exceptions and return the final DTS error code. + Example: + stats: Statistics = Statistics() # empty Statistics + stats += Result.PASS # add a Result to `stats` + + Args: + other: The Result to add to this statistics object. Returns: - The highest error code found. + The modified statistics object. 
""" - for error in self._errors: - error_return_code = ErrorSeverity.GENERIC_ERR - if isinstance(error, DTSError): - error_return_code = error.severity - - if error_return_code > self._return_code: - self._return_code = error_return_code + self[other.name] += 1 + self["PASS RATE"] = ( + float(self[Result.PASS.name]) * 100 / sum(self[result.name] for result in Result) + ) + return self - return int(self._return_code) + def __str__(self) -> str: + """Each line contains the formatted key = value pair.""" + stats_str = "" + for key, value in self.items(): + stats_str += f"{key:<12} = {value}\n" + # according to docs, we should use \n when writing to text files + # on all platforms + return stats_str From patchwork Fri Mar 1 10:55:20 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Juraj_Linke=C5=A1?= X-Patchwork-Id: 137670 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id D96E543BB1; Fri, 1 Mar 2024 11:56:07 +0100 (CET) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 22C2A433E0; Fri, 1 Mar 2024 11:55:34 +0100 (CET) Received: from mail-ed1-f49.google.com (mail-ed1-f49.google.com [209.85.208.49]) by mails.dpdk.org (Postfix) with ESMTP id 8ADA8433C1 for ; Fri, 1 Mar 2024 11:55:31 +0100 (CET) Received: by mail-ed1-f49.google.com with SMTP id 4fb4d7f45d1cf-563cb3ba9daso2657016a12.3 for ; Fri, 01 Mar 2024 02:55:31 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=pantheon.tech; s=google; t=1709290531; x=1709895331; darn=dpdk.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=lgi8/GLgdFdN8/MgBNoep+IflRBRy9l2DQ/JD62PMuA=; b=kHYKFg9/ioZgsgFR8t1+dEGYkaicWJR1pCuE+yQ2plUm6J0XV9XQBeJspLiF3fkQ+w WmMSZewiviTrg11RUxz6HANMadwXKZcEgWXpBRlvFV/nOpNpwMPUis0F658tuspp+YJq LClXcY7m28VCkWOAEPVkSplRkIrwpzAAEBksM1zjON+B8PS31R9Vu0CftIA3K6sxQl/O Ysqtbn2W90j93MS7+91tXKQKVh2n74AxvYwI0kEiWHcHDqntvMvo67PIOcOhxsGBvZXN hp1rFsRgpUgtzhYGNEnVh8mPyj8T1dGR0MKtSo6iH6X6aU+xN16ddoqOs0sGVHwcBCzY GH6A== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1709290531; x=1709895331; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=lgi8/GLgdFdN8/MgBNoep+IflRBRy9l2DQ/JD62PMuA=; b=Is2LU9Oi33r6PPH95izkGN19t7J9ptLya5Cz+mfvEI9or47CjLoEOsdd/j+P6v8oce NbaDKHPBEawUH4UUTPzJFbTcZh08glcjwCE/0W8B0vKtB4Q/xZYDTaC7Jmv8orIalwZF Yv1K9Fm9vvyXqvV59yuRGeeejSfEGc5f5UEpdM9vq/JOB1J/rP1p0qwUCBQ/3AbV/o8N EBTYXIAIL/ci6b87NRomtZ9Xq9CuQepHDATiVWxO75b5UwONDCKcBvu6tz1X8HvRkOOh n7o5iShh5x6z5ju0Cg9IT4Zz4eEEk7M4mERkuY7H6mWxPNRnpmouyip0GUf2CloPs1xR h6mA== X-Gm-Message-State: AOJu0YwDarXk/LCw2KXn+ARCzSwcBwuNaTr5V8NB/VZ5pEB7ur6hIeeI /z/Prg+FTIoKEuqd1FQMkrhv6cPI3coHsNzJZXcMfphfiqbcU73kruaOZvKHbfQ= X-Google-Smtp-Source: AGHT+IHSP9A8ld+US0+z5q4B3dH51hQn8BVU3zQd4J42JqzpdxPlMjGSoqdoUOpQvqeMmH/F2tC5Mw== X-Received: by 2002:a05:6402:5245:b0:566:dede:1f82 with SMTP id t5-20020a056402524500b00566dede1f82mr732645edd.29.1709290531072; Fri, 01 Mar 2024 02:55:31 -0800 (PST) Received: from jlinkes-PT-Latitude-5530.pantheon.local ([84.245.120.62]) by smtp.gmail.com with ESMTPSA id 
f12-20020a056402194c00b0056661ec3f24sm1461734edz.81.2024.03.01.02.55.29 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 01 Mar 2024 02:55:30 -0800 (PST) From: =?utf-8?q?Juraj_Linke=C5=A1?= To: thomas@monjalon.net, Honnappa.Nagarahalli@arm.com, jspewock@iol.unh.edu, probb@iol.unh.edu, paul.szczepanek@arm.com, Luca.Vizzarro@arm.com, npratte@iol.unh.edu Cc: dev@dpdk.org, =?utf-8?q?Juraj_Linke=C5=A1?= Subject: [PATCH v4 5/7] dts: block all test cases when earlier setup fails Date: Fri, 1 Mar 2024 11:55:20 +0100 Message-Id: <20240301105522.79870-6-juraj.linkes@pantheon.tech> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20240301105522.79870-1-juraj.linkes@pantheon.tech> References: <20231220103331.60888-1-juraj.linkes@pantheon.tech> <20240301105522.79870-1-juraj.linkes@pantheon.tech> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org In case of a failure before a test suite, the child results will be recursively recorded as blocked, giving us a full report which was missing previously. Signed-off-by: Juraj Linkeš --- dts/framework/runner.py | 21 ++-- dts/framework/test_result.py | 186 +++++++++++++++++++++++++---------- 2 files changed, 148 insertions(+), 59 deletions(-) diff --git a/dts/framework/runner.py b/dts/framework/runner.py index 5f6bcbbb86..864015c350 100644 --- a/dts/framework/runner.py +++ b/dts/framework/runner.py @@ -60,13 +60,15 @@ class DTSRunner: Each setup or teardown of each stage is recorded in a :class:`~framework.test_result.DTSResult` or one of its subclasses. The test case results are also recorded. - If an error occurs, the current stage is aborted, the error is recorded and the run continues in - the next iteration of the same stage. The return code is the highest `severity` of all + If an error occurs, the current stage is aborted, the error is recorded, everything in + the inner stages is marked as blocked and the run continues in the next iteration + of the same stage. The return code is the highest `severity` of all :class:`~.framework.exception.DTSError`\s. Example: - An error occurs in a build target setup. The current build target is aborted and the run - continues with the next build target. If the errored build target was the last one in the + An error occurs in a build target setup. The current build target is aborted, + all test suites and their test cases are marked as blocked and the run continues + with the next build target. If the errored build target was the last one in the given execution, the next execution begins. """ @@ -100,6 +102,10 @@ def run(self): test case within the test suite is set up, executed and torn down. After all test cases have been executed, the test suite is torn down and the next build target will be tested. + In order to properly mark test suites and test cases as blocked in case of a failure, + we need to have discovered which test suites and test cases to run before any failures + happen. The discovery happens at the earliest point at the start of each execution. + All the nested steps look like this: #. Execution setup @@ -134,7 +140,7 @@ def run(self): self._logger.info( f"Running execution with SUT '{execution.system_under_test_node.name}'." 
) - execution_result = self._result.add_execution(execution.system_under_test_node) + execution_result = self._result.add_execution(execution) # we don't want to modify the original config, so create a copy execution_test_suites = list(execution.test_suites) if not execution.skip_smoke_tests: @@ -143,6 +149,7 @@ def run(self): test_suites_with_cases = self._get_test_suites_with_cases( execution_test_suites, execution.func, execution.perf ) + execution_result.test_suites_with_cases = test_suites_with_cases except Exception as e: self._logger.exception( f"Invalid test suite configuration found: " f"{execution_test_suites}." @@ -493,9 +500,7 @@ def _run_test_suites( """ end_build_target = False for test_suite_with_cases in test_suites_with_cases: - test_suite_result = build_target_result.add_test_suite( - test_suite_with_cases.test_suite_class.__name__ - ) + test_suite_result = build_target_result.add_test_suite(test_suite_with_cases) try: self._run_test_suite(sut_node, tg_node, test_suite_result, test_suite_with_cases) except BlockingTestSuiteError as e: diff --git a/dts/framework/test_result.py b/dts/framework/test_result.py index abdbafab10..eedb2d20ee 100644 --- a/dts/framework/test_result.py +++ b/dts/framework/test_result.py @@ -37,7 +37,7 @@ BuildTargetInfo, Compiler, CPUType, - NodeConfiguration, + ExecutionConfiguration, NodeInfo, TestSuiteConfig, ) @@ -88,6 +88,8 @@ class Result(Enum): ERROR = auto() #: SKIP = auto() + #: + BLOCK = auto() def __bool__(self) -> bool: """Only PASS is True.""" @@ -141,21 +143,26 @@ class BaseResult(object): Attributes: setup_result: The result of the setup of the particular stage. teardown_result: The results of the teardown of the particular stage. + child_results: The results of the descendants in the results hierarchy. """ setup_result: FixtureResult teardown_result: FixtureResult - _inner_results: MutableSequence["BaseResult"] + child_results: MutableSequence["BaseResult"] def __init__(self): """Initialize the constructor.""" self.setup_result = FixtureResult() self.teardown_result = FixtureResult() - self._inner_results = [] + self.child_results = [] def update_setup(self, result: Result, error: Exception | None = None) -> None: """Store the setup result. + If the result is :attr:`~Result.BLOCK`, :attr:`~Result.ERROR` or :attr:`~Result.FAIL`, + then the corresponding child results in result hierarchy + are also marked with :attr:`~Result.BLOCK`. + Args: result: The result of the setup. error: The error that occurred in case of a failure. @@ -163,6 +170,16 @@ def update_setup(self, result: Result, error: Exception | None = None) -> None: self.setup_result.result = result self.setup_result.error = error + if result in [Result.BLOCK, Result.ERROR, Result.FAIL]: + self.update_teardown(Result.BLOCK) + self._block_result() + + def _block_result(self) -> None: + r"""Mark the result as :attr:`~Result.BLOCK`\ed. + + The blocking of child results should be done in overloaded methods. + """ + def update_teardown(self, result: Result, error: Exception | None = None) -> None: """Store the teardown result. 
@@ -181,10 +198,8 @@ def _get_setup_teardown_errors(self) -> list[Exception]: errors.append(self.teardown_result.error) return errors - def _get_inner_errors(self) -> list[Exception]: - return [ - error for inner_result in self._inner_results for error in inner_result.get_errors() - ] + def _get_child_errors(self) -> list[Exception]: + return [error for child_result in self.child_results for error in child_result.get_errors()] def get_errors(self) -> list[Exception]: """Compile errors from the whole result hierarchy. @@ -192,7 +207,7 @@ def get_errors(self) -> list[Exception]: Returns: The errors from setup, teardown and all errors found in the whole result hierarchy. """ - return self._get_setup_teardown_errors() + self._get_inner_errors() + return self._get_setup_teardown_errors() + self._get_child_errors() def add_stats(self, statistics: "Statistics") -> None: """Collate stats from the whole result hierarchy. @@ -200,8 +215,8 @@ def add_stats(self, statistics: "Statistics") -> None: Args: statistics: The :class:`Statistics` object where the stats will be collated. """ - for inner_result in self._inner_results: - inner_result.add_stats(statistics) + for child_result in self.child_results: + child_result.add_stats(statistics) class DTSResult(BaseResult): @@ -242,18 +257,18 @@ def __init__(self, logger: DTSLOG): self._stats_result = None self._stats_filename = os.path.join(SETTINGS.output_dir, "statistics.txt") - def add_execution(self, sut_node: NodeConfiguration) -> "ExecutionResult": - """Add and return the inner result (execution). + def add_execution(self, execution: ExecutionConfiguration) -> "ExecutionResult": + """Add and return the child result (execution). Args: - sut_node: The SUT node's test run configuration. + execution: The execution's test run configuration. Returns: The execution's result. """ - execution_result = ExecutionResult(sut_node) - self._inner_results.append(execution_result) - return execution_result + result = ExecutionResult(execution) + self.child_results.append(result) + return result def add_error(self, error: Exception) -> None: """Record an error that occurred outside any execution. @@ -266,8 +281,8 @@ def add_error(self, error: Exception) -> None: def process(self) -> None: """Process the data after a whole DTS run. - The data is added to inner objects during runtime and this object is not updated - at that time. This requires us to process the inner data after it's all been gathered. + The data is added to child objects during runtime and this object is not updated + at that time. This requires us to process the child data after it's all been gathered. The processing gathers all errors and the statistics of test case results. """ @@ -305,28 +320,30 @@ class ExecutionResult(BaseResult): The internal list stores the results of all build targets in a given execution. Attributes: - sut_node: The SUT node used in the execution. sut_os_name: The operating system of the SUT node. sut_os_version: The operating system version of the SUT node. sut_kernel_version: The operating system kernel version of the SUT node. """ - sut_node: NodeConfiguration sut_os_name: str sut_os_version: str sut_kernel_version: str + _config: ExecutionConfiguration + _parent_result: DTSResult + _test_suites_with_cases: list[TestSuiteWithCases] - def __init__(self, sut_node: NodeConfiguration): - """Extend the constructor with the `sut_node`'s config. + def __init__(self, execution: ExecutionConfiguration): + """Extend the constructor with the execution's config and DTSResult. 
Args: - sut_node: The SUT node's test run configuration used in the execution. + execution: The execution's test run configuration. """ super(ExecutionResult, self).__init__() - self.sut_node = sut_node + self._config = execution + self._test_suites_with_cases = [] def add_build_target(self, build_target: BuildTargetConfiguration) -> "BuildTargetResult": - """Add and return the inner result (build target). + """Add and return the child result (build target). Args: build_target: The build target's test run configuration. @@ -334,9 +351,34 @@ def add_build_target(self, build_target: BuildTargetConfiguration) -> "BuildTarg Returns: The build target's result. """ - build_target_result = BuildTargetResult(build_target) - self._inner_results.append(build_target_result) - return build_target_result + result = BuildTargetResult( + self._test_suites_with_cases, + build_target, + ) + self.child_results.append(result) + return result + + @property + def test_suites_with_cases(self) -> list[TestSuiteWithCases]: + """The test suites with test cases to be executed in this execution. + + The test suites can only be assigned once. + + Returns: + The list of test suites with test cases. If an error occurs between + the initialization of :class:`ExecutionResult` and assigning test cases to the instance, + return an empty list, representing that we don't know what to execute. + """ + return self._test_suites_with_cases + + @test_suites_with_cases.setter + def test_suites_with_cases(self, test_suites_with_cases: list[TestSuiteWithCases]) -> None: + if self._test_suites_with_cases: + raise ValueError( + "Attempted to assign test suites to an execution result " + "which already has test suites." + ) + self._test_suites_with_cases = test_suites_with_cases def add_sut_info(self, sut_info: NodeInfo) -> None: """Add SUT information gathered at runtime. @@ -348,6 +390,12 @@ def add_sut_info(self, sut_info: NodeInfo) -> None: self.sut_os_version = sut_info.os_version self.sut_kernel_version = sut_info.kernel_version + def _block_result(self) -> None: + r"""Mark the result as :attr:`~Result.BLOCK`\ed.""" + for build_target in self._config.build_targets: + child_result = self.add_build_target(build_target) + child_result.update_setup(Result.BLOCK) + class BuildTargetResult(BaseResult): """The build target specific result. @@ -369,11 +417,17 @@ class BuildTargetResult(BaseResult): compiler: Compiler compiler_version: str | None dpdk_version: str | None + _test_suites_with_cases: list[TestSuiteWithCases] - def __init__(self, build_target: BuildTargetConfiguration): - """Extend the constructor with the `build_target`'s build target config. + def __init__( + self, + test_suites_with_cases: list[TestSuiteWithCases], + build_target: BuildTargetConfiguration, + ): + """Extend the constructor with the build target's config and ExecutionResult. Args: + test_suites_with_cases: The test suites with test cases to be run in this build target. build_target: The build target's test run configuration. """ super(BuildTargetResult, self).__init__() @@ -383,6 +437,23 @@ def __init__(self, build_target: BuildTargetConfiguration): self.compiler = build_target.compiler self.compiler_version = None self.dpdk_version = None + self._test_suites_with_cases = test_suites_with_cases + + def add_test_suite( + self, + test_suite_with_cases: TestSuiteWithCases, + ) -> "TestSuiteResult": + """Add and return the child result (test suite). + + Args: + test_suite_with_cases: The test suite with test cases. + + Returns: + The test suite's result. 
+ """ + result = TestSuiteResult(test_suite_with_cases) + self.child_results.append(result) + return result def add_build_target_info(self, versions: BuildTargetInfo) -> None: """Add information about the build target gathered at runtime. @@ -393,15 +464,11 @@ def add_build_target_info(self, versions: BuildTargetInfo) -> None: self.compiler_version = versions.compiler_version self.dpdk_version = versions.dpdk_version - def add_test_suite(self, test_suite_name: str) -> "TestSuiteResult": - """Add and return the inner result (test suite). - - Returns: - The test suite's result. - """ - test_suite_result = TestSuiteResult(test_suite_name) - self._inner_results.append(test_suite_result) - return test_suite_result + def _block_result(self) -> None: + r"""Mark the result as :attr:`~Result.BLOCK`\ed.""" + for test_suite_with_cases in self._test_suites_with_cases: + child_result = self.add_test_suite(test_suite_with_cases) + child_result.update_setup(Result.BLOCK) class TestSuiteResult(BaseResult): @@ -410,29 +477,42 @@ class TestSuiteResult(BaseResult): The internal list stores the results of all test cases in a given test suite. Attributes: - suite_name: The test suite name. + test_suite_name: The test suite name. """ - suite_name: str + test_suite_name: str + _test_suite_with_cases: TestSuiteWithCases + _parent_result: BuildTargetResult + _child_configs: list[str] - def __init__(self, suite_name: str): - """Extend the constructor with `suite_name`. + def __init__(self, test_suite_with_cases: TestSuiteWithCases): + """Extend the constructor with test suite's config and BuildTargetResult. Args: - suite_name: The test suite's name. + test_suite_with_cases: The test suite with test cases. """ super(TestSuiteResult, self).__init__() - self.suite_name = suite_name + self.test_suite_name = test_suite_with_cases.test_suite_class.__name__ + self._test_suite_with_cases = test_suite_with_cases def add_test_case(self, test_case_name: str) -> "TestCaseResult": - """Add and return the inner result (test case). + """Add and return the child result (test case). + + Args: + test_case_name: The name of the test case. Returns: The test case's result. """ - test_case_result = TestCaseResult(test_case_name) - self._inner_results.append(test_case_result) - return test_case_result + result = TestCaseResult(test_case_name) + self.child_results.append(result) + return result + + def _block_result(self) -> None: + r"""Mark the result as :attr:`~Result.BLOCK`\ed.""" + for test_case_method in self._test_suite_with_cases.test_cases: + child_result = self.add_test_case(test_case_method.__name__) + child_result.update_setup(Result.BLOCK) class TestCaseResult(BaseResult, FixtureResult): @@ -449,7 +529,7 @@ class TestCaseResult(BaseResult, FixtureResult): test_case_name: str def __init__(self, test_case_name: str): - """Extend the constructor with `test_case_name`. + """Extend the constructor with test case's name and TestSuiteResult. Args: test_case_name: The test case's name. 
@@ -470,7 +550,7 @@ def update(self, result: Result, error: Exception | None = None) -> None: self.result = result self.error = error - def _get_inner_errors(self) -> list[Exception]: + def _get_child_errors(self) -> list[Exception]: if self.error: return [self.error] return [] @@ -486,6 +566,10 @@ def add_stats(self, statistics: "Statistics") -> None: """ statistics += self.result + def _block_result(self) -> None: + r"""Mark the result as :attr:`~Result.BLOCK`\ed.""" + self.update(Result.BLOCK) + def __bool__(self) -> bool: """The test case passed only if setup, teardown and the test case itself passed.""" return bool(self.setup_result) and bool(self.teardown_result) and bool(self.result) From patchwork Fri Mar 1 10:55:21 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Juraj_Linke=C5=A1?= X-Patchwork-Id: 137671 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id DAE4543B68; Fri, 1 Mar 2024 11:56:18 +0100 (CET) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id CFF54433F1; Fri, 1 Mar 2024 11:55:35 +0100 (CET) Received: from mail-ed1-f48.google.com (mail-ed1-f48.google.com [209.85.208.48]) by mails.dpdk.org (Postfix) with ESMTP id 0E169433DE for ; Fri, 1 Mar 2024 11:55:33 +0100 (CET) Received: by mail-ed1-f48.google.com with SMTP id 4fb4d7f45d1cf-5649c25369aso2899260a12.2 for ; Fri, 01 Mar 2024 02:55:33 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=pantheon.tech; s=google; t=1709290532; x=1709895332; darn=dpdk.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=GBlAEoS7m0FawAf/KcXbpTzSzIQFOsj5MUn56lTdfj4=; b=cXh85dPKERifqUPBaXvg2Zxejuvcmg2TYgSUOM57qYdjRiS6dFcHjjRmuhf2wxkM5d SdsnU1vNTpeysBvOCE7YPjN2ZB9arPMtPuacWrG9Z7tn3k5yqll3rnahFFV0xhppTk0A WCzAm04K7SKP0Rvj5e1hk9VrP16CsScO9oBZ7LD+pSgjnsMBJT9gZLaCgUHuu6qn9Zp/ fVKbepC9Qe1nvW+Vzs4nMpaPIjgpy1fK5WTIznLu6UjL2GsD7NOWEQ4V7OewL98u/zEY 0AzPOQjR14GBLQbgwUueoVROd7WH4Ld0csAnzHmRxE/0js6O9WQjiUVQpnM1NEYQ6jip JkyQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1709290532; x=1709895332; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=GBlAEoS7m0FawAf/KcXbpTzSzIQFOsj5MUn56lTdfj4=; b=sWC/YvIGeHSQAbYxE1at9nKvLGRqhEttRSmiFKLxi8Wf7LYBV6s7qiIcjnjtsthQOd Ewrf0igmOI8hmYXiW/yhaDGwDPxDSTguglH549IQiIqKC6GbswS8PxCFQ7GuRHR9g0Vx xQrAZ81wbEXTHuZrTdpWY2XUyau014us4isbao662GBMyk6YyBrqsJaX9uz47y+PT+l9 ooYB2JLHaC+qAOSG/PLQShoiJ7eRY0hs0aid5VY7Ob6wSBnOTRDqTsDnVE5Tpq+gFVKH GUdUAQCwhkEyc1j4RqlapB6cYFiSkEzeJ2RYSg2ecvq7kuTr5NxGdcTClzkZJcypn4UD 0eVw== X-Gm-Message-State: AOJu0Yzr19Xe0sm44OqKa6diA4PXtDlwXV14pgHY67CB5Jjrd2dPrgB/ CLUuxttJilJaTlkRY5wtw1tGpaKMQ78C7e93O/tcfYeLBjR+15DaHGmzEfLN9ww= X-Google-Smtp-Source: AGHT+IFsQKbo5X0iGlokrXkdnXSgrZhVP2AH5y9+AWzl/CQFCWFWkT6jOK0kvs1vEK1+khZEoUP5uA== X-Received: by 2002:a50:cddd:0:b0:565:46e6:56db with SMTP id h29-20020a50cddd000000b0056546e656dbmr1025737edj.19.1709290532523; Fri, 01 Mar 2024 02:55:32 -0800 (PST) Received: from jlinkes-PT-Latitude-5530.pantheon.local ([84.245.120.62]) by smtp.gmail.com with ESMTPSA id 
f12-20020a056402194c00b0056661ec3f24sm1461734edz.81.2024.03.01.02.55.31 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 01 Mar 2024 02:55:31 -0800 (PST) From: =?utf-8?q?Juraj_Linke=C5=A1?= To: thomas@monjalon.net, Honnappa.Nagarahalli@arm.com, jspewock@iol.unh.edu, probb@iol.unh.edu, paul.szczepanek@arm.com, Luca.Vizzarro@arm.com, npratte@iol.unh.edu Cc: dev@dpdk.org, =?utf-8?q?Juraj_Linke=C5=A1?= Subject: [PATCH v4 6/7] dts: refactor logging configuration Date: Fri, 1 Mar 2024 11:55:21 +0100 Message-Id: <20240301105522.79870-7-juraj.linkes@pantheon.tech> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20240301105522.79870-1-juraj.linkes@pantheon.tech> References: <20231220103331.60888-1-juraj.linkes@pantheon.tech> <20240301105522.79870-1-juraj.linkes@pantheon.tech> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Remove unused parts of the code and add useful features: 1. Add DTS execution stages such as execution and test suite to better identify where in the DTS lifecycle we are when investigating logs, 2. Logging to separate files in specific stages, which is mainly useful for having test suite logs in additional separate files. 3. Remove the dependence on the settings module which enhances the usefulness of the logger module, as it can now be imported in more modules. The execution stages and the files to log to are the same for all DTS loggers. To achieve this, we have one DTS root logger which should be used for handling stage switching and all other loggers are children of this DTS root logger. The DTS root logger is the one where we change the behavior of all loggers (the stage and which files to log to) and the child loggers just log messages under a different name. Signed-off-by: Juraj Linkeš --- dts/framework/logger.py | 246 +++++++++++------- dts/framework/remote_session/__init__.py | 6 +- .../interactive_remote_session.py | 6 +- .../remote_session/interactive_shell.py | 6 +- .../remote_session/remote_session.py | 8 +- dts/framework/runner.py | 23 +- dts/framework/test_result.py | 6 +- dts/framework/test_suite.py | 6 +- dts/framework/testbed_model/node.py | 11 +- dts/framework/testbed_model/os_session.py | 7 +- .../traffic_generator/traffic_generator.py | 6 +- dts/main.py | 3 - 12 files changed, 197 insertions(+), 137 deletions(-) diff --git a/dts/framework/logger.py b/dts/framework/logger.py index cfa6e8cd72..fc6c50c983 100644 --- a/dts/framework/logger.py +++ b/dts/framework/logger.py @@ -5,141 +5,195 @@ """DTS logger module. -DTS framework and TestSuite logs are saved in different log files. +The module provides several additional features: + + * The storage of DTS execution stages, + * Logging to console, a human-readable log file and a machine-readable log file, + * Optional log files for specific stages. 
""" import logging -import os.path -from typing import TypedDict +from enum import auto +from logging import FileHandler, StreamHandler +from pathlib import Path +from typing import ClassVar -from .settings import SETTINGS +from .utils import StrEnum date_fmt = "%Y/%m/%d %H:%M:%S" -stream_fmt = "%(asctime)s - %(name)s - %(levelname)s - %(message)s" +stream_fmt = "%(asctime)s - %(stage)s - %(name)s - %(levelname)s - %(message)s" +dts_root_logger_name = "dts" + + +class DtsStage(StrEnum): + """The DTS execution stage.""" + + #: + pre_execution = auto() + #: + execution_setup = auto() + #: + execution_teardown = auto() + #: + build_target_setup = auto() + #: + build_target_teardown = auto() + #: + test_suite_setup = auto() + #: + test_suite = auto() + #: + test_suite_teardown = auto() + #: + post_execution = auto() + + +class DTSLogger(logging.Logger): + """The DTS logger class. + + The class extends the :class:`~logging.Logger` class to add the DTS execution stage information + to log records. The stage is common to all loggers, so it's stored in a class variable. + + Any time we switch to a new stage, we have the ability to log to an additional log file along + with a supplementary log file with machine-readable format. These two log files are used until + a new stage switch occurs. This is useful mainly for logging per test suite. + """ + _stage: ClassVar[DtsStage] = DtsStage.pre_execution + _extra_file_handlers: list[FileHandler] = [] -class DTSLOG(logging.LoggerAdapter): - """DTS logger adapter class for framework and testsuites. + def __init__(self, *args, **kwargs): + """Extend the constructor with extra file handlers.""" + self._extra_file_handlers = [] + super().__init__(*args, **kwargs) - The :option:`--verbose` command line argument and the :envvar:`DTS_VERBOSE` environment - variable control the verbosity of output. If enabled, all messages will be emitted to the - console. + def makeRecord(self, *args, **kwargs) -> logging.LogRecord: + """Generates a record with additional stage information. - The :option:`--output` command line argument and the :envvar:`DTS_OUTPUT_DIR` environment - variable modify the directory where the logs will be stored. + This is the default method for the :class:`~logging.Logger` class. We extend it + to add stage information to the record. - Attributes: - node: The additional identifier. Currently unused. - sh: The handler which emits logs to console. - fh: The handler which emits logs to a file. - verbose_fh: Just as fh, but logs with a different, more verbose, format. - """ + :meta private: + + Returns: + record: The generated record with the stage information. + """ + record = super().makeRecord(*args, **kwargs) + record.stage = DTSLogger._stage # type: ignore[attr-defined] + return record + + def add_dts_root_logger_handlers(self, verbose: bool, output_dir: str) -> None: + """Add logger handlers to the DTS root logger. + + This method should be called only on the DTS root logger. + The log records from child loggers will propagate to these handlers. - _logger: logging.Logger - node: str - sh: logging.StreamHandler - fh: logging.FileHandler - verbose_fh: logging.FileHandler + Three handlers are added: - def __init__(self, logger: logging.Logger, node: str = "suite"): - """Extend the constructor with additional handlers. + * A console handler, + * A file handler, + * A supplementary file handler with machine-readable logs + containing more debug information. 
- One handler logs to the console, the other one to a file, with either a regular or verbose - format. + All log messages will be logged to files. The log level of the console handler + is configurable with `verbose`. Args: - logger: The logger from which to create the logger adapter. - node: An additional identifier. Currently unused. + verbose: If :data:`True`, log all messages to the console. + If :data:`False`, log to console with the :data:`logging.INFO` level. + output_dir: The directory where the log files will be located. + The names of the log files correspond to the name of the logger instance. """ - self._logger = logger - # 1 means log everything, this will be used by file handlers if their level - # is not set - self._logger.setLevel(1) + self.setLevel(1) - self.node = node - - # add handler to emit to stdout - sh = logging.StreamHandler() + sh = StreamHandler() sh.setFormatter(logging.Formatter(stream_fmt, date_fmt)) - sh.setLevel(logging.INFO) # console handler default level + if not verbose: + sh.setLevel(logging.INFO) + self.addHandler(sh) - if SETTINGS.verbose is True: - sh.setLevel(logging.DEBUG) + self._add_file_handlers(Path(output_dir, self.name)) - self._logger.addHandler(sh) - self.sh = sh + def set_stage(self, stage: DtsStage, log_file_path: Path | None = None) -> None: + """Set the DTS execution stage and optionally log to files. - # prepare the output folder - if not os.path.exists(SETTINGS.output_dir): - os.mkdir(SETTINGS.output_dir) + Set the DTS execution stage of the DTSLog class and optionally add + file handlers to the instance if the log file name is provided. - logging_path_prefix = os.path.join(SETTINGS.output_dir, node) + The file handlers log all messages. One is a regular human-readable log file and + the other one is a machine-readable log file with extra debug information. - fh = logging.FileHandler(f"{logging_path_prefix}.log") - fh.setFormatter( - logging.Formatter( - fmt="%(asctime)s - %(name)s - %(levelname)s - %(message)s", - datefmt=date_fmt, - ) - ) + Args: + stage: The DTS stage to set. + log_file_path: An optional path of the log file to use. This should be a full path + (either relative or absolute) without suffix (which will be appended). + """ + self._remove_extra_file_handlers() - self._logger.addHandler(fh) - self.fh = fh + if DTSLogger._stage != stage: + self.info(f"Moving from stage '{DTSLogger._stage}' to stage '{stage}'.") + DTSLogger._stage = stage - # This outputs EVERYTHING, intended for post-mortem debugging - # Also optimized for processing via AWK (awk -F '|' ...) - verbose_fh = logging.FileHandler(f"{logging_path_prefix}.verbose.log") + if log_file_path: + self._extra_file_handlers.extend(self._add_file_handlers(log_file_path)) + + def _add_file_handlers(self, log_file_path: Path) -> list[FileHandler]: + """Add file handlers to the DTS root logger. + + Add two type of file handlers: + + * A regular file handler with suffix ".log", + * A machine-readable file handler with suffix ".verbose.log". + This format provides extensive information for debugging and detailed analysis. + + Args: + log_file_path: The full path to the log file without suffix. + + Returns: + The newly created file handlers. 
+ + """ + fh = FileHandler(f"{log_file_path}.log") + fh.setFormatter(logging.Formatter(stream_fmt, date_fmt)) + self.addHandler(fh) + + verbose_fh = FileHandler(f"{log_file_path}.verbose.log") verbose_fh.setFormatter( logging.Formatter( - fmt="%(asctime)s|%(name)s|%(levelname)s|%(pathname)s|%(lineno)d|" + "%(asctime)s|%(stage)s|%(name)s|%(levelname)s|%(pathname)s|%(lineno)d|" "%(funcName)s|%(process)d|%(thread)d|%(threadName)s|%(message)s", datefmt=date_fmt, ) ) + self.addHandler(verbose_fh) - self._logger.addHandler(verbose_fh) - self.verbose_fh = verbose_fh - - super(DTSLOG, self).__init__(self._logger, dict(node=self.node)) - - def logger_exit(self) -> None: - """Remove the stream handler and the logfile handler.""" - for handler in (self.sh, self.fh, self.verbose_fh): - handler.flush() - self._logger.removeHandler(handler) - - -class _LoggerDictType(TypedDict): - logger: DTSLOG - name: str - node: str - + return [fh, verbose_fh] -# List for saving all loggers in use -_Loggers: list[_LoggerDictType] = [] + def _remove_extra_file_handlers(self) -> None: + """Remove any extra file handlers that have been added to the logger.""" + if self._extra_file_handlers: + for extra_file_handler in self._extra_file_handlers: + self.removeHandler(extra_file_handler) + self._extra_file_handlers = [] -def getLogger(name: str, node: str = "suite") -> DTSLOG: - """Get DTS logger adapter identified by name and node. - An existing logger will be returned if one with the exact name and node already exists. - A new one will be created and stored otherwise. +def get_dts_logger(name: str = None) -> DTSLogger: + """Return a DTS logger instance identified by `name`. Args: - name: The name of the logger. - node: An additional identifier for the logger. + name: If :data:`None`, return the DTS root logger. + If specified, return a child of the DTS root logger. Returns: - A logger uniquely identified by both name and node. + The DTS root logger or a child logger identified by `name`. """ - global _Loggers - # return saved logger - logger: _LoggerDictType - for logger in _Loggers: - if logger["name"] == name and logger["node"] == node: - return logger["logger"] - - # return new logger - dts_logger: DTSLOG = DTSLOG(logging.getLogger(name), node) - _Loggers.append({"logger": dts_logger, "name": name, "node": node}) - return dts_logger + original_logger_class = logging.getLoggerClass() + logging.setLoggerClass(DTSLogger) + if name: + name = f"{dts_root_logger_name}.{name}" + else: + name = dts_root_logger_name + logger = logging.getLogger(name) + logging.setLoggerClass(original_logger_class) + return logger # type: ignore[return-value] diff --git a/dts/framework/remote_session/__init__.py b/dts/framework/remote_session/__init__.py index 51a01d6b5e..1910c81c3c 100644 --- a/dts/framework/remote_session/__init__.py +++ b/dts/framework/remote_session/__init__.py @@ -15,7 +15,7 @@ # pylama:ignore=W0611 from framework.config import NodeConfiguration -from framework.logger import DTSLOG +from framework.logger import DTSLogger from .interactive_remote_session import InteractiveRemoteSession from .interactive_shell import InteractiveShell @@ -26,7 +26,7 @@ def create_remote_session( - node_config: NodeConfiguration, name: str, logger: DTSLOG + node_config: NodeConfiguration, name: str, logger: DTSLogger ) -> RemoteSession: """Factory for non-interactive remote sessions. 
@@ -45,7 +45,7 @@ def create_remote_session( def create_interactive_session( - node_config: NodeConfiguration, logger: DTSLOG + node_config: NodeConfiguration, logger: DTSLogger ) -> InteractiveRemoteSession: """Factory for interactive remote sessions. diff --git a/dts/framework/remote_session/interactive_remote_session.py b/dts/framework/remote_session/interactive_remote_session.py index 1cc82e3377..c50790db79 100644 --- a/dts/framework/remote_session/interactive_remote_session.py +++ b/dts/framework/remote_session/interactive_remote_session.py @@ -16,7 +16,7 @@ from framework.config import NodeConfiguration from framework.exception import SSHConnectionError -from framework.logger import DTSLOG +from framework.logger import DTSLogger class InteractiveRemoteSession: @@ -50,11 +50,11 @@ class InteractiveRemoteSession: username: str password: str session: SSHClient - _logger: DTSLOG + _logger: DTSLogger _node_config: NodeConfiguration _transport: Transport | None - def __init__(self, node_config: NodeConfiguration, logger: DTSLOG) -> None: + def __init__(self, node_config: NodeConfiguration, logger: DTSLogger) -> None: """Connect to the node during initialization. Args: diff --git a/dts/framework/remote_session/interactive_shell.py b/dts/framework/remote_session/interactive_shell.py index b158f963b6..5cfe202e15 100644 --- a/dts/framework/remote_session/interactive_shell.py +++ b/dts/framework/remote_session/interactive_shell.py @@ -20,7 +20,7 @@ from paramiko import Channel, SSHClient, channel # type: ignore[import] -from framework.logger import DTSLOG +from framework.logger import DTSLogger from framework.settings import SETTINGS @@ -38,7 +38,7 @@ class InteractiveShell(ABC): _stdin: channel.ChannelStdinFile _stdout: channel.ChannelFile _ssh_channel: Channel - _logger: DTSLOG + _logger: DTSLogger _timeout: float _app_args: str @@ -61,7 +61,7 @@ class InteractiveShell(ABC): def __init__( self, interactive_session: SSHClient, - logger: DTSLOG, + logger: DTSLogger, get_privileged_command: Callable[[str], str] | None, app_args: str = "", timeout: float = SETTINGS.timeout, diff --git a/dts/framework/remote_session/remote_session.py b/dts/framework/remote_session/remote_session.py index 2059f9a981..a69dc99400 100644 --- a/dts/framework/remote_session/remote_session.py +++ b/dts/framework/remote_session/remote_session.py @@ -9,14 +9,13 @@ the structure of the result of a command execution. """ - import dataclasses from abc import ABC, abstractmethod from pathlib import PurePath from framework.config import NodeConfiguration from framework.exception import RemoteCommandExecutionError -from framework.logger import DTSLOG +from framework.logger import DTSLogger from framework.settings import SETTINGS @@ -75,14 +74,14 @@ class RemoteSession(ABC): username: str password: str history: list[CommandResult] - _logger: DTSLOG + _logger: DTSLogger _node_config: NodeConfiguration def __init__( self, node_config: NodeConfiguration, session_name: str, - logger: DTSLOG, + logger: DTSLogger, ): """Connect to the node during initialization. @@ -181,7 +180,6 @@ def close(self, force: bool = False) -> None: Args: force: Force the closure of the connection. This may not clean up all resources. 
""" - self._logger.logger_exit() self._close(force) @abstractmethod diff --git a/dts/framework/runner.py b/dts/framework/runner.py index 864015c350..dfee8ebd7c 100644 --- a/dts/framework/runner.py +++ b/dts/framework/runner.py @@ -19,9 +19,10 @@ import importlib import inspect -import logging +import os import re import sys +from pathlib import Path from types import MethodType from typing import Iterable @@ -38,7 +39,7 @@ SSHTimeoutError, TestCaseVerifyError, ) -from .logger import DTSLOG, getLogger +from .logger import DTSLogger, DtsStage, get_dts_logger from .settings import SETTINGS from .test_result import ( BuildTargetResult, @@ -73,7 +74,7 @@ class DTSRunner: """ _configuration: Configuration - _logger: DTSLOG + _logger: DTSLogger _result: DTSResult _test_suite_class_prefix: str _test_suite_module_prefix: str @@ -83,7 +84,10 @@ class DTSRunner: def __init__(self): """Initialize the instance with configuration, logger, result and string constants.""" self._configuration = load_config() - self._logger = getLogger("DTSRunner") + self._logger = get_dts_logger() + if not os.path.exists(SETTINGS.output_dir): + os.makedirs(SETTINGS.output_dir) + self._logger.add_dts_root_logger_handlers(SETTINGS.verbose, SETTINGS.output_dir) self._result = DTSResult(self._logger) self._test_suite_class_prefix = "Test" self._test_suite_module_prefix = "tests.TestSuite_" @@ -137,6 +141,7 @@ def run(self): # for all Execution sections for execution in self._configuration.executions: + self._logger.set_stage(DtsStage.execution_setup) self._logger.info( f"Running execution with SUT '{execution.system_under_test_node.name}'." ) @@ -168,6 +173,7 @@ def run(self): finally: try: + self._logger.set_stage(DtsStage.post_execution) for node in (sut_nodes | tg_nodes).values(): node.close() self._result.update_teardown(Result.PASS) @@ -426,6 +432,7 @@ def _run_execution( finally: try: + self._logger.set_stage(DtsStage.execution_teardown) sut_node.tear_down_execution() execution_result.update_teardown(Result.PASS) except Exception as e: @@ -454,6 +461,7 @@ def _run_build_target( with the current build target. test_suites_with_cases: The test suites with test cases to run. """ + self._logger.set_stage(DtsStage.build_target_setup) self._logger.info(f"Running build target '{build_target.name}'.") try: @@ -470,6 +478,7 @@ def _run_build_target( finally: try: + self._logger.set_stage(DtsStage.build_target_teardown) sut_node.tear_down_build_target() build_target_result.update_teardown(Result.PASS) except Exception as e: @@ -542,6 +551,9 @@ def _run_test_suite( BlockingTestSuiteError: If a blocking test suite fails. """ test_suite_name = test_suite_with_cases.test_suite_class.__name__ + self._logger.set_stage( + DtsStage.test_suite_setup, Path(SETTINGS.output_dir, test_suite_name) + ) test_suite = test_suite_with_cases.test_suite_class(sut_node, tg_node) try: self._logger.info(f"Starting test suite setup: {test_suite_name}") @@ -560,6 +572,7 @@ def _run_test_suite( ) finally: try: + self._logger.set_stage(DtsStage.test_suite_teardown) test_suite.tear_down_suite() sut_node.kill_cleanup_dpdk_apps() test_suite_result.update_teardown(Result.PASS) @@ -591,6 +604,7 @@ def _execute_test_suite( test_suite_result: The test suite level result object associated with the current test suite. 
""" + self._logger.set_stage(DtsStage.test_suite) for test_case_method in test_cases: test_case_name = test_case_method.__name__ test_case_result = test_suite_result.add_test_case(test_case_name) @@ -690,5 +704,4 @@ def _exit_dts(self) -> None: if self._logger: self._logger.info("DTS execution has ended.") - logging.shutdown() sys.exit(self._result.get_return_code()) diff --git a/dts/framework/test_result.py b/dts/framework/test_result.py index eedb2d20ee..28f84fd793 100644 --- a/dts/framework/test_result.py +++ b/dts/framework/test_result.py @@ -42,7 +42,7 @@ TestSuiteConfig, ) from .exception import DTSError, ErrorSeverity -from .logger import DTSLOG +from .logger import DTSLogger from .settings import SETTINGS from .test_suite import TestSuite @@ -237,13 +237,13 @@ class DTSResult(BaseResult): """ dpdk_version: str | None - _logger: DTSLOG + _logger: DTSLogger _errors: list[Exception] _return_code: ErrorSeverity _stats_result: Union["Statistics", None] _stats_filename: str - def __init__(self, logger: DTSLOG): + def __init__(self, logger: DTSLogger): """Extend the constructor with top-level specifics. Args: diff --git a/dts/framework/test_suite.py b/dts/framework/test_suite.py index f9fe88093e..365f80e21a 100644 --- a/dts/framework/test_suite.py +++ b/dts/framework/test_suite.py @@ -21,7 +21,7 @@ from scapy.packet import Packet, Padding # type: ignore[import] from .exception import TestCaseVerifyError -from .logger import DTSLOG, getLogger +from .logger import DTSLogger, get_dts_logger from .testbed_model import Port, PortLink, SutNode, TGNode from .utils import get_packet_summaries @@ -61,7 +61,7 @@ class TestSuite(object): #: Whether the test suite is blocking. A failure of a blocking test suite #: will block the execution of all subsequent test suites in the current build target. 
is_blocking: ClassVar[bool] = False - _logger: DTSLOG + _logger: DTSLogger _port_links: list[PortLink] _sut_port_ingress: Port _sut_port_egress: Port @@ -88,7 +88,7 @@ def __init__( """ self.sut_node = sut_node self.tg_node = tg_node - self._logger = getLogger(self.__class__.__name__) + self._logger = get_dts_logger(self.__class__.__name__) self._port_links = [] self._process_links() self._sut_port_ingress, self._tg_port_egress = ( diff --git a/dts/framework/testbed_model/node.py b/dts/framework/testbed_model/node.py index 1a55fadf78..74061f6262 100644 --- a/dts/framework/testbed_model/node.py +++ b/dts/framework/testbed_model/node.py @@ -23,7 +23,7 @@ NodeConfiguration, ) from framework.exception import ConfigurationError -from framework.logger import DTSLOG, getLogger +from framework.logger import DTSLogger, get_dts_logger from framework.settings import SETTINGS from .cpu import ( @@ -63,7 +63,7 @@ class Node(ABC): name: str lcores: list[LogicalCore] ports: list[Port] - _logger: DTSLOG + _logger: DTSLogger _other_sessions: list[OSSession] _execution_config: ExecutionConfiguration virtual_devices: list[VirtualDevice] @@ -82,7 +82,7 @@ def __init__(self, node_config: NodeConfiguration): """ self.config = node_config self.name = node_config.name - self._logger = getLogger(self.name) + self._logger = get_dts_logger(self.name) self.main_session = create_session(self.config, self.name, self._logger) self._logger.info(f"Connected to node: {self.name}") @@ -189,7 +189,7 @@ def create_session(self, name: str) -> OSSession: connection = create_session( self.config, session_name, - getLogger(session_name, node=self.name), + get_dts_logger(session_name), ) self._other_sessions.append(connection) return connection @@ -299,7 +299,6 @@ def close(self) -> None: self.main_session.close() for session in self._other_sessions: session.close() - self._logger.logger_exit() @staticmethod def skip_setup(func: Callable[..., Any]) -> Callable[..., Any]: @@ -314,7 +313,7 @@ def skip_setup(func: Callable[..., Any]) -> Callable[..., Any]: return func -def create_session(node_config: NodeConfiguration, name: str, logger: DTSLOG) -> OSSession: +def create_session(node_config: NodeConfiguration, name: str, logger: DTSLogger) -> OSSession: """Factory for OS-aware sessions. Args: diff --git a/dts/framework/testbed_model/os_session.py b/dts/framework/testbed_model/os_session.py index ac6bb5e112..6983aa4a77 100644 --- a/dts/framework/testbed_model/os_session.py +++ b/dts/framework/testbed_model/os_session.py @@ -21,7 +21,6 @@ the :attr:`~.node.Node.main_session` translates that to ``rm -rf`` if the node's OS is Linux and other commands for other OSs. It also translates the path to match the underlying OS. """ - from abc import ABC, abstractmethod from collections.abc import Iterable from ipaddress import IPv4Interface, IPv6Interface @@ -29,7 +28,7 @@ from typing import Type, TypeVar, Union from framework.config import Architecture, NodeConfiguration, NodeInfo -from framework.logger import DTSLOG +from framework.logger import DTSLogger from framework.remote_session import ( CommandResult, InteractiveRemoteSession, @@ -62,7 +61,7 @@ class OSSession(ABC): _config: NodeConfiguration name: str - _logger: DTSLOG + _logger: DTSLogger remote_session: RemoteSession interactive_session: InteractiveRemoteSession @@ -70,7 +69,7 @@ def __init__( self, node_config: NodeConfiguration, name: str, - logger: DTSLOG, + logger: DTSLogger, ): """Initialize the OS-aware session. 
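With getLogger() replaced by get_dts_logger(), the per-node, per-session and per-test-suite loggers above all become children of a single DTS root logger, so only the root needs handlers attached. A minimal sketch of that propagation, assuming a root logger simply named "dts":

    import logging

    ROOT_NAME = "dts"

    def get_child_logger(name: str | None = None) -> logging.Logger:
        """Return the root logger, or a child of it named 'dts.<name>'."""
        return logging.getLogger(f"{ROOT_NAME}.{name}" if name else ROOT_NAME)

    root = get_child_logger()
    root.setLevel(logging.DEBUG)
    root.addHandler(logging.StreamHandler())  # the only handler in the hierarchy

    # Child loggers need no handlers of their own; their records propagate up
    # to the handlers attached to the root logger.
    get_child_logger("SUT1").info("message from a node logger")
    get_child_logger("TestSmokeTests").debug("message from a test suite logger")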
diff --git a/dts/framework/testbed_model/traffic_generator/traffic_generator.py b/dts/framework/testbed_model/traffic_generator/traffic_generator.py index c49fbff488..d86d7fb532 100644 --- a/dts/framework/testbed_model/traffic_generator/traffic_generator.py +++ b/dts/framework/testbed_model/traffic_generator/traffic_generator.py @@ -13,7 +13,7 @@ from scapy.packet import Packet # type: ignore[import] from framework.config import TrafficGeneratorConfig -from framework.logger import DTSLOG, getLogger +from framework.logger import DTSLogger, get_dts_logger from framework.testbed_model.node import Node from framework.testbed_model.port import Port from framework.utils import get_packet_summaries @@ -28,7 +28,7 @@ class TrafficGenerator(ABC): _config: TrafficGeneratorConfig _tg_node: Node - _logger: DTSLOG + _logger: DTSLogger def __init__(self, tg_node: Node, config: TrafficGeneratorConfig): """Initialize the traffic generator. @@ -39,7 +39,7 @@ def __init__(self, tg_node: Node, config: TrafficGeneratorConfig): """ self._config = config self._tg_node = tg_node - self._logger = getLogger(f"{self._tg_node.name} {self._config.traffic_generator_type}") + self._logger = get_dts_logger(f"{self._tg_node.name} {self._config.traffic_generator_type}") def send_packet(self, packet: Packet, port: Port) -> None: """Send `packet` and block until it is fully sent. diff --git a/dts/main.py b/dts/main.py index 1ffe8ff81f..fa878cc16e 100755 --- a/dts/main.py +++ b/dts/main.py @@ -6,8 +6,6 @@ """The DTS executable.""" -import logging - from framework import settings @@ -30,5 +28,4 @@ def main() -> None: # Main program begins here if __name__ == "__main__": - logging.raiseExceptions = True main() From patchwork Fri Mar 1 10:55:22 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Juraj_Linke=C5=A1?= X-Patchwork-Id: 137672 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id DF5E543B68; Fri, 1 Mar 2024 11:56:28 +0100 (CET) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 7D240433B1; Fri, 1 Mar 2024 11:55:37 +0100 (CET) Received: from mail-ed1-f52.google.com (mail-ed1-f52.google.com [209.85.208.52]) by mails.dpdk.org (Postfix) with ESMTP id 66561433E4 for ; Fri, 1 Mar 2024 11:55:34 +0100 (CET) Received: by mail-ed1-f52.google.com with SMTP id 4fb4d7f45d1cf-56454c695e6so3342351a12.0 for ; Fri, 01 Mar 2024 02:55:34 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=pantheon.tech; s=google; t=1709290534; x=1709895334; darn=dpdk.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=+e2O+oz667Uh5Qx/cHVDD4JInkMV2ojCBO+jIy/4LXI=; b=HDaNxBBo7TgQw3+jNZhKQzSru7tSB5y1XHKr/WfIgn2BO3C79GLcTdICZfhOamFmSp wKu/NmrtHV/URvIsjIjSzz0/Cq1f/+/ieVbRqVYgLZUeIvNJ1k1Gw8lGPWvFZEdsFLE3 +UW/Y42Ia7UM4dHT1Fr1eaDiizXz1ixs+L4zD8KIPGo3GAaYrw4MjbFapHkSnbk9lrBl L1WaduLikxVGetjv7Qj3LLZQ3d/E47JjnUVY1q3/Kk6WVZnCs/33DMsxWxf8wWnOlu89 fFEZ0uVhOZ13XCq+hWgl+T47CKRiKY6QB4JgAWRHrRVjfND8bdWcpo/rowuzobZeHV7p 3eyw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1709290534; x=1709895334; h=content-transfer-encoding:mime-version:references:in-reply-to 
:message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=+e2O+oz667Uh5Qx/cHVDD4JInkMV2ojCBO+jIy/4LXI=; b=S/jV0lyOKmRKkf70VIB7eXBHTP8xVBCssE6iDCkIoQxWSvLQcW2xSB0da0aSW23jXw u6NvsuN1R7LVVpOYXnGxKFWf80DRBDVHBDFEUbiDql/0qGYUgzjBty+EZOOSQkn0o04W M81oP9zEosBT8qrLBHCO43DndV7uVRaHdC+lHMxW1wHPzj40DtHFnAoOSgGyKhiUE+2T RZh6+oLZNrqOfw+Kx4jEfuSVuC8qX3FwGtrJOSvrB5YcQvJS3prv35XJlvrXq/YRRlt8 zc9ng51K1etcplUqs1Gh0BaKKXeTM1IGGPUUzZe7nHKt9n9lJKYdBZlsql/JLfiPVB6k sF4w== X-Gm-Message-State: AOJu0YxcPDwQrgm5dTSL+oTK4T02csWnIDi6mu8raRjQOjhAwKPpUYWm jyaUvZFchxW4WMSYmFvT0LBWtNNpE9P1/t/q/SxesLRJaepuVrKELfdwlxZxG9c= X-Google-Smtp-Source: AGHT+IHnkEU6Om4swLulbEpfpwFeAC4FmErOpAY91iHKTg5Z6Y+LMuudz0fGqHBzET377HNBb0Z/lw== X-Received: by 2002:a05:6402:901:b0:566:be4a:21ec with SMTP id g1-20020a056402090100b00566be4a21ecmr1450197edz.16.1709290533995; Fri, 01 Mar 2024 02:55:33 -0800 (PST) Received: from jlinkes-PT-Latitude-5530.pantheon.local ([84.245.120.62]) by smtp.gmail.com with ESMTPSA id f12-20020a056402194c00b0056661ec3f24sm1461734edz.81.2024.03.01.02.55.32 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 01 Mar 2024 02:55:33 -0800 (PST) From: =?utf-8?q?Juraj_Linke=C5=A1?= To: thomas@monjalon.net, Honnappa.Nagarahalli@arm.com, jspewock@iol.unh.edu, probb@iol.unh.edu, paul.szczepanek@arm.com, Luca.Vizzarro@arm.com, npratte@iol.unh.edu Cc: dev@dpdk.org, =?utf-8?q?Juraj_Linke=C5=A1?= Subject: [PATCH v4 7/7] dts: improve test suite and case filtering Date: Fri, 1 Mar 2024 11:55:22 +0100 Message-Id: <20240301105522.79870-8-juraj.linkes@pantheon.tech> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20240301105522.79870-1-juraj.linkes@pantheon.tech> References: <20231220103331.60888-1-juraj.linkes@pantheon.tech> <20240301105522.79870-1-juraj.linkes@pantheon.tech> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org The two places where we specify which test suite and test cases to run are complementary and not that intuitive to use. A unified way provides a better user experience. The syntax in the test run configuration file has not changed, but the environment variable and the command line arguments were changed to match the config file syntax. This required changes in the settings module which greatly simplified the parsing of the environment variables while retaining the same functionality. Signed-off-by: Juraj Linkeš Reviewed-by: Jeremy Spewock --- doc/guides/tools/dts.rst | 14 ++- dts/framework/config/__init__.py | 12 +- dts/framework/runner.py | 18 +-- dts/framework/settings.py | 187 ++++++++++++++----------------- dts/framework/test_suite.py | 2 +- 5 files changed, 114 insertions(+), 119 deletions(-) diff --git a/doc/guides/tools/dts.rst b/doc/guides/tools/dts.rst index f686ca487c..d1c3c2af7a 100644 --- a/doc/guides/tools/dts.rst +++ b/doc/guides/tools/dts.rst @@ -215,28 +215,30 @@ DTS is run with ``main.py`` located in the ``dts`` directory after entering Poet ..

code-block:: console (dts-py3.10) $ ./main.py --help - usage: main.py [-h] [--config-file CONFIG_FILE] [--output-dir OUTPUT_DIR] [-t TIMEOUT] [-v] [-s] [--tarball TARBALL] [--compile-timeout COMPILE_TIMEOUT] [--test-cases TEST_CASES] [--re-run RE_RUN] + usage: main.py [-h] [--config-file CONFIG_FILE] [--output-dir OUTPUT_DIR] [-t TIMEOUT] [-v] [-s] [--tarball TARBALL] [--compile-timeout COMPILE_TIMEOUT] [--test-suite TEST_SUITE [TEST_CASES ...]] [--re-run RE_RUN] Run DPDK test suites. All options may be specified with the environment variables provided in brackets. Command line arguments have higher priority. options: -h, --help show this help message and exit --config-file CONFIG_FILE - [DTS_CFG_FILE] configuration file that describes the test cases, SUTs and targets. (default: ./conf.yaml) + [DTS_CFG_FILE] The configuration file that describes the test cases, SUTs and targets. (default: ./conf.yaml) --output-dir OUTPUT_DIR, --output OUTPUT_DIR [DTS_OUTPUT_DIR] Output directory where DTS logs and results are saved. (default: output) -t TIMEOUT, --timeout TIMEOUT [DTS_TIMEOUT] The default timeout for all DTS operations except for compiling DPDK. (default: 15) -v, --verbose [DTS_VERBOSE] Specify to enable verbose output, logging all messages to the console. (default: False) - -s, --skip-setup [DTS_SKIP_SETUP] Specify to skip all setup steps on SUT and TG nodes. (default: None) + -s, --skip-setup [DTS_SKIP_SETUP] Specify to skip all setup steps on SUT and TG nodes. (default: False) --tarball TARBALL, --snapshot TARBALL, --git-ref TARBALL [DTS_DPDK_TARBALL] Path to DPDK source code tarball or a git commit ID, tag ID or tree ID to test. To test local changes, first commit them, then use the commit ID with this option. (default: dpdk.tar.xz) --compile-timeout COMPILE_TIMEOUT [DTS_COMPILE_TIMEOUT] The timeout for compiling DPDK. (default: 1200) - --test-cases TEST_CASES - [DTS_TESTCASES] Comma-separated list of test cases to execute. Unknown test cases will be silently ignored. (default: ) + --test-suite TEST_SUITE [TEST_CASES ...] + [DTS_TEST_SUITES] A list containing a test suite with test cases. The first parameter is the test suite name, and the rest are test case names, which are optional. May be specified multiple times. To specify multiple test suites in the environment + variable, join the lists with a comma. Examples: --test-suite suite case case --test-suite suite case ... | DTS_TEST_SUITES='suite case case, suite case, ...' | --test-suite suite --test-suite suite case ... | DTS_TEST_SUITES='suite, suite case, ...' + (default: []) --re-run RE_RUN, --re_run RE_RUN - [DTS_RERUN] Re-run each test case the specified number of times if a test failure occurs (default: 0) + [DTS_RERUN] Re-run each test case the specified number of times if a test failure occurs. (default: 0) The brackets contain the names of environment variables that set the same thing. 
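For readers unfamiliar with the argparse combination behind the new option, action="append" together with nargs="+" collects each --test-suite occurrence as its own list, with the suite name first and any test case names after it. A small standalone sketch (the suite and case names below are only examples):

    import argparse

    parser = argparse.ArgumentParser()
    parser.add_argument(
        "--test-suite",
        action="append",  # each occurrence of the option appends one list
        nargs="+",        # the list holds the suite name plus optional case names
        default=[],
        metavar=("TEST_SUITE", "TEST_CASES"),
    )

    args = parser.parse_args(
        ["--test-suite", "hello_world", "hello_world_single_core", "--test-suite", "os_udp"]
    )
    print(args.test_suite)  # [['hello_world', 'hello_world_single_core'], ['os_udp']]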
diff --git a/dts/framework/config/__init__.py b/dts/framework/config/__init__.py index c6a93b3b89..4cb5c74059 100644 --- a/dts/framework/config/__init__.py +++ b/dts/framework/config/__init__.py @@ -35,9 +35,9 @@ import json import os.path -import pathlib from dataclasses import dataclass, fields from enum import auto, unique +from pathlib import Path from typing import Union import warlock # type: ignore[import] @@ -53,7 +53,6 @@ TrafficGeneratorConfigDict, ) from framework.exception import ConfigurationError -from framework.settings import SETTINGS from framework.utils import StrEnum @@ -571,7 +570,7 @@ def from_dict(d: ConfigurationDict) -> "Configuration": return Configuration(executions=executions) -def load_config() -> Configuration: +def load_config(config_file_path: Path) -> Configuration: """Load DTS test run configuration from a file. Load the YAML test run configuration file @@ -581,13 +580,16 @@ def load_config() -> Configuration: The YAML test run configuration file is specified in the :option:`--config-file` command line argument or the :envvar:`DTS_CFG_FILE` environment variable. + Args: + config_file_path: The path to the YAML test run configuration file. + Returns: The parsed test run configuration. """ - with open(SETTINGS.config_file_path, "r") as f: + with open(config_file_path, "r") as f: config_data = yaml.safe_load(f) - schema_path = os.path.join(pathlib.Path(__file__).parent.resolve(), "conf_yaml_schema.json") + schema_path = os.path.join(Path(__file__).parent.resolve(), "conf_yaml_schema.json") with open(schema_path, "r") as f: schema = json.load(f) diff --git a/dts/framework/runner.py b/dts/framework/runner.py index dfee8ebd7c..db8e3ba96b 100644 --- a/dts/framework/runner.py +++ b/dts/framework/runner.py @@ -24,7 +24,7 @@ import sys from pathlib import Path from types import MethodType -from typing import Iterable +from typing import Iterable, Sequence from .config import ( BuildTargetConfiguration, @@ -83,7 +83,7 @@ class DTSRunner: def __init__(self): """Initialize the instance with configuration, logger, result and string constants.""" - self._configuration = load_config() + self._configuration = load_config(SETTINGS.config_file_path) self._logger = get_dts_logger() if not os.path.exists(SETTINGS.output_dir): os.makedirs(SETTINGS.output_dir) @@ -129,7 +129,7 @@ def run(self): #. Execution teardown The test cases are filtered according to the specification in the test run configuration and - the :option:`--test-cases` command line argument or + the :option:`--test-suite` command line argument or the :envvar:`DTS_TESTCASES` environment variable. 
""" sut_nodes: dict[str, SutNode] = {} @@ -147,7 +147,9 @@ def run(self): ) execution_result = self._result.add_execution(execution) # we don't want to modify the original config, so create a copy - execution_test_suites = list(execution.test_suites) + execution_test_suites = list( + SETTINGS.test_suites if SETTINGS.test_suites else execution.test_suites + ) if not execution.skip_smoke_tests: execution_test_suites[:0] = [TestSuiteConfig.from_dict("smoke_tests")] try: @@ -226,7 +228,7 @@ def _get_test_suites_with_cases( test_suite_class = self._get_test_suite_class(test_suite_config.test_suite) test_cases = [] func_test_cases, perf_test_cases = self._filter_test_cases( - test_suite_class, set(test_suite_config.test_cases + SETTINGS.test_cases) + test_suite_class, test_suite_config.test_cases ) if func: test_cases.extend(func_test_cases) @@ -302,7 +304,7 @@ def is_test_suite(object) -> bool: ) def _filter_test_cases( - self, test_suite_class: type[TestSuite], test_cases_to_run: set[str] + self, test_suite_class: type[TestSuite], test_cases_to_run: Sequence[str] ) -> tuple[list[MethodType], list[MethodType]]: """Filter `test_cases_to_run` from `test_suite_class`. @@ -331,7 +333,9 @@ def _filter_test_cases( (name, method) for name, method in name_method_tuples if name in test_cases_to_run ] if len(name_method_tuples) < len(test_cases_to_run): - missing_test_cases = test_cases_to_run - {name for name, _ in name_method_tuples} + missing_test_cases = set(test_cases_to_run) - { + name for name, _ in name_method_tuples + } raise ConfigurationError( f"Test cases {missing_test_cases} not found among methods " f"of {test_suite_class.__name__}." diff --git a/dts/framework/settings.py b/dts/framework/settings.py index 2b8bfbe0ed..688e8679a7 100644 --- a/dts/framework/settings.py +++ b/dts/framework/settings.py @@ -48,10 +48,11 @@ The path to a DPDK tarball, git commit ID, tag ID or tree ID to test. -.. option:: --test-cases -.. envvar:: DTS_TESTCASES +.. option:: --test-suite +.. envvar:: DTS_TEST_SUITES - A comma-separated list of test cases to execute. Unknown test cases will be silently ignored. + A test suite with test cases which may be specified multiple times. + In the environment variable, the suites are joined with a comma. .. option:: --re-run, --re_run .. envvar:: DTS_RERUN @@ -71,83 +72,13 @@ import argparse import os -from collections.abc import Callable, Iterable, Sequence from dataclasses import dataclass, field from pathlib import Path -from typing import Any, TypeVar +from typing import Any +from .config import TestSuiteConfig from .utils import DPDKGitTarball -_T = TypeVar("_T") - - -def _env_arg(env_var: str) -> Any: - """A helper method augmenting the argparse Action with environment variables. - - If the supplied environment variable is defined, then the default value - of the argument is modified. This satisfies the priority order of - command line argument > environment variable > default value. - - Arguments with no values (flags) should be defined using the const keyword argument - (True or False). When the argument is specified, it will be set to const, if not specified, - the default will be stored (possibly modified by the corresponding environment variable). - - Other arguments work the same as default argparse arguments, that is using - the default 'store' action. - - Returns: - The modified argparse.Action. 
- """ - - class _EnvironmentArgument(argparse.Action): - def __init__( - self, - option_strings: Sequence[str], - dest: str, - nargs: str | int | None = None, - const: bool | None = None, - default: Any = None, - type: Callable[[str], _T | argparse.FileType | None] = None, - choices: Iterable[_T] | None = None, - required: bool = False, - help: str | None = None, - metavar: str | tuple[str, ...] | None = None, - ) -> None: - env_var_value = os.environ.get(env_var) - default = env_var_value or default - if const is not None: - nargs = 0 - default = const if env_var_value else default - type = None - choices = None - metavar = None - super(_EnvironmentArgument, self).__init__( - option_strings, - dest, - nargs=nargs, - const=const, - default=default, - type=type, - choices=choices, - required=required, - help=help, - metavar=metavar, - ) - - def __call__( - self, - parser: argparse.ArgumentParser, - namespace: argparse.Namespace, - values: Any, - option_string: str = None, - ) -> None: - if self.const is not None: - setattr(namespace, self.dest, self.const) - else: - setattr(namespace, self.dest, values) - - return _EnvironmentArgument - @dataclass(slots=True) class Settings: @@ -171,7 +102,7 @@ class Settings: #: compile_timeout: float = 1200 #: - test_cases: list[str] = field(default_factory=list) + test_suites: list[TestSuiteConfig] = field(default_factory=list) #: re_run: int = 0 @@ -180,6 +111,31 @@ class Settings: def _get_parser() -> argparse.ArgumentParser: + """Create the argument parser for DTS. + + Command line options take precedence over environment variables, which in turn take precedence + over default values. + + Returns: + argparse.ArgumentParser: The configured argument parser with defined options. + """ + + def env_arg(env_var: str, default: Any) -> Any: + """A helper function augmenting the argparse with environment variables. + + If the supplied environment variable is defined, then the default value + of the argument is modified. This satisfies the priority order of + command line argument > environment variable > default value. + + Args: + env_var: Environment variable name. + default: Default value. + + Returns: + Environment variable or default value. + """ + return os.environ.get(env_var) or default + parser = argparse.ArgumentParser( description="Run DPDK test suites. All options may be specified with the environment " "variables provided in brackets. 
         "variables provided in brackets. Command line arguments have higher priority.",
@@ -188,25 +144,23 @@ def _get_parser() -> argparse.ArgumentParser:
     parser.add_argument(
         "--config-file",
-        action=_env_arg("DTS_CFG_FILE"),
-        default=SETTINGS.config_file_path,
+        default=env_arg("DTS_CFG_FILE", SETTINGS.config_file_path),
         type=Path,
-        help="[DTS_CFG_FILE] configuration file that describes the test cases, SUTs and targets.",
+        help="[DTS_CFG_FILE] The configuration file that describes the test cases, "
+        "SUTs and targets.",
     )
 
     parser.add_argument(
         "--output-dir",
         "--output",
-        action=_env_arg("DTS_OUTPUT_DIR"),
-        default=SETTINGS.output_dir,
+        default=env_arg("DTS_OUTPUT_DIR", SETTINGS.output_dir),
        help="[DTS_OUTPUT_DIR] Output directory where DTS logs and results are saved.",
     )
 
     parser.add_argument(
         "-t",
         "--timeout",
-        action=_env_arg("DTS_TIMEOUT"),
-        default=SETTINGS.timeout,
+        default=env_arg("DTS_TIMEOUT", SETTINGS.timeout),
         type=float,
         help="[DTS_TIMEOUT] The default timeout for all DTS operations except for compiling DPDK.",
     )
 
@@ -214,9 +168,8 @@ def _get_parser() -> argparse.ArgumentParser:
     parser.add_argument(
         "-v",
         "--verbose",
-        action=_env_arg("DTS_VERBOSE"),
-        default=SETTINGS.verbose,
-        const=True,
+        action="store_true",
+        default=env_arg("DTS_VERBOSE", SETTINGS.verbose),
         help="[DTS_VERBOSE] Specify to enable verbose output, logging all messages "
         "to the console.",
     )
@@ -224,8 +177,8 @@ def _get_parser() -> argparse.ArgumentParser:
     parser.add_argument(
         "-s",
         "--skip-setup",
-        action=_env_arg("DTS_SKIP_SETUP"),
-        const=True,
+        action="store_true",
+        default=env_arg("DTS_SKIP_SETUP", SETTINGS.skip_setup),
         help="[DTS_SKIP_SETUP] Specify to skip all setup steps on SUT and TG nodes.",
     )
 
@@ -233,8 +186,7 @@ def _get_parser() -> argparse.ArgumentParser:
         "--tarball",
         "--snapshot",
         "--git-ref",
-        action=_env_arg("DTS_DPDK_TARBALL"),
-        default=SETTINGS.dpdk_tarball_path,
+        default=env_arg("DTS_DPDK_TARBALL", SETTINGS.dpdk_tarball_path),
         type=Path,
         help="[DTS_DPDK_TARBALL] Path to DPDK source code tarball or a git commit ID, "
         "tag ID or tree ID to test. To test local changes, first commit them, "
@@ -243,36 +195,71 @@ def _get_parser() -> argparse.ArgumentParser:
 
     parser.add_argument(
         "--compile-timeout",
-        action=_env_arg("DTS_COMPILE_TIMEOUT"),
-        default=SETTINGS.compile_timeout,
+        default=env_arg("DTS_COMPILE_TIMEOUT", SETTINGS.compile_timeout),
         type=float,
         help="[DTS_COMPILE_TIMEOUT] The timeout for compiling DPDK.",
     )
 
     parser.add_argument(
-        "--test-cases",
-        action=_env_arg("DTS_TESTCASES"),
-        default="",
-        help="[DTS_TESTCASES] Comma-separated list of test cases to execute.",
+        "--test-suite",
+        action="append",
+        nargs="+",
+        metavar=("TEST_SUITE", "TEST_CASES"),
+        default=env_arg("DTS_TEST_SUITES", SETTINGS.test_suites),
+        help="[DTS_TEST_SUITES] A list containing a test suite with test cases. "
+        "The first parameter is the test suite name, and the rest are test case names, "
+        "which are optional. May be specified multiple times. To specify multiple test suites in "
+        "the environment variable, join the lists with a comma. "
+        "Examples: "
+        "--test-suite suite case case --test-suite suite case ... | "
+        "DTS_TEST_SUITES='suite case case, suite case, ...' | "
| " + "DTS_TEST_SUITES='suite, suite case, ...'", ) parser.add_argument( "--re-run", "--re_run", - action=_env_arg("DTS_RERUN"), - default=SETTINGS.re_run, + default=env_arg("DTS_RERUN", SETTINGS.re_run), type=int, help="[DTS_RERUN] Re-run each test case the specified number of times " - "if a test failure occurs", + "if a test failure occurs.", ) return parser +def _process_test_suites(args: str | list[list[str]]) -> list[TestSuiteConfig]: + """Process the given argument to a list of :class:`TestSuiteConfig` to execute. + + Args: + args: The arguments to process. The args is a string from an environment variable + or a list of from the user input containing tests suites with tests cases, + each of which is a list of [test_suite, test_case, test_case, ...]. + + Returns: + A list of test suite configurations to execute. + """ + if isinstance(args, str): + # Environment variable in the form of "suite case case, suite case, suite, ..." + args = [suite_with_cases.split() for suite_with_cases in args.split(",")] + + test_suites_to_run = [] + for suite_with_cases in args: + test_suites_to_run.append( + TestSuiteConfig(test_suite=suite_with_cases[0], test_cases=suite_with_cases[1:]) + ) + + return test_suites_to_run + + def get_settings() -> Settings: """Create new settings with inputs from the user. The inputs are taken from the command line and from environment variables. + + Returns: + The new settings object. """ parsed_args = _get_parser().parse_args() return Settings( @@ -287,6 +274,6 @@ def get_settings() -> Settings: else Path(parsed_args.tarball) ), compile_timeout=parsed_args.compile_timeout, - test_cases=(parsed_args.test_cases.split(",") if parsed_args.test_cases else []), + test_suites=_process_test_suites(parsed_args.test_suite), re_run=parsed_args.re_run, ) diff --git a/dts/framework/test_suite.py b/dts/framework/test_suite.py index 365f80e21a..1957ea7328 100644 --- a/dts/framework/test_suite.py +++ b/dts/framework/test_suite.py @@ -40,7 +40,7 @@ class TestSuite(object): and functional test cases (all other test cases). By default, all test cases will be executed. A list of testcase names may be specified - in the YAML test run configuration file and in the :option:`--test-cases` command line argument + in the YAML test run configuration file and in the :option:`--test-suite` command line argument or in the :envvar:`DTS_TESTCASES` environment variable to filter which test cases to run. The union of both lists will be used. Any unknown test cases from the latter lists will be silently ignored.