From patchwork Wed Nov 15 13:09:39 2023
X-Patchwork-Submitter: Juraj Linkeš
X-Patchwork-Id: 134378
X-Patchwork-Delegate: thomas@monjalon.net
From: Juraj Linkeš
To: thomas@monjalon.net, Honnappa.Nagarahalli@arm.com, jspewock@iol.unh.edu, probb@iol.unh.edu, paul.szczepanek@arm.com, yoan.picchi@foss.arm.com
Cc: dev@dpdk.org, Juraj Linkeš
Subject: [PATCH v7 01/21] dts: code adjustments for doc generation
Date: Wed, 15 Nov 2023 14:09:39 +0100
Message-Id: <20231115130959.39420-2-juraj.linkes@pantheon.tech>
In-Reply-To: <20231115130959.39420-1-juraj.linkes@pantheon.tech>
References: <20231108125324.191005-23-juraj.linkes@pantheon.tech> <20231115130959.39420-1-juraj.linkes@pantheon.tech>
List-Id: DPDK patches and discussions

The standard Python tool for generating API documentation, Sphinx, imports modules one-by-one when generating the
documentation. This requires code changes:

* properly guarding argument parsing in the if __name__ == '__main__' block,
* the logger used by the DTS runner underwent the same treatment so that it doesn't create log files outside of a DTS run,
* however, DTS uses the arguments to construct an object holding global variables. The defaults for the global variables needed to be moved from argument parsing elsewhere,
* importing the remote_session module from framework resulted in circular imports because of one module trying to import another. This is fixed by reorganizing the code,
* some code reorganization was done because the resulting structure makes more sense, improving documentation clarity.

There are some other changes which are documentation related:

* added missing type annotations so they appear in the generated docs,
* reordered arguments in some methods,
* removed superfluous arguments and attributes,
* changed some functions/methods/attributes from public to private and vice versa.

All of the above appear in the generated documentation, and with them, the documentation is improved.
Signed-off-by: Juraj Linkeš Reviewed-by: Yoan Picchi --- dts/framework/config/__init__.py | 10 ++- dts/framework/dts.py | 33 +++++-- dts/framework/exception.py | 54 +++++------- dts/framework/remote_session/__init__.py | 41 ++++----- .../interactive_remote_session.py | 0 .../{remote => }/interactive_shell.py | 0 .../{remote => }/python_shell.py | 0 .../remote_session/remote/__init__.py | 27 ------ .../{remote => }/remote_session.py | 0 .../{remote => }/ssh_session.py | 12 +-- .../{remote => }/testpmd_shell.py | 0 dts/framework/settings.py | 87 +++++++++++-------- dts/framework/test_result.py | 4 +- dts/framework/test_suite.py | 7 +- dts/framework/testbed_model/__init__.py | 12 +-- dts/framework/testbed_model/{hw => }/cpu.py | 13 +++ dts/framework/testbed_model/hw/__init__.py | 27 ------ .../linux_session.py | 6 +- dts/framework/testbed_model/node.py | 25 ++++-- .../os_session.py | 22 ++--- dts/framework/testbed_model/{hw => }/port.py | 0 .../posix_session.py | 4 +- dts/framework/testbed_model/sut_node.py | 8 +- dts/framework/testbed_model/tg_node.py | 30 +------ .../traffic_generator/__init__.py | 24 +++++ .../capturing_traffic_generator.py | 6 +- .../{ => traffic_generator}/scapy.py | 23 ++--- .../traffic_generator.py | 16 +++- .../testbed_model/{hw => }/virtual_device.py | 0 dts/framework/utils.py | 46 +++------- dts/main.py | 9 +- 31 files changed, 258 insertions(+), 288 deletions(-) rename dts/framework/remote_session/{remote => }/interactive_remote_session.py (100%) rename dts/framework/remote_session/{remote => }/interactive_shell.py (100%) rename dts/framework/remote_session/{remote => }/python_shell.py (100%) delete mode 100644 dts/framework/remote_session/remote/__init__.py rename dts/framework/remote_session/{remote => }/remote_session.py (100%) rename dts/framework/remote_session/{remote => }/ssh_session.py (91%) rename dts/framework/remote_session/{remote => }/testpmd_shell.py (100%) rename dts/framework/testbed_model/{hw => }/cpu.py (95%) delete mode 
100644 dts/framework/testbed_model/hw/__init__.py rename dts/framework/{remote_session => testbed_model}/linux_session.py (97%) rename dts/framework/{remote_session => testbed_model}/os_session.py (95%) rename dts/framework/testbed_model/{hw => }/port.py (100%) rename dts/framework/{remote_session => testbed_model}/posix_session.py (98%) create mode 100644 dts/framework/testbed_model/traffic_generator/__init__.py rename dts/framework/testbed_model/{ => traffic_generator}/capturing_traffic_generator.py (96%) rename dts/framework/testbed_model/{ => traffic_generator}/scapy.py (95%) rename dts/framework/testbed_model/{ => traffic_generator}/traffic_generator.py (80%) rename dts/framework/testbed_model/{hw => }/virtual_device.py (100%) diff --git a/dts/framework/config/__init__.py b/dts/framework/config/__init__.py index cb7e00ba34..2044c82611 100644 --- a/dts/framework/config/__init__.py +++ b/dts/framework/config/__init__.py @@ -17,6 +17,7 @@ import warlock # type: ignore[import] import yaml +from framework.exception import ConfigurationError from framework.settings import SETTINGS from framework.utils import StrEnum @@ -89,7 +90,7 @@ class TrafficGeneratorConfig: traffic_generator_type: TrafficGeneratorType @staticmethod - def from_dict(d: dict): + def from_dict(d: dict) -> "ScapyTrafficGeneratorConfig": # This looks useless now, but is designed to allow expansion to traffic # generators that require more configuration later. match TrafficGeneratorType(d["type"]): @@ -97,6 +98,10 @@ def from_dict(d: dict): return ScapyTrafficGeneratorConfig( traffic_generator_type=TrafficGeneratorType.SCAPY ) + case _: + raise ConfigurationError( + f'Unknown traffic generator type "{d["type"]}".' 
+ ) @dataclass(slots=True, frozen=True) @@ -324,6 +329,3 @@ def load_config() -> Configuration: config: dict[str, Any] = warlock.model_factory(schema, name="_Config")(config_data) config_obj: Configuration = Configuration.from_dict(dict(config)) return config_obj - - -CONFIGURATION = load_config() diff --git a/dts/framework/dts.py b/dts/framework/dts.py index f773f0c38d..4c7fb0c40a 100644 --- a/dts/framework/dts.py +++ b/dts/framework/dts.py @@ -6,19 +6,19 @@ import sys from .config import ( - CONFIGURATION, BuildTargetConfiguration, ExecutionConfiguration, TestSuiteConfig, + load_config, ) from .exception import BlockingTestSuiteError from .logger import DTSLOG, getLogger from .test_result import BuildTargetResult, DTSResult, ExecutionResult, Result from .test_suite import get_test_suites from .testbed_model import SutNode, TGNode -from .utils import check_dts_python_version -dts_logger: DTSLOG = getLogger("DTSRunner") +# dummy defaults to satisfy linters +dts_logger: DTSLOG = None # type: ignore[assignment] result: DTSResult = DTSResult(dts_logger) @@ -30,14 +30,18 @@ def run_all() -> None: global dts_logger global result + # create a regular DTS logger and create a new result with it + dts_logger = getLogger("DTSRunner") + result = DTSResult(dts_logger) + # check the python version of the server that run dts - check_dts_python_version() + _check_dts_python_version() sut_nodes: dict[str, SutNode] = {} tg_nodes: dict[str, TGNode] = {} try: # for all Execution sections - for execution in CONFIGURATION.executions: + for execution in load_config().executions: sut_node = sut_nodes.get(execution.system_under_test_node.name) tg_node = tg_nodes.get(execution.traffic_generator_node.name) @@ -82,6 +86,25 @@ def run_all() -> None: _exit_dts() +def _check_dts_python_version() -> None: + def RED(text: str) -> str: + return f"\u001B[31;1m{str(text)}\u001B[0m" + + if sys.version_info.major < 3 or ( + sys.version_info.major == 3 and sys.version_info.minor < 10 + ): + print( + 
RED( + ( + "WARNING: DTS execution node's python version is lower than" + "python 3.10, is deprecated and will not work in future releases." + ) + ), + file=sys.stderr, + ) + print(RED("Please use Python >= 3.10 instead"), file=sys.stderr) + + def _run_execution( sut_node: SutNode, tg_node: TGNode, diff --git a/dts/framework/exception.py b/dts/framework/exception.py index 001a5a5496..7489c03570 100644 --- a/dts/framework/exception.py +++ b/dts/framework/exception.py @@ -42,19 +42,14 @@ class SSHTimeoutError(DTSError): Command execution timeout. """ - command: str - output: str severity: ClassVar[ErrorSeverity] = ErrorSeverity.SSH_ERR + _command: str - def __init__(self, command: str, output: str): - self.command = command - self.output = output + def __init__(self, command: str): + self._command = command def __str__(self) -> str: - return f"TIMEOUT on {self.command}" - - def get_output(self) -> str: - return self.output + return f"TIMEOUT on {self._command}" class SSHConnectionError(DTSError): @@ -62,18 +57,18 @@ class SSHConnectionError(DTSError): SSH connection error. """ - host: str - errors: list[str] severity: ClassVar[ErrorSeverity] = ErrorSeverity.SSH_ERR + _host: str + _errors: list[str] def __init__(self, host: str, errors: list[str] | None = None): - self.host = host - self.errors = [] if errors is None else errors + self._host = host + self._errors = [] if errors is None else errors def __str__(self) -> str: - message = f"Error trying to connect with {self.host}." - if self.errors: - message += f" Errors encountered while retrying: {', '.join(self.errors)}" + message = f"Error trying to connect with {self._host}." + if self._errors: + message += f" Errors encountered while retrying: {', '.join(self._errors)}" return message @@ -84,14 +79,14 @@ class SSHSessionDeadError(DTSError): It can no longer be used. 
""" - host: str severity: ClassVar[ErrorSeverity] = ErrorSeverity.SSH_ERR + _host: str def __init__(self, host: str): - self.host = host + self._host = host def __str__(self) -> str: - return f"SSH session with {self.host} has died" + return f"SSH session with {self._host} has died" class ConfigurationError(DTSError): @@ -107,18 +102,18 @@ class RemoteCommandExecutionError(DTSError): Raised when a command executed on a Node returns a non-zero exit status. """ - command: str - command_return_code: int severity: ClassVar[ErrorSeverity] = ErrorSeverity.REMOTE_CMD_EXEC_ERR + command: str + _command_return_code: int def __init__(self, command: str, command_return_code: int): self.command = command - self.command_return_code = command_return_code + self._command_return_code = command_return_code def __str__(self) -> str: return ( f"Command {self.command} returned a non-zero exit code: " - f"{self.command_return_code}" + f"{self._command_return_code}" ) @@ -143,22 +138,15 @@ class TestCaseVerifyError(DTSError): Used in test cases to verify the expected behavior. """ - value: str severity: ClassVar[ErrorSeverity] = ErrorSeverity.TESTCASE_VERIFY_ERR - def __init__(self, value: str): - self.value = value - - def __str__(self) -> str: - return repr(self.value) - class BlockingTestSuiteError(DTSError): - suite_name: str severity: ClassVar[ErrorSeverity] = ErrorSeverity.BLOCKING_TESTSUITE_ERR + _suite_name: str def __init__(self, suite_name: str) -> None: - self.suite_name = suite_name + self._suite_name = suite_name def __str__(self) -> str: - return f"Blocking suite {self.suite_name} failed." + return f"Blocking suite {self._suite_name} failed." 
diff --git a/dts/framework/remote_session/__init__.py b/dts/framework/remote_session/__init__.py index 00b6d1f03a..5e7ddb2b05 100644 --- a/dts/framework/remote_session/__init__.py +++ b/dts/framework/remote_session/__init__.py @@ -12,29 +12,24 @@ # pylama:ignore=W0611 -from framework.config import OS, NodeConfiguration -from framework.exception import ConfigurationError +from framework.config import NodeConfiguration from framework.logger import DTSLOG -from .linux_session import LinuxSession -from .os_session import InteractiveShellType, OSSession -from .remote import ( - CommandResult, - InteractiveRemoteSession, - InteractiveShell, - PythonShell, - RemoteSession, - SSHSession, - TestPmdDevice, - TestPmdShell, -) - - -def create_session( +from .interactive_remote_session import InteractiveRemoteSession +from .interactive_shell import InteractiveShell +from .python_shell import PythonShell +from .remote_session import CommandResult, RemoteSession +from .ssh_session import SSHSession +from .testpmd_shell import TestPmdShell + + +def create_remote_session( node_config: NodeConfiguration, name: str, logger: DTSLOG -) -> OSSession: - match node_config.os: - case OS.linux: - return LinuxSession(node_config, name, logger) - case _: - raise ConfigurationError(f"Unsupported OS {node_config.os}") +) -> RemoteSession: + return SSHSession(node_config, name, logger) + + +def create_interactive_session( + node_config: NodeConfiguration, logger: DTSLOG +) -> InteractiveRemoteSession: + return InteractiveRemoteSession(node_config, logger) diff --git a/dts/framework/remote_session/remote/interactive_remote_session.py b/dts/framework/remote_session/interactive_remote_session.py similarity index 100% rename from dts/framework/remote_session/remote/interactive_remote_session.py rename to dts/framework/remote_session/interactive_remote_session.py diff --git a/dts/framework/remote_session/remote/interactive_shell.py b/dts/framework/remote_session/interactive_shell.py similarity index 
100% rename from dts/framework/remote_session/remote/interactive_shell.py rename to dts/framework/remote_session/interactive_shell.py diff --git a/dts/framework/remote_session/remote/python_shell.py b/dts/framework/remote_session/python_shell.py similarity index 100% rename from dts/framework/remote_session/remote/python_shell.py rename to dts/framework/remote_session/python_shell.py diff --git a/dts/framework/remote_session/remote/__init__.py b/dts/framework/remote_session/remote/__init__.py deleted file mode 100644 index 06403691a5..0000000000 --- a/dts/framework/remote_session/remote/__init__.py +++ /dev/null @@ -1,27 +0,0 @@ -# SPDX-License-Identifier: BSD-3-Clause -# Copyright(c) 2023 PANTHEON.tech s.r.o. -# Copyright(c) 2023 University of New Hampshire - -# pylama:ignore=W0611 - -from framework.config import NodeConfiguration -from framework.logger import DTSLOG - -from .interactive_remote_session import InteractiveRemoteSession -from .interactive_shell import InteractiveShell -from .python_shell import PythonShell -from .remote_session import CommandResult, RemoteSession -from .ssh_session import SSHSession -from .testpmd_shell import TestPmdDevice, TestPmdShell - - -def create_remote_session( - node_config: NodeConfiguration, name: str, logger: DTSLOG -) -> RemoteSession: - return SSHSession(node_config, name, logger) - - -def create_interactive_session( - node_config: NodeConfiguration, logger: DTSLOG -) -> InteractiveRemoteSession: - return InteractiveRemoteSession(node_config, logger) diff --git a/dts/framework/remote_session/remote/remote_session.py b/dts/framework/remote_session/remote_session.py similarity index 100% rename from dts/framework/remote_session/remote/remote_session.py rename to dts/framework/remote_session/remote_session.py diff --git a/dts/framework/remote_session/remote/ssh_session.py b/dts/framework/remote_session/ssh_session.py similarity index 91% rename from dts/framework/remote_session/remote/ssh_session.py rename to 
dts/framework/remote_session/ssh_session.py index 8d127f1601..cee11d14d6 100644 --- a/dts/framework/remote_session/remote/ssh_session.py +++ b/dts/framework/remote_session/ssh_session.py @@ -18,9 +18,7 @@ SSHException, ) -from framework.config import NodeConfiguration from framework.exception import SSHConnectionError, SSHSessionDeadError, SSHTimeoutError -from framework.logger import DTSLOG from .remote_session import CommandResult, RemoteSession @@ -45,14 +43,6 @@ class SSHSession(RemoteSession): session: Connection - def __init__( - self, - node_config: NodeConfiguration, - session_name: str, - logger: DTSLOG, - ): - super(SSHSession, self).__init__(node_config, session_name, logger) - def _connect(self) -> None: errors = [] retry_attempts = 10 @@ -117,7 +107,7 @@ def _send_command( except CommandTimedOut as e: self._logger.exception(e) - raise SSHTimeoutError(command, e.result.stderr) from e + raise SSHTimeoutError(command) from e return CommandResult( self.name, command, output.stdout, output.stderr, output.return_code diff --git a/dts/framework/remote_session/remote/testpmd_shell.py b/dts/framework/remote_session/testpmd_shell.py similarity index 100% rename from dts/framework/remote_session/remote/testpmd_shell.py rename to dts/framework/remote_session/testpmd_shell.py diff --git a/dts/framework/settings.py b/dts/framework/settings.py index cfa39d011b..7f5841d073 100644 --- a/dts/framework/settings.py +++ b/dts/framework/settings.py @@ -6,7 +6,7 @@ import argparse import os from collections.abc import Callable, Iterable, Sequence -from dataclasses import dataclass +from dataclasses import dataclass, field from pathlib import Path from typing import Any, TypeVar @@ -22,8 +22,8 @@ def __init__( option_strings: Sequence[str], dest: str, nargs: str | int | None = None, - const: str | None = None, - default: str = None, + const: bool | None = None, + default: Any = None, type: Callable[[str], _T | argparse.FileType | None] = None, choices: Iterable[_T] | None = 
None, required: bool = False, @@ -32,6 +32,12 @@ def __init__( ) -> None: env_var_value = os.environ.get(env_var) default = env_var_value or default + if const is not None: + nargs = 0 + default = const if env_var_value else default + type = None + choices = None + metavar = None super(_EnvironmentArgument, self).__init__( option_strings, dest, @@ -52,22 +58,28 @@ def __call__( values: Any, option_string: str = None, ) -> None: - setattr(namespace, self.dest, values) + if self.const is not None: + setattr(namespace, self.dest, self.const) + else: + setattr(namespace, self.dest, values) return _EnvironmentArgument -@dataclass(slots=True, frozen=True) -class _Settings: - config_file_path: str - output_dir: str - timeout: float - verbose: bool - skip_setup: bool - dpdk_tarball_path: Path - compile_timeout: float - test_cases: list - re_run: int +@dataclass(slots=True) +class Settings: + config_file_path: Path = Path(__file__).parent.parent.joinpath("conf.yaml") + output_dir: str = "output" + timeout: float = 15 + verbose: bool = False + skip_setup: bool = False + dpdk_tarball_path: Path | str = "dpdk.tar.xz" + compile_timeout: float = 1200 + test_cases: list[str] = field(default_factory=list) + re_run: int = 0 + + +SETTINGS: Settings = Settings() def _get_parser() -> argparse.ArgumentParser: @@ -81,7 +93,8 @@ def _get_parser() -> argparse.ArgumentParser: parser.add_argument( "--config-file", action=_env_arg("DTS_CFG_FILE"), - default="conf.yaml", + default=SETTINGS.config_file_path, + type=Path, help="[DTS_CFG_FILE] configuration file that describes the test cases, SUTs " "and targets.", ) @@ -90,7 +103,7 @@ def _get_parser() -> argparse.ArgumentParser: "--output-dir", "--output", action=_env_arg("DTS_OUTPUT_DIR"), - default="output", + default=SETTINGS.output_dir, help="[DTS_OUTPUT_DIR] Output directory where dts logs and results are saved.", ) @@ -98,7 +111,7 @@ def _get_parser() -> argparse.ArgumentParser: "-t", "--timeout", action=_env_arg("DTS_TIMEOUT"), - 
default=15, + default=SETTINGS.timeout, type=float, help="[DTS_TIMEOUT] The default timeout for all DTS operations except for " "compiling DPDK.", @@ -108,8 +121,9 @@ def _get_parser() -> argparse.ArgumentParser: "-v", "--verbose", action=_env_arg("DTS_VERBOSE"), - default="N", - help="[DTS_VERBOSE] Set to 'Y' to enable verbose output, logging all messages " + default=SETTINGS.verbose, + const=True, + help="[DTS_VERBOSE] Specify to enable verbose output, logging all messages " "to the console.", ) @@ -117,8 +131,8 @@ def _get_parser() -> argparse.ArgumentParser: "-s", "--skip-setup", action=_env_arg("DTS_SKIP_SETUP"), - default="N", - help="[DTS_SKIP_SETUP] Set to 'Y' to skip all setup steps on SUT and TG nodes.", + const=True, + help="[DTS_SKIP_SETUP] Specify to skip all setup steps on SUT and TG nodes.", ) parser.add_argument( @@ -126,7 +140,7 @@ def _get_parser() -> argparse.ArgumentParser: "--snapshot", "--git-ref", action=_env_arg("DTS_DPDK_TARBALL"), - default="dpdk.tar.xz", + default=SETTINGS.dpdk_tarball_path, type=Path, help="[DTS_DPDK_TARBALL] Path to DPDK source code tarball or a git commit ID, " "tag ID or tree ID to test. 
To test local changes, first commit them, " @@ -136,7 +150,7 @@ def _get_parser() -> argparse.ArgumentParser: parser.add_argument( "--compile-timeout", action=_env_arg("DTS_COMPILE_TIMEOUT"), - default=1200, + default=SETTINGS.compile_timeout, type=float, help="[DTS_COMPILE_TIMEOUT] The timeout for compiling DPDK.", ) @@ -153,7 +167,7 @@ def _get_parser() -> argparse.ArgumentParser: "--re-run", "--re_run", action=_env_arg("DTS_RERUN"), - default=0, + default=SETTINGS.re_run, type=int, help="[DTS_RERUN] Re-run each test case the specified amount of times " "if a test failure occurs", @@ -162,23 +176,22 @@ def _get_parser() -> argparse.ArgumentParser: return parser -def _get_settings() -> _Settings: +def get_settings() -> Settings: parsed_args = _get_parser().parse_args() - return _Settings( + return Settings( config_file_path=parsed_args.config_file, output_dir=parsed_args.output_dir, timeout=parsed_args.timeout, - verbose=(parsed_args.verbose == "Y"), - skip_setup=(parsed_args.skip_setup == "Y"), + verbose=parsed_args.verbose, + skip_setup=parsed_args.skip_setup, dpdk_tarball_path=Path( - DPDKGitTarball(parsed_args.tarball, parsed_args.output_dir) - ) - if not os.path.exists(parsed_args.tarball) - else Path(parsed_args.tarball), + Path(DPDKGitTarball(parsed_args.tarball, parsed_args.output_dir)) + if not os.path.exists(parsed_args.tarball) + else Path(parsed_args.tarball) + ), compile_timeout=parsed_args.compile_timeout, - test_cases=parsed_args.test_cases.split(",") if parsed_args.test_cases else [], + test_cases=( + parsed_args.test_cases.split(",") if parsed_args.test_cases else [] + ), re_run=parsed_args.re_run, ) - - -SETTINGS: _Settings = _get_settings() diff --git a/dts/framework/test_result.py b/dts/framework/test_result.py index f0fbe80f6f..603e18872c 100644 --- a/dts/framework/test_result.py +++ b/dts/framework/test_result.py @@ -254,7 +254,7 @@ def add_build_target( self._inner_results.append(build_target_result) return build_target_result - def 
add_sut_info(self, sut_info: NodeInfo): + def add_sut_info(self, sut_info: NodeInfo) -> None: self.sut_os_name = sut_info.os_name self.sut_os_version = sut_info.os_version self.sut_kernel_version = sut_info.kernel_version @@ -297,7 +297,7 @@ def add_execution(self, sut_node: NodeConfiguration) -> ExecutionResult: self._inner_results.append(execution_result) return execution_result - def add_error(self, error) -> None: + def add_error(self, error: Exception) -> None: self._errors.append(error) def process(self) -> None: diff --git a/dts/framework/test_suite.py b/dts/framework/test_suite.py index 3b890c0451..d53553bf34 100644 --- a/dts/framework/test_suite.py +++ b/dts/framework/test_suite.py @@ -11,7 +11,7 @@ import re from ipaddress import IPv4Interface, IPv6Interface, ip_interface from types import MethodType -from typing import Union +from typing import Any, Union from scapy.layers.inet import IP # type: ignore[import] from scapy.layers.l2 import Ether # type: ignore[import] @@ -26,8 +26,7 @@ from .logger import DTSLOG, getLogger from .settings import SETTINGS from .test_result import BuildTargetResult, Result, TestCaseResult, TestSuiteResult -from .testbed_model import SutNode, TGNode -from .testbed_model.hw.port import Port, PortLink +from .testbed_model import Port, PortLink, SutNode, TGNode from .utils import get_packet_summaries @@ -453,7 +452,7 @@ def _execute_test_case( def get_test_suites(testsuite_module_path: str) -> list[type[TestSuite]]: - def is_test_suite(object) -> bool: + def is_test_suite(object: Any) -> bool: try: if issubclass(object, TestSuite) and object is not TestSuite: return True diff --git a/dts/framework/testbed_model/__init__.py b/dts/framework/testbed_model/__init__.py index 5cbb859e47..8ced05653b 100644 --- a/dts/framework/testbed_model/__init__.py +++ b/dts/framework/testbed_model/__init__.py @@ -9,15 +9,9 @@ # pylama:ignore=W0611 -from .hw import ( - LogicalCore, - LogicalCoreCount, - LogicalCoreCountFilter, - LogicalCoreList, - 
LogicalCoreListFilter, - VirtualDevice, - lcore_filter, -) +from .cpu import LogicalCoreCount, LogicalCoreCountFilter, LogicalCoreList from .node import Node +from .port import Port, PortLink from .sut_node import SutNode from .tg_node import TGNode +from .virtual_device import VirtualDevice diff --git a/dts/framework/testbed_model/hw/cpu.py b/dts/framework/testbed_model/cpu.py similarity index 95% rename from dts/framework/testbed_model/hw/cpu.py rename to dts/framework/testbed_model/cpu.py index d1918a12dc..8fe785dfe4 100644 --- a/dts/framework/testbed_model/hw/cpu.py +++ b/dts/framework/testbed_model/cpu.py @@ -272,3 +272,16 @@ def filter(self) -> list[LogicalCore]: ) return filtered_lcores + + +def lcore_filter( + core_list: list[LogicalCore], + filter_specifier: LogicalCoreCount | LogicalCoreList, + ascending: bool, +) -> LogicalCoreFilter: + if isinstance(filter_specifier, LogicalCoreList): + return LogicalCoreListFilter(core_list, filter_specifier, ascending) + elif isinstance(filter_specifier, LogicalCoreCount): + return LogicalCoreCountFilter(core_list, filter_specifier, ascending) + else: + raise ValueError(f"Unsupported filter r{filter_specifier}") diff --git a/dts/framework/testbed_model/hw/__init__.py b/dts/framework/testbed_model/hw/__init__.py deleted file mode 100644 index 88ccac0b0e..0000000000 --- a/dts/framework/testbed_model/hw/__init__.py +++ /dev/null @@ -1,27 +0,0 @@ -# SPDX-License-Identifier: BSD-3-Clause -# Copyright(c) 2023 PANTHEON.tech s.r.o. 
- -# pylama:ignore=W0611 - -from .cpu import ( - LogicalCore, - LogicalCoreCount, - LogicalCoreCountFilter, - LogicalCoreFilter, - LogicalCoreList, - LogicalCoreListFilter, -) -from .virtual_device import VirtualDevice - - -def lcore_filter( - core_list: list[LogicalCore], - filter_specifier: LogicalCoreCount | LogicalCoreList, - ascending: bool, -) -> LogicalCoreFilter: - if isinstance(filter_specifier, LogicalCoreList): - return LogicalCoreListFilter(core_list, filter_specifier, ascending) - elif isinstance(filter_specifier, LogicalCoreCount): - return LogicalCoreCountFilter(core_list, filter_specifier, ascending) - else: - raise ValueError(f"Unsupported filter r{filter_specifier}") diff --git a/dts/framework/remote_session/linux_session.py b/dts/framework/testbed_model/linux_session.py similarity index 97% rename from dts/framework/remote_session/linux_session.py rename to dts/framework/testbed_model/linux_session.py index a3f1a6bf3b..f472bb8f0f 100644 --- a/dts/framework/remote_session/linux_session.py +++ b/dts/framework/testbed_model/linux_session.py @@ -9,10 +9,10 @@ from typing_extensions import NotRequired from framework.exception import RemoteCommandExecutionError -from framework.testbed_model import LogicalCore -from framework.testbed_model.hw.port import Port from framework.utils import expand_range +from .cpu import LogicalCore +from .port import Port from .posix_session import PosixSession @@ -64,7 +64,7 @@ def get_remote_cpus(self, use_first_core: bool) -> list[LogicalCore]: lcores.append(LogicalCore(lcore, core, socket, node)) return lcores - def get_dpdk_file_prefix(self, dpdk_prefix) -> str: + def get_dpdk_file_prefix(self, dpdk_prefix: str) -> str: return dpdk_prefix def setup_hugepages(self, hugepage_amount: int, force_first_numa: bool) -> None: diff --git a/dts/framework/testbed_model/node.py b/dts/framework/testbed_model/node.py index fc01e0bf8e..fa5b143cdd 100644 --- a/dts/framework/testbed_model/node.py +++ 
b/dts/framework/testbed_model/node.py @@ -12,23 +12,26 @@ from typing import Any, Callable, Type, Union from framework.config import ( + OS, BuildTargetConfiguration, ExecutionConfiguration, NodeConfiguration, ) +from framework.exception import ConfigurationError from framework.logger import DTSLOG, getLogger -from framework.remote_session import InteractiveShellType, OSSession, create_session from framework.settings import SETTINGS -from .hw import ( +from .cpu import ( LogicalCore, LogicalCoreCount, LogicalCoreList, LogicalCoreListFilter, - VirtualDevice, lcore_filter, ) -from .hw.port import Port +from .linux_session import LinuxSession +from .os_session import InteractiveShellType, OSSession +from .port import Port +from .virtual_device import VirtualDevice class Node(ABC): @@ -172,9 +175,9 @@ def create_interactive_shell( return self.main_session.create_interactive_shell( shell_cls, - app_args, timeout, privileged, + app_args, ) def filter_lcores( @@ -205,7 +208,7 @@ def _get_remote_cpus(self) -> None: self._logger.info("Getting CPU information.") self.lcores = self.main_session.get_remote_cpus(self.config.use_first_core) - def _setup_hugepages(self): + def _setup_hugepages(self) -> None: """ Setup hugepages on the Node. 
Different architectures can supply different amounts of memory for hugepages and numa-based hugepage allocation may need @@ -249,3 +252,13 @@ def skip_setup(func: Callable[..., Any]) -> Callable[..., Any]: return lambda *args: None else: return func + + +def create_session( + node_config: NodeConfiguration, name: str, logger: DTSLOG +) -> OSSession: + match node_config.os: + case OS.linux: + return LinuxSession(node_config, name, logger) + case _: + raise ConfigurationError(f"Unsupported OS {node_config.os}") diff --git a/dts/framework/remote_session/os_session.py b/dts/framework/testbed_model/os_session.py similarity index 95% rename from dts/framework/remote_session/os_session.py rename to dts/framework/testbed_model/os_session.py index 8a709eac1c..76e595a518 100644 --- a/dts/framework/remote_session/os_session.py +++ b/dts/framework/testbed_model/os_session.py @@ -10,19 +10,19 @@ from framework.config import Architecture, NodeConfiguration, NodeInfo from framework.logger import DTSLOG -from framework.remote_session.remote import InteractiveShell -from framework.settings import SETTINGS -from framework.testbed_model import LogicalCore -from framework.testbed_model.hw.port import Port -from framework.utils import MesonArgs - -from .remote import ( +from framework.remote_session import ( CommandResult, InteractiveRemoteSession, + InteractiveShell, RemoteSession, create_interactive_session, create_remote_session, ) +from framework.settings import SETTINGS +from framework.utils import MesonArgs + +from .cpu import LogicalCore +from .port import Port InteractiveShellType = TypeVar("InteractiveShellType", bound=InteractiveShell) @@ -85,9 +85,9 @@ def send_command( def create_interactive_shell( self, shell_cls: Type[InteractiveShellType], - eal_parameters: str, timeout: float, privileged: bool, + app_args: str, ) -> InteractiveShellType: """ See "create_interactive_shell" in SutNode @@ -96,7 +96,7 @@ def create_interactive_shell( self.interactive_session.session, 
self._logger, self._get_privileged_command if privileged else None, - eal_parameters, + app_args, timeout, ) @@ -113,7 +113,7 @@ def _get_privileged_command(command: str) -> str: """ @abstractmethod - def guess_dpdk_remote_dir(self, remote_dir) -> PurePath: + def guess_dpdk_remote_dir(self, remote_dir: str | PurePath) -> PurePath: """ Try to find DPDK remote dir in remote_dir. """ @@ -227,7 +227,7 @@ def kill_cleanup_dpdk_apps(self, dpdk_prefix_list: Iterable[str]) -> None: """ @abstractmethod - def get_dpdk_file_prefix(self, dpdk_prefix) -> str: + def get_dpdk_file_prefix(self, dpdk_prefix: str) -> str: """ Get the DPDK file prefix that will be used when running DPDK apps. """ diff --git a/dts/framework/testbed_model/hw/port.py b/dts/framework/testbed_model/port.py similarity index 100% rename from dts/framework/testbed_model/hw/port.py rename to dts/framework/testbed_model/port.py diff --git a/dts/framework/remote_session/posix_session.py b/dts/framework/testbed_model/posix_session.py similarity index 98% rename from dts/framework/remote_session/posix_session.py rename to dts/framework/testbed_model/posix_session.py index 5da0516e05..1d1d5b1b26 100644 --- a/dts/framework/remote_session/posix_session.py +++ b/dts/framework/testbed_model/posix_session.py @@ -32,7 +32,7 @@ def combine_short_options(**opts: bool) -> str: return ret_opts - def guess_dpdk_remote_dir(self, remote_dir) -> PurePosixPath: + def guess_dpdk_remote_dir(self, remote_dir: str | PurePath) -> PurePosixPath: remote_guess = self.join_remote_path(remote_dir, "dpdk-*") result = self.send_command(f"ls -d {remote_guess} | tail -1") return PurePosixPath(result.stdout) @@ -219,7 +219,7 @@ def _remove_dpdk_runtime_dirs( for dpdk_runtime_dir in dpdk_runtime_dirs: self.remove_remote_dir(dpdk_runtime_dir) - def get_dpdk_file_prefix(self, dpdk_prefix) -> str: + def get_dpdk_file_prefix(self, dpdk_prefix: str) -> str: return "" def get_compiler_version(self, compiler_name: str) -> str: diff --git 
a/dts/framework/testbed_model/sut_node.py b/dts/framework/testbed_model/sut_node.py index 4161d3a4d5..17deea06e2 100644 --- a/dts/framework/testbed_model/sut_node.py +++ b/dts/framework/testbed_model/sut_node.py @@ -15,12 +15,14 @@ NodeInfo, SutNodeConfiguration, ) -from framework.remote_session import CommandResult, InteractiveShellType, OSSession +from framework.remote_session import CommandResult from framework.settings import SETTINGS from framework.utils import MesonArgs -from .hw import LogicalCoreCount, LogicalCoreList, VirtualDevice +from .cpu import LogicalCoreCount, LogicalCoreList from .node import Node +from .os_session import InteractiveShellType, OSSession +from .virtual_device import VirtualDevice class EalParameters(object): @@ -307,7 +309,7 @@ def create_eal_parameters( prefix: str = "dpdk", append_prefix_timestamp: bool = True, no_pci: bool = False, - vdevs: list[VirtualDevice] = None, + vdevs: list[VirtualDevice] | None = None, other_eal_param: str = "", ) -> "EalParameters": """ diff --git a/dts/framework/testbed_model/tg_node.py b/dts/framework/testbed_model/tg_node.py index 27025cfa31..166eb8430e 100644 --- a/dts/framework/testbed_model/tg_node.py +++ b/dts/framework/testbed_model/tg_node.py @@ -16,16 +16,11 @@ from scapy.packet import Packet # type: ignore[import] -from framework.config import ( - ScapyTrafficGeneratorConfig, - TGNodeConfiguration, - TrafficGeneratorType, -) -from framework.exception import ConfigurationError - -from .capturing_traffic_generator import CapturingTrafficGenerator -from .hw.port import Port +from framework.config import TGNodeConfiguration + from .node import Node +from .port import Port +from .traffic_generator import CapturingTrafficGenerator, create_traffic_generator class TGNode(Node): @@ -80,20 +75,3 @@ def close(self) -> None: """Free all resources used by the node""" self.traffic_generator.close() super(TGNode, self).close() - - -def create_traffic_generator( - tg_node: TGNode, traffic_generator_config: 
ScapyTrafficGeneratorConfig -) -> CapturingTrafficGenerator: - """A factory function for creating traffic generator object from user config.""" - - from .scapy import ScapyTrafficGenerator - - match traffic_generator_config.traffic_generator_type: - case TrafficGeneratorType.SCAPY: - return ScapyTrafficGenerator(tg_node, traffic_generator_config) - case _: - raise ConfigurationError( - "Unknown traffic generator: " - f"{traffic_generator_config.traffic_generator_type}" - ) diff --git a/dts/framework/testbed_model/traffic_generator/__init__.py b/dts/framework/testbed_model/traffic_generator/__init__.py new file mode 100644 index 0000000000..11bfa1ee0f --- /dev/null +++ b/dts/framework/testbed_model/traffic_generator/__init__.py @@ -0,0 +1,24 @@ +# SPDX-License-Identifier: BSD-3-Clause +# Copyright(c) 2023 PANTHEON.tech s.r.o. + +from framework.config import ScapyTrafficGeneratorConfig, TrafficGeneratorType +from framework.exception import ConfigurationError +from framework.testbed_model.node import Node + +from .capturing_traffic_generator import CapturingTrafficGenerator +from .scapy import ScapyTrafficGenerator + + +def create_traffic_generator( + tg_node: Node, traffic_generator_config: ScapyTrafficGeneratorConfig +) -> CapturingTrafficGenerator: + """A factory function for creating traffic generator object from user config.""" + + match traffic_generator_config.traffic_generator_type: + case TrafficGeneratorType.SCAPY: + return ScapyTrafficGenerator(tg_node, traffic_generator_config) + case _: + raise ConfigurationError( + "Unknown traffic generator: " + f"{traffic_generator_config.traffic_generator_type}" + ) diff --git a/dts/framework/testbed_model/capturing_traffic_generator.py b/dts/framework/testbed_model/traffic_generator/capturing_traffic_generator.py similarity index 96% rename from dts/framework/testbed_model/capturing_traffic_generator.py rename to dts/framework/testbed_model/traffic_generator/capturing_traffic_generator.py index ab98987f8e..e521211ef0 
100644 --- a/dts/framework/testbed_model/capturing_traffic_generator.py +++ b/dts/framework/testbed_model/traffic_generator/capturing_traffic_generator.py @@ -16,9 +16,9 @@ from scapy.packet import Packet # type: ignore[import] from framework.settings import SETTINGS +from framework.testbed_model.port import Port from framework.utils import get_packet_summaries -from .hw.port import Port from .traffic_generator import TrafficGenerator @@ -130,7 +130,9 @@ def _send_packets_and_capture( for the specified duration. It must be able to handle no received packets. """ - def _write_capture_from_packets(self, capture_name: str, packets: list[Packet]): + def _write_capture_from_packets( + self, capture_name: str, packets: list[Packet] + ) -> None: file_name = f"{SETTINGS.output_dir}/{capture_name}.pcap" self._logger.debug(f"Writing packets to {file_name}.") scapy.utils.wrpcap(file_name, packets) diff --git a/dts/framework/testbed_model/scapy.py b/dts/framework/testbed_model/traffic_generator/scapy.py similarity index 95% rename from dts/framework/testbed_model/scapy.py rename to dts/framework/testbed_model/traffic_generator/scapy.py index af0d4dbb25..51864b6e6b 100644 --- a/dts/framework/testbed_model/scapy.py +++ b/dts/framework/testbed_model/traffic_generator/scapy.py @@ -24,16 +24,15 @@ from scapy.packet import Packet # type: ignore[import] from framework.config import OS, ScapyTrafficGeneratorConfig -from framework.logger import DTSLOG, getLogger from framework.remote_session import PythonShell from framework.settings import SETTINGS +from framework.testbed_model.node import Node +from framework.testbed_model.port import Port from .capturing_traffic_generator import ( CapturingTrafficGenerator, _get_default_capture_name, ) -from .hw.port import Port -from .tg_node import TGNode """ ========= BEGIN RPC FUNCTIONS ========= @@ -146,7 +145,7 @@ def quit(self) -> None: self._BaseServer__shutdown_request = True return None - def add_rpc_function(self, name: str, 
function_bytes: xmlrpc.client.Binary): + def add_rpc_function(self, name: str, function_bytes: xmlrpc.client.Binary) -> None: """Add a function to the server. This is meant to be executed remotely. @@ -191,15 +190,9 @@ class ScapyTrafficGenerator(CapturingTrafficGenerator): session: PythonShell rpc_server_proxy: xmlrpc.client.ServerProxy _config: ScapyTrafficGeneratorConfig - _tg_node: TGNode - _logger: DTSLOG - - def __init__(self, tg_node: TGNode, config: ScapyTrafficGeneratorConfig): - self._config = config - self._tg_node = tg_node - self._logger = getLogger( - f"{self._tg_node.name} {self._config.traffic_generator_type}" - ) + + def __init__(self, tg_node: Node, config: ScapyTrafficGeneratorConfig): + super().__init__(tg_node, config) assert ( self._tg_node.config.os == OS.linux @@ -235,7 +228,7 @@ def __init__(self, tg_node: TGNode, config: ScapyTrafficGeneratorConfig): function_bytes = marshal.dumps(function.__code__) self.rpc_server_proxy.add_rpc_function(function.__name__, function_bytes) - def _start_xmlrpc_server_in_remote_python(self, listen_port: int): + def _start_xmlrpc_server_in_remote_python(self, listen_port: int) -> None: # load the source of the function src = inspect.getsource(QuittableXMLRPCServer) # Lines with only whitespace break the repl if in the middle of a function @@ -280,7 +273,7 @@ def _send_packets_and_capture( scapy_packets = [Ether(packet.data) for packet in xmlrpc_packets] return scapy_packets - def close(self): + def close(self) -> None: try: self.rpc_server_proxy.quit() except ConnectionRefusedError: diff --git a/dts/framework/testbed_model/traffic_generator.py b/dts/framework/testbed_model/traffic_generator/traffic_generator.py similarity index 80% rename from dts/framework/testbed_model/traffic_generator.py rename to dts/framework/testbed_model/traffic_generator/traffic_generator.py index 28c35d3ce4..ea7c3963da 100644 --- a/dts/framework/testbed_model/traffic_generator.py +++ 
b/dts/framework/testbed_model/traffic_generator/traffic_generator.py @@ -12,11 +12,12 @@ from scapy.packet import Packet # type: ignore[import] -from framework.logger import DTSLOG +from framework.config import TrafficGeneratorConfig +from framework.logger import DTSLOG, getLogger +from framework.testbed_model.node import Node +from framework.testbed_model.port import Port from framework.utils import get_packet_summaries -from .hw.port import Port - class TrafficGenerator(ABC): """The base traffic generator. @@ -24,8 +25,17 @@ class TrafficGenerator(ABC): Defines the few basic methods that each traffic generator must implement. """ + _config: TrafficGeneratorConfig + _tg_node: Node _logger: DTSLOG + def __init__(self, tg_node: Node, config: TrafficGeneratorConfig): + self._config = config + self._tg_node = tg_node + self._logger = getLogger( + f"{self._tg_node.name} {self._config.traffic_generator_type}" + ) + def send_packet(self, packet: Packet, port: Port) -> None: """Send a packet and block until it is fully sent. 
diff --git a/dts/framework/testbed_model/hw/virtual_device.py b/dts/framework/testbed_model/virtual_device.py similarity index 100% rename from dts/framework/testbed_model/hw/virtual_device.py rename to dts/framework/testbed_model/virtual_device.py diff --git a/dts/framework/utils.py b/dts/framework/utils.py index d27c2c5b5f..f0c916471c 100644 --- a/dts/framework/utils.py +++ b/dts/framework/utils.py @@ -7,7 +7,6 @@ import json import os import subprocess -import sys from enum import Enum from pathlib import Path from subprocess import SubprocessError @@ -16,35 +15,7 @@ from .exception import ConfigurationError - -class StrEnum(Enum): - @staticmethod - def _generate_next_value_( - name: str, start: int, count: int, last_values: object - ) -> str: - return name - - def __str__(self) -> str: - return self.name - - -REGEX_FOR_PCI_ADDRESS = "/[0-9a-fA-F]{4}:[0-9a-fA-F]{2}:[0-9a-fA-F]{2}.[0-9]{1}/" - - -def check_dts_python_version() -> None: - if sys.version_info.major < 3 or ( - sys.version_info.major == 3 and sys.version_info.minor < 10 - ): - print( - RED( - ( - "WARNING: DTS execution node's python version is lower than" - "python 3.10, is deprecated and will not work in future releases." 
- ) - ), - file=sys.stderr, - ) - print(RED("Please use Python >= 3.10 instead"), file=sys.stderr) +REGEX_FOR_PCI_ADDRESS: str = "/[0-9a-fA-F]{4}:[0-9a-fA-F]{2}:[0-9a-fA-F]{2}.[0-9]{1}/" def expand_range(range_str: str) -> list[int]: @@ -67,7 +38,7 @@ def expand_range(range_str: str) -> list[int]: return expanded_range -def get_packet_summaries(packets: list[Packet]): +def get_packet_summaries(packets: list[Packet]) -> str: if len(packets) == 1: packet_summaries = packets[0].summary() else: @@ -77,8 +48,15 @@ def get_packet_summaries(packets: list[Packet]): return f"Packet contents: \n{packet_summaries}" -def RED(text: str) -> str: - return f"\u001B[31;1m{str(text)}\u001B[0m" +class StrEnum(Enum): + @staticmethod + def _generate_next_value_( + name: str, start: int, count: int, last_values: object + ) -> str: + return name + + def __str__(self) -> str: + return self.name class MesonArgs(object): @@ -225,5 +203,5 @@ def _delete_tarball(self) -> None: if self._tarball_path and os.path.exists(self._tarball_path): os.remove(self._tarball_path) - def __fspath__(self): + def __fspath__(self) -> str: return str(self._tarball_path) diff --git a/dts/main.py b/dts/main.py index 43311fa847..5d4714b0c3 100755 --- a/dts/main.py +++ b/dts/main.py @@ -10,10 +10,17 @@ import logging -from framework import dts +from framework import settings def main() -> None: + """Set DTS settings, then run DTS. + + The DTS settings are taken from the command line arguments and the environment variables. 
+ """ + settings.SETTINGS = settings.get_settings() + from framework import dts + dts.run_all() From patchwork Wed Nov 15 13:09:40 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Juraj_Linke=C5=A1?= X-Patchwork-Id: 134379 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 21A0843339; Wed, 15 Nov 2023 14:11:54 +0100 (CET) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 0B07240294; Wed, 15 Nov 2023 14:11:54 +0100 (CET) Received: from mail-ej1-f45.google.com (mail-ej1-f45.google.com [209.85.218.45]) by mails.dpdk.org (Postfix) with ESMTP id 2A25240285 for ; Wed, 15 Nov 2023 14:11:53 +0100 (CET) Received: by mail-ej1-f45.google.com with SMTP id a640c23a62f3a-9c3aec5f326so166995566b.1 for ; Wed, 15 Nov 2023 05:11:53 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=pantheon.tech; s=google; t=1700053913; x=1700658713; darn=dpdk.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=U6/AvRBkllFQ+XgMDx3380G1DrnvTmtIBIE/1+tA3H8=; b=Oy8euVzdVDmXBE51sM2tjRomGEUAZFukMStD5znboNKBIlbFVdFf1dRyBS3M+kG+Yz mFBlhDCyXPheNThj1HnZ+j6t/uIptddehmz0vBPsBwtbMM3P+v4hj+3r/YWYwGXxNHZ2 5pJ/cLO1TPLy/vMOoxkW/LcVqUASbvW0SJJ+TpMCdgeO1MhkdkCp8l3qTZ+rI/5oVfvw BvUz37/UQmjXTHqcU8ONbaem3RrtubBldmdbFGbno2bD15hRL6cgYxT1Gq6qLO5ONvBS ZzQMT2a+p+Y8CxMmnkGoxqIMp0BSlfwa+qeWyqtNGLqOzBxeRYOrtKc+lehr9ckkyZbJ cA3Q== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1700053913; x=1700658713; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc 
:subject:date:message-id:reply-to; bh=U6/AvRBkllFQ+XgMDx3380G1DrnvTmtIBIE/1+tA3H8=; b=iQ5EZswabEBKGcJmvfDsOO0KgEj3LywK64/+9+kYzfzvaQnXG3k970AYCLtz5DLXsX uyoTpwJe5WlgchBRGmC/A717LUT0G1BRlgDK4I6+qDruz+HZ1x/pE2zd9WwMpCtAfqUR L5SglHeX63HyWr7WnXtu9ncL+gf0n/SKKI0C8astBhESOwmwlrksVjKsUR43PGbdTWcs nNNy6CCvCrbGABhSYv9jVgHFmLuquSMQQfmhRHNocyoqtWWB//Wn5iBpwWSMsXGAHQGp q+4vqatgqaofc8w+HuDWaUjaGXUwm5eSQAZk+DFjvgjh3Eynn93jrapKc4vaB6sxHe8X 4FJA== X-Gm-Message-State: AOJu0Yyxj7QcHuz8a/qhgqcBQqcepcdmvzgrZjKa9/GqYu3rJxx6+PKS SbfVkE50OPeT04p0b8dPLpyN9g== X-Google-Smtp-Source: AGHT+IFp+kLpuF/aEMEp7qi3OrKxHwkTm94u5gzM+7fMioHrPLCdgITFbGbuejU3Cm6jVl1A5AwDKw== X-Received: by 2002:a17:907:7256:b0:9d5:96e7:5ae1 with SMTP id ds22-20020a170907725600b009d596e75ae1mr5607971ejc.12.1700053912839; Wed, 15 Nov 2023 05:11:52 -0800 (PST) Received: from jlinkes-PT-Latitude-5530.pantheon.local (81.89.53.154.host.vnet.sk. [81.89.53.154]) by smtp.gmail.com with ESMTPSA id tb16-20020a1709078b9000b009f2b7282387sm1011914ejc.46.2023.11.15.05.11.11 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 15 Nov 2023 05:11:34 -0800 (PST) From: =?utf-8?q?Juraj_Linke=C5=A1?= To: thomas@monjalon.net, Honnappa.Nagarahalli@arm.com, jspewock@iol.unh.edu, probb@iol.unh.edu, paul.szczepanek@arm.com, yoan.picchi@foss.arm.com Cc: dev@dpdk.org, =?utf-8?q?Juraj_Linke=C5=A1?= Subject: [PATCH v7 02/21] dts: add docstring checker Date: Wed, 15 Nov 2023 14:09:40 +0100 Message-Id: <20231115130959.39420-3-juraj.linkes@pantheon.tech> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20231115130959.39420-1-juraj.linkes@pantheon.tech> References: <20231108125324.191005-23-juraj.linkes@pantheon.tech> <20231115130959.39420-1-juraj.linkes@pantheon.tech> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Python docstrings are the in-code way to document 
the code. The docstring checker of choice is pydocstyle which we're executing from Pylama, but the current latest versions are not compatible due to [0], so pin the pydocstyle version to the latest working version. [0] https://github.com/klen/pylama/issues/232 Signed-off-by: Juraj Linkeš Reviewed-by: Yoan Picchi --- dts/poetry.lock | 12 ++++++------ dts/pyproject.toml | 6 +++++- 2 files changed, 11 insertions(+), 7 deletions(-) diff --git a/dts/poetry.lock b/dts/poetry.lock index f7b3b6d602..a734fa71f0 100644 --- a/dts/poetry.lock +++ b/dts/poetry.lock @@ -489,20 +489,20 @@ files = [ [[package]] name = "pydocstyle" -version = "6.3.0" +version = "6.1.1" description = "Python docstring style checker" optional = false python-versions = ">=3.6" files = [ - {file = "pydocstyle-6.3.0-py3-none-any.whl", hash = "sha256:118762d452a49d6b05e194ef344a55822987a462831ade91ec5c06fd2169d019"}, - {file = "pydocstyle-6.3.0.tar.gz", hash = "sha256:7ce43f0c0ac87b07494eb9c0b462c0b73e6ff276807f204d6b53edc72b7e44e1"}, + {file = "pydocstyle-6.1.1-py3-none-any.whl", hash = "sha256:6987826d6775056839940041beef5c08cc7e3d71d63149b48e36727f70144dc4"}, + {file = "pydocstyle-6.1.1.tar.gz", hash = "sha256:1d41b7c459ba0ee6c345f2eb9ae827cab14a7533a88c5c6f7e94923f72df92dc"}, ] [package.dependencies] -snowballstemmer = ">=2.2.0" +snowballstemmer = "*" [package.extras] -toml = ["tomli (>=1.2.3)"] +toml = ["toml"] [[package]] name = "pyflakes" @@ -837,4 +837,4 @@ jsonschema = ">=4,<5" [metadata] lock-version = "2.0" python-versions = "^3.10" -content-hash = "0b1e4a1cb8323e17e5ee5951c97e74bde6e60d0413d7b25b1803d5b2bab39639" +content-hash = "3501e97b3dadc19fe8ae179fe21b1edd2488001da9a8e86ff2bca0b86b99b89b" diff --git a/dts/pyproject.toml b/dts/pyproject.toml index 6762edfa6b..3943c87c87 100644 --- a/dts/pyproject.toml +++ b/dts/pyproject.toml @@ -25,6 +25,7 @@ PyYAML = "^6.0" types-PyYAML = "^6.0.8" fabric = "^2.7.1" scapy = "^2.5.0" +pydocstyle = "6.1.1" [tool.poetry.group.dev.dependencies] mypy =
"^0.961" @@ -39,10 +40,13 @@ requires = ["poetry-core>=1.0.0"] build-backend = "poetry.core.masonry.api" [tool.pylama] -linters = "mccabe,pycodestyle,pyflakes" +linters = "mccabe,pycodestyle,pydocstyle,pyflakes" format = "pylint" max_line_length = 88 # https://black.readthedocs.io/en/stable/the_black_code_style/current_style.html#line-length +[tool.pylama.linter.pydocstyle] +convention = "google" + [tool.mypy] python_version = "3.10" enable_error_code = ["ignore-without-code"] From patchwork Wed Nov 15 13:09:41 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Juraj_Linke=C5=A1?= X-Patchwork-Id: 134380 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 651F343339; Wed, 15 Nov 2023 14:11:59 +0100 (CET) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 388BC40A7A; Wed, 15 Nov 2023 14:11:58 +0100 (CET) Received: from mail-ed1-f42.google.com (mail-ed1-f42.google.com [209.85.208.42]) by mails.dpdk.org (Postfix) with ESMTP id 29F0340A6E for ; Wed, 15 Nov 2023 14:11:54 +0100 (CET) Received: by mail-ed1-f42.google.com with SMTP id 4fb4d7f45d1cf-53dd3f169d8so10145359a12.3 for ; Wed, 15 Nov 2023 05:11:54 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=pantheon.tech; s=google; t=1700053914; x=1700658714; darn=dpdk.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=ZYiXW2Z88ESYnyFNqmTUgTLHumo8eKturZmLII8hMZI=; b=aXku2v9p486K31hn6J8mzDzvKuoCaBnhPh9pGS4SRel+U0hae9AcAg+74oy26k/VKI AxisP4NzK/SUccvjZMJ6bowMl3Oa3Ggn1/Rjzx91VPWsPMbtJbpcarWlDofA/unQFeCI GRUYUxrZ2RBHto3YhpbHjBPUHoR5cPrrINuCpC7kpWu8suKQ8QLKDtqUXPLhTxPsOY9H 
iqGDsKRF92PUrNmFsL0gZO6EpUHcvGCHvnFZ1LCMOqbfnL4ECkD2LqD0dHxxv6kkdYh5 RZr036/J85UjaaCxn3bEL0wqHSg5UTszNZLSJ/KfF0B2F9csf4qLNvmh3CKDh2lFBLFR cSYg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1700053914; x=1700658714; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=ZYiXW2Z88ESYnyFNqmTUgTLHumo8eKturZmLII8hMZI=; b=rY07RU84bGEJDytztVac4DBphx/FUd4WjOs+CKaKjUib6v5+w2bLFR+w1gqA5/ITas P4AzmDQbc37nsiHhtPnXu72BxpwA8PRvpXJn7Ym/HLxaNklzW8/BRQAShi1rUjY4PbsW 3zXgwdiM4fGe6xzOuBc7ga2yo4KHp/TR+syx3LpRk9y4rfonUyCypPpSEONiDDjeb3Kl H5NKYJ3snYlNMYafP52nSVn0r8dGVbLuqXP6DQGnjMv3Xk7fZbel+7PaYf0Uq0zu2wXN n7FGpsQmfoqE7Rb9uLajnaOxdWovdt/IYH8eq63GToIMpv0Iqx7t32SsNwLuiY71yJHj R0zQ== X-Gm-Message-State: AOJu0YwXt4nNlcgY84GPcYfXgj8f/EG3M1Yq07oM6NEaKgIngM2DZ4oE gFZLae8pvGxlj6IP7rgkR/nNyA== X-Google-Smtp-Source: AGHT+IG61Q9nwDKc1AdvPqoWCKviySVGivXbaGKJnUmfSPmmCsivVj3hGvPptgJ2yt0s6i2XJ+CYYQ== X-Received: by 2002:a17:906:1147:b0:9da:f372:4f6c with SMTP id i7-20020a170906114700b009daf3724f6cmr8005614eja.32.1700053913821; Wed, 15 Nov 2023 05:11:53 -0800 (PST) Received: from jlinkes-PT-Latitude-5530.pantheon.local (81.89.53.154.host.vnet.sk. 
[81.89.53.154]) by smtp.gmail.com with ESMTPSA id tb16-20020a1709078b9000b009f2b7282387sm1011914ejc.46.2023.11.15.05.11.53 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 15 Nov 2023 05:11:53 -0800 (PST) From: =?utf-8?q?Juraj_Linke=C5=A1?= To: thomas@monjalon.net, Honnappa.Nagarahalli@arm.com, jspewock@iol.unh.edu, probb@iol.unh.edu, paul.szczepanek@arm.com, yoan.picchi@foss.arm.com Cc: dev@dpdk.org, =?utf-8?q?Juraj_Linke=C5=A1?= Subject: [PATCH v7 03/21] dts: add basic developer docs Date: Wed, 15 Nov 2023 14:09:41 +0100 Message-Id: <20231115130959.39420-4-juraj.linkes@pantheon.tech> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20231115130959.39420-1-juraj.linkes@pantheon.tech> References: <20231108125324.191005-23-juraj.linkes@pantheon.tech> <20231115130959.39420-1-juraj.linkes@pantheon.tech> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Expand the framework contribution guidelines and add how to document the code with Python docstrings. Signed-off-by: Juraj Linkeš Reviewed-by: Yoan Picchi --- doc/guides/tools/dts.rst | 73 ++++++++++++++++++++++++++++++++++++++++ 1 file changed, 73 insertions(+) diff --git a/doc/guides/tools/dts.rst b/doc/guides/tools/dts.rst index 32c18ee472..cd771a428c 100644 --- a/doc/guides/tools/dts.rst +++ b/doc/guides/tools/dts.rst @@ -264,6 +264,65 @@ which be changed with the ``--output-dir`` command line argument. The results contain basic statistics of passed/failed test cases and DPDK version. +Contributing to DTS +------------------- + +There are two areas of contribution: The DTS framework and DTS test suites. + +The framework contains the logic needed to run test cases, such as connecting to nodes, +running DPDK apps and collecting results. + +The test cases call APIs from the framework to test their scenarios. 
Adding test cases may +require adding code to the framework as well. + + +Framework Coding Guidelines +~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +When adding code to the DTS framework, pay attention to the rest of the code +and try not to diverge much from it. The :ref:`DTS developer tools ` will issue +warnings when some of the basics are not met. + +The code must be properly documented with docstrings. The style must conform to +the `Google style `_. +See an example of the style +`here `_. +For cases which are not covered by the Google style, refer +to `PEP 257 `_. There are some cases which are not covered by +the two style guides, where we deviate or where some additional clarification is helpful: + + * The __init__() methods of classes are documented separately from the docstring of the class + itself. + * The docstrings of implemented abstract methods should refer to the superclass's definition + if there's no deviation. + * Instance variables/attributes should be documented in the docstring of the class + in the ``Attributes:`` section. + * The dataclass.dataclass decorator changes how the attributes are processed. The dataclass + attributes which result in instance variables/attributes should also be recorded + in the ``Attributes:`` section. + * Class variables/attributes, on the other hand, should be documented with ``#:`` above + the type annotated line. The description may be omitted if the meaning is obvious. + * The Enum and TypedDict also process the attributes in particular ways and should be documented + with ``#:`` as well. This is mainly so that the autogenerated docs contain the assigned value. + * When referencing a parameter of a function or a method in their docstring, don't use + any articles and put the parameter into single backticks. This mimics the style of + `Python's documentation `_. + * When specifying a value, use double backticks:: + + def foo(greet: bool) -> None: + """Demonstration of single and double backticks.
+ + `greet` controls whether ``Hello World`` is printed. + + Args: + greet: Whether to print the ``Hello World`` message. + """ + if greet: + print(f"Hello World") + + * The docstring maximum line length is the same as the code maximum line length. + + How To Write a Test Suite ------------------------- @@ -293,6 +352,18 @@ There are four types of methods that comprise a test suite: | These methods don't need to be implemented if there's no need for them in a test suite. In that case, nothing will happen when they're executed. +#. **Configuration, traffic and other logic** + + The ``TestSuite`` class contains a variety of methods for anything that + a test suite setup, a teardown, or a test case may need to do. + + The test suites also frequently use a DPDK app, such as testpmd, in interactive mode + and use the interactive shell instances directly. + + These are the two main ways to call the framework logic in test suites. If there's any + functionality or logic missing from the framework, it should be implemented so that + the test suites can use one of these two ways. + + #. **Test case verification** Test case verification should be done with the ``verify`` method, which records the result. @@ -308,6 +379,8 @@ There are four types of methods that comprise a test suite: and used by the test suite via the ``sut_node`` field. +..
_dts_dev_tools: + DTS Developer Tools ------------------- From patchwork Wed Nov 15 13:09:42 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Juraj_Linke=C5=A1?= X-Patchwork-Id: 134381 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id E35A243339; Wed, 15 Nov 2023 14:12:06 +0100 (CET) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 5FE3F40A84; Wed, 15 Nov 2023 14:11:59 +0100 (CET) Received: from mail-ed1-f53.google.com (mail-ed1-f53.google.com [209.85.208.53]) by mails.dpdk.org (Postfix) with ESMTP id 79E3540A70 for ; Wed, 15 Nov 2023 14:11:55 +0100 (CET) Received: by mail-ed1-f53.google.com with SMTP id 4fb4d7f45d1cf-53dd752685fso10371786a12.3 for ; Wed, 15 Nov 2023 05:11:55 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=pantheon.tech; s=google; t=1700053915; x=1700658715; darn=dpdk.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=1nf+Rn9xF8AJgKgVM5dfS3bAeJ3oXogtFzeJDmarYnY=; b=FFPMUDlgbVhm8wlwfUR29ILh9s3DRU/19hvBczjyNKb/NSjlMLYxCvfGGV1NzTWURg fRgpV8sCd/Wd1SPLlPgBBrL5rWiCc/cRBQjIOVW235Bfd2qvfQ0/fyZNNzB4/DatufoU Ss5KiiKsNTR9LI5n2VyWB4PhCkPWZ1Jmq60SSWqojYbE1JWKAahA6ZgG0Ja+ZGBj7cpE b5s9aVyldKjMtKLqG4S6BVzoehZ0K9ddkXx3Js8cj1BriDmpPfbEkKW5xj8oNtXkcZa2 fUSqD1U7iQ8Fxif9vX71+PkvsbSjJwjOEenZLJJnkw9vXqKpzpStWl4qSxLPIY06qbAX QlCA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1700053915; x=1700658715; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; 
From: Juraj Linkeš
Subject: [PATCH v7 04/21] dts: exceptions docstring update
Date: Wed, 15 Nov 2023 14:09:42 +0100
Message-Id: <20231115130959.39420-5-juraj.linkes@pantheon.tech>

Format according to the Google format and PEP257, with slight deviations. 
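The exit-code mechanism this patch documents — every exception carries an integer severity, and the highest severity of all raised exceptions becomes the DTS exit code — can be sketched outside the framework. The enum values and the class-level ``severity`` attribute mirror the diff below; the ``exit_code`` helper is illustrative only, not part of DTS:

```python
from enum import IntEnum, unique
from typing import ClassVar


@unique
class ErrorSeverity(IntEnum):
    """Severities are plain integers, so the most severe error can be picked with max()."""

    NO_ERR = 0
    GENERIC_ERR = 1
    SSH_ERR = 4


class DTSError(Exception):
    """Base exception carrying a class-level severity, as in the patch."""

    severity: ClassVar[ErrorSeverity] = ErrorSeverity.GENERIC_ERR


class SSHTimeoutError(DTSError):
    severity: ClassVar[ErrorSeverity] = ErrorSeverity.SSH_ERR


def exit_code(errors: list[DTSError]) -> int:
    """Hypothetical helper: the highest severity among caught errors, 0 if none."""
    return int(max((e.severity for e in errors), default=ErrorSeverity.NO_ERR))


print(exit_code([DTSError(), SSHTimeoutError("ls")]))  # 4
print(exit_code([]))  # 0
```

Because ``severity`` is a ``ClassVar``, subclasses override it once per exception type rather than per instance, which is all the runner needs to compute the return code.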
Signed-off-by: Juraj Linkeš --- dts/framework/__init__.py | 12 ++++- dts/framework/exception.py | 106 +++++++++++++++++++++++++------------ 2 files changed, 83 insertions(+), 35 deletions(-) diff --git a/dts/framework/__init__.py b/dts/framework/__init__.py index d551ad4bf0..662e6ccad2 100644 --- a/dts/framework/__init__.py +++ b/dts/framework/__init__.py @@ -1,3 +1,13 @@ # SPDX-License-Identifier: BSD-3-Clause -# Copyright(c) 2022 PANTHEON.tech s.r.o. +# Copyright(c) 2022-2023 PANTHEON.tech s.r.o. # Copyright(c) 2022 University of New Hampshire + +"""Libraries and utilities for running DPDK Test Suite (DTS). + +The various modules in the DTS framework offer: + +* Connections to nodes, both interactive and non-interactive, +* A straightforward way to add support for different operating systems of remote nodes, +* Test suite setup, execution and teardown, along with test case setup, execution and teardown, +* Pre-test suite setup and post-test suite teardown. +""" diff --git a/dts/framework/exception.py b/dts/framework/exception.py index 7489c03570..ee1562c672 100644 --- a/dts/framework/exception.py +++ b/dts/framework/exception.py @@ -3,8 +3,10 @@ # Copyright(c) 2022-2023 PANTHEON.tech s.r.o. # Copyright(c) 2022-2023 University of New Hampshire -""" -User-defined exceptions used across the framework. +"""DTS exceptions. + +The exceptions all have different severities expressed as an integer. +The highest severity of all raised exceptions is used as the exit code of DTS. """ from enum import IntEnum, unique @@ -13,59 +15,79 @@ @unique class ErrorSeverity(IntEnum): - """ - The severity of errors that occur during DTS execution. + """The severity of errors that occur during DTS execution. + All exceptions are caught and the most severe error is used as return code. 
""" + #: NO_ERR = 0 + #: GENERIC_ERR = 1 + #: CONFIG_ERR = 2 + #: REMOTE_CMD_EXEC_ERR = 3 + #: SSH_ERR = 4 + #: DPDK_BUILD_ERR = 10 + #: TESTCASE_VERIFY_ERR = 20 + #: BLOCKING_TESTSUITE_ERR = 25 class DTSError(Exception): - """ - The base exception from which all DTS exceptions are derived. - Stores error severity. + """The base exception from which all DTS exceptions are subclassed. + + Do not use this exception, only use subclassed exceptions. """ + #: severity: ClassVar[ErrorSeverity] = ErrorSeverity.GENERIC_ERR class SSHTimeoutError(DTSError): - """ - Command execution timeout. - """ + """The SSH execution of a command timed out.""" + #: severity: ClassVar[ErrorSeverity] = ErrorSeverity.SSH_ERR _command: str def __init__(self, command: str): + """Define the meaning of the first argument. + + Args: + command: The executed command. + """ self._command = command def __str__(self) -> str: - return f"TIMEOUT on {self._command}" + """Add some context to the string representation.""" + return f"{self._command} execution timed out." class SSHConnectionError(DTSError): - """ - SSH connection error. - """ + """An unsuccessful SSH connection.""" + #: severity: ClassVar[ErrorSeverity] = ErrorSeverity.SSH_ERR _host: str _errors: list[str] def __init__(self, host: str, errors: list[str] | None = None): + """Define the meaning of the first two arguments. + + Args: + host: The hostname to which we're trying to connect. + errors: Any errors that occurred during the connection attempt. + """ self._host = host self._errors = [] if errors is None else errors def __str__(self) -> str: + """Include the errors in the string representation.""" message = f"Error trying to connect with {self._host}." if self._errors: message += f" Errors encountered while retrying: {', '.join(self._errors)}" @@ -74,43 +96,53 @@ def __str__(self) -> str: class SSHSessionDeadError(DTSError): - """ - SSH session is not alive. - It can no longer be used. 
- """ + """The SSH session is no longer alive.""" + #: severity: ClassVar[ErrorSeverity] = ErrorSeverity.SSH_ERR _host: str def __init__(self, host: str): + """Define the meaning of the first argument. + + Args: + host: The hostname of the disconnected node. + """ self._host = host def __str__(self) -> str: - return f"SSH session with {self._host} has died" + """Add some context to the string representation.""" + return f"SSH session with {self._host} has died." class ConfigurationError(DTSError): - """ - Raised when an invalid configuration is encountered. - """ + """An invalid configuration.""" + #: severity: ClassVar[ErrorSeverity] = ErrorSeverity.CONFIG_ERR class RemoteCommandExecutionError(DTSError): - """ - Raised when a command executed on a Node returns a non-zero exit status. - """ + """An unsuccessful execution of a remote command.""" + #: severity: ClassVar[ErrorSeverity] = ErrorSeverity.REMOTE_CMD_EXEC_ERR + #: The executed command. command: str _command_return_code: int def __init__(self, command: str, command_return_code: int): + """Define the meaning of the first two arguments. + + Args: + command: The executed command. + command_return_code: The return code of the executed command. + """ self.command = command self._command_return_code = command_return_code def __str__(self) -> str: + """Include both the command and return code in the string representation.""" return ( f"Command {self.command} returned a non-zero exit code: " f"{self._command_return_code}" @@ -118,35 +150,41 @@ def __str__(self) -> str: class RemoteDirectoryExistsError(DTSError): - """ - Raised when a remote directory to be created already exists. - """ + """A directory that exists on a remote node.""" + #: severity: ClassVar[ErrorSeverity] = ErrorSeverity.REMOTE_CMD_EXEC_ERR class DPDKBuildError(DTSError): - """ - Raised when DPDK build fails for any reason. 
- """ + """A DPDK build failure.""" + #: severity: ClassVar[ErrorSeverity] = ErrorSeverity.DPDK_BUILD_ERR class TestCaseVerifyError(DTSError): - """ - Used in test cases to verify the expected behavior. - """ + """A test case failure.""" + #: severity: ClassVar[ErrorSeverity] = ErrorSeverity.TESTCASE_VERIFY_ERR class BlockingTestSuiteError(DTSError): + """A failure in a blocking test suite.""" + + #: severity: ClassVar[ErrorSeverity] = ErrorSeverity.BLOCKING_TESTSUITE_ERR _suite_name: str def __init__(self, suite_name: str) -> None: + """Define the meaning of the first argument. + + Args: + suite_name: The blocking test suite. + """ self._suite_name = suite_name def __str__(self) -> str: + """Add some context to the string representation.""" return f"Blocking suite {self._suite_name} failed." From patchwork Wed Nov 15 13:09:43 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Juraj_Linke=C5=A1?= X-Patchwork-Id: 134382 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id AD03C43339; Wed, 15 Nov 2023 14:12:14 +0100 (CET) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id D2A7740C35; Wed, 15 Nov 2023 14:12:00 +0100 (CET) Received: from mail-ed1-f52.google.com (mail-ed1-f52.google.com [209.85.208.52]) by mails.dpdk.org (Postfix) with ESMTP id 7F54A40285 for ; Wed, 15 Nov 2023 14:11:56 +0100 (CET) Received: by mail-ed1-f52.google.com with SMTP id 4fb4d7f45d1cf-53e04b17132so10469149a12.0 for ; Wed, 15 Nov 2023 05:11:56 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=pantheon.tech; s=google; t=1700053916; x=1700658716; darn=dpdk.org; h=content-transfer-encoding:mime-version:references:in-reply-to 
From: Juraj Linkeš
Subject: [PATCH v7 05/21] dts: settings docstring update
Date: Wed, 15 Nov 2023 14:09:43 +0100
Message-Id: <20231115130959.39420-6-juraj.linkes@pantheon.tech>

Format according to the Google format and PEP257, with slight deviations. Signed-off-by: Juraj Linkeš --- dts/framework/settings.py | 103 +++++++++++++++++++++++++++++++++++++- 1 file changed, 102 insertions(+), 1 deletion(-) diff --git a/dts/framework/settings.py b/dts/framework/settings.py index 7f5841d073..fc7c4e00e8 100644 --- a/dts/framework/settings.py +++ b/dts/framework/settings.py @@ -3,6 +3,72 @@ # Copyright(c) 2022-2023 PANTHEON.tech s.r.o. # Copyright(c) 2022 University of New Hampshire +"""Environment variables and command line arguments parsing. + +This is a simple module utilizing the built-in argparse module to parse command line arguments, +augment them with values from environment variables and make them available across the framework. + +The command line value takes precedence, followed by the environment variable value, +followed by the default value defined in this module. 
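The precedence order just described (command line value > environment variable > default) can be sketched with plain argparse. The ``DTS_OUTPUT_DIR`` variable and the ``output`` default come from this patch; the ``make_parser`` helper and its injected ``environ`` dict are invented for the example (the real module reads ``os.environ`` inside a custom ``argparse.Action``):

```python
import argparse


def make_parser(environ: dict[str, str]) -> argparse.ArgumentParser:
    """Build a parser whose defaults can be pre-empted by environment variables."""
    parser = argparse.ArgumentParser()
    parser.add_argument(
        "--output-dir",
        # Priority: command line value > environment variable > default value.
        default=environ.get("DTS_OUTPUT_DIR", "output"),
        help="[DTS_OUTPUT_DIR] Output directory where DTS logs and results are saved.",
    )
    return parser


# No CLI value and no env var: the module default wins.
print(make_parser({}).parse_args([]).output_dir)  # output
# The env var overrides the default.
print(make_parser({"DTS_OUTPUT_DIR": "/tmp/dts"}).parse_args([]).output_dir)  # /tmp/dts
# The CLI value overrides both.
print(make_parser({"DTS_OUTPUT_DIR": "/tmp/dts"}).parse_args(["--output-dir", "logs"]).output_dir)  # logs
```

Folding the environment variable into the argument's default is what makes the three-level priority fall out of argparse for free: argparse only stores the default when no CLI value is given.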
+ + The command line arguments along with the supported environment variables are: + + .. option:: --config-file + .. envvar:: DTS_CFG_FILE + + The path to the YAML test run configuration file. + + .. option:: --output-dir, --output + .. envvar:: DTS_OUTPUT_DIR + + The directory where DTS logs and results are saved. + + .. option:: --compile-timeout + .. envvar:: DTS_COMPILE_TIMEOUT + + The timeout for compiling DPDK. + + .. option:: -t, --timeout + .. envvar:: DTS_TIMEOUT + + The timeout for all DTS operations except for compiling DPDK. + + .. option:: -v, --verbose + .. envvar:: DTS_VERBOSE + + Set to any value to enable logging everything to the console. + + .. option:: -s, --skip-setup + .. envvar:: DTS_SKIP_SETUP + + Set to any value to skip building DPDK. + + .. option:: --tarball, --snapshot, --git-ref + .. envvar:: DTS_DPDK_TARBALL + + The path to a DPDK tarball, git commit ID, tag ID or tree ID to test. + + .. option:: --test-cases + .. envvar:: DTS_TESTCASES + + A comma-separated list of test cases to execute. Unknown test cases will be silently ignored. + + .. option:: --re-run, --re_run + .. envvar:: DTS_RERUN + + Re-run each test case this many times in case of a failure. + +The module provides one key module-level variable: + +Attributes: + SETTINGS: The module level variable storing framework-wide DTS settings. + +Typical usage example:: + + from framework.settings import SETTINGS + foo = SETTINGS.foo +""" + import argparse import os from collections.abc import Callable, Iterable, Sequence @@ -16,6 +82,23 @@ def _env_arg(env_var: str) -> Any: + """A helper method augmenting the argparse Action with environment variables. + + If the supplied environment variable is defined, then the default value + of the argument is modified. This satisfies the priority order of + command line argument > environment variable > default value. + + Arguments with no values (flags) should be defined using the const keyword argument + (True or False). 
When the argument is specified, it will be set to const, if not specified, + the default will be stored (possibly modified by the corresponding environment variable). + + Other arguments work the same as default argparse arguments, that is using + the default 'store' action. + + Returns: + The modified argparse.Action. + """ + class _EnvironmentArgument(argparse.Action): def __init__( self, @@ -68,14 +151,28 @@ def __call__( @dataclass(slots=True) class Settings: + """Default framework-wide user settings. + + The defaults may be modified at the start of the run. + """ + + #: config_file_path: Path = Path(__file__).parent.parent.joinpath("conf.yaml") + #: output_dir: str = "output" + #: timeout: float = 15 + #: verbose: bool = False + #: skip_setup: bool = False + #: dpdk_tarball_path: Path | str = "dpdk.tar.xz" + #: compile_timeout: float = 1200 + #: test_cases: list[str] = field(default_factory=list) + #: re_run: int = 0 @@ -169,7 +266,7 @@ def _get_parser() -> argparse.ArgumentParser: action=_env_arg("DTS_RERUN"), default=SETTINGS.re_run, type=int, - help="[DTS_RERUN] Re-run each test case the specified amount of times " + help="[DTS_RERUN] Re-run each test case the specified number of times " "if a test failure occurs", ) @@ -177,6 +274,10 @@ def _get_parser() -> argparse.ArgumentParser: def get_settings() -> Settings: + """Create new settings with inputs from the user. + + The inputs are taken from the command line and from environment variables. 
+ """ parsed_args = _get_parser().parse_args() return Settings( config_file_path=parsed_args.config_file, From patchwork Wed Nov 15 13:09:44 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Juraj_Linke=C5=A1?= X-Patchwork-Id: 134383 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 274E143339; Wed, 15 Nov 2023 14:12:26 +0100 (CET) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 722CC40DF6; Wed, 15 Nov 2023 14:12:04 +0100 (CET) Received: from mail-ej1-f51.google.com (mail-ej1-f51.google.com [209.85.218.51]) by mails.dpdk.org (Postfix) with ESMTP id EF4BC40A77 for ; Wed, 15 Nov 2023 14:11:57 +0100 (CET) Received: by mail-ej1-f51.google.com with SMTP id a640c23a62f3a-9c3aec5f326so167011766b.1 for ; Wed, 15 Nov 2023 05:11:57 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=pantheon.tech; s=google; t=1700053917; x=1700658717; darn=dpdk.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=4pYVcP09l1t5ff07XjovR/hMQPMd7GEFTrwDqd89mEA=; b=P0fHJuNP/M370pfbDk6LYZTw9shaEAbq/UxY/zL9yVm6nK9a7MF5dyjUX1GtfAz2ID ClsEz/TC20VGnQY3errUraF5sZ5vQy3tbaC+34cjj2lXm9oWi9X2msqgYMCQRYkCPgTn SEu0Vdjy7sGoGsW+lnsZHCUtr18t1xBQXGRgvB7KeNxrpHZtnNh91OCKHXFMupYPdg7K ZK19SYlBKiH616qodDdk+MegJijICEuk3ybQNRa+Z0pWNzsME/xcU12eOP6n7k+ZNdYs tz16P0JwMFjkfB5ZGYIaHywH/mzUHcHKqcE6oyZ505d9I2pY8fLXvJHl1rDLjsyzNdLQ wftw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1700053917; x=1700658717; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc 
From: Juraj Linkeš
Subject: [PATCH v7 06/21] dts: logger and utils docstring update
Date: Wed, 15 Nov 2023 14:09:44 +0100
Message-Id: <20231115130959.39420-7-juraj.linkes@pantheon.tech>

Format according to 
the Google format and PEP257, with slight deviations. Signed-off-by: Juraj Linkeš --- dts/framework/logger.py | 72 ++++++++++++++++++++++----------- dts/framework/utils.py | 88 +++++++++++++++++++++++++++++------------ 2 files changed, 113 insertions(+), 47 deletions(-) diff --git a/dts/framework/logger.py b/dts/framework/logger.py index bb2991e994..d3eb75a4e4 100644 --- a/dts/framework/logger.py +++ b/dts/framework/logger.py @@ -3,9 +3,9 @@ # Copyright(c) 2022-2023 PANTHEON.tech s.r.o. # Copyright(c) 2022-2023 University of New Hampshire -""" -DTS logger module with several log level. DTS framework and TestSuite logs -are saved in different log files. +"""DTS logger module. + +DTS framework and TestSuite logs are saved in different log files. """ import logging @@ -18,19 +18,21 @@ stream_fmt = "%(asctime)s - %(name)s - %(levelname)s - %(message)s" -class LoggerDictType(TypedDict): - logger: "DTSLOG" - name: str - node: str - +class DTSLOG(logging.LoggerAdapter): + """DTS logger adapter class for framework and testsuites. -# List for saving all using loggers -Loggers: list[LoggerDictType] = [] + The :option:`--verbose` command line argument and the :envvar:`DTS_VERBOSE` environment + variable control the verbosity of output. If enabled, all messages will be emitted to the + console. + The :option:`--output` command line argument and the :envvar:`DTS_OUTPUT_DIR` environment + variable modify the directory where the logs will be stored. -class DTSLOG(logging.LoggerAdapter): - """ - DTS log class for framework and testsuite. + Attributes: + node: The additional identifier. Currently unused. + sh: The handler which emits logs to console. + fh: The handler which emits logs to a file. + verbose_fh: Just as fh, but logs with a different, more verbose, format. 
""" _logger: logging.Logger @@ -40,6 +42,15 @@ class DTSLOG(logging.LoggerAdapter): verbose_fh: logging.FileHandler def __init__(self, logger: logging.Logger, node: str = "suite"): + """Extend the constructor with additional handlers. + + One handler logs to the console, the other one to a file, with either a regular or verbose + format. + + Args: + logger: The logger from which to create the logger adapter. + node: An additional identifier. Currently unused. + """ self._logger = logger # 1 means log everything, this will be used by file handlers if their level # is not set @@ -92,26 +103,43 @@ def __init__(self, logger: logging.Logger, node: str = "suite"): super(DTSLOG, self).__init__(self._logger, dict(node=self.node)) def logger_exit(self) -> None: - """ - Remove stream handler and logfile handler. - """ + """Remove the stream handler and the logfile handler.""" for handler in (self.sh, self.fh, self.verbose_fh): handler.flush() self._logger.removeHandler(handler) +class _LoggerDictType(TypedDict): + logger: DTSLOG + name: str + node: str + + +# List for saving all loggers in use +_Loggers: list[_LoggerDictType] = [] + + def getLogger(name: str, node: str = "suite") -> DTSLOG: + """Get DTS logger adapter identified by name and node. + + An existing logger will be return if one with the exact name and node already exists. + A new one will be created and stored otherwise. + + Args: + name: The name of the logger. + node: An additional identifier for the logger. + + Returns: + A logger uniquely identified by both name and node. """ - Get logger handler and if there's no handler for specified Node will create one. 
- """ - global Loggers + global _Loggers # return saved logger - logger: LoggerDictType - for logger in Loggers: + logger: _LoggerDictType + for logger in _Loggers: if logger["name"] == name and logger["node"] == node: return logger["logger"] # return new logger dts_logger: DTSLOG = DTSLOG(logging.getLogger(name), node) - Loggers.append({"logger": dts_logger, "name": name, "node": node}) + _Loggers.append({"logger": dts_logger, "name": name, "node": node}) return dts_logger diff --git a/dts/framework/utils.py b/dts/framework/utils.py index f0c916471c..5016e3be10 100644 --- a/dts/framework/utils.py +++ b/dts/framework/utils.py @@ -3,6 +3,16 @@ # Copyright(c) 2022-2023 PANTHEON.tech s.r.o. # Copyright(c) 2022-2023 University of New Hampshire +"""Various utility classes and functions. + +These are used in multiple modules across the framework. They're here because +they provide some non-specific functionality, greatly simplify imports or just don't +fit elsewhere. + +Attributes: + REGEX_FOR_PCI_ADDRESS: The regex representing a PCI address, e.g. ``0000:00:08.0``. +""" + import atexit import json import os @@ -19,12 +29,20 @@ def expand_range(range_str: str) -> list[int]: - """ - Process range string into a list of integers. There are two possible formats: - n - a single integer - n-m - a range of integers + """Process `range_str` into a list of integers. + + There are two possible formats of `range_str`: + + * ``n`` - a single integer, + * ``n-m`` - a range of integers. - The returned range includes both n and m. Empty string returns an empty list. + The returned range includes both ``n`` and ``m``. Empty string returns an empty list. + + Args: + range_str: The range to expand. + + Returns: + All the numbers from the range. """ expanded_range: list[int] = [] if range_str: @@ -39,6 +57,14 @@ def expand_range(range_str: str) -> list[int]: def get_packet_summaries(packets: list[Packet]) -> str: + """Format a string summary from `packets`. 
+ + Args: + packets: The packets to format. + + Returns: + The summary of `packets`. + """ if len(packets) == 1: packet_summaries = packets[0].summary() else: @@ -49,6 +75,8 @@ def get_packet_summaries(packets: list[Packet]) -> str: class StrEnum(Enum): + """Enum with members stored as strings.""" + @staticmethod def _generate_next_value_( name: str, start: int, count: int, last_values: object @@ -56,22 +84,29 @@ def _generate_next_value_( return name def __str__(self) -> str: + """The string representation is the name of the member.""" return self.name class MesonArgs(object): - """ - Aggregate the arguments needed to build DPDK: - default_library: Default library type, Meson allows "shared", "static" and "both". - Defaults to None, in which case the argument won't be used. - Keyword arguments: The arguments found in meson_options.txt in root DPDK directory. - Do not use -D with them, for example: - meson_args = MesonArgs(enable_kmods=True). - """ + """Aggregate the arguments needed to build DPDK.""" _default_library: str def __init__(self, default_library: str | None = None, **dpdk_args: str | bool): + """Initialize the meson arguments. + + Args: + default_library: The default library type, Meson supports ``shared``, ``static`` and + ``both``. Defaults to :data:`None`, in which case the argument won't be used. + dpdk_args: The arguments found in ``meson_options.txt`` in root DPDK directory. + Do not use ``-D`` with them. + + Example: + :: + + meson_args = MesonArgs(enable_kmods=True). + """ self._default_library = ( f"--default-library={default_library}" if default_library else "" ) @@ -83,6 +118,7 @@ def __init__(self, default_library: str | None = None, **dpdk_args: str | bool): ) def __str__(self) -> str: + """The actual args.""" return " ".join(f"{self._default_library} {self._dpdk_args}".split()) @@ -104,24 +140,14 @@ class _TarCompressionFormat(StrEnum): class DPDKGitTarball(object): - """Create a compressed tarball of DPDK from the repository. 
- - The DPDK version is specified with git object git_ref. - The tarball will be compressed with _TarCompressionFormat, - which must be supported by the DTS execution environment. - The resulting tarball will be put into output_dir. + """Compressed tarball of DPDK from the repository. - The class supports the os.PathLike protocol, + The class supports the :class:`os.PathLike` protocol, which is used to get the Path of the tarball:: from pathlib import Path tarball = DPDKGitTarball("HEAD", "output") tarball_path = Path(tarball) - - Arguments: - git_ref: A git commit ID, tag ID or tree ID. - output_dir: The directory where to put the resulting tarball. - tar_compression_format: The compression format to use. """ _git_ref: str @@ -136,6 +162,17 @@ def __init__( output_dir: str, tar_compression_format: _TarCompressionFormat = _TarCompressionFormat.xz, ): + """Create the tarball during initialization. + + The DPDK version is specified with `git_ref`. The tarball will be compressed with + `tar_compression_format`, which must be supported by the DTS execution environment. + The resulting tarball will be put into `output_dir`. + + Args: + git_ref: A git commit ID, tag ID or tree ID. + output_dir: The directory where to put the resulting tarball. + tar_compression_format: The compression format to use. 
+ """ self._git_ref = git_ref self._tar_compression_format = tar_compression_format @@ -204,4 +241,5 @@ def _delete_tarball(self) -> None: os.remove(self._tarball_path) def __fspath__(self) -> str: + """The os.PathLike protocol implementation.""" return str(self._tarball_path) From patchwork Wed Nov 15 13:09:45 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Juraj_Linke=C5=A1?= X-Patchwork-Id: 134384 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 03C6043339; Wed, 15 Nov 2023 14:12:35 +0100 (CET) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 9FA9740E0F; Wed, 15 Nov 2023 14:12:05 +0100 (CET) Received: from mail-ed1-f47.google.com (mail-ed1-f47.google.com [209.85.208.47]) by mails.dpdk.org (Postfix) with ESMTP id 6029040A87 for ; Wed, 15 Nov 2023 14:11:59 +0100 (CET) Received: by mail-ed1-f47.google.com with SMTP id 4fb4d7f45d1cf-53e04b17132so10469206a12.0 for ; Wed, 15 Nov 2023 05:11:59 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=pantheon.tech; s=google; t=1700053919; x=1700658719; darn=dpdk.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=2ZNMam0a1R4XoTdcPdnNVy5Fpo9kpNHkFyw2wLxXUoc=; b=NWWSFAL9cETgz9i/MV4ufWfaGOhmhBtmXUpRxEwKPm5YKkq63T7vlwxkGWTN72tuXP RM+H+ThwxwScMxlhGMSf04XE231L41W5rCwRYvNdTfoXBH2t80iZiIB8TFZIdmBqsvKy 6Vgu3Et78hhP3efX6JXXPP/xUljZ0PUPy2Wml3RzKP5rhcXsOlSqOc7OfXWghqdHbDtX rSC/WuJTOx8sSkF710Zd7pkvSsPXht1X9t1W6v7BpOmN6DdcdzO5rcsmI5i+v6OD0eLK VFgHZckRwWJ/yPYNNPHsW5NQ7Oh4PcwBanEkOFshM7AAi3qgqfQTSpD1FkSP/9NtFwXK Wjgg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; 
[81.89.53.154]) by smtp.gmail.com with ESMTPSA id tb16-20020a1709078b9000b009f2b7282387sm1011914ejc.46.2023.11.15.05.11.57 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 15 Nov 2023 05:11:58 -0800 (PST) From: =?utf-8?q?Juraj_Linke=C5=A1?= To: thomas@monjalon.net, Honnappa.Nagarahalli@arm.com, jspewock@iol.unh.edu, probb@iol.unh.edu, paul.szczepanek@arm.com, yoan.picchi@foss.arm.com Cc: dev@dpdk.org, =?utf-8?q?Juraj_Linke=C5=A1?= Subject: [PATCH v7 07/21] dts: dts runner and main docstring update Date: Wed, 15 Nov 2023 14:09:45 +0100 Message-Id: <20231115130959.39420-8-juraj.linkes@pantheon.tech> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20231115130959.39420-1-juraj.linkes@pantheon.tech> References: <20231108125324.191005-23-juraj.linkes@pantheon.tech> <20231115130959.39420-1-juraj.linkes@pantheon.tech> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Format according to the Google format and PEP257, with slight deviations. Signed-off-by: Juraj Linkeš --- dts/framework/dts.py | 128 ++++++++++++++++++++++++++++++++++++------- dts/main.py | 8 ++- 2 files changed, 112 insertions(+), 24 deletions(-) diff --git a/dts/framework/dts.py b/dts/framework/dts.py index 4c7fb0c40a..331fed7dc4 100644 --- a/dts/framework/dts.py +++ b/dts/framework/dts.py @@ -3,6 +3,33 @@ # Copyright(c) 2022-2023 PANTHEON.tech s.r.o. # Copyright(c) 2022-2023 University of New Hampshire +r"""Test suite runner module. + +A DTS run is split into stages: + + #. Execution stage, + #. Build target stage, + #. Test suite stage, + #. Test case stage. + +The module is responsible for running tests on testbeds defined in the test run configuration. +Each setup or teardown of each stage is recorded in a :class:`~framework.test_result.DTSResult` or +one of its subclasses. 
The test case results are also recorded. + +If an error occurs, the current stage is aborted, the error is recorded and the run continues in +the next iteration of the same stage. The return code is the highest `severity` of all +:class:`~framework.exception.DTSError`\s. + +Example: + An error occurs in a build target setup. The current build target is aborted and the run + continues with the next build target. If the errored build target was the last one in the given + execution, the next execution begins. + +Attributes: + dts_logger: The logger instance used in this module. + result: The top level result used in the module. +""" + import sys from .config import ( @@ -23,9 +50,38 @@ def run_all() -> None: - """ - The main process of DTS. Runs all build targets in all executions from the main - config file. + """Run all build targets in all executions from the test run configuration. + + Before running test suites, executions and build targets are first set up. + The executions and build targets defined in the test run configuration are iterated over. + The executions define which tests to run and where to run them and build targets define + the DPDK build setup. + + The test suites are set up for each execution/build target tuple and each scheduled + test case within the test suite is set up, executed and torn down. After all test cases + have been executed, the test suite is torn down and the next build target will be tested. + + All the nested steps look like this: + + #. Execution setup + + #. Build target setup + + #. Test suite setup + + #. Test case setup + #. Test case logic + #. Test case teardown + + #. Test suite teardown + + #. Build target teardown + + #. Execution teardown + + The test cases are filtered according to the specification in the test run configuration and + the :option:`--test-cases` command line argument or + the :envvar:`DTS_TESTCASES` environment variable.
""" global dts_logger global result @@ -87,6 +143,8 @@ def run_all() -> None: def _check_dts_python_version() -> None: + """Check the required Python version - v3.10.""" + def RED(text: str) -> str: return f"\u001B[31;1m{str(text)}\u001B[0m" @@ -111,9 +169,16 @@ def _run_execution( execution: ExecutionConfiguration, result: DTSResult, ) -> None: - """ - Run the given execution. This involves running the execution setup as well as - running all build targets in the given execution. + """Run the given execution. + + This involves running the execution setup as well as running all build targets + in the given execution. After that, execution teardown is run. + + Args: + sut_node: The execution's SUT node. + tg_node: The execution's TG node. + execution: An execution's test run configuration. + result: The top level result object. """ dts_logger.info( f"Running execution with SUT '{execution.system_under_test_node.name}'." @@ -150,8 +215,18 @@ def _run_build_target( execution: ExecutionConfiguration, execution_result: ExecutionResult, ) -> None: - """ - Run the given build target. + """Run the given build target. + + This involves running the build target setup as well as running all test suites + in the given execution the build target is defined in. + After that, build target teardown is run. + + Args: + sut_node: The execution's SUT node. + tg_node: The execution's TG node. + build_target: A build target's test run configuration. + execution: The build target's execution's test run configuration. + execution_result: The execution level result object associated with the execution. """ dts_logger.info(f"Running build target '{build_target.name}'.") build_target_result = execution_result.add_build_target(build_target) @@ -183,10 +258,17 @@ def _run_all_suites( execution: ExecutionConfiguration, build_target_result: BuildTargetResult, ) -> None: - """ - Use the given build_target to run execution's test suites - with possibly only a subset of test cases. 
- If no subset is specified, run all test cases. + """Run the execution's (possibly a subset) test suites using the current build_target. + + The function assumes the build target we're testing has already been built on the SUT node. + The current build target thus corresponds to the current DPDK build present on the SUT node. + + Args: + sut_node: The execution's SUT node. + tg_node: The execution's TG node. + execution: The execution's test run configuration associated with the current build target. + build_target_result: The build target level result object associated + with the current build target. """ end_build_target = False if not execution.skip_smoke_tests: @@ -215,16 +297,22 @@ def _run_single_suite( build_target_result: BuildTargetResult, test_suite_config: TestSuiteConfig, ) -> None: - """Runs a single test suite. + """Run all test suites in a single test suite module. + + The function assumes the build target we're testing has already been built on the SUT node. + The current build target thus corresponds to the current DPDK build present on the SUT node. Args: - sut_node: Node to run tests on. - execution: Execution the test case belongs to. - build_target_result: Build target configuration test case is run on - test_suite_config: Test suite configuration + sut_node: The execution's SUT node. + tg_node: The execution's TG node. + execution: The execution's test run configuration associated with the current build target. + build_target_result: The build target level result object associated + with the current build target. + test_suite_config: Test suite test run configuration specifying the test suite module + and possibly a subset of test cases of test suites in that module. Raises: - BlockingTestSuiteError: If a test suite that was marked as blocking fails. + BlockingTestSuiteError: If a blocking test suite fails.
""" try: full_suite_path = f"tests.TestSuite_{test_suite_config.test_suite}" @@ -248,9 +336,7 @@ def _run_single_suite( def _exit_dts() -> None: - """ - Process all errors and exit with the proper exit code. - """ + """Process all errors and exit with the proper exit code.""" result.process() if dts_logger: diff --git a/dts/main.py b/dts/main.py index 5d4714b0c3..f703615d11 100755 --- a/dts/main.py +++ b/dts/main.py @@ -4,9 +4,7 @@ # Copyright(c) 2022 PANTHEON.tech s.r.o. # Copyright(c) 2022 University of New Hampshire -""" -A test framework for testing DPDK. -""" +"""The DTS executable.""" import logging @@ -17,6 +15,10 @@ def main() -> None: """Set DTS settings, then run DTS. The DTS settings are taken from the command line arguments and the environment variables. + The settings object is stored in the module-level variable settings.SETTINGS which the entire + framework uses. After importing the module (or the variable), any changes to the variable are + not going to be reflected without a re-import. This means that the SETTINGS variable must + be modified before the settings module is imported anywhere else in the framework. 
""" settings.SETTINGS = settings.get_settings() from framework import dts From patchwork Wed Nov 15 13:09:46 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Juraj_Linke=C5=A1?= X-Patchwork-Id: 134385 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 6CD7943339; Wed, 15 Nov 2023 14:12:43 +0100 (CET) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id CEEDF40E36; Wed, 15 Nov 2023 14:12:06 +0100 (CET) Received: from mail-ej1-f50.google.com (mail-ej1-f50.google.com [209.85.218.50]) by mails.dpdk.org (Postfix) with ESMTP id D8B9940DC9 for ; Wed, 15 Nov 2023 14:12:00 +0100 (CET) Received: by mail-ej1-f50.google.com with SMTP id a640c23a62f3a-9c3aec5f326so167020166b.1 for ; Wed, 15 Nov 2023 05:12:00 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=pantheon.tech; s=google; t=1700053920; x=1700658720; darn=dpdk.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=xaI45AbgHhXPPqULTa3dssytFvoTpD3FjsJ1di/iTiU=; b=VKuUSTQvSuFONKdX7JfBonoKkK+2xE00U9yPph+BRfguAevPnwR2pBoZZWGZtmwEWP INWHQrvYVy0lWQGlxPhBWvbejAYL9frOO43mAyy3/GLKiPDxYD0Pz8kdS6socSofpi3Q 04OeYYz7OAd4lP5oAgvnRtzbPSOGOCsQF/0eYJcFkeT2ev0QhXujFP4QHGZ+EVBUDv6/ 7/kFWXWs5aNCE3GzyHjBGazTbr/bHlzrEImEQYS7ZGxv9TwwnYAyX3Kh5gouEeYzToeg tV7dBOt+NxNn8g2LK4PLAMSw4HEqklOt56RacsoUiwoL0XvWVIWCV9QVCAhwM9s3MiAt Mi8w== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1700053920; x=1700658720; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; 
bh=xaI45AbgHhXPPqULTa3dssytFvoTpD3FjsJ1di/iTiU=; b=qLH4rD8FocPuu1cmRyqMQdxO6CqnhD/MF2kQKPWR9XmTItTs5cSREZoqladuGfmEJz rC0ASDU22GoAw4Wktp13FhlX3TCyqWk+f4Q1damreD6rGEJaIxVEamObCZYNRjR1iGRi 4/ld4W74Kua+QpgbIKqwU8OH5nrSaXc7Ux8JBWH1KTJZxPR0jsR1McCzTueMHKwPag5P KB9ZWfHSASZVg+1hm11x+iFHwisiKAvNMJ0nHtS0Oy2hrKWqOObo8AEjRYahxEDnVNig FatOpIn10DMC5oz+Y6pqcTaLvNMjWFyiLZtyNK1n6HmoxaqV5nLJPEoG1tbdtFIff5l8 MZpQ== X-Gm-Message-State: AOJu0YyDMe3SF0c2/QxuQLdl9RDru2H+DYT8PA0DisjjbZV8RNgG9tyz 15oct01Gb0PdQR9WDnrb/oqn65B6ntvJ2XQNaOXnBQ== X-Google-Smtp-Source: AGHT+IEIPOUZfknrqLwdXvKGrqjhRtctaCE5JITAkFAnG10ROra3S5tz3nKV3YadeGt7vQMafI6yOQ== X-Received: by 2002:a17:906:37ce:b0:9ee:462c:7924 with SMTP id o14-20020a17090637ce00b009ee462c7924mr4501434ejc.9.1700053920469; Wed, 15 Nov 2023 05:12:00 -0800 (PST) Received: from jlinkes-PT-Latitude-5530.pantheon.local (81.89.53.154.host.vnet.sk. [81.89.53.154]) by smtp.gmail.com with ESMTPSA id tb16-20020a1709078b9000b009f2b7282387sm1011914ejc.46.2023.11.15.05.11.59 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 15 Nov 2023 05:11:59 -0800 (PST) From: =?utf-8?q?Juraj_Linke=C5=A1?= To: thomas@monjalon.net, Honnappa.Nagarahalli@arm.com, jspewock@iol.unh.edu, probb@iol.unh.edu, paul.szczepanek@arm.com, yoan.picchi@foss.arm.com Cc: dev@dpdk.org, =?utf-8?q?Juraj_Linke=C5=A1?= Subject: [PATCH v7 08/21] dts: test suite docstring update Date: Wed, 15 Nov 2023 14:09:46 +0100 Message-Id: <20231115130959.39420-9-juraj.linkes@pantheon.tech> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20231115130959.39420-1-juraj.linkes@pantheon.tech> References: <20231108125324.191005-23-juraj.linkes@pantheon.tech> <20231115130959.39420-1-juraj.linkes@pantheon.tech> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Format according to the Google format and PEP257, with slight 
deviations. Signed-off-by: Juraj Linkeš --- dts/framework/test_suite.py | 223 +++++++++++++++++++++++++++--------- 1 file changed, 168 insertions(+), 55 deletions(-) diff --git a/dts/framework/test_suite.py b/dts/framework/test_suite.py index d53553bf34..9e5251ffc6 100644 --- a/dts/framework/test_suite.py +++ b/dts/framework/test_suite.py @@ -2,8 +2,19 @@ # Copyright(c) 2010-2014 Intel Corporation # Copyright(c) 2023 PANTHEON.tech s.r.o. -""" -Base class for creating DTS test cases. +"""Features common to all test suites. + +The module defines the :class:`TestSuite` class which doesn't contain any test cases, and as such +must be extended by subclasses which add test cases. The :class:`TestSuite` contains the basics +needed by subclasses: + + * Test suite and test case execution flow, + * Testbed (SUT, TG) configuration, + * Packet sending and verification, + * Test case verification. + +The module also defines a function, :func:`get_test_suites`, +for gathering test suites from a Python module. """ import importlib @@ -31,25 +42,44 @@ class TestSuite(object): - """ - The base TestSuite class provides methods for handling basic flow of a test suite: - * test case filtering and collection - * test suite setup/cleanup - * test setup/cleanup - * test case execution - * error handling and results storage - Test cases are implemented by derived classes. Test cases are all methods - starting with test_, further divided into performance test cases - (starting with test_perf_) and functional test cases (all other test cases). - By default, all test cases will be executed. A list of testcase str names - may be specified in conf.yaml or on the command line - to filter which test cases to run. - The methods named [set_up|tear_down]_[suite|test_case] should be overridden - in derived classes if the appropriate suite/test case fixtures are needed. + """The base class with methods for handling the basic flow of a test suite. 
+ + * Test case filtering and collection, + * Test suite setup/cleanup, + * Test setup/cleanup, + * Test case execution, + * Error handling and results storage. + + Test cases are implemented by subclasses. Test cases are all methods starting with ``test_``, + further divided into performance test cases (starting with ``test_perf_``) + and functional test cases (all other test cases). + + By default, all test cases will be executed. A list of test case names may be specified + in the YAML test run configuration file and in the :option:`--test-cases` command line argument + or in the :envvar:`DTS_TESTCASES` environment variable to filter which test cases to run. + The union of both lists will be used. Any unknown test cases from the latter lists + will be silently ignored. + + If the :option:`--re-run` command line argument or the :envvar:`DTS_RERUN` environment variable + is set, in case of a test case failure, the test case will be executed again until it passes + or it fails that many times in addition to the first failure. + + The methods named ``[set_up|tear_down]_[suite|test_case]`` should be overridden in subclasses + if the appropriate test suite/test case fixtures are needed. + + The test suite is aware of the testbed (the SUT and TG) it's running on. From this, it can + properly choose the IP addresses and other configuration that must be tailored to the testbed. + + Attributes: + sut_node: The SUT node where the test suite is running. + tg_node: The TG node where the test suite is running. + is_blocking: Whether the test suite is blocking. A failure of a blocking test suite + will block the execution of all subsequent test suites in the current build target.
""" sut_node: SutNode - is_blocking = False + tg_node: TGNode + is_blocking: bool = False _logger: DTSLOG _test_cases_to_run: list[str] _func: bool @@ -72,6 +102,19 @@ def __init__( func: bool, build_target_result: BuildTargetResult, ): + """Initialize the test suite testbed information and basic configuration. + + Process what test cases to run, create the associated :class:`TestSuiteResult`, + find links between ports and set up default IP addresses to be used when configuring them. + + Args: + sut_node: The SUT node where the test suite will run. + tg_node: The TG node where the test suite will run. + test_cases: The list of test cases to execute. + If empty, all test cases will be executed. + func: Whether to run functional tests. + build_target_result: The build target result this test suite is run in. + """ self.sut_node = sut_node self.tg_node = tg_node self._logger = getLogger(self.__class__.__name__) @@ -95,6 +138,7 @@ def __init__( self._tg_ip_address_ingress = ip_interface("192.168.101.3/24") def _process_links(self) -> None: + """Construct links between SUT and TG ports.""" for sut_port in self.sut_node.ports: for tg_port in self.tg_node.ports: if (sut_port.identifier, sut_port.peer) == ( @@ -106,27 +150,42 @@ def _process_links(self) -> None: ) def set_up_suite(self) -> None: - """ - Set up test fixtures common to all test cases; this is done before - any test case is run. + """Set up test fixtures common to all test cases. + + This is done before any test case has been run. """ def tear_down_suite(self) -> None: - """ - Tear down the previously created test fixtures common to all test cases. + """Tear down the previously created test fixtures common to all test cases. + + This is done after all test have been run. """ def set_up_test_case(self) -> None: - """ - Set up test fixtures before each test case. + """Set up test fixtures before each test case. + + This is done before *each* test case. 
""" def tear_down_test_case(self) -> None: - """ - Tear down the previously created test fixtures after each test case. + """Tear down the previously created test fixtures after each test case. + + This is done after *each* test case. """ def configure_testbed_ipv4(self, restore: bool = False) -> None: + """Configure IPv4 addresses on all testbed ports. + + The configured ports are: + + * SUT ingress port, + * SUT egress port, + * TG ingress port, + * TG egress port. + + Args: + restore: If :data:`True`, will remove the configuration instead. + """ delete = True if restore else False enable = False if restore else True self._configure_ipv4_forwarding(enable) @@ -153,11 +212,13 @@ def _configure_ipv4_forwarding(self, enable: bool) -> None: def send_packet_and_capture( self, packet: Packet, duration: float = 1 ) -> list[Packet]: - """ - Send a packet through the appropriate interface and - receive on the appropriate interface. - Modify the packet with l3/l2 addresses corresponding - to the testbed and desired traffic. + """Send and receive `packet` using the associated TG. + + Send `packet` through the appropriate interface and receive on the appropriate interface. + Modify the packet with l3/l2 addresses corresponding to the testbed and desired traffic. + + Returns: + A list of received packets. """ packet = self._adjust_addresses(packet) return self.tg_node.send_packet_and_capture( @@ -165,13 +226,25 @@ def send_packet_and_capture( ) def get_expected_packet(self, packet: Packet) -> Packet: + """Inject the proper L2/L3 addresses into `packet`. + + Args: + packet: The packet to modify. + + Returns: + `packet` with injected L2/L3 addresses. + """ return self._adjust_addresses(packet, expected=True) def _adjust_addresses(self, packet: Packet, expected: bool = False) -> Packet: - """ + """L2 and L3 address additions in both directions. + Assumptions: - Two links between SUT and TG, one link is TG -> SUT, - the other SUT -> TG. 
+ Two links between SUT and TG, one link is TG -> SUT, the other SUT -> TG. + + Args: + packet: The packet to modify. + expected: If :data:`True`, the direction is SUT -> TG, otherwise the direction is TG -> SUT. """ if expected: # The packet enters the TG from SUT @@ -197,6 +270,19 @@ def _adjust_addresses(self, packet: Packet, expected: bool = False) -> Packet: return Ether(packet.build()) def verify(self, condition: bool, failure_description: str) -> None: + """Verify `condition` and handle failures. + + When `condition` is :data:`False`, raise an exception and log the last 10 commands + executed on both the SUT and TG. + + Args: + condition: The condition to check. + failure_description: A short description of the failure + that will be stored in the raised exception. + + Raises: + TestCaseVerifyError: `condition` is :data:`False`. + """ if not condition: self._fail_test_case_verify(failure_description) @@ -216,6 +302,19 @@ def _fail_test_case_verify(self, failure_description: str) -> None: def verify_packets( self, expected_packet: Packet, received_packets: list[Packet] ) -> None: + """Verify that `expected_packet` has been received. + + Go through `received_packets` and check that `expected_packet` is among them. + If not, raise an exception and log the last 10 commands + executed on both the SUT and TG. + + Args: + expected_packet: The packet we're expecting to receive. + received_packets: The packets where we're looking for `expected_packet`. + + Raises: + TestCaseVerifyError: `expected_packet` is not among `received_packets`. + """ for received_packet in received_packets: if self._compare_packets(expected_packet, received_packet): break @@ -303,10 +402,14 @@ def _verify_l3_packet(self, received_packet: IP, expected_packet: IP) -> bool: return True def run(self) -> None: - """ - Setup, execute and teardown the whole suite. - Suite execution consists of running all test cases scheduled to be executed. 
- A test cast run consists of setup, execution and teardown of said test case. + """Set up, execute and tear down the whole suite. + + Test suite execution consists of running all test cases scheduled to be executed. + A test case run consists of setup, execution and teardown of said test case. + + Record the setup and the teardown and handle failures. + + The list of scheduled test cases is constructed when creating the :class:`TestSuite` object. """ test_suite_name = self.__class__.__name__ @@ -338,9 +441,7 @@ def run(self) -> None: raise BlockingTestSuiteError(test_suite_name) def _execute_test_suite(self) -> None: - """ - Execute all test cases scheduled to be executed in this suite. - """ + """Execute all test cases scheduled to be executed in this suite.""" if self._func: for test_case_method in self._get_functional_test_cases(): test_case_name = test_case_method.__name__ @@ -357,14 +458,18 @@ def _execute_test_suite(self) -> None: self._run_test_case(test_case_method, test_case_result) def _get_functional_test_cases(self) -> list[MethodType]: - """ - Get all functional test cases. + """Get all functional test cases defined in this TestSuite. + + Returns: + The list of functional test cases of this TestSuite. """ return self._get_test_cases(r"test_(?!perf_)") def _get_test_cases(self, test_case_regex: str) -> list[MethodType]: - """ - Return a list of test cases matching test_case_regex. + """Return a list of test cases matching test_case_regex. + + Returns: + The list of test cases matching test_case_regex of this TestSuite. """ self._logger.debug(f"Searching for test cases in {self.__class__.__name__}.") filtered_test_cases = [] @@ -378,9 +483,7 @@ def _get_test_cases(self, test_case_regex: str) -> list[MethodType]: return filtered_test_cases def _should_be_executed(self, test_case_name: str, test_case_regex: str) -> bool: - """ - Check whether the test case should be executed. 
- """ + """Check whether the test case should be scheduled to be executed.""" match = bool(re.match(test_case_regex, test_case_name)) if self._test_cases_to_run: return match and test_case_name in self._test_cases_to_run @@ -390,9 +493,9 @@ def _should_be_executed(self, test_case_name: str, test_case_regex: str) -> bool def _run_test_case( self, test_case_method: MethodType, test_case_result: TestCaseResult ) -> None: - """ - Setup, execute and teardown a test case in this suite. - Exceptions are caught and recorded in logs and results. + """Setup, execute and teardown a test case in this suite. + + Record the result of the setup and the teardown and handle failures. """ test_case_name = test_case_method.__name__ @@ -427,9 +530,7 @@ def _run_test_case( def _execute_test_case( self, test_case_method: MethodType, test_case_result: TestCaseResult ) -> None: - """ - Execute one test case and handle failures. - """ + """Execute one test case, record the result and handle failures.""" test_case_name = test_case_method.__name__ try: self._logger.info(f"Starting test case execution: {test_case_name}") @@ -452,6 +553,18 @@ def _execute_test_case( def get_test_suites(testsuite_module_path: str) -> list[type[TestSuite]]: + r"""Find all :class:`TestSuite`\s in a Python module. + + Args: + testsuite_module_path: The path to the Python module. + + Returns: + The list of :class:`TestSuite`\s found within the Python module. + + Raises: + ConfigurationError: The test suite module was not found. 
+ """ + def is_test_suite(object: Any) -> bool: try: if issubclass(object, TestSuite) and object is not TestSuite: From patchwork Wed Nov 15 13:09:47 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Juraj_Linke=C5=A1?= X-Patchwork-Id: 134386 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 2482F43339; Wed, 15 Nov 2023 14:12:51 +0100 (CET) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 1A5E240E68; Wed, 15 Nov 2023 14:12:08 +0100 (CET) Received: from mail-ej1-f51.google.com (mail-ej1-f51.google.com [209.85.218.51]) by mails.dpdk.org (Postfix) with ESMTP id A0F2240A80 for ; Wed, 15 Nov 2023 14:12:02 +0100 (CET) Received: by mail-ej1-f51.google.com with SMTP id a640c23a62f3a-9d0b4dfd60dso1003690666b.1 for ; Wed, 15 Nov 2023 05:12:02 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=pantheon.tech; s=google; t=1700053922; x=1700658722; darn=dpdk.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=SNuBm0P/F+2D+ZgXgH9yFiu7r2iZtcMqM4yuUE6GvFY=; b=UjApxVlpOyVnC6wH172ioL9h54fZclbqsf/+yPrBo6r28i039megIwMXz0M8SKK5y4 y13qIG1B+UHmCYOic4BtZyMtbEWDa0w84I9pN5h5FHV2RhAUbgwx8eKR3KgJqwR4w2zJ 1O3GxgA2N1c2IJ26FIc6xGUXpLoCoB5Xj7f9HcaXEaOMyk4MVJMZ8R9J8hOm7i7KZz+e br86ZUROoK+VjPDVk6v+B5yNgxUlwHDn90dm6byzPhUx6+YihVysRZPIdKrLj4jEPWAm ucC+0mbQbwDu37a+dxOp3GUK+lfjswrKbVS94MLkn7Ahoo9gITQWHEWcFyRc2WbX7pQX kzIA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1700053922; x=1700658722; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc 
:subject:date:message-id:reply-to; bh=SNuBm0P/F+2D+ZgXgH9yFiu7r2iZtcMqM4yuUE6GvFY=; b=hnfYqX34hzPcNrZnQCJSEyXAMOIWwG7+Lc9OpFSGQeJqnb88qAs7YOlp4XSzNRWKFj O3aiEB7P+2MEdSxihoKTmrVIAluH0EZqXy//MXqqWyae0z/GWAO0+ee5UlYZO/i6avp4 twdVThQSclhDqT/MZqSL/hriD9gz/yFk8ccB07bbvLmhCu4XnAUiq490O3I053fMyCWm NKlSU5spJgOCkHzi/ihYZMEw7on82Zvn/9z5FfTKfAI/oRMmlumROMMgMK0XRkx9l9vq D01uv4DpVCtvGp1OQ9rkoApzoFaYLysqLWBP+pEqDJ+ETUNgRYBHV2twDZI45G04C2gB 1jsQ== X-Gm-Message-State: AOJu0YxIeXDZ29TfTOmakFhqI0ZvnUDqjC7ZDEbn7AgE/OeyP5nCYhRj JSb+h7CoqiQcsYIFmgJgVdH1Xw== X-Google-Smtp-Source: AGHT+IESyvtu0RXdaFrRyg9AS/+01+ez3tqcEemZian/TUlsXdlUJK+9hHZ6t38V2zXMNIOh6ujygg== X-Received: by 2002:a17:906:45a:b0:9c7:56ee:b6e5 with SMTP id e26-20020a170906045a00b009c756eeb6e5mr9061787eja.40.1700053921943; Wed, 15 Nov 2023 05:12:01 -0800 (PST) Received: from jlinkes-PT-Latitude-5530.pantheon.local (81.89.53.154.host.vnet.sk. [81.89.53.154]) by smtp.gmail.com with ESMTPSA id tb16-20020a1709078b9000b009f2b7282387sm1011914ejc.46.2023.11.15.05.12.00 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 15 Nov 2023 05:12:01 -0800 (PST) From: =?utf-8?q?Juraj_Linke=C5=A1?= To: thomas@monjalon.net, Honnappa.Nagarahalli@arm.com, jspewock@iol.unh.edu, probb@iol.unh.edu, paul.szczepanek@arm.com, yoan.picchi@foss.arm.com Cc: dev@dpdk.org, =?utf-8?q?Juraj_Linke=C5=A1?= Subject: [PATCH v7 09/21] dts: test result docstring update Date: Wed, 15 Nov 2023 14:09:47 +0100 Message-Id: <20231115130959.39420-10-juraj.linkes@pantheon.tech> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20231115130959.39420-1-juraj.linkes@pantheon.tech> References: <20231108125324.191005-23-juraj.linkes@pantheon.tech> <20231115130959.39420-1-juraj.linkes@pantheon.tech> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Format according to the Google format and 
PEP257, with slight deviations. Signed-off-by: Juraj Linkeš --- dts/framework/test_result.py | 292 ++++++++++++++++++++++++++++------- 1 file changed, 234 insertions(+), 58 deletions(-) diff --git a/dts/framework/test_result.py b/dts/framework/test_result.py index 603e18872c..05e210f6e7 100644 --- a/dts/framework/test_result.py +++ b/dts/framework/test_result.py @@ -2,8 +2,25 @@ # Copyright(c) 2023 PANTHEON.tech s.r.o. # Copyright(c) 2023 University of New Hampshire -""" -Generic result container and reporters +r"""Record and process DTS results. + +The results are recorded in a hierarchical manner: + + * :class:`DTSResult` contains + * :class:`ExecutionResult` contains + * :class:`BuildTargetResult` contains + * :class:`TestSuiteResult` contains + * :class:`TestCaseResult` + +Each result may contain multiple lower level results, e.g. there are multiple +:class:`TestSuiteResult`\s in a :class:`BuildTargetResult`. +The results have common parts, such as setup and teardown results, captured in :class:`BaseResult`, +which also defines some common behaviors in its methods. + +Each result class has its own idiosyncrasies which they implement in overridden methods. + +The :option:`--output` command line argument and the :envvar:`DTS_OUTPUT_DIR` environment +variable modify the directory where the files with results will be stored. """ import os.path @@ -26,26 +43,34 @@ class Result(Enum): - """ - An Enum defining the possible states that - a setup, a teardown or a test case may end up in. - """ + """The possible states that a setup, a teardown or a test case may end up in.""" + #: PASS = auto() + #: FAIL = auto() + #: ERROR = auto() + #: SKIP = auto() def __bool__(self) -> bool: + """Only PASS is True.""" return self is self.PASS class FixtureResult(object): - """ - A record that stored the result of a setup or a teardown. 
- The default is FAIL because immediately after creating the object - the setup of the corresponding stage will be executed, which also guarantees - the execution of teardown. + """A record that stores the result of a setup or a teardown. + + FAIL is a sensible default since it prevents false positives + (which could happen if the default was PASS). + + Preventing false positives or other false results is preferable since a failure + is most likely to be investigated (the other false results may not be investigated at all). + + Attributes: + result: The associated result. + error: The error in case of a failure. """ result: Result @@ -56,21 +81,32 @@ def __init__( result: Result = Result.FAIL, error: Exception | None = None, ): + """Initialize the constructor with the fixture result and store a possible error. + + Args: + result: The result to store. + error: The error which happened when a failure occurred. + """ self.result = result self.error = error def __bool__(self) -> bool: + """A wrapper around the stored :class:`Result`.""" return bool(self.result) class Statistics(dict): - """ - A helper class used to store the number of test cases by its result - along a few other basic information. - Using a dict provides a convenient way to format the data. + """How many test cases ended in which result state along with some other basic information. + + Subclassing :class:`dict` provides a convenient way to format the data. """ def __init__(self, dpdk_version: str | None): + """Extend the constructor with relevant keys. + + Args: + dpdk_version: The version of tested DPDK. + """ super(Statistics, self).__init__() for result in Result: self[result.name] = 0 @@ -78,8 +114,17 @@ def __init__(self, dpdk_version: str | None): self["DPDK VERSION"] = dpdk_version def __iadd__(self, other: Result) -> "Statistics": - """ - Add a Result to the final count. + """Add a Result to the final count.
+ + Example: + stats: Statistics = Statistics(None) # empty Statistics + stats += Result.PASS # add a Result to `stats` + + Args: + other: The Result to add to this statistics object. + + Returns: + The modified statistics object. """ self[other.name] += 1 self["PASS RATE"] = ( @@ -90,9 +135,7 @@ def __iadd__(self, other: Result) -> "Statistics": return self def __str__(self) -> str: - """ - Provide a string representation of the data. - """ + """Each line contains the formatted key = value pair.""" stats_str = "" for key, value in self.items(): stats_str += f"{key:<12} = {value}\n" @@ -102,10 +145,16 @@ def __str__(self) -> str: class BaseResult(object): - """ - The Base class for all results. Stores the results of - the setup and teardown portions of the corresponding stage - and a list of results from each inner stage in _inner_results. + """Common data and behavior of DTS results. + + Stores the results of the setup and teardown portions of the corresponding stage. + The hierarchical nature of DTS results is captured recursively in an internal list. + Each level in this particular hierarchy is a stage (pre-execution or the top-most level, + execution, build target, test suite and test case). + + Attributes: + setup_result: The result of the setup of the particular stage. + teardown_result: The result of the teardown of the particular stage. """ setup_result: FixtureResult @@ -113,15 +162,28 @@ class BaseResult(object): _inner_results: MutableSequence["BaseResult"] def __init__(self): + """Initialize the constructor.""" self.setup_result = FixtureResult() self.teardown_result = FixtureResult() self._inner_results = [] def update_setup(self, result: Result, error: Exception | None = None) -> None: + """Store the setup result. + + Args: + result: The result of the setup. + error: The error that occurred in case of a failure.
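As an aside for readers of this patch, the counting behavior documented in the ``__iadd__`` example above can be sketched in a self-contained way. This is a simplified stand-in, not the DTS class; only the key names and the pass-rate idea mirror it:

```python
from enum import Enum, auto


class Result(Enum):
    # Illustrative subset of the Result states defined in the patch.
    PASS = auto()
    FAIL = auto()


class MiniStatistics(dict):
    """Simplified sketch of the Statistics counting pattern."""

    def __init__(self) -> None:
        super().__init__()
        for result in Result:
            self[result.name] = 0
        self["PASS RATE"] = 0.0

    def __iadd__(self, other: Result) -> "MiniStatistics":
        # Count the result and recompute the pass rate over all executed cases.
        self[other.name] += 1
        executed = sum(self[result.name] for result in Result)
        self["PASS RATE"] = self[Result.PASS.name] * 100 / executed
        return self


stats = MiniStatistics()
stats += Result.PASS  # calls __iadd__, which returns self
stats += Result.FAIL
```

After the two additions, ``stats["PASS"]`` is 1 and ``stats["PASS RATE"]`` is 50.0, which is the in-place accumulation the docstring example describes.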
+ """ self.setup_result.result = result self.setup_result.error = error def update_teardown(self, result: Result, error: Exception | None = None) -> None: + """Store the teardown result. + + Args: + result: The result of the teardown. + error: The error that occurred in case of a failure. + """ self.teardown_result.result = result self.teardown_result.error = error @@ -141,27 +203,55 @@ def _get_inner_errors(self) -> list[Exception]: ] def get_errors(self) -> list[Exception]: + """Compile errors from the whole result hierarchy. + + Returns: + The errors from setup, teardown and all errors found in the whole result hierarchy. + """ return self._get_setup_teardown_errors() + self._get_inner_errors() def add_stats(self, statistics: Statistics) -> None: + """Collate stats from the whole result hierarchy. + + Args: + statistics: The :class:`Statistics` object where the stats will be collated. + """ for inner_result in self._inner_results: inner_result.add_stats(statistics) class TestCaseResult(BaseResult, FixtureResult): - """ - The test case specific result. - Stores the result of the actual test case. - Also stores the test case name. + r"""The test case specific result. + + Stores the result of the actual test case. This is done by adding an extra superclass + in :class:`FixtureResult`. The setup and teardown results are :class:`FixtureResult`\s and + the class is itself a record of the test case. + + Attributes: + test_case_name: The test case name. """ test_case_name: str def __init__(self, test_case_name: str): + """Extend the constructor with `test_case_name`. + + Args: + test_case_name: The test case's name. + """ super(TestCaseResult, self).__init__() self.test_case_name = test_case_name def update(self, result: Result, error: Exception | None = None) -> None: + """Update the test case result. + + This updates the result of the test case itself and doesn't affect + the results of the setup and teardown steps in any way. 
+ + Args: + result: The result of the test case. + error: The error that occurred in case of a failure. + """ self.result = result self.error = error @@ -171,38 +261,66 @@ def _get_inner_errors(self) -> list[Exception]: return [] def add_stats(self, statistics: Statistics) -> None: + r"""Add the test case result to statistics. + + The base method goes through the hierarchy recursively and this method is here to stop + the recursion, as the :class:`TestCaseResult`\s are the leaves of the hierarchy tree. + + Args: + statistics: The :class:`Statistics` object where the stats will be added. + """ statistics += self.result def __bool__(self) -> bool: + """The test case passed only if setup, teardown and the test case itself passed.""" return ( bool(self.setup_result) and bool(self.teardown_result) and bool(self.result) ) class TestSuiteResult(BaseResult): - """ - The test suite specific result. - The _inner_results list stores results of test cases in a given test suite. - Also stores the test suite name. + """The test suite specific result. + + The internal list stores the results of all test cases in a given test suite. + + Attributes: + suite_name: The test suite name. """ suite_name: str def __init__(self, suite_name: str): + """Extend the constructor with `suite_name`. + + Args: + suite_name: The test suite's name. + """ super(TestSuiteResult, self).__init__() self.suite_name = suite_name def add_test_case(self, test_case_name: str) -> TestCaseResult: + """Add and return the inner result (test case). + + Args: + test_case_name: The name of the test case to add. + + Returns: + The test case's result. + """ test_case_result = TestCaseResult(test_case_name) self._inner_results.append(test_case_result) return test_case_result class BuildTargetResult(BaseResult): - """ - The build target specific result. - The _inner_results list stores results of test suites in a given build target. - Also stores build target specifics, such as compiler used to build DPDK. + """The build target specific result.
+ + The internal list stores the results of all test suites in a given build target. + + Attributes: + arch: The DPDK build target architecture. + os: The DPDK build target operating system. + cpu: The DPDK build target CPU. + compiler: The DPDK build target compiler. + compiler_version: The DPDK build target compiler version. + dpdk_version: The built DPDK version. """ arch: Architecture @@ -213,6 +331,11 @@ class BuildTargetResult(BaseResult): dpdk_version: str | None def __init__(self, build_target: BuildTargetConfiguration): + """Extend the constructor with the `build_target` config. + + Args: + build_target: The build target's test run configuration. + """ super(BuildTargetResult, self).__init__() self.arch = build_target.arch self.os = build_target.os @@ -222,20 +345,35 @@ def __init__(self, build_target: BuildTargetConfiguration): self.dpdk_version = None def add_build_target_info(self, versions: BuildTargetInfo) -> None: + """Add information about the build target gathered at runtime. + + Args: + versions: The additional information. + """ self.compiler_version = versions.compiler_version self.dpdk_version = versions.dpdk_version def add_test_suite(self, test_suite_name: str) -> TestSuiteResult: + """Add and return the inner result (test suite). + + Args: + test_suite_name: The name of the test suite to add. + + Returns: + The test suite's result. + """ test_suite_result = TestSuiteResult(test_suite_name) self._inner_results.append(test_suite_result) return test_suite_result class ExecutionResult(BaseResult): - """ - The execution specific result. - The _inner_results list stores results of build targets in a given execution. - Also stores the SUT node configuration. + """The execution specific result. + + The internal list stores the results of all build targets in a given execution. + + Attributes: + sut_node: The SUT node used in the execution. + sut_os_name: The name of the operating system of the SUT node. + sut_os_version: The operating system version of the SUT node.
+ sut_kernel_version: The operating system kernel version of the SUT node. """ sut_node: NodeConfiguration @@ -244,36 +382,55 @@ class ExecutionResult(BaseResult): sut_kernel_version: str def __init__(self, sut_node: NodeConfiguration): + """Extend the constructor with the `sut_node`'s config. + + Args: + sut_node: The SUT node's test run configuration used in the execution. + """ super(ExecutionResult, self).__init__() self.sut_node = sut_node def add_build_target( self, build_target: BuildTargetConfiguration ) -> BuildTargetResult: + """Add and return the inner result (build target). + + Args: + build_target: The build target's test run configuration. + + Returns: + The build target's result. + """ build_target_result = BuildTargetResult(build_target) self._inner_results.append(build_target_result) return build_target_result def add_sut_info(self, sut_info: NodeInfo) -> None: + """Add SUT information gathered at runtime. + + Args: + sut_info: The additional SUT node information. + """ self.sut_os_name = sut_info.os_name self.sut_os_version = sut_info.os_version self.sut_kernel_version = sut_info.kernel_version class DTSResult(BaseResult): - """ - Stores environment information and test results from a DTS run, which are: - * Execution level information, such as SUT and TG hardware. - * Build target level information, such as compiler, target OS and cpu. - * Test suite results. - * All errors that are caught and recorded during DTS execution. + """Stores environment information and test results from a DTS run. - The information is stored in nested objects. + * Execution level information, such as testbed and the test suite list, + * Build target level information, such as compiler, target OS and cpu, + * Test suite and test case results, + * All errors that are caught and recorded during DTS execution. - The class is capable of computing the return code used to exit DTS with - from the stored error. + The information is stored hierarchically. 
This is the first level of the hierarchy + and as such is where the data from the whole hierarchy is collated or processed. - It also provides a brief statistical summary of passed/failed test cases. + The internal list stores the results of all executions. + + Attributes: + dpdk_version: The DPDK version to record. """ dpdk_version: str | None @@ -284,6 +441,11 @@ class DTSResult(BaseResult): _stats_filename: str def __init__(self, logger: DTSLOG): + """Extend the constructor with top-level specifics. + + Args: + logger: The logger instance the whole result will use. + """ super(DTSResult, self).__init__() self.dpdk_version = None self._logger = logger @@ -293,21 +455,33 @@ def __init__(self, logger: DTSLOG): self._stats_filename = os.path.join(SETTINGS.output_dir, "statistics.txt") def add_execution(self, sut_node: NodeConfiguration) -> ExecutionResult: + """Add and return the inner result (execution). + + Args: + sut_node: The SUT node's test run configuration. + + Returns: + The execution's result. + """ execution_result = ExecutionResult(sut_node) self._inner_results.append(execution_result) return execution_result def add_error(self, error: Exception) -> None: + """Record an error that occurred outside any execution. + + Args: + error: The exception to record. + """ self._errors.append(error) def process(self) -> None: - """ - Process the data after a DTS run. - The data is added to nested objects during runtime and this parent object - is not updated at that time. This requires us to process the nested data - after it's all been gathered. + """Process the data after a whole DTS run. + + The data is added to inner objects during runtime and this object is not updated + at that time. This requires us to process the inner data after it's all been gathered. - The processing gathers all errors and the result statistics of test cases. + The processing gathers all errors and the statistics of test case results.
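The collate-then-process flow the ``process`` docstring describes rests on the recursive gathering done lower in the hierarchy. A toy sketch of that recursion, with hypothetical names standing in for the ``BaseResult`` machinery:

```python
class MiniResult:
    """Toy stand-in for BaseResult's recursive error collection."""

    def __init__(self) -> None:
        self.errors: list[Exception] = []
        self.inner: list["MiniResult"] = []

    def get_errors(self) -> list[Exception]:
        # Own errors first, then everything found lower in the hierarchy.
        collected = list(self.errors)
        for child in self.inner:
            collected.extend(child.get_errors())
        return collected


run = MiniResult()        # corresponds to the DTSResult level
execution = MiniResult()  # corresponds to an inner result
execution.errors.append(ValueError("test case failed"))
run.inner.append(execution)
```

Calling ``run.get_errors()`` surfaces the error recorded in the inner result, which is the property ``process()`` relies on when it collates everything at the top level.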
""" self._errors += self.get_errors() if self._errors and self._logger: @@ -321,8 +495,10 @@ def process(self) -> None: stats_file.write(str(self._stats_result)) def get_return_code(self) -> int: - """ - Go through all stored Exceptions and return the highest error code found. + """Go through all stored Exceptions and return the final DTS error code. + + Returns: + The highest error code found. """ for error in self._errors: error_return_code = ErrorSeverity.GENERIC_ERR From patchwork Wed Nov 15 13:09:48 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Juraj_Linke=C5=A1?= X-Patchwork-Id: 134387 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 500BE43339; Wed, 15 Nov 2023 14:13:02 +0100 (CET) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id AEC3740ED8; Wed, 15 Nov 2023 14:12:09 +0100 (CET) Received: from mail-ej1-f43.google.com (mail-ej1-f43.google.com [209.85.218.43]) by mails.dpdk.org (Postfix) with ESMTP id 5F14240A77 for ; Wed, 15 Nov 2023 14:12:04 +0100 (CET) Received: by mail-ej1-f43.google.com with SMTP id a640c23a62f3a-9f26ee4a6e5so119070166b.2 for ; Wed, 15 Nov 2023 05:12:04 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=pantheon.tech; s=google; t=1700053924; x=1700658724; darn=dpdk.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=UPLVAuBLaKUHh0e7ghZ5KBaj+aS4lDT4JMKEmjiTvQs=; b=b23qKbKeUSMNeRDMEquTSC52rmUnF2VXAeE7Jd2zAn6rSe1NASDiSigGaFBsRx6pCq fUAUFbHd/j0bWSzCOmG1T+P4o9ssqV16LytXQSh6WoxKG7D3FAjXXwrY5bNeEpvpSRiI i7B7Mob0l4MgsN9i/6hFN4FuxG7OMHRr9XkNr81wXTiIZoFH6yaZhFVn7U272mM2rhJg 
cPY4FMtxZAP3BA/WJlrhXQyD5gjMcFe/PW+fOy6YCezZH+CpzlCJW/VLFi2NRUnN5FPK tg9JgWbRxEi+QjmoaPTQmMv99UF4AirRS5B1zBMd0tA952Gc/37oDjMNh6PLIgTVNspS XIiA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1700053924; x=1700658724; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=UPLVAuBLaKUHh0e7ghZ5KBaj+aS4lDT4JMKEmjiTvQs=; b=Lg4M1GxWs99zrv3fQCZxlc9XoHnjTcj6NGGgTEWSpBDnpGfc06Y2PLUAonbeaIpp0m Q2moIjEdnpPgz6i4btEWrT1EHmol4jxgzwfqFjz9MZUneely4vymH2rpi51FAFyDHWfn LpzI1R0ApCKTd6DhWvLTc3x4hF7Rr3CBsaI1rbx/ad9dsp5mwVG6o/lbH6tbcnPVE/Rz MBTD/gk42ATKHUe+/t+g40zmA1e5E4/rIStMSuTqTg51qac7DobRFn6pKr4YN+gHHlXL 75qf+ns0SPflpMgE/BGPKX0QbKOe7oNtA1n19YIP2j4cuhJro8zMMdDecjl7/4mKDBIL mBnQ== X-Gm-Message-State: AOJu0Ywu4Tecg+jE88YNwU5z5sgWXTdhWNSbRCkBeyELepCG2xKBIU6g 9Gsx2yF1+2jD4mw/3mJEC4YcxQ== X-Google-Smtp-Source: AGHT+IF8sv8pi2vbva494y78kWh/Ix2RNZ3i4u/d/eNHNw3+OGAxRrb+/wH8rpjTNAOl0rEJAt/cNw== X-Received: by 2002:a17:906:53c3:b0:9be:53ef:211d with SMTP id p3-20020a17090653c300b009be53ef211dmr9679350ejo.72.1700053923736; Wed, 15 Nov 2023 05:12:03 -0800 (PST) Received: from jlinkes-PT-Latitude-5530.pantheon.local (81.89.53.154.host.vnet.sk. 
[81.89.53.154]) by smtp.gmail.com with ESMTPSA id tb16-20020a1709078b9000b009f2b7282387sm1011914ejc.46.2023.11.15.05.12.02 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 15 Nov 2023 05:12:02 -0800 (PST) From: =?utf-8?q?Juraj_Linke=C5=A1?= To: thomas@monjalon.net, Honnappa.Nagarahalli@arm.com, jspewock@iol.unh.edu, probb@iol.unh.edu, paul.szczepanek@arm.com, yoan.picchi@foss.arm.com Cc: dev@dpdk.org, =?utf-8?q?Juraj_Linke=C5=A1?= Subject: [PATCH v7 10/21] dts: config docstring update Date: Wed, 15 Nov 2023 14:09:48 +0100 Message-Id: <20231115130959.39420-11-juraj.linkes@pantheon.tech> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20231115130959.39420-1-juraj.linkes@pantheon.tech> References: <20231108125324.191005-23-juraj.linkes@pantheon.tech> <20231115130959.39420-1-juraj.linkes@pantheon.tech> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Format according to the Google format and PEP257, with slight deviations. Signed-off-by: Juraj Linkeš --- dts/framework/config/__init__.py | 371 ++++++++++++++++++++++++++----- dts/framework/config/types.py | 132 +++++++++++ 2 files changed, 446 insertions(+), 57 deletions(-) create mode 100644 dts/framework/config/types.py diff --git a/dts/framework/config/__init__.py b/dts/framework/config/__init__.py index 2044c82611..0aa149a53d 100644 --- a/dts/framework/config/__init__.py +++ b/dts/framework/config/__init__.py @@ -3,8 +3,34 @@ # Copyright(c) 2022-2023 University of New Hampshire # Copyright(c) 2023 PANTHEON.tech s.r.o. -""" -Yaml config parsing methods +"""Testbed configuration and test suite specification. 
+ + This package offers classes that hold real-time information about the testbed and the test run + configuration describing the tested testbed, along with a loader function, :func:`load_config`, + which loads the YAML test run configuration file + and validates it according to :download:`the schema `. + + The YAML test run configuration file is parsed into a dictionary, parts of which are used throughout + this package. The allowed keys and types inside this dictionary are defined in + the :doc:`types ` module. + + The test run configuration has two main sections: + + * The :class:`ExecutionConfiguration` which defines what tests are going to be run + and how DPDK will be built. It also references the testbed where these tests and DPDK + are going to be run, + * The nodes of the testbed are defined in the other section, + a :class:`list` of :class:`NodeConfiguration` objects. + + The real-time information about the testbed is supposed to be gathered at runtime. + + The classes defined in this package make heavy use of :mod:`dataclasses`. + All of them use slots and are frozen: + + * Slots enables some optimizations, by pre-allocating space for the defined + attributes in the underlying data structure, + * Frozen makes the object immutable. This enables further optimizations, + and makes it thread safe should we ever want to move in that direction.
""" import json @@ -12,11 +38,20 @@ import pathlib from dataclasses import dataclass from enum import auto, unique -from typing import Any, TypedDict, Union +from typing import Union import warlock # type: ignore[import] import yaml +from framework.config.types import ( + BuildTargetConfigDict, + ConfigurationDict, + ExecutionConfigDict, + NodeConfigDict, + PortConfigDict, + TestSuiteConfigDict, + TrafficGeneratorConfigDict, +) from framework.exception import ConfigurationError from framework.settings import SETTINGS from framework.utils import StrEnum @@ -24,55 +59,97 @@ @unique class Architecture(StrEnum): + r"""The supported architectures of :class:`~framework.testbed_model.node.Node`\s.""" + + #: i686 = auto() + #: x86_64 = auto() + #: x86_32 = auto() + #: arm64 = auto() + #: ppc64le = auto() @unique class OS(StrEnum): + r"""The supported operating systems of :class:`~framework.testbed_model.node.Node`\s.""" + + #: linux = auto() + #: freebsd = auto() + #: windows = auto() @unique class CPUType(StrEnum): + r"""The supported CPUs of :class:`~framework.testbed_model.node.Node`\s.""" + + #: native = auto() + #: armv8a = auto() + #: dpaa2 = auto() + #: thunderx = auto() + #: xgene1 = auto() @unique class Compiler(StrEnum): + r"""The supported compilers of :class:`~framework.testbed_model.node.Node`\s.""" + + #: gcc = auto() + #: clang = auto() + #: icc = auto() + #: msvc = auto() @unique class TrafficGeneratorType(StrEnum): + """The supported traffic generators.""" + + #: SCAPY = auto() -# Slots enables some optimizations, by pre-allocating space for the defined -# attributes in the underlying data structure. -# -# Frozen makes the object immutable. This enables further optimizations, -# and makes it thread safe should we every want to move in that direction. @dataclass(slots=True, frozen=True) class HugepageConfiguration: + r"""The hugepage configuration of :class:`~framework.testbed_model.node.Node`\s. + + Attributes: + amount: The number of hugepages. 
+ force_first_numa: If :data:`True`, the hugepages will be configured on the first NUMA node. + """ + amount: int force_first_numa: bool @dataclass(slots=True, frozen=True) class PortConfig: + r"""The port configuration of :class:`~framework.testbed_model.node.Node`\s. + + Attributes: + node: The :class:`~framework.testbed_model.node.Node` where this port exists. + pci: The PCI address of the port. + os_driver_for_dpdk: The operating system driver name for use with DPDK. + os_driver: The operating system driver name when the operating system controls the port. + peer_node: The :class:`~framework.testbed_model.node.Node` of the port + connected to this port. + peer_pci: The PCI address of the port connected to this port. + """ + node: str pci: str os_driver_for_dpdk: str @@ -81,18 +158,44 @@ class PortConfig: peer_pci: str @staticmethod - def from_dict(node: str, d: dict) -> "PortConfig": + def from_dict(node: str, d: PortConfigDict) -> "PortConfig": + """A convenience method that creates the object from fewer inputs. + + Args: + node: The node where this port exists. + d: The configuration dictionary. + + Returns: + The port configuration instance. + """ return PortConfig(node=node, **d) @dataclass(slots=True, frozen=True) class TrafficGeneratorConfig: + """The configuration of traffic generators. + + The class will be expanded when more configuration is needed. + + Attributes: + traffic_generator_type: The type of the traffic generator. + """ + traffic_generator_type: TrafficGeneratorType @staticmethod - def from_dict(d: dict) -> "ScapyTrafficGeneratorConfig": - # This looks useless now, but is designed to allow expansion to traffic - # generators that require more configuration later. + def from_dict(d: TrafficGeneratorConfigDict) -> "ScapyTrafficGeneratorConfig": + """A convenience method that produces traffic generator config of the proper type. + + Args: + d: The configuration dictionary. + + Returns: + The traffic generator configuration instance. 
+ + Raises: + ConfigurationError: An unknown traffic generator type was encountered. + """ match TrafficGeneratorType(d["type"]): case TrafficGeneratorType.SCAPY: return ScapyTrafficGeneratorConfig( @@ -106,11 +209,31 @@ def from_dict(d: dict) -> "ScapyTrafficGeneratorConfig": @dataclass(slots=True, frozen=True) class ScapyTrafficGeneratorConfig(TrafficGeneratorConfig): + """Scapy traffic generator specific configuration.""" + pass @dataclass(slots=True, frozen=True) class NodeConfiguration: + r"""The configuration of :class:`~framework.testbed_model.node.Node`\s. + + Attributes: + name: The name of the :class:`~framework.testbed_model.node.Node`. + hostname: The hostname of the :class:`~framework.testbed_model.node.Node`. + Can be an IP or a domain name. + user: The name of the user used to connect to + the :class:`~framework.testbed_model.node.Node`. + password: The password of the user. The use of passwords is heavily discouraged. + Please use keys instead. + arch: The architecture of the :class:`~framework.testbed_model.node.Node`. + os: The operating system of the :class:`~framework.testbed_model.node.Node`. + lcores: A comma delimited list of logical cores to use when running DPDK. + use_first_core: If :data:`True`, the first logical core won't be used. + hugepages: An optional hugepage configuration. + ports: The ports that can be used in testing. 
+ """ + name: str hostname: str user: str @@ -123,57 +246,91 @@ class NodeConfiguration: ports: list[PortConfig] @staticmethod - def from_dict(d: dict) -> Union["SutNodeConfiguration", "TGNodeConfiguration"]: - hugepage_config = d.get("hugepages") - if hugepage_config: - if "force_first_numa" not in hugepage_config: - hugepage_config["force_first_numa"] = False - hugepage_config = HugepageConfiguration(**hugepage_config) - - common_config = { - "name": d["name"], - "hostname": d["hostname"], - "user": d["user"], - "password": d.get("password"), - "arch": Architecture(d["arch"]), - "os": OS(d["os"]), - "lcores": d.get("lcores", "1"), - "use_first_core": d.get("use_first_core", False), - "hugepages": hugepage_config, - "ports": [PortConfig.from_dict(d["name"], port) for port in d["ports"]], - } - + def from_dict( + d: NodeConfigDict, + ) -> Union["SutNodeConfiguration", "TGNodeConfiguration"]: + """A convenience method that processes the inputs before creating a specialized instance. + + Args: + d: The configuration dictionary. + + Returns: + Either an SUT or TG configuration instance. 
+ """ + hugepage_config = None + if "hugepages" in d: + hugepage_config_dict = d["hugepages"] + if "force_first_numa" not in hugepage_config_dict: + hugepage_config_dict["force_first_numa"] = False + hugepage_config = HugepageConfiguration(**hugepage_config_dict) + + # The calls here contain duplicated code which is here because Mypy doesn't + # properly support dictionary unpacking with TypedDicts if "traffic_generator" in d: return TGNodeConfiguration( + name=d["name"], + hostname=d["hostname"], + user=d["user"], + password=d.get("password"), + arch=Architecture(d["arch"]), + os=OS(d["os"]), + lcores=d.get("lcores", "1"), + use_first_core=d.get("use_first_core", False), + hugepages=hugepage_config, + ports=[PortConfig.from_dict(d["name"], port) for port in d["ports"]], traffic_generator=TrafficGeneratorConfig.from_dict( d["traffic_generator"] ), - **common_config, ) else: return SutNodeConfiguration( - memory_channels=d.get("memory_channels", 1), **common_config + name=d["name"], + hostname=d["hostname"], + user=d["user"], + password=d.get("password"), + arch=Architecture(d["arch"]), + os=OS(d["os"]), + lcores=d.get("lcores", "1"), + use_first_core=d.get("use_first_core", False), + hugepages=hugepage_config, + ports=[PortConfig.from_dict(d["name"], port) for port in d["ports"]], + memory_channels=d.get("memory_channels", 1), ) @dataclass(slots=True, frozen=True) class SutNodeConfiguration(NodeConfiguration): + """:class:`~framework.testbed_model.sut_node.SutNode` specific configuration. + + Attributes: + memory_channels: The number of memory channels to use when running DPDK. + """ + memory_channels: int @dataclass(slots=True, frozen=True) class TGNodeConfiguration(NodeConfiguration): + """:class:`~framework.testbed_model.tg_node.TGNode` specific configuration. + + Attributes: + traffic_generator: The configuration of the traffic generator present on the TG node. 
+ """ + traffic_generator: ScapyTrafficGeneratorConfig @dataclass(slots=True, frozen=True) class NodeInfo: - """Class to hold important versions within the node. - - This class, unlike the NodeConfiguration class, cannot be generated at the start. - This is because we need to initialize a connection with the node before we can - collect the information needed in this class. Therefore, it cannot be a part of - the configuration class above. + """Supplemental node information. + + Attributes: + os_name: The name of the running operating system of + the :class:`~framework.testbed_model.node.Node`. + os_version: The version of the running operating system of + the :class:`~framework.testbed_model.node.Node`. + kernel_version: The kernel version of the running operating system of + the :class:`~framework.testbed_model.node.Node`. """ os_name: str @@ -183,6 +340,20 @@ class NodeInfo: @dataclass(slots=True, frozen=True) class BuildTargetConfiguration: + """DPDK build configuration. + + The configuration used for building DPDK. + + Attributes: + arch: The target architecture to build for. + os: The target os to build for. + cpu: The target CPU to build for. + compiler: The compiler executable to use. + compiler_wrapper: This string will be put in front of the compiler when + executing the build. Useful for adding wrapper commands, such as ``ccache``. + name: The name of the compiler. + """ + arch: Architecture os: OS cpu: CPUType @@ -191,7 +362,18 @@ class BuildTargetConfiguration: name: str @staticmethod - def from_dict(d: dict) -> "BuildTargetConfiguration": + def from_dict(d: BuildTargetConfigDict) -> "BuildTargetConfiguration": + r"""A convenience method that processes the inputs before creating an instance. + + `arch`, `os`, `cpu` and `compiler` are converted to :class:`Enum`\s and + `name` is constructed from `arch`, `os`, `cpu` and `compiler`. + + Args: + d: The configuration dictionary. + + Returns: + The build target configuration instance. 
+ """ return BuildTargetConfiguration( arch=Architecture(d["arch"]), os=OS(d["os"]), @@ -204,23 +386,29 @@ def from_dict(d: dict) -> "BuildTargetConfiguration": @dataclass(slots=True, frozen=True) class BuildTargetInfo: - """Class to hold important versions within the build target. + """Various versions and other information about a build target. - This is very similar to the NodeInfo class, it just instead holds information - for the build target. + Attributes: + dpdk_version: The DPDK version that was built. + compiler_version: The version of the compiler used to build DPDK. """ dpdk_version: str compiler_version: str -class TestSuiteConfigDict(TypedDict): - suite: str - cases: list[str] - - @dataclass(slots=True, frozen=True) class TestSuiteConfig: + """Test suite configuration. + + Information about a single test suite to be executed. + + Attributes: + test_suite: The name of the test suite module without the starting ``TestSuite_``. + test_cases: The names of test cases from this test suite to execute. + If empty, all test cases will be executed. + """ + test_suite: str test_cases: list[str] @@ -228,6 +416,14 @@ class TestSuiteConfig: def from_dict( entry: str | TestSuiteConfigDict, ) -> "TestSuiteConfig": + """Create an instance from two different types. + + Args: + entry: Either a suite name or a dictionary containing the config. + + Returns: + The test suite configuration instance. + """ if isinstance(entry, str): return TestSuiteConfig(test_suite=entry, test_cases=[]) elif isinstance(entry, dict): @@ -238,19 +434,49 @@ def from_dict( @dataclass(slots=True, frozen=True) class ExecutionConfiguration: + """The configuration of an execution. + + The configuration contains testbed information, what tests to execute + and with what DPDK build. + + Attributes: + build_targets: A list of DPDK builds to test. + perf: Whether to run performance tests. + func: Whether to run functional tests. + skip_smoke_tests: Whether to skip smoke tests. 
+ test_suites: The names of test suites and/or test cases to execute. + system_under_test_node: The SUT node to use in this execution. + traffic_generator_node: The TG node to use in this execution. + vdevs: The names of virtual devices to test. + """ + build_targets: list[BuildTargetConfiguration] perf: bool func: bool + skip_smoke_tests: bool test_suites: list[TestSuiteConfig] system_under_test_node: SutNodeConfiguration traffic_generator_node: TGNodeConfiguration vdevs: list[str] - skip_smoke_tests: bool @staticmethod def from_dict( - d: dict, node_map: dict[str, Union[SutNodeConfiguration | TGNodeConfiguration]] + d: ExecutionConfigDict, + node_map: dict[str, Union[SutNodeConfiguration | TGNodeConfiguration]], ) -> "ExecutionConfiguration": + """A convenience method that processes the inputs before creating an instance. + + The build target and the test suite configs are transformed into their respective objects. + SUT and TG configuration are taken from `node_map`. The other (:class:`bool`) attributes are + just stored. + + Args: + d: The configuration dictionary. + node_map: A dictionary mapping node names to their config objects. + + Returns: + The execution configuration instance. + """ build_targets: list[BuildTargetConfiguration] = list( map(BuildTargetConfiguration.from_dict, d["build_targets"]) ) @@ -291,10 +517,31 @@ def from_dict( @dataclass(slots=True, frozen=True) class Configuration: + """DTS testbed and test configuration. + + The node configuration is not stored in this object. Rather, all used node configurations + are stored inside the execution configuration where the nodes are actually used. + + Attributes: + executions: Execution configurations. + """ + executions: list[ExecutionConfiguration] @staticmethod - def from_dict(d: dict) -> "Configuration": + def from_dict(d: ConfigurationDict) -> "Configuration": + """A convenience method that processes the inputs before creating an instance.
+ + The build target and test suite configs are transformed into their respective objects. + SUT and TG configurations are created from the node configurations in `d`. The other + (:class:`bool`) attributes are just stored. + + Args: + d: The configuration dictionary. + + Returns: + The whole configuration instance. + """ nodes: list[Union[SutNodeConfiguration | TGNodeConfiguration]] = list( map(NodeConfiguration.from_dict, d["nodes"]) ) @@ -313,9 +560,17 @@ def from_dict(d: dict) -> "Configuration": def load_config() -> Configuration: - """ - Loads the configuration file and the configuration file schema, - validates the configuration file, and creates a configuration object. + """Load DTS test run configuration from a file. + + Load the YAML test run configuration file + and :download:`the configuration file schema `, + validate the test run configuration file, and create a test run configuration object. + + The YAML test run configuration file is specified in the :option:`--config-file` command line + argument or the :envvar:`DTS_CFG_FILE` environment variable. + + Returns: + The parsed test run configuration. + """ with open(SETTINGS.config_file_path, "r") as f: config_data = yaml.safe_load(f) @@ -326,6 +581,8 @@ def load_config() -> Configuration: with open(schema_path, "r") as f: schema = json.load(f) - config: dict[str, Any] = warlock.model_factory(schema, name="_Config")(config_data) - config_obj: Configuration = Configuration.from_dict(dict(config)) + config = warlock.model_factory(schema, name="_Config")(config_data) + config_obj: Configuration = Configuration.from_dict( + dict(config) # type: ignore[arg-type] + ) return config_obj diff --git a/dts/framework/config/types.py b/dts/framework/config/types.py new file mode 100644 index 0000000000..1927910d88 --- /dev/null +++ b/dts/framework/config/types.py @@ -0,0 +1,132 @@ +# SPDX-License-Identifier: BSD-3-Clause +# Copyright(c) 2023 PANTHEON.tech s.r.o. + +"""Configuration dictionary contents specification.
+ +These type definitions serve as documentation of the configuration dictionary contents. + +The definitions use the built-in :class:`~typing.TypedDict` construct. +""" + +from typing import TypedDict + + +class PortConfigDict(TypedDict): + """Allowed keys and values.""" + + #: + pci: str + #: + os_driver_for_dpdk: str + #: + os_driver: str + #: + peer_node: str + #: + peer_pci: str + + +class TrafficGeneratorConfigDict(TypedDict): + """Allowed keys and values.""" + + #: + type: str + + +class HugepageConfigurationDict(TypedDict): + """Allowed keys and values.""" + + #: + amount: int + #: + force_first_numa: bool + + +class NodeConfigDict(TypedDict): + """Allowed keys and values.""" + + #: + hugepages: HugepageConfigurationDict + #: + name: str + #: + hostname: str + #: + user: str + #: + password: str + #: + arch: str + #: + os: str + #: + lcores: str + #: + use_first_core: bool + #: + ports: list[PortConfigDict] + #: + memory_channels: int + #: + traffic_generator: TrafficGeneratorConfigDict + + +class BuildTargetConfigDict(TypedDict): + """Allowed keys and values.""" + + #: + arch: str + #: + os: str + #: + cpu: str + #: + compiler: str + #: + compiler_wrapper: str + + +class TestSuiteConfigDict(TypedDict): + """Allowed keys and values.""" + + #: + suite: str + #: + cases: list[str] + + +class ExecutionSUTConfigDict(TypedDict): + """Allowed keys and values.""" + + #: + node_name: str + #: + vdevs: list[str] + + +class ExecutionConfigDict(TypedDict): + """Allowed keys and values.""" + + #: + build_targets: list[BuildTargetConfigDict] + #: + perf: bool + #: + func: bool + #: + skip_smoke_tests: bool + #: + test_suites: TestSuiteConfigDict + #: + system_under_test_node: ExecutionSUTConfigDict + #: + traffic_generator_node: str + + +class ConfigurationDict(TypedDict): + """Allowed keys and values.""" + + #: + nodes: list[NodeConfigDict] + #: + executions: list[ExecutionConfigDict] From patchwork Wed Nov 15 13:09:49 2023 Content-Type: text/plain; charset="utf-8" 
MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Juraj_Linke=C5=A1?= X-Patchwork-Id: 134388 X-Patchwork-Delegate: thomas@monjalon.net From: =?utf-8?q?Juraj_Linke=C5=A1?= To: thomas@monjalon.net, Honnappa.Nagarahalli@arm.com, jspewock@iol.unh.edu, probb@iol.unh.edu, paul.szczepanek@arm.com, yoan.picchi@foss.arm.com Cc: dev@dpdk.org, =?utf-8?q?Juraj_Linke=C5=A1?= Subject: [PATCH v7 11/21] dts: remote session docstring update Date: Wed, 15 Nov 2023 14:09:49 +0100 Message-Id: <20231115130959.39420-12-juraj.linkes@pantheon.tech> In-Reply-To: <20231115130959.39420-1-juraj.linkes@pantheon.tech> References: <20231108125324.191005-23-juraj.linkes@pantheon.tech> <20231115130959.39420-1-juraj.linkes@pantheon.tech> Format according to the Google format and PEP257, with slight deviations.
Signed-off-by: Juraj Linkeš --- dts/framework/remote_session/__init__.py | 39 +++++- .../remote_session/remote_session.py | 128 +++++++++++++----- dts/framework/remote_session/ssh_session.py | 16 +-- 3 files changed, 135 insertions(+), 48 deletions(-) diff --git a/dts/framework/remote_session/__init__.py b/dts/framework/remote_session/__init__.py index 5e7ddb2b05..51a01d6b5e 100644 --- a/dts/framework/remote_session/__init__.py +++ b/dts/framework/remote_session/__init__.py @@ -2,12 +2,14 @@ # Copyright(c) 2023 PANTHEON.tech s.r.o. # Copyright(c) 2023 University of New Hampshire -""" -The package provides modules for managing remote connections to a remote host (node), -differentiated by OS. -The package provides a factory function, create_session, that returns the appropriate -remote connection based on the passed configuration. The differences are in the -underlying transport protocol (e.g. SSH) and remote OS (e.g. Linux). +"""Remote interactive and non-interactive sessions. + +This package provides modules for managing remote connections to a remote host (node). + +The non-interactive sessions send commands and return their output and exit code. + +The interactive sessions open an interactive shell which is continuously open, +allowing it to send and receive data within that particular shell. """ # pylama:ignore=W0611 @@ -26,10 +28,35 @@ def create_remote_session( node_config: NodeConfiguration, name: str, logger: DTSLOG ) -> RemoteSession: + """Factory for non-interactive remote sessions. + + The function returns an SSH session, but will be extended if support + for other protocols is added. + + Args: + node_config: The test run configuration of the node to connect to. + name: The name of the session. + logger: The logger instance this session will use. + + Returns: + The SSH remote session. 
+ """ return SSHSession(node_config, name, logger) def create_interactive_session( node_config: NodeConfiguration, logger: DTSLOG ) -> InteractiveRemoteSession: + """Factory for interactive remote sessions. + + The function returns an interactive SSH session, but will be extended if support + for other protocols is added. + + Args: + node_config: The test run configuration of the node to connect to. + logger: The logger instance this session will use. + + Returns: + The interactive SSH remote session. + """ return InteractiveRemoteSession(node_config, logger) diff --git a/dts/framework/remote_session/remote_session.py b/dts/framework/remote_session/remote_session.py index 0647d93de4..629c2d7b9c 100644 --- a/dts/framework/remote_session/remote_session.py +++ b/dts/framework/remote_session/remote_session.py @@ -3,6 +3,13 @@ # Copyright(c) 2022-2023 PANTHEON.tech s.r.o. # Copyright(c) 2022-2023 University of New Hampshire +"""Base remote session. + +This module contains the abstract base class for remote sessions and defines +the structure of the result of a command execution. +""" + + import dataclasses from abc import ABC, abstractmethod from pathlib import PurePath @@ -15,8 +22,14 @@ @dataclasses.dataclass(slots=True, frozen=True) class CommandResult: - """ - The result of remote execution of a command. + """The result of remote execution of a command. + + Attributes: + name: The name of the session that executed the command. + command: The executed command. + stdout: The standard output the command produced. + stderr: The standard error output the command produced. + return_code: The return code the command exited with. 
""" name: str @@ -26,6 +39,7 @@ class CommandResult: return_code: int def __str__(self) -> str: + """Format the command outputs.""" return ( f"stdout: '{self.stdout}'\n" f"stderr: '{self.stderr}'\n" @@ -34,13 +48,24 @@ def __str__(self) -> str: class RemoteSession(ABC): - """ - The base class for defining which methods must be implemented in order to connect - to a remote host (node) and maintain a remote session. The derived classes are - supposed to implement/use some underlying transport protocol (e.g. SSH) to - implement the methods. On top of that, it provides some basic services common to - all derived classes, such as keeping history and logging what's being executed - on the remote node. + """Non-interactive remote session. + + The abstract methods must be implemented in order to connect to a remote host (node) + and maintain a remote session. + The subclasses must use (or implement) some underlying transport protocol (e.g. SSH) + to implement the methods. On top of that, it provides some basic services common to all + subclasses, such as keeping history and logging what's being executed on the remote node. + + Attributes: + name: The name of the session. + hostname: The node's hostname. Could be an IP (possibly with port, separated by a colon) + or a domain name. + ip: The IP address of the node or a domain name, whichever was used in `hostname`. + port: The port of the node, if given in `hostname`. + username: The username used in the connection. + password: The password used in the connection. Most frequently empty, + as the use of passwords is discouraged. + history: The executed commands during this session. """ name: str @@ -59,6 +84,16 @@ def __init__( session_name: str, logger: DTSLOG, ): + """Connect to the node during initialization. + + Args: + node_config: The test run configuration of the node to connect to. + session_name: The name of the session. + logger: The logger instance this session will use. 
+ + Raises: + SSHConnectionError: If the connection to the node was not successful. + """ self._node_config = node_config self.name = session_name @@ -79,8 +114,13 @@ def __init__( @abstractmethod def _connect(self) -> None: - """ - Create connection to assigned node. + """Create a connection to the node. + + The implementation must assign the established session to self.session. + + The implementation must catch all exceptions and convert them to an SSHConnectionError. + + The implementation may optionally implement retry attempts. + """ def send_command( @@ -90,11 +130,24 @@ def send_command( verify: bool = False, env: dict | None = None, ) -> CommandResult: - """ - Send a command to the connected node using optional env vars - and return CommandResult. - If verify is True, check the return code of the executed command - and raise a RemoteCommandExecutionError if the command failed. + """Send `command` to the connected node. + + The :option:`--timeout` command line argument and the :envvar:`DTS_TIMEOUT` + environment variable configure the timeout of command execution. + + Args: + command: The command to execute. + timeout: Wait at most this long in seconds to execute `command`. + verify: If :data:`True`, will check the exit code of `command`. + env: A dictionary with environment variables to be used with `command` execution. + + Raises: + SSHSessionDeadError: If the session isn't alive when sending `command`. + SSHTimeoutError: If `command` execution timed out. + RemoteCommandExecutionError: If verify is :data:`True` and `command` execution failed. + + Returns: + The output of the command along with the return code. """ self._logger.info( f"Sending: '{command}'" + (f" with env vars: '{env}'" if env else "") @@ -115,29 +168,36 @@ def send_command( def _send_command( self, command: str, timeout: float, env: dict | None ) -> CommandResult: - """ - Use the underlying protocol to execute the command using optional env vars - and return CommandResult.
+ """Send a command to the connected node. + + The implementation must execute the command remotely with `env` environment variables + and return the result. + + The implementation must except all exceptions and raise an SSHSessionDeadError if + the session is not alive and an SSHTimeoutError if the command execution times out. """ def close(self, force: bool = False) -> None: - """ - Close the remote session and free all used resources. + """Close the remote session and free all used resources. + + Args: + force: Force the closure of the connection. This may not clean up all resources. """ self._logger.logger_exit() self._close(force) @abstractmethod def _close(self, force: bool = False) -> None: - """ - Execute protocol specific steps needed to close the session properly. + """Protocol specific steps needed to close the session properly. + + Args: + force: Force the closure of the connection. This may not clean up all resources. + This doesn't have to be implemented in the overloaded method. """ @abstractmethod def is_alive(self) -> bool: - """ - Check whether the remote session is still responding. - """ + """Check whether the remote session is still responding.""" @abstractmethod def copy_from( @@ -147,12 +207,12 @@ def copy_from( ) -> None: """Copy a file from the remote Node to the local filesystem. - Copy source_file from the remote Node associated with this remote - session to destination_file on the local filesystem. + Copy `source_file` from the remote Node associated with this remote session + to `destination_file` on the local filesystem. Args: - source_file: the file on the remote Node. - destination_file: a file or directory path on the local filesystem. + source_file: The file on the remote Node. + destination_file: A file or directory path on the local filesystem. """ @abstractmethod @@ -163,10 +223,10 @@ def copy_to( ) -> None: """Copy a file from local filesystem to the remote Node. 
- Copy source_file from local filesystem to destination_file - on the remote Node associated with this remote session. + Copy `source_file` from local filesystem to `destination_file` on the remote Node + associated with this remote session. Args: - source_file: the file on the local filesystem. - destination_file: a file or directory path on the remote Node. + source_file: The file on the local filesystem. + destination_file: A file or directory path on the remote Node. """ diff --git a/dts/framework/remote_session/ssh_session.py b/dts/framework/remote_session/ssh_session.py index cee11d14d6..7186490a9a 100644 --- a/dts/framework/remote_session/ssh_session.py +++ b/dts/framework/remote_session/ssh_session.py @@ -1,6 +1,8 @@ # SPDX-License-Identifier: BSD-3-Clause # Copyright(c) 2023 PANTHEON.tech s.r.o. +"""SSH session remote session.""" + import socket import traceback from pathlib import PurePath @@ -26,13 +28,8 @@ class SSHSession(RemoteSession): """A persistent SSH connection to a remote Node. - The connection is implemented with the Fabric Python library. - - Args: - node_config: The configuration of the Node to connect to. - session_name: The name of the session. - logger: The logger used for logging. - This should be passed from the parent OSSession. + The connection is implemented with + `the Fabric Python library `_. Attributes: session: The underlying Fabric SSH connection. @@ -80,6 +77,7 @@ def _connect(self) -> None: raise SSHConnectionError(self.hostname, errors) def is_alive(self) -> bool: + """Overrides :meth:`~.remote_session.RemoteSession.is_alive`.""" return self.session.is_connected def _send_command( @@ -89,7 +87,7 @@ def _send_command( Args: command: The command to execute. - timeout: Wait at most this many seconds for the execution to complete. + timeout: Wait at most this long in seconds to execute the command. env: Extra environment variables that will be used in command execution. 
Raises: @@ -118,6 +116,7 @@ def copy_from( source_file: str | PurePath, destination_file: str | PurePath, ) -> None: + """Overrides :meth:`~.remote_session.RemoteSession.copy_from`.""" self.session.get(str(source_file), str(destination_file)) def copy_to( @@ -125,6 +124,7 @@ def copy_to( source_file: str | PurePath, destination_file: str | PurePath, ) -> None: + """Overrides :meth:`~.remote_session.RemoteSession.copy_to`.""" self.session.put(str(source_file), str(destination_file)) def _close(self, force: bool = False) -> None: From patchwork Wed Nov 15 13:09:50 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Juraj_Linke=C5=A1?= X-Patchwork-Id: 134389 X-Patchwork-Delegate: thomas@monjalon.net
From: =?utf-8?q?Juraj_Linke=C5=A1?= To: thomas@monjalon.net, Honnappa.Nagarahalli@arm.com, jspewock@iol.unh.edu, probb@iol.unh.edu, paul.szczepanek@arm.com, yoan.picchi@foss.arm.com Cc: dev@dpdk.org, =?utf-8?q?Juraj_Linke=C5=A1?= Subject: [PATCH v7 12/21] dts: interactive remote session docstring update Date: Wed, 15 Nov 2023 14:09:50 +0100 Message-Id: <20231115130959.39420-13-juraj.linkes@pantheon.tech> In-Reply-To: <20231115130959.39420-1-juraj.linkes@pantheon.tech> References: <20231108125324.191005-23-juraj.linkes@pantheon.tech> <20231115130959.39420-1-juraj.linkes@pantheon.tech> Format according to the Google format and PEP257, with slight deviations. Signed-off-by: Juraj Linkeš --- .../interactive_remote_session.py | 36 +++---- .../remote_session/interactive_shell.py | 99 +++++++++++-------- dts/framework/remote_session/python_shell.py | 26 ++++- dts/framework/remote_session/testpmd_shell.py | 61 +++++++++--- 4 files changed, 150 insertions(+), 72 deletions(-) diff --git a/dts/framework/remote_session/interactive_remote_session.py b/dts/framework/remote_session/interactive_remote_session.py index 9085a668e8..c1bf30ac61 100644 --- a/dts/framework/remote_session/interactive_remote_session.py +++ b/dts/framework/remote_session/interactive_remote_session.py @@ -22,27 +22,23 @@ class InteractiveRemoteSession: """SSH connection dedicated to interactive applications. - This connection is created using paramiko and is a persistent connection to the - host.
This class defines methods for connecting to the node and configures this - connection to send "keep alive" packets every 30 seconds. Because paramiko attempts - to use SSH keys to establish a connection first, providing a password is optional. - This session is utilized by InteractiveShells and cannot be interacted with - directly. - - Arguments: - node_config: Configuration class for the node you are connecting to. - _logger: Desired logger for this session to use. + The connection is created using `paramiko `_ + and is a persistent connection to the host. This class defines the methods for connecting + to the node and configures the connection to send "keep alive" packets every 30 seconds. + Because paramiko attempts to use SSH keys to establish a connection first, providing + a password is optional. This session is utilized by InteractiveShells + and cannot be interacted with directly. Attributes: - hostname: Hostname that will be used to initialize a connection to the node. - ip: A subsection of hostname that removes the port for the connection if there + hostname: The hostname that will be used to initialize a connection to the node. + ip: A subsection of `hostname` that removes the port for the connection if there is one. If there is no port, this will be the same as hostname. - port: Port to use for the ssh connection. This will be extracted from the - hostname if there is a port included, otherwise it will default to 22. + port: Port to use for the ssh connection. This will be extracted from `hostname` + if there is a port included, otherwise it will default to ``22``. username: User to connect to the node with. password: Password of the user connecting to the host. This will default to an empty string if a password is not provided. - session: Underlying paramiko connection. + session: The underlying paramiko connection. Raises: SSHConnectionError: There is an error creating the SSH connection. 
@@ -58,9 +54,15 @@ class InteractiveRemoteSession: _node_config: NodeConfiguration _transport: Transport | None - def __init__(self, node_config: NodeConfiguration, _logger: DTSLOG) -> None: + def __init__(self, node_config: NodeConfiguration, logger: DTSLOG) -> None: + """Connect to the node during initialization. + + Args: + node_config: The test run configuration of the node to connect to. + logger: The logger instance this session will use. + """ self._node_config = node_config - self._logger = _logger + self._logger = logger self.hostname = node_config.hostname self.username = node_config.user self.password = node_config.password if node_config.password else "" diff --git a/dts/framework/remote_session/interactive_shell.py b/dts/framework/remote_session/interactive_shell.py index c24376b2a8..a98a822e91 100644 --- a/dts/framework/remote_session/interactive_shell.py +++ b/dts/framework/remote_session/interactive_shell.py @@ -3,18 +3,20 @@ """Common functionality for interactive shell handling. -This base class, InteractiveShell, is meant to be extended by other classes that -contain functionality specific to that shell type. These derived classes will often -modify things like the prompt to expect or the arguments to pass into the application, -but still utilize the same method for sending a command and collecting output. How -this output is handled however is often application specific. If an application needs -elevated privileges to start it is expected that the method for gaining those -privileges is provided when initializing the class. +The base class, :class:`InteractiveShell`, is meant to be extended by subclasses that contain +functionality specific to that shell type. These subclasses will often modify things like +the prompt to expect or the arguments to pass into the application, but still utilize +the same method for sending a command and collecting output. How this output is handled however +is often application specific. 
If an application needs elevated privileges to start it is expected +that the method for gaining those privileges is provided when initializing the class. + +The :option:`--timeout` command line argument and the :envvar:`DTS_TIMEOUT` +environment variable configure the timeout of getting the output from command execution. """ from abc import ABC from pathlib import PurePath -from typing import Callable +from typing import Callable, ClassVar from paramiko import Channel, SSHClient, channel # type: ignore[import] @@ -30,28 +32,6 @@ class InteractiveShell(ABC): and collecting input until reaching a certain prompt. All interactive applications will use the same SSH connection, but each will create their own channel on that session. - - Arguments: - interactive_session: The SSH session dedicated to interactive shells. - logger: Logger used for displaying information in the console. - get_privileged_command: Method for modifying a command to allow it to use - elevated privileges. If this is None, the application will not be started - with elevated privileges. - app_args: Command line arguments to be passed to the application on startup. - timeout: Timeout used for the SSH channel that is dedicated to this interactive - shell. This timeout is for collecting output, so if reading from the buffer - and no output is gathered within the timeout, an exception is thrown. - - Attributes - _default_prompt: Prompt to expect at the end of output when sending a command. - This is often overridden by derived classes. - _command_extra_chars: Extra characters to add to the end of every command - before sending them. This is often overridden by derived classes and is - most commonly an additional newline character. - path: Path to the executable to start the interactive application. - dpdk_app: Whether this application is a DPDK app. If it is, the build - directory for DPDK on the node will be prepended to the path to the - executable. 
""" _interactive_session: SSHClient @@ -61,10 +41,22 @@ class InteractiveShell(ABC): _logger: DTSLOG _timeout: float _app_args: str - _default_prompt: str = "" - _command_extra_chars: str = "" - path: PurePath - dpdk_app: bool = False + + #: Prompt to expect at the end of output when sending a command. + #: This is often overridden by subclasses. + _default_prompt: ClassVar[str] = "" + + #: Extra characters to add to the end of every command + #: before sending them. This is often overridden by subclasses and is + #: most commonly an additional newline character. + _command_extra_chars: ClassVar[str] = "" + + #: Path to the executable to start the interactive application. + path: ClassVar[PurePath] + + #: Whether this application is a DPDK app. If it is, the build directory + #: for DPDK on the node will be prepended to the path to the executable. + dpdk_app: ClassVar[bool] = False def __init__( self, @@ -74,6 +66,19 @@ def __init__( app_args: str = "", timeout: float = SETTINGS.timeout, ) -> None: + """Create an SSH channel during initialization. + + Args: + interactive_session: The SSH session dedicated to interactive shells. + logger: The logger instance this session will use. + get_privileged_command: A method for modifying a command to allow it to use + elevated privileges. If :data:`None`, the application will not be started + with elevated privileges. + app_args: The command line arguments to be passed to the application on startup. + timeout: The timeout used for the SSH channel that is dedicated to this interactive + shell. This timeout is for collecting output, so if reading from the buffer + and no output is gathered within the timeout, an exception is thrown. 
+ """ self._interactive_session = interactive_session self._ssh_channel = self._interactive_session.invoke_shell() self._stdin = self._ssh_channel.makefile_stdin("w") @@ -92,6 +97,10 @@ def _start_application( This method is often overridden by subclasses as their process for starting may look different. + + Args: + get_privileged_command: A function (but could be any callable) that produces + the version of the command with elevated privileges. """ start_command = f"{self.path} {self._app_args}" if get_privileged_command is not None: @@ -99,16 +108,24 @@ def _start_application( self.send_command(start_command) def send_command(self, command: str, prompt: str | None = None) -> str: - """Send a command and get all output before the expected ending string. + """Send `command` and get all output before the expected ending string. Lines that expect input are not included in the stdout buffer, so they cannot - be used for expect. For example, if you were prompted to log into something - with a username and password, you cannot expect "username:" because it won't - yet be in the stdout buffer. A workaround for this could be consuming an - extra newline character to force the current prompt into the stdout buffer. + be used for expect. + + Example: + If you were prompted to log into something with a username and password, + you cannot expect ``username:`` because it won't yet be in the stdout buffer. + A workaround for this could be consuming an extra newline character to force + the current `prompt` into the stdout buffer. + + Args: + command: The command to send. + prompt: After sending the command, `send_command` will be expecting this string. + If :data:`None`, will use the class's default prompt. Returns: - All output in the buffer before expected string + All output in the buffer before expected string. 
""" self._logger.info(f"Sending: '{command}'") if prompt is None: @@ -126,8 +143,10 @@ def send_command(self, command: str, prompt: str | None = None) -> str: return out def close(self) -> None: + """Properly free all resources.""" self._stdin.close() self._ssh_channel.close() def __del__(self) -> None: + """Make sure the session is properly closed before deleting the object.""" self.close() diff --git a/dts/framework/remote_session/python_shell.py b/dts/framework/remote_session/python_shell.py index cc3ad48a68..ccfd3783e8 100644 --- a/dts/framework/remote_session/python_shell.py +++ b/dts/framework/remote_session/python_shell.py @@ -1,12 +1,32 @@ # SPDX-License-Identifier: BSD-3-Clause # Copyright(c) 2023 PANTHEON.tech s.r.o. +"""Python interactive shell. + +Typical usage example in a TestSuite:: + + from framework.remote_session import PythonShell + python_shell = self.tg_node.create_interactive_shell( + PythonShell, timeout=5, privileged=True + ) + python_shell.send_command("print('Hello World')") + python_shell.close() +""" + from pathlib import PurePath +from typing import ClassVar from .interactive_shell import InteractiveShell class PythonShell(InteractiveShell): - _default_prompt: str = ">>>" - _command_extra_chars: str = "\n" - path: PurePath = PurePath("python3") + """Python interactive shell.""" + + #: Python's prompt. + _default_prompt: ClassVar[str] = ">>>" + + #: This forces the prompt to appear after sending a command. + _command_extra_chars: ClassVar[str] = "\n" + + #: The Python executable. + path: ClassVar[PurePath] = PurePath("python3") diff --git a/dts/framework/remote_session/testpmd_shell.py b/dts/framework/remote_session/testpmd_shell.py index 1455b5a199..2632515d74 100644 --- a/dts/framework/remote_session/testpmd_shell.py +++ b/dts/framework/remote_session/testpmd_shell.py @@ -1,45 +1,82 @@ # SPDX-License-Identifier: BSD-3-Clause # Copyright(c) 2023 University of New Hampshire +"""Testpmd interactive shell. 
+ +Typical usage example in a TestSuite:: + + testpmd_shell = self.sut_node.create_interactive_shell( + TestPmdShell, privileged=True + ) + devices = testpmd_shell.get_devices() + for device in devices: + print(device) + testpmd_shell.close() +""" + from pathlib import PurePath -from typing import Callable +from typing import Callable, ClassVar from .interactive_shell import InteractiveShell class TestPmdDevice(object): + """The data of a device that testpmd can recognize. + + Attributes: + pci_address: The PCI address of the device. + """ + pci_address: str def __init__(self, pci_address_line: str): + """Initialize the device from the testpmd output line string. + + Args: + pci_address_line: A line of testpmd output that contains a device. + """ self.pci_address = pci_address_line.strip().split(": ")[1].strip() def __str__(self) -> str: + """The PCI address captures what the device is.""" return self.pci_address class TestPmdShell(InteractiveShell): - path: PurePath = PurePath("app", "dpdk-testpmd") - dpdk_app: bool = True - _default_prompt: str = "testpmd>" - _command_extra_chars: str = ( - "\n" # We want to append an extra newline to every command - ) + """Testpmd interactive shell. + + Users of the testpmd shell should never use + the :meth:`~framework.remote_session.interactive_shell.InteractiveShell.send_command` method + directly, but rather call specialized methods. If there isn't one that satisfies a need, + it should be added. + """ + + #: The path to the testpmd executable. + path: ClassVar[PurePath] = PurePath("app", "dpdk-testpmd") + + #: Flag this as a DPDK app so that it's clear this is not a system app and + #: needs to be looked for in a specific path. + dpdk_app: ClassVar[bool] = True + + #: The testpmd's prompt. + _default_prompt: ClassVar[str] = "testpmd>" + + #: This forces the prompt to appear after sending a command.
+ _command_extra_chars: ClassVar[str] = "\n" def _start_application( self, get_privileged_command: Callable[[str], str] | None ) -> None: - """See "_start_application" in InteractiveShell.""" self._app_args += " -- -i" super()._start_application(get_privileged_command) def get_devices(self) -> list[TestPmdDevice]: - """Get a list of device names that are known to testpmd + """Get a list of device names that are known to testpmd. - Uses the device info listed in testpmd and then parses the output to - return only the names of the devices. + Uses the device info listed in testpmd and then parses the output. Returns: - A list of strings representing device names (e.g. 0000:14:00.1) + A list of devices. """ dev_info: str = self.send_command("show device info all") dev_list: list[TestPmdDevice] = [] From patchwork Wed Nov 15 13:09:51 2023 X-Patchwork-Submitter: Juraj Linkeš X-Patchwork-Id: 134390 X-Patchwork-Delegate: thomas@monjalon.net
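The one-line parsing that TestPmdDevice.__init__ applies to each device line in the hunk above can be sketched standalone. The sample input line is an assumption about the shape of testpmd's "show device info all" output, not captured output:

```python
def parse_pci_address(pci_address_line: str) -> str:
    """Mirror TestPmdDevice.__init__: keep the text after the first ': '."""
    return pci_address_line.strip().split(": ")[1].strip()


# Hypothetical device info line; real testpmd output may differ.
print(parse_pci_address("  Device name: 0000:14:00.1  "))  # 0000:14:00.1
```

Splitting on ": " rather than ":" matters here: the PCI address itself contains colons, so splitting on the colon-plus-space keeps the address intact.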
From: Juraj Linkeš To: thomas@monjalon.net, Honnappa.Nagarahalli@arm.com, jspewock@iol.unh.edu, probb@iol.unh.edu, paul.szczepanek@arm.com, yoan.picchi@foss.arm.com Cc: dev@dpdk.org, Juraj Linkeš Subject: [PATCH v7 13/21] dts: port and virtual device docstring update Date: Wed, 15 Nov 2023 14:09:51 +0100 Message-Id: <20231115130959.39420-14-juraj.linkes@pantheon.tech> In-Reply-To: <20231115130959.39420-1-juraj.linkes@pantheon.tech> References: <20231108125324.191005-23-juraj.linkes@pantheon.tech> <20231115130959.39420-1-juraj.linkes@pantheon.tech> Format according to the Google format and PEP257, with slight deviations. Signed-off-by: Juraj Linkeš --- dts/framework/testbed_model/__init__.py | 16 ++++-- dts/framework/testbed_model/port.py | 53 +++++++++++++++---- dts/framework/testbed_model/virtual_device.py | 17 +++++- 3 files changed, 71 insertions(+), 15 deletions(-) diff --git a/dts/framework/testbed_model/__init__.py b/dts/framework/testbed_model/__init__.py index 8ced05653b..a02be1f2d9 100644 --- a/dts/framework/testbed_model/__init__.py +++ b/dts/framework/testbed_model/__init__.py @@ -2,9 +2,19 @@ # Copyright(c) 2022-2023 University of New Hampshire # Copyright(c) 2023 PANTHEON.tech s.r.o. -""" -This package contains the classes used to model the physical traffic generator, -system under test and any other components that need to be interacted with. +"""Testbed modelling.
+ +This package defines the testbed elements DTS works with: + + * A system under test node: :class:`SutNode`, + * A traffic generator node: :class:`TGNode`, + * The ports of network interface cards (NICs) present on nodes: :class:`Port`, + * The logical cores of CPUs present on nodes: :class:`LogicalCore`, + * The virtual devices that can be created on nodes: :class:`VirtualDevice`, + * The operating systems running on nodes: :class:`LinuxSession` and :class:`PosixSession`. + +DTS needs to be able to connect to nodes and understand some of the hardware present on these nodes +to properly build and test DPDK. """ # pylama:ignore=W0611 diff --git a/dts/framework/testbed_model/port.py b/dts/framework/testbed_model/port.py index 680c29bfe3..817405bea4 100644 --- a/dts/framework/testbed_model/port.py +++ b/dts/framework/testbed_model/port.py @@ -2,6 +2,13 @@ # Copyright(c) 2022 University of New Hampshire # Copyright(c) 2023 PANTHEON.tech s.r.o. +"""NIC port model. + +Basic port information, such as location (ports are identified by their PCI address on a node), +drivers and address. +""" + + from dataclasses import dataclass from framework.config import PortConfig @@ -9,24 +16,35 @@ @dataclass(slots=True, frozen=True) class PortIdentifier: + """The port identifier. + + Attributes: + node: The node where the port resides. + pci: The PCI address of the port on `node`. + """ + node: str pci: str @dataclass(slots=True) class Port: - """ - identifier: The PCI address of the port on a node. - - os_driver: The driver used by this port when the OS is controlling it. - Example: i40e - os_driver_for_dpdk: The driver the device must be bound to for DPDK to use it, - Example: vfio-pci. + """Physical port on a node. - Note: os_driver and os_driver_for_dpdk may be the same thing. - Example: mlx5_core + The ports are identified by the node they're on and their PCI addresses. The port on the other + side of the connection is also captured here.
+ Each port is serviced by a driver, which may be different for the operating system (`os_driver`) + and for DPDK (`os_driver_for_dpdk`). For some devices, they are the same, e.g.: ``mlx5_core``. - peer: The identifier of a port this port is connected with. + Attributes: + identifier: The node and PCI address identifying the port. + os_driver: The operating system driver name when the operating system controls the port, + e.g.: ``i40e``. + os_driver_for_dpdk: The operating system driver name for use with DPDK, e.g.: ``vfio-pci``. + peer: The identifier of a port this port is connected with. + The `peer` is on a different node. + mac_address: The MAC address of the port. + logical_name: The logical name of the port. Must be discovered. """ identifier: PortIdentifier @@ -37,6 +55,12 @@ class Port: logical_name: str = "" def __init__(self, node_name: str, config: PortConfig): + """Initialize the port from `node_name` and `config`. + + Args: + node_name: The name of the port's node. + config: The test run configuration of the port. + """ self.identifier = PortIdentifier( node=node_name, pci=config.pci, @@ -47,14 +71,23 @@ def __init__(self, node_name: str, config: PortConfig): @property def node(self) -> str: + """The node where the port resides.""" return self.identifier.node @property def pci(self) -> str: + """The PCI address of the port.""" return self.identifier.pci @dataclass(slots=True, frozen=True) class PortLink: + """The physical, cabled connection between the ports. + + Attributes: + sut_port: The port on the SUT node connected to `tg_port`. + tg_port: The port on the TG node connected to `sut_port`.
+ """ + sut_port: Port tg_port: Port diff --git a/dts/framework/testbed_model/virtual_device.py b/dts/framework/testbed_model/virtual_device.py index eb664d9f17..e9b5e9c3be 100644 --- a/dts/framework/testbed_model/virtual_device.py +++ b/dts/framework/testbed_model/virtual_device.py @@ -1,16 +1,29 @@ # SPDX-License-Identifier: BSD-3-Clause # Copyright(c) 2023 PANTHEON.tech s.r.o. +"""Virtual devices model. + +Alongside support for physical hardware, DPDK can create various virtual devices. +""" + class VirtualDevice(object): - """ - Base class for virtual devices used by DPDK. + """Base class for virtual devices used by DPDK. + + Attributes: + name: The name of the virtual device. """ name: str def __init__(self, name: str): + """Initialize the virtual device. + + Args: + name: The name of the virtual device. + """ self.name = name def __str__(self) -> str: + """This corresponds to the name used for DPDK devices.""" return self.name From patchwork Wed Nov 15 13:09:52 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Juraj_Linke=C5=A1?= X-Patchwork-Id: 134391 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 12B9543339; Wed, 15 Nov 2023 14:13:39 +0100 (CET) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 86DF7410F1; Wed, 15 Nov 2023 14:12:15 +0100 (CET) Received: from mail-ej1-f45.google.com (mail-ej1-f45.google.com [209.85.218.45]) by mails.dpdk.org (Postfix) with ESMTP id CDE6C40EDB for ; Wed, 15 Nov 2023 14:12:09 +0100 (CET) Received: by mail-ej1-f45.google.com with SMTP id a640c23a62f3a-9e623356d5dso756867866b.3 for ; Wed, 15 Nov 2023 05:12:09 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=pantheon.tech; s=google; t=1700053929; 
From: Juraj Linkeš To: thomas@monjalon.net, Honnappa.Nagarahalli@arm.com, jspewock@iol.unh.edu, probb@iol.unh.edu, paul.szczepanek@arm.com, yoan.picchi@foss.arm.com Cc: dev@dpdk.org, Juraj Linkeš Subject: [PATCH v7 14/21] dts: cpu docstring update Date: Wed, 15 Nov 2023 14:09:52 +0100 Message-Id: <20231115130959.39420-15-juraj.linkes@pantheon.tech> In-Reply-To: <20231115130959.39420-1-juraj.linkes@pantheon.tech> References: <20231108125324.191005-23-juraj.linkes@pantheon.tech> <20231115130959.39420-1-juraj.linkes@pantheon.tech> Format according to the Google format and PEP257, with slight deviations. Signed-off-by: Juraj Linkeš --- dts/framework/testbed_model/cpu.py | 196 +++++++++++++++++++++-------- 1 file changed, 144 insertions(+), 52 deletions(-) diff --git a/dts/framework/testbed_model/cpu.py b/dts/framework/testbed_model/cpu.py index 8fe785dfe4..4edeb4a7c2 100644 --- a/dts/framework/testbed_model/cpu.py +++ b/dts/framework/testbed_model/cpu.py @@ -1,6 +1,22 @@ # SPDX-License-Identifier: BSD-3-Clause # Copyright(c) 2023 PANTHEON.tech s.r.o. +"""CPU core representation and filtering. + +This module provides a unified representation of logical CPU cores along +with filtering capabilities. + +When simultaneous multithreading (SMT, also called hyperthreading) is enabled on a server, +the physical CPU cores are split into logical CPU cores with different IDs. + +:class:`LogicalCoreCountFilter` filters by the number of logical cores.
It's possible to specify +the socket from which to filter the number of logical cores. It's also possible to not use all +logical CPU cores from each physical core (e.g. only the first logical core of each physical core). + +:class:`LogicalCoreListFilter` filters by logical core IDs. This mostly checks that +the logical cores are actually present on the server. +""" + import dataclasses from abc import ABC, abstractmethod from collections.abc import Iterable, ValuesView @@ -11,9 +27,17 @@ @dataclass(slots=True, frozen=True) class LogicalCore(object): - """ - Representation of a CPU core. A physical core is represented in OS - by multiple logical cores (lcores) if CPU multithreading is enabled. + """Representation of a logical CPU core. + + A physical core is represented in OS by multiple logical cores (lcores) + if CPU multithreading is enabled. When multithreading is disabled, their IDs are the same. + + Attributes: + lcore: The logical core ID of a CPU core. It's the same as `core` with + disabled multithreading. + core: The physical core ID of a CPU core. + socket: The physical socket ID where the CPU resides. + node: The NUMA node ID where the CPU resides. """ lcore: int @@ -22,27 +46,36 @@ class LogicalCore(object): node: int def __int__(self) -> int: + """The CPU is best represented by the logical core, as that's what we configure in EAL.""" return self.lcore class LogicalCoreList(object): - """ - Convert these options into a list of logical core ids. - lcore_list=[LogicalCore1, LogicalCore2] - a list of LogicalCores - lcore_list=[0,1,2,3] - a list of int indices - lcore_list=['0','1','2-3'] - a list of str indices; ranges are supported - lcore_list='0,1,2-3' - a comma delimited str of indices; ranges are supported - - The class creates a unified format used across the framework and allows - the user to use either a str representation (using str(instance) or directly - in f-strings) or a list representation (by accessing instance.lcore_list). 
- Empty lcore_list is allowed. + r"""A unified way to store :class:`LogicalCore`\s. + + Create a unified format used across the framework and allow the user to use + either a :class:`str` representation (using ``str(instance)`` or directly in f-strings) + or a :class:`list` representation (by accessing the `lcore_list` property, + which stores logical core IDs). """ _lcore_list: list[int] _lcore_str: str def __init__(self, lcore_list: list[int] | list[str] | list[LogicalCore] | str): + """Process `lcore_list`, then sort. + + There are four supported logical core list formats:: + + lcore_list=[LogicalCore1, LogicalCore2] # a list of LogicalCores + lcore_list=[0,1,2,3] # a list of int indices + lcore_list=['0','1','2-3'] # a list of str indices; ranges are supported + lcore_list='0,1,2-3' # a comma delimited str of indices; ranges are supported + + Args: + lcore_list: Various ways to represent multiple logical cores. + Empty `lcore_list` is allowed. + """ self._lcore_list = [] if isinstance(lcore_list, str): lcore_list = lcore_list.split(",") @@ -60,6 +93,7 @@ def __init__(self, lcore_list: list[int] | list[str] | list[LogicalCore] | str): @property def lcore_list(self) -> list[int]: + """The logical core IDs.""" return self._lcore_list def _get_consecutive_lcores_range(self, lcore_ids_list: list[int]) -> list[str]: @@ -89,28 +123,30 @@ def _get_consecutive_lcores_range(self, lcore_ids_list: list[int]) -> list[str]: return formatted_core_list def __str__(self) -> str: + """The consecutive ranges of logical core IDs.""" return self._lcore_str @dataclasses.dataclass(slots=True, frozen=True) class LogicalCoreCount(object): - """ - Define the number of logical cores to use. - If sockets is not None, socket_count is ignored. - """ + """Define the number of logical cores per physical cores per sockets.""" + #: Use this many logical cores per each physical core. lcores_per_core: int = 1 + #: Use this many physical cores per each socket. 
cores_per_socket: int = 2 + #: Use this many sockets. socket_count: int = 1 + #: Use exactly these sockets. This takes precedence over `socket_count`, + #: so when `sockets` is not :data:`None`, `socket_count` is ignored. sockets: list[int] | None = None class LogicalCoreFilter(ABC): - """ - Filter according to the input filter specifier. Each filter needs to be - implemented in a derived class. - This class only implements operations common to all filters, such as sorting - the list to be filtered beforehand. + """Common filtering class. + + Each filter needs to be implemented in a subclass. This base class sorts the list of cores + and defines the filtering method, which must be implemented by subclasses. """ _filter_specifier: LogicalCoreCount | LogicalCoreList @@ -122,6 +158,17 @@ def __init__( self, lcore_list: list[LogicalCore], filter_specifier: LogicalCoreCount | LogicalCoreList, ascending: bool = True, ): + """Filter according to the input filter specifier. + + The input `lcore_list` is copied and sorted by physical core before filtering. + The list is copied so that the original is left intact. + + Args: + lcore_list: The logical CPU cores to filter. + filter_specifier: Filter cores from `lcore_list` according to this filter. + ascending: Sort cores in ascending order (lowest to highest IDs). If :data:`False`, + sort in descending order. + """ self._filter_specifier = filter_specifier # sorting by core is needed in case hyperthreading is enabled @@ -132,31 +179,45 @@ def __init__( @abstractmethod def filter(self) -> list[LogicalCore]: - """ - Use self._filter_specifier to filter self._lcores_to_filter - and return the list of filtered LogicalCores. - self._lcores_to_filter is a sorted copy of the original list, - so it may be modified. + r"""Filter the cores. + + Use `self._filter_specifier` to filter `self._lcores_to_filter` and return + the filtered :class:`LogicalCore`\s. + `self._lcores_to_filter` is a sorted copy of the original list, so it may be modified.
+ + Returns: + The filtered cores. """ class LogicalCoreCountFilter(LogicalCoreFilter): - """ + """Filter cores by specified counts. + Filter the input list of LogicalCores according to specified rules: - Use cores from the specified number of sockets or from the specified socket ids. - If sockets is specified, it takes precedence over socket_count. - From each of those sockets, use only cores_per_socket of cores. - And for each core, use lcores_per_core of logical cores. Hypertheading - must be enabled for this to take effect. - If ascending is True, use cores with the lowest numerical id first - and continue in ascending order. If False, start with the highest - id and continue in descending order. This ordering affects which - sockets to consider first as well. + + * The input `filter_specifier` is :class:`LogicalCoreCount`, + * Use cores from the specified number of sockets or from the specified socket ids, + * If `sockets` is specified, it takes precedence over `socket_count`, + * From each of those sockets, use only `cores_per_socket` of cores, + * And for each core, use `lcores_per_core` of logical cores. Hyperthreading + must be enabled for this to take effect. """ _filter_specifier: LogicalCoreCount def filter(self) -> list[LogicalCore]: + """Filter the cores according to :class:`LogicalCoreCount`. + + Start by filtering the allowed sockets. The cores matching the allowed socket are returned. + The cores of each socket are stored in separate lists. + + Then filter the allowed physical cores from those lists of cores per socket. When filtering + physical cores, store the desired number of logical cores per physical core which then + together constitute the final filtered list. + + Returns: + The filtered cores.
+ """ sockets_to_filter = self._filter_sockets(self._lcores_to_filter) filtered_lcores = [] for socket_to_filter in sockets_to_filter: @@ -166,24 +227,37 @@ def filter(self) -> list[LogicalCore]: def _filter_sockets( self, lcores_to_filter: Iterable[LogicalCore] ) -> ValuesView[list[LogicalCore]]: - """ - Remove all lcores that don't match the specified socket(s). - If self._filter_specifier.sockets is not None, keep lcores from those sockets, - otherwise keep lcores from the first - self._filter_specifier.socket_count sockets. + """Filter a list of cores per each allowed socket. + + The sockets may be specified in two ways, either a number or a specific list of sockets. + In case of a specific list, we just need to return the cores from those sockets. + If filtering a number of cores, we need to go through all cores and note which sockets + appear and only filter from the first n that appear. + + Args: + lcores_to_filter: The cores to filter. These must be sorted by the physical core. + + Returns: + A list of lists of logical CPU cores. Each list contains cores from one socket. 
""" allowed_sockets: set[int] = set() socket_count = self._filter_specifier.socket_count if self._filter_specifier.sockets: + # when sockets in filter is specified, the sockets are already set socket_count = len(self._filter_specifier.sockets) allowed_sockets = set(self._filter_specifier.sockets) + # filter socket_count sockets from all sockets by checking the socket of each CPU filtered_lcores: dict[int, list[LogicalCore]] = {} for lcore in lcores_to_filter: if not self._filter_specifier.sockets: + # this is when sockets is not set, so we do the actual filtering + # when it is set, allowed_sockets is already defined and can't be changed if len(allowed_sockets) < socket_count: + # allowed_sockets is a set, so adding an existing socket won't re-add it allowed_sockets.add(lcore.socket) if lcore.socket in allowed_sockets: + # separate sockets per socket; this makes it easier in further processing if lcore.socket in filtered_lcores: filtered_lcores[lcore.socket].append(lcore) else: @@ -200,12 +274,13 @@ def _filter_sockets( def _filter_cores_from_socket( self, lcores_to_filter: Iterable[LogicalCore] ) -> list[LogicalCore]: - """ - Keep only the first self._filter_specifier.cores_per_socket cores. - In multithreaded environments, keep only - the first self._filter_specifier.lcores_per_core lcores of those cores. - """ + """Filter a list of cores from the given socket. + + Go through the cores and note how many logical cores per physical core have been filtered. + Returns: + The filtered logical CPU cores. + """ # no need to use ordered dict, from Python3.7 the dict # insertion order is preserved (LIFO). lcore_count_per_core_map: dict[int, int] = {} @@ -248,15 +323,21 @@ def _filter_cores_from_socket( class LogicalCoreListFilter(LogicalCoreFilter): - """ - Filter the input list of Logical Cores according to the input list of - lcore indices. - An empty LogicalCoreList won't filter anything. + """Filter the logical CPU cores by logical CPU core IDs. 
+ + This is a simple filter that looks at logical CPU IDs and keeps only those that match. + + The input filter is :class:`LogicalCoreList`. An empty LogicalCoreList won't filter anything. """ _filter_specifier: LogicalCoreList def filter(self) -> list[LogicalCore]: + """Filter based on logical CPU core ID. + + Returns: + The filtered logical CPU cores. + """ if not len(self._filter_specifier.lcore_list): return self._lcores_to_filter @@ -279,6 +360,17 @@ def lcore_filter( filter_specifier: LogicalCoreCount | LogicalCoreList, ascending: bool, ) -> LogicalCoreFilter: + """Factory for using the right filter with `filter_specifier`. + + Args: + core_list: The logical CPU cores to filter. + filter_specifier: The filter to use. + ascending: Sort cores in ascending order (lowest to highest IDs). If :data:`False`, + sort in descending order. + + Returns: + The filter matching `filter_specifier`. + """ if isinstance(filter_specifier, LogicalCoreList): return LogicalCoreListFilter(core_list, filter_specifier, ascending) elif isinstance(filter_specifier, LogicalCoreCount): From patchwork Wed Nov 15 13:09:53 2023 X-Patchwork-Submitter: Juraj Linkeš X-Patchwork-Id: 134392 X-Patchwork-Delegate: thomas@monjalon.net
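The '0,1,2-3' index format that LogicalCoreList accepts (in the cpu.py patch above) can be illustrated with a minimal range-expanding parser. This is a sketch of the idea, not the framework's implementation:

```python
def parse_lcore_list(lcore_list: str) -> list[int]:
    """Expand a comma-delimited index string with ranges into sorted unique IDs."""
    ids: set[int] = set()
    for part in lcore_list.split(","):
        if "-" in part:
            # A range such as "2-3" is inclusive on both ends.
            low, high = part.split("-")
            ids.update(range(int(low), int(high) + 1))
        elif part:
            ids.add(int(part))
    # An empty input string yields an empty list, matching the docstring's
    # statement that an empty lcore_list is allowed.
    return sorted(ids)


print(parse_lcore_list("0,1,2-3,8-10"))  # [0, 1, 2, 3, 8, 9, 10]
```

Using a set deduplicates overlapping ranges, so "0-2,2-3" and "0,1,2-3" normalize to the same ID list.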
From: Juraj Linkeš
To: thomas@monjalon.net, Honnappa.Nagarahalli@arm.com, jspewock@iol.unh.edu, probb@iol.unh.edu, paul.szczepanek@arm.com, yoan.picchi@foss.arm.com
Cc: dev@dpdk.org, Juraj Linkeš
Subject: [PATCH v7 15/21] dts: os session docstring update
Date: Wed, 15 Nov 2023 14:09:53 +0100
Message-Id: <20231115130959.39420-16-juraj.linkes@pantheon.tech>

Format according to the Google format and PEP257, with slight deviations.

Signed-off-by: Juraj Linkeš
---
 dts/framework/testbed_model/os_session.py | 275 ++++++++++++++++------
 1 file changed, 208 insertions(+), 67 deletions(-)

diff --git a/dts/framework/testbed_model/os_session.py b/dts/framework/testbed_model/os_session.py
index 76e595a518..72b9193a61 100644
--- a/dts/framework/testbed_model/os_session.py
+++ b/dts/framework/testbed_model/os_session.py
@@ -2,6 +2,29 @@
 # Copyright(c) 2023 PANTHEON.tech s.r.o.
 # Copyright(c) 2023 University of New Hampshire

+"""OS-aware remote session.
+
+DPDK supports multiple operating systems, meaning it can run on any of them. This module defines
+the common API that OS-unaware layers use and translates the API into
+OS-aware calls/utility usage.
+
+Note:
+    Running commands with administrative privileges requires OS awareness.
This is the only layer
+    that's aware of OS differences, so this is where non-privileged commands get converted
+    to privileged commands.
+
+Example:
+    A user wishes to remove a directory on
+    a remote :class:`~framework.testbed_model.sut_node.SutNode`.
+    The :class:`~framework.testbed_model.sut_node.SutNode` object isn't aware of which OS the node
+    is running - it delegates the OS translation logic
+    to :attr:`~framework.testbed_model.node.Node.main_session`. The SUT node calls
+    :meth:`~OSSession.remove_remote_dir` with a generic, OS-unaware path and
+    the :attr:`~framework.testbed_model.node.Node.main_session` translates that
+    to ``rm -rf`` if the node's OS is Linux and other commands for other OSs.
+    It also translates the path to match the underlying OS.
+"""
+
 from abc import ABC, abstractmethod
 from collections.abc import Iterable
 from ipaddress import IPv4Interface, IPv6Interface
@@ -28,10 +51,16 @@

 class OSSession(ABC):
-    """
-    The OS classes create a DTS node remote session and implement OS specific
+    """OS-unaware to OS-aware translation API definition.
+
+    The OSSession classes create a remote session to a DTS node and implement OS-specific
     behavior. There are a few control methods implemented by the base class, the rest need
-    to be implemented by derived classes.
+    to be implemented by subclasses.
+
+    Attributes:
+        name: The name of the session.
+        remote_session: The remote session maintaining the connection to the node.
+        interactive_session: The interactive remote session maintaining the connection to the node.
     """

     _config: NodeConfiguration

@@ -46,6 +75,15 @@ def __init__(
         name: str,
         logger: DTSLOG,
     ):
+        """Initialize the OS-aware session.
+
+        Connect to the node right away and also create an interactive remote session.
+
+        Args:
+            node_config: The test run configuration of the node to connect to.
+            name: The name of the session.
+            logger: The logger instance this session will use.
+ """ self._config = node_config self.name = name self._logger = logger @@ -53,15 +91,15 @@ def __init__( self.interactive_session = create_interactive_session(node_config, logger) def close(self, force: bool = False) -> None: - """ - Close the remote session. + """Close the underlying remote session. + + Args: + force: Force the closure of the connection. """ self.remote_session.close(force) def is_alive(self) -> bool: - """ - Check whether the remote session is still responding. - """ + """Check whether the underlying remote session is still responding.""" return self.remote_session.is_alive() def send_command( @@ -72,10 +110,23 @@ def send_command( verify: bool = False, env: dict | None = None, ) -> CommandResult: - """ - An all-purpose API in case the command to be executed is already - OS-agnostic, such as when the path to the executed command has been - constructed beforehand. + """An all-purpose API for OS-agnostic commands. + + This can be used for an execution of a portable command that's executed the same way + on all operating systems, such as Python. + + The :option:`--timeout` command line argument and the :envvar:`DTS_TIMEOUT` + environment variable configure the timeout of command execution. + + Args: + command: The command to execute. + timeout: Wait at most this long in seconds to execute the command. + privileged: Whether to run the command with administrative privileges. + verify: If :data:`True`, will check the exit code of the command. + env: A dictionary with environment variables to be used with the command execution. + + Raises: + RemoteCommandExecutionError: If verify is :data:`True` and the command failed. """ if privileged: command = self._get_privileged_command(command) @@ -89,8 +140,20 @@ def create_interactive_shell( privileged: bool, app_args: str, ) -> InteractiveShellType: - """ - See "create_interactive_shell" in SutNode + """Factory for interactive session handlers. + + Instantiate `shell_cls` according to the remote OS specifics. 
+
+        Args:
+            shell_cls: The class of the shell.
+            timeout: Timeout for reading output from the SSH channel. If you are
+                reading from the buffer and don't receive any data within the timeout,
+                it will throw an error.
+            privileged: Whether to run the shell with administrative privileges.
+            app_args: The arguments to be passed to the application.
+
+        Returns:
+            An instance of the desired interactive application shell.
         """
         return shell_cls(
             self.interactive_session.session,
@@ -114,27 +177,42 @@ def _get_privileged_command(command: str) -> str:

     @abstractmethod
     def guess_dpdk_remote_dir(self, remote_dir: str | PurePath) -> PurePath:
-        """
-        Try to find DPDK remote dir in remote_dir.
+        """Try to find the DPDK directory in `remote_dir`.
+
+        The directory is the one which is created after the extraction of the tarball. The files
+        are usually extracted into a directory starting with ``dpdk-``.
+
+        Returns:
+            The absolute path of the DPDK remote directory, or an empty path if not found.
         """

     @abstractmethod
     def get_remote_tmp_dir(self) -> PurePath:
-        """
-        Get the path of the temporary directory of the remote OS.
+        """Get the path of the temporary directory of the remote OS.
+
+        Returns:
+            The absolute path of the temporary directory.
         """

     @abstractmethod
     def get_dpdk_build_env_vars(self, arch: Architecture) -> dict:
-        """
-        Create extra environment variables needed for the target architecture. Get
-        information from the node if needed.
+        """Create extra environment variables needed for the target architecture.
+
+        Different architectures may require different configuration, such as setting 32-bit CFLAGS.
+
+        Returns:
+            A dictionary of environment variable names and their values.
         """

     @abstractmethod
     def join_remote_path(self, *args: str | PurePath) -> PurePath:
-        """
-        Join path parts using the path separator that fits the remote OS.
+        """Join path parts using the path separator that fits the remote OS.
+
+        Args:
+            args: Any number of paths to join.
+
+        Returns:
+            The resulting joined path.
""" @abstractmethod @@ -143,13 +221,13 @@ def copy_from( source_file: str | PurePath, destination_file: str | PurePath, ) -> None: - """Copy a file from the remote Node to the local filesystem. + """Copy a file from the remote node to the local filesystem. - Copy source_file from the remote Node associated with this remote - session to destination_file on the local filesystem. + Copy `source_file` from the remote node associated with this remote + session to `destination_file` on the local filesystem. Args: - source_file: the file on the remote Node. + source_file: the file on the remote node. destination_file: a file or directory path on the local filesystem. """ @@ -159,14 +237,14 @@ def copy_to( source_file: str | PurePath, destination_file: str | PurePath, ) -> None: - """Copy a file from local filesystem to the remote Node. + """Copy a file from local filesystem to the remote node. - Copy source_file from local filesystem to destination_file - on the remote Node associated with this remote session. + Copy `source_file` from local filesystem to `destination_file` + on the remote node associated with this remote session. Args: source_file: the file on the local filesystem. - destination_file: a file or directory path on the remote Node. + destination_file: a file or directory path on the remote node. """ @abstractmethod @@ -176,8 +254,12 @@ def remove_remote_dir( recursive: bool = True, force: bool = True, ) -> None: - """ - Remove remote directory, by default remove recursively and forcefully. + """Remove remote directory, by default remove recursively and forcefully. + + Args: + remote_dir_path: The path of the directory to remove. + recursive: If :data:`True`, also remove all contents inside the directory. + force: If :data:`True`, ignore all warnings and try to remove at all costs. 
""" @abstractmethod @@ -186,9 +268,12 @@ def extract_remote_tarball( remote_tarball_path: str | PurePath, expected_dir: str | PurePath | None = None, ) -> None: - """ - Extract remote tarball in place. If expected_dir is a non-empty string, check - whether the dir exists after extracting the archive. + """Extract remote tarball in its remote directory. + + Args: + remote_tarball_path: The path of the tarball on the remote node. + expected_dir: If non-empty, check whether `expected_dir` exists after extracting + the archive. """ @abstractmethod @@ -201,69 +286,119 @@ def build_dpdk( rebuild: bool = False, timeout: float = SETTINGS.compile_timeout, ) -> None: - """ - Build DPDK in the input dir with specified environment variables and meson - arguments. + """Build DPDK on the remote node. + + An extracted DPDK tarball must be present on the node. The build consists of two steps:: + + meson setup remote_dpdk_dir remote_dpdk_build_dir + ninja -C remote_dpdk_build_dir + + The :option:`--compile-timeout` command line argument and the :envvar:`DTS_COMPILE_TIMEOUT` + environment variable configure the timeout of DPDK build. + + Args: + env_vars: Use these environment variables then building DPDK. + meson_args: Use these meson arguments when building DPDK. + remote_dpdk_dir: The directory on the remote node where DPDK will be built. + remote_dpdk_build_dir: The target build directory on the remote node. + rebuild: If :data:`True`, do a subsequent build with ``meson configure`` instead + of ``meson setup``. + timeout: Wait at most this long in seconds for the build to execute. """ @abstractmethod def get_dpdk_version(self, version_path: str | PurePath) -> str: - """ - Inspect DPDK version on the remote node from version_path. + """Inspect the DPDK version on the remote node. + + Args: + version_path: The path to the VERSION file containing the DPDK version. + + Returns: + The DPDK version. 
""" @abstractmethod def get_remote_cpus(self, use_first_core: bool) -> list[LogicalCore]: - """ - Compose a list of LogicalCores present on the remote node. - If use_first_core is False, the first physical core won't be used. + r"""Get the list of :class:`~framework.testbed_model.cpu.LogicalCore`\s on the remote node. + + Args: + use_first_core: If :data:`False`, the first physical core won't be used. + + Returns: + The logical cores present on the node. """ @abstractmethod def kill_cleanup_dpdk_apps(self, dpdk_prefix_list: Iterable[str]) -> None: - """ - Kill and cleanup all DPDK apps identified by dpdk_prefix_list. If - dpdk_prefix_list is empty, attempt to find running DPDK apps to kill and clean. + """Kill and cleanup all DPDK apps. + + Args: + dpdk_prefix_list: Kill all apps identified by `dpdk_prefix_list`. + If `dpdk_prefix_list` is empty, attempt to find running DPDK apps to kill and clean. """ @abstractmethod def get_dpdk_file_prefix(self, dpdk_prefix: str) -> str: - """ - Get the DPDK file prefix that will be used when running DPDK apps. + """Make OS-specific modification to the DPDK file prefix. + + Args: + dpdk_prefix: The OS-unaware file prefix. + + Returns: + The OS-specific file prefix. """ @abstractmethod - def setup_hugepages(self, hugepage_amount: int, force_first_numa: bool) -> None: - """ - Get the node's Hugepage Size, configure the specified amount of hugepages + def setup_hugepages(self, hugepage_count: int, force_first_numa: bool) -> None: + """Configure hugepages on the node. + + Get the node's Hugepage Size, configure the specified count of hugepages if needed and mount the hugepages if needed. - If force_first_numa is True, configure hugepages just on the first socket. + + Args: + hugepage_count: Configure this many hugepages. + force_first_numa: If :data:`True`, configure hugepages just on the first socket. 
""" @abstractmethod def get_compiler_version(self, compiler_name: str) -> str: - """ - Get installed version of compiler used for DPDK + """Get installed version of compiler used for DPDK. + + Args: + compiler_name: The name of the compiler executable. + + Returns: + The compiler's version. """ @abstractmethod def get_node_info(self) -> NodeInfo: - """ - Collect information about the node + """Collect additional information about the node. + + Returns: + Node information. """ @abstractmethod def update_ports(self, ports: list[Port]) -> None: - """ - Get additional information about ports: - Logical name (e.g. enp7s0) if applicable - Mac address + """Get additional information about ports from the operating system and update them. + + The additional information is: + + * Logical name (e.g. ``enp7s0``) if applicable, + * Mac address. + + Args: + ports: The ports to update. """ @abstractmethod def configure_port_state(self, port: Port, enable: bool) -> None: - """ - Enable/disable port. + """Enable/disable `port` in the operating system. + + Args: + port: The port to configure. + enable: If :data:`True`, enable the port, otherwise shut it down. """ @abstractmethod @@ -273,12 +408,18 @@ def configure_port_ip_address( port: Port, delete: bool, ) -> None: - """ - Configure (add or delete) an IP address of the input port. + """Configure an IP address on `port` in the operating system. + + Args: + address: The address to configure. + port: The port to configure. + delete: If :data:`True`, remove the IP address, otherwise configure it. """ @abstractmethod def configure_ipv4_forwarding(self, enable: bool) -> None: - """ - Enable IPv4 forwarding in the underlying OS. + """Enable IPv4 forwarding in the operating system. + + Args: + enable: If :data:`True`, enable the forwarding, otherwise disable it. 
""" From patchwork Wed Nov 15 13:09:54 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Juraj_Linke=C5=A1?= X-Patchwork-Id: 134393 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 8362243339; Wed, 15 Nov 2023 14:13:55 +0100 (CET) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 4FD764113D; Wed, 15 Nov 2023 14:12:18 +0100 (CET) Received: from mail-ed1-f52.google.com (mail-ed1-f52.google.com [209.85.208.52]) by mails.dpdk.org (Postfix) with ESMTP id 0AB7C40DF5 for ; Wed, 15 Nov 2023 14:12:13 +0100 (CET) Received: by mail-ed1-f52.google.com with SMTP id 4fb4d7f45d1cf-54394328f65so10336260a12.3 for ; Wed, 15 Nov 2023 05:12:13 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=pantheon.tech; s=google; t=1700053932; x=1700658732; darn=dpdk.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=HtpE/O7NYRazzLGD2fslpp4IFCJwcEtAC0VMwjolv2s=; b=UP3w6Syp20B45+BrrgVtxoVroHvvtaDj6WcjsxO293ymO9pBEZD+ZelNGIO0M9AR2h sdJ5LhRQ+GnQS3JUE67/wEUTLSx7P8bOogRKj0Z1SWl8vBM6QCdd2c6f5aFkzRdJVIVC I1vXUUDu/qb5lqzQK08DlrGGd+ZNpN/gqRfzJk+RT+pC6ms22jSy66iwGV2tXzbiIp8O q3tF5R5nOBoutu9mNvc1h/y4KZnr1hDKZZYP6FvB3FD9sh3y2Jq79Wv+yDd9fwjUucy4 SdU7ypi6MLWT7Gnd+jX/aTOj9wZ5rvakzkfULlzvS2zfGZNDSQyzkbkfppkkWOF5JeHm TG9Q== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1700053932; x=1700658732; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=HtpE/O7NYRazzLGD2fslpp4IFCJwcEtAC0VMwjolv2s=; 
From: Juraj Linkeš
To: thomas@monjalon.net, Honnappa.Nagarahalli@arm.com, jspewock@iol.unh.edu, probb@iol.unh.edu, paul.szczepanek@arm.com, yoan.picchi@foss.arm.com
Cc: dev@dpdk.org, Juraj Linkeš
Subject: [PATCH v7 16/21] dts: posix and linux sessions docstring update
Date: Wed, 15 Nov 2023 14:09:54 +0100
Message-Id: <20231115130959.39420-17-juraj.linkes@pantheon.tech>

Format according to the Google format and PEP257, with slight deviations.
Signed-off-by: Juraj Linkeš --- dts/framework/testbed_model/linux_session.py | 63 ++++++++++----- dts/framework/testbed_model/posix_session.py | 81 +++++++++++++++++--- 2 files changed, 113 insertions(+), 31 deletions(-) diff --git a/dts/framework/testbed_model/linux_session.py b/dts/framework/testbed_model/linux_session.py index f472bb8f0f..279954ff63 100644 --- a/dts/framework/testbed_model/linux_session.py +++ b/dts/framework/testbed_model/linux_session.py @@ -2,6 +2,13 @@ # Copyright(c) 2023 PANTHEON.tech s.r.o. # Copyright(c) 2023 University of New Hampshire +"""Linux OS translator. + +Translate OS-unaware calls into Linux calls/utilities. Most of Linux distributions are mostly +compliant with POSIX standards, so this module only implements the parts that aren't. +This intermediate module implements the common parts of mostly POSIX compliant distributions. +""" + import json from ipaddress import IPv4Interface, IPv6Interface from typing import TypedDict, Union @@ -17,43 +24,51 @@ class LshwConfigurationOutput(TypedDict): + """The relevant parts of ``lshw``'s ``configuration`` section.""" + + #: link: str class LshwOutput(TypedDict): - """ - A model of the relevant information from json lshw output, e.g.: - { - ... - "businfo" : "pci@0000:08:00.0", - "logicalname" : "enp8s0", - "version" : "00", - "serial" : "52:54:00:59:e1:ac", - ... - "configuration" : { - ... - "link" : "yes", - ... - }, - ... + """A model of the relevant information from ``lshw``'s json output. + + e.g.:: + + { + ... + "businfo" : "pci@0000:08:00.0", + "logicalname" : "enp8s0", + "version" : "00", + "serial" : "52:54:00:59:e1:ac", + ... + "configuration" : { + ... + "link" : "yes", + ... + }, + ... """ + #: businfo: str + #: logicalname: NotRequired[str] + #: serial: NotRequired[str] + #: configuration: LshwConfigurationOutput class LinuxSession(PosixSession): - """ - The implementation of non-Posix compliant parts of Linux remote sessions. 
- """ + """The implementation of non-Posix compliant parts of Linux.""" @staticmethod def _get_privileged_command(command: str) -> str: return f"sudo -- sh -c '{command}'" def get_remote_cpus(self, use_first_core: bool) -> list[LogicalCore]: + """Overrides :meth:`~.os_session.OSSession.get_remote_cpus`.""" cpu_info = self.send_command("lscpu -p=CPU,CORE,SOCKET,NODE|grep -v \\#").stdout lcores = [] for cpu_line in cpu_info.splitlines(): @@ -65,18 +80,20 @@ def get_remote_cpus(self, use_first_core: bool) -> list[LogicalCore]: return lcores def get_dpdk_file_prefix(self, dpdk_prefix: str) -> str: + """Overrides :meth:`~.os_session.OSSession.get_dpdk_file_prefix`.""" return dpdk_prefix - def setup_hugepages(self, hugepage_amount: int, force_first_numa: bool) -> None: + def setup_hugepages(self, hugepage_count: int, force_first_numa: bool) -> None: + """Overrides :meth:`~.os_session.OSSession.setup_hugepages`.""" self._logger.info("Getting Hugepage information.") hugepage_size = self._get_hugepage_size() hugepages_total = self._get_hugepages_total() self._numa_nodes = self._get_numa_nodes() - if force_first_numa or hugepages_total != hugepage_amount: + if force_first_numa or hugepages_total != hugepage_count: # when forcing numa, we need to clear existing hugepages regardless # of size, so they can be moved to the first numa node - self._configure_huge_pages(hugepage_amount, hugepage_size, force_first_numa) + self._configure_huge_pages(hugepage_count, hugepage_size, force_first_numa) else: self._logger.info("Hugepages already configured.") self._mount_huge_pages() @@ -140,6 +157,7 @@ def _configure_huge_pages( ) def update_ports(self, ports: list[Port]) -> None: + """Overrides :meth:`~.os_session.OSSession.update_ports`.""" self._logger.debug("Gathering port info.") for port in ports: assert ( @@ -178,6 +196,7 @@ def _update_port_attr( ) def configure_port_state(self, port: Port, enable: bool) -> None: + """Overrides 
:meth:`~.os_session.OSSession.configure_port_state`.""" state = "up" if enable else "down" self.send_command( f"ip link set dev {port.logical_name} {state}", privileged=True @@ -189,6 +208,7 @@ def configure_port_ip_address( port: Port, delete: bool, ) -> None: + """Overrides :meth:`~.os_session.OSSession.configure_port_ip_address`.""" command = "del" if delete else "add" self.send_command( f"ip address {command} {address} dev {port.logical_name}", @@ -197,5 +217,6 @@ def configure_port_ip_address( ) def configure_ipv4_forwarding(self, enable: bool) -> None: + """Overrides :meth:`~.os_session.OSSession.configure_ipv4_forwarding`.""" state = 1 if enable else 0 self.send_command(f"sysctl -w net.ipv4.ip_forward={state}", privileged=True) diff --git a/dts/framework/testbed_model/posix_session.py b/dts/framework/testbed_model/posix_session.py index 1d1d5b1b26..a4824aa274 100644 --- a/dts/framework/testbed_model/posix_session.py +++ b/dts/framework/testbed_model/posix_session.py @@ -2,6 +2,15 @@ # Copyright(c) 2023 PANTHEON.tech s.r.o. # Copyright(c) 2023 University of New Hampshire +"""POSIX compliant OS translator. + +Translates OS-unaware calls into POSIX compliant calls/utilities. POSIX is a set of standards +for portability between Unix operating systems which not all Linux distributions +(or the tools most frequently bundled with said distributions) adhere to. Most of Linux +distributions are mostly compliant though. +This intermediate module implements the common parts of mostly POSIX compliant distributions. +""" + import re from collections.abc import Iterable from pathlib import PurePath, PurePosixPath @@ -15,13 +24,21 @@ class PosixSession(OSSession): - """ - An intermediary class implementing the Posix compliant parts of - Linux and other OS remote sessions. - """ + """An intermediary class implementing the POSIX standard.""" @staticmethod def combine_short_options(**opts: bool) -> str: + """Combine shell options into one argument. 
+ + These are options such as ``-x``, ``-v``, ``-f`` which are combined into ``-xvf``. + + Args: + opts: The keys are option names (usually one letter) and the bool values indicate + whether to include the option in the resulting argument. + + Returns: + The options combined into one argument. + """ ret_opts = "" for opt, include in opts.items(): if include: @@ -33,17 +50,19 @@ def combine_short_options(**opts: bool) -> str: return ret_opts def guess_dpdk_remote_dir(self, remote_dir: str | PurePath) -> PurePosixPath: + """Overrides :meth:`~.os_session.OSSession.guess_dpdk_remote_dir`.""" remote_guess = self.join_remote_path(remote_dir, "dpdk-*") result = self.send_command(f"ls -d {remote_guess} | tail -1") return PurePosixPath(result.stdout) def get_remote_tmp_dir(self) -> PurePosixPath: + """Overrides :meth:`~.os_session.OSSession.get_remote_tmp_dir`.""" return PurePosixPath("/tmp") def get_dpdk_build_env_vars(self, arch: Architecture) -> dict: - """ - Create extra environment variables needed for i686 arch build. Get information - from the node if needed. + """Overrides :meth:`~.os_session.OSSession.get_dpdk_build_env_vars`. + + Supported architecture: ``i686``. 
""" env_vars = {} if arch == Architecture.i686: @@ -63,6 +82,7 @@ def get_dpdk_build_env_vars(self, arch: Architecture) -> dict: return env_vars def join_remote_path(self, *args: str | PurePath) -> PurePosixPath: + """Overrides :meth:`~.os_session.OSSession.join_remote_path`.""" return PurePosixPath(*args) def copy_from( @@ -70,6 +90,7 @@ def copy_from( source_file: str | PurePath, destination_file: str | PurePath, ) -> None: + """Overrides :meth:`~.os_session.OSSession.copy_from`.""" self.remote_session.copy_from(source_file, destination_file) def copy_to( @@ -77,6 +98,7 @@ def copy_to( source_file: str | PurePath, destination_file: str | PurePath, ) -> None: + """Overrides :meth:`~.os_session.OSSession.copy_to`.""" self.remote_session.copy_to(source_file, destination_file) def remove_remote_dir( @@ -85,6 +107,7 @@ def remove_remote_dir( recursive: bool = True, force: bool = True, ) -> None: + """Overrides :meth:`~.os_session.OSSession.remove_remote_dir`.""" opts = PosixSession.combine_short_options(r=recursive, f=force) self.send_command(f"rm{opts} {remote_dir_path}") @@ -93,6 +116,7 @@ def extract_remote_tarball( remote_tarball_path: str | PurePath, expected_dir: str | PurePath | None = None, ) -> None: + """Overrides :meth:`~.os_session.OSSession.extract_remote_tarball`.""" self.send_command( f"tar xfm {remote_tarball_path} " f"-C {PurePosixPath(remote_tarball_path).parent}", @@ -110,6 +134,7 @@ def build_dpdk( rebuild: bool = False, timeout: float = SETTINGS.compile_timeout, ) -> None: + """Overrides :meth:`~.os_session.OSSession.build_dpdk`.""" try: if rebuild: # reconfigure, then build @@ -140,12 +165,14 @@ def build_dpdk( raise DPDKBuildError(f"DPDK build failed when doing '{e.command}'.") def get_dpdk_version(self, build_dir: str | PurePath) -> str: + """Overrides :meth:`~.os_session.OSSession.get_dpdk_version`.""" out = self.send_command( f"cat {self.join_remote_path(build_dir, 'VERSION')}", verify=True ) return out.stdout def kill_cleanup_dpdk_apps(self, 
dpdk_prefix_list: Iterable[str]) -> None:
+        """Overrides :meth:`~.os_session.OSSession.kill_cleanup_dpdk_apps`."""
         self._logger.info("Cleaning up DPDK apps.")
         dpdk_runtime_dirs = self._get_dpdk_runtime_dirs(dpdk_prefix_list)
         if dpdk_runtime_dirs:
@@ -159,6 +186,14 @@ def kill_cleanup_dpdk_apps(self, dpdk_prefix_list: Iterable[str]) -> None:

     def _get_dpdk_runtime_dirs(
         self, dpdk_prefix_list: Iterable[str]
     ) -> list[PurePosixPath]:
+        """Find the runtime directories DPDK apps are currently using.
+
+        Args:
+            dpdk_prefix_list: The prefixes DPDK apps were started with.
+
+        Returns:
+            The paths of DPDK apps' runtime dirs.
+        """
         prefix = PurePosixPath("/var", "run", "dpdk")
         if not dpdk_prefix_list:
             remote_prefixes = self._list_remote_dirs(prefix)
@@ -170,9 +205,13 @@ def _get_dpdk_runtime_dirs(
         return [PurePosixPath(prefix, dpdk_prefix) for dpdk_prefix in dpdk_prefix_list]

     def _list_remote_dirs(self, remote_path: str | PurePath) -> list[str] | None:
-        """
-        Return a list of directories of the remote_dir.
-        If remote_path doesn't exist, return None.
+        """List the directories in `remote_path`.
+
+        Args:
+            remote_path: List the contents of this path.
+
+        Returns:
+            The contents of `remote_path`. If `remote_path` doesn't exist, return :data:`None`.
         """
         out = self.send_command(
             f"ls -l {remote_path} | awk '/^d/ {{print $NF}}'"
@@ -183,6 +222,17 @@ def _list_remote_dirs(self, remote_path: str | PurePath) -> list[str] | None:
         return out.splitlines()

     def _get_dpdk_pids(self, dpdk_runtime_dirs: Iterable[str | PurePath]) -> list[int]:
+        """Find the PIDs of running DPDK apps.
+
+        Look at each "config" file found in `dpdk_runtime_dirs` and find the PIDs of processes
+        that opened those files.
+
+        Args:
+            dpdk_runtime_dirs: The paths of DPDK apps' runtime dirs.
+
+        Returns:
+            The PIDs of running DPDK apps.
+ """ pids = [] pid_regex = r"p(\d+)" for dpdk_runtime_dir in dpdk_runtime_dirs: @@ -203,6 +253,14 @@ def _remote_files_exists(self, remote_path: PurePath) -> bool: def _check_dpdk_hugepages( self, dpdk_runtime_dirs: Iterable[str | PurePath] ) -> None: + """Check there aren't any leftover hugepages. + + If any hugegapes are found, emit a warning. The hugepages are investigated in the + "hugepage_info" file of dpdk_runtime_dirs. + + Args: + dpdk_runtime_dirs: The paths of DPDK apps' runtime dirs. + """ for dpdk_runtime_dir in dpdk_runtime_dirs: hugepage_info = PurePosixPath(dpdk_runtime_dir, "hugepage_info") if self._remote_files_exists(hugepage_info): @@ -220,9 +278,11 @@ def _remove_dpdk_runtime_dirs( self.remove_remote_dir(dpdk_runtime_dir) def get_dpdk_file_prefix(self, dpdk_prefix: str) -> str: + """Overrides :meth:`~.os_session.OSSession.get_dpdk_file_prefix`.""" return "" def get_compiler_version(self, compiler_name: str) -> str: + """Overrides :meth:`~.os_session.OSSession.get_compiler_version`.""" match compiler_name: case "gcc": return self.send_command( @@ -240,6 +300,7 @@ def get_compiler_version(self, compiler_name: str) -> str: raise ValueError(f"Unknown compiler {compiler_name}") def get_node_info(self) -> NodeInfo: + """Overrides :meth:`~.os_session.OSSession.get_node_info`.""" os_release_info = self.send_command( "awk -F= '$1 ~ /^NAME$|^VERSION$/ {print $2}' /etc/os-release", SETTINGS.timeout, From patchwork Wed Nov 15 13:09:55 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Juraj_Linke=C5=A1?= X-Patchwork-Id: 134394 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 4DA0743339; Wed, 15 Nov 2023 14:14:02 +0100 (CET) Received: from mails.dpdk.org (localhost [127.0.0.1]) by 
From: Juraj Linkeš
To: thomas@monjalon.net, Honnappa.Nagarahalli@arm.com, jspewock@iol.unh.edu, probb@iol.unh.edu, paul.szczepanek@arm.com, yoan.picchi@foss.arm.com
Cc: dev@dpdk.org, Juraj Linkeš
Subject: [PATCH v7 17/21] dts: node docstring update
Date: Wed, 15 Nov 2023 14:09:55 +0100
Message-Id: <20231115130959.39420-2-juraj.linkes@pantheon.tech>
In-Reply-To: <20231115130959.39420-1-juraj.linkes@pantheon.tech>
References: <20231108125324.191005-23-juraj.linkes@pantheon.tech> <20231115130959.39420-1-juraj.linkes@pantheon.tech>

Format according to the Google format and PEP257, with slight deviations.

Signed-off-by: Juraj Linkeš
---
 dts/framework/testbed_model/node.py | 191 +++++++++++++++++++---------
 1 file changed, 131 insertions(+), 60 deletions(-)

diff --git a/dts/framework/testbed_model/node.py b/dts/framework/testbed_model/node.py
index fa5b143cdd..f93b4acecd 100644
--- a/dts/framework/testbed_model/node.py
+++ b/dts/framework/testbed_model/node.py
@@ -3,8 +3,13 @@
 # Copyright(c) 2022-2023 PANTHEON.tech s.r.o.
 # Copyright(c) 2022-2023 University of New Hampshire

-"""
-A node is a generic host that DTS connects to and manages.
+"""Common functionality for node management.
+
+A node is any host/server DTS connects to.
+ +The base class, :class:`Node`, provides functionality common to all nodes and is supposed +to be extended by subclasses with functionality specific to each node type. +The decorator :func:`Node.skip_setup` can be used without subclassing. """ from abc import ABC @@ -35,10 +40,22 @@ class Node(ABC): - """ - Basic class for node management. This class implements methods that - manage a node, such as information gathering (of CPU/PCI/NIC) and - environment setup. + """The base class for node management. + + It shouldn't be instantiated, but rather subclassed. + It implements common methods to manage any node: + + * Connection to the node, + * Hugepages setup. + + Attributes: + main_session: The primary OS-aware remote session used to communicate with the node. + config: The node configuration. + name: The name of the node. + lcores: The list of logical cores that DTS can use on the node. + It's derived from logical cores present on the node and the test run configuration. + ports: The ports of this node specified in the test run configuration. + virtual_devices: The virtual devices used on the node. """ main_session: OSSession @@ -52,6 +69,17 @@ class Node(ABC): virtual_devices: list[VirtualDevice] def __init__(self, node_config: NodeConfiguration): + """Connect to the node and gather info during initialization. + + Extra gathered information: + + * The list of available logical CPUs. This is then filtered by + the ``lcores`` configuration in the YAML test run configuration file, + * Information about ports from the YAML test run configuration file. + + Args: + node_config: The node's test run configuration. 
+ """ self.config = node_config self.name = node_config.name self._logger = getLogger(self.name) @@ -60,7 +88,7 @@ def __init__(self, node_config: NodeConfiguration): self._logger.info(f"Connected to node: {self.name}") self._get_remote_cpus() - # filter the node lcores according to user config + # filter the node lcores according to the test run configuration self.lcores = LogicalCoreListFilter( self.lcores, LogicalCoreList(self.config.lcores) ).filter() @@ -76,9 +104,14 @@ def _init_ports(self) -> None: self.configure_port_state(port) def set_up_execution(self, execution_config: ExecutionConfiguration) -> None: - """ - Perform the execution setup that will be done for each execution - this node is part of. + """Execution setup steps. + + Configure hugepages and call :meth:`_set_up_execution` where + the rest of the configuration steps (if any) are implemented. + + Args: + execution_config: The execution test run configuration according to which + the setup steps will be taken. """ self._setup_hugepages() self._set_up_execution(execution_config) @@ -87,58 +120,74 @@ def set_up_execution(self, execution_config: ExecutionConfiguration) -> None: self.virtual_devices.append(VirtualDevice(vdev)) def _set_up_execution(self, execution_config: ExecutionConfiguration) -> None: - """ - This method exists to be optionally overwritten by derived classes and - is not decorated so that the derived class doesn't have to use the decorator. + """Optional additional execution setup steps for subclasses. + + Subclasses should override this if they need to add additional execution setup steps. """ def tear_down_execution(self) -> None: - """ - Perform the execution teardown that will be done after each execution - this node is part of concludes. + """Execution teardown steps. + + There are currently no common execution teardown steps common to all DTS node types. 
""" self.virtual_devices = [] self._tear_down_execution() def _tear_down_execution(self) -> None: - """ - This method exists to be optionally overwritten by derived classes and - is not decorated so that the derived class doesn't have to use the decorator. + """Optional additional execution teardown steps for subclasses. + + Subclasses should override this if they need to add additional execution teardown steps. """ def set_up_build_target( self, build_target_config: BuildTargetConfiguration ) -> None: - """ - Perform the build target setup that will be done for each build target - tested on this node. + """Build target setup steps. + + There are currently no common build target setup steps common to all DTS node types. + + Args: + build_target_config: The build target test run configuration according to which + the setup steps will be taken. """ self._set_up_build_target(build_target_config) def _set_up_build_target( self, build_target_config: BuildTargetConfiguration ) -> None: - """ - This method exists to be optionally overwritten by derived classes and - is not decorated so that the derived class doesn't have to use the decorator. + """Optional additional build target setup steps for subclasses. + + Subclasses should override this if they need to add additional build target setup steps. """ def tear_down_build_target(self) -> None: - """ - Perform the build target teardown that will be done after each build target - tested on this node. + """Build target teardown steps. + + There are currently no common build target teardown steps common to all DTS node types. """ self._tear_down_build_target() def _tear_down_build_target(self) -> None: - """ - This method exists to be optionally overwritten by derived classes and - is not decorated so that the derived class doesn't have to use the decorator. + """Optional additional build target teardown steps for subclasses. + + Subclasses should override this if they need to add additional build target teardown steps. 
""" def create_session(self, name: str) -> OSSession: - """ - Create and return a new OSSession tailored to the remote OS. + """Create and return a new OS-aware remote session. + + The returned session won't be used by the node creating it. The session must be used by + the caller. The session will be maintained for the entire lifecycle of the node object, + at the end of which the session will be cleaned up automatically. + + Note: + Any number of these supplementary sessions may be created. + + Args: + name: The name of the session. + + Returns: + A new OS-aware remote session. """ session_name = f"{self.name} {name}" connection = create_session( @@ -156,19 +205,19 @@ def create_interactive_shell( privileged: bool = False, app_args: str = "", ) -> InteractiveShellType: - """Create a handler for an interactive session. + """Factory for interactive session handlers. - Instantiate shell_cls according to the remote OS specifics. + Instantiate `shell_cls` according to the remote OS specifics. Args: shell_cls: The class of the shell. - timeout: Timeout for reading output from the SSH channel. If you are - reading from the buffer and don't receive any data within the timeout - it will throw an error. + timeout: Timeout for reading output from the SSH channel. If you are reading from + the buffer and don't receive any data within the timeout it will throw an error. privileged: Whether to run the shell with administrative privileges. app_args: The arguments to be passed to the application. + Returns: - Instance of the desired interactive application. + An instance of the desired interactive application shell. """ if not shell_cls.dpdk_app: shell_cls.path = self.main_session.join_remote_path(shell_cls.path) @@ -185,14 +234,22 @@ def filter_lcores( filter_specifier: LogicalCoreCount | LogicalCoreList, ascending: bool = True, ) -> list[LogicalCore]: - """ - Filter the LogicalCores found on the Node according to - a LogicalCoreCount or a LogicalCoreList. 
+ """Filter the node's logical cores that DTS can use. + + Logical cores that DTS can use are the ones that are present on the node, but filtered + according to the test run configuration. The `filter_specifier` will filter cores from + those logical cores. + + Args: + filter_specifier: Two different filters can be used, one that specifies the number + of logical cores per core, cores per socket and the number of sockets, + and another one that specifies a logical core list. + ascending: If :data:`True`, use cores with the lowest numerical id first and continue + in ascending order. If :data:`False`, start with the highest id and continue + in descending order. This ordering affects which sockets to consider first as well. - If ascending is True, use cores with the lowest numerical id first - and continue in ascending order. If False, start with the highest - id and continue in descending order. This ordering affects which - sockets to consider first as well. + Returns: + The filtered logical cores. """ self._logger.debug(f"Filtering {filter_specifier} from {self.lcores}.") return lcore_filter( @@ -202,17 +259,14 @@ def filter_lcores( ).filter() def _get_remote_cpus(self) -> None: - """ - Scan CPUs in the remote OS and store a list of LogicalCores. - """ + """Scan CPUs in the remote OS and store a list of LogicalCores.""" self._logger.info("Getting CPU information.") self.lcores = self.main_session.get_remote_cpus(self.config.use_first_core) def _setup_hugepages(self) -> None: - """ - Setup hugepages on the Node. Different architectures can supply different - amounts of memory for hugepages and numa-based hugepage allocation may need - to be considered. + """Setup hugepages on the node. + + Configure the hugepages only if they're specified in the node's test run configuration. 
""" if self.config.hugepages: self.main_session.setup_hugepages( @@ -220,8 +274,11 @@ def _setup_hugepages(self) -> None: ) def configure_port_state(self, port: Port, enable: bool = True) -> None: - """ - Enable/disable port. + """Enable/disable `port`. + + Args: + port: The port to enable/disable. + enable: :data:`True` to enable, :data:`False` to disable. """ self.main_session.configure_port_state(port, enable) @@ -231,15 +288,17 @@ def configure_port_ip_address( port: Port, delete: bool = False, ) -> None: - """ - Configure the IP address of a port on this node. + """Add an IP address to `port` on this node. + + Args: + address: The IP address with mask in CIDR format. Can be either IPv4 or IPv6. + port: The port to which to add the address. + delete: If :data:`True`, will delete the address from the port instead of adding it. """ self.main_session.configure_port_ip_address(address, port, delete) def close(self) -> None: - """ - Close all connections and free other resources. - """ + """Close all connections and free other resources.""" if self.main_session: self.main_session.close() for session in self._other_sessions: @@ -248,6 +307,11 @@ def close(self) -> None: @staticmethod def skip_setup(func: Callable[..., Any]) -> Callable[..., Any]: + """Skip the decorated function. + + The :option:`--skip-setup` command line argument and the :envvar:`DTS_SKIP_SETUP` + environment variable enable the decorator. + """ if SETTINGS.skip_setup: return lambda *args: None else: @@ -257,6 +321,13 @@ def skip_setup(func: Callable[..., Any]) -> Callable[..., Any]: def create_session( node_config: NodeConfiguration, name: str, logger: DTSLOG ) -> OSSession: + """Factory for OS-aware sessions. + + Args: + node_config: The test run configuration of the node to connect to. + name: The name of the session. + logger: The logger instance this session will use. 
+ """ match node_config.os: case OS.linux: return LinuxSession(node_config, name, logger) From patchwork Wed Nov 15 13:09:56 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Juraj_Linke=C5=A1?= X-Patchwork-Id: 134395 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id F0EA143339; Wed, 15 Nov 2023 14:14:09 +0100 (CET) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id B20F74114B; Wed, 15 Nov 2023 14:12:20 +0100 (CET) Received: from mail-ed1-f42.google.com (mail-ed1-f42.google.com [209.85.208.42]) by mails.dpdk.org (Postfix) with ESMTP id 94AC6410F6 for ; Wed, 15 Nov 2023 14:12:15 +0100 (CET) Received: by mail-ed1-f42.google.com with SMTP id 4fb4d7f45d1cf-53e3b8f906fso10355924a12.2 for ; Wed, 15 Nov 2023 05:12:15 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=pantheon.tech; s=google; t=1700053935; x=1700658735; darn=dpdk.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=C7ydtWmwUXlZj9C3QriL/vnjaRjG/RxKRJvVTYnZhi4=; b=obM8ionti033ySS/zrPseAcGCcPzDzSELUIeYaPJgotS5s2gF7n0ZcnN1R4SKONljz FmWjaQ1h9l7RsXBuse/beVsp3POcTev9iMshw4OgN/4PsxFlgq6jWa/daTxRWSxJ4lCG qrcTBCaoALpAkfCzw3G7KLzpKsjwk30C5WfGVQLm7QawAOFZ2op0OQhkODLFz330nf43 P/pewziLbnP5mKK4zDVo4s8trgud04LWe89VymhP89KxS37RA+hCygKeHRtrerVcTCLT P2lBTqHFIxNpuRoEeddntVJeZmceVENpCTGQvhSb09VLW5YWx3r7QDS5cL+T33AJIUk+ C3UA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1700053935; x=1700658735; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; 
From: Juraj Linkeš
To: thomas@monjalon.net, Honnappa.Nagarahalli@arm.com, jspewock@iol.unh.edu, probb@iol.unh.edu, paul.szczepanek@arm.com, yoan.picchi@foss.arm.com
Cc: dev@dpdk.org, Juraj Linkeš
Subject: [PATCH v7 18/21] dts: sut and tg nodes docstring update
Date: Wed, 15 Nov 2023 14:09:56 +0100
Message-Id: <20231115130959.39420-19-juraj.linkes@pantheon.tech>
In-Reply-To: <20231115130959.39420-1-juraj.linkes@pantheon.tech>
References: <20231108125324.191005-23-juraj.linkes@pantheon.tech> <20231115130959.39420-1-juraj.linkes@pantheon.tech>

Format according to the Google format and PEP257, with slight deviations.
Signed-off-by: Juraj Linkeš --- dts/framework/testbed_model/sut_node.py | 224 ++++++++++++++++-------- dts/framework/testbed_model/tg_node.py | 42 +++-- 2 files changed, 173 insertions(+), 93 deletions(-) diff --git a/dts/framework/testbed_model/sut_node.py b/dts/framework/testbed_model/sut_node.py index 17deea06e2..123b16fee0 100644 --- a/dts/framework/testbed_model/sut_node.py +++ b/dts/framework/testbed_model/sut_node.py @@ -3,6 +3,14 @@ # Copyright(c) 2023 PANTHEON.tech s.r.o. # Copyright(c) 2023 University of New Hampshire +"""System under test (DPDK + hardware) node. + +A system under test (SUT) is the combination of DPDK +and the hardware we're testing with DPDK (NICs, crypto and other devices). +An SUT node is where this SUT runs. +""" + + import os import tarfile import time @@ -26,6 +34,11 @@ class EalParameters(object): + """The environment abstraction layer parameters. + + The string representation can be created by converting the instance to a string. + """ + def __init__( self, lcore_list: LogicalCoreList, @@ -35,21 +48,23 @@ def __init__( vdevs: list[VirtualDevice], other_eal_param: str, ): - """ - Generate eal parameters character string; - :param lcore_list: the list of logical cores to use. - :param memory_channels: the number of memory channels to use. - :param prefix: set file prefix string, eg: - prefix='vf' - :param no_pci: switch of disable PCI bus eg: - no_pci=True - :param vdevs: virtual device list, eg: - vdevs=[ - VirtualDevice('net_ring0'), - VirtualDevice('net_ring1') - ] - :param other_eal_param: user defined DPDK eal parameters, eg: - other_eal_param='--single-file-segments' + """Initialize the parameters according to inputs. + + Process the parameters into the format used on the command line. + + Args: + lcore_list: The list of logical cores to use. + memory_channels: The number of memory channels to use. + prefix: Set the file prefix string with which to start DPDK, e.g.: ``prefix='vf'``. 
+ no_pci: Switch to disable PCI bus e.g.: ``no_pci=True``. + vdevs: Virtual devices, e.g.:: + + vdevs=[ + VirtualDevice('net_ring0'), + VirtualDevice('net_ring1') + ] + other_eal_param: user defined DPDK EAL parameters, e.g.: + ``other_eal_param='--single-file-segments'`` """ self._lcore_list = f"-l {lcore_list}" self._memory_channels = f"-n {memory_channels}" @@ -61,6 +76,7 @@ def __init__( self._other_eal_param = other_eal_param def __str__(self) -> str: + """Create the EAL string.""" return ( f"{self._lcore_list} " f"{self._memory_channels} " @@ -72,11 +88,21 @@ def __str__(self) -> str: class SutNode(Node): - """ - A class for managing connections to the System under Test, providing - methods that retrieve the necessary information about the node (such as - CPU, memory and NIC details) and configuration capabilities. - Another key capability is building DPDK according to given build target. + """The system under test node. + + The SUT node extends :class:`Node` with DPDK specific features: + + * DPDK build, + * Gathering of DPDK build info, + * The running of DPDK apps, interactively or one-time execution, + * DPDK apps cleanup. + + The :option:`--tarball` command line argument and the :envvar:`DTS_DPDK_TARBALL` + environment variable configure the path to the DPDK tarball + or the git commit ID, tag ID or tree ID to test. + + Attributes: + config: The SUT node configuration """ config: SutNodeConfiguration @@ -94,6 +120,11 @@ class SutNode(Node): _path_to_devbind_script: PurePath | None def __init__(self, node_config: SutNodeConfiguration): + """Extend the constructor with SUT node specifics. + + Args: + node_config: The SUT node's test run configuration. + """ super(SutNode, self).__init__(node_config) self._dpdk_prefix_list = [] self._build_target_config = None @@ -113,6 +144,12 @@ def __init__(self, node_config: SutNodeConfiguration): @property def _remote_dpdk_dir(self) -> PurePath: + """The remote DPDK dir. 
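The `EalParameters` class above stores each argument pre-formatted and has `__str__` join them into the command line. A simplified sketch of that composition (only a subset of the real fields; the actual class also handles vdevs and an arbitrary extra-parameter string):

```python
class EalParameters:
    """Simplified EAL parameter container; str() yields the command-line string."""

    def __init__(self, lcore_list: str, memory_channels: int,
                 prefix: str = "", no_pci: bool = False):
        # Each field is stored already formatted for the command line.
        self._lcore_list = f"-l {lcore_list}"
        self._memory_channels = f"-n {memory_channels}"
        self._prefix = f"--file-prefix={prefix}" if prefix else ""
        self._no_pci = "--no-pci" if no_pci else ""

    def __str__(self) -> str:
        # Join the preformatted fields, dropping the empty ones.
        fields = [self._lcore_list, self._memory_channels, self._prefix, self._no_pci]
        return " ".join(f for f in fields if f)
```

For example, `str(EalParameters("0-3", 4, prefix="vf"))` gives `-l 0-3 -n 4 --file-prefix=vf`, ready to be prepended to a DPDK app invocation.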
+ + This internal property should be set after extracting the DPDK tarball. If it's not set, + that implies the DPDK setup step has been skipped, in which case we can guess where + a previous build was located. + """ if self.__remote_dpdk_dir is None: self.__remote_dpdk_dir = self._guess_dpdk_remote_dir() return self.__remote_dpdk_dir @@ -123,6 +160,11 @@ def _remote_dpdk_dir(self, value: PurePath) -> None: @property def remote_dpdk_build_dir(self) -> PurePath: + """The remote DPDK build directory. + + This is the directory where DPDK was built. + We assume it was built in a subdirectory of the extracted tarball. + """ if self._build_target_config: return self.main_session.join_remote_path( self._remote_dpdk_dir, self._build_target_config.name @@ -132,6 +174,7 @@ def remote_dpdk_build_dir(self) -> PurePath: @property def dpdk_version(self) -> str: + """Last built DPDK version.""" if self._dpdk_version is None: self._dpdk_version = self.main_session.get_dpdk_version( self._remote_dpdk_dir @@ -140,12 +183,14 @@ def dpdk_version(self) -> str: @property def node_info(self) -> NodeInfo: + """Additional node information.""" if self._node_info is None: self._node_info = self.main_session.get_node_info() return self._node_info @property def compiler_version(self) -> str: + """The node's compiler version.""" if self._compiler_version is None: if self._build_target_config is not None: self._compiler_version = self.main_session.get_compiler_version( @@ -161,6 +206,7 @@ def compiler_version(self) -> str: @property def path_to_devbind_script(self) -> PurePath: + """The path to the dpdk-devbind.py script on the node.""" if self._path_to_devbind_script is None: self._path_to_devbind_script = self.main_session.join_remote_path( self._remote_dpdk_dir, "usertools", "dpdk-devbind.py" @@ -168,6 +214,11 @@ def path_to_devbind_script(self) -> PurePath: return self._path_to_devbind_script def get_build_target_info(self) -> BuildTargetInfo: + """Get additional build target information. 
+
+        Returns:
+            The build target information.
+        """
        return BuildTargetInfo(
            dpdk_version=self.dpdk_version, compiler_version=self.compiler_version
        )
@@ -178,8 +229,9 @@ def _guess_dpdk_remote_dir(self) -> PurePath:
     def _set_up_build_target(
         self, build_target_config: BuildTargetConfiguration
     ) -> None:
-        """
-        Setup DPDK on the SUT node.
+        """Set up DPDK on the SUT node.
+
+        Additional build target setup steps on top of those in :class:`Node`.
         """
         # we want to ensure that dpdk_version and compiler_version is reset for new
         # build targets
@@ -200,9 +252,7 @@ def _tear_down_build_target(self) -> None:
     def _configure_build_target(
         self, build_target_config: BuildTargetConfiguration
     ) -> None:
-        """
-        Populate common environment variables and set build target config.
-        """
+        """Populate common environment variables and set build target config."""
         self._env_vars = {}
         self._build_target_config = build_target_config
         self._env_vars.update(
@@ -217,9 +267,7 @@ def _configure_build_target(
     @Node.skip_setup
     def _copy_dpdk_tarball(self) -> None:
-        """
-        Copy to and extract DPDK tarball on the SUT node.
-        """
+        """Copy to and extract DPDK tarball on the SUT node."""
         self._logger.info("Copying DPDK tarball to SUT.")
         self.main_session.copy_to(SETTINGS.dpdk_tarball_path, self._remote_tmp_dir)
@@ -250,8 +298,9 @@ def _copy_dpdk_tarball(self) -> None:
     @Node.skip_setup
     def _build_dpdk(self) -> None:
-        """
-        Build DPDK. Uses the already configured target. Assumes that the tarball has
+        """Build DPDK.
+
+        Uses the already configured target. Assumes that the tarball has
         already been copied to and extracted on the SUT node.
         """
         self.main_session.build_dpdk(
@@ -262,15 +311,19 @@ def _build_dpdk(self) -> None:
         )

     def build_dpdk_app(self, app_name: str, **meson_dpdk_args: str | bool) -> PurePath:
-        """
-        Build one or all DPDK apps. Requires DPDK to be already built on the SUT node.
-        When app_name is 'all', build all example apps.
- When app_name is any other string, tries to build that example app. - Return the directory path of the built app. If building all apps, return - the path to the examples directory (where all apps reside). - The meson_dpdk_args are keyword arguments - found in meson_option.txt in root DPDK directory. Do not use -D with them, - for example: enable_kmods=True. + """Build one or all DPDK apps. + + Requires DPDK to be already built on the SUT node. + + Args: + app_name: The name of the DPDK app to build. + When `app_name` is ``all``, build all example apps. + meson_dpdk_args: The arguments found in ``meson_options.txt`` in root DPDK directory. + Do not use ``-D`` with them. + + Returns: + The directory path of the built app. If building all apps, return + the path to the examples directory (where all apps reside). """ self.main_session.build_dpdk( self._env_vars, @@ -291,9 +344,7 @@ def build_dpdk_app(self, app_name: str, **meson_dpdk_args: str | bool) -> PurePa ) def kill_cleanup_dpdk_apps(self) -> None: - """ - Kill all dpdk applications on the SUT. Cleanup hugepages. - """ + """Kill all dpdk applications on the SUT, then clean up hugepages.""" if self._dpdk_kill_session and self._dpdk_kill_session.is_alive(): # we can use the session if it exists and responds self._dpdk_kill_session.kill_cleanup_dpdk_apps(self._dpdk_prefix_list) @@ -312,33 +363,34 @@ def create_eal_parameters( vdevs: list[VirtualDevice] | None = None, other_eal_param: str = "", ) -> "EalParameters": - """ - Generate eal parameters character string; - :param lcore_filter_specifier: a number of lcores/cores/sockets to use - or a list of lcore ids to use. - The default will select one lcore for each of two cores - on one socket, in ascending order of core ids. - :param ascending_cores: True, use cores with the lowest numerical id first - and continue in ascending order. If False, start with the - highest id and continue in descending order. 
This ordering
-        affects which sockets to consider first as well.
-        :param prefix: set file prefix string, eg:
-                    prefix='vf'
-        :param append_prefix_timestamp: if True, will append a timestamp to
-                    DPDK file prefix.
-        :param no_pci: switch of disable PCI bus eg:
-                    no_pci=True
-        :param vdevs: virtual device list, eg:
-                    vdevs=[
-                        VirtualDevice('net_ring0'),
-                        VirtualDevice('net_ring1')
-                    ]
-        :param other_eal_param: user defined DPDK eal parameters, eg:
-                    other_eal_param='--single-file-segments'
-        :return: eal param string, eg:
-                '-c 0xf -a 0000:88:00.0 --file-prefix=dpdk_1112_20190809143420';
-        """
+        """Compose the EAL parameters.
+
+        Process the list of cores and the DPDK prefix and pass that along with
+        the rest of the arguments.
+
+        Args:
+            lcore_filter_specifier: A number of lcores/cores/sockets to use
+                or a list of lcore ids to use.
+                The default will select one lcore for each of two cores
+                on one socket, in ascending order of core ids.
+            ascending_cores: Sort cores in ascending order (lowest to highest IDs).
+                If :data:`False`, sort in descending order.
+            prefix: Set the file prefix string with which to start DPDK, e.g.: ``prefix='vf'``.
+            append_prefix_timestamp: If :data:`True`, will append a timestamp to DPDK file prefix.
+            no_pci: Switch to disable PCI bus e.g.: ``no_pci=True``.
+            vdevs: Virtual devices, e.g.::
+
+                vdevs=[
+                    VirtualDevice('net_ring0'),
+                    VirtualDevice('net_ring1')
+                ]
+            other_eal_param: User defined DPDK EAL parameters, e.g.:
+                ``other_eal_param='--single-file-segments'``.
+
+        Returns:
+            An EAL param string, such as
+            ``-c 0xf -a 0000:88:00.0 --file-prefix=dpdk_1112_20190809143420``.
+        """
        lcore_list = LogicalCoreList(
            self.filter_lcores(lcore_filter_specifier, ascending_cores)
        )
@@ -364,14 +416,29 @@ def create_eal_parameters(
     def run_dpdk_app(
         self, app_path: PurePath, eal_args: "EalParameters", timeout: float = 30
     ) -> CommandResult:
-        """
-        Run DPDK application on the remote node.
+ + The application is not run interactively - the command that starts the application + is executed and then the call waits for it to finish execution. + + Args: + app_path: The remote path to the DPDK application. + eal_args: EAL parameters to run the DPDK application with. + timeout: Wait at most this long in seconds to execute the command. + + Returns: + The result of the DPDK app execution. """ return self.main_session.send_command( f"{app_path} {eal_args}", timeout, privileged=True, verify=True ) def configure_ipv4_forwarding(self, enable: bool) -> None: + """Enable/disable IPv4 forwarding on the node. + + Args: + enable: If :data:`True`, enable the forwarding, otherwise disable it. + """ self.main_session.configure_ipv4_forwarding(enable) def create_interactive_shell( @@ -381,9 +448,13 @@ def create_interactive_shell( privileged: bool = False, eal_parameters: EalParameters | str | None = None, ) -> InteractiveShellType: - """Factory method for creating a handler for an interactive session. + """Extend the factory for interactive session handlers. + + The extensions are SUT node specific: - Instantiate shell_cls according to the remote OS specifics. + * The default for `eal_parameters`, + * The interactive shell path `shell_cls.path` is prepended with path to the remote + DPDK build directory for DPDK apps. Args: shell_cls: The class of the shell. @@ -393,9 +464,10 @@ def create_interactive_shell( privileged: Whether to run the shell with administrative privileges. eal_parameters: List of EAL parameters to use to launch the app. If this isn't provided or an empty string is passed, it will default to calling - create_eal_parameters(). + :meth:`create_eal_parameters`. + Returns: - Instance of the desired interactive application. + An instance of the desired interactive application shell. 
""" if not eal_parameters: eal_parameters = self.create_eal_parameters() @@ -414,8 +486,8 @@ def bind_ports_to_driver(self, for_dpdk: bool = True) -> None: """Bind all ports on the SUT to a driver. Args: - for_dpdk: Boolean that, when True, binds ports to os_driver_for_dpdk - or, when False, binds to os_driver. Defaults to True. + for_dpdk: If :data:`True`, binds ports to os_driver_for_dpdk. + If :data:`False`, binds to os_driver. """ for port in self.ports: driver = port.os_driver_for_dpdk if for_dpdk else port.os_driver diff --git a/dts/framework/testbed_model/tg_node.py b/dts/framework/testbed_model/tg_node.py index 166eb8430e..69eb33ccb1 100644 --- a/dts/framework/testbed_model/tg_node.py +++ b/dts/framework/testbed_model/tg_node.py @@ -5,13 +5,8 @@ """Traffic generator node. -This is the node where the traffic generator resides. -The distinction between a node and a traffic generator is as follows: -A node is a host that DTS connects to. It could be a baremetal server, -a VM or a container. -A traffic generator is software running on the node. -A traffic generator node is a node running a traffic generator. -A node can be a traffic generator node as well as system under test node. +A traffic generator (TG) generates traffic that's sent towards the SUT node. +A TG node is where the TG runs. """ from scapy.packet import Packet # type: ignore[import] @@ -24,13 +19,16 @@ class TGNode(Node): - """Manage connections to a node with a traffic generator. + """The traffic generator node. - Apart from basic node management capabilities, the Traffic Generator node has - specialized methods for handling the traffic generator running on it. + The TG node extends :class:`Node` with TG specific features: - Arguments: - node_config: The user configuration of the traffic generator node. + * Traffic generator initialization, + * The sending of traffic and receiving packets, + * The sending of traffic without receiving packets. 
+ + Not all traffic generators are capable of capturing traffic, which is why there + must be a way to send traffic without capturing it. Attributes: traffic_generator: The traffic generator running on the node. @@ -39,6 +37,13 @@ class TGNode(Node): traffic_generator: CapturingTrafficGenerator def __init__(self, node_config: TGNodeConfiguration): + """Extend the constructor with TG node specifics. + + Initialize the traffic generator on the TG node. + + Args: + node_config: The TG node's test run configuration. + """ super(TGNode, self).__init__(node_config) self.traffic_generator = create_traffic_generator( self, node_config.traffic_generator @@ -52,17 +57,17 @@ def send_packet_and_capture( receive_port: Port, duration: float = 1, ) -> list[Packet]: - """Send a packet, return received traffic. + """Send `packet`, return received traffic. - Send a packet on the send_port and then return all traffic captured - on the receive_port for the given duration. Also record the captured traffic + Send `packet` on `send_port` and then return all traffic captured + on `receive_port` for the given duration. Also record the captured traffic in a pcap file. Args: packet: The packet to send. send_port: The egress port on the TG node. receive_port: The ingress port in the TG node. - duration: Capture traffic for this amount of time after sending the packet. + duration: Capture traffic for this amount of time after sending `packet`. Returns: A list of received packets. May be empty if no packets are captured. @@ -72,6 +77,9 @@ def send_packet_and_capture( ) def close(self) -> None: - """Free all resources used by the node""" + """Free all resources used by the node. + + This extends the superclass method with TG cleanup.
+ """ self.traffic_generator.close() super(TGNode, self).close() From patchwork Wed Nov 15 13:09:57 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Juraj_Linke=C5=A1?= X-Patchwork-Id: 134396 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id CCBCD43339; Wed, 15 Nov 2023 14:14:17 +0100 (CET) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id CB57C411F3; Wed, 15 Nov 2023 14:12:21 +0100 (CET) Received: from mail-ed1-f52.google.com (mail-ed1-f52.google.com [209.85.208.52]) by mails.dpdk.org (Postfix) with ESMTP id 7D22440EA5 for ; Wed, 15 Nov 2023 14:12:16 +0100 (CET) Received: by mail-ed1-f52.google.com with SMTP id 4fb4d7f45d1cf-5440f25dcc7so10378769a12.0 for ; Wed, 15 Nov 2023 05:12:16 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=pantheon.tech; s=google; t=1700053936; x=1700658736; darn=dpdk.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=g4hWHvKq7wmrNScwYf5rL6W0ga2jchBf0GSHuvS0F2E=; b=WkTtAltTirdoM0ZbbsiBeNL6BqP16fPAbdqn5HVlHuujGKJpiWMjGbArAtSes4KpgP LH0557D5yRUvGiyRHxGdFA7EA/NRCy2Fi3ahvylFOe3DKRFmj2seBbj34Aeal2e0+PnQ mq5cFxs/cpo2UtmcbTH4A/WoguLpwWk3HoykULwsB4uASMtnTVHAQaiIftEnjlpIQ8Xm jWct8X9sGHBtgcCegDPedClC9CKcSpRlHU/uitlgqPJ+zxqK0/1WCqp+CZfZb4SI22q+ q9oeOZ8BlMe8f90nPN4+woAoD+wIL5ULw0BVwbFVD2fpt3BprEDRt/cI0NWrRMZHonoK KL0w== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1700053936; x=1700658736; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; 
bh=g4hWHvKq7wmrNScwYf5rL6W0ga2jchBf0GSHuvS0F2E=; b=j3+1X4wmKYhvMSZ3GS0LlX73sJudYrCnYWy1n9Ki1b5iAWee4eHmzTEvzThCpae1xo y1imLp3Dc9pQKIDOREjrEfWpVgy4GRSUyYIkXTCY3zVpWOmKGPZWRQ3lw3DGdM0KR0Q1 OpprS6FhK06cBVDbwjtKGCg+RI4aMePdtZDZLmFoLR65UmxOHp/r5Xw1fKbl3vOhj3Wc 1mskgialBPOVNEErrpVNRPBLIbUoqxbX62cul3TqxniMp2pWks/ME0/bHmtFGJYIsfkk AO+DZhAV5sH7k4iJhDZ44c06caHkbYPPwVZNRFhVgu5UUoKii4M6JVi/GFw+N5bO4dgp q1fA== X-Gm-Message-State: AOJu0YzLH3xXIkEoq8m3XuxI4I1eW8bv50J3svu1uYVJackubo9YsMuN INAmRqoS/xpuSnXnsoqp73PauQ== X-Google-Smtp-Source: AGHT+IElm3XiBhazL1q4Dynjie7fm8BEbAahRMQ0J9nCSo7nhOoL/ALTqB9Y0zUsOHGsMUoS6tteXQ== X-Received: by 2002:a17:906:ad4:b0:9e0:4910:1649 with SMTP id z20-20020a1709060ad400b009e049101649mr9126882ejf.8.1700053936159; Wed, 15 Nov 2023 05:12:16 -0800 (PST) Received: from jlinkes-PT-Latitude-5530.pantheon.local (81.89.53.154.host.vnet.sk. [81.89.53.154]) by smtp.gmail.com with ESMTPSA id tb16-20020a1709078b9000b009f2b7282387sm1011914ejc.46.2023.11.15.05.12.15 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 15 Nov 2023 05:12:15 -0800 (PST) From: =?utf-8?q?Juraj_Linke=C5=A1?= To: thomas@monjalon.net, Honnappa.Nagarahalli@arm.com, jspewock@iol.unh.edu, probb@iol.unh.edu, paul.szczepanek@arm.com, yoan.picchi@foss.arm.com Cc: dev@dpdk.org, =?utf-8?q?Juraj_Linke=C5=A1?= Subject: [PATCH v7 19/21] dts: base traffic generators docstring update Date: Wed, 15 Nov 2023 14:09:57 +0100 Message-Id: <20231115130959.39420-20-juraj.linkes@pantheon.tech> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20231115130959.39420-1-juraj.linkes@pantheon.tech> References: <20231108125324.191005-23-juraj.linkes@pantheon.tech> <20231115130959.39420-1-juraj.linkes@pantheon.tech> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Format according to the Google format and PEP257, with slight 
deviations. Signed-off-by: Juraj Linkeš --- .../traffic_generator/__init__.py | 22 ++++++++- .../capturing_traffic_generator.py | 46 +++++++++++-------- .../traffic_generator/traffic_generator.py | 33 +++++++------ 3 files changed, 68 insertions(+), 33 deletions(-) diff --git a/dts/framework/testbed_model/traffic_generator/__init__.py b/dts/framework/testbed_model/traffic_generator/__init__.py index 11bfa1ee0f..51cca77da4 100644 --- a/dts/framework/testbed_model/traffic_generator/__init__.py +++ b/dts/framework/testbed_model/traffic_generator/__init__.py @@ -1,6 +1,19 @@ # SPDX-License-Identifier: BSD-3-Clause # Copyright(c) 2023 PANTHEON.tech s.r.o. +"""DTS traffic generators. + +A traffic generator is capable of generating traffic and then monitoring returning traffic. +A traffic generator may just count the number of received packets +and it may additionally capture individual packets. + +A traffic generator may be software running on generic hardware or it could be specialized hardware. + +The traffic generators that only count the number of received packets are suitable only for +performance testing. In functional testing, we need to be able to dissect each arriving packet, +so a capturing traffic generator is required. +""" + from framework.config import ScapyTrafficGeneratorConfig, TrafficGeneratorType from framework.exception import ConfigurationError from framework.testbed_model.node import Node @@ -12,8 +25,15 @@ def create_traffic_generator( tg_node: Node, traffic_generator_config: ScapyTrafficGeneratorConfig ) -> CapturingTrafficGenerator: - """A factory function for creating traffic generator object from user config.""" + """The factory function for creating traffic generator objects from the test run configuration. + + Args: + tg_node: The traffic generator node where the created traffic generator will be running. + traffic_generator_config: The traffic generator config. + + Returns: + A traffic generator capable of capturing received packets.
+ """ match traffic_generator_config.traffic_generator_type: case TrafficGeneratorType.SCAPY: return ScapyTrafficGenerator(tg_node, traffic_generator_config) diff --git a/dts/framework/testbed_model/traffic_generator/capturing_traffic_generator.py b/dts/framework/testbed_model/traffic_generator/capturing_traffic_generator.py index e521211ef0..b0a43ad003 100644 --- a/dts/framework/testbed_model/traffic_generator/capturing_traffic_generator.py +++ b/dts/framework/testbed_model/traffic_generator/capturing_traffic_generator.py @@ -23,19 +23,22 @@ def _get_default_capture_name() -> str: - """ - This is the function used for the default implementation of capture names. - """ return str(uuid.uuid4()) class CapturingTrafficGenerator(TrafficGenerator): """Capture packets after sending traffic. - A mixin interface which enables a packet generator to declare that it can capture + The intermediary interface which enables a packet generator to declare that it can capture packets and return them to the user. + Similarly to + :class:`~framework.testbed_model.traffic_generator.traffic_generator.TrafficGenerator`, + this class exposes the public methods specific to capturing traffic generators and defines + a private method that must implement the traffic generation and capturing logic in subclasses. + The methods of capturing traffic generators obey the following workflow: + 1. send packets 2. capture packets 3. write the capture to a .pcap file @@ -44,6 +47,7 @@ class CapturingTrafficGenerator(TrafficGenerator): @property def is_capturing(self) -> bool: + """This traffic generator can capture traffic.""" return True def send_packet_and_capture( @@ -54,11 +58,12 @@ def send_packet_and_capture( duration: float, capture_name: str = _get_default_capture_name(), ) -> list[Packet]: - """Send a packet, return received traffic. + """Send `packet` and capture received traffic. 
+ + Send `packet` on `send_port` and then return all traffic captured + on `receive_port` for the given `duration`. - Send a packet on the send_port and then return all traffic captured - on the receive_port for the given duration. Also record the captured traffic - in a pcap file. + The captured traffic is recorded in the `capture_name`.pcap file. Args: packet: The packet to send. @@ -68,7 +73,7 @@ def send_packet_and_capture( capture_name: The name of the .pcap file where to store the capture. Returns: - A list of received packets. May be empty if no packets are captured. + The received packets. May be empty if no packets are captured. """ return self.send_packets_and_capture( [packet], send_port, receive_port, duration, capture_name @@ -82,11 +87,14 @@ def send_packets_and_capture( duration: float, capture_name: str = _get_default_capture_name(), ) -> list[Packet]: - """Send packets, return received traffic. + """Send `packets` and capture received traffic. - Send packets on the send_port and then return all traffic captured - on the receive_port for the given duration. Also record the captured traffic - in a pcap file. + Send `packets` on `send_port` and then return all traffic captured + on `receive_port` for the given `duration`. + + The captured traffic is recorded in the `capture_name`.pcap file. The target directory + can be configured with the :option:`--output-dir` command line argument or + the :envvar:`DTS_OUTPUT_DIR` environment variable. Args: packets: The packets to send. @@ -96,7 +104,7 @@ def send_packets_and_capture( capture_name: The name of the .pcap file where to store the capture. Returns: - A list of received packets. May be empty if no packets are captured. + The received packets. May be empty if no packets are captured. 
""" self._logger.debug(get_packet_summaries(packets)) self._logger.debug( @@ -124,10 +132,12 @@ def _send_packets_and_capture( receive_port: Port, duration: float, ) -> list[Packet]: - """ - The extended classes must implement this method which - sends packets on send_port and receives packets on the receive_port - for the specified duration. It must be able to handle no received packets. + """The implementation of :method:`send_packets_and_capture`. + + The subclasses must implement this method which sends `packets` on `send_port` + and receives packets on `receive_port` for the specified `duration`. + + It must be able to handle no received packets. """ def _write_capture_from_packets( diff --git a/dts/framework/testbed_model/traffic_generator/traffic_generator.py b/dts/framework/testbed_model/traffic_generator/traffic_generator.py index ea7c3963da..ed396c6a2f 100644 --- a/dts/framework/testbed_model/traffic_generator/traffic_generator.py +++ b/dts/framework/testbed_model/traffic_generator/traffic_generator.py @@ -22,7 +22,8 @@ class TrafficGenerator(ABC): """The base traffic generator. - Defines the few basic methods that each traffic generator must implement. + Exposes the common public methods of all traffic generators and defines private methods + that must implement the traffic generation logic in subclasses. """ _config: TrafficGeneratorConfig @@ -30,6 +31,12 @@ class TrafficGenerator(ABC): _logger: DTSLOG def __init__(self, tg_node: Node, config: TrafficGeneratorConfig): + """Initialize the traffic generator. + + Args: + tg_node: The traffic generator node where the created traffic generator will be running. + config: The traffic generator's test run configuration. + """ self._config = config self._tg_node = tg_node self._logger = getLogger( @@ -37,9 +44,9 @@ def __init__(self, tg_node: Node, config: TrafficGeneratorConfig): ) def send_packet(self, packet: Packet, port: Port) -> None: - """Send a packet and block until it is fully sent. 
+ """Send `packet` and block until it is fully sent. - What fully sent means is defined by the traffic generator. + Send `packet` on `port`, then wait until `packet` is fully sent. Args: packet: The packet to send. @@ -48,9 +55,9 @@ def send_packet(self, packet: Packet, port: Port) -> None: self.send_packets([packet], port) def send_packets(self, packets: list[Packet], port: Port) -> None: - """Send packets and block until they are fully sent. + """Send `packets` and block until they are fully sent. - What fully sent means is defined by the traffic generator. + Send `packets` on `port`, then wait until `packets` are fully sent. Args: packets: The packets to send. @@ -62,19 +69,17 @@ def send_packets(self, packets: list[Packet], port: Port) -> None: @abstractmethod def _send_packets(self, packets: list[Packet], port: Port) -> None: - """ - The extended classes must implement this method which - sends packets on send_port. The method should block until all packets - are fully sent. + """The implementation of :meth:`send_packets`. + + The subclasses must implement this method which sends `packets` on `port`. + The method should block until all `packets` are fully sent. + + What fully sent means is defined by the traffic generator. """ @property def is_capturing(self) -> bool: - """Whether this traffic generator can capture traffic. - - Returns: - True if the traffic generator can capture traffic, False otherwise.
- """ + """This traffic generator can't capture traffic.""" return False @abstractmethod From patchwork Wed Nov 15 13:09:58 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Juraj_Linke=C5=A1?= X-Patchwork-Id: 134397 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id A3C6F43339; Wed, 15 Nov 2023 14:14:28 +0100 (CET) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 60B54427E0; Wed, 15 Nov 2023 14:12:24 +0100 (CET) Received: from mail-ej1-f46.google.com (mail-ej1-f46.google.com [209.85.218.46]) by mails.dpdk.org (Postfix) with ESMTP id D966B410F6 for ; Wed, 15 Nov 2023 14:12:17 +0100 (CET) Received: by mail-ej1-f46.google.com with SMTP id a640c23a62f3a-9f26ee4a6e5so119105866b.2 for ; Wed, 15 Nov 2023 05:12:17 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=pantheon.tech; s=google; t=1700053937; x=1700658737; darn=dpdk.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=l+YRs3S0yhFtTSeinZqTtWuwCfuutnSDZBMU3VA9kyM=; b=IJ1EpIc1C28/DnBEdhdAPK+D5b+nCpCpZTUyRTG8lg1Qlbn7eFDhdVdzY5nFusXPma /DUnbgNGhx6JPFSlJIvUKVzKhpXbJONoncGeklWJQgRNDTkaAgIhlVj8vrn1puwd32IF dlFZNhhGerQrJveI9lzYgw4E0dt+TctxMmDuwTS7ManwFyBd4E1YC1ow78YE3sf5oK5U 8eB4iskJD7ZjXrQWpn6bISOBNmF+D/984Mky+LA+e0B3rgs4bC3Cpo6GZHQKSjuq6tBk q5J3r1F+mrFItXWXXYWERGdGV6Ds7irIC4G7kGYfhRuZ5s/okWXVPChdfmqUj2vJ/1e6 iqZA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1700053937; x=1700658737; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; 
bh=l+YRs3S0yhFtTSeinZqTtWuwCfuutnSDZBMU3VA9kyM=; b=OqU5lIWi1AcxB1NOpx9exUckyECddOtEybkKa3gOIIGPXexfvkRMHkln++sgOTFwb/ g/Uy0XZ9c6CBNwO2iBMHrQrrk22R8Po33GPR+OPPql4X3pbTmyJNc3GLELoJLBOWbAAi Yt0bhKkp7IgA+5AZcr1TK4+GtGJS+SBXlI9BBmcnSFVz90/1xLX2L9WHgWE1NoA9hGeE O+0N7c0MWQ9SWSF66xC3g6J+cRo82ytNQ2KN3KKIGb5NWshPE5O8XkqWEjOL4QSeaPkk Ad0pz8g8IlQJwjatJrl+umgGcZX8NPaVOwVMBclc6SYx+zw10ovkO9c+O57XrlVePW7d 9Emw== X-Gm-Message-State: AOJu0YyV2QEfRYZI+XrsEdW9K1iEj+KYSoBaFgr2Q5PdJ3vJ1RDWj2kg gYm89L7sH54i2DFN8zs09jOrRg== X-Google-Smtp-Source: AGHT+IFbq2GpKNqHoASLWbj06Lfr7P9IgSIlSrn3dxBxfy0ALEegNGaJ71p5JKL/whozMM1mH/IQaw== X-Received: by 2002:a17:906:f2c3:b0:9bd:a738:2bfe with SMTP id gz3-20020a170906f2c300b009bda7382bfemr9420755ejb.38.1700053937562; Wed, 15 Nov 2023 05:12:17 -0800 (PST) Received: from jlinkes-PT-Latitude-5530.pantheon.local (81.89.53.154.host.vnet.sk. [81.89.53.154]) by smtp.gmail.com with ESMTPSA id tb16-20020a1709078b9000b009f2b7282387sm1011914ejc.46.2023.11.15.05.12.16 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 15 Nov 2023 05:12:16 -0800 (PST) From: =?utf-8?q?Juraj_Linke=C5=A1?= To: thomas@monjalon.net, Honnappa.Nagarahalli@arm.com, jspewock@iol.unh.edu, probb@iol.unh.edu, paul.szczepanek@arm.com, yoan.picchi@foss.arm.com Cc: dev@dpdk.org, =?utf-8?q?Juraj_Linke=C5=A1?= Subject: [PATCH v7 20/21] dts: scapy tg docstring update Date: Wed, 15 Nov 2023 14:09:58 +0100 Message-Id: <20231115130959.39420-21-juraj.linkes@pantheon.tech> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20231115130959.39420-1-juraj.linkes@pantheon.tech> References: <20231108125324.191005-23-juraj.linkes@pantheon.tech> <20231115130959.39420-1-juraj.linkes@pantheon.tech> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Format according to the Google format and PEP257, with slight deviations. 
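As an editorial aside, the Google docstring layout this series converges on (a summary line, a blank line, then indented ``Args:``/``Returns:`` sections) can be sketched with a hypothetical function; the function name and parameters below are illustrative only, not taken from DTS:

```python
def send_packet(packet: bytes, port: int) -> int:
    """Send `packet` and block until it is fully sent.

    The summary line above, the blank line, and the indented sections below
    follow the Google docstring layout parsed by Sphinx's napoleon extension.

    Args:
        packet: The raw packet bytes to send.
        port: The egress port to send the packet on.

    Returns:
        The number of bytes sent.
    """
    # A stand-in body so the sketch is runnable.
    return len(packet)
```

PEP 257 additionally asks for the closing quotes of a multi-line docstring to sit on their own line, which the sketch follows.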
Signed-off-by: Juraj Linkeš --- .../testbed_model/traffic_generator/scapy.py | 91 +++++++++++-------- 1 file changed, 54 insertions(+), 37 deletions(-) diff --git a/dts/framework/testbed_model/traffic_generator/scapy.py b/dts/framework/testbed_model/traffic_generator/scapy.py index 51864b6e6b..ed4f879925 100644 --- a/dts/framework/testbed_model/traffic_generator/scapy.py +++ b/dts/framework/testbed_model/traffic_generator/scapy.py @@ -2,14 +2,15 @@ # Copyright(c) 2022 University of New Hampshire # Copyright(c) 2023 PANTHEON.tech s.r.o. -"""Scapy traffic generator. +"""The Scapy traffic generator. -Traffic generator used for functional testing, implemented using the Scapy library. +A traffic generator used for functional testing, implemented with +`the Scapy library `_. The traffic generator uses an XML-RPC server to run Scapy on the remote TG node. -The XML-RPC server runs in an interactive remote SSH session running Python console, -where we start the server. The communication with the server is facilitated with -a local server proxy. +The traffic generator uses the :mod:`xmlrpc.server` module to run an XML-RPC server +in an interactive remote Python SSH session. The communication with the server is facilitated +with a local server proxy from the :mod:`xmlrpc.client` module. """ import inspect @@ -69,20 +70,20 @@ def scapy_send_packets_and_capture( recv_iface: str, duration: float, ) -> list[bytes]: - """RPC function to send and capture packets. + """The RPC function to send and capture packets. - The function is meant to be executed on the remote TG node. + The function is meant to be executed on the remote TG node via the server proxy. Args: xmlrpc_packets: The packets to send. These need to be converted to - xmlrpc.client.Binary before sending to the remote server. + :class:`~xmlrpc.client.Binary` objects before sending to the remote server. send_iface: The logical name of the egress interface. recv_iface: The logical name of the ingress interface. 
duration: Capture for this amount of time, in seconds. Returns: A list of bytes. Each item in the list represents one packet, which needs - to be converted back upon transfer from the remote node. + to be converted back upon transfer from the remote node. """ scapy_packets = [scapy.all.Packet(packet.data) for packet in xmlrpc_packets] sniffer = scapy.all.AsyncSniffer( @@ -98,19 +99,15 @@ def scapy_send_packets_and_capture( def scapy_send_packets( xmlrpc_packets: list[xmlrpc.client.Binary], send_iface: str ) -> None: - """RPC function to send packets. + """The RPC function to send packets. - The function is meant to be executed on the remote TG node. - It doesn't return anything, only sends packets. + The function is meant to be executed on the remote TG node via the server proxy. + It only sends `xmlrpc_packets`, without capturing them. Args: xmlrpc_packets: The packets to send. These need to be converted to - xmlrpc.client.Binary before sending to the remote server. + :class:`~xmlrpc.client.Binary` objects before sending to the remote server. send_iface: The logical name of the egress interface. - - Returns: - A list of bytes. Each item in the list represents one packet, which needs - to be converted back upon transfer from the remote node. """ scapy_packets = [scapy.all.Packet(packet.data) for packet in xmlrpc_packets] scapy.all.sendp(scapy_packets, iface=send_iface, realtime=True, verbose=True) @@ -130,11 +127,19 @@ def scapy_send_packets( class QuittableXMLRPCServer(SimpleXMLRPCServer): - """Basic XML-RPC server that may be extended - by functions serializable by the marshal module. + r"""Basic XML-RPC server. + + The server may be augmented by functions serializable by the :mod:`marshal` module. """ def __init__(self, *args, **kwargs): + """Extend the XML-RPC server initialization. + + Args: + args: The positional arguments that will be passed to the superclass's constructor. + kwargs: The keyword arguments that will be passed to the superclass's constructor. 
+ The `allow_none` argument will be set to :data:`True`. + """ kwargs["allow_none"] = True super().__init__(*args, **kwargs) self.register_introspection_functions() @@ -142,13 +147,12 @@ def __init__(self, *args, **kwargs): self.register_function(self.add_rpc_function) def quit(self) -> None: + """Quit the server.""" self._BaseServer__shutdown_request = True return None def add_rpc_function(self, name: str, function_bytes: xmlrpc.client.Binary) -> None: - """Add a function to the server. - - This is meant to be executed remotely. + """Add a function to the server from the local server proxy. Args: name: The name of the function. @@ -159,6 +163,11 @@ def add_rpc_function(self, name: str, function_bytes: xmlrpc.client.Binary) -> N self.register_function(function) def serve_forever(self, poll_interval: float = 0.5) -> None: + """Extend the superclass method with an additional print. + + Once executed in the local server proxy, the print gives us a clear string to expect + when starting the server. The print means the function was executed on the XML-RPC server. + """ print("XMLRPC OK") super().serve_forever(poll_interval) @@ -166,19 +175,12 @@ def serve_forever(self, poll_interval: float = 0.5) -> None: class ScapyTrafficGenerator(CapturingTrafficGenerator): """Provides access to scapy functions via an RPC interface. - The traffic generator first starts an XML-RPC on the remote TG node. - Then it populates the server with functions which use the Scapy library - to send/receive traffic. - - Any packets sent to the remote server are first converted to bytes. - They are received as xmlrpc.client.Binary objects on the server side. - When the server sends the packets back, they are also received as - xmlrpc.client.Binary object on the client side, are converted back to Scapy - packets and only then returned from the methods. + The class extends the base with remote execution of scapy functions. - Arguments: - tg_node: The node where the traffic generator resides. 
- config: The user configuration of the traffic generator. + Any packets sent to the remote server are first converted to bytes. They are received as + :class:`~xmlrpc.client.Binary` objects on the server side. When the server sends the packets + back, they are also received as :class:`~xmlrpc.client.Binary` objects on the client side, are + converted back to :class:`scapy.packet.Packet` objects and only then returned from the methods. Attributes: session: The exclusive interactive remote session created by the Scapy @@ -192,6 +194,22 @@ class ScapyTrafficGenerator(CapturingTrafficGenerator): _config: ScapyTrafficGeneratorConfig def __init__(self, tg_node: Node, config: ScapyTrafficGeneratorConfig): + """Extend the constructor with Scapy TG specifics. + + The traffic generator first starts an XML-RPC server on the remote `tg_node`. + Then it populates the server with functions which use the Scapy library + to send/receive traffic: + + * :func:`scapy_send_packets_and_capture` + * :func:`scapy_send_packets` + + To enable verbose logging from the xmlrpc client, use the :option:`--verbose` + command line argument or the :envvar:`DTS_VERBOSE` environment variable. + + Args: + tg_node: The node where the traffic generator resides. + config: The traffic generator's test run configuration.
+ """ super().__init__(tg_node, config) assert ( @@ -237,10 +255,8 @@ def _start_xmlrpc_server_in_remote_python(self, listen_port: int) -> None: [line for line in src.splitlines() if not line.isspace() and line != ""] ) - spacing = "\n" * 4 - # execute it in the python terminal - self.session.send_command(spacing + src + spacing) + self.session.send_command(src + "\n") self.session.send_command( f"server = QuittableXMLRPCServer(('0.0.0.0', {listen_port}));" f"server.serve_forever()", @@ -274,6 +290,7 @@ def _send_packets_and_capture( return scapy_packets def close(self) -> None: + """Close the traffic generator.""" try: self.rpc_server_proxy.quit() except ConnectionRefusedError: From patchwork Wed Nov 15 13:09:59 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Juraj_Linke=C5=A1?= X-Patchwork-Id: 134398 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 55CD643339; Wed, 15 Nov 2023 14:14:35 +0100 (CET) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 880CE427E5; Wed, 15 Nov 2023 14:12:25 +0100 (CET) Received: from mail-ej1-f53.google.com (mail-ej1-f53.google.com [209.85.218.53]) by mails.dpdk.org (Postfix) with ESMTP id 1669940ED0 for ; Wed, 15 Nov 2023 14:12:19 +0100 (CET) Received: by mail-ej1-f53.google.com with SMTP id a640c23a62f3a-9e5dd91b0acso827357666b.1 for ; Wed, 15 Nov 2023 05:12:19 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=pantheon.tech; s=google; t=1700053939; x=1700658739; darn=dpdk.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=tm6Opm89TVep6rTOjKIkP+0MrI/vnwJrCVP9TMX0aHA=; 
b=rC80orYDDbX7yCSq3c7mj5oE7kH3jSAsb/jVoWWFaNaD3cJD49B3Uc37tUhhnqZ1my zgqtneiT3VWiHmAhvDzjnTVrlVtxSn76vG+39Xr9HlZTbjl99yZ/YJ/c6hjXvGxdGRpU TblVXQltZjSGw0vBocqaa4xZjH3eJdus02e+hALhH6+ip0865Y+vqvOu86lviJklxRKi 69hMOV4kRWZIfuxz96N/Hu2E0tj95zJ7TiMZV3jcg5Qe14fuf8tu+UNTWIhykA1kHDgm e8M7GnbuF8sOhkMPBu4EKvKTkzv8vYfENO0TtiGYVIZW7HBD96fLNV/FKtOT0rZHAtI8 QLDA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1700053939; x=1700658739; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=tm6Opm89TVep6rTOjKIkP+0MrI/vnwJrCVP9TMX0aHA=; b=BJNIeapPvDd49pw9oCkuIym+IYGAfx4mSbaUNT5eifCBKsAP8viANGFbz9czXi4NNX TkMkNDU/kQfWfP6AqDbf1twD9YEEvVl5Ju4ahhBnTOUNEZ5iFjpkH9SloLV2UBDKb2hA d/eVb32yF+xejfh2F1s3WxaYtrV4c4TgRVIYuI9d1NP2tf71UN1SLflzfdnbIqobU8oT XbPASNEs/NHFyTrk5s65dNqvoTvIFZVXVlwMN0i5MTYIW3vf5hqMtItrurqFZa0znqz+ UCbR2mCCOp1ce/WwhoW59ENunhWvA4Dg/Mr/qLcuuNAJedp3fKtVuivVoHrUAWs6HOBl hNOg== X-Gm-Message-State: AOJu0YwWzqDuunXWIINvSiEYb07/ib4HkCuyisj+7OM41oMcrRGnT4ep Vo5VZ9htBW7vLIsnuinMN7M5oA== X-Google-Smtp-Source: AGHT+IFszhz9Zg+FIkzpb7rELnYe780Tpe7pKLRhbAQYks5NyXPzT9ns3CtSY4+Arc6HX6x/hxx/RA== X-Received: by 2002:a17:906:708f:b0:9e5:2ab3:b74e with SMTP id b15-20020a170906708f00b009e52ab3b74emr9818130ejk.75.1700053938570; Wed, 15 Nov 2023 05:12:18 -0800 (PST) Received: from jlinkes-PT-Latitude-5530.pantheon.local (81.89.53.154.host.vnet.sk. 
[81.89.53.154]) by smtp.gmail.com with ESMTPSA id tb16-20020a1709078b9000b009f2b7282387sm1011914ejc.46.2023.11.15.05.12.17 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 15 Nov 2023 05:12:18 -0800 (PST) From: =?utf-8?q?Juraj_Linke=C5=A1?= To: thomas@monjalon.net, Honnappa.Nagarahalli@arm.com, jspewock@iol.unh.edu, probb@iol.unh.edu, paul.szczepanek@arm.com, yoan.picchi@foss.arm.com Cc: dev@dpdk.org, =?utf-8?q?Juraj_Linke=C5=A1?= Subject: [PATCH v7 21/21] dts: test suites docstring update Date: Wed, 15 Nov 2023 14:09:59 +0100 Message-Id: <20231115130959.39420-22-juraj.linkes@pantheon.tech> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20231115130959.39420-1-juraj.linkes@pantheon.tech> References: <20231108125324.191005-23-juraj.linkes@pantheon.tech> <20231115130959.39420-1-juraj.linkes@pantheon.tech> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Format according to the Google format and PEP257, with slight deviations. Signed-off-by: Juraj Linkeš --- dts/tests/TestSuite_hello_world.py | 16 +++++---- dts/tests/TestSuite_os_udp.py | 19 +++++++---- dts/tests/TestSuite_smoke_tests.py | 53 +++++++++++++++++++++++++++--- 3 files changed, 70 insertions(+), 18 deletions(-) diff --git a/dts/tests/TestSuite_hello_world.py b/dts/tests/TestSuite_hello_world.py index 7e3d95c0cf..662a8f8726 100644 --- a/dts/tests/TestSuite_hello_world.py +++ b/dts/tests/TestSuite_hello_world.py @@ -1,7 +1,8 @@ # SPDX-License-Identifier: BSD-3-Clause # Copyright(c) 2010-2014 Intel Corporation -""" +"""The DPDK hello world app test suite. + Run the helloworld example app and verify it prints a message for each used core. No other EAL parameters apart from cores are used. 
""" @@ -15,22 +16,25 @@ class TestHelloWorld(TestSuite): + """DPDK hello world app test suite.""" + def set_up_suite(self) -> None: - """ + """Set up the test suite. + Setup: Build the app we're about to test - helloworld. """ self.app_helloworld_path = self.sut_node.build_dpdk_app("helloworld") def test_hello_world_single_core(self) -> None: - """ + """Single core test case. + Steps: Run the helloworld app on the first usable logical core. Verify: The app prints a message from the used core: "hello from core " """ - # get the first usable core lcore_amount = LogicalCoreCount(1, 1, 1) lcores = LogicalCoreCountFilter(self.sut_node.lcores, lcore_amount).filter() @@ -44,14 +48,14 @@ def test_hello_world_single_core(self) -> None: ) def test_hello_world_all_cores(self) -> None: - """ + """All cores test case. + Steps: Run the helloworld app on all usable logical cores. Verify: The app prints a message from all used cores: "hello from core " """ - # get the maximum logical core number eal_para = self.sut_node.create_eal_parameters( lcore_filter_specifier=LogicalCoreList(self.sut_node.lcores) diff --git a/dts/tests/TestSuite_os_udp.py b/dts/tests/TestSuite_os_udp.py index bf6b93deb5..e0c5239612 100644 --- a/dts/tests/TestSuite_os_udp.py +++ b/dts/tests/TestSuite_os_udp.py @@ -1,7 +1,8 @@ # SPDX-License-Identifier: BSD-3-Clause # Copyright(c) 2023 PANTHEON.tech s.r.o. -""" +"""Basic IPv4 OS routing test suite. + Configure SUT node to route traffic from if1 to if2. Send a packet to the SUT node, verify it comes back on the second port on the TG node. """ @@ -13,24 +14,27 @@ class TestOSUdp(TestSuite): + """IPv4 UDP OS routing test suite.""" + def set_up_suite(self) -> None: - """ + """Set up the test suite. + Setup: - Configure SUT ports and SUT to route traffic from if1 to if2. + Bind the SUT ports to the OS driver, configure the ports and configure the SUT + to route traffic from if1 to if2. 
""" - # This test uses kernel drivers self.sut_node.bind_ports_to_driver(for_dpdk=False) self.configure_testbed_ipv4() def test_os_udp(self) -> None: - """ + """Basic UDP IPv4 traffic test case. + Steps: Send a UDP packet. Verify: The packet with proper addresses arrives at the other TG port. """ - packet = Ether() / IP() / UDP() received_packets = self.send_packet_and_capture(packet) @@ -40,7 +44,8 @@ def test_os_udp(self) -> None: self.verify_packets(expected_packet, received_packets) def tear_down_suite(self) -> None: - """ + """Tear down the test suite. + Teardown: Remove the SUT port configuration configured in setup. """ diff --git a/dts/tests/TestSuite_smoke_tests.py b/dts/tests/TestSuite_smoke_tests.py index e8016d1b54..6fae099a0e 100644 --- a/dts/tests/TestSuite_smoke_tests.py +++ b/dts/tests/TestSuite_smoke_tests.py @@ -1,6 +1,17 @@ # SPDX-License-Identifier: BSD-3-Clause # Copyright(c) 2023 University of New Hampshire +"""Smoke test suite. + +Smoke tests are a class of tests which are used for validating a minimal set of important features. +These are the most important features without which (or when they're faulty) the software wouldn't +work properly. Thus, if any failure occurs while testing these features, +there isn't that much of a reason to continue testing, as the software is fundamentally broken. + +These tests don't have to include only DPDK tests, as the reason for failures could be +in the infrastructure (a faulty link between NICs or a misconfiguration). +""" + import re from framework.config import PortConfig @@ -11,13 +22,25 @@ class SmokeTests(TestSuite): + """DPDK and infrastructure smoke test suite. + + The test cases validate the most basic DPDK functionality needed for all other test suites. + The infrastructure also needs to be tested, as that is also used by all other test suites. + + Attributes: + is_blocking: This test suite will block the execution of all other test suites + in the build target after it. 
+        nics_in_node: The NICs present on the SUT node.
+    """
+
     is_blocking = True
     # dicts in this list are expected to have two keys:
     # "pci_address" and "current_driver"
     nics_in_node: list[PortConfig] = []
 
     def set_up_suite(self) -> None:
-        """
+        """Set up the test suite.
+
         Setup:
             Set the build directory path and generate a list of NICs in the SUT node.
         """
@@ -25,7 +48,13 @@ def set_up_suite(self) -> None:
         self.nics_in_node = self.sut_node.config.ports
 
     def test_unit_tests(self) -> None:
-        """
+        """DPDK meson fast-tests unit tests.
+
+        The DPDK unit tests are basic tests that indicate regressions and other critical failures.
+        These need to be addressed before other testing.
+
+        The fast-tests unit tests are a subset with only the most basic tests.
+
         Test:
             Run the fast-test unit-test suite through meson.
         """
@@ -37,7 +66,14 @@ def test_unit_tests(self) -> None:
         )
 
     def test_driver_tests(self) -> None:
-        """
+        """DPDK meson driver-tests unit tests.
+
+        The DPDK unit tests are basic tests that indicate regressions and other critical failures.
+        These need to be addressed before other testing.
+
+        The driver-tests unit tests are a subset that test only drivers. These may be run
+        with virtual devices as well.
+
         Test:
             Run the driver-test unit-test suite through meson.
         """
@@ -63,7 +99,10 @@ def test_driver_tests(self) -> None:
         )
 
     def test_devices_listed_in_testpmd(self) -> None:
-        """
+        """Testpmd device discovery.
+
+        If the configured devices can't be found in testpmd, they can't be tested.
+
         Test:
             Uses testpmd driver to verify that devices have been found by testpmd.
         """
@@ -79,7 +118,11 @@ def test_devices_listed_in_testpmd(self) -> None:
         )
 
     def test_device_bound_to_driver(self) -> None:
-        """
+        """Device driver in OS.
+
+        The devices must be bound to the proper driver, otherwise they can't be used by DPDK
+        or the traffic generators.
+
         Test:
             Ensure that all drivers listed in the config are bound to the correct driver.
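As a note for reviewers unfamiliar with the convention: the Google-format/PEP 257 layout this patch applies can be sketched with a minimal, hypothetical suite. The class and method names below are illustrative only and are not part of DTS; the sketch just shows the summary line, blank line, body, and `Attributes:`/`Steps:`/`Verify:` sections used throughout the diff.

```python
class TestExample:
    """Example test suite.

    A one-line summary, a blank line, then an elaborating paragraph, per PEP 257.

    Attributes:
        is_blocking: Whether failures in this suite stop subsequent suites.
    """

    is_blocking = False

    def test_something(self) -> None:
        """Short summary of the test case.

        Steps:
            Describe what the test does.
        Verify:
            Describe the expected outcome.
        """


# The sections are ordinary docstring text, readable back via __doc__:
doc = TestExample.test_something.__doc__
print("Steps:" in doc and "Verify:" in doc)
```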