From patchwork Thu Nov 23 15:13:24 2023
X-Patchwork-Submitter: Juraj Linkeš
X-Patchwork-Id: 134567
X-Patchwork-Delegate: thomas@monjalon.net
From: Juraj Linkeš
To: thomas@monjalon.net, Honnappa.Nagarahalli@arm.com, jspewock@iol.unh.edu, probb@iol.unh.edu, paul.szczepanek@arm.com, yoan.picchi@foss.arm.com, Luca.Vizzarro@arm.com
Cc: dev@dpdk.org, Juraj Linkeš
Subject: [PATCH v8 01/21] dts: code adjustments for doc generation
Date: Thu, 23 Nov 2023 16:13:24 +0100
Message-Id: <20231123151344.162812-2-juraj.linkes@pantheon.tech>
In-Reply-To: <20231123151344.162812-1-juraj.linkes@pantheon.tech>
References: <20231115130959.39420-1-juraj.linkes@pantheon.tech> <20231123151344.162812-1-juraj.linkes@pantheon.tech>
List-Id: DPDK patches and discussions

The standard Python tool for generating API documentation, Sphinx, imports modules one-by-one when generating the documentation.
This requires code changes:

* properly guarding argument parsing in the if __name__ == '__main__' block,
* the logger used by the DTS runner underwent the same treatment so that it doesn't create log files outside of a DTS run,
* DTS uses the parsed arguments to construct an object holding global variables, so the defaults for those global variables needed to be moved out of argument parsing into the object itself,
* importing the remote_session module from framework resulted in circular imports because of one module trying to import another module. This is fixed by reorganizing the code,
* some code reorganization was done because the resulting structure makes more sense, improving documentation clarity.

There are some other documentation-related changes:

* added missing type annotations so they appear in the generated docs,
* reordered arguments in some methods,
* removed superfluous arguments and attributes,
* changed some functions/methods/attributes from public to private and vice versa.

All of the above appear in the generated documentation, and with them, the documentation is improved.
Signed-off-by: Juraj Linkeš --- dts/framework/config/__init__.py | 8 +- dts/framework/dts.py | 31 +++++-- dts/framework/exception.py | 54 +++++------- dts/framework/remote_session/__init__.py | 41 +++++---- .../interactive_remote_session.py | 0 .../{remote => }/interactive_shell.py | 0 .../{remote => }/python_shell.py | 0 .../remote_session/remote/__init__.py | 27 ------ .../{remote => }/remote_session.py | 0 .../{remote => }/ssh_session.py | 12 +-- .../{remote => }/testpmd_shell.py | 0 dts/framework/settings.py | 85 +++++++++++-------- dts/framework/test_result.py | 4 +- dts/framework/test_suite.py | 7 +- dts/framework/testbed_model/__init__.py | 12 +-- dts/framework/testbed_model/{hw => }/cpu.py | 13 +++ dts/framework/testbed_model/hw/__init__.py | 27 ------ .../linux_session.py | 6 +- dts/framework/testbed_model/node.py | 23 +++-- .../os_session.py | 22 ++--- dts/framework/testbed_model/{hw => }/port.py | 0 .../posix_session.py | 4 +- dts/framework/testbed_model/sut_node.py | 8 +- dts/framework/testbed_model/tg_node.py | 29 +------ .../traffic_generator/__init__.py | 23 +++++ .../capturing_traffic_generator.py | 4 +- .../{ => traffic_generator}/scapy.py | 19 ++--- .../traffic_generator.py | 14 ++- .../testbed_model/{hw => }/virtual_device.py | 0 dts/framework/utils.py | 40 +++------ dts/main.py | 9 +- 31 files changed, 244 insertions(+), 278 deletions(-) rename dts/framework/remote_session/{remote => }/interactive_remote_session.py (100%) rename dts/framework/remote_session/{remote => }/interactive_shell.py (100%) rename dts/framework/remote_session/{remote => }/python_shell.py (100%) delete mode 100644 dts/framework/remote_session/remote/__init__.py rename dts/framework/remote_session/{remote => }/remote_session.py (100%) rename dts/framework/remote_session/{remote => }/ssh_session.py (91%) rename dts/framework/remote_session/{remote => }/testpmd_shell.py (100%) rename dts/framework/testbed_model/{hw => }/cpu.py (95%) delete mode 100644 
dts/framework/testbed_model/hw/__init__.py rename dts/framework/{remote_session => testbed_model}/linux_session.py (97%) rename dts/framework/{remote_session => testbed_model}/os_session.py (95%) rename dts/framework/testbed_model/{hw => }/port.py (100%) rename dts/framework/{remote_session => testbed_model}/posix_session.py (98%) create mode 100644 dts/framework/testbed_model/traffic_generator/__init__.py rename dts/framework/testbed_model/{ => traffic_generator}/capturing_traffic_generator.py (98%) rename dts/framework/testbed_model/{ => traffic_generator}/scapy.py (95%) rename dts/framework/testbed_model/{ => traffic_generator}/traffic_generator.py (81%) rename dts/framework/testbed_model/{hw => }/virtual_device.py (100%) diff --git a/dts/framework/config/__init__.py b/dts/framework/config/__init__.py index 9b32cf0532..ef25a463c0 100644 --- a/dts/framework/config/__init__.py +++ b/dts/framework/config/__init__.py @@ -17,6 +17,7 @@ import warlock # type: ignore[import] import yaml +from framework.exception import ConfigurationError from framework.settings import SETTINGS from framework.utils import StrEnum @@ -89,7 +90,7 @@ class TrafficGeneratorConfig: traffic_generator_type: TrafficGeneratorType @staticmethod - def from_dict(d: dict): + def from_dict(d: dict) -> "ScapyTrafficGeneratorConfig": # This looks useless now, but is designed to allow expansion to traffic # generators that require more configuration later. 
match TrafficGeneratorType(d["type"]): @@ -97,6 +98,8 @@ def from_dict(d: dict): return ScapyTrafficGeneratorConfig( traffic_generator_type=TrafficGeneratorType.SCAPY ) + case _: + raise ConfigurationError(f'Unknown traffic generator type "{d["type"]}".') @dataclass(slots=True, frozen=True) @@ -314,6 +317,3 @@ def load_config() -> Configuration: config: dict[str, Any] = warlock.model_factory(schema, name="_Config")(config_data) config_obj: Configuration = Configuration.from_dict(dict(config)) return config_obj - - -CONFIGURATION = load_config() diff --git a/dts/framework/dts.py b/dts/framework/dts.py index 25d6942d81..356368ef10 100644 --- a/dts/framework/dts.py +++ b/dts/framework/dts.py @@ -6,19 +6,19 @@ import sys from .config import ( - CONFIGURATION, BuildTargetConfiguration, ExecutionConfiguration, TestSuiteConfig, + load_config, ) from .exception import BlockingTestSuiteError from .logger import DTSLOG, getLogger from .test_result import BuildTargetResult, DTSResult, ExecutionResult, Result from .test_suite import get_test_suites from .testbed_model import SutNode, TGNode -from .utils import check_dts_python_version -dts_logger: DTSLOG = getLogger("DTSRunner") +# dummy defaults to satisfy linters +dts_logger: DTSLOG = None # type: ignore[assignment] result: DTSResult = DTSResult(dts_logger) @@ -30,14 +30,18 @@ def run_all() -> None: global dts_logger global result + # create a regular DTS logger and create a new result with it + dts_logger = getLogger("DTSRunner") + result = DTSResult(dts_logger) + # check the python version of the server that run dts - check_dts_python_version() + _check_dts_python_version() sut_nodes: dict[str, SutNode] = {} tg_nodes: dict[str, TGNode] = {} try: # for all Execution sections - for execution in CONFIGURATION.executions: + for execution in load_config().executions: sut_node = sut_nodes.get(execution.system_under_test_node.name) tg_node = tg_nodes.get(execution.traffic_generator_node.name) @@ -82,6 +86,23 @@ def run_all() -> 
None: _exit_dts() +def _check_dts_python_version() -> None: + def RED(text: str) -> str: + return f"\u001B[31;1m{str(text)}\u001B[0m" + + if sys.version_info.major < 3 or (sys.version_info.major == 3 and sys.version_info.minor < 10): + print( + RED( + ( + "WARNING: DTS execution node's python version is lower than" + "python 3.10, is deprecated and will not work in future releases." + ) + ), + file=sys.stderr, + ) + print(RED("Please use Python >= 3.10 instead"), file=sys.stderr) + + def _run_execution( sut_node: SutNode, tg_node: TGNode, diff --git a/dts/framework/exception.py b/dts/framework/exception.py index b362e42924..151e4d3aa9 100644 --- a/dts/framework/exception.py +++ b/dts/framework/exception.py @@ -42,19 +42,14 @@ class SSHTimeoutError(DTSError): Command execution timeout. """ - command: str - output: str severity: ClassVar[ErrorSeverity] = ErrorSeverity.SSH_ERR + _command: str - def __init__(self, command: str, output: str): - self.command = command - self.output = output + def __init__(self, command: str): + self._command = command def __str__(self) -> str: - return f"TIMEOUT on {self.command}" - - def get_output(self) -> str: - return self.output + return f"TIMEOUT on {self._command}" class SSHConnectionError(DTSError): @@ -62,18 +57,18 @@ class SSHConnectionError(DTSError): SSH connection error. """ - host: str - errors: list[str] severity: ClassVar[ErrorSeverity] = ErrorSeverity.SSH_ERR + _host: str + _errors: list[str] def __init__(self, host: str, errors: list[str] | None = None): - self.host = host - self.errors = [] if errors is None else errors + self._host = host + self._errors = [] if errors is None else errors def __str__(self) -> str: - message = f"Error trying to connect with {self.host}." - if self.errors: - message += f" Errors encountered while retrying: {', '.join(self.errors)}" + message = f"Error trying to connect with {self._host}." 
+ if self._errors: + message += f" Errors encountered while retrying: {', '.join(self._errors)}" return message @@ -84,14 +79,14 @@ class SSHSessionDeadError(DTSError): It can no longer be used. """ - host: str severity: ClassVar[ErrorSeverity] = ErrorSeverity.SSH_ERR + _host: str def __init__(self, host: str): - self.host = host + self._host = host def __str__(self) -> str: - return f"SSH session with {self.host} has died" + return f"SSH session with {self._host} has died" class ConfigurationError(DTSError): @@ -107,16 +102,16 @@ class RemoteCommandExecutionError(DTSError): Raised when a command executed on a Node returns a non-zero exit status. """ - command: str - command_return_code: int severity: ClassVar[ErrorSeverity] = ErrorSeverity.REMOTE_CMD_EXEC_ERR + command: str + _command_return_code: int def __init__(self, command: str, command_return_code: int): self.command = command - self.command_return_code = command_return_code + self._command_return_code = command_return_code def __str__(self) -> str: - return f"Command {self.command} returned a non-zero exit code: {self.command_return_code}" + return f"Command {self.command} returned a non-zero exit code: {self._command_return_code}" class RemoteDirectoryExistsError(DTSError): @@ -140,22 +135,15 @@ class TestCaseVerifyError(DTSError): Used in test cases to verify the expected behavior. """ - value: str severity: ClassVar[ErrorSeverity] = ErrorSeverity.TESTCASE_VERIFY_ERR - def __init__(self, value: str): - self.value = value - - def __str__(self) -> str: - return repr(self.value) - class BlockingTestSuiteError(DTSError): - suite_name: str severity: ClassVar[ErrorSeverity] = ErrorSeverity.BLOCKING_TESTSUITE_ERR + _suite_name: str def __init__(self, suite_name: str) -> None: - self.suite_name = suite_name + self._suite_name = suite_name def __str__(self) -> str: - return f"Blocking suite {self.suite_name} failed." + return f"Blocking suite {self._suite_name} failed." 
diff --git a/dts/framework/remote_session/__init__.py b/dts/framework/remote_session/__init__.py index 6124417bd7..5e7ddb2b05 100644 --- a/dts/framework/remote_session/__init__.py +++ b/dts/framework/remote_session/__init__.py @@ -12,27 +12,24 @@ # pylama:ignore=W0611 -from framework.config import OS, NodeConfiguration -from framework.exception import ConfigurationError +from framework.config import NodeConfiguration from framework.logger import DTSLOG -from .linux_session import LinuxSession -from .os_session import InteractiveShellType, OSSession -from .remote import ( - CommandResult, - InteractiveRemoteSession, - InteractiveShell, - PythonShell, - RemoteSession, - SSHSession, - TestPmdDevice, - TestPmdShell, -) - - -def create_session(node_config: NodeConfiguration, name: str, logger: DTSLOG) -> OSSession: - match node_config.os: - case OS.linux: - return LinuxSession(node_config, name, logger) - case _: - raise ConfigurationError(f"Unsupported OS {node_config.os}") +from .interactive_remote_session import InteractiveRemoteSession +from .interactive_shell import InteractiveShell +from .python_shell import PythonShell +from .remote_session import CommandResult, RemoteSession +from .ssh_session import SSHSession +from .testpmd_shell import TestPmdShell + + +def create_remote_session( + node_config: NodeConfiguration, name: str, logger: DTSLOG +) -> RemoteSession: + return SSHSession(node_config, name, logger) + + +def create_interactive_session( + node_config: NodeConfiguration, logger: DTSLOG +) -> InteractiveRemoteSession: + return InteractiveRemoteSession(node_config, logger) diff --git a/dts/framework/remote_session/remote/interactive_remote_session.py b/dts/framework/remote_session/interactive_remote_session.py similarity index 100% rename from dts/framework/remote_session/remote/interactive_remote_session.py rename to dts/framework/remote_session/interactive_remote_session.py diff --git a/dts/framework/remote_session/remote/interactive_shell.py 
b/dts/framework/remote_session/interactive_shell.py similarity index 100% rename from dts/framework/remote_session/remote/interactive_shell.py rename to dts/framework/remote_session/interactive_shell.py diff --git a/dts/framework/remote_session/remote/python_shell.py b/dts/framework/remote_session/python_shell.py similarity index 100% rename from dts/framework/remote_session/remote/python_shell.py rename to dts/framework/remote_session/python_shell.py diff --git a/dts/framework/remote_session/remote/__init__.py b/dts/framework/remote_session/remote/__init__.py deleted file mode 100644 index 06403691a5..0000000000 --- a/dts/framework/remote_session/remote/__init__.py +++ /dev/null @@ -1,27 +0,0 @@ -# SPDX-License-Identifier: BSD-3-Clause -# Copyright(c) 2023 PANTHEON.tech s.r.o. -# Copyright(c) 2023 University of New Hampshire - -# pylama:ignore=W0611 - -from framework.config import NodeConfiguration -from framework.logger import DTSLOG - -from .interactive_remote_session import InteractiveRemoteSession -from .interactive_shell import InteractiveShell -from .python_shell import PythonShell -from .remote_session import CommandResult, RemoteSession -from .ssh_session import SSHSession -from .testpmd_shell import TestPmdDevice, TestPmdShell - - -def create_remote_session( - node_config: NodeConfiguration, name: str, logger: DTSLOG -) -> RemoteSession: - return SSHSession(node_config, name, logger) - - -def create_interactive_session( - node_config: NodeConfiguration, logger: DTSLOG -) -> InteractiveRemoteSession: - return InteractiveRemoteSession(node_config, logger) diff --git a/dts/framework/remote_session/remote/remote_session.py b/dts/framework/remote_session/remote_session.py similarity index 100% rename from dts/framework/remote_session/remote/remote_session.py rename to dts/framework/remote_session/remote_session.py diff --git a/dts/framework/remote_session/remote/ssh_session.py b/dts/framework/remote_session/ssh_session.py similarity index 91% rename from 
dts/framework/remote_session/remote/ssh_session.py rename to dts/framework/remote_session/ssh_session.py index 1a7ee649ab..a467033a13 100644 --- a/dts/framework/remote_session/remote/ssh_session.py +++ b/dts/framework/remote_session/ssh_session.py @@ -18,9 +18,7 @@ SSHException, ) -from framework.config import NodeConfiguration from framework.exception import SSHConnectionError, SSHSessionDeadError, SSHTimeoutError -from framework.logger import DTSLOG from .remote_session import CommandResult, RemoteSession @@ -45,14 +43,6 @@ class SSHSession(RemoteSession): session: Connection - def __init__( - self, - node_config: NodeConfiguration, - session_name: str, - logger: DTSLOG, - ): - super(SSHSession, self).__init__(node_config, session_name, logger) - def _connect(self) -> None: errors = [] retry_attempts = 10 @@ -111,7 +101,7 @@ def _send_command(self, command: str, timeout: float, env: dict | None) -> Comma except CommandTimedOut as e: self._logger.exception(e) - raise SSHTimeoutError(command, e.result.stderr) from e + raise SSHTimeoutError(command) from e return CommandResult(self.name, command, output.stdout, output.stderr, output.return_code) diff --git a/dts/framework/remote_session/remote/testpmd_shell.py b/dts/framework/remote_session/testpmd_shell.py similarity index 100% rename from dts/framework/remote_session/remote/testpmd_shell.py rename to dts/framework/remote_session/testpmd_shell.py diff --git a/dts/framework/settings.py b/dts/framework/settings.py index 974793a11a..25b5dcff22 100644 --- a/dts/framework/settings.py +++ b/dts/framework/settings.py @@ -6,7 +6,7 @@ import argparse import os from collections.abc import Callable, Iterable, Sequence -from dataclasses import dataclass +from dataclasses import dataclass, field from pathlib import Path from typing import Any, TypeVar @@ -22,8 +22,8 @@ def __init__( option_strings: Sequence[str], dest: str, nargs: str | int | None = None, - const: str | None = None, - default: str = None, + const: bool | None = 
None, + default: Any = None, type: Callable[[str], _T | argparse.FileType | None] = None, choices: Iterable[_T] | None = None, required: bool = False, @@ -32,6 +32,12 @@ def __init__( ) -> None: env_var_value = os.environ.get(env_var) default = env_var_value or default + if const is not None: + nargs = 0 + default = const if env_var_value else default + type = None + choices = None + metavar = None super(_EnvironmentArgument, self).__init__( option_strings, dest, @@ -52,22 +58,28 @@ def __call__( values: Any, option_string: str = None, ) -> None: - setattr(namespace, self.dest, values) + if self.const is not None: + setattr(namespace, self.dest, self.const) + else: + setattr(namespace, self.dest, values) return _EnvironmentArgument -@dataclass(slots=True, frozen=True) -class _Settings: - config_file_path: str - output_dir: str - timeout: float - verbose: bool - skip_setup: bool - dpdk_tarball_path: Path - compile_timeout: float - test_cases: list - re_run: int +@dataclass(slots=True) +class Settings: + config_file_path: Path = Path(__file__).parent.parent.joinpath("conf.yaml") + output_dir: str = "output" + timeout: float = 15 + verbose: bool = False + skip_setup: bool = False + dpdk_tarball_path: Path | str = "dpdk.tar.xz" + compile_timeout: float = 1200 + test_cases: list[str] = field(default_factory=list) + re_run: int = 0 + + +SETTINGS: Settings = Settings() def _get_parser() -> argparse.ArgumentParser: @@ -80,7 +92,8 @@ def _get_parser() -> argparse.ArgumentParser: parser.add_argument( "--config-file", action=_env_arg("DTS_CFG_FILE"), - default="conf.yaml", + default=SETTINGS.config_file_path, + type=Path, help="[DTS_CFG_FILE] configuration file that describes the test cases, SUTs and targets.", ) @@ -88,7 +101,7 @@ def _get_parser() -> argparse.ArgumentParser: "--output-dir", "--output", action=_env_arg("DTS_OUTPUT_DIR"), - default="output", + default=SETTINGS.output_dir, help="[DTS_OUTPUT_DIR] Output directory where dts logs and results are saved.", ) @@ 
-96,7 +109,7 @@ def _get_parser() -> argparse.ArgumentParser: "-t", "--timeout", action=_env_arg("DTS_TIMEOUT"), - default=15, + default=SETTINGS.timeout, type=float, help="[DTS_TIMEOUT] The default timeout for all DTS operations except for compiling DPDK.", ) @@ -105,8 +118,9 @@ def _get_parser() -> argparse.ArgumentParser: "-v", "--verbose", action=_env_arg("DTS_VERBOSE"), - default="N", - help="[DTS_VERBOSE] Set to 'Y' to enable verbose output, logging all messages " + default=SETTINGS.verbose, + const=True, + help="[DTS_VERBOSE] Specify to enable verbose output, logging all messages " "to the console.", ) @@ -114,8 +128,8 @@ def _get_parser() -> argparse.ArgumentParser: "-s", "--skip-setup", action=_env_arg("DTS_SKIP_SETUP"), - default="N", - help="[DTS_SKIP_SETUP] Set to 'Y' to skip all setup steps on SUT and TG nodes.", + const=True, + help="[DTS_SKIP_SETUP] Specify to skip all setup steps on SUT and TG nodes.", ) parser.add_argument( @@ -123,7 +137,7 @@ def _get_parser() -> argparse.ArgumentParser: "--snapshot", "--git-ref", action=_env_arg("DTS_DPDK_TARBALL"), - default="dpdk.tar.xz", + default=SETTINGS.dpdk_tarball_path, type=Path, help="[DTS_DPDK_TARBALL] Path to DPDK source code tarball or a git commit ID, " "tag ID or tree ID to test. 
To test local changes, first commit them, " @@ -133,7 +147,7 @@ def _get_parser() -> argparse.ArgumentParser: parser.add_argument( "--compile-timeout", action=_env_arg("DTS_COMPILE_TIMEOUT"), - default=1200, + default=SETTINGS.compile_timeout, type=float, help="[DTS_COMPILE_TIMEOUT] The timeout for compiling DPDK.", ) @@ -150,7 +164,7 @@ def _get_parser() -> argparse.ArgumentParser: "--re-run", "--re_run", action=_env_arg("DTS_RERUN"), - default=0, + default=SETTINGS.re_run, type=int, help="[DTS_RERUN] Re-run each test case the specified amount of times " "if a test failure occurs", @@ -159,21 +173,20 @@ def _get_parser() -> argparse.ArgumentParser: return parser -def _get_settings() -> _Settings: +def get_settings() -> Settings: parsed_args = _get_parser().parse_args() - return _Settings( + return Settings( config_file_path=parsed_args.config_file, output_dir=parsed_args.output_dir, timeout=parsed_args.timeout, - verbose=(parsed_args.verbose == "Y"), - skip_setup=(parsed_args.skip_setup == "Y"), - dpdk_tarball_path=Path(DPDKGitTarball(parsed_args.tarball, parsed_args.output_dir)) - if not os.path.exists(parsed_args.tarball) - else Path(parsed_args.tarball), + verbose=parsed_args.verbose, + skip_setup=parsed_args.skip_setup, + dpdk_tarball_path=Path( + Path(DPDKGitTarball(parsed_args.tarball, parsed_args.output_dir)) + if not os.path.exists(parsed_args.tarball) + else Path(parsed_args.tarball) + ), compile_timeout=parsed_args.compile_timeout, - test_cases=parsed_args.test_cases.split(",") if parsed_args.test_cases else [], + test_cases=(parsed_args.test_cases.split(",") if parsed_args.test_cases else []), re_run=parsed_args.re_run, ) - - -SETTINGS: _Settings = _get_settings() diff --git a/dts/framework/test_result.py b/dts/framework/test_result.py index 4c2e7e2418..57090feb04 100644 --- a/dts/framework/test_result.py +++ b/dts/framework/test_result.py @@ -246,7 +246,7 @@ def add_build_target(self, build_target: BuildTargetConfiguration) -> BuildTarge 
self._inner_results.append(build_target_result) return build_target_result - def add_sut_info(self, sut_info: NodeInfo): + def add_sut_info(self, sut_info: NodeInfo) -> None: self.sut_os_name = sut_info.os_name self.sut_os_version = sut_info.os_version self.sut_kernel_version = sut_info.kernel_version @@ -289,7 +289,7 @@ def add_execution(self, sut_node: NodeConfiguration) -> ExecutionResult: self._inner_results.append(execution_result) return execution_result - def add_error(self, error) -> None: + def add_error(self, error: Exception) -> None: self._errors.append(error) def process(self) -> None: diff --git a/dts/framework/test_suite.py b/dts/framework/test_suite.py index 4a7907ec33..f9e66e814a 100644 --- a/dts/framework/test_suite.py +++ b/dts/framework/test_suite.py @@ -11,7 +11,7 @@ import re from ipaddress import IPv4Interface, IPv6Interface, ip_interface from types import MethodType -from typing import Union +from typing import Any, Union from scapy.layers.inet import IP # type: ignore[import] from scapy.layers.l2 import Ether # type: ignore[import] @@ -26,8 +26,7 @@ from .logger import DTSLOG, getLogger from .settings import SETTINGS from .test_result import BuildTargetResult, Result, TestCaseResult, TestSuiteResult -from .testbed_model import SutNode, TGNode -from .testbed_model.hw.port import Port, PortLink +from .testbed_model import Port, PortLink, SutNode, TGNode from .utils import get_packet_summaries @@ -426,7 +425,7 @@ def _execute_test_case( def get_test_suites(testsuite_module_path: str) -> list[type[TestSuite]]: - def is_test_suite(object) -> bool: + def is_test_suite(object: Any) -> bool: try: if issubclass(object, TestSuite) and object is not TestSuite: return True diff --git a/dts/framework/testbed_model/__init__.py b/dts/framework/testbed_model/__init__.py index 5cbb859e47..8ced05653b 100644 --- a/dts/framework/testbed_model/__init__.py +++ b/dts/framework/testbed_model/__init__.py @@ -9,15 +9,9 @@ # pylama:ignore=W0611 -from .hw import ( - 
LogicalCore, - LogicalCoreCount, - LogicalCoreCountFilter, - LogicalCoreList, - LogicalCoreListFilter, - VirtualDevice, - lcore_filter, -) +from .cpu import LogicalCoreCount, LogicalCoreCountFilter, LogicalCoreList from .node import Node +from .port import Port, PortLink from .sut_node import SutNode from .tg_node import TGNode +from .virtual_device import VirtualDevice diff --git a/dts/framework/testbed_model/hw/cpu.py b/dts/framework/testbed_model/cpu.py similarity index 95% rename from dts/framework/testbed_model/hw/cpu.py rename to dts/framework/testbed_model/cpu.py index cbc5fe7fff..1b392689f5 100644 --- a/dts/framework/testbed_model/hw/cpu.py +++ b/dts/framework/testbed_model/cpu.py @@ -262,3 +262,16 @@ def filter(self) -> list[LogicalCore]: ) return filtered_lcores + + +def lcore_filter( + core_list: list[LogicalCore], + filter_specifier: LogicalCoreCount | LogicalCoreList, + ascending: bool, +) -> LogicalCoreFilter: + if isinstance(filter_specifier, LogicalCoreList): + return LogicalCoreListFilter(core_list, filter_specifier, ascending) + elif isinstance(filter_specifier, LogicalCoreCount): + return LogicalCoreCountFilter(core_list, filter_specifier, ascending) + else: + raise ValueError(f"Unsupported filter r{filter_specifier}") diff --git a/dts/framework/testbed_model/hw/__init__.py b/dts/framework/testbed_model/hw/__init__.py deleted file mode 100644 index 88ccac0b0e..0000000000 --- a/dts/framework/testbed_model/hw/__init__.py +++ /dev/null @@ -1,27 +0,0 @@ -# SPDX-License-Identifier: BSD-3-Clause -# Copyright(c) 2023 PANTHEON.tech s.r.o. 
- -# pylama:ignore=W0611 - -from .cpu import ( - LogicalCore, - LogicalCoreCount, - LogicalCoreCountFilter, - LogicalCoreFilter, - LogicalCoreList, - LogicalCoreListFilter, -) -from .virtual_device import VirtualDevice - - -def lcore_filter( - core_list: list[LogicalCore], - filter_specifier: LogicalCoreCount | LogicalCoreList, - ascending: bool, -) -> LogicalCoreFilter: - if isinstance(filter_specifier, LogicalCoreList): - return LogicalCoreListFilter(core_list, filter_specifier, ascending) - elif isinstance(filter_specifier, LogicalCoreCount): - return LogicalCoreCountFilter(core_list, filter_specifier, ascending) - else: - raise ValueError(f"Unsupported filter r{filter_specifier}") diff --git a/dts/framework/remote_session/linux_session.py b/dts/framework/testbed_model/linux_session.py similarity index 97% rename from dts/framework/remote_session/linux_session.py rename to dts/framework/testbed_model/linux_session.py index fd877fbfae..055765ba2d 100644 --- a/dts/framework/remote_session/linux_session.py +++ b/dts/framework/testbed_model/linux_session.py @@ -9,10 +9,10 @@ from typing_extensions import NotRequired from framework.exception import RemoteCommandExecutionError -from framework.testbed_model import LogicalCore -from framework.testbed_model.hw.port import Port from framework.utils import expand_range +from .cpu import LogicalCore +from .port import Port from .posix_session import PosixSession @@ -64,7 +64,7 @@ def get_remote_cpus(self, use_first_core: bool) -> list[LogicalCore]: lcores.append(LogicalCore(lcore, core, socket, node)) return lcores - def get_dpdk_file_prefix(self, dpdk_prefix) -> str: + def get_dpdk_file_prefix(self, dpdk_prefix: str) -> str: return dpdk_prefix def setup_hugepages(self, hugepage_amount: int, force_first_numa: bool) -> None: diff --git a/dts/framework/testbed_model/node.py b/dts/framework/testbed_model/node.py index ef700d8114..b313b5ad54 100644 --- a/dts/framework/testbed_model/node.py +++ 
b/dts/framework/testbed_model/node.py @@ -12,23 +12,26 @@ from typing import Any, Callable, Type, Union from framework.config import ( + OS, BuildTargetConfiguration, ExecutionConfiguration, NodeConfiguration, ) +from framework.exception import ConfigurationError from framework.logger import DTSLOG, getLogger -from framework.remote_session import InteractiveShellType, OSSession, create_session from framework.settings import SETTINGS -from .hw import ( +from .cpu import ( LogicalCore, LogicalCoreCount, LogicalCoreList, LogicalCoreListFilter, - VirtualDevice, lcore_filter, ) -from .hw.port import Port +from .linux_session import LinuxSession +from .os_session import InteractiveShellType, OSSession +from .port import Port +from .virtual_device import VirtualDevice class Node(ABC): @@ -168,9 +171,9 @@ def create_interactive_shell( return self.main_session.create_interactive_shell( shell_cls, - app_args, timeout, privileged, + app_args, ) def filter_lcores( @@ -201,7 +204,7 @@ def _get_remote_cpus(self) -> None: self._logger.info("Getting CPU information.") self.lcores = self.main_session.get_remote_cpus(self.config.use_first_core) - def _setup_hugepages(self): + def _setup_hugepages(self) -> None: """ Setup hugepages on the Node. 
Different architectures can supply different amounts of memory for hugepages and numa-based hugepage allocation may need @@ -245,3 +248,11 @@ def skip_setup(func: Callable[..., Any]) -> Callable[..., Any]: return lambda *args: None else: return func + + +def create_session(node_config: NodeConfiguration, name: str, logger: DTSLOG) -> OSSession: + match node_config.os: + case OS.linux: + return LinuxSession(node_config, name, logger) + case _: + raise ConfigurationError(f"Unsupported OS {node_config.os}") diff --git a/dts/framework/remote_session/os_session.py b/dts/framework/testbed_model/os_session.py similarity index 95% rename from dts/framework/remote_session/os_session.py rename to dts/framework/testbed_model/os_session.py index 8a709eac1c..76e595a518 100644 --- a/dts/framework/remote_session/os_session.py +++ b/dts/framework/testbed_model/os_session.py @@ -10,19 +10,19 @@ from framework.config import Architecture, NodeConfiguration, NodeInfo from framework.logger import DTSLOG -from framework.remote_session.remote import InteractiveShell -from framework.settings import SETTINGS -from framework.testbed_model import LogicalCore -from framework.testbed_model.hw.port import Port -from framework.utils import MesonArgs - -from .remote import ( +from framework.remote_session import ( CommandResult, InteractiveRemoteSession, + InteractiveShell, RemoteSession, create_interactive_session, create_remote_session, ) +from framework.settings import SETTINGS +from framework.utils import MesonArgs + +from .cpu import LogicalCore +from .port import Port InteractiveShellType = TypeVar("InteractiveShellType", bound=InteractiveShell) @@ -85,9 +85,9 @@ def send_command( def create_interactive_shell( self, shell_cls: Type[InteractiveShellType], - eal_parameters: str, timeout: float, privileged: bool, + app_args: str, ) -> InteractiveShellType: """ See "create_interactive_shell" in SutNode @@ -96,7 +96,7 @@ def create_interactive_shell( self.interactive_session.session, 
self._logger, self._get_privileged_command if privileged else None, - eal_parameters, + app_args, timeout, ) @@ -113,7 +113,7 @@ def _get_privileged_command(command: str) -> str: """ @abstractmethod - def guess_dpdk_remote_dir(self, remote_dir) -> PurePath: + def guess_dpdk_remote_dir(self, remote_dir: str | PurePath) -> PurePath: """ Try to find DPDK remote dir in remote_dir. """ @@ -227,7 +227,7 @@ def kill_cleanup_dpdk_apps(self, dpdk_prefix_list: Iterable[str]) -> None: """ @abstractmethod - def get_dpdk_file_prefix(self, dpdk_prefix) -> str: + def get_dpdk_file_prefix(self, dpdk_prefix: str) -> str: """ Get the DPDK file prefix that will be used when running DPDK apps. """ diff --git a/dts/framework/testbed_model/hw/port.py b/dts/framework/testbed_model/port.py similarity index 100% rename from dts/framework/testbed_model/hw/port.py rename to dts/framework/testbed_model/port.py diff --git a/dts/framework/remote_session/posix_session.py b/dts/framework/testbed_model/posix_session.py similarity index 98% rename from dts/framework/remote_session/posix_session.py rename to dts/framework/testbed_model/posix_session.py index a29e2e8280..5657cc0bc9 100644 --- a/dts/framework/remote_session/posix_session.py +++ b/dts/framework/testbed_model/posix_session.py @@ -32,7 +32,7 @@ def combine_short_options(**opts: bool) -> str: return ret_opts - def guess_dpdk_remote_dir(self, remote_dir) -> PurePosixPath: + def guess_dpdk_remote_dir(self, remote_dir: str | PurePath) -> PurePosixPath: remote_guess = self.join_remote_path(remote_dir, "dpdk-*") result = self.send_command(f"ls -d {remote_guess} | tail -1") return PurePosixPath(result.stdout) @@ -207,7 +207,7 @@ def _remove_dpdk_runtime_dirs(self, dpdk_runtime_dirs: Iterable[str | PurePath]) for dpdk_runtime_dir in dpdk_runtime_dirs: self.remove_remote_dir(dpdk_runtime_dir) - def get_dpdk_file_prefix(self, dpdk_prefix) -> str: + def get_dpdk_file_prefix(self, dpdk_prefix: str) -> str: return "" def get_compiler_version(self, 
compiler_name: str) -> str: diff --git a/dts/framework/testbed_model/sut_node.py b/dts/framework/testbed_model/sut_node.py index 7f75043bd3..5ce9446dba 100644 --- a/dts/framework/testbed_model/sut_node.py +++ b/dts/framework/testbed_model/sut_node.py @@ -15,12 +15,14 @@ NodeInfo, SutNodeConfiguration, ) -from framework.remote_session import CommandResult, InteractiveShellType, OSSession +from framework.remote_session import CommandResult from framework.settings import SETTINGS from framework.utils import MesonArgs -from .hw import LogicalCoreCount, LogicalCoreList, VirtualDevice +from .cpu import LogicalCoreCount, LogicalCoreList from .node import Node +from .os_session import InteractiveShellType, OSSession +from .virtual_device import VirtualDevice class EalParameters(object): @@ -293,7 +295,7 @@ def create_eal_parameters( prefix: str = "dpdk", append_prefix_timestamp: bool = True, no_pci: bool = False, - vdevs: list[VirtualDevice] = None, + vdevs: list[VirtualDevice] | None = None, other_eal_param: str = "", ) -> "EalParameters": """ diff --git a/dts/framework/testbed_model/tg_node.py b/dts/framework/testbed_model/tg_node.py index 79a55663b5..8a8f0019f3 100644 --- a/dts/framework/testbed_model/tg_node.py +++ b/dts/framework/testbed_model/tg_node.py @@ -16,16 +16,11 @@ from scapy.packet import Packet # type: ignore[import] -from framework.config import ( - ScapyTrafficGeneratorConfig, - TGNodeConfiguration, - TrafficGeneratorType, -) -from framework.exception import ConfigurationError - -from .capturing_traffic_generator import CapturingTrafficGenerator -from .hw.port import Port +from framework.config import TGNodeConfiguration + from .node import Node +from .port import Port +from .traffic_generator import CapturingTrafficGenerator, create_traffic_generator class TGNode(Node): @@ -78,19 +73,3 @@ def close(self) -> None: """Free all resources used by the node""" self.traffic_generator.close() super(TGNode, self).close() - - -def create_traffic_generator( - 
tg_node: TGNode, traffic_generator_config: ScapyTrafficGeneratorConfig -) -> CapturingTrafficGenerator: - """A factory function for creating traffic generator object from user config.""" - - from .scapy import ScapyTrafficGenerator - - match traffic_generator_config.traffic_generator_type: - case TrafficGeneratorType.SCAPY: - return ScapyTrafficGenerator(tg_node, traffic_generator_config) - case _: - raise ConfigurationError( - f"Unknown traffic generator: {traffic_generator_config.traffic_generator_type}" - ) diff --git a/dts/framework/testbed_model/traffic_generator/__init__.py b/dts/framework/testbed_model/traffic_generator/__init__.py new file mode 100644 index 0000000000..52888d03fa --- /dev/null +++ b/dts/framework/testbed_model/traffic_generator/__init__.py @@ -0,0 +1,23 @@ +# SPDX-License-Identifier: BSD-3-Clause +# Copyright(c) 2023 PANTHEON.tech s.r.o. + +from framework.config import ScapyTrafficGeneratorConfig, TrafficGeneratorType +from framework.exception import ConfigurationError +from framework.testbed_model.node import Node + +from .capturing_traffic_generator import CapturingTrafficGenerator +from .scapy import ScapyTrafficGenerator + + +def create_traffic_generator( + tg_node: Node, traffic_generator_config: ScapyTrafficGeneratorConfig +) -> CapturingTrafficGenerator: + """A factory function for creating traffic generator object from user config.""" + + match traffic_generator_config.traffic_generator_type: + case TrafficGeneratorType.SCAPY: + return ScapyTrafficGenerator(tg_node, traffic_generator_config) + case _: + raise ConfigurationError( + f"Unknown traffic generator: {traffic_generator_config.traffic_generator_type}" + ) diff --git a/dts/framework/testbed_model/capturing_traffic_generator.py b/dts/framework/testbed_model/traffic_generator/capturing_traffic_generator.py similarity index 98% rename from dts/framework/testbed_model/capturing_traffic_generator.py rename to
dts/framework/testbed_model/traffic_generator/capturing_traffic_generator.py index e6512061d7..1fc7f98c05 100644 --- a/dts/framework/testbed_model/capturing_traffic_generator.py +++ b/dts/framework/testbed_model/traffic_generator/capturing_traffic_generator.py @@ -16,9 +16,9 @@ from scapy.packet import Packet # type: ignore[import] from framework.settings import SETTINGS +from framework.testbed_model.port import Port from framework.utils import get_packet_summaries -from .hw.port import Port from .traffic_generator import TrafficGenerator @@ -127,7 +127,7 @@ def _send_packets_and_capture( for the specified duration. It must be able to handle no received packets. """ - def _write_capture_from_packets(self, capture_name: str, packets: list[Packet]): + def _write_capture_from_packets(self, capture_name: str, packets: list[Packet]) -> None: file_name = f"{SETTINGS.output_dir}/{capture_name}.pcap" self._logger.debug(f"Writing packets to {file_name}.") scapy.utils.wrpcap(file_name, packets) diff --git a/dts/framework/testbed_model/scapy.py b/dts/framework/testbed_model/traffic_generator/scapy.py similarity index 95% rename from dts/framework/testbed_model/scapy.py rename to dts/framework/testbed_model/traffic_generator/scapy.py index 9083e92b3d..c88cf28369 100644 --- a/dts/framework/testbed_model/scapy.py +++ b/dts/framework/testbed_model/traffic_generator/scapy.py @@ -24,16 +24,15 @@ from scapy.packet import Packet # type: ignore[import] from framework.config import OS, ScapyTrafficGeneratorConfig -from framework.logger import DTSLOG, getLogger from framework.remote_session import PythonShell from framework.settings import SETTINGS +from framework.testbed_model.node import Node +from framework.testbed_model.port import Port from .capturing_traffic_generator import ( CapturingTrafficGenerator, _get_default_capture_name, ) -from .hw.port import Port -from .tg_node import TGNode """ ========= BEGIN RPC FUNCTIONS ========= @@ -144,7 +143,7 @@ def quit(self) -> None: 
self._BaseServer__shutdown_request = True return None - def add_rpc_function(self, name: str, function_bytes: xmlrpc.client.Binary): + def add_rpc_function(self, name: str, function_bytes: xmlrpc.client.Binary) -> None: """Add a function to the server. This is meant to be executed remotely. @@ -189,13 +188,9 @@ class ScapyTrafficGenerator(CapturingTrafficGenerator): session: PythonShell rpc_server_proxy: xmlrpc.client.ServerProxy _config: ScapyTrafficGeneratorConfig - _tg_node: TGNode - _logger: DTSLOG - def __init__(self, tg_node: TGNode, config: ScapyTrafficGeneratorConfig): - self._config = config - self._tg_node = tg_node - self._logger = getLogger(f"{self._tg_node.name} {self._config.traffic_generator_type}") + def __init__(self, tg_node: Node, config: ScapyTrafficGeneratorConfig): + super().__init__(tg_node, config) assert ( self._tg_node.config.os == OS.linux @@ -229,7 +224,7 @@ def __init__(self, tg_node: TGNode, config: ScapyTrafficGeneratorConfig): function_bytes = marshal.dumps(function.__code__) self.rpc_server_proxy.add_rpc_function(function.__name__, function_bytes) - def _start_xmlrpc_server_in_remote_python(self, listen_port: int): + def _start_xmlrpc_server_in_remote_python(self, listen_port: int) -> None: # load the source of the function src = inspect.getsource(QuittableXMLRPCServer) # Lines with only whitespace break the repl if in the middle of a function @@ -271,7 +266,7 @@ def _send_packets_and_capture( scapy_packets = [Ether(packet.data) for packet in xmlrpc_packets] return scapy_packets - def close(self): + def close(self) -> None: try: self.rpc_server_proxy.quit() except ConnectionRefusedError: diff --git a/dts/framework/testbed_model/traffic_generator.py b/dts/framework/testbed_model/traffic_generator/traffic_generator.py similarity index 81% rename from dts/framework/testbed_model/traffic_generator.py rename to dts/framework/testbed_model/traffic_generator/traffic_generator.py index 28c35d3ce4..0d9902ddb7 100644 --- 
a/dts/framework/testbed_model/traffic_generator.py +++ b/dts/framework/testbed_model/traffic_generator/traffic_generator.py @@ -12,11 +12,12 @@ from scapy.packet import Packet # type: ignore[import] -from framework.logger import DTSLOG +from framework.config import TrafficGeneratorConfig +from framework.logger import DTSLOG, getLogger +from framework.testbed_model.node import Node +from framework.testbed_model.port import Port from framework.utils import get_packet_summaries -from .hw.port import Port - class TrafficGenerator(ABC): """The base traffic generator. @@ -24,8 +25,15 @@ class TrafficGenerator(ABC): Defines the few basic methods that each traffic generator must implement. """ + _config: TrafficGeneratorConfig + _tg_node: Node _logger: DTSLOG + def __init__(self, tg_node: Node, config: TrafficGeneratorConfig): + self._config = config + self._tg_node = tg_node + self._logger = getLogger(f"{self._tg_node.name} {self._config.traffic_generator_type}") + def send_packet(self, packet: Packet, port: Port) -> None: """Send a packet and block until it is fully sent. 
diff --git a/dts/framework/testbed_model/hw/virtual_device.py b/dts/framework/testbed_model/virtual_device.py similarity index 100% rename from dts/framework/testbed_model/hw/virtual_device.py rename to dts/framework/testbed_model/virtual_device.py diff --git a/dts/framework/utils.py b/dts/framework/utils.py index d098d364ff..a0f2173949 100644 --- a/dts/framework/utils.py +++ b/dts/framework/utils.py @@ -7,7 +7,6 @@ import json import os import subprocess -import sys from enum import Enum from pathlib import Path from subprocess import SubprocessError @@ -16,31 +15,7 @@ from .exception import ConfigurationError - -class StrEnum(Enum): - @staticmethod - def _generate_next_value_(name: str, start: int, count: int, last_values: object) -> str: - return name - - def __str__(self) -> str: - return self.name - - -REGEX_FOR_PCI_ADDRESS = "/[0-9a-fA-F]{4}:[0-9a-fA-F]{2}:[0-9a-fA-F]{2}.[0-9]{1}/" - - -def check_dts_python_version() -> None: - if sys.version_info.major < 3 or (sys.version_info.major == 3 and sys.version_info.minor < 10): - print( - RED( - ( - "WARNING: DTS execution node's python version is lower than" - "python 3.10, is deprecated and will not work in future releases." 
- ) - ), - file=sys.stderr, - ) - print(RED("Please use Python >= 3.10 instead"), file=sys.stderr) +REGEX_FOR_PCI_ADDRESS: str = "/[0-9a-fA-F]{4}:[0-9a-fA-F]{2}:[0-9a-fA-F]{2}.[0-9]{1}/" def expand_range(range_str: str) -> list[int]: @@ -61,7 +36,7 @@ def expand_range(range_str: str) -> list[int]: return expanded_range -def get_packet_summaries(packets: list[Packet]): +def get_packet_summaries(packets: list[Packet]) -> str: if len(packets) == 1: packet_summaries = packets[0].summary() else: @@ -69,8 +44,13 @@ def get_packet_summaries(packets: list[Packet]): return f"Packet contents: \n{packet_summaries}" -def RED(text: str) -> str: - return f"\u001B[31;1m{str(text)}\u001B[0m" +class StrEnum(Enum): + @staticmethod + def _generate_next_value_(name: str, start: int, count: int, last_values: object) -> str: + return name + + def __str__(self) -> str: + return self.name class MesonArgs(object): @@ -215,5 +195,5 @@ def _delete_tarball(self) -> None: if self._tarball_path and os.path.exists(self._tarball_path): os.remove(self._tarball_path) - def __fspath__(self): + def __fspath__(self) -> str: return str(self._tarball_path) diff --git a/dts/main.py b/dts/main.py index 43311fa847..5d4714b0c3 100755 --- a/dts/main.py +++ b/dts/main.py @@ -10,10 +10,17 @@ import logging -from framework import dts +from framework import settings def main() -> None: + """Set DTS settings, then run DTS. + + The DTS settings are taken from the command line arguments and the environment variables. 
+ """ + settings.SETTINGS = settings.get_settings() + from framework import dts + dts.run_all() From patchwork Thu Nov 23 15:13:25 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Juraj_Linke=C5=A1?= X-Patchwork-Id: 134568 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 23239433AC; Thu, 23 Nov 2023 16:14:06 +0100 (CET) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id EA6D142FB1; Thu, 23 Nov 2023 16:13:53 +0100 (CET) Received: from mail-lf1-f44.google.com (mail-lf1-f44.google.com [209.85.167.44]) by mails.dpdk.org (Postfix) with ESMTP id C2D5F42F9B for ; Thu, 23 Nov 2023 16:13:49 +0100 (CET) Received: by mail-lf1-f44.google.com with SMTP id 2adb3069b0e04-507c5249d55so1381388e87.3 for ; Thu, 23 Nov 2023 07:13:49 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=pantheon.tech; s=google; t=1700752429; x=1701357229; darn=dpdk.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=Kn/2U/8Wo1A2Bfu3Vot6EMFi0yZ63e9DkWj/pMBSXyA=; b=rEVnuHKtAwjSyuTYbktFIC6jSybru5qg0csp1le4cGhiSNqPsIDZnoMlfGxVpCHx7L GLU/H3NjqLE1GzUsKz7VpfZzgNEpCbT2z0vQ+KXjnHWIjjYoPLsgg8kNiQ1TQ1lzmmyR gF6ELAZDxiWwoWFX+YrPC5MOzYxTomfXjHus8rsbAqXIK6sNKx7kismJ3smIjC58wh4G ++4NYmoKc4022VPLYP2mEzsZYVThcbk+zdBLY0Ptwzh0orjmMVoto0ojfstMiSocXlBS e3A4CsMD6Ax4fGRGkIqLgcOphTQHZkp620NBeBM19ifQBuntRsslm4gzhA0UO8Jxp/tg qE5Q== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1700752429; x=1701357229; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc 
:subject:date:message-id:reply-to; bh=Kn/2U/8Wo1A2Bfu3Vot6EMFi0yZ63e9DkWj/pMBSXyA=; b=A6e9BIZ/wVDA7NY8PumQORjyCuVaRiuF6/j7JeFWL++AT7ewi74PfieH7M7rdNb7uJ jzhCgb/+CW+6tH/hjZ76WtsCuK39U1xLwefVLe7U8bgCJJ0tav2UWUemTaAvID8Y8wSk dAxX2C4a7QkFoDWhNb/2AB9vUE+1BOsb/nT0ugsvRHsKkpsPW4aK2BdGVoeE1mOgCEcS aAoS68APuAGg/agGBu/EjO9PT+uVWw9rajHzu1/PA6Co3+yRhJKngnVcxrCMai/ITpD2 Q1YQB+vKr1lG96PNHLy2RzncUPz4EkPbMCpTus57iA0Lawi6MxJwfMqCoAMTWmo7Fzqh 7Lqw== X-Gm-Message-State: AOJu0Yz4KjHwbdPBJkYliKre9NqXH1Hx4MxqHSnnh/v9o6WfZaQMVKQJ GhTPP3JOQvYp7ddL2kiWoWJFjg== X-Google-Smtp-Source: AGHT+IEeJ4giSaObXfzA3wq8VhC4jif3ju5BERjflnKKi14RXqH+/R2z60ibKW1J4QsN9r9Gd8/eMA== X-Received: by 2002:a19:4f01:0:b0:509:8e20:e7c6 with SMTP id d1-20020a194f01000000b005098e20e7c6mr2162765lfb.32.1700752429217; Thu, 23 Nov 2023 07:13:49 -0800 (PST) Received: from jlinkes-PT-Latitude-5530.. ([84.245.121.10]) by smtp.gmail.com with ESMTPSA id q4-20020adfea04000000b003296b488961sm1870143wrm.31.2023.11.23.07.13.47 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Thu, 23 Nov 2023 07:13:48 -0800 (PST) From: =?utf-8?q?Juraj_Linke=C5=A1?= To: thomas@monjalon.net, Honnappa.Nagarahalli@arm.com, jspewock@iol.unh.edu, probb@iol.unh.edu, paul.szczepanek@arm.com, yoan.picchi@foss.arm.com, Luca.Vizzarro@arm.com Cc: dev@dpdk.org, =?utf-8?q?Juraj_Linke=C5=A1?= Subject: [PATCH v8 02/21] dts: add docstring checker Date: Thu, 23 Nov 2023 16:13:25 +0100 Message-Id: <20231123151344.162812-3-juraj.linkes@pantheon.tech> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20231123151344.162812-1-juraj.linkes@pantheon.tech> References: <20231115130959.39420-1-juraj.linkes@pantheon.tech> <20231123151344.162812-1-juraj.linkes@pantheon.tech> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Python docstrings are the in-code way to document the code. 
The docstring checker of choice is pydocstyle which we're executing from Pylama, but the current latest versions are not compatible due to [0], so pin the pydocstyle version to the latest working version. [0] https://github.com/klen/pylama/issues/232 Signed-off-by: Juraj Linkeš --- dts/poetry.lock | 12 ++++++------ dts/pyproject.toml | 6 +++++- 2 files changed, 11 insertions(+), 7 deletions(-) diff --git a/dts/poetry.lock b/dts/poetry.lock index f7b3b6d602..a734fa71f0 100644 --- a/dts/poetry.lock +++ b/dts/poetry.lock @@ -489,20 +489,20 @@ files = [ [[package]] name = "pydocstyle" -version = "6.3.0" +version = "6.1.1" description = "Python docstring style checker" optional = false python-versions = ">=3.6" files = [ - {file = "pydocstyle-6.3.0-py3-none-any.whl", hash = "sha256:118762d452a49d6b05e194ef344a55822987a462831ade91ec5c06fd2169d019"}, - {file = "pydocstyle-6.3.0.tar.gz", hash = "sha256:7ce43f0c0ac87b07494eb9c0b462c0b73e6ff276807f204d6b53edc72b7e44e1"}, + {file = "pydocstyle-6.1.1-py3-none-any.whl", hash = "sha256:6987826d6775056839940041beef5c08cc7e3d71d63149b48e36727f70144dc4"}, + {file = "pydocstyle-6.1.1.tar.gz", hash = "sha256:1d41b7c459ba0ee6c345f2eb9ae827cab14a7533a88c5c6f7e94923f72df92dc"}, ] [package.dependencies] -snowballstemmer = ">=2.2.0" +snowballstemmer = "*" [package.extras] -toml = ["tomli (>=1.2.3)"] +toml = ["toml"] [[package]] name = "pyflakes" @@ -837,4 +837,4 @@ jsonschema = ">=4,<5" [metadata] lock-version = "2.0" python-versions = "^3.10" -content-hash = "0b1e4a1cb8323e17e5ee5951c97e74bde6e60d0413d7b25b1803d5b2bab39639" +content-hash = "3501e97b3dadc19fe8ae179fe21b1edd2488001da9a8e86ff2bca0b86b99b89b" diff --git a/dts/pyproject.toml b/dts/pyproject.toml index 980ac3c7db..37a692d655 100644 --- a/dts/pyproject.toml +++ b/dts/pyproject.toml @@ -25,6 +25,7 @@ PyYAML = "^6.0" types-PyYAML = "^6.0.8" fabric = "^2.7.1" scapy = "^2.5.0" +pydocstyle = "6.1.1" [tool.poetry.group.dev.dependencies] mypy = "^0.961" @@ -39,10 +40,13 @@ requires = 
["poetry-core>=1.0.0"] build-backend = "poetry.core.masonry.api" [tool.pylama] -linters = "mccabe,pycodestyle,pyflakes" +linters = "mccabe,pycodestyle,pydocstyle,pyflakes" format = "pylint" max_line_length = 100 +[tool.pylama.linter.pydocstyle] +convention = "google" + [tool.mypy] python_version = "3.10" enable_error_code = ["ignore-without-code"] From patchwork Thu Nov 23 15:13:26 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Juraj_Linke=C5=A1?= X-Patchwork-Id: 134569 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 18020433AC; Thu, 23 Nov 2023 16:14:17 +0100 (CET) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 4853442FC9; Thu, 23 Nov 2023 16:13:55 +0100 (CET) Received: from mail-wr1-f54.google.com (mail-wr1-f54.google.com [209.85.221.54]) by mails.dpdk.org (Postfix) with ESMTP id 951C842FAF for ; Thu, 23 Nov 2023 16:13:50 +0100 (CET) Received: by mail-wr1-f54.google.com with SMTP id ffacd0b85a97d-3316c6e299eso637331f8f.1 for ; Thu, 23 Nov 2023 07:13:50 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=pantheon.tech; s=google; t=1700752430; x=1701357230; darn=dpdk.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=ZYiXW2Z88ESYnyFNqmTUgTLHumo8eKturZmLII8hMZI=; b=JT1sxQadPmIn/6QQP+72WAIw+tgX/6KmYbyBkl/Ylwa4fIeabm3h0/kMQdAUs+At9K GkYK3AxAPlwBwnPvzH5T2zhA9PV3SlsO3XNNUW1tOFHtnaZjYDv6QofEH/0CKbJqnDqB v4FOfi1aM7zs8uzbxEqikYCiPFJ88kEoD7PsnE9sn09lsx7lpCUSZvLtsS8FV02pmG6B WKM02JW+u2xWjGeazjPiKd+AQIuO/g/dBgzBpAH1L7dMO/29LlRBVzmR621+1OwvjLEC 4R/oQfJNvy2B2MvgoR/YTr9c0D63lOmeOfVtKxak9DGoeFUC6IYa28GIlqvaza8Df6ls Uzkw== 
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1700752430; x=1701357230; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=ZYiXW2Z88ESYnyFNqmTUgTLHumo8eKturZmLII8hMZI=; b=pH9npJ3TYF0ub3MBhK8mx+QUoLauLCCa8ibmqseyUdprlGHfQ5Uf+krDwre+FZOxkj 810iG5NVRBy34iShydq2rvwOOp3fpKswHCfIyvnJ2hAP/h0DmOKOAtMKtk7ciTvXVxP1 ghIhTdfqWKf4AZcn5u5b1H/aOhJljwb0M/GWa+AWtJFs/FE+oJq69NZRWRTnsjQl5cOa hGbWv7db75SRdu/2an9xV7Y2v19TGOhDh9lICbnn1RbrRIy6uJmXWuXdfFWrxEXRbxr5 VzcMEHgYLPebzpYm2EPm6cAPrSLbGF2DkYMCN4dByMJs6Si0RFTmwWHanHeazAG4RAN9 M9dA== X-Gm-Message-State: AOJu0YyaQxmw2jogtnUVjjHIu4873jpRUMhZNUTantIIrR99uzLPb5fH YFPOlB11HBhKoC/8IvXurCJaDg== X-Google-Smtp-Source: AGHT+IFW3tLf8dOYmJ0x4bcZm5aMnjCDJBh9gaXDVNKXqnofNPESezyNnFlLtyuKKxTLBo/Xi5qCeg== X-Received: by 2002:adf:e8c7:0:b0:32d:96a7:9551 with SMTP id k7-20020adfe8c7000000b0032d96a79551mr3282996wrn.36.1700752430478; Thu, 23 Nov 2023 07:13:50 -0800 (PST) Received: from jlinkes-PT-Latitude-5530.. 
([84.245.121.10]) by smtp.gmail.com with ESMTPSA id q4-20020adfea04000000b003296b488961sm1870143wrm.31.2023.11.23.07.13.49 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Thu, 23 Nov 2023 07:13:50 -0800 (PST) From: =?utf-8?q?Juraj_Linke=C5=A1?= To: thomas@monjalon.net, Honnappa.Nagarahalli@arm.com, jspewock@iol.unh.edu, probb@iol.unh.edu, paul.szczepanek@arm.com, yoan.picchi@foss.arm.com, Luca.Vizzarro@arm.com Cc: dev@dpdk.org, =?utf-8?q?Juraj_Linke=C5=A1?= Subject: [PATCH v8 03/21] dts: add basic developer docs Date: Thu, 23 Nov 2023 16:13:26 +0100 Message-Id: <20231123151344.162812-4-juraj.linkes@pantheon.tech> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20231123151344.162812-1-juraj.linkes@pantheon.tech> References: <20231115130959.39420-1-juraj.linkes@pantheon.tech> <20231123151344.162812-1-juraj.linkes@pantheon.tech> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Expand the framework contribution guidelines and add how to document the code with Python docstrings. Signed-off-by: Juraj Linkeš --- doc/guides/tools/dts.rst | 73 ++++++++++++++++++++++++++++++++++++++++ 1 file changed, 73 insertions(+) diff --git a/doc/guides/tools/dts.rst b/doc/guides/tools/dts.rst index 32c18ee472..cd771a428c 100644 --- a/doc/guides/tools/dts.rst +++ b/doc/guides/tools/dts.rst @@ -264,6 +264,65 @@ which be changed with the ``--output-dir`` command line argument. The results contain basic statistics of passed/failed test cases and DPDK version. +Contributing to DTS +------------------- + +There are two areas of contribution: The DTS framework and DTS test suites. + +The framework contains the logic needed to run test cases, such as connecting to nodes, +running DPDK apps and collecting results. + +The test cases call APIs from the framework to test their scenarios. 
Adding test cases may +require adding code to the framework as well. + + +Framework Coding Guidelines +~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +When adding code to the DTS framework, pay attention to the rest of the code +and try not to divert much from it. The :ref:`DTS developer tools ` will issue +warnings when some of the basics are not met. + +The code must be properly documented with docstrings. The style must conform to +the `Google style `_. +See an example of the style +`here `_. +For cases which are not covered by the Google style, refer +to `PEP 257 `_. There are some cases which are not covered by +the two style guides, where we deviate or where some additional clarification is helpful: + + * The __init__() methods of classes are documented separately from the docstring of the class + itself. + * The docstrings of implemented abstract methods should refer to the superclass's definition + if there's no deviation. + * Instance variables/attributes should be documented in the docstring of the class + in the ``Attributes:`` section. + * The dataclass.dataclass decorator changes how the attributes are processed. The dataclass + attributes which result in instance variables/attributes should also be recorded + in the ``Attributes:`` section. + * Class variables/attributes, on the other hand, should be documented with ``#:`` above + the type annotated line. The description may be omitted if the meaning is obvious. + * The Enum and TypedDict also process the attributes in particular ways and should be documented + with ``#:`` as well. This is mainly so that the autogenerated docs contain the assigned value. + * When referencing a parameter of a function or a method in their docstring, don't use + any articles and put the parameter into single backticks. This mimics the style of + `Python's documentation `_. + * When specifying a value, use double backticks:: + + def foo(greet: bool) -> None: + """Demonstration of single and double backticks.
+ + `greet` controls whether ``Hello World`` is printed. + + Args: + greet: Whether to print the ``Hello World`` message. + """ + if greet: + print("Hello World") + + * The docstring maximum line length is the same as the code maximum line length. + + How To Write a Test Suite ------------------------- @@ -293,6 +352,18 @@ There are four types of methods that comprise a test suite: | These methods don't need to be implemented if there's no need for them in a test suite. In that case, nothing will happen when they're executed. +#. **Configuration, traffic and other logic** + + The ``TestSuite`` class contains a variety of methods for anything that + a test suite setup, a teardown, or a test case may need to do. + + The test suites also frequently use a DPDK app, such as testpmd, in interactive mode + and use the interactive shell instances directly. + + These are the two main ways to call the framework logic in test suites. If there's any + functionality or logic missing from the framework, it should be implemented so that + the test suites can use one of these two ways. + #. **Test case verification** Test case verification should be done with the ``verify`` method, which records the result. @@ -308,6 +379,8 @@ There are four types of methods that comprise a test suite: and used by the test suite via the ``sut_node`` field. +..
_dts_dev_tools: + DTS Developer Tools ------------------- From patchwork Thu Nov 23 15:13:27 2023 X-Patchwork-Submitter: Juraj Linkeš X-Patchwork-Id: 134570 X-Patchwork-Delegate: thomas@monjalon.net From: Juraj Linkeš To: thomas@monjalon.net, Honnappa.Nagarahalli@arm.com, jspewock@iol.unh.edu, probb@iol.unh.edu, paul.szczepanek@arm.com, yoan.picchi@foss.arm.com, Luca.Vizzarro@arm.com Cc: dev@dpdk.org, Juraj Linkeš Subject: [PATCH v8 04/21] dts: exceptions docstring update Date: Thu, 23 Nov 2023 16:13:27 +0100 Message-Id: <20231123151344.162812-5-juraj.linkes@pantheon.tech> In-Reply-To: <20231123151344.162812-1-juraj.linkes@pantheon.tech> References: <20231115130959.39420-1-juraj.linkes@pantheon.tech> <20231123151344.162812-1-juraj.linkes@pantheon.tech> Format according to the Google format and PEP257, with slight deviations.
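The contribution guidelines added in patch 03 describe the Google docstring format plus the `#:` convention for class variables. A hedged, self-contained sketch of how those conventions combine (the `RetryPolicy` class and its names are hypothetical, not DTS code):

```python
from typing import ClassVar


class RetryPolicy:
    """Control how failed remote commands are retried.

    Attributes:
        attempts: How many times the command has been tried so far.
    """

    #: The default number of retries used when none is configured.
    DEFAULT_RETRIES: ClassVar[int] = 3

    attempts: int

    def __init__(self, max_retries: int = DEFAULT_RETRIES):
        """Initialize the policy.

        Args:
            max_retries: The maximum number of retries; `max_retries` must not be negative.
        """
        if max_retries < 0:
            raise ValueError("max_retries must not be negative")
        self.max_retries = max_retries
        self.attempts = 0
```

Note the split the guidelines call for: instance attributes go in the class docstring's ``Attributes:`` section, the class variable gets a ``#:`` comment, and `__init__` is documented separately from the class itself.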
Signed-off-by: Juraj Linkeš --- dts/framework/__init__.py | 12 ++++- dts/framework/exception.py | 106 +++++++++++++++++++++++++------------ 2 files changed, 83 insertions(+), 35 deletions(-) diff --git a/dts/framework/__init__.py b/dts/framework/__init__.py index d551ad4bf0..662e6ccad2 100644 --- a/dts/framework/__init__.py +++ b/dts/framework/__init__.py @@ -1,3 +1,13 @@ # SPDX-License-Identifier: BSD-3-Clause -# Copyright(c) 2022 PANTHEON.tech s.r.o. +# Copyright(c) 2022-2023 PANTHEON.tech s.r.o. # Copyright(c) 2022 University of New Hampshire + +"""Libraries and utilities for running DPDK Test Suite (DTS). + +The various modules in the DTS framework offer: + +* Connections to nodes, both interactive and non-interactive, +* A straightforward way to add support for different operating systems of remote nodes, +* Test suite setup, execution and teardown, along with test case setup, execution and teardown, +* Pre-test suite setup and post-test suite teardown. +""" diff --git a/dts/framework/exception.py b/dts/framework/exception.py index 151e4d3aa9..658eee2c38 100644 --- a/dts/framework/exception.py +++ b/dts/framework/exception.py @@ -3,8 +3,10 @@ # Copyright(c) 2022-2023 PANTHEON.tech s.r.o. # Copyright(c) 2022-2023 University of New Hampshire -""" -User-defined exceptions used across the framework. +"""DTS exceptions. + +The exceptions all have different severities expressed as an integer. +The highest severity of all raised exceptions is used as the exit code of DTS. """ from enum import IntEnum, unique @@ -13,59 +15,79 @@ @unique class ErrorSeverity(IntEnum): - """ - The severity of errors that occur during DTS execution. + """The severity of errors that occur during DTS execution. + All exceptions are caught and the most severe error is used as return code. 
""" + #: NO_ERR = 0 + #: GENERIC_ERR = 1 + #: CONFIG_ERR = 2 + #: REMOTE_CMD_EXEC_ERR = 3 + #: SSH_ERR = 4 + #: DPDK_BUILD_ERR = 10 + #: TESTCASE_VERIFY_ERR = 20 + #: BLOCKING_TESTSUITE_ERR = 25 class DTSError(Exception): - """ - The base exception from which all DTS exceptions are derived. - Stores error severity. + """The base exception from which all DTS exceptions are subclassed. + + Do not use this exception, only use subclassed exceptions. """ + #: severity: ClassVar[ErrorSeverity] = ErrorSeverity.GENERIC_ERR class SSHTimeoutError(DTSError): - """ - Command execution timeout. - """ + """The SSH execution of a command timed out.""" + #: severity: ClassVar[ErrorSeverity] = ErrorSeverity.SSH_ERR _command: str def __init__(self, command: str): + """Define the meaning of the first argument. + + Args: + command: The executed command. + """ self._command = command def __str__(self) -> str: - return f"TIMEOUT on {self._command}" + """Add some context to the string representation.""" + return f"{self._command} execution timed out." class SSHConnectionError(DTSError): - """ - SSH connection error. - """ + """An unsuccessful SSH connection.""" + #: severity: ClassVar[ErrorSeverity] = ErrorSeverity.SSH_ERR _host: str _errors: list[str] def __init__(self, host: str, errors: list[str] | None = None): + """Define the meaning of the first two arguments. + + Args: + host: The hostname to which we're trying to connect. + errors: Any errors that occurred during the connection attempt. + """ self._host = host self._errors = [] if errors is None else errors def __str__(self) -> str: + """Include the errors in the string representation.""" message = f"Error trying to connect with {self._host}." if self._errors: message += f" Errors encountered while retrying: {', '.join(self._errors)}" @@ -74,76 +96,92 @@ def __str__(self) -> str: class SSHSessionDeadError(DTSError): - """ - SSH session is not alive. - It can no longer be used. 
- """ + """The SSH session is no longer alive.""" + #: severity: ClassVar[ErrorSeverity] = ErrorSeverity.SSH_ERR _host: str def __init__(self, host: str): + """Define the meaning of the first argument. + + Args: + host: The hostname of the disconnected node. + """ self._host = host def __str__(self) -> str: - return f"SSH session with {self._host} has died" + """Add some context to the string representation.""" + return f"SSH session with {self._host} has died." class ConfigurationError(DTSError): - """ - Raised when an invalid configuration is encountered. - """ + """An invalid configuration.""" + #: severity: ClassVar[ErrorSeverity] = ErrorSeverity.CONFIG_ERR class RemoteCommandExecutionError(DTSError): - """ - Raised when a command executed on a Node returns a non-zero exit status. - """ + """An unsuccessful execution of a remote command.""" + #: severity: ClassVar[ErrorSeverity] = ErrorSeverity.REMOTE_CMD_EXEC_ERR + #: The executed command. command: str _command_return_code: int def __init__(self, command: str, command_return_code: int): + """Define the meaning of the first two arguments. + + Args: + command: The executed command. + command_return_code: The return code of the executed command. + """ self.command = command self._command_return_code = command_return_code def __str__(self) -> str: + """Include both the command and return code in the string representation.""" return f"Command {self.command} returned a non-zero exit code: {self._command_return_code}" class RemoteDirectoryExistsError(DTSError): - """ - Raised when a remote directory to be created already exists. - """ + """A directory that exists on a remote node.""" + #: severity: ClassVar[ErrorSeverity] = ErrorSeverity.REMOTE_CMD_EXEC_ERR class DPDKBuildError(DTSError): - """ - Raised when DPDK build fails for any reason. 
- """ + """A DPDK build failure.""" + #: severity: ClassVar[ErrorSeverity] = ErrorSeverity.DPDK_BUILD_ERR class TestCaseVerifyError(DTSError): - """ - Used in test cases to verify the expected behavior. - """ + """A test case failure.""" + #: severity: ClassVar[ErrorSeverity] = ErrorSeverity.TESTCASE_VERIFY_ERR class BlockingTestSuiteError(DTSError): + """A failure in a blocking test suite.""" + + #: severity: ClassVar[ErrorSeverity] = ErrorSeverity.BLOCKING_TESTSUITE_ERR _suite_name: str def __init__(self, suite_name: str) -> None: + """Define the meaning of the first argument. + + Args: + suite_name: The blocking test suite. + """ self._suite_name = suite_name def __str__(self) -> str: + """Add some context to the string representation.""" return f"Blocking suite {self._suite_name} failed." From patchwork Thu Nov 23 15:13:28 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Juraj_Linke=C5=A1?= X-Patchwork-Id: 134571 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id A0800433AC; Thu, 23 Nov 2023 16:14:35 +0100 (CET) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 6696A42FDB; Thu, 23 Nov 2023 16:13:57 +0100 (CET) Received: from mail-wm1-f41.google.com (mail-wm1-f41.google.com [209.85.128.41]) by mails.dpdk.org (Postfix) with ESMTP id 368BC42FAF for ; Thu, 23 Nov 2023 16:13:53 +0100 (CET) Received: by mail-wm1-f41.google.com with SMTP id 5b1f17b1804b1-40b36339549so5213095e9.1 for ; Thu, 23 Nov 2023 07:13:53 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=pantheon.tech; s=google; t=1700752433; x=1701357233; darn=dpdk.org; h=content-transfer-encoding:mime-version:references:in-reply-to 
:message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=CCNc9kWJ1NNoU6JO0SvKKLXK/BM6pco6XY3SjKbCz6s=; b=t7wLnq/zfk+636Zo7FEKcubzvXQSJ8pAEivWHbC5vQiIPOk6q2P48PnYJcrMUStg7U m2spSHPiZPhVsXZyQCv1aX9SKn9+dyb50QPzgHXpATu6kwNlpNowyMRY6VMlkqZFH7JE OR7TNcVoCSbbcfLhtu03JmLZ0BJPj8ex9+sf7hiT3Mh8um1Ld0EcNR0kkZdVXis7/Gh+ /WN1Lxv3T0QqLpZ+GdTlwczd47y5bNv6e6P9QzF094D/uT0tIEAI9GlL0zWLJ8of6ESY dQDOwqvRn0RtmPtKSpEhccp8+dAF9gPE73pJUFPKTlUiTFdBa28OCLCcIP+lsHAF2tac 5GIg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1700752433; x=1701357233; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=CCNc9kWJ1NNoU6JO0SvKKLXK/BM6pco6XY3SjKbCz6s=; b=CiKkkfcax3YIbdNIHNz4zThXLsjTYZmmT28HqgA1ilMo07ghWf+MbrImdD2CxlvJBj bZRqPlxupAvXJcKNkyeE2jbrE89zY2Gl9vUm9kAcZK1po6rwNnivr70yhg2lat6B7fYP lsViW7JtBZYP7ClmdOVPrRfy0zE3nERQAt7ZNC2lNGmSLv0KsAUnsrpEOn+fkO+cp2DD RIDQTkGbRYA9L38Tn8uwyVki0fZgyXXlBaoNfuCsdqfmcNkUBSsacxLbfC1DYLRuE8HC cq+nYbhEKQ1z1dEzpFct2pvK/mJrTeLM/7IsJIfiGDp2iFUsO+7XPUsbZaWMnHeZOV8X dJcw== X-Gm-Message-State: AOJu0Yzxd/Ehj+0fqJu9vRftpinHtkvkaYOOb5TpQY1EawpYn1bPg1WE KrkHpiH3/48mB6CmHKmnsG6c4A== X-Google-Smtp-Source: AGHT+IHFY033ILYkAQezw9BXyUhkYmk0jjcunTmNt1k5hOj7lSguiBxUnJtUSK1pAlkAkYQgliUT7A== X-Received: by 2002:adf:f0c6:0:b0:332:cff4:3bd2 with SMTP id x6-20020adff0c6000000b00332cff43bd2mr3966431wro.22.1700752432875; Thu, 23 Nov 2023 07:13:52 -0800 (PST) Received: from jlinkes-PT-Latitude-5530.. 
([84.245.121.10]) by smtp.gmail.com with ESMTPSA id q4-20020adfea04000000b003296b488961sm1870143wrm.31.2023.11.23.07.13.51 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Thu, 23 Nov 2023 07:13:52 -0800 (PST) From: =?utf-8?q?Juraj_Linke=C5=A1?= To: thomas@monjalon.net, Honnappa.Nagarahalli@arm.com, jspewock@iol.unh.edu, probb@iol.unh.edu, paul.szczepanek@arm.com, yoan.picchi@foss.arm.com, Luca.Vizzarro@arm.com Cc: dev@dpdk.org, =?utf-8?q?Juraj_Linke=C5=A1?= Subject: [PATCH v8 05/21] dts: settings docstring update Date: Thu, 23 Nov 2023 16:13:28 +0100 Message-Id: <20231123151344.162812-6-juraj.linkes@pantheon.tech> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20231123151344.162812-1-juraj.linkes@pantheon.tech> References: <20231115130959.39420-1-juraj.linkes@pantheon.tech> <20231123151344.162812-1-juraj.linkes@pantheon.tech> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Format according to the Google format and PEP257, with slight deviations. Signed-off-by: Juraj Linkeš --- dts/framework/settings.py | 103 +++++++++++++++++++++++++++++++++++++- 1 file changed, 102 insertions(+), 1 deletion(-) diff --git a/dts/framework/settings.py b/dts/framework/settings.py index 25b5dcff22..41f98e8519 100644 --- a/dts/framework/settings.py +++ b/dts/framework/settings.py @@ -3,6 +3,72 @@ # Copyright(c) 2022-2023 PANTHEON.tech s.r.o. # Copyright(c) 2022 University of New Hampshire +"""Environment variables and command line arguments parsing. + +This is a simple module utilizing the built-in argparse module to parse command line arguments, +augment them with values from environment variables and make them available across the framework. + +The command line value takes precedence, followed by the environment variable value, +followed by the default value defined in this module. 
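The precedence described here (command line > environment variable > module default) can be sketched with plain argparse. The `env_default` helper below is hypothetical and deliberately simpler than the framework's `_env_arg` custom action, but it demonstrates the same mechanism: argparse applies the default only when the option is absent, so seeding the default from the environment yields the desired ordering.

```python
import argparse
import os


def env_default(env_var: str, fallback: str) -> str:
    """Return the environment variable's value if set, else the fallback.

    Since argparse uses the default only when the option is missing from
    the command line, this gives: command line > env var > module default.
    """
    return os.environ.get(env_var, fallback)


os.environ.pop("DTS_OUTPUT_DIR", None)  # ensure a clean slate for the demo

parser = argparse.ArgumentParser()
parser.add_argument(
    "--output-dir",
    default=env_default("DTS_OUTPUT_DIR", "output"),
    help="[DTS_OUTPUT_DIR] Output directory where logs and results are saved.",
)

print(parser.parse_args([]).output_dir)  # output  (the module default)
print(parser.parse_args(["--output-dir", "/tmp/run1"]).output_dir)  # /tmp/run1
```

The framework instead subclasses `argparse.Action` so that the environment lookup happens per-argument inside the parser, but the resolution order is the same.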
+ + The command line arguments along with the supported environment variables are: + +.. option:: --config-file +.. envvar:: DTS_CFG_FILE + + The path to the YAML test run configuration file. + +.. option:: --output-dir, --output +.. envvar:: DTS_OUTPUT_DIR + + The directory where DTS logs and results are saved. + +.. option:: --compile-timeout +.. envvar:: DTS_COMPILE_TIMEOUT + + The timeout for compiling DPDK. + +.. option:: -t, --timeout +.. envvar:: DTS_TIMEOUT + + The timeout for all DTS operations except for compiling DPDK. + +.. option:: -v, --verbose +.. envvar:: DTS_VERBOSE + + Set to any value to enable logging everything to the console. + +.. option:: -s, --skip-setup +.. envvar:: DTS_SKIP_SETUP + + Set to any value to skip building DPDK. + +.. option:: --tarball, --snapshot, --git-ref +.. envvar:: DTS_DPDK_TARBALL + + The path to a DPDK tarball, git commit ID, tag ID or tree ID to test. + +.. option:: --test-cases +.. envvar:: DTS_TESTCASES + + A comma-separated list of test cases to execute. Unknown test cases will be silently ignored. + +.. option:: --re-run, --re_run +.. envvar:: DTS_RERUN + + Re-run each test case this many times in case of a failure. + +The module provides one key module-level variable: + +Attributes: + SETTINGS: The module level variable storing framework-wide DTS settings. + +Typical usage example:: + + from framework.settings import SETTINGS + foo = SETTINGS.foo +""" + import argparse import os from collections.abc import Callable, Iterable, Sequence @@ -16,6 +82,23 @@ def _env_arg(env_var: str) -> Any: + """A helper function augmenting the argparse Action with environment variables. + + If the supplied environment variable is defined, then the default value + of the argument is modified. This satisfies the priority order of + command line argument > environment variable > default value. + + Arguments with no values (flags) should be defined using the const keyword argument + (True or False).
When the argument is specified, it will be set to const, if not specified, + the default will be stored (possibly modified by the corresponding environment variable). + + Other arguments work the same as default argparse arguments, that is using + the default 'store' action. + + Returns: + The modified argparse.Action. + """ + class _EnvironmentArgument(argparse.Action): def __init__( self, @@ -68,14 +151,28 @@ def __call__( @dataclass(slots=True) class Settings: + """Default framework-wide user settings. + + The defaults may be modified at the start of the run. + """ + + #: config_file_path: Path = Path(__file__).parent.parent.joinpath("conf.yaml") + #: output_dir: str = "output" + #: timeout: float = 15 + #: verbose: bool = False + #: skip_setup: bool = False + #: dpdk_tarball_path: Path | str = "dpdk.tar.xz" + #: compile_timeout: float = 1200 + #: test_cases: list[str] = field(default_factory=list) + #: re_run: int = 0 @@ -166,7 +263,7 @@ def _get_parser() -> argparse.ArgumentParser: action=_env_arg("DTS_RERUN"), default=SETTINGS.re_run, type=int, - help="[DTS_RERUN] Re-run each test case the specified amount of times " + help="[DTS_RERUN] Re-run each test case the specified number of times " "if a test failure occurs", ) @@ -174,6 +271,10 @@ def _get_parser() -> argparse.ArgumentParser: def get_settings() -> Settings: + """Create new settings with inputs from the user. + + The inputs are taken from the command line and from environment variables. 
+ """ parsed_args = _get_parser().parse_args() return Settings( config_file_path=parsed_args.config_file, From patchwork Thu Nov 23 15:13:29 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Juraj_Linke=C5=A1?= X-Patchwork-Id: 134572 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id F0443433AC; Thu, 23 Nov 2023 16:14:47 +0100 (CET) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id E0B7842F9F; Thu, 23 Nov 2023 16:14:03 +0100 (CET) Received: from mail-wr1-f51.google.com (mail-wr1-f51.google.com [209.85.221.51]) by mails.dpdk.org (Postfix) with ESMTP id 1CB4442FC3 for ; Thu, 23 Nov 2023 16:13:55 +0100 (CET) Received: by mail-wr1-f51.google.com with SMTP id ffacd0b85a97d-331733acbacso637505f8f.1 for ; Thu, 23 Nov 2023 07:13:55 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=pantheon.tech; s=google; t=1700752435; x=1701357235; darn=dpdk.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=HwU+LFHRLrk+IQzVJORkHjCvcMai8+Em9v6JYz1n0S0=; b=A0EBrvVjbV/JtFHISUscGqf8NrZD/IgasmY5d3amneojMIJGjVR74rt+z14o15SiP8 9vUFrc2bhFoolZqHYC8kpYb88J1VUkDeDzZARAR9j/BsDCkXpBq/aq2xSWRkeTlA2i53 5AzU8SDvJLv/cy1NNykFPdgP2zCQ0FIbNjfM2+RwqHKBLfiwIgBT2FwjRP+Oy1j3x0XD 8ifQnWNFvHZjDvJv7yLuN8dl8diGcK16vEAErI14SCZNpdYQY/+GRk9zvSvIoKQLsfTz G1XlIXsjDYnL8Tk/Rv9yPzCN/czTrEt1D/G7wTrsWD1rggBmJyZGDIQSbIFOHkskDJUm oqUA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1700752435; x=1701357235; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc 
:subject:date:message-id:reply-to; bh=HwU+LFHRLrk+IQzVJORkHjCvcMai8+Em9v6JYz1n0S0=; b=wr1MPs5N+/WJIn5LEpCn0yjXvKpojP4RVQtpaCYH1Xe7aPE/fgrcpZzKMoW4fnDaCi exhcihlUxS4LPSkg5Y6UCferNK6zPsjL8KTbvCgh7Q1CjlatsUn+k7iMgQTElDAg+5OZ eJbfndcEXZdcpVuTdZ9LfyXhiXjJkBkzdxuIJ/WyvA5bFVzC+iSBpdgD274jaCWU2CIW gf11eQrk5JSGMhqtZMP9o+ugxi5/QI4jB/NCjPGgaoN2+QNc/B8321+Rsn1IP+uTTBZQ DaR9iZmnXa+JT4MthUEkZ9YtFuXbN0MFY2S/0auwiDs6BO72l7chKo2MAYamLBl8aUxF 5Uww== X-Gm-Message-State: AOJu0YxqzqFlQeGVoZ9rHUdUQmsofRLZe2iUVyzozm8VyssadQa8Pd/K wKeefs2GlgeH5Pl/fCSk2FSdFA== X-Google-Smtp-Source: AGHT+IFEaQDX4Z/YTsSgdd2X3WhPF56/3xw79/4SKxvtIMZFCbedJ5Qt+Ze8Q6OVhO0nQC9e9VzNtA== X-Received: by 2002:a5d:64a8:0:b0:31f:d52a:82b3 with SMTP id m8-20020a5d64a8000000b0031fd52a82b3mr5856420wrp.46.1700752434653; Thu, 23 Nov 2023 07:13:54 -0800 (PST) Received: from jlinkes-PT-Latitude-5530.. ([84.245.121.10]) by smtp.gmail.com with ESMTPSA id q4-20020adfea04000000b003296b488961sm1870143wrm.31.2023.11.23.07.13.53 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Thu, 23 Nov 2023 07:13:54 -0800 (PST) From: =?utf-8?q?Juraj_Linke=C5=A1?= To: thomas@monjalon.net, Honnappa.Nagarahalli@arm.com, jspewock@iol.unh.edu, probb@iol.unh.edu, paul.szczepanek@arm.com, yoan.picchi@foss.arm.com, Luca.Vizzarro@arm.com Cc: dev@dpdk.org, =?utf-8?q?Juraj_Linke=C5=A1?= Subject: [PATCH v8 06/21] dts: logger and utils docstring update Date: Thu, 23 Nov 2023 16:13:29 +0100 Message-Id: <20231123151344.162812-7-juraj.linkes@pantheon.tech> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20231123151344.162812-1-juraj.linkes@pantheon.tech> References: <20231115130959.39420-1-juraj.linkes@pantheon.tech> <20231123151344.162812-1-juraj.linkes@pantheon.tech> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Format according to the Google format and PEP257, with 
slight deviations. Signed-off-by: Juraj Linkeš --- dts/framework/logger.py | 72 ++++++++++++++++++++++----------- dts/framework/utils.py | 88 +++++++++++++++++++++++++++++------------ 2 files changed, 113 insertions(+), 47 deletions(-) diff --git a/dts/framework/logger.py b/dts/framework/logger.py index bb2991e994..cfa6e8cd72 100644 --- a/dts/framework/logger.py +++ b/dts/framework/logger.py @@ -3,9 +3,9 @@ # Copyright(c) 2022-2023 PANTHEON.tech s.r.o. # Copyright(c) 2022-2023 University of New Hampshire -""" -DTS logger module with several log level. DTS framework and TestSuite logs -are saved in different log files. +"""DTS logger module. + +DTS framework and TestSuite logs are saved in different log files. """ import logging @@ -18,19 +18,21 @@ stream_fmt = "%(asctime)s - %(name)s - %(levelname)s - %(message)s" -class LoggerDictType(TypedDict): - logger: "DTSLOG" - name: str - node: str - +class DTSLOG(logging.LoggerAdapter): + """DTS logger adapter class for framework and testsuites. -# List for saving all using loggers -Loggers: list[LoggerDictType] = [] + The :option:`--verbose` command line argument and the :envvar:`DTS_VERBOSE` environment + variable control the verbosity of output. If enabled, all messages will be emitted to the + console. + The :option:`--output` command line argument and the :envvar:`DTS_OUTPUT_DIR` environment + variable modify the directory where the logs will be stored. -class DTSLOG(logging.LoggerAdapter): - """ - DTS log class for framework and testsuite. + Attributes: + node: The additional identifier. Currently unused. + sh: The handler which emits logs to console. + fh: The handler which emits logs to a file. + verbose_fh: Just as fh, but logs with a different, more verbose, format. """ _logger: logging.Logger @@ -40,6 +42,15 @@ class DTSLOG(logging.LoggerAdapter): verbose_fh: logging.FileHandler def __init__(self, logger: logging.Logger, node: str = "suite"): + """Extend the constructor with additional handlers. 
+ + One handler logs to the console, the other one to a file, with either a regular or verbose + format. + + Args: + logger: The logger from which to create the logger adapter. + node: An additional identifier. Currently unused. + """ self._logger = logger # 1 means log everything, this will be used by file handlers if their level # is not set @@ -92,26 +103,43 @@ def __init__(self, logger: logging.Logger, node: str = "suite"): super(DTSLOG, self).__init__(self._logger, dict(node=self.node)) def logger_exit(self) -> None: - """ - Remove stream handler and logfile handler. - """ + """Remove the stream handler and the logfile handler.""" for handler in (self.sh, self.fh, self.verbose_fh): handler.flush() self._logger.removeHandler(handler) +class _LoggerDictType(TypedDict): + logger: DTSLOG + name: str + node: str + + +# List for saving all loggers in use +_Loggers: list[_LoggerDictType] = [] + + def getLogger(name: str, node: str = "suite") -> DTSLOG: + """Get DTS logger adapter identified by name and node. + + An existing logger will be returned if one with the exact name and node already exists. + A new one will be created and stored otherwise. + + Args: + name: The name of the logger. + node: An additional identifier for the logger. + + Returns: + A logger uniquely identified by both name and node. """ - Get logger handler and if there's no handler for specified Node will create one. 
- """ - global Loggers + global _Loggers # return saved logger - logger: LoggerDictType - for logger in Loggers: + logger: _LoggerDictType + for logger in _Loggers: if logger["name"] == name and logger["node"] == node: return logger["logger"] # return new logger dts_logger: DTSLOG = DTSLOG(logging.getLogger(name), node) - Loggers.append({"logger": dts_logger, "name": name, "node": node}) + _Loggers.append({"logger": dts_logger, "name": name, "node": node}) return dts_logger diff --git a/dts/framework/utils.py b/dts/framework/utils.py index a0f2173949..cc5e458cc8 100644 --- a/dts/framework/utils.py +++ b/dts/framework/utils.py @@ -3,6 +3,16 @@ # Copyright(c) 2022-2023 PANTHEON.tech s.r.o. # Copyright(c) 2022-2023 University of New Hampshire +"""Various utility classes and functions. + +These are used in multiple modules across the framework. They're here because +they provide some non-specific functionality, greatly simplify imports or just don't +fit elsewhere. + +Attributes: + REGEX_FOR_PCI_ADDRESS: The regex representing a PCI address, e.g. ``0000:00:08.0``. +""" + import atexit import json import os @@ -19,12 +29,20 @@ def expand_range(range_str: str) -> list[int]: - """ - Process range string into a list of integers. There are two possible formats: - n - a single integer - n-m - a range of integers + """Process `range_str` into a list of integers. + + There are two possible formats of `range_str`: + + * ``n`` - a single integer, + * ``n-m`` - a range of integers. - The returned range includes both n and m. Empty string returns an empty list. + The returned range includes both ``n`` and ``m``. Empty string returns an empty list. + + Args: + range_str: The range to expand. + + Returns: + All the numbers from the range. """ expanded_range: list[int] = [] if range_str: @@ -37,6 +55,14 @@ def expand_range(range_str: str) -> list[int]: def get_packet_summaries(packets: list[Packet]) -> str: + """Format a string summary from `packets`. 
+ + Args: + packets: The packets to format. + + Returns: + The summary of `packets`. + """ if len(packets) == 1: packet_summaries = packets[0].summary() else: @@ -45,27 +71,36 @@ def get_packet_summaries(packets: list[Packet]) -> str: class StrEnum(Enum): + """Enum with members stored as strings.""" + @staticmethod def _generate_next_value_(name: str, start: int, count: int, last_values: object) -> str: return name def __str__(self) -> str: + """The string representation is the name of the member.""" return self.name class MesonArgs(object): - """ - Aggregate the arguments needed to build DPDK: - default_library: Default library type, Meson allows "shared", "static" and "both". - Defaults to None, in which case the argument won't be used. - Keyword arguments: The arguments found in meson_options.txt in root DPDK directory. - Do not use -D with them, for example: - meson_args = MesonArgs(enable_kmods=True). - """ + """Aggregate the arguments needed to build DPDK.""" _default_library: str def __init__(self, default_library: str | None = None, **dpdk_args: str | bool): + """Initialize the meson arguments. + + Args: + default_library: The default library type, Meson supports ``shared``, ``static`` and + ``both``. Defaults to :data:`None`, in which case the argument won't be used. + dpdk_args: The arguments found in ``meson_options.txt`` in root DPDK directory. + Do not use ``-D`` with them. + + Example: + :: + + meson_args = MesonArgs(enable_kmods=True). + """ self._default_library = f"--default-library={default_library}" if default_library else "" self._dpdk_args = " ".join( ( @@ -75,6 +110,7 @@ def __init__(self, default_library: str | None = None, **dpdk_args: str | bool): ) def __str__(self) -> str: + """The actual args.""" return " ".join(f"{self._default_library} {self._dpdk_args}".split()) @@ -96,24 +132,14 @@ class _TarCompressionFormat(StrEnum): class DPDKGitTarball(object): - """Create a compressed tarball of DPDK from the repository. 
- - The DPDK version is specified with git object git_ref. - The tarball will be compressed with _TarCompressionFormat, - which must be supported by the DTS execution environment. - The resulting tarball will be put into output_dir. + """Compressed tarball of DPDK from the repository. - The class supports the os.PathLike protocol, + The class supports the :class:`os.PathLike` protocol, which is used to get the Path of the tarball:: from pathlib import Path tarball = DPDKGitTarball("HEAD", "output") tarball_path = Path(tarball) - - Arguments: - git_ref: A git commit ID, tag ID or tree ID. - output_dir: The directory where to put the resulting tarball. - tar_compression_format: The compression format to use. """ _git_ref: str @@ -128,6 +154,17 @@ def __init__( output_dir: str, tar_compression_format: _TarCompressionFormat = _TarCompressionFormat.xz, ): + """Create the tarball during initialization. + + The DPDK version is specified with `git_ref`. The tarball will be compressed with + `tar_compression_format`, which must be supported by the DTS execution environment. + The resulting tarball will be put into `output_dir`. + + Args: + git_ref: A git commit ID, tag ID or tree ID. + output_dir: The directory where to put the resulting tarball. + tar_compression_format: The compression format to use. 
+ """ self._git_ref = git_ref self._tar_compression_format = tar_compression_format @@ -196,4 +233,5 @@ def _delete_tarball(self) -> None: os.remove(self._tarball_path) def __fspath__(self) -> str: + """The os.PathLike protocol implementation.""" return str(self._tarball_path) From patchwork Thu Nov 23 15:13:30 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Juraj_Linke=C5=A1?= X-Patchwork-Id: 134573 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 78B07433AC; Thu, 23 Nov 2023 16:14:56 +0100 (CET) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 1997F42FE3; Thu, 23 Nov 2023 16:14:05 +0100 (CET) Received: from mail-wr1-f45.google.com (mail-wr1-f45.google.com [209.85.221.45]) by mails.dpdk.org (Postfix) with ESMTP id 0B99F42FCD for ; Thu, 23 Nov 2023 16:13:56 +0100 (CET) Received: by mail-wr1-f45.google.com with SMTP id ffacd0b85a97d-332e3ad436cso380935f8f.3 for ; Thu, 23 Nov 2023 07:13:56 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=pantheon.tech; s=google; t=1700752436; x=1701357236; darn=dpdk.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=6i4bo/AJKOFxXNVyyjE//kxyMsAoBQPx3oSImSMe8yo=; b=s8svDtUZEV0ucTE+CbduWfPDaIaj15gilnOJEJyWJgmql50Z+rBCcSMoZJ2LAUarlU HxvZl/3rRUqhvquXkA+nUYWBOZHVdTwyKFwyzJllWNyK3MRJu0o6PA6xIXpkepKlU/bf SNTZNxXY1BdYE/4YhPlijNY4vtDmWhUw4MpoGtN6sxTItdmUKPEqNRHkbkj5Jvdn7zU8 yjqZUeCG+xfZObQs4aQ+kMOdykFaroqLoE2Alup6RDa6T+P7i3sUMbrf2awpBbbkrvWe jX4k+zkBf5UFG/5PLVJQnAhXIKBcE2AIq5/nN3nDFtM0O1MrjKKW+Rn8Znve5pr8Sgpq GT5A== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; 
t=1700752436; x=1701357236; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=6i4bo/AJKOFxXNVyyjE//kxyMsAoBQPx3oSImSMe8yo=; b=F4lQjbRzsHbP0mbQhXOs2Rc8daB8c0eD2BWzusYQYtMhPcNWWQmFazABGl+1FiWyI+ WUJre0VGi3k4n7/DIch1awFKo+YBved/ZhBNl6WJQSg8Bk1qjYG1x2MADe7JNbnP56jZ AVQOiovmJQ3eShe3ggPtivdLJhl3JZu1A1c4fXsqfrFjVV/QAK+x7thcoYuPm9AndGiN xaqsm2Tgo66saG3QV0dsx6YBWL3aFw/f/jokwBDxqmAY7NNpmRqDizk5ETbM2u5TnTbn buAdoAsuWixqYK5PXG9Kz4diOvkJyn69vYwj3jbPk4npsZSw4eOfhDH/KUQ5kDSkuwEK gQpg== X-Gm-Message-State: AOJu0YzmH8fdFx2gpneRoC9eZ9MX9LS2eC2ZCmLlcb/Cy75V3jLiSdEj 6C0ZCDMcBWsQtYmtoPVfGJUZp4Zhk4t8XHLajh646g== X-Google-Smtp-Source: AGHT+IHKY56vFtNyn6yV0X9Q3jvJW6xVjA2rtYnCwlNbu3EaNpW2NsA+V02MzqlKxtvg/VAFJJkNjA== X-Received: by 2002:a05:6000:b8a:b0:332:e62e:f0ba with SMTP id dl10-20020a0560000b8a00b00332e62ef0bamr803746wrb.18.1700752435742; Thu, 23 Nov 2023 07:13:55 -0800 (PST) Received: from jlinkes-PT-Latitude-5530.. 
([84.245.121.10]) by smtp.gmail.com with ESMTPSA id q4-20020adfea04000000b003296b488961sm1870143wrm.31.2023.11.23.07.13.54 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Thu, 23 Nov 2023 07:13:55 -0800 (PST) From: =?utf-8?q?Juraj_Linke=C5=A1?= To: thomas@monjalon.net, Honnappa.Nagarahalli@arm.com, jspewock@iol.unh.edu, probb@iol.unh.edu, paul.szczepanek@arm.com, yoan.picchi@foss.arm.com, Luca.Vizzarro@arm.com Cc: dev@dpdk.org, =?utf-8?q?Juraj_Linke=C5=A1?= Subject: [PATCH v8 07/21] dts: dts runner and main docstring update Date: Thu, 23 Nov 2023 16:13:30 +0100 Message-Id: <20231123151344.162812-8-juraj.linkes@pantheon.tech> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20231123151344.162812-1-juraj.linkes@pantheon.tech> References: <20231115130959.39420-1-juraj.linkes@pantheon.tech> <20231123151344.162812-1-juraj.linkes@pantheon.tech> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Format according to the Google format and PEP257, with slight deviations. Signed-off-by: Juraj Linkeš --- dts/framework/dts.py | 131 ++++++++++++++++++++++++++++++++++++------- dts/main.py | 10 ++-- 2 files changed, 116 insertions(+), 25 deletions(-) diff --git a/dts/framework/dts.py b/dts/framework/dts.py index 356368ef10..e16d4578a0 100644 --- a/dts/framework/dts.py +++ b/dts/framework/dts.py @@ -3,6 +3,33 @@ # Copyright(c) 2022-2023 PANTHEON.tech s.r.o. # Copyright(c) 2022-2023 University of New Hampshire +r"""Test suite runner module. + +A DTS run is split into stages: + + #. Execution stage, + #. Build target stage, + #. Test suite stage, + #. Test case stage. + +The module is responsible for running tests on testbeds defined in the test run configuration. +Each setup or teardown of each stage is recorded in a :class:`~.test_result.DTSResult` or +one of its subclasses. 
The test case results are also recorded. + +If an error occurs, the current stage is aborted, the error is recorded and the run continues in +the next iteration of the same stage. The return code is the highest `severity` of all +:class:`~.exception.DTSError`\s. + +Example: + An error occurs in a build target setup. The current build target is aborted and the run + continues with the next build target. If the errored build target was the last one in the given + execution, the next execution begins. + +Attributes: + dts_logger: The logger instance used in this module. + result: The top level result used in the module. +""" + import sys from .config import ( @@ -23,9 +50,38 @@ def run_all() -> None: - """ - The main process of DTS. Runs all build targets in all executions from the main - config file. + """Run all build targets in all executions from the test run configuration. + + Before running test suites, executions and build targets are first set up. + The executions and build targets defined in the test run configuration are iterated over. + The executions define which tests to run and where to run them, and build targets define + the DPDK build setup. + + The test suites are set up for each execution/build target tuple and each scheduled + test case within the test suite is set up, executed and torn down. After all test cases + have been executed, the test suite is torn down and the next build target will be tested. + + All the nested steps look like this: + + #. Execution setup + + #. Build target setup + + #. Test suite setup + + #. Test case setup + #. Test case logic + #. Test case teardown + + #. Test suite teardown + + #. Build target teardown + + #. Execution teardown + + The test cases are filtered according to the specification in the test run configuration and + the :option:`--test-cases` command line argument or + the :envvar:`DTS_TESTCASES` environment variable.
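The nested stages listed above amount to four loops with setup and teardown wrapped around each level. A minimal sketch of that control flow (the names are illustrative only, not the real DTS signatures, which take configuration and result objects):

```python
# Illustrative sketch of the nested stage flow: each level is set up,
# iterated over, and torn down, mirroring the numbered list above.
def run_all_sketch(executions: list[dict]) -> list[str]:
    log = []
    for execution in executions:
        log.append("execution setup")
        for build_target in execution["build_targets"]:
            log.append("build target setup")
            for suite in build_target["suites"]:
                log.append("test suite setup")
                for case in suite["cases"]:
                    log.append("test case setup")
                    log.append(f"test case logic: {case}")
                    log.append("test case teardown")
                log.append("test suite teardown")
            log.append("build target teardown")
        log.append("execution teardown")
    return log
```

An error at any level aborts only that level's current iteration, and the run continues with the next iteration of the same stage, as described above.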
""" global dts_logger global result @@ -87,6 +143,8 @@ def run_all() -> None: def _check_dts_python_version() -> None: + """Check the required Python version - v3.10.""" + def RED(text: str) -> str: return f"\u001B[31;1m{str(text)}\u001B[0m" @@ -109,9 +167,16 @@ def _run_execution( execution: ExecutionConfiguration, result: DTSResult, ) -> None: - """ - Run the given execution. This involves running the execution setup as well as - running all build targets in the given execution. + """Run the given execution. + + This involves running the execution setup as well as running all build targets + in the given execution. After that, execution teardown is run. + + Args: + sut_node: The execution's SUT node. + tg_node: The execution's TG node. + execution: An execution's test run configuration. + result: The top level result object. """ dts_logger.info(f"Running execution with SUT '{execution.system_under_test_node.name}'.") execution_result = result.add_execution(sut_node.config) @@ -144,8 +209,18 @@ def _run_build_target( execution: ExecutionConfiguration, execution_result: ExecutionResult, ) -> None: - """ - Run the given build target. + """Run the given build target. + + This involves running the build target setup as well as running all test suites + in the given execution the build target is defined in. + After that, build target teardown is run. + + Args: + sut_node: The execution's SUT node. + tg_node: The execution's TG node. + build_target: A build target's test run configuration. + execution: The build target's execution's test run configuration. + execution_result: The execution level result object associated with the execution. 
""" dts_logger.info(f"Running build target '{build_target.name}'.") build_target_result = execution_result.add_build_target(build_target) @@ -177,10 +252,20 @@ def _run_all_suites( execution: ExecutionConfiguration, build_target_result: BuildTargetResult, ) -> None: - """ - Use the given build_target to run execution's test suites - with possibly only a subset of test cases. - If no subset is specified, run all test cases. + """Run the execution's (possibly a subset) test suites using the current build target. + + The function assumes the build target we're testing has already been built on the SUT node. + The current build target thus corresponds to the current DPDK build present on the SUT node. + + If a blocking test suite (such as the smoke test suite) fails, the rest of the test suites + in the current build target won't be executed. + + Args: + sut_node: The execution's SUT node. + tg_node: The execution's TG node. + execution: The execution's test run configuration associated with the current build target. + build_target_result: The build target level result object associated + with the current build target. """ end_build_target = False if not execution.skip_smoke_tests: @@ -206,16 +291,22 @@ def _run_single_suite( build_target_result: BuildTargetResult, test_suite_config: TestSuiteConfig, ) -> None: - """Runs a single test suite. + """Run all test suite in a single test suite module. + + The function assumes the build target we're testing has already been built on the SUT node. + The current build target thus corresponds to the current DPDK build present on the SUT node. Args: - sut_node: Node to run tests on. - execution: Execution the test case belongs to. - build_target_result: Build target configuration test case is run on - test_suite_config: Test suite configuration + sut_node: The execution's SUT node. + tg_node: The execution's TG node. + execution: The execution's test run configuration associated with the current build target. 
+ build_target_result: The build target level result object associated + with the current build target. + test_suite_config: Test suite test run configuration specifying the test suite module + and possibly a subset of test cases of test suites in that module. Raises: - BlockingTestSuiteError: If a test suite that was marked as blocking fails. + BlockingTestSuiteError: If a blocking test suite fails. """ try: full_suite_path = f"tests.TestSuite_{test_suite_config.test_suite}" @@ -239,9 +330,7 @@ def _run_single_suite( def _exit_dts() -> None: - """ - Process all errors and exit with the proper exit code. - """ + """Process all errors and exit with the proper exit code.""" result.process() if dts_logger: diff --git a/dts/main.py b/dts/main.py index 5d4714b0c3..b856ba86be 100755 --- a/dts/main.py +++ b/dts/main.py @@ -1,12 +1,10 @@ #!/usr/bin/env python3 # SPDX-License-Identifier: BSD-3-Clause # Copyright(c) 2010-2014 Intel Corporation -# Copyright(c) 2022 PANTHEON.tech s.r.o. +# Copyright(c) 2022-2023 PANTHEON.tech s.r.o. # Copyright(c) 2022 University of New Hampshire -""" -A test framework for testing DPDK. -""" +"""The DTS executable.""" import logging @@ -17,6 +15,10 @@ def main() -> None: """Set DTS settings, then run DTS. The DTS settings are taken from the command line arguments and the environment variables. + The settings object is stored in the module-level variable settings.SETTINGS which the entire + framework uses. After importing the module (or the variable), any changes to the variable are + not going to be reflected without a re-import. This means that the SETTINGS variable must + be modified before the settings module is imported anywhere else in the framework. 
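The import-order constraint on ``settings.SETTINGS`` follows from how ``from module import name`` works in Python: the statement binds the object current at import time, so a later reassignment of the module attribute is invisible to names imported earlier. A small self-contained demonstration (``settings_demo`` is a synthetic stand-in for illustration, not the real ``framework.settings`` module):

```python
import sys
import types

# Synthetic stand-in for a settings module with a module-level variable.
settings_demo = types.ModuleType("settings_demo")
settings_demo.SETTINGS = None
sys.modules["settings_demo"] = settings_demo

# main() assigns SETTINGS *before* the rest of the framework is imported,
# so any later `from settings_demo import SETTINGS` sees the final object.
settings_demo.SETTINGS = {"verbose": True}

from settings_demo import SETTINGS  # binds the object current right now

# Rebinding the module attribute afterwards is not reflected in the
# already-imported name -- hence the "modify before importing" rule.
settings_demo.SETTINGS = {"verbose": False}
print(SETTINGS)  # {'verbose': True}
```

This is exactly why DTS sets ``settings.SETTINGS`` in ``main()`` before ``from framework import dts`` is executed.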
""" settings.SETTINGS = settings.get_settings() from framework import dts From patchwork Thu Nov 23 15:13:31 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Juraj_Linke=C5=A1?= X-Patchwork-Id: 134574 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 6FCAF433AC; Thu, 23 Nov 2023 16:15:04 +0100 (CET) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 19D3142FF3; Thu, 23 Nov 2023 16:14:06 +0100 (CET) Received: from mail-wr1-f50.google.com (mail-wr1-f50.google.com [209.85.221.50]) by mails.dpdk.org (Postfix) with ESMTP id 2E84E42FD5 for ; Thu, 23 Nov 2023 16:13:57 +0100 (CET) Received: by mail-wr1-f50.google.com with SMTP id ffacd0b85a97d-32deb2809daso634354f8f.3 for ; Thu, 23 Nov 2023 07:13:57 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=pantheon.tech; s=google; t=1700752437; x=1701357237; darn=dpdk.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=3J4tuvvC25Dr3XVRHIMGMh8HcTBlhxLQyKucmyiGS50=; b=U3v5KTFL+DDAw50Molyw/oTFwZ9ZRceVicRZa0ovC4CqguIeaaVZBIw8JBGRU1MbAx lQstl+LP+uQ84Eu+nbalRRqpawKZ5qZqTnKhDdkE9U+/FdOgCPAxLjQjWqD6D6QF5QNh wChURFoLJsBqb9u8bAc0TpjLq9NJZHdO2XGpU3sqPEoexoSLCsPBPphyYtnFFCKxEphT rpPETRvCiFB5zpAwt+Ba/JBSUR/71PoobfOXRxnZGq4W+0xlOQLsQzbq05p93mGmVtDL hNfNJi3riJFNwDY6r0Bn15oxzq2f+BmjdysiElCjxEM+F4RH6i85TAdioWjTHnGyFBkk e+hw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1700752437; x=1701357237; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; 
From: =?utf-8?q?Juraj_Linke=C5=A1?= To: thomas@monjalon.net, Honnappa.Nagarahalli@arm.com, jspewock@iol.unh.edu, probb@iol.unh.edu, paul.szczepanek@arm.com, yoan.picchi@foss.arm.com, Luca.Vizzarro@arm.com Cc: dev@dpdk.org, =?utf-8?q?Juraj_Linke=C5=A1?= Subject: [PATCH v8 08/21] dts: test suite docstring update Date: Thu, 23 Nov 2023 16:13:31 +0100 Message-Id: <20231123151344.162812-9-juraj.linkes@pantheon.tech> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20231123151344.162812-1-juraj.linkes@pantheon.tech> References: <20231115130959.39420-1-juraj.linkes@pantheon.tech> <20231123151344.162812-1-juraj.linkes@pantheon.tech> MIME-Version: 1.0 Format according to the Google format and PEP257, with slight deviations.
Signed-off-by: Juraj Linkeš --- dts/framework/test_suite.py | 231 +++++++++++++++++++++++++++--------- 1 file changed, 175 insertions(+), 56 deletions(-) diff --git a/dts/framework/test_suite.py b/dts/framework/test_suite.py index f9e66e814a..dfb391ffbd 100644 --- a/dts/framework/test_suite.py +++ b/dts/framework/test_suite.py @@ -2,8 +2,19 @@ # Copyright(c) 2010-2014 Intel Corporation # Copyright(c) 2023 PANTHEON.tech s.r.o. -""" -Base class for creating DTS test cases. +"""Features common to all test suites. + +The module defines the :class:`TestSuite` class which doesn't contain any test cases, and as such +must be extended by subclasses which add test cases. The :class:`TestSuite` contains the basics +needed by subclasses: + + * Test suite and test case execution flow, + * Testbed (SUT, TG) configuration, + * Packet sending and verification, + * Test case verification. + +The module also defines a function, :func:`get_test_suites`, +for gathering test suites from a Python module. """ import importlib @@ -11,7 +22,7 @@ import re from ipaddress import IPv4Interface, IPv6Interface, ip_interface from types import MethodType -from typing import Any, Union +from typing import Any, ClassVar, Union from scapy.layers.inet import IP # type: ignore[import] from scapy.layers.l2 import Ether # type: ignore[import] @@ -31,25 +42,44 @@ class TestSuite(object): - """ - The base TestSuite class provides methods for handling basic flow of a test suite: - * test case filtering and collection - * test suite setup/cleanup - * test setup/cleanup - * test case execution - * error handling and results storage - Test cases are implemented by derived classes. Test cases are all methods - starting with test_, further divided into performance test cases - (starting with test_perf_) and functional test cases (all other test cases). - By default, all test cases will be executed. 
A list of testcase str names - may be specified in conf.yaml or on the command line - to filter which test cases to run. - The methods named [set_up|tear_down]_[suite|test_case] should be overridden - in derived classes if the appropriate suite/test case fixtures are needed. + """The base class with methods for handling the basic flow of a test suite. + + * Test case filtering and collection, + * Test suite setup/cleanup, + * Test setup/cleanup, + * Test case execution, + * Error handling and results storage. + + Test cases are implemented by subclasses. Test cases are all methods starting with ``test_``, + further divided into performance test cases (starting with ``test_perf_``) + and functional test cases (all other test cases). + + By default, all test cases will be executed. A list of testcase names may be specified + in the YAML test run configuration file and in the :option:`--test-cases` command line argument + or in the :envvar:`DTS_TESTCASES` environment variable to filter which test cases to run. + The union of both lists will be used. Any unknown test cases from the latter lists + will be silently ignored. + + If the :option:`--re-run` command line argument or the :envvar:`DTS_RERUN` environment variable + is set, in case of a test case failure, the test case will be executed again until it passes + or it fails that many times in addition to the first failure. + + The methods named ``[set_up|tear_down]_[suite|test_case]`` should be overridden in subclasses + if the appropriate test suite/test case fixtures are needed. + + The test suite is aware of the testbed (the SUT and TG) it's running on. From this, it can + properly choose the IP addresses and other configuration that must be tailored to the testbed. + + Attributes: + sut_node: The SUT node where the test suite is running. + tg_node: The TG node where the test suite is running. """ sut_node: SutNode - is_blocking = False + tg_node: TGNode + #: Whether the test suite is blocking.
A failure of a blocking test suite + #: will block the execution of all subsequent test suites in the current build target. + is_blocking: ClassVar[bool] = False _logger: DTSLOG _test_cases_to_run: list[str] _func: bool @@ -72,6 +102,20 @@ def __init__( func: bool, build_target_result: BuildTargetResult, ): + """Initialize the test suite testbed information and basic configuration. + + Process what test cases to run, create the associated + :class:`~.test_result.TestSuiteResult`, find links between ports + and set up default IP addresses to be used when configuring them. + + Args: + sut_node: The SUT node where the test suite will run. + tg_node: The TG node where the test suite will run. + test_cases: The list of test cases to execute. + If empty, all test cases will be executed. + func: Whether to run functional tests. + build_target_result: The build target result this test suite is run in. + """ self.sut_node = sut_node self.tg_node = tg_node self._logger = getLogger(self.__class__.__name__) @@ -95,6 +139,7 @@ def __init__( self._tg_ip_address_ingress = ip_interface("192.168.101.3/24") def _process_links(self) -> None: + """Construct links between SUT and TG ports.""" for sut_port in self.sut_node.ports: for tg_port in self.tg_node.ports: if (sut_port.identifier, sut_port.peer) == ( @@ -104,27 +149,42 @@ def _process_links(self) -> None: self._port_links.append(PortLink(sut_port=sut_port, tg_port=tg_port)) def set_up_suite(self) -> None: - """ - Set up test fixtures common to all test cases; this is done before - any test case is run. + """Set up test fixtures common to all test cases. + + This is done before any test case has been run. """ def tear_down_suite(self) -> None: - """ - Tear down the previously created test fixtures common to all test cases. + """Tear down the previously created test fixtures common to all test cases. + + This is done after all test have been run. 
""" def set_up_test_case(self) -> None: - """ - Set up test fixtures before each test case. + """Set up test fixtures before each test case. + + This is done before *each* test case. """ def tear_down_test_case(self) -> None: - """ - Tear down the previously created test fixtures after each test case. + """Tear down the previously created test fixtures after each test case. + + This is done after *each* test case. """ def configure_testbed_ipv4(self, restore: bool = False) -> None: + """Configure IPv4 addresses on all testbed ports. + + The configured ports are: + + * SUT ingress port, + * SUT egress port, + * TG ingress port, + * TG egress port. + + Args: + restore: If :data:`True`, will remove the configuration instead. + """ delete = True if restore else False enable = False if restore else True self._configure_ipv4_forwarding(enable) @@ -149,11 +209,17 @@ def _configure_ipv4_forwarding(self, enable: bool) -> None: self.sut_node.configure_ipv4_forwarding(enable) def send_packet_and_capture(self, packet: Packet, duration: float = 1) -> list[Packet]: - """ - Send a packet through the appropriate interface and - receive on the appropriate interface. - Modify the packet with l3/l2 addresses corresponding - to the testbed and desired traffic. + """Send and receive `packet` using the associated TG. + + Send `packet` through the appropriate interface and receive on the appropriate interface. + Modify the packet with l3/l2 addresses corresponding to the testbed and desired traffic. + + Args: + packet: The packet to send. + duration: Capture traffic for this amount of time after sending `packet`. + + Returns: + A list of received packets. """ packet = self._adjust_addresses(packet) return self.tg_node.send_packet_and_capture( @@ -161,13 +227,26 @@ def send_packet_and_capture(self, packet: Packet, duration: float = 1) -> list[P ) def get_expected_packet(self, packet: Packet) -> Packet: + """Inject the proper L2/L3 addresses into `packet`. 
+ + Args: + packet: The packet to modify. + + Returns: + `packet` with injected L2/L3 addresses. + """ return self._adjust_addresses(packet, expected=True) def _adjust_addresses(self, packet: Packet, expected: bool = False) -> Packet: - """ + """L2 and L3 address additions in both directions. + Assumptions: - Two links between SUT and TG, one link is TG -> SUT, - the other SUT -> TG. + Two links between SUT and TG, one link is TG -> SUT, the other SUT -> TG. + + Args: + packet: The packet to modify. + expected: If :data:`True`, the direction is SUT -> TG, + otherwise the direction is TG -> SUT. """ if expected: # The packet enters the TG from SUT @@ -193,6 +272,19 @@ def _adjust_addresses(self, packet: Packet, expected: bool = False) -> Packet: return Ether(packet.build()) def verify(self, condition: bool, failure_description: str) -> None: + """Verify `condition` and handle failures. + + When `condition` is :data:`False`, raise an exception and log the last 10 commands + executed on both the SUT and TG. + + Args: + condition: The condition to check. + failure_description: A short description of the failure + that will be stored in the raised exception. + + Raises: + TestCaseVerifyError: `condition` is :data:`False`. + """ if not condition: self._fail_test_case_verify(failure_description) @@ -206,6 +298,19 @@ def _fail_test_case_verify(self, failure_description: str) -> None: raise TestCaseVerifyError(failure_description) def verify_packets(self, expected_packet: Packet, received_packets: list[Packet]) -> None: + """Verify that `expected_packet` has been received. + + Go through `received_packets` and check that `expected_packet` is among them. + If not, raise an exception and log the last 10 commands + executed on both the SUT and TG. + + Args: + expected_packet: The packet we're expecting to receive. + received_packets: The packets where we're looking for `expected_packet`. + + Raises: + TestCaseVerifyError: `expected_packet` is not among `received_packets`. 
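Both ``verify`` and ``verify_packets`` funnel failures into a single exception type; a stripped-down sketch of that pattern (``TestCaseVerifyError`` is redefined here for illustration, and the command logging is omitted):

```python
class TestCaseVerifyError(Exception):
    """Stand-in for the framework's verification failure exception."""


def verify(condition: bool, failure_description: str) -> None:
    # The real method also logs the last 10 commands run on the SUT and TG.
    if not condition:
        raise TestCaseVerifyError(failure_description)


def verify_packets(expected_packet, received_packets: list) -> None:
    # Simplified membership test; the real comparison inspects L2/L3 layers.
    verify(
        expected_packet in received_packets,
        f"{expected_packet!r} not among the received packets",
    )
```

Routing every check through one helper keeps failure handling (logging, result recording) in a single place.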
+ """ for received_packet in received_packets: if self._compare_packets(expected_packet, received_packet): break @@ -280,10 +385,14 @@ def _verify_l3_packet(self, received_packet: IP, expected_packet: IP) -> bool: return True def run(self) -> None: - """ - Setup, execute and teardown the whole suite. - Suite execution consists of running all test cases scheduled to be executed. - A test cast run consists of setup, execution and teardown of said test case. + """Set up, execute and tear down the whole suite. + + Test suite execution consists of running all test cases scheduled to be executed. + A test case run consists of setup, execution and teardown of said test case. + + Record the setup and the teardown and handle failures. + + The list of scheduled test cases is constructed when creating the :class:`TestSuite` object. """ test_suite_name = self.__class__.__name__ @@ -315,9 +424,7 @@ def run(self) -> None: raise BlockingTestSuiteError(test_suite_name) def _execute_test_suite(self) -> None: - """ - Execute all test cases scheduled to be executed in this suite. - """ + """Execute all test cases scheduled to be executed in this suite.""" if self._func: for test_case_method in self._get_functional_test_cases(): test_case_name = test_case_method.__name__ @@ -334,14 +441,18 @@ def _execute_test_suite(self) -> None: self._run_test_case(test_case_method, test_case_result) def _get_functional_test_cases(self) -> list[MethodType]: - """ - Get all functional test cases. + """Get all functional test cases defined in this TestSuite. + + Returns: + The list of functional test cases of this TestSuite. """ return self._get_test_cases(r"test_(?!perf_)") def _get_test_cases(self, test_case_regex: str) -> list[MethodType]: - """ - Return a list of test cases matching test_case_regex. + """Return a list of test cases matching test_case_regex. + + Returns: + The list of test cases matching test_case_regex of this TestSuite. 
""" self._logger.debug(f"Searching for test cases in {self.__class__.__name__}.") filtered_test_cases = [] @@ -353,9 +464,7 @@ def _get_test_cases(self, test_case_regex: str) -> list[MethodType]: return filtered_test_cases def _should_be_executed(self, test_case_name: str, test_case_regex: str) -> bool: - """ - Check whether the test case should be executed. - """ + """Check whether the test case should be scheduled to be executed.""" match = bool(re.match(test_case_regex, test_case_name)) if self._test_cases_to_run: return match and test_case_name in self._test_cases_to_run @@ -365,9 +474,9 @@ def _should_be_executed(self, test_case_name: str, test_case_regex: str) -> bool def _run_test_case( self, test_case_method: MethodType, test_case_result: TestCaseResult ) -> None: - """ - Setup, execute and teardown a test case in this suite. - Exceptions are caught and recorded in logs and results. + """Setup, execute and teardown a test case in this suite. + + Record the result of the setup and the teardown and handle failures. """ test_case_name = test_case_method.__name__ @@ -402,9 +511,7 @@ def _run_test_case( def _execute_test_case( self, test_case_method: MethodType, test_case_result: TestCaseResult ) -> None: - """ - Execute one test case and handle failures. - """ + """Execute one test case, record the result and handle failures.""" test_case_name = test_case_method.__name__ try: self._logger.info(f"Starting test case execution: {test_case_name}") @@ -425,6 +532,18 @@ def _execute_test_case( def get_test_suites(testsuite_module_path: str) -> list[type[TestSuite]]: + r"""Find all :class:`TestSuite`\s in a Python module. + + Args: + testsuite_module_path: The path to the Python module. + + Returns: + The list of :class:`TestSuite`\s found within the Python module. + + Raises: + ConfigurationError: The test suite module was not found. 
+ """ + def is_test_suite(object: Any) -> bool: try: if issubclass(object, TestSuite) and object is not TestSuite: From patchwork Thu Nov 23 15:13:32 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Juraj_Linke=C5=A1?= X-Patchwork-Id: 134575 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 40A88433AC; Thu, 23 Nov 2023 16:15:12 +0100 (CET) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 2FD3342FFB; Thu, 23 Nov 2023 16:14:07 +0100 (CET) Received: from mail-wm1-f53.google.com (mail-wm1-f53.google.com [209.85.128.53]) by mails.dpdk.org (Postfix) with ESMTP id C528E42FED for ; Thu, 23 Nov 2023 16:13:58 +0100 (CET) Received: by mail-wm1-f53.google.com with SMTP id 5b1f17b1804b1-40b31232bf0so7566515e9.1 for ; Thu, 23 Nov 2023 07:13:58 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=pantheon.tech; s=google; t=1700752438; x=1701357238; darn=dpdk.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=r8ptTkrC1tsM0AxROhb6o0FU3WTzzkdfCea6YQDDbXw=; b=eU8PxLod17RyytYrldN9PFWYPZ3oz5tF+xIg+BBzgGzyeQqj4Syju04vbwvzitt/d1 M1C/6glkhnjeiOLmuLyQb3QPSzwfCLf7Acz3gjeOtgfGGkByIhb9H21VdfNBPCZptJHZ gicHRjKTTRX898eWRrSV4ShaQA9ywRnir/LWWEpUr14FEGNN7fo7Vqu5dWb2YPEwAlUw pk2+n28kUbIeHut0aw4hRSuCksvkpmtjv9+AnsY0YHyHsdsX3zGk/Uf3t0h9bonBOBMX UvvHaETivviL1f6VNtJp1v8SUOwFBLRcz2qdKs+QtPZtCDxkC/LnzIPp+7ltaotKiA3P Cbrg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1700752438; x=1701357238; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc 
From: =?utf-8?q?Juraj_Linke=C5=A1?= To: thomas@monjalon.net, Honnappa.Nagarahalli@arm.com, jspewock@iol.unh.edu, probb@iol.unh.edu, paul.szczepanek@arm.com, yoan.picchi@foss.arm.com, Luca.Vizzarro@arm.com Cc: dev@dpdk.org, =?utf-8?q?Juraj_Linke=C5=A1?= Subject: [PATCH v8 09/21] dts: test result docstring update Date: Thu, 23 Nov 2023 16:13:32 +0100 Message-Id: <20231123151344.162812-10-juraj.linkes@pantheon.tech> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20231123151344.162812-1-juraj.linkes@pantheon.tech> References: <20231115130959.39420-1-juraj.linkes@pantheon.tech> <20231123151344.162812-1-juraj.linkes@pantheon.tech> MIME-Version: 1.0 Format according to the Google format and PEP257, with
slight deviations. Signed-off-by: Juraj Linkeš --- dts/framework/test_result.py | 297 ++++++++++++++++++++++++++++------- 1 file changed, 239 insertions(+), 58 deletions(-) diff --git a/dts/framework/test_result.py b/dts/framework/test_result.py index 57090feb04..4467749a9d 100644 --- a/dts/framework/test_result.py +++ b/dts/framework/test_result.py @@ -2,8 +2,25 @@ # Copyright(c) 2023 PANTHEON.tech s.r.o. # Copyright(c) 2023 University of New Hampshire -""" -Generic result container and reporters +r"""Record and process DTS results. + +The results are recorded in a hierarchical manner: + + * :class:`DTSResult` contains + * :class:`ExecutionResult` contains + * :class:`BuildTargetResult` contains + * :class:`TestSuiteResult` contains + * :class:`TestCaseResult` + +Each result may contain multiple lower level results, e.g. there are multiple +:class:`TestSuiteResult`\s in a :class:`BuildTargetResult`. +The results have common parts, such as setup and teardown results, captured in :class:`BaseResult`, +which also defines some common behaviors in its methods. + +Each result class has its own idiosyncrasies which they implement in overridden methods. + +The :option:`--output` command line argument and the :envvar:`DTS_OUTPUT_DIR` environment +variable modify the directory where the files with results will be stored. """ import os.path @@ -26,26 +43,34 @@ class Result(Enum): - """ - An Enum defining the possible states that - a setup, a teardown or a test case may end up in. - """ + """The possible states that a setup, a teardown or a test case may end up in.""" + #: PASS = auto() + #: FAIL = auto() + #: ERROR = auto() + #: SKIP = auto() def __bool__(self) -> bool: + """Only PASS is True.""" return self is self.PASS class FixtureResult(object): - """ - A record that stored the result of a setup or a teardown. 
- The default is FAIL because immediately after creating the object - the setup of the corresponding stage will be executed, which also guarantees - the execution of teardown. + """A record that stores the result of a setup or a teardown. + + :attr:`~Result.FAIL` is a sensible default since it prevents false positives (which could happen + if the default was :attr:`~Result.PASS`). + + Preventing false positives or other false results is preferable since a failure + is mostly likely to be investigated (the other false results may not be investigated at all). + + Attributes: + result: The associated result. + error: The error in case of a failure. """ result: Result @@ -56,21 +81,37 @@ def __init__( result: Result = Result.FAIL, error: Exception | None = None, ): + """Initialize the constructor with the fixture result and store a possible error. + + Args: + result: The result to store. + error: The error which happened when a failure occurred. + """ self.result = result self.error = error def __bool__(self) -> bool: + """A wrapper around the stored :class:`Result`.""" return bool(self.result) class Statistics(dict): - """ - A helper class used to store the number of test cases by its result - along a few other basic information. - Using a dict provides a convenient way to format the data. + """How many test cases ended in which result state along some other basic information. + + Subclassing :class:`dict` provides a convenient way to format the data. + + The data are stored in the following keys: + + * **PASS RATE** (:class:`int`) -- The FAIL/PASS ratio of all test cases. + * **DPDK VERSION** (:class:`str`) -- The tested DPDK version. """ def __init__(self, dpdk_version: str | None): + """Extend the constructor with keys in which the data are stored. + + Args: + dpdk_version: The version of tested DPDK. 
+ """ super(Statistics, self).__init__() for result in Result: self[result.name] = 0 @@ -78,8 +119,17 @@ def __init__(self, dpdk_version: str | None): self["DPDK VERSION"] = dpdk_version def __iadd__(self, other: Result) -> "Statistics": - """ - Add a Result to the final count. + """Add a Result to the final count. + + Example: + stats: Statistics = Statistics() # empty Statistics + stats += Result.PASS # add a Result to `stats` + + Args: + other: The Result to add to this statistics object. + + Returns: + The modified statistics object. """ self[other.name] += 1 self["PASS RATE"] = ( @@ -88,9 +138,7 @@ def __iadd__(self, other: Result) -> "Statistics": return self def __str__(self) -> str: - """ - Provide a string representation of the data. - """ + """Each line contains the formatted key = value pair.""" stats_str = "" for key, value in self.items(): stats_str += f"{key:<12} = {value}\n" @@ -100,10 +148,16 @@ def __str__(self) -> str: class BaseResult(object): - """ - The Base class for all results. Stores the results of - the setup and teardown portions of the corresponding stage - and a list of results from each inner stage in _inner_results. + """Common data and behavior of DTS results. + + Stores the results of the setup and teardown portions of the corresponding stage. + The hierarchical nature of DTS results is captured recursively in an internal list. + A stage is each level in this particular hierarchy (pre-execution or the top-most level, + execution, build target, test suite and test case.) + + Attributes: + setup_result: The result of the setup of the particular stage. + teardown_result: The results of the teardown of the particular stage. 
""" setup_result: FixtureResult @@ -111,15 +165,28 @@ class BaseResult(object): _inner_results: MutableSequence["BaseResult"] def __init__(self): + """Initialize the constructor.""" self.setup_result = FixtureResult() self.teardown_result = FixtureResult() self._inner_results = [] def update_setup(self, result: Result, error: Exception | None = None) -> None: + """Store the setup result. + + Args: + result: The result of the setup. + error: The error that occurred in case of a failure. + """ self.setup_result.result = result self.setup_result.error = error def update_teardown(self, result: Result, error: Exception | None = None) -> None: + """Store the teardown result. + + Args: + result: The result of the teardown. + error: The error that occurred in case of a failure. + """ self.teardown_result.result = result self.teardown_result.error = error @@ -137,27 +204,55 @@ def _get_inner_errors(self) -> list[Exception]: ] def get_errors(self) -> list[Exception]: + """Compile errors from the whole result hierarchy. + + Returns: + The errors from setup, teardown and all errors found in the whole result hierarchy. + """ return self._get_setup_teardown_errors() + self._get_inner_errors() def add_stats(self, statistics: Statistics) -> None: + """Collate stats from the whole result hierarchy. + + Args: + statistics: The :class:`Statistics` object where the stats will be collated. + """ for inner_result in self._inner_results: inner_result.add_stats(statistics) class TestCaseResult(BaseResult, FixtureResult): - """ - The test case specific result. - Stores the result of the actual test case. - Also stores the test case name. + r"""The test case specific result. + + Stores the result of the actual test case. This is done by adding an extra superclass + in :class:`FixtureResult`. The setup and teardown results are :class:`FixtureResult`\s and + the class is itself a record of the test case. + + Attributes: + test_case_name: The test case name. 
""" test_case_name: str def __init__(self, test_case_name: str): + """Extend the constructor with `test_case_name`. + + Args: + test_case_name: The test case's name. + """ super(TestCaseResult, self).__init__() self.test_case_name = test_case_name def update(self, result: Result, error: Exception | None = None) -> None: + """Update the test case result. + + This updates the result of the test case itself and doesn't affect + the results of the setup and teardown steps in any way. + + Args: + result: The result of the test case. + error: The error that occurred in case of a failure. + """ self.result = result self.error = error @@ -167,36 +262,64 @@ def _get_inner_errors(self) -> list[Exception]: return [] def add_stats(self, statistics: Statistics) -> None: + r"""Add the test case result to statistics. + + The base method goes through the hierarchy recursively and this method is here to stop + the recursion, as the :class:`TestCaseResult`\s are the leaves of the hierarchy tree. + + Args: + statistics: The :class:`Statistics` object where the stats will be added. + """ statistics += self.result def __bool__(self) -> bool: + """The test case passed only if setup, teardown and the test case itself passed.""" return bool(self.setup_result) and bool(self.teardown_result) and bool(self.result) class TestSuiteResult(BaseResult): - """ - The test suite specific result. - The _inner_results list stores results of test cases in a given test suite. - Also stores the test suite name. + """The test suite specific result. + + The internal list stores the results of all test cases in a given test suite. + + Attributes: + suite_name: The test suite name. """ suite_name: str def __init__(self, suite_name: str): + """Extend the constructor with `suite_name`. + + Args: + suite_name: The test suite's name. 
+ """ super(TestSuiteResult, self).__init__() self.suite_name = suite_name def add_test_case(self, test_case_name: str) -> TestCaseResult: + """Add and return the inner result (test case). + + Returns: + The test case's result. + """ test_case_result = TestCaseResult(test_case_name) self._inner_results.append(test_case_result) return test_case_result class BuildTargetResult(BaseResult): - """ - The build target specific result. - The _inner_results list stores results of test suites in a given build target. - Also stores build target specifics, such as compiler used to build DPDK. + """The build target specific result. + + The internal list stores the results of all test suites in a given build target. + + Attributes: + arch: The DPDK build target architecture. + os: The DPDK build target operating system. + cpu: The DPDK build target CPU. + compiler: The DPDK build target compiler. + compiler_version: The DPDK build target compiler version. + dpdk_version: The built DPDK version. """ arch: Architecture @@ -207,6 +330,11 @@ class BuildTargetResult(BaseResult): dpdk_version: str | None def __init__(self, build_target: BuildTargetConfiguration): + """Extend the constructor with the `build_target`'s build target config. + + Args: + build_target: The build target's test run configuration. + """ super(BuildTargetResult, self).__init__() self.arch = build_target.arch self.os = build_target.os @@ -216,20 +344,35 @@ def __init__(self, build_target: BuildTargetConfiguration): self.dpdk_version = None def add_build_target_info(self, versions: BuildTargetInfo) -> None: + """Add information about the build target gathered at runtime. + + Args: + versions: The additional information. + """ self.compiler_version = versions.compiler_version self.dpdk_version = versions.dpdk_version def add_test_suite(self, test_suite_name: str) -> TestSuiteResult: + """Add and return the inner result (test suite). + + Returns: + The test suite's result. 
+ """ test_suite_result = TestSuiteResult(test_suite_name) self._inner_results.append(test_suite_result) return test_suite_result class ExecutionResult(BaseResult): - """ - The execution specific result. - The _inner_results list stores results of build targets in a given execution. - Also stores the SUT node configuration. + """The execution specific result. + + The internal list stores the results of all build targets in a given execution. + + Attributes: + sut_node: The SUT node used in the execution. + sut_os_name: The operating system of the SUT node. + sut_os_version: The operating system version of the SUT node. + sut_kernel_version: The operating system kernel version of the SUT node. """ sut_node: NodeConfiguration @@ -238,34 +381,53 @@ class ExecutionResult(BaseResult): sut_kernel_version: str def __init__(self, sut_node: NodeConfiguration): + """Extend the constructor with the `sut_node`'s config. + + Args: + sut_node: The SUT node's test run configuration used in the execution. + """ super(ExecutionResult, self).__init__() self.sut_node = sut_node def add_build_target(self, build_target: BuildTargetConfiguration) -> BuildTargetResult: + """Add and return the inner result (build target). + + Args: + build_target: The build target's test run configuration. + + Returns: + The build target's result. + """ build_target_result = BuildTargetResult(build_target) self._inner_results.append(build_target_result) return build_target_result def add_sut_info(self, sut_info: NodeInfo) -> None: + """Add SUT information gathered at runtime. + + Args: + sut_info: The additional SUT node information. + """ self.sut_os_name = sut_info.os_name self.sut_os_version = sut_info.os_version self.sut_kernel_version = sut_info.kernel_version class DTSResult(BaseResult): - """ - Stores environment information and test results from a DTS run, which are: - * Execution level information, such as SUT and TG hardware. 
- * Build target level information, such as compiler, target OS and cpu. - * Test suite results. - * All errors that are caught and recorded during DTS execution. + """Stores environment information and test results from a DTS run. - The information is stored in nested objects. + * Execution level information, such as testbed and the test suite list, + * Build target level information, such as compiler, target OS and cpu, + * Test suite and test case results, + * All errors that are caught and recorded during DTS execution. - The class is capable of computing the return code used to exit DTS with - from the stored error. + The information is stored hierarchically. This is the first level of the hierarchy + and as such is where the data from the whole hierarchy is collated or processed. - It also provides a brief statistical summary of passed/failed test cases. + The internal list stores the results of all executions. + + Attributes: + dpdk_version: The DPDK version to record. """ dpdk_version: str | None @@ -276,6 +438,11 @@ class DTSResult(BaseResult): _stats_filename: str def __init__(self, logger: DTSLOG): + """Extend the constructor with top-level specifics. + + Args: + logger: The logger instance the whole result will use. + """ super(DTSResult, self).__init__() self.dpdk_version = None self._logger = logger @@ -285,21 +452,33 @@ def __init__(self, logger: DTSLOG): self._stats_filename = os.path.join(SETTINGS.output_dir, "statistics.txt") def add_execution(self, sut_node: NodeConfiguration) -> ExecutionResult: + """Add and return the inner result (execution). + + Args: + sut_node: The SUT node's test run configuration. + + Returns: + The execution's result. + """ execution_result = ExecutionResult(sut_node) self._inner_results.append(execution_result) return execution_result def add_error(self, error: Exception) -> None: + """Record an error that occurred outside any execution. + + Args: + error: The exception to record.
+ """ self._errors.append(error) def process(self) -> None: - """ - Process the data after a DTS run. - The data is added to nested objects during runtime and this parent object - is not updated at that time. This requires us to process the nested data - after it's all been gathered. + """Process the data after a whole DTS run. + + The data is added to inner objects during runtime and this object is not updated + at that time. This requires us to process the inner data after it's all been gathered. - The processing gathers all errors and the result statistics of test cases. + The processing gathers all errors and the statistics of test case results. """ self._errors += self.get_errors() if self._errors and self._logger: @@ -313,8 +492,10 @@ def process(self) -> None: stats_file.write(str(self._stats_result)) def get_return_code(self) -> int: - """ - Go through all stored Exceptions and return the highest error code found. + """Go through all stored Exceptions and return the final DTS error code. + + Returns: + The highest error code found. 
""" for error in self._errors: error_return_code = ErrorSeverity.GENERIC_ERR From patchwork Thu Nov 23 15:13:33 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Juraj_Linke=C5=A1?= X-Patchwork-Id: 134576 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id F0AC4433AC; Thu, 23 Nov 2023 16:15:20 +0100 (CET) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 3E3D643003; Thu, 23 Nov 2023 16:14:08 +0100 (CET) Received: from mail-wm1-f41.google.com (mail-wm1-f41.google.com [209.85.128.41]) by mails.dpdk.org (Postfix) with ESMTP id F03D642FB9 for ; Thu, 23 Nov 2023 16:13:59 +0100 (CET) Received: by mail-wm1-f41.google.com with SMTP id 5b1f17b1804b1-4083f613275so6669765e9.2 for ; Thu, 23 Nov 2023 07:13:59 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=pantheon.tech; s=google; t=1700752439; x=1701357239; darn=dpdk.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=zMSy2qEiR6GNRiO4OC9OI2GqwU9Yj9AIYnq6MiKrGeo=; b=eZnPC72AJvl/O1VKdbWJw9W0MRECxnRFzxvTEoS3js38Or2yoGESnPnAx9t5rPDLuC 8hORZnXhMS3+apxvK1giYtwHhMkPsGeUK4R7QbFC3XQXl4yUC3pVSpYOTUEoa8l1JqkM NB2Uva/yxztUD0M6ZkbCNh0Nc1Ma1bPS0Ie+D/5wi5bQEZ30YkRthX4fkFvQaXvkPSxi pkrHLX2SWxjUeg3gy+u1wb5QTo1hyx+o0dO+/lU8dW10AcjMJ4Z5KwZlSIfq47hwsWpT PyLRzZxeShoDVCJ1dkAGd6EAIyOTBKNdU3l/S6XVM2wCntKJKERAFJ5yJ0UKmFNwsDha 4lrw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1700752439; x=1701357239; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; 
bh=zMSy2qEiR6GNRiO4OC9OI2GqwU9Yj9AIYnq6MiKrGeo=; b=tJb1ROEgCMTwd9MEJCXOvEzwTdAuBt0nwdj4tO5WyQKMl4dho9Uug+sXLm+K4DLGrS bamuw4h/keCfEDZZjff6hduiJyWc8sGHbU7wRVIAmX2ZB5IgtR6ZJXD2aZrx5knfKpAd BHPKjomgEoV+f5oK8MwUJAaA9z/tAZYeFCDTdNFPavZGtOML+iVCrNjlE7lZMCoKzyyW Qv3vO35JSJIsMaHLy/vkPvotkAVG1p+cZYnp2JF3Ynv1EXmRShtvAiVpBAZC4Q3Xhw7L krn+F5S4FDAeR8EtY9Dny9+CyVdF8dIhvwBzZFT4U0M2P1BVVjlM3ZG8lGf+oznL03Xw Gb9g== X-Gm-Message-State: AOJu0YzfndPT1V+Rb1y6BHqvLHkicNgSHQbeUdE82Nr9zJnpfGpBGv9t T0x94+JILNRaFppy0Ues0kK0yQ== X-Google-Smtp-Source: AGHT+IEO2GVTDHXpz//98g2CQCIgrBt39er2yVwjfjOKk3nAMivseS3ufWGsJ0t2UC02zjfmzKV82A== X-Received: by 2002:a05:6000:178b:b0:332:d3f7:46f7 with SMTP id e11-20020a056000178b00b00332d3f746f7mr3499917wrg.53.1700752439576; Thu, 23 Nov 2023 07:13:59 -0800 (PST) Received: from jlinkes-PT-Latitude-5530.. ([84.245.121.10]) by smtp.gmail.com with ESMTPSA id q4-20020adfea04000000b003296b488961sm1870143wrm.31.2023.11.23.07.13.58 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Thu, 23 Nov 2023 07:13:59 -0800 (PST) From: =?utf-8?q?Juraj_Linke=C5=A1?= To: thomas@monjalon.net, Honnappa.Nagarahalli@arm.com, jspewock@iol.unh.edu, probb@iol.unh.edu, paul.szczepanek@arm.com, yoan.picchi@foss.arm.com, Luca.Vizzarro@arm.com Cc: dev@dpdk.org, =?utf-8?q?Juraj_Linke=C5=A1?= Subject: [PATCH v8 10/21] dts: config docstring update Date: Thu, 23 Nov 2023 16:13:33 +0100 Message-Id: <20231123151344.162812-11-juraj.linkes@pantheon.tech> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20231123151344.162812-1-juraj.linkes@pantheon.tech> References: <20231115130959.39420-1-juraj.linkes@pantheon.tech> <20231123151344.162812-1-juraj.linkes@pantheon.tech> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Format according to the Google format and PEP257, with slight deviations. 
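[Reviewer aside, not part of the patch] For readers unfamiliar with the convention named in the commit message, a Google-format docstring, with the PEP 257 one-line summary plus the `Args:`/`Returns:` sections that Sphinx's napoleon extension parses, looks like this; the function itself is a made-up illustration:

```python
def connect(hostname: str, retries: int = 3) -> bool:
    """Open a session to the remote node.

    The first line is the PEP 257 one-line summary; the named sections
    below follow the Google docstring format applied across this series.

    Args:
        hostname: The node to connect to.
        retries: How many times to retry a failed attempt.

    Returns:
        :data:`True` if the session was established.
    """
    return retries > 0
```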
Signed-off-by: Juraj Linkeš --- dts/framework/config/__init__.py | 369 ++++++++++++++++++++++++++----- dts/framework/config/types.py | 132 +++++++++++ 2 files changed, 444 insertions(+), 57 deletions(-) create mode 100644 dts/framework/config/types.py diff --git a/dts/framework/config/__init__.py b/dts/framework/config/__init__.py index ef25a463c0..62eded7f04 100644 --- a/dts/framework/config/__init__.py +++ b/dts/framework/config/__init__.py @@ -3,8 +3,34 @@ # Copyright(c) 2022-2023 University of New Hampshire # Copyright(c) 2023 PANTHEON.tech s.r.o. -""" -Yaml config parsing methods +"""Testbed configuration and test suite specification. + +This package offers classes that hold real-time information about the testbed, hold test run +configuration describing the tested testbed and a loader function, :func:`load_config`, which loads +the YAML test run configuration file +and validates it according to :download:`the schema `. + +The YAML test run configuration file is parsed into a dictionary, parts of which are used throughout +this package. The allowed keys and types inside this dictionary are defined in +the :doc:`types ` module. + +The test run configuration has two main sections: + + * The :class:`ExecutionConfiguration` which defines what tests are going to be run + and how DPDK will be built. It also references the testbed where these tests and DPDK + are going to be run, + * The nodes of the testbed are defined in the other section, + a :class:`list` of :class:`NodeConfiguration` objects. + +The real-time information about testbed is supposed to be gathered at runtime. + +The classes defined in this package make heavy use of :mod:`dataclasses`. +All of them use slots and are frozen: + + * Slots enables some optimizations, by pre-allocating space for the defined + attributes in the underlying data structure, + * Frozen makes the object immutable. This enables further optimizations, + and makes it thread safe should we ever want to move in that direction. 
""" import json @@ -12,11 +38,20 @@ import pathlib from dataclasses import dataclass from enum import auto, unique -from typing import Any, TypedDict, Union +from typing import Union import warlock # type: ignore[import] import yaml +from framework.config.types import ( + BuildTargetConfigDict, + ConfigurationDict, + ExecutionConfigDict, + NodeConfigDict, + PortConfigDict, + TestSuiteConfigDict, + TrafficGeneratorConfigDict, +) from framework.exception import ConfigurationError from framework.settings import SETTINGS from framework.utils import StrEnum @@ -24,55 +59,97 @@ @unique class Architecture(StrEnum): + r"""The supported architectures of :class:`~framework.testbed_model.node.Node`\s.""" + + #: i686 = auto() + #: x86_64 = auto() + #: x86_32 = auto() + #: arm64 = auto() + #: ppc64le = auto() @unique class OS(StrEnum): + r"""The supported operating systems of :class:`~framework.testbed_model.node.Node`\s.""" + + #: linux = auto() + #: freebsd = auto() + #: windows = auto() @unique class CPUType(StrEnum): + r"""The supported CPUs of :class:`~framework.testbed_model.node.Node`\s.""" + + #: native = auto() + #: armv8a = auto() + #: dpaa2 = auto() + #: thunderx = auto() + #: xgene1 = auto() @unique class Compiler(StrEnum): + r"""The supported compilers of :class:`~framework.testbed_model.node.Node`\s.""" + + #: gcc = auto() + #: clang = auto() + #: icc = auto() + #: msvc = auto() @unique class TrafficGeneratorType(StrEnum): + """The supported traffic generators.""" + + #: SCAPY = auto() -# Slots enables some optimizations, by pre-allocating space for the defined -# attributes in the underlying data structure. -# -# Frozen makes the object immutable. This enables further optimizations, -# and makes it thread safe should we every want to move in that direction. @dataclass(slots=True, frozen=True) class HugepageConfiguration: + r"""The hugepage configuration of :class:`~framework.testbed_model.node.Node`\s. + + Attributes: + amount: The number of hugepages. 
+ force_first_numa: If :data:`True`, the hugepages will be configured on the first NUMA node. + """ + amount: int force_first_numa: bool @dataclass(slots=True, frozen=True) class PortConfig: + r"""The port configuration of :class:`~framework.testbed_model.node.Node`\s. + + Attributes: + node: The :class:`~framework.testbed_model.node.Node` where this port exists. + pci: The PCI address of the port. + os_driver_for_dpdk: The operating system driver name for use with DPDK. + os_driver: The operating system driver name when the operating system controls the port. + peer_node: The :class:`~framework.testbed_model.node.Node` of the port + connected to this port. + peer_pci: The PCI address of the port connected to this port. + """ + node: str pci: str os_driver_for_dpdk: str @@ -81,18 +158,44 @@ class PortConfig: peer_pci: str @staticmethod - def from_dict(node: str, d: dict) -> "PortConfig": + def from_dict(node: str, d: PortConfigDict) -> "PortConfig": + """A convenience method that creates the object from fewer inputs. + + Args: + node: The node where this port exists. + d: The configuration dictionary. + + Returns: + The port configuration instance. + """ return PortConfig(node=node, **d) @dataclass(slots=True, frozen=True) class TrafficGeneratorConfig: + """The configuration of traffic generators. + + The class will be expanded when more configuration is needed. + + Attributes: + traffic_generator_type: The type of the traffic generator. + """ + traffic_generator_type: TrafficGeneratorType @staticmethod - def from_dict(d: dict) -> "ScapyTrafficGeneratorConfig": - # This looks useless now, but is designed to allow expansion to traffic - # generators that require more configuration later. + def from_dict(d: TrafficGeneratorConfigDict) -> "ScapyTrafficGeneratorConfig": + """A convenience method that produces traffic generator config of the proper type. + + Args: + d: The configuration dictionary. + + Returns: + The traffic generator configuration instance. 
+ + Raises: + ConfigurationError: An unknown traffic generator type was encountered. + """ match TrafficGeneratorType(d["type"]): case TrafficGeneratorType.SCAPY: return ScapyTrafficGeneratorConfig( @@ -104,11 +207,31 @@ def from_dict(d: dict) -> "ScapyTrafficGeneratorConfig": @dataclass(slots=True, frozen=True) class ScapyTrafficGeneratorConfig(TrafficGeneratorConfig): + """Scapy traffic generator specific configuration.""" + pass @dataclass(slots=True, frozen=True) class NodeConfiguration: + r"""The configuration of :class:`~framework.testbed_model.node.Node`\s. + + Attributes: + name: The name of the :class:`~framework.testbed_model.node.Node`. + hostname: The hostname of the :class:`~framework.testbed_model.node.Node`. + Can be an IP or a domain name. + user: The name of the user used to connect to + the :class:`~framework.testbed_model.node.Node`. + password: The password of the user. The use of passwords is heavily discouraged. + Please use keys instead. + arch: The architecture of the :class:`~framework.testbed_model.node.Node`. + os: The operating system of the :class:`~framework.testbed_model.node.Node`. + lcores: A comma delimited list of logical cores to use when running DPDK. + use_first_core: If :data:`True`, the first logical core won't be used. + hugepages: An optional hugepage configuration. + ports: The ports that can be used in testing. 
+ """ + name: str hostname: str user: str @@ -121,55 +244,89 @@ class NodeConfiguration: ports: list[PortConfig] @staticmethod - def from_dict(d: dict) -> Union["SutNodeConfiguration", "TGNodeConfiguration"]: - hugepage_config = d.get("hugepages") - if hugepage_config: - if "force_first_numa" not in hugepage_config: - hugepage_config["force_first_numa"] = False - hugepage_config = HugepageConfiguration(**hugepage_config) - - common_config = { - "name": d["name"], - "hostname": d["hostname"], - "user": d["user"], - "password": d.get("password"), - "arch": Architecture(d["arch"]), - "os": OS(d["os"]), - "lcores": d.get("lcores", "1"), - "use_first_core": d.get("use_first_core", False), - "hugepages": hugepage_config, - "ports": [PortConfig.from_dict(d["name"], port) for port in d["ports"]], - } - + def from_dict( + d: NodeConfigDict, + ) -> Union["SutNodeConfiguration", "TGNodeConfiguration"]: + """A convenience method that processes the inputs before creating a specialized instance. + + Args: + d: The configuration dictionary. + + Returns: + Either an SUT or TG configuration instance. 
+ """ + hugepage_config = None + if "hugepages" in d: + hugepage_config_dict = d["hugepages"] + if "force_first_numa" not in hugepage_config_dict: + hugepage_config_dict["force_first_numa"] = False + hugepage_config = HugepageConfiguration(**hugepage_config_dict) + + # The calls here contain duplicated code which is here because Mypy doesn't + # properly support dictionary unpacking with TypedDicts if "traffic_generator" in d: return TGNodeConfiguration( + name=d["name"], + hostname=d["hostname"], + user=d["user"], + password=d.get("password"), + arch=Architecture(d["arch"]), + os=OS(d["os"]), + lcores=d.get("lcores", "1"), + use_first_core=d.get("use_first_core", False), + hugepages=hugepage_config, + ports=[PortConfig.from_dict(d["name"], port) for port in d["ports"]], traffic_generator=TrafficGeneratorConfig.from_dict(d["traffic_generator"]), - **common_config, ) else: return SutNodeConfiguration( - memory_channels=d.get("memory_channels", 1), **common_config + name=d["name"], + hostname=d["hostname"], + user=d["user"], + password=d.get("password"), + arch=Architecture(d["arch"]), + os=OS(d["os"]), + lcores=d.get("lcores", "1"), + use_first_core=d.get("use_first_core", False), + hugepages=hugepage_config, + ports=[PortConfig.from_dict(d["name"], port) for port in d["ports"]], + memory_channels=d.get("memory_channels", 1), ) @dataclass(slots=True, frozen=True) class SutNodeConfiguration(NodeConfiguration): + """:class:`~framework.testbed_model.sut_node.SutNode` specific configuration. + + Attributes: + memory_channels: The number of memory channels to use when running DPDK. + """ + memory_channels: int @dataclass(slots=True, frozen=True) class TGNodeConfiguration(NodeConfiguration): + """:class:`~framework.testbed_model.tg_node.TGNode` specific configuration. + + Attributes: + traffic_generator: The configuration of the traffic generator present on the TG node. 
+ """ + traffic_generator: ScapyTrafficGeneratorConfig @dataclass(slots=True, frozen=True) class NodeInfo: - """Class to hold important versions within the node. - - This class, unlike the NodeConfiguration class, cannot be generated at the start. - This is because we need to initialize a connection with the node before we can - collect the information needed in this class. Therefore, it cannot be a part of - the configuration class above. + """Supplemental node information. + + Attributes: + os_name: The name of the running operating system of + the :class:`~framework.testbed_model.node.Node`. + os_version: The version of the running operating system of + the :class:`~framework.testbed_model.node.Node`. + kernel_version: The kernel version of the running operating system of + the :class:`~framework.testbed_model.node.Node`. """ os_name: str @@ -179,6 +336,20 @@ class NodeInfo: @dataclass(slots=True, frozen=True) class BuildTargetConfiguration: + """DPDK build configuration. + + The configuration used for building DPDK. + + Attributes: + arch: The target architecture to build for. + os: The target os to build for. + cpu: The target CPU to build for. + compiler: The compiler executable to use. + compiler_wrapper: This string will be put in front of the compiler when + executing the build. Useful for adding wrapper commands, such as ``ccache``. + name: The name of the compiler. + """ + arch: Architecture os: OS cpu: CPUType @@ -187,7 +358,18 @@ class BuildTargetConfiguration: name: str @staticmethod - def from_dict(d: dict) -> "BuildTargetConfiguration": + def from_dict(d: BuildTargetConfigDict) -> "BuildTargetConfiguration": + r"""A convenience method that processes the inputs before creating an instance. + + `arch`, `os`, `cpu` and `compiler` are converted to :class:`Enum`\s and + `name` is constructed from `arch`, `os`, `cpu` and `compiler`. + + Args: + d: The configuration dictionary. + + Returns: + The build target configuration instance. 
+ """ return BuildTargetConfiguration( arch=Architecture(d["arch"]), os=OS(d["os"]), @@ -200,23 +382,29 @@ def from_dict(d: dict) -> "BuildTargetConfiguration": @dataclass(slots=True, frozen=True) class BuildTargetInfo: - """Class to hold important versions within the build target. + """Various versions and other information about a build target. - This is very similar to the NodeInfo class, it just instead holds information - for the build target. + Attributes: + dpdk_version: The DPDK version that was built. + compiler_version: The version of the compiler used to build DPDK. """ dpdk_version: str compiler_version: str -class TestSuiteConfigDict(TypedDict): - suite: str - cases: list[str] - - @dataclass(slots=True, frozen=True) class TestSuiteConfig: + """Test suite configuration. + + Information about a single test suite to be executed. + + Attributes: + test_suite: The name of the test suite module without the starting ``TestSuite_``. + test_cases: The names of test cases from this test suite to execute. + If empty, all test cases will be executed. + """ + test_suite: str test_cases: list[str] @@ -224,6 +412,14 @@ class TestSuiteConfig: def from_dict( entry: str | TestSuiteConfigDict, ) -> "TestSuiteConfig": + """Create an instance from two different types. + + Args: + entry: Either a suite name or a dictionary containing the config. + + Returns: + The test suite configuration instance. + """ if isinstance(entry, str): return TestSuiteConfig(test_suite=entry, test_cases=[]) elif isinstance(entry, dict): @@ -234,19 +430,49 @@ def from_dict( @dataclass(slots=True, frozen=True) class ExecutionConfiguration: + """The configuration of an execution. + + The configuration contains testbed information, what tests to execute + and with what DPDK build. + + Attributes: + build_targets: A list of DPDK builds to test. + perf: Whether to run performance tests. + func: Whether to run functional tests. + skip_smoke_tests: Whether to skip smoke tests. 
+ test_suites: The names of test suites and/or test cases to execute. + system_under_test_node: The SUT node to use in this execution. + traffic_generator_node: The TG node to use in this execution. + vdevs: The names of virtual devices to test. + """ + build_targets: list[BuildTargetConfiguration] perf: bool func: bool + skip_smoke_tests: bool test_suites: list[TestSuiteConfig] system_under_test_node: SutNodeConfiguration traffic_generator_node: TGNodeConfiguration vdevs: list[str] - skip_smoke_tests: bool @staticmethod def from_dict( - d: dict, node_map: dict[str, Union[SutNodeConfiguration | TGNodeConfiguration]] + d: ExecutionConfigDict, + node_map: dict[str, Union[SutNodeConfiguration | TGNodeConfiguration]], ) -> "ExecutionConfiguration": + """A convenience method that processes the inputs before creating an instance. + + The build target and the test suite config are transformed into their respective objects. + SUT and TG configurations are taken from `node_map`. The other (:class:`bool`) attributes + are just stored. + + Args: + d: The configuration dictionary. + node_map: A dictionary mapping node names to their config objects. + + Returns: + The execution configuration instance. + """ build_targets: list[BuildTargetConfiguration] = list( map(BuildTargetConfiguration.from_dict, d["build_targets"]) ) @@ -283,10 +509,31 @@ def from_dict( @dataclass(slots=True, frozen=True) class Configuration: + """DTS testbed and test configuration. + + The node configuration is not stored in this object. Rather, all used node configurations + are stored inside the execution configuration where the nodes are actually used. + + Attributes: + executions: Execution configurations. + """ + executions: list[ExecutionConfiguration] @staticmethod - def from_dict(d: dict) -> "Configuration": + def from_dict(d: ConfigurationDict) -> "Configuration": + """A convenience method that processes the inputs before creating an instance. 
+ + Build target and test suite configs are transformed into their respective objects. + SUT and TG node configurations are created from the node configurations in `d` and + stored in the executions that use them. + + Args: + d: The configuration dictionary. + + Returns: + The whole configuration instance. + """ nodes: list[Union[SutNodeConfiguration | TGNodeConfiguration]] = list( map(NodeConfiguration.from_dict, d["nodes"]) ) @@ -303,9 +550,17 @@ def from_dict(d: dict) -> "Configuration": def load_config() -> Configuration: - """ - Loads the configuration file and the configuration file schema, - validates the configuration file, and creates a configuration object. + """Load DTS test run configuration from a file. + + Load the YAML test run configuration file + and :download:`the configuration file schema `, + validate the test run configuration file, and create a test run configuration object. + + The YAML test run configuration file is specified in the :option:`--config-file` command line + argument or the :envvar:`DTS_CFG_FILE` environment variable. + + Returns: + The parsed test run configuration. """ with open(SETTINGS.config_file_path, "r") as f: config_data = yaml.safe_load(f) @@ -314,6 +569,6 @@ def load_config() -> Configuration: with open(schema_path, "r") as f: schema = json.load(f) - config: dict[str, Any] = warlock.model_factory(schema, name="_Config")(config_data) - config_obj: Configuration = Configuration.from_dict(dict(config)) + config = warlock.model_factory(schema, name="_Config")(config_data) + config_obj: Configuration = Configuration.from_dict(dict(config)) # type: ignore[arg-type] return config_obj diff --git a/dts/framework/config/types.py b/dts/framework/config/types.py new file mode 100644 index 0000000000..1927910d88 --- /dev/null +++ b/dts/framework/config/types.py @@ -0,0 +1,132 @@ +# SPDX-License-Identifier: BSD-3-Clause +# Copyright(c) 2023 PANTHEON.tech s.r.o. + +"""Configuration dictionary contents specification.
+ +These type definitions serve as documentation of the configuration dictionary contents. + +The definitions use the built-in :class:`~typing.TypedDict` construct. +""" + +from typing import TypedDict + + +class PortConfigDict(TypedDict): + """Allowed keys and values.""" + + #: + pci: str + #: + os_driver_for_dpdk: str + #: + os_driver: str + #: + peer_node: str + #: + peer_pci: str + + +class TrafficGeneratorConfigDict(TypedDict): + """Allowed keys and values.""" + + #: + type: str + + +class HugepageConfigurationDict(TypedDict): + """Allowed keys and values.""" + + #: + amount: int + #: + force_first_numa: bool + + +class NodeConfigDict(TypedDict): + """Allowed keys and values.""" + + #: + hugepages: HugepageConfigurationDict + #: + name: str + #: + hostname: str + #: + user: str + #: + password: str + #: + arch: str + #: + os: str + #: + lcores: str + #: + use_first_core: bool + #: + ports: list[PortConfigDict] + #: + memory_channels: int + #: + traffic_generator: TrafficGeneratorConfigDict + + +class BuildTargetConfigDict(TypedDict): + """Allowed keys and values.""" + + #: + arch: str + #: + os: str + #: + cpu: str + #: + compiler: str + #: + compiler_wrapper: str + + +class TestSuiteConfigDict(TypedDict): + """Allowed keys and values.""" + + #: + suite: str + #: + cases: list[str] + + +class ExecutionSUTConfigDict(TypedDict): + """Allowed keys and values.""" + + #: + node_name: str + #: + vdevs: list[str] + + +class ExecutionConfigDict(TypedDict): + """Allowed keys and values.""" + + #: + build_targets: list[BuildTargetConfigDict] + #: + perf: bool + #: + func: bool + #: + skip_smoke_tests: bool + #: + test_suites: TestSuiteConfigDict + #: + system_under_test_node: ExecutionSUTConfigDict + #: + traffic_generator_node: str + + +class ConfigurationDict(TypedDict): + """Allowed keys and values.""" + + #: + nodes: list[NodeConfigDict] + #: + executions: list[ExecutionConfigDict] From patchwork Thu Nov 23 15:13:34 2023 Content-Type: text/plain; charset="utf-8" 
MIME-Version: 1.0
X-Patchwork-Submitter: Juraj Linkeš
X-Patchwork-Id: 134577
X-Patchwork-Delegate: thomas@monjalon.net
From: Juraj Linkeš
To: thomas@monjalon.net, Honnappa.Nagarahalli@arm.com, jspewock@iol.unh.edu, probb@iol.unh.edu, paul.szczepanek@arm.com, yoan.picchi@foss.arm.com, Luca.Vizzarro@arm.com
Cc: dev@dpdk.org, Juraj Linkeš
Subject: [PATCH v8 11/21] dts: remote session docstring update
Date: Thu, 23 Nov 2023 16:13:34 +0100
Message-Id: <20231123151344.162812-12-juraj.linkes@pantheon.tech>
In-Reply-To: <20231123151344.162812-1-juraj.linkes@pantheon.tech>
References: <20231115130959.39420-1-juraj.linkes@pantheon.tech> <20231123151344.162812-1-juraj.linkes@pantheon.tech>
List-Id: DPDK patches and discussions

Format according to the Google format and PEP257, with slight deviations.
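For reference, a docstring in this Google format looks like the following minimal sketch. The `greet` function is purely illustrative and is not part of DTS or of this patch series:

```python
def greet(name: str, *, excited: bool = False) -> str:
    """Build a greeting for `name`.

    A longer description explaining the behaviour in more detail goes here,
    mirroring the structure used by the docstrings in this series.

    Args:
        name: The name to greet.
        excited: If True, end the greeting with an exclamation mark.

    Returns:
        The formatted greeting.

    Raises:
        ValueError: If `name` is empty.
    """
    if not name:
        raise ValueError("name must not be empty")
    return f"Hello, {name}{'!' if excited else '.'}"
```

The one-line summary, blank line, longer description, and the `Args`/`Returns`/`Raises` sections are the elements PEP 257 and the Google style prescribe; the slight deviations (such as documenting class attributes under an `Attributes:` heading) can be seen throughout the diffs below.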
Signed-off-by: Juraj Linkeš --- dts/framework/remote_session/__init__.py | 39 +++++- .../remote_session/remote_session.py | 130 +++++++++++++----- dts/framework/remote_session/ssh_session.py | 16 +-- 3 files changed, 137 insertions(+), 48 deletions(-) diff --git a/dts/framework/remote_session/__init__.py b/dts/framework/remote_session/__init__.py index 5e7ddb2b05..51a01d6b5e 100644 --- a/dts/framework/remote_session/__init__.py +++ b/dts/framework/remote_session/__init__.py @@ -2,12 +2,14 @@ # Copyright(c) 2023 PANTHEON.tech s.r.o. # Copyright(c) 2023 University of New Hampshire -""" -The package provides modules for managing remote connections to a remote host (node), -differentiated by OS. -The package provides a factory function, create_session, that returns the appropriate -remote connection based on the passed configuration. The differences are in the -underlying transport protocol (e.g. SSH) and remote OS (e.g. Linux). +"""Remote interactive and non-interactive sessions. + +This package provides modules for managing remote connections to a remote host (node). + +The non-interactive sessions send commands and return their output and exit code. + +The interactive sessions open an interactive shell which is continuously open, +allowing it to send and receive data within that particular shell. """ # pylama:ignore=W0611 @@ -26,10 +28,35 @@ def create_remote_session( node_config: NodeConfiguration, name: str, logger: DTSLOG ) -> RemoteSession: + """Factory for non-interactive remote sessions. + + The function returns an SSH session, but will be extended if support + for other protocols is added. + + Args: + node_config: The test run configuration of the node to connect to. + name: The name of the session. + logger: The logger instance this session will use. + + Returns: + The SSH remote session. 
+ """ return SSHSession(node_config, name, logger) def create_interactive_session( node_config: NodeConfiguration, logger: DTSLOG ) -> InteractiveRemoteSession: + """Factory for interactive remote sessions. + + The function returns an interactive SSH session, but will be extended if support + for other protocols is added. + + Args: + node_config: The test run configuration of the node to connect to. + logger: The logger instance this session will use. + + Returns: + The interactive SSH remote session. + """ return InteractiveRemoteSession(node_config, logger) diff --git a/dts/framework/remote_session/remote_session.py b/dts/framework/remote_session/remote_session.py index 719f7d1ef7..2059f9a981 100644 --- a/dts/framework/remote_session/remote_session.py +++ b/dts/framework/remote_session/remote_session.py @@ -3,6 +3,13 @@ # Copyright(c) 2022-2023 PANTHEON.tech s.r.o. # Copyright(c) 2022-2023 University of New Hampshire +"""Base remote session. + +This module contains the abstract base class for remote sessions and defines +the structure of the result of a command execution. +""" + + import dataclasses from abc import ABC, abstractmethod from pathlib import PurePath @@ -15,8 +22,14 @@ @dataclasses.dataclass(slots=True, frozen=True) class CommandResult: - """ - The result of remote execution of a command. + """The result of remote execution of a command. + + Attributes: + name: The name of the session that executed the command. + command: The executed command. + stdout: The standard output the command produced. + stderr: The standard error output the command produced. + return_code: The return code the command exited with. 
""" name: str @@ -26,6 +39,7 @@ class CommandResult: return_code: int def __str__(self) -> str: + """Format the command outputs.""" return ( f"stdout: '{self.stdout}'\n" f"stderr: '{self.stderr}'\n" @@ -34,13 +48,24 @@ def __str__(self) -> str: class RemoteSession(ABC): - """ - The base class for defining which methods must be implemented in order to connect - to a remote host (node) and maintain a remote session. The derived classes are - supposed to implement/use some underlying transport protocol (e.g. SSH) to - implement the methods. On top of that, it provides some basic services common to - all derived classes, such as keeping history and logging what's being executed - on the remote node. + """Non-interactive remote session. + + The abstract methods must be implemented in order to connect to a remote host (node) + and maintain a remote session. + The subclasses must use (or implement) some underlying transport protocol (e.g. SSH) + to implement the methods. On top of that, it provides some basic services common to all + subclasses, such as keeping history and logging what's being executed on the remote node. + + Attributes: + name: The name of the session. + hostname: The node's hostname. Could be an IP (possibly with port, separated by a colon) + or a domain name. + ip: The IP address of the node or a domain name, whichever was used in `hostname`. + port: The port of the node, if given in `hostname`. + username: The username used in the connection. + password: The password used in the connection. Most frequently empty, + as the use of passwords is discouraged. + history: The executed commands during this session. """ name: str @@ -59,6 +84,16 @@ def __init__( session_name: str, logger: DTSLOG, ): + """Connect to the node during initialization. + + Args: + node_config: The test run configuration of the node to connect to. + session_name: The name of the session. + logger: The logger instance this session will use. 
+ + Raises: + SSHConnectionError: If the connection to the node was not successful. + """ self._node_config = node_config self.name = session_name @@ -79,8 +114,13 @@ def __init__( @abstractmethod def _connect(self) -> None: - """ - Create connection to assigned node. + """Create a connection to the node. + + The implementation must assign the established session to self.session. + + The implementation must except all exceptions and convert them to an SSHConnectionError. + + The implementation may optionally implement retry attempts. """ def send_command( @@ -90,11 +130,24 @@ def send_command( verify: bool = False, env: dict | None = None, ) -> CommandResult: - """ - Send a command to the connected node using optional env vars - and return CommandResult. - If verify is True, check the return code of the executed command - and raise a RemoteCommandExecutionError if the command failed. + """Send `command` to the connected node. + + The :option:`--timeout` command line argument and the :envvar:`DTS_TIMEOUT` + environment variable configure the timeout of command execution. + + Args: + command: The command to execute. + timeout: Wait at most this long in seconds for `command` execution to complete. + verify: If :data:`True`, will check the exit code of `command`. + env: A dictionary with environment variables to be used with `command` execution. + + Raises: + SSHSessionDeadError: If the session isn't alive when sending `command`. + SSHTimeoutError: If `command` execution timed out. + RemoteCommandExecutionError: If verify is :data:`True` and `command` execution failed. + + Returns: + The output of the command along with the return code. 
""" self._logger.info(f"Sending: '{command}'" + (f" with env vars: '{env}'" if env else "")) result = self._send_command(command, timeout, env) @@ -111,29 +164,38 @@ def send_command( @abstractmethod def _send_command(self, command: str, timeout: float, env: dict | None) -> CommandResult: - """ - Use the underlying protocol to execute the command using optional env vars - and return CommandResult. + """Send a command to the connected node. + + The implementation must execute the command remotely with `env` environment variables + and return the result. + + The implementation must except all exceptions and raise: + + * SSHSessionDeadError if the session is not alive, + * SSHTimeoutError if the command execution times out. """ def close(self, force: bool = False) -> None: - """ - Close the remote session and free all used resources. + """Close the remote session and free all used resources. + + Args: + force: Force the closure of the connection. This may not clean up all resources. """ self._logger.logger_exit() self._close(force) @abstractmethod def _close(self, force: bool = False) -> None: - """ - Execute protocol specific steps needed to close the session properly. + """Protocol specific steps needed to close the session properly. + + Args: + force: Force the closure of the connection. This may not clean up all resources. + This doesn't have to be implemented in the overloaded method. """ @abstractmethod def is_alive(self) -> bool: - """ - Check whether the remote session is still responding. - """ + """Check whether the remote session is still responding.""" @abstractmethod def copy_from( @@ -143,12 +205,12 @@ def copy_from( ) -> None: """Copy a file from the remote Node to the local filesystem. - Copy source_file from the remote Node associated with this remote - session to destination_file on the local filesystem. + Copy `source_file` from the remote Node associated with this remote session + to `destination_file` on the local filesystem. 
Args: - source_file: the file on the remote Node. - destination_file: a file or directory path on the local filesystem. + source_file: The file on the remote Node. + destination_file: A file or directory path on the local filesystem. """ @abstractmethod @@ -159,10 +221,10 @@ def copy_to( ) -> None: """Copy a file from local filesystem to the remote Node. - Copy source_file from local filesystem to destination_file - on the remote Node associated with this remote session. + Copy `source_file` from local filesystem to `destination_file` on the remote Node + associated with this remote session. Args: - source_file: the file on the local filesystem. - destination_file: a file or directory path on the remote Node. + source_file: The file on the local filesystem. + destination_file: A file or directory path on the remote Node. """ diff --git a/dts/framework/remote_session/ssh_session.py b/dts/framework/remote_session/ssh_session.py index a467033a13..782220092c 100644 --- a/dts/framework/remote_session/ssh_session.py +++ b/dts/framework/remote_session/ssh_session.py @@ -1,6 +1,8 @@ # SPDX-License-Identifier: BSD-3-Clause # Copyright(c) 2023 PANTHEON.tech s.r.o. +"""SSH remote session.""" + import socket import traceback from pathlib import PurePath @@ -26,13 +28,8 @@ class SSHSession(RemoteSession): """A persistent SSH connection to a remote Node. - The connection is implemented with the Fabric Python library. - - Args: - node_config: The configuration of the Node to connect to. - session_name: The name of the session. - logger: The logger used for logging. - This should be passed from the parent OSSession. + The connection is implemented with + `the Fabric Python library `_. Attributes: session: The underlying Fabric SSH connection. 
@@ -78,6 +75,7 @@ def _connect(self) -> None: raise SSHConnectionError(self.hostname, errors) def is_alive(self) -> bool: + """Overrides :meth:`~.remote_session.RemoteSession.is_alive`.""" return self.session.is_connected def _send_command(self, command: str, timeout: float, env: dict | None) -> CommandResult: @@ -85,7 +83,7 @@ def _send_command(self, command: str, timeout: float, env: dict | None) -> Comma Args: command: The command to execute. - timeout: Wait at most this many seconds for the execution to complete. + timeout: Wait at most this long in seconds for the command execution to complete. env: Extra environment variables that will be used in command execution. Raises: @@ -110,6 +108,7 @@ def copy_from( source_file: str | PurePath, destination_file: str | PurePath, ) -> None: + """Overrides :meth:`~.remote_session.RemoteSession.copy_from`.""" self.session.get(str(destination_file), str(source_file)) def copy_to( @@ -117,6 +116,7 @@ def copy_to( source_file: str | PurePath, destination_file: str | PurePath, ) -> None: + """Overrides :meth:`~.remote_session.RemoteSession.copy_to`.""" self.session.put(str(source_file), str(destination_file)) def _close(self, force: bool = False) -> None: From patchwork Thu Nov 23 15:13:35 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Juraj_Linke=C5=A1?= X-Patchwork-Id: 134578 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 9E5D7433AC; Thu, 23 Nov 2023 16:15:37 +0100 (CET) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 4FF4B43246; Thu, 23 Nov 2023 16:14:10 +0100 (CET) Received: from mail-wm1-f47.google.com (mail-wm1-f47.google.com [209.85.128.47]) by mails.dpdk.org (Postfix) with ESMTP id EF1C642FB9 for ; 
Thu, 23 Nov 2023 07:14:02 -0800 (PST)
From: Juraj Linkeš
To: thomas@monjalon.net, Honnappa.Nagarahalli@arm.com, jspewock@iol.unh.edu, probb@iol.unh.edu, paul.szczepanek@arm.com, yoan.picchi@foss.arm.com, Luca.Vizzarro@arm.com
Cc: dev@dpdk.org, Juraj Linkeš
Subject: [PATCH v8 12/21] dts: interactive remote session docstring update
Date: Thu, 23 Nov 2023 16:13:35 +0100
Message-Id: <20231123151344.162812-13-juraj.linkes@pantheon.tech>
In-Reply-To: <20231123151344.162812-1-juraj.linkes@pantheon.tech>
References: <20231115130959.39420-1-juraj.linkes@pantheon.tech> <20231123151344.162812-1-juraj.linkes@pantheon.tech>
List-Id: DPDK patches and discussions

Format according to the Google format and PEP257, with slight deviations.

Signed-off-by: Juraj Linkeš
---
 .../interactive_remote_session.py | 36 +++----
 .../remote_session/interactive_shell.py | 99 +++++++++++--------
 dts/framework/remote_session/python_shell.py | 26 ++++-
 dts/framework/remote_session/testpmd_shell.py | 58 +++++++++--
 4 files changed, 149 insertions(+), 70 deletions(-)

diff --git a/dts/framework/remote_session/interactive_remote_session.py b/dts/framework/remote_session/interactive_remote_session.py index 098ded1bb0..1cc82e3377 100644 --- a/dts/framework/remote_session/interactive_remote_session.py +++ b/dts/framework/remote_session/interactive_remote_session.py @@ -22,27 +22,23 @@ class InteractiveRemoteSession: """SSH connection dedicated to interactive applications. - This connection is created using paramiko and is a persistent connection to the - host.
This class defines methods for connecting to the node and configures this - connection to send "keep alive" packets every 30 seconds. Because paramiko attempts - to use SSH keys to establish a connection first, providing a password is optional. - This session is utilized by InteractiveShells and cannot be interacted with - directly. - - Arguments: - node_config: Configuration class for the node you are connecting to. - _logger: Desired logger for this session to use. + The connection is created using `paramiko `_ + and is a persistent connection to the host. This class defines the methods for connecting + to the node and configures the connection to send "keep alive" packets every 30 seconds. + Because paramiko attempts to use SSH keys to establish a connection first, providing + a password is optional. This session is utilized by InteractiveShells + and cannot be interacted with directly. Attributes: - hostname: Hostname that will be used to initialize a connection to the node. - ip: A subsection of hostname that removes the port for the connection if there + hostname: The hostname that will be used to initialize a connection to the node. + ip: A subsection of `hostname` that removes the port for the connection if there is one. If there is no port, this will be the same as hostname. - port: Port to use for the ssh connection. This will be extracted from the - hostname if there is a port included, otherwise it will default to 22. + port: Port to use for the ssh connection. This will be extracted from `hostname` + if there is a port included, otherwise it will default to ``22``. username: User to connect to the node with. password: Password of the user connecting to the host. This will default to an empty string if a password is not provided. - session: Underlying paramiko connection. + session: The underlying paramiko connection. Raises: SSHConnectionError: There is an error creating the SSH connection. 
@@ -58,9 +54,15 @@ class InteractiveRemoteSession: _node_config: NodeConfiguration _transport: Transport | None - def __init__(self, node_config: NodeConfiguration, _logger: DTSLOG) -> None: + def __init__(self, node_config: NodeConfiguration, logger: DTSLOG) -> None: + """Connect to the node during initialization. + + Args: + node_config: The test run configuration of the node to connect to. + logger: The logger instance this session will use. + """ self._node_config = node_config - self._logger = _logger + self._logger = logger self.hostname = node_config.hostname self.username = node_config.user self.password = node_config.password if node_config.password else "" diff --git a/dts/framework/remote_session/interactive_shell.py b/dts/framework/remote_session/interactive_shell.py index 4db19fb9b3..b158f963b6 100644 --- a/dts/framework/remote_session/interactive_shell.py +++ b/dts/framework/remote_session/interactive_shell.py @@ -3,18 +3,20 @@ """Common functionality for interactive shell handling. -This base class, InteractiveShell, is meant to be extended by other classes that -contain functionality specific to that shell type. These derived classes will often -modify things like the prompt to expect or the arguments to pass into the application, -but still utilize the same method for sending a command and collecting output. How -this output is handled however is often application specific. If an application needs -elevated privileges to start it is expected that the method for gaining those -privileges is provided when initializing the class. +The base class, :class:`InteractiveShell`, is meant to be extended by subclasses that contain +functionality specific to that shell type. These subclasses will often modify things like +the prompt to expect or the arguments to pass into the application, but still utilize +the same method for sending a command and collecting output. How this output is handled however +is often application specific. 
If an application needs elevated privileges to start it is expected +that the method for gaining those privileges is provided when initializing the class. + +The :option:`--timeout` command line argument and the :envvar:`DTS_TIMEOUT` +environment variable configure the timeout of getting the output from command execution. """ from abc import ABC from pathlib import PurePath -from typing import Callable +from typing import Callable, ClassVar from paramiko import Channel, SSHClient, channel # type: ignore[import] @@ -30,28 +32,6 @@ class InteractiveShell(ABC): and collecting input until reaching a certain prompt. All interactive applications will use the same SSH connection, but each will create their own channel on that session. - - Arguments: - interactive_session: The SSH session dedicated to interactive shells. - logger: Logger used for displaying information in the console. - get_privileged_command: Method for modifying a command to allow it to use - elevated privileges. If this is None, the application will not be started - with elevated privileges. - app_args: Command line arguments to be passed to the application on startup. - timeout: Timeout used for the SSH channel that is dedicated to this interactive - shell. This timeout is for collecting output, so if reading from the buffer - and no output is gathered within the timeout, an exception is thrown. - - Attributes - _default_prompt: Prompt to expect at the end of output when sending a command. - This is often overridden by derived classes. - _command_extra_chars: Extra characters to add to the end of every command - before sending them. This is often overridden by derived classes and is - most commonly an additional newline character. - path: Path to the executable to start the interactive application. - dpdk_app: Whether this application is a DPDK app. If it is, the build - directory for DPDK on the node will be prepended to the path to the - executable. 
""" _interactive_session: SSHClient @@ -61,10 +41,22 @@ class InteractiveShell(ABC): _logger: DTSLOG _timeout: float _app_args: str - _default_prompt: str = "" - _command_extra_chars: str = "" - path: PurePath - dpdk_app: bool = False + + #: Prompt to expect at the end of output when sending a command. + #: This is often overridden by subclasses. + _default_prompt: ClassVar[str] = "" + + #: Extra characters to add to the end of every command + #: before sending them. This is often overridden by subclasses and is + #: most commonly an additional newline character. + _command_extra_chars: ClassVar[str] = "" + + #: Path to the executable to start the interactive application. + path: ClassVar[PurePath] + + #: Whether this application is a DPDK app. If it is, the build directory + #: for DPDK on the node will be prepended to the path to the executable. + dpdk_app: ClassVar[bool] = False def __init__( self, @@ -74,6 +66,19 @@ def __init__( app_args: str = "", timeout: float = SETTINGS.timeout, ) -> None: + """Create an SSH channel during initialization. + + Args: + interactive_session: The SSH session dedicated to interactive shells. + logger: The logger instance this session will use. + get_privileged_command: A method for modifying a command to allow it to use + elevated privileges. If :data:`None`, the application will not be started + with elevated privileges. + app_args: The command line arguments to be passed to the application on startup. + timeout: The timeout used for the SSH channel that is dedicated to this interactive + shell. This timeout is for collecting output, so if reading from the buffer + and no output is gathered within the timeout, an exception is thrown. 
+ """ self._interactive_session = interactive_session self._ssh_channel = self._interactive_session.invoke_shell() self._stdin = self._ssh_channel.makefile_stdin("w") @@ -90,6 +95,10 @@ def _start_application(self, get_privileged_command: Callable[[str], str] | None This method is often overridden by subclasses as their process for starting may look different. + + Args: + get_privileged_command: A function (but could be any callable) that produces + the version of the command with elevated privileges. """ start_command = f"{self.path} {self._app_args}" if get_privileged_command is not None: @@ -97,16 +106,24 @@ def _start_application(self, get_privileged_command: Callable[[str], str] | None self.send_command(start_command) def send_command(self, command: str, prompt: str | None = None) -> str: - """Send a command and get all output before the expected ending string. + """Send `command` and get all output before the expected ending string. Lines that expect input are not included in the stdout buffer, so they cannot - be used for expect. For example, if you were prompted to log into something - with a username and password, you cannot expect "username:" because it won't - yet be in the stdout buffer. A workaround for this could be consuming an - extra newline character to force the current prompt into the stdout buffer. + be used for expect. + + Example: + If you were prompted to log into something with a username and password, + you cannot expect ``username:`` because it won't yet be in the stdout buffer. + A workaround for this could be consuming an extra newline character to force + the current `prompt` into the stdout buffer. + + Args: + command: The command to send. + prompt: After sending the command, `send_command` will be expecting this string. + If :data:`None`, will use the class's default prompt. Returns: - All output in the buffer before expected string + All output in the buffer before expected string. 
""" self._logger.info(f"Sending: '{command}'") if prompt is None: @@ -124,8 +141,10 @@ def send_command(self, command: str, prompt: str | None = None) -> str: return out def close(self) -> None: + """Properly free all resources.""" self._stdin.close() self._ssh_channel.close() def __del__(self) -> None: + """Make sure the session is properly closed before deleting the object.""" self.close() diff --git a/dts/framework/remote_session/python_shell.py b/dts/framework/remote_session/python_shell.py index cc3ad48a68..ccfd3783e8 100644 --- a/dts/framework/remote_session/python_shell.py +++ b/dts/framework/remote_session/python_shell.py @@ -1,12 +1,32 @@ # SPDX-License-Identifier: BSD-3-Clause # Copyright(c) 2023 PANTHEON.tech s.r.o. +"""Python interactive shell. + +Typical usage example in a TestSuite:: + + from framework.remote_session import PythonShell + python_shell = self.tg_node.create_interactive_shell( + PythonShell, timeout=5, privileged=True + ) + python_shell.send_command("print('Hello World')") + python_shell.close() +""" + from pathlib import PurePath +from typing import ClassVar from .interactive_shell import InteractiveShell class PythonShell(InteractiveShell): - _default_prompt: str = ">>>" - _command_extra_chars: str = "\n" - path: PurePath = PurePath("python3") + """Python interactive shell.""" + + #: Python's prompt. + _default_prompt: ClassVar[str] = ">>>" + + #: This forces the prompt to appear after sending a command. + _command_extra_chars: ClassVar[str] = "\n" + + #: The Python executable. + path: ClassVar[PurePath] = PurePath("python3") diff --git a/dts/framework/remote_session/testpmd_shell.py b/dts/framework/remote_session/testpmd_shell.py index 08ac311016..79481e845c 100644 --- a/dts/framework/remote_session/testpmd_shell.py +++ b/dts/framework/remote_session/testpmd_shell.py @@ -1,41 +1,79 @@ # SPDX-License-Identifier: BSD-3-Clause # Copyright(c) 2023 University of New Hampshire +"""Testpmd interactive shell. 
+ +Typical usage example in a TestSuite:: + + testpmd_shell = self.sut_node.create_interactive_shell( + TestPmdShell, privileged=True + ) + devices = testpmd_shell.get_devices() + for device in devices: + print(device) + testpmd_shell.close() +""" + from pathlib import PurePath -from typing import Callable +from typing import Callable, ClassVar from .interactive_shell import InteractiveShell class TestPmdDevice(object): + """The data of a device that testpmd can recognize. + + Attributes: + pci_address: The PCI address of the device. + """ + pci_address: str def __init__(self, pci_address_line: str): + """Initialize the device from the testpmd output line string. + + Args: + pci_address_line: A line of testpmd output that contains a device. + """ self.pci_address = pci_address_line.strip().split(": ")[1].strip() def __str__(self) -> str: + """The PCI address captures what the device is.""" return self.pci_address class TestPmdShell(InteractiveShell): - path: PurePath = PurePath("app", "dpdk-testpmd") - dpdk_app: bool = True - _default_prompt: str = "testpmd>" - _command_extra_chars: str = "\n" # We want to append an extra newline to every command + """Testpmd interactive shell. + + Users of the testpmd shell should never use + the :meth:`~.interactive_shell.InteractiveShell.send_command` method directly, but rather + call specialized methods. If there isn't one that satisfies a need, it should be added. + + + #: The path to the testpmd executable. + path: ClassVar[PurePath] = PurePath("app", "dpdk-testpmd") + + #: Flag this as a DPDK app so that it's clear this is not a system app and + #: needs to be looked up in a specific path. + dpdk_app: ClassVar[bool] = True + + #: The testpmd's prompt. + _default_prompt: ClassVar[str] = "testpmd>" + + #: This forces the prompt to appear after sending a command.
+ _command_extra_chars: ClassVar[str] = "\n" def _start_application(self, get_privileged_command: Callable[[str], str] | None) -> None: - """See "_start_application" in InteractiveShell.""" self._app_args += " -- -i" super()._start_application(get_privileged_command) def get_devices(self) -> list[TestPmdDevice]: - """Get a list of device names that are known to testpmd + """Get a list of device names that are known to testpmd. - Uses the device info listed in testpmd and then parses the output to - return only the names of the devices. + Uses the device info listed in testpmd and then parses the output. Returns: - A list of strings representing device names (e.g. 0000:14:00.1) + A list of devices. """ dev_info: str = self.send_command("show device info all") dev_list: list[TestPmdDevice] = [] From patchwork Thu Nov 23 15:13:36 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Juraj_Linke=C5=A1?= X-Patchwork-Id: 134579 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 8121D433AC; Thu, 23 Nov 2023 16:15:54 +0100 (CET) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id C98E242FC5; Thu, 23 Nov 2023 16:14:21 +0100 (CET) Received: from mail-lf1-f50.google.com (mail-lf1-f50.google.com [209.85.167.50]) by mails.dpdk.org (Postfix) with ESMTP id B313442FBD for ; Thu, 23 Nov 2023 16:14:04 +0100 (CET) Received: by mail-lf1-f50.google.com with SMTP id 2adb3069b0e04-50abbb23122so1190041e87.3 for ; Thu, 23 Nov 2023 07:14:04 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=pantheon.tech; s=google; t=1700752444; x=1701357244; darn=dpdk.org; h=content-transfer-encoding:mime-version:references:in-reply-to 
([84.245.121.10]) by smtp.gmail.com with ESMTPSA id q4-20020adfea04000000b003296b488961sm1870143wrm.31.2023.11.23.07.14.02 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Thu, 23 Nov 2023 07:14:03 -0800 (PST) From: =?utf-8?q?Juraj_Linke=C5=A1?= To: thomas@monjalon.net, Honnappa.Nagarahalli@arm.com, jspewock@iol.unh.edu, probb@iol.unh.edu, paul.szczepanek@arm.com, yoan.picchi@foss.arm.com, Luca.Vizzarro@arm.com Cc: dev@dpdk.org, =?utf-8?q?Juraj_Linke=C5=A1?= Subject: [PATCH v8 13/21] dts: port and virtual device docstring update Date: Thu, 23 Nov 2023 16:13:36 +0100 Message-Id: <20231123151344.162812-14-juraj.linkes@pantheon.tech> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20231123151344.162812-1-juraj.linkes@pantheon.tech> References: <20231115130959.39420-1-juraj.linkes@pantheon.tech> <20231123151344.162812-1-juraj.linkes@pantheon.tech> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Format according to the Google format and PEP257, with slight deviations. Signed-off-by: Juraj Linkeš --- dts/framework/testbed_model/__init__.py | 17 ++++-- dts/framework/testbed_model/port.py | 53 +++++++++++++++---- dts/framework/testbed_model/virtual_device.py | 17 +++++- 3 files changed, 72 insertions(+), 15 deletions(-) diff --git a/dts/framework/testbed_model/__init__.py b/dts/framework/testbed_model/__init__.py index 8ced05653b..6086512ca2 100644 --- a/dts/framework/testbed_model/__init__.py +++ b/dts/framework/testbed_model/__init__.py @@ -2,9 +2,20 @@ # Copyright(c) 2022-2023 University of New Hampshire # Copyright(c) 2023 PANTHEON.tech s.r.o. -""" -This package contains the classes used to model the physical traffic generator, -system under test and any other components that need to be interacted with. +"""Testbed modelling. 
+ +This package defines the testbed elements DTS works with: + + * A system under test node: :class:`~.sut_node.SutNode`, + * A traffic generator node: :class:`~.tg_node.TGNode`, + * The ports of network interface cards (NICs) present on nodes: :class:`~.port.Port`, + * The logical cores of CPUs present on nodes: :class:`~.cpu.LogicalCore`, + * The virtual devices that can be created on nodes: :class:`~.virtual_device.VirtualDevice`, + * The operating systems running on nodes: :class:`~.linux_session.LinuxSession` + and :class:`~.posix_session.PosixSession`. + +DTS needs to be able to connect to nodes and understand some of the hardware present on these nodes +to properly build and test DPDK. """ # pylama:ignore=W0611 diff --git a/dts/framework/testbed_model/port.py b/dts/framework/testbed_model/port.py index 680c29bfe3..817405bea4 100644 --- a/dts/framework/testbed_model/port.py +++ b/dts/framework/testbed_model/port.py @@ -2,6 +2,13 @@ # Copyright(c) 2022 University of New Hampshire # Copyright(c) 2023 PANTHEON.tech s.r.o. +"""NIC port model. + +Basic port information, such as location (the ports are identified by their PCI address on a node), +drivers and addresses. +""" + + from dataclasses import dataclass from framework.config import PortConfig @@ -9,24 +16,35 @@ @dataclass(slots=True, frozen=True) class PortIdentifier: + """The port identifier. + + Attributes: + node: The node where the port resides. + pci: The PCI address of the port on `node`. + """ + node: str pci: str @dataclass(slots=True) class Port: - """ - identifier: The PCI address of the port on a node. - - os_driver: The driver used by this port when the OS is controlling it. - Example: i40e - os_driver_for_dpdk: The driver the device must be bound to for DPDK to use it, - Example: vfio-pci. + """Physical port on a node. - Note: os_driver and os_driver_for_dpdk may be the same thing. - Example: mlx5_core + The ports are identified by the node they're on and their PCI addresses.
The port on the other + side of the connection is also captured here. + Each port is serviced by a driver, which may be different for the operating system (`os_driver`) + and for DPDK (`os_driver_for_dpdk`). For some devices, they are the same, e.g.: ``mlx5_core``. - peer: The identifier of a port this port is connected with. + Attributes: + identifier: The port identifier, made up of the node name and the port's PCI address. + os_driver: The operating system driver name when the operating system controls the port, + e.g.: ``i40e``. + os_driver_for_dpdk: The operating system driver name for use with DPDK, e.g.: ``vfio-pci``. + peer: The identifier of a port this port is connected with. + The `peer` is on a different node. + mac_address: The MAC address of the port. + logical_name: The logical name of the port. Must be discovered. """ identifier: PortIdentifier @@ -37,6 +55,12 @@ class Port: logical_name: str = "" def __init__(self, node_name: str, config: PortConfig): + """Initialize the port from `node_name` and `config`. + + Args: + node_name: The name of the port's node. + config: The test run configuration of the port. + """ self.identifier = PortIdentifier( node=node_name, pci=config.pci, @@ -47,14 +71,23 @@ def __init__(self, node_name: str, config: PortConfig): @property def node(self) -> str: + """The node where the port resides.""" return self.identifier.node @property def pci(self) -> str: + """The PCI address of the port.""" return self.identifier.pci @dataclass(slots=True, frozen=True) class PortLink: + """The physical, cabled connection between the ports. + + Attributes: + sut_port: The port on the SUT node connected to `tg_port`. + tg_port: The port on the TG node connected to `sut_port`.
+ """ + sut_port: Port tg_port: Port diff --git a/dts/framework/testbed_model/virtual_device.py b/dts/framework/testbed_model/virtual_device.py index eb664d9f17..e9b5e9c3be 100644 --- a/dts/framework/testbed_model/virtual_device.py +++ b/dts/framework/testbed_model/virtual_device.py @@ -1,16 +1,29 @@ # SPDX-License-Identifier: BSD-3-Clause # Copyright(c) 2023 PANTHEON.tech s.r.o. +"""Virtual devices model. + +Alongside support for physical hardware, DPDK can create various virtual devices. +""" + class VirtualDevice(object): - """ - Base class for virtual devices used by DPDK. + """Base class for virtual devices used by DPDK. + + Attributes: + name: The name of the virtual device. """ name: str def __init__(self, name: str): + """Initialize the virtual device. + + Args: + name: The name of the virtual device. + """ self.name = name def __str__(self) -> str: + """This corresponds to the name used for DPDK devices.""" return self.name From patchwork Thu Nov 23 15:13:37 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Juraj_Linke=C5=A1?= X-Patchwork-Id: 134580 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 98AD1433AC; Thu, 23 Nov 2023 16:16:02 +0100 (CET) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id E645042FD9; Thu, 23 Nov 2023 16:14:22 +0100 (CET) Received: from mail-wr1-f53.google.com (mail-wr1-f53.google.com [209.85.221.53]) by mails.dpdk.org (Postfix) with ESMTP id B9E8B42FE7 for ; Thu, 23 Nov 2023 16:14:05 +0100 (CET) Received: by mail-wr1-f53.google.com with SMTP id ffacd0b85a97d-3316bd84749so638471f8f.2 for ; Thu, 23 Nov 2023 07:14:05 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=pantheon.tech; s=google; t=1700752445; 
([84.245.121.10]) by smtp.gmail.com with ESMTPSA id q4-20020adfea04000000b003296b488961sm1870143wrm.31.2023.11.23.07.14.04 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Thu, 23 Nov 2023 07:14:05 -0800 (PST) From: =?utf-8?q?Juraj_Linke=C5=A1?= To: thomas@monjalon.net, Honnappa.Nagarahalli@arm.com, jspewock@iol.unh.edu, probb@iol.unh.edu, paul.szczepanek@arm.com, yoan.picchi@foss.arm.com, Luca.Vizzarro@arm.com Cc: dev@dpdk.org, =?utf-8?q?Juraj_Linke=C5=A1?= Subject: [PATCH v8 14/21] dts: cpu docstring update Date: Thu, 23 Nov 2023 16:13:37 +0100 Message-Id: <20231123151344.162812-15-juraj.linkes@pantheon.tech> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20231123151344.162812-1-juraj.linkes@pantheon.tech> References: <20231115130959.39420-1-juraj.linkes@pantheon.tech> <20231123151344.162812-1-juraj.linkes@pantheon.tech> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Format according to the Google format and PEP257, with slight deviations. Signed-off-by: Juraj Linkeš --- dts/framework/testbed_model/cpu.py | 196 +++++++++++++++++++++-------- 1 file changed, 144 insertions(+), 52 deletions(-) diff --git a/dts/framework/testbed_model/cpu.py b/dts/framework/testbed_model/cpu.py index 1b392689f5..9e33b2825d 100644 --- a/dts/framework/testbed_model/cpu.py +++ b/dts/framework/testbed_model/cpu.py @@ -1,6 +1,22 @@ # SPDX-License-Identifier: BSD-3-Clause # Copyright(c) 2023 PANTHEON.tech s.r.o. +"""CPU core representation and filtering. + +This module provides a unified representation of logical CPU cores along +with filtering capabilities. + +When symmetric multiprocessing (SMP or multithreading) is enabled on a server, +the physical CPU cores are split into logical CPU cores with different IDs. + +:class:`LogicalCoreCountFilter` filters by the number of logical cores. 
It's possible to specify +the socket from which to filter the number of logical cores. It's also possible to not use all +logical CPU cores from each physical core (e.g. only the first logical core of each physical core). + +:class:`LogicalCoreListFilter` filters by logical core IDs. This mostly checks that +the logical cores are actually present on the server. +""" + import dataclasses from abc import ABC, abstractmethod from collections.abc import Iterable, ValuesView @@ -11,9 +27,17 @@ @dataclass(slots=True, frozen=True) class LogicalCore(object): - """ - Representation of a CPU core. A physical core is represented in OS - by multiple logical cores (lcores) if CPU multithreading is enabled. + """Representation of a logical CPU core. + + A physical core is represented in OS by multiple logical cores (lcores) + if CPU multithreading is enabled. When multithreading is disabled, their IDs are the same. + + Attributes: + lcore: The logical core ID of a CPU core. It's the same as `core` with + disabled multithreading. + core: The physical core ID of a CPU core. + socket: The physical socket ID where the CPU resides. + node: The NUMA node ID where the CPU resides. """ lcore: int @@ -22,27 +46,36 @@ class LogicalCore(object): node: int def __int__(self) -> int: + """The CPU is best represented by the logical core, as that's what we configure in EAL.""" return self.lcore class LogicalCoreList(object): - """ - Convert these options into a list of logical core ids. - lcore_list=[LogicalCore1, LogicalCore2] - a list of LogicalCores - lcore_list=[0,1,2,3] - a list of int indices - lcore_list=['0','1','2-3'] - a list of str indices; ranges are supported - lcore_list='0,1,2-3' - a comma delimited str of indices; ranges are supported - - The class creates a unified format used across the framework and allows - the user to use either a str representation (using str(instance) or directly - in f-strings) or a list representation (by accessing instance.lcore_list). 
- Empty lcore_list is allowed. + r"""A unified way to store :class:`LogicalCore`\s. + + Create a unified format used across the framework and allow the user to use + either a :class:`str` representation (using ``str(instance)`` or directly in f-strings) + or a :class:`list` representation (by accessing the `lcore_list` property, + which stores logical core IDs). """ _lcore_list: list[int] _lcore_str: str def __init__(self, lcore_list: list[int] | list[str] | list[LogicalCore] | str): + """Process `lcore_list`, then sort. + + There are four supported logical core list formats:: + + lcore_list=[LogicalCore1, LogicalCore2] # a list of LogicalCores + lcore_list=[0,1,2,3] # a list of int indices + lcore_list=['0','1','2-3'] # a list of str indices; ranges are supported + lcore_list='0,1,2-3' # a comma delimited str of indices; ranges are supported + + Args: + lcore_list: Various ways to represent multiple logical cores. + Empty `lcore_list` is allowed. + """ self._lcore_list = [] if isinstance(lcore_list, str): lcore_list = lcore_list.split(",") @@ -58,6 +91,7 @@ def __init__(self, lcore_list: list[int] | list[str] | list[LogicalCore] | str): @property def lcore_list(self) -> list[int]: + """The logical core IDs.""" return self._lcore_list def _get_consecutive_lcores_range(self, lcore_ids_list: list[int]) -> list[str]: @@ -83,28 +117,30 @@ def _get_consecutive_lcores_range(self, lcore_ids_list: list[int]) -> list[str]: return formatted_core_list def __str__(self) -> str: + """The consecutive ranges of logical core IDs.""" return self._lcore_str @dataclasses.dataclass(slots=True, frozen=True) class LogicalCoreCount(object): - """ - Define the number of logical cores to use. - If sockets is not None, socket_count is ignored. - """ + """Define the number of logical cores per physical cores per sockets.""" + #: Use this many logical cores per each physical core. lcores_per_core: int = 1 + #: Use this many physical cores per each socket. 
cores_per_socket: int = 2 + #: Use this many sockets. socket_count: int = 1 + #: Use exactly these sockets. This takes precedence over `socket_count`, + #: so when `sockets` is not :data:`None`, `socket_count` is ignored. sockets: list[int] | None = None class LogicalCoreFilter(ABC): - """ - Filter according to the input filter specifier. Each filter needs to be - implemented in a derived class. - This class only implements operations common to all filters, such as sorting - the list to be filtered beforehand. + """Common filtering class. + + Each filter needs to be implemented in a subclass. This base class sorts the list of cores + and defines the filtering method, which must be implemented by subclasses. """ _filter_specifier: LogicalCoreCount | LogicalCoreList @@ -116,6 +152,17 @@ def __init__( filter_specifier: LogicalCoreCount | LogicalCoreList, ascending: bool = True, ): + """Filter according to the input filter specifier. + + The input `lcore_list` is copied and sorted by physical core before filtering. + The list is copied so that the original is left intact. + + Args: + lcore_list: The logical CPU cores to filter. + filter_specifier: Filter cores from `lcore_list` according to this filter. + ascending: Sort cores in ascending order (lowest to highest IDs). If :data:`False`, + sort in descending order. + """ self._filter_specifier = filter_specifier # sorting by core is needed in case hyperthreading is enabled @@ -124,31 +171,45 @@ def __init__( @abstractmethod def filter(self) -> list[LogicalCore]: - """ - Use self._filter_specifier to filter self._lcores_to_filter - and return the list of filtered LogicalCores. - self._lcores_to_filter is a sorted copy of the original list, - so it may be modified. + r"""Filter the cores. + + Use `self._filter_specifier` to filter `self._lcores_to_filter` and return + the filtered :class:`LogicalCore`\s. + `self._lcores_to_filter` is a sorted copy of the original list, so it may be modified.
+ + Returns: + The filtered cores. + """ class LogicalCoreCountFilter(LogicalCoreFilter): - """ + """Filter cores by specified counts. + Filter the input list of LogicalCores according to specified rules: - Use cores from the specified number of sockets or from the specified socket ids. - If sockets is specified, it takes precedence over socket_count. - From each of those sockets, use only cores_per_socket of cores. - And for each core, use lcores_per_core of logical cores. Hypertheading - must be enabled for this to take effect. - If ascending is True, use cores with the lowest numerical id first - and continue in ascending order. If False, start with the highest - id and continue in descending order. This ordering affects which - sockets to consider first as well. + + * The input `filter_specifier` is :class:`LogicalCoreCount`, + * Use cores from the specified number of sockets or from the specified socket ids, + * If `sockets` is specified, it takes precedence over `socket_count`, + * From each of those sockets, use only `cores_per_socket` of cores, + * And for each core, use `lcores_per_core` of logical cores. Hyperthreading + must be enabled for this to take effect. """ _filter_specifier: LogicalCoreCount def filter(self) -> list[LogicalCore]: + """Filter the cores according to :class:`LogicalCoreCount`. + + Start by filtering the allowed sockets. The cores matching the allowed sockets are returned. + The cores of each socket are stored in separate lists. + + Then filter the allowed physical cores from those lists of cores per socket. When filtering + physical cores, store the desired number of logical cores per physical core which then + together constitute the final filtered list. + + Returns: + The filtered cores.
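The selection rules above (first pick sockets, then physical cores per socket, then logical cores per physical core) can be sketched independently of the framework. The `LCore` namedtuple and `filter_by_count` function below are hypothetical stand-ins for `LogicalCore` and `LogicalCoreCountFilter`, omitting the explicit `sockets` override and descending order for brevity:

```python
from collections import namedtuple

# A simplified logical core: lcore ID, physical core ID, socket ID.
LCore = namedtuple("LCore", ["lcore", "core", "socket"])


def filter_by_count(lcores, socket_count, cores_per_socket, lcores_per_core):
    """Pick the first `socket_count` sockets seen, then up to
    `cores_per_socket` physical cores from each, then up to
    `lcores_per_core` logical cores per physical core."""
    allowed_sockets: list[int] = []
    cores_taken: dict[int, set[int]] = {}  # socket -> physical cores used
    lcores_taken: dict[tuple[int, int], int] = {}  # (socket, core) -> count
    result = []
    for lc in sorted(lcores, key=lambda c: c.core):
        if lc.socket not in allowed_sockets:
            if len(allowed_sockets) < socket_count:
                allowed_sockets.append(lc.socket)
            else:
                continue  # socket quota already filled
        cores = cores_taken.setdefault(lc.socket, set())
        if lc.core not in cores:
            if len(cores) >= cores_per_socket:
                continue  # physical core quota filled for this socket
            cores.add(lc.core)
        key = (lc.socket, lc.core)
        if lcores_taken.get(key, 0) < lcores_per_core:
            lcores_taken[key] = lcores_taken.get(key, 0) + 1
            result.append(lc)
    return result


# Two physical cores on socket 0, each split into two lcores by SMT.
lcores = [LCore(0, 0, 0), LCore(4, 0, 0), LCore(1, 1, 0), LCore(5, 1, 0)]
picked = filter_by_count(lcores, socket_count=1, cores_per_socket=2, lcores_per_core=1)
assert [lc.lcore for lc in picked] == [0, 1]
```

With `lcores_per_core=2` the same input yields all four lcores, mirroring how the real filter only uses SMT siblings when asked to.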
+ """ sockets_to_filter = self._filter_sockets(self._lcores_to_filter) filtered_lcores = [] for socket_to_filter in sockets_to_filter: @@ -158,24 +219,37 @@ def filter(self) -> list[LogicalCore]: def _filter_sockets( self, lcores_to_filter: Iterable[LogicalCore] ) -> ValuesView[list[LogicalCore]]: - """ - Remove all lcores that don't match the specified socket(s). - If self._filter_specifier.sockets is not None, keep lcores from those sockets, - otherwise keep lcores from the first - self._filter_specifier.socket_count sockets. + """Filter a list of cores per each allowed socket. + + The sockets may be specified in two ways, either a number or a specific list of sockets. + In case of a specific list, we just need to return the cores from those sockets. + If filtering a number of cores, we need to go through all cores and note which sockets + appear and only filter from the first n that appear. + + Args: + lcores_to_filter: The cores to filter. These must be sorted by the physical core. + + Returns: + A list of lists of logical CPU cores. Each list contains cores from one socket. 
""" allowed_sockets: set[int] = set() socket_count = self._filter_specifier.socket_count if self._filter_specifier.sockets: + # when sockets in filter is specified, the sockets are already set socket_count = len(self._filter_specifier.sockets) allowed_sockets = set(self._filter_specifier.sockets) + # filter socket_count sockets from all sockets by checking the socket of each CPU filtered_lcores: dict[int, list[LogicalCore]] = {} for lcore in lcores_to_filter: if not self._filter_specifier.sockets: + # this is when sockets is not set, so we do the actual filtering + # when it is set, allowed_sockets is already defined and can't be changed if len(allowed_sockets) < socket_count: + # allowed_sockets is a set, so adding an existing socket won't re-add it allowed_sockets.add(lcore.socket) if lcore.socket in allowed_sockets: + # separate lcores into sockets; this makes it easier in further processing if lcore.socket in filtered_lcores: filtered_lcores[lcore.socket].append(lcore) else: @@ -192,12 +266,13 @@ def _filter_sockets( def _filter_cores_from_socket( self, lcores_to_filter: Iterable[LogicalCore] ) -> list[LogicalCore]: - """ - Keep only the first self._filter_specifier.cores_per_socket cores. - In multithreaded environments, keep only - the first self._filter_specifier.lcores_per_core lcores of those cores. - """ + """Filter a list of cores from the given socket. + + Go through the cores and note how many logical cores per physical core have been filtered. + Returns: + The filtered logical CPU cores. + """ # no need to use ordered dict, from Python3.7 the dict # insertion order is preserved (LIFO). lcore_count_per_core_map: dict[int, int] = {} @@ -238,15 +313,21 @@ def _filter_cores_from_socket( class LogicalCoreListFilter(LogicalCoreFilter): - """ - Filter the input list of Logical Cores according to the input list of - lcore indices. - An empty LogicalCoreList won't filter anything. + """Filter the logical CPU cores by logical CPU core IDs. 
+ + This is a simple filter that looks at logical CPU IDs and only filters those that match. + + The input filter is :class:`LogicalCoreList`. An empty LogicalCoreList won't filter anything. """ _filter_specifier: LogicalCoreList def filter(self) -> list[LogicalCore]: + """Filter based on logical CPU core ID. + + Returns: + The filtered logical CPU cores. + """ if not len(self._filter_specifier.lcore_list): return self._lcores_to_filter @@ -269,6 +350,17 @@ def lcore_filter( filter_specifier: LogicalCoreCount | LogicalCoreList, ascending: bool, ) -> LogicalCoreFilter: + """Factory for providing the filter that corresponds to `filter_specifier`. + + Args: + core_list: The logical CPU cores to filter. + filter_specifier: The filter to use. + ascending: Sort cores in ascending order (lowest to highest IDs). If :data:`False`, + sort in descending order. + + Returns: + The filter that corresponds to `filter_specifier`. + """ if isinstance(filter_specifier, LogicalCoreList): return LogicalCoreListFilter(core_list, filter_specifier, ascending) elif isinstance(filter_specifier, LogicalCoreCount): From patchwork Thu Nov 23 15:13:38 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Juraj_Linke=C5=A1?= X-Patchwork-Id: 134581 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 377CB433AC; Thu, 23 Nov 2023 16:16:11 +0100 (CET) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 0484E4328B; Thu, 23 Nov 2023 16:14:24 +0100 (CET) Received: from mail-wm1-f52.google.com (mail-wm1-f52.google.com [209.85.128.52]) by mails.dpdk.org (Postfix) with ESMTP id EA22C42FFB for ; Thu, 23 Nov 2023 16:14:06 +0100 (CET) Received: by mail-wm1-f52.google.com with SMTP id
From: =?utf-8?q?Juraj_Linke=C5=A1?= To: thomas@monjalon.net, Honnappa.Nagarahalli@arm.com, jspewock@iol.unh.edu, probb@iol.unh.edu, paul.szczepanek@arm.com, yoan.picchi@foss.arm.com, Luca.Vizzarro@arm.com Cc: dev@dpdk.org, =?utf-8?q?Juraj_Linke=C5=A1?= Subject: [PATCH v8 15/21] dts: os session docstring update Date: Thu, 23 Nov 2023 16:13:38 +0100 Message-Id: <20231123151344.162812-16-juraj.linkes@pantheon.tech> In-Reply-To: <20231123151344.162812-1-juraj.linkes@pantheon.tech> References: <20231115130959.39420-1-juraj.linkes@pantheon.tech> <20231123151344.162812-1-juraj.linkes@pantheon.tech> Format according to the Google format and PEP257, with slight deviations. Signed-off-by: Juraj Linkeš --- dts/framework/testbed_model/os_session.py | 272 ++++++++++++++++------ 1 file changed, 205 insertions(+), 67 deletions(-) diff --git a/dts/framework/testbed_model/os_session.py b/dts/framework/testbed_model/os_session.py index 76e595a518..cfdbd1c4bd 100644 --- a/dts/framework/testbed_model/os_session.py +++ b/dts/framework/testbed_model/os_session.py @@ -2,6 +2,26 @@ # Copyright(c) 2023 PANTHEON.tech s.r.o. # Copyright(c) 2023 University of New Hampshire +"""OS-aware remote session. + +DPDK supports multiple different operating systems, meaning it can run on these different operating +systems. This module defines the common API that OS-unaware layers use and translates the API into +OS-aware calls/utility usage. + +Note: + Running commands with administrative privileges requires OS awareness.
This is the only layer + that's aware of OS differences, so this is where non-privileged commands get converted + to privileged commands. + + Example: + A user wishes to remove a directory on a remote :class:`~.sut_node.SutNode`. + The :class:`~.sut_node.SutNode` object isn't aware what OS the node is running - it delegates + the OS translation logic to :attr:`~.node.Node.main_session`. The SUT node calls + :meth:`~OSSession.remove_remote_dir` with a generic, OS-unaware path and + the :attr:`~.node.Node.main_session` translates that to ``rm -rf`` if the node's OS is Linux + and other commands for other OSs. It also translates the path to match the underlying OS. +""" + from abc import ABC, abstractmethod from collections.abc import Iterable from ipaddress import IPv4Interface, IPv6Interface @@ -28,10 +48,16 @@ class OSSession(ABC): - """ - The OS classes create a DTS node remote session and implement OS specific + """OS-unaware to OS-aware translation API definition. + + The OSSession classes create a remote session to a DTS node and implement OS specific behavior. There are a few control methods implemented by the base class, the rest need - to be implemented by derived classes. + to be implemented by subclasses. + + Attributes: + name: The name of the session. + remote_session: The remote session maintaining the connection to the node. + interactive_session: The interactive remote session maintaining the connection to the node. """ _config: NodeConfiguration @@ -46,6 +72,15 @@ def __init__( self, name: str, logger: DTSLOG, ): + """Initialize the OS-aware session. + + Connect to the node right away and also create an interactive remote session. + + Args: + node_config: The test run configuration of the node to connect to. + name: The name of the session. + logger: The logger instance this session will use.
+ """ self._config = node_config self.name = name self._logger = logger @@ -53,15 +88,15 @@ def __init__( self.interactive_session = create_interactive_session(node_config, logger) def close(self, force: bool = False) -> None: - """ - Close the remote session. + """Close the underlying remote session. + + Args: + force: Force the closure of the connection. """ self.remote_session.close(force) def is_alive(self) -> bool: - """ - Check whether the remote session is still responding. - """ + """Check whether the underlying remote session is still responding.""" return self.remote_session.is_alive() def send_command( @@ -72,10 +107,23 @@ def send_command( verify: bool = False, env: dict | None = None, ) -> CommandResult: - """ - An all-purpose API in case the command to be executed is already - OS-agnostic, such as when the path to the executed command has been - constructed beforehand. + """An all-purpose API for OS-agnostic commands. + + This can be used for an execution of a portable command that's executed the same way + on all operating systems, such as Python. + + The :option:`--timeout` command line argument and the :envvar:`DTS_TIMEOUT` + environment variable configure the timeout of command execution. + + Args: + command: The command to execute. + timeout: Wait at most this long in seconds for `command` execution to complete. + privileged: Whether to run the command with administrative privileges. + verify: If :data:`True`, will check the exit code of the command. + env: A dictionary with environment variables to be used with the command execution. + + Raises: + RemoteCommandExecutionError: If verify is :data:`True` and the command failed. """ if privileged: command = self._get_privileged_command(command) @@ -89,8 +137,20 @@ def create_interactive_shell( privileged: bool, app_args: str, ) -> InteractiveShellType: - """ - See "create_interactive_shell" in SutNode + """Factory for interactive session handlers. 
+ + Instantiate `shell_cls` according to the remote OS specifics. + + Args: + shell_cls: The class of the shell. + timeout: Timeout for reading output from the SSH channel. If you are + reading from the buffer and don't receive any data within the timeout + it will throw an error. + privileged: Whether to run the shell with administrative privileges. + app_args: The arguments to be passed to the application. + + Returns: + An instance of the desired interactive application shell. """ return shell_cls( self.interactive_session.session, @@ -114,27 +174,42 @@ def _get_privileged_command(command: str) -> str: @abstractmethod def guess_dpdk_remote_dir(self, remote_dir: str | PurePath) -> PurePath: - """ - Try to find DPDK remote dir in remote_dir. + """Try to find DPDK directory in `remote_dir`. + + The directory is the one which is created after the extraction of the tarball. The files + are usually extracted into a directory starting with ``dpdk-``. + + Returns: + The absolute path of the DPDK remote directory, empty path if not found. """ @abstractmethod def get_remote_tmp_dir(self) -> PurePath: - """ - Get the path of the temporary directory of the remote OS. + """Get the path of the temporary directory of the remote OS. + + Returns: + The absolute path of the temporary directory. """ @abstractmethod def get_dpdk_build_env_vars(self, arch: Architecture) -> dict: - """ - Create extra environment variables needed for the target architecture. Get - information from the node if needed. + """Create extra environment variables needed for the target architecture. + + Different architectures may require different configuration, such as setting 32-bit CFLAGS. + + Returns: + A dictionary with keys as environment variables. """ @abstractmethod def join_remote_path(self, *args: str | PurePath) -> PurePath: - """ - Join path parts using the path separator that fits the remote OS. + """Join path parts using the path separator that fits the remote OS. 
+ + Args: + args: Any number of paths to join. + + Returns: + The resulting joined path. """ @abstractmethod @@ -143,13 +218,13 @@ def copy_from( source_file: str | PurePath, destination_file: str | PurePath, ) -> None: - """Copy a file from the remote Node to the local filesystem. + """Copy a file from the remote node to the local filesystem. - Copy source_file from the remote Node associated with this remote - session to destination_file on the local filesystem. + Copy `source_file` from the remote node associated with this remote + session to `destination_file` on the local filesystem. Args: - source_file: the file on the remote Node. + source_file: the file on the remote node. destination_file: a file or directory path on the local filesystem. """ @@ -159,14 +234,14 @@ def copy_to( source_file: str | PurePath, destination_file: str | PurePath, ) -> None: - """Copy a file from local filesystem to the remote Node. + """Copy a file from local filesystem to the remote node. - Copy source_file from local filesystem to destination_file - on the remote Node associated with this remote session. + Copy `source_file` from local filesystem to `destination_file` + on the remote node associated with this remote session. Args: source_file: the file on the local filesystem. - destination_file: a file or directory path on the remote Node. + destination_file: a file or directory path on the remote node. """ @abstractmethod @@ -176,8 +251,12 @@ def remove_remote_dir( recursive: bool = True, force: bool = True, ) -> None: - """ - Remove remote directory, by default remove recursively and forcefully. + """Remove remote directory, by default remove recursively and forcefully. + + Args: + remote_dir_path: The path of the directory to remove. + recursive: If :data:`True`, also remove all contents inside the directory. + force: If :data:`True`, ignore all warnings and try to remove at all costs. 
""" @abstractmethod @@ -186,9 +265,12 @@ def extract_remote_tarball( remote_tarball_path: str | PurePath, expected_dir: str | PurePath | None = None, ) -> None: - """ - Extract remote tarball in place. If expected_dir is a non-empty string, check - whether the dir exists after extracting the archive. + """Extract remote tarball in its remote directory. + + Args: + remote_tarball_path: The path of the tarball on the remote node. + expected_dir: If non-empty, check whether `expected_dir` exists after extracting + the archive. """ @abstractmethod @@ -201,69 +283,119 @@ def build_dpdk( rebuild: bool = False, timeout: float = SETTINGS.compile_timeout, ) -> None: - """ - Build DPDK in the input dir with specified environment variables and meson - arguments. + """Build DPDK on the remote node. + + An extracted DPDK tarball must be present on the node. The build consists of two steps:: + + meson setup remote_dpdk_dir remote_dpdk_build_dir + ninja -C remote_dpdk_build_dir + + The :option:`--compile-timeout` command line argument and the :envvar:`DTS_COMPILE_TIMEOUT` + environment variable configure the timeout of DPDK build. + + Args: + env_vars: Use these environment variables then building DPDK. + meson_args: Use these meson arguments when building DPDK. + remote_dpdk_dir: The directory on the remote node where DPDK will be built. + remote_dpdk_build_dir: The target build directory on the remote node. + rebuild: If :data:`True`, do a subsequent build with ``meson configure`` instead + of ``meson setup``. + timeout: Wait at most this long in seconds for the build execution to complete. """ @abstractmethod def get_dpdk_version(self, version_path: str | PurePath) -> str: - """ - Inspect DPDK version on the remote node from version_path. + """Inspect the DPDK version on the remote node. + + Args: + version_path: The path to the VERSION file containing the DPDK version. + + Returns: + The DPDK version. 
""" @abstractmethod def get_remote_cpus(self, use_first_core: bool) -> list[LogicalCore]: - """ - Compose a list of LogicalCores present on the remote node. - If use_first_core is False, the first physical core won't be used. + r"""Get the list of :class:`~.cpu.LogicalCore`\s on the remote node. + + Args: + use_first_core: If :data:`False`, the first physical core won't be used. + + Returns: + The logical cores present on the node. """ @abstractmethod def kill_cleanup_dpdk_apps(self, dpdk_prefix_list: Iterable[str]) -> None: - """ - Kill and cleanup all DPDK apps identified by dpdk_prefix_list. If - dpdk_prefix_list is empty, attempt to find running DPDK apps to kill and clean. + """Kill and cleanup all DPDK apps. + + Args: + dpdk_prefix_list: Kill all apps identified by `dpdk_prefix_list`. + If `dpdk_prefix_list` is empty, attempt to find running DPDK apps to kill and clean. """ @abstractmethod def get_dpdk_file_prefix(self, dpdk_prefix: str) -> str: - """ - Get the DPDK file prefix that will be used when running DPDK apps. + """Make OS-specific modification to the DPDK file prefix. + + Args: + dpdk_prefix: The OS-unaware file prefix. + + Returns: + The OS-specific file prefix. """ @abstractmethod - def setup_hugepages(self, hugepage_amount: int, force_first_numa: bool) -> None: - """ - Get the node's Hugepage Size, configure the specified amount of hugepages + def setup_hugepages(self, hugepage_count: int, force_first_numa: bool) -> None: + """Configure hugepages on the node. + + Get the node's Hugepage Size, configure the specified count of hugepages if needed and mount the hugepages if needed. - If force_first_numa is True, configure hugepages just on the first socket. + + Args: + hugepage_count: Configure this many hugepages. + force_first_numa: If :data:`True`, configure hugepages just on the first numa node. 
""" @abstractmethod def get_compiler_version(self, compiler_name: str) -> str: - """ - Get installed version of compiler used for DPDK + """Get installed version of compiler used for DPDK. + + Args: + compiler_name: The name of the compiler executable. + + Returns: + The compiler's version. """ @abstractmethod def get_node_info(self) -> NodeInfo: - """ - Collect information about the node + """Collect additional information about the node. + + Returns: + Node information. """ @abstractmethod def update_ports(self, ports: list[Port]) -> None: - """ - Get additional information about ports: - Logical name (e.g. enp7s0) if applicable - Mac address + """Get additional information about ports from the operating system and update them. + + The additional information is: + + * Logical name (e.g. ``enp7s0``) if applicable, + * Mac address. + + Args: + ports: The ports to update. """ @abstractmethod def configure_port_state(self, port: Port, enable: bool) -> None: - """ - Enable/disable port. + """Enable/disable `port` in the operating system. + + Args: + port: The port to configure. + enable: If :data:`True`, enable the port, otherwise shut it down. """ @abstractmethod @@ -273,12 +405,18 @@ def configure_port_ip_address( port: Port, delete: bool, ) -> None: - """ - Configure (add or delete) an IP address of the input port. + """Configure an IP address on `port` in the operating system. + + Args: + address: The address to configure. + port: The port to configure. + delete: If :data:`True`, remove the IP address, otherwise configure it. """ @abstractmethod def configure_ipv4_forwarding(self, enable: bool) -> None: - """ - Enable IPv4 forwarding in the underlying OS. + """Enable IPv4 forwarding in the operating system. + + Args: + enable: If :data:`True`, enable the forwarding, otherwise disable it. 
""" From patchwork Thu Nov 23 15:13:39 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Juraj_Linke=C5=A1?= X-Patchwork-Id: 134582 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 62827433AC; Thu, 23 Nov 2023 16:16:19 +0100 (CET) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 260A743293; Thu, 23 Nov 2023 16:14:25 +0100 (CET) Received: from mail-wr1-f50.google.com (mail-wr1-f50.google.com [209.85.221.50]) by mails.dpdk.org (Postfix) with ESMTP id 07A2743003 for ; Thu, 23 Nov 2023 16:14:08 +0100 (CET) Received: by mail-wr1-f50.google.com with SMTP id ffacd0b85a97d-332e40315bdso379153f8f.1 for ; Thu, 23 Nov 2023 07:14:08 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=pantheon.tech; s=google; t=1700752447; x=1701357247; darn=dpdk.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=t6KeZaLHiysMT+K7nNxDXG0i4w3eJCHNkMFr/vP4ebc=; b=BObjpgOynFcf6IIYdRZfCQr2EZqKj+mNKXLBgmsAiDe7oGeAkzoeL5CZi+tssmXgIv OIy+KK6201xugWSMw3ZlzRvdW1DfK5WK+TcUERikCn9HMIZfpYXgTvoVn4DjqIQ+Kf/d Xnw9cbdUUqIR0CyHooQscMzwYS8WZ32l1HFKa+eE/hBLyu8n2MgiMVRupys1dcqMAnre Asnd1tyqY8bOb0XvjrJt7dWk1PifQZUVwhLOg1jEq/QINPkcmpLUzSRQ7omENqLGHUto 3Nx8GYMP97WB4pcfVtLJ+5eEk5jzUfsywLharkNZEmnY8k7+G/woDailMc6yRgBOfCmT 6sEA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1700752447; x=1701357247; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=t6KeZaLHiysMT+K7nNxDXG0i4w3eJCHNkMFr/vP4ebc=; 
From: =?utf-8?q?Juraj_Linke=C5=A1?= To: thomas@monjalon.net, Honnappa.Nagarahalli@arm.com, jspewock@iol.unh.edu, probb@iol.unh.edu, paul.szczepanek@arm.com, yoan.picchi@foss.arm.com, Luca.Vizzarro@arm.com Cc: dev@dpdk.org, =?utf-8?q?Juraj_Linke=C5=A1?= Subject: [PATCH v8 16/21] dts: posix and linux sessions docstring update Date: Thu, 23 Nov 2023 16:13:39 +0100 Message-Id: <20231123151344.162812-17-juraj.linkes@pantheon.tech> In-Reply-To: <20231123151344.162812-1-juraj.linkes@pantheon.tech> References: <20231115130959.39420-1-juraj.linkes@pantheon.tech> <20231123151344.162812-1-juraj.linkes@pantheon.tech> Format according to the Google format and PEP257, with slight deviations.
Signed-off-by: Juraj Linkeš --- dts/framework/testbed_model/linux_session.py | 64 +++++++++++----- dts/framework/testbed_model/posix_session.py | 81 +++++++++++++++++--- 2 files changed, 114 insertions(+), 31 deletions(-) diff --git a/dts/framework/testbed_model/linux_session.py b/dts/framework/testbed_model/linux_session.py index 055765ba2d..0ab59cef85 100644 --- a/dts/framework/testbed_model/linux_session.py +++ b/dts/framework/testbed_model/linux_session.py @@ -2,6 +2,13 @@ # Copyright(c) 2023 PANTHEON.tech s.r.o. # Copyright(c) 2023 University of New Hampshire +"""Linux OS translator. + +Translate OS-unaware calls into Linux calls/utilities. Most of Linux distributions are mostly +compliant with POSIX standards, so this module only implements the parts that aren't. +This intermediate module implements the common parts of mostly POSIX compliant distributions. +""" + import json from ipaddress import IPv4Interface, IPv6Interface from typing import TypedDict, Union @@ -17,43 +24,52 @@ class LshwConfigurationOutput(TypedDict): + """The relevant parts of ``lshw``'s ``configuration`` section.""" + + #: link: str class LshwOutput(TypedDict): - """ - A model of the relevant information from json lshw output, e.g.: - { - ... - "businfo" : "pci@0000:08:00.0", - "logicalname" : "enp8s0", - "version" : "00", - "serial" : "52:54:00:59:e1:ac", - ... - "configuration" : { - ... - "link" : "yes", - ... - }, - ... + """A model of the relevant information from ``lshw``'s json output. + + Example: + :: + + { + ... + "businfo" : "pci@0000:08:00.0", + "logicalname" : "enp8s0", + "version" : "00", + "serial" : "52:54:00:59:e1:ac", + ... + "configuration" : { + ... + "link" : "yes", + ... + }, + ... """ + #: businfo: str + #: logicalname: NotRequired[str] + #: serial: NotRequired[str] + #: configuration: LshwConfigurationOutput class LinuxSession(PosixSession): - """ - The implementation of non-Posix compliant parts of Linux remote sessions. 
- """ + """The implementation of non-Posix compliant parts of Linux.""" @staticmethod def _get_privileged_command(command: str) -> str: return f"sudo -- sh -c '{command}'" def get_remote_cpus(self, use_first_core: bool) -> list[LogicalCore]: + """Overrides :meth:`~.os_session.OSSession.get_remote_cpus`.""" cpu_info = self.send_command("lscpu -p=CPU,CORE,SOCKET,NODE|grep -v \\#").stdout lcores = [] for cpu_line in cpu_info.splitlines(): @@ -65,18 +81,20 @@ def get_remote_cpus(self, use_first_core: bool) -> list[LogicalCore]: return lcores def get_dpdk_file_prefix(self, dpdk_prefix: str) -> str: + """Overrides :meth:`~.os_session.OSSession.get_dpdk_file_prefix`.""" return dpdk_prefix - def setup_hugepages(self, hugepage_amount: int, force_first_numa: bool) -> None: + def setup_hugepages(self, hugepage_count: int, force_first_numa: bool) -> None: + """Overrides :meth:`~.os_session.OSSession.setup_hugepages`.""" self._logger.info("Getting Hugepage information.") hugepage_size = self._get_hugepage_size() hugepages_total = self._get_hugepages_total() self._numa_nodes = self._get_numa_nodes() - if force_first_numa or hugepages_total != hugepage_amount: + if force_first_numa or hugepages_total != hugepage_count: # when forcing numa, we need to clear existing hugepages regardless # of size, so they can be moved to the first numa node - self._configure_huge_pages(hugepage_amount, hugepage_size, force_first_numa) + self._configure_huge_pages(hugepage_count, hugepage_size, force_first_numa) else: self._logger.info("Hugepages already configured.") self._mount_huge_pages() @@ -132,6 +150,7 @@ def _configure_huge_pages(self, amount: int, size: int, force_first_numa: bool) self.send_command(f"echo {amount} | tee {hugepage_config_path}", privileged=True) def update_ports(self, ports: list[Port]) -> None: + """Overrides :meth:`~.os_session.OSSession.update_ports`.""" self._logger.debug("Gathering port info.") for port in ports: assert port.node == self.name, "Attempted to gather 
port info on the wrong node" @@ -161,6 +180,7 @@ def _update_port_attr(self, port: Port, attr_value: str | None, attr_name: str) ) def configure_port_state(self, port: Port, enable: bool) -> None: + """Overrides :meth:`~.os_session.OSSession.configure_port_state`.""" state = "up" if enable else "down" self.send_command(f"ip link set dev {port.logical_name} {state}", privileged=True) @@ -170,6 +190,7 @@ def configure_port_ip_address( port: Port, delete: bool, ) -> None: + """Overrides :meth:`~.os_session.OSSession.configure_port_ip_address`.""" command = "del" if delete else "add" self.send_command( f"ip address {command} {address} dev {port.logical_name}", @@ -178,5 +199,6 @@ def configure_port_ip_address( ) def configure_ipv4_forwarding(self, enable: bool) -> None: + """Overrides :meth:`~.os_session.OSSession.configure_ipv4_forwarding`.""" state = 1 if enable else 0 self.send_command(f"sysctl -w net.ipv4.ip_forward={state}", privileged=True) diff --git a/dts/framework/testbed_model/posix_session.py b/dts/framework/testbed_model/posix_session.py index 5657cc0bc9..d279bb8b53 100644 --- a/dts/framework/testbed_model/posix_session.py +++ b/dts/framework/testbed_model/posix_session.py @@ -2,6 +2,15 @@ # Copyright(c) 2023 PANTHEON.tech s.r.o. # Copyright(c) 2023 University of New Hampshire +"""POSIX compliant OS translator. + +Translates OS-unaware calls into POSIX compliant calls/utilities. POSIX is a set of standards +for portability between Unix operating systems which not all Linux distributions +(or the tools most frequently bundled with said distributions) adhere to. Most of Linux +distributions are mostly compliant though. +This intermediate module implements the common parts of mostly POSIX compliant distributions. 
+""" + import re from collections.abc import Iterable from pathlib import PurePath, PurePosixPath @@ -15,13 +24,21 @@ class PosixSession(OSSession): - """ - An intermediary class implementing the Posix compliant parts of - Linux and other OS remote sessions. - """ + """An intermediary class implementing the POSIX standard.""" @staticmethod def combine_short_options(**opts: bool) -> str: + """Combine shell options into one argument. + + These are options such as ``-x``, ``-v``, ``-f`` which are combined into ``-xvf``. + + Args: + opts: The keys are option names (usually one letter) and the bool values indicate + whether to include the option in the resulting argument. + + Returns: + The options combined into one argument. + """ ret_opts = "" for opt, include in opts.items(): if include: @@ -33,17 +50,19 @@ def combine_short_options(**opts: bool) -> str: return ret_opts def guess_dpdk_remote_dir(self, remote_dir: str | PurePath) -> PurePosixPath: + """Overrides :meth:`~.os_session.OSSession.guess_dpdk_remote_dir`.""" remote_guess = self.join_remote_path(remote_dir, "dpdk-*") result = self.send_command(f"ls -d {remote_guess} | tail -1") return PurePosixPath(result.stdout) def get_remote_tmp_dir(self) -> PurePosixPath: + """Overrides :meth:`~.os_session.OSSession.get_remote_tmp_dir`.""" return PurePosixPath("/tmp") def get_dpdk_build_env_vars(self, arch: Architecture) -> dict: - """ - Create extra environment variables needed for i686 arch build. Get information - from the node if needed. + """Overrides :meth:`~.os_session.OSSession.get_dpdk_build_env_vars`. + + Supported architecture: ``i686``. 
""" env_vars = {} if arch == Architecture.i686: @@ -63,6 +82,7 @@ def get_dpdk_build_env_vars(self, arch: Architecture) -> dict: return env_vars def join_remote_path(self, *args: str | PurePath) -> PurePosixPath: + """Overrides :meth:`~.os_session.OSSession.join_remote_path`.""" return PurePosixPath(*args) def copy_from( @@ -70,6 +90,7 @@ def copy_from( source_file: str | PurePath, destination_file: str | PurePath, ) -> None: + """Overrides :meth:`~.os_session.OSSession.copy_from`.""" self.remote_session.copy_from(source_file, destination_file) def copy_to( @@ -77,6 +98,7 @@ def copy_to( source_file: str | PurePath, destination_file: str | PurePath, ) -> None: + """Overrides :meth:`~.os_session.OSSession.copy_to`.""" self.remote_session.copy_to(source_file, destination_file) def remove_remote_dir( @@ -85,6 +107,7 @@ def remove_remote_dir( recursive: bool = True, force: bool = True, ) -> None: + """Overrides :meth:`~.os_session.OSSession.remove_remote_dir`.""" opts = PosixSession.combine_short_options(r=recursive, f=force) self.send_command(f"rm{opts} {remote_dir_path}") @@ -93,6 +116,7 @@ def extract_remote_tarball( remote_tarball_path: str | PurePath, expected_dir: str | PurePath | None = None, ) -> None: + """Overrides :meth:`~.os_session.OSSession.extract_remote_tarball`.""" self.send_command( f"tar xfm {remote_tarball_path} -C {PurePosixPath(remote_tarball_path).parent}", 60, @@ -109,6 +133,7 @@ def build_dpdk( rebuild: bool = False, timeout: float = SETTINGS.compile_timeout, ) -> None: + """Overrides :meth:`~.os_session.OSSession.build_dpdk`.""" try: if rebuild: # reconfigure, then build @@ -138,10 +163,12 @@ def build_dpdk( raise DPDKBuildError(f"DPDK build failed when doing '{e.command}'.") def get_dpdk_version(self, build_dir: str | PurePath) -> str: + """Overrides :meth:`~.os_session.OSSession.get_dpdk_version`.""" out = self.send_command(f"cat {self.join_remote_path(build_dir, 'VERSION')}", verify=True) return out.stdout def kill_cleanup_dpdk_apps(self, 
dpdk_prefix_list: Iterable[str]) -> None: + """Overrides :meth:`~.os_session.OSSession.kill_cleanup_dpdk_apps`.""" self._logger.info("Cleaning up DPDK apps.") dpdk_runtime_dirs = self._get_dpdk_runtime_dirs(dpdk_prefix_list) if dpdk_runtime_dirs: @@ -153,6 +180,14 @@ def kill_cleanup_dpdk_apps(self, dpdk_prefix_list: Iterable[str]) -> None: self._remove_dpdk_runtime_dirs(dpdk_runtime_dirs) def _get_dpdk_runtime_dirs(self, dpdk_prefix_list: Iterable[str]) -> list[PurePosixPath]: + """Find runtime directories DPDK apps are currently using. + + Args: + dpdk_prefix_list: The prefixes DPDK apps were started with. + + Returns: + The paths of DPDK apps' runtime dirs. + """ prefix = PurePosixPath("/var", "run", "dpdk") if not dpdk_prefix_list: remote_prefixes = self._list_remote_dirs(prefix) @@ -164,9 +199,13 @@ def _get_dpdk_runtime_dirs(self, dpdk_prefix_list: Iterable[str]) -> list[PurePo return [PurePosixPath(prefix, dpdk_prefix) for dpdk_prefix in dpdk_prefix_list] def _list_remote_dirs(self, remote_path: str | PurePath) -> list[str] | None: - """ - Return a list of directories of the remote_dir. - If remote_path doesn't exist, return None. + """Contents of remote_path. + + Args: + remote_path: List the contents of this path. + + Returns: + The contents of remote_path. If remote_path doesn't exist, return None. """ out = self.send_command(f"ls -l {remote_path} | awk '/^d/ {{print $NF}}'").stdout if "No such file or directory" in out: @@ -175,6 +214,17 @@ def _list_remote_dirs(self, remote_path: str | PurePath) -> list[str] | None: return out.splitlines() def _get_dpdk_pids(self, dpdk_runtime_dirs: Iterable[str | PurePath]) -> list[int]: + """Find PIDs of running DPDK apps. + + Look at each "config" file found in dpdk_runtime_dirs and find the PIDs of processes + that opened those files. + + Args: + dpdk_runtime_dirs: The paths of DPDK apps' runtime dirs. + + Returns: + The PIDs of running DPDK apps.
+ """ pids = [] pid_regex = r"p(\d+)" for dpdk_runtime_dir in dpdk_runtime_dirs: @@ -193,6 +243,14 @@ def _remote_files_exists(self, remote_path: PurePath) -> bool: return not result.return_code def _check_dpdk_hugepages(self, dpdk_runtime_dirs: Iterable[str | PurePath]) -> None: + """Check there aren't any leftover hugepages. + + If any hugepages are found, emit a warning. The hugepages are investigated in the + "hugepage_info" file of dpdk_runtime_dirs. + + Args: + dpdk_runtime_dirs: The paths of DPDK apps' runtime dirs. + """ for dpdk_runtime_dir in dpdk_runtime_dirs: hugepage_info = PurePosixPath(dpdk_runtime_dir, "hugepage_info") if self._remote_files_exists(hugepage_info): @@ -208,9 +266,11 @@ def _remove_dpdk_runtime_dirs(self, dpdk_runtime_dirs: Iterable[str | PurePath]) self.remove_remote_dir(dpdk_runtime_dir) def get_dpdk_file_prefix(self, dpdk_prefix: str) -> str: + """Overrides :meth:`~.os_session.OSSession.get_dpdk_file_prefix`.""" return "" def get_compiler_version(self, compiler_name: str) -> str: + """Overrides :meth:`~.os_session.OSSession.get_compiler_version`.""" match compiler_name: case "gcc": return self.send_command( @@ -228,6 +288,7 @@ def get_compiler_version(self, compiler_name: str) -> str: raise ValueError(f"Unknown compiler {compiler_name}") def get_node_info(self) -> NodeInfo: + """Overrides :meth:`~.os_session.OSSession.get_node_info`.""" os_release_info = self.send_command( "awk -F= '$1 ~ /^NAME$|^VERSION$/ {print $2}' /etc/os-release", SETTINGS.timeout, From patchwork Thu Nov 23 15:13:40 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Juraj_Linke=C5=A1?= X-Patchwork-Id: 134583 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 95DA1433AC; Thu, 23 Nov 2023 16:16:28 
+0100 (CET) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 4A93D43299; Thu, 23 Nov 2023 16:14:26 +0100 (CET) Received: from mail-wr1-f54.google.com (mail-wr1-f54.google.com [209.85.221.54]) by mails.dpdk.org (Postfix) with ESMTP id 657C342FCD for ; Thu, 23 Nov 2023 16:14:09 +0100 (CET) Received: by mail-wr1-f54.google.com with SMTP id ffacd0b85a97d-3316ad2bee5so545447f8f.1 for ; Thu, 23 Nov 2023 07:14:09 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=pantheon.tech; s=google; t=1700752449; x=1701357249; darn=dpdk.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=vQu+v+3UG/m2JX1k4LoRsjqvWSYETb2xzJAIFf7nUBU=; b=sivPSFb0BW+ZstSPBx48N5dct1/aITLz4gnjeYR2jjJnKN5D8Rx4DoTR9IJNQNe6Fx 7ASo8XKmH1cRK2PpoOWZ+GG/lggAvqcxWZolendruS94pmPmyEICq1olS6id6OEfVZC6 I3979p/TXyUHl4ToV/3I/+4ju9AK948Jpi8m2CTf4cPBMHSsZtVGNDAqx8QiW0zbz1D8 hD+9lHZJNufosbq3eOobvMfSR0D71RKcLnYC9ufn+4U2s1HYn2YFJgh8ENLxXxQqLp7g PodqPCJWGnVoojRzmYNe/+U8GQeS+qzZCoNSlmriYfRnrrZ3CJ/MlpvhmP0UucC8lH0W 1kcg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1700752449; x=1701357249; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=vQu+v+3UG/m2JX1k4LoRsjqvWSYETb2xzJAIFf7nUBU=; b=n1gsIotJcwpECgzi8efsgtFqgFB4M2iEn4JQLF6p3K5y1B+uBSYo336I74dRfwS66T cV+0oO07gQOPcQbDkl9LDQgeeV4hxS8xf664Ox5GOYG/x65iUQtQuY5/I8N+FfSB5FaN Wmkaf8cFcEFjEA+X5f8P1jXhONlRF7MJh8OR9FH49CUMhD57qfgdpfWdbV4CHVwRvr3W BQE2vuUAZsZVryqhKhZOFCjFXLlOYVImc9WneRgKDtFSNAyjeIH2jGERHKx6j1dMicAO koA+C3aI6gY6L2RUSayGQnKaO61BCjtthGdGjnVktepCCvoERzUSeNmKrtJ4+qbhmkjC 94ug== X-Gm-Message-State: AOJu0YzqMNjgU00ugo8L2HAklUIzKzC10lfLCRoyx6qh6h3J1BJG+izW 8N8XchKVWKm1vqJECmXxBbgj/05O/G4Ut92hLiF6Lw== X-Google-Smtp-Source: 
AGHT+IGWzIvS0u1tj5bMEvL7c96OPArFRSkRbQchAfvtzLm5SEAYzZPUWfjuIDfZt+tKQJb4Ic1jmw== X-Received: by 2002:a5d:5749:0:b0:332:e692:a127 with SMTP id q9-20020a5d5749000000b00332e692a127mr585291wrw.50.1700752448987; Thu, 23 Nov 2023 07:14:08 -0800 (PST) Received: from jlinkes-PT-Latitude-5530.. ([84.245.121.10]) by smtp.gmail.com with ESMTPSA id q4-20020adfea04000000b003296b488961sm1870143wrm.31.2023.11.23.07.14.07 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Thu, 23 Nov 2023 07:14:08 -0800 (PST) From: =?utf-8?q?Juraj_Linke=C5=A1?= To: thomas@monjalon.net, Honnappa.Nagarahalli@arm.com, jspewock@iol.unh.edu, probb@iol.unh.edu, paul.szczepanek@arm.com, yoan.picchi@foss.arm.com, Luca.Vizzarro@arm.com Cc: dev@dpdk.org, =?utf-8?q?Juraj_Linke=C5=A1?= Subject: [PATCH v8 17/21] dts: node docstring update Date: Thu, 23 Nov 2023 16:13:40 +0100 Message-Id: <20231123151344.162812-18-juraj.linkes@pantheon.tech> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20231123151344.162812-1-juraj.linkes@pantheon.tech> References: <20231115130959.39420-1-juraj.linkes@pantheon.tech> <20231123151344.162812-1-juraj.linkes@pantheon.tech> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Format according to the Google format and PEP257, with slight deviations. Signed-off-by: Juraj Linkeš --- dts/framework/testbed_model/node.py | 191 +++++++++++++++++++--------- 1 file changed, 131 insertions(+), 60 deletions(-) diff --git a/dts/framework/testbed_model/node.py b/dts/framework/testbed_model/node.py index b313b5ad54..6eecbdfd6a 100644 --- a/dts/framework/testbed_model/node.py +++ b/dts/framework/testbed_model/node.py @@ -3,8 +3,13 @@ # Copyright(c) 2022-2023 PANTHEON.tech s.r.o. # Copyright(c) 2022-2023 University of New Hampshire -""" -A node is a generic host that DTS connects to and manages. 
+"""Common functionality for node management. + +A node is any host/server DTS connects to. + +The base class, :class:`Node`, provides functionality common to all nodes and is supposed +to be extended by subclasses with functionalities specific to each node type. +The :func:`~Node.skip_setup` decorator can be used without subclassing. """ from abc import ABC @@ -35,10 +40,22 @@ class Node(ABC): - """ - Basic class for node management. This class implements methods that - manage a node, such as information gathering (of CPU/PCI/NIC) and - environment setup. + """The base class for node management. + + It shouldn't be instantiated, but rather subclassed. + It implements common methods to manage any node: + + * Connection to the node, + * Hugepages setup. + + Attributes: + main_session: The primary OS-aware remote session used to communicate with the node. + config: The node configuration. + name: The name of the node. + lcores: The list of logical cores that DTS can use on the node. + It's derived from logical cores present on the node and the test run configuration. + ports: The ports of this node specified in the test run configuration. + virtual_devices: The virtual devices used on the node. """ main_session: OSSession @@ -52,6 +69,17 @@ class Node(ABC): virtual_devices: list[VirtualDevice] def __init__(self, node_config: NodeConfiguration): + """Connect to the node and gather info during initialization. + + Extra gathered information: + + * The list of available logical CPUs. This is then filtered by + the ``lcores`` configuration in the YAML test run configuration file, + * Information about ports from the YAML test run configuration file. + + Args: + node_config: The node's test run configuration. 
+ """ self.config = node_config self.name = node_config.name self._logger = getLogger(self.name) @@ -60,7 +88,7 @@ def __init__(self, node_config: NodeConfiguration): self._logger.info(f"Connected to node: {self.name}") self._get_remote_cpus() - # filter the node lcores according to user config + # filter the node lcores according to the test run configuration self.lcores = LogicalCoreListFilter( self.lcores, LogicalCoreList(self.config.lcores) ).filter() @@ -76,9 +104,14 @@ def _init_ports(self) -> None: self.configure_port_state(port) def set_up_execution(self, execution_config: ExecutionConfiguration) -> None: - """ - Perform the execution setup that will be done for each execution - this node is part of. + """Execution setup steps. + + Configure hugepages and call :meth:`_set_up_execution` where + the rest of the configuration steps (if any) are implemented. + + Args: + execution_config: The execution test run configuration according to which + the setup steps will be taken. """ self._setup_hugepages() self._set_up_execution(execution_config) @@ -87,54 +120,70 @@ def set_up_execution(self, execution_config: ExecutionConfiguration) -> None: self.virtual_devices.append(VirtualDevice(vdev)) def _set_up_execution(self, execution_config: ExecutionConfiguration) -> None: - """ - This method exists to be optionally overwritten by derived classes and - is not decorated so that the derived class doesn't have to use the decorator. + """Optional additional execution setup steps for subclasses. + + Subclasses should override this if they need to add additional execution setup steps. """ def tear_down_execution(self) -> None: - """ - Perform the execution teardown that will be done after each execution - this node is part of concludes. + """Execution teardown steps. + + There are currently no common execution teardown steps common to all DTS node types. 
""" self.virtual_devices = [] self._tear_down_execution() def _tear_down_execution(self) -> None: - """ - This method exists to be optionally overwritten by derived classes and - is not decorated so that the derived class doesn't have to use the decorator. + """Optional additional execution teardown steps for subclasses. + + Subclasses should override this if they need to add additional execution teardown steps. """ def set_up_build_target(self, build_target_config: BuildTargetConfiguration) -> None: - """ - Perform the build target setup that will be done for each build target - tested on this node. + """Build target setup steps. + + There are currently no common build target setup steps common to all DTS node types. + + Args: + build_target_config: The build target test run configuration according to which + the setup steps will be taken. """ self._set_up_build_target(build_target_config) def _set_up_build_target(self, build_target_config: BuildTargetConfiguration) -> None: - """ - This method exists to be optionally overwritten by derived classes and - is not decorated so that the derived class doesn't have to use the decorator. + """Optional additional build target setup steps for subclasses. + + Subclasses should override this if they need to add additional build target setup steps. """ def tear_down_build_target(self) -> None: - """ - Perform the build target teardown that will be done after each build target - tested on this node. + """Build target teardown steps. + + There are currently no common build target teardown steps common to all DTS node types. """ self._tear_down_build_target() def _tear_down_build_target(self) -> None: - """ - This method exists to be optionally overwritten by derived classes and - is not decorated so that the derived class doesn't have to use the decorator. + """Optional additional build target teardown steps for subclasses. + + Subclasses should override this if they need to add additional build target teardown steps. 
""" def create_session(self, name: str) -> OSSession: - """ - Create and return a new OSSession tailored to the remote OS. + """Create and return a new OS-aware remote session. + + The returned session won't be used by the node creating it. The session must be used by + the caller. The session will be maintained for the entire lifecycle of the node object, + at the end of which the session will be cleaned up automatically. + + Note: + Any number of these supplementary sessions may be created. + + Args: + name: The name of the session. + + Returns: + A new OS-aware remote session. """ session_name = f"{self.name} {name}" connection = create_session( @@ -152,19 +201,19 @@ def create_interactive_shell( privileged: bool = False, app_args: str = "", ) -> InteractiveShellType: - """Create a handler for an interactive session. + """Factory for interactive session handlers. - Instantiate shell_cls according to the remote OS specifics. + Instantiate `shell_cls` according to the remote OS specifics. Args: shell_cls: The class of the shell. - timeout: Timeout for reading output from the SSH channel. If you are - reading from the buffer and don't receive any data within the timeout - it will throw an error. + timeout: Timeout for reading output from the SSH channel. If you are reading from + the buffer and don't receive any data within the timeout it will throw an error. privileged: Whether to run the shell with administrative privileges. app_args: The arguments to be passed to the application. + Returns: - Instance of the desired interactive application. + An instance of the desired interactive application shell. """ if not shell_cls.dpdk_app: shell_cls.path = self.main_session.join_remote_path(shell_cls.path) @@ -181,14 +230,22 @@ def filter_lcores( filter_specifier: LogicalCoreCount | LogicalCoreList, ascending: bool = True, ) -> list[LogicalCore]: - """ - Filter the LogicalCores found on the Node according to - a LogicalCoreCount or a LogicalCoreList. 
+ """Filter the node's logical cores that DTS can use. + + Logical cores that DTS can use are the ones that are present on the node, but filtered + according to the test run configuration. The `filter_specifier` will filter cores from + those logical cores. + + Args: + filter_specifier: Two different filters can be used, one that specifies the number + of logical cores per core, cores per socket and the number of sockets, + and another one that specifies a logical core list. + ascending: If :data:`True`, use cores with the lowest numerical id first and continue + in ascending order. If :data:`False`, start with the highest id and continue + in descending order. This ordering affects which sockets to consider first as well. - If ascending is True, use cores with the lowest numerical id first - and continue in ascending order. If False, start with the highest - id and continue in descending order. This ordering affects which - sockets to consider first as well. + Returns: + The filtered logical cores. """ self._logger.debug(f"Filtering {filter_specifier} from {self.lcores}.") return lcore_filter( @@ -198,17 +255,14 @@ def filter_lcores( ).filter() def _get_remote_cpus(self) -> None: - """ - Scan CPUs in the remote OS and store a list of LogicalCores. - """ + """Scan CPUs in the remote OS and store a list of LogicalCores.""" self._logger.info("Getting CPU information.") self.lcores = self.main_session.get_remote_cpus(self.config.use_first_core) def _setup_hugepages(self) -> None: - """ - Setup hugepages on the Node. Different architectures can supply different - amounts of memory for hugepages and numa-based hugepage allocation may need - to be considered. + """Setup hugepages on the node. + + Configure the hugepages only if they're specified in the node's test run configuration. 
""" if self.config.hugepages: self.main_session.setup_hugepages( @@ -216,8 +270,11 @@ def _setup_hugepages(self) -> None: ) def configure_port_state(self, port: Port, enable: bool = True) -> None: - """ - Enable/disable port. + """Enable/disable `port`. + + Args: + port: The port to enable/disable. + enable: :data:`True` to enable, :data:`False` to disable. """ self.main_session.configure_port_state(port, enable) @@ -227,15 +284,17 @@ def configure_port_ip_address( port: Port, delete: bool = False, ) -> None: - """ - Configure the IP address of a port on this node. + """Add an IP address to `port` on this node. + + Args: + address: The IP address with mask in CIDR format. Can be either IPv4 or IPv6. + port: The port to which to add the address. + delete: If :data:`True`, will delete the address from the port instead of adding it. """ self.main_session.configure_port_ip_address(address, port, delete) def close(self) -> None: - """ - Close all connections and free other resources. - """ + """Close all connections and free other resources.""" if self.main_session: self.main_session.close() for session in self._other_sessions: @@ -244,6 +303,11 @@ def close(self) -> None: @staticmethod def skip_setup(func: Callable[..., Any]) -> Callable[..., Any]: + """Skip the decorated function. + + The :option:`--skip-setup` command line argument and the :envvar:`DTS_SKIP_SETUP` + environment variable enable the decorator. + """ if SETTINGS.skip_setup: return lambda *args: None else: @@ -251,6 +315,13 @@ def skip_setup(func: Callable[..., Any]) -> Callable[..., Any]: def create_session(node_config: NodeConfiguration, name: str, logger: DTSLOG) -> OSSession: + """Factory for OS-aware sessions. + + Args: + node_config: The test run configuration of the node to connect to. + name: The name of the session. + logger: The logger instance this session will use. 
+ """ match node_config.os: case OS.linux: return LinuxSession(node_config, name, logger) From patchwork Thu Nov 23 15:13:41 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Juraj_Linke=C5=A1?= X-Patchwork-Id: 134584 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id EE078433AC; Thu, 23 Nov 2023 16:16:42 +0100 (CET) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 30A74432B8; Thu, 23 Nov 2023 16:14:28 +0100 (CET) Received: from mail-wm1-f45.google.com (mail-wm1-f45.google.com [209.85.128.45]) by mails.dpdk.org (Postfix) with ESMTP id DF55E43251 for ; Thu, 23 Nov 2023 16:14:10 +0100 (CET) Received: by mail-wm1-f45.google.com with SMTP id 5b1f17b1804b1-40b2a8575d9so6405525e9.0 for ; Thu, 23 Nov 2023 07:14:10 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=pantheon.tech; s=google; t=1700752450; x=1701357250; darn=dpdk.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=1GQUWl4fKCWCJwAOefP/gX8Mj3lF66GL8g1NajdfKkM=; b=JpQ/yfTD0d7/3tkZ6DpH+t9O08mbRCARRnOC7qvl9kuKN+ewfUaDMSTjFWpxnP5uw7 jUnaJhIXBnfYfaU9CraKO9CAIgkgxeZ/LewGT5XfBGX6XRMid/bWxTBMURiQvGpmAYBa VFd9YfFm7WmY2R2ypgWxV6eEAkIakxGsmYsTy68z+SOWF+e/HVP+B6MPGo0pGofnLT+g iRpTJU9Mi5dIgdTedFAAB4G5GKLH2ScwKy9Jzyf0SfYjS+KwNl89U5OaQihxSC4apNkC SJ6NRYgfxjq22bjcJTpyN0v6LDY5pt2WOfGcRPnBzabfY+o5WiLljnPEBLraWCplii6r q5lQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1700752450; x=1701357250; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; 
bh=1GQUWl4fKCWCJwAOefP/gX8Mj3lF66GL8g1NajdfKkM=; b=YHBXNqnQIwSVpG2NCT9HwJA+tvYL6M4IyzyvXZCwRlD9aJ9Ujil+OBiMERB1Ij0yE0 ubNa2SlexKIWxVL8WFZ8iyS+efs7DGzexZQrCrJgTAzZugI7BoLuYwx0wyyuI6Ynpb3l ac+bIhKURk+6X5ui6zV+gYpo75NMVA4f9WRyB0R2o6tmJpXh8kLj6/TJlNgVyJIDKm4Y 2utkDkAFEeIDz9MHCvfgMou8AnU312MCYMNGv9ri2KCr3qUVrIuH+uF8ENPiUdMomcBB tT3e9kX7g1jIheXvCoGSMFJC3fgnnzbwNbUaUGKNaH80avDbfqkQxA1xMjuKRNm5revW iEbg== X-Gm-Message-State: AOJu0YxzpOLGQ/RfpEgLwRK94N/kwkv7foAtJHb1oXzfRSh1BFeZxsuQ Jw/4HZ0GW7oAMY/ub96Nkmvkpw== X-Google-Smtp-Source: AGHT+IH2Co3MWV61PSRXaex3I85fvks+fFZ1r1PkTqmMNglsxn8KVRt0/K4NDZ9IkWMHseziHQSbbQ== X-Received: by 2002:adf:cd86:0:b0:332:e573:5c63 with SMTP id q6-20020adfcd86000000b00332e5735c63mr902178wrj.7.1700752450380; Thu, 23 Nov 2023 07:14:10 -0800 (PST) Received: from jlinkes-PT-Latitude-5530.. ([84.245.121.10]) by smtp.gmail.com with ESMTPSA id q4-20020adfea04000000b003296b488961sm1870143wrm.31.2023.11.23.07.14.09 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Thu, 23 Nov 2023 07:14:09 -0800 (PST) From: =?utf-8?q?Juraj_Linke=C5=A1?= To: thomas@monjalon.net, Honnappa.Nagarahalli@arm.com, jspewock@iol.unh.edu, probb@iol.unh.edu, paul.szczepanek@arm.com, yoan.picchi@foss.arm.com, Luca.Vizzarro@arm.com Cc: dev@dpdk.org, =?utf-8?q?Juraj_Linke=C5=A1?= Subject: [PATCH v8 18/21] dts: sut and tg nodes docstring update Date: Thu, 23 Nov 2023 16:13:41 +0100 Message-Id: <20231123151344.162812-19-juraj.linkes@pantheon.tech> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20231123151344.162812-1-juraj.linkes@pantheon.tech> References: <20231115130959.39420-1-juraj.linkes@pantheon.tech> <20231123151344.162812-1-juraj.linkes@pantheon.tech> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Format according to the Google format and PEP257, with slight deviations. 
Signed-off-by: Juraj Linkeš --- dts/framework/testbed_model/sut_node.py | 230 ++++++++++++++++-------- dts/framework/testbed_model/tg_node.py | 42 +++-- 2 files changed, 176 insertions(+), 96 deletions(-) diff --git a/dts/framework/testbed_model/sut_node.py b/dts/framework/testbed_model/sut_node.py index 5ce9446dba..c4acea38d1 100644 --- a/dts/framework/testbed_model/sut_node.py +++ b/dts/framework/testbed_model/sut_node.py @@ -3,6 +3,14 @@ # Copyright(c) 2023 PANTHEON.tech s.r.o. # Copyright(c) 2023 University of New Hampshire +"""System under test (DPDK + hardware) node. + +A system under test (SUT) is the combination of DPDK +and the hardware we're testing with DPDK (NICs, crypto and other devices). +An SUT node is where this SUT runs. +""" + + import os import tarfile import time @@ -26,6 +34,11 @@ class EalParameters(object): + """The environment abstraction layer parameters. + + The string representation can be created by converting the instance to a string. + """ + def __init__( self, lcore_list: LogicalCoreList, @@ -35,21 +48,23 @@ def __init__( vdevs: list[VirtualDevice], other_eal_param: str, ): - """ - Generate eal parameters character string; - :param lcore_list: the list of logical cores to use. - :param memory_channels: the number of memory channels to use. - :param prefix: set file prefix string, eg: - prefix='vf' - :param no_pci: switch of disable PCI bus eg: - no_pci=True - :param vdevs: virtual device list, eg: - vdevs=[ - VirtualDevice('net_ring0'), - VirtualDevice('net_ring1') - ] - :param other_eal_param: user defined DPDK eal parameters, eg: - other_eal_param='--single-file-segments' + """Initialize the parameters according to inputs. + + Process the parameters into the format used on the command line. + + Args: + lcore_list: The list of logical cores to use. + memory_channels: The number of memory channels to use. + prefix: Set the file prefix string with which to start DPDK, e.g.: ``prefix='vf'``. 
+ no_pci: Switch to disable PCI bus e.g.: ``no_pci=True``. + vdevs: Virtual devices, e.g.:: + + vdevs=[ + VirtualDevice('net_ring0'), + VirtualDevice('net_ring1') + ] + other_eal_param: user defined DPDK EAL parameters, e.g.: + ``other_eal_param='--single-file-segments'`` """ self._lcore_list = f"-l {lcore_list}" self._memory_channels = f"-n {memory_channels}" @@ -61,6 +76,7 @@ def __init__( self._other_eal_param = other_eal_param def __str__(self) -> str: + """Create the EAL string.""" return ( f"{self._lcore_list} " f"{self._memory_channels} " @@ -72,11 +88,21 @@ def __str__(self) -> str: class SutNode(Node): - """ - A class for managing connections to the System under Test, providing - methods that retrieve the necessary information about the node (such as - CPU, memory and NIC details) and configuration capabilities. - Another key capability is building DPDK according to given build target. + """The system under test node. + + The SUT node extends :class:`Node` with DPDK specific features: + + * DPDK build, + * Gathering of DPDK build info, + * The running of DPDK apps, interactively or one-time execution, + * DPDK apps cleanup. + + The :option:`--tarball` command line argument and the :envvar:`DTS_DPDK_TARBALL` + environment variable configure the path to the DPDK tarball + or the git commit ID, tag ID or tree ID to test. + + Attributes: + config: The SUT node configuration """ config: SutNodeConfiguration @@ -94,6 +120,11 @@ class SutNode(Node): _path_to_devbind_script: PurePath | None def __init__(self, node_config: SutNodeConfiguration): + """Extend the constructor with SUT node specifics. + + Args: + node_config: The SUT node's test run configuration. + """ super(SutNode, self).__init__(node_config) self._dpdk_prefix_list = [] self._build_target_config = None @@ -113,6 +144,12 @@ def __init__(self, node_config: SutNodeConfiguration): @property def _remote_dpdk_dir(self) -> PurePath: + """The remote DPDK dir. 
+ + This internal property should be set after extracting the DPDK tarball. If it's not set, + that implies the DPDK setup step has been skipped, in which case we can guess where + a previous build was located. + """ if self.__remote_dpdk_dir is None: self.__remote_dpdk_dir = self._guess_dpdk_remote_dir() return self.__remote_dpdk_dir @@ -123,6 +160,11 @@ def _remote_dpdk_dir(self, value: PurePath) -> None: @property def remote_dpdk_build_dir(self) -> PurePath: + """The remote DPDK build directory. + + This is the directory where DPDK was built. + We assume it was built in a subdirectory of the extracted tarball. + """ if self._build_target_config: return self.main_session.join_remote_path( self._remote_dpdk_dir, self._build_target_config.name @@ -132,18 +174,21 @@ def remote_dpdk_build_dir(self) -> PurePath: @property def dpdk_version(self) -> str: + """Last built DPDK version.""" if self._dpdk_version is None: self._dpdk_version = self.main_session.get_dpdk_version(self._remote_dpdk_dir) return self._dpdk_version @property def node_info(self) -> NodeInfo: + """Additional node information.""" if self._node_info is None: self._node_info = self.main_session.get_node_info() return self._node_info @property def compiler_version(self) -> str: + """The node's compiler version.""" if self._compiler_version is None: if self._build_target_config is not None: self._compiler_version = self.main_session.get_compiler_version( @@ -158,6 +203,7 @@ def compiler_version(self) -> str: @property def path_to_devbind_script(self) -> PurePath: + """The path to the dpdk-devbind.py script on the node.""" if self._path_to_devbind_script is None: self._path_to_devbind_script = self.main_session.join_remote_path( self._remote_dpdk_dir, "usertools", "dpdk-devbind.py" @@ -165,6 +211,11 @@ def path_to_devbind_script(self) -> PurePath: return self._path_to_devbind_script def get_build_target_info(self) -> BuildTargetInfo: + """Get additional build target information. 
+ + Returns: + The build target information. + """ return BuildTargetInfo( dpdk_version=self.dpdk_version, compiler_version=self.compiler_version ) @@ -173,8 +224,9 @@ def _guess_dpdk_remote_dir(self) -> PurePath: return self.main_session.guess_dpdk_remote_dir(self._remote_tmp_dir) def _set_up_build_target(self, build_target_config: BuildTargetConfiguration) -> None: - """ - Setup DPDK on the SUT node. + """Set up DPDK on the SUT node. + + Additional build target setup steps on top of those in :class:`Node`. """ # we want to ensure that dpdk_version and compiler_version is reset for new # build targets @@ -186,16 +238,14 @@ def _set_up_build_target(self, build_target_config: BuildTargetConfiguration) -> self.bind_ports_to_driver() def _tear_down_build_target(self) -> None: - """ - This method exists to be optionally overwritten by derived classes and - is not decorated so that the derived class doesn't have to use the decorator. + """Bind ports to the operating system drivers. + + Additional build target teardown steps on top of those in :class:`Node`. """ self.bind_ports_to_driver(for_dpdk=False) def _configure_build_target(self, build_target_config: BuildTargetConfiguration) -> None: - """ - Populate common environment variables and set build target config. - """ + """Populate common environment variables and set build target config.""" self._env_vars = {} self._build_target_config = build_target_config self._env_vars.update(self.main_session.get_dpdk_build_env_vars(build_target_config.arch)) @@ -207,9 +257,7 @@ def _configure_build_target(self, build_target_config: BuildTargetConfiguration) @Node.skip_setup def _copy_dpdk_tarball(self) -> None: - """ - Copy to and extract DPDK tarball on the SUT node.
- """ + """Copy to and extract DPDK tarball on the SUT node.""" self._logger.info("Copying DPDK tarball to SUT.") self.main_session.copy_to(SETTINGS.dpdk_tarball_path, self._remote_tmp_dir) @@ -238,8 +286,9 @@ def _copy_dpdk_tarball(self) -> None: @Node.skip_setup def _build_dpdk(self) -> None: - """ - Build DPDK. Uses the already configured target. Assumes that the tarball has + """Build DPDK. + + Uses the already configured target. Assumes that the tarball has already been copied to and extracted on the SUT node. """ self.main_session.build_dpdk( @@ -250,15 +299,19 @@ def _build_dpdk(self) -> None: ) def build_dpdk_app(self, app_name: str, **meson_dpdk_args: str | bool) -> PurePath: - """ - Build one or all DPDK apps. Requires DPDK to be already built on the SUT node. - When app_name is 'all', build all example apps. - When app_name is any other string, tries to build that example app. - Return the directory path of the built app. If building all apps, return - the path to the examples directory (where all apps reside). - The meson_dpdk_args are keyword arguments - found in meson_option.txt in root DPDK directory. Do not use -D with them, - for example: enable_kmods=True. + """Build one or all DPDK apps. + + Requires DPDK to be already built on the SUT node. + + Args: + app_name: The name of the DPDK app to build. + When `app_name` is ``all``, build all example apps. + meson_dpdk_args: The arguments found in ``meson_options.txt`` in root DPDK directory. + Do not use ``-D`` with them. + + Returns: + The directory path of the built app. If building all apps, return + the path to the examples directory (where all apps reside). """ self.main_session.build_dpdk( self._env_vars, @@ -277,9 +330,7 @@ def build_dpdk_app(self, app_name: str, **meson_dpdk_args: str | bool) -> PurePa ) def kill_cleanup_dpdk_apps(self) -> None: - """ - Kill all dpdk applications on the SUT. Cleanup hugepages. 
- """ + """Kill all dpdk applications on the SUT, then clean up hugepages.""" if self._dpdk_kill_session and self._dpdk_kill_session.is_alive(): # we can use the session if it exists and responds self._dpdk_kill_session.kill_cleanup_dpdk_apps(self._dpdk_prefix_list) @@ -298,33 +349,34 @@ def create_eal_parameters( vdevs: list[VirtualDevice] | None = None, other_eal_param: str = "", ) -> "EalParameters": - """ - Generate eal parameters character string; - :param lcore_filter_specifier: a number of lcores/cores/sockets to use - or a list of lcore ids to use. - The default will select one lcore for each of two cores - on one socket, in ascending order of core ids. - :param ascending_cores: True, use cores with the lowest numerical id first - and continue in ascending order. If False, start with the - highest id and continue in descending order. This ordering - affects which sockets to consider first as well. - :param prefix: set file prefix string, eg: - prefix='vf' - :param append_prefix_timestamp: if True, will append a timestamp to - DPDK file prefix. - :param no_pci: switch of disable PCI bus eg: - no_pci=True - :param vdevs: virtual device list, eg: - vdevs=[ - VirtualDevice('net_ring0'), - VirtualDevice('net_ring1') - ] - :param other_eal_param: user defined DPDK eal parameters, eg: - other_eal_param='--single-file-segments' - :return: eal param string, eg: - '-c 0xf -a 0000:88:00.0 --file-prefix=dpdk_1112_20190809143420'; - """ + """Compose the EAL parameters. + + Process the list of cores and the DPDK prefix and pass that along with + the rest of the arguments. + Args: + lcore_filter_specifier: A number of lcores/cores/sockets to use + or a list of lcore ids to use. + The default will select one lcore for each of two cores + on one socket, in ascending order of core ids. + ascending_cores: Sort cores in ascending order (lowest to highest IDs). + If :data:`False`, sort in descending order. 
+ prefix: Set the file prefix string with which to start DPDK, e.g.: ``prefix='vf'``. + append_prefix_timestamp: If :data:`True`, will append a timestamp to DPDK file prefix. + no_pci: Switch to disable PCI bus e.g.: ``no_pci=True``. + vdevs: Virtual devices, e.g.:: + + vdevs=[ + VirtualDevice('net_ring0'), + VirtualDevice('net_ring1') + ] + other_eal_param: user defined DPDK EAL parameters, e.g.: + ``other_eal_param='--single-file-segments'``. + + Returns: + An EAL param string, such as + ``-c 0xf -a 0000:88:00.0 --file-prefix=dpdk_1112_20190809143420``. + """ lcore_list = LogicalCoreList(self.filter_lcores(lcore_filter_specifier, ascending_cores)) if append_prefix_timestamp: @@ -348,14 +400,29 @@ def create_eal_parameters( def run_dpdk_app( self, app_path: PurePath, eal_args: "EalParameters", timeout: float = 30 ) -> CommandResult: - """ - Run DPDK application on the remote node. + """Run DPDK application on the remote node. + + The application is not run interactively - the command that starts the application + is executed and then the call waits for it to finish execution. + + Args: + app_path: The remote path to the DPDK application. + eal_args: EAL parameters to run the DPDK application with. + timeout: Wait at most this long in seconds for `command` execution to complete. + + Returns: + The result of the DPDK app execution. """ return self.main_session.send_command( f"{app_path} {eal_args}", timeout, privileged=True, verify=True ) def configure_ipv4_forwarding(self, enable: bool) -> None: + """Enable/disable IPv4 forwarding on the node. + + Args: + enable: If :data:`True`, enable the forwarding, otherwise disable it. + """ self.main_session.configure_ipv4_forwarding(enable) def create_interactive_shell( @@ -365,9 +432,13 @@ def create_interactive_shell( privileged: bool = False, eal_parameters: EalParameters | str | None = None, ) -> InteractiveShellType: - """Factory method for creating a handler for an interactive session. 
+ """Extend the factory for interactive session handlers. + + The extensions are SUT node specific: - Instantiate shell_cls according to the remote OS specifics. + * The default for `eal_parameters`, + * The interactive shell path `shell_cls.path` is prepended with path to the remote + DPDK build directory for DPDK apps. Args: shell_cls: The class of the shell. @@ -377,9 +448,10 @@ def create_interactive_shell( privileged: Whether to run the shell with administrative privileges. eal_parameters: List of EAL parameters to use to launch the app. If this isn't provided or an empty string is passed, it will default to calling - create_eal_parameters(). + :meth:`create_eal_parameters`. + Returns: - Instance of the desired interactive application. + An instance of the desired interactive application shell. """ if not eal_parameters: eal_parameters = self.create_eal_parameters() @@ -396,8 +468,8 @@ def bind_ports_to_driver(self, for_dpdk: bool = True) -> None: """Bind all ports on the SUT to a driver. Args: - for_dpdk: Boolean that, when True, binds ports to os_driver_for_dpdk - or, when False, binds to os_driver. Defaults to True. + for_dpdk: If :data:`True`, binds ports to os_driver_for_dpdk. + If :data:`False`, binds to os_driver. """ for port in self.ports: driver = port.os_driver_for_dpdk if for_dpdk else port.os_driver diff --git a/dts/framework/testbed_model/tg_node.py b/dts/framework/testbed_model/tg_node.py index 8a8f0019f3..f269d4c585 100644 --- a/dts/framework/testbed_model/tg_node.py +++ b/dts/framework/testbed_model/tg_node.py @@ -5,13 +5,8 @@ """Traffic generator node. -This is the node where the traffic generator resides. -The distinction between a node and a traffic generator is as follows: -A node is a host that DTS connects to. It could be a baremetal server, -a VM or a container. -A traffic generator is software running on the node. -A traffic generator node is a node running a traffic generator. 
-A node can be a traffic generator node as well as system under test node. +A traffic generator (TG) generates traffic that's sent towards the SUT node. +A TG node is where the TG runs. """ from scapy.packet import Packet # type: ignore[import] @@ -24,13 +19,16 @@ class TGNode(Node): - """Manage connections to a node with a traffic generator. + """The traffic generator node. - Apart from basic node management capabilities, the Traffic Generator node has - specialized methods for handling the traffic generator running on it. + The TG node extends :class:`Node` with TG specific features: - Arguments: - node_config: The user configuration of the traffic generator node. + * Traffic generator initialization, + * The sending of traffic and receiving packets, + * The sending of traffic without receiving packets. + + Not all traffic generators are capable of capturing traffic, which is why there + must be a way to send traffic without that. Attributes: traffic_generator: The traffic generator running on the node. @@ -39,6 +37,13 @@ class TGNode(Node): traffic_generator: CapturingTrafficGenerator def __init__(self, node_config: TGNodeConfiguration): + """Extend the constructor with TG node specifics. + + Initialize the traffic generator on the TG node. + + Args: + node_config: The TG node's test run configuration. + """ super(TGNode, self).__init__(node_config) self.traffic_generator = create_traffic_generator(self, node_config.traffic_generator) self._logger.info(f"Created node: {self.name}") @@ -50,17 +55,17 @@ def send_packet_and_capture( receive_port: Port, duration: float = 1, ) -> list[Packet]: - """Send a packet, return received traffic. + """Send `packet`, return received traffic. - Send a packet on the send_port and then return all traffic captured - on the receive_port for the given duration. Also record the captured traffic + Send `packet` on `send_port` and then return all traffic captured + on `receive_port` for the given duration. 
Also record the captured traffic in a pcap file. Args: packet: The packet to send. send_port: The egress port on the TG node. receive_port: The ingress port in the TG node. - duration: Capture traffic for this amount of time after sending the packet. + duration: Capture traffic for this amount of time after sending `packet`. Returns: A list of received packets. May be empty if no packets are captured. @@ -70,6 +75,9 @@ def send_packet_and_capture( ) def close(self) -> None: - """Free all resources used by the node""" + """Free all resources used by the node. + + This extends the superclass method with TG cleanup. + """ self.traffic_generator.close() super(TGNode, self).close() From patchwork Thu Nov 23 15:13:42 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Juraj_Linke=C5=A1?= X-Patchwork-Id: 134585 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 1E667433AC; Thu, 23 Nov 2023 16:16:52 +0100 (CET) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 644BD432B2; Thu, 23 Nov 2023 16:14:29 +0100 (CET) Received: from mail-wm1-f52.google.com (mail-wm1-f52.google.com [209.85.128.52]) by mails.dpdk.org (Postfix) with ESMTP id 5992F43254 for ; Thu, 23 Nov 2023 16:14:12 +0100 (CET) Received: by mail-wm1-f52.google.com with SMTP id 5b1f17b1804b1-40b2c8e91afso6620125e9.3 for ; Thu, 23 Nov 2023 07:14:12 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=pantheon.tech; s=google; t=1700752452; x=1701357252; darn=dpdk.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=u+jXYWRtRK7ydX9kuwINjBiVTNuHBTeW42d20sdTgTs=; 
From: =?utf-8?q?Juraj_Linke=C5=A1?=
To: thomas@monjalon.net, Honnappa.Nagarahalli@arm.com, jspewock@iol.unh.edu, probb@iol.unh.edu, paul.szczepanek@arm.com, yoan.picchi@foss.arm.com, Luca.Vizzarro@arm.com
Cc: dev@dpdk.org, =?utf-8?q?Juraj_Linke=C5=A1?=
Subject: [PATCH v8 19/21] dts: base traffic generators docstring update
Date: Thu, 23 Nov 2023 16:13:42 +0100
Message-Id: <20231123151344.162812-20-juraj.linkes@pantheon.tech>
In-Reply-To: <20231123151344.162812-1-juraj.linkes@pantheon.tech>
References: <20231115130959.39420-1-juraj.linkes@pantheon.tech> <20231123151344.162812-1-juraj.linkes@pantheon.tech>

Format according to the Google format and PEP257, with slight deviations.

Signed-off-by: Juraj Linkeš
---
 .../traffic_generator/__init__.py       | 22 ++++++++-
 .../capturing_traffic_generator.py      | 45 +++++++++++--------
 .../traffic_generator/traffic_generator.py | 33 ++++++++------
 3 files changed, 67 insertions(+), 33 deletions(-)

diff --git a/dts/framework/testbed_model/traffic_generator/__init__.py b/dts/framework/testbed_model/traffic_generator/__init__.py
index 52888d03fa..11e2bd7d97 100644
--- a/dts/framework/testbed_model/traffic_generator/__init__.py
+++ b/dts/framework/testbed_model/traffic_generator/__init__.py
@@ -1,6 +1,19 @@
 # SPDX-License-Identifier: BSD-3-Clause
 # Copyright(c) 2023 PANTHEON.tech s.r.o.
+"""DTS traffic generators.
+
+A traffic generator is capable of generating traffic and then monitoring the returning traffic.
+All traffic generators must count the number of received packets. Some may additionally capture +individual packets. + +A traffic generator may be software running on generic hardware or it could be specialized hardware. + +The traffic generators that only count the number of received packets are suitable only for +performance testing. In functional testing, we need to be able to dissect each arrived packet +and a capturing traffic generator is required. +""" + from framework.config import ScapyTrafficGeneratorConfig, TrafficGeneratorType from framework.exception import ConfigurationError from framework.testbed_model.node import Node @@ -12,8 +25,15 @@ def create_traffic_generator( tg_node: Node, traffic_generator_config: ScapyTrafficGeneratorConfig ) -> CapturingTrafficGenerator: - """A factory function for creating traffic generator object from user config.""" + """The factory function for creating traffic generator objects from the test run configuration. + + Args: + tg_node: The traffic generator node where the created traffic generator will be running. + traffic_generator_config: The traffic generator config. + Returns: + A traffic generator capable of capturing received packets. + """ match traffic_generator_config.traffic_generator_type: case TrafficGeneratorType.SCAPY: return ScapyTrafficGenerator(tg_node, traffic_generator_config) diff --git a/dts/framework/testbed_model/traffic_generator/capturing_traffic_generator.py b/dts/framework/testbed_model/traffic_generator/capturing_traffic_generator.py index 1fc7f98c05..0246590333 100644 --- a/dts/framework/testbed_model/traffic_generator/capturing_traffic_generator.py +++ b/dts/framework/testbed_model/traffic_generator/capturing_traffic_generator.py @@ -23,19 +23,21 @@ def _get_default_capture_name() -> str: - """ - This is the function used for the default implementation of capture names. 
- """ return str(uuid.uuid4()) class CapturingTrafficGenerator(TrafficGenerator): """Capture packets after sending traffic. - A mixin interface which enables a packet generator to declare that it can capture + The intermediary interface which enables a packet generator to declare that it can capture packets and return them to the user. + Similarly to :class:`~.traffic_generator.TrafficGenerator`, this class exposes + the public methods specific to capturing traffic generators and defines a private method + that must implement the traffic generation and capturing logic in subclasses. + The methods of capturing traffic generators obey the following workflow: + 1. send packets 2. capture packets 3. write the capture to a .pcap file @@ -44,6 +46,7 @@ class CapturingTrafficGenerator(TrafficGenerator): @property def is_capturing(self) -> bool: + """This traffic generator can capture traffic.""" return True def send_packet_and_capture( @@ -54,11 +57,12 @@ def send_packet_and_capture( duration: float, capture_name: str = _get_default_capture_name(), ) -> list[Packet]: - """Send a packet, return received traffic. + """Send `packet` and capture received traffic. + + Send `packet` on `send_port` and then return all traffic captured + on `receive_port` for the given `duration`. - Send a packet on the send_port and then return all traffic captured - on the receive_port for the given duration. Also record the captured traffic - in a pcap file. + The captured traffic is recorded in the `capture_name`.pcap file. Args: packet: The packet to send. @@ -68,7 +72,7 @@ def send_packet_and_capture( capture_name: The name of the .pcap file where to store the capture. Returns: - A list of received packets. May be empty if no packets are captured. + The received packets. May be empty if no packets are captured. 
""" return self.send_packets_and_capture( [packet], send_port, receive_port, duration, capture_name @@ -82,11 +86,14 @@ def send_packets_and_capture( duration: float, capture_name: str = _get_default_capture_name(), ) -> list[Packet]: - """Send packets, return received traffic. + """Send `packets` and capture received traffic. - Send packets on the send_port and then return all traffic captured - on the receive_port for the given duration. Also record the captured traffic - in a pcap file. + Send `packets` on `send_port` and then return all traffic captured + on `receive_port` for the given `duration`. + + The captured traffic is recorded in the `capture_name`.pcap file. The target directory + can be configured with the :option:`--output-dir` command line argument or + the :envvar:`DTS_OUTPUT_DIR` environment variable. Args: packets: The packets to send. @@ -96,7 +103,7 @@ def send_packets_and_capture( capture_name: The name of the .pcap file where to store the capture. Returns: - A list of received packets. May be empty if no packets are captured. + The received packets. May be empty if no packets are captured. """ self._logger.debug(get_packet_summaries(packets)) self._logger.debug( @@ -121,10 +128,12 @@ def _send_packets_and_capture( receive_port: Port, duration: float, ) -> list[Packet]: - """ - The extended classes must implement this method which - sends packets on send_port and receives packets on the receive_port - for the specified duration. It must be able to handle no received packets. + """The implementation of :method:`send_packets_and_capture`. + + The subclasses must implement this method which sends `packets` on `send_port` + and receives packets on `receive_port` for the specified `duration`. + + It must be able to handle receiving no packets. 
""" def _write_capture_from_packets(self, capture_name: str, packets: list[Packet]) -> None: diff --git a/dts/framework/testbed_model/traffic_generator/traffic_generator.py b/dts/framework/testbed_model/traffic_generator/traffic_generator.py index 0d9902ddb7..5fb9824568 100644 --- a/dts/framework/testbed_model/traffic_generator/traffic_generator.py +++ b/dts/framework/testbed_model/traffic_generator/traffic_generator.py @@ -22,7 +22,8 @@ class TrafficGenerator(ABC): """The base traffic generator. - Defines the few basic methods that each traffic generator must implement. + Exposes the common public methods of all traffic generators and defines private methods + that must implement the traffic generation logic in subclasses. """ _config: TrafficGeneratorConfig @@ -30,14 +31,20 @@ class TrafficGenerator(ABC): _logger: DTSLOG def __init__(self, tg_node: Node, config: TrafficGeneratorConfig): + """Initialize the traffic generator. + + Args: + tg_node: The traffic generator node where the created traffic generator will be running. + config: The traffic generator's test run configuration. + """ self._config = config self._tg_node = tg_node self._logger = getLogger(f"{self._tg_node.name} {self._config.traffic_generator_type}") def send_packet(self, packet: Packet, port: Port) -> None: - """Send a packet and block until it is fully sent. + """Send `packet` and block until it is fully sent. - What fully sent means is defined by the traffic generator. + Send `packet` on `port`, then wait until `packet` is fully sent. Args: packet: The packet to send. @@ -46,9 +53,9 @@ def send_packet(self, packet: Packet, port: Port) -> None: self.send_packets([packet], port) def send_packets(self, packets: list[Packet], port: Port) -> None: - """Send packets and block until they are fully sent. + """Send `packets` and block until they are fully sent. - What fully sent means is defined by the traffic generator. + Send `packets` on `port`, then wait until `packets` are fully sent. 
 Args:
 packets: The packets to send.
@@ -60,19 +67,17 @@ def send_packets(self, packets: list[Packet], port: Port) -> None:

 @abstractmethod
 def _send_packets(self, packets: list[Packet], port: Port) -> None:
- """
- The extended classes must implement this method which
- sends packets on send_port. The method should block until all packets
- are fully sent.
+ """The implementation of :meth:`send_packets`.
+
+ The subclasses must implement this method which sends `packets` on `port`.
+ The method should block until all `packets` are fully sent.
+
+ What fully sent means is defined by the traffic generator.
 """

 @property
 def is_capturing(self) -> bool:
- """Whether this traffic generator can capture traffic.
-
- Returns:
- True if the traffic generator can capture traffic, False otherwise.
- """
+ """This traffic generator can't capture traffic."""
 return False

 @abstractmethod

From patchwork Thu Nov 23 15:13:43 2023
X-Patchwork-Id: 134586
([84.245.121.10]) by smtp.gmail.com with ESMTPSA id q4-20020adfea04000000b003296b488961sm1870143wrm.31.2023.11.23.07.14.12 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Thu, 23 Nov 2023 07:14:12 -0800 (PST) From: =?utf-8?q?Juraj_Linke=C5=A1?= To: thomas@monjalon.net, Honnappa.Nagarahalli@arm.com, jspewock@iol.unh.edu, probb@iol.unh.edu, paul.szczepanek@arm.com, yoan.picchi@foss.arm.com, Luca.Vizzarro@arm.com Cc: dev@dpdk.org, =?utf-8?q?Juraj_Linke=C5=A1?= Subject: [PATCH v8 20/21] dts: scapy tg docstring update Date: Thu, 23 Nov 2023 16:13:43 +0100 Message-Id: <20231123151344.162812-21-juraj.linkes@pantheon.tech> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20231123151344.162812-1-juraj.linkes@pantheon.tech> References: <20231115130959.39420-1-juraj.linkes@pantheon.tech> <20231123151344.162812-1-juraj.linkes@pantheon.tech> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Format according to the Google format and PEP257, with slight deviations. Signed-off-by: Juraj Linkeš --- .../testbed_model/traffic_generator/scapy.py | 91 +++++++++++-------- 1 file changed, 54 insertions(+), 37 deletions(-) diff --git a/dts/framework/testbed_model/traffic_generator/scapy.py b/dts/framework/testbed_model/traffic_generator/scapy.py index c88cf28369..30ea3914ee 100644 --- a/dts/framework/testbed_model/traffic_generator/scapy.py +++ b/dts/framework/testbed_model/traffic_generator/scapy.py @@ -2,14 +2,15 @@ # Copyright(c) 2022 University of New Hampshire # Copyright(c) 2023 PANTHEON.tech s.r.o. -"""Scapy traffic generator. +"""The Scapy traffic generator. -Traffic generator used for functional testing, implemented using the Scapy library. +A traffic generator used for functional testing, implemented with +`the Scapy library `_. 
The traffic generator uses an XML-RPC server to run Scapy on the remote TG node. -The XML-RPC server runs in an interactive remote SSH session running Python console, -where we start the server. The communication with the server is facilitated with -a local server proxy. +The traffic generator uses the :mod:`xmlrpc.server` module to run an XML-RPC server +in an interactive remote Python SSH session. The communication with the server is facilitated +with a local server proxy from the :mod:`xmlrpc.client` module. """ import inspect @@ -69,20 +70,20 @@ def scapy_send_packets_and_capture( recv_iface: str, duration: float, ) -> list[bytes]: - """RPC function to send and capture packets. + """The RPC function to send and capture packets. - The function is meant to be executed on the remote TG node. + The function is meant to be executed on the remote TG node via the server proxy. Args: xmlrpc_packets: The packets to send. These need to be converted to - xmlrpc.client.Binary before sending to the remote server. + :class:`~xmlrpc.client.Binary` objects before sending to the remote server. send_iface: The logical name of the egress interface. recv_iface: The logical name of the ingress interface. duration: Capture for this amount of time, in seconds. Returns: A list of bytes. Each item in the list represents one packet, which needs - to be converted back upon transfer from the remote node. + to be converted back upon transfer from the remote node. """ scapy_packets = [scapy.all.Packet(packet.data) for packet in xmlrpc_packets] sniffer = scapy.all.AsyncSniffer( @@ -96,19 +97,15 @@ def scapy_send_packets_and_capture( def scapy_send_packets(xmlrpc_packets: list[xmlrpc.client.Binary], send_iface: str) -> None: - """RPC function to send packets. + """The RPC function to send packets. - The function is meant to be executed on the remote TG node. - It doesn't return anything, only sends packets. + The function is meant to be executed on the remote TG node via the server proxy. 
+ It only sends `xmlrpc_packets`, without capturing them. Args: xmlrpc_packets: The packets to send. These need to be converted to - xmlrpc.client.Binary before sending to the remote server. + :class:`~xmlrpc.client.Binary` objects before sending to the remote server. send_iface: The logical name of the egress interface. - - Returns: - A list of bytes. Each item in the list represents one packet, which needs - to be converted back upon transfer from the remote node. """ scapy_packets = [scapy.all.Packet(packet.data) for packet in xmlrpc_packets] scapy.all.sendp(scapy_packets, iface=send_iface, realtime=True, verbose=True) @@ -128,11 +125,19 @@ def scapy_send_packets(xmlrpc_packets: list[xmlrpc.client.Binary], send_iface: s class QuittableXMLRPCServer(SimpleXMLRPCServer): - """Basic XML-RPC server that may be extended - by functions serializable by the marshal module. + """Basic XML-RPC server. + + The server may be augmented by functions serializable by the :mod:`marshal` module. """ def __init__(self, *args, **kwargs): + """Extend the XML-RPC server initialization. + + Args: + args: The positional arguments that will be passed to the superclass's constructor. + kwargs: The keyword arguments that will be passed to the superclass's constructor. + The `allow_none` argument will be set to :data:`True`. + """ kwargs["allow_none"] = True super().__init__(*args, **kwargs) self.register_introspection_functions() @@ -140,13 +145,12 @@ def __init__(self, *args, **kwargs): self.register_function(self.add_rpc_function) def quit(self) -> None: + """Quit the server.""" self._BaseServer__shutdown_request = True return None def add_rpc_function(self, name: str, function_bytes: xmlrpc.client.Binary) -> None: - """Add a function to the server. - - This is meant to be executed remotely. + """Add a function to the server from the local server proxy. Args: name: The name of the function. 
@@ -157,6 +161,11 @@ def add_rpc_function(self, name: str, function_bytes: xmlrpc.client.Binary) -> N self.register_function(function) def serve_forever(self, poll_interval: float = 0.5) -> None: + """Extend the superclass method with an additional print. + + Once executed in the local server proxy, the print gives us a clear string to expect + when starting the server. The print means the function was executed on the XML-RPC server. + """ print("XMLRPC OK") super().serve_forever(poll_interval) @@ -164,19 +173,12 @@ def serve_forever(self, poll_interval: float = 0.5) -> None: class ScapyTrafficGenerator(CapturingTrafficGenerator): """Provides access to scapy functions via an RPC interface. - The traffic generator first starts an XML-RPC on the remote TG node. - Then it populates the server with functions which use the Scapy library - to send/receive traffic. - - Any packets sent to the remote server are first converted to bytes. - They are received as xmlrpc.client.Binary objects on the server side. - When the server sends the packets back, they are also received as - xmlrpc.client.Binary object on the client side, are converted back to Scapy - packets and only then returned from the methods. + The class extends the base with remote execution of scapy functions. - Arguments: - tg_node: The node where the traffic generator resides. - config: The user configuration of the traffic generator. + Any packets sent to the remote server are first converted to bytes. They are received as + :class:`~xmlrpc.client.Binary` objects on the server side. When the server sends the packets + back, they are also received as :class:`~xmlrpc.client.Binary` objects on the client side, are + converted back to :class:`~scapy.packet.Packet` objects and only then returned from the methods. 
 Attributes:
 session: The exclusive interactive remote session created by the Scapy
@@ -190,6 +192,22 @@ class ScapyTrafficGenerator(CapturingTrafficGenerator):
 _config: ScapyTrafficGeneratorConfig

 def __init__(self, tg_node: Node, config: ScapyTrafficGeneratorConfig):
+ """Extend the constructor with Scapy TG specifics.
+
+ The traffic generator first starts an XML-RPC server on the remote `tg_node`.
+ Then it populates the server with functions which use the Scapy library
+ to send/receive traffic:
+
+ * :func:`scapy_send_packets_and_capture`
+ * :func:`scapy_send_packets`
+
+ To enable verbose logging from the xmlrpc client, use the :option:`--verbose`
+ command line argument or the :envvar:`DTS_VERBOSE` environment variable.
+
+ Args:
+ tg_node: The node where the traffic generator resides.
+ config: The traffic generator's test run configuration.
+ """
 super().__init__(tg_node, config)

 assert (
@@ -231,10 +249,8 @@ def _start_xmlrpc_server_in_remote_python(self, listen_port: int) -> None:
 # or class, so strip all lines containing only whitespace
 src = "\n".join([line for line in src.splitlines() if not line.isspace() and line != ""])

- spacing = "\n" * 4
- # execute it in the python terminal
- self.session.send_command(spacing + src + spacing)
+ self.session.send_command(src + "\n")
 self.session.send_command(
 f"server = QuittableXMLRPCServer(('0.0.0.0', {listen_port}));server.serve_forever()",
 "XMLRPC OK",
@@ -267,6 +283,7 @@ def _send_packets_and_capture(
 return scapy_packets

 def close(self) -> None:
+ """Close the traffic generator."""
 try:
 self.rpc_server_proxy.quit()
 except ConnectionRefusedError:

From patchwork Thu Nov 23 15:13:44 2023
X-Patchwork-Id: 134587
AOJu0YzrHbFPxVxco3Py3Ku2B9nh13xbzuDO1Hz1rqOBdxafbkfnQF0g kPJF+4xwRbE+oS0Co/QlQxmGwA== X-Google-Smtp-Source: AGHT+IHuvRdxsjO1gTvqaZnQFq4HjH74AbVs7cw7qN5ApRzrnSU2ggjxZ2GioDZFc5jDLnnJRN3bbA== X-Received: by 2002:a05:6000:a18:b0:332:c331:f508 with SMTP id co24-20020a0560000a1800b00332c331f508mr5473530wrb.16.1700752454221; Thu, 23 Nov 2023 07:14:14 -0800 (PST) Received: from jlinkes-PT-Latitude-5530.. ([84.245.121.10]) by smtp.gmail.com with ESMTPSA id q4-20020adfea04000000b003296b488961sm1870143wrm.31.2023.11.23.07.14.13 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Thu, 23 Nov 2023 07:14:13 -0800 (PST) From: =?utf-8?q?Juraj_Linke=C5=A1?= To: thomas@monjalon.net, Honnappa.Nagarahalli@arm.com, jspewock@iol.unh.edu, probb@iol.unh.edu, paul.szczepanek@arm.com, yoan.picchi@foss.arm.com, Luca.Vizzarro@arm.com Cc: dev@dpdk.org, =?utf-8?q?Juraj_Linke=C5=A1?= Subject: [PATCH v8 21/21] dts: test suites docstring update Date: Thu, 23 Nov 2023 16:13:44 +0100 Message-Id: <20231123151344.162812-22-juraj.linkes@pantheon.tech> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20231123151344.162812-1-juraj.linkes@pantheon.tech> References: <20231115130959.39420-1-juraj.linkes@pantheon.tech> <20231123151344.162812-1-juraj.linkes@pantheon.tech> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Format according to the Google format and PEP257, with slight deviations. 
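As a quick illustration of the convention (not part of the patch — the function and its parameters below are made up for the example), a docstring in this style has a one-line summary, a blank line, an optional longer description, and Google-format section headers:

```python
def filter_lcores(lcores: list[int], amount: int) -> list[int]:
    """Return the first ``amount`` usable logical cores.

    The one-line summary is followed by a blank line and a longer
    description; parameters and return values get their own sections.

    Args:
        lcores: The logical core IDs to filter.
        amount: How many cores to return.

    Returns:
        The first ``amount`` entries of ``lcores``.
    """
    return lcores[:amount]


# The docstring is ordinary runtime data that Sphinx autodoc imports and renders.
print(filter_lcores([0, 1, 2, 3], 2))  # prints [0, 1]
```

This is why the sections need consistent indentation: Sphinx (with the napoleon extension) parses them structurally when generating the API docs.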
Signed-off-by: Juraj Linkeš
---
 dts/tests/TestSuite_hello_world.py | 16 +++++---
 dts/tests/TestSuite_os_udp.py      | 20 ++++++----
 dts/tests/TestSuite_smoke_tests.py | 61 ++++++++++++++++++++++++------
 3 files changed, 72 insertions(+), 25 deletions(-)

diff --git a/dts/tests/TestSuite_hello_world.py b/dts/tests/TestSuite_hello_world.py
index 768ba1cfa8..fd7ff1534d 100644
--- a/dts/tests/TestSuite_hello_world.py
+++ b/dts/tests/TestSuite_hello_world.py
@@ -1,7 +1,8 @@
 # SPDX-License-Identifier: BSD-3-Clause
 # Copyright(c) 2010-2014 Intel Corporation

-"""
+"""The DPDK hello world app test suite.
+
 Run the helloworld example app and verify it prints a message for each used core.
 No other EAL parameters apart from cores are used.
 """
@@ -15,22 +16,25 @@


 class TestHelloWorld(TestSuite):
+    """DPDK hello world app test suite."""
+
     def set_up_suite(self) -> None:
-        """
+        """Set up the test suite.
+
         Setup:
             Build the app we're about to test - helloworld.
         """
         self.app_helloworld_path = self.sut_node.build_dpdk_app("helloworld")

     def test_hello_world_single_core(self) -> None:
-        """
+        """Single core test case.
+
         Steps:
             Run the helloworld app on the first usable logical core.
         Verify:
             The app prints a message from the used core: "hello from core "
         """
-        # get the first usable core
         lcore_amount = LogicalCoreCount(1, 1, 1)
         lcores = LogicalCoreCountFilter(self.sut_node.lcores, lcore_amount).filter()
@@ -42,14 +46,14 @@ def test_hello_world_single_core(self) -> None:
         )

     def test_hello_world_all_cores(self) -> None:
-        """
+        """All cores test case.
+
         Steps:
             Run the helloworld app on all usable logical cores.
         Verify:
             The app prints a message from all used cores: "hello from core "
         """
-        # get the maximum logical core number
         eal_para = self.sut_node.create_eal_parameters(
             lcore_filter_specifier=LogicalCoreList(self.sut_node.lcores)
diff --git a/dts/tests/TestSuite_os_udp.py b/dts/tests/TestSuite_os_udp.py
index bf6b93deb5..2cf29d37bb 100644
--- a/dts/tests/TestSuite_os_udp.py
+++ b/dts/tests/TestSuite_os_udp.py
@@ -1,7 +1,8 @@
 # SPDX-License-Identifier: BSD-3-Clause
 # Copyright(c) 2023 PANTHEON.tech s.r.o.

-"""
+"""Basic IPv4 OS routing test suite.
+
 Configure SUT node to route traffic from if1 to if2.
 Send a packet to the SUT node, verify it comes back on the second port on the TG node.
 """
@@ -13,24 +14,26 @@


 class TestOSUdp(TestSuite):
+    """IPv4 UDP OS routing test suite."""
+
     def set_up_suite(self) -> None:
-        """
+        """Set up the test suite.
+
         Setup:
-            Configure SUT ports and SUT to route traffic from if1 to if2.
+            Bind the SUT ports to the OS driver, configure the ports and configure the SUT
+            to route traffic from if1 to if2.
         """
-
-        # This test uses kernel drivers
         self.sut_node.bind_ports_to_driver(for_dpdk=False)
         self.configure_testbed_ipv4()

     def test_os_udp(self) -> None:
-        """
+        """Basic UDP IPv4 traffic test case.
+
         Steps:
             Send a UDP packet.
         Verify:
             The packet with proper addresses arrives at the other TG port.
         """
-
         packet = Ether() / IP() / UDP()

         received_packets = self.send_packet_and_capture(packet)
@@ -40,7 +43,8 @@ def test_os_udp(self) -> None:
         self.verify_packets(expected_packet, received_packets)

     def tear_down_suite(self) -> None:
-        """
+        """Tear down the test suite.
+
         Teardown:
             Remove the SUT port configuration configured in setup.
         """
diff --git a/dts/tests/TestSuite_smoke_tests.py b/dts/tests/TestSuite_smoke_tests.py
index 8958f58dac..5e2bac14bd 100644
--- a/dts/tests/TestSuite_smoke_tests.py
+++ b/dts/tests/TestSuite_smoke_tests.py
@@ -1,6 +1,17 @@
 # SPDX-License-Identifier: BSD-3-Clause
 # Copyright(c) 2023 University of New Hampshire

+"""Smoke test suite.
+
+Smoke tests are a class of tests which are used for validating a minimal set of important features.
+These are the most important features without which (or when they're faulty) the software wouldn't
+work properly. Thus, if any failure occurs while testing these features,
+there isn't that much of a reason to continue testing, as the software is fundamentally broken.
+
+These tests don't have to include only DPDK tests, as the reason for failures could be
+in the infrastructure (a faulty link between NICs or a misconfiguration).
+"""
+
 import re

 from framework.config import PortConfig
@@ -11,23 +22,39 @@


 class SmokeTests(TestSuite):
+    """DPDK and infrastructure smoke test suite.
+
+    The test cases validate the most basic DPDK functionality needed for all other test suites.
+    The infrastructure also needs to be tested, as that is also used by all other test suites.
+
+    Attributes:
+        is_blocking: This test suite will block the execution of all other test suites
+            in the build target after it.
+        nics_in_node: The NICs present on the SUT node.
+    """
+
     is_blocking = True
     # dicts in this list are expected to have two keys:
     # "pci_address" and "current_driver"
     nics_in_node: list[PortConfig] = []

     def set_up_suite(self) -> None:
-        """
+        """Set up the test suite.
+
         Setup:
-            Set the build directory path and generate a list of NICs in the SUT node.
+            Set the build directory path and a list of NICs in the SUT node.
         """
         self.dpdk_build_dir_path = self.sut_node.remote_dpdk_build_dir
         self.nics_in_node = self.sut_node.config.ports

     def test_unit_tests(self) -> None:
-        """
+        """DPDK meson ``fast-tests`` unit tests.
+
+        Test that all unit tests from the ``fast-tests`` suite pass.
+        The suite is a subset with only the most basic tests.
+
         Test:
-            Run the fast-test unit-test suite through meson.
+            Run the ``fast-tests`` unit test suite through meson.
         """
         self.sut_node.main_session.send_command(
             f"meson test -C {self.dpdk_build_dir_path} --suite fast-tests -t 60",
@@ -37,9 +64,14 @@ def test_unit_tests(self) -> None:
         )

     def test_driver_tests(self) -> None:
-        """
+        """DPDK meson ``driver-tests`` unit tests.
+
+        Test that all unit tests from the ``driver-tests`` suite pass.
+        The suite is a subset with driver tests. This suite may be run with virtual devices
+        configured in the test run configuration.
+
         Test:
-            Run the driver-test unit-test suite through meson.
+            Run the ``driver-tests`` unit test suite through meson.
         """
         vdev_args = ""
         for dev in self.sut_node.virtual_devices:
@@ -60,9 +92,12 @@ def test_driver_tests(self) -> None:
         )

     def test_devices_listed_in_testpmd(self) -> None:
-        """
+        """Testpmd device discovery.
+
+        Test that the devices configured in the test run configuration are found in testpmd.
+
         Test:
-            Uses testpmd driver to verify that devices have been found by testpmd.
+            List all devices found in testpmd and verify the configured devices are among them.
         """
         testpmd_driver = self.sut_node.create_interactive_shell(TestPmdShell, privileged=True)
         dev_list = [str(x) for x in testpmd_driver.get_devices()]
@@ -74,10 +109,14 @@ def test_devices_listed_in_testpmd(self) -> None:
         )

     def test_device_bound_to_driver(self) -> None:
-        """
+        """Device driver in OS.
+
+        Test that the devices configured in the test run configuration are bound to
+        the proper driver.
+
         Test:
-            Ensure that all drivers listed in the config are bound to the correct
-            driver.
+            List all devices with the ``dpdk-devbind.py`` script and verify that
+            the configured devices are bound to the proper driver.
         """
         path_to_devbind = self.sut_node.path_to_devbind_script