From patchwork Mon Dec 4 10:24:09 2023
From: Juraj Linkeš
To: thomas@monjalon.net, Honnappa.Nagarahalli@arm.com, jspewock@iol.unh.edu,
 probb@iol.unh.edu, paul.szczepanek@arm.com, yoan.picchi@foss.arm.com,
 Luca.Vizzarro@arm.com
Cc: dev@dpdk.org, Juraj Linkeš
Subject: [PATCH v9 01/21] dts: code adjustments for doc generation
Date: Mon, 4 Dec 2023 11:24:09 +0100
Message-Id: <20231204102429.106709-2-juraj.linkes@pantheon.tech>
In-Reply-To: <20231204102429.106709-1-juraj.linkes@pantheon.tech>
References: <20231123151344.162812-1-juraj.linkes@pantheon.tech>
 <20231204102429.106709-1-juraj.linkes@pantheon.tech>
X-Patchwork-Id: 134789
X-Patchwork-Delegate: thomas@monjalon.net
List-Id: DPDK patches and discussions

The standard Python tool for generating API documentation, Sphinx,
imports modules one-by-one when
generating the documentation. This requires code changes:

* properly guarding argument parsing in the if __name__ == '__main__' block,
* the logger used by the DTS runner underwent the same treatment so that it
  doesn't create log files outside of a DTS run,
* DTS uses the arguments to construct an object holding global variables, so
  the defaults for those global variables needed to be moved from argument
  parsing elsewhere,
* importing the remote_session module from framework resulted in circular
  imports because of one module trying to import another module; this is
  fixed by reorganizing the code,
* some code reorganization was done because the resulting structure makes
  more sense, improving documentation clarity.

There are some other changes which are documentation related:

* added missing type annotations so they appear in the generated docs,
* reordered arguments in some methods,
* removed superfluous arguments and attributes,
* changed functions/methods/attributes from public to private and vice versa.

All of the above appear in the generated documentation, and with them the
documentation is improved.
Signed-off-by: Juraj Linkeš
---
 dts/framework/config/__init__.py              |  8 +-
 dts/framework/dts.py                          | 31 +++++--
 dts/framework/exception.py                    | 54 +++++-------
 dts/framework/remote_session/__init__.py      | 41 +++++----
 .../interactive_remote_session.py             |  0
 .../{remote => }/interactive_shell.py         |  0
 .../{remote => }/python_shell.py              |  0
 .../remote_session/remote/__init__.py         | 27 ------
 .../{remote => }/remote_session.py            |  0
 .../{remote => }/ssh_session.py               | 12 +--
 .../{remote => }/testpmd_shell.py             |  0
 dts/framework/settings.py                     | 85 +++++++++++--------
 dts/framework/test_result.py                  |  4 +-
 dts/framework/test_suite.py                   |  7 +-
 dts/framework/testbed_model/__init__.py       | 12 +--
 dts/framework/testbed_model/{hw => }/cpu.py   | 13 +++
 dts/framework/testbed_model/hw/__init__.py    | 27 ------
 .../linux_session.py                          |  6 +-
 dts/framework/testbed_model/node.py           | 23 +++--
 .../os_session.py                             | 22 ++---
 dts/framework/testbed_model/{hw => }/port.py  |  0
 .../posix_session.py                          |  4 +-
 dts/framework/testbed_model/sut_node.py       |  8 +-
 dts/framework/testbed_model/tg_node.py        | 29 +------
 .../traffic_generator/__init__.py             | 23 +++++
 .../capturing_traffic_generator.py            |  4 +-
 .../{ => traffic_generator}/scapy.py          | 19 ++---
 .../traffic_generator.py                      | 14 ++-
 .../testbed_model/{hw => }/virtual_device.py  |  0
 dts/framework/utils.py                        | 40 +++------
 dts/main.py                                   |  9 +-
 31 files changed, 244 insertions(+), 278 deletions(-)
 rename dts/framework/remote_session/{remote => }/interactive_remote_session.py (100%)
 rename dts/framework/remote_session/{remote => }/interactive_shell.py (100%)
 rename dts/framework/remote_session/{remote => }/python_shell.py (100%)
 delete mode 100644 dts/framework/remote_session/remote/__init__.py
 rename dts/framework/remote_session/{remote => }/remote_session.py (100%)
 rename dts/framework/remote_session/{remote => }/ssh_session.py (91%)
 rename dts/framework/remote_session/{remote => }/testpmd_shell.py (100%)
 rename dts/framework/testbed_model/{hw => }/cpu.py (95%)
 delete mode 100644 dts/framework/testbed_model/hw/__init__.py
 rename dts/framework/{remote_session => testbed_model}/linux_session.py (97%)
 rename dts/framework/{remote_session => testbed_model}/os_session.py (95%)
 rename dts/framework/testbed_model/{hw => }/port.py (100%)
 rename dts/framework/{remote_session => testbed_model}/posix_session.py (98%)
 create mode 100644 dts/framework/testbed_model/traffic_generator/__init__.py
 rename dts/framework/testbed_model/{ => traffic_generator}/capturing_traffic_generator.py (98%)
 rename dts/framework/testbed_model/{ => traffic_generator}/scapy.py (95%)
 rename dts/framework/testbed_model/{ => traffic_generator}/traffic_generator.py (81%)
 rename dts/framework/testbed_model/{hw => }/virtual_device.py (100%)

diff --git a/dts/framework/config/__init__.py b/dts/framework/config/__init__.py
index 9b32cf0532..ef25a463c0 100644
--- a/dts/framework/config/__init__.py
+++ b/dts/framework/config/__init__.py
@@ -17,6 +17,7 @@
 import warlock  # type: ignore[import]
 import yaml
 
+from framework.exception import ConfigurationError
 from framework.settings import SETTINGS
 from framework.utils import StrEnum
 
@@ -89,7 +90,7 @@ class TrafficGeneratorConfig:
     traffic_generator_type: TrafficGeneratorType
 
     @staticmethod
-    def from_dict(d: dict):
+    def from_dict(d: dict) -> "ScapyTrafficGeneratorConfig":
         # This looks useless now, but is designed to allow expansion to traffic
         # generators that require more configuration later.
         match TrafficGeneratorType(d["type"]):
@@ -97,6 +98,8 @@ def from_dict(d: dict):
                 return ScapyTrafficGeneratorConfig(
                     traffic_generator_type=TrafficGeneratorType.SCAPY
                 )
+            case _:
+                raise ConfigurationError(f'Unknown traffic generator type "{d["type"]}".')
 
 
 @dataclass(slots=True, frozen=True)
@@ -314,6 +317,3 @@ def load_config() -> Configuration:
     config: dict[str, Any] = warlock.model_factory(schema, name="_Config")(config_data)
     config_obj: Configuration = Configuration.from_dict(dict(config))
     return config_obj
-
-
-CONFIGURATION = load_config()
diff --git a/dts/framework/dts.py b/dts/framework/dts.py
index 25d6942d81..356368ef10 100644
--- a/dts/framework/dts.py
+++ b/dts/framework/dts.py
@@ -6,19 +6,19 @@
 import sys
 
 from .config import (
-    CONFIGURATION,
     BuildTargetConfiguration,
     ExecutionConfiguration,
     TestSuiteConfig,
+    load_config,
 )
 from .exception import BlockingTestSuiteError
 from .logger import DTSLOG, getLogger
 from .test_result import BuildTargetResult, DTSResult, ExecutionResult, Result
 from .test_suite import get_test_suites
 from .testbed_model import SutNode, TGNode
-from .utils import check_dts_python_version
 
-dts_logger: DTSLOG = getLogger("DTSRunner")
+# dummy defaults to satisfy linters
+dts_logger: DTSLOG = None  # type: ignore[assignment]
 result: DTSResult = DTSResult(dts_logger)
 
@@ -30,14 +30,18 @@ def run_all() -> None:
     global dts_logger
     global result
 
+    # create a regular DTS logger and create a new result with it
+    dts_logger = getLogger("DTSRunner")
+    result = DTSResult(dts_logger)
+
     # check the python version of the server that run dts
-    check_dts_python_version()
+    _check_dts_python_version()
 
     sut_nodes: dict[str, SutNode] = {}
     tg_nodes: dict[str, TGNode] = {}
     try:
         # for all Execution sections
-        for execution in CONFIGURATION.executions:
+        for execution in load_config().executions:
             sut_node = sut_nodes.get(execution.system_under_test_node.name)
             tg_node = tg_nodes.get(execution.traffic_generator_node.name)
 
@@ -82,6 +86,23 @@ def run_all() -> None:
     _exit_dts()
 
 
+def _check_dts_python_version() -> None:
+    def RED(text: str) -> str:
+        return f"\u001B[31;1m{str(text)}\u001B[0m"
+
+    if sys.version_info.major < 3 or (sys.version_info.major == 3 and sys.version_info.minor < 10):
+        print(
+            RED(
+                (
+                    "WARNING: DTS execution node's python version is lower than"
+                    "python 3.10, is deprecated and will not work in future releases."
+                )
+            ),
+            file=sys.stderr,
+        )
+        print(RED("Please use Python >= 3.10 instead"), file=sys.stderr)
+
+
 def _run_execution(
     sut_node: SutNode,
     tg_node: TGNode,
diff --git a/dts/framework/exception.py b/dts/framework/exception.py
index b362e42924..151e4d3aa9 100644
--- a/dts/framework/exception.py
+++ b/dts/framework/exception.py
@@ -42,19 +42,14 @@ class SSHTimeoutError(DTSError):
     Command execution timeout.
     """
 
-    command: str
-    output: str
     severity: ClassVar[ErrorSeverity] = ErrorSeverity.SSH_ERR
+    _command: str
 
-    def __init__(self, command: str, output: str):
-        self.command = command
-        self.output = output
+    def __init__(self, command: str):
+        self._command = command
 
     def __str__(self) -> str:
-        return f"TIMEOUT on {self.command}"
-
-    def get_output(self) -> str:
-        return self.output
+        return f"TIMEOUT on {self._command}"
 
 
 class SSHConnectionError(DTSError):
@@ -62,18 +57,18 @@ class SSHConnectionError(DTSError):
     SSH connection error.
     """
 
-    host: str
-    errors: list[str]
     severity: ClassVar[ErrorSeverity] = ErrorSeverity.SSH_ERR
+    _host: str
+    _errors: list[str]
 
     def __init__(self, host: str, errors: list[str] | None = None):
-        self.host = host
-        self.errors = [] if errors is None else errors
+        self._host = host
+        self._errors = [] if errors is None else errors
 
     def __str__(self) -> str:
-        message = f"Error trying to connect with {self.host}."
+        message = f"Error trying to connect with {self._host}."
-        if self.errors:
-            message += f" Errors encountered while retrying: {', '.join(self.errors)}"
+        if self._errors:
+            message += f" Errors encountered while retrying: {', '.join(self._errors)}"
 
         return message
 
@@ -84,14 +79,14 @@ class SSHSessionDeadError(DTSError):
     It can no longer be used.
     """
 
-    host: str
     severity: ClassVar[ErrorSeverity] = ErrorSeverity.SSH_ERR
+    _host: str
 
     def __init__(self, host: str):
-        self.host = host
+        self._host = host
 
     def __str__(self) -> str:
-        return f"SSH session with {self.host} has died"
+        return f"SSH session with {self._host} has died"
 
 
 class ConfigurationError(DTSError):
@@ -107,16 +102,16 @@ class RemoteCommandExecutionError(DTSError):
     Raised when a command executed on a Node returns a non-zero exit status.
     """
 
-    command: str
-    command_return_code: int
     severity: ClassVar[ErrorSeverity] = ErrorSeverity.REMOTE_CMD_EXEC_ERR
+    command: str
+    _command_return_code: int
 
     def __init__(self, command: str, command_return_code: int):
         self.command = command
-        self.command_return_code = command_return_code
+        self._command_return_code = command_return_code
 
     def __str__(self) -> str:
-        return f"Command {self.command} returned a non-zero exit code: {self.command_return_code}"
+        return f"Command {self.command} returned a non-zero exit code: {self._command_return_code}"
 
 
 class RemoteDirectoryExistsError(DTSError):
@@ -140,22 +135,15 @@ class TestCaseVerifyError(DTSError):
     Used in test cases to verify the expected behavior.
     """
 
-    value: str
     severity: ClassVar[ErrorSeverity] = ErrorSeverity.TESTCASE_VERIFY_ERR
 
-    def __init__(self, value: str):
-        self.value = value
-
-    def __str__(self) -> str:
-        return repr(self.value)
-
 
 class BlockingTestSuiteError(DTSError):
-    suite_name: str
     severity: ClassVar[ErrorSeverity] = ErrorSeverity.BLOCKING_TESTSUITE_ERR
+    _suite_name: str
 
     def __init__(self, suite_name: str) -> None:
-        self.suite_name = suite_name
+        self._suite_name = suite_name
 
     def __str__(self) -> str:
-        return f"Blocking suite {self.suite_name} failed."
+        return f"Blocking suite {self._suite_name} failed."
diff --git a/dts/framework/remote_session/__init__.py b/dts/framework/remote_session/__init__.py
index 6124417bd7..5e7ddb2b05 100644
--- a/dts/framework/remote_session/__init__.py
+++ b/dts/framework/remote_session/__init__.py
@@ -12,27 +12,24 @@
 
 # pylama:ignore=W0611
 
-from framework.config import OS, NodeConfiguration
-from framework.exception import ConfigurationError
+from framework.config import NodeConfiguration
 from framework.logger import DTSLOG
 
-from .linux_session import LinuxSession
-from .os_session import InteractiveShellType, OSSession
-from .remote import (
-    CommandResult,
-    InteractiveRemoteSession,
-    InteractiveShell,
-    PythonShell,
-    RemoteSession,
-    SSHSession,
-    TestPmdDevice,
-    TestPmdShell,
-)
-
-
-def create_session(node_config: NodeConfiguration, name: str, logger: DTSLOG) -> OSSession:
-    match node_config.os:
-        case OS.linux:
-            return LinuxSession(node_config, name, logger)
-        case _:
-            raise ConfigurationError(f"Unsupported OS {node_config.os}")
+from .interactive_remote_session import InteractiveRemoteSession
+from .interactive_shell import InteractiveShell
+from .python_shell import PythonShell
+from .remote_session import CommandResult, RemoteSession
+from .ssh_session import SSHSession
+from .testpmd_shell import TestPmdShell
+
+
+def create_remote_session(
+    node_config: NodeConfiguration, name: str, logger: DTSLOG
+) -> RemoteSession:
+    return SSHSession(node_config, name, logger)
+
+
+def create_interactive_session(
+    node_config: NodeConfiguration, logger: DTSLOG
+) -> InteractiveRemoteSession:
+    return InteractiveRemoteSession(node_config, logger)
diff --git a/dts/framework/remote_session/remote/interactive_remote_session.py b/dts/framework/remote_session/interactive_remote_session.py
similarity index 100%
rename from dts/framework/remote_session/remote/interactive_remote_session.py
rename to dts/framework/remote_session/interactive_remote_session.py
diff --git a/dts/framework/remote_session/remote/interactive_shell.py b/dts/framework/remote_session/interactive_shell.py
similarity index 100%
rename from dts/framework/remote_session/remote/interactive_shell.py
rename to dts/framework/remote_session/interactive_shell.py
diff --git a/dts/framework/remote_session/remote/python_shell.py b/dts/framework/remote_session/python_shell.py
similarity index 100%
rename from dts/framework/remote_session/remote/python_shell.py
rename to dts/framework/remote_session/python_shell.py
diff --git a/dts/framework/remote_session/remote/__init__.py b/dts/framework/remote_session/remote/__init__.py
deleted file mode 100644
index 06403691a5..0000000000
--- a/dts/framework/remote_session/remote/__init__.py
+++ /dev/null
@@ -1,27 +0,0 @@
-# SPDX-License-Identifier: BSD-3-Clause
-# Copyright(c) 2023 PANTHEON.tech s.r.o.
-# Copyright(c) 2023 University of New Hampshire
-
-# pylama:ignore=W0611
-
-from framework.config import NodeConfiguration
-from framework.logger import DTSLOG
-
-from .interactive_remote_session import InteractiveRemoteSession
-from .interactive_shell import InteractiveShell
-from .python_shell import PythonShell
-from .remote_session import CommandResult, RemoteSession
-from .ssh_session import SSHSession
-from .testpmd_shell import TestPmdDevice, TestPmdShell
-
-
-def create_remote_session(
-    node_config: NodeConfiguration, name: str, logger: DTSLOG
-) -> RemoteSession:
-    return SSHSession(node_config, name, logger)
-
-
-def create_interactive_session(
-    node_config: NodeConfiguration, logger: DTSLOG
-) -> InteractiveRemoteSession:
-    return InteractiveRemoteSession(node_config, logger)
diff --git a/dts/framework/remote_session/remote/remote_session.py b/dts/framework/remote_session/remote_session.py
similarity index 100%
rename from dts/framework/remote_session/remote/remote_session.py
rename to dts/framework/remote_session/remote_session.py
diff --git a/dts/framework/remote_session/remote/ssh_session.py b/dts/framework/remote_session/ssh_session.py
similarity index 91%
rename from dts/framework/remote_session/remote/ssh_session.py
rename to dts/framework/remote_session/ssh_session.py
index 1a7ee649ab..a467033a13 100644
--- a/dts/framework/remote_session/remote/ssh_session.py
+++ b/dts/framework/remote_session/ssh_session.py
@@ -18,9 +18,7 @@
     SSHException,
 )
 
-from framework.config import NodeConfiguration
 from framework.exception import SSHConnectionError, SSHSessionDeadError, SSHTimeoutError
-from framework.logger import DTSLOG
 
 from .remote_session import CommandResult, RemoteSession
 
@@ -45,14 +43,6 @@ class SSHSession(RemoteSession):
 
     session: Connection
 
-    def __init__(
-        self,
-        node_config: NodeConfiguration,
-        session_name: str,
-        logger: DTSLOG,
-    ):
-        super(SSHSession, self).__init__(node_config, session_name, logger)
-
     def _connect(self) -> None:
         errors = []
         retry_attempts = 10
@@ -111,7 +101,7 @@ def _send_command(self, command: str, timeout: float, env: dict | None) -> CommandResult:
         except CommandTimedOut as e:
             self._logger.exception(e)
-            raise SSHTimeoutError(command, e.result.stderr) from e
+            raise SSHTimeoutError(command) from e
 
         return CommandResult(self.name, command, output.stdout, output.stderr, output.return_code)
diff --git a/dts/framework/remote_session/remote/testpmd_shell.py b/dts/framework/remote_session/testpmd_shell.py
similarity index 100%
rename from dts/framework/remote_session/remote/testpmd_shell.py
rename to dts/framework/remote_session/testpmd_shell.py
diff --git a/dts/framework/settings.py b/dts/framework/settings.py
index 974793a11a..25b5dcff22 100644
--- a/dts/framework/settings.py
+++ b/dts/framework/settings.py
@@ -6,7 +6,7 @@
 import argparse
 import os
 from collections.abc import Callable, Iterable, Sequence
-from dataclasses import dataclass
+from dataclasses import dataclass, field
 from pathlib import Path
 from typing import Any, TypeVar
 
@@ -22,8 +22,8 @@ def __init__(
             option_strings: Sequence[str],
             dest: str,
             nargs: str | int | None = None,
-            const: str | None = None,
-            default: str = None,
+            const: bool | None = None,
+            default: Any = None,
             type: Callable[[str], _T | argparse.FileType | None] = None,
             choices: Iterable[_T] | None = None,
             required: bool = False,
@@ -32,6 +32,12 @@ def __init__(
         ) -> None:
             env_var_value = os.environ.get(env_var)
             default = env_var_value or default
+            if const is not None:
+                nargs = 0
+                default = const if env_var_value else default
+                type = None
+                choices = None
+                metavar = None
             super(_EnvironmentArgument, self).__init__(
                 option_strings,
                 dest,
@@ -52,22 +58,28 @@ def __call__(
            values: Any,
            option_string: str = None,
        ) -> None:
-            setattr(namespace, self.dest, values)
+            if self.const is not None:
+                setattr(namespace, self.dest, self.const)
+            else:
+                setattr(namespace, self.dest, values)
 
     return _EnvironmentArgument
 
 
-@dataclass(slots=True, frozen=True)
-class _Settings:
-    config_file_path: str
-    output_dir: str
-    timeout: float
-    verbose: bool
-    skip_setup: bool
-    dpdk_tarball_path: Path
-    compile_timeout: float
-    test_cases: list
-    re_run: int
+@dataclass(slots=True)
+class Settings:
+    config_file_path: Path = Path(__file__).parent.parent.joinpath("conf.yaml")
+    output_dir: str = "output"
+    timeout: float = 15
+    verbose: bool = False
+    skip_setup: bool = False
+    dpdk_tarball_path: Path | str = "dpdk.tar.xz"
+    compile_timeout: float = 1200
+    test_cases: list[str] = field(default_factory=list)
+    re_run: int = 0
+
+
+SETTINGS: Settings = Settings()
 
 
 def _get_parser() -> argparse.ArgumentParser:
@@ -80,7 +92,8 @@ def _get_parser() -> argparse.ArgumentParser:
     parser.add_argument(
         "--config-file",
         action=_env_arg("DTS_CFG_FILE"),
-        default="conf.yaml",
+        default=SETTINGS.config_file_path,
+        type=Path,
         help="[DTS_CFG_FILE] configuration file that describes the test cases, SUTs and targets.",
     )
 
@@ -88,7 +101,7 @@ def _get_parser() -> argparse.ArgumentParser:
         "--output-dir",
         "--output",
         action=_env_arg("DTS_OUTPUT_DIR"),
-        default="output",
+        default=SETTINGS.output_dir,
         help="[DTS_OUTPUT_DIR] Output directory where dts logs and results are saved.",
     )
 
@@ -96,7 +109,7 @@ def _get_parser() -> argparse.ArgumentParser:
         "-t",
         "--timeout",
         action=_env_arg("DTS_TIMEOUT"),
-        default=15,
+        default=SETTINGS.timeout,
         type=float,
         help="[DTS_TIMEOUT] The default timeout for all DTS operations except for compiling DPDK.",
     )
 
@@ -105,8 +118,9 @@ def _get_parser() -> argparse.ArgumentParser:
         "-v",
         "--verbose",
         action=_env_arg("DTS_VERBOSE"),
-        default="N",
-        help="[DTS_VERBOSE] Set to 'Y' to enable verbose output, logging all messages "
+        default=SETTINGS.verbose,
+        const=True,
+        help="[DTS_VERBOSE] Specify to enable verbose output, logging all messages "
         "to the console.",
     )
 
@@ -114,8 +128,8 @@ def _get_parser() -> argparse.ArgumentParser:
         "-s",
         "--skip-setup",
         action=_env_arg("DTS_SKIP_SETUP"),
-        default="N",
-        help="[DTS_SKIP_SETUP] Set to 'Y' to skip all setup steps on SUT and TG nodes.",
+        const=True,
+        help="[DTS_SKIP_SETUP] Specify to skip all setup steps on SUT and TG nodes.",
     )
 
     parser.add_argument(
@@ -123,7 +137,7 @@ def _get_parser() -> argparse.ArgumentParser:
         "--snapshot",
         "--git-ref",
         action=_env_arg("DTS_DPDK_TARBALL"),
-        default="dpdk.tar.xz",
+        default=SETTINGS.dpdk_tarball_path,
         type=Path,
         help="[DTS_DPDK_TARBALL] Path to DPDK source code tarball or a git commit ID, "
         "tag ID or tree ID to test. To test local changes, first commit them, "
@@ -133,7 +147,7 @@ def _get_parser() -> argparse.ArgumentParser:
     parser.add_argument(
         "--compile-timeout",
         action=_env_arg("DTS_COMPILE_TIMEOUT"),
-        default=1200,
+        default=SETTINGS.compile_timeout,
         type=float,
         help="[DTS_COMPILE_TIMEOUT] The timeout for compiling DPDK.",
     )
 
@@ -150,7 +164,7 @@ def _get_parser() -> argparse.ArgumentParser:
         "--re-run",
         "--re_run",
         action=_env_arg("DTS_RERUN"),
-        default=0,
+        default=SETTINGS.re_run,
         type=int,
         help="[DTS_RERUN] Re-run each test case the specified amount of times "
         "if a test failure occurs",
@@ -159,21 +173,20 @@ def _get_parser() -> argparse.ArgumentParser:
     return parser
 
 
-def _get_settings() -> _Settings:
+def get_settings() -> Settings:
     parsed_args = _get_parser().parse_args()
-    return _Settings(
+    return Settings(
         config_file_path=parsed_args.config_file,
         output_dir=parsed_args.output_dir,
         timeout=parsed_args.timeout,
-        verbose=(parsed_args.verbose == "Y"),
-        skip_setup=(parsed_args.skip_setup == "Y"),
-        dpdk_tarball_path=Path(DPDKGitTarball(parsed_args.tarball, parsed_args.output_dir))
-        if not os.path.exists(parsed_args.tarball)
-        else Path(parsed_args.tarball),
+        verbose=parsed_args.verbose,
+        skip_setup=parsed_args.skip_setup,
+        dpdk_tarball_path=Path(
+            Path(DPDKGitTarball(parsed_args.tarball, parsed_args.output_dir))
+            if not os.path.exists(parsed_args.tarball)
+            else Path(parsed_args.tarball)
+        ),
         compile_timeout=parsed_args.compile_timeout,
-        test_cases=parsed_args.test_cases.split(",") if parsed_args.test_cases else [],
+        test_cases=(parsed_args.test_cases.split(",") if parsed_args.test_cases else []),
         re_run=parsed_args.re_run,
     )
-
-
-SETTINGS: _Settings = _get_settings()
diff --git a/dts/framework/test_result.py b/dts/framework/test_result.py
index 4c2e7e2418..57090feb04 100644
--- a/dts/framework/test_result.py
+++ b/dts/framework/test_result.py
@@ -246,7 +246,7 @@ def add_build_target(self, build_target: BuildTargetConfiguration) -> BuildTargetResult:
         self._inner_results.append(build_target_result)
         return build_target_result
 
-    def add_sut_info(self, sut_info: NodeInfo):
+    def add_sut_info(self, sut_info: NodeInfo) -> None:
         self.sut_os_name = sut_info.os_name
         self.sut_os_version = sut_info.os_version
         self.sut_kernel_version = sut_info.kernel_version
@@ -289,7 +289,7 @@ def add_execution(self, sut_node: NodeConfiguration) -> ExecutionResult:
         self._inner_results.append(execution_result)
         return execution_result
 
-    def add_error(self, error) -> None:
+    def add_error(self, error: Exception) -> None:
         self._errors.append(error)
 
     def process(self) -> None:
diff --git a/dts/framework/test_suite.py b/dts/framework/test_suite.py
index 4a7907ec33..f9e66e814a 100644
--- a/dts/framework/test_suite.py
+++ b/dts/framework/test_suite.py
@@ -11,7 +11,7 @@
 import re
 from ipaddress import IPv4Interface, IPv6Interface, ip_interface
 from types import MethodType
-from typing import Union
+from typing import Any, Union
 
 from scapy.layers.inet import IP  # type: ignore[import]
 from scapy.layers.l2 import Ether  # type: ignore[import]
@@ -26,8 +26,7 @@
 from .logger import DTSLOG, getLogger
 from .settings import SETTINGS
 from .test_result import BuildTargetResult, Result, TestCaseResult, TestSuiteResult
-from .testbed_model import SutNode, TGNode
-from .testbed_model.hw.port import Port, PortLink
+from .testbed_model import Port, PortLink, SutNode, TGNode
 from .utils import get_packet_summaries
 
@@ -426,7 +425,7 @@ def _execute_test_case(
 
 def get_test_suites(testsuite_module_path: str) -> list[type[TestSuite]]:
-    def is_test_suite(object) -> bool:
+    def is_test_suite(object: Any) -> bool:
         try:
             if issubclass(object, TestSuite) and object is not TestSuite:
                 return True
diff --git a/dts/framework/testbed_model/__init__.py b/dts/framework/testbed_model/__init__.py
index 5cbb859e47..8ced05653b 100644
--- a/dts/framework/testbed_model/__init__.py
+++ b/dts/framework/testbed_model/__init__.py
@@ -9,15 +9,9 @@
 
 # pylama:ignore=W0611
 
-from .hw import (
-    LogicalCore,
-    LogicalCoreCount,
-    LogicalCoreCountFilter,
-    LogicalCoreList,
-    LogicalCoreListFilter,
-    VirtualDevice,
-    lcore_filter,
-)
+from .cpu import LogicalCoreCount, LogicalCoreCountFilter, LogicalCoreList
 from .node import Node
+from .port import Port, PortLink
 from .sut_node import SutNode
 from .tg_node import TGNode
+from .virtual_device import VirtualDevice
diff --git a/dts/framework/testbed_model/hw/cpu.py b/dts/framework/testbed_model/cpu.py
similarity index 95%
rename from dts/framework/testbed_model/hw/cpu.py
rename to dts/framework/testbed_model/cpu.py
index cbc5fe7fff..1b392689f5 100644
--- a/dts/framework/testbed_model/hw/cpu.py
+++ b/dts/framework/testbed_model/cpu.py
@@ -262,3 +262,16 @@ def filter(self) -> list[LogicalCore]:
         )
 
         return filtered_lcores
+
+
+def lcore_filter(
+    core_list: list[LogicalCore],
+    filter_specifier: LogicalCoreCount | LogicalCoreList,
+    ascending: bool,
+) -> LogicalCoreFilter:
+    if isinstance(filter_specifier, LogicalCoreList):
+        return LogicalCoreListFilter(core_list, filter_specifier, ascending)
+    elif isinstance(filter_specifier, LogicalCoreCount):
+        return LogicalCoreCountFilter(core_list, filter_specifier, ascending)
+    else:
+        raise ValueError(f"Unsupported filter r{filter_specifier}")
diff --git a/dts/framework/testbed_model/hw/__init__.py b/dts/framework/testbed_model/hw/__init__.py
deleted file mode 100644
index 88ccac0b0e..0000000000
--- a/dts/framework/testbed_model/hw/__init__.py
+++ /dev/null
@@ -1,27 +0,0 @@
-# SPDX-License-Identifier: BSD-3-Clause
-# Copyright(c) 2023 PANTHEON.tech s.r.o.
-
-# pylama:ignore=W0611
-
-from .cpu import (
-    LogicalCore,
-    LogicalCoreCount,
-    LogicalCoreCountFilter,
-    LogicalCoreFilter,
-    LogicalCoreList,
-    LogicalCoreListFilter,
-)
-from .virtual_device import VirtualDevice
-
-
-def lcore_filter(
-    core_list: list[LogicalCore],
-    filter_specifier: LogicalCoreCount | LogicalCoreList,
-    ascending: bool,
-) -> LogicalCoreFilter:
-    if isinstance(filter_specifier, LogicalCoreList):
-        return LogicalCoreListFilter(core_list, filter_specifier, ascending)
-    elif isinstance(filter_specifier, LogicalCoreCount):
-        return LogicalCoreCountFilter(core_list, filter_specifier, ascending)
-    else:
-        raise ValueError(f"Unsupported filter r{filter_specifier}")
diff --git a/dts/framework/remote_session/linux_session.py b/dts/framework/testbed_model/linux_session.py
similarity index 97%
rename from dts/framework/remote_session/linux_session.py
rename to dts/framework/testbed_model/linux_session.py
index fd877fbfae..055765ba2d 100644
--- a/dts/framework/remote_session/linux_session.py
+++ b/dts/framework/testbed_model/linux_session.py
@@ -9,10 +9,10 @@
 from typing_extensions import NotRequired
 
 from framework.exception import RemoteCommandExecutionError
-from framework.testbed_model import LogicalCore
-from framework.testbed_model.hw.port import Port
 from framework.utils import expand_range
 
+from .cpu import LogicalCore
+from .port import Port
 from .posix_session import PosixSession
 
@@ -64,7 +64,7 @@ def get_remote_cpus(self, use_first_core: bool) -> list[LogicalCore]:
             lcores.append(LogicalCore(lcore, core, socket, node))
         return lcores
 
-    def get_dpdk_file_prefix(self, dpdk_prefix) -> str:
+    def get_dpdk_file_prefix(self, dpdk_prefix: str) -> str:
         return dpdk_prefix
 
     def setup_hugepages(self, hugepage_amount: int, force_first_numa: bool) -> None:
diff --git a/dts/framework/testbed_model/node.py b/dts/framework/testbed_model/node.py
index ef700d8114..b313b5ad54 100644
--- a/dts/framework/testbed_model/node.py
+++ b/dts/framework/testbed_model/node.py
@@ -12,23 +12,26 @@
 from typing import Any, Callable, Type, Union
 
 from framework.config import (
+    OS,
     BuildTargetConfiguration,
     ExecutionConfiguration,
     NodeConfiguration,
 )
+from framework.exception import ConfigurationError
 from framework.logger import DTSLOG, getLogger
-from framework.remote_session import InteractiveShellType, OSSession, create_session
 from framework.settings import SETTINGS
 
-from .hw import (
+from .cpu import (
     LogicalCore,
     LogicalCoreCount,
     LogicalCoreList,
     LogicalCoreListFilter,
-    VirtualDevice,
     lcore_filter,
 )
-from .hw.port import Port
+from .linux_session import LinuxSession
+from .os_session import InteractiveShellType, OSSession
+from .port import Port
+from .virtual_device import VirtualDevice
 
 
 class Node(ABC):
@@ -168,9 +171,9 @@ def create_interactive_shell(
 
         return self.main_session.create_interactive_shell(
             shell_cls,
-            app_args,
             timeout,
             privileged,
+            app_args,
         )
 
     def filter_lcores(
@@ -201,7 +204,7 @@ def _get_remote_cpus(self) -> None:
         self._logger.info("Getting CPU information.")
         self.lcores = self.main_session.get_remote_cpus(self.config.use_first_core)
 
-    def _setup_hugepages(self):
+    def _setup_hugepages(self) -> None:
         """
         Setup hugepages on the Node.
Different architectures can supply different amounts of memory for hugepages and numa-based hugepage allocation may need @@ -245,3 +248,11 @@ def skip_setup(func: Callable[..., Any]) -> Callable[..., Any]: return lambda *args: None else: return func + + +def create_session(node_config: NodeConfiguration, name: str, logger: DTSLOG) -> OSSession: + match node_config.os: + case OS.linux: + return LinuxSession(node_config, name, logger) + case _: + raise ConfigurationError(f"Unsupported OS {node_config.os}") diff --git a/dts/framework/remote_session/os_session.py b/dts/framework/testbed_model/os_session.py similarity index 95% rename from dts/framework/remote_session/os_session.py rename to dts/framework/testbed_model/os_session.py index 8a709eac1c..76e595a518 100644 --- a/dts/framework/remote_session/os_session.py +++ b/dts/framework/testbed_model/os_session.py @@ -10,19 +10,19 @@ from framework.config import Architecture, NodeConfiguration, NodeInfo from framework.logger import DTSLOG -from framework.remote_session.remote import InteractiveShell -from framework.settings import SETTINGS -from framework.testbed_model import LogicalCore -from framework.testbed_model.hw.port import Port -from framework.utils import MesonArgs - -from .remote import ( +from framework.remote_session import ( CommandResult, InteractiveRemoteSession, + InteractiveShell, RemoteSession, create_interactive_session, create_remote_session, ) +from framework.settings import SETTINGS +from framework.utils import MesonArgs + +from .cpu import LogicalCore +from .port import Port InteractiveShellType = TypeVar("InteractiveShellType", bound=InteractiveShell) @@ -85,9 +85,9 @@ def send_command( def create_interactive_shell( self, shell_cls: Type[InteractiveShellType], - eal_parameters: str, timeout: float, privileged: bool, + app_args: str, ) -> InteractiveShellType: """ See "create_interactive_shell" in SutNode @@ -96,7 +96,7 @@ def create_interactive_shell( self.interactive_session.session, 
self._logger, self._get_privileged_command if privileged else None, - eal_parameters, + app_args, timeout, ) @@ -113,7 +113,7 @@ def _get_privileged_command(command: str) -> str: """ @abstractmethod - def guess_dpdk_remote_dir(self, remote_dir) -> PurePath: + def guess_dpdk_remote_dir(self, remote_dir: str | PurePath) -> PurePath: """ Try to find DPDK remote dir in remote_dir. """ @@ -227,7 +227,7 @@ def kill_cleanup_dpdk_apps(self, dpdk_prefix_list: Iterable[str]) -> None: """ @abstractmethod - def get_dpdk_file_prefix(self, dpdk_prefix) -> str: + def get_dpdk_file_prefix(self, dpdk_prefix: str) -> str: """ Get the DPDK file prefix that will be used when running DPDK apps. """ diff --git a/dts/framework/testbed_model/hw/port.py b/dts/framework/testbed_model/port.py similarity index 100% rename from dts/framework/testbed_model/hw/port.py rename to dts/framework/testbed_model/port.py diff --git a/dts/framework/remote_session/posix_session.py b/dts/framework/testbed_model/posix_session.py similarity index 98% rename from dts/framework/remote_session/posix_session.py rename to dts/framework/testbed_model/posix_session.py index a29e2e8280..5657cc0bc9 100644 --- a/dts/framework/remote_session/posix_session.py +++ b/dts/framework/testbed_model/posix_session.py @@ -32,7 +32,7 @@ def combine_short_options(**opts: bool) -> str: return ret_opts - def guess_dpdk_remote_dir(self, remote_dir) -> PurePosixPath: + def guess_dpdk_remote_dir(self, remote_dir: str | PurePath) -> PurePosixPath: remote_guess = self.join_remote_path(remote_dir, "dpdk-*") result = self.send_command(f"ls -d {remote_guess} | tail -1") return PurePosixPath(result.stdout) @@ -207,7 +207,7 @@ def _remove_dpdk_runtime_dirs(self, dpdk_runtime_dirs: Iterable[str | PurePath]) for dpdk_runtime_dir in dpdk_runtime_dirs: self.remove_remote_dir(dpdk_runtime_dir) - def get_dpdk_file_prefix(self, dpdk_prefix) -> str: + def get_dpdk_file_prefix(self, dpdk_prefix: str) -> str: return "" def get_compiler_version(self, 
compiler_name: str) -> str: diff --git a/dts/framework/testbed_model/sut_node.py b/dts/framework/testbed_model/sut_node.py index 7f75043bd3..5ce9446dba 100644 --- a/dts/framework/testbed_model/sut_node.py +++ b/dts/framework/testbed_model/sut_node.py @@ -15,12 +15,14 @@ NodeInfo, SutNodeConfiguration, ) -from framework.remote_session import CommandResult, InteractiveShellType, OSSession +from framework.remote_session import CommandResult from framework.settings import SETTINGS from framework.utils import MesonArgs -from .hw import LogicalCoreCount, LogicalCoreList, VirtualDevice +from .cpu import LogicalCoreCount, LogicalCoreList from .node import Node +from .os_session import InteractiveShellType, OSSession +from .virtual_device import VirtualDevice class EalParameters(object): @@ -293,7 +295,7 @@ def create_eal_parameters( prefix: str = "dpdk", append_prefix_timestamp: bool = True, no_pci: bool = False, - vdevs: list[VirtualDevice] = None, + vdevs: list[VirtualDevice] | None = None, other_eal_param: str = "", ) -> "EalParameters": """ diff --git a/dts/framework/testbed_model/tg_node.py b/dts/framework/testbed_model/tg_node.py index 79a55663b5..8a8f0019f3 100644 --- a/dts/framework/testbed_model/tg_node.py +++ b/dts/framework/testbed_model/tg_node.py @@ -16,16 +16,11 @@ from scapy.packet import Packet # type: ignore[import] -from framework.config import ( - ScapyTrafficGeneratorConfig, - TGNodeConfiguration, - TrafficGeneratorType, -) -from framework.exception import ConfigurationError - -from .capturing_traffic_generator import CapturingTrafficGenerator -from .hw.port import Port +from framework.config import TGNodeConfiguration + from .node import Node +from .port import Port +from .traffic_generator import CapturingTrafficGenerator, create_traffic_generator class TGNode(Node): @@ -78,19 +73,3 @@ def close(self) -> None: """Free all resources used by the node""" self.traffic_generator.close() super(TGNode, self).close() - - -def create_traffic_generator( - 
tg_node: TGNode, traffic_generator_config: ScapyTrafficGeneratorConfig -) -> CapturingTrafficGenerator: - """A factory function for creating traffic generator object from user config.""" - - from .scapy import ScapyTrafficGenerator - - match traffic_generator_config.traffic_generator_type: - case TrafficGeneratorType.SCAPY: - return ScapyTrafficGenerator(tg_node, traffic_generator_config) - case _: - raise ConfigurationError( - f"Unknown traffic generator: {traffic_generator_config.traffic_generator_type}" - ) diff --git a/dts/framework/testbed_model/traffic_generator/__init__.py b/dts/framework/testbed_model/traffic_generator/__init__.py new file mode 100644 index 0000000000..52888d03fa --- /dev/null +++ b/dts/framework/testbed_model/traffic_generator/__init__.py @@ -0,0 +1,23 @@ +# SPDX-License-Identifier: BSD-3-Clause +# Copyright(c) 2023 PANTHEON.tech s.r.o. + +from framework.config import ScapyTrafficGeneratorConfig, TrafficGeneratorType +from framework.exception import ConfigurationError +from framework.testbed_model.node import Node + +from .capturing_traffic_generator import CapturingTrafficGenerator +from .scapy import ScapyTrafficGenerator + + +def create_traffic_generator( + tg_node: Node, traffic_generator_config: ScapyTrafficGeneratorConfig +) -> CapturingTrafficGenerator: + """A factory function for creating traffic generator object from user config.""" + + match traffic_generator_config.traffic_generator_type: + case TrafficGeneratorType.SCAPY: + return ScapyTrafficGenerator(tg_node, traffic_generator_config) + case _: + raise ConfigurationError( + f"Unknown traffic generator: {traffic_generator_config.traffic_generator_type}" + ) diff --git a/dts/framework/testbed_model/capturing_traffic_generator.py b/dts/framework/testbed_model/traffic_generator/capturing_traffic_generator.py similarity index 98% rename from dts/framework/testbed_model/capturing_traffic_generator.py rename to
dts/framework/testbed_model/traffic_generator/capturing_traffic_generator.py index e6512061d7..1fc7f98c05 100644 --- a/dts/framework/testbed_model/capturing_traffic_generator.py +++ b/dts/framework/testbed_model/traffic_generator/capturing_traffic_generator.py @@ -16,9 +16,9 @@ from scapy.packet import Packet # type: ignore[import] from framework.settings import SETTINGS +from framework.testbed_model.port import Port from framework.utils import get_packet_summaries -from .hw.port import Port from .traffic_generator import TrafficGenerator @@ -127,7 +127,7 @@ def _send_packets_and_capture( for the specified duration. It must be able to handle no received packets. """ - def _write_capture_from_packets(self, capture_name: str, packets: list[Packet]): + def _write_capture_from_packets(self, capture_name: str, packets: list[Packet]) -> None: file_name = f"{SETTINGS.output_dir}/{capture_name}.pcap" self._logger.debug(f"Writing packets to {file_name}.") scapy.utils.wrpcap(file_name, packets) diff --git a/dts/framework/testbed_model/scapy.py b/dts/framework/testbed_model/traffic_generator/scapy.py similarity index 95% rename from dts/framework/testbed_model/scapy.py rename to dts/framework/testbed_model/traffic_generator/scapy.py index 9083e92b3d..c88cf28369 100644 --- a/dts/framework/testbed_model/scapy.py +++ b/dts/framework/testbed_model/traffic_generator/scapy.py @@ -24,16 +24,15 @@ from scapy.packet import Packet # type: ignore[import] from framework.config import OS, ScapyTrafficGeneratorConfig -from framework.logger import DTSLOG, getLogger from framework.remote_session import PythonShell from framework.settings import SETTINGS +from framework.testbed_model.node import Node +from framework.testbed_model.port import Port from .capturing_traffic_generator import ( CapturingTrafficGenerator, _get_default_capture_name, ) -from .hw.port import Port -from .tg_node import TGNode """ ========= BEGIN RPC FUNCTIONS ========= @@ -144,7 +143,7 @@ def quit(self) -> None: 
self._BaseServer__shutdown_request = True return None - def add_rpc_function(self, name: str, function_bytes: xmlrpc.client.Binary): + def add_rpc_function(self, name: str, function_bytes: xmlrpc.client.Binary) -> None: """Add a function to the server. This is meant to be executed remotely. @@ -189,13 +188,9 @@ class ScapyTrafficGenerator(CapturingTrafficGenerator): session: PythonShell rpc_server_proxy: xmlrpc.client.ServerProxy _config: ScapyTrafficGeneratorConfig - _tg_node: TGNode - _logger: DTSLOG - def __init__(self, tg_node: TGNode, config: ScapyTrafficGeneratorConfig): - self._config = config - self._tg_node = tg_node - self._logger = getLogger(f"{self._tg_node.name} {self._config.traffic_generator_type}") + def __init__(self, tg_node: Node, config: ScapyTrafficGeneratorConfig): + super().__init__(tg_node, config) assert ( self._tg_node.config.os == OS.linux @@ -229,7 +224,7 @@ def __init__(self, tg_node: TGNode, config: ScapyTrafficGeneratorConfig): function_bytes = marshal.dumps(function.__code__) self.rpc_server_proxy.add_rpc_function(function.__name__, function_bytes) - def _start_xmlrpc_server_in_remote_python(self, listen_port: int): + def _start_xmlrpc_server_in_remote_python(self, listen_port: int) -> None: # load the source of the function src = inspect.getsource(QuittableXMLRPCServer) # Lines with only whitespace break the repl if in the middle of a function @@ -271,7 +266,7 @@ def _send_packets_and_capture( scapy_packets = [Ether(packet.data) for packet in xmlrpc_packets] return scapy_packets - def close(self): + def close(self) -> None: try: self.rpc_server_proxy.quit() except ConnectionRefusedError: diff --git a/dts/framework/testbed_model/traffic_generator.py b/dts/framework/testbed_model/traffic_generator/traffic_generator.py similarity index 81% rename from dts/framework/testbed_model/traffic_generator.py rename to dts/framework/testbed_model/traffic_generator/traffic_generator.py index 28c35d3ce4..0d9902ddb7 100644 --- 
a/dts/framework/testbed_model/traffic_generator.py +++ b/dts/framework/testbed_model/traffic_generator/traffic_generator.py @@ -12,11 +12,12 @@ from scapy.packet import Packet # type: ignore[import] -from framework.logger import DTSLOG +from framework.config import TrafficGeneratorConfig +from framework.logger import DTSLOG, getLogger +from framework.testbed_model.node import Node +from framework.testbed_model.port import Port from framework.utils import get_packet_summaries -from .hw.port import Port - class TrafficGenerator(ABC): """The base traffic generator. @@ -24,8 +25,15 @@ class TrafficGenerator(ABC): Defines the few basic methods that each traffic generator must implement. """ + _config: TrafficGeneratorConfig + _tg_node: Node _logger: DTSLOG + def __init__(self, tg_node: Node, config: TrafficGeneratorConfig): + self._config = config + self._tg_node = tg_node + self._logger = getLogger(f"{self._tg_node.name} {self._config.traffic_generator_type}") + def send_packet(self, packet: Packet, port: Port) -> None: """Send a packet and block until it is fully sent. 
diff --git a/dts/framework/testbed_model/hw/virtual_device.py b/dts/framework/testbed_model/virtual_device.py similarity index 100% rename from dts/framework/testbed_model/hw/virtual_device.py rename to dts/framework/testbed_model/virtual_device.py diff --git a/dts/framework/utils.py b/dts/framework/utils.py index d098d364ff..a0f2173949 100644 --- a/dts/framework/utils.py +++ b/dts/framework/utils.py @@ -7,7 +7,6 @@ import json import os import subprocess -import sys from enum import Enum from pathlib import Path from subprocess import SubprocessError @@ -16,31 +15,7 @@ from .exception import ConfigurationError - -class StrEnum(Enum): - @staticmethod - def _generate_next_value_(name: str, start: int, count: int, last_values: object) -> str: - return name - - def __str__(self) -> str: - return self.name - - -REGEX_FOR_PCI_ADDRESS = "/[0-9a-fA-F]{4}:[0-9a-fA-F]{2}:[0-9a-fA-F]{2}.[0-9]{1}/" - - -def check_dts_python_version() -> None: - if sys.version_info.major < 3 or (sys.version_info.major == 3 and sys.version_info.minor < 10): - print( - RED( - ( - "WARNING: DTS execution node's python version is lower than" - "python 3.10, is deprecated and will not work in future releases." 
- ) - ), - file=sys.stderr, - ) - print(RED("Please use Python >= 3.10 instead"), file=sys.stderr) +REGEX_FOR_PCI_ADDRESS: str = "/[0-9a-fA-F]{4}:[0-9a-fA-F]{2}:[0-9a-fA-F]{2}.[0-9]{1}/" def expand_range(range_str: str) -> list[int]: @@ -61,7 +36,7 @@ def expand_range(range_str: str) -> list[int]: return expanded_range -def get_packet_summaries(packets: list[Packet]): +def get_packet_summaries(packets: list[Packet]) -> str: if len(packets) == 1: packet_summaries = packets[0].summary() else: @@ -69,8 +44,13 @@ def get_packet_summaries(packets: list[Packet]): return f"Packet contents: \n{packet_summaries}" -def RED(text: str) -> str: - return f"\u001B[31;1m{str(text)}\u001B[0m" +class StrEnum(Enum): + @staticmethod + def _generate_next_value_(name: str, start: int, count: int, last_values: object) -> str: + return name + + def __str__(self) -> str: + return self.name class MesonArgs(object): @@ -215,5 +195,5 @@ def _delete_tarball(self) -> None: if self._tarball_path and os.path.exists(self._tarball_path): os.remove(self._tarball_path) - def __fspath__(self): + def __fspath__(self) -> str: return str(self._tarball_path) diff --git a/dts/main.py b/dts/main.py index 43311fa847..5d4714b0c3 100755 --- a/dts/main.py +++ b/dts/main.py @@ -10,10 +10,17 @@ import logging -from framework import dts +from framework import settings def main() -> None: + """Set DTS settings, then run DTS. + + The DTS settings are taken from the command line arguments and the environment variables. 
+ """ + settings.SETTINGS = settings.get_settings() + from framework import dts + dts.run_all() From patchwork Mon Dec 4 10:24:10 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Juraj_Linke=C5=A1?= X-Patchwork-Id: 134790 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 8B7764366A; Mon, 4 Dec 2023 11:24:44 +0100 (CET) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 34C6040E36; Mon, 4 Dec 2023 11:24:35 +0100 (CET) Received: from mail-wm1-f51.google.com (mail-wm1-f51.google.com [209.85.128.51]) by mails.dpdk.org (Postfix) with ESMTP id 3881640DF5 for ; Mon, 4 Dec 2023 11:24:33 +0100 (CET) Received: by mail-wm1-f51.google.com with SMTP id 5b1f17b1804b1-40c09f5a7cfso11129565e9.0 for ; Mon, 04 Dec 2023 02:24:33 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=pantheon.tech; s=google; t=1701685473; x=1702290273; darn=dpdk.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=Kn/2U/8Wo1A2Bfu3Vot6EMFi0yZ63e9DkWj/pMBSXyA=; b=JDOL/0ifLfO6hT+RildAXuTPWImFsub/4Zn2Qt0+Cj7ziZ1CgxzJDznhuzMWi+RkQf TM4CB77tCZvRLRDHxHpzP9NNZR9DZQALtv/sjUMFq6ke5vCcfF8JIkLBPkxUt5lySbkN 2uQrxNq1d5DtoTPpy+BSf1uQO+YjNmT6iqCHhF3mACCPHeCSwC10V6IwlrVuszt6Um4Y 3xpB8el2K1Hbf/QuQSUNVjgqrpccsxXxsatZ3zSEmWmtWhLt4GI1Mam6RNEqZWj2Uxw0 LGrkr/9zywTWnrKnBDJpboekklmnykTl5m2ejpFZ3Bd6rCvGptFg1/qiqISVAEukHc/P vf3A== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1701685473; x=1702290273; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; 
bh=Kn/2U/8Wo1A2Bfu3Vot6EMFi0yZ63e9DkWj/pMBSXyA=; b=UvDL5eLl4NPfb/6ohPCmfBd7Eac5SK8sXPCix86sjmJOBRvI4PpMrh3Go1aTAlkVVn x+ndbboWk3kIzmFfi2VcODROjWoN2FvKgIWlVyBTbNz4dKSUQVx5ZfCfuA5qd/1YPEWa pV2efk4NZIArsXKfJVB7m0Sduse7N3EfeKU6RC61ss0oURr2C/+mCJ3UcG9wUKBmG6fD GmYiG0bRElURYOlucFa654W6Gt4u7BlV6XKcfQNGnNSDQYv7IIn33UCkAdqXRH/vDhOQ 3wwj5p20tes7OfSuL9KWIgOCJrnU2mXyJ5GMLLY8unA9qD4btdYrRpflL9457u6HAQyH vuPg== X-Gm-Message-State: AOJu0Yxy3JEoEHeKb+eubycB5Aki8fnZRktVmR50XurTlkxP+J/J6zRw hS1DxtVMriQF2B3v4Luuh8wnFA== X-Google-Smtp-Source: AGHT+IFJ1ndvJdHijXOAbbFaBjJogLg35crCfDVc6Rhhd2FtPQ9G4ZhSfaP25Ob/LlfXAOVcK2FpLw== X-Received: by 2002:a05:600c:3115:b0:40b:5e21:ec0e with SMTP id g21-20020a05600c311500b0040b5e21ec0emr2107129wmo.64.1701685472861; Mon, 04 Dec 2023 02:24:32 -0800 (PST) Received: from jlinkes-PT-Latitude-5530.pantheon.local ([81.89.53.154]) by smtp.gmail.com with ESMTPSA id m28-20020a05600c3b1c00b0040b2b38a1fasm14255415wms.4.2023.12.04.02.24.32 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 04 Dec 2023 02:24:32 -0800 (PST) From: =?utf-8?q?Juraj_Linke=C5=A1?= To: thomas@monjalon.net, Honnappa.Nagarahalli@arm.com, jspewock@iol.unh.edu, probb@iol.unh.edu, paul.szczepanek@arm.com, yoan.picchi@foss.arm.com, Luca.Vizzarro@arm.com Cc: dev@dpdk.org, =?utf-8?q?Juraj_Linke=C5=A1?= Subject: [PATCH v9 02/21] dts: add docstring checker Date: Mon, 4 Dec 2023 11:24:10 +0100 Message-Id: <20231204102429.106709-3-juraj.linkes@pantheon.tech> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20231204102429.106709-1-juraj.linkes@pantheon.tech> References: <20231123151344.162812-1-juraj.linkes@pantheon.tech> <20231204102429.106709-1-juraj.linkes@pantheon.tech> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Python docstrings are the in-code way to document the code. 
The docstring checker of choice is pydocstyle which we're executing from Pylama, but the current latest versions are not compatible due to [0], so pin the pydocstyle version to the latest working version. [0] https://github.com/klen/pylama/issues/232 Signed-off-by: Juraj Linkeš --- dts/poetry.lock | 12 ++++++------ dts/pyproject.toml | 6 +++++- 2 files changed, 11 insertions(+), 7 deletions(-) diff --git a/dts/poetry.lock b/dts/poetry.lock index f7b3b6d602..a734fa71f0 100644 --- a/dts/poetry.lock +++ b/dts/poetry.lock @@ -489,20 +489,20 @@ files = [ [[package]] name = "pydocstyle" -version = "6.3.0" +version = "6.1.1" description = "Python docstring style checker" optional = false python-versions = ">=3.6" files = [ - {file = "pydocstyle-6.3.0-py3-none-any.whl", hash = "sha256:118762d452a49d6b05e194ef344a55822987a462831ade91ec5c06fd2169d019"}, - {file = "pydocstyle-6.3.0.tar.gz", hash = "sha256:7ce43f0c0ac87b07494eb9c0b462c0b73e6ff276807f204d6b53edc72b7e44e1"}, + {file = "pydocstyle-6.1.1-py3-none-any.whl", hash = "sha256:6987826d6775056839940041beef5c08cc7e3d71d63149b48e36727f70144dc4"}, + {file = "pydocstyle-6.1.1.tar.gz", hash = "sha256:1d41b7c459ba0ee6c345f2eb9ae827cab14a7533a88c5c6f7e94923f72df92dc"}, ] [package.dependencies] -snowballstemmer = ">=2.2.0" +snowballstemmer = "*" [package.extras] -toml = ["tomli (>=1.2.3)"] +toml = ["toml"] [[package]] name = "pyflakes" @@ -837,4 +837,4 @@ jsonschema = ">=4,<5" [metadata] lock-version = "2.0" python-versions = "^3.10" -content-hash = "0b1e4a1cb8323e17e5ee5951c97e74bde6e60d0413d7b25b1803d5b2bab39639" +content-hash = "3501e97b3dadc19fe8ae179fe21b1edd2488001da9a8e86ff2bca0b86b99b89b" diff --git a/dts/pyproject.toml b/dts/pyproject.toml index 980ac3c7db..37a692d655 100644 --- a/dts/pyproject.toml +++ b/dts/pyproject.toml @@ -25,6 +25,7 @@ PyYAML = "^6.0" types-PyYAML = "^6.0.8" fabric = "^2.7.1" scapy = "^2.5.0" +pydocstyle = "6.1.1" [tool.poetry.group.dev.dependencies] mypy = "^0.961" @@ -39,10 +40,13 @@ requires =
["poetry-core>=1.0.0"] build-backend = "poetry.core.masonry.api" [tool.pylama] -linters = "mccabe,pycodestyle,pyflakes" +linters = "mccabe,pycodestyle,pydocstyle,pyflakes" format = "pylint" max_line_length = 100 +[tool.pylama.linter.pydocstyle] +convention = "google" + [tool.mypy] python_version = "3.10" enable_error_code = ["ignore-without-code"] From patchwork Mon Dec 4 10:24:11 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Juraj_Linke=C5=A1?= X-Patchwork-Id: 134791 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id AB6114366A; Mon, 4 Dec 2023 11:24:55 +0100 (CET) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id D0E4E40EDB; Mon, 4 Dec 2023 11:24:36 +0100 (CET) Received: from mail-wm1-f53.google.com (mail-wm1-f53.google.com [209.85.128.53]) by mails.dpdk.org (Postfix) with ESMTP id 031A640DFD for ; Mon, 4 Dec 2023 11:24:34 +0100 (CET) Received: by mail-wm1-f53.google.com with SMTP id 5b1f17b1804b1-40c09f5a7cfso11129755e9.0 for ; Mon, 04 Dec 2023 02:24:34 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=pantheon.tech; s=google; t=1701685473; x=1702290273; darn=dpdk.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=ZYiXW2Z88ESYnyFNqmTUgTLHumo8eKturZmLII8hMZI=; b=eNfjMMfB5HTPkS89EsXPMYZEn0EMkLhxjz8ppcgTfCu2eBmjOcA7uTRmZ3ZsmIyoLt mM385Rs9qymuwTilo7PcMyyjeHdcNGN8aIaas5EW09gHYmYIm1XTU6Eu05/vGwe5PUUJ oNNcYothAqgXpPPXiW8A0/YA7RUAvr9fxGheFnQH06Uev2zR4P63pkPCL+LUala5f88P bCqkJnYyyHc+Hddn2cGIxR1weaGF6KTUy6nFwKQI9HdJ8MDgkctZM/5fVBSc10O4cNY/ 0PpXv4NQ6xGjTSOLTPJ9c7kjs3G1iYKdbpI4drJlr4FHC+gVN+JzgwUGQWbw/zpSKyEU KAlA== X-Google-DKIM-Signature: 
v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1701685473; x=1702290273; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=ZYiXW2Z88ESYnyFNqmTUgTLHumo8eKturZmLII8hMZI=; b=EW4KdTR8pnbOXiG4upq/zDRGhIu7KJrQ7oWQDWAruudTe/Y8x/SVUVTPS3FtrF+E/T 7at3ufOevPhyVn0KQ8p1i/vaFr1Xa0/XPLNblRQXA1bGkQW+ED5MpkhLm6cR1CI71pNE +GMvHCzcDeNqdUL1y94G7saVgzPvmScg5sYWh07uRVLhpxExDYz7G5IKh2ZEh9CEEED4 Fcw+ON5JRU/vZbFoNC6s1RiTtm4tn/AII31bqoE7KaLsFnVrK3FH2vKWB6BPsvytbBf3 5goHipm2r7/+76rgPVUdgBMl7bm0phonGNymMqKRx392pffPkNYaiXJcDcXmZYv/NmNk t7zA== X-Gm-Message-State: AOJu0Yz1OhLMIAhnUk77olD0PbtR9mYR0X+HD61gBULwvS3qaY8lHLNo u55Os9QGAOQX95QWa554NAoeHXQ/XNqu8U8xiAPzxQ== X-Google-Smtp-Source: AGHT+IGgjtgagnPp0taT88/COgyUFNq9wHVVFcUXkW6c7nmCnMswgp8L4vb0MhUhEMX65voqKn7hnQ== X-Received: by 2002:a05:600c:3647:b0:40b:5e59:e9fe with SMTP id y7-20020a05600c364700b0040b5e59e9femr2587244wmq.157.1701685473725; Mon, 04 Dec 2023 02:24:33 -0800 (PST) Received: from jlinkes-PT-Latitude-5530.pantheon.local ([81.89.53.154]) by smtp.gmail.com with ESMTPSA id m28-20020a05600c3b1c00b0040b2b38a1fasm14255415wms.4.2023.12.04.02.24.32 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 04 Dec 2023 02:24:33 -0800 (PST) From: =?utf-8?q?Juraj_Linke=C5=A1?= To: thomas@monjalon.net, Honnappa.Nagarahalli@arm.com, jspewock@iol.unh.edu, probb@iol.unh.edu, paul.szczepanek@arm.com, yoan.picchi@foss.arm.com, Luca.Vizzarro@arm.com Cc: dev@dpdk.org, =?utf-8?q?Juraj_Linke=C5=A1?= Subject: [PATCH v9 03/21] dts: add basic developer docs Date: Mon, 4 Dec 2023 11:24:11 +0100 Message-Id: <20231204102429.106709-4-juraj.linkes@pantheon.tech> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20231204102429.106709-1-juraj.linkes@pantheon.tech> References: <20231123151344.162812-1-juraj.linkes@pantheon.tech> <20231204102429.106709-1-juraj.linkes@pantheon.tech> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org 
Expand the framework contribution guidelines and add how to document the code with Python docstrings. Signed-off-by: Juraj Linkeš --- doc/guides/tools/dts.rst | 73 ++++++++++++++++++++++++++++++++++++++++ 1 file changed, 73 insertions(+) diff --git a/doc/guides/tools/dts.rst b/doc/guides/tools/dts.rst index 32c18ee472..cd771a428c 100644 --- a/doc/guides/tools/dts.rst +++ b/doc/guides/tools/dts.rst @@ -264,6 +264,65 @@ which be changed with the ``--output-dir`` command line argument. The results contain basic statistics of passed/failed test cases and DPDK version. +Contributing to DTS +------------------- + +There are two areas of contribution: The DTS framework and DTS test suites. + +The framework contains the logic needed to run test cases, such as connecting to nodes, +running DPDK apps and collecting results. + +The test cases call APIs from the framework to test their scenarios. Adding test cases may +require adding code to the framework as well. + + +Framework Coding Guidelines +~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +When adding code to the DTS framework, pay attention to the rest of the code +and try not to divert much from it. The :ref:`DTS developer tools ` will issue +warnings when some of the basics are not met. + +The code must be properly documented with docstrings. The style must conform to +the `Google style `_. +See an example of the style +`here `_. +For cases which are not covered by the Google style, refer +to `PEP 257 `_. There are some cases which are not covered by +the two style guides, where we deviate or where some additional clarification is helpful: + + * The __init__() methods of classes are documented separately from the docstring of the class + itself.
+ * The docstrings of implemented abstract methods should refer to the superclass's definition + if there's no deviation. + * Instance variables/attributes should be documented in the docstring of the class + in the ``Attributes:`` section. + * The dataclass.dataclass decorator changes how the attributes are processed. The dataclass + attributes which result in instance variables/attributes should also be recorded + in the ``Attributes:`` section. + * Class variables/attributes, on the other hand, should be documented with ``#:`` above + the type annotated line. The description may be omitted if the meaning is obvious. + * The Enum and TypedDict also process the attributes in particular ways and should be documented + with ``#:`` as well. This is mainly so that the autogenerated docs contain the assigned value. + * When referencing a parameter of a function or a method in their docstring, don't use + any articles and put the parameter into single backticks. This mimics the style of + `Python's documentation `_. + * When specifying a value, use double backticks:: + + def foo(greet: bool) -> None: + """Demonstration of single and double backticks. + + `greet` controls whether ``Hello World`` is printed. + + Args: + greet: Whether to print the ``Hello World`` message. + """ + if greet: + print("Hello World") + + * The docstring maximum line length is the same as the code maximum line length. + + How To Write a Test Suite ------------------------- @@ -293,6 +352,18 @@ There are four types of methods that comprise a test suite: | These methods don't need to be implemented if there's no need for them in a test suite. In that case, nothing will happen when they're executed. +#. **Configuration, traffic and other logic** + + The ``TestSuite`` class contains a variety of methods for anything that + a test suite setup, a teardown, or a test case may need to do.
+ + The test suites also frequently use a DPDK app, such as testpmd, in interactive mode + and use the interactive shell instances directly. + + These are the two main ways to call the framework logic in test suites. If there's any + functionality or logic missing from the framework, it should be implemented so that + the test suites can use one of these two ways. + #. **Test case verification** Test case verification should be done with the ``verify`` method, which records the result. @@ -308,6 +379,8 @@ There are four types of methods that comprise a test suite: and used by the test suite via the ``sut_node`` field. + +.. _dts_dev_tools: + DTS Developer Tools ------------------- From patchwork Mon Dec 4 10:24:12 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Juraj_Linke=C5=A1?= X-Patchwork-Id: 134792 X-Patchwork-Delegate: thomas@monjalon.net From: =?utf-8?q?Juraj_Linke=C5=A1?= To: thomas@monjalon.net, Honnappa.Nagarahalli@arm.com, jspewock@iol.unh.edu, probb@iol.unh.edu, paul.szczepanek@arm.com, yoan.picchi@foss.arm.com, Luca.Vizzarro@arm.com Cc: dev@dpdk.org, =?utf-8?q?Juraj_Linke=C5=A1?= Subject: [PATCH v9 04/21] dts:
exceptions docstring update
Date: Mon, 4 Dec 2023 11:24:12 +0100
Message-Id: <20231204102429.106709-5-juraj.linkes@pantheon.tech>

Format according to the Google format and PEP257, with slight deviations. Signed-off-by: Juraj Linkeš --- dts/framework/__init__.py | 12 ++++- dts/framework/exception.py | 106 +++++++++++++++++++++++++------------ 2 files changed, 83 insertions(+), 35 deletions(-) diff --git a/dts/framework/__init__.py b/dts/framework/__init__.py index d551ad4bf0..662e6ccad2 100644 --- a/dts/framework/__init__.py +++ b/dts/framework/__init__.py @@ -1,3 +1,13 @@ # SPDX-License-Identifier: BSD-3-Clause -# Copyright(c) 2022 PANTHEON.tech s.r.o. +# Copyright(c) 2022-2023 PANTHEON.tech s.r.o. # Copyright(c) 2022 University of New Hampshire + +"""Libraries and utilities for running DPDK Test Suite (DTS). + +The various modules in the DTS framework offer: + +* Connections to nodes, both interactive and non-interactive, +* A straightforward way to add support for different operating systems of remote nodes, +* Test suite setup, execution and teardown, along with test case setup, execution and teardown, +* Pre-test suite setup and post-test suite teardown. +""" diff --git a/dts/framework/exception.py b/dts/framework/exception.py index 151e4d3aa9..658eee2c38 100644 --- a/dts/framework/exception.py +++ b/dts/framework/exception.py @@ -3,8 +3,10 @@ # Copyright(c) 2022-2023 PANTHEON.tech s.r.o. # Copyright(c) 2022-2023 University of New Hampshire -""" -User-defined exceptions used across the framework. +"""DTS exceptions.
+ +The exceptions all have different severities expressed as an integer. +The highest severity of all raised exceptions is used as the exit code of DTS. """ from enum import IntEnum, unique @@ -13,59 +15,79 @@ @unique class ErrorSeverity(IntEnum): - """ - The severity of errors that occur during DTS execution. + """The severity of errors that occur during DTS execution. + All exceptions are caught and the most severe error is used as return code. """ + #: NO_ERR = 0 + #: GENERIC_ERR = 1 + #: CONFIG_ERR = 2 + #: REMOTE_CMD_EXEC_ERR = 3 + #: SSH_ERR = 4 + #: DPDK_BUILD_ERR = 10 + #: TESTCASE_VERIFY_ERR = 20 + #: BLOCKING_TESTSUITE_ERR = 25 class DTSError(Exception): - """ - The base exception from which all DTS exceptions are derived. - Stores error severity. + """The base exception from which all DTS exceptions are subclassed. + + Do not use this exception, only use subclassed exceptions. """ + #: severity: ClassVar[ErrorSeverity] = ErrorSeverity.GENERIC_ERR class SSHTimeoutError(DTSError): - """ - Command execution timeout. - """ + """The SSH execution of a command timed out.""" + #: severity: ClassVar[ErrorSeverity] = ErrorSeverity.SSH_ERR _command: str def __init__(self, command: str): + """Define the meaning of the first argument. + + Args: + command: The executed command. + """ self._command = command def __str__(self) -> str: - return f"TIMEOUT on {self._command}" + """Add some context to the string representation.""" + return f"{self._command} execution timed out." class SSHConnectionError(DTSError): - """ - SSH connection error. - """ + """An unsuccessful SSH connection.""" + #: severity: ClassVar[ErrorSeverity] = ErrorSeverity.SSH_ERR _host: str _errors: list[str] def __init__(self, host: str, errors: list[str] | None = None): + """Define the meaning of the first two arguments. + + Args: + host: The hostname to which we're trying to connect. + errors: Any errors that occurred during the connection attempt. 
+ """ self._host = host self._errors = [] if errors is None else errors def __str__(self) -> str: + """Include the errors in the string representation.""" message = f"Error trying to connect with {self._host}." if self._errors: message += f" Errors encountered while retrying: {', '.join(self._errors)}" @@ -74,76 +96,92 @@ def __str__(self) -> str: class SSHSessionDeadError(DTSError): - """ - SSH session is not alive. - It can no longer be used. - """ + """The SSH session is no longer alive.""" + #: severity: ClassVar[ErrorSeverity] = ErrorSeverity.SSH_ERR _host: str def __init__(self, host: str): + """Define the meaning of the first argument. + + Args: + host: The hostname of the disconnected node. + """ self._host = host def __str__(self) -> str: - return f"SSH session with {self._host} has died" + """Add some context to the string representation.""" + return f"SSH session with {self._host} has died." class ConfigurationError(DTSError): - """ - Raised when an invalid configuration is encountered. - """ + """An invalid configuration.""" + #: severity: ClassVar[ErrorSeverity] = ErrorSeverity.CONFIG_ERR class RemoteCommandExecutionError(DTSError): - """ - Raised when a command executed on a Node returns a non-zero exit status. - """ + """An unsuccessful execution of a remote command.""" + #: severity: ClassVar[ErrorSeverity] = ErrorSeverity.REMOTE_CMD_EXEC_ERR + #: The executed command. command: str _command_return_code: int def __init__(self, command: str, command_return_code: int): + """Define the meaning of the first two arguments. + + Args: + command: The executed command. + command_return_code: The return code of the executed command. 
+ """ self.command = command self._command_return_code = command_return_code def __str__(self) -> str: + """Include both the command and return code in the string representation.""" return f"Command {self.command} returned a non-zero exit code: {self._command_return_code}" class RemoteDirectoryExistsError(DTSError): - """ - Raised when a remote directory to be created already exists. - """ + """A directory that exists on a remote node.""" + #: severity: ClassVar[ErrorSeverity] = ErrorSeverity.REMOTE_CMD_EXEC_ERR class DPDKBuildError(DTSError): - """ - Raised when DPDK build fails for any reason. - """ + """A DPDK build failure.""" + #: severity: ClassVar[ErrorSeverity] = ErrorSeverity.DPDK_BUILD_ERR class TestCaseVerifyError(DTSError): - """ - Used in test cases to verify the expected behavior. - """ + """A test case failure.""" + #: severity: ClassVar[ErrorSeverity] = ErrorSeverity.TESTCASE_VERIFY_ERR class BlockingTestSuiteError(DTSError): + """A failure in a blocking test suite.""" + + #: severity: ClassVar[ErrorSeverity] = ErrorSeverity.BLOCKING_TESTSUITE_ERR _suite_name: str def __init__(self, suite_name: str) -> None: + """Define the meaning of the first argument. + + Args: + suite_name: The blocking test suite. + """ self._suite_name = suite_name def __str__(self) -> str: + """Add some context to the string representation.""" return f"Blocking suite {self._suite_name} failed." 
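[Editor's note] The docstrings in this patch describe the severity scheme: every DTS exception subclasses ``DTSError`` with a class-level ``severity``, and the highest severity among all raised exceptions becomes the DTS exit code. A minimal standalone sketch of that pattern follows — the class names mirror the patch, but the ``exit_code`` helper is an illustration assumed for this note, not the framework's actual code:

```python
from enum import IntEnum, unique


@unique
class ErrorSeverity(IntEnum):
    """Mirror of the severities defined in dts/framework/exception.py."""

    NO_ERR = 0
    GENERIC_ERR = 1
    SSH_ERR = 4
    TESTCASE_VERIFY_ERR = 20
    BLOCKING_TESTSUITE_ERR = 25


class DTSError(Exception):
    """Base exception; subclasses override the class-level severity."""

    severity = ErrorSeverity.GENERIC_ERR


class SSHTimeoutError(DTSError):
    severity = ErrorSeverity.SSH_ERR


class TestCaseVerifyError(DTSError):
    severity = ErrorSeverity.TESTCASE_VERIFY_ERR


def exit_code(errors: list[DTSError]) -> int:
    """Return the highest severity among all caught errors (0 if none were raised)."""
    return int(max((e.severity for e in errors), default=ErrorSeverity.NO_ERR))
```

Because ``ErrorSeverity`` is an ``IntEnum``, ``max`` compares the members numerically, so a run that caught both an SSH timeout (4) and a test case failure (20) would exit with 20.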
From patchwork Mon Dec 4 10:24:13 2023
X-Patchwork-Submitter: Juraj Linkeš
X-Patchwork-Id: 134793
X-Patchwork-Delegate: thomas@monjalon.net
From: Juraj Linkeš
To: thomas@monjalon.net, Honnappa.Nagarahalli@arm.com, jspewock@iol.unh.edu, probb@iol.unh.edu, paul.szczepanek@arm.com, yoan.picchi@foss.arm.com, Luca.Vizzarro@arm.com
Cc: dev@dpdk.org, Juraj Linkeš
Subject: [PATCH v9 05/21] dts: settings docstring update
Date: Mon, 4 Dec 2023 11:24:13 +0100
Message-Id: <20231204102429.106709-6-juraj.linkes@pantheon.tech>

Format according to the Google format and PEP257, with slight deviations.
Signed-off-by: Juraj Linkeš --- dts/framework/settings.py | 103 +++++++++++++++++++++++++++++++++++++- 1 file changed, 102 insertions(+), 1 deletion(-) diff --git a/dts/framework/settings.py b/dts/framework/settings.py index 25b5dcff22..41f98e8519 100644 --- a/dts/framework/settings.py +++ b/dts/framework/settings.py @@ -3,6 +3,72 @@ # Copyright(c) 2022-2023 PANTHEON.tech s.r.o. # Copyright(c) 2022 University of New Hampshire +"""Environment variables and command line arguments parsing. + +This is a simple module utilizing the built-in argparse module to parse command line arguments, +augment them with values from environment variables and make them available across the framework. + +The command line value takes precedence, followed by the environment variable value, +followed by the default value defined in this module. + +The command line arguments along with the supported environment variables are: + +.. option:: --config-file +.. envvar:: DTS_CFG_FILE + + The path to the YAML test run configuration file. + +.. option:: --output-dir, --output +.. envvar:: DTS_OUTPUT_DIR + + The directory where DTS logs and results are saved. + +.. option:: --compile-timeout +.. envvar:: DTS_COMPILE_TIMEOUT + + The timeout for compiling DPDK. + +.. option:: -t, --timeout +.. envvar:: DTS_TIMEOUT + + The timeout for all DTS operations except for compiling DPDK. + +.. option:: -v, --verbose +.. envvar:: DTS_VERBOSE + + Set to any value to enable logging everything to the console. + +.. option:: -s, --skip-setup +.. envvar:: DTS_SKIP_SETUP + + Set to any value to skip building DPDK. + +.. option:: --tarball, --snapshot, --git-ref +.. envvar:: DTS_DPDK_TARBALL + + The path to a DPDK tarball, git commit ID, tag ID or tree ID to test. + +.. option:: --test-cases +.. envvar:: DTS_TESTCASES + + A comma-separated list of test cases to execute. Unknown test cases will be silently ignored. + +.. option:: --re-run, --re_run +.. 
envvar:: DTS_RERUN + + Re-run each test case this many times in case of a failure. + +The module provides one key module-level variable: + +Attributes: + SETTINGS: The module level variable storing framework-wide DTS settings. + +Typical usage example:: + + from framework.settings import SETTINGS + foo = SETTINGS.foo +""" + import argparse import os from collections.abc import Callable, Iterable, Sequence @@ -16,6 +82,23 @@ def _env_arg(env_var: str) -> Any: + """A helper method augmenting the argparse Action with environment variables. + + If the supplied environment variable is defined, then the default value + of the argument is modified. This satisfies the priority order of + command line argument > environment variable > default value. + + Arguments with no values (flags) should be defined using the const keyword argument + (True or False). When the argument is specified, it will be set to const, if not specified, + the default will be stored (possibly modified by the corresponding environment variable). + + Other arguments work the same as default argparse arguments, that is using + the default 'store' action. + + Returns: + The modified argparse.Action. + """ + class _EnvironmentArgument(argparse.Action): def __init__( self, @@ -68,14 +151,28 @@ def __call__( @dataclass(slots=True) class Settings: + """Default framework-wide user settings. + + The defaults may be modified at the start of the run. 
+ """ + + #: + config_file_path: Path = Path(__file__).parent.parent.joinpath("conf.yaml") + #: + output_dir: str = "output" + #: + timeout: float = 15 + #: + verbose: bool = False + #: + skip_setup: bool = False + #: + dpdk_tarball_path: Path | str = "dpdk.tar.xz" + #: + compile_timeout: float = 1200 + #: + test_cases: list[str] = field(default_factory=list) + #: + re_run: int = 0 @@ -166,7 +263,7 @@ def _get_parser() -> argparse.ArgumentParser: action=_env_arg("DTS_RERUN"), default=SETTINGS.re_run, type=int, - help="[DTS_RERUN] Re-run each test case the specified amount of times " + help="[DTS_RERUN] Re-run each test case the specified number of times " "if a test failure occurs", ) @@ -174,6 +271,10 @@ def _get_parser() -> argparse.ArgumentParser: def get_settings() -> Settings: + """Create new settings with inputs from the user. + + The inputs are taken from the command line and from environment variables. + """ parsed_args = _get_parser().parse_args() return Settings( config_file_path=parsed_args.config_file,

From patchwork Mon Dec 4 10:24:14 2023
X-Patchwork-Submitter: Juraj Linkeš
X-Patchwork-Id: 134794
X-Patchwork-Delegate: thomas@monjalon.net
From:
Juraj Linkeš
To: thomas@monjalon.net, Honnappa.Nagarahalli@arm.com, jspewock@iol.unh.edu, probb@iol.unh.edu, paul.szczepanek@arm.com, yoan.picchi@foss.arm.com, Luca.Vizzarro@arm.com
Cc: dev@dpdk.org, Juraj Linkeš
Subject: [PATCH v9 06/21] dts: logger and utils docstring update
Date: Mon, 4 Dec 2023 11:24:14 +0100
Message-Id: <20231204102429.106709-7-juraj.linkes@pantheon.tech>

Format according to the Google format and PEP257, with slight deviations. Signed-off-by: Juraj Linkeš --- dts/framework/logger.py | 72 ++++++++++++++++++++++----------- dts/framework/utils.py | 88 +++++++++++++++++++++++++++++------------ 2 files changed, 113 insertions(+), 47 deletions(-) diff --git a/dts/framework/logger.py b/dts/framework/logger.py index bb2991e994..cfa6e8cd72 100644 --- a/dts/framework/logger.py +++ b/dts/framework/logger.py @@ -3,9 +3,9 @@ # Copyright(c) 2022-2023 PANTHEON.tech s.r.o. # Copyright(c) 2022-2023 University of New Hampshire -""" -DTS logger module with several log level. DTS framework and TestSuite logs -are saved in different log files. +"""DTS logger module. + +DTS framework and TestSuite logs are saved in different log files. """ import logging @@ -18,19 +18,21 @@ stream_fmt = "%(asctime)s - %(name)s - %(levelname)s - %(message)s" -class LoggerDictType(TypedDict): - logger: "DTSLOG" - name: str - node: str - +class DTSLOG(logging.LoggerAdapter): + """DTS logger adapter class for framework and testsuites.
-# List for saving all using loggers -Loggers: list[LoggerDictType] = [] + The :option:`--verbose` command line argument and the :envvar:`DTS_VERBOSE` environment + variable control the verbosity of output. If enabled, all messages will be emitted to the + console. + The :option:`--output` command line argument and the :envvar:`DTS_OUTPUT_DIR` environment + variable modify the directory where the logs will be stored. -class DTSLOG(logging.LoggerAdapter): - """ - DTS log class for framework and testsuite. + Attributes: + node: The additional identifier. Currently unused. + sh: The handler which emits logs to console. + fh: The handler which emits logs to a file. + verbose_fh: Just as fh, but logs with a different, more verbose, format. """ _logger: logging.Logger @@ -40,6 +42,15 @@ class DTSLOG(logging.LoggerAdapter): verbose_fh: logging.FileHandler def __init__(self, logger: logging.Logger, node: str = "suite"): + """Extend the constructor with additional handlers. + + One handler logs to the console, the other one to a file, with either a regular or verbose + format. + + Args: + logger: The logger from which to create the logger adapter. + node: An additional identifier. Currently unused. + """ self._logger = logger # 1 means log everything, this will be used by file handlers if their level # is not set @@ -92,26 +103,43 @@ def __init__(self, logger: logging.Logger, node: str = "suite"): super(DTSLOG, self).__init__(self._logger, dict(node=self.node)) def logger_exit(self) -> None: - """ - Remove stream handler and logfile handler. 
- """ + """Remove the stream handler and the logfile handler.""" for handler in (self.sh, self.fh, self.verbose_fh): handler.flush() self._logger.removeHandler(handler) +class _LoggerDictType(TypedDict): + logger: DTSLOG + name: str + node: str + + +# List for saving all loggers in use +_Loggers: list[_LoggerDictType] = [] + + def getLogger(name: str, node: str = "suite") -> DTSLOG: + """Get DTS logger adapter identified by name and node. + + An existing logger will be returned if one with the exact name and node already exists. + A new one will be created and stored otherwise. + + Args: + name: The name of the logger. + node: An additional identifier for the logger. + + Returns: + A logger uniquely identified by both name and node. """ - Get logger handler and if there's no handler for specified Node will create one. - """ - global Loggers + global _Loggers # return saved logger - logger: LoggerDictType - for logger in Loggers: + logger: _LoggerDictType + for logger in _Loggers: if logger["name"] == name and logger["node"] == node: return logger["logger"] # return new logger dts_logger: DTSLOG = DTSLOG(logging.getLogger(name), node) - Loggers.append({"logger": dts_logger, "name": name, "node": node}) + _Loggers.append({"logger": dts_logger, "name": name, "node": node}) return dts_logger diff --git a/dts/framework/utils.py b/dts/framework/utils.py index a0f2173949..cc5e458cc8 100644 --- a/dts/framework/utils.py +++ b/dts/framework/utils.py @@ -3,6 +3,16 @@ # Copyright(c) 2022-2023 PANTHEON.tech s.r.o. # Copyright(c) 2022-2023 University of New Hampshire +"""Various utility classes and functions. + +These are used in multiple modules across the framework. They're here because +they provide some non-specific functionality, greatly simplify imports or just don't +fit elsewhere. + +Attributes: + REGEX_FOR_PCI_ADDRESS: The regex representing a PCI address, e.g. ``0000:00:08.0``. 
+""" + import atexit import json import os @@ -19,12 +29,20 @@ def expand_range(range_str: str) -> list[int]: - """ - Process range string into a list of integers. There are two possible formats: - n - a single integer - n-m - a range of integers + """Process `range_str` into a list of integers. + + There are two possible formats of `range_str`: + + * ``n`` - a single integer, + * ``n-m`` - a range of integers. - The returned range includes both n and m. Empty string returns an empty list. + The returned range includes both ``n`` and ``m``. Empty string returns an empty list. + + Args: + range_str: The range to expand. + + Returns: + All the numbers from the range. """ expanded_range: list[int] = [] if range_str: @@ -37,6 +55,14 @@ def expand_range(range_str: str) -> list[int]: def get_packet_summaries(packets: list[Packet]) -> str: + """Format a string summary from `packets`. + + Args: + packets: The packets to format. + + Returns: + The summary of `packets`. + """ if len(packets) == 1: packet_summaries = packets[0].summary() else: @@ -45,27 +71,36 @@ def get_packet_summaries(packets: list[Packet]) -> str: class StrEnum(Enum): + """Enum with members stored as strings.""" + @staticmethod def _generate_next_value_(name: str, start: int, count: int, last_values: object) -> str: return name def __str__(self) -> str: + """The string representation is the name of the member.""" return self.name class MesonArgs(object): - """ - Aggregate the arguments needed to build DPDK: - default_library: Default library type, Meson allows "shared", "static" and "both". - Defaults to None, in which case the argument won't be used. - Keyword arguments: The arguments found in meson_options.txt in root DPDK directory. - Do not use -D with them, for example: - meson_args = MesonArgs(enable_kmods=True). 
- """ + """Aggregate the arguments needed to build DPDK.""" _default_library: str def __init__(self, default_library: str | None = None, **dpdk_args: str | bool): + """Initialize the meson arguments. + + Args: + default_library: The default library type, Meson supports ``shared``, ``static`` and + ``both``. Defaults to :data:`None`, in which case the argument won't be used. + dpdk_args: The arguments found in ``meson_options.txt`` in root DPDK directory. + Do not use ``-D`` with them. + + Example: + :: + + meson_args = MesonArgs(enable_kmods=True). + """ self._default_library = f"--default-library={default_library}" if default_library else "" self._dpdk_args = " ".join( ( @@ -75,6 +110,7 @@ def __init__(self, default_library: str | None = None, **dpdk_args: str | bool): ) def __str__(self) -> str: + """The actual args.""" return " ".join(f"{self._default_library} {self._dpdk_args}".split()) @@ -96,24 +132,14 @@ class _TarCompressionFormat(StrEnum): class DPDKGitTarball(object): - """Create a compressed tarball of DPDK from the repository. - - The DPDK version is specified with git object git_ref. - The tarball will be compressed with _TarCompressionFormat, - which must be supported by the DTS execution environment. - The resulting tarball will be put into output_dir. + """Compressed tarball of DPDK from the repository. - The class supports the os.PathLike protocol, + The class supports the :class:`os.PathLike` protocol, which is used to get the Path of the tarball:: from pathlib import Path tarball = DPDKGitTarball("HEAD", "output") tarball_path = Path(tarball) - - Arguments: - git_ref: A git commit ID, tag ID or tree ID. - output_dir: The directory where to put the resulting tarball. - tar_compression_format: The compression format to use. """ _git_ref: str @@ -128,6 +154,17 @@ def __init__( output_dir: str, tar_compression_format: _TarCompressionFormat = _TarCompressionFormat.xz, ): + """Create the tarball during initialization. 
+ + The DPDK version is specified with `git_ref`. The tarball will be compressed with + `tar_compression_format`, which must be supported by the DTS execution environment. + The resulting tarball will be put into `output_dir`. + + Args: + git_ref: A git commit ID, tag ID or tree ID. + output_dir: The directory where to put the resulting tarball. + tar_compression_format: The compression format to use. + """ self._git_ref = git_ref self._tar_compression_format = tar_compression_format @@ -196,4 +233,5 @@ def _delete_tarball(self) -> None: os.remove(self._tarball_path) def __fspath__(self) -> str: + """The os.PathLike protocol implementation.""" return str(self._tarball_path)

From patchwork Mon Dec 4 10:24:15 2023
X-Patchwork-Submitter: Juraj Linkeš
X-Patchwork-Id: 134795
X-Patchwork-Delegate: thomas@monjalon.net
From: Juraj Linkeš
To: thomas@monjalon.net, Honnappa.Nagarahalli@arm.com, jspewock@iol.unh.edu, probb@iol.unh.edu, paul.szczepanek@arm.com, yoan.picchi@foss.arm.com, Luca.Vizzarro@arm.com
Cc: dev@dpdk.org, Juraj Linkeš
Subject: [PATCH v9 07/21] dts: dts runner and main
docstring update
Date: Mon, 4 Dec 2023 11:24:15 +0100
Message-Id: <20231204102429.106709-8-juraj.linkes@pantheon.tech>

Format according to the Google format and PEP257, with slight deviations. Signed-off-by: Juraj Linkeš --- dts/framework/dts.py | 131 ++++++++++++++++++++++++++++++++++++------- dts/main.py | 10 ++-- 2 files changed, 116 insertions(+), 25 deletions(-) diff --git a/dts/framework/dts.py b/dts/framework/dts.py index 356368ef10..e16d4578a0 100644 --- a/dts/framework/dts.py +++ b/dts/framework/dts.py @@ -3,6 +3,33 @@ # Copyright(c) 2022-2023 PANTHEON.tech s.r.o. # Copyright(c) 2022-2023 University of New Hampshire +r"""Test suite runner module. + +A DTS run is split into stages: + + #. Execution stage, + #. Build target stage, + #. Test suite stage, + #. Test case stage. + +The module is responsible for running tests on testbeds defined in the test run configuration. +Each setup or teardown of each stage is recorded in a :class:`~.test_result.DTSResult` or +one of its subclasses. The test case results are also recorded. + +If an error occurs, the current stage is aborted, the error is recorded and the run continues in +the next iteration of the same stage. The return code is the highest `severity` of all +:class:`~.exception.DTSError`\s. + +Example: + An error occurs in a build target setup. The current build target is aborted and the run + continues with the next build target. If the errored build target was the last one in the given + execution, the next execution begins.
+
+Attributes:
+    dts_logger: The logger instance used in this module.
+    result: The top level result used in the module.
+"""
+
 import sys
 
 from .config import (
@@ -23,9 +50,38 @@
 
 
 def run_all() -> None:
-    """
-    The main process of DTS. Runs all build targets in all executions from the main
-    config file.
+    """Run all build targets in all executions from the test run configuration.
+
+    Before running test suites, executions and build targets are first set up.
+    The executions and build targets defined in the test run configuration are iterated over.
+    The executions define which tests to run and where to run them and build targets define
+    the DPDK build setup.
+
+    The test suites are set up for each execution/build target tuple and each scheduled
+    test case within the test suite is set up, executed and torn down. After all test cases
+    have been executed, the test suite is torn down and the next build target will be tested.
+
+    All the nested steps look like this:
+
+    #. Execution setup
+
+        #. Build target setup
+
+            #. Test suite setup
+
+                #. Test case setup
+                #. Test case logic
+                #. Test case teardown
+
+            #. Test suite teardown
+
+        #. Build target teardown
+
+    #. Execution teardown
+
+    The test cases are filtered according to the specification in the test run configuration and
+    the :option:`--test-cases` command line argument or
+    the :envvar:`DTS_TESTCASES` environment variable.
     """
     global dts_logger
     global result
@@ -87,6 +143,8 @@ def run_all() -> None:
 
 
 def _check_dts_python_version() -> None:
+    """Check the required Python version - v3.10."""
+
     def RED(text: str) -> str:
         return f"\u001B[31;1m{str(text)}\u001B[0m"
@@ -109,9 +167,16 @@ def _run_execution(
     execution: ExecutionConfiguration,
     result: DTSResult,
 ) -> None:
-    """
-    Run the given execution. This involves running the execution setup as well as
-    running all build targets in the given execution.
+    """Run the given execution.
+
+    This involves running the execution setup as well as running all build targets
+    in the given execution. After that, execution teardown is run.
+
+    Args:
+        sut_node: The execution's SUT node.
+        tg_node: The execution's TG node.
+        execution: An execution's test run configuration.
+        result: The top level result object.
     """
     dts_logger.info(f"Running execution with SUT '{execution.system_under_test_node.name}'.")
     execution_result = result.add_execution(sut_node.config)
@@ -144,8 +209,18 @@ def _run_build_target(
     execution: ExecutionConfiguration,
     execution_result: ExecutionResult,
 ) -> None:
-    """
-    Run the given build target.
+    """Run the given build target.
+
+    This involves running the build target setup as well as running all test suites
+    in the given execution the build target is defined in.
+    After that, build target teardown is run.
+
+    Args:
+        sut_node: The execution's SUT node.
+        tg_node: The execution's TG node.
+        build_target: A build target's test run configuration.
+        execution: The build target's execution's test run configuration.
+        execution_result: The execution level result object associated with the execution.
     """
     dts_logger.info(f"Running build target '{build_target.name}'.")
     build_target_result = execution_result.add_build_target(build_target)
@@ -177,10 +252,20 @@ def _run_all_suites(
     execution: ExecutionConfiguration,
     build_target_result: BuildTargetResult,
 ) -> None:
-    """
-    Use the given build_target to run execution's test suites
-    with possibly only a subset of test cases.
-    If no subset is specified, run all test cases.
+    """Run the execution's test suites (possibly a subset) using the current build target.
+
+    The function assumes the build target we're testing has already been built on the SUT node.
+    The current build target thus corresponds to the current DPDK build present on the SUT node.
+
+    If a blocking test suite (such as the smoke test suite) fails, the rest of the test suites
+    in the current build target won't be executed.
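The per-stage error isolation described above ("abort the current stage, continue with the next iteration") amounts to a pair of nested loops with a try/except around each inner iteration. A minimal sketch, with made-up build target names and a simplified runner callback:

```python
def run_executions(executions: list[dict], run_build_target) -> list[str]:
    """Run every build target of every execution; a failure aborts only that build target."""
    log = []
    for execution in executions:
        for build_target in execution["build_targets"]:
            try:
                run_build_target(build_target)
                log.append(f"ok:{build_target}")
            except Exception as e:
                # Record the error and continue with the next build target.
                log.append(f"error:{build_target}:{e}")
    return log

executions = [{"build_targets": ["gcc-static", "clang-shared"]}]

def fake_run(build_target: str) -> None:
    """A stand-in runner whose first build target fails during setup."""
    if build_target == "gcc-static":
        raise RuntimeError("setup failed")

result_log = run_executions(executions, fake_run)
```

The real runner does the same at every nesting level (execution, build target, test suite, test case), recording each caught error into the result hierarchy instead of a flat log.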
+
+    Args:
+        sut_node: The execution's SUT node.
+        tg_node: The execution's TG node.
+        execution: The execution's test run configuration associated with the current
+            build target.
+        build_target_result: The build target level result object associated
+            with the current build target.
     """
     end_build_target = False
     if not execution.skip_smoke_tests:
@@ -206,16 +291,22 @@ def _run_single_suite(
     build_target_result: BuildTargetResult,
     test_suite_config: TestSuiteConfig,
 ) -> None:
-    """Runs a single test suite.
+    """Run all test suites in a single test suite module.
+
+    The function assumes the build target we're testing has already been built on the SUT node.
+    The current build target thus corresponds to the current DPDK build present on the SUT node.
 
     Args:
-        sut_node: Node to run tests on.
-        execution: Execution the test case belongs to.
-        build_target_result: Build target configuration test case is run on
-        test_suite_config: Test suite configuration
+        sut_node: The execution's SUT node.
+        tg_node: The execution's TG node.
+        execution: The execution's test run configuration associated with the current
+            build target.
+        build_target_result: The build target level result object associated
+            with the current build target.
+        test_suite_config: Test suite test run configuration specifying the test suite module
+            and possibly a subset of test cases of test suites in that module.
 
     Raises:
-        BlockingTestSuiteError: If a test suite that was marked as blocking fails.
+        BlockingTestSuiteError: If a blocking test suite fails.
     """
     try:
         full_suite_path = f"tests.TestSuite_{test_suite_config.test_suite}"
@@ -239,9 +330,7 @@ def _run_single_suite(
 
 
 def _exit_dts() -> None:
-    """
-    Process all errors and exit with the proper exit code.
- """ + """Process all errors and exit with the proper exit code.""" result.process() if dts_logger: diff --git a/dts/main.py b/dts/main.py index 5d4714b0c3..b856ba86be 100755 --- a/dts/main.py +++ b/dts/main.py @@ -1,12 +1,10 @@ #!/usr/bin/env python3 # SPDX-License-Identifier: BSD-3-Clause # Copyright(c) 2010-2014 Intel Corporation -# Copyright(c) 2022 PANTHEON.tech s.r.o. +# Copyright(c) 2022-2023 PANTHEON.tech s.r.o. # Copyright(c) 2022 University of New Hampshire -""" -A test framework for testing DPDK. -""" +"""The DTS executable.""" import logging @@ -17,6 +15,10 @@ def main() -> None: """Set DTS settings, then run DTS. The DTS settings are taken from the command line arguments and the environment variables. + The settings object is stored in the module-level variable settings.SETTINGS which the entire + framework uses. After importing the module (or the variable), any changes to the variable are + not going to be reflected without a re-import. This means that the SETTINGS variable must + be modified before the settings module is imported anywhere else in the framework. 
""" settings.SETTINGS = settings.get_settings() from framework import dts From patchwork Mon Dec 4 10:24:16 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Juraj_Linke=C5=A1?= X-Patchwork-Id: 134796 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 641EF4366A; Mon, 4 Dec 2023 11:25:33 +0100 (CET) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 4AF41410F1; Mon, 4 Dec 2023 11:24:43 +0100 (CET) Received: from mail-wm1-f47.google.com (mail-wm1-f47.google.com [209.85.128.47]) by mails.dpdk.org (Postfix) with ESMTP id 7D430410D5 for ; Mon, 4 Dec 2023 11:24:39 +0100 (CET) Received: by mail-wm1-f47.google.com with SMTP id 5b1f17b1804b1-40b4746ae3bso38083475e9.0 for ; Mon, 04 Dec 2023 02:24:39 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=pantheon.tech; s=google; t=1701685479; x=1702290279; darn=dpdk.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=3J4tuvvC25Dr3XVRHIMGMh8HcTBlhxLQyKucmyiGS50=; b=qgytLXMe89qHjBhy+ZW9AEtdo0GbP5h7qlfBh/lvefymItO5iU7Yk84+ua7Xi9Svhz ncnWhTlO0RrygAe7kQWzrox4HEsxcdEuXayikOg3kpi4H39GP4BAY3S8559qyw0ARmQX hnYaasSxOtjcrialbRBDyuawfoL/blIFpDmB24t+I455mloB3SM7J4CC3rEuTkDxYjFO Al8Bl9Er+qUmpFMP4xpH+AHVaoHd6vmlhUDvtTSD7EYA9WHghLGjUUusugxjhePnTp9h tCoxZ10WtXjVapc8/l2kyKMRrkk4xDS0K9bWJHm2NAPF/97byz+jh5HzxfwPQW+A2nB6 TcBg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1701685479; x=1702290279; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; 
From: Juraj Linkeš
To: thomas@monjalon.net, Honnappa.Nagarahalli@arm.com, jspewock@iol.unh.edu, probb@iol.unh.edu, paul.szczepanek@arm.com, yoan.picchi@foss.arm.com, Luca.Vizzarro@arm.com
Cc: dev@dpdk.org, Juraj Linkeš
Subject: [PATCH v9 08/21] dts: test suite docstring update
Date: Mon, 4 Dec 2023 11:24:16 +0100
Message-Id: <20231204102429.106709-9-juraj.linkes@pantheon.tech>

Format according to the Google format and PEP257, with slight deviations.
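For readers unfamiliar with the convention this series converts to, a minimal Google-style docstring (PEP 257 compliant, with the Args/Returns/Raises sections Sphinx's napoleon extension understands) looks like this. The function is a generic example, not taken from the patch:

```python
def clamp(value: int, low: int, high: int) -> int:
    """Clamp `value` to the closed interval [low, high].

    Args:
        value: The number to clamp.
        low: The lower bound of the interval.
        high: The upper bound of the interval.

    Returns:
        `value` if it lies within the interval, otherwise the nearest bound.

    Raises:
        ValueError: If `low` is greater than `high`.
    """
    if low > high:
        raise ValueError("low must not exceed high")
    return max(low, min(value, high))
```

The first line is an imperative one-sentence summary ending in a period; sections are introduced by an indented `Args:`/`Returns:`/`Raises:` header, which is the format the converted docstrings below follow.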
Signed-off-by: Juraj Linkeš
---
 dts/framework/test_suite.py | 231 +++++++++++++++++++++++++++---------
 1 file changed, 175 insertions(+), 56 deletions(-)

diff --git a/dts/framework/test_suite.py b/dts/framework/test_suite.py
index f9e66e814a..dfb391ffbd 100644
--- a/dts/framework/test_suite.py
+++ b/dts/framework/test_suite.py
@@ -2,8 +2,19 @@
 # Copyright(c) 2010-2014 Intel Corporation
 # Copyright(c) 2023 PANTHEON.tech s.r.o.
 
-"""
-Base class for creating DTS test cases.
+"""Features common to all test suites.
+
+The module defines the :class:`TestSuite` class which doesn't contain any test cases, and as such
+must be extended by subclasses which add test cases. The :class:`TestSuite` contains the basics
+needed by subclasses:
+
+    * Test suite and test case execution flow,
+    * Testbed (SUT, TG) configuration,
+    * Packet sending and verification,
+    * Test case verification.
+
+The module also defines a function, :func:`get_test_suites`,
+for gathering test suites from a Python module.
 """
 
 import importlib
@@ -11,7 +22,7 @@ import re
 from ipaddress import IPv4Interface, IPv6Interface, ip_interface
 from types import MethodType
-from typing import Any, Union
+from typing import Any, ClassVar, Union
 
 from scapy.layers.inet import IP  # type: ignore[import]
 from scapy.layers.l2 import Ether  # type: ignore[import]
@@ -31,25 +42,44 @@
 
 
 class TestSuite(object):
-    """
-    The base TestSuite class provides methods for handling basic flow of a test suite:
-    * test case filtering and collection
-    * test suite setup/cleanup
-    * test setup/cleanup
-    * test case execution
-    * error handling and results storage
-    Test cases are implemented by derived classes. Test cases are all methods
-    starting with test_, further divided into performance test cases
-    (starting with test_perf_) and functional test cases (all other test cases).
-    By default, all test cases will be executed.
-    A list of testcase str names
-    may be specified in conf.yaml or on the command line
-    to filter which test cases to run.
-    The methods named [set_up|tear_down]_[suite|test_case] should be overridden
-    in derived classes if the appropriate suite/test case fixtures are needed.
+    """The base class with methods for handling the basic flow of a test suite.
+
+    * Test case filtering and collection,
+    * Test suite setup/cleanup,
+    * Test setup/cleanup,
+    * Test case execution,
+    * Error handling and results storage.
+
+    Test cases are implemented by subclasses. Test cases are all methods starting with ``test_``,
+    further divided into performance test cases (starting with ``test_perf_``)
+    and functional test cases (all other test cases).
+
+    By default, all test cases will be executed. A list of testcase names may be specified
+    in the YAML test run configuration file and in the :option:`--test-cases` command line
+    argument or in the :envvar:`DTS_TESTCASES` environment variable to filter which test cases
+    to run. The union of both lists will be used. Any unknown test cases from the latter lists
+    will be silently ignored.
+
+    If the :option:`--re-run` command line argument or the :envvar:`DTS_RERUN` environment
+    variable is set, in case of a test case failure, the test case will be executed again until
+    it passes or it fails that many times in addition to the first failure.
+
+    The methods named ``[set_up|tear_down]_[suite|test_case]`` should be overridden in subclasses
+    if the appropriate test suite/test case fixtures are needed.
+
+    The test suite is aware of the testbed (the SUT and TG) it's running on. From this, it can
+    properly choose the IP addresses and other configuration that must be tailored to the testbed.
+
+    Attributes:
+        sut_node: The SUT node where the test suite is running.
+        tg_node: The TG node where the test suite is running.
     """
 
     sut_node: SutNode
-    is_blocking = False
+    tg_node: TGNode
+    #: Whether the test suite is blocking.
+    #: A failure of a blocking test suite
+    #: will block the execution of all subsequent test suites in the current build target.
+    is_blocking: ClassVar[bool] = False
     _logger: DTSLOG
     _test_cases_to_run: list[str]
     _func: bool
@@ -72,6 +102,20 @@ def __init__(
         func: bool,
         build_target_result: BuildTargetResult,
     ):
+        """Initialize the test suite testbed information and basic configuration.
+
+        Process what test cases to run, create the associated
+        :class:`~.test_result.TestSuiteResult`, find links between ports
+        and set up default IP addresses to be used when configuring them.
+
+        Args:
+            sut_node: The SUT node where the test suite will run.
+            tg_node: The TG node where the test suite will run.
+            test_cases: The list of test cases to execute.
+                If empty, all test cases will be executed.
+            func: Whether to run functional tests.
+            build_target_result: The build target result this test suite is run in.
+        """
         self.sut_node = sut_node
         self.tg_node = tg_node
         self._logger = getLogger(self.__class__.__name__)
@@ -95,6 +139,7 @@ def __init__(
         self._tg_ip_address_ingress = ip_interface("192.168.101.3/24")
 
     def _process_links(self) -> None:
+        """Construct links between SUT and TG ports."""
         for sut_port in self.sut_node.ports:
             for tg_port in self.tg_node.ports:
                 if (sut_port.identifier, sut_port.peer) == (
@@ -104,27 +149,42 @@ def _process_links(self) -> None:
                     self._port_links.append(PortLink(sut_port=sut_port, tg_port=tg_port))
 
     def set_up_suite(self) -> None:
-        """
-        Set up test fixtures common to all test cases; this is done before
-        any test case is run.
+        """Set up test fixtures common to all test cases.
+
+        This is done before any test case has been run.
         """
 
     def tear_down_suite(self) -> None:
-        """
-        Tear down the previously created test fixtures common to all test cases.
+        """Tear down the previously created test fixtures common to all test cases.
+
+        This is done after all tests have been run.
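The ``[set_up|tear_down]_[suite|test_case]`` hooks described above form a classic template-method pattern: the base class drives the flow and subclasses override the no-op fixtures they need. A stripped-down sketch of the calling order (the real runner also records a result for each step):

```python
class MiniSuite:
    """Base class with no-op hooks that subclasses override as needed."""

    def set_up_suite(self): ...
    def tear_down_suite(self): ...
    def set_up_test_case(self): ...
    def tear_down_test_case(self): ...

    def run(self, test_cases):
        """Wrap the whole run in suite fixtures and each case in case fixtures."""
        self.set_up_suite()
        try:
            for case in test_cases:
                self.set_up_test_case()
                try:
                    case()
                finally:
                    self.tear_down_test_case()
        finally:
            self.tear_down_suite()

calls: list[str] = []

class ExampleSuite(MiniSuite):
    def set_up_suite(self): calls.append("suite_setup")
    def tear_down_suite(self): calls.append("suite_teardown")
    def set_up_test_case(self): calls.append("case_setup")
    def tear_down_test_case(self): calls.append("case_teardown")

ExampleSuite().run([lambda: calls.append("test_one")])
```

The ``try``/``finally`` nesting is what guarantees the docstrings' promise that a created fixture is always torn down, even when the test case body raises.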
""" def set_up_test_case(self) -> None: - """ - Set up test fixtures before each test case. + """Set up test fixtures before each test case. + + This is done before *each* test case. """ def tear_down_test_case(self) -> None: - """ - Tear down the previously created test fixtures after each test case. + """Tear down the previously created test fixtures after each test case. + + This is done after *each* test case. """ def configure_testbed_ipv4(self, restore: bool = False) -> None: + """Configure IPv4 addresses on all testbed ports. + + The configured ports are: + + * SUT ingress port, + * SUT egress port, + * TG ingress port, + * TG egress port. + + Args: + restore: If :data:`True`, will remove the configuration instead. + """ delete = True if restore else False enable = False if restore else True self._configure_ipv4_forwarding(enable) @@ -149,11 +209,17 @@ def _configure_ipv4_forwarding(self, enable: bool) -> None: self.sut_node.configure_ipv4_forwarding(enable) def send_packet_and_capture(self, packet: Packet, duration: float = 1) -> list[Packet]: - """ - Send a packet through the appropriate interface and - receive on the appropriate interface. - Modify the packet with l3/l2 addresses corresponding - to the testbed and desired traffic. + """Send and receive `packet` using the associated TG. + + Send `packet` through the appropriate interface and receive on the appropriate interface. + Modify the packet with l3/l2 addresses corresponding to the testbed and desired traffic. + + Args: + packet: The packet to send. + duration: Capture traffic for this amount of time after sending `packet`. + + Returns: + A list of received packets. """ packet = self._adjust_addresses(packet) return self.tg_node.send_packet_and_capture( @@ -161,13 +227,26 @@ def send_packet_and_capture(self, packet: Packet, duration: float = 1) -> list[P ) def get_expected_packet(self, packet: Packet) -> Packet: + """Inject the proper L2/L3 addresses into `packet`. 
+
+        Args:
+            packet: The packet to modify.
+
+        Returns:
+            `packet` with injected L2/L3 addresses.
+        """
         return self._adjust_addresses(packet, expected=True)
 
     def _adjust_addresses(self, packet: Packet, expected: bool = False) -> Packet:
-        """
+        """L2 and L3 address additions in both directions.
+
         Assumptions:
-            Two links between SUT and TG, one link is TG -> SUT,
-            the other SUT -> TG.
+            Two links between SUT and TG, one link is TG -> SUT, the other SUT -> TG.
+
+        Args:
+            packet: The packet to modify.
+            expected: If :data:`True`, the direction is SUT -> TG,
+                otherwise the direction is TG -> SUT.
         """
         if expected:
             # The packet enters the TG from SUT
@@ -193,6 +272,19 @@ def _adjust_addresses(self, packet: Packet, expected: bool = False) -> Packet:
         return Ether(packet.build())
 
     def verify(self, condition: bool, failure_description: str) -> None:
+        """Verify `condition` and handle failures.
+
+        When `condition` is :data:`False`, raise an exception and log the last 10 commands
+        executed on both the SUT and TG.
+
+        Args:
+            condition: The condition to check.
+            failure_description: A short description of the failure
+                that will be stored in the raised exception.
+
+        Raises:
+            TestCaseVerifyError: `condition` is :data:`False`.
+        """
         if not condition:
             self._fail_test_case_verify(failure_description)
@@ -206,6 +298,19 @@ def _fail_test_case_verify(self, failure_description: str) -> None:
         raise TestCaseVerifyError(failure_description)
 
     def verify_packets(self, expected_packet: Packet, received_packets: list[Packet]) -> None:
+        """Verify that `expected_packet` has been received.
+
+        Go through `received_packets` and check that `expected_packet` is among them.
+        If not, raise an exception and log the last 10 commands
+        executed on both the SUT and TG.
+
+        Args:
+            expected_packet: The packet we're expecting to receive.
+            received_packets: The packets where we're looking for `expected_packet`.
+
+        Raises:
+            TestCaseVerifyError: `expected_packet` is not among `received_packets`.
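The verify/_fail_test_case_verify split documented here reduces to this sketch. `TestCaseVerifyError` is defined locally for the example; in DTS it lives in the framework's exception module:

```python
class TestCaseVerifyError(Exception):
    """Raised when a test case assertion fails."""

def verify(condition: bool, failure_description: str) -> None:
    """Raise TestCaseVerifyError carrying `failure_description` when `condition` is False."""
    if not condition:
        # The real framework also logs the recent SUT/TG command history here
        # before raising, so the failure context survives in the logs.
        raise TestCaseVerifyError(failure_description)
```

A test case then reads like `self.verify(stats.rx_packets == 100, "Not all packets were received.")`, and the raised description ends up in the test case result.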
+ """ for received_packet in received_packets: if self._compare_packets(expected_packet, received_packet): break @@ -280,10 +385,14 @@ def _verify_l3_packet(self, received_packet: IP, expected_packet: IP) -> bool: return True def run(self) -> None: - """ - Setup, execute and teardown the whole suite. - Suite execution consists of running all test cases scheduled to be executed. - A test cast run consists of setup, execution and teardown of said test case. + """Set up, execute and tear down the whole suite. + + Test suite execution consists of running all test cases scheduled to be executed. + A test case run consists of setup, execution and teardown of said test case. + + Record the setup and the teardown and handle failures. + + The list of scheduled test cases is constructed when creating the :class:`TestSuite` object. """ test_suite_name = self.__class__.__name__ @@ -315,9 +424,7 @@ def run(self) -> None: raise BlockingTestSuiteError(test_suite_name) def _execute_test_suite(self) -> None: - """ - Execute all test cases scheduled to be executed in this suite. - """ + """Execute all test cases scheduled to be executed in this suite.""" if self._func: for test_case_method in self._get_functional_test_cases(): test_case_name = test_case_method.__name__ @@ -334,14 +441,18 @@ def _execute_test_suite(self) -> None: self._run_test_case(test_case_method, test_case_result) def _get_functional_test_cases(self) -> list[MethodType]: - """ - Get all functional test cases. + """Get all functional test cases defined in this TestSuite. + + Returns: + The list of functional test cases of this TestSuite. """ return self._get_test_cases(r"test_(?!perf_)") def _get_test_cases(self, test_case_regex: str) -> list[MethodType]: - """ - Return a list of test cases matching test_case_regex. + """Return a list of test cases matching test_case_regex. + + Returns: + The list of test cases matching test_case_regex of this TestSuite. 
""" self._logger.debug(f"Searching for test cases in {self.__class__.__name__}.") filtered_test_cases = [] @@ -353,9 +464,7 @@ def _get_test_cases(self, test_case_regex: str) -> list[MethodType]: return filtered_test_cases def _should_be_executed(self, test_case_name: str, test_case_regex: str) -> bool: - """ - Check whether the test case should be executed. - """ + """Check whether the test case should be scheduled to be executed.""" match = bool(re.match(test_case_regex, test_case_name)) if self._test_cases_to_run: return match and test_case_name in self._test_cases_to_run @@ -365,9 +474,9 @@ def _should_be_executed(self, test_case_name: str, test_case_regex: str) -> bool def _run_test_case( self, test_case_method: MethodType, test_case_result: TestCaseResult ) -> None: - """ - Setup, execute and teardown a test case in this suite. - Exceptions are caught and recorded in logs and results. + """Setup, execute and teardown a test case in this suite. + + Record the result of the setup and the teardown and handle failures. """ test_case_name = test_case_method.__name__ @@ -402,9 +511,7 @@ def _run_test_case( def _execute_test_case( self, test_case_method: MethodType, test_case_result: TestCaseResult ) -> None: - """ - Execute one test case and handle failures. - """ + """Execute one test case, record the result and handle failures.""" test_case_name = test_case_method.__name__ try: self._logger.info(f"Starting test case execution: {test_case_name}") @@ -425,6 +532,18 @@ def _execute_test_case( def get_test_suites(testsuite_module_path: str) -> list[type[TestSuite]]: + r"""Find all :class:`TestSuite`\s in a Python module. + + Args: + testsuite_module_path: The path to the Python module. + + Returns: + The list of :class:`TestSuite`\s found within the Python module. + + Raises: + ConfigurationError: The test suite module was not found. 
+ """ + def is_test_suite(object: Any) -> bool: try: if issubclass(object, TestSuite) and object is not TestSuite: From patchwork Mon Dec 4 10:24:17 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Juraj_Linke=C5=A1?= X-Patchwork-Id: 134797 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 31C764366A; Mon, 4 Dec 2023 11:25:40 +0100 (CET) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 888A541141; Mon, 4 Dec 2023 11:24:44 +0100 (CET) Received: from mail-wm1-f46.google.com (mail-wm1-f46.google.com [209.85.128.46]) by mails.dpdk.org (Postfix) with ESMTP id 70631410E3 for ; Mon, 4 Dec 2023 11:24:40 +0100 (CET) Received: by mail-wm1-f46.google.com with SMTP id 5b1f17b1804b1-40c0a03eb87so10658995e9.3 for ; Mon, 04 Dec 2023 02:24:40 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=pantheon.tech; s=google; t=1701685480; x=1702290280; darn=dpdk.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=r8ptTkrC1tsM0AxROhb6o0FU3WTzzkdfCea6YQDDbXw=; b=JABLiWLQXH1FdEMJVErr//wce/Ldo5z9hunXZafHi9XsfYyNFliVYD84dLpMYjWiIm MSljtCwJ7GwbQFmOrFxl9imGgtS06fB3vQMOdnHj5uoYUaeA3y1PZq2/24FPPBHeX0Br s1u9ZZqgpj/9ikgnv0k7lBVQGsQSiD3n0wxUB5k4HMNeWEIlMkkOJaYZA8Bo5PN9ShHG esAa9OtodwIJ6WRp/AoGfS9IkKFg9jX1qtsgDfGHfuaEMJ67lw0plOV+eTWC3yMpGxGc mRGcPlpXwDgcUW2rl9XXMr8K8bmwoog8nG02c7hh+QSrPXI2bbhJm9xz+Oxzh0f8Z3ff 8GXA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1701685480; x=1702290280; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc 
From: Juraj Linkeš
To: thomas@monjalon.net, Honnappa.Nagarahalli@arm.com, jspewock@iol.unh.edu, probb@iol.unh.edu, paul.szczepanek@arm.com, yoan.picchi@foss.arm.com, Luca.Vizzarro@arm.com
Cc: dev@dpdk.org, Juraj Linkeš
Subject: [PATCH v9 09/21] dts: test result docstring update
Date: Mon, 4 Dec 2023 11:24:17 +0100
Message-Id: <20231204102429.106709-10-juraj.linkes@pantheon.tech>

Format according to the Google format and
PEP257, with slight deviations.

Signed-off-by: Juraj Linkeš
---
 dts/framework/test_result.py | 297 ++++++++++++++++++++++++++++-------
 1 file changed, 239 insertions(+), 58 deletions(-)

diff --git a/dts/framework/test_result.py b/dts/framework/test_result.py
index 57090feb04..4467749a9d 100644
--- a/dts/framework/test_result.py
+++ b/dts/framework/test_result.py
@@ -2,8 +2,25 @@
 # Copyright(c) 2023 PANTHEON.tech s.r.o.
 # Copyright(c) 2023 University of New Hampshire
 
-"""
-Generic result container and reporters
+r"""Record and process DTS results.
+
+The results are recorded in a hierarchical manner:
+
+    * :class:`DTSResult` contains
+        * :class:`ExecutionResult` contains
+            * :class:`BuildTargetResult` contains
+                * :class:`TestSuiteResult` contains
+                    * :class:`TestCaseResult`
+
+Each result may contain multiple lower level results, e.g. there are multiple
+:class:`TestSuiteResult`\s in a :class:`BuildTargetResult`.
+The results have common parts, such as setup and teardown results, captured in
+:class:`BaseResult`, which also defines some common behaviors in its methods.
+
+Each result class has its own idiosyncrasies which they implement in overridden methods.
+
+The :option:`--output` command line argument and the :envvar:`DTS_OUTPUT_DIR` environment
+variable modify the directory where the files with results will be stored.
 """
 
 import os.path
@@ -26,26 +43,34 @@
 
 
 class Result(Enum):
-    """
-    An Enum defining the possible states that
-    a setup, a teardown or a test case may end up in.
-    """
+    """The possible states that a setup, a teardown or a test case may end up in."""
 
+    #:
     PASS = auto()
+    #:
     FAIL = auto()
+    #:
     ERROR = auto()
+    #:
     SKIP = auto()
 
     def __bool__(self) -> bool:
+        """Only PASS is True."""
         return self is self.PASS
 
 
 class FixtureResult(object):
-    """
-    A record that stored the result of a setup or a teardown.
-    The default is FAIL because immediately after creating the object
-    the setup of the corresponding stage will be executed, which also guarantees
-    the execution of teardown.
+    """A record that stores the result of a setup or a teardown.
+
+    :attr:`~Result.FAIL` is a sensible default since it prevents false positives (which could
+    happen if the default was :attr:`~Result.PASS`).
+
+    Preventing false positives or other false results is preferable since a failure
+    is most likely to be investigated (the other false results may not be investigated at all).
+
+    Attributes:
+        result: The associated result.
+        error: The error in case of a failure.
     """
 
     result: Result
@@ -56,21 +81,37 @@ def __init__(
         result: Result = Result.FAIL,
         error: Exception | None = None,
     ):
+        """Initialize the constructor with the fixture result and store a possible error.
+
+        Args:
+            result: The result to store.
+            error: The error which happened when a failure occurred.
+        """
         self.result = result
         self.error = error
 
     def __bool__(self) -> bool:
+        """A wrapper around the stored :class:`Result`."""
         return bool(self.result)
 
 
 class Statistics(dict):
-    """
-    A helper class used to store the number of test cases by its result
-    along a few other basic information.
-    Using a dict provides a convenient way to format the data.
+    """How many test cases ended in which result state along some other basic information.
+
+    Subclassing :class:`dict` provides a convenient way to format the data.
+
+    The data are stored in the following keys:
+
+    * **PASS RATE** (:class:`int`) -- The FAIL/PASS ratio of all test cases.
+    * **DPDK VERSION** (:class:`str`) -- The tested DPDK version.
     """
 
     def __init__(self, dpdk_version: str | None):
+        """Extend the constructor with keys in which the data are stored.
+
+        Args:
+            dpdk_version: The version of tested DPDK.
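The Statistics class described here, a dict subclass counting results and maintaining a pass rate, can be sketched as follows. This is simplified: the real class also stores the DPDK version and covers all four result states:

```python
from enum import Enum, auto

class Result(Enum):
    PASS = auto()
    FAIL = auto()

class Statistics(dict):
    """Count test case results; `+=` a Result updates the counts and the pass rate."""

    def __init__(self):
        super().__init__()
        for result in Result:
            self[result.name] = 0
        self["PASS RATE"] = 0.0

    def __iadd__(self, other: Result) -> "Statistics":
        # Bump the counter for this result state, then recompute the pass rate
        # over everything executed so far.
        self[other.name] += 1
        executed = sum(self[result.name] for result in Result)
        self["PASS RATE"] = self[Result.PASS.name] * 100 / executed
        return self

stats = Statistics()
stats += Result.PASS
stats += Result.PASS
stats += Result.FAIL
```

Because the data live in dict keys, rendering the summary is just iterating `self.items()`, which is exactly what the `__str__` below does.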
+ """ super(Statistics, self).__init__() for result in Result: self[result.name] = 0 @@ -78,8 +119,17 @@ def __init__(self, dpdk_version: str | None): self["DPDK VERSION"] = dpdk_version def __iadd__(self, other: Result) -> "Statistics": - """ - Add a Result to the final count. + """Add a Result to the final count. + + Example: + stats: Statistics = Statistics() # empty Statistics + stats += Result.PASS # add a Result to `stats` + + Args: + other: The Result to add to this statistics object. + + Returns: + The modified statistics object. """ self[other.name] += 1 self["PASS RATE"] = ( @@ -88,9 +138,7 @@ def __iadd__(self, other: Result) -> "Statistics": return self def __str__(self) -> str: - """ - Provide a string representation of the data. - """ + """Each line contains the formatted key = value pair.""" stats_str = "" for key, value in self.items(): stats_str += f"{key:<12} = {value}\n" @@ -100,10 +148,16 @@ def __str__(self) -> str: class BaseResult(object): - """ - The Base class for all results. Stores the results of - the setup and teardown portions of the corresponding stage - and a list of results from each inner stage in _inner_results. + """Common data and behavior of DTS results. + + Stores the results of the setup and teardown portions of the corresponding stage. + The hierarchical nature of DTS results is captured recursively in an internal list. + A stage is each level in this particular hierarchy (pre-execution or the top-most level, + execution, build target, test suite and test case.) + + Attributes: + setup_result: The result of the setup of the particular stage. + teardown_result: The results of the teardown of the particular stage. 
""" setup_result: FixtureResult @@ -111,15 +165,28 @@ class BaseResult(object): _inner_results: MutableSequence["BaseResult"] def __init__(self): + """Initialize the constructor.""" self.setup_result = FixtureResult() self.teardown_result = FixtureResult() self._inner_results = [] def update_setup(self, result: Result, error: Exception | None = None) -> None: + """Store the setup result. + + Args: + result: The result of the setup. + error: The error that occurred in case of a failure. + """ self.setup_result.result = result self.setup_result.error = error def update_teardown(self, result: Result, error: Exception | None = None) -> None: + """Store the teardown result. + + Args: + result: The result of the teardown. + error: The error that occurred in case of a failure. + """ self.teardown_result.result = result self.teardown_result.error = error @@ -137,27 +204,55 @@ def _get_inner_errors(self) -> list[Exception]: ] def get_errors(self) -> list[Exception]: + """Compile errors from the whole result hierarchy. + + Returns: + The errors from setup, teardown and all errors found in the whole result hierarchy. + """ return self._get_setup_teardown_errors() + self._get_inner_errors() def add_stats(self, statistics: Statistics) -> None: + """Collate stats from the whole result hierarchy. + + Args: + statistics: The :class:`Statistics` object where the stats will be collated. + """ for inner_result in self._inner_results: inner_result.add_stats(statistics) class TestCaseResult(BaseResult, FixtureResult): - """ - The test case specific result. - Stores the result of the actual test case. - Also stores the test case name. + r"""The test case specific result. + + Stores the result of the actual test case. This is done by adding an extra superclass + in :class:`FixtureResult`. The setup and teardown results are :class:`FixtureResult`\s and + the class is itself a record of the test case. + + Attributes: + test_case_name: The test case name. 
""" test_case_name: str def __init__(self, test_case_name: str): + """Extend the constructor with `test_case_name`. + + Args: + test_case_name: The test case's name. + """ super(TestCaseResult, self).__init__() self.test_case_name = test_case_name def update(self, result: Result, error: Exception | None = None) -> None: + """Update the test case result. + + This updates the result of the test case itself and doesn't affect + the results of the setup and teardown steps in any way. + + Args: + result: The result of the test case. + error: The error that occurred in case of a failure. + """ self.result = result self.error = error @@ -167,36 +262,64 @@ def _get_inner_errors(self) -> list[Exception]: return [] def add_stats(self, statistics: Statistics) -> None: + r"""Add the test case result to statistics. + + The base method goes through the hierarchy recursively and this method is here to stop + the recursion, as the :class:`TestCaseResult`\s are the leaves of the hierarchy tree. + + Args: + statistics: The :class:`Statistics` object where the stats will be added. + """ statistics += self.result def __bool__(self) -> bool: + """The test case passed only if setup, teardown and the test case itself passed.""" return bool(self.setup_result) and bool(self.teardown_result) and bool(self.result) class TestSuiteResult(BaseResult): - """ - The test suite specific result. - The _inner_results list stores results of test cases in a given test suite. - Also stores the test suite name. + """The test suite specific result. + + The internal list stores the results of all test cases in a given test suite. + + Attributes: + suite_name: The test suite name. """ suite_name: str def __init__(self, suite_name: str): + """Extend the constructor with `suite_name`. + + Args: + suite_name: The test suite's name. 
+ """ super(TestSuiteResult, self).__init__() self.suite_name = suite_name def add_test_case(self, test_case_name: str) -> TestCaseResult: + """Add and return the inner result (test case). + + Returns: + The test case's result. + """ test_case_result = TestCaseResult(test_case_name) self._inner_results.append(test_case_result) return test_case_result class BuildTargetResult(BaseResult): - """ - The build target specific result. - The _inner_results list stores results of test suites in a given build target. - Also stores build target specifics, such as compiler used to build DPDK. + """The build target specific result. + + The internal list stores the results of all test suites in a given build target. + + Attributes: + arch: The DPDK build target architecture. + os: The DPDK build target operating system. + cpu: The DPDK build target CPU. + compiler: The DPDK build target compiler. + compiler_version: The DPDK build target compiler version. + dpdk_version: The built DPDK version. """ arch: Architecture @@ -207,6 +330,11 @@ class BuildTargetResult(BaseResult): dpdk_version: str | None def __init__(self, build_target: BuildTargetConfiguration): + """Extend the constructor with the `build_target`'s build target config. + + Args: + build_target: The build target's test run configuration. + """ super(BuildTargetResult, self).__init__() self.arch = build_target.arch self.os = build_target.os @@ -216,20 +344,35 @@ def __init__(self, build_target: BuildTargetConfiguration): self.dpdk_version = None def add_build_target_info(self, versions: BuildTargetInfo) -> None: + """Add information about the build target gathered at runtime. + + Args: + versions: The additional information. + """ self.compiler_version = versions.compiler_version self.dpdk_version = versions.dpdk_version def add_test_suite(self, test_suite_name: str) -> TestSuiteResult: + """Add and return the inner result (test suite). + + Returns: + The test suite's result. 
+ """ test_suite_result = TestSuiteResult(test_suite_name) self._inner_results.append(test_suite_result) return test_suite_result class ExecutionResult(BaseResult): - """ - The execution specific result. - The _inner_results list stores results of build targets in a given execution. - Also stores the SUT node configuration. + """The execution specific result. + + The internal list stores the results of all build targets in a given execution. + + Attributes: + sut_node: The SUT node used in the execution. + sut_os_name: The operating system of the SUT node. + sut_os_version: The operating system version of the SUT node. + sut_kernel_version: The operating system kernel version of the SUT node. """ sut_node: NodeConfiguration @@ -238,34 +381,53 @@ class ExecutionResult(BaseResult): sut_kernel_version: str def __init__(self, sut_node: NodeConfiguration): + """Extend the constructor with the `sut_node`'s config. + + Args: + sut_node: The SUT node's test run configuration used in the execution. + """ super(ExecutionResult, self).__init__() self.sut_node = sut_node def add_build_target(self, build_target: BuildTargetConfiguration) -> BuildTargetResult: + """Add and return the inner result (build target). + + Args: + build_target: The build target's test run configuration. + + Returns: + The build target's result. + """ build_target_result = BuildTargetResult(build_target) self._inner_results.append(build_target_result) return build_target_result def add_sut_info(self, sut_info: NodeInfo) -> None: + """Add SUT information gathered at runtime. + + Args: + sut_info: The additional SUT node information. + """ self.sut_os_name = sut_info.os_name self.sut_os_version = sut_info.os_version self.sut_kernel_version = sut_info.kernel_version class DTSResult(BaseResult): - """ - Stores environment information and test results from a DTS run, which are: - * Execution level information, such as SUT and TG hardware. 
- * Build target level information, such as compiler, target OS and cpu. - * Test suite results. - * All errors that are caught and recorded during DTS execution. + """Stores environment information and test results from a DTS run. - The information is stored in nested objects. + * Execution level information, such as testbed and the test suite list, + * Build target level information, such as compiler, target OS and cpu, + * Test suite and test case results, + * All errors that are caught and recorded during DTS execution. - The class is capable of computing the return code used to exit DTS with - from the stored error. + The information is stored hierarchically. This is the first level of the hierarchy + and as such is where the data from the whole hierarchy are collated or processed. - It also provides a brief statistical summary of passed/failed test cases. + The internal list stores the results of all executions. + + Attributes: + dpdk_version: The DPDK version to record. """ dpdk_version: str | None @@ -276,6 +438,11 @@ class DTSResult(BaseResult): _stats_filename: str def __init__(self, logger: DTSLOG): + """Extend the constructor with top-level specifics. + + Args: + logger: The logger instance the whole result will use. + """ super(DTSResult, self).__init__() self.dpdk_version = None self._logger = logger @@ -285,21 +452,33 @@ def __init__(self, logger: DTSLOG): self._stats_filename = os.path.join(SETTINGS.output_dir, "statistics.txt") def add_execution(self, sut_node: NodeConfiguration) -> ExecutionResult: + """Add and return the inner result (execution). + + Args: + sut_node: The SUT node's test run configuration. + + Returns: + The execution's result. + """ execution_result = ExecutionResult(sut_node) self._inner_results.append(execution_result) return execution_result def add_error(self, error: Exception) -> None: + """Record an error that occurred outside any execution. + + Args: + error: The exception to record.
+ """ self._errors.append(error) def process(self) -> None: - """ - Process the data after a DTS run. - The data is added to nested objects during runtime and this parent object - is not updated at that time. This requires us to process the nested data - after it's all been gathered. + """Process the data after a whole DTS run. + + The data is added to inner objects during runtime and this object is not updated + at that time. This requires us to process the inner data after it's all been gathered. - The processing gathers all errors and the result statistics of test cases. + The processing gathers all errors and the statistics of test case results. """ self._errors += self.get_errors() if self._errors and self._logger: @@ -313,8 +492,10 @@ def process(self) -> None: stats_file.write(str(self._stats_result)) def get_return_code(self) -> int: - """ - Go through all stored Exceptions and return the highest error code found. + """Go through all stored Exceptions and return the final DTS error code. + + Returns: + The highest error code found. 
""" for error in self._errors: error_return_code = ErrorSeverity.GENERIC_ERR From patchwork Mon Dec 4 10:24:18 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Juraj_Linke=C5=A1?= X-Patchwork-Id: 134798 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 03C874366A; Mon, 4 Dec 2023 11:25:49 +0100 (CET) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 51AD841611; Mon, 4 Dec 2023 11:24:46 +0100 (CET) Received: from mail-wm1-f45.google.com (mail-wm1-f45.google.com [209.85.128.45]) by mails.dpdk.org (Postfix) with ESMTP id B36B041109 for ; Mon, 4 Dec 2023 11:24:41 +0100 (CET) Received: by mail-wm1-f45.google.com with SMTP id 5b1f17b1804b1-40c09dfa03cso10501585e9.2 for ; Mon, 04 Dec 2023 02:24:41 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=pantheon.tech; s=google; t=1701685481; x=1702290281; darn=dpdk.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=zMSy2qEiR6GNRiO4OC9OI2GqwU9Yj9AIYnq6MiKrGeo=; b=BC1Hm7Gzc1OM7qBX4QWWM4T8Gh8NM5QxMZ8A/e8dDrouPYhc+3JbbXKusxDWlvEdZQ Nag1wahs2ZaVV+rKRbtvX/a43HaDp9qn9/l6BcTxezc15/0u474wpybNCvf1Ij8PWfp3 UuBNGpzjEVSwJbfou03b1zuph0ZJAChU9MZ8kiVgdHLErkDh+wkoMo8D7hSvNruyzgkQ mb7edKUEsuwdQHi2FGQmfi+D1tUlMFIrSpZZ+9l6rG1FWfhrhHnerEbiJwS1H9TGSZu4 qTDQPjGctOOav9NJEEJEOwMsKULsMTX/puKlBMtVSwjxDku/l3g74mMXKYu/LutkJPuE SBgw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1701685481; x=1702290281; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; 
bh=zMSy2qEiR6GNRiO4OC9OI2GqwU9Yj9AIYnq6MiKrGeo=; b=KRRD6yvCLtDMVdeh9jfEcCUU8y4yiqpS9vwznxo/7e+rDril6CtMNjaRR5PYEsxBZZ zFGQSZimy39CwVW/JALAJWRyBhVh3ruUTkb5SU3WC2LZt56XkLwhJEwhqmkdD2OgpD96 AjBzkh0GSPVBm/CbKZRtfaDYNcBMVHaNjuLlGOAdLHnMkND/mJ/kfbAAmISo5CNvCiKg RmRUFBjJhdxkhKWkAWhyCkoZ8BNEgl14jtxWfnQK6LZZTlGFa8E8tULRwC7TM9Y/eXyY mC0JC0Kx/ZK28b83n/8OpalcBBj+V+UPxAS75Xsm9EmvGI/8jq3T5fe4QBoZcT0y/e8t W4Ug== X-Gm-Message-State: AOJu0Yy5XR2csaLvKCHsqZtTS84PhjEdo+6FmaKhn9yr19Tj1/YIO2T0 saJ0KK41cqVdqiZo8rDaL1T+FQ== X-Google-Smtp-Source: AGHT+IGkDaWwVHXkp92J5Sf22Egt+LMJOPDA+eqkfufqzlm8QBQy4y1/o5pQr9EK595m8pYNWORPJA== X-Received: by 2002:a05:600c:1149:b0:40b:5e21:d35a with SMTP id z9-20020a05600c114900b0040b5e21d35amr2544324wmz.99.1701685481306; Mon, 04 Dec 2023 02:24:41 -0800 (PST) Received: from jlinkes-PT-Latitude-5530.pantheon.local ([81.89.53.154]) by smtp.gmail.com with ESMTPSA id m28-20020a05600c3b1c00b0040b2b38a1fasm14255415wms.4.2023.12.04.02.24.40 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 04 Dec 2023 02:24:40 -0800 (PST) From: =?utf-8?q?Juraj_Linke=C5=A1?= To: thomas@monjalon.net, Honnappa.Nagarahalli@arm.com, jspewock@iol.unh.edu, probb@iol.unh.edu, paul.szczepanek@arm.com, yoan.picchi@foss.arm.com, Luca.Vizzarro@arm.com Cc: dev@dpdk.org, =?utf-8?q?Juraj_Linke=C5=A1?= Subject: [PATCH v9 10/21] dts: config docstring update Date: Mon, 4 Dec 2023 11:24:18 +0100 Message-Id: <20231204102429.106709-11-juraj.linkes@pantheon.tech> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20231204102429.106709-1-juraj.linkes@pantheon.tech> References: <20231123151344.162812-1-juraj.linkes@pantheon.tech> <20231204102429.106709-1-juraj.linkes@pantheon.tech> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Format according to the Google format and PEP257, with slight deviations. 
Signed-off-by: Juraj Linkeš --- dts/framework/config/__init__.py | 369 ++++++++++++++++++++++++++----- dts/framework/config/types.py | 132 +++++++++++ 2 files changed, 444 insertions(+), 57 deletions(-) create mode 100644 dts/framework/config/types.py diff --git a/dts/framework/config/__init__.py b/dts/framework/config/__init__.py index ef25a463c0..62eded7f04 100644 --- a/dts/framework/config/__init__.py +++ b/dts/framework/config/__init__.py @@ -3,8 +3,34 @@ # Copyright(c) 2022-2023 University of New Hampshire # Copyright(c) 2023 PANTHEON.tech s.r.o. -""" -Yaml config parsing methods +"""Testbed configuration and test suite specification. + +This package offers classes that hold real-time information about the testbed, hold test run +configuration describing the tested testbed and a loader function, :func:`load_config`, which loads +the YAML test run configuration file +and validates it according to :download:`the schema `. + +The YAML test run configuration file is parsed into a dictionary, parts of which are used throughout +this package. The allowed keys and types inside this dictionary are defined in +the :doc:`types ` module. + +The test run configuration has two main sections: + + * The :class:`ExecutionConfiguration` which defines what tests are going to be run + and how DPDK will be built. It also references the testbed where these tests and DPDK + are going to be run, + * The nodes of the testbed are defined in the other section, + a :class:`list` of :class:`NodeConfiguration` objects. + +The real-time information about testbed is supposed to be gathered at runtime. + +The classes defined in this package make heavy use of :mod:`dataclasses`. +All of them use slots and are frozen: + + * Slots enables some optimizations, by pre-allocating space for the defined + attributes in the underlying data structure, + * Frozen makes the object immutable. This enables further optimizations, + and makes it thread safe should we ever want to move in that direction. 
""" import json @@ -12,11 +38,20 @@ import pathlib from dataclasses import dataclass from enum import auto, unique -from typing import Any, TypedDict, Union +from typing import Union import warlock # type: ignore[import] import yaml +from framework.config.types import ( + BuildTargetConfigDict, + ConfigurationDict, + ExecutionConfigDict, + NodeConfigDict, + PortConfigDict, + TestSuiteConfigDict, + TrafficGeneratorConfigDict, +) from framework.exception import ConfigurationError from framework.settings import SETTINGS from framework.utils import StrEnum @@ -24,55 +59,97 @@ @unique class Architecture(StrEnum): + r"""The supported architectures of :class:`~framework.testbed_model.node.Node`\s.""" + + #: i686 = auto() + #: x86_64 = auto() + #: x86_32 = auto() + #: arm64 = auto() + #: ppc64le = auto() @unique class OS(StrEnum): + r"""The supported operating systems of :class:`~framework.testbed_model.node.Node`\s.""" + + #: linux = auto() + #: freebsd = auto() + #: windows = auto() @unique class CPUType(StrEnum): + r"""The supported CPUs of :class:`~framework.testbed_model.node.Node`\s.""" + + #: native = auto() + #: armv8a = auto() + #: dpaa2 = auto() + #: thunderx = auto() + #: xgene1 = auto() @unique class Compiler(StrEnum): + r"""The supported compilers of :class:`~framework.testbed_model.node.Node`\s.""" + + #: gcc = auto() + #: clang = auto() + #: icc = auto() + #: msvc = auto() @unique class TrafficGeneratorType(StrEnum): + """The supported traffic generators.""" + + #: SCAPY = auto() -# Slots enables some optimizations, by pre-allocating space for the defined -# attributes in the underlying data structure. -# -# Frozen makes the object immutable. This enables further optimizations, -# and makes it thread safe should we every want to move in that direction. @dataclass(slots=True, frozen=True) class HugepageConfiguration: + r"""The hugepage configuration of :class:`~framework.testbed_model.node.Node`\s. + + Attributes: + amount: The number of hugepages. 
+ force_first_numa: If :data:`True`, the hugepages will be configured on the first NUMA node. + """ + amount: int force_first_numa: bool @dataclass(slots=True, frozen=True) class PortConfig: + r"""The port configuration of :class:`~framework.testbed_model.node.Node`\s. + + Attributes: + node: The :class:`~framework.testbed_model.node.Node` where this port exists. + pci: The PCI address of the port. + os_driver_for_dpdk: The operating system driver name for use with DPDK. + os_driver: The operating system driver name when the operating system controls the port. + peer_node: The :class:`~framework.testbed_model.node.Node` of the port + connected to this port. + peer_pci: The PCI address of the port connected to this port. + """ + node: str pci: str os_driver_for_dpdk: str @@ -81,18 +158,44 @@ class PortConfig: peer_pci: str @staticmethod - def from_dict(node: str, d: dict) -> "PortConfig": + def from_dict(node: str, d: PortConfigDict) -> "PortConfig": + """A convenience method that creates the object from fewer inputs. + + Args: + node: The node where this port exists. + d: The configuration dictionary. + + Returns: + The port configuration instance. + """ return PortConfig(node=node, **d) @dataclass(slots=True, frozen=True) class TrafficGeneratorConfig: + """The configuration of traffic generators. + + The class will be expanded when more configuration is needed. + + Attributes: + traffic_generator_type: The type of the traffic generator. + """ + traffic_generator_type: TrafficGeneratorType @staticmethod - def from_dict(d: dict) -> "ScapyTrafficGeneratorConfig": - # This looks useless now, but is designed to allow expansion to traffic - # generators that require more configuration later. + def from_dict(d: TrafficGeneratorConfigDict) -> "ScapyTrafficGeneratorConfig": + """A convenience method that produces traffic generator config of the proper type. + + Args: + d: The configuration dictionary. + + Returns: + The traffic generator configuration instance. 
+ + Raises: + ConfigurationError: An unknown traffic generator type was encountered. + """ match TrafficGeneratorType(d["type"]): case TrafficGeneratorType.SCAPY: return ScapyTrafficGeneratorConfig( @@ -104,11 +207,31 @@ def from_dict(d: dict) -> "ScapyTrafficGeneratorConfig": @dataclass(slots=True, frozen=True) class ScapyTrafficGeneratorConfig(TrafficGeneratorConfig): + """Scapy traffic generator specific configuration.""" + pass @dataclass(slots=True, frozen=True) class NodeConfiguration: + r"""The configuration of :class:`~framework.testbed_model.node.Node`\s. + + Attributes: + name: The name of the :class:`~framework.testbed_model.node.Node`. + hostname: The hostname of the :class:`~framework.testbed_model.node.Node`. + Can be an IP or a domain name. + user: The name of the user used to connect to + the :class:`~framework.testbed_model.node.Node`. + password: The password of the user. The use of passwords is heavily discouraged. + Please use keys instead. + arch: The architecture of the :class:`~framework.testbed_model.node.Node`. + os: The operating system of the :class:`~framework.testbed_model.node.Node`. + lcores: A comma delimited list of logical cores to use when running DPDK. + use_first_core: If :data:`True`, the first logical core won't be used. + hugepages: An optional hugepage configuration. + ports: The ports that can be used in testing. 
+ """ + name: str hostname: str user: str @@ -121,55 +244,89 @@ class NodeConfiguration: ports: list[PortConfig] @staticmethod - def from_dict(d: dict) -> Union["SutNodeConfiguration", "TGNodeConfiguration"]: - hugepage_config = d.get("hugepages") - if hugepage_config: - if "force_first_numa" not in hugepage_config: - hugepage_config["force_first_numa"] = False - hugepage_config = HugepageConfiguration(**hugepage_config) - - common_config = { - "name": d["name"], - "hostname": d["hostname"], - "user": d["user"], - "password": d.get("password"), - "arch": Architecture(d["arch"]), - "os": OS(d["os"]), - "lcores": d.get("lcores", "1"), - "use_first_core": d.get("use_first_core", False), - "hugepages": hugepage_config, - "ports": [PortConfig.from_dict(d["name"], port) for port in d["ports"]], - } - + def from_dict( + d: NodeConfigDict, + ) -> Union["SutNodeConfiguration", "TGNodeConfiguration"]: + """A convenience method that processes the inputs before creating a specialized instance. + + Args: + d: The configuration dictionary. + + Returns: + Either an SUT or TG configuration instance. 
+ """ + hugepage_config = None + if "hugepages" in d: + hugepage_config_dict = d["hugepages"] + if "force_first_numa" not in hugepage_config_dict: + hugepage_config_dict["force_first_numa"] = False + hugepage_config = HugepageConfiguration(**hugepage_config_dict) + + # The calls here contain duplicated code which is here because Mypy doesn't + # properly support dictionary unpacking with TypedDicts if "traffic_generator" in d: return TGNodeConfiguration( + name=d["name"], + hostname=d["hostname"], + user=d["user"], + password=d.get("password"), + arch=Architecture(d["arch"]), + os=OS(d["os"]), + lcores=d.get("lcores", "1"), + use_first_core=d.get("use_first_core", False), + hugepages=hugepage_config, + ports=[PortConfig.from_dict(d["name"], port) for port in d["ports"]], traffic_generator=TrafficGeneratorConfig.from_dict(d["traffic_generator"]), - **common_config, ) else: return SutNodeConfiguration( - memory_channels=d.get("memory_channels", 1), **common_config + name=d["name"], + hostname=d["hostname"], + user=d["user"], + password=d.get("password"), + arch=Architecture(d["arch"]), + os=OS(d["os"]), + lcores=d.get("lcores", "1"), + use_first_core=d.get("use_first_core", False), + hugepages=hugepage_config, + ports=[PortConfig.from_dict(d["name"], port) for port in d["ports"]], + memory_channels=d.get("memory_channels", 1), ) @dataclass(slots=True, frozen=True) class SutNodeConfiguration(NodeConfiguration): + """:class:`~framework.testbed_model.sut_node.SutNode` specific configuration. + + Attributes: + memory_channels: The number of memory channels to use when running DPDK. + """ + memory_channels: int @dataclass(slots=True, frozen=True) class TGNodeConfiguration(NodeConfiguration): + """:class:`~framework.testbed_model.tg_node.TGNode` specific configuration. + + Attributes: + traffic_generator: The configuration of the traffic generator present on the TG node. 
+ """ + traffic_generator: ScapyTrafficGeneratorConfig @dataclass(slots=True, frozen=True) class NodeInfo: - """Class to hold important versions within the node. - - This class, unlike the NodeConfiguration class, cannot be generated at the start. - This is because we need to initialize a connection with the node before we can - collect the information needed in this class. Therefore, it cannot be a part of - the configuration class above. + """Supplemental node information. + + Attributes: + os_name: The name of the running operating system of + the :class:`~framework.testbed_model.node.Node`. + os_version: The version of the running operating system of + the :class:`~framework.testbed_model.node.Node`. + kernel_version: The kernel version of the running operating system of + the :class:`~framework.testbed_model.node.Node`. """ os_name: str @@ -179,6 +336,20 @@ class NodeInfo: @dataclass(slots=True, frozen=True) class BuildTargetConfiguration: + """DPDK build configuration. + + The configuration used for building DPDK. + + Attributes: + arch: The target architecture to build for. + os: The target os to build for. + cpu: The target CPU to build for. + compiler: The compiler executable to use. + compiler_wrapper: This string will be put in front of the compiler when + executing the build. Useful for adding wrapper commands, such as ``ccache``. + name: The name of the compiler. + """ + arch: Architecture os: OS cpu: CPUType @@ -187,7 +358,18 @@ class BuildTargetConfiguration: name: str @staticmethod - def from_dict(d: dict) -> "BuildTargetConfiguration": + def from_dict(d: BuildTargetConfigDict) -> "BuildTargetConfiguration": + r"""A convenience method that processes the inputs before creating an instance. + + `arch`, `os`, `cpu` and `compiler` are converted to :class:`Enum`\s and + `name` is constructed from `arch`, `os`, `cpu` and `compiler`. + + Args: + d: The configuration dictionary. + + Returns: + The build target configuration instance. 
+ """ return BuildTargetConfiguration( arch=Architecture(d["arch"]), os=OS(d["os"]), @@ -200,23 +382,29 @@ def from_dict(d: dict) -> "BuildTargetConfiguration": @dataclass(slots=True, frozen=True) class BuildTargetInfo: - """Class to hold important versions within the build target. + """Various versions and other information about a build target. - This is very similar to the NodeInfo class, it just instead holds information - for the build target. + Attributes: + dpdk_version: The DPDK version that was built. + compiler_version: The version of the compiler used to build DPDK. """ dpdk_version: str compiler_version: str -class TestSuiteConfigDict(TypedDict): - suite: str - cases: list[str] - - @dataclass(slots=True, frozen=True) class TestSuiteConfig: + """Test suite configuration. + + Information about a single test suite to be executed. + + Attributes: + test_suite: The name of the test suite module without the starting ``TestSuite_``. + test_cases: The names of test cases from this test suite to execute. + If empty, all test cases will be executed. + """ + test_suite: str test_cases: list[str] @@ -224,6 +412,14 @@ class TestSuiteConfig: def from_dict( entry: str | TestSuiteConfigDict, ) -> "TestSuiteConfig": + """Create an instance from two different types. + + Args: + entry: Either a suite name or a dictionary containing the config. + + Returns: + The test suite configuration instance. + """ if isinstance(entry, str): return TestSuiteConfig(test_suite=entry, test_cases=[]) elif isinstance(entry, dict): @@ -234,19 +430,49 @@ def from_dict( @dataclass(slots=True, frozen=True) class ExecutionConfiguration: + """The configuration of an execution. + + The configuration contains testbed information, what tests to execute + and with what DPDK build. + + Attributes: + build_targets: A list of DPDK builds to test. + perf: Whether to run performance tests. + func: Whether to run functional tests. + skip_smoke_tests: Whether to skip smoke tests. 
+ test_suites: The names of test suites and/or test cases to execute. + system_under_test_node: The SUT node to use in this execution. + traffic_generator_node: The TG node to use in this execution. + vdevs: The names of virtual devices to test. + """ + build_targets: list[BuildTargetConfiguration] perf: bool func: bool + skip_smoke_tests: bool test_suites: list[TestSuiteConfig] system_under_test_node: SutNodeConfiguration traffic_generator_node: TGNodeConfiguration vdevs: list[str] - skip_smoke_tests: bool @staticmethod def from_dict( - d: dict, node_map: dict[str, Union[SutNodeConfiguration | TGNodeConfiguration]] + d: ExecutionConfigDict, + node_map: dict[str, Union[SutNodeConfiguration | TGNodeConfiguration]], ) -> "ExecutionConfiguration": + """A convenience method that processes the inputs before creating an instance. + + The build target and the test suite config are transformed into their respective objects. + SUT and TG configurations are taken from `node_map`. The other (:class:`bool`) attributes + are just stored. + + Args: + d: The configuration dictionary. + node_map: A dictionary mapping node names to their config objects. + + Returns: + The execution configuration instance. + """ build_targets: list[BuildTargetConfiguration] = list( map(BuildTargetConfiguration.from_dict, d["build_targets"]) ) @@ -283,10 +509,31 @@ def from_dict( @dataclass(slots=True, frozen=True) class Configuration: + """DTS testbed and test configuration. + + The node configuration is not stored in this object. Rather, all used node configurations + are stored inside the execution configuration where the nodes are actually used. + + Attributes: + executions: Execution configurations. + """ + executions: list[ExecutionConfiguration] @staticmethod - def from_dict(d: dict) -> "Configuration": + def from_dict(d: ConfigurationDict) -> "Configuration": + """A convenience method that processes the inputs before creating an instance. 
+ + Build target and test suite config are transformed into their respective objects. + The node configurations are created first and then referenced by the execution + configurations that use them. + + Args: + d: The configuration dictionary. + + Returns: + The whole configuration instance. + """ nodes: list[Union[SutNodeConfiguration | TGNodeConfiguration]] = list( map(NodeConfiguration.from_dict, d["nodes"]) ) @@ -303,9 +550,17 @@ def from_dict(d: dict) -> "Configuration": def load_config() -> Configuration: - """ - Loads the configuration file and the configuration file schema, - validates the configuration file, and creates a configuration object. + """Load DTS test run configuration from a file. + + Load the YAML test run configuration file + and :download:`the configuration file schema `, + validate the test run configuration file, and create a test run configuration object. + + The YAML test run configuration file is specified in the :option:`--config-file` command line + argument or the :envvar:`DTS_CFG_FILE` environment variable. + + Returns: + The parsed test run configuration. """ with open(SETTINGS.config_file_path, "r") as f: config_data = yaml.safe_load(f) @@ -314,6 +569,6 @@ def load_config() -> Configuration: with open(schema_path, "r") as f: schema = json.load(f) - config: dict[str, Any] = warlock.model_factory(schema, name="_Config")(config_data) - config_obj: Configuration = Configuration.from_dict(dict(config)) + config = warlock.model_factory(schema, name="_Config")(config_data) + config_obj: Configuration = Configuration.from_dict(dict(config)) # type: ignore[arg-type] return config_obj diff --git a/dts/framework/config/types.py b/dts/framework/config/types.py new file mode 100644 index 0000000000..1927910d88 --- /dev/null +++ b/dts/framework/config/types.py @@ -0,0 +1,132 @@ +# SPDX-License-Identifier: BSD-3-Clause +# Copyright(c) 2023 PANTHEON.tech s.r.o. + +"""Configuration dictionary contents specification.
+ +These type definitions serve as documentation of the configuration dictionary contents. + +The definitions use the built-in :class:`~typing.TypedDict` construct. +""" + +from typing import TypedDict + + +class PortConfigDict(TypedDict): + """Allowed keys and values.""" + + #: + pci: str + #: + os_driver_for_dpdk: str + #: + os_driver: str + #: + peer_node: str + #: + peer_pci: str + + +class TrafficGeneratorConfigDict(TypedDict): + """Allowed keys and values.""" + + #: + type: str + + +class HugepageConfigurationDict(TypedDict): + """Allowed keys and values.""" + + #: + amount: int + #: + force_first_numa: bool + + +class NodeConfigDict(TypedDict): + """Allowed keys and values.""" + + #: + hugepages: HugepageConfigurationDict + #: + name: str + #: + hostname: str + #: + user: str + #: + password: str + #: + arch: str + #: + os: str + #: + lcores: str + #: + use_first_core: bool + #: + ports: list[PortConfigDict] + #: + memory_channels: int + #: + traffic_generator: TrafficGeneratorConfigDict + + +class BuildTargetConfigDict(TypedDict): + """Allowed keys and values.""" + + #: + arch: str + #: + os: str + #: + cpu: str + #: + compiler: str + #: + compiler_wrapper: str + + +class TestSuiteConfigDict(TypedDict): + """Allowed keys and values.""" + + #: + suite: str + #: + cases: list[str] + + +class ExecutionSUTConfigDict(TypedDict): + """Allowed keys and values.""" + + #: + node_name: str + #: + vdevs: list[str] + + +class ExecutionConfigDict(TypedDict): + """Allowed keys and values.""" + + #: + build_targets: list[BuildTargetConfigDict] + #: + perf: bool + #: + func: bool + #: + skip_smoke_tests: bool + #: + test_suites: TestSuiteConfigDict + #: + system_under_test_node: ExecutionSUTConfigDict + #: + traffic_generator_node: str + + +class ConfigurationDict(TypedDict): + """Allowed keys and values.""" + + #: + nodes: list[NodeConfigDict] + #: + executions: list[ExecutionConfigDict] From patchwork Mon Dec 4 10:24:19 2023 Content-Type: text/plain; charset="utf-8" 
MIME-Version: 1.0
X-Patchwork-Id: 134799
From: Juraj Linkeš
To: thomas@monjalon.net, Honnappa.Nagarahalli@arm.com, jspewock@iol.unh.edu, probb@iol.unh.edu, paul.szczepanek@arm.com, yoan.picchi@foss.arm.com, Luca.Vizzarro@arm.com
Cc: dev@dpdk.org
Subject: [PATCH v9 11/21] dts: remote session docstring update
Date: Mon, 4 Dec 2023 11:24:19 +0100
Message-Id: <20231204102429.106709-12-juraj.linkes@pantheon.tech>
In-Reply-To: <20231204102429.106709-1-juraj.linkes@pantheon.tech>

Format according to the Google format and PEP257, with slight deviations.
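For readers unfamiliar with the target style, a docstring in the Google/PEP 257 format this series converges on looks roughly like the sketch below. The function and its behavior are hypothetical, invented purely to illustrate the section layout; they are not taken from the patch.

```python
def send_text(command: str, timeout: float = 15.0, verify: bool = False) -> str:
    """Send `command` to a connected node.

    A one-line summary, a blank line, and then indented ``Args``/``Returns``/``Raises``
    sections are the core of the Google docstring format.

    Args:
        command: The command to execute.
        timeout: Wait at most this long in seconds for execution to complete.
        verify: If True, check the exit code of the command.

    Returns:
        The output of the command.

    Raises:
        ValueError: If `command` is empty.
    """
    if not command:
        raise ValueError("command must not be empty")
    # A stand-in for real remote execution, so the shape of the API is visible.
    return f"ran: {command}"
```

Sphinx's napoleon extension parses these sections when generating the API documentation, which is why the series standardizes on them.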
Signed-off-by: Juraj Linkeš --- dts/framework/remote_session/__init__.py | 39 +++++- .../remote_session/remote_session.py | 130 +++++++++++++----- dts/framework/remote_session/ssh_session.py | 16 +-- 3 files changed, 137 insertions(+), 48 deletions(-) diff --git a/dts/framework/remote_session/__init__.py b/dts/framework/remote_session/__init__.py index 5e7ddb2b05..51a01d6b5e 100644 --- a/dts/framework/remote_session/__init__.py +++ b/dts/framework/remote_session/__init__.py @@ -2,12 +2,14 @@ # Copyright(c) 2023 PANTHEON.tech s.r.o. # Copyright(c) 2023 University of New Hampshire -""" -The package provides modules for managing remote connections to a remote host (node), -differentiated by OS. -The package provides a factory function, create_session, that returns the appropriate -remote connection based on the passed configuration. The differences are in the -underlying transport protocol (e.g. SSH) and remote OS (e.g. Linux). +"""Remote interactive and non-interactive sessions. + +This package provides modules for managing remote connections to a remote host (node). + +The non-interactive sessions send commands and return their output and exit code. + +The interactive sessions open an interactive shell which is continuously open, +allowing it to send and receive data within that particular shell. """ # pylama:ignore=W0611 @@ -26,10 +28,35 @@ def create_remote_session( node_config: NodeConfiguration, name: str, logger: DTSLOG ) -> RemoteSession: + """Factory for non-interactive remote sessions. + + The function returns an SSH session, but will be extended if support + for other protocols is added. + + Args: + node_config: The test run configuration of the node to connect to. + name: The name of the session. + logger: The logger instance this session will use. + + Returns: + The SSH remote session. 
+ """ return SSHSession(node_config, name, logger) def create_interactive_session( node_config: NodeConfiguration, logger: DTSLOG ) -> InteractiveRemoteSession: + """Factory for interactive remote sessions. + + The function returns an interactive SSH session, but will be extended if support + for other protocols is added. + + Args: + node_config: The test run configuration of the node to connect to. + logger: The logger instance this session will use. + + Returns: + The interactive SSH remote session. + """ return InteractiveRemoteSession(node_config, logger) diff --git a/dts/framework/remote_session/remote_session.py b/dts/framework/remote_session/remote_session.py index 719f7d1ef7..2059f9a981 100644 --- a/dts/framework/remote_session/remote_session.py +++ b/dts/framework/remote_session/remote_session.py @@ -3,6 +3,13 @@ # Copyright(c) 2022-2023 PANTHEON.tech s.r.o. # Copyright(c) 2022-2023 University of New Hampshire +"""Base remote session. + +This module contains the abstract base class for remote sessions and defines +the structure of the result of a command execution. +""" + + import dataclasses from abc import ABC, abstractmethod from pathlib import PurePath @@ -15,8 +22,14 @@ @dataclasses.dataclass(slots=True, frozen=True) class CommandResult: - """ - The result of remote execution of a command. + """The result of remote execution of a command. + + Attributes: + name: The name of the session that executed the command. + command: The executed command. + stdout: The standard output the command produced. + stderr: The standard error output the command produced. + return_code: The return code the command exited with. 
""" name: str @@ -26,6 +39,7 @@ class CommandResult: return_code: int def __str__(self) -> str: + """Format the command outputs.""" return ( f"stdout: '{self.stdout}'\n" f"stderr: '{self.stderr}'\n" @@ -34,13 +48,24 @@ def __str__(self) -> str: class RemoteSession(ABC): - """ - The base class for defining which methods must be implemented in order to connect - to a remote host (node) and maintain a remote session. The derived classes are - supposed to implement/use some underlying transport protocol (e.g. SSH) to - implement the methods. On top of that, it provides some basic services common to - all derived classes, such as keeping history and logging what's being executed - on the remote node. + """Non-interactive remote session. + + The abstract methods must be implemented in order to connect to a remote host (node) + and maintain a remote session. + The subclasses must use (or implement) some underlying transport protocol (e.g. SSH) + to implement the methods. On top of that, it provides some basic services common to all + subclasses, such as keeping history and logging what's being executed on the remote node. + + Attributes: + name: The name of the session. + hostname: The node's hostname. Could be an IP (possibly with port, separated by a colon) + or a domain name. + ip: The IP address of the node or a domain name, whichever was used in `hostname`. + port: The port of the node, if given in `hostname`. + username: The username used in the connection. + password: The password used in the connection. Most frequently empty, + as the use of passwords is discouraged. + history: The executed commands during this session. """ name: str @@ -59,6 +84,16 @@ def __init__( session_name: str, logger: DTSLOG, ): + """Connect to the node during initialization. + + Args: + node_config: The test run configuration of the node to connect to. + session_name: The name of the session. + logger: The logger instance this session will use. 
+
+        Raises:
+            SSHConnectionError: If the connection to the node was not successful.
+        """
         self._node_config = node_config
 
         self.name = session_name
@@ -79,8 +114,13 @@ def __init__(
 
     @abstractmethod
     def _connect(self) -> None:
-        """
-        Create connection to assigned node.
+        """Create a connection to the node.
+
+        The implementation must assign the established session to self.session.
+
+        The implementation must catch all exceptions and convert them to an SSHConnectionError.
+
+        The implementation may optionally implement retry attempts.
         """
 
     def send_command(
@@ -90,11 +130,24 @@ def send_command(
         self,
         command: str,
         timeout: float = SETTINGS.timeout,
         verify: bool = False,
         env: dict | None = None,
     ) -> CommandResult:
-        """
-        Send a command to the connected node using optional env vars
-        and return CommandResult.
-        If verify is True, check the return code of the executed command
-        and raise a RemoteCommandExecutionError if the command failed.
+        """Send `command` to the connected node.
+
+        The :option:`--timeout` command line argument and the :envvar:`DTS_TIMEOUT`
+        environment variable configure the timeout of command execution.
+
+        Args:
+            command: The command to execute.
+            timeout: Wait at most this long in seconds for `command` execution to complete.
+            verify: If :data:`True`, will check the exit code of `command`.
+            env: A dictionary with environment variables to be used with `command` execution.
+
+        Raises:
+            SSHSessionDeadError: If the session isn't alive when sending `command`.
+            SSHTimeoutError: If `command` execution timed out.
+            RemoteCommandExecutionError: If `verify` is :data:`True` and `command` execution failed.
+
+        Returns:
+            The output of the command along with the return code.
""" self._logger.info(f"Sending: '{command}'" + (f" with env vars: '{env}'" if env else "")) result = self._send_command(command, timeout, env) @@ -111,29 +164,38 @@ def send_command( @abstractmethod def _send_command(self, command: str, timeout: float, env: dict | None) -> CommandResult: - """ - Use the underlying protocol to execute the command using optional env vars - and return CommandResult. + """Send a command to the connected node. + + The implementation must execute the command remotely with `env` environment variables + and return the result. + + The implementation must except all exceptions and raise: + + * SSHSessionDeadError if the session is not alive, + * SSHTimeoutError if the command execution times out. """ def close(self, force: bool = False) -> None: - """ - Close the remote session and free all used resources. + """Close the remote session and free all used resources. + + Args: + force: Force the closure of the connection. This may not clean up all resources. """ self._logger.logger_exit() self._close(force) @abstractmethod def _close(self, force: bool = False) -> None: - """ - Execute protocol specific steps needed to close the session properly. + """Protocol specific steps needed to close the session properly. + + Args: + force: Force the closure of the connection. This may not clean up all resources. + This doesn't have to be implemented in the overloaded method. """ @abstractmethod def is_alive(self) -> bool: - """ - Check whether the remote session is still responding. - """ + """Check whether the remote session is still responding.""" @abstractmethod def copy_from( @@ -143,12 +205,12 @@ def copy_from( ) -> None: """Copy a file from the remote Node to the local filesystem. - Copy source_file from the remote Node associated with this remote - session to destination_file on the local filesystem. + Copy `source_file` from the remote Node associated with this remote session + to `destination_file` on the local filesystem. 
Args: - source_file: the file on the remote Node. - destination_file: a file or directory path on the local filesystem. + source_file: The file on the remote Node. + destination_file: A file or directory path on the local filesystem. """ @abstractmethod @@ -159,10 +221,10 @@ def copy_to( ) -> None: """Copy a file from local filesystem to the remote Node. - Copy source_file from local filesystem to destination_file - on the remote Node associated with this remote session. + Copy `source_file` from local filesystem to `destination_file` on the remote Node + associated with this remote session. Args: - source_file: the file on the local filesystem. - destination_file: a file or directory path on the remote Node. + source_file: The file on the local filesystem. + destination_file: A file or directory path on the remote Node. """ diff --git a/dts/framework/remote_session/ssh_session.py b/dts/framework/remote_session/ssh_session.py index a467033a13..782220092c 100644 --- a/dts/framework/remote_session/ssh_session.py +++ b/dts/framework/remote_session/ssh_session.py @@ -1,6 +1,8 @@ # SPDX-License-Identifier: BSD-3-Clause # Copyright(c) 2023 PANTHEON.tech s.r.o. +"""SSH remote session.""" + import socket import traceback from pathlib import PurePath @@ -26,13 +28,8 @@ class SSHSession(RemoteSession): """A persistent SSH connection to a remote Node. - The connection is implemented with the Fabric Python library. - - Args: - node_config: The configuration of the Node to connect to. - session_name: The name of the session. - logger: The logger used for logging. - This should be passed from the parent OSSession. + The connection is implemented with + `the Fabric Python library `_. Attributes: session: The underlying Fabric SSH connection. 
@@ -78,6 +75,7 @@ def _connect(self) -> None:
             raise SSHConnectionError(self.hostname, errors)
 
     def is_alive(self) -> bool:
+        """Overrides :meth:`~.remote_session.RemoteSession.is_alive`."""
         return self.session.is_connected
 
     def _send_command(self, command: str, timeout: float, env: dict | None) -> CommandResult:
@@ -85,7 +83,7 @@ def _send_command(self, command: str, timeout: float, env: dict | None) -> Comma
 
         Args:
             command: The command to execute.
-            timeout: Wait at most this many seconds for the execution to complete.
+            timeout: Wait at most this long in seconds for the command execution to complete.
             env: Extra environment variables that will be used in command execution.
 
         Raises:
@@ -110,6 +108,7 @@ def copy_from(
         source_file: str | PurePath,
         destination_file: str | PurePath,
     ) -> None:
+        """Overrides :meth:`~.remote_session.RemoteSession.copy_from`."""
         self.session.get(str(destination_file), str(source_file))
 
     def copy_to(
@@ -117,6 +116,7 @@ def copy_to(
         source_file: str | PurePath,
         destination_file: str | PurePath,
     ) -> None:
+        """Overrides :meth:`~.remote_session.RemoteSession.copy_to`."""
         self.session.put(str(source_file), str(destination_file))
 
     def _close(self, force: bool = False) -> None:

From patchwork Mon Dec 4 10:24:20 2023
X-Patchwork-Id: 134800
From: Juraj Linkeš
To: thomas@monjalon.net, Honnappa.Nagarahalli@arm.com, jspewock@iol.unh.edu, probb@iol.unh.edu, paul.szczepanek@arm.com, yoan.picchi@foss.arm.com, Luca.Vizzarro@arm.com
Cc: dev@dpdk.org
Subject: [PATCH v9 12/21] dts: interactive remote session docstring update
Date: Mon, 4 Dec 2023 11:24:20 +0100
Message-Id: <20231204102429.106709-13-juraj.linkes@pantheon.tech>
In-Reply-To: <20231204102429.106709-1-juraj.linkes@pantheon.tech>

Format according to the Google format and PEP257, with slight deviations.

Signed-off-by: Juraj Linkeš
---
 .../interactive_remote_session.py             | 36 +++----
 .../remote_session/interactive_shell.py       | 99 +++++++++++--------
 dts/framework/remote_session/python_shell.py  | 26 ++++-
 dts/framework/remote_session/testpmd_shell.py | 59 +++++++++--
 4 files changed, 150 insertions(+), 70 deletions(-)

diff --git a/dts/framework/remote_session/interactive_remote_session.py b/dts/framework/remote_session/interactive_remote_session.py
index 098ded1bb0..1cc82e3377 100644
--- a/dts/framework/remote_session/interactive_remote_session.py
+++ b/dts/framework/remote_session/interactive_remote_session.py
@@ -22,27 +22,23 @@ class InteractiveRemoteSession:
     """SSH connection dedicated to interactive applications.
 
-    This connection is created using paramiko and is a persistent connection to the
-    host.
This class defines methods for connecting to the node and configures this - connection to send "keep alive" packets every 30 seconds. Because paramiko attempts - to use SSH keys to establish a connection first, providing a password is optional. - This session is utilized by InteractiveShells and cannot be interacted with - directly. - - Arguments: - node_config: Configuration class for the node you are connecting to. - _logger: Desired logger for this session to use. + The connection is created using `paramiko `_ + and is a persistent connection to the host. This class defines the methods for connecting + to the node and configures the connection to send "keep alive" packets every 30 seconds. + Because paramiko attempts to use SSH keys to establish a connection first, providing + a password is optional. This session is utilized by InteractiveShells + and cannot be interacted with directly. Attributes: - hostname: Hostname that will be used to initialize a connection to the node. - ip: A subsection of hostname that removes the port for the connection if there + hostname: The hostname that will be used to initialize a connection to the node. + ip: A subsection of `hostname` that removes the port for the connection if there is one. If there is no port, this will be the same as hostname. - port: Port to use for the ssh connection. This will be extracted from the - hostname if there is a port included, otherwise it will default to 22. + port: Port to use for the ssh connection. This will be extracted from `hostname` + if there is a port included, otherwise it will default to ``22``. username: User to connect to the node with. password: Password of the user connecting to the host. This will default to an empty string if a password is not provided. - session: Underlying paramiko connection. + session: The underlying paramiko connection. Raises: SSHConnectionError: There is an error creating the SSH connection. 
@@ -58,9 +54,15 @@ class InteractiveRemoteSession: _node_config: NodeConfiguration _transport: Transport | None - def __init__(self, node_config: NodeConfiguration, _logger: DTSLOG) -> None: + def __init__(self, node_config: NodeConfiguration, logger: DTSLOG) -> None: + """Connect to the node during initialization. + + Args: + node_config: The test run configuration of the node to connect to. + logger: The logger instance this session will use. + """ self._node_config = node_config - self._logger = _logger + self._logger = logger self.hostname = node_config.hostname self.username = node_config.user self.password = node_config.password if node_config.password else "" diff --git a/dts/framework/remote_session/interactive_shell.py b/dts/framework/remote_session/interactive_shell.py index 4db19fb9b3..b158f963b6 100644 --- a/dts/framework/remote_session/interactive_shell.py +++ b/dts/framework/remote_session/interactive_shell.py @@ -3,18 +3,20 @@ """Common functionality for interactive shell handling. -This base class, InteractiveShell, is meant to be extended by other classes that -contain functionality specific to that shell type. These derived classes will often -modify things like the prompt to expect or the arguments to pass into the application, -but still utilize the same method for sending a command and collecting output. How -this output is handled however is often application specific. If an application needs -elevated privileges to start it is expected that the method for gaining those -privileges is provided when initializing the class. +The base class, :class:`InteractiveShell`, is meant to be extended by subclasses that contain +functionality specific to that shell type. These subclasses will often modify things like +the prompt to expect or the arguments to pass into the application, but still utilize +the same method for sending a command and collecting output. How this output is handled however +is often application specific. 
If an application needs elevated privileges to start it is expected +that the method for gaining those privileges is provided when initializing the class. + +The :option:`--timeout` command line argument and the :envvar:`DTS_TIMEOUT` +environment variable configure the timeout of getting the output from command execution. """ from abc import ABC from pathlib import PurePath -from typing import Callable +from typing import Callable, ClassVar from paramiko import Channel, SSHClient, channel # type: ignore[import] @@ -30,28 +32,6 @@ class InteractiveShell(ABC): and collecting input until reaching a certain prompt. All interactive applications will use the same SSH connection, but each will create their own channel on that session. - - Arguments: - interactive_session: The SSH session dedicated to interactive shells. - logger: Logger used for displaying information in the console. - get_privileged_command: Method for modifying a command to allow it to use - elevated privileges. If this is None, the application will not be started - with elevated privileges. - app_args: Command line arguments to be passed to the application on startup. - timeout: Timeout used for the SSH channel that is dedicated to this interactive - shell. This timeout is for collecting output, so if reading from the buffer - and no output is gathered within the timeout, an exception is thrown. - - Attributes - _default_prompt: Prompt to expect at the end of output when sending a command. - This is often overridden by derived classes. - _command_extra_chars: Extra characters to add to the end of every command - before sending them. This is often overridden by derived classes and is - most commonly an additional newline character. - path: Path to the executable to start the interactive application. - dpdk_app: Whether this application is a DPDK app. If it is, the build - directory for DPDK on the node will be prepended to the path to the - executable. 
""" _interactive_session: SSHClient @@ -61,10 +41,22 @@ class InteractiveShell(ABC): _logger: DTSLOG _timeout: float _app_args: str - _default_prompt: str = "" - _command_extra_chars: str = "" - path: PurePath - dpdk_app: bool = False + + #: Prompt to expect at the end of output when sending a command. + #: This is often overridden by subclasses. + _default_prompt: ClassVar[str] = "" + + #: Extra characters to add to the end of every command + #: before sending them. This is often overridden by subclasses and is + #: most commonly an additional newline character. + _command_extra_chars: ClassVar[str] = "" + + #: Path to the executable to start the interactive application. + path: ClassVar[PurePath] + + #: Whether this application is a DPDK app. If it is, the build directory + #: for DPDK on the node will be prepended to the path to the executable. + dpdk_app: ClassVar[bool] = False def __init__( self, @@ -74,6 +66,19 @@ def __init__( app_args: str = "", timeout: float = SETTINGS.timeout, ) -> None: + """Create an SSH channel during initialization. + + Args: + interactive_session: The SSH session dedicated to interactive shells. + logger: The logger instance this session will use. + get_privileged_command: A method for modifying a command to allow it to use + elevated privileges. If :data:`None`, the application will not be started + with elevated privileges. + app_args: The command line arguments to be passed to the application on startup. + timeout: The timeout used for the SSH channel that is dedicated to this interactive + shell. This timeout is for collecting output, so if reading from the buffer + and no output is gathered within the timeout, an exception is thrown. 
+ """ self._interactive_session = interactive_session self._ssh_channel = self._interactive_session.invoke_shell() self._stdin = self._ssh_channel.makefile_stdin("w") @@ -90,6 +95,10 @@ def _start_application(self, get_privileged_command: Callable[[str], str] | None This method is often overridden by subclasses as their process for starting may look different. + + Args: + get_privileged_command: A function (but could be any callable) that produces + the version of the command with elevated privileges. """ start_command = f"{self.path} {self._app_args}" if get_privileged_command is not None: @@ -97,16 +106,24 @@ def _start_application(self, get_privileged_command: Callable[[str], str] | None self.send_command(start_command) def send_command(self, command: str, prompt: str | None = None) -> str: - """Send a command and get all output before the expected ending string. + """Send `command` and get all output before the expected ending string. Lines that expect input are not included in the stdout buffer, so they cannot - be used for expect. For example, if you were prompted to log into something - with a username and password, you cannot expect "username:" because it won't - yet be in the stdout buffer. A workaround for this could be consuming an - extra newline character to force the current prompt into the stdout buffer. + be used for expect. + + Example: + If you were prompted to log into something with a username and password, + you cannot expect ``username:`` because it won't yet be in the stdout buffer. + A workaround for this could be consuming an extra newline character to force + the current `prompt` into the stdout buffer. + + Args: + command: The command to send. + prompt: After sending the command, `send_command` will be expecting this string. + If :data:`None`, will use the class's default prompt. Returns: - All output in the buffer before expected string + All output in the buffer before expected string. 
""" self._logger.info(f"Sending: '{command}'") if prompt is None: @@ -124,8 +141,10 @@ def send_command(self, command: str, prompt: str | None = None) -> str: return out def close(self) -> None: + """Properly free all resources.""" self._stdin.close() self._ssh_channel.close() def __del__(self) -> None: + """Make sure the session is properly closed before deleting the object.""" self.close() diff --git a/dts/framework/remote_session/python_shell.py b/dts/framework/remote_session/python_shell.py index cc3ad48a68..ccfd3783e8 100644 --- a/dts/framework/remote_session/python_shell.py +++ b/dts/framework/remote_session/python_shell.py @@ -1,12 +1,32 @@ # SPDX-License-Identifier: BSD-3-Clause # Copyright(c) 2023 PANTHEON.tech s.r.o. +"""Python interactive shell. + +Typical usage example in a TestSuite:: + + from framework.remote_session import PythonShell + python_shell = self.tg_node.create_interactive_shell( + PythonShell, timeout=5, privileged=True + ) + python_shell.send_command("print('Hello World')") + python_shell.close() +""" + from pathlib import PurePath +from typing import ClassVar from .interactive_shell import InteractiveShell class PythonShell(InteractiveShell): - _default_prompt: str = ">>>" - _command_extra_chars: str = "\n" - path: PurePath = PurePath("python3") + """Python interactive shell.""" + + #: Python's prompt. + _default_prompt: ClassVar[str] = ">>>" + + #: This forces the prompt to appear after sending a command. + _command_extra_chars: ClassVar[str] = "\n" + + #: The Python executable. + path: ClassVar[PurePath] = PurePath("python3") diff --git a/dts/framework/remote_session/testpmd_shell.py b/dts/framework/remote_session/testpmd_shell.py index 08ac311016..0184cc2e71 100644 --- a/dts/framework/remote_session/testpmd_shell.py +++ b/dts/framework/remote_session/testpmd_shell.py @@ -1,41 +1,80 @@ # SPDX-License-Identifier: BSD-3-Clause # Copyright(c) 2023 University of New Hampshire +# Copyright(c) 2023 PANTHEON.tech s.r.o. 
+ +"""Testpmd interactive shell. + +Typical usage example in a TestSuite:: + + testpmd_shell = self.sut_node.create_interactive_shell( + TestPmdShell, privileged=True + ) + devices = testpmd_shell.get_devices() + for device in devices: + print(device) + testpmd_shell.close() +""" from pathlib import PurePath -from typing import Callable +from typing import Callable, ClassVar from .interactive_shell import InteractiveShell class TestPmdDevice(object): + """The data of a device that testpmd can recognize. + + Attributes: + pci_address: The PCI address of the device. + """ + pci_address: str def __init__(self, pci_address_line: str): + """Initialize the device from the testpmd output line string. + + Args: + pci_address_line: A line of testpmd output that contains a device. + """ self.pci_address = pci_address_line.strip().split(": ")[1].strip() def __str__(self) -> str: + """The PCI address captures what the device is.""" return self.pci_address class TestPmdShell(InteractiveShell): - path: PurePath = PurePath("app", "dpdk-testpmd") - dpdk_app: bool = True - _default_prompt: str = "testpmd>" - _command_extra_chars: str = "\n" # We want to append an extra newline to every command + """Testpmd interactive shell. + + The testpmd shell users should never use + the :meth:`~.interactive_shell.InteractiveShell.send_command` method directly, but rather + call specialized methods. If there isn't one that satisfies a need, it should be added. + """ + + #: The path to the testpmd executable. + path: ClassVar[PurePath] = PurePath("app", "dpdk-testpmd") + + #: Flag this as a DPDK app so that it's clear this is not a system app and + #: needs to be looked in a specific path. + dpdk_app: ClassVar[bool] = True + + #: The testpmd's prompt. + _default_prompt: ClassVar[str] = "testpmd>" + + #: This forces the prompt to appear after sending a command. 
+ _command_extra_chars: ClassVar[str] = "\n" def _start_application(self, get_privileged_command: Callable[[str], str] | None) -> None: - """See "_start_application" in InteractiveShell.""" self._app_args += " -- -i" super()._start_application(get_privileged_command) def get_devices(self) -> list[TestPmdDevice]: - """Get a list of device names that are known to testpmd + """Get a list of device names that are known to testpmd. - Uses the device info listed in testpmd and then parses the output to - return only the names of the devices. + Uses the device info listed in testpmd and then parses the output. Returns: - A list of strings representing device names (e.g. 0000:14:00.1) + A list of devices. """ dev_info: str = self.send_command("show device info all") dev_list: list[TestPmdDevice] = [] From patchwork Mon Dec 4 10:24:21 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Juraj_Linke=C5=A1?= X-Patchwork-Id: 134801 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 233CA4366A; Mon, 4 Dec 2023 11:26:14 +0100 (CET) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id B2A6342831; Mon, 4 Dec 2023 11:24:50 +0100 (CET) Received: from mail-wm1-f47.google.com (mail-wm1-f47.google.com [209.85.128.47]) by mails.dpdk.org (Postfix) with ESMTP id CC39641144 for ; Mon, 4 Dec 2023 11:24:44 +0100 (CET) Received: by mail-wm1-f47.google.com with SMTP id 5b1f17b1804b1-40c09fcfa9fso10723505e9.2 for ; Mon, 04 Dec 2023 02:24:44 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=pantheon.tech; s=google; t=1701685484; x=1702290284; darn=dpdk.org; h=content-transfer-encoding:mime-version:references:in-reply-to 
:message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=kLCM/2aeBwtLd6yTf6evW5OSTnsSROs8IoIoBeLdShI=; b=Mu4nTgMEnvLyqyN0vViWOn+hGCKXE+XqRvpd0gaGJ7ZJSrjGgt3Mo/DZTI1nOq5Qu4 JQ0L9l0u66lX741hdV3JgRs/MA83UAsHygkhOch9G9UCjIW9KuZ5y8ZJN+XwGVDQzBQA ZyfBkyscACeO2rneqQ4oi/6D7Tkw+FQ9cLJpbAgZl78RsrZFC6D5iUXU5KRVJt0UmgS6 E2YFHEtGa9m59FINMA2qKchuKcYwlKWyaO+s2SNT8nsLvrhstUYtbgAoD218sEXRnS5H L9HGeoMR86VP/0GuGAtfETX/Nvfzkk6Asn51RtfiHw2VrXPYFEMsM17Nl2MrflNZsPhz X1kw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1701685484; x=1702290284; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=kLCM/2aeBwtLd6yTf6evW5OSTnsSROs8IoIoBeLdShI=; b=aCyIQvL/bMBkWkhihdHPnjOrwzVhHTt9gJjGEaGC26otfAl6EiAyhi9OGbSyJjD+gr MieIf5xi7GhrXV/9hSLrE/7hBTmCji33nSqwJOCxW6AAN4G0rY77dMYOKw1f9vHM+joZ PbplWF1TGuYUQFnOw1UNy4xaJzlbVuIs1JcSEU1bmjfX619xLMINR9nXiN45vQrcspWu PXT9VQd3ySID8lUprC6PXXyryFvg/235K1gdw3uAO66p6+GuGQgKabuSNxx2RCYnn4Aj r576xHSXWc/Hs6mKY0Da9cVa8H7NqZ5VU2xXSkTdtC0Rp0syiy7gqSzecD9BqrpcfNS8 zsRg== X-Gm-Message-State: AOJu0YwTxt2gZMAwVIyi3NpDymb/YmaO1KQ8+zt0/2TiabBQHGKBIzv+ pp272OPcy0462luBx3gRCZOuYA== X-Google-Smtp-Source: AGHT+IFQxBqLbTaSkiQc31rSYPbEfMxV9o9GRu+gtdUdwFWwP9y/B3Oj9yqgfmRGh/2Ka/8PRqWi3Q== X-Received: by 2002:a05:600c:2604:b0:40b:5eed:20a0 with SMTP id h4-20020a05600c260400b0040b5eed20a0mr1285834wma.96.1701685484527; Mon, 04 Dec 2023 02:24:44 -0800 (PST) Received: from jlinkes-PT-Latitude-5530.pantheon.local ([81.89.53.154]) by smtp.gmail.com with ESMTPSA id m28-20020a05600c3b1c00b0040b2b38a1fasm14255415wms.4.2023.12.04.02.24.43 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 04 Dec 2023 02:24:44 -0800 (PST) From: =?utf-8?q?Juraj_Linke=C5=A1?= To: thomas@monjalon.net, Honnappa.Nagarahalli@arm.com, jspewock@iol.unh.edu, probb@iol.unh.edu, paul.szczepanek@arm.com, 
yoan.picchi@foss.arm.com, Luca.Vizzarro@arm.com Cc: dev@dpdk.org, =?utf-8?q?Juraj_Linke=C5=A1?= Subject: [PATCH v9 13/21] dts: port and virtual device docstring update Date: Mon, 4 Dec 2023 11:24:21 +0100 Message-Id: <20231204102429.106709-14-juraj.linkes@pantheon.tech> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20231204102429.106709-1-juraj.linkes@pantheon.tech> References: <20231123151344.162812-1-juraj.linkes@pantheon.tech> <20231204102429.106709-1-juraj.linkes@pantheon.tech> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Format according to the Google format and PEP257, with slight deviations. Signed-off-by: Juraj Linkeš --- dts/framework/testbed_model/__init__.py | 17 ++++-- dts/framework/testbed_model/port.py | 53 +++++++++++++++---- dts/framework/testbed_model/virtual_device.py | 17 +++++- 3 files changed, 72 insertions(+), 15 deletions(-) diff --git a/dts/framework/testbed_model/__init__.py b/dts/framework/testbed_model/__init__.py index 8ced05653b..6086512ca2 100644 --- a/dts/framework/testbed_model/__init__.py +++ b/dts/framework/testbed_model/__init__.py @@ -2,9 +2,20 @@ # Copyright(c) 2022-2023 University of New Hampshire # Copyright(c) 2023 PANTHEON.tech s.r.o. -""" -This package contains the classes used to model the physical traffic generator, -system under test and any other components that need to be interacted with. +"""Testbed modelling. 
+ +This package defines the testbed elements DTS works with: + + * A system under test node: :class:`~.sut_node.SutNode`, + * A traffic generator node: :class:`~.tg_node.TGNode`, + * The ports of network interface cards (NICs) present on nodes: :class:`~.port.Port`, + * The logical cores of CPUs present on nodes: :class:`~.cpu.LogicalCore`, + * The virtual devices that can be created on nodes: :class:`~.virtual_device.VirtualDevice`, + * The operating systems running on nodes: :class:`~.linux_session.LinuxSession` + and :class:`~.posix_session.PosixSession`. + +DTS needs to be able to connect to nodes and understand some of the hardware present on these nodes +to properly build and test DPDK. """ # pylama:ignore=W0611 diff --git a/dts/framework/testbed_model/port.py b/dts/framework/testbed_model/port.py index 680c29bfe3..817405bea4 100644 --- a/dts/framework/testbed_model/port.py +++ b/dts/framework/testbed_model/port.py @@ -2,6 +2,13 @@ # Copyright(c) 2022 University of New Hampshire # Copyright(c) 2023 PANTHEON.tech s.r.o. +"""NIC port model. + +Basic port information, such as location (the port are identified by their PCI address on a node), +drivers and address. +""" + + from dataclasses import dataclass from framework.config import PortConfig @@ -9,24 +16,35 @@ @dataclass(slots=True, frozen=True) class PortIdentifier: + """The port identifier. + + Attributes: + node: The node where the port resides. + pci: The PCI address of the port on `node`. + """ + node: str pci: str @dataclass(slots=True) class Port: - """ - identifier: The PCI address of the port on a node. - - os_driver: The driver used by this port when the OS is controlling it. - Example: i40e - os_driver_for_dpdk: The driver the device must be bound to for DPDK to use it, - Example: vfio-pci. + """Physical port on a node. - Note: os_driver and os_driver_for_dpdk may be the same thing. - Example: mlx5_core + The ports are identified by the node they're on and their PCI addresses. 
The port on the other + side of the connection is also captured here. + Each port is serviced by a driver, which may be different for the operating system (`os_driver`) + and for DPDK (`os_driver_for_dpdk`). For some devices, they are the same, e.g.: ``mlx5_core``. - peer: The identifier of a port this port is connected with. + Attributes: + identifier: The port identifier, consisting of the node name and the PCI address. + os_driver: The operating system driver name when the operating system controls the port, + e.g.: ``i40e``. + os_driver_for_dpdk: The operating system driver name for use with DPDK, e.g.: ``vfio-pci``. + peer: The identifier of a port this port is connected with. + The `peer` is on a different node. + mac_address: The MAC address of the port. + logical_name: The logical name of the port. Must be discovered. + """ identifier: PortIdentifier @@ -37,6 +55,12 @@ class Port: logical_name: str = "" def __init__(self, node_name: str, config: PortConfig): + """Initialize the port from `node_name` and `config`. + + Args: + node_name: The name of the port's node. + config: The test run configuration of the port. + """ self.identifier = PortIdentifier( node=node_name, pci=config.pci, @@ -47,14 +71,23 @@ def __init__(self, node_name: str, config: PortConfig): @property def node(self) -> str: + """The node where the port resides.""" return self.identifier.node @property def pci(self) -> str: + """The PCI address of the port.""" return self.identifier.pci @dataclass(slots=True, frozen=True) class PortLink: + """The physical, cabled connection between the ports. + + Attributes: + sut_port: The port on the SUT node connected to `tg_port`. + tg_port: The port on the TG node connected to `sut_port`.
+ """ + sut_port: Port tg_port: Port diff --git a/dts/framework/testbed_model/virtual_device.py b/dts/framework/testbed_model/virtual_device.py index eb664d9f17..e9b5e9c3be 100644 --- a/dts/framework/testbed_model/virtual_device.py +++ b/dts/framework/testbed_model/virtual_device.py @@ -1,16 +1,29 @@ # SPDX-License-Identifier: BSD-3-Clause # Copyright(c) 2023 PANTHEON.tech s.r.o. +"""Virtual devices model. + +Alongside support for physical hardware, DPDK can create various virtual devices. +""" + class VirtualDevice(object): - """ - Base class for virtual devices used by DPDK. + """Base class for virtual devices used by DPDK. + + Attributes: + name: The name of the virtual device. """ name: str def __init__(self, name: str): + """Initialize the virtual device. + + Args: + name: The name of the virtual device. + """ self.name = name def __str__(self) -> str: + """This corresponds to the name used for DPDK devices.""" return self.name From patchwork Mon Dec 4 10:24:22 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Juraj_Linke=C5=A1?= X-Patchwork-Id: 134802 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 336CF4366A; Mon, 4 Dec 2023 11:26:20 +0100 (CET) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id D4C0942D0C; Mon, 4 Dec 2023 11:24:51 +0100 (CET) Received: from mail-wm1-f46.google.com (mail-wm1-f46.google.com [209.85.128.46]) by mails.dpdk.org (Postfix) with ESMTP id BB64441153 for ; Mon, 4 Dec 2023 11:24:45 +0100 (CET) Received: by mail-wm1-f46.google.com with SMTP id 5b1f17b1804b1-40c09dfa03cso10502345e9.2 for ; Mon, 04 Dec 2023 02:24:45 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=pantheon.tech; s=google; t=1701685485; 
x=1702290285; darn=dpdk.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=lDAzXB13bmA5V61UH4hz6KNJGJbhuXFju4FDnIEpq4c=; b=b+eJJJDh/KG3Y491l0CiSemORgtEaz5f030QhWuy2C+VkNKBYO1a9kMFr+tO1o73Hg 41QJUN8dMcganKUuDQ2bvbLby2vtyS0gLTMk8xLc1zIJ+DKh28QRjmdPHQsSkVbA3Jd+ IodZ7TmFucH9cj4b5+AdGen68z8Kcds615vIPCPuAOJXBet6uXDdrOH8wNZS7LFCtOQy YHZrWerDqgMCLR8KweLB4AYIgQlyqn9mnxH5mZnjaP8jGwdRoaufysojnw4xLKVCyLyM RMm6dPVSvoDuVyVnwif9xUlBRRBWTfYnWSUBeR66Gwirv4PnNzxrURxjLArgMO6/5FRy omGA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1701685485; x=1702290285; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=lDAzXB13bmA5V61UH4hz6KNJGJbhuXFju4FDnIEpq4c=; b=c9HPrO70RWjbsgPTp8XzPo1QzHPY5oTpXlACMjpmdKtPJoF2sJvvserFodJcepRqBD GEv5WLq3b36yQEn6FEr+bp8rRe3rghl7+ZmU6l34bSWzEjU95w5orU4qkWaWf+ylXJds ksHyASghUpi2cLLQVMg1QsRixbQ/VMI3pLZSJ+7cE7aBlBcCRt0EAd+wQGsmVUyVBE6D ESdr4N8Aucu4B8vPzsoq6ionKvMiS/WnZHizWaRiaU1n2ZpQtUOBjhBqvf6JVM0CiLJ+ RM//otW8cvBC2Ta1iEvAIiwQEkioavjKcl7BQwrggFui6McIDf0ONGuorOL3XSoB1Dlo W6Sw== X-Gm-Message-State: AOJu0YxVT2EUJQmc8AJqEAUjZ6zCQRn9uLNhqlwNn/L+A2NvKQwGePHA dHxJ24ywoaDTsev4US8+io2dqg== X-Google-Smtp-Source: AGHT+IFiCFiPxIQ/TDVtxFj/70TJ9dabIiG3KI1vVPKghAcld1ARfVEO8DrR0w+SGAL2mKojT0b6Dg== X-Received: by 2002:a1c:7402:0:b0:40b:5e59:c573 with SMTP id p2-20020a1c7402000000b0040b5e59c573mr2617610wmc.157.1701685485372; Mon, 04 Dec 2023 02:24:45 -0800 (PST) Received: from jlinkes-PT-Latitude-5530.pantheon.local ([81.89.53.154]) by smtp.gmail.com with ESMTPSA id m28-20020a05600c3b1c00b0040b2b38a1fasm14255415wms.4.2023.12.04.02.24.44 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 04 Dec 2023 02:24:45 -0800 (PST) From: =?utf-8?q?Juraj_Linke=C5=A1?= To: thomas@monjalon.net, 
Honnappa.Nagarahalli@arm.com, jspewock@iol.unh.edu, probb@iol.unh.edu, paul.szczepanek@arm.com, yoan.picchi@foss.arm.com, Luca.Vizzarro@arm.com Cc: dev@dpdk.org, Juraj Linkeš Subject: [PATCH v9 14/21] dts: cpu docstring update Date: Mon, 4 Dec 2023 11:24:22 +0100 Message-Id: <20231204102429.106709-15-juraj.linkes@pantheon.tech> In-Reply-To: <20231204102429.106709-1-juraj.linkes@pantheon.tech> References: <20231123151344.162812-1-juraj.linkes@pantheon.tech> <20231204102429.106709-1-juraj.linkes@pantheon.tech> Format according to the Google format and PEP257, with slight deviations. Signed-off-by: Juraj Linkeš --- dts/framework/testbed_model/cpu.py | 196 +++++++++++++++++++++-------- 1 file changed, 144 insertions(+), 52 deletions(-) diff --git a/dts/framework/testbed_model/cpu.py b/dts/framework/testbed_model/cpu.py index 1b392689f5..9e33b2825d 100644 --- a/dts/framework/testbed_model/cpu.py +++ b/dts/framework/testbed_model/cpu.py @@ -1,6 +1,22 @@ # SPDX-License-Identifier: BSD-3-Clause # Copyright(c) 2023 PANTHEON.tech s.r.o. +"""CPU core representation and filtering. + +This module provides a unified representation of logical CPU cores along +with filtering capabilities. + +When simultaneous multithreading (SMT, also known as hyperthreading) is enabled on a server, +each physical CPU core is split into multiple logical CPU cores with different IDs. + +:class:`LogicalCoreCountFilter` filters by the number of logical cores. It's possible to specify +the socket from which to filter the number of logical cores. It's also possible to not use all +logical CPU cores from each physical core (e.g. only the first logical core of each physical core). + +:class:`LogicalCoreListFilter` filters by logical core IDs.
This mostly checks that +the logical cores are actually present on the server. +""" + import dataclasses from abc import ABC, abstractmethod from collections.abc import Iterable, ValuesView @@ -11,9 +27,17 @@ @dataclass(slots=True, frozen=True) class LogicalCore(object): - """ - Representation of a CPU core. A physical core is represented in OS - by multiple logical cores (lcores) if CPU multithreading is enabled. + """Representation of a logical CPU core. + + A physical core is represented in the OS by multiple logical cores (lcores) + if CPU multithreading is enabled. When multithreading is disabled, their IDs are the same. + + Attributes: + lcore: The logical core ID of a CPU core. It's the same as `core` with + multithreading disabled. + core: The physical core ID of a CPU core. + socket: The physical socket ID where the CPU resides. + node: The NUMA node ID where the CPU resides. """ lcore: int @@ -22,27 +46,36 @@ class LogicalCore(object): node: int def __int__(self) -> int: + """The CPU is best represented by the logical core, as that's what we configure in EAL.""" return self.lcore class LogicalCoreList(object): - """ - Convert these options into a list of logical core ids. - lcore_list=[LogicalCore1, LogicalCore2] - a list of LogicalCores - lcore_list=[0,1,2,3] - a list of int indices - lcore_list=['0','1','2-3'] - a list of str indices; ranges are supported - lcore_list='0,1,2-3' - a comma delimited str of indices; ranges are supported - - The class creates a unified format used across the framework and allows - the user to use either a str representation (using str(instance) or directly - in f-strings) or a list representation (by accessing instance.lcore_list). - Empty lcore_list is allowed. + r"""A unified way to store :class:`LogicalCore`\s.
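The comma-delimited format with ranges (``'0,1,2-3'``) accepted here can be illustrated with a minimal, hypothetical parser — a simplification of what `LogicalCoreList` does internally, not the framework's code:

```python
def expand_lcores(spec: str) -> list[int]:
    """Expand a comma-delimited core spec such as '0,1,2-3' into sorted IDs."""
    ids: set[int] = set()
    for part in spec.split(","):
        if "-" in part:
            # a range such as '2-3' is inclusive on both ends
            low, high = map(int, part.split("-"))
            ids.update(range(low, high + 1))
        else:
            ids.add(int(part))
    return sorted(ids)


assert expand_lcores("0,1,2-3") == [0, 1, 2, 3]
assert expand_lcores("4-6,2") == [2, 4, 5, 6]
```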
+ + Create a unified format used across the framework and allow the user to use + either a :class:`str` representation (using ``str(instance)`` or directly in f-strings) + or a :class:`list` representation (by accessing the `lcore_list` property, + which stores logical core IDs). """ _lcore_list: list[int] _lcore_str: str def __init__(self, lcore_list: list[int] | list[str] | list[LogicalCore] | str): + """Process `lcore_list`, then sort. + + There are four supported logical core list formats:: + + lcore_list=[LogicalCore1, LogicalCore2] # a list of LogicalCores + lcore_list=[0,1,2,3] # a list of int indices + lcore_list=['0','1','2-3'] # a list of str indices; ranges are supported + lcore_list='0,1,2-3' # a comma delimited str of indices; ranges are supported + + Args: + lcore_list: Various ways to represent multiple logical cores. + Empty `lcore_list` is allowed. + """ self._lcore_list = [] if isinstance(lcore_list, str): lcore_list = lcore_list.split(",") @@ -58,6 +91,7 @@ def __init__(self, lcore_list: list[int] | list[str] | list[LogicalCore] | str): @property def lcore_list(self) -> list[int]: + """The logical core IDs.""" return self._lcore_list def _get_consecutive_lcores_range(self, lcore_ids_list: list[int]) -> list[str]: @@ -83,28 +117,30 @@ def _get_consecutive_lcores_range(self, lcore_ids_list: list[int]) -> list[str]: return formatted_core_list def __str__(self) -> str: + """The consecutive ranges of logical core IDs.""" return self._lcore_str @dataclasses.dataclass(slots=True, frozen=True) class LogicalCoreCount(object): - """ - Define the number of logical cores to use. - If sockets is not None, socket_count is ignored. - """ + """Define the number of logical cores to use per physical core per socket.""" + #: Use this many logical cores per each physical core. lcores_per_core: int = 1 + #: Use this many physical cores per each socket. cores_per_socket: int = 2 + #: Use this many sockets. socket_count: int = 1 + #: Use exactly these sockets.
This takes precedence over `socket_count`, + #: so when `sockets` is not :data:`None`, `socket_count` is ignored. + sockets: list[int] | None = None class LogicalCoreFilter(ABC): - """ - Filter according to the input filter specifier. Each filter needs to be - implemented in a derived class. - This class only implements operations common to all filters, such as sorting - the list to be filtered beforehand. + """Common filtering class. + + Each filter needs to be implemented in a subclass. This base class sorts the list of cores + and defines the filtering method, which must be implemented by subclasses. """ _filter_specifier: LogicalCoreCount | LogicalCoreList @@ -116,6 +152,17 @@ def __init__( filter_specifier: LogicalCoreCount | LogicalCoreList, ascending: bool = True, ): + """Filter according to the input filter specifier. + + The input `lcore_list` is copied and sorted by physical core before filtering. + The list is copied so that the original is left intact. + + Args: + lcore_list: The logical CPU cores to filter. + filter_specifier: Filter cores from `lcore_list` according to this filter. + ascending: Sort cores in ascending order (lowest to highest IDs). If :data:`False`, + sort in descending order. + """ self._filter_specifier = filter_specifier # sorting by core is needed in case hyperthreading is enabled @@ -124,31 +171,45 @@ def __init__( @abstractmethod def filter(self) -> list[LogicalCore]: - """ - Use self._filter_specifier to filter self._lcores_to_filter - and return the list of filtered LogicalCores. - self._lcores_to_filter is a sorted copy of the original list, - so it may be modified. + r"""Filter the cores. + + Use `self._filter_specifier` to filter `self._lcores_to_filter` and return + the filtered :class:`LogicalCore`\s. + `self._lcores_to_filter` is a sorted copy of the original list, so it may be modified. + + Returns: + The filtered cores. """ class LogicalCoreCountFilter(LogicalCoreFilter): - """ + """Filter cores by specified counts.
+ Filter the input list of LogicalCores according to specified rules: - Use cores from the specified number of sockets or from the specified socket ids. - If sockets is specified, it takes precedence over socket_count. - From each of those sockets, use only cores_per_socket of cores. - And for each core, use lcores_per_core of logical cores. Hypertheading - must be enabled for this to take effect. - If ascending is True, use cores with the lowest numerical id first - and continue in ascending order. If False, start with the highest - id and continue in descending order. This ordering affects which - sockets to consider first as well. + + * The input `filter_specifier` is :class:`LogicalCoreCount`, + * Use cores from the specified number of sockets or from the specified socket ids, + * If `sockets` is specified, it takes precedence over `socket_count`, + * From each of those sockets, use only `cores_per_socket` of cores, + * And for each core, use `lcores_per_core` of logical cores. Hyperthreading + must be enabled for this to take effect. """ _filter_specifier: LogicalCoreCount def filter(self) -> list[LogicalCore]: + """Filter the cores according to :class:`LogicalCoreCount`. + + Start by filtering the allowed sockets. The cores matching the allowed sockets are returned. + The cores of each socket are stored in separate lists. + + Then filter the allowed physical cores from those lists of cores per socket. When filtering + physical cores, store the desired number of logical cores per physical core which then + together constitute the final filtered list. + + Returns: + The filtered cores. + """ sockets_to_filter = self._filter_sockets(self._lcores_to_filter) filtered_lcores = [] for socket_to_filter in sockets_to_filter: @@ -158,24 +219,37 @@ def filter(self) -> list[LogicalCore]: def _filter_sockets( self, lcores_to_filter: Iterable[LogicalCore] ) -> ValuesView[list[LogicalCore]]: - """ - Remove all lcores that don't match the specified socket(s).
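The socket-filtering step described here — keep cores only from the first `socket_count` sockets encountered — can be sketched on plain `(lcore, socket)` tuples as a simplified, hypothetical stand-in for `LogicalCore`:

```python
from collections import defaultdict


def group_by_first_sockets(
    cores: list[tuple[int, int]], socket_count: int
) -> dict[int, list[tuple[int, int]]]:
    """Keep cores only from the first socket_count sockets encountered.

    cores are (lcore_id, socket_id) pairs; the result maps socket -> its cores.
    """
    allowed: set[int] = set()
    per_socket: dict[int, list[tuple[int, int]]] = defaultdict(list)
    for lcore, socket in cores:
        if len(allowed) < socket_count:
            allowed.add(socket)  # a set, so re-adding an existing socket is a no-op
        if socket in allowed:
            per_socket[socket].append((lcore, socket))
    return per_socket


cores = [(0, 0), (1, 1), (2, 0), (3, 1), (4, 2)]
# socket 2 appears after the first two sockets, so its core is dropped
assert dict(group_by_first_sockets(cores, 2)) == {
    0: [(0, 0), (2, 0)],
    1: [(1, 1), (3, 1)],
}
```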
- If self._filter_specifier.sockets is not None, keep lcores from those sockets, - otherwise keep lcores from the first - self._filter_specifier.socket_count sockets. + """Filter a list of cores per each allowed socket. + + The sockets may be specified in two ways, either a number or a specific list of sockets. + In case of a specific list, we just need to return the cores from those sockets. + If filtering by a number of sockets, we need to go through all cores and note which sockets + appear and only filter from the first n that appear. + + Args: + lcores_to_filter: The cores to filter. These must be sorted by the physical core. + + Returns: + A list of lists of logical CPU cores. Each list contains cores from one socket. """ allowed_sockets: set[int] = set() socket_count = self._filter_specifier.socket_count if self._filter_specifier.sockets: + # when sockets in filter is specified, the sockets are already set socket_count = len(self._filter_specifier.sockets) allowed_sockets = set(self._filter_specifier.sockets) + # filter socket_count sockets from all sockets by checking the socket of each CPU filtered_lcores: dict[int, list[LogicalCore]] = {} for lcore in lcores_to_filter: if not self._filter_specifier.sockets: + # this is when sockets is not set, so we do the actual filtering + # when it is set, allowed_sockets is already defined and can't be changed if len(allowed_sockets) < socket_count: + # allowed_sockets is a set, so adding an existing socket won't re-add it allowed_sockets.add(lcore.socket) if lcore.socket in allowed_sockets: + # separate lcores into sockets; this makes it easier in further processing if lcore.socket in filtered_lcores: filtered_lcores[lcore.socket].append(lcore) else: @@ -192,12 +266,13 @@ def _filter_sockets( def _filter_cores_from_socket( self, lcores_to_filter: Iterable[LogicalCore] ) -> list[LogicalCore]: - """ - Keep only the first self._filter_specifier.cores_per_socket cores.
- In multithreaded environments, keep only - the first self._filter_specifier.lcores_per_core lcores of those cores. - """ + """Filter a list of cores from the given socket. + + Go through the cores and note how many logical cores per physical core have been filtered. + + Returns: + The filtered logical CPU cores. + """ # no need to use ordered dict, from Python3.7 the dict # insertion order is preserved (LIFO). lcore_count_per_core_map: dict[int, int] = {} @@ -238,15 +313,21 @@ def _filter_cores_from_socket( class LogicalCoreListFilter(LogicalCoreFilter): - """ - Filter the input list of Logical Cores according to the input list of - lcore indices. - An empty LogicalCoreList won't filter anything. + """Filter the logical CPU cores by logical CPU core IDs. + + This is a simple filter that looks at logical CPU IDs and only filters those that match. + + The input filter is :class:`LogicalCoreList`. An empty LogicalCoreList won't filter anything. """ _filter_specifier: LogicalCoreList def filter(self) -> list[LogicalCore]: + """Filter based on logical CPU core ID. + + Returns: + The filtered logical CPU cores. + """ if not len(self._filter_specifier.lcore_list): return self._lcores_to_filter @@ -269,6 +350,17 @@ def lcore_filter( filter_specifier: LogicalCoreCount | LogicalCoreList, ascending: bool, ) -> LogicalCoreFilter: + """Factory for providing the filter that corresponds to `filter_specifier`. + + Args: + core_list: The logical CPU cores to filter. + filter_specifier: The filter to use. + ascending: Sort cores in ascending order (lowest to highest IDs). If :data:`False`, + sort in descending order. + + Returns: + The filter that corresponds to `filter_specifier`.
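The `lcore_filter` factory documented here dispatches on the runtime type of `filter_specifier`. The same isinstance-based dispatch can be sketched standalone; the class names and specifier types below are hypothetical stand-ins, not the framework's classes:

```python
class CountFilter:
    """Hypothetical stand-in for LogicalCoreCountFilter."""

    def __init__(self, cores, spec, ascending=True):
        self.cores, self.spec, self.ascending = cores, spec, ascending


class ListFilter:
    """Hypothetical stand-in for LogicalCoreListFilter."""

    def __init__(self, cores, spec, ascending=True):
        self.cores, self.spec, self.ascending = cores, spec, ascending


def make_filter(cores, spec, ascending=True):
    """Pick the filter class based on the runtime type of the specifier."""
    if isinstance(spec, list):  # stand-in check for LogicalCoreList
        return ListFilter(cores, spec, ascending)
    elif isinstance(spec, int):  # stand-in check for LogicalCoreCount
        return CountFilter(cores, spec, ascending)
    raise ValueError(f"Unsupported filter specifier: {spec!r}")


assert isinstance(make_filter([0, 1, 2], [0, 1]), ListFilter)
assert isinstance(make_filter([0, 1, 2], 2), CountFilter)
```

Keeping the dispatch in one factory means callers never need to know which concrete filter class handles which specifier type.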
+ """ if isinstance(filter_specifier, LogicalCoreList): return LogicalCoreListFilter(core_list, filter_specifier, ascending) elif isinstance(filter_specifier, LogicalCoreCount): From patchwork Mon Dec 4 10:24:23 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Juraj_Linke=C5=A1?= X-Patchwork-Id: 134803 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 081034366A; Mon, 4 Dec 2023 11:26:27 +0100 (CET) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 2D95C42D2A; Mon, 4 Dec 2023 11:24:53 +0100 (CET) Received: from mail-wm1-f50.google.com (mail-wm1-f50.google.com [209.85.128.50]) by mails.dpdk.org (Postfix) with ESMTP id 8EA3B42670 for ; Mon, 4 Dec 2023 11:24:46 +0100 (CET) Received: by mail-wm1-f50.google.com with SMTP id 5b1f17b1804b1-40bda47b7c1so22906025e9.1 for ; Mon, 04 Dec 2023 02:24:46 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=pantheon.tech; s=google; t=1701685486; x=1702290286; darn=dpdk.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=e11UDpSi+pdOs0mbM57SdnfS7daFl2y2OQBCS57ioc4=; b=LZBuE12ZG/JjvB9gWKqPlxuVEiXR5iL9W53YeRS8tE3aQlKk68GJPGggyUJerU8/BP UZWM+mLvJjk1MNG+tr8PxP4g63cACVxoPssROT/8umeaAj/i0nXRrhmlJUUUwVnsVEGa RMkkqCJ8SrVulnW4lZWm647TT8k7BqzHwKei0ukavQ4Nct4JfQzhiJ0T3VE9TVD6EdQp Mt5aAyl2jbDbDp+3AqHKKqJQ+mcDlvTRQRqspEFI31KayymcaJbt4Awi204i0Dk5wgpu 9BZZ7llLKxfLNBEq0d6LaakFMHV7QEL1K+L1VFRdaY2rrvZWtwTA1cw/JdKMUixvJ6Mt yDaA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1701685486; x=1702290286; h=content-transfer-encoding:mime-version:references:in-reply-to 
:message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=e11UDpSi+pdOs0mbM57SdnfS7daFl2y2OQBCS57ioc4=; b=GMfjSBaOkseHoinHMNUWg0814h4P3kpB3k456zF4YsGYqVKpdMGwOcbx/opSltAUlF 8ZtgN+xuypszIDDNd6WT5xpidfJMZvdapv/XcAOmeTn9EWwAVVaQO/HvUR75K5ET6Sn7 5xx9iRav8R+7zV9lQJuTh+0pj52Hke2mprsTtIcSDt9JyGXamBEQQ3mubwjla84PlkwB 9CIhSMpU5u0EeUYQW6aIYw8L0NPkG1SX1P+Ab87YT+rGXG/0n73LruRM/6Prt2Wpvm/d 26SwXZVdwuKoWa3mD4pWYJop1ZYADfqzWaSMJvi0VbihdH9ZqL6I0ryXwfM8CDl+RrKH 1Yyg== X-Gm-Message-State: AOJu0Yzsb+2BXyFFar2dz1hEls+l4FdYANwDxysivoSaiYUj1+kOTbpK I6s00ToGZtUeOWMueB3KN1+Krw== X-Google-Smtp-Source: AGHT+IE0JWlDlGLuq7Fj5S7EkP5rK2t0HIm6jqdJwS59y5bbXcjV0zlAqkBvm/mSRKlNEyk83E3kAw== X-Received: by 2002:a05:600c:3c90:b0:40b:5f03:b3ca with SMTP id bg16-20020a05600c3c9000b0040b5f03b3camr1051555wmb.236.1701685486141; Mon, 04 Dec 2023 02:24:46 -0800 (PST) Received: from jlinkes-PT-Latitude-5530.pantheon.local ([81.89.53.154]) by smtp.gmail.com with ESMTPSA id m28-20020a05600c3b1c00b0040b2b38a1fasm14255415wms.4.2023.12.04.02.24.45 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 04 Dec 2023 02:24:45 -0800 (PST) From: =?utf-8?q?Juraj_Linke=C5=A1?= To: thomas@monjalon.net, Honnappa.Nagarahalli@arm.com, jspewock@iol.unh.edu, probb@iol.unh.edu, paul.szczepanek@arm.com, yoan.picchi@foss.arm.com, Luca.Vizzarro@arm.com Cc: dev@dpdk.org, =?utf-8?q?Juraj_Linke=C5=A1?= Subject: [PATCH v9 15/21] dts: os session docstring update Date: Mon, 4 Dec 2023 11:24:23 +0100 Message-Id: <20231204102429.106709-16-juraj.linkes@pantheon.tech> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20231204102429.106709-1-juraj.linkes@pantheon.tech> References: <20231123151344.162812-1-juraj.linkes@pantheon.tech> <20231204102429.106709-1-juraj.linkes@pantheon.tech> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: 
dev-bounces@dpdk.org Format according to the Google format and PEP257, with slight deviations. Signed-off-by: Juraj Linkeš --- dts/framework/testbed_model/os_session.py | 272 ++++++++++++++++------ 1 file changed, 205 insertions(+), 67 deletions(-) diff --git a/dts/framework/testbed_model/os_session.py b/dts/framework/testbed_model/os_session.py index 76e595a518..ac6bb5e112 100644 --- a/dts/framework/testbed_model/os_session.py +++ b/dts/framework/testbed_model/os_session.py @@ -2,6 +2,26 @@ # Copyright(c) 2023 PANTHEON.tech s.r.o. # Copyright(c) 2023 University of New Hampshire +"""OS-aware remote session. + +DPDK supports multiple different operating systems, meaning it can run on these different operating +systems. This module defines the common API that OS-unaware layers use and translates the API into +OS-aware calls/utility usage. + +Note: + Running commands with administrative privileges requires OS awareness. This is the only layer + that's aware of OS differences, so this is where non-privileged commands get converted + to privileged commands. + +Example: + A user wishes to remove a directory on a remote :class:`~.sut_node.SutNode`. + The :class:`~.sut_node.SutNode` object isn't aware what OS the node is running - it delegates + the OS translation logic to :attr:`~.node.Node.main_session`. The SUT node calls + :meth:`~OSSession.remove_remote_dir` with a generic, OS-unaware path and + the :attr:`~.node.Node.main_session` translates that to ``rm -rf`` if the node's OS is Linux + and other commands for other OSs. It also translates the path to match the underlying OS. +""" + from abc import ABC, abstractmethod from collections.abc import Iterable from ipaddress import IPv4Interface, IPv6Interface @@ -28,10 +48,16 @@ class OSSession(ABC): - """ - The OS classes create a DTS node remote session and implement OS specific + """OS-unaware to OS-aware translation API definition.
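The ``rm -rf`` example from the module docstring can be sketched as a toy translation layer. The classes below are hypothetical illustrations only; the real `OSSession` executes the command on the remote node rather than returning it as a string:

```python
from abc import ABC, abstractmethod
from pathlib import PurePosixPath, PureWindowsPath


class OSSession(ABC):
    """OS-unaware API; each subclass translates calls into concrete commands."""

    @abstractmethod
    def remove_remote_dir_command(self, remote_dir_path: str) -> str:
        """Build the OS-specific command that removes `remote_dir_path`."""


class LinuxSession(OSSession):
    def remove_remote_dir_command(self, remote_dir_path: str) -> str:
        # POSIX path plus rm -rf, as in the docstring's example.
        return f"rm -rf {PurePosixPath(remote_dir_path)}"


class WindowsSession(OSSession):
    def remove_remote_dir_command(self, remote_dir_path: str) -> str:
        # A different OS needs both a different command and a translated path.
        return f'rmdir /s /q "{PureWindowsPath(remote_dir_path)}"'
```

Callers above this layer only ever see the abstract method; which command (and which path flavor) gets used is decided entirely by the session subclass.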
+ + The OSSession classes create a remote session to a DTS node and implement OS specific behavior. There a few control methods implemented by the base class, the rest need - to be implemented by derived classes. + to be implemented by subclasses. + + Attributes: + name: The name of the session. + remote_session: The remote session maintaining the connection to the node. + interactive_session: The interactive remote session maintaining the connection to the node. """ _config: NodeConfiguration @@ -46,6 +72,15 @@ def __init__( name: str, logger: DTSLOG, ): + """Initialize the OS-aware session. + + Connect to the node right away and also create an interactive remote session. + + Args: + node_config: The test run configuration of the node to connect to. + name: The name of the session. + logger: The logger instance this session will use. + """ self._config = node_config self.name = name self._logger = logger @@ -53,15 +88,15 @@ def __init__( self.interactive_session = create_interactive_session(node_config, logger) def close(self, force: bool = False) -> None: - """ - Close the remote session. + """Close the underlying remote session. + + Args: + force: Force the closure of the connection. """ self.remote_session.close(force) def is_alive(self) -> bool: - """ - Check whether the remote session is still responding. - """ + """Check whether the underlying remote session is still responding.""" return self.remote_session.is_alive() def send_command( @@ -72,10 +107,23 @@ def send_command( verify: bool = False, env: dict | None = None, ) -> CommandResult: - """ - An all-purpose API in case the command to be executed is already - OS-agnostic, such as when the path to the executed command has been - constructed beforehand. + """An all-purpose API for OS-agnostic commands. + + This can be used for an execution of a portable command that's executed the same way + on all operating systems, such as Python. 
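The privilege-handling step of `send_command` can be sketched as follows, using the ``sudo`` wrapper that the Linux session later in this series provides. Free functions stand in for the methods here, for brevity:

```python
def _get_privileged_command(command: str) -> str:
    # The Linux implementation in this series wraps commands in sudo.
    return f"sudo -- sh -c '{command}'"


def send_command(command: str, privileged: bool = False) -> str:
    # Only the privilege-wrapping step is shown; the real method then hands
    # the (possibly wrapped) command to the underlying remote session.
    if privileged:
        command = _get_privileged_command(command)
    return command
```

This is why privilege escalation lives in this layer: only the OS-aware session knows whether ``sudo``, ``doas``, or something else is the right wrapper.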
+ + The :option:`--timeout` command line argument and the :envvar:`DTS_TIMEOUT` + environment variable configure the timeout of command execution. + + Args: + command: The command to execute. + timeout: Wait at most this long in seconds for `command` execution to complete. + privileged: Whether to run the command with administrative privileges. + verify: If :data:`True`, will check the exit code of the command. + env: A dictionary with environment variables to be used with the command execution. + + Raises: + RemoteCommandExecutionError: If verify is :data:`True` and the command failed. """ if privileged: command = self._get_privileged_command(command) @@ -89,8 +137,20 @@ def create_interactive_shell( privileged: bool, app_args: str, ) -> InteractiveShellType: - """ - See "create_interactive_shell" in SutNode + """Factory for interactive session handlers. + + Instantiate `shell_cls` according to the remote OS specifics. + + Args: + shell_cls: The class of the shell. + timeout: Timeout for reading output from the SSH channel. If you are + reading from the buffer and don't receive any data within the timeout + it will throw an error. + privileged: Whether to run the shell with administrative privileges. + app_args: The arguments to be passed to the application. + + Returns: + An instance of the desired interactive application shell. """ return shell_cls( self.interactive_session.session, @@ -114,27 +174,42 @@ def _get_privileged_command(command: str) -> str: @abstractmethod def guess_dpdk_remote_dir(self, remote_dir: str | PurePath) -> PurePath: - """ - Try to find DPDK remote dir in remote_dir. + """Try to find DPDK directory in `remote_dir`. + + The directory is the one which is created after the extraction of the tarball. The files + are usually extracted into a directory starting with ``dpdk-``. + + Returns: + The absolute path of the DPDK remote directory, empty path if not found. 
""" @abstractmethod def get_remote_tmp_dir(self) -> PurePath: - """ - Get the path of the temporary directory of the remote OS. + """Get the path of the temporary directory of the remote OS. + + Returns: + The absolute path of the temporary directory. """ @abstractmethod def get_dpdk_build_env_vars(self, arch: Architecture) -> dict: - """ - Create extra environment variables needed for the target architecture. Get - information from the node if needed. + """Create extra environment variables needed for the target architecture. + + Different architectures may require different configuration, such as setting 32-bit CFLAGS. + + Returns: + A dictionary with keys as environment variables. """ @abstractmethod def join_remote_path(self, *args: str | PurePath) -> PurePath: - """ - Join path parts using the path separator that fits the remote OS. + """Join path parts using the path separator that fits the remote OS. + + Args: + args: Any number of paths to join. + + Returns: + The resulting joined path. """ @abstractmethod @@ -143,13 +218,13 @@ def copy_from( source_file: str | PurePath, destination_file: str | PurePath, ) -> None: - """Copy a file from the remote Node to the local filesystem. + """Copy a file from the remote node to the local filesystem. - Copy source_file from the remote Node associated with this remote - session to destination_file on the local filesystem. + Copy `source_file` from the remote node associated with this remote + session to `destination_file` on the local filesystem. Args: - source_file: the file on the remote Node. + source_file: the file on the remote node. destination_file: a file or directory path on the local filesystem. """ @@ -159,14 +234,14 @@ def copy_to( source_file: str | PurePath, destination_file: str | PurePath, ) -> None: - """Copy a file from local filesystem to the remote Node. + """Copy a file from local filesystem to the remote node. 
- Copy source_file from local filesystem to destination_file - on the remote Node associated with this remote session. + Copy `source_file` from local filesystem to `destination_file` + on the remote node associated with this remote session. Args: source_file: the file on the local filesystem. - destination_file: a file or directory path on the remote Node. + destination_file: a file or directory path on the remote node. """ @abstractmethod @@ -176,8 +251,12 @@ def remove_remote_dir( recursive: bool = True, force: bool = True, ) -> None: - """ - Remove remote directory, by default remove recursively and forcefully. + """Remove remote directory, by default remove recursively and forcefully. + + Args: + remote_dir_path: The path of the directory to remove. + recursive: If :data:`True`, also remove all contents inside the directory. + force: If :data:`True`, ignore all warnings and try to remove at all costs. """ @abstractmethod @@ -186,9 +265,12 @@ def extract_remote_tarball( remote_tarball_path: str | PurePath, expected_dir: str | PurePath | None = None, ) -> None: - """ - Extract remote tarball in place. If expected_dir is a non-empty string, check - whether the dir exists after extracting the archive. + """Extract remote tarball in its remote directory. + + Args: + remote_tarball_path: The path of the tarball on the remote node. + expected_dir: If non-empty, check whether `expected_dir` exists after extracting + the archive. """ @abstractmethod @@ -201,69 +283,119 @@ def build_dpdk( rebuild: bool = False, timeout: float = SETTINGS.compile_timeout, ) -> None: - """ - Build DPDK in the input dir with specified environment variables and meson - arguments. + """Build DPDK on the remote node. + + An extracted DPDK tarball must be present on the node. 
The build consists of two steps:: + + meson setup remote_dpdk_dir remote_dpdk_build_dir + ninja -C remote_dpdk_build_dir + + The :option:`--compile-timeout` command line argument and the :envvar:`DTS_COMPILE_TIMEOUT` + environment variable configure the timeout of DPDK build. + + Args: + env_vars: Use these environment variables when building DPDK. + meson_args: Use these meson arguments when building DPDK. + remote_dpdk_dir: The directory on the remote node where DPDK will be built. + remote_dpdk_build_dir: The target build directory on the remote node. + rebuild: If :data:`True`, do a subsequent build with ``meson configure`` instead + of ``meson setup``. + timeout: Wait at most this long in seconds for the build execution to complete. """ @abstractmethod def get_dpdk_version(self, version_path: str | PurePath) -> str: - """ - Inspect DPDK version on the remote node from version_path. + """Inspect the DPDK version on the remote node. + + Args: + version_path: The path to the VERSION file containing the DPDK version. + + Returns: + The DPDK version. """ @abstractmethod def get_remote_cpus(self, use_first_core: bool) -> list[LogicalCore]: - """ - Compose a list of LogicalCores present on the remote node. - If use_first_core is False, the first physical core won't be used. + r"""Get the list of :class:`~.cpu.LogicalCore`\s on the remote node. + + Args: + use_first_core: If :data:`False`, the first physical core won't be used. + + Returns: + The logical cores present on the node. """ @abstractmethod def kill_cleanup_dpdk_apps(self, dpdk_prefix_list: Iterable[str]) -> None: - """ - Kill and cleanup all DPDK apps identified by dpdk_prefix_list. If - dpdk_prefix_list is empty, attempt to find running DPDK apps to kill and clean. + """Kill and cleanup all DPDK apps. + + Args: + dpdk_prefix_list: Kill all apps identified by `dpdk_prefix_list`. + If `dpdk_prefix_list` is empty, attempt to find running DPDK apps to kill and clean. 
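The `get_remote_cpus` contract can be illustrated with a standalone parser for ``lscpu -p=CPU,CORE,SOCKET,NODE`` output. `parse_lscpu` is a hypothetical helper, and skipping core 0 on socket 0 when `use_first_core` is false is an assumption modeled on the Linux session shown later in this series:

```python
from dataclasses import dataclass


@dataclass
class LogicalCore:
    lcore: int   # logical CPU ID
    core: int    # physical core ID
    socket: int  # CPU socket
    node: int    # NUMA node


def parse_lscpu(output: str, use_first_core: bool) -> list[LogicalCore]:
    """Parse ``lscpu -p=CPU,CORE,SOCKET,NODE`` lines into LogicalCore objects,
    optionally skipping the first physical core (core 0 on socket 0)."""
    lcores = []
    for line in output.splitlines():
        lcore, core, socket, node = map(int, line.split(","))
        if core == 0 and socket == 0 and not use_first_core:
            continue  # leave the first physical core to the OS/main lcore
        lcores.append(LogicalCore(lcore, core, socket, node))
    return lcores
```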
""" @abstractmethod def get_dpdk_file_prefix(self, dpdk_prefix: str) -> str: - """ - Get the DPDK file prefix that will be used when running DPDK apps. + """Make OS-specific modification to the DPDK file prefix. + + Args: + dpdk_prefix: The OS-unaware file prefix. + + Returns: + The OS-specific file prefix. """ @abstractmethod - def setup_hugepages(self, hugepage_amount: int, force_first_numa: bool) -> None: - """ - Get the node's Hugepage Size, configure the specified amount of hugepages + def setup_hugepages(self, hugepage_count: int, force_first_numa: bool) -> None: + """Configure hugepages on the node. + + Get the node's Hugepage Size, configure the specified count of hugepages if needed and mount the hugepages if needed. - If force_first_numa is True, configure hugepages just on the first socket. + + Args: + hugepage_count: Configure this many hugepages. + force_first_numa: If :data:`True`, configure hugepages just on the first numa node. """ @abstractmethod def get_compiler_version(self, compiler_name: str) -> str: - """ - Get installed version of compiler used for DPDK + """Get installed version of compiler used for DPDK. + + Args: + compiler_name: The name of the compiler executable. + + Returns: + The compiler's version. """ @abstractmethod def get_node_info(self) -> NodeInfo: - """ - Collect information about the node + """Collect additional information about the node. + + Returns: + Node information. """ @abstractmethod def update_ports(self, ports: list[Port]) -> None: - """ - Get additional information about ports: - Logical name (e.g. enp7s0) if applicable - Mac address + """Get additional information about ports from the operating system and update them. + + The additional information is: + + * Logical name (e.g. ``enp7s0``) if applicable, + * Mac address. + + Args: + ports: The ports to update. """ @abstractmethod def configure_port_state(self, port: Port, enable: bool) -> None: - """ - Enable/disable port. 
+ """Enable/disable `port` in the operating system. + + Args: + port: The port to configure. + enable: If :data:`True`, enable the port, otherwise shut it down. """ @abstractmethod @@ -273,12 +405,18 @@ def configure_port_ip_address( port: Port, delete: bool, ) -> None: - """ - Configure (add or delete) an IP address of the input port. + """Configure an IP address on `port` in the operating system. + + Args: + address: The address to configure. + port: The port to configure. + delete: If :data:`True`, remove the IP address, otherwise configure it. """ @abstractmethod def configure_ipv4_forwarding(self, enable: bool) -> None: - """ - Enable IPv4 forwarding in the underlying OS. + """Enable IPv4 forwarding in the operating system. + + Args: + enable: If :data:`True`, enable the forwarding, otherwise disable it. """ From patchwork Mon Dec 4 10:24:24 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Juraj_Linke=C5=A1?= X-Patchwork-Id: 134804 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id C05C54366A; Mon, 4 Dec 2023 11:26:32 +0100 (CET) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 7C10842D45; Mon, 4 Dec 2023 11:24:54 +0100 (CET) Received: from mail-wm1-f43.google.com (mail-wm1-f43.google.com [209.85.128.43]) by mails.dpdk.org (Postfix) with ESMTP id 67FB4427D7 for ; Mon, 4 Dec 2023 11:24:47 +0100 (CET) Received: by mail-wm1-f43.google.com with SMTP id 5b1f17b1804b1-40c08af319cso13806195e9.2 for ; Mon, 04 Dec 2023 02:24:47 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=pantheon.tech; s=google; t=1701685487; x=1702290287; darn=dpdk.org; h=content-transfer-encoding:mime-version:references:in-reply-to 
:message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=t6KeZaLHiysMT+K7nNxDXG0i4w3eJCHNkMFr/vP4ebc=; b=JHoIVy3zSNrmgL0WPgDXktHzW7gUlkjwiGapVv1gKmFAPV7cfOxeSxThndAIImdVJT 2um280KDNEz6NoeRwYEEgx9m/j0CiztLJ96pqHObIr3+K51FI8dkAiBuH6wcbLqUHmHw bq6qcYOAswWB23o/IGxBFSxxArXpXRc86s0M+mK9/QS1c7EExqWOBqPfAudCMnjEnlYB 1scBA9lPljbrUlV2WXzt5SEHvRxaCLCBdFdlZJvd+buVLmaiXGAPYDKIIQKzh8a/PpWG xiHlAKdhhFzQvNmTIDlCYEBr+5ESbG8gWcLwe2m0tbqegw9YlJiYSyeLa+GvUW4kZWkW nbTQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1701685487; x=1702290287; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=t6KeZaLHiysMT+K7nNxDXG0i4w3eJCHNkMFr/vP4ebc=; b=wCXHUEnzfs4lRKUJW31O/lyD8fVmlf4LBHEHJ5K/YA4MWiYTdgkMrZ9lUo/FE5byB2 H9F5OTy9wkPD37G2s6BR4thcKgBJxEf+wN+YS3yhfeNzsRCAmAptZtMJBprW81uHYblb 3/x69DCKImMKG8vuU5jDEYLS9t8OcaJ76smOuG/UP9XccJVGPFQxxBDPnv9H58pLXh6w DRrzDVbOXtFRcoBwfMCDc0Qqk83pLJMQ9IQcNV/zI//pFWq3NxxF/Pl1/vnIT4S2Ys1S bsi9XFYg+rocZbV7hsK/sy/T6f0SnXKM8wCWB/bk5X49vsA70mNwtbp/Y22tSKoQk9Jw Ei7w== X-Gm-Message-State: AOJu0YwYMSiGOiWGZ5TtfVe8/huHym6Q+nBQWhgkOX6KaygT6PjBLlx5 0JtjrrFwu2thurFm7FlzinbZDA== X-Google-Smtp-Source: AGHT+IEwUS3wQjyd8qDni0zOHLt/lR+xVNcnMUWUwQXc4s5tgL2D/asIJ21xv2rlxu92Fwkn7m6FlA== X-Received: by 2002:a05:600c:4eca:b0:40a:6235:e82d with SMTP id g10-20020a05600c4eca00b0040a6235e82dmr2316431wmq.15.1701685487096; Mon, 04 Dec 2023 02:24:47 -0800 (PST) Received: from jlinkes-PT-Latitude-5530.pantheon.local ([81.89.53.154]) by smtp.gmail.com with ESMTPSA id m28-20020a05600c3b1c00b0040b2b38a1fasm14255415wms.4.2023.12.04.02.24.46 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 04 Dec 2023 02:24:46 -0800 (PST) From: =?utf-8?q?Juraj_Linke=C5=A1?= To: thomas@monjalon.net, Honnappa.Nagarahalli@arm.com, jspewock@iol.unh.edu, probb@iol.unh.edu, paul.szczepanek@arm.com, 
yoan.picchi@foss.arm.com, Luca.Vizzarro@arm.com Cc: dev@dpdk.org, =?utf-8?q?Juraj_Linke=C5=A1?= Subject: [PATCH v9 16/21] dts: posix and linux sessions docstring update Date: Mon, 4 Dec 2023 11:24:24 +0100 Message-Id: <20231204102429.106709-17-juraj.linkes@pantheon.tech> In-Reply-To: <20231204102429.106709-1-juraj.linkes@pantheon.tech> References: <20231123151344.162812-1-juraj.linkes@pantheon.tech> <20231204102429.106709-1-juraj.linkes@pantheon.tech> Format according to the Google format and PEP257, with slight deviations. Signed-off-by: Juraj Linkeš --- dts/framework/testbed_model/linux_session.py | 64 +++++++++++----- dts/framework/testbed_model/posix_session.py | 81 +++++++++++++++++--- 2 files changed, 114 insertions(+), 31 deletions(-) diff --git a/dts/framework/testbed_model/linux_session.py b/dts/framework/testbed_model/linux_session.py index 055765ba2d..0ab59cef85 100644 --- a/dts/framework/testbed_model/linux_session.py +++ b/dts/framework/testbed_model/linux_session.py @@ -2,6 +2,13 @@ # Copyright(c) 2023 PANTHEON.tech s.r.o. # Copyright(c) 2023 University of New Hampshire +"""Linux OS translator. + +Translate OS-unaware calls into Linux calls/utilities. Most of Linux distributions are mostly +compliant with POSIX standards, so this module only implements the parts that aren't +covered by the intermediate POSIX module.
+""" + import json from ipaddress import IPv4Interface, IPv6Interface from typing import TypedDict, Union @@ -17,43 +24,52 @@ class LshwConfigurationOutput(TypedDict): + """The relevant parts of ``lshw``'s ``configuration`` section.""" + + #: link: str class LshwOutput(TypedDict): - """ - A model of the relevant information from json lshw output, e.g.: - { - ... - "businfo" : "pci@0000:08:00.0", - "logicalname" : "enp8s0", - "version" : "00", - "serial" : "52:54:00:59:e1:ac", - ... - "configuration" : { - ... - "link" : "yes", - ... - }, - ... + """A model of the relevant information from ``lshw``'s json output. + + Example: + :: + + { + ... + "businfo" : "pci@0000:08:00.0", + "logicalname" : "enp8s0", + "version" : "00", + "serial" : "52:54:00:59:e1:ac", + ... + "configuration" : { + ... + "link" : "yes", + ... + }, + ... """ + #: businfo: str + #: logicalname: NotRequired[str] + #: serial: NotRequired[str] + #: configuration: LshwConfigurationOutput class LinuxSession(PosixSession): - """ - The implementation of non-Posix compliant parts of Linux remote sessions. 
- """ + """The implementation of non-Posix compliant parts of Linux.""" @staticmethod def _get_privileged_command(command: str) -> str: return f"sudo -- sh -c '{command}'" def get_remote_cpus(self, use_first_core: bool) -> list[LogicalCore]: + """Overrides :meth:`~.os_session.OSSession.get_remote_cpus`.""" cpu_info = self.send_command("lscpu -p=CPU,CORE,SOCKET,NODE|grep -v \\#").stdout lcores = [] for cpu_line in cpu_info.splitlines(): @@ -65,18 +81,20 @@ def get_remote_cpus(self, use_first_core: bool) -> list[LogicalCore]: return lcores def get_dpdk_file_prefix(self, dpdk_prefix: str) -> str: + """Overrides :meth:`~.os_session.OSSession.get_dpdk_file_prefix`.""" return dpdk_prefix - def setup_hugepages(self, hugepage_amount: int, force_first_numa: bool) -> None: + def setup_hugepages(self, hugepage_count: int, force_first_numa: bool) -> None: + """Overrides :meth:`~.os_session.OSSession.setup_hugepages`.""" self._logger.info("Getting Hugepage information.") hugepage_size = self._get_hugepage_size() hugepages_total = self._get_hugepages_total() self._numa_nodes = self._get_numa_nodes() - if force_first_numa or hugepages_total != hugepage_amount: + if force_first_numa or hugepages_total != hugepage_count: # when forcing numa, we need to clear existing hugepages regardless # of size, so they can be moved to the first numa node - self._configure_huge_pages(hugepage_amount, hugepage_size, force_first_numa) + self._configure_huge_pages(hugepage_count, hugepage_size, force_first_numa) else: self._logger.info("Hugepages already configured.") self._mount_huge_pages() @@ -132,6 +150,7 @@ def _configure_huge_pages(self, amount: int, size: int, force_first_numa: bool) self.send_command(f"echo {amount} | tee {hugepage_config_path}", privileged=True) def update_ports(self, ports: list[Port]) -> None: + """Overrides :meth:`~.os_session.OSSession.update_ports`.""" self._logger.debug("Gathering port info.") for port in ports: assert port.node == self.name, "Attempted to gather 
port info on the wrong node" @@ -161,6 +180,7 @@ def _update_port_attr(self, port: Port, attr_value: str | None, attr_name: str) ) def configure_port_state(self, port: Port, enable: bool) -> None: + """Overrides :meth:`~.os_session.OSSession.configure_port_state`.""" state = "up" if enable else "down" self.send_command(f"ip link set dev {port.logical_name} {state}", privileged=True) @@ -170,6 +190,7 @@ def configure_port_ip_address( port: Port, delete: bool, ) -> None: + """Overrides :meth:`~.os_session.OSSession.configure_port_ip_address`.""" command = "del" if delete else "add" self.send_command( f"ip address {command} {address} dev {port.logical_name}", @@ -178,5 +199,6 @@ def configure_port_ip_address( ) def configure_ipv4_forwarding(self, enable: bool) -> None: + """Overrides :meth:`~.os_session.OSSession.configure_ipv4_forwarding`.""" state = 1 if enable else 0 self.send_command(f"sysctl -w net.ipv4.ip_forward={state}", privileged=True) diff --git a/dts/framework/testbed_model/posix_session.py b/dts/framework/testbed_model/posix_session.py index 5657cc0bc9..d279bb8b53 100644 --- a/dts/framework/testbed_model/posix_session.py +++ b/dts/framework/testbed_model/posix_session.py @@ -2,6 +2,15 @@ # Copyright(c) 2023 PANTHEON.tech s.r.o. # Copyright(c) 2023 University of New Hampshire +"""POSIX compliant OS translator. + +Translates OS-unaware calls into POSIX compliant calls/utilities. POSIX is a set of standards +for portability between Unix operating systems which not all Linux distributions +(or the tools most frequently bundled with said distributions) adhere to. Most of Linux +distributions are mostly compliant though. +This intermediate module implements the common parts of mostly POSIX compliant distributions. 
+""" + import re from collections.abc import Iterable from pathlib import PurePath, PurePosixPath @@ -15,13 +24,21 @@ class PosixSession(OSSession): - """ - An intermediary class implementing the Posix compliant parts of - Linux and other OS remote sessions. - """ + """An intermediary class implementing the POSIX standard.""" @staticmethod def combine_short_options(**opts: bool) -> str: + """Combine shell options into one argument. + + These are options such as ``-x``, ``-v``, ``-f`` which are combined into ``-xvf``. + + Args: + opts: The keys are option names (usually one letter) and the bool values indicate + whether to include the option in the resulting argument. + + Returns: + The options combined into one argument. + """ ret_opts = "" for opt, include in opts.items(): if include: @@ -33,17 +50,19 @@ def combine_short_options(**opts: bool) -> str: return ret_opts def guess_dpdk_remote_dir(self, remote_dir: str | PurePath) -> PurePosixPath: + """Overrides :meth:`~.os_session.OSSession.guess_dpdk_remote_dir`.""" remote_guess = self.join_remote_path(remote_dir, "dpdk-*") result = self.send_command(f"ls -d {remote_guess} | tail -1") return PurePosixPath(result.stdout) def get_remote_tmp_dir(self) -> PurePosixPath: + """Overrides :meth:`~.os_session.OSSession.get_remote_tmp_dir`.""" return PurePosixPath("/tmp") def get_dpdk_build_env_vars(self, arch: Architecture) -> dict: - """ - Create extra environment variables needed for i686 arch build. Get information - from the node if needed. + """Overrides :meth:`~.os_session.OSSession.get_dpdk_build_env_vars`. + + Supported architecture: ``i686``. 
""" env_vars = {} if arch == Architecture.i686: @@ -63,6 +82,7 @@ def get_dpdk_build_env_vars(self, arch: Architecture) -> dict: return env_vars def join_remote_path(self, *args: str | PurePath) -> PurePosixPath: + """Overrides :meth:`~.os_session.OSSession.join_remote_path`.""" return PurePosixPath(*args) def copy_from( @@ -70,6 +90,7 @@ def copy_from( source_file: str | PurePath, destination_file: str | PurePath, ) -> None: + """Overrides :meth:`~.os_session.OSSession.copy_from`.""" self.remote_session.copy_from(source_file, destination_file) def copy_to( @@ -77,6 +98,7 @@ def copy_to( source_file: str | PurePath, destination_file: str | PurePath, ) -> None: + """Overrides :meth:`~.os_session.OSSession.copy_to`.""" self.remote_session.copy_to(source_file, destination_file) def remove_remote_dir( @@ -85,6 +107,7 @@ def remove_remote_dir( recursive: bool = True, force: bool = True, ) -> None: + """Overrides :meth:`~.os_session.OSSession.remove_remote_dir`.""" opts = PosixSession.combine_short_options(r=recursive, f=force) self.send_command(f"rm{opts} {remote_dir_path}") @@ -93,6 +116,7 @@ def extract_remote_tarball( remote_tarball_path: str | PurePath, expected_dir: str | PurePath | None = None, ) -> None: + """Overrides :meth:`~.os_session.OSSession.extract_remote_tarball`.""" self.send_command( f"tar xfm {remote_tarball_path} -C {PurePosixPath(remote_tarball_path).parent}", 60, @@ -109,6 +133,7 @@ def build_dpdk( rebuild: bool = False, timeout: float = SETTINGS.compile_timeout, ) -> None: + """Overrides :meth:`~.os_session.OSSession.build_dpdk`.""" try: if rebuild: # reconfigure, then build @@ -138,10 +163,12 @@ def build_dpdk( raise DPDKBuildError(f"DPDK build failed when doing '{e.command}'.") def get_dpdk_version(self, build_dir: str | PurePath) -> str: + """Overrides :meth:`~.os_session.OSSession.get_dpdk_version`.""" out = self.send_command(f"cat {self.join_remote_path(build_dir, 'VERSION')}", verify=True) return out.stdout def kill_cleanup_dpdk_apps(self, 
dpdk_prefix_list: Iterable[str]) -> None: + """Overrides :meth:`~.os_session.OSSession.kill_cleanup_dpdk_apps`.""" self._logger.info("Cleaning up DPDK apps.") dpdk_runtime_dirs = self._get_dpdk_runtime_dirs(dpdk_prefix_list) if dpdk_runtime_dirs: @@ -153,6 +180,14 @@ def kill_cleanup_dpdk_apps(self, dpdk_prefix_list: Iterable[str]) -> None: self._remove_dpdk_runtime_dirs(dpdk_runtime_dirs) def _get_dpdk_runtime_dirs(self, dpdk_prefix_list: Iterable[str]) -> list[PurePosixPath]: + """Find runtime directories DPDK apps are currently using. + + Args: + dpdk_prefix_list: The prefixes DPDK apps were started with. + + Returns: + The paths of DPDK apps' runtime dirs. + """ prefix = PurePosixPath("/var", "run", "dpdk") if not dpdk_prefix_list: remote_prefixes = self._list_remote_dirs(prefix) @@ -164,9 +199,13 @@ def _get_dpdk_runtime_dirs(self, dpdk_prefix_list: Iterable[str]) -> list[PurePo return [PurePosixPath(prefix, dpdk_prefix) for dpdk_prefix in dpdk_prefix_list] def _list_remote_dirs(self, remote_path: str | PurePath) -> list[str] | None: - """ - Return a list of directories of the remote_dir. - If remote_path doesn't exist, return None. + """List the contents of remote_path. + + Args: + remote_path: List the contents of this path. + + Returns: + The contents of remote_path. If remote_path doesn't exist, return None. """ out = self.send_command(f"ls -l {remote_path} | awk '/^d/ {{print $NF}}'").stdout if "No such file or directory" in out: @@ -175,6 +214,17 @@ def _list_remote_dirs(self, remote_path: str | PurePath) -> list[str] | None: return out.splitlines() def _get_dpdk_pids(self, dpdk_runtime_dirs: Iterable[str | PurePath]) -> list[int]: + """Find PIDs of running DPDK apps. + + Look at each "config" file found in dpdk_runtime_dirs and find the PIDs of processes + that opened those files. + + Args: + dpdk_runtime_dirs: The paths of DPDK apps' runtime dirs. + + Returns: + The PIDs of running DPDK apps.
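The ``p(\d+)`` pattern used by `_get_dpdk_pids` matches the PID lines of ``lsof`` field output, where ``-Fp`` prints each process as ``p<pid>`` on its own line. A standalone sketch of just the parsing step; `parse_lsof_pids` is a hypothetical helper, since the real method runs ``lsof`` remotely against each runtime dir's config file:

```python
import re


def parse_lsof_pids(output: str) -> list[int]:
    """Extract PIDs from ``lsof -Fp`` style output, where each process is
    reported on its own line as ``p<pid>``."""
    pid_regex = r"p(\d+)"  # same pattern as in the diff above
    pids = []
    for line in output.splitlines():
        if match := re.match(pid_regex, line):
            pids.append(int(match.group(1)))
    return pids
```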
+ """ pids = [] pid_regex = r"p(\d+)" for dpdk_runtime_dir in dpdk_runtime_dirs: @@ -193,6 +243,14 @@ def _remote_files_exists(self, remote_path: PurePath) -> bool: return not result.return_code def _check_dpdk_hugepages(self, dpdk_runtime_dirs: Iterable[str | PurePath]) -> None: + """Check there aren't any leftover hugepages. + + If any hugepages are found, emit a warning. The hugepages are investigated in the + "hugepage_info" file of dpdk_runtime_dirs. + + Args: + dpdk_runtime_dirs: The paths of DPDK apps' runtime dirs. + """ for dpdk_runtime_dir in dpdk_runtime_dirs: hugepage_info = PurePosixPath(dpdk_runtime_dir, "hugepage_info") if self._remote_files_exists(hugepage_info): @@ -208,9 +266,11 @@ def _remove_dpdk_runtime_dirs(self, dpdk_runtime_dirs: Iterable[str | PurePath]) self.remove_remote_dir(dpdk_runtime_dir) def get_dpdk_file_prefix(self, dpdk_prefix: str) -> str: + """Overrides :meth:`~.os_session.OSSession.get_dpdk_file_prefix`.""" return "" def get_compiler_version(self, compiler_name: str) -> str: + """Overrides :meth:`~.os_session.OSSession.get_compiler_version`.""" match compiler_name: case "gcc": return self.send_command( @@ -228,6 +288,7 @@ def get_compiler_version(self, compiler_name: str) -> str: raise ValueError(f"Unknown compiler {compiler_name}") def get_node_info(self) -> NodeInfo: + """Overrides :meth:`~.os_session.OSSession.get_node_info`.""" os_release_info = self.send_command( "awk -F= '$1 ~ /^NAME$|^VERSION$/ {print $2}' /etc/os-release", SETTINGS.timeout, From patchwork Mon Dec 4 10:24:25 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Juraj_Linke=C5=A1?= X-Patchwork-Id: 134805 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id E11B94366A; Mon, 4 Dec 2023 11:26:40 
+0100 (CET) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 1B6F442D7F; Mon, 4 Dec 2023 11:24:56 +0100 (CET) Received: from mail-wm1-f48.google.com (mail-wm1-f48.google.com [209.85.128.48]) by mails.dpdk.org (Postfix) with ESMTP id A38F540DDE for ; Mon, 4 Dec 2023 11:24:48 +0100 (CET) Received: by mail-wm1-f48.google.com with SMTP id 5b1f17b1804b1-40c09fcfa9fso10724135e9.2 for ; Mon, 04 Dec 2023 02:24:48 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=pantheon.tech; s=google; t=1701685488; x=1702290288; darn=dpdk.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=ymyqLmFCzT7culYoGvo3LXHdmL+H/P15hUTlgEDVUOk=; b=T1mpJHSqUDFi1zUO0oMoEkhyxuF4U5SKN4aYAnGbZxCmfQHdwVAYKDPd1E4KJJ3qUo jjvGAqaM09mKOcJ+Wk2SaKAW5EHCl1mu6XgkPGzMt5xWfB0ZahxV0Grip3QKhsqp0LIQ /0+8+rOPLJIXzRRyQRID0l7ztRKn9Zex3btB3tQPZlov69FZPHhXjEh/iv62quhvqpAb C94Hz4el03MIJXAv440QnHprimIetADP3kXLV8ZjArNCMd/V0yw7dgYLgT27S5ASKs0p OvHyyZ6lDsUtdsX/jhADhefri51NjW1XYNXR1ilv7tWPdsyQCaeWvl+kyMEQpTiNUkeK nkFg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1701685488; x=1702290288; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=ymyqLmFCzT7culYoGvo3LXHdmL+H/P15hUTlgEDVUOk=; b=m0FZiMeVrF1sGKB9ktRvDKrRl31oxDzWeycJ2eW8rigS0zHrAYbHzbKKfqPk9lirNU EUfMUg8ELVDaAeWHiNH66tVgzXjTF5HBNe/jTBO+KO0PUJuzNmz28oaLdZ1KiyM2T2YE VvcfRXSzr+eyVKgJGlrb1fyNYiMAIXx6i6C/xOz/aHHmounSIdbQc0oTucq9N+/kQyWv picU9bTCIPRZGSp7Hs3SXrdEsNcvZOLttvOFnhfN7n95mQ5A6rmJVdf/mXTwPhZlVd/E y1HZGnjKgZm8g2MZRp427yAiEmnglHE0RAwtN1uGFsj/HbJARnM+0K9O2iZX6Djh24CC l+Ag== X-Gm-Message-State: AOJu0YzdifBn7WC01hXqEj8EoIYyRQd9OQdWu/CXn/Gd20kinr8UUEVP 3+AtxI0jUua7vATub6EKg9R1pg== X-Google-Smtp-Source: 
AGHT+IEYz8GGkqx9BBscur7cZcFt+goHCqZYkg4Y3aw7HM3+ojwNsrmPv0yPDm4ubDndO6iP1ENdFQ== X-Received: by 2002:a05:600c:198a:b0:40b:5f03:b3cb with SMTP id t10-20020a05600c198a00b0040b5f03b3cbmr1094877wmq.237.1701685488242; Mon, 04 Dec 2023 02:24:48 -0800 (PST) Received: from jlinkes-PT-Latitude-5530.pantheon.local ([81.89.53.154]) by smtp.gmail.com with ESMTPSA id m28-20020a05600c3b1c00b0040b2b38a1fasm14255415wms.4.2023.12.04.02.24.47 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 04 Dec 2023 02:24:47 -0800 (PST) From: =?utf-8?q?Juraj_Linke=C5=A1?= To: thomas@monjalon.net, Honnappa.Nagarahalli@arm.com, jspewock@iol.unh.edu, probb@iol.unh.edu, paul.szczepanek@arm.com, yoan.picchi@foss.arm.com, Luca.Vizzarro@arm.com Cc: dev@dpdk.org, =?utf-8?q?Juraj_Linke=C5=A1?= Subject: [PATCH v9 17/21] dts: node docstring update Date: Mon, 4 Dec 2023 11:24:25 +0100 Message-Id: <20231204102429.106709-18-juraj.linkes@pantheon.tech> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20231204102429.106709-1-juraj.linkes@pantheon.tech> References: <20231123151344.162812-1-juraj.linkes@pantheon.tech> <20231204102429.106709-1-juraj.linkes@pantheon.tech> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Format according to the Google format and PEP257, with slight deviations. Signed-off-by: Juraj Linkeš --- dts/framework/testbed_model/node.py | 191 +++++++++++++++++++--------- 1 file changed, 131 insertions(+), 60 deletions(-) diff --git a/dts/framework/testbed_model/node.py b/dts/framework/testbed_model/node.py index b313b5ad54..1a55fadf78 100644 --- a/dts/framework/testbed_model/node.py +++ b/dts/framework/testbed_model/node.py @@ -3,8 +3,13 @@ # Copyright(c) 2022-2023 PANTHEON.tech s.r.o. # Copyright(c) 2022-2023 University of New Hampshire -""" -A node is a generic host that DTS connects to and manages. 
+"""Common functionality for node management. + +A node is any host/server DTS connects to. + +The base class, :class:`Node`, provides features common to all nodes and is supposed +to be extended by subclasses with features specific to each node type. +The :func:`~Node.skip_setup` decorator can be used without subclassing. """ from abc import ABC @@ -35,10 +40,22 @@ class Node(ABC): - """ - Basic class for node management. This class implements methods that - manage a node, such as information gathering (of CPU/PCI/NIC) and - environment setup. + """The base class for node management. + + It shouldn't be instantiated, but rather subclassed. + It implements common methods to manage any node: + + * Connection to the node, + * Hugepages setup. + + Attributes: + main_session: The primary OS-aware remote session used to communicate with the node. + config: The node configuration. + name: The name of the node. + lcores: The list of logical cores that DTS can use on the node. + It's derived from logical cores present on the node and the test run configuration. + ports: The ports of this node specified in the test run configuration. + virtual_devices: The virtual devices used on the node. """ main_session: OSSession @@ -52,6 +69,17 @@ class Node(ABC): virtual_devices: list[VirtualDevice] def __init__(self, node_config: NodeConfiguration): + """Connect to the node and gather info during initialization. + + Extra gathered information: + + * The list of available logical CPUs. This is then filtered by + the ``lcores`` configuration in the YAML test run configuration file, + * Information about ports from the YAML test run configuration file. + + Args: + node_config: The node's test run configuration. 
+ """ self.config = node_config self.name = node_config.name self._logger = getLogger(self.name) @@ -60,7 +88,7 @@ def __init__(self, node_config: NodeConfiguration): self._logger.info(f"Connected to node: {self.name}") self._get_remote_cpus() - # filter the node lcores according to user config + # filter the node lcores according to the test run configuration self.lcores = LogicalCoreListFilter( self.lcores, LogicalCoreList(self.config.lcores) ).filter() @@ -76,9 +104,14 @@ def _init_ports(self) -> None: self.configure_port_state(port) def set_up_execution(self, execution_config: ExecutionConfiguration) -> None: - """ - Perform the execution setup that will be done for each execution - this node is part of. + """Execution setup steps. + + Configure hugepages and call :meth:`_set_up_execution` where + the rest of the configuration steps (if any) are implemented. + + Args: + execution_config: The execution test run configuration according to which + the setup steps will be taken. """ self._setup_hugepages() self._set_up_execution(execution_config) @@ -87,54 +120,70 @@ def set_up_execution(self, execution_config: ExecutionConfiguration) -> None: self.virtual_devices.append(VirtualDevice(vdev)) def _set_up_execution(self, execution_config: ExecutionConfiguration) -> None: - """ - This method exists to be optionally overwritten by derived classes and - is not decorated so that the derived class doesn't have to use the decorator. + """Optional additional execution setup steps for subclasses. + + Subclasses should override this if they need to add additional execution setup steps. """ def tear_down_execution(self) -> None: - """ - Perform the execution teardown that will be done after each execution - this node is part of concludes. + """Execution teardown steps. + + There are currently no common execution teardown steps common to all DTS node types. 
""" self.virtual_devices = [] self._tear_down_execution() def _tear_down_execution(self) -> None: - """ - This method exists to be optionally overwritten by derived classes and - is not decorated so that the derived class doesn't have to use the decorator. + """Optional additional execution teardown steps for subclasses. + + Subclasses should override this if they need to add additional execution teardown steps. """ def set_up_build_target(self, build_target_config: BuildTargetConfiguration) -> None: - """ - Perform the build target setup that will be done for each build target - tested on this node. + """Build target setup steps. + + There are currently no common build target setup steps common to all DTS node types. + + Args: + build_target_config: The build target test run configuration according to which + the setup steps will be taken. """ self._set_up_build_target(build_target_config) def _set_up_build_target(self, build_target_config: BuildTargetConfiguration) -> None: - """ - This method exists to be optionally overwritten by derived classes and - is not decorated so that the derived class doesn't have to use the decorator. + """Optional additional build target setup steps for subclasses. + + Subclasses should override this if they need to add additional build target setup steps. """ def tear_down_build_target(self) -> None: - """ - Perform the build target teardown that will be done after each build target - tested on this node. + """Build target teardown steps. + + There are currently no common build target teardown steps common to all DTS node types. """ self._tear_down_build_target() def _tear_down_build_target(self) -> None: - """ - This method exists to be optionally overwritten by derived classes and - is not decorated so that the derived class doesn't have to use the decorator. + """Optional additional build target teardown steps for subclasses. + + Subclasses should override this if they need to add additional build target teardown steps. 
""" def create_session(self, name: str) -> OSSession: - """ - Create and return a new OSSession tailored to the remote OS. + """Create and return a new OS-aware remote session. + + The returned session won't be used by the node creating it. The session must be used by + the caller. The session will be maintained for the entire lifecycle of the node object, + at the end of which the session will be cleaned up automatically. + + Note: + Any number of these supplementary sessions may be created. + + Args: + name: The name of the session. + + Returns: + A new OS-aware remote session. """ session_name = f"{self.name} {name}" connection = create_session( @@ -152,19 +201,19 @@ def create_interactive_shell( privileged: bool = False, app_args: str = "", ) -> InteractiveShellType: - """Create a handler for an interactive session. + """Factory for interactive session handlers. - Instantiate shell_cls according to the remote OS specifics. + Instantiate `shell_cls` according to the remote OS specifics. Args: shell_cls: The class of the shell. - timeout: Timeout for reading output from the SSH channel. If you are - reading from the buffer and don't receive any data within the timeout - it will throw an error. + timeout: Timeout for reading output from the SSH channel. If you are reading from + the buffer and don't receive any data within the timeout it will throw an error. privileged: Whether to run the shell with administrative privileges. app_args: The arguments to be passed to the application. + Returns: - Instance of the desired interactive application. + An instance of the desired interactive application shell. """ if not shell_cls.dpdk_app: shell_cls.path = self.main_session.join_remote_path(shell_cls.path) @@ -181,14 +230,22 @@ def filter_lcores( filter_specifier: LogicalCoreCount | LogicalCoreList, ascending: bool = True, ) -> list[LogicalCore]: - """ - Filter the LogicalCores found on the Node according to - a LogicalCoreCount or a LogicalCoreList. 
+ """Filter the node's logical cores that DTS can use. + + Logical cores that DTS can use are the ones that are present on the node, but filtered + according to the test run configuration. The `filter_specifier` will filter cores from + those logical cores. + + Args: + filter_specifier: Two different filters can be used, one that specifies the number + of logical cores per core, cores per socket and the number of sockets, + and another one that specifies a logical core list. + ascending: If :data:`True`, use cores with the lowest numerical id first and continue + in ascending order. If :data:`False`, start with the highest id and continue + in descending order. This ordering affects which sockets to consider first as well. - If ascending is True, use cores with the lowest numerical id first - and continue in ascending order. If False, start with the highest - id and continue in descending order. This ordering affects which - sockets to consider first as well. + Returns: + The filtered logical cores. """ self._logger.debug(f"Filtering {filter_specifier} from {self.lcores}.") return lcore_filter( @@ -198,17 +255,14 @@ def filter_lcores( ).filter() def _get_remote_cpus(self) -> None: - """ - Scan CPUs in the remote OS and store a list of LogicalCores. - """ + """Scan CPUs in the remote OS and store a list of LogicalCores.""" self._logger.info("Getting CPU information.") self.lcores = self.main_session.get_remote_cpus(self.config.use_first_core) def _setup_hugepages(self) -> None: - """ - Setup hugepages on the Node. Different architectures can supply different - amounts of memory for hugepages and numa-based hugepage allocation may need - to be considered. + """Setup hugepages on the node. + + Configure the hugepages only if they're specified in the node's test run configuration. 
""" if self.config.hugepages: self.main_session.setup_hugepages( @@ -216,8 +270,11 @@ def _setup_hugepages(self) -> None: ) def configure_port_state(self, port: Port, enable: bool = True) -> None: - """ - Enable/disable port. + """Enable/disable `port`. + + Args: + port: The port to enable/disable. + enable: :data:`True` to enable, :data:`False` to disable. """ self.main_session.configure_port_state(port, enable) @@ -227,15 +284,17 @@ def configure_port_ip_address( port: Port, delete: bool = False, ) -> None: - """ - Configure the IP address of a port on this node. + """Add an IP address to `port` on this node. + + Args: + address: The IP address with mask in CIDR format. Can be either IPv4 or IPv6. + port: The port to which to add the address. + delete: If :data:`True`, will delete the address from the port instead of adding it. """ self.main_session.configure_port_ip_address(address, port, delete) def close(self) -> None: - """ - Close all connections and free other resources. - """ + """Close all connections and free other resources.""" if self.main_session: self.main_session.close() for session in self._other_sessions: @@ -244,6 +303,11 @@ def close(self) -> None: @staticmethod def skip_setup(func: Callable[..., Any]) -> Callable[..., Any]: + """Skip the decorated function. + + The :option:`--skip-setup` command line argument and the :envvar:`DTS_SKIP_SETUP` + environment variable enable the decorator. + """ if SETTINGS.skip_setup: return lambda *args: None else: @@ -251,6 +315,13 @@ def skip_setup(func: Callable[..., Any]) -> Callable[..., Any]: def create_session(node_config: NodeConfiguration, name: str, logger: DTSLOG) -> OSSession: + """Factory for OS-aware sessions. + + Args: + node_config: The test run configuration of the node to connect to. + name: The name of the session. + logger: The logger instance this session will use. 
+ """ match node_config.os: case OS.linux: return LinuxSession(node_config, name, logger) From patchwork Mon Dec 4 10:24:26 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Juraj_Linke=C5=A1?= X-Patchwork-Id: 134806 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 34D244366A; Mon, 4 Dec 2023 11:26:46 +0100 (CET) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 6510842D89; Mon, 4 Dec 2023 11:24:57 +0100 (CET) Received: from mail-wm1-f51.google.com (mail-wm1-f51.google.com [209.85.128.51]) by mails.dpdk.org (Postfix) with ESMTP id B4690427E3 for ; Mon, 4 Dec 2023 11:24:49 +0100 (CET) Received: by mail-wm1-f51.google.com with SMTP id 5b1f17b1804b1-40c09fcfa9fso10724285e9.2 for ; Mon, 04 Dec 2023 02:24:49 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=pantheon.tech; s=google; t=1701685489; x=1702290289; darn=dpdk.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=1GQUWl4fKCWCJwAOefP/gX8Mj3lF66GL8g1NajdfKkM=; b=ubcdVFPeGbpTDM2kwLATSXQzyNOGediHcOic8kohHrPSxpY2kEBDxxvUyITSXcU/Xf SsxX3P8W4x4c/VZuUekr2TUToTcLKCnQXliey2Ps8MhWHssTME2+I5iajlk/0Nnei51y QB9x+pWjIB7kUH8niipI61cU+0A/lVIOpJhMhjIPilIQB1Z+SnxCbrojY0z+FOCW0R1T o0SLiz/KLefiD0McGqnyZmncJhoNHT3VLmQW4PR/nMUEtlON4KCNDNdA94J8xpDA6grN fCn1mlPZTeRqj0hlOW/7hilvoQat4yWn0zEU6ebfHdFnJTagBQaDcCWEetYW2iuoz4UQ 3z+w== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1701685489; x=1702290289; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; 
bh=1GQUWl4fKCWCJwAOefP/gX8Mj3lF66GL8g1NajdfKkM=; b=F7XePAHHyHF0CufRKRkW/OFbBoyfWPuD90PcluR5OSi5+NL7krym0wRoAFD2O5FMQp un/QGaOyh3jVGmh/EfmgSJT28tV1GVpN+frSMrYSAh7GVQOhD/eth1wOn8BNTB9m9Iih 5RaY5u35dOGHycuxJ2GpRwKkmfqLcqf22Bs2Yu4zZJy6l5JFybkSrS3A40O1T7B6eigX qhMvDfeWX1ZxnA432CSaqrtcDW8T6GVO77VEX+zUeh48W+2UhSHQ52c3c4VXSRVDdJN6 IWV2fmGRo2QCeLk5id5yZ6Bx4gNP5LG6kwY0o5SlOf8prz2CdQcoWE9pq78KpYGxPPa7 WaQQ== X-Gm-Message-State: AOJu0Ywqp0mwXsB0cwGX+K3ptsFvMxY3cirPxxeYKdfldzXfN76Tww1E SP7qfbHyXJSvu+uvs9fEbQxImg== X-Google-Smtp-Source: AGHT+IGlQbKp9Ya8pztd+ja2JyNnfcVFsxET/dMyscxRfYeoqXWqK7QbEobGQxbwWWkVTR3rG8bTzg== X-Received: by 2002:a7b:c8c4:0:b0:40b:5f03:b405 with SMTP id f4-20020a7bc8c4000000b0040b5f03b405mr1097481wml.295.1701685489282; Mon, 04 Dec 2023 02:24:49 -0800 (PST) Received: from jlinkes-PT-Latitude-5530.pantheon.local ([81.89.53.154]) by smtp.gmail.com with ESMTPSA id m28-20020a05600c3b1c00b0040b2b38a1fasm14255415wms.4.2023.12.04.02.24.48 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 04 Dec 2023 02:24:48 -0800 (PST) From: =?utf-8?q?Juraj_Linke=C5=A1?= To: thomas@monjalon.net, Honnappa.Nagarahalli@arm.com, jspewock@iol.unh.edu, probb@iol.unh.edu, paul.szczepanek@arm.com, yoan.picchi@foss.arm.com, Luca.Vizzarro@arm.com Cc: dev@dpdk.org, =?utf-8?q?Juraj_Linke=C5=A1?= Subject: [PATCH v9 18/21] dts: sut and tg nodes docstring update Date: Mon, 4 Dec 2023 11:24:26 +0100 Message-Id: <20231204102429.106709-19-juraj.linkes@pantheon.tech> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20231204102429.106709-1-juraj.linkes@pantheon.tech> References: <20231123151344.162812-1-juraj.linkes@pantheon.tech> <20231204102429.106709-1-juraj.linkes@pantheon.tech> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Format according to the Google format and PEP257, with slight deviations. 
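For reference, the Google docstring format with PEP257 that this series converges on uses a one-line summary followed by ``Args``/``Returns`` sections. An illustrative example (this function is a simplified stand-in, not code from the patch):

```python
def join_remote_path(remote_dir: str, *parts: str) -> str:
    """Join path parts with the remote system's separator.

    Args:
        remote_dir: The base directory on the remote node.
        parts: Additional path components to append.

    Returns:
        The joined POSIX-style path.
    """
    return "/".join((remote_dir.rstrip("/"), *parts))

print(join_remote_path("/tmp/dpdk", "usertools", "dpdk-devbind.py"))
# /tmp/dpdk/usertools/dpdk-devbind.py
```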
Signed-off-by: Juraj Linkeš --- dts/framework/testbed_model/sut_node.py | 230 ++++++++++++++++-------- dts/framework/testbed_model/tg_node.py | 42 +++-- 2 files changed, 176 insertions(+), 96 deletions(-) diff --git a/dts/framework/testbed_model/sut_node.py b/dts/framework/testbed_model/sut_node.py index 5ce9446dba..c4acea38d1 100644 --- a/dts/framework/testbed_model/sut_node.py +++ b/dts/framework/testbed_model/sut_node.py @@ -3,6 +3,14 @@ # Copyright(c) 2023 PANTHEON.tech s.r.o. # Copyright(c) 2023 University of New Hampshire +"""System under test (DPDK + hardware) node. + +A system under test (SUT) is the combination of DPDK +and the hardware we're testing with DPDK (NICs, crypto and other devices). +An SUT node is where this SUT runs. +""" + + import os import tarfile import time @@ -26,6 +34,11 @@ class EalParameters(object): + """The environment abstraction layer parameters. + + The string representation can be created by converting the instance to a string. + """ + def __init__( self, lcore_list: LogicalCoreList, @@ -35,21 +48,23 @@ def __init__( vdevs: list[VirtualDevice], other_eal_param: str, ): - """ - Generate eal parameters character string; - :param lcore_list: the list of logical cores to use. - :param memory_channels: the number of memory channels to use. - :param prefix: set file prefix string, eg: - prefix='vf' - :param no_pci: switch of disable PCI bus eg: - no_pci=True - :param vdevs: virtual device list, eg: - vdevs=[ - VirtualDevice('net_ring0'), - VirtualDevice('net_ring1') - ] - :param other_eal_param: user defined DPDK eal parameters, eg: - other_eal_param='--single-file-segments' + """Initialize the parameters according to inputs. + + Process the parameters into the format used on the command line. + + Args: + lcore_list: The list of logical cores to use. + memory_channels: The number of memory channels to use. + prefix: Set the file prefix string with which to start DPDK, e.g.: ``prefix='vf'``. 
+ no_pci: Switch to disable PCI bus e.g.: ``no_pci=True``. + vdevs: Virtual devices, e.g.:: + + vdevs=[ + VirtualDevice('net_ring0'), + VirtualDevice('net_ring1') + ] + other_eal_param: user defined DPDK EAL parameters, e.g.: + ``other_eal_param='--single-file-segments'`` """ self._lcore_list = f"-l {lcore_list}" self._memory_channels = f"-n {memory_channels}" @@ -61,6 +76,7 @@ def __init__( self._other_eal_param = other_eal_param def __str__(self) -> str: + """Create the EAL string.""" return ( f"{self._lcore_list} " f"{self._memory_channels} " @@ -72,11 +88,21 @@ def __str__(self) -> str: class SutNode(Node): - """ - A class for managing connections to the System under Test, providing - methods that retrieve the necessary information about the node (such as - CPU, memory and NIC details) and configuration capabilities. - Another key capability is building DPDK according to given build target. + """The system under test node. + + The SUT node extends :class:`Node` with DPDK specific features: + + * DPDK build, + * Gathering of DPDK build info, + * The running of DPDK apps, interactively or one-time execution, + * DPDK apps cleanup. + + The :option:`--tarball` command line argument and the :envvar:`DTS_DPDK_TARBALL` + environment variable configure the path to the DPDK tarball + or the git commit ID, tag ID or tree ID to test. + + Attributes: + config: The SUT node configuration """ config: SutNodeConfiguration @@ -94,6 +120,11 @@ class SutNode(Node): _path_to_devbind_script: PurePath | None def __init__(self, node_config: SutNodeConfiguration): + """Extend the constructor with SUT node specifics. + + Args: + node_config: The SUT node's test run configuration. + """ super(SutNode, self).__init__(node_config) self._dpdk_prefix_list = [] self._build_target_config = None @@ -113,6 +144,12 @@ def __init__(self, node_config: SutNodeConfiguration): @property def _remote_dpdk_dir(self) -> PurePath: + """The remote DPDK dir. 
+ + This internal property should be set after extracting the DPDK tarball. If it's not set, + that implies the DPDK setup step has been skipped, in which case we can guess where + a previous build was located. + """ if self.__remote_dpdk_dir is None: self.__remote_dpdk_dir = self._guess_dpdk_remote_dir() return self.__remote_dpdk_dir @@ -123,6 +160,11 @@ def _remote_dpdk_dir(self, value: PurePath) -> None: @property def remote_dpdk_build_dir(self) -> PurePath: + """The remote DPDK build directory. + + This is the directory where DPDK was built. + We assume it was built in a subdirectory of the extracted tarball. + """ if self._build_target_config: return self.main_session.join_remote_path( self._remote_dpdk_dir, self._build_target_config.name @@ -132,18 +174,21 @@ def remote_dpdk_build_dir(self) -> PurePath: @property def dpdk_version(self) -> str: + """Last built DPDK version.""" if self._dpdk_version is None: self._dpdk_version = self.main_session.get_dpdk_version(self._remote_dpdk_dir) return self._dpdk_version @property def node_info(self) -> NodeInfo: + """Additional node information.""" if self._node_info is None: self._node_info = self.main_session.get_node_info() return self._node_info @property def compiler_version(self) -> str: + """The node's compiler version.""" if self._compiler_version is None: if self._build_target_config is not None: self._compiler_version = self.main_session.get_compiler_version( @@ -158,6 +203,7 @@ def compiler_version(self) -> str: @property def path_to_devbind_script(self) -> PurePath: + """The path to the dpdk-devbind.py script on the node.""" if self._path_to_devbind_script is None: self._path_to_devbind_script = self.main_session.join_remote_path( self._remote_dpdk_dir, "usertools", "dpdk-devbind.py" @@ -165,6 +211,11 @@ def path_to_devbind_script(self) -> PurePath: return self._path_to_devbind_script def get_build_target_info(self) -> BuildTargetInfo: + """Get additional build target information. 
+ + Returns: + The build target information. + """ return BuildTargetInfo( dpdk_version=self.dpdk_version, compiler_version=self.compiler_version ) @@ -173,8 +224,9 @@ def _guess_dpdk_remote_dir(self) -> PurePath: return self.main_session.guess_dpdk_remote_dir(self._remote_tmp_dir) def _set_up_build_target(self, build_target_config: BuildTargetConfiguration) -> None: - """ - Setup DPDK on the SUT node. + """Set up DPDK on the SUT node. + + Additional build target setup steps on top of those in :class:`Node`. """ # we want to ensure that dpdk_version and compiler_version is reset for new # build targets @@ -186,16 +238,14 @@ def _set_up_build_target(self, build_target_config: BuildTargetConfiguration) -> self.bind_ports_to_driver() def _tear_down_build_target(self) -> None: - """ - This method exists to be optionally overwritten by derived classes and - is not decorated so that the derived class doesn't have to use the decorator. + """Bind ports to the operating system drivers. + + Additional build target teardown steps on top of those in :class:`Node`. """ self.bind_ports_to_driver(for_dpdk=False) def _configure_build_target(self, build_target_config: BuildTargetConfiguration) -> None: - """ - Populate common environment variables and set build target config. - """ + """Populate common environment variables and set build target config.""" self._env_vars = {} self._build_target_config = build_target_config self._env_vars.update(self.main_session.get_dpdk_build_env_vars(build_target_config.arch)) @@ -207,9 +257,7 @@ def _configure_build_target(self, build_target_config: BuildTargetConfiguration) @Node.skip_setup def _copy_dpdk_tarball(self) -> None: - """ - Copy to and extract DPDK tarball on the SUT node.
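The way `EalParameters` assembles its command-line string can be sketched with a trimmed-down analogue. Only a few of the real parameters are modelled here, and the class name is shortened to mark it as a sketch:

```python
class EalParams:
    """Collects EAL options and renders them as one command-line string."""

    def __init__(self, lcore_list: str, memory_channels: int,
                 prefix: str = "", no_pci: bool = False):
        self._parts = [f"-l {lcore_list}", f"-n {memory_channels}"]
        if prefix:
            self._parts.append(f"--file-prefix={prefix}")
        if no_pci:
            self._parts.append("--no-pci")

    def __str__(self) -> str:
        return " ".join(self._parts)

print(EalParams("0-3", 4, prefix="vf"))  # -l 0-3 -n 4 --file-prefix=vf
```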
- """ + """Copy to and extract DPDK tarball on the SUT node.""" self._logger.info("Copying DPDK tarball to SUT.") self.main_session.copy_to(SETTINGS.dpdk_tarball_path, self._remote_tmp_dir) @@ -238,8 +286,9 @@ def _copy_dpdk_tarball(self) -> None: @Node.skip_setup def _build_dpdk(self) -> None: - """ - Build DPDK. Uses the already configured target. Assumes that the tarball has + """Build DPDK. + + Uses the already configured target. Assumes that the tarball has already been copied to and extracted on the SUT node. """ self.main_session.build_dpdk( @@ -250,15 +299,19 @@ def _build_dpdk(self) -> None: ) def build_dpdk_app(self, app_name: str, **meson_dpdk_args: str | bool) -> PurePath: - """ - Build one or all DPDK apps. Requires DPDK to be already built on the SUT node. - When app_name is 'all', build all example apps. - When app_name is any other string, tries to build that example app. - Return the directory path of the built app. If building all apps, return - the path to the examples directory (where all apps reside). - The meson_dpdk_args are keyword arguments - found in meson_option.txt in root DPDK directory. Do not use -D with them, - for example: enable_kmods=True. + """Build one or all DPDK apps. + + Requires DPDK to be already built on the SUT node. + + Args: + app_name: The name of the DPDK app to build. + When `app_name` is ``all``, build all example apps. + meson_dpdk_args: The arguments found in ``meson_options.txt`` in root DPDK directory. + Do not use ``-D`` with them. + + Returns: + The directory path of the built app. If building all apps, return + the path to the examples directory (where all apps reside). """ self.main_session.build_dpdk( self._env_vars, @@ -277,9 +330,7 @@ def build_dpdk_app(self, app_name: str, **meson_dpdk_args: str | bool) -> PurePa ) def kill_cleanup_dpdk_apps(self) -> None: - """ - Kill all dpdk applications on the SUT. Cleanup hugepages. 
- """ + """Kill all dpdk applications on the SUT, then clean up hugepages.""" if self._dpdk_kill_session and self._dpdk_kill_session.is_alive(): # we can use the session if it exists and responds self._dpdk_kill_session.kill_cleanup_dpdk_apps(self._dpdk_prefix_list) @@ -298,33 +349,34 @@ def create_eal_parameters( vdevs: list[VirtualDevice] | None = None, other_eal_param: str = "", ) -> "EalParameters": - """ - Generate eal parameters character string; - :param lcore_filter_specifier: a number of lcores/cores/sockets to use - or a list of lcore ids to use. - The default will select one lcore for each of two cores - on one socket, in ascending order of core ids. - :param ascending_cores: True, use cores with the lowest numerical id first - and continue in ascending order. If False, start with the - highest id and continue in descending order. This ordering - affects which sockets to consider first as well. - :param prefix: set file prefix string, eg: - prefix='vf' - :param append_prefix_timestamp: if True, will append a timestamp to - DPDK file prefix. - :param no_pci: switch of disable PCI bus eg: - no_pci=True - :param vdevs: virtual device list, eg: - vdevs=[ - VirtualDevice('net_ring0'), - VirtualDevice('net_ring1') - ] - :param other_eal_param: user defined DPDK eal parameters, eg: - other_eal_param='--single-file-segments' - :return: eal param string, eg: - '-c 0xf -a 0000:88:00.0 --file-prefix=dpdk_1112_20190809143420'; - """ + """Compose the EAL parameters. + + Process the list of cores and the DPDK prefix and pass that along with + the rest of the arguments. + Args: + lcore_filter_specifier: A number of lcores/cores/sockets to use + or a list of lcore ids to use. + The default will select one lcore for each of two cores + on one socket, in ascending order of core ids. + ascending_cores: Sort cores in ascending order (lowest to highest IDs). + If :data:`False`, sort in descending order. 
+ prefix: Set the file prefix string with which to start DPDK, e.g.: ``prefix='vf'``. + append_prefix_timestamp: If :data:`True`, will append a timestamp to DPDK file prefix. + no_pci: Switch to disable PCI bus e.g.: ``no_pci=True``. + vdevs: Virtual devices, e.g.:: + + vdevs=[ + VirtualDevice('net_ring0'), + VirtualDevice('net_ring1') + ] + other_eal_param: user defined DPDK EAL parameters, e.g.: + ``other_eal_param='--single-file-segments'``. + + Returns: + An EAL param string, such as + ``-c 0xf -a 0000:88:00.0 --file-prefix=dpdk_1112_20190809143420``. + """ lcore_list = LogicalCoreList(self.filter_lcores(lcore_filter_specifier, ascending_cores)) if append_prefix_timestamp: @@ -348,14 +400,29 @@ def create_eal_parameters( def run_dpdk_app( self, app_path: PurePath, eal_args: "EalParameters", timeout: float = 30 ) -> CommandResult: - """ - Run DPDK application on the remote node. + """Run DPDK application on the remote node. + + The application is not run interactively - the command that starts the application + is executed and then the call waits for it to finish execution. + + Args: + app_path: The remote path to the DPDK application. + eal_args: EAL parameters to run the DPDK application with. + timeout: Wait at most this long in seconds for `command` execution to complete. + + Returns: + The result of the DPDK app execution. """ return self.main_session.send_command( f"{app_path} {eal_args}", timeout, privileged=True, verify=True ) def configure_ipv4_forwarding(self, enable: bool) -> None: + """Enable/disable IPv4 forwarding on the node. + + Args: + enable: If :data:`True`, enable the forwarding, otherwise disable it. + """ self.main_session.configure_ipv4_forwarding(enable) def create_interactive_shell( @@ -365,9 +432,13 @@ def create_interactive_shell( privileged: bool = False, eal_parameters: EalParameters | str | None = None, ) -> InteractiveShellType: - """Factory method for creating a handler for an interactive session. 
+ """Extend the factory for interactive session handlers. + + The extensions are SUT node specific: - Instantiate shell_cls according to the remote OS specifics. + * The default for `eal_parameters`, + * The interactive shell path `shell_cls.path` is prepended with path to the remote + DPDK build directory for DPDK apps. Args: shell_cls: The class of the shell. @@ -377,9 +448,10 @@ def create_interactive_shell( privileged: Whether to run the shell with administrative privileges. eal_parameters: List of EAL parameters to use to launch the app. If this isn't provided or an empty string is passed, it will default to calling - create_eal_parameters(). + :meth:`create_eal_parameters`. + Returns: - Instance of the desired interactive application. + An instance of the desired interactive application shell. """ if not eal_parameters: eal_parameters = self.create_eal_parameters() @@ -396,8 +468,8 @@ def bind_ports_to_driver(self, for_dpdk: bool = True) -> None: """Bind all ports on the SUT to a driver. Args: - for_dpdk: Boolean that, when True, binds ports to os_driver_for_dpdk - or, when False, binds to os_driver. Defaults to True. + for_dpdk: If :data:`True`, binds ports to os_driver_for_dpdk. + If :data:`False`, binds to os_driver. """ for port in self.ports: driver = port.os_driver_for_dpdk if for_dpdk else port.os_driver diff --git a/dts/framework/testbed_model/tg_node.py b/dts/framework/testbed_model/tg_node.py index 8a8f0019f3..f269d4c585 100644 --- a/dts/framework/testbed_model/tg_node.py +++ b/dts/framework/testbed_model/tg_node.py @@ -5,13 +5,8 @@ """Traffic generator node. -This is the node where the traffic generator resides. -The distinction between a node and a traffic generator is as follows: -A node is a host that DTS connects to. It could be a baremetal server, -a VM or a container. -A traffic generator is software running on the node. -A traffic generator node is a node running a traffic generator. 
-A node can be a traffic generator node as well as system under test node. +A traffic generator (TG) generates traffic that's sent towards the SUT node. +A TG node is where the TG runs. """ from scapy.packet import Packet # type: ignore[import] @@ -24,13 +19,16 @@ class TGNode(Node): - """Manage connections to a node with a traffic generator. + """The traffic generator node. - Apart from basic node management capabilities, the Traffic Generator node has - specialized methods for handling the traffic generator running on it. + The TG node extends :class:`Node` with TG specific features: - Arguments: - node_config: The user configuration of the traffic generator node. + * Traffic generator initialization, + * The sending of traffic and receiving packets, + * The sending of traffic without receiving packets. + + Not all traffic generators are capable of capturing traffic, which is why there + must be a way to send traffic without that. Attributes: traffic_generator: The traffic generator running on the node. @@ -39,6 +37,13 @@ class TGNode(Node): traffic_generator: CapturingTrafficGenerator def __init__(self, node_config: TGNodeConfiguration): + """Extend the constructor with TG node specifics. + + Initialize the traffic generator on the TG node. + + Args: + node_config: The TG node's test run configuration. + """ super(TGNode, self).__init__(node_config) self.traffic_generator = create_traffic_generator(self, node_config.traffic_generator) self._logger.info(f"Created node: {self.name}") @@ -50,17 +55,17 @@ def send_packet_and_capture( receive_port: Port, duration: float = 1, ) -> list[Packet]: - """Send a packet, return received traffic. + """Send `packet`, return received traffic. - Send a packet on the send_port and then return all traffic captured - on the receive_port for the given duration. Also record the captured traffic + Send `packet` on `send_port` and then return all traffic captured + on `receive_port` for the given duration. 
Also record the captured traffic in a pcap file. Args: packet: The packet to send. send_port: The egress port on the TG node. receive_port: The ingress port in the TG node. - duration: Capture traffic for this amount of time after sending the packet. + duration: Capture traffic for this amount of time after sending `packet`. Returns: A list of received packets. May be empty if no packets are captured. @@ -70,6 +75,9 @@ def send_packet_and_capture( ) def close(self) -> None: - """Free all resources used by the node""" + """Free all resources used by the node. + + This extends the superclass method with TG cleanup. + """ self.traffic_generator.close() super(TGNode, self).close() From patchwork Mon Dec 4 10:24:27 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Juraj_Linke=C5=A1?= X-Patchwork-Id: 134807 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id B55DA4366A; Mon, 4 Dec 2023 11:26:51 +0100 (CET) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 959F542D95; Mon, 4 Dec 2023 11:24:58 +0100 (CET) Received: from mail-wm1-f43.google.com (mail-wm1-f43.google.com [209.85.128.43]) by mails.dpdk.org (Postfix) with ESMTP id 6572E427E9 for ; Mon, 4 Dec 2023 11:24:50 +0100 (CET) Received: by mail-wm1-f43.google.com with SMTP id 5b1f17b1804b1-40c0a03eb87so10660925e9.3 for ; Mon, 04 Dec 2023 02:24:50 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=pantheon.tech; s=google; t=1701685490; x=1702290290; darn=dpdk.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=GWyA4bZUAswyb84d4bT9GWkMZ5M49NrE4Df0L5kkviM=; 
From: Juraj Linkeš To: thomas@monjalon.net, Honnappa.Nagarahalli@arm.com, jspewock@iol.unh.edu, probb@iol.unh.edu, paul.szczepanek@arm.com, yoan.picchi@foss.arm.com, Luca.Vizzarro@arm.com Cc: dev@dpdk.org, Juraj Linkeš Subject: [PATCH v9 19/21] dts: base traffic generators
docstring update Date: Mon, 4 Dec 2023 11:24:27 +0100 Message-Id: <20231204102429.106709-20-juraj.linkes@pantheon.tech> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20231204102429.106709-1-juraj.linkes@pantheon.tech> References: <20231123151344.162812-1-juraj.linkes@pantheon.tech> <20231204102429.106709-1-juraj.linkes@pantheon.tech> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Format according to the Google format and PEP257, with slight deviations. Signed-off-by: Juraj Linkeš --- .../traffic_generator/__init__.py | 22 ++++++++- .../capturing_traffic_generator.py | 45 +++++++++++-------- .../traffic_generator/traffic_generator.py | 33 ++++++++------ 3 files changed, 67 insertions(+), 33 deletions(-) diff --git a/dts/framework/testbed_model/traffic_generator/__init__.py b/dts/framework/testbed_model/traffic_generator/__init__.py index 52888d03fa..11e2bd7d97 100644 --- a/dts/framework/testbed_model/traffic_generator/__init__.py +++ b/dts/framework/testbed_model/traffic_generator/__init__.py @@ -1,6 +1,19 @@ # SPDX-License-Identifier: BSD-3-Clause # Copyright(c) 2023 PANTHEON.tech s.r.o. +"""DTS traffic generators. + +A traffic generator is capable of generating traffic and then monitor returning traffic. +All traffic generators must count the number of received packets. Some may additionally capture +individual packets. + +A traffic generator may be software running on generic hardware or it could be specialized hardware. + +The traffic generators that only count the number of received packets are suitable only for +performance testing. In functional testing, we need to be able to dissect each arrived packet +and a capturing traffic generator is required. 
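The distinction drawn above — generators that only count received packets versus generators that can capture them — can be sketched with a minimal class hierarchy. The names here are illustrative stand-ins, not the framework's actual classes:

```python
from abc import ABC


class TrafficGeneratorSketch(ABC):
    """Counts received packets; capturing is opt-in for subclasses."""

    @property
    def is_capturing(self) -> bool:
        return False  # counting-only: suitable for performance testing


class CapturingSketch(TrafficGeneratorSketch):
    """Can dissect individual packets, as functional testing requires."""

    @property
    def is_capturing(self) -> bool:
        return True
```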
+""" + from framework.config import ScapyTrafficGeneratorConfig, TrafficGeneratorType from framework.exception import ConfigurationError from framework.testbed_model.node import Node @@ -12,8 +25,15 @@ def create_traffic_generator( tg_node: Node, traffic_generator_config: ScapyTrafficGeneratorConfig ) -> CapturingTrafficGenerator: - """A factory function for creating traffic generator object from user config.""" + """The factory function for creating traffic generator objects from the test run configuration. + + Args: + tg_node: The traffic generator node where the created traffic generator will be running. + traffic_generator_config: The traffic generator config. + Returns: + A traffic generator capable of capturing received packets. + """ match traffic_generator_config.traffic_generator_type: case TrafficGeneratorType.SCAPY: return ScapyTrafficGenerator(tg_node, traffic_generator_config) diff --git a/dts/framework/testbed_model/traffic_generator/capturing_traffic_generator.py b/dts/framework/testbed_model/traffic_generator/capturing_traffic_generator.py index 1fc7f98c05..0246590333 100644 --- a/dts/framework/testbed_model/traffic_generator/capturing_traffic_generator.py +++ b/dts/framework/testbed_model/traffic_generator/capturing_traffic_generator.py @@ -23,19 +23,21 @@ def _get_default_capture_name() -> str: - """ - This is the function used for the default implementation of capture names. - """ return str(uuid.uuid4()) class CapturingTrafficGenerator(TrafficGenerator): """Capture packets after sending traffic. - A mixin interface which enables a packet generator to declare that it can capture + The intermediary interface which enables a packet generator to declare that it can capture packets and return them to the user. 
+ Similarly to :class:`~.traffic_generator.TrafficGenerator`, this class exposes + the public methods specific to capturing traffic generators and defines a private method + that must implement the traffic generation and capturing logic in subclasses. + The methods of capturing traffic generators obey the following workflow: + 1. send packets 2. capture packets 3. write the capture to a .pcap file @@ -44,6 +46,7 @@ class CapturingTrafficGenerator(TrafficGenerator): @property def is_capturing(self) -> bool: + """This traffic generator can capture traffic.""" return True def send_packet_and_capture( @@ -54,11 +57,12 @@ def send_packet_and_capture( duration: float, capture_name: str = _get_default_capture_name(), ) -> list[Packet]: - """Send a packet, return received traffic. + """Send `packet` and capture received traffic. + + Send `packet` on `send_port` and then return all traffic captured + on `receive_port` for the given `duration`. - Send a packet on the send_port and then return all traffic captured - on the receive_port for the given duration. Also record the captured traffic - in a pcap file. + The captured traffic is recorded in the `capture_name`.pcap file. Args: packet: The packet to send. @@ -68,7 +72,7 @@ def send_packet_and_capture( capture_name: The name of the .pcap file where to store the capture. Returns: - A list of received packets. May be empty if no packets are captured. + The received packets. May be empty if no packets are captured. """ return self.send_packets_and_capture( [packet], send_port, receive_port, duration, capture_name @@ -82,11 +86,14 @@ def send_packets_and_capture( duration: float, capture_name: str = _get_default_capture_name(), ) -> list[Packet]: - """Send packets, return received traffic. + """Send `packets` and capture received traffic. - Send packets on the send_port and then return all traffic captured - on the receive_port for the given duration. Also record the captured traffic - in a pcap file. 
+ Send `packets` on `send_port` and then return all traffic captured + on `receive_port` for the given `duration`. + + The captured traffic is recorded in the `capture_name`.pcap file. The target directory + can be configured with the :option:`--output-dir` command line argument or + the :envvar:`DTS_OUTPUT_DIR` environment variable. Args: packets: The packets to send. @@ -96,7 +103,7 @@ def send_packets_and_capture( capture_name: The name of the .pcap file where to store the capture. Returns: - A list of received packets. May be empty if no packets are captured. + The received packets. May be empty if no packets are captured. """ self._logger.debug(get_packet_summaries(packets)) self._logger.debug( @@ -121,10 +128,12 @@ def _send_packets_and_capture( receive_port: Port, duration: float, ) -> list[Packet]: - """ - The extended classes must implement this method which - sends packets on send_port and receives packets on the receive_port - for the specified duration. It must be able to handle no received packets. + """The implementation of :method:`send_packets_and_capture`. + + The subclasses must implement this method which sends `packets` on `send_port` + and receives packets on `receive_port` for the specified `duration`. + + It must be able to handle receiving no packets. """ def _write_capture_from_packets(self, capture_name: str, packets: list[Packet]) -> None: diff --git a/dts/framework/testbed_model/traffic_generator/traffic_generator.py b/dts/framework/testbed_model/traffic_generator/traffic_generator.py index 0d9902ddb7..c49fbff488 100644 --- a/dts/framework/testbed_model/traffic_generator/traffic_generator.py +++ b/dts/framework/testbed_model/traffic_generator/traffic_generator.py @@ -22,7 +22,8 @@ class TrafficGenerator(ABC): """The base traffic generator. - Defines the few basic methods that each traffic generator must implement. 
+ Exposes the common public methods of all traffic generators and defines private methods + that must implement the traffic generation logic in subclasses. """ _config: TrafficGeneratorConfig @@ -30,14 +31,20 @@ class TrafficGenerator(ABC): _logger: DTSLOG def __init__(self, tg_node: Node, config: TrafficGeneratorConfig): + """Initialize the traffic generator. + + Args: + tg_node: The traffic generator node where the created traffic generator will be running. + config: The traffic generator's test run configuration. + """ self._config = config self._tg_node = tg_node self._logger = getLogger(f"{self._tg_node.name} {self._config.traffic_generator_type}") def send_packet(self, packet: Packet, port: Port) -> None: - """Send a packet and block until it is fully sent. + """Send `packet` and block until it is fully sent. - What fully sent means is defined by the traffic generator. + Send `packet` on `port`, then wait until `packet` is fully sent. Args: packet: The packet to send. @@ -46,9 +53,9 @@ def send_packet(self, packet: Packet, port: Port) -> None: self.send_packets([packet], port) def send_packets(self, packets: list[Packet], port: Port) -> None: - """Send packets and block until they are fully sent. + """Send `packets` and block until they are fully sent. - What fully sent means is defined by the traffic generator. + Send `packets` on `port`, then wait until `packets` are fully sent. Args: packets: The packets to send. @@ -60,19 +67,17 @@ def send_packets(self, packets: list[Packet], port: Port) -> None: @abstractmethod def _send_packets(self, packets: list[Packet], port: Port) -> None: - """ - The extended classes must implement this method which - sends packets on send_port. The method should block until all packets - are fully sent. + """The implementation of :method:`send_packets`. + + The subclasses must implement this method which sends `packets` on `port`. + The method should block until all `packets` are fully sent. 
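The split described here — common public methods in the base, private methods that subclasses must implement — is the classic template-method pattern. A minimal sketch under illustrative names (not the real DTS classes):

```python
from abc import ABC, abstractmethod


class TGBaseSketch(ABC):
    """Public send_packets is shared; subclasses supply _send_packets."""

    def send_packets(self, packets):
        # shared bookkeeping (logging, validation) would live here
        self._send_packets(packets)

    @abstractmethod
    def _send_packets(self, packets):
        """Subclass-specific traffic generation logic."""


class RecordingTG(TGBaseSketch):
    """A trivial subclass that just records what it was asked to send."""

    def __init__(self):
        self.sent = []

    def _send_packets(self, packets):
        self.sent.extend(packets)
```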
+ + What fully sent means is defined by the traffic generator. """ @property def is_capturing(self) -> bool: - """Whether this traffic generator can capture traffic. - - Returns: - True if the traffic generator can capture traffic, False otherwise. - """ + """This traffic generator can't capture traffic.""" return False @abstractmethod From patchwork Mon Dec 4 10:24:28 2023 X-Patchwork-Submitter: Juraj Linkeš X-Patchwork-Id: 134808 X-Patchwork-Delegate: thomas@monjalon.net
From: Juraj Linkeš To: thomas@monjalon.net, Honnappa.Nagarahalli@arm.com, jspewock@iol.unh.edu, probb@iol.unh.edu, paul.szczepanek@arm.com, yoan.picchi@foss.arm.com, Luca.Vizzarro@arm.com Cc: dev@dpdk.org, Juraj Linkeš Subject: [PATCH v9 20/21] dts: scapy tg docstring update Date: Mon, 4 Dec 2023 11:24:28 +0100 Message-Id: <20231204102429.106709-21-juraj.linkes@pantheon.tech> In-Reply-To: <20231204102429.106709-1-juraj.linkes@pantheon.tech> References: <20231123151344.162812-1-juraj.linkes@pantheon.tech> <20231204102429.106709-1-juraj.linkes@pantheon.tech>
X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Format according to the Google format and PEP257, with slight deviations. Signed-off-by: Juraj Linkeš --- .../testbed_model/traffic_generator/scapy.py | 91 +++++++++++-------- 1 file changed, 54 insertions(+), 37 deletions(-) diff --git a/dts/framework/testbed_model/traffic_generator/scapy.py b/dts/framework/testbed_model/traffic_generator/scapy.py index c88cf28369..5b60f66237 100644 --- a/dts/framework/testbed_model/traffic_generator/scapy.py +++ b/dts/framework/testbed_model/traffic_generator/scapy.py @@ -2,14 +2,15 @@ # Copyright(c) 2022 University of New Hampshire # Copyright(c) 2023 PANTHEON.tech s.r.o. -"""Scapy traffic generator. +"""The Scapy traffic generator. -Traffic generator used for functional testing, implemented using the Scapy library. +A traffic generator used for functional testing, implemented with +`the Scapy library `_. The traffic generator uses an XML-RPC server to run Scapy on the remote TG node. -The XML-RPC server runs in an interactive remote SSH session running Python console, -where we start the server. The communication with the server is facilitated with -a local server proxy. +The traffic generator uses the :mod:`xmlrpc.server` module to run an XML-RPC server +in an interactive remote Python SSH session. The communication with the server is facilitated +with a local server proxy from the :mod:`xmlrpc.client` module. """ import inspect @@ -69,20 +70,20 @@ def scapy_send_packets_and_capture( recv_iface: str, duration: float, ) -> list[bytes]: - """RPC function to send and capture packets. + """The RPC function to send and capture packets. - The function is meant to be executed on the remote TG node. + This function is meant to be executed on the remote TG node via the server proxy. Args: xmlrpc_packets: The packets to send. 
These need to be converted to - xmlrpc.client.Binary before sending to the remote server. + :class:`~xmlrpc.client.Binary` objects before sending to the remote server. send_iface: The logical name of the egress interface. recv_iface: The logical name of the ingress interface. duration: Capture for this amount of time, in seconds. Returns: A list of bytes. Each item in the list represents one packet, which needs - to be converted back upon transfer from the remote node. + to be converted back upon transfer from the remote node. """ scapy_packets = [scapy.all.Packet(packet.data) for packet in xmlrpc_packets] sniffer = scapy.all.AsyncSniffer( @@ -96,19 +97,15 @@ def scapy_send_packets_and_capture( def scapy_send_packets(xmlrpc_packets: list[xmlrpc.client.Binary], send_iface: str) -> None: - """RPC function to send packets. + """The RPC function to send packets. - The function is meant to be executed on the remote TG node. - It doesn't return anything, only sends packets. + This function is meant to be executed on the remote TG node via the server proxy. + It only sends `xmlrpc_packets`, without capturing them. Args: xmlrpc_packets: The packets to send. These need to be converted to - xmlrpc.client.Binary before sending to the remote server. + :class:`~xmlrpc.client.Binary` objects before sending to the remote server. send_iface: The logical name of the egress interface. - - Returns: - A list of bytes. Each item in the list represents one packet, which needs - to be converted back upon transfer from the remote node. """ scapy_packets = [scapy.all.Packet(packet.data) for packet in xmlrpc_packets] scapy.all.sendp(scapy_packets, iface=send_iface, realtime=True, verbose=True) @@ -128,11 +125,19 @@ def scapy_send_packets(xmlrpc_packets: list[xmlrpc.client.Binary], send_iface: s class QuittableXMLRPCServer(SimpleXMLRPCServer): - """Basic XML-RPC server that may be extended - by functions serializable by the marshal module. + """Basic XML-RPC server. 
+ + The server may be augmented by functions serializable by the :mod:`marshal` module. """ def __init__(self, *args, **kwargs): + """Extend the XML-RPC server initialization. + + Args: + args: The positional arguments that will be passed to the superclass's constructor. + kwargs: The keyword arguments that will be passed to the superclass's constructor. + The `allow_none` argument will be set to :data:`True`. + """ kwargs["allow_none"] = True super().__init__(*args, **kwargs) self.register_introspection_functions() @@ -140,13 +145,12 @@ def __init__(self, *args, **kwargs): self.register_function(self.add_rpc_function) def quit(self) -> None: + """Quit the server.""" self._BaseServer__shutdown_request = True return None def add_rpc_function(self, name: str, function_bytes: xmlrpc.client.Binary) -> None: - """Add a function to the server. - - This is meant to be executed remotely. + """Add a function to the server from the local server proxy. Args: name: The name of the function. @@ -157,6 +161,11 @@ def add_rpc_function(self, name: str, function_bytes: xmlrpc.client.Binary) -> N self.register_function(function) def serve_forever(self, poll_interval: float = 0.5) -> None: + """Extend the superclass method with an additional print. + + Once executed in the local server proxy, the print gives us a clear string to expect + when starting the server. The print means this function was executed on the XML-RPC server. + """ print("XMLRPC OK") super().serve_forever(poll_interval) @@ -164,19 +173,12 @@ def serve_forever(self, poll_interval: float = 0.5) -> None: class ScapyTrafficGenerator(CapturingTrafficGenerator): """Provides access to scapy functions via an RPC interface. - The traffic generator first starts an XML-RPC on the remote TG node. - Then it populates the server with functions which use the Scapy library - to send/receive traffic. - - Any packets sent to the remote server are first converted to bytes. 
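The "functions serializable by the :mod:`marshal` module" mechanism behind `add_rpc_function` can be illustrated without a network round trip. This is a simplified sketch of the idea — serialize a function's code object on one side, rebuild and call it on the other — not the actual XML-RPC transport:

```python
import marshal
import types


def greet(name):
    return f"hello {name}"


# "client side": serialize the function's code object to bytes
payload = marshal.dumps(greet.__code__)

# "server side": rebuild the function from the received bytes
code = marshal.loads(payload)
rebuilt = types.FunctionType(code, globals(), "greet")
print(rebuilt("core 0"))
```

Note that only the code object travels; closures and decorated functions generally do not survive this trip, which is why only `marshal`-serializable functions qualify.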
- They are received as xmlrpc.client.Binary objects on the server side. - When the server sends the packets back, they are also received as - xmlrpc.client.Binary object on the client side, are converted back to Scapy - packets and only then returned from the methods. + This class extends the base with remote execution of scapy functions. - Arguments: - tg_node: The node where the traffic generator resides. - config: The user configuration of the traffic generator. + Any packets sent to the remote server are first converted to bytes. They are received as + :class:`~xmlrpc.client.Binary` objects on the server side. When the server sends the packets + back, they are also received as :class:`~xmlrpc.client.Binary` objects on the client side, are + converted back to :class:`~scapy.packet.Packet` objects and only then returned from the methods. Attributes: session: The exclusive interactive remote session created by the Scapy @@ -190,6 +192,22 @@ class ScapyTrafficGenerator(CapturingTrafficGenerator): _config: ScapyTrafficGeneratorConfig def __init__(self, tg_node: Node, config: ScapyTrafficGeneratorConfig): + """Extend the constructor with Scapy TG specifics. + + The traffic generator first starts an XML-RPC on the remote `tg_node`. + Then it populates the server with functions which use the Scapy library + to send/receive traffic: + + * :func:`scapy_send_packets_and_capture` + * :func:`scapy_send_packets` + + To enable verbose logging from the xmlrpc client, use the :option:`--verbose` + command line argument or the :envvar:`DTS_VERBOSE` environment variable. + + Args: + tg_node: The node where the traffic generator resides. + config: The traffic generator's test run configuration. 
+ """ super().__init__(tg_node, config) assert ( @@ -231,10 +249,8 @@ def _start_xmlrpc_server_in_remote_python(self, listen_port: int) -> None: # or class, so strip all lines containing only whitespace src = "\n".join([line for line in src.splitlines() if not line.isspace() and line != ""]) - spacing = "\n" * 4 - # execute it in the python terminal - self.session.send_command(spacing + src + spacing) + self.session.send_command(src + "\n") self.session.send_command( f"server = QuittableXMLRPCServer(('0.0.0.0', {listen_port}));server.serve_forever()", "XMLRPC OK", @@ -267,6 +283,7 @@ def _send_packets_and_capture( return scapy_packets def close(self) -> None: + """Close the traffic generator.""" try: self.rpc_server_proxy.quit() except ConnectionRefusedError: From patchwork Mon Dec 4 10:24:29 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Juraj_Linke=C5=A1?= X-Patchwork-Id: 134809 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 1E2DE4366A; Mon, 4 Dec 2023 11:27:08 +0100 (CET) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id CEF7242D9A; Mon, 4 Dec 2023 11:25:01 +0100 (CET) Received: from mail-wm1-f47.google.com (mail-wm1-f47.google.com [209.85.128.47]) by mails.dpdk.org (Postfix) with ESMTP id 8F03E42D28 for ; Mon, 4 Dec 2023 11:24:52 +0100 (CET) Received: by mail-wm1-f47.google.com with SMTP id 5b1f17b1804b1-40bd5eaa66cso26118765e9.2 for ; Mon, 04 Dec 2023 02:24:52 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=pantheon.tech; s=google; t=1701685492; x=1702290292; darn=dpdk.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; 
From: Juraj Linkeš To: thomas@monjalon.net, Honnappa.Nagarahalli@arm.com, jspewock@iol.unh.edu, probb@iol.unh.edu, paul.szczepanek@arm.com, yoan.picchi@foss.arm.com, Luca.Vizzarro@arm.com Cc: dev@dpdk.org, Juraj Linkeš Subject:
[PATCH v9 21/21] dts: test suites docstring update
Date: Mon, 4 Dec 2023 11:24:29 +0100
Message-Id: <20231204102429.106709-22-juraj.linkes@pantheon.tech>
In-Reply-To: <20231204102429.106709-1-juraj.linkes@pantheon.tech>
References: <20231123151344.162812-1-juraj.linkes@pantheon.tech>
 <20231204102429.106709-1-juraj.linkes@pantheon.tech>
List-Id: DPDK patches and discussions

Format according to the Google format and PEP257, with slight deviations.

Signed-off-by: Juraj Linkeš
Reviewed-by: Jeremy Spewock
---
 dts/tests/TestSuite_hello_world.py | 16 +++++---
 dts/tests/TestSuite_os_udp.py      | 20 ++++++----
 dts/tests/TestSuite_smoke_tests.py | 61 ++++++++++++++++++++++++------
 3 files changed, 72 insertions(+), 25 deletions(-)

diff --git a/dts/tests/TestSuite_hello_world.py b/dts/tests/TestSuite_hello_world.py
index 768ba1cfa8..fd7ff1534d 100644
--- a/dts/tests/TestSuite_hello_world.py
+++ b/dts/tests/TestSuite_hello_world.py
@@ -1,7 +1,8 @@
 # SPDX-License-Identifier: BSD-3-Clause
 # Copyright(c) 2010-2014 Intel Corporation
 
-"""
+"""The DPDK hello world app test suite.
+
 Run the helloworld example app and verify it prints a message for each used core.
 No other EAL parameters apart from cores are used.
 """
@@ -15,22 +16,25 @@
 
 
 class TestHelloWorld(TestSuite):
+    """DPDK hello world app test suite."""
+
     def set_up_suite(self) -> None:
-        """
+        """Set up the test suite.
+
         Setup:
             Build the app we're about to test - helloworld.
         """
         self.app_helloworld_path = self.sut_node.build_dpdk_app("helloworld")
 
     def test_hello_world_single_core(self) -> None:
-        """
+        """Single core test case.
+
         Steps:
             Run the helloworld app on the first usable logical core.
 
         Verify:
             The app prints a message from the used core:
             "hello from core "
         """
-        # get the first usable core
         lcore_amount = LogicalCoreCount(1, 1, 1)
         lcores = LogicalCoreCountFilter(self.sut_node.lcores, lcore_amount).filter()
@@ -42,14 +46,14 @@ def test_hello_world_single_core(self) -> None:
         )
 
     def test_hello_world_all_cores(self) -> None:
-        """
+        """All cores test case.
+
         Steps:
             Run the helloworld app on all usable logical cores.
 
         Verify:
             The app prints a message from all used cores:
             "hello from core "
         """
-        # get the maximum logical core number
         eal_para = self.sut_node.create_eal_parameters(
             lcore_filter_specifier=LogicalCoreList(self.sut_node.lcores)
diff --git a/dts/tests/TestSuite_os_udp.py b/dts/tests/TestSuite_os_udp.py
index bf6b93deb5..2cf29d37bb 100644
--- a/dts/tests/TestSuite_os_udp.py
+++ b/dts/tests/TestSuite_os_udp.py
@@ -1,7 +1,8 @@
 # SPDX-License-Identifier: BSD-3-Clause
 # Copyright(c) 2023 PANTHEON.tech s.r.o.
 
-"""
+"""Basic IPv4 OS routing test suite.
+
 Configure SUT node to route traffic from if1 to if2.
 Send a packet to the SUT node, verify it comes back on the second port on the TG node.
 """
@@ -13,24 +14,26 @@
 
 
 class TestOSUdp(TestSuite):
+    """IPv4 UDP OS routing test suite."""
+
     def set_up_suite(self) -> None:
-        """
+        """Set up the test suite.
+
         Setup:
-            Configure SUT ports and SUT to route traffic from if1 to if2.
+            Bind the SUT ports to the OS driver, configure the ports and configure the SUT
+            to route traffic from if1 to if2.
         """
-
-        # This test uses kernel drivers
         self.sut_node.bind_ports_to_driver(for_dpdk=False)
         self.configure_testbed_ipv4()
 
     def test_os_udp(self) -> None:
-        """
+        """Basic UDP IPv4 traffic test case.
+
         Steps:
             Send a UDP packet.
 
         Verify:
             The packet with proper addresses arrives at the other TG port.
""" - packet = Ether() / IP() / UDP() received_packets = self.send_packet_and_capture(packet) @@ -40,7 +43,8 @@ def test_os_udp(self) -> None: self.verify_packets(expected_packet, received_packets) def tear_down_suite(self) -> None: - """ + """Tear down the test suite. + Teardown: Remove the SUT port configuration configured in setup. """ diff --git a/dts/tests/TestSuite_smoke_tests.py b/dts/tests/TestSuite_smoke_tests.py index 8958f58dac..5e2bac14bd 100644 --- a/dts/tests/TestSuite_smoke_tests.py +++ b/dts/tests/TestSuite_smoke_tests.py @@ -1,6 +1,17 @@ # SPDX-License-Identifier: BSD-3-Clause # Copyright(c) 2023 University of New Hampshire +"""Smoke test suite. + +Smoke tests are a class of tests which are used for validating a minimal set of important features. +These are the most important features without which (or when they're faulty) the software wouldn't +work properly. Thus, if any failure occurs while testing these features, +there isn't that much of a reason to continue testing, as the software is fundamentally broken. + +These tests don't have to include only DPDK tests, as the reason for failures could be +in the infrastructure (a faulty link between NICs or a misconfiguration). +""" + import re from framework.config import PortConfig @@ -11,23 +22,39 @@ class SmokeTests(TestSuite): + """DPDK and infrastructure smoke test suite. + + The test cases validate the most basic DPDK functionality needed for all other test suites. + The infrastructure also needs to be tested, as that is also used by all other test suites. + + Attributes: + is_blocking: This test suite will block the execution of all other test suites + in the build target after it. + nics_in_node: The NICs present on the SUT node. + """ + is_blocking = True # dicts in this list are expected to have two keys: # "pci_address" and "current_driver" nics_in_node: list[PortConfig] = [] def set_up_suite(self) -> None: - """ + """Set up the test suite. 
+
         Setup:
-            Set the build directory path and generate a list of NICs in the SUT node.
+            Set the build directory path and a list of NICs in the SUT node.
         """
         self.dpdk_build_dir_path = self.sut_node.remote_dpdk_build_dir
         self.nics_in_node = self.sut_node.config.ports
 
     def test_unit_tests(self) -> None:
-        """
+        """DPDK meson ``fast-tests`` unit tests.
+
+        Test that all unit test from the ``fast-tests`` suite pass.
+        The suite is a subset with only the most basic tests.
+
         Test:
-            Run the fast-test unit-test suite through meson.
+            Run the ``fast-tests`` unit test suite through meson.
         """
         self.sut_node.main_session.send_command(
             f"meson test -C {self.dpdk_build_dir_path} --suite fast-tests -t 60",
@@ -37,9 +64,14 @@ def test_unit_tests(self) -> None:
         )
 
     def test_driver_tests(self) -> None:
-        """
+        """DPDK meson ``driver-tests`` unit tests.
+
+        Test that all unit test from the ``driver-tests`` suite pass.
+        The suite is a subset with driver tests. This suite may be run with virtual devices
+        configured in the test run configuration.
+
         Test:
-            Run the driver-test unit-test suite through meson.
+            Run the ``driver-tests`` unit test suite through meson.
         """
         vdev_args = ""
         for dev in self.sut_node.virtual_devices:
@@ -60,9 +92,12 @@ def test_driver_tests(self) -> None:
        )
 
     def test_devices_listed_in_testpmd(self) -> None:
-        """
+        """Testpmd device discovery.
+
+        Test that the devices configured in the test run configuration are found in testpmd.
+
         Test:
-            Uses testpmd driver to verify that devices have been found by testpmd.
+            List all devices found in testpmd and verify the configured devices are among them.
         """
         testpmd_driver = self.sut_node.create_interactive_shell(TestPmdShell, privileged=True)
         dev_list = [str(x) for x in testpmd_driver.get_devices()]
@@ -74,10 +109,14 @@ def test_devices_listed_in_testpmd(self) -> None:
         )
 
     def test_device_bound_to_driver(self) -> None:
-        """
+        """Device driver in OS.
+
+        Test that the devices configured in the test run configuration are bound to
+        the proper driver.
+
         Test:
-            Ensure that all drivers listed in the config are bound to the correct
-            driver.
+            List all devices with the ``dpdk-devbind.py`` script and verify that
+            the configured devices are bound to the proper driver.
         """
         path_to_devbind = self.sut_node.path_to_devbind_script
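For readers unfamiliar with the convention this patch applies, here is a minimal sketch of the Google-format, PEP 257-style docstring layout used above. The class and method names are illustrative only, not code from DTS:

```python
class TestExample:
    """One-line summary, ending with a period.

    A longer description follows after a blank line. Sectioned content
    uses the Google format: a section header such as "Attributes:",
    then indented entries.

    Attributes:
        is_blocking: Whether this suite blocks all suites scheduled after it.
    """

    is_blocking = True

    def set_up_suite(self) -> None:
        """Set up the test suite.

        Setup:
            Describe the concrete setup steps under this header.
        """
```

The one-line summary on the first line and the blank line before the body are the PEP 257 parts; the "Setup:"/"Steps:"/"Verify:" headers are the slight deviations from the plain Google sections mentioned in the commit message.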